BREAKTHROUGH STRATEGY: Don't wait for the error — download the files proactively!
The problem:
- Waiting for the EOF error is too slow
- By the time we extract the chunk ID, Spark has already deleted the file
- Volume garbage collection removes chunks quickly
The solution:
1. Monitor for 'Running seaweed.spark.SparkSQLTest' in logs
2. Sleep 5 seconds (to let the test finish writing files)
3. Download ALL files from /test-spark/employees/ immediately
4. Keep the downloaded files for analysis when the EOF error occurs
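The four steps above could be scripted roughly like this. This is a sketch, not the actual harness: the filer address, log path, and destination directory are assumptions about this setup, and it uses the SeaweedFS filer's HTTP interface (a `GET` on a directory with `Accept: application/json` returns an `Entries` list) to enumerate and fetch the files:

```python
import json
import os
import time
import urllib.request

TRIGGER = "Running seaweed.spark.SparkSQLTest"
FILER = "http://localhost:8888"       # assumed filer address for this setup
SRC_DIR = "/test-spark/employees/"    # directory the Spark test writes to
DEST_DIR = "/tmp/captured-files"      # assumed local snapshot location

def saw_trigger(line: str) -> bool:
    """True once the log shows the Spark test has started."""
    return TRIGGER in line

def snapshot(filer: str = FILER, src: str = SRC_DIR, dest: str = DEST_DIR) -> None:
    """List the directory via the filer's JSON API and download every entry."""
    os.makedirs(dest, exist_ok=True)
    req = urllib.request.Request(filer + src,
                                 headers={"Accept": "application/json"})
    with urllib.request.urlopen(req) as resp:
        entries = json.load(resp).get("Entries") or []
    for entry in entries:
        full_path = entry["FullPath"]
        local = os.path.join(dest, os.path.basename(full_path))
        urllib.request.urlretrieve(filer + full_path, local)

def watch(log_path: str) -> None:
    """Tail the test log; on the trigger line, wait 5 s, then snapshot."""
    with open(log_path) as log:
        while True:
            line = log.readline()
            if not line:                # no new output yet; poll again
                time.sleep(0.5)
                continue
            if saw_trigger(line):
                time.sleep(5)           # step 2: let the test write files
                snapshot()              # step 3: grab everything NOW
                return                  # step 4: files kept in DEST_DIR
```

This deliberately downloads unconditionally on the trigger, since by the time we know which chunk is bad, it is already gone.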
This downloads files while they still exist, BEFORE Spark cleanup!
Timeline:
Write → Download (NEW!) → Read → EOF Error → Analyze
Instead of:
Write → Read → EOF Error → Try to download (file gone!) ❌
This will finally capture the actual problematic file!