The issue: Files written to employees/ but immediately moved/deleted by Spark
Spark's file commit process:
1. Write to: employees/_temporary/0/_temporary/attempt_xxx/part-xxx.parquet
2. Commit/rename to: employees/part-xxx.parquet
3. Read and delete (on failure)
By the time we check employees/, the file is already gone!
Solution: Search multiple locations
- employees/ (final location)
- employees/_temporary/ (intermediate)
- employees/_temporary/0/_temporary/ (write location)
- Recursive search as fallback
Also:
- Extract exact filename from write log
- Try all locations until we find the file
- Show directory listings for debugging
This should catch files in their temporary location before Spark moves them!
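A minimal sketch of that search, assuming the filer answers on localhost:8888 and the exact filename was already pulled from the write log (the value below is illustrative):
# FILE is the exact part-*.parquet name taken from the write log (illustrative value)
FILE="part-00000-xxxxxxxx-c000.snappy.parquet"
for DIR in /test-spark/employees \
           /test-spark/employees/_temporary \
           /test-spark/employees/_temporary/0/_temporary; do
  echo "--- listing $DIR ---"
  curl -s -H "Accept: application/json" "http://localhost:8888${DIR}/" | tee /tmp/listing.json
  if grep -q "$FILE" /tmp/listing.json; then
    curl -s -o "/tmp/$FILE" "http://localhost:8888${DIR}/${FILE}"
    break
  fi
done
# fallback: walk the attempt_* subdirectories recursively if the file is still nested deeper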
PRECISION TRIGGER: Log exactly when the file we need is written!
Changes:
1. SeaweedOutputStream.close(): Add WARN log for /test-spark/employees/*.parquet
- Format: '=== PARQUET FILE WRITTEN TO EMPLOYEES: filename (size bytes) ==='
- Uses WARN level so it stands out in logs
2. Workflow: Trigger download on this exact log message
- Instead of 'Running seaweed.spark.SparkSQLTest' (too early)
- Now triggers on 'PARQUET FILE WRITTEN TO EMPLOYEES' (exact moment!)
Timeline:
File write starts
↓
close() called → LOG APPEARS
↓
Workflow detects log → DOWNLOAD NOW! ← We're here instantly!
↓
Spark reads file → EOF error
↓
Analyze downloaded file ✅
This gives us the EXACT moment to download, with near-zero latency!
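Roughly, the workflow side of the trigger, assuming the test container is named seaweedfs-spark-tests and the filer is exposed on localhost:8888:
# wait for the WARN line emitted by SeaweedOutputStream.close(), then download right away
until docker logs seaweedfs-spark-tests 2>&1 | grep -q 'PARQUET FILE WRITTEN TO EMPLOYEES'; do
  sleep 0.2
done
FILE=$(docker logs seaweedfs-spark-tests 2>&1 \
  | grep -m1 -oE 'PARQUET FILE WRITTEN TO EMPLOYEES: [^ ]+' | awk '{print $NF}')
curl -s -o "/tmp/${FILE}" "http://localhost:8888/test-spark/employees/${FILE}"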
The issue: the fixed 5-second sleep was too short; the files were not written yet
The solution: Poll every second for up to 30 seconds
- Check if files exist in employees directory
- Download immediately when they appear
- Log progress every 5 seconds
This gives us a 30-second window to catch the file between:
- Write (file appears)
- Read (EOF error)
The file should appear within a few seconds of SparkSQLTest starting, and we'll grab it immediately!
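A sketch of that polling loop (filer assumed on localhost:8888):
# poll every second for up to 30 seconds; download as soon as a parquet file appears
for i in $(seq 1 30); do
  LISTING=$(curl -s -H "Accept: application/json" "http://localhost:8888/test-spark/employees/")
  if echo "$LISTING" | grep -q '\.parquet'; then
    echo "parquet file appeared after ${i}s"
    break
  fi
  if [ $((i % 5)) -eq 0 ]; then echo "still waiting after ${i}s..."; fi
  sleep 1
done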
BREAKTHROUGH STRATEGY: Don't wait for error, download files proactively!
The problem:
- Waiting for EOF error is too slow
- By the time we extract chunk ID, Spark has deleted the file
- Volume garbage collection removes chunks quickly
The solution:
1. Monitor for 'Running seaweed.spark.SparkSQLTest' in logs
2. Sleep 5 seconds (let test write files)
3. Download ALL files from /test-spark/employees/ immediately
4. Keep files for analysis when EOF occurs
This downloads files while they still exist, BEFORE Spark cleanup!
Timeline:
Write → Download (NEW!) → Read → EOF Error → Analyze
Instead of:
Write → Read → EOF Error → Try to download (file gone!) ❌
This will finally capture the actual problematic file!
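A hedged sketch of the proactive grab; filenames are grepped straight out of the listing, so nothing is assumed about the JSON field names:
mkdir -p /tmp/parquet-capture
curl -s -H "Accept: application/json" "http://localhost:8888/test-spark/employees/" \
  | grep -oE '[A-Za-z0-9._-]+\.parquet' | sort -u \
  | while read -r NAME; do
      curl -s -o "/tmp/parquet-capture/${NAME}" "http://localhost:8888/test-spark/employees/${NAME}"
    done
ls -la /tmp/parquet-capture/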
The issue: the grep pattern was wrong and was looking in the wrong place
- EOF exception is in the 'Caused by' section
- Filename is in the outer exception message
The fix:
- Search for 'Encountered error while reading file' line
- Extract filename: part-00000-xxx-c000.snappy.parquet
- Fixed regex pattern (was missing dash before c000)
Example from logs:
'Encountered error while reading file seaweedfs://...part-00000-c5a41896-5221-4d43-a098-d0839f5745f6-c000.snappy.parquet'
This will finally extract the right filename!
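The extraction boils down to something like this (log file name illustrative):
FAILED_FILE=$(grep -m1 'Encountered error while reading file' test-output.log \
  | grep -oE 'part-[a-f0-9-]+-c000\.snappy\.parquet' | head -1)
echo "failing file: ${FAILED_FILE}"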
The issue: We're not finding the correct file because:
1. Error mentions: test-spark/employees/part-00000-xxx.parquet
2. But we downloaded chunk from employees_window (different file!)
The problem:
- File is already written when error occurs
- Error happens during READ, not write
- Need to find when SeaweedInputStream opens this file for reading
New approach:
1. Extract filename from EOF error message
2. Search for 'new path:' + filename (when file is opened for read)
3. Get chunk info from the entry details logged at that point
4. Download the ACTUAL failing chunk
This should finally get us the right file with the 78-byte issue!
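Roughly, assuming the test output is in test-output.log and the chunk details are logged near the 'new path:' line:
FAILED_FILE=$(grep -m1 'Encountered error while reading file' test-output.log \
  | grep -oE 'part-[a-f0-9-]+-c000\.snappy\.parquet')
# the read path is logged as 'new path: ...'; the entry details around it carry the chunk info
grep -A 30 "new path: .*${FAILED_FILE}" test-output.log | grep -E 'file_id|size'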
CRITICAL FIX: We were downloading the wrong file!
The issue:
- EOF error is for: test-spark/employees/part-00000-xxx.parquet
- But logs contain MULTIPLE files (employees_window with 1275 bytes, etc.)
- grep -B 50 was matching chunk info from OTHER files
The solution:
1. Extract the EXACT failing filename from EOF error message
2. Search logs for chunk info specifically for THAT file
3. Download the correct chunk
Example:
- EOF error mentions: part-00000-32cafb4f-82c4-436e-a22a-ebf2f5cb541e-c000.snappy.parquet
- Find chunk info for this specific file, not other files in logs
Now we'll download the actual problematic file, not a random one!
SUCCESS: File downloaded and readable! Now analyzing WHY Parquet expects 78 more bytes.
Added analysis:
1. Parse footer length from last 8 bytes
2. Extract column chunk offsets from parquet-tools meta
3. Compare actual file size with expected size from metadata
4. Identify if offsets are pointing beyond actual data
This will reveal:
- Are column chunk offsets incorrectly calculated during write?
- Is the footer claiming data that doesn't exist?
- Where exactly are the missing 78 bytes supposed to be?
The file is already uploaded as artifact for deeper local analysis.
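For reference, the footer-length check itself is a few lines of shell, assuming a little-endian runner (Parquet stores the footer length as a little-endian uint32 right before the trailing PAR1 magic):
FILE=failed.parquet                              # the downloaded file (name illustrative)
SIZE=$(stat -c%s "$FILE")
# last 8 bytes = 4-byte footer length (little-endian uint32) + the trailing "PAR1" magic
FOOTER_LEN=$(tail -c 8 "$FILE" | head -c 4 | od -An -tu4 | tr -d ' ')
echo "size on disk: $SIZE bytes, footer length: $FOOTER_LEN bytes"
echo "footer starts at offset: $((SIZE - 8 - FOOTER_LEN))"
# compare column chunk offsets from 'parquet-tools meta' against $SIZE:
# any offset + compressed size beyond $SIZE is where the missing 78 bytes were expected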
The grep was matching 'source_file_id' instead of 'file_id'.
Fixed the pattern to look for ' file_id: ' (with spaces), which excludes the
'source_file_id:' line.
Now will correctly extract:
file_id: "7,d0cdf5711" ← THIS ONE
Instead of:
source_file_id: "0,000000000" ← NOT THIS
The correct chunk ID should download successfully from volume server!
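The corrected extraction looks roughly like:
# the leading space keeps 'source_file_id:' from matching
CHUNK_ID=$(grep -m1 ' file_id: ' test-output.log | grep -oE '[0-9]+,[0-9a-f]+' | head -1)
echo "chunk id: ${CHUNK_ID}"    # e.g. 7,d0cdf5711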
BREAKTHROUGH: Download chunk data directly from volume server, bypassing filer!
The issue: Even real-time monitoring is too slow - Spark deletes filer
metadata instantly after the EOF error.
THE SOLUTION: Extract chunk ID from logs and download directly from volume
server. Volume keeps data even after filer metadata is deleted!
From logs we see:
file_id: "7,d0364fd01"
size: 693
We can download this directly:
curl http://localhost:8080/7,d0364fd01
Changes:
1. Extract chunk file_id from logs (format: "volume,filekey")
2. Download directly from volume server port 8080
3. Volume data persists longer than filer metadata
4. Comprehensive analysis with parquet-tools, hexdump, magic bytes
This WILL capture the actual file data!
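Put together, a sketch of the direct download (volume server assumed on localhost:8080, as in the compose file):
CHUNK_ID=$(grep -m1 ' file_id: ' test-output.log | grep -oE '[0-9]+,[0-9a-f]+' | head -1)
curl -s -o /tmp/chunk.bin "http://localhost:8080/${CHUNK_ID}"
ls -l /tmp/chunk.bin
# if the whole parquet file fits in one chunk (size 693 above), it should start and end with PAR1
head -c 4 /tmp/chunk.bin; echo
tail -c 4 /tmp/chunk.bin; echo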
ROOT CAUSE: Spark cleans up files after test completes (even on failure).
By the time we try to download, files are already deleted.
SOLUTION: Monitor test logs in real-time and download file THE INSTANT
we see the EOF error (meaning file exists and was just read).
Changes:
1. Start tests in detached mode
2. Background process monitors logs for 'EOFException.*78 bytes'
3. When detected, extract filename from error message
4. Download IMMEDIATELY (file still exists!)
5. Quick analysis with parquet-tools
6. Main process waits for test completion
This catches the file at the exact moment it exists and is causing the error!
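A rough sketch of steps 1-4, with service names as used in the compose file:
docker compose up -d spark-tests
(
  # follow the test logs until the EOF error appears, then download straight away
  docker compose logs -f spark-tests 2>&1 | grep -m1 'EOFException.*78 bytes' > /dev/null
  FILE=$(docker compose logs spark-tests 2>&1 | grep -m1 'Encountered error while reading file' \
    | grep -oE 'part-[a-f0-9-]+-c000\.snappy\.parquet')
  curl -s -o "/tmp/${FILE}" "http://localhost:8888/test-spark/employees/${FILE}"
  parquet-tools inspect "/tmp/${FILE}" || true
) &
docker wait seaweedfs-spark-tests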
The directory is empty, which means tests are failing BEFORE writing files.
Enhanced diagnostics:
1. List /test-spark/ root to see what directories exist
2. Grep test logs for 'employees', 'people_partitioned', '.parquet'
3. Try multiple possible locations: employees, people_partitioned, people
4. Show WHERE the test actually tried to write files
This will reveal:
- If test fails before writing (connection error, etc.)
- What path the test is actually using
- Whether files exist in a different location
REAL ROOT CAUSE: --abort-on-container-exit stops ALL containers immediately
when the test container exits, including the filer. So we couldn't download
files because filer was already stopped.
SOLUTION: Run tests in detached mode, wait for completion, then download
while filer is still running.
Changes:
1. docker compose up -d spark-tests (detached mode)
2. docker wait seaweedfs-spark-tests (wait for completion)
3. docker inspect to get exit code
4. docker compose logs to show test output
5. Download file while all services still running
6. Then exit with test exit code
Improved grep pattern to be more specific:
part-[a-f0-9-]+\.c000\.snappy\.parquet
This MUST work - filer is guaranteed to be running during download!
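Condensed, the new flow is roughly:
docker compose up -d spark-tests
docker wait seaweedfs-spark-tests                # blocks until the test container exits
TEST_EXIT_CODE=$(docker inspect seaweedfs-spark-tests --format='{{.State.ExitCode}}')
docker compose logs spark-tests
# the filer is still running here, so the download has a live endpoint
curl -s -H "Accept: application/json" "http://localhost:8888/test-spark/employees/"
exit "$TEST_EXIT_CODE"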
ROOT CAUSE FOUND: Files disappear after docker compose stops containers.
The data doesn't persist because:
- docker compose up --abort-on-container-exit stops ALL containers when tests finish
- When containers stop, the data in SeaweedFS is lost (even with named volumes,
the metadata/index is lost when master/filer stop)
- By the time we tried to download files, they were gone
SOLUTION: Download file IMMEDIATELY after test failure, BEFORE docker compose
exits and stops containers.
Changes:
1. Moved file download INTO the test-run step
2. Download happens right after TEST_EXIT_CODE is captured
3. File downloads while containers are still running
4. Analysis step now just uses the already-downloaded file
5. Removed all the restart/diagnostics complexity
This should finally get us the Parquet file for analysis!
Added checks to diagnose why files aren't accessible:
1. Container status before restart
- See if containers are still running or stopped
- Check exit codes
2. Volume inspection
- List all docker volumes
- Inspect seaweedfs-volume-data volume
- Check if volume data persisted
3. Access from inside container
- Use curl from inside filer container
- This bypasses host networking issues
- Shows if files exist but aren't exposed
4. Direct filesystem check
- Try to ls the directory from inside container
- See if filer has filesystem access
This will definitively show:
- Did data persist through container restart?
- Are files there but not accessible via HTTP from host?
- Is the volume getting cleaned up somehow?
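The checks reduce to a handful of commands; the container names and data path below are assumptions:
docker compose ps -a                                   # container status and exit codes
docker volume ls
docker volume inspect seaweedfs-volume-data
# query the filer from inside its own container to rule out host networking problems
docker exec seaweedfs-filer curl -s "http://localhost:8888/test-spark/employees/" || true
docker exec seaweedfs-filer ls -la /data || true       # data dir path is an assumption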
Added weed shell commands to inspect the directory structure:
- List /test-spark/ to see what directories exist
- List /test-spark/employees/ to see what files are there
This will help diagnose why the HTTP API returns empty:
- Are files there but HTTP not working?
- Are files in a different location?
- Were files cleaned up after the test?
- Did the volume data persist after container restart?
Will show us exactly what's in SeaweedFS after test failure.
The heredoc syntax (<<'SHELL_EOF') in the workflow was breaking
YAML parsing and preventing the workflow from running.
Changed from:
weed shell <<'SHELL_EOF'
fs.ls /test-spark/employees/
exit
SHELL_EOF
To:
echo -e 'fs.ls /test-spark/employees/\nexit' | weed shell
This achieves the same result but is YAML-compatible.
Removed branch restrictions from workflow triggers.
Now the tests will run on ANY branch when relevant files change:
- test/java/spark/**
- other/java/hdfs2/**
- other/java/hdfs3/**
- other/java/client/**
- workflow file itself
This fixes the issue where tests weren't running on feature branches.
Problem: File download step shows 'No Parquet files found'
even though ports are exposed (8888:8888) and services are running.
Improvements:
1. Show raw curl output to see actual API response
2. Use improved grep pattern with -oP for better parsing
3. Add fallback to fetch file via docker exec if HTTP fails
4. If no files found via HTTP, try docker exec curl
5. If still no files, use weed shell 'fs.ls' to list files
This will help us understand:
- Is the HTTP API returning files in unexpected format?
- Are files accessible from inside the container but not outside?
- Are files in a different path than expected?
One of these methods WILL find the files!
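A sketch of the layered fallback (filer and master container names are assumptions):
LISTING=$(curl -s -H "Accept: application/json" "http://localhost:8888/test-spark/employees/")
echo "raw listing: $LISTING"
FILE=$(echo "$LISTING" | grep -oP 'part-[A-Za-z0-9._-]+\.parquet' | head -1 || true)
if [ -z "$FILE" ]; then
  # fallback 1: same request from inside the filer container
  FILE=$(docker exec seaweedfs-filer curl -s -H "Accept: application/json" \
    "http://localhost:8888/test-spark/employees/" \
    | grep -oP 'part-[A-Za-z0-9._-]+\.parquet' | head -1 || true)
fi
if [ -z "$FILE" ]; then
  # fallback 2: list the directory through weed shell on the master
  echo -e 'fs.ls /test-spark/employees/\nexit' | docker exec -i seaweedfs-master weed shell
fi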
Problem: --abort-on-container-exit stops ALL containers when tests
fail, so SeaweedFS services are down when file download step runs.
Solution:
1. Use continue-on-error: true to capture test failure
2. Store exit code in GITHUB_OUTPUT for later checking
3. Add new step to restart SeaweedFS services if tests failed
4. Download step runs after services are back up
5. Final step checks test exit code and fails workflow
This ensures:
✅ Services keep running for file analysis
✅ Parquet files are accessible via filer API
✅ Workflow still fails if tests failed
✅ All diagnostics can complete
Now we'll actually be able to download and examine the Parquet files!
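The exit-code plumbing inside the test step is only a couple of lines of shell; the step id and output name below are illustrative:
# inside the test step (continue-on-error: true)
TEST_EXIT_CODE=0
docker compose up --abort-on-container-exit --exit-code-from spark-tests spark-tests || TEST_EXIT_CODE=$?
echo "exit_code=${TEST_EXIT_CODE}" >> "$GITHUB_OUTPUT"
# a later step can then fail the job with: exit "${{ steps.run-tests.outputs.exit_code }}"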
Added diagnostic step to download and examine actual Parquet files
when tests fail. This will definitively answer:
1. Is the file complete? (Check PAR1 magic bytes at start/end)
2. What size is it? (Compare actual vs expected)
3. Can parquet-tools read it? (Reader compatibility test)
4. What does the footer contain? (Hex dump last 200 bytes)
Steps performed:
- List files in SeaweedFS
- Download first Parquet file
- Check magic bytes (PAR1 at offset 0 and EOF-4)
- Show file size from filesystem
- Hex dump header (first 100 bytes)
- Hex dump footer (last 200 bytes)
- Run parquet-tools inspect/show
- Upload file as artifact for local analysis
This will reveal if the issue is:
A) File is incomplete (missing trailer) → SeaweedFS write problem
B) File is complete but unreadable → Parquet format problem
C) File is complete and readable → SeaweedFS read problem
D) File size doesn't match metadata → Footer offset problem
The downloaded file will be available as 'failed-parquet-file' artifact.
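The byte-level checks are plain shell; a sketch against the downloaded file (name illustrative):
FILE=failed.parquet
head -c 4 "$FILE"; echo                  # expect PAR1
tail -c 4 "$FILE"; echo                  # expect PAR1
stat -c%s "$FILE"                        # size on disk
xxd "$FILE" | head -7                    # header (first ~100 bytes)
tail -c 200 "$FILE" | xxd                # footer
parquet-tools inspect "$FILE" || true
parquet-tools show "$FILE" | head || true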
Benefits:
- Eliminates artifact upload/download complexity
- Maven artifacts stay in ~/.m2 throughout
- Simpler debugging (all logs in one place)
- Faster execution (no transfer overhead)
- More reliable (no artifact transfer failures)
Structure:
1. Build SeaweedFS binary + Java dependencies
2. Run Spark integration tests (Docker)
3. Run Spark example (host-based, push/dispatch only)
4. Upload results & diagnostics
Trade-off: the example runs sequentially after the tests instead of in parallel,
but overall runtime is likely faster without artifact transfers.
The Maven artifacts are not appearing in the downloaded artifacts!
Only 'docker' directory is present, '.m2' is missing.
Added verification to show:
1. Does ~/.m2/repository/com/seaweedfs exist?
2. What files are being copied?
3. What SNAPSHOT artifacts are in the upload?
4. Full structure of artifacts/ before upload
This will reveal if:
- Maven install didn't work (artifacts not created)
- Copy command failed (wrong path)
- Upload excluded .m2 somehow (artifact filter issue)
The next run will show exactly where the Maven artifacts are lost!
Issue: Docker volume mount from $HOME/.m2 wasn't working in GitHub Actions
- Container couldn't access the locally built SNAPSHOT JARs
- Maven failed with 'Could not find artifact seaweedfs-hadoop3-client:3.80.1-SNAPSHOT'
Solution: Copy Maven repository into workspace
1. In CI: Copy ~/.m2/repository/com/seaweedfs to test/java/spark/.m2/repository/com/
2. docker-compose.yml: Mount ./.m2 (relative path in workspace)
3. .gitignore: Added .m2/ to ignore copied artifacts
Why this works:
- Workspace directory (.) is successfully mounted as /workspace
- ./.m2 is inside workspace, so it gets mounted too
- Container sees artifacts at /root/.m2/repository/com/seaweedfs/...
- Maven finds the 3.80.1-SNAPSHOT JARs with our debug logging!
Next run should finally show the [DEBUG-2024] logs! 🎯
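The CI copy step amounts to (paths as listed above):
# run from the repo root in CI, before docker compose up
mkdir -p test/java/spark/.m2/repository/com
cp -r "$HOME/.m2/repository/com/seaweedfs" test/java/spark/.m2/repository/com/
ls -la test/java/spark/.m2/repository/com/seaweedfs/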
Issue: docker-compose was using ~ which may not expand correctly in CI
Changes:
1. docker-compose.yml: Changed ~/.m2 to ${HOME}/.m2
- Ensures proper path expansion in GitHub Actions
- $HOME is /home/runner in GitHub Actions runners
2. Added verification step in workflow:
- Lists all SNAPSHOT artifacts before tests
- Shows what's available in Maven local repo
- Will help diagnose if artifacts aren't being restored correctly
This should ensure the Maven container can access the locally built
3.80.1-SNAPSHOT JARs with our debug logging code.
Added -U flag to mvn install to force dependency updates
Added verification step using javap to check compiled bytecode
This will show if the JAR actually contains the new logging code:
- If 'totalPosition' string is found → JAR is updated
- If not found → Something is wrong with the build
The verification output will help diagnose why INFO logs aren't showing.
Troubleshooting 'seaweedfs-volume: Temporary failure in name resolution':
docker-compose.yml changes:
- Add MAVEN_OPTS to disable Java DNS caching (ttl=0)
Java caches DNS lookups which can cause stale results
- Add ping tests before mvn test to verify DNS resolution
Tests: ping -c 1 seaweedfs-volume && ping -c 1 seaweedfs-filer
- This will show if DNS works before tests run
workflow changes:
- List Docker networks before running tests
- Shows network configuration for debugging
- Helps verify spark-tests joins correct network
If ping succeeds but tests fail, it's a Java/Maven DNS issue.
If ping fails, it's a Docker networking configuration issue.
Note: Previous test failures may be from old code before Docker networking fix.
Better approach than mixing host and container networks.
Changes to docker-compose.yml:
- Remove 'network_mode: host' from spark-tests container
- Add spark-tests to seaweedfs-spark bridge network
- Update SEAWEEDFS_FILER_HOST from 'localhost' to 'seaweedfs-filer'
- Add depends_on to ensure services are healthy before tests
- Update volume publicUrl from 'localhost:8080' to 'seaweedfs-volume:8080'
Changes to workflow:
- Remove separate build and test steps
- Run tests via 'docker compose up spark-tests'
- Use --abort-on-container-exit and --exit-code-from for proper exit codes
- Simpler: one step instead of two
Benefits:
✓ All components use Docker DNS (seaweedfs-master, seaweedfs-volume, seaweedfs-filer)
✓ No host/container network split or DNS resolution issues
✓ Consistent with how other SeaweedFS integration tests work
✓ Tests are fully containerized and reproducible
✓ Volume server accessible via seaweedfs-volume:8080 for all clients
✓ Automatic volume creation works (master can reach volume via gRPC)
✓ Data writes work (Spark can reach volume via Docker network)
This matches the architecture of other integration tests and is cleaner.
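The whole test step then reduces to a single command:
# exits with the spark-tests container's exit code, so the CI step fails when the tests fail
docker compose up --abort-on-container-exit --exit-code-from spark-tests spark-tests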
Root cause identified:
- Volume server was using -ip=127.0.0.1
- Master couldn't reach volume server at 127.0.0.1 from its container
- When Spark requested assignment, master tried to create volume via gRPC
- Master's gRPC call to 127.0.0.1:18080 failed (reached itself, not volume server)
- Result: 'No writable volumes' error
Solution:
- Change volume server to use -ip=seaweedfs-volume (container hostname)
- Master can now reach volume server at seaweedfs-volume:18080
- Automatic volume creation works as designed
- Kept -publicUrl=127.0.0.1:8080 for external clients (host network)
Workflow changes:
- Remove forced volume creation (curl POST to /vol/grow)
- Volumes will be created automatically on first write request
- Keep diagnostic output for troubleshooting
- Simplified startup verification
This matches how other SeaweedFS tests work with Docker networking.
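The volume-server command line then looks roughly like this (other flags such as -dir elided; the exact set may differ):
weed volume -mserver=seaweedfs-master:9333 -port=8080 \
  -ip=seaweedfs-volume -publicUrl=127.0.0.1:8080 -max=0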
Root cause: with -max=0 (unlimited volumes), volumes are created on demand,
but no volumes existed when the tests started, so the first write failed.
Solution:
- Explicitly trigger volume growth via /vol/grow API
- Create 3 volumes with replication=000 before running tests
- Verify volumes exist before proceeding
- Fail early with clear message if volumes can't be created
Changes:
- POST to http://localhost:9333/vol/grow?replication=000&count=3
- Wait up to 10 seconds for volumes to appear
- Show volume count and layout status
- Exit with error if no volumes after 10 attempts
- Applied to both spark-tests and spark-example jobs
This ensures writable volumes exist before Spark tries to write data.
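Roughly, with the master assumed on localhost:9333; here the check uses the /dir/assign API as a quick "can we write?" probe rather than parsing the layout JSON:
curl -sS -X POST "http://localhost:9333/vol/grow?replication=000&count=3"
# wait up to 10 seconds until the master can hand out a write assignment
for i in $(seq 1 10); do
  if curl -s "http://localhost:9333/dir/assign" | grep -q '"fid"'; then
    echo "writable volumes available (attempt $i)"
    break
  fi
  if [ "$i" -eq 10 ]; then
    echo "ERROR: still no writable volumes after 10 attempts"
    exit 1
  fi
  sleep 1
done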
- Add 'weed shell' execution to run 'volume.list' on failure
- Shows which volumes exist, their status, and available space
- Add cluster status JSON output for detailed topology view
- Helps diagnose volume allocation issues and full volumes
- Added to both spark-tests and spark-example jobs
- Diagnostic runs only when tests fail (if: failure())
- Add 'docker compose down -v' before starting services to clean up stale volumes
- Prevents accumulation of data/buckets from previous test runs
- Add volume registration verification after service startup
- Check that volume server has registered with master and volumes are available
- Helps diagnose 'No writable volumes' errors
- Shows volume count and waits up to 30 seconds for volumes to be created
- Both spark-tests and spark-example jobs updated with same improvements
- Add -Dcentral.publishing.skip=true to all Maven builds
- Central publishing plugin is only needed for Maven Central releases
- Prevents plugin resolution errors during CI builds
- Complements existing -Dgpg.skip=true flag
- Add container status (docker compose ps -a) on startup failure
- Add detailed logs for all three services (master, volume, filer)
- Add container inspection to verify binary exists
- Add debugging info for spark-example job
- Helps diagnose startup failures before containers are torn down
- Add ls -la to show build-artifacts/docker/ contents
- Add file command to verify binary type
- Add --no-cache to docker compose build to prevent stale cache issues
- Ensures fresh build with current binary
- Download artifacts to 'build-artifacts' directory instead of '.'
- Prevents checkout from overwriting downloaded files
- Explicitly copy weed binary from build-artifacts to docker/ directory
- Update Maven artifact restoration to use new path
- Add step to chmod +x the weed binary after downloading artifacts
- Artifacts lose executable permissions during upload/download
- Prevents 'Permission denied' errors when Docker tries to run the binary
- Add step to restore Maven artifacts from download to ~/.m2/repository
- Restructure artifact upload to use consistent directory layout
- Remove obsolete 'version' field from docker-compose.yml to eliminate warnings
- Ensures SeaweedFS Java dependencies are available during test execution
1. Add explicit permissions (least privilege):
- contents: read
- checks: write (for test reports)
- pull-requests: write (for PR comments)
2. Extract duplicate build steps into shared 'build-deps' job:
- Eliminates duplication between spark-tests and spark-example
- Build artifacts are uploaded and reused by dependent jobs
- Reduces CI time and ensures consistency
3. Fix spark-example service startup verification:
- Match robust approach from spark-tests job
- Add explicit timeout and failure handling
- Verify all services (master, volume, filer)
- Include diagnostic logging on failure
- Prevents silent failures and obscure errors
These changes improve maintainability, security, and reliability
of the Spark integration test workflow.