Benefits:
- Eliminates artifact upload/download complexity
- Maven artifacts stay in ~/.m2 throughout
- Simpler debugging (all logs in one place)
- Faster execution (no transfer overhead)
- More reliable (no artifact transfer failures)
Structure:
1. Build SeaweedFS binary + Java dependencies
2. Run Spark integration tests (Docker)
3. Run Spark example (host-based, push/dispatch only)
4. Upload results & diagnostics
Trade-off: The example runs sequentially after the tests instead of in parallel,
but overall runtime is likely faster without artifact transfers.
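A rough sketch of the consolidated single-job layout (step names, paths, and the build/example commands are illustrative assumptions, not the exact workflow):

    jobs:
      spark-integration:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          # 1. Build the weed binary and install the Java client into ~/.m2
          - name: Build dependencies
            run: |
              go build -o docker/weed ./weed
              mvn -f other/java/client/pom.xml clean install -DskipTests
          # 2. Containerized Spark integration tests
          - name: Run Spark integration tests
            working-directory: test/java/spark
            run: docker compose up --exit-code-from spark-tests spark-tests
          # 3. Host-based Spark example, push/dispatch only
          - name: Run Spark example
            if: github.event_name == 'push' || github.event_name == 'workflow_dispatch'
            run: echo "spark-submit example goes here"
          # 4. Always collect logs and diagnostics
          - name: Upload results
            if: always()
            uses: actions/upload-artifact@v4
            with:
              name: spark-test-results
              path: test/java/spark/target/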
The Maven artifacts are not appearing in the downloaded artifacts!
Only the 'docker' directory is present; '.m2' is missing.
Added verification to show:
1. Does ~/.m2/repository/com/seaweedfs exist?
2. What files are being copied?
3. What SNAPSHOT artifacts are in the upload?
4. Full structure of artifacts/ before upload
This will reveal if:
- Maven install didn't work (artifacts not created)
- Copy command failed (wrong path)
- Upload excluded .m2 somehow (artifact filter issue)
The next run will show exactly where the Maven artifacts are lost!
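As a workflow step, the verification could look roughly like this ('artifacts' as the upload staging directory is an assumption):

    - name: Verify Maven artifacts before upload
      run: |
        # 1. Do the locally built artifacts exist at all?
        ls -la ~/.m2/repository/com/seaweedfs || echo "MISSING: ~/.m2/repository/com/seaweedfs"
        # 2. Show exactly what the copy transfers
        mkdir -p artifacts/.m2/repository/com
        cp -rv ~/.m2/repository/com/seaweedfs artifacts/.m2/repository/com/ || echo "COPY FAILED"
        # 3. Which SNAPSHOT artifacts made it into the staging area?
        find artifacts -name '*SNAPSHOT*'
        # 4. Full structure of artifacts/ before upload
        find artifacts -type d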
Issue: Docker volume mount from $HOME/.m2 wasn't working in GitHub Actions
- Container couldn't access the locally built SNAPSHOT JARs
- Maven failed with 'Could not find artifact seaweedfs-hadoop3-client:3.80.1-SNAPSHOT'
Solution: Copy Maven repository into workspace
1. In CI: Copy ~/.m2/repository/com/seaweedfs to test/java/spark/.m2/repository/com/
2. docker-compose.yml: Mount ./.m2 (relative path in workspace)
3. .gitignore: Added .m2/ to ignore copied artifacts
Why this works:
- Workspace directory (.) is successfully mounted as /workspace
- ./.m2 is inside workspace, so it gets mounted too
- Container sees artifacts at /root/.m2/repository/com/seaweedfs/...
- Maven finds the 3.80.1-SNAPSHOT JARs with our debug logging!
Next run should finally show the [DEBUG-2024] logs! 🎯
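The two moving parts, sketched (the compose file is assumed to live in test/java/spark):

    # CI step: stage the local Maven repo inside the workspace
    - name: Copy Maven artifacts into workspace
      run: |
        mkdir -p test/java/spark/.m2/repository/com
        cp -r ~/.m2/repository/com/seaweedfs test/java/spark/.m2/repository/com/

    # docker-compose.yml: the relative mount rides along with the workspace mount
    services:
      spark-tests:
        volumes:
          - .:/workspace
          - ./.m2:/root/.m2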
Issue: docker-compose was using ~, which may not expand correctly in CI
Changes:
1. docker-compose.yml: Changed ~/.m2 to ${HOME}/.m2
- Ensures proper path expansion in GitHub Actions
- $HOME is /home/runner in GitHub Actions runners
2. Added verification step in workflow:
- Lists all SNAPSHOT artifacts before tests
- Shows what's available in Maven local repo
- Will help diagnose if artifacts aren't being restored correctly
This should ensure the Maven container can access the locally built
3.80.1-SNAPSHOT JARs with our debug logging code.
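Concretely (the container-side mount point is assumed):

    # docker-compose.yml
    services:
      spark-tests:
        volumes:
          - ${HOME}/.m2:/root/.m2

    # workflow: verification before the tests run
    - name: List SNAPSHOT artifacts in local Maven repo
      run: find ~/.m2/repository -maxdepth 6 -type d -name '*SNAPSHOT*'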
Added -U flag to mvn install to force dependency updates
Added verification step using javap to check compiled bytecode
This will show if the JAR actually contains the new logging code:
- If 'totalPosition' string is found → JAR is updated
- If not found → Something is wrong with the build
The verification output will help diagnose why INFO logs aren't showing.
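One way to wire the check (the JAR path and package layout are assumptions; the grep only matches if 'totalPosition' survives as a member name or string constant in the bytecode):

    - name: Verify JAR contains new logging code
      run: |
        mvn clean install -U -DskipTests   # -U forces snapshot/dependency re-resolution
        JAR=$(find ~/.m2/repository/com/seaweedfs -name 'seaweedfs-client-*-SNAPSHOT.jar' | head -1)
        mkdir -p /tmp/jar-check && unzip -oq "$JAR" -d /tmp/jar-check
        if javap -p -c /tmp/jar-check/seaweedfs/client/*.class | grep -q totalPosition; then
          echo "JAR is updated: totalPosition found"
        else
          echo "JAR is stale: totalPosition not found"; exit 1
        fi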
Troubleshooting 'seaweedfs-volume: Temporary failure in name resolution':
docker-compose.yml changes:
- Add MAVEN_OPTS to disable Java DNS caching (ttl=0)
  Java caches DNS lookups, which can cause stale results
- Add ping tests before mvn test to verify DNS resolution
Tests: ping -c 1 seaweedfs-volume && ping -c 1 seaweedfs-filer
- This will show if DNS works before tests run
workflow changes:
- List Docker networks before running tests
- Shows network configuration for debugging
- Helps verify spark-tests joins correct network
If ping succeeds but tests fail, it's a Java/Maven DNS issue.
If ping fails, it's a Docker networking configuration issue.
Note: Previous test failures may be from old code before Docker networking fix.
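Compose-side sketch of both changes (-Dsun.net.inetaddr.ttl=0 is the legacy system-property knob; the canonical setting is the networkaddress.cache.ttl security property):

    services:
      spark-tests:
        environment:
          # disable the JVM's positive DNS cache for Maven-launched JVMs
          - MAVEN_OPTS=-Dsun.net.inetaddr.ttl=0
        command: >
          sh -c "ping -c 1 seaweedfs-volume &&
                 ping -c 1 seaweedfs-filer &&
                 mvn test"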
Better approach than mixing host and container networks.
Changes to docker-compose.yml:
- Remove 'network_mode: host' from spark-tests container
- Add spark-tests to seaweedfs-spark bridge network
- Update SEAWEEDFS_FILER_HOST from 'localhost' to 'seaweedfs-filer'
- Add depends_on to ensure services are healthy before tests
- Update volume publicUrl from 'localhost:8080' to 'seaweedfs-volume:8080'
Changes to workflow:
- Remove separate build and test steps
- Run tests via 'docker compose up spark-tests'
- Use --abort-on-container-exit and --exit-code-from for proper exit codes
- Simpler: one step instead of two
Benefits:
✓ All components use Docker DNS (seaweedfs-master, seaweedfs-volume, seaweedfs-filer)
✓ No host/container network split or DNS resolution issues
✓ Consistent with how other SeaweedFS integration tests work
✓ Tests are fully containerized and reproducible
✓ Volume server accessible via seaweedfs-volume:8080 for all clients
✓ Automatic volume creation works (master can reach volume via gRPC)
✓ Data writes work (Spark can reach volume via Docker network)
This matches the architecture of other integration tests and is cleaner.
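The relevant compose wiring, roughly (only the settings this change touches; the filer service is assumed to define a healthcheck):

    services:
      seaweedfs-volume:
        command: 'volume -mserver=seaweedfs-master:9333 -port=8080 -publicUrl=seaweedfs-volume:8080'
        networks:
          - seaweedfs-spark
      spark-tests:
        environment:
          - SEAWEEDFS_FILER_HOST=seaweedfs-filer
        networks:
          - seaweedfs-spark
        depends_on:
          seaweedfs-filer:
            condition: service_healthy
    networks:
      seaweedfs-spark:
        driver: bridge

    # workflow: one step replaces the separate build and test steps
    - name: Run Spark tests
      run: docker compose up --abort-on-container-exit --exit-code-from spark-tests spark-tests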
Root cause identified:
- Volume server was using -ip=127.0.0.1
- Master couldn't reach volume server at 127.0.0.1 from its container
- When Spark requested assignment, master tried to create volume via gRPC
- Master's gRPC call to 127.0.0.1:18080 failed (reached itself, not volume server)
- Result: 'No writable volumes' error
Solution:
- Change volume server to use -ip=seaweedfs-volume (container hostname)
- Master can now reach volume server at seaweedfs-volume:18080
- Automatic volume creation works as designed
- Kept -publicUrl=127.0.0.1:8080 for external clients (host network)
Workflow changes:
- Remove forced volume creation (curl POST to /vol/grow)
- Volumes will be created automatically on first write request
- Keep diagnostic output for troubleshooting
- Simplified startup verification
This matches how other SeaweedFS tests work with Docker networking.
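The one-line fix in the volume service command, before and after:

    # Before: the master's gRPC dial to 127.0.0.1:18080 looped back into its own container
    command: 'volume -mserver=seaweedfs-master:9333 -port=8080 -ip=127.0.0.1 -publicUrl=127.0.0.1:8080'
    # After: advertise the container hostname (gRPC port = HTTP port + 10000)
    command: 'volume -mserver=seaweedfs-master:9333 -port=8080 -ip=seaweedfs-volume -publicUrl=127.0.0.1:8080'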
Root cause: With -max=0 (unlimited volumes), volumes are created on demand,
but no volumes existed when the tests started, so the first write failed.
Solution:
- Explicitly trigger volume growth via /vol/grow API
- Create 3 volumes with replication=000 before running tests
- Verify volumes exist before proceeding
- Fail early with clear message if volumes can't be created
Changes:
- POST to http://localhost:9333/vol/grow?replication=000&count=3
- Wait up to 10 seconds for volumes to appear
- Show volume count and layout status
- Exit with error if no volumes after 10 attempts
- Applied to both spark-tests and spark-example jobs
This ensures writable volumes exist before Spark tries to write data.
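Sketched as a workflow step (the grep against the /dir/status response is a crude presence check; the JSON shape is assumed):

    - name: Pre-create writable volumes
      run: |
        curl -sf -X POST "http://localhost:9333/vol/grow?replication=000&count=3"
        # wait up to 10 seconds for the volumes to appear
        for i in $(seq 1 10); do
          LAYOUT=$(curl -s http://localhost:9333/dir/status)
          if echo "$LAYOUT" | grep -q '"Id"'; then
            echo "writable volumes present"; exit 0
          fi
          sleep 1
        done
        echo "ERROR: no writable volumes after 10 attempts"
        curl -s http://localhost:9333/dir/status   # show layout for debugging
        exit 1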
- Add 'weed shell' execution to run 'volume.list' on failure
- Shows which volumes exist, their status, and available space
- Add cluster status JSON output for detailed topology view
- Helps diagnose volume allocation issues and full volumes
- Added to both spark-tests and spark-example jobs
- Diagnostic runs only when tests fail (if: failure())
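A failure-only diagnostic step along these lines (the master service name is assumed):

    - name: Dump volume status on failure
      if: failure()
      run: |
        echo "volume.list" | docker compose exec -T seaweedfs-master weed shell -master=localhost:9333
        curl -s "http://localhost:9333/dir/status?pretty=y" || true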
- Add 'docker compose down -v' before starting services to clean up stale volumes
- Prevents accumulation of data/buckets from previous test runs
- Add volume registration verification after service startup
- Check that volume server has registered with master and volumes are available
- Helps diagnose 'No writable volumes' errors
- Shows volume count and waits up to 30 seconds for volumes to be created
- Both spark-tests and spark-example jobs updated with same improvements
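Sketch of the clean start plus registration wait (the grep assumes data nodes are listed by hostname in /dir/status):

    - name: Start services from a clean slate
      run: |
        docker compose down -v        # drop stale volumes/buckets from previous runs
        docker compose up -d seaweedfs-master seaweedfs-volume seaweedfs-filer
        # wait up to 30 seconds for the volume server to register with the master
        for i in $(seq 1 30); do
          if curl -s http://localhost:9333/dir/status | grep -q 'seaweedfs-volume'; then
            echo "volume server registered"; break
          fi
          sleep 1
        done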
- Add -Dcentral.publishing.skip=true to all Maven builds
- Central publishing plugin is only needed for Maven Central releases
- Prevents plugin resolution errors during CI builds
- Complements existing -Dgpg.skip=true flag
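Every Maven build in the workflow then carries both skip flags:

    - name: Install Java client
      run: mvn clean install -DskipTests -Dgpg.skip=true -Dcentral.publishing.skip=true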
- Add container status (docker compose ps -a) on startup failure
- Add detailed logs for all three services (master, volume, filer)
- Add container inspection to verify binary exists
- Add debugging info for spark-example job
- Helps diagnose startup failures before containers are torn down
- Add ls -la to show build-artifacts/docker/ contents
- Add file command to verify binary type
- Add --no-cache to docker compose build to prevent stale cache issues
- Ensures fresh build with current binary
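Roughly:

    - name: Verify binary and rebuild images fresh
      run: |
        ls -la build-artifacts/docker/     # show what actually arrived
        file build-artifacts/docker/weed   # confirm it is an executable, not an empty stub
        docker compose build --no-cache    # avoid reusing stale image layers

    - name: Show container state on startup failure
      if: failure()
      run: |
        docker compose ps -a
        docker compose logs seaweedfs-master seaweedfs-volume seaweedfs-filer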
- Download artifacts to 'build-artifacts' directory instead of '.'
- Prevents checkout from overwriting downloaded files
- Explicitly copy weed binary from build-artifacts to docker/ directory
- Update Maven artifact restoration to use new path
- Add step to chmod +x the weed binary after downloading artifacts
- Artifacts lose executable permissions during upload/download
- Prevents 'Permission denied' errors when Docker tries to run the binary
- Add step to restore Maven artifacts from download to ~/.m2/repository
- Restructure artifact upload to use consistent directory layout
- Remove obsolete 'version' field from docker-compose.yml to eliminate warnings
- Ensures SeaweedFS Java dependencies are available during test execution
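Sketch of the download-and-restore sequence (the artifact name is assumed):

    - uses: actions/download-artifact@v4
      with:
        name: build-artifacts
        path: build-artifacts      # not '.', so checkout files are not overwritten
    - name: Restore binary and Maven artifacts
      run: |
        cp build-artifacts/docker/weed docker/weed
        chmod +x docker/weed       # the execute bit is lost in upload/download
        mkdir -p ~/.m2/repository
        cp -r build-artifacts/.m2/repository/. ~/.m2/repository/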
1. Add explicit permissions (least privilege):
- contents: read
- checks: write (for test reports)
- pull-requests: write (for PR comments)
2. Extract duplicate build steps into shared 'build-deps' job:
- Eliminates duplication between spark-tests and spark-example
- Build artifacts are uploaded and reused by dependent jobs
- Reduces CI time and ensures consistency
3. Fix spark-example service startup verification:
- Match robust approach from spark-tests job
- Add explicit timeout and failure handling
- Verify all services (master, volume, filer)
- Include diagnostic logging on failure
- Prevents silent failures and obscure errors
These changes improve maintainability, security, and reliability
of the Spark integration test workflow.
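The permissions block is verbatim; the job split is sketched (build and upload details are assumptions):

    permissions:
      contents: read
      checks: write          # test reports
      pull-requests: write   # PR comments

    jobs:
      build-deps:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build weed binary and Java client
            run: ./build-deps.sh           # hypothetical: builds weed, installs JARs
          - uses: actions/upload-artifact@v4
            with:
              name: build-artifacts
              path: build-artifacts/
      spark-tests:
        needs: build-deps
        runs-on: ubuntu-latest
        steps:
          - uses: actions/download-artifact@v4
            with:
              name: build-artifacts
              path: build-artifacts
          # ...then run the containerized tests as above
      spark-example:
        needs: build-deps
        runs-on: ubuntu-latest
        steps:
          - uses: actions/download-artifact@v4
            with:
              name: build-artifacts
              path: build-artifacts
          # ...then run the host-based example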