
fix: use localhost publicUrl and -max=100 for host-based Spark tests

The previous fix enabled master-to-volume communication but broke client writes.

Problem:
- Volume server uses -ip=seaweedfs-volume (Docker hostname)
- Master can reach it ✓
- Spark tests run on HOST (not in Docker container)
- Host can't resolve 'seaweedfs-volume' → UnknownHostException ✗
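The failure mode above can be sketched with a small resolver check (a minimal illustration, not from the original change; the `seaweedfs-volume` hostname is from the compose file, and Python's `socket.gaierror` plays the role of Java's `UnknownHostException`):

```python
import socket

def can_resolve(host: str) -> bool:
    """True if the local resolver can map the hostname to an address."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

# 'localhost' resolves on any machine, but a compose-network alias such as
# 'seaweedfs-volume' is only resolvable from inside the Docker network.
print(can_resolve("localhost"))         # True
print(can_resolve("seaweedfs-volume"))  # typically False on the host
```

This is why a Spark test running on the host fails as soon as the master hands it a volume address that only exists as a Docker network alias.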

Solution:
- Keep -ip=seaweedfs-volume for master gRPC communication
- Change -publicUrl to 'localhost:8080' for host-based clients
- Change -max=0 to -max=100 (matches other integration tests)

Why -max=100:
- Pre-allocates volume capacity at startup
- Volumes ready immediately for writes
- Consistent with other test configurations
- More reliable than on-demand volume growth (-max=0)

This configuration allows:
- Master → Volume: seaweedfs-volume:18080 (Docker network)
- Clients → Volume: localhost:8080 (host network via port mapping)
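The split above works because the master advertises the volume's publicUrl to clients. A hedged sketch of what a host-side client sees (the JSON shape follows SeaweedFS's /dir/assign response; the fid and addresses are illustrative values, not taken from a live run):

```python
import json

# Illustrative assign response from the master; host clients must use
# publicUrl, which the host can reach through the 8080:8080 port mapping,
# rather than url, which is only resolvable inside the Docker network.
resp = json.loads(
    '{"fid":"3,01637037d6",'
    '"url":"seaweedfs-volume:8080",'
    '"publicUrl":"localhost:8080"}'
)
upload_host = resp["publicUrl"]
print(upload_host)  # localhost:8080
```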
pull/7526/head
chrislu committed 6 days ago
parent commit 150d084b3b

1 changed file: test/java/spark/docker-compose.yml (2 lines changed)

@@ -27,7 +27,7 @@ services:
     ports:
       - "8080:8080"
       - "18080:18080"
-    command: "volume -mserver=seaweedfs-master:9333 -ip=seaweedfs-volume -ip.bind=0.0.0.0 -port=8080 -port.grpc=18080 -publicUrl=127.0.0.1:8080 -max=0 -dir=/data -preStopSeconds=1"
+    command: "volume -mserver=seaweedfs-master:9333 -ip=seaweedfs-volume -ip.bind=0.0.0.0 -port=8080 -port.grpc=18080 -publicUrl=localhost:8080 -max=100 -dir=/data -preStopSeconds=1"
     volumes:
       - seaweedfs-volume-data:/data
     depends_on:
