Critical fix for multi-filer deployments.

**Problem:**
- S3ApiServer.WithFilerClient() was creating direct connections to ONE filer
- Used pb.WithGrpcClient() with a single filer address
- No failover: if that filer failed, ALL operations failed
- Caused test failures: "bucket directory not found"
- IAM integration tests failing with 500 Internal Server Error

**Root Cause:**
- WithFilerClient bypassed filerClient connection management
- Always connected to getFilerAddress() (the current filer only)
- Did not retry other filers on failure
- All getEntry(), updateEntry(), etc. operations failed if the current filer was down

**Solution:**
1. Added FilerClient.GetAllFilers() method
   - Returns a snapshot of all filer addresses
   - Thread-safe copy to avoid races
2. Implemented withFilerClientFailover()
   - Tries the current filer first (fast path)
   - On failure, tries all other filers
   - Logs successful failovers
   - Returns an error only if ALL filers fail
3. Updated WithFilerClient()
   - Uses filerClient for failover when available
   - Falls back to a direct connection for testing/initialization

**Impact:**
✅ All S3 operations now support multi-filer failover
✅ Bucket metadata reads work with any available filer
✅ Entry operations (getEntry, updateEntry) fail over automatically
✅ IAM tests should pass now
✅ Production-ready HA support

**Files Changed:**
- wdclient/filer_client.go: add GetAllFilers() method
- s3api/s3api_handlers.go: implement failover logic

This fixes the test failure where bucket operations failed when the primary filer was temporarily unavailable during cleanup.
2 changed files with 64 additions and 1 deletion