
Support multiple filers for S3 and IAM servers with automatic failover (#7550)

* Support multiple filers for S3 and IAM servers with automatic failover

This change adds support for multiple filer addresses in the 'weed s3' and 'weed iam' commands, enabling high availability through automatic failover.

Key changes:
- Updated S3ApiServerOption.Filer to Filers ([]pb.ServerAddress)
- Updated IamServerOption.Filer to Filers ([]pb.ServerAddress)
- Modified -filer flag to accept comma-separated addresses
- Added getFilerAddress() helper methods for backward compatibility
- Updated all filer client calls to support multiple addresses
- Uses pb.WithOneOfGrpcFilerClients for automatic failover

Usage:
  weed s3 -filer=localhost:8888,localhost:8889
  weed iam -filer=localhost:8888,localhost:8889

The underlying FilerClient already supported multiple filers with health
tracking and automatic failover - this change exposes that capability
through the command-line interface.
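
For reference, a minimal sketch of how the comma-separated -filer value becomes a
list of addresses and is used with the failover-aware helper (the callback body and
the grpcDialOption setup are illustrative only):

  filerAddresses := pb.ServerAddresses("localhost:8888,localhost:8889").ToAddresses()
  err := pb.WithOneOfGrpcFilerClients(false, filerAddresses, grpcDialOption,
      func(client filer_pb.SeaweedFilerClient) error {
          // The helper tries each filer in turn until one call succeeds.
          _, err := client.GetFilerConfiguration(context.Background(),
              &filer_pb.GetFilerConfigurationRequest{})
          return err
      })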

* Add filer discovery: treat initial filers as seeds and discover peers from master

Enhances FilerClient to automatically discover additional filers in the same
filer group by querying the master server. This allows users to specify just
a few seed filers, and the client will discover all other filers in the cluster.

Key changes to wdclient/FilerClient:
- Added MasterClient, FilerGroup, and DiscoveryInterval fields
- Added thread-safe filer list management with RWMutex
- Implemented discoverFilers() background goroutine
- Uses cluster.ListExistingPeerUpdates() to query master for filers
- Automatically adds newly discovered filers to the list
- Added Close() method to clean up discovery goroutine

New FilerClientOption fields:
- MasterClient: enables filer discovery from master
- FilerGroup: specifies which filer group to discover
- DiscoveryInterval: how often to refresh (default 5 minutes)

Usage example:
  masterClient := wdclient.NewMasterClient(...)
  filerClient := wdclient.NewFilerClient(
    []pb.ServerAddress{"localhost:8888"}, // seed filers
    grpcDialOption,
    dataCenter,
    &wdclient.FilerClientOption{
      MasterClient: masterClient,
      FilerGroup: "my-group",
    },
  )
  defer filerClient.Close()

The initial filers act as seeds - the client discovers and adds all other
filers in the same group from the master. Discovered filers are added
dynamically without removing existing ones (relying on health checks for
unavailable filers).
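
A rough sketch of the discovery loop (refreshFilerList, DiscoveryInterval, and the
Close-driven shutdown come from the description above; the stop-channel and field
names are assumptions, and the usual time import is assumed):

  func (fc *FilerClient) discoverFilers() {
      ticker := time.NewTicker(fc.discoveryInterval) // DiscoveryInterval, default 5 minutes
      defer ticker.Stop()
      for {
          select {
          case <-ticker.C:
              // Query the master (cluster.ListExistingPeerUpdates) and append
              // any newly discovered filers in the same filer group.
              fc.refreshFilerList()
          case <-fc.stopDiscovery:
              return
          }
      }
  }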

* Address PR review comments: implement full failover for IAM operations

Critical fixes based on code review feedback:

1. **IAM API Failover (Critical)**:
   - Replace pb.WithGrpcFilerClient with pb.WithOneOfGrpcFilerClients in:
     * GetS3ApiConfigurationFromFiler()
     * PutS3ApiConfigurationToFiler()
     * GetPolicies()
     * PutPolicies()
   - Now all IAM operations support automatic failover across multiple filers

2. **Validation Improvements**:
   - Add validation in NewIamApiServerWithStore() to require at least one filer
   - Add validation in NewS3ApiServerWithStore() to require at least one filer
   - Add warning log when no filers configured for credential store

3. **Error Logging**:
   - Circuit breaker now logs when config load fails instead of silently ignoring
   - Helps operators understand why circuit breaker limits aren't applied

4. **Code Quality**:
   - Use ToGrpcAddress() for filer address in credential store setup
   - More consistent with rest of codebase and future-proof

These changes ensure IAM operations have the same high availability guarantees
as S3 operations, completing the multi-filer failover implementation.

* Fix IAM manager initialization: remove code duplication, add TODO for HA

Addresses review comment on s3api_server.go:145

Changes:
- Remove duplicate code for getting first filer address
- Extract filerAddr variable once and reuse
- Add TODO comment documenting the HA limitation for IAM manager
- Document that loadIAMManagerFromConfig and NewS3IAMIntegration need
  updates to support multiple filers for full HA

Note: This is a known limitation when using filer-backed IAM stores.
The interfaces need to be updated to accept multiple filer addresses.
For now, documenting this limitation clearly.

* Document credential store HA limitation with TODO

Addresses review comment on auth_credentials.go:149

Changes:
- Add TODO comment documenting that SetFilerClient interface needs update
  for multi-filer support
- Add informative log message indicating HA limitation
- Document that this is a known limitation for filer-backed credential stores

The SetFilerClient interface currently only accepts a single filer address.
To properly support HA, the credential store interfaces need to be updated
to handle multiple filer addresses.

* Track current active filer in FilerClient for better HA

Add GetCurrentFiler() method to FilerClient that returns the currently
active filer based on the filerIndex which is updated on successful
operations. This provides better availability than always using the
first filer.

Changes:
- Add FilerClient.GetCurrentFiler() method that returns current active filer
- Update S3ApiServer.getFilerAddress() to use FilerClient's current filer
- Add fallback to first filer if FilerClient not yet initialized
- Document IAM limitation (doesn't have FilerClient access)

Benefits:
- Single-filer operations (URLs, ReadFilerConf, etc.) now use the
  currently active/healthy filer
- Better distribution and failover behavior
- FilerClient's round-robin and health tracking automatically
  determines which filer to use
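
For illustration, GetCurrentFiler is essentially an index lookup under a read lock
(a sketch assuming sync/atomic; the lock and field names are not taken from the code):

  func (fc *FilerClient) GetCurrentFiler() pb.ServerAddress {
      fc.filerAddressesLock.RLock()
      defer fc.filerAddressesLock.RUnlock()
      if len(fc.filerAddresses) == 0 {
          return ""
      }
      i := int(atomic.LoadInt32(&fc.filerIndex)) % len(fc.filerAddresses)
      return fc.filerAddresses[i]
  }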

* Document ReadFilerConf HA limitation in lifecycle handlers

Addresses review comment on s3api_bucket_handlers.go:880

Add comment documenting that ReadFilerConf uses the current active filer
from FilerClient (which is better than always using first filer), but
doesn't have built-in multi-filer failover.

Add TODO to update filer.ReadFilerConf to support multiple filers for
complete HA. For now, it uses the currently active/healthy filer tracked
by FilerClient which provides reasonable availability.

* Document multipart upload URL HA limitation

Addresses review comment on s3api_object_handlers_multipart.go:442

Add comment documenting that part upload URLs point to the current
active filer (tracked by FilerClient), which is better than always
using the first filer but still creates a potential point of failure
if that filer becomes unavailable during upload.

Suggest TODO solutions:
- Use virtual hostname/load balancer for filers
- Have S3 server proxy uploads to healthy filers

Current behavior provides reasonable availability by using the currently
active/healthy filer rather than being pinned to the first filer.

* Document multipart completion Location URL limitation

Addresses review comment on filer_multipart.go:187

Add comment documenting that the Location URL in CompleteMultipartUpload
response points to the current active filer (tracked by FilerClient).

Note that clients should ideally use the S3 API endpoint rather than
this direct URL. If direct access is attempted and the specific filer
is unavailable, the request will fail.

Current behavior uses the currently active/healthy filer rather than
being pinned to the first filer, providing better availability.

* Make credential store use current active filer for HA

Update FilerEtcStore to use a function that returns the current active
filer instead of a fixed address, enabling high availability.

Changes:
- Add SetFilerAddressFunc() method to FilerEtcStore
- Store uses filerAddressFunc instead of fixed filerGrpcAddress
- withFilerClient() calls the function to get current active filer
- Keep SetFilerClient() for backward compatibility (marked deprecated)
- Update S3ApiServer to pass FilerClient.GetCurrentFiler to store

Benefits:
- Credential store now uses currently active/healthy filer
- Automatic failover when filer becomes unavailable
- True HA for credential operations
- Backward compatible with old SetFilerClient interface

This addresses the credential store limitation: the store is no longer pinned
to the first filer and instead uses the current active filer tracked by FilerClient.
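
The wiring itself is a one-liner once the FilerClient exists (sketch; the store and
s3a variable names are illustrative):

  // In s3api_server.go, after the FilerClient is created:
  store.SetFilerAddressFunc(s3a.filerClient.GetCurrentFiler, s3a.option.GrpcDialOption)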

* Clarify multipart URL comments: filer address not used for uploads

Update comments to reflect that multipart upload URLs are not actually
used for upload traffic - uploads go directly to volume servers.

Key clarifications:
- genPartUploadUrl: Filer address is parsed out, only path is used
- CompleteMultipartUpload Location: Informational field per AWS S3 spec
- Actual uploads bypass filer proxy and go directly to volume servers

The filer address in these URLs is NOT a HA concern because:
1. Part uploads: URL is parsed for path, upload goes to volume servers
2. Location URL: Informational only, clients use S3 endpoint

This addresses the observation that S3 uploads don't go through filers,
only metadata operations do.

* Remove filer address from upload paths - pass path directly

Eliminate unnecessary filer address from upload URLs by passing file
paths directly instead of full URLs that get immediately parsed.

Changes:
- Rename genPartUploadUrl() → genPartUploadPath() (returns path only)
- Rename toFilerUrl() → toFilerPath() (returns path only)
- Update putToFiler() to accept filePath instead of uploadUrl
- Remove URL parsing code (no longer needed)
- Remove net/url import (no longer used)
- Keep old function names as deprecated wrappers for compatibility

Benefits:
- Cleaner code - no fake URL construction/parsing
- No dependency on filer address for internal operations
- More accurate naming (these are paths, not URLs)
- Eliminates confusion about HA concerns

This completely removes the filer address from upload operations - it was
never actually used for routing, only parsed for the path.

* Remove deprecated functions: use new path-based functions directly

Remove deprecated wrapper functions and update all callers to use the
new function names directly.

Removed:
- genPartUploadUrl() → all callers now use genPartUploadPath()
- toFilerUrl() → all callers now use toFilerPath()
- SetFilerClient() → removed along with fallback code

Updated:
- s3api_object_handlers_multipart.go: uploadUrl → filePath
- s3api_object_handlers_put.go: uploadUrl → filePath, versionUploadUrl → versionFilePath
- s3api_object_versioning.go: toFilerUrl → toFilerPath
- s3api_object_handlers_test.go: toFilerUrl → toFilerPath
- auth_credentials.go: removed SetFilerClient fallback
- filer_etc_store.go: removed deprecated SetFilerClient method

Benefits:
- Cleaner codebase with no deprecated functions
- All variable names accurately reflect that they're paths, not URLs
- Single interface for credential stores (SetFilerAddressFunc only)

All code now consistently uses the new path-based approach.

* Fix toFilerPath: remove URL escaping for raw file paths

The toFilerPath function should return raw file paths, not URL-escaped
paths. URL escaping was needed when the path was embedded in a URL
(old toFilerUrl), but now that we pass paths directly to putToFiler,
they should be unescaped.

This fixes S3 integration test failures:
- test_bucket_listv2_encoding_basic
- test_bucket_list_encoding_basic
- test_bucket_listv2_delimiter_whitespace
- test_bucket_list_delimiter_whitespace

The tests were failing because paths were double-encoded (escaped when
stored, then escaped again when listed), resulting in %252B instead of
%2B for '+' characters.

Root cause: When we removed URL parsing in putToFiler, we should have
also removed URL escaping in toFilerPath since paths are now used
directly without URL encoding/decoding.
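
A standalone illustration of the double-encoding effect, using only the standard
library:

  package main

  import (
      "fmt"
      "net/url"
  )

  func main() {
      // An already percent-encoded key segment, escaped a second time:
      fmt.Println(url.PathEscape("a%2Bb")) // prints "a%252Bb" because '%' becomes "%25"
  }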

* Add thread safety to FilerEtcStore and clarify credential store comments

Address review suggestions for better thread safety and code clarity:

1. **Thread Safety**: Add RWMutex to FilerEtcStore
   - Protects filerAddressFunc and grpcDialOption from concurrent access
   - Initialize() uses write lock when setting function
   - SetFilerAddressFunc() uses write lock
   - withFilerClient() uses read lock to get function and dial option
   - GetPolicies() uses read lock to check if configured

2. **Improved Error Messages**:
   - Prefix errors with "filer_etc:" for easier debugging
   - "filer address not configured" → "filer_etc: filer address function not configured"
   - "filer address is empty" → "filer_etc: filer address is empty"

3. **Clarified Comments**:
   - auth_credentials.go: Clarify that initial setup is temporary
   - Document that it's updated in s3api_server.go after FilerClient creation
   - Remove ambiguity about when FilerClient.GetCurrentFiler is used

Benefits:
- Safe for concurrent credential operations
- Clear error messages for debugging
- Explicit documentation of initialization order

* Enable filer discovery: pass master addresses to FilerClient

Fix two critical issues:

1. **Filer Discovery Not Working**: Master client was not being passed to
   FilerClient, so peer discovery couldn't work

2. **Credential Store Design**: Already uses FilerClient via GetCurrentFiler
   function - this is the correct design for HA

Changes:

**Command (s3.go):**
- Read master addresses from GetFilerConfiguration response
- Pass masterAddresses to S3ApiServerOption
- Log master addresses for visibility

**S3ApiServerOption:**
- Add Masters []pb.ServerAddress field for discovery

**S3ApiServer:**
- Create MasterClient from Masters when available
- Pass MasterClient + FilerGroup to FilerClient via options
- Enable discovery with 5-minute refresh interval
- Log whether discovery is enabled or disabled

**Credential Store:**
- Already correctly uses filerClient.GetCurrentFiler via function
- This provides HA without tight coupling to FilerClient struct
- Function-based design is clean and thread-safe

Discovery Flow:
1. S3 command reads filer config → gets masters + filer group
2. S3ApiServer creates MasterClient from masters
3. FilerClient uses MasterClient to query for peer filers
4. Background goroutine refreshes peer list every 5 minutes
5. Credential store uses GetCurrentFiler to get active filer

Now filer discovery actually works!

* Use S3 endpoint in multipart Location instead of filer address

* Add multi-filer failover to ReadFilerConf

* Address CodeRabbit review: fix buffer reuse and improve lock safety

Address two code review suggestions:

1. **Fix buffer reuse in ReadFilerConfFromFilers**:
   - Use local []byte data instead of shared buffer
   - Prevents partial data from failed attempts affecting successful reads
   - Creates fresh buffer inside callback for masterClient path
   - More robust to future changes in read helpers

2. **Improve lock safety in FilerClient**:
   - Add *WithHealth variants that accept health pointer
   - Get health pointer while holding lock, then release before calling
   - Eliminates potential for lock confusion (though no actual deadlock existed)
   - Clearer separation: lock for data access, atomics for health ops

Changes:
- ReadFilerConfFromFilers: var data []byte, create buf inside callback
- shouldSkipUnhealthyFilerWithHealth(health *filerHealth)
- recordFilerSuccessWithHealth(health *filerHealth)
- recordFilerFailureWithHealth(health *filerHealth)
- Keep old functions for backward compatibility (marked deprecated)
- Update LookupVolumeIds to use WithHealth variants

Benefits:
- More robust multi-filer configuration reading
- Clearer lock vs atomic operation boundaries
- No lock held during health checks (even though atomics don't block)
- Better code organization and maintainability
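
For illustration, a *WithHealth helper reduces to a couple of atomic operations once
the caller has resolved the health pointer under the lock (sketch; the filerHealth
field names are assumptions):

  func (fc *FilerClient) recordFilerFailureWithHealth(health *filerHealth) {
      atomic.AddInt32(&health.failureCount, 1)
      atomic.StoreInt64(&health.lastFailureTimeNs, time.Now().UnixNano())
  }

  func (fc *FilerClient) recordFilerSuccessWithHealth(health *filerHealth) {
      atomic.StoreInt32(&health.failureCount, 0)
  }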

* add constant

* Fix IAM manager and post policy to use current active filer

* Fix critical race condition and goroutine leak

* Update weed/s3api/filer_multipart.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Fix compilation error and address code review suggestions

Address remaining unresolved comments:

1. **Fix compilation error**: Add missing net/url import
   - filer_multipart.go used url.PathEscape without import
   - Added "net/url" to imports

2. **Fix Location URL formatting** (all 4 occurrences):
   - Add missing slash between bucket and key
   - Use url.PathEscape for bucket names
   - Use urlPathEscape for object keys
   - Handles special characters in bucket/key names
   - Before: http://host/bucketkey
   - After: http://host/bucket/key (properly escaped)

3. **Optimize discovery loop** (O(N*M) → O(N+M)):
   - Use map for existing filers (O(1) lookup)
   - Reduces time holding write lock
   - Better performance with many filers
   - Before: Nested loop for each discovered filer
   - After: Build map once, then O(1) lookups

Changes:
- filer_multipart.go: Import net/url, fix all Location URLs
- filer_client.go: Use map for efficient filer discovery

Benefits:
- Compiles successfully
- Proper URL encoding (handles spaces, special chars)
- Faster discovery with less lock contention
- Production-ready URL formatting
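
The merge step, roughly (a sketch assuming a filerAddresses slice with a parallel
filerHealth slice; names are illustrative):

  fc.filerAddressesLock.Lock()
  existing := make(map[pb.ServerAddress]struct{}, len(fc.filerAddresses))
  for _, addr := range fc.filerAddresses {
      existing[addr] = struct{}{}
  }
  for _, addr := range discovered {
      if _, found := existing[addr]; !found {
          fc.filerAddresses = append(fc.filerAddresses, addr)
          fc.filerHealth = append(fc.filerHealth, &filerHealth{})
      }
  }
  fc.filerAddressesLock.Unlock()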

* Fix race conditions and make Close() idempotent

Address CodeRabbit review #3512078995:

1. **Critical: Fix unsynchronized read in error message**
   - Line 584 read len(fc.filerAddresses) without lock
   - Race with refreshFilerList appending to slice
   - Fixed: Take RLock to read length safely
   - Prevents race detector warnings

2. **Important: Make Close() idempotent**
   - Closing already-closed channel panics
   - Can happen with layered cleanup in shutdown paths
   - Fixed: Use sync.Once to ensure single close
   - Safe to call Close() multiple times now

3. **Nitpick: Add warning for empty filer address**
   - getFilerAddress() can return empty string
   - Helps diagnose unexpected state
   - Added: Warning log when no filers available

4. **Nitpick: Guard deprecated index-based helpers**
   - shouldSkipUnhealthyFiler, recordFilerSuccess/Failure
   - Accessed filerHealth without lock (races with discovery)
   - Fixed: Take RLock and check bounds before array access
   - Prevents index out of bounds and races

Changes:
- filer_client.go:
  - Add closeDiscoveryOnce sync.Once field
  - Use Do() in Close() for idempotent channel close
  - Add RLock guards to deprecated index-based helpers
  - Add bounds checking to prevent panics
  - Synchronized read of filerAddresses length in error

- s3api_server.go:
  - Add warning log when getFilerAddress returns empty

Benefits:
- No race conditions (passes race detector)
- No panic on double-close
- Better error diagnostics
- Safe with discovery enabled
- Production-hardened shutdown logic
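
The idempotent shutdown boils down to sync.Once guarding the channel close (sketch;
closeDiscoveryOnce is from the notes above, the channel name is an assumption):

  func (fc *FilerClient) Close() {
      fc.closeDiscoveryOnce.Do(func() {
          if fc.stopDiscovery != nil {
              close(fc.stopDiscovery)
          }
      })
  }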

* Fix hardcoded http scheme and add panic recovery

Address CodeRabbit review #3512114811:

1. **Major: Fix hardcoded http:// scheme in Location URLs**
   - Location URLs always used http:// regardless of client connection
   - HTTPS clients got http:// URLs (incorrect)
   - Fixed: Detect scheme from request
   - Check X-Forwarded-Proto header (for proxies) first
   - Check r.TLS != nil for direct HTTPS
   - Fallback to http for plain connections
   - Applied to all 4 CompleteMultipartUploadResult locations

2. **Major: Add panic recovery to discovery goroutine**
   - Long-running background goroutine could crash entire process
   - Panic in refreshFilerList would terminate program
   - Fixed: Add defer recover() with error logging
   - Goroutine failures now logged, not fatal

3. **Note: Close() idempotency already implemented**
   - Review flagged as duplicate issue
   - Already fixed in commit 3d7a65c7e
   - sync.Once (closeDiscoveryOnce) prevents double-close panic
   - Safe to call Close() multiple times

Changes:
- filer_multipart.go:
  - Add getRequestScheme() helper function
  - Update all 4 Location URLs to use dynamic scheme
  - Format: scheme://host/bucket/key (was: http://...)

- filer_client.go:
  - Add panic recovery to discoverFilers()
  - Log panics instead of crashing

Benefits:
- Correct scheme (https/http) in Location URLs
- Works behind proxies (X-Forwarded-Proto)
- No process crashes from discovery failures
- Production-hardened background goroutine
- Proper AWS S3 API compliance

* Fix S3 WithFilerClient to use filer failover

Critical fix for multi-filer deployments:

**Problem:**
- S3ApiServer.WithFilerClient() was creating direct connections to ONE filer
- Used pb.WithGrpcClient() with single filer address
- No failover - if that filer failed, ALL operations failed
- Caused test failures: "bucket directory not found"
- IAM Integration Tests failing with 500 Internal Error

**Root Cause:**
- WithFilerClient bypassed filerClient connection management
- Always connected to getFilerAddress() (current filer only)
- Didn't retry other filers on failure
- All getEntry(), updateEntry(), etc. operations failed if current filer down

**Solution:**
1. Added FilerClient.GetAllFilers() method
   - Returns snapshot of all filer addresses
   - Thread-safe copy to avoid races

2. Implemented withFilerClientFailover()
   - Try current filer first (fast path)
   - On failure, try all other filers
   - Log successful failover
   - Return error only if ALL filers fail

3. Updated WithFilerClient()
   - Use filerClient for failover when available
   - Fallback to direct connection for testing/init

**Impact:**
- All S3 operations now support multi-filer failover
- Bucket metadata reads work with any available filer
- Entry operations (getEntry, updateEntry) failover automatically
- IAM tests should pass now
- Production-ready HA support

**Files Changed:**
- wdclient/filer_client.go: Add GetAllFilers() method
- s3api/s3api_handlers.go: Implement failover logic

This fixes the test failure where bucket operations failed when
the primary filer was temporarily unavailable during cleanup.
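
GetAllFilers is essentially a copy made under the read lock (sketch; the lock name is
an assumption):

  func (fc *FilerClient) GetAllFilers() []pb.ServerAddress {
      fc.filerAddressesLock.RLock()
      defer fc.filerAddressesLock.RUnlock()
      filers := make([]pb.ServerAddress, len(fc.filerAddresses))
      copy(filers, fc.filerAddresses)
      return filers
  }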

* Update current filer after successful failover

Address code review: https://github.com/seaweedfs/seaweedfs/pull/7550#pullrequestreview-3512223723

**Issue:**
After successful failover, the current filer index was not updated.
This meant every subsequent request would still try the (potentially
unhealthy) original filer first, then failover again.

**Solution:**

1. Added FilerClient.SetCurrentFiler(addr) method:
   - Finds the index of specified filer address
   - Atomically updates filerIndex to point to it
   - Thread-safe with RLock

2. Call SetCurrentFiler after successful failover:
   - Update happens immediately after successful connection
   - Future requests start with the known-healthy filer
   - Reduces unnecessary failover attempts

**Benefits:**
- Subsequent requests use healthy filer directly
- No repeated failover to same unhealthy filer
- Better performance - fast path hits healthy filer
- Comment now matches actual behavior
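
A sketch of SetCurrentFiler following the description above (assumes sync/atomic;
lock and field names are illustrative):

  func (fc *FilerClient) SetCurrentFiler(addr pb.ServerAddress) {
      fc.filerAddressesLock.RLock()
      defer fc.filerAddressesLock.RUnlock()
      for i, a := range fc.filerAddresses {
          if a == addr {
              atomic.StoreInt32(&fc.filerIndex, int32(i))
              return
          }
      }
  }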

* Integrate health tracking with S3 failover

Address code review suggestion to leverage existing health tracking
instead of simple iteration through all filers.

**Changes:**

1. Added address-based health tracking API to FilerClient:
   - ShouldSkipUnhealthyFiler(addr) - check circuit breaker
   - RecordFilerSuccess(addr) - reset failure count
   - RecordFilerFailure(addr) - increment failure count

   These methods find the filer by address and delegate to
   existing *WithHealth methods for actual health management.

2. Updated withFilerClientFailover to use health tracking:
   - Record success/failure for every filer attempt
   - Skip unhealthy filers during failover (circuit breaker)
   - Only try filers that haven't exceeded failure threshold
   - Automatic re-check after reset timeout

**Benefits:**

- Circuit breaker prevents wasting time on known-bad filers
- Health tracking shared across all operations
- Automatic recovery when unhealthy filers come back
- Reduced latency - skip filers in failure state
- Better visibility with health metrics

**Behavior:**

- Try current filer first (fast path)
- If fails, record failure and try other HEALTHY filers
- Skip filers with failureCount >= threshold (default 3)
- Re-check unhealthy filers after resetTimeout (default 30s)
- Record all successes/failures for health tracking
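
For example, the address-based circuit-breaker check is just a lookup that delegates
to the existing *WithHealth helper (sketch; names are illustrative):

  func (fc *FilerClient) ShouldSkipUnhealthyFiler(addr pb.ServerAddress) bool {
      fc.filerAddressesLock.RLock()
      defer fc.filerAddressesLock.RUnlock()
      for i, a := range fc.filerAddresses {
          if a == addr {
              return fc.shouldSkipUnhealthyFilerWithHealth(fc.filerHealth[i])
          }
      }
      return false // unknown address: do not skip it
  }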

* Update weed/wdclient/filer_client.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* Enable filer discovery with empty filerGroup

Empty filerGroup is a valid value representing the default group.
The master client can discover filers even when filerGroup is empty.

**Change:**
- Remove the filerGroup != "" check in NewFilerClient
- Keep only masterClient != nil check
- Empty string will be passed to ListClusterNodes API as-is

This enables filer discovery to work with the default group.

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Branch: pull/7556/head
Author: Chris Lu, committed via GitHub
Commit: 5075381060
21 files changed (lines changed per file):

  weed/cluster/cluster.go (1)
  weed/command/iam.go (27)
  weed/command/s3.go (30)
  weed/credential/filer_etc/filer_etc_identity.go (4)
  weed/credential/filer_etc/filer_etc_policy.go (12)
  weed/credential/filer_etc/filer_etc_store.go (41)
  weed/filer/filer_conf.go (28)
  weed/iamapi/iamapi_server.go (16)
  weed/s3api/auth_credentials.go (21)
  weed/s3api/filer_multipart.go (25)
  weed/s3api/s3api_bucket_handlers.go (7)
  weed/s3api/s3api_circuit_breaker.go (4)
  weed/s3api/s3api_handlers.go (68)
  weed/s3api/s3api_object_handlers.go (10)
  weed/s3api/s3api_object_handlers_multipart.go (12)
  weed/s3api/s3api_object_handlers_postpolicy.go (4)
  weed/s3api/s3api_object_handlers_put.go (37)
  weed/s3api/s3api_object_handlers_test.go (2)
  weed/s3api/s3api_object_versioning.go (6)
  weed/s3api/s3api_server.go (78)
  weed/wdclient/filer_client.go (320)

1
weed/cluster/cluster.go

@ -13,6 +13,7 @@ const (
VolumeServerType = "volumeServer"
FilerType = "filer"
BrokerType = "broker"
S3Type = "s3"
)
type FilerGroupName string

27
weed/command/iam.go

@ -15,6 +15,7 @@ import (
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
"github.com/seaweedfs/seaweedfs/weed/security"
"github.com/seaweedfs/seaweedfs/weed/util"
"github.com/seaweedfs/seaweedfs/weed/util/grace"
// Import credential stores to register them
_ "github.com/seaweedfs/seaweedfs/weed/credential/filer_etc"
@ -35,16 +36,18 @@ type IamOptions struct {
func init() {
cmdIam.Run = runIam // break init cycle
iamStandaloneOptions.filer = cmdIam.Flag.String("filer", "localhost:8888", "filer server address")
iamStandaloneOptions.filer = cmdIam.Flag.String("filer", "localhost:8888", "comma-separated filer server addresses for high availability")
iamStandaloneOptions.masters = cmdIam.Flag.String("master", "localhost:9333", "comma-separated master servers")
iamStandaloneOptions.ip = cmdIam.Flag.String("ip", util.DetectedHostAddress(), "iam server http listen ip address")
iamStandaloneOptions.port = cmdIam.Flag.Int("port", 8111, "iam server http listen port")
}
var cmdIam = &Command{
UsageLine: "iam [-port=8111] [-filer=<ip:port>] [-master=<ip:port>,<ip:port>]",
UsageLine: "iam [-port=8111] [-filer=<ip:port>[,<ip:port>]...] [-master=<ip:port>,<ip:port>]",
Short: "start a iam API compatible server",
Long: "start a iam API compatible server.",
Long: `start a iam API compatible server.
Multiple filer addresses can be specified for high availability, separated by commas.`,
}
func runIam(cmd *Command, args []string) bool {
@ -52,24 +55,24 @@ func runIam(cmd *Command, args []string) bool {
}
func (iamopt *IamOptions) startIamServer() bool {
filerAddress := pb.ServerAddress(*iamopt.filer)
filerAddresses := pb.ServerAddresses(*iamopt.filer).ToAddresses()
util.LoadSecurityConfiguration()
grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
for {
err := pb.WithGrpcFilerClient(false, 0, filerAddress, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
err := pb.WithOneOfGrpcFilerClients(false, filerAddresses, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
resp, err := client.GetFilerConfiguration(context.Background(), &filer_pb.GetFilerConfigurationRequest{})
if err != nil {
return fmt.Errorf("get filer %s configuration: %v", filerAddress, err)
return fmt.Errorf("get filer configuration: %v", err)
}
glog.V(0).Infof("IAM read filer configuration: %s", resp)
return nil
})
if err != nil {
glog.V(0).Infof("wait to connect to filer %s grpc address %s", *iamopt.filer, filerAddress.ToGrpcAddress())
glog.V(0).Infof("wait to connect to filers %v", filerAddresses)
time.Sleep(time.Second)
} else {
glog.V(0).Infof("connected to filer %s grpc address %s", *iamopt.filer, filerAddress.ToGrpcAddress())
glog.V(0).Infof("connected to filers %v", filerAddresses)
break
}
}
@ -78,7 +81,7 @@ func (iamopt *IamOptions) startIamServer() bool {
router := mux.NewRouter().SkipClean(true)
iamApiServer, iamApiServer_err := iamapi.NewIamApiServer(router, &iamapi.IamServerOption{
Masters: masters,
Filer: filerAddress,
Filers: filerAddresses,
Port: *iamopt.port,
GrpcDialOption: grpcDialOption,
})
@ -87,8 +90,10 @@ func (iamopt *IamOptions) startIamServer() bool {
glog.Fatalf("IAM API Server startup error: %v", iamApiServer_err)
}
// Ensure cleanup on shutdown
defer iamApiServer.Shutdown()
// Register shutdown handler to prevent goroutine leak
grace.OnInterrupt(func() {
iamApiServer.Shutdown()
})
listenAddress := fmt.Sprintf(":%d", *iamopt.port)
iamApiListener, iamApiLocalListener, err := util.NewIpAndLocalListeners(*iamopt.ip, *iamopt.port, time.Duration(10)*time.Second)

30
weed/command/s3.go

@ -61,7 +61,7 @@ type S3Options struct {
func init() {
cmdS3.Run = runS3 // break init cycle
s3StandaloneOptions.filer = cmdS3.Flag.String("filer", "localhost:8888", "filer server address")
s3StandaloneOptions.filer = cmdS3.Flag.String("filer", "localhost:8888", "comma-separated filer server addresses for high availability")
s3StandaloneOptions.bindIp = cmdS3.Flag.String("ip.bind", "", "ip address to bind to. Default to localhost.")
s3StandaloneOptions.port = cmdS3.Flag.Int("port", 8333, "s3 server http listen port")
s3StandaloneOptions.portHttps = cmdS3.Flag.Int("port.https", 0, "s3 server https listen port")
@ -86,9 +86,12 @@ func init() {
}
var cmdS3 = &Command{
UsageLine: "s3 [-port=8333] [-filer=<ip:port>] [-config=</path/to/config.json>]",
Short: "start a s3 API compatible server that is backed by a filer",
Long: `start a s3 API compatible server that is backed by a filer.
UsageLine: "s3 [-port=8333] [-filer=<ip:port>[,<ip:port>]...] [-config=</path/to/config.json>]",
Short: "start a s3 API compatible server that is backed by filer(s)",
Long: `start a s3 API compatible server that is backed by filer(s).
Multiple filer addresses can be specified for high availability, separated by commas.
The S3 server will automatically failover between filers if one becomes unavailable.
By default, you can use any access key and secret key to access the S3 APIs.
To enable credential based access, create a config.json file similar to this:
@ -200,10 +203,11 @@ func (s3opt *S3Options) GetCertificateWithUpdate(*tls.ClientHelloInfo) (*tls.Cer
func (s3opt *S3Options) startS3Server() bool {
filerAddress := pb.ServerAddress(*s3opt.filer)
filerAddresses := pb.ServerAddresses(*s3opt.filer).ToAddresses()
filerBucketsPath := "/buckets"
filerGroup := ""
var masterAddresses []pb.ServerAddress
grpcDialOption := security.LoadClientTLS(util.GetViper(), "grpc.client")
@ -212,22 +216,27 @@ func (s3opt *S3Options) startS3Server() bool {
var metricsIntervalSec int
for {
err := pb.WithGrpcFilerClient(false, 0, filerAddress, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
err := pb.WithOneOfGrpcFilerClients(false, filerAddresses, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
resp, err := client.GetFilerConfiguration(context.Background(), &filer_pb.GetFilerConfigurationRequest{})
if err != nil {
return fmt.Errorf("get filer %s configuration: %v", filerAddress, err)
return fmt.Errorf("get filer configuration: %v", err)
}
filerBucketsPath = resp.DirBuckets
filerGroup = resp.FilerGroup
// Get master addresses for filer discovery
masterAddresses = pb.ServerAddresses(strings.Join(resp.Masters, ",")).ToAddresses()
metricsAddress, metricsIntervalSec = resp.MetricsAddress, int(resp.MetricsIntervalSec)
glog.V(0).Infof("S3 read filer buckets dir: %s", filerBucketsPath)
if len(masterAddresses) > 0 {
glog.V(0).Infof("S3 read master addresses for discovery: %v", masterAddresses)
}
return nil
})
if err != nil {
glog.V(0).Infof("wait to connect to filer %s grpc address %s", *s3opt.filer, filerAddress.ToGrpcAddress())
glog.V(0).Infof("wait to connect to filers %v grpc address", filerAddresses)
time.Sleep(time.Second)
} else {
glog.V(0).Infof("connected to filer %s grpc address %s", *s3opt.filer, filerAddress.ToGrpcAddress())
glog.V(0).Infof("connected to filers %v", filerAddresses)
break
}
}
@ -252,7 +261,8 @@ func (s3opt *S3Options) startS3Server() bool {
}
s3ApiServer, s3ApiServer_err = s3api.NewS3ApiServer(router, &s3api.S3ApiServerOption{
Filer: filerAddress,
Filers: filerAddresses,
Masters: masterAddresses,
Port: *s3opt.port,
Config: *s3opt.config,
DomainName: *s3opt.domainName,

4
weed/credential/filer_etc/filer_etc_identity.go

@ -15,8 +15,8 @@ import (
func (store *FilerEtcStore) LoadConfiguration(ctx context.Context) (*iam_pb.S3ApiConfiguration, error) {
s3cfg := &iam_pb.S3ApiConfiguration{}
glog.V(1).Infof("Loading IAM configuration from %s/%s (filer: %s)",
filer.IamConfigDirectory, filer.IamIdentityFile, store.filerGrpcAddress)
glog.V(1).Infof("Loading IAM configuration from %s/%s (using current active filer)",
filer.IamConfigDirectory, filer.IamIdentityFile)
err := store.withFilerClient(func(client filer_pb.SeaweedFilerClient) error {
// Use ReadInsideFiler instead of ReadEntry since identity.json is small

12
weed/credential/filer_etc/filer_etc_policy.go

@ -20,15 +20,19 @@ func (store *FilerEtcStore) GetPolicies(ctx context.Context) (map[string]policy_
Policies: make(map[string]policy_engine.PolicyDocument),
}
// Check if filer client is configured
if store.filerGrpcAddress == "" {
// Check if filer client is configured (with mutex protection)
store.mu.RLock()
configured := store.filerAddressFunc != nil
store.mu.RUnlock()
if !configured {
glog.V(1).Infof("Filer client not configured for policy retrieval, returning empty policies")
// Return empty policies if filer client is not configured
return policiesCollection.Policies, nil
}
glog.V(2).Infof("Loading IAM policies from %s/%s (filer: %s)",
filer.IamConfigDirectory, filer.IamPoliciesFile, store.filerGrpcAddress)
glog.V(2).Infof("Loading IAM policies from %s/%s (using current active filer)",
filer.IamConfigDirectory, filer.IamPoliciesFile)
err := store.withFilerClient(func(client filer_pb.SeaweedFilerClient) error {
// Use ReadInsideFiler instead of ReadEntry since policies.json is small

41
weed/credential/filer_etc/filer_etc_store.go

@ -2,6 +2,7 @@ package filer_etc
import (
"fmt"
"sync"
"github.com/seaweedfs/seaweedfs/weed/credential"
"github.com/seaweedfs/seaweedfs/weed/pb"
@ -16,8 +17,9 @@ func init() {
// FilerEtcStore implements CredentialStore using SeaweedFS filer for storage
type FilerEtcStore struct {
filerGrpcAddress string
filerAddressFunc func() pb.ServerAddress // Function to get current active filer
grpcDialOption grpc.DialOption
mu sync.RWMutex // Protects filerAddressFunc and grpcDialOption
}
func (store *FilerEtcStore) GetName() credential.CredentialStoreTypeName {
@ -27,27 +29,48 @@ func (store *FilerEtcStore) GetName() credential.CredentialStoreTypeName {
func (store *FilerEtcStore) Initialize(configuration util.Configuration, prefix string) error {
// Handle nil configuration gracefully
if configuration != nil {
store.filerGrpcAddress = configuration.GetString(prefix + "filer")
filerAddr := configuration.GetString(prefix + "filer")
if filerAddr != "" {
// Static configuration - use fixed address
store.mu.Lock()
store.filerAddressFunc = func() pb.ServerAddress {
return pb.ServerAddress(filerAddr)
}
store.mu.Unlock()
}
// TODO: Initialize grpcDialOption based on configuration
}
// Note: filerGrpcAddress can be set later via SetFilerClient method
// Note: filerAddressFunc can be set later via SetFilerAddressFunc method
return nil
}
// SetFilerClient sets the filer client details for the file store
func (store *FilerEtcStore) SetFilerClient(filerAddress string, grpcDialOption grpc.DialOption) {
store.filerGrpcAddress = filerAddress
// SetFilerAddressFunc sets a function that returns the current active filer address
// This enables high availability by using the currently active filer
func (store *FilerEtcStore) SetFilerAddressFunc(getFiler func() pb.ServerAddress, grpcDialOption grpc.DialOption) {
store.mu.Lock()
defer store.mu.Unlock()
store.filerAddressFunc = getFiler
store.grpcDialOption = grpcDialOption
}
// withFilerClient executes a function with a filer client
func (store *FilerEtcStore) withFilerClient(fn func(client filer_pb.SeaweedFilerClient) error) error {
if store.filerGrpcAddress == "" {
return fmt.Errorf("filer address not configured")
store.mu.RLock()
if store.filerAddressFunc == nil {
store.mu.RUnlock()
return fmt.Errorf("filer_etc: filer address function not configured")
}
filerAddress := store.filerAddressFunc()
dialOption := store.grpcDialOption
store.mu.RUnlock()
if filerAddress == "" {
return fmt.Errorf("filer_etc: filer address is empty")
}
// Use the pb.WithGrpcFilerClient helper similar to existing code
return pb.WithGrpcFilerClient(false, 0, pb.ServerAddress(store.filerGrpcAddress), store.grpcDialOption, fn)
return pb.WithGrpcFilerClient(false, 0, filerAddress, dialOption, fn)
}
func (store *FilerEtcStore) Shutdown() {

28
weed/filer/filer_conf.go

@ -32,22 +32,34 @@ type FilerConf struct {
}
func ReadFilerConf(filerGrpcAddress pb.ServerAddress, grpcDialOption grpc.DialOption, masterClient *wdclient.MasterClient) (*FilerConf, error) {
var buf bytes.Buffer
if err := pb.WithGrpcFilerClient(false, 0, filerGrpcAddress, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
return ReadFilerConfFromFilers([]pb.ServerAddress{filerGrpcAddress}, grpcDialOption, masterClient)
}
// ReadFilerConfFromFilers reads filer configuration with multi-filer failover support
func ReadFilerConfFromFilers(filerGrpcAddresses []pb.ServerAddress, grpcDialOption grpc.DialOption, masterClient *wdclient.MasterClient) (*FilerConf, error) {
var data []byte
if err := pb.WithOneOfGrpcFilerClients(false, filerGrpcAddresses, grpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
if masterClient != nil {
return ReadEntry(masterClient, client, DirectoryEtcSeaweedFS, FilerConfName, &buf)
} else {
content, err := ReadInsideFiler(client, DirectoryEtcSeaweedFS, FilerConfName)
buf = *bytes.NewBuffer(content)
var buf bytes.Buffer
if err := ReadEntry(masterClient, client, DirectoryEtcSeaweedFS, FilerConfName, &buf); err != nil {
return err
}
data = buf.Bytes()
return nil
}
content, err := ReadInsideFiler(client, DirectoryEtcSeaweedFS, FilerConfName)
if err != nil {
return err
}
data = content
return nil
}); err != nil && err != filer_pb.ErrNotFound {
return nil, fmt.Errorf("read %s/%s: %v", DirectoryEtcSeaweedFS, FilerConfName, err)
}
fc := NewFilerConf()
if buf.Len() > 0 {
if err := fc.LoadFromBytes(buf.Bytes()); err != nil {
if len(data) > 0 {
if err := fc.LoadFromBytes(data); err != nil {
return nil, fmt.Errorf("parse %s/%s: %v", DirectoryEtcSeaweedFS, FilerConfName, err)
}
}

16
weed/iamapi/iamapi_server.go

@ -40,7 +40,7 @@ type IamS3ApiConfigure struct {
type IamServerOption struct {
Masters map[string]pb.ServerAddress
Filer pb.ServerAddress
Filers []pb.ServerAddress
Port int
GrpcDialOption grpc.DialOption
}
@ -60,6 +60,10 @@ func NewIamApiServer(router *mux.Router, option *IamServerOption) (iamApiServer
}
func NewIamApiServerWithStore(router *mux.Router, option *IamServerOption, explicitStore string) (iamApiServer *IamApiServer, err error) {
if len(option.Filers) == 0 {
return nil, fmt.Errorf("at least one filer address is required")
}
masterClient := wdclient.NewMasterClient(option.GrpcDialOption, "", "iam", "", "", "", *pb.NewServiceDiscoveryFromMap(option.Masters))
// Create a cancellable context for the master client connection
@ -80,7 +84,7 @@ func NewIamApiServerWithStore(router *mux.Router, option *IamServerOption, expli
s3ApiConfigure = configure
s3Option := s3api.S3ApiServerOption{
Filer: option.Filer,
Filers: option.Filers,
GrpcDialOption: option.GrpcDialOption,
}
@ -149,7 +153,7 @@ func (iama *IamS3ApiConfigure) PutS3ApiConfigurationToCredentialManager(s3cfg *i
func (iama *IamS3ApiConfigure) GetS3ApiConfigurationFromFiler(s3cfg *iam_pb.S3ApiConfiguration) (err error) {
var buf bytes.Buffer
err = pb.WithGrpcFilerClient(false, 0, iama.option.Filer, iama.option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
err = pb.WithOneOfGrpcFilerClients(false, iama.option.Filers, iama.option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
if err = filer.ReadEntry(iama.masterClient, client, filer.IamConfigDirectory, filer.IamIdentityFile, &buf); err != nil {
return err
}
@ -171,7 +175,7 @@ func (iama *IamS3ApiConfigure) PutS3ApiConfigurationToFiler(s3cfg *iam_pb.S3ApiC
if err := filer.ProtoToText(&buf, s3cfg); err != nil {
return fmt.Errorf("ProtoToText: %s", err)
}
return pb.WithGrpcFilerClient(false, 0, iama.option.Filer, iama.option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
return pb.WithOneOfGrpcFilerClients(false, iama.option.Filers, iama.option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
err = util.Retry("saveIamIdentity", func() error {
return filer.SaveInsideFiler(client, filer.IamConfigDirectory, filer.IamIdentityFile, buf.Bytes())
})
@ -184,7 +188,7 @@ func (iama *IamS3ApiConfigure) PutS3ApiConfigurationToFiler(s3cfg *iam_pb.S3ApiC
func (iama *IamS3ApiConfigure) GetPolicies(policies *Policies) (err error) {
var buf bytes.Buffer
err = pb.WithGrpcFilerClient(false, 0, iama.option.Filer, iama.option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
err = pb.WithOneOfGrpcFilerClients(false, iama.option.Filers, iama.option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
if err = filer.ReadEntry(iama.masterClient, client, filer.IamConfigDirectory, filer.IamPoliciesFile, &buf); err != nil {
return err
}
@ -208,7 +212,7 @@ func (iama *IamS3ApiConfigure) PutPolicies(policies *Policies) (err error) {
if b, err = json.Marshal(policies); err != nil {
return err
}
return pb.WithGrpcFilerClient(false, 0, iama.option.Filer, iama.option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
return pb.WithOneOfGrpcFilerClients(false, iama.option.Filers, iama.option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
if err := filer.SaveInsideFiler(client, filer.IamConfigDirectory, filer.IamPoliciesFile, b); err != nil {
return err
}

21
weed/s3api/auth_credentials.go

@ -14,6 +14,7 @@ import (
"github.com/seaweedfs/seaweedfs/weed/filer"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/kms"
"github.com/seaweedfs/seaweedfs/weed/pb"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
"github.com/seaweedfs/seaweedfs/weed/pb/iam_pb"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
@ -136,12 +137,24 @@ func NewIdentityAccessManagementWithStore(option *S3ApiServerOption, explicitSto
glog.Fatalf("failed to initialize credential manager: %v", err)
}
// For stores that need filer client details, set them
// For stores that need filer client details, set them temporarily
// This will be updated to use FilerClient's GetCurrentFiler after FilerClient is created
if store := credentialManager.GetStore(); store != nil {
if filerClientSetter, ok := store.(interface {
SetFilerClient(string, grpc.DialOption)
if filerFuncSetter, ok := store.(interface {
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
}); ok {
filerClientSetter.SetFilerClient(string(option.Filer), option.GrpcDialOption)
// Temporary setup: use first filer until FilerClient is available
// See s3api_server.go where this is updated to FilerClient.GetCurrentFiler
if len(option.Filers) > 0 {
getFiler := func() pb.ServerAddress {
if len(option.Filers) > 0 {
return option.Filers[0]
}
return ""
}
filerFuncSetter.SetFilerAddressFunc(getFiler, option.GrpcDialOption)
glog.V(1).Infof("Credential store configured with temporary filer function (will be updated after FilerClient creation)")
}
}
}

25
weed/s3api/filer_multipart.go

@ -10,6 +10,7 @@ import (
"errors"
"fmt"
"math"
"net/url"
"path/filepath"
"slices"
"sort"
@ -42,6 +43,20 @@ type InitiateMultipartUploadResult struct {
s3.CreateMultipartUploadOutput
}
// getRequestScheme determines the URL scheme (http or https) from the request
// Checks X-Forwarded-Proto header first (for proxies), then TLS state
func getRequestScheme(r *http.Request) string {
// Check X-Forwarded-Proto header for proxied requests
if proto := r.Header.Get("X-Forwarded-Proto"); proto != "" {
return proto
}
// Check if connection is TLS
if r.TLS != nil {
return "https"
}
return "http"
}
func (s3a *S3ApiServer) createMultipartUpload(r *http.Request, input *s3.CreateMultipartUploadInput) (output *InitiateMultipartUploadResult, code s3err.ErrorCode) {
glog.V(2).Infof("createMultipartUpload input %v", input)
@ -183,8 +198,10 @@ func (s3a *S3ApiServer) completeMultipartUpload(r *http.Request, input *s3.Compl
entryName, dirName := s3a.getEntryNameAndDir(input)
if entry, _ := s3a.getEntry(dirName, entryName); entry != nil && entry.Extended != nil {
if uploadId, ok := entry.Extended[s3_constants.SeaweedFSUploadId]; ok && *input.UploadId == string(uploadId) {
// Location uses the S3 endpoint that the client connected to
// Format: scheme://s3-endpoint/bucket/object (following AWS S3 API)
return &CompleteMultipartUploadResult{
Location: aws.String(fmt.Sprintf("http://%s%s/%s", s3a.option.Filer.ToHttpAddress(), urlEscapeObject(dirName), urlPathEscape(entryName))),
Location: aws.String(fmt.Sprintf("%s://%s/%s/%s", getRequestScheme(r), r.Host, url.PathEscape(*input.Bucket), urlPathEscape(*input.Key))),
Bucket: input.Bucket,
ETag: aws.String("\"" + filer.ETagChunks(entry.GetChunks()) + "\""),
Key: objectKey(input.Key),
@ -401,7 +418,7 @@ func (s3a *S3ApiServer) completeMultipartUpload(r *http.Request, input *s3.Compl
// The latest version information is tracked in the .versions directory metadata
output = &CompleteMultipartUploadResult{
Location: aws.String(fmt.Sprintf("http://%s%s/%s", s3a.option.Filer.ToHttpAddress(), urlEscapeObject(dirName), urlPathEscape(entryName))),
Location: aws.String(fmt.Sprintf("%s://%s/%s/%s", getRequestScheme(r), r.Host, url.PathEscape(*input.Bucket), urlPathEscape(*input.Key))),
Bucket: input.Bucket,
ETag: aws.String("\"" + filer.ETagChunks(finalParts) + "\""),
Key: objectKey(input.Key),
@ -454,7 +471,7 @@ func (s3a *S3ApiServer) completeMultipartUpload(r *http.Request, input *s3.Compl
// Note: Suspended versioning should NOT return VersionId field according to AWS S3 spec
output = &CompleteMultipartUploadResult{
Location: aws.String(fmt.Sprintf("http://%s%s/%s", s3a.option.Filer.ToHttpAddress(), urlEscapeObject(dirName), urlPathEscape(entryName))),
Location: aws.String(fmt.Sprintf("%s://%s/%s/%s", getRequestScheme(r), r.Host, url.PathEscape(*input.Bucket), urlPathEscape(*input.Key))),
Bucket: input.Bucket,
ETag: aws.String("\"" + filer.ETagChunks(finalParts) + "\""),
Key: objectKey(input.Key),
@ -511,7 +528,7 @@ func (s3a *S3ApiServer) completeMultipartUpload(r *http.Request, input *s3.Compl
// For non-versioned buckets, return response without VersionId
output = &CompleteMultipartUploadResult{
Location: aws.String(fmt.Sprintf("http://%s%s/%s", s3a.option.Filer.ToHttpAddress(), urlEscapeObject(dirName), urlPathEscape(entryName))),
Location: aws.String(fmt.Sprintf("%s://%s/%s/%s", getRequestScheme(r), r.Host, url.PathEscape(*input.Bucket), urlPathEscape(*input.Key))),
Bucket: input.Bucket,
ETag: aws.String("\"" + filer.ETagChunks(finalParts) + "\""),
Key: objectKey(input.Key),

7
weed/s3api/s3api_bucket_handlers.go

@ -877,7 +877,8 @@ func (s3a *S3ApiServer) GetBucketLifecycleConfigurationHandler(w http.ResponseWr
s3err.WriteErrorResponse(w, r, err)
return
}
fc, err := filer.ReadFilerConf(s3a.option.Filer, s3a.option.GrpcDialOption, nil)
// ReadFilerConfFromFilers provides multi-filer failover
fc, err := filer.ReadFilerConfFromFilers(s3a.option.Filers, s3a.option.GrpcDialOption, nil)
if err != nil {
glog.Errorf("GetBucketLifecycleConfigurationHandler: %s", err)
s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
@ -938,7 +939,7 @@ func (s3a *S3ApiServer) PutBucketLifecycleConfigurationHandler(w http.ResponseWr
return
}
fc, err := filer.ReadFilerConf(s3a.option.Filer, s3a.option.GrpcDialOption, nil)
fc, err := filer.ReadFilerConfFromFilers(s3a.option.Filers, s3a.option.GrpcDialOption, nil)
if err != nil {
glog.Errorf("PutBucketLifecycleConfigurationHandler read filer config: %s", err)
s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
@ -1020,7 +1021,7 @@ func (s3a *S3ApiServer) DeleteBucketLifecycleHandler(w http.ResponseWriter, r *h
return
}
fc, err := filer.ReadFilerConf(s3a.option.Filer, s3a.option.GrpcDialOption, nil)
fc, err := filer.ReadFilerConfFromFilers(s3a.option.Filers, s3a.option.GrpcDialOption, nil)
if err != nil {
glog.Errorf("DeleteBucketLifecycleHandler read filer config: %s", err)
s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)

4
weed/s3api/s3api_circuit_breaker.go

@ -29,7 +29,8 @@ func NewCircuitBreaker(option *S3ApiServerOption) *CircuitBreaker {
limitations: make(map[string]int64),
}
err := pb.WithFilerClient(false, 0, option.Filer, option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
// Use WithOneOfGrpcFilerClients to support multiple filers with failover
err := pb.WithOneOfGrpcFilerClients(false, option.Filers, option.GrpcDialOption, func(client filer_pb.SeaweedFilerClient) error {
content, err := filer.ReadInsideFiler(client, s3_constants.CircuitBreakerConfigDir, s3_constants.CircuitBreakerConfigFile)
if errors.Is(err, filer_pb.ErrNotFound) {
return nil
@ -41,6 +42,7 @@ func NewCircuitBreaker(option *S3ApiServerOption) *CircuitBreaker {
})
if err != nil {
glog.Warningf("S3 circuit breaker disabled; failed to load config from any filer: %v", err)
}
return cb

68
weed/s3api/s3api_handlers.go

@ -5,6 +5,7 @@ import (
"fmt"
"net/http"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
"google.golang.org/grpc"
@ -15,12 +16,75 @@ import (
var _ = filer_pb.FilerClient(&S3ApiServer{})
func (s3a *S3ApiServer) WithFilerClient(streamingMode bool, fn func(filer_pb.SeaweedFilerClient) error) error {
// Use filerClient for proper connection management and failover
if s3a.filerClient != nil {
return s3a.withFilerClientFailover(streamingMode, fn)
}
// Fallback to direct connection if filerClient not initialized
// This should only happen during initialization or testing
return pb.WithGrpcClient(streamingMode, s3a.randomClientId, func(grpcConnection *grpc.ClientConn) error {
client := filer_pb.NewSeaweedFilerClient(grpcConnection)
return fn(client)
}, s3a.option.Filer.ToGrpcAddress(), false, s3a.option.GrpcDialOption)
}, s3a.getFilerAddress().ToGrpcAddress(), false, s3a.option.GrpcDialOption)
}
// withFilerClientFailover attempts to execute fn with automatic failover to other filers
func (s3a *S3ApiServer) withFilerClientFailover(streamingMode bool, fn func(filer_pb.SeaweedFilerClient) error) error {
// Get current filer as starting point
currentFiler := s3a.filerClient.GetCurrentFiler()
// Try current filer first (fast path)
err := pb.WithGrpcClient(streamingMode, s3a.randomClientId, func(grpcConnection *grpc.ClientConn) error {
client := filer_pb.NewSeaweedFilerClient(grpcConnection)
return fn(client)
}, currentFiler.ToGrpcAddress(), false, s3a.option.GrpcDialOption)
if err == nil {
s3a.filerClient.RecordFilerSuccess(currentFiler)
return nil
}
// Record failure for current filer
s3a.filerClient.RecordFilerFailure(currentFiler)
// Current filer failed - try all other filers with health-aware selection
filers := s3a.filerClient.GetAllFilers()
var lastErr error = err
for _, filer := range filers {
if filer == currentFiler {
continue // Already tried this one
}
// Skip filers known to be unhealthy (circuit breaker pattern)
if s3a.filerClient.ShouldSkipUnhealthyFiler(filer) {
glog.V(2).Infof("WithFilerClient: skipping unhealthy filer %s", filer)
continue
}
err = pb.WithGrpcClient(streamingMode, s3a.randomClientId, func(grpcConnection *grpc.ClientConn) error {
client := filer_pb.NewSeaweedFilerClient(grpcConnection)
return fn(client)
}, filer.ToGrpcAddress(), false, s3a.option.GrpcDialOption)
if err == nil {
// Success! Record success and update current filer for future requests
s3a.filerClient.RecordFilerSuccess(filer)
s3a.filerClient.SetCurrentFiler(filer)
glog.V(1).Infof("WithFilerClient: failover from %s to %s succeeded", currentFiler, filer)
return nil
}
// Record failure for health tracking
s3a.filerClient.RecordFilerFailure(filer)
glog.V(2).Infof("WithFilerClient: failover to %s failed: %v", filer, err)
lastErr = err
}
// All filers failed
return fmt.Errorf("all filers failed, last error: %w", lastErr)
}
func (s3a *S3ApiServer) AdjustedUrl(location *filer_pb.Location) string {

10
weed/s3api/s3api_object_handlers.go

@ -404,11 +404,11 @@ func newListEntry(entry *filer_pb.Entry, key string, dir string, name string, bu
return listEntry
}
func (s3a *S3ApiServer) toFilerUrl(bucket, object string) string {
object = urlPathEscape(removeDuplicateSlashes(object))
destUrl := fmt.Sprintf("http://%s%s/%s%s",
s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, bucket, object)
return destUrl
func (s3a *S3ApiServer) toFilerPath(bucket, object string) string {
// Returns the raw file path - no URL escaping needed
// The path is used directly, not embedded in a URL
object = removeDuplicateSlashes(object)
return fmt.Sprintf("%s/%s%s", s3a.option.BucketsPath, bucket, object)
}
// hasConditionalHeaders checks if the request has any conditional headers

12
weed/s3api/s3api_object_handlers_multipart.go

@ -404,7 +404,7 @@ func (s3a *S3ApiServer) PutObjectPartHandler(w http.ResponseWriter, r *http.Requ
}
}
uploadUrl := s3a.genPartUploadUrl(bucket, uploadID, partID)
filePath := s3a.genPartUploadPath(bucket, uploadID, partID)
if partID == 1 && r.Header.Get("Content-Type") == "" {
dataReader = mimeDetect(r, dataReader)
@ -413,7 +413,7 @@ func (s3a *S3ApiServer) PutObjectPartHandler(w http.ResponseWriter, r *http.Requ
glog.V(2).Infof("PutObjectPart: bucket=%s, object=%s, uploadId=%s, partNumber=%d, size=%d",
bucket, object, uploadID, partID, r.ContentLength)
etag, errCode, sseMetadata := s3a.putToFiler(r, uploadUrl, dataReader, bucket, partID)
etag, errCode, sseMetadata := s3a.putToFiler(r, filePath, dataReader, bucket, partID)
if errCode != s3err.ErrNone {
glog.Errorf("PutObjectPart: putToFiler failed with error code %v for bucket=%s, object=%s, partNumber=%d",
errCode, bucket, object, partID)
@ -437,9 +437,11 @@ func (s3a *S3ApiServer) genUploadsFolder(bucket string) string {
return fmt.Sprintf("%s/%s/%s", s3a.option.BucketsPath, bucket, s3_constants.MultipartUploadsFolder)
}
func (s3a *S3ApiServer) genPartUploadUrl(bucket, uploadID string, partID int) string {
return fmt.Sprintf("http://%s%s/%s/%04d_%s.part",
s3a.option.Filer.ToHttpAddress(), s3a.genUploadsFolder(bucket), uploadID, partID, uuid.NewString())
func (s3a *S3ApiServer) genPartUploadPath(bucket, uploadID string, partID int) string {
// Returns just the file path - no filer address needed
// Upload traffic goes directly to volume servers, not through filer
return fmt.Sprintf("%s/%s/%04d_%s.part",
s3a.genUploadsFolder(bucket), uploadID, partID, uuid.NewString())
}
// Generate uploadID hash string from object

4
weed/s3api/s3api_object_handlers_postpolicy.go

@ -114,7 +114,7 @@ func (s3a *S3ApiServer) PostPolicyBucketHandler(w http.ResponseWriter, r *http.R
}
}
uploadUrl := fmt.Sprintf("http://%s%s/%s%s", s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, bucket, urlEscapeObject(object))
filePath := fmt.Sprintf("%s/%s%s", s3a.option.BucketsPath, bucket, object)
// Get ContentType from post formData
// Otherwise from formFile ContentType
@ -136,7 +136,7 @@ func (s3a *S3ApiServer) PostPolicyBucketHandler(w http.ResponseWriter, r *http.R
}
}
etag, errCode, sseMetadata := s3a.putToFiler(r, uploadUrl, fileBody, bucket, 1)
etag, errCode, sseMetadata := s3a.putToFiler(r, filePath, fileBody, bucket, 1)
if errCode != s3err.ErrNone {
s3err.WriteErrorResponse(w, r, errCode)

37
weed/s3api/s3api_object_handlers_put.go

@ -8,7 +8,6 @@ import (
"fmt"
"io"
"net/http"
"net/url"
"path/filepath"
"strconv"
"strings"
@ -223,12 +222,12 @@ func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request)
s3a.setSSEResponseHeaders(w, r, sseMetadata)
default:
// Handle regular PUT (never configured versioning)
uploadUrl := s3a.toFilerUrl(bucket, object)
filePath := s3a.toFilerPath(bucket, object)
if objectContentType == "" {
dataReader = mimeDetect(r, dataReader)
}
etag, errCode, sseMetadata := s3a.putToFiler(r, uploadUrl, dataReader, bucket, 1)
etag, errCode, sseMetadata := s3a.putToFiler(r, filePath, dataReader, bucket, 1)
if errCode != s3err.ErrNone {
s3err.WriteErrorResponse(w, r, errCode)
@ -248,9 +247,10 @@ func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request)
writeSuccessResponseEmpty(w, r)
}
func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader io.Reader, bucket string, partNumber int) (etag string, code s3err.ErrorCode, sseMetadata SSEResponseMetadata) {
func (s3a *S3ApiServer) putToFiler(r *http.Request, filePath string, dataReader io.Reader, bucket string, partNumber int) (etag string, code s3err.ErrorCode, sseMetadata SSEResponseMetadata) {
// NEW OPTIMIZATION: Write directly to volume servers, bypassing filer proxy
// This eliminates the filer proxy overhead for PUT operations
// Note: filePath is now passed directly instead of URL (no parsing needed)
// For SSE, encrypt with offset=0 for all parts
// Each part is encrypted independently, then decrypted using metadata during GET
@ -311,20 +311,7 @@ func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader
glog.V(4).Infof("putToFiler: explicit encryption already applied, skipping bucket default encryption")
}
// Parse the upload URL to extract the file path
// uploadUrl format: http://filer:8888/path/to/bucket/object (or https://, IPv6, etc.)
// Use proper URL parsing instead of string manipulation for robustness
parsedUrl, parseErr := url.Parse(uploadUrl)
if parseErr != nil {
glog.Errorf("putToFiler: failed to parse uploadUrl %q: %v", uploadUrl, parseErr)
return "", s3err.ErrInternalError, SSEResponseMetadata{}
}
// Use parsedUrl.Path directly - it's already decoded by url.Parse()
// Per Go documentation: "Path is stored in decoded form: /%47%6f%2f becomes /Go/"
// Calling PathUnescape again would double-decode and fail on keys like "b%ar"
filePath := parsedUrl.Path
// filePath is already provided directly - no URL parsing needed
// Step 1 & 2: Use auto-chunking to handle large files without OOM
// This splits large uploads into 8MB chunks, preventing memory issues on both S3 API and volume servers
const chunkSize = 8 * 1024 * 1024 // 8MB chunks (S3 standard)
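
As a side note, the 8MB auto-chunking mentioned above can be illustrated with a self-contained sketch that only reads a stream in fixed-size pieces; the real upload path additionally handles SSE, ETag calculation, and volume assignment, which are omitted here:

  package main

  import (
      "fmt"
      "io"
      "strings"
  )

  const chunkSize = 8 * 1024 * 1024 // 8MB, mirroring the constant above

  // uploadInChunks reads r in chunkSize pieces and hands each piece to upload.
  func uploadInChunks(r io.Reader, upload func(chunk []byte) error) error {
      buf := make([]byte, chunkSize)
      for {
          n, err := io.ReadFull(r, buf)
          if n > 0 {
              if upErr := upload(buf[:n]); upErr != nil {
                  return upErr
              }
          }
          if err == io.EOF || err == io.ErrUnexpectedEOF {
              return nil // clean end: either no data left or a short final chunk
          }
          if err != nil {
              return err
          }
      }
  }

  func main() {
      data := strings.NewReader(strings.Repeat("x", 20<<20)) // ~20MB input
      n := 0
      _ = uploadInChunks(data, func(chunk []byte) error {
          n++
          fmt.Printf("chunk %d: %d bytes\n", n, len(chunk)) // 8MB, 8MB, 4MB
          return nil
      })
  }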
@ -743,7 +730,7 @@ func (s3a *S3ApiServer) setObjectOwnerFromRequest(r *http.Request, entry *filer_
// For suspended versioning, objects are stored as regular files (version ID "null") in the bucket directory,
// while existing versions from when versioning was enabled remain preserved in the .versions subdirectory.
func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, object string, dataReader io.Reader, objectContentType string) (etag string, errCode s3err.ErrorCode, sseMetadata SSEResponseMetadata) {
// Normalize object path to ensure consistency with toFilerUrl behavior
// Normalize object path to ensure consistency with toFilerPath behavior
normalizedObject := removeDuplicateSlashes(object)
glog.V(3).Infof("putSuspendedVersioningObject: START bucket=%s, object=%s, normalized=%s",
@ -783,7 +770,7 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
glog.V(3).Infof("putSuspendedVersioningObject: no .versions directory for %s/%s", bucket, object)
}
uploadUrl := s3a.toFilerUrl(bucket, normalizedObject)
filePath := s3a.toFilerPath(bucket, normalizedObject)
body := dataReader
if objectContentType == "" {
@ -846,7 +833,7 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
}
// Upload the file using putToFiler - this will create the file with version metadata
etag, errCode, sseMetadata = s3a.putToFiler(r, uploadUrl, body, bucket, 1)
etag, errCode, sseMetadata = s3a.putToFiler(r, filePath, body, bucket, 1)
if errCode != s3err.ErrNone {
glog.Errorf("putSuspendedVersioningObject: failed to upload object: %v", errCode)
return "", errCode, SSEResponseMetadata{}
@ -937,7 +924,7 @@ func (s3a *S3ApiServer) putVersionedObject(r *http.Request, bucket, object strin
// Generate version ID
versionId = generateVersionId()
// Normalize object path to ensure consistency with toFilerUrl behavior
// Normalize object path to ensure consistency with toFilerPath behavior
normalizedObject := removeDuplicateSlashes(object)
glog.V(2).Infof("putVersionedObject: creating version %s for %s/%s (normalized: %s)", versionId, bucket, object, normalizedObject)
@ -948,7 +935,7 @@ func (s3a *S3ApiServer) putVersionedObject(r *http.Request, bucket, object strin
// Upload directly to the versions directory
// We need to construct the object path relative to the bucket
versionObjectPath := normalizedObject + s3_constants.VersionsFolder + "/" + versionFileName
versionUploadUrl := s3a.toFilerUrl(bucket, versionObjectPath)
versionFilePath := s3a.toFilerPath(bucket, versionObjectPath)
// Ensure the .versions directory exists before uploading
bucketDir := s3a.option.BucketsPath + "/" + bucket
@ -966,9 +953,9 @@ func (s3a *S3ApiServer) putVersionedObject(r *http.Request, bucket, object strin
body = mimeDetect(r, body)
}
glog.V(2).Infof("putVersionedObject: uploading %s/%s version %s to %s", bucket, object, versionId, versionUploadUrl)
glog.V(2).Infof("putVersionedObject: uploading %s/%s version %s to %s", bucket, object, versionId, versionFilePath)
etag, errCode, sseMetadata = s3a.putToFiler(r, versionUploadUrl, body, bucket, 1)
etag, errCode, sseMetadata = s3a.putToFiler(r, versionFilePath, body, bucket, 1)
if errCode != s3err.ErrNone {
glog.Errorf("putVersionedObject: failed to upload version: %v", errCode)
return "", "", errCode, SSEResponseMetadata{}

2
weed/s3api/s3api_object_handlers_test.go

@ -114,7 +114,7 @@ func TestRemoveDuplicateSlashes(t *testing.T) {
}
}
func TestS3ApiServer_toFilerUrl(t *testing.T) {
func TestS3ApiServer_toFilerPath(t *testing.T) {
tests := []struct {
name string
args string
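
A table-driven sketch in the same spirit as the renamed test (not the repository's actual cases) could look like the following; it exercises a local stand-in for toFilerPath under an assumed /buckets prefix:

  package main

  import (
      "strings"
      "testing"
  )

  // toFilerPathSketch mirrors the formatting of toFilerPath for illustration only.
  func toFilerPathSketch(bucketsPath, bucket, object string) string {
      for strings.Contains(object, "//") {
          object = strings.ReplaceAll(object, "//", "/")
      }
      return bucketsPath + "/" + bucket + object
  }

  func TestToFilerPathSketch(t *testing.T) {
      tests := []struct {
          name, object, want string
      }{
          {"simple key", "/a.txt", "/buckets/b/a.txt"},
          {"nested key", "/dir/a.txt", "/buckets/b/dir/a.txt"},
          {"duplicate slashes", "//dir//a.txt", "/buckets/b/dir/a.txt"},
          {"percent stays literal", "/b%ar.jpg", "/buckets/b/b%ar.jpg"}, // no URL escaping
      }
      for _, tc := range tests {
          t.Run(tc.name, func(t *testing.T) {
              if got := toFilerPathSketch("/buckets", "b", tc.object); got != tc.want {
                  t.Errorf("got %q, want %q", got, tc.want)
              }
          })
      }
  }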

6
weed/s3api/s3api_object_versioning.go

@ -607,7 +607,7 @@ func (s3a *S3ApiServer) calculateETagFromChunks(chunks []*filer_pb.FileChunk) st
// getSpecificObjectVersion retrieves a specific version of an object
func (s3a *S3ApiServer) getSpecificObjectVersion(bucket, object, versionId string) (*filer_pb.Entry, error) {
// Normalize object path to ensure consistency with toFilerUrl behavior
// Normalize object path to ensure consistency with toFilerPath behavior
normalizedObject := removeDuplicateSlashes(object)
if versionId == "" {
@ -639,7 +639,7 @@ func (s3a *S3ApiServer) getSpecificObjectVersion(bucket, object, versionId strin
// deleteSpecificObjectVersion deletes a specific version of an object
func (s3a *S3ApiServer) deleteSpecificObjectVersion(bucket, object, versionId string) error {
// Normalize object path to ensure consistency with toFilerUrl behavior
// Normalize object path to ensure consistency with toFilerPath behavior
normalizedObject := removeDuplicateSlashes(object)
if versionId == "" {
@ -843,7 +843,7 @@ func (s3a *S3ApiServer) ListObjectVersionsHandler(w http.ResponseWriter, r *http
// getLatestObjectVersion finds the latest version of an object by reading .versions directory metadata
func (s3a *S3ApiServer) getLatestObjectVersion(bucket, object string) (*filer_pb.Entry, error) {
// Normalize object path to ensure consistency with toFilerUrl behavior
// Normalize object path to ensure consistency with toFilerPath behavior
normalizedObject := removeDuplicateSlashes(object)
bucketDir := s3a.option.BucketsPath + "/" + bucket

78
weed/s3api/s3api_server.go

@ -11,29 +11,31 @@ import (
"strings"
"time"
"github.com/gorilla/mux"
"google.golang.org/grpc"
"github.com/seaweedfs/seaweedfs/weed/cluster"
"github.com/seaweedfs/seaweedfs/weed/credential"
"github.com/seaweedfs/seaweedfs/weed/filer"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/iam/integration"
"github.com/seaweedfs/seaweedfs/weed/iam/policy"
"github.com/seaweedfs/seaweedfs/weed/iam/sts"
"github.com/seaweedfs/seaweedfs/weed/pb/s3_pb"
"github.com/seaweedfs/seaweedfs/weed/util/grace"
"github.com/seaweedfs/seaweedfs/weed/wdclient"
"github.com/gorilla/mux"
"github.com/seaweedfs/seaweedfs/weed/pb"
"github.com/seaweedfs/seaweedfs/weed/pb/s3_pb"
. "github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
"github.com/seaweedfs/seaweedfs/weed/security"
"github.com/seaweedfs/seaweedfs/weed/util"
"github.com/seaweedfs/seaweedfs/weed/util/grace"
util_http "github.com/seaweedfs/seaweedfs/weed/util/http"
util_http_client "github.com/seaweedfs/seaweedfs/weed/util/http/client"
"google.golang.org/grpc"
"github.com/seaweedfs/seaweedfs/weed/wdclient"
)
type S3ApiServerOption struct {
Filer pb.ServerAddress
Filers []pb.ServerAddress
Masters []pb.ServerAddress // For filer discovery
Port int
Config string
DomainName string
@ -69,6 +71,10 @@ func NewS3ApiServer(router *mux.Router, option *S3ApiServerOption) (s3ApiServer
}
func NewS3ApiServerWithStore(router *mux.Router, option *S3ApiServerOption, explicitStore string) (s3ApiServer *S3ApiServer, err error) {
if len(option.Filers) == 0 {
return nil, fmt.Errorf("at least one filer address is required")
}
startTsNs := time.Now().UnixNano()
v := util.GetViper()
@ -95,9 +101,38 @@ func NewS3ApiServerWithStore(router *mux.Router, option *S3ApiServerOption, expl
// Initialize FilerClient for volume location caching
// Uses the battle-tested vidMap with filer-based lookups
// S3 API typically connects to a single filer, but wrap in slice for consistency
filerClient := wdclient.NewFilerClient([]pb.ServerAddress{option.Filer}, option.GrpcDialOption, option.DataCenter)
glog.V(0).Infof("S3 API initialized FilerClient for volume location caching")
// Supports multiple filer addresses with automatic failover for high availability
var filerClient *wdclient.FilerClient
if len(option.Masters) > 0 && option.FilerGroup != "" {
// Enable filer discovery via master
masterMap := make(map[string]pb.ServerAddress)
for i, addr := range option.Masters {
masterMap[fmt.Sprintf("master%d", i)] = addr
}
masterClient := wdclient.NewMasterClient(option.GrpcDialOption, option.FilerGroup, cluster.S3Type, "", "", "", *pb.NewServiceDiscoveryFromMap(masterMap))
filerClient = wdclient.NewFilerClient(option.Filers, option.GrpcDialOption, option.DataCenter, &wdclient.FilerClientOption{
MasterClient: masterClient,
FilerGroup: option.FilerGroup,
DiscoveryInterval: 5 * time.Minute,
})
glog.V(0).Infof("S3 API initialized FilerClient with %d filer(s) and discovery enabled (group: %s, masters: %v)",
len(option.Filers), option.FilerGroup, option.Masters)
} else {
filerClient = wdclient.NewFilerClient(option.Filers, option.GrpcDialOption, option.DataCenter)
glog.V(0).Infof("S3 API initialized FilerClient with %d filer(s) (no discovery)", len(option.Filers))
}
// Update credential store to use FilerClient's current filer for HA
if store := iam.credentialManager.GetStore(); store != nil {
if filerFuncSetter, ok := store.(interface {
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
}); ok {
// Use FilerClient's GetCurrentFiler for true HA
filerFuncSetter.SetFilerAddressFunc(filerClient.GetCurrentFiler, option.GrpcDialOption)
glog.V(1).Infof("Updated credential store to use FilerClient's current active filer (HA-aware)")
}
}
s3ApiServer = &S3ApiServer{
option: option,
@ -119,14 +154,16 @@ func NewS3ApiServerWithStore(router *mux.Router, option *S3ApiServerOption, expl
if option.IamConfig != "" {
glog.V(1).Infof("Loading advanced IAM configuration from: %s", option.IamConfig)
// Use FilerClient's GetCurrentFiler for HA-aware filer selection
iamManager, err := loadIAMManagerFromConfig(option.IamConfig, func() string {
return string(option.Filer)
return string(filerClient.GetCurrentFiler())
})
if err != nil {
glog.Errorf("Failed to load IAM configuration: %v", err)
} else {
// Create S3 IAM integration with the loaded IAM manager
s3iam := NewS3IAMIntegration(iamManager, string(option.Filer))
// filerAddress not actually used, just for backward compatibility
s3iam := NewS3IAMIntegration(iamManager, "")
// Set IAM integration in server
s3ApiServer.iamIntegration = s3iam
@ -134,7 +171,7 @@ func NewS3ApiServerWithStore(router *mux.Router, option *S3ApiServerOption, expl
// Set the integration in the traditional IAM for compatibility
iam.SetIAMIntegration(s3iam)
glog.V(1).Infof("Advanced IAM system initialized successfully")
glog.V(1).Infof("Advanced IAM system initialized successfully with HA filer support")
}
}
@ -173,6 +210,21 @@ func NewS3ApiServerWithStore(router *mux.Router, option *S3ApiServerOption, expl
return s3ApiServer, nil
}
// getFilerAddress returns the current active filer address
// Uses FilerClient's tracked current filer which is updated on successful operations
// This provides better availability than always using the first filer
func (s3a *S3ApiServer) getFilerAddress() pb.ServerAddress {
if s3a.filerClient != nil {
return s3a.filerClient.GetCurrentFiler()
}
// Fallback to first filer if filerClient not initialized
if len(s3a.option.Filers) > 0 {
return s3a.option.Filers[0]
}
glog.Warningf("getFilerAddress: no filer addresses available")
return ""
}
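
A minimal wiring sketch for the new option fields follows; all other S3ApiServerOption fields (ports, TLS, credential settings) are omitted, the addresses and BucketsPath are placeholders, and discovery only kicks in when both Masters and FilerGroup are set, mirroring the branch above:

  package main

  import (
      "fmt"

      "github.com/seaweedfs/seaweedfs/weed/pb"
      "github.com/seaweedfs/seaweedfs/weed/s3api"
  )

  func main() {
      opt := &s3api.S3ApiServerOption{
          Filers:      []pb.ServerAddress{"filer1:8888", "filer2:8888"}, // seed filers
          Masters:     []pb.ServerAddress{"master1:9333"},               // optional: enables discovery
          FilerGroup:  "my-group",                                       // required together with Masters
          BucketsPath: "/buckets",                                       // assumed value
      }
      fmt.Printf("seed filers: %d, discovery enabled: %v\n",
          len(opt.Filers), len(opt.Masters) > 0 && opt.FilerGroup != "")
  }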
// syncBucketPolicyToEngine syncs a bucket policy to the policy engine
// This helper method centralizes the logic for loading bucket policies into the engine
// to avoid duplication and ensure consistent error handling

320
weed/wdclient/filer_client.go

@ -5,6 +5,7 @@ import (
"fmt"
"math/rand"
"strings"
"sync"
"sync/atomic"
"time"
@ -12,6 +13,7 @@ import (
"google.golang.org/grpc/codes"
"google.golang.org/grpc/status"
"github.com/seaweedfs/seaweedfs/weed/cluster"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
@ -35,9 +37,11 @@ type filerHealth struct {
// It uses the shared vidMap cache for efficient lookups
// Supports multiple filer addresses with automatic failover for high availability
// Tracks filer health to avoid repeatedly trying known-unhealthy filers
// Can discover additional filers from master server when configured with filer group
type FilerClient struct {
*vidMapClient
filerAddresses []pb.ServerAddress
filerAddressesMu sync.RWMutex // Protects filerAddresses and filerHealth
filerIndex int32 // atomic: current filer index for round-robin
filerHealth []*filerHealth // health status per filer (same order as filerAddresses)
grpcDialOption grpc.DialOption
@ -50,6 +54,13 @@ type FilerClient struct {
maxRetries int // Retry: maximum retry attempts for transient failures
initialRetryWait time.Duration // Retry: initial wait time before first retry
retryBackoffFactor float64 // Retry: backoff multiplier for wait time
// Filer discovery fields
masterClient *MasterClient // Optional: for discovering filers in the same group
filerGroup string // Optional: filer group for discovery
discoveryInterval time.Duration // How often to refresh filer list from master
stopDiscovery chan struct{} // Signal to stop discovery goroutine
closeDiscoveryOnce sync.Once // Ensures discovery channel is closed at most once
}
// filerVolumeProvider implements VolumeLocationProvider by querying filer
@ -68,6 +79,11 @@ type FilerClientOption struct {
MaxRetries int // Retry: maximum retry attempts for transient failures (0 = use default of 3)
InitialRetryWait time.Duration // Retry: initial wait time before first retry (0 = use default of 1s)
RetryBackoffFactor float64 // Retry: backoff multiplier for wait time (0 = use default of 1.5)
// Filer discovery options
MasterClient *MasterClient // Optional: enables filer discovery from master
FilerGroup string // Optional: filer group name for discovery (required if MasterClient is set)
DiscoveryInterval time.Duration // Optional: how often to refresh filer list (0 = use default of 5 minutes)
}
// NewFilerClient creates a new client that queries filer(s) for volume locations
@ -87,6 +103,9 @@ func NewFilerClient(filerAddresses []pb.ServerAddress, grpcDialOption grpc.DialO
maxRetries := 3 // Default: 3 retry attempts for transient failures
initialRetryWait := time.Second // Default: 1 second initial retry wait
retryBackoffFactor := 1.5 // Default: 1.5x backoff multiplier
var masterClient *MasterClient
var filerGroup string
discoveryInterval := 5 * time.Minute // Default: refresh every 5 minutes
// Override with provided options
if len(opts) > 0 && opts[0] != nil {
@ -115,6 +134,13 @@ func NewFilerClient(filerAddresses []pb.ServerAddress, grpcDialOption grpc.DialO
if opt.RetryBackoffFactor > 0 {
retryBackoffFactor = opt.RetryBackoffFactor
}
if opt.MasterClient != nil {
masterClient = opt.MasterClient
filerGroup = opt.FilerGroup
if opt.DiscoveryInterval > 0 {
discoveryInterval = opt.DiscoveryInterval
}
}
}
// Initialize health tracking for each filer
@ -137,6 +163,17 @@ func NewFilerClient(filerAddresses []pb.ServerAddress, grpcDialOption grpc.DialO
maxRetries: maxRetries,
initialRetryWait: initialRetryWait,
retryBackoffFactor: retryBackoffFactor,
masterClient: masterClient,
filerGroup: filerGroup,
discoveryInterval: discoveryInterval,
}
// Start filer discovery if master client is configured
// Empty filerGroup is valid (represents default group)
if masterClient != nil {
fc.stopDiscovery = make(chan struct{})
go fc.discoverFilers()
glog.V(0).Infof("FilerClient: started filer discovery for group '%s' (refresh interval: %v)", filerGroup, discoveryInterval)
}
// Create provider that references this FilerClient for failover support
@ -149,6 +186,204 @@ func NewFilerClient(filerAddresses []pb.ServerAddress, grpcDialOption grpc.DialO
return fc
}
// GetCurrentFiler returns the currently active filer address
// This is the filer that was last successfully used or the one indicated by round-robin
// Returns empty string if no filers are configured
func (fc *FilerClient) GetCurrentFiler() pb.ServerAddress {
fc.filerAddressesMu.RLock()
defer fc.filerAddressesMu.RUnlock()
if len(fc.filerAddresses) == 0 {
return ""
}
// Get current index (atomically updated on successful operations)
index := atomic.LoadInt32(&fc.filerIndex)
if index >= int32(len(fc.filerAddresses)) {
index = 0
}
return fc.filerAddresses[index]
}
// GetAllFilers returns a snapshot of all filer addresses
// Returns a copy to avoid concurrent modification issues
func (fc *FilerClient) GetAllFilers() []pb.ServerAddress {
fc.filerAddressesMu.RLock()
defer fc.filerAddressesMu.RUnlock()
// Return a copy to avoid concurrent modification
filers := make([]pb.ServerAddress, len(fc.filerAddresses))
copy(filers, fc.filerAddresses)
return filers
}
// SetCurrentFiler updates the current filer index to the specified address
// This is useful after successful failover to prefer the healthy filer for future requests
func (fc *FilerClient) SetCurrentFiler(addr pb.ServerAddress) {
fc.filerAddressesMu.RLock()
defer fc.filerAddressesMu.RUnlock()
// Find the index of the specified filer address
for i, filer := range fc.filerAddresses {
if filer == addr {
atomic.StoreInt32(&fc.filerIndex, int32(i))
return
}
}
// If address not found, leave index unchanged
}
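
A caller-side sketch of these accessors; the addresses are placeholders, and the nil dial option and empty data center are only for the example:

  package main

  import (
      "fmt"

      "github.com/seaweedfs/seaweedfs/weed/pb"
      "github.com/seaweedfs/seaweedfs/weed/wdclient"
  )

  func main() {
      fc := wdclient.NewFilerClient(
          []pb.ServerAddress{"filer1:8888", "filer2:8888"},
          nil, // grpc.DialOption; nil only for the sketch
          "",  // data center
      )
      defer fc.Close() // stops background discovery if it was enabled

      fmt.Println("current filer:", fc.GetCurrentFiler()) // last-successful / round-robin filer
      fmt.Println("all filers:   ", fc.GetAllFilers())    // snapshot copy, safe to range over

      // After an out-of-band check, prefer a specific healthy filer for new requests.
      preferred := pb.ServerAddress("filer2:8888")
      if !fc.ShouldSkipUnhealthyFiler(preferred) {
          fc.SetCurrentFiler(preferred)
      }
  }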
// ShouldSkipUnhealthyFiler checks if a filer address should be skipped based on health tracking
// Returns true if the filer has exceeded failure threshold and reset timeout hasn't elapsed
func (fc *FilerClient) ShouldSkipUnhealthyFiler(addr pb.ServerAddress) bool {
fc.filerAddressesMu.RLock()
defer fc.filerAddressesMu.RUnlock()
// Find the health for this filer address
for i, filer := range fc.filerAddresses {
if filer == addr {
if i < len(fc.filerHealth) {
return fc.shouldSkipUnhealthyFilerWithHealth(fc.filerHealth[i])
}
return false
}
}
// If address not found, don't skip it
return false
}
// RecordFilerSuccess resets failure tracking for a successful filer
func (fc *FilerClient) RecordFilerSuccess(addr pb.ServerAddress) {
fc.filerAddressesMu.RLock()
defer fc.filerAddressesMu.RUnlock()
// Find the health for this filer address
for i, filer := range fc.filerAddresses {
if filer == addr {
if i < len(fc.filerHealth) {
fc.recordFilerSuccessWithHealth(fc.filerHealth[i])
}
return
}
}
}
// RecordFilerFailure increments failure count for an unhealthy filer
func (fc *FilerClient) RecordFilerFailure(addr pb.ServerAddress) {
fc.filerAddressesMu.RLock()
defer fc.filerAddressesMu.RUnlock()
// Find the health for this filer address
for i, filer := range fc.filerAddresses {
if filer == addr {
if i < len(fc.filerHealth) {
fc.recordFilerFailureWithHealth(fc.filerHealth[i])
}
return
}
}
}
// Close stops the filer discovery goroutine if running
// Safe to call multiple times (idempotent)
func (fc *FilerClient) Close() {
if fc.stopDiscovery != nil {
fc.closeDiscoveryOnce.Do(func() {
close(fc.stopDiscovery)
})
}
}
// discoverFilers periodically queries the master to discover filers in the same group
// and updates the filer list. This runs in a background goroutine.
func (fc *FilerClient) discoverFilers() {
defer func() {
if r := recover(); r != nil {
glog.Errorf("FilerClient: panic in filer discovery goroutine for group '%s': %v", fc.filerGroup, r)
}
}()
// Do an initial discovery
fc.refreshFilerList()
ticker := time.NewTicker(fc.discoveryInterval)
defer ticker.Stop()
for {
select {
case <-ticker.C:
fc.refreshFilerList()
case <-fc.stopDiscovery:
glog.V(0).Infof("FilerClient: stopping filer discovery for group '%s'", fc.filerGroup)
return
}
}
}
// refreshFilerList queries the master for the current list of filers and updates the local list
func (fc *FilerClient) refreshFilerList() {
if fc.masterClient == nil {
return
}
// Get current master address
currentMaster := fc.masterClient.GetMaster(context.Background())
if currentMaster == "" {
glog.V(1).Infof("FilerClient: no master available for filer discovery")
return
}
// Query master for filers in our group
updates := cluster.ListExistingPeerUpdates(currentMaster, fc.grpcDialOption, fc.filerGroup, cluster.FilerType)
if len(updates) == 0 {
glog.V(2).Infof("FilerClient: no filers found in group '%s'", fc.filerGroup)
return
}
// Build new filer address list
discoveredFilers := make(map[pb.ServerAddress]bool)
for _, update := range updates {
if update.Address != "" {
discoveredFilers[pb.ServerAddress(update.Address)] = true
}
}
// Thread-safe update of filer list
fc.filerAddressesMu.Lock()
defer fc.filerAddressesMu.Unlock()
// Build a map of existing filers for efficient O(1) lookup
existingFilers := make(map[pb.ServerAddress]struct{}, len(fc.filerAddresses))
for _, f := range fc.filerAddresses {
existingFilers[f] = struct{}{}
}
// Find new filers - O(N+M) instead of O(N*M)
var newFilers []pb.ServerAddress
for addr := range discoveredFilers {
if _, found := existingFilers[addr]; !found {
newFilers = append(newFilers, addr)
}
}
// Add new filers
if len(newFilers) > 0 {
glog.V(0).Infof("FilerClient: discovered %d new filer(s) in group '%s': %v", len(newFilers), fc.filerGroup, newFilers)
fc.filerAddresses = append(fc.filerAddresses, newFilers...)
// Initialize health tracking for new filers
for range newFilers {
fc.filerHealth = append(fc.filerHealth, &filerHealth{})
}
}
// Optionally, remove filers that are no longer in the cluster
// For now, we keep all filers and rely on health checks to avoid dead ones
// This prevents removing filers that might be temporarily unavailable
}
// GetLookupFileIdFunction returns a lookup function with URL preference handling
func (fc *FilerClient) GetLookupFileIdFunction() LookupFileIdFunctionType {
if fc.urlPreference == PreferUrl {
@ -245,8 +480,9 @@ func isRetryableGrpcError(err error) bool {
// shouldSkipUnhealthyFiler checks if we should skip a filer based on recent failures
// Circuit breaker pattern: skip filers with multiple recent consecutive failures
func (fc *FilerClient) shouldSkipUnhealthyFiler(index int32) bool {
health := fc.filerHealth[index]
// shouldSkipUnhealthyFilerWithHealth checks if a filer should be skipped based on health
// Uses atomic operations only - safe to call without locks
func (fc *FilerClient) shouldSkipUnhealthyFilerWithHealth(health *filerHealth) bool {
failureCount := atomic.LoadInt32(&health.failureCount)
// Check if failure count exceeds threshold
@ -267,17 +503,53 @@ func (fc *FilerClient) shouldSkipUnhealthyFiler(index int32) bool {
return true // Skip this unhealthy filer
}
// Deprecated: Use shouldSkipUnhealthyFilerWithHealth instead
// This function is kept for backward compatibility but requires array access
// Note: This function is now thread-safe.
func (fc *FilerClient) shouldSkipUnhealthyFiler(index int32) bool {
fc.filerAddressesMu.RLock()
if index >= int32(len(fc.filerHealth)) {
fc.filerAddressesMu.RUnlock()
return true // Invalid index - skip
}
health := fc.filerHealth[index]
fc.filerAddressesMu.RUnlock()
return fc.shouldSkipUnhealthyFilerWithHealth(health)
}
// recordFilerSuccessWithHealth resets failure tracking for a successful filer
func (fc *FilerClient) recordFilerSuccessWithHealth(health *filerHealth) {
atomic.StoreInt32(&health.failureCount, 0)
}
// recordFilerSuccess resets failure tracking for a successful filer
func (fc *FilerClient) recordFilerSuccess(index int32) {
fc.filerAddressesMu.RLock()
if index >= int32(len(fc.filerHealth)) {
fc.filerAddressesMu.RUnlock()
return // Invalid index
}
health := fc.filerHealth[index]
atomic.StoreInt32(&health.failureCount, 0)
fc.filerAddressesMu.RUnlock()
fc.recordFilerSuccessWithHealth(health)
}
// recordFilerFailureWithHealth increments failure count for an unhealthy filer
func (fc *FilerClient) recordFilerFailureWithHealth(health *filerHealth) {
atomic.AddInt32(&health.failureCount, 1)
atomic.StoreInt64(&health.lastFailureTimeNs, time.Now().UnixNano())
}
// recordFilerFailure increments failure count for an unhealthy filer
func (fc *FilerClient) recordFilerFailure(index int32) {
fc.filerAddressesMu.RLock()
if index >= int32(len(fc.filerHealth)) {
fc.filerAddressesMu.RUnlock()
return // Invalid index
}
health := fc.filerHealth[index]
atomic.AddInt32(&health.failureCount, 1)
atomic.StoreInt64(&health.lastFailureTimeNs, time.Now().UnixNano())
fc.filerAddressesMu.RUnlock()
fc.recordFilerFailureWithHealth(health)
}
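
The health-tracking pattern above reduces to a standalone sketch: a failure counter plus a last-failure timestamp, both touched only through atomics so readers never need the address-list lock. The threshold of 3 and the 30s reset window below are assumptions for the sketch, not the real constants:

  package main

  import (
      "fmt"
      "sync/atomic"
      "time"
  )

  type health struct {
      failureCount      int32
      lastFailureTimeNs int64
  }

  func (h *health) recordFailure() {
      atomic.AddInt32(&h.failureCount, 1)
      atomic.StoreInt64(&h.lastFailureTimeNs, time.Now().UnixNano())
  }

  func (h *health) recordSuccess() {
      atomic.StoreInt32(&h.failureCount, 0)
  }

  func (h *health) shouldSkip(threshold int32, resetAfter time.Duration) bool {
      if atomic.LoadInt32(&h.failureCount) < threshold {
          return false
      }
      last := time.Unix(0, atomic.LoadInt64(&h.lastFailureTimeNs))
      // Give the filer another chance once the reset window has elapsed.
      return time.Since(last) < resetAfter
  }

  func main() {
      h := &health{}
      for i := 0; i < 3; i++ {
          h.recordFailure()
      }
      fmt.Println(h.shouldSkip(3, 30*time.Second)) // true: circuit open
      h.recordSuccess()
      fmt.Println(h.shouldSkip(3, 30*time.Second)) // false: circuit closed again
  }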
// LookupVolumeIds queries the filer for volume locations with automatic failover
@ -299,13 +571,34 @@ func (p *filerVolumeProvider) LookupVolumeIds(ctx context.Context, volumeIds []s
// Try all filer addresses with round-robin starting from current index
// Skip known-unhealthy filers (circuit breaker pattern)
i := atomic.LoadInt32(&fc.filerIndex)
// Get filer count with read lock
fc.filerAddressesMu.RLock()
n := int32(len(fc.filerAddresses))
fc.filerAddressesMu.RUnlock()
for x := int32(0); x < n; x++ {
// Circuit breaker: skip unhealthy filers
if fc.shouldSkipUnhealthyFiler(i) {
// Get current filer address and health with read lock
fc.filerAddressesMu.RLock()
if len(fc.filerAddresses) == 0 {
fc.filerAddressesMu.RUnlock()
lastErr = fmt.Errorf("no filers available")
break
}
if i >= int32(len(fc.filerAddresses)) {
// Filer list changed, reset index
i = 0
}
// Get health pointer while holding lock
health := fc.filerHealth[i]
filerAddress := fc.filerAddresses[i]
fc.filerAddressesMu.RUnlock()
// Circuit breaker: skip unhealthy filers (no lock needed - uses atomics)
if fc.shouldSkipUnhealthyFilerWithHealth(health) {
glog.V(2).Infof("FilerClient: skipping unhealthy filer %s (consecutive failures: %d)",
fc.filerAddresses[i], atomic.LoadInt32(&fc.filerHealth[i].failureCount))
filerAddress, atomic.LoadInt32(&health.failureCount))
i++
if i >= n {
i = 0
@ -313,8 +606,6 @@ func (p *filerVolumeProvider) LookupVolumeIds(ctx context.Context, volumeIds []s
continue
}
filerAddress := fc.filerAddresses[i]
// Use anonymous function to ensure defer cancel() is called per iteration, not accumulated
err := func() error {
// Create a fresh timeout context for each filer attempt
@ -367,7 +658,7 @@ func (p *filerVolumeProvider) LookupVolumeIds(ctx context.Context, volumeIds []s
if err != nil {
glog.V(1).Infof("FilerClient: filer %s lookup failed (attempt %d/%d, retry %d/%d): %v", filerAddress, x+1, n, retry+1, maxRetries, err)
fc.recordFilerFailure(i)
fc.recordFilerFailureWithHealth(health)
lastErr = err
i++
if i >= n {
@ -378,7 +669,7 @@ func (p *filerVolumeProvider) LookupVolumeIds(ctx context.Context, volumeIds []s
// Success - update the preferred filer index and reset health tracking
atomic.StoreInt32(&fc.filerIndex, i)
fc.recordFilerSuccess(i)
fc.recordFilerSuccessWithHealth(health)
glog.V(3).Infof("FilerClient: looked up %d volumes on %s, found %d", len(volumeIds), filerAddress, len(result))
return result, nil
}
@ -400,5 +691,8 @@ func (p *filerVolumeProvider) LookupVolumeIds(ctx context.Context, volumeIds []s
}
// All retries exhausted
return nil, fmt.Errorf("all %d filer(s) failed after %d attempts, last error: %w", len(fc.filerAddresses), maxRetries, lastErr)
fc.filerAddressesMu.RLock()
totalFilers := len(fc.filerAddresses)
fc.filerAddressesMu.RUnlock()
return nil, fmt.Errorf("all %d filer(s) failed after %d attempts, last error: %w", totalFilers, maxRetries, lastErr)
}
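
The per-attempt anonymous function noted in the lookup loop ("defer cancel() is called per iteration, not accumulated") generalizes to a small pattern worth showing on its own; the 5s timeout and the filer list here are placeholders:

  package main

  import (
      "context"
      "errors"
      "fmt"
      "time"
  )

  // tryEachFiler wraps each attempt in a closure so the per-attempt cancel runs
  // when that attempt finishes, instead of piling up until the function returns.
  func tryEachFiler(ctx context.Context, filers []string, attempt func(ctx context.Context, filer string) error) error {
      var lastErr error
      for _, filer := range filers {
          err := func() error {
              attemptCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
              defer cancel()
              return attempt(attemptCtx, filer)
          }()
          if err == nil {
              return nil
          }
          lastErr = err
      }
      return fmt.Errorf("all %d filer(s) failed, last error: %w", len(filers), lastErr)
  }

  func main() {
      err := tryEachFiler(context.Background(), []string{"filer1:8888", "filer2:8888"},
          func(ctx context.Context, filer string) error {
              return errors.New("unreachable: " + filer) // simulate failures
          })
      fmt.Println(err)
  }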