Branch: master

Branches:
add-ec-vacuum
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/github_actions/actions/cache-5
dependabot/github_actions/actions/upload-artifact-6
dependabot/github_actions/wangyoucao577/go-release-action-1.55
dependabot/go_modules/github.com/aws/aws-sdk-go-v2-1.41.0
dependabot/go_modules/github.com/aws/aws-sdk-go-v2/credentials-1.19.5
dependabot/go_modules/github.com/go-redsync/redsync/v4-4.15.0
dependabot/go_modules/github.com/ydb-platform/ydb-go-sdk/v3-3.122.0
dependabot/go_modules/golang.org/x/image-0.34.0
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-s3-object-tagging-issue-7589
fix-versioning-listing-only
ftp
gh-pages
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
pr-7412
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-select
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks

Tags:
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
711 Commits (master)

commit 1b1e5f69a2 (18 hours ago)

Add TUS protocol support for resumable uploads (#7592)
* Add TUS protocol integration tests
This commit adds integration tests for the TUS (resumable upload) protocol
in preparation for implementing TUS support in the filer.
Test coverage includes:
- OPTIONS handler for capability discovery
- Basic single-request upload
- Chunked/resumable uploads
- HEAD requests for offset tracking
- DELETE for upload cancellation
- Error handling (invalid offsets, missing uploads)
- Creation-with-upload extension
- Resume after interruption simulation
Tests are skipped in short mode and require a running SeaweedFS cluster.
* Add TUS session storage types and utilities
Implements TUS upload session management:
- TusSession struct for tracking upload state
- Session creation with directory-based storage
- Session persistence using filer entries
- Session retrieval and offset updates
- Session deletion with chunk cleanup
- Upload completion with chunk assembly into final file
Session data is stored in /.uploads.tus/{upload-id}/ directory,
following the pattern used by S3 multipart uploads.
* Add TUS HTTP handlers
Implements TUS protocol HTTP handlers:
- tusHandler: Main entry point routing requests
- tusOptionsHandler: Capability discovery (OPTIONS)
- tusCreateHandler: Create new upload (POST)
- tusHeadHandler: Get upload offset (HEAD)
- tusPatchHandler: Upload data at offset (PATCH)
- tusDeleteHandler: Cancel upload (DELETE)
- tusWriteData: Upload data to volume servers
Features:
- Supports creation-with-upload extension
- Validates TUS protocol headers
- Offset conflict detection
- Automatic upload completion when size is reached
- Metadata parsing from Upload-Metadata header
* Wire up TUS protocol routes in filer server
Add TUS handler route (/.tus/) to the filer HTTP server.
The TUS route is registered before the catch-all route to ensure
proper routing of TUS protocol requests.
TUS protocol is now accessible at:
- OPTIONS /.tus/ - Capability discovery
- POST /.tus/{path} - Create upload
- HEAD /.tus/.uploads/{id} - Get offset
- PATCH /.tus/.uploads/{id} - Upload data
- DELETE /.tus/.uploads/{id} - Cancel upload
* Improve TUS integration test setup
Add comprehensive Makefile for TUS tests with targets:
- test-with-server: Run tests with automatic server management
- test-basic/chunked/resume/errors: Specific test categories
- manual-start/stop: For development testing
- debug-logs/status: For debugging
- ci-test: For CI/CD pipelines
Update README.md with:
- Detailed TUS protocol documentation
- All endpoint descriptions with headers
- Usage examples with curl commands
- Architecture diagram
- Comparison with S3 multipart uploads
Follows the pattern established by other tests in test/ folder.
* Fix TUS integration tests and creation-with-upload
- Fix test URLs to use full URLs instead of relative paths
- Fix creation-with-upload to refresh session before completing
- Fix Makefile to properly handle test cleanup
- Add FullURL helper function to TestCluster
* Add TUS protocol tests to GitHub Actions CI
- Add tus-tests.yml workflow that runs on PRs and pushes
- Runs when TUS-related files are modified
- Automatic server management for integration testing
- Upload logs on failure for debugging
* Make TUS base path configurable via CLI
- Add -tus.path CLI flag to filer command
- TUS is disabled by default (empty path)
- Example: -tus.path=/.tus to enable at /.tus endpoint
- Update test Makefile to use -tus.path flag
- Update README with TUS enabling instructions
* Rename -tus.path to -tusBasePath with default .tus
- Rename CLI flag from -tus.path to -tusBasePath
- Default to .tus (TUS enabled by default)
- Add -filer.tusBasePath option to weed server command
- Properly handle path prefix (prepend / if missing)
* Address code review comments
- Sort chunks by offset before assembling final file
- Use chunk.Offset directly instead of recalculating
- Return error on invalid file ID instead of skipping
- Require Content-Length header for PATCH requests
- Use fs.option.Cipher for encryption setting
- Detect MIME type from data using http.DetectContentType
- Fix concurrency group for push events in workflow
- Use os.Interrupt instead of Kill for graceful shutdown in tests
* fmt
* Address remaining code review comments
- Fix potential open redirect vulnerability by sanitizing uploadLocation path
- Add language specifier to README code block
- Handle os.Create errors in test setup
- Use waitForHTTPServer instead of time.Sleep for master/volume readiness
- Improve test reliability and debugging
* Address critical and high-priority review comments
- Add per-session locking to prevent race conditions in updateTusSessionOffset
- Stream data directly to volume server instead of buffering entire chunk
- Only buffer 512 bytes for MIME type detection, then stream remaining data
- Clean up session locks when session is deleted
* Fix race condition to work across multiple filer instances
- Store each chunk as a separate file entry instead of updating session JSON
- Chunk file names encode offset, size, and fileId for atomic storage
- getTusSession loads chunks from directory listing (atomic read)
- Eliminates read-modify-write race condition across multiple filers
- Remove in-memory mutex that only worked for single filer instance
* Address code review comments: fix variable shadowing, sniff size, and test stability
- Rename path variable to reqPath to avoid shadowing path package
- Make sniff buffer size respect contentLength (read at most contentLength bytes)
- Handle Content-Length < 0 in creation-with-upload (return error for chunked encoding)
- Fix test cluster: use temp directory for filer store, add startup delay
* Fix test stability: increase cluster stabilization delay to 5 seconds
The tests were intermittently failing because the volume server needed more
time to create volumes and register with the master. Increasing the delay
from 2 to 5 seconds fixes the flaky test behavior.
* Address PR review comments for TUS protocol support
- Fix strconv.Atoi error handling in test file (lines 386, 747)
- Fix lossy fileId encoding: use base64 instead of underscore replacement
- Add pagination support for ListDirectoryEntries in getTusSession
- Batch delete chunks instead of one-by-one in deleteTusSession
* Address additional PR review comments for TUS protocol
- Fix UploadAt timestamp: use entry.Crtime instead of time.Now()
- Remove redundant JSON content in chunk entry (metadata in filename)
- Refactor tusWriteData to stream in 4MB chunks to avoid OOM on large uploads
- Pass filer.Entry to parseTusChunkPath to preserve actual upload time
* Address more PR review comments for TUS protocol
- Normalize TUS path once in filer_server.go, store in option.TusPath
- Remove redundant path normalization from TUS handlers
- Remove goto statement in tusCreateHandler, simplify control flow
* Remove unnecessary mutexes in tusWriteData
The upload loop is sequential, so uploadErrLock and chunksLock are not needed.
* Rename updateTusSessionOffset to saveTusChunk
Remove unused newOffset parameter and rename function to better reflect its purpose.
* Improve TUS upload performance and add path validation
- Reuse operation.Uploader across sub-chunks for better connection reuse
- Guard against TusPath='/' to prevent hijacking all filer routes
* Address PR review comments for TUS protocol
- Fix critical chunk filename parsing: use strings.Cut instead of SplitN
to correctly handle base64-encoded fileIds that may contain underscores
- Rename tusPath to tusBasePath for naming consistency across codebase
- Add background garbage collection for expired TUS sessions (runs hourly)
- Improve error messages with %w wrapping for better debuggability
* Address additional TUS PR review comments
- Fix tusBasePath default to use leading slash (/.tus) for consistency
- Add chunk contiguity validation in completeTusUpload to detect gaps/overlaps
- Fix offset calculation to find maximum contiguous range from 0, not just last chunk
- Return 413 Request Entity Too Large instead of silently truncating content
- Document tusChunkSize rationale (4MB balances memory vs request overhead)
- Fix Makefile xargs portability by removing GNU-specific -r flag
- Add explicit -tusBasePath flag to integration test for robustness
- Fix README example to use /.uploads/tus path format
* Revert log_buffer changes (moved to separate PR)
* Minor style fixes from PR review
- Simplify tusBasePath flag description to use example format
- Add 'TUS upload' prefix to session not found error message
- Remove duplicate tusChunkSize comment
- Capitalize warning message for consistency
- Add grep filter to Makefile xargs for better empty input handling
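
The commit above wires TUS endpoints into the filer. As an illustration, here is a minimal sketch of one TUS round trip in Go against those endpoints, assuming a filer on localhost:8888 with the default -tusBasePath; the file name and payload are hypothetical, and only standard TUS headers (Tus-Resumable, Upload-Length, Upload-Offset) are used.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	base := "http://localhost:8888" // hypothetical filer address
	data := []byte("hello, resumable world")

	// 1. Create the upload (POST), declaring the total size up front.
	req, err := http.NewRequest(http.MethodPost, base+"/.tus/myfile.txt", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Tus-Resumable", "1.0.0")
	req.Header.Set("Upload-Length", fmt.Sprintf("%d", len(data)))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	// Location points at the upload resource, e.g. /.tus/.uploads/{id};
	// assuming a relative Location header here.
	uploadURL := base + resp.Header.Get("Location")

	// 2. Send the data at offset 0 (PATCH). A client resuming after an
	//    interruption would first issue HEAD to read Upload-Offset.
	req, err = http.NewRequest(http.MethodPatch, uploadURL, bytes.NewReader(data))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Tus-Resumable", "1.0.0")
	req.Header.Set("Upload-Offset", "0")
	req.Header.Set("Content-Type", "application/offset+octet-stream")
	resp, err = http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("server offset now:", resp.Header.Get("Upload-Offset"))
}
```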

commit 60649460b2 (19 hours ago)

fix: default policy storeType to memory when not specified (#7754)
When loading IAM config from JSON, if the policy section exists but storeType is not specified, default to 'memory' instead of 'filer'. This ensures that policies defined in JSON config files are properly loaded into memory for test environments and standalone setups that don't rely on the filer for policy persistence.

commit f64ce759e0 (21 hours ago)

feat(iam): add SetUserStatus and UpdateAccessKey actions (#7750)
feat(iam): add SetUserStatus and UpdateAccessKey actions (#7745)
Add ability to enable/disable users and access keys without deleting them.
## Changes
### Protocol Buffer Updates
- Add `disabled` field (bool) to Identity message for user status
  - false (default) = enabled, true = disabled
  - No backward compatibility hack needed since the zero value is correct
- Add `status` field (string: Active/Inactive) to Credential message
### New IAM Actions
- SetUserStatus: Enable or disable a user (requires admin)
- UpdateAccessKey: Change access key status (self-service or admin)
### Behavior
- Disabled users: All API requests return AccessDenied
- Inactive access keys: Signature validation fails
- Status check happens early in auth flow for performance
- Backward compatible: existing configs default to enabled (disabled=false)
### Use Cases
1. Temporary suspension: Disable user access during investigation
2. Key rotation: Deactivate old key before deletion
3. Offboarding: Disable rather than delete for audit purposes
4. Emergency response: Quickly disable compromised credentials
Fixes #7745
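
A minimal sketch of the early status check this commit describes, assuming simplified Identity/Credential types (the real ones are protobuf messages); errAccessDenied stands in for the actual S3 error code.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the fields the commit adds.
type Credential struct {
	AccessKey string
	Status    string // "Active" or "Inactive"
}

type Identity struct {
	Name        string
	Disabled    bool // new protobuf field; zero value keeps old configs enabled
	Credentials []Credential
}

var errAccessDenied = errors.New("AccessDenied")

// checkStatus mirrors the early status check in the auth flow: a disabled
// user is rejected outright, and an inactive access key fails validation.
func checkStatus(id *Identity, accessKey string) error {
	if id.Disabled {
		return errAccessDenied
	}
	for _, c := range id.Credentials {
		if c.AccessKey == accessKey && c.Status != "Inactive" {
			return nil
		}
	}
	return errAccessDenied
}

func main() {
	id := &Identity{Name: "bob", Credentials: []Credential{{AccessKey: "AKIATESTFAKEKEY1", Status: "Active"}}}
	fmt.Println(checkStatus(id, "AKIATESTFAKEKEY1")) // <nil>
	id.Disabled = true
	fmt.Println(checkStatus(id, "AKIATESTFAKEKEY1")) // AccessDenied
}
```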

commit f734b2d4bf (24 hours ago)

Refactor: Extract common IAM logic into shared weed/iam package (#7747)
This resolves GitHub issue #7747 by extracting duplicated IAM code into a shared package that both the embedded S3 IAM and standalone IAM use.
New shared package (weed/iam/):
- constants.go: Common constants (charsets, action strings, error messages)
- helpers.go: Shared helper functions (Hash, GenerateRandomString, GenerateAccessKeyId, GenerateSecretAccessKey, StringSlicesEqual, MapToStatementAction, MapToIdentitiesAction, MaskAccessKey)
- responses.go: Common IAM response structs (CommonResponse, ListUsersResponse, CreateUserResponse, etc.)
- helpers_test.go: Unit tests for shared helpers
Updated files:
- weed/s3api/s3api_embedded_iam.go: Use type aliases and function wrappers to the shared package, removing ~200 lines of duplicated code
- weed/iamapi/iamapi_management_handlers.go: Use shared package for constants and helper functions, removing ~100 lines of duplicated code
- weed/iamapi/iamapi_response.go: Re-export types from shared package for backwards compatibility
Benefits:
- Single source of truth for IAM constants and helpers
- Easier maintenance: changes only need to be made in one place
- Reduced risk of inconsistencies between embedded and standalone IAM
- Better test coverage through shared test suite
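
The aliasing pattern the refactor describes might look like the following sketch. The weed/iam package and the CommonResponse/Hash names come from the commit text, but the exact signatures are assumptions, and the snippet only builds inside the repository.

```go
package s3api

// Hypothetical sketch of the re-export pattern: shared definitions live in
// weed/iam, and the embedded s3api package aliases them so existing call
// sites keep compiling unchanged.

import "github.com/seaweedfs/seaweedfs/weed/iam"

// Type alias: old name, shared definition.
type iamCommonResponse = iam.CommonResponse

// Thin wrapper: the old helper name delegates to the shared one.
// The signature of iam.Hash is an assumption for illustration.
func iamHash(s string) string { return iam.Hash(s) }
```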

commit f41925b60b (24 hours ago)

Embed IAM API into S3 server (#7740)
* Embed IAM API into S3 server
This change simplifies the S3 and IAM deployment by embedding the IAM API directly into the S3 server, following the patterns used by MinIO and Ceph RGW.
Changes:
- Add -iam flag to S3 server (enabled by default)
- Create embedded IAM API handler in s3api package
- Register IAM routes (POST to /) in S3 server when enabled
- Deprecate standalone 'weed iam' command with warning
Benefits:
- Single binary, single port for both S3 and IAM APIs
- Simpler deployment and configuration
- Shared credential manager between S3 and IAM
- Backward compatible: 'weed iam' still works with deprecation warning
Usage:
- weed s3 -port=8333 # S3 + IAM on same port (default)
- weed s3 -iam=false # S3 only, disable embedded IAM
- weed iam -port=8111 # Deprecated, shows warning
* Fix nil pointer panic: add s3.iam flag to weed server command
The enableIam field was not initialized when running S3 via 'weed server', causing a nil pointer dereference when checking *s3opt.enableIam.
* Fix nil pointer panic: add s3.iam flag to weed filer command
The enableIam field was not initialized when running S3 via 'weed filer -s3', causing a nil pointer dereference when checking *s3opt.enableIam.
* Add integration tests for embedded IAM API
Tests cover:
- CreateUser, ListUsers, GetUser, UpdateUser, DeleteUser
- CreateAccessKey, DeleteAccessKey, ListAccessKeys
- CreatePolicy, PutUserPolicy, GetUserPolicy
- Implicit username extraction from authorization header
- Full user lifecycle workflow test
These tests validate the embedded IAM API functionality that was added in the S3 server, ensuring IAM operations work correctly when served from the same port as S3.
* Security: Use crypto/rand for IAM credential generation
SECURITY FIX: Replace math/rand with crypto/rand for generating access keys and secret keys. Using math/rand is not cryptographically secure and can lead to predictable credentials.
This change:
1. Replaces math/rand with crypto/rand in both:
- weed/s3api/s3api_embedded_iam.go (embedded IAM)
- weed/iamapi/iamapi_management_handlers.go (standalone IAM)
2. Removes the seededRand variable that was initialized with a time-based seed (predictable)
3. Updates StringWithCharset/iamStringWithCharset to:
- Use crypto/rand.Int() for secure random index generation
- Return an error for proper error handling
4. Updates CreateAccessKey to handle the new error return
5. Updates DoActions handlers to propagate errors properly
* Fix critical bug: DeleteUserPolicy was deleting entire user instead of policy
BUG FIX: DeleteUserPolicy was incorrectly deleting the entire user identity from s3cfg.Identities instead of just clearing the user's inline policy (Actions).
Before (wrong): s3cfg.Identities = append(s3cfg.Identities[:i], s3cfg.Identities[i+1:]...)
After (correct): ident.Actions = nil
Also:
- Added proper iamDeleteUserPolicyResponse / DeleteUserPolicyResponse types
- Fixed return type from iamPutUserPolicyResponse to iamDeleteUserPolicyResponse
Affected files:
- weed/s3api/s3api_embedded_iam.go (embedded IAM)
- weed/iamapi/iamapi_management_handlers.go (standalone IAM)
- weed/iamapi/iamapi_response.go (response types)
* Add tests for DeleteUserPolicy to prevent regression
Added two tests:
1. TestEmbeddedIamDeleteUserPolicy - Verifies that:
- User is NOT deleted (identity still exists)
- Credentials are NOT deleted
- Only Actions (policy) are cleared to nil
2. TestEmbeddedIamDeleteUserPolicyUserNotFound - Verifies:
- Returns 404 when user doesn't exist
These tests ensure the bug fixed in the previous commit (deleting user instead of policy) doesn't regress.
* Fix race condition: Add mutex lock to IAM DoActions
The DoActions function performs a read-modify-write operation on the shared IAM configuration without any locking. This could lead to race conditions and data loss if multiple requests modify the IAM config concurrently.
Added mutex lock at the start of DoActions in both:
- weed/s3api/s3api_embedded_iam.go (embedded IAM)
- weed/iamapi/iamapi_management_handlers.go (standalone IAM)
The lock protects the entire read-modify-write cycle:
1. GetS3ApiConfiguration (read)
2. Modify s3cfg based on action
3. PutS3ApiConfiguration (write)
* Fix action comparison and document CreatePolicy limitation
1. Replace reflect.DeepEqual with order-independent string slice comparison
- Added iamStringSlicesEqual/stringSlicesEqual helper functions
- Prevents duplicate policy statements when actions are in different order
2. Document CreatePolicy limitation in embedded IAM
- Added TODO comment explaining that managed policies are not persisted
- Users should use PutUserPolicy for inline policies
3. Fix deadlock in standalone IAM's CreatePolicy
- Removed nested lock acquisition (DoActions already holds the lock)
Files changed:
- weed/s3api/s3api_embedded_iam.go
- weed/iamapi/iamapi_management_handlers.go
* Add rate limiting to embedded IAM endpoint
Apply circuit breaker rate limiting to the IAM endpoint to prevent abuse. Also added request tracking for IAM operations. The IAM endpoint now follows the same pattern as other S3 endpoints:
- track() for request metrics
- s3a.iam.Auth() for authentication
- s3a.cb.Limit() for rate limiting
* Fix handleImplicitUsername to properly look up username from AccessKeyId
According to the AWS spec, when UserName is not specified in an IAM request, IAM should determine the username implicitly based on the AccessKeyId signing the request. Previously, the code incorrectly extracted s[2] (region field) from the SigV4 credential string as the username.
This fix:
1. Extracts the AccessKeyId from s[0] of the credential string
2. Looks up the AccessKeyId in the credential store using LookupByAccessKey
3. Uses the identity's Name field as the username if found
Also:
- Added exported LookupByAccessKey wrapper method to IdentityAccessManagement
- Updated tests to verify correct access key lookup behavior
- Applied fix to both embedded IAM and standalone IAM implementations
* Fix CreatePolicy to not trigger unnecessary save
CreatePolicy validates the policy document and returns metadata but does not actually store the policy (SeaweedFS uses inline policies attached via PutUserPolicy). However, 'changed' was left as true, triggering an unnecessary save operation. Set changed = false after successful CreatePolicy validation in both embedded IAM and standalone IAM implementations.
* Improve embedded IAM test quality
- Remove unused mock types (mockCredentialManager, mockEmbeddedIamApi)
- Use proto.Clone instead of proto.Merge for proper deep copy semantics
- Replace brittle regex-based XML error extraction with proper XML unmarshalling
- Remove unused regexp import
- Add state and field assertions to tests:
  - CreateUser: verify username in response and user persisted in config
  - ListUsers: verify response contains expected users
  - GetUser: verify username in response
  - CreatePolicy: verify policy metadata in response
  - PutUserPolicy: verify actions were attached to user
  - CreateAccessKey: verify credentials in response and persisted in config
* Remove shared test state and improve executeEmbeddedIamRequest
- Remove package-level embeddedIamApi variable to avoid shared test state
- Update executeEmbeddedIamRequest to accept API instance as parameter
- Only call xml.Unmarshal when v != nil, making nil-v cases explicit
- Return unmarshal error properly instead of always returning it
- Update all tests to create their own EmbeddedIamApiForTest instance
- Each test now has isolated state, preventing test interdependencies
* Add comprehensive test coverage for embedded IAM
Added tests for previously uncovered functions:
- iamStringSlicesEqual: 0% → 100%
- iamMapToStatementAction: 40% → 100%
- iamMapToIdentitiesAction: 30% → 70%
- iamHash: 100%
- iamStringWithCharset: 85.7%
- GetPolicyDocument: 75% → 100%
- CreatePolicy: 91.7% → 100%
- DeleteUser: 83.3% → 100%
- GetUser: 83.3% → 100%
- ListAccessKeys: 55.6% → 88.9%
New test cases for helper functions, error handling, and edge cases.
* Document IAM code duplication and reference GitHub issue #7747
Added comments to both IAM implementations noting the code duplication and referencing the tracking issue for future refactoring:
- weed/s3api/s3api_embedded_iam.go (embedded IAM)
- weed/iamapi/iamapi_management_handlers.go (standalone IAM)
See: https://github.com/seaweedfs/seaweedfs/issues/7747
* Implement granular IAM authorization for self-service operations
Previously, all IAM actions required ACTION_ADMIN permission, which was overly restrictive. This change implements AWS-like granular permissions:
Self-service operations (allowed without admin for own resources):
- CreateAccessKey (on own user)
- DeleteAccessKey (on own user)
- ListAccessKeys (on own user)
- GetUser (on own user)
- UpdateAccessKey (on own user)
Admin-only operations:
- CreateUser, DeleteUser, UpdateUser
- PutUserPolicy, GetUserPolicy, DeleteUserPolicy
- CreatePolicy
- ListUsers
- Operations on other users
The new AuthIam middleware:
1. Authenticates the request (signature verification)
2. Parses the IAM Action and target UserName
3. For self-service actions, allows if the user is operating on own resources
4. For all other actions or operations on other users, requires admin
* Fix misleading comment in standalone IAM CreatePolicy
The comment incorrectly stated that CreatePolicy only validates the policy document. In the standalone IAM server, CreatePolicy actually persists the policy via iama.s3ApiConfig.PutPolicies(). The changed flag is false because it doesn't modify s3cfg.Identities, not because nothing is stored.
* Simplify IAM auth and add RequestId to responses
- Remove redundant ACTION_ADMIN fallback in AuthIam: The action parameter in authRequest is for permission checking, not signature verification. If auth fails with ACTION_READ, it will fail with ACTION_ADMIN too.
- Add SetRequestId() call before writing IAM responses for AWS compatibility. All IAM response structs embed iamCommonResponse, which has SetRequestId().
* Address code review feedback for IAM implementation
1. auth_credentials.go: Add documentation warning that LookupByAccessKey returns internal pointers that should not be mutated.
2. iamapi_management_handlers.go & s3api_embedded_iam.go: Add input guards for StringWithCharset/iamStringWithCharset when length <= 0 or charset is empty to avoid runtime errors from rand.Int.
3. s3api_embedded_iam_test.go: Don't ignore xml.Marshal errors in test DoActions handler. Return proper error response if marshaling fails.
4. s3api_embedded_iam_test.go: Use obviously fake access key IDs (AKIATESTFAKEKEY*) to avoid CI secret scanner false positives.
* Address code review feedback for IAM implementation (batch 2)
1. iamapi/iamapi_management_handlers.go:
- Redact Authorization header log (security: avoid exposing signature)
- Add nil-guard for iama.iam before LookupByAccessKey call
2. iamapi/iamapi_test.go:
- Replace real-looking access keys with obviously fake ones (AKIATESTFAKEKEY*) to avoid CI secret scanner false positives
3. s3api/s3api_embedded_iam.go - CreateUser:
- Validate UserName is not empty (return ErrCodeInvalidInputException)
- Check for duplicate users (return ErrCodeEntityAlreadyExistsException)
4. s3api/s3api_embedded_iam.go - CreateAccessKey:
- Return ErrCodeNoSuchEntityException if user doesn't exist
- Removed implicit user creation behavior
5. s3api/s3api_embedded_iam.go - getActions:
- Fix S3 ARN parsing for bucket/path patterns
- Handle mybucket, mybucket/*, mybucket/path/* correctly
- Return error if no valid actions found in policy
6. s3api/s3api_embedded_iam.go - handleImplicitUsername:
- Redact Authorization header log
- Add nil-guard for e.iam
7. s3api/s3api_embedded_iam.go - DoActions:
- Reload in-memory IAM maps after credential mutations
- Call LoadS3ApiConfigurationFromCredentialManager after save
8. s3api/auth_credentials.go - AuthSignatureOnly:
- Add new signature-only authentication method
- Bypasses S3 authorization checks for IAM operations
- Used by AuthIam to properly separate signature verification from IAM-specific permission checks
* Fix nil pointer dereference and error handling in IAM
1. AuthIam: Add nil check for identity after AuthSignatureOnly
- AuthSignatureOnly can return nil identity with ErrNone for authTypePostPolicy or authTypeStreamingUnsigned
- Now returns ErrAccessDenied if identity is nil
2. writeIamErrorResponse: Add missing error code cases
- ErrCodeEntityAlreadyExistsException -> HTTP 409 Conflict
- ErrCodeInvalidInputException -> HTTP 400 Bad Request
3. UpdateUser: Use consistent error handling
- Changed from direct ErrInvalidRequest to writeIamErrorResponse
- Now returns correct HTTP status codes based on error type
* Add IAM config reload to standalone IAM server after mutations
Match the behavior of embedded IAM (s3api_embedded_iam.go) by reloading the in-memory identity maps after persisting configuration changes. This ensures newly created access keys are visible to LookupByAccessKey immediately without requiring a server restart.
* Minor improvements to test helpers and log masking
1. iamapi_test.go: Update mustMarshalJSON to use t.Helper() and t.Fatal() instead of panic() for better test diagnostics
2. s3api_embedded_iam.go: Mask access key in 'not found' log message to avoid exposing full access key IDs in logs
* Mask access key in standalone IAM log message for consistency
Match the embedded IAM version by masking the access key ID in the 'not found' log message (show only first 4 chars).
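
A sketch of the crypto/rand generation scheme the security fix describes: draw each character with crypto/rand.Int, return an error instead of panicking, and guard against empty inputs. The function name and charset here are illustrative, not the exact SeaweedFS code.

```go
package main

import (
	"crypto/rand"
	"errors"
	"fmt"
	"math/big"
)

// stringWithCharset draws each character with crypto/rand, so generated
// credentials are not predictable from a time-based seed.
func stringWithCharset(length int, charset string) (string, error) {
	if length <= 0 || charset == "" {
		return "", errors.New("invalid length or charset")
	}
	b := make([]byte, length)
	for i := range b {
		// rand.Int returns a uniform value in [0, len(charset)).
		n, err := rand.Int(rand.Reader, big.NewInt(int64(len(charset))))
		if err != nil {
			return "", err
		}
		b[i] = charset[n.Int64()]
	}
	return string(b), nil
}

func main() {
	key, err := stringWithCharset(20, "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567")
	if err != nil {
		panic(err)
	}
	fmt.Println("access key id:", key)
}
```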

commit a77674ead3 (1 day ago)

fix: use path instead of filepath for S3 object paths on Windows (#7739)
fix: use path instead of filepath for S3 object paths on Windows (#7733)
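
The difference the fix addresses is easy to demonstrate: the path package always uses forward slashes (as S3 keys require), while filepath follows the host OS separator.

```go
package main

import (
	"fmt"
	"path"
	"path/filepath"
)

// S3 object keys always use forward slashes, so the fix swaps filepath
// (OS-dependent separators) for path (always '/'). On Windows,
// filepath.Join would produce backslashes that break S3 key semantics.
func main() {
	fmt.Println(path.Join("bucket", "dir", "obj"))     // bucket/dir/obj on every OS
	fmt.Println(filepath.Join("bucket", "dir", "obj")) // bucket\dir\obj on Windows
}
```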

commit 51c2ab0107 (2 days ago)

fix: admin UI bucket deletion with filer group configured (#7735)

commit f70cd05404 (2 days ago)

fix: CORS wildcard subdomain matching cache race condition (#7736)
test: add HTTPS test cases for CORS wildcard subdomain matching
This adds comprehensive test coverage for HTTPS subdomain wildcard matching in TestMatchesOrigin:
- https exact match
- https no match
- https wildcard subdomain match
- https wildcard subdomain no match (base domain)
- https wildcard subdomain no match (different domain)
- protocol mismatch tests (http pattern vs https origin and vice versa)
The matchWildcard function was already working correctly; this just adds test coverage for the HTTPS cases that were previously untested.
Note: The cache invalidation is already handled synchronously by setBucketMetadata(), which is called via:
- UpdateBucketCORS -> UpdateBucketMetadata -> setBucketMetadata
- ClearBucketCORS -> UpdateBucketMetadata -> setBucketMetadata
Added clarifying comments to document this call chain.
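
A toy version of the wildcard origin matching those tests exercise, assuming the semantics listed above (exact scheme match, "*." matches subdomains only, base domain excluded); this is an illustration, not the actual matchWildcard code.

```go
package main

import (
	"fmt"
	"strings"
)

// matchOrigin: the scheme must match exactly, "*." matches one or more
// subdomain labels, and the bare base domain does NOT match "*.example.com".
func matchOrigin(pattern, origin string) bool {
	pScheme, pHost, ok1 := strings.Cut(pattern, "://")
	oScheme, oHost, ok2 := strings.Cut(origin, "://")
	if !ok1 || !ok2 || pScheme != oScheme {
		return false // protocol mismatch (http pattern vs https origin, etc.)
	}
	if base, found := strings.CutPrefix(pHost, "*."); found {
		return strings.HasSuffix(oHost, "."+base) // sub.example.com, not example.com
	}
	return pHost == oHost
}

func main() {
	fmt.Println(matchOrigin("https://*.example.com", "https://a.example.com")) // true
	fmt.Println(matchOrigin("https://*.example.com", "https://example.com"))   // false (base domain)
	fmt.Println(matchOrigin("http://*.example.com", "https://a.example.com"))  // false (scheme)
}
```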

commit f77e6ed2d4 (2 days ago)

fix: admin UI bucket delete now properly deletes collection and checks Object Lock (#7734)
* fix: admin UI bucket delete now properly deletes collection and checks Object Lock
Fixes #7711
The admin UI's DeleteS3Bucket function was missing two critical behaviors:
1. It did not delete the collection from the master (unlike the s3.bucket.delete shell command), leaving orphaned volume data that caused fs.verify errors.
2. It did not check for Object Lock protections before deletion, potentially allowing deletion of buckets with locked objects.
Changes:
- Add shared Object Lock checking utilities to object_lock_utils.go:
  - EntryHasActiveLock: standalone function to check if an entry has an active lock
  - HasObjectsWithActiveLocks: shared function to scan a bucket for locked objects
- Refactor S3 API entryHasActiveLock to use the shared EntryHasActiveLock function
- Update admin UI DeleteS3Bucket to:
  - Check Object Lock using the shared HasObjectsWithActiveLocks utility
  - Delete the collection before deleting filer entries (matching s3.bucket.delete)
* refactor: S3 API uses shared Object Lock utilities
Removes 114 lines of duplicated code from s3api_bucket_handlers.go by having hasObjectsWithActiveLocks delegate to the shared HasObjectsWithActiveLocks function in object_lock_utils.go. Now both S3 API and Admin UI use the same shared utilities:
- EntryHasActiveLock
- HasObjectsWithActiveLocks
- recursivelyCheckLocksWithClient
- checkVersionsForLocksWithClient
* feat: s3.bucket.delete shell command now checks Object Lock
Add Object Lock protection to the s3.bucket.delete shell command. If the bucket has Object Lock enabled and contains objects with active retention or legal hold, deletion is prevented. Also refactors Object Lock checking utilities into a new s3_objectlock package to avoid import cycles between shell, s3api, and admin packages. All three components now share the same logic:
- S3 API (DeleteBucketHandler)
- Admin UI (DeleteS3Bucket)
- Shell command (s3.bucket.delete)
* refactor: unified Object Lock checking and consistent deletion parameters
1. Add CheckBucketForLockedObjects() - a unified function that combines:
- Bucket entry lookup
- Object Lock enabled check
- Scan for locked objects
2. All three components now use this single function:
- S3 API (via s3api.CheckBucketForLockedObjects)
- Admin UI (via s3api.CheckBucketForLockedObjects)
- Shell command (via s3_objectlock.CheckBucketForLockedObjects)
3. Aligned deletion parameters across all components:
- isDeleteData: false (collection already deleted separately)
- isRecursive: true
- ignoreRecursiveError: true
* fix: properly handle non-EOF errors in Recv() loops
The Recv() loops in recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient were breaking on any error, which could hide real stream errors and incorrectly report 'no locks found'. Now:
- io.EOF: break loop (normal end of stream)
- any other error: return it so the caller knows the stream failed
* fix: address PR review comments
1. Add path traversal protection - validate entry names before building subdirectory paths. Skip entries with empty names, '.', '..', or containing path separators.
2. Use exact match for the .versions folder instead of HasSuffix() to avoid mismatching unrelated directories like 'foo.versions'.
3. Replace path.Join with simple string concatenation since we now validate entry names.
* refactor: extract paginateEntries helper to reduce duplication
The recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient functions shared significant structural similarity. Extracted a generic paginateEntries helper that:
- Handles pagination logic (lastFileName tracking, Limit)
- Handles stream receiving with proper EOF vs error handling
- Validates entry names (path traversal protection)
- Calls a processEntry callback for business logic
This centralizes pagination logic and makes the code more maintainable.
* feat: add context propagation for timeout and cancellation support
All Object Lock checking functions now accept a context.Context parameter:
- paginateEntries(ctx, client, dir, processEntry)
- recursivelyCheckLocksWithClient(ctx, client, dir, hasLocks, currentTime)
- checkVersionsForLocksWithClient(ctx, client, versionsDir, hasLocks, currentTime)
- HasObjectsWithActiveLocks(ctx, client, bucketPath)
- CheckBucketForLockedObjects(ctx, client, bucketsPath, bucketName)
This enables:
- Timeout support for large bucket scans
- Cancellation propagation from HTTP requests
- The S3 API handler now uses r.Context() for proper request lifecycle
* fix: address PR review comments
1. Add DefaultBucketsPath constant in admin_server.go instead of hardcoding "/buckets" in multiple places.
2. Add defensive normalization in EntryHasActiveLock:
- TrimSpace to handle whitespace around values
- ToUpper for case-insensitive comparison of legal hold and retention mode values
- TrimSpace on retention date before parsing
* fix: use ctx variable consistently instead of context.Background()
In both DeleteS3Bucket and command_s3_bucket_delete, use the ctx variable defined at the start of the function for all gRPC calls instead of creating new context.Background() instances.
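
The extracted paginateEntries pattern could look roughly like this sketch, with a stand-in list function in place of the filer gRPC stream; EOF ends the loop, other errors propagate, and unsafe entry names are skipped. The types are simplified assumptions, not the actual helper.

```go
package objectlock

import (
	"io"
	"strings"
)

type entry struct{ Name string }

// listPageFunc returns one page of entries starting after lastFileName;
// it stands in for the filer stream used by the real helper.
type listPageFunc func(dir, lastFileName string, limit int) ([]entry, error)

func paginateEntries(list listPageFunc, dir string, process func(entry) error) error {
	lastFileName := ""
	const pageSize = 1024
	for {
		entries, err := list(dir, lastFileName, pageSize)
		if err == io.EOF {
			return nil // normal end of stream
		}
		if err != nil {
			return err // real stream failure: report it, don't swallow it
		}
		for _, e := range entries {
			// Path traversal protection: reject names that change directories.
			if e.Name == "" || e.Name == "." || e.Name == ".." || strings.ContainsAny(e.Name, "/\\") {
				continue
			}
			if err := process(e); err != nil {
				return err
			}
			lastFileName = e.Name
		}
		if len(entries) < pageSize {
			return nil // short page: nothing more to fetch
		}
	}
}
```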

commit d80d8be012 (2 days ago)

fix(s3): start KeepConnectedToMaster for filer discovery with filerGroup (#7732)
Fixes #7721
When the S3 server is configured with a filerGroup, it creates a MasterClient to enable dynamic filer discovery. However, the KeepConnectedToMaster() background goroutine was never started, causing GetMaster() to block indefinitely in WaitUntilConnected(). This resulted in the log message
WaitUntilConnected still waiting for master connection (attempt N)...
being logged repeatedly every ~20 seconds.
The fix adds the missing 'go masterClient.KeepConnectedToMaster(ctx)' call to properly establish the connection to master servers. Also adds unit tests to verify WaitUntilConnected respects context cancellation.

commit 72853a3bbf (3 days ago)

fix: prevent path doubling in versioned object listing (#7729)
* fix: prevent path doubling in versioned object listing
Fix path doubling bug in getLatestVersionEntryForListOperation that caused
Velero/Kopia backups to fail when using S3 bucket versioning.
The issue was that when creating logical entries for versioned object listing,
the entry.Name was set to the full object path (e.g., 'kopia/logpaste/kopia.blobcfg')
instead of just the base filename ('kopia.blobcfg'). When this entry was used in
the list callback which combines dir + entry.Name, the paths got doubled:
'/buckets/velero/kopia/logpaste/kopia/logpaste/kopia.blobcfg'
This caused Kopia to fail loading pack indexes with the error:
'unable to load pack indexes despite 10 retries'
The fix uses path.Base(object) to extract only the filename portion, matching
how regular (non-versioned) entries work in the listing callback.
Fixes: GitHub discussion #7573
* refactor: use path.Base directly in test instead of reimplementing
Address code review feedback to simplify the test by using the standard
library path.Base function directly instead of reimplementing it.
* remove test: unit test for path.Base doesn't add much value
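
A toy reproduction of the doubling and the fix, using only the standard library's path.Base as the commit does.

```go
package main

import (
	"fmt"
	"path"
)

// The fix stores only the base filename in entry.Name, since the listing
// callback later joins dir + entry.Name. Using the full object path doubled
// the directory part, as this reproduction shows.
func main() {
	dir := "/buckets/velero/kopia/logpaste"
	object := "kopia/logpaste/kopia.blobcfg"

	fmt.Println(dir + "/" + object)            // doubled: .../kopia/logpaste/kopia/logpaste/kopia.blobcfg
	fmt.Println(dir + "/" + path.Base(object)) // correct: .../kopia/logpaste/kopia.blobcfg
}
```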

commit 6fb3ec968d (3 days ago)

s3: allow -s3.config and -s3.iam.config to work together (#7727)
When both -s3.config and -s3.iam.config are configured, traditional credentials from -s3.config were failing with Access Denied because the authorization code always used IAM authorization when IAM integration was configured.
The fix checks if the identity has legacy Actions (from -s3.config). If so, use the legacy canDo() authorization. Only use IAM authorization for JWT/STS identities that don't have legacy Actions.
This allows both configuration options to coexist:
- Traditional credentials use legacy authorization
- JWT/STS credentials use IAM authorization
Fixes #7720
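
The authorization split might be sketched as follows, with a simplified Identity and placeholder evaluators; the 'Action:bucket' strings are only illustrative of -s3.config style actions, not the exact format.

```go
package main

import "fmt"

// Identity is a simplified stand-in for the s3api identity type.
type Identity struct {
	Name    string
	Actions []string // non-empty for legacy -s3.config credentials
}

// authorize routes legacy credentials to canDo() and everything else
// (JWT/STS identities without legacy Actions) to the IAM engine.
func authorize(id Identity, action, bucket string) bool {
	if len(id.Actions) > 0 {
		return legacyCanDo(id, action, bucket) // traditional credentials
	}
	return iamAuthorize(id, action, bucket) // JWT/STS credentials
}

func legacyCanDo(id Identity, action, bucket string) bool {
	for _, a := range id.Actions {
		if a == action+":"+bucket || a == "Admin" {
			return true
		}
	}
	return false
}

// iamAuthorize is a placeholder for the advanced IAM policy evaluation path.
func iamAuthorize(id Identity, action, bucket string) bool { return false }

func main() {
	legacy := Identity{Name: "alice", Actions: []string{"Write:mybucket"}}
	fmt.Println(authorize(legacy, "Write", "mybucket")) // true via legacy canDo
}
```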

commit b0e0c5aaab (3 days ago)

s3: enable auth when IAM integration is configured (#7726)
When only IAM integration is configured (via -s3.iam.config) without traditional S3 identities, the isAuthEnabled flag was not being set, causing the Auth middleware to bypass all authentication checks. This fix ensures that when SetIAMIntegration is called with a non-nil integration, isAuthEnabled is set to true, properly enforcing authentication for all requests.
Added negative authentication tests:
- TestS3AuthenticationDenied: tests rejection of unauthenticated, invalid, and expired JWT requests
- TestS3IAMOnlyModeRejectsAnonymous: tests that IAM-only mode properly rejects anonymous requests
Fixes #7724

commit de3ecaf0de (5 days ago)

s3: fix presigned POST upload missing slash between bucket and key (#7714)
* s3: fix presigned POST upload missing slash between bucket and key
When uploading a file using presigned POST (e.g., boto3.generate_presigned_post), the file was saved with the bucket name and object key concatenated without a slash (e.g., 'my-bucketfilename' instead of 'my-bucket/filename'). The issue was that PostPolicyBucketHandler retrieved the object key from form values without ensuring it had a leading slash, unlike GetBucketAndObject() which normalizes the key.
Fixes #7713
* s3: add tests for presigned POST key normalization
Add comprehensive tests for PostPolicyBucketHandler to ensure:
- Object keys without leading slashes are properly normalized
- ${filename} substitution works correctly with normalization
- Path construction correctly separates bucket and key
- Form value extraction works properly
These tests would have caught the bug fixed in the previous commit, where keys like 'test_image.png' were concatenated with the bucket without a separator, resulting in 'my-buckettest_image.png'.
* s3: create normalizeObjectKey function for robust key normalization
Address review feedback by creating a reusable normalizeObjectKey function that both adds a leading slash and removes duplicate slashes, aligning with how other handlers process paths (e.g., toFilerPath uses removeDuplicateSlashes). The function handles edge cases like:
- Keys without leading slashes (the original bug)
- Keys with duplicate slashes (e.g., 'a//b' -> '/a/b')
- Keys with leading duplicate slashes (e.g., '///a' -> '/a')
Updated tests to use the new function and added TestNormalizeObjectKey for comprehensive coverage.
* s3: move NormalizeObjectKey to s3_constants for shared use
Move the NormalizeObjectKey function to the s3_constants package so it can be reused by:
- GetBucketAndObject() - now normalizes all object keys from URL paths
- GetPrefix() - now normalizes prefix query parameters
- PostPolicyBucketHandler - normalizes keys from form values
This ensures consistent object key normalization across all S3 API handlers, handling both missing leading slashes and duplicate slashes.
Benefits:
- Single source of truth for key normalization
- GetBucketAndObject now removes duplicate slashes (previously it only added a leading slash)
- All handlers benefit from the improved normalization automatically
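
A sketch with the same contract as the normalizeObjectKey described above (leading slash plus duplicate-slash collapsing), reproducing the commit's own examples; the real function lives in s3_constants.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeObjectKey ensures a single leading slash and collapses
// duplicate slashes, matching the behavior the commit describes.
func normalizeObjectKey(key string) string {
	if !strings.HasPrefix(key, "/") {
		key = "/" + key
	}
	// Collapse runs of '/' into one.
	for strings.Contains(key, "//") {
		key = strings.ReplaceAll(key, "//", "/")
	}
	return key
}

func main() {
	fmt.Println(normalizeObjectKey("test_image.png")) // /test_image.png
	fmt.Println(normalizeObjectKey("a//b"))           // /a/b
	fmt.Println(normalizeObjectKey("///a"))           // /a
}
```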

commit 2188d1ccc5 (5 days ago)

fix object name

commit 27a28faf49 (5 days ago)

Fix s3 versioning listing bugs (#7705)
* fix: add pagination to list-object-versions for buckets with >1000 objects
The findVersionsRecursively() function used a fixed limit of 1000 entries
without pagination. This caused objects beyond the first 1000 entries
(sorted alphabetically) to never appear in list-object-versions responses.
Changes:
- Add pagination loop using filer.PaginationSize (1024)
- Use isLast flag from s3a.list() to detect end of pagination
- Track startFrom marker for each page
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* fix: prevent infinite loop in ListObjects when processing .versions directories
The doListFilerEntries() function processes .versions directories in a
secondary loop after the main entry loop, but failed to update nextMarker.
This caused infinite pagination loops when results were truncated, as the
same .versions directories would be reprocessed on each page.
Bug introduced by:
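
The pagination loop added by the first fix follows a standard marker pattern; this sketch uses a stand-in list function, since the real s3a.list() signature differs.

```go
package s3api

// listFunc stands in for s3a.list(): one page of entries plus an isLast flag.
type listFunc func(dir, startFrom string, limit int) (entries []string, isLast bool, err error)

const paginationSize = 1024 // mirrors filer.PaginationSize

// findAllVersions pages through a directory, advancing the startFrom marker
// after each page and stopping when the server reports the last page.
func findAllVersions(list listFunc, dir string) ([]string, error) {
	var all []string
	startFrom := ""
	for {
		entries, isLast, err := list(dir, startFrom, paginationSize)
		if err != nil {
			return nil, err
		}
		all = append(all, entries...)
		if isLast || len(entries) == 0 {
			return all, nil // no page after this one
		}
		startFrom = entries[len(entries)-1] // next page starts after this marker
	}
}
```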

commit b4e2cca204 (6 days ago)

s3api: remove redundant auth verification in getRequestDataReader (#7685)
* s3api: remove redundant auth verification in getRequestDataReader
The handlers PutObjectHandler and PutObjectPartHandler are already wrapped with s3a.iam.Auth() middleware, which performs signature verification via authRequest() before the handler is invoked. The signature verification for authTypeSignedV2, authTypePresignedV2, authTypePresigned, and authTypeSigned in getRequestDataReader was therefore redundant.
The newChunkedReader() call for streaming auth types is kept, as it's needed to parse the chunked transfer encoding and extract the actual data.
Fixes #7683
* simplify switch to if statement for single condition

commit d6d893c8c3 (6 days ago)

s3: add s3:ExistingObjectTag condition support for bucket policies (#7677)
* s3: add s3:ExistingObjectTag condition support in policy engine
Add support for s3:ExistingObjectTag/<tag-key> condition keys in bucket
policies, allowing access control based on object tags.
Changes:
- Add ObjectEntry field to PolicyEvaluationArgs (entry.Extended metadata)
- Update EvaluateConditions to handle s3:ExistingObjectTag/<key> format
- Extract tag value from entry metadata using X-Amz-Tagging-<key> prefix
This enables policies like:
{
"Condition": {
"StringEquals": {
"s3:ExistingObjectTag/status": ["public"]
}
}
}
Fixes: https://github.com/seaweedfs/seaweedfs/issues/7447
* s3: update EvaluatePolicy to accept object entry for tag conditions
Update BucketPolicyEngine.EvaluatePolicy to accept objectEntry parameter
(entry.Extended metadata) for evaluating tag-based policy conditions.
Changes:
- Add objectEntry parameter to EvaluatePolicy method
- Update callers in auth_credentials.go and s3api_bucket_handlers.go
- Pass nil for objectEntry in auth layer (entry fetched later in handlers)
For tag-based conditions to work, handlers should call EvaluatePolicy
with the object's entry.Extended after fetching the entry from filer.
* s3: add tests for s3:ExistingObjectTag policy conditions
Add comprehensive tests for object tag-based policy conditions:
- TestExistingObjectTagCondition: Basic tag matching scenarios
- Matching/non-matching tag values
- Missing tags, no tags, empty tags
- Multiple tags with one matching
- TestExistingObjectTagConditionMultipleTags: Multiple tag conditions
- Both tags match
- Only one tag matches
- TestExistingObjectTagDenyPolicy: Deny policies with tag conditions
- Default allow without tag
- Deny when specific tag present
* s3: document s3:ExistingObjectTag support and feature status
Update policy engine documentation:
- Add s3:ExistingObjectTag/<tag-key> to supported condition keys
- Add 'Object Tag-Based Access Control' section with examples
- Add 'Feature Status' section with implemented and planned features
Planned features for future implementation:
- s3:RequestObjectTag/<key>
- s3:RequestObjectTagKeys
- s3:x-amz-server-side-encryption
- Cross-account access
* Implement tag-based policy re-check in handlers
- Add checkPolicyWithEntry helper to S3ApiServer for handlers to re-check
policy after fetching object entry (for s3:ExistingObjectTag conditions)
- Add HasPolicyForBucket method to policy engine for efficient check
- Integrate policy re-check in GetObjectHandler after entry is fetched
- Integrate policy re-check in HeadObjectHandler after entry is fetched
- Update auth_credentials.go comments to explain two-phase evaluation
- Update documentation with supported operations for tag-based conditions
This implements 'Approach 1' where handlers re-check the policy with
the object entry after fetching it, allowing tag-based conditions to
be properly evaluated.
* Add integration tests for s3:ExistingObjectTag conditions
- Add TestCheckPolicyWithEntry: tests checkPolicyWithEntry helper with various
tag scenarios (matching tags, non-matching tags, empty entry, nil entry)
- Add TestCheckPolicyWithEntryNoPolicyForBucket: tests early return when no policy
- Add TestCheckPolicyWithEntryNilPolicyEngine: tests nil engine handling
- Add TestCheckPolicyWithEntryDenyPolicy: tests deny policies with tag conditions
- Add TestHasPolicyForBucket: tests HasPolicyForBucket method
These tests cover the Phase 2 policy evaluation with object entry metadata,
ensuring tag-based conditions are properly evaluated.
* Address code review nitpicks
- Remove unused extractObjectTags placeholder function (engine.go)
- Add clarifying comment about s3:ExistingObjectTag/<key> evaluation
- Consolidate duplicate tag-based examples in README
- Factor out tagsToEntry helper to package level in tests
* Address code review feedback
- Fix unsafe type assertions in GetObjectHandler and HeadObjectHandler
when getting identity from context (properly handle type assertion failure)
- Extract getConditionContextValue helper to eliminate duplicated logic
between EvaluateConditions and EvaluateConditionsLegacy
- Ensure consistent handling of missing condition keys (always return
empty slice)
* Fix GetObjectHandler to match HeadObjectHandler pattern
Add safety check for nil objectEntryForSSE before tag-based policy
evaluation, ensuring tag-based conditions are always evaluated rather
than silently skipped if entry is unexpectedly nil.
Addresses review comment from Copilot.
* Fix HeadObject action name in docs for consistency
Change 'HeadObject' to 's3:HeadObject' to match other action names.
* Extract recheckPolicyWithObjectEntry helper to reduce duplication
Move the repeated identity extraction and policy re-check logic from
GetObjectHandler and HeadObjectHandler into a shared helper method.
* Add validation for empty tag key in s3:ExistingObjectTag condition
Prevent potential issues with malformed policies containing
s3:ExistingObjectTag/ (empty tag key after slash).
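
The two-phase evaluation can be illustrated with a small helper that resolves an s3:ExistingObjectTag/<key> condition key against an entry's extended metadata, using the X-Amz-Tagging- prefix named in the commit; the map type is a simplified stand-in for entry.Extended.

```go
package main

import (
	"fmt"
	"strings"
)

const tagPrefix = "X-Amz-Tagging-"

// existingObjectTagValue extracts the tag value for a condition key like
// "s3:ExistingObjectTag/status" from the entry's extended metadata.
// An empty tag key after the slash is rejected, per the final review fix.
func existingObjectTagValue(extended map[string][]byte, conditionKey string) (string, bool) {
	tagKey, found := strings.CutPrefix(conditionKey, "s3:ExistingObjectTag/")
	if !found || tagKey == "" {
		return "", false // not a tag condition, or malformed (empty key)
	}
	v, ok := extended[tagPrefix+tagKey]
	return string(v), ok
}

func main() {
	entry := map[string][]byte{"X-Amz-Tagging-status": []byte("public")}
	v, ok := existingObjectTagValue(entry, "s3:ExistingObjectTag/status")
	fmt.Println(v, ok) // public true
}
```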

commit f5c0bcafa3 (1 week ago)

s3: fix ListBuckets not showing buckets created by authenticated users (#7648)
* s3: fix ListBuckets not showing buckets created by authenticated users
Fixes #7647
## Problem
Users with proper Admin permissions could create buckets but couldn't list them. The issue occurred because ListBucketsHandler was not wrapped with the Auth middleware, so the authenticated identity was never set in the request context.
## Root Cause
- PutBucketHandler uses iam.Auth() middleware which sets identity in context
- ListBucketsHandler did NOT use iam.Auth() middleware
- Without the middleware, GetIdentityNameFromContext() returned an empty string
- Bucket ownership checks failed because no identity was present
## Changes
1. Wrap ListBucketsHandler with iam.Auth() middleware (s3api_server.go)
2. Update ListBucketsHandler to get identity from context (s3api_bucket_handlers.go)
3. Add lookupByIdentityName() helper method (auth_credentials.go)
4. Add comprehensive test TestListBucketsIssue7647 (s3api_bucket_handlers_test.go)
## Testing
- All existing tests pass (1348 tests in s3api package)
- New test TestListBucketsIssue7647 validates the fix
- Verified admin users can see their created buckets
- Verified admin users can see all buckets
- Verified backward compatibility maintained
* s3: fix ListBuckets for JWT/Keycloak authentication
The previous fix broke JWT/Keycloak authentication because JWT identities are created on-the-fly and not stored in the iam.identities list. The lookupByIdentityName() would return nil for JWT users.
Solution: Store the full Identity object in the request context, not just the name. This allows ListBucketsHandler to retrieve the complete identity for all authentication types (SigV2, SigV4, JWT, Anonymous).
Changes:
- Add SetIdentityInContext/GetIdentityFromContext in s3_constants/header.go
- Update Auth middleware to store full identity in context
- Update ListBucketsHandler to retrieve identity from context first, with fallback to lookup for backward compatibility
* s3: optimize lookupByIdentityName to O(1) using map
Address code review feedback: Use a map for O(1) lookups instead of an O(N) linear scan through the identities list.
Changes:
- Add nameToIdentity map to IdentityAccessManagement struct
- Populate map in loadS3ApiConfiguration (consistent with accessKeyIdent pattern)
- Update lookupByIdentityName to use map lookup instead of loop
This improves performance when many identities are configured and aligns with the existing pattern used for accessKeyIdent lookups.
* s3: address code review feedback on nameToIdentity and logging
Address two code review points:
1. Wire nameToIdentity into the env-var fallback path
- The AWS env-var fallback in NewIdentityAccessManagementWithStore now populates the nameToIdentity map along with accessKeyIdent
- Keeps all identity lookup maps in sync
- Avoids potential issues if handlers rely on lookupByIdentityName
2. Improve access key lookup logging
- Reduce log verbosity: V(1) -> V(2) for failed lookups
- Truncate access keys in logs (show first 4 chars + ***)
- Include key length for debugging
- Prevents credential exposure in production logs
- Reduces log noise from misconfigured clients
* fmt
* s3: refactor truncation logic and improve error handling
Address additional code review feedback:
1. DRY principle: Extract key truncation logic into a local function
- Define truncate() helper at function start
- Reuse throughout lookupByAccessKey
- Eliminates code duplication
2. Enhanced security: Mask very short access keys
- Keys <= 4 chars now show as '***' instead of the full key
- Prevents any credential exposure even for short keys
- Consistent masking across all log statements
3. Improved robustness: Add warning log for type assertion failure
- Log unexpected type when the identity context object is the wrong type
- Helps debug potential middleware or context issues
- Better production diagnostics
4. Documentation: Add comment about future optimization opportunity
- Note potential for a lightweight identity view in context
- Suggests a credential-free view for better data minimization
- Documents the design decision for future maintainers
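
The context plumbing the fix describes follows the usual context.WithValue pattern; this sketch uses simplified names and types (the real helpers are SetIdentityInContext/GetIdentityFromContext in s3_constants).

```go
package main

import (
	"context"
	"fmt"
)

// ctxKey is an unexported key type, avoiding collisions with other
// context values; Identity is a simplified stand-in.
type ctxKey struct{}

type Identity struct{ Name string }

// setIdentityInContext is called by the Auth middleware so handlers can
// retrieve the full identity for any auth type (SigV4, JWT, anonymous).
func setIdentityInContext(ctx context.Context, id *Identity) context.Context {
	return context.WithValue(ctx, ctxKey{}, id)
}

func getIdentityFromContext(ctx context.Context) *Identity {
	id, ok := ctx.Value(ctxKey{}).(*Identity)
	if !ok {
		return nil // missing or wrong type: caller falls back to a name lookup
	}
	return id
}

func main() {
	ctx := setIdentityInContext(context.Background(), &Identity{Name: "admin"})
	fmt.Println(getIdentityFromContext(ctx).Name) // admin
}
```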

commit 5167bbd2a9 (1 week ago)

Remove deprecated allowEmptyFolder CLI option
The allowEmptyFolder option is no longer functional because:
1. The code that used it was already commented out
2. Empty folder cleanup is now handled asynchronously by EmptyFolderCleaner
The CLI flags are kept for backward compatibility but marked as deprecated and ignored.
This removes:
- S3ApiServerOption.AllowEmptyFolder field
- The actual usage in s3api_object_handlers_list.go
- Helm chart values and template references
- References in test Makefiles and docker-compose files

commit 55f0fbf364 (1 week ago)

s3: optimize DELETE by skipping lock check for buckets without Object Lock (#7642)
This optimization avoids an expensive filer gRPC call for every DELETE operation on buckets that don't have Object Lock enabled. Before this change, enforceObjectLockProtections() would always call getObjectEntry() to fetch object metadata to check for retention/legal hold, even for buckets that never had Object Lock configured.
Changes:
1. Add early return in enforceObjectLockProtections() if the bucket has no Object Lock config or the bucket doesn't exist
2. Add isObjectLockEnabled() helper function to check if a bucket has Object Lock configured
3. Fix validateObjectLockHeaders() to check ObjectLockConfig instead of just versioningEnabled - this ensures object-lock headers are properly rejected on buckets without Object Lock enabled, which aligns with AWS S3 semantics
4. Make bucket creation with Object Lock atomic - set the Object Lock config in the same CreateEntry call as bucket creation, preventing race conditions where the bucket exists without Object Lock enabled
5. Properly handle Object Lock setup failures during bucket creation - if StoreObjectLockConfigurationInExtended fails, roll back the bucket creation and return an error instead of leaving a bucket without the requested Object Lock configuration
This significantly improves DELETE latency for non-Object-Lock buckets, which is the common case (lockCheck time reduced from 1-10ms to ~1µs).
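
The fast path the optimization adds can be sketched as an early return ahead of the metadata fetch; the types and names here are simplified stand-ins for the s3api code.

```go
package s3api

// bucketConfig is a stand-in for the cached per-bucket configuration.
type bucketConfig struct {
	ObjectLockConfig *struct{} // nil when Object Lock was never enabled
}

// enforceObjectLockProtections skips the expensive per-object metadata
// fetch entirely when the bucket has no Object Lock configuration.
func enforceObjectLockProtections(cfg *bucketConfig, fetchEntry func() error) error {
	// Fast path (~1µs): no lock config means nothing can be locked.
	if cfg == nil || cfg.ObjectLockConfig == nil {
		return nil
	}
	// Slow path (1-10ms): fetch metadata and check retention/legal hold.
	return fetchEntry()
}
```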

commit 9c266fac29 (1 week ago)

fix: CompleteMultipartUpload fails for uploads with more than 1000 parts (#7641)
When completing a multipart upload, the code was listing parts with limit=0, which relies on the server's DirListingLimit default. In 'weed server' mode, this defaults to 1000, causing uploads with more than 1000 parts to fail with an InvalidPart error. For a 38GB file with 8MB parts (the AWS CLI default), this results in ~4564 parts, far exceeding the 1000 limit.
Fix: Use an explicit limit of MaxS3MultipartParts+1 (10001) to ensure all parts are listed regardless of server configuration.
Fixes #7638

commit 716f21fbd3 (2 weeks ago)

s3: support STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER for signed chunked uploads with checksums (#7623)
* s3: support STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER for signed chunked uploads with checksums
When AWS SDK v2 clients upload with both chunked encoding and checksum validation enabled, they use the x-amz-content-sha256 header value of STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER instead of the simpler STREAMING-AWS4-HMAC-SHA256-PAYLOAD. This caused the chunked reader to not be properly activated, resulting in chunk-signature metadata being stored as part of the file content.
Changes:
- Add streamingSignedPayloadTrailer constant for the new header value
- Update isRequestSignStreamingV4() to recognize this header
- Update newChunkedReader() to handle this streaming type
- Update calculateSeedSignature() to accept this header
- Add unit test for signed streaming upload with trailer
Fixes issue where Quarkus/AWS SDK v2 uploads with checksum validation resulted in corrupted file content containing chunk-signature data.
* address review comments: add trailer signature to test, fix constant alignment
* test: separate canonical trailer text (\n) from on-wire format (\r\n)
* test: add negative test for invalid trailer signature
* refactor: check HTTP method first in streaming auth checks (fail-fast)
* test: handle crc32 Write error return for completeness
* refactor: extract createTrailerStreamingRequest helper to reduce test duplication
* fmt
* docs: clarify test comment about trailer signature validation status
* refactor: calculate chunk data length dynamically instead of hardcoding
* Update weed/s3api/chunked_reader_v4_test.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fix: use current time for signatures instead of hardcoded past date
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
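
The header check the fix extends might look like this sketch. The two constant values are the real AWS streaming payload markers, while the method check and function shape are assumptions based on the commit's fail-fast note.

```go
package s3api

import "net/http"

const (
	streamingSignedPayload        = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD"
	streamingSignedPayloadTrailer = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER"
)

// isRequestSignStreamingV4 recognizes both signed streaming markers,
// checking the cheap HTTP-method condition before the header lookup.
func isRequestSignStreamingV4(r *http.Request) bool {
	if r.Method != http.MethodPut {
		return false // streaming uploads are PUTs; fail fast on method
	}
	sha := r.Header.Get("X-Amz-Content-Sha256")
	return sha == streamingSignedPayload || sha == streamingSignedPayloadTrailer
}
```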

commit a5ab05ec03 (2 weeks ago)

fix: S3 GetObject/HeadObject with PartNumber should return object ETag, not part ETag (#7622)
AWS S3 behavior: when calling GetObject or HeadObject with the PartNumber query parameter, the ETag header should still return the complete object's ETag (e.g., 'abc123-4' for a 4-part multipart upload), not the individual part's ETag. The previous implementation incorrectly overrode the ETag with the part's ETag, causing test_multipart_get_part to fail.
This fix removes the ETag override logic while keeping:
- x-amz-mp-parts-count header (correct)
- Content-Length adjusted to part size (correct)
- Range calculation for part boundaries (correct)
2 weeks ago |
|
|
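A rough sketch of the corrected response shaping, with simplified part bookkeeping (the types and helper here are illustrative, not the handler's actual code):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

type part struct{ offset, size int64 }

// writePartHeaders keeps the whole-object ETag while Content-Length and the
// returned read range follow the selected part.
func writePartHeaders(w http.ResponseWriter, objectETag string, parts []part, partNumber int) (offset, size int64) {
	p := parts[partNumber-1]
	w.Header().Set("ETag", objectETag) // whole-object ETag, e.g. "abc123-3"
	w.Header().Set("x-amz-mp-parts-count", fmt.Sprint(len(parts)))
	w.Header().Set("Content-Length", fmt.Sprint(p.size))
	return p.offset, p.size
}

func main() {
	rec := httptest.NewRecorder()
	parts := []part{{0, 8 << 20}, {8 << 20, 8 << 20}, {16 << 20, 4 << 20}}
	off, size := writePartHeaders(rec, `"abc123-3"`, parts, 2)
	fmt.Println(rec.Header().Get("ETag"), off, size) // "abc123-3" 8388608 8388608
}
```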
39ba19eea6
|
filer: async empty folder cleanup via metadata events (#7614)
* filer: async empty folder cleanup via metadata events
Implements asynchronous empty folder cleanup when files are deleted in S3.
Key changes:
1. EmptyFolderCleaner - New component that handles folder cleanup:
- Uses consistent hashing (LockRing) to determine folder ownership
- Each filer owns specific folders, avoiding duplicate cleanup work
- Debounces delete events (10s delay) to batch multiple deletes
- Caches rough folder counts to skip unnecessary checks
- Cancels pending cleanup when new files are created
- Handles both file and subdirectory deletions
2. Integration with metadata events:
- Listens to both local and remote filer metadata events
- Processes create/delete/rename events to track folder state
- Only processes folders under /buckets/<bucket>/...
3. Removed synchronous empty folder cleanup from S3 handlers:
- DeleteObjectHandler no longer calls DoDeleteEmptyParentDirectories
- DeleteMultipleObjectsHandler no longer tracks/cleans directories
- Cleanup now happens asynchronously via metadata events
Benefits:
- Non-blocking: S3 delete requests return immediately
- Coordinated: Only one filer (the owner) cleans each folder
- Efficient: Batching and caching reduce unnecessary checks
- Event-driven: Folder deletion triggers parent folder check automatically
* filer: add CleanupQueue data structure for deduplicated folder cleanup
CleanupQueue uses a linked list for FIFO ordering and a hashmap for O(1) deduplication. Processing is triggered when:
- Queue size reaches maxSize (default 1000), OR
- Oldest item exceeds maxAge (default 10 minutes)
Key features:
- O(1) Add, Remove, Pop, Contains operations
- Duplicate folders are ignored (keeps original position/time)
- Testable with injectable time function
- Thread-safe with mutex protection
* filer: use CleanupQueue for empty folder cleanup
Replace timer-per-folder approach with queue-based processing:
- Use CleanupQueue for deduplication and ordered processing
- Process queue when full (1000 items) or oldest item exceeds 10 minutes
- Background processor checks queue every 10 seconds
- Remove from queue on create events to cancel pending cleanup
Benefits:
- Bounded memory: queue has max size, not unlimited timers
- Efficient: O(1) add/remove/contains operations
- Batch processing: handle many folders efficiently
- Better for high-volume delete scenarios
* filer: CleanupQueue.Add moves duplicate to back with updated time
When adding a folder that already exists in the queue:
- Remove it from its current position
- Add it to the back of the queue
- Update the queue time to current time
This ensures that folders with recent delete activity are processed later, giving more time for additional deletes to occur.
* filer: CleanupQueue uses event time and inserts in sorted order
Changes:
- Add() now takes eventTime parameter instead of using current time
- Insert items in time-sorted order (oldest at front) to handle out-of-order events
- When updating duplicate with newer time, reposition to maintain sort order
- Ignore updates with older time (keep existing later time)
This ensures proper ordering when processing events from distributed filers where event arrival order may not match event occurrence order.
* filer: remove unused CleanupQueue functions (SetNowFunc, GetAll)
Removed test-only functions:
- SetNowFunc: tests now use real time with past event times
- GetAll: tests now use Pop() to verify order
Kept functions used in production:
- Peek: used in filer_notify_read.go
- OldestAge: used in empty_folder_cleaner.go logging
* filer: initialize cache entry on first delete/create event
Previously, roughCount was only updated if the cache entry already existed, but entries were only created during executeCleanup. This meant delete/create events before the first cleanup didn't track the count. Now create the cache entry on first event, so roughCount properly tracks all changes from the start.
* filer: skip adding to cleanup queue if roughCount > 0
If the cached roughCount indicates there are still items in the folder, don't bother adding it to the cleanup queue. This avoids unnecessary queue entries and reduces wasted cleanup checks.
* filer: don't create cache entry on create event
Only update roughCount if the folder is already being tracked. New folders don't need tracking until we see a delete event.
* filer: move empty folder cleanup to its own package
- Created weed/filer/empty_folder_cleanup package
- Defined FilerOperations interface to break circular dependency
- Added CountDirectoryEntries method to Filer
- Exported IsUnderPath and IsUnderBucketPath helper functions
* filer: make isUnderPath and isUnderBucketPath private
These helpers are only used within the empty_folder_cleanup package. |
2 weeks ago |
|
|
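The CleanupQueue described above (linked list for FIFO order, hashmap for O(1) dedup) is straightforward to sketch. An illustrative version under assumed names that omits the real structure's sorted insertion by event time, max-size/max-age triggers, and the Peek/OldestAge helpers:

```go
package main

import (
	"container/list"
	"fmt"
	"sync"
	"time"
)

type item struct {
	folder    string
	eventTime time.Time
}

type CleanupQueue struct {
	mu    sync.Mutex
	order *list.List               // oldest at front
	index map[string]*list.Element // folder -> element, O(1) dedup
}

func NewCleanupQueue() *CleanupQueue {
	return &CleanupQueue{order: list.New(), index: make(map[string]*list.Element)}
}

// Add enqueues a folder; a duplicate is moved to the back with the newer time.
func (q *CleanupQueue) Add(folder string, eventTime time.Time) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if el, ok := q.index[folder]; ok {
		q.order.Remove(el)
	}
	q.index[folder] = q.order.PushBack(item{folder, eventTime})
}

// Remove cancels a pending cleanup, e.g. when a create event arrives.
func (q *CleanupQueue) Remove(folder string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if el, ok := q.index[folder]; ok {
		q.order.Remove(el)
		delete(q.index, folder)
	}
}

// Pop dequeues the oldest folder, or returns false when empty.
func (q *CleanupQueue) Pop() (string, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	el := q.order.Front()
	if el == nil {
		return "", false
	}
	q.order.Remove(el)
	f := el.Value.(item).folder
	delete(q.index, f)
	return f, true
}

func main() {
	q := NewCleanupQueue()
	q.Add("/buckets/b1/a", time.Now())
	q.Add("/buckets/b1/b", time.Now())
	q.Add("/buckets/b1/a", time.Now()) // duplicate: moved to back
	q.Remove("/buckets/b1/b")          // create event cancels pending cleanup
	for f, ok := q.Pop(); ok; f, ok = q.Pop() {
		fmt.Println("cleanup candidate:", f)
	}
}
```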
3bcadc9f90
|
fix: update getVersioningState to signal non-existent buckets with Er… (#7613)
* fix: update getVersioningState to signal non-existent buckets with ErrNotFound
This change modifies the getVersioningState function to return filer_pb.ErrNotFound when a requested bucket does not exist, allowing callers to handle the situation appropriately, such as auto-creating the bucket in PUT handlers. This improves error handling and clarity in the API's behavior regarding bucket existence.
* Update weed/s3api/s3api_bucket_config.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: 洪晓威 <xiaoweihong@deepglint.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
2 weeks ago |
|
|
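A sketch of the caller-side pattern this change enables, using illustrative stand-ins for the real handlers: a sentinel not-found error lets a PUT handler auto-create the bucket instead of treating every failure the same.

```go
package main

import (
	"errors"
	"fmt"
)

var ErrNotFound = errors.New("filer: no entry is found in filer store")

func getVersioningState(bucket string) (string, error) {
	if bucket != "existing-bucket" {
		return "", ErrNotFound // signal "bucket missing", not "lookup failed"
	}
	return "Enabled", nil
}

func putObject(bucket string) error {
	state, err := getVersioningState(bucket)
	if errors.Is(err, ErrNotFound) {
		fmt.Println("auto-creating bucket", bucket)
		state = "" // fresh bucket: unversioned
	} else if err != nil {
		return err // genuine failure, surface it
	}
	fmt.Printf("writing to %s (versioning=%q)\n", bucket, state)
	return nil
}

func main() {
	_ = putObject("existing-bucket")
	_ = putObject("new-bucket")
}
```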
733ca8e6df
|
Fix SSE-S3 copy: preserve encryption metadata and set chunk SSE type (#7598)
* Fix SSE-S3 copy: preserve encryption metadata and set chunk SSE type
Fixes GitHub #7562: Copying objects between encrypted buckets was failing.
Root causes:
1. processMetadataBytes was re-adding SSE headers from source entry, undoing the encryption header filtering. Now uses dstEntry.Extended which is already filtered.
2. SSE-S3 streaming copy returned nil metadata. Now properly generates and returns SSE-S3 destination metadata (SeaweedFSSSES3Key, AES256 header) via ExecuteStreamingCopyWithMetadata.
3. Chunks created during streaming copy didn't have SseType set. Now sets SseType and per-chunk SseMetadata with chunk-specific IVs for SSE-S3, enabling proper decryption on GetObject.
* Address review: make SSE-S3 metadata serialization failures fatal errors
- In executeEncryptCopy: return error instead of just logging if SerializeSSES3Metadata fails
- In createChunkFromData: return error if chunk SSE-S3 metadata serialization fails
This ensures objects/chunks are never created without proper encryption metadata, preventing unreadable/corrupted data.
* fmt
* Refactor: reuse function names instead of creating WithMetadata variants
- Change ExecuteStreamingCopy to return (*EncryptionSpec, error) directly
- Remove ExecuteStreamingCopyWithMetadata wrapper
- Change executeStreamingReencryptCopy to return (*EncryptionSpec, error)
- Remove executeStreamingReencryptCopyWithMetadata wrapper
- Update callers to ignore encryption spec with _ where not needed
* Add TODO documenting large file SSE-S3 copy limitation
The streaming copy approach encrypts the entire stream with a single IV but stores data in chunks with per-chunk IVs. This causes decryption issues for large files. Small inline files work correctly. This is a known architectural issue that needs separate work to fix.
* Use chunk-by-chunk encryption for SSE-S3 copy (consistent with SSE-C/SSE-KMS)
Instead of streaming encryption (which had IV mismatch issues for multi-chunk files), SSE-S3 now uses the same chunk-by-chunk approach as SSE-C and SSE-KMS:
1. Extended copyMultipartCrossEncryption to handle SSE-S3:
- Added SSE-S3 source decryption in copyCrossEncryptionChunk
- Added SSE-S3 destination encryption with per-chunk IVs
- Added object-level metadata generation for SSE-S3 destinations
2. Updated routing in executeEncryptCopy/executeDecryptCopy/executeReencryptCopy to use copyMultipartCrossEncryption for all SSE-S3 scenarios
3. Removed streaming copy functions (shouldUseStreamingCopy, executeStreamingReencryptCopy) as they're no longer used
4. Added large file (1MB) integration test to verify chunk-by-chunk copy works
This ensures consistent behavior across all SSE types and fixes data corruption that occurred with large files in the streaming copy approach.
* fmt
* fmt
* Address review: fail explicitly if SSE-S3 metadata is missing
Instead of silently ignoring missing SSE-S3 metadata (which could create unreadable objects), now explicitly fail the copy operation with a clear error message if:
- First chunk is missing
- First chunk doesn't have SSE-S3 type
- First chunk has empty SSE metadata
- Deserialization fails
* Address review: improve comment to reflect full scope of chunk creation
* Address review: fail explicitly if baseIV is empty for SSE-S3 chunk encryption
If DestinationIV is not set when encrypting SSE-S3 chunks, the chunk would be created without SseMetadata, causing GetObject decryption to fail later. Now fails explicitly with a clear error message.
Note: calculateIVWithOffset returns ([]byte, int) not ([]byte, error) - the int is a skip amount for intra-block alignment, not an error code.
* Address review: handle 0-byte files in SSE-S3 copy
For 0-byte files, there are no chunks to get metadata from. Generate an IV for the object-level metadata to ensure even empty files are properly marked as SSE-S3 encrypted. Also validate that we don't have a non-empty file with no chunks (which would indicate an internal error). |
2 weeks ago |
|
|
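The note about calculateIVWithOffset suggests the usual CTR-mode trick: advance the IV counter by whole 16-byte blocks and skip the remainder of the keystream within the first block. An illustrative reimplementation under that assumption, not the exact SeaweedFS code:

```go
package main

import (
	"crypto/aes"
	"fmt"
)

// calculateIVWithOffset advances a base IV by offset/16 counter blocks and
// returns how many leading keystream bytes to skip (offset%16) for
// intra-block alignment.
func calculateIVWithOffset(baseIV []byte, offset int64) ([]byte, int) {
	iv := make([]byte, len(baseIV))
	copy(iv, baseIV)
	blocks := uint64(offset / aes.BlockSize)
	// add `blocks` to the big-endian counter held in the IV, with carry
	for i := len(iv) - 1; i >= 0 && blocks > 0; i-- {
		sum := uint64(iv[i]) + (blocks & 0xff)
		iv[i] = byte(sum)
		blocks = (blocks >> 8) + (sum >> 8)
	}
	return iv, int(offset % aes.BlockSize)
}

func main() {
	base := make([]byte, aes.BlockSize)
	iv, skip := calculateIVWithOffset(base, (8<<20)+5) // chunk starting at 8MB+5
	fmt.Printf("iv=%x skip=%d\n", iv, skip)            // skip=5: not an error code
}
```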
099a351f3b
|
Fix issue #6847: S3 chunked encoding includes headers in stored content (#7595)
* Fix issue #6847: S3 chunked encoding includes headers in stored content
- Add hasTrailer flag to s3ChunkedReader to track trailer presence
- Update state transition logic to properly handle trailers in unsigned streaming
- Enhance parseChunkChecksum to handle multiple trailer lines
- Skip checksum verification for unsigned streaming uploads
- Add test case for mixed format handling (unsigned headers with signed chunks)
- Remove redundant CRLF reading in trailer processing
This fixes the issue where chunk-signature and x-amz headers were appearing in stored file content when using chunked encoding with newer AWS SDKs.
* Fix checksum validation for unsigned streaming uploads
- Always validate checksum for data integrity regardless of signing
- Correct checksum value in test case
- Addresses PR review feedback about checksum verification
* Add warning log when multiple checksum headers found in trailer
- Log a warning when multiple valid checksum headers appear in trailers
- Uses last checksum header as suggested by CodeRabbit reviewer
- Improves debugging for edge cases with multiple checksum algorithms
* Improve trailer parsing robustness in parseChunkChecksum
- Remove redundant trimTrailingWhitespace call since readChunkLine already trims
- Use bytes.TrimSpace for both key and value to handle whitespace around colon separator
- Follows HTTP header specifications for optional whitespace around separators
- Addresses Gemini Code Assist review feedback |
2 weeks ago |
|
|
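The trailer-parsing hardening reduces to splitting each line on the first colon and trimming both sides. A minimal sketch under assumed shapes (the surrounding chunked reader and the multiple-header warning log are elided):

```go
package main

import (
	"bytes"
	"fmt"
	"strings"
)

// parseChecksumTrailer extracts x-amz-checksum-* headers from trailer lines.
// When multiple checksum headers appear, the last one wins.
func parseChecksumTrailer(lines [][]byte) (algo, value string) {
	for _, line := range lines {
		k, v, ok := bytes.Cut(line, []byte(":"))
		if !ok {
			continue // not a header line
		}
		key := strings.ToLower(string(bytes.TrimSpace(k)))
		if strings.HasPrefix(key, "x-amz-checksum-") {
			algo = key
			value = string(bytes.TrimSpace(v))
		}
	}
	return algo, value
}

func main() {
	trailers := [][]byte{
		[]byte("x-amz-checksum-crc32 : sOO8/Q=="), // whitespace around the colon is tolerated
	}
	a, v := parseChecksumTrailer(trailers)
	fmt.Println(a, v) // x-amz-checksum-crc32 sOO8/Q==
}
```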
8c585a9682 |
Fix S3 object tagging issue #7589
- Add X-Amz-Tagging header parsing in putToFiler function for PUT object operations
- Store tags with X-Amz-Tagging- prefix in entry.Extended metadata
- Add comprehensive test suite for S3 object tagging functionality
- Tests cover upload tagging, API operations, special characters, and edge cases |
2 weeks ago |
|
|
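The X-Amz-Tagging header carries URL-encoded key=value pairs, so parsing can lean on net/url. A sketch of the pattern the commit describes; the exact prefix constant is taken from the commit, and the duplicate-key guard is added here for illustration:

```go
package main

import (
	"fmt"
	"net/url"
)

const tagPrefix = "X-Amz-Tagging-"

// parseTagging turns the header value into prefixed entries for the
// entry.Extended metadata map.
func parseTagging(header string) (map[string][]byte, error) {
	tags, err := url.ParseQuery(header)
	if err != nil {
		return nil, fmt.Errorf("invalid X-Amz-Tagging: %w", err)
	}
	extended := make(map[string][]byte, len(tags))
	for k, v := range tags {
		if len(v) > 1 {
			return nil, fmt.Errorf("duplicated tag key %q", k)
		}
		extended[tagPrefix+k] = []byte(v[0])
	}
	return extended, nil
}

func main() {
	ext, err := parseTagging("project=seaweedfs&env=prod%2Btest")
	if err != nil {
		panic(err)
	}
	for k, v := range ext {
		fmt.Printf("%s = %s\n", k, v) // values are percent-decoded, e.g. prod+test
	}
}
```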
bd419fda51
|
fix: copy to bucket with default SSE-S3 encryption fails (#7562) (#7568)
* filer use context without cancellation
* pass along context
* fix: copy to bucket with default SSE-S3 encryption fails (#7562)
When copying an object from an encrypted bucket to a temporary unencrypted bucket, then to another bucket with default SSE-S3 encryption, the operation fails with an 'invalid SSE-S3 source key type' error.
Root cause: When objects are copied from an SSE-S3 encrypted bucket to an unencrypted bucket, the 'X-Amz-Server-Side-Encryption: AES256' header is preserved but the actual encryption key (SeaweedFSSSES3Key) is stripped. This creates an 'orphaned' SSE-S3 header that causes IsSSES3EncryptedInternal() to return true, triggering decryption logic with a nil key.
Fix:
1. Modified IsSSES3EncryptedInternal() to require BOTH the AES256 header AND the SeaweedFSSSES3Key to be present before returning true
2. Added isOrphanedSSES3Header() to detect orphaned SSE-S3 headers
3. Updated copy handler to strip orphaned headers during copy operations
Fixes #7562
* fmt
* refactor: simplify isOrphanedSSES3Header function logic
Remove redundant existence check since the caller iterates through the metadata map, making the check unnecessary. Improves readability while maintaining the same functionality. |
2 weeks ago |
|
|
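A minimal sketch of the two checks this fix introduces; the constant names follow the commit, while the literal key string is an assumption:

```go
package main

import "fmt"

const (
	headerSSE         = "X-Amz-Server-Side-Encryption"
	seaweedFSSSES3Key = "X-SeaweedFS-SSE-S3-Key" // assumed value for the internal key name
)

// isSSES3EncryptedInternal requires BOTH the AES256 header and the key material,
// so a header orphaned by an earlier copy no longer triggers decryption with a nil key.
func isSSES3EncryptedInternal(metadata map[string][]byte) bool {
	return string(metadata[headerSSE]) == "AES256" && len(metadata[seaweedFSSSES3Key]) > 0
}

// isOrphanedSSES3Header detects the header left behind after a copy stripped
// the key; such headers are removed during copy operations.
func isOrphanedSSES3Header(metadata map[string][]byte) bool {
	return string(metadata[headerSSE]) == "AES256" && len(metadata[seaweedFSSSES3Key]) == 0
}

func main() {
	orphaned := map[string][]byte{headerSSE: []byte("AES256")}
	fmt.Println(isSSES3EncryptedInternal(orphaned)) // false: no key material, skip decryption
	fmt.Println(isOrphanedSSES3Header(orphaned))    // true: strip the header during copy
}
```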
487c0f92a9 |
fmt
|
3 weeks ago |
|
|
ebb4f57cc7
|
s3api: Fix response-content-disposition query parameter not being honored (#7559)
* s3api: Fix response-content-disposition query parameter not being honored
Fixes #7486
This fix resolves an issue where S3 presigned URLs with query parameters like `response-content-disposition`, `response-content-type`, etc. were being ignored, causing browsers to use default file handling instead of the specified behavior.
Changes:
- Modified `setResponseHeaders()` to accept the HTTP request object
- Added logic to process S3 passthrough headers from query parameters
- Updated all call sites to pass the request object
- Supports all AWS S3 response override parameters:
  - response-content-disposition
  - response-content-type
  - response-cache-control
  - response-content-encoding
  - response-content-language
  - response-expires
The implementation follows the same pattern used in the filer handler and properly honors the AWS S3 API specification for presigned URLs.
Testing:
- Existing S3 API tests pass without modification
- Build succeeds with no compilation errors
* Update weed/s3api/s3api_object_handlers.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
3 weeks ago |
|
|
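The passthrough is a fixed mapping from the six response-* query parameters to response headers. A self-contained sketch of that mapping (the real setResponseHeaders does this alongside its other work):

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

// responseOverrides maps the documented AWS S3 override parameters to the
// response headers they control.
var responseOverrides = map[string]string{
	"response-content-disposition": "Content-Disposition",
	"response-content-type":        "Content-Type",
	"response-cache-control":       "Cache-Control",
	"response-content-encoding":    "Content-Encoding",
	"response-content-language":    "Content-Language",
	"response-expires":             "Expires",
}

func applyResponseOverrides(w http.ResponseWriter, r *http.Request) {
	q := r.URL.Query()
	for param, header := range responseOverrides {
		if v := q.Get(param); v != "" {
			w.Header().Set(header, v)
		}
	}
}

func main() {
	r, _ := http.NewRequest(http.MethodGet,
		"/bucket/report.pdf?response-content-disposition=attachment%3B%20filename%3Dreport.pdf", nil)
	rec := httptest.NewRecorder()
	applyResponseOverrides(rec, r)
	fmt.Println(rec.Header().Get("Content-Disposition")) // attachment; filename=report.pdf
}
```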
848bec6d24
|
Metrics: Add Prometheus metrics for concurrent upload tracking (#7555)
* metrics: add Prometheus metrics for concurrent upload tracking
Add Prometheus metrics to monitor concurrent upload activity for both filer and S3 servers. This provides visibility into the upload limiting feature added in the previous PR.
New Metrics:
- SeaweedFS_filer_in_flight_upload_bytes: Current bytes being uploaded to filer
- SeaweedFS_filer_in_flight_upload_count: Current number of uploads to filer
- SeaweedFS_s3_in_flight_upload_bytes: Current bytes being uploaded to S3
- SeaweedFS_s3_in_flight_upload_count: Current number of uploads to S3
The metrics are updated atomically whenever uploads start or complete, providing real-time visibility into upload concurrency levels.
This helps operators:
- Monitor upload concurrency in real-time
- Set appropriate limits based on actual usage patterns
- Detect potential bottlenecks or capacity issues
- Track the effectiveness of upload limiting configuration
* grafana: add dashboard panels for concurrent upload metrics
Add 4 new panels to the Grafana dashboard to visualize the concurrent upload metrics added in this PR:
Filer Section:
- Filer Concurrent Uploads: Shows current number of concurrent uploads
- Filer Concurrent Upload Bytes: Shows current bytes being uploaded
S3 Gateway Section:
- S3 Concurrent Uploads: Shows current number of concurrent uploads
- S3 Concurrent Upload Bytes: Shows current bytes being uploaded
These panels help operators monitor upload concurrency in real-time and tune the upload limiting configuration based on actual usage patterns.
* more efficient |
3 weeks ago |
|
|
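Assuming the standard Prometheus Go client, gauges producing the metric names above could be wired as follows; the trackUpload wrapper is an illustrative way to keep the count and byte gauges moving in matched pairs:

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	inFlightUploadBytes = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "SeaweedFS", Subsystem: "s3",
		Name: "in_flight_upload_bytes",
		Help: "Current number of bytes being uploaded to S3",
	})
	inFlightUploadCount = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "SeaweedFS", Subsystem: "s3",
		Name: "in_flight_upload_count",
		Help: "Current number of in-flight uploads to S3",
	})
)

// trackUpload bumps both gauges for the duration of an upload, so the
// series always move together regardless of how `do` exits.
func trackUpload(size int64, do func() error) error {
	inFlightUploadCount.Inc()
	inFlightUploadBytes.Add(float64(size))
	defer func() {
		inFlightUploadCount.Dec()
		inFlightUploadBytes.Sub(float64(size))
	}()
	return do()
}

func main() {
	prometheus.MustRegister(inFlightUploadBytes, inFlightUploadCount)
	_ = trackUpload(8<<20, func() error { return nil }) // an 8MB upload
	fmt.Println("uploaded with gauges tracked")
}
```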
cd2fac4551
|
S3: pass HTTP 429 from volume servers to S3 clients (#7556)
With the recent changes (commit
|
3 weeks ago |
|
|
3ab26e39ff
|
fix(s3api): fix AWS Signature V2 format and validation (#7488)
* fix(s3api): fix AWS Signature V2 format and validation
* fix(s3api): Skip space after "AWS" prefix (+1 offset)
* test(s3api): add unit tests for Signature V2 authentication fix
* fix(s3api): simply comparing signatures
* validation for the colon extraction in expectedAuth
---------
Co-authored-by: chrislu <chris.lu@gmail.com> |
3 weeks ago |
|
|
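A sketch of the V2 parsing rules the fix lists: skip the space after the "AWS" prefix and validate that a colon separates two non-empty parts. The constant-time comparison shown here is one reasonable reading of "simply comparing signatures", not necessarily the PR's exact approach:

```go
package main

import (
	"crypto/subtle"
	"fmt"
	"strings"
)

// parseSignatureV2 splits an "AWS AccessKeyId:Signature" Authorization header.
func parseSignatureV2(authHeader string) (accessKey, signature string, ok bool) {
	const prefix = "AWS " // note the trailing space (the +1 offset fix)
	if !strings.HasPrefix(authHeader, prefix) {
		return "", "", false
	}
	rest := authHeader[len(prefix):]
	i := strings.IndexByte(rest, ':')
	if i <= 0 || i == len(rest)-1 { // colon must separate two non-empty parts
		return "", "", false
	}
	return rest[:i], rest[i+1:], true
}

func verify(authHeader, expectedSig string) bool {
	_, sig, ok := parseSignatureV2(authHeader)
	return ok && subtle.ConstantTimeCompare([]byte(sig), []byte(expectedSig)) == 1
}

func main() {
	fmt.Println(verify("AWS AKIDEXAMPLE:frJIUN8DYpKDtOLCwo//yllqDzg=",
		"frJIUN8DYpKDtOLCwo//yllqDzg=")) // true
}
```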
edf0ef7a80
|
Filer, S3: Feature/add concurrent file upload limit (#7554)
|
3 weeks ago |
|
|
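The entry above adds the concurrent upload limit that the metrics PR (#7555, further up this page) monitors via in-flight byte and count gauges. Purely as an illustration of the technique, not the PR's actual code, an in-flight byte limiter might look like:

```go
package main

import (
	"fmt"
	"sync"
)

type uploadLimiter struct {
	mu       sync.Mutex
	cond     *sync.Cond
	inFlight int64
	limit    int64 // 0 means unlimited
}

func newUploadLimiter(limit int64) *uploadLimiter {
	l := &uploadLimiter{limit: limit}
	l.cond = sync.NewCond(&l.mu)
	return l
}

// Acquire blocks until size bytes fit under the configured limit.
func (l *uploadLimiter) Acquire(size int64) {
	l.mu.Lock()
	defer l.mu.Unlock()
	for l.limit > 0 && l.inFlight+size > l.limit {
		l.cond.Wait()
	}
	l.inFlight += size
}

// Release returns capacity and wakes blocked uploads.
func (l *uploadLimiter) Release(size int64) {
	l.mu.Lock()
	l.inFlight -= size
	l.mu.Unlock()
	l.cond.Broadcast()
}

func main() {
	l := newUploadLimiter(64 << 20) // e.g. allow 64MB of in-flight upload data
	l.Acquire(8 << 20)
	fmt.Println("upload admitted, in-flight bytes:", l.inFlight)
	l.Release(8 << 20)
}
```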
5075381060
|
Support multiple filers for S3 and IAM servers with automatic failover (#7550)
* Support multiple filers for S3 and IAM servers with automatic failover
This change adds support for multiple filer addresses in the 'weed s3' and 'weed iam' commands, enabling high availability through automatic failover.
Key changes:
- Updated S3ApiServerOption.Filer to Filers ([]pb.ServerAddress)
- Updated IamServerOption.Filer to Filers ([]pb.ServerAddress)
- Modified -filer flag to accept comma-separated addresses
- Added getFilerAddress() helper methods for backward compatibility
- Updated all filer client calls to support multiple addresses
- Uses pb.WithOneOfGrpcFilerClients for automatic failover
Usage:
weed s3 -filer=localhost:8888,localhost:8889
weed iam -filer=localhost:8888,localhost:8889
The underlying FilerClient already supported multiple filers with health
tracking and automatic failover - this change exposes that capability
through the command-line interface.
* Add filer discovery: treat initial filers as seeds and discover peers from master
Enhances FilerClient to automatically discover additional filers in the same
filer group by querying the master server. This allows users to specify just
a few seed filers, and the client will discover all other filers in the cluster.
Key changes to wdclient/FilerClient:
- Added MasterClient, FilerGroup, and DiscoveryInterval fields
- Added thread-safe filer list management with RWMutex
- Implemented discoverFilers() background goroutine
- Uses cluster.ListExistingPeerUpdates() to query master for filers
- Automatically adds newly discovered filers to the list
- Added Close() method to clean up discovery goroutine
New FilerClientOption fields:
- MasterClient: enables filer discovery from master
- FilerGroup: specifies which filer group to discover
- DiscoveryInterval: how often to refresh (default 5 minutes)
Usage example:
masterClient := wdclient.NewMasterClient(...)
filerClient := wdclient.NewFilerClient(
[]pb.ServerAddress{"localhost:8888"}, // seed filers
grpcDialOption,
dataCenter,
&wdclient.FilerClientOption{
MasterClient: masterClient,
FilerGroup: "my-group",
},
)
defer filerClient.Close()
The initial filers act as seeds - the client discovers and adds all other
filers in the same group from the master. Discovered filers are added
dynamically without removing existing ones (relying on health checks for
unavailable filers).
* Address PR review comments: implement full failover for IAM operations
Critical fixes based on code review feedback:
1. **IAM API Failover (Critical)**:
- Replace pb.WithGrpcFilerClient with pb.WithOneOfGrpcFilerClients in:
* GetS3ApiConfigurationFromFiler()
* PutS3ApiConfigurationToFiler()
* GetPolicies()
* PutPolicies()
- Now all IAM operations support automatic failover across multiple filers
2. **Validation Improvements**:
- Add validation in NewIamApiServerWithStore() to require at least one filer
- Add validation in NewS3ApiServerWithStore() to require at least one filer
- Add warning log when no filers configured for credential store
3. **Error Logging**:
- Circuit breaker now logs when config load fails instead of silently ignoring
- Helps operators understand why circuit breaker limits aren't applied
4. **Code Quality**:
- Use ToGrpcAddress() for filer address in credential store setup
- More consistent with rest of codebase and future-proof
These changes ensure IAM operations have the same high availability guarantees
as S3 operations, completing the multi-filer failover implementation.
* Fix IAM manager initialization: remove code duplication, add TODO for HA
Addresses review comment on s3api_server.go:145
Changes:
- Remove duplicate code for getting first filer address
- Extract filerAddr variable once and reuse
- Add TODO comment documenting the HA limitation for IAM manager
- Document that loadIAMManagerFromConfig and NewS3IAMIntegration need
updates to support multiple filers for full HA
Note: This is a known limitation when using filer-backed IAM stores.
The interfaces need to be updated to accept multiple filer addresses.
For now, documenting this limitation clearly.
* Document credential store HA limitation with TODO
Addresses review comment on auth_credentials.go:149
Changes:
- Add TODO comment documenting that SetFilerClient interface needs update
for multi-filer support
- Add informative log message indicating HA limitation
- Document that this is a known limitation for filer-backed credential stores
The SetFilerClient interface currently only accepts a single filer address.
To properly support HA, the credential store interfaces need to be updated
to handle multiple filer addresses.
* Track current active filer in FilerClient for better HA
Add GetCurrentFiler() method to FilerClient that returns the currently
active filer based on the filerIndex which is updated on successful
operations. This provides better availability than always using the
first filer.
Changes:
- Add FilerClient.GetCurrentFiler() method that returns current active filer
- Update S3ApiServer.getFilerAddress() to use FilerClient's current filer
- Add fallback to first filer if FilerClient not yet initialized
- Document IAM limitation (doesn't have FilerClient access)
Benefits:
- Single-filer operations (URLs, ReadFilerConf, etc.) now use the
currently active/healthy filer
- Better distribution and failover behavior
- FilerClient's round-robin and health tracking automatically
determines which filer to use
* Document ReadFilerConf HA limitation in lifecycle handlers
Addresses review comment on s3api_bucket_handlers.go:880
Add comment documenting that ReadFilerConf uses the current active filer
from FilerClient (which is better than always using first filer), but
doesn't have built-in multi-filer failover.
Add TODO to update filer.ReadFilerConf to support multiple filers for
complete HA. For now, it uses the currently active/healthy filer tracked
by FilerClient which provides reasonable availability.
* Document multipart upload URL HA limitation
Addresses review comment on s3api_object_handlers_multipart.go:442
Add comment documenting that part upload URLs point to the current
active filer (tracked by FilerClient), which is better than always
using the first filer but still creates a potential point of failure
if that filer becomes unavailable during upload.
Suggest TODO solutions:
- Use virtual hostname/load balancer for filers
- Have S3 server proxy uploads to healthy filers
Current behavior provides reasonable availability by using the
currently active/healthy filer rather than being pinned to first filer.
* Document multipart completion Location URL limitation
Addresses review comment on filer_multipart.go:187
Add comment documenting that the Location URL in CompleteMultipartUpload
response points to the current active filer (tracked by FilerClient).
Note that clients should ideally use the S3 API endpoint rather than
this direct URL. If direct access is attempted and the specific filer
is unavailable, the request will fail.
Current behavior uses the currently active/healthy filer rather than
being pinned to the first filer, providing better availability.
* Make credential store use current active filer for HA
Update FilerEtcStore to use a function that returns the current active
filer instead of a fixed address, enabling high availability.
Changes:
- Add SetFilerAddressFunc() method to FilerEtcStore
- Store uses filerAddressFunc instead of fixed filerGrpcAddress
- withFilerClient() calls the function to get current active filer
- Keep SetFilerClient() for backward compatibility (marked deprecated)
- Update S3ApiServer to pass FilerClient.GetCurrentFiler to store
Benefits:
- Credential store now uses currently active/healthy filer
- Automatic failover when filer becomes unavailable
- True HA for credential operations
- Backward compatible with old SetFilerClient interface
This addresses the credential store limitation - no longer pinned to
first filer, uses FilerClient's tracked current active filer.
* Clarify multipart URL comments: filer address not used for uploads
Update comments to reflect that multipart upload URLs are not actually
used for upload traffic - uploads go directly to volume servers.
Key clarifications:
- genPartUploadUrl: Filer address is parsed out, only path is used
- CompleteMultipartUpload Location: Informational field per AWS S3 spec
- Actual uploads bypass filer proxy and go directly to volume servers
The filer address in these URLs is NOT a HA concern because:
1. Part uploads: URL is parsed for path, upload goes to volume servers
2. Location URL: Informational only, clients use S3 endpoint
This addresses the observation that S3 uploads don't go through filers,
only metadata operations do.
* Remove filer address from upload paths - pass path directly
Eliminate unnecessary filer address from upload URLs by passing file
paths directly instead of full URLs that get immediately parsed.
Changes:
- Rename genPartUploadUrl() → genPartUploadPath() (returns path only)
- Rename toFilerUrl() → toFilerPath() (returns path only)
- Update putToFiler() to accept filePath instead of uploadUrl
- Remove URL parsing code (no longer needed)
- Remove net/url import (no longer used)
- Keep old function names as deprecated wrappers for compatibility
Benefits:
- Cleaner code - no fake URL construction/parsing
- No dependency on filer address for internal operations
- More accurate naming (these are paths, not URLs)
- Eliminates confusion about HA concerns
This completely removes the filer address from upload operations - it was
never actually used for routing, only parsed for the path.
* Remove deprecated functions: use new path-based functions directly
Remove deprecated wrapper functions and update all callers to use the
new function names directly.
Removed:
- genPartUploadUrl() → all callers now use genPartUploadPath()
- toFilerUrl() → all callers now use toFilerPath()
- SetFilerClient() → removed along with fallback code
Updated:
- s3api_object_handlers_multipart.go: uploadUrl → filePath
- s3api_object_handlers_put.go: uploadUrl → filePath, versionUploadUrl → versionFilePath
- s3api_object_versioning.go: toFilerUrl → toFilerPath
- s3api_object_handlers_test.go: toFilerUrl → toFilerPath
- auth_credentials.go: removed SetFilerClient fallback
- filer_etc_store.go: removed deprecated SetFilerClient method
Benefits:
- Cleaner codebase with no deprecated functions
- All variable names accurately reflect that they're paths, not URLs
- Single interface for credential stores (SetFilerAddressFunc only)
All code now consistently uses the new path-based approach.
* Fix toFilerPath: remove URL escaping for raw file paths
The toFilerPath function should return raw file paths, not URL-escaped
paths. URL escaping was needed when the path was embedded in a URL
(old toFilerUrl), but now that we pass paths directly to putToFiler,
they should be unescaped.
This fixes S3 integration test failures:
- test_bucket_listv2_encoding_basic
- test_bucket_list_encoding_basic
- test_bucket_listv2_delimiter_whitespace
- test_bucket_list_delimiter_whitespace
The tests were failing because paths were double-encoded (escaped when
stored, then escaped again when listed), resulting in %252B instead of
%2B for '+' characters.
Root cause: When we removed URL parsing in putToFiler, we should have
also removed URL escaping in toFilerPath since paths are now used
directly without URL encoding/decoding.
* Add thread safety to FilerEtcStore and clarify credential store comments
Address review suggestions for better thread safety and code clarity:
1. **Thread Safety**: Add RWMutex to FilerEtcStore
- Protects filerAddressFunc and grpcDialOption from concurrent access
- Initialize() uses write lock when setting function
- SetFilerAddressFunc() uses write lock
- withFilerClient() uses read lock to get function and dial option
- GetPolicies() uses read lock to check if configured
2. **Improved Error Messages**:
- Prefix errors with "filer_etc:" for easier debugging
- "filer address not configured" → "filer_etc: filer address function not configured"
- "filer address is empty" → "filer_etc: filer address is empty"
3. **Clarified Comments**:
- auth_credentials.go: Clarify that initial setup is temporary
- Document that it's updated in s3api_server.go after FilerClient creation
- Remove ambiguity about when FilerClient.GetCurrentFiler is used
Benefits:
- Safe for concurrent credential operations
- Clear error messages for debugging
- Explicit documentation of initialization order
* Enable filer discovery: pass master addresses to FilerClient
Fix two critical issues:
1. **Filer Discovery Not Working**: Master client was not being passed to
FilerClient, so peer discovery couldn't work
2. **Credential Store Design**: Already uses FilerClient via GetCurrentFiler
function - this is the correct design for HA
Changes:
**Command (s3.go):**
- Read master addresses from GetFilerConfiguration response
- Pass masterAddresses to S3ApiServerOption
- Log master addresses for visibility
**S3ApiServerOption:**
- Add Masters []pb.ServerAddress field for discovery
**S3ApiServer:**
- Create MasterClient from Masters when available
- Pass MasterClient + FilerGroup to FilerClient via options
- Enable discovery with 5-minute refresh interval
- Log whether discovery is enabled or disabled
**Credential Store:**
- Already correctly uses filerClient.GetCurrentFiler via function
- This provides HA without tight coupling to FilerClient struct
- Function-based design is clean and thread-safe
Discovery Flow:
1. S3 command reads filer config → gets masters + filer group
2. S3ApiServer creates MasterClient from masters
3. FilerClient uses MasterClient to query for peer filers
4. Background goroutine refreshes peer list every 5 minutes
5. Credential store uses GetCurrentFiler to get active filer
Now filer discovery actually works!
* Use S3 endpoint in multipart Location instead of filer address
* Add multi-filer failover to ReadFilerConf
* Address CodeRabbit review: fix buffer reuse and improve lock safety
Address two code review suggestions:
1. **Fix buffer reuse in ReadFilerConfFromFilers**:
- Use local []byte data instead of shared buffer
- Prevents partial data from failed attempts affecting successful reads
- Creates fresh buffer inside callback for masterClient path
- More robust to future changes in read helpers
2. **Improve lock safety in FilerClient**:
- Add *WithHealth variants that accept health pointer
- Get health pointer while holding lock, then release before calling
- Eliminates potential for lock confusion (though no actual deadlock existed)
- Clearer separation: lock for data access, atomics for health ops
Changes:
- ReadFilerConfFromFilers: var data []byte, create buf inside callback
- shouldSkipUnhealthyFilerWithHealth(health *filerHealth)
- recordFilerSuccessWithHealth(health *filerHealth)
- recordFilerFailureWithHealth(health *filerHealth)
- Keep old functions for backward compatibility (marked deprecated)
- Update LookupVolumeIds to use WithHealth variants
Benefits:
- More robust multi-filer configuration reading
- Clearer lock vs atomic operation boundaries
- No lock held during health checks (even though atomics don't block)
- Better code organization and maintainability
* add constant
* Fix IAM manager and post policy to use current active filer
* Fix critical race condition and goroutine leak
* Update weed/s3api/filer_multipart.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Fix compilation error and address code review suggestions
Address remaining unresolved comments:
1. **Fix compilation error**: Add missing net/url import
- filer_multipart.go used url.PathEscape without import
- Added "net/url" to imports
2. **Fix Location URL formatting** (all 4 occurrences):
- Add missing slash between bucket and key
- Use url.PathEscape for bucket names
- Use urlPathEscape for object keys
- Handles special characters in bucket/key names
- Before: http://host/bucketkey
- After: http://host/bucket/key (properly escaped)
3. **Optimize discovery loop** (O(N*M) → O(N+M)):
- Use map for existing filers (O(1) lookup)
- Reduces time holding write lock
- Better performance with many filers
- Before: Nested loop for each discovered filer
- After: Build map once, then O(1) lookups
Changes:
- filer_multipart.go: Import net/url, fix all Location URLs
- filer_client.go: Use map for efficient filer discovery
Benefits:
- Compiles successfully
- Proper URL encoding (handles spaces, special chars)
- Faster discovery with less lock contention
- Production-ready URL formatting
* Fix race conditions and make Close() idempotent
Address CodeRabbit review #3512078995:
1. **Critical: Fix unsynchronized read in error message**
- Line 584 read len(fc.filerAddresses) without lock
- Race with refreshFilerList appending to slice
- Fixed: Take RLock to read length safely
- Prevents race detector warnings
2. **Important: Make Close() idempotent**
- Closing already-closed channel panics
- Can happen with layered cleanup in shutdown paths
- Fixed: Use sync.Once to ensure single close
- Safe to call Close() multiple times now
3. **Nitpick: Add warning for empty filer address**
- getFilerAddress() can return empty string
- Helps diagnose unexpected state
- Added: Warning log when no filers available
4. **Nitpick: Guard deprecated index-based helpers**
- shouldSkipUnhealthyFiler, recordFilerSuccess/Failure
- Accessed filerHealth without lock (races with discovery)
- Fixed: Take RLock and check bounds before array access
- Prevents index out of bounds and races
Changes:
- filer_client.go:
- Add closeDiscoveryOnce sync.Once field
- Use Do() in Close() for idempotent channel close
- Add RLock guards to deprecated index-based helpers
- Add bounds checking to prevent panics
- Synchronized read of filerAddresses length in error
- s3api_server.go:
- Add warning log when getFilerAddress returns empty
Benefits:
- No race conditions (passes race detector)
- No panic on double-close
- Better error diagnostics
- Safe with discovery enabled
- Production-hardened shutdown logic
* Fix hardcoded http scheme and add panic recovery
Address CodeRabbit review #3512114811:
1. **Major: Fix hardcoded http:// scheme in Location URLs**
- Location URLs always used http:// regardless of client connection
- HTTPS clients got http:// URLs (incorrect)
- Fixed: Detect scheme from request
- Check X-Forwarded-Proto header (for proxies) first
- Check r.TLS != nil for direct HTTPS
- Fallback to http for plain connections
- Applied to all 4 CompleteMultipartUploadResult locations
2. **Major: Add panic recovery to discovery goroutine**
- Long-running background goroutine could crash entire process
- Panic in refreshFilerList would terminate program
- Fixed: Add defer recover() with error logging
- Goroutine failures now logged, not fatal
3. **Note: Close() idempotency already implemented**
- Review flagged as duplicate issue
- Already fixed in commit
|
3 weeks ago |
|
|
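One concrete detail from the entry above is the Location-URL scheme detection: prefer X-Forwarded-Proto (set by TLS-terminating proxies), then r.TLS, then fall back to plain http. A sketch:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

// requestScheme picks the scheme the client actually used.
func requestScheme(r *http.Request) string {
	if proto := r.Header.Get("X-Forwarded-Proto"); proto != "" {
		return proto // proxy terminated TLS upstream
	}
	if r.TLS != nil {
		return "https" // direct TLS connection
	}
	return "http"
}

func main() {
	r, _ := http.NewRequest(http.MethodPost, "/bucket/key?uploads", nil)
	r.Header.Set("X-Forwarded-Proto", "https")
	fmt.Println(requestScheme(r)) // https (behind a TLS-terminating proxy)

	r2, _ := http.NewRequest(http.MethodPost, "/bucket/key?uploads", nil)
	r2.TLS = &tls.ConnectionState{}
	fmt.Println(requestScheme(r2)) // https (direct TLS)
}
```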
c156a130b7
|
S3: Auto create bucket (#7549)
* auto create buckets
* only admin users can auto create buckets
* Update weed/s3api/s3api_bucket_handlers.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* validate bucket name
* refactor
* error handling
* error
* refetch
* ensure owner
* multiple errors
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
3 weeks ago |
|
|
2e6c746a30
|
fix copying for paused versioning buckets (#7548)
* fix copying for paused versioning buckets
* copy for non versioned files
* add tests
* better tests
* Update weed/s3api/s3api_object_handlers_copy.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* remove etag
* update
* Update s3api_object_handlers_copy_test.go
* Update weed/s3api/s3api_object_handlers_copy_test.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/s3api/s3api_object_handlers_copy_test.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* revert
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
3 weeks ago |
|
|
f6a604c538
|
S3: Fix encrypted file copy with multiple chunks (#7530) (#7546)
* S3: Fix encrypted file copy with multiple chunks (#7530)
When copying encrypted files with multiple chunks (encrypted volumes via -filer.encryptVolumeData), the copied file could not be read. This was caused by the chunk copy operation not preserving the IsCompressed flag, which led to improper handling of compressed/encrypted data during upload.
The fix:
1. Modified uploadChunkData to accept an isCompressed parameter
2. Updated copySingleChunk to pass the source chunk's IsCompressed flag
3. Updated copySingleChunkForRange for partial copy operations
4. Updated all other callers to pass the appropriate compression flag
5. Added comprehensive tests for encrypted volume copy scenarios
This ensures that when copying chunks:
- The IsCompressed flag from the source chunk is passed to the upload
- Compressed data is marked as compressed, preventing double-compression
- Already-encrypted data is not re-encrypted (Cipher: false is correct)
- All chunk metadata (CipherKey, IsCompressed, ETag) is preserved
Tests added:
- TestCreateDestinationChunkPreservesEncryption: Verifies metadata preservation
- TestCopySingleChunkWithEncryption: Tests various encryption/compression scenarios
- TestCopyChunksPreservesMetadata: Tests multi-chunk metadata preservation
- TestEncryptedVolumeScenario: Documents and tests the exact issue #7530 scenario
Fixes #7530
* Address PR review feedback: simplify tests and improve clarity
- Removed TestUploadChunkDataCompressionFlag (panic-based test)
- Removed TestCopySingleChunkWithEncryption (duplicate coverage)
- Removed TestCopyChunksPreservesMetadata (duplicate coverage)
- Added ETag verification to TestEncryptedVolumeCopyScenario
- Renamed to TestEncryptedVolumeCopyScenario for better clarity
- All test coverage now in TestCreateDestinationChunkPreservesEncryption and TestEncryptedVolumeCopyScenario which focus on the actual behavior |
3 weeks ago |
|
|
a24c31de06
|
S3: Add `Vary` header for non-wildcard AllowOrigin (#7547)
|
3 weeks ago |
|
|
a77dfb1ddd |
add debugging for InvalidAccessKeyId
|
3 weeks ago |
|
|
99a9a67741 |
check errors
|
3 weeks ago |
|
|
f125a013a8
|
S3: set identity to request context, and remove obsolete code (#7523)
* list owned buckets
* simplify
* add unit tests
* no-owner buckets
* set identity id
* fallback to request header if iam is not enabled
* refactor to test
* fix comparing
* fix security vulnerability
* Update s3api_bucket_handlers.go
* Update s3api_bucket_handlers.go
* Update s3api_bucket_handlers.go
* set identity to request context
* remove SeaweedFSIsDirectoryKey
* remove obsolete
* simplify
* reuse
* refactor or remove obsolete logic on filer
* Removed the redundant check in GetOrHeadHandler
* surfacing invalid X-Amz-Tagging as a client error
* clean up
* constant
* reuse
* multiple header values
* code reuse
* err on duplicated tag key
|
3 weeks ago |
|
|
a9fefcd22c
|
S3: list owned buckets (#7519)
* list owned buckets
* simplify
* add unit tests
* no-owner buckets
* set identity id
* fallback to request header if iam is not enabled
* refactor to test
* fix comparing
* fix security vulnerability
* Update s3api_bucket_handlers.go
* Update s3api_bucket_handlers.go
* Update s3api_bucket_handlers.go
|
3 weeks ago |
|
|
c1b8d4bf0d
|
S3: adds FilerClient to use cached volume id (#7518)
* adds FilerClient to use cached volume id
* refactor: MasterClient embeds vidMapClient to eliminate ~150 lines of duplication
- Create masterVolumeProvider that implements VolumeLocationProvider
- MasterClient now embeds vidMapClient instead of maintaining duplicate cache logic
- Removed duplicate methods: LookupVolumeIdsWithFallback, getStableVidMap, etc.
- MasterClient still receives real-time updates via KeepConnected streaming
- Updates call inherited addLocation/deleteLocation from vidMapClient
- Benefits: DRY principle, shared singleflight, cache chain logic reused
- Zero behavioral changes - only architectural improvement
* refactor: mount uses FilerClient for efficient volume location caching
- Add configurable vidMap cache size (default: 5 historical snapshots)
- Add FilerClientOption struct for clean configuration
* GrpcTimeout: default 5 seconds (prevents hanging requests)
* UrlPreference: PreferUrl or PreferPublicUrl
* CacheSize: number of historical vidMap snapshots (for volume moves)
- NewFilerClient uses option struct for better API extensibility
- Improved error handling in filerVolumeProvider.LookupVolumeIds:
* Distinguish genuine 'not found' from communication failures
* Log volumes missing from filer response
* Return proper error context with volume count
* Document that filer Locations lacks Error field (unlike master)
- FilerClient.GetLookupFileIdFunction() handles URL preference automatically
- Mount (WFS) creates FilerClient with appropriate options
- Benefits for weed mount:
* Singleflight: Deduplicates concurrent volume lookups
* Cache history: Old volume locations available briefly when volumes move
* Configurable cache depth: Tune for different deployment environments
* Battle-tested vidMap cache with cache chain
* Better concurrency handling with timeout protection
* Improved error visibility and debugging
- Old filer.LookupFn() kept for backward compatibility
- Performance improvement for mount operations with high concurrency
* fix: prevent vidMap swap race condition in LookupFileIdWithFallback
- Hold vidMapLock.RLock() during entire vm.LookupFileId() call
- Prevents resetVidMap() from swapping vidMap mid-operation
- Ensures atomic access to the current vidMap instance
- Added documentation warnings to getStableVidMap() about swap risks
- Enhanced withCurrentVidMap() documentation for clarity
This fixes a subtle race condition where:
1. Thread A: acquires lock, gets vm pointer, releases lock
2. Thread B: calls resetVidMap(), swaps vc.vidMap
3. Thread A: calls vm.LookupFileId() on old/stale vidMap
While the old vidMap remains valid (in cache chain), holding the lock
ensures we consistently use the current vidMap for the entire operation.
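A hedged sketch of the locking pattern (type and field names are hypothetical):
package sketch
import "sync"
type vidMap struct{ locations map[string][]string }
func (vm *vidMap) LookupFileId(fid string) []string { return vm.locations[fid] }
type client struct {
    mu  sync.RWMutex
    vid *vidMap // swapped wholesale by resetVidMap()
}
// lookup holds the read lock across the entire operation so a
// concurrent resetVidMap() cannot swap the map mid-lookup.
func (c *client) lookup(fid string) []string {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return c.vid.LookupFileId(fid)
}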
* fix: FilerClient supports multiple filer addresses for high availability
Critical fix: FilerClient now accepts []ServerAddress instead of single address
- Prevents mount failure when first filer is down (regression fix)
- Implements automatic failover to remaining filers
- Uses round-robin with atomic index tracking (same pattern as WFS.WithFilerClient)
- Retries all configured filers before giving up
- Updates successful filer index for future requests
Changes:
- NewFilerClient([]pb.ServerAddress, ...) instead of (pb.ServerAddress, ...)
- filerVolumeProvider references FilerClient for failover access
- LookupVolumeIds tries all filers with util.Retry pattern
- Mount passes all option.FilerAddresses for HA
- S3 wraps single filer in slice for API consistency
This restores the high availability that existed in the old implementation
where mount would automatically failover between configured filers.
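A minimal sketch of round-robin failover with a remembered index, assuming an atomic int32 index as described (names are illustrative):
package sketch
import (
    "fmt"
    "sync/atomic"
)
// tryEachFiler starts from the last known-good filer and fails over
// round-robin; on success it remembers that index for future calls.
func tryEachFiler(filers []string, lastGood *int32, call func(addr string) error) error {
    n := int32(len(filers))
    if n == 0 {
        return fmt.Errorf("no filers configured")
    }
    start := atomic.LoadInt32(lastGood)
    var lastErr error
    for i := int32(0); i < n; i++ {
        x := (start + i) % n
        if lastErr = call(filers[x]); lastErr == nil {
            atomic.StoreInt32(lastGood, x)
            return nil
        }
    }
    return fmt.Errorf("all %d filers failed: %w", n, lastErr)
}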
* fix: restore leader change detection in KeepConnected stream loop
Critical fix: Leader change detection was accidentally removed from the streaming loop
- Master can announce leader changes during an active KeepConnected stream
- Without this check, client continues talking to non-leader until connection breaks
- This can lead to stale data or operational errors
The check needs to be in TWO places:
1. Initial response (lines 178-187): Detect redirect on first connect
2. Stream loop (lines 203-209): Detect leader changes during active stream
Restored the loop check that was accidentally removed during refactoring.
This ensures the client immediately reconnects to new leader when announced.
* improve: address code review findings on error handling and documentation
1. Master provider now preserves per-volume errors
- Surface detailed errors from master (e.g., misconfiguration, deletion)
- Return partial results with aggregated errors using errors.Join
- Callers can now distinguish specific volume failures from general errors
- Addresses issue of losing vidLoc.Error details
2. Document GetMaster initialization contract
- Add comprehensive documentation explaining blocking behavior
- Clarify that KeepConnectedToMaster must be started first
- Provide typical initialization pattern example
- Prevent confusing timeouts during warm-up
3. Document partial results API contract
- LookupVolumeIdsWithFallback explicitly documents partial results
- Clear examples of how to handle result + error combinations
- Helps prevent callers from discarding valid partial results
4. Add safeguards to legacy filer.LookupFn
- Add deprecation warning with migration guidance
- Implement simple 10,000 entry cache limit
- Log warning when limit reached
- Recommend wdclient.FilerClient for new code
- Prevents unbounded memory growth in long-running processes
These changes improve API clarity and operational safety while maintaining
backward compatibility.
* fix: handle partial results correctly in LookupVolumeIdsWithFallback callers
Two callers were discarding partial results by checking err before processing
the result map. While these are currently single-volume lookups (so partial
results aren't possible), the code was fragile and would break if we ever
batched multiple volumes together.
Changes:
- Check result map FIRST, then conditionally check error
- If volume is found in result, use it (ignore errors about other volumes)
- If volume is NOT found and err != nil, include error context with %w
- Add defensive comments explaining the pattern for future maintainers
This makes the code:
1. Correct for future batched lookups
2. More informative (preserves underlying error details)
3. Consistent with filer_grpc_server.go which already handles this correctly
Example: If looking up ["1", "2", "999"] and only 999 fails, callers
looking for volumes 1 or 2 will succeed instead of failing unnecessarily.
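A hedged sketch of both halves of this pattern: the provider aggregates per-volume errors with errors.Join (Go 1.20+) while still returning partial results, and the caller checks the result map before the error. Names are illustrative.
package sketch
import (
    "errors"
    "fmt"
)
// lookupVolumes returns every location it could resolve plus an error
// aggregating the per-volume failures, so partial results stay usable.
func lookupVolumes(vids []string, fetch func(string) (string, error)) (map[string]string, error) {
    result := make(map[string]string)
    var errs []error
    for _, vid := range vids {
        loc, err := fetch(vid)
        if err != nil {
            errs = append(errs, fmt.Errorf("volume %s: %w", vid, err))
            continue
        }
        result[vid] = loc
    }
    return result, errors.Join(errs...)
}
// locate checks the result map FIRST, then the error, so failures
// about unrelated volumes do not discard a perfectly good result.
func locate(vid string, fetch func(string) (string, error)) (string, error) {
    result, err := lookupVolumes([]string{vid}, fetch)
    if loc, found := result[vid]; found {
        return loc, nil
    }
    if err != nil {
        return "", fmt.Errorf("locate volume %s: %w", vid, err)
    }
    return "", fmt.Errorf("volume %s not found", vid)
}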
* improve: address remaining code review findings
1. Lazy initialize FilerClient in mount for proxy-only setups
- Only create FilerClient when VolumeServerAccess != "filerProxy"
- Avoids wasted work when all reads proxy through filer
- filerClient is nil for proxy mode, initialized for direct access
2. Fix inaccurate deprecation comment in filer.LookupFn
- Updated comment to reflect current behavior (10k bounded cache)
- Removed claim of "unbounded growth" after adding size limit
- Still directs new code to wdclient.FilerClient for better features
3. Audit all MasterClient usages for KeepConnectedToMaster
- Verified all production callers start KeepConnectedToMaster early
- Filer, Shell, Master, Broker, Benchmark, Admin all correct
- IAM creates MasterClient but never uses it (harmless)
- Test code doesn't need KeepConnectedToMaster (mocks)
All callers properly follow the initialization pattern documented in
GetMaster(), preventing unexpected blocking or timeouts.
* fix: restore observability instrumentation in MasterClient
During the refactoring, several important stats counters and logging
statements were accidentally removed from tryConnectToMaster. These are
critical for monitoring and debugging the health of master client connections.
Restored instrumentation:
1. stats.MasterClientConnectCounter("total") - tracks all connection attempts
2. stats.MasterClientConnectCounter(FailedToKeepConnected) - when KeepConnected stream fails
3. stats.MasterClientConnectCounter(FailedToReceive) - when Recv() fails in loop
4. stats.MasterClientConnectCounter(Failed) - when overall gprcErr occurs
5. stats.MasterClientConnectCounter(OnPeerUpdate) - when peer updates detected
Additionally restored peer update logging:
- "+ filer@host noticed group.type address" for node additions
- "- filer@host noticed group.type address" for node removals
- Only logs updates matching the client's FilerGroup for noise reduction
This information is valuable for:
- Monitoring cluster health and connection stability
- Debugging cluster membership changes
- Tracking master failover and reconnection patterns
- Identifying network issues between clients and masters
No functional changes - purely observability restoration.
* improve: implement gRPC-aware retry for FilerClient volume lookups
The previous implementation used util.Retry which only retries errors
containing the string "transport". This is insufficient for handling
the full range of transient gRPC errors.
Changes:
1. Added isRetryableGrpcError() to properly inspect gRPC status codes
- Retries: Unavailable, DeadlineExceeded, ResourceExhausted, Aborted
- Falls back to string matching for non-gRPC network errors
2. Replaced util.Retry with custom retry loop
- 3 attempts with exponential backoff (1s, 1.5s, 2.25s)
- Tries all N filers on each attempt (N*3 total attempts max)
- Fast-fails on non-retryable errors (NotFound, PermissionDenied, etc.)
3. Improved logging
- Shows both filer attempt (x/N) and retry attempt (y/3)
- Logs retry reason and wait time for debugging
Benefits:
- Better handling of transient gRPC failures (server restarts, load spikes)
- Faster failure for permanent errors (no wasted retries)
- More informative logs for troubleshooting
- Maintains existing HA failover across multiple filers
Example: If all 3 filers return Unavailable (server overload):
- Attempt 1: try all 3 filers, wait 1s
- Attempt 2: try all 3 filers, wait 1.5s
- Attempt 3: try all 3 filers, fail
Example: If filer returns NotFound (volume doesn't exist):
- Attempt 1: try all 3 filers, fast-fail (no retry)
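A hedged sketch of such a status-code check, assuming the standard google.golang.org/grpc/status and codes packages (the function name is illustrative):
package sketch
import (
    "strings"

    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)
// isRetryableGrpcError reports whether err looks like a transient gRPC
// failure; non-gRPC errors fall back to simple string matching.
func isRetryableGrpcError(err error) bool {
    if err == nil {
        return false
    }
    if s, ok := status.FromError(err); ok {
        switch s.Code() {
        case codes.Unavailable, codes.DeadlineExceeded, codes.ResourceExhausted, codes.Aborted:
            return true
        default:
            return false
        }
    }
    return strings.Contains(err.Error(), "transport")
}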
* fmt
* improve: add circuit breaker to skip known-unhealthy filers
The previous implementation tried all filers on every failure, including
known-unhealthy ones. This wasted time retrying permanently down filers.
Problem scenario (3 filers, filer0 is down):
- Last successful: filer1 (saved as filerIndex=1)
- Next lookup when filer1 fails:
Retry 1: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
Retry 2: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
Retry 3: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
Total wasted: 15 seconds on known-bad filer!
Solution: Circuit breaker pattern
- Track consecutive failures per filer (atomic int32)
- Skip filers with 3+ consecutive failures
- Re-check unhealthy filers every 30 seconds
- Reset failure count on success
New behavior:
- filer0 fails 3 times → marked unhealthy
- Future lookups skip filer0 for 30 seconds
- After 30s, re-check filer0 (allows recovery)
- If filer0 succeeds, reset failure count to 0
Benefits:
1. Avoids wasting time on known-down filers
2. Still sticks to last healthy filer (via filerIndex)
3. Allows recovery (30s re-check window)
4. No configuration needed (automatic)
Implementation details:
- filerHealth struct tracks failureCount (atomic) + lastFailureTime
- shouldSkipUnhealthyFiler(): checks if we should skip this filer
- recordFilerSuccess(): resets failure count to 0
- recordFilerFailure(): increments count, updates timestamp
- Logs when skipping unhealthy filers (V(2) level)
Example with circuit breaker:
- filer0 down, saved filerIndex=1 (filer1 healthy)
- Lookup 1: filer1(ok) → Done (0.01s)
- Lookup 2: filer1(fail) → filer2(ok) → Done, save filerIndex=2 (0.01s)
- Lookup 3: filer2(fail) → skip filer0 (unhealthy) → filer1(ok) → Done (0.01s)
Much better than wasting 15s trying filer0 repeatedly!
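A minimal sketch of this lock-free circuit breaker; the threshold (3) and window (30s) match the description above, and the names are illustrative:
package sketch
import (
    "sync/atomic"
    "time"
)
type filerHealth struct {
    failureCount      int32
    lastFailureTimeNs int64 // Unix nanoseconds, accessed atomically
}
// shouldSkip reports whether the filer has failed 3+ times in a row
// and its 30-second re-check window has not yet elapsed.
func (h *filerHealth) shouldSkip() bool {
    if atomic.LoadInt32(&h.failureCount) < 3 {
        return false
    }
    last := atomic.LoadInt64(&h.lastFailureTimeNs)
    return last != 0 && time.Since(time.Unix(0, last)) <= 30*time.Second
}
func (h *filerHealth) recordFailure() {
    atomic.AddInt32(&h.failureCount, 1)
    atomic.StoreInt64(&h.lastFailureTimeNs, time.Now().UnixNano())
}
func (h *filerHealth) recordSuccess() {
    atomic.StoreInt32(&h.failureCount, 0)
}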
* fix: OnPeerUpdate should only process updates for matching FilerGroup
Critical bug: The OnPeerUpdate callback was incorrectly moved outside the
FilerGroup check when restoring observability instrumentation. This caused
clients to process peer updates for ALL filer groups, not just their own.
Problem:
Before: mc.OnPeerUpdate only called for update.FilerGroup == mc.FilerGroup
Bug: mc.OnPeerUpdate called for ALL updates regardless of FilerGroup
Impact:
- Multi-tenant deployments with separate filer groups would see cross-group
updates (e.g., group A clients processing group B updates)
- Could cause incorrect cluster membership tracking
- OnPeerUpdate handlers (like Filer's DLM ring updates) would receive
irrelevant updates from other groups
Example scenario:
Cluster has two filer groups: "production" and "staging"
Production filer connects with FilerGroup="production"
Incorrect behavior (bug):
- Receives "staging" group updates
- Incorrectly adds staging filers to production DLM ring
- Cross-tenant data access issues
Correct behavior (fixed):
- Only receives "production" group updates
- Only adds production filers to production DLM ring
- Proper isolation between groups
Fix:
Moved mc.OnPeerUpdate(update, time.Now()) back INSIDE the FilerGroup check
where it belongs, matching the original implementation.
The logging and stats counter were already correctly scoped to matching
FilerGroup, so they remain inside the if block as intended.
* improve: clarify Aborted error handling in volume lookups
Added documentation and logging to address the concern that codes.Aborted
might not always be retryable in all contexts.
Context-specific justification for treating Aborted as retryable:
Volume location lookups (LookupVolume RPC) are simple, read-only operations:
- No transactions
- No write conflicts
- No application-level state changes
- Idempotent (safe to retry)
In this context, Aborted is most likely caused by:
- Filer restarting/recovering (transient)
- Connection interrupted mid-request (transient)
- Server-side resource cleanup (transient)
NOT caused by:
- Application-level conflicts (no writes)
- Transaction failures (no transactions)
- Logical errors (read-only lookup)
Changes:
1. Added detailed comment explaining the context-specific reasoning
2. Added V(1) logging when treating Aborted as retryable
- Helps detect misclassification if it occurs
- Visible in verbose logs for troubleshooting
3. Split switch statement for clarity (one case per line)
If future analysis shows Aborted should not be retried, operators will
now have visibility via logs to make that determination. The logging
provides evidence for future tuning decisions.
Alternative approaches considered but not implemented:
- Removing Aborted entirely (too conservative for read-only ops)
- Message content inspection (adds complexity, no known patterns yet)
- Different handling per RPC type (premature optimization)
* fix: IAM server must start KeepConnectedToMaster for masterClient usage
The IAM server creates and uses a MasterClient but never started
KeepConnectedToMaster, which could cause blocking if IAM config files
have chunks requiring volume lookups.
Problem flow:
NewIamApiServerWithStore()
→ creates masterClient
→ ❌ NEVER starts KeepConnectedToMaster
GetS3ApiConfigurationFromFiler()
→ filer.ReadEntry(iama.masterClient, ...)
→ StreamContent(masterClient, ...) if file has chunks
→ masterClient.GetLookupFileIdFunction()
→ GetMaster(ctx) ← BLOCKS indefinitely waiting for connection!
While IAM config files (identity & policies) are typically small and
stored inline without chunks, the code path exists and would block
if the files ever had chunks.
Fix:
Start KeepConnectedToMaster in background goroutine right after
creating masterClient, following the documented pattern:
mc := wdclient.NewMasterClient(...)
go mc.KeepConnectedToMaster(ctx)
This ensures masterClient is usable if ReadEntry ever needs to
stream chunked content from volume servers.
Note: This bug was dormant because IAM config files are small (<256 bytes)
and SeaweedFS stores small files inline in Entry.Content, not as chunks.
The bug would only manifest if:
- IAM config grew > 256 bytes (inline threshold)
- Config was stored as chunks on volume servers
- ReadEntry called StreamContent
- GetMaster blocked indefinitely
Now all 9 production MasterClient instances correctly follow the pattern.
* fix: data race on filerHealth.lastFailureTime in circuit breaker
The circuit breaker tracked lastFailureTime as time.Time, which was
written in recordFilerFailure and read in shouldSkipUnhealthyFiler
without synchronization, causing a data race.
Data race scenario:
Goroutine 1: recordFilerFailure(0)
health.lastFailureTime = time.Now() // ❌ unsynchronized write
Goroutine 2: shouldSkipUnhealthyFiler(0)
time.Since(health.lastFailureTime) // ❌ unsynchronized read
→ RACE DETECTED by -race detector
Fix:
Changed lastFailureTime from time.Time to int64 (lastFailureTimeNs)
storing Unix nanoseconds for atomic access:
Write side (recordFilerFailure):
atomic.StoreInt64(&health.lastFailureTimeNs, time.Now().UnixNano())
Read side (shouldSkipUnhealthyFiler):
lastFailureNs := atomic.LoadInt64(&health.lastFailureTimeNs)
if lastFailureNs == 0 { return false } // Never failed
lastFailureTime := time.Unix(0, lastFailureNs)
time.Since(lastFailureTime) > 30*time.Second
Benefits:
- Atomic reads/writes (no data race)
- Efficient (int64 is 8 bytes, always atomic on 64-bit systems)
- Zero value (0) naturally means "never failed"
- No mutex needed (lock-free circuit breaker)
Note: sync/atomic was already imported for failureCount, so no new
import needed.
* fix: create fresh timeout context for each filer retry attempt
The timeout context was created once at function start and reused across
all retry attempts, causing subsequent retries to run with progressively
shorter (or expired) deadlines.
Problem flow:
Line 244: timeoutCtx, cancel := context.WithTimeout(ctx, 5s)
defer cancel()
Retry 1, filer 0: client.LookupVolume(timeoutCtx, ...) ← 5s available ✅
Retry 1, filer 1: client.LookupVolume(timeoutCtx, ...) ← 3s left
Retry 1, filer 2: client.LookupVolume(timeoutCtx, ...) ← 0.5s left
Retry 2, filer 0: client.LookupVolume(timeoutCtx, ...) ← EXPIRED! ❌
Result: Retries always fail with DeadlineExceeded, defeating the purpose
of retries.
Fix:
Moved context.WithTimeout inside the per-filer loop, creating a fresh
timeout context for each attempt:
for x := 0; x < n; x++ {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
err := pb.WithGrpcFilerClient(..., func(client) {
resp, err := client.LookupVolume(timeoutCtx, ...)
...
})
cancel() // Clean up immediately after call
}
Benefits:
- Each filer attempt gets full fc.grpcTimeout (default 5s)
- Retries actually have time to complete
- No context leaks (cancel called after each attempt)
- More predictable timeout behavior
Example with fix:
Retry 1, filer 0: fresh 5s timeout ✅
Retry 1, filer 1: fresh 5s timeout ✅
Retry 2, filer 0: fresh 5s timeout ✅
Total max time: 3 retries × 3 filers × 5s = 45s (plus backoff)
Note: The outer ctx (from caller) still provides overall cancellation if
the caller cancels or times out the entire operation.
* fix: always reset vidMap cache on master reconnection
The previous refactoring removed the else block that resets vidMap when
the first message from a newly connected master is not a VolumeLocation.
Problem scenario:
1. Client connects to master-1 and builds vidMap cache
2. Master-1 fails, client connects to master-2
3. First message from master-2 is a ClusterNodeUpdate (not VolumeLocation)
4. Old code: vidMap is reset and updated ✅
5. New code: vidMap is NOT reset ❌
6. Result: Client uses stale cache from master-1 → data access errors
Example flow with bug:
Connect to master-2
First message: ClusterNodeUpdate {filer.x added}
→ No resetVidMap() call
→ vidMap still has master-1's stale volume locations
→ Client reads from wrong volume servers → 404 errors
Fix:
Restored the else block that resets vidMap when first message is not
a VolumeLocation:
if resp.VolumeLocation != nil {
// ... check leader, reset, and update ...
} else {
// First message is ClusterNodeUpdate or other type
// Must still reset to avoid stale data
mc.resetVidMap()
}
This ensures the cache is always cleared when establishing a new master
connection, regardless of what the first message type is.
Root cause:
During the vidMapClient refactoring, this else block was accidentally
dropped, making failover behavior fragile and non-deterministic (depends
on which message type arrives first from the new master).
Impact:
- High severity for master failover scenarios
- Could cause read failures, 404s, or wrong data access
- Only manifests when first message is not VolumeLocation
* fix: goroutine and connection leak in IAM server shutdown
The IAM server's KeepConnectedToMaster goroutine used context.Background(),
which is non-cancellable, causing the goroutine and its gRPC connections
to leak on server shutdown.
Problem:
go masterClient.KeepConnectedToMaster(context.Background())
- context.Background() never cancels
- KeepConnectedToMaster goroutine runs forever
- gRPC connection to master stays open
- No way to stop cleanly on server shutdown
Result: Resource leaks when IAM server is stopped
Fix:
1. Added shutdownContext and shutdownCancel to IamApiServer struct
2. Created cancellable context in NewIamApiServerWithStore:
shutdownCtx, shutdownCancel := context.WithCancel(context.Background())
3. Pass shutdownCtx to KeepConnectedToMaster:
go masterClient.KeepConnectedToMaster(shutdownCtx)
4. Added Shutdown() method to invoke cancel:
func (iama *IamApiServer) Shutdown() {
if iama.shutdownCancel != nil {
iama.shutdownCancel()
}
}
5. Stored masterClient reference on IamApiServer for future use
Benefits:
- Goroutine stops cleanly when Shutdown() is called
- gRPC connections are closed properly
- No resource leaks on server restart/stop
- Shutdown() is idempotent (safe to call multiple times)
Usage (for future graceful shutdown):
iamServer, _ := iamapi.NewIamApiServer(...)
defer iamServer.Shutdown()
// or in signal handler:
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
go func() {
<-sigChan
iamServer.Shutdown()
os.Exit(0)
}()
Note: Current command implementations (weed/command/iam.go) don't have
shutdown paths yet, but this makes IAM server ready for proper lifecycle
management when that infrastructure is added.
* refactor: remove unnecessary KeepMasterClientConnected wrapper in filer
The Filer.KeepMasterClientConnected() method was an unnecessary wrapper that
just forwarded to MasterClient.KeepConnectedToMaster(). This wrapper added
no value and created inconsistency with other components that call
KeepConnectedToMaster directly.
Removed:
filer.go:178-180
func (fs *Filer) KeepMasterClientConnected(ctx context.Context) {
fs.MasterClient.KeepConnectedToMaster(ctx)
}
Updated caller:
filer_server.go:181
- go fs.filer.KeepMasterClientConnected(context.Background())
+ go fs.filer.MasterClient.KeepConnectedToMaster(context.Background())
Benefits:
- Consistent with other components (S3, IAM, Shell, Mount)
- Removes unnecessary indirection
- Clearer that KeepConnectedToMaster runs in background goroutine
- Follows the documented pattern from MasterClient.GetMaster()
Note: shell/commands.go was verified and already correctly starts
KeepConnectedToMaster in a background goroutine (shell_liner.go:51):
go commandEnv.MasterClient.KeepConnectedToMaster(ctx)
* fix: use client ID instead of timeout for gRPC signature parameter
The pb.WithGrpcFilerClient signature parameter is meant to be a client
identifier for logging and tracking (added as 'sw-client-id' gRPC metadata
in streaming mode), not a timeout value.
Problem:
timeoutMs := int32(fc.grpcTimeout.Milliseconds()) // 5000 (5 seconds)
err := pb.WithGrpcFilerClient(false, timeoutMs, filerAddress, ...)
- Passing timeout (5000ms) as signature/client ID
- Misuse of API: signature should be a unique client identifier
- Timeout is already handled by timeoutCtx passed to gRPC call
- Inconsistent with other callers (all use 0 or proper client ID)
How WithGrpcFilerClient uses signature parameter:
func WithGrpcClient(..., signature int32, ...) {
if streamingMode && signature != 0 {
md := metadata.New(map[string]string{"sw-client-id": fmt.Sprintf("%d", signature)})
ctx = metadata.NewOutgoingContext(ctx, md)
}
...
}
It's for client identification, not timeout control!
Fix:
1. Added clientId int32 field to FilerClient struct
2. Initialize with rand.Int31() in NewFilerClient for unique ID
3. Removed timeoutMs variable (and misleading comment)
4. Use fc.clientId in pb.WithGrpcFilerClient call
Before:
err := pb.WithGrpcFilerClient(false, timeoutMs, ...)
^^^^^^^^^ Wrong! (5000)
After:
err := pb.WithGrpcFilerClient(false, fc.clientId, ...)
^^^^^^^^^^^^ Correct! (random int31)
Benefits:
- Correct API usage (signature = client ID, not timeout)
- Timeout still works via timeoutCtx (unchanged)
- Consistent with other pb.WithGrpcFilerClient callers
- Enables proper client tracking on filer side via gRPC metadata
- Each FilerClient instance has unique ID for debugging
Examples of correct usage elsewhere:
weed/iamapi/iamapi_server.go:145 pb.WithGrpcFilerClient(false, 0, ...)
weed/command/s3.go:215 pb.WithGrpcFilerClient(false, 0, ...)
weed/shell/commands.go:110 pb.WithGrpcFilerClient(streamingMode, 0, ...)
All use 0 (or a proper signature), not a timeout value.
* fix: add timeout to master volume lookup to prevent indefinite blocking
The masterVolumeProvider.LookupVolumeIds method was using the context
directly without a timeout, which could cause it to block indefinitely
if the master is slow to respond or unreachable.
Problem:
err := pb.WithMasterClient(false, p.masterClient.GetMaster(ctx), ...)
resp, err := client.LookupVolume(ctx, &master_pb.LookupVolumeRequest{...})
- No timeout on gRPC call to master
- Could block indefinitely if master is unresponsive
- Inconsistent with FilerClient which uses 5s timeout
- This is a fallback path (cache miss) but still needs protection
Scenarios where this could hang:
1. Master server under heavy load (slow response)
2. Network issues between client and master
3. Master server hung or deadlocked
4. Master in process of shutting down
Fix:
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
err := pb.WithMasterClient(false, p.masterClient.GetMaster(timeoutCtx), ...)
resp, err := client.LookupVolume(timeoutCtx, &master_pb.LookupVolumeRequest{...})
Benefits:
- Prevents indefinite blocking on master lookup
- Consistent with FilerClient timeout pattern (5 seconds)
- Faster failure detection when master is unresponsive
- Caller's context still honored (timeout is in addition, not replacement)
- Improves overall system resilience
Note: 5 seconds is a reasonable default for volume lookups:
- Long enough for normal master response (~10-50ms)
- Short enough to fail fast on issues
- Matches FilerClient's grpcTimeout default
* purge
* refactor: address code review feedback on comments and style
Fixed several code quality issues identified during review:
1. Corrected backoff algorithm description in filer_client.go:
- Changed "Exponential backoff" to "Multiplicative backoff with 1.5x factor"
- The formula waitTime * 3/2 produces 1s, 1.5s, 2.25s, not exponential 2^n
- More accurate terminology prevents confusion
2. Removed redundant nil check in vidmap_client.go:
- After the for loop, node is guaranteed to be non-nil
- Loop either returns early or assigns non-nil value to node
- Simplified: if node != nil { node.cache.Store(nil) } → node.cache.Store(nil)
3. Added startup logging to IAM server for consistency:
- Log when master client connection starts
- Matches pattern in S3ApiServer (line 100 in s3api_server.go)
- Improves operational visibility during startup
- Added missing glog import
4. Fixed indentation in filer/reader_at.go:
- Lines 76-91 had incorrect indentation (extra tab level)
- Line 93 also misaligned
- Now properly aligned with surrounding code
5. Updated deprecation comment to follow Go convention:
- Changed "DEPRECATED:" to "Deprecated:" (standard Go format)
- Tools like staticcheck and IDEs recognize the standard format
- Enables automated deprecation warnings in tooling
- Better developer experience
All changes are cosmetic and do not affect functionality.
* fmt
* refactor: make circuit breaker parameters configurable in FilerClient
The circuit breaker failure threshold (3) and reset timeout (30s) were
hardcoded, making it difficult to tune the client's behavior in different
deployment environments without modifying the code.
Problem:
func shouldSkipUnhealthyFiler(index int32) bool {
if failureCount < 3 { // Hardcoded threshold
return false
}
if time.Since(lastFailureTime) > 30*time.Second { // Hardcoded timeout
return false
}
}
Different environments have different needs:
- High-traffic production: may want lower threshold (2) for faster failover
- Development/testing: may want higher threshold (5) to tolerate flaky networks
- Low-latency services: may want shorter reset timeout (10s)
- Batch processing: may want longer reset timeout (60s)
Solution:
1. Added fields to FilerClientOption:
- FailureThreshold int32 (default: 3)
- ResetTimeout time.Duration (default: 30s)
2. Added fields to FilerClient:
- failureThreshold int32
- resetTimeout time.Duration
3. Applied defaults in NewFilerClient with option override:
failureThreshold := int32(3)
resetTimeout := 30 * time.Second
if opt.FailureThreshold > 0 {
failureThreshold = opt.FailureThreshold
}
if opt.ResetTimeout > 0 {
resetTimeout = opt.ResetTimeout
}
4. Updated shouldSkipUnhealthyFiler to use configurable values:
if failureCount < fc.failureThreshold { ... }
if time.Since(lastFailureTime) > fc.resetTimeout { ... }
Benefits:
✓ Tunable for different deployment environments
✓ Backward compatible (defaults match previous hardcoded values)
✓ No breaking changes to existing code
✓ Better maintainability and flexibility
Example usage:
// Aggressive failover for low-latency production
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
FailureThreshold: 2,
ResetTimeout: 10 * time.Second,
})
// Tolerant of flaky networks in development
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
FailureThreshold: 5,
ResetTimeout: 60 * time.Second,
})
* retry parameters
* refactor: make retry and timeout parameters configurable
Made retry logic and gRPC timeouts configurable across FilerClient and
MasterClient to support different deployment environments and network
conditions.
Problem 1: Hardcoded retry parameters in FilerClient
waitTime := time.Second // Fixed at 1s
maxRetries := 3 // Fixed at 3 attempts
waitTime = waitTime * 3 / 2 // Fixed 1.5x multiplier
Different environments have different needs:
- Unstable networks: may want more retries (5) with longer waits (2s)
- Low-latency production: may want fewer retries (2) with shorter waits (500ms)
- Batch processing: may want exponential backoff (2x) instead of 1.5x
Problem 2: Hardcoded gRPC timeout in MasterClient
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
Master lookups may need different timeouts:
- High-latency cross-region: may need 10s timeout
- Local network: may use 2s timeout for faster failure detection
Solution for FilerClient:
1. Added fields to FilerClientOption:
- MaxRetries int (default: 3)
- InitialRetryWait time.Duration (default: 1s)
- RetryBackoffFactor float64 (default: 1.5)
2. Added fields to FilerClient:
- maxRetries int
- initialRetryWait time.Duration
- retryBackoffFactor float64
3. Updated LookupVolumeIds to use configurable values:
waitTime := fc.initialRetryWait
maxRetries := fc.maxRetries
for retry := 0; retry < maxRetries; retry++ {
...
waitTime = time.Duration(float64(waitTime) * fc.retryBackoffFactor)
}
Solution for MasterClient:
1. Added grpcTimeout field to MasterClient (default: 5s)
2. Initialize in NewMasterClient with 5 * time.Second default
3. Updated masterVolumeProvider to use p.masterClient.grpcTimeout
Benefits:
✓ Tunable for different network conditions and deployment scenarios
✓ Backward compatible (defaults match previous hardcoded values)
✓ No breaking changes to existing code
✓ Consistent configuration pattern across FilerClient and MasterClient
Example usage:
// Fast-fail for low-latency production with stable network
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
MaxRetries: 2,
InitialRetryWait: 500 * time.Millisecond,
RetryBackoffFactor: 2.0, // Exponential backoff
GrpcTimeout: 2 * time.Second,
})
// Patient retries for unstable network or batch processing
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
MaxRetries: 5,
InitialRetryWait: 2 * time.Second,
RetryBackoffFactor: 1.5,
GrpcTimeout: 10 * time.Second,
})
Note: MasterClient timeout is currently set at construction time and not
user-configurable via NewMasterClient parameters. Future enhancement could
add a MasterClientOption struct similar to FilerClientOption.
* fix: rename vicCacheLock to vidCacheLock for consistency
Fixed typo in variable name for better code consistency and readability.
Problem:
vidCache := make(map[string]*filer_pb.Locations)
var vicCacheLock sync.RWMutex // Typo: vic instead of vid
vicCacheLock.RLock()
locations, found := vidCache[vid]
vicCacheLock.RUnlock()
The variable name 'vicCacheLock' is inconsistent with 'vidCache'.
Both should use 'vid' prefix (volume ID) not 'vic'.
Fix:
Renamed all 5 occurrences:
- var vicCacheLock → var vidCacheLock (line 56)
- vicCacheLock.RLock() → vidCacheLock.RLock() (line 62)
- vicCacheLock.RUnlock() → vidCacheLock.RUnlock() (line 64)
- vicCacheLock.Lock() → vidCacheLock.Lock() (line 81)
- vicCacheLock.Unlock() → vidCacheLock.Unlock() (line 91)
Benefits:
✓ Consistent variable naming convention
✓ Clearer intent (volume ID cache lock)
✓ Better code readability
✓ Easier code navigation
* fix: use defer cancel() with anonymous function for proper context cleanup
Fixed context cancellation to use defer pattern correctly in loop iteration.
Problem:
for x := 0; x < n; x++ {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
err := pb.WithGrpcFilerClient(...)
cancel() // Only called on normal return, not on panic
}
Issues with original approach:
1. If pb.WithGrpcFilerClient panics, cancel() is never called → context leak
2. If callback returns early (though unlikely here), cleanup might be missed
3. Not following Go best practices for context.WithTimeout usage
Problem with naive defer in loop:
for x := 0; x < n; x++ {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
defer cancel() // ❌ WRONG: All defers accumulate until function returns
}
In Go, defer executes when the surrounding *function* returns, not when
the loop iteration ends. This would accumulate n deferred cancel() calls
and leak contexts until LookupVolumeIds returns.
Solution: Wrap in anonymous function
for x := 0; x < n; x++ {
err := func() error {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
defer cancel() // ✅ Executes when anonymous function returns (per iteration)
return pb.WithGrpcFilerClient(...)
}()
}
Benefits:
✓ Context always cancelled, even on panic
✓ defer executes after each iteration (not accumulated)
✓ Follows Go best practices for context.WithTimeout
✓ No resource leaks during retry loop execution
✓ Cleaner error handling
Reference:
Go documentation for context.WithTimeout explicitly shows:
ctx, cancel := context.WithTimeout(...)
defer cancel()
This is the idiomatic pattern that should always be followed.
* Can't use defer directly in loop
* improve: add data center preference and URL shuffling for consistent performance
Added missing data center preference and load distribution (URL shuffling)
to ensure consistent performance and behavior across all code paths.
Problem 1: PreferPublicUrl path missing DC preference and shuffling
Location: weed/wdclient/filer_client.go lines 184-192
The custom PreferPublicUrl implementation was simply iterating through
locations and building URLs without considering:
1. Data center proximity (latency optimization)
2. Load distribution across volume servers
Before:
for _, loc := range locations {
url := loc.PublicUrl
if url == "" { url = loc.Url }
fullUrls = append(fullUrls, "http://"+url+"/"+fileId)
}
return fullUrls, nil
After:
var sameDcUrls, otherDcUrls []string
dataCenter := fc.GetDataCenter()
for _, loc := range locations {
url := loc.PublicUrl
if url == "" { url = loc.Url }
httpUrl := "http://" + url + "/" + fileId
if dataCenter != "" && dataCenter == loc.DataCenter {
sameDcUrls = append(sameDcUrls, httpUrl)
} else {
otherDcUrls = append(otherDcUrls, httpUrl)
}
}
rand.Shuffle(len(sameDcUrls), ...)
rand.Shuffle(len(otherDcUrls), ...)
fullUrls = append(sameDcUrls, otherDcUrls...)
Problem 2: Cache miss path missing URL shuffling
Location: weed/wdclient/vidmap_client.go lines 95-108
The cache miss path (fallback lookup) was missing URL shuffling, while
the cache hit path (vm.LookupFileId) already shuffles URLs. This
inconsistency meant:
- Cache hit: URLs shuffled → load distributed
- Cache miss: URLs not shuffled → first server always hit
Before:
var sameDcUrls, otherDcUrls []string
// ... build URLs ...
fullUrls = append(sameDcUrls, otherDcUrls...)
return fullUrls, nil
After:
var sameDcUrls, otherDcUrls []string
// ... build URLs ...
rand.Shuffle(len(sameDcUrls), ...)
rand.Shuffle(len(otherDcUrls), ...)
fullUrls = append(sameDcUrls, otherDcUrls...)
return fullUrls, nil
Benefits:
✓ Reduced latency by preferring same-DC volume servers
✓ Even load distribution across all volume servers
✓ Consistent behavior between cache hit/miss paths
✓ Consistent behavior between PreferUrl and PreferPublicUrl
✓ Matches behavior of existing vidMap.LookupFileId implementation
Impact on performance:
- Lower read latency (same-DC preference)
- Better volume server utilization (load spreading)
- No single volume server becomes a hotspot
Note: Added math/rand import to vidmap_client.go for shuffle support.
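A hedged sketch of the resulting ordering logic (the location type is illustrative; the real code lives in the files named above):
package sketch
import "math/rand"
type location struct{ URL, PublicURL, DataCenter string }
// orderURLs prefers volume servers in the reader's data center and
// shuffles within each group so no single server becomes a hotspot.
func orderURLs(locs []location, fileID, dataCenter string) []string {
    var sameDC, otherDC []string
    for _, loc := range locs {
        u := loc.PublicURL
        if u == "" {
            u = loc.URL
        }
        httpURL := "http://" + u + "/" + fileID
        if dataCenter != "" && dataCenter == loc.DataCenter {
            sameDC = append(sameDC, httpURL)
        } else {
            otherDC = append(otherDC, httpURL)
        }
    }
    rand.Shuffle(len(sameDC), func(i, j int) { sameDC[i], sameDC[j] = sameDC[j], sameDC[i] })
    rand.Shuffle(len(otherDC), func(i, j int) { otherDC[i], otherDC[j] = otherDC[j], otherDC[i] })
    return append(sameDC, otherDC...)
}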
* Update weed/wdclient/masterclient.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* improve: call IAM server Shutdown() for best-effort cleanup
Added call to iamApiServer.Shutdown() to ensure cleanup happens when possible,
and documented the limitations of the current approach.
Problem:
The Shutdown() method was defined in IamApiServer but never called anywhere,
meaning the KeepConnectedToMaster goroutine would continue running even when
the IAM server stopped, causing resource leaks.
Changes:
1. Store iamApiServer instance in weed/command/iam.go
- Changed: _, iamApiServer_err := iamapi.NewIamApiServer(...)
- To: iamApiServer, iamApiServer_err := iamapi.NewIamApiServer(...)
2. Added defer call for best-effort cleanup
- defer iamApiServer.Shutdown()
- This will execute if startIamServer() returns normally
3. Added logging in Shutdown() method
- Log when shutdown is triggered for visibility
4. Documented limitations and future improvements
- Added note that defer only works for normal function returns
- SeaweedFS commands don't currently have signal handling
- Suggested future enhancement: add SIGTERM/SIGINT handling
Current behavior:
- ✓ Cleanup happens if HTTP server fails to start (glog.Fatalf path)
- ✓ Cleanup happens if Serve() returns with error (unlikely)
- ✗ Cleanup does NOT happen on SIGTERM/SIGINT (process killed)
The last case is a limitation of the current command architecture - all
SeaweedFS commands (s3, filer, volume, master, iam) lack signal handling
for graceful shutdown. This is a systemic issue that affects all services.
Future enhancement:
To properly handle SIGTERM/SIGINT, the command layer would need:
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
go func() {
httpServer.Serve(listener) // Non-blocking
}()
<-sigChan
glog.V(0).Infof("Received shutdown signal")
iamApiServer.Shutdown()
httpServer.Shutdown(context.Background())
This would require refactoring the command structure for all services,
which is out of scope for this change.
Benefits of current approach:
✓ Best-effort cleanup (better than nothing)
✓ Proper cleanup in error paths
✓ Documented for future improvement
✓ Consistent with how other SeaweedFS services handle lifecycle
* fix data race in test
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
|
4 weeks ago |
|
|
5f77f87335
|
S3: S3 Object Retention API to include XML namespace support (#7517)
* Refactor S3 Object Retention API to include XML namespace support and improve compatibility with Veeam. Updated XML tags to remove hardcoded namespaces and added test cases for retention and legal hold configurations without namespaces.
* Added XMLNS field setting in both places
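A hedged Go sketch of the xmlns approach with encoding/xml; the struct shape is illustrative, not the exact SeaweedFS type:
package main
import (
    "encoding/xml"
    "fmt"
)
// Retention carries the namespace as an optional attribute rather than
// hardcoding it in the tag, so documents with or without xmlns parse.
type Retention struct {
    XMLName xml.Name `xml:"Retention"`
    XMLNS   string   `xml:"xmlns,attr,omitempty"`
    Mode    string   `xml:"Mode"`
}
func main() {
    r := Retention{XMLNS: "http://s3.amazonaws.com/doc/2006-03-01/", Mode: "COMPLIANCE"}
    out, _ := xml.Marshal(r)
    fmt.Println(string(out))
}
|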
4 weeks ago |
|
|
6281e62d7f
|
S3: JWT generation for volume server authentication (#7514)
* Refactor JWT generation for volume server authentication to use centralized function from filer package, improving code clarity and reducing redundancy.
* Update s3api_object_handlers.go
|
4 weeks ago |
|
|
ca84a8a713
|
S3: Directly read write volume servers (#7481)
* Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization
* revert
Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata
Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata
* Lazy Entry Fetch for SSE, Skip Conditional Header Check
* SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive)
* SSE-C is mutually exclusive with SSE-S3 and SSE-KMS
* refactor
* Removed Premature Mutual Exclusivity Check
* check for the presence of the X-Amz-Server-Side-Encryption header
* not used
* fmt
* directly read write volume servers
* HTTP Range Request Support
* set header
* md5
* copy object
* fix sse
* fmt
* implement sse
* sse continue
* fixed the suffix range bug (bytes=-N for "last N bytes")
* debug logs
* Missing PartsCount Header
* profiling
* url encoding
* test_multipart_get_part
* headers
* debug
* adjust log level
* handle part number
* Update s3api_object_handlers.go
* nil safety
* set ModifiedTsNs
* remove
* nil check
* fix sse header
* same logic as filer
* decode values
* decode ivBase64
* s3: Fix SSE decryption JWT authentication and streaming errors
Critical fix for SSE (Server-Side Encryption) test failures:
1. **JWT Authentication Bug** (Root Cause):
- Changed from GenJwtForFilerServer to GenJwtForVolumeServer
- S3 API now uses correct JWT when directly reading from volume servers
- Matches filer's authentication pattern for direct volume access
- Fixes 'unexpected EOF' and 500 errors in SSE tests
2. **Streaming Error Handling**:
- Added error propagation in getEncryptedStreamFromVolumes goroutine
- Use CloseWithError() to properly communicate stream failures
- Added debug logging for streaming errors
3. **Response Header Timing**:
- Removed premature WriteHeader(http.StatusOK) call
- Let Go's http package write status automatically on first write
- Prevents header lock when errors occur during streaming
4. **Enhanced SSE Decryption Debugging**:
- Added IV/Key validation and logging for SSE-C, SSE-KMS, SSE-S3
- Better error messages for missing or invalid encryption metadata
- Added glog.V(2) debugging for decryption setup
This fixes SSE integration test failures where encrypted objects
could not be retrieved due to volume server authentication failures.
The JWT bug was causing volume servers to reject requests, resulting
in truncated/empty streams (EOF) or internal errors.
* s3: Fix SSE multipart upload metadata preservation
Critical fix for SSE multipart upload test failures (SSE-C and SSE-KMS):
**Root Cause - Incomplete SSE Metadata Copying**:
The old code only tried to copy 'SeaweedFSSSEKMSKey' from the first
part to the completed object. This had TWO bugs:
1. **Wrong Constant Name** (Key Mismatch Bug):
- Storage uses: SeaweedFSSSEKMSKeyHeader = 'X-SeaweedFS-SSE-KMS-Key'
- Old code read: SeaweedFSSSEKMSKey = 'x-seaweedfs-sse-kms-key'
- Result: SSE-KMS metadata was NEVER copied → 500 errors
2. **Missing SSE-C and SSE-S3 Headers**:
- SSE-C requires: IV, Algorithm, KeyMD5
- SSE-S3 requires: encrypted key data + standard headers
- Old code: copied nothing for SSE-C/SSE-S3 → decryption failures
**Fix - Complete SSE Header Preservation**:
Now copies ALL SSE headers from first part to completed object:
- SSE-C: SeaweedFSSSEIV, CustomerAlgorithm, CustomerKeyMD5
- SSE-KMS: SeaweedFSSSEKMSKeyHeader, AwsKmsKeyId, ServerSideEncryption
- SSE-S3: SeaweedFSSSES3Key, ServerSideEncryption
Applied consistently to all 3 code paths:
1. Versioned buckets (creates version file)
2. Suspended versioning (creates main object with null versionId)
3. Non-versioned buckets (creates main object)
**Why This Is Correct**:
The headers copied EXACTLY match what putToFiler stores during part
upload (lines 496-521 in s3api_object_handlers_put.go). This ensures
detectPrimarySSEType() can correctly identify encrypted multipart
objects and trigger inline decryption with proper metadata.
Fixes: TestSSEMultipartUploadIntegration (SSE-C and SSE-KMS subtests)
* s3: Add debug logging for versioning state diagnosis
Temporary debug logging to diagnose test_versioning_obj_plain_null_version_overwrite_suspended failure.
Added glog.V(0) logging to show:
1. setBucketVersioningStatus: when versioning status is changed
2. PutObjectHandler: what versioning state is detected (Enabled/Suspended/none)
3. PutObjectHandler: which code path is taken (putVersionedObject vs putSuspendedVersioningObject)
This will help identify if:
- The versioning status is being set correctly in bucket config
- The cache is returning stale/incorrect versioning state
- The switch statement is correctly routing to suspended vs enabled handlers
* s3: Enhanced versioning state tracing for suspended versioning diagnosis
Added comprehensive logging across the entire versioning state flow:
PutBucketVersioningHandler:
- Log requested status (Enabled/Suspended)
- Log when calling setBucketVersioningStatus
- Log success/failure of status change
setBucketVersioningStatus:
- Log bucket and status being set
- Log when config is updated
- Log completion with error code
updateBucketConfig:
- Log versioning state being written to cache
- Immediate cache verification after Set
- Log if cache verification fails
getVersioningState:
- Log bucket name and state being returned
- Log if object lock forces VersioningEnabled
- Log errors
This will reveal:
1. If PutBucketVersioning(Suspended) is reaching the handler
2. If the cache update succeeds
3. What state getVersioningState returns during PUT
4. Any cache consistency issues
Expected to show why bucket still reports 'Enabled' after 'Suspended' call.
* s3: Add SSE chunk detection debugging for multipart uploads
Added comprehensive logging to diagnose why TestSSEMultipartUploadIntegration fails:
detectPrimarySSEType now logs:
1. Total chunk count and extended header count
2. All extended headers with 'sse'/'SSE'/'encryption' in the name
3. For each chunk: index, SseType, and whether it has metadata
4. Final SSE type counts (SSE-C, SSE-KMS, SSE-S3)
This will reveal if:
- Chunks are missing SSE metadata after multipart completion
- Extended headers are copied correctly from first part
- The SSE detection logic is working correctly
Expected to show if chunks have SseType=0 (none) or proper SSE types set.
* s3: Trace SSE chunk metadata through multipart completion and retrieval
Added end-to-end logging to track SSE chunk metadata lifecycle:
**During Multipart Completion (filer_multipart.go)**:
1. Log finalParts chunks BEFORE mkFile - shows SseType and metadata
2. Log versionEntry.Chunks INSIDE mkFile callback - shows if mkFile preserves SSE info
3. Log success after mkFile completes
**During GET Retrieval (s3api_object_handlers.go)**:
1. Log retrieved entry chunks - shows SseType and metadata after retrieval
2. Log detected SSE type result
This will reveal at which point SSE chunk metadata is lost:
- If finalParts have SSE metadata but versionEntry.Chunks don't → mkFile bug
- If versionEntry.Chunks have SSE metadata but retrieved chunks don't → storage/retrieval bug
- If chunks never have SSE metadata → multipart completion SSE processing bug
Expected to show chunks with SseType=NONE during retrieval even though
they were created with proper SseType during multipart completion.
* s3: Fix SSE-C multipart IV base64 decoding bug
**Critical Bug Found**: SSE-C multipart uploads were failing because:
Root Cause:
- entry.Extended[SeaweedFSSSEIV] stores base64-encoded IV (24 bytes for 16-byte IV)
- SerializeSSECMetadata expects raw IV bytes (16 bytes)
- During multipart completion, we were passing base64 IV directly → serialization error
Error Message:
"Failed to serialize SSE-C metadata for chunk in part X: invalid IV length: expected 16 bytes, got 24"
Fix:
- Base64-decode IV before passing to SerializeSSECMetadata
- Added error handling for decode failures
Impact:
- SSE-C multipart uploads will now correctly serialize chunk metadata
- Chunks will have proper SSE metadata for decryption during GET
This fixes the SSE-C subtest of TestSSEMultipartUploadIntegration.
SSE-KMS still has a separate issue (error code 23) being investigated.
* fixes
* kms sse
* handle retry when the object is not found in the .versions folder and the normal object should be read instead
* quick check (no retries) to see if the .versions/ directory exists
* skip retry if object is not found
* explicit update to avoid sync delay
* fix map update lock
* Remove fmt.Printf debug statements
* Fix SSE-KMS multipart base IV fallback to fail instead of regenerating
* fmt
* Fix ACL grants storage logic
* header handling
* nil handling
* range read for sse content
* test range requests for sse objects
* fmt
* unused code
* upload in chunks
* header case
* fix url
* bucket policy error vs bucket not found
* jwt handling
* fmt
* jwt in request header
* Optimize Case-Insensitive Prefix Check
* dead code
* Eliminated Unnecessary Stream Prefetch for Multipart SSE
* range sse
* sse
* refactor
* context
* fmt
* fix type
* fix SSE-C IV Mismatch
* Fix Headers Being Set After WriteHeader
* fix url parsing
* propagate sse headers
* multipart sse-s3
* aws sig v4 authentication
* sse kms
* set content range
* better errors
* Update s3api_object_handlers_copy.go
* Update s3api_object_handlers.go
* Update s3api_object_handlers.go
* avoid magic number
* clean up
* Update s3api_bucket_policy_handlers.go
* fix url parsing
* context
* data and metadata both use background context
* adjust the offset
* SSE Range Request IV Calculation
* adjust logs
* IV relative to offset in each part, not the whole file
* collect logs
* offset
* fix offset
* fix url
* logs
* variable
* jwt
* Multipart ETag semantics: conditionally set object-level Md5 for single-chunk uploads only.
* sse
* adjust IV and offset
* multipart boundaries
* ensures PUT and GET operations return consistent ETags
* Metadata Header Case
* CommonPrefixes Sorting with URL Encoding
* always sort
* remove the extra PathUnescape call
* fix the multipart get part ETag
* the FileChunk is created without setting ModifiedTsNs
* Sort CommonPrefixes lexicographically to match AWS S3 behavior
* set md5 for multipart uploads
* prevents any potential data loss or corruption in the small-file inline storage path
* compiles correctly
* decryptedReader will now be properly closed after use
* Fixed URL encoding and sort order for CommonPrefixes
* Update s3api_object_handlers_list.go
* SSE-x Chunk View Decryption
* Different IV offset calculations for single-part vs multipart objects
* still too verbose in logs
* less logs
* ensure correct conversion
* fix listing
* nil check
* minor fixes
* nil check
* single character delimiter
* optimize
* range on empty object or zero-length
* correct IV based on its position within that part, not its position in the entire object
* adjust offset
* offset
Fetch FULL encrypted chunk (not just the range)
Adjust IV by PartOffset/ChunkOffset only
Decrypt full chunk
Skip in the DECRYPTED stream to reach OffsetInChunk
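A hedged sketch of the counter adjustment this implies, assuming AES-CTR with 16-byte blocks (not the exact SeaweedFS function): advance the IV by whole blocks, then discard the leftover bytes from the decrypted stream.
package sketch
// addToIV advances a 16-byte AES-CTR IV by the number of whole
// 16-byte blocks before skipOffset; the caller must still skip
// skipOffset%16 bytes in the decrypted stream.
func addToIV(iv []byte, skipOffset int64) []byte {
    blocks := uint64(skipOffset / 16)
    out := make([]byte, len(iv))
    copy(out, iv)
    // big-endian addition of the block count into the counter
    for i := len(out) - 1; i >= 0 && blocks > 0; i-- {
        sum := uint64(out[i]) + (blocks & 0xff)
        out[i] = byte(sum)
        blocks = blocks>>8 + sum>>8
    }
    return out
}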
* look breaking
* refactor
* error on no content
* handle intra-block byte skipping
* Incomplete HTTP Response Error Handling
* multipart SSE
* Update s3api_object_handlers.go
* address comments
* less logs
* handling directory
* Optimized rejectDirectoryObjectWithoutSlash() to avoid unnecessary lookups
* Revert "handling directory"
This reverts commit
|
4 weeks ago |
|
|
fa8df6e42b
|
S3: Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization (#7480)
* Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization
* revert
Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata
Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata
* Lazy Entry Fetch for SSE, Skip Conditional Header Check
* SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive)
* SSE-C is mutually exclusive with SSE-S3 and SSE-KMS
* refactor
* Removed Premature Mutual Exclusivity Check
* check for the presence of the X-Amz-Server-Side-Encryption header
* not used
* fmt
|
4 weeks ago |