Tree:
92800c31a2
add-ec-vacuum
add-filer-iam-grpc
add-iam-grpc-management
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
chris/s3tables-shell-admin
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
feature/mini-port-detection
feature/modernize-s3-tests
feature/s3-multi-cert-support
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-bucket-name-case-7910
fix-helm-fromtoml-compatibility
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-pr-7909
fix-s3-configure-consistency
fix-s3-object-tagging-issue-7589
fix-sts-session-token-7941
fix-versioning-listing-only
fix/windows-test-file-cleanup
ftp
gh-pages
iam-multi-file-migration
iam-permissions-and-api
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
pr-7412
pr/7984
pr/8140
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
s3tables-by-claude
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
4.05
4.06
4.07
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
${ noResults }
82 Commits (92800c31a22181e4683fe02de5c252bb60a0d290)
| Author | SHA1 | Message | Date |
|---|---|---|---|
|
|
551a31e156
|
Implement IAM propagation to S3 servers (#8130)
* Implement IAM propagation to S3 servers - Add PropagatingCredentialStore to propagate IAM changes to S3 servers via gRPC - Add Policy management RPCs to S3 proto and S3ApiServer - Update CredentialManager to use PropagatingCredentialStore when MasterClient is available - Wire FilerServer to enable propagation * Implement parallel IAM propagation and fix S3 cluster registration - Parallelized IAM change propagation with 10s timeout. - Refined context usage in PropagatingCredentialStore. - Added S3Type support to cluster node management. - Enabled S3 servers to register with gRPC address to the master. - Ensured IAM configuration reload after policy updates via gRPC. * Optimize IAM propagation with direct in-memory cache updates * Secure IAM propagation: Use metadata to skip persistence only on propagation * pb: refactor IAM and S3 services for unidirectional IAM propagation - Move SeaweedS3IamCache service from iam.proto to s3.proto. - Remove legacy IAM management RPCs and empty SeaweedS3 service from s3.proto. - Enforce that S3 servers only use the synchronization interface. * pb: regenerate Go code for IAM and S3 services Updated generated code following the proto refactoring of IAM synchronization services. * s3api: implement read-only mode for Embedded IAM API - Add readOnly flag to EmbeddedIamApi to reject write operations via HTTP. - Enable read-only mode by default in S3ApiServer. - Handle AccessDenied error in writeIamErrorResponse. - Embed SeaweedS3IamCacheServer in S3ApiServer. * credential: refactor PropagatingCredentialStore for unidirectional IAM flow - Update to use s3_pb.SeaweedS3IamCacheClient for propagation to S3 servers. - Propagate full Identity object via PutIdentity for consistency. - Remove redundant propagation of specific user/account/policy management RPCs. - Add timeout context for propagation calls. * s3api: implement SeaweedS3IamCacheServer for unidirectional sync - Update S3ApiServer to implement the cache synchronization gRPC interface. - Methods (PutIdentity, RemoveIdentity, etc.) now perform direct in-memory cache updates. - Register SeaweedS3IamCacheServer in command/s3.go. - Remove registration for the legacy and now empty SeaweedS3 service. * s3api: update tests for read-only IAM and propagation - Added TestEmbeddedIamReadOnly to verify rejection of write operations in read-only mode. - Update test setup to pass readOnly=false to NewEmbeddedIamApi in routing tests. - Updated EmbeddedIamApiForTest helper with read-only checks matching production behavior. * s3api: add back temporary debug logs for IAM updates Log IAM updates received via: - gRPC propagation (PutIdentity, PutPolicy, etc.) - Metadata configuration reloads (LoadS3ApiConfigurationFromCredentialManager) - Core identity management (UpsertIdentity, RemoveIdentity) * IAM: finalize propagation fix with reduced logging and clarified architecture * Allow configuring IAM read-only mode for S3 server integration tests * s3api: add defensive validation to UpsertIdentity * s3api: fix log message to reference correct IAM read-only flag * test/s3/iam: ensure WaitForS3Service checks for IAM write permissions * test: enable writable IAM in Makefile for integration tests * IAM: add GetPolicy/ListPolicies RPCs to s3.proto * S3: add GetBucketPolicy and ListBucketPolicies helpers * S3: support storing generic IAM policies in IdentityAccessManagement * S3: implement IAM policy RPCs using IdentityAccessManagement * IAM: fix stale user identity on rename propagation |
4 days ago |
|
|
43229b05ce
|
Explicit IAM gRPC APIs for S3 Server (#8126)
* Update IAM and S3 protobuf definitions for explicit IAM gRPC APIs * Refactor s3api: Extract generic ExecuteAction method for IAM operations * Implement explicit IAM gRPC APIs in S3 server * iam: remove deprecated GetConfiguration and PutConfiguration RPCs * iamapi: refactor handlers to use CredentialManager directly * s3api: refactor embedded IAM to use CredentialManager directly * server: remove deprecated configuration gRPC handlers * credential/grpc: refactor configuration calls to return error * shell: update s3.configure to list users instead of full config * s3api: fix CreateServiceAccount gRPC handler to map required fields * s3api: fix UpdateServiceAccount gRPC handler to map fields and safe status * s3api: enforce UserName in embedded IAM ListAccessKeys * test: fix test_config.json structure to match proto definition * Revert "credential/grpc: refactor configuration calls to return error" This reverts commit |
4 days ago |
|
|
6bf088cec9
|
IAM Policy Management via gRPC (#8109)
* Add IAM gRPC service definition - Add GetConfiguration/PutConfiguration for config management - Add CreateUser/GetUser/UpdateUser/DeleteUser/ListUsers for user management - Add CreateAccessKey/DeleteAccessKey/GetUserByAccessKey for access key management - Methods mirror existing IAM HTTP API functionality * Add IAM gRPC handlers on filer server - Implement IamGrpcServer with CredentialManager integration - Handle configuration get/put operations - Handle user CRUD operations - Handle access key create/delete operations - All methods delegate to CredentialManager for actual storage * Wire IAM gRPC service to filer server - Add CredentialManager field to FilerOption and FilerServer - Import credential store implementations in filer command - Initialize CredentialManager from credential.toml if available - Register IAM gRPC service on filer gRPC server - Enable credential management via gRPC alongside existing filer services * Regenerate IAM protobuf with gRPC service methods * iam_pb: add Policy Management to protobuf definitions * credential: implement PolicyManager in credential stores * filer: implement IAM Policy Management RPCs * shell: add s3.policy command * test: add integration test for s3.policy * test: fix compilation errors in policy_test * pb * fmt * test * weed shell: add -policies flag to s3.configure This allows linking/unlinking IAM policies to/from identities directly from the s3.configure command. * test: verify s3.configure policy linking and fix port allocation - Added test case for linking policies to users via s3.configure - Implemented findAvailablePortPair to ensure HTTP and gRPC ports are both available, avoiding conflicts with randomized port assignments. - Updated assertion to match jsonpb output (policyNames) * credential: add StoreTypeGrpc constant * credential: add IAM gRPC store boilerplate * credential: implement identity methods in gRPC store * credential: implement policy methods in gRPC store * admin: use gRPC credential store for AdminServer This ensures that all IAM and policy changes made through the Admin UI are persisted via the Filer's IAM gRPC service instead of direct file manipulation. * shell: s3.configure use granular IAM gRPC APIs instead of full config patching * shell: s3.configure use granular IAM gRPC APIs * shell: replace deprecated ioutil with os in s3.policy * filer: use gRPC FailedPrecondition for unconfigured credential manager * test: improve s3.policy integration tests and fix error checks * ci: add s3 policy shell integration tests to github workflow * filer: fix LoadCredentialConfiguration error handling * credential/grpc: propagate unmarshal errors in GetPolicies * filer/grpc: improve error handling and validation * shell: use gRPC status codes in s3.configure * credential: document PutPolicy as create-or-replace * credential/postgres: reuse CreatePolicy in PutPolicy to deduplicate logic * shell: add timeout context and strictly enforce flags in s3.policy * iam: standardize policy content field naming in gRPC and proto * shell: extract slice helper functions in s3.configure * filer: map credential store errors to gRPC status codes * filer: add input validation for UpdateUser and CreateAccessKey * iam: improve validation in policy and config handlers * filer: ensure IAM service registration by defaulting credential manager * credential: add GetStoreName method to manager * test: verify policy deletion in integration test |
5 days ago |
|
|
535be3096b
|
Add AWS IAM integration tests and refactor admin authorization (#8098)
* Add AWS IAM integration tests and refactor admin authorization - Added AWS IAM management integration tests (User, AccessKey, Policy) - Updated test framework to support IAM client creation with JWT/OIDC - Refactored s3api authorization to be policy-driven for IAM actions - Removed hardcoded role name checks for admin privileges - Added new tests to GitHub Actions basic test matrix * test(s3/iam): add UpdateUser and UpdateAccessKey tests and fix nil pointer dereference * feat(s3api): add DeletePolicy and update tests with cleanup logic * test(s3/iam): use t.Cleanup for managed policy deletion in CreatePolicy test |
1 week ago |
|
|
d664ca5ed3
|
fix: IAM authentication with AWS Signature V4 and environment credentials (#8099)
* fix: IAM authentication with AWS Signature V4 and environment credentials Three key fixes for authenticated IAM requests to work: 1. Fix request body consumption before signature verification - iamMatcher was calling r.ParseForm() which consumed POST body - This broke AWS Signature V4 verification on subsequent reads - Now only check query string in matcher, preserving body for verification - File: weed/s3api/s3api_server.go 2. Preserve environment variable credentials across config reloads - After IAM mutations, config reload overwrote env var credentials - Extract env var loading into loadEnvironmentVariableCredentials() - Call after every config reload to persist credentials - File: weed/s3api/auth_credentials.go 3. Add authenticated IAM tests and test infrastructure - New TestIAMAuthenticated suite with AWS SDK + Signature V4 - Dynamic port allocation for independent test execution - Flag reset to prevent state leakage between tests - CI workflow to run S3 and IAM tests separately - Files: test/s3/example/*, .github/workflows/s3-example-integration-tests.yml All tests pass: - TestIAMCreateUser (unauthenticated) - TestIAMAuthenticated (with AWS Signature V4) - S3 integration tests * fmt * chore: rename test/s3/example to test/s3/normal * simplify: CI runs all integration tests in single job * Update s3-example-integration-tests.yml * ci: run each test group separately to avoid raft registry conflicts |
1 week ago |
|
|
14f44379cb
|
test: fix flaky S3 volume encryption test (#8083)
Specifically: - Use bytes.NewReader for binary data instead of strings.NewReader - Increase binary test data from 8 bytes to 1KB to avoid edge cases - Add 50ms delay between subtests to prevent overwhelming the server |
1 week ago |
|
|
51735e667c
|
Fix S3 conditional writes with versioning (Issue #8073) (#8080)
* Fix S3 conditional writes with versioning (Issue #8073) Refactors conditional header checks to properly resolve the latest object version when versioning is enabled. This prevents incorrect validation against non-versioned root objects. * Add integration test for S3 conditional writes with versioning (Issue #8073) * Refactor: Propagate internal errors in conditional header checks - Make resolveObjectEntry return errors from isVersioningConfigured - Update checkConditionalHeaders checks to return 500 on internal resolve errors * Refactor: Stricter error handling and test assertions - Propagate internal errors in checkConditionalHeaders*WithGetter functions - Enforce strict 412 PreconditionFailed check in integration test * Perf: Add early return for conditional headers + safety improvements - Add fast path to skip resolveObjectEntry when no conditional headers present - Avoids expensive getLatestObjectVersion retries in common case - Add nil checks before dereferencing pointers in integration test - Fix grammar in test comments - Remove duplicate comment in resolveObjectEntry * Refactor: Use errors.Is for robust ErrNotFound checking - Update checkConditionalHeaders* to use errors.Is(err, filer_pb.ErrNotFound) - Update resolveObjectEntry to use errors.Is for wrapped error compatibility - Remove duplicate comment lines in s3api handlers * Perf: Optimize resolveObjectEntry for conditional checks - Refactor getLatestObjectVersion to doGetLatestObjectVersion supporting variable retries - Use 1-retry path in resolveObjectEntry to avoid exponential backoff latency * Test: Enhance integration test with content verification - Verify actual object content equals expected content after successful conditional write - Add missing io and errors imports to test file * Refactor: Final refinements based on feedback - Optimize header validation by passing parsed headers to avoid redundant parsing - Simplify integration test assertions using require.Error and assert.True - Fix build errors in s3api handler and test imports * Test: Use smithy.APIError for robust error code checking - Replace string-based error checking with structured API error - Add smithy-go import for AWS SDK v2 error handling * Test: Use types.PreconditionFailed and handle io.ReadAll error - Replace smithy.APIError with more specific types.PreconditionFailed - Add proper error handling for io.ReadAll in content verification * Refactor: Use combined error checking and add nil guards - Use smithy.APIError with ErrorCode() for robust error checking - Add nil guards for entry.Attributes before accessing Mtime - Prevents potential panics when Attributes is uninitialized |
1 week ago |
|
|
86c61e86c9
|
fix: S3 copying test Makefile syntax and add S3_ENDPOINT env support (#8042)
* fix: S3 copying test Makefile syntax and add S3_ENDPOINT env support * fix: add weed mini to stop-seaweedfs target Ensure weed mini process is properly killed when stopping SeaweedFS, matching the process started in start-seaweedfs target. * Clean up PID file in stop-seaweedfs and clean targets Address review feedback to ensure /tmp/weed-mini.pid is removed for a clean state after tests. |
2 weeks ago |
|
|
ee3813787e
|
feat(s3api): Implement S3 Policy Variables (#8039)
* feat: Add AWS IAM Policy Variables support to S3 API
Implements policy variables for dynamic access control in bucket policies.
Supported variables:
- aws:username - Extracted from principal ARN
- aws:userid - User identifier (same as username in SeaweedFS)
- aws:principaltype - IAMUser, IAMRole, or AssumedRole
- jwt:* - Any JWT claim (e.g., jwt:preferred_username, jwt:sub)
Key changes:
- Added PolicyVariableRegex to detect ${...} patterns
- Extended CompiledStatement with DynamicResourcePatterns, DynamicPrincipalPatterns, DynamicActionPatterns
- Added Claims field to PolicyEvaluationArgs for JWT claim access
- Implemented SubstituteVariables() for variable replacement from context and JWT claims
- Implemented extractPrincipalVariables() for ARN parsing
- Updated EvaluateConditions() to support variable substitution
- Comprehensive unit and integration tests
Resolves #8037
* feat: Add LDAP and PrincipalAccount variable support
Completes future enhancements for policy variables:
- Added ldap:* variable support for LDAP claims
- ldap:username - LDAP username from claims
- ldap:dn - LDAP distinguished name from claims
- ldap:* - Any LDAP claim
- Added aws:PrincipalAccount extraction from ARN
- Extracts account ID from principal ARN
- Available as ${aws:PrincipalAccount} in policies
Updated SubstituteVariables() to check LDAP claims
Updated extractPrincipalVariables() to extract account ID
Added comprehensive tests for new variables
* feat(s3api): implement IAM policy variables core logic and optimization
* feat(s3api): integrate policy variables with S3 authentication and handlers
* test(s3api): add integration tests for policy variables
* cleanup: remove unused policy conversion files
* Add S3 policy variables integration tests and path support
- Add comprehensive integration tests for policy variables
- Test username isolation, JWT claims, LDAP claims
- Add support for IAM paths in principal ARN parsing
- Add tests for principals with paths
* Fix IAM Role principal variable extraction
IAM Roles should not have aws:userid or aws:PrincipalAccount
according to AWS behavior. Only IAM Users and Assumed Roles
should have these variables.
Fixes TestExtractPrincipalVariables test failures.
* Security fixes and bug fixes for S3 policy variables
SECURITY FIXES:
- Prevent X-SeaweedFS-Principal header spoofing by clearing internal
headers at start of authentication (auth_credentials.go)
- Restrict policy variable substitution to safe allowlist to prevent
client header injection (iam/policy/policy_engine.go)
- Add core policy validation before storing bucket policies
BUG FIXES:
- Remove unused sid variable in evaluateStatement
- Fix LDAP claim lookup to check both prefixed and unprefixed keys
- Add ValidatePolicy call in PutBucketPolicyHandler
These fixes prevent privilege escalation via header injection and
ensure only validated identity claims are used in policy evaluation.
* Additional security fixes and code cleanup
SECURITY FIXES:
- Fixed X-Forwarded-For spoofing by only trusting proxy headers from
private/localhost IPs (s3_iam_middleware.go)
- Changed context key from "sourceIP" to "aws:SourceIp" for proper
policy variable substitution
CODE IMPROVEMENTS:
- Kept aws:PrincipalAccount for IAM Roles to support condition evaluations
- Removed redundant STS principaltype override
- Removed unused service variable
- Cleaned up commented-out debug logging statements
- Updated tests to reflect new IAM Role behavior
These changes prevent IP spoofing attacks and ensure policy variables
work correctly with the safe allowlist.
* Add security documentation for ParseJWTToken
Added comprehensive security comments explaining that ParseJWTToken
is safe despite parsing without verification because:
- It's only used for routing to the correct verification method
- All code paths perform cryptographic verification before trusting claims
- OIDC tokens: validated via validateExternalOIDCToken
- STS tokens: validated via ValidateSessionToken
Enhanced function documentation with clear security warnings about
proper usage to prevent future misuse.
* Fix IP condition evaluation to use aws:SourceIp key
Fixed evaluateIPCondition in IAM policy engine to use "aws:SourceIp"
instead of "sourceIP" to match the updated extractRequestContext.
This fixes the failing IP-restricted role test where IP-based policy
conditions were not being evaluated correctly.
Updated all test cases to use the correct "aws:SourceIp" key.
* Address code review feedback: optimize and clarify
PERFORMANCE IMPROVEMENT:
- Optimized expandPolicyVariables to use regexp.ReplaceAllStringFunc
for single-pass variable substitution instead of iterating through
all safe variables. This improves performance from O(n*m) to O(m)
where n is the number of safe variables and m is the pattern length.
CODE CLARITY:
- Added detailed comment explaining LDAP claim fallback mechanism
(checks both prefixed and unprefixed keys for compatibility)
- Enhanced TODO comment for trusted proxy configuration with rationale
and recommendations for supporting cloud load balancers, CDNs, and
complex network topologies
All tests passing.
* Address Copilot code review feedback
BUG FIXES:
- Fixed type switch for int/int32/int64 - separated into individual cases
since interface type switches only match the first type in multi-type cases
- Fixed grammatically incorrect error message in types.go
CODE QUALITY:
- Removed duplicate Resource/NotResource validation (already in ValidateStatement)
- Added comprehensive comment explaining isEnabled() logic and security implications
- Improved trusted proxy NOTE comment to be more concise while noting limitations
All tests passing.
* Fix test failures after extractSourceIP security changes
Updated tests to work with the security fix that only trusts
X-Forwarded-For/X-Real-IP headers from private IP addresses:
- Set RemoteAddr to 127.0.0.1 in tests to simulate trusted proxy
- Changed context key from "sourceIP" to "aws:SourceIp"
- Added test case for untrusted proxy (public RemoteAddr)
- Removed invalid ValidateStatement call (validation happens in ValidatePolicy)
All tests now passing.
* Address remaining Gemini code review feedback
CODE SAFETY:
- Deep clone Action field in CompileStatement to prevent potential data races
if the original policy document is modified after compilation
TEST CLEANUP:
- Remove debug logging (fmt.Fprintf) from engine_notresource_test.go
- Remove unused imports in engine_notresource_test.go
All tests passing.
* Fix insecure JWT parsing in IAM auth flow
SECURITY FIX:
- Renamed ParseJWTToken to ParseUnverifiedJWTToken with explicit security warnings.
- Refactored AuthenticateJWT to use the trusted SessionInfo returned by ValidateSessionToken
instead of relying on unverified claims from the initial parse.
- Refactored ValidatePresignedURLWithIAM to reuse the robust AuthenticateJWT logic, removing
duplicated and insecure manual token parsing.
This ensures all identity information (Role, Principal, Subject) used for authorization
decisions is derived solely from cryptographically verified tokens.
* Security: Fix insecure JWT claim extraction in policy engine
- Refactored EvaluatePolicy to accept trusted claims from verified Identity instead of parsing unverified tokens
- Updated AuthenticateJWT to populate Claims in IAMIdentity from verified sources (SessionInfo/ExternalIdentity)
- Updated s3api_server and handlers to pass claims correctly
- Improved isPrivateIP to support IPv6 loopback, link-local, and ULA
- Fixed flaky distributed_session_consistency test with retry logic
* fix(iam): populate Subject in STSSessionInfo to ensure correct identity propagation
This fixes the TestS3IAMAuthentication/valid_jwt_token_authentication failure by ensuring the session subject (sub) is correctly mapped to the internal SessionInfo struct, allowing bucket ownership validation to succeed.
* Optimized isPrivateIP
* Create s3-policy-tests.yml
* fix tests
* fix tests
* tests(s3/iam): simplify policy to resource-based \ (step 1)
* tests(s3/iam): add explicit Deny NotResource for isolation (step 2)
* fixes
* policy: skip resource matching for STS trust policies to allow AssumeRole evaluation
* refactor: remove debug logging and hoist policy variables for performance
* test: fix TestS3IAMBucketPolicyIntegration cleanup to handle per-subtest object lifecycle
* test: fix bucket name generation to comply with S3 63-char limit
* test: skip TestS3IAMPolicyEnforcement until role setup is implemented
* test: use weed mini for simpler test server deployment
Replace 'weed server' with 'weed mini' for IAM tests to avoid port binding issues
and simplify the all-in-one server deployment. This improves test reliability
and execution time.
* security: prevent allocation overflow in policy evaluation
Add maxPoliciesForEvaluation constant to cap the number of policies evaluated
in a single request. This prevents potential integer overflow when allocating
slices for policy lists that may be influenced by untrusted input.
Changes:
- Add const maxPoliciesForEvaluation = 1024 to set an upper bound
- Validate len(policies) < maxPoliciesForEvaluation before appending bucket policy
- Use append() instead of make([]string, len+1) to avoid arithmetic overflow
- Apply fix to both IsActionAllowed policy evaluation paths
|
2 weeks ago |
|
|
905e7e72d9
|
Add remote.copy.local command to copy local files to remote storage (#8033)
* Add remote.copy.local command to copy local files to remote storage This new command solves the issue described in GitHub Discussion #8031 where files exist locally but are not synced to remote storage due to missing filer logs. Features: - Copies local-only files to remote storage - Supports file filtering (include/exclude patterns) - Dry run mode to preview actions - Configurable concurrency for performance - Force update option for existing remote files - Comprehensive error handling with retry logic Usage: remote.copy.local -dir=/path/to/mount/dir [options] This addresses the need to manually sync files when filer logs were deleted or when local files were never synced to remote storage. * shell: rename commandRemoteLocalSync to commandRemoteCopyLocal * test: add comprehensive remote cache integration tests * shell: fix forceUpdate logic in remote.copy.local The previous logic only allowed force updates when localEntry.RemoteEntry was not nil, which defeated the purpose of using -forceUpdate to fix inconsistencies where local metadata might be missing. Now -forceUpdate will overwrite remote files whenever they exist, regardless of local metadata state. * shell: fix code review issues in remote.copy.local - Return actual error from flag parsing instead of swallowing it - Use sync.Once to safely capture first error in concurrent operations - Add atomic counter to track actual successful copies - Protect concurrent writes to output with mutex to prevent interleaving - Fix path matching to prevent false positives with sibling directories (e.g., /mnt/remote2 no longer matches /mnt/remote) * test: address code review nitpicks in integration tests - Improve create_bucket error handling to fail on real errors - Fix test assertions to properly verify expected failures - Use case-insensitive string matching for error detection - Replace weak logging-only tests with proper assertions - Remove extra blank line in Makefile * test: remove redundant edge case tests Removed 5 tests that were either duplicates or didn't assert meaningful behavior: - TestEdgeCaseEmptyDirectory (duplicate of TestRemoteCopyLocalEmptyDirectory) - TestEdgeCaseRapidCacheUncache (no meaningful assertions) - TestEdgeCaseConcurrentCommands (only logs errors, no assertions) - TestEdgeCaseInvalidPaths (no security assertions) - TestEdgeCaseFileNamePatterns (duplicate of pattern tests in cache tests) Kept valuable stress tests: nested directories, special characters, very large files (100MB), many small files (100), and zero-byte files. * test: fix CI failures by forcing localhost IP advertising Added -ip=127.0.0.1 flag to both primary and remote weed mini commands to prevent IP auto-detection issues in CI environments. Without this flag, the master would advertise itself using the actual IP (e.g., 10.1.0.17) while binding to 127.0.0.1, causing connection refused errors when other services tried to connect to the gRPC port. * test: address final code review issues - Add proper error assertions for concurrent commands test - Require errors for invalid path tests instead of just logging - Remove unused 'match' field from pattern test struct - Add dry-run output assertion to verify expected behavior - Simplify redundant condition in remote.copy.local (remove entry.RemoteEntry check) * test: fix remote.configure tests to match actual validation rules - Use only letters in remote names (no numbers) to match validation - Relax missing parameter test expectations since validation may not be strict - Generate unique names using letter suffix instead of numbers * shell: rename pathToCopyCopy to localPath for clarity Improved variable naming in concurrent copy loop to make the code more readable and less repetitive. * test: fix remaining test failures - Remove strict error requirement for invalid paths (commands handle gracefully) - Fix TestRemoteUncacheBasic to actually test uncache instead of cache - Use simple numeric names for remote.configure tests (testcfg1234 format) to avoid validation issues with letter-only or complex name generation * test: use only letters in remote.configure test names The validation regex ^[A-Za-z][A-Za-z0-9]*$ requires names to start with a letter, but using static letter-only names avoids any potential issues with the validation. * test: remove quotes from -name parameter in remote.configure tests Single quotes were being included as part of the name value, causing validation failures. Changed from -name='testremote' to -name=testremote. * test: fix remote.configure assertion to be flexible about JSON formatting Changed from checking exact JSON format with specific spacing to just checking if the name appears in the output, since JSON formatting may vary (e.g., "name": "value" vs "name": "value"). |
2 weeks ago |
|
|
06391701ed
|
Add AssumeRole and AssumeRoleWithLDAPIdentity STS actions (#8003)
* test: add integration tests for AssumeRole and AssumeRoleWithLDAPIdentity STS actions - Add s3_sts_assume_role_test.go with comprehensive tests for AssumeRole: * Parameter validation (missing RoleArn, RoleSessionName, invalid duration) * AWS SigV4 authentication with valid/invalid credentials * Temporary credential generation and usage - Add s3_sts_ldap_test.go with tests for AssumeRoleWithLDAPIdentity: * Parameter validation (missing LDAP credentials, RoleArn) * LDAP authentication scenarios (valid/invalid credentials) * Integration with LDAP server (when configured) - Update Makefile with new test targets: * test-sts: run all STS tests * test-sts-assume-role: run AssumeRole tests only * test-sts-ldap: run LDAP STS tests only * test-sts-suite: run tests with full service lifecycle - Enhance setup_all_tests.sh: * Add OpenLDAP container setup for LDAP testing * Create test LDAP users (testuser, ldapadmin) * Set LDAP environment variables for tests * Update cleanup to remove LDAP container - Fix setup_keycloak.sh: * Enable verbose error logging for realm creation * Improve error diagnostics Tests use fail-fast approach (t.Fatal) when server not configured, ensuring clear feedback when infrastructure is missing. * feat: implement AssumeRole and AssumeRoleWithLDAPIdentity STS actions Implement two new STS actions to match MinIO's STS feature set: **AssumeRole Implementation:** - Add handleAssumeRole with full AWS SigV4 authentication - Integrate with existing IAM infrastructure via verifyV4Signature - Validate required parameters (RoleArn, RoleSessionName) - Validate DurationSeconds (900-43200 seconds range) - Generate temporary credentials with expiration - Return AWS-compatible XML response **AssumeRoleWithLDAPIdentity Implementation:** - Add handleAssumeRoleWithLDAPIdentity handler (stub) - Validate LDAP-specific parameters (LDAPUsername, LDAPPassword) - Validate common STS parameters (RoleArn, RoleSessionName, DurationSeconds) - Return proper error messages for missing LDAP provider - Ready for LDAP provider integration **Routing Fixes:** - Add explicit routes for AssumeRole and AssumeRoleWithLDAPIdentity - Prevent IAM handler from intercepting authenticated STS requests - Ensure proper request routing priority **Handler Infrastructure:** - Add IAM field to STSHandlers for SigV4 verification - Update NewSTSHandlers to accept IAM reference - Add STS-specific error codes and response types - Implement writeSTSErrorResponse for AWS-compatible errors The AssumeRole action is fully functional and tested. AssumeRoleWithLDAPIdentity requires LDAP provider implementation. * fix: update IAM matcher to exclude STS actions from interception Update the IAM handler matcher to check for STS actions (AssumeRole, AssumeRoleWithWebIdentity, AssumeRoleWithLDAPIdentity) and exclude them from IAM handler processing. This allows STS requests to be handled by the STS fallback handler even when they include AWS SigV4 authentication. The matcher now parses the form data to check the Action parameter and returns false for STS actions, ensuring they are routed to the correct handler. Note: This is a work-in-progress fix. Tests are still showing some routing issues that need further investigation. * fix: address PR review security issues for STS handlers This commit addresses all critical security issues from PR review: Security Fixes: - Use crypto/rand for cryptographically secure credential generation instead of time.Now().UnixNano() (fixes predictable credentials) - Add sts:AssumeRole permission check via VerifyActionPermission to prevent unauthorized role assumption - Generate proper session tokens using crypto/rand instead of placeholder strings Code Quality Improvements: - Refactor DurationSeconds parsing into reusable parseDurationSeconds() helper function used by all three STS handlers - Create generateSecureCredentials() helper for consistent and secure temporary credential generation - Fix iamMatcher to check query string as fallback when Action not found in form data LDAP Provider Implementation: - Add go-ldap/ldap/v3 dependency - Create LDAPProvider implementing IdentityProvider interface with full LDAP authentication support (connect, bind, search, groups) - Update ProviderFactory to create real LDAP providers - Wire LDAP provider into AssumeRoleWithLDAPIdentity handler Test Infrastructure: - Add LDAP user creation verification step in setup_all_tests.sh * fix: address PR feedback (Round 2) - config validation & provider improvements - Implement `validateLDAPConfig` in `ProviderFactory` - Improve `LDAPProvider.Initialize`: - Support `connectionTimeout` parsing (string/int/float) from config map - Warn if `BindDN` is present but `BindPassword` is empty - Improve `LDAPProvider.GetUserInfo`: - Add fallback to `searchUserGroups` if `memberOf` returns no groups (consistent with Authenticate) * fix: address PR feedback (Round 3) - LDAP connection improvements & build fix - Improve `LDAPProvider` connection handling: - Use `net.Dialer` with configured timeout for connection establishment - Enforce TLS 1.2+ (`MinVersion: tls.VersionTLS12`) for both LDAPS and StartTLS - Fix build error in `s3api_sts.go` (format verb for ErrorCode) * fix: address PR feedback (Round 4) - LDAP hardening, Authz check & Routing fix - LDAP Provider Hardening: - Prevent re-initialization - Enforce single user match in `GetUserInfo` (was explicit only in Authenticate) - Ensure connection closure if StartTLS fails - STS Handlers: - Add robust provider detection using type assertion - **Security**: Implement authorization check (`VerifyActionPermission`) after LDAP authentication - Routing: - Update tests to reflect that STS actions are handled by STS handler, not generic IAM * fix: address PR feedback (Round 5) - JWT tokens, ARN formatting, PrincipalArn CRITICAL FIXES: - Replace standalone credential generation with STS service JWT tokens - handleAssumeRole now generates proper JWT session tokens - handleAssumeRoleWithLDAPIdentity now generates proper JWT session tokens - Session tokens can be validated across distributed instances - Fix ARN formatting in responses - Extract role name from ARN using utils.ExtractRoleNameFromArn() - Prevents malformed ARNs like "arn:aws:sts::assumed-role/arn:aws:iam::..." - Add configurable AccountId for federated users - Add AccountId field to STSConfig (defaults to "111122223333") - PrincipalArn now uses configured account ID instead of hardcoded "aws" - Enables proper trust policy validation IMPROVEMENTS: - Sanitize LDAP authentication error messages (don't leak internal details) - Remove duplicate comment in provider detection - Add utils import for ARN parsing utilities * feat: implement LDAP connection pooling to prevent resource exhaustion PERFORMANCE IMPROVEMENT: - Add connection pool to LDAPProvider (default size: 10 connections) - Reuse LDAP connections across authentication requests - Prevent file descriptor exhaustion under high load IMPLEMENTATION: - connectionPool struct with channel-based connection management - getConnection(): retrieves from pool or creates new connection - returnConnection(): returns healthy connections to pool - createConnection(): establishes new LDAP connection with TLS support - Close(): cleanup method to close all pooled connections - Connection health checking (IsClosing()) before reuse BENEFITS: - Reduced connection overhead (no TCP handshake per request) - Better resource utilization under load - Prevents "too many open files" errors - Non-blocking pool operations (creates new conn if pool empty) * fix: correct TokenGenerator access in STS handlers CRITICAL FIX: - Make TokenGenerator public in STSService (was private tokenGenerator) - Update all references from Config.TokenGenerator to TokenGenerator - Remove TokenGenerator from STSConfig (it belongs in STSService) This fixes the "NotImplemented" errors in distributed and Keycloak tests. The issue was that Round 5 changes tried to access Config.TokenGenerator which didn't exist - TokenGenerator is a field in STSService, not STSConfig. The TokenGenerator is properly initialized in STSService.Initialize() and is now accessible for JWT token generation in AssumeRole handlers. * fix: update tests to use public TokenGenerator field Following the change to make TokenGenerator public in STSService, this commit updates the test files to reference the correct public field name. This resolves compilation errors in the IAM STS test suite. * fix: update distributed tests to use valid Keycloak users Updated s3_iam_distributed_test.go to use 'admin-user' and 'read-user' which exist in the standard Keycloak setup provided by setup_keycloak.sh. This resolves 'unknown test user' errors in distributed integration tests. * fix: ensure iam_config.json exists in setup target for CI The GitHub Actions workflow calls 'make setup' which was not creating iam_config.json, causing the server to start without IAM integration enabled (iamIntegration = nil), resulting in NotImplemented errors. Now 'make setup' copies iam_config.local.json to iam_config.json if it doesn't exist, ensuring IAM is properly configured in CI. * fix(iam/ldap): fix connection pool race and rebind corruption - Add atomic 'closed' flag to connection pool to prevent racing on Close() - Rebind authenticated user connections back to service account before returning to pool - Close connections on error instead of returning potentially corrupted state to pool * fix(iam/ldap): populate standard TokenClaims fields in ValidateToken - Set Subject, Issuer, Audience, IssuedAt, and ExpiresAt to satisfy the interface - Use time.Time for timestamps as required by TokenClaims struct - Default to 1 hour TTL for LDAP tokens * fix(s3api): include account ID in STS AssumedRoleUser ARN - Consistent with AWS, include the account ID in the assumed-role ARN - Use the configured account ID from STS service if available, otherwise default to '111122223333' - Apply to both AssumeRole and AssumeRoleWithLDAPIdentity handlers - Also update .gitignore to ignore IAM test environment files * refactor(s3api): extract shared STS credential generation logic - Move common logic for session claims and credential generation to prepareSTSCredentials - Update handleAssumeRole and handleAssumeRoleWithLDAPIdentity to use the helper - Remove stale comments referencing outdated line numbers * feat(iam/ldap): make pool size configurable and add audience support - Add PoolSize to LDAPConfig (default 10) - Add Audience to LDAPConfig to align with OIDC validation - Update initialization and ValidateToken to use new fields * update tests * debug * chore(iam): cleanup debug prints and fix test config port * refactor(iam): use mapstructure for LDAP config parsing * feat(sts): implement strict trust policy validation for AssumeRole * test(iam): refactor STS tests to use AWS SDK signer * test(s3api): implement ValidateTrustPolicyForPrincipal in MockIAMIntegration * fix(s3api): ensure IAM matcher checks query string on ParseForm error * fix(sts): use crypto/rand for secure credentials and extract constants * fix(iam): fix ldap connection leaks and add insecure warning * chore(iam): improved error wrapping and test parameterization * feat(sts): add support for LDAPProviderName parameter * Update weed/iam/ldap/ldap_provider.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update weed/s3api/s3api_sts.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * fix(sts): use STSErrSTSNotReady when LDAP provider is missing * fix(sts): encapsulate TokenGenerator in STSService and add getter --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> |
3 weeks ago |
|
|
217d8b9e0e
|
Fix: ListObjectVersions delimiter support (#7987)
* Fix: Add delimiter support to ListObjectVersions with proper truncation - Implemented delimiter support to group keys into CommonPrefixes - Fixed critical truncation bug: now merges versions and common prefixes into single sorted list before truncation - Ensures total items never exceed MaxKeys (prevents infinite pagination loops) - Properly sets NextKeyMarker and NextVersionIdMarker for pagination - Added integration tests in test/s3/versioning/s3_versioning_delimiter_test.go - Verified behavior matches S3 API specification * Fix: Add delimiter support to ListObjectVersions with proper truncation - Implemented delimiter support to group keys into CommonPrefixes - Fixed critical truncation bug: now merges versions and common prefixes before truncation - Added safety guard for maxKeys=0 to prevent panics - Condensed verbose comments for better readability - Added robust Go integration tests with nil checks for AWS SDK pointers - Verified behavior matches S3 API specification - Resolved compilation error in integration tests - Refined pagination comments and ensured exclusive KeyMarker behavior - Refactored listObjectVersions into helper methods for better maintainability |
3 weeks ago |
|
|
9012069bd7
|
chore: execute goimports to format the code (#7983)
* chore: execute goimports to format the code Signed-off-by: promalert <promalert@outlook.com> * goimports -w . --------- Signed-off-by: promalert <promalert@outlook.com> Co-authored-by: Chris Lu <chris.lu@gmail.com> |
3 weeks ago |
|
|
ae9a943ef6
|
IAM: Add Service Account Support (#7744) (#7901)
* iam: add ServiceAccount protobuf schema Add ServiceAccount message type to iam.proto with support for: - Unique ID and parent user linkage - Optional expiration timestamp - Separate credentials (access key/secret) - Action restrictions (subset of parent) - Enable/disable status This is the first step toward implementing issue #7744 (IAM Service Account Support). * iam: add service account response types Add IAM API response types for service account operations: - ServiceAccountInfo struct for marshaling account details - CreateServiceAccountResponse - DeleteServiceAccountResponse - ListServiceAccountsResponse - GetServiceAccountResponse - UpdateServiceAccountResponse Also add type aliases in iamapi package for backwards compatibility. Part of issue #7744 (IAM Service Account Support). * iam: implement service account API handlers Add CRUD operations for service accounts: - CreateServiceAccount: Creates service account with ABIA key prefix - DeleteServiceAccount: Removes service account and parent linkage - ListServiceAccounts: Lists all or filtered by parent user - GetServiceAccount: Retrieves service account details - UpdateServiceAccount: Modifies status, description, expiration Service accounts inherit parent user's actions by default and support optional expiration timestamps. Part of issue #7744 (IAM Service Account Support). * sts: add AssumeRoleWithWebIdentity HTTP endpoint Add STS API HTTP endpoint for AWS SDK compatibility: - Create s3api_sts.go with HTTP handlers matching AWS STS spec - Support AssumeRoleWithWebIdentity action with JWT token - Return XML response with temporary credentials (AccessKeyId, SecretAccessKey, SessionToken) matching AWS format - Register STS route at POST /?Action=AssumeRoleWithWebIdentity This enables AWS SDKs (boto3, AWS CLI, etc.) to obtain temporary S3 credentials using OIDC/JWT tokens. Part of issue #7744 (IAM Service Account Support). * test: add service account and STS integration tests Add integration tests for new IAM features: s3_service_account_test.go: - TestServiceAccountLifecycle: Create, Get, List, Update, Delete - TestServiceAccountValidation: Error handling for missing params s3_sts_test.go: - TestAssumeRoleWithWebIdentityValidation: Parameter validation - TestAssumeRoleWithWebIdentityWithMockJWT: JWT token handling Tests skip gracefully when SeaweedFS is not running or when IAM features are not configured. Part of issue #7744 (IAM Service Account Support). * iam: address code review comments - Add constants for service account ID and key lengths - Use strconv.ParseInt instead of fmt.Sscanf for better error handling - Allow clearing descriptions by checking key existence in url.Values - Replace magic numbers (12, 20, 40) with named constants Addresses review comments from gemini-code-assist[bot] * test: add proper error handling in service account tests Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal to prevent silent failures and ensure test reliability. Addresses review comment from gemini-code-assist[bot] * test: add proper error handling in STS tests Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal to prevent silent failures and ensure test reliability. Repeated this fix throughout the file. Addresses review comment from gemini-code-assist[bot] in PR #7901. * iam: address additional code review comments - Specific error code mapping for STS service errors - Distinguish between Sender and Receiver error types in STS responses - Add nil checks for credentials in List/GetServiceAccount - Validate expiration date is in the future - Improve integration test error messages (include response body) - Add credential verification step in service account tests Addresses remaining review comments from gemini-code-assist[bot] across multiple files. * iam: fix shared slice reference in service account creation Copy parent's actions to create an independent slice for the service account instead of sharing the underlying array. This prevents unexpected mutations when the parent's actions are modified later. Addresses review comment from coderabbitai[bot] in PR #7901. * iam: remove duplicate unused constant Removed redundant iamServiceAccountKeyPrefix as ServiceAccountKeyPrefix is already defined and used. Addresses remaining cleanup task. * sts: document limitation of string-based error mapping Added TODO comment explaining that the current string-based error mapping approach is fragile and should be replaced with typed errors from the STS service in a future refactoring. This addresses the architectural concern raised in code review while deferring the actual implementation to a separate PR to avoid scope creep in the current service account feature addition. * iam: fix remaining review issues - Add future-date validation for expiration in UpdateServiceAccount - Reorder tests so credential verification happens before deletion - Fix compilation error by using correct JWT generation methods Addresses final review comments from coderabbitai[bot]. * iam: fix service account access key length The access key IDs were incorrectly generated with 24 characters instead of the AWS-standard 20 characters. This was caused by generating 20 random characters and then prepending the 4-character ABIA prefix. Fixed by subtracting the prefix length from AccessKeyLength, so the final key is: ABIA (4 chars) + random (16 chars) = 20 chars total. This ensures compatibility with S3 clients that validate key length. * test: add comprehensive service account security tests Added comprehensive integration tests for service account functionality: - TestServiceAccountS3Access: Verify SA credentials work for S3 operations - TestServiceAccountExpiration: Test expiration date validation and enforcement - TestServiceAccountInheritedPermissions: Verify parent-child relationship - TestServiceAccountAccessKeyFormat: Validate AWS-compatible key format (ABIA prefix, 20 char length) These tests ensure SeaweedFS service accounts are compatible with AWS conventions and provide robust security coverage. * iam: remove unused UserAccessKeyPrefix constant Code cleanup to remove unused constants. * iam: remove unused iamCommonResponse type alias Code cleanup to remove unused type aliases. * iam: restore and use UserAccessKeyPrefix constant Restored UserAccessKeyPrefix constant and updated s3api tests to use it instead of hardcoded strings for better maintainability and consistency. * test: improve error handling in service account security tests Added explicit error checking for io.ReadAll and xml.Unmarshal in TestServiceAccountExpiration to ensure failures are reported correctly and cleanup is performed only when appropriate. Also added logging for failed responses. * test: use t.Cleanup for reliable resource cleanup Replaced defer with t.Cleanup to ensure service account cleanup runs even when require.NoError fails. Also switched from manual error checking to require.NoError for more idiomatic testify usage. * iam: add CreatedBy field and optimize identity lookups - Added createdBy parameter to CreateServiceAccount to track who created each service account - Extract creator identity from request context using GetIdentityNameFromContext - Populate created_by field in ServiceAccount protobuf - Added findIdentityByName helper function to optimize identity lookups - Replaced nested loops with O(n) helper function calls in CreateServiceAccount and DeleteServiceAccount This addresses code review feedback for better auditing and performance. * iam: prevent user deletion when service accounts exist Following AWS IAM behavior, prevent deletion of users that have active service accounts. This ensures explicit cleanup and prevents orphaned service account resources with invalid ParentUser references. Users must delete all associated service accounts before deleting the parent user, providing safer resource management. * sts: enhance TODO with typed error implementation guidance Updated TODO comment with detailed implementation approach for replacing string-based error matching with typed errors using errors.Is(). This provides a clear roadmap for a follow-up PR to improve error handling robustness and maintainability. * iam: add operational limits for service account creation Added AWS IAM-compatible safeguards to prevent resource exhaustion: - Maximum 100 service accounts per user (LimitExceededException) - Maximum 1000 character description length (InvalidInputException) These limits prevent accidental or malicious resource exhaustion while not impacting legitimate use cases. * iam: add missing operational limit constants Added MaxServiceAccountsPerUser and MaxDescriptionLength constants that were referenced in the previous commit but not defined. * iam: enforce service account expiration during authentication CRITICAL SECURITY FIX: Expired service account credentials were not being rejected during authentication, allowing continued access after expiration. Changes: - Added Expiration field to Credential struct - Populate expiration when loading service accounts from configuration - Check expiration in all authentication paths (V2 and V4 signatures) - Return ErrExpiredToken for expired credentials This ensures expired service accounts are properly rejected at authentication time, matching AWS IAM behavior and preventing unauthorized access. * iam: fix error code for expired service account credentials Use ErrAccessDenied instead of non-existent ErrExpiredToken for expired service account credentials. This provides appropriate access denial for expired credentials while maintaining AWS-compatible error responses. * iam: fix remaining ErrExpiredToken references Replace all remaining instances of non-existent ErrExpiredToken with ErrAccessDenied for expired service account credentials. * iam: apply AWS-standard key format to user access keys Updated CreateAccessKey to generate AWS-standard 20-character access keys with AKIA prefix for regular users, matching the format used for service accounts. This ensures consistency across all access key types and full AWS compatibility. - Access keys: AKIA + 16 random chars = 20 total (was 21 chars, no prefix) - Secret keys: 40 random chars (was 42, now matches AWS standard) - Uses AccessKeyLength and UserAccessKeyPrefix constants * sts: replace fragile string-based error matching with typed errors Implemented robust error handling using typed errors and errors.Is() instead of fragile strings.Contains() matching. This decouples the HTTP layer from service implementation details and prevents errors from being miscategorized if error messages change. Changes: - Added typed error variables to weed/iam/sts/constants.go: * ErrTypedTokenExpired * ErrTypedInvalidToken * ErrTypedInvalidIssuer * ErrTypedInvalidAudience * ErrTypedMissingClaims - Updated STS service to wrap provider authentication errors with typed errors - Replaced strings.Contains() with errors.Is() in HTTP layer for error checking - Removed TODO comment as the improvement is now implemented This makes error handling more maintainable and reliable. * sts: eliminate all string-based error matching with provider-level typed errors Completed the typed error implementation by adding provider-level typed errors and updating provider implementations to return them. This eliminates ALL fragile string matching throughout the entire error handling stack. Changes: - Added typed error definitions to weed/iam/providers/errors.go: * ErrProviderTokenExpired * ErrProviderInvalidToken * ErrProviderInvalidIssuer * ErrProviderInvalidAudience * ErrProviderMissingClaims - Updated OIDC provider to wrap JWT validation errors with typed provider errors - Replaced strings.Contains() with errors.Is() in STS service for error mapping - Complete error chain: Provider -> STS -> HTTP layer, all using errors.Is() This provides: - Reliable error classification independent of error message content - Type-safe error checking throughout the stack - No order-dependent string matching - Maintainable error handling that won't break with message changes * oidc: use jwt.ErrTokenExpired instead of string matching Replaced the last remaining string-based error check with the JWT library's exported typed error. This makes the error detection independent of error message content and more robust against library updates. Changed from: strings.Contains(errMsg, "expired") To: errors.Is(err, jwt.ErrTokenExpired) This completes the elimination of ALL string-based error matching throughout the entire authentication stack. * iam: add description length validation to UpdateServiceAccount Fixed inconsistency where UpdateServiceAccount didn't validate description length against MaxDescriptionLength, allowing operational limits to be bypassed during updates. Now validates that updated descriptions don't exceed 1000 characters, matching the validation in CreateServiceAccount. * iam: refactor expiration check into helper method Extracted duplicated credential expiration check logic into a helper method to reduce code duplication and improve maintainability. Added Credential.isCredentialExpired() method and replaced 5 instances of inline expiration checks across auth_signature_v2.go and auth_signature_v4.go. * iam: address critical Copilot security and consistency feedback Fixed three critical issues identified by Copilot code review: 1. SECURITY: Prevent loading disabled service account credentials - Added check to skip disabled service accounts during credential loading - Disabled accounts can no longer authenticate 2. Add DurationSeconds validation for STS AssumeRoleWithWebIdentity - Enforce AWS-compatible range: 900-43200 seconds (15 min - 12 hours) - Returns proper error for out-of-range values 3. Fix expiration update consistency in UpdateServiceAccount - Added key existence check like Description field - Allows explicit clearing of expiration by setting to empty string - Distinguishes between "not updating" and "clearing expiration" * sts: remove unused durationSecondsStr variable Fixed build error from unused variable after refactoring duration parsing. * iam: address remaining Copilot feedback and remove dead code Completed remaining Copilot code review items: 1. Remove unused getPermission() method (dead code) - Method was defined but never called anywhere 2. Improve slice modification safety in DeleteServiceAccount - Replaced append-with-slice-operations with filter pattern - Avoids potential issues from mutating slice during iteration 3. Fix route registration order - Moved STS route registration BEFORE IAM route - Prevents IAM route from intercepting STS requests - More specific route (with query parameter) now registered first * iam: improve expiration validation and test cleanup robustness Addressed additional Copilot feedback: 1. Make expiration validation more explicit - Added explicit check for negative values - Added comment clarifying that 0 is allowed to clear expiration - Improves code readability and intent 2. Fix test cleanup order in s3_service_account_test.go - Track created service accounts in a slice - Delete all service accounts before deleting parent user - Prevents DeleteConflictException during cleanup - More robust cleanup even if test fails mid-execution Note: s3_service_account_security_test.go already had correct cleanup order due to LIFO defer execution. * test: remove redundant variable assignments Removed duplicate assignments of createdSAId, createdAccessKeyId, and createdSecretAccessKey on lines 148-150 that were already assigned on lines 132-134. |
1 month ago |
|
|
8d6bcddf60
|
Add S3 volume encryption support with -s3.encryptVolumeData flag (#7890)
* Add S3 volume encryption support with -s3.encryptVolumeData flag
This change adds volume-level encryption support for S3 uploads, similar
to the existing -filer.encryptVolumeData option. Each chunk is encrypted
with its own auto-generated CipherKey when the flag is enabled.
Changes:
- Add -s3.encryptVolumeData flag to weed s3, weed server, and weed mini
- Wire Cipher option through S3ApiServer and ChunkedUploadOption
- Add integration tests for multi-chunk range reads with encryption
- Tests verify encryption works across chunk boundaries
Usage:
weed s3 -encryptVolumeData
weed server -s3 -s3.encryptVolumeData
weed mini -s3.encryptVolumeData
Integration tests:
go test -v -tags=integration -timeout 5m ./test/s3/sse/...
* Add GitHub Actions CI for S3 volume encryption tests
- Add test-volume-encryption target to Makefile that starts server with -s3.encryptVolumeData
- Add s3-volume-encryption job to GitHub Actions workflow
- Tests run with integration build tag and 10m timeout
- Server logs uploaded on failure for debugging
* Fix S3 client credentials to use environment variables
The test was using hardcoded credentials "any"/"any" but the Makefile
sets AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY to "some_access_key1"/
"some_secret_key1". Updated getS3Client() to read from environment
variables with fallback to "any"/"any" for manual testing.
* Change bucket creation errors from skip to fatal
Tests should fail, not skip, when bucket creation fails. This ensures
that credential mismatches and other configuration issues are caught
rather than silently skipped.
* Make copy and multipart test jobs fail instead of succeed
Changed exit 0 to exit 1 for s3-sse-copy-operations and s3-sse-multipart
jobs. These jobs document known limitations but should fail to ensure
the issues are tracked and addressed, not silently ignored.
* Hardcode S3 credentials to match Makefile
Changed from environment variables to hardcoded credentials
"some_access_key1"/"some_secret_key1" to match the Makefile
configuration. This ensures tests work reliably.
* fix Double Encryption
* fix Chunk Size Mismatch
* Added IsCompressed
* is gzipped
* fix copying
* only perform HEAD request when len(cipherKey) > 0
* Revert "Make copy and multipart test jobs fail instead of succeed"
This reverts commit
|
1 month ago |
|
|
7064ad420d
|
Refactor S3 integration tests to use weed mini (#7877)
* Refactor S3 integration tests to use weed mini * Fix weed mini flags for sse and parquet tests * Fix IAM test startup: remove -iam.config flag from weed mini * Enhance logging in IAM Makefile to debug startup failure * Simplify weed mini flags and checks in S3 tests (IAM, Parquet, SSE, Copying) * Simplify weed mini flags and checks in all S3 tests * Fix IAM tests: use -s3.iam.config for weed mini * Replace timeout command with portable loop in IAM Makefile * Standardize portable loop-based readiness checks in all S3 Makefiles * Define SERVER_DIR in retention Makefile * Fix versioning and retention Makefiles: remove unsupported weed mini flags * fix filer_group test * fix cors * emojis * fix sse * fix retention * fixes * fix * fixes * fix parquet * fixes * fix * clean up * avoid duplicated debug server * Update .gitignore * simplify * clean up * add credentials * bind * delay * Update Makefile * Update Makefile * check ready * delay * update remote credentials * Update Makefile * clean up * kill * Update Makefile * update credentials |
1 month ago |
|
|
2f6aa98221
|
Refactor: Replace removeDuplicateSlashes with NormalizeObjectKey (#7873)
* Replace removeDuplicateSlashes with NormalizeObjectKey Use s3_constants.NormalizeObjectKey instead of removeDuplicateSlashes in most places for consistency. NormalizeObjectKey handles both duplicate slash removal and ensures the path starts with '/', providing more complete normalization. * Fix double slash issues after NormalizeObjectKey After using NormalizeObjectKey, object keys have a leading '/'. This commit ensures: - getVersionedObjectDir strips leading slash before concatenation - getEntry calls receive names without leading slash - String concatenation with '/' doesn't create '//' paths This prevents path construction errors like: /buckets/bucket//object (wrong) /buckets/bucket/object (correct) * ensure object key leading "/" * fix compilation * fix: Strip leading slash from object keys in S3 API responses After introducing NormalizeObjectKey, all internal object keys have a leading slash. However, S3 API responses must return keys without leading slashes to match AWS S3 behavior. Fixed in three functions: - addVersion: Strip slash for version list entries - processRegularFile: Strip slash for regular file entries - processExplicitDirectory: Strip slash for directory entries This ensures ListObjectVersions and similar APIs return keys like 'bar' instead of '/bar', matching S3 API specifications. * fix: Normalize keyMarker for consistent pagination comparison The S3 API provides keyMarker without a leading slash (e.g., 'object-001'), but after introducing NormalizeObjectKey, all internal object keys have leading slashes (e.g., '/object-001'). When comparing keyMarker < normalizedObjectKey in shouldSkipObjectForMarker, the ASCII value of '/' (47) is less than 'o' (111), causing all objects to be incorrectly skipped during pagination. This resulted in page 2 and beyond returning 0 results. Fix: Normalize the keyMarker when creating versionCollector so comparisons work correctly with normalized object keys. Fixes pagination tests: - TestVersioningPaginationOver1000Versions - TestVersioningPaginationMultipleObjectsManyVersions * refactor: Change NormalizeObjectKey to return keys without leading slash BREAKING STRATEGY CHANGE: Previously, NormalizeObjectKey added a leading slash to all object keys, which required stripping it when returning keys to S3 API clients and caused complexity in marker normalization for pagination. NEW STRATEGY: - NormalizeObjectKey now returns keys WITHOUT leading slash (e.g., 'foo/bar' not '/foo/bar') - This matches the S3 API format directly - All path concatenations now explicitly add '/' between bucket and object - No need to strip slashes in responses or normalize markers Changes: 1. Modified NormalizeObjectKey to strip leading slash instead of adding it 2. Fixed all path concatenations to use: - BucketsPath + '/' + bucket + '/' + object instead of: - BucketsPath + '/' + bucket + object 3. Reverted response key stripping in: - addVersion() - processRegularFile() - processExplicitDirectory() 4. Reverted keyMarker normalization in findVersionsRecursively() 5. Updated matchesPrefixFilter() to work with keys without leading slash 6. Fixed paths in handlers: - s3api_object_handlers.go (GetObject, HeadObject, cacheRemoteObjectForStreaming) - s3api_object_handlers_postpolicy.go - s3api_object_handlers_tagging.go - s3api_object_handlers_acl.go - s3api_version_id.go (getVersionedObjectDir, getVersionIdFormat) - s3api_object_versioning.go (getObjectVersionList, updateLatestVersionAfterDeletion) All versioning tests pass including pagination stress tests. * adjust format * Update post policy tests to match new NormalizeObjectKey behavior - Update TestPostPolicyKeyNormalization to expect keys without leading slashes - Update TestNormalizeObjectKey to expect keys without leading slashes - Update TestPostPolicyFilenameSubstitution to expect keys without leading slashes - Update path construction in tests to use new pattern: BucketsPath + '/' + bucket + '/' + object * Fix ListObjectVersions prefix filtering Remove leading slash addition to prefix parameter to allow correct filtering of .versions directories when listing object versions with a specific prefix. The prefix parameter should match entry paths relative to bucket root. Adding a leading slash was breaking the prefix filter for paginated requests. Fixes pagination issue where second page returned 0 versions instead of continuing with remaining versions. * no leading slash * Fix urlEscapeObject to add leading slash for filer paths NormalizeObjectKey now returns keys without leading slashes to match S3 API format. However, urlEscapeObject is used for filer paths which require leading slashes. Add leading slash back after normalization to ensure filer paths are correct. Fixes TestS3ApiServer_toFilerPath test failures. * adjust tests * normalize * Fix: Normalize prefixes and markers in LIST operations using NormalizeObjectKey Ensure consistent key normalization across all S3 operations (GET, PUT, LIST). Previously, LIST operations were not applying the same normalization rules (handling backslashes, duplicate slashes, leading slashes) as GET/PUT operations. Changes: - Updated normalizePrefixMarker() to call NormalizeObjectKey for both prefix and marker - This ensures prefixes with leading slashes, backslashes, or duplicate slashes are handled consistently with how object keys are normalized - Fixes Parquet test failures where pads.write_dataset creates implicit directory structures that couldn't be discovered by subsequent LIST operations - Added TestPrefixNormalizationInList and TestListPrefixConsistency tests All existing LIST tests continue to pass with the normalization improvements. * Add debugging logging to LIST operations to track prefix normalization * Fix: Remove leading slash addition from GetPrefix to work with NormalizeObjectKey The NormalizeObjectKey function removes leading slashes to match S3 API format (e.g., 'foo/bar' not '/foo/bar'). However, GetPrefix was adding a leading slash back, which caused LIST operations to fail with incorrect path handling. Now GetPrefix only normalizes duplicate slashes without adding a leading slash, which allows NormalizeObjectKey changes to work correctly for S3 LIST operations. All Parquet integration tests now pass (20/20). * Fix: Handle object paths without leading slash in checkDirectoryObject NormalizeObjectKey() removes the leading slash to match S3 API format. However, checkDirectoryObject() was assuming the object path has a leading slash when processing directory markers (paths ending with '/'). Now we ensure the object has a leading slash before processing it for filer operations. Fixes implicit directory marker test (explicit_dir/) while keeping Parquet integration tests passing (20/20). All tests pass: - Implicit directory tests: 6/6 - Parquet integration tests: 20/20 * Fix: Handle explicit directory markers with trailing slashes Explicit directory markers created with put_object(Key='dir/', ...) are stored in the filer with the trailing slash as part of the name. The checkDirectoryObject() function now checks for both: 1. Explicit directories: lookup with trailing slash preserved (e.g., 'explicit_dir/') 2. Implicit directories: lookup without trailing slash (e.g., 'implicit_dir') This ensures both types of directory markers are properly recognized. All tests pass: - Implicit directory tests: 6/6 (including explicit directory marker test) - Parquet integration tests: 20/20 * Fix: Preserve trailing slash in NormalizeObjectKey NormalizeObjectKey now preserves trailing slashes when normalizing object keys. This is important for explicit directory markers like 'explicit_dir/' which rely on the trailing slash to be recognized as directory objects. The normalization process: 1. Notes if trailing slash was present 2. Removes duplicate slashes and converts backslashes 3. Removes leading slash for S3 API format 4. Restores trailing slash if it was in the original This ensures explicit directory markers created with put_object(Key='dir/', ...) are properly normalized and can be looked up by their exact name. All tests pass: - Implicit directory tests: 6/6 - Parquet integration tests: 20/20 * clean object * Fix: Don't restore trailing slash if result is empty When normalizing paths that are only slashes (e.g., '///', '/'), the function should return an empty string, not a single slash. The fix ensures we only restore the trailing slash if the result is non-empty. This fixes the 'just_slashes' test case: - Input: '///' - Expected: '' - Previous: '/' - Fixed: '' All tests now pass: - Unit tests: TestNormalizeObjectKey (13/13) - Implicit directory tests: 6/6 - Parquet integration tests: 20/20 * prefixEndsOnDelimiter * Update s3api_object_handlers_list.go * Update s3api_object_handlers_list.go * handle create directory |
1 month ago |
|
|
014027f75a
|
Fix: Support object tagging in versioned buckets (Issue #7868) (#7871)
* Fix: Support object tagging in versioned buckets (Issue #7868) This fix addresses the issue where setting tags on files in versioned buckets would fail with 'filer: no entry is found in filer store' error. Changes: - Updated GetObjectTaggingHandler to check versioning status and retrieve correct object versions - Updated PutObjectTaggingHandler to properly locate and update tags on versioned objects - Updated DeleteObjectTaggingHandler to delete tags from versioned objects - Added proper handling for both specific versions and latest versions - Added distinction between null versions (pre-versioning objects) and versioned objects The fix follows the same versioning-aware pattern already implemented in ACL handlers. Tests: - Added comprehensive test suite for tagging operations on versioned buckets - Tests cover PUT, GET, and DELETE tagging operations on specific versions and latest versions - Tests verify tag isolation between different versions of the same object * Fix: Ensure consistent directory path construction in tagging handlers Changed directory path construction to match the pattern used in ACL handlers: - Added missing '/' before object path when constructing .versions directory path - This ensures compatibility with the filer's expected path structure - Applied to both PutObjectTaggingHandler and DeleteObjectTaggingHandler * Revert: Remove redundant slash in path construction - object already has leading slash from NormalizeObjectKey * Fix: Remove redundant slashes in versioning path construction across handlers - getVersionedObjectDir: object already starts with '/', no need for extra '/' - ACL handlers: same pattern, fix both PutObjectAcl locations - Ensures consistent path construction with object parameter normalization * fix test compilation * Add: Comprehensive ACL tests for versioned and non-versioned buckets - Added s3_acl_versioning_test.go with 5 test cases covering: * GetObjectAcl on versioned buckets * GetObjectAcl on specific versions * PutObjectAcl on versioned buckets * PutObjectAcl on specific versions * Independent ACL management across versions These tests were missing and would have caught the path construction issues we just fixed in the ACL handler. Tests validate that ACL operations work correctly on both versioned and non-versioned objects. * Fix: Correct tagging versioning test file formatting * fix: Update AWS SDK endpoint config and improve cleanup to handle delete markers - Replace deprecated EndpointResolverWithOptions with BaseEndpoint in AWS SDK v2 client configuration - Update cleanupTestBucket to properly delete both object versions and delete markers - Apply changes to both ACL and tagging test files for consistency * Fix S3 multi-delete for versioned objects The bug was in getVersionedObjectDir() which was constructing paths without a slash between the bucket and object key: BEFORE (WRONG): /buckets/mybucket{key}.versions AFTER (FIXED): /buckets/mybucket/{key}/.versions This caused version deletions to claim success but not actually delete files, breaking S3 compatibility tests: - test_versioning_multi_object_delete - test_versioning_multi_object_delete_with_marker - test_versioning_concurrent_multi_object_delete - test_object_lock_multi_delete_object_with_retention Added comprehensive test that reproduces the issue and verifies the fix. * Remove emojis from test output |
1 month ago |
|
|
4a764dbb37 |
fmt
|
1 month ago |
|
|
bccef78082
|
fix: reduce N+1 queries in S3 versioned object list operations (#7814)
* fix: achieve single-scan efficiency for S3 versioned object listing When listing objects in a versioning-enabled bucket, the original code triggered multiple getEntry calls per versioned object (up to 12 with retries), causing excessive 'find' operations visible in Grafana and leading to high memory usage. This fix achieves single-scan efficiency by caching list metadata (size, ETag, mtime, owner) directly in the .versions directory: 1. Add new Extended keys for caching list metadata in .versions dir 2. Update upload/copy/multipart paths to cache metadata when creating versions 3. Update getLatestVersionEntryFromDirectoryEntry to use cached metadata (zero getEntry calls when cache is available) 4. Update updateLatestVersionAfterDeletion to maintain cache consistency Performance improvement for N versioned objects: - Before: N×1 to N×12 find operations per list request - After: 0 extra find operations (all metadata from single scan) This matches the efficiency of normal (non-versioned) object listing. * Update s3api_object_versioning.go * s3api: fix ETag handling for versioned objects and simplify delete marker creation - Add Md5 attribute to synthetic logicalEntry for single-part uploads to ensure filer.ETag() returns correct value in ListObjects response - Simplify delete marker creation by initializing entry directly in mkFile callback - Add bytes and encoding/hex imports for ETag parsing * s3api: preserve default attributes in delete marker mkFile callback Only modify Mtime field instead of replacing the entire Attributes struct, preserving default values like Crtime, FileMode, Uid, and Gid that mkFile initializes. * s3api: fix ETag handling in newListEntry for multipart uploads Prioritize ExtETagKey from Extended attributes before falling back to filer.ETag(). This properly handles multipart upload ETags (format: md5-parts) for versioned objects, where the synthetic entry has cached ETag metadata but no chunks to calculate from. * s3api: reduce code duplication in delete marker creation Extract deleteMarkerExtended map to be reused in both mkFile callback and deleteMarkerEntry construction. * test: add multipart upload versioning tests for ETag verification Add tests to verify that multipart uploaded objects in versioned buckets have correct ETags when listed: - TestMultipartUploadVersioningListETag: Basic multipart upload with 2 parts - TestMultipartUploadMultipleVersionsListETag: Multiple multipart versions - TestMixedSingleAndMultipartVersionsListETag: Mix of single-part and multipart These tests cover a bug where synthetic entries for versioned objects didn't include proper ETag handling for multipart uploads. * test: add delete marker test for multipart uploaded versioned objects TestMultipartUploadDeleteMarkerListBehavior verifies: - Delete marker creation hides object from ListObjectsV2 - ListObjectVersions shows both version and delete marker - Version ETag (multipart format) is preserved after delete marker - Object can be accessed by version ID after delete marker - Removing delete marker restores object visibility * refactor: address code review feedback - test: use assert.ElementsMatch for ETag verification (more idiomatic) - s3api: optimize newListEntry ETag logic (check ExtETagKey first) - s3api: fix edge case in ETag parsing (>= 2 instead of > 2) * s3api: prevent stale cached metadata and preserve existing extended attrs - setCachedListMetadata: clear old cached keys before setting new values to prevent stale data when new version lacks certain fields (e.g., owner) - createDeleteMarker: merge extended attributes instead of overwriting to preserve any existing metadata on the entry * s3api: extract clearCachedVersionMetadata to reduce code duplication - clearCachedVersionMetadata: clears only metadata fields (size, mtime, etag, owner, deleteMarker) - clearCachedListMetadata: now reuses clearCachedVersionMetadata + clears ID/filename - setCachedListMetadata: uses clearCachedVersionMetadata (not clearCachedListMetadata because caller has already set ID/filename) * s3api: share timestamp between version entry and cache entry Capture versionMtime once before mkFile and reuse for both: - versionEntry.Attributes.Mtime in the mkFile callback - versionEntryForCache.Attributes.Mtime for list caching This keeps list vs. HEAD LastModified timestamps aligned. * s3api: remove amzAccountId variable shadowing in multipart upload Extract amzAccountId before mkFile callback and reuse in both places, similar to how versionMtime is handled. Avoids confusion from redeclaring the same variable. |
1 month ago |
|
|
2763f105f4
|
fix: use unique bucket name in TestS3IAMPresignedURLIntegration to avoid flaky test (#7801)
The test was using a static bucket name 'test-iam-bucket' that could conflict with buckets created by other tests or previous runs. Each test framework creates new RSA keys for JWT signing, so the 'admin-user' identity differs between runs. When the bucket exists from a previous test, the new admin cannot access or delete it, causing AccessDenied errors. Changed to use GenerateUniqueBucketName() which ensures each test run gets its own bucket, avoiding cross-test conflicts. |
1 month ago |
|
|
504b258258
|
s3: fix remote object not caching (#7790)
* s3: fix remote object not caching * s3: address review comments for remote object caching - Fix leading slash in object name by using strings.TrimPrefix - Return cached entry from CacheRemoteObjectToLocalCluster to get updated local chunk locations - Reuse existing helper function instead of inline gRPC call * s3/filer: add singleflight deduplication for remote object caching - Add singleflight.Group to FilerServer to deduplicate concurrent cache operations - Wrap CacheRemoteObjectToLocalCluster with singleflight to ensure only one caching operation runs per object when multiple clients request the same file - Add early-return check for already-cached objects - S3 API calls filer gRPC with timeout and graceful fallback on error - Clear negative bucket cache when bucket is created via weed shell - Add integration tests for remote cache with singleflight deduplication This benefits all clients (S3, HTTP, Hadoop) accessing remote-mounted objects by preventing redundant cache operations and improving concurrent access performance. Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599 * fix: data race in concurrent remote object caching - Add mutex to protect chunks slice from concurrent append - Add mutex to protect fetchAndWriteErr from concurrent read/write - Fix incorrect error check (was checking assignResult.Error instead of parseErr) - Rename inner variable to avoid shadowing fetchAndWriteErr * fix: address code review comments - Remove duplicate remote caching block in GetObjectHandler, keep only singleflight version - Add mutex protection for concurrent chunk slice and error access (data race fix) - Use lazy initialization for S3 client in tests to avoid panic during package load - Fix markdown linting: add language specifier to code fence, blank lines around tables - Add 'all' target to Makefile as alias for test-with-server - Remove unused 'util' import * style: remove emojis from test files * fix: add defensive checks and sort chunks by offset - Add nil check and type assertion check for singleflight result - Sort chunks by offset after concurrent fetching to maintain file order * fix: improve test diagnostics and path normalization - runWeedShell now returns error for better test diagnostics - Add all targets to .PHONY in Makefile (logs-primary, logs-remote, health) - Strip leading slash from normalizedObject to avoid double slashes in path --------- Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |
2 months ago |
|
|
9150d84eea |
test: use -master.peers=none for faster test server startup
|
2 months ago |
|
|
26121c55c9 |
test: improve pagination stress test with QUICK_TEST option and better assertions
|
2 months ago |
|
|
f517bc39fc |
test: fix nil pointer dereference and add debugging to pagination stress tests
|
2 months ago |
|
|
0972a0acf3 |
test: add pagination stress tests for S3 versioning with >1000 versions
|
2 months ago |
|
|
f5c666052e
|
feat: add S3 bucket size and object count metrics (#7776)
* feat: add S3 bucket size and object count metrics Adds periodic collection of bucket size metrics: - SeaweedFS_s3_bucket_size_bytes: logical size (deduplicated across replicas) - SeaweedFS_s3_bucket_physical_size_bytes: physical size (including replicas) - SeaweedFS_s3_bucket_object_count: object count (deduplicated) Collection runs every 1 minute via background goroutine that queries filer Statistics RPC for each bucket's collection. Also adds Grafana dashboard panels for: - S3 Bucket Size (logical vs physical) - S3 Bucket Object Count * address PR comments: fix bucket size metrics collection 1. Fix collectCollectionInfoFromMaster to use master VolumeList API - Now properly queries master for topology info - Uses WithMasterClient to get volume list from master - Correctly calculates logical vs physical size based on replication 2. Return error when filerClient is nil to trigger fallback - Changed from 'return nil, nil' to 'return nil, error' - Ensures fallback to filer stats is properly triggered 3. Implement pagination in listBucketNames - Added listBucketPageSize constant (1000) - Uses StartFromFileName for pagination - Continues fetching until fewer entries than limit returned 4. Handle NewReplicaPlacementFromByte error and prevent division by zero - Check error return from NewReplicaPlacementFromByte - Default to 1 copy if error occurs - Add explicit check for copyCount == 0 * simplify bucket size metrics: remove filer fallback, align with quota enforcement - Remove fallback to filer Statistics RPC - Use only master topology for collection info (same as s3.bucket.quota.enforce) - Updated comments to clarify this runs the same collection logic as quota enforcement - Simplified code by removing collectBucketSizeFromFilerStats * use s3a.option.Masters directly instead of querying filer * address PR comments: fix dashboard overlaps and improve metrics collection Grafana dashboard fixes: - Fix overlapping panels 55 and 59 in grafana_seaweedfs.json (moved 59 to y=30) - Fix grid collision in k8s dashboard (moved panel 72 to y=48) - Aggregate bucket metrics with max() by (bucket) for multi-instance S3 gateways Go code improvements: - Add graceful shutdown support via context cancellation - Use ticker instead of time.Sleep for better shutdown responsiveness - Distinguish EOF from actual errors in stream handling * improve bucket size metrics: multi-master failover and proper error handling - Initial delay now respects context cancellation using select with time.After - Use WithOneOfGrpcMasterClients for multi-master failover instead of hardcoding Masters[0] - Properly propagate stream errors instead of just logging them (EOF vs real errors) * improve bucket size metrics: distributed lock and volume ID deduplication - Add distributed lock (LiveLock) so only one S3 instance collects metrics at a time - Add IsLocked() method to LiveLock for checking lock status - Fix deduplication: use volume ID tracking instead of dividing by copyCount - Previous approach gave wrong results if replicas were missing - Now tracks seen volume IDs and counts each volume only once - Physical size still includes all replicas for accurate disk usage reporting * rename lock to s3.leader * simplify: remove StartBucketSizeMetricsCollection wrapper function * fix data race: use atomic operations for LiveLock.isLocked field - Change isLocked from bool to int32 - Use atomic.LoadInt32/StoreInt32 for all reads/writes - Sync shared isLocked field in StartLongLivedLock goroutine * add nil check for topology info to prevent panic * fix bucket metrics: use Ticker for consistent intervals, fix pagination logic - Use time.Ticker instead of time.After for consistent interval execution - Fix pagination: count all entries (not just directories) for proper termination - Update lastFileName for all entries to prevent pagination issues * address PR comments: remove redundant atomic store, propagate context - Remove redundant atomic.StoreInt32 in StartLongLivedLock (AttemptToLock already sets it) - Propagate context through metrics collection for proper cancellation on shutdown - collectAndUpdateBucketSizeMetrics now accepts ctx - collectCollectionInfoFromMaster uses ctx for VolumeList RPC - listBucketNames uses ctx for ListEntries RPC |
2 months ago |
|
|
44beb42eb9
|
s3: fix PutObject ETag format for multi-chunk uploads (#7771)
* s3: fix PutObject ETag format for multi-chunk uploads Fix issue #7768: AWS S3 SDK for Java fails with 'Invalid base 16 character: -' when performing PutObject on files that are internally auto-chunked. The issue was that SeaweedFS returned a composite ETag format (<md5hash>-<count>) for regular PutObject when the file was split into multiple chunks due to auto-chunking. However, per AWS S3 spec, the composite ETag format should only be used for multipart uploads (CreateMultipartUpload/UploadPart/CompleteMultipartUpload API). Regular PutObject should always return a pure MD5 hash as the ETag, regardless of how the file is stored internally. The fix ensures the MD5 hash is always stored in entry.Attributes.Md5 for regular PutObject operations, so filer.ETag() returns the pure MD5 hash instead of falling back to ETagChunks() composite format. * test: add comprehensive ETag format tests for issue #7768 Add integration tests to ensure PutObject ETag format compatibility: Go tests (test/s3/etag/): - TestPutObjectETagFormat_SmallFile: 1KB single chunk - TestPutObjectETagFormat_LargeFile: 10MB auto-chunked (critical for #7768) - TestPutObjectETagFormat_ExtraLargeFile: 25MB multi-chunk - TestMultipartUploadETagFormat: verify composite ETag for multipart - TestPutObjectETagConsistency: ETag consistency across PUT/HEAD/GET - TestETagHexValidation: simulate AWS SDK v2 hex decoding - TestMultipleLargeFileUploads: stress test multiple large uploads Java tests (other/java/s3copier/): - Update pom.xml to include AWS SDK v2 (2.20.127) - Add ETagValidationTest.java with comprehensive SDK v2 tests - Add README.md documenting SDK versions and test coverage Documentation: - Add test/s3/SDK_COMPATIBILITY.md documenting validated SDK versions - Add test/s3/etag/README.md explaining test coverage These tests ensure large file PutObject (>8MB) returns pure MD5 ETags (not composite format), which is required for AWS SDK v2 compatibility. * fix: lower Java version requirement to 11 for CI compatibility * address CodeRabbit review comments - s3_etag_test.go: Handle rand.Read error, fix multipart part-count logging - Makefile: Add 'all' target, pass S3_ENDPOINT to test commands - SDK_COMPATIBILITY.md: Add language tag to fenced code block - ETagValidationTest.java: Add pagination to cleanup logic - README.md: Clarify Go SDK tests are in separate location * ci: add s3copier ETag validation tests to Java integration tests - Enable S3 API (-s3 -s3.port=8333) in SeaweedFS test server - Add S3 API readiness check to wait loop - Add step to run ETagValidationTest from s3copier This ensures the fix for issue #7768 is continuously tested against AWS SDK v2 for Java in CI. * ci: add S3 config with credentials for s3copier tests - Add -s3.config pointing to docker/compose/s3.json - Add -s3.allowDeleteBucketNotEmpty for test cleanup - Set S3_ACCESS_KEY and S3_SECRET_KEY env vars for tests * ci: pass S3 config as Maven system properties Pass S3_ENDPOINT, S3_ACCESS_KEY, S3_SECRET_KEY via -D flags so they're available via System.getProperty() in Java tests |
2 months ago |
|
|
1b1e5f69a2
|
Add TUS protocol support for resumable uploads (#7592)
* Add TUS protocol integration tests
This commit adds integration tests for the TUS (resumable upload) protocol
in preparation for implementing TUS support in the filer.
Test coverage includes:
- OPTIONS handler for capability discovery
- Basic single-request upload
- Chunked/resumable uploads
- HEAD requests for offset tracking
- DELETE for upload cancellation
- Error handling (invalid offsets, missing uploads)
- Creation-with-upload extension
- Resume after interruption simulation
Tests are skipped in short mode and require a running SeaweedFS cluster.
* Add TUS session storage types and utilities
Implements TUS upload session management:
- TusSession struct for tracking upload state
- Session creation with directory-based storage
- Session persistence using filer entries
- Session retrieval and offset updates
- Session deletion with chunk cleanup
- Upload completion with chunk assembly into final file
Session data is stored in /.uploads.tus/{upload-id}/ directory,
following the pattern used by S3 multipart uploads.
* Add TUS HTTP handlers
Implements TUS protocol HTTP handlers:
- tusHandler: Main entry point routing requests
- tusOptionsHandler: Capability discovery (OPTIONS)
- tusCreateHandler: Create new upload (POST)
- tusHeadHandler: Get upload offset (HEAD)
- tusPatchHandler: Upload data at offset (PATCH)
- tusDeleteHandler: Cancel upload (DELETE)
- tusWriteData: Upload data to volume servers
Features:
- Supports creation-with-upload extension
- Validates TUS protocol headers
- Offset conflict detection
- Automatic upload completion when size is reached
- Metadata parsing from Upload-Metadata header
* Wire up TUS protocol routes in filer server
Add TUS handler route (/.tus/) to the filer HTTP server.
The TUS route is registered before the catch-all route to ensure
proper routing of TUS protocol requests.
TUS protocol is now accessible at:
- OPTIONS /.tus/ - Capability discovery
- POST /.tus/{path} - Create upload
- HEAD /.tus/.uploads/{id} - Get offset
- PATCH /.tus/.uploads/{id} - Upload data
- DELETE /.tus/.uploads/{id} - Cancel upload
* Improve TUS integration test setup
Add comprehensive Makefile for TUS tests with targets:
- test-with-server: Run tests with automatic server management
- test-basic/chunked/resume/errors: Specific test categories
- manual-start/stop: For development testing
- debug-logs/status: For debugging
- ci-test: For CI/CD pipelines
Update README.md with:
- Detailed TUS protocol documentation
- All endpoint descriptions with headers
- Usage examples with curl commands
- Architecture diagram
- Comparison with S3 multipart uploads
Follows the pattern established by other tests in test/ folder.
* Fix TUS integration tests and creation-with-upload
- Fix test URLs to use full URLs instead of relative paths
- Fix creation-with-upload to refresh session before completing
- Fix Makefile to properly handle test cleanup
- Add FullURL helper function to TestCluster
* Add TUS protocol tests to GitHub Actions CI
- Add tus-tests.yml workflow that runs on PRs and pushes
- Runs when TUS-related files are modified
- Automatic server management for integration testing
- Upload logs on failure for debugging
* Make TUS base path configurable via CLI
- Add -tus.path CLI flag to filer command
- TUS is disabled by default (empty path)
- Example: -tus.path=/.tus to enable at /.tus endpoint
- Update test Makefile to use -tus.path flag
- Update README with TUS enabling instructions
* Rename -tus.path to -tusBasePath with default .tus
- Rename CLI flag from -tus.path to -tusBasePath
- Default to .tus (TUS enabled by default)
- Add -filer.tusBasePath option to weed server command
- Properly handle path prefix (prepend / if missing)
* Address code review comments
- Sort chunks by offset before assembling final file
- Use chunk.Offset directly instead of recalculating
- Return error on invalid file ID instead of skipping
- Require Content-Length header for PATCH requests
- Use fs.option.Cipher for encryption setting
- Detect MIME type from data using http.DetectContentType
- Fix concurrency group for push events in workflow
- Use os.Interrupt instead of Kill for graceful shutdown in tests
* fmt
* Address remaining code review comments
- Fix potential open redirect vulnerability by sanitizing uploadLocation path
- Add language specifier to README code block
- Handle os.Create errors in test setup
- Use waitForHTTPServer instead of time.Sleep for master/volume readiness
- Improve test reliability and debugging
* Address critical and high-priority review comments
- Add per-session locking to prevent race conditions in updateTusSessionOffset
- Stream data directly to volume server instead of buffering entire chunk
- Only buffer 512 bytes for MIME type detection, then stream remaining data
- Clean up session locks when session is deleted
* Fix race condition to work across multiple filer instances
- Store each chunk as a separate file entry instead of updating session JSON
- Chunk file names encode offset, size, and fileId for atomic storage
- getTusSession loads chunks from directory listing (atomic read)
- Eliminates read-modify-write race condition across multiple filers
- Remove in-memory mutex that only worked for single filer instance
* Address code review comments: fix variable shadowing, sniff size, and test stability
- Rename path variable to reqPath to avoid shadowing path package
- Make sniff buffer size respect contentLength (read at most contentLength bytes)
- Handle Content-Length < 0 in creation-with-upload (return error for chunked encoding)
- Fix test cluster: use temp directory for filer store, add startup delay
* Fix test stability: increase cluster stabilization delay to 5 seconds
The tests were intermittently failing because the volume server needed more
time to create volumes and register with the master. Increasing the delay
from 2 to 5 seconds fixes the flaky test behavior.
* Address PR review comments for TUS protocol support
- Fix strconv.Atoi error handling in test file (lines 386, 747)
- Fix lossy fileId encoding: use base64 instead of underscore replacement
- Add pagination support for ListDirectoryEntries in getTusSession
- Batch delete chunks instead of one-by-one in deleteTusSession
* Address additional PR review comments for TUS protocol
- Fix UploadAt timestamp: use entry.Crtime instead of time.Now()
- Remove redundant JSON content in chunk entry (metadata in filename)
- Refactor tusWriteData to stream in 4MB chunks to avoid OOM on large uploads
- Pass filer.Entry to parseTusChunkPath to preserve actual upload time
* Address more PR review comments for TUS protocol
- Normalize TUS path once in filer_server.go, store in option.TusPath
- Remove redundant path normalization from TUS handlers
- Remove goto statement in tusCreateHandler, simplify control flow
* Remove unnecessary mutexes in tusWriteData
The upload loop is sequential, so uploadErrLock and chunksLock are not needed.
* Rename updateTusSessionOffset to saveTusChunk
Remove unused newOffset parameter and rename function to better reflect its purpose.
* Improve TUS upload performance and add path validation
- Reuse operation.Uploader across sub-chunks for better connection reuse
- Guard against TusPath='/' to prevent hijacking all filer routes
* Address PR review comments for TUS protocol
- Fix critical chunk filename parsing: use strings.Cut instead of SplitN
to correctly handle base64-encoded fileIds that may contain underscores
- Rename tusPath to tusBasePath for naming consistency across codebase
- Add background garbage collection for expired TUS sessions (runs hourly)
- Improve error messages with %w wrapping for better debuggability
* Address additional TUS PR review comments
- Fix tusBasePath default to use leading slash (/.tus) for consistency
- Add chunk contiguity validation in completeTusUpload to detect gaps/overlaps
- Fix offset calculation to find maximum contiguous range from 0, not just last chunk
- Return 413 Request Entity Too Large instead of silently truncating content
- Document tusChunkSize rationale (4MB balances memory vs request overhead)
- Fix Makefile xargs portability by removing GNU-specific -r flag
- Add explicit -tusBasePath flag to integration test for robustness
- Fix README example to use /.uploads/tus path format
* Revert log_buffer changes (moved to separate PR)
* Minor style fixes from PR review
- Simplify tusBasePath flag description to use example format
- Add 'TUS upload' prefix to session not found error message
- Remove duplicate tusChunkSize comment
- Capitalize warning message for consistency
- Add grep filter to Makefile xargs for better empty input handling
|
2 months ago |
|
|
51c2ab0107
|
fix: admin UI bucket deletion with filer group configured (#7735)
|
2 months ago |
|
|
36b8b2147b
|
test: add integration test for versioned object listing path fix (#7731)
* test: add integration test for versioned object listing path fix Add integration test that validates the fix for GitHub discussion #7573. The test verifies that: - Entry names use path.Base() to get base filename only - Path doubling bug is prevented when listing versioned objects - Logical entries are created correctly with proper attributes - .versions folder paths are handled correctly This test documents the Velero/Kopia compatibility fix and prevents regression of the path doubling bug. * test: add Velero/Kopia integration test for versioned object listing Add integration tests that simulate Velero/Kopia's exact access patterns when using S3 versioning. These tests validate the fix for GitHub discussion #7573 where versioned objects with nested paths would have their paths doubled in ListObjects responses. Tests added: - TestVeleroKopiaVersionedObjectListing: Tests various Kopia path patterns - TestVeleroKopiaGetAfterList: Verifies list-then-get workflow works - TestVeleroMultipleVersionsWithNestedPaths: Tests multi-version objects - TestVeleroListVersionsWithNestedPaths: Tests ListObjectVersions API Each test verifies: 1. Listed keys match original keys without path doubling 2. Objects can be retrieved using the listed keys 3. Content integrity is maintained Related: https://github.com/seaweedfs/seaweedfs/discussions/7573 * refactor: remove old unit test, keep only Velero integration test Remove weed/s3api/s3api_versioning_list_test.go as it was a simpler unit test that the comprehensive Velero integration test supersedes. The integration test in test/s3/versioning/s3_velero_integration_test.go provides better coverage by actually exercising the S3 API with real AWS SDK calls. * refactor: use defer for response body cleanup in test loop Use anonymous function with defer for getResp.Body.Close() to be more defensive against future code additions in the loop body. * refactor: improve hasDoubledPath function clarity and efficiency - Fix comment to accurately describe checking for repeated pairs - Tighten outer loop bound from len(parts)-2 to len(parts)-3 - Remove redundant bounds checks in the condition |
2 months ago |
|
|
d5f21fd8ba
|
fix: add missing backslash for volume extraArgs in helm chart (#7676)
Fixes #7467 The -mserver argument line in volume-statefulset.yaml was missing a trailing backslash, which prevented extraArgs from being passed to the weed volume process. Also: - Extracted master server list generation logic into shared helper templates in _helpers.tpl for better maintainability - Updated all occurrences of deprecated -mserver flag to -master across docker-compose files, test files, and documentation |
2 months ago |
|
|
5167bbd2a9 |
Remove deprecated allowEmptyFolder CLI option
The allowEmptyFolder option is no longer functional because: 1. The code that used it was already commented out 2. Empty folder cleanup is now handled asynchronously by EmptyFolderCleaner The CLI flags are kept for backward compatibility but marked as deprecated and ignored. This removes: - S3ApiServerOption.AllowEmptyFolder field - The actual usage in s3api_object_handlers_list.go - Helm chart values and template references - References in test Makefiles and docker-compose files |
2 months ago |
|
|
55f0fbf364
|
s3: optimize DELETE by skipping lock check for buckets without Object Lock (#7642)
This optimization avoids an expensive filer gRPC call for every DELETE operation on buckets that don't have Object Lock enabled. Before this change, enforceObjectLockProtections() would always call getObjectEntry() to fetch object metadata to check for retention/legal hold, even for buckets that never had Object Lock configured. Changes: 1. Add early return in enforceObjectLockProtections() if bucket has no Object Lock config or bucket doesn't exist 2. Add isObjectLockEnabled() helper function to check if a bucket has Object Lock configured 3. Fix validateObjectLockHeaders() to check ObjectLockConfig instead of just versioningEnabled - this ensures object-lock headers are properly rejected on buckets without Object Lock enabled, which aligns with AWS S3 semantics 4. Make bucket creation with Object Lock atomic - set Object Lock config in the same CreateEntry call as bucket creation, preventing race conditions where bucket exists without Object Lock enabled 5. Properly handle Object Lock setup failures during bucket creation - if StoreObjectLockConfigurationInExtended fails, roll back the bucket creation and return an error instead of leaving a bucket without the requested Object Lock configuration This significantly improves DELETE latency for non-Object-Lock buckets, which is the common case (lockCheck time reduced from 1-10ms to ~1µs). |
2 months ago |
|
|
733ca8e6df
|
Fix SSE-S3 copy: preserve encryption metadata and set chunk SSE type (#7598)
* Fix SSE-S3 copy: preserve encryption metadata and set chunk SSE type Fixes GitHub #7562: Copying objects between encrypted buckets was failing. Root causes: 1. processMetadataBytes was re-adding SSE headers from source entry, undoing the encryption header filtering. Now uses dstEntry.Extended which is already filtered. 2. SSE-S3 streaming copy returned nil metadata. Now properly generates and returns SSE-S3 destination metadata (SeaweedFSSSES3Key, AES256 header) via ExecuteStreamingCopyWithMetadata. 3. Chunks created during streaming copy didn't have SseType set. Now sets SseType and per-chunk SseMetadata with chunk-specific IVs for SSE-S3, enabling proper decryption on GetObject. * Address review: make SSE-S3 metadata serialization failures fatal errors - In executeEncryptCopy: return error instead of just logging if SerializeSSES3Metadata fails - In createChunkFromData: return error if chunk SSE-S3 metadata serialization fails This ensures objects/chunks are never created without proper encryption metadata, preventing unreadable/corrupted data. * fmt * Refactor: reuse function names instead of creating WithMetadata variants - Change ExecuteStreamingCopy to return (*EncryptionSpec, error) directly - Remove ExecuteStreamingCopyWithMetadata wrapper - Change executeStreamingReencryptCopy to return (*EncryptionSpec, error) - Remove executeStreamingReencryptCopyWithMetadata wrapper - Update callers to ignore encryption spec with _ where not needed * Add TODO documenting large file SSE-S3 copy limitation The streaming copy approach encrypts the entire stream with a single IV but stores data in chunks with per-chunk IVs. This causes decryption issues for large files. Small inline files work correctly. This is a known architectural issue that needs separate work to fix. * Use chunk-by-chunk encryption for SSE-S3 copy (consistent with SSE-C/SSE-KMS) Instead of streaming encryption (which had IV mismatch issues for multi-chunk files), SSE-S3 now uses the same chunk-by-chunk approach as SSE-C and SSE-KMS: 1. Extended copyMultipartCrossEncryption to handle SSE-S3: - Added SSE-S3 source decryption in copyCrossEncryptionChunk - Added SSE-S3 destination encryption with per-chunk IVs - Added object-level metadata generation for SSE-S3 destinations 2. Updated routing in executeEncryptCopy/executeDecryptCopy/executeReencryptCopy to use copyMultipartCrossEncryption for all SSE-S3 scenarios 3. Removed streaming copy functions (shouldUseStreamingCopy, executeStreamingReencryptCopy) as they're no longer used 4. Added large file (1MB) integration test to verify chunk-by-chunk copy works This ensures consistent behavior across all SSE types and fixes data corruption that occurred with large files in the streaming copy approach. * fmt * fmt * Address review: fail explicitly if SSE-S3 metadata is missing Instead of silently ignoring missing SSE-S3 metadata (which could create unreadable objects), now explicitly fail the copy operation with a clear error message if: - First chunk is missing - First chunk doesn't have SSE-S3 type - First chunk has empty SSE metadata - Deserialization fails * Address review: improve comment to reflect full scope of chunk creation * Address review: fail explicitly if baseIV is empty for SSE-S3 chunk encryption If DestinationIV is not set when encrypting SSE-S3 chunks, the chunk would be created without SseMetadata, causing GetObject decryption to fail later. Now fails explicitly with a clear error message. Note: calculateIVWithOffset returns ([]byte, int) not ([]byte, error) - the int is a skip amount for intra-block alignment, not an error code. * Address review: handle 0-byte files in SSE-S3 copy For 0-byte files, there are no chunks to get metadata from. Generate an IV for the object-level metadata to ensure even empty files are properly marked as SSE-S3 encrypted. Also validate that we don't have a non-empty file with no chunks (which would indicate an internal error). |
2 months ago |
|
|
208d008fe3 |
Fix tagging test pattern to run our comprehensive tests instead of basic tests
|
2 months ago |
|
|
ec41795594 |
Update s3-tagging-tests to use Makefile server management like other S3 tests
|
2 months ago |
|
|
a33e5a9e6a |
Add S3 object tagging tests to CI workflow
- Modified test/s3/tagging/s3_tagging_test.go to use environment variables for configurable endpoint and credentials - Added s3-tagging-tests job to .github/workflows/s3-go-tests.yml to run tagging tests in CI - Tests will now run automatically on pull requests |
2 months ago |
|
|
8c585a9682 |
Fix S3 object tagging issue #7589
- Add X-Amz-Tagging header parsing in putToFiler function for PUT object operations - Store tags with X-Amz-Tagging- prefix in entry.Extended metadata - Add comprehensive test suite for S3 object tagging functionality - Tests cover upload tagging, API operations, special characters, and edge cases |
2 months ago |
|
|
bd419fda51
|
fix: copy to bucket with default SSE-S3 encryption fails (#7562) (#7568)
* filer use context without cancellation * pass along context * fix: copy to bucket with default SSE-S3 encryption fails (#7562) When copying an object from an encrypted bucket to a temporary unencrypted bucket, then to another bucket with default SSE-S3 encryption, the operation fails with 'invalid SSE-S3 source key type' error. Root cause: When objects are copied from an SSE-S3 encrypted bucket to an unencrypted bucket, the 'X-Amz-Server-Side-Encryption: AES256' header is preserved but the actual encryption key (SeaweedFSSSES3Key) is stripped. This creates an 'orphaned' SSE-S3 header that causes IsSSES3EncryptedInternal() to return true, triggering decryption logic with a nil key. Fix: 1. Modified IsSSES3EncryptedInternal() to require BOTH the AES256 header AND the SeaweedFSSSES3Key to be present before returning true 2. Added isOrphanedSSES3Header() to detect orphaned SSE-S3 headers 3. Updated copy handler to strip orphaned headers during copy operations Fixes #7562 * fmt * refactor: simplify isOrphanedSSES3Header function logic Remove redundant existence check since the caller iterates through metadata map, making the check unnecessary. Improves readability while maintaining the same functionality. |
2 months ago |
|
|
64dcbbb25b
|
test read write by s3fs and PyArrow native file system for s3 (#7520)
* test read write by s3fs and PyArrow native file system for s3 * address comments * add github action |
2 months ago |
|
|
c1b8d4bf0d
|
S3: adds FilerClient to use cached volume id (#7518)
* adds FilerClient to use cached volume id
* refactor: MasterClient embeds vidMapClient to eliminate ~150 lines of duplication
- Create masterVolumeProvider that implements VolumeLocationProvider
- MasterClient now embeds vidMapClient instead of maintaining duplicate cache logic
- Removed duplicate methods: LookupVolumeIdsWithFallback, getStableVidMap, etc.
- MasterClient still receives real-time updates via KeepConnected streaming
- Updates call inherited addLocation/deleteLocation from vidMapClient
- Benefits: DRY principle, shared singleflight, cache chain logic reused
- Zero behavioral changes - only architectural improvement
* refactor: mount uses FilerClient for efficient volume location caching
- Add configurable vidMap cache size (default: 5 historical snapshots)
- Add FilerClientOption struct for clean configuration
* GrpcTimeout: default 5 seconds (prevents hanging requests)
* UrlPreference: PreferUrl or PreferPublicUrl
* CacheSize: number of historical vidMap snapshots (for volume moves)
- NewFilerClient uses option struct for better API extensibility
- Improved error handling in filerVolumeProvider.LookupVolumeIds:
* Distinguish genuine 'not found' from communication failures
* Log volumes missing from filer response
* Return proper error context with volume count
* Document that filer Locations lacks Error field (unlike master)
- FilerClient.GetLookupFileIdFunction() handles URL preference automatically
- Mount (WFS) creates FilerClient with appropriate options
- Benefits for weed mount:
* Singleflight: Deduplicates concurrent volume lookups
* Cache history: Old volume locations available briefly when volumes move
* Configurable cache depth: Tune for different deployment environments
* Battle-tested vidMap cache with cache chain
* Better concurrency handling with timeout protection
* Improved error visibility and debugging
- Old filer.LookupFn() kept for backward compatibility
- Performance improvement for mount operations with high concurrency
* fix: prevent vidMap swap race condition in LookupFileIdWithFallback
- Hold vidMapLock.RLock() during entire vm.LookupFileId() call
- Prevents resetVidMap() from swapping vidMap mid-operation
- Ensures atomic access to the current vidMap instance
- Added documentation warnings to getStableVidMap() about swap risks
- Enhanced withCurrentVidMap() documentation for clarity
This fixes a subtle race condition where:
1. Thread A: acquires lock, gets vm pointer, releases lock
2. Thread B: calls resetVidMap(), swaps vc.vidMap
3. Thread A: calls vm.LookupFileId() on old/stale vidMap
While the old vidMap remains valid (in cache chain), holding the lock
ensures we consistently use the current vidMap for the entire operation.
* fix: FilerClient supports multiple filer addresses for high availability
Critical fix: FilerClient now accepts []ServerAddress instead of single address
- Prevents mount failure when first filer is down (regression fix)
- Implements automatic failover to remaining filers
- Uses round-robin with atomic index tracking (same pattern as WFS.WithFilerClient)
- Retries all configured filers before giving up
- Updates successful filer index for future requests
Changes:
- NewFilerClient([]pb.ServerAddress, ...) instead of (pb.ServerAddress, ...)
- filerVolumeProvider references FilerClient for failover access
- LookupVolumeIds tries all filers with util.Retry pattern
- Mount passes all option.FilerAddresses for HA
- S3 wraps single filer in slice for API consistency
This restores the high availability that existed in the old implementation
where mount would automatically failover between configured filers.
* fix: restore leader change detection in KeepConnected stream loop
Critical fix: Leader change detection was accidentally removed from the streaming loop
- Master can announce leader changes during an active KeepConnected stream
- Without this check, client continues talking to non-leader until connection breaks
- This can lead to stale data or operational errors
The check needs to be in TWO places:
1. Initial response (lines 178-187): Detect redirect on first connect
2. Stream loop (lines 203-209): Detect leader changes during active stream
Restored the loop check that was accidentally removed during refactoring.
This ensures the client immediately reconnects to new leader when announced.
* improve: address code review findings on error handling and documentation
1. Master provider now preserves per-volume errors
- Surface detailed errors from master (e.g., misconfiguration, deletion)
- Return partial results with aggregated errors using errors.Join
- Callers can now distinguish specific volume failures from general errors
- Addresses issue of losing vidLoc.Error details
2. Document GetMaster initialization contract
- Add comprehensive documentation explaining blocking behavior
- Clarify that KeepConnectedToMaster must be started first
- Provide typical initialization pattern example
- Prevent confusing timeouts during warm-up
3. Document partial results API contract
- LookupVolumeIdsWithFallback explicitly documents partial results
- Clear examples of how to handle result + error combinations
- Helps prevent callers from discarding valid partial results
4. Add safeguards to legacy filer.LookupFn
- Add deprecation warning with migration guidance
- Implement simple 10,000 entry cache limit
- Log warning when limit reached
- Recommend wdclient.FilerClient for new code
- Prevents unbounded memory growth in long-running processes
These changes improve API clarity and operational safety while maintaining
backward compatibility.
* fix: handle partial results correctly in LookupVolumeIdsWithFallback callers
Two callers were discarding partial results by checking err before processing
the result map. While these are currently single-volume lookups (so partial
results aren't possible), the code was fragile and would break if we ever
batched multiple volumes together.
Changes:
- Check result map FIRST, then conditionally check error
- If volume is found in result, use it (ignore errors about other volumes)
- If volume is NOT found and err != nil, include error context with %w
- Add defensive comments explaining the pattern for future maintainers
This makes the code:
1. Correct for future batched lookups
2. More informative (preserves underlying error details)
3. Consistent with filer_grpc_server.go which already handles this correctly
Example: If looking up ["1", "2", "999"] and only 999 fails, callers
looking for volumes 1 or 2 will succeed instead of failing unnecessarily.
* improve: address remaining code review findings
1. Lazy initialize FilerClient in mount for proxy-only setups
- Only create FilerClient when VolumeServerAccess != "filerProxy"
- Avoids wasted work when all reads proxy through filer
- filerClient is nil for proxy mode, initialized for direct access
2. Fix inaccurate deprecation comment in filer.LookupFn
- Updated comment to reflect current behavior (10k bounded cache)
- Removed claim of "unbounded growth" after adding size limit
- Still directs new code to wdclient.FilerClient for better features
3. Audit all MasterClient usages for KeepConnectedToMaster
- Verified all production callers start KeepConnectedToMaster early
- Filer, Shell, Master, Broker, Benchmark, Admin all correct
- IAM creates MasterClient but never uses it (harmless)
- Test code doesn't need KeepConnectedToMaster (mocks)
All callers properly follow the initialization pattern documented in
GetMaster(), preventing unexpected blocking or timeouts.
* fix: restore observability instrumentation in MasterClient
During the refactoring, several important stats counters and logging
statements were accidentally removed from tryConnectToMaster. These are
critical for monitoring and debugging the health of master client connections.
Restored instrumentation:
1. stats.MasterClientConnectCounter("total") - tracks all connection attempts
2. stats.MasterClientConnectCounter(FailedToKeepConnected) - when KeepConnected stream fails
3. stats.MasterClientConnectCounter(FailedToReceive) - when Recv() fails in loop
4. stats.MasterClientConnectCounter(Failed) - when overall gprcErr occurs
5. stats.MasterClientConnectCounter(OnPeerUpdate) - when peer updates detected
Additionally restored peer update logging:
- "+ filer@host noticed group.type address" for node additions
- "- filer@host noticed group.type address" for node removals
- Only logs updates matching the client's FilerGroup for noise reduction
This information is valuable for:
- Monitoring cluster health and connection stability
- Debugging cluster membership changes
- Tracking master failover and reconnection patterns
- Identifying network issues between clients and masters
No functional changes - purely observability restoration.
* improve: implement gRPC-aware retry for FilerClient volume lookups
The previous implementation used util.Retry which only retries errors
containing the string "transport". This is insufficient for handling
the full range of transient gRPC errors.
Changes:
1. Added isRetryableGrpcError() to properly inspect gRPC status codes
- Retries: Unavailable, DeadlineExceeded, ResourceExhausted, Aborted
- Falls back to string matching for non-gRPC network errors
2. Replaced util.Retry with custom retry loop
- 3 attempts with exponential backoff (1s, 1.5s, 2.25s)
- Tries all N filers on each attempt (N*3 total attempts max)
- Fast-fails on non-retryable errors (NotFound, PermissionDenied, etc.)
3. Improved logging
- Shows both filer attempt (x/N) and retry attempt (y/3)
- Logs retry reason and wait time for debugging
Benefits:
- Better handling of transient gRPC failures (server restarts, load spikes)
- Faster failure for permanent errors (no wasted retries)
- More informative logs for troubleshooting
- Maintains existing HA failover across multiple filers
Example: If all 3 filers return Unavailable (server overload):
- Attempt 1: try all 3 filers, wait 1s
- Attempt 2: try all 3 filers, wait 1.5s
- Attempt 3: try all 3 filers, fail
Example: If filer returns NotFound (volume doesn't exist):
- Attempt 1: try all 3 filers, fast-fail (no retry)
* fmt
* improve: add circuit breaker to skip known-unhealthy filers
The previous implementation tried all filers on every failure, including
known-unhealthy ones. This wasted time retrying permanently down filers.
Problem scenario (3 filers, filer0 is down):
- Last successful: filer1 (saved as filerIndex=1)
- Next lookup when filer1 fails:
Retry 1: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
Retry 2: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
Retry 3: filer1(fail) → filer2(fail) → filer0(fail, wastes 5s timeout)
Total wasted: 15 seconds on known-bad filer!
Solution: Circuit breaker pattern
- Track consecutive failures per filer (atomic int32)
- Skip filers with 3+ consecutive failures
- Re-check unhealthy filers every 30 seconds
- Reset failure count on success
New behavior:
- filer0 fails 3 times → marked unhealthy
- Future lookups skip filer0 for 30 seconds
- After 30s, re-check filer0 (allows recovery)
- If filer0 succeeds, reset failure count to 0
Benefits:
1. Avoids wasting time on known-down filers
2. Still sticks to last healthy filer (via filerIndex)
3. Allows recovery (30s re-check window)
4. No configuration needed (automatic)
Implementation details:
- filerHealth struct tracks failureCount (atomic) + lastFailureTime
- shouldSkipUnhealthyFiler(): checks if we should skip this filer
- recordFilerSuccess(): resets failure count to 0
- recordFilerFailure(): increments count, updates timestamp
- Logs when skipping unhealthy filers (V(2) level)
Example with circuit breaker:
- filer0 down, saved filerIndex=1 (filer1 healthy)
- Lookup 1: filer1(ok) → Done (0.01s)
- Lookup 2: filer1(fail) → filer2(ok) → Done, save filerIndex=2 (0.01s)
- Lookup 3: filer2(fail) → skip filer0 (unhealthy) → filer1(ok) → Done (0.01s)
Much better than wasting 15s trying filer0 repeatedly!
* fix: OnPeerUpdate should only process updates for matching FilerGroup
Critical bug: The OnPeerUpdate callback was incorrectly moved outside the
FilerGroup check when restoring observability instrumentation. This caused
clients to process peer updates for ALL filer groups, not just their own.
Problem:
Before: mc.OnPeerUpdate only called for update.FilerGroup == mc.FilerGroup
Bug: mc.OnPeerUpdate called for ALL updates regardless of FilerGroup
Impact:
- Multi-tenant deployments with separate filer groups would see cross-group
updates (e.g., group A clients processing group B updates)
- Could cause incorrect cluster membership tracking
- OnPeerUpdate handlers (like Filer's DLM ring updates) would receive
irrelevant updates from other groups
Example scenario:
Cluster has two filer groups: "production" and "staging"
Production filer connects with FilerGroup="production"
Incorrect behavior (bug):
- Receives "staging" group updates
- Incorrectly adds staging filers to production DLM ring
- Cross-tenant data access issues
Correct behavior (fixed):
- Only receives "production" group updates
- Only adds production filers to production DLM ring
- Proper isolation between groups
Fix:
Moved mc.OnPeerUpdate(update, time.Now()) back INSIDE the FilerGroup check
where it belongs, matching the original implementation.
The logging and stats counter were already correctly scoped to matching
FilerGroup, so they remain inside the if block as intended.
* improve: clarify Aborted error handling in volume lookups
Added documentation and logging to address the concern that codes.Aborted
might not always be retryable in all contexts.
Context-specific justification for treating Aborted as retryable:
Volume location lookups (LookupVolume RPC) are simple, read-only operations:
- No transactions
- No write conflicts
- No application-level state changes
- Idempotent (safe to retry)
In this context, Aborted is most likely caused by:
- Filer restarting/recovering (transient)
- Connection interrupted mid-request (transient)
- Server-side resource cleanup (transient)
NOT caused by:
- Application-level conflicts (no writes)
- Transaction failures (no transactions)
- Logical errors (read-only lookup)
Changes:
1. Added detailed comment explaining the context-specific reasoning
2. Added V(1) logging when treating Aborted as retryable
- Helps detect misclassification if it occurs
- Visible in verbose logs for troubleshooting
3. Split switch statement for clarity (one case per line)
If future analysis shows Aborted should not be retried, operators will
now have visibility via logs to make that determination. The logging
provides evidence for future tuning decisions.
Alternative approaches considered but not implemented:
- Removing Aborted entirely (too conservative for read-only ops)
- Message content inspection (adds complexity, no known patterns yet)
- Different handling per RPC type (premature optimization)
* fix: IAM server must start KeepConnectedToMaster for masterClient usage
The IAM server creates and uses a MasterClient but never started
KeepConnectedToMaster, which could cause blocking if IAM config files
have chunks requiring volume lookups.
Problem flow:
NewIamApiServerWithStore()
→ creates masterClient
→ ❌ NEVER starts KeepConnectedToMaster
GetS3ApiConfigurationFromFiler()
→ filer.ReadEntry(iama.masterClient, ...)
→ StreamContent(masterClient, ...) if file has chunks
→ masterClient.GetLookupFileIdFunction()
→ GetMaster(ctx) ← BLOCKS indefinitely waiting for connection!
While IAM config files (identity & policies) are typically small and
stored inline without chunks, the code path exists and would block
if the files ever had chunks.
Fix:
Start KeepConnectedToMaster in background goroutine right after
creating masterClient, following the documented pattern:
mc := wdclient.NewMasterClient(...)
go mc.KeepConnectedToMaster(ctx)
This ensures masterClient is usable if ReadEntry ever needs to
stream chunked content from volume servers.
Note: This bug was dormant because IAM config files are small (<256 bytes)
and SeaweedFS stores small files inline in Entry.Content, not as chunks.
The bug would only manifest if:
- IAM config grew > 256 bytes (inline threshold)
- Config was stored as chunks on volume servers
- ReadEntry called StreamContent
- GetMaster blocked indefinitely
Now all 9 production MasterClient instances correctly follow the pattern.
* fix: data race on filerHealth.lastFailureTime in circuit breaker
The circuit breaker tracked lastFailureTime as time.Time, which was
written in recordFilerFailure and read in shouldSkipUnhealthyFiler
without synchronization, causing a data race.
Data race scenario:
Goroutine 1: recordFilerFailure(0)
health.lastFailureTime = time.Now() // ❌ unsynchronized write
Goroutine 2: shouldSkipUnhealthyFiler(0)
time.Since(health.lastFailureTime) // ❌ unsynchronized read
→ RACE DETECTED by -race detector
Fix:
Changed lastFailureTime from time.Time to int64 (lastFailureTimeNs)
storing Unix nanoseconds for atomic access:
Write side (recordFilerFailure):
atomic.StoreInt64(&health.lastFailureTimeNs, time.Now().UnixNano())
Read side (shouldSkipUnhealthyFiler):
lastFailureNs := atomic.LoadInt64(&health.lastFailureTimeNs)
if lastFailureNs == 0 { return false } // Never failed
lastFailureTime := time.Unix(0, lastFailureNs)
time.Since(lastFailureTime) > 30*time.Second
Benefits:
- Atomic reads/writes (no data race)
- Efficient (int64 is 8 bytes, always atomic on 64-bit systems)
- Zero value (0) naturally means "never failed"
- No mutex needed (lock-free circuit breaker)
Note: sync/atomic was already imported for failureCount, so no new
import needed.
* fix: create fresh timeout context for each filer retry attempt
The timeout context was created once at function start and reused across
all retry attempts, causing subsequent retries to run with progressively
shorter (or expired) deadlines.
Problem flow:
Line 244: timeoutCtx, cancel := context.WithTimeout(ctx, 5s)
defer cancel()
Retry 1, filer 0: client.LookupVolume(timeoutCtx, ...) ← 5s available ✅
Retry 1, filer 1: client.LookupVolume(timeoutCtx, ...) ← 3s left
Retry 1, filer 2: client.LookupVolume(timeoutCtx, ...) ← 0.5s left
Retry 2, filer 0: client.LookupVolume(timeoutCtx, ...) ← EXPIRED! ❌
Result: Retries always fail with DeadlineExceeded, defeating the purpose
of retries.
Fix:
Moved context.WithTimeout inside the per-filer loop, creating a fresh
timeout context for each attempt:
for x := 0; x < n; x++ {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
err := pb.WithGrpcFilerClient(..., func(client) {
resp, err := client.LookupVolume(timeoutCtx, ...)
...
})
cancel() // Clean up immediately after call
}
Benefits:
- Each filer attempt gets full fc.grpcTimeout (default 5s)
- Retries actually have time to complete
- No context leaks (cancel called after each attempt)
- More predictable timeout behavior
Example with fix:
Retry 1, filer 0: fresh 5s timeout ✅
Retry 1, filer 1: fresh 5s timeout ✅
Retry 2, filer 0: fresh 5s timeout ✅
Total max time: 3 retries × 3 filers × 5s = 45s (plus backoff)
Note: The outer ctx (from caller) still provides overall cancellation if
the caller cancels or times out the entire operation.
* fix: always reset vidMap cache on master reconnection
The previous refactoring removed the else block that resets vidMap when
the first message from a newly connected master is not a VolumeLocation.
Problem scenario:
1. Client connects to master-1 and builds vidMap cache
2. Master-1 fails, client connects to master-2
3. First message from master-2 is a ClusterNodeUpdate (not VolumeLocation)
4. Old code: vidMap is reset and updated ✅
5. New code: vidMap is NOT reset ❌
6. Result: Client uses stale cache from master-1 → data access errors
Example flow with bug:
Connect to master-2
First message: ClusterNodeUpdate {filer.x added}
→ No resetVidMap() call
→ vidMap still has master-1's stale volume locations
→ Client reads from wrong volume servers → 404 errors
Fix:
Restored the else block that resets vidMap when first message is not
a VolumeLocation:
if resp.VolumeLocation != nil {
// ... check leader, reset, and update ...
} else {
// First message is ClusterNodeUpdate or other type
// Must still reset to avoid stale data
mc.resetVidMap()
}
This ensures the cache is always cleared when establishing a new master
connection, regardless of what the first message type is.
Root cause:
During the vidMapClient refactoring, this else block was accidentally
dropped, making failover behavior fragile and non-deterministic (depends
on which message type arrives first from the new master).
Impact:
- High severity for master failover scenarios
- Could cause read failures, 404s, or wrong data access
- Only manifests when first message is not VolumeLocation
* fix: goroutine and connection leak in IAM server shutdown
The IAM server's KeepConnectedToMaster goroutine used context.Background(),
which is non-cancellable, causing the goroutine and its gRPC connections
to leak on server shutdown.
Problem:
go masterClient.KeepConnectedToMaster(context.Background())
- context.Background() never cancels
- KeepConnectedToMaster goroutine runs forever
- gRPC connection to master stays open
- No way to stop cleanly on server shutdown
Result: Resource leaks when IAM server is stopped
Fix:
1. Added shutdownContext and shutdownCancel to IamApiServer struct
2. Created cancellable context in NewIamApiServerWithStore:
shutdownCtx, shutdownCancel := context.WithCancel(context.Background())
3. Pass shutdownCtx to KeepConnectedToMaster:
go masterClient.KeepConnectedToMaster(shutdownCtx)
4. Added Shutdown() method to invoke cancel:
func (iama *IamApiServer) Shutdown() {
if iama.shutdownCancel != nil {
iama.shutdownCancel()
}
}
5. Stored masterClient reference on IamApiServer for future use
Benefits:
- Goroutine stops cleanly when Shutdown() is called
- gRPC connections are closed properly
- No resource leaks on server restart/stop
- Shutdown() is idempotent (safe to call multiple times)
Usage (for future graceful shutdown):
iamServer, _ := iamapi.NewIamApiServer(...)
defer iamServer.Shutdown()
// or in signal handler:
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
go func() {
<-sigChan
iamServer.Shutdown()
os.Exit(0)
}()
Note: Current command implementations (weed/command/iam.go) don't have
shutdown paths yet, but this makes IAM server ready for proper lifecycle
management when that infrastructure is added.
* refactor: remove unnecessary KeepMasterClientConnected wrapper in filer
The Filer.KeepMasterClientConnected() method was an unnecessary wrapper that
just forwarded to MasterClient.KeepConnectedToMaster(). This wrapper added
no value and created inconsistency with other components that call
KeepConnectedToMaster directly.
Removed:
filer.go:178-180
func (fs *Filer) KeepMasterClientConnected(ctx context.Context) {
fs.MasterClient.KeepConnectedToMaster(ctx)
}
Updated caller:
filer_server.go:181
- go fs.filer.KeepMasterClientConnected(context.Background())
+ go fs.filer.MasterClient.KeepConnectedToMaster(context.Background())
Benefits:
- Consistent with other components (S3, IAM, Shell, Mount)
- Removes unnecessary indirection
- Clearer that KeepConnectedToMaster runs in background goroutine
- Follows the documented pattern from MasterClient.GetMaster()
Note: shell/commands.go was verified and already correctly starts
KeepConnectedToMaster in a background goroutine (shell_liner.go:51):
go commandEnv.MasterClient.KeepConnectedToMaster(ctx)
* fix: use client ID instead of timeout for gRPC signature parameter
The pb.WithGrpcFilerClient signature parameter is meant to be a client
identifier for logging and tracking (added as 'sw-client-id' gRPC metadata
in streaming mode), not a timeout value.
Problem:
timeoutMs := int32(fc.grpcTimeout.Milliseconds()) // 5000 (5 seconds)
err := pb.WithGrpcFilerClient(false, timeoutMs, filerAddress, ...)
- Passing timeout (5000ms) as signature/client ID
- Misuse of API: signature should be a unique client identifier
- Timeout is already handled by timeoutCtx passed to gRPC call
- Inconsistent with other callers (all use 0 or proper client ID)
How WithGrpcFilerClient uses signature parameter:
func WithGrpcClient(..., signature int32, ...) {
if streamingMode && signature != 0 {
md := metadata.New(map[string]string{"sw-client-id": fmt.Sprintf("%d", signature)})
ctx = metadata.NewOutgoingContext(ctx, md)
}
...
}
It's for client identification, not timeout control!
Fix:
1. Added clientId int32 field to FilerClient struct
2. Initialize with rand.Int31() in NewFilerClient for unique ID
3. Removed timeoutMs variable (and misleading comment)
4. Use fc.clientId in pb.WithGrpcFilerClient call
Before:
err := pb.WithGrpcFilerClient(false, timeoutMs, ...)
^^^^^^^^^ Wrong! (5000)
After:
err := pb.WithGrpcFilerClient(false, fc.clientId, ...)
^^^^^^^^^^^^ Correct! (random int31)
Benefits:
- Correct API usage (signature = client ID, not timeout)
- Timeout still works via timeoutCtx (unchanged)
- Consistent with other pb.WithGrpcFilerClient callers
- Enables proper client tracking on filer side via gRPC metadata
- Each FilerClient instance has unique ID for debugging
Examples of correct usage elsewhere:
weed/iamapi/iamapi_server.go:145 pb.WithGrpcFilerClient(false, 0, ...)
weed/command/s3.go:215 pb.WithGrpcFilerClient(false, 0, ...)
weed/shell/commands.go:110 pb.WithGrpcFilerClient(streamingMode, 0, ...)
All use 0 (or a proper signature), not a timeout value.
* fix: add timeout to master volume lookup to prevent indefinite blocking
The masterVolumeProvider.LookupVolumeIds method was using the context
directly without a timeout, which could cause it to block indefinitely
if the master is slow to respond or unreachable.
Problem:
err := pb.WithMasterClient(false, p.masterClient.GetMaster(ctx), ...)
resp, err := client.LookupVolume(ctx, &master_pb.LookupVolumeRequest{...})
- No timeout on gRPC call to master
- Could block indefinitely if master is unresponsive
- Inconsistent with FilerClient which uses 5s timeout
- This is a fallback path (cache miss) but still needs protection
Scenarios where this could hang:
1. Master server under heavy load (slow response)
2. Network issues between client and master
3. Master server hung or deadlocked
4. Master in process of shutting down
Fix:
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
defer cancel()
err := pb.WithMasterClient(false, p.masterClient.GetMaster(timeoutCtx), ...)
resp, err := client.LookupVolume(timeoutCtx, &master_pb.LookupVolumeRequest{...})
Benefits:
- Prevents indefinite blocking on master lookup
- Consistent with FilerClient timeout pattern (5 seconds)
- Faster failure detection when master is unresponsive
- Caller's context still honored (timeout is in addition, not replacement)
- Improves overall system resilience
Note: 5 seconds is a reasonable default for volume lookups:
- Long enough for normal master response (~10-50ms)
- Short enough to fail fast on issues
- Matches FilerClient's grpcTimeout default
* purge
* refactor: address code review feedback on comments and style
Fixed several code quality issues identified during review:
1. Corrected backoff algorithm description in filer_client.go:
- Changed "Exponential backoff" to "Multiplicative backoff with 1.5x factor"
- The formula waitTime * 3/2 produces 1s, 1.5s, 2.25s, not exponential 2^n
- More accurate terminology prevents confusion
2. Removed redundant nil check in vidmap_client.go:
- After the for loop, node is guaranteed to be non-nil
- Loop either returns early or assigns non-nil value to node
- Simplified: if node != nil { node.cache.Store(nil) } → node.cache.Store(nil)
3. Added startup logging to IAM server for consistency:
- Log when master client connection starts
- Matches pattern in S3ApiServer (line 100 in s3api_server.go)
- Improves operational visibility during startup
- Added missing glog import
4. Fixed indentation in filer/reader_at.go:
- Lines 76-91 had incorrect indentation (extra tab level)
- Line 93 also misaligned
- Now properly aligned with surrounding code
5. Updated deprecation comment to follow Go convention:
- Changed "DEPRECATED:" to "Deprecated:" (standard Go format)
- Tools like staticcheck and IDEs recognize the standard format
- Enables automated deprecation warnings in tooling
- Better developer experience
All changes are cosmetic and do not affect functionality.
* fmt
* refactor: make circuit breaker parameters configurable in FilerClient
The circuit breaker failure threshold (3) and reset timeout (30s) were
hardcoded, making it difficult to tune the client's behavior in different
deployment environments without modifying the code.
Problem:
func shouldSkipUnhealthyFiler(index int32) bool {
if failureCount < 3 { // Hardcoded threshold
return false
}
if time.Since(lastFailureTime) > 30*time.Second { // Hardcoded timeout
return false
}
}
Different environments have different needs:
- High-traffic production: may want lower threshold (2) for faster failover
- Development/testing: may want higher threshold (5) to tolerate flaky networks
- Low-latency services: may want shorter reset timeout (10s)
- Batch processing: may want longer reset timeout (60s)
Solution:
1. Added fields to FilerClientOption:
- FailureThreshold int32 (default: 3)
- ResetTimeout time.Duration (default: 30s)
2. Added fields to FilerClient:
- failureThreshold int32
- resetTimeout time.Duration
3. Applied defaults in NewFilerClient with option override:
failureThreshold := int32(3)
resetTimeout := 30 * time.Second
if opt.FailureThreshold > 0 {
failureThreshold = opt.FailureThreshold
}
if opt.ResetTimeout > 0 {
resetTimeout = opt.ResetTimeout
}
4. Updated shouldSkipUnhealthyFiler to use configurable values:
if failureCount < fc.failureThreshold { ... }
if time.Since(lastFailureTime) > fc.resetTimeout { ... }
Benefits:
✓ Tunable for different deployment environments
✓ Backward compatible (defaults match previous hardcoded values)
✓ No breaking changes to existing code
✓ Better maintainability and flexibility
Example usage:
// Aggressive failover for low-latency production
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
FailureThreshold: 2,
ResetTimeout: 10 * time.Second,
})
// Tolerant of flaky networks in development
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
FailureThreshold: 5,
ResetTimeout: 60 * time.Second,
})
* retry parameters
* refactor: make retry and timeout parameters configurable
Made retry logic and gRPC timeouts configurable across FilerClient and
MasterClient to support different deployment environments and network
conditions.
Problem 1: Hardcoded retry parameters in FilerClient
waitTime := time.Second // Fixed at 1s
maxRetries := 3 // Fixed at 3 attempts
waitTime = waitTime * 3 / 2 // Fixed 1.5x multiplier
Different environments have different needs:
- Unstable networks: may want more retries (5) with longer waits (2s)
- Low-latency production: may want fewer retries (2) with shorter waits (500ms)
- Batch processing: may want exponential backoff (2x) instead of 1.5x
Problem 2: Hardcoded gRPC timeout in MasterClient
timeoutCtx, cancel := context.WithTimeout(ctx, 5*time.Second)
Master lookups may need different timeouts:
- High-latency cross-region: may need 10s timeout
- Local network: may use 2s timeout for faster failure detection
Solution for FilerClient:
1. Added fields to FilerClientOption:
- MaxRetries int (default: 3)
- InitialRetryWait time.Duration (default: 1s)
- RetryBackoffFactor float64 (default: 1.5)
2. Added fields to FilerClient:
- maxRetries int
- initialRetryWait time.Duration
- retryBackoffFactor float64
3. Updated LookupVolumeIds to use configurable values:
waitTime := fc.initialRetryWait
maxRetries := fc.maxRetries
for retry := 0; retry < maxRetries; retry++ {
...
waitTime = time.Duration(float64(waitTime) * fc.retryBackoffFactor)
}
Solution for MasterClient:
1. Added grpcTimeout field to MasterClient (default: 5s)
2. Initialize in NewMasterClient with 5 * time.Second default
3. Updated masterVolumeProvider to use p.masterClient.grpcTimeout
Benefits:
✓ Tunable for different network conditions and deployment scenarios
✓ Backward compatible (defaults match previous hardcoded values)
✓ No breaking changes to existing code
✓ Consistent configuration pattern across FilerClient and MasterClient
Example usage:
// Fast-fail for low-latency production with stable network
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
MaxRetries: 2,
InitialRetryWait: 500 * time.Millisecond,
RetryBackoffFactor: 2.0, // Exponential backoff
GrpcTimeout: 2 * time.Second,
})
// Patient retries for unstable network or batch processing
fc := wdclient.NewFilerClient(filers, dialOpt, dc, &wdclient.FilerClientOption{
MaxRetries: 5,
InitialRetryWait: 2 * time.Second,
RetryBackoffFactor: 1.5,
GrpcTimeout: 10 * time.Second,
})
Note: MasterClient timeout is currently set at construction time and not
user-configurable via NewMasterClient parameters. Future enhancement could
add a MasterClientOption struct similar to FilerClientOption.
* fix: rename vicCacheLock to vidCacheLock for consistency
Fixed typo in variable name for better code consistency and readability.
Problem:
vidCache := make(map[string]*filer_pb.Locations)
var vicCacheLock sync.RWMutex // Typo: vic instead of vid
vicCacheLock.RLock()
locations, found := vidCache[vid]
vicCacheLock.RUnlock()
The variable name 'vicCacheLock' is inconsistent with 'vidCache'.
Both should use 'vid' prefix (volume ID) not 'vic'.
Fix:
Renamed all 5 occurrences:
- var vicCacheLock → var vidCacheLock (line 56)
- vicCacheLock.RLock() → vidCacheLock.RLock() (line 62)
- vicCacheLock.RUnlock() → vidCacheLock.RUnlock() (line 64)
- vicCacheLock.Lock() → vidCacheLock.Lock() (line 81)
- vicCacheLock.Unlock() → vidCacheLock.Unlock() (line 91)
Benefits:
✓ Consistent variable naming convention
✓ Clearer intent (volume ID cache lock)
✓ Better code readability
✓ Easier code navigation
* fix: use defer cancel() with anonymous function for proper context cleanup
Fixed context cancellation to use defer pattern correctly in loop iteration.
Problem:
for x := 0; x < n; x++ {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
err := pb.WithGrpcFilerClient(...)
cancel() // Only called on normal return, not on panic
}
Issues with original approach:
1. If pb.WithGrpcFilerClient panics, cancel() is never called → context leak
2. If callback returns early (though unlikely here), cleanup might be missed
3. Not following Go best practices for context.WithTimeout usage
Problem with naive defer in loop:
for x := 0; x < n; x++ {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
defer cancel() // ❌ WRONG: All defers accumulate until function returns
}
In Go, defer executes when the surrounding *function* returns, not when
the loop iteration ends. This would accumulate n deferred cancel() calls
and leak contexts until LookupVolumeIds returns.
Solution: Wrap in anonymous function
for x := 0; x < n; x++ {
err := func() error {
timeoutCtx, cancel := context.WithTimeout(ctx, fc.grpcTimeout)
defer cancel() // ✅ Executes when anonymous function returns (per iteration)
return pb.WithGrpcFilerClient(...)
}()
}
Benefits:
✓ Context always cancelled, even on panic
✓ defer executes after each iteration (not accumulated)
✓ Follows Go best practices for context.WithTimeout
✓ No resource leaks during retry loop execution
✓ Cleaner error handling
Reference:
Go documentation for context.WithTimeout explicitly shows:
ctx, cancel := context.WithTimeout(...)
defer cancel()
This is the idiomatic pattern that should always be followed.
* Can't use defer directly in loop
* improve: add data center preference and URL shuffling for consistent performance
Added missing data center preference and load distribution (URL shuffling)
to ensure consistent performance and behavior across all code paths.
Problem 1: PreferPublicUrl path missing DC preference and shuffling
Location: weed/wdclient/filer_client.go lines 184-192
The custom PreferPublicUrl implementation was simply iterating through
locations and building URLs without considering:
1. Data center proximity (latency optimization)
2. Load distribution across volume servers
Before:
for _, loc := range locations {
url := loc.PublicUrl
if url == "" { url = loc.Url }
fullUrls = append(fullUrls, "http://"+url+"/"+fileId)
}
return fullUrls, nil
After:
var sameDcUrls, otherDcUrls []string
dataCenter := fc.GetDataCenter()
for _, loc := range locations {
url := loc.PublicUrl
if url == "" { url = loc.Url }
httpUrl := "http://" + url + "/" + fileId
if dataCenter != "" && dataCenter == loc.DataCenter {
sameDcUrls = append(sameDcUrls, httpUrl)
} else {
otherDcUrls = append(otherDcUrls, httpUrl)
}
}
rand.Shuffle(len(sameDcUrls), ...)
rand.Shuffle(len(otherDcUrls), ...)
fullUrls = append(sameDcUrls, otherDcUrls...)
Problem 2: Cache miss path missing URL shuffling
Location: weed/wdclient/vidmap_client.go lines 95-108
The cache miss path (fallback lookup) was missing URL shuffling, while
the cache hit path (vm.LookupFileId) already shuffles URLs. This
inconsistency meant:
- Cache hit: URLs shuffled → load distributed
- Cache miss: URLs not shuffled → first server always hit
Before:
var sameDcUrls, otherDcUrls []string
// ... build URLs ...
fullUrls = append(sameDcUrls, otherDcUrls...)
return fullUrls, nil
After:
var sameDcUrls, otherDcUrls []string
// ... build URLs ...
rand.Shuffle(len(sameDcUrls), ...)
rand.Shuffle(len(otherDcUrls), ...)
fullUrls = append(sameDcUrls, otherDcUrls...)
return fullUrls, nil
Benefits:
✓ Reduced latency by preferring same-DC volume servers
✓ Even load distribution across all volume servers
✓ Consistent behavior between cache hit/miss paths
✓ Consistent behavior between PreferUrl and PreferPublicUrl
✓ Matches behavior of existing vidMap.LookupFileId implementation
Impact on performance:
- Lower read latency (same-DC preference)
- Better volume server utilization (load spreading)
- No single volume server becomes a hotspot
Note: Added math/rand import to vidmap_client.go for shuffle support.
* Update weed/wdclient/masterclient.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* improve: call IAM server Shutdown() for best-effort cleanup
Added call to iamApiServer.Shutdown() to ensure cleanup happens when possible,
and documented the limitations of the current approach.
Problem:
The Shutdown() method was defined in IamApiServer but never called anywhere,
meaning the KeepConnectedToMaster goroutine would continue running even when
the IAM server stopped, causing resource leaks.
Changes:
1. Store iamApiServer instance in weed/command/iam.go
- Changed: _, iamApiServer_err := iamapi.NewIamApiServer(...)
- To: iamApiServer, iamApiServer_err := iamapi.NewIamApiServer(...)
2. Added defer call for best-effort cleanup
- defer iamApiServer.Shutdown()
- This will execute if startIamServer() returns normally
3. Added logging in Shutdown() method
- Log when shutdown is triggered for visibility
4. Documented limitations and future improvements
- Added note that defer only works for normal function returns
- SeaweedFS commands don't currently have signal handling
- Suggested future enhancement: add SIGTERM/SIGINT handling
Current behavior:
- ✓ Cleanup happens if HTTP server fails to start (glog.Fatalf path)
- ✓ Cleanup happens if Serve() returns with error (unlikely)
- ✗ Cleanup does NOT happen on SIGTERM/SIGINT (process killed)
The last case is a limitation of the current command architecture - all
SeaweedFS commands (s3, filer, volume, master, iam) lack signal handling
for graceful shutdown. This is a systemic issue that affects all services.
Future enhancement:
To properly handle SIGTERM/SIGINT, the command layer would need:
sigChan := make(chan os.Signal, 1)
signal.Notify(sigChan, syscall.SIGTERM, syscall.SIGINT)
go func() {
httpServer.Serve(listener) // Non-blocking
}()
<-sigChan
glog.V(0).Infof("Received shutdown signal")
iamApiServer.Shutdown()
httpServer.Shutdown(context.Background())
This would require refactoring the command structure for all services,
which is out of scope for this change.
Benefits of current approach:
✓ Best-effort cleanup (better than nothing)
✓ Proper cleanup in error paths
✓ Documented for future improvement
✓ Consistent with how other SeaweedFS services handle lifecycle
* data racing in test
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
|
2 months ago |
|
|
8be9e258fc
|
S3: Add tests for PyArrow with native S3 filesystem (#7508)
* PyArrow native S3 filesystem * add sse-s3 tests * update * minor * ENABLE_SSE_S3 * Update test_pyarrow_native_s3.py * clean up * refactoring * Update test_pyarrow_native_s3.py |
2 months ago |
|
|
ca84a8a713
|
S3: Directly read write volume servers (#7481)
* Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization
* revert
Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata
Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata
* Lazy Entry Fetch for SSE, Skip Conditional Header Check
* SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive)
* SSE-C is mutually exclusive with SSE-S3 and SSE-KMS
* refactor
* Removed Premature Mutual Exclusivity Check
* check for the presence of the X-Amz-Server-Side-Encryption header
* not used
* fmt
* directly read write volume servers
* HTTP Range Request Support
* set header
* md5
* copy object
* fix sse
* fmt
* implement sse
* sse continue
* fixed the suffix range bug (bytes=-N for "last N bytes")
* debug logs
* Missing PartsCount Header
* profiling
* url encoding
* test_multipart_get_part
* headers
* debug
* adjust log level
* handle part number
* Update s3api_object_handlers.go
* nil safety
* set ModifiedTsNs
* remove
* nil check
* fix sse header
* same logic as filer
* decode values
* decode ivBase64
* s3: Fix SSE decryption JWT authentication and streaming errors
Critical fix for SSE (Server-Side Encryption) test failures:
1. **JWT Authentication Bug** (Root Cause):
- Changed from GenJwtForFilerServer to GenJwtForVolumeServer
- S3 API now uses correct JWT when directly reading from volume servers
- Matches filer's authentication pattern for direct volume access
- Fixes 'unexpected EOF' and 500 errors in SSE tests
2. **Streaming Error Handling**:
- Added error propagation in getEncryptedStreamFromVolumes goroutine
- Use CloseWithError() to properly communicate stream failures
- Added debug logging for streaming errors
3. **Response Header Timing**:
- Removed premature WriteHeader(http.StatusOK) call
- Let Go's http package write status automatically on first write
- Prevents header lock when errors occur during streaming
4. **Enhanced SSE Decryption Debugging**:
- Added IV/Key validation and logging for SSE-C, SSE-KMS, SSE-S3
- Better error messages for missing or invalid encryption metadata
- Added glog.V(2) debugging for decryption setup
This fixes SSE integration test failures where encrypted objects
could not be retrieved due to volume server authentication failures.
The JWT bug was causing volume servers to reject requests, resulting
in truncated/empty streams (EOF) or internal errors.
* s3: Fix SSE multipart upload metadata preservation
Critical fix for SSE multipart upload test failures (SSE-C and SSE-KMS):
**Root Cause - Incomplete SSE Metadata Copying**:
The old code only tried to copy 'SeaweedFSSSEKMSKey' from the first
part to the completed object. This had TWO bugs:
1. **Wrong Constant Name** (Key Mismatch Bug):
- Storage uses: SeaweedFSSSEKMSKeyHeader = 'X-SeaweedFS-SSE-KMS-Key'
- Old code read: SeaweedFSSSEKMSKey = 'x-seaweedfs-sse-kms-key'
- Result: SSE-KMS metadata was NEVER copied → 500 errors
2. **Missing SSE-C and SSE-S3 Headers**:
- SSE-C requires: IV, Algorithm, KeyMD5
- SSE-S3 requires: encrypted key data + standard headers
- Old code: copied nothing for SSE-C/SSE-S3 → decryption failures
**Fix - Complete SSE Header Preservation**:
Now copies ALL SSE headers from first part to completed object:
- SSE-C: SeaweedFSSSEIV, CustomerAlgorithm, CustomerKeyMD5
- SSE-KMS: SeaweedFSSSEKMSKeyHeader, AwsKmsKeyId, ServerSideEncryption
- SSE-S3: SeaweedFSSSES3Key, ServerSideEncryption
Applied consistently to all 3 code paths:
1. Versioned buckets (creates version file)
2. Suspended versioning (creates main object with null versionId)
3. Non-versioned buckets (creates main object)
**Why This Is Correct**:
The headers copied EXACTLY match what putToFiler stores during part
upload (lines 496-521 in s3api_object_handlers_put.go). This ensures
detectPrimarySSEType() can correctly identify encrypted multipart
objects and trigger inline decryption with proper metadata.
Fixes: TestSSEMultipartUploadIntegration (SSE-C and SSE-KMS subtests)
* s3: Add debug logging for versioning state diagnosis
Temporary debug logging to diagnose test_versioning_obj_plain_null_version_overwrite_suspended failure.
Added glog.V(0) logging to show:
1. setBucketVersioningStatus: when versioning status is changed
2. PutObjectHandler: what versioning state is detected (Enabled/Suspended/none)
3. PutObjectHandler: which code path is taken (putVersionedObject vs putSuspendedVersioningObject)
This will help identify if:
- The versioning status is being set correctly in bucket config
- The cache is returning stale/incorrect versioning state
- The switch statement is correctly routing to suspended vs enabled handlers
* s3: Enhanced versioning state tracing for suspended versioning diagnosis
Added comprehensive logging across the entire versioning state flow:
PutBucketVersioningHandler:
- Log requested status (Enabled/Suspended)
- Log when calling setBucketVersioningStatus
- Log success/failure of status change
setBucketVersioningStatus:
- Log bucket and status being set
- Log when config is updated
- Log completion with error code
updateBucketConfig:
- Log versioning state being written to cache
- Immediate cache verification after Set
- Log if cache verification fails
getVersioningState:
- Log bucket name and state being returned
- Log if object lock forces VersioningEnabled
- Log errors
This will reveal:
1. If PutBucketVersioning(Suspended) is reaching the handler
2. If the cache update succeeds
3. What state getVersioningState returns during PUT
4. Any cache consistency issues
Expected to show why bucket still reports 'Enabled' after 'Suspended' call.
* s3: Add SSE chunk detection debugging for multipart uploads
Added comprehensive logging to diagnose why TestSSEMultipartUploadIntegration fails:
detectPrimarySSEType now logs:
1. Total chunk count and extended header count
2. All extended headers with 'sse'/'SSE'/'encryption' in the name
3. For each chunk: index, SseType, and whether it has metadata
4. Final SSE type counts (SSE-C, SSE-KMS, SSE-S3)
This will reveal if:
- Chunks are missing SSE metadata after multipart completion
- Extended headers are copied correctly from first part
- The SSE detection logic is working correctly
Expected to show if chunks have SseType=0 (none) or proper SSE types set.
* s3: Trace SSE chunk metadata through multipart completion and retrieval
Added end-to-end logging to track SSE chunk metadata lifecycle:
**During Multipart Completion (filer_multipart.go)**:
1. Log finalParts chunks BEFORE mkFile - shows SseType and metadata
2. Log versionEntry.Chunks INSIDE mkFile callback - shows if mkFile preserves SSE info
3. Log success after mkFile completes
**During GET Retrieval (s3api_object_handlers.go)**:
1. Log retrieved entry chunks - shows SseType and metadata after retrieval
2. Log detected SSE type result
This will reveal at which point SSE chunk metadata is lost:
- If finalParts have SSE metadata but versionEntry.Chunks don't → mkFile bug
- If versionEntry.Chunks have SSE metadata but retrieved chunks don't → storage/retrieval bug
- If chunks never have SSE metadata → multipart completion SSE processing bug
Expected to show chunks with SseType=NONE during retrieval even though
they were created with proper SseType during multipart completion.
* s3: Fix SSE-C multipart IV base64 decoding bug
**Critical Bug Found**: SSE-C multipart uploads were failing because:
Root Cause:
- entry.Extended[SeaweedFSSSEIV] stores base64-encoded IV (24 bytes for 16-byte IV)
- SerializeSSECMetadata expects raw IV bytes (16 bytes)
- During multipart completion, we were passing base64 IV directly → serialization error
Error Message:
"Failed to serialize SSE-C metadata for chunk in part X: invalid IV length: expected 16 bytes, got 24"
Fix:
- Base64-decode IV before passing to SerializeSSECMetadata
- Added error handling for decode failures
Impact:
- SSE-C multipart uploads will now correctly serialize chunk metadata
- Chunks will have proper SSE metadata for decryption during GET
This fixes the SSE-C subtest of TestSSEMultipartUploadIntegration.
SSE-KMS still has a separate issue (error code 23) being investigated.
* fixes
* kms sse
* handle retry if not found in .versions folder and should read the normal object
* quick check (no retries) to see if the .versions/ directory exists
* skip retry if object is not found
* explicit update to avoid sync delay
* fix map update lock
* Remove fmt.Printf debug statements
* Fix SSE-KMS multipart base IV fallback to fail instead of regenerating
* fmt
* Fix ACL grants storage logic
* header handling
* nil handling
* range read for sse content
* test range requests for sse objects
* fmt
* unused code
* upload in chunks
* header case
* fix url
* bucket policy error vs bucket not found
* jwt handling
* fmt
* jwt in request header
* Optimize Case-Insensitive Prefix Check
* dead code
* Eliminated Unnecessary Stream Prefetch for Multipart SSE
* range sse
* sse
* refactor
* context
* fmt
* fix type
* fix SSE-C IV Mismatch
* Fix Headers Being Set After WriteHeader
* fix url parsing
* propergate sse headers
* multipart sse-s3
* aws sig v4 authen
* sse kms
* set content range
* better errors
* Update s3api_object_handlers_copy.go
* Update s3api_object_handlers.go
* Update s3api_object_handlers.go
* avoid magic number
* clean up
* Update s3api_bucket_policy_handlers.go
* fix url parsing
* context
* data and metadata both use background context
* adjust the offset
* SSE Range Request IV Calculation
* adjust logs
* IV relative to offset in each part, not the whole file
* collect logs
* offset
* fix offset
* fix url
* logs
* variable
* jwt
* Multipart ETag semantics: conditionally set object-level Md5 for single-chunk uploads only.
* sse
* adjust IV and offset
* multipart boundaries
* ensures PUT and GET operations return consistent ETags
* Metadata Header Case
* CommonPrefixes Sorting with URL Encoding
* always sort
* remove the extra PathUnescape call
* fix the multipart get part ETag
* the FileChunk is created without setting ModifiedTsNs
* Sort CommonPrefixes lexicographically to match AWS S3 behavior
* set md5 for multipart uploads
* prevents any potential data loss or corruption in the small-file inline storage path
* compiles correctly
* decryptedReader will now be properly closed after use
* Fixed URL encoding and sort order for CommonPrefixes
* Update s3api_object_handlers_list.go
* SSE-x Chunk View Decryption
* Different IV offset calculations for single-part vs multipart objects
* still too verbose in logs
* less logs
* ensure correct conversion
* fix listing
* nil check
* minor fixes
* nil check
* single character delimiter
* optimize
* range on empty object or zero-length
* correct IV based on its position within that part, not its position in the entire object
* adjust offset
* offset
Fetch FULL encrypted chunk (not just the range)
Adjust IV by PartOffset/ChunkOffset only
Decrypt full chunk
Skip in the DECRYPTED stream to reach OffsetInChunk
* look breaking
* refactor
* error on no content
* handle intra-block byte skipping
* Incomplete HTTP Response Error Handling
* multipart SSE
* Update s3api_object_handlers.go
* address comments
* less logs
* handling directory
* Optimized rejectDirectoryObjectWithoutSlash() to avoid unnecessary lookups
* Revert "handling directory"
This reverts commit
|
2 months ago |
|
|
aef5121c36 |
faster master startup
|
2 months ago |
|
|
508d06d9a5
|
S3: Enforce bucket policy (#7471)
* evaluate policies during authorization * cache bucket policy * refactor * matching with regex special characters * Case Sensitivity, pattern cache, Dead Code Removal * Fixed Typo, Restored []string Case, Added Cache Size Limit * hook up with policy engine * remove old implementation * action mapping * validate * if not specified, fall through to IAM checks * fmt * Fail-close on policy evaluation errors * Explicit `Allow` bypasses IAM checks * fix error message * arn:seaweed => arn:aws * remove legacy support * fix tests * Clean up bucket policy after this test * fix for tests * address comments * security fixes * fix tests * temp comment out |
3 months ago |
|
|
f4f2718ba0 |
adjust test
|
3 months ago |
|
|
498ac8903f
|
S3: prevent deleting buckets with object locking (#7434)
* prevent deleting buckets with object locking * addressing comments * Update s3api_bucket_handlers.go * address comments * early return * refactor * simplify * constant * go fmt |
3 months ago |
|
|
eaecd8328b
|
S3: add fallback for CORS (#7404)
* add fallback for cors * refactor * expose aws headers * add fallback to test * refactor * Only falls back to global config when there's explicitly no bucket-level config. * fmt * Update s3_cors_http_test.go * refactoring |
3 months ago |
|
|
8d63a9cf5f
|
Fixes for kafka gateway (#7329)
* fix race condition
* save checkpoint every 2 seconds
* Inlined the session creation logic to hold the lock continuously
* comment
* more logs on offset resume
* only recreate if we need to seek backward (requested offset < current offset), not on any mismatch
* Simplified GetOrCreateSubscriber to always reuse existing sessions
* atomic currentStartOffset
* fmt
* avoid deadlock
* fix locking
* unlock
* debug
* avoid race condition
* refactor dedup
* consumer group that does not join group
* increase deadline
* use client timeout wait
* less logs
* add some delays
* adjust deadline
* Update fetch.go
* more time
* less logs, remove unused code
* purge unused
* adjust return values on failures
* clean up consumer protocols
* avoid goroutine leak
* seekable subscribe messages
* ack messages to broker
* reuse cached records
* pin s3 test version
* adjust s3 tests
* verify produced messages are consumed
* track messages with testStartTime
* removing the unnecessary restart logic and relying on the seek mechanism we already implemented
* log read stateless
* debug fetch offset APIs
* fix tests
* fix go mod
* less logs
* test: increase timeouts for consumer group operations in E2E tests
Consumer group operations (coordinator discovery, offset fetch/commit) are
slower in CI environments with limited resources. This increases timeouts to:
- ProduceMessages: 10s -> 30s (for when consumer groups are active)
- ConsumeWithGroup: 30s -> 60s (for offset fetch/commit operations)
Fixes the TestOffsetManagement timeout failures in GitHub Actions CI.
* feat: add context timeout propagation to produce path
This commit adds proper context propagation throughout the produce path,
enabling client-side timeouts to be honored on the broker side. Previously,
only fetch operations respected client timeouts - produce operations continued
indefinitely even if the client gave up.
Changes:
- Add ctx parameter to ProduceRecord and ProduceRecordValue signatures
- Add ctx parameter to PublishRecord and PublishRecordValue in BrokerClient
- Add ctx parameter to handleProduce and related internal functions
- Update all callers (protocol handlers, mocks, tests) to pass context
- Add context cancellation checks in PublishRecord before operations
Benefits:
- Faster failure detection when client times out
- No orphaned publish operations consuming broker resources
- Resource efficiency improvements (no goroutine/stream/lock leaks)
- Consistent timeout behavior between produce and fetch paths
- Better error handling with proper cancellation signals
This fixes the root cause of CI test timeouts where produce operations
continued indefinitely after clients gave up, leading to cascading delays.
* feat: add disk I/O fallback for historical offset reads
This commit implements async disk I/O fallback to handle cases where:
1. Data is flushed from memory before consumers can read it (CI issue)
2. Consumers request historical offsets not in memory
3. Small LogBuffer retention in resource-constrained environments
Changes:
- Add readHistoricalDataFromDisk() helper function
- Update ReadMessagesAtOffset() to call ReadFromDiskFn when offset < bufferStartOffset
- Properly handle maxMessages and maxBytes limits during disk reads
- Return appropriate nextOffset after disk reads
- Log disk read operations at V(2) and V(3) levels
Benefits:
- Fixes CI test failures where data is flushed before consumption
- Enables consumers to catch up even if they fall behind memory retention
- No blocking on hot path (disk read only for historical data)
- Respects existing ReadFromDiskFn timeout handling
How it works:
1. Try in-memory read first (fast path)
2. If offset too old and ReadFromDiskFn configured, read from disk
3. Return disk data with proper nextOffset
4. Consumer continues reading seamlessly
This fixes the 'offset 0 too old (earliest in-memory: 5)' error in
TestOffsetManagement where messages were flushed before consumer started.
* fmt
* feat: add in-memory cache for disk chunk reads
This commit adds an LRU cache for disk chunks to optimize repeated reads
of historical data. When multiple consumers read the same historical offsets,
or a single consumer refetches the same data, the cache eliminates redundant
disk I/O.
Cache Design:
- Chunk size: 1000 messages per chunk
- Max chunks: 16 (configurable, ~16K messages cached)
- Eviction policy: LRU (Least Recently Used)
- Thread-safe with RWMutex
- Chunk-aligned offsets for efficient lookups
New Components:
1. DiskChunkCache struct - manages cached chunks
2. CachedDiskChunk struct - stores chunk data with metadata
3. getCachedDiskChunk() - checks cache before disk read
4. cacheDiskChunk() - stores chunks with LRU eviction
5. extractMessagesFromCache() - extracts subset from cached chunk
How It Works:
1. Read request for offset N (e.g., 2500)
2. Calculate chunk start: (2500 / 1000) * 1000 = 2000
3. Check cache for chunk starting at 2000
4. If HIT: Extract messages 2500-2999 from cached chunk
5. If MISS: Read chunk 2000-2999 from disk, cache it, extract 2500-2999
6. If cache full: Evict LRU chunk before caching new one
Benefits:
- Eliminates redundant disk I/O for popular historical data
- Reduces latency for repeated reads (cache hit ~1ms vs disk ~100ms)
- Supports multiple consumers reading same historical offsets
- Automatically evicts old chunks when cache is full
- Zero impact on hot path (in-memory reads unchanged)
Performance Impact:
- Cache HIT: ~99% faster than disk read
- Cache MISS: Same as disk read (with caching overhead ~1%)
- Memory: ~16MB for 16 chunks (16K messages x 1KB avg)
Example Scenario (CI tests):
- Producer writes offsets 0-4
- Data flushes to disk
- Consumer 1 reads 0-4 (cache MISS, reads from disk, caches chunk 0-999)
- Consumer 2 reads 0-4 (cache HIT, served from memory)
- Consumer 1 rebalances, re-reads 0-4 (cache HIT, no disk I/O)
This optimization is especially valuable in CI environments where:
- Small memory buffers cause frequent flushing
- Multiple consumers read the same historical data
- Disk I/O is relatively slow compared to memory access
* fix: commit offsets in Cleanup() before rebalancing
This commit adds explicit offset commit in the ConsumerGroupHandler.Cleanup()
method, which is called during consumer group rebalancing. This ensures all
marked offsets are committed BEFORE partitions are reassigned to other consumers,
significantly reducing duplicate message consumption during rebalancing.
Problem:
- Cleanup() was not committing offsets before rebalancing
- When partition reassigned to another consumer, it started from last committed offset
- Uncommitted messages (processed but not yet committed) were read again by new consumer
- This caused ~100-200% duplicate messages during rebalancing in tests
Solution:
- Add session.Commit() in Cleanup() method
- This runs after all ConsumeClaim goroutines have exited
- Ensures all MarkMessage() calls are committed before partition release
- New consumer starts from the last processed offset, not an older committed offset
Benefits:
- Dramatically reduces duplicate messages during rebalancing
- Improves at-least-once semantics (closer to exactly-once for normal cases)
- Better performance (less redundant processing)
- Cleaner test results (expected duplicates only from actual failures)
Kafka Rebalancing Lifecycle:
1. Rebalance triggered (consumer join/leave, timeout, etc.)
2. All ConsumeClaim goroutines cancelled
3. Cleanup() called ← WE COMMIT HERE NOW
4. Partitions reassigned to other consumers
5. New consumer starts from last committed offset ← NOW MORE UP-TO-DATE
Expected Results:
- Before: ~100-200% duplicates during rebalancing (2-3x reads)
- After: <10% duplicates (only from uncommitted in-flight messages)
This is a critical fix for production deployments where consumer churn
(scaling, restarts, failures) causes frequent rebalancing.
* fmt
* feat: automatic idle partition cleanup to prevent memory bloat
Implements automatic cleanup of topic partitions with no active publishers
or subscribers to prevent memory accumulation from short-lived topics.
**Key Features:**
1. Activity Tracking (local_partition.go)
- Added lastActivityTime field to LocalPartition
- UpdateActivity() called on publish, subscribe, and message reads
- IsIdle() checks if partition has no publishers/subscribers
- GetIdleDuration() returns time since last activity
- ShouldCleanup() determines if partition eligible for cleanup
2. Cleanup Task (local_manager.go)
- Background goroutine runs every 1 minute (configurable)
- Removes partitions idle for > 5 minutes (configurable)
- Automatically removes empty topics after all partitions cleaned
- Proper shutdown handling with WaitForCleanupShutdown()
3. Broker Integration (broker_server.go)
- StartIdlePartitionCleanup() called on broker startup
- Default: check every 1 minute, cleanup after 5 minutes idle
- Transparent operation with sensible defaults
**Cleanup Process:**
- Checks: partition.Publishers.Size() == 0 && partition.Subscribers.Size() == 0
- Calls partition.Shutdown() to:
- Flush all data to disk (no data loss)
- Stop 3 goroutines (loopFlush, loopInterval, cleanupLoop)
- Free in-memory buffers (~100KB-10MB per partition)
- Close LogBuffer resources
- Removes partition from LocalTopic.Partitions
- Removes topic if no partitions remain
**Benefits:**
- Prevents memory bloat from short-lived topics
- Reduces goroutine count (3 per partition cleaned)
- Zero configuration required
- Data remains on disk, can be recreated on demand
- No impact on active partitions
**Example Logs:**
I Started idle partition cleanup task (check: 1m, timeout: 5m)
I Cleaning up idle partition topic-0 (idle for 5m12s, publishers=0, subscribers=0)
I Cleaned up 2 idle partition(s)
**Memory Freed per Partition:**
- In-memory message buffer: ~100KB-10MB
- Disk buffer cache
- 3 goroutines
- Publisher/subscriber tracking maps
- Condition variables and mutexes
**Related Issue:**
Prevents memory accumulation in systems with high topic churn or
many short-lived consumer groups, improving long-term stability
and resource efficiency.
**Testing:**
- Compiles cleanly
- No linting errors
- Ready for integration testing
fmt
* refactor: reduce verbosity of debug log messages
Changed debug log messages with bracket prefixes from V(1)/V(2) to V(3)/V(4)
to reduce log noise in production. These messages were added during development
for detailed debugging and are still available with higher verbosity levels.
Changes:
- glog.V(2).Infof("[") -> glog.V(4).Infof("[") (~104 messages)
- glog.V(1).Infof("[") -> glog.V(3).Infof("[") (~30 messages)
Affected files:
- weed/mq/broker/broker_grpc_fetch.go
- weed/mq/broker/broker_grpc_sub_offset.go
- weed/mq/kafka/integration/broker_client_fetch.go
- weed/mq/kafka/integration/broker_client_subscribe.go
- weed/mq/kafka/integration/seaweedmq_handler.go
- weed/mq/kafka/protocol/fetch.go
- weed/mq/kafka/protocol/fetch_partition_reader.go
- weed/mq/kafka/protocol/handler.go
- weed/mq/kafka/protocol/offset_management.go
Benefits:
- Cleaner logs in production (default -v=0)
- Still available for deep debugging with -v=3 or -v=4
- No code behavior changes, only log verbosity
- Safer than deletion - messages preserved for debugging
Usage:
- Default (-v=0): Only errors and important events
- -v=1: Standard info messages
- -v=2: Detailed info messages
- -v=3: Debug messages (previously V(1) with brackets)
- -v=4: Verbose debug (previously V(2) with brackets)
* refactor: change remaining glog.Infof debug messages to V(3)
Changed remaining debug log messages with bracket prefixes from
glog.Infof() to glog.V(3).Infof() to prevent them from showing
in production logs by default.
Changes (8 messages across 3 files):
- glog.Infof("[") -> glog.V(3).Infof("[")
Files updated:
- weed/mq/broker/broker_grpc_fetch.go (4 messages)
- [FetchMessage] CALLED! debug marker
- [FetchMessage] request details
- [FetchMessage] LogBuffer read start
- [FetchMessage] LogBuffer read completion
- weed/mq/kafka/integration/broker_client_fetch.go (3 messages)
- [FETCH-STATELESS-CLIENT] received messages
- [FETCH-STATELESS-CLIENT] converted records (with data)
- [FETCH-STATELESS-CLIENT] converted records (empty)
- weed/mq/kafka/integration/broker_client_publish.go (1 message)
- [GATEWAY RECV] _schemas topic debug
Now ALL debug messages with bracket prefixes require -v=3 or higher:
- Default (-v=0): Clean production logs ✅
- -v=3: All debug messages visible
- -v=4: All verbose debug messages visible
Result: Production logs are now clean with default settings!
* remove _schemas debug
* less logs
* fix: critical bug causing 51% message loss in stateless reads
CRITICAL BUG FIX: ReadMessagesAtOffset was returning error instead of
attempting disk I/O when data was flushed from memory, causing massive
message loss (6254 out of 12192 messages = 51% loss).
Problem:
In log_read_stateless.go lines 120-131, when data was flushed to disk
(empty previous buffer), the code returned an 'offset out of range' error
instead of attempting disk I/O. This caused consumers to skip over flushed
data entirely, leading to catastrophic message loss.
The bug occurred when:
1. Data was written to LogBuffer
2. Data was flushed to disk due to buffer rotation
3. Consumer requested that offset range
4. Code found offset in expected range but not in memory
5. ❌ Returned error instead of reading from disk
Root Cause:
Lines 126-131 had early return with error when previous buffer was empty:
// Data not in memory - for stateless fetch, we don't do disk I/O
return messages, startOffset, highWaterMark, false,
fmt.Errorf("offset %d out of range...")
This comment was incorrect - we DO need disk I/O for flushed data!
Fix:
1. Lines 120-132: Changed to fall through to disk read logic instead of
returning error when previous buffer is empty
2. Lines 137-177: Enhanced disk read logic to handle TWO cases:
- Historical data (offset < bufferStartOffset)
- Flushed data (offset >= bufferStartOffset but not in memory)
Changes:
- Line 121: Log "attempting disk read" instead of breaking
- Line 130-132: Fall through to disk read instead of returning error
- Line 141: Changed condition from 'if startOffset < bufferStartOffset'
to 'if startOffset < currentBufferEnd' to handle both cases
- Lines 143-149: Add context-aware logging for both historical and flushed data
- Lines 154-159: Add context-aware error messages
Expected Results:
- Before: 51% message loss (6254/12192 missing)
- After: <1% message loss (only from rebalancing, which we already fixed)
- Duplicates: Should remain ~47% (from rebalancing, expected until offsets committed)
Testing:
- ✅ Compiles successfully
- Ready for integration testing with standard-test
Related Issues:
- This explains the massive data loss in recent load tests
- Disk I/O fallback was implemented but not reachable due to early return
- Disk chunk cache is working but was never being used for flushed data
Priority: CRITICAL - Fixes production-breaking data loss bug
* perf: add topic configuration cache to fix 60% CPU overhead
CRITICAL PERFORMANCE FIX: Added topic configuration caching to eliminate
massive CPU overhead from repeated filer reads and JSON unmarshaling on
EVERY fetch request.
Problem (from CPU profile):
- ReadTopicConfFromFiler: 42.45% CPU (5.76s out of 13.57s)
- protojson.Unmarshal: 25.64% CPU (3.48s)
- GetOrGenerateLocalPartition called on EVERY FetchMessage request
- No caching - reading from filer and unmarshaling JSON every time
- This caused filer, gateway, and broker to be extremely busy
Root Cause:
GetOrGenerateLocalPartition() is called on every FetchMessage request and
was calling ReadTopicConfFromFiler() without any caching. Each call:
1. Makes gRPC call to filer (expensive)
2. Reads JSON from disk (expensive)
3. Unmarshals protobuf JSON (25% of CPU!)
The disk I/O fix (previous commit) made this worse by enabling more reads,
exposing this performance bottleneck.
Solution:
Added topicConfCache similar to existing topicExistsCache:
Changes to broker_server.go:
- Added topicConfCacheEntry struct
- Added topicConfCache map to MessageQueueBroker
- Added topicConfCacheMu RWMutex for thread safety
- Added topicConfCacheTTL (30 seconds)
- Initialize cache in NewMessageBroker()
Changes to broker_topic_conf_read_write.go:
- Modified GetOrGenerateLocalPartition() to check cache first
- Cache HIT: Return cached config immediately (V(4) log)
- Cache MISS: Read from filer, cache result, proceed
- Added invalidateTopicConfCache() for cache invalidation
- Added import "time" for cache TTL
Cache Strategy:
- TTL: 30 seconds (matches topicExistsCache)
- Thread-safe with RWMutex
- Cache key: topic.String() (e.g., "kafka.loadtest-topic-0")
- Invalidation: Call invalidateTopicConfCache() when config changes
Expected Results:
- Before: 60% CPU on filer reads + JSON unmarshaling
- After: <1% CPU (only on cache miss every 30s)
- Filer load: Reduced by ~99% (from every fetch to once per 30s)
- Gateway CPU: Dramatically reduced
- Broker CPU: Dramatically reduced
- Throughput: Should increase significantly
Performance Impact:
With 50 msgs/sec per topic × 5 topics = 250 fetches/sec:
- Before: 250 filer reads/sec (25000% overhead!)
- After: 0.17 filer reads/sec (5 topics / 30s TTL)
- Reduction: 99.93% fewer filer calls
Testing:
- ✅ Compiles successfully
- Ready for load test to verify CPU reduction
Priority: CRITICAL - Fixes production-breaking performance issue
Related: Works with previous commit (disk I/O fix) to enable correct and fast reads
* fmt
* refactor: merge topicExistsCache and topicConfCache into unified topicCache
Merged two separate caches into one unified cache to simplify code and
reduce memory usage. The unified cache stores both topic existence and
configuration in a single structure.
Design:
- Single topicCacheEntry with optional *ConfigureTopicResponse
- If conf != nil: topic exists with full configuration
- If conf == nil: topic doesn't exist (negative cache)
- Same 30-second TTL for both existence and config caching
Changes to broker_server.go:
- Removed topicExistsCacheEntry struct
- Removed topicConfCacheEntry struct
- Added unified topicCacheEntry struct (conf can be nil)
- Removed topicExistsCache, topicExistsCacheMu, topicExistsCacheTTL
- Removed topicConfCache, topicConfCacheMu, topicConfCacheTTL
- Added unified topicCache, topicCacheMu, topicCacheTTL
- Updated NewMessageBroker() to initialize single cache
Changes to broker_topic_conf_read_write.go:
- Modified GetOrGenerateLocalPartition() to use unified cache
- Added negative caching (conf=nil) when topic not found
- Renamed invalidateTopicConfCache() to invalidateTopicCache()
- Single cache lookup instead of two separate checks
Changes to broker_grpc_lookup.go:
- Modified TopicExists() to use unified cache
- Check: exists = (entry.conf != nil)
- Only cache negative results (conf=nil) in TopicExists
- Positive results cached by GetOrGenerateLocalPartition
- Removed old invalidateTopicExistsCache() function
Changes to broker_grpc_configure.go:
- Updated invalidateTopicExistsCache() calls to invalidateTopicCache()
- Two call sites updated
Benefits:
1. Code Simplification: One cache instead of two
2. Memory Reduction: Single map, single mutex, single TTL
3. Consistency: No risk of cache desync between existence and config
4. Less Lock Contention: One lock instead of two
5. Easier Maintenance: Single invalidation function
6. Same Performance: Still eliminates 60% CPU overhead
Cache Behavior:
- TopicExists: Lightweight check, only caches negative (conf=nil)
- GetOrGenerateLocalPartition: Full config read, caches positive (conf != nil)
- Both share same 30s TTL
- Both use same invalidation on topic create/update/delete
Testing:
- ✅ Compiles successfully
- Ready for integration testing
This refactor maintains all performance benefits while simplifying
the codebase and reducing memory footprint.
* fix: add cache to LookupTopicBrokers to eliminate 26% CPU overhead
CRITICAL: LookupTopicBrokers was bypassing cache, causing 26% CPU overhead!
Problem (from CPU profile):
- LookupTopicBrokers: 35.74% CPU (9s out of 25.18s)
- ReadTopicConfFromFiler: 26.41% CPU (6.65s)
- protojson.Unmarshal: 16.64% CPU (4.19s)
- LookupTopicBrokers called b.fca.ReadTopicConfFromFiler() directly on line 35
- Completely bypassed our unified topicCache!
Root Cause:
LookupTopicBrokers is called VERY frequently by clients (every fetch request
needs to know partition assignments). It was calling ReadTopicConfFromFiler
directly instead of using the cache, causing:
1. Expensive gRPC calls to filer on every lookup
2. Expensive JSON unmarshaling on every lookup
3. 26%+ CPU overhead on hot path
4. Our cache optimization was useless for this critical path
Solution:
Created getTopicConfFromCache() helper and updated all callers:
Changes to broker_topic_conf_read_write.go:
- Added getTopicConfFromCache() - public API for cached topic config reads
- Implements same caching logic: check cache -> read filer -> cache result
- Handles both positive (conf != nil) and negative (conf == nil) caching
- Refactored GetOrGenerateLocalPartition() to use new helper (code dedup)
- Now only 14 lines instead of 60 lines (removed duplication)
Changes to broker_grpc_lookup.go:
- Modified LookupTopicBrokers() to call getTopicConfFromCache()
- Changed from: b.fca.ReadTopicConfFromFiler(t) (no cache)
- Changed to: b.getTopicConfFromCache(t) (with cache)
- Added comment explaining this fixes 26% CPU overhead
Cache Strategy:
- First call: Cache MISS -> read filer + unmarshal JSON -> cache for 30s
- Next 1000+ calls in 30s: Cache HIT -> return cached config immediately
- No filer gRPC, no JSON unmarshaling, near-zero CPU
- Cache invalidated on topic create/update/delete
Expected CPU Reduction:
- Before: 26.41% on ReadTopicConfFromFiler + 16.64% on JSON unmarshal = 43% CPU
- After: <0.1% (only on cache miss every 30s)
- Expected total broker CPU: 25.18s -> ~8s (67% reduction!)
Performance Impact (with 250 lookups/sec):
- Before: 250 filer reads/sec + 250 JSON unmarshals/sec
- After: 0.17 filer reads/sec (5 topics / 30s TTL)
- Reduction: 99.93% fewer expensive operations
Code Quality:
- Eliminated code duplication (60 lines -> 14 lines in GetOrGenerateLocalPartition)
- Single source of truth for cached reads (getTopicConfFromCache)
- Clear API: "Always use getTopicConfFromCache, never ReadTopicConfFromFiler directly"
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement
Priority: CRITICAL - Completes the cache optimization to achieve full performance fix
* perf: optimize broker assignment validation to eliminate 14% CPU overhead
CRITICAL: Assignment validation was running on EVERY LookupTopicBrokers call!
Problem (from CPU profile):
- ensureTopicActiveAssignments: 14.18% CPU (2.56s out of 18.05s)
- EnsureAssignmentsToActiveBrokers: 14.18% CPU (2.56s)
- ConcurrentMap.IterBuffered: 12.85% CPU (2.32s) - iterating all brokers
- Called on EVERY LookupTopicBrokers request, even with cached config!
Root Cause:
LookupTopicBrokers flow was:
1. getTopicConfFromCache() - returns cached config (fast ✅)
2. ensureTopicActiveAssignments() - validates assignments (slow ❌)
Even though config was cached, we still validated assignments every time,
iterating through ALL active brokers on every single request. With 250
requests/sec, this meant 250 full broker iterations per second!
Solution:
Move assignment validation inside getTopicConfFromCache() and only run it
on cache misses:
Changes to broker_topic_conf_read_write.go:
- Modified getTopicConfFromCache() to validate assignments after filer read
- Validation only runs on cache miss (not on cache hit)
- If hasChanges: Save to filer immediately, invalidate cache, return
- If no changes: Cache config with validated assignments
- Added ensureTopicActiveAssignmentsUnsafe() helper (returns bool)
- Kept ensureTopicActiveAssignments() for other callers (saves to filer)
Changes to broker_grpc_lookup.go:
- Removed ensureTopicActiveAssignments() call from LookupTopicBrokers
- Assignment validation now implicit in getTopicConfFromCache()
- Added comments explaining the optimization
Cache Behavior:
- Cache HIT: Return config immediately, skip validation (saves 14% CPU!)
- Cache MISS: Read filer -> validate assignments -> cache result
- If broker changes detected: Save to filer, invalidate cache, return
- Next request will re-read and re-validate (ensures consistency)
Performance Impact:
With 30-second cache TTL and 250 lookups/sec:
- Before: 250 validations/sec × 10ms each = 2.5s CPU/sec (14% overhead)
- After: 0.17 validations/sec (only on cache miss)
- Reduction: 99.93% fewer validations
Expected CPU Reduction:
- Before (with cache): 18.05s total, 2.56s validation (14%)
- After (with optimization): ~15.5s total (-14% = ~2.5s saved)
- Combined with previous cache fix: 25.18s -> ~15.5s (38% total reduction)
Cache Consistency:
- Assignments validated when config first cached
- If broker membership changes, assignments updated and saved
- Cache invalidated to force fresh read
- All brokers eventually converge on correct assignments
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement
Priority: CRITICAL - Completes optimization of LookupTopicBrokers hot path
* fmt
* perf: add partition assignment cache in gateway to eliminate 13.5% CPU overhead
CRITICAL: Gateway calling LookupTopicBrokers on EVERY fetch to translate
Kafka partition IDs to SeaweedFS partition ranges!
Problem (from CPU profile):
- getActualPartitionAssignment: 13.52% CPU (1.71s out of 12.65s)
- Called bc.client.LookupTopicBrokers on line 228 for EVERY fetch
- With 250 fetches/sec, this means 250 LookupTopicBrokers calls/sec!
- No caching at all - same overhead as broker had before optimization
Root Cause:
Gateway needs to translate Kafka partition IDs (0, 1, 2...) to SeaweedFS
partition ranges (0-341, 342-682, etc.) for every fetch request. This
translation requires calling LookupTopicBrokers to get partition assignments.
Without caching, every fetch request triggered:
1. gRPC call to broker (LookupTopicBrokers)
2. Broker reads from its cache (fast now after broker optimization)
3. gRPC response back to gateway
4. Gateway computes partition range mapping
The gRPC round-trip overhead was consuming 13.5% CPU even though broker
cache was fast!
Solution:
Added partitionAssignmentCache to BrokerClient:
Changes to types.go:
- Added partitionAssignmentCacheEntry struct (assignments + expiresAt)
- Added cache fields to BrokerClient:
* partitionAssignmentCache map[string]*partitionAssignmentCacheEntry
* partitionAssignmentCacheMu sync.RWMutex
* partitionAssignmentCacheTTL time.Duration
Changes to broker_client.go:
- Initialize partitionAssignmentCache in NewBrokerClientWithFilerAccessor
- Set partitionAssignmentCacheTTL to 30 seconds (same as broker)
Changes to broker_client_publish.go:
- Added "time" import
- Modified getActualPartitionAssignment() to check cache first:
* Cache HIT: Use cached assignments (fast ✅)
* Cache MISS: Call LookupTopicBrokers, cache result for 30s
- Extracted findPartitionInAssignments() helper function
* Contains range calculation and partition matching logic
* Reused for both cached and fresh lookups
Cache Behavior:
- First fetch: Cache MISS -> LookupTopicBrokers (~2ms) -> cache for 30s
- Next 7500 fetches in 30s: Cache HIT -> immediate return (~0.01ms)
- Cache automatically expires after 30s, re-validates on next fetch
Performance Impact:
With 250 fetches/sec and 5 topics:
- Before: 250 LookupTopicBrokers/sec = 500ms CPU overhead
- After: 0.17 LookupTopicBrokers/sec (5 topics / 30s TTL)
- Reduction: 99.93% fewer gRPC calls
Expected CPU Reduction:
- Before: 12.65s total, 1.71s in getActualPartitionAssignment (13.5%)
- After: ~11s total (-13.5% = 1.65s saved)
- Benefit: 13% lower CPU, more capacity for actual message processing
Cache Consistency:
- Same 30-second TTL as broker's topic config cache
- Partition assignments rarely change (only on topic reconfiguration)
- 30-second staleness is acceptable for partition mapping
- Gateway will eventually converge with broker's view
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement
Priority: CRITICAL - Eliminates major performance bottleneck in gateway fetch path
* perf: add RecordType inference cache to eliminate 37% gateway CPU overhead
CRITICAL: Gateway was creating Avro codecs and inferring RecordTypes on
EVERY fetch request for schematized topics!
Problem (from CPU profile):
- NewCodec (Avro): 17.39% CPU (2.35s out of 13.51s)
- inferRecordTypeFromAvroSchema: 20.13% CPU (2.72s)
- Total schema overhead: 37.52% CPU
- Called during EVERY fetch to check if topic is schematized
- No caching - recreating expensive goavro.Codec objects repeatedly
Root Cause:
In the fetch path, isSchematizedTopic() -> matchesSchemaRegistryConvention()
-> ensureTopicSchemaFromRegistryCache() -> inferRecordTypeFromCachedSchema()
-> inferRecordTypeFromAvroSchema() was being called.
The inferRecordTypeFromAvroSchema() function created a NEW Avro decoder
(which internally calls goavro.NewCodec()) on every call, even though:
1. The schema.Manager already has a decoder cache by schema ID
2. The same schemas are used repeatedly for the same topics
3. goavro.NewCodec() is expensive (parses JSON, builds schema tree)
This was wasteful because:
- Same schema string processed repeatedly
- No reuse of inferred RecordType structures
- Creating codecs just to infer types, then discarding them
Solution:
Added inferredRecordTypes cache to Handler:
Changes to handler.go:
- Added inferredRecordTypes map[string]*schema_pb.RecordType to Handler
- Added inferredRecordTypesMu sync.RWMutex for thread safety
- Initialize cache in NewTestHandlerWithMock() and NewSeaweedMQBrokerHandlerWithDefaults()
Changes to produce.go:
- Added glog import
- Modified inferRecordTypeFromAvroSchema():
* Check cache first (key: schema string)
* Cache HIT: Return immediately (V(4) log)
* Cache MISS: Create decoder, infer type, cache result
- Modified inferRecordTypeFromProtobufSchema():
* Same caching strategy (key: "protobuf:" + schema)
- Modified inferRecordTypeFromJSONSchema():
* Same caching strategy (key: "json:" + schema)
Cache Strategy:
- Key: Full schema string (unique per schema content)
- Value: Inferred *schema_pb.RecordType
- Thread-safe with RWMutex (optimized for reads)
- No TTL - schemas don't change for a topic
- Memory efficient - RecordType is small compared to codec
Performance Impact:
With 250 fetches/sec across 5 topics (1-3 schemas per topic):
- Before: 250 codec creations/sec + 250 inferences/sec = ~5s CPU
- After: 3-5 codec creations total (one per schema) = ~0.05s CPU
- Reduction: 99% fewer expensive operations
Expected CPU Reduction:
- Before: 13.51s total, 5.07s schema operations (37.5%)
- After: ~8.5s total (-37.5% = 5s saved)
- Benefit: 37% lower gateway CPU, more capacity for message processing
Cache Consistency:
- Schemas are immutable once registered in Schema Registry
- If schema changes, schema ID changes, so safe to cache indefinitely
- New schemas automatically cached on first use
- No need for invalidation or TTL
Additional Optimizations:
- Protobuf and JSON Schema also cached (same pattern)
- Prevents future bottlenecks as more schema formats are used
- Consistent caching approach across all schema types
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement under load
Priority: HIGH - Eliminates major performance bottleneck in gateway schema path
* fmt
* fix Node ID Mismatch, and clean up log messages
* clean up
* Apply client-specified timeout to context
* Add comprehensive debug logging for Noop record processing
- Track Produce v2+ request reception with API version and request body size
- Log acks setting, timeout, and topic/partition information
- Log record count from parseRecordSet and any parse errors
- **CRITICAL**: Log when recordCount=0 fallback extraction attempts
- Log record extraction with NULL value detection (Noop records)
- Log record key in hex for Noop key identification
- Track each record being published to broker
- Log offset assigned by broker for each record
- Log final response with offset and error code
This enables root cause analysis of Schema Registry Noop record timeout issue.
* fix: Remove context timeout propagation from produce that breaks consumer init
Commit
|
4 months ago |