Branch:
master
add-ec-vacuum
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/go_modules/github.com/parquet-go/parquet-go-0.26.3
dependabot/go_modules/github.com/schollz/progressbar/v3-3.19.0
dependabot/go_modules/gocloud.dev/pubsub/natspubsub-0.44.0
dependabot/go_modules/google.golang.org/api-0.258.0
dependabot/go_modules/modernc.org/sqlite-1.42.2
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
feature/mini-port-detection
feature/modernize-s3-tests
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-auth-bypass-7912
fix-bucket-name-case-7910
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-s3-object-tagging-issue-7589
fix-versioning-listing-only
ftp
gh-pages
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
pr-7412
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
12394 Commits (master)
808205e38f | s3: implement Bucket Owner Enforced for object ownership (#7913) | 13 hours ago

* s3: implement Bucket Owner Enforced for object ownership
  Objects uploaded by service accounts (or any user) are now owned by the bucket owner when the bucket has the BucketOwnerEnforced ownership policy (the AWS default since April 2023). This provides a more intuitive ownership model: users expect objects created by their service accounts to be owned by themselves.
  - Modified setObjectOwnerFromRequest to check the bucket's ObjectOwnership
  - When BucketOwnerEnforced: use the bucket owner's account ID
  - When ObjectWriter: use the uploader's account ID (backward compatible)
* s3: add nil check and fix ownership logic hole
  - Add a nil check for bucketRegistry before calling GetBucketMetadata
  - Fix a logic hole where objects could be created without an owner when BucketOwnerEnforced is set but the bucket owner is nil
  - Refactor to ensure objects always have an owner by falling back to the uploader when the bucket owner is unavailable
  - Improve logging to distinguish between the different fallback scenarios
  Addresses code review feedback from Gemini on PR #7913.
* s3: add comprehensive tests for object ownership logic
  Add unit tests for setObjectOwnerFromRequest covering:
  - BucketOwnerEnforced: uses bucket owner
  - ObjectWriter: uses uploader
  - BucketOwnerPreferred: uses uploader
  - Nil owner fallback scenarios
  - Bucket metadata errors
  - Nil bucketRegistry
  - Empty account ID handling
  All 8 test cases pass, verifying correct ownership assignment in all scenarios, including edge cases.
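The ownership decision described above can be sketched as a small Go function. This is an illustrative model only: the function name, the string constants, and the parameters are invented here, while the real setObjectOwnerFromRequest operates on SeaweedFS request and bucket-metadata types.

```go
package main

import "fmt"

// Hypothetical constants modeling the two ObjectOwnership modes named in the
// commit message (the real values live in SeaweedFS's s3api package).
const (
	ownershipBucketOwnerEnforced = "BucketOwnerEnforced"
	ownershipObjectWriter        = "ObjectWriter"
)

// resolveObjectOwner picks the account that should own a newly created object.
func resolveObjectOwner(ownership, bucketOwnerID, uploaderID string) string {
	if ownership == ownershipBucketOwnerEnforced && bucketOwnerID != "" {
		// BucketOwnerEnforced: the bucket owner owns every object.
		return bucketOwnerID
	}
	// ObjectWriter, plus the fallback when the bucket owner is unavailable:
	// the uploader owns the object, so it is never created without an owner.
	return uploaderID
}

func main() {
	fmt.Println(resolveObjectOwner(ownershipBucketOwnerEnforced, "bucket-owner", "svc-account"))
	fmt.Println(resolveObjectOwner(ownershipObjectWriter, "bucket-owner", "svc-account"))
	fmt.Println(resolveObjectOwner(ownershipBucketOwnerEnforced, "", "svc-account"))
}
```

Note how the empty-owner case falls through to the uploader, matching the "objects always have an owner" fix from the review round.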
b6d99f1c9e | Admin: Add Service Account Management UI (#7902) | 14 hours ago

* admin: add Service Account management UI
  Add an admin UI for managing service accounts. New files: handlers/service_account_handlers.go (HTTP handlers), dash/service_account_management.go (CRUD operations), view/app/service_accounts.templ (UI template). Changes: dash/types.go (add ServiceAccount and related types), handlers/admin_handlers.go (register routes and handlers), view/layout/layout.templ (add sidebar navigation link). Service accounts are stored as special identities with an "sa:" prefix in their name, using the ABIA access key prefix. They can be created, listed, enabled/disabled, and deleted through the admin UI. Features: create service accounts linked to parent users; view and manage service account status; delete service accounts; service accounts inherit parent user permissions. Note: STS configuration is read-only (configured via a JSON file); full STS integration requires changes from PR #7901.
* admin: use dropdown for parent user selection
  Change the Parent User field from a text input to a dropdown when creating a service account. The dropdown is populated with all existing Object Store users. Changes: add an AvailableUsers field to the ServiceAccountsData type; populate available users in the getServiceAccountsData handler; update the template to use a <select> element with user options.
* admin: show secret access key on service account creation
  Display both the access key and the secret access key when creating a service account, with proper AWS CLI usage instructions. Changes: add a SecretAccessKey field to the ServiceAccount type (only populated on creation); return the secret key from CreateServiceAccount; add a credentials modal with copy-to-clipboard buttons; show an AWS CLI usage example with the actual credentials. The modal is non-dismissible until the user confirms they saved the credentials. For security, the secret key is only shown once, during creation; afterwards only the access key ID is visible in the list.
* admin: address code review comments for service account management
  Persist creation dates in identity actions (createdAt:timestamp); replace magic-number slicing with len(accessKeyPrefix); add bounds checking after strings.SplitN; use the accessKeyPrefix constant instead of a hardcoded "ABIA". Creation dates are now stored as actions (e.g., "createdAt:1735473600") and persist across restarts. Helper functions getCreationDate() and setCreationDate() manage the timestamp storage. Addresses review comments from gemini-code-assist[bot] and coderabbitai[bot].
* admin: fix XSS vulnerabilities in service account details
  Replace innerHTML template literals with safe DOM creation. The createSADetailsContent function now uses createElement and textContent to prevent XSS attacks from malicious service account data (id, description, parent_user, etc.). Also added a try-catch around date parsing to prevent exceptions on malformed input. Addresses security review comments from coderabbitai[bot].
* admin: add context.Context to service account management methods
  Addressed PR #7902 review feedback: all service account management methods now accept a context.Context as their first parameter to enable cancellation, deadlines, and tracing; removed all context.Background() calls; updated handlers to pass c.Request.Context() from HTTP requests. Methods updated: GetServiceAccounts, GetServiceAccountDetails, CreateServiceAccount, UpdateServiceAccount, DeleteServiceAccount, GetServiceAccountByAccessKey. Note: creation-date persistence was already implemented using the createdAt:<timestamp> action pattern suggested in the review.
* admin: fix render flow to prevent partial HTML writes
  Fixed the ShowServiceAccounts handler to render the template to an in-memory buffer before writing to the response. This prevents partial HTML writes followed by JSON error responses, which would produce invalid mixed content. Changes: render to a bytes.Buffer first; only write to c.Writer if rendering succeeds; use c.AbortWithStatus on error instead of attempting a JSON response; prevent any additional header/body writes after a partial write.
* admin: fix error handling, date validation, and event parameters
  Addressed multiple code review issues. 1. Proper 404 vs 500 error handling: added an ErrServiceAccountNotFound sentinel error; GetServiceAccountDetails now wraps errors with the sentinel; the handler uses errors.Is() to distinguish not-found from internal errors, returning 404 only for missing resources and 500 for other errors, and logging internal errors before returning 500. 2. Date validation in JavaScript: validate the expiration date before using it; check !isNaN(date.getTime()) to ensure the date is valid; return a validation error otherwise, preventing invalid Date construction. 3. Event parameter handling: copyToClipboard now accepts an event parameter and the onclick attributes pass the event object, avoiding reliance on window.event for more explicit and reliable event handling.
* admin: replace deprecated execCommand with Clipboard API
  Replaced the deprecated document.execCommand('copy') with the modern navigator.clipboard.writeText() API for better security and UX. Changes: made copyToClipboard async to support the Clipboard API; use navigator.clipboard.writeText() as the primary method; fall back to execCommand if the Clipboard API fails (older browsers); added a console warning when the fallback is used; maintains the same visual feedback behavior.
* admin: improve security and UX for error handling
  Addressed code review feedback. 1. Security: remove sensitive error details from API responses. CreateServiceAccount, UpdateServiceAccount, and DeleteServiceAccount now return generic error messages; detailed errors are still logged server-side via glog.Errorf(), preventing exposure of internal system details to clients. 2. UX: replace alert() with Bootstrap toast notifications via a showToast() function using Bootstrap 5 toasts: non-blocking, auto-dismiss after 5 seconds, proper HTML escaping to prevent XSS, a toast container positioned at top-right, and success (green) and error (red) variants.
* admin: complete error handling improvements
  Addressed the remaining security review feedback. 1. GetServiceAccounts: remove error details from the response; log errors server-side via glog.Errorf() and return a generic message to the client. 2. UpdateServiceAccount and DeleteServiceAccount: wrap not-found errors with the ErrServiceAccountNotFound sentinel, enabling the proper 404 vs 500 distinction in handlers. 3. Update and Delete handlers: added errors.Is() checks for ErrServiceAccountNotFound; return 404 for missing resources and 500 (with logging) for internal errors, consistent with GetServiceAccountDetails. All handlers now properly distinguish not-found (404) from internal errors (500) and never expose sensitive error details to clients.
* admin: implement expiration support and improve code quality
  Addressed the final code review feedback. 1. Expiration support: added expiration helper functions (getExpiration, setExpiration); implemented expiration in CreateServiceAccount and UpdateServiceAccount; added an Expiration field to the ServiceAccount struct; parse and validate RFC3339 expiration dates. 2. Constants for magic strings: added StatusActive, StatusInactive, disabledAction, and serviceAccountPrefix constants and replaced all magic strings throughout, improving maintainability and preventing typos. 3. Helper function to reduce duplication: created identityToServiceAccount(), which reduces duplication across the Get/Update/Delete methods and centralizes the ServiceAccount struct-building logic. 4. Fixed the time.Now() fallback: changed from time.Now() to time.Time{} for legacy accounts, preventing the creation date from changing on each fetch; the UI can display the zero time as "N/A" or blank.
* admin: fix StatusActive reference in handler
  Use dash.StatusActive to properly reference the constant from the dash package.
* admin: regenerate templ files
  Regenerated all templ Go files after the recent template changes. The AWS CLI usage example already uses proper <pre><code> formatting, which preserves line breaks for readability.
* admin: add explicit white-space CSS to AWS CLI example
  Added style="white-space: pre-wrap;" to the pre tag to ensure line breaks are preserved and displayed correctly in all browsers. This forces the browser to respect the newlines in the code block.
* admin: fix AWS CLI example to display on separate lines
  Replaced the pre/code block with individual div elements, one per line, so each command displays on its own line regardless of how templ processes whitespace. Each line is now a separate div with font-monospace styling for a code appearance.
* make
* admin: filter service accounts from parent user dropdown
  Service accounts should not appear as selectable parent users when creating new service accounts. Added a filter to GetObjectStoreUsers() to skip identities with the "sa:" prefix, ensuring only actual IAM users are shown in the parent-user dropdown.
* admin: address code review feedback
  Use constants for magic strings in service account management; add the Expiration field to service account responses; add nil checks and context propagation; improve the templates (date validation, async clipboard, toast notifications).
* Update service_accounts_templ.go
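The "render to a buffer first" fix above is a generally useful pattern. A minimal sketch with only Go's standard library follows; the function and template here are invented for illustration, whereas the real handler uses Gin (c.Writer, c.AbortWithStatus) and a templ view.

```go
package main

import (
	"bytes"
	"fmt"
	"html/template"
)

// A stand-in template; the real code renders a templ component.
var page = template.Must(template.New("page").Parse("<h1>{{.Title}}</h1>"))

// renderPage renders into an in-memory buffer first. If the template fails
// halfway through, nothing has been written to the client yet, so the caller
// can still send a clean error status instead of a partial HTML body
// followed by a JSON error (invalid mixed content).
func renderPage(data any) (string, error) {
	var buf bytes.Buffer
	if err := page.Execute(&buf, data); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	html, err := renderPage(struct{ Title string }{"Service Accounts"})
	fmt.Println(html, err)
}
```

In the handler, the buffer's contents are written to the response only on the success path; on error the request is aborted with a status code and no body.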
ae9a943ef6 | IAM: Add Service Account Support (#7744) (#7901) | 17 hours ago

* iam: add ServiceAccount protobuf schema
  Add a ServiceAccount message type to iam.proto with support for: a unique ID and parent-user linkage; an optional expiration timestamp; separate credentials (access key/secret); action restrictions (a subset of the parent's); enable/disable status. This is the first step toward implementing issue #7744 (IAM Service Account Support).
* iam: add service account response types
  Add IAM API response types for service account operations: a ServiceAccountInfo struct for marshaling account details, plus CreateServiceAccountResponse, DeleteServiceAccountResponse, ListServiceAccountsResponse, GetServiceAccountResponse, and UpdateServiceAccountResponse. Also add type aliases in the iamapi package for backwards compatibility. Part of issue #7744.
* iam: implement service account API handlers
  Add CRUD operations for service accounts: CreateServiceAccount (creates a service account with the ABIA key prefix), DeleteServiceAccount (removes the service account and parent linkage), ListServiceAccounts (lists all, or filtered by parent user), GetServiceAccount (retrieves service account details), UpdateServiceAccount (modifies status, description, and expiration). Service accounts inherit the parent user's actions by default and support optional expiration timestamps. Part of issue #7744.
* sts: add AssumeRoleWithWebIdentity HTTP endpoint
  Add an STS API HTTP endpoint for AWS SDK compatibility: create s3api_sts.go with HTTP handlers matching the AWS STS spec; support the AssumeRoleWithWebIdentity action with a JWT token; return an XML response with temporary credentials (AccessKeyId, SecretAccessKey, SessionToken) matching the AWS format; register the STS route at POST /?Action=AssumeRoleWithWebIdentity. This enables AWS SDKs (boto3, the AWS CLI, etc.) to obtain temporary S3 credentials using OIDC/JWT tokens. Part of issue #7744.
* test: add service account and STS integration tests
  Add integration tests for the new IAM features. s3_service_account_test.go: TestServiceAccountLifecycle (create, get, list, update, delete) and TestServiceAccountValidation (error handling for missing params). s3_sts_test.go: TestAssumeRoleWithWebIdentityValidation (parameter validation) and TestAssumeRoleWithWebIdentityWithMockJWT (JWT token handling). Tests skip gracefully when SeaweedFS is not running or when the IAM features are not configured. Part of issue #7744.
* iam: address code review comments
  Add constants for service account ID and key lengths; use strconv.ParseInt instead of fmt.Sscanf for better error handling; allow clearing descriptions by checking key existence in url.Values; replace the magic numbers (12, 20, 40) with named constants. Addresses review comments from gemini-code-assist[bot].
* test: add proper error handling in service account tests
  Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal to prevent silent failures and ensure test reliability. Addresses a review comment from gemini-code-assist[bot].
* test: add proper error handling in STS tests
  Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal to prevent silent failures and ensure test reliability; repeated this fix throughout the file. Addresses a review comment from gemini-code-assist[bot] in PR #7901.
* iam: address additional code review comments
  Specific error-code mapping for STS service errors; distinguish between Sender and Receiver error types in STS responses; add nil checks for credentials in List/GetServiceAccount; validate that the expiration date is in the future; improve integration-test error messages (include the response body); add a credential-verification step in the service account tests. Addresses the remaining review comments from gemini-code-assist[bot] across multiple files.
* iam: fix shared slice reference in service account creation
  Copy the parent's actions into an independent slice for the service account instead of sharing the underlying array. This prevents unexpected mutations when the parent's actions are modified later. Addresses a review comment from coderabbitai[bot] in PR #7901.
* iam: remove duplicate unused constant
  Removed the redundant iamServiceAccountKeyPrefix, since ServiceAccountKeyPrefix is already defined and used.
* sts: document limitation of string-based error mapping
  Added a TODO comment explaining that the current string-based error-mapping approach is fragile and should be replaced with typed errors from the STS service in a future refactoring. This addresses the architectural concern raised in code review while deferring the implementation to a separate PR, avoiding scope creep in the current feature.
* iam: fix remaining review issues
  Add future-date validation for expiration in UpdateServiceAccount; reorder tests so credential verification happens before deletion; fix a compilation error by using the correct JWT generation methods. Addresses the final review comments from coderabbitai[bot].
* iam: fix service account access key length
  The access key IDs were incorrectly generated with 24 characters instead of the AWS-standard 20: 20 random characters were generated and then the 4-character ABIA prefix was prepended. Fixed by subtracting the prefix length from AccessKeyLength, so the final key is ABIA (4 chars) + random (16 chars) = 20 chars total. This ensures compatibility with S3 clients that validate key length.
* test: add comprehensive service account security tests
  Added comprehensive integration tests: TestServiceAccountS3Access (verify SA credentials work for S3 operations), TestServiceAccountExpiration (expiration date validation and enforcement), TestServiceAccountInheritedPermissions (verify the parent-child relationship), TestServiceAccountAccessKeyFormat (validate the AWS-compatible key format: ABIA prefix, 20-character length). These tests ensure SeaweedFS service accounts are compatible with AWS conventions and provide robust security coverage.
* iam: remove unused UserAccessKeyPrefix constant
  Code cleanup to remove unused constants.
* iam: remove unused iamCommonResponse type alias
  Code cleanup to remove unused type aliases.
* iam: restore and use UserAccessKeyPrefix constant
  Restored the UserAccessKeyPrefix constant and updated the s3api tests to use it instead of hardcoded strings, for better maintainability and consistency.
* test: improve error handling in service account security tests
  Added explicit error checking for io.ReadAll and xml.Unmarshal in TestServiceAccountExpiration so failures are reported correctly and cleanup is performed only when appropriate. Also added logging for failed responses.
* test: use t.Cleanup for reliable resource cleanup
  Replaced defer with t.Cleanup so service account cleanup runs even when require.NoError fails. Also switched from manual error checking to require.NoError for more idiomatic testify usage.
* iam: add CreatedBy field and optimize identity lookups
  Added a createdBy parameter to CreateServiceAccount to track who created each service account; extract the creator identity from the request context using GetIdentityNameFromContext; populate the created_by field in the ServiceAccount protobuf; added a findIdentityByName helper and replaced nested loops with O(n) helper calls in CreateServiceAccount and DeleteServiceAccount. Addresses code review feedback for better auditing and performance.
* iam: prevent user deletion when service accounts exist
  Following AWS IAM behavior, prevent deletion of users that have active service accounts. This forces explicit cleanup and prevents orphaned service accounts with invalid ParentUser references: users must delete all associated service accounts before deleting the parent user, providing safer resource management.
* sts: enhance TODO with typed error implementation guidance
  Updated the TODO comment with a detailed implementation approach for replacing string-based error matching with typed errors and errors.Is(), providing a clear roadmap for a follow-up PR to improve error-handling robustness and maintainability.
* iam: add operational limits for service account creation
  Added AWS IAM-compatible safeguards to prevent resource exhaustion: a maximum of 100 service accounts per user (LimitExceededException) and a maximum description length of 1000 characters (InvalidInputException). These limits prevent accidental or malicious resource exhaustion without impacting legitimate use cases.
* iam: add missing operational limit constants
  Added the MaxServiceAccountsPerUser and MaxDescriptionLength constants that were referenced in the previous commit but not defined.
* iam: enforce service account expiration during authentication
  CRITICAL SECURITY FIX: expired service account credentials were not being rejected during authentication, allowing continued access after expiration. Changes: added an Expiration field to the Credential struct; populate the expiration when loading service accounts from configuration; check expiration in all authentication paths (V2 and V4 signatures); return ErrExpiredToken for expired credentials. This ensures expired service accounts are properly rejected at authentication time, matching AWS IAM behavior and preventing unauthorized access.
* iam: fix error code for expired service account credentials
  Use ErrAccessDenied instead of the non-existent ErrExpiredToken for expired service account credentials. This provides an appropriate access denial while maintaining AWS-compatible error responses.
* iam: fix remaining ErrExpiredToken references
  Replace all remaining instances of the non-existent ErrExpiredToken with ErrAccessDenied for expired service account credentials.
* iam: apply AWS-standard key format to user access keys
  Updated CreateAccessKey to generate AWS-standard 20-character access keys with an AKIA prefix for regular users, matching the format used for service accounts and ensuring consistency across all access key types. Access keys: AKIA + 16 random chars = 20 total (was 21 chars with no prefix); secret keys: 40 random chars (was 42; now matches the AWS standard). Uses the AccessKeyLength and UserAccessKeyPrefix constants.
* sts: replace fragile string-based error matching with typed errors
  Implemented robust error handling using typed errors and errors.Is() instead of fragile strings.Contains() matching. This decouples the HTTP layer from service implementation details and prevents errors from being miscategorized if error messages change. Changes: added typed error variables to weed/iam/sts/constants.go (ErrTypedTokenExpired, ErrTypedInvalidToken, ErrTypedInvalidIssuer, ErrTypedInvalidAudience, ErrTypedMissingClaims); updated the STS service to wrap provider authentication errors with the typed errors; replaced strings.Contains() with errors.Is() in the HTTP layer; removed the TODO comment now that the improvement is implemented.
* sts: eliminate all string-based error matching with provider-level typed errors
  Completed the typed-error work by adding provider-level typed errors and updating the provider implementations to return them, eliminating all fragile string matching throughout the error-handling stack. Changes: added typed error definitions to weed/iam/providers/errors.go (ErrProviderTokenExpired, ErrProviderInvalidToken, ErrProviderInvalidIssuer, ErrProviderInvalidAudience, ErrProviderMissingClaims); updated the OIDC provider to wrap JWT validation errors with the typed provider errors; replaced strings.Contains() with errors.Is() in the STS service's error mapping. The complete error chain (provider -> STS -> HTTP layer) now uses errors.Is(), giving reliable, type-safe, order-independent error classification that won't break with message changes.
* oidc: use jwt.ErrTokenExpired instead of string matching
  Replaced the last remaining string-based error check with the JWT library's exported typed error, making error detection independent of error message content and robust against library updates. Changed from strings.Contains(errMsg, "expired") to errors.Is(err, jwt.ErrTokenExpired). This completes the elimination of all string-based error matching in the authentication stack.
* iam: add description length validation to UpdateServiceAccount
  Fixed an inconsistency where UpdateServiceAccount didn't validate the description length against MaxDescriptionLength, allowing the operational limit to be bypassed during updates. Updated descriptions are now validated against the 1000-character limit, matching CreateServiceAccount.
* iam: refactor expiration check into helper method
  Extracted the duplicated credential-expiration check into a helper to reduce duplication and improve maintainability: added Credential.isCredentialExpired() and replaced 5 inline expiration checks across auth_signature_v2.go and auth_signature_v4.go.
* iam: address critical Copilot security and consistency feedback
  Fixed three critical issues identified by Copilot review. 1. Security: skip disabled service accounts during credential loading, so disabled accounts can no longer authenticate. 2. Added DurationSeconds validation for STS AssumeRoleWithWebIdentity, enforcing the AWS-compatible range of 900-43200 seconds (15 minutes to 12 hours) and returning a proper error for out-of-range values. 3. Fixed expiration-update consistency in UpdateServiceAccount: added a key-existence check like the Description field, allowing explicit clearing of the expiration via an empty string and distinguishing "not updating" from "clearing".
* sts: remove unused durationSecondsStr variable
  Fixed a build error from an unused variable left over after refactoring the duration parsing.
* iam: address remaining Copilot feedback and remove dead code
  1. Removed the unused getPermission() method (dead code; it was defined but never called). 2. Improved slice-modification safety in DeleteServiceAccount by replacing append-with-slice operations with a filter pattern, avoiding mutation of the slice during iteration. 3. Fixed route registration order: the STS route is now registered before the IAM route, so the more specific route (with a query parameter) is matched first and the IAM route no longer intercepts STS requests.
* iam: improve expiration validation and test cleanup robustness
  1. Made expiration validation more explicit: added an explicit check for negative values and a comment clarifying that 0 is allowed (to clear the expiration). 2. Fixed the test cleanup order in s3_service_account_test.go: created service accounts are tracked in a slice and deleted before the parent user, preventing a DeleteConflictException during cleanup and keeping cleanup robust even if a test fails mid-execution. Note: s3_service_account_security_test.go already had the correct cleanup order due to LIFO defer execution.
* test: remove redundant variable assignments
  Removed duplicate assignments of createdSAId, createdAccessKeyId, and createdSecretAccessKey on lines 148-150 that were already assigned on lines 132-134.
288ba5fec8 | mount: let filer handle chunk deletion decision (#7900) | 2 days ago

* mount: let filer handle chunk deletion decision
  Remove the chunk-deletion decision from the FUSE mount's Unlink operation. Previously, the mount decided whether to delete chunks based on its locally cached entry's HardLinkCounter, which could be stale. Now it always passes isDeleteData=true and lets the filer make the authoritative decision based on its own data. This prevents potential inconsistencies when the FUSE mount's cached entry is stale, when race conditions occur between multiple mounts, or when direct filer operations change hard link counts.
* filer: check hard link counter before deleting chunks
  When deleting an entry, only delete the underlying chunks if (1) it is not a hard link, or (2) it is the last hard link (counter <= 1). This protects against data loss when a client (such as a FUSE mount) requests chunk deletion for a file that has multiple hard links.
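The filer-side rule reduces to a small predicate. A minimal sketch, with illustrative parameter names; the real check inspects the entry's hard-link metadata inside the filer:

```go
package main

import "fmt"

// shouldDeleteChunks captures the rule from the commit message: chunks are
// deleted only when the entry is not a hard link at all, or when it is the
// last remaining hard link (counter <= 1).
func shouldDeleteChunks(isHardLink bool, hardLinkCounter int32) bool {
	return !isHardLink || hardLinkCounter <= 1
}

func main() {
	fmt.Println(shouldDeleteChunks(false, 0)) // regular file: delete chunks
	fmt.Println(shouldDeleteChunks(true, 3))  // other links remain: keep chunks
	fmt.Println(shouldDeleteChunks(true, 1))  // last link: delete chunks
}
```

Because this predicate runs on the filer against its own authoritative counter, a mount with a stale cached counter can no longer cause either premature deletion or leaked chunks.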
29a245052e | tidy | 2 days ago
6b98b52acc | Fix reporting of EC shard sizes from nodes to masters. (#7835)

SeaweedFS tracks EC shard sizes in its topology data structures, but this information is never relayed to the master servers. The end result is that commands reporting disk usage, such as `volume.list` and `cluster.status`, yield incorrect figures when EC shards are present.
As an example, for a simple 5-node test cluster, before the fix...
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[1 5]
Disk hdd total size:88967096 file_count:172
DataNode 192.168.10.111:9001 total size:88967096 file_count:172
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[0 4]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9002 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[2 6]
Disk hdd total size:77267536 file_count:166
DataNode 192.168.10.111:9003 total size:77267536 file_count:166
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[3 7]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9004 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[8 9 10 11 12 13]
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:498703896 file_count:1014
DataCenter DefaultDataCenter total size:498703896 file_count:1014
total size:498703896 file_count:1014
```
...and after:
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[1 5 9] sizes:[1:8.00 MiB 5:8.00 MiB 9:8.00 MiB] total:24.00 MiB
Disk hdd total size:81761800 file_count:161
DataNode 192.168.10.111:9001 total size:81761800 file_count:161
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[11 12 13] sizes:[11:8.00 MiB 12:8.00 MiB 13:8.00 MiB] total:24.00 MiB
Disk hdd total size:88678712 file_count:170
DataNode 192.168.10.111:9002 total size:88678712 file_count:170
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[0 4 8] sizes:[0:8.00 MiB 4:8.00 MiB 8:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9003 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[2 6 10] sizes:[2:8.00 MiB 6:8.00 MiB 10:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9004 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[3 7] sizes:[3:8.00 MiB 7:8.00 MiB] total:16.00 MiB
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:511321536 file_count:993
DataCenter DefaultDataCenter total size:511321536 file_count:993
total size:511321536 file_count:993
```
|
2 days ago |
|
|
2b529e310d
|
s3: Add SOSAPI support for Veeam integration (#7899)
* s3api: Add SOSAPI core implementation and tests

Implement Smart Object Storage API (SOSAPI) support for Veeam integration.
- Add s3api_sosapi.go with XML structures and handlers for system.xml and capacity.xml
- Implement virtual object detection and dynamic XML generation
- Add capacity retrieval via gRPC (to be optimized in follow-up)
- Include comprehensive unit tests covering detection, XML generation, and edge cases

This enables Veeam Backup & Replication to discover SeaweedFS capabilities and capacity.

* s3api: Integrate SOSAPI handlers into GetObject and HeadObject

Add early interception for SOSAPI virtual objects in GetObjectHandler and HeadObjectHandler.
- Check for SOSAPI objects (.system-*/system.xml, .system-*/capacity.xml) before normal processing
- Delegate to handleSOSAPIGetObject and handleSOSAPIHeadObject when detected
- Ensures virtual objects are served without hitting storage layer

* s3api: Allow anonymous access to SOSAPI virtual objects

Enable discovery of SOSAPI capabilities without requiring credentials.
- Modify AuthWithPublicRead to bypass auth for SOSAPI objects if bucket exists
- Supports Veeam's initial discovery phase before full IAM setup
- Validates bucket existence to prevent information disclosure

* s3api: Fix SOSAPI capacity retrieval to use proper master connection

Fix gRPC error by connecting directly to master servers instead of through filer.
- Use pb.WithOneOfGrpcMasterClients with s3a.option.Masters
- Matches pattern used in bucket_size_metrics.go
- Resolves "unknown service master_pb.Seaweed" error
- Gracefully handles missing master configuration

* Merge origin/master and implement robust SOSAPI capacity logic
- Resolved merge conflict in s3api_sosapi.go
- Replaced global Statistics RPC with VolumeList (topology) for accurate bucket-specific 'Used' calculation
- Added bucket quota support (report quota as Capacity if set)
- Implemented cluster-wide capacity calculation from topology when no quota
- Added unit tests for topology capacity and usage calculations

* s3api: Remove anonymous access to SOSAPI virtual objects

Reverts the implicit public access for system.xml and capacity.xml. Requests to these objects now require standard S3 authentication, unless the bucket has a public-read policy.

* s3api: Refactor SOSAPI handlers to use http.ServeContent
- Consolidate handleSOSAPIGetObject and handleSOSAPIHeadObject into serveSOSAPI
- Use http.ServeContent for standard Range, HEAD, and ETag handling
- Remove manual range request handler and reduce code duplication

* s3api: Unify SOSAPI request handling
- Replaced handleSOSAPIGetObject and handleSOSAPIHeadObject with single HandleSOSAPI function
- Updated call sites in s3api_object_handlers.go
- Simplifies logic and ensures consistent handling for both GET and HEAD requests via http.ServeContent

* s3api: Restore distinct SOSAPI GET/HEAD handlers
- Reverted unified handler to enforce distinct behavior for GET and HEAD
- GET: Supports Range requests via http.ServeContent
- HEAD: Explicitly ignores Range requests (matches MinIO behavior) and writes headers only

* s3api: Refactor SOSAPI handlers to eliminate duplication
- Extracted shared content generation logic into generateSOSAPIContent helper
- handleSOSAPIGetObject: Uses http.ServeContent (supports Range requests)
- handleSOSAPIHeadObject: Manually sets headers (no Range, no body)
- Maintains distinct behavior while following DRY principle

* s3api: Remove low-value SOSAPI tests

Removed tests that validate standard library behavior or trivial constant checks:
- TestSOSAPIConstants (string prefix/suffix checks)
- TestSystemInfoXMLRootElement (redundant with TestGenerateSystemXML)
- TestSOSAPIXMLContentType (tests httptest, not our code)
- TestHTTPTimeFormat (tests standard library)
- TestCapacityInfoXMLStruct (tests Go's XML marshaling)

Kept tests that validate actual business logic and edge cases.

* s3api: Use consistent S3-compliant error responses in SOSAPI

Replaced http.Error() with s3err.WriteErrorResponse() for internal errors to ensure all SOSAPI errors return S3-compliant XML instead of plain text.

* s3api: Return error when no masters configured for SOSAPI capacity

Changed getCapacityInfo to return an error instead of silently returning zero capacity when no master servers are configured. This helps surface configuration issues rather than masking them.

* s3api: Use collection name with FilerGroup prefix for SOSAPI capacity

Fixed collectBucketUsageFromTopology to use s3a.getCollectionName(bucket) instead of raw bucket name. This ensures collection comparisons match actual volume collection names when FilerGroup prefix is configured.

* s3api: Apply PR review feedback for SOSAPI
- Renamed `bucket` parameter to `collectionName` in collectBucketUsageFromTopology for clarity
- Changed error checks from `==` to `errors.Is()` for better wrapped error handling
- Added `errors` import

* s3api: Avoid variable shadowing in SOSAPI capacity retrieval

Refactored `getCapacityInfo` to use distinct variable names for errors to improve code clarity and avoid unintentional shadowing of the return parameter. |
2 days ago |
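The virtual-object detection described above (`.system-*/system.xml`, `.system-*/capacity.xml`) could look roughly like this. The helper name and exact matching rules are assumptions for illustration, not the real s3api_sosapi.go code:

```go
package main

import (
	"fmt"
	"strings"
)

// isSOSAPIObject sketches the detection from the commit message: SOSAPI
// virtual objects live directly under a ".system-*" directory and are
// named system.xml or capacity.xml.
func isSOSAPIObject(objectKey string) bool {
	key := strings.TrimPrefix(objectKey, "/")
	slash := strings.Index(key, "/")
	if slash < 0 {
		return false
	}
	dir, name := key[:slash], key[slash+1:]
	if !strings.HasPrefix(dir, ".system-") {
		return false
	}
	return name == "system.xml" || name == "capacity.xml"
}

func main() {
	// ".system-demo" is an illustrative directory name.
	fmt.Println(isSOSAPIObject(".system-demo/system.xml"))
	fmt.Println(isSOSAPIObject("photos/system.xml"))
}
```

In GetObjectHandler/HeadObjectHandler, a check like this runs before any storage lookup, so the XML is generated dynamically and never touches the volume layer.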
|
|
e8baeb3616 |
s3api: Allow anonymous access to SOSAPI virtual objects
Enable discovery of SOSAPI capabilities without requiring credentials. - Modify AuthWithPublicRead to bypass auth for SOSAPI objects if bucket exists - Supports Veeam's initial discovery phase before full IAM setup - Validates bucket existence to prevent information disclosure |
2 days ago |
|
|
a757ef77b1 |
s3api: Integrate SOSAPI handlers into GetObject and HeadObject
Add early interception for SOSAPI virtual objects in GetObjectHandler and HeadObjectHandler. - Check for SOSAPI objects (.system-*/system.xml, .system-*/capacity.xml) before normal processing - Delegate to handleSOSAPIGetObject and handleSOSAPIHeadObject when detected - Ensures virtual objects are served without hitting storage layer |
2 days ago |
|
|
fba67ce0f0 |
s3api: Add SOSAPI core implementation and tests
Implement Smart Object Storage API (SOSAPI) support for Veeam integration. - Add s3api_sosapi.go with XML structures and handlers for system.xml and capacity.xml - Implement virtual object detection and dynamic XML generation - Add capacity retrieval via gRPC (to be optimized in follow-up) - Include comprehensive unit tests covering detection, XML generation, and edge cases This enables Veeam Backup & Replication to discover SeaweedFS capabilities and capacity. |
2 days ago |
|
|
56ebd9236e
|
Fix: relax gRPC server keepalive enforcement to 20s (#7898)
* Fix: relax gRPC server keepalive enforcement to 20s to prevent 'too_many_pings' errors
* Refactor: introduce GrpcKeepAliveMinimumTime constant for clarity |
2 days ago |
|
|
f88b9a643d |
add one example
|
2 days ago |
|
|
694218dc13 |
Merge branch 'master' of https://github.com/seaweedfs/seaweedfs
|
3 days ago |
|
|
a3c090e606 |
adjust layout
|
3 days ago |
|
|
915a7d4a54
|
feat: Add probes to worker service (#7896)
* feat: Add probes to worker service * feat: Add probes to worker service * Merge branch 'master' into pr/7896 * refactor --------- Co-authored-by: Chris Lu <chris.lu@gmail.com> |
3 days ago |
|
|
ef20873c31
|
S3: Fix Content-Encoding header not preserved (#7894) (#7895)
* S3: Fix Content-Encoding header not preserved (#7894)

The Content-Encoding header was not being returned in S3 GET/HEAD responses because it wasn't being stored in metadata during PUT operations.

Root cause: The putToFiler function only stored a hardcoded list of standard HTTP headers (Cache-Control, Expires, Content-Disposition) but was missing Content-Encoding and Content-Language.

Fix: Added Content-Encoding and Content-Language to the list of standard headers that are stored in entry.Extended during PUT operations. This matches the behavior of ParseS3Metadata (used for multipart uploads) and ensures consistency across all S3 operations.

Fixes #7894

* Update s3api_object_handlers_put.go |
3 days ago |
|
|
6de6061ce9
|
admin: add cursor-based pagination to file browser (#7891)
* adjust menu items
* admin: add cursor-based pagination to file browser
  - Implement cursor-based pagination using lastFileName parameter
  - Add customizable page size selector (20/50/100/200 entries)
  - Add compact pagination controls in header and footer
  - Remove summary cards for cleaner UI
  - Make directory names clickable to return to first page
  - Support forward-only navigation (Next button)
  - Preserve cursor position when changing page size
  - Remove sorting to align with filer's storage order approach
* Update file_browser_templ.go
* admin: remove directory icons from breadcrumbs
* Update file_browser_templ.go
* admin: address PR comments
  - Fix fragile EOF check: use io.EOF instead of string comparison
  - Cap page size at 200 to prevent potential DoS
  - Remove unused helper functions from template
  - Use safer templ script for page size selector to prevent XSS
* admin: cleanup redundant first button
* Update file_browser_templ.go
* admin: remove entry counting logic
* admin: remove unused variables in file browser data
* admin: remove unused logic for FirstFileName and HasPrevPage
* admin: remove unused TotalEntries and TotalSize fields
* Update file_browser_data.go |
3 days ago |
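Cursor-based pagination with a lastFileName parameter, as described in the commit above, can be sketched like this; the function and data are illustrative, not the admin UI's actual code:

```go
package main

import (
	"fmt"
	"sort"
)

// listPage returns the next page of entries after the cursor
// lastFileName, plus the cursor for the following page. An empty
// cursor means "start from the beginning"; forward-only navigation
// falls out naturally because the cursor only ever moves ahead.
func listPage(entries []string, lastFileName string, limit int) (page []string, nextCursor string) {
	sort.Strings(entries) // stand-in for the filer's stable listing order
	for _, name := range entries {
		if name <= lastFileName { // skip everything up to and including the cursor
			continue
		}
		page = append(page, name)
		if len(page) == limit {
			break
		}
	}
	if len(page) > 0 {
		nextCursor = page[len(page)-1]
	}
	return page, nextCursor
}

func main() {
	files := []string{"a.txt", "b.txt", "c.txt", "d.txt", "e.txt"}
	p1, cur := listPage(files, "", 2)
	p2, _ := listPage(files, cur, 2)
	fmt.Println(p1, p2) // [a.txt b.txt] [c.txt d.txt]
}
```

Unlike offset-based paging, this stays correct when entries are inserted or deleted between requests, which is why the filer's list API takes a start-from name rather than a page number.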
|
|
8d6bcddf60
|
Add S3 volume encryption support with -s3.encryptVolumeData flag (#7890)
* Add S3 volume encryption support with -s3.encryptVolumeData flag
This change adds volume-level encryption support for S3 uploads, similar
to the existing -filer.encryptVolumeData option. Each chunk is encrypted
with its own auto-generated CipherKey when the flag is enabled.
Changes:
- Add -s3.encryptVolumeData flag to weed s3, weed server, and weed mini
- Wire Cipher option through S3ApiServer and ChunkedUploadOption
- Add integration tests for multi-chunk range reads with encryption
- Tests verify encryption works across chunk boundaries
Usage:
weed s3 -encryptVolumeData
weed server -s3 -s3.encryptVolumeData
weed mini -s3.encryptVolumeData
Integration tests:
go test -v -tags=integration -timeout 5m ./test/s3/sse/...
* Add GitHub Actions CI for S3 volume encryption tests
- Add test-volume-encryption target to Makefile that starts server with -s3.encryptVolumeData
- Add s3-volume-encryption job to GitHub Actions workflow
- Tests run with integration build tag and 10m timeout
- Server logs uploaded on failure for debugging
* Fix S3 client credentials to use environment variables
The test was using hardcoded credentials "any"/"any" but the Makefile
sets AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY to "some_access_key1"/
"some_secret_key1". Updated getS3Client() to read from environment
variables with fallback to "any"/"any" for manual testing.
* Change bucket creation errors from skip to fatal
Tests should fail, not skip, when bucket creation fails. This ensures
that credential mismatches and other configuration issues are caught
rather than silently skipped.
* Make copy and multipart test jobs fail instead of succeed
Changed exit 0 to exit 1 for s3-sse-copy-operations and s3-sse-multipart
jobs. These jobs document known limitations but should fail to ensure
the issues are tracked and addressed, not silently ignored.
* Hardcode S3 credentials to match Makefile
Changed from environment variables to hardcoded credentials
"some_access_key1"/"some_secret_key1" to match the Makefile
configuration. This ensures tests work reliably.
* fix Double Encryption
* fix Chunk Size Mismatch
* Added IsCompressed
* is gzipped
* fix copying
* only perform HEAD request when len(cipherKey) > 0
* Revert "Make copy and multipart test jobs fail instead of succeed"
This reverts commit
|
4 days ago |
|
|
935f41bff6
|
filer.backup: ignore missing volume/lookup errors when -ignore404Error is set (#7889)
* filer.backup: ignore missing volume/lookup errors when -ignore404Error is set (#7888) * simplify |
4 days ago |
|
|
c688b69700 |
reduce logs
|
4 days ago |
|
|
82dac3df03
|
s3: do not persist multi part "Response-Content-Disposition" in request header (#7887)
* fix: support standard HTTP headers in S3 multipart upload * fix(s3api): validate standard HTTP headers correctly and avoid persisting Response-Content-Disposition --------- Co-authored-by: steve.wei <coderushing@gmail.com> |
4 days ago |
|
|
f07ba2c5aa
|
fix: support standard HTTP headers in S3 multipart upload (#7884)
Co-authored-by: Chris Lu <chris.lu@gmail.com> |
4 days ago |
|
|
95716a2f87 |
less verbose
|
4 days ago |
|
|
2b3ff3cd05 |
verbose mode
|
4 days ago |
|
|
b866907461
|
fs.meta.save: fix directory entry parent path in FullEntry construction (#7886)
* Checkpoint from VS Code for coding agent session * Fix fs.meta.save to correctly save directory's own metadata Co-authored-by: chrislusf <1543151+chrislusf@users.noreply.github.com> * address error --------- Co-authored-by: Chris Lu <chris.lu@gmail.com> Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: chrislusf <1543151+chrislusf@users.noreply.github.com> |
4 days ago |
|
|
5aa111708d
|
grpc: reduce client idle pings to avoid ENHANCE_YOUR_CALM (#7885)
* grpc: reduce client idle pings to avoid ENHANCE_YOUR_CALM (too_many_pings)
* test: use context.WithTimeout and pb constants for keepalive
* test(kafka): use separate dial and client contexts in NewDirectBrokerClient
* test(kafka): fix client context usage in NewDirectBrokerClient |
4 days ago |
|
|
c260e6a22e
|
Fix issue #7880: Tasks use Volume IDs instead of ip:port (#7881)
* Fix issue #7880: Tasks use Volume IDs instead of ip:port

When volume servers are registered with custom IDs, tasks were attempting to connect using the ID instead of the actual ip:port address, causing connection failures. Modified task detection logic in balance, erasure coding, and vacuum tasks to resolve volume server IDs to their actual ip:port addresses using ActiveTopology information.

* Use server addresses directly instead of translating from IDs

Modified VolumeHealthMetrics to include ServerAddress field populated directly from topology DataNodeInfo.Address. Updated task detection logic to use addresses directly without runtime lookups. Changes:
- Added ServerAddress field to VolumeHealthMetrics
- Updated maintenance scanner to populate ServerAddress
- Modified task detection to use ServerAddress for Node fields
- Updated DestinationPlan to include TargetAddress
- Removed runtime address lookups in favor of direct address usage

* Address PR comments: add ServerAddress field, improve error handling
- Add missing ServerAddress field to VolumeHealthMetrics struct
- Add warning in vacuum detection when server not found in topology
- Improve error handling in erasure coding to abort task if sources missing
- Make vacuum task stricter by skipping if server not found in topology

* Refactor: Extract common address resolution logic into shared utility
- Created weed/worker/tasks/util/address.go with ResolveServerAddress function
- Updated balance, erasure_coding, and vacuum detection to use the shared utility
- Removed code duplication and improved maintainability
- Consistent error handling across all task types

* Fix critical issues in task address resolution
- Vacuum: Require topology availability and fail if server not found (no fallback to ID)
- Ensure all task types consistently fail early when topology is incomplete
- Prevent creation of tasks that would fail due to missing server addresses

* Address additional PR feedback
- Add validation for empty addresses in ResolveServerAddress
- Remove redundant serverAddress variable in vacuum detection
- Improve robustness of address resolution

* Improve error logging in vacuum detection
- Include actual error details in log message for better diagnostics
- Make error messages consistent with other task types |
5 days ago |
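The shared ResolveServerAddress utility described above can be sketched as a topology lookup that fails loudly instead of falling back to the ID; the types and field names here are simplified stand-ins for the actual topology structures:

```go
package main

import "fmt"

// DataNodeInfo is a simplified stand-in for topology node info.
type DataNodeInfo struct {
	Id      string // operator-assigned server ID
	Address string // actual ip:port to dial
}

// resolveServerAddress maps a volume server's custom ID to its
// dialable ip:port. Per the PR, there is deliberately no fallback to
// the raw ID, and empty addresses are rejected, so tasks that would
// fail to connect are never created in the first place.
func resolveServerAddress(serverID string, nodes map[string]DataNodeInfo) (string, error) {
	node, ok := nodes[serverID]
	if !ok {
		return "", fmt.Errorf("server %q not found in topology", serverID)
	}
	if node.Address == "" {
		return "", fmt.Errorf("server %q has empty address", serverID)
	}
	return node.Address, nil
}

func main() {
	nodes := map[string]DataNodeInfo{
		"vol-a": {Id: "vol-a", Address: "192.168.1.5:8080"},
	}
	addr, err := resolveServerAddress("vol-a", nodes)
	fmt.Println(addr, err)
}
```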
|
|
225e3d0302
|
Add read only user (#7862)
* add readonly user * add args * address comments * avoid same user name * Prevents timing attacks * doc --------- Co-authored-by: Chris Lu <chris.lu@gmail.com> |
5 days ago |
|
|
e8a41ec053
|
Fix the issue where fuse command on a node cannot specify multiple configuration directory paths (#7874)
Changes:
Modified weed/command/fuse.go to add a function GetFuseCommandName to return the name of the fuse command.
Modified weed/weed.go to conditionally initialize the global HTTP client only if the command is not "fuse".
Modified weed/command/fuse_std.go to parse parameters and ensure the global HTTP client is initialized for the fuse command.
Tests:
Use /etc/fstab like:
fuse /repos fuse.weed filer=192.168.1.101:7202,filer.path=/hpc/repos,config_dir=/etc/seaweedfs/seaweedfs_01 0 0
fuse /opt/ohpc/pub fuse.weed filer=192.168.1.102:7202,filer.path=/hpc_cluster/pub,config_dir=/etc/seaweedfs/seaweedfs_02 0 0
Co-authored-by: zhangxl56 <zhangxl56@lenovo.com>
|
5 days ago |
|
|
e439e33888
|
fix(filer): check error from FindEntry (#7878)
* fix(filer): check error from FindEntry * remove --------- Co-authored-by: Chris Lu <chris.lu@gmail.com> |
5 days ago |
|
|
7064ad420d
|
Refactor S3 integration tests to use weed mini (#7877)
* Refactor S3 integration tests to use weed mini
* Fix weed mini flags for sse and parquet tests
* Fix IAM test startup: remove -iam.config flag from weed mini
* Enhance logging in IAM Makefile to debug startup failure
* Simplify weed mini flags and checks in S3 tests (IAM, Parquet, SSE, Copying)
* Simplify weed mini flags and checks in all S3 tests
* Fix IAM tests: use -s3.iam.config for weed mini
* Replace timeout command with portable loop in IAM Makefile
* Standardize portable loop-based readiness checks in all S3 Makefiles
* Define SERVER_DIR in retention Makefile
* Fix versioning and retention Makefiles: remove unsupported weed mini flags
* fix filer_group test
* fix cors
* emojis
* fix sse
* fix retention
* fixes
* fix
* fixes
* fix parquet
* fixes
* fix
* clean up
* avoid duplicated debug server
* Update .gitignore
* simplify
* clean up
* add credentials
* bind
* delay
* Update Makefile
* Update Makefile
* check ready
* delay
* update remote credentials
* Update Makefile
* clean up
* kill
* Update Makefile
* update credentials |
5 days ago |
|
|
2f6aa98221
|
Refactor: Replace removeDuplicateSlashes with NormalizeObjectKey (#7873)
* Replace removeDuplicateSlashes with NormalizeObjectKey

Use s3_constants.NormalizeObjectKey instead of removeDuplicateSlashes in most places for consistency. NormalizeObjectKey handles both duplicate slash removal and ensures the path starts with '/', providing more complete normalization.

* Fix double slash issues after NormalizeObjectKey

After using NormalizeObjectKey, object keys have a leading '/'. This commit ensures:
- getVersionedObjectDir strips leading slash before concatenation
- getEntry calls receive names without leading slash
- String concatenation with '/' doesn't create '//' paths

This prevents path construction errors like:
/buckets/bucket//object (wrong)
/buckets/bucket/object (correct)

* ensure object key leading "/"

* fix compilation

* fix: Strip leading slash from object keys in S3 API responses

After introducing NormalizeObjectKey, all internal object keys have a leading slash. However, S3 API responses must return keys without leading slashes to match AWS S3 behavior. Fixed in three functions:
- addVersion: Strip slash for version list entries
- processRegularFile: Strip slash for regular file entries
- processExplicitDirectory: Strip slash for directory entries

This ensures ListObjectVersions and similar APIs return keys like 'bar' instead of '/bar', matching S3 API specifications.

* fix: Normalize keyMarker for consistent pagination comparison

The S3 API provides keyMarker without a leading slash (e.g., 'object-001'), but after introducing NormalizeObjectKey, all internal object keys have leading slashes (e.g., '/object-001'). When comparing keyMarker < normalizedObjectKey in shouldSkipObjectForMarker, the ASCII value of '/' (47) is less than 'o' (111), causing all objects to be incorrectly skipped during pagination. This resulted in page 2 and beyond returning 0 results. Fix: Normalize the keyMarker when creating versionCollector so comparisons work correctly with normalized object keys. Fixes pagination tests:
- TestVersioningPaginationOver1000Versions
- TestVersioningPaginationMultipleObjectsManyVersions

* refactor: Change NormalizeObjectKey to return keys without leading slash

BREAKING STRATEGY CHANGE: Previously, NormalizeObjectKey added a leading slash to all object keys, which required stripping it when returning keys to S3 API clients and caused complexity in marker normalization for pagination. NEW STRATEGY:
- NormalizeObjectKey now returns keys WITHOUT leading slash (e.g., 'foo/bar' not '/foo/bar')
- This matches the S3 API format directly
- All path concatenations now explicitly add '/' between bucket and object
- No need to strip slashes in responses or normalize markers

Changes:
1. Modified NormalizeObjectKey to strip leading slash instead of adding it
2. Fixed all path concatenations to use: BucketsPath + '/' + bucket + '/' + object instead of: BucketsPath + '/' + bucket + object
3. Reverted response key stripping in: addVersion(), processRegularFile(), processExplicitDirectory()
4. Reverted keyMarker normalization in findVersionsRecursively()
5. Updated matchesPrefixFilter() to work with keys without leading slash
6. Fixed paths in handlers: s3api_object_handlers.go (GetObject, HeadObject, cacheRemoteObjectForStreaming), s3api_object_handlers_postpolicy.go, s3api_object_handlers_tagging.go, s3api_object_handlers_acl.go, s3api_version_id.go (getVersionedObjectDir, getVersionIdFormat), s3api_object_versioning.go (getObjectVersionList, updateLatestVersionAfterDeletion)

All versioning tests pass including pagination stress tests.

* adjust format

* Update post policy tests to match new NormalizeObjectKey behavior
- Update TestPostPolicyKeyNormalization to expect keys without leading slashes
- Update TestNormalizeObjectKey to expect keys without leading slashes
- Update TestPostPolicyFilenameSubstitution to expect keys without leading slashes
- Update path construction in tests to use new pattern: BucketsPath + '/' + bucket + '/' + object

* Fix ListObjectVersions prefix filtering

Remove leading slash addition to prefix parameter to allow correct filtering of .versions directories when listing object versions with a specific prefix. The prefix parameter should match entry paths relative to bucket root. Adding a leading slash was breaking the prefix filter for paginated requests. Fixes pagination issue where second page returned 0 versions instead of continuing with remaining versions.

* no leading slash

* Fix urlEscapeObject to add leading slash for filer paths

NormalizeObjectKey now returns keys without leading slashes to match S3 API format. However, urlEscapeObject is used for filer paths which require leading slashes. Add leading slash back after normalization to ensure filer paths are correct. Fixes TestS3ApiServer_toFilerPath test failures.

* adjust tests

* normalize

* Fix: Normalize prefixes and markers in LIST operations using NormalizeObjectKey

Ensure consistent key normalization across all S3 operations (GET, PUT, LIST). Previously, LIST operations were not applying the same normalization rules (handling backslashes, duplicate slashes, leading slashes) as GET/PUT operations. Changes:
- Updated normalizePrefixMarker() to call NormalizeObjectKey for both prefix and marker
- This ensures prefixes with leading slashes, backslashes, or duplicate slashes are handled consistently with how object keys are normalized
- Fixes Parquet test failures where pads.write_dataset creates implicit directory structures that couldn't be discovered by subsequent LIST operations
- Added TestPrefixNormalizationInList and TestListPrefixConsistency tests

All existing LIST tests continue to pass with the normalization improvements.

* Add debugging logging to LIST operations to track prefix normalization

* Fix: Remove leading slash addition from GetPrefix to work with NormalizeObjectKey

The NormalizeObjectKey function removes leading slashes to match S3 API format (e.g., 'foo/bar' not '/foo/bar'). However, GetPrefix was adding a leading slash back, which caused LIST operations to fail with incorrect path handling. Now GetPrefix only normalizes duplicate slashes without adding a leading slash, which allows NormalizeObjectKey changes to work correctly for S3 LIST operations. All Parquet integration tests now pass (20/20).

* Fix: Handle object paths without leading slash in checkDirectoryObject

NormalizeObjectKey() removes the leading slash to match S3 API format. However, checkDirectoryObject() was assuming the object path has a leading slash when processing directory markers (paths ending with '/'). Now we ensure the object has a leading slash before processing it for filer operations. Fixes implicit directory marker test (explicit_dir/) while keeping Parquet integration tests passing (20/20). All tests pass:
- Implicit directory tests: 6/6
- Parquet integration tests: 20/20

* Fix: Handle explicit directory markers with trailing slashes

Explicit directory markers created with put_object(Key='dir/', ...) are stored in the filer with the trailing slash as part of the name. The checkDirectoryObject() function now checks for both:
1. Explicit directories: lookup with trailing slash preserved (e.g., 'explicit_dir/')
2. Implicit directories: lookup without trailing slash (e.g., 'implicit_dir')

This ensures both types of directory markers are properly recognized. All tests pass:
- Implicit directory tests: 6/6 (including explicit directory marker test)
- Parquet integration tests: 20/20

* Fix: Preserve trailing slash in NormalizeObjectKey

NormalizeObjectKey now preserves trailing slashes when normalizing object keys. This is important for explicit directory markers like 'explicit_dir/' which rely on the trailing slash to be recognized as directory objects. The normalization process:
1. Notes if trailing slash was present
2. Removes duplicate slashes and converts backslashes
3. Removes leading slash for S3 API format
4. Restores trailing slash if it was in the original

This ensures explicit directory markers created with put_object(Key='dir/', ...) are properly normalized and can be looked up by their exact name. All tests pass:
- Implicit directory tests: 6/6
- Parquet integration tests: 20/20

* clean object

* Fix: Don't restore trailing slash if result is empty

When normalizing paths that are only slashes (e.g., '///', '/'), the function should return an empty string, not a single slash. The fix ensures we only restore the trailing slash if the result is non-empty. This fixes the 'just_slashes' test case:
- Input: '///'
- Expected: ''
- Previous: '/'
- Fixed: ''

All tests now pass:
- Unit tests: TestNormalizeObjectKey (13/13)
- Implicit directory tests: 6/6
- Parquet integration tests: 20/20

* prefixEndsOnDelimiter

* Update s3api_object_handlers_list.go

* Update s3api_object_handlers_list.go

* handle create directory |
6 days ago

014027f75a
Fix: Support object tagging in versioned buckets (Issue #7868) (#7871)
* Fix: Support object tagging in versioned buckets (Issue #7868)

  This fix addresses the issue where setting tags on files in versioned buckets would fail with a 'filer: no entry is found in filer store' error.

  Changes:
  - Updated GetObjectTaggingHandler to check versioning status and retrieve the correct object versions
  - Updated PutObjectTaggingHandler to properly locate and update tags on versioned objects
  - Updated DeleteObjectTaggingHandler to delete tags from versioned objects
  - Added proper handling for both specific versions and latest versions
  - Added a distinction between null versions (pre-versioning objects) and versioned objects

  The fix follows the same versioning-aware pattern already implemented in the ACL handlers.

  Tests:
  - Added a comprehensive test suite for tagging operations on versioned buckets
  - Tests cover PUT, GET, and DELETE tagging operations on specific versions and latest versions
  - Tests verify tag isolation between different versions of the same object

* Fix: Ensure consistent directory path construction in tagging handlers

  Changed directory path construction to match the pattern used in the ACL handlers: added the missing '/' before the object path when constructing the .versions directory path. This ensures compatibility with the filer's expected path structure. Applied to both PutObjectTaggingHandler and DeleteObjectTaggingHandler.

* Revert: Remove redundant slash in path construction - the object already has a leading slash from NormalizeObjectKey

* Fix: Remove redundant slashes in versioning path construction across handlers
  - getVersionedObjectDir: the object already starts with '/', so no extra '/' is needed
  - ACL handlers: same pattern; fix both PutObjectAcl locations
  - Ensures consistent path construction with object parameter normalization

* fix test compilation

* Add: Comprehensive ACL tests for versioned and non-versioned buckets

  Added s3_acl_versioning_test.go with 5 test cases covering:
  - GetObjectAcl on versioned buckets
  - GetObjectAcl on specific versions
  - PutObjectAcl on versioned buckets
  - PutObjectAcl on specific versions
  - Independent ACL management across versions
  These tests were missing and would have caught the path construction issues just fixed in the ACL handler. They validate that ACL operations work correctly on both versioned and non-versioned objects.

* Fix: Correct tagging versioning test file formatting

* fix: Update AWS SDK endpoint config and improve cleanup to handle delete markers
  - Replace the deprecated EndpointResolverWithOptions with BaseEndpoint in the AWS SDK v2 client configuration
  - Update cleanupTestBucket to properly delete both object versions and delete markers
  - Apply the changes to both the ACL and tagging test files for consistency

* Fix S3 multi-delete for versioned objects

  The bug was in getVersionedObjectDir(), which constructed paths without a slash between the bucket and the object key:
  - Before (wrong): /buckets/mybucket{key}.versions
  - After (fixed): /buckets/mybucket/{key}/.versions
  This caused version deletions to claim success without actually deleting files, breaking the S3 compatibility tests:
  - test_versioning_multi_object_delete
  - test_versioning_multi_object_delete_with_marker
  - test_versioning_concurrent_multi_object_delete
  - test_object_lock_multi_delete_object_with_retention
  Added a comprehensive test that reproduces the issue and verifies the fix.

* Remove emojis from test output
6 days ago

7f611f5d3a
fix: Correct admin server port in Helm worker deployment (#7872)
The worker deployment was incorrectly passing the admin gRPC port (33646) to the -admin flag. The SeaweedFS worker command, however, derives the gRPC port by adding 10000 to the HTTP port it is given. Workers therefore attempted to connect to port 43646 (33646 + 10000) instead of the correct gRPC port 33646 (23646 + 10000).

Changes:
- Update worker-deployment.yaml to use admin.port instead of admin.grpcPort
- Workers now connect using the admin HTTP port, letting the binary calculate the gRPC port automatically

Fixes workers failing with: "dial tcp <admin-ip>:43646: connect: no route to host"

Related:
- Worker code: weed/pb/grpc_client_server.go:272 (grpcPort = port + 10000)
- Worker docs: weed/command/worker.go:36 (admin HTTP port + 10000)
6 days ago

1e3ace54a0
Merge branch 'master' of https://github.com/seaweedfs/seaweedfs
6 days ago

71cc233fac
add missing action
6 days ago

911aca74f3
Support volume server ID in Helm chart (#7867)
helm: Support volume server ID
6 days ago

26acebdef1
fix: restore TimeToFirstByte metric for S3 GetObject operations (issue #7869) (#7870)
* fix(iam): add support for fine-grained S3 actions in IAM policies

  Add support for fine-grained S3 actions like s3:DeleteObject, s3:PutObject, and other specific S3 actions in IAM policy mapping. Previously, only coarse-grained action patterns (Put*, Get*, etc.) were supported, causing IAM policies with specific actions to be rejected with a 'not a valid action' error. Fixes issue #7864 part 2: the s3:DeleteObject IAM action is now supported.

  Changes:
  - Extended MapToStatementAction() to handle fine-grained S3 actions
  - Maps S3-specific actions to the appropriate internal action constants
  - Supports 30+ S3 actions including DeleteObject, PutObject, GetObject, etc.

* fix(s3api): correct resource ARN generation for subpath permissions

  Fix convertSingleAction() to properly handle subpath patterns in legacy actions. Previously, when a user was granted Write permission to a subpath (e.g., Write:bucket/sub_path/*), the resource ARN was incorrectly generated, causing DELETE operations to be denied even though s3:DeleteObject was included in the Write action.

  The fix:
  - Extract the bucket name and prefix path separately from patterns like 'bucket/prefix/*'
  - Generate the correct S3 ARN format: arn:aws:s3:::bucket/prefix/*
  - Ensure all permission checks (Read, Write, List, Tagging, etc.) work correctly with subpaths
  - Support nested paths (e.g., bucket/a/b/c/*)

  Fixes issue #7864 part 1: Write permission on a subpath now allows DELETE. Example: with permission Write:mybucket/documents/*, objects under mybucket/documents/* can now be PUT, DELETEd, or have ACL operations applied; objects outside this path are still denied.

* test(s3api): add comprehensive tests for subpath permission handling

  Add a new test file with comprehensive tests for convertSingleAction():
  1. TestConvertSingleActionDeleteObject: verifies s3:DeleteObject is included in Write actions (fixes issue #7864 part 2)
  2. TestConvertSingleActionSubpath: tests proper resource ARN generation for different permission patterns:
     - Bucket-level: Write:mybucket -> arn:aws:s3:::mybucket
     - Wildcard: Write:mybucket/* -> arn:aws:s3:::mybucket/*
     - Subpath: Write:mybucket/sub_path/* -> arn:aws:s3:::mybucket/sub_path/*
     - Nested: Read:mybucket/documents/* -> arn:aws:s3:::mybucket/documents/*
  3. TestConvertSingleActionSubpathDeleteAllowed: specifically validates that subpath Write permissions allow DELETE operations
  4. TestConvertSingleActionNestedPaths: tests deeply nested path handling (e.g., bucket/a/b/c/*)
  All tests pass and validate the fixes for issue #7864.

* fix: address review comments from PR #7865
  - Fix critical bug: use the parsed 'bucket' instead of 'resourcePattern' for the GetObjectRetention, GetObjectLegalHold, and PutObjectLegalHold actions to avoid malformed ARNs like arn:aws:s3:::bucket/*/*
  - Refactor the large switch statement in MapToStatementAction() into a map-based lookup for better performance and maintainability

* fmt

* refactor: extract extractBucketAndPrefix helper and simplify convertSingleAction
  - Extract extractBucketAndPrefix as a package-level function for better testability and reusability
  - Remove the unused bucketName parameter from the convertSingleAction signature
  - Update GetResourcesFromLegacyAction to use the extracted helper for consistent ARN generation
  - Update all call sites in tests to match the new function signature
  - All tests pass and the module compiles without errors

* fix: use the extracted bucket variable consistently in all ARN generation branches

  Replace resourcePattern with the extracted bucket variable in else branches and bucket-level cases to avoid malformed ARNs like 'arn:aws:s3:::mybucket/*/*':
  - Read case: bucket-level else branch
  - Write case: bucket-level else branch
  - Admin case: both bucket and object ARNs
  - List case: bucket-level else branch
  - GetBucketObjectLockConfiguration: bucket extraction
  - PutBucketObjectLockConfiguration: bucket extraction
  This ensures a consistent ARN format: arn:aws:s3:::bucket or arn:aws:s3:::bucket/*

* fix: address remaining review comments from PR #7865

  High-priority fixes:
  - The Write action on the bucket level now generates arn:aws:s3:::mybucket/* instead of arn:aws:s3:::mybucket to enable object-level S3 actions (s3:PutObject, s3:DeleteObject)
  - GetResourcesFromLegacyAction now generates both bucket and object ARNs for /* patterns to maintain backward compatibility with mixed action groups
  Medium-priority improvements:
  - Remove the unused 'bucket' field from the TestConvertSingleActionSubpath test struct
  - Update the test to use assert.ElementsMatch instead of assert.Contains for more comprehensive resource ARN validation
  - Clarify test expectations with an expectedResources slice instead of a single expectedResource
  All tests pass, compilation verified.

* test: improve TestConvertSingleActionNestedPaths with comprehensive assertions

  Update the test to use assert.ElementsMatch for more robust resource ARN verification:
  - Change the struct from a single expectedResource to an expectedResources slice
  - Update the Read nested-path test to expect both bucket and prefix ARNs
  - Use assert.ElementsMatch to verify all generated resources match exactly
  This matches the improvement pattern used in TestConvertSingleActionSubpath and provides complete coverage for nested path handling.

* refactor: simplify the S3 action map and improve resource ARN detection
  - Refactor fineGrainedActionMap to use an init() function for programmatic population of both prefixed (s3:Action) and unprefixed (Action) variants, eliminating 70+ duplicate entries
  - Add a buildObjectResourceArn() helper to eliminate duplicated resource ARN generation logic across switch cases
  - Fix bucket- vs. object-level access detection: only use the HasSuffix(/*) check instead of Contains('/'), which incorrectly matched patterns like 'bucket/prefix' without a wildcard
  - Apply buildObjectResourceArn() consistently to the Tagging, BypassGovernanceRetention, GetObjectRetention, PutObjectRetention, GetObjectLegalHold, and PutObjectLegalHold cases

* fmt

* fix: generate object-level ARNs for bucket-level read access

  When bucket-level read access is granted (e.g., 'Read:mybucket'), generate both bucket and object ARNs so that object-level actions like s3:GetObject can properly authorize. Similarly, in GetResourcesFromLegacyAction, bucket-level patterns should generate both ARN levels for consistency with patterns that include wildcards. This ensures that users with bucket-level permissions can read objects, not just the bucket itself.

* fix: address Copilot code review comments
  - Remove the unused bucketName parameter from the ConvertIdentityToPolicy signature; update all callers in examples.go and engine_test.go (the bucket is now extracted from the action string itself)
  - Update the extractBucketAndPrefix documentation: add a nested-path example (bucket/a/b/c/*) and clarify that the prefix can contain multiple path segments
  - Make GetResourcesFromLegacyAction action-aware: different action types have different resource requirements (List actions only need the bucket ARN for bucket-only operations; Read/Write/Tagging actions need both bucket and object ARNs), aligning with the convertSingleAction logic for consistency
  All tests pass.

* test: add comprehensive tests for GetResourcesFromLegacyAction consistency
  - Add TestGetResourcesFromLegacyAction to verify action-aware resource generation
  - Validate consistency with convertSingleAction for all action types:
    - List actions: bucket-only ARNs (s3:ListBucket is a bucket-level operation)
    - Read actions: both bucket and object ARNs
    - Write actions: object-only ARNs (subpaths) or object ARNs (bucket-level)
    - Admin actions: both bucket and object ARNs
  - Update GetResourcesFromLegacyAction to generate Admin ARNs consistent with convertSingleAction
  - All tests pass (35+ test cases across integration_test.go)

* refactor: eliminate code duplication in GetResourcesFromLegacyAction
  - Simplify GetResourcesFromLegacyAction to delegate to convertSingleAction, eliminating ~50 lines of duplicated action-type-specific logic
  - Ensures a single source of truth for resource ARN generation; changes to ARN logic only need to be made in one place, improving maintainability
  - All tests pass: any inconsistencies would be caught immediately
  - Addresses the Gemini Code Assist review comment about code duplication

* fix: remove the fragile 'dummy' action type in CreatePolicyFromLegacyIdentity
  - Replace the hardcoded 'dummy:' prefix with a proper representative action type: use the first valid action type from the action list to determine resource requirements
  - Ensures GetResourcesFromLegacyAction receives a valid action type and prevents silent failures when convertSingleAction encounters an unknown action
  - Improves code clarity: explains why a representative action type is needed
  - All tests pass: the policy engine tests verify correct behavior

* security: prevent privilege escalation in Admin action subpath handling
  - An Admin action with a subpath (e.g., Admin:bucket/admin/*) now correctly restricts access to the specified subpath instead of granting full bucket access
  - If a prefix exists: resources are restricted to bucket + bucket/prefix/*
  - If no prefix: full bucket access (unchanged behavior for root Admin)
  - Added the Admin_on_subpath test case to validate the security fix
  - All 40+ policy engine tests pass

* refactor: address Copilot code review comments on S3 authorization
  - Fix the GetObjectTagging mapping: change from ACTION_READ to ACTION_TAGGING (tagging operations should not be classified as general read operations)
  - Enhance extractBucketAndPrefix edge-case handling: add input validation (reject empty strings, whitespace, slash-only input), normalize double and trailing slashes, return an empty bucket/prefix for invalid patterns, and prevent generation of malformed ARNs
  - Separate the Read action from ListBucket (AWS S3 IAM semantics): ListBucket is a bucket-level operation, not object-level; the Read action now only includes s3:GetObject and s3:GetObjectVersion, aligning with AWS S3 IAM policy best practices
  - Update buildObjectResourceArn to handle invalid bucket names gracefully: return an empty slice if the bucket is empty after validation, preventing malformed ARN generation
  - Add a comprehensive TestExtractBucketAndPrefixEdgeCases with 8 test cases: validates empty strings, whitespace, and special characters; confirms proper normalization of double/trailing slashes; ensures robust parsing of nested paths
  - Update existing tests to reflect the removal of ListBucket from the Read action
  All 40+ policy engine tests pass.

* fix: aggregate resource ARNs from all action types in CreatePolicyFromLegacyIdentity

  CRITICAL FIX: the previous implementation incorrectly used a single representative action type to determine resource ARNs when multiple legacy actions targeted the same resource pattern. This caused incorrect policy generation when action types with different resource requirements (e.g., List vs. Write) were grouped together.

  Example of the bug:
  - Input: List:mybucket/path/*, Write:mybucket/path/*
  - Old behavior: used only List's resources (bucket-level ARN)
  - Result: the policy had Write actions (s3:PutObject) but only the bucket ARN
  - Consequence: s3:PutObject would be denied due to the missing object-level ARN

  Solution:
  - Iterate through all action types for a given resource pattern
  - For each action type, call GetResourcesFromLegacyAction to get the required ARNs
  - Aggregate all ARNs into a set to eliminate duplicates, and use the merged set for the final policy statement
  - The Admin action short-circuits (always includes full permissions)

  Example of correct behavior:
  - Input: List:mybucket/path/*, Write:mybucket/path/*
  - New behavior: aggregates both List and Write resource requirements
  - Result: the policy has Write actions with BOTH bucket and object-level ARNs
  - Outcome: s3:PutObject works correctly on mybucket/path/*

  Added TestCreatePolicyFromLegacyIdentityMultipleActions with 3 test cases:
  1. List + Write on a subpath: verifies bucket + object ARN aggregation
  2. Read + Tagging on a bucket: verifies action-specific ARN combinations
  3. Admin with other actions: verifies Admin dominates the resource ARNs
  All 45+ policy engine tests pass.

* fix: remove the bucket-level ARN from the Read action for consistency

  ISSUE: the Read action was including bucket-level ARNs (arn:aws:s3:::bucket) even though the only S3 actions in Read are s3:GetObject and s3:GetObjectVersion, which are object-level operations. This created a mismatch between the actions and resources in the policy statement.
  ROOT CAUSE: s3:ListBucket had previously been removed from the Read action, but the bucket-level ARN was not removed, creating an inconsistency.
  SOLUTION: update the Read action to only generate object-level ARNs using buildObjectResourceArn, consistent with how the Write and Tagging actions work. This ensures:
  - Read:mybucket generates arn:aws:s3:::mybucket/* (not the bucket ARN)
  - Read:bucket/prefix/* generates arn:aws:s3:::bucket/prefix/* (object-level only)
  - Consistency: same actions, same resources, same logic across all object operations

  Updated test expectations:
  - TestConvertSingleActionSubpath: Read_on_subpath now expects only the object ARN
  - TestConvertSingleActionNestedPaths: the Read nested path now expects only the object ARN
  - TestConvertIdentityToPolicy: Read resources are now 1 instead of 2
  - TestCreatePolicyFromLegacyIdentityMultipleActions: Read+Tagging aggregates to 1 ARN
  All 45+ policy engine tests pass.

* doc

* fmt

* fix: address Copilot code review on Read action consistency and missing S3 action mappings
  - Clarify the MapToStatementAction comment to reflect exact lookup (not pattern matching)
  - Add missing S3 actions to baseS3ActionMap: ListBucketVersions and ListAllMyBuckets for bucket operations; GetBucketCors, PutBucketCors, and DeleteBucketCors for CORS; GetBucketNotification and PutBucketNotification for notifications; GetBucketObjectLockConfiguration and PutBucketObjectLockConfiguration for object lock; GetObjectVersionTagging for version tagging; GetObjectVersionAcl and PutBucketAcl for ACL operations; PutBucketTagging and DeleteBucketTagging for bucket tagging
  - Fix the Read action scope inconsistency with GetActionMappings(): previously it included only GetObject and GetObjectVersion; now it includes the full Read set (14 actions) from GetActionMappings, covering both bucket-level (ListBucket*, GetBucket*) and object-level (GetObject*) operations. The bucket ARN enables ListBucket operations and the object ARN enables GetObject operations
  - Update all test expectations: TestConvertSingleActionSubpath (Read now returns 2 ARNs, bucket + objects), TestConvertSingleActionNestedPaths (the Read nested path now includes the bucket ARN), TestGetResourcesFromLegacyAction (Read test cases updated for consistency), TestCreatePolicyFromLegacyIdentityMultipleActions (Read_and_Tagging now returns 2 ARNs), TestConvertIdentityToPolicy (updated to expect 14 Read actions and 2 resources)
  Fixes the inconsistency between the convertSingleAction Read case and the GetActionMappings function.

* fmt

* fix: align convertSingleAction with GetActionMappings and add bucket validation
  - Fix the Write action: it now includes all 16 actions from GetActionMappings (object and bucket operations), including PutBucketVersioning, PutBucketCors, PutBucketAcl, PutBucketTagging, etc., and generates both bucket and object ARNs to support bucket-level operations
  - Fix the List action: add ListAllMyBuckets from GetActionMappings (previously ListBucket and ListBucketVersions; now ListBucket, ListBucketVersions, and ListAllMyBuckets)
  - Fix the Tagging action: include bucket-level tagging operations (previously only the object-level GetObjectTagging, PutObjectTagging, and DeleteObjectTagging; now also the bucket-level GetBucketTagging, PutBucketTagging, and DeleteBucketTagging), generating both bucket and object ARNs
  - Add bucket validation to prevent malformed ARNs with an empty bucket: Admin returns an error if the bucket is empty; List generates empty resources if the bucket is empty; Tagging checks the bucket before generating ARNs; GetBucketObjectLockConfiguration and PutBucketObjectLockConfiguration validate the bucket
  - Fix the TrimRight issue in extractBucketAndPrefix: change from strings.TrimRight(pattern, "/") to removing only one trailing slash, preventing loss of the prefix when the pattern has multiple trailing slashes
  - Update all test cases: TestConvertSingleActionSubpath (Write now returns 16 actions and bucket+object ARNs), TestConvertSingleActionNestedPaths (Write includes the bucket ARN), TestGetResourcesFromLegacyAction (updated Write and Tagging expectations), TestCreatePolicyFromLegacyIdentityMultipleActions (updated action/resource counts)
  Fixes inconsistencies between convertSingleAction and GetActionMappings for the Write/List/Tagging actions.

* fmt

* fix: resolve ListMultipartUploads/ListParts mapping inconsistency and add action validation
  - Fix the ListMultipartUploads and ListParts mapping in helpers.go: changed from ACTION_LIST to ACTION_WRITE for consistency with GetActionMappings. These operations are part of the multipart write workflow and should map to the Write action; this prevents inconsistent behavior when the same actions are processed through different code paths
  - Add documentation clarifying the multipart operations in the Write action: ListMultipartUploads and ListParts are required for meaningful multipart upload workflow management
  - Add action validation to CreatePolicyFromLegacyIdentity: validate the action format before processing using ValidateActionMapping; log warnings for invalid actions instead of silently skipping them; provide clearer error messages when invalid action types are used; ensure users know when their intended permissions were not applied. This is consistent with the ConvertLegacyActions validation approach
  Fixes inconsistent action type mappings and silent failures for invalid actions.

* fix: restore TimeToFirstByte metric for S3 GetObject operations (issue #7869)
6 days ago

5469b7c58f
fix: resolve inconsistent S3 API authorization for DELETE operations (issue #7864) (#7865)
* fix(iam): add support for fine-grained S3 actions in IAM policies

  Add support for fine-grained S3 actions like s3:DeleteObject, s3:PutObject, and other specific S3 actions in IAM policy mapping. Previously, only coarse-grained action patterns (Put*, Get*, etc.) were supported, causing IAM policies with specific actions to be rejected with a 'not a valid action' error. Fixes issue #7864 part 2: the s3:DeleteObject IAM action is now supported.

  Changes:
  - Extended MapToStatementAction() to handle fine-grained S3 actions
  - Maps S3-specific actions to the appropriate internal action constants
  - Supports 30+ S3 actions including DeleteObject, PutObject, GetObject, etc.

* fix(s3api): correct resource ARN generation for subpath permissions

  Fix convertSingleAction() to properly handle subpath patterns in legacy actions. Previously, when a user was granted Write permission to a subpath (e.g., Write:bucket/sub_path/*), the resource ARN was incorrectly generated, causing DELETE operations to be denied even though s3:DeleteObject was included in the Write action.

  The fix:
  - Extract the bucket name and prefix path separately from patterns like 'bucket/prefix/*'
  - Generate the correct S3 ARN format: arn:aws:s3:::bucket/prefix/*
  - Ensure all permission checks (Read, Write, List, Tagging, etc.) work correctly with subpaths
  - Support nested paths (e.g., bucket/a/b/c/*)

  Fixes issue #7864 part 1: Write permission on a subpath now allows DELETE. Example: with permission Write:mybucket/documents/*, objects under mybucket/documents/* can now be PUT, DELETEd, or have ACL operations applied; objects outside this path are still denied.

* test(iam): add tests for fine-grained S3 action mappings

  Extend TestMapToStatementAction with test cases for fine-grained S3 actions: s3:DeleteObject, s3:PutObject, s3:GetObject, s3:ListBucket, s3:PutObjectAcl, and s3:GetObjectAcl. Ensures the new action mapping support is working correctly.

* test(s3api): add comprehensive tests for subpath permission handling

  Add a new test file with comprehensive tests for convertSingleAction():
  1. TestConvertSingleActionDeleteObject: verifies s3:DeleteObject is included in Write actions (fixes issue #7864 part 2)
  2. TestConvertSingleActionSubpath: tests proper resource ARN generation for different permission patterns:
     - Bucket-level: Write:mybucket -> arn:aws:s3:::mybucket
     - Wildcard: Write:mybucket/* -> arn:aws:s3:::mybucket/*
     - Subpath: Write:mybucket/sub_path/* -> arn:aws:s3:::mybucket/sub_path/*
     - Nested: Read:mybucket/documents/* -> arn:aws:s3:::mybucket/documents/*
  3. TestConvertSingleActionSubpathDeleteAllowed: specifically validates that subpath Write permissions allow DELETE operations
  4. TestConvertSingleActionNestedPaths: tests deeply nested path handling (e.g., bucket/a/b/c/*)
  All tests pass and validate the fixes for issue #7864.

* fix: address review comments from PR #7865
  - Fix critical bug: use the parsed 'bucket' instead of 'resourcePattern' for the GetObjectRetention, GetObjectLegalHold, and PutObjectLegalHold actions to avoid malformed ARNs like arn:aws:s3:::bucket/*/*
  - Refactor the large switch statement in MapToStatementAction() into a map-based lookup for better performance and maintainability

* fmt

* refactor: extract extractBucketAndPrefix helper and simplify convertSingleAction
  - Extract extractBucketAndPrefix as a package-level function for better testability and reusability
  - Remove the unused bucketName parameter from the convertSingleAction signature
  - Update GetResourcesFromLegacyAction to use the extracted helper for consistent ARN generation
  - Update all call sites in tests to match the new function signature
  - All tests pass and the module compiles without errors

* fix: use the extracted bucket variable consistently in all ARN generation branches

  Replace resourcePattern with the extracted bucket variable in else branches and bucket-level cases to avoid malformed ARNs like 'arn:aws:s3:::mybucket/*/*':
  - Read case: bucket-level else branch
  - Write case: bucket-level else branch
  - Admin case: both bucket and object ARNs
  - List case: bucket-level else branch
  - GetBucketObjectLockConfiguration: bucket extraction
  - PutBucketObjectLockConfiguration: bucket extraction
  This ensures a consistent ARN format: arn:aws:s3:::bucket or arn:aws:s3:::bucket/*

* fix: address remaining review comments from PR #7865

  High-priority fixes:
  - The Write action on the bucket level now generates arn:aws:s3:::mybucket/* instead of arn:aws:s3:::mybucket to enable object-level S3 actions (s3:PutObject, s3:DeleteObject)
  - GetResourcesFromLegacyAction now generates both bucket and object ARNs for /* patterns to maintain backward compatibility with mixed action groups
  Medium-priority improvements:
  - Remove the unused 'bucket' field from the TestConvertSingleActionSubpath test struct
  - Update the test to use assert.ElementsMatch instead of assert.Contains for more comprehensive resource ARN validation
  - Clarify test expectations with an expectedResources slice instead of a single expectedResource
  All tests pass, compilation verified.

* test: improve TestConvertSingleActionNestedPaths with comprehensive assertions

  Update the test to use assert.ElementsMatch for more robust resource ARN verification:
  - Change the struct from a single expectedResource to an expectedResources slice
  - Update the Read nested-path test to expect both bucket and prefix ARNs
  - Use assert.ElementsMatch to verify all generated resources match exactly
  This matches the improvement pattern used in TestConvertSingleActionSubpath and provides complete coverage for nested path handling.

* refactor: simplify the S3 action map and improve resource ARN detection
  - Refactor fineGrainedActionMap to use an init() function for programmatic population of both prefixed (s3:Action) and unprefixed (Action) variants, eliminating 70+ duplicate entries
  - Add a buildObjectResourceArn() helper to eliminate duplicated resource ARN generation logic across switch cases
  - Fix bucket- vs. object-level access detection: only use the HasSuffix(/*) check instead of Contains('/'), which incorrectly matched patterns like 'bucket/prefix' without a wildcard
  - Apply buildObjectResourceArn() consistently to the Tagging, BypassGovernanceRetention, GetObjectRetention, PutObjectRetention, GetObjectLegalHold, and PutObjectLegalHold cases

* fmt

* fix: generate object-level ARNs for bucket-level read access

  When bucket-level read access is granted (e.g., 'Read:mybucket'), generate both bucket and object ARNs so that object-level actions like s3:GetObject can properly authorize. Similarly, in GetResourcesFromLegacyAction, bucket-level patterns should generate both ARN levels for consistency with patterns that include wildcards. This ensures that users with bucket-level permissions can read objects, not just the bucket itself.

* fix: address Copilot code review comments
  - Remove the unused bucketName parameter from the ConvertIdentityToPolicy signature; update all callers in examples.go and engine_test.go (the bucket is now extracted from the action string itself)
  - Update the extractBucketAndPrefix documentation: add a nested-path example (bucket/a/b/c/*) and clarify that the prefix can contain multiple path segments
  - Make GetResourcesFromLegacyAction action-aware: different action types have different resource requirements (List actions only need the bucket ARN for bucket-only operations; Read/Write/Tagging actions need both bucket and object ARNs), aligning with the convertSingleAction logic for consistency
  All tests pass.

* test: add comprehensive tests for GetResourcesFromLegacyAction consistency
  - Add TestGetResourcesFromLegacyAction to verify action-aware resource generation
  - Validate consistency with convertSingleAction for all action types:
    - List actions: bucket-only ARNs (s3:ListBucket is a bucket-level operation)
    - Read actions: both bucket and object ARNs
    - Write actions: object-only ARNs (subpaths) or object ARNs (bucket-level)
    - Admin actions: both bucket and object ARNs
  - Update GetResourcesFromLegacyAction to generate Admin ARNs consistent with convertSingleAction
  - All tests pass (35+ test cases across integration_test.go)

* refactor: eliminate code duplication in GetResourcesFromLegacyAction
  - Simplify GetResourcesFromLegacyAction to delegate to convertSingleAction, eliminating ~50 lines of duplicated action-type-specific logic
  - Ensures a single source of truth for resource ARN generation; changes to ARN logic only need to be made in one place, improving maintainability
  - All tests pass: any inconsistencies would be caught immediately
  - Addresses the Gemini Code Assist review comment about code duplication

* fix: remove the fragile 'dummy' action type in CreatePolicyFromLegacyIdentity
  - Replace the hardcoded 'dummy:' prefix with a proper representative action type: use the first valid action type from the action list to determine resource requirements
  - Ensures GetResourcesFromLegacyAction receives a valid action type and prevents silent failures when convertSingleAction encounters an unknown action
  - Improves code clarity: explains why a representative action type is needed
  - All tests pass: the policy engine tests verify correct behavior

* security: prevent privilege escalation in Admin action subpath handling
  - An Admin action with a subpath (e.g., Admin:bucket/admin/*) now correctly restricts access to the specified subpath instead of granting full bucket access
  - If a prefix exists: resources are restricted to bucket + bucket/prefix/*
  - If no prefix: full bucket access (unchanged behavior for root Admin)
  - Added the Admin_on_subpath test case to validate the security fix
  - All 40+ policy engine tests pass

* refactor: address Copilot code review comments on S3 authorization
  - Fix the GetObjectTagging mapping: change from ACTION_READ to ACTION_TAGGING (tagging operations should not be classified as general read operations)
  - Enhance extractBucketAndPrefix edge-case handling: add input validation (reject empty strings, whitespace, slash-only input), normalize double and trailing slashes, return an empty bucket/prefix for invalid patterns, and prevent generation of malformed ARNs
  - Separate the Read action from ListBucket (AWS S3 IAM semantics): ListBucket is a bucket-level operation, not object-level; the Read action now only includes s3:GetObject and s3:GetObjectVersion, aligning with AWS S3 IAM policy best practices
  - Update buildObjectResourceArn to handle invalid bucket names gracefully: return an empty slice if the bucket is empty after validation, preventing malformed ARN generation
  - Add a comprehensive TestExtractBucketAndPrefixEdgeCases with 8 test cases: validates empty strings, whitespace, and special characters; confirms proper normalization of double/trailing slashes; ensures robust parsing of nested paths
  - Update existing tests to reflect the removal of ListBucket from the Read action
  All 40+ policy engine tests pass.

* fix: aggregate resource ARNs from all action types in CreatePolicyFromLegacyIdentity

  CRITICAL FIX: the previous implementation incorrectly used a single representative action type to determine resource ARNs when multiple legacy actions targeted the same resource pattern. This caused incorrect policy generation when action types with different resource requirements (e.g., List vs. Write) were grouped together.

  Example of the bug:
  - Input: List:mybucket/path/*, Write:mybucket/path/*
  - Old behavior: used only List's resources (bucket-level ARN)
  - Result: the policy had Write actions (s3:PutObject) but only the bucket ARN
  - Consequence: s3:PutObject would be denied due to the missing object-level ARN

  Solution:
  - Iterate through all action types for a given resource pattern
  - For each action type, call GetResourcesFromLegacyAction to get the required ARNs
  - Aggregate all ARNs into a set to eliminate duplicates, and use the merged set for the final policy statement
  - The Admin action short-circuits (always includes full permissions)

  Example of correct behavior:
  - Input: List:mybucket/path/*, Write:mybucket/path/*
  - New behavior: aggregates both List and Write resource requirements
  - Result: the policy has Write actions with BOTH bucket and object-level ARNs
  - Outcome: s3:PutObject works correctly on mybucket/path/*

  Added TestCreatePolicyFromLegacyIdentityMultipleActions with 3 test cases:
  1. List + Write on a subpath: verifies bucket + object ARN aggregation
  2. Read + Tagging on a bucket: verifies action-specific ARN combinations
  3. Admin with other actions: verifies Admin dominates the resource ARNs
  All 45+ policy engine tests pass.

* fix: remove the bucket-level ARN from the Read action for consistency

  ISSUE: the Read action was including bucket-level ARNs (arn:aws:s3:::bucket) even though the only S3 actions in Read are s3:GetObject and s3:GetObjectVersion, which are object-level operations. This created a mismatch between the actions and resources in the policy statement.
  ROOT CAUSE: s3:ListBucket was previously removed from the Read action, but the bucket-level ARN was not removed, creating an inconsistency.
  SOLUTION: update the Read action to only generate object-level ARNs using buildObjectResourceArn, consistent with how the Write and Tagging actions work.
This ensures: - Read:mybucket generates arn:aws:s3:::mybucket/* (not bucket ARN) - Read:bucket/prefix/* generates arn:aws:s3:::bucket/prefix/* (object-level only) - Consistency: same actions, same resources, same logic across all object operations Updated test expectations: - TestConvertSingleActionSubpath: Read_on_subpath now expects only object ARN - TestConvertSingleActionNestedPaths: Read nested path now expects only object ARN - TestConvertIdentityToPolicy: Read resources now 1 instead of 2 - TestCreatePolicyFromLegacyIdentityMultipleActions: Read+Tagging aggregates to 1 ARN All 45+ policy engine tests pass * doc * fmt * fix: address Copilot code review on Read action consistency and missing S3 action mappings - Clarify MapToStatementAction comment to reflect exact lookup (not pattern matching) - Add missing S3 actions to baseS3ActionMap: - ListBucketVersions, ListAllMyBuckets for bucket operations - GetBucketCors, PutBucketCors, DeleteBucketCors for CORS - GetBucketNotification, PutBucketNotification for notifications - GetBucketObjectLockConfiguration, PutBucketObjectLockConfiguration for object lock - GetObjectVersionTagging for version tagging - GetObjectVersionAcl, PutBucketAcl for ACL operations - PutBucketTagging, DeleteBucketTagging for bucket tagging - Fix Read action scope inconsistency with GetActionMappings(): - Previously: only included GetObject, GetObjectVersion - Now: includes full Read set (14 actions) from GetActionMappings - Includes both bucket-level (ListBucket*, GetBucket*) and object-level (GetObject*) ARNs - Bucket ARN enables ListBucket operations, object ARN enables GetObject operations - Update all test expectations: - TestConvertSingleActionSubpath: Read now returns 2 ARNs (bucket + objects) - TestConvertSingleActionNestedPaths: Read nested path now includes bucket ARN - TestGetResourcesFromLegacyAction: Read test cases updated for consistency - TestCreatePolicyFromLegacyIdentityMultipleActions: Read_and_Tagging now returns 2 ARNs - 
TestConvertIdentityToPolicy: Updated to expect 14 Read actions and 2 resources Fixes: Inconsistency between convertSingleAction Read case and GetActionMappings function * fmt * fix: align convertSingleAction with GetActionMappings and add bucket validation - Fix Write action: now includes all 16 actions from GetActionMappings (object and bucket operations) - Includes PutBucketVersioning, PutBucketCors, PutBucketAcl, PutBucketTagging, etc. - Generates both bucket and object ARNs to support bucket-level operations - Fix List action: add ListAllMyBuckets from GetActionMappings - Previously: ListBucket, ListBucketVersions - Now: ListBucket, ListBucketVersions, ListAllMyBuckets - Add bucket validation to prevent malformed ARNs with empty bucket - Fix Tagging action: include bucket-level tagging operations - Previously: only object-level (GetObjectTagging, PutObjectTagging, DeleteObjectTagging) - Now: includes bucket-level (GetBucketTagging, PutBucketTagging, DeleteBucketTagging) - Generates both bucket and object ARNs to support bucket-level operations - Add bucket validation to prevent malformed ARNs: - Admin: return error if bucket is empty - List: generate empty resources if bucket is empty - Tagging: check bucket before generating ARNs - GetBucketObjectLockConfiguration, PutBucketObjectLockConfiguration: validate bucket - Fix TrimRight issue in extractBucketAndPrefix: - Change from strings.TrimRight(pattern, "/") to remove only one trailing slash - Prevents loss of prefix when pattern has multiple trailing slashes - Update all test cases: - TestConvertSingleActionSubpath: Write now returns 16 actions and bucket+object ARNs - TestConvertSingleActionNestedPaths: Write includes bucket ARN - TestGetResourcesFromLegacyAction: Updated Write and Tagging expectations - TestCreatePolicyFromLegacyIdentityMultipleActions: Updated action/resource counts Fixes: Inconsistencies between convertSingleAction and GetActionMappings for Write/List/Tagging actions * fmt * fix: resolve 
ListMultipartUploads/ListParts mapping inconsistency and add action validation - Fix ListMultipartUploads and ListParts mapping in helpers.go: - Changed from ACTION_LIST to ACTION_WRITE for consistency with GetActionMappings - These operations are part of the multipart write workflow and should map to Write action - Prevents inconsistent behavior when same actions processed through different code paths - Add documentation to clarify multipart operations in Write action: - Explain why ListMultipartUploads and ListParts are part of Write permissions - These are required for meaningful multipart upload workflow management - Add action validation to CreatePolicyFromLegacyIdentity: - Validates action format before processing using ValidateActionMapping - Logs warnings for invalid actions instead of silently skipping them - Provides clearer error messages when invalid action types are used - Ensures users know when their intended permissions weren't applied - Consistent with ConvertLegacyActions validation approach Fixes: Inconsistent action type mappings and silent failure for invalid actions |
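The set-based ARN aggregation described in the CRITICAL FIX above can be sketched as follows. This is a minimal illustration, not the actual SeaweedFS code: `getResourcesForAction` is a hypothetical stand-in for GetResourcesFromLegacyAction, reduced to the List/Write/Read cases.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// getResourcesForAction is a hypothetical stand-in for
// GetResourcesFromLegacyAction: different action types may require
// different ARN shapes for the same bucket/prefix pattern.
func getResourcesForAction(actionType, bucket, prefix string) []string {
	objectArn := "arn:aws:s3:::" + bucket + "/" + prefix + "*"
	bucketArn := "arn:aws:s3:::" + bucket
	switch actionType {
	case "List":
		return []string{bucketArn} // bucket-level operation
	case "Write", "Read":
		return []string{objectArn} // object-level operations
	default:
		return nil
	}
}

// aggregateResources merges the ARNs of ALL action types that target
// the same pattern, de-duplicating via a set, so a List+Write grant
// ends up with both the bucket-level and the object-level ARN.
func aggregateResources(actionTypes []string, bucket, prefix string) []string {
	set := make(map[string]struct{})
	for _, at := range actionTypes {
		for _, arn := range getResourcesForAction(at, bucket, prefix) {
			set[arn] = struct{}{}
		}
	}
	merged := make([]string, 0, len(set))
	for arn := range set {
		merged = append(merged, arn)
	}
	sort.Strings(merged)
	return merged
}

func main() {
	arns := aggregateResources([]string{"List", "Write"}, "mybucket", "path/")
	fmt.Println(strings.Join(arns, "\n"))
}
```

With the single-representative-action bug, only List's bucket ARN would survive; the merged set keeps both, so s3:PutObject on mybucket/path/* is no longer denied.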
6 days ago

1261e93ef2
fix: comprehensive go vet error fixes and add CI enforcement (#7861)
* fix: use keyed fields in struct literals
- Replace unsafe reflect.StringHeader/SliceHeader with safe unsafe.String/Slice (weed/query/sqltypes/unsafe.go)
- Add field names to Type_ScalarType struct literals (weed/mq/schema/schema_builder.go)
- Add the Duration field name to FlexibleDuration struct literals across test files
- Add field names to bson.D struct literals (weed/filer/mongodb/mongodb_store_kv.go)

Fixes go vet warnings about unkeyed struct literals.

* fix: remove unreachable code
- Remove unreachable return statements after infinite for loops
- Remove unreachable code after if/else blocks where all paths return
- Simplify recursive logic by removing an unnecessary for loop (inode_to_path.go)
- Fix a Type_ScalarType literal to use the enum value directly (schema_builder.go)
- Call onCompletionFn on stream error (subscribe_session.go)

Files fixed:
- weed/query/sqltypes/unsafe.go
- weed/mq/schema/schema_builder.go
- weed/mq/client/sub_client/connect_to_sub_coordinator.go
- weed/filer/redis3/ItemList.go
- weed/mq/client/agent_client/subscribe_session.go
- weed/mq/broker/broker_grpc_pub_balancer.go
- weed/mount/inode_to_path.go
- weed/util/skiplist/name_list.go

* fix: avoid copying lock values in protobuf messages
- Use proto.Merge() instead of direct assignment to avoid copying the sync.Mutex in S3ApiConfiguration (iamapi_server.go)
- Add explicit comments noting that channel-received values are already copies before taking addresses (volume_grpc_client_to_master.go)

The protobuf messages contain sync.Mutex fields from the message state, which should not be copied. Using proto.Merge() properly merges messages without copying the embedded mutex.

* fix: correct byte array size for uint32 bit shift operations

The generateAccountId() function only needs 4 bytes to create a uint32 value. Changed from allocating 8 bytes to 4 bytes to match the actual usage. This fixes the go vet warning about shifting 8-bit values (bytes) by more than 8 bits.
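The byte-array-size fix comes down to the pattern below: a uint32 needs exactly 4 bytes. This is a simplified sketch (not the real generateAccountId); encoding/binary expresses the shifts safely.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// buildUint32 shows the corrected pattern: a uint32 needs exactly
// 4 bytes. Allocating 8 bytes and shifting each one into a uint32
// trips go vet, since positions 4..7 would be shifted past 32 bits.
func buildUint32(b [4]byte) uint32 {
	// Equivalent to the manual form
	// uint32(b[0])<<24 | uint32(b[1])<<16 | uint32(b[2])<<8 | uint32(b[3])
	return binary.BigEndian.Uint32(b[:])
}

func main() {
	id := buildUint32([4]byte{0x01, 0x02, 0x03, 0x04})
	fmt.Printf("0x%08x\n", id) // 0x01020304
}
```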
* fix: ensure context cancellation on all error paths

In broker_client_subscribe.go, ensure subscriberCancel() is called on all error return paths:
- When stream creation fails
- When partition assignment fails
- When sending the initialization message fails

This prevents context leaks when an error occurs during subscriber creation.

* fix: ensure subscriberCancel called for CreateFreshSubscriber stream.Send error

Ensure subscriberCancel() is called when stream.Send fails in CreateFreshSubscriber.

* ci: add go vet step to prevent future lint regressions
- Add a go vet step to the GitHub Actions workflow
- Filter the known protobuf lock warnings (MessageState sync.Mutex); these are expected in generated protobuf code and are safe
- Prevents accumulation of go vet errors in future PRs
- The step runs before the build to catch issues early

* fix: resolve remaining syntax and logic errors in vet fixes
- Fixed syntax errors in filer_sync.go caused by missing closing braces
- Added a missing closing brace for an if block and a function
- Synchronized fixes to match previous commits on the branch

* fix: add missing return statements to daemon functions
- Add 'return false' after the infinite loops in filer_backup.go and filer_meta_backup.go
- Satisfies the declared bool return type signatures
- Maintains consistency with other daemon functions (runMaster, runFilerSynchronize, runWorker)
- While unreachable, the explicit return satisfies the function signature contract

* fix: add nil check for onCompletionFn in SubscribeMessageRecord
- Check that onCompletionFn is not nil before calling it
- Prevents a potential panic if a nil function is passed
- Matches the pattern used in other callback functions

* docs: clarify unreachable return statements in daemon functions
- Add comments documenting that the return statements satisfy the function signature
- Explain that these returns follow infinite loops and are unreachable
- Improves code clarity for future maintainers
7 days ago

88ed187c27
fix(worker): add metrics HTTP server and health checks for Kubernetes (#7860)
* feat(worker): add metrics HTTP server and debug profiling support
- Add the -metricsPort flag to enable a Prometheus metrics endpoint
- Add the -metricsIp flag to configure the metrics server bind address
- Implement a /metrics endpoint for Prometheus-compatible metrics
- Implement a /health endpoint for Kubernetes readiness/liveness probes
- Add the -debug flag to enable a pprof debugging server
- Add the -debug.port flag to configure the debug server port
- Fix a stats package import naming conflict by using an alias
- Update usage examples to show the new flags

Fixes #7843

* feat(helm): add worker metrics and health check support
- Update the worker readiness probe to use httpGet on the /health endpoint
- Update the worker liveness probe to use httpGet on the /health endpoint
- Add the metricsPort flag to the worker command in the deployment template
- Support both httpGet and tcpSocket probe types for backward compatibility
- Update values.yaml with the health check configuration

This enables Kubernetes pod lifecycle management for worker components through proper health checks on the new metrics HTTP endpoint.

* feat(mini): align all services to share single debug and metrics servers
- Disable S3's separate debug server in mini mode (port 6060 is now shared by all)
- Add metrics server startup to the embedded worker for health monitoring
- All services now share the single metrics port (9327) and single debug port (6060)
- Consistent pattern with the master, filer, volume, and webdav services

* fix(worker): fix variable shadowing in health check handler
- Rename the http.ResponseWriter parameter from 'w' to 'rw' to avoid shadowing the outer 'w *worker.Worker' parameter
- Prevents potential bugs if future code tries to use worker state in the handler
- Improves code clarity and follows Go best practices

* fix(worker): remove unused worker parameter in metrics server
- Change the 'w *worker.Worker' parameter to '_' as it is not used
- Clarifies that the parameter is intentionally unused
- Follows Go best practices and improves code clarity

* fix(helm): fix trailing backslash syntax errors in worker command
- Fix conditional backslash placement to prevent shell syntax errors
- Only add a backslash when metricsPort OR extraArgs are present
- Prevents worker pod startup failures due to malformed command arguments
- Ensures proper shell command parsing regardless of configuration state

* refactor(worker): use standard stats.StartMetricsServer for consistency
- Replace the custom metrics server implementation with stats.StartMetricsServer to match the pattern used in the master, volume, s3, and filer_sync components
- Simplifies the code and improves maintainability
- Uses glog.Fatal for errors (consistent with other SeaweedFS components)
- Remove the unused net/http and prometheus/promhttp imports
- Automatically provides the /metrics and /health endpoints via the standard implementation
1 week ago

621ff124f0
fix: ensure Helm chart is published only after container images are available (#7859)
fix: consolidate Helm chart release with container image build

Resolves issue #7855 by consolidating the Helm chart release workflow with the container image build workflow. This ensures perfect alignment:
1. Container images are built and pushed to GHCR
2. Images are copied from GHCR to Docker Hub
3. The Helm chart is published only after step 2 completes

Previously, the Helm chart was published immediately on tag push, before the images were available in Docker Hub, causing deployment failures.

Changes:
- Added a helm-release job to container_release_unified.yml that depends on the copy-to-dockerhub job
- Removed the helm_chart_release.yml workflow (consolidated into the unified release)

Benefits:
- No race conditions between image push and chart publication
- Users can deploy immediately after a release
- A single source of truth for the release process
- Clearer job dependencies and execution flow
1 week ago

9c784cf9e2
fix: use path to handle urls in weed admin file browser (#7858)
* fix: use path instead of filepath to handle URLs in the weed admin file browser

* test: add comprehensive tests for file browser path handling
- Test breadcrumb generation for various path scenarios
- Test path handling with forward slashes (URL compatibility)
- Test parent path calculation for Windows compatibility
- Test file extension handling using path.Ext
- Test bucket path detection logic

These tests verify that the switch from the filepath package to the path package works correctly and handles URLs properly across all platforms.

* refactor: simplify fullPath construction using path.Join

Replace verbose manual path construction with path.Join, which:
- Handles trailing slashes automatically
- Is more concise and readable
- Is more robust for edge cases

* fix: normalize path in ShowFileBrowser and rename generateBreadcrumbs parameter

Critical fix:
- Add util.CleanWindowsPath() normalization to the path parameter in the ShowFileBrowser handler, matching the pattern used in the other file operation handlers (lines 273, 464)
- This ensures Windows-style backslashes are converted to forward slashes before processing, fixing path handling issues on Windows

Consistency improvement:
- Rename the path parameter to dir in the generateBreadcrumbs function
- Aligns with the parameter rename in GetFileBrowser for consistent naming throughout the file

* test: improve coverage for Windows path handling and production code behavior

Address reviewer feedback by enhancing test quality:
1. Improved test documentation:
   - Added clear comments explaining what each test validates
   - Clarified which tests validate expected behavior vs production code
   - Documented the Windows path normalization flow
2. Enhanced actual production code testing:
   - TestGenerateBreadcrumbs: calls the actual production function
   - TestBreadcrumbPathFormatting: validates the production output format
   - TestDirectoryNavigation: integration-style test for the complete flow
3. Added new test functions for better coverage:
   - TestPathJoinHandlesEdgeCases: verifies path.Join behavior
   - TestWindowsPathNormalizationBehavior: documents expected normalization
   - TestDirectoryNavigation: complete navigation flow test
4. Improved test organization:
   - Fixed duplicate field naming issues
   - Better test names for clarity
   - More comprehensive edge case coverage

These improvements ensure the fix for issue #7628 (Windows path handling) is properly validated across the complete flow from handler to path logic.

* test: use actual util.CleanWindowsPath function in Windows path normalization test

Address reviewer feedback by testing the actual production function:
- Import the util package for CleanWindowsPath
- Call the real util.CleanWindowsPath() instead of reimplementing its logic
- Ensures the test validates the actual implementation, not just the expected behavior
- Added more test cases for edge cases (simple path, deep nesting)

This change validates that the Windows path normalization in the ShowFileBrowser handler (handlers/file_browser_handlers.go:64) works correctly with the actual util.CleanWindowsPath function.

* style: fix indentation in TestPathJoinHandlesEdgeCases

Align the t.Errorf statement inside the if block with proper indentation. The error message now correctly aligns with the if block body, maintaining consistent indentation throughout the function.

* test: restore backslash validation check in TestPathJoinHandlesEdgeCases

---------

Co-authored-by: Chris Lu <chris.lu@gmail.com>
1 week ago

8d75290601 · 4.04 · 1 week ago

289ec5e2f5
Fix SeaweedFS S3 bucket extended attributes handling (#7854)
* refactor: Convert versioning to three-state string model matching AWS S3
- Change the VersioningEnabled bool to a VersioningStatus string in the S3Bucket struct
- Add a GetVersioningStatus() function returning an empty string (never enabled), 'Enabled', or 'Suspended'
- Update StoreVersioningInExtended() to delete the key instead of setting 'Suspended'
- Ensures the Admin UI and S3 API use a consistent versioning state representation

* fix: Add validation for bucket quota and Object Lock configuration
- Prevent buckets with quota enabled but size=0 (validation check)
- Fix Object Lock mode handling to only pass the mode when setDefaultRetention is true
- Ensures proper extended attribute storage for the Object Lock configuration
- Matches AWS S3 behavior for Object Lock setup

* feat: Handle versioned objects in bucket details view
- Recognize .versions directories as versioned objects in listBucketObjects()
- Extract size and mtime from extended attribute metadata (ExtLatestVersionSizeKey, ExtLatestVersionMtimeKey)
- Add length validation (8 bytes) before parsing extended attribute byte arrays
- Update GetBucketDetails() and GetS3Buckets() to use the new GetVersioningStatus()
- Properly display versioned objects without the .versions suffix in bucket details

* ui: Update bucket management UI to show three-state versioning and Object Lock
- Change the versioning display from binary (Enabled/Disabled) to three-state (Not configured/Enabled/Suspended)
- Update the Object Lock display to show 'Not configured' instead of 'Disabled'
- Fix the bucket details modal to use bucket.versioning_status instead of bucket.versioning_enabled
- Update the displayBucketDetails() JavaScript to handle the three versioning states

* chore: Regenerate template code for bucket UI changes
- Generated from the updated s3_buckets.templ
- Reflects the three-state versioning and Object Lock UI improvements
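The three-state model can be sketched as below. This is an illustration of the state machine only: the attribute key name and map-based storage are assumptions, not the real extended-attribute code. The point is that "never configured" is represented by the key being absent, so deleting the key (rather than writing 'Suspended') keeps the three states distinct, matching AWS S3 semantics.

```go
package main

import "fmt"

// versioningKey is a hypothetical extended-attribute key name.
const versioningKey = "s3.versioning"

// getVersioningStatus mirrors the AWS S3 three-state model:
// "" (never configured), "Enabled", or "Suspended".
func getVersioningStatus(extended map[string]string) string {
	v, ok := extended[versioningKey]
	if !ok {
		return "" // never configured: the key is absent entirely
	}
	return v // "Enabled" or "Suspended"
}

// storeVersioning deletes the key for the never-configured state
// instead of writing a value, so the three states stay distinct.
func storeVersioning(extended map[string]string, status string) {
	if status == "" {
		delete(extended, versioningKey)
		return
	}
	extended[versioningKey] = status
}

func main() {
	ext := map[string]string{}
	fmt.Printf("%q\n", getVersioningStatus(ext)) // ""
	storeVersioning(ext, "Enabled")
	fmt.Printf("%q\n", getVersioningStatus(ext)) // "Enabled"
	storeVersioning(ext, "Suspended")
	fmt.Printf("%q\n", getVersioningStatus(ext)) // "Suspended"
}
```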
1 week ago

683e3d06a4 · go mod tidy · 1 week ago

2567be8040
refactor: remove unused gRPC connection age parameters (#7852)
The GrpcMaxConnectionAge and GrpcMaxConnectionAgeGrace constants have a troubled history: they were removed in 2022 due to gRPC issues, reverted later, and recently re-added. However, they are not essential to the core worker reconnection fix, which was solved through proper goroutine ordering. The Docker Swarm DNS handling mentioned in the comments is not critical, and these parameters have caused problems in the past. Removing them simplifies the configuration without losing functionality.
1 week ago

14df5d1bb5
fix: improve worker reconnection robustness and prevent handleOutgoing hang (#7838)
* feat: add automatic port detection and fallback for mini command - Added port availability detection using TCP binding tests - Implemented port fallback mechanism searching for available ports - Support for both HTTP and gRPC port handling - IP-aware port checking using actual service bind address - Dual-interface verification (specific IP and wildcard 0.0.0.0) - All services (Master, Volume, Filer, S3, WebDAV, Admin) auto-reallocate to available ports - Enables multiple mini instances to run simultaneously without conflicts * fix: use actual bind IP for service health checks - Previously health checks were hardcoded to localhost (127.0.0.1) - This caused failures when services bind to actual IP (e.g., 10.21.153.8) - Now health checks use the same IP that services are binding to - Fixes Volume and other service health check failures on non-localhost IPs * refactor: improve port detection logic and remove gRPC handling duplication - findAvailablePortOnIP now returns 0 on failure instead of unavailable port Allows callers to detect when port finding fails and handle appropriately - Remove duplicate gRPC port handling from ensureAllPortsAvailableOnIP All gRPC port logic is now centralized in initializeGrpcPortsOnIP - Log final port configuration only after all ports are finalized Both HTTP and gRPC ports are now correctly initialized before logging - Add error logging when port allocation fails Makes debugging easier when ports can't be found * refactor: fix race condition and clean up port detection code - Convert parallel HTTP port checks to sequential to prevent race conditions where multiple goroutines could allocate the same available port - Remove unused 'sync' import since WaitGroup is no longer used - Add documentation to localhost wrapper functions explaining they are kept for backwards compatibility and future use - All gRPC port logic is now exclusively handled in initializeGrpcPortsOnIP eliminating any duplication in ensureAllPortsAvailableOnIP * 
refactor: address code review comments - constants, helper function, and cleanup - Define GrpcPortOffset constant (10000) to replace magic numbers throughout the code for better maintainability and consistency - Extract bindIp determination logic into getBindIp() helper function to eliminate code duplication between runMini and startMiniServices - Remove redundant 'calculatedPort = calculatedPort' assignment that had no effect - Update all gRPC port calculations to use GrpcPortOffset constant (lines 489, 886 and the error logging at line 501) * refactor: remove unused wrapper functions and update documentation - Remove unused localhost wrapper functions that were never called: - isPortOpen() - wrapper around isPortOpenOnIP with hardcoded 127.0.0.1 - findAvailablePort() - wrapper around findAvailablePortOnIP with hardcoded 127.0.0.1 - ensurePortAvailable() - wrapper around ensurePortAvailableOnIP with hardcoded 127.0.0.1 - ensureAllPortsAvailable() - wrapper around ensureAllPortsAvailableOnIP with hardcoded 127.0.0.1 Since this is new functionality with no backwards compatibility concerns, these wrapper functions were not needed. The comments claiming they were 'kept for future use or backwards compatibility' are no longer valid. - Update documentation to reference GrpcPortOffset constant instead of hardcoded 10000: - Update comment in ensureAllPortsAvailableOnIP to use GrpcPortOffset - Update admin.port.grpc flag help text to reference GrpcPortOffset Note: getBindIp() is actually being used and should be retained (contrary to the review comment suggesting it was unused - it's called in both runMini and startMiniServices functions) * refactor: prevent HTTP/gRPC port collisions and improve error handling - Add upfront reservation of all calculated gRPC ports before allocating HTTP ports to prevent collisions where an HTTP port allocation could use a port that will later be needed for a gRPC port calculation. 
Example scenario that is now prevented: - Master HTTP reallocated from 9333 to 9334 (original in use) - Filer HTTP search finds 19334 available and assigns it - Master gRPC calculated as 9334 + GrpcPortOffset = 19334 → collision! Now: reserved gRPC ports are tracked upfront and HTTP port search skips them. - Improve admin server gRPC port fallback error handling: - Change from silent V(1) verbose log to Warningf to make the error visible - Update comment to clarify this indicates a problem in the port initialization sequence - Add explanation that the fallback calculation may cause bind failure - Update ensureAllPortsAvailableOnIP comment to clarify it avoids reserved ports * fix: enforce reserved ports in HTTP allocation and improve admin gRPC fallback Critical fixes for port allocation safety: 1. Make findAvailablePortOnIP and ensurePortAvailableOnIP aware of reservedPorts: - Add reservedPorts map parameter to both functions - findAvailablePortOnIP now skips reserved ports when searching for alternatives - ensurePortAvailableOnIP passes reservedPorts through to findAvailablePortOnIP - This prevents HTTP ports from being allocated to ports reserved for gRPC 2. Update ensureAllPortsAvailableOnIP to pass reservedPorts: - Pass the reservedPorts map to ensurePortAvailableOnIP calls - Maintains the map updates (delete/add) for accuracy as ports change 3. 
Replace blind admin gRPC port fallback with proper availability checks: - Previous code just calculated *miniAdminOptions.port + GrpcPortOffset - New code checks both the calculated port and finds alternatives if needed - Uses the same availability checking logic as initializeGrpcPortsOnIP - Properly logs the fallback process and any port changes - Will fail gracefully if no available ports found (consistent with other services) These changes eliminate two critical vulnerabilities: - HTTP port allocation can no longer accidentally claim gRPC ports - Admin gRPC port fallback no longer blindly uses an unchecked port * fix: prevent gRPC port collisions during multi-service fallback allocation Critical fix for gRPC port allocation safety across multiple services: Problem: When multiple services need gRPC port fallback allocation in sequence (e.g., Master gRPC unavailable → finds alternative, then Filer gRPC unavailable → searches from calculated port), there was no tracking of previously allocated gRPC ports. This could allow two services to claim the same port. Scenario that is now prevented: - Master gRPC: calculated 19333 unavailable → finds 19334 → assigns 19334 - Filer gRPC: calculated 18888 unavailable → searches from 18889, might land on 19334 if consecutive ports in range are unavailable (especially with custom port configurations or in high-port-contention environments) Solution: - Add allocatedGrpcPorts map to track gRPC ports allocated within the function - Check allocatedGrpcPorts before using calculated port for each service - Pass allocatedGrpcPorts to findAvailablePortOnIP when finding fallback ports - Add allocatedGrpcPorts[port] = true after each successful allocation - This ensures no two services can allocate the same gRPC port The fix handles both: 1. Calculated gRPC ports (when grpcPort == 0) 2. 
Explicitly set gRPC ports (when user provides -service.port.grpc value) While default port spacing makes collision unlikely, this fix is essential for: - Custom port configurations - High-contention environments - Edge cases with many unavailable consecutive ports - Correctness and safety guarantees * feat: enforce hard-fail behavior for explicitly specified ports When users explicitly specify a port via command-line flags (e.g., -s3.port=8333), the server should fail immediately if the port is unavailable, rather than silently falling back to an alternative port. This prevents user confusion and makes misconfiguration failures obvious. Changes: - Modified ensurePortAvailableOnIP() to check if a port was explicitly passed via isFlagPassed() - If an explicit port is unavailable, return error instead of silently allocating alternative - Updated ensureAllPortsAvailableOnIP() to handle the returned error and fail startup - Modified runMini() to check error from ensureAllPortsAvailableOnIP() and return false on failure - Default ports (not explicitly specified) continue to fallback to available alternatives This ensures: - Explicit ports: fail if unavailable (e.g., -s3.port=8333 fails if 8333 is taken) - Default ports: fallback to alternatives (e.g., s3.port without flag falls back to 8334 if 8333 taken) * fix: accurate error messages for explicitly specified unavailable ports When a port is explicitly specified via CLI flags but is unavailable, the error message now correctly reports the originally requested port instead of reporting a fallback port that was calculated internally. The issue was that the config file applied after CLI flag parsing caused isFlagPassed() to return true for ports loaded from the config file (since flag.Visit() was called during config file application), incorrectly marking them as explicitly specified. 
Solution: Capture which port flags were explicitly passed on the CLI BEFORE the config file is applied, storing them in the explicitPortFlags map. This preserves the accurate distinction between user-specified ports and defaults/config-file ports. Example: - User runs: weed mini -dir=. -s3.port=22 - Now correctly shows: 'port 22 for S3 (specified by flag s3.port) is not available' - Previously incorrectly showed: 'port 8334 for S3...' (some calculated fallback) * fix: respect explicitly specified ports and prevent config file override When a port is explicitly specified via CLI flags (e.g., -s3.port=8333), the config file options should NOT override it. Previously, config file options would be applied if the flag value differed from default, but this check wasn't sufficient to prevent override in all cases. Solution: Check the explicitPortFlags map before applying any config file port options. If a port was explicitly passed on the CLI, skip applying the config file option for that port. This ensures: - Explicit ports take absolute precedence over config file ports - Config file ports are only used if port wasn't specified on CLI - Example: 'weed mini -s3.port=8333' will use 8333, never the config file value * fix: don't print usage on port allocation error When a port allocation fails (e.g., explicit port is unavailable), exit immediately without showing the usage example. This provides cleaner error output when the error is expected (port conflict). * refactor: clean up code quality issues Remove no-op assignment (calculatedPort = calculatedPort) that had no effect. The variable already holds the correct value when no alternative port is found. Improve documentation for the defensive gRPC port initialization fallback in startAdminServer. 
While this code shouldn't execute in the normal flow, because ensureAllPortsAvailableOnIP is called earlier in runMini, the fallback handles edge cases where port initialization may have been skipped or failed silently due to configuration changes or error-handling paths.

* fix: improve worker reconnection robustness and prevent handleOutgoing hang

- Add a dedicated streamFailed signaling channel to abort registration waits early when the stream dies
- Add a per-connection regWait channel to route RegistrationResponse separately from the shared incoming channel, avoiding a race where other consumers steal the response
- Refactor the handleOutgoing() loop to select on streamExit/errCh, ensuring old handlers exit cleanly on reconnect (prevents stale senders competing with the new stream)
- Buffer msgCh to reduce shutdown edge cases
- Clean up the streamFailed and regWait channels on reconnect/disconnect
- Fixes registration timeouts and potential stream lifecycle hangs under aggressive server max_age recycling

* fix: prevent deadlock when a stream error occurs - make the cmds send non-blocking

If managerLoop is blocked (e.g., waiting on regWait), a blocking send to cmds will deadlock handleIncoming. Make the send non-blocking to prevent this.
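The non-blocking send is the standard Go `select`-with-`default` idiom. A minimal sketch, with a simplified `command` type standing in for whatever the worker client actually sends on `cmds`:

```go
package main

import "fmt"

type command struct{ action string }

// notifyManager performs a non-blocking send. If managerLoop is busy
// (e.g., blocked waiting on a registration response), a plain channel
// send from handleIncoming would deadlock; with a default case the
// sender falls through instead and reports whether delivery happened.
func notifyManager(cmds chan command, c command) bool {
	select {
	case cmds <- c:
		return true
	default:
		return false
	}
}

func main() {
	cmds := make(chan command, 1)
	fmt.Println(notifyManager(cmds, command{"streamError"})) // buffer free: delivered
	fmt.Println(notifyManager(cmds, command{"streamError"})) // buffer full: dropped
}
```

The trade-off is that a dropped send loses the message, which is why a later commit in this PR adds an async fallback so stream errors are never lost.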
* fix: address code review comments on mini.go port allocation

- Remove the flawed fallback gRPC port initialization and convert it to a fatal error (ensures port initialization issues are caught immediately instead of failing silently with an empty reserved-ports map)
- Extract common port validation logic to eliminate duplication between calculated and explicitly set gRPC port handling

* Fix critical race condition and improve error handling in worker client

- Capture channel pointers before checking for nil (prevents a TOCTOU race with reconnect)
- Use an async fallback goroutine for the cmds send to prevent error loss when the manager is busy
- Consistently close the regWait channel on disconnect (matches streamFailed behavior)
- Complete cleanup of channels on failed registration
- Improve error messages for clarity (replace 'timeout' with 'failed' where appropriate)

* Add debug logging for registration response routing

Add glog.V(3) and glog.V(2) logs to track successful and dropped registration responses in handleIncoming, helping diagnose registration issues in production.

* Update weed/worker/client.go

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Ensure stream errors are never lost by using an async fallback

When handleIncoming detects a stream error, it queues ActionStreamError to managerLoop with a non-blocking send. If managerLoop is busy and the cmds channel is full, spawn a goroutine to queue the error asynchronously. This ensures the manager is always notified of stream failures, preventing the connection from remaining in an inconsistent state (connected=true while the stream is dead).

* Refactor handleOutgoing to eliminate duplicate error handling code

Extract error handling and cleanup logic into helper functions to avoid duplication in nested select statements. This improves maintainability and reduces the risk of inconsistencies when updating error handling logic.
* Prevent goroutine leaks by adding timeouts to blocking cmds sends

Add 2-second timeouts to both handleStreamError and the async fallback goroutine when sending ActionStreamError to the cmds channel. This prevents the handleOutgoing and handleIncoming goroutines from blocking indefinitely if managerLoop is no longer receiving (e.g., during shutdown), avoiding resource leaks.

* Properly close the regWait channel in reconnect to prevent resource leaks

Close the regWait channel before setting it to nil in reconnect(), matching the pattern used in handleDisconnect(). This ensures any goroutines waiting on this channel during reconnection are properly signaled, preventing them from hanging.

* Use a non-blocking async pattern in handleOutgoing error reporting

Refactor handleStreamError to use a non-blocking send with an async fallback goroutine, matching the pattern used in handleIncoming. This allows handleOutgoing to exit immediately when errors occur rather than blocking for up to 2 seconds, improving responsiveness and consistency across handlers.
* fix: drain the regWait channel before closing to prevent message loss

- Add a drain loop before closing regWait in reconnect() cleanup
- Add a drain loop before closing regWait in handleDisconnect() cleanup
- Ensures no pending RegistrationResponse messages are lost during channel closure

* docs: add comments explaining the regWait buffered channel design

- Document that regWait's buffer size of 1 prevents race conditions
- Explain the non-blocking send pattern between sendRegistration and handleIncoming
- Clarify the timing of registration response handling in handleIncoming

* fix: improve error messages and channel handling in sendRegistration

- Clarify the error message when the stream fails before registration is sent
- Use the two-value receive form to properly detect closed channels
- Better distinguish between closed-channel and nil-value scenarios

* refactor: extract drain-and-close channel logic into a helper function

- Create drainAndCloseRegWaitChannel() to eliminate code duplication
- Replace 3 copies of the drain-and-close logic with a single function call
- Improves maintainability and consistency across cleanup paths

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
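The extracted helper can be sketched as follows. The `RegistrationResponse` type here is a placeholder for the real protobuf message, and returning a drain count is an addition for the example; the mechanism is the one described above: empty the buffer so a pending response is observed rather than discarded, then close so waiters are released:

```go
package main

import "fmt"

type RegistrationResponse struct{ ID string }

// drainAndCloseRegWait empties any buffered responses before closing.
// The drain loop guarantees a pending RegistrationResponse is consumed
// rather than lost, and the close releases any goroutine still blocked
// receiving on the channel (a receive on a closed channel yields the
// zero value with ok == false).
func drainAndCloseRegWait(ch chan *RegistrationResponse) int {
	drained := 0
	for {
		select {
		case <-ch:
			drained++
		default:
			close(ch)
			return drained
		}
	}
}

func main() {
	regWait := make(chan *RegistrationResponse, 1) // buffer of 1, as documented above
	regWait <- &RegistrationResponse{ID: "w1"}     // a response nobody consumed yet
	fmt.Println("drained:", drainAndCloseRegWait(regWait))
	_, ok := <-regWait
	fmt.Println("closed:", !ok)
}
```

Note this is only safe when no other goroutine can still send on the channel, which is why the PR routes all sends through the non-blocking pattern and nils the field under the client's lock.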
1 week ago
ce71968bad
chore(deps): bump golang.org/x/net from 0.47.0 to 0.48.0 (#7849)
* chore(deps): bump golang.org/x/net from 0.47.0 to 0.48.0

Bumps [golang.org/x/net](https://github.com/golang/net) from 0.47.0 to 0.48.0.
- [Commits](https://github.com/golang/net/compare/v0.47.0...v0.48.0)

---
updated-dependencies:
- dependency-name: golang.org/x/net
  dependency-version: 0.48.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* mod

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
1 week ago
a898160e39
chore(deps): bump golang.org/x/crypto from 0.45.0 to 0.46.0 (#7847)
* chore(deps): bump golang.org/x/crypto from 0.45.0 to 0.46.0

Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.45.0 to 0.46.0.
- [Commits](https://github.com/golang/crypto/compare/v0.45.0...v0.46.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.46.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* mod

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
1 week ago