* Fix IAM inline user policy retrieval
* fmt
* Persist inline user policies to avoid loss on server restart
- Use s3ApiConfig.PutPolicies/GetPolicies for persistent storage instead of non-persistent global map
- Remove unused global policyDocuments map
- Update PutUserPolicy to store policies in persistent storage
- Update GetUserPolicy to read from persistent storage
- Update DeleteUserPolicy to clean up persistent storage
- Add mock IamS3ApiConfig for testing
- Improve test to verify policy statements are not merged or lost
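A minimal sketch of the persistence pattern, with assumed names (PolicyDocument and the config interface are simplified stand-ins, not the exact SeaweedFS types):

```go
// PolicyDocument is a simplified stand-in for the real policy type.
type PolicyDocument struct {
	Version   string
	Statement []map[string]interface{}
}

// iamS3ApiConfig mirrors the PutPolicies/GetPolicies calls named above;
// the exact signatures are assumptions.
type iamS3ApiConfig interface {
	GetPolicies() (map[string]PolicyDocument, error)
	PutPolicies(policies map[string]PolicyDocument) error
}

// putUserPolicy persists an inline policy through the config store rather
// than a process-local map, so it survives a server restart.
func putUserPolicy(cfg iamS3ApiConfig, userName, policyName string, doc PolicyDocument) error {
	policies, err := cfg.GetPolicies()
	if err != nil {
		return err
	}
	policies[userName+":"+policyName] = doc // user-scoped key; see the collision fix below
	return cfg.PutPolicies(policies)
}
```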
* Fix inline policy key collision and action aggregation
* Improve error handling and optimize inline policy management
- GetUserPolicy: Propagate GetPolicies errors instead of silently falling through
- DeleteUserPolicy: Return error immediately on GetPolicies failure
- computeAggregatedActionsForUser: Add optional Policies parameter for I/O optimization
- PutUserPolicy: Reuse fetched policies to avoid redundant GetPolicies call
- Improve logging with clearer messages about best-effort aggregation
- Update test to use exact action string matching instead of substring checks
All 15 tests pass with no regressions.
* Add per-user policy index for O(1) lookup performance
- Extend Policies struct with InlinePolicies, a nested map keyed by user name, then policy name
- Add getOrCreateUserPolicies() helper for safe user map management
- Update computeAggregatedActionsForUser to use direct user map access
- Update PutUserPolicy, GetUserPolicy, DeleteUserPolicy for new structure
- Performance: O(1) user lookups instead of O(all_policies) iteration
- Eliminates string prefix matching loop
- All tests pass; backward compatible with managed policies
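A sketch of the index shape and helper (reusing the PolicyDocument stub from the earlier sketch; the real value type is an assumption):

```go
// Policies holds the per-user inline policy index. Lookup is two map hits,
// O(1), instead of iterating all stored policies and prefix-matching keys.
type Policies struct {
	InlinePolicies map[string]map[string]PolicyDocument // userName -> policyName -> document
}

// getOrCreateUserPolicies returns the per-user map, allocating on first use
// so callers can write into it safely.
func (p *Policies) getOrCreateUserPolicies(userName string) map[string]PolicyDocument {
	if p.InlinePolicies == nil {
		p.InlinePolicies = make(map[string]map[string]PolicyDocument)
	}
	userPolicies, ok := p.InlinePolicies[userName]
	if !ok {
		userPolicies = make(map[string]PolicyDocument)
		p.InlinePolicies[userName] = userPolicies
	}
	return userPolicies
}
```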
* Fix DeleteUserPolicy to validate user existence before storage modification
Refactor DeleteUserPolicy handler to check user existence early:
- First iterate s3cfg.Identities to verify user exists
- Return NoSuchEntity error immediately if user not found
- Only then proceed with GetPolicies and policy deletion
- Capture reference to found identity for direct update
This ensures consistency: if the user doesn't exist, storage is not modified.
Previously the code would delete from storage first and check identity
afterwards, potentially leaving orphaned policies.
Benefits:
- Fail-fast validation before storage operations
- No orphaned policies in storage if validation fails
- Atomic from logical perspective
- Direct identity reference eliminates redundant loop
- All error paths preserved and tested
All 15 tests pass; no functional changes to behavior.
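A sketch of the reordered flow under the nested-index layout above (Identity and the store interface are simplified stand-ins):

```go
import "errors"

type Identity struct{ Name string }

var errNoSuchEntity = errors.New("NoSuchEntity: the user does not exist")

type policyStore interface {
	GetPolicies() (Policies, error)
	PutPolicies(Policies) error
}

func deleteUserPolicy(store policyStore, identities []*Identity, userName, policyName string) error {
	// 1. Fail fast: verify the user exists before touching storage.
	var found *Identity
	for _, ident := range identities {
		if ident.Name == userName {
			found = ident // kept for the direct identity update that follows
			break
		}
	}
	if found == nil {
		return errNoSuchEntity // storage is never modified for a missing user
	}
	// 2. Only now load and modify the persisted policies.
	policies, err := store.GetPolicies()
	if err != nil {
		return err
	}
	if userPolicies := policies.InlinePolicies[userName]; userPolicies != nil {
		delete(userPolicies, policyName)
	}
	return store.PutPolicies(policies)
}
```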
* Fix GetUserPolicy to return NoSuchEntity when inline policy not found
When InlinePolicies[userName] exists but does not contain policyName,
the handler now immediately returns NoSuchEntity error instead of
falling through to the reconstruction logic.
Changes:
- Add else clause after userPolicies[policyName] lookup
- Return IamError(NoSuchEntityException, "policy not found") immediately
- Prevents incorrect fallback to reconstructing ident.Actions
- Ensures an explicit error when the stored policy genuinely doesn't exist
This improves error semantics:
- User has stored inline policies but the named policy is missing → return NoSuchEntity (not reconstruct)
- User has no stored inline policies at all → try reconstruction (backward compat)
- Storage error → return service failure error
All 15 tests pass; no behavioral changes to existing error or success paths.
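A sketch of the corrected lookup order, continuing the stubs from the sketches above (reconstruction details elided):

```go
func getUserPolicy(policies Policies, ident *Identity, userName, policyName string) (PolicyDocument, error) {
	if userPolicies, ok := policies.InlinePolicies[userName]; ok {
		if doc, found := userPolicies[policyName]; found {
			return doc, nil
		}
		// The user has stored inline policies but not this one: report
		// NoSuchEntity instead of falling through to reconstruction.
		return PolicyDocument{}, errNoSuchEntity
	}
	// No stored inline policies at all: rebuild from ident.Actions for
	// backward compatibility (stub below).
	return reconstructFromActions(ident), nil
}

func reconstructFromActions(ident *Identity) PolicyDocument {
	return PolicyDocument{} // legacy reconstruction elided
}
```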
* fix(plugin/worker): make VacuumHandler report MaxExecutionConcurrency from worker startup flag
Previously, MaxExecutionConcurrency was hardcoded to 2 in VacuumHandler.Capability().
The scheduler's schedulerWorkerExecutionLimit() takes the minimum of the UI-configured
PerWorkerExecutionConcurrency and the worker-reported capability limit, so the hardcoded
value silently capped each worker to 2 concurrent vacuum executions regardless of the
--max-execute flag passed at worker startup.
Pass maxExecutionConcurrency into NewVacuumHandler() and wire it through
buildPluginWorkerHandler/buildPluginWorkerHandlers so the capability reflects the actual
worker configuration. The default falls back to 2 when the value is unset or zero.
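A sketch of the wiring (the Capability struct is simplified; field names are assumptions):

```go
type Capability struct{ MaxExecutionConcurrency int32 }

type VacuumHandler struct {
	maxExecutionConcurrency int32
}

// NewVacuumHandler takes the worker's --max-execute value instead of
// hardcoding the limit.
func NewVacuumHandler(maxExecutionConcurrency int32) *VacuumHandler {
	if maxExecutionConcurrency <= 0 {
		maxExecutionConcurrency = 2 // preserve the old default when unset
	}
	return &VacuumHandler{maxExecutionConcurrency: maxExecutionConcurrency}
}

// Capability reports the configured limit, so the scheduler's
// min(UI limit, worker limit) reflects the real worker configuration.
func (h *VacuumHandler) Capability() Capability {
	return Capability{MaxExecutionConcurrency: h.maxExecutionConcurrency}
}
```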
* Update weed/command/worker_runtime.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Anton Ustyugov <anton@devops>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* fix(admin): release mutex before disk I/O in maintenance queue
saveTaskState performs synchronous BoltDB writes. Calling it while
holding mq.mutex.Lock() in AddTask, GetNextTask, and CompleteTask
blocks all readers (GetTasks via RLock) for the full disk write
duration on every task state change.
During a maintenance scan AddTasksFromResults calls AddTask for every
volume — potentially hundreds of times — meaning the write lock is
held almost continuously. The HTTP handler for /maintenance calls
GetTasks which blocks on RLock, exceeding the 30s timeout and
returning 408 to the browser.
Fix: update in-memory state (mq.tasks, mq.pendingTasks) under the
lock as before, then unlock before calling saveTaskState. In-memory
state is the authoritative source; persistence is crash-recovery only
and does not require lock protection during the write.
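A sketch of the lock-scope change, using CompleteTask as the example (types simplified):

```go
import "sync"

type Task struct{ ID, State string }

type MaintenanceQueue struct {
	mutex sync.RWMutex
	tasks map[string]*Task
}

func (mq *MaintenanceQueue) saveTaskState(t *Task) { /* synchronous BoltDB write, elided */ }

func (mq *MaintenanceQueue) CompleteTask(taskID string) {
	mq.mutex.Lock()
	task := mq.tasks[taskID]
	var snapshot Task
	if task != nil {
		task.State = "completed" // in-memory state is authoritative
		snapshot = *task         // copy under the lock for the write below
	}
	mq.mutex.Unlock() // release BEFORE the disk write

	if task != nil {
		// Crash-recovery persistence only: a slow write no longer blocks
		// GetTasks readers for its full duration.
		mq.saveTaskState(&snapshot)
	}
}
```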
* fix(admin): add mutex to ConfigPersistence to synchronize tasks/ filesystem ops
saveTaskState is now called outside mq.mutex, meaning SaveTaskState,
LoadAllTaskStates, DeleteTaskState, and CleanupCompletedTasks can be
invoked concurrently from multiple goroutines. ConfigPersistence had no
internal synchronization, creating races on the tasks/ directory:
- concurrent os.WriteFile + os.ReadFile on the same .pb file could
yield a partial read and unmarshal error
- LoadAllTaskStates (ReadDir + per-file ReadFile) could see a
directory entry for a file being written or deleted concurrently
- CleanupCompletedTasks (LoadAllTaskStates + DeleteTaskState) could
race with SaveTaskState on the same file
Fix: add tasksMu sync.Mutex to ConfigPersistence, acquired at the top
of SaveTaskState, LoadTaskState, LoadAllTaskStates, DeleteTaskState,
and CleanupCompletedTasks. Extract private Locked helpers so that
CleanupCompletedTasks (which holds tasksMu) can call them internally
without deadlocking.
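A sketch of the mutex plus Locked-helper layout (file I/O elided):

```go
import "sync"

type ConfigPersistence struct {
	tasksMu sync.Mutex // serializes all tasks/ directory I/O
}

func (cp *ConfigPersistence) SaveTaskState(id string, data []byte) error {
	cp.tasksMu.Lock()
	defer cp.tasksMu.Unlock()
	return cp.saveTaskStateLocked(id, data)
}

// CleanupCompletedTasks holds tasksMu for its whole load+delete sequence and
// calls the Locked helpers directly; calling the public methods here would
// self-deadlock because sync.Mutex is not reentrant.
func (cp *ConfigPersistence) CleanupCompletedTasks(completedIDs []string) error {
	cp.tasksMu.Lock()
	defer cp.tasksMu.Unlock()
	for _, id := range completedIDs {
		if err := cp.deleteTaskStateLocked(id); err != nil {
			return err
		}
	}
	return nil
}

func (cp *ConfigPersistence) saveTaskStateLocked(id string, data []byte) error {
	return nil // os.WriteFile to tasks/<id>.pb, elided
}

func (cp *ConfigPersistence) deleteTaskStateLocked(id string) error {
	return nil // os.Remove of tasks/<id>.pb, elided
}
```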
---------
Co-authored-by: Anton Ustyugov <anton@devops>
* Avoid stripping default ports (80/443) from the Host header in extractHostHeader.
This fixes SignatureDoesNotMatch errors when SeaweedFS is accessed via a proxy
(like Kong Ingress) that explicitly includes the port in the Host header or
X-Forwarded-Host, which S3 clients sign.
Also cleaned up unused variables and logic after refactoring.
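A sketch of the intended behavior (the real function has more fallbacks; this is an assumption-laden simplification):

```go
import "net/http"

// extractHostHeader returns the host string to sign against, keeping any
// port verbatim. Previously ":80"/":443" were stripped here, but a proxy
// such as Kong may forward "example.com:443" and the client signs exactly
// that, so any normalization produces SignatureDoesNotMatch.
func extractHostHeader(r *http.Request) string {
	if forwarded := r.Header.Get("X-Forwarded-Host"); forwarded != "" {
		return forwarded
	}
	return r.Host
}
```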
* fix(s3api): make ListObjectsV1 namespaced and stop marker-echo pagination loops
* test(s3api): harden marker-echo coverage and align V1 encoding tag
* test(s3api): cover encoded marker matching and trim redundant setup
* refactor(s3api): tighten V1 list helper visibility and test mock docs
Fixes #8401
When creating balance/vacuum tasks, the worker maintenance scheduler was
accidentally discarding the custom grpcPort defined on the DataNodeInfo
by using just its HTTP Address string; during gRPC dialing the gRPC port
then defaulted to the HTTP port + 10000.
By using pb.NewServerAddressFromDataNode, the grpcPort suffix is correctly
encoded in the server address string, preventing connection refused errors
for users running volume servers with custom gRPC ports.
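An illustration of the address encoding with simplified stand-ins (SeaweedFS server addresses carry the gRPC port as a suffix on the HTTP address; the exact format here is an assumption):

```go
import "fmt"

type DataNodeInfo struct {
	Address  string // HTTP address, e.g. "10.0.0.5:8080"
	GrpcPort uint32 // custom gRPC port, e.g. 19333
}

// serverAddressFromDataNode keeps the explicit gRPC port in the encoded
// address, so dialing does not fall back to the HTTP port + 10000 default.
func serverAddressFromDataNode(dn DataNodeInfo) string {
	if dn.GrpcPort == 0 {
		return dn.Address // no custom port: downstream applies the +10000 convention
	}
	return fmt.Sprintf("%s.%d", dn.Address, dn.GrpcPort)
}
```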
* helm: refine openshift-values.yaml to remove hardcoded UIDs
Remove hardcoded runAsUser, runAsGroup, and fsGroup from the
openshift-values.yaml example. This allows OpenShift's admission
controller to automatically assign a valid UID from the namespace's
allocated range, avoiding "forbidden" errors when UID 1000 is
outside the permissible range.
Updates #8381, #8390.
* helm: fix volume.logs and add consistent security context comments
* Update README.md
* fix volume.fsck crashing on EC volumes and add multi-volume vacuum support
* address comments
* S3: Truncate timestamps to milliseconds for CopyObjectResult and CopyPartResult
Fixes #8394
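A minimal sketch of the truncation (format string assumed; AWS renders copy timestamps with millisecond precision):

```go
import "time"

// copyResultTimestamp truncates to milliseconds before rendering, so
// CopyObjectResult/CopyPartResult never emit nanosecond precision that
// some S3 clients fail to parse.
func copyResultTimestamp(t time.Time) string {
	// e.g. 2024-05-01T12:34:56.789Z
	return t.UTC().Truncate(time.Millisecond).Format("2006-01-02T15:04:05.000Z")
}
```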
* S3: Address nitpick comments in copy handlers
- Synchronize Mtime and LastModified by capturing time once
- Optimize copyChunksForRange loop
- Use built-in min/max
- Remove dead previewLen code
* Allow user to define admin access and secret key via values
* Add comments to values.yaml
* Add support for read for consistency
* Simplify templating
* Add checksum to s3 config
* Update comments
* Revert "Add checksum to s3 config"
This reverts commit d21a7038a8.
* Enforce IAM for s3tables bucket creation
* Prefer IAM path when policies exist
* Ensure IAM enforcement honors default allow
* address comments
* Reuse the precomputed principal when setting tableBucketMetadata.OwnerAccountID, avoiding a redundant getAccountID call.
* get identity
* fix
* dedup
* fix
* comments
* fix tests
* update iam config
* go fmt
* fix ports
* fix flags
* mini clean shutdown
* Revert "update iam config"
This reverts commit ca48fdbb0a.
Revert "mini clean shutdown"
This reverts commit 9e17f6baff.
Revert "fix flags"
This reverts commit e9e7b29d2f.
Revert "go fmt"
This reverts commit bd3241960b.
* test/s3tables: share single weed mini per test package via TestMain
Previously each top-level test function in the catalog and s3tables
package started and stopped its own weed mini instance. This caused
failures when a prior instance wasn't cleanly stopped before the next
one started (port conflicts, leaked global state).
Changes:
- catalog/iceberg_catalog_test.go: introduce TestMain that starts one
shared TestEnvironment (external weed binary) before all tests and
tears it down after. All individual test functions now use sharedEnv.
Added randomSuffix() for unique resource names across tests.
- catalog/pyiceberg_test.go: updated to use sharedEnv instead of
per-test environments.
- catalog/pyiceberg_test_helpers.go -> pyiceberg_test_helpers_test.go:
renamed to a _test.go file so it can access TestEnvironment which is
defined in a test file.
- table-buckets/setup.go: add package-level sharedCluster variable.
- table-buckets/s3tables_integration_test.go: introduce TestMain that
starts one shared TestCluster before all tests. TestS3TablesIntegration
now uses sharedCluster. Extract startMiniClusterInDir (no *testing.T)
for TestMain use. TestS3TablesCreateBucketIAMPolicy keeps its own
cluster (different IAM config). Remove miniClusterMutex (no longer
needed). Fix Stop() to not panic when t is nil.
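A sketch of the TestMain pattern (TestEnvironment and its start/stop functions are stand-ins for the test helpers described above):

```go
import (
	"fmt"
	"math/rand"
	"os"
	"testing"
)

type TestEnvironment struct{ /* external weed mini process, elided */ }

func startTestEnvironment() (*TestEnvironment, error) { return &TestEnvironment{}, nil }
func (e *TestEnvironment) Stop()                      { /* kill process, elided */ }

var sharedEnv *TestEnvironment

func TestMain(m *testing.M) {
	env, err := startTestEnvironment() // one weed mini for the whole package
	if err != nil {
		fmt.Fprintln(os.Stderr, "failed to start weed mini:", err)
		os.Exit(1)
	}
	sharedEnv = env
	code := m.Run() // every test reuses sharedEnv
	env.Stop()      // single teardown: no port conflicts between tests
	os.Exit(code)
}

// randomSuffix keeps bucket/namespace names unique across tests that share
// one cluster.
func randomSuffix() string { return fmt.Sprintf("%d", rand.Int63()) }
```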
* delete
* parse
* default allow should work with anonymous
* fix port
* iceberg route
The failures came from Iceberg REST using the default bucket warehouse when no prefix is provided. The tests create random buckets, so /v1/namespaces was looking in warehouse and failing. Updated the tests to use the prefixed Iceberg routes (/v1/{bucket}/...) via a small helper.
* test(s3tables): fix port conflicts and IAM ARN matching in integration tests
- Pass -master.dir explicitly to prevent filer store directory collision
between shared cluster and per-test clusters running in the same process
- Pass -volume.port.public and -volume.publicUrl to prevent the global
publicPort flag (mutated from 0 → concrete port by first cluster) from
being reused by a second cluster, causing 'address already in use'
- Remove the flag-reset loop in Stop() that reset global flag values while
other goroutines were reading them (race → panic)
- Fix IAM policy Resource ARN in TestS3TablesCreateBucketIAMPolicy to use
wildcards (arn:aws:s3tables:*:*:bucket/<name>) because the handler
generates ARNs with its own DefaultRegion (us-east-1) and principal name
('admin'), not the test constants testRegion/testAccountID
* docker: fix entrypoint chown guard; helm: add openshift-values.yaml
Fix a regression in entrypoint.sh where the DATA_UID/DATA_GID
ownership comparison was dropped, causing chown -R /data to run
unconditionally on every container start even when ownership was
already correct. Restore the guard so the recursive chown is
skipped when the seaweed user already owns /data — making startup
faster on subsequent runs and a no-op on OpenShift/PVC deployments
where fsGroup has already set correct ownership.
Add k8s/charts/seaweedfs/openshift-values.yaml: an example Helm
overrides file for deploying SeaweedFS on OpenShift (or any cluster
enforcing the Kubernetes restricted Pod Security Standard). Replaces
hostPath volumes with PVCs, sets runAsUser/fsGroup to 1000
(the seaweed user baked into the image), drops all capabilities,
disables privilege escalation, and enables RuntimeDefault seccomp —
satisfying OpenShift's default restricted SCC without needing a
custom SCC or root access.
Fixes #8381"
* fix: resolve gRPC DNS resolution issues in Kubernetes #8384
- Replace direct `grpc.NewClient` calls with `pb.GrpcDial` for consistent connection establishment
- Fix async DNS resolution behavior in K8s with `ndots:5`
- Ensure high-level components use established helper for reliable networking
* refactor: refine gRPC DNS fix and add documentation
- Use instance's grpcDialOption in BrokerClient.ConfigureTopic
- Add detailed comments to GrpcDial explaining Kubernetes DNS resolution rationale
* fix: ensure proper context propagation in broker_client gRPC calls
- Pass the provided `ctx` to `pb.GrpcDial` in `ConfigureTopic` and `GetUnflushedMessages`
- Ensures that timeouts and cancellations are correctly honored during connection establishment
* docs: refine gRPC resolver documentation and cleanup dead code
- Enhanced documentation for `GrpcDial` with explicit warnings about global state mutation when using `resolver.SetDefaultScheme("passthrough")`.
- Recommended `passthrough:///` prefix as the primary migration path for `grpc.NewClient`.
- Removed dead commented-out code for `grpc.WithBlock()` and `grpc.WithTimeout()`.
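A sketch of the `passthrough:///` migration path recommended above for direct grpc.NewClient callers (pb.GrpcDial wraps this concern inside SeaweedFS):

```go
import (
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// dialPassthrough hands the address verbatim to the dialer, which resolves
// it synchronously via net.Dial. This sidesteps gRPC's asynchronous "dns"
// resolver, which interacts badly with Kubernetes ndots:5 search domains.
func dialPassthrough(address string) (*grpc.ClientConn, error) {
	return grpc.NewClient("passthrough:///"+address,
		grpc.WithTransportCredentials(insecure.NewCredentials()))
}
```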
* Persist managed IAM policies
* Add IAM list/get policy integration test
* Faster marker lookup and cleanup
* Handle delete conflict and improve listing
* Add delete-in-use policy integration test
* Stabilize policy ID and guard path prefix
* Tighten CreatePolicy guard and reload
* Add ListPolicyNames to credential store
* s3: fix signature mismatch with non-standard ports and capitalized host
- ensure host header extraction is case-insensitive in SignedHeaders
- prioritize non-standard ports in X-Forwarded-Host over default ports in X-Forwarded-Port
- add regression tests for both scenarios
fixes https://github.com/seaweedfs/seaweedfs/issues/8382
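A sketch of the case-insensitive SignedHeaders check (helper name is hypothetical):

```go
import "strings"

// signedHeadersIncludeHost reports whether the ';'-separated SignedHeaders
// list names the Host header. The spec lowercases entries, but some clients
// send "Host", so the comparison is case-insensitive.
func signedHeadersIncludeHost(signedHeaders string) bool {
	for _, h := range strings.Split(signedHeaders, ";") {
		if strings.EqualFold(strings.TrimSpace(h), "host") {
			return true
		}
	}
	return false
}
```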
* simplify
* iam: add XML responses for managed user policy APIs
* s3api: implement attach/detach/list attached user policies
* s3api: add embedded IAM tests for managed user policies
* iam: update CredentialStore interface and Manager for managed policies
Updated the `CredentialStore` interface to include `AttachUserPolicy`,
`DetachUserPolicy`, and `ListAttachedUserPolicies` methods.
The `CredentialManager` was updated to delegate these calls to the store.
Added common error variables for policy management.
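A sketch of the interface additions (method signatures and error names are assumptions based on the description above):

```go
import "errors"

var (
	ErrPolicyNotFound        = errors.New("policy not found")
	ErrPolicyAlreadyAttached = errors.New("policy already attached to user")
)

type CredentialStore interface {
	// ...existing user/credential methods elided...
	AttachUserPolicy(username, policyName string) error
	DetachUserPolicy(username, policyName string) error
	ListAttachedUserPolicies(username string) ([]string, error)
}
```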
* iam: implement managed policy methods in MemoryStore
Implemented `AttachUserPolicy`, `DetachUserPolicy`, and
`ListAttachedUserPolicies` in the MemoryStore.
Also ensured deep copying of identities includes PolicyNames.
* iam: implement managed policy methods in PostgresStore
Modified Postgres schema to include `policy_names` JSONB column in `users`.
Implemented `AttachUserPolicy`, `DetachUserPolicy`, and `ListAttachedUserPolicies`.
Updated user CRUD operations to handle policy names persistence.
* iam: implement managed policy methods in remaining stores
Implemented user policy management in:
- `FilerEtcStore` (partial implementation)
- `IamGrpcStore` (delegated via GetUser/UpdateUser)
- `PropagatingCredentialStore` (to broadcast updates)
Ensures cluster-wide consistency for policy attachments.
* s3api: refactor EmbeddedIamApi to use managed policy APIs
- Refactored `AttachUserPolicy`, `DetachUserPolicy`, and `ListAttachedUserPolicies`
to use `e.credentialManager` directly.
- Fixed a critical error suppression bug in `ExecuteAction` that always
returned success even on failure.
- Implemented robust error matching using string comparison fallbacks.
- Improved consistency by reloading configuration after policy changes.
* s3api: update and refine IAM integration tests
- Updated tests to use a real `MemoryStore`-backed `CredentialManager`.
- Refined test configuration synchronization using `sync.Once` and
manual deep-copying to prevent state corruption.
- Improved `extractEmbeddedIamErrorCodeAndMessage` to handle more XML
formats robustly.
- Adjusted test expectations to match current AWS IAM behavior.
* fix compilation
* visibility
* ensure 10 policies
* reload
* add integration tests
* Guard raft command registration
* Allow IAM actions in policy tests
* Validate gRPC policy attachments
* Revert Validate gRPC policy attachments
* Tighten gRPC policy attach/detach
* Improve IAM managed policy handling
* Improve managed policy filters
* fix: cancel volume server requests on client disconnect during S3 downloads
- Use http.NewRequestWithContext in ReadUrlAsStream so in-flight volume
server requests are properly aborted when the client disconnects and
the request context is canceled
- Distinguish context-canceled errors (client disconnect, expected) from
real server errors in streamFromVolumeServers; log at V(3) instead of
ERROR to reduce noise from client-side disconnects (e.g. Nginx upstream
timeout, browser cancel, curl --max-time)
Fixes: streamFromVolumeServers: streamFn failed...context canceled
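A sketch of the context-aware request (ReadUrlAsStream's real signature is larger; this shows only the relevant change):

```go
import (
	"context"
	"net/http"
)

// readUrlAsStream binds the outbound request to ctx, so when the S3 client
// disconnects and the handler context is canceled, the in-flight volume
// server request is aborted instead of running to completion.
func readUrlAsStream(ctx context.Context, fileUrl string) (*http.Response, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, fileUrl, nil)
	if err != nil {
		return nil, err
	}
	return http.DefaultClient.Do(req)
}
```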
* fixup: separate Canceled/DeadlineExceeded log severity in streamFromVolumeServers
- context.Canceled → V(3) Infof "client disconnected" (expected, no noise)
- context.DeadlineExceeded → Warningf "server-side deadline exceeded" (unexpected, needs attention)
- all other errors → Errorf (unchanged)
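A sketch of the final three-way severity split (the surrounding stream function is elided; glog is SeaweedFS's logging package):

```go
import (
	"context"
	"errors"

	"github.com/seaweedfs/seaweedfs/weed/glog"
)

func logStreamError(err error) {
	switch {
	case errors.Is(err, context.Canceled):
		glog.V(3).Infof("streamFromVolumeServers: client disconnected: %v", err) // expected, quiet
	case errors.Is(err, context.DeadlineExceeded):
		glog.Warningf("streamFromVolumeServers: server-side deadline exceeded: %v", err) // needs attention
	default:
		glog.Errorf("streamFromVolumeServers: streamFn failed: %v", err) // unchanged
	}
}
```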