* Initial plan
* Fix port conflict in s3-tagging-tests CI job by changing volume port from 8084 to 8085
* Update s3-tagging-tests to use Makefile server management like other S3 tests
* Fix tagging test pattern to run our comprehensive tests instead of basic tests
* Set S3_ENDPOINT environment variable in CI workflow for tagging tests
---------
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
- Modified test/s3/tagging/s3_tagging_test.go to use environment variables for configurable endpoint and credentials
- Added s3-tagging-tests job to .github/workflows/s3-go-tests.yml to run tagging tests in CI
- Tests will now run automatically on pull requests
- Add X-Amz-Tagging header parsing in putToFiler function for PUT object operations
- Store tags with X-Amz-Tagging- prefix in entry.Extended metadata
- Add comprehensive test suite for S3 object tagging functionality
- Tests cover upload tagging, API operations, special characters, and edge cases
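For context, a minimal sketch of the header-parsing step described above, assuming the S3 convention that X-Amz-Tagging carries a URL-encoded query string; the function and prefix names are illustrative, not the exact identifiers in putToFiler:

```go
package tagsketch

import (
	"fmt"
	"net/url"
)

// tagPrefix is an illustrative stand-in for the X-Amz-Tagging- metadata prefix.
const tagPrefix = "X-Amz-Tagging-"

// parseTaggingHeader splits an X-Amz-Tagging value ("k1=v1&k2=v2", URL-encoded)
// into prefixed entries suitable for entry.Extended-style metadata.
func parseTaggingHeader(header string) (map[string][]byte, error) {
	tags, err := url.ParseQuery(header)
	if err != nil {
		return nil, fmt.Errorf("invalid X-Amz-Tagging header: %w", err)
	}
	extended := make(map[string][]byte, len(tags))
	for k, v := range tags {
		if len(v) > 0 {
			extended[tagPrefix+k] = []byte(v[0])
		}
	}
	return extended, nil
}
```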
* filer: add username and keyPrefix support for Redis stores
Addresses https://github.com/seaweedfs/seaweedfs/issues/7299
- Add username config option to redis2, redis_cluster2, redis_lua, and
redis_lua_cluster stores (sentinel stores already had it)
- Add keyPrefix config option to all Redis stores to prefix all keys,
useful for Envoy Redis Proxy or multi-tenant Redis setups
* refactor: reduce duplication in redis.NewClient creation
Address code review feedback by defining redis.Options once and
conditionally setting TLSConfig instead of duplicating the entire
NewClient call.
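A minimal sketch of that refactor, shown with go-redis; the option fields used (Addr, Username, Password, TLSConfig) exist in the library, while the surrounding function and flags are illustrative:

```go
package redissketch

import (
	"crypto/tls"

	"github.com/redis/go-redis/v9"
)

// newRedisClient defines redis.Options once and only attaches a TLS config
// when requested, instead of duplicating the entire NewClient call.
func newRedisClient(addr, username, password string, useTLS bool) *redis.Client {
	opts := &redis.Options{
		Addr:     addr,
		Username: username, // optional; empty string means no ACL user
		Password: password,
	}
	if useTLS {
		// Only the TLS path differs; the default path stays unchanged.
		opts.TLSConfig = &tls.Config{MinVersion: tls.VersionTLS12}
	}
	return redis.NewClient(opts)
}
```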
* filer.toml: add username and keyPrefix to redis2.tmp example
* Fix #7575: Correct interface check for filer address function in admin server
Problem:
User creation in object store was failing with error:
'filer_etc: filer address function not configured'
Root Cause:
In admin_server.go, the code checked for incorrect interface method
SetFilerClient(string, grpc.DialOption) instead of the actual
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
This interface mismatch prevented the filer address function from being
configured, causing user creation operations to fail.
Solution:
- Fixed interface check to use SetFilerAddressFunc
- Updated function call to properly configure filer address function
- Function now dynamically returns current active filer address
Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
- Integration tests in test/admin/user_creation_integration_test.go
- Documentation in test/admin/README.md
All tests pass successfully.
* Fix #7575: Correct interface check for filer address function in admin UI
Problem:
User creation in Admin UI was failing with error:
'filer_etc: filer address function not configured'
Root Cause:
In admin_server.go, the code checked for incorrect interface method
SetFilerClient(string, grpc.DialOption) instead of the actual
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
This interface mismatch prevented the filer address function from being
configured, causing user creation operations to fail in the Admin UI.
Note: This bug only affects the Admin UI. The S3 API and weed shell
commands (s3.configure) were unaffected as they use the correct interface
or bypass the credential manager entirely.
Solution:
- Fixed interface check in admin_server.go to use SetFilerAddressFunc
- Updated function call to properly configure filer address function
- Function now dynamically returns current active filer (HA-aware)
- Cleaned up redundant comments in the code
Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
* TestFilerAddressFunctionInterface - verifies correct interface
* TestGenerateAccessKey - tests key generation
* TestGenerateSecretKey - tests secret generation
* TestGenerateAccountId - tests account ID generation
All tests pass and will run automatically in CI.
* Fix #7575: Correct interface check for filer address function in admin UI
Problem:
User creation in Admin UI was failing with error:
'filer_etc: filer address function not configured'
Root Cause:
1. In admin_server.go, the code checked for incorrect interface method
SetFilerClient(string, grpc.DialOption) instead of the actual
SetFilerAddressFunc(func() pb.ServerAddress, grpc.DialOption)
2. The admin command was missing the filer_etc import, so the store
was never registered
This interface mismatch prevented the filer address function from being
configured, causing user creation operations to fail in the Admin UI.
Note: This bug only affects the Admin UI. The S3 API and weed shell
commands (s3.configure) were unaffected as they use the correct interface
or bypass the credential manager entirely.
Solution:
- Added filer_etc import to weed/command/admin.go to register the store
- Fixed interface check in admin_server.go to use SetFilerAddressFunc
- Updated function call to properly configure filer address function
- Function now dynamically returns current active filer (HA-aware)
- Hoisted credentialManager assignment to reduce code duplication
Tests Added:
- Unit tests in weed/admin/dash/user_management_test.go
* TestFilerAddressFunctionInterface - verifies correct interface
* TestGenerateAccessKey - tests key generation
* TestGenerateSecretKey - tests secret generation
* TestGenerateAccountId - tests account ID generation
All tests pass and will run automatically in CI.
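For reference, a sketch of the corrected check; the SetFilerAddressFunc signature follows the commit message, while the surrounding wiring is a simplified placeholder:

```go
package adminsketch

import (
	"google.golang.org/grpc"

	"github.com/seaweedfs/seaweedfs/weed/pb"
)

// The credential store must expose SetFilerAddressFunc, not SetFilerClient.
type filerAddressSetter interface {
	SetFilerAddressFunc(fn func() pb.ServerAddress, grpcDialOption grpc.DialOption)
}

// configureStore performs the interface check and wires up the address function.
func configureStore(store interface{}, currentFiler func() pb.ServerAddress, dialOpt grpc.DialOption) {
	if setter, ok := store.(filerAddressSetter); ok {
		// The function is evaluated on each call, so it always returns the
		// currently active filer (HA-aware).
		setter.SetFilerAddressFunc(currentFiler, dialOpt)
	}
}
```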
* Enable FIPS 140-3 compliant crypto by default
Addresses #6889
- Enable GOEXPERIMENT=systemcrypto by default in all Makefiles
- Enable GOEXPERIMENT=systemcrypto by default in all Dockerfiles
- Go 1.24+ has native FIPS 140-3 support via this setting
- Users can disable by setting GOEXPERIMENT= (empty)
Algorithms used (all FIPS approved):
- AES-256-GCM for data encryption
- AES-256-CTR for SSE-C
- HMAC-SHA256 for S3 signatures
- TLS 1.2/1.3 for transport encryption
* Fix: Remove invalid GOEXPERIMENT=systemcrypto
Go 1.24 uses GODEBUG=fips140=on at runtime, not GOEXPERIMENT at build time.
- Remove GOEXPERIMENT=systemcrypto from all Makefiles
- Remove GOEXPERIMENT=systemcrypto from all Dockerfiles
FIPS 140-3 mode can be enabled at runtime:
GODEBUG=fips140=on ./weed server ...
* Add FIPS 140-3 support enabled by default
Addresses #6889
- FIPS 140-3 mode is ON by default in Docker containers
- Sets GODEBUG=fips140=on via entrypoint.sh
- To disable: docker run -e GODEBUG=fips140=off ...
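For verification, a small Go 1.24+ sketch that reports whether the GODEBUG=fips140=on setting is active, using the standard crypto/fips140 package; SeaweedFS itself may surface this differently:

```go
package main

import (
	"crypto/fips140"
	"fmt"
)

func main() {
	// fips140.Enabled reflects the runtime GODEBUG=fips140 setting.
	if fips140.Enabled() {
		fmt.Println("FIPS 140-3 mode: enabled")
	} else {
		fmt.Println("FIPS 140-3 mode: disabled (set GODEBUG=fips140=on to enable)")
	}
}
```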
Have `volume.check.disk` select a random (healthy) source volume when repairing read-only volumes.
This ensures uniform load across the topology when the command is run. Also remove a lingering
TODO about ignoring full volumes; not only is there no way to tell whether a read-only volume is
full or damaged, we ultimately want to check full volumes anyway.
* Add link to wiki installation page in README
* Add building for docker in weed/Makefile
Building without `CGO_ENABLED=0` and running the resulting executable in Docker can cause the container to exit with an error
* Update README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Add GOOS=linux to build_docker target for cross-compilation
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
* mount: improve read throughput with parallel chunk fetching
This addresses issue #7504 where a single weed mount FUSE instance
does not fully utilize node network bandwidth when reading large files.
Changes:
- Add -concurrentReaders mount option (default: 16) to control the
maximum number of parallel chunk fetches during read operations
- Implement parallel section reading in ChunkGroup.ReadDataAt() using
errgroup for better throughput when reading across multiple sections
- Enhance ReaderCache with MaybeCacheMany() to prefetch multiple chunks
ahead in parallel during sequential reads (now prefetches 4 chunks)
- Increase ReaderCache limit dynamically based on concurrentReaders
to support higher read parallelism
The bottleneck was that chunks were being read sequentially even when
they reside on different volume servers. By introducing parallel chunk
fetching, a single mount instance can now better saturate available
network bandwidth.
Fixes: #7504
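An illustrative sketch of the parallel-section read pattern, using errgroup with a concurrency limit; the section and buffer types are placeholders rather than the actual ChunkGroup internals:

```go
package chunkreadsketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

type section interface {
	ReadDataAt(p []byte, off int64) (int, error)
}

// readSectionsParallel fetches each section concurrently, bounded by maxReaders.
func readSectionsParallel(ctx context.Context, sections []section, bufs [][]byte, offs []int64, maxReaders int) error {
	g, _ := errgroup.WithContext(ctx)
	g.SetLimit(maxReaders)
	for i := range sections {
		i := i // capture loop variable for the goroutine
		g.Go(func() error {
			_, err := sections[i].ReadDataAt(bufs[i], offs[i])
			return err
		})
	}
	return g.Wait()
}
```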
* fmt
* Address review comments: make prefetch configurable, improve error handling
Changes:
1. Add DefaultPrefetchCount constant (4) to reader_at.go
2. Add GetPrefetchCount() method to ChunkGroup that derives prefetch count
from concurrentReaders (1/4 ratio, min 1, max 8)
3. Pass prefetch count through NewChunkReaderAtFromClient
4. Fix error handling in readDataAtParallel to prioritize errgroup error
5. Update all callers to use DefaultPrefetchCount constant
For mount operations, prefetch scales with -concurrentReaders:
- concurrentReaders=16 (default) -> prefetch=4
- concurrentReaders=32 -> prefetch=8 (capped)
- concurrentReaders=4 -> prefetch=1
For non-mount paths (WebDAV, query engine, MQ), uses DefaultPrefetchCount.
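A sketch of the derivation above (prefetch = concurrentReaders/4, clamped to [1, 8]); the constant name follows the commit message:

```go
package prefetchsketch

// DefaultPrefetchCount matches the constant described above.
const DefaultPrefetchCount = 4

// prefetchCount derives the prefetch depth from concurrentReaders:
// a 1/4 ratio, clamped to a minimum of 1 and a maximum of 8.
func prefetchCount(concurrentReaders int) int {
	n := concurrentReaders / 4
	if n < 1 {
		return 1
	}
	if n > 8 {
		return 8
	}
	return n
}
```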
* fmt
* Refactor: use variadic parameter instead of new function name
Use NewChunkGroup with optional concurrentReaders parameter instead of
creating a separate NewChunkGroupWithConcurrency function.
This maintains backward compatibility - existing callers without the
parameter get the default of 16 concurrent readers.
* Use explicit concurrentReaders parameter instead of variadic
* Refactor: use MaybeCache with count parameter instead of new MaybeCacheMany function
* Address nitpick review comments
- Add upper bound (128) on concurrentReaders to prevent excessive goroutine fan-out
- Cap readerCacheLimit at 256 accordingly
- Fix SetChunks: use Lock() instead of RLock() since we are writing to group.sections
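A compact sketch of the bounds and locking change, with the 128/256 limits taken from the list above and a simplified stand-in for ChunkGroup:

```go
package chunkboundssketch

import "sync"

const (
	maxConcurrentReaders = 128 // upper bound on -concurrentReaders
	maxReaderCacheLimit  = 256 // cap on the derived reader cache limit
)

type chunkGroup struct {
	lock     sync.RWMutex
	sections map[int64][]byte
}

// clampReaders applies the [1, 128] bound to the configured value.
func clampReaders(n int) int {
	if n < 1 {
		return 1
	}
	if n > maxConcurrentReaders {
		return maxConcurrentReaders
	}
	return n
}

// SetChunks writes to sections, so it takes the write lock rather than RLock.
func (g *chunkGroup) SetChunks(s map[int64][]byte) {
	g.lock.Lock()
	defer g.lock.Unlock()
	g.sections = s
}
```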
* filer use context without cancellation
* pass along context
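A sketch of the "context without cancellation" idea using Go 1.21's context.WithoutCancel; the actual filer code may detach contexts differently:

```go
package ctxsketch

import "context"

// detachedContext keeps request-scoped values (e.g. tracing metadata) but
// detaches from the caller's cancellation, so background work is not cut
// short when the originating request ends.
func detachedContext(ctx context.Context) context.Context {
	return context.WithoutCancel(ctx)
}
```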
* fix: copy to bucket with default SSE-S3 encryption fails (#7562)
When copying an object from an encrypted bucket to a temporary unencrypted
bucket, then to another bucket with default SSE-S3 encryption, the operation
fails with 'invalid SSE-S3 source key type' error.
Root cause:
When objects are copied from an SSE-S3 encrypted bucket to an unencrypted
bucket, the 'X-Amz-Server-Side-Encryption: AES256' header is preserved but
the actual encryption key (SeaweedFSSSES3Key) is stripped. This creates an
'orphaned' SSE-S3 header that causes IsSSES3EncryptedInternal() to return
true, triggering decryption logic with a nil key.
Fix:
1. Modified IsSSES3EncryptedInternal() to require BOTH the AES256 header
AND the SeaweedFSSSES3Key to be present before returning true
2. Added isOrphanedSSES3Header() to detect orphaned SSE-S3 headers
3. Updated copy handler to strip orphaned headers during copy operations
Fixes #7562
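A sketch of the two-condition check and orphan detection described above; the constant values are illustrative stand-ins for the real header and SeaweedFSSSES3Key names:

```go
package ssesketch

const (
	headerServerSideEncryption = "X-Amz-Server-Side-Encryption"
	seaweedFSSSES3Key          = "X-Seaweedfs-Sse-S3-Key" // illustrative stand-in
)

// isSSES3EncryptedInternal returns true only when both the AES256 header and
// the internal SSE-S3 key are present in the entry metadata.
func isSSES3EncryptedInternal(metadata map[string][]byte) bool {
	if string(metadata[headerServerSideEncryption]) != "AES256" {
		return false
	}
	_, hasKey := metadata[seaweedFSSSES3Key]
	return hasKey
}

// isOrphanedSSES3Header detects an AES256 header left behind after a copy
// stripped the actual encryption key.
func isOrphanedSSES3Header(metadata map[string][]byte) bool {
	_, hasKey := metadata[seaweedFSSSES3Key]
	return string(metadata[headerServerSideEncryption]) == "AES256" && !hasKey
}
```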
* fmt
* refactor: simplify isOrphanedSSES3Header function logic
Remove redundant existence check since the caller iterates through
metadata map, making the check unnecessary. Improves readability
while maintaining the same functionality.
* s3api: Fix response-content-disposition query parameter not being honored
Fixes #7486
This fix resolves an issue where S3 presigned URLs with query parameters
like `response-content-disposition`, `response-content-type`, etc. were
being ignored, causing browsers to use default file handling instead of
the specified behavior.
Changes:
- Modified `setResponseHeaders()` to accept the HTTP request object
- Added logic to process S3 passthrough headers from query parameters
- Updated all call sites to pass the request object
- Supports all AWS S3 response override parameters:
- response-content-disposition
- response-content-type
- response-cache-control
- response-content-encoding
- response-content-language
- response-expires
The implementation follows the same pattern used in the filer handler
and properly honors the AWS S3 API specification for presigned URLs.
Testing:
- Existing S3 API tests pass without modification
- Build succeeds with no compilation errors
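An illustrative sketch of the override handling; the parameter-to-header mapping follows the AWS S3 specification, while the handler wiring is simplified:

```go
package s3sketch

import "net/http"

// responseOverrides maps the response-* query parameters to the headers they override.
var responseOverrides = map[string]string{
	"response-content-disposition": "Content-Disposition",
	"response-content-type":        "Content-Type",
	"response-cache-control":       "Cache-Control",
	"response-content-encoding":    "Content-Encoding",
	"response-content-language":    "Content-Language",
	"response-expires":             "Expires",
}

// applyResponseOverrides copies any response-* query parameters from the
// request onto the response headers before the body is written.
func applyResponseOverrides(w http.ResponseWriter, r *http.Request) {
	query := r.URL.Query()
	for param, header := range responseOverrides {
		if v := query.Get(param); v != "" {
			w.Header().Set(header, v)
		}
	}
}
```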
* Update weed/s3api/s3api_object_handlers.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fix(tikv): improve context propagation and refactor batch delete logic
Address review comments from PR #7557:
1. Replace context.TODO() with ctx in txn.Get calls
- Fixes timeout/cancellation propagation in FindEntry
- Fixes timeout/cancellation propagation in KvGet
2. Refactor DeleteFolderChildren to use flush helper
- Eliminates code duplication
- Cleaner and more maintainable
These changes ensure proper context propagation throughout all
TiKV operations and improve code maintainability.
* error formatting
* metrics: add Prometheus metrics for concurrent upload tracking
Add Prometheus metrics to monitor concurrent upload activity for both
filer and S3 servers. This provides visibility into the upload limiting
feature added in the previous PR.
New Metrics:
- SeaweedFS_filer_in_flight_upload_bytes: Current bytes being uploaded to filer
- SeaweedFS_filer_in_flight_upload_count: Current number of uploads to filer
- SeaweedFS_s3_in_flight_upload_bytes: Current bytes being uploaded to S3
- SeaweedFS_s3_in_flight_upload_count: Current number of uploads to S3
The metrics are updated atomically whenever uploads start or complete,
providing real-time visibility into upload concurrency levels.
This helps operators:
- Monitor upload concurrency in real-time
- Set appropriate limits based on actual usage patterns
- Detect potential bottlenecks or capacity issues
- Track the effectiveness of upload limiting configuration
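A sketch of the gauge pattern behind these metrics, shown with client_golang; the metric names follow the list above, and the registration/update sites are simplified:

```go
package metricssketch

import "github.com/prometheus/client_golang/prometheus"

var (
	FilerInFlightUploadBytes = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "SeaweedFS",
		Subsystem: "filer",
		Name:      "in_flight_upload_bytes",
		Help:      "Current bytes being uploaded to the filer.",
	})
	FilerInFlightUploadCount = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "SeaweedFS",
		Subsystem: "filer",
		Name:      "in_flight_upload_count",
		Help:      "Current number of in-flight uploads to the filer.",
	})
)

func init() {
	prometheus.MustRegister(FilerInFlightUploadBytes, FilerInFlightUploadCount)
}

// trackUpload is called when an upload starts or completes; deltas are
// negative on completion, keeping the gauges at the current in-flight totals.
func trackUpload(sizeDelta int64, countDelta int) {
	FilerInFlightUploadBytes.Add(float64(sizeDelta))
	FilerInFlightUploadCount.Add(float64(countDelta))
}
```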
* grafana: add dashboard panels for concurrent upload metrics
Add 4 new panels to the Grafana dashboard to visualize the concurrent
upload metrics added in this PR:
Filer Section:
- Filer Concurrent Uploads: Shows current number of concurrent uploads
- Filer Concurrent Upload Bytes: Shows current bytes being uploaded
S3 Gateway Section:
- S3 Concurrent Uploads: Shows current number of concurrent uploads
- S3 Concurrent Upload Bytes: Shows current bytes being uploaded
These panels help operators monitor upload concurrency in real-time and
tune the upload limiting configuration based on actual usage patterns.
* more efficient
* fix(tikv): replace DeleteRange with transaction-based batch deletes
Fixes #7187
Problem:
TiKV's DeleteRange API is a RawKV operation that bypasses transaction
isolation. When SeaweedFS filer uses TiKV with txn client and another
service uses RawKV client on the same cluster, DeleteFolderChildren
can accidentally delete KV pairs from the RawKV client because
DeleteRange operates at the raw key level without respecting
transaction boundaries.
Reproduction:
1. SeaweedFS filer using TiKV txn client for metadata
2. Another service using rawkv client on same TiKV cluster
3. Filer performs batch file deletion via DeleteFolderChildren
4. Result: ~50% of rawkv client's KV pairs get deleted
Solution:
Replace client.DeleteRange() (RawKV API) with transactional batch
deletes using txn.Delete() within transactions. This ensures:
- Transaction isolation - operations respect TiKV's MVCC boundaries
- Keyspace separation - txn client and RawKV client stay isolated
- Proper key handling - keys are copied to avoid iterator reuse issues
- Batch processing - deletes batched (10K default) to manage memory
Changes:
1. Core data structure:
- Removed deleteRangeConcurrency field
- Added batchCommitSize field (configurable, default 10000)
2. DeleteFolderChildren rewrite:
- Replaced DeleteRange with iterative batch deletes
- Added proper transaction lifecycle management
- Implemented key copying to avoid iterator buffer reuse
- Added batching to prevent memory exhaustion
3. New deleteBatch helper:
- Handles transaction creation and lifecycle
- Batches deletes within single transaction
- Properly commits/rolls back based on context
4. Context propagation:
- Updated RunInTxn to accept context parameter
- All RunInTxn call sites now pass context
- Enables proper timeout/cancellation handling
5. Configuration:
- Removed deleterange_concurrency setting
- Added batchdelete_count setting (default 10000)
All critical review comments from PR #7188 have been addressed:
- Proper key copying with append([]byte(nil), key...)
- Conditional transaction rollback based on inContext flag
- Context propagation for commits
- Proper transaction lifecycle management
- Configurable batch size
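A schematic sketch of the batch-delete loop; the txn and iterator interfaces stand in for the TiKV client types, so only the key-copying and batching logic is meant literally:

```go
package tikvsketch

import "context"

type kvIter interface {
	Valid() bool
	Key() []byte
	Next() error
	Close()
}

type kvTxn interface {
	Delete(key []byte) error
	Commit(ctx context.Context) error
	Rollback() error
}

// collectKeys gathers up to batchSize keys, copying each one so the iterator's
// internal buffer can be reused safely.
func collectKeys(it kvIter, batchSize int) ([][]byte, error) {
	keys := make([][]byte, 0, batchSize)
	for it.Valid() && len(keys) < batchSize {
		keys = append(keys, append([]byte(nil), it.Key()...))
		if err := it.Next(); err != nil {
			return nil, err
		}
	}
	return keys, nil
}

// deleteBatch deletes one batch of keys inside a single transaction,
// rolling back on error and committing with the caller's context.
func deleteBatch(ctx context.Context, begin func() (kvTxn, error), keys [][]byte) error {
	txn, err := begin()
	if err != nil {
		return err
	}
	for _, k := range keys {
		if err := txn.Delete(k); err != nil {
			_ = txn.Rollback()
			return err
		}
	}
	return txn.Commit(ctx)
}
```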
Co-authored-by: giftz <giftz@users.noreply.github.com>
* fix: remove extra closing brace causing syntax error in tikv_store.go
---------
Co-authored-by: giftz <giftz@users.noreply.github.com>
With the recent changes (commit c1b8d4bf0) that made S3 directly access
volume servers instead of proxying through filer, we need to properly
handle HTTP 429 (Too Many Requests) errors from volume servers.
This change ensures that when volume servers rate limit requests with
HTTP 429, the S3 API properly translates this to an S3-compatible error
response (ErrRequestBytesExceed with HTTP 503) instead of returning a
generic InternalError.
Changes:
- Add ErrTooManyRequests sentinel error in weed/util/http
- Detect HTTP 429 in ReadUrlAsStream and wrap with ErrTooManyRequests
- Check for ErrTooManyRequests in GetObjectHandler and map to S3 error
- Return ErrRequestBytesExceed (HTTP 503) for rate limiting scenarios
This addresses the same issue as PR #7482 but for the new direct
volume server access path instead of the filer proxy path.
Fixes: Rate limiting errors from volume servers being masked as 500
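A sketch of the error translation path described above: a sentinel wrapped at the HTTP layer, then mapped in the handler; names follow the commit message and the wiring is simplified:

```go
package s3sketch429

import (
	"errors"
	"fmt"
	"net/http"
)

// ErrTooManyRequests is the sentinel returned when a volume server answers 429.
var ErrTooManyRequests = errors.New("too many requests")

// checkStatus wraps HTTP 429 responses with the sentinel so callers can detect them.
func checkStatus(statusCode int, url string) error {
	if statusCode == http.StatusTooManyRequests {
		return fmt.Errorf("reading %s: %w", url, ErrTooManyRequests)
	}
	return nil
}

// mapToS3Error shows the handler-side check: rate limiting becomes a dedicated
// S3 error code instead of a generic InternalError.
func mapToS3Error(err error) string {
	if errors.Is(err, ErrTooManyRequests) {
		return "ErrRequestBytesExceed" // surfaced as HTTP 503 per the commit
	}
	return "ErrInternalError"
}
```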
* fix(s3api): fix AWS Signature V2 format and validation
* fix(s3api): Skip space after "AWS" prefix (+1 offset)
* test(s3api): add unit tests for Signature V2 authentication fix
* fix(s3api): simply comparing signatures
* add validation for the colon extraction in expectedAuth
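A sketch of the V2 parsing and comparison described by these commits: skip the single space after the "AWS" prefix, validate the colon split, and compare signatures directly (constant-time comparison shown as one reasonable choice; details are illustrative):

```go
package sigv2sketch

import (
	"crypto/subtle"
	"strings"
)

// parseV2Auth splits an "AWS AccessKeyId:Signature" header into its two parts.
func parseV2Auth(authHeader string) (accessKey, signature string, ok bool) {
	const prefix = "AWS" // the V2 format is "AWS" + one space + credentials
	if !strings.HasPrefix(authHeader, prefix+" ") {
		return "", "", false
	}
	rest := authHeader[len(prefix)+1:] // +1 skips the space after "AWS"
	i := strings.Index(rest, ":")
	if i <= 0 || i == len(rest)-1 {
		return "", "", false // missing colon, empty access key, or empty signature
	}
	return rest[:i], rest[i+1:], true
}

// signaturesMatch compares the presented and expected signatures directly.
func signaturesMatch(got, want string) bool {
	return subtle.ConstantTimeCompare([]byte(got), []byte(want)) == 1
}
```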
---------
Co-authored-by: chrislu <chris.lu@gmail.com>