Fix nil pointer dereference panic when starting master.follower.
The init() function was missing initialization for:
- maxParallelVacuumPerServer
- telemetryUrl
- telemetryEnabled
These fields are dereferenced in toMasterOption(), causing a panic.
Fixes #7806
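A minimal Go sketch of the failure mode and the fix described above; the flag names and struct are illustrative, not the actual SeaweedFS code:

```go
package main

import (
	"flag"
	"fmt"
)

// Illustrative option struct: each field is a pointer returned by the flag package.
type masterOptions struct {
	maxParallelVacuumPerServer *int
	telemetryUrl               *string
	telemetryEnabled           *bool
}

var mo masterOptions

func init() {
	// If any of these assignments is missing, the field stays nil and
	// dereferencing it later panics with a nil pointer dereference.
	mo.maxParallelVacuumPerServer = flag.Int("maxParallelVacuumPerServer", 1, "max parallel vacuum per volume server")
	mo.telemetryUrl = flag.String("telemetry.url", "", "telemetry reporting URL")
	mo.telemetryEnabled = flag.Bool("telemetry", false, "enable telemetry reporting")
}

func (m *masterOptions) toMasterOption() string {
	// Dereferences every field; a nil field here reproduces the reported panic.
	return fmt.Sprintf("vacuum=%d telemetry=%v url=%q",
		*m.maxParallelVacuumPerServer, *m.telemetryEnabled, *m.telemetryUrl)
}

func main() {
	flag.Parse()
	fmt.Println(mo.toMasterOption())
}
```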
fix: authenticate before parsing form in IAM API (#7802)
The AuthIam middleware was calling ParseForm() before AuthSignatureOnly(),
which consumed the request body before signature verification could hash it.
For IAM requests (service != 's3'), the signature verification needs to hash
the request body. When ParseForm() was called first, the body was already
consumed, resulting in an empty body hash and SignatureDoesNotMatch error.
The fix moves authentication before form parsing. The streamHashRequestBody
function preserves the body after reading, so ParseForm() works correctly
after authentication.
Fixes #7802
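A hedged sketch of the ordering fix above: hash the payload for signature verification first, then restore the body so a later ParseForm() still sees the original bytes. This only mirrors the idea behind streamHashRequestBody; names and details are illustrative.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// hashAndPreserveBody reads the body to compute the payload hash, then puts an
// equivalent reader back so ParseForm() can still consume the form fields.
func hashAndPreserveBody(r *http.Request) (string, error) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		return "", err
	}
	r.Body.Close()
	sum := sha256.Sum256(body)
	r.Body = io.NopCloser(bytes.NewReader(body)) // restore for later ParseForm()
	return hex.EncodeToString(sum[:]), nil
}

func main() {
	r, _ := http.NewRequest(http.MethodPost, "http://localhost/iam",
		strings.NewReader("Action=ListUsers&Version=2010-05-08"))
	r.Header.Set("Content-Type", "application/x-www-form-urlencoded")

	hash, _ := hashAndPreserveBody(r) // authenticate first...
	_ = r.ParseForm()                 // ...then parse the form; the body is still intact
	fmt.Println(hash, r.PostForm.Get("Action"))
}
```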
The test was using a static bucket name 'test-iam-bucket' that could conflict
with buckets created by other tests or previous runs. Each test framework
creates new RSA keys for JWT signing, so the 'admin-user' identity differs
between runs. When the bucket exists from a previous test, the new admin
cannot access or delete it, causing AccessDenied errors.
Changed to use GenerateUniqueBucketName() which ensures each test run gets
its own bucket, avoiding cross-test conflicts.
* fix: ListBuckets returns empty for users with bucket-specific permissions (#7796)
The ListBucketsHandler was using sequential AND logic where ownership check
happened before permission check. If a user had 'List:bucketname' permission
but didn't own the bucket (different AmzIdentityId or missing owner metadata),
the bucket was filtered out before the permission check could run.
Changed to OR logic: a bucket is now visible if the user owns it OR has
explicit permission to list it. This allows users with bucket-specific
permissions like 'List:geoserver' to see buckets they have access to,
even if they don't own them.
Changes:
- Modified ListBucketsHandler to check both ownership and permission,
including bucket if either check passes
- Renamed isBucketVisibleToIdentity to isBucketOwnedByIdentity for clarity
- Added comprehensive tests in TestListBucketsIssue7796
Fixes #7796
* address review comments: optimize permission check and add integration test
- Skip permission check if user is already the owner (performance optimization)
- Add integration test that simulates the complete handler filtering logic
to verify the combination of ownership OR permission check works correctly
* add visibility assertions to each sub-test for self-contained verification
Each sub-test now verifies the final outcome using isOwner || canList logic,
making tests more robust and independently verifiable.
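The ownership-OR-permission check described above can be sketched roughly as follows; identifiers are hypothetical stand-ins for the handler code:

```go
package example

// A bucket is listed if the caller owns it OR has an explicit List permission.
func isBucketVisible(identityID, bucketOwnerID string, canList func(bucket string) bool, bucket string) bool {
	isOwner := bucketOwnerID != "" && bucketOwnerID == identityID
	if isOwner {
		return true // owner: permission check skipped as an optimization
	}
	return canList(bucket) // e.g. an identity holding "List:geoserver"
}
```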
* fix: sync replica entries before ec.encode and volume.tier.move (#7797)
This addresses the data inconsistency risk in multi-replica volumes.
When ec.encode or volume.tier.move operates on a multi-replica volume:
1. Find the replica with the highest file count (the 'best' one)
2. Copy missing entries from other replicas INTO this best replica
3. Use this union replica for the destructive operation
This ensures no data is lost due to replica inconsistency before
EC encoding or tier moving.
Added:
- command_volume_replica_check.go: Core sync and select logic
- command_volume_replica_check_test.go: Test coverage
Modified:
- command_ec_encode.go: Call syncAndSelectBestReplica before encoding
- command_volume_tier_move.go: Call syncAndSelectBestReplica before moving
Fixes #7797
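A rough sketch of the selection step behind syncAndSelectBestReplica, with hypothetical types; the actual code also copies missing entries from the other replicas into the selected one before ec.encode or volume.tier.move runs:

```go
package example

// Replica is an illustrative stand-in for a volume replica location.
type Replica struct {
	Server    string
	FileCount uint64
}

// selectBestReplica picks the replica with the highest file count as the sync
// target; the remaining replicas are the sources for missing entries.
func selectBestReplica(replicas []Replica) (best Replica, others []Replica) {
	for _, r := range replicas {
		if r.FileCount > best.FileCount {
			best = r
		}
	}
	for _, r := range replicas {
		if r.Server != best.Server {
			others = append(others, r)
		}
	}
	return best, others
}
```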
* test: add integration test for replicated volume sync during ec.encode
* test: improve retry logic for replicated volume integration test
* fix: resolve JWT issue in integration tests by using empty security.toml
* address review comments: add readNeedleMeta, parallelize status fetch, fix collection param, fix test issues
* test: use collection parameter consistently in replica sync test
* fix: convert weed binary path to absolute to work with changed working directory
* fix: remove skip behavior, keep tests failing on missing binary
* fix: always check recency for each needle, add divergent replica test
fix: add debug logging for JWT validation failures (#7788)
When JWT file ID validation fails during replication, add a log message
showing both the expected and actual file IDs to help diagnose issues.
Ref #7788
* Fix volume repeatedly toggling between crowded and uncrowded
Fixes #6712
The issue was that removeFromCrowded() was called in removeFromWritable(),
which is invoked whenever a volume temporarily becomes unwritable (due to
replica count fluctuations, heartbeat issues, or read-only state changes).
This caused unnecessary toggling:
1. Volume becomes temporarily unwritable → removeFromWritable() →
removeFromCrowded() logs 'becomes uncrowded'
2. Volume becomes writable again
3. CollectDeadNodeAndFullVolumes() runs → setVolumeCrowded() logs
'becomes crowded'
The fix:
- Remove removeFromCrowded() call from removeFromWritable()
- Only clear crowded status when volume is fully unregistered from
the layout (when location.Length() == 0 in UnRegisterVolume)
This ensures transient state changes don't cause log spam and the
crowded status accurately reflects the volume's size relative to
the grow threshold.
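A minimal sketch of the new lifecycle, assuming simplified map-based state rather than the real volume layout types: transient writability changes no longer touch the crowded set, which is only cleared when the volume leaves the layout entirely.

```go
package example

type volumeLayout struct {
	writable map[uint32]bool
	crowded  map[uint32]bool
	replicas map[uint32]int // remaining registered locations per volume id
}

func (vl *volumeLayout) removeFromWritable(vid uint32) {
	delete(vl.writable, vid) // no removeFromCrowded() here any more
}

func (vl *volumeLayout) unRegisterVolume(vid uint32) {
	vl.replicas[vid]--
	if vl.replicas[vid] <= 0 { // location.Length() == 0 in the real code
		delete(vl.writable, vid)
		delete(vl.crowded, vid)
		delete(vl.replicas, vid)
	}
}
```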
* Refactor test to use subtests for better readability
Address review feedback: use t.Run subtests to make the test's intent
clearer by giving each verification step a descriptive name.
* Fix S3 server panic when -s3.port.https equals -s3.port
When starting the S3 server with -s3.port.https=8333 (same as default
-s3.port), the server would panic with nil pointer dereference because:
1. The HTTP listener was already bound to port 8333
2. NewIpAndLocalListeners for HTTPS failed but error was discarded
3. ServeTLS was called on nil listener causing panic
This fix:
- Adds early validation to prevent using same port for HTTP and HTTPS
- Properly handles the error from NewIpAndLocalListeners for HTTPS
Fixes #7792
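The early validation could look roughly like this (a sketch, not the actual startup code):

```go
package example

import "fmt"

// Refuse to start when HTTP and HTTPS would bind the same port, instead of
// discarding the listener error and later calling ServeTLS on a nil listener.
func validateS3Ports(port, httpsPort int) error {
	if httpsPort != 0 && httpsPort == port {
		return fmt.Errorf("s3.port.https (%d) must differ from s3.port (%d)", httpsPort, port)
	}
	return nil
}
```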
* s3: fix remote objects not being cached
* s3: address review comments for remote object caching
- Fix leading slash in object name by using strings.TrimPrefix
- Return cached entry from CacheRemoteObjectToLocalCluster to get updated local chunk locations
- Reuse existing helper function instead of inline gRPC call
* s3/filer: add singleflight deduplication for remote object caching
- Add singleflight.Group to FilerServer to deduplicate concurrent cache operations
- Wrap CacheRemoteObjectToLocalCluster with singleflight to ensure only one
caching operation runs per object when multiple clients request the same file
- Add early-return check for already-cached objects
- S3 API calls filer gRPC with timeout and graceful fallback on error
- Clear negative bucket cache when bucket is created via weed shell
- Add integration tests for remote cache with singleflight deduplication
This benefits all clients (S3, HTTP, Hadoop) accessing remote-mounted objects
by preventing redundant cache operations and improving concurrent access performance.
Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599
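A minimal sketch of the deduplication described above, using golang.org/x/sync/singleflight; the struct and fetch callback are illustrative, not the FilerServer implementation:

```go
package example

import "golang.org/x/sync/singleflight"

// cacher shares one caching operation among concurrent requests for the same object.
type cacher struct {
	group singleflight.Group
	fetch func(objectPath string) (string, error) // the expensive cache fill
}

func (c *cacher) cacheRemoteObject(objectPath string) (string, error) {
	v, err, _ := c.group.Do(objectPath, func() (interface{}, error) {
		return c.fetch(objectPath) // runs once per key, however many callers arrive
	})
	if err != nil {
		return "", err
	}
	entry, ok := v.(string)
	if !ok {
		return "", nil // defensive type-assertion check, as in the review fix
	}
	return entry, nil
}
```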
* fix: data race in concurrent remote object caching
- Add mutex to protect chunks slice from concurrent append
- Add mutex to protect fetchAndWriteErr from concurrent read/write
- Fix incorrect error check (was checking assignResult.Error instead of parseErr)
- Rename inner variable to avoid shadowing fetchAndWriteErr
* fix: address code review comments
- Remove duplicate remote caching block in GetObjectHandler, keep only singleflight version
- Add mutex protection for concurrent chunk slice and error access (data race fix)
- Use lazy initialization for S3 client in tests to avoid panic during package load
- Fix markdown linting: add language specifier to code fence, blank lines around tables
- Add 'all' target to Makefile as alias for test-with-server
- Remove unused 'util' import
* style: remove emojis from test files
* fix: add defensive checks and sort chunks by offset
- Add nil check and type assertion check for singleflight result
- Sort chunks by offset after concurrent fetching to maintain file order
* fix: improve test diagnostics and path normalization
- runWeedShell now returns error for better test diagnostics
- Add all targets to .PHONY in Makefile (logs-primary, logs-remote, health)
- Strip leading slash from normalizedObject to avoid double slashes in path
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
s3: fix fallback owner lookup to use specific version
Address review feedback: the fallback logic was incorrectly using
getLatestObjectVersion which returns the wrong owner when different
versions have different owners.
Fix by using getSpecificObjectVersion with the version.VersionId to
fetch the correct entry for the specific version being processed.
This also simplifies the code by removing the separate null version
handling since getSpecificObjectVersion already handles that case.
When paginating with keyMarker, we must collect all versions first because
filtering happens after sorting. Previously, we limited collection to maxKeys+1
which caused us to miss versions beyond the marker when there were many versions
before it.
The ListObjectVersions API was receiving key-marker and version-id-marker
parameters but not using them to filter results. This caused infinite
pagination loops when clients tried to paginate through results.
Fix by adding filtering logic after sorting:
- Skip versions with key < keyMarker (already returned in previous pages)
- For key == keyMarker, skip versions with versionId >= versionIdMarker
- Include versions with key > keyMarker or (key == keyMarker and versionId < versionIdMarker)
This respects the S3 sort order (key ascending, versionId descending for same key)
and correctly returns only versions that come AFTER the marker position.
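The marker filter described above might be expressed like this; it is a sketch under the stated sort order (key ascending, versionId descending per key), and the behavior for a key-marker without a version-id-marker is an assumption:

```go
package example

// afterMarker reports whether a (key, versionID) pair comes after the marker position.
func afterMarker(key, versionID, keyMarker, versionIDMarker string) bool {
	if keyMarker == "" {
		return true // no marker: include everything
	}
	if key < keyMarker {
		return false // already returned on a previous page
	}
	if key > keyMarker {
		return true
	}
	// key == keyMarker
	if versionIDMarker == "" {
		return false // assumed: skip all versions of the marker key
	}
	return versionID < versionIDMarker // skip versionId >= versionIdMarker
}
```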
Add pagination stress tests (>1000 versions) to the S3 versioning stress
test job in GitHub CI. These tests run on master branch pushes to validate
that ListObjectVersions correctly handles objects with more than 1000
versions using pagination.
* s3: add pagination to getObjectVersionList and reduce memory
This commit makes two improvements to S3 version listing:
1. Add pagination to getObjectVersionList:
- Previously hardcoded limit of 1000 versions per object
- Now paginates through all versions using startFrom marker
- Fixes correctness issue where objects with >1000 versions would
have some versions missing from list results
2. Reduce memory by not retaining full Entry:
- Replace Entry field with OwnerID string in ObjectVersion struct
- Extract owner ID when creating ObjectVersion
- Avoids retaining Chunks array which can be large for big files
- Clear seenVersionIds map after use to help GC
3. Update getObjectOwnerFromVersion:
- Use new OwnerID field instead of accessing Entry.Extended
- Maintains backward compatibility with fallback lookups
* s3: propagate errors from list operation instead of returning partial results
Address review feedback: when s3a.list fails during version listing,
the function was logging at V(2) level and returning partial results
with nil error. This hides the error and could lead to silent data
inconsistencies.
Fix by:
1. Log at Warningf level for better visibility
2. Return nil versions slice with the error to prevent partial results
from being processed as complete
* s3: fix memory leak in ListObjectVersions with early termination
This fixes a critical memory leak in S3 versioned bucket listing operations:
1. Add maxCollect parameter to findVersionsRecursively() to stop collecting
versions once we have enough for the response + truncation detection
2. Add early termination checks throughout the recursive traversal to
prevent scanning entire buckets when only a small number of results
are requested
3. Use clear() on tracking maps after collection to help GC reclaim memory
4. Create new slice with exact capacity when truncating results instead of
re-slicing, which allows GC to reclaim the excess backing array memory
5. Pre-allocate result slice with reasonable initial capacity to reduce
reallocations during collection
Before this fix, listing versions on a bucket with many objects and versions
would load ALL versions into memory before pagination, causing OOM crashes.
Fixes memory exhaustion when calling ListObjectVersions on large versioned buckets.
* s3: fix pre-allocation capacity to be consistent with maxCollect
Address review feedback: the previous capping logic caused an inconsistency
where initialCap was capped to 1000 but maxCollect was maxKeys+1, leading to
unnecessary reallocations when maxKeys was 1000.
Fix by:
1. Cap maxKeys to 1000 (S3 API limit) at the start of the function
2. Use maxKeys+1 directly for slice capacity, ensuring consistency with
the maxCollect parameter passed to findVersionsRecursively
* fix: correctly detect missing source file during volume copy
The previous fix (commit 5c27522) incorrectly used progressedBytes == 0 to
detect if the source file didn't exist. This was wrong because it would also
delete files when the source file exists but is empty.
This fix:
1. Server side: Send ModifiedTsNs even for empty files, so the client knows
the source file exists
2. Client side: Check modifiedTsNs == 0 instead of progressedBytes == 0 to
determine if source file didn't exist
Now the logic correctly handles:
- Source file doesn't exist → No ModifiedTsNs sent → Remove empty file
- Source file exists but is empty → ModifiedTsNs sent → Keep empty file
- Source file exists with content → ModifiedTsNs sent → Keep file with content
Fixes #7777
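A sketch of the client-side decision described above (illustrative function, not the real copy code): the presence of a modification timestamp, not the number of copied bytes, signals whether the source file existed.

```go
package example

import "os"

func finishCopy(localPath string, modifiedTsNs int64) error {
	if modifiedTsNs == 0 {
		// No ModifiedTsNs was sent: the source file did not exist, so remove
		// the empty local placeholder.
		return os.Remove(localPath)
	}
	// ModifiedTsNs was sent: the source exists (possibly empty), keep the file.
	return nil
}
```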
* Update weed/server/volume_grpc_copy.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
This is a follow-up fix to PR #7332, which partially addressed the issue.
The problem is that size=0 needles are in a gray area:
- IsValid() returns false for size=0 (because size must be > 0)
- IsDeleted() returns false for size=0 (because size must be < 0 or == TombstoneFileSize)
PR #7332 only fixed 2 places, but several other places still had the same bug:
1. needle_map_memory.go:doLoading - line 43 still used oldSize.IsValid()
2. needle_map_memory.go:DoOffsetLoading - used during vacuum and incremental loading
3. needle_map_leveldb.go:generateLevelDbFile - used when generating LevelDB needle maps
4. needle_map_leveldb.go:DoOffsetLoading - used during incremental loading for LevelDB
5. needle_map/compact_map.go:delete - couldn't delete size=0 entries because:
- The condition 'size > 0' failed for size=0
- Even if it passed, negating 0 gives 0 (not marking as deleted)
Changes:
- Changed size.IsValid() to !size.IsDeleted() in doLoading and DoOffsetLoading functions
- Fixed compact_map delete to use TombstoneFileSize for size=0 entries
Fixes #7293
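A sketch of the gray area and the changed predicate; the Size type and tombstone constant here only mimic the semantics stated above and are not the actual needle package definitions.

```go
package example

type Size int32

const TombstoneFileSize = -1 // illustrative value

func (s Size) IsValid() bool   { return s > 0 }
func (s Size) IsDeleted() bool { return s < 0 || s == TombstoneFileSize }

// During index loading, keeping an entry when !size.IsDeleted() (instead of
// size.IsValid()) means size == 0 needles are treated as live rather than dropped.
func shouldKeepDuringLoad(size Size) bool {
	return !size.IsDeleted()
}
```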
Sort lists of filers, volume servers, masters, and message brokers
by address to ensure consistent ordering on page refresh.
This fixes the non-deterministic ordering caused by iterating over
Go maps with range.
Fixes #7781
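The stable ordering amounts to a sort by address before rendering; a minimal sketch with an illustrative Node type:

```go
package example

import "sort"

// Node is an illustrative cluster member as rendered on the admin page.
type Node struct {
	Address string
}

// sortByAddress gives the page a stable order; ranging over a Go map yields a
// different order on every refresh, which is what the fix addresses.
func sortByAddress(nodes []Node) {
	sort.Slice(nodes, func(i, j int) bool {
		return nodes[i].Address < nodes[j].Address
	})
}
```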
* Fix EC Volumes page header styling to match admin theme
Fixes #7779
The EC Volumes page was rendering with bright Bootstrap default colors
instead of the admin dark theme because it was structured as a standalone
HTML document with its own DOCTYPE, head, and body tags.
This fix converts the template to be a content fragment (like other
properly styled templates such as cluster_ec_shards.templ) so it
correctly inherits the admin.css styling when rendered within the layout.
* Address review comments: fix URL interpolation and falsy value check
- Fix collection filter link to use templ.URL() for proper interpolation
- Change updateUrl() falsy check from 'if (params[key])' to
'if (params[key] != null)' to handle 0 and false values correctly
* Address additional review comments
- Use erasure_coding.TotalShardsCount constant instead of hardcoded '14'
for shard count displays (lines 88 and 214)
- Improve error handling in repairVolume() to check response.ok before
parsing JSON, preventing confusing errors on non-JSON responses
- Remove unused totalSize variable in formatShardRangesWithSizes()
- Simplify redundant pagination conditions
* Remove unused code: displayShardLocationsHTML, groupShardsByServerWithSizes, formatShardRangesWithSizes
These functions and templates were defined but never called anywhere
in the codebase. Removing them reduces code maintenance burden.
* Address review feedback: improve code quality
- Add defensive JSON response validation in repairVolume function
- Replace O(n²) bubble sorts with Go's standard sort.Ints and sort.Slice
- Document volume status thresholds explaining EC recovery logic:
* Critical: unrecoverable (more than DataShardsCount missing)
* Degraded: high risk (more than half DataShardsCount missing)
* Incomplete: reduced redundancy (more than half ParityShardsCount missing)
* Minor: fully recoverable with good margin
* Fix redundant shard count display in Healthy Volumes card
Changed from 'Complete (14/14 shards)' to 'All 14 shards present' since
the numerator and denominator were always the same value.
* Use templ.URL for default collection link for consistency
* Fix Clear Filter link to stay on EC Volumes page
Changed href from /cluster/ec-shards to /cluster/ec-volumes so clearing
the filter stays on the current page instead of navigating away.
* feat: add S3 bucket size and object count metrics
Adds periodic collection of bucket size metrics:
- SeaweedFS_s3_bucket_size_bytes: logical size (deduplicated across replicas)
- SeaweedFS_s3_bucket_physical_size_bytes: physical size (including replicas)
- SeaweedFS_s3_bucket_object_count: object count (deduplicated)
Collection runs every minute via a background goroutine that queries the
filer Statistics RPC for each bucket's collection.
Also adds Grafana dashboard panels for:
- S3 Bucket Size (logical vs physical)
- S3 Bucket Object Count
* address PR comments: fix bucket size metrics collection
1. Fix collectCollectionInfoFromMaster to use master VolumeList API
- Now properly queries master for topology info
- Uses WithMasterClient to get volume list from master
- Correctly calculates logical vs physical size based on replication
2. Return error when filerClient is nil to trigger fallback
- Changed from 'return nil, nil' to 'return nil, error'
- Ensures fallback to filer stats is properly triggered
3. Implement pagination in listBucketNames
- Added listBucketPageSize constant (1000)
- Uses StartFromFileName for pagination
- Continues fetching until fewer entries than limit returned
4. Handle NewReplicaPlacementFromByte error and prevent division by zero
- Check error return from NewReplicaPlacementFromByte
- Default to 1 copy if error occurs
- Add explicit check for copyCount == 0
* simplify bucket size metrics: remove filer fallback, align with quota enforcement
- Remove fallback to filer Statistics RPC
- Use only master topology for collection info (same as s3.bucket.quota.enforce)
- Updated comments to clarify this runs the same collection logic as quota enforcement
- Simplified code by removing collectBucketSizeFromFilerStats
* use s3a.option.Masters directly instead of querying filer
* address PR comments: fix dashboard overlaps and improve metrics collection
Grafana dashboard fixes:
- Fix overlapping panels 55 and 59 in grafana_seaweedfs.json (moved 59 to y=30)
- Fix grid collision in k8s dashboard (moved panel 72 to y=48)
- Aggregate bucket metrics with max() by (bucket) for multi-instance S3 gateways
Go code improvements:
- Add graceful shutdown support via context cancellation
- Use ticker instead of time.Sleep for better shutdown responsiveness
- Distinguish EOF from actual errors in stream handling
* improve bucket size metrics: multi-master failover and proper error handling
- Initial delay now respects context cancellation using select with time.After
- Use WithOneOfGrpcMasterClients for multi-master failover instead of hardcoding Masters[0]
- Properly propagate stream errors instead of just logging them (EOF vs real errors)
* improve bucket size metrics: distributed lock and volume ID deduplication
- Add distributed lock (LiveLock) so only one S3 instance collects metrics at a time
- Add IsLocked() method to LiveLock for checking lock status
- Fix deduplication: use volume ID tracking instead of dividing by copyCount
- Previous approach gave wrong results if replicas were missing
- Now tracks seen volume IDs and counts each volume only once
- Physical size still includes all replicas for accurate disk usage reporting
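The deduplication described above can be sketched as follows (hypothetical types, not the actual metrics collector):

```go
package example

type volumeStat struct {
	ID   uint32
	Size uint64
}

// aggregate counts each volume id once for the logical size, while every
// replica still contributes to physical (disk) usage.
func aggregate(replicas []volumeStat) (logical, physical uint64) {
	seen := make(map[uint32]bool)
	for _, v := range replicas {
		physical += v.Size // all replicas count toward disk usage
		if !seen[v.ID] {
			seen[v.ID] = true
			logical += v.Size // counted once, even if some replicas are missing
		}
	}
	return logical, physical
}
```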
* rename lock to s3.leader
* simplify: remove StartBucketSizeMetricsCollection wrapper function
* fix data race: use atomic operations for LiveLock.isLocked field
- Change isLocked from bool to int32
- Use atomic.LoadInt32/StoreInt32 for all reads/writes
- Sync shared isLocked field in StartLongLivedLock goroutine
* add nil check for topology info to prevent panic
* fix bucket metrics: use Ticker for consistent intervals, fix pagination logic
- Use time.Ticker instead of time.After for consistent interval execution
- Fix pagination: count all entries (not just directories) for proper termination
- Update lastFileName for all entries to prevent pagination issues
* address PR comments: remove redundant atomic store, propagate context
- Remove redundant atomic.StoreInt32 in StartLongLivedLock (AttemptToLock already sets it)
- Propagate context through metrics collection for proper cancellation on shutdown
- collectAndUpdateBucketSizeMetrics now accepts ctx
- collectCollectionInfoFromMaster uses ctx for VolumeList RPC
- listBucketNames uses ctx for ListEntries RPC
When copying/evacuating empty volumes, the .idx file may not exist on disk
(this is allowed by checkIdxFile for volumes with only super block in .dat).
This fix:
1. Uses os.IsNotExist() instead of err == os.ErrNotExist for proper
wrapped error checking in CopyFile
2. Treats missing source file as success when StopOffset == 0 (empty file)
3. Allows checkCopyFiles to pass when idx file doesn't exist but
IdxFileSize == 0 (empty volume)
Fixes volumeServer.evacuate and volume.fix.replication for empty volumes.
* fix: add S3 bucket traffic sent metric tracking
The BucketTrafficSent() function was defined but never called, causing
the S3 Bucket Traffic Sent Grafana dashboard panel to not display data.
Added BucketTrafficSent() calls in the streaming functions:
- streamFromVolumeServers: for inline and chunked content
- streamFromVolumeServersWithSSE: for encrypted range and full object requests
The traffic received metric already worked because BucketTrafficReceived()
was properly called in putToFiler() for both regular and multipart uploads.
* feat: add S3 API Calls per Bucket panel to Grafana dashboards
Added a new panel showing API calls per bucket using the existing
SeaweedFS_s3_request_total metric aggregated by bucket.
Updated all Grafana dashboard files:
- other/metrics/grafana_seaweedfs.json
- other/metrics/grafana_seaweedfs_k8s.json
- other/metrics/grafana_seaweedfs_heartbeat.json
- k8s/charts/seaweedfs/dashboards/seaweedfs-grafana-dashboard.json
* address PR comments: use actual bytes written for traffic metrics
- Use actual bytes written from w.Write instead of expected size for inline content
- Add countingWriter wrapper to track actual bytes for chunked content streaming
- Update streamDecryptedRangeFromChunks to return actual bytes written for SSE
- Remove redundant nil check that caused linter warning
- Fix duplicate panel id 86 in grafana_seaweedfs.json (changed to 90)
- Fix overlapping panel positions in grafana_seaweedfs_k8s.json (rebalanced x positions)
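The countingWriter wrapper mentioned in the list above might look roughly like this minimal sketch:

```go
package example

import "io"

// countingWriter forwards writes and records how many bytes actually reached
// the client, so egress metrics can use real byte counts (including partial
// writes) instead of the expected size.
type countingWriter struct {
	w       io.Writer
	written int64
}

func (c *countingWriter) Write(p []byte) (int, error) {
	n, err := c.w.Write(p)
	c.written += int64(n) // count what was written even if err != nil
	return n, err
}
```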
* fix grafana k8s dashboard: rebalance S3 panels to avoid overlap
- Panel 86 (S3 API Calls per Bucket): w:6, x:0, y:15
- Panel 67 (S3 Request Duration 95th): w:6, x:6, y:15
- Panel 68 (S3 Request Duration 80th): w:6, x:12, y:15
- Panel 65 (S3 Request Duration 99th): w:6, x:18, y:15
All four S3 panels now fit in a single row (y:15) with width 6 each.
Filer row header at y:22 and subsequent panels remain correctly positioned.
* add input validation and clarify comments in adjustRangeForPart
- Add validation that partStartOffset <= partEndOffset at function start
- Add clarifying comments for suffix-range handling where clientEnd
temporarily holds the suffix length before being reassigned
* align pluginVersion for panel 86 to 10.3.1 in k8s dashboard
* track partial writes for accurate egress traffic accounting
- Change condition from 'err == nil' to 'written > 0' for inline content
- Move BucketTrafficSent before error check for chunked content streaming
- Track traffic even on partial SSE range writes
- Track traffic even on partial full SSE object copies
This ensures egress traffic is counted even when writes fail partway through,
providing more accurate bandwidth metrics.
* s3: warm bucket config cache on startup for multi-filer consistency
In multi-filer clusters, the bucket configuration cache (storing Object Lock,
versioning, and other settings) was not being pre-populated on S3 API server
startup. This caused issues where:
1. After server restart, Object Lock and versioning settings appeared lost
until the bucket was accessed (lazy loading)
2. In multi-filer clusters, race conditions during bucket creation could
result in inconsistent Object Lock configuration
This fix warms the bucketConfigCache during BucketRegistry initialization,
ensuring all bucket configurations (including Object Lock and versioning)
are immediately available after restart without waiting for first access.
The fix piggybacks on the existing BucketRegistry.init() which already
iterates through all buckets, adding a call to update the config cache
with each bucket's extended attributes.
* s3: add visibility logging for bucket config cache warming
- Add bucket count tracking during initialization
- Log error if bucket listing fails
- Log INFO message with count of warmed buckets on successful init
This improves observability for the cache warming process and addresses
review feedback about error handling visibility.
* s3: fix bucket deletion not invalidating config cache
Bug fix: The metadata subscription handler had an early return when
NewEntry was nil, which skipped the onBucketMetadataChange call for
bucket deletions. This caused deleted buckets to remain in the config
cache.
The fix moves onBucketMetadataChange before the nil check so it's
called for all events (create, update, delete). The IAM and circuit
breaker updates still require NewEntry content, so they remain after
the check.
* s3: handle config file deletions for IAM and circuit breaker
Refactored the metadata subscription handlers to properly handle all
event types (create, update, delete) for IAM and circuit breaker configs:
- Renamed onIamConfigUpdate -> onIamConfigChange
- Renamed onCircuitBreakerConfigUpdate -> onCircuitBreakerConfigChange
- Both handlers now check for deletions (newEntry == nil && oldEntry != nil)
- On config file deletion, reset to empty config by loading empty bytes
- Simplified processEventFn to call all handlers unconditionally
- Each handler checks for nil entries internally
This ensures that deleting identity.json or circuit_breaker.json will
clear the in-memory config rather than leaving stale data.
* s3: restore NewParentPath handling for rename/move operations
The directory resolution logic was accidentally removed. This restores
the check for NewParentPath which is needed when files are renamed or
moved - in such cases, NewParentPath contains the destination directory
which should be used for directory matching in the handlers.
* filer: improve FoundationDB performance by disabling batch by default
This PR addresses a performance issue where FoundationDB filer was achieving
only ~757 ops/sec with 12 concurrent S3 clients, despite FDB being capable
of 17,000+ ops/sec.
Root cause: The write batcher was waiting up to 5ms for each operation to
batch, even though S3 semantics require waiting for durability confirmation.
This added artificial latency that defeated the purpose of batching.
Changes:
- Disable write batching by default (batch_enabled = false)
- Each write now commits immediately in its own transaction
- Reduce batch interval from 5ms to 1ms when batching is enabled
- Add batch_enabled config option to toggle behavior
- Improve batcher to collect available ops without blocking
- Add benchmarks comparing batch vs no-batch performance
Benchmark results (16 concurrent goroutines):
- With batch: 2,924 ops/sec (342,032 ns/op)
- Without batch: 4,625 ops/sec (216,219 ns/op)
- Improvement: +58% faster
Configuration:
- Default: batch_enabled = false (optimal for S3 PUT latency)
- For bulk ingestion: set batch_enabled = true
Also fixes ARM64 Docker test setup (shell compatibility, fdbserver path).
* fix: address review comments - use atomic counter and remove duplicate batcher
- Use sync/atomic.Uint64 for unique filenames in concurrent benchmarks
- Remove duplicate batcher creation in createBenchmarkStoreWithBatching
(initialize() already creates batcher when batchEnabled=true)
* fix: add realistic default values to benchmark store helper
Set directoryPrefix, timeout, and maxRetryDelay to reasonable defaults
for more realistic benchmark conditions.
* s3: fix PutObject ETag format for multi-chunk uploads
Fix issue #7768: AWS S3 SDK for Java fails with 'Invalid base 16
character: -' when performing PutObject on files that are internally
auto-chunked.
The issue was that SeaweedFS returned a composite ETag format
(<md5hash>-<count>) for regular PutObject when the file was split
into multiple chunks due to auto-chunking. However, per AWS S3 spec,
the composite ETag format should only be used for multipart uploads
(CreateMultipartUpload/UploadPart/CompleteMultipartUpload API).
Regular PutObject should always return a pure MD5 hash as the ETag,
regardless of how the file is stored internally.
The fix ensures the MD5 hash is always stored in entry.Attributes.Md5
for regular PutObject operations, so filer.ETag() returns the pure
MD5 hash instead of falling back to ETagChunks() composite format.
* test: add comprehensive ETag format tests for issue #7768
Add integration tests to ensure PutObject ETag format compatibility:
Go tests (test/s3/etag/):
- TestPutObjectETagFormat_SmallFile: 1KB single chunk
- TestPutObjectETagFormat_LargeFile: 10MB auto-chunked (critical for #7768)
- TestPutObjectETagFormat_ExtraLargeFile: 25MB multi-chunk
- TestMultipartUploadETagFormat: verify composite ETag for multipart
- TestPutObjectETagConsistency: ETag consistency across PUT/HEAD/GET
- TestETagHexValidation: simulate AWS SDK v2 hex decoding
- TestMultipleLargeFileUploads: stress test multiple large uploads
Java tests (other/java/s3copier/):
- Update pom.xml to include AWS SDK v2 (2.20.127)
- Add ETagValidationTest.java with comprehensive SDK v2 tests
- Add README.md documenting SDK versions and test coverage
Documentation:
- Add test/s3/SDK_COMPATIBILITY.md documenting validated SDK versions
- Add test/s3/etag/README.md explaining test coverage
These tests ensure large file PutObject (>8MB) returns pure MD5 ETags
(not composite format), which is required for AWS SDK v2 compatibility.
* fix: lower Java version requirement to 11 for CI compatibility
* address CodeRabbit review comments
- s3_etag_test.go: Handle rand.Read error, fix multipart part-count logging
- Makefile: Add 'all' target, pass S3_ENDPOINT to test commands
- SDK_COMPATIBILITY.md: Add language tag to fenced code block
- ETagValidationTest.java: Add pagination to cleanup logic
- README.md: Clarify Go SDK tests are in separate location
* ci: add s3copier ETag validation tests to Java integration tests
- Enable S3 API (-s3 -s3.port=8333) in SeaweedFS test server
- Add S3 API readiness check to wait loop
- Add step to run ETagValidationTest from s3copier
This ensures the fix for issue #7768 is continuously tested
against AWS SDK v2 for Java in CI.
* ci: add S3 config with credentials for s3copier tests
- Add -s3.config pointing to docker/compose/s3.json
- Add -s3.allowDeleteBucketNotEmpty for test cleanup
- Set S3_ACCESS_KEY and S3_SECRET_KEY env vars for tests
* ci: pass S3 config as Maven system properties
Pass S3_ENDPOINT, S3_ACCESS_KEY, S3_SECRET_KEY via -D flags
so they're available via System.getProperty() in Java tests
* Add keyPrefix support for TiKV store
Similar to the Redis keyPrefix feature (#7299), this adds keyPrefix support
for TiKV stores to enable sharing a single TiKV cluster as the metadata store
for multitenant SeaweedFS clusters.
Changes:
- Add keyPrefix field to TikvStore struct
- Update Initialize function to read keyPrefix from config
- Add getKey method to prepend prefix to all keys
- Update generateKey, getNameFromKey, and genDirectoryKeyPrefix methods
to be store receiver methods and handle key prefixing
- Update filer.toml scaffold with keyPrefix configuration option
Fixes #7752
* Fix potential slice corruption in getKey method
Use a new slice with proper capacity to avoid modifying the
underlying array of store.keyPrefix when appending.
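A minimal sketch of that allocation pattern: a fresh slice with exact capacity ensures appending the key never writes into the backing array shared with the stored prefix.

```go
package example

func getKey(keyPrefix, key []byte) []byte {
	out := make([]byte, 0, len(keyPrefix)+len(key))
	out = append(out, keyPrefix...)
	out = append(out, key...)
	return out
}
```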
* Add keyPrefix validation and defensive bounds check
- Add validation in Initialize to reject keyPrefix longer than 256 bytes
- Add bounds check in getNameFromKey to prevent panic on malformed keys
* Update weed/filer/tikv/tikv_store.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update weed/command/scaffold/filer.toml
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Add TUS protocol integration tests
This commit adds integration tests for the TUS (resumable upload) protocol
in preparation for implementing TUS support in the filer.
Test coverage includes:
- OPTIONS handler for capability discovery
- Basic single-request upload
- Chunked/resumable uploads
- HEAD requests for offset tracking
- DELETE for upload cancellation
- Error handling (invalid offsets, missing uploads)
- Creation-with-upload extension
- Resume after interruption simulation
Tests are skipped in short mode and require a running SeaweedFS cluster.
* Add TUS session storage types and utilities
Implements TUS upload session management:
- TusSession struct for tracking upload state
- Session creation with directory-based storage
- Session persistence using filer entries
- Session retrieval and offset updates
- Session deletion with chunk cleanup
- Upload completion with chunk assembly into final file
Session data is stored in the /.uploads.tus/{upload-id}/ directory,
following the pattern used by S3 multipart uploads.
* Add TUS HTTP handlers
Implements TUS protocol HTTP handlers:
- tusHandler: Main entry point routing requests
- tusOptionsHandler: Capability discovery (OPTIONS)
- tusCreateHandler: Create new upload (POST)
- tusHeadHandler: Get upload offset (HEAD)
- tusPatchHandler: Upload data at offset (PATCH)
- tusDeleteHandler: Cancel upload (DELETE)
- tusWriteData: Upload data to volume servers
Features:
- Supports creation-with-upload extension
- Validates TUS protocol headers
- Offset conflict detection
- Automatic upload completion when size is reached
- Metadata parsing from Upload-Metadata header
* Wire up TUS protocol routes in filer server
Add TUS handler route (/.tus/) to the filer HTTP server.
The TUS route is registered before the catch-all route to ensure
proper routing of TUS protocol requests.
TUS protocol is now accessible at:
- OPTIONS /.tus/ - Capability discovery
- POST /.tus/{path} - Create upload
- HEAD /.tus/.uploads/{id} - Get offset
- PATCH /.tus/.uploads/{id} - Upload data
- DELETE /.tus/.uploads/{id} - Cancel upload
* Improve TUS integration test setup
Add comprehensive Makefile for TUS tests with targets:
- test-with-server: Run tests with automatic server management
- test-basic/chunked/resume/errors: Specific test categories
- manual-start/stop: For development testing
- debug-logs/status: For debugging
- ci-test: For CI/CD pipelines
Update README.md with:
- Detailed TUS protocol documentation
- All endpoint descriptions with headers
- Usage examples with curl commands
- Architecture diagram
- Comparison with S3 multipart uploads
Follows the pattern established by other tests in test/ folder.
* Fix TUS integration tests and creation-with-upload
- Fix test URLs to use full URLs instead of relative paths
- Fix creation-with-upload to refresh session before completing
- Fix Makefile to properly handle test cleanup
- Add FullURL helper function to TestCluster
* Add TUS protocol tests to GitHub Actions CI
- Add tus-tests.yml workflow that runs on PRs and pushes
- Runs when TUS-related files are modified
- Automatic server management for integration testing
- Upload logs on failure for debugging
* Make TUS base path configurable via CLI
- Add -tus.path CLI flag to filer command
- TUS is disabled by default (empty path)
- Example: -tus.path=/.tus to enable at /.tus endpoint
- Update test Makefile to use -tus.path flag
- Update README with TUS enabling instructions
* Rename -tus.path to -tusBasePath with default .tus
- Rename CLI flag from -tus.path to -tusBasePath
- Default to .tus (TUS enabled by default)
- Add -filer.tusBasePath option to weed server command
- Properly handle path prefix (prepend / if missing)
* Address code review comments
- Sort chunks by offset before assembling final file
- Use chunk.Offset directly instead of recalculating
- Return error on invalid file ID instead of skipping
- Require Content-Length header for PATCH requests
- Use fs.option.Cipher for encryption setting
- Detect MIME type from data using http.DetectContentType
- Fix concurrency group for push events in workflow
- Use os.Interrupt instead of Kill for graceful shutdown in tests
* fmt
* Address remaining code review comments
- Fix potential open redirect vulnerability by sanitizing uploadLocation path
- Add language specifier to README code block
- Handle os.Create errors in test setup
- Use waitForHTTPServer instead of time.Sleep for master/volume readiness
- Improve test reliability and debugging
* Address critical and high-priority review comments
- Add per-session locking to prevent race conditions in updateTusSessionOffset
- Stream data directly to volume server instead of buffering entire chunk
- Only buffer 512 bytes for MIME type detection, then stream remaining data
- Clean up session locks when session is deleted
* Fix race condition to work across multiple filer instances
- Store each chunk as a separate file entry instead of updating session JSON
- Chunk file names encode offset, size, and fileId for atomic storage
- getTusSession loads chunks from directory listing (atomic read)
- Eliminates read-modify-write race condition across multiple filers
- Remove in-memory mutex that only worked for single filer instance
* Address code review comments: fix variable shadowing, sniff size, and test stability
- Rename path variable to reqPath to avoid shadowing path package
- Make sniff buffer size respect contentLength (read at most contentLength bytes)
- Handle Content-Length < 0 in creation-with-upload (return error for chunked encoding)
- Fix test cluster: use temp directory for filer store, add startup delay
* Fix test stability: increase cluster stabilization delay to 5 seconds
The tests were intermittently failing because the volume server needed more
time to create volumes and register with the master. Increasing the delay
from 2 to 5 seconds fixes the flaky test behavior.
* Address PR review comments for TUS protocol support
- Fix strconv.Atoi error handling in test file (lines 386, 747)
- Fix lossy fileId encoding: use base64 instead of underscore replacement
- Add pagination support for ListDirectoryEntries in getTusSession
- Batch delete chunks instead of one-by-one in deleteTusSession
* Address additional PR review comments for TUS protocol
- Fix UploadAt timestamp: use entry.Crtime instead of time.Now()
- Remove redundant JSON content in chunk entry (metadata in filename)
- Refactor tusWriteData to stream in 4MB chunks to avoid OOM on large uploads
- Pass filer.Entry to parseTusChunkPath to preserve actual upload time
* Address more PR review comments for TUS protocol
- Normalize TUS path once in filer_server.go, store in option.TusPath
- Remove redundant path normalization from TUS handlers
- Remove goto statement in tusCreateHandler, simplify control flow
* Remove unnecessary mutexes in tusWriteData
The upload loop is sequential, so uploadErrLock and chunksLock are not needed.
* Rename updateTusSessionOffset to saveTusChunk
Remove unused newOffset parameter and rename function to better reflect its purpose.
* Improve TUS upload performance and add path validation
- Reuse operation.Uploader across sub-chunks for better connection reuse
- Guard against TusPath='/' to prevent hijacking all filer routes
* Address PR review comments for TUS protocol
- Fix critical chunk filename parsing: use strings.Cut instead of SplitN
to correctly handle base64-encoded fileIds that may contain underscores
- Rename tusPath to tusBasePath for naming consistency across codebase
- Add background garbage collection for expired TUS sessions (runs hourly)
- Improve error messages with %w wrapping for better debuggability
* Address additional TUS PR review comments
- Fix tusBasePath default to use leading slash (/.tus) for consistency
- Add chunk contiguity validation in completeTusUpload to detect gaps/overlaps
- Fix offset calculation to find maximum contiguous range from 0, not just last chunk
- Return 413 Request Entity Too Large instead of silently truncating content
- Document tusChunkSize rationale (4MB balances memory vs request overhead)
- Fix Makefile xargs portability by removing GNU-specific -r flag
- Add explicit -tusBasePath flag to integration test for robustness
- Fix README example to use /.uploads/tus path format
* Revert log_buffer changes (moved to separate PR)
* Minor style fixes from PR review
- Simplify tusBasePath flag description to use example format
- Add 'TUS upload' prefix to session not found error message
- Remove duplicate tusChunkSize comment
- Capitalize warning message for consistency
- Add grep filter to Makefile xargs for better empty input handling
When ReadFromBuffer returns ResumeFromDiskError, the function now:
- Attempts to read from disk if ReadFromDiskFn is available
- Checks if the client is still connected via waitForDataFn
- Waits for notification or short timeout before retrying
- Continues the loop instead of immediately returning the error
This fixes TestNewLogBufferFirstBuffer which was failing because the
function returned too early before data was available in the buffer.
* fix: sync EC volume files before copying to fix deleted needles not being marked when decoding (#7751)
When a file is deleted from an EC volume, the deletion is written to both
the .ecx and .ecj files. However, these writes were not synced to disk
before the files were copied during ec.decode. This caused the copied
files to miss the deletion markers, resulting in 'leaked' space where
deleted files were not properly tracked after decoding.
This fix:
1. Adds a Sync() method to EcVolume that flushes .ecx and .ecj files
to disk without closing them
2. Calls Sync() in CopyFile before copying EC volume files to ensure
all deletions are visible to the copy operation
Fixes #7751
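A sketch of the Sync step described above, with illustrative field names: flush the .ecx index and .ecj journal to disk without closing them, logging (not ignoring) sync failures per the later review feedback.

```go
package example

import (
	"log"
	"os"
)

type ecVolume struct {
	ecxFile *os.File
	ecjFile *os.File
}

func (ev *ecVolume) Sync() {
	if ev.ecxFile != nil {
		if err := ev.ecxFile.Sync(); err != nil {
			log.Printf("sync .ecx: %v", err) // logged so a reintroduced bug is debuggable
		}
	}
	if ev.ecjFile != nil {
		if err := ev.ecjFile.Sync(); err != nil {
			log.Printf("sync .ecj: %v", err)
		}
	}
}
```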
* test: add integration tests for EC volume deletion sync (issue #7751)
Add comprehensive tests to verify that deleted needles are properly
visible after EcVolume.Sync() is called. These tests cover:
1. TestWriteIdxFileFromEcIndex_PreservesDeletedNeedles
- Verifies that WriteIdxFileFromEcIndex preserves deletion markers
from .ecx files when generating .idx files
2. TestWriteIdxFileFromEcIndex_ProcessesEcjJournal
- Verifies that deletions from .ecj journal file are correctly
appended to the generated .idx file
3. TestEcxFileDeletionVisibleAfterSync
- Verifies that MarkNeedleDeleted changes are visible after Sync()
4. TestEcxFileDeletionWithSeparateHandles
- Tests that synced changes are visible across separate file handles
5. TestEcVolumeSyncEnsuresDeletionsVisible
- Integration test for the full EcVolume.DeleteNeedleFromEcx +
Sync() workflow that validates the fix for issue #7751
* refactor: log sync errors in EcVolume.Sync() instead of ignoring them
Per code review feedback: sync errors could reintroduce the bug this PR
fixes, so logging warnings helps with debugging.
When loading IAM config from JSON, if the policy section exists but
storeType is not specified, default to 'memory' instead of 'filer'.
This ensures that policies defined in JSON config files are properly
loaded into memory for test environments and standalone setups that
don't rely on the filer for policy persistence.
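The defaulting rule amounts to a small fallback like the sketch below; the config struct is a hypothetical stand-in for the IAM JSON policy section.

```go
package example

type policyConfig struct {
	StoreType string `json:"storeType"`
}

// resolveStoreType falls back to the in-memory store when the policy section
// exists but no storeType is specified (previously the default was "filer").
func resolveStoreType(cfg *policyConfig) string {
	if cfg == nil {
		return ""
	}
	if cfg.StoreType == "" {
		return "memory"
	}
	return cfg.StoreType
}
```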