* fix: cancel volume server requests on client disconnect during S3 downloads
- Use http.NewRequestWithContext in ReadUrlAsStream so in-flight volume
server requests are properly aborted when the client disconnects and
the request context is canceled
- Distinguish context-canceled errors (client disconnect, expected) from
real server errors in streamFromVolumeServers; log at V(3) instead of
ERROR to reduce noise from client-side disconnects (e.g. Nginx upstream
timeout, browser cancel, curl --max-time)
Fixes: "streamFromVolumeServers: streamFn failed...context canceled"
* fixup: separate Canceled/DeadlineExceeded log severity in streamFromVolumeServers
- context.Canceled → V(3) Infof "client disconnected" (expected, no noise)
- context.DeadlineExceeded → Warningf "server-side deadline exceeded" (unexpected, needs attention)
- all other errors → Errorf (unchanged)
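A minimal sketch of the severity split described above, assuming a hypothetical classifyStreamErr helper; the real code uses glog.V(3).Infof, glog.Warningf and glog.Errorf rather than the stdlib logger used here:

```go
package main

import (
	"context"
	"errors"
	"log"
)

// classifyStreamErr shows the severity split: context.Canceled is an expected
// client disconnect, context.DeadlineExceeded is a server-side problem, and
// everything else is a real error.
func classifyStreamErr(err error) {
	switch {
	case errors.Is(err, context.Canceled):
		log.Printf("V(3): client disconnected (expected): %v", err) // low-noise info
	case errors.Is(err, context.DeadlineExceeded):
		log.Printf("WARN: server-side deadline exceeded: %v", err) // needs attention
	default:
		log.Printf("ERROR: streamFn failed: %v", err)
	}
}

func main() {
	classifyStreamErr(context.Canceled)
	classifyStreamErr(context.DeadlineExceeded)
	classifyStreamErr(errors.New("connection reset"))
}
```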
* pb: fix IPv6 double brackets in ServerAddress formatting
* pb: refactor IPv6 tests into table-driven test
* util: add JoinHostPortStr and use it in pb to avoid unsafe port parsing
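A sketch of the idea behind util.JoinHostPortStr, using a hypothetical joinHostPortStr stand-in: delegating to net.JoinHostPort avoids both unsafe port parsing and double-bracketed IPv6 hosts; the trimming detail here is an assumption, not the shipped implementation:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// joinHostPortStr lets net.JoinHostPort handle IPv6 bracketing, so an
// already-bracketed host is not wrapped twice.
func joinHostPortStr(host, port string) string {
	host = strings.Trim(host, "[]") // avoid "[[::1]]:8080"
	return net.JoinHostPort(host, port)
}

func main() {
	fmt.Println(joinHostPortStr("::1", "8080"))    // [::1]:8080
	fmt.Println(joinHostPortStr("[::1]", "8080"))  // [::1]:8080, no double brackets
	fmt.Println(joinHostPortStr("10.0.0.1", "80")) // 10.0.0.1:80
}
```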
* Fix chown Input/output error on large file sets (Fixes #7911)
Implemented retry logic for MySQL/MariaDB backend to handle transient errors like deadlocks and timeouts.
* Fix syntax error: missing closing brace
* Refactor: Use %w for error wrapping and errors.As for extraction
* Fix: Disable retry logic inside transactions
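A hedged sketch of the retry pattern from these commits, using the go-sql-driver/mysql error type with errors.As and %w wrapping; isRetryableMySQLErr, withRetry and the exact error codes are illustrative assumptions, not the shipped implementation:

```go
package mysqlretry

import (
	"database/sql"
	"errors"
	"fmt"
	"time"

	"github.com/go-sql-driver/mysql"
)

// isRetryableMySQLErr treats deadlocks (1213) and lock wait timeouts (1205)
// as transient; the codes handled by the real backend may differ.
func isRetryableMySQLErr(err error) bool {
	var myErr *mysql.MySQLError
	if errors.As(err, &myErr) {
		return myErr.Number == 1213 || myErr.Number == 1205
	}
	return false
}

// withRetry retries an operation on transient errors. It must not be used
// inside an open transaction, where a retry would operate on an already
// aborted transaction.
func withRetry(db *sql.DB, attempts int, op func(*sql.DB) error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(db); err == nil || !isRetryableMySQLErr(err) {
			return err
		}
		time.Sleep(time.Duration(i+1) * 100 * time.Millisecond)
	}
	return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
}
```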
* chore: execute goimports to format the code
Signed-off-by: promalert <promalert@outlook.com>
* goimports -w .
---------
Signed-off-by: promalert <promalert@outlook.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
* filer.meta.backup: add -excludePaths flag to skip paths from backup
Add a new -excludePaths flag that accepts comma-separated path prefixes
to exclude from backup operations. This enables selective backup when
certain directories (e.g., legacy buckets) should be skipped.
Usage:
weed filer.meta.backup -filerDir=/buckets -excludePaths=/buckets/legacy1,/buckets/legacy2 -config=backup.toml
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* filer.meta.backup: address code review feedback for -excludePaths
Fixes based on CodeRabbit and Gemini review:
- Cache parsed exclude paths in struct (performance)
- TrimSpace and skip empty entries (handles "a,,b" and "a, b")
- Add trailing slash for directory boundary matching (prevents
/buckets/legacy matching /buckets/legacy_backup)
- Validate paths start with '/' and warn if not
- Log excluded paths at startup for debugging
- Fix rename handling: check both old and new paths, handle all
four combinations correctly
- Add docstring to shouldExclude()
- Update UsageLine and Long description with new flag
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
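The trailing-slash boundary check and comma parsing described above could look roughly like this sketch; shouldExclude matches the name mentioned in the commit, while parseExcludePaths and the details are assumptions:

```go
package metabackup

import "strings"

// shouldExclude does boundary-aware prefix matching: "/buckets/legacy" should
// match "/buckets/legacy/x" but not "/buckets/legacy_backup".
func shouldExclude(path string, excludePrefixes []string) bool {
	for _, prefix := range excludePrefixes {
		if path == prefix || strings.HasPrefix(path, prefix+"/") {
			return true
		}
	}
	return false
}

// parseExcludePaths trims entries and skips empties so "a,,b" and "a, b" both work.
func parseExcludePaths(flagValue string) []string {
	var out []string
	for _, p := range strings.Split(flagValue, ",") {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		out = append(out, strings.TrimSuffix(p, "/"))
	}
	return out
}
```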
* filer.meta.backup: address nitpick feedback
- Clarify directory boundary matching behavior in help text
- Add warning when root path '/' is excluded (would exclude everything)
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* includePrefixes and excludePrefixes
---------
Co-authored-by: C Shaw <cliffshaw@users.noreply.github.com>
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
* fix: use keyed fields in struct literals
- Replace unsafe reflect.StringHeader/SliceHeader with safe unsafe.String/Slice (weed/query/sqltypes/unsafe.go)
- Add field names to Type_ScalarType struct literals (weed/mq/schema/schema_builder.go)
- Add Duration field name to FlexibleDuration struct literals across test files
- Add field names to bson.D struct literals (weed/filer/mongodb/mongodb_store_kv.go)
Fixes go vet warnings about unkeyed struct literals.
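For the unsafe conversion change, a sketch of the Go 1.20+ pattern that replaces the deprecated reflect.StringHeader/SliceHeader tricks; the actual helpers in weed/query/sqltypes/unsafe.go may differ in naming and guards:

```go
package sqltypes

import "unsafe"

// Zero-copy conversions; the usual caveat applies: the result must not be
// mutated and must not outlive the original value.
func bytesToString(b []byte) string {
	if len(b) == 0 {
		return ""
	}
	return unsafe.String(unsafe.SliceData(b), len(b))
}

func stringToBytes(s string) []byte {
	if len(s) == 0 {
		return nil
	}
	return unsafe.Slice(unsafe.StringData(s), len(s))
}
```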
* fix: remove unreachable code
- Remove unreachable return statements after infinite for loops
- Remove unreachable code after if/else blocks where all paths return
- Simplify recursive logic by removing unnecessary for loop (inode_to_path.go)
- Fix Type_ScalarType literal to use enum value directly (schema_builder.go)
- Call onCompletionFn on stream error (subscribe_session.go)
Files fixed:
- weed/query/sqltypes/unsafe.go
- weed/mq/schema/schema_builder.go
- weed/mq/client/sub_client/connect_to_sub_coordinator.go
- weed/filer/redis3/ItemList.go
- weed/mq/client/agent_client/subscribe_session.go
- weed/mq/broker/broker_grpc_pub_balancer.go
- weed/mount/inode_to_path.go
- weed/util/skiplist/name_list.go
* fix: avoid copying lock values in protobuf messages
- Use proto.Merge() instead of direct assignment to avoid copying sync.Mutex in S3ApiConfiguration (iamapi_server.go)
- Add explicit comments noting that channel-received values are already copies before taking addresses (volume_grpc_client_to_master.go)
The protobuf messages contain sync.Mutex fields from the message state, which should not be copied.
Using proto.Merge() properly merges messages without copying the embedded mutex.
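A minimal sketch of the proto.Merge pattern, with a generic mergeInto helper as an assumption (the real change touches S3ApiConfiguration in iamapi_server.go):

```go
package example

import "google.golang.org/protobuf/proto"

// mergeInto: generated protobuf messages embed internal state that includes a
// sync.Mutex, so assigning `*dst = *src` copies a lock and trips go vet.
// proto.Merge copies the message fields without copying that internal state.
func mergeInto(dst, src proto.Message) {
	proto.Reset(dst)      // optional: start from a clean destination
	proto.Merge(dst, src) // field-wise merge, no mutex copy
}
```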
* fix: correct byte array size for uint32 bit shift operations
The generateAccountId() function only needs 4 bytes to create a uint32 value.
Changed from allocating 8 bytes to 4 bytes to match the actual usage.
This fixes go vet warning about shifting 8-bit values (bytes) by more than 8 bits.
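A sketch of the 4-byte fix; generateAccountId is named in the commit, but this body (and the use of binary.BigEndian.Uint32 instead of manual shifts) is an assumption:

```go
package main

import (
	"crypto/rand"
	"encoding/binary"
	"fmt"
)

// generateAccountId: a uint32 only needs 4 random bytes, and
// binary.BigEndian.Uint32 avoids shifting 8-bit values by hand.
func generateAccountId() (uint32, error) {
	var buf [4]byte
	if _, err := rand.Read(buf[:]); err != nil {
		return 0, err
	}
	return binary.BigEndian.Uint32(buf[:]), nil
}

func main() {
	id, err := generateAccountId()
	fmt.Println(id, err)
}
```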
* fix: ensure context cancellation on all error paths
In broker_client_subscribe.go, ensure subscriberCancel() is called on all error return paths:
- When stream creation fails
- When partition assignment fails
- When sending initialization message fails
This prevents context leaks when an error occurs during subscriber creation.
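A sketch of the cleanup rule, with hypothetical names standing in for the broker types: every error return after context.WithCancel must call the cancel function.

```go
package broker

import "context"

// createSubscriber cancels the subscriber context on every error path so the
// context (and its resources) never leak when setup fails part-way through.
func createSubscriber(parent context.Context, newStream func(context.Context) (func(msg string) error, error)) (context.CancelFunc, error) {
	subscriberCtx, subscriberCancel := context.WithCancel(parent)

	send, err := newStream(subscriberCtx)
	if err != nil {
		subscriberCancel() // stream creation failed
		return nil, err
	}
	if err := send("init"); err != nil {
		subscriberCancel() // sending the initialization message failed
		return nil, err
	}
	return subscriberCancel, nil
}
```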
* fix: ensure subscriberCancel called for CreateFreshSubscriber stream.Send error
Ensure subscriberCancel() is called when stream.Send fails in CreateFreshSubscriber.
* ci: add go vet step to prevent future lint regressions
- Add go vet step to GitHub Actions workflow
- Filter known protobuf lock warnings (MessageState sync.Mutex)
These are expected in generated protobuf code and are safe
- Prevents accumulation of go vet errors in future PRs
- Step runs before build to catch issues early
* fix: resolve remaining syntax and logic errors in vet fixes
- Fixed syntax errors in filer_sync.go caused by missing closing braces
- Added missing closing brace for if block and function
- Synchronized fixes to match previous commits on branch
* fix: add missing return statements to daemon functions
- Add 'return false' after infinite loops in filer_backup.go and filer_meta_backup.go
- Satisfies declared bool return type signatures
- Maintains consistency with other daemon functions (runMaster, runFilerSynchronize, runWorker)
- While unreachable, the explicit return documents that the function signature contract is satisfied
* fix: add nil check for onCompletionFn in SubscribeMessageRecord
- Check if onCompletionFn is not nil before calling it
- Prevents potential panic if nil function is passed
- Matches pattern used in other callback functions
* docs: clarify unreachable return statements in daemon functions
- Add comments documenting that return statements satisfy function signature
- Explains that these returns follow infinite loops and are unreachable
- Improves code clarity for future maintainers
* Add consistent -debug and -debug.port flags to commands
Add -debug and -debug.port flags to weed master, weed volume, weed s3,
weed mq.broker, and weed filer.sync commands for consistency with
weed filer.
When -debug is enabled, an HTTP server starts on the specified port
(default 6060) serving runtime profiling data at /debug/pprof/.
For mq.broker, replaced the older -port.pprof flag with the new
-debug and -debug.port pattern for consistency.
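A minimal sketch of the -debug / -debug.port behavior, assuming a plain net/http/pprof server; the real implementation lives in weed/util/grace/pprof.go and may differ:

```go
package main

import (
	"flag"
	"fmt"
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/ handlers on DefaultServeMux
)

func main() {
	debug := flag.Bool("debug", false, "serve runtime profiling data")
	debugPort := flag.Int("debug.port", 6060, "http port for /debug/pprof/")
	flag.Parse()

	if *debug {
		go func() {
			addr := fmt.Sprintf("localhost:%d", *debugPort)
			log.Println(http.ListenAndServe(addr, nil))
		}()
	}
	select {} // stand-in for the real server's main loop
}
```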
* Update weed/util/grace/pprof.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
When ReadFromBuffer returns ResumeFromDiskError, the function now:
- Attempts to read from disk if ReadFromDiskFn is available
- Checks if the client is still connected via waitForDataFn
- Waits for notification or short timeout before retrying
- Continues the loop instead of immediately returning the error
This fixes TestNewLogBufferFirstBuffer which was failing because the
function returned too early before data was available in the buffer.
* filer: reduce allocations in MatchStorageRule
Optimize MatchStorageRule to avoid allocations in common cases:
- Return singleton emptyPathConf when no rules match (zero allocations)
- Return existing rule directly when only one rule matches (zero allocations)
- Only allocate and merge when multiple rules match (rare case)
Based on heap profile analysis showing 111MB allocated from 1.64M calls
to this function during 180 seconds of operation.
* filer: add fast path for getActualStore when no path-specific stores
Add hasPathSpecificStore flag to FilerStoreWrapper to skip
the MatchPrefix() call and []byte(path) conversion when no
path-specific stores are configured (the common case).
Based on heap profile analysis showing 1.39M calls to this
function during 180 seconds of operation, each requiring a
string-to-byte slice conversion for the MatchPrefix call.
* filer/foundationdb: use sync.Pool for tuple allocation in genKey
Use sync.Pool to reuse tuple.Tuple slices in genKey(), reducing
allocation overhead for every FoundationDB operation.
Based on heap profile analysis showing 102MB allocated from 1.79M
calls to genKey() during 180 seconds of operation. The Pack() call
still allocates internally, but this reduces the tuple slice
allocation overhead by ~50%.
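The pooling pattern could look like this sketch, with tupleElement and packFn as stand-ins for the FoundationDB tuple types; the real genKey differs in detail:

```go
package foundationdb

import "sync"

// tupleElement stands in for the FoundationDB tuple element type.
type tupleElement = interface{}

// tuplePool reuses the key-tuple slice across genKey calls instead of
// allocating a fresh one per operation.
var tuplePool = sync.Pool{
	New: func() interface{} {
		return make([]tupleElement, 0, 2) // dirPath + fileName
	},
}

func genKey(packFn func([]tupleElement) []byte, dirPath, fileName string) []byte {
	t := tuplePool.Get().([]tupleElement)
	defer func() {
		tuplePool.Put(t[:0]) // reset length, keep capacity, even if packFn panics
	}()
	t = append(t, dirPath, fileName)
	return packFn(t)
}
```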
* filer: use sync.Pool for protobuf Entry and FuseAttributes
Add pooling for filer_pb.Entry and filer_pb.FuseAttributes in
EncodeAttributesAndChunks and DecodeAttributesAndChunks to reduce
allocations during filer store operations.
Changes:
- Add pbEntryPool with pre-allocated FuseAttributes
- Add EntryAttributeToExistingPb for in-place attribute conversion
- Update ToExistingProtoEntry to reuse existing Attributes when available
Based on heap profile showing:
- EncodeAttributesAndChunks: 69.5MB cumulative
- DecodeAttributesAndChunks: 46.5MB cumulative
- EntryAttributeToPb: 47.5MB flat allocations
* log_buffer: use sync.Pool for LogEntry in readTs
Add logEntryPool to reuse filer_pb.LogEntry objects in readTs(),
which is called frequently during binary search in ReadFromBuffer.
This function only needs the TsNs field from the unmarshaled entry,
so pooling the LogEntry avoids repeated allocations.
Based on heap profile showing readTs with 188MB cumulative allocations
from timestamp lookups during log buffer reads.
* pb: reduce gRPC metadata allocations in interceptor
Optimize requestIDUnaryInterceptor and WithGrpcClient to reduce
metadata allocations on every gRPC request:
- Use AppendToOutgoingContext instead of NewOutgoingContext + New()
This avoids creating a new map[string]string for single key-value pairs
- Check FromIncomingContext return value before using metadata
Based on heap profile showing metadata operations contributing 0.45GB
(10.5%) of allocations, with requestIDUnaryInterceptor being the main
source at 0.44GB cumulative.
Expected reduction: ~0.2GB from avoiding map allocations per request.
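A sketch of the metadata optimization, with x-request-id as a hypothetical header name; AppendToOutgoingContext avoids allocating a metadata.MD map for a single key/value pair:

```go
package pb

import (
	"context"

	"google.golang.org/grpc/metadata"
)

const requestIDKey = "x-request-id" // hypothetical header name for this sketch

// withRequestID only touches incoming metadata when it actually exists, and
// appends a single key/value pair without building a new metadata map.
func withRequestID(ctx context.Context) context.Context {
	if md, ok := metadata.FromIncomingContext(ctx); ok {
		if ids := md.Get(requestIDKey); len(ids) > 0 {
			return metadata.AppendToOutgoingContext(ctx, requestIDKey, ids[0])
		}
	}
	return ctx
}
```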
* filer/log_buffer: address code review feedback
- Use proto.Reset() instead of manual field clearing in resetLogEntry
for more idiomatic and comprehensive state clearing
- Add resetPbEntry() call before pool return in error path for
consistency with success path in DecodeAttributesAndChunks
* log_buffer: reduce PreviousBufferCount from 32 to 4
Reduce the number of retained previous buffers from 32 to 4.
Each buffer is 8MB, so this reduces the maximum retained memory
from 256MB to 32MB for previous buffers.
Most subscribers catch up quickly, so 4 buffers (32MB) should
be sufficient while significantly reducing memory footprint.
* filer/foundationdb: use defer for tuple pool cleanup in genKey
Refactor genKey to use defer for returning the pooled tuple.
This ensures the pooled object is always returned even if
store.seaweedfsDir.Pack panics, making the code more robust.
Also simplifies the code by removing the temporary variable.
* filer: early-stop MatchStorageRule prescan after 2 matches
Stop the prescan callback after finding 2 matches since we only
need to know if there are 0, 1, or multiple matches. This avoids
unnecessarily scanning the rest of the trie when many rules exist.
* fix: address critical code review issues
filer_conf.go:
- Remove mutable singleton emptyPathConf that could corrupt shared state
- Return fresh copy for no-match case and cloned copy for single-match case
- Add clonePathConf helper to create shallow copies safely
grpc_client_server.go:
- Remove incorrect AppendToOutgoingContext call in server interceptor
(that API is for outbound client calls, not server-side handlers)
- Rely on request_id.Set and SetTrailer for request ID propagation
* fix: treat FilerConf_PathConf as immutable
Fix callers that were incorrectly mutating the returned PathConf:
- filer_server_handlers_write.go: Use local variable for MaxFileNameLength
instead of mutating the shared rule
- command_s3_bucket_quota_check.go: Create new PathConf explicitly when
modifying config instead of mutating the returned one
This allows MatchStorageRule to safely return the singleton or direct
references without copying, restoring the memory optimization.
Callers must NOT mutate the returned *FilerConf_PathConf.
* filer: add ClonePathConf helper for creating mutable copies
Add reusable ClonePathConf function that creates a mutable copy of
a PathConf. This is useful when callers need to modify config before
calling SetLocationConf.
Update command_s3_bucket_quota_check.go to use the new helper.
Also fix redundant return statement in DeleteLocationConf.
* fmt
* filer: fix protobuf pool reset to clear internal fields
Address code review feedback:
1. resetPbEntry/resetFuseAttributes: Use struct assignment (*e = T{})
instead of field-by-field reset to clear protobuf internal fields
(unknownFields, sizeCache) that would otherwise accumulate across
pool reuses, causing data corruption or memory bloat.
2. EntryAttributeToExistingPb: Add nil guard for attr parameter to
prevent panic if caller passes nil.
* log_buffer: reset logEntry before pool return in error path
For consistency with success path, reset the logEntry before putting
it back in the pool in the error path. This prevents the pooled object
from holding references to partially unmarshaled data.
* filer: optimize MatchStorageRule and document ClonePathConf
1. Avoid double []byte(path) conversion in multi-match case by
converting once and reusing pathBytes.
2. Add IMPORTANT comment to ClonePathConf documenting that it must
be kept in sync with filer_pb.FilerConf_PathConf fields when
the protobuf evolves.
* filer/log_buffer: fix data race and use defer for pool cleanup
1. entry_codec.go EncodeAttributesAndChunks: Fix critical data race -
proto.Marshal may return a slice sharing memory with the message.
Copy the data before returning message to pool to prevent corruption.
2. entry_codec.go DecodeAttributesAndChunks: Use defer for cleaner
pool management, ensuring message is always returned to pool.
3. log_buffer.go readTs: Use defer for pool cleanup, removing
duplicated resetLogEntry/Put calls in success and error paths.
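Putting the pooling, defer-based cleanup, proto.Reset and defensive copy together, a hedged sketch (encodeEntry and its callback are assumptions; the real EncodeAttributesAndChunks differs):

```go
package filer

import (
	"sync"

	"google.golang.org/protobuf/proto"

	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)

var pbEntryPool = sync.Pool{
	New: func() interface{} { return &filer_pb.Entry{} },
}

func encodeEntry(fill func(*filer_pb.Entry)) ([]byte, error) {
	msg := pbEntryPool.Get().(*filer_pb.Entry)
	defer func() {
		proto.Reset(msg) // clear fields and protobuf internal state before reuse
		pbEntryPool.Put(msg)
	}()
	fill(msg)
	data, err := proto.Marshal(msg)
	if err != nil {
		return nil, err
	}
	// Copy before the deferred Reset/Put, per the data-race concern noted above.
	out := make([]byte, len(data))
	copy(out, data)
	return out, nil
}
```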
* filer: fix ClonePathConf field order and add comprehensive test
1. Fix field order in ClonePathConf to match protobuf struct definition
(WormGracePeriodSeconds before WormRetentionTimeSeconds).
2. Add TestClonePathConf that constructs a fully-populated PathConf,
calls ClonePathConf, and asserts equality of all exported fields.
This will catch future schema drift when new fields are added.
3. Add TestClonePathConfNil to verify nil handling.
* filer: use reflection in ClonePathConf test to detect schema drift
Replace hardcoded field comparisons with reflection-based comparison.
This automatically catches:
1. New fields added to the protobuf but not copied in ClonePathConf
2. Missing non-zero test values for any exported field
The test iterates over all exported fields using reflect and compares
src vs clone values, failing if any field differs.
* filer: update EntryAttributeToExistingPb comment to reflect nil handling
The function safely handles nil attr by returning early, but the comment
incorrectly stated 'attr must not be nil'. Update comment to accurately
describe the defensive behavior.
* Fix review feedback: restore request ID propagation and remove redundant resets
1. grpc_client_server.go: Restore AppendToOutgoingContext for request ID
so handlers making downstream gRPC calls will automatically propagate
the request ID to downstream services.
2. entry_codec.go: Remove redundant resetPbEntry calls after Get.
The defer block ensures reset before Put, so next Get receives clean object.
3. log_buffer.go: Remove redundant resetLogEntry call after Get for
same reason - defer already handles reset before Put.
* fix: prevent filer.backup stall in single-filer setups (#4977)
When MetaAggregator.MetaLogBuffer is empty (which happens in single-filer
setups with no peers), ReadFromBuffer was returning nil error, causing
LoopProcessLogData to enter an infinite wait loop on ListenersCond.
This fix returns ResumeFromDiskError instead, allowing SubscribeMetadata
to loop back and read from persisted logs on disk. This ensures filer.backup
continues processing events even when the in-memory aggregator buffer is empty.
Fixes #4977
* test: add integration tests for metadata subscription
Add integration tests for metadata subscription functionality:
- TestMetadataSubscribeBasic: Tests basic subscription and event receiving
- TestMetadataSubscribeSingleFilerNoStall: Regression test for #4977,
verifies subscription doesn't stall under high load in single-filer setups
- TestMetadataSubscribeResumeFromDisk: Tests resuming subscription from disk
Related to #4977
* ci: add GitHub Actions workflow for metadata subscribe tests
Add CI workflow that runs on:
- Push/PR to master affecting filer, log_buffer, or metadata subscribe code
- Runs the integration tests for metadata subscription
- Uploads logs on failure for debugging
Related to #4977
* fix: use multipart form-data for file uploads in integration tests
The filer expects multipart/form-data for file uploads, not raw POST body.
This fixes the 'Content-Type isn't multipart/form-data' error.
* test: use -peers=none for faster master startup
* test: add -peers=none to remaining master startup in ec tests
* fix: use filer HTTP port 8888, WithFilerClient adds 10000 for gRPC
WithFilerClient calls ToGrpcAddress() which adds 10000 to the port.
Passing 18888 resulted in connecting to 28888. Use 8888 instead.
* test: add concurrent writes and million updates tests
- TestMetadataSubscribeConcurrentWrites: 50 goroutines writing 20 files each
- TestMetadataSubscribeMillionUpdates: 1 million metadata entries via gRPC
(metadata only, no actual file content for speed)
* fix: address PR review comments
- Handle os.MkdirAll errors explicitly instead of ignoring
- Handle log file creation errors with proper error messages
- Replace silent event dropping with 100ms timeout and warning log
* Update metadata_subscribe_integration_test.go
* fix: normalize Windows backslash paths in file uploads
When uploading files from a Windows client to a Linux server,
file paths containing backslashes were not being properly interpreted as
directory separators. This caused files intended for subdirectories to be
created in the root directory with backslashes in their filenames.
Changes:
- Add util.CleanWindowsPath and util.CleanWindowsPathBase helper functions
in weed/util/fullpath.go for reusable path normalization
- Use path.Join/path.Clean/path.Base instead of filepath equivalents
for URL path semantics (filepath is OS-specific)
- Apply normalization in weed admin handlers and filer upload parsing
Fixes #7628
* filer: remove lock contention during chunk download
This addresses issue #7504 where a single weed mount FUSE instance
does not fully utilize node network bandwidth when reading large files.
The SingleChunkCacher was holding a mutex during the entire HTTP download,
causing readers to block until the download completed. This serialized
chunk reads even when multiple goroutines were downloading in parallel.
Changes:
- Add sync.Cond to SingleChunkCacher for efficient waiting
- Move HTTP download outside the critical section in startCaching()
- Use condition variable in readChunkAt() to wait for download completion
- Add isComplete flag to track download state
Now multiple chunk downloads can proceed truly in parallel, and readers
wait efficiently using the condition variable instead of blocking on
a mutex held during I/O operations.
Ref: #7504
* filer: parallel chunk fetching within doReadAt
This addresses issue #7504 by enabling parallel chunk downloads within
a single read operation.
Previously, doReadAt() processed chunks sequentially in a loop, meaning
each chunk had to be fully downloaded before the next one started.
This left significant network bandwidth unused when chunks resided on
different volume servers.
Changes:
- Collect all chunk read tasks upfront
- Use errgroup to fetch multiple chunks in parallel
- Each chunk reads directly into its correct buffer position
- Limit concurrency to prefetchCount (min 4) to avoid overwhelming the system
- Handle gaps and zero-filling before parallel fetch
- Trigger prefetch after parallel reads complete
For a read spanning N chunks on different volume servers, this can
now utilize up to N times the bandwidth of a single connection.
Ref: #7504
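A sketch of the errgroup fan-out under the stated assumptions (chunkTask, readChunksParallel and the prefetchCount handling are illustrative; gap handling and zero-fill happen before this point):

```go
package filer

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// chunkTask is a stand-in for the read tasks collected by doReadAt: each task
// knows how to fetch its chunk and where its bytes land in the caller's buffer.
type chunkTask struct {
	fetch func(ctx context.Context, dst []byte) error
	dst   []byte
}

const minReadConcurrency = 4

// readChunksParallel fetches all tasks concurrently; each task writes directly
// into its slice of the destination buffer, and the first error cancels the rest.
func readChunksParallel(ctx context.Context, tasks []chunkTask, prefetchCount int) error {
	g, gctx := errgroup.WithContext(ctx)
	g.SetLimit(max(prefetchCount, minReadConcurrency)) // builtin max, Go 1.21+
	for _, task := range tasks {
		g.Go(func() error { // Go 1.22+ per-iteration loop variables
			return task.fetch(gctx, task.dst)
		})
	}
	return g.Wait()
}
```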
* http: direct buffer read to reduce memory copies
This addresses issue #7504 by reducing memory copy overhead during
chunk downloads.
Previously, RetriedFetchChunkData used ReadUrlAsStream which:
1. Allocated a 64KB intermediate buffer
2. Read data in 64KB chunks
3. Called a callback to copy each chunk to the destination
For a 16MB chunk, this meant 256 copy operations plus the callback
overhead. Profiling showed significant time spent in memmove.
Changes:
- Add readUrlDirectToBuffer() that reads directly into the destination
- Add retriedFetchChunkDataDirect() for unencrypted, non-gzipped chunks
- Automatically use direct read path when possible (cipher=nil, gzip=false)
- Use http.NewRequestWithContext for proper cancellation
For unencrypted chunks (the common case), this eliminates the
intermediate buffer entirely, reading HTTP response bytes directly
into the final destination buffer.
Ref: #7504
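A hedged sketch of the direct-read path; readUrlDirectToBuffer matches the name in the commit, but the body here is an assumption and the real error handling (request ID injection, retries) is richer:

```go
package httputil

import (
	"context"
	"fmt"
	"io"
	"net/http"
)

// readUrlDirectToBuffer reads the HTTP response straight into the caller's
// buffer: no 64KB intermediate buffer and no per-chunk copy callback.
func readUrlDirectToBuffer(ctx context.Context, client *http.Client, fileUrl string, buf []byte) (int, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, fileUrl, nil)
	if err != nil {
		return 0, err
	}
	resp, err := client.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK && resp.StatusCode != http.StatusPartialContent {
		return 0, fmt.Errorf("read %s: unexpected status %d", fileUrl, resp.StatusCode)
	}
	n, err := io.ReadFull(resp.Body, buf)
	if err == io.EOF || err == io.ErrUnexpectedEOF {
		// Truncated response: surface it instead of silently returning short data.
		return n, io.ErrUnexpectedEOF
	}
	return n, err
}
```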
* address review comments
- Use channel (done) instead of sync.Cond for download completion signaling
This integrates better with context cancellation patterns
- Remove redundant groupErr check in reader_at.go (errors are already captured in task.err)
- Remove buggy URL encoding logic from retriedFetchChunkDataDirect
(The existing url.PathEscape on full URL is a pre-existing bug that should be fixed separately)
* address review comments (round 2)
- Return io.ErrUnexpectedEOF when HTTP response is truncated
This prevents silent data corruption from incomplete reads
- Simplify errgroup error handling by using g.Wait() error directly
Remove redundant task.err field and manual error aggregation loop
- Define minReadConcurrency constant instead of magic number 4
Improves code readability and maintainability
Note: Context propagation to startCaching() is intentionally NOT changed.
The downloaded chunk is a shared resource that may be used by multiple
readers. Using context.Background() ensures the download completes even
if one reader cancels, preventing data loss for other waiting readers.
* http: inject request ID for observability in direct read path
Add request_id.InjectToRequest() call to readUrlDirectToBuffer() for
consistency with ReadUrlAsStream path. This ensures full-chunk reads
carry the same tracing/correlation headers for server logs and metrics.
* filer: consistent timestamp handling in sequential read path
Use max(ts, task.chunk.ModifiedTsNs) in sequential path to match
parallel path behavior. Also update ts before error check so that
on failure, the returned timestamp reflects the max of all chunks
processed so far.
* filer: document why context.Background() is used in startCaching
Add comment explaining the intentional design decision: the downloaded
chunk is a shared resource that may be used by multiple concurrent
readers. Using context.Background() ensures the download completes
even if one reader cancels, preventing errors for other waiting readers.
* filer: propagate context for reader cancellation
Address review comment: pass context through ReadChunkAt call chain so
that a reader can cancel its wait for a download. The key distinction is:
- Download uses context.Background() - shared resource, always completes
- Reader wait uses request context - can be cancelled individually
If a reader cancels, it stops waiting and returns ctx.Err(), but the
download continues to completion for other readers waiting on the same
chunk. This properly handles the shared resource semantics while still
allowing individual reader cancellation.
* filer: use defer for close(done) to guarantee signal on panic
Move close(s.done) to a defer statement at the start of startCaching()
to ensure the completion signal is always sent, even if an unexpected
panic occurs. This prevents readers from blocking indefinitely.
* filer: remove unnecessary code
- Remove close(s.cacheStartedCh) in destroy() - the channel is only used
for one-time synchronization, closing it provides no benefit
- Remove task := task loop variable capture - Go 1.22+ fixed loop variable
semantics, this capture is no longer necessary (go.mod specifies Go 1.24.0)
* filer: restore fallback to chunkCache when cacher returns no data
Fix critical issue where ReadChunkAt would return 0,nil immediately
if SingleChunkCacher couldn't provide data for the requested offset,
without trying the chunkCache fallback. Now if cacher.readChunkAt
returns n=0 and err=nil, we fall through to try chunkCache.
* filer: add comprehensive tests for ReaderCache
Tests cover:
- Context cancellation while waiting for download
- Fallback to chunkCache when cacher returns n=0, err=nil
- Multiple concurrent readers waiting for same chunk
- Partial reads at different offsets
- Downloader cleanup when exceeding cache limit
- Done channel signaling (no hangs on completion)
* filer: prioritize done channel over context cancellation
If data is already available (done channel closed), return it even if
the reader's context is also cancelled. This avoids unnecessary errors
when the download has already completed.
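The two-context design and the done-channel priority could be sketched as follows; waitForDownload is a hypothetical name:

```go
package filer

import "context"

// waitForDownload lets a reader give up via its own ctx, but a closed done
// channel wins so a completed download is never reported as a cancellation.
// The download itself runs under context.Background() and always finishes.
func waitForDownload(ctx context.Context, done <-chan struct{}) error {
	// Fast path: if the download already finished, use the data even if the
	// reader's context was cancelled in the meantime.
	select {
	case <-done:
		return nil
	default:
	}
	select {
	case <-done:
		return nil
	case <-ctx.Done():
		return ctx.Err() // reader stops waiting; the download continues for others
	}
}
```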
* filer: add lookup error test and document test limitations
Add TestSingleChunkCacherLookupError to test error handling when lookup
fails. Document that full HTTP integration tests for SingleChunkCacher
require global HTTP client initialization which is complex in unit tests.
The download path is tested via FUSE integration tests.
* filer: add tests that exercise SingleChunkCacher concurrency logic
Add tests that use blocking lookupFileIdFn to exercise the actual
SingleChunkCacher wait/cancellation logic:
- TestSingleChunkCacherContextCancellationDuringLookup: tests reader
cancellation while lookup is blocked
- TestSingleChunkCacherMultipleReadersWaitForDownload: tests multiple
readers waiting on the same download
- TestSingleChunkCacherOneReaderCancelsOthersContinue: tests that when
one reader cancels, other readers continue waiting
These tests properly exercise the done channel wait/cancel logic without
requiring HTTP calls - the blocking lookup simulates a slow download.
* fix: S3 downloads failing after idle timeout (#7618)
The idle timeout was incorrectly terminating active downloads because
read and write deadlines were managed independently. During a download,
the server writes data but rarely reads, so the read deadline would
expire even though the connection was actively being used.
Changes:
1. Simplify to single Timeout field - since this is a 'no activity timeout'
where any activity extends the deadline, separate read/write timeouts
are unnecessary. Now uses SetDeadline() which sets both at once.
2. Implement proper 'no activity timeout' - any activity (read or write)
now extends the deadline. The connection only times out when there's
genuinely no activity in either direction.
3. Increase default S3 idleTimeout from 10s to 120s for additional safety
margin when fetching chunks from slow storage backends.
Fixes #7618
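A sketch of the single no-activity timeout, assuming a net.Conn wrapper; the real code is in weed/util/net_timeout.go and may differ:

```go
package util

import (
	"net"
	"time"
)

// activityConn applies one shared deadline: any Read or Write pushes it out
// via SetDeadline, so an active download that only writes never trips a stale
// read deadline. The connection times out only with no activity in either direction.
type activityConn struct {
	net.Conn
	timeout time.Duration
}

func (c *activityConn) extend() { _ = c.Conn.SetDeadline(time.Now().Add(c.timeout)) }

func (c *activityConn) Read(p []byte) (int, error) {
	c.extend()
	return c.Conn.Read(p)
}

func (c *activityConn) Write(p []byte) (int, error) {
	c.extend()
	return c.Conn.Write(p)
}
```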
* Update weed/util/net_timeout.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* pb: add id field to Heartbeat message for stable volume server identification
This adds an 'id' field to the Heartbeat protobuf message that allows
volume servers to identify themselves independently of their IP:port address.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* storage: add Id field to Store struct
Add Id field to Store struct and include it in CollectHeartbeat().
The Id field provides a stable volume server identity independent of IP:port.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* topology: support id-based DataNode identification
Update GetOrCreateDataNode to accept an id parameter for stable node
identification. When id is provided, the DataNode can maintain its identity
even when its IP address changes (e.g., in Kubernetes pod reschedules).
For backward compatibility:
- If id is provided, use it as the node ID
- If id is empty, fall back to ip:port
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* volume: add -id flag for stable volume server identity
Add -id command line flag to volume server that allows specifying a stable
identifier independent of the IP address. This is useful for Kubernetes
deployments with hostPath volumes where pods can be rescheduled to different
nodes while the persisted data remains on the original node.
Usage: weed volume -id=node-1 -ip=10.0.0.1 ...
If -id is not specified, it defaults to ip:port for backward compatibility.
Fixes https://github.com/seaweedfs/seaweedfs/issues/7487
* server: add -volume.id flag to weed server command
Support the -volume.id flag in the all-in-one 'weed server' command,
consistent with the standalone 'weed volume' command.
Usage: weed server -volume.id=node-1 ...
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* topology: add test for id-based DataNode identification
Test the key scenarios:
1. Create DataNode with explicit id
2. Same id with different IP returns same DataNode (K8s reschedule)
3. IP/PublicUrl are updated when node reconnects with new address
4. Different id creates new DataNode
5. Empty id falls back to ip:port (backward compatibility)
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* pb: add address field to DataNodeInfo for proper node addressing
Previously, DataNodeInfo.Id was used as the node address, which worked
when Id was always ip:port. Now that Id can be an explicit string,
we need a separate Address field for connection purposes.
Changes:
- Add 'address' field to DataNodeInfo protobuf message
- Update ToDataNodeInfo() to populate the address field
- Update NewServerAddressFromDataNode() to use Address (with Id fallback)
- Fix LookupEcVolume to use dn.Url() instead of dn.Id()
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: trim whitespace from volume server id and fix test
- Trim whitespace from -id flag to treat ' ' as empty
- Fix store_load_balancing_test.go to include id parameter in NewStore call
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* refactor: extract GetVolumeServerId to util package
Move the volume server ID determination logic to a shared utility function
to avoid code duplication between volume.go and rack.go.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: improve transition logic for legacy nodes
- Use exact ip:port match instead of net.SplitHostPort heuristic
- Update GrpcPort and PublicUrl during transition for consistency
- Remove unused net import
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: add id normalization and address change logging
- Normalize id parameter at function boundary (trim whitespace)
- Log when DataNode IP:Port changes (helps debug K8s pod rescheduling)
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
With the recent changes (commit c1b8d4bf0) that made S3 directly access
volume servers instead of proxying through filer, we need to properly
handle HTTP 429 (Too Many Requests) errors from volume servers.
This change ensures that when volume servers rate limit requests with
HTTP 429, the S3 API properly translates this to an S3-compatible error
response (ErrRequestBytesExceed with HTTP 503) instead of returning a
generic InternalError.
Changes:
- Add ErrTooManyRequests sentinel error in weed/util/http
- Detect HTTP 429 in ReadUrlAsStream and wrap with ErrTooManyRequests
- Check for ErrTooManyRequests in GetObjectHandler and map to S3 error
- Return ErrRequestBytesExceed (HTTP 503) for rate limiting scenarios
This addresses the same issue as PR #7482 but for the new direct
volume server access path instead of the filer proxy path.
Fixes: Rate limiting errors from volume servers being masked as 500
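A sketch of the sentinel-error mapping under the stated assumptions (the real ErrTooManyRequests lives in weed/util/http and the S3 handler maps to ErrRequestBytesExceed; checkVolumeStatus and mapToS3Status are illustrative names):

```go
package s3api

import (
	"errors"
	"fmt"
	"net/http"
)

// ErrTooManyRequests mirrors the sentinel described above; wrapping with %w
// lets callers detect it with errors.Is.
var ErrTooManyRequests = errors.New("too many requests")

func checkVolumeStatus(resp *http.Response) error {
	if resp.StatusCode == http.StatusTooManyRequests {
		return fmt.Errorf("volume server rate limited the request: %w", ErrTooManyRequests)
	}
	return nil
}

// mapToS3Status: rate limiting becomes a 503 instead of a generic 500.
func mapToS3Status(err error) int {
	if errors.Is(err, ErrTooManyRequests) {
		return http.StatusServiceUnavailable
	}
	return http.StatusInternalServerError
}
```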
* Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization
* revert
Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata
* Lazy Entry Fetch for SSE, Skip Conditional Header Check
* If SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive)
* SSE-C is mutually exclusive with SSE-S3 and SSE-KMS
* refactor
* Removed Premature Mutual Exclusivity Check
* check for the presence of the X-Amz-Server-Side-Encryption header
* not used
* fmt
* directly read write volume servers
* HTTP Range Request Support
* set header
* md5
* copy object
* fix sse
* fmt
* implement sse
* sse continue
* fixed the suffix range bug (bytes=-N for "last N bytes")
* debug logs
* Missing PartsCount Header
* profiling
* url encoding
* test_multipart_get_part
* headers
* debug
* adjust log level
* handle part number
* Update s3api_object_handlers.go
* nil safety
* set ModifiedTsNs
* remove
* nil check
* fix sse header
* same logic as filer
* decode values
* decode ivBase64
* s3: Fix SSE decryption JWT authentication and streaming errors
Critical fix for SSE (Server-Side Encryption) test failures:
1. **JWT Authentication Bug** (Root Cause):
- Changed from GenJwtForFilerServer to GenJwtForVolumeServer
- S3 API now uses correct JWT when directly reading from volume servers
- Matches filer's authentication pattern for direct volume access
- Fixes 'unexpected EOF' and 500 errors in SSE tests
2. **Streaming Error Handling**:
- Added error propagation in getEncryptedStreamFromVolumes goroutine
- Use CloseWithError() to properly communicate stream failures
- Added debug logging for streaming errors
3. **Response Header Timing**:
- Removed premature WriteHeader(http.StatusOK) call
- Let Go's http package write status automatically on first write
- Prevents header lock when errors occur during streaming
4. **Enhanced SSE Decryption Debugging**:
- Added IV/Key validation and logging for SSE-C, SSE-KMS, SSE-S3
- Better error messages for missing or invalid encryption metadata
- Added glog.V(2) debugging for decryption setup
This fixes SSE integration test failures where encrypted objects
could not be retrieved due to volume server authentication failures.
The JWT bug was causing volume servers to reject requests, resulting
in truncated/empty streams (EOF) or internal errors.
* s3: Fix SSE multipart upload metadata preservation
Critical fix for SSE multipart upload test failures (SSE-C and SSE-KMS):
**Root Cause - Incomplete SSE Metadata Copying**:
The old code only tried to copy 'SeaweedFSSSEKMSKey' from the first
part to the completed object. This had TWO bugs:
1. **Wrong Constant Name** (Key Mismatch Bug):
- Storage uses: SeaweedFSSSEKMSKeyHeader = 'X-SeaweedFS-SSE-KMS-Key'
- Old code read: SeaweedFSSSEKMSKey = 'x-seaweedfs-sse-kms-key'
- Result: SSE-KMS metadata was NEVER copied → 500 errors
2. **Missing SSE-C and SSE-S3 Headers**:
- SSE-C requires: IV, Algorithm, KeyMD5
- SSE-S3 requires: encrypted key data + standard headers
- Old code: copied nothing for SSE-C/SSE-S3 → decryption failures
**Fix - Complete SSE Header Preservation**:
Now copies ALL SSE headers from first part to completed object:
- SSE-C: SeaweedFSSSEIV, CustomerAlgorithm, CustomerKeyMD5
- SSE-KMS: SeaweedFSSSEKMSKeyHeader, AwsKmsKeyId, ServerSideEncryption
- SSE-S3: SeaweedFSSSES3Key, ServerSideEncryption
Applied consistently to all 3 code paths:
1. Versioned buckets (creates version file)
2. Suspended versioning (creates main object with null versionId)
3. Non-versioned buckets (creates main object)
**Why This Is Correct**:
The headers copied EXACTLY match what putToFiler stores during part
upload (lines 496-521 in s3api_object_handlers_put.go). This ensures
detectPrimarySSEType() can correctly identify encrypted multipart
objects and trigger inline decryption with proper metadata.
Fixes: TestSSEMultipartUploadIntegration (SSE-C and SSE-KMS subtests)
* s3: Add debug logging for versioning state diagnosis
Temporary debug logging to diagnose test_versioning_obj_plain_null_version_overwrite_suspended failure.
Added glog.V(0) logging to show:
1. setBucketVersioningStatus: when versioning status is changed
2. PutObjectHandler: what versioning state is detected (Enabled/Suspended/none)
3. PutObjectHandler: which code path is taken (putVersionedObject vs putSuspendedVersioningObject)
This will help identify if:
- The versioning status is being set correctly in bucket config
- The cache is returning stale/incorrect versioning state
- The switch statement is correctly routing to suspended vs enabled handlers
* s3: Enhanced versioning state tracing for suspended versioning diagnosis
Added comprehensive logging across the entire versioning state flow:
PutBucketVersioningHandler:
- Log requested status (Enabled/Suspended)
- Log when calling setBucketVersioningStatus
- Log success/failure of status change
setBucketVersioningStatus:
- Log bucket and status being set
- Log when config is updated
- Log completion with error code
updateBucketConfig:
- Log versioning state being written to cache
- Immediate cache verification after Set
- Log if cache verification fails
getVersioningState:
- Log bucket name and state being returned
- Log if object lock forces VersioningEnabled
- Log errors
This will reveal:
1. If PutBucketVersioning(Suspended) is reaching the handler
2. If the cache update succeeds
3. What state getVersioningState returns during PUT
4. Any cache consistency issues
Expected to show why bucket still reports 'Enabled' after 'Suspended' call.
* s3: Add SSE chunk detection debugging for multipart uploads
Added comprehensive logging to diagnose why TestSSEMultipartUploadIntegration fails:
detectPrimarySSEType now logs:
1. Total chunk count and extended header count
2. All extended headers with 'sse'/'SSE'/'encryption' in the name
3. For each chunk: index, SseType, and whether it has metadata
4. Final SSE type counts (SSE-C, SSE-KMS, SSE-S3)
This will reveal if:
- Chunks are missing SSE metadata after multipart completion
- Extended headers are copied correctly from first part
- The SSE detection logic is working correctly
Expected to show if chunks have SseType=0 (none) or proper SSE types set.
* s3: Trace SSE chunk metadata through multipart completion and retrieval
Added end-to-end logging to track SSE chunk metadata lifecycle:
**During Multipart Completion (filer_multipart.go)**:
1. Log finalParts chunks BEFORE mkFile - shows SseType and metadata
2. Log versionEntry.Chunks INSIDE mkFile callback - shows if mkFile preserves SSE info
3. Log success after mkFile completes
**During GET Retrieval (s3api_object_handlers.go)**:
1. Log retrieved entry chunks - shows SseType and metadata after retrieval
2. Log detected SSE type result
This will reveal at which point SSE chunk metadata is lost:
- If finalParts have SSE metadata but versionEntry.Chunks don't → mkFile bug
- If versionEntry.Chunks have SSE metadata but retrieved chunks don't → storage/retrieval bug
- If chunks never have SSE metadata → multipart completion SSE processing bug
Expected to show chunks with SseType=NONE during retrieval even though
they were created with proper SseType during multipart completion.
* s3: Fix SSE-C multipart IV base64 decoding bug
**Critical Bug Found**: SSE-C multipart uploads were failing because:
Root Cause:
- entry.Extended[SeaweedFSSSEIV] stores base64-encoded IV (24 bytes for 16-byte IV)
- SerializeSSECMetadata expects raw IV bytes (16 bytes)
- During multipart completion, we were passing base64 IV directly → serialization error
Error Message:
"Failed to serialize SSE-C metadata for chunk in part X: invalid IV length: expected 16 bytes, got 24"
Fix:
- Base64-decode IV before passing to SerializeSSECMetadata
- Added error handling for decode failures
Impact:
- SSE-C multipart uploads will now correctly serialize chunk metadata
- Chunks will have proper SSE metadata for decryption during GET
This fixes the SSE-C subtest of TestSSEMultipartUploadIntegration.
SSE-KMS still has a separate issue (error code 23) being investigated.
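The decode-before-serialize fix could look like this sketch; decodeStoredIV is a hypothetical helper name:

```go
package s3api

import (
	"encoding/base64"
	"fmt"
)

// decodeStoredIV: the IV is stored base64-encoded in entry.Extended, so it
// must be decoded back to 16 raw bytes before being handed to the SSE-C
// metadata serializer.
func decodeStoredIV(storedIV []byte) ([]byte, error) {
	iv := make([]byte, base64.StdEncoding.DecodedLen(len(storedIV)))
	n, err := base64.StdEncoding.Decode(iv, storedIV)
	if err != nil {
		return nil, fmt.Errorf("decode SSE-C IV: %w", err)
	}
	if n != 16 {
		return nil, fmt.Errorf("invalid IV length: expected 16 bytes, got %d", n)
	}
	return iv[:n], nil
}
```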
* fixes
* kms sse
* handle retry if not found in the .versions folder and fall back to reading the normal object
* quick check (no retries) to see if the .versions/ directory exists
* skip retry if object is not found
* explicit update to avoid sync delay
* fix map update lock
* Remove fmt.Printf debug statements
* Fix SSE-KMS multipart base IV fallback to fail instead of regenerating
* fmt
* Fix ACL grants storage logic
* header handling
* nil handling
* range read for sse content
* test range requests for sse objects
* fmt
* unused code
* upload in chunks
* header case
* fix url
* bucket policy error vs bucket not found
* jwt handling
* fmt
* jwt in request header
* Optimize Case-Insensitive Prefix Check
* dead code
* Eliminated Unnecessary Stream Prefetch for Multipart SSE
* range sse
* sse
* refactor
* context
* fmt
* fix type
* fix SSE-C IV Mismatch
* Fix Headers Being Set After WriteHeader
* fix url parsing
* propagate sse headers
* multipart sse-s3
* aws sig v4 authentication
* sse kms
* set content range
* better errors
* Update s3api_object_handlers_copy.go
* Update s3api_object_handlers.go
* Update s3api_object_handlers.go
* avoid magic number
* clean up
* Update s3api_bucket_policy_handlers.go
* fix url parsing
* context
* data and metadata both use background context
* adjust the offset
* SSE Range Request IV Calculation
* adjust logs
* IV relative to offset in each part, not the whole file
* collect logs
* offset
* fix offset
* fix url
* logs
* variable
* jwt
* Multipart ETag semantics: conditionally set object-level Md5 for single-chunk uploads only.
* sse
* adjust IV and offset
* multipart boundaries
* ensures PUT and GET operations return consistent ETags
* Metadata Header Case
* CommonPrefixes Sorting with URL Encoding
* always sort
* remove the extra PathUnescape call
* fix the multipart get part ETag
* the FileChunk is created without setting ModifiedTsNs
* Sort CommonPrefixes lexicographically to match AWS S3 behavior
* set md5 for multipart uploads
* prevents any potential data loss or corruption in the small-file inline storage path
* compiles correctly
* decryptedReader will now be properly closed after use
* Fixed URL encoding and sort order for CommonPrefixes
* Update s3api_object_handlers_list.go
* SSE-x Chunk View Decryption
* Different IV offset calculations for single-part vs multipart objects
* still too verbose in logs
* less logs
* ensure correct conversion
* fix listing
* nil check
* minor fixes
* nil check
* single character delimiter
* optimize
* range on empty object or zero-length
* correct IV based on its position within that part, not its position in the entire object
* adjust offset
* offset
- Fetch FULL encrypted chunk (not just the range)
- Adjust IV by PartOffset/ChunkOffset only
- Decrypt full chunk
- Skip in the DECRYPTED stream to reach OffsetInChunk
* loop breaking
* refactor
* error on no content
* handle intra-block byte skipping
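A hedged sketch of the per-part IV math and the intra-block skip, assuming the encrypted reader starts at the AES block containing offsetInPart; names and structure are illustrative, not the shipped code:

```go
package s3api

import (
	"crypto/aes"
	"crypto/cipher"
	"io"
)

// ctrReaderAtOffset advances the 16-byte IV counter by the number of whole AES
// blocks before offsetInPart, then discards the leftover bytes inside the first
// block from the decrypted stream. The offset is relative to the part (or
// chunk), not to the whole object.
func ctrReaderAtOffset(key, baseIV []byte, encrypted io.Reader, offsetInPart int64) (io.Reader, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, err
	}
	iv := make([]byte, len(baseIV))
	copy(iv, baseIV)
	addToCounter(iv, uint64(offsetInPart/aes.BlockSize)) // whole blocks
	plain := cipher.StreamReader{S: cipher.NewCTR(block, iv), R: encrypted}
	ivSkip := offsetInPart % aes.BlockSize // intra-block bytes
	if ivSkip > 0 {
		if _, err := io.CopyN(io.Discard, plain, ivSkip); err != nil {
			return nil, err
		}
	}
	return plain, nil
}

// addToCounter treats the IV as a big-endian 128-bit counter.
func addToCounter(iv []byte, blocks uint64) {
	carry := blocks
	for i := len(iv) - 1; i >= 0 && carry > 0; i-- {
		sum := uint64(iv[i]) + (carry & 0xff)
		iv[i] = byte(sum)
		carry = (carry >> 8) + (sum >> 8)
	}
}
```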
* Incomplete HTTP Response Error Handling
* multipart SSE
* Update s3api_object_handlers.go
* address comments
* less logs
* handling directory
* Optimized rejectDirectoryObjectWithoutSlash() to avoid unnecessary lookups
* Revert "handling directory"
This reverts commit 3a335f0ac3.
* constant
* Consolidate nil entry checks in GetObjectHandler
* add range tests
* Consolidate redundant nil entry checks in HeadObjectHandler
* adjust logs
* SSE type
* large files
* large files
Reverted the plain-object range test
* ErrNoEncryptionConfig
* Fixed SSERangeReader Infinite Loop Vulnerability
* Fixed SSE-KMS Multipart ChunkReader HTTP Body Leak
* handle empty directory in S3, added PyArrow tests
* purge unused code
* Update s3_parquet_test.py
* Update requirements.txt
* According to S3 specifications, when both partNumber and Range are present, the Range should apply within the selected part's boundaries, not to the full object.
* handle errors
* errors after writing header
* https
* fix: Wait for volume assignment readiness before running Parquet tests
The test-implicit-dir-with-server test was failing with an Internal Error
because volume assignment was not ready when tests started. This fix adds
a check that attempts a volume assignment and waits for it to succeed
before proceeding with tests.
This ensures that:
1. Volume servers are registered with the master
2. Volume growth is triggered if needed
3. The system can successfully assign volumes for writes
Fixes the timeout issue where boto3 would retry 4 times and fail with
'We encountered an internal error, please try again.'
* sse tests
* store derived IV
* fix: Clean up gRPC ports between tests to prevent port conflicts
The second test (test-implicit-dir-with-server) was failing because the
volume server's gRPC port (18080 = VOLUME_PORT + 10000) was still in use
from the first test. The cleanup code only killed HTTP port processes,
not gRPC port processes.
Added cleanup for gRPC ports in all stop targets:
- Master gRPC: MASTER_PORT + 10000 (19333)
- Volume gRPC: VOLUME_PORT + 10000 (18080)
- Filer gRPC: FILER_PORT + 10000 (18888)
This ensures clean state between test runs in CI.
* add import
* address comments
* docs: Add placeholder documentation files for Parquet test suite
Added three missing documentation files referenced in test/s3/parquet/README.md:
1. TEST_COVERAGE.md - Documents 43 total test cases (17 Go unit tests,
6 Python integration tests, 20 Python end-to-end tests)
2. FINAL_ROOT_CAUSE_ANALYSIS.md - Explains the s3fs compatibility issue
with PyArrow, the implicit directory problem, and how the fix works
3. MINIO_DIRECTORY_HANDLING.md - Compares MinIO's directory handling
approach with SeaweedFS's implementation
Each file contains:
- Title and overview
- Key technical details relevant to the topic
- TODO sections for future expansion
These placeholder files resolve the broken README links and provide
structure for future detailed documentation.
* clean up if metadata operation failed
* Update s3_parquet_test.py
* clean up
* Update Makefile
* Update s3_parquet_test.py
* Update Makefile
* Handle ivSkip for non-block-aligned offsets
* Update README.md
* stop volume server faster
* stop volume server in 1 second
* different IV for each chunk in SSE-S3 and SSE-KMS
* clean up if fails
* testing upload
* error propagation
* fmt
* simplify
* fix copying
* less logs
* endian
* Added marshaling error handling
* handling invalid ranges
* error handling for adding to log buffer
* fix logging
* avoid returning too quickly and ensure proper cleanup
* Activity Tracking for Disk Reads
* Cleanup Unused Parameters
* Activity Tracking for Kafka Publishers
* Proper Test Error Reporting
* refactoring
* less logs
* less logs
* go fmt
* guard it with if entry.Attributes.TtlSec > 0 to match the pattern used elsewhere.
* Handle bucket-default encryption config errors explicitly for multipart
* consistent activity tracking
* obsolete code for s3 on filer read/write handlers
* Update weed/s3api/s3api_object_handlers_list.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* do delete expired entries on s3 list request
https://github.com/seaweedfs/seaweedfs/issues/6837
* disable deleting expired s3 entries in filer
* pass opt allowDeleteObjectsByTTL to all servers
* delete on get and head
* add lifecycle expiration s3 tests
* fix opt allowDeleteObjectsByTTL for server
* fix test lifecycle expiration
* fix IsExpired
* fix locationPrefix for updateEntriesTTL
* fix s3tests
* resolve coderabbitai comments
* GetS3ExpireTime on filer
* go mod
* clear TtlSeconds for volume
* move s3 delete expired entry to filer
* filer delete meta and data
* delete unused func removeExpiredObject
* test s3 put
* test s3 put multipart
* allowDeleteObjectsByTTL by default
* fix pipeline tests
* rm duplicate SeaweedFSExpiresS3
* revert expiration tests
* fix updateTTL
* rm log
* resolve comment
* fix delete version object
* fix S3Versioning
* fix delete on FindEntry
* fix delete chunks
* fix sqlite not support concurrent writes/reads
* move deletion out of listing transaction; delete entries and empty folders
* Revert "fix sqlite not support concurrent writes/reads"
This reverts commit 5d5da14e0e.
* clearer handling on recursive empty directory deletion
* handle listing errors
* struct copying
* reuse code to delete empty folders
* use iterative approach with a queue to avoid recursive WithFilerClient calls
* to stop a gRPC stream from the client-side callback, return a specific error, e.g., io.EOF
* still issue UpdateEntry when the flag must be added
* errors join
* join path
* cleaner
* add context, sort directories by depth (deepest first) to avoid redundant checks
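A sketch of the deepest-first, non-recursive cleanup; isEmpty and deleteDir are stand-ins for the filer client calls:

```go
package filer

import (
	"sort"
	"strings"
)

// sortDeepestFirst orders candidate directories so children are handled before
// their parents, letting a parent become deletable without recursion.
func sortDeepestFirst(dirs []string) {
	sort.Slice(dirs, func(i, j int) bool {
		return strings.Count(dirs[i], "/") > strings.Count(dirs[j], "/")
	})
}

// cleanupEmptyFolders walks the pre-sorted candidates iteratively; no call
// recurses back into the filer client.
func cleanupEmptyFolders(candidates []string, isEmpty func(string) (bool, error), deleteDir func(string) error) error {
	sortDeepestFirst(candidates)
	for _, dir := range candidates {
		empty, err := isEmpty(dir)
		if err != nil {
			return err
		}
		if empty {
			if err := deleteDir(dir); err != nil {
				return err
			}
		}
	}
	return nil
}
```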
* batched operation, refactoring
* prevent deleting bucket
* constant
* reuse code
* more logging
* refactoring
* s3 TTL time
* Safety check
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
* server can start when there is no network, for local dev
* fixed "superfluous response.WriteHeader call" warning
* adaptive based on last write time
* more doc
* refactoring
* [Admin UI] Login not possible due to securecookie error
* avoid 404 favicon
* Update weed/admin/dash/auth_middleware.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* address comments
* avoid variable over shadowing
* log session save error
* When jwt.signing.read.key is enabled in security.toml, the volume server requires JWT tokens for all read operations.
* reuse fileId
* refactor
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* avoid repeatedly reading from disk
* checks both flush time AND read position advancement
* wait on cond
* fix reading
- Gap detection and skipping to earliest memory time
- Time-based reads that include events at boundary times for first reads (offset ≤ 0)
- Aggregated subscriber wake-up via ListenersWaits signaling
* address comments