Tree: 53048ffffb

Branches:
add-ec-vacuum
add-filer-iam-grpc
add-iam-grpc-management
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
admin/csrf-s3tables
allow-no-role-arn
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
bucket-config-enhancements
changing-to-zap
codex-rust-volume-server-bootstrap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
feature-8113-storage-class-disk-routing
feature/mini-port-detection
feature/modernize-s3-tests
feature/s3-multi-cert-support
feature/s3tables-improvements-and-spark-tests
feature/sra-uds-handler
filer1_maintenance_branch
fix-8303-s3-lifecycle-ttl-assign
fix-GetObjectLockConfigurationHandler
fix-bucket-name-case-7910
fix-helm-fromtoml-compatibility
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-pr-7909
fix-s3-configure-consistency
fix-s3-object-tagging-issue-7589
fix-sts-session-token-7941
fix-versioning-listing-only
fix/iceberg-stage-create-semantics
fix/object-lock-delete-enforcement
fix/sts-body-preservation
fix/windows-test-file-cleanup
ftp
gh-pages
iam-multi-file-migration
iam-permissions-and-api
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
plugin-system-phase1
plugin-ui-enhancements-restored
pr-7412
pr/7984
pr/8140
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
s3tables-by-claude
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks

Tags:

0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
4.05
4.06
4.07
4.08
4.09
4.12
4.13
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
889 Commits (53048ffffbde3ac60eb9094541e30da8167c8476)
| Author | SHA1 | Message | Date |
|---|---|---|---|
| | 3c3a78d08e | 4.13 | 4 days ago |
| | 1e4f30c56f | pb: fix IPv6 double brackets in ServerAddress formatting (#8329) * pb: fix IPv6 double brackets in ServerAddress formatting * pb: refactor IPv6 tests into table-driven test * util: add JoinHostPortStr and use it in pb to avoid unsafe port parsing | 1 week ago |
| | b57429ef2e | Switch empty-folder cleanup to bucket policy (#8292) * Fix Spark _temporary cleanup and add issue #8285 regression test * Generalize empty folder cleanup for Spark temp artifacts * Revert synchronous folder pruning and add cleanup diagnostics * Add actionable empty-folder cleanup diagnostics * Fix Spark temp marker cleanup in async folder cleaner * Fix Spark temp cleanup with implicit directory markers * Keep explicit directory markers non-implicit * logging * more logs * Switch empty-folder cleanup to bucket policy * Seaweed-X-Amz-Allow-Empty-Folders * less logs * go vet * less logs * refactoring | 1 week ago |
| | af8273386d | 4.12 | 2 weeks ago |
| | cb9e21cdc5 | Normalize hashicorp raft peer ids (#8253) * Normalize raft voter ids * 4.11 * Update raft_hashicorp.go | 2 weeks ago |
| | 0c89185291 | 4.10 | 2 weeks ago |
| | 5a5cc38692 | 4.09 | 2 weeks ago |
| | 330bd92ddc | 4.08 | 3 weeks ago |
| | ba8816e2e1 | 4.08 | 3 weeks ago |
| | bc853bdee5 | 4.07 | 1 month ago |
| | ce6e9be66b | 4.06 | 1 month ago |
| | 379c032868 | Fix chown Input/output error on large file sets (#7996) * Fix chown Input/output error on large file sets (Fixes #7911) Implemented retry logic for MySQL/MariaDB backend to handle transient errors like deadlocks and timeouts. * Fix syntax error: missing closing brace * Refactor: Use %w for error wrapping and errors.As for extraction * Fix: Disable retry logic inside transactions | 1 month ago |
| | 9012069bd7 | chore: execute goimports to format the code (#7983) * chore: execute goimports to format the code Signed-off-by: promalert <promalert@outlook.com> * goimports -w . --------- Signed-off-by: promalert <promalert@outlook.com> Co-authored-by: Chris Lu <chris.lu@gmail.com> | 1 month ago |
| | 0e9f433ec4 | refactoring | 2 months ago |
| | 87b71029f7 | 4.05 | 2 months ago |
| | f7f133166a | adjust fuse logs | 2 months ago |
| | 73098c9792 | filer.meta.backup: add -excludePaths flag to skip paths from backup (#7916) * filer.meta.backup: add -excludePaths flag to skip paths from backup Add a new -excludePaths flag that accepts comma-separated path prefixes to exclude from backup operations. This enables selective backup when certain directories (e.g., legacy buckets) should be skipped. Usage: weed filer.meta.backup -filerDir=/buckets -excludePaths=/buckets/legacy1,/buckets/legacy2 -config=backup.toml 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * filer.meta.backup: address code review feedback for -excludePaths Fixes based on CodeRabbit and Gemini review: - Cache parsed exclude paths in struct (performance) - TrimSpace and skip empty entries (handles "a,,b" and "a, b") - Add trailing slash for directory boundary matching (prevents /buckets/legacy matching /buckets/legacy_backup) - Validate paths start with '/' and warn if not - Log excluded paths at startup for debugging - Fix rename handling: check both old and new paths, handle all four combinations correctly - Add docstring to shouldExclude() - Update UsageLine and Long description with new flag 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * filer.meta.backup: address nitpick feedback - Clarify directory boundary matching behavior in help text - Add warning when root path '/' is excluded (would exclude everything) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * includePrefixes and excludePrefixes --------- Co-authored-by: C Shaw <cliffshaw@users.noreply.github.com> Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> Co-authored-by: Chris Lu <chris.lu@gmail.com> | 2 months ago |
| | 1261e93ef2 | fix: comprehensive go vet error fixes and add CI enforcement (#7861) * fix: use keyed fields in struct literals - Replace unsafe reflect.StringHeader/SliceHeader with safe unsafe.String/Slice (weed/query/sqltypes/unsafe.go) - Add field names to Type_ScalarType struct literals (weed/mq/schema/schema_builder.go) - Add Duration field name to FlexibleDuration struct literals across test files - Add field names to bson.D struct literals (weed/filer/mongodb/mongodb_store_kv.go) Fixes go vet warnings about unkeyed struct literals. * fix: remove unreachable code - Remove unreachable return statements after infinite for loops - Remove unreachable code after if/else blocks where all paths return - Simplify recursive logic by removing unnecessary for loop (inode_to_path.go) - Fix Type_ScalarType literal to use enum value directly (schema_builder.go) - Call onCompletionFn on stream error (subscribe_session.go) Files fixed: - weed/query/sqltypes/unsafe.go - weed/mq/schema/schema_builder.go - weed/mq/client/sub_client/connect_to_sub_coordinator.go - weed/filer/redis3/ItemList.go - weed/mq/client/agent_client/subscribe_session.go - weed/mq/broker/broker_grpc_pub_balancer.go - weed/mount/inode_to_path.go - weed/util/skiplist/name_list.go * fix: avoid copying lock values in protobuf messages - Use proto.Merge() instead of direct assignment to avoid copying sync.Mutex in S3ApiConfiguration (iamapi_server.go) - Add explicit comments noting that channel-received values are already copies before taking addresses (volume_grpc_client_to_master.go) The protobuf messages contain sync.Mutex fields from the message state, which should not be copied. Using proto.Merge() properly merges messages without copying the embedded mutex. * fix: correct byte array size for uint32 bit shift operations The generateAccountId() function only needs 4 bytes to create a uint32 value. Changed from allocating 8 bytes to 4 bytes to match the actual usage. This fixes go vet warning about shifting 8-bit values (bytes) by more than 8 bits. * fix: ensure context cancellation on all error paths In broker_client_subscribe.go, ensure subscriberCancel() is called on all error return paths: - When stream creation fails - When partition assignment fails - When sending initialization message fails This prevents context leaks when an error occurs during subscriber creation. * fix: ensure subscriberCancel called for CreateFreshSubscriber stream.Send error Ensure subscriberCancel() is called when stream.Send fails in CreateFreshSubscriber. * ci: add go vet step to prevent future lint regressions - Add go vet step to GitHub Actions workflow - Filter known protobuf lock warnings (MessageState sync.Mutex) These are expected in generated protobuf code and are safe - Prevents accumulation of go vet errors in future PRs - Step runs before build to catch issues early * fix: resolve remaining syntax and logic errors in vet fixes - Fixed syntax errors in filer_sync.go caused by missing closing braces - Added missing closing brace for if block and function - Synchronized fixes to match previous commits on branch * fix: add missing return statements to daemon functions - Add 'return false' after infinite loops in filer_backup.go and filer_meta_backup.go - Satisfies declared bool return type signatures - Maintains consistency with other daemon functions (runMaster, runFilerSynchronize, runWorker) - While unreachable, explicitly declares the return satisfies function signature contract * fix: add nil check for onCompletionFn in SubscribeMessageRecord - Check if onCompletionFn is not nil before calling it - Prevents potential panic if nil function is passed - Matches pattern used in other callback functions * docs: clarify unreachable return statements in daemon functions - Add comments documenting that return statements satisfy function signature - Explains that these returns follow infinite loops and are unreachable - Improves code clarity for future maintainers | 2 months ago |
| | 8d75290601 | 4.04 | 2 months ago |
| | ed1da07665 | Add consistent -debug and -debug.port flags to commands (#7816) * Add consistent -debug and -debug.port flags to commands Add -debug and -debug.port flags to weed master, weed volume, weed s3, weed mq.broker, and weed filer.sync commands for consistency with weed filer. When -debug is enabled, an HTTP server starts on the specified port (default 6060) serving runtime profiling data at /debug/pprof/. For mq.broker, replaced the older -port.pprof flag with the new -debug and -debug.port pattern for consistency. * Update weed/util/grace/pprof.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> | 2 months ago |
| | bcce8d164c | 4.03 | 2 months ago |
| | 221b352593 | fix: handle ResumeFromDiskError gracefully in LoopProcessLogData (#7753) When ReadFromBuffer returns ResumeFromDiskError, the function now: - Attempts to read from disk if ReadFromDiskFn is available - Checks if the client is still connected via waitForDataFn - Waits for notification or short timeout before retrying - Continues the loop instead of immediately returning the error This fixes TestNewLogBufferFirstBuffer which was failing because the function returned too early before data was available in the buffer. | 2 months ago |
| | e8b7347031 | Reduce memory allocations in hot paths (#7725) * filer: reduce allocations in MatchStorageRule Optimize MatchStorageRule to avoid allocations in common cases: - Return singleton emptyPathConf when no rules match (zero allocations) - Return existing rule directly when only one rule matches (zero allocations) - Only allocate and merge when multiple rules match (rare case) Based on heap profile analysis showing 111MB allocated from 1.64M calls to this function during 180 seconds of operation. * filer: add fast path for getActualStore when no path-specific stores Add hasPathSpecificStore flag to FilerStoreWrapper to skip the MatchPrefix() call and []byte(path) conversion when no path-specific stores are configured (the common case). Based on heap profile analysis showing 1.39M calls to this function during 180 seconds of operation, each requiring a string-to-byte slice conversion for the MatchPrefix call. * filer/foundationdb: use sync.Pool for tuple allocation in genKey Use sync.Pool to reuse tuple.Tuple slices in genKey(), reducing allocation overhead for every FoundationDB operation. Based on heap profile analysis showing 102MB allocated from 1.79M calls to genKey() during 180 seconds of operation. The Pack() call still allocates internally, but this reduces the tuple slice allocation overhead by ~50%. * filer: use sync.Pool for protobuf Entry and FuseAttributes Add pooling for filer_pb.Entry and filer_pb.FuseAttributes in EncodeAttributesAndChunks and DecodeAttributesAndChunks to reduce allocations during filer store operations. Changes: - Add pbEntryPool with pre-allocated FuseAttributes - Add EntryAttributeToExistingPb for in-place attribute conversion - Update ToExistingProtoEntry to reuse existing Attributes when available Based on heap profile showing: - EncodeAttributesAndChunks: 69.5MB cumulative - DecodeAttributesAndChunks: 46.5MB cumulative - EntryAttributeToPb: 47.5MB flat allocations * log_buffer: use sync.Pool for LogEntry in readTs Add logEntryPool to reuse filer_pb.LogEntry objects in readTs(), which is called frequently during binary search in ReadFromBuffer. This function only needs the TsNs field from the unmarshaled entry, so pooling the LogEntry avoids repeated allocations. Based on heap profile showing readTs with 188MB cumulative allocations from timestamp lookups during log buffer reads. * pb: reduce gRPC metadata allocations in interceptor Optimize requestIDUnaryInterceptor and WithGrpcClient to reduce metadata allocations on every gRPC request: - Use AppendToOutgoingContext instead of NewOutgoingContext + New() This avoids creating a new map[string]string for single key-value pairs - Check FromIncomingContext return value before using metadata Based on heap profile showing metadata operations contributing 0.45GB (10.5%) of allocations, with requestIDUnaryInterceptor being the main source at 0.44GB cumulative. Expected reduction: ~0.2GB from avoiding map allocations per request. * filer/log_buffer: address code review feedback - Use proto.Reset() instead of manual field clearing in resetLogEntry for more idiomatic and comprehensive state clearing - Add resetPbEntry() call before pool return in error path for consistency with success path in DecodeAttributesAndChunks * log_buffer: reduce PreviousBufferCount from 32 to 4 Reduce the number of retained previous buffers from 32 to 4. Each buffer is 8MB, so this reduces the maximum retained memory from 256MB to 32MB for previous buffers. Most subscribers catch up quickly, so 4 buffers (32MB) should be sufficient while significantly reducing memory footprint. * filer/foundationdb: use defer for tuple pool cleanup in genKey Refactor genKey to use defer for returning the pooled tuple. This ensures the pooled object is always returned even if store.seaweedfsDir.Pack panics, making the code more robust. Also simplifies the code by removing the temporary variable. * filer: early-stop MatchStorageRule prescan after 2 matches Stop the prescan callback after finding 2 matches since we only need to know if there are 0, 1, or multiple matches. This avoids unnecessarily scanning the rest of the trie when many rules exist. * fix: address critical code review issues filer_conf.go: - Remove mutable singleton emptyPathConf that could corrupt shared state - Return fresh copy for no-match case and cloned copy for single-match case - Add clonePathConf helper to create shallow copies safely grpc_client_server.go: - Remove incorrect AppendToOutgoingContext call in server interceptor (that API is for outbound client calls, not server-side handlers) - Rely on request_id.Set and SetTrailer for request ID propagation * fix: treat FilerConf_PathConf as immutable Fix callers that were incorrectly mutating the returned PathConf: - filer_server_handlers_write.go: Use local variable for MaxFileNameLength instead of mutating the shared rule - command_s3_bucket_quota_check.go: Create new PathConf explicitly when modifying config instead of mutating the returned one This allows MatchStorageRule to safely return the singleton or direct references without copying, restoring the memory optimization. Callers must NOT mutate the returned *FilerConf_PathConf. * filer: add ClonePathConf helper for creating mutable copies Add reusable ClonePathConf function that creates a mutable copy of a PathConf. This is useful when callers need to modify config before calling SetLocationConf. Update command_s3_bucket_quota_check.go to use the new helper. Also fix redundant return statement in DeleteLocationConf. * fmt * filer: fix protobuf pool reset to clear internal fields Address code review feedback: 1. resetPbEntry/resetFuseAttributes: Use struct assignment (*e = T{}) instead of field-by-field reset to clear protobuf internal fields (unknownFields, sizeCache) that would otherwise accumulate across pool reuses, causing data corruption or memory bloat. 2. EntryAttributeToExistingPb: Add nil guard for attr parameter to prevent panic if caller passes nil. * log_buffer: reset logEntry before pool return in error path For consistency with success path, reset the logEntry before putting it back in the pool in the error path. This prevents the pooled object from holding references to partially unmarshaled data. * filer: optimize MatchStorageRule and document ClonePathConf 1. Avoid double []byte(path) conversion in multi-match case by converting once and reusing pathBytes. 2. Add IMPORTANT comment to ClonePathConf documenting that it must be kept in sync with filer_pb.FilerConf_PathConf fields when the protobuf evolves. * filer/log_buffer: fix data race and use defer for pool cleanup 1. entry_codec.go EncodeAttributesAndChunks: Fix critical data race - proto.Marshal may return a slice sharing memory with the message. Copy the data before returning message to pool to prevent corruption. 2. entry_codec.go DecodeAttributesAndChunks: Use defer for cleaner pool management, ensuring message is always returned to pool. 3. log_buffer.go readTs: Use defer for pool cleanup, removing duplicated resetLogEntry/Put calls in success and error paths. * filer: fix ClonePathConf field order and add comprehensive test 1. Fix field order in ClonePathConf to match protobuf struct definition (WormGracePeriodSeconds before WormRetentionTimeSeconds). 2. Add TestClonePathConf that constructs a fully-populated PathConf, calls ClonePathConf, and asserts equality of all exported fields. This will catch future schema drift when new fields are added. 3. Add TestClonePathConfNil to verify nil handling. * filer: use reflection in ClonePathConf test to detect schema drift Replace hardcoded field comparisons with reflection-based comparison. This automatically catches: 1. New fields added to the protobuf but not copied in ClonePathConf 2. Missing non-zero test values for any exported field The test iterates over all exported fields using reflect and compares src vs clone values, failing if any field differs. * filer: update EntryAttributeToExistingPb comment to reflect nil handling The function safely handles nil attr by returning early, but the comment incorrectly stated 'attr must not be nil'. Update comment to accurately describe the defensive behavior. * Fix review feedback: restore request ID propagation and remove redundant resets 1. grpc_client_server.go: Restore AppendToOutgoingContext for request ID so handlers making downstream gRPC calls will automatically propagate the request ID to downstream services. 2. entry_codec.go: Remove redundant resetPbEntry calls after Get. The defer block ensures reset before Put, so next Get receives clean object. 3. log_buffer.go: Remove redundant resetLogEntry call after Get for same reason - defer already handles reset before Put. | 2 months ago |
| | d970c15d71 | fix: prevent filer.backup stall in single-filer setups (#7695) * fix: prevent filer.backup stall in single-filer setups (#4977) When MetaAggregator.MetaLogBuffer is empty (which happens in single-filer setups with no peers), ReadFromBuffer was returning nil error, causing LoopProcessLogData to enter an infinite wait loop on ListenersCond. This fix returns ResumeFromDiskError instead, allowing SubscribeMetadata to loop back and read from persisted logs on disk. This ensures filer.backup continues processing events even when the in-memory aggregator buffer is empty. Fixes #4977 * test: add integration tests for metadata subscription Add integration tests for metadata subscription functionality: - TestMetadataSubscribeBasic: Tests basic subscription and event receiving - TestMetadataSubscribeSingleFilerNoStall: Regression test for #4977, verifies subscription doesn't stall under high load in single-filer setups - TestMetadataSubscribeResumeFromDisk: Tests resuming subscription from disk Related to #4977 * ci: add GitHub Actions workflow for metadata subscribe tests Add CI workflow that runs on: - Push/PR to master affecting filer, log_buffer, or metadata subscribe code - Runs the integration tests for metadata subscription - Uploads logs on failure for debugging Related to #4977 * fix: use multipart form-data for file uploads in integration tests The filer expects multipart/form-data for file uploads, not raw POST body. This fixes the 'Content-Type isn't multipart/form-data' error. * test: use -peers=none for faster master startup * test: add -peers=none to remaining master startup in ec tests * fix: use filer HTTP port 8888, WithFilerClient adds 10000 for gRPC WithFilerClient calls ToGrpcAddress() which adds 10000 to the port. Passing 18888 resulted in connecting to 28888. Use 8888 instead. * test: add concurrent writes and million updates tests - TestMetadataSubscribeConcurrentWrites: 50 goroutines writing 20 files each - TestMetadataSubscribeMillionUpdates: 1 million metadata entries via gRPC (metadata only, no actual file content for speed) * fix: address PR review comments - Handle os.MkdirAll errors explicitly instead of ignoring - Handle log file creation errors with proper error messages - Replace silent event dropping with 100ms timeout and warning log * Update metadata_subscribe_integration_test.go | 2 months ago |
| | 805950b401 | 4.02 | 2 months ago |
| | 28ac536280 | fix: normalize Windows backslash paths in weed admin file uploads (#7636) fix: normalize Windows backslash paths in file uploads When uploading files from a Windows client to a Linux server, file paths containing backslashes were not being properly interpreted as directory separators. This caused files intended for subdirectories to be created in the root directory with backslashes in their filenames. Changes: - Add util.CleanWindowsPath and util.CleanWindowsPathBase helper functions in weed/util/fullpath.go for reusable path normalization - Use path.Join/path.Clean/path.Base instead of filepath equivalents for URL path semantics (filepath is OS-specific) - Apply normalization in weed admin handlers and filer upload parsing Fixes #7628 | 3 months ago |
| | 5c1de633cb | mount: improve read throughput with parallel chunk fetching (#7627) * filer: remove lock contention during chunk download This addresses issue #7504 where a single weed mount FUSE instance does not fully utilize node network bandwidth when reading large files. The SingleChunkCacher was holding a mutex during the entire HTTP download, causing readers to block until the download completed. This serialized chunk reads even when multiple goroutines were downloading in parallel. Changes: - Add sync.Cond to SingleChunkCacher for efficient waiting - Move HTTP download outside the critical section in startCaching() - Use condition variable in readChunkAt() to wait for download completion - Add isComplete flag to track download state Now multiple chunk downloads can proceed truly in parallel, and readers wait efficiently using the condition variable instead of blocking on a mutex held during I/O operations. Ref: #7504 * filer: parallel chunk fetching within doReadAt This addresses issue #7504 by enabling parallel chunk downloads within a single read operation. Previously, doReadAt() processed chunks sequentially in a loop, meaning each chunk had to be fully downloaded before the next one started. This left significant network bandwidth unused when chunks resided on different volume servers. Changes: - Collect all chunk read tasks upfront - Use errgroup to fetch multiple chunks in parallel - Each chunk reads directly into its correct buffer position - Limit concurrency to prefetchCount (min 4) to avoid overwhelming the system - Handle gaps and zero-filling before parallel fetch - Trigger prefetch after parallel reads complete For a read spanning N chunks on different volume servers, this can now utilize up to N times the bandwidth of a single connection. Ref: #7504 * http: direct buffer read to reduce memory copies This addresses issue #7504 by reducing memory copy overhead during chunk downloads. Previously, RetriedFetchChunkData used ReadUrlAsStream which: 1. Allocated a 64KB intermediate buffer 2. Read data in 64KB chunks 3. Called a callback to copy each chunk to the destination For a 16MB chunk, this meant 256 copy operations plus the callback overhead. Profiling showed significant time spent in memmove. Changes: - Add readUrlDirectToBuffer() that reads directly into the destination - Add retriedFetchChunkDataDirect() for unencrypted, non-gzipped chunks - Automatically use direct read path when possible (cipher=nil, gzip=false) - Use http.NewRequestWithContext for proper cancellation For unencrypted chunks (the common case), this eliminates the intermediate buffer entirely, reading HTTP response bytes directly into the final destination buffer. Ref: #7504 * address review comments - Use channel (done) instead of sync.Cond for download completion signaling This integrates better with context cancellation patterns - Remove redundant groupErr check in reader_at.go (errors are already captured in task.err) - Remove buggy URL encoding logic from retriedFetchChunkDataDirect (The existing url.PathEscape on full URL is a pre-existing bug that should be fixed separately) * address review comments (round 2) - Return io.ErrUnexpectedEOF when HTTP response is truncated This prevents silent data corruption from incomplete reads - Simplify errgroup error handling by using g.Wait() error directly Remove redundant task.err field and manual error aggregation loop - Define minReadConcurrency constant instead of magic number 4 Improves code readability and maintainability Note: Context propagation to startCaching() is intentionally NOT changed. The downloaded chunk is a shared resource that may be used by multiple readers. Using context.Background() ensures the download completes even if one reader cancels, preventing data loss for other waiting readers. * http: inject request ID for observability in direct read path Add request_id.InjectToRequest() call to readUrlDirectToBuffer() for consistency with ReadUrlAsStream path. This ensures full-chunk reads carry the same tracing/correlation headers for server logs and metrics. * filer: consistent timestamp handling in sequential read path Use max(ts, task.chunk.ModifiedTsNs) in sequential path to match parallel path behavior. Also update ts before error check so that on failure, the returned timestamp reflects the max of all chunks processed so far. * filer: document why context.Background() is used in startCaching Add comment explaining the intentional design decision: the downloaded chunk is a shared resource that may be used by multiple concurrent readers. Using context.Background() ensures the download completes even if one reader cancels, preventing errors for other waiting readers. * filer: propagate context for reader cancellation Address review comment: pass context through ReadChunkAt call chain so that a reader can cancel its wait for a download. The key distinction is: - Download uses context.Background() - shared resource, always completes - Reader wait uses request context - can be cancelled individually If a reader cancels, it stops waiting and returns ctx.Err(), but the download continues to completion for other readers waiting on the same chunk. This properly handles the shared resource semantics while still allowing individual reader cancellation. * filer: use defer for close(done) to guarantee signal on panic Move close(s.done) to a defer statement at the start of startCaching() to ensure the completion signal is always sent, even if an unexpected panic occurs. This prevents readers from blocking indefinitely. * filer: remove unnecessary code - Remove close(s.cacheStartedCh) in destroy() - the channel is only used for one-time synchronization, closing it provides no benefit - Remove task := task loop variable capture - Go 1.22+ fixed loop variable semantics, this capture is no longer necessary (go.mod specifies Go 1.24.0) * filer: restore fallback to chunkCache when cacher returns no data Fix critical issue where ReadChunkAt would return 0,nil immediately if SingleChunkCacher couldn't provide data for the requested offset, without trying the chunkCache fallback. Now if cacher.readChunkAt returns n=0 and err=nil, we fall through to try chunkCache. * filer: add comprehensive tests for ReaderCache Tests cover: - Context cancellation while waiting for download - Fallback to chunkCache when cacher returns n=0, err=nil - Multiple concurrent readers waiting for same chunk - Partial reads at different offsets - Downloader cleanup when exceeding cache limit - Done channel signaling (no hangs on completion) * filer: prioritize done channel over context cancellation If data is already available (done channel closed), return it even if the reader's context is also cancelled. This avoids unnecessary errors when the download has already completed. * filer: add lookup error test and document test limitations Add TestSingleChunkCacherLookupError to test error handling when lookup fails. Document that full HTTP integration tests for SingleChunkCacher require global HTTP client initialization which is complex in unit tests. The download path is tested via FUSE integration tests. * filer: add tests that exercise SingleChunkCacher concurrency logic Add tests that use blocking lookupFileIdFn to exercise the actual SingleChunkCacher wait/cancellation logic: - TestSingleChunkCacherContextCancellationDuringLookup: tests reader cancellation while lookup is blocked - TestSingleChunkCacherMultipleReadersWaitForDownload: tests multiple readers waiting on the same download - TestSingleChunkCacherOneReaderCancelsOthersContinue: tests that when one reader cancels, other readers continue waiting These tests properly exercise the done channel wait/cancel logic without requiring HTTP calls - the blocking lookup simulates a slow download. | 3 months ago |
| | 3183a49698 | fix: S3 downloads failing after idle timeout (#7626) * fix: S3 downloads failing after idle timeout (#7618) The idle timeout was incorrectly terminating active downloads because read and write deadlines were managed independently. During a download, the server writes data but rarely reads, so the read deadline would expire even though the connection was actively being used. Changes: 1. Simplify to single Timeout field - since this is a 'no activity timeout' where any activity extends the deadline, separate read/write timeouts are unnecessary. Now uses SetDeadline() which sets both at once. 2. Implement proper 'no activity timeout' - any activity (read or write) now extends the deadline. The connection only times out when there's genuinely no activity in either direction. 3. Increase default S3 idleTimeout from 10s to 120s for additional safety margin when fetching chunks from slow storage backends. Fixes #7618 * Update weed/util/net_timeout.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> | 3 months ago |
| | 5ed0b00fb9 | Support separate volume server ID independent of RPC bind address (#7609) * pb: add id field to Heartbeat message for stable volume server identification This adds an 'id' field to the Heartbeat protobuf message that allows volume servers to identify themselves independently of their IP:port address. Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 * storage: add Id field to Store struct Add Id field to Store struct and include it in CollectHeartbeat(). The Id field provides a stable volume server identity independent of IP:port. Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 * topology: support id-based DataNode identification Update GetOrCreateDataNode to accept an id parameter for stable node identification. When id is provided, the DataNode can maintain its identity even when its IP address changes (e.g., in Kubernetes pod reschedules). For backward compatibility: - If id is provided, use it as the node ID - If id is empty, fall back to ip:port Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 * volume: add -id flag for stable volume server identity Add -id command line flag to volume server that allows specifying a stable identifier independent of the IP address. This is useful for Kubernetes deployments with hostPath volumes where pods can be rescheduled to different nodes while the persisted data remains on the original node. Usage: weed volume -id=node-1 -ip=10.0.0.1 ... If -id is not specified, it defaults to ip:port for backward compatibility. Fixes https://github.com/seaweedfs/seaweedfs/issues/7487 * server: add -volume.id flag to weed server command Support the -volume.id flag in the all-in-one 'weed server' command, consistent with the standalone 'weed volume' command. Usage: weed server -volume.id=node-1 ... Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 * topology: add test for id-based DataNode identification Test the key scenarios: 1. Create DataNode with explicit id 2. Same id with different IP returns same DataNode (K8s reschedule) 3. IP/PublicUrl are updated when node reconnects with new address 4. Different id creates new DataNode 5. Empty id falls back to ip:port (backward compatibility) Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 * pb: add address field to DataNodeInfo for proper node addressing Previously, DataNodeInfo.Id was used as the node address, which worked when Id was always ip:port. Now that Id can be an explicit string, we need a separate Address field for connection purposes. Changes: - Add 'address' field to DataNodeInfo protobuf message - Update ToDataNodeInfo() to populate the address field - Update NewServerAddressFromDataNode() to use Address (with Id fallback) - Fix LookupEcVolume to use dn.Url() instead of dn.Id() Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 * fix: trim whitespace from volume server id and fix test - Trim whitespace from -id flag to treat ' ' as empty - Fix store_load_balancing_test.go to include id parameter in NewStore call Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 * refactor: extract GetVolumeServerId to util package Move the volume server ID determination logic to a shared utility function to avoid code duplication between volume.go and rack.go. Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 * fix: improve transition logic for legacy nodes - Use exact ip:port match instead of net.SplitHostPort heuristic - Update GrpcPort and PublicUrl during transition for consistency - Remove unused net import Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 * fix: add id normalization and address change logging - Normalize id parameter at function boundary (trim whitespace) - Log when DataNode IP:Port changes (helps debug K8s pod rescheduling) Ref: https://github.com/seaweedfs/seaweedfs/issues/7487 | 3 months ago |
| | 7e15a4abe2 | 4.01 | 3 months ago |
| | cd2fac4551 | S3: pass HTTP 429 from volume servers to S3 clients (#7556) With the recent changes (commit | 3 months ago |
| | 5f7a292334 | add build info metrics (#7525) * add build info metrics * unused * metrics on build * size limit * once | 3 months ago |
| | ca84a8a713 | S3: Directly read write volume servers (#7481) * Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization * revert Reverted the conditional versioning check to always check versioning status Reverted the conditional SSE entry fetch to always fetch entry metadata Reverted the conditional versioning check to always check versioning status Reverted the conditional SSE entry fetch to always fetch entry metadata * Lazy Entry Fetch for SSE, Skip Conditional Header Check * SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive) * SSE-C is mutually exclusive with SSE-S3 and SSE-KMS * refactor * Removed Premature Mutual Exclusivity Check * check for the presence of the X-Amz-Server-Side-Encryption header * not used * fmt * directly read write volume servers * HTTP Range Request Support * set header * md5 * copy object * fix sse * fmt * implement sse * sse continue * fixed the suffix range bug (bytes=-N for "last N bytes") * debug logs * Missing PartsCount Header * profiling * url encoding * test_multipart_get_part * headers * debug * adjust log level * handle part number * Update s3api_object_handlers.go * nil safety * set ModifiedTsNs * remove * nil check * fix sse header * same logic as filer * decode values * decode ivBase64 * s3: Fix SSE decryption JWT authentication and streaming errors Critical fix for SSE (Server-Side Encryption) test failures: 1. **JWT Authentication Bug** (Root Cause): - Changed from GenJwtForFilerServer to GenJwtForVolumeServer - S3 API now uses correct JWT when directly reading from volume servers - Matches filer's authentication pattern for direct volume access - Fixes 'unexpected EOF' and 500 errors in SSE tests 2. **Streaming Error Handling**: - Added error propagation in getEncryptedStreamFromVolumes goroutine - Use CloseWithError() to properly communicate stream failures - Added debug logging for streaming errors 3. **Response Header Timing**: - Removed premature WriteHeader(http.StatusOK) call - Let Go's http package write status automatically on first write - Prevents header lock when errors occur during streaming 4. **Enhanced SSE Decryption Debugging**: - Added IV/Key validation and logging for SSE-C, SSE-KMS, SSE-S3 - Better error messages for missing or invalid encryption metadata - Added glog.V(2) debugging for decryption setup This fixes SSE integration test failures where encrypted objects could not be retrieved due to volume server authentication failures. The JWT bug was causing volume servers to reject requests, resulting in truncated/empty streams (EOF) or internal errors. * s3: Fix SSE multipart upload metadata preservation Critical fix for SSE multipart upload test failures (SSE-C and SSE-KMS): **Root Cause - Incomplete SSE Metadata Copying**: The old code only tried to copy 'SeaweedFSSSEKMSKey' from the first part to the completed object. This had TWO bugs: 1. **Wrong Constant Name** (Key Mismatch Bug): - Storage uses: SeaweedFSSSEKMSKeyHeader = 'X-SeaweedFS-SSE-KMS-Key' - Old code read: SeaweedFSSSEKMSKey = 'x-seaweedfs-sse-kms-key' - Result: SSE-KMS metadata was NEVER copied → 500 errors 2. **Missing SSE-C and SSE-S3 Headers**: - SSE-C requires: IV, Algorithm, KeyMD5 - SSE-S3 requires: encrypted key data + standard headers - Old code: copied nothing for SSE-C/SSE-S3 → decryption failures **Fix - Complete SSE Header Preservation**: Now copies ALL SSE headers from first part to completed object: - SSE-C: SeaweedFSSSEIV, CustomerAlgorithm, CustomerKeyMD5 - SSE-KMS: SeaweedFSSSEKMSKeyHeader, AwsKmsKeyId, ServerSideEncryption - SSE-S3: SeaweedFSSSES3Key, ServerSideEncryption Applied consistently to all 3 code paths: 1. Versioned buckets (creates version file) 2. Suspended versioning (creates main object with null versionId) 3. Non-versioned buckets (creates main object) **Why This Is Correct**: The headers copied EXACTLY match what putToFiler stores during part upload (lines 496-521 in s3api_object_handlers_put.go). This ensures detectPrimarySSEType() can correctly identify encrypted multipart objects and trigger inline decryption with proper metadata. Fixes: TestSSEMultipartUploadIntegration (SSE-C and SSE-KMS subtests) * s3: Add debug logging for versioning state diagnosis Temporary debug logging to diagnose test_versioning_obj_plain_null_version_overwrite_suspended failure. Added glog.V(0) logging to show: 1. setBucketVersioningStatus: when versioning status is changed 2. PutObjectHandler: what versioning state is detected (Enabled/Suspended/none) 3. PutObjectHandler: which code path is taken (putVersionedObject vs putSuspendedVersioningObject) This will help identify if: - The versioning status is being set correctly in bucket config - The cache is returning stale/incorrect versioning state - The switch statement is correctly routing to suspended vs enabled handlers * s3: Enhanced versioning state tracing for suspended versioning diagnosis Added comprehensive logging across the entire versioning state flow: PutBucketVersioningHandler: - Log requested status (Enabled/Suspended) - Log when calling setBucketVersioningStatus - Log success/failure of status change setBucketVersioningStatus: - Log bucket and status being set - Log when config is updated - Log completion with error code updateBucketConfig: - Log versioning state being written to cache - Immediate cache verification after Set - Log if cache verification fails getVersioningState: - Log bucket name and state being returned - Log if object lock forces VersioningEnabled - Log errors This will reveal: 1. If PutBucketVersioning(Suspended) is reaching the handler 2. If the cache update succeeds 3. What state getVersioningState returns during PUT 4. Any cache consistency issues Expected to show why bucket still reports 'Enabled' after 'Suspended' call. * s3: Add SSE chunk detection debugging for multipart uploads Added comprehensive logging to diagnose why TestSSEMultipartUploadIntegration fails: detectPrimarySSEType now logs: 1. Total chunk count and extended header count 2. All extended headers with 'sse'/'SSE'/'encryption' in the name 3. For each chunk: index, SseType, and whether it has metadata 4. Final SSE type counts (SSE-C, SSE-KMS, SSE-S3) This will reveal if: - Chunks are missing SSE metadata after multipart completion - Extended headers are copied correctly from first part - The SSE detection logic is working correctly Expected to show if chunks have SseType=0 (none) or proper SSE types set. * s3: Trace SSE chunk metadata through multipart completion and retrieval Added end-to-end logging to track SSE chunk metadata lifecycle: **During Multipart Completion (filer_multipart.go)**: 1. Log finalParts chunks BEFORE mkFile - shows SseType and metadata 2. Log versionEntry.Chunks INSIDE mkFile callback - shows if mkFile preserves SSE info 3. Log success after mkFile completes **During GET Retrieval (s3api_object_handlers.go)**: 1. Log retrieved entry chunks - shows SseType and metadata after retrieval 2. Log detected SSE type result This will reveal at which point SSE chunk metadata is lost: - If finalParts have SSE metadata but versionEntry.Chunks don't → mkFile bug - If versionEntry.Chunks have SSE metadata but retrieved chunks don't → storage/retrieval bug - If chunks never have SSE metadata → multipart completion SSE processing bug Expected to show chunks with SseType=NONE during retrieval even though they were created with proper SseType during multipart completion. * s3: Fix SSE-C multipart IV base64 decoding bug **Critical Bug Found**: SSE-C multipart uploads were failing because: Root Cause: - entry.Extended[SeaweedFSSSEIV] stores base64-encoded IV (24 bytes for 16-byte IV) - SerializeSSECMetadata expects raw IV bytes (16 bytes) - During multipart completion, we were passing base64 IV directly → serialization error Error Message: "Failed to serialize SSE-C metadata for chunk in part X: invalid IV length: expected 16 bytes, got 24" Fix: - Base64-decode IV before passing to SerializeSSECMetadata - Added error handling for decode failures Impact: - SSE-C multipart uploads will now correctly serialize chunk metadata - Chunks will have proper SSE metadata for decryption during GET This fixes the SSE-C subtest of TestSSEMultipartUploadIntegration. SSE-KMS still has a separate issue (error code 23) being investigated. * fixes * kms sse * handle retry if not found in .versions folder and should read the normal object * quick check (no retries) to see if the .versions/ directory exists * skip retry if object is not found * explicit update to avoid sync delay * fix map update lock * Remove fmt.Printf debug statements * Fix SSE-KMS multipart base IV fallback to fail instead of regenerating * fmt * Fix ACL grants storage logic * header handling * nil handling * range read for sse content * test range requests for sse objects * fmt * unused code * upload in chunks * header case * fix url * bucket policy error vs bucket not found * jwt handling * fmt * jwt in request header * Optimize Case-Insensitive Prefix Check * dead code * Eliminated Unnecessary Stream Prefetch for Multipart SSE * range sse * sse * refactor * context * fmt * fix type * fix SSE-C IV Mismatch * Fix Headers Being Set After WriteHeader * fix url parsing * propergate sse headers * multipart sse-s3 * aws sig v4 authen * sse kms * set content range * better errors * Update s3api_object_handlers_copy.go * Update s3api_object_handlers.go * Update s3api_object_handlers.go * avoid magic number * clean up * Update s3api_bucket_policy_handlers.go * fix url parsing * context * data and metadata both use background context * adjust the offset * SSE Range Request IV Calculation * adjust logs * IV relative to offset in each part, not the whole file * collect logs * offset * fix offset * fix url * logs * variable * jwt * Multipart ETag semantics: conditionally set object-level Md5 for single-chunk uploads only. * sse * adjust IV and offset * multipart boundaries * ensures PUT and GET operations return consistent ETags * Metadata Header Case * CommonPrefixes Sorting with URL Encoding * always sort * remove the extra PathUnescape call * fix the multipart get part ETag * the FileChunk is created without setting ModifiedTsNs * Sort CommonPrefixes lexicographically to match AWS S3 behavior * set md5 for multipart uploads * prevents any potential data loss or corruption in the small-file inline storage path * compiles correctly * decryptedReader will now be properly closed after use * Fixed URL encoding and sort order for CommonPrefixes * Update s3api_object_handlers_list.go * SSE-x Chunk View Decryption * Different IV offset calculations for single-part vs multipart objects * still too verbose in logs * less logs * ensure correct conversion * fix listing * nil check * minor fixes * nil check * single character delimiter * optimize * range on empty object or zero-length * correct IV based on its position within that part, not its position in the entire object * adjust offset * offset Fetch FULL encrypted chunk (not just the range) Adjust IV by PartOffset/ChunkOffset only Decrypt full chunk Skip in the DECRYPTED stream to reach OffsetInChunk * look breaking * refactor * error on no content * handle intra-block byte skipping * Incomplete HTTP Response Error Handling * multipart SSE * Update s3api_object_handlers.go * address comments * less logs * handling directory * Optimized rejectDirectoryObjectWithoutSlash() to avoid unnecessary lookups * Revert "handling directory" This reverts commit | 3 months ago |
|
|
084b377f87
|
do delete expired entries on s3 list request (#7426)
* do delete expired entries on s3 list request
https://github.com/seaweedfs/seaweedfs/issues/6837
* disable delete expires s3 entry in filer
* pass opt allowDeleteObjectsByTTL to all servers
* delete on get and head
* add lifecycle expiration s3 tests
* fix opt allowDeleteObjectsByTTL for server
* fix test lifecycle expiration
* fix IsExpired
* fix locationPrefix for updateEntriesTTL
* fix s3tests
* resolv coderabbitai
* GetS3ExpireTime on filer
* go mod
* clear TtlSeconds for volume
* move s3 delete expired entry to filer
* filer delete meta and data
* del unusing func removeExpiredObject
* test s3 put
* test s3 put multipart
* allowDeleteObjectsByTTL by default
* fix pipline tests
* rm dublicate SeaweedFSExpiresS3
* revert expiration tests
* fix updateTTL
* rm log
* resolv comment
* fix delete version object
* fix S3Versioning
* fix delete on FindEntry
* fix delete chunks
* fix sqlite not support concurrent writes/reads
* move deletion out of listing transaction; delete entries and empty folders
* Revert "fix sqlite not support concurrent writes/reads"
This reverts commit
|
4 months ago |
|
|
498ac8903f
|
S3: prevent deleting buckets with object locking (#7434)
* prevent deleting buckets with object locking * addressing comments * Update s3api_bucket_handlers.go * address comments * early return * refactor * simplify * constant * go fmt |
4 months ago |
|
|
a154ef9a0f |
4.00
|
4 months ago |
|
|
58acc14d2c |
avoid unnecessary fail fast
fix https://github.com/seaweedfs/seaweedfs/issues/7417 |
4 months ago |
|
|
f00ae727b7 |
detect ipv6
|
4 months ago |
|
|
ba07b3e4c6
|
network: Adaptive timeout (#7410)
* server can start when no network for local dev * fixed superfluous response.WriteHeader call" warning * adaptive based on last write time * more doc * refactoring |
4 months ago |
|
|
a6da3eb770 |
server can start when no network for local dev
|
4 months ago |
|
|
ed023f4a7d |
purge emojis
|
4 months ago |
|
|
b7ba6785a2 |
go fmt
|
4 months ago |
|
|
a80b5eea5e |
3.99
|
4 months ago |
|
|
263e891da0
|
Clients to volume server requires JWT tokens for all read operations (#7376)
* [Admin UI] Login not possible due to securecookie error * avoid 404 favicon * Update weed/admin/dash/auth_middleware.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * address comments * avoid variable over shadowing * log session save error * When jwt.signing.read.key is enabled in security.toml, the volume server requires JWT tokens for all read operations. * reuse fileId * refactor --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
4 months ago |
|
|
7d147f238c
|
avoid repeated reading disk (#7369)
* avoid repeated reading disk * checks both flush time AND read position advancement * wait on cond * fix reading Gap detection and skipping to earliest memory time Time-based reads that include events at boundary times for first reads (offset ≤ 0) Aggregated subscriber wake-up via ListenersWaits signaling * address comments |
4 months ago |
|
|
d220875ef4 |
Revert "fix reading"
This reverts commit
|
4 months ago |
|
|
64a4ce9358 |
fix reading
Gap detection and skipping to earliest memory time Time-based reads that include events at boundary times for first reads (offset ≤ 0) Aggregated subscriber wake-up via ListenersWaits signaling |
4 months ago |
|
|
d3095f0c2a |
3.98
|
4 months ago |
|
|
97f3028782
|
Clean up logs and deprecated functions (#7339)
* less logs * fix deprecated grpc.Dial |
4 months ago |
|
|
8d63a9cf5f
|
Fixes for kafka gateway (#7329)
* fix race condition
* save checkpoint every 2 seconds
* Inlined the session creation logic to hold the lock continuously
* comment
* more logs on offset resume
* only recreate if we need to seek backward (requested offset < current offset), not on any mismatch
* Simplified GetOrCreateSubscriber to always reuse existing sessions
* atomic currentStartOffset
* fmt
* avoid deadlock
* fix locking
* unlock
* debug
* avoid race condition
* refactor dedup
* consumer group that does not join group
* increase deadline
* use client timeout wait
* less logs
* add some delays
* adjust deadline
* Update fetch.go
* more time
* less logs, remove unused code
* purge unused
* adjust return values on failures
* clean up consumer protocols
* avoid goroutine leak
* seekable subscribe messages
* ack messages to broker
* reuse cached records
* pin s3 test version
* adjust s3 tests
* verify produced messages are consumed
* track messages with testStartTime
* removing the unnecessary restart logic and relying on the seek mechanism we already implemented
* log read stateless
* debug fetch offset APIs
* fix tests
* fix go mod
* less logs
* test: increase timeouts for consumer group operations in E2E tests
Consumer group operations (coordinator discovery, offset fetch/commit) are
slower in CI environments with limited resources. This increases timeouts to:
- ProduceMessages: 10s -> 30s (for when consumer groups are active)
- ConsumeWithGroup: 30s -> 60s (for offset fetch/commit operations)
Fixes the TestOffsetManagement timeout failures in GitHub Actions CI.
* feat: add context timeout propagation to produce path
This commit adds proper context propagation throughout the produce path,
enabling client-side timeouts to be honored on the broker side. Previously,
only fetch operations respected client timeouts - produce operations continued
indefinitely even if the client gave up.
Changes:
- Add ctx parameter to ProduceRecord and ProduceRecordValue signatures
- Add ctx parameter to PublishRecord and PublishRecordValue in BrokerClient
- Add ctx parameter to handleProduce and related internal functions
- Update all callers (protocol handlers, mocks, tests) to pass context
- Add context cancellation checks in PublishRecord before operations
Benefits:
- Faster failure detection when client times out
- No orphaned publish operations consuming broker resources
- Resource efficiency improvements (no goroutine/stream/lock leaks)
- Consistent timeout behavior between produce and fetch paths
- Better error handling with proper cancellation signals
This fixes the root cause of CI test timeouts where produce operations
continued indefinitely after clients gave up, leading to cascading delays.
* feat: add disk I/O fallback for historical offset reads
This commit implements async disk I/O fallback to handle cases where:
1. Data is flushed from memory before consumers can read it (CI issue)
2. Consumers request historical offsets not in memory
3. Small LogBuffer retention in resource-constrained environments
Changes:
- Add readHistoricalDataFromDisk() helper function
- Update ReadMessagesAtOffset() to call ReadFromDiskFn when offset < bufferStartOffset
- Properly handle maxMessages and maxBytes limits during disk reads
- Return appropriate nextOffset after disk reads
- Log disk read operations at V(2) and V(3) levels
Benefits:
- Fixes CI test failures where data is flushed before consumption
- Enables consumers to catch up even if they fall behind memory retention
- No blocking on hot path (disk read only for historical data)
- Respects existing ReadFromDiskFn timeout handling
How it works:
1. Try in-memory read first (fast path)
2. If offset too old and ReadFromDiskFn configured, read from disk
3. Return disk data with proper nextOffset
4. Consumer continues reading seamlessly
This fixes the 'offset 0 too old (earliest in-memory: 5)' error in
TestOffsetManagement, where messages were flushed before the consumer started.
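The shape of the fallback, sketched with assumed types (tryReadFromMemory is a hypothetical stand-in for the in-memory fast path):

    package logbuffer

    import "fmt"

    type Message struct{ Key, Value []byte }

    // LogBuffer stand-in: only the fields this sketch needs.
    type LogBuffer struct {
        bufferStartOffset int64 // earliest offset still held in memory
        ReadFromDiskFn    func(offset int64, maxMessages, maxBytes int) ([]*Message, int64, error)
    }

    func (b *LogBuffer) tryReadFromMemory(offset int64, maxMessages, maxBytes int) ([]*Message, int64, bool) {
        return nil, 0, false // placeholder for the real in-memory read
    }

    func (b *LogBuffer) ReadMessagesAtOffset(offset int64, maxMessages, maxBytes int) ([]*Message, int64, error) {
        // 1. Fast path: serve from memory, no disk I/O.
        if msgs, next, ok := b.tryReadFromMemory(offset, maxMessages, maxBytes); ok {
            return msgs, next, nil
        }
        // 2. Historical offset: fall back to the configured disk reader.
        if offset < b.bufferStartOffset && b.ReadFromDiskFn != nil {
            return b.ReadFromDiskFn(offset, maxMessages, maxBytes)
        }
        return nil, offset, fmt.Errorf("offset %d not in memory and no disk fallback", offset)
    }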
* fmt
* feat: add in-memory cache for disk chunk reads
This commit adds an LRU cache for disk chunks to optimize repeated reads
of historical data. When multiple consumers read the same historical offsets,
or a single consumer refetches the same data, the cache eliminates redundant
disk I/O.
Cache Design:
- Chunk size: 1000 messages per chunk
- Max chunks: 16 (configurable, ~16K messages cached)
- Eviction policy: LRU (Least Recently Used)
- Thread-safe with RWMutex
- Chunk-aligned offsets for efficient lookups
New Components:
1. DiskChunkCache struct - manages cached chunks
2. CachedDiskChunk struct - stores chunk data with metadata
3. getCachedDiskChunk() - checks cache before disk read
4. cacheDiskChunk() - stores chunks with LRU eviction
5. extractMessagesFromCache() - extracts subset from cached chunk
How It Works:
1. Read request for offset N (e.g., 2500)
2. Calculate chunk start: (2500 / 1000) * 1000 = 2000
3. Check cache for chunk starting at 2000
4. If HIT: Extract messages 2500-2999 from cached chunk
5. If MISS: Read chunk 2000-2999 from disk, cache it, extract 2500-2999
6. If cache full: Evict LRU chunk before caching new one
Benefits:
- Eliminates redundant disk I/O for popular historical data
- Reduces latency for repeated reads (cache hit ~1ms vs disk ~100ms)
- Supports multiple consumers reading same historical offsets
- Automatically evicts old chunks when cache is full
- Zero impact on hot path (in-memory reads unchanged)
Performance Impact:
- Cache HIT: ~99% faster than disk read
- Cache MISS: Same as disk read (with caching overhead ~1%)
- Memory: ~16MB for 16 chunks (16K messages x 1KB avg)
Example Scenario (CI tests):
- Producer writes offsets 0-4
- Data flushes to disk
- Consumer 1 reads 0-4 (cache MISS, reads from disk, caches chunk 0-999)
- Consumer 2 reads 0-4 (cache HIT, served from memory)
- Consumer 1 rebalances, re-reads 0-4 (cache HIT, no disk I/O)
This optimization is especially valuable in CI environments where:
- Small memory buffers cause frequent flushing
- Multiple consumers read the same historical data
- Disk I/O is relatively slow compared to memory access
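The chunk alignment and LRU eviction, sketched with assumed field names (the commit's real structs are DiskChunkCache and CachedDiskChunk):

    package logbuffer

    import "sync"

    const (
        chunkSize = 1000 // messages per cached chunk
        maxChunks = 16   // LRU capacity (~16K messages)
    )

    type CachedDiskChunk struct {
        startOffset int64    // chunk-aligned
        messages    [][]byte // raw payloads read from disk
        lastUsed    int64    // logical clock tick for LRU
    }

    type DiskChunkCache struct {
        mu     sync.Mutex
        chunks map[int64]*CachedDiskChunk // key: chunk-aligned start offset
        clock  int64
    }

    // chunkStartFor aligns an offset to its chunk, e.g. 2500 -> 2000.
    func chunkStartFor(offset int64) int64 { return (offset / chunkSize) * chunkSize }

    func (c *DiskChunkCache) get(offset int64) (*CachedDiskChunk, bool) {
        c.mu.Lock()
        defer c.mu.Unlock()
        chunk, ok := c.chunks[chunkStartFor(offset)]
        if ok { // HIT: bump recency
            c.clock++
            chunk.lastUsed = c.clock
        }
        return chunk, ok
    }

    func (c *DiskChunkCache) put(chunk *CachedDiskChunk) {
        c.mu.Lock()
        defer c.mu.Unlock()
        if len(c.chunks) >= maxChunks { // evict least recently used
            var lruKey, lruTick int64
            first := true
            for k, v := range c.chunks {
                if first || v.lastUsed < lruTick {
                    lruKey, lruTick, first = k, v.lastUsed, false
                }
            }
            delete(c.chunks, lruKey)
        }
        c.clock++
        chunk.lastUsed = c.clock
        c.chunks[chunkStartFor(chunk.startOffset)] = chunk
    }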
* fix: commit offsets in Cleanup() before rebalancing
This commit adds explicit offset commit in the ConsumerGroupHandler.Cleanup()
method, which is called during consumer group rebalancing. This ensures all
marked offsets are committed BEFORE partitions are reassigned to other consumers,
significantly reducing duplicate message consumption during rebalancing.
Problem:
- Cleanup() was not committing offsets before rebalancing
- When partition reassigned to another consumer, it started from last committed offset
- Uncommitted messages (processed but not yet committed) were read again by new consumer
- This caused ~100-200% duplicate messages during rebalancing in tests
Solution:
- Add session.Commit() in Cleanup() method
- This runs after all ConsumeClaim goroutines have exited
- Ensures all MarkMessage() calls are committed before partition release
- New consumer starts from the last processed offset, not an older committed offset
Benefits:
- Dramatically reduces duplicate messages during rebalancing
- Improves at-least-once semantics (closer to exactly-once for normal cases)
- Better performance (less redundant processing)
- Cleaner test results (expected duplicates only from actual failures)
Kafka Rebalancing Lifecycle:
1. Rebalance triggered (consumer join/leave, timeout, etc.)
2. All ConsumeClaim goroutines cancelled
3. Cleanup() called ← WE COMMIT HERE NOW
4. Partitions reassigned to other consumers
5. New consumer starts from last committed offset ← NOW MORE UP-TO-DATE
Expected Results:
- Before: ~100-200% duplicates during rebalancing (2-3x reads)
- After: <10% duplicates (only from uncommitted in-flight messages)
This is a critical fix for production deployments where consumer churn
(scaling, restarts, failures) causes frequent rebalancing.
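The MarkMessage/Commit/Cleanup vocabulary here is Sarama's consumer-group API; assuming that client, the pattern looks roughly like this (import path varies by Sarama version):

    package consumer

    import "github.com/IBM/sarama"

    type groupHandler struct{}

    func (h *groupHandler) Setup(sarama.ConsumerGroupSession) error { return nil }

    // Cleanup runs after all ConsumeClaim goroutines exit and before
    // partitions are reassigned, so committing here hands the next owner
    // the freshest offsets.
    func (h *groupHandler) Cleanup(session sarama.ConsumerGroupSession) error {
        session.Commit()
        return nil
    }

    func (h *groupHandler) ConsumeClaim(session sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
        for msg := range claim.Messages() {
            // ... process msg ...
            session.MarkMessage(msg, "") // marked now, committed in Cleanup
        }
        return nil
    }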
* fmt
* feat: automatic idle partition cleanup to prevent memory bloat
Implements automatic cleanup of topic partitions with no active publishers
or subscribers to prevent memory accumulation from short-lived topics.
**Key Features:**
1. Activity Tracking (local_partition.go)
- Added lastActivityTime field to LocalPartition
- UpdateActivity() called on publish, subscribe, and message reads
- IsIdle() checks if partition has no publishers/subscribers
- GetIdleDuration() returns time since last activity
- ShouldCleanup() determines whether a partition is eligible for cleanup
2. Cleanup Task (local_manager.go)
- Background goroutine runs every 1 minute (configurable)
- Removes partitions idle for > 5 minutes (configurable)
- Automatically removes empty topics after all partitions cleaned
- Proper shutdown handling with WaitForCleanupShutdown()
3. Broker Integration (broker_server.go)
- StartIdlePartitionCleanup() called on broker startup
- Default: check every 1 minute, cleanup after 5 minutes idle
- Transparent operation with sensible defaults
**Cleanup Process:**
- Checks: partition.Publishers.Size() == 0 && partition.Subscribers.Size() == 0
- Calls partition.Shutdown() to:
- Flush all data to disk (no data loss)
- Stop 3 goroutines (loopFlush, loopInterval, cleanupLoop)
- Free in-memory buffers (~100KB-10MB per partition)
- Close LogBuffer resources
- Removes partition from LocalTopic.Partitions
- Removes topic if no partitions remain
**Benefits:**
- Prevents memory bloat from short-lived topics
- Reduces goroutine count (3 per partition cleaned)
- Zero configuration required
- Data remains on disk, can be recreated on demand
- No impact on active partitions
**Example Logs:**
I Started idle partition cleanup task (check: 1m, timeout: 5m)
I Cleaning up idle partition topic-0 (idle for 5m12s, publishers=0, subscribers=0)
I Cleaned up 2 idle partition(s)
**Memory Freed per Partition:**
- In-memory message buffer: ~100KB-10MB
- Disk buffer cache
- 3 goroutines
- Publisher/subscriber tracking maps
- Condition variables and mutexes
**Related Issue:**
Prevents memory accumulation in systems with high topic churn or
many short-lived consumer groups, improving long-term stability
and resource efficiency.
**Testing:**
- Compiles cleanly
- No linting errors
- Ready for integration testing
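A sketch of the cleanup loop under assumed names (the real hooks are StartIdlePartitionCleanup, ShouldCleanup, and partition.Shutdown per this sub-commit):

    package topic

    import "time"

    // LocalPartition stand-in with just the state this sketch needs.
    type LocalPartition struct {
        lastActivityTime time.Time
        publishers       int // Publishers.Size() in the real code
        subscribers      int // Subscribers.Size() in the real code
    }

    func (p *LocalPartition) ShouldCleanup(idleTimeout time.Duration) bool {
        return p.publishers == 0 && p.subscribers == 0 &&
            time.Since(p.lastActivityTime) > idleTimeout
    }

    // StartIdlePartitionCleanup sweeps idle partitions on a timer.
    func StartIdlePartitionCleanup(list func() []*LocalPartition, shutdown <-chan struct{},
        checkEvery, idleTimeout time.Duration, cleanup func(*LocalPartition)) {
        go func() {
            ticker := time.NewTicker(checkEvery)
            defer ticker.Stop()
            for {
                select {
                case <-shutdown:
                    return
                case <-ticker.C:
                    for _, p := range list() {
                        if p.ShouldCleanup(idleTimeout) {
                            // Shutdown() flushes to disk, stops the partition's
                            // goroutines, and frees its in-memory buffers.
                            cleanup(p)
                        }
                    }
                }
            }
        }()
    }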
* fmt
* refactor: reduce verbosity of debug log messages
Changed debug log messages with bracket prefixes from V(1)/V(2) to V(3)/V(4)
to reduce log noise in production. These messages were added during development
for detailed debugging and are still available with higher verbosity levels.
Changes:
- glog.V(2).Infof("[") -> glog.V(4).Infof("[") (~104 messages)
- glog.V(1).Infof("[") -> glog.V(3).Infof("[") (~30 messages)
Affected files:
- weed/mq/broker/broker_grpc_fetch.go
- weed/mq/broker/broker_grpc_sub_offset.go
- weed/mq/kafka/integration/broker_client_fetch.go
- weed/mq/kafka/integration/broker_client_subscribe.go
- weed/mq/kafka/integration/seaweedmq_handler.go
- weed/mq/kafka/protocol/fetch.go
- weed/mq/kafka/protocol/fetch_partition_reader.go
- weed/mq/kafka/protocol/handler.go
- weed/mq/kafka/protocol/offset_management.go
Benefits:
- Cleaner logs in production (default -v=0)
- Still available for deep debugging with -v=3 or -v=4
- No code behavior changes, only log verbosity
- Safer than deletion - messages preserved for debugging
Usage:
- Default (-v=0): Only errors and important events
- -v=1: Standard info messages
- -v=2: Detailed info messages
- -v=3: Debug messages (previously V(1) with brackets)
- -v=4: Verbose debug (previously V(2) with brackets)
* refactor: change remaining glog.Infof debug messages to V(3)
Changed remaining debug log messages with bracket prefixes from
glog.Infof() to glog.V(3).Infof() to prevent them from showing
in production logs by default.
Changes (8 messages across 3 files):
- glog.Infof("[") -> glog.V(3).Infof("[")
Files updated:
- weed/mq/broker/broker_grpc_fetch.go (4 messages)
- [FetchMessage] CALLED! debug marker
- [FetchMessage] request details
- [FetchMessage] LogBuffer read start
- [FetchMessage] LogBuffer read completion
- weed/mq/kafka/integration/broker_client_fetch.go (3 messages)
- [FETCH-STATELESS-CLIENT] received messages
- [FETCH-STATELESS-CLIENT] converted records (with data)
- [FETCH-STATELESS-CLIENT] converted records (empty)
- weed/mq/kafka/integration/broker_client_publish.go (1 message)
- [GATEWAY RECV] _schemas topic debug
Now ALL debug messages with bracket prefixes require -v=3 or higher:
- Default (-v=0): Clean production logs ✅
- -v=3: All debug messages visible
- -v=4: All verbose debug messages visible
Result: Production logs are now clean with default settings!
* remove _schemas debug
* less logs
* fix: critical bug causing 51% message loss in stateless reads
CRITICAL BUG FIX: ReadMessagesAtOffset was returning an error instead of
attempting disk I/O when data was flushed from memory, causing massive
message loss (6254 out of 12192 messages = 51% loss).
Problem:
In log_read_stateless.go lines 120-131, when data was flushed to disk
(empty previous buffer), the code returned an 'offset out of range' error
instead of attempting disk I/O. This caused consumers to skip over flushed
data entirely, leading to catastrophic message loss.
The bug occurred when:
1. Data was written to LogBuffer
2. Data was flushed to disk due to buffer rotation
3. Consumer requested that offset range
4. Code found offset in expected range but not in memory
5. ❌ Returned error instead of reading from disk
Root Cause:
Lines 126-131 had early return with error when previous buffer was empty:
// Data not in memory - for stateless fetch, we don't do disk I/O
return messages, startOffset, highWaterMark, false,
fmt.Errorf("offset %d out of range...")
This comment was incorrect - we DO need disk I/O for flushed data!
Fix:
1. Lines 120-132: Changed to fall through to disk read logic instead of
returning error when previous buffer is empty
2. Lines 137-177: Enhanced disk read logic to handle TWO cases:
- Historical data (offset < bufferStartOffset)
- Flushed data (offset >= bufferStartOffset but not in memory)
Changes:
- Line 121: Log "attempting disk read" instead of breaking
- Line 130-132: Fall through to disk read instead of returning error
- Line 141: Changed condition from 'if startOffset < bufferStartOffset'
to 'if startOffset < currentBufferEnd' to handle both cases
- Lines 143-149: Add context-aware logging for both historical and flushed data
- Lines 154-159: Add context-aware error messages
Expected Results:
- Before: 51% message loss (6254/12192 missing)
- After: <1% message loss (only from rebalancing, which we already fixed)
- Duplicates: Should remain ~47% (from rebalancing, expected until offsets committed)
Testing:
- ✅ Compiles successfully
- Ready for integration testing with standard-test
Related Issues:
- This explains the massive data loss in recent load tests
- Disk I/O fallback was implemented but not reachable due to early return
- Disk chunk cache is working but was never being used for flushed data
Priority: CRITICAL - Fixes production-breaking data loss bug
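The corrected dispatch, sketched with stand-in fields named after the commit's variables:

    package logbuffer

    import "fmt"

    type readState struct {
        bufferStartOffset int64 // earliest offset still in memory
        currentBufferEnd  int64 // next offset to be assigned
        readFromDiskFn    func(offset int64) error
    }

    // Before the fix only offset < bufferStartOffset reached this path;
    // flushed-but-recent offsets hit an early "out of range" error.
    func (s *readState) dispatchDiskRead(startOffset int64) error {
        if startOffset < s.currentBufferEnd && s.readFromDiskFn != nil {
            // Covers both historical data (offset < bufferStartOffset) and
            // flushed data (offset >= bufferStartOffset but not in memory).
            return s.readFromDiskFn(startOffset)
        }
        return fmt.Errorf("offset %d out of range", startOffset)
    }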
* perf: add topic configuration cache to fix 60% CPU overhead
CRITICAL PERFORMANCE FIX: Added topic configuration caching to eliminate
massive CPU overhead from repeated filer reads and JSON unmarshaling on
EVERY fetch request.
Problem (from CPU profile):
- ReadTopicConfFromFiler: 42.45% CPU (5.76s out of 13.57s)
- protojson.Unmarshal: 25.64% CPU (3.48s)
- GetOrGenerateLocalPartition called on EVERY FetchMessage request
- No caching - reading from filer and unmarshaling JSON every time
- This caused filer, gateway, and broker to be extremely busy
Root Cause:
GetOrGenerateLocalPartition() is called on every FetchMessage request and
was calling ReadTopicConfFromFiler() without any caching. Each call:
1. Makes gRPC call to filer (expensive)
2. Reads JSON from disk (expensive)
3. Unmarshals protobuf JSON (25% of CPU!)
The disk I/O fix (previous commit) made this worse by enabling more reads,
exposing this performance bottleneck.
Solution:
Added topicConfCache similar to existing topicExistsCache:
Changes to broker_server.go:
- Added topicConfCacheEntry struct
- Added topicConfCache map to MessageQueueBroker
- Added topicConfCacheMu RWMutex for thread safety
- Added topicConfCacheTTL (30 seconds)
- Initialize cache in NewMessageBroker()
Changes to broker_topic_conf_read_write.go:
- Modified GetOrGenerateLocalPartition() to check cache first
- Cache HIT: Return cached config immediately (V(4) log)
- Cache MISS: Read from filer, cache result, proceed
- Added invalidateTopicConfCache() for cache invalidation
- Added import "time" for cache TTL
Cache Strategy:
- TTL: 30 seconds (matches topicExistsCache)
- Thread-safe with RWMutex
- Cache key: topic.String() (e.g., "kafka.loadtest-topic-0")
- Invalidation: Call invalidateTopicConfCache() when config changes
Expected Results:
- Before: 60% CPU on filer reads + JSON unmarshaling
- After: <1% CPU (only on cache miss every 30s)
- Filer load: Reduced by ~99% (from every fetch to once per 30s)
- Gateway CPU: Dramatically reduced
- Broker CPU: Dramatically reduced
- Throughput: Should increase significantly
Performance Impact:
With 50 msgs/sec per topic × 5 topics = 250 fetches/sec:
- Before: 250 filer reads/sec (25000% overhead!)
- After: 0.17 filer reads/sec (5 topics / 30s TTL)
- Reduction: 99.93% fewer filer calls
Testing:
- ✅ Compiles successfully
- Ready for load test to verify CPU reduction
Priority: CRITICAL - Fixes production-breaking performance issue
Related: Works with previous commit (disk I/O fix) to enable correct and fast reads
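The cache shape, sketched with stand-in types (the real entry holds a *ConfigureTopicResponse and lives on MessageQueueBroker):

    package broker

    import (
        "sync"
        "time"
    )

    // ConfigureTopicResponse stands in for the real protobuf message.
    type ConfigureTopicResponse struct{}

    type topicConfCacheEntry struct {
        conf      *ConfigureTopicResponse
        expiresAt time.Time
    }

    type confCache struct {
        mu  sync.RWMutex
        m   map[string]*topicConfCacheEntry
        ttl time.Duration // 30s per the commit
    }

    func (c *confCache) get(key string) (*ConfigureTopicResponse, bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        e, ok := c.m[key]
        if !ok || time.Now().After(e.expiresAt) {
            return nil, false // MISS or expired: caller reads the filer once
        }
        return e.conf, true // HIT: no filer gRPC, no protojson.Unmarshal
    }

    func (c *confCache) put(key string, conf *ConfigureTopicResponse) {
        c.mu.Lock()
        defer c.mu.Unlock()
        c.m[key] = &topicConfCacheEntry{conf: conf, expiresAt: time.Now().Add(c.ttl)}
    }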
* fmt
* refactor: merge topicExistsCache and topicConfCache into unified topicCache
Merged two separate caches into one unified cache to simplify code and
reduce memory usage. The unified cache stores both topic existence and
configuration in a single structure.
Design:
- Single topicCacheEntry with optional *ConfigureTopicResponse
- If conf != nil: topic exists with full configuration
- If conf == nil: topic doesn't exist (negative cache)
- Same 30-second TTL for both existence and config caching
Changes to broker_server.go:
- Removed topicExistsCacheEntry struct
- Removed topicConfCacheEntry struct
- Added unified topicCacheEntry struct (conf can be nil)
- Removed topicExistsCache, topicExistsCacheMu, topicExistsCacheTTL
- Removed topicConfCache, topicConfCacheMu, topicConfCacheTTL
- Added unified topicCache, topicCacheMu, topicCacheTTL
- Updated NewMessageBroker() to initialize single cache
Changes to broker_topic_conf_read_write.go:
- Modified GetOrGenerateLocalPartition() to use unified cache
- Added negative caching (conf=nil) when topic not found
- Renamed invalidateTopicConfCache() to invalidateTopicCache()
- Single cache lookup instead of two separate checks
Changes to broker_grpc_lookup.go:
- Modified TopicExists() to use unified cache
- Check: exists = (entry.conf != nil)
- Only cache negative results (conf=nil) in TopicExists
- Positive results cached by GetOrGenerateLocalPartition
- Removed old invalidateTopicExistsCache() function
Changes to broker_grpc_configure.go:
- Updated invalidateTopicExistsCache() calls to invalidateTopicCache()
- Two call sites updated
Benefits:
1. Code Simplification: One cache instead of two
2. Memory Reduction: Single map, single mutex, single TTL
3. Consistency: No risk of cache desync between existence and config
4. Less Lock Contention: One lock instead of two
5. Easier Maintenance: Single invalidation function
6. Same Performance: Still eliminates 60% CPU overhead
Cache Behavior:
- TopicExists: Lightweight check, only caches negative (conf=nil)
- GetOrGenerateLocalPartition: Full config read, caches positive (conf != nil)
- Both share same 30s TTL
- Both use same invalidation on topic create/update/delete
Testing:
- ✅ Compiles successfully
- Ready for integration testing
This refactor maintains all performance benefits while simplifying
the codebase and reducing memory footprint.
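Continuing the cache sketch above, the unified design's negative caching: a nil conf is a cached "does not exist", so lookups of missing topics also skip the filer.

    // Extends the confCache sketch above; conf == nil records non-existence.
    func (c *confCache) topicExists(key string) (exists, cached bool) {
        c.mu.RLock()
        defer c.mu.RUnlock()
        e, ok := c.m[key]
        if !ok || time.Now().After(e.expiresAt) {
            return false, false // not cached: caller falls back to a filer lookup
        }
        return e.conf != nil, true // cached answer, positive or negative
    }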
* fix: add cache to LookupTopicBrokers to eliminate 26% CPU overhead
CRITICAL: LookupTopicBrokers was bypassing cache, causing 26% CPU overhead!
Problem (from CPU profile):
- LookupTopicBrokers: 35.74% CPU (9s out of 25.18s)
- ReadTopicConfFromFiler: 26.41% CPU (6.65s)
- protojson.Unmarshal: 16.64% CPU (4.19s)
- LookupTopicBrokers called b.fca.ReadTopicConfFromFiler() directly on line 35
- Completely bypassed our unified topicCache!
Root Cause:
LookupTopicBrokers is called VERY frequently by clients (every fetch request
needs to know partition assignments). It was calling ReadTopicConfFromFiler
directly instead of using the cache, causing:
1. Expensive gRPC calls to filer on every lookup
2. Expensive JSON unmarshaling on every lookup
3. 26%+ CPU overhead on hot path
4. Our cache optimization was useless for this critical path
Solution:
Created getTopicConfFromCache() helper and updated all callers:
Changes to broker_topic_conf_read_write.go:
- Added getTopicConfFromCache() - public API for cached topic config reads
- Implements same caching logic: check cache -> read filer -> cache result
- Handles both positive (conf != nil) and negative (conf == nil) caching
- Refactored GetOrGenerateLocalPartition() to use new helper (code dedup)
- Now only 14 lines instead of 60 lines (removed duplication)
Changes to broker_grpc_lookup.go:
- Modified LookupTopicBrokers() to call getTopicConfFromCache()
- Changed from: b.fca.ReadTopicConfFromFiler(t) (no cache)
- Changed to: b.getTopicConfFromCache(t) (with cache)
- Added comment explaining this fixes 26% CPU overhead
Cache Strategy:
- First call: Cache MISS -> read filer + unmarshal JSON -> cache for 30s
- Next 1000+ calls in 30s: Cache HIT -> return cached config immediately
- No filer gRPC, no JSON unmarshaling, near-zero CPU
- Cache invalidated on topic create/update/delete
Expected CPU Reduction:
- Before: 26.41% on ReadTopicConfFromFiler + 16.64% on JSON unmarshal = 43% CPU
- After: <0.1% (only on cache miss every 30s)
- Expected total broker CPU: 25.18s -> ~8s (67% reduction!)
Performance Impact (with 250 lookups/sec):
- Before: 250 filer reads/sec + 250 JSON unmarshals/sec
- After: 0.17 filer reads/sec (5 topics / 30s TTL)
- Reduction: 99.93% fewer expensive operations
Code Quality:
- Eliminated code duplication (60 lines -> 14 lines in GetOrGenerateLocalPartition)
- Single source of truth for cached reads (getTopicConfFromCache)
- Clear API: "Always use getTopicConfFromCache, never ReadTopicConfFromFiler directly"
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement
Priority: CRITICAL - Completes the cache optimization to achieve full performance fix
* perf: optimize broker assignment validation to eliminate 14% CPU overhead
CRITICAL: Assignment validation was running on EVERY LookupTopicBrokers call!
Problem (from CPU profile):
- ensureTopicActiveAssignments: 14.18% CPU (2.56s out of 18.05s)
- EnsureAssignmentsToActiveBrokers: 14.18% CPU (2.56s)
- ConcurrentMap.IterBuffered: 12.85% CPU (2.32s) - iterating all brokers
- Called on EVERY LookupTopicBrokers request, even with cached config!
Root Cause:
LookupTopicBrokers flow was:
1. getTopicConfFromCache() - returns cached config (fast ✅)
2. ensureTopicActiveAssignments() - validates assignments (slow ❌)
Even though config was cached, we still validated assignments every time,
iterating through ALL active brokers on every single request. With 250
requests/sec, this meant 250 full broker iterations per second!
Solution:
Move assignment validation inside getTopicConfFromCache() and only run it
on cache misses:
Changes to broker_topic_conf_read_write.go:
- Modified getTopicConfFromCache() to validate assignments after filer read
- Validation only runs on cache miss (not on cache hit)
- If hasChanges: Save to filer immediately, invalidate cache, return
- If no changes: Cache config with validated assignments
- Added ensureTopicActiveAssignmentsUnsafe() helper (returns bool)
- Kept ensureTopicActiveAssignments() for other callers (saves to filer)
Changes to broker_grpc_lookup.go:
- Removed ensureTopicActiveAssignments() call from LookupTopicBrokers
- Assignment validation now implicit in getTopicConfFromCache()
- Added comments explaining the optimization
Cache Behavior:
- Cache HIT: Return config immediately, skip validation (saves 14% CPU!)
- Cache MISS: Read filer -> validate assignments -> cache result
- If broker changes detected: Save to filer, invalidate cache, return
- Next request will re-read and re-validate (ensures consistency)
Performance Impact:
With 30-second cache TTL and 250 lookups/sec:
- Before: 250 validations/sec × 10ms each = 2.5s CPU/sec (14% overhead)
- After: 0.17 validations/sec (only on cache miss)
- Reduction: 99.93% fewer validations
Expected CPU Reduction:
- Before (with cache): 18.05s total, 2.56s validation (14%)
- After (with optimization): ~15.5s total (-14% = ~2.5s saved)
- Combined with previous cache fix: 25.18s -> ~15.5s (38% total reduction)
Cache Consistency:
- Assignments validated when config first cached
- If broker membership changes, assignments updated and saved
- Cache invalidated to force fresh read
- All brokers eventually converge on correct assignments
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement
Priority: CRITICAL - Completes optimization of LookupTopicBrokers hot path
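Roughly the cache-miss flow after this change, as a sketch with the collaborators passed in as functions (saveToFiler is hypothetical; the validate/invalidate names follow the commit):

    package broker

    // Stand-in types; the real flow lives in getTopicConfFromCache in
    // broker_topic_conf_read_write.go.
    type Topic string
    type Conf struct{}

    // getTopicConfOnCacheMiss shows the miss path only: read from the filer,
    // validate assignments once, and cache the validated result.
    func getTopicConfOnCacheMiss(
        t Topic,
        readFromFiler func(Topic) (*Conf, error),
        validate func(Topic, *Conf) (hasChanges bool), // ensureTopicActiveAssignmentsUnsafe
        saveToFiler func(Topic, *Conf) error, // hypothetical persist helper
        cachePut func(Topic, *Conf),
        invalidate func(Topic), // invalidateTopicCache
    ) (*Conf, error) {
        conf, err := readFromFiler(t)
        if err != nil {
            cachePut(t, nil) // negative cache: topic not found
            return nil, err
        }
        if validate(t, conf) {
            // Broker membership changed: persist the fix, then invalidate so
            // the next request re-reads and re-validates.
            if err := saveToFiler(t, conf); err != nil {
                return nil, err
            }
            invalidate(t)
            return conf, nil
        }
        cachePut(t, conf) // cache config with validated assignments
        return conf, nil
    }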
* fmt
* perf: add partition assignment cache in gateway to eliminate 13.5% CPU overhead
CRITICAL: Gateway calling LookupTopicBrokers on EVERY fetch to translate
Kafka partition IDs to SeaweedFS partition ranges!
Problem (from CPU profile):
- getActualPartitionAssignment: 13.52% CPU (1.71s out of 12.65s)
- Called bc.client.LookupTopicBrokers on line 228 for EVERY fetch
- With 250 fetches/sec, this means 250 LookupTopicBrokers calls/sec!
- No caching at all - same overhead as broker had before optimization
Root Cause:
Gateway needs to translate Kafka partition IDs (0, 1, 2...) to SeaweedFS
partition ranges (0-341, 342-682, etc.) for every fetch request. This
translation requires calling LookupTopicBrokers to get partition assignments.
Without caching, every fetch request triggered:
1. gRPC call to broker (LookupTopicBrokers)
2. Broker reads from its cache (fast now after broker optimization)
3. gRPC response back to gateway
4. Gateway computes partition range mapping
The gRPC round-trip overhead was consuming 13.5% CPU even though broker
cache was fast!
Solution:
Added partitionAssignmentCache to BrokerClient:
Changes to types.go:
- Added partitionAssignmentCacheEntry struct (assignments + expiresAt)
- Added cache fields to BrokerClient:
* partitionAssignmentCache map[string]*partitionAssignmentCacheEntry
* partitionAssignmentCacheMu sync.RWMutex
* partitionAssignmentCacheTTL time.Duration
Changes to broker_client.go:
- Initialize partitionAssignmentCache in NewBrokerClientWithFilerAccessor
- Set partitionAssignmentCacheTTL to 30 seconds (same as broker)
Changes to broker_client_publish.go:
- Added "time" import
- Modified getActualPartitionAssignment() to check cache first:
* Cache HIT: Use cached assignments (fast ✅)
* Cache MISS: Call LookupTopicBrokers, cache result for 30s
- Extracted findPartitionInAssignments() helper function
* Contains range calculation and partition matching logic
* Reused for both cached and fresh lookups
Cache Behavior:
- First fetch: Cache MISS -> LookupTopicBrokers (~2ms) -> cache for 30s
- Next 7500 fetches in 30s: Cache HIT -> immediate return (~0.01ms)
- Cache automatically expires after 30s, re-validates on next fetch
Performance Impact:
With 250 fetches/sec and 5 topics:
- Before: 250 LookupTopicBrokers/sec = 500ms CPU overhead
- After: 0.17 LookupTopicBrokers/sec (5 topics / 30s TTL)
- Reduction: 99.93% fewer gRPC calls
Expected CPU Reduction:
- Before: 12.65s total, 1.71s in getActualPartitionAssignment (13.5%)
- After: ~11s total (-13.5% = 1.65s saved)
- Benefit: 13% lower CPU, more capacity for actual message processing
Cache Consistency:
- Same 30-second TTL as broker's topic config cache
- Partition assignments rarely change (only on topic reconfiguration)
- 30-second staleness is acceptable for partition mapping
- Gateway will eventually converge with broker's view
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement
Priority: CRITICAL - Eliminates major performance bottleneck in gateway fetch path
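A sketch of the extracted matching helper; mapping Kafka partition i to the i-th range is one plausible rule, assuming assignments arrive ordered by range, and the real logic in broker_client_publish.go may differ:

    package gateway

    import "fmt"

    // Assignment stands in for a broker partition assignment entry.
    type Assignment struct {
        RangeStart, RangeStop int32
        LeaderBroker          string
    }

    // findPartitionInAssignments: with the assignments cached, translating
    // a Kafka partition ID to a SeaweedFS range needs no gRPC at all.
    func findPartitionInAssignments(assignments []Assignment, kafkaPartitionID int32) (Assignment, error) {
        if kafkaPartitionID < 0 || int(kafkaPartitionID) >= len(assignments) {
            return Assignment{}, fmt.Errorf("partition %d not covered by %d assignments", kafkaPartitionID, len(assignments))
        }
        return assignments[kafkaPartitionID], nil
    }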
* perf: add RecordType inference cache to eliminate 37% gateway CPU overhead
CRITICAL: Gateway was creating Avro codecs and inferring RecordTypes on
EVERY fetch request for schematized topics!
Problem (from CPU profile):
- NewCodec (Avro): 17.39% CPU (2.35s out of 13.51s)
- inferRecordTypeFromAvroSchema: 20.13% CPU (2.72s)
- Total schema overhead: 37.52% CPU
- Called during EVERY fetch to check if topic is schematized
- No caching - recreating expensive goavro.Codec objects repeatedly
Root Cause:
In the fetch path, isSchematizedTopic() -> matchesSchemaRegistryConvention()
-> ensureTopicSchemaFromRegistryCache() -> inferRecordTypeFromCachedSchema()
-> inferRecordTypeFromAvroSchema() was being called.
The inferRecordTypeFromAvroSchema() function created a NEW Avro decoder
(which internally calls goavro.NewCodec()) on every call, even though:
1. The schema.Manager already has a decoder cache by schema ID
2. The same schemas are used repeatedly for the same topics
3. goavro.NewCodec() is expensive (parses JSON, builds schema tree)
This was wasteful because:
- Same schema string processed repeatedly
- No reuse of inferred RecordType structures
- Creating codecs just to infer types, then discarding them
Solution:
Added inferredRecordTypes cache to Handler:
Changes to handler.go:
- Added inferredRecordTypes map[string]*schema_pb.RecordType to Handler
- Added inferredRecordTypesMu sync.RWMutex for thread safety
- Initialize cache in NewTestHandlerWithMock() and NewSeaweedMQBrokerHandlerWithDefaults()
Changes to produce.go:
- Added glog import
- Modified inferRecordTypeFromAvroSchema():
* Check cache first (key: schema string)
* Cache HIT: Return immediately (V(4) log)
* Cache MISS: Create decoder, infer type, cache result
- Modified inferRecordTypeFromProtobufSchema():
* Same caching strategy (key: "protobuf:" + schema)
- Modified inferRecordTypeFromJSONSchema():
* Same caching strategy (key: "json:" + schema)
Cache Strategy:
- Key: Full schema string (unique per schema content)
- Value: Inferred *schema_pb.RecordType
- Thread-safe with RWMutex (optimized for reads)
- No TTL - schemas don't change for a topic
- Memory efficient - RecordType is small compared to codec
Performance Impact:
With 250 fetches/sec across 5 topics (1-3 schemas per topic):
- Before: 250 codec creations/sec + 250 inferences/sec = ~5s CPU
- After: 3-5 codec creations total (one per schema) = ~0.05s CPU
- Reduction: 99% fewer expensive operations
Expected CPU Reduction:
- Before: 13.51s total, 5.07s schema operations (37.5%)
- After: ~8.5s total (-37.5% = 5s saved)
- Benefit: 37% lower gateway CPU, more capacity for message processing
Cache Consistency:
- Schemas are immutable once registered in Schema Registry
- If schema changes, schema ID changes, so safe to cache indefinitely
- New schemas automatically cached on first use
- No need for invalidation or TTL
Additional Optimizations:
- Protobuf and JSON Schema also cached (same pattern)
- Prevents future bottlenecks as more schema formats are used
- Consistent caching approach across all schema types
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement under load
Priority: HIGH - Eliminates major performance bottleneck in gateway schema path
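The schema-keyed memoization, sketched with stand-in types (inferFresh is a hypothetical stand-in for the expensive decoder-based inference):

    package protocol

    import "sync"

    // RecordType stands in for schema_pb.RecordType.
    type RecordType struct{}

    type Handler struct {
        inferredRecordTypesMu sync.RWMutex
        inferredRecordTypes   map[string]*RecordType
    }

    // Memoize by full schema string. Registered schemas are immutable (a
    // changed schema gets a new ID), so entries never need a TTL.
    func (h *Handler) inferRecordType(schemaStr string, inferFresh func(string) (*RecordType, error)) (*RecordType, error) {
        h.inferredRecordTypesMu.RLock()
        rt, ok := h.inferredRecordTypes[schemaStr]
        h.inferredRecordTypesMu.RUnlock()
        if ok {
            return rt, nil // HIT: no goavro.NewCodec, no schema JSON parse
        }
        rt, err := inferFresh(schemaStr) // MISS: once per distinct schema
        if err != nil {
            return nil, err
        }
        h.inferredRecordTypesMu.Lock()
        h.inferredRecordTypes[schemaStr] = rt
        h.inferredRecordTypesMu.Unlock()
        return rt, nil
    }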
* fmt
* fix Node ID mismatch and clean up log messages
* clean up
* Apply client-specified timeout to context
* Add comprehensive debug logging for Noop record processing
- Track Produce v2+ request reception with API version and request body size
- Log acks setting, timeout, and topic/partition information
- Log record count from parseRecordSet and any parse errors
- **CRITICAL**: Log when recordCount=0 fallback extraction attempts
- Log record extraction with NULL value detection (Noop records)
- Log record key in hex for Noop key identification
- Track each record being published to broker
- Log offset assigned by broker for each record
- Log final response with offset and error code
This enables root cause analysis of the Schema Registry Noop record timeout issue.
* fix: Remove context timeout propagation from produce that breaks consumer init
Commit