Branch:
master
add-ec-vacuum
add-foundation-db
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
fasthttp
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-versioning-listing-only
ftp
gh-pages
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
original_weed_mount
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-select
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
volume_buffered_writes
worker-execute-ec-tasks
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
${ noResults }
11963 Commits (master)
| Author | SHA1 | Message | Date |
|---|---|---|---|
|
|
824dcac3bf
|
s3: combine all signature verification checks into a single function (#7330)
|
20 hours ago |
|
|
6a8c53bc44
|
Filer: batch deletion operations to return individual error results (#7382)
* batch deletion operations to return individual error results Modify batch deletion operations to return individual error results instead of one aggregated error, enabling better tracking of which specific files failed to delete (helping reduce orphan file issues). * Simplified logging logic * Optimized nested loop * handles the edge case where the RPC succeeds but connection cleanup fails * simplify * simplify * ignore 'not found' errors here |
21 hours ago |
|
|
37af41fbfe
|
Shell: Added a helper function `isHelpRequest()` (#7380)
* Added a helper function `isHelpRequest()` * also handles combined short flags like -lh or -hl * Created handleHelpRequest() helper function encapsulates both: Checking for help flags Printing the help message * Limit to reasonable length (2-4 chars total) to avoid matching long options like -verbose |
1 day ago |
|
|
922bb17194
|
Store full shell command in shell history (#7378)
Store shell command in history before parsing Store the shell command in history before parsing it. This will allow users to press the 'Up' arrow and see the entire command. |
1 day ago |
|
|
fa35efc076
|
Volume Server: Unexpected Deletion of Remote Tier Data (#7377)
* [Admin UI] Login not possible due to securecookie error * avoid 404 favicon * Update weed/admin/dash/auth_middleware.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * address comments * avoid variable over shadowing * log session save error * When jwt.signing.read.key is enabled in security.toml, the volume server requires JWT tokens for all read operations. * reuse fileId * refactor * fix deleting remote tier * simplify the fix * Update weed/storage/volume_loading.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update weed/storage/volume_loading.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update weed/storage/volume_loading.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * fix --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> |
1 day ago |
|
|
263e891da0
|
Clients to volume server requires JWT tokens for all read operations (#7376)
* [Admin UI] Login not possible due to securecookie error * avoid 404 favicon * Update weed/admin/dash/auth_middleware.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * address comments * avoid variable over shadowing * log session save error * When jwt.signing.read.key is enabled in security.toml, the volume server requires JWT tokens for all read operations. * reuse fileId * refactor --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
1 day ago |
|
|
9f4075441c
|
[Admin UI] Login not possible due to securecookie error (#7374)
* [Admin UI] Login not possible due to securecookie error * avoid 404 favicon * Update weed/admin/dash/auth_middleware.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * address comments * avoid variable over shadowing * log session save error --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
1 day ago |
|
|
bf58c5a688
|
fix: Use a mixed of virtual and path styles within a single subdomain (#7357)
* fix: Use a mixed of virtual and path styles within a single subdomain * address comments * add tests --------- Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |
2 days ago |
|
|
1d0471aebb
|
Improve admin urls (#7370)
* Improve Master and Volume URLs in admin dashboard - Add clickable URL for master node. - Refactor Volume server URL to use PublicURL if set. 'address' is used as fallback. * Make volume servers show in consistent order - Sort servers by name to ensure predictable order after each refresh. * address comment --------- Co-authored-by: chrislu <chris.lu@gmail.com> |
2 days ago |
|
|
7d147f238c
|
avoid repeated reading disk (#7369)
* avoid repeated reading disk * checks both flush time AND read position advancement * wait on cond * fix reading Gap detection and skipping to earliest memory time Time-based reads that include events at boundary times for first reads (offset ≤ 0) Aggregated subscriber wake-up via ListenersWaits signaling * address comments |
2 days ago |
|
|
d220875ef4 |
Revert "fix reading"
This reverts commit
|
2 days ago |
|
|
64a4ce9358 |
fix reading
Gap detection and skipping to earliest memory time Time-based reads that include events at boundary times for first reads (offset ≤ 0) Aggregated subscriber wake-up via ListenersWaits signaling |
2 days ago |
|
|
832df5265f
|
Fix 'NaN%' issue when running volume.fsck (#7368)
* Fix 'NaN%' issue when running volume.fsck - Running `volume.fsck` on an empty cluster will display 'NaN%'. * Refactor - Extract cound of orphan chunks in summary to new var. - Restore handling for 'NaN' for individual volumes. Its not necessary because the check is already done. * Make code more idiomatic |
2 days ago |
|
|
0abf70061b
|
S3 API: Fix SSE-S3 decryption on object download (#7366)
* S3 API: Fix SSE-S3 decryption on object download Fixes #7363 This commit adds missing SSE-S3 decryption support when downloading objects from SSE-S3 encrypted buckets. Previously, SSE-S3 encrypted objects were returned in their encrypted form, causing data corruption and hash mismatches. Changes: - Updated detectPrimarySSEType() to detect SSE-S3 encrypted objects by examining chunk metadata and distinguishing SSE-S3 from SSE-KMS - Added SSE-S3 handling in handleSSEResponse() to route to new handler - Implemented handleSSES3Response() for both single-part and multipart SSE-S3 encrypted objects with proper decryption - Implemented createMultipartSSES3DecryptedReader() for multipart objects with per-chunk decryption using stored IVs - Updated addSSEHeadersToResponse() to include SSE-S3 response headers The fix follows the existing SSE-C and SSE-KMS patterns, using the envelope encryption architecture where each object's DEK is encrypted with the KEK stored in the filer. * Add comprehensive tests for SSE-S3 decryption - TestSSES3EncryptionDecryption: basic encryption/decryption - TestSSES3IsRequestInternal: request detection - TestSSES3MetadataSerialization: metadata serialization/deserialization - TestDetectPrimarySSETypeS3: SSE type detection for various scenarios - TestAddSSES3HeadersToResponse: response header validation - TestSSES3EncryptionWithBaseIV: multipart encryption with base IV - TestSSES3WrongKeyDecryption: wrong key error handling - TestSSES3KeyGeneration: key generation and uniqueness - TestSSES3VariousSizes: encryption/decryption with various data sizes - TestSSES3ResponseHeaders: response header correctness - TestSSES3IsEncryptedInternal: metadata-based encryption detection - TestSSES3InvalidMetadataDeserialization: error handling for invalid metadata - TestGetSSES3Headers: header generation - TestProcessSSES3Request: request processing - TestGetSSES3KeyFromMetadata: key extraction from metadata - TestSSES3EnvelopeEncryption: envelope encryption correctness - TestValidateSSES3Key: key validation All tests pass successfully, providing comprehensive coverage for the SSE-S3 decryption fix. * Address PR review comments 1. Fix resource leak in createMultipartSSES3DecryptedReader: - Wrap decrypted reader with closer to properly release resources - Ensure underlying chunkReader is closed when done 2. Handle mixed-encryption objects correctly: - Check chunk encryption type before attempting decryption - Pass through non-SSE-S3 chunks unmodified - Log encryption type for debugging 3. Improve SSE type detection logic: - Add explicit case for aws:kms algorithm - Handle unknown algorithms gracefully - Better documentation for tie-breaking precedence 4. Document tie-breaking behavior: - Clarify that mixed encryption indicates potential corruption - Explicit precedence order: SSE-C > SSE-KMS > SSE-S3 These changes address high-severity resource management issues and improve robustness when handling edge cases and mixed-encryption scenarios. * Fix IV retrieval for small/inline SSE-S3 encrypted files Critical bug fix: The previous implementation only looked for the IV in chunk metadata, which would fail for small files stored inline (without chunks). Changes: - Check object-level metadata (sseS3Key.IV) first for inline files - Fallback to first chunk metadata only if object-level IV not found - Improved error message to indicate both locations were checked This ensures small SSE-S3 encrypted files (stored inline in entry.Content) can be properly decrypted, as their IV is stored in the object-level SeaweedFSSSES3Key metadata rather than in chunk metadata. Fixes the high-severity issue identified in PR review. * Clean up unused SSE metadata helper functions Remove legacy SSE metadata helper functions that were never fully implemented or used: Removed unused functions: - StoreSSECMetadata() / GetSSECMetadata() - StoreSSEKMSMetadata() / GetSSEKMSMetadata() - StoreSSES3Metadata() / GetSSES3Metadata() - IsSSEEncrypted() - GetSSEAlgorithm() Removed unused constants: - MetaSSEAlgorithm - MetaSSECKeyMD5 - MetaSSEKMSKeyID - MetaSSEKMSEncryptedKey - MetaSSEKMSContext - MetaSSES3KeyID These functions were from an earlier design where IV and other metadata would be stored in common entry.Extended keys. The actual implementations use type-specific serialization: - SSE-C: Uses StoreIVInMetadata()/GetIVFromMetadata() directly for IV - SSE-KMS: Serializes entire SSEKMSKey structure as JSON (includes IV) - SSE-S3: Serializes entire SSES3Key structure as JSON (includes IV) This follows Option A: SSE-S3 uses envelope encryption pattern like SSE-KMS, where IV is stored within the serialized key metadata rather than in a separate metadata field. Kept functions still in use: - StoreIVInMetadata() - Used by SSE-C - GetIVFromMetadata() - Used by SSE-C and streaming copy - MetaSSEIV constant - Used by SSE-C All tests pass after cleanup. * Rename SSE metadata functions to clarify SSE-C specific usage Renamed functions and constants to explicitly indicate they are SSE-C specific, improving code clarity: Renamed: - MetaSSEIV → MetaSSECIV - StoreIVInMetadata() → StoreSSECIVInMetadata() - GetIVFromMetadata() → GetSSECIVFromMetadata() Updated all usages across: - s3api_key_rotation.go - s3api_streaming_copy.go - s3api_object_handlers_copy.go - s3_sse_copy_test.go - s3_sse_test_utils_test.go Rationale: These functions are exclusively used by SSE-C for storing/retrieving the IV in entry.Extended metadata. SSE-KMS and SSE-S3 use different approaches (IV stored in serialized key structures), so the generic names were misleading. The new names make it clear these are part of the SSE-C implementation. All tests pass. * Add integration tests for SSE-S3 end-to-end encryption/decryption These integration tests cover the complete encrypt->store->decrypt cycle that was missing from the original test suite. They would have caught the IV retrieval bug for inline files. Tests added: - TestSSES3EndToEndSmallFile: Tests inline files (10, 50, 256 bytes) * Specifically tests the critical IV retrieval path for inline files * This test explicitly checks the bug we fixed where inline files couldn't retrieve their IV from object-level metadata - TestSSES3EndToEndChunkedFile: Tests multipart encrypted files * Verifies per-chunk metadata serialization/deserialization * Tests that each chunk can be independently decrypted with its own IV - TestSSES3EndToEndWithDetectPrimaryType: Tests type detection * Verifies inline vs chunked SSE-S3 detection * Ensures SSE-S3 is distinguished from SSE-KMS Note: Full HTTP handler tests (PUT -> GET through actual handlers) would require a complete mock server with filer connections, which is complex. These tests focus on the critical decrypt path and data flow. Why these tests are important: - Unit tests alone don't catch integration issues - The IV retrieval bug existed because there was no end-to-end test - These tests simulate the actual storage/retrieval flow - They verify the complete encryption architecture works correctly All tests pass. * Fix TestValidateSSES3Key expectations to match actual implementation The ValidateSSES3Key function only validates that the key struct is not nil, but doesn't validate the Key field contents or size. The test was expecting validation that doesn't exist. Updated test cases: - Nil key struct → should error (correct) - Valid key → should not error (correct) - Invalid key size → should not error (validation doesn't check this) - Nil key bytes → should not error (validation doesn't check this) Added comments to clarify what the current validation actually checks. This matches the behavior of ValidateSSEKMSKey and ValidateSSECKey which also only check for nil struct, not field contents. All SSE tests now pass. * Improve ValidateSSES3Key to properly validate key contents Enhanced the validation function from only checking nil struct to comprehensive validation of all key fields: Validations added: 1. Key bytes not nil 2. Key size exactly 32 bytes (SSES3KeySize) 3. Algorithm must be "AES256" (SSES3Algorithm) 4. Key ID must not be empty 5. IV length must be 16 bytes if set (optional - set during encryption) Test improvements (10 test cases): - Nil key struct - Valid key without IV - Valid key with IV - Invalid key size (too small) - Invalid key size (too large) - Nil key bytes - Empty key ID - Invalid algorithm - Invalid IV length - Empty IV (allowed - set during encryption) This matches the robustness of SSE-C and SSE-KMS validation and will catch configuration errors early rather than failing during encryption/decryption. All SSE tests pass. * Replace custom string helper functions with strings.Contains Address Gemini Code Assist review feedback: - Remove custom contains() and findSubstring() helper functions - Use standard library strings.Contains() instead - Add strings import This makes the code more idiomatic and easier to maintain by using the standard library instead of reimplementing functionality. Changes: - Added "strings" to imports - Replaced contains(err.Error(), tc.errorMsg) with strings.Contains(err.Error(), tc.errorMsg) - Removed 15 lines of custom helper code All tests pass. * filer fix reading and writing SSE-S3 headers * filter out seaweedfs internal headers * Update weed/s3api/s3api_object_handlers.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update weed/s3api/s3_validation_utils.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update s3api_streaming_copy.go * remove fallback * remove redundant check * refactor * remove extra object fetching * in case object is not found * Correct Version Entry for SSE Routing * Proper Error Handling for SSE Entry Fetching * Eliminated All Redundant Lookups * Removed brittle “exactly 5 successes/failures” assertions. Added invariant checks total recorded attempts equals request count, successes never exceed capacity, failures cover remaining attempts, final AvailableSpace matches capacity - successes. * refactor * fix test * Fixed Broken Fallback Logic * refactor * Better Error for Encryption Type Mismatch * refactor --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> |
2 days ago |
|
|
f06ddd05cc
|
Improve-worker (#7367)
* ♻️ refactor(worker): remove goto * ♻️ refactor(worker): let manager loop exit by itself * ♻️ refactor(worker): fix race condition when closing worker CloseSend is not safe to call when another goroutine concurrently calls Send. streamCancel already handles proper stream closure. Also, streamExit signal should be called AFTER sending shutdownMsg Now the worker has no race condition if stopped during any moment (hopefully, tested with -race flag) * 🐛 fix(task_logger): deadlock in log closure * 🐛 fix(balance): fix balance task Removes the outdated "UnloadVolume" step as it is handled by "DeleteVolume". #7346 |
2 days ago |
|
|
557aa4ec09
|
fixing auto complete (#7365)
* fixing auto complete * Update weed/command/autocomplete.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Update weed/command/autocomplete.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
2 days ago |
|
|
cb9a662e20 |
Update volume_growth_reservation_test.go
|
3 days ago |
|
|
30e8133524 |
Update volume_growth_reservation_test.go
|
3 days ago |
|
|
fa025dc96f
|
♻️ refactor(worker): decouple state management using command-query pattern (#7354)
* ♻️ refactor(worker): decouple state management using command-query pattern This commit eliminates all uses of sync.Mutex across the `worker.go` and `client.go` components, changing how mutable state is accessed and modified. Single Owner Principle is now enforced. - Guarantees thread safety and prevents data races by ensuring that only one goroutine ever modifies or reads state. Impact: Improves application concurrency, reliability, and maintainability by isolating state concerns. * 🐛 fix(worker): fix race condition when closing The use of select/default is wrong for mandatory shutdown signals. * 🐛 fix(worker): do not get tickers in every iteration * 🐛 fix(worker): fix race condition when closing pt 2 refactor `handleOutgoing` to mirror the non-blocking logic of `handleIncoming` * address comments * To ensure stream errors are always processed, the send should be blocking. * avoid blocking the manager loop while waiting for tasks to complete --------- Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: Chris Lu <chris.lu@gmail.com> |
3 days ago |
|
|
f7bd75ef3b
|
S3: Avoid in-memory map concurrent writes in SSE-S3 key manager (#7358)
* Fix concurrent map writes in SSE-S3 key manager This commit fixes issue #7352 where parallel uploads to SSE-S3 enabled buckets were causing 'fatal error: concurrent map writes' crashes. The SSES3KeyManager struct had an unsynchronized map that was being accessed from multiple goroutines during concurrent PUT operations. Changes: - Added sync.RWMutex to SSES3KeyManager struct - Protected StoreKey() with write lock - Protected GetKey() with read lock - Updated GetOrCreateKey() with proper read/write locking pattern including double-check to prevent race conditions All existing SSE tests pass successfully. Fixes #7352 * Improve SSE-S3 key manager with envelope encryption Replace in-memory key storage with envelope encryption using a super key (KEK). Instead of storing DEKs in a map, the key manager now: - Uses a randomly generated 256-bit super key (KEK) - Encrypts each DEK with the super key using AES-GCM - Stores the encrypted DEK in object metadata - Decrypts the DEK on-demand when reading objects Benefits: - Eliminates unbounded memory growth from caching DEKs - Provides better security with authenticated encryption (AES-GCM) - Follows envelope encryption best practices (similar to AWS KMS) - No need for mutex-protected map lookups on reads - Each object's encrypted DEK is self-contained in its metadata This approach matches the design pattern used in the local KMS provider and is more suitable for production use. * Persist SSE-S3 KEK in filer for multi-server support Store the SSE-S3 super key (KEK) in the filer at /.seaweedfs/s3/kek instead of generating it per-server. This ensures: 1. **Multi-server consistency**: All S3 API servers use the same KEK 2. **Persistence across restarts**: KEK survives server restarts 3. **Centralized management**: KEK stored in filer, accessible to all servers 4. **Automatic initialization**: KEK is created on first startup if it doesn't exist The KEK is: - Stored as hex-encoded bytes in filer - Protected with file mode 0600 (read/write for owner only) - Located in /.seaweedfs/s3/ directory (mode 0700) - Loaded on S3 API server startup - Reused across all S3 API server instances This matches the architecture of centralized configuration in SeaweedFS and enables proper SSE-S3 support in multi-server deployments. * Change KEK storage location to /etc/s3/kek Move SSE-S3 KEK from /.seaweedfs/s3/kek to /etc/s3/kek for better organization and consistency with other SeaweedFS configuration files. The /etc directory is the standard location for configuration files in SeaweedFS. * use global sse-se key manager when copying * Update volume_growth_reservation_test.go * Rename KEK file to sse_kek for clarity Changed /etc/s3/kek to /etc/s3/sse_kek to make it clear this key is specifically for SSE-S3 encryption, not for other KMS purposes. This improves clarity and avoids potential confusion with the separate KMS provider system used for SSE-KMS. * Use constants for SSE-S3 KEK directory and file name Refactored to use named constants instead of string literals: - SSES3KEKDirectory = "/etc/s3" - SSES3KEKParentDir = "/etc" - SSES3KEKDirName = "s3" - SSES3KEKFileName = "sse_kek" This improves maintainability and makes it easier to change the storage location if needed in the future. * Address PR review: Improve error handling and robustness Addresses review comments from https://github.com/seaweedfs/seaweedfs/pull/7358#pullrequestreview-3367476264 Critical fixes: 1. Distinguish between 'not found' and other errors when loading KEK - Only generate new KEK if ErrNotFound - Fail fast on connectivity/permission errors to prevent data loss - Prevents creating new KEK that would make existing data undecryptable 2. Make SSE-S3 initialization failure fatal - Return error instead of warning when initialization fails - Prevents server from running in broken state 3. Improve directory creation error handling - Only ignore 'file exists' errors - Fail on permission/connectivity errors These changes ensure the SSE-S3 key manager is robust against transient errors and prevents accidental data loss. * Fix KEK path conflict with /etc/s3 file Changed KEK storage from /etc/s3/sse_kek to /etc/seaweedfs/s3_sse_kek to avoid conflict with the circuit breaker config at /etc/s3. The /etc/s3 path is used by CircuitBreakerConfigDir and may exist as a file (circuit_breaker.json), causing the error: 'CreateEntry /etc/s3/sse_kek: /etc/s3 should be a directory' New KEK location: /etc/seaweedfs/s3_sse_kek This uses the seaweedfs subdirectory which is more appropriate for internal SeaweedFS configuration files. Fixes startup failure when /etc/s3 exists as a file. * Revert KEK path back to /etc/s3/sse_kek Changed back from /etc/seaweedfs/s3_sse_kek to /etc/s3/sse_kek as requested. The /etc/s3 directory will be created properly when it doesn't exist. * Fix directory creation with proper ModeDir flag Set FileMode to uint32(0755 | os.ModeDir) when creating /etc/s3 directory to ensure it's created as a directory, not a file. Without the os.ModeDir flag, the entry was being created as a file, which caused the error 'CreateEntry: /etc/s3 is a file' when trying to create the KEK file inside it. Uses 0755 permissions (rwxr-xr-x) for the directory and adds os import for os.ModeDir constant. |
3 days ago |
|
|
aed91baa2e
|
[weed] update volume.fix.replication description (#7340)
* [weed] update volume.fix.replication description * Update master-cloud.toml * Update master.toml |
4 days ago |
|
|
ad46d80f49 |
go mod tidy
|
4 days ago |
|
|
d1603d0a6f |
Merge branch 'master' of https://github.com/seaweedfs/seaweedfs
|
4 days ago |
|
|
9b7ed67311 |
Switch to seaweedfs/cockroachdb-parser directly
Removed the replace directive and updated all references to use github.com/seaweedfs/cockroachdb-parser directly instead of redirecting from github.com/cockroachdb/cockroachdb-parser. Changes: - Updated import in weed/query/engine/cockroach_parser.go - Removed replace directive from go.mod - Now using seaweedfs/cockroachdb-parser@909763b17138 which includes all 32-bit architecture fixes and uses seaweedfs namespace throughout Build verified successfully for GOOS=openbsd GOARCH=arm. |
4 days ago |
|
|
986e3fe12e |
Update cockroachdb-parser to fix 32-bit builds
Use seaweedfs/cockroachdb-parser v0.0.0-20251021182748-d0c58c67297e via replace directive to fix building on 32-bit architectures like OpenBSD ARM. The replace directive ensures all imports of github.com/cockroachdb/cockroachdb-parser use the seaweedfs fork which includes fixes for: - Integer overflow in tsearch evaluation (int -> int64) - Flag type overflow in lexbase and tree packages (int -> int64) |
4 days ago |
|
|
76520583c6 |
Update cockroachdb-parser to fix 32-bit builds
Updated to seaweedfs/cockroachdb-parser v0.0.0-20251021182748-d0c58c67297e which includes fixes for building on 32-bit architectures like OpenBSD ARM. Fixes: - Integer overflow in tsearch evaluation (int -> int64) - Flag type overflow in lexbase and tree packages (int -> int64) |
4 days ago |
|
|
8c6e109657 |
update cockroachdb-parser due to build issues
|
5 days ago |
|
|
d3095f0c2a |
3.98
|
5 days ago |
|
|
d989971f3e |
Merge branch 'master' of https://github.com/seaweedfs/seaweedfs
|
5 days ago |
|
|
479e7bc38b |
go install github.com/a-h/templ/cmd/templ@latest
|
5 days ago |
|
|
34054ed910
|
Fix deadlock in worker client Connect() method (#7350)
The Connect() method was holding a write lock via defer for its entire duration, including when calling attemptConnection(). This caused a deadlock because attemptConnection() tries to acquire a read lock at line 119 to access c.lastWorkerInfo. The fix removes the defer unlock pattern and manually releases the lock after checking the connected state but before calling attemptConnection(). This allows attemptConnection() to acquire its own locks without deadlock. Fixes #7192 |
5 days ago |
|
|
82fbf15b71
|
chore(deps): bump golang.org/x/image from 0.30.0 to 0.32.0 (#7343)
* chore(deps): bump golang.org/x/image from 0.30.0 to 0.32.0 Bumps [golang.org/x/image](https://github.com/golang/image) from 0.30.0 to 0.32.0. - [Commits](https://github.com/golang/image/compare/v0.30.0...v0.32.0) --- updated-dependencies: - dependency-name: golang.org/x/image dependency-version: 0.32.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> * go mod * go mod tidy --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |
5 days ago |
|
|
da18e3dc82
|
chore(deps): bump golang.org/x/crypto from 0.42.0 to 0.43.0 (#7347)
* chore(deps): bump golang.org/x/crypto from 0.42.0 to 0.43.0 Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.42.0 to 0.43.0. - [Commits](https://github.com/golang/crypto/compare/v0.42.0...v0.43.0) --- updated-dependencies: - dependency-name: golang.org/x/crypto dependency-version: 0.43.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> * go mod * go mod 2 * go mod tidy --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |
5 days ago |
|
|
5bec9e4114
|
chore(deps): bump github.com/klauspost/compress from 1.18.0 to 1.18.1 (#7344)
* chore(deps): bump github.com/klauspost/compress from 1.18.0 to 1.18.1 Bumps [github.com/klauspost/compress](https://github.com/klauspost/compress) from 1.18.0 to 1.18.1. - [Release notes](https://github.com/klauspost/compress/releases) - [Changelog](https://github.com/klauspost/compress/blob/master/.goreleaser.yml) - [Commits](https://github.com/klauspost/compress/compare/v1.18.0...v1.18.1) --- updated-dependencies: - dependency-name: github.com/klauspost/compress dependency-version: 1.18.1 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> * go mod * go mod tidy --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: chrislu <chris.lu@gmail.com> |
5 days ago |
|
|
1fa753abcb
|
chore(deps): bump actions/setup-go from 5 to 6 (#7348)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6. - [Release notes](https://github.com/actions/setup-go/releases) - [Commits](https://github.com/actions/setup-go/compare/v5...v6) --- updated-dependencies: - dependency-name: actions/setup-go dependency-version: '6' dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
5 days ago |
|
|
c547cd7c18 |
fix tests
|
5 days ago |
|
|
ffd43218f6
|
create new volumes on less occupied disk locations (#7349)
* create new volumes on less occupied disk locations * add unit tests * address comments * fixes |
5 days ago |
|
|
3e8cc5a906
|
chore(deps): bump github.com/redis/go-redis/v9 from 9.14.0 to 9.14.1 (#7342)
Bumps [github.com/redis/go-redis/v9](https://github.com/redis/go-redis) from 9.14.0 to 9.14.1. - [Release notes](https://github.com/redis/go-redis/releases) - [Changelog](https://github.com/redis/go-redis/blob/v9.14.1/RELEASE-NOTES.md) - [Commits](https://github.com/redis/go-redis/compare/v9.14.0...v9.14.1) --- updated-dependencies: - dependency-name: github.com/redis/go-redis/v9 dependency-version: 9.14.1 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
5 days ago |
|
|
0baad7b5a1
|
chore(deps): bump actions/checkout from 4 to 5 (#7345)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 5. - [Release notes](https://github.com/actions/checkout/releases) - [Commits](https://github.com/actions/checkout/compare/v4...v5) --- updated-dependencies: - dependency-name: actions/checkout dependency-version: '5' dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
5 days ago |
|
|
97f3028782
|
Clean up logs and deprecated functions (#7339)
* less logs * fix deprecated grpc.Dial |
1 week ago |
|
|
8d63a9cf5f
|
Fixes for kafka gateway (#7329)
* fix race condition
* save checkpoint every 2 seconds
* Inlined the session creation logic to hold the lock continuously
* comment
* more logs on offset resume
* only recreate if we need to seek backward (requested offset < current offset), not on any mismatch
* Simplified GetOrCreateSubscriber to always reuse existing sessions
* atomic currentStartOffset
* fmt
* avoid deadlock
* fix locking
* unlock
* debug
* avoid race condition
* refactor dedup
* consumer group that does not join group
* increase deadline
* use client timeout wait
* less logs
* add some delays
* adjust deadline
* Update fetch.go
* more time
* less logs, remove unused code
* purge unused
* adjust return values on failures
* clean up consumer protocols
* avoid goroutine leak
* seekable subscribe messages
* ack messages to broker
* reuse cached records
* pin s3 test version
* adjust s3 tests
* verify produced messages are consumed
* track messages with testStartTime
* removing the unnecessary restart logic and relying on the seek mechanism we already implemented
* log read stateless
* debug fetch offset APIs
* fix tests
* fix go mod
* less logs
* test: increase timeouts for consumer group operations in E2E tests
Consumer group operations (coordinator discovery, offset fetch/commit) are
slower in CI environments with limited resources. This increases timeouts to:
- ProduceMessages: 10s -> 30s (for when consumer groups are active)
- ConsumeWithGroup: 30s -> 60s (for offset fetch/commit operations)
Fixes the TestOffsetManagement timeout failures in GitHub Actions CI.
* feat: add context timeout propagation to produce path
This commit adds proper context propagation throughout the produce path,
enabling client-side timeouts to be honored on the broker side. Previously,
only fetch operations respected client timeouts - produce operations continued
indefinitely even if the client gave up.
Changes:
- Add ctx parameter to ProduceRecord and ProduceRecordValue signatures
- Add ctx parameter to PublishRecord and PublishRecordValue in BrokerClient
- Add ctx parameter to handleProduce and related internal functions
- Update all callers (protocol handlers, mocks, tests) to pass context
- Add context cancellation checks in PublishRecord before operations
Benefits:
- Faster failure detection when client times out
- No orphaned publish operations consuming broker resources
- Resource efficiency improvements (no goroutine/stream/lock leaks)
- Consistent timeout behavior between produce and fetch paths
- Better error handling with proper cancellation signals
This fixes the root cause of CI test timeouts where produce operations
continued indefinitely after clients gave up, leading to cascading delays.
* feat: add disk I/O fallback for historical offset reads
This commit implements async disk I/O fallback to handle cases where:
1. Data is flushed from memory before consumers can read it (CI issue)
2. Consumers request historical offsets not in memory
3. Small LogBuffer retention in resource-constrained environments
Changes:
- Add readHistoricalDataFromDisk() helper function
- Update ReadMessagesAtOffset() to call ReadFromDiskFn when offset < bufferStartOffset
- Properly handle maxMessages and maxBytes limits during disk reads
- Return appropriate nextOffset after disk reads
- Log disk read operations at V(2) and V(3) levels
Benefits:
- Fixes CI test failures where data is flushed before consumption
- Enables consumers to catch up even if they fall behind memory retention
- No blocking on hot path (disk read only for historical data)
- Respects existing ReadFromDiskFn timeout handling
How it works:
1. Try in-memory read first (fast path)
2. If offset too old and ReadFromDiskFn configured, read from disk
3. Return disk data with proper nextOffset
4. Consumer continues reading seamlessly
This fixes the 'offset 0 too old (earliest in-memory: 5)' error in
TestOffsetManagement where messages were flushed before consumer started.
* fmt
* feat: add in-memory cache for disk chunk reads
This commit adds an LRU cache for disk chunks to optimize repeated reads
of historical data. When multiple consumers read the same historical offsets,
or a single consumer refetches the same data, the cache eliminates redundant
disk I/O.
Cache Design:
- Chunk size: 1000 messages per chunk
- Max chunks: 16 (configurable, ~16K messages cached)
- Eviction policy: LRU (Least Recently Used)
- Thread-safe with RWMutex
- Chunk-aligned offsets for efficient lookups
New Components:
1. DiskChunkCache struct - manages cached chunks
2. CachedDiskChunk struct - stores chunk data with metadata
3. getCachedDiskChunk() - checks cache before disk read
4. cacheDiskChunk() - stores chunks with LRU eviction
5. extractMessagesFromCache() - extracts subset from cached chunk
How It Works:
1. Read request for offset N (e.g., 2500)
2. Calculate chunk start: (2500 / 1000) * 1000 = 2000
3. Check cache for chunk starting at 2000
4. If HIT: Extract messages 2500-2999 from cached chunk
5. If MISS: Read chunk 2000-2999 from disk, cache it, extract 2500-2999
6. If cache full: Evict LRU chunk before caching new one
Benefits:
- Eliminates redundant disk I/O for popular historical data
- Reduces latency for repeated reads (cache hit ~1ms vs disk ~100ms)
- Supports multiple consumers reading same historical offsets
- Automatically evicts old chunks when cache is full
- Zero impact on hot path (in-memory reads unchanged)
Performance Impact:
- Cache HIT: ~99% faster than disk read
- Cache MISS: Same as disk read (with caching overhead ~1%)
- Memory: ~16MB for 16 chunks (16K messages x 1KB avg)
Example Scenario (CI tests):
- Producer writes offsets 0-4
- Data flushes to disk
- Consumer 1 reads 0-4 (cache MISS, reads from disk, caches chunk 0-999)
- Consumer 2 reads 0-4 (cache HIT, served from memory)
- Consumer 1 rebalances, re-reads 0-4 (cache HIT, no disk I/O)
This optimization is especially valuable in CI environments where:
- Small memory buffers cause frequent flushing
- Multiple consumers read the same historical data
- Disk I/O is relatively slow compared to memory access
* fix: commit offsets in Cleanup() before rebalancing
This commit adds explicit offset commit in the ConsumerGroupHandler.Cleanup()
method, which is called during consumer group rebalancing. This ensures all
marked offsets are committed BEFORE partitions are reassigned to other consumers,
significantly reducing duplicate message consumption during rebalancing.
Problem:
- Cleanup() was not committing offsets before rebalancing
- When partition reassigned to another consumer, it started from last committed offset
- Uncommitted messages (processed but not yet committed) were read again by new consumer
- This caused ~100-200% duplicate messages during rebalancing in tests
Solution:
- Add session.Commit() in Cleanup() method
- This runs after all ConsumeClaim goroutines have exited
- Ensures all MarkMessage() calls are committed before partition release
- New consumer starts from the last processed offset, not an older committed offset
Benefits:
- Dramatically reduces duplicate messages during rebalancing
- Improves at-least-once semantics (closer to exactly-once for normal cases)
- Better performance (less redundant processing)
- Cleaner test results (expected duplicates only from actual failures)
Kafka Rebalancing Lifecycle:
1. Rebalance triggered (consumer join/leave, timeout, etc.)
2. All ConsumeClaim goroutines cancelled
3. Cleanup() called ← WE COMMIT HERE NOW
4. Partitions reassigned to other consumers
5. New consumer starts from last committed offset ← NOW MORE UP-TO-DATE
Expected Results:
- Before: ~100-200% duplicates during rebalancing (2-3x reads)
- After: <10% duplicates (only from uncommitted in-flight messages)
This is a critical fix for production deployments where consumer churn
(scaling, restarts, failures) causes frequent rebalancing.
* fmt
* feat: automatic idle partition cleanup to prevent memory bloat
Implements automatic cleanup of topic partitions with no active publishers
or subscribers to prevent memory accumulation from short-lived topics.
**Key Features:**
1. Activity Tracking (local_partition.go)
- Added lastActivityTime field to LocalPartition
- UpdateActivity() called on publish, subscribe, and message reads
- IsIdle() checks if partition has no publishers/subscribers
- GetIdleDuration() returns time since last activity
- ShouldCleanup() determines if partition eligible for cleanup
2. Cleanup Task (local_manager.go)
- Background goroutine runs every 1 minute (configurable)
- Removes partitions idle for > 5 minutes (configurable)
- Automatically removes empty topics after all partitions cleaned
- Proper shutdown handling with WaitForCleanupShutdown()
3. Broker Integration (broker_server.go)
- StartIdlePartitionCleanup() called on broker startup
- Default: check every 1 minute, cleanup after 5 minutes idle
- Transparent operation with sensible defaults
**Cleanup Process:**
- Checks: partition.Publishers.Size() == 0 && partition.Subscribers.Size() == 0
- Calls partition.Shutdown() to:
- Flush all data to disk (no data loss)
- Stop 3 goroutines (loopFlush, loopInterval, cleanupLoop)
- Free in-memory buffers (~100KB-10MB per partition)
- Close LogBuffer resources
- Removes partition from LocalTopic.Partitions
- Removes topic if no partitions remain
**Benefits:**
- Prevents memory bloat from short-lived topics
- Reduces goroutine count (3 per partition cleaned)
- Zero configuration required
- Data remains on disk, can be recreated on demand
- No impact on active partitions
**Example Logs:**
I Started idle partition cleanup task (check: 1m, timeout: 5m)
I Cleaning up idle partition topic-0 (idle for 5m12s, publishers=0, subscribers=0)
I Cleaned up 2 idle partition(s)
**Memory Freed per Partition:**
- In-memory message buffer: ~100KB-10MB
- Disk buffer cache
- 3 goroutines
- Publisher/subscriber tracking maps
- Condition variables and mutexes
**Related Issue:**
Prevents memory accumulation in systems with high topic churn or
many short-lived consumer groups, improving long-term stability
and resource efficiency.
**Testing:**
- Compiles cleanly
- No linting errors
- Ready for integration testing
fmt
* refactor: reduce verbosity of debug log messages
Changed debug log messages with bracket prefixes from V(1)/V(2) to V(3)/V(4)
to reduce log noise in production. These messages were added during development
for detailed debugging and are still available with higher verbosity levels.
Changes:
- glog.V(2).Infof("[") -> glog.V(4).Infof("[") (~104 messages)
- glog.V(1).Infof("[") -> glog.V(3).Infof("[") (~30 messages)
Affected files:
- weed/mq/broker/broker_grpc_fetch.go
- weed/mq/broker/broker_grpc_sub_offset.go
- weed/mq/kafka/integration/broker_client_fetch.go
- weed/mq/kafka/integration/broker_client_subscribe.go
- weed/mq/kafka/integration/seaweedmq_handler.go
- weed/mq/kafka/protocol/fetch.go
- weed/mq/kafka/protocol/fetch_partition_reader.go
- weed/mq/kafka/protocol/handler.go
- weed/mq/kafka/protocol/offset_management.go
Benefits:
- Cleaner logs in production (default -v=0)
- Still available for deep debugging with -v=3 or -v=4
- No code behavior changes, only log verbosity
- Safer than deletion - messages preserved for debugging
Usage:
- Default (-v=0): Only errors and important events
- -v=1: Standard info messages
- -v=2: Detailed info messages
- -v=3: Debug messages (previously V(1) with brackets)
- -v=4: Verbose debug (previously V(2) with brackets)
* refactor: change remaining glog.Infof debug messages to V(3)
Changed remaining debug log messages with bracket prefixes from
glog.Infof() to glog.V(3).Infof() to prevent them from showing
in production logs by default.
Changes (8 messages across 3 files):
- glog.Infof("[") -> glog.V(3).Infof("[")
Files updated:
- weed/mq/broker/broker_grpc_fetch.go (4 messages)
- [FetchMessage] CALLED! debug marker
- [FetchMessage] request details
- [FetchMessage] LogBuffer read start
- [FetchMessage] LogBuffer read completion
- weed/mq/kafka/integration/broker_client_fetch.go (3 messages)
- [FETCH-STATELESS-CLIENT] received messages
- [FETCH-STATELESS-CLIENT] converted records (with data)
- [FETCH-STATELESS-CLIENT] converted records (empty)
- weed/mq/kafka/integration/broker_client_publish.go (1 message)
- [GATEWAY RECV] _schemas topic debug
Now ALL debug messages with bracket prefixes require -v=3 or higher:
- Default (-v=0): Clean production logs ✅
- -v=3: All debug messages visible
- -v=4: All verbose debug messages visible
Result: Production logs are now clean with default settings!
* remove _schemas debug
* less logs
* fix: critical bug causing 51% message loss in stateless reads
CRITICAL BUG FIX: ReadMessagesAtOffset was returning error instead of
attempting disk I/O when data was flushed from memory, causing massive
message loss (6254 out of 12192 messages = 51% loss).
Problem:
In log_read_stateless.go lines 120-131, when data was flushed to disk
(empty previous buffer), the code returned an 'offset out of range' error
instead of attempting disk I/O. This caused consumers to skip over flushed
data entirely, leading to catastrophic message loss.
The bug occurred when:
1. Data was written to LogBuffer
2. Data was flushed to disk due to buffer rotation
3. Consumer requested that offset range
4. Code found offset in expected range but not in memory
5. ❌ Returned error instead of reading from disk
Root Cause:
Lines 126-131 had early return with error when previous buffer was empty:
// Data not in memory - for stateless fetch, we don't do disk I/O
return messages, startOffset, highWaterMark, false,
fmt.Errorf("offset %d out of range...")
This comment was incorrect - we DO need disk I/O for flushed data!
Fix:
1. Lines 120-132: Changed to fall through to disk read logic instead of
returning error when previous buffer is empty
2. Lines 137-177: Enhanced disk read logic to handle TWO cases:
- Historical data (offset < bufferStartOffset)
- Flushed data (offset >= bufferStartOffset but not in memory)
Changes:
- Line 121: Log "attempting disk read" instead of breaking
- Line 130-132: Fall through to disk read instead of returning error
- Line 141: Changed condition from 'if startOffset < bufferStartOffset'
to 'if startOffset < currentBufferEnd' to handle both cases
- Lines 143-149: Add context-aware logging for both historical and flushed data
- Lines 154-159: Add context-aware error messages
Expected Results:
- Before: 51% message loss (6254/12192 missing)
- After: <1% message loss (only from rebalancing, which we already fixed)
- Duplicates: Should remain ~47% (from rebalancing, expected until offsets committed)
Testing:
- ✅ Compiles successfully
- Ready for integration testing with standard-test
Related Issues:
- This explains the massive data loss in recent load tests
- Disk I/O fallback was implemented but not reachable due to early return
- Disk chunk cache is working but was never being used for flushed data
Priority: CRITICAL - Fixes production-breaking data loss bug
* perf: add topic configuration cache to fix 60% CPU overhead
CRITICAL PERFORMANCE FIX: Added topic configuration caching to eliminate
massive CPU overhead from repeated filer reads and JSON unmarshaling on
EVERY fetch request.
Problem (from CPU profile):
- ReadTopicConfFromFiler: 42.45% CPU (5.76s out of 13.57s)
- protojson.Unmarshal: 25.64% CPU (3.48s)
- GetOrGenerateLocalPartition called on EVERY FetchMessage request
- No caching - reading from filer and unmarshaling JSON every time
- This caused filer, gateway, and broker to be extremely busy
Root Cause:
GetOrGenerateLocalPartition() is called on every FetchMessage request and
was calling ReadTopicConfFromFiler() without any caching. Each call:
1. Makes gRPC call to filer (expensive)
2. Reads JSON from disk (expensive)
3. Unmarshals protobuf JSON (25% of CPU!)
The disk I/O fix (previous commit) made this worse by enabling more reads,
exposing this performance bottleneck.
Solution:
Added topicConfCache similar to existing topicExistsCache:
Changes to broker_server.go:
- Added topicConfCacheEntry struct
- Added topicConfCache map to MessageQueueBroker
- Added topicConfCacheMu RWMutex for thread safety
- Added topicConfCacheTTL (30 seconds)
- Initialize cache in NewMessageBroker()
Changes to broker_topic_conf_read_write.go:
- Modified GetOrGenerateLocalPartition() to check cache first
- Cache HIT: Return cached config immediately (V(4) log)
- Cache MISS: Read from filer, cache result, proceed
- Added invalidateTopicConfCache() for cache invalidation
- Added import "time" for cache TTL
Cache Strategy:
- TTL: 30 seconds (matches topicExistsCache)
- Thread-safe with RWMutex
- Cache key: topic.String() (e.g., "kafka.loadtest-topic-0")
- Invalidation: Call invalidateTopicConfCache() when config changes
Expected Results:
- Before: 60% CPU on filer reads + JSON unmarshaling
- After: <1% CPU (only on cache miss every 30s)
- Filer load: Reduced by ~99% (from every fetch to once per 30s)
- Gateway CPU: Dramatically reduced
- Broker CPU: Dramatically reduced
- Throughput: Should increase significantly
Performance Impact:
With 50 msgs/sec per topic × 5 topics = 250 fetches/sec:
- Before: 250 filer reads/sec (25000% overhead!)
- After: 0.17 filer reads/sec (5 topics / 30s TTL)
- Reduction: 99.93% fewer filer calls
Testing:
- ✅ Compiles successfully
- Ready for load test to verify CPU reduction
Priority: CRITICAL - Fixes production-breaking performance issue
Related: Works with previous commit (disk I/O fix) to enable correct and fast reads
* fmt
* refactor: merge topicExistsCache and topicConfCache into unified topicCache
Merged two separate caches into one unified cache to simplify code and
reduce memory usage. The unified cache stores both topic existence and
configuration in a single structure.
Design:
- Single topicCacheEntry with optional *ConfigureTopicResponse
- If conf != nil: topic exists with full configuration
- If conf == nil: topic doesn't exist (negative cache)
- Same 30-second TTL for both existence and config caching
Changes to broker_server.go:
- Removed topicExistsCacheEntry struct
- Removed topicConfCacheEntry struct
- Added unified topicCacheEntry struct (conf can be nil)
- Removed topicExistsCache, topicExistsCacheMu, topicExistsCacheTTL
- Removed topicConfCache, topicConfCacheMu, topicConfCacheTTL
- Added unified topicCache, topicCacheMu, topicCacheTTL
- Updated NewMessageBroker() to initialize single cache
Changes to broker_topic_conf_read_write.go:
- Modified GetOrGenerateLocalPartition() to use unified cache
- Added negative caching (conf=nil) when topic not found
- Renamed invalidateTopicConfCache() to invalidateTopicCache()
- Single cache lookup instead of two separate checks
Changes to broker_grpc_lookup.go:
- Modified TopicExists() to use unified cache
- Check: exists = (entry.conf != nil)
- Only cache negative results (conf=nil) in TopicExists
- Positive results cached by GetOrGenerateLocalPartition
- Removed old invalidateTopicExistsCache() function
Changes to broker_grpc_configure.go:
- Updated invalidateTopicExistsCache() calls to invalidateTopicCache()
- Two call sites updated
Benefits:
1. Code Simplification: One cache instead of two
2. Memory Reduction: Single map, single mutex, single TTL
3. Consistency: No risk of cache desync between existence and config
4. Less Lock Contention: One lock instead of two
5. Easier Maintenance: Single invalidation function
6. Same Performance: Still eliminates 60% CPU overhead
Cache Behavior:
- TopicExists: Lightweight check, only caches negative (conf=nil)
- GetOrGenerateLocalPartition: Full config read, caches positive (conf != nil)
- Both share same 30s TTL
- Both use same invalidation on topic create/update/delete
Testing:
- ✅ Compiles successfully
- Ready for integration testing
This refactor maintains all performance benefits while simplifying
the codebase and reducing memory footprint.
* fix: add cache to LookupTopicBrokers to eliminate 26% CPU overhead
CRITICAL: LookupTopicBrokers was bypassing cache, causing 26% CPU overhead!
Problem (from CPU profile):
- LookupTopicBrokers: 35.74% CPU (9s out of 25.18s)
- ReadTopicConfFromFiler: 26.41% CPU (6.65s)
- protojson.Unmarshal: 16.64% CPU (4.19s)
- LookupTopicBrokers called b.fca.ReadTopicConfFromFiler() directly on line 35
- Completely bypassed our unified topicCache!
Root Cause:
LookupTopicBrokers is called VERY frequently by clients (every fetch request
needs to know partition assignments). It was calling ReadTopicConfFromFiler
directly instead of using the cache, causing:
1. Expensive gRPC calls to filer on every lookup
2. Expensive JSON unmarshaling on every lookup
3. 26%+ CPU overhead on hot path
4. Our cache optimization was useless for this critical path
Solution:
Created getTopicConfFromCache() helper and updated all callers:
Changes to broker_topic_conf_read_write.go:
- Added getTopicConfFromCache() - public API for cached topic config reads
- Implements same caching logic: check cache -> read filer -> cache result
- Handles both positive (conf != nil) and negative (conf == nil) caching
- Refactored GetOrGenerateLocalPartition() to use new helper (code dedup)
- Now only 14 lines instead of 60 lines (removed duplication)
Changes to broker_grpc_lookup.go:
- Modified LookupTopicBrokers() to call getTopicConfFromCache()
- Changed from: b.fca.ReadTopicConfFromFiler(t) (no cache)
- Changed to: b.getTopicConfFromCache(t) (with cache)
- Added comment explaining this fixes 26% CPU overhead
Cache Strategy:
- First call: Cache MISS -> read filer + unmarshal JSON -> cache for 30s
- Next 1000+ calls in 30s: Cache HIT -> return cached config immediately
- No filer gRPC, no JSON unmarshaling, near-zero CPU
- Cache invalidated on topic create/update/delete
Expected CPU Reduction:
- Before: 26.41% on ReadTopicConfFromFiler + 16.64% on JSON unmarshal = 43% CPU
- After: <0.1% (only on cache miss every 30s)
- Expected total broker CPU: 25.18s -> ~8s (67% reduction!)
Performance Impact (with 250 lookups/sec):
- Before: 250 filer reads/sec + 250 JSON unmarshals/sec
- After: 0.17 filer reads/sec (5 topics / 30s TTL)
- Reduction: 99.93% fewer expensive operations
Code Quality:
- Eliminated code duplication (60 lines -> 14 lines in GetOrGenerateLocalPartition)
- Single source of truth for cached reads (getTopicConfFromCache)
- Clear API: "Always use getTopicConfFromCache, never ReadTopicConfFromFiler directly"
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement
Priority: CRITICAL - Completes the cache optimization to achieve full performance fix
* perf: optimize broker assignment validation to eliminate 14% CPU overhead
CRITICAL: Assignment validation was running on EVERY LookupTopicBrokers call!
Problem (from CPU profile):
- ensureTopicActiveAssignments: 14.18% CPU (2.56s out of 18.05s)
- EnsureAssignmentsToActiveBrokers: 14.18% CPU (2.56s)
- ConcurrentMap.IterBuffered: 12.85% CPU (2.32s) - iterating all brokers
- Called on EVERY LookupTopicBrokers request, even with cached config!
Root Cause:
LookupTopicBrokers flow was:
1. getTopicConfFromCache() - returns cached config (fast ✅)
2. ensureTopicActiveAssignments() - validates assignments (slow ❌)
Even though config was cached, we still validated assignments every time,
iterating through ALL active brokers on every single request. With 250
requests/sec, this meant 250 full broker iterations per second!
Solution:
Move assignment validation inside getTopicConfFromCache() and only run it
on cache misses:
Changes to broker_topic_conf_read_write.go:
- Modified getTopicConfFromCache() to validate assignments after filer read
- Validation only runs on cache miss (not on cache hit)
- If hasChanges: Save to filer immediately, invalidate cache, return
- If no changes: Cache config with validated assignments
- Added ensureTopicActiveAssignmentsUnsafe() helper (returns bool)
- Kept ensureTopicActiveAssignments() for other callers (saves to filer)
Changes to broker_grpc_lookup.go:
- Removed ensureTopicActiveAssignments() call from LookupTopicBrokers
- Assignment validation now implicit in getTopicConfFromCache()
- Added comments explaining the optimization
Cache Behavior:
- Cache HIT: Return config immediately, skip validation (saves 14% CPU!)
- Cache MISS: Read filer -> validate assignments -> cache result
- If broker changes detected: Save to filer, invalidate cache, return
- Next request will re-read and re-validate (ensures consistency)
Performance Impact:
With 30-second cache TTL and 250 lookups/sec:
- Before: 250 validations/sec × 10ms each = 2.5s CPU/sec (14% overhead)
- After: 0.17 validations/sec (only on cache miss)
- Reduction: 99.93% fewer validations
Expected CPU Reduction:
- Before (with cache): 18.05s total, 2.56s validation (14%)
- After (with optimization): ~15.5s total (-14% = ~2.5s saved)
- Combined with previous cache fix: 25.18s -> ~15.5s (38% total reduction)
Cache Consistency:
- Assignments validated when config first cached
- If broker membership changes, assignments updated and saved
- Cache invalidated to force fresh read
- All brokers eventually converge on correct assignments
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement
Priority: CRITICAL - Completes optimization of LookupTopicBrokers hot path
* fmt
* perf: add partition assignment cache in gateway to eliminate 13.5% CPU overhead
CRITICAL: Gateway calling LookupTopicBrokers on EVERY fetch to translate
Kafka partition IDs to SeaweedFS partition ranges!
Problem (from CPU profile):
- getActualPartitionAssignment: 13.52% CPU (1.71s out of 12.65s)
- Called bc.client.LookupTopicBrokers on line 228 for EVERY fetch
- With 250 fetches/sec, this means 250 LookupTopicBrokers calls/sec!
- No caching at all - same overhead as broker had before optimization
Root Cause:
Gateway needs to translate Kafka partition IDs (0, 1, 2...) to SeaweedFS
partition ranges (0-341, 342-682, etc.) for every fetch request. This
translation requires calling LookupTopicBrokers to get partition assignments.
Without caching, every fetch request triggered:
1. gRPC call to broker (LookupTopicBrokers)
2. Broker reads from its cache (fast now after broker optimization)
3. gRPC response back to gateway
4. Gateway computes partition range mapping
The gRPC round-trip overhead was consuming 13.5% CPU even though broker
cache was fast!
Solution:
Added partitionAssignmentCache to BrokerClient:
Changes to types.go:
- Added partitionAssignmentCacheEntry struct (assignments + expiresAt)
- Added cache fields to BrokerClient:
* partitionAssignmentCache map[string]*partitionAssignmentCacheEntry
* partitionAssignmentCacheMu sync.RWMutex
* partitionAssignmentCacheTTL time.Duration
Changes to broker_client.go:
- Initialize partitionAssignmentCache in NewBrokerClientWithFilerAccessor
- Set partitionAssignmentCacheTTL to 30 seconds (same as broker)
Changes to broker_client_publish.go:
- Added "time" import
- Modified getActualPartitionAssignment() to check cache first:
* Cache HIT: Use cached assignments (fast ✅)
* Cache MISS: Call LookupTopicBrokers, cache result for 30s
- Extracted findPartitionInAssignments() helper function
* Contains range calculation and partition matching logic
* Reused for both cached and fresh lookups
Cache Behavior:
- First fetch: Cache MISS -> LookupTopicBrokers (~2ms) -> cache for 30s
- Next 7500 fetches in 30s: Cache HIT -> immediate return (~0.01ms)
- Cache automatically expires after 30s, re-validates on next fetch
Performance Impact:
With 250 fetches/sec and 5 topics:
- Before: 250 LookupTopicBrokers/sec = 500ms CPU overhead
- After: 0.17 LookupTopicBrokers/sec (5 topics / 30s TTL)
- Reduction: 99.93% fewer gRPC calls
Expected CPU Reduction:
- Before: 12.65s total, 1.71s in getActualPartitionAssignment (13.5%)
- After: ~11s total (-13.5% = 1.65s saved)
- Benefit: 13% lower CPU, more capacity for actual message processing
Cache Consistency:
- Same 30-second TTL as broker's topic config cache
- Partition assignments rarely change (only on topic reconfiguration)
- 30-second staleness is acceptable for partition mapping
- Gateway will eventually converge with broker's view
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement
Priority: CRITICAL - Eliminates major performance bottleneck in gateway fetch path
* perf: add RecordType inference cache to eliminate 37% gateway CPU overhead
CRITICAL: Gateway was creating Avro codecs and inferring RecordTypes on
EVERY fetch request for schematized topics!
Problem (from CPU profile):
- NewCodec (Avro): 17.39% CPU (2.35s out of 13.51s)
- inferRecordTypeFromAvroSchema: 20.13% CPU (2.72s)
- Total schema overhead: 37.52% CPU
- Called during EVERY fetch to check if topic is schematized
- No caching - recreating expensive goavro.Codec objects repeatedly
Root Cause:
In the fetch path, isSchematizedTopic() -> matchesSchemaRegistryConvention()
-> ensureTopicSchemaFromRegistryCache() -> inferRecordTypeFromCachedSchema()
-> inferRecordTypeFromAvroSchema() was being called.
The inferRecordTypeFromAvroSchema() function created a NEW Avro decoder
(which internally calls goavro.NewCodec()) on every call, even though:
1. The schema.Manager already has a decoder cache by schema ID
2. The same schemas are used repeatedly for the same topics
3. goavro.NewCodec() is expensive (parses JSON, builds schema tree)
This was wasteful because:
- Same schema string processed repeatedly
- No reuse of inferred RecordType structures
- Creating codecs just to infer types, then discarding them
Solution:
Added inferredRecordTypes cache to Handler:
Changes to handler.go:
- Added inferredRecordTypes map[string]*schema_pb.RecordType to Handler
- Added inferredRecordTypesMu sync.RWMutex for thread safety
- Initialize cache in NewTestHandlerWithMock() and NewSeaweedMQBrokerHandlerWithDefaults()
Changes to produce.go:
- Added glog import
- Modified inferRecordTypeFromAvroSchema():
* Check cache first (key: schema string)
* Cache HIT: Return immediately (V(4) log)
* Cache MISS: Create decoder, infer type, cache result
- Modified inferRecordTypeFromProtobufSchema():
* Same caching strategy (key: "protobuf:" + schema)
- Modified inferRecordTypeFromJSONSchema():
* Same caching strategy (key: "json:" + schema)
Cache Strategy:
- Key: Full schema string (unique per schema content)
- Value: Inferred *schema_pb.RecordType
- Thread-safe with RWMutex (optimized for reads)
- No TTL - schemas don't change for a topic
- Memory efficient - RecordType is small compared to codec
Performance Impact:
With 250 fetches/sec across 5 topics (1-3 schemas per topic):
- Before: 250 codec creations/sec + 250 inferences/sec = ~5s CPU
- After: 3-5 codec creations total (one per schema) = ~0.05s CPU
- Reduction: 99% fewer expensive operations
Expected CPU Reduction:
- Before: 13.51s total, 5.07s schema operations (37.5%)
- After: ~8.5s total (-37.5% = 5s saved)
- Benefit: 37% lower gateway CPU, more capacity for message processing
Cache Consistency:
- Schemas are immutable once registered in Schema Registry
- If schema changes, schema ID changes, so safe to cache indefinitely
- New schemas automatically cached on first use
- No need for invalidation or TTL
Additional Optimizations:
- Protobuf and JSON Schema also cached (same pattern)
- Prevents future bottlenecks as more schema formats are used
- Consistent caching approach across all schema types
Testing:
- ✅ Compiles successfully
- Ready to deploy and measure CPU improvement under load
Priority: HIGH - Eliminates major performance bottleneck in gateway schema path
* fmt
* fix Node ID Mismatch, and clean up log messages
* clean up
* Apply client-specified timeout to context
* Add comprehensive debug logging for Noop record processing
- Track Produce v2+ request reception with API version and request body size
- Log acks setting, timeout, and topic/partition information
- Log record count from parseRecordSet and any parse errors
- **CRITICAL**: Log when recordCount=0 fallback extraction attempts
- Log record extraction with NULL value detection (Noop records)
- Log record key in hex for Noop key identification
- Track each record being published to broker
- Log offset assigned by broker for each record
- Log final response with offset and error code
This enables root cause analysis of Schema Registry Noop record timeout issue.
* fix: Remove context timeout propagation from produce that breaks consumer init
Commit
|
1 week ago |
|
|
52419c513b
|
chore(deps): bump cloud.google.com/go/storage from 1.56.2 to 1.57.0 (#7326)
Bumps [cloud.google.com/go/storage](https://github.com/googleapis/google-cloud-go) from 1.56.2 to 1.57.0. - [Release notes](https://github.com/googleapis/google-cloud-go/releases) - [Changelog](https://github.com/googleapis/google-cloud-go/blob/main/CHANGES.md) - [Commits](https://github.com/googleapis/google-cloud-go/compare/storage/v1.56.2...spanner/v1.57.0) --- updated-dependencies: - dependency-name: cloud.google.com/go/storage dependency-version: 1.57.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
1 week ago |
|
|
fb97e3ea06
|
chore(deps): bump cloud.google.com/go/kms from 1.22.0 to 1.23.1 (#7325)
Bumps [cloud.google.com/go/kms](https://github.com/googleapis/google-cloud-go) from 1.22.0 to 1.23.1. - [Release notes](https://github.com/googleapis/google-cloud-go/releases) - [Changelog](https://github.com/googleapis/google-cloud-go/blob/main/documentai/CHANGES.md) - [Commits](https://github.com/googleapis/google-cloud-go/compare/kms/v1.22.0...kms/v1.23.1) --- updated-dependencies: - dependency-name: cloud.google.com/go/kms dependency-version: 1.23.1 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
1 week ago |
|
|
b52d28bc41
|
chore(deps): bump actions/dependency-review-action from 4.8.0 to 4.8.1 (#7324)
Bumps [actions/dependency-review-action](https://github.com/actions/dependency-review-action) from 4.8.0 to 4.8.1.
- [Release notes](https://github.com/actions/dependency-review-action/releases)
- [Commits](
|
1 week ago |
|
|
bb88e463be
|
chore(deps): bump github/codeql-action from 3 to 4 (#7323)
Bumps [github/codeql-action](https://github.com/github/codeql-action) from 3 to 4. - [Release notes](https://github.com/github/codeql-action/releases) - [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md) - [Commits](https://github.com/github/codeql-action/compare/v3...v4) --- updated-dependencies: - dependency-name: github/codeql-action dependency-version: '4' dependency-type: direct:production update-type: version-update:semver-major ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
1 week ago |
|
|
67c3d52a00
|
chore(deps): bump github.com/go-redsync/redsync/v4 from 4.13.0 to 4.14.0 (#7322)
Bumps [github.com/go-redsync/redsync/v4](https://github.com/go-redsync/redsync) from 4.13.0 to 4.14.0. - [Release notes](https://github.com/go-redsync/redsync/releases) - [Commits](https://github.com/go-redsync/redsync/compare/v4.13.0...v4.14.0) --- updated-dependencies: - dependency-name: github.com/go-redsync/redsync/v4 dependency-version: 4.14.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
1 week ago |
|
|
3e9b605da7
|
fix: helm release invalid version v prefix (#7328)
|
1 week ago |
|
|
76863e3eee
|
chore(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity from 1.12.0 to 1.13.0 (#7321)
chore(deps): bump github.com/Azure/azure-sdk-for-go/sdk/azidentity Bumps [github.com/Azure/azure-sdk-for-go/sdk/azidentity](https://github.com/Azure/azure-sdk-for-go) from 1.12.0 to 1.13.0. - [Release notes](https://github.com/Azure/azure-sdk-for-go/releases) - [Changelog](https://github.com/Azure/azure-sdk-for-go/blob/main/documentation/sdk-breaking-changes-guide-migration.md) - [Commits](https://github.com/Azure/azure-sdk-for-go/compare/sdk/azcore/v1.12.0...sdk/azcore/v1.13.0) --- updated-dependencies: - dependency-name: github.com/Azure/azure-sdk-for-go/sdk/azidentity dependency-version: 1.13.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
1 week ago |
|
|
bf29963f75
|
ingress config (#7319)
* ingress config * fixing issues * prefix path type For the S3 ingress path /, using pathType: Prefix is more explicit and standard-compliant for matching all subpaths. While ImplementationSpecific might work similarly with your current Ingress controller (often defaulting to a prefix match when use-regex is not enabled), Prefix clearly states the intent and improves portability across different Ingress controllers. --------- Co-authored-by: Philipp Kraus <philipp.kraus@flashpixx.de> Co-authored-by: Chris Lu <chris.lu@gmail.com> |
1 week ago |
|
|
8fe14d1368
|
fix(helm): set securitycontext for idx move initcontainer if enabled (#7331)
|
1 week ago |