Tree: daa3af826f

Branches:
add-ec-vacuum
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-s3-object-tagging-issue-7589
fix-versioning-listing-only
ftp
gh-pages
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
pr-7412
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks

Tags:
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
dev
helm-3.65.1
v0.69
v0.70beta
v3.33

12305 Commits (daa3af826f2cf5368793944df5d335dc719f628c)

daa3af826f ci: fix stress tests by adding server start/stop (5 days ago)

aff144f8b5 ci: run versioning stress tests on all PRs, not just master pushes (5 days ago)

9150d84eea test: use -master.peers=none for faster test server startup (5 days ago)

5dd34e3260 s3: fix ListObjectVersions pagination by implementing key-marker filtering (5 days ago)

The ListObjectVersions API was receiving key-marker and version-id-marker parameters but not using them to filter results. This caused infinite pagination loops when clients tried to paginate through results. Fix by adding filtering logic after sorting:

- Skip versions with key < keyMarker (already returned in previous pages)
- For key == keyMarker, skip versions with versionId >= versionIdMarker
- Include versions with key > keyMarker or (key == keyMarker and versionId < versionIdMarker)

This respects the S3 sort order (key ascending, versionId descending for the same key) and correctly returns only versions that come after the marker position.

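The filtering rule reads naturally as a predicate over the sorted version list. A minimal sketch with hypothetical names standing in for the handler's actual types (version IDs compared as plain strings for brevity):

```go
package main

import "fmt"

// objectVersion is a hypothetical stand-in for the S3 handler's version entry.
type objectVersion struct {
	Key       string
	VersionId string
}

// afterMarker reports whether v comes strictly after the (keyMarker,
// versionIdMarker) position in S3 list order: keys ascending, and for
// equal keys, version IDs descending.
func afterMarker(v objectVersion, keyMarker, versionIdMarker string) bool {
	if v.Key < keyMarker {
		return false // already returned on a previous page
	}
	if v.Key == keyMarker {
		// Same key: in descending versionId order, only IDs smaller
		// than the marker come after it.
		return v.VersionId < versionIdMarker
	}
	return true // key > keyMarker
}

func main() {
	sorted := []objectVersion{{"a", "9"}, {"a", "5"}, {"b", "7"}, {"c", "3"}}
	var page []objectVersion
	for _, v := range sorted {
		if afterMarker(v, "a", "9") {
			page = append(page, v)
		}
	}
	fmt.Println(page) // [{a 5} {b 7} {c 3}]
}
```
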
26121c55c9 test: improve pagination stress test with QUICK_TEST option and better assertions (5 days ago)

f517bc39fc test: fix nil pointer dereference and add debugging to pagination stress tests (5 days ago)

8236df1368 ci: enable pagination stress tests in GitHub CI (5 days ago)

Add pagination stress tests (>1000 versions) to the S3 versioning stress test job in GitHub CI. These tests run on master branch pushes to validate that ListObjectVersions correctly handles objects with more than 1000 versions using pagination.

0972a0acf3 test: add pagination stress tests for S3 versioning with >1000 versions (5 days ago)

3f62240976 s3: add pagination to getObjectVersionList and reduce memory (#7787) (5 days ago)

* s3: add pagination to getObjectVersionList and reduce memory

  This commit makes two improvements to S3 version listing:

  1. Add pagination to getObjectVersionList:
     - Previously hardcoded a limit of 1000 versions per object
     - Now paginates through all versions using a startFrom marker
     - Fixes a correctness issue where objects with >1000 versions would have some versions missing from list results
  2. Reduce memory by not retaining the full Entry:
     - Replace the Entry field with an OwnerID string in the ObjectVersion struct
     - Extract the owner ID when creating ObjectVersion
     - Avoids retaining the Chunks array, which can be large for big files
     - Clear the seenVersionIds map after use to help GC
  3. Update getObjectOwnerFromVersion:
     - Use the new OwnerID field instead of accessing Entry.Extended
     - Maintains backward compatibility with fallback lookups

* s3: propagate errors from the list operation instead of returning partial results

  Address review feedback: when s3a.list fails during version listing, the function was logging at V(2) level and returning partial results with a nil error. This hides the error and could lead to silent data inconsistencies. Fix by:

  1. Logging at Warningf level for better visibility
  2. Returning a nil versions slice with the error to prevent partial results from being processed as complete

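A sketch of the startFrom-marker loop the first bullet describes, against a toy in-memory pager rather than the filer's real list API (all names here are illustrative):

```go
package main

import "fmt"

// listPage is a hypothetical pager: it returns up to limit names strictly
// after startFrom, in sorted order.
func listPage(all []string, startFrom string, limit int) []string {
	var out []string
	for _, name := range all {
		if name > startFrom {
			out = append(out, name)
			if len(out) == limit {
				break
			}
		}
	}
	return out
}

func main() {
	versions := []string{"v01", "v02", "v03", "v04", "v05"}
	const pageSize = 2
	startFrom := ""
	for {
		page := listPage(versions, startFrom, pageSize)
		if len(page) == 0 {
			break
		}
		fmt.Println(page)
		// Advance the marker to the last entry; a short page means we
		// are done, mirroring the "fewer entries than limit" check.
		startFrom = page[len(page)-1]
		if len(page) < pageSize {
			break
		}
	}
}
```
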
d26c260041 s3: fix memory leak in ListObjectVersions with early termination (#7785) (5 days ago)

* s3: fix memory leak in ListObjectVersions with early termination

  This fixes a critical memory leak in S3 versioned bucket listing operations:

  1. Add a maxCollect parameter to findVersionsRecursively() to stop collecting versions once we have enough for the response + truncation detection
  2. Add early termination checks throughout the recursive traversal to prevent scanning entire buckets when only a small number of results are requested
  3. Use clear() on tracking maps after collection to help GC reclaim memory
  4. Create a new slice with exact capacity when truncating results instead of re-slicing, which allows GC to reclaim the excess backing array memory
  5. Pre-allocate the result slice with a reasonable initial capacity to reduce reallocations during collection

  Before this fix, listing versions on a bucket with many objects and versions would load ALL versions into memory before pagination, causing OOM crashes. Fixes memory exhaustion when calling ListObjectVersions on large versioned buckets.

* s3: fix pre-allocation capacity to be consistent with maxCollect

  Address review feedback: the previous capping logic caused an inconsistency where initialCap was capped to 1000 but maxCollect was maxKeys+1, leading to unnecessary reallocations when maxKeys was 1000. Fix by:

  1. Capping maxKeys to 1000 (the S3 API limit) at the start of the function
  2. Using maxKeys+1 directly for the slice capacity, ensuring consistency with the maxCollect parameter passed to findVersionsRecursively

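The early-termination and exact-capacity ideas look roughly like the following sketch (hypothetical helper names; string items stand in for version entries):

```go
package main

import "fmt"

// collectUpTo walks items, stopping as soon as maxCollect results are
// gathered (maxKeys+1 in the commit, so truncation can be detected).
func collectUpTo(items []string, maxCollect int) []string {
	out := make([]string, 0, maxCollect) // pre-allocate to avoid regrowth
	for _, it := range items {
		if len(out) >= maxCollect {
			break // early termination: stop scanning the rest
		}
		out = append(out, it)
	}
	return out
}

// truncateExact returns the first maxKeys results in a fresh slice with
// exact capacity, so the oversized backing array can be reclaimed by GC
// (re-slicing would keep the whole array alive).
func truncateExact(results []string, maxKeys int) (page []string, truncated bool) {
	if len(results) <= maxKeys {
		return results, false
	}
	page = make([]string, maxKeys)
	copy(page, results[:maxKeys])
	return page, true
}

func main() {
	items := []string{"a", "b", "c", "d", "e"}
	maxKeys := 2
	got := collectUpTo(items, maxKeys+1)
	page, truncated := truncateExact(got, maxKeys)
	fmt.Println(page, truncated) // [a b] true
}
```
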
ef28f49ec3 fix: correctly detect missing source file during volume copy (#7784) (5 days ago)

* fix: correctly detect missing source file during volume copy

  The previous fix (commit

7920ffa98c Fix uncleanable size=0 orphans with volume.fsck -forcePurging (#7783) (5 days ago)

This is a follow-up fix to PR #7332, which partially addressed the issue. The problem is that size=0 needles are in a gray area:

- IsValid() returns false for size=0 (because size must be > 0)
- IsDeleted() returns false for size=0 (because size must be < 0 or == TombstoneFileSize)

PR #7332 only fixed 2 places, but several other places still had the same bug:

1. needle_map_memory.go:doLoading - line 43 still used oldSize.IsValid()
2. needle_map_memory.go:DoOffsetLoading - used during vacuum and incremental loading
3. needle_map_leveldb.go:generateLevelDbFile - used when generating LevelDB needle maps
4. needle_map_leveldb.go:DoOffsetLoading - used during incremental loading for LevelDB
5. needle_map/compact_map.go:delete - couldn't delete size=0 entries because:
   - The condition 'size > 0' failed for size=0
   - Even if it passed, negating 0 gives 0 (not marking as deleted)

Changes:

- Changed size.IsValid() to !size.IsDeleted() in the doLoading and DoOffsetLoading functions
- Fixed compact_map delete to use TombstoneFileSize for size=0 entries

Fixes #7293

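A toy model of the size=0 gray area; the real types.Size and TombstoneFileSize in SeaweedFS differ, so this only illustrates the condition change described above:

```go
package main

import "fmt"

// Size mimics the needle size semantics the commit describes: valid sizes
// are > 0, tombstones are negative; size == 0 is the gray area that was
// neither valid nor deleted. (Simplified; not the real SeaweedFS type.)
type Size int32

const TombstoneFileSize Size = -1

func (s Size) IsValid() bool   { return s > 0 }
func (s Size) IsDeleted() bool { return s < 0 || s == TombstoneFileSize }

func main() {
	for _, s := range []Size{10, 0, TombstoneFileSize} {
		// Old check: s.IsValid() silently dropped size=0 entries in
		// some paths while compact_map could not actually delete them
		// (negating 0 stays 0). New check: !s.IsDeleted() treats size=0
		// consistently as live, and deletes write TombstoneFileSize.
		fmt.Printf("size=%d IsValid=%v IsDeleted=%v treatAsLive=%v\n",
			s, s.IsValid(), s.IsDeleted(), !s.IsDeleted())
	}
}
```
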
93499cd944 Fix admin GUI list ordering on refresh (#7782) (5 days ago)

Sort lists of filers, volume servers, masters, and message brokers by address to ensure consistent ordering on page refresh. This fixes the non-deterministic ordering caused by iterating over Go maps with range. Fixes #7781

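The underlying pitfall is that Go randomizes map iteration order, so a page built by ranging over a map reshuffles on every refresh. Collecting the keys and sorting, as the commit does by address, restores stable output:

```go
package main

import (
	"fmt"
	"sort"
)

func main() {
	// Ranging over this map yields a different order on each run.
	servers := map[string]string{
		"10.0.0.3:8080": "volume",
		"10.0.0.1:8080": "volume",
		"10.0.0.2:8080": "volume",
	}
	addrs := make([]string, 0, len(servers))
	for addr := range servers {
		addrs = append(addrs, addr)
	}
	sort.Strings(addrs) // deterministic order for rendering
	fmt.Println(addrs)  // [10.0.0.1:8080 10.0.0.2:8080 10.0.0.3:8080]
}
```
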
44cd07f835 Update cluster_ec_volumes_templ.go (5 days ago)

95ef041bfb Fix EC Volumes page header styling to match admin theme (#7780) (5 days ago)

* Fix EC Volumes page header styling to match admin theme

  Fixes #7779. The EC Volumes page was rendering with bright Bootstrap default colors instead of the admin dark theme because it was structured as a standalone HTML document with its own DOCTYPE, head, and body tags. This fix converts the template to be a content fragment (like other properly styled templates such as cluster_ec_shards.templ) so it correctly inherits the admin.css styling when rendered within the layout.

* Address review comments: fix URL interpolation and falsy value check

  - Fix the collection filter link to use templ.URL() for proper interpolation
  - Change the updateUrl() falsy check from 'if (params[key])' to 'if (params[key] != null)' to handle 0 and false values correctly

* Address additional review comments

  - Use the erasure_coding.TotalShardsCount constant instead of a hardcoded '14' for shard count displays (lines 88 and 214)
  - Improve error handling in repairVolume() to check response.ok before parsing JSON, preventing confusing errors on non-JSON responses
  - Remove the unused totalSize variable in formatShardRangesWithSizes()
  - Simplify redundant pagination conditions

* Remove unused code: displayShardLocationsHTML, groupShardsByServerWithSizes, formatShardRangesWithSizes

  These functions and templates were defined but never called anywhere in the codebase. Removing them reduces code maintenance burden.

* Address review feedback: improve code quality

  - Add defensive JSON response validation in the repairVolume function
  - Replace O(n²) bubble sorts with Go's standard sort.Ints and sort.Slice
  - Document volume status thresholds explaining the EC recovery logic:
    - Critical: unrecoverable (more than DataShardsCount missing)
    - Degraded: high risk (more than half DataShardsCount missing)
    - Incomplete: reduced redundancy (more than half ParityShardsCount missing)
    - Minor: fully recoverable with good margin

* Fix redundant shard count display in the Healthy Volumes card

  Changed from 'Complete (14/14 shards)' to 'All 14 shards present' since the numerator and denominator were always the same value.

* Use templ.URL for the default collection link for consistency

* Fix the Clear Filter link to stay on the EC Volumes page

  Changed the href from /cluster/ec-shards to /cluster/ec-volumes so clearing the filter stays on the current page instead of navigating away.

f5c666052e feat: add S3 bucket size and object count metrics (#7776) (5 days ago)

* feat: add S3 bucket size and object count metrics

  Adds periodic collection of bucket size metrics:
  - SeaweedFS_s3_bucket_size_bytes: logical size (deduplicated across replicas)
  - SeaweedFS_s3_bucket_physical_size_bytes: physical size (including replicas)
  - SeaweedFS_s3_bucket_object_count: object count (deduplicated)

  Collection runs every 1 minute via a background goroutine that queries the filer Statistics RPC for each bucket's collection. Also adds Grafana dashboard panels for S3 Bucket Size (logical vs physical) and S3 Bucket Object Count.

* address PR comments: fix bucket size metrics collection

  1. Fix collectCollectionInfoFromMaster to use the master VolumeList API: properly query the master for topology info, use WithMasterClient to get the volume list, and correctly calculate logical vs physical size based on replication
  2. Return an error when filerClient is nil (changed from 'return nil, nil' to 'return nil, error') so the fallback to filer stats is properly triggered
  3. Implement pagination in listBucketNames: add a listBucketPageSize constant (1000), use StartFromFileName for pagination, and continue fetching until fewer entries than the limit are returned
  4. Handle the NewReplicaPlacementFromByte error and prevent division by zero: check the error return, default to 1 copy on error, and add an explicit check for copyCount == 0

* simplify bucket size metrics: remove the filer fallback, align with quota enforcement

  - Remove the fallback to the filer Statistics RPC
  - Use only master topology for collection info (same as s3.bucket.quota.enforce)
  - Update comments to clarify this runs the same collection logic as quota enforcement
  - Simplify the code by removing collectBucketSizeFromFilerStats

* use s3a.option.Masters directly instead of querying the filer

* address PR comments: fix dashboard overlaps and improve metrics collection

  Grafana dashboard fixes:
  - Fix overlapping panels 55 and 59 in grafana_seaweedfs.json (moved 59 to y=30)
  - Fix a grid collision in the k8s dashboard (moved panel 72 to y=48)
  - Aggregate bucket metrics with max() by (bucket) for multi-instance S3 gateways

  Go code improvements:
  - Add graceful shutdown support via context cancellation
  - Use a ticker instead of time.Sleep for better shutdown responsiveness
  - Distinguish EOF from actual errors in stream handling

* improve bucket size metrics: multi-master failover and proper error handling

  - The initial delay now respects context cancellation using select with time.After
  - Use WithOneOfGrpcMasterClients for multi-master failover instead of hardcoding Masters[0]
  - Properly propagate stream errors instead of just logging them (EOF vs real errors)

* improve bucket size metrics: distributed lock and volume ID deduplication

  - Add a distributed lock (LiveLock) so only one S3 instance collects metrics at a time
  - Add an IsLocked() method to LiveLock for checking lock status
  - Fix deduplication: use volume ID tracking instead of dividing by copyCount. The previous approach gave wrong results if replicas were missing; now seen volume IDs are tracked and each volume counts only once. Physical size still includes all replicas for accurate disk usage reporting.

* rename the lock to s3.leader

* simplify: remove the StartBucketSizeMetricsCollection wrapper function

* fix data race: use atomic operations for the LiveLock.isLocked field

  - Change isLocked from bool to int32
  - Use atomic.LoadInt32/StoreInt32 for all reads/writes
  - Sync the shared isLocked field in the StartLongLivedLock goroutine

* add a nil check for topology info to prevent a panic

* fix bucket metrics: use Ticker for consistent intervals, fix pagination logic

  - Use time.Ticker instead of time.After for consistent interval execution
  - Fix pagination: count all entries (not just directories) for proper termination
  - Update lastFileName for all entries to prevent pagination issues

* address PR comments: remove a redundant atomic store, propagate context

  - Remove the redundant atomic.StoreInt32 in StartLongLivedLock (AttemptToLock already sets it)
  - Propagate context through metrics collection for proper cancellation on shutdown: collectAndUpdateBucketSizeMetrics now accepts ctx, collectCollectionInfoFromMaster uses ctx for the VolumeList RPC, and listBucketNames uses ctx for the ListEntries RPC

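A sketch of the volume-ID deduplication described above, with a hypothetical replica type (the real code walks the master topology):

```go
package main

import "fmt"

type volumeReplica struct {
	ID   uint32
	Size uint64
}

// accumulate mirrors the commit's deduplication: each volume ID counts once
// toward logical size, while every replica counts toward physical size.
func accumulate(replicas []volumeReplica) (logical, physical uint64) {
	seen := make(map[uint32]bool)
	for _, r := range replicas {
		physical += r.Size
		if !seen[r.ID] {
			seen[r.ID] = true
			logical += r.Size
		}
	}
	return
}

func main() {
	// Volume 1 has two replicas. Dividing by a fixed copy count would be
	// wrong if a replica were missing; dedup by volume ID is robust.
	replicas := []volumeReplica{{1, 100}, {1, 100}, {2, 50}}
	logical, physical := accumulate(replicas)
	fmt.Println(logical, physical) // 150 250
}
```
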
4dcd33bbc8 fix: handle missing idx file for empty volumes during copy (#7777) (#7778) (5 days ago)

When copying/evacuating empty volumes, the .idx file may not exist on disk (this is allowed by checkIdxFile for volumes with only a super block in .dat). This fix:

1. Uses os.IsNotExist() instead of err == os.ErrNotExist for proper wrapped-error checking in CopyFile
2. Treats a missing source file as success when StopOffset == 0 (empty file)
3. Allows checkCopyFiles to pass when the idx file doesn't exist but IdxFileSize == 0 (empty volume)

Fixes volumeServer.evacuate and volume.fix.replication for empty volumes.

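The first point is a general Go idiom: errors returned by the os package are usually wrapped in *PathError, so a direct comparison against the sentinel never matches and an unwrapping helper is required:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

func main() {
	_, err := os.Open("/no/such/file")
	// Direct comparison fails: err is a *PathError wrapping ENOENT,
	// not the bare sentinel value.
	fmt.Println(err == os.ErrNotExist) // false
	// os.IsNotExist and errors.Is unwrap, which is what the fix relies on.
	fmt.Println(os.IsNotExist(err))             // true
	fmt.Println(errors.Is(err, os.ErrNotExist)) // true
}
```
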
93d0779318 fix: add S3 bucket traffic sent metric tracking (#7774) (5 days ago)

* fix: add S3 bucket traffic sent metric tracking

  The BucketTrafficSent() function was defined but never called, causing the S3 Bucket Traffic Sent Grafana dashboard panel to not display data. Added BucketTrafficSent() calls in the streaming functions:
  - streamFromVolumeServers: for inline and chunked content
  - streamFromVolumeServersWithSSE: for encrypted range and full object requests

  The traffic received metric already worked because BucketTrafficReceived() was properly called in putToFiler() for both regular and multipart uploads.

* feat: add an S3 API Calls per Bucket panel to the Grafana dashboards

  Added a new panel showing API calls per bucket using the existing SeaweedFS_s3_request_total metric aggregated by bucket. Updated all Grafana dashboard files:
  - other/metrics/grafana_seaweedfs.json
  - other/metrics/grafana_seaweedfs_k8s.json
  - other/metrics/grafana_seaweedfs_heartbeat.json
  - k8s/charts/seaweedfs/dashboards/seaweedfs-grafana-dashboard.json

* address PR comments: use actual bytes written for traffic metrics

  - Use the actual bytes written from w.Write instead of the expected size for inline content
  - Add a countingWriter wrapper to track actual bytes for chunked content streaming
  - Update streamDecryptedRangeFromChunks to return actual bytes written for SSE
  - Remove a redundant nil check that caused a linter warning
  - Fix duplicate panel id 86 in grafana_seaweedfs.json (changed to 90)
  - Fix overlapping panel positions in grafana_seaweedfs_k8s.json (rebalanced x positions)

* fix grafana k8s dashboard: rebalance S3 panels to avoid overlap

  - Panel 86 (S3 API Calls per Bucket): w:6, x:0, y:15
  - Panel 67 (S3 Request Duration 95th): w:6, x:6, y:15
  - Panel 68 (S3 Request Duration 80th): w:6, x:12, y:15
  - Panel 65 (S3 Request Duration 99th): w:6, x:18, y:15

  All four S3 panels now fit in a single row (y:15) with width 6 each. The Filer row header at y:22 and subsequent panels remain correctly positioned.

* add input validation and clarify comments in adjustRangeForPart

  - Add validation that partStartOffset <= partEndOffset at function start
  - Add clarifying comments for suffix-range handling, where clientEnd temporarily holds the suffix length before being reassigned

* align pluginVersion for panel 86 to 10.3.1 in the k8s dashboard

* track partial writes for accurate egress traffic accounting

  - Change the condition from 'err == nil' to 'written > 0' for inline content
  - Move BucketTrafficSent before the error check for chunked content streaming
  - Track traffic even on partial SSE range writes
  - Track traffic even on partial full SSE object copies

  This ensures egress traffic is counted even when writes fail partway through, providing more accurate bandwidth metrics.

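A countingWriter of the kind the commit describes is a few lines in Go; this sketch is generic, not the SeaweedFS type:

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// countingWriter wraps an io.Writer and records how many bytes were
// actually written, so egress metrics reflect partial writes too.
type countingWriter struct {
	w io.Writer
	n int64
}

func (c *countingWriter) Write(p []byte) (int, error) {
	n, err := c.w.Write(p)
	c.n += int64(n) // count what really went out, even on error
	return n, err
}

func main() {
	cw := &countingWriter{w: os.Stdout}
	io.WriteString(cw, "hello\n")
	fmt.Println("bytes written:", cw.n) // 6
	// A handler would then report cw.n to the traffic metric even if
	// the copy aborted midway, matching the "written > 0" change.
}
```
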
d0cc51e7c6 chore(deps): bump io.netty:netty-codec-http from 4.1.125.Final to 4.1.129.Final in /test/java/spark (#7773) (5 days ago)

Bumps [io.netty:netty-codec-http](https://github.com/netty/netty) from 4.1.125.Final to 4.1.129.Final ([commits](https://github.com/netty/netty/compare/netty-4.1.125.Final...netty-4.1.129.Final)).

c6e07429e7 chore(deps): bump golang.org/x/image from 0.33.0 to 0.34.0 (#7764) (5 days ago)

Bumps [golang.org/x/image](https://github.com/golang/image) from 0.33.0 to 0.34.0 ([commits](https://github.com/golang/image/compare/v0.33.0...v0.34.0)), plus a go.mod tidy.

b53e50485f s3: warm bucket config cache on startup for multi-filer consistency (#7772) (5 days ago)

* s3: warm bucket config cache on startup for multi-filer consistency

  In multi-filer clusters, the bucket configuration cache (storing Object Lock, versioning, and other settings) was not being pre-populated on S3 API server startup. This caused issues where:

  1. After a server restart, Object Lock and versioning settings appeared lost until the bucket was accessed (lazy loading)
  2. In multi-filer clusters, race conditions during bucket creation could result in inconsistent Object Lock configuration

  This fix warms the bucketConfigCache during BucketRegistry initialization, ensuring all bucket configurations (including Object Lock and versioning) are immediately available after restart without waiting for first access. The fix piggybacks on the existing BucketRegistry.init(), which already iterates through all buckets, adding a call to update the config cache with each bucket's extended attributes.

* s3: add visibility logging for bucket config cache warming

  - Add bucket count tracking during initialization
  - Log an error if bucket listing fails
  - Log an INFO message with the count of warmed buckets on successful init

  This improves observability for the cache warming process and addresses review feedback about error handling visibility.

* s3: fix bucket deletion not invalidating the config cache

  Bug fix: the metadata subscription handler had an early return when NewEntry was nil, which skipped the onBucketMetadataChange call for bucket deletions. This caused deleted buckets to remain in the config cache. The fix moves onBucketMetadataChange before the nil check so it's called for all events (create, update, delete). The IAM and circuit breaker updates still require NewEntry content, so they remain after the check.

* s3: handle config file deletions for IAM and circuit breaker

  Refactored the metadata subscription handlers to properly handle all event types (create, update, delete) for the IAM and circuit breaker configs:

  - Renamed onIamConfigUpdate -> onIamConfigChange
  - Renamed onCircuitBreakerConfigUpdate -> onCircuitBreakerConfigChange
  - Both handlers now check for deletions (newEntry == nil && oldEntry != nil)
  - On config file deletion, reset to an empty config by loading empty bytes
  - Simplified processEventFn to call all handlers unconditionally
  - Each handler checks for nil entries internally

  This ensures that deleting identity.json or circuit_breaker.json will clear the in-memory config rather than leaving stale data.

* s3: restore NewParentPath handling for rename/move operations

  The directory resolution logic was accidentally removed. This restores the check for NewParentPath, which is needed when files are renamed or moved; in such cases, NewParentPath contains the destination directory, which should be used for directory matching in the handlers.

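The ordering fix for the deletion bug can be illustrated with a stripped-down event dispatcher (hypothetical shapes; the real handler works on filer metadata events):

```go
package main

import "fmt"

type entry struct{ name string }

// handleEvent dispatches onBucketMetadataChange before any early return on
// a nil new entry; the bug was returning early and leaving deleted buckets
// in the config cache.
func handleEvent(oldEntry, newEntry *entry) {
	onBucketMetadataChange(oldEntry, newEntry) // create, update, and delete

	if newEntry == nil {
		return // nothing further to read content from (IAM, circuit breaker)
	}
	fmt.Println("update content-based caches from", newEntry.name)
}

func onBucketMetadataChange(oldEntry, newEntry *entry) {
	switch {
	case newEntry == nil && oldEntry != nil:
		fmt.Println("bucket deleted: evict", oldEntry.name, "from config cache")
	case newEntry != nil:
		fmt.Println("bucket created/updated:", newEntry.name)
	}
}

func main() {
	handleEvent(&entry{"bucket1"}, nil) // deletion now evicts the cache entry
	handleEvent(nil, &entry{"bucket2"}) // creation still warms the cache
}
```
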
5a03b5538f filer: improve FoundationDB performance by disabling batch by default (#7770) (5 days ago)

* filer: improve FoundationDB performance by disabling batch by default

  This PR addresses a performance issue where the FoundationDB filer was achieving only ~757 ops/sec with 12 concurrent S3 clients, despite FDB being capable of 17,000+ ops/sec.

  Root cause: the write batcher was waiting up to 5ms for each operation to batch, even though S3 semantics require waiting for durability confirmation. This added artificial latency that defeated the purpose of batching.

  Changes:
  - Disable write batching by default (batch_enabled = false)
  - Each write now commits immediately in its own transaction
  - Reduce the batch interval from 5ms to 1ms when batching is enabled
  - Add a batch_enabled config option to toggle the behavior
  - Improve the batcher to collect available ops without blocking
  - Add benchmarks comparing batch vs no-batch performance

  Benchmark results (16 concurrent goroutines):
  - With batch: 2,924 ops/sec (342,032 ns/op)
  - Without batch: 4,625 ops/sec (216,219 ns/op)
  - Improvement: +58% faster

  Configuration: the default is batch_enabled = false (optimal for S3 PUT latency); for bulk ingestion, set batch_enabled = true. Also fixes the ARM64 Docker test setup (shell compatibility, fdbserver path).

* fix: address review comments - use an atomic counter and remove the duplicate batcher

  - Use sync/atomic.Uint64 for unique filenames in concurrent benchmarks
  - Remove duplicate batcher creation in createBenchmarkStoreWithBatching (initialize() already creates the batcher when batchEnabled=true)

* fix: add realistic default values to the benchmark store helper

  Set directoryPrefix, timeout, and maxRetryDelay to reasonable defaults for more realistic benchmark conditions.

44beb42eb9 s3: fix PutObject ETag format for multi-chunk uploads (#7771) (5 days ago)

* s3: fix PutObject ETag format for multi-chunk uploads

  Fix issue #7768: the AWS S3 SDK for Java fails with 'Invalid base 16 character: -' when performing PutObject on files that are internally auto-chunked. The issue was that SeaweedFS returned a composite ETag format (<md5hash>-<count>) for regular PutObject when the file was split into multiple chunks due to auto-chunking. However, per the AWS S3 spec, the composite ETag format should only be used for multipart uploads (the CreateMultipartUpload/UploadPart/CompleteMultipartUpload API). Regular PutObject should always return a pure MD5 hash as the ETag, regardless of how the file is stored internally. The fix ensures the MD5 hash is always stored in entry.Attributes.Md5 for regular PutObject operations, so filer.ETag() returns the pure MD5 hash instead of falling back to the ETagChunks() composite format.

* test: add comprehensive ETag format tests for issue #7768

  Go tests (test/s3/etag/):
  - TestPutObjectETagFormat_SmallFile: 1KB single chunk
  - TestPutObjectETagFormat_LargeFile: 10MB auto-chunked (critical for #7768)
  - TestPutObjectETagFormat_ExtraLargeFile: 25MB multi-chunk
  - TestMultipartUploadETagFormat: verify the composite ETag for multipart
  - TestPutObjectETagConsistency: ETag consistency across PUT/HEAD/GET
  - TestETagHexValidation: simulate AWS SDK v2 hex decoding
  - TestMultipleLargeFileUploads: stress test multiple large uploads

  Java tests (other/java/s3copier/): update pom.xml to include AWS SDK v2 (2.20.127), add ETagValidationTest.java with comprehensive SDK v2 tests, and add a README.md documenting SDK versions and test coverage. Documentation: add test/s3/SDK_COMPATIBILITY.md documenting validated SDK versions and test/s3/etag/README.md explaining test coverage. These tests ensure large-file PutObject (>8MB) returns pure MD5 ETags (not the composite format), which is required for AWS SDK v2 compatibility.

* fix: lower the Java version requirement to 11 for CI compatibility

* address CodeRabbit review comments

  - s3_etag_test.go: handle the rand.Read error, fix multipart part-count logging
  - Makefile: add an 'all' target, pass S3_ENDPOINT to the test commands
  - SDK_COMPATIBILITY.md: add a language tag to the fenced code block
  - ETagValidationTest.java: add pagination to the cleanup logic
  - README.md: clarify that the Go SDK tests are in a separate location

* ci: add s3copier ETag validation tests to the Java integration tests

  - Enable the S3 API (-s3 -s3.port=8333) in the SeaweedFS test server
  - Add an S3 API readiness check to the wait loop
  - Add a step to run ETagValidationTest from s3copier

  This ensures the fix for issue #7768 is continuously tested against AWS SDK v2 for Java in CI.

* ci: add S3 config with credentials for s3copier tests

  - Add -s3.config pointing to docker/compose/s3.json
  - Add -s3.allowDeleteBucketNotEmpty for test cleanup
  - Set the S3_ACCESS_KEY and S3_SECRET_KEY env vars for tests

* ci: pass the S3 config as Maven system properties

  Pass S3_ENDPOINT, S3_ACCESS_KEY, S3_SECRET_KEY via -D flags so they're available via System.getProperty() in the Java tests

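The two ETag shapes can be reproduced directly; this sketch shows why the composite form, with its '-' suffix, breaks hex decoding if returned for a plain PutObject:

```go
package main

import (
	"crypto/md5"
	"fmt"
)

func main() {
	part1, part2 := []byte("part one"), []byte("part two")

	// Regular PutObject: the ETag is the plain MD5 of the whole payload,
	// even if the blob is auto-chunked internally.
	whole := md5.Sum(append(append([]byte{}, part1...), part2...))
	fmt.Printf("PutObject ETag:  %x\n", whole)

	// Multipart upload: the ETag is MD5(concatenated part MD5s) plus a
	// part-count suffix, e.g. "<hex>-2". The "-" is not a hex character,
	// which is what tripped the Java SDK when this format leaked into
	// regular PutObject responses.
	p1, p2 := md5.Sum(part1), md5.Sum(part2)
	composite := md5.Sum(append(p1[:], p2[:]...))
	fmt.Printf("Multipart ETag:  %x-%d\n", composite, 2)
}
```
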
187ef65e8f Humanize output for `weed.server` by default (#7758) (5 days ago)

* Implement a `weed shell` command to return a status overview of the cluster.

  Detailed file information will be implemented in a follow-up MR. Note also that masters are currently not reporting back EC shard sizes correctly via `master_pb.VolumeEcShardInformationMessage.shard_sizes`. For example:

  ```
  > status
  cluster:
    id: topo
    status: LOCKED
    nodes: 10
  topology: 1 DC(s)s, 1 disk(s) on 1 rack(s)
  volumes:
    total: 3 volumes on 1 collections
    max size: 31457280000 bytes
    regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%)
    EC: 1 EC volumes on 14 shards (14.00 shards/volume)
  storage:
    total: 186024424 bytes
    regular volumes: 186024424 bytes
    EC volumes: 0 bytes
    raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files
  ```

* Humanize output for `weed.server` by default. Makes things more readable :)

  ```
  > cluster.status
  cluster:
    id: topo
    status: LOCKED
    nodes: 10
  topology: 1 DC, 10 disks on 1 rack
  volumes:
    total: 3 volumes, 1 collection
    max size: 32 GB
    regular: 2/80 volumes on 6 replicas, 6 writable (100%), 0 read-only (0%)
    EC: 1 EC volume on 14 shards (14 shards/volume)
  storage:
    total: 172 MB
    regular volumes: 172 MB
    EC volumes: 0 B
    raw: 516 MB on volume replicas, 0 B on EC shards
  ```

  ```
  > cluster.status --humanize=false
  cluster:
    id: topo
    status: LOCKED
    nodes: 10
  topology: 1 DC(s), 10 disk(s) on 1 rack(s)
  volumes:
    total: 3 volume(s), 1 collection(s)
    max size: 31457280000 byte(s)
    regular: 2/80 volume(s) on 6 replica(s), 5 writable (83.33%), 1 read-only (16.67%)
    EC: 1 EC volume(s) on 14 shard(s) (14.00 shards/volume)
  storage:
    total: 172128072 byte(s)
    regular volumes: 172128072 byte(s)
    EC volumes: 0 byte(s)
    raw: 516384216 byte(s) on volume replicas, 0 byte(s) on EC shards
  ```

  Also adds unit tests, and reshuffles test files handling for clarity.

d1435ead8d chore(deps): bump actions/cache from 4 to 5 (#7760) (5 days ago)

Bumps [actions/cache](https://github.com/actions/cache) from 4 to 5 ([release notes](https://github.com/actions/cache/releases), [commits](https://github.com/actions/cache/compare/v4...v5)).

a52bfb5d98 chore(deps): bump wangyoucao577/go-release-action from 1.54 to 1.55 (#7761) (5 days ago)

Bumps [wangyoucao577/go-release-action](https://github.com/wangyoucao577/go-release-action) from 1.54 to 1.55.
- [Release notes](https://github.com/wangyoucao577/go-release-action/releases)
- [Commits](

1f97eb2c9f chore(deps): bump actions/upload-artifact from 5 to 6 (#7762) (5 days ago)

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 5 to 6 ([release notes](https://github.com/actions/upload-artifact/releases), [commits](https://github.com/actions/upload-artifact/compare/v5...v6)).

49805296ff chore(deps): bump github.com/aws/aws-sdk-go-v2/credentials from 1.19.3 to 1.19.5 (#7763) (5 days ago)

Bumps [github.com/aws/aws-sdk-go-v2/credentials](https://github.com/aws/aws-sdk-go-v2) from 1.19.3 to 1.19.5.

e71ca3bbf4 chore(deps): bump github.com/ydb-platform/ydb-go-sdk/v3 from 3.121.0 to 3.122.0 (#7765) (5 days ago)

Bumps [github.com/ydb-platform/ydb-go-sdk/v3](https://github.com/ydb-platform/ydb-go-sdk) from 3.121.0 to 3.122.0 ([commits](https://github.com/ydb-platform/ydb-go-sdk/compare/v3.121.0...v3.122.0)).

4210fc08cd chore(deps): bump github.com/go-redsync/redsync/v4 from 4.14.0 to 4.15.0 (#7766) (5 days ago)

Bumps [github.com/go-redsync/redsync/v4](https://github.com/go-redsync/redsync) from 4.14.0 to 4.15.0 ([commits](https://github.com/go-redsync/redsync/compare/v4.14.0...v4.15.0)).

ca409f634b chore(deps): bump github.com/aws/aws-sdk-go-v2 from 1.40.1 to 1.41.0 (#7767) (5 days ago)

Bumps [github.com/aws/aws-sdk-go-v2](https://github.com/aws/aws-sdk-go-v2) from 1.40.1 to 1.41.0 ([commits](https://github.com/aws/aws-sdk-go-v2/compare/v1.40.1...v1.41.0)).

bcce8d164c 4.03 (6 days ago)

59a7c40043 Add keyPrefix support for TiKV store (#7756) (6 days ago)

* Add keyPrefix support for the TiKV store

  Similar to the Redis keyPrefix feature (#7299), this adds keyPrefix support for TiKV stores to enable sharing a single TiKV cluster as the metadata store for multitenant SeaweedFS clusters. Changes:

  - Add a keyPrefix field to the TikvStore struct
  - Update the Initialize function to read keyPrefix from the config
  - Add a getKey method to prepend the prefix to all keys
  - Update the generateKey, getNameFromKey, and genDirectoryKeyPrefix methods to be store receiver methods and handle key prefixing
  - Update the filer.toml scaffold with the keyPrefix configuration option

  Fixes #7752

* Fix potential slice corruption in the getKey method

  Use a new slice with proper capacity to avoid modifying the underlying array of store.keyPrefix when appending.

* Add keyPrefix validation and a defensive bounds check

  - Add validation in Initialize to reject a keyPrefix longer than 256 bytes
  - Add a bounds check in getNameFromKey to prevent a panic on malformed keys

* Apply review suggestions to weed/filer/tikv/tikv_store.go and weed/command/scaffold/filer.toml (co-authored by Copilot)

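The "slice corruption" in the second bullet is the classic Go append-aliasing bug; a self-contained demonstration plus the make-and-copy fix the commit describes (getKey here is an illustrative reconstruction, not the exact store method):

```go
package main

import "fmt"

func main() {
	prefix := []byte("tenantA/extra") // backing array has spare capacity
	keyPrefix := prefix[:8]           // "tenantA/", len 8, cap 13

	// Buggy: append has room in keyPrefix's backing array, so both keys
	// share it and the second append clobbers the first.
	k1 := append(keyPrefix, "dir1"...)
	k2 := append(keyPrefix, "dir2"...)
	fmt.Println(string(k1), string(k2)) // tenantA/dir2 tenantA/dir2

	// Fixed: allocate a fresh slice with exact capacity, then copy the
	// prefix and key into it, leaving store.keyPrefix untouched.
	getKey := func(key []byte) []byte {
		out := make([]byte, 0, len(keyPrefix)+len(key))
		out = append(out, keyPrefix...)
		return append(out, key...)
	}
	k3 := getKey([]byte("dir1"))
	k4 := getKey([]byte("dir2"))
	fmt.Println(string(k3), string(k4)) // tenantA/dir1 tenantA/dir2
}
```
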
1b1e5f69a2 Add TUS protocol support for resumable uploads (#7592) (6 days ago)

* Add TUS protocol integration tests
This commit adds integration tests for the TUS (resumable upload) protocol
in preparation for implementing TUS support in the filer.
Test coverage includes:
- OPTIONS handler for capability discovery
- Basic single-request upload
- Chunked/resumable uploads
- HEAD requests for offset tracking
- DELETE for upload cancellation
- Error handling (invalid offsets, missing uploads)
- Creation-with-upload extension
- Resume after interruption simulation
Tests are skipped in short mode and require a running SeaweedFS cluster.
* Add TUS session storage types and utilities
Implements TUS upload session management:
- TusSession struct for tracking upload state
- Session creation with directory-based storage
- Session persistence using filer entries
- Session retrieval and offset updates
- Session deletion with chunk cleanup
- Upload completion with chunk assembly into final file
Session data is stored in /.uploads.tus/{upload-id}/ directory,
following the pattern used by S3 multipart uploads.
* Add TUS HTTP handlers
Implements TUS protocol HTTP handlers:
- tusHandler: Main entry point routing requests
- tusOptionsHandler: Capability discovery (OPTIONS)
- tusCreateHandler: Create new upload (POST)
- tusHeadHandler: Get upload offset (HEAD)
- tusPatchHandler: Upload data at offset (PATCH)
- tusDeleteHandler: Cancel upload (DELETE)
- tusWriteData: Upload data to volume servers
Features:
- Supports creation-with-upload extension
- Validates TUS protocol headers
- Offset conflict detection
- Automatic upload completion when size is reached
- Metadata parsing from Upload-Metadata header
* Wire up TUS protocol routes in filer server
Add TUS handler route (/.tus/) to the filer HTTP server.
The TUS route is registered before the catch-all route to ensure
proper routing of TUS protocol requests.
TUS protocol is now accessible at:
- OPTIONS /.tus/ - Capability discovery
- POST /.tus/{path} - Create upload
- HEAD /.tus/.uploads/{id} - Get offset
- PATCH /.tus/.uploads/{id} - Upload data
- DELETE /.tus/.uploads/{id} - Cancel upload
* Improve TUS integration test setup
Add comprehensive Makefile for TUS tests with targets:
- test-with-server: Run tests with automatic server management
- test-basic/chunked/resume/errors: Specific test categories
- manual-start/stop: For development testing
- debug-logs/status: For debugging
- ci-test: For CI/CD pipelines
Update README.md with:
- Detailed TUS protocol documentation
- All endpoint descriptions with headers
- Usage examples with curl commands
- Architecture diagram
- Comparison with S3 multipart uploads
Follows the pattern established by other tests in test/ folder.
* Fix TUS integration tests and creation-with-upload
- Fix test URLs to use full URLs instead of relative paths
- Fix creation-with-upload to refresh session before completing
- Fix Makefile to properly handle test cleanup
- Add FullURL helper function to TestCluster
* Add TUS protocol tests to GitHub Actions CI
- Add tus-tests.yml workflow that runs on PRs and pushes
- Runs when TUS-related files are modified
- Automatic server management for integration testing
- Upload logs on failure for debugging
* Make TUS base path configurable via CLI
- Add -tus.path CLI flag to filer command
- TUS is disabled by default (empty path)
- Example: -tus.path=/.tus to enable at /.tus endpoint
- Update test Makefile to use -tus.path flag
- Update README with TUS enabling instructions
* Rename -tus.path to -tusBasePath with default .tus
- Rename CLI flag from -tus.path to -tusBasePath
- Default to .tus (TUS enabled by default)
- Add -filer.tusBasePath option to weed server command
- Properly handle path prefix (prepend / if missing)
* Address code review comments
- Sort chunks by offset before assembling final file
- Use chunk.Offset directly instead of recalculating
- Return error on invalid file ID instead of skipping
- Require Content-Length header for PATCH requests
- Use fs.option.Cipher for encryption setting
- Detect MIME type from data using http.DetectContentType
- Fix concurrency group for push events in workflow
- Use os.Interrupt instead of Kill for graceful shutdown in tests
* fmt
* Address remaining code review comments
- Fix potential open redirect vulnerability by sanitizing uploadLocation path
- Add language specifier to README code block
- Handle os.Create errors in test setup
- Use waitForHTTPServer instead of time.Sleep for master/volume readiness
- Improve test reliability and debugging
* Address critical and high-priority review comments
- Add per-session locking to prevent race conditions in updateTusSessionOffset
- Stream data directly to volume server instead of buffering entire chunk
- Only buffer 512 bytes for MIME type detection, then stream remaining data
- Clean up session locks when session is deleted
* Fix race condition to work across multiple filer instances
- Store each chunk as a separate file entry instead of updating session JSON
- Chunk file names encode offset, size, and fileId for atomic storage
- getTusSession loads chunks from directory listing (atomic read)
- Eliminates read-modify-write race condition across multiple filers
- Remove in-memory mutex that only worked for single filer instance
* Address code review comments: fix variable shadowing, sniff size, and test stability
- Rename path variable to reqPath to avoid shadowing path package
- Make sniff buffer size respect contentLength (read at most contentLength bytes)
- Handle Content-Length < 0 in creation-with-upload (return error for chunked encoding)
- Fix test cluster: use temp directory for filer store, add startup delay
* Fix test stability: increase cluster stabilization delay to 5 seconds
The tests were intermittently failing because the volume server needed more
time to create volumes and register with the master. Increasing the delay
from 2 to 5 seconds fixes the flaky test behavior.
* Address PR review comments for TUS protocol support
- Fix strconv.Atoi error handling in test file (lines 386, 747)
- Fix lossy fileId encoding: use base64 instead of underscore replacement
- Add pagination support for ListDirectoryEntries in getTusSession
- Batch delete chunks instead of one-by-one in deleteTusSession
* Address additional PR review comments for TUS protocol
- Fix UploadAt timestamp: use entry.Crtime instead of time.Now()
- Remove redundant JSON content in chunk entry (metadata in filename)
- Refactor tusWriteData to stream in 4MB chunks to avoid OOM on large uploads
- Pass filer.Entry to parseTusChunkPath to preserve actual upload time
* Address more PR review comments for TUS protocol
- Normalize TUS path once in filer_server.go, store in option.TusPath
- Remove redundant path normalization from TUS handlers
- Remove goto statement in tusCreateHandler, simplify control flow
* Remove unnecessary mutexes in tusWriteData
The upload loop is sequential, so uploadErrLock and chunksLock are not needed.
* Rename updateTusSessionOffset to saveTusChunk
Remove unused newOffset parameter and rename function to better reflect its purpose.
* Improve TUS upload performance and add path validation
- Reuse operation.Uploader across sub-chunks for better connection reuse
- Guard against TusPath='/' to prevent hijacking all filer routes
* Address PR review comments for TUS protocol
- Fix critical chunk filename parsing: use strings.Cut instead of SplitN
to correctly handle base64-encoded fileIds that may contain underscores
- Rename tusPath to tusBasePath for naming consistency across codebase
- Add background garbage collection for expired TUS sessions (runs hourly)
- Improve error messages with %w wrapping for better debuggability
* Address additional TUS PR review comments
- Fix tusBasePath default to use leading slash (/.tus) for consistency
- Add chunk contiguity validation in completeTusUpload to detect gaps/overlaps
- Fix offset calculation to find maximum contiguous range from 0, not just last chunk
- Return 413 Request Entity Too Large instead of silently truncating content
- Document tusChunkSize rationale (4MB balances memory vs request overhead)
- Fix Makefile xargs portability by removing GNU-specific -r flag
- Add explicit -tusBasePath flag to integration test for robustness
- Fix README example to use /.uploads/tus path format
* Revert log_buffer changes (moved to separate PR)
* Minor style fixes from PR review
- Simplify tusBasePath flag description to use example format
- Add 'TUS upload' prefix to session not found error message
- Remove duplicate tusChunkSize comment
- Capitalize warning message for consistency
- Add grep filter to Makefile xargs for better empty input handling
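A sketch of the chunk-name scheme the review thread describes (offset, size, base64url-encoded fileId), with hypothetical helper names. Base64url output can itself contain "_" (it is the encoding's 63rd alphabet character), which is why only the first two underscores may be treated as separators:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strconv"
	"strings"
)

// chunkFileName encodes offset, size, and fileId into a chunk entry name.
// The fileId is base64url-encoded because raw file IDs contain characters
// that would break naive splitting.
func chunkFileName(offset, size int64, fileId string) string {
	enc := base64.RawURLEncoding.EncodeToString([]byte(fileId))
	return fmt.Sprintf("%d_%d_%s", offset, size, enc)
}

// parseChunkFileName uses strings.Cut twice, so only the first two "_" act
// as separators; everything after them is the encoded fileId, even if it
// contains more underscores.
func parseChunkFileName(name string) (offset, size int64, fileId string, err error) {
	offStr, rest, ok := strings.Cut(name, "_")
	if !ok {
		return 0, 0, "", fmt.Errorf("malformed chunk name %q", name)
	}
	sizeStr, enc, ok := strings.Cut(rest, "_")
	if !ok {
		return 0, 0, "", fmt.Errorf("malformed chunk name %q", name)
	}
	if offset, err = strconv.ParseInt(offStr, 10, 64); err != nil {
		return
	}
	if size, err = strconv.ParseInt(sizeStr, 10, 64); err != nil {
		return
	}
	raw, err := base64.RawURLEncoding.DecodeString(enc)
	return offset, size, string(raw), err
}

func main() {
	name := chunkFileName(4194304, 1048576, "3,01637037d6")
	fmt.Println(name)
	fmt.Println(parseChunkFileName(name))
}
```

Because each chunk's offset and size live in its filename, a session's current offset can be recomputed from a directory listing alone, which is what removes the read-modify-write race across multiple filers.
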
221b352593 fix: handle ResumeFromDiskError gracefully in LoopProcessLogData (#7753) (6 days ago)

When ReadFromBuffer returns ResumeFromDiskError, the function now:

- Attempts to read from disk if ReadFromDiskFn is available
- Checks whether the client is still connected via waitForDataFn
- Waits for a notification or a short timeout before retrying
- Continues the loop instead of immediately returning the error

This fixes TestNewLogBufferFirstBuffer, which was failing because the function returned too early, before data was available in the buffer.

32a9a1f46f fix: sync EC volume files before copying to fix deleted needles not being marked when decoding (#7755) (6 days ago)

* fix: sync EC volume files before copying (#7751)

  When a file is deleted from an EC volume, the deletion is written to both the .ecx and .ecj files. However, these writes were not synced to disk before the files were copied during ec.decode. This caused the copied files to miss the deletion markers, resulting in 'leaked' space where deleted files were not properly tracked after decoding. This fix:

  1. Adds a Sync() method to EcVolume that flushes the .ecx and .ecj files to disk without closing them
  2. Calls Sync() in CopyFile before copying EC volume files, ensuring all deletions are visible to the copy operation

  Fixes #7751

* test: add integration tests for EC volume deletion sync (issue #7751)

  Add comprehensive tests to verify that deleted needles are properly visible after EcVolume.Sync() is called:

  1. TestWriteIdxFileFromEcIndex_PreservesDeletedNeedles - verifies that WriteIdxFileFromEcIndex preserves deletion markers from .ecx files when generating .idx files
  2. TestWriteIdxFileFromEcIndex_ProcessesEcjJournal - verifies that deletions from the .ecj journal file are correctly appended to the generated .idx file
  3. TestEcxFileDeletionVisibleAfterSync - verifies that MarkNeedleDeleted changes are visible after Sync()
  4. TestEcxFileDeletionWithSeparateHandles - tests that synced changes are visible across separate file handles
  5. TestEcVolumeSyncEnsuresDeletionsVisible - integration test for the full EcVolume.DeleteNeedleFromEcx + Sync() workflow that validates the fix for issue #7751

* refactor: log sync errors in EcVolume.Sync() instead of ignoring them

  Per code review feedback: sync errors could reintroduce the bug this PR fixes, so logging warnings helps with debugging.

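The new Sync() boils down to calling os.File.Sync on the open index and journal handles; a minimal illustration with the warning-not-fatal error handling the review asked for (file names are placeholders):

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Flush pending writes to disk before another process copies the
	// file, without closing the handle. os.File.Sync is the primitive
	// behind an EcVolume-style Sync() method.
	f, err := os.CreateTemp("", "demo-*.ecj")
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()

	if _, err := f.Write([]byte("deletion marker")); err != nil {
		panic(err)
	}
	if err := f.Sync(); err != nil {
		// Log rather than ignore: a silent sync failure would
		// reintroduce the missing-deletion-marker bug.
		fmt.Println("WARN: sync failed:", err)
	}
	fmt.Println("flushed", f.Name(), "before copy")
}
```
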
60649460b2 fix: default policy storeType to memory when not specified (#7754) (6 days ago)

When loading the IAM config from JSON, if the policy section exists but storeType is not specified, default to 'memory' instead of 'filer'. This ensures that policies defined in JSON config files are properly loaded into memory for test environments and standalone setups that don't rely on the filer for policy persistence.

f64ce759e0 feat(iam): add SetUserStatus and UpdateAccessKey actions (#7750) (6 days ago)

Add the ability to enable/disable users and access keys without deleting them (#7745).

Protocol buffer updates:
- Add a `disabled` field (bool) to the Identity message for user status: false (default) = enabled, true = disabled. No backward-compatibility hack is needed since the zero value is correct.
- Add a `status` field (string: Active/Inactive) to the Credential message

New IAM actions:
- SetUserStatus: enable or disable a user (requires admin)
- UpdateAccessKey: change an access key's status (self-service or admin)

Behavior:
- Disabled users: all API requests return AccessDenied
- Inactive access keys: signature validation fails
- The status check happens early in the auth flow for performance
- Backward compatible: existing configs default to enabled (disabled=false)

Use cases:
1. Temporary suspension: disable user access during an investigation
2. Key rotation: deactivate the old key before deletion
3. Offboarding: disable rather than delete for audit purposes
4. Emergency response: quickly disable compromised credentials

Fixes #7745

7ed7578424 fix(ec.decode): purge EC shards when volume is empty (#7749) (6 days ago)

* fix(ec.decode): purge EC shards when the volume is empty

  When an EC volume has no live entries (all deleted), ec.decode should not generate an empty normal volume. Instead, treat decode as a no-op and allow the shard purge to proceed cleanly. Fixes #7748

* chore: address PR review comments
* test: cover a live EC index + avoid a magic string
* chore: harden empty-EC handling
  - Make shard cleanup best-effort (collect errors)
  - Remove unreachable EOF handling in HasLiveNeedles
  - Add an empty ecx test case
  - Share the no-live-entries substring between server and client
* perf: parallelize EC shard unmount/delete across locations
* refactor: combine unmount+delete into a single goroutine per location
* refactor: use errors.Join for multi-error aggregation
* refactor: use the existing ErrorWaitGroup for parallel execution
* fix: capture loop variables + clarify SuperBlockSize safety

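The parallel, best-effort cleanup plus errors.Join aggregation mentioned above might look like this sketch (the real code uses the project's ErrorWaitGroup; names here are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// cleanupShards unmounts/deletes shards on every location in parallel,
// collecting failures instead of aborting on the first one.
func cleanupShards(locations []string, do func(loc string) error) error {
	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		errs []error
	)
	for _, loc := range locations {
		loc := loc // capture loop variable (pre-Go 1.22 pitfall the PR notes)
		wg.Add(1)
		go func() {
			defer wg.Done()
			if err := do(loc); err != nil {
				mu.Lock()
				errs = append(errs, fmt.Errorf("%s: %w", loc, err))
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return errors.Join(errs...) // nil when every location succeeded
}

func main() {
	err := cleanupShards([]string{"srv1", "srv2"}, func(loc string) error {
		if loc == "srv2" {
			return errors.New("shard busy")
		}
		return nil
	})
	fmt.Println(err) // srv2: shard busy
}
```
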
8bdc4390a0 Update constants.go (6 days ago)

f734b2d4bf Refactor: extract common IAM logic into a shared weed/iam package (#7747) (6 days ago)

This resolves GitHub issue #7747 by extracting duplicated IAM code into a shared package that both the embedded S3 IAM and the standalone IAM use.

New shared package (weed/iam/):
- constants.go: common constants (charsets, action strings, error messages)
- helpers.go: shared helper functions (Hash, GenerateRandomString, GenerateAccessKeyId, GenerateSecretAccessKey, StringSlicesEqual, MapToStatementAction, MapToIdentitiesAction, MaskAccessKey)
- responses.go: common IAM response structs (CommonResponse, ListUsersResponse, CreateUserResponse, etc.)
- helpers_test.go: unit tests for the shared helpers

Updated files:
- weed/s3api/s3api_embedded_iam.go: use type aliases and function wrappers to the shared package, removing ~200 lines of duplicated code
- weed/iamapi/iamapi_management_handlers.go: use the shared package for constants and helper functions, removing ~100 lines of duplicated code
- weed/iamapi/iamapi_response.go: re-export types from the shared package for backwards compatibility

Benefits: a single source of truth for IAM constants and helpers, easier maintenance (changes only need to be made in one place), reduced risk of inconsistencies between the embedded and standalone IAM, and better test coverage through the shared test suite.

f41925b60b Embed IAM API into S3 server (#7740) (6 days ago)

* Embed IAM API into S3 server

  This change simplifies the S3 and IAM deployment by embedding the IAM API directly into the S3 server, following the patterns used by MinIO and Ceph RGW. Changes:

  - Add an -iam flag to the S3 server (enabled by default)
  - Create an embedded IAM API handler in the s3api package
  - Register IAM routes (POST to /) in the S3 server when enabled
  - Deprecate the standalone 'weed iam' command with a warning

  Benefits: a single binary and single port for both the S3 and IAM APIs, simpler deployment and configuration, a shared credential manager between S3 and IAM, and backward compatibility ('weed iam' still works with a deprecation warning).

  Usage:
  - weed s3 -port=8333   # S3 + IAM on the same port (default)
  - weed s3 -iam=false   # S3 only, disable embedded IAM
  - weed iam -port=8111  # deprecated, shows a warning

* Fix nil pointer panic: add the s3.iam flag to the weed server command

  The enableIam field was not initialized when running S3 via 'weed server', causing a nil pointer dereference when checking *s3opt.enableIam.

* Fix nil pointer panic: add the s3.iam flag to the weed filer command

  The enableIam field was not initialized when running S3 via 'weed filer -s3', causing a nil pointer dereference when checking *s3opt.enableIam.

* Add integration tests for the embedded IAM API

  Tests cover:
  - CreateUser, ListUsers, GetUser, UpdateUser, DeleteUser
  - CreateAccessKey, DeleteAccessKey, ListAccessKeys
  - CreatePolicy, PutUserPolicy, GetUserPolicy
  - Implicit username extraction from the authorization header
  - A full user lifecycle workflow test

  These tests validate the embedded IAM API functionality added in the S3 server, ensuring IAM operations work correctly when served from the same port as S3.

* Security: use crypto/rand for IAM credential generation

  SECURITY FIX: replace math/rand with crypto/rand for generating access keys and secret keys. Using math/rand is not cryptographically secure and can lead to predictable credentials. This change:

  1. Replaces math/rand with crypto/rand in both weed/s3api/s3api_embedded_iam.go (embedded IAM) and weed/iamapi/iamapi_management_handlers.go (standalone IAM)
  2. Removes the seededRand variable that was initialized with a time-based (predictable) seed
  3. Updates StringWithCharset/iamStringWithCharset to use crypto/rand.Int() for secure random index generation and to return an error for proper error handling
  4. Updates CreateAccessKey to handle the new error return
  5. Updates the DoActions handlers to propagate errors properly

* Fix critical bug: DeleteUserPolicy was deleting the entire user instead of the policy

  BUG FIX: DeleteUserPolicy was incorrectly deleting the entire user identity from s3cfg.Identities instead of just clearing the user's inline policy (Actions).

  Before (wrong): s3cfg.Identities = append(s3cfg.Identities[:i], s3cfg.Identities[i+1:]...)
  After (correct): ident.Actions = nil

  Also: added proper iamDeleteUserPolicyResponse / DeleteUserPolicyResponse types and fixed the return type from iamPutUserPolicyResponse to iamDeleteUserPolicyResponse. Affected files: weed/s3api/s3api_embedded_iam.go (embedded IAM), weed/iamapi/iamapi_management_handlers.go (standalone IAM), weed/iamapi/iamapi_response.go (response types).

* Add tests for DeleteUserPolicy to prevent regression

  1. TestEmbeddedIamDeleteUserPolicy - verifies that the user is NOT deleted (the identity still exists), credentials are NOT deleted, and only Actions (the policy) are cleared to nil
  2. TestEmbeddedIamDeleteUserPolicyUserNotFound - verifies a 404 is returned when the user doesn't exist

  These tests ensure the bug fixed in the previous commit (deleting the user instead of the policy) doesn't regress.

* Fix race condition: add a mutex lock to IAM DoActions

  The DoActions function performs a read-modify-write operation on the shared IAM configuration without any locking. This could lead to race conditions and data loss if multiple requests modify the IAM config concurrently. Added a mutex lock at the start of DoActions in both the embedded and standalone IAM. The lock protects the entire read-modify-write cycle: 1. GetS3ApiConfiguration (read), 2. modify s3cfg based on the action, 3. PutS3ApiConfiguration (write).

* Fix action comparison and document the CreatePolicy limitation

  1. Replace reflect.DeepEqual with an order-independent string slice comparison (added iamStringSlicesEqual/stringSlicesEqual helpers) to prevent duplicate policy statements when actions are in a different order
  2. Document the CreatePolicy limitation in the embedded IAM: managed policies are not persisted; users should use PutUserPolicy for inline policies
  3. Fix a deadlock in the standalone IAM's CreatePolicy (removed a nested lock acquisition; DoActions already holds the lock)

  Files changed: weed/s3api/s3api_embedded_iam.go, weed/iamapi/iamapi_management_handlers.go

* Add rate limiting to the embedded IAM endpoint

  Apply circuit breaker rate limiting to the IAM endpoint to prevent abuse, plus request tracking for IAM operations. The IAM endpoint now follows the same pattern as other S3 endpoints: track() for request metrics, s3a.iam.Auth() for authentication, s3a.cb.Limit() for rate limiting.

* Fix handleImplicitUsername to properly look up the username from the AccessKeyId

  According to the AWS spec, when UserName is not specified in an IAM request, IAM should determine the username implicitly based on the AccessKeyId signing the request. Previously, the code incorrectly extracted s[2] (the region field) from the SigV4 credential string as the username. This fix extracts the AccessKeyId from s[0] of the credential string, looks it up in the credential store using LookupByAccessKey, and uses the identity's Name field as the username if found. Also: added an exported LookupByAccessKey wrapper method to IdentityAccessManagement, updated tests to verify the correct lookup behavior, and applied the fix to both the embedded and standalone IAM implementations.

* Fix CreatePolicy to not trigger an unnecessary save

  CreatePolicy validates the policy document and returns metadata but does not actually store the policy (SeaweedFS uses inline policies attached via PutUserPolicy). However, 'changed' was left as true, triggering an unnecessary save operation. Set changed = false after successful CreatePolicy validation in both the embedded and standalone IAM implementations.

* Improve embedded IAM test quality

  - Remove unused mock types (mockCredentialManager, mockEmbeddedIamApi)
  - Use proto.Clone instead of proto.Merge for proper deep-copy semantics
  - Replace brittle regex-based XML error extraction with proper XML unmarshalling, and remove the unused regexp import
  - Add state and field assertions to tests: CreateUser (verify the username in the response and the user persisted in config), ListUsers (verify the response contains the expected users), GetUser (verify the username in the response), CreatePolicy (verify the policy metadata in the response), PutUserPolicy (verify the actions were attached to the user), CreateAccessKey (verify the credentials in the response and persisted in config)

* Remove shared test state and improve executeEmbeddedIamRequest

  - Remove the package-level embeddedIamApi variable to avoid shared test state
  - Update executeEmbeddedIamRequest to accept the API instance as a parameter
  - Only call xml.Unmarshal when v != nil, making nil-v cases explicit
  - Return the unmarshal error properly instead of always returning it
  - Update all tests to create their own EmbeddedIamApiForTest instance, so each test has isolated state and no interdependencies

* Add comprehensive test coverage for embedded IAM

  Added tests for previously uncovered functions: iamStringSlicesEqual (0% -> 100%), iamMapToStatementAction (40% -> 100%), iamMapToIdentitiesAction (30% -> 70%), iamHash (100%), iamStringWithCharset (85.7%), GetPolicyDocument (75% -> 100%), CreatePolicy (91.7% -> 100%), DeleteUser (83.3% -> 100%), GetUser (83.3% -> 100%), ListAccessKeys (55.6% -> 88.9%). New test cases cover helper functions, error handling, and edge cases.

* Document the IAM code duplication and reference GitHub issue #7747

  Added comments to both IAM implementations noting the code duplication and referencing the tracking issue for future refactoring. See: https://github.com/seaweedfs/seaweedfs/issues/7747

* Implement granular IAM authorization for self-service operations

  Previously, all IAM actions required ACTION_ADMIN permission, which was overly restrictive. This change implements AWS-like granular permissions.

  Self-service operations (allowed without admin on the user's own resources): CreateAccessKey, DeleteAccessKey, ListAccessKeys, GetUser, UpdateAccessKey.

  Admin-only operations: CreateUser, DeleteUser, UpdateUser, PutUserPolicy, GetUserPolicy, DeleteUserPolicy, CreatePolicy, ListUsers, and any operation on other users.

  The new AuthIam middleware:
  1. Authenticates the request (signature verification)
  2. Parses the IAM Action and target UserName
  3. For self-service actions, allows if the user is operating on their own resources
  4. For all other actions, or operations on other users, requires admin

* Fix a misleading comment in the standalone IAM CreatePolicy

  The comment incorrectly stated that CreatePolicy only validates the policy document. In the standalone IAM server, CreatePolicy actually persists the policy via iama.s3ApiConfig.PutPolicies(). The changed flag is false because it doesn't modify s3cfg.Identities, not because nothing is stored.

* Simplify IAM auth and add RequestId to responses

  - Remove the redundant ACTION_ADMIN fallback in AuthIam: the action parameter in authRequest is for permission checking, not signature verification; if auth fails with ACTION_READ, it will fail with ACTION_ADMIN too.
  - Add a SetRequestId() call before writing IAM responses for AWS compatibility. All IAM response structs embed iamCommonResponse, which has SetRequestId().

* Address code review feedback for the IAM implementation

  1. auth_credentials.go: add a documentation warning that LookupByAccessKey returns internal pointers that should not be mutated
  2. iamapi_management_handlers.go & s3api_embedded_iam.go: add input guards for StringWithCharset/iamStringWithCharset when length <= 0 or the charset is empty, to avoid runtime errors from rand.Int
  3. s3api_embedded_iam_test.go: don't ignore xml.Marshal errors in the test DoActions handler; return a proper error response if marshaling fails
  4. s3api_embedded_iam_test.go: use obviously fake access key IDs (AKIATESTFAKEKEY*) to avoid CI secret scanner false positives

* Address code review feedback for the IAM implementation (batch 2)

  1. iamapi/iamapi_management_handlers.go: redact the Authorization header log (security: avoid exposing the signature) and add a nil-guard for iama.iam before the LookupByAccessKey call
  2. iamapi/iamapi_test.go: replace real-looking access keys with obviously fake ones (AKIATESTFAKEKEY*) to avoid CI secret scanner false positives
  3. s3api/s3api_embedded_iam.go - CreateUser: validate that UserName is not empty (return ErrCodeInvalidInputException) and check for duplicate users (return ErrCodeEntityAlreadyExistsException)
  4. s3api/s3api_embedded_iam.go - CreateAccessKey: return ErrCodeNoSuchEntityException if the user doesn't exist; removed the implicit user-creation behavior
  5. s3api/s3api_embedded_iam.go - getActions: fix S3 ARN parsing for bucket/path patterns (handle mybucket, mybucket/*, mybucket/path/* correctly) and return an error if no valid actions are found in the policy
  6. s3api/s3api_embedded_iam.go - handleImplicitUsername: redact the Authorization header log and add a nil-guard for e.iam
  7. s3api/s3api_embedded_iam.go - DoActions: reload the in-memory IAM maps after credential mutations; call LoadS3ApiConfigurationFromCredentialManager after save
  8. s3api/auth_credentials.go - AuthSignatureOnly: add a new signature-only authentication method that bypasses S3 authorization checks for IAM operations; used by AuthIam to separate signature verification from IAM-specific permission checks

* Fix nil pointer dereference and error handling in IAM

  1. AuthIam: add a nil check for the identity after AuthSignatureOnly, which can return a nil identity with ErrNone for authTypePostPolicy or authTypeStreamingUnsigned; now returns ErrAccessDenied if the identity is nil
  2. writeIamErrorResponse: add missing error code cases - ErrCodeEntityAlreadyExistsException -> HTTP 409 Conflict, ErrCodeInvalidInputException -> HTTP 400 Bad Request
  3. UpdateUser: use consistent error handling - changed from direct ErrInvalidRequest to writeIamErrorResponse, so the correct HTTP status codes are returned based on the error type

* Add IAM config reload to the standalone IAM server after mutations

  Match the behavior of the embedded IAM (s3api_embedded_iam.go) by reloading the in-memory identity maps after persisting configuration changes. This ensures newly created access keys are visible to LookupByAccessKey immediately, without requiring a server restart.

* Minor improvements to test helpers and log masking

  1. iamapi_test.go: update mustMarshalJSON to use t.Helper() and t.Fatal() instead of panic() for better test diagnostics
  2.

s3api_embedded_iam.go: Mask access key in 'not found' log message to avoid exposing full access key IDs in logs * Mask access key in standalone IAM log message for consistency Match the embedded IAM version by masking the access key ID in the 'not found' log message (show only first 4 chars). |
6 days ago |
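
As a reference for the crypto/rand change described above, here is a minimal sketch of secure random string generation with crypto/rand.Int. The function name and charset are illustrative stand-ins, not the exact SeaweedFS helpers; the shape (secure index selection, error propagation, input guards) follows the commit message.

```go
package main

import (
	"crypto/rand"
	"fmt"
	"math/big"
)

// stringWithCharset returns a cryptographically secure random string.
// Unlike math/rand with a time-based seed, each index comes from
// crypto/rand.Int, and errors are propagated instead of ignored.
func stringWithCharset(length int, charset string) (string, error) {
	if length <= 0 || len(charset) == 0 {
		return "", fmt.Errorf("invalid length or empty charset")
	}
	b := make([]byte, length)
	charsetLen := big.NewInt(int64(len(charset)))
	for i := range b {
		n, err := rand.Int(rand.Reader, charsetLen)
		if err != nil {
			return "", err
		}
		b[i] = charset[n.Int64()]
	}
	return string(b), nil
}

func main() {
	key, err := stringWithCharset(20, "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567")
	if err != nil {
		panic(err)
	}
	fmt.Println(key)
}
```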
|
|
a77674ead3
|
fix: use path instead of filepath for S3 object paths on Windows (#7739)
fix: use path instead of filepath for S3 object paths on Windows (#7733) |
6 days ago |
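
The Windows fix above rests on a standard-library distinction: the `path` package always uses forward slashes, while `path/filepath` uses the OS separator (`\` on Windows), which would corrupt S3 object keys. A minimal illustration:

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// S3 object keys always use '/' as the separator, so the
	// slash-only "path" package behaves the same on every OS.
	fmt.Println(path.Join("bucket", "dir", "object.txt")) // bucket/dir/object.txt

	// By contrast, filepath.Join("bucket", "dir", "object.txt")
	// would yield `bucket\dir\object.txt` on Windows - an invalid
	// object key.
}
```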
|
|
eb860752e6
|
fix: WaitUntilConnected now respects context cancellation during sleep (#7737)
The WaitUntilConnected function was not properly respecting context cancellation when sleeping between attempts. The time.Sleep call would block for up to 200ms even after the context was cancelled. This fix uses select with time.After to immediately return when the context is cancelled, rather than waiting for the sleep to complete. This fixes flaky test behavior where the function would take ~200ms to return instead of respecting the ~100ms context timeout. |
7 days ago |
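
The pattern behind this fix, as a minimal runnable sketch - `waitUntilConnected` and the `isConnected` probe are stand-ins for illustration, not the real MasterClient API:

```go
package main

import (
	"context"
	"errors"
	"time"
)

// waitUntilConnected retries until connected, but the inter-attempt
// sleep selects on ctx.Done() so cancellation returns immediately
// instead of blocking for the full 200ms interval.
func waitUntilConnected(ctx context.Context, isConnected func() bool) error {
	for !isConnected() {
		select {
		case <-ctx.Done():
			return ctx.Err() // returns at ~the context deadline
		case <-time.After(200 * time.Millisecond):
			// fall through and re-check the connection
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()
	// Never connects, so this returns in ~100ms with DeadlineExceeded,
	// rather than the ~200ms a bare time.Sleep would impose.
	err := waitUntilConnected(ctx, func() bool { return false })
	if !errors.Is(err, context.DeadlineExceeded) {
		panic("expected deadline exceeded")
	}
}
```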
|
|
51c2ab0107
|
fix: admin UI bucket deletion with filer group configured (#7735)
|
7 days ago |
|
|
f70cd05404
|
fix: CORS wildcard subdomain matching cache race condition (#7736)
test: add HTTPS test cases for CORS wildcard subdomain matching

This adds comprehensive test coverage for HTTPS subdomain wildcard matching in TestMatchesOrigin:
- https exact match
- https no match
- https wildcard subdomain match
- https wildcard subdomain no match (base domain)
- https wildcard subdomain no match (different domain)
- protocol mismatch tests (http pattern vs https origin and vice versa)

The matchWildcard function was already working correctly - this just adds test coverage for the HTTPS cases that were previously untested.

Note: The cache invalidation is already handled synchronously by setBucketMetadata(), which is called via:
- UpdateBucketCORS -> UpdateBucketMetadata -> setBucketMetadata
- ClearBucketCORS -> UpdateBucketMetadata -> setBucketMetadata

Added clarifying comments to document this call chain. |
7 days ago |
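
For orientation, here is an illustrative sketch of the semantics those tests pin down - scheme must match exactly, and a `*.` wildcard matches subdomains but never the bare base domain. This is a standalone approximation, not the actual SeaweedFS matchWildcard implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// matchesWildcardOrigin approximates the behavior under test: for a
// pattern like "https://*.example.com", the scheme must match the
// origin's scheme, and the host must be a true subdomain of the base.
func matchesWildcardOrigin(pattern, origin string) bool {
	const marker = "://*."
	i := strings.Index(pattern, marker)
	if i < 0 {
		return pattern == origin // non-wildcard patterns: exact match
	}
	scheme, base := pattern[:i], pattern[i+len(marker):]
	if !strings.HasPrefix(origin, scheme+"://") {
		return false // protocol mismatch (http pattern vs https origin)
	}
	host := strings.TrimPrefix(origin, scheme+"://")
	// Require at least one subdomain label: "example.com" itself fails.
	return strings.HasSuffix(host, "."+base) && len(host) > len(base)+1
}

func main() {
	fmt.Println(matchesWildcardOrigin("https://*.example.com", "https://a.example.com")) // true
	fmt.Println(matchesWildcardOrigin("https://*.example.com", "https://example.com"))   // false: base domain
	fmt.Println(matchesWildcardOrigin("https://*.example.com", "http://a.example.com"))  // false: protocol mismatch
}
```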
|
|
f77e6ed2d4
|
fix: admin UI bucket delete now properly deletes collection and checks Object Lock (#7734)
* fix: admin UI bucket delete now properly deletes collection and checks Object Lock

  Fixes #7711

  The admin UI's DeleteS3Bucket function was missing two critical behaviors:
  1. It did not delete the collection from the master (unlike the s3.bucket.delete shell command), leaving orphaned volume data that caused fs.verify errors.
  2. It did not check for Object Lock protections before deletion, potentially allowing deletion of buckets with locked objects.

  Changes:
  - Add shared Object Lock checking utilities to object_lock_utils.go:
    - EntryHasActiveLock: standalone function to check if an entry has an active lock
    - HasObjectsWithActiveLocks: shared function to scan a bucket for locked objects
  - Refactor S3 API entryHasActiveLock to use the shared EntryHasActiveLock function
  - Update admin UI DeleteS3Bucket to:
    - Check Object Lock using the shared HasObjectsWithActiveLocks utility
    - Delete the collection before deleting filer entries (matching s3.bucket.delete)

* refactor: S3 API uses shared Object Lock utilities

  Removes 114 lines of duplicated code from s3api_bucket_handlers.go by having hasObjectsWithActiveLocks delegate to the shared HasObjectsWithActiveLocks function in object_lock_utils.go. Now both the S3 API and the Admin UI use the same shared utilities:
  - EntryHasActiveLock
  - HasObjectsWithActiveLocks
  - recursivelyCheckLocksWithClient
  - checkVersionsForLocksWithClient

* feat: s3.bucket.delete shell command now checks Object Lock

  Add Object Lock protection to the s3.bucket.delete shell command. If the bucket has Object Lock enabled and contains objects with active retention or legal hold, deletion is prevented.

  Also refactors the Object Lock checking utilities into a new s3_objectlock package to avoid import cycles between the shell, s3api, and admin packages. All three components now share the same logic:
  - S3 API (DeleteBucketHandler)
  - Admin UI (DeleteS3Bucket)
  - Shell command (s3.bucket.delete)

* refactor: unified Object Lock checking and consistent deletion parameters

  1. Add CheckBucketForLockedObjects() - a unified function that combines:
     - Bucket entry lookup
     - Object Lock enabled check
     - Scan for locked objects
  2. All three components now use this single function:
     - S3 API (via s3api.CheckBucketForLockedObjects)
     - Admin UI (via s3api.CheckBucketForLockedObjects)
     - Shell command (via s3_objectlock.CheckBucketForLockedObjects)
  3. Aligned deletion parameters across all components:
     - isDeleteData: false (collection already deleted separately)
     - isRecursive: true
     - ignoreRecursiveError: true

* fix: properly handle non-EOF errors in Recv() loops

  The Recv() loops in recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient were breaking on any error, which could hide real stream errors and incorrectly report 'no locks found'. Now:
  - io.EOF: break the loop (normal end of stream)
  - any other error: return it, so the caller knows the stream failed

* fix: address PR review comments

  1. Add path traversal protection - validate entry names before building subdirectory paths. Skip entries with empty names, '.', '..', or names containing path separators.
  2. Use an exact match for the .versions folder instead of HasSuffix(), to avoid mistakenly matching unrelated directories like 'foo.versions'.
  3. Replace path.Join with simple string concatenation, since entry names are now validated.

* refactor: extract paginateEntries helper to reduce duplication

  The recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient functions shared significant structural similarity. Extracted a generic paginateEntries helper that:
  - Handles pagination logic (lastFileName tracking, Limit)
  - Handles stream receiving with proper EOF vs error handling
  - Validates entry names (path traversal protection)
  - Calls a processEntry callback for the business logic

  This centralizes pagination logic and makes the code more maintainable.

* feat: add context propagation for timeout and cancellation support

  All Object Lock checking functions now accept a context.Context parameter:
  - paginateEntries(ctx, client, dir, processEntry)
  - recursivelyCheckLocksWithClient(ctx, client, dir, hasLocks, currentTime)
  - checkVersionsForLocksWithClient(ctx, client, versionsDir, hasLocks, currentTime)
  - HasObjectsWithActiveLocks(ctx, client, bucketPath)
  - CheckBucketForLockedObjects(ctx, client, bucketsPath, bucketName)

  This enables:
  - Timeout support for large bucket scans
  - Cancellation propagation from HTTP requests
  - The S3 API handler now uses r.Context() for the proper request lifecycle

* fix: address PR review comments

  1. Add a DefaultBucketsPath constant in admin_server.go instead of hardcoding "/buckets" in multiple places.
  2. Add defensive normalization in EntryHasActiveLock:
     - TrimSpace to handle whitespace around values
     - ToUpper for case-insensitive comparison of legal hold and retention mode values
     - TrimSpace on the retention date before parsing

* fix: use ctx variable consistently instead of context.Background()

  In both DeleteS3Bucket and command_s3_bucket_delete, use the ctx variable defined at the start of the function for all gRPC calls, instead of creating new context.Background() instances. |
7 days ago |
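
To make the shared lock check concrete, here is a minimal sketch of an EntryHasActiveLock-style predicate with the defensive normalization the review asked for (TrimSpace, case-insensitive comparison, date parse). The parameter names and the flat-string signature are assumptions for illustration; the real function operates on a filer entry.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// entryHasActiveLock reports whether an object is protected: legal hold
// is ON, or a retention mode is set and the retain-until date is still
// in the future. Inputs are trimmed and upper-cased defensively, as the
// fix above describes. Field layout is hypothetical, not SeaweedFS's.
func entryHasActiveLock(legalHold, retentionMode, retainUntil string, now time.Time) bool {
	if strings.ToUpper(strings.TrimSpace(legalHold)) == "ON" {
		return true
	}
	mode := strings.ToUpper(strings.TrimSpace(retentionMode))
	if mode != "GOVERNANCE" && mode != "COMPLIANCE" {
		return false
	}
	until, err := time.Parse(time.RFC3339, strings.TrimSpace(retainUntil))
	if err != nil {
		return false // unparseable date: treat as not locked
	}
	return until.After(now)
}

func main() {
	now := time.Now()
	fmt.Println(entryHasActiveLock(" on ", "", "", now))                                             // true: legal hold
	fmt.Println(entryHasActiveLock("", "compliance", now.Add(time.Hour).Format(time.RFC3339), now))  // true: active retention
	fmt.Println(entryHasActiveLock("", "GOVERNANCE", now.Add(-time.Hour).Format(time.RFC3339), now)) // false: expired
}
```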
|
|
d80d8be012
|
fix(s3): start KeepConnectedToMaster for filer discovery with filerGroup (#7732)
Fixes #7721 When S3 server is configured with a filerGroup, it creates a MasterClient to enable dynamic filer discovery. However, the KeepConnectedToMaster() background goroutine was never started, causing GetMaster() to block indefinitely in WaitUntilConnected(). This resulted in the log message: WaitUntilConnected still waiting for master connection (attempt N)... being logged repeatedly every ~20 seconds. The fix adds the missing 'go masterClient.KeepConnectedToMaster(ctx)' call to properly establish the connection to master servers. Also adds unit tests to verify WaitUntilConnected respects context cancellation. |
1 week ago |
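
A toy reproduction of the failure mode, with hypothetical types - the real fix is exactly the one-line `go masterClient.KeepConnectedToMaster(ctx)` quoted above. The point is that constructing the client alone is not enough: without the background goroutine, the blocking getter waits forever.

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// masterClient is a stand-in: getMaster blocks until a background
// keep-connected goroutine marks the connection established.
type masterClient struct{ connected chan struct{} }

func newMasterClient() *masterClient {
	return &masterClient{connected: make(chan struct{})}
}

// keepConnectedToMaster simulates dialing the master in the background.
// If this is never started, getMaster blocks indefinitely.
func (m *masterClient) keepConnectedToMaster(ctx context.Context) {
	time.Sleep(10 * time.Millisecond) // simulate connection setup
	close(m.connected)
}

func (m *masterClient) getMaster(ctx context.Context) error {
	select {
	case <-m.connected:
		return nil
	case <-ctx.Done():
		return ctx.Err()
	}
}

func main() {
	ctx := context.Background()
	m := newMasterClient()
	go m.keepConnectedToMaster(ctx) // the missing call the fix adds
	fmt.Println(m.getMaster(ctx))   // nil once connected
}
```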
|
|
36b8b2147b
|
test: add integration test for versioned object listing path fix (#7731)
* test: add integration test for versioned object listing path fix

  Add an integration test that validates the fix for GitHub discussion #7573. The test verifies that:
  - Entry names use path.Base() to get the base filename only
  - The path doubling bug is prevented when listing versioned objects
  - Logical entries are created correctly with proper attributes
  - .versions folder paths are handled correctly

  This test documents the Velero/Kopia compatibility fix and prevents regression of the path doubling bug.

* test: add Velero/Kopia integration test for versioned object listing

  Add integration tests that simulate Velero/Kopia's exact access patterns when using S3 versioning. These tests validate the fix for GitHub discussion #7573, where versioned objects with nested paths would have their paths doubled in ListObjects responses.

  Tests added:
  - TestVeleroKopiaVersionedObjectListing: tests various Kopia path patterns
  - TestVeleroKopiaGetAfterList: verifies the list-then-get workflow works
  - TestVeleroMultipleVersionsWithNestedPaths: tests multi-version objects
  - TestVeleroListVersionsWithNestedPaths: tests the ListObjectVersions API

  Each test verifies:
  1. Listed keys match the original keys without path doubling
  2. Objects can be retrieved using the listed keys
  3. Content integrity is maintained

  Related: https://github.com/seaweedfs/seaweedfs/discussions/7573

* refactor: remove old unit test, keep only Velero integration test

  Remove weed/s3api/s3api_versioning_list_test.go, as it was a simpler unit test that the comprehensive Velero integration test supersedes. The integration test in test/s3/versioning/s3_velero_integration_test.go provides better coverage by actually exercising the S3 API with real AWS SDK calls.

* refactor: use defer for response body cleanup in test loop

  Use an anonymous function with defer for getResp.Body.Close() to be more defensive against future code additions in the loop body.

* refactor: improve hasDoubledPath function clarity and efficiency

  - Fix the comment to accurately describe checking for repeated pairs
  - Tighten the outer loop bound from len(parts)-2 to len(parts)-3
  - Remove redundant bounds checks in the condition |
1 week ago |
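
For reference, here is a sketch consistent with the hasDoubledPath description above - an outer loop bounded at len(parts)-3 that looks for an immediately repeated pair of path segments, the symptom of the path-doubling bug. This is an approximation of the test helper, not its verbatim code.

```go
package main

import (
	"fmt"
	"strings"
)

// hasDoubledPath reports whether any adjacent pair of path segments
// repeats immediately (e.g. "kopia/p/q/p/q/blob"), which is what a
// doubled listing path looks like. The loop condition i+3 < len(parts)
// is equivalent to the tightened bound i < len(parts)-3, so no bounds
// checks are needed inside the comparison.
func hasDoubledPath(key string) bool {
	parts := strings.Split(key, "/")
	for i := 0; i+3 < len(parts); i++ {
		if parts[i] == parts[i+2] && parts[i+1] == parts[i+3] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasDoubledPath("kopia/p/q/p/q/blob")) // true: "p/q" repeats
	fmt.Println(hasDoubledPath("kopia/p/q/blob"))     // false
}
```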
|
|
93cca3a96b
|
volume.fsck: increase default cutoffTimeAgo from 5 minutes to 5 hours (#7730)
* volume.fsck: increase default cutoffTimeAgo from 5 minutes to 5 hours

  This change makes the fsck check more conservative by only considering chunks older than 5 hours as potential orphans. A 5 minute window was too aggressive and could incorrectly flag recently written chunks, especially in busy systems or during backup operations.

  Addresses #7649

* Update command_volume_fsck.go

* volume.fsck: add help text explaining cutoffTimeAgo parameter

* Update command_volume_fsck.go |
1 week ago |