Tree: 7eb90fdfd7
add-ec-vacuum
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
feat/s3-policy-variables-final
feature/mini-port-detection
feature/modernize-s3-tests
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-bucket-name-case-7910
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-pr-7909
fix-s3-configure-consistency
fix-s3-object-tagging-issue-7589
fix-sts-session-token-7941
fix-versioning-listing-only
ftp
gh-pages
improve-fuse-mount
improve-fuse-mount2
logrus
master
master-split-brain-protection
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
pr-7412
pr/7984
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
4.05
4.06
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
973 Commits (7eb90fdfd770716663c01b64c7c4b73078f217df)

7eb90fdfd7  Enhance EC balancing to separate parity and data shards (#8038)

* Enhance EC balancing to separate parity and data shards across racks
* Rename avoidRacks to antiAffinityRacks for clarity
* Implement server-level EC separation for parity/data shards
* Optimize EC balancing: consolidate helpers and extract two-pass selection logic
* Add comprehensive edge case tests for EC balancing logic
* Apply code review feedback: rename select_(), add divide-by-zero guard, fix comment
* Remove unused parameters from doBalanceEcShardsWithinOneRack and add explicit anti-affinity check
* Add disk-level anti-affinity for data/parity shard separation
  - Modified pickBestDiskOnNode to accept shardId and dataShardCount
  - Implemented explicit anti-affinity: 1000-point penalty for placing data shards on disks with parity (and vice versa)
  - Updated all call sites including balancing and evacuation
  - For evacuation, disabled anti-affinity by passing dataShardCount=0

(18 hours ago)
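
The disk-level anti-affinity described in this commit is essentially a scoring pass over candidate disks, with a large penalty whenever a data shard would land on a disk that already holds parity shards of the same volume (and vice versa). A minimal Go sketch of that idea, using hypothetical types and the 1000-point penalty mentioned above rather than the actual pickBestDiskOnNode signature:

```go
package main

import "fmt"

// candidateDisk is a hypothetical stand-in for a disk on a volume server.
type candidateDisk struct {
	id           int
	freeSlots    int
	dataShards   int // data shards of this EC volume already on the disk
	parityShards int // parity shards of this EC volume already on the disk
}

// pickDisk scores candidate disks and applies a large anti-affinity penalty when a
// data shard would land on a disk that already holds parity shards for the same
// volume (and vice versa). Passing dataShardCount=0 disables the check, mirroring
// the evacuation case mentioned in the commit message.
func pickDisk(disks []candidateDisk, shardID, dataShardCount int) *candidateDisk {
	const antiAffinityPenalty = 1000
	var best *candidateDisk
	bestScore := -1 << 30
	for i := range disks {
		d := &disks[i]
		score := d.freeSlots
		if dataShardCount > 0 {
			isDataShard := shardID < dataShardCount
			if isDataShard && d.parityShards > 0 {
				score -= antiAffinityPenalty
			}
			if !isDataShard && d.dataShards > 0 {
				score -= antiAffinityPenalty
			}
		}
		if score > bestScore {
			bestScore, best = score, d
		}
	}
	return best
}

func main() {
	disks := []candidateDisk{
		{id: 0, freeSlots: 5, parityShards: 2}, // already holds parity shards
		{id: 1, freeSlots: 3},                  // holds nothing for this volume
	}
	// Data shard 4 of a 10+4 volume should prefer disk 1 despite fewer free slots.
	fmt.Println("picked disk:", pickDisk(disks, 4, 10).id)
}
```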

905e7e72d9  Add remote.copy.local command to copy local files to remote storage (#8033)

* Add remote.copy.local command to copy local files to remote storage This new command solves the issue described in GitHub Discussion #8031 where files exist locally but are not synced to remote storage due to missing filer logs. Features: - Copies local-only files to remote storage - Supports file filtering (include/exclude patterns) - Dry run mode to preview actions - Configurable concurrency for performance - Force update option for existing remote files - Comprehensive error handling with retry logic Usage: remote.copy.local -dir=/path/to/mount/dir [options] This addresses the need to manually sync files when filer logs were deleted or when local files were never synced to remote storage. * shell: rename commandRemoteLocalSync to commandRemoteCopyLocal * test: add comprehensive remote cache integration tests * shell: fix forceUpdate logic in remote.copy.local The previous logic only allowed force updates when localEntry.RemoteEntry was not nil, which defeated the purpose of using -forceUpdate to fix inconsistencies where local metadata might be missing. Now -forceUpdate will overwrite remote files whenever they exist, regardless of local metadata state. * shell: fix code review issues in remote.copy.local - Return actual error from flag parsing instead of swallowing it - Use sync.Once to safely capture first error in concurrent operations - Add atomic counter to track actual successful copies - Protect concurrent writes to output with mutex to prevent interleaving - Fix path matching to prevent false positives with sibling directories (e.g., /mnt/remote2 no longer matches /mnt/remote) * test: address code review nitpicks in integration tests - Improve create_bucket error handling to fail on real errors - Fix test assertions to properly verify expected failures - Use case-insensitive string matching for error detection - Replace weak logging-only tests with proper assertions - Remove extra blank line in Makefile * test: remove redundant edge case tests Removed 5 tests that were either duplicates or didn't assert meaningful behavior: - TestEdgeCaseEmptyDirectory (duplicate of TestRemoteCopyLocalEmptyDirectory) - TestEdgeCaseRapidCacheUncache (no meaningful assertions) - TestEdgeCaseConcurrentCommands (only logs errors, no assertions) - TestEdgeCaseInvalidPaths (no security assertions) - TestEdgeCaseFileNamePatterns (duplicate of pattern tests in cache tests) Kept valuable stress tests: nested directories, special characters, very large files (100MB), many small files (100), and zero-byte files. * test: fix CI failures by forcing localhost IP advertising Added -ip=127.0.0.1 flag to both primary and remote weed mini commands to prevent IP auto-detection issues in CI environments. Without this flag, the master would advertise itself using the actual IP (e.g., 10.1.0.17) while binding to 127.0.0.1, causing connection refused errors when other services tried to connect to the gRPC port. 
* test: address final code review issues - Add proper error assertions for concurrent commands test - Require errors for invalid path tests instead of just logging - Remove unused 'match' field from pattern test struct - Add dry-run output assertion to verify expected behavior - Simplify redundant condition in remote.copy.local (remove entry.RemoteEntry check) * test: fix remote.configure tests to match actual validation rules - Use only letters in remote names (no numbers) to match validation - Relax missing parameter test expectations since validation may not be strict - Generate unique names using letter suffix instead of numbers * shell: rename pathToCopyCopy to localPath for clarity Improved variable naming in concurrent copy loop to make the code more readable and less repetitive. * test: fix remaining test failures - Remove strict error requirement for invalid paths (commands handle gracefully) - Fix TestRemoteUncacheBasic to actually test uncache instead of cache - Use simple numeric names for remote.configure tests (testcfg1234 format) to avoid validation issues with letter-only or complex name generation * test: use only letters in remote.configure test names The validation regex ^[A-Za-z][A-Za-z0-9]*$ requires names to start with a letter, but using static letter-only names avoids any potential issues with the validation. * test: remove quotes from -name parameter in remote.configure tests Single quotes were being included as part of the name value, causing validation failures. Changed from -name='testremote' to -name=testremote. * test: fix remote.configure assertion to be flexible about JSON formatting Changed from checking exact JSON format with specific spacing to just checking if the name appears in the output, since JSON formatting may vary (e.g., "name": "value" vs "name": "value"). |

(1 day ago)
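
The concurrency fixes listed above (capture the first error with sync.Once, count real successes with an atomic counter, serialize output with a mutex) follow a common Go pattern. A self-contained sketch of that pattern, independent of the actual remote.copy.local implementation:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// copyOne is a placeholder for the real per-file copy.
func copyOne(name string) error { return nil }

func main() {
	var (
		wg       sync.WaitGroup
		once     sync.Once
		firstErr error
		copied   atomic.Int64
		outputMu sync.Mutex
	)

	files := []string{"a.txt", "b.txt", "c.txt"}
	for _, f := range files {
		wg.Add(1)
		go func(name string) {
			defer wg.Done()
			if err := copyOne(name); err != nil {
				once.Do(func() { firstErr = err }) // remember only the first failure
				return
			}
			copied.Add(1) // count actual successes, not attempts
			outputMu.Lock()
			fmt.Println("copied", name) // serialize output so lines do not interleave
			outputMu.Unlock()
		}(f)
	}
	wg.Wait()
	fmt.Printf("copied %d files, first error: %v\n", copied.Load(), firstErr)
}
```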

f2e7af257d  Fix volume.fsck -forcePurging -reallyDeleteFromVolume to fail fast on filer traversal errors (#8015)

* Add TraverseBfsWithContext and fix race conditions in error handling
- Add TraverseBfsWithContext function to support context cancellation
- Fix race condition in doTraverseBfsAndSaving using atomic.Bool and sync.Once
- Improve error handling with fail-fast behavior and proper error propagation
- Update command_volume_fsck to use error-returning saveFn callback
- Enhance error messages in readFilerFileIdFile with detailed context
* refactoring
* fix error format
* atomic
* filer_pb: make enqueue return void
* shell: simplify fs.meta.save error handling
* filer_pb: handle enqueue return value
* Revert "atomic"
This reverts commit

(1 day ago)
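
A rough sketch of the fail-fast shape this commit describes, combining context cancellation, atomic.Bool, and sync.Once so traversal stops on the first callback error; the traverse function here is hypothetical, not the real TraverseBfsWithContext:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
)

// traverse walks items and stops launching work as soon as saveFn fails once.
func traverse(ctx context.Context, items []string, saveFn func(string) error) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	var (
		failed   atomic.Bool
		once     sync.Once
		firstErr error
		wg       sync.WaitGroup
	)
	for _, it := range items {
		if failed.Load() || ctx.Err() != nil {
			break // fail fast: no new work after the first error or cancellation
		}
		wg.Add(1)
		go func(item string) {
			defer wg.Done()
			if err := saveFn(item); err != nil {
				failed.Store(true)
				once.Do(func() { firstErr = err }) // race-free first-error capture
				cancel()                           // propagate to anything watching ctx
			}
		}(it)
	}
	wg.Wait()
	return firstErr
}

func main() {
	err := traverse(context.Background(), []string{"/a", "/b", "/c"}, func(p string) error {
		if p == "/b" {
			return errors.New("filer traversal error")
		}
		return nil
	})
	fmt.Println("traverse result:", err)
}
```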

f47bc8c539  Fix remote.meta.sync TTL issue (#8021) (#8030)

* Fix remote.meta.sync TTL issue (#8021)
  Remote entries should not have TTL applied because they represent files in remote storage, not local SeaweedFS files. When TTL was configured on a prefix, remote.meta.sync would create entries that immediately expired, causing them to be deleted and recreated on each sync.
  Changes:
  - Set TtlSec=0 explicitly when creating remote entries in remote.meta.sync
  - Skip TTL application in CreateEntry handler for entries with Remote field set
  Fixes #8021
* Add TTL protection for remote entries in update path
  - Set TtlSec=0 in doSaveRemoteEntry before calling UpdateEntry
  - Add server-side TTL protection in UpdateEntry handler for remote entries
  - Ensures remote entries don't inherit or preserve TTL when updated

(2 days ago)

9012069bd7  chore: execute goimports to format the code (#7983)

* chore: execute goimports to format the code
  Signed-off-by: promalert <promalert@outlook.com>
* goimports -w .
---------
Signed-off-by: promalert <promalert@outlook.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>

(1 week ago)

e10f11b480  opt: reduce ShardsInfo memory usage with bitmap and sorted slice (#7974)

* opt: reduce ShardsInfo memory usage with bitmap and sorted slice - Replace map[ShardId]*ShardInfo with sorted []ShardInfo slice - Add ShardBits (uint32) bitmap for O(1) existence checks - Use binary search for O(log n) lookups by shard ID - Maintain sorted order for efficient iteration - Add comprehensive unit tests and benchmarks Memory savings: - Map overhead: ~48 bytes per entry eliminated - Pointers: 8 bytes per entry eliminated - Total: ~56 bytes per shard saved Performance improvements: - Has(): O(1) using bitmap - Size(): O(log n) using binary search (was O(1), acceptable tradeoff) - Count(): O(1) using popcount on bitmap - Iteration: Faster due to cache locality * refactor: add methods to ShardBits type - Add Has(), Set(), Clear(), and Count() methods to ShardBits - Simplify ShardsInfo methods by using ShardBits methods - Improves code readability and encapsulation * opt: use ShardBits directly in ShardsCountFromVolumeEcShardInformationMessage Avoid creating a full ShardsInfo object just to count shards. Directly cast vi.EcIndexBits to ShardBits and use Count() method. * opt: use strings.Builder in ShardsInfo.String() for efficiency * refactor: change AsSlice to return []ShardInfo (values instead of pointers) This completes the memory optimization by avoiding unnecessary pointer slices and potential allocations. * refactor: rename ShardsCountFromVolumeEcShardInformationMessage to GetShardCount * fix: prevent deadlock in Add and Subtract methods Copy shards data from 'other' before releasing its lock to avoid potential deadlock when a.Add(b) and b.Add(a) are called concurrently. The previous implementation held other's lock while calling si.Set/Delete, which acquires si's lock. This could deadlock if two goroutines tried to add/subtract each other concurrently. * opt: avoid unnecessary locking in constructor functions ShardsInfoFromVolume and ShardsInfoFromVolumeEcShardInformationMessage now build shards slice and bitmap directly without calling Set(), which acquires a lock on every call. Since the object is local and not yet shared, locking is unnecessary and adds overhead. This improves performance during object construction. * fix: rename 'copy' variable to avoid shadowing built-in function The variable name 'copy' in TestShardsInfo_Copy shadowed the built-in copy() function, which is confusing and bad practice. Renamed to 'siCopy'. * opt: use math/bits.OnesCount32 and reorganize types 1. Replace manual popcount loop with math/bits.OnesCount32 for better performance and idiomatic Go code 2. Move ShardSize type definition to ec_shards_info.go for better code organization since it's primarily used there * refactor: Set() now accepts ShardInfo for future extensibility Changed Set(id ShardId, size ShardSize) to Set(shard ShardInfo) to support future additions to ShardInfo without changing the API. This makes the code more extensible as new fields can be added to ShardInfo (e.g., checksum, location, etc.) without breaking the Set API. * refactor: move ShardInfo and ShardSize to separate file Created ec_shard_info.go to hold the basic shard types (ShardInfo and ShardSize) for better code organization and separation of concerns. * refactor: add ShardInfo constructor and helper functions Added NewShardInfo() constructor and IsValid() method to better encapsulate ShardInfo creation and validation. Updated code to use the constructor for cleaner, more maintainable code. 
* fix: update remaining Set() calls to use NewShardInfo constructor Fixed compilation errors in storage and shell packages where Set() calls were not updated to use the new NewShardInfo() constructor. * fix: remove unreachable code in filer backup commands Removed unreachable return statements after infinite loops in filer_backup.go and filer_meta_backup.go to fix compilation errors. * fix: rename 'new' variable to avoid shadowing built-in Renamed 'new' to 'result' in MinusParityShards, Plus, and Minus methods to avoid shadowing Go's built-in new() function. * fix: update remaining test files to use NewShardInfo constructor Fixed Set() calls in command_volume_list_test.go and ec_rebalance_slots_test.go to use NewShardInfo() constructor. |

(1 week ago)
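
The core of this optimization is a one-bit-per-shard bitmap whose cardinality is a popcount. An illustrative re-implementation of that idea in Go (not the SeaweedFS ShardBits type itself):

```go
package main

import (
	"fmt"
	"math/bits"
)

// ShardBits mirrors the idea above: one bit per EC shard ID packed into a uint32.
type ShardBits uint32

func (b ShardBits) Has(id uint) bool        { return b&(1<<id) != 0 }          // O(1) existence check
func (b ShardBits) Set(id uint) ShardBits   { return b | 1<<id }
func (b ShardBits) Clear(id uint) ShardBits { return b &^ (1 << id) }
func (b ShardBits) Count() int              { return bits.OnesCount32(uint32(b)) } // O(1) popcount

func main() {
	var b ShardBits
	for _, id := range []uint{1, 5, 9} {
		b = b.Set(id)
	}
	fmt.Println(b.Has(5), b.Has(2), b.Count()) // true false 3
	b = b.Clear(5)
	fmt.Println(b.Count()) // 2
}
```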

0b5a65e00b  avoid extra missing shard warning

fix https://github.com/seaweedfs/seaweedfs/issues/7956

(2 weeks ago)

91fcc60898  Have `volume.list` account for EC shards when computing disk usage. (#7909)

(2 weeks ago)

6b98b52acc  Fix reporting of EC shard sizes from nodes to masters. (#7835)

SeaweedFS tracks EC shard sizes on topology data structures, but this information is never
relayed to master servers :( The end result is that commands reporting disk usage, such
as `volume.list` and `cluster.status`, yield incorrect figures when EC shards are present.
As an example for a simple 5-node test cluster, before...
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[1 5]
Disk hdd total size:88967096 file_count:172
DataNode 192.168.10.111:9001 total size:88967096 file_count:172
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[0 4]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9002 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[2 6]
Disk hdd total size:77267536 file_count:166
DataNode 192.168.10.111:9003 total size:77267536 file_count:166
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[3 7]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9004 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[8 9 10 11 12 13]
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:498703896 file_count:1014
DataCenter DefaultDataCenter total size:498703896 file_count:1014
total size:498703896 file_count:1014
```
...and after:
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[1 5 9] sizes:[1:8.00 MiB 5:8.00 MiB 9:8.00 MiB] total:24.00 MiB
Disk hdd total size:81761800 file_count:161
DataNode 192.168.10.111:9001 total size:81761800 file_count:161
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[11 12 13] sizes:[11:8.00 MiB 12:8.00 MiB 13:8.00 MiB] total:24.00 MiB
Disk hdd total size:88678712 file_count:170
DataNode 192.168.10.111:9002 total size:88678712 file_count:170
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[0 4 8] sizes:[0:8.00 MiB 4:8.00 MiB 8:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9003 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[2 6 10] sizes:[2:8.00 MiB 6:8.00 MiB 10:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9004 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[3 7] sizes:[3:8.00 MiB 7:8.00 MiB] total:16.00 MiB
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:511321536 file_count:993
DataCenter DefaultDataCenter total size:511321536 file_count:993
total size:511321536 file_count:993
```

(3 weeks ago)

f88b9a643d  add one example


(3 weeks ago)

2b3ff3cd05  verbose mode


(3 weeks ago)

b866907461  fs.meta.save: fix directory entry parent path in FullEntry construction (#7886)

* Checkpoint from VS Code for coding agent session
* Fix fs.meta.save to correctly save directory's own metadata
  Co-authored-by: chrislusf <1543151+chrislusf@users.noreply.github.com>
* address error
---------
Co-authored-by: Chris Lu <chris.lu@gmail.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: chrislusf <1543151+chrislusf@users.noreply.github.com>

(3 weeks ago)

4aa50bfa6a  fix: EC rebalance fails with replica placement 000 (#7812)

* fix: EC rebalance fails with replica placement 000 This PR fixes several issues with EC shard distribution: 1. Pre-flight check before EC encoding - Verify target disk type has capacity before encoding starts - Prevents encoding shards only to fail during rebalance - Shows helpful error when wrong diskType is specified (e.g., ssd when volumes are on hdd) 2. Fix EC rebalance with replica placement 000 - When DiffRackCount=0, shards should be distributed freely across racks - The '000' placement means 'no volume replication needed' because EC provides redundancy - Previously all racks were skipped with error 'shards X > replica placement limit (0)' 3. Add unit tests for EC rebalance slot calculation - TestECRebalanceWithLimitedSlots: documents the limited slots scenario - TestECRebalanceZeroFreeSlots: reproduces the 0 free slots error 4. Add Makefile for manual EC testing - make setup: start cluster and populate data - make shell: open weed shell for EC commands - make clean: stop cluster and cleanup * fix: default -rebalance to true for ec.encode The -rebalance flag was defaulting to false, which meant ec.encode would only print shard moves but not actually execute them. This is a poor default since the whole point of EC encoding is to distribute shards across servers for fault tolerance. Now -rebalance defaults to true, so shards are actually distributed after encoding. Users can use -rebalance=false if they only want to see what would happen without making changes. * test/erasure_coding: improve Makefile safety and docs - Narrow pkill pattern for volume servers to use TEST_DIR instead of port pattern, avoiding accidental kills of unrelated SeaweedFS processes - Document external dependencies (curl, jq) in header comments * shell: refactor buildRackWithEcShards to reuse buildEcShards Extract common shard bit construction logic to avoid duplication between buildEcShards and buildRackWithEcShards helper functions. * shell: update test for EC replication 000 behavior When DiffRackCount=0 (replication "000"), EC shards should be distributed freely across racks since erasure coding provides its own redundancy. Update test expectation to reflect this behavior. * erasure_coding: add distribution package for proportional EC shard placement Add a new reusable package for EC shard distribution that: - Supports configurable EC ratios (not hard-coded 10+4) - Distributes shards proportionally based on replication policy - Provides fault tolerance analysis - Prefers moving parity shards to keep data shards spread out Key components: - ECConfig: Configurable data/parity shard counts - ReplicationConfig: Parsed XYZ replication policy - ECDistribution: Target shard counts per DC/rack/node - Rebalancer: Plans shard moves with parity-first strategy This enables seaweed-enterprise custom EC ratios and weed worker integration while maintaining a clean, testable architecture. * shell: integrate distribution package for EC rebalancing Add shell wrappers around the distribution package: - ProportionalECRebalancer: Plans moves using distribution.Rebalancer - NewProportionalECRebalancerWithConfig: Supports custom EC configs - GetDistributionSummary/GetFaultToleranceAnalysis: Helper functions The shell layer converts between EcNode types and the generic TopologyNode types used by the distribution package. * test setup * ec: improve data and parity shard distribution across racks - Add shardsByTypePerRack helper to track data vs parity shards - Rewrite doBalanceEcShardsAcrossRacks for two-pass balancing: 1. 
Balance data shards (0-9) evenly, max ceil(10/6)=2 per rack 2. Balance parity shards (10-13) evenly, max ceil(4/6)=1 per rack - Add balanceShardTypeAcrossRacks for generic shard type balancing - Add pickRackForShardType to select destination with room for type - Add unit tests for even data/parity distribution verification This ensures even read load during normal operation by spreading both data and parity shards across all available racks. * ec: make data/parity shard counts configurable in ecBalancer - Add dataShardCount and parityShardCount fields to ecBalancer struct - Add getDataShardCount() and getParityShardCount() methods with defaults - Replace direct constant usage with configurable methods - Fix unused variable warning for parityPerRack This allows seaweed-enterprise to use custom EC ratios while defaulting to standard 10+4 scheme. * Address PR 7812 review comments Makefile improvements: - Save PIDs for each volume server for precise termination - Use PID-based killing in stop target with pkill fallback - Use more specific pkill patterns with TEST_DIR paths Documentation: - Document jq dependency in README.md Rebalancer fix: - Fix duplicate shard count updates in applyMovesToAnalysis - All planners (DC/rack/node) update counts inline during planning - Remove duplicate updates from applyMovesToAnalysis to avoid double-counting * test/erasure_coding: use mktemp for test file template Use mktemp instead of hardcoded /tmp/testfile_template.bin path to provide better isolation for concurrent test runs. |

(4 weeks ago)
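
The two-pass balancing in this change caps each rack at the ceiling of shards/racks, computed separately for data and parity shards. A tiny Go sketch assuming the default 10+4 scheme:

```go
package main

import "fmt"

// maxPerRack returns the ceiling of shardCount/rackCount, the per-rack cap used in
// the two-pass balancing described above (10 data shards over 6 racks -> 2,
// 4 parity shards over 6 racks -> 1).
func maxPerRack(shardCount, rackCount int) int {
	if rackCount == 0 {
		return shardCount
	}
	return (shardCount + rackCount - 1) / rackCount
}

func main() {
	const dataShards, parityShards, racks = 10, 4, 6
	fmt.Println("data cap per rack:", maxPerRack(dataShards, racks))     // 2
	fmt.Println("parity cap per rack:", maxPerRack(parityShards, racks)) // 1
}
```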

f4cdfcc5fd  Add cluster.raft.leader.transfer command for graceful leader change (#7819)

* proto: add RaftLeadershipTransfer RPC for forced leader change Add new gRPC RPC and messages for leadership transfer: - RaftLeadershipTransferRequest: optional target_id and target_address - RaftLeadershipTransferResponse: previous_leader and new_leader This enables graceful leadership transfer before master maintenance, reducing errors in filers during planned maintenance windows. Ref: https://github.com/seaweedfs/seaweedfs/issues/7527 * proto: regenerate Go files for RaftLeadershipTransfer Generated from master.proto changes. * master: implement RaftLeadershipTransfer gRPC handler Add gRPC handler for leadership transfer with support for: - Transfer to any eligible follower (when target_id is empty) - Transfer to a specific server (when target_id and target_address are provided) Uses hashicorp/raft LeadershipTransfer() and LeadershipTransferToServer() APIs. Returns the previous and new leader in the response. * shell: add cluster.raft.leader.transfer command Add weed shell command for graceful leadership transfer: - Displays current cluster status before transfer - Supports auto-selection of target (any eligible follower) - Supports targeted transfer with -id and -address flags - Provides clear feedback on success/failure with troubleshooting tips Usage: cluster.raft.leader.transfer cluster.raft.leader.transfer -id <server_id> -address <grpc_address> * master: add unit tests for raft gRPC handlers Add tests covering: - RaftLeadershipTransfer with no raft initialized - RaftLeadershipTransfer with target_id but no address - RaftListClusterServers with no raft initialized - RaftAddServer with no raft initialized - RaftRemoveServer with no raft initialized These tests verify error handling when raft is not configured. * shell: add tests for cluster.raft.leader.transfer command Add tests covering: - Command name and help text validation - HasTag returns false for ResourceHeavy - Validation of -id without -address - Argument parsing with unknown flags * master: clarify that leadership transfer requires -raftHashicorp The default raft implementation (seaweedfs/raft, a goraft fork) does not support graceful leadership transfer. This feature is only available when using hashicorp raft (-raftHashicorp=true). Update error messages and help text to make this requirement clear: - gRPC handler returns specific error for goraft users - Shell command help text notes the requirement - Added test for goraft case * test: use strings.Contains instead of custom helper Replace custom contains/containsHelper functions with the standard library strings.Contains for better maintainability. 
* shell: return flag parsing errors instead of swallowing them - Return the error from flag.Parse() instead of returning nil - Update test to explicitly assert error for unknown flags * test: document integration test scenarios for Raft leadership transfer Add comments explaining: - Why these unit tests only cover 'Raft not initialized' scenarios - What integration tests should cover (with multi-master cluster) - hashicorp/raft uses concrete types that cannot be easily mocked * fix: address reviewer feedback on tests and leader routing - Remove misleading tests that couldn't properly validate their documented behavior without a real Raft cluster: - TestRaftLeadershipTransfer_GoraftNotSupported - TestRaftLeadershipTransfer_ValidationTargetIdWithoutAddress - Change WithClient(false) to WithClient(true) for RaftLeadershipTransfer RPC to ensure the request is routed to the current leader * Improve cluster.raft.transferLeader command - Rename command from cluster.raft.leader.transfer to cluster.raft.transferLeader - Add symmetric validation: -id and -address must be specified together - Handle case where same leader is re-elected after transfer - Add test for -address without -id validation - Add docker compose file for 5-master raft cluster testing |

(4 weeks ago)
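
With -raftHashicorp enabled, the handler described here maps onto hashicorp/raft's leadership-transfer calls: transfer to any eligible follower, or to a specific server. A sketch of that call pattern, assuming an already-running *raft.Raft node (no cluster bootstrap shown):

```go
package main

import (
	"fmt"

	"github.com/hashicorp/raft"
)

// transferLeadership sketches the two cases the shell command supports:
// transfer to any eligible follower when no target is given, or to a specific
// server when both id and address are provided.
func transferLeadership(r *raft.Raft, targetID, targetAddress string) error {
	var f raft.Future
	if targetID == "" {
		f = r.LeadershipTransfer()
	} else {
		f = r.LeadershipTransferToServer(raft.ServerID(targetID), raft.ServerAddress(targetAddress))
	}
	if err := f.Error(); err != nil {
		return fmt.Errorf("leadership transfer failed: %w", err)
	}
	return nil
}

func main() {
	// Bootstrapping a real multi-master cluster is out of scope for this sketch;
	// see the docker compose setup mentioned in the commit for an end-to-end test.
	var r *raft.Raft
	if r != nil {
		_ = transferLeadership(r, "", "")
	}
	fmt.Println("sketch only: requires a running hashicorp/raft node")
}
```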

6a1b9ce8cd  Give `cluster.status` detailed file metrics for regular volumes (#7791)

* Implement a `weed shell` command to return a status overview of the cluster. Detailed file information will be implemented in a follow-up MR. Note also that masters are currently not reporting back EC shard sizes correctly, via `master_pb.VolumeEcShardInformationMessage.shard_sizes`. F.ex: ``` > status cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC(s)s, 1 disk(s) on 1 rack(s) volumes: total: 3 volumes on 1 collections max size: 31457280000 bytes regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%) EC: 1 EC volumes on 14 shards (14.00 shards/volume) storage: total: 186024424 bytes regular volumes: 186024424 bytes EC volumes: 0 bytes raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files ``` * Humanize output for `weed.server` by default. Makes things more readable :) ``` > cluster.status cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC, 10 disks on 1 rack volumes: total: 3 volumes, 1 collection max size: 32 GB regular: 2/80 volumes on 6 replicas, 6 writable (100%), 0 read-only (0%) EC: 1 EC volume on 14 shards (14 shards/volume) storage: total: 172 MB regular volumes: 172 MB EC volumes: 0 B raw: 516 MB on volume replicas, 0 B on EC shards ``` ``` > cluster.status --humanize=false cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC(s), 10 disk(s) on 1 rack(s) volumes: total: 3 volume(s), 1 collection(s) max size: 31457280000 byte(s) regular: 2/80 volume(s) on 6 replica(s), 5 writable (83.33%), 1 read-only (16.67%) EC: 1 EC volume(s) on 14 shard(s) (14.00 shards/volume) storage: total: 172128072 byte(s) regular volumes: 172128072 byte(s) EC volumes: 0 byte(s) raw: 516384216 byte(s) on volume replicas, 0 byte(s) on EC shards ``` Also adds unit tests, and reshuffles test files handling for clarity. * `cluster.status`: Add detailed file metrics for regular volumes. |

(4 weeks ago)
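
The --humanize toggle comes down to choosing between raw byte counts and short rounded figures. A sketch of that formatting choice; the unit breaks and exact strings below are illustrative, not what cluster.status actually prints:

```go
package main

import "fmt"

// humanizeBytes returns the raw byte count when humanize is false and a short
// rounded figure when it is true.
func humanizeBytes(n uint64, humanize bool) string {
	if !humanize {
		return fmt.Sprintf("%d bytes", n)
	}
	units := []string{"B", "KB", "MB", "GB", "TB"}
	f := float64(n)
	i := 0
	for f >= 1024 && i < len(units)-1 {
		f /= 1024
		i++
	}
	return fmt.Sprintf("%.0f %s", f, units[i])
}

func main() {
	fmt.Println(humanizeBytes(31457280000, false)) // 31457280000 bytes
	fmt.Println(humanizeBytes(31457280000, true))  // 29 GB
}
```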

347ed7cbfa  fix: sync replica entries before ec.encode and volume.tier.move (#7798)

* fix: sync replica entries before ec.encode and volume.tier.move (#7797)
  This addresses the data inconsistency risk in multi-replica volumes. When ec.encode or volume.tier.move operates on a multi-replica volume:
  1. Find the replica with the highest file count (the 'best' one)
  2. Copy missing entries from other replicas INTO this best replica
  3. Use this union replica for the destructive operation
  This ensures no data is lost due to replica inconsistency before EC encoding or tier moving.
  Added:
  - command_volume_replica_check.go: Core sync and select logic
  - command_volume_replica_check_test.go: Test coverage
  Modified:
  - command_ec_encode.go: Call syncAndSelectBestReplica before encoding
  - command_volume_tier_move.go: Call syncAndSelectBestReplica before moving
  Fixes #7797
* test: add integration test for replicated volume sync during ec.encode
* test: improve retry logic for replicated volume integration test
* fix: resolve JWT issue in integration tests by using empty security.toml
* address review comments: add readNeedleMeta, parallelize status fetch, fix collection param, fix test issues
* test: use collection parameter consistently in replica sync test
* fix: convert weed binary path to absolute to work with changed working directory
* fix: remove skip behavior, keep tests failing on missing binary
* fix: always check recency for each needle, add divergent replica test

(1 month ago)
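
Step 1 above, picking the replica with the highest file count as the union target, reduces to a simple max scan. A hypothetical sketch (the real logic in command_volume_replica_check.go also copies missing entries between replicas):

```go
package main

import "fmt"

// replicaStatus is a hypothetical summary of one volume replica.
type replicaStatus struct {
	server    string
	fileCount uint64
}

// selectBestReplica picks the replica with the highest file count, the
// "union target" into which missing entries are copied before a destructive
// operation such as ec.encode or volume.tier.move.
func selectBestReplica(replicas []replicaStatus) (replicaStatus, bool) {
	if len(replicas) == 0 {
		return replicaStatus{}, false
	}
	best := replicas[0]
	for _, r := range replicas[1:] {
		if r.fileCount > best.fileCount {
			best = r
		}
	}
	return best, true
}

func main() {
	best, _ := selectBestReplica([]replicaStatus{
		{"vol1:8080", 166}, {"vol2:8080", 172}, {"vol3:8080", 170},
	})
	fmt.Println("sync others into:", best.server)
}
```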

504b258258  s3: fix remote object not caching (#7790)

* s3: fix remote object not caching * s3: address review comments for remote object caching - Fix leading slash in object name by using strings.TrimPrefix - Return cached entry from CacheRemoteObjectToLocalCluster to get updated local chunk locations - Reuse existing helper function instead of inline gRPC call * s3/filer: add singleflight deduplication for remote object caching - Add singleflight.Group to FilerServer to deduplicate concurrent cache operations - Wrap CacheRemoteObjectToLocalCluster with singleflight to ensure only one caching operation runs per object when multiple clients request the same file - Add early-return check for already-cached objects - S3 API calls filer gRPC with timeout and graceful fallback on error - Clear negative bucket cache when bucket is created via weed shell - Add integration tests for remote cache with singleflight deduplication This benefits all clients (S3, HTTP, Hadoop) accessing remote-mounted objects by preventing redundant cache operations and improving concurrent access performance. Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599 * fix: data race in concurrent remote object caching - Add mutex to protect chunks slice from concurrent append - Add mutex to protect fetchAndWriteErr from concurrent read/write - Fix incorrect error check (was checking assignResult.Error instead of parseErr) - Rename inner variable to avoid shadowing fetchAndWriteErr * fix: address code review comments - Remove duplicate remote caching block in GetObjectHandler, keep only singleflight version - Add mutex protection for concurrent chunk slice and error access (data race fix) - Use lazy initialization for S3 client in tests to avoid panic during package load - Fix markdown linting: add language specifier to code fence, blank lines around tables - Add 'all' target to Makefile as alias for test-with-server - Remove unused 'util' import * style: remove emojis from test files * fix: add defensive checks and sort chunks by offset - Add nil check and type assertion check for singleflight result - Sort chunks by offset after concurrent fetching to maintain file order * fix: improve test diagnostics and path normalization - runWeedShell now returns error for better test diagnostics - Add all targets to .PHONY in Makefile (logs-primary, logs-remote, health) - Strip leading slash from normalizedObject to avoid double slashes in path --------- Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |

(1 month ago)
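
The deduplication added here relies on golang.org/x/sync/singleflight, so concurrent requests for the same remote object share one caching operation. A stripped-down sketch with a placeholder fetch instead of the filer's CacheRemoteObjectToLocalCluster:

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

var cacheGroup singleflight.Group

// cacheRemoteObject deduplicates concurrent cache requests for the same object
// key so only one fetch runs while the others wait for its result.
func cacheRemoteObject(key string) (string, error) {
	v, err, _ := cacheGroup.Do(key, func() (interface{}, error) {
		fmt.Println("fetching", key, "from remote storage") // runs once per in-flight key
		return "local-chunks-for-" + key, nil
	})
	if err != nil {
		return "", err
	}
	return v.(string), nil
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			res, _ := cacheRemoteObject("bucket/large-object")
			_ = res
		}()
	}
	wg.Wait()
}
```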

187ef65e8f  Humanize output for `weed.server` by default (#7758)

* Implement a `weed shell` command to return a status overview of the cluster. Detailed file information will be implemented in a follow-up MR. Note also that masters are currently not reporting back EC shard sizes correctly, via `master_pb.VolumeEcShardInformationMessage.shard_sizes`. F.ex: ``` > status cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC(s)s, 1 disk(s) on 1 rack(s) volumes: total: 3 volumes on 1 collections max size: 31457280000 bytes regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%) EC: 1 EC volumes on 14 shards (14.00 shards/volume) storage: total: 186024424 bytes regular volumes: 186024424 bytes EC volumes: 0 bytes raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files ``` * Humanize output for `weed.server` by default. Makes things more readable :) ``` > cluster.status cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC, 10 disks on 1 rack volumes: total: 3 volumes, 1 collection max size: 32 GB regular: 2/80 volumes on 6 replicas, 6 writable (100%), 0 read-only (0%) EC: 1 EC volume on 14 shards (14 shards/volume) storage: total: 172 MB regular volumes: 172 MB EC volumes: 0 B raw: 516 MB on volume replicas, 0 B on EC shards ``` ``` > cluster.status --humanize=false cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC(s), 10 disk(s) on 1 rack(s) volumes: total: 3 volume(s), 1 collection(s) max size: 31457280000 byte(s) regular: 2/80 volume(s) on 6 replica(s), 5 writable (83.33%), 1 read-only (16.67%) EC: 1 EC volume(s) on 14 shard(s) (14.00 shards/volume) storage: total: 172128072 byte(s) regular volumes: 172128072 byte(s) EC volumes: 0 byte(s) raw: 516384216 byte(s) on volume replicas, 0 byte(s) on EC shards ``` Also adds unit tests, and reshuffles test files handling for clarity. |

(1 month ago)

7ed7578424  fix(ec.decode): purge EC shards when volume is empty (#7749)

* fix(ec.decode): purge EC shards when volume is empty
  When an EC volume has no live entries (all deleted), ec.decode should not generate an empty normal volume. Instead, treat decode as a no-op and allow shard purge to proceed cleanly.
  Fixes: #7748
* chore: address PR review comments
* test: cover live EC index + avoid magic string
* chore: harden empty-EC handling
  - Make shard cleanup best-effort (collect errors)
  - Remove unreachable EOF handling in HasLiveNeedles
  - Add empty ecx test case
  - Share no-live-entries substring between server/client
* perf: parallelize EC shard unmount/delete across locations
* refactor: combine unmount+delete into single goroutine per location
* refactor: use errors.Join for multi-error aggregation
* refactor: use existing ErrorWaitGroup for parallel execution
* fix: capture loop variables + clarify SuperBlockSize safety

(1 month ago)
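
The best-effort cleanup described above, one goroutine per location with every error collected rather than the first one aborting, maps naturally onto errors.Join. A sketch with hypothetical unmount/remove callbacks (the real code reuses an existing ErrorWaitGroup helper):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// cleanupShards runs one goroutine per location that unmounts and then deletes
// the shards there, collecting every error instead of stopping at the first.
func cleanupShards(locations []string, unmount, remove func(string) error) error {
	var (
		wg   sync.WaitGroup
		mu   sync.Mutex
		errs []error
	)
	for _, loc := range locations {
		wg.Add(1)
		go func(location string) { // pass the loop variable explicitly
			defer wg.Done()
			if err := unmount(location); err != nil {
				mu.Lock()
				errs = append(errs, err)
				mu.Unlock()
				return
			}
			if err := remove(location); err != nil {
				mu.Lock()
				errs = append(errs, err)
				mu.Unlock()
			}
		}(loc)
	}
	wg.Wait()
	return errors.Join(errs...) // nil when errs is empty
}

func main() {
	err := cleanupShards([]string{"srv1:8080", "srv2:8080"},
		func(loc string) error { return nil },
		func(loc string) error { return fmt.Errorf("delete failed on %s", loc) })
	fmt.Println(err)
}
```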

f77e6ed2d4  fix: admin UI bucket delete now properly deletes collection and checks Object Lock (#7734)

* fix: admin UI bucket delete now properly deletes collection and checks Object Lock Fixes #7711 The admin UI's DeleteS3Bucket function was missing two critical behaviors: 1. It did not delete the collection from the master (unlike s3.bucket.delete shell command), leaving orphaned volume data that caused fs.verify errors. 2. It did not check for Object Lock protections before deletion, potentially allowing deletion of buckets with locked objects. Changes: - Add shared Object Lock checking utilities to object_lock_utils.go: - EntryHasActiveLock: standalone function to check if an entry has active lock - HasObjectsWithActiveLocks: shared function to scan bucket for locked objects - Refactor S3 API entryHasActiveLock to use shared EntryHasActiveLock function - Update admin UI DeleteS3Bucket to: - Check Object Lock using shared HasObjectsWithActiveLocks utility - Delete the collection before deleting filer entries (matching s3.bucket.delete) * refactor: S3 API uses shared Object Lock utilities Removes 114 lines of duplicated code from s3api_bucket_handlers.go by having hasObjectsWithActiveLocks delegate to the shared HasObjectsWithActiveLocks function in object_lock_utils.go. Now both S3 API and Admin UI use the same shared utilities: - EntryHasActiveLock - HasObjectsWithActiveLocks - recursivelyCheckLocksWithClient - checkVersionsForLocksWithClient * feat: s3.bucket.delete shell command now checks Object Lock Add Object Lock protection to the s3.bucket.delete shell command. If the bucket has Object Lock enabled and contains objects with active retention or legal hold, deletion is prevented. Also refactors Object Lock checking utilities into a new s3_objectlock package to avoid import cycles between shell, s3api, and admin packages. All three components now share the same logic: - S3 API (DeleteBucketHandler) - Admin UI (DeleteS3Bucket) - Shell command (s3.bucket.delete) * refactor: unified Object Lock checking and consistent deletion parameters 1. Add CheckBucketForLockedObjects() - a unified function that combines: - Bucket entry lookup - Object Lock enabled check - Scan for locked objects 2. All three components now use this single function: - S3 API (via s3api.CheckBucketForLockedObjects) - Admin UI (via s3api.CheckBucketForLockedObjects) - Shell command (via s3_objectlock.CheckBucketForLockedObjects) 3. Aligned deletion parameters across all components: - isDeleteData: false (collection already deleted separately) - isRecursive: true - ignoreRecursiveError: true * fix: properly handle non-EOF errors in Recv() loops The Recv() loops in recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient were breaking on any error, which could hide real stream errors and incorrectly report 'no locks found'. Now: - io.EOF: break loop (normal end of stream) - any other error: return it so caller knows the stream failed * fix: address PR review comments 1. Add path traversal protection - validate entry names before building subdirectory paths. Skip entries with empty names, '.', '..', or containing path separators. 2. Use exact match for .versions folder instead of HasSuffix() to avoid mismatching unrelated directories like 'foo.versions'. 3. Replace path.Join with simple string concatenation since we now validate entry names. * refactor: extract paginateEntries helper to reduce duplication The recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient functions shared significant structural similarity. 
Extracted a generic paginateEntries helper that: - Handles pagination logic (lastFileName tracking, Limit) - Handles stream receiving with proper EOF vs error handling - Validates entry names (path traversal protection) - Calls a processEntry callback for business logic This centralizes pagination logic and makes the code more maintainable. * feat: add context propagation for timeout and cancellation support All Object Lock checking functions now accept context.Context parameter: - paginateEntries(ctx, client, dir, processEntry) - recursivelyCheckLocksWithClient(ctx, client, dir, hasLocks, currentTime) - checkVersionsForLocksWithClient(ctx, client, versionsDir, hasLocks, currentTime) - HasObjectsWithActiveLocks(ctx, client, bucketPath) - CheckBucketForLockedObjects(ctx, client, bucketsPath, bucketName) This enables: - Timeout support for large bucket scans - Cancellation propagation from HTTP requests - The S3 API handler now uses r.Context() for proper request lifecycle * fix: address PR review comments 1. Add DefaultBucketsPath constant in admin_server.go instead of hardcoding "/buckets" in multiple places. 2. Add defensive normalization in EntryHasActiveLock: - TrimSpace to handle whitespace around values - ToUpper for case-insensitive comparison of legal hold and retention mode values - TrimSpace on retention date before parsing * fix: use ctx variable consistently instead of context.Background() In both DeleteS3Bucket and command_s3_bucket_delete, use the ctx variable defined at the start of the function for all gRPC calls instead of creating new context.Background() instances. |

(1 month ago)
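
The paginateEntries helper described in this commit is essentially a pagination loop that treats io.EOF as normal end of stream and any other error as fatal. A generic Go sketch that substitutes a hypothetical listing callback for the filer gRPC stream:

```go
package main

import (
	"fmt"
	"io"
)

// listPage is a hypothetical stand-in for one paginated listing call: it
// returns entries after startFrom, and io.EOF when the listing is done.
type listPage func(dir, startFrom string, limit int) ([]string, error)

// paginate walks a directory page by page, treating io.EOF as a normal end of
// stream and any other error as a real failure to surface to the caller.
func paginate(list listPage, dir string, process func(name string) error) error {
	lastFileName := ""
	for {
		entries, err := list(dir, lastFileName, 1000)
		for _, name := range entries {
			lastFileName = name // resume point for the next page
			if perr := process(name); perr != nil {
				return perr
			}
		}
		if err == io.EOF {
			return nil // normal termination
		}
		if err != nil {
			return err // real stream failure, do not hide it
		}
		if len(entries) == 0 {
			return nil
		}
	}
}

func main() {
	pages := [][]string{{"a", "b"}, {"c"}}
	i := 0
	list := func(dir, startFrom string, limit int) ([]string, error) {
		if i >= len(pages) {
			return nil, io.EOF
		}
		p := pages[i]
		i++
		return p, nil
	}
	_ = paginate(list, "/buckets/demo", func(name string) error {
		fmt.Println("checked", name)
		return nil
	})
}
```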

93cca3a96b  volume.fsck: increase default cutoffTimeAgo from 5 minutes to 5 hours (#7730)

* volume.fsck: increase default cutoffTimeAgo from 5 minutes to 5 hours
  This change makes the fsck check more conservative by only considering chunks older than 5 hours as potential orphans. A 5 minute window was too aggressive and could incorrectly flag recently written chunks, especially in busy systems or during backup operations.
  Addresses #7649
* Update command_volume_fsck.go
* volume.fsck: add help text explaining cutoffTimeAgo parameter
* Update command_volume_fsck.go

(1 month ago)

662a6ac8ee  Implement a `weed shell` command to return a status overview of the cluster. (#7704)

Detailed file information will be implemented in a follow-up MR. Note also that masters are currently not reporting back EC shard sizes correctly, via `master_pb.VolumeEcShardInformationMessage.shard_sizes`. F.ex:
```
> cluster.status
cluster:
  id: topo
  status: LOCKED
  nodes: 10
  topology: 1 DC(s)s, 1 disk(s) on 1 rack(s)
volumes:
  total: 3 volumes on 1 collections
  max size: 31457280000 bytes
  regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%)
  EC: 1 EC volumes on 14 shards (14.00 shards/volume)
storage:
  total: 186024424 bytes
  regular volumes: 186024424 bytes
  EC volumes: 0 bytes
  raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files
```

(1 month ago)

a1eab5ff99  shell: add -owner flag to s3.bucket.create command (#7728)

* shell: add -owner flag to s3.bucket.create command
This fixes an issue where buckets created via weed shell cannot be accessed
by non-admin S3 users because the bucket has no owner set.
When using S3 IAM authentication, non-admin users can only access buckets
they own. Buckets created via lazy S3 creation automatically have their
owner set from the request context, but buckets created via weed shell
had no owner, making them inaccessible to non-admin users.
The new -owner flag allows setting the bucket owner identity (s3-identity-id)
at creation time:
s3.bucket.create -name my-bucket -owner my-identity-name
Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599
* shell: add s3.bucket.owner command to view/change bucket ownership
This command allows viewing and changing the owner of an S3 bucket,
making it easier to manage bucket access for IAM users.
Usage:
# View the current owner of a bucket
s3.bucket.owner -name my-bucket
# Set or change the owner of a bucket
s3.bucket.owner -name my-bucket -set -owner new-identity
# Remove the owner (make bucket admin-only)
s3.bucket.owner -name my-bucket -set -owner ""
* shell: show bucket owner in s3.bucket.list output
Display the bucket owner (s3-identity-id) when listing buckets,
making it easier to see which identity owns each bucket.
Example output:
my-bucket size:1024 chunk:5 owner:my-identity
* admin: add bucket owner support to admin UI
- Add Owner field to S3Bucket struct for displaying bucket ownership
- Add Owner field to CreateBucketRequest for setting owner at creation
- Add UpdateBucketOwner API endpoint (PUT /api/s3/buckets/:bucket/owner)
- Add SetBucketOwner function for updating bucket ownership
- Update GetS3Buckets to populate owner from s3-identity-id extended attribute
- Update CreateS3BucketWithObjectLock to set owner when creating bucket
This allows the admin UI to display bucket owners and supports creating/
editing bucket ownership, which is essential for S3 IAM authentication
where non-admin users can only access buckets they own.
* admin: show bucket owner in buckets list and create form
- Add Owner column to buckets table to display bucket ownership
- Add Owner field to create bucket form for setting owner at creation
- Show owner in bucket details modal
- Update JavaScript to include owner when creating buckets
This makes bucket ownership visible and configurable from the admin UI,
which is essential for S3 IAM authentication where non-admin users can
only access buckets they own.
* admin: add bucket owner management with user dropdown
- Add 'Manage Owner' button to bucket actions
- Add modal with dropdown to select owner from existing users
- Fetch users from /api/users endpoint to populate dropdown
- Update create bucket form to use dropdown for owner selection
- Allow setting owner to empty (no owner = admin-only access)
This provides a user-friendly way to manage bucket ownership by selecting
from existing S3 identities rather than manually typing identity names.
* fix: use username instead of name for user dropdown
The /api/users endpoint returns 'username' field, not 'name'.
Fixed both the manage owner modal and create bucket form.
* Update s3_buckets_templ.go
* fix: address code review feedback for s3.bucket.create
- Check if entry.Extended is nil before making a new map to prevent
overwriting any previously set extended attributes
- Use fmt.Fprintln(writer, ...) instead of println() for consistent
output handling across the shell command framework
* fix: improve help text and validate owner input
- Add note that -owner value should match identity name in s3.json
- Trim whitespace from owner and treat whitespace-only as empty
* fix: address code review feedback for list and owner commands
- s3.bucket.list: Use %q to escape owner value and prevent malformed
tabular output from special characters (tabs/newlines/control chars)
- s3.bucket.owner: Use neutral error message for lookup failures since
they can occur for reasons other than missing bucket (e.g., permission)
* fix: improve s3.bucket.owner CLI UX
- Remove confusing -set flag that was required but not shown in examples
- Add explicit -delete flag to remove owner (safer than empty string)
- Presence of -owner now implies set operation (no extra flag needed)
- Validate that -owner and -delete cannot be used together
- Trim whitespace from owner value
- Update help text with correct examples and add note about identity name
- Clearer success messages for each operation
* fix: address code review feedback for admin UI
- GetBucketDetails: Extract and return owner from extended attributes
- CSV export: Fix column indices after adding Owner column, add Owner to header
- XSS prevention: Add escapeHtml() function to sanitize user data in innerHTML
(bucket.name, bucket.owner, bucket.object_lock_mode, obj.key, obj.storage_class)
* fix: address additional code review feedback
- types.go: Add omitempty to Owner JSON tag, update comment
- bucket_management.go: Trim and validate owner (max 256 chars) in CreateBucket
- bucket_management.go: Use neutral error message in SetBucketOwner lookup
* fix: improve owner field handling and error recovery
bucket_management.go:
- Use *string pointer for Owner to detect if field was explicitly provided
- Return HTTP 400 if owner field is missing (use empty string to clear)
- Trim and validate owner (max 256 chars) in UpdateBucketOwner
s3_buckets.templ:
- Re-enable owner select dropdown on fetch error
- Reset dropdown to default 'No owner' option on error
- Allow users to retry or continue without selecting an owner
* fix: move modal instance variables to global scope
Move deleteModalInstance, quotaModalInstance, ownerModalInstance,
detailsModalInstance, and cachedUsers to global scope so they are
accessible from both DOMContentLoaded handlers and global functions
like deleteBucket(). This fixes the undefined variable issue.
* refactor: improve modal handling and avoid global window properties
- Initialize modal instances once on DOMContentLoaded and reuse with show()
- Replace window.currentBucket* global properties with data attributes on forms
- Remove modal dispose/recreate pattern and unnecessary cleanup code
- Scope state to relevant DOM elements instead of global namespace
* Update s3_buckets_templ.go
* fix: define MaxOwnerNameLength constant and implement RFC 4180 CSV escaping
bucket_management.go:
- Add MaxOwnerNameLength constant (256) with documentation
- Replace magic number 256 with constant in both validation checks
s3_buckets.templ:
- Add escapeCsvField() helper for RFC 4180 compliant CSV escaping
- Properly handle commas, double quotes, and newlines in field values
- Escape internal quotes by doubling them (")→("")
* Update s3_buckets_templ.go
* refactor: use direct gRPC client methods for consistency
- command_s3_bucket_create.go: Use client.CreateEntry instead of filer_pb.CreateEntry
- command_s3_bucket_owner.go: Use client.LookupDirectoryEntry instead of filer_pb.LookupEntry
- command_s3_bucket_owner.go: Use client.UpdateEntry instead of filer_pb.UpdateEntry
This aligns with the pattern used in weed/admin/dash/bucket_management.go
|

(1 month ago)
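
Setting a bucket owner boils down to writing the s3-identity-id extended attribute on the bucket entry, with the nil-map guard and validation the review comments call out. A simplified sketch in which bucketEntry is a hypothetical stand-in for the filer entry:

```go
package main

import (
	"fmt"
	"strings"
)

// bucketEntry is a simplified stand-in for a filer bucket entry with extended
// attributes (the real code stores the owner under the "s3-identity-id" key).
type bucketEntry struct {
	Name     string
	Extended map[string][]byte
}

// setBucketOwner trims and validates the owner, then stores it as an extended
// attribute, guarding against a nil map before writing.
func setBucketOwner(entry *bucketEntry, owner string) error {
	owner = strings.TrimSpace(owner)
	if owner == "" {
		return fmt.Errorf("owner must not be empty or whitespace")
	}
	if len(owner) > 256 { // mirrors the MaxOwnerNameLength idea
		return fmt.Errorf("owner name too long")
	}
	if entry.Extended == nil {
		entry.Extended = make(map[string][]byte)
	}
	entry.Extended["s3-identity-id"] = []byte(owner)
	return nil
}

func main() {
	e := &bucketEntry{Name: "my-bucket"}
	if err := setBucketOwner(e, " my-identity-name "); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Printf("%s owner: %s\n", e.Name, e.Extended["s3-identity-id"])
}
```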

e8b7347031  Reduce memory allocations in hot paths (#7725)

* filer: reduce allocations in MatchStorageRule
Optimize MatchStorageRule to avoid allocations in common cases:
- Return singleton emptyPathConf when no rules match (zero allocations)
- Return existing rule directly when only one rule matches (zero allocations)
- Only allocate and merge when multiple rules match (rare case)
Based on heap profile analysis showing 111MB allocated from 1.64M calls
to this function during 180 seconds of operation.
* filer: add fast path for getActualStore when no path-specific stores
Add hasPathSpecificStore flag to FilerStoreWrapper to skip
the MatchPrefix() call and []byte(path) conversion when no
path-specific stores are configured (the common case).
Based on heap profile analysis showing 1.39M calls to this
function during 180 seconds of operation, each requiring a
string-to-byte slice conversion for the MatchPrefix call.
* filer/foundationdb: use sync.Pool for tuple allocation in genKey
Use sync.Pool to reuse tuple.Tuple slices in genKey(), reducing
allocation overhead for every FoundationDB operation.
Based on heap profile analysis showing 102MB allocated from 1.79M
calls to genKey() during 180 seconds of operation. The Pack() call
still allocates internally, but this reduces the tuple slice
allocation overhead by ~50%.
* filer: use sync.Pool for protobuf Entry and FuseAttributes
Add pooling for filer_pb.Entry and filer_pb.FuseAttributes in
EncodeAttributesAndChunks and DecodeAttributesAndChunks to reduce
allocations during filer store operations.
Changes:
- Add pbEntryPool with pre-allocated FuseAttributes
- Add EntryAttributeToExistingPb for in-place attribute conversion
- Update ToExistingProtoEntry to reuse existing Attributes when available
Based on heap profile showing:
- EncodeAttributesAndChunks: 69.5MB cumulative
- DecodeAttributesAndChunks: 46.5MB cumulative
- EntryAttributeToPb: 47.5MB flat allocations
* log_buffer: use sync.Pool for LogEntry in readTs
Add logEntryPool to reuse filer_pb.LogEntry objects in readTs(),
which is called frequently during binary search in ReadFromBuffer.
This function only needs the TsNs field from the unmarshaled entry,
so pooling the LogEntry avoids repeated allocations.
Based on heap profile showing readTs with 188MB cumulative allocations
from timestamp lookups during log buffer reads.
* pb: reduce gRPC metadata allocations in interceptor
Optimize requestIDUnaryInterceptor and WithGrpcClient to reduce
metadata allocations on every gRPC request:
- Use AppendToOutgoingContext instead of NewOutgoingContext + New()
This avoids creating a new map[string]string for single key-value pairs
- Check FromIncomingContext return value before using metadata
Based on heap profile showing metadata operations contributing 0.45GB
(10.5%) of allocations, with requestIDUnaryInterceptor being the main
source at 0.44GB cumulative.
Expected reduction: ~0.2GB from avoiding map allocations per request.
* filer/log_buffer: address code review feedback
- Use proto.Reset() instead of manual field clearing in resetLogEntry
for more idiomatic and comprehensive state clearing
- Add resetPbEntry() call before pool return in error path for
consistency with success path in DecodeAttributesAndChunks
* log_buffer: reduce PreviousBufferCount from 32 to 4
Reduce the number of retained previous buffers from 32 to 4.
Each buffer is 8MB, so this reduces the maximum retained memory
from 256MB to 32MB for previous buffers.
Most subscribers catch up quickly, so 4 buffers (32MB) should
be sufficient while significantly reducing memory footprint.
* filer/foundationdb: use defer for tuple pool cleanup in genKey
Refactor genKey to use defer for returning the pooled tuple.
This ensures the pooled object is always returned even if
store.seaweedfsDir.Pack panics, making the code more robust.
Also simplifies the code by removing the temporary variable.
* filer: early-stop MatchStorageRule prescan after 2 matches
Stop the prescan callback after finding 2 matches since we only
need to know if there are 0, 1, or multiple matches. This avoids
unnecessarily scanning the rest of the trie when many rules exist.
* fix: address critical code review issues
filer_conf.go:
- Remove mutable singleton emptyPathConf that could corrupt shared state
- Return fresh copy for no-match case and cloned copy for single-match case
- Add clonePathConf helper to create shallow copies safely
grpc_client_server.go:
- Remove incorrect AppendToOutgoingContext call in server interceptor
(that API is for outbound client calls, not server-side handlers)
- Rely on request_id.Set and SetTrailer for request ID propagation
* fix: treat FilerConf_PathConf as immutable
Fix callers that were incorrectly mutating the returned PathConf:
- filer_server_handlers_write.go: Use local variable for MaxFileNameLength
instead of mutating the shared rule
- command_s3_bucket_quota_check.go: Create new PathConf explicitly when
modifying config instead of mutating the returned one
This allows MatchStorageRule to safely return the singleton or direct
references without copying, restoring the memory optimization.
Callers must NOT mutate the returned *FilerConf_PathConf.
* filer: add ClonePathConf helper for creating mutable copies
Add reusable ClonePathConf function that creates a mutable copy of
a PathConf. This is useful when callers need to modify config before
calling SetLocationConf.
Update command_s3_bucket_quota_check.go to use the new helper.
Also fix redundant return statement in DeleteLocationConf.
* fmt
* filer: fix protobuf pool reset to clear internal fields
Address code review feedback:
1. resetPbEntry/resetFuseAttributes: Use struct assignment (*e = T{})
instead of field-by-field reset to clear protobuf internal fields
(unknownFields, sizeCache) that would otherwise accumulate across
pool reuses, causing data corruption or memory bloat.
2. EntryAttributeToExistingPb: Add nil guard for attr parameter to
prevent panic if caller passes nil.
* log_buffer: reset logEntry before pool return in error path
For consistency with success path, reset the logEntry before putting
it back in the pool in the error path. This prevents the pooled object
from holding references to partially unmarshaled data.
* filer: optimize MatchStorageRule and document ClonePathConf
1. Avoid double []byte(path) conversion in multi-match case by
converting once and reusing pathBytes.
2. Add IMPORTANT comment to ClonePathConf documenting that it must
be kept in sync with filer_pb.FilerConf_PathConf fields when
the protobuf evolves.
* filer/log_buffer: fix data race and use defer for pool cleanup
1. entry_codec.go EncodeAttributesAndChunks: Fix critical data race -
proto.Marshal may return a slice sharing memory with the message.
Copy the data before returning message to pool to prevent corruption.
2. entry_codec.go DecodeAttributesAndChunks: Use defer for cleaner
pool management, ensuring message is always returned to pool.
3. log_buffer.go readTs: Use defer for pool cleanup, removing
duplicated resetLogEntry/Put calls in success and error paths.
* filer: fix ClonePathConf field order and add comprehensive test
1. Fix field order in ClonePathConf to match protobuf struct definition
(WormGracePeriodSeconds before WormRetentionTimeSeconds).
2. Add TestClonePathConf that constructs a fully-populated PathConf,
calls ClonePathConf, and asserts equality of all exported fields.
This will catch future schema drift when new fields are added.
3. Add TestClonePathConfNil to verify nil handling.
* filer: use reflection in ClonePathConf test to detect schema drift
Replace hardcoded field comparisons with reflection-based comparison.
This automatically catches:
1. New fields added to the protobuf but not copied in ClonePathConf
2. Missing non-zero test values for any exported field
The test iterates over all exported fields using reflect and compares
src vs clone values, failing if any field differs.
* filer: update EntryAttributeToExistingPb comment to reflect nil handling
The function safely handles nil attr by returning early, but the comment
incorrectly stated 'attr must not be nil'. Update comment to accurately
describe the defensive behavior.
* Fix review feedback: restore request ID propagation and remove redundant resets
1. grpc_client_server.go: Restore AppendToOutgoingContext for request ID
so handlers making downstream gRPC calls will automatically propagate
the request ID to downstream services.
2. entry_codec.go: Remove redundant resetPbEntry calls after Get.
The defer block ensures reset before Put, so next Get receives clean object.
3. log_buffer.go: Remove redundant resetLogEntry call after Get for
same reason - defer already handles reset before Put.
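A minimal, self-contained sketch of the pooling pattern this change applies, using a plain struct as a stand-in for filer_pb.Entry (an assumption made for brevity); the real code resets messages with proto.Reset()/struct assignment and relies on defer so objects return to the pool even on error paths:

```go
package main

import (
	"fmt"
	"sync"
)

// entry stands in for a pooled protobuf message such as filer_pb.Entry;
// the real code resets with proto.Reset instead of a hand-written method.
type entry struct {
	Name   string
	Chunks []string
}

func (e *entry) Reset() { *e = entry{} } // struct assignment clears all fields

var entryPool = sync.Pool{
	New: func() any { return new(entry) },
}

// encodeName borrows an entry from the pool, uses it, and returns it reset.
// The defer guarantees the object goes back even on early return or panic.
func encodeName(name string) string {
	e := entryPool.Get().(*entry)
	defer func() {
		e.Reset()
		entryPool.Put(e)
	}()

	e.Name = name
	// A real encoder would copy any buffer derived from e before returning
	// e to the pool, so later reuse cannot alias the returned bytes.
	return "encoded:" + e.Name
}

func main() {
	fmt.Println(encodeName("a.txt"))
}
```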
|
1 month ago |
|
|
df4f2f7020
|
ec: add -diskType flag to EC commands for SSD support (#7607)
* ec: add diskType parameter to core EC functions
Add diskType parameter to:
- ecBalancer struct
- collectEcVolumeServersByDc()
- collectEcNodesForDC()
- collectEcNodes()
- EcBalance()
This allows EC operations to target specific disk types (hdd, ssd, etc.)
instead of being hardcoded to HardDriveType only.
For backward compatibility, all callers currently pass types.HardDriveType
as the default value. Subsequent commits will add -diskType flags to
the individual EC commands.
* ec: update helper functions to use configurable diskType
Update the following functions to accept/use diskType parameter:
- findEcVolumeShards()
- addEcVolumeShards()
- deleteEcVolumeShards()
- moveMountedShardToEcNode()
- countShardsByRack()
- pickNEcShardsToMoveFrom()
All ecBalancer methods now use ecb.diskType instead of hardcoded
types.HardDriveType. Non-ecBalancer callers (like volumeServer.evacuate
and ec.rebuild) use types.HardDriveType as the default.
Update all test files to pass diskType where needed.
* ec: add -diskType flag to ec.balance and ec.encode commands
Add -diskType flag to specify the target disk type for EC operations:
- ec.balance -diskType=ssd
- ec.encode -diskType=ssd
The disk type can be 'hdd', 'ssd', or empty for default (hdd).
This allows placing EC shards on SSD or other disk types instead of
only HDD.
Example usage:
ec.balance -collection=mybucket -diskType=ssd -apply
ec.encode -collection=mybucket -diskType=ssd -force
* test: add integration tests for EC disk type support
Add integration tests to verify the -diskType flag works correctly:
- TestECDiskTypeSupport: Tests EC encode and balance with SSD disk type
- TestECDiskTypeMixedCluster: Tests EC operations on a mixed HDD/SSD cluster
The tests verify:
- Volume servers can be configured with specific disk types
- ec.encode accepts -diskType flag and encodes to the correct disk type
- ec.balance accepts -diskType flag and balances on the correct disk type
- Mixed disk type clusters work correctly with separate collections
* ec: add -sourceDiskType to ec.encode and -diskType to ec.decode
ec.encode:
- Add -sourceDiskType flag to filter source volumes by disk type
- This enables tier migration scenarios (e.g., SSD volumes → HDD EC shards)
- -diskType specifies target disk type for EC shards
ec.decode:
- Add -diskType flag to specify source disk type where EC shards are stored
- Update collectEcShardIds() and collectEcNodeShardBits() to accept diskType
Examples:
# Encode SSD volumes to HDD EC shards (tier migration)
ec.encode -collection=mybucket -sourceDiskType=ssd -diskType=hdd
# Decode EC shards from SSD
ec.decode -collection=mybucket -diskType=ssd
Integration tests updated to cover new flags.
* ec: fix variable shadowing and add -diskType to ec.rebuild and volumeServer.evacuate
Address code review comments:
1. Fix variable shadowing in collectEcVolumeServersByDc():
- Rename loop variable 'diskType' to 'diskTypeKey' and 'diskTypeStr'
to avoid shadowing the function parameter
2. Fix hardcoded HardDriveType in ecBalancer methods:
- balanceEcRack(): use ecb.diskType instead of types.HardDriveType
- collectVolumeIdToEcNodes(): use ecb.diskType
3. Add -diskType flag to ec.rebuild command:
- Add diskType field to ecRebuilder struct
- Pass diskType to collectEcNodes() and addEcVolumeShards()
4. Add -diskType flag to volumeServer.evacuate command:
- Add diskType field to commandVolumeServerEvacuate struct
- Pass diskType to collectEcVolumeServersByDc() and moveMountedShardToEcNode()
* test: add diskType field to ecBalancer in TestPickEcNodeToBalanceShardsInto
Address nitpick comment: ensure test ecBalancer struct has diskType
field set for consistency with other tests.
* ec: filter disk selection by disk type in pickBestDiskOnNode
When evacuating or rebalancing EC shards, pickBestDiskOnNode now
filters disks by the target disk type. This ensures:
1. EC shards from SSD disks are moved to SSD disks on destination nodes
2. EC shards from HDD disks are moved to HDD disks on destination nodes
3. No cross-disk-type shard movement occurs
This maintains the storage tier isolation when moving EC shards
between nodes during evacuation or rebalancing operations.
* ec: allow disk type fallback during evacuation
Update pickBestDiskOnNode to accept a strictDiskType parameter:
- strictDiskType=true (balancing): Only use disks of matching type.
This maintains storage tier isolation during normal rebalancing.
- strictDiskType=false (evacuation): Prefer same disk type, but
fall back to other disk types if no matching disk is available.
This ensures evacuation can complete even when same-type capacity
is insufficient.
Priority order for evacuation:
1. Same disk type with lowest shard count (preferred)
2. Different disk type with lowest shard count (fallback)
* test: use defer for lock/unlock to prevent lock leaks
Use defer to ensure locks are always released, even on early returns
or test failures. This prevents lock leaks that could cause subsequent
tests to hang or fail.
Changes:
- Return early if lock acquisition fails
- Immediately defer unlock after successful lock
- Remove redundant explicit unlock calls at end of tests
- Fix unused variable warning (err -> encodeErr/locErr)
* ec: dynamically discover disk types from topology for evacuation
Disk types are free-form tags (e.g., 'ssd', 'nvme', 'archive') that come
from the topology, not a hardcoded set. Only 'hdd' (or empty) is the
default disk type.
Use collectVolumeDiskTypes() to discover all disk types present in the
cluster topology instead of hardcoding [HardDriveType, SsdType].
* test: add evacuation fallback and cross-rack EC placement tests
Add two new integration tests:
1. TestEvacuationFallbackBehavior:
- Tests that when same disk type has no capacity, shards fall back
to other disk types during evacuation
- Creates cluster with 1 SSD + 2 HDD servers (limited SSD capacity)
- Verifies pickBestDiskOnNode behavior with strictDiskType=false
2. TestCrossRackECPlacement:
- Tests EC shard distribution across different racks
- Creates cluster with 4 servers in 4 different racks
- Verifies shards are spread across multiple racks
- Tests that ec.balance respects rack placement
Helper functions added:
- startLimitedSsdCluster: 1 SSD + 2 HDD servers
- startMultiRackCluster: 4 servers in 4 racks
- countShardsPerRack: counts EC shards per rack from disk
* test: fix collection mismatch in TestCrossRackECPlacement
The EC commands were using collection 'rack_test' but uploaded test data
uses collection 'test' (default). This caused ec.encode/ec.balance to not
find the uploaded volume.
Fix: Change EC commands to use '-collection test' to match the uploaded data.
Addresses review comment from PR #7607.
* test: close log files in MultiDiskCluster.Stop() to prevent FD leaks
Track log files in MultiDiskCluster.logFiles and close them in Stop()
to prevent file descriptor accumulation in long-running or many-test
scenarios.
Addresses review comment about logging resources cleanup.
* test: improve EC integration tests with proper assertions
- Add assertNoFlagError helper to detect flag parsing regressions
- Update diskType subtests to fail on flag errors (ec.encode, ec.balance, ec.decode)
- Update verify_disktype_flag_parsing to check help output contains diskType
- Remove verify_fallback_disk_selection (was documentation-only, not executable)
- Add assertion to verify_cross_rack_distribution for minimum 2 racks
- Consolidate uploadTestDataWithDiskType to accept collection parameter
- Remove duplicate uploadTestDataWithDiskTypeMixed function
* test: extract captureCommandOutput helper and fix error handling
- Add captureCommandOutput helper to reduce code duplication in diskType tests
- Create commandRunner interface to match shell command Do method
- Update ec_encode_with_ssd_disktype, ec_balance_with_ssd_disktype,
ec_encode_with_source_disktype, ec_decode_with_disktype to use helper
- Fix filepath.Glob error handling in countShardsPerRack instead of ignoring it
* test: add flag validation to ec_balance_targets_correct_disk_type
Add assertNoFlagError calls after ec.balance commands to ensure
-diskType flag is properly recognized for both SSD and HDD disk types.
* test: add proper assertions for EC command results
- ec_encode_with_ssd_disktype: check for expected volume-related errors
- ec_balance_with_ssd_disktype: require success with require.NoError
- ec_encode_with_source_disktype: check for expected no-volume errors
- ec_decode_with_disktype: check for expected no-ec-volume errors
- upload_to_ssd_and_hdd: use require.NoError for setup validation
Tests now properly fail on unexpected errors rather than just logging.
* test: fix missing unlock in ec_encode_with_disk_awareness
Add defer unlock pattern to ensure lock is always released, matching
the pattern used in other subtests.
* test: improve helper robustness
- Make assertNoFlagError case-insensitive for pattern matching
- Use defer in captureCommandOutput to restore stdout/stderr and close
pipe ends to avoid FD leaks even if cmd.Do panics
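A simplified sketch of the strict-versus-fallback disk selection described above; the disk struct, normalize() helper, and pickDisk() function are illustrative stand-ins, not the actual pickBestDiskOnNode implementation:

```go
package main

import "fmt"

// disk is a simplified stand-in for one disk on a volume server; DiskType is
// a free-form tag ("hdd" or empty for the default, "ssd", "nvme", ...).
type disk struct {
	ID         int
	DiskType   string
	ShardCount int
}

func normalize(t string) string {
	if t == "" {
		return "hdd"
	}
	return t
}

// pickDisk prefers the matching disk type with the fewest shards; when strict
// is false (the evacuation case), it falls back to the least-loaded disk of
// any type instead of failing.
func pickDisk(disks []disk, wantType string, strict bool) (disk, bool) {
	var best, fallback *disk
	for i := range disks {
		d := &disks[i]
		if normalize(d.DiskType) == normalize(wantType) {
			if best == nil || d.ShardCount < best.ShardCount {
				best = d
			}
		} else if fallback == nil || d.ShardCount < fallback.ShardCount {
			fallback = d
		}
	}
	if best != nil {
		return *best, true
	}
	if !strict && fallback != nil {
		return *fallback, true
	}
	return disk{}, false
}

func main() {
	disks := []disk{{1, "hdd", 3}, {2, "ssd", 1}}
	fmt.Println(pickDisk(disks, "nvme", false)) // no nvme disk: falls back to the ssd disk
}
```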
|
1 month ago |
|
|
ca1ad9c4c2
|
Nit: have `ec.encode` exit immediately if no volumes are processed. (#7654)
* Nit: have `ec.encode` exit immediately if no volumes are processed. * Update weed/shell/command_ec_encode.go Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> --------- Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com> |
1 month ago |
|
|
f9b4a4c396
|
fix: check freeEcSlot before evacuating EC shards to prevent data loss (#7621)
* fix: check freeEcSlot before evacuating EC shards to prevent data loss
Related to #7619
The moveAwayOneEcVolume function was missing the freeEcSlot check that exists in other EC shard placement functions. This could cause EC shards to be moved to volume servers that have no capacity, resulting in:
1. 0-byte shard files when disk is full
2. Data loss when source shards are deleted after 'successful' copy
Changes:
- Add freeEcSlot check before attempting to move EC shards
- Sort destinations by both shard count and free slots
- Refresh topology during evacuation to get updated slot counts
- Log when nodes are skipped due to no free slots
- Update freeEcSlot count after successful moves
* fix: clarify comment wording per CodeRabbit review
The comment stated 'after each move' but the code executes before calling moveAwayOneEcVolume. Updated to 'before moving each EC volume' for accuracy.
* fix: collect topology once and track capacity changes locally
Remove the topology refresh within the loop as it gives a false sense of correctness - the refreshed topology could still be stale (minutes old). Instead, we:
1. Collect topology once at the start
2. Track capacity changes ourselves via freeEcSlot decrement after each move
This is more accurate because we know exactly what moves we've made, rather than relying on potentially stale topology refreshes.
* fix: ensure partial EC volume moves are reported as failures
Set hasMoved=false when a shard fails to move, even if previous shards succeeded. This prevents the caller from incorrectly assuming the entire volume was evacuated, which could lead to data loss if the source server is decommissioned based on this incorrect status.
* fix: also reset hasMoved on moveMountedShardToEcNode error
Same issue as the previous fix: if moveMountedShardToEcNode fails after some shards succeeded, hasMoved would incorrectly be true. Ensure partial moves are always reported as failures. |
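A rough sketch of the capacity guard described above, with ecNode and pickDestination as hypothetical stand-ins for the real evacuation code: skip nodes without free EC slots, prefer lower shard counts, and decrement the local free-slot count after each move instead of re-reading possibly stale topology:

```go
package main

import (
	"fmt"
	"sort"
)

// ecNode is a simplified stand-in for a destination volume server.
type ecNode struct {
	Addr       string
	ShardCount int
	FreeEcSlot int
}

// pickDestination filters out nodes with no free EC slots, sorts the rest by
// shard count then free slots, and updates local counters after the pick so
// subsequent picks see the reduced capacity.
func pickDestination(nodes []*ecNode) (*ecNode, bool) {
	var candidates []*ecNode
	for _, n := range nodes {
		if n.FreeEcSlot > 0 {
			candidates = append(candidates, n)
		}
	}
	if len(candidates) == 0 {
		return nil, false
	}
	sort.Slice(candidates, func(i, j int) bool {
		if candidates[i].ShardCount != candidates[j].ShardCount {
			return candidates[i].ShardCount < candidates[j].ShardCount
		}
		return candidates[i].FreeEcSlot > candidates[j].FreeEcSlot
	})
	dst := candidates[0]
	dst.FreeEcSlot-- // track capacity locally instead of re-reading stale topology
	dst.ShardCount++
	return dst, true
}

func main() {
	nodes := []*ecNode{{"a:8080", 5, 0}, {"b:8080", 2, 3}}
	if dst, ok := pickDestination(nodes); ok {
		fmt.Println("move shard to", dst.Addr, "remaining slots", dst.FreeEcSlot)
	}
}
```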
1 month ago |
|
|
d59cc1b09f
|
Fix handling of fixed read-only volumes for `volume.check.disk`. (#7612)
There's unfortunately no way to tell whether a volume is flagged read-only because it got full, or because it is faulty. To address this, modify the check logic so all read-only volumes are processed; if no changes are written (i.e. the volume is healthy) it is kept as read-only. Volumes which are modified in this process are deemed fixed, and switched to writable. |
1 month ago |
|
|
4f038820dc
|
Add disk-aware EC rebalancing (#7597)
* Add placement package for EC shard placement logic
- Consolidate EC shard placement algorithm for reuse across shell and worker tasks
- Support multi-pass selection: racks, then servers, then disks
- Include proper spread verification and scoring functions
- Comprehensive test coverage for various cluster topologies
* Make ec.balance disk-aware for multi-disk servers
- Add EcDisk struct to track individual disks on volume servers
- Update EcNode to maintain per-disk shard distribution
- Parse disk_id from EC shard information during topology collection
- Implement pickBestDiskOnNode() for selecting best disk per shard
- Add diskDistributionScore() for tie-breaking node selection
- Update all move operations to specify target disk in RPC calls
- Improves shard balance within multi-disk servers, not just across servers
* Use placement package in EC detection for consistent disk-level placement
- Replace custom EC disk selection logic with shared placement package
- Convert topology DiskInfo to placement.DiskCandidate format
- Use SelectDestinations() for multi-rack/server/disk spreading
- Convert placement results back to topology DiskInfo for task creation
- Ensures EC detection uses same placement logic as shell commands
* Make volume server evacuation disk-aware
- Use pickBestDiskOnNode() when selecting evacuation target disk
- Specify target disk in evacuation RPC requests
- Maintains balanced disk distribution during server evacuations
* Rename PlacementConfig to PlacementRequest for clarity
PlacementRequest better reflects that this is a request for placement rather than a configuration object. This improves API semantics.
* Rename DefaultConfig to DefaultPlacementRequest
Aligns with the PlacementRequest type naming for consistency
* Address review comments from Gemini and CodeRabbit
Fix HIGH issues:
- Fix empty disk discovery: Now discovers all disks from VolumeInfos, not just from EC shards. This ensures disks without EC shards are still considered for placement.
- Fix EC shard count calculation in detection.go: Now correctly filters by DiskId and sums actual shard counts using ShardBits.ShardIdCount() instead of just counting EcShardInfo entries.
Fix MEDIUM issues:
- Add disk ID to evacuation log messages for consistency with other logging
- Remove unused serverToDisks variable in placement.go
- Fix comment that incorrectly said 'ascending' when sorting is 'descending'
* add ec tests
* Update ec-integration-tests.yml
* Update ec_integration_test.go
* Fix EC integration tests CI: build weed binary and update actions
- Add 'Build weed binary' step before running tests
- Update actions/setup-go from v4 to v6 (Node20 compatibility)
- Update actions/checkout from v2 to v4 (Node20 compatibility)
- Move working-directory to test step only
* Add disk-aware EC rebalancing integration tests
- Add TestDiskAwareECRebalancing test with multi-disk cluster setup
- Test EC encode with disk awareness (shows disk ID in output)
- Test EC balance with disk-level shard distribution
- Add helper functions for disk-level verification:
  - startMultiDiskCluster: 3 servers x 4 disks each
  - countShardsPerDisk: track shards per disk per server
  - calculateDiskShardVariance: measure distribution balance
- Verify no single disk is overloaded with shards |
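A toy sketch of the multi-pass spreading idea behind the placement package: round-robin across racks, then the least-loaded disk within the chosen rack. The real SelectDestinations() also spreads across servers within a rack and applies scoring, so this is illustrative only and diskCandidate is a simplified stand-in:

```go
package main

import "fmt"

// diskCandidate is a simplified stand-in for the placement package's
// per-disk candidate (rack, server, disk id, current shard count).
type diskCandidate struct {
	Rack, Server string
	DiskID       int
	Shards       int
}

// spreadShards round-robins over racks and, within each rack, picks the disk
// with the fewest shards, incrementing its count so later picks stay balanced.
func spreadShards(candidates []diskCandidate, shardCount int) []diskCandidate {
	byRack := map[string][]diskCandidate{}
	var racks []string
	for _, c := range candidates {
		if _, ok := byRack[c.Rack]; !ok {
			racks = append(racks, c.Rack)
		}
		byRack[c.Rack] = append(byRack[c.Rack], c)
	}
	var picked []diskCandidate
	for i := 0; i < shardCount && len(racks) > 0; i++ {
		rack := racks[i%len(racks)]
		best := 0
		for j, c := range byRack[rack] {
			if c.Shards < byRack[rack][best].Shards {
				best = j
			}
		}
		byRack[rack][best].Shards++
		picked = append(picked, byRack[rack][best])
	}
	return picked
}

func main() {
	c := []diskCandidate{
		{"r1", "s1", 0, 0}, {"r1", "s1", 1, 0},
		{"r2", "s2", 0, 0}, {"r2", "s2", 1, 0},
	}
	fmt.Println(spreadShards(c, 4)) // four shards spread over two racks and four disks
}
```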
1 month ago |
|
|
ebb06a3908
|
Mutex command output writes for `volume.check.disk`. (#7605)
Prevents potential screen garbling when operations are parallelized. Also simplifies logging by automatically adding newlines on output, if necessary. Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |
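A minimal sketch of the idea, assuming a hypothetical syncedWriter rather than the actual shell command environment: writes from parallel workers are serialized with a mutex and a trailing newline is added when missing:

```go
package main

import (
	"fmt"
	"io"
	"os"
	"strings"
	"sync"
)

// syncedWriter serializes writes from parallel workers so their output lines
// do not interleave mid-line, and appends a newline when one is missing.
type syncedWriter struct {
	mu  sync.Mutex
	out io.Writer
}

func (w *syncedWriter) Printf(format string, args ...any) {
	msg := fmt.Sprintf(format, args...)
	if !strings.HasSuffix(msg, "\n") {
		msg += "\n"
	}
	w.mu.Lock()
	defer w.mu.Unlock()
	io.WriteString(w.out, msg)
}

func main() {
	w := &syncedWriter{out: os.Stdout}
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) { defer wg.Done(); w.Printf("checked volume %d", i) }(i)
	}
	wg.Wait()
}
```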
1 month ago |
|
|
ee775825bc
|
Parallelize read-only volume check pass for `volume.check.disk`. (#7602)
|
1 month ago |
|
|
36dd59560b
|
Have `volume.check.disk` select a random (healthy) source volume when… (#7574)
Have `volume.check.disk` select a random (healthy) source volume when repairing read-only volumes. This ensures uniform load across the topology when the command is run. Also remove a lingering TODO about ignoring full volumes; not only is there no way to discern whether a read-only volume is full or damaged, we ultimately want to check the former anyway. |
2 months ago |
|
|
5c25df20f2
|
feat(volume.fix): show all replica locations for misplaced volumes (#7560)
|
2 months ago |
|
|
2843cb1255
|
Bootstrap logic to fix read-only volumes with `volume.check.disk`. (#7531)
* Bootstrap logic to fix read-only volumes with `volume.check.disk`.
The new implementation performs a second pass where read-only volumes are (optionally)
verified and fixed.
For each non-writable volume ID A:
  if volume is not full
    prune late volume entries not matching its index file
    select a writable volume replica B
    append missing entries from B into A
    mark the volume as writable (healthy)
* variable and parameter renaming
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
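A self-contained Go sketch of the repair pass outlined in the pseudocode above; the in-memory replica type and maps are stand-ins (the real command streams needles between volume servers over gRPC and also prunes late index entries), and the changed check mirrors the later #7612 behavior of keeping unmodified volumes read-only:

```go
package main

import "fmt"

// replica is a hypothetical, simplified view of a volume replica used only to
// illustrate the repair pass; entries map a needle id to its content.
type replica struct {
	ID       uint32
	Writable bool
	Full     bool
	Entries  map[string]string
}

// repairReadOnly copies entries missing from a non-writable, non-full replica
// out of a writable replica; if anything was appended, the repaired volume is
// switched to writable, otherwise it stays read-only (it was already healthy).
func repairReadOnly(readOnly, good *replica) {
	if readOnly.Writable || readOnly.Full {
		return
	}
	changed := false
	for id, v := range good.Entries {
		if _, ok := readOnly.Entries[id]; !ok {
			readOnly.Entries[id] = v
			changed = true
		}
	}
	if changed {
		readOnly.Writable = true
	}
}

func main() {
	a := &replica{ID: 7, Entries: map[string]string{"1": "x"}}
	b := &replica{ID: 7, Writable: true, Entries: map[string]string{"1": "x", "2": "y"}}
	repairReadOnly(a, b)
	fmt.Println(a.Writable, len(a.Entries)) // true 2
}
```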
|
2 months ago |
|
|
c89f394aba
|
Parallelize `ec.rebuild` operations per affected volume. (#7466)
* Parallelize `ec.rebuild` operations per affected volume. * node.freeEcSlot >= slotsNeeded * variable names, help messages, * Protected the read operation with the same mutex * accurate error message * fix broken test --------- Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |
2 months ago |
|
|
3dd5348616
|
`volume.check.disk`: add support for uni- or bi-directional sync between volume replicas. (#7484)
* `volume.check.disk`: add support for uni- or bi-directional sync between volume replicas. We'll need this to support repairing broken replicas, which involve syncing from a known good source replica without modifying it. * S3: Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization (#7480) * Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization * revert Reverted the conditional versioning check to always check versioning status Reverted the conditional SSE entry fetch to always fetch entry metadata Reverted the conditional versioning check to always check versioning status Reverted the conditional SSE entry fetch to always fetch entry metadata * Lazy Entry Fetch for SSE, Skip Conditional Header Check * SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive) * SSE-C is mutually exclusive with SSE-S3 and SSE-KMS * refactor * Removed Premature Mutual Exclusivity Check * check for the presence of the X-Amz-Server-Side-Encryption header * not used * fmt * Volume Server: avoid aggressive volume assignment (#7501) * avoid aggressive volume assignment * also test ec shards * separate DiskLocation instances for each subtest * edge cases * No volumes plus low disk space * Multiple EC volumes * simplify * chore(deps): bump github.com/getsentry/sentry-go from 0.36.1 to 0.38.0 (#7498) Bumps [github.com/getsentry/sentry-go](https://github.com/getsentry/sentry-go) from 0.36.1 to 0.38.0. - [Release notes](https://github.com/getsentry/sentry-go/releases) - [Changelog](https://github.com/getsentry/sentry-go/blob/master/CHANGELOG.md) - [Commits](https://github.com/getsentry/sentry-go/compare/v0.36.1...v0.38.0) --- updated-dependencies: - dependency-name: github.com/getsentry/sentry-go dependency-version: 0.38.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump go.etcd.io/etcd/client/v3 from 3.6.5 to 3.6.6 (#7496) Bumps [go.etcd.io/etcd/client/v3](https://github.com/etcd-io/etcd) from 3.6.5 to 3.6.6. - [Release notes](https://github.com/etcd-io/etcd/releases) - [Commits](https://github.com/etcd-io/etcd/compare/v3.6.5...v3.6.6) --- updated-dependencies: - dependency-name: go.etcd.io/etcd/client/v3 dependency-version: 3.6.6 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump github.com/hanwen/go-fuse/v2 from 2.8.0 to 2.9.0 (#7495) Bumps [github.com/hanwen/go-fuse/v2](https://github.com/hanwen/go-fuse) from 2.8.0 to 2.9.0. - [Commits](https://github.com/hanwen/go-fuse/compare/v2.8.0...v2.9.0) --- updated-dependencies: - dependency-name: github.com/hanwen/go-fuse/v2 dependency-version: 2.9.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump github.com/linxGnu/grocksdb from 1.10.2 to 1.10.3 (#7494) Bumps [github.com/linxGnu/grocksdb](https://github.com/linxGnu/grocksdb) from 1.10.2 to 1.10.3. 
- [Release notes](https://github.com/linxGnu/grocksdb/releases) - [Commits](https://github.com/linxGnu/grocksdb/compare/v1.10.2...v1.10.3) --- updated-dependencies: - dependency-name: github.com/linxGnu/grocksdb dependency-version: 1.10.3 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> * chore(deps): bump actions/dependency-review-action from 4.8.1 to 4.8.2 (#7493) Bumps [actions/dependency-review-action](https://github.com/actions/dependency-review-action) from 4.8.1 to 4.8.2. - [Release notes](https://github.com/actions/dependency-review-action/releases) - [Commits]( |
2 months ago |
|
|
d8cac1a6cc
|
Account Info (#7507)
* Account Info: add account info on s3.configure * address comments * Update command_s3_configure.go --------- Co-authored-by: chrislu <chris.lu@gmail.com> |
2 months ago |
|
|
0299e78de7
|
de/compress the fs meta file if filename ends with gz/gzip (#7500)
* de/compress the fs meta file if filename ends with gz/gzip * gemini code review * update help msg |
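A small sketch of the extension-based de/compression choice using the standard library's compress/gzip; openMetaOutput is a hypothetical helper, not the actual fs.meta code path:

```go
package main

import (
	"compress/gzip"
	"io"
	"os"
	"strings"
)

// openMetaOutput wraps the output file in a gzip writer when the file name
// ends in .gz or .gzip, mirroring the behavior described above. The returned
// close function flushes the gzip stream before closing the file.
func openMetaOutput(name string) (io.Writer, func() error, error) {
	f, err := os.Create(name)
	if err != nil {
		return nil, nil, err
	}
	if strings.HasSuffix(name, ".gz") || strings.HasSuffix(name, ".gzip") {
		zw := gzip.NewWriter(f)
		return zw, func() error {
			if err := zw.Close(); err != nil {
				f.Close()
				return err
			}
			return f.Close()
		}, nil
	}
	return f, f.Close, nil
}

func main() {
	w, closeFn, err := openMetaOutput("backup.meta.gz")
	if err != nil {
		panic(err)
	}
	defer closeFn()
	io.WriteString(w, "serialized filer metadata would go here\n")
}
```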
2 months ago |
|
|
0e69f7c916
|
Split logic for `volume.check.disk` into writable and read-only volume replicas. (#7476)
|
2 months ago |
|
|
79fa87bad4
|
Rework parameter passing for functions within `ec.rebuild` (#7445)
* Rework parameter passing for functions within `ec.rebuild`. This simplifies the overall codebase and allows parallelization to be handled cleanly via waitgroups. * fix copy source * add tests * remove tests not useful * fmt * nil check --------- Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: chrislu <chris.lu@gmail.com> |
2 months ago |
|
|
9744382a18
|
Rework parameter passing for functions within `volume.check.disk`. (#7448)
* Rework parameter passing for functions within `volume.check.disk`. We'll need to rework this logic to account for read-only volumes, and there are already way too many parameters shuffled around. Grouping these into a single struct simplifies the overall codebase. * similar fix * Improved Error Handling in Tests * propagate the errors * edge cases * edge case on modified time * clean up --------- Co-authored-by: chrislu <chris.lu@gmail.com> |
2 months ago |
|
|
76e4a51964
|
Unify the parameter to disable dry-run on weed shell commands to `-apply` (instead of `-force`). (#7450)
* Unify the parameter to disable dry-run on weed shell commands to --apply (instead of --force). * lint * refactor * Execution Order Corrected * handle deprecated force flag * fix help messages * Refactoring: Using flag.FlagSet.Visit() * consistent with other commands * Checks for both flags * fix toml files --------- Co-authored-by: chrislu <chris.lu@gmail.com> |
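A generic sketch of the flag.FlagSet.Visit() technique mentioned above for honoring a deprecated -force while steering users toward -apply; the flag names and messages are illustrative, not the exact weed shell wiring:

```go
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	fs := flag.NewFlagSet("ec.balance", flag.ContinueOnError)
	apply := fs.Bool("apply", false, "apply the plan instead of doing a dry run")
	force := fs.Bool("force", false, "deprecated, use -apply")
	if err := fs.Parse(os.Args[1:]); err != nil {
		return
	}

	// Visit only reports flags that were explicitly set on the command line,
	// so a deprecated -force can be detected and folded into -apply.
	fs.Visit(func(f *flag.Flag) {
		if f.Name == "force" {
			fmt.Fprintln(os.Stderr, "-force is deprecated, use -apply")
			*apply = *apply || *force
		}
	})
	fmt.Println("dry run:", !*apply)
}
```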
2 months ago |
|
|
f466ff1412
|
Nit: use `time.Duration`s instead of constants in seconds. (#7438)
Nit: use `time.Duration` values instead of constants in seconds. Makes for slightly more readable code. |
2 months ago |
|
|
1668c1042b
|
Rework collection resolution for `ec.rebuild`, in preparation for parallelization. (#7420)
* Rework collection resolution for `ec.rebuild`, in preparation for parallelization. See https://github.com/seaweedfs/seaweedfs/issues/7416. * simplify * Update weed/shell/command_ec_rebuild.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
2 months ago |
|
|
d745e6e41d
|
Fix masterclient vidmap race condition (#7412)
* fallback to check master
* clean up
* parsing
* refactor
* handle parse error
* return error
* avoid dup lookup
* use batch key
* dedup lookup logic
* address comments
* errors.Join(lookupErrors...)
* add a comment
* Fix: Critical data race in MasterClient vidMap
Fixes a critical data race where resetVidMap() was writing to the vidMap
pointer while other methods were reading it concurrently without synchronization.
Changes:
- Removed embedded *vidMap from MasterClient
- Added vidMapLock (sync.RWMutex) to protect vidMap pointer access
- Created safe accessor methods (GetLocations, GetDataCenter, etc.)
- Updated all direct vidMap accesses to use thread-safe methods
- Updated resetVidMap() to acquire write lock during pointer swap
The vidMap already has internal locking for its operations, but this fix
protects the vidMap pointer itself from concurrent read/write races.
Verified with: go test -race ./weed/wdclient/...
Impact:
- Prevents potential panics from concurrent pointer access
- No performance impact - uses RWMutex for read-heavy workloads
- Maintains backward compatibility through wrapper methods
* fmt
* Fix: Critical data race in MasterClient vidMap
Fixes a critical data race where resetVidMap() was writing to the vidMap
pointer while other methods were reading it concurrently without synchronization.
Changes:
- Removed embedded *vidMap from MasterClient struct
- Added vidMapLock (sync.RWMutex) to protect vidMap pointer access
- Created minimal public accessor methods for external packages:
* GetLocations, GetLocationsClone, GetVidLocations
* LookupFileId, LookupVolumeServerUrl
* GetDataCenter
- Internal code directly locks and accesses vidMap (no extra indirection)
- Updated resetVidMap() to acquire write lock during pointer swap
- Updated shell/commands.go to use GetDataCenter() method
Design Philosophy:
- vidMap already has internal locking for its map operations
- This fix specifically protects the vidMap *pointer* from concurrent access
- Public methods for external callers, direct locking for internal use
- Minimizes wrapper overhead while maintaining thread safety
Verified with: go test -race ./weed/wdclient/... (passes)
Impact:
- Prevents potential panics/crashes from data races
- Minimal performance impact (RWMutex for read-heavy workload)
- Maintains full backward compatibility
* fix more concurrent access
* reduce lock scope
* Optimize vidMap locking for better concurrency
Improved locking strategy based on the understanding that:
- vidMapLock protects the vidMap pointer from concurrent swaps
- vidMap has internal locks that protect its data structures
Changes:
1. Read operations: Grab pointer with RLock, release immediately, then operate
- Reduces lock hold time
- Allows resetVidMap to proceed sooner
- Methods: GetLocations, GetLocationsClone, GetVidLocations,
LookupVolumeServerUrl, GetDataCenter
2. Write operations: Changed from Lock() to RLock()
- RLock prevents pointer swap during operation
- Allows concurrent readers and other writers (serialized by vidMap's lock)
- Methods: addLocation, deleteLocation, addEcLocation, deleteEcLocation
Benefits:
- Significantly reduced lock contention
- Better concurrent performance under load
- Still prevents all race conditions
Verified with: go test -race ./weed/wdclient/... (passes)
* Further reduce lock contention in LookupVolumeIdsWithFallback
Optimized two loops that were holding RLock for extended periods:
Before:
- Held RLock during entire loop iteration
- Included string parsing and cache lookups
- Could block resetVidMap for significant time with large batches
After:
- Grab vidMap pointer with brief RLock
- Release lock immediately
- Perform all loop operations on local pointer
Impact:
- First loop: Cache check on initial volumeIds
- Second loop: Double-check after singleflight wait
Benefits:
- Minimal lock hold time (just pointer copy)
- resetVidMap no longer blocked by long loops
- Better concurrent performance with large volume ID lists
- Still thread-safe (vidMap methods have internal locks)
Verified with: go test -race ./weed/wdclient/... (passes)
* Add clarifying comments to vidMap helper functions
Added inline documentation to each helper function (addLocation, deleteLocation,
addEcLocation, deleteEcLocation) explaining the two-level locking strategy:
- RLock on vidMapLock prevents resetVidMap from swapping the pointer
- vidMap has internal locks that protect the actual map mutations
- This design provides optimal concurrency
The comments make it clear why RLock (not Lock) is correct and intentional,
preventing future confusion about the locking strategy.
* Improve encapsulation: Add shallowClone() method to vidMap
Added a shallowClone() method to vidMap to improve encapsulation and prevent
MasterClient from directly accessing vidMap's internal fields.
Changes:
1. Added vidMap.shallowClone() in vid_map.go
- Encapsulates the shallow copy logic within vidMap
- Makes vidMap responsible for its own state representation
- Documented that caller is responsible for thread safety
2. Simplified resetVidMap() in masterclient.go
- Uses tail := mc.vidMap.shallowClone() instead of manual field access
- Cleaner, more maintainable code
- Better adherence to encapsulation principles
Benefits:
- Improved code organization and maintainability
- vidMap internals are now properly encapsulated
- Easier to modify vidMap structure in the future
- More self-documenting code
Verified with: go test -race ./weed/wdclient/... (passes)
* Optimize locking: Reduce lock acquisitions and use helper methods
Two optimizations to further reduce lock contention and improve code consistency:
1. LookupFileIdWithFallback: Eliminated redundant lock acquisition
- Before: Two separate locks to get vidMap and dataCenter
- After: Single lock gets both values together
- Benefit: 50% reduction in lock/unlock overhead for this hot path
2. KeepConnected: Use GetDataCenter() helper for consistency
- Before: Manual lock/unlock to access DataCenter field
- After: Use existing GetDataCenter() helper method
- Benefit: Better encapsulation and code consistency
Impact:
- Reduced lock contention in high-traffic lookup path
- More consistent use of accessor methods throughout codebase
- Cleaner, more maintainable code
Verified with: go test -race ./weed/wdclient/... (passes)
* Refactor: Extract common locking patterns into helper methods
Eliminated code duplication by introducing two helper methods that encapsulate
the common locking patterns used throughout MasterClient:
1. getStableVidMap() - For read operations
- Acquires lock, gets pointer, releases immediately
- Returns stable snapshot for thread-safe reads
- Used by: GetLocations, GetLocationsClone, GetVidLocations,
LookupFileId, LookupVolumeServerUrl, GetDataCenter
2. withCurrentVidMap(f func(vm *vidMap)) - For write operations
- Holds RLock during callback execution
- Prevents pointer swap while allowing concurrent operations
- Used by: addLocation, deleteLocation, addEcLocation, deleteEcLocation
Benefits:
- Reduced code duplication (eliminated 48 lines of repetitive locking code)
- Centralized locking logic makes it easier to understand and maintain
- Self-documenting pattern through named helper methods
- Easier to modify locking strategy in the future (single point of change)
- Improved readability - accessor methods are now one-liners
Code size reduction: ~40% fewer lines for accessor/helper methods
Verified with: go test -race ./weed/wdclient/... (passes)
* consistent
* Fix cache pointer race condition with atomic.Pointer
Use atomic.Pointer for vidMap cache field to prevent data races
during cache trimming in resetVidMap. This addresses the race condition
where concurrent GetLocations calls could read the cache pointer while
resetVidMap is modifying it during cache chain trimming.
Changes:
- Changed cache field from *vidMap to atomic.Pointer[vidMap]
- Updated all cache access to use Load() and Store() atomic operations
- Updated shallowClone, GetLocations, deleteLocation, deleteEcLocation
- Updated resetVidMap to use atomic operations for cache trimming
* Merge: Resolve conflict in deleteEcLocation - keep atomic.Pointer and fix bug
Resolved merge conflict by combining:
1. Atomic pointer access pattern (from HEAD): cache.Load()
2. Correct method call (from fix): deleteEcLocation (not deleteLocation)
Resolution:
- Before (HEAD): cachedMap.deleteLocation() - WRONG, reintroduced bug
- Before (fix): vc.cache.deleteEcLocation() - RIGHT method, old pattern
- After (merged): cachedMap.deleteEcLocation() - RIGHT method, new pattern
This preserves both improvements:
✓ Thread-safe atomic.Pointer access pattern
✓ Correct recursive call to deleteEcLocation
Verified with: go test -race ./weed/wdclient/... (passes)
* Update vid_map.go
* remove shallow clone
* simplify
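A condensed sketch of the two-level locking described in this commit: an RWMutex guards the vidMap pointer, readers take a stable snapshot and release the lock immediately, and resetVidMap swaps the pointer under the write lock. The types are simplified stand-ins for the wdclient code (the real version also keeps a cache chain behind an atomic.Pointer and vidMap has its own internal locks):

```go
package main

import (
	"fmt"
	"sync"
)

// vidMap stands in for the wdclient volume-id-to-location map; its own
// internal locking for map mutations is omitted here.
type vidMap struct {
	locations map[uint32][]string
}

// masterClient protects the *pointer* to the current vidMap so that
// resetVidMap can swap it while readers keep using a stable snapshot.
type masterClient struct {
	vidMapLock sync.RWMutex
	vidMap     *vidMap
}

// getStableVidMap grabs the pointer under a brief read lock and releases it
// immediately; callers then operate on that snapshot without holding the lock.
func (mc *masterClient) getStableVidMap() *vidMap {
	mc.vidMapLock.RLock()
	defer mc.vidMapLock.RUnlock()
	return mc.vidMap
}

// resetVidMap swaps in a fresh map under the write lock.
func (mc *masterClient) resetVidMap() {
	mc.vidMapLock.Lock()
	defer mc.vidMapLock.Unlock()
	mc.vidMap = &vidMap{locations: map[uint32][]string{}}
}

func (mc *masterClient) GetLocations(vid uint32) []string {
	return mc.getStableVidMap().locations[vid]
}

func main() {
	mc := &masterClient{vidMap: &vidMap{locations: map[uint32][]string{1: {"127.0.0.1:8080"}}}}
	fmt.Println(mc.GetLocations(1))
	mc.resetVidMap()
	fmt.Println(mc.GetLocations(1)) // empty after the swap
}
```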
|
3 months ago |
|
|
208d7f24f4
|
Erasure Coding: Ec refactoring (#7396)
* refactor: add ECContext structure to encapsulate EC parameters
- Create ec_context.go with ECContext struct
- NewDefaultECContext() creates context with default 10+4 configuration
- Helper methods: CreateEncoder(), ToExt(), String()
- Foundation for cleaner function signatures
- No behavior change, still uses hardcoded 10+4
* refactor: update ec_encoder.go to use ECContext
- Add WriteEcFilesWithContext() and RebuildEcFilesWithContext() functions
- Keep old functions for backward compatibility (call new versions)
- Update all internal functions to accept ECContext parameter
- Use ctx.DataShards, ctx.ParityShards, ctx.TotalShards consistently
- Use ctx.CreateEncoder() instead of hardcoded reedsolomon.New()
- Use ctx.ToExt() for shard file extensions
- No behavior change, still uses default 10+4 configuration
* refactor: update ec_volume.go to use ECContext
- Add ECContext field to EcVolume struct
- Initialize ECContext with default configuration in NewEcVolume()
- Update LocateEcShardNeedleInterval() to use ECContext.DataShards
- Phase 1: Always uses default 10+4 configuration
- No behavior change
* refactor: add EC shard count fields to VolumeInfo protobuf
- Add data_shards_count field (field 8) to VolumeInfo message
- Add parity_shards_count field (field 9) to VolumeInfo message
- Fields are optional, 0 means use default (10+4)
- Backward compatible: fields added at end
- Phase 1: Foundation for future customization
* refactor: regenerate protobuf Go files with EC shard count fields
- Regenerated volume_server_pb/*.go with new EC fields
- DataShardsCount and ParityShardsCount accessors added to VolumeInfo
- No behavior change, fields not yet used
* refactor: update VolumeEcShardsGenerate to use ECContext
- Create ECContext with default configuration in VolumeEcShardsGenerate
- Use ecCtx.TotalShards and ecCtx.ToExt() in cleanup
- Call WriteEcFilesWithContext() instead of WriteEcFiles()
- Save EC configuration (DataShardsCount, ParityShardsCount) to VolumeInfo
- Log EC context being used
- Phase 1: Always uses default 10+4 configuration
- No behavior change
* fmt
* refactor: update ec_test.go to use ECContext
- Update TestEncodingDecoding to create and use ECContext
- Update validateFiles() to accept ECContext parameter
- Update removeGeneratedFiles() to use ctx.TotalShards and ctx.ToExt()
- Test passes with default 10+4 configuration
* refactor: use EcShardConfig message instead of separate fields
* optimize: pre-calculate row sizes in EC encoding loop
* refactor: replace TotalShards field with Total() method
- Remove TotalShards field from ECContext to avoid field drift
- Add Total() method that computes DataShards + ParityShards
- Update all references to use ctx.Total() instead of ctx.TotalShards
- Read EC config from VolumeInfo when loading EC volumes
- Read data shard count from .vif in VolumeEcShardsToVolume
- Use >= instead of > for exact boundary handling in encoding loops
* optimize: simplify VolumeEcShardsToVolume to use existing EC context
- Remove redundant CollectEcShards call
- Remove redundant .vif file loading
- Use v.ECContext.DataShards directly (already loaded by NewEcVolume)
- Slice tempShards instead of collecting again
* refactor: rename MaxShardId to MaxShardCount for clarity
- Change from MaxShardId=31 to MaxShardCount=32
- Eliminates confusing +1 arithmetic (MaxShardId+1)
- More intuitive: MaxShardCount directly represents the limit
fix: support custom EC ratios beyond 14 shards in VolumeEcShardsToVolume
- Add MaxShardId constant (31, since ShardBits is uint32)
- Use MaxShardId+1 (32) instead of TotalShardsCount (14) for tempShards buffer
- Prevents panic when slicing for volumes with >14 total shards
- Critical fix for custom EC configurations like 20+10
* fix: add validation for EC shard counts from VolumeInfo
- Validate DataShards/ParityShards are positive and within MaxShardCount
- Prevent zero or invalid values that could cause divide-by-zero
- Fallback to defaults if validation fails, with warning log
- VolumeEcShardsGenerate now preserves existing EC config when regenerating
- Critical safety fix for corrupted or legacy .vif files
* fix: RebuildEcFiles now loads EC config from .vif file
- Critical: RebuildEcFiles was always using default 10+4 config
- Now loads actual EC config from .vif file when rebuilding shards
- Validates config before use (positive shards, within MaxShardCount)
- Falls back to default if .vif missing or invalid
- Prevents data corruption when rebuilding custom EC volumes
* add: defensive validation for dataShards in VolumeEcShardsToVolume
- Validate dataShards > 0 and <= MaxShardCount before use
- Prevents panic from corrupted or uninitialized ECContext
- Returns clear error message instead of panic
- Defense-in-depth: validates even though upstream should catch issues
* fix: replace TotalShardsCount with MaxShardCount for custom EC ratio support
Critical fixes to support custom EC ratios > 14 shards:
disk_location_ec.go:
- validateEcVolume: Check shards 0-31 instead of 0-13 during validation
- removeEcVolumeFiles: Remove shards 0-31 instead of 0-13 during cleanup
ec_volume_info.go ShardBits methods:
- ShardIds(): Iterate up to MaxShardCount (32) instead of TotalShardsCount (14)
- ToUint32Slice(): Iterate up to MaxShardCount (32)
- IndexToShardId(): Iterate up to MaxShardCount (32)
- MinusParityShards(): Remove shards 10-31 instead of 10-13 (added note about Phase 2)
- Minus() shard size copy: Iterate up to MaxShardCount (32)
- resizeShardSizes(): Iterate up to MaxShardCount (32)
Without these changes:
- Custom EC ratios > 14 total shards would fail validation on startup
- Shards 14-31 would never be discovered or cleaned up
- ShardBits operations would miss shards >= 14
These changes are backward compatible - MaxShardCount (32) includes
the default TotalShardsCount (14), so existing 10+4 volumes work as before.
* fix: replace TotalShardsCount with MaxShardCount in critical data structures
Critical fixes for buffer allocations and loops that must support
custom EC ratios up to 32 shards:
Data Structures:
- store_ec.go:354: Buffer allocation for shard recovery (bufs array)
- topology_ec.go:14: EcShardLocations.Locations fixed array size
- command_ec_rebuild.go:268: EC shard map allocation
- command_ec_common.go:626: Shard-to-locations map allocation
Shard Discovery Loops:
- ec_task.go:378: Loop to find generated shard files
- ec_shard_management.go: All 8 loops that check/count EC shards
These changes are critical because:
1. Buffer allocations sized to 14 would cause index-out-of-bounds panics
when accessing shards 14-31
2. Fixed arrays sized to 14 would truncate shard location data
3. Loops limited to 0-13 would never discover/manage shards 14-31
Note: command_ec_encode.go:208 intentionally NOT changed - it creates
shard IDs to mount after encoding. In Phase 1 we always generate 14
shards, so this remains TotalShardsCount and will be made dynamic in
Phase 2 based on actual EC context.
Without these fixes, custom EC ratios > 14 total shards would cause:
- Runtime panics (array index out of bounds)
- Data loss (shards 14-31 never discovered/tracked)
- Incomplete shard management (missing shards not detected)
* refactor: move MaxShardCount constant to ec_encoder.go
Moved MaxShardCount from ec_volume_info.go to ec_encoder.go to group it
with other shard count constants (DataShardsCount, ParityShardsCount,
TotalShardsCount). This improves code organization and makes it easier
to understand the relationship between these constants.
Location: ec_encoder.go line 22, between TotalShardsCount and MinTotalDisks
* improve: add defensive programming and better error messages for EC
Code review improvements from CodeRabbit:
1. ShardBits Guardrails (ec_volume_info.go):
- AddShardId, RemoveShardId: Reject shard IDs >= MaxShardCount
- HasShardId: Return false for out-of-range shard IDs
- Prevents silent no-ops from bit shifts with invalid IDs
2. Future-Proof Regex (disk_location_ec.go):
- Updated regex from \.ec[0-9][0-9] to \.ec\d{2,3}
- Now matches .ec00 through .ec999 (currently .ec00-.ec31 used)
- Supports future increases to MaxShardCount beyond 99
3. Better Error Messages (volume_grpc_erasure_coding.go):
- Include valid range (1..32) in dataShards validation error
- Helps operators quickly identify the problem
4. Validation Before Save (volume_grpc_erasure_coding.go):
- Validate ECContext (DataShards > 0, ParityShards > 0, Total <= MaxShardCount)
- Log EC config being saved to .vif for debugging
- Prevents writing invalid configs to disk
These changes improve robustness and debuggability without changing
core functionality.
* fmt
* fix: critical bugs from code review + clean up comments
Critical bug fixes:
1. command_ec_rebuild.go: Fixed indentation causing compilation error
- Properly nested if/for blocks in registerEcNode
2. ec_shard_management.go: Fixed isComplete logic incorrectly using MaxShardCount
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Default 10+4 volumes were being incorrectly reported as incomplete
- Missing shards 14-31 were being incorrectly reported as missing
- Fixed in 4 locations: volume completeness checks and getMissingShards
3. ec_volume_info.go: Fixed MinusParityShards removing too many shards
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Was incorrectly removing shard IDs 10-31 instead of just 10-13
Comment cleanup:
- Removed Phase 1/Phase 2 references (development plan context)
- Replaced with clear statements about default 10+4 configuration
- SeaweedFS repo uses fixed 10+4 EC ratio, no phases needed
Root cause: Over-aggressive replacement of TotalShardsCount with MaxShardCount.
MaxShardCount (32) is the limit for buffer allocations and shard ID loops,
but TotalShardsCount (14) must be used for default EC configuration logic.
* fix: add defensive bounds checks and compute actual shard counts
Critical fixes from code review:
1. topology_ec.go: Add defensive bounds checks to AddShard/DeleteShard
- Prevent panic when shardId >= MaxShardCount (32)
- Return false instead of crashing on out-of-range shard IDs
2. command_ec_common.go: Fix doBalanceEcShardsAcrossRacks
- Was using hardcoded TotalShardsCount (14) for all volumes
- Now computes actual totalShardsForVolume from rackToShardCount
- Fixes incorrect rebalancing for volumes with custom EC ratios
- Example: 5+2=7 shards would incorrectly use 14 as average
These fixes improve robustness and prepare for future custom EC ratios
without changing current behavior for default 10+4 volumes.
Note: MinusParityShards and ec_task.go intentionally NOT changed for
seaweedfs repo - these will be enhanced in seaweed-enterprise repo
where custom EC ratio configuration is added.
* fmt
* style: make MaxShardCount type casting explicit in loops
Improved code clarity by explicitly casting MaxShardCount to the
appropriate type when used in loop comparisons:
- ShardId comparisons: Cast to ShardId(MaxShardCount)
- uint32 comparisons: Cast to uint32(MaxShardCount)
Changed in 5 locations:
- Minus() loop (line 90)
- ShardIds() loop (line 143)
- ToUint32Slice() loop (line 152)
- IndexToShardId() loop (line 219)
- resizeShardSizes() loop (line 248)
This makes the intent explicit and improves type safety readability.
No functional changes - purely a style improvement.
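A sketch of the ECContext shape described in this commit, using the klauspost/reedsolomon encoder the message refers to; the field and method names follow the commit text, but the real struct in the erasure_coding package may differ in detail:

```go
package main

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
)

// ECContext carries the data/parity split so shard counts are no longer
// hardcoded at call sites.
type ECContext struct {
	DataShards   int
	ParityShards int
}

// NewDefaultECContext returns the default 10+4 configuration.
func NewDefaultECContext() *ECContext {
	return &ECContext{DataShards: 10, ParityShards: 4}
}

// Total is computed rather than stored, so the two fields cannot drift apart.
func (c *ECContext) Total() int { return c.DataShards + c.ParityShards }

// CreateEncoder builds a Reed-Solomon encoder for this configuration.
func (c *ECContext) CreateEncoder() (reedsolomon.Encoder, error) {
	return reedsolomon.New(c.DataShards, c.ParityShards)
}

// ToExt renders a shard file extension such as ".ec00" .. ".ec13".
func (c *ECContext) ToExt(shardId int) string { return fmt.Sprintf(".ec%02d", shardId) }

func main() {
	ctx := NewDefaultECContext()
	enc, err := ctx.CreateEncoder()
	if err != nil {
		panic(err)
	}
	fmt.Println(ctx.Total(), ctx.ToExt(13), enc != nil) // 14 .ec13 true
}
```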
|
3 months ago |
|
|
6a8c53bc44
|
Filer: batch deletion operations to return individual error results (#7382)
* batch deletion operations to return individual error results Modify batch deletion operations to return individual error results instead of one aggregated error, enabling better tracking of which specific files failed to delete (helping reduce orphan file issues). * Simplified logging logic * Optimized nested loop * handles the edge case where the RPC succeeds but connection cleanup fails * simplify * simplify * ignore 'not found' errors here |
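A minimal sketch of returning per-file results instead of one aggregated error; DeleteResult, deleteBatch, and the sample file ids are illustrative, while the real filer code issues the deletes against volume servers over gRPC:

```go
package main

import (
	"errors"
	"fmt"
)

var ErrNotFound = errors.New("not found")

// DeleteResult reports the outcome for one file id instead of folding all
// failures into a single aggregated error.
type DeleteResult struct {
	FileId string
	Err    error
}

// deleteBatch calls a per-item delete function and records an individual
// result for each id, treating "not found" as success (already gone).
func deleteBatch(fileIds []string, deleteOne func(string) error) []DeleteResult {
	results := make([]DeleteResult, 0, len(fileIds))
	for _, id := range fileIds {
		err := deleteOne(id)
		if err != nil && errors.Is(err, ErrNotFound) {
			err = nil // already deleted; not an orphan risk
		}
		results = append(results, DeleteResult{FileId: id, Err: err})
	}
	return results
}

func main() {
	results := deleteBatch([]string{"3,01637037d6", "3,016370388e"}, func(id string) error {
		if id == "3,016370388e" {
			return ErrNotFound
		}
		return nil
	})
	for _, r := range results {
		fmt.Println(r.FileId, r.Err)
	}
}
```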
3 months ago |
|
|
37af41fbfe
|
Shell: Added a helper function `isHelpRequest()` (#7380)
* Added a helper function `isHelpRequest()` * also handles combined short flags like -lh or -hl * Created handleHelpRequest() helper function that encapsulates both checking for help flags and printing the help message * Limit to reasonable length (2-4 chars total) to avoid matching long options like -verbose |
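A rough sketch of the help-detection rule described above (explicit -h/--help plus short combined flags of 2-4 characters containing 'h'); this is an approximation, not the exact weed shell implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// isHelpRequest matches -h/--help directly, and combined short flags such as
// -lh or -hl, while ignoring long options like -verbose by limiting the
// combined form to 2-4 characters total.
func isHelpRequest(args []string) bool {
	for _, arg := range args {
		if arg == "-h" || arg == "-help" || arg == "--help" {
			return true
		}
		if strings.HasPrefix(arg, "-") && !strings.HasPrefix(arg, "--") &&
			len(arg) >= 2 && len(arg) <= 4 && strings.Contains(arg[1:], "h") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isHelpRequest([]string{"-lh"}))      // true
	fmt.Println(isHelpRequest([]string{"-verbose"})) // false
}
```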
3 months ago |
|
|
922bb17194
|
Store full shell command in shell history (#7378)
Store shell command in history before parsing. Store the shell command in history before parsing it. This will allow users to press the 'Up' arrow and see the entire command. |
3 months ago |
|
|
832df5265f
|
Fix 'NaN%' issue when running volume.fsck (#7368)
* Fix 'NaN%' issue when running volume.fsck - Running `volume.fsck` on an empty cluster will display 'NaN%'. * Refactor - Extract count of orphan chunks in summary to a new var. - Restore handling for 'NaN' for individual volumes. It's not necessary because the check is already done. * Make code more idiomatic |
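A tiny sketch of the guard that avoids 'NaN%': when the total chunk count is zero, report 0% instead of dividing by zero; the function and parameter names are illustrative:

```go
package main

import "fmt"

// orphanPercentage guards the division so an empty cluster (zero chunks)
// reports 0% instead of NaN%.
func orphanPercentage(orphanChunkCount, totalChunkCount uint64) float64 {
	if totalChunkCount == 0 {
		return 0
	}
	return float64(orphanChunkCount) * 100 / float64(totalChunkCount)
}

func main() {
	fmt.Printf("%.2f%%\n", orphanPercentage(0, 0))  // 0.00% on an empty cluster
	fmt.Printf("%.2f%%\n", orphanPercentage(3, 12)) // 25.00%
}
```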
3 months ago |