Lisandro Pin
6b98b52acc
Fix reporting of EC shard sizes from nodes to masters. (#7835)
SeaweedFS tracks EC shard sizes in topology data structures, but this information is never
relayed to master servers :( The end result is that commands reporting disk usage, such
as `volume.list` and `cluster.status`, yield incorrect figures when EC shards are present.
As an example, on a simple 5-node test cluster, before...
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[1 5]
Disk hdd total size:88967096 file_count:172
DataNode 192.168.10.111:9001 total size:88967096 file_count:172
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[0 4]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9002 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[2 6]
Disk hdd total size:77267536 file_count:166
DataNode 192.168.10.111:9003 total size:77267536 file_count:166
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[3 7]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9004 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[8 9 10 11 12 13]
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:498703896 file_count:1014
DataCenter DefaultDataCenter total size:498703896 file_count:1014
total size:498703896 file_count:1014
```
...and after:
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[1 5 9] sizes:[1:8.00 MiB 5:8.00 MiB 9:8.00 MiB] total:24.00 MiB
Disk hdd total size:81761800 file_count:161
DataNode 192.168.10.111:9001 total size:81761800 file_count:161
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[11 12 13] sizes:[11:8.00 MiB 12:8.00 MiB 13:8.00 MiB] total:24.00 MiB
Disk hdd total size:88678712 file_count:170
DataNode 192.168.10.111:9002 total size:88678712 file_count:170
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[0 4 8] sizes:[0:8.00 MiB 4:8.00 MiB 8:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9003 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[2 6 10] sizes:[2:8.00 MiB 6:8.00 MiB 10:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9004 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[3 7] sizes:[3:8.00 MiB 7:8.00 MiB] total:16.00 MiB
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:511321536 file_count:993
DataCenter DefaultDataCenter total size:511321536 file_count:993
total size:511321536 file_count:993
```
2 months ago
Lisandro Pin
5725d2c464
Fix reporting of EC shard sizes from nodes to masters.
2 months ago
Lisandro Pin
76e4a51964
Unify the parameter to disable dry-run on weed shell commands to `-apply` (instead of `-force`). (#7450)
* Unify the parameter to disable dry-run on weed shell commands to --apply (instead of --force).
* lint
* refactor
* corrected execution order
* handle deprecated force flag
* fix help messages
* Refactoring: use flag.FlagSet.Visit() (see the sketch below)
* consistent with other commands
* Checks for both flags
* fix toml files
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
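A minimal sketch of the deprecated-flag handling this entry describes, using only the standard `flag` package; the flag names come from the commit, but the command wiring is an assumption, not the actual weed shell code. `flag.FlagSet.Visit()` walks only the flags explicitly set on the command line, which is what lets the command tell a real `-force` apart from its default value.
```
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	fs := flag.NewFlagSet("volume.balance", flag.ExitOnError)
	apply := fs.Bool("apply", false, "actually apply the changes")
	force := fs.Bool("force", false, "deprecated, use -apply instead")
	fs.Parse(os.Args[1:])

	// Visit only walks flags that were explicitly set on the command line,
	// so it can tell an explicit -force apart from the flag's default value.
	applySet, forceSet := false, false
	fs.Visit(func(f *flag.Flag) {
		switch f.Name {
		case "apply":
			applySet = true
		case "force":
			forceSet = true
		}
	})

	if applySet && forceSet {
		fmt.Println("use only one of -apply or -force")
		os.Exit(1)
	}
	if forceSet {
		fmt.Println("warning: -force is deprecated, use -apply")
		*apply = *force
	}
	fmt.Println("dry-run disabled:", *apply)
}
```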
3 months ago
Chris Lu
41aedaa687
Shell: support regular expression for collection selection (#7158)
* support regular expression for collection selection
* refactor
* ordering
* fix exact match
* Update command_volume_balance_test.go
* simplify
* Update command_volume_balance.go
* comment
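A hedged sketch of regex-based collection selection with an exact-match fast path, as the "fix exact match" bullet suggests; `matchesCollection` is a made-up helper, not the actual weed shell implementation.
```
package main

import (
	"fmt"
	"regexp"
)

// matchesCollection tries an exact match first, then falls back to an
// anchored regular expression so "pics" never accidentally matches "pics2024".
func matchesCollection(pattern, collection string) (bool, error) {
	if pattern == collection {
		return true, nil // exact names always match themselves
	}
	re, err := regexp.Compile("^" + pattern + "$")
	if err != nil {
		return false, err
	}
	return re.MatchString(collection), nil
}

func main() {
	for _, c := range []string{"pics", "pics2024", "logs"} {
		ok, _ := matchesCollection("pics.*", c)
		fmt.Printf("%-9s matches: %v\n", c, ok)
	}
}
```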
6 months ago
FQHSLycopene
e1f8da0db3
fix: consider EC shard count in volume.balance capacity calculation (#7034)
* fix: consider EC shard count in volume.balance capacity calculation
* update the implementation of capacityByMaxVolumeCount to include the EC shard usage
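A sketch of the capacity arithmetic this fix points at, assuming the usual RS(10,4) erasure-coding layout; `freeVolumeSlots` and the constant are illustrative, not SeaweedFS's actual `capacityByMaxVolumeCount` code.
```
package main

import "fmt"

// An EC volume is cut into shards, so a node holding shards consumes a
// fraction of a volume slot per shard; 10 data shards per volume matches
// the usual RS(10,4) layout.
const dataShardsPerVolume = 10

// freeVolumeSlots estimates remaining capacity once EC shards are counted,
// the quantity the capacity calculation needs to reflect.
func freeVolumeSlots(maxVolumes, volumes, ecShards int) int {
	ecVolumeEquivalent := (ecShards + dataShardsPerVolume - 1) / dataShardsPerVolume
	return maxVolumes - volumes - ecVolumeEquivalent
}

func main() {
	// 8 slots, 2 normal volumes, 13 EC shards: the shards cost 2 more slots.
	fmt.Println(freeVolumeSlots(8, 2, 13)) // prints 4
}
```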
7 months ago
ftong2020
e7f2936dcc
fix force arg being dropped during volume balance command (#6432)
1 year ago
chrislu
ec155022e7
"golang.org/x/exp/slices" => "slices" and go fmt
1 year ago
Konstantin Lebedev
0a4b1909a2
[shell] only apply the balancing for writable volumes (#6346)
1 year ago
Konstantin Lebedev
9a5d3e7b31
[shell] add admin noLock for balance (#6209)
1 year ago
Konstantin Lebedev
5bddf0c085
[shell] volume.balance: collect volume servers by dc/rack/node (#6191)
* chore: balance by rack
* fix: rm check lock
* fix: selected racks
* fix: selected nodes
* fix: contains
* fix: one collectVolumeServersByDcRackNode
* fix: revert lock and add lock
* fix: panic test
* revert noLock
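A rough sketch of collecting volume servers per data center, rack, and node; the `node` type and `groupByDcRackNode` are made up for illustration and are not the actual collectVolumeServersByDcRackNode implementation.
```
package main

import "fmt"

// node is a minimal stand-in for a volume server's position in the topology.
type node struct {
	DataCenter, Rack, Address string
}

// groupByDcRackNode buckets volume servers per data center and rack so the
// balancer can work within each level of the topology.
func groupByDcRackNode(nodes []node) map[string]map[string][]node {
	byDc := make(map[string]map[string][]node)
	for _, n := range nodes {
		if byDc[n.DataCenter] == nil {
			byDc[n.DataCenter] = make(map[string][]node)
		}
		byDc[n.DataCenter][n.Rack] = append(byDc[n.DataCenter][n.Rack], n)
	}
	return byDc
}

func main() {
	groups := groupByDcRackNode([]node{
		{"dc1", "rack1", "192.168.10.111:9001"},
		{"dc1", "rack1", "192.168.10.111:9002"},
		{"dc1", "rack2", "192.168.10.111:9003"},
	})
	fmt.Println(len(groups["dc1"]["rack1"]), "servers in dc1/rack1")
}
```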
1 year ago
chrislu
ec30a504ba
refactor
1 year ago
chrislu
701abbb9df
add IsResourceHeavy() to command interface
1 year ago
skycope
6e4b9181f5
fix "volume.fix.replication" move many replications only to one volumeServer ( #5522 )
2 years ago
Benoît Knecht
1f08010ef0
weed/shell: Clean up volume balance logic (#5241)
2 years ago
Benoît Knecht
a6aee847b9
weed/shell: Fix volume.balance logic (#5238)
2 years ago
chrislu
645ae8c57b
Revert "Revert "Merge branch 'master' of https://github.com/seaweedfs/seaweedfs ""
This reverts commit 8cb42c39
2 years ago
chrislu
8cb42c39ad
Revert "Merge branch 'master' of https://github.com/seaweedfs/seaweedfs "
This reverts commit 2e5aa06026 , reversing
changes made to 4d414f54a2 .
2 years ago
dependabot[bot]
a04bd4d26f
Bump github.com/rclone/rclone from 1.63.1 to 1.64.0 (#4850)
* Bump github.com/rclone/rclone from 1.63.1 to 1.64.0
Bumps [github.com/rclone/rclone](https://github.com/rclone/rclone) from 1.63.1 to 1.64.0.
- [Release notes](https://github.com/rclone/rclone/releases)
- [Changelog](https://github.com/rclone/rclone/blob/master/RELEASE.md)
- [Commits](https://github.com/rclone/rclone/compare/v1.63.1...v1.64.0)
---
updated-dependencies:
- dependency-name: github.com/rclone/rclone
dependency-type: direct:production
update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot] <support@github.com>
* API changes
* go mod
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
2 years ago
themarkchen
7592d013fe
fix shell volume.balance bug (#4447)
3 years ago
chrislu
5869945f16
avoid infinite loop
fix https://github.com/seaweedfs/seaweedfs/issues/4195#issuecomment-1426100904
3 years ago
chrislu
dc4ed2cd9b
do not move cloud tier volumes
fix https://github.com/seaweedfs/seaweedfs/issues/4195
3 years ago
chrislu
d5364218b2
adjust help message
3 years ago
chrislu
0623bf582e
include ec shard for capacityByFreeVolumeCount
3 years ago
chrislu
049f040c3c
refactor
3 years ago
chrislu
f9383aa726
refactor to change capacity data type
3 years ago
chrislu
fc4208d128
volume.balance: default to balance ALL_COLLECTIONS
3 years ago
chrislu
4260804613
volume.balance: avoid moving out volume with max=1
3 years ago
chrislu
d653c5f811
unused
3 years ago
chrislu
676e27c589
shell: stop long running jobs if lock is lost
4 years ago
qzh
74b53729e1
feat(weed.move): add a speed limit parameter for moving files (#3478)
* feat(weed.move): add a speed limit parameter for moving files
* fix(weed.move): set the default value of ioBytePerSecond to vs.compactionBytePerSecond
Co-authored-by: zhihao.qu <zhihao.qu@ly.com>
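A hedged sketch of what a byte-per-second I/O limit like the ioBytePerSecond parameter could look like, modeled as a throttled io.Reader; this is illustrative only, not the actual weed implementation.
```
package main

import (
	"fmt"
	"io"
	"strings"
	"time"
)

// throttledReader caps reads at bytesPerSec, sleeping off the remainder of
// each chunk's time slice.
type throttledReader struct {
	r           io.Reader
	bytesPerSec int
}

func (t *throttledReader) Read(p []byte) (int, error) {
	if len(p) > t.bytesPerSec {
		p = p[:t.bytesPerSec] // cap each read to one second's budget
	}
	start := time.Now()
	n, err := t.r.Read(p)
	wait := time.Duration(n) * time.Second / time.Duration(t.bytesPerSec)
	if elapsed := time.Since(start); elapsed < wait {
		time.Sleep(wait - elapsed)
	}
	return n, err
}

func main() {
	src := strings.NewReader(strings.Repeat("x", 64))
	start := time.Now()
	n, _ := io.Copy(io.Discard, &throttledReader{r: src, bytesPerSec: 32})
	fmt.Printf("copied %d bytes in %v\n", n, time.Since(start).Round(time.Second))
}
```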
4 years ago
chrislu
26dbc6c905
move to https://github.com/seaweedfs/seaweedfs
4 years ago
chrislu
141f662734
edge case checking when volume server does not have capacity to balance
fix https://github.com/chrislusf/seaweedfs/issues/3257
4 years ago
chrislu
6793bc853c
help message when in simulation mode
4 years ago
justin
3551ca2fcf
enhancement: replace sort.Slice with slices.SortFunc to reduce reflection
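A small illustration of this swap, independent of the SeaweedFS code: sort.Slice takes its slice as interface{} and sorts through reflection, while the generic slices.SortFunc (standard library since Go 1.21) compares concrete values directly.
```
package main

import (
	"cmp"
	"fmt"
	"slices"
	"sort"
)

type volume struct {
	id   int
	size int64
}

func main() {
	vols := []volume{{3, 300}, {1, 100}, {2, 200}}

	// sort.Slice takes the slice as interface{} and sorts via reflection.
	sort.Slice(vols, func(i, j int) bool { return vols[i].size < vols[j].size })

	// slices.SortFunc is generic: the comparator runs on concrete values,
	// with no reflection involved.
	slices.SortFunc(vols, func(a, b volume) int { return cmp.Compare(a.size, b.size) })

	fmt.Println(vols)
}
```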
4 years ago
chrislu
f18803424a
volume.balance: add delay during tight loop
fix https://github.com/chrislusf/seaweedfs/issues/2637
4 years ago
chrislu
a2d3f89c7b
add lock messages
4 years ago
Chris Lu
119d5908dd
shell: do not need to lock to see volume -h
4 years ago
Chris Lu
e5fc35ed0c
change server address from string to a type
4 years ago
Chris Lu
057ef429ac
format
5 years ago
Chris Lu
0526db12e2
do not treat read-only volumes differently
5 years ago
Chris Lu
e50a5b8e28
minor: print disk type
5 years ago
Chris Lu
db6275a0c8
print out balance ratio
5 years ago
Chris Lu
69a6da7969
avoid failing on tail error
5 years ago
Chris Lu
2ae9705442
adjust text
5 years ago
Chris Lu
3739717092
Revert "adds a test"
This reverts commit f690643b47.
5 years ago
Chris Lu
f690643b47
adds a test
5 years ago
Chris Lu
6de786185d
volume.balance: balance read-only volumes first
5 years ago
Chris Lu
a4cfffc264
shell: fix moving volume, volume server evacuate
fix https://github.com/chrislusf/seaweedfs/issues/1534
5 years ago
Chris Lu
352ba23f83
revert previous change
revert 29e62aba00
5 years ago
Chris Lu
29e62aba00
possible fix for volume balance
address https://github.com/chrislusf/seaweedfs/issues/1534
5 years ago