Lisandro Pin
|
6b98b52acc
|
Fix reporting of EC shard sizes from nodes to masters. (#7835)
SeaweedFS tracks EC shard sizes in topology data structures, but this information is never
relayed to master servers :( The end result is that commands reporting disk usage, such as
`volume.list` and `cluster.status`, yield incorrect figures when EC shards are present
(a minimal sketch of the shard-size aggregation follows the listings below).
As an example, on a simple 5-node test cluster, before...
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[1 5]
Disk hdd total size:88967096 file_count:172
DataNode 192.168.10.111:9001 total size:88967096 file_count:172
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[0 4]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9002 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[2 6]
Disk hdd total size:77267536 file_count:166
DataNode 192.168.10.111:9003 total size:77267536 file_count:166
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[3 7]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9004 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[8 9 10 11 12 13]
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:498703896 file_count:1014
DataCenter DefaultDataCenter total size:498703896 file_count:1014
total size:498703896 file_count:1014
```
...and after:
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[1 5 9] sizes:[1:8.00 MiB 5:8.00 MiB 9:8.00 MiB] total:24.00 MiB
Disk hdd total size:81761800 file_count:161
DataNode 192.168.10.111:9001 total size:81761800 file_count:161
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[11 12 13] sizes:[11:8.00 MiB 12:8.00 MiB 13:8.00 MiB] total:24.00 MiB
Disk hdd total size:88678712 file_count:170
DataNode 192.168.10.111:9002 total size:88678712 file_count:170
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[0 4 8] sizes:[0:8.00 MiB 4:8.00 MiB 8:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9003 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[2 6 10] sizes:[2:8.00 MiB 6:8.00 MiB 10:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9004 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[3 7] sizes:[3:8.00 MiB 7:8.00 MiB] total:16.00 MiB
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:511321536 file_count:993
DataCenter DefaultDataCenter total size:511321536 file_count:993
total size:511321536 file_count:993
```
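The gist of the fix can be sketched in a few lines of Go. This is a hypothetical illustration, not SeaweedFS's actual types or API: the idea is simply that a volume server reports the size of each EC shard it holds, and the master-side topology sums those sizes into its per-disk and per-node usage totals.
```go
package main

import (
	"fmt"
	"sort"
)

// ecShardReport is an illustrative stand-in for what a volume server could
// report per EC volume: the shards it holds and the size of each one.
type ecShardReport struct {
	VolumeId   uint32
	Collection string
	ShardSizes map[int]int64 // shard id -> size in bytes
}

// totalSize sums the reported shard sizes, the figure that was previously
// missing from the master's disk-usage accounting.
func (r ecShardReport) totalSize() int64 {
	var total int64
	for _, size := range r.ShardSizes {
		total += size
	}
	return total
}

func main() {
	// Mirrors the first node in the "after" listing: shards 1, 5 and 9 at 8 MiB each.
	report := ecShardReport{
		VolumeId:   1,
		ShardSizes: map[int]int64{1: 8 << 20, 5: 8 << 20, 9: 8 << 20},
	}

	ids := make([]int, 0, len(report.ShardSizes))
	for id := range report.ShardSizes {
		ids = append(ids, id)
	}
	sort.Ints(ids)

	fmt.Printf("ec volume id:%d shards:%v total:%d bytes\n",
		report.VolumeId, ids, report.totalSize())
}
```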
|
4 days ago |
Lisandro Pin
|
dddb0f0ae5
|
Fix update of `SeaweedFS_volumeServer_volumes` gauge metrics when EC shards are unmounted (#6776)
|
8 months ago |
chrislu
|
c9f3448692
|
ReadAt may return io.EOF at end of file
related to https://github.com/seaweedfs/seaweedfs/issues/6219
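Per the `io.ReaderAt` contract, `io.EOF` may arrive together with the bytes that were read when the requested range reaches the end of the file, so it should not be treated as a failure when data was returned. A minimal sketch of that calling pattern (the `readChunk` helper is hypothetical, not SeaweedFS code):
```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// readChunk reads up to len(buf) bytes at offset. ReadAt may return io.EOF
// together with a full or partial read when the range touches the end of the
// source, so io.EOF with n > 0 is not an error for the caller.
func readChunk(r io.ReaderAt, offset int64, buf []byte) (int, error) {
	n, err := r.ReadAt(buf, offset)
	if err == io.EOF && n > 0 {
		return n, nil // got data; EOF just marks the end of the source
	}
	return n, err
}

func main() {
	src := strings.NewReader("needle data")
	buf := make([]byte, 16) // larger than the remaining bytes
	n, err := readChunk(src, 0, buf)
	fmt.Printf("read %d bytes, err=%v, data=%q\n", n, err, buf[:n])
}
```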
|
1 year ago |
steve.wei
|
0bdf121e51
|
rename VolumeServerVolumeGauge (#5504)
|
2 years ago |
chrislu
|
6ebe26a765
|
Revert "Revert "Revert "Add disk type to prometheus metrics" (#4777)""
This reverts commit 567d788928.
|
2 years ago |
chrislu
|
7540d43ee9
|
Revert "Revert "fix compilation""
This reverts commit f9abfd0b03.
|
2 years ago |
chrislu
|
249c0e06ef
|
Revert "fix compilation"
This reverts commit 451ec6504d.
|
2 years ago |
chrislu
|
451ec6504d
|
fix compilation
|
2 years ago |
chrislu
|
f9abfd0b03
|
Revert "fix compilation"
This reverts commit 0483ba3889.
|
2 years ago |
chrislu
|
0483ba3889
|
fix compilation
|
2 years ago |
chrislu
|
567d788928
|
Revert "Revert "Add disk type to prometheus metrics" (#4777)"
This reverts commit 9215ba24be.
|
2 years ago |
Chris Lu
|
9215ba24be
|
Revert "Add disk type to prometheus metrics" (#4777)
Revert "Add disk type to prometheus metrics (#4736)"
This reverts commit 9956d93a40.
|
2 years ago |
Dmitry Mishin
|
9956d93a40
|
Add disk type to prometheus metrics (#4736)
* Add disk type to prometheus metrics
* Del metrics
* Disk type as readable string
---------
Co-authored-by: Dima Mishin <dimm@dimm.dev>
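A minimal sketch of such a labelled gauge, using the `SeaweedFS_volumeServer_volumes` name mentioned above; the label set shown here is an assumption for illustration, not the project's actual metric definition.
```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

// volumesGauge sketches a gauge vector named SeaweedFS_volumeServer_volumes,
// carrying the disk type as a readable string label ("hdd", "ssd", ...).
// The label names are assumptions for illustration.
var volumesGauge = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Namespace: "SeaweedFS",
		Subsystem: "volumeServer",
		Name:      "volumes",
		Help:      "Number of volumes or EC shards on this volume server.",
	},
	[]string{"collection", "type"},
)

func main() {
	reg := prometheus.NewRegistry()
	reg.MustRegister(volumesGauge)

	// Mounting a volume or EC shard increments the matching series;
	// unmounting must decrement it again, or the gauge drifts upward.
	volumesGauge.WithLabelValues("", "hdd").Inc()
	volumesGauge.WithLabelValues("", "hdd").Inc()
	volumesGauge.WithLabelValues("", "hdd").Dec()

	families, _ := reg.Gather()
	for _, mf := range families {
		for _, m := range mf.GetMetric() {
			fmt.Printf("%s %v\n", mf.GetName(), m.GetGauge().GetValue())
		}
	}
}
```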
|
2 years ago |
Nikita Mochalov
|
e6a49dc533
|
Fix resource leaks (#4737)
* Fix division by zero
* Fix file handle leak
* Fix file handle leak
* Fix file handle leak
* Fix goroutine leak
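The fixes listed above follow standard Go hygiene; a small self-contained sketch, unrelated to the actual SeaweedFS code, of each kind: guard the divisor, defer `Close` after a successful `Open`, and tie background goroutines to a context.
```go
package main

import (
	"context"
	"fmt"
	"os"
	"time"
)

// averageSize guards against division by zero: an empty input yields zero
// instead of panicking.
func averageSize(sizes []int64) int64 {
	if len(sizes) == 0 {
		return 0
	}
	var total int64
	for _, s := range sizes {
		total += s
	}
	return total / int64(len(sizes))
}

// fileSize closes the file on every return path; deferring Close right after
// a successful Open is the usual cure for a file-handle leak.
func fileSize(name string) (int64, error) {
	f, err := os.Open(name)
	if err != nil {
		return 0, err
	}
	defer f.Close()

	fi, err := f.Stat()
	if err != nil {
		return 0, err
	}
	return fi.Size(), nil
}

// startTicker ties the background goroutine to a context so it exits when the
// caller is done, instead of leaking forever.
func startTicker(ctx context.Context) {
	go func() {
		ticker := time.NewTicker(100 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return
			case <-ticker.C:
				// periodic work would go here
			}
		}
	}()
}

func main() {
	fmt.Println("average of nothing:", averageSize(nil))

	if size, err := fileSize(os.Args[0]); err == nil {
		fmt.Println("own binary size:", size)
	}

	ctx, cancel := context.WithCancel(context.Background())
	startTicker(ctx)
	time.Sleep(250 * time.Millisecond)
	cancel() // stops the ticker goroutine
}
```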
|
2 years ago |
chrislu
|
26dbc6c905
|
move to https://github.com/seaweedfs/seaweedfs
|
3 years ago |
Chris Lu
|
f8446b42ab
|
this can compile now!!!
|
5 years ago |
Chris Lu
|
ab759f0ec2
|
erasure coding: fix EC error if multiple disks are configured in one volume server
|
5 years ago |
Chris Lu
|
d1cf39f180
|
fix logging
|
5 years ago |
j.laycock
|
6fc6322c90
|
Change joeslay paths to chrislusf paths
|
6 years ago |
j.laycock
|
595a1beff0
|
Swap imports to use joeslay
|
6 years ago |
Chris Lu
|
a7b1b23c58
|
fix wrong volume count
fix https://github.com/chrislusf/seaweedfs/issues/1013
|
7 years ago |
Chris Lu
|
115558e5f5
|
adjust counters
|
7 years ago |
Chris Lu
|
289fd7eb39
|
count number of volumes and ec shards
|
7 years ago |
Chris Lu
|
ca8a2bb534
|
go fmt
|
7 years ago |
Chris Lu
|
2215e81be7
|
ui add ec shard statuses
|
7 years ago |
Chris Lu
|
7e80b2b882
|
fix multiple bugs
|
7 years ago |
Chris Lu
|
40ca2f2903
|
add collection.delete
|
7 years ago |
Chris Lu
|
3a8c1055a2
|
refactoring ecx to ecVolume
|
7 years ago |
Chris Lu
|
b4b407e403
|
add grpc ec shard read
|
7 years ago |
Chris Lu
|
a4f3d82c57
|
convert needle id to ec intervals to read from
|
7 years ago |