This fixes issue #6823 where a single volume server shutdown would cause
other healthy volume servers to fail their health checks and get restarted
by Kubernetes, causing a cascading failure.
Previously, the healthz handler checked if all replicated volumes could
reach their remote replicas via GetWritableRemoteReplications(). When a
volume server went down, the master would remove it from the volume
location list. Other volume servers would then fail their healthz checks
because they couldn't find all required replicas, causing Kubernetes to
restart them.
The healthz endpoint now only checks local conditions:
1. Is the server shutting down?
2. Is the server heartbeating with the master?
This follows the principle that a health check should only verify the
health of THIS server, not the overall cluster state.
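A minimal sketch of the local-only probe, assuming illustrative field names (isStopping, isHeartbeating) rather than the actual SeaweedFS identifiers:

```go
package server

import "net/http"

// VolumeServer carries only the local state the probe needs; the field
// names are illustrative, not the actual SeaweedFS ones.
type VolumeServer struct {
	isStopping     bool
	isHeartbeating bool
}

func (vs *VolumeServer) healthzHandler(w http.ResponseWriter, r *http.Request) {
	// Local condition 1: the server is shutting down.
	if vs.isStopping {
		w.WriteHeader(http.StatusServiceUnavailable)
		return
	}
	// Local condition 2: the server has lost its heartbeat to the master.
	if !vs.isHeartbeating {
		w.WriteHeader(http.StatusServiceUnavailable)
		return
	}
	// No GetWritableRemoteReplications() call: a peer going down can no
	// longer fail this server's probe.
	w.WriteHeader(http.StatusOK)
}
```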
Fixes #6823
* pb: add id field to Heartbeat message for stable volume server identification
This adds an 'id' field to the Heartbeat protobuf message that allows
volume servers to identify themselves independently of their IP:port address.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* storage: add Id field to Store struct
Add Id field to Store struct and include it in CollectHeartbeat().
The Id field provides a stable volume server identity independent of IP:port.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
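Roughly, with trimmed-down types (the real Store and Heartbeat carry many more fields):

```go
package storage

// Heartbeat is a simplified stand-in for the protobuf message.
type Heartbeat struct {
	Id   string
	Ip   string
	Port uint32
}

// Store now records a stable identity alongside its network address.
type Store struct {
	Id   string // stable volume server identity, independent of IP:port
	Ip   string
	Port int
}

// CollectHeartbeat includes the Id so the master can recognize this
// node even after its address changes.
func (s *Store) CollectHeartbeat() *Heartbeat {
	return &Heartbeat{
		Id:   s.Id,
		Ip:   s.Ip,
		Port: uint32(s.Port),
	}
}
```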
* topology: support id-based DataNode identification
Update GetOrCreateDataNode to accept an id parameter for stable node
identification. When id is provided, the DataNode can maintain its identity
even when its IP address changes (e.g., in Kubernetes pod reschedules).
For backward compatibility:
- If id is provided, use it as the node ID
- If id is empty, fall back to ip:port
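A minimal sketch of this fallback rule, using a hypothetical nodeKey helper rather than the real GetOrCreateDataNode signature:

```go
package topology

import (
	"fmt"
	"strings"
)

// nodeKey illustrates how a DataNode identity is chosen; the real
// GetOrCreateDataNode applies the same rule inside the topology code.
func nodeKey(id, ip string, port int) string {
	if id = strings.TrimSpace(id); id != "" {
		return id // explicit id: identity survives pod reschedules
	}
	return fmt.Sprintf("%s:%d", ip, port) // legacy behavior
}
```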
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* volume: add -id flag for stable volume server identity
Add -id command line flag to volume server that allows specifying a stable
identifier independent of the IP address. This is useful for Kubernetes
deployments with hostPath volumes where pods can be rescheduled to different
nodes while the persisted data remains on the original node.
Usage: weed volume -id=node-1 -ip=10.0.0.1 ...
If -id is not specified, it defaults to ip:port for backward compatibility.
Fixes https://github.com/seaweedfs/seaweedfs/issues/7487
* server: add -volume.id flag to weed server command
Support the -volume.id flag in the all-in-one 'weed server' command,
consistent with the standalone 'weed volume' command.
Usage: weed server -volume.id=node-1 ...
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* topology: add test for id-based DataNode identification
Test the key scenarios:
1. Create DataNode with explicit id
2. Same id with different IP returns same DataNode (K8s reschedule)
3. IP/PublicUrl are updated when node reconnects with new address
4. Different id creates new DataNode
5. Empty id falls back to ip:port (backward compatibility)
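A hedged test sketch covering scenarios 2, 4, and 5 against the illustrative nodeKey helper above, not the real topology API:

```go
package topology

import "testing"

func TestNodeKeyIdentity(t *testing.T) {
	// Scenario 2: same id with a different IP is the same node (K8s reschedule).
	if nodeKey("node-1", "10.0.0.1", 8080) != nodeKey("node-1", "10.0.0.2", 8080) {
		t.Error("same id with different IPs should map to the same DataNode")
	}
	// Scenario 4: different ids are different nodes.
	if nodeKey("node-1", "10.0.0.1", 8080) == nodeKey("node-2", "10.0.0.1", 8080) {
		t.Error("different ids should map to different DataNodes")
	}
	// Scenario 5: empty id falls back to ip:port (backward compatibility).
	if got := nodeKey("  ", "10.0.0.1", 8080); got != "10.0.0.1:8080" {
		t.Errorf("empty id: got %q, want 10.0.0.1:8080", got)
	}
}
```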
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* pb: add address field to DataNodeInfo for proper node addressing
Previously, DataNodeInfo.Id was used as the node address, which worked
when Id was always ip:port. Now that Id can be an explicit string,
we need a separate Address field for connection purposes.
Changes:
- Add 'address' field to DataNodeInfo protobuf message
- Update ToDataNodeInfo() to populate the address field
- Update NewServerAddressFromDataNode() to use Address (with Id fallback)
- Fix LookupEcVolume to use dn.Url() instead of dn.Id()
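A sketch of the resolution order, with a trimmed-down struct standing in for the generated protobuf type:

```go
package pb

// DataNodeInfo is cut down to the two fields relevant here.
type DataNodeInfo struct {
	Id      string // stable identity, possibly an arbitrary string
	Address string // ip:port, used to open connections
}

// serverAddress mirrors the NewServerAddressFromDataNode fallback:
// prefer Address, and fall back to Id for masters that predate the new
// field (where Id is always ip:port).
func serverAddress(dn *DataNodeInfo) string {
	if dn.Address != "" {
		return dn.Address
	}
	return dn.Id
}
```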
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: trim whitespace from volume server id and fix test
- Trim whitespace from -id flag to treat ' ' as empty
- Fix store_load_balancing_test.go to include id parameter in NewStore call
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* refactor: extract GetVolumeServerId to util package
Move the volume server ID determination logic to a shared utility function
to avoid code duplication between volume.go and rack.go.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: improve transition logic for legacy nodes
- Use exact ip:port match instead of net.SplitHostPort heuristic
- Update GrpcPort and PublicUrl during transition for consistency
- Remove unused net import
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: add id normalization and address change logging
- Normalize id parameter at function boundary (trim whitespace)
- Log when DataNode IP:Port changes (helps debug K8s pod rescheduling)
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* avoid aggressive volume assignment
* also test ec shards
* separate DiskLocation instances for each subtest
* edge cases
* No volumes plus low disk space
* Multiple EC volumes
* simplify
* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test. ec worker task
* remove "enhanced" reference
* start master, volume servers, filer
Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready
* generate write load
* tasks are assigned
* admin starts with grpc port; worker has its own working directory
* Update .gitignore
* working worker and admin. Task detection is not working yet.
* compiles, detection uses volumeSizeLimitMB from master
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnects to admin
* clean up logs
* worker registers itself first
* worker can run ec work and report status
but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and the source data should be deleted.
* move ec task logic
* listing ec shards
* local copy, ec. Need to distribute.
* ec is mostly working now
* distribution of ec shards needs improvement
* need configuration to enable ec
* show ec volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
* fix warning
* display ec shard sizes
* fix ec volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig use ConfigField
* use schema defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task register its schema
* checkECEncodingCandidate use ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
* remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* The maintenance manager was not being initialized when no data directory was configured for persistent storage.
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf encoded task configurations
* fix system settings
* use ui component
* remove logs
* interface{} Reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and dc count
* Report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume server during task planning
* volume.list adds disk id
* track disk id also
* fix locking scheduled and manual scanning
* add active topology
* simplify task detector
* ec task completed, but shards are not showing up
* implement ec in ec_typed.go
* adjust log level
* dedup
* implementing ec shard copying and only ecx files
* use disk id when distributing ec shards
🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
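A simplified sketch of the volume-server end of this flow; the types are cut down, and the real handler also copies the .ec00-.ec13 shard files and .ecx metadata:

```go
package storage

import "fmt"

// VolumeEcShardsCopyRequest is trimmed to the fields relevant here.
type VolumeEcShardsCopyRequest struct {
	VolumeId uint32
	DiskId   uint32 // index into Store.Locations, chosen by ActiveTopology
}

type DiskLocation struct{ Directory string }

type Store struct{ Locations []*DiskLocation }

// shardTargetDir resolves the exact disk directory the planner picked.
func shardTargetDir(store *Store, req *VolumeEcShardsCopyRequest) (string, error) {
	if int(req.DiskId) >= len(store.Locations) {
		return "", fmt.Errorf("disk id %d out of range (%d disks)", req.DiskId, len(store.Locations))
	}
	return store.Locations[req.DiskId].Directory, nil
}
```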
* Delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
The calculation of `unclaimedSpaces` only needs to subtract `unusedSpace` when `preallocate` is not enabled.
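A sketch of the corrected accounting, with names mirroring the commit message rather than the exact store code:

```go
package storage

// unclaimedSpacesFor is an illustrative helper, not the actual store code.
// With preallocation, each volume file already occupies its full size on
// disk, so the unused tail inside volumes must not be subtracted from free
// space a second time.
func unclaimedSpacesFor(freeDiskSpace, unusedSpace uint64, preallocate bool) int64 {
	unclaimed := int64(freeDiskSpace)
	if !preallocate {
		unclaimed -= int64(unusedSpace)
	}
	return unclaimed
}
```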
Signed-off-by: LHHDZ <shichanglin5@qq.com>
* types package is imported more than once
* lazy-loading
* fix bugs
* fix bugs
* fix unit tests
* fix test error
* rename function
* unload ldb after initial startup
* Don't load ldb when starting volume server if ldbtimeout is set.
* remove unnecessary unloadldb
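A hedged sketch of the lazy-loading idea (not the actual implementation), using goleveldb as SeaweedFS does:

```go
package storage

import (
	"sync"
	"time"

	"github.com/syndtr/goleveldb/leveldb"
)

// lazyIndex keeps the LevelDB needle index closed until it is needed,
// and a timeout-driven sweep unloads it again when idle.
type lazyIndex struct {
	mu       sync.Mutex
	db       *leveldb.DB // nil while unloaded
	path     string
	lastUsed time.Time
}

// get reopens the index on first access after startup or an unload.
func (l *lazyIndex) get() (*leveldb.DB, error) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.db == nil {
		db, err := leveldb.OpenFile(l.path, nil)
		if err != nil {
			return nil, err
		}
		l.db = db
	}
	l.lastUsed = time.Now()
	return l.db, nil
}

// unloadIfIdle closes the index once it has been unused past the timeout.
func (l *lazyIndex) unloadIfIdle(timeout time.Duration) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.db != nil && time.Since(l.lastUsed) > timeout {
		l.db.Close()
		l.db = nil
	}
}
```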
* Update weed/command/server.go
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* Update weed/command/volume.go
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: guol-fnst <goul-fnst@fujitsu.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* simplify a bit
* feat: volume: add "readBufSize" option to customize read optimization
* refactor: readBufSize -> readBufferSize
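Illustratively, the option would size the read path's buffered reader; the wiring below is assumed, not the actual code:

```go
package storage

import (
	"bufio"
	"io"
)

// newVolumeReader is an illustrative wrapper showing how a configurable
// readBufferSize could feed into the read path.
func newVolumeReader(r io.Reader, readBufferSize int) *bufio.Reader {
	if readBufferSize <= 0 {
		readBufferSize = 64 * 1024 // assumed default when the flag is unset
	}
	return bufio.NewReaderSize(r, readBufferSize)
}
```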
* simplify a bit
* simplify a bit