* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test. ec worker task
* remove "enhanced" reference
* start master, volume servers, filer
Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready
* generate write load
* tasks are assigned
* admin start with grpc port. worker has its own working directory
* Update .gitignore
* working worker and admin. Task detection is not working yet.
* compiles, detection uses volumeSizeLimitMB from master
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnect to admin
* clean up logs
* worker registers itself first
* worker can run ec work and report status
but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and the source data should be deleted.
* move ec task logic
* listing ec shards
* local copy, ec. Need to distribute.
* ec is mostly working now
* distribution of ec shards needs improvement
* need configuration to enable ec
* show ec volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
* fix warning
* display ec shard sizes
* fix ec volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig use ConfigField
* use schema defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task registers its schema
* checkECEncodingCandidate uses ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
* remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* The maintenance manager was not being initialized when no data directory was configured for persistent storage.
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf encoded task configurations
* fix system settings
* use ui component
* remove logs
* interface{} reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and dc count
* Report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume server during task planning
* volume.list adds disk id
* track disk id also
* fix locking for scheduled and manual scanning
* add active topology
* simplify task detector
* ec task completed, but shards are not showing up
* implement ec in ec_typed.go
* adjust log level
* dedup
* implementing ec copying shards and only ecx files
* use disk id when distributing ec shards
🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
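A minimal sketch of the disk-targeted placement described above, using toy stand-in types (only the VolumeEcShardsCopyRequest/DiskId names and the Locations[req.DiskId] lookup come from the summary; everything else is illustrative):

```go
package main

import "fmt"

// Illustrative stand-ins for the pieces named in the flow above; the real
// types live in SeaweedFS' volume_server_pb and storage packages.
type VolumeEcShardsCopyRequest struct {
	VolumeId uint32
	DiskId   uint32 // disk chosen by ActiveTopology during planning
}

type DiskLocation struct{ Directory string }

type Store struct{ Locations []*DiskLocation }

// targetLocation picks the disk directory the planner asked for, mirroring
// the vs.store.Locations[req.DiskId] lookup mentioned above, so EC shards
// and .ecx metadata land under the planned disk.
func (s *Store) targetLocation(req *VolumeEcShardsCopyRequest) (*DiskLocation, error) {
	if int(req.DiskId) >= len(s.Locations) {
		return nil, fmt.Errorf("disk id %d out of range (%d locations)", req.DiskId, len(s.Locations))
	}
	return s.Locations[req.DiskId], nil
}

func main() {
	store := &Store{Locations: []*DiskLocation{{"/data/disk0"}, {"/data/disk1"}}}
	loc, err := store.targetLocation(&VolumeEcShardsCopyRequest{VolumeId: 913, DiskId: 1})
	if err != nil {
		panic(err)
	}
	fmt.Println("EC shards for this copy land under:", loc.Directory)
}
```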
* Delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* types package is imported more than once
* lazy-loading
* fix bugs
* fix bugs
* fix unit tests
* fix test error
* rename function
* unload ldb after initial startup
* Don't load ldb when starting volume server if ldbtimeout is set.
* remove unnecessary unloadldb
* Update weed/command/server.go
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* Update weed/command/volume.go
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: guol-fnst <goul-fnst@fujitsu.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* optimize vacuuming volume
* fix bugs
* rename parameters
* fix conflict
* change copyDataBasedOnIndexFile to an instance method
* close needlemap
* optimize committing vacuum volume for leveldb index
* fix bugs
* fix leveldb loading bugs
* refactor
* fix leveldb loading bug
* add leveldb recovery
* add test case for levelDB
* modify test case to cover all the new branches
* use one tmpNm instead of two instances
* refactor
* refactor
* move setWatermark to the end
* add test for watermark and updating leveldb
* fix error logic
* refactor, add test
* check nil before closing the needle mapper
add test case
fix metric bug
* add tests, fix bugs
* adjust log level
remove wrong test case
refactor
* avoid duplicate metric updates for leveldb index
Under normal circumstances there is no problem, but when the master is being debugged in a local environment, the volume client cannot communicate with the master normally, so the sendHeartBeat logic restarts and a new connection is created to report the heartbeat. If the master has not yet cleared the volume's uuid at that point, the master responds to the volume with duplicateUUIDS and the volume service exits, even though the volume's uuid is not actually duplicated.
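A minimal sketch of that false positive with toy types (the real check lives in the master's heartbeat handling; all names here are illustrative): the master still holds the uuid entry from the dropped connection when the same volume server reconnects, so the uuid looks duplicated.

```go
package main

import "fmt"

// Toy master-side registry of volume-server uuids. In the scenario above,
// the entry from the broken connection has not been cleared yet when the
// same volume server reconnects and reports its heartbeat again.
type master struct {
	registered map[string]string // uuid -> volume server address
}

// registerHeartbeat mimics the duplicate-uuid check: if the uuid is already
// present, the volume server is told its uuid is duplicated and exits, even
// when the "duplicate" is only the stale entry from its own old connection.
func (m *master) registerHeartbeat(addr, uuid string) (duplicated bool) {
	if _, ok := m.registered[uuid]; ok {
		return true
	}
	m.registered[uuid] = addr
	return false
}

func main() {
	m := &master{registered: map[string]string{}}
	m.registerHeartbeat("volume-a:8080", "uuid-123") // first connection
	// connection drops; sendHeartBeat restarts and reconnects before the
	// master cleans up the old entry:
	if m.registerHeartbeat("volume-a:8080", "uuid-123") {
		fmt.Println("master reports duplicateUUIDS; volume server would exit")
	}
}
```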
from Jethro in slack:
is it possible to make the assign request a bit smarter? Currently I'm in the state that a disk failed but all assign requests are being sent to this volume. It would be cool if the master saw this and stopped using this volume.
e=HTTP(http://x:8089/913,045a782b63176edf) not 200 but 500 Internal Server Error
Body={"size":740167,"error":"failed to write to local disk: write /mnt/v9/913.dat: input/output error","eTag":"ee4381e202212ff3aee647704c036689"}
e=HTTP(http://x:8089/913,045a782c90240077) not 200 but 500 Internal Server Error
Body={"size":792779,"error":"failed to write to local disk: write /mnt/v9/913.dat: input/output error","eTag":"c43463ccc11eb6eb2fc306f407a6a953"}
e=HTTP(http://x:8089/913,045a782e6b7901ea) not 200 but 500 Internal Server Error
Body={"size":3962392,"error":"failed to write to local disk: write /mnt/v9/913.dat: input/output error","eTag":"04c91198e9b276c81f11dbf189af5d28"}