* Fix #8040: Support 'default' keyword in collectionPattern to match the default collection
The default collection in SeaweedFS is represented as an empty string internally.
Previously, it was impossible to specifically target only the default collection
because:
- Empty collectionPattern matched ALL collections (filter was skipped)
- Using collectionPattern="default" tried to match the literal string "default"
This commit adds special handling for the keyword "default" in collectionPattern
across multiple shell commands:
- volume.tier.move
- volume.list
- volume.fix.replication
- volume.configure.replication
Now users can pass -collectionPattern="default" to target only volumes in the
default collection (empty collection name), while keeping backward compatibility:
an empty pattern still matches all collections (sketched below).
Updated help text to document this feature.
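A minimal sketch of the rule described above, assuming a helper name; the real commands embed this logic inline, and the keyword is later renamed to "_default":

```go
package shell

import "path/filepath"

// matchCollection is a hypothetical helper showing the rule: an empty pattern
// keeps the old behavior and matches every collection, "default" targets only
// the default collection (stored as ""), and anything else goes through
// filepath.Match as before.
func matchCollection(pattern, collection string) (bool, error) {
	if pattern == "" {
		return true, nil
	}
	if pattern == "default" {
		return collection == "", nil
	}
	return filepath.Match(pattern, collection)
}
```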
* Update compileCollectionPattern to support 'default' keyword
This extends the fix to all commands that use regex-based collection
pattern matching:
- ec.encode
- ec.decode
- volume.tier.download
- volume.balance
The compileCollectionPattern function now treats "default" as a special
keyword that compiles to the regex "^$" (matching empty strings), making
it consistent with the other commands that use filepath.Match.
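Roughly the special case added here; the function name comes from the commit, while its exact signature is an assumption:

```go
package shell

import "regexp"

// compileCollectionPattern treats the "default" keyword as a match for the
// empty collection name only; any other pattern compiles as a normal regex.
func compileCollectionPattern(pattern string) (*regexp.Regexp, error) {
	if pattern == "default" {
		return regexp.Compile("^$") // the default collection has an empty name
	}
	return regexp.Compile(pattern)
}
```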
* Use CollectionDefault constant instead of hardcoded "default" string
Refactored the collection pattern matching logic to use a central constant
CollectionDefault defined in weed/shell/common.go. This improves maintainability
and ensures consistency across all shell commands.
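Roughly, in weed/shell/common.go (value as of this commit; renamed in the next one):

```go
// CollectionDefault is the keyword accepted by -collectionPattern to target
// the default (empty-named) collection.
const CollectionDefault = "default"
```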
* Address PR review feedback: simplify logic and use '_default' keyword
Changes:
1. Changed CollectionDefault from "default" to "_default" to avoid collision
with literal collection names
2. Simplified pattern matching logic to reduce code duplication across all
affected commands
3. Fixed error handling in command_volume_tier_move.go to properly propagate
filepath.Match errors instead of swallowing them
4. Updated documentation to clarify how to match a literal "default"
collection using regex patterns like "^default$"
This addresses all feedback from PR review comments.
* Remove unnecessary documentation about matching literal 'default'
Since we changed the keyword to '_default', users can now simply use
'default' to match a literal collection named "default". The previous
documentation about using regex patterns was confusing and no longer needed.
* Fix error propagation and empty pattern handling
1. command_volume_tier_move.go: Added early termination check after
eachDataNode callback to stop processing remaining nodes if a pattern
matching error occurred, improving efficiency
2. command_volume_configure_replication.go: Fixed empty pattern handling
to match all collections (collectionMatched = true when pattern is empty),
mirroring the behavior in other commands
These changes address the remaining PR review feedback.
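A sketch of the early-termination pattern in command_volume_tier_move.go: the eachDataNode callback does not return an error, so the error is captured and checked outside the loop. The callback signature is simplified, and collectionMatches stands in for the shared matcher (which by now understands "_default"):

```go
// Simplified sketch; the real callback signature and volume fields differ slightly.
var patternErr error
eachDataNode(topologyInfo, func(dc string, rack string, dn *master_pb.DataNodeInfo) {
	if patternErr != nil {
		return // a previous node already hit a pattern error; skip the rest
	}
	for _, diskInfo := range dn.DiskInfos {
		for _, v := range diskInfo.VolumeInfos {
			matched, err := collectionMatches(*collectionPattern, v.Collection)
			if err != nil {
				patternErr = err // propagate instead of swallowing
				return
			}
			if !matched {
				continue
			}
			// ... consider this volume for the tier move
		}
	}
})
if patternErr != nil {
	return patternErr
}
```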
* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test. ec worker task
* remove "enhanced" reference
* start master, volume servers, filer
Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready
* generate write load
* tasks are assigned
* admin starts with grpc port. worker has its own working directory
* Update .gitignore
* working worker and admin. Task detection is not working yet.
* compiles, detection uses volumeSizeLimitMB from master
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnect to admin
* clean up logs
* worker register itself first
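Roughly the connection behavior across the last few commits, as a sketch; registerWorker and runTaskLoop are hypothetical stand-ins for the real worker internals:

```go
import (
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

// keepConnectedToAdmin retries until the admin is reachable, registers the
// worker first, then serves tasks and reconnects whenever the link drops.
func keepConnectedToAdmin(adminAddress string, retryInterval time.Duration) {
	for {
		conn, err := grpc.Dial(adminAddress, grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Printf("connect to admin %s: %v, retrying in %v", adminAddress, err, retryInterval)
			time.Sleep(retryInterval)
			continue
		}
		// register first so the admin knows this worker before any task traffic
		if err := registerWorker(conn); err != nil {
			log.Printf("register with admin: %v, retrying", err)
			conn.Close()
			time.Sleep(retryInterval)
			continue
		}
		// serve assigned tasks until the connection drops, then reconnect
		if err := runTaskLoop(conn); err != nil {
			log.Printf("admin connection lost: %v, reconnecting", err)
		}
		conn.Close()
	}
}
```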
* worker can run ec work and report status
but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and the source data should be deleted.
* move ec task logic
* listing ec shards
* local copy, ec. Need to distribute.
* ec is mostly working now
* distribution of ec shards needs improvement
* need configuration to enable ec
* show ec volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
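The threshold check is roughly the following; master_pb is the existing SeaweedFS protobuf package, while the detector name and placement are assumptions:

```go
// needsVacuum reports whether a volume's garbage ratio crosses the configured
// threshold (e.g. 0.3 for 30%).
func needsVacuum(v *master_pb.VolumeInformationMessage, garbageThreshold float64) bool {
	if v.Size == 0 {
		return false
	}
	return float64(v.DeletedByteCount)/float64(v.Size) >= garbageThreshold
}
```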
* fix warning
* display ec shard sizes
* fix ec volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig use ConfigField
* use schema defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task register its schema
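Roughly the shape of the schema-driven config; the types and names here are illustrative, not the actual ones:

```go
// Illustrative types only; the real schema structs live in the admin package.
type ConfigField struct {
	Name         string
	FieldType    string // "int", "bool", "duration", ...
	DefaultValue string
	Description  string
}

type TaskConfigSchema struct {
	TaskType string
	Fields   []ConfigField
}

var taskSchemas = make(map[string]*TaskConfigSchema)

// RegisterTaskSchema lets each task contribute its own settings, so the UI,
// defaults, and validation all come from one definition.
func RegisterTaskSchema(s *TaskConfigSchema) {
	taskSchemas[s.TaskType] = s
}
```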
* checkECEncodingCandidate use ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
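The fullness check behind EC candidate detection is roughly this; the 0.95 ratio and the function name are assumptions:

```go
// isECCandidate treats a volume as ready for erasure coding once it is close
// to the volume size limit reported by the master.
func isECCandidate(v *master_pb.VolumeInformationMessage, volumeSizeLimitMB uint64) bool {
	limitBytes := float64(volumeSizeLimitMB) * 1024 * 1024
	return float64(v.Size) >= limitBytes*0.95
}
```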
* remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* The maintenance manager was not being initialized when no data directory was configured for persistent storage.
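A sketch of the fix under assumed names: initialize the manager regardless, falling back to in-memory configuration when no data directory is set.

```go
// Illustrative only; ConfigStore and the store constructors are assumed names.
func newMaintenanceManager(dataDir string) *MaintenanceManager {
	var store ConfigStore
	if dataDir != "" {
		store = newFileConfigStore(dataDir) // persistent configuration
	} else {
		store = newMemoryConfigStore() // previously the manager was skipped entirely here
	}
	return NewMaintenanceManager(store)
}
```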
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf encoded task configurations
* fix system settings
* use ui component
* remove logs
* interface{} reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
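The pattern of this cluster of changes, illustratively (names are made up):

```go
// Before: parameters passed around as untyped maps.
//   params := map[string]interface{}{"volume_id": vid, "server": server}
//
// After: a typed struct, so fields are checked at compile time.
type ECTaskParams struct {
	VolumeID   uint32
	Collection string
	Server     string
}
```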
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and dc count
* Report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume server during task planning
* volume.list adds disk id
* track disk id also
* fix locking for scheduled and manual scanning
* add active topology
* simplify task detector
* ec task completed, but shards are not showing up
* implement ec in ec_typed.go
* adjust log level
* dedup
* implementing copying of ec shards and only the ecx files
* use disk id when distributing ec shards
🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
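Roughly how the planned disk travels with the copy request: DiskId is the new field, the other fields follow the existing VolumeEcShardsCopyRequest, and the surrounding variables (plan, shardIds, sourceServer, client) are illustrative:

```go
_, err := client.VolumeEcShardsCopy(ctx, &volume_server_pb.VolumeEcShardsCopyRequest{
	VolumeId:       uint32(volumeId),
	Collection:     collection,
	ShardIds:       shardIds,
	CopyEcxFile:    true,
	SourceDataNode: sourceServer,
	DiskId:         plan.DiskId, // new: pin the shards to the disk chosen by ActiveTopology
})
if err != nil {
	return err
}
// On the volume server, the handler resolves the target location via
// vs.store.Locations[req.DiskId], so the shards land in the planned disk directory.
```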
* Delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>