* fix filer range read
Only return true if we're reading the ENTIRE chunk from the beginning.
This prevents bandwidth amplification when range requests happen to align with chunk boundaries but don't actually want the full chunk.
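A minimal sketch of the check, with illustrative names (the real predicate lives in weed/filer/filechunks.go):
```go
// coversWholeChunk reports whether a range read starts at the chunk's
// first byte and spans the entire chunk; only then is fetching the
// full chunk free of amplification.
func coversWholeChunk(readOffset, readStop, chunkOffset, chunkSize int64) bool {
	return readOffset == chunkOffset && readStop >= chunkOffset+chunkSize
}
```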
* Update weed/filer/filechunks.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* refactoring
* add ec shard size
* address comments
* passing task id
There seems to be a disconnect between the pending tasks created in ActiveTopology and the TaskDetectionResult returned by this function. A taskID is generated locally and used to create pending tasks via AddPendingECShardTask, but this taskID is not stored in the TaskDetectionResult or passed along in any way.
This makes it impossible for the worker that eventually executes the task to know which pending task in ActiveTopology it corresponds to. Without the correct taskID, the worker cannot call AssignTask or CompleteTask on the master, breaking the entire task lifecycle and capacity management feature.
A potential solution is to add a TaskID field to TaskDetectionResult and worker_pb.TaskParams, ensuring the ID is propagated from detection to execution.
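A hedged sketch of that proposal; field and message names are illustrative, not the actual SeaweedFS definitions:
```go
// TaskDetectionResult carries the ID of the pending task that was
// registered in ActiveTopology during detection.
type TaskDetectionResult struct {
	TaskID   string
	VolumeID uint32
	// ... other detection fields
}

// TaskParams mirrors worker_pb.TaskParams gaining the same field, so a
// worker can echo the ID back and the admin can call AssignTask /
// CompleteTask on the matching pending entry.
type TaskParams struct {
	TaskID   string
	VolumeID uint32
}
```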
* 1 source multiple destinations
* task supports multi source and destination
* ec needs to clean up previous shards
* use erasure coding constants
* getPlanningCapacityUnsafe and getEffectiveAvailableCapacityUnsafe should return StorageSlotChange for calculation
* use CanAccommodate to calculate
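A hedged sketch of the idea behind these two commits (actual fields in weed/admin/topology may differ): capacity helpers hand back a combined delta rather than a bare count.
```go
// StorageSlotChange tracks volume slots and EC shard slots together so
// planning math cannot mix the two up.
type StorageSlotChange struct {
	VolumeSlots int32
	ShardSlots  int32
}

// CanAccommodate reports whether a disk with the given free slots can
// absorb this change; both dimensions must fit.
func (c StorageSlotChange) CanAccommodate(freeVolumeSlots, freeShardSlots int32) bool {
	return c.VolumeSlots <= freeVolumeSlots && c.ShardSlots <= freeShardSlots
}
```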
* remove dead code
* address comments
* fix Mutex Copying in Protobuf Structs
* use constants
* fix estimatedSize
The calculation for estimatedSize only considers source.EstimatedSize and dest.StorageChange, but omits dest.EstimatedSize. The TaskDestination struct has an EstimatedSize field, which seems to be ignored here. This could lead to an incorrect estimation of the total size of data involved in tasks on a disk. The loop should probably also include estimatedSize += dest.EstimatedSize.
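Sketch of the corrected accumulation; the struct shapes are stand-ins for the real ones:
```go
type TaskSource struct{ EstimatedSize int64 }
type TaskDestination struct{ EstimatedSize int64 }

// totalEstimatedSize now counts each destination's own EstimatedSize,
// the term the original loop omitted.
func totalEstimatedSize(sources []TaskSource, dests []TaskDestination) (size int64) {
	for _, s := range sources {
		size += s.EstimatedSize
	}
	for _, d := range dests {
		size += d.EstimatedSize // previously omitted
	}
	return size
}
```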
* at.assignTaskToDisk(task)
* refactoring
* Update weed/admin/topology/internal.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fail fast
* fix compilation
* Update weed/worker/tasks/erasure_coding/detection.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* indexes for volume and shard locations
* dedup with ToVolumeSlots
* return an additional boolean to indicate success, or an error
* Update abstract_sql_store.go
* fix
* Update weed/worker/tasks/erasure_coding/detection.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/admin/topology/task_management.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* faster findVolumeDisk
* Update weed/worker/tasks/erasure_coding/detection.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update weed/admin/topology/storage_slot_test.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* refactor
* simplify
* remove unused GetDiskStorageImpact function
* refactor
* add comments
* Update weed/admin/topology/storage_impact.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/admin/topology/storage_slot_test.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update storage_impact.go
* AddPendingTask
The unified AddPendingTask function is now the single entry point for all task creation, consolidating the previously separate functions while preserving their behavior and improving code organization.
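A hypothetical outline of that single entry point; the real signature in weed/admin/topology differs, but the shape conveys the consolidation:
```go
type TaskSpec struct {
	TaskID       string
	TaskType     string
	Sources      []TaskSourceSpec
	Destinations []TaskDestinationSpec
}

type TaskSourceSpec struct{ DiskID uint32 }
type TaskDestinationSpec struct{ DiskID uint32 }

type ActiveTopology struct {
	pendingTasks map[string]*TaskSpec
}

// AddPendingTask is the one place every task type passes through, so
// capacity accounting and TaskID indexing live in a single function.
func (at *ActiveTopology) AddPendingTask(spec TaskSpec) {
	if at.pendingTasks == nil {
		at.pendingTasks = make(map[string]*TaskSpec)
	}
	at.pendingTasks[spec.TaskID] = &spec
}
```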
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* refactor planning into task detection
* refactoring worker tasks
* refactor
* compiles, but only the balance task is registered
* compiles, but has a nil pointer dereference
* avoid nil logger
* add back ec task
* setting ec log directory
* implement balance and vacuum tasks
* EC tasks will no longer fail with "file not found" errors
* Use ReceiveFile API to send locally generated shards
* distributing shard files and ecx,ecj,vif files
* generate .ecx files correctly
* do not mount all possible EC shards (0-13) on every destination
* use constants
* delete all replicas
* rename files
* pass in volume size to tasks
* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test. ec worker task
* remove "enhanced" reference
* start master, volume servers, filer
Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready
* generate write load
* tasks are assigned
* admin start with grpc port. worker has its own working directory
* Update .gitignore
* working worker and admin. Task detection is not working yet.
* compiles, detection uses volumeSizeLimitMB from master
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnect to admin
* clean up logs
* worker register itself first
* worker can run ec work and report status
but:
1. one volume should not be repeatedly worked on.
2. EC shards need to be distributed and the source data should be deleted.
* move ec task logic
* listing ec shards
* local copy, ec. Need to distribute.
* ec is mostly working now
* distribution of ec shards needs improvement
* need configuration to enable ec
* show ec volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
* fix warning
* display ec shard sizes
* fix ec volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig use ConfigField
* use schema defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task register its schema
* make checkECEncodingCandidate use ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
* remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* The maintenance manager was not being initialized when no data directory was configured for persistent storage.
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf encoded task configurations
* fix system settings
* use ui component
* remove logs
* interface{} Reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and dc count
* Report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume server during task planning
* volume.list adds disk id
* track disk id also
* fix locking scheduled and manual scanning
* add active topology
* simplify task detector
* ec task completed, but shards are not showing up
* implement ec in ec_typed.go
* adjust log level
* dedup
* implementing ec copying shards and only ecx files
* use disk id when distributing ec shards
🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
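On the volume-server side, resolving the planned disk is roughly this (a simplified sketch; the real handler indexes vs.store.Locations with req.DiskId after a similar bounds check):
```go
package sketch

import "fmt"

// shardDirectory maps the DiskId carried in VolumeEcShardsCopyRequest
// to the concrete disk directory the planner chose.
func shardDirectory(locations []string, diskID uint32) (string, error) {
	if int(diskID) >= len(locations) {
		return "", fmt.Errorf("disk id %d out of range (%d disks)", diskID, len(locations))
	}
	return locations[diskID], nil
}
```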
* Delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fix command_volume_tier_upload bug: Avoid deleting volumes under the same collection
* simplify a bit
---------
Co-authored-by: hzxialei <hzxialei@corp.netease.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
* fix: consider EC shard count in volume.balance capacity calculation
* update the implementation of capacityByMaxVolumeCount to include the EC shard usage
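A hedged sketch of the adjusted math. SeaweedFS defines DataShardsCount = 10 in weed/storage/erasure_coding; it is inlined here so the snippet stands alone, and the ceiling division is an assumption about the rounding:
```go
const dataShardsCount = 10 // erasure_coding.DataShardsCount

// freeVolumeSlots treats every dataShardsCount EC shards as one
// volume-equivalent when computing remaining capacity.
func freeVolumeSlots(maxVolumes, volumes, ecShards int) int {
	ecVolumeEquivalents := (ecShards + dataShardsCount - 1) / dataShardsCount
	return maxVolumes - volumes - ecVolumeEquivalents
}
```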
* fix listing objects
* add more list testing
* address comments
* fix next marker
* fix isTruncated in listing
* fix tests
* address tests
* Update s3api_object_handlers_multipart.go
* fixes
* store json into bucket content, for tagging and cors
* switch bucket metadata from json to proto
* fix
* Update s3api_bucket_config.go
* fix test issue
* fix test_bucket_listv2_delimiter_prefix
* Update cors.go
* skip special characters
* passing listing
* fix test_bucket_list_delimiter_prefix
* fix the XSD-generated Go code now
* fix cors tests
* fix test
* fix test_bucket_list_unordered and test_bucket_listv2_unordered
do not accept the allow-unordered and delimiter parameter combination
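Illustrative guard (handler and error names in weed/s3api differ):
```go
package sketch

import "errors"

// validateUnordered rejects the allow-unordered + delimiter
// combination, which maps to an InvalidArgument response.
func validateUnordered(allowUnordered bool, delimiter string) error {
	if allowUnordered && delimiter != "" {
		return errors.New("InvalidArgument: allow-unordered cannot be used with delimiter")
	}
	return nil
}
```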
* fix test_bucket_list_objects_anonymous and test_bucket_listv2_objects_anonymous
The tests test_bucket_list_objects_anonymous and test_bucket_listv2_objects_anonymous were failing because they set the bucket ACL to public-read, while SeaweedFS only supported the private ACL.
Updated PutBucketAclHandler to use the existing ExtractAcl function, which already supports all standard S3 canned ACLs.
Replaced the hardcoded private-only check with proper ACL parsing that handles public-read, public-read-write, authenticated-read, bucket-owner-read, bucket-owner-full-control, etc.
Added unit tests to verify that all standard canned ACLs are accepted.
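Sketch of the broadened acceptance; the real handler delegates to ExtractAcl rather than a lookup table like this:
```go
// cannedACLs lists the standard S3 canned ACLs now accepted.
var cannedACLs = map[string]bool{
	"private":                   true,
	"public-read":               true,
	"public-read-write":         true,
	"authenticated-read":        true,
	"bucket-owner-read":         true,
	"bucket-owner-full-control": true,
}

func isSupportedCannedACL(acl string) bool {
	return cannedACLs[acl]
}
```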
* fix list unordered
The test expects the error code InvalidArgument instead of InvalidRequest.
* allow anonymous listing (and head, get)
* fix test_bucket_list_maxkeys_invalid
Invalid values: max-keys=blah → Returns ErrInvalidMaxKeys (HTTP 400)
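An illustrative stand-in for the validation (names are not the actual weed/s3api ones):
```go
package sketch

import (
	"errors"
	"strconv"
)

// parseMaxKeys returns HTTP-400-worthy errors for non-numeric or
// negative max-keys values and the S3 default when absent.
func parseMaxKeys(raw string) (int, error) {
	if raw == "" {
		return 1000, nil // S3 default page size
	}
	n, err := strconv.Atoi(raw)
	if err != nil || n < 0 {
		return 0, errors.New("ErrInvalidMaxKeys: max-keys must be a non-negative integer")
	}
	return n, nil
}
```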
* updating IsPublicRead when parsing acl
* more logs
* CORS Test Fix
* fix test_bucket_list_return_data
* default to private
* fix test_bucket_list_delimiter_not_skip_special
* default no acl
* add debug logging
* more logs
* use basic http client
remove logs also
* fixes
* debug
* Update stats.go
* debugging
* fix anonymous test expectation
anonymous user can read, as configured in the S3 JSON config.
* add back tests
* get put object acl
* check permission to put object acl
* rename file
* object list versions now contains owners
* set object owner
* refactoring
* Revert "add back tests"
This reverts commit 9adc507c45.
* test versioning also
* fix some versioning tests
* fall back
* fixes
Never-versioned buckets: No VersionId headers, no Status field
Pre-versioning objects: Regular files, VersionId="null", included in all operations
Post-versioning objects: Stored in .versions directories with real version IDs
Suspended versioning: Proper status handling and null version IDs
* fixes
1. Bucket Versioning Status Compliance
Fixed: New buckets now return no Status field (AWS S3 compliant)
Before: Always returned "Suspended" ❌
After: Returns empty VersioningConfiguration for unconfigured buckets ✅
2. Multi-Object Delete Versioning Support
Fixed: DeleteMultipleObjectsHandler now fully versioning-aware
Before: Always deleted physical files, breaking versioning ❌
After: Creates delete markers or deletes specific versions properly ✅
Added: DeleteMarker field in response structure for AWS compatibility
3. Copy Operations Versioning Support
Fixed: CopyObjectHandler and CopyObjectPartHandler now versioning-aware
Before: Only copied regular files, couldn't handle versioned sources ❌
After: Parses version IDs from copy source, creates versions in destination ✅
Added: pathToBucketObjectAndVersion() function for version ID parsing
4. Pre-versioning Object Handling
Fixed: getLatestObjectVersion() now has proper fallback logic
Before: Failed when .versions directory didn't exist ❌
After: Falls back to regular objects for pre-versioning scenarios ✅ (see the sketch after this list)
5. Enhanced Object Version Listings
Fixed: listObjectVersions() includes both versioned AND pre-versioning objects
Before: Only showed .versions directories, ignored pre-versioning objects ❌
After: Shows complete version history with VersionId="null" for pre-versioning ✅
6. Null Version ID Handling
Fixed: getSpecificObjectVersion() properly handles versionId="null"
Before: Couldn't retrieve pre-versioning objects by version ID ❌
After: Returns regular object files for "null" version requests ✅
7. Version ID Response Headers
Fixed: PUT operations only return x-amz-version-id when appropriate
Before: Returned version IDs for non-versioned buckets ❌
After: Only returns version IDs for explicitly configured versioning ✅
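A minimal sketch of the item-4 fallback, with stubbed filer lookups (the real code works on filer_pb entries):
```go
package sketch

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

// lookup stands in for a filer directory lookup.
func lookup(path string) (string, error) { return "", errNotFound }

// latestObjectVersion prefers the .versions directory but falls back
// to the plain object (reported as VersionId="null") when the bucket
// predates versioning.
func latestObjectVersion(bucket, object string) (string, error) {
	if v, err := lookup(fmt.Sprintf("/buckets/%s/%s.versions", bucket, object)); err == nil {
		return v, nil
	} else if !errors.Is(err, errNotFound) {
		return "", err
	}
	return lookup(fmt.Sprintf("/buckets/%s/%s", bucket, object))
}
```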
* more fixes
* fix copying with versioning, multipart upload
* more fixes
* reduce volume size for easier dev test
* fix
* fix version id
* fix versioning
* Update filer_multipart.go
* fix multipart versioned upload
* more fixes
* more fixes
* fix versioning on suspended
* fixes
* fixing test_versioning_obj_suspended_copy
* Update s3api_object_versioning.go
* fix versions
* skipping test_versioning_obj_suspend_versions
* > If the versioning state has never been set on a bucket, it has no versioning state; a GetBucketVersioning request does not return a versioning state value.
* fix tests, avoid duplicated bucket creation, skip tests
* only run s3tests_boto3/functional/test_s3.py
* fix checking filer_pb.ErrNotFound
* Update weed/s3api/s3api_object_versioning.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update weed/s3api/s3api_object_handlers_copy.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update weed/s3api/s3api_bucket_config.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update test/s3/versioning/s3_versioning_test.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>