* initial design
* added simulation as tests
* reorganized the codebase to move the simulation framework and tests into their own dedicated package
* integration test: EC worker task
* remove "enhanced" reference
* start master, volume servers, filer
Current Status
✅ Master: Healthy and running (port 9333)
✅ Filer: Healthy and running (port 8888)
✅ Volume Servers: All 6 servers running (ports 8080-8085)
🔄 Admin/Workers: Will start when dependencies are ready
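A minimal sketch of how a matching local test cluster can be started with the `weed` binary (the actual docker/admin_integration setup may use different flags and data directories):

```bash
# start a master, six volume servers, and a filer on the ports listed above
weed master -port=9333 &
for p in 8080 8081 8082 8083 8084 8085; do
  mkdir -p /tmp/vol$p
  weed volume -port=$p -mserver=localhost:9333 -dir=/tmp/vol$p &
done
weed filer -port=8888 -master=localhost:9333 &
```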
* generate write load
* tasks are assigned
* admin starts with a gRPC port; the worker has its own working directory
* Update .gitignore
* working worker and admin. Task detection is not working yet.
* compiles, detection uses volumeSizeLimitMB from master
* compiles
* worker retries connecting to admin
* build and restart
* rendering pending tasks
* skip task ID column
* sticky worker id
* test canScheduleTaskNow
* worker reconnect to admin
* clean up logs
* worker registers itself first
* worker can run ec work and report status
but:
1. one volume should not be repeatedly worked on.
2. ec shards need to be distributed and the source data should be deleted.
* move ec task logic
* listing ec shards
* local copy and EC encoding; still need to distribute the shards.
* ec is mostly working now
* distribution of ec shards needs improvement
* need configuration to enable ec
* show ec volumes
* interval field UI component
* rename
* integration test with vacuuming
* garbage percentage threshold
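A minimal sketch of the garbage-percentage check behind the vacuum detection (types and names here are illustrative, not the actual detector API):

```go
package main

import "fmt"

// volumeStats is a hypothetical summary of a volume's on-disk usage.
type volumeStats struct {
	Size        uint64 // total bytes used by the volume
	DeletedSize uint64 // bytes occupied by deleted entries (garbage)
}

// needsVacuum reports whether the garbage ratio exceeds the configured threshold.
func needsVacuum(v volumeStats, garbageThreshold float64) bool {
	if v.Size == 0 {
		return false
	}
	return float64(v.DeletedSize)/float64(v.Size) > garbageThreshold
}

func main() {
	v := volumeStats{Size: 1 << 30, DeletedSize: 400 << 20}
	fmt.Println(needsVacuum(v, 0.3)) // true: ~39% garbage exceeds the 30% threshold
}
```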
* fix warning
* display ec shard sizes
* fix ec volumes list
* Update ui.go
* show default values
* ensure correct default value
* MaintenanceConfig use ConfigField
* use schema defined defaults
* config
* reduce duplication
* refactor to use BaseUIProvider
* each task registers its schema
* checkECEncodingCandidate use ecDetector
* use vacuumDetector
* use volumeSizeLimitMB
* remove
* remove unused
* refactor
* use new framework
* remove v2 reference
* refactor
* left menu can scroll now
* The maintenance manager was not being initialized when no data directory was configured for persistent storage.
* saving config
* Update task_config_schema_templ.go
* enable/disable tasks
* protobuf encoded task configurations
* fix system settings
* use ui component
* remove logs
* interface{} Reduction
* reduce interface{}
* reduce interface{}
* avoid from/to map
* reduce interface{}
* refactor
* keep it DRY
* added logging
* debug messages
* debug level
* debug
* show the log caller line
* use configured task policy
* log level
* handle admin heartbeat response
* Update worker.go
* fix EC rack and dc count
* Report task status to admin server
* fix task logging, simplify interface checking, use erasure_coding constants
* factor in empty volume servers during task planning
* volume.list adds disk id
* track disk id also
* fix locking for scheduled and manual scanning
* add active topology
* simplify task detector
* ec task completed, but shards are not showing up
* implement ec in ec_typed.go
* adjust log level
* dedup
* implementing EC: copying shards and only the .ecx files
* use disk id when distributing ec shards
🎯 Planning: ActiveTopology creates DestinationPlan with specific TargetDisk
📦 Task Creation: maintenance_integration.go creates ECDestination with DiskId
🚀 Task Execution: EC task passes DiskId in VolumeEcShardsCopyRequest
💾 Volume Server: Receives disk_id and stores shards on specific disk (vs.store.Locations[req.DiskId])
📂 File System: EC shards and metadata land in the exact disk directory planned
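A simplified, self-contained sketch of how the planned disk id flows from planning into the copy request (the types here are stand-ins for the real SeaweedFS structs, not their actual definitions):

```go
package main

import "fmt"

// Hypothetical, simplified types mirroring the flow above. The key point:
// the disk chosen at planning time is carried through task creation into the
// copy request, so the volume server can store the shards under its
// Locations[DiskId] directory.

type DestinationPlan struct {
	TargetServer string
	TargetDisk   uint32
}

type ECDestination struct {
	Server string
	DiskId uint32
}

type VolumeEcShardsCopyRequest struct {
	VolumeId uint32
	ShardIds []uint32
	DiskId   uint32
}

func main() {
	plan := DestinationPlan{TargetServer: "vol-3:8083", TargetDisk: 1} // from ActiveTopology
	dest := ECDestination{Server: plan.TargetServer, DiskId: plan.TargetDisk}
	req := VolumeEcShardsCopyRequest{VolumeId: 42, ShardIds: []uint32{0, 1, 2}, DiskId: dest.DiskId}
	fmt.Printf("copy shards %v of volume %d to %s disk %d\n", req.ShardIds, req.VolumeId, dest.Server, req.DiskId)
}
```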
* Delete original volume from all locations
* clean up existing shard locations
* local encoding and distributing
* Update docker/admin_integration/EC-TESTING-README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* check volume id range
* simplify
* fix tests
* fix types
* clean up logs and tests
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* test versioning also
* fix some versioning tests
* fall back
* fixes
Never-versioned buckets: No VersionId headers, no Status field
Pre-versioning objects: Regular files, VersionId="null", included in all operations
Post-versioning objects: Stored in .versions directories with real version IDs
Suspended versioning: Proper status handling and null version IDs
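A minimal sketch of the path convention implied above (helper names and the bucket root are illustrative, not the exact filer layout):

```go
package main

import (
	"fmt"
	"path"
)

// versionedObjectDir is the directory that holds real versions of an object.
func versionedObjectDir(bucket, object string) string {
	return path.Join("/buckets", bucket, object+".versions")
}

// objectPathForVersion resolves where a given version lives:
// "null" (pre-versioning or suspended) maps to the regular object file,
// a real version id maps to a file inside the .versions directory.
func objectPathForVersion(bucket, object, versionId string) string {
	if versionId == "" || versionId == "null" {
		return path.Join("/buckets", bucket, object)
	}
	return path.Join(versionedObjectDir(bucket, object), versionId)
}

func main() {
	fmt.Println(objectPathForVersion("photos", "cat.jpg", "null"))
	fmt.Println(objectPathForVersion("photos", "cat.jpg", "3HL4kqtJlcpXrof3vjVBH40Nrjfkd"))
}
```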
* fixes
1. Bucket Versioning Status Compliance
Fixed: New buckets now return no Status field (AWS S3 compliant)
Before: Always returned "Suspended" ❌
After: Returns empty VersioningConfiguration for unconfigured buckets ✅
2. Multi-Object Delete Versioning Support
Fixed: DeleteMultipleObjectsHandler now fully versioning-aware
Before: Always deleted physical files, breaking versioning ❌
After: Creates delete markers or deletes specific versions properly ✅
Added: DeleteMarker field in response structure for AWS compatibility
3. Copy Operations Versioning Support
Fixed: CopyObjectHandler and CopyObjectPartHandler now versioning-aware
Before: Only copied regular files, couldn't handle versioned sources ❌
After: Parses version IDs from copy source, creates versions in destination ✅
Added: pathToBucketObjectAndVersion() function for version ID parsing
4. Pre-versioning Object Handling
Fixed: getLatestObjectVersion() now has proper fallback logic
Before: Failed when .versions directory didn't exist ❌
After: Falls back to regular objects for pre-versioning scenarios ✅
5. Enhanced Object Version Listings
Fixed: listObjectVersions() includes both versioned AND pre-versioning objects
Before: Only showed .versions directories, ignored pre-versioning objects ❌
After: Shows complete version history with VersionId="null" for pre-versioning ✅
6. Null Version ID Handling
Fixed: getSpecificObjectVersion() properly handles versionId="null"
Before: Couldn't retrieve pre-versioning objects by version ID ❌
After: Returns regular object files for "null" version requests ✅
7. Version ID Response Headers
Fixed: PUT operations only return x-amz-version-id when appropriate
Before: Returned version IDs for non-versioned buckets ❌
After: Only returns version IDs for explicitly configured versioning ✅
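A minimal sketch of the pre-versioning fallback from item 4, with hypothetical lookup callbacks standing in for the filer calls:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("not found")

type objectVersion struct {
	VersionId string
	Path      string
}

// getLatestObjectVersion prefers the newest entry in the object's .versions
// directory; if that directory does not exist (pre-versioning object), it
// falls back to the regular object file with VersionId "null".
func getLatestObjectVersion(bucket, object string,
	lookupVersionsDir func(bucket, object string) (objectVersion, error),
	lookupRegularObject func(bucket, object string) (string, error)) (objectVersion, error) {

	if v, err := lookupVersionsDir(bucket, object); err == nil {
		return v, nil
	}
	p, err := lookupRegularObject(bucket, object)
	if err != nil {
		return objectVersion{}, errNotFound
	}
	return objectVersion{VersionId: "null", Path: p}, nil
}

func main() {
	v, _ := getLatestObjectVersion("b", "o",
		func(string, string) (objectVersion, error) { return objectVersion{}, errNotFound },
		func(string, string) (string, error) { return "/buckets/b/o", nil })
	fmt.Println(v.VersionId, v.Path)
}
```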
* more fixes
* fix copying with versioning, multipart upload
* more fixes
* reduce volume size for easier dev test
* fix
* fix version id
* fix versioning
* Update filer_multipart.go
* fix multipart versioned upload
* more fixes
* more fixes
* fix versioning on suspended
* fixes
* fixing test_versioning_obj_suspended_copy
* Update s3api_object_versioning.go
* fix versions
* skipping test_versioning_obj_suspend_versions
* > If the versioning state has never been set on a bucket, it has no versioning state; a GetBucketVersioning request does not return a versioning state value.
* fix tests, avoid duplicated bucket creation, skip tests
* only run s3tests_boto3/functional/test_s3.py
* fix checking filer_pb.ErrNotFound
* Update weed/s3api/s3api_object_versioning.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update weed/s3api/s3api_object_handlers_copy.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update weed/s3api/s3api_bucket_config.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update test/s3/versioning/s3_versioning_test.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* it compiles
* refactored
* reduce to 4 concurrent chunk upload
* CopyObjectPartHandler
* copy a range of the chunk data, fix offset and size in copied chunks
* Update s3api_object_handlers_copy.go
What the PR Accomplishes:
CopyObjectHandler - Now copies entire objects by copying chunks individually instead of downloading/uploading the entire file
CopyObjectPartHandler - Handles copying parts of objects for multipart uploads by copying only the relevant chunk portions
Efficient Chunk Copying - Uses direct chunk-to-chunk copying with proper volume assignment and concurrent processing (limited to 4 concurrent operations)
Range Support - Properly handles range-based copying for partial object copies
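A minimal sketch of the bounded-concurrency chunk copying described above; `copyChunk` stands in for the real per-chunk work (lookup, volume assignment, upload), and the limit of 4 mirrors the PR's concurrency cap:

```go
package main

import (
	"fmt"
	"sync"
)

func copyChunksConcurrently(chunkIDs []string, copyChunk func(id string) error) error {
	sem := make(chan struct{}, 4) // at most 4 chunk copies in flight
	var wg sync.WaitGroup
	errCh := make(chan error, len(chunkIDs))

	for _, id := range chunkIDs {
		wg.Add(1)
		sem <- struct{}{}
		go func(id string) {
			defer wg.Done()
			defer func() { <-sem }()
			if err := copyChunk(id); err != nil {
				errCh <- err
			}
		}(id)
	}
	wg.Wait()
	close(errCh)
	return <-errCh // nil if no copy reported an error
}

func main() {
	ids := []string{"3,01637037d6", "3,02a7f2e4b1", "4,0161badf22"}
	err := copyChunksConcurrently(ids, func(id string) error {
		fmt.Println("copying chunk", id)
		return nil
	})
	fmt.Println("done, err =", err)
}
```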
* fix compilation
* fix part destination
* handling small objects
* use mkFile
* copy to existing file or part
* add testing tools
* adjust tests
* fix chunk lookup
* refactoring
* fix TestObjectCopyRetainingMetadata
* ensure bucket names do not conflict
* fix conditional copying tests
* remove debug messages
* add custom s3 copy tests
* add a menu item "Message Queue"
* add a menu item "Message Queue"
* move the "brokers" link under it.
* add "topics", "subscribers". Add pages for them.
* refactor
* show topic details
* admin display publisher and subscriber info
* remove publishers and subscribers from the topic row pull-down
* collecting more stats from publishers and subscribers
* fix layout
* fix publisher name
* add local listeners for mq broker and agent
* render consumer group offsets
* remove subscribers from left menu
* topic with retention
* support editing topic retention
* show retention when listing topics
* create bucket
* Update s3_buckets_templ.go
* embed the static assets into the binary
fix https://github.com/seaweedfs/seaweedfs/issues/6964
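A minimal sketch of the `//go:embed` pattern used to bundle static assets into the binary (the `static` directory name here is illustrative):

```go
package main

import (
	"embed"
	"io/fs"
	"log"
	"net/http"
)

// the ./static directory must exist at build time for go:embed to compile
//go:embed static
var staticFiles embed.FS

func main() {
	sub, err := fs.Sub(staticFiles, "static")
	if err != nil {
		log.Fatal(err)
	}
	// serve the embedded assets instead of reading them from disk at runtime
	http.Handle("/static/", http.StripPrefix("/static/", http.FileServer(http.FS(sub))))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```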
* rename
* set agent address
* refactor
* add agent sub
* pub messages
* grpc new client
* can publish records via agent
* send init message with session id
* fmt
* check cancelled request while waiting
* use sessionId
* handle possible nil stream
* subscriber process messages
* separate debug port
* use atomic int64
* less logs
* minor
* skip io.EOF
* rename
* remove unused
* use saved offsets
* do not reuse the session, since the session id is always new after a restart
remove last active ts from SessionEntry
* simplify printing
* purge unused
* just proxy the subscription, skipping the session step
* adjust offset types
* subscribe offset type and possible value
* start after the known tsns
* avoid a wrongly set startPosition
* move
* remove
* refactor
* typo
* fix
* fix changed path