* fix GetObjectLockConfigurationHandler
* cache and use bucket object lock config
* subscribe to bucket configuration changes
* increase bucket config cache TTL
* refactor
* Update weed/s3api/s3api_server.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* avoid duplicated work
* rename variable
* Update s3api_object_handlers_put.go
* fix routing
* admin ui and api handler are consistent now
* use fields instead of xml
* fix test
* address comments
* Update weed/s3api/s3api_object_handlers_put.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update test/s3/retention/s3_retention_test.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update weed/s3api/object_lock_utils.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* change error style
* errorf
* read entry once
* add s3 tests for object lock and retention
* use marker
* install s3 tests
* Update s3tests.yml
* Update s3tests.yml
* Update s3tests.conf
* Update s3tests.conf
* address test errors
* address test errors
With these fixes, the s3-tests should now:
✅ Return InvalidBucketState (409 Conflict) for object lock operations on invalid buckets
✅ Return MalformedXML for invalid retention configurations
✅ Include VersionId in response headers when available
✅ Return proper HTTP status codes (403 Forbidden for retention mode changes)
✅ Handle all object lock validation errors consistently
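As a rough illustration of the status mapping these checks exercise (identifiers here are hypothetical stand-ins, not SeaweedFS's actual error table):

    package main

    import (
        "fmt"
        "net/http"
    )

    // errorStatus sketches the S3 error-code-to-HTTP-status mapping the
    // s3-tests assert on; the real table lives in the s3api error handling.
    var errorStatus = map[string]int{
        "InvalidBucketState": http.StatusConflict,   // 409 for invalid buckets
        "MalformedXML":       http.StatusBadRequest, // 400 for bad retention XML
        "AccessDenied":       http.StatusForbidden,  // 403 for mode changes
    }

    func main() {
        fmt.Println(errorStatus["InvalidBucketState"]) // 409
    }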
* fixes
With these comprehensive fixes, the s3-tests should now:
✅ Return InvalidBucketState (409 Conflict) for object lock operations on invalid buckets
✅ Return InvalidRetentionPeriod for invalid retention periods
✅ Return MalformedXML for malformed retention configurations
✅ Include VersionId in response headers when available
✅ Return proper HTTP status codes for all error conditions
✅ Handle all object lock validation errors consistently
The workflow should now pass significantly more object lock tests, bringing SeaweedFS's S3 object lock implementation much closer to AWS S3 compatibility standards.
* fixes
With these final fixes, the s3-tests should now:
✅ Return MalformedXML for ObjectLockEnabled: 'Disabled'
✅ Return MalformedXML when both Days and Years are specified in retention configuration
✅ Return InvalidBucketState (409 Conflict) when trying to suspend versioning on buckets with object lock enabled
✅ Handle all object lock validation errors consistently with proper error codes
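A minimal sketch of the versioning guard implied above (hypothetical names, not the actual handler):

    package sketch

    import "errors"

    // validateVersioningChange rejects suspending versioning on a bucket
    // that has object lock enabled, which must answer 409 InvalidBucketState.
    func validateVersioningChange(objectLockEnabled bool, newStatus string) error {
        if objectLockEnabled && newStatus == "Suspended" {
            return errors.New("InvalidBucketState") // object lock requires versioning
        }
        return nil
    }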
* constants and fixes
✅ Return InvalidRetentionPeriod for invalid retention values (0 days, negative years)
✅ Return ObjectLockConfigurationNotFoundError when object lock configuration doesn't exist
✅ Handle all object lock validation errors consistently with proper error codes
* fixes
✅ Return MalformedXML when both Days and Years are specified in the same retention configuration
✅ Return 400 (Bad Request) with InvalidRequest when object lock operations are attempted on buckets without object lock enabled
✅ Handle all object lock validation errors consistently with proper error codes
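Sketched as a validation helper (assumed shape, not the actual parser):

    package sketch

    import "errors"

    // validateRetention mirrors the checks above: Days and Years are
    // mutually exclusive, and whichever is present must be positive.
    func validateRetention(days, years int) error {
        if days > 0 && years > 0 {
            return errors.New("MalformedXML") // both Days and Years specified
        }
        if days < 0 || years < 0 || days+years == 0 {
            return errors.New("InvalidRetentionPeriod") // zero or negative period
        }
        return nil
    }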
* fixes
✅ Return 409 (Conflict) with InvalidBucketState for bucket-level object lock configuration operations on buckets without object lock enabled
✅ Allow increasing retention periods and overriding retention with same/later dates
✅ Only block decreasing retention periods without proper bypass permissions
✅ Handle all object lock validation errors consistently with proper error codes
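The retention-override rule above, sketched as a hypothetical helper:

    package sketch

    import "time"

    // mayOverrideRetention allows keeping or extending a retain-until date;
    // shortening it requires the governance-bypass permission.
    func mayOverrideRetention(existing, proposed time.Time, bypassGranted bool) bool {
        if !proposed.Before(existing) {
            return true // same or later date: always allowed
        }
        return bypassGranted // decreasing the period needs bypass
    }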
* fixes
✅ Include VersionId in multipart upload completion responses when versioning is enabled
✅ Block retention mode changes (GOVERNANCE ↔ COMPLIANCE) without bypass permissions
✅ Handle all object lock validation errors consistently with proper error codes
✅ Pass the remaining object lock tests
* fix tests
* fixes
* pass tests
* fix tests
* fixes
* add error mapping
* Update s3tests.conf
* fix test_object_lock_put_obj_lock_invalid_days
* fixes
* fix many issues
* fix test_object_lock_delete_multipart_object_with_legal_hold_on
* fix tests
* refactor
* fix test_object_lock_delete_object_with_retention_and_marker
* fix tests
* fix tests
* fix tests
* fix test itself
* fix tests
* fix test
* Update weed/s3api/s3api_object_retention.go
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* reduce logs
* address comments
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* it compiles
* refactored
* reduce to 4 concurrent chunk uploads
* CopyObjectPartHandler
* copy a range of the chunk data, fix offset and size in copied chunks
* Update s3api_object_handlers_copy.go
What the PR Accomplishes:
CopyObjectHandler - Now copies entire objects by copying chunks individually instead of downloading/uploading the entire file
CopyObjectPartHandler - Handles copying parts of objects for multipart uploads by copying only the relevant chunk portions
Efficient Chunk Copying - Uses direct chunk-to-chunk copying with proper volume assignment and concurrent processing (limited to 4 concurrent operations)
Range Support - Properly handles range-based copying for partial object copies
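The concurrency limit described above, sketched with a channel semaphore (chunkRef and copyOne are illustrative stand-ins, not the actual types in s3api_object_handlers_copy.go):

    package sketch

    import "sync"

    type chunkRef struct{ fileId string } // stand-in for a filer chunk reference

    // copyChunks runs copyOne over all chunks with at most 4 in flight,
    // returning the first error observed (nil if none).
    func copyChunks(chunks []chunkRef, copyOne func(chunkRef) error) error {
        sem := make(chan struct{}, 4) // cap concurrent copies at 4
        errs := make(chan error, len(chunks))
        var wg sync.WaitGroup
        for _, c := range chunks {
            wg.Add(1)
            sem <- struct{}{}
            go func(c chunkRef) {
                defer wg.Done()
                defer func() { <-sem }()
                if err := copyOne(c); err != nil {
                    errs <- err
                }
            }(c)
        }
        wg.Wait()
        close(errs)
        return <-errs // nil when the closed channel is empty
    }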
* fix compilation
* fix part destination
* handling small objects
* use mkFile
* copy to existing file or part
* add testing tools
* adjust tests
* fix chunk lookup
* refactoring
* fix TestObjectCopyRetainingMetadata
* ensure bucket name not conflicting
* fix conditional copying tests
* remove debug messages
* add custom s3 copy tests
If putObjectPart requests with the same uploadId arrive while
completeMultipartUpload is running, data loss can result. Such late
putObjectPart requests may come from timeout retries (see the sketch below).
Co-authored-by: Yang Wang <yangwang@weride.ai>
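One hedged way to close that race (an assumed design, not necessarily the merged fix): track completed uploadIds and refuse part writes that arrive afterwards.

    package sketch

    import (
        "errors"
        "sync"
    )

    // uploadRegistry rejects putObjectPart retries that arrive after
    // completeMultipartUpload has finalized the same uploadId.
    type uploadRegistry struct {
        mu        sync.Mutex
        completed map[string]bool
    }

    func (r *uploadRegistry) beginPutPart(uploadId string) error {
        r.mu.Lock()
        defer r.mu.Unlock()
        if r.completed[uploadId] {
            return errors.New("NoSuchUpload") // late retry, refuse the write
        }
        return nil
    }

    func (r *uploadRegistry) markCompleted(uploadId string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        if r.completed == nil {
            r.completed = make(map[string]bool)
        }
        r.completed[uploadId] = true
    }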
filepath.Dir() was converting '/' to '\' in file paths on Windows, so
files were created with '\' separators, but the same conversion did not
happen when reading them back (the snippet below shows the difference).
Co-authored-by: Hendrik Appel <happel@europe.altair.com>
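A minimal demonstration of why path.Dir is the safer choice for S3 keys:

    package main

    import (
        "fmt"
        "path"          // always uses '/'
        "path/filepath" // uses the OS separator, '\' on Windows
    )

    func main() {
        key := "buckets/b1/uploads/id/0001.part"
        fmt.Println(path.Dir(key))     // buckets/b1/uploads/id on every OS
        fmt.Println(filepath.Dir(key)) // buckets\b1\uploads\id on Windows
    }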
ubuntu@prod-master-1:~$ aws --endpoint http://10.244.15.66:8333 s3api abort-multipart-upload --bucket prod-cache --key multipart-test --upload-id 5347f936-6adc-43de-8e5c-1fd137c3b2bc
ubuntu@prod-master-1:~$ aws --endpoint http://10.244.15.66:8333 s3api list-parts --bucket prod-cache --key multipart-test --upload-id 5347f936-6adc-43de-8e5c-1fd137c3b2bc
{
    "Initiator": null,
    "Owner": null,
    "StorageClass": "STANDARD"
}
If we abort a multipart upload, records are left behind: list-parts still succeeds with an empty skeleton. It should return a 404 NoSuchKey error instead.
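A minimal sketch of the expected behavior (hypothetical helper, not the actual handler):

    package sketch

    import "errors"

    // listParts should 404 once the upload has been aborted, instead of
    // returning an empty skeleton like the transcript above.
    func listParts(uploadExists bool) error {
        if !uploadExists {
            return errors.New("NoSuchKey") // 404 for aborted/unknown uploads
        }
        // ... enumerate stored parts ...
        return nil
    }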