What problem are we solving?
Fix: #6379
How are we solving the problem?
We check the AllowEmptyFolders option before cascade-deleting parent
folders in the S3 DeleteMultipleObjectsHandler.
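For context, the guard has roughly the following shape (a minimal sketch, not the exact SeaweedFS code; the `s3a.option.AllowEmptyFolders` field and the `doDeleteEmptyParentFolders` helper are illustrative names):

```go
// Inside DeleteMultipleObjectsHandler, after the requested objects are deleted:
// cascade-delete now-empty parent folders only when empty folders are not allowed.
if !s3a.option.AllowEmptyFolders {
	// Walk up from each deleted object's directory, removing parents
	// that became empty, stopping at the bucket root.
	s3a.doDeleteEmptyParentFolders(deletedDirectories)
}
```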
How is the PR tested?
We ran SeaweedFS in a Kubernetes cluster with a combined Filer
and S3 server in one container, with leveldb2 as the filer storage
and AllowEmptyFolders set to true.
When the Distribution Registry is used as the S3 client, it calls
DeleteMultipleObjectsHandler as part of the artifact upload process
(it uploads to a temp location, then performs a copy and delete).
Without this fix, the deletion cascade removed parent folders until
the entire contents of the bucket were gone.
With this fix, the existing content of the bucket remained, and the
newly uploaded content was added.
Checks
[ ] I have added unit tests if possible.
[ ] I will add related wiki document changes and link to this PR after merging.
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* Added global http client (see the sketch after this list)
* Added Do func for global http client
* Changed the code to use the global http client
* Fix http client in volume uploader
* Fixed pkg name
* Fixed http util funcs
* Fixed http client for bench_filer_upload
* Fixed http client for stress_filer_upload
* Fixed http client for filer_server_handlers_proxy
* Fixed http client for command_fs_merge_volumes
* Fixed http client for command_fs_merge_volumes and command_volume_fsck
* Fixed http client for s3api_server
* Added init global client for main funcs
* Rename global_client to client
* Changed:
  - fixed NewHttpClient
  - added CheckIsHttpsClientEnabled func
  - updated security.toml in scaffold
* Reduce the visibility of some functions in the util/http/client pkg
* Added the loadSecurityConfig function
* Use util.LoadSecurityConfiguration() in NewHttpClient func
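Taken together, these commits funnel all HTTP traffic through one shared client in the util/http package. A minimal sketch of the pattern, under the assumption that the real client's transport is built from security.toml via LoadSecurityConfiguration and CheckIsHttpsClientEnabled (names below are illustrative where they differ from the repo):

```go
package http

import (
	"net/http"
	"sync"
)

var (
	initOnce     sync.Once
	globalClient *http.Client
)

// InitGlobalHttpClient is called once from each main func; in the real code
// the transport (TLS, etc.) is derived from the loaded security configuration.
func InitGlobalHttpClient() {
	initOnce.Do(func() {
		globalClient = &http.Client{Transport: http.DefaultTransport}
	})
}

// Do sends a request through the single shared client, so every caller picks
// up the same connection pool and HTTPS settings.
func Do(req *http.Request) (*http.Response, error) {
	InitGlobalHttpClient()
	return globalClient.Do(req)
}
```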
* revert skipping the deletion error, since the file-not-found error is already skipped on the gRPC function side
* fix response error
* fix test_lifecycle_get
* Revert "fix test_lifecycle_get"
This reverts commit 8f991bdcf9.
* add s3tests for sql
* fix test_bucket_listv2_delimiter_basic for s3
* fix the s3tests action
* regen s3 api xsd
* remove minor s3 test test_bucket_listv2_fetchowner_defaultempty
* add docs
* without xmlns
If putObjectPart requests with the same uploadId arrive during
completeMultipartUpload, data loss can result. Such putObjectPart requests
are typically caused by timeout retries. One way to guard against the race
is sketched below.
Co-authored-by: Yang Wang <yangwang@weride.ai>
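A minimal sketch of one way to close this race, using a per-uploadId lock plus a completion marker (the package layout and helper names here are hypothetical, not the actual SeaweedFS fix):

```go
package s3api

import (
	"fmt"
	"sync"
)

var (
	uploadLocks sync.Map // uploadId -> *sync.Mutex
	completed   sync.Map // uploadIds whose completeMultipartUpload finished
)

func lockFor(uploadId string) *sync.Mutex {
	mu, _ := uploadLocks.LoadOrStore(uploadId, &sync.Mutex{})
	return mu.(*sync.Mutex)
}

// putObjectPart rejects a retried part that arrives after completion instead
// of letting it clobber the already-assembled object.
func putObjectPart(uploadId string, partNumber int, data []byte) error {
	mu := lockFor(uploadId)
	mu.Lock()
	defer mu.Unlock()
	if _, done := completed.Load(uploadId); done {
		return fmt.Errorf("upload %s already completed, rejecting retried part %d", uploadId, partNumber)
	}
	// ... persist the part (elided) ...
	return nil
}

// completeMultipartUpload assembles the parts and marks the upload finished
// under the same lock, so no part write can interleave with completion.
func completeMultipartUpload(uploadId string) error {
	mu := lockFor(uploadId)
	mu.Lock()
	defer mu.Unlock()
	// ... concatenate parts into the final object (elided) ...
	completed.Store(uploadId, true)
	return nil
}
```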
* fix: install cronie
* chore: refactor configure S3Sink
* chore: refactor config
* add filer-backup compose file
* fix: X-Amz-Meta-Mtime and resolve review comments
* fix: attr mtime
* fix: MaxUploadParts is reduced to the maximum allowable
* fix: env and force set max MaxUploadParts
* fix: env WEED_SINK_S3_UPLOADER_PART_SIZE_MB (see the uploader sketch below)
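These last fixes amount to clamping the uploader's part count and reading the part size from the environment. A sketch of the idea with the aws-sdk-go v1 s3manager package (the `newUploader` wrapper, the `s3sink` package name, and the 8 MB fallback are assumptions; the env var name comes from the commit above):

```go
package s3sink

import (
	"os"
	"strconv"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func newUploader(sess *session.Session) *s3manager.Uploader {
	// Read the part size from WEED_SINK_S3_UPLOADER_PART_SIZE_MB; the 8 MB
	// fallback here is illustrative, not necessarily the sink's default.
	partSizeMB := 8
	if v, err := strconv.Atoi(os.Getenv("WEED_SINK_S3_UPLOADER_PART_SIZE_MB")); err == nil && v > 0 {
		partSizeMB = v
	}
	return s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = int64(partSizeMB) * 1024 * 1024
		// Force-set MaxUploadParts to the SDK's maximum allowable (10000),
		// matching the "reduced to the maximum allowable" fix.
		u.MaxUploadParts = s3manager.MaxUploadParts
	})
}
```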