do delete expired entries on s3 list request (#7426)
* do delete expired entries on s3 list request
https://github.com/seaweedfs/seaweedfs/issues/6837
* disable deleting expired s3 entries in the filer
* pass opt allowDeleteObjectsByTTL to all servers
* delete on get and head
* add lifecycle expiration s3 tests
* fix opt allowDeleteObjectsByTTL for server
* fix test lifecycle expiration
* fix IsExpired
* fix locationPrefix for updateEntriesTTL
* fix s3tests
* resolve coderabbitai comments
* GetS3ExpireTime on filer
* go mod
* clear TtlSeconds for volume
* move s3 delete expired entry to filer
* filer delete meta and data
* delete unused func removeExpiredObject
* test s3 put
* test s3 put multipart
* allowDeleteObjectsByTTL by default
* fix pipeline tests
* remove duplicate SeaweedFSExpiresS3
* revert expiration tests
* fix updateTTL
* rm log
* resolve comment
* fix delete version object
* fix S3Versioning
* fix delete on FindEntry
* fix delete chunks
* fix: sqlite does not support concurrent writes/reads
* move deletion out of listing transaction; delete entries and empty folders
* Revert "fix sqlite not support concurrent writes/reads"
This reverts commit 5d5da14e0e.
* clearer handling on recursive empty directory deletion
* handle listing errors
* struct copying
* reuse code to delete empty folders
* use iterative approach with a queue to avoid recursive WithFilerClient calls
* to stop a gRPC stream from the client-side callback, return a specific error, e.g., io.EOF
* still issue UpdateEntry when the flag must be added
* errors join
* join path
* cleaner
* add context, sort directories by depth (deepest first) to avoid redundant checks
* batched operation, refactoring
* prevent deleting bucket
* constant
* reuse code
* more logging
* refactoring
* s3 TTL time
* Safety check
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
18 changed files with 489 additions and 108 deletions
- .github/workflows/s3tests.yml (14 lines changed)
- test/kafka/go.mod (27)
- test/kafka/go.sum (52)
- weed/filer/entry.go (24)
- weed/filer/filer.go (133)
- weed/pb/filer_pb/filer_client.go (56)
- weed/pb/filer_pb/filer_pb_helper.go (26)
- weed/s3api/filer_multipart.go (19)
- weed/s3api/filer_util.go (107)
- weed/s3api/s3_constants/extend_key.go (1)
- weed/s3api/s3_constants/header.go (1)
- weed/s3api/s3api_bucket_handlers.go (12)
- weed/s3api/s3api_object_handlers.go (2)
- weed/s3api/s3api_object_handlers_delete.go (89)
- weed/s3api/s3api_object_handlers_put.go (3)
- weed/server/filer_server_handlers_write_autochunk.go (15)
- weed/util/constants_lifecycle_interval_10sec.go (8)
- weed/util/constants_lifecycle_interval_day.go (8)
weed/util/constants_lifecycle_interval_10sec.go
@@ -0,0 +1,8 @@
+//go:build s3tests
+// +build s3tests
+
+package util
+
+import "time"
+
+const LifeCycleInterval = 10 * time.Second
weed/util/constants_lifecycle_interval_day.go
@@ -0,0 +1,8 @@
+//go:build !s3tests
+// +build !s3tests
+
+package util
+
+import "time"
+
+const LifeCycleInterval = 24 * time.Hour
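The two new files select LifeCycleInterval via Go build constraints: builds tagged `s3tests` get a 10-second interval so the lifecycle-expiration tests run quickly, while normal builds use 24 hours. A sketch of how the two variants would be selected at build time (the package paths beyond what the diff shows are assumptions):

```shell
# default build: the !s3tests file is compiled, LifeCycleInterval = 24h
go build ./...

# test build: the s3tests-tagged file is compiled, LifeCycleInterval = 10s
go build -tags s3tests ./...
```

Because exactly one of the two files matches any given tag set, the constant is defined once per build and callers need no runtime switch.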