Latest commit, 4 days ago:

* fix: admin UI bucket delete now properly deletes collection and checks Object Lock

  Fixes #7711

  The admin UI's DeleteS3Bucket function was missing two critical behaviors:

  1. It did not delete the collection from the master (unlike the s3.bucket.delete shell command), leaving orphaned volume data that caused fs.verify errors.
  2. It did not check for Object Lock protections before deletion, potentially allowing deletion of buckets with locked objects.

  Changes:
  - Add shared Object Lock checking utilities to object_lock_utils.go:
    - EntryHasActiveLock: standalone function to check whether an entry has an active lock
    - HasObjectsWithActiveLocks: shared function to scan a bucket for locked objects
  - Refactor the S3 API's entryHasActiveLock to use the shared EntryHasActiveLock function
  - Update the admin UI's DeleteS3Bucket to:
    - check Object Lock using the shared HasObjectsWithActiveLocks utility
    - delete the collection before deleting filer entries (matching s3.bucket.delete)

* refactor: S3 API uses shared Object Lock utilities

  Removes 114 lines of duplicated code from s3api_bucket_handlers.go by having hasObjectsWithActiveLocks delegate to the shared HasObjectsWithActiveLocks function in object_lock_utils.go. Both the S3 API and the admin UI now use the same shared utilities:
  - EntryHasActiveLock
  - HasObjectsWithActiveLocks
  - recursivelyCheckLocksWithClient
  - checkVersionsForLocksWithClient

* feat: s3.bucket.delete shell command now checks Object Lock

  Adds Object Lock protection to the s3.bucket.delete shell command: if the bucket has Object Lock enabled and contains objects with active retention or legal hold, deletion is prevented. Also refactors the Object Lock checking utilities into a new s3_objectlock package to avoid import cycles between the shell, s3api, and admin packages. All three components now share the same logic:
  - S3 API (DeleteBucketHandler)
  - Admin UI (DeleteS3Bucket)
  - Shell command (s3.bucket.delete)

* refactor: unified Object Lock checking and consistent deletion parameters

  1. Add CheckBucketForLockedObjects(), a unified function that combines the bucket entry lookup, the Object Lock enabled check, and the scan for locked objects.
  2. All three components now use this single function:
     - S3 API (via s3api.CheckBucketForLockedObjects)
     - Admin UI (via s3api.CheckBucketForLockedObjects)
     - Shell command (via s3_objectlock.CheckBucketForLockedObjects)
  3. Align deletion parameters across all components:
     - isDeleteData: false (the collection is already deleted separately)
     - isRecursive: true
     - ignoreRecursiveError: true

* fix: properly handle non-EOF errors in Recv() loops

  The Recv() loops in recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient broke on any error, which could hide real stream errors and incorrectly report "no locks found". Now io.EOF breaks the loop (the normal end of stream), while any other error is returned so the caller knows the stream failed.

* fix: address PR review comments

  1. Add path traversal protection: validate entry names before building subdirectory paths, skipping entries whose names are empty, '.', '..', or contain path separators.
  2. Use an exact match for the .versions folder instead of HasSuffix(), to avoid matching unrelated directories like 'foo.versions'.
  3. Replace path.Join with simple string concatenation, since entry names are now validated.
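That validation is what makes plain concatenation safe in the last item. A minimal sketch of such a check in Go; the package and function name are illustrative, not SeaweedFS's actual code:

```go
package objectlock

import "strings"

// isSafeEntryName reports whether a directory entry name can safely be
// joined onto a parent path with plain string concatenation. It rejects
// empty names, the "." and ".." pseudo-entries, and any name containing
// a path separator, so a crafted entry cannot escape the bucket directory.
func isSafeEntryName(name string) bool {
	if name == "" || name == "." || name == ".." {
		return false
	}
	return !strings.ContainsAny(name, "/\\")
}
```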
* refactor: extract paginateEntries helper to reduce duplication

  The recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient functions shared significant structural similarity, so a generic paginateEntries helper was extracted that:
  - handles the pagination logic (lastFileName tracking, Limit)
  - handles stream receiving with proper EOF vs. error handling
  - validates entry names (path traversal protection)
  - calls a processEntry callback for the business logic

  This centralizes the pagination logic and makes the code more maintainable.

* feat: add context propagation for timeout and cancellation support

  All Object Lock checking functions now accept a context.Context parameter:
  - paginateEntries(ctx, client, dir, processEntry)
  - recursivelyCheckLocksWithClient(ctx, client, dir, hasLocks, currentTime)
  - checkVersionsForLocksWithClient(ctx, client, versionsDir, hasLocks, currentTime)
  - HasObjectsWithActiveLocks(ctx, client, bucketPath)
  - CheckBucketForLockedObjects(ctx, client, bucketsPath, bucketName)

  This enables timeout support for large bucket scans and cancellation propagation from HTTP requests; the S3 API handler now uses r.Context() so the scan follows the request lifecycle.

* fix: address PR review comments

  1. Add a DefaultBucketsPath constant in admin_server.go instead of hardcoding "/buckets" in multiple places.
  2. Add defensive normalization in EntryHasActiveLock: TrimSpace to handle whitespace around values, ToUpper for case-insensitive comparison of legal hold and retention mode values, and TrimSpace on the retention date before parsing.

* fix: use ctx variable consistently instead of context.Background()

  In both DeleteS3Bucket and command_s3_bucket_delete, use the ctx variable defined at the start of the function for all gRPC calls instead of creating new context.Background() instances.
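Several of the pieces above fit together in one shape: a single helper owns pagination, stream error handling, entry-name validation, and context awareness, while callers supply only the per-entry logic. The sketch below shows that shape under assumed types; `entry`, `listStream`, and `lister` are hypothetical stand-ins for the filer's protobuf types and client, not SeaweedFS's real API:

```go
package objectlock

import (
	"context"
	"fmt"
	"io"
	"strings"
)

// entry and listStream are stand-ins for a filer directory entry and the
// gRPC list stream (hypothetical; not SeaweedFS's actual pb types).
type entry struct{ Name string }

type listStream interface {
	// Recv returns io.EOF at the normal end of the stream.
	Recv() (*entry, error)
}

// lister stands in for the filer client: it returns one page of dir's
// entries starting after lastFileName, with at most limit entries.
type lister interface {
	List(ctx context.Context, dir, lastFileName string, limit int) (listStream, error)
}

// paginateEntries walks every entry of dir page by page and calls
// processEntry for each valid one. As in the commit message, lastFileName
// tracking, EOF vs. real-error handling, and entry-name validation all
// live here, so callers only supply business logic.
func paginateEntries(ctx context.Context, client lister, dir string, processEntry func(*entry) error) error {
	const pageSize = 1024
	lastFileName := ""
	for {
		// Honor cancellation and deadlines between pages.
		if err := ctx.Err(); err != nil {
			return err
		}
		stream, err := client.List(ctx, dir, lastFileName, pageSize)
		if err != nil {
			return err
		}
		count := 0
		for {
			e, err := stream.Recv()
			if err == io.EOF {
				break // normal end of this page
			}
			if err != nil {
				// Surface real stream errors instead of silently
				// treating them as "no more entries".
				return fmt.Errorf("list %s: %w", dir, err)
			}
			count++
			// Advance the cursor even past skipped entries.
			lastFileName = e.Name
			// Path traversal protection, as in the earlier sketch.
			if e.Name == "" || e.Name == "." || e.Name == ".." ||
				strings.ContainsAny(e.Name, "/\\") {
				continue
			}
			if err := processEntry(e); err != nil {
				return err
			}
		}
		if count < pageSize {
			return nil // short page: the listing is complete
		}
	}
}
```

With this in place, a recursive lock scan reduces to a processEntry callback that sets a hasLocks flag and recurses into validated subdirectories.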
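For the defensive normalization in EntryHasActiveLock noted in the review fixes, the idea is to sanitize stored attribute values before comparing or parsing them. A rough sketch; the signature, parse-failure handling, and parameter names are illustrative (the "ON"/"GOVERNANCE"/"COMPLIANCE" values follow S3 Object Lock conventions), not SeaweedFS's actual function:

```go
package objectlock

import (
	"strings"
	"time"
)

// hasActiveLock reports whether normalized lock attributes indicate an
// active legal hold or unexpired retention. TrimSpace tolerates stray
// whitespace in stored values, and ToUpper makes the legal hold and
// retention mode comparisons case-insensitive.
func hasActiveLock(legalHold, retentionMode, retainUntilDate string, now time.Time) bool {
	if strings.ToUpper(strings.TrimSpace(legalHold)) == "ON" {
		return true
	}
	mode := strings.ToUpper(strings.TrimSpace(retentionMode))
	if mode != "GOVERNANCE" && mode != "COMPLIANCE" {
		return false
	}
	until, err := time.Parse(time.RFC3339, strings.TrimSpace(retainUntilDate))
	if err != nil {
		return false // simplified: an unparseable date is treated as no lock
	}
	return now.Before(until)
}
```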
| File | Last commit | Age |
|---|---|---|
| src | HDFS: Java client replication configuration (#7526) | 3 weeks ago |
| .gitignore | HDFS: Java client replication configuration (#7526) | 3 weeks ago |
| Makefile | HDFS: Java client replication configuration (#7526) | 3 weeks ago |
| docker-compose.yml | fix: add missing backslash for volume extraArgs in helm chart (#7676) | 1 week ago |
| pom.xml | java 4.00 | 3 weeks ago |
| quick-start.sh | HDFS: Java client replication configuration (#7526) | 3 weeks ago |
| run-tests.sh | HDFS: Java client replication configuration (#7526) | 3 weeks ago |