Refactor: Replace removeDuplicateSlashes with NormalizeObjectKey (#7873)
* Replace removeDuplicateSlashes with NormalizeObjectKey

  Use s3_constants.NormalizeObjectKey instead of removeDuplicateSlashes in most places for consistency. NormalizeObjectKey handles both duplicate slash removal and ensures the path starts with '/', providing more complete normalization.

* Fix double slash issues after NormalizeObjectKey

  After using NormalizeObjectKey, object keys have a leading '/'. This commit ensures:
  - getVersionedObjectDir strips the leading slash before concatenation
  - getEntry calls receive names without a leading slash
  - String concatenation with '/' doesn't create '//' paths

  This prevents path construction errors like:
  - /buckets/bucket//object (wrong)
  - /buckets/bucket/object (correct)

* ensure object key leading "/"

* fix compilation

* fix: Strip leading slash from object keys in S3 API responses

  After introducing NormalizeObjectKey, all internal object keys have a leading slash. However, S3 API responses must return keys without leading slashes to match AWS S3 behavior.

  Fixed in three functions:
  - addVersion: strip the slash for version list entries
  - processRegularFile: strip the slash for regular file entries
  - processExplicitDirectory: strip the slash for directory entries

  This ensures ListObjectVersions and similar APIs return keys like 'bar' instead of '/bar', matching the S3 API specification.

* fix: Normalize keyMarker for consistent pagination comparison

  The S3 API provides keyMarker without a leading slash (e.g., 'object-001'), but after introducing NormalizeObjectKey, all internal object keys have leading slashes (e.g., '/object-001'). When comparing keyMarker < normalizedObjectKey in shouldSkipObjectForMarker, the ASCII value of '/' (47) is less than 'o' (111), causing all objects to be incorrectly skipped during pagination. This resulted in page 2 and beyond returning 0 results.

  Fix: normalize the keyMarker when creating versionCollector so comparisons work correctly with normalized object keys.
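The marker comparison bug can be reproduced with a plain string comparison. The snippet below is a standalone illustration of the described failure mode, not the actual shouldSkipObjectForMarker code:

```go
package main

import "fmt"

func main() {
	marker := "object-001" // keyMarker as sent by the S3 client (no leading slash)
	key := "/object-001"   // the same object's internal key before the fix (leading slash)

	// '/' is ASCII 47 and 'o' is ASCII 111, so every slash-prefixed key
	// sorts lexicographically before the client-supplied marker.
	fmt.Println(key <= marker) // true: the object is wrongly treated as already listed
}
```

Because Go compares strings byte by byte, every normalized key beginning with '/' compares less than any marker beginning with a letter, so the pagination skip condition matched all objects.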
  Fixes pagination tests:
  - TestVersioningPaginationOver1000Versions
  - TestVersioningPaginationMultipleObjectsManyVersions

* refactor: Change NormalizeObjectKey to return keys without leading slash

  BREAKING STRATEGY CHANGE: previously, NormalizeObjectKey added a leading slash to all object keys, which required stripping it when returning keys to S3 API clients and complicated marker normalization for pagination.

  NEW STRATEGY:
  - NormalizeObjectKey now returns keys WITHOUT a leading slash (e.g., 'foo/bar', not '/foo/bar')
  - This matches the S3 API format directly
  - All path concatenations now explicitly add '/' between bucket and object
  - No need to strip slashes in responses or normalize markers

  Changes:
  1. Modified NormalizeObjectKey to strip the leading slash instead of adding it
  2. Fixed all path concatenations to use:
     - BucketsPath + '/' + bucket + '/' + object
     instead of:
     - BucketsPath + '/' + bucket + object
  3. Reverted response key stripping in:
     - addVersion()
     - processRegularFile()
     - processExplicitDirectory()
  4. Reverted keyMarker normalization in findVersionsRecursively()
  5. Updated matchesPrefixFilter() to work with keys without a leading slash
  6. Fixed paths in handlers:
     - s3api_object_handlers.go (GetObject, HeadObject, cacheRemoteObjectForStreaming)
     - s3api_object_handlers_postpolicy.go
     - s3api_object_handlers_tagging.go
     - s3api_object_handlers_acl.go
     - s3api_version_id.go (getVersionedObjectDir, getVersionIdFormat)
     - s3api_object_versioning.go (getObjectVersionList, updateLatestVersionAfterDeletion)

  All versioning tests pass, including the pagination stress tests.
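The new concatenation rule can be sketched as a small helper. toFilerPath is a hypothetical name used here for illustration only; the codebase's actual path-building code may be shaped differently:

```go
package main

import (
	"fmt"
	"strings"
)

// toFilerPath joins the buckets root, bucket name, and a slash-free object key
// with explicit '/' separators, per the strategy described above.
// (Hypothetical helper, not the actual SeaweedFS function.)
func toFilerPath(bucketsPath, bucket, object string) string {
	// Defensive: keys are expected to arrive without a leading slash already.
	object = strings.TrimPrefix(object, "/")
	return bucketsPath + "/" + bucket + "/" + object
}

func main() {
	fmt.Println(toFilerPath("/buckets", "mybucket", "foo/bar")) // /buckets/mybucket/foo/bar
}
```

Adding the separator explicitly at every join site is what removes the old '//' failure mode, since no component is relied upon to carry its own slash.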
* adjust format

* Update post policy tests to match new NormalizeObjectKey behavior

  - Update TestPostPolicyKeyNormalization to expect keys without leading slashes
  - Update TestNormalizeObjectKey to expect keys without leading slashes
  - Update TestPostPolicyFilenameSubstitution to expect keys without leading slashes
  - Update path construction in tests to use the new pattern: BucketsPath + '/' + bucket + '/' + object

* Fix ListObjectVersions prefix filtering

  Remove the leading slash addition to the prefix parameter to allow correct filtering of .versions directories when listing object versions with a specific prefix. The prefix parameter should match entry paths relative to the bucket root. Adding a leading slash was breaking the prefix filter for paginated requests.

  Fixes the pagination issue where the second page returned 0 versions instead of continuing with the remaining versions.

* no leading slash

* Fix urlEscapeObject to add leading slash for filer paths

  NormalizeObjectKey now returns keys without leading slashes to match the S3 API format. However, urlEscapeObject is used for filer paths, which require leading slashes. Add the leading slash back after normalization to ensure filer paths are correct.

  Fixes TestS3ApiServer_toFilerPath test failures.

* adjust tests

* normalize

* Fix: Normalize prefixes and markers in LIST operations using NormalizeObjectKey

  Ensure consistent key normalization across all S3 operations (GET, PUT, LIST). Previously, LIST operations were not applying the same normalization rules (handling backslashes, duplicate slashes, leading slashes) as GET/PUT operations.
  Changes:
  - Updated normalizePrefixMarker() to call NormalizeObjectKey for both prefix and marker
  - This ensures prefixes with leading slashes, backslashes, or duplicate slashes are handled consistently with how object keys are normalized
  - Fixes Parquet test failures where pads.write_dataset creates implicit directory structures that couldn't be discovered by subsequent LIST operations
  - Added TestPrefixNormalizationInList and TestListPrefixConsistency tests

  All existing LIST tests continue to pass with the normalization improvements.

* Add debug logging to LIST operations to track prefix normalization

* Fix: Remove leading slash addition from GetPrefix to work with NormalizeObjectKey

  The NormalizeObjectKey function removes leading slashes to match the S3 API format (e.g., 'foo/bar', not '/foo/bar'). However, GetPrefix was adding a leading slash back, which caused LIST operations to fail with incorrect path handling.

  Now GetPrefix only normalizes duplicate slashes without adding a leading slash, which allows the NormalizeObjectKey changes to work correctly for S3 LIST operations.

  All Parquet integration tests now pass (20/20).

* Fix: Handle object paths without leading slash in checkDirectoryObject

  NormalizeObjectKey() removes the leading slash to match the S3 API format. However, checkDirectoryObject() was assuming the object path has a leading slash when processing directory markers (paths ending with '/'). Now we ensure the object has a leading slash before processing it for filer operations.

  Fixes the implicit directory marker test (explicit_dir/) while keeping Parquet integration tests passing (20/20).

* Fix: Handle explicit directory markers with trailing slashes

  Explicit directory markers created with put_object(Key='dir/', ...) are stored in the filer with the trailing slash as part of the name. The checkDirectoryObject() function now checks for both:
  1. Explicit directories: lookup with the trailing slash preserved (e.g., 'explicit_dir/')
  2. Implicit directories: lookup without the trailing slash (e.g., 'implicit_dir')

  This ensures both types of directory markers are properly recognized.

  All tests pass:
  - Implicit directory tests: 6/6 (including the explicit directory marker test)
  - Parquet integration tests: 20/20

* Fix: Preserve trailing slash in NormalizeObjectKey

  NormalizeObjectKey now preserves trailing slashes when normalizing object keys. This is important for explicit directory markers like 'explicit_dir/', which rely on the trailing slash to be recognized as directory objects.

  The normalization process:
  1. Notes whether a trailing slash was present
  2. Removes duplicate slashes and converts backslashes
  3. Removes the leading slash for the S3 API format
  4. Restores the trailing slash if it was in the original

  This ensures explicit directory markers created with put_object(Key='dir/', ...) are properly normalized and can be looked up by their exact name.

* clean object

* Fix: Don't restore trailing slash if result is empty

  When normalizing paths that are only slashes (e.g., '///', '/'), the function should return an empty string, not a single slash. The fix ensures we only restore the trailing slash if the result is non-empty.

  This fixes the 'just_slashes' test case:
  - Input: '///'
  - Expected: ''
  - Previous: '/'
  - Fixed: ''

  All tests now pass:
  - Unit tests: TestNormalizeObjectKey (13/13)
  - Implicit directory tests: 6/6
  - Parquet integration tests: 20/20

* prefixEndsOnDelimiter

* Update s3api_object_handlers_list.go

* Update s3api_object_handlers_list.go

* handle create directory
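Putting the commit's final rules together (backslash conversion, duplicate-slash collapsing, leading-slash removal, trailing-slash preservation, empty result for slash-only input), a minimal sketch of the described behavior follows. The real s3_constants.NormalizeObjectKey may differ in details:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeObjectKey sketches the normalization steps described in this commit.
// It is an illustration, not the actual SeaweedFS implementation.
func normalizeObjectKey(key string) string {
	// 1. Note whether a trailing (back)slash was present.
	hadTrailingSlash := strings.HasSuffix(key, "/") || strings.HasSuffix(key, "\\")

	// 2. Convert backslashes and collapse duplicate slashes.
	key = strings.ReplaceAll(key, "\\", "/")
	for strings.Contains(key, "//") {
		key = strings.ReplaceAll(key, "//", "/")
	}

	// 3. Strip surrounding slashes to get the bare S3-format key.
	key = strings.Trim(key, "/")

	// 4. Restore the trailing slash only if something remains, so
	//    slash-only inputs like "///" normalize to "" rather than "/".
	if key != "" && hadTrailingSlash {
		key += "/"
	}
	return key
}

func main() {
	fmt.Println(normalizeObjectKey("/foo//bar"))  // foo/bar
	fmt.Println(normalizeObjectKey("dir\\sub\\")) // dir/sub/
	fmt.Println(normalizeObjectKey("///"))        // prints an empty line
}
```

Preserving the trailing slash is what keeps explicit directory markers such as 'explicit_dir/' distinguishable from regular keys after normalization.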
committed by GitHub (GPG Key ID: B5690EEEBB952194; no known key found for this signature in database)
18 changed files with 349 additions and 161 deletions
86  test/s3/parquet/debug_write_dataset.py
2   test/s3/parquet/test_implicit_directory_fix.py
5   weed/replication/sink/gcssink/gcs_sink.go
20  weed/s3api/s3_constants/header.go
28  weed/s3api/s3_constants/header_test.go
2   weed/s3api/s3api_conditional_headers_test.go
85  weed/s3api/s3api_list_normalization_test.go
51  weed/s3api/s3api_object_handlers.go
16  weed/s3api/s3api_object_handlers_acl.go
16  weed/s3api/s3api_object_handlers_copy.go
6   weed/s3api/s3api_object_handlers_delete.go
2   weed/s3api/s3api_object_handlers_postpolicy.go
66  weed/s3api/s3api_object_handlers_postpolicy_test.go
24  weed/s3api/s3api_object_handlers_put.go
34  weed/s3api/s3api_object_handlers_tagging.go
61  weed/s3api/s3api_object_versioning.go
4   weed/s3api/s3api_version_id.go
2   weed/storage/backend/s3_backend/s3_download.go
@@ -0,0 +1,86 @@ test/s3/parquet/debug_write_dataset.py
#!/usr/bin/env python3
"""Debug script to understand what pads.write_dataset creates."""

import sys

import pyarrow as pa
import pyarrow.dataset as pads
import s3fs

# Create a simple test table
table = pa.table({'id': [1, 2, 3], 'value': [1.0, 2.0, 3.0]})

# Initialize S3 filesystem
fs = s3fs.S3FileSystem(
    client_kwargs={'endpoint_url': 'http://localhost:8333'},
    key='some_access_key1',
    secret='some_secret_key1',
    use_listings_cache=False,
)

# Create bucket
if not fs.exists('test-bucket'):
    fs.mkdir('test-bucket')

# Write with pads.write_dataset
test_path = 's3://test-bucket/test-write-simple/'
print(f"Writing to: {test_path}")
print(f"Table schema: {table.schema}")
print(f"Table rows: {table.num_rows}")

try:
    pads.write_dataset(table, test_path, format='parquet', filesystem=fs)
    print("\n✓ Write succeeded")

    # List all files recursively
    print(f"\nListing all files recursively under {test_path}:")
    base_path = 'test-bucket/test-write-simple'

    def list_recursive(path, indent=0):
        try:
            items = fs.ls(path, detail=False)
            for item in items:
                is_dir = fs.isdir(item)
                item_name = item.split('/')[-1] if '/' in item else item
                if is_dir:
                    print(f"{' ' * indent}📁 {item_name}/")
                    list_recursive(item, indent + 1)
                else:
                    # Get file size
                    try:
                        info = fs.info(item)
                        size = info.get('size', 0)
                        print(f"{' ' * indent}📄 {item_name} ({size} bytes)")
                    except Exception:
                        print(f"{' ' * indent}📄 {item_name}")
        except Exception as e:
            print(f"{' ' * indent}Error listing {path}: {e}")

    list_recursive(base_path)

    # Try to read back with different methods
    print("\n\nTrying to read back using different methods:")

    # Method 1: pads.dataset with the same path
    print(f"\n1. pads.dataset('{test_path}'):")
    try:
        dataset = pads.dataset(test_path, format='parquet', filesystem=fs)
        result = dataset.to_table()
        print(f"  ✓ Success: {result.num_rows} rows")
    except Exception as e:
        print(f"  ✗ Failed: {e}")

    # Method 2: pads.dataset with the dir containing parquet files
    print("\n2. pads.dataset without trailing slash:")
    test_path_no_slash = 's3://test-bucket/test-write-simple'
    try:
        dataset = pads.dataset(test_path_no_slash, format='parquet', filesystem=fs)
        result = dataset.to_table()
        print(f"  ✓ Success: {result.num_rows} rows")
    except Exception as e:
        print(f"  ✗ Failed: {e}")

except Exception as e:
    import traceback
    print(f"✗ Error: {e}")
    traceback.print_exc()
    sys.exit(1)
@@ -0,0 +1,85 @@ weed/s3api/s3api_list_normalization_test.go
package s3api

import (
	"strings"
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
)

// TestPrefixNormalizationInList verifies that prefixes are normalized consistently in list operations
func TestPrefixNormalizationInList(t *testing.T) {
	tests := []struct {
		name           string
		inputPrefix    string
		expectedPrefix string
		description    string
	}{
		{
			name:           "simple prefix",
			inputPrefix:    "parquet-tests/abc123/",
			expectedPrefix: "parquet-tests/abc123/",
			description:    "Normal prefix with trailing slash",
		},
		{
			name:           "leading slash",
			inputPrefix:    "/parquet-tests/abc123/",
			expectedPrefix: "parquet-tests/abc123/",
			description:    "Prefix with leading slash should be stripped",
		},
		{
			name:           "duplicate slashes",
			inputPrefix:    "parquet-tests//abc123/",
			expectedPrefix: "parquet-tests/abc123/",
			description:    "Prefix with duplicate slashes should be cleaned",
		},
		{
			name:           "backslashes",
			inputPrefix:    "parquet-tests\\abc123\\",
			expectedPrefix: "parquet-tests/abc123/",
			description:    "Backslashes should be converted to forward slashes",
		},
	}

	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			// Normalize using NormalizeObjectKey (same as object keys)
			normalizedPrefix := s3_constants.NormalizeObjectKey(tt.inputPrefix)

			if normalizedPrefix != tt.expectedPrefix {
				t.Errorf("Prefix normalization mismatch:\n  Input:    %q\n  Expected: %q\n  Got:      %q\n  Desc:     %s",
					tt.inputPrefix, tt.expectedPrefix, normalizedPrefix, tt.description)
			}
		})
	}
}

// TestListPrefixConsistency verifies that objects written and listed use consistent key formats
func TestListPrefixConsistency(t *testing.T) {
	// When an object is written to "parquet-tests/123/data.parquet",
	// and we list with prefix "parquet-tests/123/",
	// we should find that object

	objectKey := "parquet-tests/123/data.parquet"
	listPrefix := "parquet-tests/123/"

	// Normalize as would happen in PUT
	normalizedObjectKey := s3_constants.NormalizeObjectKey(objectKey)

	// Check that the list prefix would match the object path
	if !startsWithPrefix(normalizedObjectKey, listPrefix) {
		t.Errorf("List prefix mismatch:\n  Object: %q\n  Prefix: %q\n  Object doesn't start with prefix",
			normalizedObjectKey, listPrefix)
	}
}

func startsWithPrefix(objectKey, prefix string) bool {
	// Normalize the prefix using the same logic as NormalizeObjectKey
	normalizedPrefix := s3_constants.NormalizeObjectKey(prefix)

	// An empty prefix matches everything
	if normalizedPrefix == "" {
		return true
	}

	// strings.HasPrefix avoids the out-of-range slice a direct
	// objectKey[:len(normalizedPrefix)] comparison could panic on
	return strings.HasPrefix(objectKey, normalizedPrefix)
}