S3: Directly read write volume servers (#7481)
* Lazy Versioning Check, Conditional SSE Entry Fetch, HEAD Request Optimization
* revert
Reverted the conditional versioning check to always check versioning status
Reverted the conditional SSE entry fetch to always fetch entry metadata
* Lazy Entry Fetch for SSE, Skip Conditional Header Check
* SSE-KMS headers are present, this is not an SSE-C request (mutually exclusive)
* SSE-C is mutually exclusive with SSE-S3 and SSE-KMS
* refactor
* Removed Premature Mutual Exclusivity Check
* check for the presence of the X-Amz-Server-Side-Encryption header
* not used
* fmt
* directly read write volume servers
* HTTP Range Request Support
* set header
* md5
* copy object
* fix sse
* fmt
* implement sse
* sse continue
* fixed the suffix range bug (bytes=-N for "last N bytes")
* debug logs
* Missing PartsCount Header
* profiling
* url encoding
* test_multipart_get_part
* headers
* debug
* adjust log level
* handle part number
* Update s3api_object_handlers.go
* nil safety
* set ModifiedTsNs
* remove
* nil check
* fix sse header
* same logic as filer
* decode values
* decode ivBase64
* s3: Fix SSE decryption JWT authentication and streaming errors
Critical fix for SSE (Server-Side Encryption) test failures:
1. **JWT Authentication Bug** (Root Cause):
- Changed from GenJwtForFilerServer to GenJwtForVolumeServer
- S3 API now uses correct JWT when directly reading from volume servers
- Matches filer's authentication pattern for direct volume access
- Fixes 'unexpected EOF' and 500 errors in SSE tests
2. **Streaming Error Handling**:
- Added error propagation in getEncryptedStreamFromVolumes goroutine
- Use CloseWithError() to properly communicate stream failures
- Added debug logging for streaming errors
3. **Response Header Timing**:
- Removed premature WriteHeader(http.StatusOK) call
- Let Go's http package write status automatically on first write
- Prevents header lock when errors occur during streaming
4. **Enhanced SSE Decryption Debugging**:
- Added IV/Key validation and logging for SSE-C, SSE-KMS, SSE-S3
- Better error messages for missing or invalid encryption metadata
- Added glog.V(2) debugging for decryption setup
This fixes SSE integration test failures where encrypted objects
could not be retrieved due to volume server authentication failures.
The JWT bug was causing volume servers to reject requests, resulting
in truncated/empty streams (EOF) or internal errors.
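The gist of the streaming fix, as a minimal sketch using Go's `io.Pipe`; the function names here are illustrative stand-ins, not the actual SeaweedFS identifiers:

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

// fetchEncrypted stands in for the volume-server read; it streams the
// encrypted body into w and returns any error it hits along the way.
func fetchEncrypted(w io.Writer) error {
	_, err := io.Copy(w, strings.NewReader("encrypted bytes"))
	return err
}

// streamFromVolumes wires the fetch into an io.Pipe so that a mid-stream
// failure reaches the reader as an error instead of a silent short read.
func streamFromVolumes() io.ReadCloser {
	pr, pw := io.Pipe()
	go func() {
		if err := fetchEncrypted(pw); err != nil {
			// CloseWithError propagates the failure to the reader side.
			pw.CloseWithError(fmt.Errorf("streaming from volume server: %w", err))
			return
		}
		pw.Close()
	}()
	return pr
}

func main() {
	body, err := io.ReadAll(streamFromVolumes())
	fmt.Println(string(body), err)
}
```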
* s3: Fix SSE multipart upload metadata preservation
Critical fix for SSE multipart upload test failures (SSE-C and SSE-KMS):
**Root Cause - Incomplete SSE Metadata Copying**:
The old code only tried to copy 'SeaweedFSSSEKMSKey' from the first
part to the completed object. This had TWO bugs:
1. **Wrong Constant Name** (Key Mismatch Bug):
- Storage uses: SeaweedFSSSEKMSKeyHeader = 'X-SeaweedFS-SSE-KMS-Key'
- Old code read: SeaweedFSSSEKMSKey = 'x-seaweedfs-sse-kms-key'
- Result: SSE-KMS metadata was NEVER copied → 500 errors
2. **Missing SSE-C and SSE-S3 Headers**:
- SSE-C requires: IV, Algorithm, KeyMD5
- SSE-S3 requires: encrypted key data + standard headers
- Old code: copied nothing for SSE-C/SSE-S3 → decryption failures
**Fix - Complete SSE Header Preservation**:
Now copies ALL SSE headers from first part to completed object:
- SSE-C: SeaweedFSSSEIV, CustomerAlgorithm, CustomerKeyMD5
- SSE-KMS: SeaweedFSSSEKMSKeyHeader, AwsKmsKeyId, ServerSideEncryption
- SSE-S3: SeaweedFSSSES3Key, ServerSideEncryption
Applied consistently to all 3 code paths:
1. Versioned buckets (creates version file)
2. Suspended versioning (creates main object with null versionId)
3. Non-versioned buckets (creates main object)
**Why This Is Correct**:
The headers copied EXACTLY match what putToFiler stores during part
upload (lines 496-521 in s3api_object_handlers_put.go). This ensures
detectPrimarySSEType() can correctly identify encrypted multipart
objects and trigger inline decryption with proper metadata.
Fixes: TestSSEMultipartUploadIntegration (SSE-C and SSE-KMS subtests)
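A sketch of the header-preservation step described above. The header strings below are placeholders that mirror the constants named in this commit message; the real code uses `s3_constants` values and the exact strings may differ:

```go
package main

import "fmt"

// copySSEHeaders copies the SSE-related extended headers from the first
// part's entry metadata to the completed object's entry metadata.
func copySSEHeaders(firstPart, completed map[string][]byte) {
	for _, h := range []string{
		"X-SeaweedFS-SSE-IV",                              // SSE-C IV
		"X-Amz-Server-Side-Encryption-Customer-Algorithm", // SSE-C
		"X-Amz-Server-Side-Encryption-Customer-Key-MD5",   // SSE-C
		"X-SeaweedFS-SSE-KMS-Key",                         // SSE-KMS key metadata
		"X-Amz-Server-Side-Encryption-Aws-Kms-Key-Id",     // SSE-KMS
		"X-SeaweedFS-SSE-S3-Key",                          // SSE-S3 key data
		"X-Amz-Server-Side-Encryption",                    // SSE-KMS / SSE-S3
	} {
		if v, ok := firstPart[h]; ok {
			completed[h] = v
		}
	}
}

func main() {
	first := map[string][]byte{"X-SeaweedFS-SSE-KMS-Key": []byte("key-metadata")}
	done := map[string][]byte{}
	copySSEHeaders(first, done)
	fmt.Println(len(done)) // 1
}
```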
* s3: Add debug logging for versioning state diagnosis
Temporary debug logging to diagnose test_versioning_obj_plain_null_version_overwrite_suspended failure.
Added glog.V(0) logging to show:
1. setBucketVersioningStatus: when versioning status is changed
2. PutObjectHandler: what versioning state is detected (Enabled/Suspended/none)
3. PutObjectHandler: which code path is taken (putVersionedObject vs putSuspendedVersioningObject)
This will help identify if:
- The versioning status is being set correctly in bucket config
- The cache is returning stale/incorrect versioning state
- The switch statement is correctly routing to suspended vs enabled handlers
* s3: Enhanced versioning state tracing for suspended versioning diagnosis
Added comprehensive logging across the entire versioning state flow:
PutBucketVersioningHandler:
- Log requested status (Enabled/Suspended)
- Log when calling setBucketVersioningStatus
- Log success/failure of status change
setBucketVersioningStatus:
- Log bucket and status being set
- Log when config is updated
- Log completion with error code
updateBucketConfig:
- Log versioning state being written to cache
- Immediate cache verification after Set
- Log if cache verification fails
getVersioningState:
- Log bucket name and state being returned
- Log if object lock forces VersioningEnabled
- Log errors
This will reveal:
1. If PutBucketVersioning(Suspended) is reaching the handler
2. If the cache update succeeds
3. What state getVersioningState returns during PUT
4. Any cache consistency issues
Expected to show why bucket still reports 'Enabled' after 'Suspended' call.
* s3: Add SSE chunk detection debugging for multipart uploads
Added comprehensive logging to diagnose why TestSSEMultipartUploadIntegration fails:
detectPrimarySSEType now logs:
1. Total chunk count and extended header count
2. All extended headers with 'sse'/'SSE'/'encryption' in the name
3. For each chunk: index, SseType, and whether it has metadata
4. Final SSE type counts (SSE-C, SSE-KMS, SSE-S3)
This will reveal if:
- Chunks are missing SSE metadata after multipart completion
- Extended headers are copied correctly from first part
- The SSE detection logic is working correctly
Expected to show if chunks have SseType=0 (none) or proper SSE types set.
* s3: Trace SSE chunk metadata through multipart completion and retrieval
Added end-to-end logging to track SSE chunk metadata lifecycle:
**During Multipart Completion (filer_multipart.go)**:
1. Log finalParts chunks BEFORE mkFile - shows SseType and metadata
2. Log versionEntry.Chunks INSIDE mkFile callback - shows if mkFile preserves SSE info
3. Log success after mkFile completes
**During GET Retrieval (s3api_object_handlers.go)**:
1. Log retrieved entry chunks - shows SseType and metadata after retrieval
2. Log detected SSE type result
This will reveal at which point SSE chunk metadata is lost:
- If finalParts have SSE metadata but versionEntry.Chunks don't → mkFile bug
- If versionEntry.Chunks have SSE metadata but retrieved chunks don't → storage/retrieval bug
- If chunks never have SSE metadata → multipart completion SSE processing bug
Expected to show chunks with SseType=NONE during retrieval even though
they were created with proper SseType during multipart completion.
* s3: Fix SSE-C multipart IV base64 decoding bug
**Critical Bug Found**: SSE-C multipart uploads were failing because:
Root Cause:
- entry.Extended[SeaweedFSSSEIV] stores base64-encoded IV (24 bytes for 16-byte IV)
- SerializeSSECMetadata expects raw IV bytes (16 bytes)
- During multipart completion, we were passing base64 IV directly → serialization error
Error Message:
"Failed to serialize SSE-C metadata for chunk in part X: invalid IV length: expected 16 bytes, got 24"
Fix:
- Base64-decode IV before passing to SerializeSSECMetadata
- Added error handling for decode failures
Impact:
- SSE-C multipart uploads will now correctly serialize chunk metadata
- Chunks will have proper SSE metadata for decryption during GET
This fixes the SSE-C subtest of TestSSEMultipartUploadIntegration.
SSE-KMS still has a separate issue (error code 23) being investigated.
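A minimal sketch of the decode-before-serialize fix, using only the standard library; `decodeStoredIV` is an illustrative helper name, not the actual function:

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// decodeStoredIV converts the base64 IV kept in entry.Extended (24 characters
// for a 16-byte IV) back into the raw bytes the metadata serializer expects.
func decodeStoredIV(stored []byte) ([]byte, error) {
	iv, err := base64.StdEncoding.DecodeString(string(stored))
	if err != nil {
		return nil, fmt.Errorf("decode SSE-C IV: %w", err)
	}
	if len(iv) != 16 {
		return nil, fmt.Errorf("invalid IV length: expected 16 bytes, got %d", len(iv))
	}
	return iv, nil
}

func main() {
	stored := []byte(base64.StdEncoding.EncodeToString(make([]byte, 16)))
	iv, err := decodeStoredIV(stored)
	fmt.Println(len(iv), err) // 16 <nil>
}
```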
* fixes
* kms sse
* handle retry: if the object is not found in the .versions folder, fall back to reading the normal object
* quick check (no retries) to see if the .versions/ directory exists
* skip retry if object is not found
* explicit update to avoid sync delay
* fix map update lock
* Remove fmt.Printf debug statements
* Fix SSE-KMS multipart base IV fallback to fail instead of regenerating
* fmt
* Fix ACL grants storage logic
* header handling
* nil handling
* range read for sse content
* test range requests for sse objects
* fmt
* unused code
* upload in chunks
* header case
* fix url
* bucket policy error vs bucket not found
* jwt handling
* fmt
* jwt in request header
* Optimize Case-Insensitive Prefix Check
* dead code
* Eliminated Unnecessary Stream Prefetch for Multipart SSE
* range sse
* sse
* refactor
* context
* fmt
* fix type
* fix SSE-C IV Mismatch
* Fix Headers Being Set After WriteHeader
* fix url parsing
* propagate sse headers
* multipart sse-s3
* aws sig v4 authentication
* sse kms
* set content range
* better errors
* Update s3api_object_handlers_copy.go
* Update s3api_object_handlers.go
* Update s3api_object_handlers.go
* avoid magic number
* clean up
* Update s3api_bucket_policy_handlers.go
* fix url parsing
* context
* data and metadata both use background context
* adjust the offset
* SSE Range Request IV Calculation
* adjust logs
* IV relative to offset in each part, not the whole file
* collect logs
* offset
* fix offset
* fix url
* logs
* variable
* jwt
* Multipart ETag semantics: conditionally set object-level Md5 for single-chunk uploads only.
* sse
* adjust IV and offset
* multipart boundaries
* ensures PUT and GET operations return consistent ETags
* Metadata Header Case
* CommonPrefixes Sorting with URL Encoding
* always sort
* remove the extra PathUnescape call
* fix the multipart get part ETag
* the FileChunk is created without setting ModifiedTsNs
* Sort CommonPrefixes lexicographically to match AWS S3 behavior
* set md5 for multipart uploads
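For context, the conventional S3 multipart ETag is the MD5 of the concatenated per-part MD5 digests, suffixed with the part count. The sketch below shows that convention, which the ETag-consistency commits above aim to honor; it is not SeaweedFS's verbatim code:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
)

// s3MultipartETag computes the conventional S3 multipart ETag:
// MD5 over the concatenation of each part's raw MD5 digest, followed
// by "-<number of parts>".
func s3MultipartETag(partMD5s [][]byte) string {
	h := md5.New()
	for _, p := range partMD5s {
		h.Write(p)
	}
	return fmt.Sprintf("%s-%d", hex.EncodeToString(h.Sum(nil)), len(partMD5s))
}

func main() {
	p1 := md5.Sum([]byte("part one"))
	p2 := md5.Sum([]byte("part two"))
	fmt.Println(s3MultipartETag([][]byte{p1[:], p2[:]}))
}
```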
* prevents any potential data loss or corruption in the small-file inline storage path
* compiles correctly
* decryptedReader will now be properly closed after use
* Fixed URL encoding and sort order for CommonPrefixes
* Update s3api_object_handlers_list.go
* SSE-x Chunk View Decryption
* Different IV offset calculations for single-part vs multipart objects
* still too verbose in logs
* less logs
* ensure correct conversion
* fix listing
* nil check
* minor fixes
* nil check
* single character delimiter
* optimize
* range on empty object or zero-length
* correct IV based on its position within that part, not its position in the entire object
* adjust offset
* offset: for SSE range reads (see the sketch below),
  - fetch the FULL encrypted chunk (not just the requested range)
  - adjust the IV by PartOffset/ChunkOffset only
  - decrypt the full chunk
  - skip in the DECRYPTED stream to reach OffsetInChunk
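A sketch of the CTR seeking this implies, assuming AES-CTR with a 16-byte block: advance the counter by whole blocks for the block-aligned part of the offset, then discard the intra-block remainder from the decrypted stream (the `ivSkip` handling mentioned further down). Names are illustrative, not the SeaweedFS functions:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"fmt"
)

// ctrStreamAt returns an AES-CTR stream positioned at byte offset off under
// (key, baseIV), plus how many leading decrypted bytes must be discarded.
func ctrStreamAt(key, baseIV []byte, off int64) (cipher.Stream, int, error) {
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, 0, err
	}
	iv := make([]byte, len(baseIV))
	copy(iv, baseIV)
	addToCounter(iv, uint64(off/aes.BlockSize)) // whole-block advance
	return cipher.NewCTR(block, iv), int(off % aes.BlockSize), nil
}

// addToCounter treats iv as a big-endian counter and adds n to it.
func addToCounter(iv []byte, n uint64) {
	for i := len(iv) - 1; i >= 0 && n > 0; i-- {
		n += uint64(iv[i])
		iv[i] = byte(n)
		n >>= 8
	}
}

func main() {
	key, iv := make([]byte, 16), make([]byte, 16)
	plain := []byte("0123456789abcdefghijklmnopqrstuvwxyz")
	block, _ := aes.NewCipher(key)
	enc := make([]byte, len(plain))
	cipher.NewCTR(block, iv).XORKeyStream(enc, plain)

	off := int64(20) // not block-aligned: 1 whole block + 4 bytes
	stream, skip, _ := ctrStreamAt(key, iv, off)
	dec := make([]byte, len(enc)-int(off)+skip)
	stream.XORKeyStream(dec, enc[off-int64(skip):])
	fmt.Println(string(dec[skip:])) // "klmnopqrstuvwxyz"
}
```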
* loop breaking
* refactor
* error on no content
* handle intra-block byte skipping
* Incomplete HTTP Response Error Handling
* multipart SSE
* Update s3api_object_handlers.go
* address comments
* less logs
* handling directory
* Optimized rejectDirectoryObjectWithoutSlash() to avoid unnecessary lookups
* Revert "handling directory"
This reverts commit 3a335f0ac3.
* constant
* Consolidate nil entry checks in GetObjectHandler
* add range tests
* Consolidate redundant nil entry checks in HeadObjectHandler
* adjust logs
* SSE type
* large files
* large files
Reverted the plain-object range test
* ErrNoEncryptionConfig
* Fixed SSERangeReader Infinite Loop Vulnerability
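The class of bug named above is a skip/advance loop that can spin forever when the underlying reader stops making progress. A hedged sketch of the guard follows; the real `SSERangeReader` differs, and `io.CopyN(io.Discard, r, n)` is the stdlib shortcut for the same job:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

// skipN discards exactly n bytes from r, returning an error instead of
// looping forever if the reader returns zero bytes without an error.
func skipN(r io.Reader, n int64) error {
	buf := make([]byte, 32*1024)
	for n > 0 {
		chunk := int64(len(buf))
		if n < chunk {
			chunk = n
		}
		read, err := r.Read(buf[:chunk])
		n -= int64(read)
		if err != nil {
			return err
		}
		if read == 0 {
			return io.ErrNoProgress // guard against an infinite loop
		}
	}
	return nil
}

func main() {
	r := bytes.NewReader(make([]byte, 100))
	fmt.Println(skipN(r, 64)) // <nil>
}
```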
* Fixed SSE-KMS Multipart ChunkReader HTTP Body Leak
* handle empty directory in S3, added PyArrow tests
* purge unused code
* Update s3_parquet_test.py
* Update requirements.txt
* According to S3 specifications, when both partNumber and Range are present, the Range should apply within the selected part's boundaries, not to the full object.
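A small sketch of that rule: map the part-relative Range onto absolute object offsets and clamp it to the selected part's boundary (illustrative helper, not the actual handler code):

```go
package main

import "fmt"

// effectiveRange maps a Range that is relative to the part chosen by
// partNumber onto absolute offsets within the whole object.
// partStart/partSize describe that part; rangeStart/rangeEnd are the
// client's inclusive Range values relative to the part.
func effectiveRange(partStart, partSize, rangeStart, rangeEnd int64) (int64, int64) {
	if rangeEnd >= partSize { // clamp to the part's boundary
		rangeEnd = partSize - 1
	}
	return partStart + rangeStart, partStart + rangeEnd
}

func main() {
	// Part 2 starts at 5 MiB and is 5 MiB long; "bytes=0-1023" of that part
	// maps to absolute bytes 5242880..5243903.
	s, e := effectiveRange(5<<20, 5<<20, 0, 1023)
	fmt.Println(s, e)
}
```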
* handle errors
* errors after writing header
* https
* fix: Wait for volume assignment readiness before running Parquet tests
The test-implicit-dir-with-server test was failing with an Internal Error
because volume assignment was not ready when tests started. This fix adds
a check that attempts a volume assignment and waits for it to succeed
before proceeding with tests.
This ensures that:
1. Volume servers are registered with the master
2. Volume growth is triggered if needed
3. The system can successfully assign volumes for writes
Fixes the timeout issue where boto3 would retry 4 times and fail with
'We encountered an internal error, please try again.'
* sse tests
* store derived IV
* fix: Clean up gRPC ports between tests to prevent port conflicts
The second test (test-implicit-dir-with-server) was failing because the
volume server's gRPC port (18080 = VOLUME_PORT + 10000) was still in use
from the first test. The cleanup code only killed HTTP port processes,
not gRPC port processes.
Added cleanup for gRPC ports in all stop targets:
- Master gRPC: MASTER_PORT + 10000 (19333)
- Volume gRPC: VOLUME_PORT + 10000 (18080)
- Filer gRPC: FILER_PORT + 10000 (18888)
This ensures clean state between test runs in CI.
* add import
* address comments
* docs: Add placeholder documentation files for Parquet test suite
Added three missing documentation files referenced in test/s3/parquet/README.md:
1. TEST_COVERAGE.md - Documents 43 total test cases (17 Go unit tests,
6 Python integration tests, 20 Python end-to-end tests)
2. FINAL_ROOT_CAUSE_ANALYSIS.md - Explains the s3fs compatibility issue
with PyArrow, the implicit directory problem, and how the fix works
3. MINIO_DIRECTORY_HANDLING.md - Compares MinIO's directory handling
approach with SeaweedFS's implementation
Each file contains:
- Title and overview
- Key technical details relevant to the topic
- TODO sections for future expansion
These placeholder files resolve the broken README links and provide
structure for future detailed documentation.
* clean up if metadata operation failed
* Update s3_parquet_test.py
* clean up
* Update Makefile
* Update s3_parquet_test.py
* Update Makefile
* Handle ivSkip for non-block-aligned offsets
* Update README.md
* stop volume server faster
* stop volume server in 1 second
* different IV for each chunk in SSE-S3 and SSE-KMS
* clean up if fails
* testing upload
* error propagation
* fmt
* simplify
* fix copying
* less logs
* endian
* Added marshaling error handling
* handling invalid ranges
* error handling for adding to log buffer
* fix logging
* avoid returning too quickly and ensure proper cleaning up
* Activity Tracking for Disk Reads
* Cleanup Unused Parameters
* Activity Tracking for Kafka Publishers
* Proper Test Error Reporting
* refactoring
* less logs
* less logs
* go fmt
* guard it with if entry.Attributes.TtlSec > 0 to match the pattern used elsewhere.
* Handle bucket-default encryption config errors explicitly for multipart
* consistent activity tracking
* obsolete code for s3 on filer read/write handlers
* Update weed/s3api/s3api_object_handlers_list.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
74 changed files with 8380 additions and 1311 deletions
- 130  .github/workflows/s3-parquet-tests.yml
- 12   .github/workflows/s3-sse-tests.yml
- 2    .github/workflows/s3tests.yml
- 25   .github/workflows/test-s3-over-https-using-awscli.yml
- 40   test/s3/parquet/.gitignore
- 58   test/s3/parquet/FINAL_ROOT_CAUSE_ANALYSIS.md
- 70   test/s3/parquet/MINIO_DIRECTORY_HANDLING.md
- 365  test/s3/parquet/Makefile
- 206  test/s3/parquet/README.md
- 46   test/s3/parquet/TEST_COVERAGE.md
- 7    test/s3/parquet/requirements.txt
- 421  test/s3/parquet/s3_parquet_test.py
- 307  test/s3/parquet/test_implicit_directory_fix.py
- 104  test/s3/sse/s3_range_headers_test.go
- 445  test/s3/sse/s3_sse_range_server_test.go
- 4    weed/filer/filer_notify.go
- 5    weed/filer/meta_aggregator.go
- 6    weed/mq/broker/broker_grpc_pub_follow.go
- 27   weed/mq/broker/broker_log_buffer_offset.go
- 22   weed/mq/topic/local_partition.go
- 7    weed/mq/topic/local_partition_offset.go
- 267  weed/operation/upload_chunked.go
- 312  weed/operation/upload_chunked_test.go
- 2    weed/pb/filer_pb/filer_pb_helper.go
- 26   weed/s3api/auth_credentials.go
- 12   weed/s3api/auth_credentials_subscribe.go
- 4    weed/s3api/custom_types.go
- 267  weed/s3api/filer_multipart.go
- 6    weed/s3api/filer_util.go
- 5    weed/s3api/policy_conversion.go
- 27   weed/s3api/policy_conversion_test.go
- 18   weed/s3api/s3_bucket_encryption.go
- 13   weed/s3api/s3_constants/header.go
- 2    weed/s3api/s3_iam_middleware.go
- 2    weed/s3api/s3_multipart_iam.go
- 25   weed/s3api/s3_sse_c.go
- 307  weed/s3api/s3_sse_ctr_test.go
- 18   weed/s3api/s3_sse_kms.go
- 14   weed/s3api/s3_sse_s3.go
- 266  weed/s3api/s3_sse_s3_multipart_test.go
- 15   weed/s3api/s3_sse_utils.go
- 11   weed/s3api/s3api_bucket_config.go
- 3    weed/s3api/s3api_bucket_handlers.go
- 3    weed/s3api/s3api_bucket_policy_arn_test.go
- 4    weed/s3api/s3api_bucket_policy_engine.go
- 64   weed/s3api/s3api_bucket_policy_handlers.go
- 285  weed/s3api/s3api_implicit_directory_test.go
- 2763 weed/s3api/s3api_object_handlers.go
- 136  weed/s3api/s3api_object_handlers_copy.go
- 77   weed/s3api/s3api_object_handlers_list.go
- 34   weed/s3api/s3api_object_handlers_multipart.go
- 4    weed/s3api/s3api_object_handlers_postpolicy.go
- 694  weed/s3api/s3api_object_handlers_put.go
- 109  weed/s3api/s3api_object_handlers_test.go
- 22   weed/s3api/s3api_object_versioning.go
- 24   weed/s3api/s3api_put_handlers.go
- 16   weed/s3api/s3api_server.go
- 361  weed/s3api/s3api_sse_chunk_metadata_test.go
- 189  weed/s3api/s3api_sse_decrypt_test.go
- 257  weed/s3api/s3api_sse_s3_upload_test.go
- 4    weed/s3api/s3err/error_handler.go
- 26   weed/server/filer_server_handlers_read.go
- 54   weed/server/filer_server_handlers_write_autochunk.go
- 68   weed/server/filer_server_handlers_write_upload.go
- 167  weed/util/log_buffer/log_buffer.go
- 224  weed/util/log_buffer/log_buffer_corruption_test.go
- 43   weed/util/log_buffer/log_buffer_flush_gap_test.go
- 20   weed/util/log_buffer/log_buffer_queryability_test.go
- 24   weed/util/log_buffer/log_buffer_test.go
- 18   weed/util/log_buffer/log_read.go
- 18   weed/util/log_buffer/log_read_integration_test.go
- 36   weed/util/log_buffer/log_read_stateless_test.go
- 4    weed/util/log_buffer/log_read_test.go
- 12   weed/util/log_buffer/sealed_buffer.go
.github/workflows/s3-parquet-tests.yml
@@ -0,0 +1,130 @@
name: "S3 PyArrow Parquet Tests"

on:
  push:
    branches: [master]
    paths:
      - 'weed/s3api/**'
      - 'weed/filer/**'
      - 'test/s3/parquet/**'
      - '.github/workflows/s3-parquet-tests.yml'
  pull_request:
    branches: [master]
    paths:
      - 'weed/s3api/**'
      - 'weed/filer/**'
      - 'test/s3/parquet/**'
      - '.github/workflows/s3-parquet-tests.yml'
  workflow_dispatch:

env:
  S3_ACCESS_KEY: some_access_key1
  S3_SECRET_KEY: some_secret_key1
  S3_ENDPOINT_URL: http://localhost:8333
  BUCKET_NAME: test-parquet-bucket

jobs:
  parquet-integration-tests:
    name: PyArrow Parquet Tests (Python ${{ matrix.python-version }})
    runs-on: ubuntu-latest
    timeout-minutes: 20

    strategy:
      fail-fast: false
      matrix:
        python-version: ['3.9', '3.11', '3.12']

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: ^1.24
          cache: true

      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
          cache: 'pip'
          cache-dependency-path: 'test/s3/parquet/requirements.txt'

      - name: Install system dependencies
        run: |
          sudo apt-get update
          sudo apt-get install -y lsof netcat-openbsd

      - name: Build SeaweedFS
        run: |
          cd weed
          go build -v
          sudo cp weed /usr/local/bin/
          weed version

      - name: Run PyArrow Parquet integration tests
        run: |
          cd test/s3/parquet
          make test-with-server
        env:
          SEAWEEDFS_BINARY: weed
          S3_PORT: 8333
          FILER_PORT: 8888
          VOLUME_PORT: 8080
          MASTER_PORT: 9333
          VOLUME_MAX_SIZE_MB: 50

      - name: Run implicit directory fix tests
        run: |
          cd test/s3/parquet
          make test-implicit-dir-with-server
        env:
          SEAWEEDFS_BINARY: weed
          S3_PORT: 8333
          FILER_PORT: 8888
          VOLUME_PORT: 8080
          MASTER_PORT: 9333

      - name: Upload test logs on failure
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: test-logs-python-${{ matrix.python-version }}
          path: |
            /tmp/seaweedfs-parquet-*.log
            test/s3/parquet/*.log
          retention-days: 7

      - name: Cleanup
        if: always()
        run: |
          cd test/s3/parquet
          make stop-seaweedfs-safe || true
          make clean || true

  unit-tests:
    name: Go Unit Tests (Implicit Directory)
    runs-on: ubuntu-latest
    timeout-minutes: 10

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Go
        uses: actions/setup-go@v5
        with:
          go-version: ^1.24
          cache: true

      - name: Run Go unit tests
        run: |
          cd weed/s3api
          go test -v -run TestImplicitDirectory

      - name: Run all S3 API tests
        run: |
          cd weed/s3api
          go test -v -timeout 5m
test/s3/parquet/.gitignore
@@ -0,0 +1,40 @@
# Python virtual environment
venv/
.venv/
env/
ENV/

# Python cache
__pycache__/
*.py[cod]
*$py.class
*.so
.Python

# Test artifacts
*.log
test_run.log
weed-test.log

# SeaweedFS data directories
filerldb2/
idx/
dat/
*.idx
*.dat

# Temporary test files
.pytest_cache/
.coverage
htmlcov/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# OS
.DS_Store
Thumbs.db
test/s3/parquet/FINAL_ROOT_CAUSE_ANALYSIS.md
@@ -0,0 +1,58 @@
# Final Root Cause Analysis

## Overview

This document provides a deep technical analysis of the s3fs compatibility issue with PyArrow Parquet datasets on SeaweedFS, and the solution implemented to resolve it.

## Root Cause

When PyArrow writes datasets using `write_dataset()`, it creates implicit directory structures by writing files without explicit directory markers. However, some S3 workflows may create 0-byte directory markers.

### The Problem

1. **PyArrow writes dataset files** without creating explicit directory objects
2. **s3fs calls HEAD** on the directory path to check if it exists
3. **If HEAD returns 200** with `Content-Length: 0`, s3fs interprets it as a file (not a directory)
4. **PyArrow fails** when trying to read, reporting "Parquet file size is 0 bytes"

### AWS S3 Behavior

AWS S3 returns **404 Not Found** for implicit directories (directories that only exist because they have children but no explicit marker object). This allows s3fs to fall back to LIST operations to detect the directory.

## The Solution

### Implementation

Modified the S3 API HEAD handler in `weed/s3api/s3api_object_handlers.go` to:

1. **Check if object ends with `/`**: Explicit directory markers return 200 as before
2. **Check if object has children**: If a 0-byte object has children in the filer, treat it as an implicit directory
3. **Return 404 for implicit directories**: This matches AWS S3 behavior and triggers s3fs's LIST fallback

### Code Changes

The fix is implemented in the `HeadObjectHandler` function with logic to:
- Detect implicit directories by checking for child entries
- Return 404 (NoSuchKey) for implicit directories
- Preserve existing behavior for explicit directory markers and regular files
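The decision flow, as an illustrative sketch (not the verbatim handler; `hasChildren` stands in for the filer LIST check with `Limit=1`):

```go
package main

import (
	"fmt"
	"strings"
)

// headStatus sketches the HEAD status decision described above.
func headStatus(key string, exists, zeroByte, isDir bool, hasChildren func(string) bool) int {
	if strings.HasSuffix(key, "/") {
		return 200 // explicit directory marker: unchanged behavior
	}
	if exists && (zeroByte || isDir) && hasChildren(key) {
		return 404 // implicit directory: force s3fs to fall back to LIST
	}
	if exists {
		return 200 // regular file or legitimate empty object/directory
	}
	return 404 // object does not exist
}

func main() {
	children := map[string]bool{"dataset": true}
	has := func(k string) bool { return children[k] }
	fmt.Println(headStatus("dataset", true, true, true, has))  // 404
	fmt.Println(headStatus("dataset/", true, true, true, has)) // 200
}
```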

## Performance Considerations

### Optimization: Child Check Cache
- Child existence checks are performed via filer LIST operations
- Results could be cached for frequently accessed paths
- Trade-off between consistency and performance

### Impact
- Minimal performance impact for normal file operations
- Slight overhead for HEAD requests on implicit directories (one additional LIST call)
- Overall improvement in PyArrow compatibility outweighs minor performance cost

## TODO

- [ ] Add detailed benchmarking results comparing before/after fix
- [ ] Document edge cases discovered during implementation
- [ ] Add architectural diagrams showing the request flow
- [ ] Document alternative solutions considered and why they were rejected
- [ ] Add performance profiling data for child existence checks
test/s3/parquet/MINIO_DIRECTORY_HANDLING.md
@@ -0,0 +1,70 @@
# MinIO Directory Handling Comparison

## Overview

This document compares how MinIO handles directory markers versus SeaweedFS's implementation, and explains the different approaches to S3 directory semantics.

## MinIO's Approach

MinIO handles implicit directories similarly to AWS S3:

1. **No explicit directory objects**: Directories are implicit, defined only by object key prefixes
2. **HEAD on directory returns 404**: Consistent with AWS S3 behavior
3. **LIST operations reveal directories**: Directories are discovered through delimiter-based LIST operations
4. **Automatic prefix handling**: MinIO automatically recognizes prefixes as directories

### MinIO Implementation Details

- Uses in-memory metadata for fast prefix lookups
- Optimized for LIST operations with common delimiter (`/`)
- No persistent directory objects in storage layer
- Directories "exist" as long as they contain objects

## SeaweedFS Approach

SeaweedFS uses a filer-based approach with real directory entries:

### Before the Fix

1. **Explicit directory objects**: Could create 0-byte objects as directory markers
2. **HEAD returns 200**: Even for implicit directories
3. **Caused s3fs issues**: s3fs interpreted 0-byte HEAD responses as empty files

### After the Fix

1. **Hybrid approach**: Supports both explicit markers (with `/` suffix) and implicit directories
2. **HEAD returns 404 for implicit directories**: Matches AWS S3 and MinIO behavior
3. **Filer integration**: Uses filer's directory metadata to detect implicit directories
4. **s3fs compatibility**: Triggers proper LIST fallback behavior

## Key Differences

| Aspect | MinIO | SeaweedFS (After Fix) |
|--------|-------|-----------------------|
| Directory Storage | No persistent objects | Filer directory entries |
| Implicit Directory HEAD | 404 Not Found | 404 Not Found |
| Explicit Marker HEAD | Not applicable | 200 OK (with `/` suffix) |
| Child Detection | Prefix scan | Filer LIST operation |
| Performance | In-memory lookups | Filer gRPC calls |

## Implementation Considerations

### Advantages of SeaweedFS Approach
- Integrates with existing filer metadata
- Supports both implicit and explicit directories
- Preserves directory metadata and attributes
- Compatible with POSIX filer semantics

### Trade-offs
- Additional filer communication overhead for HEAD requests
- Complexity of supporting both directory paradigms
- Performance depends on filer efficiency

## TODO

- [ ] Add performance benchmark comparison: MinIO vs SeaweedFS
- [ ] Document edge cases where behaviors differ
- [ ] Add example request/response traces for both systems
- [ ] Document migration path for users moving from MinIO to SeaweedFS
- [ ] Add compatibility matrix for different S3 clients
test/s3/parquet/Makefile
@@ -0,0 +1,365 @@
# Makefile for S3 Parquet Integration Tests
# This Makefile provides targets for running comprehensive S3 Parquet tests with PyArrow

# Default values
SEAWEEDFS_BINARY ?= weed
S3_PORT ?= 8333
FILER_PORT ?= 8888
VOLUME_PORT ?= 8080
MASTER_PORT ?= 9333
TEST_TIMEOUT ?= 15m
ACCESS_KEY ?= some_access_key1
SECRET_KEY ?= some_secret_key1
VOLUME_MAX_SIZE_MB ?= 50
VOLUME_MAX_COUNT ?= 100
BUCKET_NAME ?= test-parquet-bucket

# Python configuration
PYTHON ?= python3
VENV_DIR ?= .venv
PYTHON_TEST_SCRIPT ?= s3_parquet_test.py

# Test directory
TEST_DIR := $(shell pwd)
SEAWEEDFS_ROOT := $(shell cd ../../../ && pwd)

# Colors for output
RED := \033[0;31m
GREEN := \033[0;32m
YELLOW := \033[1;33m
NC := \033[0m # No Color

.PHONY: all build-weed check-binary check-python ci-test clean debug-logs debug-status help manual-start manual-stop setup-python start-seaweedfs start-seaweedfs-ci stop-seaweedfs stop-seaweedfs-safe test test-implicit-dir test-implicit-dir-with-server test-quick test-with-server

all: test

# Build SeaweedFS binary (GitHub Actions compatible)
build-weed:
	@echo "Building SeaweedFS binary..."
	@cd $(SEAWEEDFS_ROOT)/weed && go install -buildvcs=false
	@echo "✅ SeaweedFS binary built successfully"

help:
	@echo "SeaweedFS S3 Parquet Integration Tests"
	@echo ""
	@echo "Available targets:"
	@echo "  test - Run full S3 Parquet integration tests (small and large files)"
	@echo "  test-with-server - Run full tests with automatic server management (CI compatible)"
	@echo "  test-quick - Run quick tests with small files only (sets TEST_QUICK=1)"
	@echo "  test-implicit-dir - Test implicit directory fix for s3fs compatibility"
	@echo "  test-implicit-dir-with-server - Test implicit directory fix with server management"
	@echo "  setup-python - Setup Python virtual environment and install dependencies"
	@echo "  check-python - Check if Python and required packages are available"
	@echo "  start-seaweedfs - Start SeaweedFS server for testing"
	@echo "  start-seaweedfs-ci - Start SeaweedFS server (CI-safe version)"
	@echo "  stop-seaweedfs - Stop SeaweedFS server"
	@echo "  stop-seaweedfs-safe - Stop SeaweedFS server (CI-safe version)"
	@echo "  clean - Clean up test artifacts"
	@echo "  check-binary - Check if SeaweedFS binary exists"
	@echo "  build-weed - Build SeaweedFS binary"
	@echo ""
	@echo "Configuration:"
	@echo "  SEAWEEDFS_BINARY=$(SEAWEEDFS_BINARY)"
	@echo "  S3_PORT=$(S3_PORT)"
	@echo "  FILER_PORT=$(FILER_PORT)"
	@echo "  VOLUME_PORT=$(VOLUME_PORT)"
	@echo "  MASTER_PORT=$(MASTER_PORT)"
	@echo "  BUCKET_NAME=$(BUCKET_NAME)"
	@echo "  VOLUME_MAX_SIZE_MB=$(VOLUME_MAX_SIZE_MB)"
	@echo "  PYTHON=$(PYTHON)"

check-binary:
	@if ! command -v $(SEAWEEDFS_BINARY) > /dev/null 2>&1; then \
		echo "$(RED)Error: SeaweedFS binary '$(SEAWEEDFS_BINARY)' not found in PATH$(NC)"; \
		echo "Please build SeaweedFS first by running 'make' in the root directory"; \
		exit 1; \
	fi
	@echo "$(GREEN)SeaweedFS binary found: $$(which $(SEAWEEDFS_BINARY))$(NC)"

check-python:
	@if ! command -v $(PYTHON) > /dev/null 2>&1; then \
		echo "$(RED)Error: Python '$(PYTHON)' not found$(NC)"; \
		echo "Please install Python 3.8 or later"; \
		exit 1; \
	fi
	@echo "$(GREEN)Python found: $$(which $(PYTHON)) ($$($(PYTHON) --version))$(NC)"

setup-python: check-python
	@echo "$(YELLOW)Setting up Python virtual environment...$(NC)"
	@if [ ! -d "$(VENV_DIR)" ]; then \
		$(PYTHON) -m venv $(VENV_DIR); \
		echo "$(GREEN)Virtual environment created$(NC)"; \
	fi
	@echo "$(YELLOW)Installing Python dependencies...$(NC)"
	@$(VENV_DIR)/bin/pip install --upgrade pip > /dev/null
	@$(VENV_DIR)/bin/pip install -r requirements.txt
	@echo "$(GREEN)Python dependencies installed successfully$(NC)"

start-seaweedfs-ci: check-binary
	@echo "$(YELLOW)Starting SeaweedFS server for Parquet testing...$(NC)"

	# Clean up any existing processes first (CI-safe)
	@echo "Cleaning up any existing processes..."
	@if command -v lsof >/dev/null 2>&1; then \
		lsof -ti :$(MASTER_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$(VOLUME_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$(FILER_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$(S3_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$$(( $(MASTER_PORT) + 10000 )) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$$(( $(VOLUME_PORT) + 10000 )) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$$(( $(FILER_PORT) + 10000 )) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
	fi
	@sleep 2

	# Create necessary directories
	@mkdir -p /tmp/seaweedfs-test-parquet-master
	@mkdir -p /tmp/seaweedfs-test-parquet-volume
	@mkdir -p /tmp/seaweedfs-test-parquet-filer

	# Clean up any old server logs
	@rm -f /tmp/seaweedfs-parquet-*.log || true

	# Start master server with volume size limit and explicit gRPC port
	@echo "Starting master server..."
	@nohup $(SEAWEEDFS_BINARY) master -port=$(MASTER_PORT) -port.grpc=$$(( $(MASTER_PORT) + 10000 )) -mdir=/tmp/seaweedfs-test-parquet-master -volumeSizeLimitMB=$(VOLUME_MAX_SIZE_MB) -ip=127.0.0.1 -peers=none > /tmp/seaweedfs-parquet-master.log 2>&1 &
	@sleep 3

	# Start volume server with master HTTP port and increased capacity
	@echo "Starting volume server..."
	@nohup $(SEAWEEDFS_BINARY) volume -port=$(VOLUME_PORT) -mserver=127.0.0.1:$(MASTER_PORT) -dir=/tmp/seaweedfs-test-parquet-volume -max=$(VOLUME_MAX_COUNT) -ip=127.0.0.1 -preStopSeconds=1 > /tmp/seaweedfs-parquet-volume.log 2>&1 &
	@sleep 5

	# Start filer server with embedded S3
	@echo "Starting filer server with embedded S3..."
	@printf '{"identities":[{"name":"%s","credentials":[{"accessKey":"%s","secretKey":"%s"}],"actions":["Admin","Read","Write"]}]}' "$(ACCESS_KEY)" "$(ACCESS_KEY)" "$(SECRET_KEY)" > /tmp/seaweedfs-parquet-s3.json
	@AWS_ACCESS_KEY_ID=$(ACCESS_KEY) AWS_SECRET_ACCESS_KEY=$(SECRET_KEY) nohup $(SEAWEEDFS_BINARY) filer -port=$(FILER_PORT) -port.grpc=$$(( $(FILER_PORT) + 10000 )) -master=127.0.0.1:$(MASTER_PORT) -dataCenter=defaultDataCenter -ip=127.0.0.1 -s3 -s3.port=$(S3_PORT) -s3.config=/tmp/seaweedfs-parquet-s3.json > /tmp/seaweedfs-parquet-filer.log 2>&1 &
	@sleep 5

	# Wait for S3 service to be ready - use port-based checking for reliability
	@echo "$(YELLOW)Waiting for S3 service to be ready...$(NC)"
	@for i in $$(seq 1 20); do \
		if netstat -an 2>/dev/null | grep -q ":$(S3_PORT).*LISTEN" || \
		   ss -an 2>/dev/null | grep -q ":$(S3_PORT).*LISTEN" || \
		   lsof -i :$(S3_PORT) >/dev/null 2>&1; then \
			echo "$(GREEN)S3 service is listening on port $(S3_PORT)$(NC)"; \
			sleep 1; \
			break; \
		fi; \
		if [ $$i -eq 20 ]; then \
			echo "$(RED)S3 service failed to start within 20 seconds$(NC)"; \
			echo "=== Detailed Logs ==="; \
			echo "Master log:"; tail -30 /tmp/seaweedfs-parquet-master.log || true; \
			echo "Volume log:"; tail -30 /tmp/seaweedfs-parquet-volume.log || true; \
			echo "Filer log:"; tail -30 /tmp/seaweedfs-parquet-filer.log || true; \
			echo "=== Port Status ==="; \
			netstat -an 2>/dev/null | grep ":$(S3_PORT)" || \
			ss -an 2>/dev/null | grep ":$(S3_PORT)" || \
			echo "No port listening on $(S3_PORT)"; \
			exit 1; \
		fi; \
		echo "Waiting for S3 service... ($$i/20)"; \
		sleep 1; \
	done

	# Additional wait for filer gRPC to be ready
	@echo "$(YELLOW)Waiting for filer gRPC to be ready...$(NC)"
	@sleep 2

	# Wait for volume server to register with master and ensure volume assignment works
	@echo "$(YELLOW)Waiting for volume assignment to be ready...$(NC)"
	@for i in $$(seq 1 30); do \
		ASSIGN_RESULT=$$(curl -s "http://localhost:$(MASTER_PORT)/dir/assign?count=1" 2>/dev/null); \
		if echo "$$ASSIGN_RESULT" | grep -q '"fid"'; then \
			echo "$(GREEN)Volume assignment is ready$(NC)"; \
			break; \
		fi; \
		if [ $$i -eq 30 ]; then \
			echo "$(RED)Volume assignment not ready after 30 seconds$(NC)"; \
			echo "=== Last assign attempt ==="; \
			echo "$$ASSIGN_RESULT"; \
			echo "=== Master Status ==="; \
			curl -s "http://localhost:$(MASTER_PORT)/dir/status" 2>/dev/null || echo "Failed to get master status"; \
			echo "=== Master Logs ==="; \
			tail -50 /tmp/seaweedfs-parquet-master.log 2>/dev/null || echo "No master log"; \
			echo "=== Volume Logs ==="; \
			tail -50 /tmp/seaweedfs-parquet-volume.log 2>/dev/null || echo "No volume log"; \
			exit 1; \
		fi; \
		echo "Waiting for volume assignment... ($$i/30)"; \
		sleep 1; \
	done

	@echo "$(GREEN)SeaweedFS server started successfully for Parquet testing$(NC)"
	@echo "Master: http://localhost:$(MASTER_PORT)"
	@echo "Volume: http://localhost:$(VOLUME_PORT)"
	@echo "Filer: http://localhost:$(FILER_PORT)"
	@echo "S3: http://localhost:$(S3_PORT)"
	@echo "Volume Max Size: $(VOLUME_MAX_SIZE_MB)MB"

start-seaweedfs: check-binary
	@echo "$(YELLOW)Starting SeaweedFS server for Parquet testing...$(NC)"
	@# Use port-based cleanup for consistency and safety
	@echo "Cleaning up any existing processes..."
	@lsof -ti :$(MASTER_PORT) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$(VOLUME_PORT) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$(FILER_PORT) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$(S3_PORT) 2>/dev/null | xargs -r kill -TERM || true
	@# Clean up gRPC ports (HTTP port + 10000)
	@lsof -ti :$$(( $(MASTER_PORT) + 10000 )) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$$(( $(VOLUME_PORT) + 10000 )) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$$(( $(FILER_PORT) + 10000 )) 2>/dev/null | xargs -r kill -TERM || true
	@sleep 2
	@$(MAKE) start-seaweedfs-ci

stop-seaweedfs:
	@echo "$(YELLOW)Stopping SeaweedFS server...$(NC)"
	@# Use port-based cleanup for consistency and safety
	@lsof -ti :$(MASTER_PORT) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$(VOLUME_PORT) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$(FILER_PORT) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$(S3_PORT) 2>/dev/null | xargs -r kill -TERM || true
	@# Clean up gRPC ports (HTTP port + 10000)
	@lsof -ti :$$(( $(MASTER_PORT) + 10000 )) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$$(( $(VOLUME_PORT) + 10000 )) 2>/dev/null | xargs -r kill -TERM || true
	@lsof -ti :$$(( $(FILER_PORT) + 10000 )) 2>/dev/null | xargs -r kill -TERM || true
	@sleep 2
	@echo "$(GREEN)SeaweedFS server stopped$(NC)"

# CI-safe server stop that's more conservative
stop-seaweedfs-safe:
	@echo "$(YELLOW)Safely stopping SeaweedFS server...$(NC)"
	@# Use port-based cleanup which is safer in CI
	@if command -v lsof >/dev/null 2>&1; then \
		echo "Using lsof for port-based cleanup..."; \
		lsof -ti :$(MASTER_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$(VOLUME_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$(FILER_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$(S3_PORT) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$$(( $(MASTER_PORT) + 10000 )) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$$(( $(VOLUME_PORT) + 10000 )) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
		lsof -ti :$$(( $(FILER_PORT) + 10000 )) 2>/dev/null | head -5 | while read pid; do kill -TERM $$pid 2>/dev/null || true; done; \
	else \
		echo "lsof not available, using netstat approach..."; \
		netstat -tlnp 2>/dev/null | grep :$(MASTER_PORT) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
		netstat -tlnp 2>/dev/null | grep :$(VOLUME_PORT) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
		netstat -tlnp 2>/dev/null | grep :$(FILER_PORT) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
		netstat -tlnp 2>/dev/null | grep :$(S3_PORT) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
		netstat -tlnp 2>/dev/null | grep :$$(( $(MASTER_PORT) + 10000 )) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
		netstat -tlnp 2>/dev/null | grep :$$(( $(VOLUME_PORT) + 10000 )) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
		netstat -tlnp 2>/dev/null | grep :$$(( $(FILER_PORT) + 10000 )) | awk '{print $$7}' | cut -d/ -f1 | head -5 | while read pid; do [ "$$pid" != "-" ] && kill -TERM $$pid 2>/dev/null || true; done; \
	fi
	@sleep 2
	@echo "$(GREEN)SeaweedFS server safely stopped$(NC)"

clean:
	@echo "$(YELLOW)Cleaning up Parquet test artifacts...$(NC)"
	@rm -rf /tmp/seaweedfs-test-parquet-*
	@rm -f /tmp/seaweedfs-parquet-*.log
	@rm -f /tmp/seaweedfs-parquet-s3.json
	@rm -f s3_parquet_test_errors_*.log
	@rm -rf $(VENV_DIR)
	@echo "$(GREEN)Parquet test cleanup completed$(NC)"

# Test with automatic server management (GitHub Actions compatible)
test-with-server: build-weed setup-python
	@echo "🚀 Starting Parquet integration tests with automated server management..."
	@echo "Starting SeaweedFS cluster..."
	@if $(MAKE) start-seaweedfs-ci > weed-test.log 2>&1; then \
		echo "✅ SeaweedFS cluster started successfully"; \
		echo "Running Parquet integration tests..."; \
		trap '$(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe || true' EXIT; \
		S3_ENDPOINT_URL=http://localhost:$(S3_PORT) \
		S3_ACCESS_KEY=$(ACCESS_KEY) \
		S3_SECRET_KEY=$(SECRET_KEY) \
		BUCKET_NAME=$(BUCKET_NAME) \
		$(VENV_DIR)/bin/$(PYTHON) $(PYTHON_TEST_SCRIPT) || exit 1; \
		echo "✅ All tests completed successfully"; \
		$(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe || true; \
	else \
		echo "❌ Failed to start SeaweedFS cluster"; \
		echo "=== Server startup logs ==="; \
		tail -100 weed-test.log 2>/dev/null || echo "No startup log available"; \
		echo "=== System information ==="; \
		ps aux | grep -E "weed|make" | grep -v grep || echo "No relevant processes found"; \
		exit 1; \
	fi

# Run tests assuming SeaweedFS is already running
test: setup-python
	@echo "$(YELLOW)Running Parquet integration tests...$(NC)"
	@echo "$(YELLOW)Assuming SeaweedFS is already running on localhost:$(S3_PORT)$(NC)"
	@S3_ENDPOINT_URL=http://localhost:$(S3_PORT) \
	S3_ACCESS_KEY=$(ACCESS_KEY) \
	S3_SECRET_KEY=$(SECRET_KEY) \
	BUCKET_NAME=$(BUCKET_NAME) \
	$(VENV_DIR)/bin/$(PYTHON) $(PYTHON_TEST_SCRIPT)

# Run quick tests with small files only
test-quick: setup-python
	@echo "$(YELLOW)Running quick Parquet tests (small files only)...$(NC)"
	@echo "$(YELLOW)Assuming SeaweedFS is already running on localhost:$(S3_PORT)$(NC)"
	@S3_ENDPOINT_URL=http://localhost:$(S3_PORT) \
	S3_ACCESS_KEY=$(ACCESS_KEY) \
	S3_SECRET_KEY=$(SECRET_KEY) \
	BUCKET_NAME=$(BUCKET_NAME) \
	TEST_QUICK=1 \
	$(VENV_DIR)/bin/$(PYTHON) $(PYTHON_TEST_SCRIPT)

# Test implicit directory fix for s3fs compatibility
test-implicit-dir: setup-python
	@echo "$(YELLOW)Running implicit directory fix tests...$(NC)"
	@echo "$(YELLOW)Assuming SeaweedFS is already running on localhost:$(S3_PORT)$(NC)"
	@S3_ENDPOINT_URL=http://localhost:$(S3_PORT) \
	S3_ACCESS_KEY=$(ACCESS_KEY) \
	S3_SECRET_KEY=$(SECRET_KEY) \
	BUCKET_NAME=test-implicit-dir \
	$(VENV_DIR)/bin/$(PYTHON) test_implicit_directory_fix.py

# Test implicit directory fix with automatic server management
test-implicit-dir-with-server: build-weed setup-python
	@echo "🚀 Starting implicit directory fix tests with automated server management..."
	@echo "Starting SeaweedFS cluster..."
	@if $(MAKE) start-seaweedfs-ci > weed-test.log 2>&1; then \
		echo "✅ SeaweedFS cluster started successfully"; \
		echo "Running implicit directory fix tests..."; \
		trap '$(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe || true' EXIT; \
		S3_ENDPOINT_URL=http://localhost:$(S3_PORT) \
		S3_ACCESS_KEY=$(ACCESS_KEY) \
		S3_SECRET_KEY=$(SECRET_KEY) \
		BUCKET_NAME=test-implicit-dir \
		$(VENV_DIR)/bin/$(PYTHON) test_implicit_directory_fix.py || exit 1; \
		echo "✅ All tests completed successfully"; \
		$(MAKE) -C $(TEST_DIR) stop-seaweedfs-safe || true; \
	else \
		echo "❌ Failed to start SeaweedFS cluster"; \
		echo "=== Server startup logs ==="; \
		tail -100 weed-test.log 2>/dev/null || echo "No startup log available"; \
		exit 1; \
	fi

# Debug targets
debug-logs:
	@echo "$(YELLOW)=== Master Log ===$(NC)"
	@tail -n 50 /tmp/seaweedfs-parquet-master.log || echo "No master log found"
	@echo "$(YELLOW)=== Volume Log ===$(NC)"
	@tail -n 50 /tmp/seaweedfs-parquet-volume.log || echo "No volume log found"
	@echo "$(YELLOW)=== Filer Log ===$(NC)"
	@tail -n 50 /tmp/seaweedfs-parquet-filer.log || echo "No filer log found"

debug-status:
	@echo "$(YELLOW)=== Process Status ===$(NC)"
	@ps aux | grep -E "(weed|seaweedfs)" | grep -v grep || echo "No SeaweedFS processes found"
	@echo "$(YELLOW)=== Port Status ===$(NC)"
	@netstat -an | grep -E "($(MASTER_PORT)|$(VOLUME_PORT)|$(FILER_PORT)|$(S3_PORT))" || echo "No ports in use"

# Manual test targets for development
manual-start: start-seaweedfs
	@echo "$(GREEN)SeaweedFS with S3 is now running for manual testing$(NC)"
	@echo "You can now run Parquet tests manually"
	@echo "Run 'make manual-stop' when finished"

manual-stop: stop-seaweedfs clean

# CI/CD targets
ci-test: test-with-server
test/s3/parquet/README.md
@@ -0,0 +1,206 @@
# PyArrow Parquet S3 Compatibility Tests

This directory contains tests for PyArrow Parquet compatibility with the SeaweedFS S3 API, including the implicit directory detection fix.

## Overview

**Status**: ✅ **All PyArrow methods work correctly with SeaweedFS**

SeaweedFS implements implicit directory detection to improve compatibility with s3fs and PyArrow. When PyArrow writes datasets using `write_dataset()`, it may create directory markers that can confuse s3fs. SeaweedFS now handles these correctly by returning 404 for HEAD requests on implicit directories (directories with children), forcing s3fs to use LIST-based discovery.

## Quick Start

### Running Tests

```bash
# Setup Python environment
make setup-python

# Run all tests with server (small and large files)
make test-with-server

# Run quick tests with small files only (faster for development)
make test-quick

# Run implicit directory fix tests
make test-implicit-dir-with-server

# Clean up
make clean
```

### Using PyArrow with SeaweedFS

```python
import pyarrow as pa
import pyarrow.parquet as pq
import pyarrow.dataset as pads
import s3fs

# Configure s3fs
fs = s3fs.S3FileSystem(
    key='your_access_key',
    secret='your_secret_key',
    endpoint_url='http://localhost:8333',
    use_ssl=False
)

# Write dataset (creates directory structure)
table = pa.table({'id': [1, 2, 3], 'value': ['a', 'b', 'c']})
pads.write_dataset(table, 'bucket/dataset', filesystem=fs)

# Read dataset (all methods work!)
dataset = pads.dataset('bucket/dataset', filesystem=fs)       # ✅
table = pq.read_table('bucket/dataset', filesystem=fs)        # ✅
dataset = pq.ParquetDataset('bucket/dataset', filesystem=fs)  # ✅
```

## Test Files

### Main Test Suite
- **`s3_parquet_test.py`** - Comprehensive PyArrow test suite
  - Tests 2 write methods × 5 read methods × 2 dataset sizes = 20 combinations
  - All tests pass with the implicit directory fix ✅

### Implicit Directory Tests
- **`test_implicit_directory_fix.py`** - Specific tests for the implicit directory fix
  - Tests HEAD request behavior
  - Tests s3fs directory detection
  - Tests PyArrow dataset reading
  - All 6 tests pass ✅

### Configuration
- **`Makefile`** - Build and test automation
- **`requirements.txt`** - Python dependencies (pyarrow, s3fs, boto3)
- **`.gitignore`** - Ignore patterns for test artifacts

## Documentation

### Technical Documentation
- **`TEST_COVERAGE.md`** - Comprehensive test coverage documentation
  - Unit tests (Go): 17 test cases
  - Integration tests (Python): 6 test cases
  - End-to-end tests (Python): 20 test cases

- **`FINAL_ROOT_CAUSE_ANALYSIS.md`** - Deep technical analysis
  - Root cause of the s3fs compatibility issue
  - How the implicit directory fix works
  - Performance considerations

- **`MINIO_DIRECTORY_HANDLING.md`** - Comparison with MinIO
  - How MinIO handles directory markers
  - Differences in implementation approaches

## The Implicit Directory Fix

### Problem
When PyArrow writes datasets with `write_dataset()`, it may create 0-byte directory markers. s3fs's `info()` method calls HEAD on these paths, and if HEAD returns 200 with size=0, s3fs incorrectly reports them as files instead of directories. This causes PyArrow to fail with "Parquet file size is 0 bytes".

### Solution
SeaweedFS now returns 404 for HEAD requests on implicit directories (0-byte objects or directories with children, when requested without a trailing slash). This forces s3fs to fall back to LIST-based discovery, which correctly identifies directories by checking for children.

### Implementation
The fix is implemented in `weed/s3api/s3api_object_handlers.go`:
- `HeadObjectHandler` - Returns 404 for implicit directories
- `hasChildren` - Helper function to check if a path has children

See the source code for detailed inline documentation.

### Test Coverage
- **Unit tests** (Go): `weed/s3api/s3api_implicit_directory_test.go`
  - Run: `cd weed/s3api && go test -v -run TestImplicitDirectory`

- **Integration tests** (Python): `test_implicit_directory_fix.py`
  - Run: `cd test/s3/parquet && make test-implicit-dir-with-server`

- **End-to-end tests** (Python): `s3_parquet_test.py`
  - Run: `cd test/s3/parquet && make test-with-server`

## Makefile Targets

```bash
# Setup
make setup-python  # Create Python virtual environment and install dependencies
make build-weed    # Build SeaweedFS binary

# Testing
make test                           # Run full tests (assumes server is already running)
make test-with-server               # Run full PyArrow test suite with server (small + large files)
make test-quick                     # Run quick tests with small files only (assumes server is running)
make test-implicit-dir-with-server  # Run implicit directory tests with server

# Server Management
make start-seaweedfs-ci   # Start SeaweedFS in background (CI mode)
make stop-seaweedfs-safe  # Stop SeaweedFS gracefully
make clean                # Clean up all test artifacts

# Development
make help  # Show all available targets
```

## Continuous Integration

The tests are automatically run in GitHub Actions on every push/PR that affects S3 or filer code:

**Workflow**: `.github/workflows/s3-parquet-tests.yml`

**Test Matrix**:
- Python versions: 3.9, 3.11, 3.12
- PyArrow integration tests: 20 test combinations
- Implicit directory fix tests: 6 test scenarios
- Go unit tests: 17 test cases

**Triggers**:
- Push/PR to master (when `weed/s3api/**` or `weed/filer/**` changes)
- Manual trigger via GitHub UI (workflow_dispatch)

## Requirements

- Python 3.8+
- PyArrow 22.0.0+
- s3fs 2024.12.0+
- boto3 1.40.0+
- SeaweedFS (latest)

## AWS S3 Compatibility

The implicit directory fix makes SeaweedFS behavior more compatible with AWS S3:
- AWS S3 typically doesn't create directory markers for implicit directories
- HEAD on "dataset" (when only "dataset/file.txt" exists) returns 404 on AWS
- SeaweedFS now matches this behavior for implicit directories with children

## Edge Cases Handled

✅ **Implicit directories with children** → 404 (forces LIST-based discovery)
✅ **Empty files (0-byte, no children)** → 200 (legitimate empty file)
✅ **Empty directories (no children)** → 200 (legitimate empty directory)
✅ **Explicit directory requests (trailing slash)** → 200 (normal directory behavior)
✅ **Versioned buckets** → Skip implicit directory check (versioned semantics)
✅ **Regular files** → 200 (normal file behavior)

## Performance

The implicit directory check adds minimal overhead:
- Only triggered for 0-byte objects or directories without trailing slash
- Cost: One LIST operation with Limit=1 (~1-5ms)
- No impact on regular file operations

## Contributing

When adding new tests:
1. Add test cases to the appropriate test file
2. Update TEST_COVERAGE.md
3. Run the full test suite to ensure no regressions
4. Update this README if adding new functionality

## References

- [PyArrow Documentation](https://arrow.apache.org/docs/python/parquet.html)
- [s3fs Documentation](https://s3fs.readthedocs.io/)
- [SeaweedFS S3 API](https://github.com/seaweedfs/seaweedfs/wiki/Amazon-S3-API)
- [AWS S3 API Reference](https://docs.aws.amazon.com/AmazonS3/latest/API/)

---

**Last Updated**: November 19, 2025
**Status**: All tests passing ✅
@ -0,0 +1,46 @@ |
|||
# Test Coverage Documentation |
|||
|
|||
## Overview |
|||
|
|||
This document provides comprehensive test coverage documentation for the SeaweedFS S3 Parquet integration tests. |
|||
|
|||
## Test Categories |
|||
|
|||
### Unit Tests (Go) |
|||
- 17 test cases covering S3 API handlers |
|||
- Tests for implicit directory handling |
|||
- HEAD request behavior validation |
|||
- Located in: `weed/s3api/s3api_implicit_directory_test.go` |
|||
|
|||
### Integration Tests (Python) |
|||
- 6 test cases for implicit directory fix |
|||
- Tests HEAD request behavior on directory markers |
|||
- s3fs directory detection validation |
|||
- PyArrow dataset read compatibility |
|||
- Located in: `test_implicit_directory_fix.py` |
|||
|
|||
### End-to-End Tests (Python) |
|||
- 20 test cases combining write and read methods |
|||
- Small file tests (5 rows): 10 test combinations |
|||
- Large file tests (200,000 rows): 10 test combinations |
|||
- Tests multiple write methods: `pads.write_dataset`, `pq.write_table+s3fs` |
|||
- Tests multiple read methods: `pads.dataset`, `pq.ParquetDataset`, `pq.read_table`, `s3fs+direct`, `s3fs+buffered` |
|||
- Located in: `s3_parquet_test.py` (one combination is sketched below) |
|||
|
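As an illustration, here is a minimal sketch of one combination (write with `pads.write_dataset`, read back with `pq.read_table`), assuming the same local endpoint and test credentials; the bucket path is a placeholder:

```python
import pyarrow as pa
import pyarrow.dataset as pads
import pyarrow.parquet as pq
import s3fs

fs = s3fs.S3FileSystem(
    key="some_access_key1",
    secret="some_secret_key1",
    client_kwargs={"endpoint_url": "http://localhost:8333"},
)

table = pa.table({
    "id": list(range(5)),
    "value": [float(i) * 1.5 for i in range(5)],
})

# Write a small dataset directory, then read the whole directory back in one call.
pads.write_dataset(table, "demo-bucket/combo-demo", format="parquet", filesystem=fs)
roundtrip = pq.read_table("demo-bucket/combo-demo", filesystem=fs)
assert roundtrip.num_rows == table.num_rows
```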
|||
## Coverage Summary |
|||
|
|||
| Test Type | Count | Status | |
|||
|-----------|-------|--------| |
|||
| Unit Tests (Go) | 17 | ✅ Pass | |
|||
| Integration Tests (Python) | 6 | ✅ Pass | |
|||
| End-to-End Tests (Python) | 20 | ✅ Pass | |
|||
| **Total** | **43** | **✅ All Pass** | |
|||
|
|||
## TODO |
|||
|
|||
- [ ] Add detailed test execution time metrics |
|||
- [ ] Document test data generation strategies |
|||
- [ ] Add code coverage percentages for Go tests |
|||
- [ ] Document edge cases and corner cases tested |
|||
- [ ] Add performance benchmarking results |
|||
|
|||
@@ -0,0 +1,7 @@ |
|||
# Python dependencies for S3 Parquet tests |
|||
# Install with: pip install -r requirements.txt |
|||
|
|||
pyarrow>=10.0.0 |
|||
s3fs>=2023.12.0 |
|||
boto3>=1.28.0 |
|||
|
|||
@@ -0,0 +1,421 @@ |
|||
#!/usr/bin/env python3 |
|||
""" |
|||
Test script for S3-compatible storage with PyArrow Parquet files. |
|||
|
|||
This script tests different write methods (PyArrow write_dataset vs. pq.write_table to a buffer) |
|||
combined with different read methods (pads.dataset, pq.ParquetDataset, pq.read_table, direct s3fs read, buffered s3fs read) to |
|||
identify which combinations work with large files that span multiple row groups. |
|||
|
|||
This test specifically addresses issues with large tables using PyArrow where files span |
|||
multiple row-groups (default row_group size is around 130,000 rows). |
|||
|
|||
Requirements: |
|||
- pyarrow>=22 |
|||
- s3fs>=2024.12.0 |
|||
|
|||
Environment Variables: |
|||
S3_ENDPOINT_URL: S3 endpoint (default: http://localhost:8333) |
|||
S3_ACCESS_KEY: S3 access key (default: some_access_key1) |
|||
S3_SECRET_KEY: S3 secret key (default: some_secret_key1) |
|||
BUCKET_NAME: S3 bucket name (default: test-parquet-bucket) |
|||
TEST_QUICK: Run only small/quick tests (default: 0, set to 1 for quick mode) |
|||
|
|||
Usage: |
|||
# Run with default environment variables |
|||
python3 s3_parquet_test.py |
|||
|
|||
# Run with custom environment variables |
|||
S3_ENDPOINT_URL=http://localhost:8333 \ |
|||
S3_ACCESS_KEY=mykey \ |
|||
S3_SECRET_KEY=mysecret \ |
|||
BUCKET_NAME=mybucket \ |
|||
python3 s3_parquet_test.py |
|||
""" |
|||
|
|||
import io |
|||
import logging |
|||
import os |
|||
import secrets |
|||
import sys |
|||
import traceback |
|||
from datetime import datetime |
|||
from typing import Tuple |
|||
|
|||
import pyarrow as pa |
|||
import pyarrow.dataset as pads |
|||
import pyarrow.parquet as pq |
|||
|
|||
try: |
|||
import s3fs |
|||
except ImportError: |
|||
logging.error("s3fs not installed. Install with: pip install s3fs") |
|||
sys.exit(1) |
|||
|
|||
logging.basicConfig(level=logging.INFO, format="%(message)s") |
|||
|
|||
# Error log file |
|||
ERROR_LOG_FILE = f"s3_parquet_test_errors_{datetime.now().strftime('%Y%m%d_%H%M%S')}.log" |
|||
|
|||
# Configuration from environment variables with defaults |
|||
S3_ENDPOINT_URL = os.environ.get("S3_ENDPOINT_URL", "http://localhost:8333") |
|||
S3_ACCESS_KEY = os.environ.get("S3_ACCESS_KEY", "some_access_key1") |
|||
S3_SECRET_KEY = os.environ.get("S3_SECRET_KEY", "some_secret_key1") |
|||
BUCKET_NAME = os.getenv("BUCKET_NAME", "test-parquet-bucket") |
|||
TEST_QUICK = os.getenv("TEST_QUICK", "0") == "1" |
|||
|
|||
# Create randomized test directory |
|||
TEST_RUN_ID = secrets.token_hex(8) |
|||
TEST_DIR = f"{BUCKET_NAME}/parquet-tests/{TEST_RUN_ID}" |
|||
|
|||
# Test file sizes |
|||
TEST_SIZES = { |
|||
"small": 5, |
|||
"large": 200_000, # This will create multiple row groups |
|||
} |
|||
|
|||
# Filter to only small tests if quick mode is enabled |
|||
if TEST_QUICK: |
|||
TEST_SIZES = {"small": TEST_SIZES["small"]} |
|||
logging.info("Quick test mode enabled - running only small tests") |
|||
|
|||
|
|||
def create_sample_table(num_rows: int = 5) -> pa.Table: |
|||
"""Create a sample PyArrow table for testing.""" |
|||
return pa.table({ |
|||
"id": pa.array(range(num_rows), type=pa.int64()), |
|||
"name": pa.array([f"user_{i}" for i in range(num_rows)], type=pa.string()), |
|||
"value": pa.array([float(i) * 1.5 for i in range(num_rows)], type=pa.float64()), |
|||
"flag": pa.array([i % 2 == 0 for i in range(num_rows)], type=pa.bool_()), |
|||
}) |
|||
|
|||
|
|||
def log_error(operation: str, short_msg: str) -> None: |
|||
"""Log error details to file with full traceback.""" |
|||
with open(ERROR_LOG_FILE, "a") as f: |
|||
f.write(f"\n{'='*80}\n") |
|||
f.write(f"Operation: {operation}\n") |
|||
f.write(f"Time: {datetime.now().isoformat()}\n") |
|||
f.write(f"Message: {short_msg}\n") |
|||
f.write("Full Traceback:\n") |
|||
f.write(traceback.format_exc()) |
|||
f.write(f"{'='*80}\n") |
|||
|
|||
|
|||
def init_s3fs() -> s3fs.S3FileSystem: |
|||
"""Initialize and return S3FileSystem.""" |
|||
logging.info("Initializing S3FileSystem...") |
|||
logging.info(f" Endpoint: {S3_ENDPOINT_URL}") |
|||
logging.info(f" Bucket: {BUCKET_NAME}") |
|||
try: |
|||
fs = s3fs.S3FileSystem( |
|||
client_kwargs={"endpoint_url": S3_ENDPOINT_URL}, |
|||
key=S3_ACCESS_KEY, |
|||
secret=S3_SECRET_KEY, |
|||
use_listings_cache=False, |
|||
) |
|||
logging.info("✓ S3FileSystem initialized successfully\n") |
|||
return fs |
|||
except Exception: |
|||
logging.exception("✗ Failed to initialize S3FileSystem") |
|||
raise |
|||
|
|||
|
|||
def ensure_bucket_exists(fs: s3fs.S3FileSystem) -> None: |
|||
"""Ensure the test bucket exists.""" |
|||
try: |
|||
if not fs.exists(BUCKET_NAME): |
|||
logging.info(f"Creating bucket: {BUCKET_NAME}") |
|||
fs.mkdir(BUCKET_NAME) |
|||
logging.info(f"✓ Bucket created: {BUCKET_NAME}") |
|||
else: |
|||
logging.info(f"✓ Bucket exists: {BUCKET_NAME}") |
|||
except Exception: |
|||
logging.exception("✗ Failed to create/check bucket") |
|||
raise |
|||
|
|||
|
|||
# Write Methods |
|||
|
|||
def write_with_pads(table: pa.Table, path: str, fs: s3fs.S3FileSystem) -> Tuple[bool, str]: |
|||
"""Write using pads.write_dataset with filesystem parameter.""" |
|||
try: |
|||
pads.write_dataset(table, path, format="parquet", filesystem=fs) |
|||
return True, "pads.write_dataset" |
|||
except Exception as e: |
|||
error_msg = f"pads.write_dataset: {type(e).__name__}" |
|||
log_error("write_with_pads", error_msg) |
|||
return False, error_msg |
|||
|
|||
|
|||
def write_with_buffer_and_s3fs(table: pa.Table, path: str, fs: s3fs.S3FileSystem) -> Tuple[bool, str]: |
|||
"""Write using pq.write_table to buffer, then upload via s3fs.""" |
|||
try: |
|||
buffer = io.BytesIO() |
|||
pq.write_table(table, buffer) |
|||
buffer.seek(0) |
|||
with fs.open(path, "wb") as f: |
|||
f.write(buffer.read()) |
|||
return True, "pq.write_table+s3fs.open" |
|||
except Exception as e: |
|||
error_msg = f"pq.write_table+s3fs.open: {type(e).__name__}" |
|||
log_error("write_with_buffer_and_s3fs", error_msg) |
|||
return False, error_msg |
|||
|
|||
|
|||
# Read Methods |
|||
|
|||
def get_parquet_files(path: str, fs: s3fs.S3FileSystem) -> list: |
|||
""" |
|||
Helper to discover all parquet files for a given path. |
|||
|
|||
Args: |
|||
path: S3 path (file or directory) |
|||
fs: S3FileSystem instance |
|||
|
|||
Returns: |
|||
List of parquet file paths |
|||
|
|||
Raises: |
|||
ValueError: If no parquet files are found in a directory |
|||
""" |
|||
if fs.isdir(path): |
|||
# Find all parquet files in the directory |
|||
files = [f for f in fs.ls(path) if f.endswith('.parquet')] |
|||
if not files: |
|||
raise ValueError(f"No parquet files found in directory: {path}") |
|||
return files |
|||
else: |
|||
# Single file path |
|||
return [path] |
|||
|
|||
|
|||
def read_with_pads_dataset(path: str, fs: s3fs.S3FileSystem) -> Tuple[bool, str, int]: |
|||
"""Read using pads.dataset - handles both single files and directories.""" |
|||
try: |
|||
# pads.dataset() should auto-discover parquet files in the directory |
|||
dataset = pads.dataset(path, format="parquet", filesystem=fs) |
|||
result = dataset.to_table() |
|||
return True, "pads.dataset", result.num_rows |
|||
except Exception as e: |
|||
error_msg = f"pads.dataset: {type(e).__name__}" |
|||
log_error("read_with_pads_dataset", error_msg) |
|||
return False, error_msg, 0 |
|||
|
|||
|
|||
def read_direct_s3fs(path: str, fs: s3fs.S3FileSystem) -> Tuple[bool, str, int]: |
|||
"""Read directly via s3fs.open() streaming.""" |
|||
try: |
|||
# Get all parquet files (handles both single file and directory) |
|||
parquet_files = get_parquet_files(path, fs) |
|||
|
|||
# Read all parquet files and concatenate them |
|||
tables = [] |
|||
for file_path in parquet_files: |
|||
with fs.open(file_path, "rb") as f: |
|||
table = pq.read_table(f) |
|||
tables.append(table) |
|||
|
|||
# Concatenate all tables into one |
|||
if len(tables) == 1: |
|||
result = tables[0] |
|||
else: |
|||
result = pa.concat_tables(tables) |
|||
|
|||
return True, "s3fs.open+pq.read_table", result.num_rows |
|||
except Exception as e: |
|||
error_msg = f"s3fs.open+pq.read_table: {type(e).__name__}" |
|||
log_error("read_direct_s3fs", error_msg) |
|||
return False, error_msg, 0 |
|||
|
|||
|
|||
def read_buffered_s3fs(path: str, fs: s3fs.S3FileSystem) -> Tuple[bool, str, int]: |
|||
"""Read via s3fs.open() into buffer, then pq.read_table.""" |
|||
try: |
|||
# Get all parquet files (handles both single file and directory) |
|||
parquet_files = get_parquet_files(path, fs) |
|||
|
|||
# Read all parquet files and concatenate them |
|||
tables = [] |
|||
for file_path in parquet_files: |
|||
with fs.open(file_path, "rb") as f: |
|||
buffer = io.BytesIO(f.read()) |
|||
buffer.seek(0) |
|||
table = pq.read_table(buffer) |
|||
tables.append(table) |
|||
|
|||
# Concatenate all tables into one |
|||
if len(tables) == 1: |
|||
result = tables[0] |
|||
else: |
|||
result = pa.concat_tables(tables) |
|||
|
|||
return True, "s3fs.open+BytesIO+pq.read_table", result.num_rows |
|||
except Exception as e: |
|||
error_msg = f"s3fs.open+BytesIO+pq.read_table: {type(e).__name__}" |
|||
log_error("read_buffered_s3fs", error_msg) |
|||
return False, error_msg, 0 |
|||
|
|||
|
|||
def read_with_parquet_dataset(path: str, fs: s3fs.S3FileSystem) -> Tuple[bool, str, int]: |
|||
"""Read using pq.ParquetDataset - designed for directories.""" |
|||
try: |
|||
# ParquetDataset is specifically designed to handle directories |
|||
dataset = pq.ParquetDataset(path, filesystem=fs) |
|||
result = dataset.read() |
|||
return True, "pq.ParquetDataset", result.num_rows |
|||
except Exception as e: |
|||
error_msg = f"pq.ParquetDataset: {type(e).__name__}" |
|||
log_error("read_with_parquet_dataset", error_msg) |
|||
return False, error_msg, 0 |
|||
|
|||
|
|||
def read_with_pq_read_table(path: str, fs: s3fs.S3FileSystem) -> Tuple[bool, str, int]: |
|||
"""Read using pq.read_table with filesystem parameter.""" |
|||
try: |
|||
# pq.read_table() with filesystem should handle directories |
|||
result = pq.read_table(path, filesystem=fs) |
|||
return True, "pq.read_table+filesystem", result.num_rows |
|||
except Exception as e: |
|||
error_msg = f"pq.read_table+filesystem: {type(e).__name__}" |
|||
log_error("read_with_pq_read_table", error_msg) |
|||
return False, error_msg, 0 |
|||
|
|||
|
|||
def test_combination( |
|||
fs: s3fs.S3FileSystem, |
|||
test_name: str, |
|||
write_func, |
|||
read_func, |
|||
num_rows: int, |
|||
) -> Tuple[bool, str]: |
|||
"""Test a specific write/read combination.""" |
|||
table = create_sample_table(num_rows=num_rows) |
|||
path = f"{TEST_DIR}/{test_name}/data.parquet" |
|||
|
|||
# Write |
|||
write_ok, write_msg = write_func(table, path, fs) |
|||
if not write_ok: |
|||
return False, f"WRITE_FAIL: {write_msg}" |
|||
|
|||
# Read |
|||
read_ok, read_msg, rows_read = read_func(path, fs) |
|||
if not read_ok: |
|||
return False, f"READ_FAIL: {read_msg}" |
|||
|
|||
# Verify |
|||
if rows_read != num_rows: |
|||
return False, f"DATA_MISMATCH: expected {num_rows}, got {rows_read}" |
|||
|
|||
return True, f"{write_msg} + {read_msg}" |
|||
|
|||
|
|||
def cleanup_test_files(fs: s3fs.S3FileSystem) -> None: |
|||
"""Clean up test files from S3.""" |
|||
try: |
|||
if fs.exists(TEST_DIR): |
|||
logging.info(f"Cleaning up test directory: {TEST_DIR}") |
|||
fs.rm(TEST_DIR, recursive=True) |
|||
logging.info("✓ Test directory cleaned up") |
|||
except Exception as e: |
|||
logging.warning(f"Failed to cleanup test directory: {e}") |
|||
|
|||
|
|||
def main(): |
|||
"""Run all write/read method combinations.""" |
|||
print("=" * 80) |
|||
print("Write/Read Method Combination Tests for S3-Compatible Storage") |
|||
print("Testing PyArrow Parquet Files with Multiple Row Groups") |
|||
if TEST_QUICK: |
|||
print("*** QUICK TEST MODE - Small files only ***") |
|||
print("=" * 80 + "\n") |
|||
|
|||
print("Configuration:") |
|||
print(f" S3 Endpoint: {S3_ENDPOINT_URL}") |
|||
print(f" Bucket: {BUCKET_NAME}") |
|||
print(f" Test Directory: {TEST_DIR}") |
|||
print(f" Quick Mode: {'Yes (small files only)' if TEST_QUICK else 'No (all file sizes)'}") |
|||
print() |
|||
|
|||
try: |
|||
fs = init_s3fs() |
|||
ensure_bucket_exists(fs) |
|||
except Exception as e: |
|||
print(f"Cannot proceed without S3 connection: {e}") |
|||
return 1 |
|||
|
|||
# Define all write methods |
|||
write_methods = [ |
|||
("pads", write_with_pads), |
|||
("buffer+s3fs", write_with_buffer_and_s3fs), |
|||
] |
|||
|
|||
# Define all read methods |
|||
read_methods = [ |
|||
("pads.dataset", read_with_pads_dataset), |
|||
("pq.ParquetDataset", read_with_parquet_dataset), |
|||
("pq.read_table", read_with_pq_read_table), |
|||
("s3fs+direct", read_direct_s3fs), |
|||
("s3fs+buffered", read_buffered_s3fs), |
|||
] |
|||
|
|||
results = [] |
|||
|
|||
# Test all combinations for each file size |
|||
for size_name, num_rows in TEST_SIZES.items(): |
|||
print(f"\n{'='*80}") |
|||
print(f"Testing with {size_name} files ({num_rows:,} rows)") |
|||
print(f"{'='*80}\n") |
|||
print(f"{'Write Method':<20} | {'Read Method':<20} | {'Result':<40}") |
|||
print("-" * 85) |
|||
|
|||
for write_name, write_func in write_methods: |
|||
for read_name, read_func in read_methods: |
|||
test_name = f"{size_name}_{write_name}_{read_name}" |
|||
success, message = test_combination( |
|||
fs, test_name, write_func, read_func, num_rows |
|||
) |
|||
results.append((test_name, success, message)) |
|||
status = "✓ PASS" if success else "✗ FAIL" |
|||
print(f"{write_name:<20} | {read_name:<20} | {status}: {message[:35]}") |
|||
|
|||
# Summary |
|||
print("\n" + "=" * 80) |
|||
print("SUMMARY") |
|||
print("=" * 80) |
|||
passed = sum(1 for _, success, _ in results if success) |
|||
total = len(results) |
|||
print(f"\nTotal: {passed}/{total} passed\n") |
|||
|
|||
# Group results by file size |
|||
for size_name in TEST_SIZES.keys(): |
|||
size_results = [r for r in results if size_name in r[0]] |
|||
size_passed = sum(1 for _, success, _ in size_results if success) |
|||
print(f"{size_name.upper()}: {size_passed}/{len(size_results)} passed") |
|||
|
|||
print("\n" + "=" * 80) |
|||
if passed == total: |
|||
print("✓ ALL TESTS PASSED!") |
|||
else: |
|||
print(f"✗ {total - passed} test(s) failed") |
|||
print("\nFailing combinations:") |
|||
for name, success, message in results: |
|||
if not success: |
|||
parts = name.split("_") |
|||
size = parts[0] |
|||
write = parts[1] |
|||
read = "_".join(parts[2:]) |
|||
print(f" - {size:6} | {write:15} | {read:20} -> {message[:50]}") |
|||
|
|||
print("=" * 80 + "\n") |
|||
print(f"Error details logged to: {ERROR_LOG_FILE}") |
|||
print("=" * 80 + "\n") |
|||
|
|||
# Cleanup |
|||
cleanup_test_files(fs) |
|||
|
|||
return 0 if passed == total else 1 |
|||
|
|||
|
|||
if __name__ == "__main__": |
|||
sys.exit(main()) |
|||
|
|||
@@ -0,0 +1,307 @@ |
|||
#!/usr/bin/env python3 |
|||
""" |
|||
Test script to verify the implicit directory fix for s3fs compatibility. |
|||
|
|||
This test verifies that: |
|||
1. Implicit directory markers (0-byte objects with children) return 404 on HEAD |
|||
2. s3fs correctly identifies them as directories via LIST fallback |
|||
3. PyArrow can read datasets created with write_dataset() |
|||
|
|||
The fix makes SeaweedFS behave like AWS S3 and improves s3fs compatibility. |
|||
""" |
|||
|
|||
import io |
|||
import logging |
|||
import os |
|||
import sys |
|||
import traceback |
|||
|
|||
import pyarrow as pa |
|||
import pyarrow.dataset as pads |
|||
import pyarrow.parquet as pq |
|||
import s3fs |
|||
import boto3 |
|||
from botocore.exceptions import ClientError |
|||
|
|||
# Configure logging |
|||
logging.basicConfig( |
|||
level=logging.INFO, |
|||
format='%(asctime)s - %(levelname)s - %(message)s' |
|||
) |
|||
logger = logging.getLogger(__name__) |
|||
|
|||
# Configuration |
|||
S3_ENDPOINT_URL = os.environ.get("S3_ENDPOINT_URL", "http://localhost:8333") |
|||
S3_ACCESS_KEY = os.environ.get("S3_ACCESS_KEY", "some_access_key1") |
|||
S3_SECRET_KEY = os.environ.get("S3_SECRET_KEY", "some_secret_key1") |
|||
BUCKET_NAME = os.getenv("BUCKET_NAME", "test-implicit-dir") |
|||
|
|||
def create_sample_table(num_rows: int = 1000) -> pa.Table: |
|||
"""Create a sample PyArrow table.""" |
|||
return pa.table({ |
|||
'id': pa.array(range(num_rows), type=pa.int64()), |
|||
'value': pa.array([f'value_{i}' for i in range(num_rows)], type=pa.string()), |
|||
'score': pa.array([float(i) * 1.5 for i in range(num_rows)], type=pa.float64()), |
|||
}) |
|||
|
|||
def setup_s3(): |
|||
"""Set up S3 clients.""" |
|||
# s3fs client |
|||
fs = s3fs.S3FileSystem( |
|||
key=S3_ACCESS_KEY, |
|||
secret=S3_SECRET_KEY, |
|||
client_kwargs={'endpoint_url': S3_ENDPOINT_URL}, |
|||
use_ssl=False |
|||
) |
|||
|
|||
# boto3 client for raw S3 operations |
|||
s3_client = boto3.client( |
|||
's3', |
|||
endpoint_url=S3_ENDPOINT_URL, |
|||
aws_access_key_id=S3_ACCESS_KEY, |
|||
aws_secret_access_key=S3_SECRET_KEY, |
|||
use_ssl=False |
|||
) |
|||
|
|||
return fs, s3_client |
|||
|
|||
def test_implicit_directory_head_behavior(fs, s3_client): |
|||
"""Test that HEAD on implicit directory markers returns 404.""" |
|||
logger.info("\n" + "="*80) |
|||
logger.info("TEST 1: Implicit Directory HEAD Behavior") |
|||
logger.info("="*80) |
|||
|
|||
test_path = f"{BUCKET_NAME}/test_implicit_dir" |
|||
|
|||
# Clean up any existing data |
|||
try: |
|||
fs.rm(test_path, recursive=True) |
|||
except Exception: |
|||
pass |
|||
|
|||
# Create a dataset using PyArrow (creates implicit directory) |
|||
logger.info(f"Creating dataset at: {test_path}") |
|||
table = create_sample_table(1000) |
|||
pads.write_dataset(table, test_path, filesystem=fs, format='parquet') |
|||
|
|||
# List what was created |
|||
logger.info("\nFiles created:") |
|||
files = fs.ls(test_path, detail=True) |
|||
for f in files: |
|||
logger.info(f" {f['name']} - size: {f['size']} bytes, type: {f['type']}") |
|||
|
|||
# Test HEAD request on the directory marker (without trailing slash) |
|||
logger.info(f"\nTesting HEAD on: {test_path}") |
|||
try: |
|||
response = s3_client.head_object(Bucket=BUCKET_NAME, Key='test_implicit_dir') |
|||
logger.info(f" HEAD response: {response['ResponseMetadata']['HTTPStatusCode']}") |
|||
logger.info(f" Content-Length: {response.get('ContentLength', 'N/A')}") |
|||
logger.info(f" Content-Type: {response.get('ContentType', 'N/A')}") |
|||
logger.warning(" ⚠️ Expected 404, but got 200 - fix may not be working") |
|||
return False |
|||
except ClientError as e: |
|||
if e.response['Error']['Code'] == '404': |
|||
logger.info(" ✓ HEAD returned 404 (expected - implicit directory)") |
|||
return True |
|||
else: |
|||
logger.error(f" ✗ Unexpected error: {e}") |
|||
return False |
|||
|
|||
def test_s3fs_directory_detection(fs): |
|||
"""Test that s3fs correctly detects the directory.""" |
|||
logger.info("\n" + "="*80) |
|||
logger.info("TEST 2: s3fs Directory Detection") |
|||
logger.info("="*80) |
|||
|
|||
test_path = f"{BUCKET_NAME}/test_implicit_dir" |
|||
|
|||
# Test s3fs.info() |
|||
logger.info(f"\nTesting s3fs.info('{test_path}'):") |
|||
try: |
|||
info = fs.info(test_path) |
|||
logger.info(f" Type: {info.get('type', 'N/A')}") |
|||
logger.info(f" Size: {info.get('size', 'N/A')}") |
|||
|
|||
if info.get('type') == 'directory': |
|||
logger.info(" ✓ s3fs correctly identified as directory") |
|||
return True |
|||
else: |
|||
logger.warning(f" ⚠️ s3fs identified as: {info.get('type')}") |
|||
return False |
|||
except Exception as e: |
|||
logger.error(f" ✗ Error: {e}") |
|||
return False |
|||
|
|||
def test_s3fs_isdir(fs): |
|||
"""Test that s3fs.isdir() works correctly.""" |
|||
logger.info("\n" + "="*80) |
|||
logger.info("TEST 3: s3fs.isdir() Method") |
|||
logger.info("="*80) |
|||
|
|||
test_path = f"{BUCKET_NAME}/test_implicit_dir" |
|||
|
|||
logger.info(f"\nTesting s3fs.isdir('{test_path}'):") |
|||
try: |
|||
is_dir = fs.isdir(test_path) |
|||
logger.info(f" Result: {is_dir}") |
|||
|
|||
if is_dir: |
|||
logger.info(" ✓ s3fs.isdir() correctly returned True") |
|||
return True |
|||
else: |
|||
logger.warning(" ⚠️ s3fs.isdir() returned False") |
|||
return False |
|||
except Exception as e: |
|||
logger.error(f" ✗ Error: {e}") |
|||
return False |
|||
|
|||
def test_pyarrow_dataset_read(fs): |
|||
"""Test that PyArrow can read the dataset.""" |
|||
logger.info("\n" + "="*80) |
|||
logger.info("TEST 4: PyArrow Dataset Read") |
|||
logger.info("="*80) |
|||
|
|||
test_path = f"{BUCKET_NAME}/test_implicit_dir" |
|||
|
|||
logger.info(f"\nReading dataset from: {test_path}") |
|||
try: |
|||
ds = pads.dataset(test_path, filesystem=fs, format='parquet') |
|||
table = ds.to_table() |
|||
logger.info(f" ✓ Successfully read {len(table)} rows") |
|||
logger.info(f" Columns: {table.column_names}") |
|||
return True |
|||
except Exception as e: |
|||
logger.error(f" ✗ Failed to read dataset: {e}") |
|||
traceback.print_exc() |
|||
return False |
|||
|
|||
def test_explicit_directory_marker(fs, s3_client): |
|||
"""Test that explicit directory markers (with trailing slash) still work.""" |
|||
logger.info("\n" + "="*80) |
|||
logger.info("TEST 5: Explicit Directory Marker (with trailing slash)") |
|||
logger.info("="*80) |
|||
|
|||
# Create an explicit directory marker |
|||
logger.info(f"\nCreating explicit directory: {BUCKET_NAME}/explicit_dir/") |
|||
try: |
|||
s3_client.put_object( |
|||
Bucket=BUCKET_NAME, |
|||
Key='explicit_dir/', |
|||
Body=b'', |
|||
ContentType='httpd/unix-directory' |
|||
) |
|||
logger.info(" ✓ Created explicit directory marker") |
|||
except Exception as e: |
|||
logger.error(f" ✗ Failed to create: {e}") |
|||
return False |
|||
|
|||
# Test HEAD with trailing slash |
|||
logger.info(f"\nTesting HEAD on: {BUCKET_NAME}/explicit_dir/") |
|||
try: |
|||
response = s3_client.head_object(Bucket=BUCKET_NAME, Key='explicit_dir/') |
|||
logger.info(f" ✓ HEAD returned 200 (expected for explicit directory)") |
|||
logger.info(f" Content-Type: {response.get('ContentType', 'N/A')}") |
|||
return True |
|||
except ClientError as e: |
|||
logger.error(f" ✗ HEAD failed: {e}") |
|||
return False |
|||
|
|||
def test_empty_file_not_directory(fs, s3_client): |
|||
"""Test that legitimate empty files are not treated as directories.""" |
|||
logger.info("\n" + "="*80) |
|||
logger.info("TEST 6: Empty File (not a directory)") |
|||
logger.info("="*80) |
|||
|
|||
# Create an empty file with text/plain mime type |
|||
logger.info(f"\nCreating empty file: {BUCKET_NAME}/empty.txt") |
|||
try: |
|||
s3_client.put_object( |
|||
Bucket=BUCKET_NAME, |
|||
Key='empty.txt', |
|||
Body=b'', |
|||
ContentType='text/plain' |
|||
) |
|||
logger.info(" ✓ Created empty file") |
|||
except Exception as e: |
|||
logger.error(f" ✗ Failed to create: {e}") |
|||
return False |
|||
|
|||
# Test HEAD |
|||
logger.info(f"\nTesting HEAD on: {BUCKET_NAME}/empty.txt") |
|||
try: |
|||
response = s3_client.head_object(Bucket=BUCKET_NAME, Key='empty.txt') |
|||
logger.info(f" ✓ HEAD returned 200 (expected for empty file)") |
|||
logger.info(f" Content-Type: {response.get('ContentType', 'N/A')}") |
|||
|
|||
# Verify s3fs doesn't think it's a directory |
|||
info = fs.info(f"{BUCKET_NAME}/empty.txt") |
|||
if info.get('type') == 'file': |
|||
logger.info(" ✓ s3fs correctly identified as file") |
|||
return True |
|||
else: |
|||
logger.warning(f" ⚠️ s3fs identified as: {info.get('type')}") |
|||
return False |
|||
except Exception as e: |
|||
logger.error(f" ✗ Error: {e}") |
|||
return False |
|||
|
|||
def main(): |
|||
"""Run all tests.""" |
|||
logger.info("="*80) |
|||
logger.info("Implicit Directory Fix Test Suite") |
|||
logger.info("="*80) |
|||
logger.info(f"Endpoint: {S3_ENDPOINT_URL}") |
|||
logger.info(f"Bucket: {BUCKET_NAME}") |
|||
logger.info("="*80) |
|||
|
|||
# Set up S3 clients |
|||
fs, s3_client = setup_s3() |
|||
|
|||
# Create bucket if it doesn't exist |
|||
try: |
|||
s3_client.create_bucket(Bucket=BUCKET_NAME) |
|||
logger.info(f"\n✓ Created bucket: {BUCKET_NAME}") |
|||
except ClientError as e: |
|||
error_code = e.response['Error']['Code'] |
|||
if error_code in ['BucketAlreadyOwnedByYou', 'BucketAlreadyExists']: |
|||
logger.info(f"\n✓ Bucket already exists: {BUCKET_NAME}") |
|||
else: |
|||
logger.error(f"\n✗ Failed to create bucket: {e}") |
|||
return 1 |
|||
|
|||
# Run tests |
|||
results = [] |
|||
|
|||
results.append(("Implicit Directory HEAD", test_implicit_directory_head_behavior(fs, s3_client))) |
|||
results.append(("s3fs Directory Detection", test_s3fs_directory_detection(fs))) |
|||
results.append(("s3fs.isdir() Method", test_s3fs_isdir(fs))) |
|||
results.append(("PyArrow Dataset Read", test_pyarrow_dataset_read(fs))) |
|||
results.append(("Explicit Directory Marker", test_explicit_directory_marker(fs, s3_client))) |
|||
results.append(("Empty File Not Directory", test_empty_file_not_directory(fs, s3_client))) |
|||
|
|||
# Print summary |
|||
logger.info("\n" + "="*80) |
|||
logger.info("TEST SUMMARY") |
|||
logger.info("="*80) |
|||
|
|||
passed = sum(1 for _, result in results if result) |
|||
total = len(results) |
|||
|
|||
for name, result in results: |
|||
status = "✓ PASS" if result else "✗ FAIL" |
|||
logger.info(f"{status}: {name}") |
|||
|
|||
logger.info("="*80) |
|||
logger.info(f"Results: {passed}/{total} tests passed") |
|||
logger.info("="*80) |
|||
|
|||
if passed == total: |
|||
logger.info("\n🎉 All tests passed! The implicit directory fix is working correctly.") |
|||
return 0 |
|||
else: |
|||
logger.warning(f"\n⚠️ {total - passed} test(s) failed. The fix may not be fully working.") |
|||
return 1 |
|||
|
|||
if __name__ == "__main__": |
|||
sys.exit(main()) |
|||
|
|||
@@ -0,0 +1,104 @@ |
|||
package sse_test |
|||
|
|||
import ( |
|||
"bytes" |
|||
"context" |
|||
"fmt" |
|||
"io" |
|||
"testing" |
|||
|
|||
"github.com/aws/aws-sdk-go-v2/aws" |
|||
"github.com/aws/aws-sdk-go-v2/service/s3" |
|||
"github.com/stretchr/testify/assert" |
|||
"github.com/stretchr/testify/require" |
|||
) |
|||
|
|||
// TestPlainObjectRangeAndHeadHeaders ensures non-SSE objects advertise correct
|
|||
// Content-Length and Content-Range information for both HEAD and ranged GETs.
|
|||
func TestPlainObjectRangeAndHeadHeaders(t *testing.T) { |
|||
ctx := context.Background() |
|||
|
|||
client, err := createS3Client(ctx, defaultConfig) |
|||
require.NoError(t, err, "failed to create S3 client") |
|||
|
|||
bucketName, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"range-plain-") |
|||
require.NoError(t, err, "failed to create test bucket") |
|||
defer cleanupTestBucket(ctx, client, bucketName) |
|||
|
|||
// SeaweedFS S3 auto-chunks uploads at 8MiB (see chunkSize in putToFiler).
|
|||
// Using 16MiB ensures at least two chunks without stressing CI resources.
|
|||
const chunkSize = 8 * 1024 * 1024 |
|||
const objectSize = 2 * chunkSize |
|||
objectKey := "plain-range-validation" |
|||
testData := generateTestData(objectSize) |
|||
|
|||
_, err = client.PutObject(ctx, &s3.PutObjectInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
Body: bytes.NewReader(testData), |
|||
}) |
|||
require.NoError(t, err, "failed to upload test object") |
|||
|
|||
t.Run("HeadObject reports accurate Content-Length", func(t *testing.T) { |
|||
resp, err := client.HeadObject(ctx, &s3.HeadObjectInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
}) |
|||
require.NoError(t, err, "HeadObject request failed") |
|||
assert.Equal(t, int64(objectSize), resp.ContentLength, "Content-Length mismatch on HEAD") |
|||
assert.Equal(t, "bytes", aws.ToString(resp.AcceptRanges), "Accept-Ranges should advertise bytes") |
|||
}) |
|||
|
|||
t.Run("Range request across chunk boundary", func(t *testing.T) { |
|||
// Test range that spans an 8MiB chunk boundary (chunkSize - 1KB to chunkSize + 3KB)
|
|||
rangeStart := int64(chunkSize - 1024) |
|||
rangeEnd := rangeStart + 4096 - 1 |
|||
rangeHeader := fmt.Sprintf("bytes=%d-%d", rangeStart, rangeEnd) |
|||
|
|||
resp, err := client.GetObject(ctx, &s3.GetObjectInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
Range: aws.String(rangeHeader), |
|||
}) |
|||
require.NoError(t, err, "GetObject range request failed") |
|||
defer resp.Body.Close() |
|||
|
|||
expectedLen := rangeEnd - rangeStart + 1 |
|||
assert.Equal(t, expectedLen, resp.ContentLength, "Content-Length must match requested range size") |
|||
assert.Equal(t, |
|||
fmt.Sprintf("bytes %d-%d/%d", rangeStart, rangeEnd, objectSize), |
|||
aws.ToString(resp.ContentRange), |
|||
"Content-Range header mismatch") |
|||
|
|||
body, err := io.ReadAll(resp.Body) |
|||
require.NoError(t, err, "failed to read range response body") |
|||
assert.Equal(t, int(expectedLen), len(body), "actual bytes read mismatch") |
|||
assert.Equal(t, testData[rangeStart:rangeEnd+1], body, "range payload mismatch") |
|||
}) |
|||
|
|||
t.Run("Suffix range request", func(t *testing.T) { |
|||
const suffixSize = 2048 |
|||
resp, err := client.GetObject(ctx, &s3.GetObjectInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
Range: aws.String(fmt.Sprintf("bytes=-%d", suffixSize)), |
|||
}) |
|||
require.NoError(t, err, "GetObject suffix range request failed") |
|||
defer resp.Body.Close() |
|||
|
|||
expectedStart := int64(objectSize - suffixSize) |
|||
expectedEnd := int64(objectSize - 1) |
|||
expectedLen := expectedEnd - expectedStart + 1 |
|||
|
|||
assert.Equal(t, expectedLen, resp.ContentLength, "suffix Content-Length mismatch") |
|||
assert.Equal(t, |
|||
fmt.Sprintf("bytes %d-%d/%d", expectedStart, expectedEnd, objectSize), |
|||
aws.ToString(resp.ContentRange), |
|||
"suffix Content-Range mismatch") |
|||
|
|||
body, err := io.ReadAll(resp.Body) |
|||
require.NoError(t, err, "failed to read suffix range response body") |
|||
assert.Equal(t, int(expectedLen), len(body), "suffix range byte count mismatch") |
|||
assert.Equal(t, testData[expectedStart:expectedEnd+1], body, "suffix range payload mismatch") |
|||
}) |
|||
} |
|||
@@ -0,0 +1,445 @@ |
|||
package sse_test |
|||
|
|||
import ( |
|||
"bytes" |
|||
"context" |
|||
"crypto/sha256" |
|||
"fmt" |
|||
"io" |
|||
"net/http" |
|||
"testing" |
|||
"time" |
|||
|
|||
"github.com/aws/aws-sdk-go-v2/aws" |
|||
v4 "github.com/aws/aws-sdk-go-v2/aws/signer/v4" |
|||
"github.com/aws/aws-sdk-go-v2/service/s3" |
|||
s3types "github.com/aws/aws-sdk-go-v2/service/s3/types" |
|||
"github.com/stretchr/testify/assert" |
|||
"github.com/stretchr/testify/require" |
|||
) |
|||
|
|||
// signRawHTTPRequest signs a raw HTTP request with AWS Signature V4
|
|||
func signRawHTTPRequest(ctx context.Context, req *http.Request, cfg *S3SSETestConfig) error { |
|||
// Create credentials
|
|||
creds := aws.Credentials{ |
|||
AccessKeyID: cfg.AccessKey, |
|||
SecretAccessKey: cfg.SecretKey, |
|||
} |
|||
|
|||
// Create signer
|
|||
signer := v4.NewSigner() |
|||
|
|||
// Calculate payload hash (empty for GET requests)
|
|||
payloadHash := fmt.Sprintf("%x", sha256.Sum256([]byte{})) |
|||
|
|||
// Sign the request
|
|||
err := signer.SignHTTP(ctx, creds, req, payloadHash, "s3", cfg.Region, time.Now()) |
|||
if err != nil { |
|||
return fmt.Errorf("failed to sign request: %w", err) |
|||
} |
|||
|
|||
return nil |
|||
} |
|||
|
|||
// TestSSECRangeRequestsServerBehavior tests that the server correctly handles Range requests
|
|||
// for SSE-C encrypted objects by checking actual HTTP response (not SDK-processed response)
|
|||
func TestSSECRangeRequestsServerBehavior(t *testing.T) { |
|||
ctx := context.Background() |
|||
client, err := createS3Client(ctx, defaultConfig) |
|||
require.NoError(t, err, "Failed to create S3 client") |
|||
|
|||
bucketName, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"ssec-range-server-") |
|||
require.NoError(t, err, "Failed to create test bucket") |
|||
defer cleanupTestBucket(ctx, client, bucketName) |
|||
|
|||
sseKey := generateSSECKey() |
|||
testData := generateTestData(2048) // 2KB test file
|
|||
objectKey := "test-range-server-validation" |
|||
|
|||
// Upload with SSE-C
|
|||
_, err = client.PutObject(ctx, &s3.PutObjectInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
Body: bytes.NewReader(testData), |
|||
SSECustomerAlgorithm: aws.String("AES256"), |
|||
SSECustomerKey: aws.String(sseKey.KeyB64), |
|||
SSECustomerKeyMD5: aws.String(sseKey.KeyMD5), |
|||
}) |
|||
require.NoError(t, err, "Failed to upload SSE-C object") |
|||
|
|||
// Test cases for range requests
|
|||
testCases := []struct { |
|||
name string |
|||
rangeHeader string |
|||
expectedStart int64 |
|||
expectedEnd int64 |
|||
expectedTotal int64 |
|||
}{ |
|||
{ |
|||
name: "First 100 bytes", |
|||
rangeHeader: "bytes=0-99", |
|||
expectedStart: 0, |
|||
expectedEnd: 99, |
|||
expectedTotal: 2048, |
|||
}, |
|||
{ |
|||
name: "Middle range", |
|||
rangeHeader: "bytes=500-699", |
|||
expectedStart: 500, |
|||
expectedEnd: 699, |
|||
expectedTotal: 2048, |
|||
}, |
|||
{ |
|||
name: "Last 100 bytes", |
|||
rangeHeader: "bytes=1948-2047", |
|||
expectedStart: 1948, |
|||
expectedEnd: 2047, |
|||
expectedTotal: 2048, |
|||
}, |
|||
{ |
|||
name: "Single byte", |
|||
rangeHeader: "bytes=1000-1000", |
|||
expectedStart: 1000, |
|||
expectedEnd: 1000, |
|||
expectedTotal: 2048, |
|||
}, |
|||
{ |
|||
name: "AES block boundary crossing", |
|||
rangeHeader: "bytes=15-17", |
|||
expectedStart: 15, |
|||
expectedEnd: 17, |
|||
expectedTotal: 2048, |
|||
}, |
|||
{ |
|||
name: "Open-ended range", |
|||
rangeHeader: "bytes=2000-", |
|||
expectedStart: 2000, |
|||
expectedEnd: 2047, |
|||
expectedTotal: 2048, |
|||
}, |
|||
{ |
|||
name: "Suffix range (last 100 bytes)", |
|||
rangeHeader: "bytes=-100", |
|||
expectedStart: 1948, |
|||
expectedEnd: 2047, |
|||
expectedTotal: 2048, |
|||
}, |
|||
} |
|||
|
|||
for _, tc := range testCases { |
|||
t.Run(tc.name, func(t *testing.T) { |
|||
// Build object URL (Endpoint already includes http://)
|
|||
objectURL := fmt.Sprintf("%s/%s/%s", |
|||
defaultConfig.Endpoint, |
|||
bucketName, |
|||
objectKey, |
|||
) |
|||
|
|||
// Create raw HTTP request
|
|||
req, err := http.NewRequest("GET", objectURL, nil) |
|||
require.NoError(t, err, "Failed to create HTTP request") |
|||
|
|||
// Add Range header
|
|||
req.Header.Set("Range", tc.rangeHeader) |
|||
|
|||
// Add SSE-C headers
|
|||
req.Header.Set("x-amz-server-side-encryption-customer-algorithm", "AES256") |
|||
req.Header.Set("x-amz-server-side-encryption-customer-key", sseKey.KeyB64) |
|||
req.Header.Set("x-amz-server-side-encryption-customer-key-MD5", sseKey.KeyMD5) |
|||
|
|||
// Sign the request with AWS Signature V4
|
|||
err = signRawHTTPRequest(ctx, req, defaultConfig) |
|||
require.NoError(t, err, "Failed to sign HTTP request") |
|||
|
|||
// Make request with raw HTTP client
|
|||
httpClient := &http.Client{} |
|||
resp, err := httpClient.Do(req) |
|||
require.NoError(t, err, "Failed to execute range request") |
|||
defer resp.Body.Close() |
|||
|
|||
// CRITICAL CHECK 1: Status code must be 206 Partial Content
|
|||
assert.Equal(t, http.StatusPartialContent, resp.StatusCode, |
|||
"Server must return 206 Partial Content for range request, got %d", resp.StatusCode) |
|||
|
|||
// CRITICAL CHECK 2: Content-Range header must be present and correct
|
|||
expectedContentRange := fmt.Sprintf("bytes %d-%d/%d", |
|||
tc.expectedStart, tc.expectedEnd, tc.expectedTotal) |
|||
actualContentRange := resp.Header.Get("Content-Range") |
|||
assert.Equal(t, expectedContentRange, actualContentRange, |
|||
"Content-Range header mismatch") |
|||
|
|||
// CRITICAL CHECK 3: Content-Length must match requested range size
|
|||
expectedLength := tc.expectedEnd - tc.expectedStart + 1 |
|||
actualLength := resp.ContentLength |
|||
assert.Equal(t, expectedLength, actualLength, |
|||
"Content-Length mismatch: expected %d, got %d", expectedLength, actualLength) |
|||
|
|||
// CRITICAL CHECK 4: Actual bytes received from network
|
|||
bodyBytes, err := io.ReadAll(resp.Body) |
|||
require.NoError(t, err, "Failed to read response body") |
|||
assert.Equal(t, int(expectedLength), len(bodyBytes), |
|||
"Actual bytes received from server mismatch: expected %d, got %d", |
|||
expectedLength, len(bodyBytes)) |
|||
|
|||
// CRITICAL CHECK 5: Verify decrypted content matches expected range
|
|||
expectedData := testData[tc.expectedStart : tc.expectedEnd+1] |
|||
assert.Equal(t, expectedData, bodyBytes, |
|||
"Decrypted range content doesn't match expected data") |
|||
|
|||
// Verify SSE-C headers are present in response
|
|||
assert.Equal(t, "AES256", resp.Header.Get("x-amz-server-side-encryption-customer-algorithm"), |
|||
"SSE-C algorithm header missing in range response") |
|||
assert.Equal(t, sseKey.KeyMD5, resp.Header.Get("x-amz-server-side-encryption-customer-key-MD5"), |
|||
"SSE-C key MD5 header missing in range response") |
|||
}) |
|||
} |
|||
} |
|||
|
|||
// TestSSEKMSRangeRequestsServerBehavior tests server-side Range handling for SSE-KMS
|
|||
func TestSSEKMSRangeRequestsServerBehavior(t *testing.T) { |
|||
ctx := context.Background() |
|||
client, err := createS3Client(ctx, defaultConfig) |
|||
require.NoError(t, err, "Failed to create S3 client") |
|||
|
|||
bucketName, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"ssekms-range-server-") |
|||
require.NoError(t, err, "Failed to create test bucket") |
|||
defer cleanupTestBucket(ctx, client, bucketName) |
|||
|
|||
kmsKeyID := "test-range-key" |
|||
testData := generateTestData(4096) // 4KB test file
|
|||
objectKey := "test-kms-range-server-validation" |
|||
|
|||
// Upload with SSE-KMS
|
|||
_, err = client.PutObject(ctx, &s3.PutObjectInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
Body: bytes.NewReader(testData), |
|||
ServerSideEncryption: "aws:kms", |
|||
SSEKMSKeyId: aws.String(kmsKeyID), |
|||
}) |
|||
require.NoError(t, err, "Failed to upload SSE-KMS object") |
|||
|
|||
// Test various ranges
|
|||
testCases := []struct { |
|||
name string |
|||
rangeHeader string |
|||
start int64 |
|||
end int64 |
|||
}{ |
|||
{"First KB", "bytes=0-1023", 0, 1023}, |
|||
{"Second KB", "bytes=1024-2047", 1024, 2047}, |
|||
{"Last KB", "bytes=3072-4095", 3072, 4095}, |
|||
{"Unaligned range", "bytes=100-299", 100, 299}, |
|||
} |
|||
|
|||
for _, tc := range testCases { |
|||
t.Run(tc.name, func(t *testing.T) { |
|||
objectURL := fmt.Sprintf("%s/%s/%s", |
|||
defaultConfig.Endpoint, |
|||
bucketName, |
|||
objectKey, |
|||
) |
|||
|
|||
req, err := http.NewRequest("GET", objectURL, nil) |
|||
require.NoError(t, err) |
|||
req.Header.Set("Range", tc.rangeHeader) |
|||
|
|||
// Sign the request with AWS Signature V4
|
|||
err = signRawHTTPRequest(ctx, req, defaultConfig) |
|||
require.NoError(t, err, "Failed to sign HTTP request") |
|||
|
|||
httpClient := &http.Client{} |
|||
resp, err := httpClient.Do(req) |
|||
require.NoError(t, err) |
|||
defer resp.Body.Close() |
|||
|
|||
// Verify 206 status
|
|||
assert.Equal(t, http.StatusPartialContent, resp.StatusCode, |
|||
"SSE-KMS range request must return 206, got %d", resp.StatusCode) |
|||
|
|||
// Verify Content-Range
|
|||
expectedContentRange := fmt.Sprintf("bytes %d-%d/%d", tc.start, tc.end, int64(len(testData))) |
|||
assert.Equal(t, expectedContentRange, resp.Header.Get("Content-Range")) |
|||
|
|||
// Verify actual bytes received
|
|||
bodyBytes, err := io.ReadAll(resp.Body) |
|||
require.NoError(t, err) |
|||
expectedLength := tc.end - tc.start + 1 |
|||
assert.Equal(t, int(expectedLength), len(bodyBytes), |
|||
"Actual network bytes mismatch") |
|||
|
|||
// Verify content
|
|||
expectedData := testData[tc.start : tc.end+1] |
|||
assert.Equal(t, expectedData, bodyBytes) |
|||
}) |
|||
} |
|||
} |
|||
|
|||
// TestSSES3RangeRequestsServerBehavior tests server-side Range handling for SSE-S3
|
|||
func TestSSES3RangeRequestsServerBehavior(t *testing.T) { |
|||
ctx := context.Background() |
|||
client, err := createS3Client(ctx, defaultConfig) |
|||
require.NoError(t, err, "Failed to create S3 client") |
|||
|
|||
bucketName, err := createTestBucket(ctx, client, "sses3-range-server") |
|||
require.NoError(t, err, "Failed to create test bucket") |
|||
defer cleanupTestBucket(ctx, client, bucketName) |
|||
|
|||
testData := generateTestData(8192) // 8KB test file
|
|||
objectKey := "test-s3-range-server-validation" |
|||
|
|||
// Upload with SSE-S3
|
|||
_, err = client.PutObject(ctx, &s3.PutObjectInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
Body: bytes.NewReader(testData), |
|||
ServerSideEncryption: "AES256", |
|||
}) |
|||
require.NoError(t, err, "Failed to upload SSE-S3 object") |
|||
|
|||
// Test range request
|
|||
objectURL := fmt.Sprintf("%s/%s/%s", |
|||
defaultConfig.Endpoint, |
|||
bucketName, |
|||
objectKey, |
|||
) |
|||
|
|||
req, err := http.NewRequest("GET", objectURL, nil) |
|||
require.NoError(t, err) |
|||
req.Header.Set("Range", "bytes=1000-1999") |
|||
|
|||
// Sign the request with AWS Signature V4
|
|||
err = signRawHTTPRequest(ctx, req, defaultConfig) |
|||
require.NoError(t, err, "Failed to sign HTTP request") |
|||
|
|||
httpClient := &http.Client{} |
|||
resp, err := httpClient.Do(req) |
|||
require.NoError(t, err) |
|||
defer resp.Body.Close() |
|||
|
|||
// Verify server response
|
|||
assert.Equal(t, http.StatusPartialContent, resp.StatusCode) |
|||
assert.Equal(t, "bytes 1000-1999/8192", resp.Header.Get("Content-Range")) |
|||
assert.Equal(t, int64(1000), resp.ContentLength) |
|||
|
|||
bodyBytes, err := io.ReadAll(resp.Body) |
|||
require.NoError(t, err) |
|||
assert.Equal(t, 1000, len(bodyBytes)) |
|||
assert.Equal(t, testData[1000:2000], bodyBytes) |
|||
} |
|||
|
|||
// TestSSEMultipartRangeRequestsServerBehavior tests Range requests on multipart encrypted objects
|
|||
func TestSSEMultipartRangeRequestsServerBehavior(t *testing.T) { |
|||
ctx := context.Background() |
|||
client, err := createS3Client(ctx, defaultConfig) |
|||
require.NoError(t, err) |
|||
|
|||
bucketName, err := createTestBucket(ctx, client, defaultConfig.BucketPrefix+"ssec-mp-range-") |
|||
require.NoError(t, err) |
|||
defer cleanupTestBucket(ctx, client, bucketName) |
|||
|
|||
sseKey := generateSSECKey() |
|||
objectKey := "test-multipart-range-server" |
|||
|
|||
// Create 10MB test data (2 parts of 5MB each)
|
|||
partSize := 5 * 1024 * 1024 |
|||
part1Data := generateTestData(partSize) |
|||
part2Data := generateTestData(partSize) |
|||
fullData := append(part1Data, part2Data...) |
|||
|
|||
// Initiate multipart upload
|
|||
createResp, err := client.CreateMultipartUpload(ctx, &s3.CreateMultipartUploadInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
SSECustomerAlgorithm: aws.String("AES256"), |
|||
SSECustomerKey: aws.String(sseKey.KeyB64), |
|||
SSECustomerKeyMD5: aws.String(sseKey.KeyMD5), |
|||
}) |
|||
require.NoError(t, err) |
|||
uploadID := aws.ToString(createResp.UploadId) |
|||
|
|||
// Upload part 1
|
|||
part1Resp, err := client.UploadPart(ctx, &s3.UploadPartInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
UploadId: aws.String(uploadID), |
|||
PartNumber: aws.Int32(1), |
|||
Body: bytes.NewReader(part1Data), |
|||
SSECustomerAlgorithm: aws.String("AES256"), |
|||
SSECustomerKey: aws.String(sseKey.KeyB64), |
|||
SSECustomerKeyMD5: aws.String(sseKey.KeyMD5), |
|||
}) |
|||
require.NoError(t, err) |
|||
|
|||
// Upload part 2
|
|||
part2Resp, err := client.UploadPart(ctx, &s3.UploadPartInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
UploadId: aws.String(uploadID), |
|||
PartNumber: aws.Int32(2), |
|||
Body: bytes.NewReader(part2Data), |
|||
SSECustomerAlgorithm: aws.String("AES256"), |
|||
SSECustomerKey: aws.String(sseKey.KeyB64), |
|||
SSECustomerKeyMD5: aws.String(sseKey.KeyMD5), |
|||
}) |
|||
require.NoError(t, err) |
|||
|
|||
// Complete multipart upload
|
|||
_, err = client.CompleteMultipartUpload(ctx, &s3.CompleteMultipartUploadInput{ |
|||
Bucket: aws.String(bucketName), |
|||
Key: aws.String(objectKey), |
|||
UploadId: aws.String(uploadID), |
|||
MultipartUpload: &s3types.CompletedMultipartUpload{ |
|||
Parts: []s3types.CompletedPart{ |
|||
{PartNumber: aws.Int32(1), ETag: part1Resp.ETag}, |
|||
{PartNumber: aws.Int32(2), ETag: part2Resp.ETag}, |
|||
}, |
|||
}, |
|||
}) |
|||
require.NoError(t, err) |
|||
|
|||
// Test range that crosses part boundary
|
|||
objectURL := fmt.Sprintf("%s/%s/%s", |
|||
defaultConfig.Endpoint, |
|||
bucketName, |
|||
objectKey, |
|||
) |
|||
|
|||
// Range spanning across the part boundary
|
|||
start := int64(partSize - 1000) |
|||
end := int64(partSize + 1000) |
|||
|
|||
req, err := http.NewRequest("GET", objectURL, nil) |
|||
require.NoError(t, err) |
|||
req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", start, end)) |
|||
req.Header.Set("x-amz-server-side-encryption-customer-algorithm", "AES256") |
|||
req.Header.Set("x-amz-server-side-encryption-customer-key", sseKey.KeyB64) |
|||
req.Header.Set("x-amz-server-side-encryption-customer-key-MD5", sseKey.KeyMD5) |
|||
|
|||
// Sign the request with AWS Signature V4
|
|||
err = signRawHTTPRequest(ctx, req, defaultConfig) |
|||
require.NoError(t, err, "Failed to sign HTTP request") |
|||
|
|||
httpClient := &http.Client{} |
|||
resp, err := httpClient.Do(req) |
|||
require.NoError(t, err) |
|||
defer resp.Body.Close() |
|||
|
|||
// Verify server behavior for cross-part range
|
|||
assert.Equal(t, http.StatusPartialContent, resp.StatusCode, |
|||
"Multipart range request must return 206") |
|||
|
|||
expectedLength := end - start + 1 |
|||
assert.Equal(t, expectedLength, resp.ContentLength, |
|||
"Content-Length for cross-part range") |
|||
|
|||
bodyBytes, err := io.ReadAll(resp.Body) |
|||
require.NoError(t, err) |
|||
assert.Equal(t, int(expectedLength), len(bodyBytes), |
|||
"Actual bytes for cross-part range") |
|||
|
|||
// Verify content spans the part boundary correctly
|
|||
expectedData := fullData[start : end+1] |
|||
assert.Equal(t, expectedData, bodyBytes, |
|||
"Cross-part range content must be correctly decrypted and assembled") |
|||
} |
|||
@@ -0,0 +1,267 @@ |
|||
package operation |
|||
|
|||
import ( |
|||
"bytes" |
|||
"context" |
|||
"crypto/md5" |
|||
"fmt" |
|||
"hash" |
|||
"io" |
|||
"sort" |
|||
"sync" |
|||
"time" |
|||
|
|||
"github.com/seaweedfs/seaweedfs/weed/glog" |
|||
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb" |
|||
"github.com/seaweedfs/seaweedfs/weed/security" |
|||
) |
|||
|
|||
// ChunkedUploadResult contains the result of a chunked upload
|
|||
type ChunkedUploadResult struct { |
|||
FileChunks []*filer_pb.FileChunk |
|||
Md5Hash hash.Hash |
|||
TotalSize int64 |
|||
SmallContent []byte // For files smaller than threshold
|
|||
} |
|||
|
|||
// ChunkedUploadOption contains options for chunked uploads
|
|||
type ChunkedUploadOption struct { |
|||
ChunkSize int32 |
|||
SmallFileLimit int64 |
|||
Collection string |
|||
Replication string |
|||
DataCenter string |
|||
SaveSmallInline bool |
|||
Jwt security.EncodedJwt |
|||
MimeType string |
|||
AssignFunc func(ctx context.Context, count int) (*VolumeAssignRequest, *AssignResult, error) |
|||
UploadFunc func(ctx context.Context, data []byte, option *UploadOption) (*UploadResult, error) // Optional: for testing
|
|||
} |
|||
|
|||
var chunkBufferPool = sync.Pool{ |
|||
New: func() interface{} { |
|||
return new(bytes.Buffer) |
|||
}, |
|||
} |
|||
|
|||
// UploadReaderInChunks reads from reader and uploads in chunks to volume servers
|
|||
// This prevents OOM by processing the stream in fixed-size chunks
|
|||
// Returns file chunks, MD5 hash, total size, and any small content stored inline
|
|||
func UploadReaderInChunks(ctx context.Context, reader io.Reader, opt *ChunkedUploadOption) (*ChunkedUploadResult, error) { |
|||
|
|||
md5Hash := md5.New() |
|||
var partReader = io.TeeReader(reader, md5Hash) |
|||
|
|||
var fileChunks []*filer_pb.FileChunk |
|||
var fileChunksLock sync.Mutex |
|||
var uploadErr error |
|||
var uploadErrLock sync.Mutex |
|||
var chunkOffset int64 = 0 |
|||
|
|||
var wg sync.WaitGroup |
|||
const bytesBufferCounter = 4 |
|||
bytesBufferLimitChan := make(chan struct{}, bytesBufferCounter) |
|||
|
|||
uploadLoop: |
|||
for { |
|||
// Throttle buffer usage
|
|||
bytesBufferLimitChan <- struct{}{} |
|||
|
|||
// Check for errors from parallel uploads
|
|||
uploadErrLock.Lock() |
|||
if uploadErr != nil { |
|||
<-bytesBufferLimitChan |
|||
uploadErrLock.Unlock() |
|||
break |
|||
} |
|||
uploadErrLock.Unlock() |
|||
|
|||
// Check for context cancellation
|
|||
select { |
|||
case <-ctx.Done(): |
|||
<-bytesBufferLimitChan |
|||
uploadErrLock.Lock() |
|||
if uploadErr == nil { |
|||
uploadErr = ctx.Err() |
|||
} |
|||
uploadErrLock.Unlock() |
|||
break uploadLoop |
|||
default: |
|||
} |
|||
|
|||
// Get buffer from pool
|
|||
bytesBuffer := chunkBufferPool.Get().(*bytes.Buffer) |
|||
limitedReader := io.LimitReader(partReader, int64(opt.ChunkSize)) |
|||
bytesBuffer.Reset() |
|||
|
|||
// Read one chunk
|
|||
dataSize, err := bytesBuffer.ReadFrom(limitedReader) |
|||
if err != nil { |
|||
glog.V(2).Infof("UploadReaderInChunks: read error at offset %d: %v", chunkOffset, err) |
|||
chunkBufferPool.Put(bytesBuffer) |
|||
<-bytesBufferLimitChan |
|||
uploadErrLock.Lock() |
|||
if uploadErr == nil { |
|||
uploadErr = err |
|||
} |
|||
uploadErrLock.Unlock() |
|||
break |
|||
} |
|||
// If no data was read, we've reached EOF
|
|||
// Only break if we've already read some data (chunkOffset > 0) or if this is truly EOF
|
|||
if dataSize == 0 { |
|||
if chunkOffset == 0 { |
|||
glog.Warningf("UploadReaderInChunks: received 0 bytes on first read - creating empty file") |
|||
} |
|||
chunkBufferPool.Put(bytesBuffer) |
|||
<-bytesBufferLimitChan |
|||
// If we've already read some chunks, this is normal EOF
|
|||
// If we haven't read anything yet (chunkOffset == 0), this could be an empty file
|
|||
// which is valid (e.g., touch command creates 0-byte files)
|
|||
break |
|||
} |
|||
|
|||
// For small files at offset 0, store inline instead of uploading
|
|||
if chunkOffset == 0 && opt.SaveSmallInline && dataSize < opt.SmallFileLimit { |
|||
smallContent := make([]byte, dataSize) |
|||
n, readErr := io.ReadFull(bytesBuffer, smallContent) |
|||
chunkBufferPool.Put(bytesBuffer) |
|||
<-bytesBufferLimitChan |
|||
|
|||
if readErr != nil { |
|||
return nil, fmt.Errorf("failed to read small content: read %d of %d bytes: %w", n, dataSize, readErr) |
|||
} |
|||
|
|||
return &ChunkedUploadResult{ |
|||
FileChunks: nil, |
|||
Md5Hash: md5Hash, |
|||
TotalSize: dataSize, |
|||
SmallContent: smallContent, |
|||
}, nil |
|||
} |
|||
|
|||
// Upload chunk in parallel goroutine
|
|||
wg.Add(1) |
|||
go func(offset int64, buf *bytes.Buffer) { |
|||
defer func() { |
|||
chunkBufferPool.Put(buf) |
|||
<-bytesBufferLimitChan |
|||
wg.Done() |
|||
}() |
|||
|
|||
// Assign volume for this chunk
|
|||
_, assignResult, assignErr := opt.AssignFunc(ctx, 1) |
|||
if assignErr != nil { |
|||
uploadErrLock.Lock() |
|||
if uploadErr == nil { |
|||
uploadErr = fmt.Errorf("assign volume: %w", assignErr) |
|||
} |
|||
uploadErrLock.Unlock() |
|||
return |
|||
} |
|||
|
|||
// Upload chunk data
|
|||
uploadUrl := fmt.Sprintf("http://%s/%s", assignResult.Url, assignResult.Fid) |
|||
|
|||
// Use per-assignment JWT if present, otherwise fall back to the original JWT
|
|||
// This is critical for secured clusters where each volume assignment has its own JWT
|
|||
jwt := opt.Jwt |
|||
if assignResult.Auth != "" { |
|||
jwt = assignResult.Auth |
|||
} |
|||
|
|||
uploadOption := &UploadOption{ |
|||
UploadUrl: uploadUrl, |
|||
Cipher: false, |
|||
IsInputCompressed: false, |
|||
MimeType: opt.MimeType, |
|||
PairMap: nil, |
|||
Jwt: jwt, |
|||
} |
|||
|
|||
var uploadResult *UploadResult |
|||
var uploadResultErr error |
|||
|
|||
// Use mock upload function if provided (for testing), otherwise use real uploader
|
|||
if opt.UploadFunc != nil { |
|||
uploadResult, uploadResultErr = opt.UploadFunc(ctx, buf.Bytes(), uploadOption) |
|||
} else { |
|||
uploader, uploaderErr := NewUploader() |
|||
if uploaderErr != nil { |
|||
uploadErrLock.Lock() |
|||
if uploadErr == nil { |
|||
uploadErr = fmt.Errorf("create uploader: %w", uploaderErr) |
|||
} |
|||
uploadErrLock.Unlock() |
|||
return |
|||
} |
|||
uploadResult, uploadResultErr = uploader.UploadData(ctx, buf.Bytes(), uploadOption) |
|||
} |
|||
|
|||
if uploadResultErr != nil { |
|||
uploadErrLock.Lock() |
|||
if uploadErr == nil { |
|||
uploadErr = fmt.Errorf("upload chunk: %w", uploadResultErr) |
|||
} |
|||
uploadErrLock.Unlock() |
|||
return |
|||
} |
|||
|
|||
// Create chunk entry
|
|||
// Set ModifiedTsNs to current time (nanoseconds) to track when upload completed
|
|||
// This is critical for multipart uploads where the same part may be uploaded multiple times
|
|||
// The part with the latest ModifiedTsNs is selected as the authoritative version
|
|||
fid, _ := filer_pb.ToFileIdObject(assignResult.Fid) |
|||
chunk := &filer_pb.FileChunk{ |
|||
FileId: assignResult.Fid, |
|||
Offset: offset, |
|||
Size: uint64(uploadResult.Size), |
|||
ModifiedTsNs: time.Now().UnixNano(), |
|||
ETag: uploadResult.ContentMd5, |
|||
Fid: fid, |
|||
CipherKey: uploadResult.CipherKey, |
|||
} |
|||
|
|||
fileChunksLock.Lock() |
|||
fileChunks = append(fileChunks, chunk) |
|||
glog.V(4).Infof("uploaded chunk %d to %s [%d,%d)", len(fileChunks), chunk.FileId, offset, offset+int64(chunk.Size)) |
|||
fileChunksLock.Unlock() |
|||
|
|||
}(chunkOffset, bytesBuffer) |
|||
|
|||
// Update offset for next chunk
|
|||
chunkOffset += dataSize |
|||
|
|||
// If this was a partial chunk, we're done
|
|||
if dataSize < int64(opt.ChunkSize) { |
|||
break |
|||
} |
|||
} |
|||
|
|||
// Wait for all uploads to complete
|
|||
wg.Wait() |
|||
|
|||
// Sort chunks by offset (do this even if there's an error, for cleanup purposes)
|
|||
sort.Slice(fileChunks, func(i, j int) bool { |
|||
return fileChunks[i].Offset < fileChunks[j].Offset |
|||
}) |
|||
|
|||
// Check for errors - return partial results for cleanup
|
|||
if uploadErr != nil { |
|||
glog.Errorf("chunked upload failed: %v (returning %d partial chunks for cleanup)", uploadErr, len(fileChunks)) |
|||
// IMPORTANT: Return partial results even on error so caller can cleanup orphaned chunks
|
|||
return &ChunkedUploadResult{ |
|||
FileChunks: fileChunks, |
|||
Md5Hash: md5Hash, |
|||
TotalSize: chunkOffset, |
|||
SmallContent: nil, |
|||
}, uploadErr |
|||
} |
|||
|
|||
return &ChunkedUploadResult{ |
|||
FileChunks: fileChunks, |
|||
Md5Hash: md5Hash, |
|||
TotalSize: chunkOffset, |
|||
SmallContent: nil, |
|||
}, nil |
|||
} |
|||
@@ -0,0 +1,312 @@ |
|||
package operation |
|||
|
|||
import ( |
|||
"bytes" |
|||
"context" |
|||
"errors" |
|||
"io" |
|||
"testing" |
|||
) |
|||
|
|||
// TestUploadReaderInChunksReturnsPartialResultsOnError verifies that when
|
|||
// UploadReaderInChunks fails mid-upload, it returns partial results containing
|
|||
// the chunks that were successfully uploaded before the error occurred.
|
|||
// This allows the caller to cleanup orphaned chunks and prevent resource leaks.
|
|||
func TestUploadReaderInChunksReturnsPartialResultsOnError(t *testing.T) { |
|||
// Create test data larger than one chunk to force multiple chunk uploads
|
|||
testData := bytes.Repeat([]byte("test data for chunk upload failure testing"), 1000) // ~40KB
|
|||
reader := bytes.NewReader(testData) |
|||
|
|||
uploadAttempts := 0 |
|||
|
|||
// Create a mock assign function that succeeds for first chunk, then fails
|
|||
assignFunc := func(ctx context.Context, count int) (*VolumeAssignRequest, *AssignResult, error) { |
|||
uploadAttempts++ |
|||
|
|||
if uploadAttempts == 1 { |
|||
// First chunk succeeds
|
|||
return nil, &AssignResult{ |
|||
Fid: "test-fid-1,1234", |
|||
Url: "http://test-volume-1:8080", |
|||
PublicUrl: "http://test-volume-1:8080", |
|||
Count: 1, |
|||
}, nil |
|||
} |
|||
|
|||
// Second chunk fails (simulating volume server down or network error)
|
|||
return nil, nil, errors.New("simulated volume assignment failure") |
|||
} |
|||
|
|||
// Mock upload function that simulates successful upload
|
|||
uploadFunc := func(ctx context.Context, data []byte, option *UploadOption) (*UploadResult, error) { |
|||
return &UploadResult{ |
|||
Name: "test-file", |
|||
Size: uint32(len(data)), |
|||
ContentMd5: "mock-md5-hash", |
|||
Error: "", |
|||
}, nil |
|||
} |
|||
|
|||
// Attempt upload with small chunk size to trigger multiple uploads
|
|||
result, err := UploadReaderInChunks(context.Background(), reader, &ChunkedUploadOption{ |
|||
ChunkSize: 8 * 1024, // 8KB chunks
|
|||
SmallFileLimit: 256, |
|||
Collection: "test", |
|||
DataCenter: "", |
|||
SaveSmallInline: false, |
|||
AssignFunc: assignFunc, |
|||
UploadFunc: uploadFunc, |
|||
}) |
|||
|
|||
// VERIFICATION 1: Error should be returned
|
|||
if err == nil { |
|||
t.Fatal("Expected error from UploadReaderInChunks, got nil") |
|||
} |
|||
t.Logf("✓ Got expected error: %v", err) |
|||
|
|||
// VERIFICATION 2: Result should NOT be nil (this is the fix)
|
|||
if result == nil { |
|||
t.Fatal("CRITICAL: UploadReaderInChunks returned nil result on error - caller cannot cleanup orphaned chunks!") |
|||
} |
|||
t.Log("✓ Result is not nil (partial results returned)") |
|||
|
|||
// VERIFICATION 3: Result should contain partial chunks from successful uploads
|
|||
// Note: In reality, the first chunk upload would succeed before assignment fails for chunk 2
|
|||
// But in this test, assignment fails immediately for chunk 2, so we may have 0 chunks
|
|||
// The important thing is that the result struct is returned, not that it has chunks
|
|||
t.Logf("✓ Result contains %d chunks (may be 0 if all assignments failed)", len(result.FileChunks)) |
|||
|
|||
// VERIFICATION 4: MD5 hash should be available even on partial failure
|
|||
if result.Md5Hash == nil { |
|||
t.Error("Expected Md5Hash to be non-nil") |
|||
} else { |
|||
t.Log("✓ Md5Hash is available for partial data") |
|||
} |
|||
|
|||
// VERIFICATION 5: TotalSize should reflect bytes read before failure
|
|||
if result.TotalSize < 0 { |
|||
t.Errorf("Expected non-negative TotalSize, got %d", result.TotalSize) |
|||
} else { |
|||
t.Logf("✓ TotalSize = %d bytes read before failure", result.TotalSize) |
|||
} |
|||
} |
|||
|
|||
// TestUploadReaderInChunksSuccessPath verifies normal successful upload behavior
|
|||
func TestUploadReaderInChunksSuccessPath(t *testing.T) { |
|||
testData := []byte("small test data") |
|||
reader := bytes.NewReader(testData) |
|||
|
|||
// Mock assign function that always succeeds
|
|||
assignFunc := func(ctx context.Context, count int) (*VolumeAssignRequest, *AssignResult, error) { |
|||
return nil, &AssignResult{ |
|||
Fid: "test-fid,1234", |
|||
Url: "http://test-volume:8080", |
|||
PublicUrl: "http://test-volume:8080", |
|||
Count: 1, |
|||
}, nil |
|||
} |
|||
|
|||
// Mock upload function that simulates successful upload
|
|||
uploadFunc := func(ctx context.Context, data []byte, option *UploadOption) (*UploadResult, error) { |
|||
return &UploadResult{ |
|||
Name: "test-file", |
|||
Size: uint32(len(data)), |
|||
ContentMd5: "mock-md5-hash", |
|||
Error: "", |
|||
}, nil |
|||
} |
|||
|
|||
result, err := UploadReaderInChunks(context.Background(), reader, &ChunkedUploadOption{ |
|||
ChunkSize: 8 * 1024, |
|||
SmallFileLimit: 256, |
|||
Collection: "test", |
|||
DataCenter: "", |
|||
SaveSmallInline: false, |
|||
AssignFunc: assignFunc, |
|||
UploadFunc: uploadFunc, |
|||
}) |
|||
|
|||
// VERIFICATION 1: No error should occur
|
|||
if err != nil { |
|||
t.Fatalf("Expected successful upload, got error: %v", err) |
|||
} |
|||
t.Log("✓ Upload completed without error") |
|||
|
|||
// VERIFICATION 2: Result should not be nil
|
|||
if result == nil { |
|||
t.Fatal("Expected non-nil result") |
|||
} |
|||
t.Log("✓ Result is not nil") |
|||
|
|||
// VERIFICATION 3: Should have file chunks
|
|||
if len(result.FileChunks) == 0 { |
|||
t.Error("Expected at least one file chunk") |
|||
} else { |
|||
t.Logf("✓ Result contains %d file chunk(s)", len(result.FileChunks)) |
|||
} |
|||
|
|||
// VERIFICATION 4: Total size should match input data
|
|||
if result.TotalSize != int64(len(testData)) { |
|||
t.Errorf("Expected TotalSize=%d, got %d", len(testData), result.TotalSize) |
|||
} else { |
|||
t.Logf("✓ TotalSize=%d matches input data", result.TotalSize) |
|||
} |
|||
|
|||
// VERIFICATION 5: MD5 hash should be available
|
|||
if result.Md5Hash == nil { |
|||
t.Error("Expected non-nil Md5Hash") |
|||
} else { |
|||
t.Log("✓ Md5Hash is available") |
|||
} |
|||
|
|||
// VERIFICATION 6: Chunk should have expected properties
|
|||
if len(result.FileChunks) > 0 { |
|||
chunk := result.FileChunks[0] |
|||
if chunk.FileId != "test-fid,1234" { |
|||
t.Errorf("Expected chunk FileId='test-fid,1234', got '%s'", chunk.FileId) |
|||
} |
|||
if chunk.Offset != 0 { |
|||
t.Errorf("Expected chunk Offset=0, got %d", chunk.Offset) |
|||
} |
|||
if chunk.Size != uint64(len(testData)) { |
|||
t.Errorf("Expected chunk Size=%d, got %d", len(testData), chunk.Size) |
|||
} |
|||
t.Logf("✓ Chunk properties validated: FileId=%s, Offset=%d, Size=%d", |
|||
chunk.FileId, chunk.Offset, chunk.Size) |
|||
} |
|||
} |
|||
|
|||
// TestUploadReaderInChunksContextCancellation verifies behavior when context is cancelled
|
|||
func TestUploadReaderInChunksContextCancellation(t *testing.T) { |
|||
testData := bytes.Repeat([]byte("test data"), 10000) // ~90KB
|
|||
reader := bytes.NewReader(testData) |
|||
|
|||
// Create a context that we'll cancel
|
|||
ctx, cancel := context.WithCancel(context.Background()) |
|||
|
|||
// Cancel immediately to trigger cancellation handling
|
|||
cancel() |
|||
|
|||
assignFunc := func(ctx context.Context, count int) (*VolumeAssignRequest, *AssignResult, error) { |
|||
return nil, &AssignResult{ |
|||
Fid: "test-fid,1234", |
|||
Url: "http://test-volume:8080", |
|||
PublicUrl: "http://test-volume:8080", |
|||
Count: 1, |
|||
}, nil |
|||
} |
|||
|
|||
// Mock upload function that simulates successful upload
|
|||
uploadFunc := func(ctx context.Context, data []byte, option *UploadOption) (*UploadResult, error) { |
|||
return &UploadResult{ |
|||
Name: "test-file", |
|||
Size: uint32(len(data)), |
|||
ContentMd5: "mock-md5-hash", |
|||
Error: "", |
|||
}, nil |
|||
} |
|||
|
|||
result, err := UploadReaderInChunks(ctx, reader, &ChunkedUploadOption{ |
|||
ChunkSize: 8 * 1024, |
|||
SmallFileLimit: 256, |
|||
Collection: "test", |
|||
DataCenter: "", |
|||
SaveSmallInline: false, |
|||
AssignFunc: assignFunc, |
|||
UploadFunc: uploadFunc, |
|||
}) |
|||
|
|||
// Should get context cancelled error
|
|||
if err == nil { |
|||
t.Error("Expected context cancellation error") |
|||
} |
|||
|
|||
// Should still get partial results for cleanup
|
|||
if result == nil { |
|||
t.Error("Expected non-nil result even on context cancellation") |
|||
} else { |
|||
t.Logf("✓ Got partial result on cancellation: chunks=%d", len(result.FileChunks)) |
|||
} |
|||
} |
|||
|
|||
// mockFailingReader simulates a reader that fails after reading some data
|
|||
type mockFailingReader struct { |
|||
data []byte |
|||
pos int |
|||
failAfter int |
|||
} |
|||
|
|||
func (m *mockFailingReader) Read(p []byte) (n int, err error) { |
|||
if m.pos >= m.failAfter { |
|||
return 0, errors.New("simulated read failure") |
|||
} |
|||
|
|||
remaining := m.failAfter - m.pos |
|||
toRead := len(p) |
|||
if toRead > remaining { |
|||
toRead = remaining |
|||
} |
|||
if toRead > len(m.data)-m.pos { |
|||
toRead = len(m.data) - m.pos |
|||
} |
|||
|
|||
if toRead == 0 { |
|||
return 0, io.EOF |
|||
} |
|||
|
|||
copy(p, m.data[m.pos:m.pos+toRead]) |
|||
m.pos += toRead |
|||
return toRead, nil |
|||
} |
|||
|
|||
// TestUploadReaderInChunksReaderFailure verifies behavior when reader fails mid-read
|
|||
func TestUploadReaderInChunksReaderFailure(t *testing.T) { |
|||
testData := bytes.Repeat([]byte("test"), 5000) // 20KB
|
|||
failingReader := &mockFailingReader{ |
|||
data: testData, |
|||
pos: 0, |
|||
failAfter: 10000, // Fail after 10KB
|
|||
} |
|||
|
|||
assignFunc := func(ctx context.Context, count int) (*VolumeAssignRequest, *AssignResult, error) { |
|||
return nil, &AssignResult{ |
|||
Fid: "test-fid,1234", |
|||
Url: "http://test-volume:8080", |
|||
PublicUrl: "http://test-volume:8080", |
|||
Count: 1, |
|||
}, nil |
|||
} |
|||
|
|||
// Mock upload function that simulates successful upload
|
|||
uploadFunc := func(ctx context.Context, data []byte, option *UploadOption) (*UploadResult, error) { |
|||
return &UploadResult{ |
|||
Name: "test-file", |
|||
Size: uint32(len(data)), |
|||
ContentMd5: "mock-md5-hash", |
|||
Error: "", |
|||
}, nil |
|||
} |
|||
|
|||
result, err := UploadReaderInChunks(context.Background(), failingReader, &ChunkedUploadOption{ |
|||
ChunkSize: 8 * 1024, // 8KB chunks
|
|||
SmallFileLimit: 256, |
|||
Collection: "test", |
|||
DataCenter: "", |
|||
SaveSmallInline: false, |
|||
AssignFunc: assignFunc, |
|||
UploadFunc: uploadFunc, |
|||
}) |
|||
|
|||
// Should get read error
|
|||
if err == nil { |
|||
t.Error("Expected read failure error") |
|||
} |
|||
|
|||
// Should still get partial results
|
|||
if result == nil { |
|||
t.Fatal("Expected non-nil result on read failure") |
|||
} |
|||
|
|||
t.Logf("✓ Got partial result on read failure: chunks=%d, totalSize=%d", |
|||
len(result.FileChunks), result.TotalSize) |
|||
} |
|||
@ -0,0 +1,307 @@ |
|||
package s3api |
|||
|
|||
import ( |
|||
"bytes" |
|||
"crypto/aes" |
|||
"crypto/cipher" |
|||
"crypto/rand" |
|||
"io" |
|||
"testing" |
|||
) |
|||
|
|||
// TestCalculateIVWithOffset tests the calculateIVWithOffset function
|
|||
func TestCalculateIVWithOffset(t *testing.T) { |
|||
baseIV := make([]byte, 16) |
|||
rand.Read(baseIV) |
|||
|
|||
tests := []struct { |
|||
name string |
|||
offset int64 |
|||
expectedSkip int |
|||
expectedBlock int64 |
|||
}{ |
|||
{"BlockAligned_0", 0, 0, 0}, |
|||
{"BlockAligned_16", 16, 0, 1}, |
|||
{"BlockAligned_32", 32, 0, 2}, |
|||
{"BlockAligned_48", 48, 0, 3}, |
|||
{"NonAligned_1", 1, 1, 0}, |
|||
{"NonAligned_5", 5, 5, 0}, |
|||
{"NonAligned_10", 10, 10, 0}, |
|||
{"NonAligned_15", 15, 15, 0}, |
|||
{"NonAligned_17", 17, 1, 1}, |
|||
{"NonAligned_21", 21, 5, 1}, |
|||
{"NonAligned_33", 33, 1, 2}, |
|||
{"NonAligned_47", 47, 15, 2}, |
|||
{"LargeOffset", 1000, 1000 % 16, 1000 / 16}, |
|||
} |
|||
|
|||
for _, tt := range tests { |
|||
t.Run(tt.name, func(t *testing.T) { |
|||
adjustedIV, skip := calculateIVWithOffset(baseIV, tt.offset) |
|||
|
|||
// Verify skip is correct
|
|||
if skip != tt.expectedSkip { |
|||
t.Errorf("calculateIVWithOffset(%d) skip = %d, want %d", tt.offset, skip, tt.expectedSkip) |
|||
} |
|||
|
|||
// Verify IV length is preserved
|
|||
if len(adjustedIV) != 16 { |
|||
t.Errorf("calculateIVWithOffset(%d) IV length = %d, want 16", tt.offset, len(adjustedIV)) |
|||
} |
|||
|
|||
// Verify IV was adjusted correctly (last 8 bytes incremented by blockOffset)
|
|||
if tt.expectedBlock == 0 { |
|||
if !bytes.Equal(adjustedIV, baseIV) { |
|||
t.Errorf("calculateIVWithOffset(%d) IV changed when blockOffset=0", tt.offset) |
|||
} |
|||
} else { |
|||
// IV should be different for non-zero block offsets
|
|||
if bytes.Equal(adjustedIV, baseIV) { |
|||
t.Errorf("calculateIVWithOffset(%d) IV not changed when blockOffset=%d", tt.offset, tt.expectedBlock) |
|||
} |
|||
} |
|||
}) |
|||
} |
|||
} |
|||
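For readers of the table above, the arithmetic is blockOffset = offset/16 and skip = offset%16, with the IV advanced by blockOffset. A minimal sketch of that adjustment follows, assuming a 16-byte IV whose last 8 bytes act as a big-endian counter (as the test comments state) and ignoring carry into the upper 8 bytes; the real calculateIVWithOffset is the function under test, not this sketch:

package s3api

import "encoding/binary"

// ivWithOffsetSketch is illustrative only. It assumes a 16-byte IV, treats the
// last 8 bytes as a big-endian counter advanced by the number of whole AES
// blocks before the requested offset, and ignores carry into the upper 8 bytes.
func ivWithOffsetSketch(baseIV []byte, offset int64) (adjusted []byte, skip int) {
	const aesBlockSize = 16
	blockOffset := offset / aesBlockSize // whole blocks before the requested byte
	skip = int(offset % aesBlockSize)    // bytes to discard within the first block

	adjusted = make([]byte, len(baseIV))
	copy(adjusted, baseIV)
	ctr := binary.BigEndian.Uint64(adjusted[8:]) + uint64(blockOffset)
	binary.BigEndian.PutUint64(adjusted[8:], ctr)
	return adjusted, skip
}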
|
|||
// TestCTRDecryptionWithNonBlockAlignedOffset tests that CTR decryption works correctly
|
|||
// for non-block-aligned offsets (the critical bug fix)
|
|||
func TestCTRDecryptionWithNonBlockAlignedOffset(t *testing.T) { |
|||
// Generate test data
|
|||
plaintext := make([]byte, 1024) |
|||
for i := range plaintext { |
|||
plaintext[i] = byte(i % 256) |
|||
} |
|||
|
|||
// Generate random key and IV
|
|||
key := make([]byte, 32) // AES-256
|
|||
iv := make([]byte, 16) |
|||
rand.Read(key) |
|||
rand.Read(iv) |
|||
|
|||
// Encrypt the entire plaintext
|
|||
block, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create cipher: %v", err) |
|||
} |
|||
|
|||
ciphertext := make([]byte, len(plaintext)) |
|||
stream := cipher.NewCTR(block, iv) |
|||
stream.XORKeyStream(ciphertext, plaintext) |
|||
|
|||
// Test various offsets (both block-aligned and non-block-aligned)
|
|||
testOffsets := []int64{0, 1, 5, 10, 15, 16, 17, 21, 32, 33, 47, 48, 100, 500} |
|||
|
|||
for _, offset := range testOffsets { |
|||
t.Run(string(rune('A'+offset)), func(t *testing.T) { |
|||
// Calculate adjusted IV and skip
|
|||
adjustedIV, skip := calculateIVWithOffset(iv, offset) |
|||
|
|||
// CRITICAL: Start from the block-aligned offset, not the user offset
|
|||
// CTR mode works on 16-byte blocks, so we need to decrypt from the block start
|
|||
blockAlignedOffset := offset - int64(skip) |
|||
|
|||
// Decrypt from the block-aligned offset
|
|||
decryptBlock, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create decrypt cipher: %v", err) |
|||
} |
|||
|
|||
decryptStream := cipher.NewCTR(decryptBlock, adjustedIV) |
|||
|
|||
// Create a reader for the ciphertext starting at block-aligned offset
|
|||
ciphertextFromBlockStart := ciphertext[blockAlignedOffset:] |
|||
decryptedFromBlockStart := make([]byte, len(ciphertextFromBlockStart)) |
|||
decryptStream.XORKeyStream(decryptedFromBlockStart, ciphertextFromBlockStart) |
|||
|
|||
// CRITICAL: Skip the intra-block bytes to get to the user-requested offset
|
|||
if skip > 0 { |
|||
if skip > len(decryptedFromBlockStart) { |
|||
t.Fatalf("Skip %d exceeds decrypted data length %d", skip, len(decryptedFromBlockStart)) |
|||
} |
|||
decryptedFromBlockStart = decryptedFromBlockStart[skip:] |
|||
} |
|||
|
|||
// Rename for consistency
|
|||
decryptedFromOffset := decryptedFromBlockStart |
|||
|
|||
// Verify decrypted data matches original plaintext
|
|||
expectedPlaintext := plaintext[offset:] |
|||
if !bytes.Equal(decryptedFromOffset, expectedPlaintext) { |
|||
t.Errorf("Decryption mismatch at offset %d (skip=%d)", offset, skip) |
|||
previewLen := 32 |
|||
if len(expectedPlaintext) < previewLen { |
|||
previewLen = len(expectedPlaintext) |
|||
} |
|||
t.Errorf(" Expected first 32 bytes: %x", expectedPlaintext[:previewLen]) |
|||
previewLen2 := 32 |
|||
if len(decryptedFromOffset) < previewLen2 { |
|||
previewLen2 = len(decryptedFromOffset) |
|||
} |
|||
t.Errorf(" Got first 32 bytes: %x", decryptedFromOffset[:previewLen2]) |
|||
|
|||
// Find first mismatch
|
|||
for i := 0; i < len(expectedPlaintext) && i < len(decryptedFromOffset); i++ { |
|||
if expectedPlaintext[i] != decryptedFromOffset[i] { |
|||
t.Errorf(" First mismatch at byte %d: expected %02x, got %02x", i, expectedPlaintext[i], decryptedFromOffset[i]) |
|||
break |
|||
} |
|||
} |
|||
} |
|||
}) |
|||
} |
|||
} |
|||
|
|||
// TestCTRRangeRequestSimulation simulates a real-world S3 range request scenario
|
|||
func TestCTRRangeRequestSimulation(t *testing.T) { |
|||
// Simulate uploading a 5MB object
|
|||
objectSize := 5 * 1024 * 1024 |
|||
plaintext := make([]byte, objectSize) |
|||
for i := range plaintext { |
|||
plaintext[i] = byte(i % 256) |
|||
} |
|||
|
|||
// Encrypt the object
|
|||
key := make([]byte, 32) |
|||
iv := make([]byte, 16) |
|||
rand.Read(key) |
|||
rand.Read(iv) |
|||
|
|||
block, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create cipher: %v", err) |
|||
} |
|||
|
|||
ciphertext := make([]byte, len(plaintext)) |
|||
stream := cipher.NewCTR(block, iv) |
|||
stream.XORKeyStream(ciphertext, plaintext) |
|||
|
|||
// Simulate various S3 range requests
|
|||
rangeTests := []struct { |
|||
name string |
|||
start int64 |
|||
end int64 |
|||
}{ |
|||
{"First byte", 0, 0}, |
|||
{"First 100 bytes", 0, 99}, |
|||
{"Mid-block range", 5, 100}, // Critical: starts at non-aligned offset
|
|||
{"Single mid-block byte", 17, 17}, // Critical: single byte at offset 17
|
|||
{"Cross-block range", 10, 50}, // Spans multiple blocks
|
|||
{"Large range", 1000, 10000}, |
|||
{"Tail range", int64(objectSize - 1000), int64(objectSize - 1)}, |
|||
} |
|||
|
|||
for _, rt := range rangeTests { |
|||
t.Run(rt.name, func(t *testing.T) { |
|||
rangeSize := rt.end - rt.start + 1 |
|||
|
|||
// Calculate adjusted IV and skip for the range start
|
|||
adjustedIV, skip := calculateIVWithOffset(iv, rt.start) |
|||
|
|||
// CRITICAL: Start decryption from block-aligned offset
|
|||
blockAlignedStart := rt.start - int64(skip) |
|||
|
|||
// Create decryption stream
|
|||
decryptBlock, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create decrypt cipher: %v", err) |
|||
} |
|||
|
|||
decryptStream := cipher.NewCTR(decryptBlock, adjustedIV) |
|||
|
|||
// Decrypt from block-aligned start through the end of range
|
|||
ciphertextFromBlock := ciphertext[blockAlignedStart : rt.end+1] |
|||
decryptedFromBlock := make([]byte, len(ciphertextFromBlock)) |
|||
decryptStream.XORKeyStream(decryptedFromBlock, ciphertextFromBlock) |
|||
|
|||
// CRITICAL: Skip intra-block bytes to get to user-requested start
|
|||
if skip > 0 { |
|||
decryptedFromBlock = decryptedFromBlock[skip:] |
|||
} |
|||
|
|||
decryptedRange := decryptedFromBlock |
|||
|
|||
// Verify decrypted range matches original plaintext
|
|||
expectedPlaintext := plaintext[rt.start : rt.end+1] |
|||
if !bytes.Equal(decryptedRange, expectedPlaintext) { |
|||
t.Errorf("Range decryption mismatch for %s (offset=%d, size=%d, skip=%d)", |
|||
rt.name, rt.start, rangeSize, skip) |
|||
previewLen := 64 |
|||
if len(expectedPlaintext) < previewLen { |
|||
previewLen = len(expectedPlaintext) |
|||
} |
|||
t.Errorf(" Expected: %x", expectedPlaintext[:previewLen]) |
|||
previewLen2 := previewLen |
|||
if len(decryptedRange) < previewLen2 { |
|||
previewLen2 = len(decryptedRange) |
|||
} |
|||
t.Errorf(" Got: %x", decryptedRange[:previewLen2]) |
|||
} |
|||
}) |
|||
} |
|||
} |
|||
|
|||
// TestCTRDecryptionWithIOReader tests the integration with io.Reader
|
|||
func TestCTRDecryptionWithIOReader(t *testing.T) { |
|||
plaintext := []byte("Hello, World! This is a test of CTR mode decryption with non-aligned offsets.") |
|||
|
|||
key := make([]byte, 32) |
|||
iv := make([]byte, 16) |
|||
rand.Read(key) |
|||
rand.Read(iv) |
|||
|
|||
// Encrypt
|
|||
block, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create cipher: %v", err) |
|||
} |
|||
|
|||
ciphertext := make([]byte, len(plaintext)) |
|||
stream := cipher.NewCTR(block, iv) |
|||
stream.XORKeyStream(ciphertext, plaintext) |
|||
|
|||
// Test reading from various offsets using io.Reader
|
|||
testOffsets := []int64{0, 5, 10, 16, 17, 30} |
|||
|
|||
for _, offset := range testOffsets { |
|||
t.Run(string(rune('A'+offset)), func(t *testing.T) { |
|||
// Calculate adjusted IV and skip
|
|||
adjustedIV, skip := calculateIVWithOffset(iv, offset) |
|||
|
|||
// CRITICAL: Start reading from block-aligned offset in ciphertext
|
|||
blockAlignedOffset := offset - int64(skip) |
|||
|
|||
// Create decrypted reader
|
|||
decryptBlock, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create decrypt cipher: %v", err) |
|||
} |
|||
|
|||
decryptStream := cipher.NewCTR(decryptBlock, adjustedIV) |
|||
ciphertextReader := bytes.NewReader(ciphertext[blockAlignedOffset:]) |
|||
decryptedReader := &cipher.StreamReader{S: decryptStream, R: ciphertextReader} |
|||
|
|||
// Skip intra-block bytes to get to user-requested offset
|
|||
if skip > 0 { |
|||
_, err := io.CopyN(io.Discard, decryptedReader, int64(skip)) |
|||
if err != nil { |
|||
t.Fatalf("Failed to skip %d bytes: %v", skip, err) |
|||
} |
|||
} |
|||
|
|||
// Read decrypted data
|
|||
decryptedData, err := io.ReadAll(decryptedReader) |
|||
if err != nil { |
|||
t.Fatalf("Failed to read decrypted data: %v", err) |
|||
} |
|||
|
|||
// Verify
|
|||
expectedPlaintext := plaintext[offset:] |
|||
if !bytes.Equal(decryptedData, expectedPlaintext) { |
|||
t.Errorf("Decryption mismatch at offset %d (skip=%d)", offset, skip) |
|||
t.Errorf(" Expected: %q", expectedPlaintext) |
|||
t.Errorf(" Got: %q", decryptedData) |
|||
} |
|||
}) |
|||
} |
|||
} |
|||
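The three CTR tests in this file all repeat the same read pattern: start the stream at the block-aligned offset, then discard offset%16 bytes. A sketch of that pattern as a helper, assuming it lives in the same package as calculateIVWithOffset; the helper name is illustrative and not part of this diff:

package s3api

import (
	"crypto/aes"
	"crypto/cipher"
	"fmt"
	"io"
)

// ctrReaderAtOffset returns a reader that yields plaintext starting at the
// requested byte offset. ciphertextFromBlockStart must already be positioned
// at offset - (offset % 16), i.e. the enclosing AES block boundary.
func ctrReaderAtOffset(key, baseIV []byte, ciphertextFromBlockStart io.Reader, offset int64) (io.Reader, error) {
	adjustedIV, skip := calculateIVWithOffset(baseIV, offset)
	block, err := aes.NewCipher(key)
	if err != nil {
		return nil, fmt.Errorf("create cipher: %w", err)
	}
	r := &cipher.StreamReader{S: cipher.NewCTR(block, adjustedIV), R: ciphertextFromBlockStart}
	if skip > 0 {
		// Discard the intra-block bytes so the caller sees data from the exact offset.
		if _, err := io.CopyN(io.Discard, r, int64(skip)); err != nil {
			return nil, fmt.Errorf("skip %d bytes: %w", skip, err)
		}
	}
	return r, nil
}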
@ -0,0 +1,266 @@ |
|||
package s3api |
|||
|
|||
import ( |
|||
"bytes" |
|||
"crypto/aes" |
|||
"crypto/cipher" |
|||
"crypto/rand" |
|||
"testing" |
|||
|
|||
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb" |
|||
) |
|||
|
|||
// TestSSES3MultipartChunkViewDecryption tests that multipart SSE-S3 objects use per-chunk IVs
|
|||
func TestSSES3MultipartChunkViewDecryption(t *testing.T) { |
|||
// Generate test key and base IV
|
|||
key := make([]byte, 32) |
|||
rand.Read(key) |
|||
baseIV := make([]byte, 16) |
|||
rand.Read(baseIV) |
|||
|
|||
// Create test plaintext
|
|||
plaintext := []byte("This is test data for SSE-S3 multipart encryption testing") |
|||
|
|||
// Simulate multipart upload with 2 parts at different offsets
|
|||
testCases := []struct { |
|||
name string |
|||
partNumber int |
|||
partOffset int64 |
|||
data []byte |
|||
}{ |
|||
{"Part 1", 1, 0, plaintext[:30]}, |
|||
{"Part 2", 2, 5 * 1024 * 1024, plaintext[30:]}, // 5MB offset
|
|||
} |
|||
|
|||
for _, tc := range testCases { |
|||
t.Run(tc.name, func(t *testing.T) { |
|||
// Calculate IV with offset (simulating upload encryption)
|
|||
adjustedIV, _ := calculateIVWithOffset(baseIV, tc.partOffset) |
|||
|
|||
// Encrypt the part data
|
|||
block, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create cipher: %v", err) |
|||
} |
|||
|
|||
ciphertext := make([]byte, len(tc.data)) |
|||
stream := cipher.NewCTR(block, adjustedIV) |
|||
stream.XORKeyStream(ciphertext, tc.data) |
|||
|
|||
// SSE-S3 stores the offset-adjusted IV directly in chunk metadata
|
|||
// (unlike SSE-C which stores base IV + PartOffset)
|
|||
chunkIV := adjustedIV |
|||
|
|||
// Verify the IV is offset-adjusted for non-zero offsets
|
|||
if tc.partOffset == 0 { |
|||
if !bytes.Equal(chunkIV, baseIV) { |
|||
t.Error("IV should equal base IV when offset is 0") |
|||
} |
|||
} else { |
|||
if bytes.Equal(chunkIV, baseIV) { |
|||
t.Error("Chunk IV should be offset-adjusted, not base IV") |
|||
} |
|||
} |
|||
|
|||
// Verify decryption works with the chunk's IV
|
|||
decryptedData := make([]byte, len(ciphertext)) |
|||
decryptBlock, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create decrypt cipher: %v", err) |
|||
} |
|||
decryptStream := cipher.NewCTR(decryptBlock, chunkIV) |
|||
decryptStream.XORKeyStream(decryptedData, ciphertext) |
|||
|
|||
if !bytes.Equal(decryptedData, tc.data) { |
|||
t.Errorf("Decryption failed: expected %q, got %q", tc.data, decryptedData) |
|||
} |
|||
}) |
|||
} |
|||
} |
|||
|
|||
// TestSSES3SinglePartChunkViewDecryption tests single-part SSE-S3 objects use object-level IV
|
|||
func TestSSES3SinglePartChunkViewDecryption(t *testing.T) { |
|||
// Generate test key and IV
|
|||
key := make([]byte, 32) |
|||
rand.Read(key) |
|||
iv := make([]byte, 16) |
|||
rand.Read(iv) |
|||
|
|||
// Create test plaintext
|
|||
plaintext := []byte("This is test data for SSE-S3 single-part encryption testing") |
|||
|
|||
// Encrypt the data
|
|||
block, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create cipher: %v", err) |
|||
} |
|||
|
|||
ciphertext := make([]byte, len(plaintext)) |
|||
stream := cipher.NewCTR(block, iv) |
|||
stream.XORKeyStream(ciphertext, plaintext) |
|||
|
|||
// Create a mock file chunk WITHOUT per-chunk metadata (single-part path)
|
|||
fileChunk := &filer_pb.FileChunk{ |
|||
FileId: "test-file-id", |
|||
Offset: 0, |
|||
Size: uint64(len(ciphertext)), |
|||
SseType: filer_pb.SSEType_SSE_S3, |
|||
SseMetadata: nil, // No per-chunk metadata for single-part
|
|||
} |
|||
|
|||
// Verify the chunk does NOT have per-chunk metadata
|
|||
if len(fileChunk.GetSseMetadata()) > 0 { |
|||
t.Error("Single-part chunk should not have per-chunk metadata") |
|||
} |
|||
|
|||
// For single-part, the object-level IV is used
|
|||
objectLevelIV := iv |
|||
|
|||
// Verify decryption works with the object-level IV
|
|||
decryptedData := make([]byte, len(ciphertext)) |
|||
decryptBlock, _ := aes.NewCipher(key) |
|||
decryptStream := cipher.NewCTR(decryptBlock, objectLevelIV) |
|||
decryptStream.XORKeyStream(decryptedData, ciphertext) |
|||
|
|||
if !bytes.Equal(decryptedData, plaintext) { |
|||
t.Errorf("Decryption failed: expected %q, got %q", plaintext, decryptedData) |
|||
} |
|||
} |
|||
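The two tests above pin down which IV an SSE-S3 reader should use for a given chunk. A sketch of that selection rule, assuming the package's existing DeserializeSSES3Metadata and key manager; this is illustrative, not the actual read path:

package s3api

import "github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"

// sses3IVForChunk picks the IV for decrypting one chunk: multipart chunks
// carry per-chunk metadata whose IV is already offset-adjusted, while
// single-part objects fall back to the object-level IV.
func sses3IVForChunk(chunk *filer_pb.FileChunk, objectLevelIV []byte, keyManager *SSES3KeyManager) ([]byte, error) {
	if chunk.GetSseType() == filer_pb.SSEType_SSE_S3 && len(chunk.GetSseMetadata()) > 0 {
		key, err := DeserializeSSES3Metadata(chunk.GetSseMetadata(), keyManager)
		if err != nil {
			return nil, err
		}
		return key.IV, nil // per-chunk IV, already adjusted for this chunk's offset
	}
	return objectLevelIV, nil // single-part: use the object-level IV directly
}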
|
|||
// TestSSES3IVOffsetCalculation verifies IV offset calculation for multipart uploads
|
|||
func TestSSES3IVOffsetCalculation(t *testing.T) { |
|||
baseIV := make([]byte, 16) |
|||
rand.Read(baseIV) |
|||
|
|||
testCases := []struct { |
|||
name string |
|||
partNumber int |
|||
partSize int64 |
|||
offset int64 |
|||
}{ |
|||
{"Part 1", 1, 5 * 1024 * 1024, 0}, |
|||
{"Part 2", 2, 5 * 1024 * 1024, 5 * 1024 * 1024}, |
|||
{"Part 3", 3, 5 * 1024 * 1024, 10 * 1024 * 1024}, |
|||
{"Part 10", 10, 5 * 1024 * 1024, 45 * 1024 * 1024}, |
|||
} |
|||
|
|||
for _, tc := range testCases { |
|||
t.Run(tc.name, func(t *testing.T) { |
|||
// Calculate IV with offset
|
|||
adjustedIV, skip := calculateIVWithOffset(baseIV, tc.offset) |
|||
|
|||
// Verify IV is different from base (except for offset 0)
|
|||
if tc.offset == 0 { |
|||
if !bytes.Equal(adjustedIV, baseIV) { |
|||
t.Error("IV should equal base IV when offset is 0") |
|||
} |
|||
if skip != 0 { |
|||
t.Errorf("Skip should be 0 when offset is 0, got %d", skip) |
|||
} |
|||
} else { |
|||
if bytes.Equal(adjustedIV, baseIV) { |
|||
t.Error("IV should be different from base IV when offset > 0") |
|||
} |
|||
} |
|||
|
|||
// Verify skip is calculated correctly
|
|||
expectedSkip := int(tc.offset % 16) |
|||
if skip != expectedSkip { |
|||
t.Errorf("Skip mismatch: expected %d, got %d", expectedSkip, skip) |
|||
} |
|||
|
|||
// Verify IV adjustment is deterministic
|
|||
adjustedIV2, skip2 := calculateIVWithOffset(baseIV, tc.offset) |
|||
if !bytes.Equal(adjustedIV, adjustedIV2) || skip != skip2 { |
|||
t.Error("IV calculation is not deterministic") |
|||
} |
|||
}) |
|||
} |
|||
} |
|||
|
|||
// TestSSES3ChunkMetadataDetection tests detection of per-chunk vs object-level metadata
|
|||
func TestSSES3ChunkMetadataDetection(t *testing.T) { |
|||
// Test data for multipart chunk
|
|||
mockMetadata := []byte("mock-serialized-metadata") |
|||
|
|||
testCases := []struct { |
|||
name string |
|||
chunk *filer_pb.FileChunk |
|||
expectedMultipart bool |
|||
}{ |
|||
{ |
|||
name: "Multipart chunk with metadata", |
|||
chunk: &filer_pb.FileChunk{ |
|||
SseType: filer_pb.SSEType_SSE_S3, |
|||
SseMetadata: mockMetadata, |
|||
}, |
|||
expectedMultipart: true, |
|||
}, |
|||
{ |
|||
name: "Single-part chunk without metadata", |
|||
chunk: &filer_pb.FileChunk{ |
|||
SseType: filer_pb.SSEType_SSE_S3, |
|||
SseMetadata: nil, |
|||
}, |
|||
expectedMultipart: false, |
|||
}, |
|||
{ |
|||
name: "Non-SSE-S3 chunk", |
|||
chunk: &filer_pb.FileChunk{ |
|||
SseType: filer_pb.SSEType_NONE, |
|||
SseMetadata: nil, |
|||
}, |
|||
expectedMultipart: false, |
|||
}, |
|||
} |
|||
|
|||
for _, tc := range testCases { |
|||
t.Run(tc.name, func(t *testing.T) { |
|||
hasPerChunkMetadata := tc.chunk.GetSseType() == filer_pb.SSEType_SSE_S3 && len(tc.chunk.GetSseMetadata()) > 0 |
|||
|
|||
if hasPerChunkMetadata != tc.expectedMultipart { |
|||
t.Errorf("Expected multipart=%v, got hasPerChunkMetadata=%v", tc.expectedMultipart, hasPerChunkMetadata) |
|||
} |
|||
}) |
|||
} |
|||
} |
|||
|
|||
// TestSSES3EncryptionConsistency verifies encryption/decryption roundtrip
|
|||
func TestSSES3EncryptionConsistency(t *testing.T) { |
|||
plaintext := []byte("Test data for SSE-S3 encryption consistency verification") |
|||
|
|||
key := make([]byte, 32) |
|||
rand.Read(key) |
|||
iv := make([]byte, 16) |
|||
rand.Read(iv) |
|||
|
|||
// Encrypt
|
|||
block, err := aes.NewCipher(key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create cipher: %v", err) |
|||
} |
|||
|
|||
ciphertext := make([]byte, len(plaintext)) |
|||
encryptStream := cipher.NewCTR(block, iv) |
|||
encryptStream.XORKeyStream(ciphertext, plaintext) |
|||
|
|||
// Decrypt
|
|||
decrypted := make([]byte, len(ciphertext)) |
|||
decryptBlock, _ := aes.NewCipher(key) |
|||
decryptStream := cipher.NewCTR(decryptBlock, iv) |
|||
decryptStream.XORKeyStream(decrypted, ciphertext) |
|||
|
|||
// Verify
|
|||
if !bytes.Equal(decrypted, plaintext) { |
|||
t.Errorf("Decryption mismatch: expected %q, got %q", plaintext, decrypted) |
|||
} |
|||
|
|||
// Verify determinism - decrypting again with a fresh CTR stream (same key and IV) must recover the plaintext
|
|||
decrypted2 := make([]byte, len(ciphertext)) |
|||
decryptStream2 := cipher.NewCTR(decryptBlock, iv) |
|||
decryptStream2.XORKeyStream(decrypted2, ciphertext) |
|||
|
|||
if !bytes.Equal(decrypted2, plaintext) { |
|||
t.Error("Second decryption should also work with fresh stream") |
|||
} |
|||
} |
|||
@ -0,0 +1,285 @@ |
|||
package s3api |
|||
|
|||
import ( |
|||
"io" |
|||
"testing" |
|||
|
|||
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb" |
|||
) |
|||
|
|||
// TestImplicitDirectoryBehaviorLogic tests the core logic for implicit directory detection
|
|||
// This tests the decision logic without requiring a full S3 server setup
|
|||
func TestImplicitDirectoryBehaviorLogic(t *testing.T) { |
|||
tests := []struct { |
|||
name string |
|||
objectPath string |
|||
hasTrailingSlash bool |
|||
fileSize uint64 |
|||
isDirectory bool |
|||
hasChildren bool |
|||
versioningEnabled bool |
|||
shouldReturn404 bool |
|||
description string |
|||
}{ |
|||
{ |
|||
name: "Implicit directory: 0-byte file with children, no trailing slash", |
|||
objectPath: "dataset", |
|||
hasTrailingSlash: false, |
|||
fileSize: 0, |
|||
isDirectory: false, |
|||
hasChildren: true, |
|||
versioningEnabled: false, |
|||
shouldReturn404: true, |
|||
description: "Should return 404 to force s3fs LIST-based discovery", |
|||
}, |
|||
{ |
|||
name: "Implicit directory: actual directory with children, no trailing slash", |
|||
objectPath: "dataset", |
|||
hasTrailingSlash: false, |
|||
fileSize: 0, |
|||
isDirectory: true, |
|||
hasChildren: true, |
|||
versioningEnabled: false, |
|||
shouldReturn404: true, |
|||
description: "Should return 404 for directory with children", |
|||
}, |
|||
{ |
|||
name: "Explicit directory request: trailing slash", |
|||
objectPath: "dataset/", |
|||
hasTrailingSlash: true, |
|||
fileSize: 0, |
|||
isDirectory: true, |
|||
hasChildren: true, |
|||
versioningEnabled: false, |
|||
shouldReturn404: false, |
|||
description: "Should return 200 for explicit directory request (trailing slash)", |
|||
}, |
|||
{ |
|||
name: "Empty file: 0-byte file without children", |
|||
objectPath: "empty.txt", |
|||
hasTrailingSlash: false, |
|||
fileSize: 0, |
|||
isDirectory: false, |
|||
hasChildren: false, |
|||
versioningEnabled: false, |
|||
shouldReturn404: false, |
|||
description: "Should return 200 for legitimate empty file", |
|||
}, |
|||
{ |
|||
name: "Empty directory: 0-byte directory without children", |
|||
objectPath: "empty-dir", |
|||
hasTrailingSlash: false, |
|||
fileSize: 0, |
|||
isDirectory: true, |
|||
hasChildren: false, |
|||
versioningEnabled: false, |
|||
shouldReturn404: false, |
|||
description: "Should return 200 for empty directory", |
|||
}, |
|||
{ |
|||
name: "Regular file: non-zero size", |
|||
objectPath: "file.txt", |
|||
hasTrailingSlash: false, |
|||
fileSize: 100, |
|||
isDirectory: false, |
|||
hasChildren: false, |
|||
versioningEnabled: false, |
|||
shouldReturn404: false, |
|||
description: "Should return 200 for regular file with content", |
|||
}, |
|||
{ |
|||
name: "Versioned bucket: implicit directory should return 200", |
|||
objectPath: "dataset", |
|||
hasTrailingSlash: false, |
|||
fileSize: 0, |
|||
isDirectory: false, |
|||
hasChildren: true, |
|||
versioningEnabled: true, |
|||
shouldReturn404: false, |
|||
description: "Should return 200 for versioned buckets (skip implicit dir check)", |
|||
}, |
|||
{ |
|||
name: "PyArrow directory marker: 0-byte with children", |
|||
objectPath: "dataset", |
|||
hasTrailingSlash: false, |
|||
fileSize: 0, |
|||
isDirectory: false, |
|||
hasChildren: true, |
|||
versioningEnabled: false, |
|||
shouldReturn404: true, |
|||
description: "Should return 404 for PyArrow-created directory markers", |
|||
}, |
|||
} |
|||
|
|||
for _, tt := range tests { |
|||
t.Run(tt.name, func(t *testing.T) { |
|||
// Test the logic: should we return 404?
|
|||
// Logic from HeadObjectHandler:
|
|||
// if !versioningConfigured && !strings.HasSuffix(object, "/") {
|
|||
// if isZeroByteFile || isActualDirectory {
|
|||
// if hasChildren {
|
|||
// return 404
|
|||
// }
|
|||
// }
|
|||
// }
|
|||
|
|||
isZeroByteFile := tt.fileSize == 0 && !tt.isDirectory |
|||
isActualDirectory := tt.isDirectory |
|||
|
|||
shouldReturn404 := false |
|||
if !tt.versioningEnabled && !tt.hasTrailingSlash { |
|||
if isZeroByteFile || isActualDirectory { |
|||
if tt.hasChildren { |
|||
shouldReturn404 = true |
|||
} |
|||
} |
|||
} |
|||
|
|||
if shouldReturn404 != tt.shouldReturn404 { |
|||
t.Errorf("Logic mismatch for %s:\n Expected shouldReturn404=%v\n Got shouldReturn404=%v\n Description: %s", |
|||
tt.name, tt.shouldReturn404, shouldReturn404, tt.description) |
|||
} else { |
|||
t.Logf("✓ %s: correctly returns %d", tt.name, map[bool]int{true: 404, false: 200}[shouldReturn404]) |
|||
} |
|||
}) |
|||
} |
|||
} |
|||
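The decision rule exercised by the table above, written out as a small standalone function for readability. This is a sketch reconstructed from the test, not the handler code itself (which lives in s3api_object_handlers.go); the function name is illustrative:

package s3api

// shouldReturn404ForImplicitDir mirrors the HEAD logic the table-driven test encodes.
func shouldReturn404ForImplicitDir(versioningConfigured, trailingSlash, isDirectory bool, size uint64, hasChildren bool) bool {
	if versioningConfigured || trailingSlash {
		return false // versioned buckets and explicit "dir/" requests keep normal semantics
	}
	zeroByteFile := size == 0 && !isDirectory
	if (zeroByteFile || isDirectory) && hasChildren {
		return true // implicit directory marker: force s3fs to fall back to LIST
	}
	return false
}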
|
|||
// TestHasChildrenLogic tests the hasChildren helper function logic
|
|||
func TestHasChildrenLogic(t *testing.T) { |
|||
tests := []struct { |
|||
name string |
|||
bucket string |
|||
prefix string |
|||
listResponse *filer_pb.ListEntriesResponse |
|||
listError error |
|||
expectedResult bool |
|||
description string |
|||
}{ |
|||
{ |
|||
name: "Directory with children", |
|||
bucket: "test-bucket", |
|||
prefix: "dataset", |
|||
listResponse: &filer_pb.ListEntriesResponse{ |
|||
Entry: &filer_pb.Entry{ |
|||
Name: "file.parquet", |
|||
IsDirectory: false, |
|||
}, |
|||
}, |
|||
listError: nil, |
|||
expectedResult: true, |
|||
description: "Should return true when at least one child exists", |
|||
}, |
|||
{ |
|||
name: "Empty directory", |
|||
bucket: "test-bucket", |
|||
prefix: "empty-dir", |
|||
listResponse: nil, |
|||
listError: io.EOF, |
|||
expectedResult: false, |
|||
description: "Should return false when no children exist (EOF)", |
|||
}, |
|||
{ |
|||
name: "Directory with leading slash in prefix", |
|||
bucket: "test-bucket", |
|||
prefix: "/dataset", |
|||
listResponse: &filer_pb.ListEntriesResponse{ |
|||
Entry: &filer_pb.Entry{ |
|||
Name: "file.parquet", |
|||
IsDirectory: false, |
|||
}, |
|||
}, |
|||
listError: nil, |
|||
expectedResult: true, |
|||
description: "Should handle leading slashes correctly", |
|||
}, |
|||
} |
|||
|
|||
for _, tt := range tests { |
|||
t.Run(tt.name, func(t *testing.T) { |
|||
// Test the hasChildren logic:
|
|||
// 1. It should trim leading slashes from prefix
|
|||
// 2. It should list with Limit=1
|
|||
// 3. It should return true if any entry is received
|
|||
// 4. It should return false if EOF is received
|
|||
|
|||
hasChildren := false |
|||
if tt.listError == nil && tt.listResponse != nil { |
|||
hasChildren = true |
|||
} else if tt.listError == io.EOF { |
|||
hasChildren = false |
|||
} |
|||
|
|||
if hasChildren != tt.expectedResult { |
|||
t.Errorf("hasChildren logic mismatch for %s:\n Expected: %v\n Got: %v\n Description: %s", |
|||
tt.name, tt.expectedResult, hasChildren, tt.description) |
|||
} else { |
|||
t.Logf("✓ %s: correctly returns %v", tt.name, hasChildren) |
|||
} |
|||
}) |
|||
} |
|||
} |
|||
|
|||
// TestImplicitDirectoryEdgeCases tests edge cases in the implicit directory detection
|
|||
func TestImplicitDirectoryEdgeCases(t *testing.T) { |
|||
tests := []struct { |
|||
name string |
|||
scenario string |
|||
expectation string |
|||
}{ |
|||
{ |
|||
name: "PyArrow write_dataset creates 0-byte files", |
|||
scenario: "PyArrow creates 'dataset' as 0-byte file, then writes 'dataset/file.parquet'", |
|||
expectation: "HEAD dataset → 404 (has children), s3fs uses LIST → correctly identifies as directory", |
|||
}, |
|||
{ |
|||
name: "Filer creates actual directories", |
|||
scenario: "Filer creates 'dataset' as actual directory with IsDirectory=true", |
|||
expectation: "HEAD dataset → 404 (has children), s3fs uses LIST → correctly identifies as directory", |
|||
}, |
|||
{ |
|||
name: "Empty file edge case", |
|||
scenario: "User creates 'empty.txt' as 0-byte file with no children", |
|||
expectation: "HEAD empty.txt → 200 (no children), s3fs correctly reports as file", |
|||
}, |
|||
{ |
|||
name: "Explicit directory request", |
|||
scenario: "User requests 'dataset/' with trailing slash", |
|||
expectation: "HEAD dataset/ → 200 (explicit directory request), normal directory behavior", |
|||
}, |
|||
{ |
|||
name: "Versioned bucket", |
|||
scenario: "Bucket has versioning enabled", |
|||
expectation: "HEAD dataset → 200 (skip implicit dir check), versioned semantics apply", |
|||
}, |
|||
{ |
|||
name: "AWS S3 compatibility", |
|||
scenario: "Only 'dataset/file.txt' exists, no marker at 'dataset'", |
|||
expectation: "HEAD dataset → 404 (object doesn't exist), matches AWS S3 behavior", |
|||
}, |
|||
} |
|||
|
|||
for _, tt := range tests { |
|||
t.Run(tt.name, func(t *testing.T) { |
|||
t.Logf("Scenario: %s", tt.scenario) |
|||
t.Logf("Expected: %s", tt.expectation) |
|||
}) |
|||
} |
|||
} |
|||
|
|||
// TestImplicitDirectoryIntegration is an integration test placeholder
|
|||
// Run with: cd test/s3/parquet && make test-implicit-dir-with-server
|
|||
func TestImplicitDirectoryIntegration(t *testing.T) { |
|||
if testing.Short() { |
|||
t.Skip("Skipping integration test in short mode") |
|||
} |
|||
|
|||
t.Skip("Integration test - run manually with: cd test/s3/parquet && make test-implicit-dir-with-server") |
|||
} |
|||
|
|||
// Benchmark for hasChildren performance
|
|||
func BenchmarkHasChildrenCheck(b *testing.B) { |
|||
// This benchmark would measure the performance impact of the hasChildren check
|
|||
// Expected: ~1-5ms per call (one gRPC LIST request with Limit=1)
|
|||
b.Skip("Benchmark - requires full filer setup") |
|||
} |
|||
weed/s3api/s3api_object_handlers.go (2763 lines changed): diff suppressed because it is too large.
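Because the putToFiler diff itself is suppressed, here is a sketch of the per-chunk SSE-KMS metadata assignment that the tests below exercise, reconstructed from those tests rather than from the suppressed file; names and fields follow the test code:

package s3api

import "github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"

// assignSSEKMSChunkMetadata gives each chunk its own serialized SSE-KMS
// metadata with a chunk-specific offset, instead of reusing the base metadata
// (the bug the tests below guard against).
func assignSSEKMSChunkMetadata(chunks []*filer_pb.FileChunk, baseKey *SSEKMSKey) error {
	for _, chunk := range chunks {
		chunk.SseType = filer_pb.SSEType_SSE_KMS
		chunkKey := &SSEKMSKey{
			KeyID:             baseKey.KeyID,
			EncryptedDataKey:  baseKey.EncryptedDataKey,
			EncryptionContext: baseKey.EncryptionContext,
			BucketKeyEnabled:  baseKey.BucketKeyEnabled,
			IV:                baseKey.IV,
			ChunkOffset:       chunk.Offset, // per-chunk offset is the essential part of the fix
		}
		metadata, err := SerializeSSEKMSMetadata(chunkKey)
		if err != nil {
			return err
		}
		chunk.SseMetadata = metadata
	}
	return nil
}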
@ -0,0 +1,361 @@ |
|||
package s3api |
|||
|
|||
import ( |
|||
"bytes" |
|||
"crypto/aes" |
|||
"crypto/cipher" |
|||
"crypto/rand" |
|||
"encoding/json" |
|||
"io" |
|||
"testing" |
|||
|
|||
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb" |
|||
) |
|||
|
|||
// TestSSEKMSChunkMetadataAssignment tests that SSE-KMS creates per-chunk metadata
|
|||
// with correct ChunkOffset values for each chunk (matching the fix in putToFiler)
|
|||
func TestSSEKMSChunkMetadataAssignment(t *testing.T) { |
|||
kmsKey := SetupTestKMS(t) |
|||
defer kmsKey.Cleanup() |
|||
|
|||
// Generate SSE-KMS key by encrypting test data (this gives us a real SSEKMSKey)
|
|||
encryptionContext := BuildEncryptionContext("test-bucket", "test-object", false) |
|||
testData := "Test data for SSE-KMS chunk metadata validation" |
|||
encryptedReader, sseKMSKey, err := CreateSSEKMSEncryptedReader(bytes.NewReader([]byte(testData)), kmsKey.KeyID, encryptionContext) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create encrypted reader: %v", err) |
|||
} |
|||
// Read to complete encryption setup
|
|||
io.ReadAll(encryptedReader) |
|||
|
|||
// Serialize the base metadata (what putToFiler receives before chunking)
|
|||
baseMetadata, err := SerializeSSEKMSMetadata(sseKMSKey) |
|||
if err != nil { |
|||
t.Fatalf("Failed to serialize base SSE-KMS metadata: %v", err) |
|||
} |
|||
|
|||
// Simulate multi-chunk upload scenario (what putToFiler does after UploadReaderInChunks)
|
|||
simulatedChunks := []*filer_pb.FileChunk{ |
|||
{FileId: "chunk1", Offset: 0, Size: 8 * 1024 * 1024}, // 8MB chunk at offset 0
|
|||
{FileId: "chunk2", Offset: 8 * 1024 * 1024, Size: 8 * 1024 * 1024}, // 8MB chunk at offset 8MB
|
|||
{FileId: "chunk3", Offset: 16 * 1024 * 1024, Size: 4 * 1024 * 1024}, // 4MB chunk at offset 16MB
|
|||
} |
|||
|
|||
// THIS IS THE CRITICAL FIX: Create per-chunk metadata (lines 421-443 in putToFiler)
|
|||
for _, chunk := range simulatedChunks { |
|||
chunk.SseType = filer_pb.SSEType_SSE_KMS |
|||
|
|||
// Create a copy of the SSE-KMS key with chunk-specific offset
|
|||
chunkSSEKey := &SSEKMSKey{ |
|||
KeyID: sseKMSKey.KeyID, |
|||
EncryptedDataKey: sseKMSKey.EncryptedDataKey, |
|||
EncryptionContext: sseKMSKey.EncryptionContext, |
|||
BucketKeyEnabled: sseKMSKey.BucketKeyEnabled, |
|||
IV: sseKMSKey.IV, |
|||
ChunkOffset: chunk.Offset, // Set chunk-specific offset
|
|||
} |
|||
|
|||
// Serialize per-chunk metadata
|
|||
chunkMetadata, serErr := SerializeSSEKMSMetadata(chunkSSEKey) |
|||
if serErr != nil { |
|||
t.Fatalf("Failed to serialize SSE-KMS metadata for chunk at offset %d: %v", chunk.Offset, serErr) |
|||
} |
|||
chunk.SseMetadata = chunkMetadata |
|||
} |
|||
|
|||
// VERIFICATION 1: Each chunk should have different metadata (due to different ChunkOffset)
|
|||
metadataSet := make(map[string]bool) |
|||
for i, chunk := range simulatedChunks { |
|||
metadataStr := string(chunk.SseMetadata) |
|||
if metadataSet[metadataStr] { |
|||
t.Errorf("Chunk %d has duplicate metadata (should be unique per chunk)", i) |
|||
} |
|||
metadataSet[metadataStr] = true |
|||
|
|||
// Deserialize and verify ChunkOffset
|
|||
var metadata SSEKMSMetadata |
|||
if err := json.Unmarshal(chunk.SseMetadata, &metadata); err != nil { |
|||
t.Fatalf("Failed to deserialize chunk %d metadata: %v", i, err) |
|||
} |
|||
|
|||
expectedOffset := chunk.Offset |
|||
if metadata.PartOffset != expectedOffset { |
|||
t.Errorf("Chunk %d: expected PartOffset=%d, got %d", i, expectedOffset, metadata.PartOffset) |
|||
} |
|||
|
|||
t.Logf("✓ Chunk %d: PartOffset=%d (correct)", i, metadata.PartOffset) |
|||
} |
|||
|
|||
// VERIFICATION 2: Verify metadata can be deserialized and has correct ChunkOffset
|
|||
for i, chunk := range simulatedChunks { |
|||
// Deserialize chunk metadata
|
|||
deserializedKey, err := DeserializeSSEKMSMetadata(chunk.SseMetadata) |
|||
if err != nil { |
|||
t.Fatalf("Failed to deserialize chunk %d metadata: %v", i, err) |
|||
} |
|||
|
|||
// Verify the deserialized key has correct ChunkOffset
|
|||
if deserializedKey.ChunkOffset != chunk.Offset { |
|||
t.Errorf("Chunk %d: deserialized ChunkOffset=%d, expected %d", |
|||
i, deserializedKey.ChunkOffset, chunk.Offset) |
|||
} |
|||
|
|||
// Verify IV is set (should be inherited from base)
|
|||
if len(deserializedKey.IV) != aes.BlockSize { |
|||
t.Errorf("Chunk %d: invalid IV length: %d", i, len(deserializedKey.IV)) |
|||
} |
|||
|
|||
// Verify KeyID matches
|
|||
if deserializedKey.KeyID != sseKMSKey.KeyID { |
|||
t.Errorf("Chunk %d: KeyID mismatch", i) |
|||
} |
|||
|
|||
t.Logf("✓ Chunk %d: metadata deserialized successfully (ChunkOffset=%d, KeyID=%s)", |
|||
i, deserializedKey.ChunkOffset, deserializedKey.KeyID) |
|||
} |
|||
|
|||
// VERIFICATION 3: Ensure base metadata is NOT reused (the bug we're preventing)
|
|||
var baseMetadataStruct SSEKMSMetadata |
|||
if err := json.Unmarshal(baseMetadata, &baseMetadataStruct); err != nil { |
|||
t.Fatalf("Failed to deserialize base metadata: %v", err) |
|||
} |
|||
|
|||
// Base metadata should have ChunkOffset=0
|
|||
if baseMetadataStruct.PartOffset != 0 { |
|||
t.Errorf("Base metadata should have PartOffset=0, got %d", baseMetadataStruct.PartOffset) |
|||
} |
|||
|
|||
// Chunks 2 and 3 should NOT have the same metadata as base (proving we're not reusing)
|
|||
for i := 1; i < len(simulatedChunks); i++ { |
|||
if bytes.Equal(simulatedChunks[i].SseMetadata, baseMetadata) { |
|||
t.Errorf("CRITICAL BUG: Chunk %d reuses base metadata (should have per-chunk metadata)", i) |
|||
} |
|||
} |
|||
|
|||
t.Log("✓ All chunks have unique per-chunk metadata (bug prevented)") |
|||
} |
|||
|
|||
// TestSSES3ChunkMetadataAssignment tests that SSE-S3 creates per-chunk metadata
|
|||
// with offset-adjusted IVs for each chunk (matching the fix in putToFiler)
|
|||
func TestSSES3ChunkMetadataAssignment(t *testing.T) { |
|||
// Initialize global SSE-S3 key manager
|
|||
globalSSES3KeyManager = NewSSES3KeyManager() |
|||
defer func() { |
|||
globalSSES3KeyManager = NewSSES3KeyManager() |
|||
}() |
|||
|
|||
keyManager := GetSSES3KeyManager() |
|||
keyManager.superKey = make([]byte, 32) |
|||
rand.Read(keyManager.superKey) |
|||
|
|||
// Generate SSE-S3 key
|
|||
sseS3Key, err := GenerateSSES3Key() |
|||
if err != nil { |
|||
t.Fatalf("Failed to generate SSE-S3 key: %v", err) |
|||
} |
|||
|
|||
// Generate base IV
|
|||
baseIV := make([]byte, aes.BlockSize) |
|||
rand.Read(baseIV) |
|||
sseS3Key.IV = baseIV |
|||
|
|||
// Serialize base metadata (what putToFiler receives)
|
|||
baseMetadata, err := SerializeSSES3Metadata(sseS3Key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to serialize base SSE-S3 metadata: %v", err) |
|||
} |
|||
|
|||
// Simulate multi-chunk upload scenario (what putToFiler does after UploadReaderInChunks)
|
|||
simulatedChunks := []*filer_pb.FileChunk{ |
|||
{FileId: "chunk1", Offset: 0, Size: 8 * 1024 * 1024}, // 8MB chunk at offset 0
|
|||
{FileId: "chunk2", Offset: 8 * 1024 * 1024, Size: 8 * 1024 * 1024}, // 8MB chunk at offset 8MB
|
|||
{FileId: "chunk3", Offset: 16 * 1024 * 1024, Size: 4 * 1024 * 1024}, // 4MB chunk at offset 16MB
|
|||
} |
|||
|
|||
// THIS IS THE CRITICAL FIX: Create per-chunk metadata (lines 444-468 in putToFiler)
|
|||
for _, chunk := range simulatedChunks { |
|||
chunk.SseType = filer_pb.SSEType_SSE_S3 |
|||
|
|||
// Calculate chunk-specific IV using base IV and chunk offset
|
|||
chunkIV, _ := calculateIVWithOffset(sseS3Key.IV, chunk.Offset) |
|||
|
|||
// Create a copy of the SSE-S3 key with chunk-specific IV
|
|||
chunkSSEKey := &SSES3Key{ |
|||
Key: sseS3Key.Key, |
|||
KeyID: sseS3Key.KeyID, |
|||
Algorithm: sseS3Key.Algorithm, |
|||
IV: chunkIV, // Use chunk-specific IV
|
|||
} |
|||
|
|||
// Serialize per-chunk metadata
|
|||
chunkMetadata, serErr := SerializeSSES3Metadata(chunkSSEKey) |
|||
if serErr != nil { |
|||
t.Fatalf("Failed to serialize SSE-S3 metadata for chunk at offset %d: %v", chunk.Offset, serErr) |
|||
} |
|||
chunk.SseMetadata = chunkMetadata |
|||
} |
|||
|
|||
// VERIFICATION 1: Each chunk should have different metadata (due to different IVs)
|
|||
metadataSet := make(map[string]bool) |
|||
for i, chunk := range simulatedChunks { |
|||
metadataStr := string(chunk.SseMetadata) |
|||
if metadataSet[metadataStr] { |
|||
t.Errorf("Chunk %d has duplicate metadata (should be unique per chunk)", i) |
|||
} |
|||
metadataSet[metadataStr] = true |
|||
|
|||
// Deserialize and verify IV
|
|||
deserializedKey, err := DeserializeSSES3Metadata(chunk.SseMetadata, keyManager) |
|||
if err != nil { |
|||
t.Fatalf("Failed to deserialize chunk %d metadata: %v", i, err) |
|||
} |
|||
|
|||
// Calculate expected IV for this chunk
|
|||
expectedIV, _ := calculateIVWithOffset(baseIV, chunk.Offset) |
|||
if !bytes.Equal(deserializedKey.IV, expectedIV) { |
|||
t.Errorf("Chunk %d: IV mismatch\nExpected: %x\nGot: %x", |
|||
i, expectedIV[:8], deserializedKey.IV[:8]) |
|||
} |
|||
|
|||
t.Logf("✓ Chunk %d: IV correctly adjusted for offset=%d", i, chunk.Offset) |
|||
} |
|||
|
|||
// VERIFICATION 2: Verify decryption works with per-chunk IVs
|
|||
for i, chunk := range simulatedChunks { |
|||
// Deserialize chunk metadata
|
|||
deserializedKey, err := DeserializeSSES3Metadata(chunk.SseMetadata, keyManager) |
|||
if err != nil { |
|||
t.Fatalf("Failed to deserialize chunk %d metadata: %v", i, err) |
|||
} |
|||
|
|||
// Simulate encryption/decryption with the chunk's IV
|
|||
testData := []byte("Test data for SSE-S3 chunk decryption verification") |
|||
block, err := aes.NewCipher(deserializedKey.Key) |
|||
if err != nil { |
|||
t.Fatalf("Failed to create cipher: %v", err) |
|||
} |
|||
|
|||
// Encrypt with chunk's IV
|
|||
ciphertext := make([]byte, len(testData)) |
|||
stream := cipher.NewCTR(block, deserializedKey.IV) |
|||
stream.XORKeyStream(ciphertext, testData) |
|||
|
|||
// Decrypt with chunk's IV
|
|||
plaintext := make([]byte, len(ciphertext)) |
|||
block2, _ := aes.NewCipher(deserializedKey.Key) |
|||
stream2 := cipher.NewCTR(block2, deserializedKey.IV) |
|||
stream2.XORKeyStream(plaintext, ciphertext) |
|||
|
|||
if !bytes.Equal(plaintext, testData) { |
|||
t.Errorf("Chunk %d: decryption failed", i) |
|||
} |
|||
|
|||
t.Logf("✓ Chunk %d: encryption/decryption successful with chunk-specific IV", i) |
|||
} |
|||
|
|||
// VERIFICATION 3: Ensure base IV is NOT reused for non-zero offset chunks (the bug we're preventing)
|
|||
for i := 1; i < len(simulatedChunks); i++ { |
|||
if bytes.Equal(simulatedChunks[i].SseMetadata, baseMetadata) { |
|||
t.Errorf("CRITICAL BUG: Chunk %d reuses base metadata (should have per-chunk metadata)", i) |
|||
} |
|||
|
|||
// Verify chunk metadata has different IV than base IV
|
|||
deserializedKey, _ := DeserializeSSES3Metadata(simulatedChunks[i].SseMetadata, keyManager) |
|||
if bytes.Equal(deserializedKey.IV, baseIV) { |
|||
t.Errorf("CRITICAL BUG: Chunk %d uses base IV (should use offset-adjusted IV)", i) |
|||
} |
|||
} |
|||
|
|||
t.Log("✓ All chunks have unique per-chunk IVs (bug prevented)") |
|||
} |
|||
|
|||
// TestSSEChunkMetadataComparison tests that the bug (reusing same metadata for all chunks)
|
|||
// would cause decryption failures, while the fix (per-chunk metadata) works correctly
|
|||
func TestSSEChunkMetadataComparison(t *testing.T) { |
|||
// Generate test key and IV
|
|||
key := make([]byte, 32) |
|||
rand.Read(key) |
|||
baseIV := make([]byte, aes.BlockSize) |
|||
rand.Read(baseIV) |
|||
|
|||
// Create test data for 3 chunks
|
|||
chunk0Data := []byte("Chunk 0 data at offset 0") |
|||
chunk1Data := []byte("Chunk 1 data at offset 8MB") |
|||
chunk2Data := []byte("Chunk 2 data at offset 16MB") |
|||
|
|||
chunkOffsets := []int64{0, 8 * 1024 * 1024, 16 * 1024 * 1024} |
|||
chunkDataList := [][]byte{chunk0Data, chunk1Data, chunk2Data} |
|||
|
|||
// Scenario 1: BUG - Using same IV for all chunks (what the old code did)
|
|||
t.Run("Bug: Reusing base IV causes decryption failures", func(t *testing.T) { |
|||
var encryptedChunks [][]byte |
|||
|
|||
// Encrypt each chunk with offset-adjusted IV (what encryption does)
|
|||
for i, offset := range chunkOffsets { |
|||
adjustedIV, _ := calculateIVWithOffset(baseIV, offset) |
|||
block, _ := aes.NewCipher(key) |
|||
stream := cipher.NewCTR(block, adjustedIV) |
|||
|
|||
ciphertext := make([]byte, len(chunkDataList[i])) |
|||
stream.XORKeyStream(ciphertext, chunkDataList[i]) |
|||
encryptedChunks = append(encryptedChunks, ciphertext) |
|||
} |
|||
|
|||
// Try to decrypt with base IV (THE BUG)
|
|||
for i := range encryptedChunks { |
|||
block, _ := aes.NewCipher(key) |
|||
stream := cipher.NewCTR(block, baseIV) // BUG: Always using base IV
|
|||
|
|||
plaintext := make([]byte, len(encryptedChunks[i])) |
|||
stream.XORKeyStream(plaintext, encryptedChunks[i]) |
|||
|
|||
if i == 0 { |
|||
// Chunk 0 should work (offset 0 means base IV = adjusted IV)
|
|||
if !bytes.Equal(plaintext, chunkDataList[i]) { |
|||
t.Errorf("Chunk 0 decryption failed (unexpected)") |
|||
} |
|||
} else { |
|||
// Chunks 1 and 2 should FAIL (wrong IV)
|
|||
if bytes.Equal(plaintext, chunkDataList[i]) { |
|||
t.Errorf("BUG NOT REPRODUCED: Chunk %d decrypted correctly with base IV (should fail)", i) |
|||
} else { |
|||
t.Logf("✓ Chunk %d: Correctly failed to decrypt with base IV (bug reproduced)", i) |
|||
} |
|||
} |
|||
} |
|||
}) |
|||
|
|||
// Scenario 2: FIX - Using per-chunk offset-adjusted IVs (what the new code does)
|
|||
t.Run("Fix: Per-chunk IVs enable correct decryption", func(t *testing.T) { |
|||
var encryptedChunks [][]byte |
|||
var chunkIVs [][]byte |
|||
|
|||
// Encrypt each chunk with offset-adjusted IV
|
|||
for i, offset := range chunkOffsets { |
|||
adjustedIV, _ := calculateIVWithOffset(baseIV, offset) |
|||
chunkIVs = append(chunkIVs, adjustedIV) |
|||
|
|||
block, _ := aes.NewCipher(key) |
|||
stream := cipher.NewCTR(block, adjustedIV) |
|||
|
|||
ciphertext := make([]byte, len(chunkDataList[i])) |
|||
stream.XORKeyStream(ciphertext, chunkDataList[i]) |
|||
encryptedChunks = append(encryptedChunks, ciphertext) |
|||
} |
|||
|
|||
// Decrypt with per-chunk IVs (THE FIX)
|
|||
for i := range encryptedChunks { |
|||
block, _ := aes.NewCipher(key) |
|||
stream := cipher.NewCTR(block, chunkIVs[i]) // FIX: Using per-chunk IV
|
|||
|
|||
plaintext := make([]byte, len(encryptedChunks[i])) |
|||
stream.XORKeyStream(plaintext, encryptedChunks[i]) |
|||
|
|||
if !bytes.Equal(plaintext, chunkDataList[i]) { |
|||
t.Errorf("Chunk %d decryption failed with per-chunk IV (unexpected)", i) |
|||
} else { |
|||
t.Logf("✓ Chunk %d: Successfully decrypted with per-chunk IV", i) |
|||
} |
|||
} |
|||
}) |
|||
} |
@@ -0,0 +1,189 @@
package s3api

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"
	"testing"
)

// TestSSECDecryptChunkView_NoOffsetAdjustment verifies that SSE-C decryption
// does NOT apply calculateIVWithOffset, preventing the critical bug where
// offset adjustment would cause CTR stream misalignment and data corruption.
func TestSSECDecryptChunkView_NoOffsetAdjustment(t *testing.T) {
	// Setup: Create test data
	plaintext := []byte("This is a test message for SSE-C decryption without offset adjustment")
	customerKey := &SSECustomerKey{
		Key:    make([]byte, 32), // 256-bit key
		KeyMD5: "test-key-md5",
	}
	// Generate random AES key
	if _, err := rand.Read(customerKey.Key); err != nil {
		t.Fatalf("Failed to generate random key: %v", err)
	}

	// Generate random IV for this "part"
	randomIV := make([]byte, aes.BlockSize)
	if _, err := rand.Read(randomIV); err != nil {
		t.Fatalf("Failed to generate random IV: %v", err)
	}

	// Encrypt the plaintext using the random IV (simulating SSE-C multipart upload).
	// This is what CreateSSECEncryptedReader does - uses the IV directly without offset.
	block, err := aes.NewCipher(customerKey.Key)
	if err != nil {
		t.Fatalf("Failed to create cipher: %v", err)
	}
	ciphertext := make([]byte, len(plaintext))
	stream := cipher.NewCTR(block, randomIV)
	stream.XORKeyStream(ciphertext, plaintext)

	partOffset := int64(1024) // Non-zero offset that should NOT be applied during SSE-C decryption

	// TEST: Decrypt using stored IV directly (correct behavior)
	decryptedReaderCorrect, err := CreateSSECDecryptedReader(
		io.NopCloser(bytes.NewReader(ciphertext)),
		customerKey,
		randomIV, // Use stored IV directly - CORRECT
	)
	if err != nil {
		t.Fatalf("Failed to create decrypted reader (correct): %v", err)
	}
	decryptedCorrect, err := io.ReadAll(decryptedReaderCorrect)
	if err != nil {
		t.Fatalf("Failed to read decrypted data (correct): %v", err)
	}

	// Verify correct decryption
	if !bytes.Equal(decryptedCorrect, plaintext) {
		t.Errorf("Correct decryption failed:\nExpected: %s\nGot: %s", plaintext, decryptedCorrect)
	} else {
		t.Logf("✓ Correct decryption (using stored IV directly) successful")
	}

	// ANTI-TEST: Decrypt using offset-adjusted IV (incorrect behavior - the bug)
	adjustedIV, ivSkip := calculateIVWithOffset(randomIV, partOffset)
	decryptedReaderWrong, err := CreateSSECDecryptedReader(
		io.NopCloser(bytes.NewReader(ciphertext)),
		customerKey,
		adjustedIV, // Use adjusted IV - WRONG
	)
	if err != nil {
		t.Fatalf("Failed to create decrypted reader (wrong): %v", err)
	}

	// Skip ivSkip bytes (as the buggy code would do)
	if ivSkip > 0 {
		io.CopyN(io.Discard, decryptedReaderWrong, int64(ivSkip))
	}

	decryptedWrong, err := io.ReadAll(decryptedReaderWrong)
	if err != nil {
		t.Fatalf("Failed to read decrypted data (wrong): %v", err)
	}

	// Verify that offset adjustment produces DIFFERENT (corrupted) output
	if bytes.Equal(decryptedWrong, plaintext) {
		t.Errorf("CRITICAL: Offset-adjusted IV produced correct plaintext! This shouldn't happen for SSE-C.")
	} else {
		t.Logf("✓ Verified: Offset-adjusted IV produces corrupted data (as expected for SSE-C)")
		maxLen := 20
		if len(plaintext) < maxLen {
			maxLen = len(plaintext)
		}
		t.Logf("  Plaintext: %q", plaintext[:maxLen])
		maxLen2 := 20
		if len(decryptedWrong) < maxLen2 {
			maxLen2 = len(decryptedWrong)
		}
		t.Logf("  Corrupted: %q", decryptedWrong[:maxLen2])
	}
}

// TestSSEKMSDecryptChunkView_RequiresOffsetAdjustment verifies that SSE-KMS
// decryption DOES require calculateIVWithOffset, unlike SSE-C.
func TestSSEKMSDecryptChunkView_RequiresOffsetAdjustment(t *testing.T) {
	// Setup: Create test data
	plaintext := []byte("This is a test message for SSE-KMS decryption with offset adjustment")

	// Generate base IV and key
	baseIV := make([]byte, aes.BlockSize)
	key := make([]byte, 32)
	if _, err := rand.Read(baseIV); err != nil {
		t.Fatalf("Failed to generate base IV: %v", err)
	}
	if _, err := rand.Read(key); err != nil {
		t.Fatalf("Failed to generate key: %v", err)
	}

	chunkOffset := int64(2048) // Simulate chunk at offset 2048

	// Encrypt using base IV + offset (simulating SSE-KMS multipart upload)
	adjustedIV, ivSkip := calculateIVWithOffset(baseIV, chunkOffset)
	block, err := aes.NewCipher(key)
	if err != nil {
		t.Fatalf("Failed to create cipher: %v", err)
	}

	ciphertext := make([]byte, len(plaintext))
	stream := cipher.NewCTR(block, adjustedIV)

	// Skip ivSkip bytes in the encryption stream if needed
	if ivSkip > 0 {
		dummy := make([]byte, ivSkip)
		stream.XORKeyStream(dummy, dummy)
	}
	stream.XORKeyStream(ciphertext, plaintext)

	// TEST: Decrypt using base IV + offset adjustment (correct for SSE-KMS)
	adjustedIVDecrypt, ivSkipDecrypt := calculateIVWithOffset(baseIV, chunkOffset)
	blockDecrypt, err := aes.NewCipher(key)
	if err != nil {
		t.Fatalf("Failed to create cipher for decryption: %v", err)
	}

	decrypted := make([]byte, len(ciphertext))
	streamDecrypt := cipher.NewCTR(blockDecrypt, adjustedIVDecrypt)

	// Skip ivSkip bytes in the decryption stream
	if ivSkipDecrypt > 0 {
		dummy := make([]byte, ivSkipDecrypt)
		streamDecrypt.XORKeyStream(dummy, dummy)
	}
	streamDecrypt.XORKeyStream(decrypted, ciphertext)

	// Verify correct decryption with offset adjustment
	if !bytes.Equal(decrypted, plaintext) {
		t.Errorf("SSE-KMS decryption with offset adjustment failed:\nExpected: %s\nGot: %s", plaintext, decrypted)
	} else {
		t.Logf("✓ SSE-KMS decryption with offset adjustment successful")
	}

	// ANTI-TEST: Decrypt using base IV directly (incorrect for SSE-KMS)
	blockWrong, err := aes.NewCipher(key)
	if err != nil {
		t.Fatalf("Failed to create cipher for wrong decryption: %v", err)
	}

	decryptedWrong := make([]byte, len(ciphertext))
	streamWrong := cipher.NewCTR(blockWrong, baseIV) // Use base IV directly - WRONG for SSE-KMS
	streamWrong.XORKeyStream(decryptedWrong, ciphertext)

	// Verify that NOT using offset adjustment produces corrupted output
	if bytes.Equal(decryptedWrong, plaintext) {
		t.Errorf("CRITICAL: Base IV without offset produced correct plaintext! SSE-KMS requires offset adjustment.")
	} else {
		t.Logf("✓ Verified: Base IV without offset produces corrupted data (as expected for SSE-KMS)")
	}
}

// TestSSEDecryptionDifferences documents the key differences between SSE types
func TestSSEDecryptionDifferences(t *testing.T) {
	t.Log("SSE-C: Random IV per part → Use stored IV DIRECTLY (no offset)")
	t.Log("SSE-KMS: Base IV + offset → MUST call calculateIVWithOffset(baseIV, offset)")
	t.Log("SSE-S3: Base IV + offset → Stores ADJUSTED IV, use directly")

	// This test documents the critical differences and serves as executable documentation
}
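The tests above call calculateIVWithOffset without showing its body in this diff. For readers, here is a sketch of the standard CTR offset derivation these tests are consistent with: advance the IV, treated as a big-endian counter, by offset/16 blocks, and return offset%16 as the number of keystream bytes to discard. This is an illustration of the technique, not necessarily the exact SeaweedFS implementation.

// calculateIVWithOffsetSketch shows the usual CTR offset math the tests rely on.
// Assumption: the real calculateIVWithOffset in weed/s3api behaves equivalently.
func calculateIVWithOffsetSketch(baseIV []byte, offset int64) ([]byte, int) {
	adjusted := make([]byte, len(baseIV))
	copy(adjusted, baseIV)

	carry := uint64(offset / aes.BlockSize) // full 16-byte blocks to advance the counter by
	skip := int(offset % aes.BlockSize)     // keystream bytes to discard within the first block

	// Add `carry` to the IV, treating it as a big-endian counter.
	for i := len(adjusted) - 1; i >= 0 && carry > 0; i-- {
		sum := uint64(adjusted[i]) + (carry & 0xff)
		adjusted[i] = byte(sum)
		carry = (carry >> 8) + (sum >> 8)
	}
	return adjusted, skip
}

With this shape, SSE-KMS re-derives the IV at read time from the base IV plus the chunk offset, while SSE-C and SSE-S3 simply reuse the IV that was stored with the chunk.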
@@ -0,0 +1,257 @@
package s3api

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"encoding/base64"
	"io"
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
)

// TestSSES3MultipartUploadStoresDerivedIV verifies the critical fix where
// handleSSES3MultipartEncryption must store the DERIVED IV (not base IV)
// in the returned key so it gets serialized into chunk metadata.
//
// This test prevents the bug where the derived IV was discarded, causing
// decryption to use the wrong IV and produce corrupted plaintext.
func TestSSES3MultipartUploadStoresDerivedIV(t *testing.T) {
	// Setup: Create a test key and base IV
	keyManager := GetSSES3KeyManager()
	sseS3Key, err := keyManager.GetOrCreateKey("")
	if err != nil {
		t.Fatalf("Failed to create SSE-S3 key: %v", err)
	}

	// Generate a random base IV
	baseIV := make([]byte, aes.BlockSize)
	if _, err := rand.Read(baseIV); err != nil {
		t.Fatalf("Failed to generate base IV: %v", err)
	}

	// Test data for multipart upload parts
	testCases := []struct {
		name       string
		partOffset int64
		data       []byte
	}{
		{"Part 1 at offset 0", 0, []byte("First part of multipart upload")},
		{"Part 2 at offset 1MB", 1024 * 1024, []byte("Second part of multipart upload")},
		{"Part 3 at offset 5MB", 5 * 1024 * 1024, []byte("Third part at 5MB offset")},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			// Calculate the expected derived IV (what encryption will use)
			expectedDerivedIV, ivSkip := calculateIVWithOffset(baseIV, tc.partOffset)

			// Call CreateSSES3EncryptedReaderWithBaseIV to encrypt the data
			dataReader := bytes.NewReader(tc.data)
			encryptedReader, returnedDerivedIV, encErr := CreateSSES3EncryptedReaderWithBaseIV(
				dataReader,
				sseS3Key,
				baseIV,
				tc.partOffset,
			)
			if encErr != nil {
				t.Fatalf("Failed to create encrypted reader: %v", encErr)
			}

			// Read the encrypted data
			encryptedData, err := io.ReadAll(encryptedReader)
			if err != nil {
				t.Fatalf("Failed to read encrypted data: %v", err)
			}

			// CRITICAL VERIFICATION: The returned IV should be the DERIVED IV
			if !bytes.Equal(returnedDerivedIV, expectedDerivedIV) {
				t.Errorf("CreateSSES3EncryptedReaderWithBaseIV returned wrong IV:\nExpected: %x\nGot: %x",
					expectedDerivedIV[:8], returnedDerivedIV[:8])
			}

			// CRITICAL TEST: Verify the key.IV field would be updated (simulating handleSSES3MultipartEncryption).
			// This is what the fix does: key.IV = derivedIV
			keyWithDerivedIV := &SSES3Key{
				Key:       sseS3Key.Key,
				KeyID:     sseS3Key.KeyID,
				Algorithm: sseS3Key.Algorithm,
				IV:        returnedDerivedIV, // This simulates: key.IV = derivedIV
			}

			// TEST 1: Verify decryption with DERIVED IV produces correct plaintext (correct behavior)
			decryptedWithDerivedIV := make([]byte, len(encryptedData))
			block, err := aes.NewCipher(keyWithDerivedIV.Key)
			if err != nil {
				t.Fatalf("Failed to create cipher: %v", err)
			}
			stream := cipher.NewCTR(block, keyWithDerivedIV.IV)

			// Handle ivSkip for non-block-aligned offsets
			if ivSkip > 0 {
				skipDummy := make([]byte, ivSkip)
				stream.XORKeyStream(skipDummy, skipDummy)
			}
			stream.XORKeyStream(decryptedWithDerivedIV, encryptedData)

			if !bytes.Equal(decryptedWithDerivedIV, tc.data) {
				t.Errorf("Decryption with derived IV failed:\nExpected: %q\nGot: %q",
					tc.data, decryptedWithDerivedIV)
			} else {
				t.Logf("✓ Derived IV decryption successful for offset %d", tc.partOffset)
			}

			// TEST 2: Verify decryption with BASE IV produces WRONG plaintext (bug behavior).
			// This is what would happen if the bug wasn't fixed.
			if tc.partOffset > 0 { // Only test for non-zero offsets (where IVs differ)
				keyWithBaseIV := &SSES3Key{
					Key:       sseS3Key.Key,
					KeyID:     sseS3Key.KeyID,
					Algorithm: sseS3Key.Algorithm,
					IV:        baseIV, // BUG: Using base IV instead of derived IV
				}

				decryptedWithBaseIV := make([]byte, len(encryptedData))
				blockWrong, err := aes.NewCipher(keyWithBaseIV.Key)
				if err != nil {
					t.Fatalf("Failed to create cipher for wrong decryption: %v", err)
				}
				streamWrong := cipher.NewCTR(blockWrong, keyWithBaseIV.IV)
				streamWrong.XORKeyStream(decryptedWithBaseIV, encryptedData)

				if bytes.Equal(decryptedWithBaseIV, tc.data) {
					t.Errorf("CRITICAL BUG: Base IV produced correct plaintext at offset %d! Should produce corrupted data.", tc.partOffset)
				} else {
					t.Logf("✓ Verified: Base IV produces corrupted data at offset %d (bug would cause this)", tc.partOffset)
				}
			}
		})
	}
}

// TestHandleSSES3MultipartEncryptionFlow is an integration test that verifies
// the complete flow of handleSSES3MultipartEncryption, including that the
// returned key contains the derived IV (not base IV).
func TestHandleSSES3MultipartEncryptionFlow(t *testing.T) {
	// This test simulates what happens in a real multipart upload request

	// Generate test key manually (simulating a complete SSE-S3 key)
	keyBytes := make([]byte, 32) // 256-bit key
	if _, err := rand.Read(keyBytes); err != nil {
		t.Fatalf("Failed to generate key: %v", err)
	}

	originalKey := &SSES3Key{
		Key:       keyBytes,
		KeyID:     "test-key-id",
		Algorithm: SSES3Algorithm,
		IV:        nil, // Will be set later
	}

	baseIV := make([]byte, aes.BlockSize)
	if _, err := rand.Read(baseIV); err != nil {
		t.Fatalf("Failed to generate base IV: %v", err)
	}

	// For this test, we'll work directly with the key structure
	// since SerializeSSES3Metadata requires KMS setup

	// Test with a non-zero offset (where base IV != derived IV)
	partOffset := int64(2 * 1024 * 1024) // 2MB offset
	plaintext := []byte("Test data for part 2 of multipart upload")

	// Calculate what the derived IV should be
	expectedDerivedIV, ivSkip := calculateIVWithOffset(baseIV, partOffset)

	// Simulate the upload by calling CreateSSES3EncryptedReaderWithBaseIV directly
	// (this is what handleSSES3MultipartEncryption does internally)
	dataReader := bytes.NewReader(plaintext)

	// Encrypt with base IV and offset
	encryptedReader, derivedIV, encErr := CreateSSES3EncryptedReaderWithBaseIV(
		dataReader,
		originalKey,
		baseIV,
		partOffset,
	)
	if encErr != nil {
		t.Fatalf("Failed to create encrypted reader: %v", encErr)
	}

	// THE FIX: Update key.IV with derivedIV (this is what the bug fix does)
	originalKey.IV = derivedIV

	// Read encrypted data
	encryptedData, err := io.ReadAll(encryptedReader)
	if err != nil {
		t.Fatalf("Failed to read encrypted data: %v", err)
	}

	// VERIFICATION 1: Derived IV should match expected
	if !bytes.Equal(derivedIV, expectedDerivedIV) {
		t.Errorf("Derived IV mismatch:\nExpected: %x\nGot: %x",
			expectedDerivedIV[:8], derivedIV[:8])
	}

	// VERIFICATION 2: Key should now contain derived IV (the fix)
	if !bytes.Equal(originalKey.IV, derivedIV) {
		t.Errorf("Key.IV was not updated with derived IV!\nKey.IV: %x\nDerived IV: %x",
			originalKey.IV[:8], derivedIV[:8])
	} else {
		t.Logf("✓ Key.IV correctly updated with derived IV")
	}

	// VERIFICATION 3: The IV stored in the key can be used for decryption
	decryptedData := make([]byte, len(encryptedData))
	block, err := aes.NewCipher(originalKey.Key)
	if err != nil {
		t.Fatalf("Failed to create cipher: %v", err)
	}

	stream := cipher.NewCTR(block, originalKey.IV)

	// Handle ivSkip for non-block-aligned offsets
	if ivSkip > 0 {
		skipDummy := make([]byte, ivSkip)
		stream.XORKeyStream(skipDummy, skipDummy)
	}
	stream.XORKeyStream(decryptedData, encryptedData)

	if !bytes.Equal(decryptedData, plaintext) {
		t.Errorf("Final decryption failed:\nExpected: %q\nGot: %q", plaintext, decryptedData)
	} else {
		t.Logf("✓ Full encrypt-update_key-decrypt cycle successful")
	}
}

// TestSSES3HeaderEncoding tests that the header encoding/decoding works correctly
func TestSSES3HeaderEncoding(t *testing.T) {
	// Generate test base IV
	baseIV := make([]byte, aes.BlockSize)
	if _, err := rand.Read(baseIV); err != nil {
		t.Fatalf("Failed to generate base IV: %v", err)
	}

	// Encode as it would be in HTTP header
	baseIVHeader := base64.StdEncoding.EncodeToString(baseIV)

	// Decode (as handleSSES3MultipartEncryption does)
	decodedBaseIV, err := base64.StdEncoding.DecodeString(baseIVHeader)
	if err != nil {
		t.Fatalf("Failed to decode base IV: %v", err)
	}

	// Verify round-trip
	if !bytes.Equal(decodedBaseIV, baseIV) {
		t.Errorf("Base IV encoding round-trip failed:\nOriginal: %x\nDecoded: %x",
			baseIV, decodedBaseIV)
	}

	// Verify length
	if len(decodedBaseIV) != s3_constants.AESBlockSize {
		t.Errorf("Decoded base IV has wrong length: expected %d, got %d",
			s3_constants.AESBlockSize, len(decodedBaseIV))
	}
}
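Taken together, the tests in this file pin down the shape of the fix: encrypt each part with the base IV advanced by the part offset, then make sure the key that is serialized into chunk metadata carries that derived IV. A sketch of that pattern, using only the functions and fields exercised above (the helper name is hypothetical, not the actual SeaweedFS handler):

// encryptPartSketch illustrates the fix the tests lock in: after encrypting a
// part at partOffset, store the DERIVED IV on the key so later decryption can
// use it directly instead of the base IV.
func encryptPartSketch(part io.Reader, key *SSES3Key, baseIV []byte, partOffset int64) (io.Reader, error) {
	encryptedReader, derivedIV, err := CreateSSES3EncryptedReaderWithBaseIV(part, key, baseIV, partOffset)
	if err != nil {
		return nil, err
	}
	key.IV = derivedIV // the fix: serialize the derived IV, not baseIV
	return encryptedReader, nil
}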
@@ -0,0 +1,224 @@
package log_buffer

import (
	"errors"
	"testing"
	"time"

	"google.golang.org/protobuf/proto"

	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/util"
)

// TestReadTsCorruptedBuffer tests that readTs properly returns an error for corrupted data
func TestReadTsCorruptedBuffer(t *testing.T) {
	// Create a corrupted buffer with invalid protobuf data
	buf := make([]byte, 100)

	// Set size field to 10 bytes (using proper encoding)
	util.Uint32toBytes(buf[0:4], 10)

	// Fill with garbage data that won't unmarshal as LogEntry
	for i := 4; i < 14; i++ {
		buf[i] = 0xFF
	}

	// Attempt to read timestamp
	size, ts, err := readTs(buf, 0)

	// Should return an error
	if err == nil {
		t.Error("Expected error for corrupted buffer, got nil")
	}

	// Size and ts should be zero on error
	if size != 0 {
		t.Errorf("Expected size=0 on error, got %d", size)
	}

	if ts != 0 {
		t.Errorf("Expected ts=0 on error, got %d", ts)
	}

	// Error should indicate corruption
	if !errors.Is(err, ErrBufferCorrupted) {
		t.Logf("Error message: %v", err)
		// Check that the error message is not empty
		if err.Error() == "" {
			t.Error("Expected non-empty error message")
		}
	}

	t.Logf("✓ readTs correctly returned error for corrupted buffer: %v", err)
}

// TestReadTsValidBuffer tests that readTs works correctly for valid data
func TestReadTsValidBuffer(t *testing.T) {
	// Create a valid LogEntry
	logEntry := &filer_pb.LogEntry{
		TsNs: 123456789,
		Key:  []byte("test-key"),
	}

	// Marshal it
	data, err := proto.Marshal(logEntry)
	if err != nil {
		t.Fatalf("Failed to marshal LogEntry: %v", err)
	}

	// Create buffer with size prefix using util function
	buf := make([]byte, 4+len(data))
	util.Uint32toBytes(buf[0:4], uint32(len(data)))
	copy(buf[4:], data)

	// Read timestamp
	size, ts, err := readTs(buf, 0)

	// Should succeed
	if err != nil {
		t.Fatalf("Expected no error for valid buffer, got: %v", err)
	}

	// Should return correct values
	if size != len(data) {
		t.Errorf("Expected size=%d, got %d", len(data), size)
	}

	if ts != logEntry.TsNs {
		t.Errorf("Expected ts=%d, got %d", logEntry.TsNs, ts)
	}

	t.Logf("✓ readTs correctly parsed valid buffer: size=%d, ts=%d", size, ts)
}

// TestReadFromBufferCorruption tests that ReadFromBuffer propagates corruption errors
func TestReadFromBufferCorruption(t *testing.T) {
	lb := NewLogBuffer("test-corruption", time.Second, nil, nil, func() {})

	// Add a valid entry first using AddDataToBuffer
	validKey := []byte("valid")
	validData, _ := proto.Marshal(&filer_pb.LogEntry{
		TsNs: 1000,
		Key:  validKey,
	})
	if err := lb.AddDataToBuffer(validKey, validData, 1000); err != nil {
		t.Fatalf("Failed to add data to buffer: %v", err)
	}

	// Manually corrupt the buffer by writing garbage.
	// This simulates a corruption scenario.
	if len(lb.idx) > 0 {
		pos := lb.idx[0]
		// Overwrite the protobuf data with garbage
		for i := pos + 4; i < pos+8 && i < len(lb.buf); i++ {
			lb.buf[i] = 0xFF
		}
	}

	// Try to read - should detect corruption
	startPos := MessagePosition{Time: lb.startTime}
	buf, offset, err := lb.ReadFromBuffer(startPos)

	// Should return corruption error
	if err == nil {
		t.Error("Expected corruption error, got nil")
		if buf != nil {
			t.Logf("Unexpected success: got buffer with %d bytes", buf.Len())
		}
	} else {
		// Verify it's a corruption error
		if !errors.Is(err, ErrBufferCorrupted) {
			t.Logf("Got error (not ErrBufferCorrupted sentinel, but still an error): %v", err)
		}
		t.Logf("✓ ReadFromBuffer correctly detected corruption: %v", err)
	}

	t.Logf("ReadFromBuffer result: buf=%v, offset=%d, err=%v", buf != nil, offset, err)
}

// TestLocateByTsCorruption tests that locateByTs propagates corruption errors
func TestLocateByTsCorruption(t *testing.T) {
	// Create a MemBuffer with corrupted data
	mb := &MemBuffer{
		buf:  make([]byte, 100),
		size: 14,
	}

	// Set size field (using proper encoding)
	util.Uint32toBytes(mb.buf[0:4], 10)

	// Fill with garbage
	for i := 4; i < 14; i++ {
		mb.buf[i] = 0xFF
	}

	// Try to locate by timestamp
	pos, err := mb.locateByTs(mb.startTime)

	// Should return error
	if err == nil {
		t.Errorf("Expected corruption error, got nil (pos=%d)", pos)
	} else {
		t.Logf("✓ locateByTs correctly detected corruption: %v", err)
	}
}

// TestErrorPropagationChain tests the complete error propagation from readTs -> locateByTs -> ReadFromBuffer
func TestErrorPropagationChain(t *testing.T) {
	t.Run("Corruption in readTs", func(t *testing.T) {
		// Already covered by TestReadTsCorruptedBuffer
		t.Log("✓ readTs error propagation tested")
	})

	t.Run("Corruption in locateByTs", func(t *testing.T) {
		// Already covered by TestLocateByTsCorruption
		t.Log("✓ locateByTs error propagation tested")
	})

	t.Run("Corruption in ReadFromBuffer binary search", func(t *testing.T) {
		// Already covered by TestReadFromBufferCorruption
		t.Log("✓ ReadFromBuffer error propagation tested")
	})

	t.Log("✓ Complete error propagation chain verified")
}

// TestNoSilentCorruption verifies that corruption never returns (0, 0) silently
func TestNoSilentCorruption(t *testing.T) {
	// Create various corrupted buffers
	testCases := []struct {
		name string
		buf  []byte
		pos  int
	}{
		{
			name: "Invalid protobuf",
			buf:  []byte{10, 0, 0, 0, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF},
			pos:  0,
		},
		{
			name: "Truncated data",
			buf:  []byte{100, 0, 0, 0, 1, 2, 3}, // Size says 100 but only 3 bytes available
			pos:  0,
		},
	}

	for _, tc := range testCases {
		t.Run(tc.name, func(t *testing.T) {
			size, ts, err := readTs(tc.buf, tc.pos)

			// CRITICAL: Must return error, never silent (0, 0)
			if err == nil {
				t.Errorf("CRITICAL: readTs returned (%d, %d, nil) for corrupted buffer - this causes silent data corruption!", size, ts)
			} else {
				t.Logf("✓ Correctly returned error instead of silent (0, 0): %v", err)
			}

			// On error, size and ts should be 0
			if size != 0 || ts != 0 {
				t.Errorf("On error, expected (0, 0), got (%d, %d)", size, ts)
			}
		})
	}
}
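The tests above fix readTs's contract: a 4-byte size prefix written with util.Uint32toBytes, followed by a marshaled filer_pb.LogEntry, and an error wrapping ErrBufferCorrupted instead of a silent (0, 0) when either part is invalid. A sketch consistent with that contract (assuming a fmt import and util.BytesToUint32 for reading the prefix; the real readTs may differ in details):

// readTsSketch mirrors the contract the tests exercise; it is not the actual readTs.
func readTsSketch(buf []byte, pos int) (size int, ts int64, err error) {
	if pos+4 > len(buf) {
		return 0, 0, fmt.Errorf("%w: size prefix out of range at pos %d", ErrBufferCorrupted, pos)
	}
	size = int(util.BytesToUint32(buf[pos : pos+4]))
	if pos+4+size > len(buf) {
		return 0, 0, fmt.Errorf("%w: entry of size %d truncated at pos %d", ErrBufferCorrupted, size, pos)
	}
	logEntry := &filer_pb.LogEntry{}
	if unmarshalErr := proto.Unmarshal(buf[pos+4:pos+4+size], logEntry); unmarshalErr != nil {
		return 0, 0, fmt.Errorf("%w: unmarshal at pos %d: %v", ErrBufferCorrupted, pos, unmarshalErr)
	}
	return size, logEntry.TsNs, nil
}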