
quick check (no retries) to see if the .versions/ directory exists

pull/7481/head
chrislu, 3 weeks ago
commit cb2cf65dc9

Changed files:
  1. COMBINED_TTFB_PROFILING.md (364 changes)
  2. weed/s3api/s3api_bucket_config.go (14 changes)
  3. weed/s3api/s3api_bucket_handlers.go (4 changes)
  4. weed/s3api/s3api_object_handlers.go (128 changes)
  5. weed/s3api/s3api_object_handlers_multipart.go (7 changes)
  6. weed/s3api/s3api_object_handlers_put.go (65 changes)

COMBINED_TTFB_PROFILING.md (364 changes)

@@ -1,364 +0,0 @@
# Combined TTFB Profiling - Complete Request Breakdown
## Overview
Integrated profiling shows complete GET request timing from entry to completion, covering both high-level TTFB stages and detailed streaming metrics.
## Profiling Output Format
When running with `-v=2`, you'll see hierarchical profiling logs for each GET request:
### Example 1: Unencrypted Object GET
```
GET TTFB PROFILE mybucket/myobject.txt: total=52ms | conditional=0.5ms, versioning=8ms, entryFetch=1ms, stream=42ms
└─ streamFromVolumeServers: total=42ms, rangeParse=0.2ms, headerSet=0.4ms, chunkResolve=9ms, streamPrep=2ms, streamExec=30ms
```
**Breakdown**:
```
Total Request: 52ms
├─ Conditional Headers: 0.5ms (check If-Match, If-None-Match, etc.)
├─ Versioning Check: 8ms (check bucket versioning config + fetch version)
├─ Entry Fetch: 1ms (reuse or fetch metadata for SSE detection)
└─ Stream: 42ms (direct volume read - MAIN TIME)
   ├─ Range Parse: 0.2ms (parse Range header if present)
   ├─ Header Set: 0.4ms (set response headers)
   ├─ Chunk Resolve: 9ms (lookup volume locations)
   ├─ Stream Prep: 2ms (prepare streaming function)
   └─ Stream Exec: 30ms (actual data transfer) ⚡
```
### Example 2: SSE-C Encrypted Object GET
```
GET TTFB PROFILE mybucket/encrypted.bin: total=68ms | conditional=0.4ms, versioning=7ms, entryFetch=2ms, stream=58ms
└─ streamFromVolumeServersWithSSE (SSE-C): total=58ms, keyValidate=4ms, headerSet=0.5ms, streamFetch=14ms, decryptSetup=1ms, copy=38ms
```
**Breakdown**:
```
Total Request: 68ms
├─ Conditional Headers: 0.4ms
├─ Versioning Check: 7ms
├─ Entry Fetch: 2ms
└─ Stream (SSE-C): 58ms (encrypted object with inline decryption)
   ├─ Key Validate: 4ms (validate SSE-C customer key)
   ├─ Header Set: 0.5ms (set SSE response headers)
   ├─ Stream Fetch: 14ms (fetch encrypted data from volumes)
   ├─ Decrypt Setup: 1ms (setup AES-256 decryption wrapper)
   └─ Copy: 38ms (decrypt + stream to client) ⚡
```
### Example 3: Range Request
```
GET TTFB PROFILE mybucket/video.mp4: total=28ms | conditional=0.3ms, versioning=0.1ms, entryFetch=0.5ms, stream=27ms
└─ streamFromVolumeServers: total=27ms, rangeParse=0.4ms, headerSet=0.3ms, chunkResolve=7ms, streamPrep=1ms, streamExec=18ms
```
**Breakdown**:
```
Total Request: 28ms (faster due to partial content)
├─ Conditional Headers: 0.3ms
├─ Versioning Check: 0.1ms (non-versioned bucket)
├─ Entry Fetch: 0.5ms (reused from cache)
└─ Stream: 27ms
   ├─ Range Parse: 0.4ms (parse "bytes=0-1023")
   ├─ Header Set: 0.3ms (set 206 Partial Content)
   ├─ Chunk Resolve: 7ms (resolve for range only)
   ├─ Stream Prep: 1ms
   └─ Stream Exec: 18ms (stream only requested range)
```
## Performance Metrics
### Typical Performance (10MB object)
| Stage | Unencrypted | SSE-C | SSE-KMS | SSE-S3 |
|-------|-------------|-------|---------|--------|
| **Conditional Headers** | 0.3-1ms | 0.3-1ms | 0.3-1ms | 0.3-1ms |
| **Versioning Check** | 0.1-10ms* | 0.1-10ms* | 0.1-10ms* | 0.1-10ms* |
| **Entry Fetch** | 0.5-2ms** | 0.5-2ms** | 0.5-2ms** | 0.5-2ms** |
| **Stream Total** | 40-50ms | 55-65ms | 55-65ms | 55-65ms |
| └─ Range Parse | 0.1-0.5ms | - | - | - |
| └─ Key Validate | - | 2-5ms | 3-8ms | 2-4ms |
| └─ Header Set | 0.3-1ms | 0.5-1ms | 0.5-1ms | 0.5-1ms |
| └─ Chunk Resolve | 6-12ms | - | - | - |
| └─ Stream Fetch | - | 12-18ms | 12-18ms | 12-18ms |
| └─ Stream Prep | 1-3ms | - | - | - |
| └─ Decrypt Setup | - | 0.5-2ms | 0.5-2ms | 0.5-2ms |
| └─ Stream Exec / Copy | 30-40ms | 35-45ms | 35-45ms | 35-45ms |
| **TOTAL** | **42-63ms** | **57-78ms** | **58-82ms** | **57-78ms** |
\* Higher if versioned bucket with many versions
\** Higher if entry not cached, near 0 if reused
### Overhead Analysis
**SSE Encryption Overhead**:
- SSE-C: +12-20ms (~20-30% overhead)
- SSE-KMS: +15-25ms (~25-35% overhead)
- SSE-S3: +12-20ms (~20-30% overhead)
**Main Cost**: Decryption during streaming (`copy` stage)
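To measure the actual overhead on your own logs rather than relying on the typical figures above, one approach (a sketch; the `s3.log` path and the label strings are assumptions based on the example output in this document) is to average `streamExec` across unencrypted requests and `copy` across SSE requests:
```bash
# Average plaintext streaming time vs. SSE decrypt+stream time (labels and
# log path follow the examples in this document; adjust for your setup)
base=$(grep "streamFromVolumeServers:" s3.log | grep -o "streamExec=[0-9.]*" | \
  cut -d'=' -f2 | awk '{s+=$1; n++} END {if (n) print s/n}')
sse=$(grep "streamFromVolumeServersWithSSE" s3.log | grep -o "copy=[0-9.]*" | \
  cut -d'=' -f2 | awk '{s+=$1; n++} END {if (n) print s/n}')
echo "avg streamExec=${base}ms, avg copy=${sse}ms"
awk -v b="$base" -v s="$sse" 'BEGIN {if (b > 0) printf "overhead: +%.0f%%\n", (s-b)/b*100}'
```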
## How to Enable
```bash
# Start SeaweedFS S3 API with verbose logging
weed server -s3 -v=2
# Or standalone S3 server
weed s3 -filer=localhost:8888 -v=2
```
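To confirm profiling is active, a single GET through any S3 client should emit a profile line. A minimal smoke test (bucket, key, and log path are placeholders; port 8333 is the default S3 listen port used elsewhere in this document):
```bash
# Fetch one object, then look for its profile line in the S3 API log
aws s3api get-object --endpoint-url http://localhost:8333 \
  --bucket mybucket --key myobject.txt /tmp/out.bin
grep "GET TTFB PROFILE mybucket/myobject.txt" /path/to/s3.log | tail -1
```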
## Filtering Logs
```bash
# View complete TTFB profiles
grep "GET TTFB PROFILE" /path/to/logs
# View with detailed streaming breakdown
grep -A1 "GET TTFB PROFILE" /path/to/logs
# View only SSE requests
grep "SSE-C\|SSE-KMS\|SSE-S3" /path/to/logs | grep "PROFILE"
# Extract timing statistics
grep "GET TTFB PROFILE" logs.txt | \
awk -F'total=' '{print $2}' | \
awk '{print $1}' | \
sed 's/ms.*//' | \
sort -n | \
awk '
{sum+=$1; count++; vals[count]=$1}
END {
print "Requests:", count
print "Average:", sum/count "ms"
print "Median:", vals[int(count/2)] "ms"
print "P90:", vals[int(count*0.9)] "ms"
print "P99:", vals[int(count*0.99)] "ms"
}
'
```
## Performance Analysis Guide
### ✅ Healthy Performance Indicators
1. **High-Level (TTFB)**:
- `conditional` < 2ms
- `versioning` < 15ms
- `entryFetch` < 3ms
- `stream` dominates total time (>70%)
2. **Low-Level (Stream)**:
- Unencrypted: `streamExec` dominates
- Encrypted: `copy` dominates
- `chunkResolve` < 15ms
- `streamPrep` < 5ms
### ⚠️ Performance Issues
#### 1. Slow Versioning (`versioning` > 20ms)
**Symptom**:
```
GET TTFB PROFILE: total=85ms | conditional=0.5ms, versioning=45ms, entryFetch=1ms, stream=38ms
```
**Causes**:
- Filer slow to check versioning config
- Many versions to iterate through
- Filer under heavy load
**Solutions**:
- Check filer CPU/memory
- Review filer metadata store performance
- Consider filer caching improvements
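Before digging into the filer, it helps to see how widespread the problem is. A quick filter over the profile lines (a sketch; the 20ms threshold and `s3.log` path are assumptions):
```bash
# Print only requests whose versioning stage exceeded 20ms
grep "GET TTFB PROFILE" s3.log | \
  awk -F'versioning=' 'split($2, a, "ms") && a[1] + 0 > 20'
```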
#### 2. Slow Chunk Resolution (`chunkResolve` > 20ms)
**Symptom**:
```
└─ streamFromVolumeServers: total=65ms, rangeParse=0.2ms, headerSet=0.4ms, chunkResolve=35ms, streamPrep=2ms, streamExec=27ms
```
**Causes**:
- Filer LookupVolume RPC slow
- Network latency to filer
- Many chunks to resolve
**Solutions**:
- Check network latency between S3 API and filer
- Review filer volume location cache
- Consider volume location caching in S3 API
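A crude first check is the raw round-trip to the filer itself; if a bare HTTP request is already slow, the LookupVolume RPC will be too. A sketch assuming the default filer address used in the enable example above:
```bash
# Rough round-trip probe against the filer HTTP port
for i in 1 2 3 4 5; do
  curl -o /dev/null -s -w "attempt $i: %{time_total}s\n" http://localhost:8888/
done
```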
#### 3. Slow SSE Key Validation (`keyValidate` > 10ms)
**Symptom**:
```
└─ streamFromVolumeServersWithSSE (SSE-KMS): total=75ms, keyValidate=25ms, headerSet=0.5ms, streamFetch=14ms, decryptSetup=1ms, copy=34ms
```
**Causes**:
- KMS external service slow
- Complex key derivation
- Network latency to KMS
**Solutions**:
- Review KMS service performance
- Consider key caching (if secure)
- Check network to KMS endpoint
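Whether the KMS round-trip is the culprit usually shows in the spread of `keyValidate` times: a uniformly high average points at the KMS service, occasional spikes at network jitter. A sketch over the detailed stream lines (log path is an assumption):
```bash
# Count, average, and rough P99 of keyValidate across SSE requests
grep "streamFromVolumeServersWithSSE" s3.log | \
  grep -o "keyValidate=[0-9.]*" | cut -d'=' -f2 | sort -n | \
  awk '{v[NR]=$1; s+=$1}
       END {if (NR) {i=int(NR*0.99); if (i<1) i=1;
            printf "n=%d avg=%.1fms p99=%sms\n", NR, s/NR, v[i]}}'
```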
#### 4. Slow Data Transfer (`streamExec` or `copy` >> expected)
**Symptom**:
```
└─ streamFromVolumeServers: total=180ms, rangeParse=0.2ms, headerSet=0.4ms, chunkResolve=8ms, streamPrep=2ms, streamExec=169ms
```
**Causes**:
- Network bandwidth constrained
- Volume server disk I/O slow
- Volume server CPU overloaded
- Large object size
**Solutions**:
- Check network throughput
- Review volume server metrics (CPU, disk I/O)
- Verify object size expectations
- Consider volume server scaling
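Two generic checks help separate disk from network (these are standard host tools, not SeaweedFS-specific; hostnames and installed packages are assumptions):
```bash
# Watch disk utilization on the volume server host while slow GETs repeat
# (iostat is part of the sysstat package)
iostat -x 2 5

# Measure raw network throughput from the S3 API host to a volume server
# (iperf3 must be running as a server on the other end; hostname is a placeholder)
iperf3 -c volume-server-host
```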
#### 5. High Decryption Overhead (SSE `copy` >> unencrypted `streamExec`)
**Symptom**:
```
# Unencrypted: streamExec=30ms
# SSE-C: copy=85ms (~183% overhead vs. the 30ms baseline, should be ~20-30%)
```
**Causes**:
- CPU lacking AES-NI instructions
- S3 API server CPU constrained
- Inefficient cipher implementation
**Solutions**:
- Verify CPU has AES-NI support
- Scale S3 API server horizontally
- Review Go crypto performance
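Verifying the AES-NI hypothesis takes two commands on the S3 API host (a Linux/x86_64 sketch; Go's crypto/aes uses the hardware path automatically when the CPU flag is present):
```bash
# Check for the hardware AES flag
grep -m1 -o aes /proc/cpuinfo || echo "no AES-NI flag found"

# Ballpark raw AES throughput on this host, independent of SeaweedFS
openssl speed -evp aes-256-gcm
```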
## Comparison with Old Architecture
### Before: Filer Proxy Path
```
Total TTFB: ~75ms
├─ Entry Fetch: 8ms
├─ Proxy Setup: 19ms ⚠️ WASTED TIME
├─ Filer Proxy: 5ms ⚠️ EXTRA HOP
└─ Volume Stream: 43ms
```
### After: Direct Volume Reads
```
Total TTFB: ~52ms (31% faster!)
├─ Conditional: 0.5ms
├─ Versioning: 8ms
├─ Entry Fetch: 1ms
└─ Stream: 42ms
   ├─ Chunk Resolve: 9ms
   ├─ Stream Prep: 2ms
   └─ Stream Exec: 30ms ✅ DIRECT
```
**Improvements**:
- ✅ Eliminated 19ms proxy setup overhead
- ✅ Eliminated extra filer hop
- ✅ Reduced total TTFB by ~31%
- ✅ Detailed visibility into all stages
## Integration with Load Testing
### With warp
```bash
# Run load test
warp get --host localhost:8333 --access-key ... --secret-key ... \
  --obj.size 10MiB --duration 5m --concurrent 20 2>&1 | tee warp.log &
# Monitor profiling in real-time
tail -f /path/to/s3.log | grep "GET TTFB PROFILE"
# After test completes, analyze
grep "GET TTFB PROFILE" /path/to/s3.log > ttfb_results.txt
# Calculate average per stage
echo "=== Conditional Headers ==="
grep "conditional=" ttfb_results.txt | grep -o "conditional=[^,]*" | \
cut -d'=' -f2 | sed 's/ms//' | \
awk '{sum+=$1; count++} END {print "Avg:", sum/count "ms"}'
echo "=== Versioning Check ==="
grep "versioning=" ttfb_results.txt | grep -o "versioning=[^,]*" | \
cut -d'=' -f2 | sed 's/ms//' | \
awk '{sum+=$1; count++} END {print "Avg:", sum/count "ms"}'
echo "=== Stream Time ==="
grep "stream=" ttfb_results.txt | grep -o "stream=[^,]*$" | \
cut -d'=' -f2 | sed 's/ms//' | \
awk '{sum+=$1; count++} END {print "Avg:", sum/count "ms"}'
```
### With s3-tests
```bash
# Run specific test with profiling
cd /path/to/s3-tests
tox -e py -- s3tests/functional/test_s3.py::test_object_read_not_exist -vv
# Check profiling for that test
grep "GET TTFB PROFILE.*test-bucket" /tmp/s3.log
# Example output:
# GET TTFB PROFILE test-bucket/nonexistent: total=12ms | conditional=0.3ms, versioning=8ms, entryFetch=3ms, stream=0ms
# (note: stream=0ms because object didn't exist)
```
## Real-World Example Analysis
### Example: Slow Overall Performance
```
GET TTFB PROFILE videos/movie.mp4: total=245ms | conditional=0.5ms, versioning=2ms, entryFetch=1ms, stream=241ms
└─ streamFromVolumeServers: total=241ms, rangeParse=0.3ms, headerSet=0.4ms, chunkResolve=8ms, streamPrep=2ms, streamExec=230ms
```
**Analysis**:
- ✅ Conditional/Versioning/Entry fast (good)
- ⚠️ `streamExec` = 230ms for unknown size object
- **Action**: Check object size. If 10MB, this is slow (expected ~30ms). If 100MB, this is reasonable.
### Example: Versioning Bottleneck
```
GET TTFB PROFILE docs/policy.pdf: total=95ms | conditional=0.4ms, versioning=78ms, entryFetch=1ms, stream=15ms
└─ streamFromVolumeServers: total=15ms, rangeParse=0.2ms, headerSet=0.3ms, chunkResolve=4ms, streamPrep=1ms, streamExec=9ms
```
**Analysis**:
- ⚠️ `versioning` = 78ms (very slow!)
- ✅ Stream fast (15ms)
- **Action**: Investigate filer versioning performance. Possibly many versions or slow filer.
### Example: Perfect Performance
```
GET TTFB PROFILE images/logo.png: total=18ms | conditional=0.2ms, versioning=0.1ms, entryFetch=0.5ms, stream=17ms
└─ streamFromVolumeServers: total=17ms, rangeParse=0.1ms, headerSet=0.2ms, chunkResolve=5ms, streamPrep=1ms, streamExec=10ms
```
**Analysis**:
- ✅ All stages optimal
- ✅ Small object streamed quickly
- ✅ No bottlenecks
- **Result**: Operating at peak performance!
## Summary
**Integrated TTFB profiling provides**:
- ✅ Complete visibility from request entry to completion
- ✅ High-level breakdown (conditional, versioning, entry, stream)
- ✅ Low-level breakdown (parse, resolve, prep, exec/decrypt)
- ✅ SSE-specific metrics (key validation, decryption)
- ✅ Easy identification of bottlenecks
- ✅ Hierarchical output for clarity
**Enable with**: `-v=2` flag
**View with**: `grep "GET TTFB PROFILE" /path/to/logs`
**Analyze**: Use provided scripts to calculate statistics per stage

weed/s3api/s3api_bucket_config.go (14 changes)

@@ -479,16 +479,8 @@ func (s3a *S3ApiServer) updateBucketConfig(bucket string, updateFn func(*BucketC
glog.V(3).Infof("updateBucketConfig: saved entry to filer for bucket %s", bucket)
// Update cache
glog.V(0).Infof("updateBucketConfig: updating cache for bucket %s, Versioning='%s', ObjectLockConfig=%+v", bucket, config.Versioning, config.ObjectLockConfig)
s3a.bucketConfigCache.Set(bucket, config)
// Verify cache update
if cached, found := s3a.bucketConfigCache.Get(bucket); found {
glog.V(0).Infof("updateBucketConfig: cache verification - bucket %s now has Versioning='%s'", bucket, cached.Versioning)
} else {
glog.Errorf("updateBucketConfig: CACHE VERIFICATION FAILED - bucket %s not found in cache after Set", bucket)
}
return s3err.ErrNone
}
@@ -527,7 +519,6 @@ func (s3a *S3ApiServer) getVersioningState(bucket string) (string, error) {
config, errCode := s3a.getBucketConfig(bucket)
if errCode != s3err.ErrNone {
if errCode == s3err.ErrNoSuchBucket {
glog.V(0).Infof("getVersioningState: bucket %s not found", bucket)
return "", nil
}
glog.Errorf("getVersioningState: failed to get bucket config for %s: %v", bucket, errCode)
@@ -536,12 +527,10 @@ func (s3a *S3ApiServer) getVersioningState(bucket string) (string, error) {
// If object lock is enabled, versioning must be enabled regardless of explicit setting
if config.ObjectLockConfig != nil {
glog.V(0).Infof("getVersioningState: bucket %s has object lock, returning VersioningEnabled", bucket)
return s3_constants.VersioningEnabled, nil
}
// Return the explicit versioning status (empty string means never configured)
glog.V(0).Infof("getVersioningState: bucket %s returning versioning state='%s'", bucket, config.Versioning)
return config.Versioning, nil
}
@@ -559,13 +548,10 @@ func (s3a *S3ApiServer) getBucketVersioningStatus(bucket string) (string, s3err.
// setBucketVersioningStatus sets the versioning status for a bucket
func (s3a *S3ApiServer) setBucketVersioningStatus(bucket, status string) s3err.ErrorCode {
glog.V(0).Infof("setBucketVersioningStatus: bucket=%s, setting status='%s'", bucket, status)
errCode := s3a.updateBucketConfig(bucket, func(config *BucketConfig) error {
config.Versioning = status
glog.V(0).Infof("setBucketVersioningStatus: bucket=%s, updated config.Versioning='%s'", bucket, config.Versioning)
return nil
})
glog.V(0).Infof("setBucketVersioningStatus: bucket=%s, update complete with errCode=%v", bucket, errCode)
return errCode
}

weed/s3api/s3api_bucket_handlers.go (4 changes)

@@ -1158,7 +1158,6 @@ func (s3a *S3ApiServer) PutBucketVersioningHandler(w http.ResponseWriter, r *htt
}
status := *versioningConfig.Status
glog.V(0).Infof("PutBucketVersioningHandler: bucket=%s, requested status='%s'", bucket, status)
if status != s3_constants.VersioningEnabled && status != s3_constants.VersioningSuspended {
glog.Errorf("PutBucketVersioningHandler: invalid status '%s' for bucket %s", status, bucket)
s3err.WriteErrorResponse(w, r, s3err.ErrInvalidRequest)
@@ -1167,7 +1166,6 @@ func (s3a *S3ApiServer) PutBucketVersioningHandler(w http.ResponseWriter, r *htt
// Check if trying to suspend versioning on a bucket with object lock enabled
if status == s3_constants.VersioningSuspended {
glog.V(0).Infof("PutBucketVersioningHandler: attempting to suspend versioning for bucket=%s", bucket)
// Get bucket configuration to check for object lock
bucketConfig, errCode := s3a.getBucketConfig(bucket)
if errCode == s3err.ErrNone && bucketConfig.ObjectLockConfig != nil {
@@ -1178,13 +1176,11 @@ func (s3a *S3ApiServer) PutBucketVersioningHandler(w http.ResponseWriter, r *htt
}
// Update bucket versioning configuration using new bucket config system
glog.V(0).Infof("PutBucketVersioningHandler: calling setBucketVersioningStatus for bucket=%s with status='%s'", bucket, status)
if errCode := s3a.setBucketVersioningStatus(bucket, status); errCode != s3err.ErrNone {
glog.Errorf("PutBucketVersioningHandler save config: bucket=%s, status='%s', errCode=%d", bucket, status, errCode)
s3err.WriteErrorResponse(w, r, errCode)
return
}
glog.V(0).Infof("PutBucketVersioningHandler: successfully set versioning status for bucket=%s to '%s'", bucket, status)
writeSuccessResponseEmpty(w, r)
}

weed/s3api/s3api_object_handlers.go (128 changes)

@@ -327,7 +327,7 @@ func (s3a *S3ApiServer) GetObjectHandler(w http.ResponseWriter, r *http.Request)
s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
return
}
glog.V(0).Infof("GetObject: bucket %s, object %s, versioningConfigured=%v, versionId=%s", bucket, object, versioningConfigured, versionId)
glog.V(3).Infof("GetObject: bucket %s, object %s, versioningConfigured=%v, versionId=%s", bucket, object, versioningConfigured, versionId)
if versioningConfigured {
// Handle versioned GET - check if specific version requested
@@ -335,7 +335,7 @@ func (s3a *S3ApiServer) GetObjectHandler(w http.ResponseWriter, r *http.Request)
if versionId != "" {
// Request for specific version - must look in .versions directory
glog.V(2).Infof("GetObject: requesting specific version %s for %s%s", versionId, bucket, object)
glog.V(3).Infof("GetObject: requesting specific version %s for %s%s", versionId, bucket, object)
entry, err = s3a.getSpecificObjectVersion(bucket, object, versionId)
if err != nil {
glog.Errorf("Failed to get specific version %s: %v", versionId, err)
@@ -350,48 +350,43 @@ func (s3a *S3ApiServer) GetObjectHandler(w http.ResponseWriter, r *http.Request)
}
targetVersionId = versionId
} else {
// Request for latest version - OPTIMIZATION for suspended versioning:
// For suspended versioning, new objects are stored at regular path with version ID "null".
// Check regular path FIRST to avoid 12-second retry delay on .versions directory.
glog.V(0).Infof("GetObject: requesting latest version for %s%s, checking regular path first", bucket, object)
// Request for latest version - OPTIMIZATION:
// Check if .versions/ directory exists quickly (no retries) to decide path
// - If .versions/ exists: real versions available, use getLatestObjectVersion
// - If .versions/ doesn't exist (ErrNotFound): only null version at regular path, use it directly
// - If transient error: fall back to getLatestObjectVersion which has retry logic
bucketDir := s3a.option.BucketsPath + "/" + bucket
normalizedObject := removeDuplicateSlashes(object)
regularEntry, regularErr := s3a.getEntry(bucketDir, normalizedObject)
versionsDir := normalizedObject + s3_constants.VersionsFolder
if regularErr == nil && regularEntry != nil {
// Found object at regular path - check if it's a null version or pre-versioning object
hasNullVersion := false
if regularEntry.Extended != nil {
if versionIdBytes, exists := regularEntry.Extended[s3_constants.ExtVersionIdKey]; exists {
versionIdStr := string(versionIdBytes)
if versionIdStr == "null" {
hasNullVersion = true
targetVersionId = "null"
}
}
}
// Quick check (no retries) for .versions/ directory
versionsEntry, versionsErr := s3a.getEntry(bucketDir, versionsDir)
if hasNullVersion || regularEntry.Extended == nil || regularEntry.Extended[s3_constants.ExtVersionIdKey] == nil {
// This is either a null version (suspended) or pre-versioning object
// Use it directly instead of checking .versions
glog.V(0).Infof("GetObject: found null/pre-versioning object at regular path for %s%s", bucket, object)
entry = regularEntry
if targetVersionId == "" {
targetVersionId = "null"
}
} else {
// Has a real version ID, must be in .versions - fall through to getLatestObjectVersion
glog.V(0).Infof("GetObject: regular path object has version ID, checking .versions for %s%s", bucket, object)
if versionsErr == nil && versionsEntry != nil {
// .versions/ exists, meaning real versions are stored there
// Use getLatestObjectVersion which will properly find the newest version
entry, err = s3a.getLatestObjectVersion(bucket, object)
if err != nil {
glog.Errorf("GetObject: Failed to get latest version for %s%s: %v", bucket, object, err)
s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchKey)
return
}
} else if errors.Is(versionsErr, filer_pb.ErrNotFound) {
// .versions/ doesn't exist (confirmed not found), check regular path for null version
regularEntry, regularErr := s3a.getEntry(bucketDir, normalizedObject)
if regularErr == nil && regularEntry != nil {
// Found object at regular path - this is the null version
entry = regularEntry
targetVersionId = "null"
} else {
// No object at regular path either - object doesn't exist
glog.Errorf("GetObject: object not found at regular path or .versions for %s%s", bucket, object)
s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchKey)
return
}
} else {
// No object at regular path, check .versions directory
glog.V(0).Infof("GetObject: no object at regular path, checking .versions for %s%s", bucket, object)
// Transient error checking .versions/, fall back to getLatestObjectVersion with retries
glog.V(2).Infof("GetObject: transient error checking .versions for %s%s: %v, falling back to getLatestObjectVersion", bucket, object, versionsErr)
entry, err = s3a.getLatestObjectVersion(bucket, object)
if err != nil {
glog.Errorf("GetObject: Failed to get latest version for %s%s: %v", bucket, object, err)
@@ -675,7 +670,6 @@ func (s3a *S3ApiServer) streamFromVolumeServers(w http.ResponseWriter, r *http.R
// For small files stored inline in entry.Content
if len(entry.Content) > 0 && totalSize == int64(len(entry.Content)) {
glog.V(0).Infof("streamFromVolumeServers: streaming %d bytes from entry.Content", len(entry.Content))
if isRangeRequest {
_, err := w.Write(entry.Content[offset : offset+size])
return err
@@ -686,7 +680,6 @@ func (s3a *S3ApiServer) streamFromVolumeServers(w http.ResponseWriter, r *http.R
// Get chunks
chunks := entry.GetChunks()
glog.V(0).Infof("streamFromVolumeServers: entry has %d chunks, totalSize=%d, len(entry.Content)=%d", len(chunks), totalSize, len(entry.Content))
if len(chunks) == 0 {
// BUG FIX: If totalSize > 0 but no chunks and no content, this is a data integrity issue
if totalSize > 0 && len(entry.Content) == 0 {
@@ -698,7 +691,6 @@ func (s3a *S3ApiServer) streamFromVolumeServers(w http.ResponseWriter, r *http.R
// Headers were already set by setResponseHeaders above
w.WriteHeader(http.StatusOK)
}
glog.V(0).Infof("streamFromVolumeServers: empty object (totalSize=%d), returning", totalSize)
return nil
}
@@ -1278,51 +1270,36 @@ func (s3a *S3ApiServer) HeadObjectHandler(w http.ResponseWriter, r *http.Request
}
targetVersionId = versionId
} else {
// Request for latest version - OPTIMIZATION for suspended versioning:
// For suspended versioning, new objects are stored at regular path with version ID "null".
// Check regular path FIRST to avoid 12-second retry delay on .versions directory.
glog.V(0).Infof("HeadObject: requesting latest version for %s%s, checking regular path first", bucket, object)
// Request for latest version - OPTIMIZATION:
// Check if .versions/ directory exists quickly (no retries) to decide path
// - If .versions/ exists: real versions available, use getLatestObjectVersion
// - If .versions/ doesn't exist: only null version at regular path, use it directly
bucketDir := s3a.option.BucketsPath + "/" + bucket
normalizedObject := removeDuplicateSlashes(object)
regularEntry, regularErr := s3a.getEntry(bucketDir, normalizedObject)
versionsDir := normalizedObject + s3_constants.VersionsFolder
if regularErr == nil && regularEntry != nil {
// Found object at regular path - check if it's a null version or pre-versioning object
hasNullVersion := false
if regularEntry.Extended != nil {
if versionIdBytes, exists := regularEntry.Extended[s3_constants.ExtVersionIdKey]; exists {
versionIdStr := string(versionIdBytes)
if versionIdStr == "null" {
hasNullVersion = true
targetVersionId = "null"
}
}
}
// Quick check (no retries) for .versions/ directory
versionsEntry, versionsErr := s3a.getEntry(bucketDir, versionsDir)
if hasNullVersion || regularEntry.Extended == nil || regularEntry.Extended[s3_constants.ExtVersionIdKey] == nil {
// This is either a null version (suspended) or pre-versioning object
// Use it directly instead of checking .versions
glog.V(0).Infof("HeadObject: found null/pre-versioning object at regular path for %s%s", bucket, object)
entry = regularEntry
if targetVersionId == "" {
targetVersionId = "null"
}
} else {
// Has a real version ID, must be in .versions - fall through to getLatestObjectVersion
glog.V(0).Infof("HeadObject: regular path object has version ID, checking .versions for %s%s", bucket, object)
if versionsErr == nil && versionsEntry != nil {
// .versions/ exists, meaning real versions are stored there
// Use getLatestObjectVersion which will properly find the newest version
entry, err = s3a.getLatestObjectVersion(bucket, object)
if err != nil {
glog.Errorf("HeadObject: Failed to get latest version for %s%s: %v", bucket, object, err)
s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchKey)
return
}
}
} else {
// No object at regular path, check .versions directory
glog.V(0).Infof("HeadObject: no object at regular path, checking .versions for %s%s", bucket, object)
entry, err = s3a.getLatestObjectVersion(bucket, object)
if err != nil {
glog.Errorf("HeadObject: Failed to get latest version for %s%s: %v", bucket, object, err)
// .versions/ doesn't exist, check regular path for null version
regularEntry, regularErr := s3a.getEntry(bucketDir, normalizedObject)
if regularErr == nil && regularEntry != nil {
// Found object at regular path - this is the null version
entry = regularEntry
targetVersionId = "null"
} else {
// No object at regular path either - object doesn't exist
glog.Errorf("HeadObject: object not found at regular path or .versions for %s%s", bucket, object)
s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchKey)
return
}
@@ -2184,14 +2161,6 @@ func (s3a *S3ApiServer) detectPrimarySSEType(entry *filer_pb.Entry) string {
return "None"
}
// Log extended headers for debugging
glog.V(0).Infof("detectPrimarySSEType: entry has %d chunks, %d extended headers", len(entry.GetChunks()), len(entry.Extended))
for k := range entry.Extended {
if strings.Contains(k, "sse") || strings.Contains(k, "SSE") || strings.Contains(k, "encryption") {
glog.V(0).Infof("detectPrimarySSEType: extended[%s] exists", k)
}
}
if len(entry.GetChunks()) == 0 {
// No chunks - check object-level metadata only (single objects or smallContent)
hasSSEC := entry.Extended[s3_constants.AmzServerSideEncryptionCustomerAlgorithm] != nil
@@ -2236,9 +2205,7 @@ func (s3a *S3ApiServer) detectPrimarySSEType(entry *filer_pb.Entry) string {
ssekmsChunks := 0
sses3Chunks := 0
glog.V(0).Infof("detectPrimarySSEType: examining %d chunks for SSE metadata", len(entry.GetChunks()))
for i, chunk := range entry.GetChunks() {
glog.V(0).Infof("detectPrimarySSEType: chunk[%d] - SseType=%v, hasMetadata=%v", i, chunk.GetSseType(), len(chunk.GetSseMetadata()) > 0)
for _, chunk := range entry.GetChunks() {
switch chunk.GetSseType() {
case filer_pb.SSEType_SSE_C:
ssecChunks++
@@ -2252,7 +2219,6 @@ func (s3a *S3ApiServer) detectPrimarySSEType(entry *filer_pb.Entry) string {
}
}
}
glog.V(0).Infof("detectPrimarySSEType: chunk counts - SSE-C=%d, SSE-KMS=%d, SSE-S3=%d", ssecChunks, ssekmsChunks, sses3Chunks)
// Primary type is the one with more chunks
// Note: Tie-breaking follows precedence order SSE-C > SSE-KMS > SSE-S3

weed/s3api/s3api_object_handlers_multipart.go (7 changes)

@@ -359,16 +359,12 @@ func (s3a *S3ApiServer) PutObjectPartHandler(w http.ResponseWriter, r *http.Requ
}
if len(baseIV) == 0 {
fmt.Printf("[SSE-KMS] No valid base IV found for SSE-KMS multipart upload %s\n", uploadID)
glog.Errorf("No valid base IV found for SSE-KMS multipart upload %s", uploadID)
// Generate a new base IV as fallback
baseIV = make([]byte, 16)
if _, err := rand.Read(baseIV); err != nil {
fmt.Printf("[SSE-KMS] Failed to generate fallback base IV: %v\n", err)
glog.Errorf("Failed to generate fallback base IV: %v", err)
}
} else {
fmt.Printf("[SSE-KMS] Using base IV for multipart upload %s\n", uploadID)
}
// Add SSE-KMS headers to the request for putToFiler to handle encryption
@@ -389,7 +385,6 @@ func (s3a *S3ApiServer) PutObjectPartHandler(w http.ResponseWriter, r *http.Requ
} else {
// Check if this upload uses SSE-S3
if err := s3a.handleSSES3MultipartHeaders(r, uploadEntry, uploadID); err != nil {
fmt.Printf("[SSE-S3] Failed to setup SSE-S3 multipart headers: %v\n", err)
glog.Errorf("Failed to setup SSE-S3 multipart headers: %v", err)
s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
return
@@ -410,9 +405,7 @@ func (s3a *S3ApiServer) PutObjectPartHandler(w http.ResponseWriter, r *http.Requ
glog.V(2).Infof("PutObjectPart: bucket=%s, object=%s, uploadId=%s, partNumber=%d, size=%d",
bucket, object, uploadID, partID, r.ContentLength)
fmt.Printf("[PutObjectPartHandler] About to call putToFiler - bucket=%s, object=%s, partID=%d\n", bucket, object, partID)
etag, errCode, sseType := s3a.putToFiler(r, uploadUrl, dataReader, destination, bucket, partID)
fmt.Printf("[PutObjectPartHandler] putToFiler returned - errCode=%v\n", errCode)
if errCode != s3err.ErrNone {
glog.Errorf("PutObjectPart: putToFiler failed with error code %v for bucket=%s, object=%s, partNumber=%d",
errCode, bucket, object, partID)

weed/s3api/s3api_object_handlers_put.go (65 changes)

@@ -138,7 +138,7 @@ func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request)
versioningEnabled := (versioningState == s3_constants.VersioningEnabled)
versioningConfigured := (versioningState != "")
glog.V(0).Infof("PutObjectHandler: bucket=%s, object=%s, versioningState='%s', versioningEnabled=%v, versioningConfigured=%v", bucket, object, versioningState, versioningEnabled, versioningConfigured)
glog.V(3).Infof("PutObjectHandler: bucket=%s, object=%s, versioningState='%s', versioningEnabled=%v, versioningConfigured=%v", bucket, object, versioningState, versioningEnabled, versioningConfigured)
// Validate object lock headers before processing
if err := s3a.validateObjectLockHeaders(r, versioningEnabled); err != nil {
@@ -161,7 +161,7 @@ func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request)
switch versioningState {
case s3_constants.VersioningEnabled:
// Handle enabled versioning - create new versions with real version IDs
glog.V(0).Infof("PutObjectHandler: ENABLED versioning detected for %s/%s, calling putVersionedObject", bucket, object)
glog.V(3).Infof("PutObjectHandler: ENABLED versioning detected for %s/%s, calling putVersionedObject", bucket, object)
versionId, etag, errCode, sseType := s3a.putVersionedObject(r, bucket, object, dataReader, objectContentType)
if errCode != s3err.ErrNone {
glog.Errorf("PutObjectHandler: putVersionedObject failed with errCode=%v for %s/%s", errCode, bucket, object)
@@ -169,12 +169,12 @@ func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request)
return
}
glog.V(0).Infof("PutObjectHandler: putVersionedObject returned versionId=%s, etag=%s for %s/%s", versionId, etag, bucket, object)
glog.V(3).Infof("PutObjectHandler: putVersionedObject returned versionId=%s, etag=%s for %s/%s", versionId, etag, bucket, object)
// Set version ID in response header
if versionId != "" {
w.Header().Set("x-amz-version-id", versionId)
glog.V(0).Infof("PutObjectHandler: set x-amz-version-id header to %s for %s/%s", versionId, bucket, object)
glog.V(3).Infof("PutObjectHandler: set x-amz-version-id header to %s for %s/%s", versionId, bucket, object)
} else {
glog.Errorf("PutObjectHandler: CRITICAL - versionId is EMPTY for versioned bucket %s, object %s", bucket, object)
}
@@ -187,7 +187,7 @@ func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request)
case s3_constants.VersioningSuspended:
// Handle suspended versioning - overwrite with "null" version ID but preserve existing versions
glog.V(0).Infof("PutObjectHandler: SUSPENDED versioning detected for %s/%s, calling putSuspendedVersioningObject", bucket, object)
glog.V(3).Infof("PutObjectHandler: SUSPENDED versioning detected for %s/%s, calling putSuspendedVersioningObject", bucket, object)
etag, errCode, sseType := s3a.putSuspendedVersioningObject(r, bucket, object, dataReader, objectContentType)
if errCode != s3err.ErrNone {
s3err.WriteErrorResponse(w, r, errCode)
@@ -232,21 +232,16 @@ func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request)
func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader io.Reader, destination string, bucket string, partNumber int) (etag string, code s3err.ErrorCode, sseType string) {
// NEW OPTIMIZATION: Write directly to volume servers, bypassing filer proxy
// This eliminates the filer proxy overhead for PUT operations
fmt.Printf("[putToFiler] ENTRY - uploadUrl=%s, partNumber=%d\n", uploadUrl, partNumber)
// For SSE, encrypt with offset=0 for all parts
// Each part is encrypted independently, then decrypted using metadata during GET
partOffset := int64(0)
// Handle all SSE encryption types in a unified manner
fmt.Printf("[putToFiler] Calling handleAllSSEEncryption - partNumber=%d, partOffset=%d\n", partNumber, partOffset)
sseResult, sseErrorCode := s3a.handleAllSSEEncryption(r, dataReader, partOffset)
if sseErrorCode != s3err.ErrNone {
fmt.Printf("[putToFiler] handleAllSSEEncryption FAILED with error code %v\n", sseErrorCode)
return "", sseErrorCode, ""
}
fmt.Printf("[putToFiler] handleAllSSEEncryption SUCCESS - hasCustomerKey=%v, hasKMSKey=%v, hasS3Key=%v\n",
sseResult.CustomerKey != nil, sseResult.SSEKMSKey != nil, sseResult.SSES3Key != nil)
// Extract results from unified SSE handling
dataReader = sseResult.DataReader
@@ -366,7 +361,7 @@ func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader
return "", s3err.ErrInternalError, ""
}
glog.V(0).Infof("putToFiler: Uploading to volume server - fileId=%s", assignResult.FileId)
glog.V(3).Infof("putToFiler: Uploading to volume server - fileId=%s", assignResult.FileId)
uploadResult, uploadErr := uploader.UploadData(context.Background(), data, uploadOption)
if uploadErr != nil {
glog.Errorf("putToFiler: failed to upload to volume server for %s: %v", filePath, uploadErr)
@@ -376,7 +371,7 @@ func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader
return "", s3err.ErrInternalError, ""
}
glog.V(0).Infof("putToFiler: Volume upload SUCCESS - fileId=%s, size=%d, md5(base64)=%s",
glog.V(3).Infof("putToFiler: Volume upload SUCCESS - fileId=%s, size=%d, md5(base64)=%s",
assignResult.FileId, uploadResult.Size, uploadResult.ContentMd5)
// Step 3: Create metadata entry
@@ -476,7 +471,7 @@ func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader
// Set version ID if present
if versionIdHeader := r.Header.Get(s3_constants.ExtVersionIdKey); versionIdHeader != "" {
entry.Extended[s3_constants.ExtVersionIdKey] = []byte(versionIdHeader)
glog.V(0).Infof("putToFiler: setting version ID %s for object %s", versionIdHeader, filePath)
glog.V(3).Infof("putToFiler: setting version ID %s for object %s", versionIdHeader, filePath)
}
// Set TTL-based S3 expiry
@@ -506,7 +501,7 @@ func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader
entry.Extended[s3_constants.SeaweedFSSSEIV] = sseIV
entry.Extended[s3_constants.AmzServerSideEncryptionCustomerAlgorithm] = []byte("AES256")
entry.Extended[s3_constants.AmzServerSideEncryptionCustomerKeyMD5] = []byte(customerKey.KeyMD5)
glog.V(0).Infof("putToFiler: storing SSE-C metadata - IV len=%d", len(sseIV))
glog.V(3).Infof("putToFiler: storing SSE-C metadata - IV len=%d", len(sseIV))
}
// Set SSE-KMS metadata
@@ -516,7 +511,7 @@ func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader
// Set standard SSE headers for detection
entry.Extended[s3_constants.AmzServerSideEncryption] = []byte("aws:kms")
entry.Extended[s3_constants.AmzServerSideEncryptionAwsKmsKeyId] = []byte(sseKMSKey.KeyID)
glog.V(0).Infof("putToFiler: storing SSE-KMS metadata - keyID=%s, raw len=%d", sseKMSKey.KeyID, len(sseKMSMetadata))
glog.V(3).Infof("putToFiler: storing SSE-KMS metadata - keyID=%s, raw len=%d", sseKMSKey.KeyID, len(sseKMSMetadata))
}
// Set SSE-S3 metadata
@@ -525,18 +520,18 @@ func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader
entry.Extended[s3_constants.SeaweedFSSSES3Key] = sseS3Metadata
// Set standard SSE header for detection
entry.Extended[s3_constants.AmzServerSideEncryption] = []byte("AES256")
glog.V(0).Infof("putToFiler: storing SSE-S3 metadata - keyID=%s, raw len=%d", sseS3Key.KeyID, len(sseS3Metadata))
glog.V(3).Infof("putToFiler: storing SSE-S3 metadata - keyID=%s, raw len=%d", sseS3Key.KeyID, len(sseS3Metadata))
}
// Step 4: Save metadata to filer via gRPC
glog.V(0).Infof("putToFiler: About to create entry - dir=%s, name=%s, chunks=%d, extended keys=%d",
glog.V(3).Infof("putToFiler: About to create entry - dir=%s, name=%s, chunks=%d, extended keys=%d",
filepath.Dir(filePath), filepath.Base(filePath), len(entry.Chunks), len(entry.Extended))
createErr := s3a.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
req := &filer_pb.CreateEntryRequest{
Directory: filepath.Dir(filePath),
Entry: entry,
}
glog.V(0).Infof("putToFiler: Calling CreateEntry for %s", filePath)
glog.V(3).Infof("putToFiler: Calling CreateEntry for %s", filePath)
_, err := client.CreateEntry(context.Background(), req)
if err != nil {
glog.Errorf("putToFiler: CreateEntry returned error: %v", err)
@@ -547,7 +542,7 @@ func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader
glog.Errorf("putToFiler: failed to create entry for %s: %v", filePath, createErr)
return "", filerErrorToS3Error(createErr.Error()), ""
}
glog.V(0).Infof("putToFiler: CreateEntry SUCCESS for %s", filePath)
glog.V(3).Infof("putToFiler: CreateEntry SUCCESS for %s", filePath)
glog.V(2).Infof("putToFiler: Metadata saved SUCCESS - path=%s, etag(hex)=%s, size=%d, partNumber=%d",
filePath, etag, entry.Attributes.FileSize, partNumber)
@@ -665,11 +660,11 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
// Enable detailed logging for testobjbar
isTestObj := (normalizedObject == "testobjbar")
glog.V(0).Infof("putSuspendedVersioningObject: START bucket=%s, object=%s, normalized=%s, isTestObj=%v",
glog.V(3).Infof("putSuspendedVersioningObject: START bucket=%s, object=%s, normalized=%s, isTestObj=%v",
bucket, object, normalizedObject, isTestObj)
if isTestObj {
glog.V(0).Infof("=== TESTOBJBAR: putSuspendedVersioningObject START ===")
glog.V(3).Infof("=== TESTOBJBAR: putSuspendedVersioningObject START ===")
}
bucketDir := s3a.option.BucketsPath + "/" + bucket
@@ -682,20 +677,20 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
entries, _, err := s3a.list(versionsDir, "", "", false, 1000)
if err == nil {
// .versions directory exists
glog.V(0).Infof("putSuspendedVersioningObject: found %d entries in .versions for %s/%s", len(entries), bucket, object)
glog.V(3).Infof("putSuspendedVersioningObject: found %d entries in .versions for %s/%s", len(entries), bucket, object)
for _, entry := range entries {
if entry.Extended != nil {
if versionIdBytes, ok := entry.Extended[s3_constants.ExtVersionIdKey]; ok {
versionId := string(versionIdBytes)
glog.V(0).Infof("putSuspendedVersioningObject: found version '%s' in .versions", versionId)
glog.V(3).Infof("putSuspendedVersioningObject: found version '%s' in .versions", versionId)
if versionId == "null" {
// Only delete null version - preserve real versioned entries
glog.V(0).Infof("putSuspendedVersioningObject: deleting null version from .versions")
glog.V(3).Infof("putSuspendedVersioningObject: deleting null version from .versions")
err := s3a.rm(versionsDir, entry.Name, true, false)
if err != nil {
glog.Warningf("putSuspendedVersioningObject: failed to delete null version: %v", err)
} else {
glog.V(0).Infof("putSuspendedVersioningObject: successfully deleted null version")
glog.V(3).Infof("putSuspendedVersioningObject: successfully deleted null version")
}
break
}
@@ -703,7 +698,7 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
}
}
} else {
glog.V(0).Infof("putSuspendedVersioningObject: no .versions directory for %s/%s", bucket, object)
glog.V(3).Infof("putSuspendedVersioningObject: no .versions directory for %s/%s", bucket, object)
}
uploadUrl := s3a.toFilerUrl(bucket, normalizedObject)
@@ -721,7 +716,7 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
// Set version ID to "null" for suspended versioning
r.Header.Set(s3_constants.ExtVersionIdKey, "null")
if isTestObj {
glog.V(0).Infof("=== TESTOBJBAR: set version header before putToFiler, r.Header[%s]=%s ===",
glog.V(3).Infof("=== TESTOBJBAR: set version header before putToFiler, r.Header[%s]=%s ===",
s3_constants.ExtVersionIdKey, r.Header.Get(s3_constants.ExtVersionIdKey))
}
@@ -775,7 +770,7 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
// Upload the file using putToFiler - this will create the file with version metadata
if isTestObj {
glog.V(0).Infof("=== TESTOBJBAR: calling putToFiler ===")
glog.V(3).Infof("=== TESTOBJBAR: calling putToFiler ===")
}
etag, errCode, sseType = s3a.putToFiler(r, uploadUrl, body, "", bucket, 1)
if errCode != s3err.ErrNone {
@@ -783,7 +778,7 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
return "", errCode, ""
}
if isTestObj {
glog.V(0).Infof("=== TESTOBJBAR: putToFiler completed, etag=%s ===", etag)
glog.V(3).Infof("=== TESTOBJBAR: putToFiler completed, etag=%s ===", etag)
}
// Verify the metadata was set correctly during file creation
@@ -793,19 +788,19 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
for attempt := 1; attempt <= maxRetries; attempt++ {
verifyEntry, verifyErr := s3a.getEntry(bucketDir, normalizedObject)
if verifyErr == nil {
glog.V(0).Infof("=== TESTOBJBAR: verify attempt %d, entry.Extended=%v ===", attempt, verifyEntry.Extended)
glog.V(3).Infof("=== TESTOBJBAR: verify attempt %d, entry.Extended=%v ===", attempt, verifyEntry.Extended)
if verifyEntry.Extended != nil {
if versionIdBytes, ok := verifyEntry.Extended[s3_constants.ExtVersionIdKey]; ok {
glog.V(0).Infof("=== TESTOBJBAR: verification SUCCESSFUL, version=%s ===", string(versionIdBytes))
glog.V(3).Infof("=== TESTOBJBAR: verification SUCCESSFUL, version=%s ===", string(versionIdBytes))
} else {
glog.V(0).Infof("=== TESTOBJBAR: verification FAILED, ExtVersionIdKey not found ===")
glog.V(3).Infof("=== TESTOBJBAR: verification FAILED, ExtVersionIdKey not found ===")
}
} else {
glog.V(0).Infof("=== TESTOBJBAR: verification FAILED, Extended is nil ===")
glog.V(3).Infof("=== TESTOBJBAR: verification FAILED, Extended is nil ===")
}
break
} else {
glog.V(0).Infof("=== TESTOBJBAR: getEntry failed on attempt %d: %v ===", attempt, verifyErr)
glog.V(3).Infof("=== TESTOBJBAR: getEntry failed on attempt %d: %v ===", attempt, verifyErr)
}
if attempt < maxRetries {
time.Sleep(time.Millisecond * 10)
@@ -822,7 +817,7 @@ func (s3a *S3ApiServer) putSuspendedVersioningObject(r *http.Request, bucket, ob
glog.V(2).Infof("putSuspendedVersioningObject: successfully created null version for %s/%s", bucket, object)
if isTestObj {
glog.V(0).Infof("=== TESTOBJBAR: putSuspendedVersioningObject COMPLETED ===")
glog.V(3).Infof("=== TESTOBJBAR: putSuspendedVersioningObject COMPLETED ===")
}
return etag, s3err.ErrNone, sseType
}
