* weed master -peers=none
* single master mode only when -peers is none
* refactoring
* revert duplicated code
* revert
* Update weed/command/master.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* prevent "none" from being passed to other components if the master is not started
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* adjust "weed benchmark" CLI to use readOnly/writeOnly
* consistently use "-master" CLI option
* If both -readOnly and -writeOnly are specified, the current logic silently allows it with -writeOnly taking precedence. This is confusing and could lead to unexpected behavior.
* fallback to check master
* clean up
* parsing
* refactor
* handle parse error
* return error
* avoid dup lookup
* use batch key
* dedup lookup logic
* address comments
* errors.Join(lookupErrors...)
* add a comment
* Fix: Critical data race in MasterClient vidMap
Fixes a critical data race where resetVidMap() was writing to the vidMap
pointer while other methods were reading it concurrently without synchronization.
Changes:
- Removed embedded *vidMap from MasterClient
- Added vidMapLock (sync.RWMutex) to protect vidMap pointer access
- Created safe accessor methods (GetLocations, GetDataCenter, etc.)
- Updated all direct vidMap accesses to use thread-safe methods
- Updated resetVidMap() to acquire write lock during pointer swap
The vidMap already has internal locking for its operations, but this fix
protects the vidMap pointer itself from concurrent read/write races.
Verified with: go test -race ./weed/wdclient/...
Impact:
- Prevents potential panics from concurrent pointer access
- No performance impact - uses RWMutex for read-heavy workloads
- Maintains backward compatibility through wrapper methods
* fmt
* Fix: Critical data race in MasterClient vidMap
Fixes a critical data race where resetVidMap() was writing to the vidMap
pointer while other methods were reading it concurrently without synchronization.
Changes:
- Removed embedded *vidMap from MasterClient struct
- Added vidMapLock (sync.RWMutex) to protect vidMap pointer access
- Created minimal public accessor methods for external packages:
* GetLocations, GetLocationsClone, GetVidLocations
* LookupFileId, LookupVolumeServerUrl
* GetDataCenter
- Internal code directly locks and accesses vidMap (no extra indirection)
- Updated resetVidMap() to acquire write lock during pointer swap
- Updated shell/commands.go to use GetDataCenter() method
Design Philosophy:
- vidMap already has internal locking for its map operations
- This fix specifically protects the vidMap *pointer* from concurrent access
- Public methods for external callers, direct locking for internal use
- Minimizes wrapper overhead while maintaining thread safety
Verified with: go test -race ./weed/wdclient/... (passes)
Impact:
- Prevents potential panics/crashes from data races
- Minimal performance impact (RWMutex for read-heavy workload)
- Maintains full backward compatibility
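A minimal sketch of the locking pattern the two commits above describe, assuming simplified types (the actual SeaweedFS structs carry more fields): an RWMutex guards the vidMap pointer itself, while vidMap keeps its own internal locks for map mutations.

```go
package wdclient

import "sync"

type vidMap struct {
	DataCenter string
	// ... volume id -> location maps, protected by vidMap's own internal locks
}

type MasterClient struct {
	vidMapLock sync.RWMutex
	vidMap     *vidMap // no longer embedded; only touch while holding vidMapLock
}

// GetDataCenter is one of the public accessors exposed to external packages.
func (mc *MasterClient) GetDataCenter() string {
	mc.vidMapLock.RLock()
	vm := mc.vidMap
	mc.vidMapLock.RUnlock()
	return vm.DataCenter
}

// resetVidMap swaps in a fresh vidMap under the write lock, so concurrent
// readers never observe a half-updated pointer.
func (mc *MasterClient) resetVidMap() {
	mc.vidMapLock.Lock()
	mc.vidMap = &vidMap{DataCenter: mc.vidMap.DataCenter}
	mc.vidMapLock.Unlock()
}
```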
* fix more concurrent access
* reduce lock scope
* Optimize vidMap locking for better concurrency
Improved locking strategy based on the understanding that:
- vidMapLock protects the vidMap pointer from concurrent swaps
- vidMap has internal locks that protect its data structures
Changes:
1. Read operations: Grab pointer with RLock, release immediately, then operate
- Reduces lock hold time
- Allows resetVidMap to proceed sooner
- Methods: GetLocations, GetLocationsClone, GetVidLocations,
LookupVolumeServerUrl, GetDataCenter
2. Write operations: Changed from Lock() to RLock()
- RLock prevents pointer swap during operation
- Allows concurrent readers and other writers (serialized by vidMap's lock)
- Methods: addLocation, deleteLocation, addEcLocation, deleteEcLocation
Benefits:
- Significantly reduced lock contention
- Better concurrent performance under load
- Still prevents all race conditions
Verified with: go test -race ./weed/wdclient/... (passes)
* Further reduce lock contention in LookupVolumeIdsWithFallback
Optimized two loops that were holding RLock for extended periods:
Before:
- Held RLock during entire loop iteration
- Included string parsing and cache lookups
- Could block resetVidMap for significant time with large batches
After:
- Grab vidMap pointer with brief RLock
- Release lock immediately
- Perform all loop operations on local pointer
Impact:
- First loop: Cache check on initial volumeIds
- Second loop: Double-check after singleflight wait
Benefits:
- Minimal lock hold time (just pointer copy)
- resetVidMap no longer blocked by long loops
- Better concurrent performance with large volume ID lists
- Still thread-safe (vidMap methods have internal locks)
Verified with: go test -race ./weed/wdclient/... (passes)
* Add clarifying comments to vidMap helper functions
Added inline documentation to each helper function (addLocation, deleteLocation,
addEcLocation, deleteEcLocation) explaining the two-level locking strategy:
- RLock on vidMapLock prevents resetVidMap from swapping the pointer
- vidMap has internal locks that protect the actual map mutations
- This design provides optimal concurrency
The comments make it clear why RLock (not Lock) is correct and intentional,
preventing future confusion about the locking strategy.
* Improve encapsulation: Add shallowClone() method to vidMap
Added a shallowClone() method to vidMap to improve encapsulation and prevent
MasterClient from directly accessing vidMap's internal fields.
Changes:
1. Added vidMap.shallowClone() in vid_map.go
- Encapsulates the shallow copy logic within vidMap
- Makes vidMap responsible for its own state representation
- Documented that caller is responsible for thread safety
2. Simplified resetVidMap() in masterclient.go
- Uses tail := mc.vidMap.shallowClone() instead of manual field access
- Cleaner, more maintainable code
- Better adherence to encapsulation principles
Benefits:
- Improved code organization and maintainability
- vidMap internals are now properly encapsulated
- Easier to modify vidMap structure in the future
- More self-documenting code
Verified with: go test -race ./weed/wdclient/... (passes)
* Optimize locking: Reduce lock acquisitions and use helper methods
Two optimizations to further reduce lock contention and improve code consistency:
1. LookupFileIdWithFallback: Eliminated redundant lock acquisition
- Before: Two separate locks to get vidMap and dataCenter
- After: Single lock gets both values together
- Benefit: 50% reduction in lock/unlock overhead for this hot path
2. KeepConnected: Use GetDataCenter() helper for consistency
- Before: Manual lock/unlock to access DataCenter field
- After: Use existing GetDataCenter() helper method
- Benefit: Better encapsulation and code consistency
Impact:
- Reduced lock contention in high-traffic lookup path
- More consistent use of accessor methods throughout codebase
- Cleaner, more maintainable code
Verified with: go test -race ./weed/wdclient/... (passes)
* Refactor: Extract common locking patterns into helper methods
Eliminated code duplication by introducing two helper methods that encapsulate
the common locking patterns used throughout MasterClient:
1. getStableVidMap() - For read operations
- Acquires lock, gets pointer, releases immediately
- Returns stable snapshot for thread-safe reads
- Used by: GetLocations, GetLocationsClone, GetVidLocations,
LookupFileId, LookupVolumeServerUrl, GetDataCenter
2. withCurrentVidMap(f func(vm *vidMap)) - For write operations
- Holds RLock during callback execution
- Prevents pointer swap while allowing concurrent operations
- Used by: addLocation, deleteLocation, addEcLocation, deleteEcLocation
Benefits:
- Reduced code duplication (eliminated 48 lines of repetitive locking code)
- Centralized locking logic makes it easier to understand and maintain
- Self-documenting pattern through named helper methods
- Easier to modify locking strategy in the future (single point of change)
- Improved readability - accessor methods are now one-liners
Code size reduction: ~40% fewer lines for accessor/helper methods
Verified with: go test -race ./weed/wdclient/... (passes)
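Continuing the MasterClient sketch shown earlier, a hedged illustration of the two helpers named above; the real bodies in SeaweedFS may differ in detail.

```go
// getStableVidMap briefly takes the read lock, copies the pointer, and releases
// the lock, so long-running reads never block resetVidMap.
func (mc *MasterClient) getStableVidMap() *vidMap {
	mc.vidMapLock.RLock()
	vm := mc.vidMap
	mc.vidMapLock.RUnlock()
	return vm
}

// withCurrentVidMap holds the read lock for the duration of f: the pointer
// cannot be swapped while f runs, and vidMap's internal locks serialize the
// actual map writes performed by f.
func (mc *MasterClient) withCurrentVidMap(f func(vm *vidMap)) {
	mc.vidMapLock.RLock()
	defer mc.vidMapLock.RUnlock()
	f(mc.vidMap)
}
```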
* consistent
* Fix cache pointer race condition with atomic.Pointer
Use atomic.Pointer for vidMap cache field to prevent data races
during cache trimming in resetVidMap. This addresses the race condition
where concurrent GetLocations calls could read the cache pointer while
resetVidMap is modifying it during cache chain trimming.
Changes:
- Changed cache field from *vidMap to atomic.Pointer[vidMap]
- Updated all cache access to use Load() and Store() atomic operations
- Updated shallowClone, GetLocations, deleteLocation, deleteEcLocation
- Updated resetVidMap to use atomic operations for cache trimming
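A sketch of the atomic cache pointer described above, using sync/atomic's generic atomic.Pointer (Go 1.19+); the field names follow the commit text, everything else is assumed, and vidMap's internal map locking is elided for brevity.

```go
package wdclient

import "sync/atomic"

type Location struct{ Url string }

type vidMap struct {
	vid2Locations map[uint32][]Location
	cache         atomic.Pointer[vidMap] // previous generation kept by resetVidMap
}

// GetLocations checks the current generation, then falls back to the cached one.
func (vm *vidMap) GetLocations(vid uint32) ([]Location, bool) {
	if locations, ok := vm.vid2Locations[vid]; ok {
		return locations, true
	}
	if cached := vm.cache.Load(); cached != nil {
		return cached.GetLocations(vid)
	}
	return nil, false
}

// Cache-chain trimming in resetVidMap also goes through Load/Store, e.g.:
//   if tail := newCache.cache.Load(); tail != nil {
//       tail.cache.Store(nil) // keep at most two generations
//   }
```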
* Merge: Resolve conflict in deleteEcLocation - keep atomic.Pointer and fix bug
Resolved merge conflict by combining:
1. Atomic pointer access pattern (from HEAD): cache.Load()
2. Correct method call (from fix): deleteEcLocation (not deleteLocation)
Resolution:
- Before (HEAD): cachedMap.deleteLocation() - WRONG, reintroduced bug
- Before (fix): vc.cache.deleteEcLocation() - RIGHT method, old pattern
- After (merged): cachedMap.deleteEcLocation() - RIGHT method, new pattern
This preserves both improvements:
✓ Thread-safe atomic.Pointer access pattern
✓ Correct recursive call to deleteEcLocation
Verified with: go test -race ./weed/wdclient/... (passes)
* Update vid_map.go
* remove shallow clone
* simplify
* Add nginx reverse proxy documentation for S3 API
Fixes #7407
Add comprehensive documentation and example configuration for using
nginx as a reverse proxy with SeaweedFS S3 API while maintaining AWS
Signature V4 authentication compatibility.
Changes:
- Add docker/nginx/README.md with detailed setup guide
- Add docker/nginx/s3-example.conf with working configuration
- Update docker/nginx/proxy.conf with important S3 notes
The documentation covers:
- Critical requirements for AWS Signature V4 authentication
- Common mistakes and why they break S3 authentication
- Complete working nginx configurations
- Debugging tips and troubleshooting
- Performance tuning recommendations
* Fix IPv6 host header formatting to match AWS SDK behavior
Follow-up to PR #7403
When a default port (80 for HTTP, 443 for HTTPS) is stripped from an
IPv6 address, the square brackets should also be removed to match AWS
SDK behavior for S3 signature calculation.
Reference: https://github.com/aws/aws-sdk-go-v2/blob/main/aws/signer/internal/v4/host.go
The AWS SDK's stripPort function explicitly removes brackets when
returning an IPv6 address without a port.
Changes:
- Update extractHostHeader to strip brackets from IPv6 addresses when
no port or default port is used
- Update test expectations to match AWS SDK behavior
- Add detailed comments explaining the AWS SDK compatibility requirement
This ensures S3 signature validation works correctly with IPv6 addresses
behind reverse proxies, matching AWS S3 canonical request format.
Fixes the issue raised in PR #7403 comment:
https://github.com/seaweedfs/seaweedfs/pull/7403#issuecomment-3471105438
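A sketch of the normalization rule described above (not the actual extractHostHeader code): when the default port is dropped, the IPv6 brackets are dropped with it, matching the AWS SDK's stripPort behavior.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// canonicalHost renders a Host header the way AWS signs it: default ports are
// stripped, and an IPv6 literal loses its brackets once no port remains.
func canonicalHost(host, scheme string) string {
	h, port, err := net.SplitHostPort(host)
	if err != nil {
		// no port at all: strip any IPv6 brackets, leave other hosts untouched
		return strings.TrimSuffix(strings.TrimPrefix(host, "["), "]")
	}
	defaultPort := map[string]string{"http": "80", "https": "443"}[scheme]
	if port == defaultPort {
		return h // SplitHostPort already removed the brackets
	}
	if strings.Contains(h, ":") {
		return "[" + h + "]:" + port // keep brackets when a non-default port stays
	}
	return h + ":" + port
}

func main() {
	fmt.Println(canonicalHost("[2001:db8::1]:443", "https"))  // 2001:db8::1
	fmt.Println(canonicalHost("[2001:db8::1]:8333", "https")) // [2001:db8::1]:8333
	fmt.Println(canonicalHost("example.com:80", "http"))      // example.com
}
```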
* Update docker/nginx/README.md
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Add nginx reverse proxy documentation for S3 API
Fixes #7407
Add comprehensive documentation and example configuration for using
nginx as a reverse proxy with SeaweedFS S3 API while maintaining AWS
Signature V4 authentication compatibility.
Changes:
- Add docker/nginx/README.md with detailed setup guide
- Add docker/nginx/s3-example.conf with working configuration
- Update docker/nginx/proxy.conf with important S3 notes
The documentation covers:
- Critical requirements for AWS Signature V4 authentication
- Common mistakes and why they break S3 authentication
- Complete working nginx configurations
- Debugging tips and troubleshooting
- Performance tuning recommendations
Fix IPv6 host header formatting to match AWS SDK behavior
Follow-up to PR #7403
When a default port (80 for HTTP, 443 for HTTPS) is stripped from an
IPv6 address, the square brackets should also be removed to match AWS
SDK behavior for S3 signature calculation.
Reference: https://github.com/aws/aws-sdk-go-v2/blob/main/aws/signer/internal/v4/host.go
The AWS SDK's stripPort function explicitly removes brackets when
returning an IPv6 address without a port.
Changes:
- Update extractHostHeader to strip brackets from IPv6 addresses when
no port or default port is used
- Update test expectations to match AWS SDK behavior
- Add detailed comments explaining the AWS SDK compatibility requirement
This ensures S3 signature validation works correctly with IPv6 addresses
behind reverse proxies, matching AWS S3 canonical request format.
Fixes the issue raised in PR #7403 comment:
https://github.com/seaweedfs/seaweedfs/pull/7403#issuecomment-3471105438
* Revert "Merge branch 'fix-ipv6-brackets-default-port' of https://github.com/seaweedfs/seaweedfs into fix-ipv6-brackets-default-port"
This reverts commit cca3f3985f, reversing
changes made to 2b8f9de78e.
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* server can start when there is no network, for local dev
* fixed "superfluous response.WriteHeader call" warning
* adaptive based on last write time
* more doc
* refactoring
* Fix S3 bucket policy ARN validation to accept AWS ARNs and simplified formats
Fixes#7252
The bucket policy validation was rejecting valid AWS-style ARNs and
simplified resource formats, causing validation failures with the
error 'resource X does not match bucket X' even when they were
identical strings.
Changes:
- Updated validateResourceForBucket() to accept three formats:
1. AWS-style ARNs: arn:aws:s3:::bucket-name[/*|/path]
2. SeaweedFS ARNs: arn:seaweed:s3:::bucket-name[/*|/path]
3. Simplified formats: bucket-name[/*|/path]
- Added comprehensive test coverage for all three formats
- Added specific test cases from issue #7252 to prevent regression
This ensures compatibility with standard AWS S3 bucket policies
while maintaining support for SeaweedFS-specific ARN format.
* Refactor validateResourceForBucket to reduce code duplication
Simplified the validation logic by stripping ARN prefixes first,
then performing validation on the remaining resource path.
This reduces code duplication and improves maintainability while
maintaining identical functionality.
Addresses review feedback from Gemini Code Assist.
* Use strings.CutPrefix for cleaner ARN prefix stripping
Replace strings.HasPrefix checks with strings.CutPrefix for more
idiomatic Go code. This function is available in Go 1.20+ and
provides cleaner conditional logic with the combined check and
extraction.
Addresses review feedback from Gemini Code Assist.
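A compact sketch of the resulting validation, assuming the simplified check shown here (the real validateResourceForBucket handles resource paths in more detail); strings.CutPrefix does the prefix stripping the commit above describes.

```go
package main

import (
	"fmt"
	"strings"
)

// validateResourceForBucket accepts AWS ARNs, SeaweedFS ARNs, and simplified
// "bucket[/path|/*]" resources for the given bucket.
func validateResourceForBucket(resource, bucket string) bool {
	if rest, ok := strings.CutPrefix(resource, "arn:aws:s3:::"); ok {
		resource = rest
	} else if rest, ok := strings.CutPrefix(resource, "arn:seaweed:s3:::"); ok {
		resource = rest
	}
	return resource == bucket || strings.HasPrefix(resource, bucket+"/")
}

func main() {
	fmt.Println(validateResourceForBucket("arn:aws:s3:::my-bucket/*", "my-bucket")) // true
	fmt.Println(validateResourceForBucket("my-bucket", "my-bucket"))                // true
	fmt.Println(validateResourceForBucket("arn:aws:s3:::other/*", "my-bucket"))     // false
}
```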
* Filer: Add retry mechanism for failed file deletions
Implement a retry queue with exponential backoff for handling transient
deletion failures, particularly when volumes are temporarily read-only.
Key features:
- Automatic retry for retryable errors (read-only volumes, network issues)
- Exponential backoff: 5min → 10min → 20min → ... (max 6 hours)
- Maximum 10 retry attempts per file before giving up
- Separate goroutine processing retry queue every minute
- Enhanced logging with retry/permanent error classification
This addresses the issue where file deletions fail when volumes are
temporarily read-only (tiered volumes, maintenance, etc.) and these
deletions were previously lost.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Update weed/filer/filer_deletion.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Filer: Add retry mechanism for failed file deletions
Implement a retry queue with exponential backoff for handling transient
deletion failures, particularly when volumes are temporarily read-only.
Key features:
- Automatic retry for retryable errors (read-only volumes, network issues)
- Exponential backoff: 5min → 10min → 20min → ... (max 6 hours)
- Maximum 10 retry attempts per file before giving up
- Separate goroutine processing retry queue every minute
- Map-based retry queue for O(1) lookups and deletions
- Enhanced logging with retry/permanent error classification
- Consistent error detail limiting (max 10 total errors logged)
- Graceful shutdown support with quit channel for both processors
This addresses the issue where file deletions fail when volumes are
temporarily read-only (tiered volumes, maintenance, etc.) and these
deletions were previously lost.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* Filer: Replace magic numbers with named constants in retry processor
Replace hardcoded values with package-level constants for better
maintainability:
- DeletionRetryPollInterval (1 minute): interval for checking retry queue
- DeletionRetryBatchSize (1000): max items to process per iteration
This improves code readability and makes configuration changes easier.
* Filer: Optimize retry queue with min-heap data structure
Replace map-based retry queue with a min-heap for better scalability
and deterministic ordering.
Performance improvements:
- GetReadyItems: O(N) → O(K log N) where K is items retrieved
- AddOrUpdate: O(1) → O(log N) (acceptable trade-off)
- Early exit when checking ready items (heap top is earliest)
- No full iteration over all items while holding lock
Benefits:
- Deterministic processing order (earliest NextRetryAt first)
- Better scalability for large retry queues (thousands of items)
- Reduced lock contention duration
- Memory efficient (no separate slice reconstruction)
Implementation:
- Min-heap ordered by NextRetryAt using container/heap
- Dual index: heap for ordering + map for O(1) FileId lookups
- heap.Fix() used when updating existing items
- Comprehensive complexity documentation in comments
This addresses the performance bottleneck identified in GetReadyItems
where iterating over the entire map with a write lock could block
other goroutines in high-failure scenarios.
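A sketch of the heap-plus-map structure described above; type and method names follow the commit text, the rest is assumed rather than taken from the SeaweedFS sources.

```go
package filer

import (
	"container/heap"
	"sync"
	"time"
)

type DeletionRetryItem struct {
	FileId      string
	RetryCount  int
	NextRetryAt time.Time
	index       int // position in the heap, used by heap.Fix on updates
}

type retryHeap []*DeletionRetryItem

func (h retryHeap) Len() int           { return len(h) }
func (h retryHeap) Less(i, j int) bool { return h[i].NextRetryAt.Before(h[j].NextRetryAt) }
func (h retryHeap) Swap(i, j int)      { h[i], h[j] = h[j], h[i]; h[i].index, h[j].index = i, j }
func (h *retryHeap) Push(x any)        { it := x.(*DeletionRetryItem); it.index = len(*h); *h = append(*h, it) }
func (h *retryHeap) Pop() any          { old := *h; n := len(old); it := old[n-1]; *h = old[:n-1]; return it }

// DeletionRetryQueue keeps a min-heap ordered by NextRetryAt plus a FileId
// index for O(1) lookups.
type DeletionRetryQueue struct {
	mu    sync.Mutex
	heap  retryHeap
	byFid map[string]*DeletionRetryItem
}

func NewDeletionRetryQueue() *DeletionRetryQueue {
	return &DeletionRetryQueue{byFid: make(map[string]*DeletionRetryItem)}
}

// AddOrUpdate inserts a new item or bumps an existing one, restoring heap order.
func (q *DeletionRetryQueue) AddOrUpdate(fileId string, nextRetryAt time.Time) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if it, ok := q.byFid[fileId]; ok {
		it.RetryCount++
		it.NextRetryAt = nextRetryAt
		heap.Fix(&q.heap, it.index)
		return
	}
	it := &DeletionRetryItem{FileId: fileId, RetryCount: 1, NextRetryAt: nextRetryAt}
	heap.Push(&q.heap, it)
	q.byFid[fileId] = it
}

// GetReadyItems pops at most limit items whose NextRetryAt has passed; the heap
// top is always the earliest deadline, so it can exit early without a full scan.
func (q *DeletionRetryQueue) GetReadyItems(limit int) []*DeletionRetryItem {
	q.mu.Lock()
	defer q.mu.Unlock()
	now := time.Now()
	var ready []*DeletionRetryItem
	for len(q.heap) > 0 && len(ready) < limit && !q.heap[0].NextRetryAt.After(now) {
		it := heap.Pop(&q.heap).(*DeletionRetryItem)
		delete(q.byFid, it.FileId)
		ready = append(ready, it)
	}
	return ready
}
```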
* Filer: Modernize heap interface and improve error handling docs
1. Replace interface{} with any in heap methods
- Addresses modern Go style (Go 1.18+)
- Improves code readability
2. Enhance isRetryableError documentation
- Acknowledge string matching brittleness
- Add comprehensive TODO for future improvements:
* Use HTTP status codes (503, 429, etc.)
* Implement structured error types with errors.Is/As
* Extract gRPC status codes
* Add error wrapping for better context
- Document each error pattern with context
- Add defensive check for empty error strings
Current implementation remains pragmatic for initial release while
documenting a clear path for future robustness improvements. String
matching is acceptable for now but should be replaced with structured
error checking when refactoring the deletion pipeline.
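A short sketch of the string-matching classifier acknowledged above, with only a few of the patterns; the real list is longer and, as the commit notes, is meant to give way to structured error checks.

```go
package filer

import "strings"

// isRetryableError decides whether a deletion failure is worth re-queuing.
// String matching is brittle but pragmatic until structured errors are wired in.
func isRetryableError(errMsg string) bool {
	if errMsg == "" {
		return false // defensive: empty error strings are never retryable
	}
	msg := strings.ToLower(errMsg)
	patterns := []string{
		"is read only",        // tiered or maintenance volumes
		"timeout",             // network and i/o timeouts
		"connection refused",  // transient network failures
		"service unavailable", // backpressure (HTTP 503)
	}
	for _, p := range patterns {
		if strings.Contains(msg, p) {
			return true
		}
	}
	return false
}
```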
* Filer: Refactor deletion processors for better readability
Extract large callback functions into dedicated private methods to
improve code organization and maintainability.
Changes:
1. Extract processDeletionBatch method
- Handles deletion of a batch of file IDs
- Classifies errors (success, not found, retryable, permanent)
- Manages retry queue additions
- Consolidates logging logic
2. Extract processRetryBatch method
- Handles retry attempts for previously failed deletions
- Processes retry results and updates queue
- Symmetric to processDeletionBatch for consistency
Benefits:
- Main loop functions (loopProcessingDeletion, loopProcessingDeletionRetry)
are now concise and focused on orchestration
- Business logic is separated into testable methods
- Reduced nesting depth improves readability
- Easier to understand control flow at a glance
- Better separation of concerns
The refactored methods follow the single responsibility principle,
making the codebase more maintainable and easier to extend.
* Update weed/filer/filer_deletion.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Filer: Fix critical retry count bug and add comprehensive error patterns
Critical bug fixes from PR review:
1. Fix RetryCount reset bug (CRITICAL)
- Problem: When items are re-queued via AddOrUpdate, RetryCount
resets to 1, breaking exponential backoff
- Solution: Add RequeueForRetry() method that preserves retry state
- Impact: Ensures proper exponential backoff progression
2. Add overflow protection in backoff calculation
- Check shift amount > 63 to prevent bit-shift overflow
- Additional safety: check if delay <= 0 or > MaxRetryDelay
- Protects against arithmetic overflow in extreme cases
3. Expand retryable error patterns
- Added: timeout, deadline exceeded, context canceled
- Added: lookup error/failed (volume discovery issues)
- Added: connection refused, broken pipe (network errors)
- Added: too many requests, service unavailable (backpressure)
- Added: temporarily unavailable, try again (transient errors)
- Added: i/o timeout (network timeouts)
Benefits:
- Retry mechanism now works correctly across restarts
- More robust against edge cases and overflow
- Better coverage of transient failure scenarios
- Improved resilience in high-failure environments
Addresses feedback from CodeRabbit and Gemini Code Assist in PR #7402.
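A sketch of the guarded backoff calculation described above; the constant names follow the commits, while the base-delay constant name and value are assumptions.

```go
package filer

import "time"

const (
	DeletionRetryBaseDelay = 5 * time.Minute // assumed base; the commits describe 5min doubling
	MaxRetryDelay          = 6 * time.Hour
)

// nextRetryDelay doubles per attempt (5m, 10m, 20m, ...) and caps at
// MaxRetryDelay, guarding against bit-shift and arithmetic overflow.
func nextRetryDelay(retryCount int) time.Duration {
	shift := retryCount - 1
	if shift < 0 {
		shift = 0
	}
	if shift > 63 {
		return MaxRetryDelay // shifting an int64 by more than 63 bits overflows
	}
	delay := DeletionRetryBaseDelay << uint(shift)
	if delay <= 0 || delay > MaxRetryDelay {
		return MaxRetryDelay // overflow wrapped negative, or past the cap
	}
	return delay
}
```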
* Filer: Add persistence docs and comprehensive unit tests
Documentation improvements:
1. Document in-memory queue limitation
- Acknowledge that retry queue is volatile (lost on restart)
- Document trade-offs and future persistence options
- Provide clear path for production hardening
- Note eventual consistency through main deletion queue
Unit test coverage:
1. TestDeletionRetryQueue_AddAndRetrieve
- Basic add/retrieve operations
- Verify items not ready before delay elapsed
2. TestDeletionRetryQueue_ExponentialBackoff
- Verify exponential backoff progression (5m→10m→20m→40m→80m)
- Validate delay calculations with timing tolerance
3. TestDeletionRetryQueue_OverflowProtection
- Test high retry counts (60+) that could cause overflow
- Verify capping at MaxRetryDelay
4. TestDeletionRetryQueue_MaxAttemptsReached
- Verify items discarded after MaxRetryAttempts
- Confirm proper queue cleanup
5. TestIsRetryableError
- Comprehensive error pattern coverage
- Test all retryable error types (timeout, connection, lookup, etc.)
- Verify non-retryable errors correctly identified
6. TestDeletionRetryQueue_HeapOrdering
- Verify min-heap property maintained
- Test items processed in NextRetryAt order
- Validate heap.Init() integration
All tests passing. Addresses PR feedback on testing requirements.
* Filer: Add code quality improvements for deletion retry
Address PR feedback with minor optimizations:
- Add MaxLoggedErrorDetails constant (replaces magic number 10)
- Pre-allocate slices and maps in processRetryBatch for efficiency
- Improve log message formatting to use constant
These changes improve code maintainability and runtime performance
without altering functionality.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactoring retrying
* use constant
* assert
* address comment
* refactor
* address comments
* dedup
* process retried deletions
* address comment
* check in-flight items also; dedup code
* refactoring
* refactoring
* simplify
* reset heap
* more efficient
* add DeletionBatchSize as a constant; Permanent > Retryable > Success > Not Found
---------
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* Fix s3 auth with proxy request
* 6649 Add unit test for signature v4
* address comments
* fix for tests
* ipv6
* address comments
* setting scheme
Works for both cases (direct HTTPS and behind proxy)
* trim for ipv6
* Corrected Scheme Precedence Order
* trim
* accurate
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* add fallback for cors
* refactor
* expose aws headers
* add fallback to test
* refactor
* Only falls back to global config when there's explicitly no bucket-level config.
* fmt
* Update s3_cors_http_test.go
* refactoring
* s3: fix if-match error
* add more checks
* minor
* minor
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* Fixed critical bugs in the Azure SDK migration (PR #7310)
fix https://github.com/seaweedfs/seaweedfs/issues/5044
* purge emojis
* conditional delete
* Update azure_sink_test.go
* refactoring
* refactor
* add context to each call
* refactor
* address comments
* refactor
* defer
* DeleteSnapshots
The conditional delete in handleExistingBlob was missing DeleteSnapshots, which would cause the delete operation to fail on Azure storage accounts that have blob snapshots enabled.
* ensure the expected size
* adjust comment
* IAM: add support for advanced IAM config file to server command
* Add support for advanced IAM config file in S3 options
* Fix S3 IAM config handling to simplify checks for configuration presence
* simplify
* simplify again
* copy the value
* const
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
* refactor: add ECContext structure to encapsulate EC parameters
- Create ec_context.go with ECContext struct
- NewDefaultECContext() creates context with default 10+4 configuration
- Helper methods: CreateEncoder(), ToExt(), String()
- Foundation for cleaner function signatures
- No behavior change, still uses hardcoded 10+4
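A sketch of the ECContext described above; the method set follows the commits (CreateEncoder, ToExt, String, plus the later Total() accessor), while the exact fields and package layout are assumptions.

```go
package erasure_coding

import (
	"fmt"

	"github.com/klauspost/reedsolomon"
)

type ECContext struct {
	DataShards   int
	ParityShards int
}

// NewDefaultECContext returns the default 10+4 configuration.
func NewDefaultECContext() *ECContext {
	return &ECContext{DataShards: 10, ParityShards: 4}
}

// Total is computed rather than stored, so the counts can never drift.
func (c *ECContext) Total() int { return c.DataShards + c.ParityShards }

// CreateEncoder builds a Reed-Solomon encoder for this shard layout.
func (c *ECContext) CreateEncoder() (reedsolomon.Encoder, error) {
	return reedsolomon.New(c.DataShards, c.ParityShards)
}

// ToExt returns the shard file extension, e.g. ".ec00" through ".ec13".
func (c *ECContext) ToExt(shardIndex int) string {
	return fmt.Sprintf(".ec%02d", shardIndex)
}

func (c *ECContext) String() string {
	return fmt.Sprintf("%d+%d", c.DataShards, c.ParityShards)
}
```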
* refactor: update ec_encoder.go to use ECContext
- Add WriteEcFilesWithContext() and RebuildEcFilesWithContext() functions
- Keep old functions for backward compatibility (call new versions)
- Update all internal functions to accept ECContext parameter
- Use ctx.DataShards, ctx.ParityShards, ctx.TotalShards consistently
- Use ctx.CreateEncoder() instead of hardcoded reedsolomon.New()
- Use ctx.ToExt() for shard file extensions
- No behavior change, still uses default 10+4 configuration
* refactor: update ec_volume.go to use ECContext
- Add ECContext field to EcVolume struct
- Initialize ECContext with default configuration in NewEcVolume()
- Update LocateEcShardNeedleInterval() to use ECContext.DataShards
- Phase 1: Always uses default 10+4 configuration
- No behavior change
* refactor: add EC shard count fields to VolumeInfo protobuf
- Add data_shards_count field (field 8) to VolumeInfo message
- Add parity_shards_count field (field 9) to VolumeInfo message
- Fields are optional, 0 means use default (10+4)
- Backward compatible: fields added at end
- Phase 1: Foundation for future customization
* refactor: regenerate protobuf Go files with EC shard count fields
- Regenerated volume_server_pb/*.go with new EC fields
- DataShardsCount and ParityShardsCount accessors added to VolumeInfo
- No behavior change, fields not yet used
* refactor: update VolumeEcShardsGenerate to use ECContext
- Create ECContext with default configuration in VolumeEcShardsGenerate
- Use ecCtx.TotalShards and ecCtx.ToExt() in cleanup
- Call WriteEcFilesWithContext() instead of WriteEcFiles()
- Save EC configuration (DataShardsCount, ParityShardsCount) to VolumeInfo
- Log EC context being used
- Phase 1: Always uses default 10+4 configuration
- No behavior change
* fmt
* refactor: update ec_test.go to use ECContext
- Update TestEncodingDecoding to create and use ECContext
- Update validateFiles() to accept ECContext parameter
- Update removeGeneratedFiles() to use ctx.TotalShards and ctx.ToExt()
- Test passes with default 10+4 configuration
* refactor: use EcShardConfig message instead of separate fields
* optimize: pre-calculate row sizes in EC encoding loop
* refactor: replace TotalShards field with Total() method
- Remove TotalShards field from ECContext to avoid field drift
- Add Total() method that computes DataShards + ParityShards
- Update all references to use ctx.Total() instead of ctx.TotalShards
- Read EC config from VolumeInfo when loading EC volumes
- Read data shard count from .vif in VolumeEcShardsToVolume
- Use >= instead of > for exact boundary handling in encoding loops
* optimize: simplify VolumeEcShardsToVolume to use existing EC context
- Remove redundant CollectEcShards call
- Remove redundant .vif file loading
- Use v.ECContext.DataShards directly (already loaded by NewEcVolume)
- Slice tempShards instead of collecting again
* refactor: rename MaxShardId to MaxShardCount for clarity
- Change from MaxShardId=31 to MaxShardCount=32
- Eliminates confusing +1 arithmetic (MaxShardId+1)
- More intuitive: MaxShardCount directly represents the limit
fix: support custom EC ratios beyond 14 shards in VolumeEcShardsToVolume
- Add MaxShardId constant (31, since ShardBits is uint32)
- Use MaxShardId+1 (32) instead of TotalShardsCount (14) for tempShards buffer
- Prevents panic when slicing for volumes with >14 total shards
- Critical fix for custom EC configurations like 20+10
* fix: add validation for EC shard counts from VolumeInfo
- Validate DataShards/ParityShards are positive and within MaxShardCount
- Prevent zero or invalid values that could cause divide-by-zero
- Fallback to defaults if validation fails, with warning log
- VolumeEcShardsGenerate now preserves existing EC config when regenerating
- Critical safety fix for corrupted or legacy .vif files
* fix: RebuildEcFiles now loads EC config from .vif file
- Critical: RebuildEcFiles was always using default 10+4 config
- Now loads actual EC config from .vif file when rebuilding shards
- Validates config before use (positive shards, within MaxShardCount)
- Falls back to default if .vif missing or invalid
- Prevents data corruption when rebuilding custom EC volumes
* add: defensive validation for dataShards in VolumeEcShardsToVolume
- Validate dataShards > 0 and <= MaxShardCount before use
- Prevents panic from corrupted or uninitialized ECContext
- Returns clear error message instead of panic
- Defense-in-depth: validates even though upstream should catch issues
* fix: replace TotalShardsCount with MaxShardCount for custom EC ratio support
Critical fixes to support custom EC ratios > 14 shards:
disk_location_ec.go:
- validateEcVolume: Check shards 0-31 instead of 0-13 during validation
- removeEcVolumeFiles: Remove shards 0-31 instead of 0-13 during cleanup
ec_volume_info.go ShardBits methods:
- ShardIds(): Iterate up to MaxShardCount (32) instead of TotalShardsCount (14)
- ToUint32Slice(): Iterate up to MaxShardCount (32)
- IndexToShardId(): Iterate up to MaxShardCount (32)
- MinusParityShards(): Remove shards 10-31 instead of 10-13 (added note about Phase 2)
- Minus() shard size copy: Iterate up to MaxShardCount (32)
- resizeShardSizes(): Iterate up to MaxShardCount (32)
Without these changes:
- Custom EC ratios > 14 total shards would fail validation on startup
- Shards 14-31 would never be discovered or cleaned up
- ShardBits operations would miss shards >= 14
These changes are backward compatible - MaxShardCount (32) includes
the default TotalShardsCount (14), so existing 10+4 volumes work as before.
* fix: replace TotalShardsCount with MaxShardCount in critical data structures
Critical fixes for buffer allocations and loops that must support
custom EC ratios up to 32 shards:
Data Structures:
- store_ec.go:354: Buffer allocation for shard recovery (bufs array)
- topology_ec.go:14: EcShardLocations.Locations fixed array size
- command_ec_rebuild.go:268: EC shard map allocation
- command_ec_common.go:626: Shard-to-locations map allocation
Shard Discovery Loops:
- ec_task.go:378: Loop to find generated shard files
- ec_shard_management.go: All 8 loops that check/count EC shards
These changes are critical because:
1. Buffer allocations sized to 14 would cause index-out-of-bounds panics
when accessing shards 14-31
2. Fixed arrays sized to 14 would truncate shard location data
3. Loops limited to 0-13 would never discover/manage shards 14-31
Note: command_ec_encode.go:208 intentionally NOT changed - it creates
shard IDs to mount after encoding. In Phase 1 we always generate 14
shards, so this remains TotalShardsCount and will be made dynamic in
Phase 2 based on actual EC context.
Without these fixes, custom EC ratios > 14 total shards would cause:
- Runtime panics (array index out of bounds)
- Data loss (shards 14-31 never discovered/tracked)
- Incomplete shard management (missing shards not detected)
* refactor: move MaxShardCount constant to ec_encoder.go
Moved MaxShardCount from ec_volume_info.go to ec_encoder.go to group it
with other shard count constants (DataShardsCount, ParityShardsCount,
TotalShardsCount). This improves code organization and makes it easier
to understand the relationship between these constants.
Location: ec_encoder.go line 22, between TotalShardsCount and MinTotalDisks
* improve: add defensive programming and better error messages for EC
Code review improvements from CodeRabbit:
1. ShardBits Guardrails (ec_volume_info.go):
- AddShardId, RemoveShardId: Reject shard IDs >= MaxShardCount
- HasShardId: Return false for out-of-range shard IDs
- Prevents silent no-ops from bit shifts with invalid IDs
2. Future-Proof Regex (disk_location_ec.go):
- Updated regex from \.ec[0-9][0-9] to \.ec\d{2,3}
- Now matches .ec00 through .ec999 (currently .ec00-.ec31 used)
- Supports future increases to MaxShardCount beyond 99
3. Better Error Messages (volume_grpc_erasure_coding.go):
- Include valid range (1..32) in dataShards validation error
- Helps operators quickly identify the problem
4. Validation Before Save (volume_grpc_erasure_coding.go):
- Validate ECContext (DataShards > 0, ParityShards > 0, Total <= MaxShardCount)
- Log EC config being saved to .vif for debugging
- Prevents writing invalid configs to disk
These changes improve robustness and debuggability without changing
core functionality.
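A sketch of the ShardBits guardrails described above: ShardBits is a uint32 bitmask of shard ids, so any id at or beyond MaxShardCount (32) is rejected instead of silently shifting past the word.

```go
package erasure_coding

const MaxShardCount = 32 // ShardBits is a uint32, so only ids 0..31 are representable

type ShardId uint8
type ShardBits uint32

func (b ShardBits) AddShardId(id ShardId) ShardBits {
	if id >= MaxShardCount {
		return b // out-of-range ids are ignored rather than silently no-oping via overflow
	}
	return b | (1 << id)
}

func (b ShardBits) RemoveShardId(id ShardId) ShardBits {
	if id >= MaxShardCount {
		return b
	}
	return b &^ (1 << id)
}

func (b ShardBits) HasShardId(id ShardId) bool {
	if id >= MaxShardCount {
		return false
	}
	return b&(1<<id) != 0
}
```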
* fmt
* fix: critical bugs from code review + clean up comments
Critical bug fixes:
1. command_ec_rebuild.go: Fixed indentation causing compilation error
- Properly nested if/for blocks in registerEcNode
2. ec_shard_management.go: Fixed isComplete logic incorrectly using MaxShardCount
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Default 10+4 volumes were being incorrectly reported as incomplete
- Missing shards 14-31 were being incorrectly reported as missing
- Fixed in 4 locations: volume completeness checks and getMissingShards
3. ec_volume_info.go: Fixed MinusParityShards removing too many shards
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Was incorrectly removing shard IDs 10-31 instead of just 10-13
Comment cleanup:
- Removed Phase 1/Phase 2 references (development plan context)
- Replaced with clear statements about default 10+4 configuration
- SeaweedFS repo uses fixed 10+4 EC ratio, no phases needed
Root cause: Over-aggressive replacement of TotalShardsCount with MaxShardCount.
MaxShardCount (32) is the limit for buffer allocations and shard ID loops,
but TotalShardsCount (14) must be used for default EC configuration logic.
* fix: add defensive bounds checks and compute actual shard counts
Critical fixes from code review:
1. topology_ec.go: Add defensive bounds checks to AddShard/DeleteShard
- Prevent panic when shardId >= MaxShardCount (32)
- Return false instead of crashing on out-of-range shard IDs
2. command_ec_common.go: Fix doBalanceEcShardsAcrossRacks
- Was using hardcoded TotalShardsCount (14) for all volumes
- Now computes actual totalShardsForVolume from rackToShardCount
- Fixes incorrect rebalancing for volumes with custom EC ratios
- Example: 5+2=7 shards would incorrectly use 14 as average
These fixes improve robustness and prepare for future custom EC ratios
without changing current behavior for default 10+4 volumes.
Note: MinusParityShards and ec_task.go intentionally NOT changed for
seaweedfs repo - these will be enhanced in seaweed-enterprise repo
where custom EC ratio configuration is added.
* fmt
* style: make MaxShardCount type casting explicit in loops
Improved code clarity by explicitly casting MaxShardCount to the
appropriate type when used in loop comparisons:
- ShardId comparisons: Cast to ShardId(MaxShardCount)
- uint32 comparisons: Cast to uint32(MaxShardCount)
Changed in 5 locations:
- Minus() loop (line 90)
- ShardIds() loop (line 143)
- ToUint32Slice() loop (line 152)
- IndexToShardId() loop (line 219)
- resizeShardSizes() loop (line 248)
This makes the intent explicit and improves type safety readability.
No functional changes - purely a style improvement.
* handle incomplete ec encoding
* unit tests
* simplify, and better logs
* Update disk_location_ec.go
When loadEcShards() fails partway through, some EC shards may already be loaded into the l.ecVolumes map in memory. The previous code only cleaned up filesystem files but left orphaned in-memory state, which could cause memory leaks and inconsistent state.
* address comments
* Performance: Avoid Double os.Stat() Call
* Platform Compatibility: Use filepath.Join
* in memory cleanup
* Update disk_location_ec.go
* refactor
* Added Shard Size Validation
* check ec shard sizes
* validate shard size
* calculate expected shard size
* refactoring
* minor
* fix shard directory
* 10GB sparse files can be slow or fail on non-sparse FS. Use 10MB to hit SmallBlockSize math (1MB shards) deterministically.
* grouping logic should be updated to use both collection and volumeId to ensure correctness
* unexpected error
* handle exceptions in tests; use constants
* The check for orphaned shards should be performed for the previous volume before resetting sameVolumeShards for the new volume.
* address comments
* Eliminated Redundant Parsing in checkOrphanedShards
* minor
* Avoid misclassifying local EC as distributed when .dat stat errors occur; also standardize unload-before-remove.
* fmt
* refactor
* refactor
* adjust to warning
* batch deletion operations to return individual error results
Modify batch deletion operations to return individual error results instead of one aggregated error, enabling better tracking of which specific files failed to delete (helping reduce orphan file issues).
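A minimal sketch of the per-file result shape this commit describes; the real SeaweedFS types differ, but the point is one result per file id rather than one aggregated error.

```go
package filer

type DeleteResult struct {
	FileId string
	Error  error // nil on success; inspected per file instead of joined into one error
}

// deleteFileIds returns a result for every input id, so callers can requeue or
// log exactly the ids that failed, reducing orphaned files.
func deleteFileIds(fileIds []string, deleteOne func(fileId string) error) []DeleteResult {
	results := make([]DeleteResult, 0, len(fileIds))
	for _, fid := range fileIds {
		results = append(results, DeleteResult{FileId: fid, Error: deleteOne(fid)})
	}
	return results
}
```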
* Simplified logging logic
* Optimized nested loop
* handles the edge case where the RPC succeeds but connection cleanup fails
* simplify
* simplify
* ignore 'not found' errors here
* Added a helper function `isHelpRequest()`
* also handles combined short flags like -lh or -hl
* Created handleHelpRequest() helper function
encapsulates both:
Checking for help flags
Printing the help message
* Limit to reasonable length (2-4 chars total) to avoid matching long options like -verbose
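A sketch of the two helpers described in the commits above; the helper names match the commit text, while the exact flag set and the handling of the 2-4 character limit are assumptions.

```go
package shell

import (
	"fmt"
	"strings"
)

// isHelpRequest reports whether an argument asks for help, including combined
// short flags such as -lh or -hl, while ignoring long options like -verbose.
func isHelpRequest(arg string) bool {
	switch arg {
	case "-h", "--help", "-help", "help":
		return true
	}
	// combined short flags: keep the total length to 2-4 characters
	return strings.HasPrefix(arg, "-") && !strings.HasPrefix(arg, "--") &&
		len(arg) >= 2 && len(arg) <= 4 && strings.Contains(arg[1:], "h")
}

// handleHelpRequest prints usage if any argument is a help flag and reports
// whether it did so, so the caller can return early.
func handleHelpRequest(args []string, usage string) bool {
	for _, arg := range args {
		if isHelpRequest(arg) {
			fmt.Println(usage)
			return true
		}
	}
	return false
}
```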
* Store shell command in history before parsing
Store the shell command in history before parsing it. This will allow users to press the 'Up' arrow and see the entire command.
* [Admin UI] Login not possible due to securecookie error
* avoid 404 favicon
* Update weed/admin/dash/auth_middleware.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* address comments
* avoid variable over shadowing
* log session save error
* When jwt.signing.read.key is enabled in security.toml, the volume server requires JWT tokens for all read operations.
* reuse fileId
* refactor
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fix: Use a mix of virtual and path styles within a single subdomain
* address comments
* add tests
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>