Tree: c2e22bd11e

Branches:
add-ec-vacuum
add-filer-iam-grpc
add-iam-grpc-management
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
admin/csrf-s3tables
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
feature-8113-storage-class-disk-routing
feature/mini-port-detection
feature/modernize-s3-tests
feature/s3-multi-cert-support
feature/s3tables-improvements-and-spark-tests
filer1_maintenance_branch
fix-8301-source-ip-forwarded
fix-8303-s3-lifecycle-ttl-assign
fix-GetObjectLockConfigurationHandler
fix-bucket-name-case-7910
fix-helm-fromtoml-compatibility
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-pr-7909
fix-s3-configure-consistency
fix-s3-object-tagging-issue-7589
fix-sts-session-token-7941
fix-versioning-listing-only
fix-volume-fsck-auth-8230
fix/iceberg-stage-create-semantics
fix/windows-test-file-cleanup
ftp
gh-pages
iam-multi-file-migration
iam-permissions-and-api
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
pr-7412
pr/7984
pr/8140
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
s3tables-by-claude
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks

Tags:
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
4.05
4.06
4.07
4.08
4.09
4.12
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
993 Commits (c2e22bd11e68bdfdb8234adcef85bebf4d6557d5)

1a5679a5eb (2 days ago)
Implement a `VolumeEcStatus()` RPC for volume servers. (#8006)
Just like `VolumeStatus()`, this call allows inspecting details for a given EC volume, including the number of files and their total size.

be6b5db65a (3 days ago)
s3: fix health check endpoints returning 404 for HEAD requests #8243 (#8248)
* Fix disk errors handling in vacuum compaction. When a disk reports IO errors during vacuum compaction (e.g., 'read /mnt/d1/weed/oc_xyz.dat: input/output error'), the vacuum task should signal the error to the master so it can: 1. drop the faulty volume replica; 2. rebuild the replica from healthy copies. Changes:
  - Add checkReadWriteError() calls in vacuum read paths (ReadNeedleBlob, ReadData, ScanVolumeFile) to flag EIO errors in volume.lastIoError
  - Preserve error wrapping using %w format instead of %v so EIO propagates correctly
  - The existing heartbeat logic will detect lastIoError and remove the bad volume
  Fixes issue #8237
* error
* s3: fix health check endpoints returning 404 for HEAD requests #8243
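
The HEAD fix above comes down to a health endpoint answering both GET and HEAD probes instead of treating HEAD as an unknown route. A minimal sketch of that behavior with Go's standard net/http, using a hypothetical `/healthz` path rather than SeaweedFS's actual router code:

```go
package main

import (
	"fmt"
	"net/http"
)

// healthzHandler answers both GET and HEAD probes. Load balancers often use
// HEAD, so rejecting anything but GET makes an otherwise healthy service
// appear down with a 404/405.
func healthzHandler(w http.ResponseWriter, r *http.Request) {
	switch r.Method {
	case http.MethodGet, http.MethodHead:
		w.WriteHeader(http.StatusOK)
		if r.Method == http.MethodGet {
			fmt.Fprintln(w, "ok") // HEAD responses carry headers only, no body
		}
	default:
		w.WriteHeader(http.StatusMethodNotAllowed)
	}
}

func main() {
	http.HandleFunc("/healthz", healthzHandler)
	http.ListenAndServe(":8333", nil)
}
```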

330ba7d9dc (4 days ago)
Fix disk errors handling in vacuum compaction (#8244)
When a disk reports IO errors during vacuum compaction (e.g., 'read /mnt/d1/weed/oc_xyz.dat: input/output error'), the vacuum task should signal the error to the master so it can: 1. drop the faulty volume replica; 2. rebuild the replica from healthy copies. Changes:
- Add checkReadWriteError() calls in vacuum read paths (ReadNeedleBlob, ReadData, ScanVolumeFile) to flag EIO errors in volume.lastIoError
- Preserve error wrapping using %w format instead of %v so EIO propagates correctly
- The existing heartbeat logic will detect lastIoError and remove the bad volume
Fixes issue #8237
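
The vacuum fix relies on two details the message calls out: wrapping read errors with `%w` so the underlying errno survives several layers of callers, and checking for an I/O error so the replica can be flagged. A rough sketch of that pattern under assumed names (`readBlob` and `isDiskError` are illustrative, not the real SeaweedFS symbols):

```go
package main

import (
	"errors"
	"fmt"
	"os"
	"syscall"
)

// readBlob wraps any read failure with %w so callers can still inspect the
// underlying errno (e.g. EIO) after the error has passed through other layers.
func readBlob(f *os.File, buf []byte, off int64) error {
	if _, err := f.ReadAt(buf, off); err != nil {
		return fmt.Errorf("read %s at %d: %w", f.Name(), off, err)
	}
	return nil
}

// isDiskError reports whether err is an I/O error that should mark the
// volume replica as bad so the master can rebuild it from healthy copies.
func isDiskError(err error) bool {
	return errors.Is(err, syscall.EIO)
}

func main() {
	f, err := os.Open(os.DevNull)
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()
	if err := readBlob(f, make([]byte, 8), 0); err != nil && isDiskError(err) {
		fmt.Println("flagging volume as bad:", err)
	}
}
```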

c284e51d20 (5 days ago)
fix: multipart upload ETag calculation (#8238)
* fix multipart etag * address comments * clean up * clean up * optimization * address comments * unquoted etag * dedup * upgrade * clean * etag * return quoted tag * quoted etag * debug
* s3api: unify ETag retrieval and quoting across handlers. Refactor newListEntry to take *S3ApiServer and use getObjectETag, and update setResponseHeaders to use the same logic. This ensures consistent ETags are returned for both listing and direct access.
* s3api: implement ListObjects deduplication for versioned buckets. Handle duplicate entries between the main path and the .versions directory by prioritizing the latest version when bucket versioning is enabled.
* s3api: clean up stale main file entries during versioned uploads. Add explicit deletion of pre-existing "main" files when creating new versions in versioned buckets. This prevents stale entries from appearing in bucket listings and ensures consistency.
* s3api: fix cleanup code placement in versioned uploads. Correct the placement of rm calls in completeMultipartUpload and putVersionedObject to ensure stale main files are properly deleted during versioned uploads.
* s3api: improve getObjectETag fallback for empty ExtETagKey. Ensure that when ExtETagKey exists but contains an empty value, the function falls through to MD5/chunk-based calculation instead of returning an empty string.
* s3api: fix test files for the new newListEntry signature. Update test files to use the new newListEntry signature where the first parameter is *S3ApiServer. Created mockS3ApiServer to properly test owner display name lookup functionality.
* s3api: use filer.ETag for consistent Md5 handling in getEtagFromEntry. Change the getEtagFromEntry fallback to use filer.ETag(entry) instead of filer.ETagChunks to ensure legacy entries with Attributes.Md5 are handled consistently with the rest of the codebase.
* s3api: optimize list logic and fix conditional header logging. Hoist the bucket versioning check out of the per-entry callback to avoid repeated getVersioningState calls; extract an appendOrDedup helper function to eliminate duplicate dedup/append logic across multiple code paths; change If-Match mismatch logging from glog.Errorf to glog.V(3).Infof and remove the DEBUG prefix for consistency.
* s3api: fix test mock to properly initialize IAM accounts. Fixed a nil pointer dereference in TestNewListEntryOwnerDisplayName by directly initializing the IdentityAccessManagement.accounts map in the test setup. This ensures newListEntry can properly look up account display names without panicking.
* cleanup
* s3api: remove premature main file cleanup in versioned uploads. Removed incorrect cleanup logic that was deleting main files during versioned uploads. This was causing test failures because it deleted objects that should have been preserved as null versions when versioning was first enabled. The deduplication logic in listing is sufficient to handle duplicate entries without deleting files during upload.
* s3api: add empty-value guard to getEtagFromEntry. Added the same empty-value guard used in getObjectETag to prevent returning quoted empty strings. When ExtETagKey exists but is empty, the function now falls through to filer.ETag calculation instead of returning "".
* s3api: fix listing of directory key objects with matching prefix. Revert prefix handling logic to use strings.TrimPrefix instead of checking HasPrefix with an empty string result. This ensures that when a directory key object exactly matches the prefix (e.g. prefix="dir/", object="dir/"), it is correctly handled as a regular entry instead of being skipped or incorrectly processed as a common prefix. Also fixed a missing variable definition.
* s3api: refactor list inline dedup to use the appendOrDedup helper. Refactored the inline deduplication logic in listFilerEntries to use the shared appendOrDedup helper function. This ensures consistent behavior and reduces code duplication.
* test: fix port allocation race in s3tables integration test. Updated startMiniCluster to find all required ports simultaneously using findAvailablePorts instead of sequentially. This prevents race conditions where the OS reallocates a port that was just released, causing multiple services (e.g. Filer and Volume) to be assigned the same port and fail to start.
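
The ETag commit describes a fallback chain: a stored extended-attribute ETag first, then the whole-object MD5, then a multipart-style value derived from chunk ETags, always returned quoted. The sketch below illustrates that kind of chain only; the `Entry` struct and its fields are hypothetical, not SeaweedFS's actual `getObjectETag`:

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"strings"
)

// Entry is a stand-in for a filer entry: a stored ETag attribute, an MD5, and per-chunk ETags.
type Entry struct {
	ExtETag    string   // value of a stored extended attribute, may be empty
	Md5        []byte   // whole-object MD5, may be nil
	ChunkETags []string // per-chunk ETags for multipart objects
}

// objectETag prefers the stored attribute, falls back to MD5, then to a
// multipart-style "md5-of-md5s-N" value; the result is always quoted.
func objectETag(e Entry) string {
	if v := strings.Trim(e.ExtETag, `"`); v != "" { // an empty attribute falls through
		return `"` + v + `"`
	}
	if len(e.Md5) > 0 {
		return `"` + hex.EncodeToString(e.Md5) + `"`
	}
	h := md5.New()
	for _, c := range e.ChunkETags {
		b, _ := hex.DecodeString(strings.Trim(c, `"`))
		h.Write(b) // S3 multipart ETags hash the binary chunk MD5s
	}
	return fmt.Sprintf(`"%s-%d"`, hex.EncodeToString(h.Sum(nil)), len(e.ChunkETags))
}

func main() {
	fmt.Println(objectETag(Entry{ChunkETags: []string{"0cc175b9c0f1b6a831c399e269772661"}}))
}
```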

2cda4289f4 (5 days ago)
Add a version token on RPCs to read/update volume server states. (#8191)
* Add a version token on `GetState()`/`SetState()` RPCs for volume server states.
* Make the state version a property of `VolumeServerState` instead of an in-memory counter. Also extend state atomicity to reads, instead of just writes.
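
The version-token change is a standard optimistic-concurrency guard: reads return the state together with its version, and a write is rejected unless the caller echoes the version it read. A minimal in-memory sketch, with illustrative types and method names rather than the actual RPC surface:

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// VolumeServerState is a stand-in for the persisted state plus its version token.
type VolumeServerState struct {
	mu       sync.Mutex
	version  uint64
	readOnly bool
}

var ErrStaleVersion = errors.New("state changed since it was read; re-read and retry")

// Get returns the state atomically with the version token the caller must echo back.
func (s *VolumeServerState) Get() (readOnly bool, version uint64) {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.readOnly, s.version
}

// Set applies the update only if the caller's token still matches, then bumps the version.
func (s *VolumeServerState) Set(readOnly bool, version uint64) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	if version != s.version {
		return ErrStaleVersion
	}
	s.readOnly = readOnly
	s.version++
	return nil
}

func main() {
	var s VolumeServerState
	_, v := s.Get()
	fmt.Println(s.Set(true, v))  // <nil>
	fmt.Println(s.Set(false, v)) // stale: the version advanced after the first Set
}
```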

f84b70c362 (6 days ago)
Implement index (fast) scrubbing for regular/EC volumes. (#8207)
Implement index (fast) scrubbing for regular/EC volumes via `ScrubVolume()`/`ScrubEcVolume()`. Also rearranges existing index test files for reuse across unit tests for different modules.

82d9d8687b (6 days ago)
Fix concurrent map access in EC shards info (#8222)
* fix concurrent map access in EC shards info #8219
* refactor: simplify Disk.ToDiskInfo to use an ecShards snapshot and avoid redundant locking
* refactor: improve GetEcShards with pre-allocation and defer
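
The concurrent-map fix above follows the common snapshot pattern: copy the shard map while holding the lock, then build the outgoing message from the copy without holding it. A generic sketch of that pattern (types and field names are hypothetical):

```go
package main

import (
	"fmt"
	"sync"
)

type ShardInfo struct {
	ID   int
	Size int64
}

type Disk struct {
	mu       sync.RWMutex
	ecShards map[int]ShardInfo
}

// snapshotShards copies the map under a read lock so callers can iterate the
// result without racing with writers that add or remove shards.
func (d *Disk) snapshotShards() []ShardInfo {
	d.mu.RLock()
	defer d.mu.RUnlock()
	out := make([]ShardInfo, 0, len(d.ecShards)) // pre-allocate to avoid regrowth
	for _, s := range d.ecShards {
		out = append(out, s)
	}
	return out
}

func main() {
	d := &Disk{ecShards: map[int]ShardInfo{1: {1, 8 << 20}, 5: {5, 8 << 20}}}
	for _, s := range d.snapshotShards() { // iteration happens lock-free on the copy
		fmt.Println(s.ID, s.Size)
	}
}
```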

7169fe585d (1 week ago)
Delete verify_gc_empty_test.go

2bb21ea276 (1 week ago)
feat: Add Iceberg REST Catalog server and admin UI (#8175)
* feat: Add Iceberg REST Catalog server. Implement the Iceberg REST Catalog API on a separate port (default 8181) that exposes S3 Tables metadata through the Apache Iceberg REST protocol: add a new weed/s3api/iceberg package with REST handlers; implement the /v1/config endpoint returning catalog configuration; implement namespace endpoints (list/create/get/head/delete) and table endpoints (list/create/load/head/delete/update); add a -port.iceberg flag to the S3 standalone server (s3.go), a -s3.port.iceberg flag to combined server mode (server.go), and a -s3.port.iceberg flag to mini cluster mode (mini.go); support prefix-based routing for multiple catalogs. The Iceberg REST server reuses S3 Tables metadata storage under /table-buckets and enables DuckDB, Spark, and other Iceberg clients to connect to SeaweedFS as a catalog.
* feat: Add Iceberg Catalog pages to the admin UI. Add admin UI pages to browse Iceberg catalogs, namespaces, and tables: an Iceberg Catalog menu item under Object Store navigation; iceberg_catalog.templ showing a catalog overview with REST info; iceberg_namespaces.templ listing namespaces in a catalog; iceberg_tables.templ listing tables in a namespace; handlers and routes in admin_handlers.go; Iceberg data provider methods in s3tables_management.go; Iceberg data types in types.go. The Iceberg Catalog pages provide visibility into the same S3 Tables data through an Iceberg-centric lens, including REST endpoint examples for DuckDB and PyIceberg.
* test: Add Iceberg catalog integration tests and reorganize s3tables tests. Reorganize existing s3tables tests to test/s3tables/table-buckets/; add a new test/s3tables/catalog/ for Iceberg REST catalog tests; add TestIcebergConfig to verify the /v1/config endpoint, TestIcebergNamespaces to verify namespace listing, and TestDuckDBIntegration for DuckDB connectivity (requires Docker); update the CI workflow to use the new test paths.
* fix: Generate proper random UUIDs for Iceberg tables. Address code review feedback: replace the placeholder UUID with crypto/rand-based UUID v4 generation, and add detailed TODO comments for the handleUpdateTable stub explaining the required atomic metadata swap implementation.
* fix: Serve Iceberg on the localhost listener when binding to a different interface. Address code review feedback: properly serve the localhost listener when the Iceberg server is bound to a non-localhost interface.
* ci: Add Iceberg catalog integration tests to CI. Add a new job to run Iceberg catalog tests in CI, along with Iceberg package build verification, unit tests, go vet checks, and format checks.
* fix: Address code review feedback for the Iceberg implementation: replace the hardcoded account ID with s3_constants.AccountAdminId in buildTableBucketARN(); improve UUID generation error handling with a deterministic fallback (timestamp + PID + counter); update handleUpdateTable to return HTTP 501 Not Implemented instead of fake success; better error handling in handleNamespaceExists to distinguish 404 from 500 errors; use a relative URL in the template instead of hardcoded localhost:8181; add an HTTP timeout to the test's waitForService function to avoid hangs; use dynamic ephemeral ports in integration tests to avoid flaky parallel failures; add the Iceberg port to the final port configuration logging in mini.go.
* fix: Address critical issues in the Iceberg implementation: cache table UUIDs to ensure persistence across LoadTable calls (the UUID now remains stable for the lifetime of the server session; TODO: for production, UUIDs should be persisted in S3 Tables metadata); remove redundant URL-encoded namespace parsing (the mux router already decodes %1F to \x1F before passing to handlers, and the redundant ReplaceAll call could cause bugs with a literal %1F in a namespace).
* fix: Improve test robustness and reduce code duplication: make the DuckDB test more robust by failing on unexpected errors (instead of silently logging them, explicitly check for expected conditions such as the extension not being available and skip the test appropriately); extract a getUsername() helper in AdminHandlers to avoid duplicating the username retrieval logic across Iceberg page handlers.
* fix: Add mutex protection to the table UUID cache. Protects concurrent access to the tableUUIDs map with sync.RWMutex: a read lock for the fast path when the UUID is already cached, a write lock for generating new UUIDs, and a double-check pattern to handle the race between read-unlock and write-lock.
* style: fix go fmt errors
* feat(iceberg): persist table UUID in S3 Tables metadata
* feat(admin): configure Iceberg port in Admin UI and commands
* refactor: address review comments (flags, tests, handlers): command/mini fixes tracking of the explicit s3.port.iceberg flag; command/admin adds an explicit -iceberg.port flag; admin/handlers reuses the getUsername helper; tests use 127.0.0.1 for ephemeral ports and os.Stat for the file size check.
* test: check error from FileStat in verify_gc_empty_test
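
Two details from the Iceberg commit are easy to show in isolation: generating a random (version 4) UUID from crypto/rand, and protecting a lazily filled table-UUID cache with an RWMutex plus a double-check after upgrading to the write lock. This is a sketch with hypothetical names, not the actual catalog code:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"sync"
)

// newUUIDv4 builds an RFC 4122 version-4 UUID from crypto/rand.
func newUUIDv4() (string, error) {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	b[6] = (b[6] & 0x0f) | 0x40 // version 4
	b[8] = (b[8] & 0x3f) | 0x80 // RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16]), nil
}

type uuidCache struct {
	mu    sync.RWMutex
	byKey map[string]string
}

// get returns a stable UUID per table, generating one on first use.
func (c *uuidCache) get(table string) (string, error) {
	c.mu.RLock()
	if id, ok := c.byKey[table]; ok {
		c.mu.RUnlock()
		return id, nil
	}
	c.mu.RUnlock()

	c.mu.Lock()
	defer c.mu.Unlock()
	if id, ok := c.byKey[table]; ok { // double-check: another goroutine may have won the race
		return id, nil
	}
	id, err := newUUIDv4()
	if err != nil {
		return "", err
	}
	c.byKey[table] = id
	return id, nil
}

func main() {
	c := &uuidCache{byKey: map[string]string{}}
	id1, _ := c.get("ns.table")
	id2, _ := c.get("ns.table")
	fmt.Println(id1 == id2) // true: the UUID is stable for the process lifetime
}
```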

ff5a8f0579 (1 week ago)
Implement RPC skeleton for regular/EC volumes scrubbing. (#8187)
* Implement RPC skeleton for regular/EC volumes scrubbing. See https://github.com/seaweedfs/seaweedfs/issues/8018 for details.
* Minor proto improvements for `ScrubVolume()`, `ScrubEcVolume()`:
  - Add fields for scrubbing details in `ScrubVolumeResponse` and `ScrubEcVolumeResponse`, instead of reporting these through RPC errors.
  - Return a list of broken shards when scrubbing EC volumes, via `EcShardInfo`.

345ac950b6 (1 week ago)
Add volume server RPCs to read and update state flags. (#8186)
* Bootstrap persistent state for volume servers. This PR implements logic to load/save persistent state information for storages associated with volume servers, and to report state changes back to masters via heartbeat messages. More work ensues! See https://github.com/seaweedfs/seaweedfs/issues/7977 for details.
* Add volume server RPCs to read and update state flags.

f23e09f58b (1 week ago)
fix: skip exhausted blocks before creating an interval (#8180)
* fix: skip exhausted blocks before creating an interval
* refactor: optimize interval creation and fix logic duplication
* docs: add docstring for LocateData
* refactor: extract moveToNextBlock helper to deduplicate logic
* fix: use int64 for block index comparison to prevent overflow
* test: add unit test for LocateData boundary crossing (issue #8179)
* fix: skip exhausted blocks to prevent negative interval size and panics (issue #8179)
* refactor: apply review suggestions for test maintainability and code style
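
The LocateData fix boils down to advancing past blocks whose bytes are already consumed before emitting an interval, so an interval can never come out with a zero or negative size, and to using an int64 block index. A simplified sketch of that idea; the block and interval types here are illustrative stand-ins, not the real erasure-coding structures:

```go
package main

import "fmt"

type block struct{ size int64 }

type interval struct {
	blockIndex int64 // int64 so large indexes cannot overflow an int on 32-bit builds
	offset     int64
	size       int64
}

// locate returns the intervals covering [offset, offset+size) within a sequence
// of blocks, skipping any block that the running offset has already exhausted.
func locate(blocks []block, offset, size int64) []interval {
	var out []interval
	var idx int64
	for size > 0 && idx < int64(len(blocks)) {
		b := blocks[idx]
		if offset >= b.size { // block fully consumed: skip instead of emitting a zero/negative interval
			offset -= b.size
			idx++
			continue
		}
		n := b.size - offset
		if n > size {
			n = size
		}
		out = append(out, interval{blockIndex: idx, offset: offset, size: n})
		offset = 0
		size -= n
		idx++
	}
	return out
}

func main() {
	// Starts exactly on a block boundary, the case the boundary-crossing test targets.
	fmt.Println(locate([]block{{4}, {4}, {4}}, 4, 6)) // [{1 0 4} {2 0 2}]
}
```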

550a4ff761 (2 weeks ago)
Fix inconsistent TTL reporting in volume.list #8158 (#8164)
* fix inconsistent TTL reporting in volume.list #8158
* simplify volume.list output using vi.String()

5c8de5e282 (2 weeks ago)
fix: close volumes and EC shards in tests for Windows compatibility (#8152)
* fix: close volumes and EC shards in tests to prevent Windows cleanup failures. On Windows, t.TempDir() cleanup fails when test files are still open because Windows enforces mandatory file locking. Add defer v.Close(), defer store.Close(), and EC volume cleanup to ensure all file handles are released before temp directory removal.
* refactor: extract closeEcVolumes helper to reduce duplication. Address code review feedback by extracting the repeated EC volume cleanup loop into a closeEcVolumes() helper function.
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
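
The Windows test fix above is about releasing file handles before `t.TempDir()` cleanup runs, since Windows will not delete files that are still open. A minimal illustration of the defer-Close pattern in a `_test.go` file, with a plain os.File standing in for a volume:

```go
package storage

import (
	"os"
	"path/filepath"
	"testing"
)

// TestTempDirCleanup shows the pattern: the handle must be released before the
// testing framework removes t.TempDir(), or cleanup fails on Windows.
func TestTempDirCleanup(t *testing.T) {
	dir := t.TempDir()
	f, err := os.Create(filepath.Join(dir, "volume.dat"))
	if err != nil {
		t.Fatal(err)
	}
	defer f.Close() // without this, Windows keeps the file locked and TempDir cleanup errors out

	if _, err := f.WriteString("needle data"); err != nil {
		t.Fatal(err)
	}
}
```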

25a4691135 (3 weeks ago)
Update store_ec_recovery_test.go

7e81c0bf0d (3 weeks ago)
Clarify errors upon needle CRC mismatches. (#8096)

e717a63665 (3 weeks ago)
Fix EC shard recovery with improved diagnostics (#8091)
* storage: fix EC shard recovery with improved diagnostics and logging
  - Fix buffer size mismatch in ReconstructData call
  - Add detailed logging of available and missing shards
  - Improve error messages when recovery is impossible
  - Add unit tests for EC recovery shard counting logic
* test: refine EC recovery unit tests
  - Remove redundant tests that only validate setup
  - Use standard strings.Contains instead of custom recursive helper
* adjust tests and minor improvement

3e5d34dd67 (3 weeks ago)
skip md5 validation if Content-MD5 is not provided

a473278bfa (4 weeks ago)
Fix: Fail fast on unsupported volume versions (#8047)
* Fix: Fail fast when initializing volume with Version 0
* Fix: Fail fast when loading unsupported volume version (e.g. 0 or 4)
* Refactor: Use IsSupportedVersion helper function for version validation
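
The fail-fast change centralizes version checking in one small helper instead of letting an unknown on-disk version propagate. A sketch of what such a guard can look like; the constants and names here are illustrative, not the actual SeaweedFS version constants:

```go
package main

import "fmt"

type Version uint8

const (
	Version1 Version = 1
	Version2 Version = 2
	Version3 Version = 3
)

// IsSupportedVersion rejects zero values and anything newer than this build understands.
func IsSupportedVersion(v Version) bool {
	return v >= Version1 && v <= Version3
}

// loadVolume fails fast instead of proceeding with a superblock version it cannot interpret.
func loadVolume(v Version) error {
	if !IsSupportedVersion(v) {
		return fmt.Errorf("volume version %d is not supported (supported: %d..%d)", v, Version1, Version3)
	}
	return nil
}

func main() {
	fmt.Println(loadVolume(0)) // error: fail fast on version 0
	fmt.Println(loadVolume(3)) // <nil>
}
```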

ba74185700 (4 weeks ago)
fix: CompactMap race condition causing runtime panic (#8029)
Fixed a critical race condition in CompactMap where the Set(), Delete(), and Get() methods had issues with concurrent map access.
Root cause: segmentForKey() can create new map segments, which modifies the cm.segments map. Calling this under a read lock caused concurrent map write panics when multiple goroutines accessed the map simultaneously (e.g., during VolumeNeedleStatus gRPC calls).
Changes:
- Set() method: changed RLock/RUnlock to Lock/Unlock
- Delete() method: changed RLock/RUnlock to Lock/Unlock, and optimized to avoid creating empty segments when the key doesn't exist
- Get() method: removed the segmentForKey() call to avoid the race condition; now checks segment existence directly and returns early if the segment doesn't exist (optimization: avoids unnecessary segment creation)
This fix resolves the runtime/maps.fatal panic that occurred under concurrent load. Tested with the race detector: go test -v -race ./weed/storage/needle_map/...
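
The CompactMap fix is a classic RWMutex pitfall: a method that looks like a read may lazily create a new segment, and inserting into a map under a read lock is a data race. The sketch below shows the corrected split under hypothetical names (not the real CompactMap layout): the writer takes the full lock because it may mutate the segments map, and the reader only inspects existing segments under a read lock.

```go
package main

import (
	"fmt"
	"sync"
)

type segment map[uint64]int64

type CompactMap struct {
	mu       sync.RWMutex
	segments map[uint64]segment
}

func segKey(key uint64) uint64 { return key >> 10 }

// Set may create a new segment, which mutates cm.segments, so it must hold the write lock.
func (cm *CompactMap) Set(key uint64, offset int64) {
	cm.mu.Lock()
	defer cm.mu.Unlock()
	s, ok := cm.segments[segKey(key)]
	if !ok {
		s = segment{}
		cm.segments[segKey(key)] = s
	}
	s[key] = offset
}

// Get never creates segments, so a read lock is sufficient and race-free.
func (cm *CompactMap) Get(key uint64) (int64, bool) {
	cm.mu.RLock()
	defer cm.mu.RUnlock()
	s, ok := cm.segments[segKey(key)]
	if !ok {
		return 0, false // missing segment: return early instead of creating one
	}
	v, ok := s[key]
	return v, ok
}

func main() {
	cm := &CompactMap{segments: map[uint64]segment{}}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(n uint64) { defer wg.Done(); cm.Set(n, int64(n)) }(uint64(i))
	}
	wg.Wait()
	fmt.Println(cm.Get(2)) // 2 true
}
```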

6b0eade6d4 (1 month ago)
storage: upgrade protobuf API in store_state.go

587e782feb (1 month ago)
storage: use non-blocking send to StateUpdateChan
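
A non-blocking channel send is the usual select-with-default idiom: if the receiver is not keeping up, the update is dropped instead of stalling the caller. A sketch with hypothetical channel and message names; the drop is tolerable in this context only because periodic heartbeats resend full state anyway, as described in the surrounding commits.

```go
package main

import "fmt"

type stateUpdate struct{ volumeID uint32 }

// notify sends without blocking: when the channel is full the update is dropped,
// relying on the periodic heartbeat to carry the state later.
func notify(ch chan stateUpdate, u stateUpdate) bool {
	select {
	case ch <- u:
		return true
	default:
		return false // receiver busy or absent; do not deadlock the caller
	}
}

func main() {
	ch := make(chan stateUpdate, 1)
	fmt.Println(notify(ch, stateUpdate{1})) // true: buffered slot available
	fmt.Println(notify(ch, stateUpdate{2})) // false: buffer full, dropped
}
```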

2af293ce60 (1 month ago)
Bootstrap persistent state for volume servers. (#7984)
This PR implements logic to load/save persistent state information for storages associated with volume servers, and to report state changes back to masters via heartbeat messages. More work ensues! See https://github.com/seaweedfs/seaweedfs/issues/7977 for details.

9012069bd7 (1 month ago)
chore: execute goimports to format the code (#7983)
* chore: execute goimports to format the code
* goimports -w .
Signed-off-by: promalert <promalert@outlook.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>

e10f11b480 (1 month ago)
opt: reduce ShardsInfo memory usage with bitmap and sorted slice (#7974)
* opt: reduce ShardsInfo memory usage with bitmap and sorted slice: replace map[ShardId]*ShardInfo with a sorted []ShardInfo slice; add a ShardBits (uint32) bitmap for O(1) existence checks; use binary search for O(log n) lookups by shard ID; maintain sorted order for efficient iteration; add comprehensive unit tests and benchmarks. Memory savings: ~48 bytes of map overhead and 8 bytes of pointer per entry eliminated, about ~56 bytes saved per shard. Performance: Has() is O(1) using the bitmap; Size() is O(log n) using binary search (was O(1), an acceptable tradeoff); Count() is O(1) using popcount on the bitmap; iteration is faster due to cache locality.
* refactor: add Has(), Set(), Clear(), and Count() methods to the ShardBits type and simplify ShardsInfo methods by using them; improves code readability and encapsulation.
* opt: use ShardBits directly in ShardsCountFromVolumeEcShardInformationMessage. Avoid creating a full ShardsInfo object just to count shards; directly cast vi.EcIndexBits to ShardBits and use the Count() method.
* opt: use strings.Builder in ShardsInfo.String() for efficiency
* refactor: change AsSlice to return []ShardInfo (values instead of pointers). This completes the memory optimization by avoiding unnecessary pointer slices and potential allocations.
* refactor: rename ShardsCountFromVolumeEcShardInformationMessage to GetShardCount
* fix: prevent deadlock in the Add and Subtract methods. Copy shards data from 'other' before releasing its lock to avoid a potential deadlock when a.Add(b) and b.Add(a) are called concurrently. The previous implementation held other's lock while calling si.Set/Delete, which acquires si's lock, and could deadlock if two goroutines tried to add/subtract each other concurrently.
* opt: avoid unnecessary locking in constructor functions. ShardsInfoFromVolume and ShardsInfoFromVolumeEcShardInformationMessage now build the shards slice and bitmap directly without calling Set(), which acquires a lock on every call. Since the object is local and not yet shared, locking is unnecessary and adds overhead; this improves performance during object construction.
* fix: rename the 'copy' variable to avoid shadowing the built-in function. The variable name 'copy' in TestShardsInfo_Copy shadowed the built-in copy() function, which is confusing and bad practice; renamed to 'siCopy'.
* opt: use math/bits.OnesCount32 and reorganize types: replace the manual popcount loop with math/bits.OnesCount32 for better performance and idiomatic Go code, and move the ShardSize type definition to ec_shards_info.go since it is primarily used there.
* refactor: Set() now accepts ShardInfo for future extensibility. Changed Set(id ShardId, size ShardSize) to Set(shard ShardInfo) so new fields can be added to ShardInfo (e.g., checksum, location) without breaking the Set API.
* refactor: move ShardInfo and ShardSize to a separate file. Created ec_shard_info.go to hold the basic shard types for better code organization and separation of concerns.
* refactor: add a NewShardInfo() constructor and an IsValid() method to better encapsulate ShardInfo creation and validation, and update code to use the constructor.
* fix: update remaining Set() calls to use the NewShardInfo constructor. Fixed compilation errors in the storage and shell packages where Set() calls had not been updated.
* fix: remove unreachable code in filer backup commands. Removed unreachable return statements after infinite loops in filer_backup.go and filer_meta_backup.go to fix compilation errors.
* fix: rename the 'new' variable to avoid shadowing the built-in. Renamed 'new' to 'result' in the MinusParityShards, Plus, and Minus methods.
* fix: update remaining test files to use the NewShardInfo constructor. Fixed Set() calls in command_volume_list_test.go and ec_rebalance_slots_test.go.
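
The core of the ShardsInfo optimization is a 32-bit bitmap keyed by shard id: presence checks become a mask test and counting becomes a popcount. A standalone sketch of that piece (the real type also carries per-shard sizes in a sorted slice, which is omitted here):

```go
package main

import (
	"fmt"
	"math/bits"
)

// ShardBits records which of the (at most 32) EC shard ids are present.
type ShardBits uint32

func (b ShardBits) Has(id uint8) bool        { return b&(1<<id) != 0 }
func (b ShardBits) Set(id uint8) ShardBits   { return b | 1<<id }
func (b ShardBits) Clear(id uint8) ShardBits { return b &^ (1 << id) }
func (b ShardBits) Count() int               { return bits.OnesCount32(uint32(b)) }

func main() {
	var b ShardBits
	for _, id := range []uint8{1, 5, 9} {
		b = b.Set(id)
	}
	fmt.Println(b.Has(5), b.Has(6), b.Count()) // true false 3
	b = b.Clear(5)
	fmt.Println(b.Count()) // 2
}
```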

ec1c27a4b3 (1 month ago)
storage/needle: add bounds check for WriteNeedleBlob buffer (#7973)
* storage/needle: add bounds check for WriteNeedleBlob buffer
* storage/needle: use int offsets when checking/writing Version3 timestamp
* Apply suggestion from @gemini-code-assist[bot]
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
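
The bounds-check commit guards a fixed-size write into a caller-supplied buffer. In Go that amounts to validating the length before slicing, as in this simplified stand-in; the offset and field width here are illustrative, not the actual needle layout:

```go
package main

import (
	"encoding/binary"
	"fmt"
)

const tsOffset, tsSize = 16, 8 // illustrative layout: an 8-byte timestamp at byte 16

// writeTimestamp refuses to write past the end of buf instead of panicking on a short slice.
func writeTimestamp(buf []byte, ts uint64) error {
	if need := tsOffset + tsSize; len(buf) < need { // plain int arithmetic, checked before slicing
		return fmt.Errorf("buffer too small: need %d bytes, have %d", need, len(buf))
	}
	binary.BigEndian.PutUint64(buf[tsOffset:tsOffset+tsSize], ts)
	return nil
}

func main() {
	fmt.Println(writeTimestamp(make([]byte, 8), 42))  // error, no panic
	fmt.Println(writeTimestamp(make([]byte, 32), 42)) // <nil>
}
```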

ade2e58cf4 (1 month ago)
refactoring

4e2af080df (1 month ago)
optimize: enable immediate EC shard reporting during startup (#7933)
Ported the immediate EC shard reporting feature from Enterprise to the Community version. This allows the master to be notified about EC shards immediately during volume server startup, instead of waiting for the first heartbeat.
Changes:
1. Updated NewStore to initialize notification channels BEFORE loading volumes (fixes a potential nil panic).
2. Added ecShardNotifyHandler to report EC shards to NewEcShardsChan during startup.
3. Implemented a non-blocking channel send for EC reporting to prevent deadlock when loading many EC shards (fixing the enterprise bug

6b98b52acc (1 month ago)
Fix reporting of EC shard sizes from nodes to masters. (#7835)
SeaweedFS tracks EC shard sizes in its topology data structures, but this information is never relayed to master servers. The end result is that commands reporting disk usage, such as `volume.list` and `cluster.status`, yield incorrect figures when EC shards are present.
As an example, for a simple 5-node test cluster, before...
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[1 5]
Disk hdd total size:88967096 file_count:172
DataNode 192.168.10.111:9001 total size:88967096 file_count:172
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[0 4]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9002 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[2 6]
Disk hdd total size:77267536 file_count:166
DataNode 192.168.10.111:9003 total size:77267536 file_count:166
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[3 7]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9004 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[8 9 10 11 12 13]
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:498703896 file_count:1014
DataCenter DefaultDataCenter total size:498703896 file_count:1014
total size:498703896 file_count:1014
```
...and after:
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[1 5 9] sizes:[1:8.00 MiB 5:8.00 MiB 9:8.00 MiB] total:24.00 MiB
Disk hdd total size:81761800 file_count:161
DataNode 192.168.10.111:9001 total size:81761800 file_count:161
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[11 12 13] sizes:[11:8.00 MiB 12:8.00 MiB 13:8.00 MiB] total:24.00 MiB
Disk hdd total size:88678712 file_count:170
DataNode 192.168.10.111:9002 total size:88678712 file_count:170
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[0 4 8] sizes:[0:8.00 MiB 4:8.00 MiB 8:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9003 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[2 6 10] sizes:[2:8.00 MiB 6:8.00 MiB 10:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9004 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[3 7] sizes:[3:8.00 MiB 7:8.00 MiB] total:16.00 MiB
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:511321536 file_count:993
DataCenter DefaultDataCenter total size:511321536 file_count:993
total size:511321536 file_count:993
```

2f6aa98221 (2 months ago)
Refactor: Replace removeDuplicateSlashes with NormalizeObjectKey (#7873)
* Replace removeDuplicateSlashes with NormalizeObjectKey. Use s3_constants.NormalizeObjectKey instead of removeDuplicateSlashes in most places for consistency. NormalizeObjectKey handles both duplicate slash removal and ensuring the path starts with '/', providing more complete normalization.
* Fix double slash issues after NormalizeObjectKey. After using NormalizeObjectKey, object keys have a leading '/'. This commit ensures getVersionedObjectDir strips the leading slash before concatenation, getEntry calls receive names without a leading slash, and string concatenation with '/' doesn't create '//' paths. This prevents path construction errors like /buckets/bucket//object (wrong) versus /buckets/bucket/object (correct).
* ensure object key leading "/"
* fix compilation
* fix: Strip leading slash from object keys in S3 API responses. After introducing NormalizeObjectKey, all internal object keys have a leading slash, but S3 API responses must return keys without leading slashes to match AWS S3 behavior. Fixed in three functions: addVersion (version list entries), processRegularFile (regular file entries), and processExplicitDirectory (directory entries). This ensures ListObjectVersions and similar APIs return keys like 'bar' instead of '/bar', matching S3 API specifications.
* fix: Normalize keyMarker for consistent pagination comparison. The S3 API provides keyMarker without a leading slash (e.g., 'object-001'), but after introducing NormalizeObjectKey, all internal object keys have leading slashes (e.g., '/object-001'). When comparing keyMarker < normalizedObjectKey in shouldSkipObjectForMarker, the ASCII value of '/' (47) is less than 'o' (111), causing all objects to be incorrectly skipped during pagination, so page 2 and beyond returned 0 results. Fix: normalize the keyMarker when creating versionCollector so comparisons work correctly with normalized object keys. Fixes the TestVersioningPaginationOver1000Versions and TestVersioningPaginationMultipleObjectsManyVersions pagination tests.
* refactor: Change NormalizeObjectKey to return keys without a leading slash. BREAKING STRATEGY CHANGE: previously, NormalizeObjectKey added a leading slash to all object keys, which required stripping it when returning keys to S3 API clients and caused complexity in marker normalization for pagination. New strategy: NormalizeObjectKey now returns keys WITHOUT a leading slash (e.g., 'foo/bar' not '/foo/bar'), which matches the S3 API format directly; all path concatenations now explicitly add '/' between bucket and object; there is no need to strip slashes in responses or normalize markers. Changes: 1. modified NormalizeObjectKey to strip the leading slash instead of adding it; 2. fixed all path concatenations to use BucketsPath + '/' + bucket + '/' + object instead of BucketsPath + '/' + bucket + object; 3. reverted response key stripping in addVersion(), processRegularFile(), and processExplicitDirectory(); 4. reverted keyMarker normalization in findVersionsRecursively(); 5. updated matchesPrefixFilter() to work with keys without a leading slash; 6. fixed paths in handlers: s3api_object_handlers.go (GetObject, HeadObject, cacheRemoteObjectForStreaming), s3api_object_handlers_postpolicy.go, s3api_object_handlers_tagging.go, s3api_object_handlers_acl.go, s3api_version_id.go (getVersionedObjectDir, getVersionIdFormat), and s3api_object_versioning.go (getObjectVersionList, updateLatestVersionAfterDeletion). All versioning tests pass, including pagination stress tests.
* adjust format
* Update post policy tests to match the new NormalizeObjectKey behavior: TestPostPolicyKeyNormalization, TestNormalizeObjectKey, and TestPostPolicyFilenameSubstitution now expect keys without leading slashes, and path construction in tests uses the new BucketsPath + '/' + bucket + '/' + object pattern.
* Fix ListObjectVersions prefix filtering. Remove the leading slash addition to the prefix parameter to allow correct filtering of .versions directories when listing object versions with a specific prefix. The prefix parameter should match entry paths relative to the bucket root; adding a leading slash was breaking the prefix filter for paginated requests. Fixes the pagination issue where the second page returned 0 versions instead of continuing with the remaining versions.
* no leading slash
* Fix urlEscapeObject to add a leading slash for filer paths. NormalizeObjectKey now returns keys without leading slashes to match the S3 API format, but urlEscapeObject is used for filer paths, which require leading slashes. Add the leading slash back after normalization to ensure filer paths are correct. Fixes TestS3ApiServer_toFilerPath test failures.
* adjust tests
* normalize
* Fix: Normalize prefixes and markers in LIST operations using NormalizeObjectKey. Ensure consistent key normalization across all S3 operations (GET, PUT, LIST). Previously, LIST operations were not applying the same normalization rules (handling backslashes, duplicate slashes, leading slashes) as GET/PUT operations. Changes: updated normalizePrefixMarker() to call NormalizeObjectKey for both prefix and marker, so prefixes with leading slashes, backslashes, or duplicate slashes are handled consistently with how object keys are normalized; fixes Parquet test failures where pads.write_dataset creates implicit directory structures that couldn't be discovered by subsequent LIST operations; added TestPrefixNormalizationInList and TestListPrefixConsistency tests. All existing LIST tests continue to pass with the normalization improvements.
* Add debugging logging to LIST operations to track prefix normalization
* Fix: Remove the leading slash addition from GetPrefix to work with NormalizeObjectKey. The NormalizeObjectKey function removes leading slashes to match the S3 API format (e.g., 'foo/bar' not '/foo/bar'), but GetPrefix was adding a leading slash back, which caused LIST operations to fail with incorrect path handling. Now GetPrefix only normalizes duplicate slashes without adding a leading slash, which allows the NormalizeObjectKey changes to work correctly for S3 LIST operations. All Parquet integration tests now pass (20/20).
* Fix: Handle object paths without a leading slash in checkDirectoryObject. NormalizeObjectKey() removes the leading slash to match the S3 API format, but checkDirectoryObject() was assuming the object path has a leading slash when processing directory markers (paths ending with '/'). Now the object is given a leading slash before processing it for filer operations. Fixes the implicit directory marker test (explicit_dir/) while keeping Parquet integration tests passing (20/20). All tests pass: implicit directory tests 6/6, Parquet integration tests 20/20.
* Fix: Handle explicit directory markers with trailing slashes. Explicit directory markers created with put_object(Key='dir/', ...) are stored in the filer with the trailing slash as part of the name. The checkDirectoryObject() function now checks for both: 1. explicit directories, looked up with the trailing slash preserved (e.g., 'explicit_dir/'); 2. implicit directories, looked up without the trailing slash (e.g., 'implicit_dir'). This ensures both types of directory markers are properly recognized. All tests pass: implicit directory tests 6/6 (including the explicit directory marker test), Parquet integration tests 20/20.
* Fix: Preserve the trailing slash in NormalizeObjectKey. NormalizeObjectKey now preserves trailing slashes when normalizing object keys, which matters for explicit directory markers like 'explicit_dir/' that rely on the trailing slash to be recognized as directory objects. The normalization process: 1. note whether a trailing slash was present; 2. remove duplicate slashes and convert backslashes; 3. remove the leading slash for the S3 API format; 4. restore the trailing slash if it was in the original. This ensures explicit directory markers created with put_object(Key='dir/', ...) are properly normalized and can be looked up by their exact name. All tests pass: implicit directory tests 6/6, Parquet integration tests 20/20.
* clean object
* Fix: Don't restore the trailing slash if the result is empty. When normalizing paths that are only slashes (e.g., '///', '/'), the function should return an empty string, not a single slash. The fix ensures the trailing slash is only restored if the result is non-empty. This fixes the 'just_slashes' test case: input '///', expected '', previously '/', now ''. All tests now pass: TestNormalizeObjectKey unit tests 13/13, implicit directory tests 6/6, Parquet integration tests 20/20.
* prefixEndsOnDelimiter
* Update s3api_object_handlers_list.go
* Update s3api_object_handlers_list.go
* handle create directory
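
The refactor above settles on a specific normalization contract: backslashes become slashes, duplicate slashes collapse, the leading slash is removed so keys match the S3 wire format, a trailing slash is preserved for explicit directory markers, and an all-slash input normalizes to the empty string. Below is a sketch of a function with that contract; it is an illustration, not the actual s3_constants.NormalizeObjectKey implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeObjectKey applies the rules described in the commit message above.
func normalizeObjectKey(key string) string {
	key = strings.ReplaceAll(key, `\`, "/")
	hadTrailingSlash := strings.HasSuffix(key, "/")

	for strings.Contains(key, "//") { // collapse duplicate slashes
		key = strings.ReplaceAll(key, "//", "/")
	}
	key = strings.TrimPrefix(key, "/") // S3 keys carry no leading slash
	key = strings.TrimSuffix(key, "/")

	if hadTrailingSlash && key != "" { // keep directory markers like "explicit_dir/"
		key += "/"
	}
	return key
}

func main() {
	for _, in := range []string{"/foo//bar", `foo\bar`, "explicit_dir/", "///"} {
		fmt.Printf("%-14q -> %q\n", in, normalizeObjectKey(in))
	}
	// "/foo//bar" -> "foo/bar", `foo\bar` -> "foo/bar",
	// "explicit_dir/" -> "explicit_dir/", "///" -> ""
}
```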

3613279f25
Add 'weed mini' command for S3 beginners and small/dev use cases (#7831)
* Add 'weed mini' command for S3 beginners and small/dev use cases
This new command simplifies starting SeaweedFS by combining all components
in one process with optimized settings for development and small deployments.
Features:
- Starts master, volume, filer, S3, WebDAV, and admin in one command
- Volume size limit: 64MB (optimized for small files)
- Volume max: 0 (auto-configured based on free disk space)
- Pre-stop seconds: 1 (faster shutdown for development)
- Master peers: none (single master mode by default)
- Includes admin UI with one worker for maintenance tasks
- Clean, user-friendly startup message with all endpoint URLs
Usage:
weed mini # Use default temp directory
weed mini -dir=/data # Custom data directory
This makes it much easier for:
- Developers getting started with SeaweedFS
- Testing and development workflows
- Learning S3 API with SeaweedFS
- Small deployments that don't need complex clustering
* Change default volume server port to 9340 to avoid popular port 8080
* Fix nil pointer dereference by initializing all required volume server fields
Added missing VolumeServerOptions field initializations:
- id, publicUrl, diskType
- maintenanceMBPerSecond, ldbTimeout
- concurrentUploadLimitMB, concurrentDownloadLimitMB
- pprof, idxFolder
- inflightUploadDataTimeout, inflightDownloadDataTimeout
- hasSlowRead, readBufferSizeMB
This resolves the panic that occurred when starting the volume server.
* Fix multiple nil pointer dereferences in mini command
Added missing field initializations for:
- Master options: raftHashicorp, raftBootstrap, telemetryUrl, telemetryEnabled
- Filer options: filerGroup, saveToFilerLimit, concurrentUploadLimitMB,
concurrentFileUploadLimit, localSocket, showUIDirectoryDelete,
downloadMaxMBps, diskType, allowedOrigins, exposeDirectoryData, tusBasePath
- Volume options: id, publicUrl, diskType, maintenanceMBPerSecond, ldbTimeout,
concurrentUploadLimitMB, concurrentDownloadLimitMB, pprof, idxFolder,
inflightUploadDataTimeout, inflightDownloadDataTimeout, hasSlowRead, readBufferSizeMB
- WebDAV options: tlsPrivateKey, tlsCertificate, filerRootPath
- Admin options: master
These initializations are required to avoid runtime panics when starting components.
* Fix remaining S3 option nil pointers in mini command
* Update mini command: 256MB volume size and add S3 access instructions for beginners
* mini: set default master.volumeSizeLimitMB to 128MB and update help/banner text
* mini: shorten S3 help text to a concise pointer to docs/Admin UI
* mini: remove duplicated component bullet list, use concise sentence
* mini: tidy help alignment and update example usage
* mini: default -dir to current directory
* mini: load initial S3 credentials from env and write IAM config
* mini: use AWS env vars for initial S3 creds; instruct to create via Admin UI if absent
* Improve startup synchronization with channel-based coordination
- Replace fragile time.Sleep delays with robust channel-based synchronization
- Implement proper service dependency ordering (Master → Volume → Filer → S3/WebDAV/Admin)
- Add sync.WaitGroup for goroutine coordination
- Add startup readiness logging for better visibility
- Implement 10-second timeout for admin server startup
- Remove arbitrary sleep delays for faster, more reliable startup
- Services now start deterministically based on dependencies, not timing
This makes the startup process more reliable and eliminates race conditions on slow systems or under load.
* Refactor service startup logic for better maintainability
Extract service startup into dedicated helper functions:
- startMiniServices(): Orchestrates all service startup with dependency coordination
- startServiceWithCoordination(): Starts services with readiness signaling
- startServiceWithoutReady(): Starts services without readiness signaling
- startS3Service(): Encapsulates S3 initialization logic
Benefits:
- Reduced code duplication in runMini()
- Clearer separation of concerns
- Easier to add new services or modify startup sequence
- More testable code structure
- Improved readability with explicit service names and logging
* Remove unused serviceStartupInfo struct type
- Delete the serviceStartupInfo struct that was defined but never used
- Improves code clarity by removing dead code
- All service startup is now handled directly by helper functions
* Preserve existing IAM config file instead of truncating
- Use os.Stat to check if IAM config file already exists
- Only create and write configuration if file doesn't exist
- Log appropriate messages for each case:
* File exists: skip writing, preserve existing config
* File absent: create with os.OpenFile and write new config
* Stat error: log error without overwriting
- Set *miniIamConfig only when new file is successfully created
- Use os.O_CREATE|os.O_WRONLY flags for safe file creation
- Handles file operations with proper error checking and cleanup
* Fix CodeQL security issue: prevent logging of sensitive S3 credentials
- Add createdInitialIAM flag to track when initial IAM config is created from env vars
- Set flag in startS3Service() when new IAM config is successfully written
- Update welcome message to inform user of credential creation without exposing secrets
- Print only the username (mini) and config file location to user
- Never print access keys or secret keys in clear text
- Maintain security while keeping user informed of what was created
- Addresses CodeQL finding: Clear-text logging of sensitive information
* Fix three code review issues in weed mini command
1. Fix deadlock in service startup coordination:
- Run blocking service functions (startMaster, startFiler, etc.) in separate goroutines
- This allows readyChan to be closed and prevents indefinite blocking
- Services now start concurrently instead of sequentially blocking the coordinator
2. Use shared grace.StartDebugServer for consistency:
- Replace inline debug server startup with grace.StartDebugServer
- Improves code consistency with other commands (master, filer, etc.)
- Removes net/http import which is no longer needed
3. Simplify IAM config file cleanup with defer:
- Use 'defer f.Close()' instead of multiple f.Close() calls
- Ensures file is closed regardless of which code path is taken
- Improves robustness and code clarity
* fmt
* Fix: Remove misleading 'service is ready' logs
The previous fix removed 'go' from service function calls but left misleading
'service is ready' log messages. The service helpers now correctly:
- Call fn() directly (blocking) instead of 'go fn()' (non-blocking)
- Remove the 'service is ready' message that was printed before the service
actually started running
- Services run as blocking goroutines within the coordinator goroutine,
which keeps them alive while the program runs
- The readiness channels still work correctly because they're closed when
the coordinator finishes waiting for dependencies
* Update mini.go
* Fix four code review issues in weed mini command
1. Use restrictive file permissions (0600) for IAM config:
- Changed from 0644 to 0600 when creating iam_config.json
- Prevents world-readable access to sensitive AWS credentials
- Protects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
2. Remove unused sync.WaitGroup:
- Removed WaitGroup that was never waited on
- All services run as blocking goroutines in the coordinator
- Main goroutine blocks indefinitely with select{}
- Removes unnecessary complexity without changing behavior
3. Merge service startup helper functions:
- Combined startServiceWithCoordination and startServiceWithoutReady
- Made readyChan optional (nil for services without readiness signaling)
- Reduces code duplication and improves maintainability
- Both services now use single startServiceWithCoordination function
4. Fix admin server readiness check:
- Removed misleading timeout channel that never closed at startup
- Replaced with simple 2-second sleep before worker startup
- startAdminServer() blocks indefinitely, so channel would only close on shutdown
- Explicit sleep is clearer about the startup coordination intent
* Fix three code quality issues in weed mini command
1. Define volume configuration as named constants:
- Added miniVolumeMaxDataVolumeCounts = "0"
- Added miniVolumeMinFreeSpace = "1"
- Added miniVolumeMinFreeSpacePercent = "1"
- Removed local variable assignments in Volume startup
- Improves maintainability and documents configuration intent
2. Fix deadlock in startServiceWithCoordination:
- Changed from 'defer close(readyChan)' with blocking fn() to running fn() in goroutine
- Close readyChan immediately after launching service goroutine
- Prevents deadlock where fn() never returns, blocking defer execution
- Allows dependent services to start without waiting for blocking call
3. Improve admin server readiness check:
- Replaced fixed 2-second sleep with polling the gRPC port
- Polls up to 20 times (10 seconds total) with 500ms intervals
- Uses net.DialTimeout to check if port is available
- Properly handles IPv6 addresses using net.JoinHostPort
- Logs progress and warnings about connection status
- More robust than sleep against server startup timing variations
4. Add net import for network operations (IPv6 support)
Also fixed IAM config file close error handling to properly check error
from f.Close() and log any failures, preventing silent data loss on NFS.
* Document environment variable setup for S3 credentials
Updated welcome message to explain two ways to create S3 credentials:
1. Environment variables (recommended for quick setup):
- Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
- Run 'weed mini -dir=/data'
- Creates initial 'mini' user credentials automatically
2. Admin UI (for managing multiple users and policies):
- Open http://localhost:23646 (Admin UI)
- Add identities to create new S3 credentials
This gives users clear guidance on the easiest way to get started with S3
credentials while also explaining the more advanced option for multiple users.
* Print welcome message after all services are running
Moved the welcome message printing from immediately after startMiniServices()
to after all services have been started and are ready. This ensures users see
the welcome message only after startup is complete, not mixed with startup logs.
Changes:
- Extract welcome message logic into printWelcomeMessage() function
- Call printWelcomeMessage() after startMiniServices() completes
- Change message from 'are starting' to 'are running and ready to use'
- This provides cleaner startup output without interleaved logs
* Wait for all services to complete before printing welcome message
The welcome message should only appear after all services are fully running and
the worker is connected. This prevents the message from appearing too early before
startup logs complete.
Changes:
- Pass allServicesReady channel through startMiniServices()
- Add adminReadyChan to track when admin/worker startup completes
- Signal allServicesReady when admin service is fully ready
- Wait for allServicesReady in runMini() before printing welcome message
- This ensures clean output: startup logs first, then welcome message once ready
Now the user sees all startup activity, then a clear welcome message when
everything is truly ready to use.
* Fix welcome message timing: print after worker is fully started
The welcome message was printing too early because allServicesReady was being
closed when the Admin service goroutine started, not when it actually completed.
The Admin service launches startMiniAdminWithWorker() which is a blocking call
that doesn't return until the worker is fully connected.
Now allServicesReady is passed through to startMiniWorker() which closes it
after the worker successfully starts and connects to the admin server.
This ensures the welcome message only appears after:
- Master is ready
- Volume server is ready
- Filer is ready
- S3 service is ready
- WebDAV service is ready
- Admin server is ready
- Worker is connected and running
All startup logs appear first, then the clean welcome message at the end.
* Wait for S3 and WebDAV services to be ready before showing welcome message
The welcome message was printing before S3 and WebDAV servers had fully
initialized. Now the readiness flow is:
1. Master → ready
2. Volume → ready
3. Filer → ready
4. S3 → ready (signals s3ReadyChan)
5. WebDAV → ready (signals webdavReadyChan)
6. Admin/Worker → starts, then waits for both S3 and WebDAV
7. Welcome message prints (all services truly ready)
Changes:
- Add s3ReadyChan and webdavReadyChan to service startup
- Pass S3 and WebDAV ready channels through to Admin service
- Admin/Worker waits for both S3 and WebDAV before closing allServicesReady
- This ensures welcome message appears only when all services are operational
* Admin service should wait for Filer, S3, and WebDAV to be ready
Admin service depends on Filer being operational since it uses the filer
for credential storage. It also makes sense to wait for S3 and WebDAV
since they are user-facing services that should be ready before Admin.
Updated dependencies:
- Admin now waits for: Master, Filer, S3, WebDAV
- This ensures all critical services are operational before Admin starts
- Welcome message will print only after all services including Admin are ready
* Add initialization delay for S3 and WebDAV services
S3 and WebDAV servers need extra time to fully initialize and start listening
after their service functions are launched. Added a 1-second delay after
launching S3 and WebDAV goroutines before signaling readiness.
This ensures the welcome message doesn't print until both services have
emitted their startup logs and are actually serving requests.
* Increase service initialization wait times for more reliable startup
- Increase S3 and WebDAV initialization delay from 1s to 2s to ensure they emit startup logs before welcome message
- Add 1s initialization delay for Filer to ensure it's listening
- Increase admin gRPC polling timeout from 10s to 20s to ensure admin server is fully ready
- This ensures welcome message prints only after all services are fully initialized and ready to accept requests
* Increase service wait time to 10 seconds for reliable startup
All services now wait 10 seconds after launching to ensure they are fully initialized and ready before signaling readiness to dependent services. This ensures the welcome message prints only after all services have fully started.
* Replace fixed 10s delay with intelligent port polling for service readiness
Instead of waiting a fixed 10 seconds for each service, now polls the service
port to check if it's actually accepting connections. This eliminates unnecessary
waiting and allows services to signal readiness as soon as they're ready.
- Polls each service port with up to 30 attempts (6 seconds total)
- Each attempt waits 200ms before retrying
- Stops polling immediately once service is ready
- Falls back gracefully if service is unknown
- Significantly faster startup sequence while maintaining reliability
* Replace channel-based coordination with HTTP pinging for service readiness
Instead of using channels to coordinate service startup, now uses HTTP GET requests
to ping each service endpoint to check if it's ready to accept connections.
Key changes:
- Removed all readiness channels (masterReadyChan, volumeReadyChan, etc.)
- Simplified startMiniServices to use sequential HTTP polling for each service
- startMiniService now just starts the service with logging
- waitForServiceReady uses HTTP client to ping service endpoints (max 6 seconds)
- waitForAdminServerReady uses HTTP GET to check admin server availability
- startMiniAdminWithWorker and startMiniWorker simplified without channel parameters
Benefits:
- Cleaner, more straightforward code
- HTTP pinging is more reliable than TCP port probing
- Services signal readiness through their HTTP endpoints
- Eliminates channel synchronization complexity
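A minimal sketch of the HTTP-polling idea described above (the function name, URL handling, and retry constants are illustrative assumptions, not the actual `weed mini` implementation):

```go
// Package readiness sketches the HTTP-based readiness check described above.
package readiness

import (
	"net/http"
	"time"
)

// waitForHTTPReady pings a service URL until it answers or ~6 seconds elapse
// (30 attempts x 200ms). Any HTTP response, even an error status, means the
// listener is up and serving.
func waitForHTTPReady(url string) bool {
	client := &http.Client{Timeout: 200 * time.Millisecond}
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			return true
		}
		time.Sleep(200 * time.Millisecond)
	}
	return false
}
```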
* log level
* Remove overly specific comment from volume size limit in welcome message
The '(good for small files)' comment is too limiting. The 128MB volume size
limit works well for general use cases, not just small files. Simplified the
message to just show the value.
* Ensure allServicesReady channel is always closed via defer
Add 'defer close(allServicesReady)' at the start of startMiniAdminWithWorker
to guarantee the channel is closed on ALL exit paths (normal and error).
This prevents the caller waiting on <-allServicesReady from ever hanging,
while removing the explicit close() at the successful end prevents panic
from double-close.
This makes the code more robust by:
- Guaranteeing channel closure even if worker setup fails
- Eliminating the possibility of caller hanging on errors
- Following Go defer patterns for resource cleanup
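A small sketch of the defer-close pattern described above, with hypothetical function names standing in for the real ones:

```go
package readiness

import "errors"

// connectWorker is a stand-in for the blocking worker startup; it is purely
// illustrative and not part of the real command.
func connectWorker() error { return errors.New("worker unavailable") }

// startAdminWithWorker shows the defer-close pattern: the readiness channel
// is guaranteed to be closed on every exit path, so a caller blocked on
// <-ready can never hang, and because close() runs exactly once there is no
// risk of a double-close panic.
func startAdminWithWorker(ready chan struct{}) error {
	defer close(ready)

	if err := connectWorker(); err != nil {
		return err // deferred close still runs
	}
	// ... start serving ...
	return nil
}
```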
* Enhance health check polling for more robust service coordination
The service startup already uses HTTP health checks via waitForServiceReady()
to verify services are actually accepting connections. This commit improves
the health check implementation:
Changes:
- Elevated success logging to Info level so users see when services become ready
- Improved error messages to clarify that health check timeouts are not fatal
- Services continue startup even if health checks timeout (they may still work)
- Consistent handling of health check results across all services
This provides better visibility into service startup while maintaining the
existing robust coordination via HTTP pinging rather than just TCP port checks.
* Implement stricter error handling for robust mini server startup
Apply all PR review feedback to ensure the mini server fails fast and clearly
when critical components cannot start:
Changes:
1. Remove redundant miniVolumeMinFreeSpacePercent constant
- Simplified util.MustParseMinFreeSpace() call to use single parameter
2. Make service readiness checks fatal errors:
- Master, Volume, Filer, S3, WebDAV health check failures now return errors
- Prevents partially-functional servers from running
- Caller can handle errors gracefully instead of continuing with broken state
3. Make admin server readiness fatal:
- Admin gRPC availability is critical for worker startup
- Use glog.Fatalf to terminate with clear error message
4. Improve IAM config error handling:
- Treat all file operation failures (stat, open, write, close) as fatal
- Prevents silent failures in S3 credential setup
- User gets immediate feedback instead of authentication issues later
5. Use glog.Fatalf for critical worker setup errors:
- Failed to create worker directory, task directories, or worker instance
- Failed to create admin client or start worker
- Ensures mini server doesn't run in broken state
This ensures deterministic startup: services succeed completely or fail with
clear, actionable error messages for the user.
* Make health checks non-fatal for graceful degradation and improve IAM file handling
Address PR feedback to make the mini command more resilient for development:
1. Make health check failures non-fatal
- Master, Volume, Filer, S3, WebDAV health checks now log warnings but allow startup
- Services may still work even if health check endpoints aren't immediately available
- Aligns with intent of a dev-focused tool that should be forgiving of timing issues
- Only prevents startup if startup coordination or critical errors occur
2. Improve IAM config file handling
- Refactored to guarantee file is always closed using separate error variables
- Opens file once and handles write/close errors independently
- Maintains strict error handling while improving code clarity
- All file operation failures still cause fatal errors (as intended)
This makes startup more graceful while maintaining critical error handling for
fundamental failures like missing directories or configuration errors.
* Fix code quality issues in weed mini command
- Fix pointer aliasing: use value copy (*miniBindIp = *miniIp) instead of pointer assignment
- Remove misleading error return from waitForServiceReady() function
- Simplify health check callers to call waitForServiceReady() directly without error handling
- Remove redundant S3 option assignments already set in init() block
- Remove unused allServicesReady parameter from startMiniWorker() function
* Refactor welcome message to use template strings and add startup delay
- Convert welcome message to constant template strings for cleaner code
- Separate credentials instructions into dedicated constant
- Add 500ms delay after worker startup to allow full initialization before welcome message
- Improves output cleanliness by avoiding log interleaving with welcome message
* Fix code style issues in weed mini command
- Fix indentation in IAM config block (lines 424-432) to align with surrounding code
- Remove unused adminServerDone channel that was created but never read
* Address code review feedback for robustness and resource management
- Use defer f.Close() for IAM file handling to ensure file is closed in all code paths, preventing potential file descriptor leaks
- Use 127.0.0.1 instead of *miniIp for service readiness checks to ensure checks always target localhost, improving reliability in environments with firewalls or complex network configurations
- Simplify error handling in waitForAdminServerReady by using single error return instead of separate write/close error variables
* Fix function declaration formatting
- Separate closing brace of startS3Service from startMiniAdminWithWorker declaration with blank line
- Move comment to proper position above function declaration
- Run gofmt for consistent formatting
* Fix IAM config pointer assignment when file already exists
- Add missing *miniIamConfig = iamPath assignment when IAM config file already exists
- Ensures S3 service is properly pointed to the existing IAM configuration
- Retains logging to inform user that existing configuration is being preserved
* Improve pointer assignments and worker synchronization
- Simplify grpcPort and dataDir pointer assignments by directly dereferencing and assigning values instead of taking address of local variables
- Replace time.Sleep(500ms) with proper TCP-based polling to wait for worker gRPC port readiness
- Add waitForWorkerReady function that polls worker's gRPC port with max 6-second timeout
- Add net package import for TCP connection checks
- Improves code idiomaticity and synchronization robustness
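A sketch of the TCP-based polling described above; the name waitForWorkerReady mirrors the commit message, while the parameters and timing details are assumptions:

```go
package readiness

import (
	"net"
	"strconv"
	"time"
)

// waitForWorkerReady dials the worker's gRPC port until a TCP connection
// succeeds or ~6 seconds elapse, replacing the earlier fixed 500ms sleep.
func waitForWorkerReady(host string, grpcPort int) bool {
	addr := net.JoinHostPort(host, strconv.Itoa(grpcPort))
	for attempt := 0; attempt < 30; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 200*time.Millisecond)
		if err == nil {
			conn.Close()
			return true
		}
		time.Sleep(200 * time.Millisecond)
	}
	return false
}
```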
* Refactor and simplify error handling for maintainability
- Remove unused error return from startMiniServices (always returned nil)
- Update runMini caller to not expect error from startMiniServices
- Refactor init() into component-specific helper functions:
* initMiniCommonFlags() for common options
* initMiniMasterFlags() for master server options
* initMiniFilerFlags() for filer server options
* initMiniVolumeFlags() for volume server options
* initMiniS3Flags() for S3 server options
* initMiniWebDAVFlags() for WebDAV server options
* initMiniAdminFlags() for admin server options
- Significantly improves code readability and maintainability
- Each component's flags are now in dedicated, focused functions
2 months ago

4aa50bfa6a
fix: EC rebalance fails with replica placement 000 (#7812)
* fix: EC rebalance fails with replica placement 000
This PR fixes several issues with EC shard distribution:
1. Pre-flight check before EC encoding
- Verify target disk type has capacity before encoding starts
- Prevents encoding shards only to fail during rebalance
- Shows helpful error when wrong diskType is specified (e.g., ssd when volumes are on hdd)
2. Fix EC rebalance with replica placement 000
- When DiffRackCount=0, shards should be distributed freely across racks
- The '000' placement means 'no volume replication needed' because EC provides redundancy
- Previously all racks were skipped with error 'shards X > replica placement limit (0)'
3. Add unit tests for EC rebalance slot calculation
- TestECRebalanceWithLimitedSlots: documents the limited slots scenario
- TestECRebalanceZeroFreeSlots: reproduces the 0 free slots error
4. Add Makefile for manual EC testing
- make setup: start cluster and populate data
- make shell: open weed shell for EC commands
- make clean: stop cluster and cleanup
* fix: default -rebalance to true for ec.encode
The -rebalance flag was defaulting to false, which meant ec.encode would only print shard moves but not actually execute them. This is a poor default since the whole point of EC encoding is to distribute shards across servers for fault tolerance. Now -rebalance defaults to true, so shards are actually distributed after encoding. Users can use -rebalance=false if they only want to see what would happen without making changes.
* test/erasure_coding: improve Makefile safety and docs
- Narrow pkill pattern for volume servers to use TEST_DIR instead of port pattern, avoiding accidental kills of unrelated SeaweedFS processes
- Document external dependencies (curl, jq) in header comments
* shell: refactor buildRackWithEcShards to reuse buildEcShards
Extract common shard bit construction logic to avoid duplication between buildEcShards and buildRackWithEcShards helper functions.
* shell: update test for EC replication 000 behavior
When DiffRackCount=0 (replication "000"), EC shards should be distributed freely across racks since erasure coding provides its own redundancy. Update test expectation to reflect this behavior.
* erasure_coding: add distribution package for proportional EC shard placement
Add a new reusable package for EC shard distribution that:
- Supports configurable EC ratios (not hard-coded 10+4)
- Distributes shards proportionally based on replication policy
- Provides fault tolerance analysis
- Prefers moving parity shards to keep data shards spread out
Key components:
- ECConfig: Configurable data/parity shard counts
- ReplicationConfig: Parsed XYZ replication policy
- ECDistribution: Target shard counts per DC/rack/node
- Rebalancer: Plans shard moves with parity-first strategy
This enables seaweed-enterprise custom EC ratios and weed worker integration while maintaining a clean, testable architecture.
* shell: integrate distribution package for EC rebalancing
Add shell wrappers around the distribution package:
- ProportionalECRebalancer: Plans moves using distribution.Rebalancer
- NewProportionalECRebalancerWithConfig: Supports custom EC configs
- GetDistributionSummary/GetFaultToleranceAnalysis: Helper functions
The shell layer converts between EcNode types and the generic TopologyNode types used by the distribution package.
* test setup
* ec: improve data and parity shard distribution across racks
- Add shardsByTypePerRack helper to track data vs parity shards
- Rewrite doBalanceEcShardsAcrossRacks for two-pass balancing:
  1. Balance data shards (0-9) evenly, max ceil(10/6)=2 per rack
  2. Balance parity shards (10-13) evenly, max ceil(4/6)=1 per rack
- Add balanceShardTypeAcrossRacks for generic shard type balancing
- Add pickRackForShardType to select destination with room for type
- Add unit tests for even data/parity distribution verification
This ensures even read load during normal operation by spreading both data and parity shards across all available racks.
* ec: make data/parity shard counts configurable in ecBalancer
- Add dataShardCount and parityShardCount fields to ecBalancer struct
- Add getDataShardCount() and getParityShardCount() methods with defaults
- Replace direct constant usage with configurable methods
- Fix unused variable warning for parityPerRack
This allows seaweed-enterprise to use custom EC ratios while defaulting to standard 10+4 scheme.
* Address PR 7812 review comments
Makefile improvements:
- Save PIDs for each volume server for precise termination
- Use PID-based killing in stop target with pkill fallback
- Use more specific pkill patterns with TEST_DIR paths
Documentation:
- Document jq dependency in README.md
Rebalancer fix:
- Fix duplicate shard count updates in applyMovesToAnalysis
- All planners (DC/rack/node) update counts inline during planning
- Remove duplicate updates from applyMovesToAnalysis to avoid double-counting
* test/erasure_coding: use mktemp for test file template
Use mktemp instead of hardcoded /tmp/testfile_template.bin path to provide better isolation for concurrent test runs.
2 months ago

7920ffa98c
Fix uncleanable size=0 orphans with volume.fsck -forcePurging (#7783)
This is a follow-up fix to PR #7332 which partially addressed the issue. The problem is that size=0 needles are in a gray area:
- IsValid() returns false for size=0 (because size must be > 0)
- IsDeleted() returns false for size=0 (because size must be < 0 or == TombstoneFileSize)
PR #7332 only fixed 2 places, but several other places still had the same bug:
1. needle_map_memory.go:doLoading - line 43 still used oldSize.IsValid()
2. needle_map_memory.go:DoOffsetLoading - used during vacuum and incremental loading
3. needle_map_leveldb.go:generateLevelDbFile - used when generating LevelDB needle maps
4. needle_map_leveldb.go:DoOffsetLoading - used during incremental loading for LevelDB
5. needle_map/compact_map.go:delete - couldn't delete size=0 entries because:
- The condition 'size > 0' failed for size=0
- Even if it passed, negating 0 gives 0 (not marking as deleted)
Changes:
- Changed size.IsValid() to !size.IsDeleted() in doLoading and DoOffsetLoading functions
- Fixed compact_map delete to use TombstoneFileSize for size=0 entries
Fixes #7293
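To make the gray area concrete, a simplified model of the checks described above; the Size type, the tombstone value, and the method bodies are modeled on this commit message, not copied from the SeaweedFS sources:

```go
// Simplified model of the size=0 gray area; values and method bodies are
// illustrative assumptions based on the commit message above.
package needlesketch

type Size int32

const TombstoneFileSize Size = -1 // placeholder tombstone marker

func (s Size) IsValid() bool   { return s > 0 }                           // false for size == 0
func (s Size) IsDeleted() bool { return s < 0 || s == TombstoneFileSize } // also false for size == 0

// The fix replaces checks like `if size.IsValid()` with `if !size.IsDeleted()`
// so that size=0 entries are still processed instead of slipping through both
// branches and becoming uncleanable orphans.
```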
2 months ago

32a9a1f46f
fix: sync EC volume files before copying to fix deleted needles not being marked when decoding (#7755)
* fix: sync EC volume files before copying to fix deleted needles not being marked when decoding (#7751)
When a file is deleted from an EC volume, the deletion is written to both the .ecx and .ecj files. However, these writes were not synced to disk before the files were copied during ec.decode. This caused the copied files to miss the deletion markers, resulting in 'leaked' space where deleted files were not properly tracked after decoding.
This fix:
1. Adds a Sync() method to EcVolume that flushes .ecx and .ecj files to disk without closing them
2. Calls Sync() in CopyFile before copying EC volume files to ensure all deletions are visible to the copy operation
Fixes #7751
* test: add integration tests for EC volume deletion sync (issue #7751)
Add comprehensive tests to verify that deleted needles are properly visible after EcVolume.Sync() is called. These tests cover:
1. TestWriteIdxFileFromEcIndex_PreservesDeletedNeedles - Verifies that WriteIdxFileFromEcIndex preserves deletion markers from .ecx files when generating .idx files
2. TestWriteIdxFileFromEcIndex_ProcessesEcjJournal - Verifies that deletions from .ecj journal file are correctly appended to the generated .idx file
3. TestEcxFileDeletionVisibleAfterSync - Verifies that MarkNeedleDeleted changes are visible after Sync()
4. TestEcxFileDeletionWithSeparateHandles - Tests that synced changes are visible across separate file handles
5. TestEcVolumeSyncEnsuresDeletionsVisible - Integration test for the full EcVolume.DeleteNeedleFromEcx + Sync() workflow that validates the fix for issue #7751
* refactor: log sync errors in EcVolume.Sync() instead of ignoring them
Per code review feedback: sync errors could reintroduce the bug this PR fixes, so logging warnings helps with debugging.
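A hedged sketch of the Sync() idea: flush the two file handles to disk without closing them so a copy started right afterwards sees every deletion marker. The struct fields and logging are illustrative assumptions, not the repository code:

```go
package ecsketch

import (
	"log"
	"os"
)

// EcVolume is a stand-in holding only the two handles relevant to the fix;
// the real struct has many more fields.
type EcVolume struct {
	ecxFile *os.File // .ecx index file (holds deletion markers)
	ecjFile *os.File // .ecj deletion journal
}

// Sync flushes both files to disk without closing them. Errors are logged
// rather than ignored, per the review note in the commit message.
func (ev *EcVolume) Sync() {
	if ev.ecxFile != nil {
		if err := ev.ecxFile.Sync(); err != nil {
			log.Printf("sync .ecx file: %v", err)
		}
	}
	if ev.ecjFile != nil {
		if err := ev.ecjFile.Sync(); err != nil {
			log.Printf("sync .ecj file: %v", err)
		}
	}
}
```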
2 months ago

7ed7578424
fix(ec.decode): purge EC shards when volume is empty (#7749)
* fix(ec.decode): purge EC shards when volume is empty
When an EC volume has no live entries (all deleted), ec.decode should not generate an empty normal volume. Instead, treat decode as a no-op and allow shard purge to proceed cleanly.
Fixes: #7748
* chore: address PR review comments
* test: cover live EC index + avoid magic string
* chore: harden empty-EC handling
- Make shard cleanup best-effort (collect errors)
- Remove unreachable EOF handling in HasLiveNeedles
- Add empty ecx test case
- Share no-live-entries substring between server/client
* perf: parallelize EC shard unmount/delete across locations
* refactor: combine unmount+delete into single goroutine per location
* refactor: use errors.Join for multi-error aggregation
* refactor: use existing ErrorWaitGroup for parallel execution
* fix: capture loop variables + clarify SuperBlockSize safety
2 months ago

0ede657a34
fix: return error on size mismatch in ReadNeedleMeta for consistency (#7687)
* fix: return error on size mismatch in ReadNeedleMeta for consistency
When ReadNeedleMeta encounters a size mismatch at offset >= MaxPossibleVolumeSize, it previously just continued without returning an error, potentially using wrong data.
This fix makes ReadNeedleMeta consistent with ReadBytes (needle_read.go), which properly returns an error in both cases:
- ErrorSizeMismatch when offset < MaxPossibleVolumeSize (to trigger retry at offset+32GB)
- A descriptive error when offset >= MaxPossibleVolumeSize (after retry failed)
Fixes #7673
* refactor: use more accurate error message for size mismatch
2 months ago

5c27522507
fix: prevent empty .vif files from ec.decode causing parse errors (#7686)
* fix: prevent empty .vif files from ec.decode causing parse errors
When ec.decode copies .vif files from EC shard nodes, if a source node doesn't have the .vif file, an empty .vif file was created on the target node. This caused volume.configure.replication to fail with 'proto: syntax error' when trying to parse the empty file.
This fix:
1. In writeToFile: Remove empty files when no data was written (source file was not found) to avoid leaving corrupted empty files
2. In MaybeLoadVolumeInfo: Handle empty .vif files gracefully by treating them as non-existent, allowing the system to create a proper one
Fixes #7666
* refactor: remove redundant dst.Close() and add error logging
Address review feedback:
- Remove redundant dst.Close() call since defer already handles it
- Add error logging for os.Remove() failure
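A sketch of the cleanup described in point 1; the function shape, chunked input, and logging are assumptions made for illustration:

```go
package vifsketch

import (
	"log"
	"os"
)

// writeToFile copies received chunks into dstPath; if nothing was received
// (the source node had no .vif file), the empty destination is removed so a
// later parse does not fail on a zero-byte protobuf file.
func writeToFile(chunks [][]byte, dstPath string) error {
	dst, err := os.Create(dstPath)
	if err != nil {
		return err
	}
	defer dst.Close()

	written := 0
	for _, chunk := range chunks {
		n, err := dst.Write(chunk)
		if err != nil {
			return err
		}
		written += n
	}
	if written == 0 {
		if err := os.Remove(dstPath); err != nil {
			log.Printf("remove empty file %s: %v", dstPath, err)
		}
	}
	return nil
}
```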
2 months ago

d5f21fd8ba
fix: add missing backslash for volume extraArgs in helm chart (#7676)
Fixes #7467
The -mserver argument line in volume-statefulset.yaml was missing a trailing backslash, which prevented extraArgs from being passed to the weed volume process.
Also:
- Extracted master server list generation logic into shared helper templates in _helpers.tpl for better maintainability
- Updated all occurrences of deprecated -mserver flag to -master across docker-compose files, test files, and documentation
2 months ago

cea12ba3c4
fix: prevent makeslice panic in ReadNeedleMeta with corrupted needle (#7675)
* fix: prevent makeslice panic in ReadNeedleMeta with corrupted needle
When a needle's DataSize in the .dat file is corrupted to a very large value, the calculation of metaSize can become negative, causing a panic with 'makeslice: len out of range' when creating the metadata slice. This fix adds validation to check if metaSize is negative before creating the slice, returning a descriptive error instead of panicking.
Fixes #7475
* Update weed/storage/needle/needle_read_page.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
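A minimal sketch of the guard described above: validate the computed size before allocating, so corruption yields an error rather than a runtime panic. The size arithmetic and names are simplified assumptions:

```go
package needlesketch2

import "fmt"

// readMetaSlice checks that the computed metadata length is non-negative
// before calling make, avoiding a "makeslice: len out of range" panic when
// DataSize is corrupted.
func readMetaSlice(totalNeedleSize int64, dataSize uint32, fixedOverhead int64) ([]byte, error) {
	metaSize := totalNeedleSize - int64(dataSize) - fixedOverhead
	if metaSize < 0 {
		return nil, fmt.Errorf("corrupted needle: computed meta size %d (data size %d)", metaSize, dataSize)
	}
	return make([]byte, metaSize), nil
}
```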
2 months ago

28ac536280
fix: normalize Windows backslash paths in weed admin file uploads (#7636)
fix: normalize Windows backslash paths in file uploads
When uploading files from a Windows client to a Linux server, file paths containing backslashes were not being properly interpreted as directory separators. This caused files intended for subdirectories to be created in the root directory with backslashes in their filenames.
Changes:
- Add util.CleanWindowsPath and util.CleanWindowsPathBase helper functions in weed/util/fullpath.go for reusable path normalization
- Use path.Join/path.Clean/path.Base instead of filepath equivalents for URL path semantics (filepath is OS-specific)
- Apply normalization in weed admin handlers and filer upload parsing
Fixes #7628
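A plausible shape for such helpers, assuming the behavior described above (the exact bodies in weed/util/fullpath.go may differ):

```go
package pathsketch

import (
	"path"
	"strings"
)

// CleanWindowsPath turns a client-sent `dir\sub\file.txt` into
// `dir/sub/file.txt` and cleans it with the path package (URL semantics)
// rather than filepath (OS-specific semantics).
func CleanWindowsPath(p string) string {
	return path.Clean(strings.ReplaceAll(p, `\`, "/"))
}

// CleanWindowsPathBase returns only the final element, e.g. "file.txt".
func CleanWindowsPathBase(p string) string {
	return path.Base(CleanWindowsPath(p))
}
```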
2 months ago

e9da64f62a
fix: volume server healthz now checks local conditions only (#7610)
This fixes issue #6823 where a single volume server shutdown would cause other healthy volume servers to fail their health checks and get restarted by Kubernetes, causing a cascading failure.
Previously, the healthz handler checked if all replicated volumes could reach their remote replicas via GetWritableRemoteReplications(). When a volume server went down, the master would remove it from the volume location list. Other volume servers would then fail their healthz checks because they couldn't find all required replicas, causing Kubernetes to restart them.
The healthz endpoint now only checks local conditions:
1. Is the server shutting down?
2. Is the server heartbeating with the master?
This follows the principle that a health check should only verify the health of THIS server, not the overall cluster state.
Fixes #6823
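A hedged sketch of a local-only healthz handler; the struct fields and handler name are assumptions chosen to mirror the two conditions listed above:

```go
package healthsketch

import "net/http"

// volumeServer is a stand-in holding only the local state the check needs.
type volumeServer struct {
	isStopping     bool
	isHeartbeating bool
}

// healthzHandler reports readiness from this server's own state only; it
// never consults remote replicas, so one peer going down cannot cascade into
// restarts of otherwise healthy servers.
func (vs *volumeServer) healthzHandler(w http.ResponseWriter, r *http.Request) {
	if vs.isStopping || !vs.isHeartbeating {
		w.WriteHeader(http.StatusServiceUnavailable)
		return
	}
	w.WriteHeader(http.StatusOK)
}
```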
2 months ago

5ed0b00fb9
Support separate volume server ID independent of RPC bind address (#7609)
* pb: add id field to Heartbeat message for stable volume server identification
This adds an 'id' field to the Heartbeat protobuf message that allows volume servers to identify themselves independently of their IP:port address.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* storage: add Id field to Store struct
Add Id field to Store struct and include it in CollectHeartbeat(). The Id field provides a stable volume server identity independent of IP:port.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* topology: support id-based DataNode identification
Update GetOrCreateDataNode to accept an id parameter for stable node identification. When id is provided, the DataNode can maintain its identity even when its IP address changes (e.g., in Kubernetes pod reschedules).
For backward compatibility:
- If id is provided, use it as the node ID
- If id is empty, fall back to ip:port
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* volume: add -id flag for stable volume server identity
Add -id command line flag to volume server that allows specifying a stable identifier independent of the IP address. This is useful for Kubernetes deployments with hostPath volumes where pods can be rescheduled to different nodes while the persisted data remains on the original node.
Usage: weed volume -id=node-1 -ip=10.0.0.1 ...
If -id is not specified, it defaults to ip:port for backward compatibility.
Fixes https://github.com/seaweedfs/seaweedfs/issues/7487
* server: add -volume.id flag to weed server command
Support the -volume.id flag in the all-in-one 'weed server' command, consistent with the standalone 'weed volume' command.
Usage: weed server -volume.id=node-1 ...
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* topology: add test for id-based DataNode identification
Test the key scenarios:
1. Create DataNode with explicit id
2. Same id with different IP returns same DataNode (K8s reschedule)
3. IP/PublicUrl are updated when node reconnects with new address
4. Different id creates new DataNode
5. Empty id falls back to ip:port (backward compatibility)
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* pb: add address field to DataNodeInfo for proper node addressing
Previously, DataNodeInfo.Id was used as the node address, which worked when Id was always ip:port. Now that Id can be an explicit string, we need a separate Address field for connection purposes.
Changes:
- Add 'address' field to DataNodeInfo protobuf message
- Update ToDataNodeInfo() to populate the address field
- Update NewServerAddressFromDataNode() to use Address (with Id fallback)
- Fix LookupEcVolume to use dn.Url() instead of dn.Id()
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: trim whitespace from volume server id and fix test
- Trim whitespace from -id flag to treat ' ' as empty
- Fix store_load_balancing_test.go to include id parameter in NewStore call
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* refactor: extract GetVolumeServerId to util package
Move the volume server ID determination logic to a shared utility function to avoid code duplication between volume.go and rack.go.
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: improve transition logic for legacy nodes
- Use exact ip:port match instead of net.SplitHostPort heuristic
- Update GrpcPort and PublicUrl during transition for consistency
- Remove unused net import
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
* fix: add id normalization and address change logging
- Normalize id parameter at function boundary (trim whitespace)
- Log when DataNode IP:Port changes (helps debug K8s pod rescheduling)
Ref: https://github.com/seaweedfs/seaweedfs/issues/7487
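The id-or-fallback rule described above can be summarized in a short sketch; the helper name mirrors GetVolumeServerId from the commit, while the parameters and body are assumptions:

```go
package idsketch

import (
	"net"
	"strconv"
	"strings"
)

// GetVolumeServerId returns the explicit -id value when one is given,
// otherwise it falls back to ip:port, preserving behavior for clusters
// that never set -id.
func GetVolumeServerId(id, ip string, port int) string {
	if trimmed := strings.TrimSpace(id); trimmed != "" {
		return trimmed // stable identity that survives pod reschedules
	}
	return net.JoinHostPort(ip, strconv.Itoa(port))
}
```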
2 months ago

51841a2e04
fix: skip cookie validation for EC volume deletion when SkipCookieCheck is set (#7608)
fix: EC volume deletion issues
Fixes #7489
1. Skip cookie check for EC volume deletion when SkipCookieCheck is set
When batch deleting files from EC volumes with SkipCookieCheck=true
(e.g., orphan file cleanup), the cookie is not available. The deletion
was failing with 'unexpected cookie 0' because DeleteEcShardNeedle
always validated the cookie.
2. Optimize doDeleteNeedleFromAtLeastOneRemoteEcShards to return early
Return immediately when a deletion succeeds, instead of continuing
to try all parity shards unnecessarily.
3. Remove useless log message that always logged nil error
The log at V(1) was logging err after checking it was nil.
Regression introduced in commit
2 months ago

4f038820dc
Add disk-aware EC rebalancing (#7597)
* Add placement package for EC shard placement logic
- Consolidate EC shard placement algorithm for reuse across shell and worker tasks
- Support multi-pass selection: racks, then servers, then disks
- Include proper spread verification and scoring functions
- Comprehensive test coverage for various cluster topologies
* Make ec.balance disk-aware for multi-disk servers
- Add EcDisk struct to track individual disks on volume servers
- Update EcNode to maintain per-disk shard distribution
- Parse disk_id from EC shard information during topology collection
- Implement pickBestDiskOnNode() for selecting best disk per shard
- Add diskDistributionScore() for tie-breaking node selection
- Update all move operations to specify target disk in RPC calls
- Improves shard balance within multi-disk servers, not just across servers
* Use placement package in EC detection for consistent disk-level placement
- Replace custom EC disk selection logic with shared placement package
- Convert topology DiskInfo to placement.DiskCandidate format
- Use SelectDestinations() for multi-rack/server/disk spreading
- Convert placement results back to topology DiskInfo for task creation
- Ensures EC detection uses same placement logic as shell commands
* Make volume server evacuation disk-aware
- Use pickBestDiskOnNode() when selecting evacuation target disk
- Specify target disk in evacuation RPC requests
- Maintains balanced disk distribution during server evacuations
* Rename PlacementConfig to PlacementRequest for clarity
PlacementRequest better reflects that this is a request for placement rather than a configuration object. This improves API semantics.
* Rename DefaultConfig to DefaultPlacementRequest
Aligns with the PlacementRequest type naming for consistency
* Address review comments from Gemini and CodeRabbit
Fix HIGH issues:
- Fix empty disk discovery: Now discovers all disks from VolumeInfos, not just from EC shards. This ensures disks without EC shards are still considered for placement.
- Fix EC shard count calculation in detection.go: Now correctly filters by DiskId and sums actual shard counts using ShardBits.ShardIdCount() instead of just counting EcShardInfo entries.
Fix MEDIUM issues:
- Add disk ID to evacuation log messages for consistency with other logging
- Remove unused serverToDisks variable in placement.go
- Fix comment that incorrectly said 'ascending' when sorting is 'descending'
* add ec tests
* Update ec-integration-tests.yml
* Update ec_integration_test.go
* Fix EC integration tests CI: build weed binary and update actions
- Add 'Build weed binary' step before running tests
- Update actions/setup-go from v4 to v6 (Node20 compatibility)
- Update actions/checkout from v2 to v4 (Node20 compatibility)
- Move working-directory to test step only
* Add disk-aware EC rebalancing integration tests
- Add TestDiskAwareECRebalancing test with multi-disk cluster setup
- Test EC encode with disk awareness (shows disk ID in output)
- Test EC balance with disk-level shard distribution
- Add helper functions for disk-level verification:
  - startMultiDiskCluster: 3 servers x 4 disks each
  - countShardsPerDisk: track shards per disk per server
  - calculateDiskShardVariance: measure distribution balance
- Verify no single disk is overloaded with shards
2 months ago

65f8986fe2
Volume Server: avoid aggressive volume assignment (#7501)
* avoid aggressive volume assignment
* also test ec shards
* separate DiskLocation instances for each subtest
* edge cases
* No volumes plus low disk space
* Multiple EC volumes
* simplify
3 months ago

bf8e4f40e6
S3: Perf related (#7463)
* reduce checks
* s3 object lookup optimization
* Only check versioning configuration if client requests
* Consolidate SSE Entry Lookups
* optimize
* revert optimization for versioned objects
* Removed: getObjectEntryForSSE() function
* refactor
* Refactoring: Added fetchObjectEntryRequired
* avoid refetching
* return early if not found
* reuse objects from conditional check
* clear cache when creating bucket
3 months ago

d19eca71eb
[master] vaccum fix warn (#7312)
4 months ago

b7ba6785a2
go fmt
4 months ago

208d7f24f4
Erasure Coding: Ec refactoring (#7396)
* refactor: add ECContext structure to encapsulate EC parameters
- Create ec_context.go with ECContext struct
- NewDefaultECContext() creates context with default 10+4 configuration
- Helper methods: CreateEncoder(), ToExt(), String()
- Foundation for cleaner function signatures
- No behavior change, still uses hardcoded 10+4
* refactor: update ec_encoder.go to use ECContext
- Add WriteEcFilesWithContext() and RebuildEcFilesWithContext() functions
- Keep old functions for backward compatibility (call new versions)
- Update all internal functions to accept ECContext parameter
- Use ctx.DataShards, ctx.ParityShards, ctx.TotalShards consistently
- Use ctx.CreateEncoder() instead of hardcoded reedsolomon.New()
- Use ctx.ToExt() for shard file extensions
- No behavior change, still uses default 10+4 configuration
* refactor: update ec_volume.go to use ECContext
- Add ECContext field to EcVolume struct
- Initialize ECContext with default configuration in NewEcVolume()
- Update LocateEcShardNeedleInterval() to use ECContext.DataShards
- Phase 1: Always uses default 10+4 configuration
- No behavior change
* refactor: add EC shard count fields to VolumeInfo protobuf
- Add data_shards_count field (field 8) to VolumeInfo message
- Add parity_shards_count field (field 9) to VolumeInfo message
- Fields are optional, 0 means use default (10+4)
- Backward compatible: fields added at end
- Phase 1: Foundation for future customization
* refactor: regenerate protobuf Go files with EC shard count fields
- Regenerated volume_server_pb/*.go with new EC fields
- DataShardsCount and ParityShardsCount accessors added to VolumeInfo
- No behavior change, fields not yet used
* refactor: update VolumeEcShardsGenerate to use ECContext
- Create ECContext with default configuration in VolumeEcShardsGenerate
- Use ecCtx.TotalShards and ecCtx.ToExt() in cleanup
- Call WriteEcFilesWithContext() instead of WriteEcFiles()
- Save EC configuration (DataShardsCount, ParityShardsCount) to VolumeInfo
- Log EC context being used
- Phase 1: Always uses default 10+4 configuration
- No behavior change
* fmt
* refactor: update ec_test.go to use ECContext
- Update TestEncodingDecoding to create and use ECContext
- Update validateFiles() to accept ECContext parameter
- Update removeGeneratedFiles() to use ctx.TotalShards and ctx.ToExt()
- Test passes with default 10+4 configuration
* refactor: use EcShardConfig message instead of separate fields
* optimize: pre-calculate row sizes in EC encoding loop
* refactor: replace TotalShards field with Total() method
- Remove TotalShards field from ECContext to avoid field drift
- Add Total() method that computes DataShards + ParityShards
- Update all references to use ctx.Total() instead of ctx.TotalShards
- Read EC config from VolumeInfo when loading EC volumes
- Read data shard count from .vif in VolumeEcShardsToVolume
- Use >= instead of > for exact boundary handling in encoding loops
* optimize: simplify VolumeEcShardsToVolume to use existing EC context
- Remove redundant CollectEcShards call
- Remove redundant .vif file loading
- Use v.ECContext.DataShards directly (already loaded by NewEcVolume)
- Slice tempShards instead of collecting again
* refactor: rename MaxShardId to MaxShardCount for clarity
- Change from MaxShardId=31 to MaxShardCount=32
- Eliminates confusing +1 arithmetic (MaxShardId+1)
- More intuitive: MaxShardCount directly represents the limit
fix: support custom EC ratios beyond 14 shards in VolumeEcShardsToVolume
- Add MaxShardId constant (31, since ShardBits is uint32)
- Use MaxShardId+1 (32) instead of TotalShardsCount (14) for tempShards buffer
- Prevents panic when slicing for volumes with >14 total shards
- Critical fix for custom EC configurations like 20+10
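For reference, a minimal sketch of how these constants relate; the values come from the commit messages above, while the package layout is purely illustrative:

```go
package ecconst

// Default EC scheme and the hard ceiling imposed by the uint32 shard bitmap.
const (
	DataShardsCount   = 10
	ParityShardsCount = 4
	TotalShardsCount  = DataShardsCount + ParityShardsCount // 14
	MaxShardCount     = 32 // a uint32 ShardBits can mark shards 0..31
)

// ShardBits marks which shards are present: bit i set means shard i exists.
type ShardBits uint32
```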
* fix: add validation for EC shard counts from VolumeInfo
- Validate DataShards/ParityShards are positive and within MaxShardCount
- Prevent zero or invalid values that could cause divide-by-zero
- Fallback to defaults if validation fails, with warning log
- VolumeEcShardsGenerate now preserves existing EC config when regenerating
- Critical safety fix for corrupted or legacy .vif files
* fix: RebuildEcFiles now loads EC config from .vif file
- Critical: RebuildEcFiles was always using default 10+4 config
- Now loads actual EC config from .vif file when rebuilding shards
- Validates config before use (positive shards, within MaxShardCount)
- Falls back to default if .vif missing or invalid
- Prevents data corruption when rebuilding custom EC volumes
* add: defensive validation for dataShards in VolumeEcShardsToVolume
- Validate dataShards > 0 and <= MaxShardCount before use
- Prevents panic from corrupted or uninitialized ECContext
- Returns clear error message instead of panic
- Defense-in-depth: validates even though upstream should catch issues
* fix: replace TotalShardsCount with MaxShardCount for custom EC ratio support
Critical fixes to support custom EC ratios > 14 shards:
disk_location_ec.go:
- validateEcVolume: Check shards 0-31 instead of 0-13 during validation
- removeEcVolumeFiles: Remove shards 0-31 instead of 0-13 during cleanup
ec_volume_info.go ShardBits methods:
- ShardIds(): Iterate up to MaxShardCount (32) instead of TotalShardsCount (14)
- ToUint32Slice(): Iterate up to MaxShardCount (32)
- IndexToShardId(): Iterate up to MaxShardCount (32)
- MinusParityShards(): Remove shards 10-31 instead of 10-13 (added note about Phase 2)
- Minus() shard size copy: Iterate up to MaxShardCount (32)
- resizeShardSizes(): Iterate up to MaxShardCount (32)
Without these changes:
- Custom EC ratios > 14 total shards would fail validation on startup
- Shards 14-31 would never be discovered or cleaned up
- ShardBits operations would miss shards >= 14
These changes are backward compatible - MaxShardCount (32) includes
the default TotalShardsCount (14), so existing 10+4 volumes work as before.
* fix: replace TotalShardsCount with MaxShardCount in critical data structures
Critical fixes for buffer allocations and loops that must support
custom EC ratios up to 32 shards:
Data Structures:
- store_ec.go:354: Buffer allocation for shard recovery (bufs array)
- topology_ec.go:14: EcShardLocations.Locations fixed array size
- command_ec_rebuild.go:268: EC shard map allocation
- command_ec_common.go:626: Shard-to-locations map allocation
Shard Discovery Loops:
- ec_task.go:378: Loop to find generated shard files
- ec_shard_management.go: All 8 loops that check/count EC shards
These changes are critical because:
1. Buffer allocations sized to 14 would cause index-out-of-bounds panics
when accessing shards 14-31
2. Fixed arrays sized to 14 would truncate shard location data
3. Loops limited to 0-13 would never discover/manage shards 14-31
Note: command_ec_encode.go:208 intentionally NOT changed - it creates
shard IDs to mount after encoding. In Phase 1 we always generate 14
shards, so this remains TotalShardsCount and will be made dynamic in
Phase 2 based on actual EC context.
Without these fixes, custom EC ratios > 14 total shards would cause:
- Runtime panics (array index out of bounds)
- Data loss (shards 14-31 never discovered/tracked)
- Incomplete shard management (missing shards not detected)
* refactor: move MaxShardCount constant to ec_encoder.go
Moved MaxShardCount from ec_volume_info.go to ec_encoder.go to group it
with other shard count constants (DataShardsCount, ParityShardsCount,
TotalShardsCount). This improves code organization and makes it easier
to understand the relationship between these constants.
Location: ec_encoder.go line 22, between TotalShardsCount and MinTotalDisks
* improve: add defensive programming and better error messages for EC
Code review improvements from CodeRabbit:
1. ShardBits Guardrails (ec_volume_info.go):
- AddShardId, RemoveShardId: Reject shard IDs >= MaxShardCount
- HasShardId: Return false for out-of-range shard IDs
- Prevents silent no-ops from bit shifts with invalid IDs
2. Future-Proof Regex (disk_location_ec.go):
- Updated regex from \.ec[0-9][0-9] to \.ec\d{2,3}
- Now matches .ec00 through .ec999 (currently .ec00-.ec31 used)
- Supports future increases to MaxShardCount beyond 99
3. Better Error Messages (volume_grpc_erasure_coding.go):
- Include valid range (1..32) in dataShards validation error
- Helps operators quickly identify the problem
4. Validation Before Save (volume_grpc_erasure_coding.go):
- Validate ECContext (DataShards > 0, ParityShards > 0, Total <= MaxShardCount)
- Log EC config being saved to .vif for debugging
- Prevents writing invalid configs to disk
These changes improve robustness and debuggability without changing
core functionality.
* fmt
* fix: critical bugs from code review + clean up comments
Critical bug fixes:
1. command_ec_rebuild.go: Fixed indentation causing compilation error
- Properly nested if/for blocks in registerEcNode
2. ec_shard_management.go: Fixed isComplete logic incorrectly using MaxShardCount
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Default 10+4 volumes were being incorrectly reported as incomplete
- Missing shards 14-31 were being incorrectly reported as missing
- Fixed in 4 locations: volume completeness checks and getMissingShards
3. ec_volume_info.go: Fixed MinusParityShards removing too many shards
- Changed from MaxShardCount (32) back to TotalShardsCount (14)
- Was incorrectly removing shard IDs 10-31 instead of just 10-13
Comment cleanup:
- Removed Phase 1/Phase 2 references (development plan context)
- Replaced with clear statements about default 10+4 configuration
- SeaweedFS repo uses fixed 10+4 EC ratio, no phases needed
Root cause: Over-aggressive replacement of TotalShardsCount with MaxShardCount.
MaxShardCount (32) is the limit for buffer allocations and shard ID loops,
but TotalShardsCount (14) must be used for default EC configuration logic.
* fix: add defensive bounds checks and compute actual shard counts
Critical fixes from code review:
1. topology_ec.go: Add defensive bounds checks to AddShard/DeleteShard
- Prevent panic when shardId >= MaxShardCount (32)
- Return false instead of crashing on out-of-range shard IDs
2. command_ec_common.go: Fix doBalanceEcShardsAcrossRacks
- Was using hardcoded TotalShardsCount (14) for all volumes
- Now computes actual totalShardsForVolume from rackToShardCount
- Fixes incorrect rebalancing for volumes with custom EC ratios
- Example: 5+2=7 shards would incorrectly use 14 as average
These fixes improve robustness and prepare for future custom EC ratios
without changing current behavior for default 10+4 volumes.
Note: MinusParityShards and ec_task.go intentionally NOT changed for
seaweedfs repo - these will be enhanced in seaweed-enterprise repo
where custom EC ratio configuration is added.
* fmt
* style: make MaxShardCount type casting explicit in loops
Improved code clarity by explicitly casting MaxShardCount to the
appropriate type when used in loop comparisons:
- ShardId comparisons: Cast to ShardId(MaxShardCount)
- uint32 comparisons: Cast to uint32(MaxShardCount)
Changed in 5 locations:
- Minus() loop (line 90)
- ShardIds() loop (line 143)
- ToUint32Slice() loop (line 152)
- IndexToShardId() loop (line 219)
- resizeShardSizes() loop (line 248)
This makes the intent explicit and improves type safety readability.
No functional changes - purely a style improvement.
4 months ago

2ad2ffcdff
fix comment
4 months ago