Tree: 453310b057
add-ec-vacuum
add-filer-iam-grpc
add-iam-grpc-management
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
admin/csrf-s3tables
allow-no-role-arn
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
codex-rust-volume-server-bootstrap
codex/ec-repair-worker
codex/list-object-versions-newest-first
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
expand-the-s3-PutObject-permission-to-the-multipart-permissions
fasthttp
feature-8113-storage-class-disk-routing
feature/mini-port-detection
feature/modernize-s3-tests
feature/s3-multi-cert-support
feature/s3tables-improvements-and-spark-tests
feature/sra-uds-handler
filer1_maintenance_branch
fix-8303-s3-lifecycle-ttl-assign
fix-GetObjectLockConfigurationHandler
fix-bucket-name-case-7910
fix-helm-fromtoml-compatibility
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-pr-7909
fix-s3-configure-consistency
fix-s3-object-tagging-issue-7589
fix-sts-session-token-7941
fix-versioning-listing-only
fix/iceberg-stage-create-semantics
fix/object-lock-delete-enforcement
fix/sts-body-preservation
fix/windows-test-file-cleanup
ftp
gh-pages
has-weed-sql-command
iam-multi-file-migration
iam-permissions-and-api
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
plugin-system-phase1
plugin-ui-enhancements-restored
pr-7412
pr/7984
pr/8140
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
s3tables-by-claude
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
4.05
4.06
4.07
4.08
4.09
4.12
4.13
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
1001 Commits (453310b057db9ddf7614af25786401ec99b93b54)

a3cb7fa8cc
go fmt
2 days ago

b565a0cc86
Adds volume.merge command with deduplication and disk-based backend (#8441)
* Enhance volume.merge command with deduplication and disk-based backend
* Fix copyVolume function call with correct argument order and missing bool parameter
* Revert "Fix copyVolume function call with correct argument order and missing bool parameter"
This reverts commit
2 days ago

da4edb5fe6
Fix live volume move tail timestamp (#8440)
* Improve move tail timestamp * Add move tail timestamp integration test * Simulate traffic during move |
2 days ago

cd6832249b
Fix volume.fsck crashing on EC volumes and add multi-volume vacuum support (#8406)
* helm: refine openshift-values.yaml to remove hardcoded UIDs Remove hardcoded runAsUser, runAsGroup, and fsGroup from the openshift-values.yaml example. This allows OpenShift's admission controller to automatically assign a valid UID from the namespace's allocated range, avoiding "forbidden" errors when UID 1000 is outside the permissible range. Updates #8381, #8390. * helm: fix volume.logs and add consistent security context comments * Update README.md * fix volume.fsck crashing on EC volumes and add multi-volume vacuum support * address comments |
4 days ago

01b3125815
[shell]: volume balance capacity by min volume density (#8026)
volume balance by min volume density and active volumes |
1 week ago

f44e25b422
fix(iam): ensure access key status is persisted and defaulted to Active (#8341)
* Fix master leader election startup issue Fixes #error-log-leader-not-selected-yet * not useful test * fix(iam): ensure access key status is persisted and defaulted to Active * make pb * update tests * using constants |
2 weeks ago

fbe7dd32c2
Implement full scrubbing for regular volumes (#8254)
Implement full scrubbing for regular volumes. |
2 weeks ago

221bd237c4
Fix file stat collection metric bug for the `cluster.status` command. (#8302)
When the `--files` flag is present, `cluster.status` will scrape file metrics from volume servers to provide detailed stats on those. The progress indicator was not being updated properly though, so the command would complete before it read 100%. |
2 weeks ago

a3136c523f
Fix volume.fsck 401 Unauthorized by adding JWT to HTTP delete requests (#8306)
* Fix volume.fsck 401 Unauthorized by adding JWT to HTTP delete requests * Additionally, for performance, consider fetching the jwt.filer_signing.key once before any loops that call httpDelete, rather than inside httpDelete itself, to avoid repeated configuration lookups. |
2 weeks ago

e657e7d827
Implement local scrubbing for EC volumes. (#8283)
2 weeks ago

2a73219397
Add weed shell command `volumeServer.state` to query/update volume server state settings. (#8271)
Add weed shell command `volumeServer.state` to query/update volume server states. |
2 weeks ago

f400fb44a0
Update `cluster.status` to resolve file details on EC volumes. (#8268)
Also parallelizes queries for file metrics collections when the `--files` flag is specified, and improves the command's output for readability: ``` > cluster.status --files collecting file stats: 100% cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC, 10 disks on 1 rack volumes: total: 3 volumes, 1 collection max size: 32 GB regular: 1/80 volume on 3 replicas, 3 writable (100%), 0 read-only (0%) EC: 2 EC volumes on 28 shards (14 shards/volume) storage: total: 269 MB (522 MB raw, 193.95%) regular volumes: 91 MB (272 MB raw, 300%) EC volumes: 178 MB (250 MB raw, 140%) files: total: 363 files, 300 readable (82.64%), 63 deleted (17.35%), avg 522 kB per file regular: 168 files, 105 readable (62.5%), 63 deleted (37.5%), avg 540 kB per file EC: 195 files, 195 readable (100%), 0 deleted (0%), avg 506 kB per file ``` |
2 weeks ago

30812b85f3
fix ec.encode skipping volumes when one replica is on a full disk (#8227)
* fix ec.encode skipping volumes when one replica is on a full disk This fixes issue #8218. Previously, ec.encode would skip a volume if ANY of its replicas resided on a disk with low free volume count. Now it accepts the volume if AT LEAST ONE replica is on a healthy disk. * refine noFreeDisk counter logic in ec.encode Ensure noFreeDisk is decremented if a volume initially marked as bad is later found to have a healthy replica. This ensures accurate summary statistics. * defer noFreeDisk counting and refine logging in ec.encode Updated logging to be replica-scoped and deferred noFreeDisk counting to the final pass over vidMap. This ensures that the counter only reflects volumes that are definitively excluded because all replicas are on full disks. * filter replicas by free space during ec.encode Updated doEcEncode to filter out replicas on disks with FreeVolumeCount < 2 before selecting the best replica for encoding. This ensures that EC shards are not generated on healthy source replicas that happen to be on disks with low free space. |
3 weeks ago
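For illustration, a minimal Go sketch of the selection rule described in the commit above, using made-up types rather than the actual shell/topology code: a volume stays eligible for ec.encode as long as at least one replica sits on a disk with enough free volume slots, and such a replica becomes the encoding source.

```go
package main

import "fmt"

// Illustrative replica summary; the real command works on master topology data.
type replica struct {
	server          string
	freeVolumeCount int // free volume slots on the replica's disk
}

// pickEncodeSource returns a replica on a disk with enough free slots, if any.
// The volume is skipped only when every replica is on a full disk.
func pickEncodeSource(replicas []replica, minFreeSlots int) (replica, bool) {
	for _, r := range replicas {
		if r.freeVolumeCount >= minFreeSlots {
			return r, true
		}
	}
	return replica{}, false
}

func main() {
	replicas := []replica{
		{server: "vs1:8080", freeVolumeCount: 0}, // full disk
		{server: "vs2:8080", freeVolumeCount: 3}, // healthy disk
	}
	if src, ok := pickEncodeSource(replicas, 2); ok {
		fmt.Println("encode from", src.server)
	} else {
		fmt.Println("skip volume: all replicas on full disks")
	}
}
```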

6a61037333
fix issue #8230: volume.fsck deletion logic to respect purgeAbsent flag (#8266)
* fix issue #8230: volume.fsck deletion logic to respect purgeAbsent flag This commit fixes two issues in volume.fsck: 1. Missing chunks in existing volumes are now deleted if -reallyDeleteFilerEntries is set. 2. Missing volumes are now properly handled when a -volumeId filter is specified, allowing deletion of filer entries for those volumes. * address PR feedback for issue #8230 - Ensure volume filter is applied before reporting missing volumes - Fix potential nil-pointer dereferences in httpDelete method - Use proper error checking throughout httpDelete * address second round PR feedback for issue #8230 - Use fmt.Fprintf(c.writer, ...) instead of fmt.Printf - Add missing newline in "deleting path" log message |
3 weeks ago

63b846b73b
Parallelize operations for the `volume.scrub` and `ec.scrub` commands (#8247)
Parallelize operations for the `volume.scrub` and `ec.scrub` commands. |
3 weeks ago

2ed5a8f65c
add tests
3 weeks ago

963398ac8c
use ReadFull (#40) (#8240)
* use ReadFull * fix error checking |
3 weeks ago
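The ReadFull change above concerns exact-length reads: a single Read call may legally return fewer bytes than requested with a nil error, while io.ReadFull keeps reading until the buffer is full and reports io.ErrUnexpectedEOF when the source ends early. A small standalone illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
)

func main() {
	// io.ReadFull keeps reading until the buffer is full, so a short read
	// cannot be mistaken for a complete one.
	buf := make([]byte, 4)
	if _, err := io.ReadFull(bytes.NewReader([]byte("seaweed")), buf); err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("%s\n", buf) // "seaw"

	// Only 2 of the requested 4 bytes are available: io.ErrUnexpectedEOF.
	if _, err := io.ReadFull(bytes.NewReader([]byte("fs")), buf); err != nil {
		fmt.Println("read failed:", err)
	}
}
```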

9d751a7b61
Contrib/volume scrub local (#8226)
3 weeks ago

3306abae10
shell: add minCacheAge flag to remote.uncache command (#8225)
* add minCacheAge flag to remote.uncache command #8221 * address code review feedback: add nil check and improve test isolation * address code review feedback: use consistent timestamp in FileFilter |
3 weeks ago

2ecbae3611
Add volume.scrub and ec.scrub shell commands to scrub regular & EC volumes on demand. (#8188)
* Implement RPC skeleton for regular/EC volumes scrubbing. See https://github.com/seaweedfs/seaweedfs/issues/8018 for details. * Add `volume.scrub` and `ec.scrub` shell commands to scrub regular & EC volumes on demand. F.ex: ``` > ec.scrub --full Scrubbing 10.200.17.13:9005 (1/10)... Scrubbing 10.200.17.13:9001 (2/10)... Scrubbing 10.200.17.13:9008 (3/10)... Scrubbing 10.200.17.13:9009 (4/10)... Scrubbing 10.200.17.13:9004 (5/10)... Scrubbing 10.200.17.13:9010 (6/10)... Scrubbing 10.200.17.13:9007 (7/10)... Scrubbing 10.200.17.13:9002 (8/10)... Scrubbing 10.200.17.13:9003 (9/10)... Scrubbing 10.200.17.13:9006 (10/10)... Scrubbed 20 EC files and 20 volumes on 10 nodes Got scrub failures on 1 EC volumes and 2 EC shards :( Affected volumes: 10.200.17.13:9005:1 Details: [10.200.17.13:9005] expected 551041 bytes for needle 6, got 551072 [10.200.17.13:9005] needles in volume file (1) don't match index entries (173) for volume 1 ``` |
3 weeks ago

79722bcf30
Add s3tables shell and admin UI (#8172)
* Add shared s3tables manager * Add s3tables shell commands * Add s3tables admin API * Add s3tables admin UI * Fix admin s3tables namespace create * Rename table buckets menu * Centralize s3tables tag validation * Reuse s3tables manager in admin * Extract s3tables list limit * Add s3tables bucket ARN helper * Remove write middleware from s3tables APIs * Fix bucket link and policy hint * Fix table tag parsing and nav link * Disable namespace table link on invalid ARN * Improve s3tables error decode * Return flag parse errors for s3tables tag * Accept query params for namespace create * Bind namespace create form data * Read s3tables JS data from DOM * s3tables: allow empty region ARN * shell: pass s3tables account id * shell: require account for table buckets * shell: use bucket name for namespaces * shell: use bucket name for tables * shell: use bucket name for tags * admin: add table buckets links in file browser * s3api: reuse s3tables tag validation * admin: harden s3tables UI handlers * fix admin list table buckets * allow admin s3tables access * validate s3tables bucket tags * log s3tables bucket metadata errors * rollback table bucket on owner failure * show s3tables bucket owner * add s3tables iam conditions * Add s3tables user permissions UI * Authorize s3tables using identity actions * Add s3tables permissions to user modal * Disambiguate bucket scope in user permissions * Block table bucket names that match S3 buckets * Pretty-print IAM identity JSON * Include tags in s3tables permission context * admin: refactor S3 Tables inline JavaScript into a separate file * s3tables: extend IAM policy condition operators support * shell: use LookupEntry wrapper for s3tables bucket conflict check * admin: handle buildBucketPermissions validation in create/update flows |
4 weeks ago

49c66bbb2e
shell: allow spaces in arguments via quoting (#8157) (#8165)
* shell: allow spaces in arguments via quoting (#8157) - updated argument splitting regex to handle quoted segments - added robust quote stripping to remove matching quotes from flags - added unit tests for regex splitting and flag parsing * shell: use robust state machine parser for command line arguments - replaced regex-based splitter with splitCommandLine state machine - added escape character support in splitCommandLine and stripQuotes - updated unit tests to include escaped quotes and single-quote literals - addressed feedback regarding escaped quotes handling (#8157) * shell: detect unbalanced quotes in stripQuotes - modified stripQuotes to return the original string if quotes are unbalanced - added test cases for unbalanced quotes in shell_liner_test.go * shell: refactor shared parsing logic into parseShellInput helper - unified splitting and unquoting logic into a single state machine - splitCommandLine now returns unquoted tokens directly - simplified processEachCmd by removing redundant unquoting loop - improved maintainability by eliminating code duplication * shell: detect trailing backslash in stripQuotes - updated parseShellInput to include escaped state in unbalanced flag - stripQuotes now returns original string if it ends with an unescaped backslash - added test case for trailing backslash in shell_liner_test.go |
4 weeks ago
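A compact sketch of the quote-aware tokenizer idea described in the commit above. The function name mirrors the commit message, but this is an illustrative reimplementation that omits the escape handling and unbalanced-quote detection the real parser adds.

```go
package main

import "fmt"

// splitCommandLine is a minimal quote-aware tokenizer: whitespace separates
// tokens unless it appears inside single or double quotes.
func splitCommandLine(line string) []string {
	var tokens []string
	var cur []rune
	var quote rune // 0 when outside quotes
	flush := func() {
		if len(cur) > 0 {
			tokens = append(tokens, string(cur))
			cur = cur[:0]
		}
	}
	for _, r := range line {
		switch {
		case quote != 0: // inside a quoted segment
			if r == quote {
				quote = 0 // closing quote
			} else {
				cur = append(cur, r)
			}
		case r == '\'' || r == '"':
			quote = r // opening quote
		case r == ' ' || r == '\t':
			flush()
		default:
			cur = append(cur, r)
		}
	}
	flush()
	return tokens
}

func main() {
	fmt.Printf("%q\n", splitCommandLine(`fs.mv "/dir with spaces/a.txt" /tmp`))
	// ["fs.mv" "/dir with spaces/a.txt" "/tmp"]
}
```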

94e0b902f9
shell: update fs.verify and volume.fsck for new BFS signature
Updated dependent commands to match the refactored doTraverseBfsAndSaving signature and use context for channel sends. |
4 weeks ago

3b05efbdbc
shell: fix potential deadlock in fs.meta.save BFS traversal
Refactored doTraverseBfsAndSaving to use context cancellation. If the saving process fails, the traversal is stopped immediately to prevent workers from blocking on the output channel. |
4 weeks ago
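A minimal sketch of the cancellation pattern described above, with generic producer/consumer names rather than the actual BFS traversal code: senders select on ctx.Done() alongside the channel send, so cancelling the context (for example when the saving side fails) unblocks them instead of leaving them stuck on the output channel.

```go
package main

import (
	"context"
	"fmt"
)

// produce sends items on out but aborts as soon as ctx is cancelled, so a
// failed or stopped consumer can never leave it blocked on the channel send.
func produce(ctx context.Context, out chan<- int) error {
	for i := 0; i < 1000; i++ {
		select {
		case out <- i:
		case <-ctx.Done():
			return ctx.Err()
		}
	}
	return nil
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	out := make(chan int)
	done := make(chan error, 1)
	go func() { done <- produce(ctx, out) }()

	// The consumer stops after a few items; cancel tells the producer to stop.
	for i := 0; i < 3; i++ {
		fmt.Println("saved", <-out)
	}
	cancel()
	fmt.Println("producer exited with:", <-done)
}
```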

550a4ff761
Fix inconsistent TTL reporting in volume.list #8158 (#8164)
* fix inconsistent TTL reporting in volume.list #8158 * simplify volume.list output using vi.String() |
4 weeks ago

b1d63d0943
feat(shell): add Object Lock management commands (#8141)
* feat(shell): add s3.bucket.lock command for Object Lock management Add new weed shell command to view and enable S3 Object Lock on existing buckets. This allows administrators to enable Object Lock without recreating buckets, which is useful when buckets already contain data. The command: - Shows current Object Lock and Versioning status - Enables Object Lock with -enable flag (irreversible, per AWS S3 spec) - Automatically enables Versioning if not already enabled (required for Object Lock) Usage: s3.bucket.lock -name <bucket> # view status s3.bucket.lock -name <bucket> -enable # enable Object Lock Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Andrei Kvapil <kvapss@gmail.com> * feat(shell): add -withLock flag to s3.bucket.create command Add support for creating buckets with Object Lock enabled directly from weed shell. The flag automatically enables versioning as required by Object Lock. Usage: s3.bucket.create -name mybucket -withLock Co-Authored-By: Claude <noreply@anthropic.com> Signed-off-by: Andrei Kvapil <kvapss@gmail.com> * Apply suggestion from @gemini-code-assist[bot] Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Signed-off-by: Andrei Kvapil <kvapss@gmail.com> Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
1 month ago

6bf088cec9
IAM Policy Management via gRPC (#8109)
* Add IAM gRPC service definition - Add GetConfiguration/PutConfiguration for config management - Add CreateUser/GetUser/UpdateUser/DeleteUser/ListUsers for user management - Add CreateAccessKey/DeleteAccessKey/GetUserByAccessKey for access key management - Methods mirror existing IAM HTTP API functionality * Add IAM gRPC handlers on filer server - Implement IamGrpcServer with CredentialManager integration - Handle configuration get/put operations - Handle user CRUD operations - Handle access key create/delete operations - All methods delegate to CredentialManager for actual storage * Wire IAM gRPC service to filer server - Add CredentialManager field to FilerOption and FilerServer - Import credential store implementations in filer command - Initialize CredentialManager from credential.toml if available - Register IAM gRPC service on filer gRPC server - Enable credential management via gRPC alongside existing filer services * Regenerate IAM protobuf with gRPC service methods * iam_pb: add Policy Management to protobuf definitions * credential: implement PolicyManager in credential stores * filer: implement IAM Policy Management RPCs * shell: add s3.policy command * test: add integration test for s3.policy * test: fix compilation errors in policy_test * pb * fmt * test * weed shell: add -policies flag to s3.configure This allows linking/unlinking IAM policies to/from identities directly from the s3.configure command. * test: verify s3.configure policy linking and fix port allocation - Added test case for linking policies to users via s3.configure - Implemented findAvailablePortPair to ensure HTTP and gRPC ports are both available, avoiding conflicts with randomized port assignments. - Updated assertion to match jsonpb output (policyNames) * credential: add StoreTypeGrpc constant * credential: add IAM gRPC store boilerplate * credential: implement identity methods in gRPC store * credential: implement policy methods in gRPC store * admin: use gRPC credential store for AdminServer This ensures that all IAM and policy changes made through the Admin UI are persisted via the Filer's IAM gRPC service instead of direct file manipulation. * shell: s3.configure use granular IAM gRPC APIs instead of full config patching * shell: s3.configure use granular IAM gRPC APIs * shell: replace deprecated ioutil with os in s3.policy * filer: use gRPC FailedPrecondition for unconfigured credential manager * test: improve s3.policy integration tests and fix error checks * ci: add s3 policy shell integration tests to github workflow * filer: fix LoadCredentialConfiguration error handling * credential/grpc: propagate unmarshal errors in GetPolicies * filer/grpc: improve error handling and validation * shell: use gRPC status codes in s3.configure * credential: document PutPolicy as create-or-replace * credential/postgres: reuse CreatePolicy in PutPolicy to deduplicate logic * shell: add timeout context and strictly enforce flags in s3.policy * iam: standardize policy content field naming in gRPC and proto * shell: extract slice helper functions in s3.configure * filer: map credential store errors to gRPC status codes * filer: add input validation for UpdateUser and CreateAccessKey * iam: improve validation in policy and config handlers * filer: ensure IAM service registration by defaulting credential manager * credential: add GetStoreName method to manager * test: verify policy deletion in integration test |
1 month ago

0a46577700
Fix #8040: Support '_default' keyword in collectionPattern to match default collection (#8046)
* Fix #8040: Support 'default' keyword in collectionPattern to match default collection The default collection in SeaweedFS is represented as an empty string internally. Previously, it was impossible to specifically target only the default collection because: - Empty collectionPattern matched ALL collections (filter was skipped) - Using collectionPattern="default" tried to match the literal string "default" This commit adds special handling for the keyword "default" in collectionPattern across multiple shell commands: - volume.tier.move - volume.list - volume.fix.replication - volume.configure.replication Now users can use -collectionPattern="default" to specifically target volumes in the default collection (empty collection name), while maintaining backward compatibility where empty pattern matches all collections. Updated help text to document this feature. * Update compileCollectionPattern to support 'default' keyword This extends the fix to all commands that use regex-based collection pattern matching: - ec.encode - ec.decode - volume.tier.download - volume.balance The compileCollectionPattern function now treats "default" as a special keyword that compiles to the regex "^$" (matching empty strings), making it consistent with the other commands that use filepath.Match. * Use CollectionDefault constant instead of hardcoded "default" string Refactored the collection pattern matching logic to use a central constant CollectionDefault defined in weed/shell/common.go. This improves maintainability and ensures consistency across all shell commands. * Address PR review feedback: simplify logic and use '_default' keyword Changes: 1. Changed CollectionDefault from "default" to "_default" to avoid collision with literal collection names 2. Simplified pattern matching logic to reduce code duplication across all affected commands 3. Fixed error handling in command_volume_tier_move.go to properly propagate filepath.Match errors instead of swallowing them 4. Updated documentation to clarify how to match a literal "default" collection using regex patterns like "^default$" This addresses all feedback from PR review comments. * Remove unnecessary documentation about matching literal 'default' Since we changed the keyword to '_default', users can now simply use 'default' to match a literal collection named "default". The previous documentation about using regex patterns was confusing and no longer needed. * Fix error propagation and empty pattern handling 1. command_volume_tier_move.go: Added early termination check after eachDataNode callback to stop processing remaining nodes if a pattern matching error occurred, improving efficiency 2. command_volume_configure_replication.go: Fixed empty pattern handling to match all collections (collectionMatched = true when pattern is empty), mirroring the behavior in other commands These changes address the remaining PR review feedback. |
1 month ago
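A small sketch of the keyword handling described above, assuming the simplified signature below rather than the actual shell helper: an empty pattern matches everything, the `_default` keyword compiles to `^$` so it matches only the empty (default) collection name, and anything else is treated as a regular expression.

```go
package main

import (
	"fmt"
	"regexp"
)

const collectionDefault = "_default" // keyword used by the commands above

// compileCollectionPattern returns nil for "match all collections", a ^$
// matcher for the default (empty-named) collection, or a compiled regex.
func compileCollectionPattern(pattern string) (*regexp.Regexp, error) {
	switch pattern {
	case "":
		return nil, nil
	case collectionDefault:
		return regexp.Compile("^$")
	default:
		return regexp.Compile(pattern)
	}
}

func matches(re *regexp.Regexp, collection string) bool {
	return re == nil || re.MatchString(collection)
}

func main() {
	re, _ := compileCollectionPattern(collectionDefault)
	fmt.Println(matches(re, ""))       // true: the default collection
	fmt.Println(matches(re, "photos")) // false
}
```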

7eb90fdfd7
Enhance EC balancing to separate parity and data shards (#8038)
* Enhance EC balancing to separate parity and data shards across racks * Rename avoidRacks to antiAffinityRacks for clarity * Implement server-level EC separation for parity/data shards * Optimize EC balancing: consolidate helpers and extract two-pass selection logic * Add comprehensive edge case tests for EC balancing logic * Apply code review feedback: rename select_(), add divide-by-zero guard, fix comment * Remove unused parameters from doBalanceEcShardsWithinOneRack and add explicit anti-affinity check * Add disk-level anti-affinity for data/parity shard separation - Modified pickBestDiskOnNode to accept shardId and dataShardCount - Implemented explicit anti-affinity: 1000-point penalty for placing data shards on disks with parity (and vice versa) - Updated all call sites including balancing and evacuation - For evacuation, disabled anti-affinity by passing dataShardCount=0 |
1 month ago
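For illustration, a toy scoring function in the spirit of the disk-level anti-affinity described above (types and numbers are illustrative, not the actual balancer code): a large penalty applies when a data shard would land on a disk that already holds parity shards, and vice versa, with a zero dataShardCount disabling the check.

```go
package main

import "fmt"

// Illustrative disk summary; the real balancer scores topology disks.
type disk struct {
	name         string
	freeSlots    int
	dataShards   int // EC data shards already on this disk
	parityShards int // EC parity shards already on this disk
}

// score prefers disks with more free slots, but penalizes mixing data and
// parity shards on the same disk. Shard IDs below dataShardCount are data
// shards; dataShardCount == 0 disables anti-affinity (e.g. during evacuation).
func score(d disk, shardId, dataShardCount int) int {
	s := d.freeSlots
	isData := shardId < dataShardCount
	if dataShardCount > 0 {
		if (isData && d.parityShards > 0) || (!isData && d.dataShards > 0) {
			s -= 1000
		}
	}
	return s
}

func main() {
	disks := []disk{
		{name: "diskA", freeSlots: 5, parityShards: 2},
		{name: "diskB", freeSlots: 3},
	}
	best, bestScore := "", -1<<31
	for _, d := range disks {
		if s := score(d, 1 /* data shard */, 10); s > bestScore {
			best, bestScore = d.name, s
		}
	}
	fmt.Println("place data shard on", best) // diskB, despite fewer free slots
}
```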

905e7e72d9
Add remote.copy.local command to copy local files to remote storage (#8033)
* Add remote.copy.local command to copy local files to remote storage This new command solves the issue described in GitHub Discussion #8031 where files exist locally but are not synced to remote storage due to missing filer logs. Features: - Copies local-only files to remote storage - Supports file filtering (include/exclude patterns) - Dry run mode to preview actions - Configurable concurrency for performance - Force update option for existing remote files - Comprehensive error handling with retry logic Usage: remote.copy.local -dir=/path/to/mount/dir [options] This addresses the need to manually sync files when filer logs were deleted or when local files were never synced to remote storage. * shell: rename commandRemoteLocalSync to commandRemoteCopyLocal * test: add comprehensive remote cache integration tests * shell: fix forceUpdate logic in remote.copy.local The previous logic only allowed force updates when localEntry.RemoteEntry was not nil, which defeated the purpose of using -forceUpdate to fix inconsistencies where local metadata might be missing. Now -forceUpdate will overwrite remote files whenever they exist, regardless of local metadata state. * shell: fix code review issues in remote.copy.local - Return actual error from flag parsing instead of swallowing it - Use sync.Once to safely capture first error in concurrent operations - Add atomic counter to track actual successful copies - Protect concurrent writes to output with mutex to prevent interleaving - Fix path matching to prevent false positives with sibling directories (e.g., /mnt/remote2 no longer matches /mnt/remote) * test: address code review nitpicks in integration tests - Improve create_bucket error handling to fail on real errors - Fix test assertions to properly verify expected failures - Use case-insensitive string matching for error detection - Replace weak logging-only tests with proper assertions - Remove extra blank line in Makefile * test: remove redundant edge case tests Removed 5 tests that were either duplicates or didn't assert meaningful behavior: - TestEdgeCaseEmptyDirectory (duplicate of TestRemoteCopyLocalEmptyDirectory) - TestEdgeCaseRapidCacheUncache (no meaningful assertions) - TestEdgeCaseConcurrentCommands (only logs errors, no assertions) - TestEdgeCaseInvalidPaths (no security assertions) - TestEdgeCaseFileNamePatterns (duplicate of pattern tests in cache tests) Kept valuable stress tests: nested directories, special characters, very large files (100MB), many small files (100), and zero-byte files. * test: fix CI failures by forcing localhost IP advertising Added -ip=127.0.0.1 flag to both primary and remote weed mini commands to prevent IP auto-detection issues in CI environments. Without this flag, the master would advertise itself using the actual IP (e.g., 10.1.0.17) while binding to 127.0.0.1, causing connection refused errors when other services tried to connect to the gRPC port. 
* test: address final code review issues - Add proper error assertions for concurrent commands test - Require errors for invalid path tests instead of just logging - Remove unused 'match' field from pattern test struct - Add dry-run output assertion to verify expected behavior - Simplify redundant condition in remote.copy.local (remove entry.RemoteEntry check) * test: fix remote.configure tests to match actual validation rules - Use only letters in remote names (no numbers) to match validation - Relax missing parameter test expectations since validation may not be strict - Generate unique names using letter suffix instead of numbers * shell: rename pathToCopyCopy to localPath for clarity Improved variable naming in concurrent copy loop to make the code more readable and less repetitive. * test: fix remaining test failures - Remove strict error requirement for invalid paths (commands handle gracefully) - Fix TestRemoteUncacheBasic to actually test uncache instead of cache - Use simple numeric names for remote.configure tests (testcfg1234 format) to avoid validation issues with letter-only or complex name generation * test: use only letters in remote.configure test names The validation regex ^[A-Za-z][A-Za-z0-9]*$ requires names to start with a letter, but using static letter-only names avoids any potential issues with the validation. * test: remove quotes from -name parameter in remote.configure tests Single quotes were being included as part of the name value, causing validation failures. Changed from -name='testremote' to -name=testremote. * test: fix remote.configure assertion to be flexible about JSON formatting Changed from checking exact JSON format with specific spacing to just checking if the name appears in the output, since JSON formatting may vary (e.g., "name": "value" vs "name": "value"). |
1 month ago

f2e7af257d
Fix volume.fsck -forcePurging -reallyDeleteFromVolume to fail fast on filer traversal errors (#8015)
* Add TraverseBfsWithContext and fix race conditions in error handling
- Add TraverseBfsWithContext function to support context cancellation
- Fix race condition in doTraverseBfsAndSaving using atomic.Bool and sync.Once
- Improve error handling with fail-fast behavior and proper error propagation
- Update command_volume_fsck to use error-returning saveFn callback
- Enhance error messages in readFilerFileIdFile with detailed context
* refactoring
* fix error format
* atomic
* filer_pb: make enqueue return void
* shell: simplify fs.meta.save error handling
* filer_pb: handle enqueue return value
* Revert "atomic"
This reverts commit
1 month ago
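A minimal sketch of the fail-fast error handling mentioned above, using generic names: sync.Once records only the first error, an atomic.Bool gives workers a cheap flag to poll, and a context cancellation stops anything still blocked.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"sync"
	"sync/atomic"
)

// firstError records only the first failure (sync.Once), flips an atomic flag
// that workers can poll cheaply, and cancels a shared context so that anything
// still blocked stops promptly.
type firstError struct {
	once   sync.Once
	failed atomic.Bool
	err    error
	cancel context.CancelFunc
}

func (f *firstError) set(err error) {
	f.once.Do(func() {
		f.err = err
		f.failed.Store(true)
		f.cancel()
	})
}

func main() {
	ctx, cancel := context.WithCancel(context.Background())
	fe := &firstError{cancel: cancel}

	var wg sync.WaitGroup
	for i := 0; i < 8; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			if fe.failed.Load() { // fail fast: skip work once something broke
				return
			}
			if i == 3 { // simulate one worker hitting an error
				fe.set(fmt.Errorf("worker %d: %w", i, errors.New("save failed")))
			}
		}(i)
	}
	wg.Wait()
	fmt.Println("first error:", fe.err)
	fmt.Println("context cancelled:", ctx.Err() != nil)
}
```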

f47bc8c539
Fix remote.meta.sync TTL issue (#8021) (#8030)
* Fix remote.meta.sync TTL issue (#8021) Remote entries should not have TTL applied because they represent files in remote storage, not local SeaweedFS files. When TTL was configured on a prefix, remote.meta.sync would create entries that immediately expired, causing them to be deleted and recreated on each sync. Changes: - Set TtlSec=0 explicitly when creating remote entries in remote.meta.sync - Skip TTL application in CreateEntry handler for entries with Remote field set Fixes #8021 * Add TTL protection for remote entries in update path - Set TtlSec=0 in doSaveRemoteEntry before calling UpdateEntry - Add server-side TTL protection in UpdateEntry handler for remote entries - Ensures remote entries don't inherit or preserve TTL when updated |
1 month ago

9012069bd7
chore: execute goimports to format the code (#7983)
* chore: execute goimports to format the code Signed-off-by: promalert <promalert@outlook.com> * goimports -w . --------- Signed-off-by: promalert <promalert@outlook.com> Co-authored-by: Chris Lu <chris.lu@gmail.com> |
2 months ago

e10f11b480
opt: reduce ShardsInfo memory usage with bitmap and sorted slice (#7974)
* opt: reduce ShardsInfo memory usage with bitmap and sorted slice - Replace map[ShardId]*ShardInfo with sorted []ShardInfo slice - Add ShardBits (uint32) bitmap for O(1) existence checks - Use binary search for O(log n) lookups by shard ID - Maintain sorted order for efficient iteration - Add comprehensive unit tests and benchmarks Memory savings: - Map overhead: ~48 bytes per entry eliminated - Pointers: 8 bytes per entry eliminated - Total: ~56 bytes per shard saved Performance improvements: - Has(): O(1) using bitmap - Size(): O(log n) using binary search (was O(1), acceptable tradeoff) - Count(): O(1) using popcount on bitmap - Iteration: Faster due to cache locality * refactor: add methods to ShardBits type - Add Has(), Set(), Clear(), and Count() methods to ShardBits - Simplify ShardsInfo methods by using ShardBits methods - Improves code readability and encapsulation * opt: use ShardBits directly in ShardsCountFromVolumeEcShardInformationMessage Avoid creating a full ShardsInfo object just to count shards. Directly cast vi.EcIndexBits to ShardBits and use Count() method. * opt: use strings.Builder in ShardsInfo.String() for efficiency * refactor: change AsSlice to return []ShardInfo (values instead of pointers) This completes the memory optimization by avoiding unnecessary pointer slices and potential allocations. * refactor: rename ShardsCountFromVolumeEcShardInformationMessage to GetShardCount * fix: prevent deadlock in Add and Subtract methods Copy shards data from 'other' before releasing its lock to avoid potential deadlock when a.Add(b) and b.Add(a) are called concurrently. The previous implementation held other's lock while calling si.Set/Delete, which acquires si's lock. This could deadlock if two goroutines tried to add/subtract each other concurrently. * opt: avoid unnecessary locking in constructor functions ShardsInfoFromVolume and ShardsInfoFromVolumeEcShardInformationMessage now build shards slice and bitmap directly without calling Set(), which acquires a lock on every call. Since the object is local and not yet shared, locking is unnecessary and adds overhead. This improves performance during object construction. * fix: rename 'copy' variable to avoid shadowing built-in function The variable name 'copy' in TestShardsInfo_Copy shadowed the built-in copy() function, which is confusing and bad practice. Renamed to 'siCopy'. * opt: use math/bits.OnesCount32 and reorganize types 1. Replace manual popcount loop with math/bits.OnesCount32 for better performance and idiomatic Go code 2. Move ShardSize type definition to ec_shards_info.go for better code organization since it's primarily used there * refactor: Set() now accepts ShardInfo for future extensibility Changed Set(id ShardId, size ShardSize) to Set(shard ShardInfo) to support future additions to ShardInfo without changing the API. This makes the code more extensible as new fields can be added to ShardInfo (e.g., checksum, location, etc.) without breaking the Set API. * refactor: move ShardInfo and ShardSize to separate file Created ec_shard_info.go to hold the basic shard types (ShardInfo and ShardSize) for better code organization and separation of concerns. * refactor: add ShardInfo constructor and helper functions Added NewShardInfo() constructor and IsValid() method to better encapsulate ShardInfo creation and validation. Updated code to use the constructor for cleaner, more maintainable code. 
* fix: update remaining Set() calls to use NewShardInfo constructor Fixed compilation errors in storage and shell packages where Set() calls were not updated to use the new NewShardInfo() constructor. * fix: remove unreachable code in filer backup commands Removed unreachable return statements after infinite loops in filer_backup.go and filer_meta_backup.go to fix compilation errors. * fix: rename 'new' variable to avoid shadowing built-in Renamed 'new' to 'result' in MinusParityShards, Plus, and Minus methods to avoid shadowing Go's built-in new() function. * fix: update remaining test files to use NewShardInfo constructor Fixed Set() calls in command_volume_list_test.go and ec_rebalance_slots_test.go to use NewShardInfo() constructor. |
2 months ago
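A condensed sketch of the bitmap idea from the commit above: one bit per EC shard in a uint32, with presence checks and counting via math/bits. The real type additionally keeps a sorted slice of per-shard sizes; this sketch shows only the bitmap part.

```go
package main

import (
	"fmt"
	"math/bits"
)

// ShardBits packs EC shard presence into a uint32: bit i is set when shard i
// is present, so existence checks and counting are O(1) with no map overhead.
type ShardBits uint32

func (b ShardBits) Has(id int) bool        { return b&(1<<uint(id)) != 0 }
func (b ShardBits) Set(id int) ShardBits   { return b | 1<<uint(id) }
func (b ShardBits) Clear(id int) ShardBits { return b &^ (1 << uint(id)) }
func (b ShardBits) Count() int             { return bits.OnesCount32(uint32(b)) }

func main() {
	var b ShardBits
	b = b.Set(0).Set(4).Set(13)
	fmt.Println(b.Has(4), b.Has(5)) // true false
	fmt.Println(b.Count())          // 3
	b = b.Clear(4)
	fmt.Println(b.Count()) // 2
}
```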

0b5a65e00b
avoid extra missing shard warning
fix https://github.com/seaweedfs/seaweedfs/issues/7956 |
2 months ago

91fcc60898
Have `volume.list` account for EC shards when computing disk usage. (#7909)
2 months ago

6b98b52acc
Fix reporting of EC shard sizes from nodes to masters. (#7835)
SeaweedFS tracks EC shard sizes on topology data structures, but this information is never
relayed to master servers :( The end result is that commands reporting disk usage, such
as `volume.list` and `cluster.status`, yield incorrect figures when EC shards are present.
As an example for a simple 5-node test cluster, before...
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[1 5]
Disk hdd total size:88967096 file_count:172
DataNode 192.168.10.111:9001 total size:88967096 file_count:172
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[0 4]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9002 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[2 6]
Disk hdd total size:77267536 file_count:166
DataNode 192.168.10.111:9003 total size:77267536 file_count:166
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[3 7]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9004 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[8 9 10 11 12 13]
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:498703896 file_count:1014
DataCenter DefaultDataCenter total size:498703896 file_count:1014
total size:498703896 file_count:1014
```
...and after:
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[1 5 9] sizes:[1:8.00 MiB 5:8.00 MiB 9:8.00 MiB] total:24.00 MiB
Disk hdd total size:81761800 file_count:161
DataNode 192.168.10.111:9001 total size:81761800 file_count:161
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[11 12 13] sizes:[11:8.00 MiB 12:8.00 MiB 13:8.00 MiB] total:24.00 MiB
Disk hdd total size:88678712 file_count:170
DataNode 192.168.10.111:9002 total size:88678712 file_count:170
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[0 4 8] sizes:[0:8.00 MiB 4:8.00 MiB 8:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9003 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[2 6 10] sizes:[2:8.00 MiB 6:8.00 MiB 10:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9004 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[3 7] sizes:[3:8.00 MiB 7:8.00 MiB] total:16.00 MiB
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:511321536 file_count:993
DataCenter DefaultDataCenter total size:511321536 file_count:993
total size:511321536 file_count:993
```
2 months ago

f88b9a643d
add one example
2 months ago

2b3ff3cd05
verbose mode
2 months ago

b866907461
fs.meta.save: fix directory entry parent path in FullEntry construction (#7886)
* Checkpoint from VS Code for coding agent session * Fix fs.meta.save to correctly save directory's own metadata Co-authored-by: chrislusf <1543151+chrislusf@users.noreply.github.com> * address error --------- Co-authored-by: Chris Lu <chris.lu@gmail.com> Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: chrislusf <1543151+chrislusf@users.noreply.github.com> |
2 months ago

4aa50bfa6a
fix: EC rebalance fails with replica placement 000 (#7812)
* fix: EC rebalance fails with replica placement 000 This PR fixes several issues with EC shard distribution: 1. Pre-flight check before EC encoding - Verify target disk type has capacity before encoding starts - Prevents encoding shards only to fail during rebalance - Shows helpful error when wrong diskType is specified (e.g., ssd when volumes are on hdd) 2. Fix EC rebalance with replica placement 000 - When DiffRackCount=0, shards should be distributed freely across racks - The '000' placement means 'no volume replication needed' because EC provides redundancy - Previously all racks were skipped with error 'shards X > replica placement limit (0)' 3. Add unit tests for EC rebalance slot calculation - TestECRebalanceWithLimitedSlots: documents the limited slots scenario - TestECRebalanceZeroFreeSlots: reproduces the 0 free slots error 4. Add Makefile for manual EC testing - make setup: start cluster and populate data - make shell: open weed shell for EC commands - make clean: stop cluster and cleanup * fix: default -rebalance to true for ec.encode The -rebalance flag was defaulting to false, which meant ec.encode would only print shard moves but not actually execute them. This is a poor default since the whole point of EC encoding is to distribute shards across servers for fault tolerance. Now -rebalance defaults to true, so shards are actually distributed after encoding. Users can use -rebalance=false if they only want to see what would happen without making changes. * test/erasure_coding: improve Makefile safety and docs - Narrow pkill pattern for volume servers to use TEST_DIR instead of port pattern, avoiding accidental kills of unrelated SeaweedFS processes - Document external dependencies (curl, jq) in header comments * shell: refactor buildRackWithEcShards to reuse buildEcShards Extract common shard bit construction logic to avoid duplication between buildEcShards and buildRackWithEcShards helper functions. * shell: update test for EC replication 000 behavior When DiffRackCount=0 (replication "000"), EC shards should be distributed freely across racks since erasure coding provides its own redundancy. Update test expectation to reflect this behavior. * erasure_coding: add distribution package for proportional EC shard placement Add a new reusable package for EC shard distribution that: - Supports configurable EC ratios (not hard-coded 10+4) - Distributes shards proportionally based on replication policy - Provides fault tolerance analysis - Prefers moving parity shards to keep data shards spread out Key components: - ECConfig: Configurable data/parity shard counts - ReplicationConfig: Parsed XYZ replication policy - ECDistribution: Target shard counts per DC/rack/node - Rebalancer: Plans shard moves with parity-first strategy This enables seaweed-enterprise custom EC ratios and weed worker integration while maintaining a clean, testable architecture. * shell: integrate distribution package for EC rebalancing Add shell wrappers around the distribution package: - ProportionalECRebalancer: Plans moves using distribution.Rebalancer - NewProportionalECRebalancerWithConfig: Supports custom EC configs - GetDistributionSummary/GetFaultToleranceAnalysis: Helper functions The shell layer converts between EcNode types and the generic TopologyNode types used by the distribution package. * test setup * ec: improve data and parity shard distribution across racks - Add shardsByTypePerRack helper to track data vs parity shards - Rewrite doBalanceEcShardsAcrossRacks for two-pass balancing: 1. 
Balance data shards (0-9) evenly, max ceil(10/6)=2 per rack 2. Balance parity shards (10-13) evenly, max ceil(4/6)=1 per rack - Add balanceShardTypeAcrossRacks for generic shard type balancing - Add pickRackForShardType to select destination with room for type - Add unit tests for even data/parity distribution verification This ensures even read load during normal operation by spreading both data and parity shards across all available racks. * ec: make data/parity shard counts configurable in ecBalancer - Add dataShardCount and parityShardCount fields to ecBalancer struct - Add getDataShardCount() and getParityShardCount() methods with defaults - Replace direct constant usage with configurable methods - Fix unused variable warning for parityPerRack This allows seaweed-enterprise to use custom EC ratios while defaulting to standard 10+4 scheme. * Address PR 7812 review comments Makefile improvements: - Save PIDs for each volume server for precise termination - Use PID-based killing in stop target with pkill fallback - Use more specific pkill patterns with TEST_DIR paths Documentation: - Document jq dependency in README.md Rebalancer fix: - Fix duplicate shard count updates in applyMovesToAnalysis - All planners (DC/rack/node) update counts inline during planning - Remove duplicate updates from applyMovesToAnalysis to avoid double-counting * test/erasure_coding: use mktemp for test file template Use mktemp instead of hardcoded /tmp/testfile_template.bin path to provide better isolation for concurrent test runs. |
2 months ago
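The per-rack caps mentioned above (at most 2 data shards and 1 parity shard per rack for a 10+4 volume on 6 racks) come from a simple ceiling division; a tiny illustration:

```go
package main

import "fmt"

// maxPerRack is the cap used by the two-pass balancing described above:
// spread each shard type evenly, allowing at most ceil(shards/racks) per rack.
func maxPerRack(shardCount, rackCount int) int {
	return (shardCount + rackCount - 1) / rackCount
}

func main() {
	const dataShards, parityShards, racks = 10, 4, 6
	fmt.Println("data shards per rack   <=", maxPerRack(dataShards, racks))   // ceil(10/6) = 2
	fmt.Println("parity shards per rack <=", maxPerRack(parityShards, racks)) // ceil(4/6) = 1
}
```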

f4cdfcc5fd
Add cluster.raft.leader.transfer command for graceful leader change (#7819)
* proto: add RaftLeadershipTransfer RPC for forced leader change Add new gRPC RPC and messages for leadership transfer: - RaftLeadershipTransferRequest: optional target_id and target_address - RaftLeadershipTransferResponse: previous_leader and new_leader This enables graceful leadership transfer before master maintenance, reducing errors in filers during planned maintenance windows. Ref: https://github.com/seaweedfs/seaweedfs/issues/7527 * proto: regenerate Go files for RaftLeadershipTransfer Generated from master.proto changes. * master: implement RaftLeadershipTransfer gRPC handler Add gRPC handler for leadership transfer with support for: - Transfer to any eligible follower (when target_id is empty) - Transfer to a specific server (when target_id and target_address are provided) Uses hashicorp/raft LeadershipTransfer() and LeadershipTransferToServer() APIs. Returns the previous and new leader in the response. * shell: add cluster.raft.leader.transfer command Add weed shell command for graceful leadership transfer: - Displays current cluster status before transfer - Supports auto-selection of target (any eligible follower) - Supports targeted transfer with -id and -address flags - Provides clear feedback on success/failure with troubleshooting tips Usage: cluster.raft.leader.transfer cluster.raft.leader.transfer -id <server_id> -address <grpc_address> * master: add unit tests for raft gRPC handlers Add tests covering: - RaftLeadershipTransfer with no raft initialized - RaftLeadershipTransfer with target_id but no address - RaftListClusterServers with no raft initialized - RaftAddServer with no raft initialized - RaftRemoveServer with no raft initialized These tests verify error handling when raft is not configured. * shell: add tests for cluster.raft.leader.transfer command Add tests covering: - Command name and help text validation - HasTag returns false for ResourceHeavy - Validation of -id without -address - Argument parsing with unknown flags * master: clarify that leadership transfer requires -raftHashicorp The default raft implementation (seaweedfs/raft, a goraft fork) does not support graceful leadership transfer. This feature is only available when using hashicorp raft (-raftHashicorp=true). Update error messages and help text to make this requirement clear: - gRPC handler returns specific error for goraft users - Shell command help text notes the requirement - Added test for goraft case * test: use strings.Contains instead of custom helper Replace custom contains/containsHelper functions with the standard library strings.Contains for better maintainability. 
* shell: return flag parsing errors instead of swallowing them - Return the error from flag.Parse() instead of returning nil - Update test to explicitly assert error for unknown flags * test: document integration test scenarios for Raft leadership transfer Add comments explaining: - Why these unit tests only cover 'Raft not initialized' scenarios - What integration tests should cover (with multi-master cluster) - hashicorp/raft uses concrete types that cannot be easily mocked * fix: address reviewer feedback on tests and leader routing - Remove misleading tests that couldn't properly validate their documented behavior without a real Raft cluster: - TestRaftLeadershipTransfer_GoraftNotSupported - TestRaftLeadershipTransfer_ValidationTargetIdWithoutAddress - Change WithClient(false) to WithClient(true) for RaftLeadershipTransfer RPC to ensure the request is routed to the current leader * Improve cluster.raft.transferLeader command - Rename command from cluster.raft.leader.transfer to cluster.raft.transferLeader - Add symmetric validation: -id and -address must be specified together - Handle case where same leader is re-elected after transfer - Add test for -address without -id validation - Add docker compose file for 5-master raft cluster testing |
2 months ago

6a1b9ce8cd
Give `cluster.status` detailed file metrics for regular volumes (#7791)
* Implement a `weed shell` command to return a status overview of the cluster. Detailed file information will be implemented in a follow-up MR. Note also that masters are currently not reporting back EC shard sizes correctly, via `master_pb.VolumeEcShardInformationMessage.shard_sizes`. F.ex: ``` > status cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC(s)s, 1 disk(s) on 1 rack(s) volumes: total: 3 volumes on 1 collections max size: 31457280000 bytes regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%) EC: 1 EC volumes on 14 shards (14.00 shards/volume) storage: total: 186024424 bytes regular volumes: 186024424 bytes EC volumes: 0 bytes raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files ``` * Humanize output for `weed.server` by default. Makes things more readable :) ``` > cluster.status cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC, 10 disks on 1 rack volumes: total: 3 volumes, 1 collection max size: 32 GB regular: 2/80 volumes on 6 replicas, 6 writable (100%), 0 read-only (0%) EC: 1 EC volume on 14 shards (14 shards/volume) storage: total: 172 MB regular volumes: 172 MB EC volumes: 0 B raw: 516 MB on volume replicas, 0 B on EC shards ``` ``` > cluster.status --humanize=false cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC(s), 10 disk(s) on 1 rack(s) volumes: total: 3 volume(s), 1 collection(s) max size: 31457280000 byte(s) regular: 2/80 volume(s) on 6 replica(s), 5 writable (83.33%), 1 read-only (16.67%) EC: 1 EC volume(s) on 14 shard(s) (14.00 shards/volume) storage: total: 172128072 byte(s) regular volumes: 172128072 byte(s) EC volumes: 0 byte(s) raw: 516384216 byte(s) on volume replicas, 0 byte(s) on EC shards ``` Also adds unit tests, and reshuffles test files handling for clarity. * `cluster.status`: Add detailed file metrics for regular volumes. |
2 months ago

347ed7cbfa
fix: sync replica entries before ec.encode and volume.tier.move (#7798)
* fix: sync replica entries before ec.encode and volume.tier.move (#7797) This addresses the data inconsistency risk in multi-replica volumes. When ec.encode or volume.tier.move operates on a multi-replica volume: 1. Find the replica with the highest file count (the 'best' one) 2. Copy missing entries from other replicas INTO this best replica 3. Use this union replica for the destructive operation This ensures no data is lost due to replica inconsistency before EC encoding or tier moving. Added: - command_volume_replica_check.go: Core sync and select logic - command_volume_replica_check_test.go: Test coverage Modified: - command_ec_encode.go: Call syncAndSelectBestReplica before encoding - command_volume_tier_move.go: Call syncAndSelectBestReplica before moving Fixes #7797 * test: add integration test for replicated volume sync during ec.encode * test: improve retry logic for replicated volume integration test * fix: resolve JWT issue in integration tests by using empty security.toml * address review comments: add readNeedleMeta, parallelize status fetch, fix collection param, fix test issues * test: use collection parameter consistently in replica sync test * fix: convert weed binary path to absolute to work with changed working directory * fix: remove skip behavior, keep tests failing on missing binary * fix: always check recency for each needle, add divergent replica test |
2 months ago

504b258258
s3: fix remote object not caching (#7790)
* s3: fix remote object not caching * s3: address review comments for remote object caching - Fix leading slash in object name by using strings.TrimPrefix - Return cached entry from CacheRemoteObjectToLocalCluster to get updated local chunk locations - Reuse existing helper function instead of inline gRPC call * s3/filer: add singleflight deduplication for remote object caching - Add singleflight.Group to FilerServer to deduplicate concurrent cache operations - Wrap CacheRemoteObjectToLocalCluster with singleflight to ensure only one caching operation runs per object when multiple clients request the same file - Add early-return check for already-cached objects - S3 API calls filer gRPC with timeout and graceful fallback on error - Clear negative bucket cache when bucket is created via weed shell - Add integration tests for remote cache with singleflight deduplication This benefits all clients (S3, HTTP, Hadoop) accessing remote-mounted objects by preventing redundant cache operations and improving concurrent access performance. Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599 * fix: data race in concurrent remote object caching - Add mutex to protect chunks slice from concurrent append - Add mutex to protect fetchAndWriteErr from concurrent read/write - Fix incorrect error check (was checking assignResult.Error instead of parseErr) - Rename inner variable to avoid shadowing fetchAndWriteErr * fix: address code review comments - Remove duplicate remote caching block in GetObjectHandler, keep only singleflight version - Add mutex protection for concurrent chunk slice and error access (data race fix) - Use lazy initialization for S3 client in tests to avoid panic during package load - Fix markdown linting: add language specifier to code fence, blank lines around tables - Add 'all' target to Makefile as alias for test-with-server - Remove unused 'util' import * style: remove emojis from test files * fix: add defensive checks and sort chunks by offset - Add nil check and type assertion check for singleflight result - Sort chunks by offset after concurrent fetching to maintain file order * fix: improve test diagnostics and path normalization - runWeedShell now returns error for better test diagnostics - Add all targets to .PHONY in Makefile (logs-primary, logs-remote, health) - Strip leading slash from normalizedObject to avoid double slashes in path --------- Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |
2 months ago
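The deduplication described in the commit above can be sketched with golang.org/x/sync/singleflight, where concurrent requests for the same object key collapse into one caching call. cacheRemoteObject below is a hypothetical placeholder for the filer's CacheRemoteObjectToLocalCluster path, not the real server code.

```go
package main

import (
	"fmt"
	"sync"

	"golang.org/x/sync/singleflight"
)

// cacheRemoteObject is a hypothetical stand-in for the filer's remote-cache
// operation; in the real server this would trigger CacheRemoteObjectToLocalCluster.
func cacheRemoteObject(path string) (string, error) {
	fmt.Println("caching", path) // runs once per key, not once per caller
	return "cached:" + path, nil
}

func main() {
	var group singleflight.Group
	var wg sync.WaitGroup

	// Many concurrent readers of the same remote-mounted object collapse
	// into a single caching operation keyed by the object path.
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, err, shared := group.Do("/buckets/b1/big.bin", func() (interface{}, error) {
				return cacheRemoteObject("/buckets/b1/big.bin")
			})
			if err != nil {
				return
			}
			fmt.Println(v, "shared:", shared)
		}()
	}
	wg.Wait()
}
```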
187ef65e8f
Humanize output for `weed.server` by default (#7758)
* Implement a `weed shell` command to return a status overview of the cluster. Detailed file information will be implemented in a follow-up MR. Note also that masters are currently not reporting back EC shard sizes correctly via `master_pb.VolumeEcShardInformationMessage.shard_sizes`. For example:

  ```
  > status
  cluster:
    id: topo
    status: LOCKED
    nodes: 10
    topology: 1 DC(s)s, 1 disk(s) on 1 rack(s)
  volumes:
    total: 3 volumes on 1 collections
    max size: 31457280000 bytes
    regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%)
    EC: 1 EC volumes on 14 shards (14.00 shards/volume)
  storage:
    total: 186024424 bytes
    regular volumes: 186024424 bytes
    EC volumes: 0 bytes
    raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files
  ```

* Humanize output for `weed.server` by default. Makes things more readable :)

  ```
  > cluster.status
  cluster:
    id: topo
    status: LOCKED
    nodes: 10
    topology: 1 DC, 10 disks on 1 rack
  volumes:
    total: 3 volumes, 1 collection
    max size: 32 GB
    regular: 2/80 volumes on 6 replicas, 6 writable (100%), 0 read-only (0%)
    EC: 1 EC volume on 14 shards (14 shards/volume)
  storage:
    total: 172 MB
    regular volumes: 172 MB
    EC volumes: 0 B
    raw: 516 MB on volume replicas, 0 B on EC shards
  ```

  ```
  > cluster.status --humanize=false
  cluster:
    id: topo
    status: LOCKED
    nodes: 10
    topology: 1 DC(s), 10 disk(s) on 1 rack(s)
  volumes:
    total: 3 volume(s), 1 collection(s)
    max size: 31457280000 byte(s)
    regular: 2/80 volume(s) on 6 replica(s), 5 writable (83.33%), 1 read-only (16.67%)
    EC: 1 EC volume(s) on 14 shard(s) (14.00 shards/volume)
  storage:
    total: 172128072 byte(s)
    regular volumes: 172128072 byte(s)
    EC volumes: 0 byte(s)
    raw: 516384216 byte(s) on volume replicas, 0 byte(s) on EC shards
  ```

  Also adds unit tests, and reshuffles test file handling for clarity.
2 months ago
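The humanized sizes shown above are just byte counts rendered with larger units. The helper below is a minimal SI-unit sketch of that idea; the exact rounding and unit choices in weed shell may differ.

```go
package main

import "fmt"

// humanizeBytes renders a byte count with SI-style units. This is a minimal
// sketch of the idea, not the actual formatting code used by weed shell.
func humanizeBytes(n uint64) string {
	const unit = 1000
	if n < unit {
		return fmt.Sprintf("%d B", n)
	}
	div, exp := uint64(unit), 0
	for v := n / unit; v >= unit; v /= unit {
		div *= unit
		exp++
	}
	return fmt.Sprintf("%.0f %cB", float64(n)/float64(div), "kMGTPE"[exp])
}

func main() {
	fmt.Println(humanizeBytes(172128072)) // "172 MB"
	fmt.Println(humanizeBytes(516384216)) // "516 MB"
}
```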
7ed7578424
fix(ec.decode): purge EC shards when volume is empty (#7749)
* fix(ec.decode): purge EC shards when volume is empty

  When an EC volume has no live entries (all deleted), ec.decode should not generate an empty normal volume. Instead, treat decode as a no-op and allow shard purge to proceed cleanly.

  Fixes: #7748

* chore: address PR review comments
* test: cover live EC index + avoid magic string
* chore: harden empty-EC handling
  - Make shard cleanup best-effort (collect errors)
  - Remove unreachable EOF handling in HasLiveNeedles
  - Add empty ecx test case
  - Share no-live-entries substring between server/client
* perf: parallelize EC shard unmount/delete across locations
* refactor: combine unmount+delete into single goroutine per location
* refactor: use errors.Join for multi-error aggregation
* refactor: use existing ErrorWaitGroup for parallel execution
* fix: capture loop variables + clarify SuperBlockSize safety
2 months ago
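A rough sketch of the parallel, best-effort cleanup described in the commit above: one goroutine per shard location, with errors.Join aggregating any failures. unmountAndDeleteShards and cleanupAllLocations are hypothetical stand-ins; the actual shell code reuses an existing ErrorWaitGroup helper rather than a raw WaitGroup.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// unmountAndDeleteShards is a hypothetical stand-in for the per-location
// cleanup (unmount the EC shards, then delete the shard files).
func unmountAndDeleteShards(location string) error {
	fmt.Println("cleaning up EC shards on", location)
	return nil
}

// cleanupAllLocations runs the cleanup for every shard location in parallel
// and reports all failures together instead of stopping at the first one.
func cleanupAllLocations(locations []string) error {
	var mu sync.Mutex
	var wg sync.WaitGroup
	var errs []error
	for _, loc := range locations {
		loc := loc // capture loop variable (pre-Go 1.22 semantics)
		wg.Add(1)
		go func() {
			defer wg.Done()
			if err := unmountAndDeleteShards(loc); err != nil {
				mu.Lock()
				errs = append(errs, fmt.Errorf("%s: %w", loc, err))
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return errors.Join(errs...) // nil when every location succeeded
}

func main() {
	if err := cleanupAllLocations([]string{"10.0.0.1:8080", "10.0.0.2:8080"}); err != nil {
		fmt.Println("cleanup errors:", err)
	}
}
```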
f77e6ed2d4
fix: admin UI bucket delete now properly deletes collection and checks Object Lock (#7734)
* fix: admin UI bucket delete now properly deletes collection and checks Object Lock

  Fixes #7711

  The admin UI's DeleteS3Bucket function was missing two critical behaviors:
  1. It did not delete the collection from the master (unlike the s3.bucket.delete shell command), leaving orphaned volume data that caused fs.verify errors.
  2. It did not check for Object Lock protections before deletion, potentially allowing deletion of buckets with locked objects.

  Changes:
  - Add shared Object Lock checking utilities to object_lock_utils.go:
    - EntryHasActiveLock: standalone function to check if an entry has an active lock
    - HasObjectsWithActiveLocks: shared function to scan a bucket for locked objects
  - Refactor S3 API entryHasActiveLock to use the shared EntryHasActiveLock function
  - Update admin UI DeleteS3Bucket to:
    - Check Object Lock using the shared HasObjectsWithActiveLocks utility
    - Delete the collection before deleting filer entries (matching s3.bucket.delete)

* refactor: S3 API uses shared Object Lock utilities

  Removes 114 lines of duplicated code from s3api_bucket_handlers.go by having hasObjectsWithActiveLocks delegate to the shared HasObjectsWithActiveLocks function in object_lock_utils.go. Now both S3 API and Admin UI use the same shared utilities:
  - EntryHasActiveLock
  - HasObjectsWithActiveLocks
  - recursivelyCheckLocksWithClient
  - checkVersionsForLocksWithClient

* feat: s3.bucket.delete shell command now checks Object Lock

  Add Object Lock protection to the s3.bucket.delete shell command. If the bucket has Object Lock enabled and contains objects with active retention or legal hold, deletion is prevented. Also refactors the Object Lock checking utilities into a new s3_objectlock package to avoid import cycles between shell, s3api, and admin packages. All three components now share the same logic:
  - S3 API (DeleteBucketHandler)
  - Admin UI (DeleteS3Bucket)
  - Shell command (s3.bucket.delete)

* refactor: unified Object Lock checking and consistent deletion parameters

  1. Add CheckBucketForLockedObjects() - a unified function that combines:
     - Bucket entry lookup
     - Object Lock enabled check
     - Scan for locked objects
  2. All three components now use this single function:
     - S3 API (via s3api.CheckBucketForLockedObjects)
     - Admin UI (via s3api.CheckBucketForLockedObjects)
     - Shell command (via s3_objectlock.CheckBucketForLockedObjects)
  3. Aligned deletion parameters across all components:
     - isDeleteData: false (collection already deleted separately)
     - isRecursive: true
     - ignoreRecursiveError: true

* fix: properly handle non-EOF errors in Recv() loops

  The Recv() loops in recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient were breaking on any error, which could hide real stream errors and incorrectly report 'no locks found'. Now:
  - io.EOF: break the loop (normal end of stream)
  - any other error: return it so the caller knows the stream failed

* fix: address PR review comments

  1. Add path traversal protection - validate entry names before building subdirectory paths. Skip entries with empty names, '.', '..', or containing path separators.
  2. Use an exact match for the .versions folder instead of HasSuffix() to avoid matching unrelated directories like 'foo.versions'.
  3. Replace path.Join with simple string concatenation since entry names are now validated.

* refactor: extract paginateEntries helper to reduce duplication

  The recursivelyCheckLocksWithClient and checkVersionsForLocksWithClient functions shared significant structural similarity. Extracted a generic paginateEntries helper that:
  - Handles pagination logic (lastFileName tracking, Limit)
  - Handles stream receiving with proper EOF vs error handling
  - Validates entry names (path traversal protection)
  - Calls a processEntry callback for the business logic

  This centralizes the pagination logic and makes the code more maintainable.

* feat: add context propagation for timeout and cancellation support

  All Object Lock checking functions now accept a context.Context parameter:
  - paginateEntries(ctx, client, dir, processEntry)
  - recursivelyCheckLocksWithClient(ctx, client, dir, hasLocks, currentTime)
  - checkVersionsForLocksWithClient(ctx, client, versionsDir, hasLocks, currentTime)
  - HasObjectsWithActiveLocks(ctx, client, bucketPath)
  - CheckBucketForLockedObjects(ctx, client, bucketsPath, bucketName)

  This enables:
  - Timeout support for large bucket scans
  - Cancellation propagation from HTTP requests
  - The S3 API handler now uses r.Context() for the proper request lifecycle

* fix: address PR review comments

  1. Add a DefaultBucketsPath constant in admin_server.go instead of hardcoding "/buckets" in multiple places.
  2. Add defensive normalization in EntryHasActiveLock:
     - TrimSpace to handle whitespace around values
     - ToUpper for case-insensitive comparison of legal hold and retention mode values
     - TrimSpace on the retention date before parsing

* fix: use ctx variable consistently instead of context.Background()

  In both DeleteS3Bucket and command_s3_bucket_delete, use the ctx variable defined at the start of the function for all gRPC calls instead of creating new context.Background() instances.
3 months ago
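The defensive normalization described in the commit above (trim whitespace, compare case-insensitively, trim the retention date before parsing) can be illustrated with a small helper. The lockInfo type and hasActiveLock function are illustrative only and do not mirror the actual EntryHasActiveLock signature.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// lockInfo is an illustrative stand-in for the Object Lock attributes stored
// on an entry (legal hold status, retention mode, retention-until date).
type lockInfo struct {
	LegalHold     string
	RetentionMode string
	RetainUntil   string
}

// hasActiveLock mirrors the defensive checks described above: normalize the
// stored values before comparing, and treat an unexpired retention date or an
// "ON" legal hold as an active lock.
func hasActiveLock(l lockInfo, now time.Time) bool {
	if strings.ToUpper(strings.TrimSpace(l.LegalHold)) == "ON" {
		return true
	}
	mode := strings.ToUpper(strings.TrimSpace(l.RetentionMode))
	if mode == "GOVERNANCE" || mode == "COMPLIANCE" {
		until, err := time.Parse(time.RFC3339, strings.TrimSpace(l.RetainUntil))
		if err == nil && until.After(now) {
			return true
		}
	}
	return false
}

func main() {
	l := lockInfo{RetentionMode: " compliance ", RetainUntil: "2030-01-01T00:00:00Z"}
	fmt.Println(hasActiveLock(l, time.Now())) // true
}
```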
93cca3a96b
volume.fsck: increase default cutoffTimeAgo from 5 minutes to 5 hours (#7730)
* volume.fsck: increase default cutoffTimeAgo from 5 minutes to 5 hours

  This change makes the fsck check more conservative by only considering chunks older than 5 hours as potential orphans. A 5 minute window was too aggressive and could incorrectly flag recently written chunks, especially in busy systems or during backup operations.

  Addresses #7649

* Update command_volume_fsck.go
* volume.fsck: add help text explaining the cutoffTimeAgo parameter
* Update command_volume_fsck.go
3 months ago
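For context, the cutoff simply excludes chunks written within the window from orphan candidacy. Below is a minimal sketch of that age check, with isCandidateOrphan as a hypothetical helper rather than the actual volume.fsck code.

```go
package main

import (
	"fmt"
	"time"
)

// isCandidateOrphan returns true only for chunks older than the cutoff
// window, so recently written data (for example during a backup) is never
// flagged. This is a sketch of the idea, not the actual volume.fsck code.
func isCandidateOrphan(appendedAt time.Time, cutoffTimeAgo time.Duration, now time.Time) bool {
	return appendedAt.Before(now.Add(-cutoffTimeAgo))
}

func main() {
	now := time.Now()
	recent := now.Add(-30 * time.Minute)
	old := now.Add(-6 * time.Hour)

	// With the old 5m default the recent chunk would already be a candidate;
	// with the new 5h default it is not.
	fmt.Println(isCandidateOrphan(recent, 5*time.Hour, now)) // false
	fmt.Println(isCandidateOrphan(old, 5*time.Hour, now))    // true
}
```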
662a6ac8ee
Implement a `weed shell` command to return a status overview of the cluster. (#7704)
Detailed file information will be implemented in a follow-up MR. Note also that masters are currently not reporting back EC shard sizes correctly via `master_pb.VolumeEcShardInformationMessage.shard_sizes`. For example:

```
> cluster.status
cluster:
  id: topo
  status: LOCKED
  nodes: 10
  topology: 1 DC(s)s, 1 disk(s) on 1 rack(s)
volumes:
  total: 3 volumes on 1 collections
  max size: 31457280000 bytes
  regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%)
  EC: 1 EC volumes on 14 shards (14.00 shards/volume)
storage:
  total: 186024424 bytes
  regular volumes: 186024424 bytes
  EC volumes: 0 bytes
  raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files
```
3 months ago