Tree: 683e3d06a4
add-ec-vacuum
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
feature/mini-port-detection
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-s3-object-tagging-issue-7589
fix-versioning-listing-only
ftp
gh-pages
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
pr-7412
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
12349 Commits (683e3d06a44bc22e6c6ef0ca404373c558e62acf)

683e3d06a4
go mod tidy
14 hours ago

2567be8040
refactor: remove unused gRPC connection age parameters (#7852)
The GrpcMaxConnectionAge and GrpcMaxConnectionAgeGrace constants have a troubled history - they were removed in 2022 due to gRPC issues, reverted later, and recently re-added. However, they are not essential to the core worker reconnection fix which was solved through proper goroutine ordering. The Docker Swarm DNS handling mentioned in the comments is not critical, and these parameters have caused problems in the past. Removing them simplifies the configuration without losing functionality. |
15 hours ago

14df5d1bb5
fix: improve worker reconnection robustness and prevent handleOutgoing hang (#7838)
* feat: add automatic port detection and fallback for mini command - Added port availability detection using TCP binding tests - Implemented port fallback mechanism searching for available ports - Support for both HTTP and gRPC port handling - IP-aware port checking using actual service bind address - Dual-interface verification (specific IP and wildcard 0.0.0.0) - All services (Master, Volume, Filer, S3, WebDAV, Admin) auto-reallocate to available ports - Enables multiple mini instances to run simultaneously without conflicts * fix: use actual bind IP for service health checks - Previously health checks were hardcoded to localhost (127.0.0.1) - This caused failures when services bind to actual IP (e.g., 10.21.153.8) - Now health checks use the same IP that services are binding to - Fixes Volume and other service health check failures on non-localhost IPs * refactor: improve port detection logic and remove gRPC handling duplication - findAvailablePortOnIP now returns 0 on failure instead of unavailable port Allows callers to detect when port finding fails and handle appropriately - Remove duplicate gRPC port handling from ensureAllPortsAvailableOnIP All gRPC port logic is now centralized in initializeGrpcPortsOnIP - Log final port configuration only after all ports are finalized Both HTTP and gRPC ports are now correctly initialized before logging - Add error logging when port allocation fails Makes debugging easier when ports can't be found * refactor: fix race condition and clean up port detection code - Convert parallel HTTP port checks to sequential to prevent race conditions where multiple goroutines could allocate the same available port - Remove unused 'sync' import since WaitGroup is no longer used - Add documentation to localhost wrapper functions explaining they are kept for backwards compatibility and future use - All gRPC port logic is now exclusively handled in initializeGrpcPortsOnIP eliminating any duplication in ensureAllPortsAvailableOnIP * refactor: address code review comments - constants, helper function, and cleanup - Define GrpcPortOffset constant (10000) to replace magic numbers throughout the code for better maintainability and consistency - Extract bindIp determination logic into getBindIp() helper function to eliminate code duplication between runMini and startMiniServices - Remove redundant 'calculatedPort = calculatedPort' assignment that had no effect - Update all gRPC port calculations to use GrpcPortOffset constant (lines 489, 886 and the error logging at line 501) * refactor: remove unused wrapper functions and update documentation - Remove unused localhost wrapper functions that were never called: - isPortOpen() - wrapper around isPortOpenOnIP with hardcoded 127.0.0.1 - findAvailablePort() - wrapper around findAvailablePortOnIP with hardcoded 127.0.0.1 - ensurePortAvailable() - wrapper around ensurePortAvailableOnIP with hardcoded 127.0.0.1 - ensureAllPortsAvailable() - wrapper around ensureAllPortsAvailableOnIP with hardcoded 127.0.0.1 Since this is new functionality with no backwards compatibility concerns, these wrapper functions were not needed. The comments claiming they were 'kept for future use or backwards compatibility' are no longer valid. 
- Update documentation to reference GrpcPortOffset constant instead of hardcoded 10000: - Update comment in ensureAllPortsAvailableOnIP to use GrpcPortOffset - Update admin.port.grpc flag help text to reference GrpcPortOffset Note: getBindIp() is actually being used and should be retained (contrary to the review comment suggesting it was unused - it's called in both runMini and startMiniServices functions) * refactor: prevent HTTP/gRPC port collisions and improve error handling - Add upfront reservation of all calculated gRPC ports before allocating HTTP ports to prevent collisions where an HTTP port allocation could use a port that will later be needed for a gRPC port calculation. Example scenario that is now prevented: - Master HTTP reallocated from 9333 to 9334 (original in use) - Filer HTTP search finds 19334 available and assigns it - Master gRPC calculated as 9334 + GrpcPortOffset = 19334 → collision! Now: reserved gRPC ports are tracked upfront and HTTP port search skips them. - Improve admin server gRPC port fallback error handling: - Change from silent V(1) verbose log to Warningf to make the error visible - Update comment to clarify this indicates a problem in the port initialization sequence - Add explanation that the fallback calculation may cause bind failure - Update ensureAllPortsAvailableOnIP comment to clarify it avoids reserved ports * fix: enforce reserved ports in HTTP allocation and improve admin gRPC fallback Critical fixes for port allocation safety: 1. Make findAvailablePortOnIP and ensurePortAvailableOnIP aware of reservedPorts: - Add reservedPorts map parameter to both functions - findAvailablePortOnIP now skips reserved ports when searching for alternatives - ensurePortAvailableOnIP passes reservedPorts through to findAvailablePortOnIP - This prevents HTTP ports from being allocated to ports reserved for gRPC 2. Update ensureAllPortsAvailableOnIP to pass reservedPorts: - Pass the reservedPorts map to ensurePortAvailableOnIP calls - Maintains the map updates (delete/add) for accuracy as ports change 3. Replace blind admin gRPC port fallback with proper availability checks: - Previous code just calculated *miniAdminOptions.port + GrpcPortOffset - New code checks both the calculated port and finds alternatives if needed - Uses the same availability checking logic as initializeGrpcPortsOnIP - Properly logs the fallback process and any port changes - Will fail gracefully if no available ports found (consistent with other services) These changes eliminate two critical vulnerabilities: - HTTP port allocation can no longer accidentally claim gRPC ports - Admin gRPC port fallback no longer blindly uses an unchecked port * fix: prevent gRPC port collisions during multi-service fallback allocation Critical fix for gRPC port allocation safety across multiple services: Problem: When multiple services need gRPC port fallback allocation in sequence (e.g., Master gRPC unavailable → finds alternative, then Filer gRPC unavailable → searches from calculated port), there was no tracking of previously allocated gRPC ports. This could allow two services to claim the same port. 
Scenario that is now prevented: - Master gRPC: calculated 19333 unavailable → finds 19334 → assigns 19334 - Filer gRPC: calculated 18888 unavailable → searches from 18889, might land on 19334 if consecutive ports in range are unavailable (especially with custom port configurations or in high-port-contention environments) Solution: - Add allocatedGrpcPorts map to track gRPC ports allocated within the function - Check allocatedGrpcPorts before using calculated port for each service - Pass allocatedGrpcPorts to findAvailablePortOnIP when finding fallback ports - Add allocatedGrpcPorts[port] = true after each successful allocation - This ensures no two services can allocate the same gRPC port The fix handles both: 1. Calculated gRPC ports (when grpcPort == 0) 2. Explicitly set gRPC ports (when user provides -service.port.grpc value) While default port spacing makes collision unlikely, this fix is essential for: - Custom port configurations - High-contention environments - Edge cases with many unavailable consecutive ports - Correctness and safety guarantees * feat: enforce hard-fail behavior for explicitly specified ports When users explicitly specify a port via command-line flags (e.g., -s3.port=8333), the server should fail immediately if the port is unavailable, rather than silently falling back to an alternative port. This prevents user confusion and makes misconfiguration failures obvious. Changes: - Modified ensurePortAvailableOnIP() to check if a port was explicitly passed via isFlagPassed() - If an explicit port is unavailable, return error instead of silently allocating alternative - Updated ensureAllPortsAvailableOnIP() to handle the returned error and fail startup - Modified runMini() to check error from ensureAllPortsAvailableOnIP() and return false on failure - Default ports (not explicitly specified) continue to fallback to available alternatives This ensures: - Explicit ports: fail if unavailable (e.g., -s3.port=8333 fails if 8333 is taken) - Default ports: fallback to alternatives (e.g., s3.port without flag falls back to 8334 if 8333 taken) * fix: accurate error messages for explicitly specified unavailable ports When a port is explicitly specified via CLI flags but is unavailable, the error message now correctly reports the originally requested port instead of reporting a fallback port that was calculated internally. The issue was that the config file applied after CLI flag parsing caused isFlagPassed() to return true for ports loaded from the config file (since flag.Visit() was called during config file application), incorrectly marking them as explicitly specified. Solution: Capture which port flags were explicitly passed on the CLI BEFORE the config file is applied, storing them in the explicitPortFlags map. This preserves the accurate distinction between user-specified ports and defaults/config-file ports. Example: - User runs: weed mini -dir=. -s3.port=22 - Now correctly shows: 'port 22 for S3 (specified by flag s3.port) is not available' - Previously incorrectly showed: 'port 8334 for S3...' (some calculated fallback) * fix: respect explicitly specified ports and prevent config file override When a port is explicitly specified via CLI flags (e.g., -s3.port=8333), the config file options should NOT override it. Previously, config file options would be applied if the flag value differed from default, but this check wasn't sufficient to prevent override in all cases. Solution: Check the explicitPortFlags map before applying any config file port options. 
If a port was explicitly passed on the CLI, skip applying the config file option for that port. This ensures: - Explicit ports take absolute precedence over config file ports - Config file ports are only used if port wasn't specified on CLI - Example: 'weed mini -s3.port=8333' will use 8333, never the config file value * fix: don't print usage on port allocation error When a port allocation fails (e.g., explicit port is unavailable), exit immediately without showing the usage example. This provides cleaner error output when the error is expected (port conflict). * refactor: clean up code quality issues Remove no-op assignment (calculatedPort = calculatedPort) that had no effect. The variable already holds the correct value when no alternative port is found. Improve documentation for the defensive gRPC port initialization fallback in startAdminServer. While this code shouldn't execute in normal flow because ensureAllPortsAvailableOnIP is called earlier in runMini, the fallback handles edge cases where port initialization may have been skipped or failed silently due to configuration changes or error handling paths. * fix: improve worker reconnection robustness and prevent handleOutgoing hang - Add dedicated streamFailed signaling channel to abort registration waits early when stream dies - Add per-connection regWait channel to route RegistrationResponse separately from shared incoming channel, avoiding race where other consumers steal the response - Refactor handleOutgoing() loop to use select on streamExit/errCh, ensuring old handlers exit cleanly on reconnect (prevents stale senders competing with new stream) - Buffer msgCh to reduce shutdown edge cases - Add cleanup of streamFailed and regWait channels on reconnect/disconnect - Fixes registration timeout and potential stream lifecycle hangs on aggressive server max_age recycling * fix: prevent deadlock when stream error occurs - make cmds send non-blocking If managerLoop is blocked (e.g., waiting on regWait), a blocking send to cmds will deadlock handleIncoming. Make the send non-blocking to prevent this. * fix: address code review comments on mini.go port allocation - Remove flawed fallback gRPC port initialization and convert to fatal error (ensures port initialization issues are caught immediately instead of silently failing with an empty reserved ports map) - Extract common port validation logic to eliminate duplication between calculated and explicitly set gRPC port handling * Fix critical race condition and improve error handling in worker client - Capture channel pointers before checking for nil (prevents TOCTOU race with reconnect) - Use async fallback goroutine for cmds send to prevent error loss when manager is busy - Consistently close regWait channel on disconnect (matches streamFailed behavior) - Complete cleanup of channels on failed registration - Improve error messages for clarity (replace 'timeout' with 'failed' where appropriate) * Add debug logging for registration response routing Add glog.V(3) and glog.V(2) logs to track successful and dropped registration responses in handleIncoming, helping diagnose registration issues in production. * Update weed/worker/client.go Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> * Ensure stream errors are never lost by using async fallback When handleIncoming detects a stream error, queue ActionStreamError to managerLoop with non-blocking send. If managerLoop is busy and cmds channel is full, spawn an async goroutine to queue the error asynchronously. 
This ensures the manager is always notified of stream failures, preventing the connection from remaining in an inconsistent state (connected=true while stream is dead). * Refactor handleOutgoing to eliminate duplicate error handling code Extract error handling and cleanup logic into helper functions to avoid duplication in nested select statements. This improves maintainability and reduces the risk of inconsistencies when updating error handling logic. * Prevent goroutine leaks by adding timeouts to blocking cmds sends Add 2-second timeouts to both handleStreamError and the async fallback goroutine when sending ActionStreamError to cmds channel. This prevents the handleOutgoing and handleIncoming goroutines from blocking indefinitely if the managerLoop is no longer receiving (e.g., during shutdown), preventing resource leaks. * Properly close regWait channel in reconnect to prevent resource leaks Close the regWait channel before setting it to nil in reconnect(), matching the pattern used in handleDisconnect(). This ensures any goroutines waiting on this channel during reconnection are properly signaled, preventing them from hanging. * Use non-blocking async pattern in handleOutgoing error reporting Refactor handleStreamError to use non-blocking send with async fallback goroutine, matching the pattern used in handleIncoming. This allows handleOutgoing to exit immediately when errors occur rather than blocking for up to 2 seconds, improving responsiveness and consistency across handlers. * fix: drain regWait channel before closing to prevent message loss - Add drain loop before closing regWait in reconnect() cleanup - Add drain loop before closing regWait in handleDisconnect() cleanup - Ensures no pending RegistrationResponse messages are lost during channel closure * docs: add comments explaining regWait buffered channel design - Document that regWait buffer size 1 prevents race conditions - Explain non-blocking send pattern between sendRegistration and handleIncoming - Clarify timing of registration response handling in handleIncoming * fix: improve error messages and channel handling in sendRegistration - Clarify error message when stream fails before registration sent - Use two-value receive form to properly detect closed channels - Better distinguish between closed channel and nil value scenarios * refactor: extract drain and close channel logic into helper function - Create drainAndCloseRegWaitChannel() helper to eliminate code duplication - Replace 3 copies of drain-and-close logic with single function call - Improves maintainability and consistency across cleanup paths --------- Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> |
15 hours ago
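The non-blocking send with an asynchronous, time-bounded fallback described in this commit can be shown in isolation. A minimal sketch, assuming a simplified `action` type and command channel rather than the actual types in `weed/worker/client.go`:

```go
package main

import (
	"fmt"
	"time"
)

type action string

const actionStreamError action = "stream_error"

// reportStreamError tries to hand a stream-error action to the manager loop
// without ever blocking the stream handler. If the manager is busy, a goroutine
// keeps trying for a bounded time so the error is not silently lost and the
// goroutine cannot leak if the manager has stopped receiving.
func reportStreamError(cmds chan<- action) {
	select {
	case cmds <- actionStreamError:
		return // manager picked it up immediately
	default:
	}
	go func() {
		select {
		case cmds <- actionStreamError:
		case <-time.After(2 * time.Second):
			fmt.Println("manager not receiving; dropping stream error notification")
		}
	}()
}

func main() {
	cmds := make(chan action, 1)
	reportStreamError(cmds)
	fmt.Println("queued:", <-cmds)
}
```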

ce71968bad
chore(deps): bump golang.org/x/net from 0.47.0 to 0.48.0 (#7849)
* chore(deps): bump golang.org/x/net from 0.47.0 to 0.48.0 Bumps [golang.org/x/net](https://github.com/golang/net) from 0.47.0 to 0.48.0. - [Commits](https://github.com/golang/net/compare/v0.47.0...v0.48.0) --- updated-dependencies: - dependency-name: golang.org/x/net dependency-version: 0.48.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> * mod --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Chris Lu <chris.lu@gmail.com> |
17 hours ago

a898160e39
chore(deps): bump golang.org/x/crypto from 0.45.0 to 0.46.0 (#7847)
* chore(deps): bump golang.org/x/crypto from 0.45.0 to 0.46.0 Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.45.0 to 0.46.0. - [Commits](https://github.com/golang/crypto/compare/v0.45.0...v0.46.0) --- updated-dependencies: - dependency-name: golang.org/x/crypto dependency-version: 0.46.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> * mod --------- Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: Chris Lu <chris.lu@gmail.com> |
17 hours ago

aaa6de7712
Increase timeout from 5m to 10m for S3 HTTPS test workflow
17 hours ago

1d0361d936
Fix: Eliminate duplicate versioned objects in S3 list operations (#7850)
* Fix: Eliminate duplicate versioned objects in S3 list operations - Move versioned directory processing outside of pagination loop to process only once - Add deduplication during .versions directory collection phase - Fix directory handling to not add directories to results in recursive mode - Directly add versioned entries to contents array instead of using callback Fixes issue where AWS S3 list operations returned duplicated versioned objects (e.g., 1000 duplicate entries from 4 unique objects). Now correctly returns only the unique logical entries without duplication. Verified with: aws s3api list-objects --endpoint-url http://localhost:8333 --bucket pm-itatiaiucu-01 Returns exactly 4 entries (ClientInfo.xml and Repository from 2 Veeam backup folders) * Refactor: Process .versions directories immediately when encountered Instead of collecting .versions directories and processing them after the pagination loop, process them immediately when encountered during traversal. Benefits: - Simpler code: removed versionedDirEntry struct and collection array - More efficient: no need to store and iterate through collected entries - Same O(V) complexity but with less memory overhead - Clearer logic: processing happens in one pass during traversal Since each .versions directory is only visited once during recursive traversal (we never traverse into them), there's no need for deferred processing or deduplication. * Add comprehensive tests for versioned objects list - TestListObjectsWithVersionedObjects: Tests listing with various delimiters - TestVersionedObjectsNoDuplication: Core test validating no 250x duplication - TestVersionedObjectsWithDeleteMarker: Tests delete marker filtering - TestVersionedObjectsMaxKeys: Tests pagination with versioned objects - TestVersionsDirectoryNotTraversed: Ensures .versions never traversed - Fix existing test signature to match updated doListFilerEntries * style: Fix formatting alignment in versioned objects tests * perf: Optimize path extraction using string indexing Replace multiple strings.Split/Join calls with efficient strings.Index slicing to extract bucket-relative path from directory string. Reduces unnecessary allocations and improves performance in versioned objects listing path construction. * refactor: Address code review feedback from Gemini Code Assist 1. Fix misleading comment about versioned directory processing location. Versioned directories are processed immediately in doListFilerEntries, not deferred to ListObjectsV1Handler. 2. Simplify path extraction logic using explicit bucket path construction instead of index-based string slicing for better readability and maintainability. 3. Add clarifying comment to test callback explaining why production logic is duplicated - necessary because listFilerEntries is not easily testable with filer client injection. 
* fmt * refactor: Address code review feedback from Copilot - Fix misleading comment about versioned directory processing location (note that processing happens within doListFilerEntries, not at top level) - Add maxKeys validation checks in all test callbacks for consistency - Add maxKeys check before calling eachEntryFn for versioned objects - Improve test documentation to clarify testing approach and avoid apologetic tone * refactor: Address code review feedback from Gemini Code Assist - Remove redundant maxKeys check before eachEntryFn call on line 541 (the loop already checks maxKeys <= 0 at line 502, ensuring quota exists) - Fix pagination pattern consistency in all test callbacks - TestVersionedObjectsNoDuplication: Use cursor.maxKeys <= 0 check and decrement - TestVersionedObjectsWithDeleteMarker: Use cursor.maxKeys <= 0 check and decrement - TestVersionsDirectoryNotTraversed: Use cursor.maxKeys <= 0 check and decrement - Ensures consistent pagination logic across all callbacks matching production behavior * refactor: Address code review suggestions for code quality - Adjust log verbosity from V(5) to V(4) for file additions to reduce noise while maintaining useful debug output during troubleshooting - Remove unused isRecursive parameter from doListFilerEntries function signature and all call sites (not used for any logic decisions) - Consolidate redundant comments about versioned directory handling to reduce documentation duplication These changes improve code maintainability and clarity. * fmt * refactor: Add pagination test and optimize stream processing - Add comprehensive test validation to TestVersionedObjectsMaxKeys that verifies truncation is correctly set when maxKeys is exhausted with more entries available, ensuring proper pagination state - Optimize stream processing in doListFilerEntries by using 'break' instead of 'continue' when quota is exhausted (cursor.maxKeys <= 0) This avoids receiving and discarding entries from the stream when we've already reached the requested limit, improving efficiency |
17 hours ago
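The single-pass handling of `.versions` directories and the break-on-exhausted-quota pagination described above can be sketched with simplified types. A minimal illustration, assuming hypothetical `entry` and `cursor` types rather than the real filer structures:

```go
package main

import (
	"fmt"
	"strings"
)

type entry struct {
	name  string
	isDir bool
}

type cursor struct{ maxKeys int }

// listEntries walks a flat slice of entries, handling ".versions" directories
// inline (never traversing into them, so each logical object is emitted once)
// and stopping as soon as the maxKeys quota is used up.
func listEntries(entries []entry, c *cursor, emit func(string)) (truncated bool) {
	for _, e := range entries {
		if c.maxKeys <= 0 {
			// Quota exhausted: stop consuming the stream instead of skipping entries.
			return true
		}
		if e.isDir && strings.HasSuffix(e.name, ".versions") {
			// Emit the single logical object backed by this .versions directory.
			emit(strings.TrimSuffix(e.name, ".versions"))
			c.maxKeys--
			continue
		}
		if !e.isDir {
			emit(e.name)
			c.maxKeys--
		}
	}
	return false
}

func main() {
	entries := []entry{{"a.txt", false}, {"b.txt.versions", true}, {"c.txt", false}}
	c := &cursor{maxKeys: 2}
	truncated := listEntries(entries, c, func(k string) { fmt.Println("key:", k) })
	fmt.Println("truncated:", truncated)
}
```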

276fd764da
chore(deps): bump github.com/aws/aws-sdk-go-v2/config from 1.31.3 to 1.32.6 (#7846)
chore(deps): bump github.com/aws/aws-sdk-go-v2/config Bumps [github.com/aws/aws-sdk-go-v2/config](https://github.com/aws/aws-sdk-go-v2) from 1.31.3 to 1.32.6. - [Release notes](https://github.com/aws/aws-sdk-go-v2/releases) - [Changelog](https://github.com/aws/aws-sdk-go-v2/blob/main/changelog-template.json) - [Commits](https://github.com/aws/aws-sdk-go-v2/compare/config/v1.31.3...v1.32.6) --- updated-dependencies: - dependency-name: github.com/aws/aws-sdk-go-v2/config dependency-version: 1.32.6 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
19 hours ago

044e448305
chore(deps): bump github.com/ydb-platform/ydb-go-sdk-auth-environ from 0.5.0 to 0.5.1 (#7848)
chore(deps): bump github.com/ydb-platform/ydb-go-sdk-auth-environ Bumps [github.com/ydb-platform/ydb-go-sdk-auth-environ](https://github.com/ydb-platform/ydb-go-sdk-auth-environ) from 0.5.0 to 0.5.1. - [Changelog](https://github.com/ydb-platform/ydb-go-sdk-auth-environ/blob/master/CHANGELOG.md) - [Commits](https://github.com/ydb-platform/ydb-go-sdk-auth-environ/compare/v0.5.0...v0.5.1) --- updated-dependencies: - dependency-name: github.com/ydb-platform/ydb-go-sdk-auth-environ dependency-version: 0.5.1 dependency-type: direct:production update-type: version-update:semver-patch ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |
19 hours ago

cc2edfaf68
fix: enable RetryForever for active-active cluster sync to prevent out-of-sync (#7840)
Fixes #7230 When a cluster goes down during file replication, the chunk upload process would fail after a limited number of retries. Once the remote cluster came back online, those failed uploads were never retried, leaving the clusters out-of-sync. This change enables the RetryForever flag in the UploadOption when replicating chunks between filers. This ensures that upload operations will keep retrying indefinitely, and once the remote cluster comes back online, the pending uploads will automatically succeed. Users no longer need to manually run fs.meta.save and fs.meta.load as a workaround for out-of-sync clusters. |
1 day ago
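The effect of the RetryForever flag is that a failed chunk upload keeps being retried until the remote cluster responds, instead of being dropped after a fixed number of attempts. A minimal standalone sketch of that retry-until-success semantics (not the actual UploadOption API), with a hypothetical `retryForever` helper and exponential backoff:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryForever keeps invoking op until it succeeds, backing off between
// attempts, so a temporarily unreachable peer does not cause work to be lost.
func retryForever(name string, op func() error) {
	backoff := time.Second
	for attempt := 1; ; attempt++ {
		err := op()
		if err == nil {
			return
		}
		fmt.Printf("%s attempt %d failed: %v; retrying in %v\n", name, attempt, err, backoff)
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2
		}
	}
}

func main() {
	remaining := 3 // simulate a peer that comes back after a few failures
	retryForever("replicate chunk", func() error {
		if remaining > 0 {
			remaining--
			return errors.New("remote cluster unreachable")
		}
		return nil
	})
	fmt.Println("chunk replicated")
}
```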

9a4f32fc49
feat: add automatic port detection and fallback for mini command (#7836)
* feat: add automatic port detection and fallback for mini command - Added port availability detection using TCP binding tests - Implemented port fallback mechanism searching for available ports - Support for both HTTP and gRPC port handling - IP-aware port checking using actual service bind address - Dual-interface verification (specific IP and wildcard 0.0.0.0) - All services (Master, Volume, Filer, S3, WebDAV, Admin) auto-reallocate to available ports - Enables multiple mini instances to run simultaneously without conflicts * fix: use actual bind IP for service health checks - Previously health checks were hardcoded to localhost (127.0.0.1) - This caused failures when services bind to actual IP (e.g., 10.21.153.8) - Now health checks use the same IP that services are binding to - Fixes Volume and other service health check failures on non-localhost IPs * refactor: improve port detection logic and remove gRPC handling duplication - findAvailablePortOnIP now returns 0 on failure instead of unavailable port Allows callers to detect when port finding fails and handle appropriately - Remove duplicate gRPC port handling from ensureAllPortsAvailableOnIP All gRPC port logic is now centralized in initializeGrpcPortsOnIP - Log final port configuration only after all ports are finalized Both HTTP and gRPC ports are now correctly initialized before logging - Add error logging when port allocation fails Makes debugging easier when ports can't be found * refactor: fix race condition and clean up port detection code - Convert parallel HTTP port checks to sequential to prevent race conditions where multiple goroutines could allocate the same available port - Remove unused 'sync' import since WaitGroup is no longer used - Add documentation to localhost wrapper functions explaining they are kept for backwards compatibility and future use - All gRPC port logic is now exclusively handled in initializeGrpcPortsOnIP eliminating any duplication in ensureAllPortsAvailableOnIP * refactor: address code review comments - constants, helper function, and cleanup - Define GrpcPortOffset constant (10000) to replace magic numbers throughout the code for better maintainability and consistency - Extract bindIp determination logic into getBindIp() helper function to eliminate code duplication between runMini and startMiniServices - Remove redundant 'calculatedPort = calculatedPort' assignment that had no effect - Update all gRPC port calculations to use GrpcPortOffset constant (lines 489, 886 and the error logging at line 501) * refactor: remove unused wrapper functions and update documentation - Remove unused localhost wrapper functions that were never called: - isPortOpen() - wrapper around isPortOpenOnIP with hardcoded 127.0.0.1 - findAvailablePort() - wrapper around findAvailablePortOnIP with hardcoded 127.0.0.1 - ensurePortAvailable() - wrapper around ensurePortAvailableOnIP with hardcoded 127.0.0.1 - ensureAllPortsAvailable() - wrapper around ensureAllPortsAvailableOnIP with hardcoded 127.0.0.1 Since this is new functionality with no backwards compatibility concerns, these wrapper functions were not needed. The comments claiming they were 'kept for future use or backwards compatibility' are no longer valid. 
- Update documentation to reference GrpcPortOffset constant instead of hardcoded 10000: - Update comment in ensureAllPortsAvailableOnIP to use GrpcPortOffset - Update admin.port.grpc flag help text to reference GrpcPortOffset Note: getBindIp() is actually being used and should be retained (contrary to the review comment suggesting it was unused - it's called in both runMini and startMiniServices functions) * refactor: prevent HTTP/gRPC port collisions and improve error handling - Add upfront reservation of all calculated gRPC ports before allocating HTTP ports to prevent collisions where an HTTP port allocation could use a port that will later be needed for a gRPC port calculation. Example scenario that is now prevented: - Master HTTP reallocated from 9333 to 9334 (original in use) - Filer HTTP search finds 19334 available and assigns it - Master gRPC calculated as 9334 + GrpcPortOffset = 19334 → collision! Now: reserved gRPC ports are tracked upfront and HTTP port search skips them. - Improve admin server gRPC port fallback error handling: - Change from silent V(1) verbose log to Warningf to make the error visible - Update comment to clarify this indicates a problem in the port initialization sequence - Add explanation that the fallback calculation may cause bind failure - Update ensureAllPortsAvailableOnIP comment to clarify it avoids reserved ports * fix: enforce reserved ports in HTTP allocation and improve admin gRPC fallback Critical fixes for port allocation safety: 1. Make findAvailablePortOnIP and ensurePortAvailableOnIP aware of reservedPorts: - Add reservedPorts map parameter to both functions - findAvailablePortOnIP now skips reserved ports when searching for alternatives - ensurePortAvailableOnIP passes reservedPorts through to findAvailablePortOnIP - This prevents HTTP ports from being allocated to ports reserved for gRPC 2. Update ensureAllPortsAvailableOnIP to pass reservedPorts: - Pass the reservedPorts map to ensurePortAvailableOnIP calls - Maintains the map updates (delete/add) for accuracy as ports change 3. Replace blind admin gRPC port fallback with proper availability checks: - Previous code just calculated *miniAdminOptions.port + GrpcPortOffset - New code checks both the calculated port and finds alternatives if needed - Uses the same availability checking logic as initializeGrpcPortsOnIP - Properly logs the fallback process and any port changes - Will fail gracefully if no available ports found (consistent with other services) These changes eliminate two critical vulnerabilities: - HTTP port allocation can no longer accidentally claim gRPC ports - Admin gRPC port fallback no longer blindly uses an unchecked port * fix: prevent gRPC port collisions during multi-service fallback allocation Critical fix for gRPC port allocation safety across multiple services: Problem: When multiple services need gRPC port fallback allocation in sequence (e.g., Master gRPC unavailable → finds alternative, then Filer gRPC unavailable → searches from calculated port), there was no tracking of previously allocated gRPC ports. This could allow two services to claim the same port. 
Scenario that is now prevented: - Master gRPC: calculated 19333 unavailable → finds 19334 → assigns 19334 - Filer gRPC: calculated 18888 unavailable → searches from 18889, might land on 19334 if consecutive ports in range are unavailable (especially with custom port configurations or in high-port-contention environments) Solution: - Add allocatedGrpcPorts map to track gRPC ports allocated within the function - Check allocatedGrpcPorts before using calculated port for each service - Pass allocatedGrpcPorts to findAvailablePortOnIP when finding fallback ports - Add allocatedGrpcPorts[port] = true after each successful allocation - This ensures no two services can allocate the same gRPC port The fix handles both: 1. Calculated gRPC ports (when grpcPort == 0) 2. Explicitly set gRPC ports (when user provides -service.port.grpc value) While default port spacing makes collision unlikely, this fix is essential for: - Custom port configurations - High-contention environments - Edge cases with many unavailable consecutive ports - Correctness and safety guarantees * feat: enforce hard-fail behavior for explicitly specified ports When users explicitly specify a port via command-line flags (e.g., -s3.port=8333), the server should fail immediately if the port is unavailable, rather than silently falling back to an alternative port. This prevents user confusion and makes misconfiguration failures obvious. Changes: - Modified ensurePortAvailableOnIP() to check if a port was explicitly passed via isFlagPassed() - If an explicit port is unavailable, return error instead of silently allocating alternative - Updated ensureAllPortsAvailableOnIP() to handle the returned error and fail startup - Modified runMini() to check error from ensureAllPortsAvailableOnIP() and return false on failure - Default ports (not explicitly specified) continue to fallback to available alternatives This ensures: - Explicit ports: fail if unavailable (e.g., -s3.port=8333 fails if 8333 is taken) - Default ports: fallback to alternatives (e.g., s3.port without flag falls back to 8334 if 8333 taken) * fix: accurate error messages for explicitly specified unavailable ports When a port is explicitly specified via CLI flags but is unavailable, the error message now correctly reports the originally requested port instead of reporting a fallback port that was calculated internally. The issue was that the config file applied after CLI flag parsing caused isFlagPassed() to return true for ports loaded from the config file (since flag.Visit() was called during config file application), incorrectly marking them as explicitly specified. Solution: Capture which port flags were explicitly passed on the CLI BEFORE the config file is applied, storing them in the explicitPortFlags map. This preserves the accurate distinction between user-specified ports and defaults/config-file ports. Example: - User runs: weed mini -dir=. -s3.port=22 - Now correctly shows: 'port 22 for S3 (specified by flag s3.port) is not available' - Previously incorrectly showed: 'port 8334 for S3...' (some calculated fallback) * fix: respect explicitly specified ports and prevent config file override When a port is explicitly specified via CLI flags (e.g., -s3.port=8333), the config file options should NOT override it. Previously, config file options would be applied if the flag value differed from default, but this check wasn't sufficient to prevent override in all cases. Solution: Check the explicitPortFlags map before applying any config file port options. 
If a port was explicitly passed on the CLI, skip applying the config file option for that port. This ensures: - Explicit ports take absolute precedence over config file ports - Config file ports are only used if port wasn't specified on CLI - Example: 'weed mini -s3.port=8333' will use 8333, never the config file value * fix: don't print usage on port allocation error When a port allocation fails (e.g., explicit port is unavailable), exit immediately without showing the usage example. This provides cleaner error output when the error is expected (port conflict). * fix: increase worker registration timeout for reconnections Increase the worker registration timeout from 10 seconds to 30 seconds. The 10-second timeout was too aggressive for reconnections when the admin server might be busy processing other operations. Reconnecting workers need more time to: 1. Re-establish the gRPC connection 2. Send the registration message 3. Wait for the admin server to process and respond This prevents spurious "registration timeout" errors during long-running mini instances when brief network hiccups or admin server load cause delays. * refactor: clean up code quality issues Remove no-op assignment (calculatedPort = calculatedPort) that had no effect. The variable already holds the correct value when no alternative port is found. Improve documentation for the defensive gRPC port initialization fallback in startAdminServer. While this code shouldn't execute in normal flow because ensureAllPortsAvailableOnIP is called earlier in runMini, the fallback handles edge cases where port initialization may have been skipped or failed silently due to configuration changes or error handling paths. |
1 day ago
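The port detection described here boils down to attempting a TCP bind on the service's bind IP, searching upward from the requested port, skipping ports reserved for gRPC (HTTP port + 10000). A minimal sketch with hypothetical helper names patterned after the ones described above; the real mini command logic also distinguishes explicitly passed flags and checks the wildcard interface:

```go
package main

import (
	"fmt"
	"net"
)

const grpcPortOffset = 10000 // gRPC port = HTTP port + offset, as described above

// portAvailable reports whether a TCP listener can be bound on ip:port right now.
func portAvailable(ip string, port int) bool {
	ln, err := net.Listen("tcp", fmt.Sprintf("%s:%d", ip, port))
	if err != nil {
		return false
	}
	ln.Close()
	return true
}

// findAvailablePort searches upward from start within a bounded window,
// skipping reserved ports, and returns 0 when nothing is found so callers
// can detect failure explicitly.
func findAvailablePort(ip string, start int, reserved map[int]bool) int {
	for port := start; port < start+100; port++ {
		if reserved[port] {
			continue
		}
		if portAvailable(ip, port) && portAvailable(ip, port+grpcPortOffset) {
			return port
		}
	}
	return 0
}

func main() {
	reserved := map[int]bool{19334: true} // e.g. a port already promised to a gRPC service
	if port := findAvailablePort("127.0.0.1", 9333, reserved); port != 0 {
		fmt.Printf("using HTTP port %d and gRPC port %d\n", port, port+grpcPortOffset)
	} else {
		fmt.Println("no available port found")
	}
}
```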

683eef72a6
fix: prevent panic on close of closed channel in worker client reconnection (#7837)
* fix: prevent panic on close of closed channel in worker client reconnection - Use idiomatic Go pattern of setting channels to nil after closing instead of flags - Extract repeated safe-close logic into safeCloseChannel() helper method - Call safeCloseChannel() in attemptConnection(), reconnect(), and handleDisconnect() - In safeCloseChannel(), check if channel is not nil, close it, and set to nil - Also set streamExit to nil in attemptConnection() when registration fails - This follows Go best practices for channel management and prevents double-close panics - Improved code maintainability by eliminating duplication * fix: prevent panic on close of closed channel in worker client reconnection - Use idiomatic Go pattern of setting channels to nil after closing instead of flags - Extract repeated safe-close logic into safeCloseChannel() helper method - Call safeCloseChannel() in attemptConnection(), reconnect(), and handleDisconnect() - In safeCloseChannel(), check if channel is not nil, close it, and set to nil - Also set streamExit to nil in attemptConnection() when registration fails - Document thread-safety assumptions: function is safe in current usage (serialized in managerLoop) but would need synchronization if used in concurrent contexts - This follows Go best practices for channel management and prevents double-close panics - Improved code maintainability by eliminating duplication |
2 days ago
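The double-close protection amounts to the close-then-nil pattern: close a channel only if it is non-nil, then clear the reference so later cleanup paths become no-ops. A minimal sketch, assuming calls are serialized as they are in the manager loop (concurrent callers would need a mutex):

```go
package main

import "fmt"

type client struct {
	streamExit chan struct{}
}

// safeClose closes the channel pointed to by ch if it is non-nil and then sets
// it to nil, so a later cleanup path cannot close the same channel twice.
func safeClose(ch *chan struct{}) {
	if *ch != nil {
		close(*ch)
		*ch = nil
	}
}

func main() {
	c := &client{streamExit: make(chan struct{})}
	safeClose(&c.streamExit)
	safeClose(&c.streamExit) // second call is a no-op instead of a panic
	fmt.Println("streamExit is nil:", c.streamExit == nil)
}
```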

1dfda78e59
update doc
2 days ago

31cb28d9d3
feat: auto-configure optimal volume size limit based on available disk space (#7833)
* feat: auto-configure optimal volume size limit based on available disk space - Add calculateOptimalVolumeSizeMB() function with OS-independent disk detection - Reuses existing stats.NewDiskStatus() which works across Linux, macOS, Windows, BSD, Solaris - Algorithm: available disk / 100, rounded up to nearest power of 2 (64MB, 128MB, 256MB, 512MB, 1024MB) - Volume size capped to maximum of 1GB (1024MB) for better stability - Minimum volume size is 64MB - Uses efficient bits.Len() for power-of-2 rounding instead of floating-point operations - Only auto-calculates volume size if user didn't specify a custom value via -master.volumeSizeLimitMB - Respects user-specified values without override - Master logs whether value was auto-calculated or user-specified - Welcome message displays the configured volume size with correct format string ordering - Removed unused autoVolumeSizeMB variable (logging handles source tracking) Fixes: #0 * Refactor: Consolidate volume size constants and use robust flag detection for mini mode This commit addresses all code review feedback on the auto-optimal volume size feature: 1. **Consolidate hardcoded defaults into package-level constants** - Moved minVolumeSizeMB=64 and maxVolumeSizeMB=1024 from local function-scope constants to package-level constants for consistency and maintainability - All three volume size constants (min, default, max) now defined in one place 2. **Implement robust flag detection using flag.Visit()** - Added isFlagPassed() helper function using flag.Visit() to check if a CLI flag was explicitly passed on the command line - Replaces the previous implementation that checked if current value equals default (which could incorrectly assume user intent if default was specified) - Now correctly detects user override regardless of the actual value 3. **Restructure power-of-2 rounding logic for clarity** - Changed from 'only round if above min threshold' to 'always round to power-of-2 first, then apply min/max constraints' - More robust: works correctly even if min/max constants are adjusted in future - Clearer intent: all non-zero values go through consistent rounding logic 4. **Fix import ordering** - Added 'flag' import (aliased to fla9 package) to support isFlagPassed() - Added 'math/bits' import to support power-of-2 rounding Benefits: - Better code organization with all volume size limits in package constants - Correct user override detection that doesn't rely on value equality checks - More maintainable rounding logic that's easier to understand and modify - Consistent with SeaweedFS conventions (uses fla9 package like other commands) * fix: Address code review feedback for volume size calculation This commit resolves three code review comments for better code quality and robustness: 1. **Handle comma-separated directories in -dir flag** - The -dir flag accepts comma-separated list of directories, but the volume size calculation was passing the entire string to util.ResolvePath() - Now splits on comma and uses the first directory for disk space calculation - Added explanatory comment about the multi-directory support - Ensures the optimal size calculation works correctly in all scenarios 2. **Change disk detection failure from verbose log to warning** - When disk status cannot be determined, the warning is now logged via glog.Warningf() instead of glog.V(1).Infof() - Makes the event visible in default logs without requiring verbose mode - Better alerting for operators about fallback to default values 3. 
**Avoid recalculating availableMB/100 and define bytesPerMB constant** - Added bytesPerMB = 1024*1024 constant for clarity and reusability - Replaced hardcoded (1024 * 1024) with bytesPerMB constant - Store availableMB/100 in initialOptimalMB variable to avoid recalculation - Log message now references initialOptimalMB instead of recalculating - Improves maintainability and reduces redundant computation All three changes maintain the same logic while improving code quality and robustness as requested by the reviewer. * fix: Address rounding logic, logging clarity, and disk capacity measurement issues This commit resolves three additional code review comments to improve robustness and clarity of the volume size calculation: 1. **Fix power-of-2 rounding logic for edge cases** - The previous condition 'if optimalMB > 0' created a bug: when optimalMB=1, bits.Len(0)=0, resulting in 1<<0=1, which is below minimum (64MB) - Changed to explicitly handle zero case first: 'if optimalMB == 0' - Separate zero-handling from power-of-2 rounding ensures correct behavior: * optimalMB=0 → set to minVolumeSizeMB (64) * optimalMB>=1 → apply power-of-2 rounding - Then apply min/max constraints unconditionally - More explicit and easier to reason about correctness 2. **Use total disk capacity instead of free space for stable configuration** - Changed from diskStatus.Free (available space) to diskStatus.All (total capacity) - Free space varies based on current disk usage at startup time - This caused inconsistent volume sizes: same disk could get different sizes depending on how full it is when the service starts - Using total capacity ensures predictable, stable configuration across restarts - Better aligns with the intended behavior of sizing based on disk capacity - Added explanatory comments about why total capacity is more appropriate 3. **Improve log message clarity and accuracy** - Updated message to clearly show: * 'total disk capacity' instead of vague 'available disk' * 'capacity/100 before rounding' to match actual calculation * 'clamped to [min,max]' instead of 'capped to max' to show both bounds * Includes min and max values in log for context - More accurate and helpful for operators troubleshooting volume sizing These changes ensure the volume size calculation is both correct and predictable. * feat: Save mini configuration to file for persistence and documentation This commit adds persistent configuration storage for the 'weed mini' command, saving all non-default parameters to a JSON configuration file for: 1. **Configuration Documentation** - All parameters actually passed on the command line are saved - Provides a clear record of the running configuration - Useful for auditing and understanding how the system is configured 2. **Persistence of Auto-Calculated Values** - The auto-calculated optimal volume size (master.volumeSizeLimitMB) is saved with a note indicating it was auto-calculated - On restart, if the auto-calculated value exists, it won't be recalculated - Users can delete the auto-calculated entry to force recalculation on next startup - Provides stable, predictable configuration across restarts 3. **Configuration File Location** - Saved to: <data-folder>/.seaweedfs/mini.config.json - Uses the first directory from comma-separated -dir list - Directory is created automatically if it doesn't exist - JSON format for easy parsing and manual editing 4. 
**Implementation Details** - Uses flag.Visit() to collect only explicitly passed flags - Distinguishes between user-specified and auto-calculated values - Includes helpful notes in the JSON file - Graceful handling of save errors (logs warnings, doesn't fail startup) The configuration file includes all parameters such as: - IP and port settings (master, filer, volume, admin) - Data directories and metadata folders - Replication and collection settings - S3 and IAM configurations - Performance tuning parameters (concurrency limits, timeouts, etc.) - Auto-calculated volume size (if applicable) Example mini.config.json output: { "debug": "true", "dir": "/data/seaweedfs", "master.port": "9333", "filer.port": "8888", "volume.port": "9340", "master.volumeSizeLimitMB.auto": "256", "_note_auto_calculated": "This value was auto-calculated. Remove it to recalculate on next startup." } This allows operators to: - Review what configuration was active - Replicate the configuration on other systems - Understand the startup behavior - Control when auto-calculation occurs * refactor: Change configuration file format to match command-line options format Update the saved configuration format from JSON to shell-compatible options format that matches how options are expected to be passed on the command line. Configuration file: .seaweedfs/mini.options Format: Each line contains a command-line option in the format -name=value Benefits: - Format is compatible with shell scripts and can be sourced - Can be easily converted to command-line options - Human-readable and editable - Values with spaces are properly quoted - Includes helpful comments explaining auto-calculated values - Directly usable with weed mini command The file can be used in multiple ways: 1. Extract options: cat .seaweedfs/mini.options | grep -v '^#' | tr '\n' ' ' 2. Inline in command: weed mini \$(cat .seaweedfs/mini.options | grep -v '^#') 3. Manual review: cat .seaweedfs/mini.options * refactor: Save mini.options directly to -dir folder * docs: Update PR description with accurate algorithm and examples Update the function documentation comments to accurately reflect the implemented algorithm and provide real-world examples with actual calculated outputs. Changes: - Clarify that algorithm uses total disk capacity (not free space) - Document exact calculation: capacity/100, round to power of 2, clamp to [64,1024] - Add realistic examples showing input disk sizes and resulting volume sizes: * 10GB disk → 64MB (minimum) * 100GB disk → 64MB (minimum) * 1TB disk → 64MB (minimum) * 6.4TB disk → 64MB * 12.8TB disk → 128MB * 100TB disk → 1024MB (maximum) * 1PB disk → 1024MB (maximum) - Include note that values are rounded to next power of 2 and capped at 1GB This helps users understand the volume size calculation and predict what size will be set for their specific disk configurations. * feat: integrate configuration file loading into mini startup - Load mini.options file at startup if it exists - Apply loaded configuration options before normal initialization - CLI flags override file-based configuration - Exclude 'dir' option from being saved (environment-specific) - Configuration file format: option=value without leading dashes - Auto-calculated volume size persists with recalculation marker |
2 days ago
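The sizing rule can be sketched on its own: divide total capacity by 100, round up to the next power of two with math/bits, then clamp to the [64 MB, 1024 MB] range. A minimal sketch assuming the capacity/100 step operates on megabytes, as in the commit's availableMB/100 (the worked disk-size examples quoted above use a coarser scaling, so treat the numbers here as illustrative only); constant names follow the ones mentioned in the message:

```go
package main

import (
	"fmt"
	"math/bits"
)

const (
	minVolumeSizeMB = 64
	maxVolumeSizeMB = 1024
	bytesPerMB      = 1024 * 1024
)

// optimalVolumeSizeMB derives a volume size limit from total disk capacity:
// capacity/100 in MB, rounded up to the next power of two, clamped to [64, 1024] MB.
func optimalVolumeSizeMB(totalDiskBytes uint64) uint64 {
	optimalMB := totalDiskBytes / bytesPerMB / 100
	if optimalMB == 0 {
		optimalMB = minVolumeSizeMB
	} else {
		// Round up to the next power of two using the bit length of (n-1).
		optimalMB = 1 << bits.Len64(optimalMB-1)
	}
	if optimalMB < minVolumeSizeMB {
		optimalMB = minVolumeSizeMB
	}
	if optimalMB > maxVolumeSizeMB {
		optimalMB = maxVolumeSizeMB
	}
	return optimalMB
}

func main() {
	totalBytes := uint64(50) << 30 // a hypothetical 50 GB data disk
	fmt.Printf("volume size limit: %d MB\n", optimalVolumeSizeMB(totalBytes))
}
```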

3613279f25
Add 'weed mini' command for S3 beginners and small/dev use cases (#7831)
* Add 'weed mini' command for S3 beginners and small/dev use cases
This new command simplifies starting SeaweedFS by combining all components
in one process with optimized settings for development and small deployments.
Features:
- Starts master, volume, filer, S3, WebDAV, and admin in one command
- Volume size limit: 64MB (optimized for small files)
- Volume max: 0 (auto-configured based on free disk space)
- Pre-stop seconds: 1 (faster shutdown for development)
- Master peers: none (single master mode by default)
- Includes admin UI with one worker for maintenance tasks
- Clean, user-friendly startup message with all endpoint URLs
Usage:
weed mini # Use default temp directory
weed mini -dir=/data # Custom data directory
This makes it much easier for:
- Developers getting started with SeaweedFS
- Testing and development workflows
- Learning S3 API with SeaweedFS
- Small deployments that don't need complex clustering
* Change default volume server port to 9340 to avoid popular port 8080
* Fix nil pointer dereference by initializing all required volume server fields
Added missing VolumeServerOptions field initializations:
- id, publicUrl, diskType
- maintenanceMBPerSecond, ldbTimeout
- concurrentUploadLimitMB, concurrentDownloadLimitMB
- pprof, idxFolder
- inflightUploadDataTimeout, inflightDownloadDataTimeout
- hasSlowRead, readBufferSizeMB
This resolves the panic that occurred when starting the volume server.
* Fix multiple nil pointer dereferences in mini command
Added missing field initializations for:
- Master options: raftHashicorp, raftBootstrap, telemetryUrl, telemetryEnabled
- Filer options: filerGroup, saveToFilerLimit, concurrentUploadLimitMB,
concurrentFileUploadLimit, localSocket, showUIDirectoryDelete,
downloadMaxMBps, diskType, allowedOrigins, exposeDirectoryData, tusBasePath
- Volume options: id, publicUrl, diskType, maintenanceMBPerSecond, ldbTimeout,
concurrentUploadLimitMB, concurrentDownloadLimitMB, pprof, idxFolder,
inflightUploadDataTimeout, inflightDownloadDataTimeout, hasSlowRead, readBufferSizeMB
- WebDAV options: tlsPrivateKey, tlsCertificate, filerRootPath
- Admin options: master
These initializations are required to avoid runtime panics when starting components.
* Fix remaining S3 option nil pointers in mini command
* Update mini command: 256MB volume size and add S3 access instructions for beginners
* mini: set default master.volumeSizeLimitMB to 128MB and update help/banner text
* mini: shorten S3 help text to a concise pointer to docs/Admin UI
* mini: remove duplicated component bullet list, use concise sentence
* mini: tidy help alignment and update example usage
* mini: default -dir to current directory
* mini: load initial S3 credentials from env and write IAM config
* mini: use AWS env vars for initial S3 creds; instruct to create via Admin UI if absent
* Improve startup synchronization with channel-based coordination
- Replace fragile time.Sleep delays with robust channel-based synchronization
- Implement proper service dependency ordering (Master → Volume → Filer → S3/WebDAV/Admin)
- Add sync.WaitGroup for goroutine coordination
- Add startup readiness logging for better visibility
- Implement 10-second timeout for admin server startup
- Remove arbitrary sleep delays for faster, more reliable startup
- Services now start deterministically based on dependencies, not timing
This makes the startup process more reliable and eliminates race conditions on slow systems or under load.
* Refactor service startup logic for better maintainability
Extract service startup into dedicated helper functions:
- startMiniServices(): Orchestrates all service startup with dependency coordination
- startServiceWithCoordination(): Starts services with readiness signaling
- startServiceWithoutReady(): Starts services without readiness signaling
- startS3Service(): Encapsulates S3 initialization logic
Benefits:
- Reduced code duplication in runMini()
- Clearer separation of concerns
- Easier to add new services or modify startup sequence
- More testable code structure
- Improved readability with explicit service names and logging
* Remove unused serviceStartupInfo struct type
- Delete the serviceStartupInfo struct that was defined but never used
- Improves code clarity by removing dead code
- All service startup is now handled directly by helper functions
* Preserve existing IAM config file instead of truncating
- Use os.Stat to check if IAM config file already exists
- Only create and write configuration if file doesn't exist
- Log appropriate messages for each case:
* File exists: skip writing, preserve existing config
* File absent: create with os.OpenFile and write new config
* Stat error: log error without overwriting
- Set *miniIamConfig only when new file is successfully created
- Use os.O_CREATE|os.O_WRONLY flags for safe file creation
- Handles file operations with proper error checking and cleanup
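A minimal sketch, with an assumed helper name (writeIAMConfigIfAbsent), of the stat-then-create flow described above; the 0600 mode matches the restrictive permissions adopted in a later commit below:
```go
package main

import (
	"fmt"
	"log"
	"os"
)

// writeIAMConfigIfAbsent mirrors the stat-then-create flow: it preserves an
// existing file and only writes a new configuration when none exists.
func writeIAMConfigIfAbsent(iamPath string, config []byte) (bool, error) {
	if _, statErr := os.Stat(iamPath); statErr == nil {
		log.Printf("IAM config %s already exists, preserving it", iamPath)
		return false, nil
	} else if !os.IsNotExist(statErr) {
		// Stat failed for another reason: report it without overwriting.
		return false, fmt.Errorf("stat %s: %w", iamPath, statErr)
	}
	// File absent: create and write the new configuration (0600 per the later
	// hardening commit below, so credentials are not world-readable).
	f, err := os.OpenFile(iamPath, os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		return false, fmt.Errorf("create %s: %w", iamPath, err)
	}
	defer f.Close() // closed on every path
	if _, err := f.Write(config); err != nil {
		return false, fmt.Errorf("write %s: %w", iamPath, err)
	}
	return true, nil
}

func main() {
	created, err := writeIAMConfigIfAbsent("/tmp/iam_config.json", []byte(`{"identities":[]}`))
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created new config:", created)
}
```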
* Fix CodeQL security issue: prevent logging of sensitive S3 credentials
- Add createdInitialIAM flag to track when initial IAM config is created from env vars
- Set flag in startS3Service() when new IAM config is successfully written
- Update welcome message to inform user of credential creation without exposing secrets
- Print only the username (mini) and config file location to user
- Never print access keys or secret keys in clear text
- Maintain security while keeping user informed of what was created
- Addresses CodeQL finding: Clear-text logging of sensitive information
* Fix three code review issues in weed mini command
1. Fix deadlock in service startup coordination:
- Run blocking service functions (startMaster, startFiler, etc.) in separate goroutines
- This allows readyChan to be closed and prevents indefinite blocking
- Services now start concurrently instead of sequentially blocking the coordinator
2. Use shared grace.StartDebugServer for consistency:
- Replace inline debug server startup with grace.StartDebugServer
- Improves code consistency with other commands (master, filer, etc.)
- Removes net/http import which is no longer needed
3. Simplify IAM config file cleanup with defer:
- Use 'defer f.Close()' instead of multiple f.Close() calls
- Ensures file is closed regardless of which code path is taken
- Improves robustness and code clarity
* fmt
* Fix: Remove misleading 'service is ready' logs
The previous fix removed 'go' from service function calls but left misleading
'service is ready' log messages. The service helpers now correctly:
- Call fn() directly (blocking) instead of 'go fn()' (non-blocking)
- Remove the 'service is ready' message that was printed before the service
actually started running
- Service functions run as blocking calls inside the coordinator goroutine,
which keeps them alive while the program runs
- The readiness channels still work correctly because they're closed when
the coordinator finishes waiting for dependencies
* Update mini.go
* Fix four code review issues in weed mini command
1. Use restrictive file permissions (0600) for IAM config:
- Changed from 0644 to 0600 when creating iam_config.json
- Prevents world-readable access to sensitive AWS credentials
- Protects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
2. Remove unused sync.WaitGroup:
- Removed WaitGroup that was never waited on
- All services run as blocking goroutines in the coordinator
- Main goroutine blocks indefinitely with select{}
- Removes unnecessary complexity without changing behavior
3. Merge service startup helper functions:
- Combined startServiceWithCoordination and startServiceWithoutReady
- Made readyChan optional (nil for services without readiness signaling)
- Reduces code duplication and improves maintainability
- Both services now use single startServiceWithCoordination function
4. Fix admin server readiness check:
- Removed misleading timeout channel that never closed at startup
- Replaced with simple 2-second sleep before worker startup
- startAdminServer() blocks indefinitely, so channel would only close on shutdown
- Explicit sleep is clearer about the startup coordination intent
* Fix three code quality issues in weed mini command
1. Define volume configuration as named constants:
- Added miniVolumeMaxDataVolumeCounts = "0"
- Added miniVolumeMinFreeSpace = "1"
- Added miniVolumeMinFreeSpacePercent = "1"
- Removed local variable assignments in Volume startup
- Improves maintainability and documents configuration intent
2. Fix deadlock in startServiceWithCoordination:
- Changed from 'defer close(readyChan)' with a blocking fn() call to running fn() in a goroutine
- Close readyChan immediately after launching service goroutine
- Prevents deadlock where fn() never returns, blocking defer execution
- Allows dependent services to start without waiting for blocking call
3. Improve admin server readiness check:
- Replaced fixed 2-second sleep with polling the gRPC port
- Polls up to 20 times (10 seconds total) with 500ms intervals
- Uses net.DialTimeout to check if port is available
- Properly handles IPv6 addresses using net.JoinHostPort
- Logs progress and warnings about connection status
- More robust than sleep against server startup timing variations
4. Add net import for network operations (IPv6 support)
Also fixed IAM config file close error handling to properly check the error
returned by f.Close() and log any failure, preventing silent data loss on NFS.
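A minimal sketch, with assumed helper names, of the resulting pattern: launch the blocking service function in a goroutine, close the optional readiness channel immediately, and separately poll a TCP port with net.DialTimeout and net.JoinHostPort:
```go
package main

import (
	"fmt"
	"net"
	"time"
)

// startServiceWithCoordination launches fn in its own goroutine so the
// coordinator is never blocked, then closes readyChan (if any) so dependent
// services can proceed. Names here are assumptions, not the exact code.
func startServiceWithCoordination(name string, fn func(), readyChan chan struct{}) {
	go fn() // fn blocks for the lifetime of the service
	if readyChan != nil {
		close(readyChan)
	}
	fmt.Println(name, "launched")
}

// waitForPort polls host:port until a TCP connection succeeds or the attempts
// are exhausted; the commit above uses 20 attempts x 500ms (10s total).
func waitForPort(host string, port int, attempts int, interval time.Duration) bool {
	addr := net.JoinHostPort(host, fmt.Sprintf("%d", port)) // IPv6-safe join
	for i := 0; i < attempts; i++ {
		if conn, err := net.DialTimeout("tcp", addr, interval); err == nil {
			conn.Close()
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	adminReady := make(chan struct{})
	startServiceWithCoordination("admin", func() { select {} }, adminReady)
	<-adminReady
	// Short attempt count and an illustrative port, just to keep this sketch quick.
	if !waitForPort("127.0.0.1", 23646, 3, 500*time.Millisecond) {
		fmt.Println("admin port not reachable yet (non-fatal in this sketch)")
	}
}
```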
* Document environment variable setup for S3 credentials
Updated welcome message to explain two ways to create S3 credentials:
1. Environment variables (recommended for quick setup):
- Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
- Run 'weed mini -dir=/data'
- Creates initial 'mini' user credentials automatically
2. Admin UI (for managing multiple users and policies):
- Open http://localhost:23646 (Admin UI)
- Add identities to create new S3 credentials
This gives users clear guidance on the easiest way to get started with S3
credentials while also explaining the more advanced option for multiple users.
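A minimal sketch, with an assumed helper name and a simplified placeholder JSON (not the real iam_config.json schema), of deriving the initial 'mini' credentials from the standard AWS environment variables without ever logging the secret:
```go
package main

import (
	"fmt"
	"os"
)

// buildInitialIAMConfig is an assumed name; it returns a config string only
// when both AWS environment variables are set.
func buildInitialIAMConfig() (string, bool) {
	accessKey := os.Getenv("AWS_ACCESS_KEY_ID")
	secretKey := os.Getenv("AWS_SECRET_ACCESS_KEY")
	if accessKey == "" || secretKey == "" {
		// No env credentials: identities are created via the Admin UI instead.
		return "", false
	}
	cfg := fmt.Sprintf(`{"identities":[{"name":"mini","credentials":[{"accessKey":%q,"secretKey":%q}]}]}`,
		accessKey, secretKey)
	return cfg, true
}

func main() {
	if _, ok := buildInitialIAMConfig(); ok {
		// Report only the username and the fact that credentials were created;
		// never print the keys themselves (the CodeQL fix above).
		fmt.Println("created initial S3 user 'mini' from environment variables")
	} else {
		fmt.Println("no AWS env credentials set; add identities via the Admin UI")
	}
}
```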
* Print welcome message after all services are running
Moved the welcome message printing from immediately after startMiniServices()
to after all services have been started and are ready. This ensures users see
the welcome message only after startup is complete, not mixed with startup logs.
Changes:
- Extract welcome message logic into printWelcomeMessage() function
- Call printWelcomeMessage() after startMiniServices() completes
- Change message from 'are starting' to 'are running and ready to use'
- This provides cleaner startup output without interleaved logs
* Wait for all services to complete before printing welcome message
The welcome message should only appear after all services are fully running and
the worker is connected. This prevents the message from appearing too early before
startup logs complete.
Changes:
- Pass allServicesReady channel through startMiniServices()
- Add adminReadyChan to track when admin/worker startup completes
- Signal allServicesReady when admin service is fully ready
- Wait for allServicesReady in runMini() before printing welcome message
- This ensures clean output: startup logs first, then welcome message once ready
Now the user sees all startup activity, then a clear welcome message when
everything is truly ready to use.
* Fix welcome message timing: print after worker is fully started
The welcome message was printing too early because allServicesReady was being
closed when the Admin service goroutine started, not when it actually completed.
The Admin service launches startMiniAdminWithWorker() which is a blocking call
that doesn't return until the worker is fully connected.
Now allServicesReady is passed through to startMiniWorker() which closes it
after the worker successfully starts and connects to the admin server.
This ensures the welcome message only appears after:
- Master is ready
- Volume server is ready
- Filer is ready
- S3 service is ready
- WebDAV service is ready
- Admin server is ready
- Worker is connected and running
All startup logs appear first, then the clean welcome message at the end.
* Wait for S3 and WebDAV services to be ready before showing welcome message
The welcome message was printing before S3 and WebDAV servers had fully
initialized. Now the readiness flow is:
1. Master → ready
2. Volume → ready
3. Filer → ready
4. S3 → ready (signals s3ReadyChan)
5. WebDAV → ready (signals webdavReadyChan)
6. Admin/Worker → starts, then waits for both S3 and WebDAV
7. Welcome message prints (all services truly ready)
Changes:
- Add s3ReadyChan and webdavReadyChan to service startup
- Pass S3 and WebDAV ready channels through to Admin service
- Admin/Worker waits for both S3 and WebDAV before closing allServicesReady
- This ensures welcome message appears only when all services are operational
* Admin service should wait for Filer, S3, and WebDAV to be ready
Admin service depends on Filer being operational since it uses the filer
for credential storage. It also makes sense to wait for S3 and WebDAV
since they are user-facing services that should be ready before Admin.
Updated dependencies:
- Admin now waits for: Master, Filer, S3, WebDAV
- This ensures all critical services are operational before Admin starts
- Welcome message will print only after all services including Admin are ready
* Add initialization delay for S3 and WebDAV services
S3 and WebDAV servers need extra time to fully initialize and start listening
after their service functions are launched. Added a 1-second delay after
launching S3 and WebDAV goroutines before signaling readiness.
This ensures the welcome message doesn't print until both services have
emitted their startup logs and are actually serving requests.
* Increase service initialization wait times for more reliable startup
- Increase S3 and WebDAV initialization delay from 1s to 2s to ensure they emit startup logs before welcome message
- Add 1s initialization delay for Filer to ensure it's listening
- Increase admin gRPC polling timeout from 10s to 20s to ensure admin server is fully ready
- This ensures welcome message prints only after all services are fully initialized and ready to accept requests
* Increase service wait time to 10 seconds for reliable startup
All services now wait 10 seconds after launching to ensure they are fully initialized and ready before signaling readiness to dependent services. This ensures the welcome message prints only after all services have fully started.
* Replace fixed 10s delay with intelligent port polling for service readiness
Instead of waiting a fixed 10 seconds for each service, now polls the service
port to check if it's actually accepting connections. This eliminates unnecessary
waiting and allows services to signal readiness as soon as they're ready.
- Polls each service port with up to 30 attempts (6 seconds total)
- Each attempt waits 200ms before retrying
- Stops polling immediately once service is ready
- Falls back gracefully if service is unknown
- Significantly faster startup sequence while maintaining reliability
* Replace channel-based coordination with HTTP pinging for service readiness
Instead of using channels to coordinate service startup, now uses HTTP GET requests
to ping each service endpoint to check if it's ready to accept connections.
Key changes:
- Removed all readiness channels (masterReadyChan, volumeReadyChan, etc.)
- Simplified startMiniServices to use sequential HTTP polling for each service
- startMiniService now just starts the service with logging
- waitForServiceReady uses HTTP client to ping service endpoints (max 6 seconds)
- waitForAdminServerReady uses HTTP GET to check admin server availability
- startMiniAdminWithWorker and startMiniWorker simplified without channel parameters
Benefits:
- Cleaner, more straightforward code
- HTTP pinging is more reliable than TCP port probing
- Services signal readiness through their HTTP endpoints
- Eliminates channel synchronization complexity
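A minimal sketch, with assumed names, of the HTTP-based readiness probe: any HTTP response, even an error status, proves the listener is accepting connections:
```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForServiceReady issues GET requests against a service endpoint until any
// HTTP response comes back; an error status still proves the listener is up.
func waitForServiceReady(name, url string, attempts int, interval time.Duration) bool {
	client := &http.Client{Timeout: interval}
	for i := 0; i < attempts; i++ {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			fmt.Println(name, "is ready")
			return true
		}
		time.Sleep(interval)
	}
	fmt.Println(name, "health check timed out (non-fatal)")
	return false
}

func main() {
	// 30 attempts x 200ms keeps the total wait around 6 seconds per service;
	// the URL below is illustrative (filer's default HTTP port is 8888).
	waitForServiceReady("filer", "http://127.0.0.1:8888/", 30, 200*time.Millisecond)
}
```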
* log level
* Remove overly specific comment from volume size limit in welcome message
The '(good for small files)' comment is too limiting. The 128MB volume size
limit works well for general use cases, not just small files. Simplified the
message to just show the value.
* Ensure allServicesReady channel is always closed via defer
Add 'defer close(allServicesReady)' at the start of startMiniAdminWithWorker
to guarantee the channel is closed on ALL exit paths (normal and error).
This prevents the caller waiting on <-allServicesReady from ever hanging,
while removing the explicit close() at the successful end prevents panic
from double-close.
This makes the code more robust by:
- Guaranteeing channel closure even if worker setup fails
- Eliminating the possibility of caller hanging on errors
- Following Go defer patterns for resource cleanup
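A minimal sketch of the defer-based closure guarantee described above; the function name is an illustrative stand-in:
```go
package main

import "fmt"

// startAdminWithWorker stands in for startMiniAdminWithWorker: the deferred
// close runs on every return path, so the caller can never hang on the channel.
func startAdminWithWorker(allServicesReady chan struct{}) error {
	defer close(allServicesReady) // closed on success and on every error return
	// ... admin + worker setup; any early `return err` still unblocks the caller
	return nil
}

func main() {
	ready := make(chan struct{})
	go func() {
		if err := startAdminWithWorker(ready); err != nil {
			fmt.Println("admin/worker setup failed:", err)
		}
	}()
	<-ready // safe: never hangs, and there is no second close() to panic on
	fmt.Println("all services ready; print the welcome message now")
}
```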
* Enhance health check polling for more robust service coordination
The service startup already uses HTTP health checks via waitForServiceReady()
to verify services are actually accepting connections. This commit improves
the health check implementation:
Changes:
- Elevated success logging to Info level so users see when services become ready
- Improved error messages to clarify that health check timeouts are not fatal
- Services continue startup even if health checks time out (they may still work)
- Consistent handling of health check results across all services
This provides better visibility into service startup while maintaining the
existing robust coordination via HTTP pinging rather than just TCP port checks.
* Implement stricter error handling for robust mini server startup
Apply all PR review feedback to ensure the mini server fails fast and clearly
when critical components cannot start:
Changes:
1. Remove redundant miniVolumeMinFreeSpacePercent constant
- Simplified util.MustParseMinFreeSpace() call to use single parameter
2. Make service readiness checks fatal errors:
- Master, Volume, Filer, S3, WebDAV health check failures now return errors
- Prevents partially-functional servers from running
- Caller can handle errors gracefully instead of continuing with broken state
3. Make admin server readiness fatal:
- Admin gRPC availability is critical for worker startup
- Use glog.Fatalf to terminate with clear error message
4. Improve IAM config error handling:
- Treat all file operation failures (stat, open, write, close) as fatal
- Prevents silent failures in S3 credential setup
- User gets immediate feedback instead of authentication issues later
5. Use glog.Fatalf for critical worker setup errors:
- Failed to create worker directory, task directories, or worker instance
- Failed to create admin client or start worker
- Ensures mini server doesn't run in broken state
This ensures deterministic startup: services succeed completely or fail with
clear, actionable error messages for the user.
* Make health checks non-fatal for graceful degradation and improve IAM file handling
Address PR feedback to make the mini command more resilient for development:
1. Make health check failures non-fatal
- Master, Volume, Filer, S3, WebDAV health checks now log warnings but allow startup
- Services may still work even if health check endpoints aren't immediately available
- Aligns with intent of a dev-focused tool that should be forgiving of timing issues
- Only prevents startup if startup coordination or critical errors occur
2. Improve IAM config file handling
- Refactored to guarantee file is always closed using separate error variables
- Opens file once and handles write/close errors independently
- Maintains strict error handling while improving code clarity
- All file operation failures still cause fatal errors (as intended)
This makes startup more graceful while maintaining critical error handling for
fundamental failures like missing directories or configuration errors.
* Fix code quality issues in weed mini command
- Fix pointer aliasing: use value copy (*miniBindIp = *miniIp) instead of pointer assignment
- Remove misleading error return from waitForServiceReady() function
- Simplify health check callers to call waitForServiceReady() directly without error handling
- Remove redundant S3 option assignments already set in init() block
- Remove unused allServicesReady parameter from startMiniWorker() function
* Refactor welcome message to use template strings and add startup delay
- Convert welcome message to constant template strings for cleaner code
- Separate credentials instructions into dedicated constant
- Add 500ms delay after worker startup to allow full initialization before welcome message
- Improves output cleanliness by avoiding log interleaving with welcome message
* Fix code style issues in weed mini command
- Fix indentation in IAM config block (lines 424-432) to align with surrounding code
- Remove unused adminServerDone channel that was created but never read
* Address code review feedback for robustness and resource management
- Use defer f.Close() for IAM file handling to ensure file is closed in all code paths, preventing potential file descriptor leaks
- Use 127.0.0.1 instead of *miniIp for service readiness checks to ensure checks always target localhost, improving reliability in environments with firewalls or complex network configurations
- Simplify error handling in waitForAdminServerReady by using single error return instead of separate write/close error variables
* Fix function declaration formatting
- Separate closing brace of startS3Service from startMiniAdminWithWorker declaration with blank line
- Move comment to proper position above function declaration
- Run gofmt for consistent formatting
* Fix IAM config pointer assignment when file already exists
- Add missing *miniIamConfig = iamPath assignment when IAM config file already exists
- Ensures S3 service is properly pointed to the existing IAM configuration
- Retains logging to inform user that existing configuration is being preserved
* Improve pointer assignments and worker synchronization
- Simplify grpcPort and dataDir pointer assignments by directly dereferencing and assigning values instead of taking address of local variables
- Replace time.Sleep(500ms) with proper TCP-based polling to wait for worker gRPC port readiness
- Add waitForWorkerReady function that polls worker's gRPC port with max 6-second timeout
- Add net package import for TCP connection checks
- Improves code idiomaticity and synchronization robustness
* Refactor and simplify error handling for maintainability
- Remove unused error return from startMiniServices (always returned nil)
- Update runMini caller to not expect error from startMiniServices
- Refactor init() into component-specific helper functions:
* initMiniCommonFlags() for common options
* initMiniMasterFlags() for master server options
* initMiniFilerFlags() for filer server options
* initMiniVolumeFlags() for volume server options
* initMiniS3Flags() for S3 server options
* initMiniWebDAVFlags() for WebDAV server options
* initMiniAdminFlags() for admin server options
- Significantly improves code readability and maintainability
- Each component's flags are now in dedicated, focused functions
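A minimal sketch, with hypothetical flag names and defaults, of splitting one large init() into per-component flag-registration helpers:
```go
package main

import (
	"flag"
	"fmt"
)

// miniOptions and the flag names below are hypothetical stand-ins for the
// real option structs; the point is the per-component registration helpers.
type miniOptions struct {
	dir        *string
	masterPort *int
	volumePort *int
}

var mini miniOptions

func initMiniCommonFlags(fs *flag.FlagSet) {
	mini.dir = fs.String("dir", ".", "data directory for all components")
}

func initMiniMasterFlags(fs *flag.FlagSet) {
	mini.masterPort = fs.Int("master.port", 9333, "master server port")
}

func initMiniVolumeFlags(fs *flag.FlagSet) {
	mini.volumePort = fs.Int("volume.port", 9340, "volume server port")
}

func main() {
	fs := flag.NewFlagSet("mini", flag.ExitOnError)
	initMiniCommonFlags(fs)
	initMiniMasterFlags(fs)
	initMiniVolumeFlags(fs)
	_ = fs.Parse([]string{"-dir", "/data"})
	fmt.Println(*mini.dir, *mini.masterPort, *mini.volumePort)
}
```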
|
2 days ago |
|
|
f67ba35f4a
|
Make lock_manager.RenewInterval configurable in LiveLock (#7830)
* Make lock_manager.RenewInterval configurable in LiveLock - Add renewInterval field to LiveLock struct - Modify StartLongLivedLock to accept renewInterval parameter - Update all call sites to pass lock_manager.RenewInterval - Default to lock_manager.RenewInterval if zero is passed * S3 metrics: reduce collection interval to half of bucketSizeMetricsInterval Since S3 metrics collection is not critical, check more frequently but only collect when holding the distributed lock. This allows faster detection of any issues while avoiding overhead on non-leader instances. * Remove unused lock_manager import from bucket_size_metrics.go * Refactor: Make lockTTL the primary parameter, derive renewInterval from it Instead of configurable renew interval, lockTTL is now the input parameter. The renewal interval is automatically derived as lockTTL / 2, ensuring that locks are renewed well before expiration. Changes: - Replace renewInterval parameter with lockTTL - Rename LiveLock.renewInterval field to lockTTL - Calculate renewInterval as lockTTL / 2 inside the goroutine - Update all call sites to pass lockTTL values - Simplify sleep logic to use consistent renewInterval for both states This approach is more intuitive and guarantees safe renewal windows. * When locked, renew more aggressively to actively keep the lock When holding the lock, sleep for renewInterval/2 to renew more frequently. When seeking the lock, sleep for renewInterval to retry with normal frequency. This ensures we actively maintain lock ownership while being less aggressive when competing for the lock. * Simplify: use consistent renewInterval for all lock states Since renewInterval is already lockTTL / 2, there's no need to differentiate between locked and unlocked states. Both use the same interval for consistency. * Adjust sleep intervals for different lock states - Locked instances sleep for renewInterval (lockTTL/2) to renew the lock - Unlocked instances sleep for 5*renewInterval (2.5*lockTTL) to retry acquisition less frequently |
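A minimal sketch, with assumed names, of the renewal policy described in this commit: the renew interval is derived as lockTTL/2, a holder renews every interval, and a non-holder retries only every 5*renewInterval:
```go
package main

import (
	"fmt"
	"time"
)

// liveLock is a simplified stand-in for the real LiveLock; only the timing
// policy from the commit above is modeled.
type liveLock struct {
	lockTTL time.Duration
	isOwner bool
}

func (l *liveLock) keepLock(stop <-chan struct{}) {
	renewInterval := l.lockTTL / 2 // derived, not configured directly
	for {
		select {
		case <-stop:
			return
		default:
		}
		if l.isOwner {
			fmt.Println("renewing lock") // renew well before the TTL expires
			time.Sleep(renewInterval)    // holders sleep lockTTL/2
		} else {
			fmt.Println("retrying acquisition") // compete less aggressively
			time.Sleep(5 * renewInterval)       // non-holders sleep 2.5*lockTTL
		}
	}
}

func main() {
	stop := make(chan struct{})
	go (&liveLock{lockTTL: 2 * time.Second, isOwner: true}).keepLock(stop)
	time.Sleep(3 * time.Second)
	close(stop)
}
```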
3 days ago |
|
|
f63d9ad390
|
s3api: fix bucket-root listing w/ delimiter (#7827)
* s3api: fix bucket-root listing w/ delimiter * test: improve mock robustness for bucket-root listing test - Make testListEntriesStream implement interface explicitly without embedding - Add prefix filtering logic to testFilerClient to simulate real filer behavior - Special-case prefix='/' to not filter for bucket root compatibility - Add required imports for metadata and strings packages This addresses review comments about test mock brittleness and accuracy. * test: add clarifying comment for mock filtering behavior Add detailed comment explaining which ListEntriesRequest parameters are implemented (Prefix) vs ignored (Limit, StartFromFileName, etc.) in the test mock to improve code documentation and future maintenance. * logging * less logs * less check if already locked |
3 days ago |
|
|
5b86d33c3c
|
Fix worker reconnection race condition causing context canceled errors (#7825)
* Fix worker reconnection race condition causing context canceled errors Fixes #7824 This commit fixes critical connection stability issues between admin server and workers that manifested as rapid reconnection cycles with 'context canceled' errors, particularly after 24+ hours of operation in containerized environments. Root Cause: ----------- Race condition where TWO goroutines were calling stream.Recv() on the same gRPC bidirectional stream concurrently: 1. sendRegistrationSync() started a goroutine that calls stream.Recv() 2. handleIncoming() also calls stream.Recv() in a loop Per gRPC specification, only ONE goroutine can call Recv() on a stream at a time. Concurrent Recv() calls cause undefined behavior, manifesting as 'context canceled' errors and stream corruption. The race occurred during worker reconnection: - Sometimes sendRegistrationSync goroutine read the registration response first (success) - Sometimes handleIncoming read it first, causing sendRegistrationSync to timeout - This left the stream in an inconsistent state, triggering 'context canceled' error - The error triggered rapid reconnection attempts, creating a reconnection storm Why it happened after 24 hours: Container orchestration systems (Docker Swarm/Kubernetes) periodically restart pods. Over time, workers reconnect multiple times. Each reconnection had a chance of hitting the race condition. Eventually the race manifested and caused the connection storm. Changes: -------- weed/worker/client.go: - Start handleIncoming and handleOutgoing goroutines BEFORE sending registration - Use sendRegistration() instead of sendRegistrationSync() - Ensures only ONE goroutine (handleIncoming) calls stream.Recv() - Eliminates race condition entirely weed/admin/dash/worker_grpc_server.go: - Clean up old connection when worker reconnects with same ID - Cancel old connection context to stop its goroutines - Prevents resource leaks and stale connection accumulation Impact: ------- Before: Random 'context canceled' errors during reconnection, rapid reconnection cycles, resource leaks, requires manual restart to recover After: Reliable reconnection, single Recv() goroutine, proper cleanup, stable operation over 24+ hours Testing: -------- Build verified successful with no compilation errors. How to reproduce the bug: 1. Start admin server and worker 2. Restart admin server (simulates container recreation) 3. Worker reconnects 4. Race condition may manifest, causing 'context canceled' error 5. Observe rapid reconnection cycles in logs The fix is backward compatible and requires no configuration changes. 
* Add MaxConnectionAge to gRPC server for Docker Swarm DNS handling - Configure MaxConnectionAge and MaxConnectionAgeGrace for gRPC server - Expand error detection in shouldInvalidateConnection for better cache invalidation - Add connection lifecycle logging for debugging * Add topology validation and nil-safety checks - Add validation guards in UpdateTopology to prevent invalid updates - Add nil-safety checks in rebuildIndexes - Add GetDiskCount method for diagnostic purposes * Fix worker registration race condition - Reorder goroutine startup in WorkerStream to prevent race conditions - Add defensive cleanup in unregisterWorker with panic-safe channel closing * Add comprehensive topology update logging - Enhance UpdateTopologyInfo with detailed logging of datacenter/node/disk counts - Add metrics logging for topology changes * Add periodic diagnostic status logging - Implement topologyStatusLoop running every 5 minutes - Add logTopologyStatus function reporting system metrics - Run as background goroutine in maintenance manager * Enhance master client connection logging - Add connection timing logs in tryConnectToMaster - Add reconnection attempt counting in KeepConnectedToMaster - Improve diagnostic visibility for connection issues * Remove unused sendRegistrationSync function - Function is no longer called after switching to asynchronous sendRegistration - Contains the problematic concurrent stream.Recv() pattern that caused race conditions - Cleanup as suggested in PR review * Clarify comment for channel closing during disconnection - Improve comment to explain why channels are closed and their effect - Make the code more self-documenting as suggested in PR review * Address code review feedback: refactor and improvements - Extract topology counting logic to shared helper function CountTopologyResources() to eliminate duplication between topology_management.go and maintenance_integration.go - Use gRPC status codes for more robust error detection in shouldInvalidateConnection(), falling back to string matching for transport-level errors - Add recover wrapper for channel close consistency in cleanupStaleConnections() to match unregisterWorker() pattern * Update grpc_client_server.go * Fix data race on lastSeen field access - Add mutex protection around conn.lastSeen = time.Now() in WorkerStream method - Ensures thread-safe access consistent with cleanupStaleConnections * Fix goroutine leaks in worker reconnection logic - Close streamExit in reconnect() before creating new connection - Close streamExit in attemptConnection() when sendRegistration fails - Prevents orphaned handleOutgoing/handleIncoming goroutines from previous connections - Ensures proper cleanup of goroutines competing for shared outgoing channel * Minor cleanup improvements for consistency and clarity - Remove redundant string checks in shouldInvalidateConnection that overlap with gRPC status codes - Add recover block to Stop() method for consistency with other channel close operations - Maintains valuable DNS and transport-specific error detection while eliminating redundancy * Improve topology update error handling - Return descriptive errors instead of silently preserving topology for invalid updates - Change nil topologyInfo case to return 'rejected invalid topology update: nil topologyInfo' - Change empty DataCenterInfos case to return 'rejected invalid topology update: empty DataCenterInfos (had X nodes, Y disks)' - Keep existing glog.Warningf calls but append error details to logs before returning errors - Allows 
callers to distinguish rejected updates and handle them appropriately * Refactor safe channel closing into helper method - Add safeCloseOutgoingChannel helper method to eliminate code duplication - Replace repeated recover blocks in Stop, unregisterWorker, and cleanupStaleConnections - Improves maintainability and ensures consistent error handling across all channel close operations - Maintains same panic recovery behavior with contextual source identification * Make connection invalidation string matching case-insensitive - Convert error string to lowercase once for all string.Contains checks - Improves robustness by catching error message variations from different sources - Eliminates need for separate 'DNS resolution' and 'dns' checks - Maintains same error detection coverage with better reliability * Clean up warning logs in UpdateTopology to avoid duplicating error text - Remove duplicated error phrases from glog.Warningf messages - Keep concise contextual warnings that don't repeat the fmt.Errorf content - Maintain same error returns for backward compatibility * Add robust validation to prevent topology wipeout during master restart - Reject topology updates with 0 nodes when current topology has nodes - Prevents transient empty topology from overwriting valid state - Improves resilience during master restart scenarios - Maintains backward compatibility for legitimate empty topology updates |
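A minimal sketch, using a fake in-memory stream rather than the real gRPC types, of the single-receiver pattern this fix enforces: exactly one goroutine calls Recv(), and registration is just the first message on the shared outgoing channel:
```go
package main

import (
	"fmt"
	"time"
)

// fakeStream stands in for the gRPC bidirectional stream; the real types are
// not reproduced here, only the concurrency pattern.
type fakeStream struct{ inbox chan string }

func (s *fakeStream) Send(msg string) error { s.inbox <- msg; return nil }
func (s *fakeStream) Recv() (string, error) { return <-s.inbox, nil }

func main() {
	stream := &fakeStream{inbox: make(chan string, 4)}
	outgoing := make(chan string, 4)

	// Single receiver: the ONLY goroutine that ever calls stream.Recv().
	go func() {
		for {
			msg, err := stream.Recv()
			if err != nil {
				return
			}
			fmt.Println("received:", msg)
		}
	}()

	// Single sender: drains the shared outgoing channel.
	go func() {
		for msg := range outgoing {
			if err := stream.Send(msg); err != nil {
				return
			}
		}
	}()

	// Registration is just the first outgoing message; its response is read by
	// the same receive loop instead of a second, racing Recv() call.
	outgoing <- "registration"
	time.Sleep(100 * time.Millisecond)
}
```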
4 days ago |
|
|
4a764dbb37 |
fmt
|
4 days ago |
|
|
4aa50bfa6a
|
fix: EC rebalance fails with replica placement 000 (#7812)
* fix: EC rebalance fails with replica placement 000 This PR fixes several issues with EC shard distribution: 1. Pre-flight check before EC encoding - Verify target disk type has capacity before encoding starts - Prevents encoding shards only to fail during rebalance - Shows helpful error when wrong diskType is specified (e.g., ssd when volumes are on hdd) 2. Fix EC rebalance with replica placement 000 - When DiffRackCount=0, shards should be distributed freely across racks - The '000' placement means 'no volume replication needed' because EC provides redundancy - Previously all racks were skipped with error 'shards X > replica placement limit (0)' 3. Add unit tests for EC rebalance slot calculation - TestECRebalanceWithLimitedSlots: documents the limited slots scenario - TestECRebalanceZeroFreeSlots: reproduces the 0 free slots error 4. Add Makefile for manual EC testing - make setup: start cluster and populate data - make shell: open weed shell for EC commands - make clean: stop cluster and cleanup * fix: default -rebalance to true for ec.encode The -rebalance flag was defaulting to false, which meant ec.encode would only print shard moves but not actually execute them. This is a poor default since the whole point of EC encoding is to distribute shards across servers for fault tolerance. Now -rebalance defaults to true, so shards are actually distributed after encoding. Users can use -rebalance=false if they only want to see what would happen without making changes. * test/erasure_coding: improve Makefile safety and docs - Narrow pkill pattern for volume servers to use TEST_DIR instead of port pattern, avoiding accidental kills of unrelated SeaweedFS processes - Document external dependencies (curl, jq) in header comments * shell: refactor buildRackWithEcShards to reuse buildEcShards Extract common shard bit construction logic to avoid duplication between buildEcShards and buildRackWithEcShards helper functions. * shell: update test for EC replication 000 behavior When DiffRackCount=0 (replication "000"), EC shards should be distributed freely across racks since erasure coding provides its own redundancy. Update test expectation to reflect this behavior. * erasure_coding: add distribution package for proportional EC shard placement Add a new reusable package for EC shard distribution that: - Supports configurable EC ratios (not hard-coded 10+4) - Distributes shards proportionally based on replication policy - Provides fault tolerance analysis - Prefers moving parity shards to keep data shards spread out Key components: - ECConfig: Configurable data/parity shard counts - ReplicationConfig: Parsed XYZ replication policy - ECDistribution: Target shard counts per DC/rack/node - Rebalancer: Plans shard moves with parity-first strategy This enables seaweed-enterprise custom EC ratios and weed worker integration while maintaining a clean, testable architecture. * shell: integrate distribution package for EC rebalancing Add shell wrappers around the distribution package: - ProportionalECRebalancer: Plans moves using distribution.Rebalancer - NewProportionalECRebalancerWithConfig: Supports custom EC configs - GetDistributionSummary/GetFaultToleranceAnalysis: Helper functions The shell layer converts between EcNode types and the generic TopologyNode types used by the distribution package. * test setup * ec: improve data and parity shard distribution across racks - Add shardsByTypePerRack helper to track data vs parity shards - Rewrite doBalanceEcShardsAcrossRacks for two-pass balancing: 1. 
Balance data shards (0-9) evenly, max ceil(10/6)=2 per rack 2. Balance parity shards (10-13) evenly, max ceil(4/6)=1 per rack - Add balanceShardTypeAcrossRacks for generic shard type balancing - Add pickRackForShardType to select destination with room for type - Add unit tests for even data/parity distribution verification This ensures even read load during normal operation by spreading both data and parity shards across all available racks. * ec: make data/parity shard counts configurable in ecBalancer - Add dataShardCount and parityShardCount fields to ecBalancer struct - Add getDataShardCount() and getParityShardCount() methods with defaults - Replace direct constant usage with configurable methods - Fix unused variable warning for parityPerRack This allows seaweed-enterprise to use custom EC ratios while defaulting to standard 10+4 scheme. * Address PR 7812 review comments Makefile improvements: - Save PIDs for each volume server for precise termination - Use PID-based killing in stop target with pkill fallback - Use more specific pkill patterns with TEST_DIR paths Documentation: - Document jq dependency in README.md Rebalancer fix: - Fix duplicate shard count updates in applyMovesToAnalysis - All planners (DC/rack/node) update counts inline during planning - Remove duplicate updates from applyMovesToAnalysis to avoid double-counting * test/erasure_coding: use mktemp for test file template Use mktemp instead of hardcoded /tmp/testfile_template.bin path to provide better isolation for concurrent test runs. |
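A minimal sketch of the per-rack cap arithmetic mentioned above: with 10 data and 4 parity shards over 6 racks, data shards are capped at ceil(10/6)=2 per rack and parity shards at ceil(4/6)=1:
```go
package main

import "fmt"

// maxShardsPerRack is the integer ceiling of shardCount/rackCount.
func maxShardsPerRack(shardCount, rackCount int) int {
	if rackCount == 0 {
		return shardCount
	}
	return (shardCount + rackCount - 1) / rackCount
}

func main() {
	dataShards, parityShards, racks := 10, 4, 6
	fmt.Println("data shards per rack cap:  ", maxShardsPerRack(dataShards, racks))   // 2
	fmt.Println("parity shards per rack cap:", maxShardsPerRack(parityShards, racks)) // 1
}
```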
4 days ago |
|
|
77a56c2857 |
adjust default concurrent reader and writer
related to https://github.com/seaweedfs/seaweedfs-csi-driver/pull/221 |
4 days ago |
|
|
f4cdfcc5fd
|
Add cluster.raft.leader.transfer command for graceful leader change (#7819)
* proto: add RaftLeadershipTransfer RPC for forced leader change Add new gRPC RPC and messages for leadership transfer: - RaftLeadershipTransferRequest: optional target_id and target_address - RaftLeadershipTransferResponse: previous_leader and new_leader This enables graceful leadership transfer before master maintenance, reducing errors in filers during planned maintenance windows. Ref: https://github.com/seaweedfs/seaweedfs/issues/7527 * proto: regenerate Go files for RaftLeadershipTransfer Generated from master.proto changes. * master: implement RaftLeadershipTransfer gRPC handler Add gRPC handler for leadership transfer with support for: - Transfer to any eligible follower (when target_id is empty) - Transfer to a specific server (when target_id and target_address are provided) Uses hashicorp/raft LeadershipTransfer() and LeadershipTransferToServer() APIs. Returns the previous and new leader in the response. * shell: add cluster.raft.leader.transfer command Add weed shell command for graceful leadership transfer: - Displays current cluster status before transfer - Supports auto-selection of target (any eligible follower) - Supports targeted transfer with -id and -address flags - Provides clear feedback on success/failure with troubleshooting tips Usage: cluster.raft.leader.transfer cluster.raft.leader.transfer -id <server_id> -address <grpc_address> * master: add unit tests for raft gRPC handlers Add tests covering: - RaftLeadershipTransfer with no raft initialized - RaftLeadershipTransfer with target_id but no address - RaftListClusterServers with no raft initialized - RaftAddServer with no raft initialized - RaftRemoveServer with no raft initialized These tests verify error handling when raft is not configured. * shell: add tests for cluster.raft.leader.transfer command Add tests covering: - Command name and help text validation - HasTag returns false for ResourceHeavy - Validation of -id without -address - Argument parsing with unknown flags * master: clarify that leadership transfer requires -raftHashicorp The default raft implementation (seaweedfs/raft, a goraft fork) does not support graceful leadership transfer. This feature is only available when using hashicorp raft (-raftHashicorp=true). Update error messages and help text to make this requirement clear: - gRPC handler returns specific error for goraft users - Shell command help text notes the requirement - Added test for goraft case * test: use strings.Contains instead of custom helper Replace custom contains/containsHelper functions with the standard library strings.Contains for better maintainability. 
* shell: return flag parsing errors instead of swallowing them - Return the error from flag.Parse() instead of returning nil - Update test to explicitly assert error for unknown flags * test: document integration test scenarios for Raft leadership transfer Add comments explaining: - Why these unit tests only cover 'Raft not initialized' scenarios - What integration tests should cover (with multi-master cluster) - hashicorp/raft uses concrete types that cannot be easily mocked * fix: address reviewer feedback on tests and leader routing - Remove misleading tests that couldn't properly validate their documented behavior without a real Raft cluster: - TestRaftLeadershipTransfer_GoraftNotSupported - TestRaftLeadershipTransfer_ValidationTargetIdWithoutAddress - Change WithClient(false) to WithClient(true) for RaftLeadershipTransfer RPC to ensure the request is routed to the current leader * Improve cluster.raft.transferLeader command - Rename command from cluster.raft.leader.transfer to cluster.raft.transferLeader - Add symmetric validation: -id and -address must be specified together - Handle case where same leader is re-elected after transfer - Add test for -address without -id validation - Add docker compose file for 5-master raft cluster testing |
4 days ago |
|
|
134fd6a1ae
|
fix: S3 remote storage cold-cache read fails with 'size reported but no content available' (#7817)
fix: S3 remote storage cold-cache read fails with 'size reported but no content available' (#7815) When a remote-only entry's initial caching attempt times out or fails, streamFromVolumeServers() now detects this case and retries caching synchronously before streaming, similar to how the filer server handles remote-only entries. Changes: - Modified streamFromVolumeServers() to check entry.IsInRemoteOnly() before treating missing chunks as a data integrity error - Added doCacheRemoteObject() as the core caching function (calls filer gRPC) - Added buildRemoteObjectPath() helper to reduce code duplication - Refactored cacheRemoteObjectWithDedup() and cacheRemoteObjectForStreaming() to reuse the shared functions - Added integration tests for remote storage scenarios Fixes https://github.com/seaweedfs/seaweedfs/issues/7815 |
4 days ago |
|
|
6442da6f17
|
mount: efficient file lookup in large directories, skipping directory caching (#7818)
* mount: skip directory caching on file lookup and write When opening or creating a file in a directory that hasn't been cached yet, don't list the entire directory. Instead: - For reads: fetch only the single file's metadata directly from the filer - For writes: create on filer but skip local cache insertion This fixes a performance issue where opening a file in a directory with millions of files would hang because EnsureVisited() had to list all entries before the open could complete. The directory will still be cached when explicitly listed (ReadDir), but individual file operations now bypass the full directory caching. Key optimizations: - Extract shared lookupEntry() method to eliminate code duplication - Skip EnsureVisited on Lookup (file open) - Skip cache insertion on Mknod, Mkdir, Symlink, Link if dir not cached - Skip cache update on file sync/flush if dir not cached - If directory IS cached and entry not found, return ENOENT immediately Fixes #7145 * mount: add error handling for meta cache insert/update operations Handle errors from metaCache.InsertEntry and metaCache.UpdateEntry calls instead of silently ignoring them. This prevents silent cache inconsistencies and ensures errors are properly propagated. Files updated: - filehandle_read.go: handle InsertEntry error in downloadRemoteEntry - weedfs_file_sync.go: handle InsertEntry error in doFlush - weedfs_link.go: handle UpdateEntry and InsertEntry errors in Link - weedfs_symlink.go: handle InsertEntry error in Symlink * mount: use error wrapping (%w) for consistent error handling Use %w instead of %v in fmt.Errorf to preserve the original error, allowing it to be inspected up the call stack with errors.Is/As. |
4 days ago |
|
|
ed1da07665
|
Add consistent -debug and -debug.port flags to commands (#7816)
* Add consistent -debug and -debug.port flags to commands Add -debug and -debug.port flags to weed master, weed volume, weed s3, weed mq.broker, and weed filer.sync commands for consistency with weed filer. When -debug is enabled, an HTTP server starts on the specified port (default 6060) serving runtime profiling data at /debug/pprof/. For mq.broker, replaced the older -port.pprof flag with the new -debug and -debug.port pattern for consistency. * Update weed/util/grace/pprof.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
5 days ago |
|
|
bccef78082
|
fix: reduce N+1 queries in S3 versioned object list operations (#7814)
* fix: achieve single-scan efficiency for S3 versioned object listing When listing objects in a versioning-enabled bucket, the original code triggered multiple getEntry calls per versioned object (up to 12 with retries), causing excessive 'find' operations visible in Grafana and leading to high memory usage. This fix achieves single-scan efficiency by caching list metadata (size, ETag, mtime, owner) directly in the .versions directory: 1. Add new Extended keys for caching list metadata in .versions dir 2. Update upload/copy/multipart paths to cache metadata when creating versions 3. Update getLatestVersionEntryFromDirectoryEntry to use cached metadata (zero getEntry calls when cache is available) 4. Update updateLatestVersionAfterDeletion to maintain cache consistency Performance improvement for N versioned objects: - Before: N×1 to N×12 find operations per list request - After: 0 extra find operations (all metadata from single scan) This matches the efficiency of normal (non-versioned) object listing. * Update s3api_object_versioning.go * s3api: fix ETag handling for versioned objects and simplify delete marker creation - Add Md5 attribute to synthetic logicalEntry for single-part uploads to ensure filer.ETag() returns correct value in ListObjects response - Simplify delete marker creation by initializing entry directly in mkFile callback - Add bytes and encoding/hex imports for ETag parsing * s3api: preserve default attributes in delete marker mkFile callback Only modify Mtime field instead of replacing the entire Attributes struct, preserving default values like Crtime, FileMode, Uid, and Gid that mkFile initializes. * s3api: fix ETag handling in newListEntry for multipart uploads Prioritize ExtETagKey from Extended attributes before falling back to filer.ETag(). This properly handles multipart upload ETags (format: md5-parts) for versioned objects, where the synthetic entry has cached ETag metadata but no chunks to calculate from. * s3api: reduce code duplication in delete marker creation Extract deleteMarkerExtended map to be reused in both mkFile callback and deleteMarkerEntry construction. * test: add multipart upload versioning tests for ETag verification Add tests to verify that multipart uploaded objects in versioned buckets have correct ETags when listed: - TestMultipartUploadVersioningListETag: Basic multipart upload with 2 parts - TestMultipartUploadMultipleVersionsListETag: Multiple multipart versions - TestMixedSingleAndMultipartVersionsListETag: Mix of single-part and multipart These tests cover a bug where synthetic entries for versioned objects didn't include proper ETag handling for multipart uploads. 
* test: add delete marker test for multipart uploaded versioned objects TestMultipartUploadDeleteMarkerListBehavior verifies: - Delete marker creation hides object from ListObjectsV2 - ListObjectVersions shows both version and delete marker - Version ETag (multipart format) is preserved after delete marker - Object can be accessed by version ID after delete marker - Removing delete marker restores object visibility * refactor: address code review feedback - test: use assert.ElementsMatch for ETag verification (more idiomatic) - s3api: optimize newListEntry ETag logic (check ExtETagKey first) - s3api: fix edge case in ETag parsing (>= 2 instead of > 2) * s3api: prevent stale cached metadata and preserve existing extended attrs - setCachedListMetadata: clear old cached keys before setting new values to prevent stale data when new version lacks certain fields (e.g., owner) - createDeleteMarker: merge extended attributes instead of overwriting to preserve any existing metadata on the entry * s3api: extract clearCachedVersionMetadata to reduce code duplication - clearCachedVersionMetadata: clears only metadata fields (size, mtime, etag, owner, deleteMarker) - clearCachedListMetadata: now reuses clearCachedVersionMetadata + clears ID/filename - setCachedListMetadata: uses clearCachedVersionMetadata (not clearCachedListMetadata because caller has already set ID/filename) * s3api: share timestamp between version entry and cache entry Capture versionMtime once before mkFile and reuse for both: - versionEntry.Attributes.Mtime in the mkFile callback - versionEntryForCache.Attributes.Mtime for list caching This keeps list vs. HEAD LastModified timestamps aligned. * s3api: remove amzAccountId variable shadowing in multipart upload Extract amzAccountId before mkFile callback and reuse in both places, similar to how versionMtime is handled. Avoids confusion from redeclaring the same variable. |
5 days ago |
|
|
414cda4215
|
fix: S3 versioning memory leak in ListObjectVersions pagination (#7813)
* fix: S3 versioning memory leak in ListObjectVersions pagination This commit fixes a memory leak issue in S3 versioning buckets where ListObjectVersions with pagination (key-marker set) would collect ALL versions in the bucket before filtering, causing O(N) memory usage. Root cause: - When keyMarker was set, maxCollect was set to 0 (unlimited) - This caused findVersionsRecursively to traverse the entire bucket - All versions were collected into memory, sorted, then filtered Fix: - Updated findVersionsRecursively to accept keyMarker and versionIdMarker - Skips objects/versions before the marker during recursion (not after) - Always respects maxCollect limit (never unlimited) - Memory usage is now O(maxKeys) instead of O(total versions) Refactoring: - Introduced versionCollector struct to encapsulate collection state - Extracted helper methods for cleaner, more testable code: - matchesPrefixFilter: prefix matching logic - shouldSkipObjectForMarker: keyMarker filtering - shouldSkipVersionForMarker: versionIdMarker filtering - processVersionsDirectory: .versions directory handling - processExplicitDirectory: S3 directory object handling - processRegularFile: pre-versioning file handling - collectVersions: main recursive collection loop - processDirectory: directory entry dispatch This reduces the high QPS on 'find' and 'prefixList' operations by skipping irrelevant objects during traversal. Fixes customer-reported memory leak with high find/prefixList QPS in Grafana for S3 versioning buckets. * s3: infer version ID format from ExtLatestVersionIdKey metadata Simplified version format detection: - Removed ExtVersionIdFormatKey - no longer needed - getVersionIdFormat() now infers format from ExtLatestVersionIdKey - Uses isNewFormatVersionId() to check if latest version uses inverted format This approach is simpler because: - ExtLatestVersionIdKey is already stored in .versions directory metadata - No need for separate format metadata field - Format is naturally determined by the existing version IDs |
5 days ago |
|
|
6a1b9ce8cd
|
Give `cluster.status` detailed file metrics for regular volumes (#7791)
* Implement a `weed shell` command to return a status overview of the cluster. Detailed file information will be implemented in a follow-up MR. Note also that masters are currently not reporting back EC shard sizes correctly, via `master_pb.VolumeEcShardInformationMessage.shard_sizes`. F.ex: ``` > status cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC(s)s, 1 disk(s) on 1 rack(s) volumes: total: 3 volumes on 1 collections max size: 31457280000 bytes regular: 2/80 volumes on 6 replicas, 6 writable (100.00%), 0 read-only (0.00%) EC: 1 EC volumes on 14 shards (14.00 shards/volume) storage: total: 186024424 bytes regular volumes: 186024424 bytes EC volumes: 0 bytes raw: 558073152 bytes on volume replicas, 0 bytes on EC shard files ``` * Humanize output for `weed.server` by default. Makes things more readable :) ``` > cluster.status cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC, 10 disks on 1 rack volumes: total: 3 volumes, 1 collection max size: 32 GB regular: 2/80 volumes on 6 replicas, 6 writable (100%), 0 read-only (0%) EC: 1 EC volume on 14 shards (14 shards/volume) storage: total: 172 MB regular volumes: 172 MB EC volumes: 0 B raw: 516 MB on volume replicas, 0 B on EC shards ``` ``` > cluster.status --humanize=false cluster: id: topo status: LOCKED nodes: 10 topology: 1 DC(s), 10 disk(s) on 1 rack(s) volumes: total: 3 volume(s), 1 collection(s) max size: 31457280000 byte(s) regular: 2/80 volume(s) on 6 replica(s), 5 writable (83.33%), 1 read-only (16.67%) EC: 1 EC volume(s) on 14 shard(s) (14.00 shards/volume) storage: total: 172128072 byte(s) regular volumes: 172128072 byte(s) EC volumes: 0 byte(s) raw: 516384216 byte(s) on volume replicas, 0 byte(s) on EC shards ``` Also adds unit tests, and reshuffles test files handling for clarity. * `cluster.status`: Add detailed file metrics for regular volumes. |
6 days ago |
|
|
0e998e07d0
|
Upgrade raft to v1.1.6 to fix panic on log compaction (#7811)
Fixes #7810 The raft library would panic when prevLogIndex was beyond the end of the log after compaction. The fix in raft v1.1.6 returns nil instead, triggering the snapshot fallback mechanism. |
6 days ago |
|
|
22271358c6
|
Fix worker and admin ca (#7807)
* Fix Worker and Admin CA in helm chart * Fix Worker and Admin CA in helm chart - add security.toml modification * Fix Worker and Admin CA in helm chart - fix security.toml modification error * Fix Worker and Admin CA in helm chart - fix errors in volume mounts * Fix Worker and Admin CA in helm chart - address review comments - Remove worker-cert from admin pod (principle of least privilege) - Remove admin-cert from worker pod (principle of least privilege) - Remove overly broad namespace wildcards from admin-cert dnsNames - Remove overly broad namespace wildcards from worker-cert dnsNames --------- Co-authored-by: chrislu <chris.lu@gmail.com> |
6 days ago |
|
|
df0ea18084
|
fix: use consistent telemetryUrl default in master.follower (#7809)
Update telemetryUrl to use the same default value as the master command for consistency and maintainability. Addresses review feedback from PR #7808 |
6 days ago |
|
|
0b8fdab1e3
|
fix: initialize missing MasterOptions fields in master.follower (#7808)
Fix nil pointer dereference panic when starting master.follower. The init() function was missing initialization for: - maxParallelVacuumPerServer - telemetryUrl - telemetryEnabled These fields are dereferenced in toMasterOption() causing a panic. Fixes #7806 |
6 days ago |
|
|
ec3378f7a6
|
fix: improve mount quota enforcement to prevent overflow (#7804)
* fix: improve mount quota enforcement to prevent overflow (fixes seaweedfs-csi-driver#218) * test: add unit tests for quota enforcement |
6 days ago |
|
|
99a2e79efc
|
fix: authenticate before parsing form in IAM API (#7803)
fix: authenticate before parsing form in IAM API (#7802) The AuthIam middleware was calling ParseForm() before AuthSignatureOnly(), which consumed the request body before signature verification could hash it. For IAM requests (service != 's3'), the signature verification needs to hash the request body. When ParseForm() was called first, the body was already consumed, resulting in an empty body hash and SignatureDoesNotMatch error. The fix moves authentication before form parsing. The streamHashRequestBody function preserves the body after reading, so ParseForm() works correctly after authentication. Fixes #7802 |
6 days ago |
|
|
2763f105f4
|
fix: use unique bucket name in TestS3IAMPresignedURLIntegration to avoid flaky test (#7801)
The test was using a static bucket name 'test-iam-bucket' that could conflict with buckets created by other tests or previous runs. Each test framework creates new RSA keys for JWT signing, so the 'admin-user' identity differs between runs. When the bucket exists from a previous test, the new admin cannot access or delete it, causing AccessDenied errors. Changed to use GenerateUniqueBucketName() which ensures each test run gets its own bucket, avoiding cross-test conflicts. |
6 days ago |
|
|
a77b145590
|
fix: ListBuckets returns empty for users with bucket-specific permissions (#7799)
* fix: ListBuckets returns empty for users with bucket-specific permissions (#7796) The ListBucketsHandler was using sequential AND logic where ownership check happened before permission check. If a user had 'List:bucketname' permission but didn't own the bucket (different AmzIdentityId or missing owner metadata), the bucket was filtered out before the permission check could run. Changed to OR logic: a bucket is now visible if the user owns it OR has explicit permission to list it. This allows users with bucket-specific permissions like 'List:geoserver' to see buckets they have access to, even if they don't own them. Changes: - Modified ListBucketsHandler to check both ownership and permission, including bucket if either check passes - Renamed isBucketVisibleToIdentity to isBucketOwnedByIdentity for clarity - Added comprehensive tests in TestListBucketsIssue7796 Fixes #7796 * address review comments: optimize permission check and add integration test - Skip permission check if user is already the owner (performance optimization) - Add integration test that simulates the complete handler filtering logic to verify the combination of ownership OR permission check works correctly * add visibility assertions to each sub-test for self-contained verification Each sub-test now verifies the final outcome using isOwner || canList logic, making tests more robust and independently verifiable. |
6 days ago |
|
|
9e9c97ec61 |
fix bucket link
|
6 days ago |
|
|
347ed7cbfa
|
fix: sync replica entries before ec.encode and volume.tier.move (#7798)
* fix: sync replica entries before ec.encode and volume.tier.move (#7797) This addresses the data inconsistency risk in multi-replica volumes. When ec.encode or volume.tier.move operates on a multi-replica volume: 1. Find the replica with the highest file count (the 'best' one) 2. Copy missing entries from other replicas INTO this best replica 3. Use this union replica for the destructive operation This ensures no data is lost due to replica inconsistency before EC encoding or tier moving. Added: - command_volume_replica_check.go: Core sync and select logic - command_volume_replica_check_test.go: Test coverage Modified: - command_ec_encode.go: Call syncAndSelectBestReplica before encoding - command_volume_tier_move.go: Call syncAndSelectBestReplica before moving Fixes #7797 * test: add integration test for replicated volume sync during ec.encode * test: improve retry logic for replicated volume integration test * fix: resolve JWT issue in integration tests by using empty security.toml * address review comments: add readNeedleMeta, parallelize status fetch, fix collection param, fix test issues * test: use collection parameter consistently in replica sync test * fix: convert weed binary path to absolute to work with changed working directory * fix: remove skip behavior, keep tests failing on missing binary * fix: always check recency for each needle, add divergent replica test |
6 days ago |
|
|
9c4a2e1b1a
|
fix: JWT validation failures during replication (#7788) (#7795)
fix: add debug logging for JWT validation failures (#7788)
When JWT file ID validation fails during replication, add a log message showing both the expected and actual file IDs to help diagnose issues.
Ref #7788 |
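A minimal sketch of the diagnostic described above, using the standard library logger rather than SeaweedFS's own; the function name and parameters are assumptions, not the actual replication code.

```go
package replication

import (
	"errors"
	"log"
)

// checkJwtFileId is a hypothetical illustration: when the file id carried in
// the JWT does not match the file id being written, log both values so the
// mismatch can be diagnosed from the logs, then fail the request.
func checkJwtFileId(tokenFid, requestFid string) error {
	if tokenFid != requestFid {
		log.Printf("jwt validation failed: token is for fid=%q but request writes fid=%q", tokenFid, requestFid)
		return errors.New("jwt does not match the requested file id")
	}
	return nil
}
```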
7 days ago |
|
|
02f7d3f3e2
|
Fix S3 server panic when -s3.port.https equals -s3.port (#7794)
* Fix volume repeatedly toggling between crowded and uncrowded

  Fixes #6712

  The issue was that removeFromCrowded() was called in removeFromWritable(), which is invoked whenever a volume temporarily becomes unwritable (due to replica count fluctuations, heartbeat issues, or read-only state changes). This caused unnecessary toggling:
  1. Volume becomes temporarily unwritable → removeFromWritable() → removeFromCrowded() logs 'becomes uncrowded'
  2. Volume becomes writable again
  3. CollectDeadNodeAndFullVolumes() runs → setVolumeCrowded() logs 'becomes crowded'

  The fix:
  - Remove the removeFromCrowded() call from removeFromWritable()
  - Only clear crowded status when the volume is fully unregistered from the layout (when location.Length() == 0 in UnRegisterVolume)

  This ensures transient state changes don't cause log spam and the crowded status accurately reflects the volume's size relative to the grow threshold.

* Refactor test to use subtests for better readability

  Address review feedback: use t.Run subtests to make the test's intent clearer by giving each verification step a descriptive name.

* Fix S3 server panic when -s3.port.https equals -s3.port

  When starting the S3 server with -s3.port.https=8333 (same as the default -s3.port), the server would panic with a nil pointer dereference because:
  1. The HTTP listener was already bound to port 8333
  2. NewIpAndLocalListeners for HTTPS failed but the error was discarded
  3. ServeTLS was called on a nil listener, causing the panic

  This fix:
  - Adds early validation to prevent using the same port for HTTP and HTTPS
  - Properly handles the error from NewIpAndLocalListeners for HTTPS

  Fixes #7792 |
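A small sketch of the early validation described above for the HTTPS port fix. validateS3Ports is a hypothetical helper; the flag names are only echoed from the commit message, and the real startup code may structure the check differently.

```go
package s3

import "fmt"

// validateS3Ports rejects a configuration where the HTTPS listener would try
// to bind the port already taken by the HTTP listener, so the server fails
// fast with a clear error instead of later calling ServeTLS on a nil listener.
func validateS3Ports(httpPort, httpsPort int) error {
	if httpsPort != 0 && httpsPort == httpPort {
		return fmt.Errorf("-s3.port.https (%d) must be different from -s3.port (%d)", httpsPort, httpPort)
	}
	return nil
}
```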
7 days ago |
|
|
8518f06777
|
Fix volume repeatedly toggling between crowded and uncrowded (#7793)
* Fix volume repeatedly toggling between crowded and uncrowded

  Fixes #6712

  The issue was that removeFromCrowded() was called in removeFromWritable(), which is invoked whenever a volume temporarily becomes unwritable (due to replica count fluctuations, heartbeat issues, or read-only state changes). This caused unnecessary toggling:
  1. Volume becomes temporarily unwritable → removeFromWritable() → removeFromCrowded() logs 'becomes uncrowded'
  2. Volume becomes writable again
  3. CollectDeadNodeAndFullVolumes() runs → setVolumeCrowded() logs 'becomes crowded'

  The fix:
  - Remove the removeFromCrowded() call from removeFromWritable()
  - Only clear crowded status when the volume is fully unregistered from the layout (when location.Length() == 0 in UnRegisterVolume)

  This ensures transient state changes don't cause log spam and the crowded status accurately reflects the volume's size relative to the grow threshold.

* Refactor test to use subtests for better readability

  Address review feedback: use t.Run subtests to make the test's intent clearer by giving each verification step a descriptive name. |
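An illustrative sketch of the state handling after the change described above. The layoutState type and method names are simplified stand-ins for the volume layout's bookkeeping, not the real topology code.

```go
package topology

// layoutState is a simplified stand-in for the volume layout's bookkeeping.
type layoutState struct {
	writable  map[uint32]bool
	crowded   map[uint32]bool
	locations map[uint32]int // volume id -> remaining replica locations
}

// removeFromWritable marks a volume unwritable but deliberately leaves its
// crowded flag alone, so transient unwritable events no longer toggle the
// crowded/uncrowded log messages.
func (s *layoutState) removeFromWritable(vid uint32) {
	delete(s.writable, vid)
}

// unRegisterVolume clears the crowded flag only once the last location of
// the volume is gone from the layout.
func (s *layoutState) unRegisterVolume(vid uint32) {
	if s.locations[vid] > 0 {
		s.locations[vid]--
	}
	if s.locations[vid] == 0 {
		delete(s.locations, vid)
		delete(s.writable, vid)
		delete(s.crowded, vid)
	}
}
```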
7 days ago |
|
|
504b258258
|
s3: fix remote object not caching (#7790)
* s3: fix remote object not caching

* s3: address review comments for remote object caching
  - Fix leading slash in object name by using strings.TrimPrefix
  - Return cached entry from CacheRemoteObjectToLocalCluster to get updated local chunk locations
  - Reuse existing helper function instead of inline gRPC call

* s3/filer: add singleflight deduplication for remote object caching
  - Add singleflight.Group to FilerServer to deduplicate concurrent cache operations
  - Wrap CacheRemoteObjectToLocalCluster with singleflight to ensure only one caching operation runs per object when multiple clients request the same file
  - Add early-return check for already-cached objects
  - S3 API calls filer gRPC with timeout and graceful fallback on error
  - Clear negative bucket cache when bucket is created via weed shell
  - Add integration tests for remote cache with singleflight deduplication

  This benefits all clients (S3, HTTP, Hadoop) accessing remote-mounted objects by preventing redundant cache operations and improving concurrent access performance.

  Fixes: https://github.com/seaweedfs/seaweedfs/discussions/7599

* fix: data race in concurrent remote object caching
  - Add mutex to protect chunks slice from concurrent append
  - Add mutex to protect fetchAndWriteErr from concurrent read/write
  - Fix incorrect error check (was checking assignResult.Error instead of parseErr)
  - Rename inner variable to avoid shadowing fetchAndWriteErr

* fix: address code review comments
  - Remove duplicate remote caching block in GetObjectHandler, keep only singleflight version
  - Add mutex protection for concurrent chunk slice and error access (data race fix)
  - Use lazy initialization for S3 client in tests to avoid panic during package load
  - Fix markdown linting: add language specifier to code fence, blank lines around tables
  - Add 'all' target to Makefile as alias for test-with-server
  - Remove unused 'util' import

* style: remove emojis from test files

* fix: add defensive checks and sort chunks by offset
  - Add nil check and type assertion check for singleflight result
  - Sort chunks by offset after concurrent fetching to maintain file order

* fix: improve test diagnostics and path normalization
  - runWeedShell now returns error for better test diagnostics
  - Add all targets to .PHONY in Makefile (logs-primary, logs-remote, health)
  - Strip leading slash from normalizedObject to avoid double slashes in path

---------

Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> |
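A minimal sketch of the singleflight deduplication described above, using golang.org/x/sync/singleflight. The remoteCacheServer struct and the cacheRemoteObject callback are assumptions standing in for the filer's CacheRemoteObjectToLocalCluster path, not the actual FilerServer code.

```go
package filer

import "golang.org/x/sync/singleflight"

type remoteCacheServer struct {
	inflight singleflight.Group
}

// ensureCached collapses concurrent requests for the same remote-mounted
// object into a single caching operation; every waiter receives the entry
// (and error) produced by the one call that actually ran.
func (s *remoteCacheServer) ensureCached(objectPath string, cacheRemoteObject func(path string) (interface{}, error)) (interface{}, error) {
	entry, err, _ := s.inflight.Do(objectPath, func() (interface{}, error) {
		return cacheRemoteObject(objectPath)
	})
	return entry, err
}
```

The key is the object path, so two clients asking for different objects still cache in parallel, while a thundering herd on one object results in exactly one fetch.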
7 days ago |
|
|
697b56003d
|
s3: reduce ObjectVersion memory by not retaining full Entry (#7786)
s3: fix fallback owner lookup to use specific version
Address review feedback: the fallback logic was incorrectly using getLatestObjectVersion, which returns the wrong owner when different versions have different owners. Fix by using getSpecificObjectVersion with version.VersionId to fetch the correct entry for the specific version being processed. This also simplifies the code by removing the separate null-version handling, since getSpecificObjectVersion already handles that case. |
7 days ago |
|
|
956c5a1626 |
s3: fix pagination by collecting all versions when keyMarker is set
When paginating with keyMarker, we must collect all versions first because filtering happens after sorting. Previously, we limited collection to maxKeys+1 which caused us to miss versions beyond the marker when there were many versions before it. |
1 week ago |
|
|
daa3af826f |
ci: fix stress tests by adding server start/stop
|
1 week ago |
|
|
aff144f8b5 |
ci: run versioning stress tests on all PRs, not just master pushes
|
1 week ago |
|
|
9150d84eea |
test: use -master.peers=none for faster test server startup
|
1 week ago |
|
|
5dd34e3260 |
s3: fix ListObjectVersions pagination by implementing key-marker filtering
The ListObjectVersions API was receiving key-marker and version-id-marker parameters but not using them to filter results. This caused infinite pagination loops when clients tried to paginate through results.

Fix by adding filtering logic after sorting:
- Skip versions with key < keyMarker (already returned in previous pages)
- For key == keyMarker, skip versions with versionId >= versionIdMarker
- Include versions with key > keyMarker or (key == keyMarker and versionId < versionIdMarker)

This respects the S3 sort order (key ascending, versionId descending for the same key) and correctly returns only versions that come AFTER the marker position. |
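A sketch of the post-sort marker filtering described above. The objectVersion type and function names are illustrative, not the actual handler code, and the handling of an empty version-id marker is an assumption.

```go
package s3api

// objectVersion is an illustrative stand-in for one listed version entry.
type objectVersion struct {
	Key       string
	VersionId string
}

// afterMarker reports whether a version comes strictly after the
// (keyMarker, versionIdMarker) position, given the S3 sort order:
// keys ascending, version ids descending within the same key.
func afterMarker(v objectVersion, keyMarker, versionIdMarker string) bool {
	if keyMarker == "" {
		return true // no marker: include everything
	}
	if v.Key < keyMarker {
		return false // already returned on a previous page
	}
	if v.Key > keyMarker {
		return true
	}
	// Same key as the marker: include only versions that sort after the
	// marker's version id; with no version-id marker, the whole key was
	// already returned.
	if versionIdMarker == "" {
		return false
	}
	return v.VersionId < versionIdMarker
}

// filterByMarker applies afterMarker to an already-sorted slice of versions.
func filterByMarker(sorted []objectVersion, keyMarker, versionIdMarker string) []objectVersion {
	var out []objectVersion
	for _, v := range sorted {
		if afterMarker(v, keyMarker, versionIdMarker) {
			out = append(out, v)
		}
	}
	return out
}
```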
1 week ago |
|
|
26121c55c9 |
test: improve pagination stress test with QUICK_TEST option and better assertions
|
1 week ago |
|
|
f517bc39fc |
test: fix nil pointer dereference and add debugging to pagination stress tests
|
1 week ago |