Tree: 6e7730a70e
add-admin-and-worker-to-helm-charts
add-ec-vacuum
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
feature/tus-protocol
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-mount-http-parallelism
fix-s3-object-tagging-issue-7589
fix-versioning-listing-only
fix/sftp-homedir-path-translation
ftp
gh-pages
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
original_weed_mount
pr-7412
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-select
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
12143 Commits (6e7730a70e6fe19d6abe3d2f480b717a35079ece)
| SHA1 | Message | Date |
|---|---|---|
6e7730a70e |
Rename DefaultConfig to DefaultPlacementRequest
Aligns with the PlacementRequest type naming for consistency |
3 days ago |
44e1d9a0ed |
Rename PlacementConfig to PlacementRequest for clarity
PlacementRequest better reflects that this is a request for placement rather than a configuration object. This improves API semantics. |
3 days ago |
6ebda6f276 |
Make volume server evacuation disk-aware
- Use pickBestDiskOnNode() when selecting evacuation target disk - Specify target disk in evacuation RPC requests - Maintains balanced disk distribution during server evacuations |
3 days ago |
01a440bc36 |
Use placement package in EC detection for consistent disk-level placement
- Replace custom EC disk selection logic with shared placement package - Convert topology DiskInfo to placement.DiskCandidate format - Use SelectDestinations() for multi-rack/server/disk spreading - Convert placement results back to topology DiskInfo for task creation - Ensures EC detection uses same placement logic as shell commands |
3 days ago |
32d4f8e2c7 |
Make ec.balance disk-aware for multi-disk servers
- Add EcDisk struct to track individual disks on volume servers - Update EcNode to maintain per-disk shard distribution - Parse disk_id from EC shard information during topology collection - Implement pickBestDiskOnNode() for selecting best disk per shard - Add diskDistributionScore() for tie-breaking node selection - Update all move operations to specify target disk in RPC calls - Improves shard balance within multi-disk servers, not just across servers |
3 days ago |
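The ec.balance commit above centers on two per-disk helpers, pickBestDiskOnNode() and diskDistributionScore(). A minimal sketch of what such helpers could look like follows; the EcDisk fields and both function bodies are illustrative assumptions, not the actual shell code.

```go
// Illustrative sketch: disk-aware selection for ec.balance. Names follow the
// commit message, but the fields and logic here are assumptions.
package ecbalance

// EcDisk mirrors the per-disk shard tracking described in the commit.
type EcDisk struct {
	DiskID     uint32
	ShardCount int
}

// pickBestDiskOnNode returns the disk holding the fewest EC shards, so a
// moved shard lands on the least-loaded disk of a multi-disk server.
func pickBestDiskOnNode(disks []EcDisk) (best EcDisk, ok bool) {
	for i, d := range disks {
		if i == 0 || d.ShardCount < best.ShardCount {
			best, ok = d, true
		}
	}
	return best, ok
}

// diskDistributionScore is a tie-breaker between candidate nodes: a smaller
// spread between the fullest and emptiest disk means better internal balance.
func diskDistributionScore(disks []EcDisk) int {
	if len(disks) == 0 {
		return 0
	}
	minCount, maxCount := disks[0].ShardCount, disks[0].ShardCount
	for _, d := range disks {
		if d.ShardCount < minCount {
			minCount = d.ShardCount
		}
		if d.ShardCount > maxCount {
			maxCount = d.ShardCount
		}
	}
	return maxCount - minCount
}
```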
439a4a60b6 |
Add placement package for EC shard placement logic
- Consolidate EC shard placement algorithm for reuse across shell and worker tasks - Support multi-pass selection: racks, then servers, then disks - Include proper spread verification and scoring functions - Comprehensive test coverage for various cluster topologies |
3 days ago |
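The placement package above selects destinations in passes: new racks first, then new servers, then any remaining disk. Below is a compact sketch of that ordering; DiskCandidate and SelectDestinations are names taken from the commit message, while the fields, signature, and pass logic are assumptions for illustration only.

```go
// Illustrative sketch: multi-pass rack -> server -> disk spreading.
// Not the real placement API; all details are assumed.
package placement

type DiskCandidate struct {
	Rack   string
	Server string
	DiskID uint32
}

// SelectDestinations picks up to n disks, preferring unused racks, then
// unused servers, then falling back to any candidate not yet chosen.
func SelectDestinations(candidates []DiskCandidate, n int) []DiskCandidate {
	chosen := make(map[int]bool)
	usedRacks := make(map[string]bool)
	usedServers := make(map[string]bool)
	var picked []DiskCandidate

	pass := func(ok func(DiskCandidate) bool) {
		for i, c := range candidates {
			if len(picked) >= n || chosen[i] || !ok(c) {
				continue
			}
			chosen[i] = true
			usedRacks[c.Rack] = true
			usedServers[c.Server] = true
			picked = append(picked, c)
		}
	}

	pass(func(c DiskCandidate) bool { return !usedRacks[c.Rack] })     // spread across racks
	pass(func(c DiskCandidate) bool { return !usedServers[c.Server] }) // then across servers
	pass(func(c DiskCandidate) bool { return true })                   // then any remaining disk
	return picked
}
```

Because EC detection (two entries above) converts topology DiskInfo into the same candidate type, shell commands and worker tasks end up spreading shards with one shared routine.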
5461f85240
|
Update README and weed/Makefile (#7571)
* Add link to wiki installation page in README * Add building for docker in weed/Makefile Building without `CGO_ENABLED=0` and using the executable in docker can result in a docker container exiting with an error * Update README.md Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * Add GOOS=linux to build_docker target for cross-compilation --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> Co-authored-by: Chris Lu <chris.lu@gmail.com> |
4 days ago |
d48e1e1659
|
mount: improve read throughput with parallel chunk fetching (#7569)
* mount: improve read throughput with parallel chunk fetching This addresses issue #7504 where a single weed mount FUSE instance does not fully utilize node network bandwidth when reading large files. Changes: - Add -concurrentReaders mount option (default: 16) to control the maximum number of parallel chunk fetches during read operations - Implement parallel section reading in ChunkGroup.ReadDataAt() using errgroup for better throughput when reading across multiple sections - Enhance ReaderCache with MaybeCacheMany() to prefetch multiple chunks ahead in parallel during sequential reads (now prefetches 4 chunks) - Increase ReaderCache limit dynamically based on concurrentReaders to support higher read parallelism The bottleneck was that chunks were being read sequentially even when they reside on different volume servers. By introducing parallel chunk fetching, a single mount instance can now better saturate available network bandwidth. Fixes: #7504 * fmt * Address review comments: make prefetch configurable, improve error handling Changes: 1. Add DefaultPrefetchCount constant (4) to reader_at.go 2. Add GetPrefetchCount() method to ChunkGroup that derives prefetch count from concurrentReaders (1/4 ratio, min 1, max 8) 3. Pass prefetch count through NewChunkReaderAtFromClient 4. Fix error handling in readDataAtParallel to prioritize errgroup error 5. Update all callers to use DefaultPrefetchCount constant For mount operations, prefetch scales with -concurrentReaders: - concurrentReaders=16 (default) -> prefetch=4 - concurrentReaders=32 -> prefetch=8 (capped) - concurrentReaders=4 -> prefetch=1 For non-mount paths (WebDAV, query engine, MQ), uses DefaultPrefetchCount. * fmt * Refactor: use variadic parameter instead of new function name Use NewChunkGroup with optional concurrentReaders parameter instead of creating a separate NewChunkGroupWithConcurrency function. This maintains backward compatibility - existing callers without the parameter get the default of 16 concurrent readers. * Use explicit concurrentReaders parameter instead of variadic * Refactor: use MaybeCache with count parameter instead of new MaybeCacheMany function * Address nitpick review comments - Add upper bound (128) on concurrentReaders to prevent excessive goroutine fan-out - Cap readerCacheLimit at 256 accordingly - Fix SetChunks: use Lock() instead of RLock() since we are writing to group.sections |
4 days ago |
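The mount change above reads chunk sections in parallel and bounds the fan-out with the new -concurrentReaders option (default 16, with prefetch derived from it). A minimal sketch of that pattern with errgroup is below; the section struct and fetchSection helper are hypothetical stand-ins for the real ChunkGroup.ReadDataAt / ReaderCache code.

```go
// Illustrative sketch: bounded parallel section reads, in the spirit of the
// -concurrentReaders mount option. fetchSection is a hypothetical helper.
package main

import (
	"context"
	"fmt"

	"golang.org/x/sync/errgroup"
)

type section struct{ offset, size int64 }

// fetchSection stands in for one read against a volume server.
func fetchSection(ctx context.Context, s section, buf []byte) error {
	// ... issue the actual chunk read here ...
	return nil
}

// readSectionsParallel fills buf from all sections with at most
// concurrentReaders fetches in flight at once.
func readSectionsParallel(ctx context.Context, sections []section, buf []byte, concurrentReaders int) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(concurrentReaders)
	for _, s := range sections {
		s := s // capture for the goroutine
		g.Go(func() error {
			return fetchSection(ctx, s, buf[s.offset:s.offset+s.size])
		})
	}
	return g.Wait()
}

func main() {
	buf := make([]byte, 8)
	err := readSectionsParallel(context.Background(), []section{{0, 4}, {4, 4}}, buf, 16)
	fmt.Println("err:", err)
}
```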
bd419fda51
|
fix: copy to bucket with default SSE-S3 encryption fails (#7562) (#7568)
* filer use context without cancellation * pass along context * fix: copy to bucket with default SSE-S3 encryption fails (#7562) When copying an object from an encrypted bucket to a temporary unencrypted bucket, then to another bucket with default SSE-S3 encryption, the operation fails with 'invalid SSE-S3 source key type' error. Root cause: When objects are copied from an SSE-S3 encrypted bucket to an unencrypted bucket, the 'X-Amz-Server-Side-Encryption: AES256' header is preserved but the actual encryption key (SeaweedFSSSES3Key) is stripped. This creates an 'orphaned' SSE-S3 header that causes IsSSES3EncryptedInternal() to return true, triggering decryption logic with a nil key. Fix: 1. Modified IsSSES3EncryptedInternal() to require BOTH the AES256 header AND the SeaweedFSSSES3Key to be present before returning true 2. Added isOrphanedSSES3Header() to detect orphaned SSE-S3 headers 3. Updated copy handler to strip orphaned headers during copy operations Fixes #7562 * fmt * refactor: simplify isOrphanedSSES3Header function logic Remove redundant existence check since the caller iterates through metadata map, making the check unnecessary. Improves readability while maintaining the same functionality. |
5 days ago |
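The SSE-S3 copy fix above hinges on telling a genuinely encrypted object apart from one that only carries a leftover header. A small sketch of that distinction follows, assuming hypothetical helper shapes; only the header and key names come from the commit text.

```go
// Illustrative sketch: detect an "orphaned" SSE-S3 header (AES256 marker
// present but no stored key). Helper shapes are assumptions.
package main

import "fmt"

const (
	headerSSE    = "X-Amz-Server-Side-Encryption" // "AES256" marks SSE-S3
	metaSSES3Key = "SeaweedFSSSES3Key"            // the stored encryption key
)

// isSSES3Encrypted requires BOTH the AES256 marker and the stored key, so a
// header without a key never triggers decryption with a nil key.
func isSSES3Encrypted(metadata map[string][]byte) bool {
	return string(metadata[headerSSE]) == "AES256" && len(metadata[metaSSES3Key]) > 0
}

// isOrphanedSSES3Header flags the header-without-key case so a copy handler
// can strip the stale header instead of failing.
func isOrphanedSSES3Header(metadata map[string][]byte) bool {
	return string(metadata[headerSSE]) == "AES256" && len(metadata[metaSSES3Key]) == 0
}

func main() {
	orphan := map[string][]byte{headerSSE: []byte("AES256")}
	fmt.Println(isSSES3Encrypted(orphan), isOrphanedSSES3Header(orphan)) // false true
}
```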
7e4bab8032
|
filer write request use context without cancellation (#7567)
* filer use context without cancellation * pass along context |
5 days ago |
03ce060e85
|
fix too many pings (#7566)
* fix too many pings * constants * Update weed/pb/grpc_client_server.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
5 days ago |
626954b76c |
unified builds
|
6 days ago |
755b1b55ff |
reduce dockerhub operations
|
6 days ago |
5734223b2b |
fix git checkout
|
6 days ago |
0064e37206 |
fix build issues
|
6 days ago |
487c0f92a9 |
fmt
|
6 days ago |
49f6edc001 |
build use https://mirror.gcr.io
|
6 days ago |
8102fb5642 |
Add Docker Hub registry mirror to avoid rate limits
|
6 days ago |
76b9611881 |
fix nil map
|
6 days ago |
50fb79be2b |
Merge branch 'master' of https://github.com/seaweedfs/seaweedfs
|
6 days ago |
3c1914532f |
re-organize github actions
|
6 days ago |
e5521673eb
|
Helm Charts: add certificate duration and renewBefore options (#7563)
* Helm Charts: add certificate duration and renewBefore options Signed-off-by: ohotnikov.ivan <ohotnikov.ivan@e-queo.net> * use .Values.global.certificates instead certificates ca --------- Signed-off-by: ohotnikov.ivan <ohotnikov.ivan@e-queo.net> Co-authored-by: ohotnikov.ivan <ohotnikov.ivan@e-queo.net> Co-authored-by: Chris Lu <chris.lu@gmail.com> |
6 days ago |
75d593d7fa |
use .Values.global.certificates instead
certificates ca |
6 days ago |
f00cd38393 |
certificates ca
|
6 days ago |
3ae05b072a |
use .Values.global.certificates instead
|
6 days ago |
0e385122b5 |
Add free disk space step to container build workflows to prevent 'No space left on device' errors
free space |
6 days ago |
7e15a4abe2 |
4.01
|
6 days ago |
5c25df20f2
|
feat(volume.fix): show all replica locations for misplaced volumes (#7560)
|
7 days ago |
865fc88b3b |
java 4.00
|
7 days ago |
ebb4f57cc7
|
s3api: Fix response-content-disposition query parameter not being honored (#7559)
* s3api: Fix response-content-disposition query parameter not being honored Fixes #7486 This fix resolves an issue where S3 presigned URLs with query parameters like `response-content-disposition`, `response-content-type`, etc. were being ignored, causing browsers to use default file handling instead of the specified behavior. Changes: - Modified `setResponseHeaders()` to accept the HTTP request object - Added logic to process S3 passthrough headers from query parameters - Updated all call sites to pass the request object - Supports all AWS S3 response override parameters: - response-content-disposition - response-content-type - response-cache-control - response-content-encoding - response-content-language - response-expires The implementation follows the same pattern used in the filer handler and properly honors the AWS S3 API specification for presigned URLs. Testing: - Existing S3 API tests pass without modification - Build succeeds with no compilation errors * Update weed/s3api/s3api_object_handlers.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
7 days ago |
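The presigned-URL fix above boils down to copying the response-* query parameters onto the outgoing headers. A minimal sketch, assuming a hypothetical applyResponseOverrides helper rather than the actual setResponseHeaders change:

```go
// Illustrative sketch: honor S3 response override query parameters.
// Parameter names follow the AWS spec listed in the commit; the helper itself
// is an assumption.
package s3api

import "net/http"

// query parameter -> response header it overrides
var responseOverrides = map[string]string{
	"response-content-disposition": "Content-Disposition",
	"response-content-type":        "Content-Type",
	"response-cache-control":       "Cache-Control",
	"response-content-encoding":    "Content-Encoding",
	"response-content-language":    "Content-Language",
	"response-expires":             "Expires",
}

// applyResponseOverrides sets any requested overrides before the object body
// is streamed, so presigned URLs control disposition, type, caching, etc.
func applyResponseOverrides(w http.ResponseWriter, r *http.Request) {
	query := r.URL.Query()
	for param, header := range responseOverrides {
		if v := query.Get(param); v != "" {
			w.Header().Set(header, v)
		}
	}
}
```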
4106fc0436
|
fix(tikv): improve context propagation and refactor batch delete logic (#7558)
* fix(tikv): improve context propagation and refactor batch delete logic Address review comments from PR #7557: 1. Replace context.TODO() with ctx in txn.Get calls - Fixes timeout/cancellation propagation in FindEntry - Fixes timeout/cancellation propagation in KvGet 2. Refactor DeleteFolderChildren to use flush helper - Eliminates code duplication - Cleaner and more maintainable These changes ensure proper context propagation throughout all TiKV operations and improve code maintainability. * error formatting |
1 week ago |
848bec6d24
|
Metrics: Add Prometheus metrics for concurrent upload tracking (#7555)
* metrics: add Prometheus metrics for concurrent upload tracking Add Prometheus metrics to monitor concurrent upload activity for both filer and S3 servers. This provides visibility into the upload limiting feature added in the previous PR. New Metrics: - SeaweedFS_filer_in_flight_upload_bytes: Current bytes being uploaded to filer - SeaweedFS_filer_in_flight_upload_count: Current number of uploads to filer - SeaweedFS_s3_in_flight_upload_bytes: Current bytes being uploaded to S3 - SeaweedFS_s3_in_flight_upload_count: Current number of uploads to S3 The metrics are updated atomically whenever uploads start or complete, providing real-time visibility into upload concurrency levels. This helps operators: - Monitor upload concurrency in real-time - Set appropriate limits based on actual usage patterns - Detect potential bottlenecks or capacity issues - Track the effectiveness of upload limiting configuration * grafana: add dashboard panels for concurrent upload metrics Add 4 new panels to the Grafana dashboard to visualize the concurrent upload metrics added in this PR: Filer Section: - Filer Concurrent Uploads: Shows current number of concurrent uploads - Filer Concurrent Upload Bytes: Shows current bytes being uploaded S3 Gateway Section: - S3 Concurrent Uploads: Shows current number of concurrent uploads - S3 Concurrent Upload Bytes: Shows current bytes being uploaded These panels help operators monitor upload concurrency in real-time and tune the upload limiting configuration based on actual usage patterns. * more efficient |
1 week ago |
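The metric names above map directly onto Prometheus gauges that are bumped when an upload starts and released when it completes. A sketch of one pair of those gauges follows; the metric names match the commit, while the trackUpload helper and its wiring are assumptions.

```go
// Illustrative sketch: in-flight upload gauges
// ("SeaweedFS_filer_in_flight_upload_*"). The helper is an assumption.
package stats

import "github.com/prometheus/client_golang/prometheus"

var (
	filerInFlightUploadBytes = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "SeaweedFS", Subsystem: "filer",
		Name: "in_flight_upload_bytes", Help: "Current bytes being uploaded to filer",
	})
	filerInFlightUploadCount = prometheus.NewGauge(prometheus.GaugeOpts{
		Namespace: "SeaweedFS", Subsystem: "filer",
		Name: "in_flight_upload_count", Help: "Current number of uploads to filer",
	})
)

func init() {
	prometheus.MustRegister(filerInFlightUploadBytes, filerInFlightUploadCount)
}

// trackUpload bumps both gauges for the lifetime of one upload; defer the
// returned func to release them when the upload finishes.
func trackUpload(sizeBytes int64) (done func()) {
	filerInFlightUploadCount.Inc()
	filerInFlightUploadBytes.Add(float64(sizeBytes))
	return func() {
		filerInFlightUploadCount.Dec()
		filerInFlightUploadBytes.Sub(float64(sizeBytes))
	}
}
```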
5287d9f3e3
|
fix(tikv): replace DeleteRange with transaction-based batch deletes (#7557)
* fix(tikv): replace DeleteRange with transaction-based batch deletes Fixes #7187 Problem: TiKV's DeleteRange API is a RawKV operation that bypasses transaction isolation. When SeaweedFS filer uses TiKV with txn client and another service uses RawKV client on the same cluster, DeleteFolderChildren can accidentally delete KV pairs from the RawKV client because DeleteRange operates at the raw key level without respecting transaction boundaries. Reproduction: 1. SeaweedFS filer using TiKV txn client for metadata 2. Another service using rawkv client on same TiKV cluster 3. Filer performs batch file deletion via DeleteFolderChildren 4. Result: ~50% of rawkv client's KV pairs get deleted Solution: Replace client.DeleteRange() (RawKV API) with transactional batch deletes using txn.Delete() within transactions. This ensures: - Transaction isolation - operations respect TiKV's MVCC boundaries - Keyspace separation - txn client and RawKV client stay isolated - Proper key handling - keys are copied to avoid iterator reuse issues - Batch processing - deletes batched (10K default) to manage memory Changes: 1. Core data structure: - Removed deleteRangeConcurrency field - Added batchCommitSize field (configurable, default 10000) 2. DeleteFolderChildren rewrite: - Replaced DeleteRange with iterative batch deletes - Added proper transaction lifecycle management - Implemented key copying to avoid iterator buffer reuse - Added batching to prevent memory exhaustion 3. New deleteBatch helper: - Handles transaction creation and lifecycle - Batches deletes within single transaction - Properly commits/rolls back based on context 4. Context propagation: - Updated RunInTxn to accept context parameter - All RunInTxn call sites now pass context - Enables proper timeout/cancellation handling 5. Configuration: - Removed deleterange_concurrency setting - Added batchdelete_count setting (default 10000) All critical review comments from PR #7188 have been addressed: - Proper key copying with append([]byte(nil), key...) - Conditional transaction rollback based on inContext flag - Context propagation for commits - Proper transaction lifecycle management - Configurable batch size Co-authored-by: giftz <giftz@users.noreply.github.com> * fix: remove extra closing brace causing syntax error in tikv_store.go --------- Co-authored-by: giftz <giftz@users.noreply.github.com> |
1 week ago |
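The TiKV fix above swaps a raw DeleteRange for per-batch transactional deletes, copying each key out of the iterator and committing every N keys. The sketch below shows that batching shape against a tiny stand-in interface; the kvTxn methods are assumptions chosen for the example, not the client-go API, and the real store also handles rollback and context propagation.

```go
// Illustrative sketch: transactional batch deletes instead of DeleteRange.
// kvTxn stands in for the TiKV transactional client; all names are assumed.
package tikvstore

type kvTxn interface {
	Delete(key []byte) error
	Commit() error
}

// deleteBatch removes one batch of keys inside a single transaction, keeping
// the operation inside TiKV's MVCC boundaries.
func deleteBatch(begin func() (kvTxn, error), keys [][]byte) error {
	txn, err := begin()
	if err != nil {
		return err
	}
	for _, k := range keys {
		if err := txn.Delete(k); err != nil {
			return err // a real store would also roll back here
		}
	}
	return txn.Commit()
}

// deleteFolderChildren scans a directory's key range and deletes it in
// batches (10000 by default, per batchdelete_count), copying each key before
// the iterator reuses its buffer.
func deleteFolderChildren(begin func() (kvTxn, error), scan func(yield func(key []byte) bool) error, batchSize int) error {
	batch := make([][]byte, 0, batchSize)
	flush := func() error {
		if len(batch) == 0 {
			return nil
		}
		if err := deleteBatch(begin, batch); err != nil {
			return err
		}
		batch = batch[:0]
		return nil
	}
	var flushErr error
	if err := scan(func(key []byte) bool {
		batch = append(batch, append([]byte(nil), key...)) // copy the key
		if len(batch) >= batchSize {
			flushErr = flush()
		}
		return flushErr == nil // stop scanning on a failed flush
	}); err != nil {
		return err
	}
	if flushErr != nil {
		return flushErr
	}
	return flush()
}
```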
cd2fac4551
|
S3: pass HTTP 429 from volume servers to S3 clients (#7556)
With the recent changes (commit
|
1 week ago |
0653394ec6
|
fix(filer-ui): support folder creation with JWT token in URL (#7271)
fix: filer ui create folder with jwt token error |
1 week ago |
3ab26e39ff
|
fix(s3api): fix AWS Signature V2 format and validation (#7488)
* fix(s3api): fix AWS Signature V2 format and validation * fix(s3api): Skip space after "AWS" prefix (+1 offset) * test(s3api): add unit tests for Signature V2 authentication fix * fix(s3api): simply comparing signatures * validation for the colon extraction in expectedAuth --------- Co-authored-by: chrislu <chris.lu@gmail.com> |
1 week ago |
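The Signature V2 fix above is mostly careful parsing of the "AWS AccessKeyId:Signature" authorization header, including the space after the "AWS" prefix and validation of the colon split. A minimal sketch, with a hypothetical parseSigV2Auth helper:

```go
// Illustrative sketch: split an AWS Signature V2 Authorization header.
// The header format is from the AWS spec; the helper name is an assumption.
package s3api

import (
	"fmt"
	"strings"
)

// parseSigV2Auth splits "AWS <AccessKeyID>:<Signature>" into its parts.
func parseSigV2Auth(authHeader string) (accessKey, signature string, err error) {
	const prefix = "AWS " // note: the prefix includes the trailing space
	if !strings.HasPrefix(authHeader, prefix) {
		return "", "", fmt.Errorf("not a signature v2 authorization header")
	}
	rest := authHeader[len(prefix):]
	colon := strings.LastIndex(rest, ":")
	if colon <= 0 || colon == len(rest)-1 {
		return "", "", fmt.Errorf("malformed signature v2 credential")
	}
	return rest[:colon], rest[colon+1:], nil
}
```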
edf0ef7a80
|
Filer, S3: Feature/add concurrent file upload limit (#7554)
* Support multiple filers for S3 and IAM servers with automatic failover
This change adds support for multiple filer addresses in the 'weed s3' and 'weed iam' commands, enabling high availability through automatic failover.
Key changes:
- Updated S3ApiServerOption.Filer to Filers ([]pb.ServerAddress)
- Updated IamServerOption.Filer to Filers ([]pb.ServerAddress)
- Modified -filer flag to accept comma-separated addresses
- Added getFilerAddress() helper methods for backward compatibility
- Updated all filer client calls to support multiple addresses
- Uses pb.WithOneOfGrpcFilerClients for automatic failover
Usage:
weed s3 -filer=localhost:8888,localhost:8889
weed iam -filer=localhost:8888,localhost:8889
The underlying FilerClient already supported multiple filers with health
tracking and automatic failover - this change exposes that capability
through the command-line interface.
* Add filer discovery: treat initial filers as seeds and discover peers from master
Enhances FilerClient to automatically discover additional filers in the same
filer group by querying the master server. This allows users to specify just
a few seed filers, and the client will discover all other filers in the cluster.
Key changes to wdclient/FilerClient:
- Added MasterClient, FilerGroup, and DiscoveryInterval fields
- Added thread-safe filer list management with RWMutex
- Implemented discoverFilers() background goroutine
- Uses cluster.ListExistingPeerUpdates() to query master for filers
- Automatically adds newly discovered filers to the list
- Added Close() method to clean up discovery goroutine
New FilerClientOption fields:
- MasterClient: enables filer discovery from master
- FilerGroup: specifies which filer group to discover
- DiscoveryInterval: how often to refresh (default 5 minutes)
Usage example:
masterClient := wdclient.NewMasterClient(...)
filerClient := wdclient.NewFilerClient(
[]pb.ServerAddress{"localhost:8888"}, // seed filers
grpcDialOption,
dataCenter,
&wdclient.FilerClientOption{
MasterClient: masterClient,
FilerGroup: "my-group",
},
)
defer filerClient.Close()
The initial filers act as seeds - the client discovers and adds all other
filers in the same group from the master. Discovered filers are added
dynamically without removing existing ones (relying on health checks for
unavailable filers).
* Address PR review comments: implement full failover for IAM operations
Critical fixes based on code review feedback:
1. **IAM API Failover (Critical)**:
- Replace pb.WithGrpcFilerClient with pb.WithOneOfGrpcFilerClients in:
* GetS3ApiConfigurationFromFiler()
* PutS3ApiConfigurationToFiler()
* GetPolicies()
* PutPolicies()
- Now all IAM operations support automatic failover across multiple filers
2. **Validation Improvements**:
- Add validation in NewIamApiServerWithStore() to require at least one filer
- Add validation in NewS3ApiServerWithStore() to require at least one filer
- Add warning log when no filers configured for credential store
3. **Error Logging**:
- Circuit breaker now logs when config load fails instead of silently ignoring
- Helps operators understand why circuit breaker limits aren't applied
4. **Code Quality**:
- Use ToGrpcAddress() for filer address in credential store setup
- More consistent with rest of codebase and future-proof
These changes ensure IAM operations have the same high availability guarantees
as S3 operations, completing the multi-filer failover implementation.
* Fix IAM manager initialization: remove code duplication, add TODO for HA
Addresses review comment on s3api_server.go:145
Changes:
- Remove duplicate code for getting first filer address
- Extract filerAddr variable once and reuse
- Add TODO comment documenting the HA limitation for IAM manager
- Document that loadIAMManagerFromConfig and NewS3IAMIntegration need
updates to support multiple filers for full HA
Note: This is a known limitation when using filer-backed IAM stores.
The interfaces need to be updated to accept multiple filer addresses.
For now, documenting this limitation clearly.
* Document credential store HA limitation with TODO
Addresses review comment on auth_credentials.go:149
Changes:
- Add TODO comment documenting that SetFilerClient interface needs update
for multi-filer support
- Add informative log message indicating HA limitation
- Document that this is a known limitation for filer-backed credential stores
The SetFilerClient interface currently only accepts a single filer address.
To properly support HA, the credential store interfaces need to be updated
to handle multiple filer addresses.
* Track current active filer in FilerClient for better HA
Add GetCurrentFiler() method to FilerClient that returns the currently
active filer based on the filerIndex which is updated on successful
operations. This provides better availability than always using the
first filer.
Changes:
- Add FilerClient.GetCurrentFiler() method that returns current active filer
- Update S3ApiServer.getFilerAddress() to use FilerClient's current filer
- Add fallback to first filer if FilerClient not yet initialized
- Document IAM limitation (doesn't have FilerClient access)
Benefits:
- Single-filer operations (URLs, ReadFilerConf, etc.) now use the
currently active/healthy filer
- Better distribution and failover behavior
- FilerClient's round-robin and health tracking automatically
determines which filer to use
* Document ReadFilerConf HA limitation in lifecycle handlers
Addresses review comment on s3api_bucket_handlers.go:880
Add comment documenting that ReadFilerConf uses the current active filer
from FilerClient (which is better than always using first filer), but
doesn't have built-in multi-filer failover.
Add TODO to update filer.ReadFilerConf to support multiple filers for
complete HA. For now, it uses the currently active/healthy filer tracked
by FilerClient which provides reasonable availability.
* Document multipart upload URL HA limitation
Addresses review comment on s3api_object_handlers_multipart.go:442
Add comment documenting that part upload URLs point to the current
active filer (tracked by FilerClient), which is better than always
using the first filer but still creates a potential point of failure
if that filer becomes unavailable during upload.
Suggest TODO solutions:
- Use virtual hostname/load balancer for filers
- Have S3 server proxy uploads to healthy filers
Current behavior provides reasonable availability by using the
currently active/healthy filer rather than being pinned to first filer.
* Document multipart completion Location URL limitation
Addresses review comment on filer_multipart.go:187
Add comment documenting that the Location URL in CompleteMultipartUpload
response points to the current active filer (tracked by FilerClient).
Note that clients should ideally use the S3 API endpoint rather than
this direct URL. If direct access is attempted and the specific filer
is unavailable, the request will fail.
Current behavior uses the currently active/healthy filer rather than
being pinned to the first filer, providing better availability.
* Make credential store use current active filer for HA
Update FilerEtcStore to use a function that returns the current active
filer instead of a fixed address, enabling high availability.
Changes:
- Add SetFilerAddressFunc() method to FilerEtcStore
- Store uses filerAddressFunc instead of fixed filerGrpcAddress
- withFilerClient() calls the function to get current active filer
- Keep SetFilerClient() for backward compatibility (marked deprecated)
- Update S3ApiServer to pass FilerClient.GetCurrentFiler to store
Benefits:
- Credential store now uses currently active/healthy filer
- Automatic failover when filer becomes unavailable
- True HA for credential operations
- Backward compatible with old SetFilerClient interface
This addresses the credential store limitation - no longer pinned to
first filer, uses FilerClient's tracked current active filer.
* Clarify multipart URL comments: filer address not used for uploads
Update comments to reflect that multipart upload URLs are not actually
used for upload traffic - uploads go directly to volume servers.
Key clarifications:
- genPartUploadUrl: Filer address is parsed out, only path is used
- CompleteMultipartUpload Location: Informational field per AWS S3 spec
- Actual uploads bypass filer proxy and go directly to volume servers
The filer address in these URLs is NOT a HA concern because:
1. Part uploads: URL is parsed for path, upload goes to volume servers
2. Location URL: Informational only, clients use S3 endpoint
This addresses the observation that S3 uploads don't go through filers,
only metadata operations do.
* Remove filer address from upload paths - pass path directly
Eliminate unnecessary filer address from upload URLs by passing file
paths directly instead of full URLs that get immediately parsed.
Changes:
- Rename genPartUploadUrl() → genPartUploadPath() (returns path only)
- Rename toFilerUrl() → toFilerPath() (returns path only)
- Update putToFiler() to accept filePath instead of uploadUrl
- Remove URL parsing code (no longer needed)
- Remove net/url import (no longer used)
- Keep old function names as deprecated wrappers for compatibility
Benefits:
- Cleaner code - no fake URL construction/parsing
- No dependency on filer address for internal operations
- More accurate naming (these are paths, not URLs)
- Eliminates confusion about HA concerns
This completely removes the filer address from upload operations - it was
never actually used for routing, only parsed for the path.
* Remove deprecated functions: use new path-based functions directly
Remove deprecated wrapper functions and update all callers to use the
new function names directly.
Removed:
- genPartUploadUrl() → all callers now use genPartUploadPath()
- toFilerUrl() → all callers now use toFilerPath()
- SetFilerClient() → removed along with fallback code
Updated:
- s3api_object_handlers_multipart.go: uploadUrl → filePath
- s3api_object_handlers_put.go: uploadUrl → filePath, versionUploadUrl → versionFilePath
- s3api_object_versioning.go: toFilerUrl → toFilerPath
- s3api_object_handlers_test.go: toFilerUrl → toFilerPath
- auth_credentials.go: removed SetFilerClient fallback
- filer_etc_store.go: removed deprecated SetFilerClient method
Benefits:
- Cleaner codebase with no deprecated functions
- All variable names accurately reflect that they're paths, not URLs
- Single interface for credential stores (SetFilerAddressFunc only)
All code now consistently uses the new path-based approach.
* Fix toFilerPath: remove URL escaping for raw file paths
The toFilerPath function should return raw file paths, not URL-escaped
paths. URL escaping was needed when the path was embedded in a URL
(old toFilerUrl), but now that we pass paths directly to putToFiler,
they should be unescaped.
This fixes S3 integration test failures:
- test_bucket_listv2_encoding_basic
- test_bucket_list_encoding_basic
- test_bucket_listv2_delimiter_whitespace
- test_bucket_list_delimiter_whitespace
The tests were failing because paths were double-encoded (escaped when
stored, then escaped again when listed), resulting in %252B instead of
%2B for '+' characters.
Root cause: When we removed URL parsing in putToFiler, we should have
also removed URL escaping in toFilerPath since paths are now used
directly without URL encoding/decoding.
* Add thread safety to FilerEtcStore and clarify credential store comments
Address review suggestions for better thread safety and code clarity:
1. **Thread Safety**: Add RWMutex to FilerEtcStore
- Protects filerAddressFunc and grpcDialOption from concurrent access
- Initialize() uses write lock when setting function
- SetFilerAddressFunc() uses write lock
- withFilerClient() uses read lock to get function and dial option
- GetPolicies() uses read lock to check if configured
2. **Improved Error Messages**:
- Prefix errors with "filer_etc:" for easier debugging
- "filer address not configured" → "filer_etc: filer address function not configured"
- "filer address is empty" → "filer_etc: filer address is empty"
3. **Clarified Comments**:
- auth_credentials.go: Clarify that initial setup is temporary
- Document that it's updated in s3api_server.go after FilerClient creation
- Remove ambiguity about when FilerClient.GetCurrentFiler is used
Benefits:
- Safe for concurrent credential operations
- Clear error messages for debugging
- Explicit documentation of initialization order
* Enable filer discovery: pass master addresses to FilerClient
Fix two critical issues:
1. **Filer Discovery Not Working**: Master client was not being passed to
FilerClient, so peer discovery couldn't work
2. **Credential Store Design**: Already uses FilerClient via GetCurrentFiler
function - this is the correct design for HA
Changes:
**Command (s3.go):**
- Read master addresses from GetFilerConfiguration response
- Pass masterAddresses to S3ApiServerOption
- Log master addresses for visibility
**S3ApiServerOption:**
- Add Masters []pb.ServerAddress field for discovery
**S3ApiServer:**
- Create MasterClient from Masters when available
- Pass MasterClient + FilerGroup to FilerClient via options
- Enable discovery with 5-minute refresh interval
- Log whether discovery is enabled or disabled
**Credential Store:**
- Already correctly uses filerClient.GetCurrentFiler via function
- This provides HA without tight coupling to FilerClient struct
- Function-based design is clean and thread-safe
Discovery Flow:
1. S3 command reads filer config → gets masters + filer group
2. S3ApiServer creates MasterClient from masters
3. FilerClient uses MasterClient to query for peer filers
4. Background goroutine refreshes peer list every 5 minutes
5. Credential store uses GetCurrentFiler to get active filer
Now filer discovery actually works!
* Use S3 endpoint in multipart Location instead of filer address
* Add multi-filer failover to ReadFilerConf
* Address CodeRabbit review: fix buffer reuse and improve lock safety
Address two code review suggestions:
1. **Fix buffer reuse in ReadFilerConfFromFilers**:
- Use local []byte data instead of shared buffer
- Prevents partial data from failed attempts affecting successful reads
- Creates fresh buffer inside callback for masterClient path
- More robust to future changes in read helpers
2. **Improve lock safety in FilerClient**:
- Add *WithHealth variants that accept health pointer
- Get health pointer while holding lock, then release before calling
- Eliminates potential for lock confusion (though no actual deadlock existed)
- Clearer separation: lock for data access, atomics for health ops
Changes:
- ReadFilerConfFromFilers: var data []byte, create buf inside callback
- shouldSkipUnhealthyFilerWithHealth(health *filerHealth)
- recordFilerSuccessWithHealth(health *filerHealth)
- recordFilerFailureWithHealth(health *filerHealth)
- Keep old functions for backward compatibility (marked deprecated)
- Update LookupVolumeIds to use WithHealth variants
Benefits:
- More robust multi-filer configuration reading
- Clearer lock vs atomic operation boundaries
- No lock held during health checks (even though atomics don't block)
- Better code organization and maintainability
* add constant
* Fix IAM manager and post policy to use current active filer
* Fix critical race condition and goroutine leak
* Update weed/s3api/filer_multipart.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Fix compilation error and address code review suggestions
Address remaining unresolved comments:
1. **Fix compilation error**: Add missing net/url import
- filer_multipart.go used url.PathEscape without import
- Added "net/url" to imports
2. **Fix Location URL formatting** (all 4 occurrences):
- Add missing slash between bucket and key
- Use url.PathEscape for bucket names
- Use urlPathEscape for object keys
- Handles special characters in bucket/key names
- Before: http://host/bucketkey
- After: http://host/bucket/key (properly escaped)
3. **Optimize discovery loop** (O(N*M) → O(N+M)):
- Use map for existing filers (O(1) lookup)
- Reduces time holding write lock
- Better performance with many filers
- Before: Nested loop for each discovered filer
- After: Build map once, then O(1) lookups
Changes:
- filer_multipart.go: Import net/url, fix all Location URLs
- filer_client.go: Use map for efficient filer discovery
Benefits:
- Compiles successfully
- Proper URL encoding (handles spaces, special chars)
- Faster discovery with less lock contention
- Production-ready URL formatting
* Fix race conditions and make Close() idempotent
Address CodeRabbit review #3512078995:
1. **Critical: Fix unsynchronized read in error message**
- Line 584 read len(fc.filerAddresses) without lock
- Race with refreshFilerList appending to slice
- Fixed: Take RLock to read length safely
- Prevents race detector warnings
2. **Important: Make Close() idempotent**
- Closing already-closed channel panics
- Can happen with layered cleanup in shutdown paths
- Fixed: Use sync.Once to ensure single close
- Safe to call Close() multiple times now
3. **Nitpick: Add warning for empty filer address**
- getFilerAddress() can return empty string
- Helps diagnose unexpected state
- Added: Warning log when no filers available
4. **Nitpick: Guard deprecated index-based helpers**
- shouldSkipUnhealthyFiler, recordFilerSuccess/Failure
- Accessed filerHealth without lock (races with discovery)
- Fixed: Take RLock and check bounds before array access
- Prevents index out of bounds and races
Changes:
- filer_client.go:
- Add closeDiscoveryOnce sync.Once field
- Use Do() in Close() for idempotent channel close
- Add RLock guards to deprecated index-based helpers
- Add bounds checking to prevent panics
- Synchronized read of filerAddresses length in error
- s3api_server.go:
- Add warning log when getFilerAddress returns empty
Benefits:
- No race conditions (passes race detector)
- No panic on double-close
- Better error diagnostics
- Safe with discovery enabled
- Production-hardened shutdown logic
* Fix hardcoded http scheme and add panic recovery
Address CodeRabbit review #3512114811:
1. **Major: Fix hardcoded http:// scheme in Location URLs**
- Location URLs always used http:// regardless of client connection
- HTTPS clients got http:// URLs (incorrect)
- Fixed: Detect scheme from request
- Check X-Forwarded-Proto header (for proxies) first
- Check r.TLS != nil for direct HTTPS
- Fallback to http for plain connections
- Applied to all 4 CompleteMultipartUploadResult locations
2. **Major: Add panic recovery to discovery goroutine**
- Long-running background goroutine could crash entire process
- Panic in refreshFilerList would terminate program
- Fixed: Add defer recover() with error logging
- Goroutine failures now logged, not fatal
3. **Note: Close() idempotency already implemented**
- Review flagged as duplicate issue
- Already fixed in commit
|
1 week ago |
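One small, self-contained piece of the review fixes above is picking the scheme for the CompleteMultipartUpload Location URL from the incoming request. A sketch of that check, assuming a hypothetical requestScheme helper: honor X-Forwarded-Proto when a proxy terminated TLS, fall back to r.TLS, otherwise plain http.

```go
// Illustrative sketch: derive the URL scheme the way the commit describes.
// The helper name is an assumption.
package s3api

import "net/http"

func requestScheme(r *http.Request) string {
	if proto := r.Header.Get("X-Forwarded-Proto"); proto != "" {
		return proto // a reverse proxy terminated TLS in front of us
	}
	if r.TLS != nil {
		return "https" // direct TLS connection
	}
	return "http"
}
```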
5075381060
|
Support multiple filers for S3 and IAM servers with automatic failover (#7550)
* Support multiple filers for S3 and IAM servers with automatic failover
This change adds support for multiple filer addresses in the 'weed s3' and 'weed iam' commands, enabling high availability through automatic failover.
Key changes:
- Updated S3ApiServerOption.Filer to Filers ([]pb.ServerAddress)
- Updated IamServerOption.Filer to Filers ([]pb.ServerAddress)
- Modified -filer flag to accept comma-separated addresses
- Added getFilerAddress() helper methods for backward compatibility
- Updated all filer client calls to support multiple addresses
- Uses pb.WithOneOfGrpcFilerClients for automatic failover
Usage:
weed s3 -filer=localhost:8888,localhost:8889
weed iam -filer=localhost:8888,localhost:8889
The underlying FilerClient already supported multiple filers with health
tracking and automatic failover - this change exposes that capability
through the command-line interface.
* Add filer discovery: treat initial filers as seeds and discover peers from master
Enhances FilerClient to automatically discover additional filers in the same
filer group by querying the master server. This allows users to specify just
a few seed filers, and the client will discover all other filers in the cluster.
Key changes to wdclient/FilerClient:
- Added MasterClient, FilerGroup, and DiscoveryInterval fields
- Added thread-safe filer list management with RWMutex
- Implemented discoverFilers() background goroutine
- Uses cluster.ListExistingPeerUpdates() to query master for filers
- Automatically adds newly discovered filers to the list
- Added Close() method to clean up discovery goroutine
New FilerClientOption fields:
- MasterClient: enables filer discovery from master
- FilerGroup: specifies which filer group to discover
- DiscoveryInterval: how often to refresh (default 5 minutes)
Usage example:
masterClient := wdclient.NewMasterClient(...)
filerClient := wdclient.NewFilerClient(
[]pb.ServerAddress{"localhost:8888"}, // seed filers
grpcDialOption,
dataCenter,
&wdclient.FilerClientOption{
MasterClient: masterClient,
FilerGroup: "my-group",
},
)
defer filerClient.Close()
The initial filers act as seeds - the client discovers and adds all other
filers in the same group from the master. Discovered filers are added
dynamically without removing existing ones (relying on health checks for
unavailable filers).
* Address PR review comments: implement full failover for IAM operations
Critical fixes based on code review feedback:
1. **IAM API Failover (Critical)**:
- Replace pb.WithGrpcFilerClient with pb.WithOneOfGrpcFilerClients in:
* GetS3ApiConfigurationFromFiler()
* PutS3ApiConfigurationToFiler()
* GetPolicies()
* PutPolicies()
- Now all IAM operations support automatic failover across multiple filers
2. **Validation Improvements**:
- Add validation in NewIamApiServerWithStore() to require at least one filer
- Add validation in NewS3ApiServerWithStore() to require at least one filer
- Add warning log when no filers configured for credential store
3. **Error Logging**:
- Circuit breaker now logs when config load fails instead of silently ignoring
- Helps operators understand why circuit breaker limits aren't applied
4. **Code Quality**:
- Use ToGrpcAddress() for filer address in credential store setup
- More consistent with rest of codebase and future-proof
These changes ensure IAM operations have the same high availability guarantees
as S3 operations, completing the multi-filer failover implementation.
* Fix IAM manager initialization: remove code duplication, add TODO for HA
Addresses review comment on s3api_server.go:145
Changes:
- Remove duplicate code for getting first filer address
- Extract filerAddr variable once and reuse
- Add TODO comment documenting the HA limitation for IAM manager
- Document that loadIAMManagerFromConfig and NewS3IAMIntegration need
updates to support multiple filers for full HA
Note: This is a known limitation when using filer-backed IAM stores.
The interfaces need to be updated to accept multiple filer addresses.
For now, documenting this limitation clearly.
* Document credential store HA limitation with TODO
Addresses review comment on auth_credentials.go:149
Changes:
- Add TODO comment documenting that SetFilerClient interface needs update
for multi-filer support
- Add informative log message indicating HA limitation
- Document that this is a known limitation for filer-backed credential stores
The SetFilerClient interface currently only accepts a single filer address.
To properly support HA, the credential store interfaces need to be updated
to handle multiple filer addresses.
* Track current active filer in FilerClient for better HA
Add GetCurrentFiler() method to FilerClient that returns the currently
active filer based on the filerIndex which is updated on successful
operations. This provides better availability than always using the
first filer.
Changes:
- Add FilerClient.GetCurrentFiler() method that returns current active filer
- Update S3ApiServer.getFilerAddress() to use FilerClient's current filer
- Add fallback to first filer if FilerClient not yet initialized
- Document IAM limitation (doesn't have FilerClient access)
Benefits:
- Single-filer operations (URLs, ReadFilerConf, etc.) now use the
currently active/healthy filer
- Better distribution and failover behavior
- FilerClient's round-robin and health tracking automatically
determines which filer to use
* Document ReadFilerConf HA limitation in lifecycle handlers
Addresses review comment on s3api_bucket_handlers.go:880
Add comment documenting that ReadFilerConf uses the current active filer
from FilerClient (which is better than always using first filer), but
doesn't have built-in multi-filer failover.
Add TODO to update filer.ReadFilerConf to support multiple filers for
complete HA. For now, it uses the currently active/healthy filer tracked
by FilerClient which provides reasonable availability.
* Document multipart upload URL HA limitation
Addresses review comment on s3api_object_handlers_multipart.go:442
Add comment documenting that part upload URLs point to the current
active filer (tracked by FilerClient), which is better than always
using the first filer but still creates a potential point of failure
if that filer becomes unavailable during upload.
Suggest TODO solutions:
- Use virtual hostname/load balancer for filers
- Have S3 server proxy uploads to healthy filers
Current behavior provides reasonable availability by using the
currently active/healthy filer rather than being pinned to first filer.
* Document multipart completion Location URL limitation
Addresses review comment on filer_multipart.go:187
Add comment documenting that the Location URL in CompleteMultipartUpload
response points to the current active filer (tracked by FilerClient).
Note that clients should ideally use the S3 API endpoint rather than
this direct URL. If direct access is attempted and the specific filer
is unavailable, the request will fail.
Current behavior uses the currently active/healthy filer rather than
being pinned to the first filer, providing better availability.
* Make credential store use current active filer for HA
Update FilerEtcStore to use a function that returns the current active
filer instead of a fixed address, enabling high availability.
Changes:
- Add SetFilerAddressFunc() method to FilerEtcStore
- Store uses filerAddressFunc instead of fixed filerGrpcAddress
- withFilerClient() calls the function to get current active filer
- Keep SetFilerClient() for backward compatibility (marked deprecated)
- Update S3ApiServer to pass FilerClient.GetCurrentFiler to store
Benefits:
- Credential store now uses currently active/healthy filer
- Automatic failover when filer becomes unavailable
- True HA for credential operations
- Backward compatible with old SetFilerClient interface
This addresses the credential store limitation - no longer pinned to
first filer, uses FilerClient's tracked current active filer.
* Clarify multipart URL comments: filer address not used for uploads
Update comments to reflect that multipart upload URLs are not actually
used for upload traffic - uploads go directly to volume servers.
Key clarifications:
- genPartUploadUrl: Filer address is parsed out, only path is used
- CompleteMultipartUpload Location: Informational field per AWS S3 spec
- Actual uploads bypass filer proxy and go directly to volume servers
The filer address in these URLs is NOT a HA concern because:
1. Part uploads: URL is parsed for path, upload goes to volume servers
2. Location URL: Informational only, clients use S3 endpoint
This addresses the observation that S3 uploads don't go through filers,
only metadata operations do.
* Remove filer address from upload paths - pass path directly
Eliminate unnecessary filer address from upload URLs by passing file
paths directly instead of full URLs that get immediately parsed.
Changes:
- Rename genPartUploadUrl() → genPartUploadPath() (returns path only)
- Rename toFilerUrl() → toFilerPath() (returns path only)
- Update putToFiler() to accept filePath instead of uploadUrl
- Remove URL parsing code (no longer needed)
- Remove net/url import (no longer used)
- Keep old function names as deprecated wrappers for compatibility
Benefits:
- Cleaner code - no fake URL construction/parsing
- No dependency on filer address for internal operations
- More accurate naming (these are paths, not URLs)
- Eliminates confusion about HA concerns
This completely removes the filer address from upload operations - it was
never actually used for routing, only parsed for the path.
* Remove deprecated functions: use new path-based functions directly
Remove deprecated wrapper functions and update all callers to use the
new function names directly.
Removed:
- genPartUploadUrl() → all callers now use genPartUploadPath()
- toFilerUrl() → all callers now use toFilerPath()
- SetFilerClient() → removed along with fallback code
Updated:
- s3api_object_handlers_multipart.go: uploadUrl → filePath
- s3api_object_handlers_put.go: uploadUrl → filePath, versionUploadUrl → versionFilePath
- s3api_object_versioning.go: toFilerUrl → toFilerPath
- s3api_object_handlers_test.go: toFilerUrl → toFilerPath
- auth_credentials.go: removed SetFilerClient fallback
- filer_etc_store.go: removed deprecated SetFilerClient method
Benefits:
- Cleaner codebase with no deprecated functions
- All variable names accurately reflect that they're paths, not URLs
- Single interface for credential stores (SetFilerAddressFunc only)
All code now consistently uses the new path-based approach.
* Fix toFilerPath: remove URL escaping for raw file paths
The toFilerPath function should return raw file paths, not URL-escaped
paths. URL escaping was needed when the path was embedded in a URL
(old toFilerUrl), but now that we pass paths directly to putToFiler,
they should be unescaped.
This fixes S3 integration test failures:
- test_bucket_listv2_encoding_basic
- test_bucket_list_encoding_basic
- test_bucket_listv2_delimiter_whitespace
- test_bucket_list_delimiter_whitespace
The tests were failing because paths were double-encoded (escaped when
stored, then escaped again when listed), resulting in %252B instead of
%2B for '+' characters.
Root cause: When we removed URL parsing in putToFiler, we should have
also removed URL escaping in toFilerPath since paths are now used
directly without URL encoding/decoding.
* Add thread safety to FilerEtcStore and clarify credential store comments
Address review suggestions for better thread safety and code clarity:
1. **Thread Safety**: Add RWMutex to FilerEtcStore
- Protects filerAddressFunc and grpcDialOption from concurrent access
- Initialize() uses write lock when setting function
- SetFilerAddressFunc() uses write lock
- withFilerClient() uses read lock to get function and dial option
- GetPolicies() uses read lock to check if configured
2. **Improved Error Messages**:
- Prefix errors with "filer_etc:" for easier debugging
- "filer address not configured" → "filer_etc: filer address function not configured"
- "filer address is empty" → "filer_etc: filer address is empty"
3. **Clarified Comments**:
- auth_credentials.go: Clarify that initial setup is temporary
- Document that it's updated in s3api_server.go after FilerClient creation
- Remove ambiguity about when FilerClient.GetCurrentFiler is used
Benefits:
- Safe for concurrent credential operations
- Clear error messages for debugging
- Explicit documentation of initialization order
* Enable filer discovery: pass master addresses to FilerClient
Fix two critical issues:
1. **Filer Discovery Not Working**: Master client was not being passed to
FilerClient, so peer discovery couldn't work
2. **Credential Store Design**: Already uses FilerClient via GetCurrentFiler
function - this is the correct design for HA
Changes:
**Command (s3.go):**
- Read master addresses from GetFilerConfiguration response
- Pass masterAddresses to S3ApiServerOption
- Log master addresses for visibility
**S3ApiServerOption:**
- Add Masters []pb.ServerAddress field for discovery
**S3ApiServer:**
- Create MasterClient from Masters when available
- Pass MasterClient + FilerGroup to FilerClient via options
- Enable discovery with 5-minute refresh interval
- Log whether discovery is enabled or disabled
**Credential Store:**
- Already correctly uses filerClient.GetCurrentFiler via function
- This provides HA without tight coupling to FilerClient struct
- Function-based design is clean and thread-safe
Discovery Flow:
1. S3 command reads filer config → gets masters + filer group
2. S3ApiServer creates MasterClient from masters
3. FilerClient uses MasterClient to query for peer filers
4. Background goroutine refreshes peer list every 5 minutes
5. Credential store uses GetCurrentFiler to get active filer
Now filer discovery actually works!
* Use S3 endpoint in multipart Location instead of filer address
* Add multi-filer failover to ReadFilerConf
* Address CodeRabbit review: fix buffer reuse and improve lock safety
Address two code review suggestions:
1. **Fix buffer reuse in ReadFilerConfFromFilers**:
- Use local []byte data instead of shared buffer
- Prevents partial data from failed attempts affecting successful reads
- Creates fresh buffer inside callback for masterClient path
- More robust to future changes in read helpers
2. **Improve lock safety in FilerClient**:
- Add *WithHealth variants that accept health pointer
- Get health pointer while holding lock, then release before calling
- Eliminates potential for lock confusion (though no actual deadlock existed)
- Clearer separation: lock for data access, atomics for health ops
Changes:
- ReadFilerConfFromFilers: var data []byte, create buf inside callback
- shouldSkipUnhealthyFilerWithHealth(health *filerHealth)
- recordFilerSuccessWithHealth(health *filerHealth)
- recordFilerFailureWithHealth(health *filerHealth)
- Keep old functions for backward compatibility (marked deprecated)
- Update LookupVolumeIds to use WithHealth variants
Benefits:
- More robust multi-filer configuration reading
- Clearer lock vs atomic operation boundaries
- No lock held during health checks (even though atomics don't block)
- Better code organization and maintainability
* add constant
* Fix IAM manager and post policy to use current active filer
* Fix critical race condition and goroutine leak
* Update weed/s3api/filer_multipart.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Fix compilation error and address code review suggestions
Address remaining unresolved comments:
1. **Fix compilation error**: Add missing net/url import
- filer_multipart.go used url.PathEscape without import
- Added "net/url" to imports
2. **Fix Location URL formatting** (all 4 occurrences):
- Add missing slash between bucket and key
- Use url.PathEscape for bucket names
- Use urlPathEscape for object keys
- Handles special characters in bucket/key names
- Before: http://host/bucketkey
- After: http://host/bucket/key (properly escaped)
3. **Optimize discovery loop** (O(N*M) → O(N+M)):
- Use map for existing filers (O(1) lookup)
- Reduces time holding write lock
- Better performance with many filers
- Before: Nested loop for each discovered filer
- After: Build map once, then O(1) lookups
Changes:
- filer_multipart.go: Import net/url, fix all Location URLs
- filer_client.go: Use map for efficient filer discovery
Benefits:
- Compiles successfully
- Proper URL encoding (handles spaces, special chars)
- Faster discovery with less lock contention
- Production-ready URL formatting
* Fix race conditions and make Close() idempotent
Address CodeRabbit review #3512078995:
1. **Critical: Fix unsynchronized read in error message**
- Line 584 read len(fc.filerAddresses) without lock
- Race with refreshFilerList appending to slice
- Fixed: Take RLock to read length safely
- Prevents race detector warnings
2. **Important: Make Close() idempotent**
- Closing already-closed channel panics
- Can happen with layered cleanup in shutdown paths
- Fixed: Use sync.Once to ensure single close
- Safe to call Close() multiple times now
3. **Nitpick: Add warning for empty filer address**
- getFilerAddress() can return empty string
- Helps diagnose unexpected state
- Added: Warning log when no filers available
4. **Nitpick: Guard deprecated index-based helpers**
- shouldSkipUnhealthyFiler, recordFilerSuccess/Failure
- Accessed filerHealth without lock (races with discovery)
- Fixed: Take RLock and check bounds before array access
- Prevents index out of bounds and races
Changes:
- filer_client.go:
- Add closeDiscoveryOnce sync.Once field
- Use Do() in Close() for idempotent channel close
- Add RLock guards to deprecated index-based helpers
- Add bounds checking to prevent panics
- Synchronized read of filerAddresses length in error
- s3api_server.go:
- Add warning log when getFilerAddress returns empty
Benefits:
- No race conditions (passes race detector)
- No panic on double-close
- Better error diagnostics
- Safe with discovery enabled
- Production-hardened shutdown logic
* Fix hardcoded http scheme and add panic recovery
Address CodeRabbit review #3512114811:
1. **Major: Fix hardcoded http:// scheme in Location URLs**
- Location URLs always used http:// regardless of client connection
- HTTPS clients got http:// URLs (incorrect)
- Fixed: Detect scheme from request
- Check X-Forwarded-Proto header (for proxies) first
- Check r.TLS != nil for direct HTTPS
- Fallback to http for plain connections
- Applied to all 4 CompleteMultipartUploadResult locations
2. **Major: Add panic recovery to discovery goroutine**
- Long-running background goroutine could crash entire process
- Panic in refreshFilerList would terminate program
- Fixed: Add defer recover() with error logging
- Goroutine failures now logged, not fatal
3. **Note: Close() idempotency already implemented**
- Review flagged as duplicate issue
- Already fixed in commit
|
1 week ago |
7c7b673fc1
|
hide milliseconds in uptime (#7553)
|
1 week ago |
b669607fcd
|
Add error list each entry func (#7485)
* added error return in type ListEachEntryFunc * return error if errClose * fix fmt.Errorf * fix return errClose * use %w fmt.Errorf * added entry in messege error * add callbackErr in ListDirectoryEntries * fix error * add log * clear err when the scanner stops on io.EOF, so returning err doesn’t surface EOF as a failure. * more info in error * add ctx to logs, error handling * fix return eachEntryFunc * fix * fix log * fix return * fix foundationdb test s * fix eachEntryFunc * fix return resEachEntryFuncErr * Update weed/filer/filer.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * Update weed/filer/elastic/v7/elastic_store.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * Update weed/filer/hbase/hbase_store.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * Update weed/filer/foundationdb/foundationdb_store.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * Update weed/filer/ydb/ydb_store.go Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> * fix * add scanErr --------- Co-authored-by: Roman Tamarov <r.tamarov@kryptonite.ru> Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com> Co-authored-by: chrislu <chris.lu@gmail.com> Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |
1 week ago |
|
|
76f1a23fec |
fix docker login
|
1 week ago |
|
|
c156a130b7
|
S3: Auto create bucket (#7549)
* auto create buckets
* only admin users can auto create buckets
* Update weed/s3api/s3api_bucket_handlers.go
* validate bucket name
* refactor
* error handling
* error
* refetch
* ensure owner
* multiple errors
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
|
1 week ago |
|
|
2843cb1255
|
Bootstrap logic to fix read-only volumes with `volume.check.disk`. (#7531)
* Bootstrap logic to fix read-only volumes with `volume.check.disk`.
The new implementation performs a second pass where read-only volumes are (optionally)
verified and fixed.
For each non-writable volume ID A:
  if volume is not full:
    prune late volume entries not matching its index file
    select a writable volume replica B
    append missing entries from B into A
    mark the volume as writable (healthy)
* variable and parameter renaming
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
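A Go-flavoured sketch of the repair pass outlined above; every type and helper here is hypothetical, since the real logic lives in the volume.check.disk shell command and operates on volume servers over RPC:

```go
package shell

import "fmt"

// volumeReplica is a hypothetical stand-in for one replica of a volume.
type volumeReplica struct {
	id       uint32
	writable bool
	isFull   bool
}

// repairReadOnlyVolumes mirrors the second pass described above: for
// every non-writable, non-full volume, prune entries that do not match
// its index file, append missing entries from a writable replica, then
// mark the volume writable again.
func repairReadOnlyVolumes(
	replicas []volumeReplica,
	findWritableReplica func(id uint32) (volumeReplica, bool),
	prune func(volumeReplica) error,
	appendMissing func(dst, src volumeReplica) error,
	markWritable func(volumeReplica) error,
) error {
	for _, a := range replicas {
		if a.writable || a.isFull {
			continue // only non-writable, non-full volumes are repaired
		}
		if err := prune(a); err != nil {
			return fmt.Errorf("prune volume %d: %w", a.id, err)
		}
		b, ok := findWritableReplica(a.id)
		if !ok {
			continue // no healthy replica to copy from; leave as-is
		}
		if err := appendMissing(a, b); err != nil {
			return fmt.Errorf("append into volume %d: %w", a.id, err)
		}
		if err := markWritable(a); err != nil {
			return fmt.Errorf("mark volume %d writable: %w", a.id, err)
		}
	}
	return nil
}
```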
|
1 week ago |
|
|
2e6c746a30
|
fix copying for paused versioning buckets (#7548)
* fix copying for paused versioning buckets
* copy for non versioned files
* add tests
* better tests
* Update weed/s3api/s3api_object_handlers_copy.go
* remove etag
* update
* Update s3api_object_handlers_copy_test.go
* Update weed/s3api/s3api_object_handlers_copy_test.go
* Update weed/s3api/s3api_object_handlers_copy_test.go
* revert
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
|
1 week ago |
|
|
3f1a34d8d7 |
doc
|
1 week ago |
|
|
f6a604c538
|
S3: Fix encrypted file copy with multiple chunks (#7530) (#7546)
* S3: Fix encrypted file copy with multiple chunks (#7530)
When copying encrypted files with multiple chunks (encrypted volumes via -filer.encryptVolumeData), the copied file could not be read. This was caused by the chunk copy operation not preserving the IsCompressed flag, which led to improper handling of compressed/encrypted data during upload.
The fix:
1. Modified uploadChunkData to accept an isCompressed parameter
2. Updated copySingleChunk to pass the source chunk's IsCompressed flag
3. Updated copySingleChunkForRange for partial copy operations
4. Updated all other callers to pass the appropriate compression flag
5. Added comprehensive tests for encrypted volume copy scenarios
This ensures that when copying chunks:
- The IsCompressed flag from the source chunk is passed to the upload
- Compressed data is marked as compressed, preventing double-compression
- Already-encrypted data is not re-encrypted (Cipher: false is correct)
- All chunk metadata (CipherKey, IsCompressed, ETag) is preserved
Tests added:
- TestCreateDestinationChunkPreservesEncryption: Verifies metadata preservation
- TestCopySingleChunkWithEncryption: Tests various encryption/compression scenarios
- TestCopyChunksPreservesMetadata: Tests multi-chunk metadata preservation
- TestEncryptedVolumeScenario: Documents and tests the exact issue #7530 scenario
Fixes #7530
* Address PR review feedback: simplify tests and improve clarity
- Removed TestUploadChunkDataCompressionFlag (panic-based test)
- Removed TestCopySingleChunkWithEncryption (duplicate coverage)
- Removed TestCopyChunksPreservesMetadata (duplicate coverage)
- Added ETag verification to TestEncryptedVolumeCopyScenario
- Renamed to TestEncryptedVolumeCopyScenario for better clarity
- All test coverage now in TestCreateDestinationChunkPreservesEncryption and TestEncryptedVolumeCopyScenario which focus on the actual behavior
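A minimal sketch of the flag-preservation idea behind this fix; the types and the upload callback are hypothetical stand-ins rather than the actual uploadChunkData/copySingleChunk signatures:

```go
package s3api

// chunkMeta is a hypothetical stand-in for the chunk metadata the copy
// path must carry over: cipher key, compression flag, and ETag.
type chunkMeta struct {
	CipherKey    []byte
	IsCompressed bool
	ETag         string
}

// uploadOpts mirrors the idea of threading isCompressed through to the
// upload so already-compressed, already-encrypted data is not processed
// a second time on its way to the destination volume.
type uploadOpts struct {
	Cipher       bool // false: payload is already encrypted, do not re-encrypt
	IsCompressed bool // true: payload is already compressed, do not re-compress
}

// copyChunk copies one chunk and preserves the source metadata, which
// is the essence of the fix for issue #7530.
func copyChunk(src chunkMeta, data []byte, upload func([]byte, uploadOpts) error) (chunkMeta, error) {
	err := upload(data, uploadOpts{
		Cipher:       false,
		IsCompressed: src.IsCompressed, // the flag that was previously dropped
	})
	if err != nil {
		return chunkMeta{}, err
	}
	// The destination chunk keeps the same cipher key, compression flag, and ETag.
	return chunkMeta{
		CipherKey:    src.CipherKey,
		IsCompressed: src.IsCompressed,
		ETag:         src.ETag,
	}, nil
}
```
|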
1 week ago |
|
|
a24c31de06
|
S3: Add `Vary` header for non-wildcard AllowOrigin (#7547)
|
1 week ago |
|
|
935c641f25
|
chore(deps): bump actions/setup-go from 5 to 6 (#7542)
Bumps [actions/setup-go](https://github.com/actions/setup-go) from 5 to 6.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/v5...v6)
---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
1 week ago |
|
|
fb01269ded
|
chore(deps): bump actions/checkout from 4 to 6 (#7543)
Bumps [actions/checkout](https://github.com/actions/checkout) from 4 to 6.
- [Release notes](https://github.com/actions/checkout/releases)
- [Commits](https://github.com/actions/checkout/compare/v4...v6)
---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
|
1 week ago |
|
|
2712a0daeb
|
chore(deps): bump github.com/linkedin/goavro/v2 from 2.14.0 to 2.14.1 (#7537)
* chore(deps): bump github.com/linkedin/goavro/v2 from 2.14.0 to 2.14.1
Bumps [github.com/linkedin/goavro/v2](https://github.com/linkedin/goavro) from 2.14.0 to 2.14.1.
- [Release notes](https://github.com/linkedin/goavro/releases)
- [Changelog](https://github.com/linkedin/goavro/blob/master/debug_release.go)
- [Commits](https://github.com/linkedin/goavro/compare/v2.14.0...v2.14.1)
---
updated-dependencies:
- dependency-name: github.com/linkedin/goavro/v2
  dependency-version: 2.14.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
...
Signed-off-by: dependabot[bot] <support@github.com>
* go mod tidy
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
|
1 week ago |