Tree: 87b71029f7
add-ec-vacuum
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
fasthttp
feature/mini-port-detection
feature/modernize-s3-tests
filer1_maintenance_branch
fix-GetObjectLockConfigurationHandler
fix-bucket-name-case-7910
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-pr-7909
fix-s3-object-tagging-issue-7589
fix-versioning-listing-only
ftp
gh-pages
improve-fuse-mount
improve-fuse-mount2
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
pr-7412
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
4.05
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
12420 Commits (87b71029f75efc2fe6f1bcf12a3f129127f04e90)
| Author | SHA1 | Message | Date |
|---|---|---|---|
| | 87b71029f7 | 4.05 | 1 day ago |
| | ade2e58cf4 | refactoring | 1 day ago |
| | 4e2af080df | optimize: enable immediate EC shard reporting during startup (#7933) | 1 day ago |

* optimize: enable immediate EC shard reporting during startup
Ported the immediate EC shard reporting feature from Enterprise to Community version.
This allows the master to be notified about EC shards immediately during volume server startup,
instead of waiting for the first heartbeat.
Changes:
1. Updated NewStore to initialize notification channels BEFORE loading volumes (fixes potential nil panic).
2. Added ecShardNotifyHandler to report EC shards to NewEcShardsChan during startup.
3. Implemented non-blocking channel send for EC reporting to prevent deadlock when loading many EC shards (fixing the enterprise bug
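
A non-blocking send of this kind is usually written as a `select` with a `default` branch, so the volume-loading goroutine never blocks when the master-notification channel is not yet being drained. A minimal sketch follows; the `EcShardMessage` type and channel here are illustrative stand-ins, not the actual SeaweedFS API.

```go
package main

import "log"

// EcShardMessage is a stand-in for whatever payload the store reports to the master.
type EcShardMessage struct {
	VolumeID uint32
	ShardID  uint8
}

// reportEcShard tries to notify the master about a newly loaded EC shard.
// The send is non-blocking: if nobody is draining the channel yet (e.g. the
// heartbeat loop has not started), the message is skipped instead of
// deadlocking the volume-loading goroutine.
func reportEcShard(ch chan<- EcShardMessage, msg EcShardMessage) {
	select {
	case ch <- msg:
		// delivered immediately
	default:
		log.Printf("EC shard channel full, skipping immediate report for volume %d shard %d",
			msg.VolumeID, msg.ShardID)
	}
}

func main() {
	ch := make(chan EcShardMessage, 16) // buffered, created before volumes are loaded
	reportEcShard(ch, EcShardMessage{VolumeID: 1, ShardID: 3})
}
```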

| | 6f28cb7f87 | helm: Support multiple hosts for S3 ingress (#7931) | 2 days ago |
| | c405ff1374 | feat(iam): add TLS configuration support for OIDC provider (#7929) | 2 days ago |

* feat(iam): add TLS configuration support for OIDC provider. Adds tlsCaCert and tlsInsecureSkipVerify options to the OIDC provider configuration to allow using custom CA certificates and skipping verification in development environments.
* fix: use SystemCertPool for custom CA and add security warning. Use x509.SystemCertPool() to preserve trust in public CAs; add a warning log when TLSInsecureSkipVerify is enabled. Addresses code review feedback from gemini-code-assist.
* docs: enhance TLS configuration field documentation. Add an explicit warning about TLSInsecureSkipVerify production usage; clarify that TLSCACert is for custom/self-signed certificates.
* security: enforce TLS 1.2 minimum version. Set MinVersion to TLS 1.2 to prevent downgrade attacks and ensure secure communication with OIDC providers.
* security: validate that the CA cert path is absolute. Add a filepath.IsAbs check before reading the CA certificate, preventing reads of unintended files from relative paths; fail fast on misconfigured paths.
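
The hardened client setup described above maps onto the Go standard library fairly directly. The sketch below is illustrative only; the function and option names are assumptions, not the actual SeaweedFS configuration struct.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"log"
	"os"
	"path/filepath"
)

// buildOIDCTLSConfig builds a TLS config for talking to an OIDC provider.
// caCertPath and insecureSkipVerify mirror the tlsCaCert / tlsInsecureSkipVerify
// options mentioned in the commit; the function itself is a sketch.
func buildOIDCTLSConfig(caCertPath string, insecureSkipVerify bool) (*tls.Config, error) {
	cfg := &tls.Config{
		MinVersion:         tls.VersionTLS12, // refuse anything below TLS 1.2
		InsecureSkipVerify: insecureSkipVerify,
	}
	if insecureSkipVerify {
		log.Println("WARNING: TLS verification disabled for OIDC provider; do not use in production")
	}
	if caCertPath != "" {
		if !filepath.IsAbs(caCertPath) {
			return nil, fmt.Errorf("tlsCaCert must be an absolute path, got %q", caCertPath)
		}
		// Start from the system pool so public CAs remain trusted.
		pool, err := x509.SystemCertPool()
		if err != nil || pool == nil {
			pool = x509.NewCertPool()
		}
		pem, err := os.ReadFile(caCertPath)
		if err != nil {
			return nil, fmt.Errorf("read CA cert: %w", err)
		}
		if !pool.AppendCertsFromPEM(pem) {
			return nil, fmt.Errorf("no valid certificates found in %s", caCertPath)
		}
		cfg.RootCAs = pool
	}
	return cfg, nil
}

func main() {
	cfg, err := buildOIDCTLSConfig("", false)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("min TLS version:", cfg.MinVersion)
}
```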

| | 998bcf2b3f | classify grpc errors | 2 days ago |
| | f7f133166a | adjust fuse logs | 2 days ago |
| | 60707f99d8 | customizable adminServer | 2 days ago |
| | 31a4f57cd9 | Fix: Add -admin.grpc flag to worker for explicit gRPC port (#7926) (#7927) | 2 days ago |

* Fix: Add -admin.grpc flag to worker for explicit gRPC port configuration * Fix(helm): Add adminGrpcServer to worker configuration * Refactor: Support host:port.grpcPort address format, revert -admin.grpc flag * Helm: Conditionally append grpcPort to worker admin address * weed/admin: fix "send on closed channel" panic in worker gRPC server Make unregisterWorker connection-aware to prevent closing channels belonging to newer connections. * weed/worker: improve gRPC client stability and logging - Fix goroutine leak in reconnection logic - Refactor reconnection loop to exit on success and prevent busy-waiting - Add session identification and enhanced logging to client handlers - Use constant for internal reset action and remove unused variables * weed/worker: fix worker state initialization and add lifecycle logs - Revert workerState to use running boolean correctly - Prevent handleStart failing by checking running state instead of startTime - Add more detailed logs for worker startup events |

| | 5a135f8c5a | fuse: add FUSE performance options to weed fuse command (#7925) | 3 days ago |

This adds support for the new FUSE performance options to the 'weed fuse' command, matching the functionality available in 'weed mount'. Added options: - writebackCache: Enable FUSE writeback cache for improved write performance - asyncDio: Enable async direct I/O for better concurrency - cacheSymlink: Enable symlink caching to reduce metadata lookups - sys.novncache: (macOS only) Disable vnode name caching to avoid stale data These options can now be used with mount -t weed: mount -t weed fuse /mnt -o "filer=localhost:8888,writebackCache=true,asyncDio=true" This ensures feature parity between 'weed mount' and 'weed fuse' commands. |

| | 099da9332b | mount: add -sys.novncache flag for macOS vnode cache control (#7924) | 3 days ago |

* mount: add -asyncDio flag for async direct I/O This adds support for async direct I/O via the -asyncDio flag. Async DIO enables the FUSE_CAP_ASYNC_DIO capability, allowing the kernel to perform direct I/O operations asynchronously. This improves concurrency for applications that use O_DIRECT flag. Benefits: - Better concurrency for direct I/O operations - Improved performance for applications using O_DIRECT - Reduced blocking on I/O operations Use cases: - Database workloads that use direct I/O - Applications that bypass page cache intentionally - High-performance I/O scenarios Implementation inspired by JuiceFS which enables this capability for improved I/O performance. Usage: weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs -asyncDio * mount: add all remaining FUSE options (asyncDio, cacheSymlink, novncache) This combines the remaining three FUSE mount options on top of the merged writebackCache PR: 1. asyncDio: Enable async direct I/O for better concurrency 2. cacheSymlink: Enable symlink caching to reduce metadata lookups 3. novncache: (macOS only) Disable vnode name caching to avoid stale data All options use the function parameter 'option' instead of global 'mountOptions'. |

| | b107c45176 | mount: add -cacheSymlink flag for symlink caching (#7923) | 3 days ago |

* mount: add -asyncDio flag for async direct I/O This adds support for async direct I/O via the -asyncDio flag. Async DIO enables the FUSE_CAP_ASYNC_DIO capability, allowing the kernel to perform direct I/O operations asynchronously. This improves concurrency for applications that use O_DIRECT flag. Benefits: - Better concurrency for direct I/O operations - Improved performance for applications using O_DIRECT - Reduced blocking on I/O operations Use cases: - Database workloads that use direct I/O - Applications that bypass page cache intentionally - High-performance I/O scenarios Implementation inspired by JuiceFS which enables this capability for improved I/O performance. Usage: weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs -asyncDio * mount: add all remaining FUSE options (asyncDio, cacheSymlink, novncache) This combines the remaining three FUSE mount options on top of the merged writebackCache PR: 1. asyncDio: Enable async direct I/O for better concurrency 2. cacheSymlink: Enable symlink caching to reduce metadata lookups 3. novncache: (macOS only) Disable vnode name caching to avoid stale data All options use the function parameter 'option' instead of global 'mountOptions'. |

| | 9072e1d38a | mount: add -asyncDio flag for async direct I/O (#7922) | 3 days ago |

* mount: add -asyncDio flag for async direct I/O This adds support for async direct I/O via the -asyncDio flag. Async DIO enables the FUSE_CAP_ASYNC_DIO capability, allowing the kernel to perform direct I/O operations asynchronously. This improves concurrency for applications that use O_DIRECT flag. Benefits: - Better concurrency for direct I/O operations - Improved performance for applications using O_DIRECT - Reduced blocking on I/O operations Use cases: - Database workloads that use direct I/O - Applications that bypass page cache intentionally - High-performance I/O scenarios Implementation inspired by JuiceFS which enables this capability for improved I/O performance. Usage: weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs -asyncDio * mount: add all remaining FUSE options (asyncDio, cacheSymlink, novncache) This combines the remaining three FUSE mount options on top of the merged writebackCache PR: 1. asyncDio: Enable async direct I/O for better concurrency 2. cacheSymlink: Enable symlink caching to reduce metadata lookups 3. novncache: (macOS only) Disable vnode name caching to avoid stale data All options use the function parameter 'option' instead of global 'mountOptions'. |

| | 1424fe6ed5 | mount: add -writebackCache flag for FUSE writeback caching (#7921) | 3 days ago |

* mount: add -writebackCache flag for FUSE writeback caching This adds support for FUSE writeback caching via the -writebackCache flag. Writeback caching buffers writes in the kernel page cache before flushing to the filesystem. This significantly improves performance for workloads with many small writes by reducing the number of write syscalls. Benefits: - Improved write performance for small files (2-5x faster) - Reduced latency for write-heavy workloads - Better handling of bursty write patterns Trade-offs: - Data may be lost if system crashes before kernel flushes - Not recommended for critical data without proper fsync usage - Disabled by default for safety Inspired by JuiceFS implementation which uses the same FUSE option. Usage: weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs -writebackCache * Apply suggestion from @gemini-code-assist[bot] Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> --------- Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com> |

| | 8f182e68e8 | Update README.md | 3 days ago |
| | 568f1fe5b1 | fix: include DiskType in metadata log volume assignment (#7918) | 3 days ago |

When writing metadata logs to /topics/.system/log, the filer was not respecting the disk type configuration from path-specific rules (fs.configure). This caused volume assignment failures when volume servers used a specific disk type (e.g., "ssd"), because the assign request defaulted to an empty disk type. The fix adds DiskType to the VolumeAssignRequest in the filer's metadata log write path, ensuring that path-specific disk type configurations are properly honored for internal system writes. Fixes errors like: "metadata log write failed /topics/.system/log/...: AssignVolume: failed to find writable volumes for collection". Signed-off-by: Charles Darke <s.cduk@toodevious.com> Co-authored-by: Charles Darke <s.cduk@toodevious.com>
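
The shape of the fix is easy to picture: the assign request issued for system-log writes should carry the disk type resolved from the matching path rule instead of the empty default. A minimal sketch with hypothetical types (not the actual SeaweedFS structs):

```go
package main

import "fmt"

// VolumeAssignRequest is a simplified stand-in for the filer's assign request.
type VolumeAssignRequest struct {
	Count      uint64
	Collection string
	DiskType   string // previously left empty for metadata log writes
}

// PathRule is a stand-in for an fs.configure path-specific rule.
type PathRule struct {
	LocationPrefix string
	DiskType       string
}

// assignForMetadataLog builds the assign request for /topics/.system/log writes,
// propagating the disk type from the matching path rule instead of defaulting to "".
func assignForMetadataLog(rule *PathRule) *VolumeAssignRequest {
	req := &VolumeAssignRequest{Count: 1, Collection: ".system"}
	if rule != nil {
		req.DiskType = rule.DiskType // e.g. "ssd"
	}
	return req
}

func main() {
	rule := &PathRule{LocationPrefix: "/topics/.system/log", DiskType: "ssd"}
	fmt.Printf("%+v\n", assignForMetadataLog(rule))
}
```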

| | 91fcc60898 | Have `volume.list` account for EC shards when computing disk usage. (#7909) | 3 days ago |
| | 0995214b37 | chore(deps): bump github.com/parquet-go/parquet-go from 0.25.1 to 0.26.3 (#7908) | 3 days ago |

Bumps [github.com/parquet-go/parquet-go](https://github.com/parquet-go/parquet-go) from 0.25.1 to 0.26.3. - [Release notes](https://github.com/parquet-go/parquet-go/releases) - [Changelog](https://github.com/parquet-go/parquet-go/blob/main/CHANGELOG.md) - [Commits](https://github.com/parquet-go/parquet-go/compare/v0.25.1...v0.26.3) --- updated-dependencies: - dependency-name: github.com/parquet-go/parquet-go dependency-version: 0.26.3 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |

| | b1a9f344fe | chore(deps): bump gocloud.dev/pubsub/natspubsub from 0.43.0 to 0.44.0 (#7907) | 3 days ago |

Bumps [gocloud.dev/pubsub/natspubsub](https://github.com/google/go-cloud) from 0.43.0 to 0.44.0. - [Release notes](https://github.com/google/go-cloud/releases) - [Commits](https://github.com/google/go-cloud/compare/v0.43.0...v0.44.0) --- updated-dependencies: - dependency-name: gocloud.dev/pubsub/natspubsub dependency-version: 0.44.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |

| | ea4a2422f4 | chore(deps): bump github.com/schollz/progressbar/v3 from 3.18.0 to 3.19.0 (#7906) | 3 days ago |

chore(deps): bump github.com/schollz/progressbar/v3 Bumps [github.com/schollz/progressbar/v3](https://github.com/schollz/progressbar) from 3.18.0 to 3.19.0. - [Release notes](https://github.com/schollz/progressbar/releases) - [Commits](https://github.com/schollz/progressbar/compare/v3.18.0...v3.19.0) --- updated-dependencies: - dependency-name: github.com/schollz/progressbar/v3 dependency-version: 3.19.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |

| | 4391cff2e8 | chore(deps): bump google.golang.org/api from 0.247.0 to 0.258.0 (#7905) | 3 days ago |

Bumps [google.golang.org/api](https://github.com/googleapis/google-api-go-client) from 0.247.0 to 0.258.0. - [Release notes](https://github.com/googleapis/google-api-go-client/releases) - [Changelog](https://github.com/googleapis/google-api-go-client/blob/main/CHANGES.md) - [Commits](https://github.com/googleapis/google-api-go-client/compare/v0.247.0...v0.258.0) --- updated-dependencies: - dependency-name: google.golang.org/api dependency-version: 0.258.0 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |

| | 7bd9f6b5d8 | chore(deps): bump modernc.org/sqlite from 1.39.0 to 1.42.2 (#7904) | 3 days ago |

Bumps [modernc.org/sqlite](https://gitlab.com/cznic/sqlite) from 1.39.0 to 1.42.2. - [Commits](https://gitlab.com/cznic/sqlite/compare/v1.39.0...v1.42.2) --- updated-dependencies: - dependency-name: modernc.org/sqlite dependency-version: 1.42.2 dependency-type: direct:production update-type: version-update:semver-minor ... Signed-off-by: dependabot[bot] <support@github.com> Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> |

| | e3db95e0c1 | Fix: Route unauthenticated specific STS requests to STS handler correctly (#7920) | 3 days ago |

* Fix STS Access Denied for AssumeRoleWithWebIdentity (Issue #7917)
* Fix logging regression: ensure IAM status is logged even if STS is enabled
* Address PR feedback: fix duplicate log, clarify comments, add comprehensive routing tests
* Add edge case test: authenticated STS action routes to IAM (auth precedence)
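
The routing fix hinges on recognizing the STS action before signature authentication is enforced, since the web-identity token itself is the credential. A minimal sketch of that dispatch decision using only net/http; the handler names are illustrative, not SeaweedFS's actual routes.

```go
package main

import (
	"fmt"
	"net/http"
)

// routeIamOrSTS decides whether a POST to the IAM/STS endpoint should be
// dispatched to the STS handler (AssumeRoleWithWebIdentity) or to the regular
// IAM handler. Unauthenticated STS requests must not be rejected up front.
func routeIamOrSTS(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseForm(); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	switch r.Form.Get("Action") {
	case "AssumeRoleWithWebIdentity":
		stsHandler(w, r) // validates the JWT token, returns temporary credentials
	default:
		iamHandler(w, r) // normal IAM actions, signature-authenticated
	}
}

func stsHandler(w http.ResponseWriter, r *http.Request) { fmt.Fprintln(w, "sts") }
func iamHandler(w http.ResponseWriter, r *http.Request) { fmt.Fprintln(w, "iam") }

func main() {
	http.HandleFunc("/", routeIamOrSTS)
	// http.ListenAndServe(":8080", nil) // left commented so the sketch compiles without starting a server
}
```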

| | b034cf188e | Fix: trim prefix slash in ListObjectVersionsHandler (#7919) | 3 days ago |

* Fix: trim prefix slash in ListObjectVersionsHandler
* Add test for ListObjectVersions prefix handling. The test validates that prefix normalization works correctly with and without leading slashes, ensuring the fix for /Veeam/Archive/ style prefixes.
* Simplify prefix test to validate normalization logic. The test now validates that the prefix normalization (TrimPrefix) works correctly and that normalized prefixes match paths as expected. This is a focused unit test that validates the core fix without requiring complex mocking of the filer client.
* Enhance prefix test with full matchesPrefixFilter logic. Added test cases for directory traversal, including directory matching with a trailing slash, canDescend logic for recursive directory search, and a full simulation of matchesPrefixFilter behavior. This provides more comprehensive coverage of the prefix normalization fix and ensures it works correctly for both files and directories.
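
A rough sketch of the normalization and matching behavior being tested, under the assumption that prefixes may arrive with or without a leading slash; the function names are illustrative, not the handler's actual helpers.

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePrefix strips a leading "/" so that "/Veeam/Archive/" and
// "Veeam/Archive/" are treated identically by the listing code.
func normalizePrefix(prefix string) string {
	return strings.TrimPrefix(prefix, "/")
}

// matchesPrefix reports whether an object key falls under the normalized prefix,
// and whether a directory can still be descended into while searching.
func matchesPrefix(key, prefix string) (match bool, canDescend bool) {
	p := normalizePrefix(prefix)
	if strings.HasPrefix(key, p) {
		return true, true
	}
	// A directory like "Veeam/" can be descended into if the prefix starts with it.
	return false, strings.HasPrefix(p, key)
}

func main() {
	fmt.Println(matchesPrefix("Veeam/Archive/backup.vbk", "/Veeam/Archive/")) // true true
	fmt.Println(matchesPrefix("Veeam/", "/Veeam/Archive/"))                   // false true
}
```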

| | 73098c9792 | filer.meta.backup: add -excludePaths flag to skip paths from backup (#7916) | 3 days ago |

* filer.meta.backup: add -excludePaths flag to skip paths from backup Add a new -excludePaths flag that accepts comma-separated path prefixes to exclude from backup operations. This enables selective backup when certain directories (e.g., legacy buckets) should be skipped. Usage: weed filer.meta.backup -filerDir=/buckets -excludePaths=/buckets/legacy1,/buckets/legacy2 -config=backup.toml 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * filer.meta.backup: address code review feedback for -excludePaths Fixes based on CodeRabbit and Gemini review: - Cache parsed exclude paths in struct (performance) - TrimSpace and skip empty entries (handles "a,,b" and "a, b") - Add trailing slash for directory boundary matching (prevents /buckets/legacy matching /buckets/legacy_backup) - Validate paths start with '/' and warn if not - Log excluded paths at startup for debugging - Fix rename handling: check both old and new paths, handle all four combinations correctly - Add docstring to shouldExclude() - Update UsageLine and Long description with new flag 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * filer.meta.backup: address nitpick feedback - Clarify directory boundary matching behavior in help text - Add warning when root path '/' is excluded (would exclude everything) 🤖 Generated with [Claude Code](https://claude.com/claude-code) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * includePrefixes and excludePrefixes --------- Co-authored-by: C Shaw <cliffshaw@users.noreply.github.com> Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> Co-authored-by: Chris Lu <chris.lu@gmail.com> |
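
The directory-boundary detail called out in the review (so that /buckets/legacy does not also exclude /buckets/legacy_backup) comes down to comparing against the prefix with a trailing slash. A minimal sketch of such a check; the helper names are illustrative, not the command's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// parseExcludePaths splits a comma-separated flag value, trims whitespace,
// and drops empty entries (handles "a,,b" and "a, b").
func parseExcludePaths(flagValue string) []string {
	var out []string
	for _, p := range strings.Split(flagValue, ",") {
		p = strings.TrimSpace(p)
		if p == "" {
			continue
		}
		out = append(out, p)
	}
	return out
}

// shouldExclude reports whether path falls under any excluded prefix,
// matching on directory boundaries so /buckets/legacy does not match
// /buckets/legacy_backup.
func shouldExclude(path string, excludes []string) bool {
	for _, ex := range excludes {
		if path == ex || strings.HasPrefix(path, strings.TrimSuffix(ex, "/")+"/") {
			return true
		}
	}
	return false
}

func main() {
	excludes := parseExcludePaths("/buckets/legacy1, /buckets/legacy2")
	fmt.Println(shouldExclude("/buckets/legacy1/file", excludes))        // true
	fmt.Println(shouldExclude("/buckets/legacy1_backup/file", excludes)) // false
}
```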

| | 7a18c3a16f | Fix critical authentication bypass vulnerability (#7912) (#7915) | 3 days ago |

* Fix critical authentication bypass vulnerability (#7912). The isRequestPostPolicySignatureV4() function was incorrectly returning true for ANY POST request with multipart/form-data content type, causing all such requests to bypass authentication in authRequest(). This allowed unauthenticated access to S3 API endpoints, as reported in issue #7912, where any credentials (or no credentials) were accepted. The fix removes isRequestPostPolicySignatureV4() entirely, preventing authTypePostPolicy from ever being set. PostPolicy signature verification is still properly handled in PostPolicyBucketHandler via doesPolicySignatureMatch(). Fixes #7912
* add AuthPostPolicy
* refactor
* Optimizing Auth Credentials
* Update auth_credentials.go
* Update auth_credentials.go

| | 808205e38f | s3: implement Bucket Owner Enforced for object ownership (#7913) | 4 days ago |

* s3: implement Bucket Owner Enforced for object ownership. Objects uploaded by service accounts (or any user) are now owned by the bucket owner when the bucket has the BucketOwnerEnforced ownership policy (the modern AWS default since April 2023). This provides a more intuitive ownership model where users expect objects created by their service accounts to be owned by themselves. Modified setObjectOwnerFromRequest to check bucket ObjectOwnership: when BucketOwnerEnforced, use the bucket owner's account ID; when ObjectWriter, use the uploader's account ID (backward compatible).
* s3: add nil check and fix ownership logic hole. Add a nil check for bucketRegistry before calling GetBucketMetadata; fix a logic hole where objects could be created without an owner when BucketOwnerEnforced is set but the bucket owner is nil; refactor to ensure objects always have an owner by falling back to the uploader when the bucket owner is unavailable; improve logging to distinguish between the different fallback scenarios. Addresses code review feedback from Gemini on PR #7913.
* s3: add comprehensive tests for object ownership logic. Add unit tests for setObjectOwnerFromRequest covering BucketOwnerEnforced (uses bucket owner), ObjectWriter (uses uploader), BucketOwnerPreferred (uses uploader), nil owner fallback scenarios, bucket metadata errors, nil bucketRegistry, and empty account ID handling. All 8 test cases pass, verifying correct ownership assignment in all scenarios including edge cases.
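
The ownership decision reduces to a small branch on the bucket's ObjectOwnership setting, with a fallback so an object never ends up ownerless. A minimal sketch with hypothetical types; the real handler works against the bucket registry and request context.

```go
package main

import "fmt"

type ObjectOwnership string

const (
	BucketOwnerEnforced  ObjectOwnership = "BucketOwnerEnforced"
	BucketOwnerPreferred ObjectOwnership = "BucketOwnerPreferred"
	ObjectWriter         ObjectOwnership = "ObjectWriter"
)

// resolveObjectOwner picks the account ID recorded as the object owner.
// Under BucketOwnerEnforced the bucket owner wins; otherwise the uploader does.
// If the bucket owner is unknown, fall back to the uploader so the object
// always has an owner.
func resolveObjectOwner(ownership ObjectOwnership, bucketOwner, uploader string) string {
	if ownership == BucketOwnerEnforced && bucketOwner != "" {
		return bucketOwner
	}
	return uploader
}

func main() {
	fmt.Println(resolveObjectOwner(BucketOwnerEnforced, "acct-bucket", "acct-sa")) // acct-bucket
	fmt.Println(resolveObjectOwner(ObjectWriter, "acct-bucket", "acct-sa"))        // acct-sa
	fmt.Println(resolveObjectOwner(BucketOwnerEnforced, "", "acct-sa"))            // acct-sa (fallback)
}
```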

| | b6d99f1c9e | Admin: Add Service Account Management UI (#7902) | 4 days ago |

* admin: add Service Account management UI Add admin UI for managing service accounts: New files: - handlers/service_account_handlers.go - HTTP handlers - dash/service_account_management.go - CRUD operations - view/app/service_accounts.templ - UI template Changes: - dash/types.go - Add ServiceAccount and related types - handlers/admin_handlers.go - Register routes and handlers - view/layout/layout.templ - Add sidebar navigation link Service accounts are stored as special identities with "sa:" prefix in their name, using ABIA access key prefix. They can be created, listed, enabled/disabled, and deleted through the admin UI. Features: - Create service accounts linked to parent users - View and manage service account status - Delete service accounts - Service accounts inherit parent user permissions Note: STS configuration is read-only (configured via JSON file). Full STS integration requires changes from PR #7901. * admin: use dropdown for parent user selection Change the Parent User field from text input to dropdown when creating a service account. The dropdown is populated with all existing Object Store users. Changes: - Add AvailableUsers field to ServiceAccountsData type - Populate available users in getServiceAccountsData handler - Update template to use <select> element with user options * admin: show secret access key on service account creation Display both access key and secret access key when creating a service account, with proper AWS CLI usage instructions. Changes: - Add SecretAccessKey field to ServiceAccount type (only populated on creation) - Return secret key from CreateServiceAccount - Add credentials modal with copy-to-clipboard buttons - Show AWS CLI usage example with actual credentials - Modal is non-dismissible until user confirms they saved credentials The secret key is only shown once during creation for security. After creation, only the access key ID is visible in the list. * admin: address code review comments for service account management - Persist creation dates in identity actions (createdAt:timestamp) - Replace magic number slicing with len(accessKeyPrefix) - Add bounds checking after strings.SplitN - Use accessKeyPrefix constant instead of hardcoded "ABIA" Creation dates are now stored as actions (e.g., "createdAt:1735473600") and will persist across restarts. Helper functions getCreationDate() and setCreationDate() manage the timestamp storage. Addresses review comments from gemini-code-assist[bot] and coderabbitai[bot] * admin: fix XSS vulnerabilities in service account details Replace innerHTML with template literals with safe DOM creation. The createSADetailsContent function now uses createElement and textContent to prevent XSS attacks from malicious service account data (id, description, parent_user, etc.). Also added try-catch for date parsing to prevent exceptions on malformed input. Addresses security review comments from coderabbitai[bot] * admin: add context.Context to service account management methods Addressed PR #7902 review feedback: 1. All service account management methods now accept context.Context as first parameter to enable cancellation, deadlines, and tracing 2. Removed all context.Background() calls 3. 
Updated handlers to pass c.Request.Context() from HTTP requests Methods updated: - GetServiceAccounts - GetServiceAccountDetails - CreateServiceAccount - UpdateServiceAccount - DeleteServiceAccount - GetServiceAccountByAccessKey Note: Creation date persistence was already implemented using the createdAt:<timestamp> action pattern as suggested in the review. * admin: fix render flow to prevent partial HTML writes Fixed ShowServiceAccounts handler to render template to an in-memory buffer first before writing to the response. This prevents partial HTML writes followed by JSON error responses, which would result in invalid mixed content. Changes: - Render to bytes.Buffer first - Only write to c.Writer if render succeeds - Use c.AbortWithStatus on error instead of attempting JSON response - Prevents any additional headers/body writes after partial write * admin: fix error handling, date validation, and event parameters Addressed multiple code review issues: 1. Proper 404 vs 500 error handling: - Added ErrServiceAccountNotFound sentinel error - GetServiceAccountDetails now wraps errors with sentinel - Handler uses errors.Is() to distinguish not-found from internal errors - Returns 404 only for missing resources, 500 for other errors - Logs internal errors before returning 500 2. Date validation in JavaScript: - Validate expiration date before using it - Check !isNaN(date.getTime()) to ensure valid date - Return validation error if date is invalid - Prevents invalid Date construction 3. Event parameter handling: - copyToClipboard now accepts event parameter - Updated onclick attributes to pass event object - Prevents reliance on window.event - More explicit and reliable event handling * admin: replace deprecated execCommand with Clipboard API Replaced deprecated document.execCommand('copy') with modern navigator.clipboard.writeText() API for better security and UX. Changes: - Made copyToClipboard async to support Clipboard API - Use navigator.clipboard.writeText() as primary method - Fallback to execCommand if Clipboard API fails (older browsers) - Added console warning when fallback is used - Maintains same visual feedback behavior * admin: improve security and UX for error handling Addressed code review feedback: 1. Security: Remove sensitive error details from API responses - CreateServiceAccount: Return generic error message - UpdateServiceAccount: Return generic error message - DeleteServiceAccount: Return generic error message - Detailed errors still logged server-side via glog.Errorf() - Prevents exposure of internal system details to clients 2. UX: Replace alert() with Bootstrap toast notifications - Implemented showToast() function using Bootstrap 5 toasts - Non-blocking, modern notification system - Auto-dismiss after 5 seconds - Proper HTML escaping to prevent XSS - Toast container positioned at top-right - Success (green) and error (red) variants * admin: complete error handling improvements Addressed remaining security review feedback: 1. GetServiceAccounts: Remove error details from response - Log errors server-side via glog.Errorf() - Return generic error message to client 2. UpdateServiceAccount & DeleteServiceAccount: - Wrap not-found errors with ErrServiceAccountNotFound sentinel - Enables proper 404 vs 500 distinction in handlers 3. 
Update & Delete handlers: - Added errors.Is() check for ErrServiceAccountNotFound - Return 404 for missing resources - Return 500 for internal errors with logging - Consistent with GetServiceAccountDetails behavior All handlers now properly distinguish not-found (404) from internal errors (500) and never expose sensitive error details to clients. * admin: implement expiration support and improve code quality Addressed final code review feedback: 1. Expiration Support: - Added expiration helper functions (getExpiration, setExpiration) - Implemented expiration in CreateServiceAccount - Implemented expiration in UpdateServiceAccount - Added Expiration field to ServiceAccount struct - Parse and validate RFC3339 expiration dates 2. Constants for Magic Strings: - Added StatusActive, StatusInactive constants - Added disabledAction, serviceAccountPrefix constants - Replaced all magic strings with constants throughout - Improves maintainability and prevents typos 3. Helper Function to Reduce Duplication: - Created identityToServiceAccount() helper - Reduces code duplication across Get/Update/Delete methods - Centralizes ServiceAccount struct building logic 4. Fixed time.Now() Fallback: - Changed from time.Now() to time.Time{} for legacy accounts - Prevents creation date from changing on each fetch - UI can display zero time as "N/A" or blank All code quality issues addressed! * admin: fix StatusActive reference in handler Use dash.StatusActive to properly reference the constant from the dash package. * admin: regenerate templ files Regenerated all templ Go files after recent template changes. The AWS CLI usage example already uses proper <pre><code> formatting which preserves line breaks for better readability. * admin: add explicit white-space CSS to AWS CLI example Added style="white-space: pre-wrap;" to the pre tag to ensure line breaks are preserved and displayed correctly in all browsers. This forces the browser to respect the newlines in the code block. * admin: fix AWS CLI example to display on separate lines Replaced pre/code block with individual div elements for each line. This ensures each command displays on its own line regardless of how templ processes whitespace. Each line is now a separate div with font-monospace styling for code appearance. * make * admin: filter service accounts from parent user dropdown Service accounts should not appear as selectable parent users when creating new service accounts. Added filter to GetObjectStoreUsers() to skip identities with "sa:" prefix, ensuring only actual IAM users are shown in the parent user dropdown. * admin: address code review feedback - Use constants for magic strings in service account management - Add Expiration field to service account responses - Add nil checks and context propagation - Improve templates (date validation, async clipboard, toast notifications) * Update service_accounts_templ.go |

| | ae9a943ef6 | IAM: Add Service Account Support (#7744) (#7901) | 4 days ago |

* iam: add ServiceAccount protobuf schema Add ServiceAccount message type to iam.proto with support for: - Unique ID and parent user linkage - Optional expiration timestamp - Separate credentials (access key/secret) - Action restrictions (subset of parent) - Enable/disable status This is the first step toward implementing issue #7744 (IAM Service Account Support). * iam: add service account response types Add IAM API response types for service account operations: - ServiceAccountInfo struct for marshaling account details - CreateServiceAccountResponse - DeleteServiceAccountResponse - ListServiceAccountsResponse - GetServiceAccountResponse - UpdateServiceAccountResponse Also add type aliases in iamapi package for backwards compatibility. Part of issue #7744 (IAM Service Account Support). * iam: implement service account API handlers Add CRUD operations for service accounts: - CreateServiceAccount: Creates service account with ABIA key prefix - DeleteServiceAccount: Removes service account and parent linkage - ListServiceAccounts: Lists all or filtered by parent user - GetServiceAccount: Retrieves service account details - UpdateServiceAccount: Modifies status, description, expiration Service accounts inherit parent user's actions by default and support optional expiration timestamps. Part of issue #7744 (IAM Service Account Support). * sts: add AssumeRoleWithWebIdentity HTTP endpoint Add STS API HTTP endpoint for AWS SDK compatibility: - Create s3api_sts.go with HTTP handlers matching AWS STS spec - Support AssumeRoleWithWebIdentity action with JWT token - Return XML response with temporary credentials (AccessKeyId, SecretAccessKey, SessionToken) matching AWS format - Register STS route at POST /?Action=AssumeRoleWithWebIdentity This enables AWS SDKs (boto3, AWS CLI, etc.) to obtain temporary S3 credentials using OIDC/JWT tokens. Part of issue #7744 (IAM Service Account Support). * test: add service account and STS integration tests Add integration tests for new IAM features: s3_service_account_test.go: - TestServiceAccountLifecycle: Create, Get, List, Update, Delete - TestServiceAccountValidation: Error handling for missing params s3_sts_test.go: - TestAssumeRoleWithWebIdentityValidation: Parameter validation - TestAssumeRoleWithWebIdentityWithMockJWT: JWT token handling Tests skip gracefully when SeaweedFS is not running or when IAM features are not configured. Part of issue #7744 (IAM Service Account Support). * iam: address code review comments - Add constants for service account ID and key lengths - Use strconv.ParseInt instead of fmt.Sscanf for better error handling - Allow clearing descriptions by checking key existence in url.Values - Replace magic numbers (12, 20, 40) with named constants Addresses review comments from gemini-code-assist[bot] * test: add proper error handling in service account tests Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal to prevent silent failures and ensure test reliability. Addresses review comment from gemini-code-assist[bot] * test: add proper error handling in STS tests Use require.NoError(t, err) for io.ReadAll and xml.Unmarshal to prevent silent failures and ensure test reliability. Repeated this fix throughout the file. Addresses review comment from gemini-code-assist[bot] in PR #7901. 
* iam: address additional code review comments - Specific error code mapping for STS service errors - Distinguish between Sender and Receiver error types in STS responses - Add nil checks for credentials in List/GetServiceAccount - Validate expiration date is in the future - Improve integration test error messages (include response body) - Add credential verification step in service account tests Addresses remaining review comments from gemini-code-assist[bot] across multiple files. * iam: fix shared slice reference in service account creation Copy parent's actions to create an independent slice for the service account instead of sharing the underlying array. This prevents unexpected mutations when the parent's actions are modified later. Addresses review comment from coderabbitai[bot] in PR #7901. * iam: remove duplicate unused constant Removed redundant iamServiceAccountKeyPrefix as ServiceAccountKeyPrefix is already defined and used. Addresses remaining cleanup task. * sts: document limitation of string-based error mapping Added TODO comment explaining that the current string-based error mapping approach is fragile and should be replaced with typed errors from the STS service in a future refactoring. This addresses the architectural concern raised in code review while deferring the actual implementation to a separate PR to avoid scope creep in the current service account feature addition. * iam: fix remaining review issues - Add future-date validation for expiration in UpdateServiceAccount - Reorder tests so credential verification happens before deletion - Fix compilation error by using correct JWT generation methods Addresses final review comments from coderabbitai[bot]. * iam: fix service account access key length The access key IDs were incorrectly generated with 24 characters instead of the AWS-standard 20 characters. This was caused by generating 20 random characters and then prepending the 4-character ABIA prefix. Fixed by subtracting the prefix length from AccessKeyLength, so the final key is: ABIA (4 chars) + random (16 chars) = 20 chars total. This ensures compatibility with S3 clients that validate key length. * test: add comprehensive service account security tests Added comprehensive integration tests for service account functionality: - TestServiceAccountS3Access: Verify SA credentials work for S3 operations - TestServiceAccountExpiration: Test expiration date validation and enforcement - TestServiceAccountInheritedPermissions: Verify parent-child relationship - TestServiceAccountAccessKeyFormat: Validate AWS-compatible key format (ABIA prefix, 20 char length) These tests ensure SeaweedFS service accounts are compatible with AWS conventions and provide robust security coverage. * iam: remove unused UserAccessKeyPrefix constant Code cleanup to remove unused constants. * iam: remove unused iamCommonResponse type alias Code cleanup to remove unused type aliases. * iam: restore and use UserAccessKeyPrefix constant Restored UserAccessKeyPrefix constant and updated s3api tests to use it instead of hardcoded strings for better maintainability and consistency. * test: improve error handling in service account security tests Added explicit error checking for io.ReadAll and xml.Unmarshal in TestServiceAccountExpiration to ensure failures are reported correctly and cleanup is performed only when appropriate. Also added logging for failed responses. 
* test: use t.Cleanup for reliable resource cleanup Replaced defer with t.Cleanup to ensure service account cleanup runs even when require.NoError fails. Also switched from manual error checking to require.NoError for more idiomatic testify usage. * iam: add CreatedBy field and optimize identity lookups - Added createdBy parameter to CreateServiceAccount to track who created each service account - Extract creator identity from request context using GetIdentityNameFromContext - Populate created_by field in ServiceAccount protobuf - Added findIdentityByName helper function to optimize identity lookups - Replaced nested loops with O(n) helper function calls in CreateServiceAccount and DeleteServiceAccount This addresses code review feedback for better auditing and performance. * iam: prevent user deletion when service accounts exist Following AWS IAM behavior, prevent deletion of users that have active service accounts. This ensures explicit cleanup and prevents orphaned service account resources with invalid ParentUser references. Users must delete all associated service accounts before deleting the parent user, providing safer resource management. * sts: enhance TODO with typed error implementation guidance Updated TODO comment with detailed implementation approach for replacing string-based error matching with typed errors using errors.Is(). This provides a clear roadmap for a follow-up PR to improve error handling robustness and maintainability. * iam: add operational limits for service account creation Added AWS IAM-compatible safeguards to prevent resource exhaustion: - Maximum 100 service accounts per user (LimitExceededException) - Maximum 1000 character description length (InvalidInputException) These limits prevent accidental or malicious resource exhaustion while not impacting legitimate use cases. * iam: add missing operational limit constants Added MaxServiceAccountsPerUser and MaxDescriptionLength constants that were referenced in the previous commit but not defined. * iam: enforce service account expiration during authentication CRITICAL SECURITY FIX: Expired service account credentials were not being rejected during authentication, allowing continued access after expiration. Changes: - Added Expiration field to Credential struct - Populate expiration when loading service accounts from configuration - Check expiration in all authentication paths (V2 and V4 signatures) - Return ErrExpiredToken for expired credentials This ensures expired service accounts are properly rejected at authentication time, matching AWS IAM behavior and preventing unauthorized access. * iam: fix error code for expired service account credentials Use ErrAccessDenied instead of non-existent ErrExpiredToken for expired service account credentials. This provides appropriate access denial for expired credentials while maintaining AWS-compatible error responses. * iam: fix remaining ErrExpiredToken references Replace all remaining instances of non-existent ErrExpiredToken with ErrAccessDenied for expired service account credentials. * iam: apply AWS-standard key format to user access keys Updated CreateAccessKey to generate AWS-standard 20-character access keys with AKIA prefix for regular users, matching the format used for service accounts. This ensures consistency across all access key types and full AWS compatibility. 
- Access keys: AKIA + 16 random chars = 20 total (was 21 chars, no prefix) - Secret keys: 40 random chars (was 42, now matches AWS standard) - Uses AccessKeyLength and UserAccessKeyPrefix constants * sts: replace fragile string-based error matching with typed errors Implemented robust error handling using typed errors and errors.Is() instead of fragile strings.Contains() matching. This decouples the HTTP layer from service implementation details and prevents errors from being miscategorized if error messages change. Changes: - Added typed error variables to weed/iam/sts/constants.go: * ErrTypedTokenExpired * ErrTypedInvalidToken * ErrTypedInvalidIssuer * ErrTypedInvalidAudience * ErrTypedMissingClaims - Updated STS service to wrap provider authentication errors with typed errors - Replaced strings.Contains() with errors.Is() in HTTP layer for error checking - Removed TODO comment as the improvement is now implemented This makes error handling more maintainable and reliable. * sts: eliminate all string-based error matching with provider-level typed errors Completed the typed error implementation by adding provider-level typed errors and updating provider implementations to return them. This eliminates ALL fragile string matching throughout the entire error handling stack. Changes: - Added typed error definitions to weed/iam/providers/errors.go: * ErrProviderTokenExpired * ErrProviderInvalidToken * ErrProviderInvalidIssuer * ErrProviderInvalidAudience * ErrProviderMissingClaims - Updated OIDC provider to wrap JWT validation errors with typed provider errors - Replaced strings.Contains() with errors.Is() in STS service for error mapping - Complete error chain: Provider -> STS -> HTTP layer, all using errors.Is() This provides: - Reliable error classification independent of error message content - Type-safe error checking throughout the stack - No order-dependent string matching - Maintainable error handling that won't break with message changes * oidc: use jwt.ErrTokenExpired instead of string matching Replaced the last remaining string-based error check with the JWT library's exported typed error. This makes the error detection independent of error message content and more robust against library updates. Changed from: strings.Contains(errMsg, "expired") To: errors.Is(err, jwt.ErrTokenExpired) This completes the elimination of ALL string-based error matching throughout the entire authentication stack. * iam: add description length validation to UpdateServiceAccount Fixed inconsistency where UpdateServiceAccount didn't validate description length against MaxDescriptionLength, allowing operational limits to be bypassed during updates. Now validates that updated descriptions don't exceed 1000 characters, matching the validation in CreateServiceAccount. * iam: refactor expiration check into helper method Extracted duplicated credential expiration check logic into a helper method to reduce code duplication and improve maintainability. Added Credential.isCredentialExpired() method and replaced 5 instances of inline expiration checks across auth_signature_v2.go and auth_signature_v4.go. * iam: address critical Copilot security and consistency feedback Fixed three critical issues identified by Copilot code review: 1. SECURITY: Prevent loading disabled service account credentials - Added check to skip disabled service accounts during credential loading - Disabled accounts can no longer authenticate 2. 
Add DurationSeconds validation for STS AssumeRoleWithWebIdentity - Enforce AWS-compatible range: 900-43200 seconds (15 min - 12 hours) - Returns proper error for out-of-range values 3. Fix expiration update consistency in UpdateServiceAccount - Added key existence check like Description field - Allows explicit clearing of expiration by setting to empty string - Distinguishes between "not updating" and "clearing expiration" * sts: remove unused durationSecondsStr variable Fixed build error from unused variable after refactoring duration parsing. * iam: address remaining Copilot feedback and remove dead code Completed remaining Copilot code review items: 1. Remove unused getPermission() method (dead code) - Method was defined but never called anywhere 2. Improve slice modification safety in DeleteServiceAccount - Replaced append-with-slice-operations with filter pattern - Avoids potential issues from mutating slice during iteration 3. Fix route registration order - Moved STS route registration BEFORE IAM route - Prevents IAM route from intercepting STS requests - More specific route (with query parameter) now registered first * iam: improve expiration validation and test cleanup robustness Addressed additional Copilot feedback: 1. Make expiration validation more explicit - Added explicit check for negative values - Added comment clarifying that 0 is allowed to clear expiration - Improves code readability and intent 2. Fix test cleanup order in s3_service_account_test.go - Track created service accounts in a slice - Delete all service accounts before deleting parent user - Prevents DeleteConflictException during cleanup - More robust cleanup even if test fails mid-execution Note: s3_service_account_security_test.go already had correct cleanup order due to LIFO defer execution. * test: remove redundant variable assignments Removed duplicate assignments of createdSAId, createdAccessKeyId, and createdSecretAccessKey on lines 148-150 that were already assigned on lines 132-134. |
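
The typed-error refactor described above replaces string matching with sentinel errors wrapped at each layer and checked with errors.Is. A minimal sketch of the pattern; the sentinel and function names are illustrative, loosely following the commit's naming rather than quoting the actual code.

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel errors exposed by the token-validation layer (illustrative names).
var (
	ErrTokenExpired = errors.New("token expired")
	ErrInvalidToken = errors.New("invalid token")
)

// validateWebIdentityToken wraps the underlying failure with a typed sentinel,
// so callers never have to inspect error strings.
func validateWebIdentityToken(token string) error {
	if token == "expired" {
		return fmt.Errorf("jwt validation failed for issuer %q: %w", "https://idp.example", ErrTokenExpired)
	}
	if token == "" {
		return fmt.Errorf("empty web identity token: %w", ErrInvalidToken)
	}
	return nil
}

// mapToSTSErrorCode converts a wrapped error into an AWS-style STS error code.
func mapToSTSErrorCode(err error) string {
	switch {
	case errors.Is(err, ErrTokenExpired):
		return "ExpiredTokenException"
	case errors.Is(err, ErrInvalidToken):
		return "InvalidIdentityToken"
	default:
		return "InternalFailure"
	}
}

func main() {
	err := validateWebIdentityToken("expired")
	fmt.Println(mapToSTSErrorCode(err)) // ExpiredTokenException
}
```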

| | 288ba5fec8 | mount: let filer handle chunk deletion decision (#7900) | 5 days ago |

* mount: let filer handle chunk deletion decision. Remove the chunk deletion decision from the FUSE mount's Unlink operation. Previously, the mount decided whether to delete chunks based on its locally cached entry's HardLinkCounter, which could be stale. Now always pass isDeleteData=true and let the filer make the authoritative decision based on its own data. This prevents potential inconsistencies when the FUSE mount's cached entry is stale, when race conditions occur between multiple mounts, or when direct filer operations change hard link counts.
* filer: check hard link counter before deleting chunks. When deleting an entry, only delete the underlying chunks if it is not a hard link, or if it is the last hard link (counter <= 1). This protects against data loss when a client (like a FUSE mount) requests chunk deletion for a file that has multiple hard links.
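
The filer-side guard is a small predicate over the entry's hard-link metadata. A minimal sketch with hypothetical field names; the actual filer entry type differs.

```go
package main

import "fmt"

// Entry is a simplified stand-in for a filer entry's hard-link metadata.
type Entry struct {
	HardLinkID      []byte // non-empty means the entry participates in a hard link group
	HardLinkCounter int32
}

// shouldDeleteChunks decides whether deleting this entry should also delete
// its chunks. The client may request data deletion, but the filer only honors
// it when the entry is not a hard link, or is the last remaining link.
func shouldDeleteChunks(e Entry, isDeleteDataRequested bool) bool {
	if !isDeleteDataRequested {
		return false
	}
	isHardLink := len(e.HardLinkID) > 0
	return !isHardLink || e.HardLinkCounter <= 1
}

func main() {
	fmt.Println(shouldDeleteChunks(Entry{}, true))                                            // true: plain file
	fmt.Println(shouldDeleteChunks(Entry{HardLinkID: []byte("x"), HardLinkCounter: 3}, true)) // false: other links remain
	fmt.Println(shouldDeleteChunks(Entry{HardLinkID: []byte("x"), HardLinkCounter: 1}, true)) // true: last link
}
```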

| | 29a245052e | tidy | 5 days ago |
| | 6b98b52acc | Fix reporting of EC shard sizes from nodes to masters. (#7835) | 5 days ago |

SeaweedFS tracks EC shard sizes on topology data structures, but this information is never relayed to master servers. As a result, commands reporting disk usage, such as `volume.list` and `cluster.status`, yield incorrect figures when EC shards are present.
As an example for a simple 5-node test cluster, before...
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[1 5]
Disk hdd total size:88967096 file_count:172
DataNode 192.168.10.111:9001 total size:88967096 file_count:172
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[0 4]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9002 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[2 6]
Disk hdd total size:77267536 file_count:166
DataNode 192.168.10.111:9003 total size:77267536 file_count:166
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[3 7]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9004 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[8 9 10 11 12 13]
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:498703896 file_count:1014
DataCenter DefaultDataCenter total size:498703896 file_count:1014
total size:498703896 file_count:1014
```
...and after:
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[1 5 9] sizes:[1:8.00 MiB 5:8.00 MiB 9:8.00 MiB] total:24.00 MiB
Disk hdd total size:81761800 file_count:161
DataNode 192.168.10.111:9001 total size:81761800 file_count:161
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[11 12 13] sizes:[11:8.00 MiB 12:8.00 MiB 13:8.00 MiB] total:24.00 MiB
Disk hdd total size:88678712 file_count:170
DataNode 192.168.10.111:9002 total size:88678712 file_count:170
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[0 4 8] sizes:[0:8.00 MiB 4:8.00 MiB 8:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9003 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[2 6 10] sizes:[2:8.00 MiB 6:8.00 MiB 10:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9004 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[3 7] sizes:[3:8.00 MiB 7:8.00 MiB] total:16.00 MiB
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:511321536 file_count:993
DataCenter DefaultDataCenter total size:511321536 file_count:993
total size:511321536 file_count:993
```
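
The gist of the fix is that each node's report now carries per-shard sizes, which get folded into the disk-usage totals. A minimal sketch of that aggregation with hypothetical types; the real topology structures are more involved.

```go
package main

import "fmt"

// EcShardInfo is a stand-in for the per-shard size info a volume server
// reports to the master.
type EcShardInfo struct {
	VolumeID  uint32
	ShardID   uint8
	ShardSize uint64 // bytes
}

// diskUsage accumulates regular volume sizes plus EC shard sizes, so that
// commands like volume.list reflect space consumed by erasure-coded data.
func diskUsage(volumeSizes []uint64, ecShards []EcShardInfo) uint64 {
	var total uint64
	for _, s := range volumeSizes {
		total += s
	}
	for _, shard := range ecShards {
		total += shard.ShardSize
	}
	return total
}

func main() {
	shards := []EcShardInfo{
		{VolumeID: 1, ShardID: 1, ShardSize: 8 << 20},
		{VolumeID: 1, ShardID: 5, ShardSize: 8 << 20},
		{VolumeID: 1, ShardID: 9, ShardSize: 8 << 20},
	}
	fmt.Printf("%.2f MiB\n", float64(diskUsage(nil, shards))/(1<<20))
}
```

With three 8 MiB shards this reproduces the `total:24.00 MiB` figure shown for node 192.168.10.111:9001 in the listing above.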

| | 2b529e310d | s3: Add SOSAPI support for Veeam integration (#7899) | 5 days ago |

* s3api: Add SOSAPI core implementation and tests
Implement Smart Object Storage API (SOSAPI) support for Veeam integration.
- Add s3api_sosapi.go with XML structures and handlers for system.xml and capacity.xml
- Implement virtual object detection and dynamic XML generation
- Add capacity retrieval via gRPC (to be optimized in follow-up)
- Include comprehensive unit tests covering detection, XML generation, and edge cases
This enables Veeam Backup & Replication to discover SeaweedFS capabilities and capacity.
* s3api: Integrate SOSAPI handlers into GetObject and HeadObject
Add early interception for SOSAPI virtual objects in GetObjectHandler and HeadObjectHandler.
- Check for SOSAPI objects (.system-*/system.xml, .system-*/capacity.xml) before normal processing
- Delegate to handleSOSAPIGetObject and handleSOSAPIHeadObject when detected
- Ensures virtual objects are served without hitting storage layer
* s3api: Allow anonymous access to SOSAPI virtual objects
Enable discovery of SOSAPI capabilities without requiring credentials.
- Modify AuthWithPublicRead to bypass auth for SOSAPI objects if bucket exists
- Supports Veeam's initial discovery phase before full IAM setup
- Validates bucket existence to prevent information disclosure
* s3api: Fix SOSAPI capacity retrieval to use proper master connection
Fix gRPC error by connecting directly to master servers instead of through filer.
- Use pb.WithOneOfGrpcMasterClients with s3a.option.Masters
- Matches pattern used in bucket_size_metrics.go
- Resolves "unknown service master_pb.Seaweed" error
- Gracefully handles missing master configuration
* Merge origin/master and implement robust SOSAPI capacity logic
- Resolved merge conflict in s3api_sosapi.go
- Replaced global Statistics RPC with VolumeList (topology) for accurate bucket-specific 'Used' calculation
- Added bucket quota support (report quota as Capacity if set)
- Implemented cluster-wide capacity calculation from topology when no quota
- Added unit tests for topology capacity and usage calculations
* s3api: Remove anonymous access to SOSAPI virtual objects
Reverts the implicit public access for system.xml and capacity.xml. Requests to these objects now require standard S3 authentication, unless the bucket has a public-read policy.
* s3api: Refactor SOSAPI handlers to use http.ServeContent
- Consolidate handleSOSAPIGetObject and handleSOSAPIHeadObject into serveSOSAPI
- Use http.ServeContent for standard Range, HEAD, and ETag handling
- Remove manual range request handler and reduce code duplication
* s3api: Unify SOSAPI request handling
- Replaced handleSOSAPIGetObject and handleSOSAPIHeadObject with single HandleSOSAPI function
- Updated call sites in s3api_object_handlers.go
- Simplifies logic and ensures consistent handling for both GET and HEAD requests via http.ServeContent
* s3api: Restore distinct SOSAPI GET/HEAD handlers
- Reverted unified handler to enforce distinct behavior for GET and HEAD
- GET: Supports Range requests via http.ServeContent
- HEAD: Explicitly ignores Range requests (matches MinIO behavior) and writes headers only
* s3api: Refactor SOSAPI handlers to eliminate duplication
- Extracted shared content generation logic into generateSOSAPIContent helper
- handleSOSAPIGetObject: Uses http.ServeContent (supports Range requests)
- handleSOSAPIHeadObject: Manually sets headers (no Range, no body)
- Maintains distinct behavior while following DRY principle
* s3api: Remove low-value SOSAPI tests
Removed tests that validate standard library behavior or trivial constant checks:
- TestSOSAPIConstants (string prefix/suffix checks)
- TestSystemInfoXMLRootElement (redundant with TestGenerateSystemXML)
- TestSOSAPIXMLContentType (tests httptest, not our code)
- TestHTTPTimeFormat (tests standard library)
- TestCapacityInfoXMLStruct (tests Go's XML marshaling)
Kept tests that validate actual business logic and edge cases.
* s3api: Use consistent S3-compliant error responses in SOSAPI
Replaced http.Error() with s3err.WriteErrorResponse() for internal errors to ensure all SOSAPI errors return S3-compliant XML instead of plain text.
* s3api: Return error when no masters configured for SOSAPI capacity
Changed getCapacityInfo to return an error instead of silently returning zero capacity when no master servers are configured. This helps surface configuration issues rather than masking them.
* s3api: Use collection name with FilerGroup prefix for SOSAPI capacity
Fixed collectBucketUsageFromTopology to use s3a.getCollectionName(bucket) instead of raw bucket name. This ensures collection comparisons match actual volume collection names when FilerGroup prefix is configured.
* s3api: Apply PR review feedback for SOSAPI
- Renamed `bucket` parameter to `collectionName` in collectBucketUsageFromTopology for clarity
- Changed error checks from `==` to `errors.Is()` for better wrapped error handling
- Added `errors` import
* s3api: Avoid variable shadowing in SOSAPI capacity retrieval
Refactored `getCapacityInfo` to use distinct variable names for errors to improve code clarity and avoid unintentional shadowing of the return parameter.
|
5 days ago |
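For orientation on the PR above, a minimal sketch of the interception flow it describes: detect the `.system-*/system.xml` and `.system-*/capacity.xml` virtual objects, build the XML in memory, serve GET via http.ServeContent (so Range and conditional requests work) and HEAD with headers only. The function names follow the commit text, but the XML fields are placeholders rather than the actual Veeam SOSAPI schema, and this is not the SeaweedFS implementation itself.

```go
// Sketch only: SOSAPI virtual-object detection and serving as described in the
// commit above. The XML fields are placeholders, not the real SOSAPI schema.
package sosapi

import (
	"bytes"
	"encoding/xml"
	"net/http"
	"path"
	"strconv"
	"strings"
	"time"
)

// systemInfo carries placeholder fields for system.xml.
type systemInfo struct {
	XMLName         xml.Name `xml:"SystemInfo"`
	ProtocolVersion string   `xml:"ProtocolVersion"`
	ModelName       string   `xml:"ModelName"`
}

// isSOSAPIObject reports whether the object key names a SOSAPI virtual object,
// i.e. ".system-<suffix>/system.xml" or ".system-<suffix>/capacity.xml".
func isSOSAPIObject(key string) bool {
	return strings.HasPrefix(key, ".system-") &&
		(strings.HasSuffix(key, "/system.xml") || strings.HasSuffix(key, "/capacity.xml"))
}

// serveSOSAPI builds the virtual object in memory and serves it without touching
// the storage layer: GET goes through http.ServeContent so Range and conditional
// requests are handled; HEAD writes headers only and ignores Range.
func serveSOSAPI(w http.ResponseWriter, r *http.Request, key string) {
	body, err := xml.MarshalIndent(systemInfo{ProtocolVersion: "1.0", ModelName: "SeaweedFS"}, "", "  ")
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.Header().Set("Content-Type", "application/xml")
	if r.Method == http.MethodHead {
		w.Header().Set("Content-Length", strconv.Itoa(len(body)))
		w.WriteHeader(http.StatusOK)
		return
	}
	http.ServeContent(w, r, path.Base(key), time.Now(), bytes.NewReader(body))
}
```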
|
|
e8baeb3616 |
s3api: Allow anonymous access to SOSAPI virtual objects
Enable discovery of SOSAPI capabilities without requiring credentials.
- Modify AuthWithPublicRead to bypass auth for SOSAPI objects if bucket exists
- Supports Veeam's initial discovery phase before full IAM setup
- Validates bucket existence to prevent information disclosure
|
5 days ago |
|
|
a757ef77b1 |
s3api: Integrate SOSAPI handlers into GetObject and HeadObject
Add early interception for SOSAPI virtual objects in GetObjectHandler and HeadObjectHandler.
- Check for SOSAPI objects (.system-*/system.xml, .system-*/capacity.xml) before normal processing
- Delegate to handleSOSAPIGetObject and handleSOSAPIHeadObject when detected
- Ensures virtual objects are served without hitting storage layer
|
5 days ago |
|
|
fba67ce0f0 |
s3api: Add SOSAPI core implementation and tests
Implement Smart Object Storage API (SOSAPI) support for Veeam integration.
- Add s3api_sosapi.go with XML structures and handlers for system.xml and capacity.xml
- Implement virtual object detection and dynamic XML generation
- Add capacity retrieval via gRPC (to be optimized in follow-up)
- Include comprehensive unit tests covering detection, XML generation, and edge cases
This enables Veeam Backup & Replication to discover SeaweedFS capabilities and capacity.
|
5 days ago |
|
|
56ebd9236e
|
Fix: relax gRPC server keepalive enforcement to 20s (#7898)
* Fix: relax gRPC server keepalive enforcement to 20s to prevent 'too_many_pings' errors
* Refactor: introduce GrpcKeepAliveMinimumTime constant for clarity
|
5 days ago |
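For reference, a minimal sketch of what such a relaxed enforcement policy looks like with grpc-go. The GrpcKeepAliveMinimumTime name comes from the commit; the surrounding wiring (and the PermitWithoutStream choice) is illustrative rather than the actual SeaweedFS server setup.

```go
// Sketch: server-side keepalive enforcement relaxed to 20s so well-behaved
// clients pinging at >=20s intervals are not rejected with "too_many_pings".
package rpc

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

// GrpcKeepAliveMinimumTime is the minimum interval between client pings the
// server will tolerate (name taken from the commit above).
const GrpcKeepAliveMinimumTime = 20 * time.Second

// newServer builds a gRPC server with the relaxed enforcement policy applied.
func newServer(opts ...grpc.ServerOption) *grpc.Server {
	opts = append(opts, grpc.KeepaliveEnforcementPolicy(keepalive.EnforcementPolicy{
		MinTime:             GrpcKeepAliveMinimumTime,
		PermitWithoutStream: true, // illustrative: also tolerate pings with no active streams
	}))
	return grpc.NewServer(opts...)
}
```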
|
|
f88b9a643d |
add one example
|
5 days ago |
|
|
694218dc13 |
Merge branch 'master' of https://github.com/seaweedfs/seaweedfs
|
6 days ago |
|
|
a3c090e606 |
adjust layout
|
6 days ago |
|
|
915a7d4a54
|
feat: Add probes to worker service (#7896)
* feat: Add probes to worker service
* feat: Add probes to worker service
* Merge branch 'master' into pr/7896
* refactor
---------
Co-authored-by: Chris Lu <chris.lu@gmail.com>
|
6 days ago |
|
|
ef20873c31
|
S3: Fix Content-Encoding header not preserved (#7894) (#7895)
* S3: Fix Content-Encoding header not preserved (#7894)
The Content-Encoding header was not being returned in S3 GET/HEAD responses because it wasn't being stored in metadata during PUT operations.
Root cause: The putToFiler function only stored a hardcoded list of standard HTTP headers (Cache-Control, Expires, Content-Disposition) but was missing Content-Encoding and Content-Language.
Fix: Added Content-Encoding and Content-Language to the list of standard headers that are stored in entry.Extended during PUT operations. This matches the behavior of ParseS3Metadata (used for multipart uploads) and ensures consistency across all S3 operations.
Fixes #7894
* Update s3api_object_handlers_put.go
|
6 days ago |
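A minimal sketch of the fix as described: the PUT path copies a small allow-list of standard headers into the entry's extended attributes, and that list now includes Content-Encoding and Content-Language. The map stands in for the filer entry's Extended metadata; this is a paraphrase of the commit, not the putToFiler code itself.

```go
// Sketch: persist select standard HTTP headers on PUT so GET/HEAD can echo them back.
package s3meta

import "net/http"

// storableHeaders is the allow-list described in the commit; Content-Encoding and
// Content-Language are the newly added entries.
var storableHeaders = []string{
	"Cache-Control",
	"Expires",
	"Content-Disposition",
	"Content-Encoding",
	"Content-Language",
}

// storeStandardHeaders copies present headers from the PUT request into the
// entry's extended attributes (represented here as a plain map).
func storeStandardHeaders(r *http.Request, extended map[string][]byte) {
	for _, name := range storableHeaders {
		if v := r.Header.Get(name); v != "" {
			extended[name] = []byte(v)
		}
	}
}
```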
|
|
6de6061ce9
|
admin: add cursor-based pagination to file browser (#7891)
* adjust menu items
* admin: add cursor-based pagination to file browser
- Implement cursor-based pagination using lastFileName parameter
- Add customizable page size selector (20/50/100/200 entries)
- Add compact pagination controls in header and footer
- Remove summary cards for cleaner UI
- Make directory names clickable to return to first page
- Support forward-only navigation (Next button)
- Preserve cursor position when changing page size
- Remove sorting to align with filer's storage order approach
* Update file_browser_templ.go
* admin: remove directory icons from breadcrumbs
* Update file_browser_templ.go
* admin: address PR comments
- Fix fragile EOF check: use io.EOF instead of string comparison
- Cap page size at 200 to prevent potential DoS
- Remove unused helper functions from template
- Use safer templ script for page size selector to prevent XSS
* admin: cleanup redundant first button
* Update file_browser_templ.go
* admin: remove entry counting logic
* admin: remove unused variables in file browser data
* admin: remove unused logic for FirstFileName and HasPrevPage
* admin: remove unused TotalEntries and TotalSize fields
* Update file_browser_data.go
|
7 days ago |
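The cursor-based listing described above boils down to a simple contract: the client sends the last file name it saw, the server returns up to a capped number of entries after it, and io.EOF (checked with errors.Is rather than string comparison) marks the end. A minimal sketch under those assumptions; the lister type is a stand-in for the filer listing call, not the actual admin UI code.

```go
// Sketch: forward-only, cursor-based page listing with a capped page size.
package browse

import (
	"errors"
	"io"
)

const maxPageSize = 200 // cap to prevent oversized requests

// Entry is a minimal placeholder for a file-browser row.
type Entry struct{ Name string }

// lister returns up to limit entries strictly after startFrom, and io.EOF
// (possibly alongside the final entries) when the directory is exhausted.
type lister func(dir, startFrom string, limit int) ([]Entry, error)

// listPage fetches one page and reports the cursor to use for the next page.
func listPage(list lister, dir, lastFileName string, pageSize int) (entries []Entry, nextCursor string, hasMore bool, err error) {
	if pageSize <= 0 || pageSize > maxPageSize {
		pageSize = maxPageSize
	}
	entries, err = list(dir, lastFileName, pageSize)
	if err != nil && !errors.Is(err, io.EOF) { // io.EOF just means "no further pages"
		return nil, "", false, err
	}
	hasMore = !errors.Is(err, io.EOF) && len(entries) == pageSize
	if len(entries) > 0 {
		nextCursor = entries[len(entries)-1].Name // cursor for the Next button
	}
	return entries, nextCursor, hasMore, nil
}
```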
|
|
8d6bcddf60
|
Add S3 volume encryption support with -s3.encryptVolumeData flag (#7890)
* Add S3 volume encryption support with -s3.encryptVolumeData flag
This change adds volume-level encryption support for S3 uploads, similar
to the existing -filer.encryptVolumeData option. Each chunk is encrypted
with its own auto-generated CipherKey when the flag is enabled.
Changes:
- Add -s3.encryptVolumeData flag to weed s3, weed server, and weed mini
- Wire Cipher option through S3ApiServer and ChunkedUploadOption
- Add integration tests for multi-chunk range reads with encryption
- Tests verify encryption works across chunk boundaries
Usage:
weed s3 -encryptVolumeData
weed server -s3 -s3.encryptVolumeData
weed mini -s3.encryptVolumeData
Integration tests:
go test -v -tags=integration -timeout 5m ./test/s3/sse/...
* Add GitHub Actions CI for S3 volume encryption tests
- Add test-volume-encryption target to Makefile that starts server with -s3.encryptVolumeData
- Add s3-volume-encryption job to GitHub Actions workflow
- Tests run with integration build tag and 10m timeout
- Server logs uploaded on failure for debugging
* Fix S3 client credentials to use environment variables
The test was using hardcoded credentials "any"/"any" but the Makefile
sets AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY to "some_access_key1"/
"some_secret_key1". Updated getS3Client() to read from environment
variables with fallback to "any"/"any" for manual testing.
* Change bucket creation errors from skip to fatal
Tests should fail, not skip, when bucket creation fails. This ensures
that credential mismatches and other configuration issues are caught
rather than silently skipped.
* Make copy and multipart test jobs fail instead of succeed
Changed exit 0 to exit 1 for s3-sse-copy-operations and s3-sse-multipart
jobs. These jobs document known limitations but should fail to ensure
the issues are tracked and addressed, not silently ignored.
* Hardcode S3 credentials to match Makefile
Changed from environment variables to hardcoded credentials
"some_access_key1"/"some_secret_key1" to match the Makefile
configuration. This ensures tests work reliably.
* fix Double Encryption
* fix Chunk Size Mismatch
* Added IsCompressed
* is gzipped
* fix copying
* only perform HEAD request when len(cipherKey) > 0
* Revert "Make copy and multipart test jobs fail instead of succeed"
This reverts commit
|
7 days ago |
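To illustrate the idea behind -s3.encryptVolumeData described above (each uploaded chunk is sealed with its own auto-generated CipherKey, and only the key lives in chunk metadata rather than in volume data), here is a generic per-chunk AES-GCM sketch. It is not SeaweedFS's actual cipher format or upload path.

```go
// Sketch only: per-chunk encryption with a fresh random key per chunk.
package chunkcipher

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
)

// encryptChunk seals one chunk with a freshly generated 256-bit key and returns
// both; the key would be kept in the chunk metadata, never in the volume data.
func encryptChunk(plain []byte) (cipherKey, sealed []byte, err error) {
	cipherKey = make([]byte, 32)
	if _, err = rand.Read(cipherKey); err != nil {
		return nil, nil, err
	}
	block, err := aes.NewCipher(cipherKey)
	if err != nil {
		return nil, nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err = rand.Read(nonce); err != nil {
		return nil, nil, err
	}
	// Prepend the nonce so the chunk can later be opened with cipherKey alone.
	sealed = gcm.Seal(nonce, nonce, plain, nil)
	return cipherKey, sealed, nil
}
```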
|
|
935f41bff6
|
filer.backup: ignore missing volume/lookup errors when -ignore404Error is set (#7889)
* filer.backup: ignore missing volume/lookup errors when -ignore404Error is set (#7888) * simplify |
1 week ago |
|
|
c688b69700 |
reduce logs
|
1 week ago |
|
|
82dac3df03
|
s3: do not persist multi part "Response-Content-Disposition" in request header (#7887)
* fix: support standard HTTP headers in S3 multipart upload
* fix(s3api): validate standard HTTP headers correctly and avoid persisting Response-Content-Disposition
---------
Co-authored-by: steve.wei <coderushing@gmail.com>
|
1 week ago |
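A small sketch of the distinction this fix enforces: standard entity headers are persisted with the multipart object, while Response-Content-Disposition (and other response-* overrides) only shape a single response and must not be stored. The header list and helper name are illustrative, not the actual validation code.

```go
// Sketch: decide which incoming headers may be persisted with a multipart object.
package s3headers

import "strings"

// persistable lists standard entity headers that travel with the object itself.
var persistable = map[string]bool{
	"Content-Type":        true,
	"Content-Encoding":    true,
	"Content-Language":    true,
	"Content-Disposition": true,
	"Cache-Control":       true,
	"Expires":             true,
}

// shouldPersist rejects per-request response overrides such as
// Response-Content-Disposition and accepts only the allow-listed headers.
// The name is expected in canonical form (e.g. "Content-Encoding").
func shouldPersist(name string) bool {
	if strings.HasPrefix(strings.ToLower(name), "response-") {
		return false
	}
	return persistable[name]
}
```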
|
|
f07ba2c5aa
|
fix: support standard HTTP headers in S3 multipart upload (#7884)
Co-authored-by: Chris Lu <chris.lu@gmail.com> |
1 week ago |
|
|
95716a2f87 |
less verbose
|
1 week ago |
|
|
2b3ff3cd05 |
verbose mode
|
1 week ago |