When a disk reports IO errors during vacuum compaction (e.g., 'read /mnt/d1/weed/oc_xyz.dat: input/output error'), the vacuum task should signal the error to the master so it can:
1. Drop the faulty volume replica
2. Rebuild the replica from healthy copies
Changes:
- Add checkReadWriteError() calls in vacuum read paths (ReadNeedleBlob, ReadData, ScanVolumeFile) to flag EIO errors in volume.lastIoError
- Preserve error wrapping using %w format instead of %v so EIO propagates correctly
- The existing heartbeat logic will detect lastIoError and remove the bad volume
Fixes issue #8237
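For illustration, a minimal sketch of the check being added (the `Volume` stub and exact signature are assumptions; the real code in weed/storage differs):

```go
import (
	"errors"
	"syscall"
)

// illustrative stand-in for the real storage.Volume struct
type Volume struct {
	lastIoError error
}

// checkReadWriteError flags disk-level I/O errors so the next heartbeat
// can report the replica as bad. Because callers wrap errors with %w,
// errors.Is can still see syscall.EIO through the wrapping.
func (v *Volume) checkReadWriteError(err error) {
	if errors.Is(err, syscall.EIO) {
		v.lastIoError = err
	}
}
```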
* Add Trino blog operations test
* Update test/s3tables/catalog_trino/trino_blog_operations_test.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* feat: add table bucket path helpers and filer operations
- Add table object root and table location mapping directories
- Implement ensureDirectory, upsertFile, deleteEntryIfExists helpers
- Support table location bucket mapping for S3 access
* feat: manage table bucket object roots on creation/deletion
- Create .objects directory for table buckets on creation
- Clean up table object bucket paths on deletion
- Enable S3 operations on table bucket object roots
* feat: add table location mapping for Iceberg REST
- Track table location bucket mappings when tables are created/updated/deleted
- Enable location-based routing for S3 operations on table data
* feat: route S3 operations to table bucket object roots
- Route table-s3 bucket names to mapped table paths
- Route table buckets to object root directories
- Support table location bucket mapping lookup
* feat: emit table-s3 locations from Iceberg REST
- Generate unique table-s3 bucket names with UUID suffix
- Store table metadata under table bucket paths
- Return table-s3 locations for Trino compatibility
* fix: handle missing directories in S3 list operations
- Propagate ErrNotFound from ListEntries for non-existent directories
- Treat missing directories as empty results for list operations
- Fixes Trino non-empty location checks on table creation
* test: improve Trino CSV parsing for single-value results
- Sanitize Trino output to skip jline warnings
- Handle single-value CSV results without header rows
- Strip quotes from numeric values in tests
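A hedged sketch of that sanitizing logic (`parseTrinoCSV` is a hypothetical helper, not the actual test code; needs the `strings` import):

```go
func parseTrinoCSV(raw string) []string {
	var rows []string
	for _, line := range strings.Split(raw, "\n") {
		line = strings.TrimSpace(line)
		// skip blank lines and jline terminal warnings emitted by the Trino CLI
		if line == "" || strings.Contains(line, "jline") {
			continue
		}
		// single-value results arrive without a header row; strip the
		// surrounding quotes so numeric values compare cleanly
		rows = append(rows, strings.Trim(line, `"`))
	}
	return rows
}
```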
* refactor: use bucket path helpers throughout S3 API
- Replace direct bucket path operations with helper functions
- Leverage centralized table bucket routing logic
- Improve maintainability with consistent path resolution
* fix: add table bucket cache and improve filer error handling
- Cache table bucket lookups to reduce filer overhead on repeated checks
- Use filer_pb.CreateEntry and filer_pb.UpdateEntry helpers to check resp.Error
- Fix delete order in handler_bucket_get_list_delete: delete table object before directory
- Make location mapping errors best-effort: log and continue, don't fail API
- Update table location mappings to delete stale prior bucket mappings on update
- Add 1-second sleep before timestamp time travel query to ensure timestamps are in the past
- Fix CSV parsing: examine all lines instead of skipping the first; handle single-value rows

* fix: properly handle stale metadata location mapping cleanup
- Capture oldMetadataLocation before mutation in handleUpdateTable
- Update updateTableLocationMapping to accept both old and new locations
- Use passed-in oldMetadataLocation to detect location changes
- Delete stale mapping only when location actually changes
- Pass empty string for oldLocation in handleCreateTable (new tables have no prior mapping)
- Improve logging to show old -> new location transitions
* refactor: cleanup imports and cache design
- Remove unused 'sync' import from bucket_paths.go
- Use filer_pb.UpdateEntry helper in setExtendedAttribute and deleteExtendedAttribute for consistent error handling
- Add dedicated tableBucketCache map[string]bool to BucketRegistry instead of mixing concerns with metadataCache
- Improve cache separation: table buckets cache is now separate from bucket metadata cache
* fix: improve cache invalidation and add transient error handling
Cache invalidation (critical fix):
- Add tableLocationCache to BucketRegistry for location mapping lookups
- Clear tableBucketCache and tableLocationCache in RemoveBucketMetadata
- Prevents stale cache entries when buckets are deleted/recreated
Transient error handling:
- Only cache table bucket lookups when conclusive (found or ErrNotFound)
- Skip caching on transient errors (network, permission, etc.)
- Prevents marking real table buckets as non-table due to transient failures
Performance optimization:
- Cache tableLocationDir results to avoid repeated filer RPCs on hot paths
- tableLocationDir now checks cache before making expensive filer lookups
- Cache stores empty string for 'not found' to avoid redundant lookups
Code clarity:
- Add comment to deleteDirectory explaining DeleteEntry response lacks Error field
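The conclusive-result rule could look like the following sketch (lookup helper and `ErrNotFound` sentinel are stand-ins for the filer's; locking elided):

```go
func (r *BucketRegistry) isTableBucketCached(bucket string) (bool, error) {
	if v, ok := r.tableBucketCache[bucket]; ok {
		return v, nil
	}
	found, err := r.lookupTableBucket(bucket) // hypothetical filer lookup
	switch {
	case err == nil:
		r.tableBucketCache[bucket] = found // definitive answer: cache it
		return found, nil
	case errors.Is(err, ErrNotFound):
		r.tableBucketCache[bucket] = false // definitively not a table bucket
		return false, nil
	default:
		// transient error (network, permission): return it, cache nothing,
		// so a real table bucket is never marked non-table by a flaky lookup
		return false, err
	}
}
```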
* go fmt
* fix: mirror transient error handling in tableLocationDir and optimize bucketDir
Transient error handling:
- tableLocationDir now only caches definitive results
- Mirrors isTableBucket behavior to prevent treating transient errors as permanent misses
- Improves reliability on flaky systems or during recovery
Performance optimization:
- bucketDir avoids redundant isTableBucket call via bucketRoot
- Directly use s3a.option.BucketsPath for regular buckets
- Saves one cache lookup for every non-table bucket operation
* fix: revert bucketDir optimization to preserve bucketRoot logic
The optimization to directly use BucketsPath bypassed bucketRoot's logic
and caused issues with S3 list operations on delimiter+prefix cases.
Revert to using path.Join(s3a.bucketRoot(bucket), bucket) which properly
handles all bucket types and ensures consistent path resolution across
the codebase.
The slight performance cost of an extra cache lookup is worth the correctness
and consistency benefits.
* feat: move table buckets under /buckets
Add a table-bucket marker attribute, reuse bucket metadata cache for table bucket detection, and update list/validation/UI/test paths to treat table buckets as /buckets entries.
* Fix S3 Tables code review issues
- handler_bucket_create.go: Fix bucket existence check to properly validate
entryResp.Entry before setting the s3BucketExists flag (a nil Entry should not
indicate an existing bucket)
- bucket_paths.go: Add clarifying comment to bucketRoot() explaining unified
buckets root path for all bucket types
- file_browser_data.go: Optimize by extracting table bucket check early to
avoid redundant WithFilerClient call
* Fix list prefix delimiter handling
* Handle list errors conservatively
* Fix Trino FOR TIMESTAMP query - use past timestamp
Iceberg requires the timestamp to be strictly in the past.
Use current_timestamp - interval '1' second instead of current_timestamp.
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fix multipart etag
* address comments
* clean up
* clean up
* optimization
* address comments
* unquoted etag
* dedup
* upgrade
* clean
* etag
* return quoted tag
* quoted etag
* debug
* s3api: unify ETag retrieval and quoting across handlers
Refactor newListEntry to take *S3ApiServer and use getObjectETag,
and update setResponseHeaders to use the same logic. This ensures
consistent ETags are returned for both listing and direct access.
* s3api: implement ListObjects deduplication for versioned buckets
Handle duplicate entries between the main path and the .versions
directory by prioritizing the latest version when bucket versioning
is enabled.
* s3api: cleanup stale main file entries during versioned uploads
Add explicit deletion of pre-existing "main" files when creating new
versions in versioned buckets. This prevents stale entries from
appearing in bucket listings and ensures consistency.
* s3api: fix cleanup code placement in versioned uploads
Correct the placement of rm calls in completeMultipartUpload and
putVersionedObject to ensure stale main files are properly deleted
during versioned uploads.
* s3api: improve getObjectETag fallback for empty ExtETagKey
Ensure that when ExtETagKey exists but contains an empty value,
the function falls through to MD5/chunk-based calculation instead
of returning an empty string.
* s3api: fix test files for new newListEntry signature
Update test files to use the new newListEntry signature where the
first parameter is *S3ApiServer. Created mockS3ApiServer to properly
test owner display name lookup functionality.
* s3api: use filer.ETag for consistent Md5 handling in getEtagFromEntry
Change getEtagFromEntry fallback to use filer.ETag(entry) instead of
filer.ETagChunks to ensure legacy entries with Attributes.Md5 are
handled consistently with the rest of the codebase.
* s3api: optimize list logic and fix conditional header logging
- Hoist bucket versioning check out of per-entry callback to avoid
repeated getVersioningState calls
- Extract appendOrDedup helper function to eliminate duplicate
dedup/append logic across multiple code paths
- Change If-Match mismatch logging from glog.Errorf to glog.V(3).Infof
and remove DEBUG prefix for consistency
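A hypothetical shape for the shared `appendOrDedup` helper (field names illustrative; needs the `time` import): keep at most one entry per key, letting the newer version win.

```go
type ListEntry struct {
	Key          string
	LastModified time.Time
}

func appendOrDedup(entries []ListEntry, seen map[string]int, e ListEntry) []ListEntry {
	if i, ok := seen[e.Key]; ok {
		if e.LastModified.After(entries[i].LastModified) {
			entries[i] = e // the .versions copy supersedes the stale main entry
		}
		return entries
	}
	seen[e.Key] = len(entries)
	return append(entries, e)
}
```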
* s3api: fix test mock to properly initialize IAM accounts
Fixed nil pointer dereference in TestNewListEntryOwnerDisplayName by
directly initializing the IdentityAccessManagement.accounts map in the
test setup. This ensures newListEntry can properly look up account
display names without panicking.
* cleanup
* s3api: remove premature main file cleanup in versioned uploads
Removed incorrect cleanup logic that was deleting main files during
versioned uploads. This was causing test failures because it deleted
objects that should have been preserved as null versions when
versioning was first enabled. The deduplication logic in listing is
sufficient to handle duplicate entries without deleting files during
upload.
* s3api: add empty-value guard to getEtagFromEntry
Added the same empty-value guard used in getObjectETag to prevent
returning quoted empty strings. When ExtETagKey exists but is empty,
the function now falls through to filer.ETag calculation instead of
returning "".
* s3api: fix listing of directory key objects with matching prefix
Revert prefix handling logic to use strings.TrimPrefix instead of
checking HasPrefix with empty string result. This ensures that when a
directory key object exactly matches the prefix (e.g. prefix="dir/",
object="dir/"), it is correctly handled as a regular entry instead of
being skipped or incorrectly processed as a common prefix. Also fixed a
missing variable definition.
* s3api: refactor list inline dedup to use appendOrDedup helper
Refactored the inline deduplication logic in listFilerEntries to use the
shared appendOrDedup helper function. This ensures consistent behavior
and reduces code duplication.
* test: fix port allocation race in s3tables integration test
Updated startMiniCluster to find all required ports simultaneously using
findAvailablePorts instead of sequentially. This prevents race conditions
where the OS reallocates a port that was just released, causing multiple
services (e.g. Filer and Volume) to be assigned the same port and fail
to start.
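One way to implement the simultaneous allocation (hypothetical; the test helper may differ): hold every listener open until all ports are chosen, so the OS cannot hand the same port to two services.

```go
import "net"

func findAvailablePorts(n int) ([]int, error) {
	listeners := make([]net.Listener, 0, n)
	defer func() {
		// release all ports together, only after every port is picked
		for _, l := range listeners {
			l.Close()
		}
	}()
	ports := make([]int, 0, n)
	for i := 0; i < n; i++ {
		l, err := net.Listen("tcp", "127.0.0.1:0")
		if err != nil {
			return nil, err
		}
		listeners = append(listeners, l)
		ports = append(ports, l.Addr().(*net.TCPAddr).Port)
	}
	return ports, nil
}
```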
* test: add comprehensive CRUD tests for S3 Tables Catalog Trino integration
- Add TestNamespaceCRUD: Tests complete Create-Read-Update-Delete lifecycle for namespaces
- Add TestNamespaceListingPagination: Tests listing multiple namespaces with verification
- Add TestNamespaceErrorHandling: Tests error handling for edge cases (IF EXISTS, IF NOT EXISTS)
- Add TestSchemaIntegrationWithCatalog: Tests integration between Trino SQL and Iceberg REST Catalog
All tests pass successfully and use the Trino SQL interface for practical integration testing.
Tests properly skip when Docker is unavailable.
Use randomized namespace names to avoid conflicts in parallel execution.
The tests provide comprehensive coverage of namespace/schema CRUD operations which form the
foundation of the Iceberg catalog integration with Trino.
* test: Address code review feedback for S3 Tables Catalog Trino CRUD tests
- Extract common test setup into setupTrinoTest() helper function
- Replace all fmt.Printf calls with idiomatic t.Logf
- Change namespace deletion verification from t.Logf to t.Errorf for proper test failures
- Enhance TestNamespaceErrorHandling with persistence verification test
- Remove unnecessary fmt import
- Improve test documentation with clarifying comments
* test: Fix schema naming and remove verbose output logging
- Fix TestNamespaceListingPagination schema name generation: use fmt.Sprintf instead of string(rune())
- Remove verbose logging of SHOW SCHEMAS output to reduce noise in test logs
- Keep high-level operation logging while removing detailed result output
* test: add Trino Iceberg catalog integration test
- Create test/s3/catalog_trino/trino_catalog_test.go with TestTrinoIcebergCatalog
- Tests integration between Trino SQL engine and SeaweedFS Iceberg REST catalog
- Starts weed mini with all services and Trino in Docker container
- Validates Iceberg catalog schema creation and listing operations
- Uses native S3 filesystem support in Trino with path-style access
- Add workflow job to s3-tables-tests.yml for CI execution
* fix: preserve AWS environment credentials when replacing S3 configuration
When S3 configuration is loaded from filer/db, it replaces the identities list
and inadvertently removes AWS_ACCESS_KEY_ID credentials that were added from
environment variables. This caused auth to remain disabled even though valid
credentials were present.
Fix by preserving environment-based identities when replacing the configuration
and re-adding them after the replacement. This ensures environment credentials
persist across configuration reloads and properly enable authentication.
* fix: use correct ServerAddress format with gRPC port encoding
The admin server couldn't connect to master because the master address
was missing the gRPC port information. Use pb.NewServerAddress() which
properly encodes both HTTP and gRPC ports in the address string.
Changes:
- weed/command/mini.go: Use pb.NewServerAddress for master address in admin
- test/s3/policy/policy_test.go: Store and use gRPC ports for master/filer addresses
This fix applies to:
1. Admin server connection to master (mini.go)
2. Test shell commands that need master/filer addresses (policy_test.go)
* move
* move
* fix: always include gRPC port in server address encoding
The NewServerAddress() function was omitting the gRPC port from the address
string when it matched the port+10000 convention. However, gRPC port allocation
doesn't always follow this convention - when the calculated port is busy, an
alternative port is allocated.
This caused a bug where:
1. Master's gRPC port was allocated as 50661 (sequential, not port+10000)
2. Address was encoded as '192.168.1.66:50660' (gRPC port omitted)
3. Admin client called ToGrpcAddress() which assumed port+10000 offset
4. Admin tried to connect to 60660 but master was on 50661 → connection failed
Fix: Always include explicit gRPC port in address format (host:httpPort.grpcPort)
unless gRPC port is 0. This makes addresses unambiguous and works regardless of
the port allocation strategy used.
Impacts: All server-to-server gRPC connections now use properly formatted addresses.
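A sketch of the unambiguous encoding described above (the real pb.NewServerAddress carries more responsibilities):

```go
import "fmt"

// encodeServerAddress always spells out the gRPC port as host:httpPort.grpcPort,
// even when it happens to equal httpPort+10000, unless grpcPort is 0
func encodeServerAddress(host string, httpPort, grpcPort int) string {
	if grpcPort == 0 {
		return fmt.Sprintf("%s:%d", host, httpPort)
	}
	return fmt.Sprintf("%s:%d.%d", host, httpPort, grpcPort)
}
```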
* test: fix Iceberg REST API readiness check
The Iceberg REST API endpoints require authentication. When checked without
credentials, the API returns 403 Forbidden (not 401 Unauthorized). The
readiness check now accepts both auth error codes (401/403) as indicators
that the service is up and ready; it just needs credentials.
This fixes the 'Iceberg REST API did not become ready' test failure.
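A readiness probe along these lines (endpoint path per the /v1/config route added elsewhere in this PR; helper name hypothetical):

```go
import (
	"net/http"
	"time"
)

func icebergReady(baseURL string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(baseURL + "/v1/config")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	switch resp.StatusCode {
	case http.StatusOK, http.StatusUnauthorized, http.StatusForbidden:
		return true // reachable; 401/403 just means credentials are required
	}
	return false
}
```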
* Fix AWS SigV4 signature verification for base64-encoded payload hashes
AWS SigV4 canonical requests must use hex-encoded SHA256 hashes,
but the X-Amz-Content-Sha256 header may be transmitted as base64.
Changes:
- Added normalizePayloadHash() function to convert base64 to hex
- Call normalizePayloadHash() in extractV4AuthInfoFromHeader()
- Added encoding/base64 import
Fixes 403 Forbidden errors on POST requests to Iceberg REST API
when clients send base64-encoded content hashes in the header.
Impacted services: Iceberg REST API, S3Tables
* Fix AWS SigV4 signature verification for base64-encoded payload hashes
AWS SigV4 canonical requests must use hex-encoded SHA256 hashes,
but the X-Amz-Content-Sha256 header may be transmitted as base64.
Changes:
- Added normalizePayloadHash() function to convert base64 to hex
- Call normalizePayloadHash() in extractV4AuthInfoFromHeader()
- Added encoding/base64 import
- Removed unused fmt import
Fixes 403 Forbidden errors on POST requests to Iceberg REST API
when clients send base64-encoded content hashes in the header.
Impacted services: Iceberg REST API, S3Tables
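A plausible body for normalizePayloadHash (hedged reconstruction; the real function may handle more cases):

```go
import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/hex"
)

func normalizePayloadHash(h string) string {
	// a hex-encoded SHA-256 digest is exactly 64 hex characters; pass through
	if len(h) == sha256.Size*2 {
		if _, err := hex.DecodeString(h); err == nil {
			return h
		}
	}
	// base64-encoded 32-byte digest: convert to the hex form SigV4 expects
	if raw, err := base64.StdEncoding.DecodeString(h); err == nil && len(raw) == sha256.Size {
		return hex.EncodeToString(raw)
	}
	return h // unknown form: leave it for the signature check to reject
}
```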
* pass sigv4
* s3api: fix identity preservation and logging levels
- Ensure environment-based identities are preserved during config replacement
- Update accessKeyIdent and nameToIdentity maps correctly
- Downgrade informational logs to V(2) to reduce noise
* test: fix trino integration test and s3 policy test
- Pin Trino image version to 479
- Fix port binding to 0.0.0.0 for Docker connectivity
- Fix S3 policy test hang by correctly assigning MiniClusterCtx
- Improve port finding robustness in policy tests
* ci: pre-pull trino image to avoid timeouts
- Pull trinodb/trino:479 after Docker setup
- Ensure image is ready before integration tests start
* iceberg: remove unused checkAuth and improve logging
- Remove unused checkAuth method
- Downgrade informational logs to V(2)
- Ensure loggingMiddleware uses a status writer for accurate reported codes
- Narrow catch-all route to avoid interfering with other subsystems
* iceberg: fix build failure by removing unused s3api import
* Update iceberg.go
* use warehouse
* Update trino_catalog_test.go
* Support Linux file/dir ACL in weed mount #8229
* Update weed/command/mount_std.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Add a version token on `GetState()`/`SetState()` RPCs for volume server states.
* Make state version a property of `VolumeServerState` instead of an in-memory counter.
Also extend state atomicity to reads, instead of just writes.
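Usage might look like the following compare-and-set flow (a sketch assuming `SetState` rejects stale version tokens; field names illustrative):

```go
// read the current state together with its version token
state, version, err := client.GetState(ctx)
if err != nil {
	return err
}
state.MaintenanceMode = true // example mutation
// SetState succeeds only while `version` still matches the stored state;
// on a mismatch, re-read and retry
if err := client.SetState(ctx, state, version); err != nil {
	return err
}
```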
* add minCacheAge flag to remote.uncache command #8221
* address code review feedback: add nil check and improve test isolation
* address code review feedback: use consistent timestamp in FileFilter
Implement index (fast) scrubbing for regular/EC volumes via `ScrubVolume()`/`ScrubEcVolume()`.
Also rearrange existing index test files for reuse across unit tests for different modules.
* fix concurrent map access in EC shards info #8219
* refactor: simplify Disk.ToDiskInfo to use ecShards snapshot and avoid redundant locking
* refactor: improve GetEcShards with pre-allocation and defer
* fix: honor SSE-C chunk offsets in decryption for large chunked uploads
Fixes issue #8215 where SSE-C decryption for large objects could corrupt
data by ignoring per-chunk PartOffset values.
Changes:
- Add TestSSECLargeObjectChunkReassembly unit test to verify correct
decryption of 19MB object split into 8MB chunks using PartOffset
- Update decryptSSECChunkView and createMultipartSSECDecryptedReaderDirect
to extract PartOffset from SSE-C metadata and pass to
CreateSSECDecryptedReaderWithOffset for offset-aware decryption
- Fix createCTRStreamWithOffset to use calculateIVWithOffset for proper
block-aligned counter advancement, matching SSE-KMS/S3 behavior
- Update comments to clarify SSE-C IV handling uses per-chunk offsets
(unlike base IV approach used by KMS/S3)
All tests pass: go test ./weed/s3api ✓
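`calculateIVWithOffset` is named in this commit; an illustrative reconstruction of the block-aligned counter advancement it performs:

```go
import "crypto/aes"

// calculateIVWithOffset advances the big-endian AES-CTR counter in iv by
// offset/16 blocks; callers still discard the first offset%aes.BlockSize
// bytes of keystream output to land mid-block
func calculateIVWithOffset(iv []byte, offset int64) []byte {
	out := make([]byte, len(iv))
	copy(out, iv)
	carry := uint64(offset) / aes.BlockSize // whole blocks to skip
	for i := len(out) - 1; i >= 0 && carry > 0; i-- {
		sum := uint64(out[i]) + (carry & 0xff)
		out[i] = byte(sum)
		carry = (carry >> 8) + (sum >> 8) // propagate the addition carry
	}
	return out
}
```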
* fix: close chunkReader on error paths in createMultipartSSECDecryptedReader
Address resource leak issue reported in PR #8216: ensure chunkReader is
properly closed before returning on all error paths, including:
- DeserializeSSECMetadata failures
- IV decoding errors
- Invalid PartOffset values
- SSE-C reader creation failures
- Missing per-chunk metadata
This prevents leaking network connections and file handles during
SSE-C multipart decryption error scenarios.
* docs: clarify SSE-C IV handling in decryptSSECChunkView comment
Replace misleading warning 'Do NOT call calculateIVWithOffset' with
accurate explanation that:
- CreateSSECDecryptedReaderWithOffset internally uses calculateIVWithOffset
to advance the CTR counter to reach PartOffset
- calculateIVWithOffset is applied only to the per-part IV, NOT to derive
a global base IV for all parts
- This differs fundamentally from SSE-KMS/SSE-S3 which use base IV +
calculateIVWithOffset(ChunkOffset)
This clarifies the IV advancement mechanism while contrasting it with
the base IV approach used by other encryption schemes.
* admin: fix capacity leak in maintenance system by preserving Task IDs
Preserve the original TaskID generated during detection and sync task
states (Assign/Complete/Retry) with ActiveTopology. This ensures that
capacity reserved during task assignment is properly released when a
task completes or fails, preventing 'need 9, have 0' capacity exhaustion.
Fixes https://github.com/seaweedfs/seaweedfs/issues/8202
* Update weed/admin/maintenance/maintenance_queue.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/admin/maintenance/maintenance_queue.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* test: rename ActiveTopologySync to TaskIDPreservation
Rename the test case to more accurately reflect its scope, as suggested
by the code review bot.
* Add TestMaintenanceQueue_ActiveTopologySync to verify task state synchronization and capacity management
* Implement task assignment rollback and add verification test
* Enhance ActiveTopology.CompleteTask to support pending tasks
* Populate storage impact in MaintenanceIntegration.SyncTask
* Release capacity in RemoveStaleWorkers when worker becomes unavailable
* Release capacity in MaintenanceManager.CancelTask when pending task is cancelled
* Sync reloaded tasks with ActiveTopology in LoadTasksFromPersistence
* Add verification tests for consistent capacity management lifecycle
* Add TestMaintenanceQueue_RetryCapacitySync to verify capacity tracking during retries
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* s3: allow single Statement object in policy document
Fixes #8201
* s3: add unit test for single Statement object in policy
* s3: improve error message for malformed PolicyDocument.Statement
* s3: simplify error message for malformed PolicyDocument.Statement
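The object-or-array handling typically hangs off a custom unmarshaler; a sketch (the `Statement` shape here is illustrative):

```go
import (
	"encoding/json"
	"fmt"
)

type Statement struct {
	Effect   string      `json:"Effect"`
	Action   interface{} `json:"Action"`
	Resource interface{} `json:"Resource"`
}

type StatementList []Statement

func (s *StatementList) UnmarshalJSON(data []byte) error {
	// try the usual array form first
	var many []Statement
	if err := json.Unmarshal(data, &many); err == nil {
		*s = many
		return nil
	}
	// fall back to a single Statement object
	var one Statement
	if err := json.Unmarshal(data, &one); err != nil {
		return fmt.Errorf("policy document Statement must be an object or an array of objects: %w", err)
	}
	*s = StatementList{one}
	return nil
}
```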
* s3api: fix ListObjectVersions inconsistency with delimiters (fixes #8206)
Prioritize handling of .versions and .uploads directories before
delimiter processing in collectVersions. This ensures .versions
directories are processed as version containers instead of being
incorrectly rolled up into CommonPrefixes when a delimiter is used.
* s3api: refactor processDirectory to remove redundant special directory checks
These checks are now handled in the main collectVersions loop.
* helm: add Iceberg REST catalog support to S3 service
* helm: add Iceberg REST catalog support to S3 service
* add ingress for iceberg catalog endpoint
* helm: conditionally render ingressClassName in s3-iceberg-ingress.yaml
* helm: refactor s3-iceberg-ingress.yaml to use named template for paths
* helm: remove unused $serviceName variable in s3-iceberg-ingress.yaml
---------
Co-authored-by: yalin.sahin <yalin.sahin@tradition.ch>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
command: fix s3 panic in filer command due to uninitialized options
This fixes a nil pointer dereference panic when starting the S3 server
via the `weed filer` command by correctly initializing `iamReadOnly`
and `portIceberg` flags.
Relates to #8200
* fix float stepping
* do not auto refresh
* only log when status is non-200
* fix maintenance task sorting and cleanup redundant handler logic
* Refactor log retrieval to persist to disk and fix slowness
- Move log retrieval to disk-based persistence in GetMaintenanceTaskDetail
- Implement background log fetching on task completion in worker_grpc_server.go
- Implement async background refresh for in-progress tasks
- Completely remove blocking gRPC calls from the UI path to fix 10s timeouts
- Cleanup debug logs and performance profiling code
* Ensure consistent deterministic sorting in config_persistence cleanup
* Replace magic numbers with constants and remove debug logs
- Added descriptive constants for truncation limits and timeouts in admin_server.go and worker_grpc_server.go
- Replaced magic numbers with these constants throughout the codebase
- Verified removal of stdout debug printing
- Ensured consistent truncation logic during log persistence
* Address code review feedback on history truncation and logging logic
- Fix AssignmentHistory double-serialization by copying task in GetMaintenanceTaskDetail
- Fix handleTaskCompletion logging logic (mutually exclusive success/failure logs)
- Remove unused Timeout field from LogRequestContext and sync select timeouts with constants
- Ensure AssignmentHistory is only provided in the top-level field for better JSON structure
* Implement goroutine leak protection and request deduplication
- Add request deduplication in RequestTaskLogs to prevent multiple concurrent fetches for the same task
- Implement safe cleanup in timeout handlers to avoid race conditions in pendingLogRequests map
- Add a 10s cooldown for background log refreshes in GetMaintenanceTaskDetail to prevent spamming
- Ensure all persistent log-fetching goroutines are bounded and efficiently managed
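A sketch of the per-task deduplication (names like `pendingLogRequests` follow the commit text; the struct and locking details are assumptions):

```go
func (s *AdminServer) requestTaskLogsOnce(taskID string) {
	s.mu.Lock()
	if _, inFlight := s.pendingLogRequests[taskID]; inFlight {
		s.mu.Unlock()
		return // a fetch for this task is already running
	}
	s.pendingLogRequests[taskID] = time.Now()
	s.mu.Unlock()

	go func() {
		// safe cleanup runs even if the fetch fails or times out,
		// so the map never leaks stale in-flight markers
		defer func() {
			s.mu.Lock()
			delete(s.pendingLogRequests, taskID)
			s.mu.Unlock()
		}()
		s.fetchAndPersistLogs(taskID) // hypothetical worker gRPC + disk write
	}()
}
```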
* Fix potential nil pointer panics in maintenance handlers
- Add nil checks for adminServer in ShowTaskDetail, ShowMaintenanceWorkers, and UpdateTaskConfig
- Update getMaintenanceQueueData to return a descriptive error instead of nil when adminServer is uninitialized
- Ensure internal helper methods consistently check for adminServer initialization before use
* Strictly enforce disk-only log reading
- Remove background log fetching from GetMaintenanceTaskDetail to prevent timeouts and network calls during page view
- Remove unused lastLogFetch tracking fields to clean up dead code
- Ensure logs are only updated upon task completion via handleTaskCompletion
* Refactor GetWorkerLogs to read from disk
- Update /api/maintenance/workers/:id/logs endpoint to use configPersistence.LoadTaskExecutionLogs
- Remove synchronous gRPC call RequestTaskLogs to prevent timeouts and bad gateway errors
- Ensure consistent log retrieval behavior across the application (disk-only)
* Fix timestamp parsing in log viewer
- Update task_detail.templ JS to handle both ISO 8601 strings and Unix timestamps
- Fix "Invalid time value" error when displaying logs fetched from disk
- Regenerate templates
* master: fallback to HDD if SSD volumes are full in Assign
* worker: improve EC detection logging and fix skip counters
* worker: add Sync method to TaskLogger interface
* worker: implement Sync and ensure logs are flushed before task completion
* admin: improve task log retrieval with retries and better timeouts
* admin: robust timestamp parsing in task detail view
* Fix: Initialize filer CredentialManager with filer address
* The fix involves checking for directory existence before creation.
* adjust error message
* Fix: Implement FilerAddressSetter in PropagatingCredentialStore
* Refactor: Reorder credential manager initialization in filer server
* refactor
* full integration with iceberg-go
* Table Commit Operations (handleUpdateTable)
* s3tables: fix Iceberg v2 compliance and namespace properties
This commit ensures SeaweedFS Iceberg REST Catalog is compliant with
Iceberg Format Version 2 by:
- Using iceberg-go's table.NewMetadataWithUUID for strict v2 compliance.
- Explicitly initializing namespace properties to empty maps.
- Removing omitempty from required Iceberg response fields.
- Fixing CommitTableRequest unmarshaling using table.Requirements and table.Updates.
* s3tables: automate Iceberg integration tests
- Added Makefile for local test execution and cluster management.
- Added docker-compose for PyIceberg compatibility kit.
- Added Go integration test harness for PyIceberg.
- Updated GitHub CI to run Iceberg catalog tests automatically.
* s3tables: update PyIceberg test suite for compatibility
- Updated test_rest_catalog.py to use latest PyIceberg transaction APIs.
- Updated Dockerfile to include pyarrow and pandas dependencies.
- Improved namespace and table handling in integration tests.
* s3tables: address review feedback on Iceberg Catalog
- Implemented robust metadata version parsing and incrementing.
- Ensured table metadata changes are persisted during commit (handleUpdateTable).
- Standardized namespace property initialization for consistency.
- Fixed unused variable and incorrect struct field build errors.
* s3tables: finalize Iceberg REST Catalog and optimize tests
- Implemented robust metadata versioning and persistence.
- Standardized namespace property initialization.
- Optimized integration tests using pre-built Docker image.
- Added strict property persistence validation to test suite.
- Fixed build errors from previous partial updates.
* Address PR review: fix Table UUID stability, implement S3Tables UpdateTable, and support full metadata persistence individually
* fix: Iceberg catalog stable UUIDs, metadata persistence, and file writing
- Ensure table UUIDs are stable (do not regenerate on load).
- Persist full table metadata (Iceberg JSON) in s3tables extended attributes.
- Add `MetadataVersion` to explicitly track version numbers, replacing regex parsing.
- Implement `saveMetadataFile` to persist metadata JSON files to the Filer on commit.
- Update `CreateTable` and `UpdateTable` handlers to use the new logic.
* test: bind weed mini to 0.0.0.0 in integration tests to fix Docker connectivity
* Iceberg: fix metadata handling in REST catalog
- Add nil guard in createTable
- Fix updateTable to correctly load existing metadata from storage
- Ensure full metadata persistence on updates
- Populate loadTable result with parsed metadata
* S3Tables: add auth checks and fix response fields in UpdateTable
- Add CheckPermissionWithContext to UpdateTable handler
- Include TableARN and MetadataLocation in UpdateTable response
- Use ErrCodeConflict (409) for version token mismatches
* Tests: improve Iceberg catalog test infrastructure and cleanup
- Makefile: use PID file for precise process killing
- test_rest_catalog.py: remove unused variables and fix f-strings
* Iceberg: fix variable shadowing in UpdateTable
- Rename inner loop variable `req` to `requirement` to avoid shadowing outer request variable
* S3Tables: simplify MetadataVersion initialization
- Use `max(req.MetadataVersion, 1)` instead of an anonymous function
* Tests: remove unicode characters from S3 tables integration test logs
- Remove unicode checkmarks from test output for cleaner logs
* Iceberg: improve metadata persistence robustness
- Fix MetadataLocation in LoadTableResult to fallback to generated location
- Improve saveMetadataFile to ensure directory hierarchy existence and robust error handling
* helm: add Iceberg REST catalog support to S3 service
* helm: add Iceberg REST catalog support to S3 service
---------
Co-authored-by: yalin.sahin <yalin.sahin@tradition.ch>
* s3: enforce authentication and JSON error format for Iceberg REST Catalog
* s3/iceberg: align error exception types with OpenAPI spec examples
* s3api: refactor AuthenticateRequest to return identity object
* s3/iceberg: propagate full identity object to request context
* s3/iceberg: differentiate NotAuthorizedException and ForbiddenException
* s3/iceberg: reject requests if authenticator is nil to prevent auth bypass
* s3/iceberg: refactor Auth middleware to build context incrementally and use switch for error mapping
* s3api: update misleading comment for authRequestWithAuthType
* s3api: return ErrAccessDenied if IAM is not configured to prevent auth bypass
* s3/iceberg: optimize context update in Auth middleware
* s3api: export CanDo for external authorization use
* s3/iceberg: enforce identity-based authorization in all API handlers
* s3api: fix compilation errors by updating internal CanDo references
* s3/iceberg: robust identity validation and consistent action usage in handlers
* s3api: complete CanDo rename across tests and policy engine integration
* s3api: fix integration tests by allowing admin access when auth is disabled and explicit gRPC ports
* duckdb
* create test bucket
* feat: Add Iceberg REST Catalog server
Implement Iceberg REST Catalog API on a separate port (default 8181)
that exposes S3 Tables metadata through the Apache Iceberg REST protocol.
- Add new weed/s3api/iceberg package with REST handlers
- Implement /v1/config endpoint returning catalog configuration
- Implement namespace endpoints (list/create/get/head/delete)
- Implement table endpoints (list/create/load/head/delete/update)
- Add -port.iceberg flag to S3 standalone server (s3.go)
- Add -s3.port.iceberg flag to combined server mode (server.go)
- Add -s3.port.iceberg flag to mini cluster mode (mini.go)
- Support prefix-based routing for multiple catalogs
The Iceberg REST server reuses S3 Tables metadata storage under
/table-buckets and enables DuckDB, Spark, and other Iceberg clients
to connect to SeaweedFS as a catalog.
* feat: Add Iceberg Catalog pages to admin UI
Add admin UI pages to browse Iceberg catalogs, namespaces, and tables.
- Add Iceberg Catalog menu item under Object Store navigation
- Create iceberg_catalog.templ showing catalog overview with REST info
- Create iceberg_namespaces.templ listing namespaces in a catalog
- Create iceberg_tables.templ listing tables in a namespace
- Add handlers and routes in admin_handlers.go
- Add Iceberg data provider methods in s3tables_management.go
- Add Iceberg data types in types.go
The Iceberg Catalog pages provide visibility into the same S3 Tables
data through an Iceberg-centric lens, including REST endpoint examples
for DuckDB and PyIceberg.
* test: Add Iceberg catalog integration tests and reorg s3tables tests
- Reorganize existing s3tables tests to test/s3tables/table-buckets/
- Add new test/s3tables/catalog/ for Iceberg REST catalog tests
- Add TestIcebergConfig to verify /v1/config endpoint
- Add TestIcebergNamespaces to verify namespace listing
- Add TestDuckDBIntegration for DuckDB connectivity (requires Docker)
- Update CI workflow to use new test paths
* fix: Generate proper random UUIDs for Iceberg tables
Address code review feedback:
- Replace placeholder UUID with crypto/rand-based UUID v4 generation
- Add detailed TODO comments for handleUpdateTable stub explaining
the required atomic metadata swap implementation
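Typical crypto/rand-based UUID v4 generation, for reference (illustrative; the deterministic fallback added in a later commit is omitted):

```go
import (
	"crypto/rand"
	"fmt"
)

func newUUIDv4() (string, error) {
	var b [16]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	b[6] = (b[6] & 0x0f) | 0x40 // set version 4
	b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16]), nil
}
```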
* fix: Serve Iceberg on localhost listener when binding to different interface
Address code review feedback: properly serve the localhost listener
when the Iceberg server is bound to a non-localhost interface.
* ci: Add Iceberg catalog integration tests to CI
Add new job to run Iceberg catalog tests in CI, along with:
- Iceberg package build verification
- Iceberg unit tests
- Iceberg go vet checks
- Iceberg format checks
* fix: Address code review feedback for Iceberg implementation
- fix: Replace hardcoded account ID with s3_constants.AccountAdminId in buildTableBucketARN()
- fix: Improve UUID generation error handling with deterministic fallback (timestamp + PID + counter)
- fix: Update handleUpdateTable to return HTTP 501 Not Implemented instead of fake success
- fix: Better error handling in handleNamespaceExists to distinguish 404 from 500 errors
- fix: Use relative URL in template instead of hardcoded localhost:8181
- fix: Add HTTP timeout to test's waitForService function to avoid hangs
- fix: Use dynamic ephemeral ports in integration tests to avoid flaky parallel failures
- fix: Add Iceberg port to final port configuration logging in mini.go
* fix: Address critical issues in Iceberg implementation
- fix: Cache table UUIDs to ensure persistence across LoadTable calls
The UUID now remains stable for the lifetime of the server session.
TODO: For production, UUIDs should be persisted in S3 Tables metadata.
- fix: Remove redundant URL-encoded namespace parsing
mux router already decodes %1F to \x1F before passing to handlers.
Redundant ReplaceAll call could cause bugs with literal %1F in namespace.
* fix: Improve test robustness and reduce code duplication
- fix: Make DuckDB test more robust by failing on unexpected errors
Instead of silently logging errors, now explicitly check for expected
conditions (extension not available) and skip the test appropriately.
- fix: Extract username helper method to reduce duplication
Created getUsername() helper in AdminHandlers to avoid duplicating
the username retrieval logic across Iceberg page handlers.
* fix: Add mutex protection to table UUID cache
Protects concurrent access to the tableUUIDs map with sync.RWMutex.
Uses read-lock for fast path when UUID already cached, and write-lock
for generating new UUIDs. Includes double-check pattern to handle race
condition between read-unlock and write-lock.
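The read-lock fast path plus double-check pattern described above, in sketch form (struct and helper names illustrative; needs the `sync` import):

```go
type tableUUIDCache struct {
	mu  sync.RWMutex
	ids map[string]string
}

func (c *tableUUIDCache) getOrCreate(table string) string {
	// fast path: read lock only when the UUID is already cached
	c.mu.RLock()
	id, ok := c.ids[table]
	c.mu.RUnlock()
	if ok {
		return id
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	// double-check: another goroutine may have won between RUnlock and Lock
	if id, ok := c.ids[table]; ok {
		return id
	}
	id = newUUID() // hypothetical generator
	c.ids[table] = id
	return id
}
```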
* style: fix go fmt errors
* feat(iceberg): persist table UUID in S3 Tables metadata
* feat(admin): configure Iceberg port in Admin UI and commands
* refactor: address review comments (flags, tests, handlers)
- command/mini: fix tracking of explicit s3.port.iceberg flag
- command/admin: add explicit -iceberg.port flag
- admin/handlers: reuse getUsername helper
- tests: use 127.0.0.1 for ephemeral ports and os.Stat for file size check
* test: check error from FileStat in verify_gc_empty_test
* Implement RPC skeleton for regular/EC volumes scrubbing.
See https://github.com/seaweedfs/seaweedfs/issues/8018 for details.
* Minor proto improvements for `ScrubVolume()`, `ScrubEcVolume()`:
- Add fields for scrubbing details in `ScrubVolumeResponse` and `ScrubEcVolumeResponse`,
instead of reporting these through RPC errors.
- Return a list of broken shards when scrubbing EC volumes, via `EcShardInfo`.
* Bootstrap persistent state for volume servers.
This PR implements logic to load/save persistent state information for storage
associated with volume servers, and to report state changes back to masters
via heartbeat messages.
More work ensues!
See https://github.com/seaweedfs/seaweedfs/issues/7977 for details.
* Add volume server RPCs to read and update state flags.
* Block RPC operations writing to volume servers when maintenance mode is on.
* fix: skip exhausted blocks before creating an interval
* refactor: optimize interval creation and fix logic duplication
* docs: add docstring for LocateData
* refactor: extract moveToNextBlock helper to deduplicate logic
* fix: use int64 for block index comparison to prevent overflow
* test: add unit test for LocateData boundary crossing (issue #8179)
* fix: skip exhausted blocks to prevent negative interval size and panics (issue #8179)
* refactor: apply review suggestions for test maintainability and code style
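A hedged fragment of the core fix in LocateData (block layout fields and variable names are illustrative, not the actual erasure_coding code):

```go
// skip blocks with no bytes remaining at the current offset; an exhausted
// block would otherwise produce a zero- or negative-size interval and panic
for blockIndex < int64(len(blocks)) && offset >= blocks[blockIndex].end {
	blockIndex++
}
if blockIndex == int64(len(blocks)) {
	return intervals // nothing left to locate
}
size := remaining
if avail := blocks[blockIndex].end - offset; avail < size {
	size = avail // clamp the interval to the bytes left in this block
}
```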
* s3tables: add Iceberg file layout validation for table buckets
This PR adds file layout validation for table buckets to enforce Apache
Iceberg table structure. Files uploaded to table buckets must conform
to the expected Iceberg layout:
- metadata/ directory: contains metadata files (*.json, *.avro)
- v*.metadata.json (table metadata)
- snap-*.avro (snapshot manifests)
- *-m*.avro (manifest files)
- version-hint.text
- data/ directory: contains data files (*.parquet, *.orc, *.avro)
- Supports partition paths (e.g., year=2024/month=01/)
- Supports bucket subdirectories
The validator exports functions for use by the S3 API:
- IsTableBucketPath: checks if a path is under /table-buckets/
- GetTableInfoFromPath: extracts bucket/namespace/table from path
- ValidateTableBucketUpload: validates file layout for table bucket uploads
- ValidateTableBucketUploadWithClient: validates with filer client access
Invalid uploads receive InvalidIcebergLayout error response.
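Illustrative regular expressions for that layout (assumptions for the reader's benefit, not the validator's exact patterns):

```go
import "regexp"

var (
	tableMetadataRe = regexp.MustCompile(`^v\d+\.metadata\.json$`) // v1.metadata.json
	snapshotRe      = regexp.MustCompile(`^snap-[^/]+\.avro$`)     // snapshot manifests
	manifestRe      = regexp.MustCompile(`^[^/]+-m\d+\.avro$`)     // manifest files
	dataFileRe      = regexp.MustCompile(`\.(parquet|orc|avro)$`)  // data files
	partitionDirRe  = regexp.MustCompile(`^[^=/]+=[^/]*$`)         // e.g. year=2024
)
```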
* Address review comments: regex performance, error handling, stricter patterns
* Fix validateMetadataFile and validateDataFile to handle subdirectories and directory creation
* Fix error handling, metadata validation, reduce code duplication
* Fix empty remainingPath handling for directory paths
* Refactor: unify validateMetadataFile and validateDataFile
* Refactor: extract UUID pattern constant
* fix: allow Iceberg partition and directory paths without trailing slashes
Modified validateFile to correctly handle directory paths that do not end with a trailing slash.
This ensures that paths like 'data/year=2024' are validated as directories if they match
partition or subdirectory patterns, rather than being incorrectly rejected as invalid files.
Added comprehensive test cases for various directory and partition path combinations.
* refactor: use standard path package and idiomatic returns
Simplified directory and filename extraction in validateFile by using the standard
path package (aliased as pathpkg). This improves readability and avoids manual string
manipulation. Also updated GetTableInfoFromPath to use naked returns for named
return values, aligning with Go conventions for short functions.
* feat: enforce strict Iceberg top-level directories and metadata restrictions
Implemented strict validation for Iceberg layout:
- Bare top-level keys like 'metadata' and 'data' are now rejected; they must have
a trailing slash or a subpath.
- Subdirectories under 'metadata/' are now prohibited to enforce the flat structure
required by Iceberg.
- Updated the test suite with negative test cases and ensured proper formatting.
* feat: allow table root directory markers in ValidateTableBucketUpload
Modified ValidateTableBucketUpload to short-circuit and return nil when the
relative path within a table is empty. This occurs when a trailing slash is
used on the table directory (e.g., /table-buckets/mybucket/myns/mytable/).
Added a test case 'table dir with slash' to verify this behavior.
* test: add regression cases for metadata subdirs and table markers
Enforced a strictly flat structure for the metadata directory by removing the
"directory without trailing slash" fallback in validateFile for metadata.
Added regression test cases:
- metadata/nested (must fail)
- /table-buckets/.../mytable/ (must pass)
Verified all tests pass.
* feat: reject double slashes in Iceberg table paths
Modified validateDirectoryPath to return an error when encountering empty path
segments, effectively rejecting double slashes like 'data//file.parquet'.
Updated validateFile to use manual path splitting instead of the 'path' package
for intermediate directories to ensure redundant slashes are not auto-cleaned
before validation. Added regression tests for various double slash scenarios.
* refactor: separate isMetadata logic in validateDirectoryPath
Following reviewer feedback, refactored validateDirectoryPath to explicitly
separate the handling of metadata and data paths. This improves readability
and clarifies the function's intent while maintaining the strict validation rules
and double-slash rejection previously implemented.
* feat: validate bucket, namespace, and table path segments
Updated ValidateTableBucketUpload to ensure that bucket, namespace, and table
segments in the path are non-empty. This prevents invalid paths like
'/table-buckets//myns/mytable/...' from being accepted during upload.
Added regression tests for various empty segment scenarios.
* Update weed/s3api/s3tables/iceberg_layout.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* feat: block double-slash bypass in table relative paths
Added a guard in ValidateTableBucketUpload to reject tableRelativePath if it
starts with a '/' or contains '//'. This ensures that paths like
'/table-buckets/b/ns/t//data/file.parquet' are properly rejected and cannot
bypass the layout validation. Added regression tests to verify.
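The guard itself is small; a hedged equivalent:

```go
// reject table-relative paths with empty segments, e.g. "//data/file.parquet"
if strings.HasPrefix(tableRelativePath, "/") || strings.Contains(tableRelativePath, "//") {
	return fmt.Errorf("invalid table path %q: empty path segment", tableRelativePath)
}
```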
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>