
feat: Add S3 Tables support for Iceberg tabular data (#8147)

* s3tables: extract utility and filer operations to separate modules

- Move ARN parsing, path helpers, and metadata structures to utils.go
- Extract all extended attribute and filer operations to filer_ops.go
- Reduces code duplication and improves modularity
- Improves code organization and maintainability

* s3tables: split table bucket operations into focused modules

- Create bucket_create.go for CreateTableBucket operation
- Create bucket_get_list_delete.go for Get, List, Delete operations
- Related operations grouped for better maintainability
- Each file has a single, clear responsibility
- Improves code clarity and makes it easier to test

* s3tables: simplify handler by removing duplicate utilities

- Reduce handler.go from 370 to 195 lines (47% reduction)
- Remove duplicate ARN parsing and path helper functions
- Remove filer operation methods moved to filer_ops.go
- Remove metadata structure definitions moved to utils.go
- Keep handler focused on request routing and response formatting
- Maintains all functionality with improved code organization

* s3tables: complete s3tables package implementation

- namespace.go: namespace CRUD operations (310 lines)
- table.go: table CRUD operations with Iceberg schema support (409 lines)
- policy.go: resource policies and tagging operations (419 lines)
- types.go: request/response types and error definitions (290 lines)
- All handlers updated to use standalone utilities from utils.go
- All files follow single responsibility principle

* s3api: add S3 Tables integration layer

- Create s3api_tables.go to integrate S3 Tables with S3 API server
- Implement S3 Tables route matcher for X-Amz-Target header
- Register S3 Tables routes with API router
- Provide gRPC filer client interface for S3 Tables handlers
- All S3 Tables operations accessible via S3 API endpoint

* s3api: register S3 Tables routes in API server

- Add S3 Tables route registration in s3api_server.go registerRouter method
- Enable S3 Tables API operations to be routed through S3 API server
- Routes handled by s3api_tables.go integration layer
- Minimal changes to existing S3 API structure

* test: add S3 Tables test infrastructure

- Create setup.go with TestCluster and S3TablesClient definitions
- Create client.go with HTTP client methods for all operations
- Test utilities and client methods organized for reusability
- Foundation for S3 Tables integration tests

* test: add S3 Tables integration tests

- Comprehensive integration tests for all 23 S3 Tables operations
- Test cluster setup based on existing S3 integration tests
- Tests cover:
  * Table bucket lifecycle (create, get, list, delete)
  * Namespace operations
  * Table CRUD with Iceberg schema
  * Table bucket and table policies
  * Resource tagging operations
- Ready for CI/CD pipeline integration

* ci: add S3 Tables integration tests to GitHub Actions

- Create new workflow for S3 Tables integration testing
- Add build verification job for s3tables package and s3api integration
- Add format checking for S3 Tables code
- Add go vet checks for code quality
- Workflow runs on all pull requests
- Includes test output logging and artifact upload on failure

* s3tables: add handler_ prefix to operation handler files

- Rename bucket_create.go → handler_bucket_create.go
- Rename bucket_get_list_delete.go → handler_bucket_get_list_delete.go
- Rename namespace.go → handler_namespace.go
- Rename table.go → handler_table.go
- Rename policy.go → handler_policy.go

Improves file organization by clearly identifying handler implementations.
No code changes, refactoring only.

* s3tables test: refactor to eliminate duplicate definitions

- Move all client methods to client.go
- Remove duplicate types/constants from s3tables_integration_test.go
- Keep setup.go for test infrastructure
- Keep integration test logic in s3tables_integration_test.go
- Clean up unused imports
- Test compiles successfully

* Delete client_methods.go

* s3tables: add bucket name validation and fix error handling

- Add isValidBucketName validation function for [a-z0-9_-] characters
- Validate bucket name characters match ARN parsing regex
- Fix error handling in WithFilerClient closure - properly check for lookup errors
- Add error handling for json.Marshal calls (metadata and tags)
- Improve error messages and logging
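
As a rough illustration, a bucket-name check like the one described might look like this in Go. This is a hedged sketch: the function name `isValidBucketName` comes from the commit, but the exact regex and length limits here are assumptions, not the actual SeaweedFS code.

```go
package main

import (
	"fmt"
	"regexp"
)

// bucketNameRe sketches the rule described above: lowercase letters, digits,
// and hyphens, 3-63 characters, starting and ending with a letter or digit.
// (Illustrative limits; the real implementation may differ.)
var bucketNameRe = regexp.MustCompile(`^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$`)

func isValidBucketName(name string) bool {
	return bucketNameRe.MatchString(name)
}

func main() {
	for _, n := range []string{"data-lake", "-bad", "UPPER", "ok9"} {
		fmt.Printf("%q valid=%v\n", n, isValidBucketName(n))
	}
}
```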

* s3tables: add error handling for json.Marshal calls

- Add error handling in handler_namespace.go (metadata marshaling)
- Add error handling in handler_table.go (metadata and tags marshaling)
- Add error handling in handler_policy.go (tag marshaling in TagResource and UntagResource)
- Return proper errors with context instead of silently ignoring failures

* s3tables: replace custom splitPath with stdlib functions

- Remove custom splitPath implementation (23 lines)
- Use filepath.Dir and filepath.Base from stdlib
- More robust and handles edge cases correctly
- Reduces code duplication
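
The replacement boils down to two stdlib calls. A minimal sketch (using the `path` package, which a later commit in this PR standardizes on since filer paths always use forward slashes):

```go
package main

import (
	"fmt"
	"path"
)

// splitPath replaces a hand-rolled implementation with the stdlib:
// path.Dir and path.Base already handle trailing slashes, ".", and the
// root directory correctly, so no custom edge-case code is needed.
func splitPath(p string) (dir, name string) {
	return path.Dir(p), path.Base(p)
}

func main() {
	dir, name := splitPath("/tables/my-bucket/ns1")
	fmt.Println(dir, name)
}
```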

* s3tables: improve error handling specificity in ListTableBuckets

- Specifically check for 'not found' errors instead of catching all errors
- Return empty list only when directory doesn't exist
- Propagate other errors (network, permission) with context
- Prevents masking real errors

* s3api_tables: optimize action validation with map lookup

- Replace O(n) slice iteration with O(1) map lookup
- Move s3TablesActionsMap to package level
- Avoid recreating the map on every function call
- Improves performance for request validation
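
The pattern is a package-level set built once, checked per request. A sketch (the action names shown are a representative subset, not the full list of 23 operations):

```go
package main

import "fmt"

// s3TablesActions is built once at package init, so the per-request check
// is an O(1) map lookup instead of an O(n) slice scan, and the map is not
// recreated on every call.
var s3TablesActions = map[string]struct{}{
	"CreateTableBucket": {},
	"GetTableBucket":    {},
	"ListTableBuckets":  {},
	"DeleteTableBucket": {},
}

func isS3TablesAction(action string) bool {
	_, ok := s3TablesActions[action]
	return ok
}

func main() {
	fmt.Println(isS3TablesAction("GetTableBucket"), isS3TablesAction("PutObject"))
}
```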

* s3tables: implement permission checking and authorization

- Add permissions.go with permission definitions and checks
- Define permissions for all 21 S3 Tables operations
- Add permission checking helper functions
- Add getPrincipalFromRequest to extract caller identity
- Implement access control in CreateTableBucket, GetTableBucket, DeleteTableBucket
- Return 403 Forbidden for unauthorized operations
- Only bucket owner can perform operations (extensible for future policies)
- Add AuthError type for authorization failures

* workflow: fix s3 tables tests path and working directory

The workflow was failing because it was running inside the 'weed' directory,
but the tests are at the repository root. Removed the working-directory
default and updated relative paths to the weed source.

* workflow: remove emojis from echo statements

* test: format s3tables client.go

* workflow: fix go install path to ./weed

* ci: fail s3 tables tests if any command in pipeline fails

* s3tables: use path.Join for path construction and align namespace paths

* s3tables: improve integration test stability and error reporting

* s3tables: propagate request context to filer operations

* s3tables: clean up unused code and improve error response formatting

* Refine S3 Tables implementation to address code review feedback

- Standardize namespace representation to []string
- Improve listing logic with pagination and StartFromFileName
- Enhance error handling with sentinel errors and robust checks
- Add JSON encoding error logging
- Fix CI workflow to use gofmt -l
- Standardize timestamps in directory creation
- Validate single-level namespaces

* s3tables: further refinements to filer operations and utilities

- Add multi-segment namespace support to ARN parsing
- Refactor permission checking to use map lookup
- Wrap lookup errors with ErrNotFound in filer operations
- Standardize splitPath to use path package

* test: improve S3 Tables client error handling and cleanup

- Add detailed error reporting when decoding failure responses
- Remove orphaned comments and unused sections

* command: implement graceful shutdown for mini cluster

- Introduce MiniClusterCtx to coordinate shutdown across mini services
- Update Master, Volume, Filer, S3, and WebDAV servers to respect context cancellation
- Ensure all resources are cleaned up properly during test teardown
- Integrate MiniClusterCtx in s3tables integration tests

* s3tables: fix pagination and enhance error handling in list/delete operations

- Fix InclusiveStartFrom logic to ensure exclusive start on continued pages
- Prevent duplicates in bucket, namespace, and table listings
- Fail fast on listing errors during bucket and namespace deletion
- Stop swallowing errors in handleListTables and return proper HTTP error responses
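
The exclusive-start fix can be illustrated in miniature: the continuation token is the last name returned on the previous page, and the next page resumes strictly after it, so continued pages never repeat entries. This is an illustrative stand-in for the filer's `StartFromFileName`/`InclusiveStartFrom` mechanics, not the actual code.

```go
package main

import (
	"fmt"
	"sort"
)

// listPage pages through a sorted name list. startAfter is the previous
// page's last name; listing resumes strictly after it (exclusive start),
// which is what prevents duplicates on continued pages.
func listPage(sortedNames []string, startAfter string, max int) (page []string, nextToken string) {
	i := 0
	if startAfter != "" {
		i = sort.SearchStrings(sortedNames, startAfter)
		if i < len(sortedNames) && sortedNames[i] == startAfter {
			i++ // skip the token itself: exclusive, not inclusive
		}
	}
	for ; i < len(sortedNames) && len(page) < max; i++ {
		page = append(page, sortedNames[i])
	}
	if i < len(sortedNames) {
		nextToken = page[len(page)-1]
	}
	return page, nextToken
}

func main() {
	names := []string{"a", "b", "c", "d", "e"}
	p1, tok := listPage(names, "", 2)
	p2, _ := listPage(names, tok, 2)
	fmt.Println(p1, tok, p2)
}
```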

* s3tables: align ARN formatting and optimize resource handling

- Update generateTableARN to match AWS S3 Tables specification
- Move defer r.Body.Close() to follow standard Go patterns
- Remove unused generateNamespaceARN helper

* command: fix stale error variable logging in filer serving goroutines

- Use local 'err' variable instead of stale 'e' from outer scope
- Applied to both TLS and non-TLS paths for local listener

* s3tables: implement granular authorization and refine error responses

- Remove mandatory ACTION_ADMIN at the router level
- Enforce granular permissions in bucket and namespace handlers
- Prioritize AccountID in ExtractPrincipalFromContext for ARN matching
- Distinguish between 404 (NoSuchBucket) and 500 (InternalError) in metadata lookups
- Clean up unused imports in s3api_tables.go

* test: refactor S3 Tables client for DRYness and multi-segment namespaces

- Implement doRequestAndDecode to eliminate HTTP boilerplate
- Update client API to accept []string for namespaces to support hierarchy
- Standardize error response decoding across all client methods

* test: update integration tests to match refactored S3 Tables client

- Pass namespaces as []string to support hierarchical structures
- Adapt test calls to new client API signatures

* s3tables: normalize filer errors and use standard helpers

- Migrate from custom ErrNotFound to filer_pb.ErrNotFound
- Use filer_pb.LookupEntry for automatic error normalization
- Normalize entryExists and attribute lookups

* s3tables: harden namespace validation and correct ARN parsing

- Prohibit path traversal (".", "..") and "/" in namespaces
- Restrict namespace characters to [a-z0-9_] for consistency
- Switch to url.PathUnescape for correct decoding of ARN path components
- Align ARN parsing regex with single-segment namespace validation
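
The validation described can be sketched as follows; note that a later commit in this PR relaxes the character set to also allow hyphens, so the regex here reflects only this intermediate state.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var namespaceRe = regexp.MustCompile(`^[a-z0-9_]+$`)

// validateNamespace rejects path traversal components and slashes, then
// restricts the character set. The charset check alone already excludes
// ".", "..", and "/", but the explicit checks document the security intent.
func validateNamespace(ns string) error {
	if ns == "." || ns == ".." || strings.Contains(ns, "/") {
		return fmt.Errorf("namespace must not contain path traversal components")
	}
	if !namespaceRe.MatchString(ns) {
		return fmt.Errorf("namespace may only contain [a-z0-9_]")
	}
	return nil
}

func main() {
	fmt.Println(validateNamespace("analytics_ns"), validateNamespace(".."))
}
```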

* s3tables: improve robustness, security, and error propagation in handlers

- Implement strict table name validation (preventing path traversal and enforcing the allowed character set)
- Add nil checks for entry.Entry in all listing loops to prevent panics
- Propagate backend errors instead of swallowing them or assuming 404
- Correctly map filer_pb.ErrNotFound to appropriate S3 error codes
- Standardize existence checks across bucket, namespace, and table handlers

* test: add miniClusterMutex to prevent race conditions

- Introduce sync.Mutex to protect global state (os.Args, os.Chdir)
- Ensure serialized initialization of the mini cluster runner
- Fix intermittent race conditions during parallel test execution

* s3tables: improve error handling and permission logic

- Update handleGetNamespace to distinguish between 404 and 500 errors
- Refactor CanManagePolicy to use CheckPermission for consistent enforcement
- Ensure empty identities are correctly handled in policy management checks

* s3tables: optimize regex usage and improve version token uniqueness

- Pre-compile regex patterns as package-level variables to avoid re-compilation overhead on every call
- Add a random component to version token generation to reduce collision probability under high concurrency

* s3tables: harden auth and error handling

- Add authorization checks to all S3 Tables handlers (policy, table ops) to enforce security
- Improve error handling to distinguish between NotFound (404) and InternalError (500)
- Fix directory FileMode usage in filer_ops
- Improve test randomness for version tokens
- Update permissions comments to acknowledge IAM gaps

* S3 Tables: fix gRPC stream loop handling for list operations

- Correctly handle io.EOF to terminate loops gracefully.
- Propagate other errors to prevent silent failures.
- Ensure all list results are fully processed before the loop terminates.
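
The corrected loop shape is the standard gRPC server-stream pattern. The sketch below uses a hypothetical `entryStream` interface standing in for the generated stream client; the shape of the error handling is the point.

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// entryStream stands in for a gRPC server-stream client: Recv returns the
// next entry, io.EOF at end of stream, or some other error on failure.
type entryStream interface {
	Recv() (string, error)
}

// drainEntries terminates gracefully on io.EOF and propagates every other
// error instead of swallowing it.
func drainEntries(s entryStream) ([]string, error) {
	var out []string
	for {
		name, err := s.Recv()
		if err != nil {
			if errors.Is(err, io.EOF) {
				return out, nil // end of stream: not an error
			}
			return nil, err // real failure: propagate
		}
		out = append(out, name)
	}
}

type fakeStream struct{ items []string }

func (f *fakeStream) Recv() (string, error) {
	if len(f.items) == 0 {
		return "", io.EOF
	}
	n := f.items[0]
	f.items = f.items[1:]
	return n, nil
}

func main() {
	got, err := drainEntries(&fakeStream{items: []string{"t1", "t2"}})
	fmt.Println(got, err)
}
```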

* S3 Tables: validate ARN namespace to prevent path traversal

- Enforce validation on decoded namespace in parseTableFromARN.
- Ensures path components are safe after URL unescaping.

* S3 Tables: secure API router with IAM authentication

- Wrap S3 Tables handler with authenticateS3Tables.
- Use AuthSignatureOnly to enforce valid credentials while delegating granular authorization to handlers.
- Prevent anonymous access to all S3 Tables endpoints.

* S3 Tables: fix gRPC stream loop handling in namespace handlers

- Correctly handle io.EOF in handleListNamespaces and handleDeleteNamespace.
- Propagate other errors to prevent silent failures or accidental data loss.
- Added necessary io import.

* S3 Tables: use os.ModeDir constant in filer_ops.go

- Replace magic number 1<<31 with os.ModeDir for better readability.
- Added necessary os import.

* s3tables: improve principal extraction using identity context

* s3tables: remove duplicate comment in permissions.go

* s3tables test: improve error reporting on decoding failure

* s3tables: implement validateTableName helper

* s3tables: add table name validation and 404 propagation to policy handlers

* s3tables: add table name validation and cleanup duplicated logic in table handlers

* s3tables: ensure root tables directory exists before bucket creation

* s3tables: implement token-based pagination for table buckets listing

* s3tables: implement token-based pagination for namespace listing

* s3tables: refine permission helpers to align with operation names

* s3tables: return 404 in handleDeleteNamespace if namespace not found

* s3tables: fix cross-namespace pagination in listTablesInAllNamespaces

* s3tables test: expose pagination parameters in client list methods

* s3tables test: update integration tests for new client API

* s3tables: use crypto/rand for secure version token generation

Replaced math/rand with crypto/rand to ensure version tokens are
cryptographically secure and unpredictable for optimistic concurrency control.
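
A minimal version of such a token generator, assuming a 128-bit hex-encoded token (the size is an illustrative choice, not taken from the PR):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// newVersionToken draws 16 bytes from crypto/rand, giving a 128-bit token
// that is unpredictable and collision-resistant, suitable for optimistic
// concurrency checks.
func newVersionToken() (string, error) {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}

func main() {
	tok, err := newVersionToken()
	fmt.Println(len(tok), err)
}
```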

* s3tables: improve account ID handling and define missing error codes

Updated getPrincipalFromRequest to prioritize X-Amz-Account-ID header and
added getAccountID helper. Defined ErrVersionTokenMismatch and ErrCodeConflict
for better optimistic concurrency support.

* s3tables: update bucket handlers for multi-account support

Ensured bucket ownership is correctly attributed to the authenticated
account ID and updated ARNs to use the request-derived account ID. Added
standard S3 existence checks for bucket deletion.

* s3tables: update namespace handlers for multi-account support

Updated namespace creation to use authenticated account ID for ownership
and unified permission checks across all namespace operations to use the
correct account principal.

* s3tables: implement optimistic concurrency for table deletion

Added VersionToken validation to handleDeleteTable. Refactored table
listing to use request context for accurate ARN generation and fixed
cross-namespace pagination issues.

* s3tables: improve resource resolution and error mapping for policies and tagging

Refactored resolveResourcePath to return resource type, enabling accurate
NoSuchBucket vs NoSuchTable error codes. Added existence checks before
deleting policies.

* s3tables: enhance test robustness and resilience

Updated random string generation to use crypto/rand in s3tables tests.
Increased resilience of IAM distributed tests by adding "connection refused"
to retryable errors.

* s3tables: remove legacy principal fallback header

Removed the fallback to X-Amz-Principal in getPrincipalFromRequest as
S3 Tables is a new feature and does not require legacy header support.

* s3tables: remove unused ExtractPrincipalFromContext function

Removed the unused ExtractPrincipalFromContext utility and its
accompanying iam/utils import to keep the new s3tables codebase clean.

* s3tables: allow hyphens in namespace and table names

Relaxed regex validation in utils.go to support hyphens in S3 Tables
namespaces and table names, improving consistency with S3 bucket naming
and allowing derived names from services like S3 Storage Lens.

* s3tables: add isAuthError helper to handler.go

* s3tables: refactor permission checks to use resource owner in bucket handlers

* s3tables: refactor permission checks to use resource owner in namespace handlers

* s3tables: refactor permission checks to use resource owner in table handlers

* s3tables: refactor permission checks to use resource owner in policy and tagging handlers

* ownerAccountID

* s3tables: implement strict AWS-aligned name validation for buckets, namespaces, and tables

* s3tables: enforce strict resource ownership and implement result filtering for buckets

* s3tables: enforce strict resource ownership and implement result filtering for namespaces

* s3tables: enforce strict resource ownership and implement result filtering for tables

* s3tables: align getPrincipalFromRequest with account ID for IAM compatibility

* s3tables: fix inconsistent permission check in handleCreateTableBucket

* s3tables: improve pagination robustness and error handling in table listing handlers

* s3tables: refactor handleDeleteTableBucket to use strongly typed AuthError

* s3tables: align ARN regex patterns with S3 standards and refactor to constants

* s3tables: standardize access denied errors using ErrAccessDenied constant

* go fmt

* s3tables: fix double-write issue in handleListTables

Remove premature HTTP error writes from within WithFilerClient closure
to prevent duplicate status code responses. Error handling is now
consistently performed at the top level using isAuthError.

* s3tables: update bucket name validation message

Remove "underscores" from error message to accurately reflect that
bucket names only allow lowercase letters, numbers, and hyphens.

* s3tables: add table policy test coverage

Add comprehensive test coverage for table policy operations:
- Added PutTablePolicy, GetTablePolicy, DeleteTablePolicy methods to test client
- Implemented testTablePolicy lifecycle test validating Put/Get/Delete operations
- Verified error handling for missing policies

* follow aws spec

* s3tables: add request body size limiting

Add request body size limiting (10MB) to the readRequestBody method:
- Define maxRequestBodySize constant to prevent unbounded reads
- Use io.LimitReader to enforce size limit
- Add explicit error handling for oversized requests
- Prevents potential DoS attacks via large request bodies

* S3 Tables API now properly enforces resource policies

addressing the critical security gap where policies were created but never evaluated.

* s3tables: Add upper bound validation for MaxTables parameter

MaxTables is user-controlled and influences gRPC ListEntries limits via
uint32(maxTables*2). Without an upper bound, very large values can overflow
uint32 or cause excessively large directory scans. Cap MaxTables to 1000 and
return InvalidRequest for out-of-range values, consistent with S3 MaxKeys
handling.
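
The bound check itself is small; a hedged sketch (function and constant names are illustrative):

```go
package main

import "fmt"

const maxListLimit = 1000 // cap mirroring S3 MaxKeys handling

// validateMaxTables rejects out-of-range values up front. MaxTables is
// user-controlled and later multiplied and converted to uint32 for the
// filer ListEntries call, so unbounded values could overflow or trigger
// excessively large directory scans.
func validateMaxTables(maxTables int) (int, error) {
	if maxTables <= 0 || maxTables > maxListLimit {
		return 0, fmt.Errorf("InvalidRequest: MaxTables must be between 1 and %d", maxListLimit)
	}
	return maxTables, nil
}

func main() {
	v, err := validateMaxTables(100)
	fmt.Println(v, err)
}
```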

* s3tables: Add upper bound validation for MaxBuckets parameter

MaxBuckets is user-controlled and used in uint32(maxBuckets*2) for ListEntries.
Very large values can overflow uint32 or trigger overly expensive scans. Cap
MaxBuckets to 1000 and reject out-of-range values, consistent with MaxTables
handling and S3 MaxKeys validation elsewhere in the codebase.

* s3tables: Validate bucket name in parseBucketNameFromARN()

Enforce the same bucket name validation rules (length, characters, reserved
prefixes/suffixes) when extracting from ARN. This prevents accepting ARNs
that the system would never create and ensures consistency with
CreateTableBucket validation.

* s3tables: Fix parseTableFromARN() namespace and table name validation

- Remove dead URL unescape for namespace (regex [a-z0-9_]+ cannot contain
  percent-escapes)
- Add URL decoding and validation of extracted table name via
  validateTableName() to prevent callers from bypassing request validation
  done in other paths

* s3tables: Rename tableMetadataInternal.Schema to Metadata

The field name 'Schema' was confusing given it holds a *TableMetadata struct
and serializes as 'metadata' in JSON. Rename to 'Metadata' for clarity and
consistency with the JSON tag and intended meaning.

* s3tables: Improve bucket name validation error message

Replace misleading character-only error message with generic 'invalid bucket
name'. The isValidBucketName() function checks multiple constraints beyond
character set (length, reserved prefixes/suffixes, start/end rules), so a
specific character message is inaccurate.

* s3tables: Separate permission checks for tagging and untagging

- Add CanTagResource() to check TagResource permission
- Add CanUntagResource() to check UntagResource permission
- Update CanManageTags() to check both operations (OR logic)

This prevents UntagResource from incorrectly checking 'ManageTags' permission
and ensures each operation validates the correct permission when per-operation
permissions are enforced.

* s3tables: Consolidate getPrincipalFromRequest and getAccountID into single method

Both methods had identical implementations - they return the account ID from
request header or fall back to handler's default. Remove the duplicate
getPrincipalFromRequest and use getAccountID throughout, with updated comment
explaining its dual role as both caller identity and principal for permission
checks.

* s3tables: Fetch bucket policy in handleListTagsForResource for permission evaluation

Update handleListTagsForResource to fetch and pass bucket policy to
CheckPermission, matching the behavior of handleTagResource/handleUntagResource.
This enables bucket-policy-based permission grants to be evaluated for
ListTagsForResource, not just ownership-based checks.

* s3tables: Extract resource owner and bucket extraction into helper method

Create extractResourceOwnerAndBucket() helper to consolidate the repeated pattern
of unmarshaling metadata and extracting bucket name from resource path. This
pattern was duplicated in handleTagResource, handleListTagsForResource, and
handleUntagResource. Update all three handlers to use the helper.

Also update remaining uses of getPrincipalFromRequest() (in handler_bucket_create,
handler_bucket_get_list_delete, handler_namespace) to use getAccountID() after
consolidating the two identical methods.

* s3tables: Add log message when cluster shutdown times out

The timeout path (2 second wait for graceful shutdown) was silent. Add a
warning log message when it occurs to help diagnose flaky test issues and
indicate when the mini cluster didn't shut down cleanly.

* s3tables: Use policy_engine wildcard matcher for complete IAM compatibility

Replace the custom suffix-only wildcard implementation in matchesActionPattern
and matchesPrincipal with the policy_engine.MatchesWildcard function from
PR #8052. This enables full wildcard support including:

- Middle wildcards: s3tables:Get*Table matches GetTable
- Question mark wildcards: Get? matches any single character
- Combined patterns: s3tables:*Table* matches any action containing 'Table'

Benefits:
- Code reuse: eliminates duplicate wildcard logic
- Complete IAM compatibility: supports all AWS wildcard patterns
- Performance: uses efficient O(n) backtracking algorithm
- Consistency: same wildcard behavior across S3 API and S3 Tables

Add comprehensive unit tests covering exact matches, suffix wildcards,
middle wildcards, question marks, and combined patterns for both action
and principal matching.
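
For reference, the wildcard semantics being adopted can be demonstrated with a minimal stand-in matcher. This is not `policy_engine.MatchesWildcard` itself, just the classic greedy-with-backtracking algorithm supporting the same '*' (any sequence, including empty) and '?' (exactly one character) semantics:

```go
package main

import "fmt"

// matchesWildcard matches s against pattern where '*' matches any sequence
// and '?' matches one character, using iterative backtracking: on mismatch
// after a '*', retry with the star consuming one more character.
func matchesWildcard(pattern, s string) bool {
	pi, si := 0, 0
	star, match := -1, 0
	for si < len(s) {
		switch {
		case pi < len(pattern) && (pattern[pi] == '?' || pattern[pi] == s[si]):
			pi++
			si++
		case pi < len(pattern) && pattern[pi] == '*':
			star, match = pi, si // remember star position for backtracking
			pi++
		case star != -1:
			pi = star + 1 // backtrack: let the star absorb one more char
			match++
			si = match
		default:
			return false
		}
	}
	for pi < len(pattern) && pattern[pi] == '*' {
		pi++ // trailing stars match the empty suffix
	}
	return pi == len(pattern)
}

func main() {
	fmt.Println(
		matchesWildcard("s3tables:Get*Table", "s3tables:GetTable"),
		matchesWildcard("s3tables:*Table*", "s3tables:ListTableBuckets"),
		matchesWildcard("Get?", "GetX"),
	)
}
```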

* go fmt

* s3tables: Fix vet error - remove undefined c.t reference in Stop()

The TestCluster.Stop() method doesn't have access to testing.T object.
Remove the log statement and keep the timeout handling comment for clarity.
The original intent (warning about shutdown timeout) is still captured in
the code comment explaining potential issues.

* clean up

* s3tables: Add t field to TestCluster for logging

Add *testing.T field to TestCluster struct and initialize it in
startMiniCluster. This allows Stop() to properly log warnings when
cluster shutdown times out. Includes the t field in the test cluster
initialization and restores the logging statement in Stop().

* s3tables: Fix bucket policy error handling in permission checks

Replace error-swallowing pattern where all errors from getExtendedAttribute
were ignored for bucket policy reads. Now properly distinguish between:

- ErrAttributeNotFound: Policy not found is expected; continue with empty policy
- Other errors: Return internal server error and stop processing

Applied fix to all bucket policy reads in:
- handleDeleteTableBucketPolicy (line 220)
- handleTagResource (line 313)
- handleUntagResource (line 405)
- handleListTagsForResource (line 488)
- And additional occurrences in closures

This prevents silent failures and ensures policy-related errors are surfaced
to callers rather than being silently ignored.

* s3tables: Pre-validate namespace to return 400 instead of 500

Move validateNamespace call outside of filerClient.WithFilerClient closure
so that validation errors return HTTP 400 (InvalidRequest) instead of 500
(InternalError).

Before: Validation error inside closure → treated as internal error → 500
After: Validation error before closure → handled as bad request → 400

This provides correct error semantics: namespace validation is an input
validation issue, not a server error.

* Update weed/s3api/s3tables/handler.go

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>

* s3tables: Normalize action names to include service prefix

Add automatic normalization of operations to full IAM-style action names
(e.g., 's3tables:CreateTableBucket') in CheckPermission(). This ensures
policy statements using prefixed actions (s3tables:*) correctly match
operations evaluated by permission helpers.

Also fixes incorrect r.Context() passed to GetIdentityNameFromContext
which expects *http.Request. Now passes r directly.

* s3tables: Use policy framework for table creation authorization

Replace strict ownership check in CreateTable with policy-based authorization.
Now checks both namespace and bucket policies for CreateTable permission,
allowing delegation via resource policies while still respecting owner bypass.

Authorization logic:
- Namespace policy grants CreateTable → allowed
- Bucket policy grants CreateTable → allowed
- Otherwise → denied (even if same owner)

This enables cross-principal table creation via policies while maintaining
security through explicit allow/deny semantics.

* s3tables: Use policy framework for GetTable authorization

Replace strict ownership check with policy-based authorization in GetTable.
Now checks both table and bucket policies for GetTable permission, allowing
authorized non-owners to read table metadata.

Authorization logic:
- Table policy grants GetTable → allowed
- Bucket policy grants GetTable → allowed
- Otherwise → 404 NotFound (no access disclosed)

Maintains security through policy evaluation while enabling read delegation.

* s3tables: Generate ARNs using resource owner account ID

Change ARN generation to use resource OwnerAccountID instead of caller
identity (h.getAccountID(r)). This ensures ARNs are stable and consistent
regardless of which principal accesses the resource.

Updated generateTableBucketARN and generateTableARN function signatures
to accept ownerAccountID parameter. All call sites updated to pass the
resource owner's account ID from metadata.

This prevents ARN inconsistency issues when multiple principals have
access to the same resource via policies.

* s3tables: Fix remaining policy error handling in namespace and bucket handlers

Replace silent error swallowing (err == nil) with proper error distinction
for bucket policy reads. Now properly checks ErrAttributeNotFound and
propagates other errors as internal server errors.

Fixed 5 locations:
- handleCreateNamespace (policy fetch)
- handleDeleteNamespace (policy fetch)
- handleListNamespaces (policy fetch)
- handleGetNamespace (policy fetch)
- handleGetTableBucket (policy fetch)

This prevents masking of filer issues when policies cannot be read due
to I/O errors or other transient failures.

* ci: Pin GitHub Actions to commit SHAs for s3-tables-tests

Update all action refs to use pinned commit SHAs instead of floating tags:
- actions/checkout: v6 → 8e8c483 (v4)
- actions/setup-go: v6 → 0c52d54 (v5)
- actions/upload-artifact: v6 → 65d8626 (v4)

Pinned SHAs improve reproducibility and reduce supply chain risk by
preventing accidental or malicious changes in action releases. Aligns
with repository conventions used in other workflows (e.g., go.yml).

* s3tables: Add resource ARN validation to policy evaluation

Implement resource-specific policy validation to prevent over-broad
permission grants. Add matchesResource and matchesResourcePattern functions
to validate statement Resource fields against specific resource ARNs.

Add new CheckPermissionWithResource function that includes resource ARN
validation, while keeping CheckPermission unchanged for backward compatibility.

This enables policies to grant access to specific resources only:
- statements with Resource: "arn:aws:s3tables:...:bucket/specific-bucket/*"
  will only match when accessing that specific bucket
- statements without Resource field match all resources (implicit *)
- resource patterns support wildcards (* for any sequence, ? for single char)

For future use: Handlers can call CheckPermissionWithResource with the
target resource ARN to enforce resource-level access control.

* Revert "ci: Pin GitHub Actions to commit SHAs for s3-tables-tests"

This reverts commit 01da26fbcb.

* s3tables: Remove duplicate bucket extraction logic in helper

Move bucket name extraction outside the if/else block in
extractResourceOwnerAndBucket since the logic is identical for both
ResourceTypeTable and ResourceTypeBucket cases. This reduces code
duplication and improves maintainability.

The extraction pattern (parts[1] from /tables/{bucket}/...) works for
both resource types, so it's now performed once before the type-specific
metadata unmarshaling.

* go fmt

* s3tables: Fix ownership consistency across handlers

Address three related ownership consistency issues:

1. CreateNamespace now sets OwnerAccountID to bucketMetadata.OwnerAccountID
   instead of request principal. This prevents namespaces created by
   delegated callers (via bucket policy) from becoming unmanageable, since
   ListNamespaces filters by bucket owner.

2. CreateTable now:
   - Fetches bucket metadata to use correct owner for bucket policy evaluation
   - Uses namespaceMetadata.OwnerAccountID for namespace policy checks
   - Uses bucketMetadata.OwnerAccountID for bucket policy checks
   - Sets table OwnerAccountID to namespaceMetadata.OwnerAccountID (inherited)

3. GetTable now:
   - Fetches bucket metadata to use correct owner for bucket policy evaluation
   - Uses metadata.OwnerAccountID for table policy checks
   - Uses bucketMetadata.OwnerAccountID for bucket policy checks

This ensures:
- Bucket owner retains implicit "owner always allowed" behavior even when
  evaluating bucket policies
- Ownership hierarchy is consistent (namespace owned by bucket, table owned by namespace)
- Cross-principal delegation via policies doesn't break ownership chains

* s3tables: Fix ListTables authorization and policy parsing

Make ListTables authorization consistent with GetTable/CreateTable:

1. ListTables authorization now evaluates policies instead of owner-only checks:
   - For namespace listing: checks namespace policy AND bucket policy
   - For bucket-wide listing: checks bucket policy
   - Uses CanListTables permission framework

2. Remove owner-only filter in listTablesWithClient that prevented policy-based
   sharing of tables. Authorization is now enforced at the handler level, so all
   tables in the namespace/bucket are returned to authorized callers (who have
   access either via ownership or policy).

3. Add flexible PolicyDocument.UnmarshalJSON to support both single-object and
   array forms of Statement field:
   - Handles: {"Statement": {...}}
   - Handles: {"Statement": [{...}, {...}]}
   - Improves AWS IAM compatibility

This ensures cross-account table listing works when delegated via bucket/namespace
policies, consistent with the authorization model for other operations.

* go fmt

* s3tables: Separate table name pattern constant for clarity

Define a separate tableNamePatternStr constant for the table name component in
the ARN regex, even though it currently has the same value as
tableNamespacePatternStr. This improves code clarity and maintainability, making
it easier to modify if the naming rules for tables and namespaces diverge in the
future.

* refactor

---------

Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Branch: s3tables-by-claude^2
Author: Chris Lu, committed via GitHub 3 days ago
Commit: 2f155ee5ee (unverified signature; GPG Key ID B5690EEEBB952194 not found in database)
  1. .github/workflows/s3-tables-tests.yml (189)
  2. go.mod (8)
  3. go.sum (16)
  4. test/s3/iam/s3_iam_distributed_test.go (1)
  5. test/s3tables/client.go (300)
  6. test/s3tables/s3tables_integration_test.go (576)
  7. test/s3tables/setup.go (53)
  8. weed/cluster/lock_client.go (22)
  9. weed/command/filer.go (28)
  10. weed/command/master.go (8)
  11. weed/command/mini.go (22)
  12. weed/command/s3.go (21)
  13. weed/command/volume.go (12)
  14. weed/command/webdav.go (11)
  15. weed/s3api/cors/middleware_test.go (6)
  16. weed/s3api/s3api_server.go (4)
  17. weed/s3api/s3api_tables.go (143)
  18. weed/s3api/s3tables/filer_ops.go (139)
  19. weed/s3api/s3tables/handler.go (238)
  20. weed/s3api/s3tables/handler_bucket_create.go (124)
  21. weed/s3api/s3tables/handler_bucket_get_list_delete.go (324)
  22. weed/s3api/s3tables/handler_namespace.go (512)
  23. weed/s3api/s3tables/handler_policy.go (853)
  24. weed/s3api/s3tables/handler_table.go (780)
  25. weed/s3api/s3tables/permissions.go (440)
  26. weed/s3api/s3tables/permissions_test.go (90)
  27. weed/s3api/s3tables/types.go (291)
  28. weed/s3api/s3tables/utils.go (268)

.github/workflows/s3-tables-tests.yml (189 changed lines)

@@ -0,0 +1,189 @@
name: "S3 Tables Integration Tests"

on:
  pull_request:

concurrency:
  group: ${{ github.head_ref }}/s3-tables-tests
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  s3-tables-tests:
    name: S3 Tables Integration Tests
    runs-on: ubuntu-22.04
    timeout-minutes: 30
    steps:
      - name: Check out code
        uses: actions/checkout@v6

      - name: Set up Go
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
        id: go

      - name: Install SeaweedFS
        run: |
          go install -buildvcs=false ./weed

      - name: Run S3 Tables Integration Tests
        timeout-minutes: 25
        working-directory: test/s3tables
        run: |
          set -x
          set -o pipefail

          echo "=== System Information ==="
          uname -a
          free -h
          df -h

          echo "=== Starting S3 Tables Tests ==="
          # Run S3 Tables integration tests
          go test -v -timeout 20m . 2>&1 | tee test-output.log || {
            echo "S3 Tables integration tests failed"
            exit 1
          }

      - name: Show test output on failure
        if: failure()
        working-directory: test/s3tables
        run: |
          echo "=== Test Output ==="
          if [ -f test-output.log ]; then
            tail -200 test-output.log
          fi
          echo "=== Process information ==="
          ps aux | grep -E "(weed|test)" || true

      - name: Upload test logs on failure
        if: failure()
        uses: actions/upload-artifact@v6
        with:
          name: s3-tables-test-logs
          path: test/s3tables/test-output.log
          retention-days: 3

  s3-tables-build-verification:
    name: S3 Tables Build Verification
    runs-on: ubuntu-22.04
    timeout-minutes: 15
    steps:
      - name: Check out code
        uses: actions/checkout@v6

      - name: Set up Go
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
        id: go

      - name: Verify S3 Tables Package Builds
        run: |
          set -x
          echo "=== Building S3 Tables package ==="
          go build ./weed/s3api/s3tables || {
            echo "S3 Tables package build failed"
            exit 1
          }
          echo "S3 Tables package built successfully"

      - name: Verify S3 API Integration Builds
        run: |
          set -x
          echo "=== Building S3 API with S3 Tables integration ==="
          go build ./weed/s3api || {
            echo "S3 API build with S3 Tables failed"
            exit 1
          }
          echo "S3 API with S3 Tables integration built successfully"

      - name: Run Go Tests for S3 Tables Package
        run: |
          set -x
          echo "=== Running Go unit tests for S3 Tables ==="
          go test -v -race -timeout 5m ./weed/s3api/s3tables/... || {
            echo "S3 Tables unit tests failed"
            exit 1
          }
          echo "S3 Tables unit tests passed"

  s3-tables-fmt-check:
    name: S3 Tables Format Check
    runs-on: ubuntu-22.04
    timeout-minutes: 10
    steps:
      - name: Check out code
        uses: actions/checkout@v6

      - name: Set up Go
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
        id: go

      - name: Check Go Format
        run: |
          set -x
          echo "=== Checking S3 Tables Go format ==="
          unformatted=$(gofmt -l ./weed/s3api/s3tables)
          if [ -n "$unformatted" ]; then
            echo "Go format check failed - files need formatting"
            echo "$unformatted"
            exit 1
          fi
          echo "All S3 Tables files are properly formatted"

      - name: Check S3 Tables Test Format
        run: |
          set -x
          echo "=== Checking S3 Tables test format ==="
          unformatted=$(gofmt -l ./test/s3tables)
          if [ -n "$unformatted" ]; then
            echo "Go format check failed for tests"
            echo "$unformatted"
            exit 1
          fi
          echo "All S3 Tables test files are properly formatted"

  s3-tables-vet:
    name: S3 Tables Go Vet Check
    runs-on: ubuntu-22.04
    timeout-minutes: 10
    steps:
      - name: Check out code
        uses: actions/checkout@v6

      - name: Set up Go
        uses: actions/setup-go@v6
        with:
          go-version-file: 'go.mod'
        id: go

      - name: Run Go Vet
        run: |
          set -x
          echo "=== Running go vet on S3 Tables package ==="
          go vet ./weed/s3api/s3tables/... || {
            echo "go vet check failed"
            exit 1
          }
          echo "go vet checks passed"

      - name: Run Go Vet on Tests
        run: |
          set -x
          echo "=== Running go vet on S3 Tables tests ==="
          go vet ./test/s3tables/... || {
            echo "go vet check failed for tests"
            exit 1
          }
          echo "go vet checks passed for tests"

go.mod (8 changed lines)

@@ -254,7 +254,7 @@ require (
github.com/Azure/azure-sdk-for-go/sdk/internal v1.11.2 // indirect
github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3
github.com/Azure/azure-sdk-for-go/sdk/storage/azfile v1.5.3 // indirect
-	github.com/Azure/go-ntlmssp v0.0.2-0.20251110135918-10b7b7e7cd26 // indirect
+	github.com/Azure/go-ntlmssp v0.1.0 // indirect
github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0 // indirect
github.com/Files-com/files-sdk-go/v3 v3.2.264 // indirect
github.com/GoogleCloudPlatform/opentelemetry-operations-go/detectors/gcp v1.30.0 // indirect
@@ -349,7 +349,7 @@ require (
github.com/gorilla/securecookie v1.1.2 // indirect
github.com/gorilla/sessions v1.4.0 // indirect
github.com/grpc-ecosystem/go-grpc-middleware v1.4.0 // indirect
-	github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1 // indirect
+	github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 // indirect
github.com/hashicorp/go-cleanhttp v0.5.2 // indirect
github.com/hashicorp/go-hclog v1.6.3 // indirect
github.com/hashicorp/go-immutable-radix v1.3.1 // indirect
@@ -458,11 +458,11 @@ require (
go.opentelemetry.io/otel/sdk/metric v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
-	go.uber.org/zap v1.27.0 // indirect
+	go.uber.org/zap v1.27.1 // indirect
golang.org/x/arch v0.20.0 // indirect
golang.org/x/term v0.39.0 // indirect
golang.org/x/time v0.14.0 // indirect
-	google.golang.org/genproto/googleapis/api v0.0.0-20251111163417-95abcf5c77ba // indirect
+	google.golang.org/genproto/googleapis/api v0.0.0-20251124214823-79d6a2a48846 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251213004720-97cd9d5aeac2 // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
gopkg.in/validator.v2 v2.0.1 // indirect

go.sum (16 changed lines)

@@ -560,8 +560,8 @@ github.com/Azure/azure-sdk-for-go/sdk/storage/azblob v1.6.3/go.mod h1:URuDvhmATV
github.com/Azure/azure-sdk-for-go/sdk/storage/azfile v1.5.3 h1:sxgSqOB9CDToiaVFpxuvb5wGgGqWa3lCShcm5o0n3bE=
github.com/Azure/azure-sdk-for-go/sdk/storage/azfile v1.5.3/go.mod h1:XdED8i399lEVblYHTZM8eXaP07gv4Z58IL6ueMlVlrg=
github.com/Azure/go-ansiterm v0.0.0-20170929234023-d6e3b3328b78/go.mod h1:LmzpDX56iTiv29bbRTIsUNlaFfuhWRQBWjQdVyAevI8=
-github.com/Azure/go-ntlmssp v0.0.2-0.20251110135918-10b7b7e7cd26 h1:gy/jrlpp8EfSyA73a51fofoSfhp5rPNQAUvDr4Dm91c=
-github.com/Azure/go-ntlmssp v0.0.2-0.20251110135918-10b7b7e7cd26/go.mod h1:NYqdhxd/8aAct/s4qSYZEerdPuH1liG2/X9DiVTbhpk=
+github.com/Azure/go-ntlmssp v0.1.0 h1:DjFo6YtWzNqNvQdrwEyr/e4nhU3vRiwenz5QX7sFz+A=
+github.com/Azure/go-ntlmssp v0.1.0/go.mod h1:NYqdhxd/8aAct/s4qSYZEerdPuH1liG2/X9DiVTbhpk=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1 h1:WJTmL004Abzc5wDB5VtZG2PJk5ndYDgVacGqfirKxjM=
github.com/AzureAD/microsoft-authentication-extensions-for-go/cache v0.1.1/go.mod h1:tCcJZ0uHAmvjsVYzEFivsRTN00oz5BEsRgQHu5JZ9WE=
github.com/AzureAD/microsoft-authentication-library-for-go v1.6.0 h1:XRzhVemXdgvJqCH0sFfrBUTnUJSBrBf7++ypk+twtRs=
@@ -1206,8 +1206,8 @@ github.com/grpc-ecosystem/grpc-gateway v1.16.0 h1:gmcG1KaJ57LophUzW0Hy8NmPhnMZb4
github.com/grpc-ecosystem/grpc-gateway v1.16.0/go.mod h1:BDjrQk3hbvj6Nolgz8mAMFbcEtjT1g+wF4CSlocrBnw=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.7.0/go.mod h1:hgWBS7lorOAVIJEQMi4ZsPv9hVvWI6+ch50m39Pf2Ks=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.11.3/go.mod h1:o//XUCC/F+yRGJoPO/VU0GSB0f8Nhgmxx0VIRUvaC0w=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1 h1:X5VWvz21y3gzm9Nw/kaUeku/1+uBhcekkmy4IkffJww=
-github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.1/go.mod h1:Zanoh4+gvIgluNqcfMVTJueD4wSS5hT7zTt4Mrutd90=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3 h1:NmZ1PKzSTQbuGHw9DGPFomqkkLWMC+vZCkfs+FHv1Vg=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.3/go.mod h1:zQrxl1YP88HQlA6i9c63DSVPFklWpGX4OWAc9bFuaH4=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
@@ -1919,8 +1919,8 @@ go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/zap v1.18.1/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
go.uber.org/zap v1.19.0/go.mod h1:xg/QME4nWcxGxrpdeYfq7UvYrLh66cuVKdrbD1XF/NI=
-go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
-go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
+go.uber.org/zap v1.27.1 h1:08RqriUEv8+ArZRYSTXy1LeBScaMpVSTBhCeaZYfMYc=
+go.uber.org/zap v1.27.1/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go.yaml.in/yaml/v2 v2.4.3 h1:6gvOSjQoTB3vt1l+CU+tSyi/HOjfOjRLJ4YwYZGwRO0=
go.yaml.in/yaml/v2 v2.4.3/go.mod h1:zSxWcmIDjOzPXpjlTTbAsKokqkDNAVtZO0WOMiT90s8=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
@@ -2582,8 +2582,8 @@ google.golang.org/genproto v0.0.0-20230222225845-10f96fb3dbec/go.mod h1:3Dl5ZL0q
google.golang.org/genproto v0.0.0-20230306155012-7f2fa6fef1f4/go.mod h1:NWraEVixdDnqcqQ30jipen1STv2r/n24Wb7twVTGR4s=
google.golang.org/genproto v0.0.0-20250922171735-9219d122eba9 h1:LvZVVaPE0JSqL+ZWb6ErZfnEOKIqqFWUJE2D0fObSmc=
google.golang.org/genproto v0.0.0-20250922171735-9219d122eba9/go.mod h1:QFOrLhdAe2PsTp3vQY4quuLKTi9j3XG3r6JPPaw7MSc=
-google.golang.org/genproto/googleapis/api v0.0.0-20251111163417-95abcf5c77ba h1:B14OtaXuMaCQsl2deSvNkyPKIzq3BjfxQp8d00QyWx4=
-google.golang.org/genproto/googleapis/api v0.0.0-20251111163417-95abcf5c77ba/go.mod h1:G5IanEx8/PgI9w6CFcYQf7jMtHQhZruvfM1i3qOqk5U=
+google.golang.org/genproto/googleapis/api v0.0.0-20251124214823-79d6a2a48846 h1:ZdyUkS9po3H7G0tuh955QVyyotWvOD4W0aEapeGeUYk=
+google.golang.org/genproto/googleapis/api v0.0.0-20251124214823-79d6a2a48846/go.mod h1:Fk4kyraUvqD7i5H6S43sj2W98fbZa75lpZz/eUyhfO0=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251213004720-97cd9d5aeac2 h1:2I6GHUeJ/4shcDpoUlLs/2WPnhg7yJwvXtqcMJt9liA=
google.golang.org/genproto/googleapis/rpc v0.0.0-20251213004720-97cd9d5aeac2/go.mod h1:7i2o+ce6H/6BluujYR+kqX3GKH+dChPTQU19wjRPiGk=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=

test/s3/iam/s3_iam_distributed_test.go (1 changed line)

@@ -129,6 +129,7 @@ func TestS3IAMDistributedTests(t *testing.T) {
errorMsg := err.Error()
return strings.Contains(errorMsg, "timeout") ||
strings.Contains(errorMsg, "connection reset") ||
+strings.Contains(errorMsg, "connection refused") ||
strings.Contains(errorMsg, "temporary failure") ||
strings.Contains(errorMsg, "TooManyRequests") ||
strings.Contains(errorMsg, "ServiceUnavailable") ||

test/s3tables/client.go (300 changed lines)

@@ -0,0 +1,300 @@
package s3tables

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"

	"github.com/seaweedfs/seaweedfs/weed/s3api/s3tables"
)

func (c *S3TablesClient) doRequest(operation string, body interface{}) (*http.Response, error) {
	var bodyBytes []byte
	var err error
	if body != nil {
		bodyBytes, err = json.Marshal(body)
		if err != nil {
			return nil, fmt.Errorf("failed to marshal request body: %w", err)
		}
	}
	req, err := http.NewRequest(http.MethodPost, c.endpoint, bytes.NewReader(bodyBytes))
	if err != nil {
		return nil, fmt.Errorf("failed to create request: %w", err)
	}
	req.Header.Set("Content-Type", "application/x-amz-json-1.1")
	req.Header.Set("X-Amz-Target", "S3Tables."+operation)
	return c.client.Do(req)
}

func (c *S3TablesClient) doRequestAndDecode(operation string, reqBody interface{}, respBody interface{}) error {
	resp, err := c.doRequest(operation, reqBody)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		bodyBytes, readErr := io.ReadAll(resp.Body)
		if readErr != nil {
			return fmt.Errorf("%s failed with status %d and could not read error response body: %v", operation, resp.StatusCode, readErr)
		}
		var errResp s3tables.S3TablesError
		if err := json.Unmarshal(bodyBytes, &errResp); err != nil {
			return fmt.Errorf("%s failed with status %d, could not decode error response: %v. Body: %s", operation, resp.StatusCode, err, string(bodyBytes))
		}
		return fmt.Errorf("%s failed: %s - %s", operation, errResp.Type, errResp.Message)
	}
	if respBody != nil {
		if err := json.NewDecoder(resp.Body).Decode(respBody); err != nil {
			return fmt.Errorf("failed to decode %s response: %w", operation, err)
		}
	}
	return nil
}

// Table Bucket operations

func (c *S3TablesClient) CreateTableBucket(name string, tags map[string]string) (*s3tables.CreateTableBucketResponse, error) {
	req := &s3tables.CreateTableBucketRequest{
		Name: name,
		Tags: tags,
	}
	var result s3tables.CreateTableBucketResponse
	if err := c.doRequestAndDecode("CreateTableBucket", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) GetTableBucket(arn string) (*s3tables.GetTableBucketResponse, error) {
	req := &s3tables.GetTableBucketRequest{
		TableBucketARN: arn,
	}
	var result s3tables.GetTableBucketResponse
	if err := c.doRequestAndDecode("GetTableBucket", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) ListTableBuckets(prefix, continuationToken string, maxBuckets int) (*s3tables.ListTableBucketsResponse, error) {
	req := &s3tables.ListTableBucketsRequest{
		Prefix:            prefix,
		ContinuationToken: continuationToken,
		MaxBuckets:        maxBuckets,
	}
	var result s3tables.ListTableBucketsResponse
	if err := c.doRequestAndDecode("ListTableBuckets", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) DeleteTableBucket(arn string) error {
	req := &s3tables.DeleteTableBucketRequest{
		TableBucketARN: arn,
	}
	return c.doRequestAndDecode("DeleteTableBucket", req, nil)
}

// Namespace operations

func (c *S3TablesClient) CreateNamespace(bucketARN string, namespace []string) (*s3tables.CreateNamespaceResponse, error) {
	req := &s3tables.CreateNamespaceRequest{
		TableBucketARN: bucketARN,
		Namespace:      namespace,
	}
	var result s3tables.CreateNamespaceResponse
	if err := c.doRequestAndDecode("CreateNamespace", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) GetNamespace(bucketARN string, namespace []string) (*s3tables.GetNamespaceResponse, error) {
	req := &s3tables.GetNamespaceRequest{
		TableBucketARN: bucketARN,
		Namespace:      namespace,
	}
	var result s3tables.GetNamespaceResponse
	if err := c.doRequestAndDecode("GetNamespace", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) ListNamespaces(bucketARN, prefix, continuationToken string, maxNamespaces int) (*s3tables.ListNamespacesResponse, error) {
	req := &s3tables.ListNamespacesRequest{
		TableBucketARN:    bucketARN,
		Prefix:            prefix,
		ContinuationToken: continuationToken,
		MaxNamespaces:     maxNamespaces,
	}
	var result s3tables.ListNamespacesResponse
	if err := c.doRequestAndDecode("ListNamespaces", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) DeleteNamespace(bucketARN string, namespace []string) error {
	req := &s3tables.DeleteNamespaceRequest{
		TableBucketARN: bucketARN,
		Namespace:      namespace,
	}
	return c.doRequestAndDecode("DeleteNamespace", req, nil)
}

// Table operations

func (c *S3TablesClient) CreateTable(bucketARN string, namespace []string, name, format string, metadata *s3tables.TableMetadata, tags map[string]string) (*s3tables.CreateTableResponse, error) {
	req := &s3tables.CreateTableRequest{
		TableBucketARN: bucketARN,
		Namespace:      namespace,
		Name:           name,
		Format:         format,
		Metadata:       metadata,
		Tags:           tags,
	}
	var result s3tables.CreateTableResponse
	if err := c.doRequestAndDecode("CreateTable", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) GetTable(bucketARN string, namespace []string, name string) (*s3tables.GetTableResponse, error) {
	req := &s3tables.GetTableRequest{
		TableBucketARN: bucketARN,
		Namespace:      namespace,
		Name:           name,
	}
	var result s3tables.GetTableResponse
	if err := c.doRequestAndDecode("GetTable", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) ListTables(bucketARN string, namespace []string, prefix, continuationToken string, maxTables int) (*s3tables.ListTablesResponse, error) {
	req := &s3tables.ListTablesRequest{
		TableBucketARN:    bucketARN,
		Namespace:         namespace,
		Prefix:            prefix,
		ContinuationToken: continuationToken,
		MaxTables:         maxTables,
	}
	var result s3tables.ListTablesResponse
	if err := c.doRequestAndDecode("ListTables", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) DeleteTable(bucketARN string, namespace []string, name string) error {
	req := &s3tables.DeleteTableRequest{
		TableBucketARN: bucketARN,
		Namespace:      namespace,
		Name:           name,
	}
	return c.doRequestAndDecode("DeleteTable", req, nil)
}

// Policy operations

func (c *S3TablesClient) PutTableBucketPolicy(bucketARN, policy string) error {
	req := &s3tables.PutTableBucketPolicyRequest{
		TableBucketARN: bucketARN,
		ResourcePolicy: policy,
	}
	return c.doRequestAndDecode("PutTableBucketPolicy", req, nil)
}

func (c *S3TablesClient) GetTableBucketPolicy(bucketARN string) (*s3tables.GetTableBucketPolicyResponse, error) {
	req := &s3tables.GetTableBucketPolicyRequest{
		TableBucketARN: bucketARN,
	}
	var result s3tables.GetTableBucketPolicyResponse
	if err := c.doRequestAndDecode("GetTableBucketPolicy", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) DeleteTableBucketPolicy(bucketARN string) error {
	req := &s3tables.DeleteTableBucketPolicyRequest{
		TableBucketARN: bucketARN,
	}
	return c.doRequestAndDecode("DeleteTableBucketPolicy", req, nil)
}

// Table Policy operations

func (c *S3TablesClient) PutTablePolicy(bucketARN string, namespace []string, name, policy string) error {
	req := &s3tables.PutTablePolicyRequest{
		TableBucketARN: bucketARN,
		Namespace:      namespace,
		Name:           name,
		ResourcePolicy: policy,
	}
	return c.doRequestAndDecode("PutTablePolicy", req, nil)
}

func (c *S3TablesClient) GetTablePolicy(bucketARN string, namespace []string, name string) (*s3tables.GetTablePolicyResponse, error) {
	req := &s3tables.GetTablePolicyRequest{
		TableBucketARN: bucketARN,
		Namespace:      namespace,
		Name:           name,
	}
	var result s3tables.GetTablePolicyResponse
	if err := c.doRequestAndDecode("GetTablePolicy", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) DeleteTablePolicy(bucketARN string, namespace []string, name string) error {
	req := &s3tables.DeleteTablePolicyRequest{
		TableBucketARN: bucketARN,
		Namespace:      namespace,
		Name:           name,
	}
	return c.doRequestAndDecode("DeleteTablePolicy", req, nil)
}

// Tagging operations

func (c *S3TablesClient) TagResource(resourceARN string, tags map[string]string) error {
	req := &s3tables.TagResourceRequest{
		ResourceARN: resourceARN,
		Tags:        tags,
	}
	return c.doRequestAndDecode("TagResource", req, nil)
}

func (c *S3TablesClient) ListTagsForResource(resourceARN string) (*s3tables.ListTagsForResourceResponse, error) {
	req := &s3tables.ListTagsForResourceRequest{
		ResourceARN: resourceARN,
	}
	var result s3tables.ListTagsForResourceResponse
	if err := c.doRequestAndDecode("ListTagsForResource", req, &result); err != nil {
		return nil, err
	}
	return &result, nil
}

func (c *S3TablesClient) UntagResource(resourceARN string, tagKeys []string) error {
	req := &s3tables.UntagResourceRequest{
		ResourceARN: resourceARN,
		TagKeys:     tagKeys,
	}
	return c.doRequestAndDecode("UntagResource", req, nil)
}

test/s3tables/s3tables_integration_test.go (576 changed lines)

@@ -0,0 +1,576 @@
package s3tables

import (
	"context"
	cryptorand "crypto/rand"
	"fmt"
	"net"
	"net/http"
	"os"
	"path/filepath"
	"strconv"
	"sync"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/seaweedfs/seaweedfs/weed/command"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3tables"
	flag "github.com/seaweedfs/seaweedfs/weed/util/fla9"
)

var (
	miniClusterMutex sync.Mutex
)

func TestS3TablesIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("Skipping integration test in short mode")
	}

	// Create and start test cluster
	cluster, err := startMiniCluster(t)
	require.NoError(t, err)
	defer cluster.Stop()

	// Create S3 Tables client
	client := NewS3TablesClient(cluster.s3Endpoint, testRegion, testAccessKey, testSecretKey)

	// Run test suite
	t.Run("TableBucketLifecycle", func(t *testing.T) {
		testTableBucketLifecycle(t, client)
	})
	t.Run("NamespaceLifecycle", func(t *testing.T) {
		testNamespaceLifecycle(t, client)
	})
	t.Run("TableLifecycle", func(t *testing.T) {
		testTableLifecycle(t, client)
	})
	t.Run("TableBucketPolicy", func(t *testing.T) {
		testTableBucketPolicy(t, client)
	})
	t.Run("TablePolicy", func(t *testing.T) {
		testTablePolicy(t, client)
	})
	t.Run("Tagging", func(t *testing.T) {
		testTagging(t, client)
	})
}

func testTableBucketLifecycle(t *testing.T, client *S3TablesClient) {
	bucketName := "test-bucket-" + randomString(8)

	// Create table bucket
	createResp, err := client.CreateTableBucket(bucketName, nil)
	require.NoError(t, err, "Failed to create table bucket")
	assert.Contains(t, createResp.ARN, bucketName)
	t.Logf("✓ Created table bucket: %s", createResp.ARN)

	// Get table bucket
	getResp, err := client.GetTableBucket(createResp.ARN)
	require.NoError(t, err, "Failed to get table bucket")
	assert.Equal(t, bucketName, getResp.Name)
	t.Logf("✓ Got table bucket: %s", getResp.Name)

	// List table buckets
	listResp, err := client.ListTableBuckets("", "", 0)
	require.NoError(t, err, "Failed to list table buckets")
	found := false
	for _, b := range listResp.TableBuckets {
		if b.Name == bucketName {
			found = true
			break
		}
	}
	assert.True(t, found, "Created bucket should appear in list")
	t.Logf("✓ Listed table buckets, found %d buckets", len(listResp.TableBuckets))

	// Delete table bucket
	err = client.DeleteTableBucket(createResp.ARN)
	require.NoError(t, err, "Failed to delete table bucket")
	t.Logf("✓ Deleted table bucket: %s", bucketName)

	// Verify bucket is deleted
	_, err = client.GetTableBucket(createResp.ARN)
	assert.Error(t, err, "Bucket should not exist after deletion")
}

func testNamespaceLifecycle(t *testing.T, client *S3TablesClient) {
	bucketName := "test-ns-bucket-" + randomString(8)
	namespaceName := "test_namespace"

	// Create table bucket first
	createBucketResp, err := client.CreateTableBucket(bucketName, nil)
	require.NoError(t, err, "Failed to create table bucket")
	defer client.DeleteTableBucket(createBucketResp.ARN)
	bucketARN := createBucketResp.ARN

	// Create namespace
	createNsResp, err := client.CreateNamespace(bucketARN, []string{namespaceName})
	require.NoError(t, err, "Failed to create namespace")
	assert.Equal(t, []string{namespaceName}, createNsResp.Namespace)
	t.Logf("✓ Created namespace: %s", namespaceName)

	// Get namespace
	getNsResp, err := client.GetNamespace(bucketARN, []string{namespaceName})
	require.NoError(t, err, "Failed to get namespace")
	assert.Equal(t, []string{namespaceName}, getNsResp.Namespace)
	t.Logf("✓ Got namespace: %v", getNsResp.Namespace)

	// List namespaces
	listNsResp, err := client.ListNamespaces(bucketARN, "", "", 0)
	require.NoError(t, err, "Failed to list namespaces")
	found := false
	for _, ns := range listNsResp.Namespaces {
		if len(ns.Namespace) > 0 && ns.Namespace[0] == namespaceName {
			found = true
			break
		}
	}
	assert.True(t, found, "Created namespace should appear in list")
	t.Logf("✓ Listed namespaces, found %d namespaces", len(listNsResp.Namespaces))

	// Delete namespace
	err = client.DeleteNamespace(bucketARN, []string{namespaceName})
	require.NoError(t, err, "Failed to delete namespace")
	t.Logf("✓ Deleted namespace: %s", namespaceName)

	// Verify namespace is deleted
	_, err = client.GetNamespace(bucketARN, []string{namespaceName})
	assert.Error(t, err, "Namespace should not exist after deletion")
}

func testTableLifecycle(t *testing.T, client *S3TablesClient) {
	bucketName := "test-table-bucket-" + randomString(8)
	namespaceName := "test_ns"
	tableName := "test_table"

	// Create table bucket
	createBucketResp, err := client.CreateTableBucket(bucketName, nil)
	require.NoError(t, err, "Failed to create table bucket")
	defer client.DeleteTableBucket(createBucketResp.ARN)
	bucketARN := createBucketResp.ARN

	// Create namespace
	_, err = client.CreateNamespace(bucketARN, []string{namespaceName})
	require.NoError(t, err, "Failed to create namespace")
	defer client.DeleteNamespace(bucketARN, []string{namespaceName})

	// Create table with Iceberg schema
	icebergMetadata := &s3tables.TableMetadata{
		Iceberg: &s3tables.IcebergMetadata{
			Schema: s3tables.IcebergSchema{
				Fields: []s3tables.IcebergSchemaField{
					{Name: "id", Type: "int", Required: true},
					{Name: "name", Type: "string"},
					{Name: "value", Type: "int"},
				},
			},
		},
	}
	createTableResp, err := client.CreateTable(bucketARN, []string{namespaceName}, tableName, "ICEBERG", icebergMetadata, nil)
	require.NoError(t, err, "Failed to create table")
	assert.NotEmpty(t, createTableResp.TableARN)
	assert.NotEmpty(t, createTableResp.VersionToken)
	t.Logf("✓ Created table: %s (version: %s)", createTableResp.TableARN, createTableResp.VersionToken)

	// Get table
	getTableResp, err := client.GetTable(bucketARN, []string{namespaceName}, tableName)
	require.NoError(t, err, "Failed to get table")
	assert.Equal(t, tableName, getTableResp.Name)
	assert.Equal(t, "ICEBERG", getTableResp.Format)
	t.Logf("✓ Got table: %s (format: %s)", getTableResp.Name, getTableResp.Format)

	// List tables
	listTablesResp, err := client.ListTables(bucketARN, []string{namespaceName}, "", "", 0)
	require.NoError(t, err, "Failed to list tables")
	found := false
	for _, tbl := range listTablesResp.Tables {
		if tbl.Name == tableName {
			found = true
			break
		}
	}
	assert.True(t, found, "Created table should appear in list")
	t.Logf("✓ Listed tables, found %d tables", len(listTablesResp.Tables))

	// Delete table
	err = client.DeleteTable(bucketARN, []string{namespaceName}, tableName)
	require.NoError(t, err, "Failed to delete table")
	t.Logf("✓ Deleted table: %s", tableName)

	// Verify table is deleted
	_, err = client.GetTable(bucketARN, []string{namespaceName}, tableName)
	assert.Error(t, err, "Table should not exist after deletion")
}

func testTableBucketPolicy(t *testing.T, client *S3TablesClient) {
	bucketName := "test-policy-bucket-" + randomString(8)

	// Create table bucket
	createBucketResp, err := client.CreateTableBucket(bucketName, nil)
	require.NoError(t, err, "Failed to create table bucket")
	defer client.DeleteTableBucket(createBucketResp.ARN)
	bucketARN := createBucketResp.ARN

	// Put bucket policy
	policy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":"*","Action":"s3tables:*","Resource":"*"}]}`
	err = client.PutTableBucketPolicy(bucketARN, policy)
	require.NoError(t, err, "Failed to put table bucket policy")
	t.Logf("✓ Put table bucket policy")

	// Get bucket policy
	getPolicyResp, err := client.GetTableBucketPolicy(bucketARN)
	require.NoError(t, err, "Failed to get table bucket policy")
	assert.Equal(t, policy, getPolicyResp.ResourcePolicy)
	t.Logf("✓ Got table bucket policy")

	// Delete bucket policy
	err = client.DeleteTableBucketPolicy(bucketARN)
	require.NoError(t, err, "Failed to delete table bucket policy")
	t.Logf("✓ Deleted table bucket policy")

	// Verify policy is deleted
	_, err = client.GetTableBucketPolicy(bucketARN)
	assert.Error(t, err, "Policy should not exist after deletion")
}

func testTablePolicy(t *testing.T, client *S3TablesClient) {
	bucketName := "test-table-policy-bucket-" + randomString(8)
	namespaceName := "test_ns"
	tableName := "test_table"

	// Create table bucket
	createBucketResp, err := client.CreateTableBucket(bucketName, nil)
	require.NoError(t, err, "Failed to create table bucket")
	defer client.DeleteTableBucket(createBucketResp.ARN)
	bucketARN := createBucketResp.ARN

	// Create namespace
	_, err = client.CreateNamespace(bucketARN, []string{namespaceName})
	require.NoError(t, err, "Failed to create namespace")
	defer client.DeleteNamespace(bucketARN, []string{namespaceName})

	// Create table
	icebergMetadata := &s3tables.TableMetadata{
		Iceberg: &s3tables.IcebergMetadata{
			Schema: s3tables.IcebergSchema{
				Fields: []s3tables.IcebergSchemaField{
					{Name: "id", Type: "int", Required: true},
					{Name: "name", Type: "string"},
				},
			},
		},
	}
	createTableResp, err := client.CreateTable(bucketARN, []string{namespaceName}, tableName, "ICEBERG", icebergMetadata, nil)
	require.NoError(t, err, "Failed to create table")
	defer client.DeleteTable(bucketARN, []string{namespaceName}, tableName)
	t.Logf("✓ Created table: %s", createTableResp.TableARN)

	// Verify no policy exists initially
	_, err = client.GetTablePolicy(bucketARN, []string{namespaceName}, tableName)
	assert.Error(t, err, "Policy should not exist initially")
	t.Logf("✓ Verified no policy exists initially")

	// Put table policy
	policy := `{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":"*","Action":"s3tables:*","Resource":"*"}]}`
	err = client.PutTablePolicy(bucketARN, []string{namespaceName}, tableName, policy)
	require.NoError(t, err, "Failed to put table policy")
	t.Logf("✓ Put table policy")

	// Get table policy
	getPolicyResp, err := client.GetTablePolicy(bucketARN, []string{namespaceName}, tableName)
	require.NoError(t, err, "Failed to get table policy")
	assert.Equal(t, policy, getPolicyResp.ResourcePolicy)
	t.Logf("✓ Got table policy")

	// Delete table policy
	err = client.DeleteTablePolicy(bucketARN, []string{namespaceName}, tableName)
	require.NoError(t, err, "Failed to delete table policy")
	t.Logf("✓ Deleted table policy")

	// Verify policy is deleted
	_, err = client.GetTablePolicy(bucketARN, []string{namespaceName}, tableName)
	assert.Error(t, err, "Policy should not exist after deletion")
	t.Logf("✓ Verified policy deletion")
}

func testTagging(t *testing.T, client *S3TablesClient) {
	bucketName := "test-tag-bucket-" + randomString(8)

	// Create table bucket with tags
	initialTags := map[string]string{"Environment": "test"}
	createBucketResp, err := client.CreateTableBucket(bucketName, initialTags)
	require.NoError(t, err, "Failed to create table bucket")
	defer client.DeleteTableBucket(createBucketResp.ARN)
	bucketARN := createBucketResp.ARN

	// List tags
	listTagsResp, err := client.ListTagsForResource(bucketARN)
	require.NoError(t, err, "Failed to list tags")
	assert.Equal(t, "test", listTagsResp.Tags["Environment"])
	t.Logf("✓ Listed tags: %v", listTagsResp.Tags)

	// Add more tags
	newTags := map[string]string{"Department": "Engineering"}
	err = client.TagResource(bucketARN, newTags)
	require.NoError(t, err, "Failed to tag resource")
	t.Logf("✓ Added tags")

	// Verify tags
	listTagsResp, err = client.ListTagsForResource(bucketARN)
	require.NoError(t, err, "Failed to list tags")
	assert.Equal(t, "test", listTagsResp.Tags["Environment"])
	assert.Equal(t, "Engineering", listTagsResp.Tags["Department"])
	t.Logf("✓ Verified tags: %v", listTagsResp.Tags)

	// Remove a tag
	err = client.UntagResource(bucketARN, []string{"Environment"})
	require.NoError(t, err, "Failed to untag resource")
	t.Logf("✓ Removed tag")

	// Verify tag is removed
	listTagsResp, err = client.ListTagsForResource(bucketARN)
	require.NoError(t, err, "Failed to list tags")
	_, hasEnvironment := listTagsResp.Tags["Environment"]
	assert.False(t, hasEnvironment, "Environment tag should be removed")
	assert.Equal(t, "Engineering", listTagsResp.Tags["Department"])
	t.Logf("✓ Verified tag removal")
}

// Helper functions

// findAvailablePort finds an available port by binding to port 0
func findAvailablePort() (int, error) {
	listener, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer listener.Close()
	addr := listener.Addr().(*net.TCPAddr)
	return addr.Port, nil
}

// startMiniCluster starts a weed mini instance directly without exec
func startMiniCluster(t *testing.T) (*TestCluster, error) {
	// Find available ports
	masterPort, err := findAvailablePort()
	if err != nil {
		return nil, fmt.Errorf("failed to find master port: %v", err)
	}
	masterGrpcPort, err := findAvailablePort()
	if err != nil {
		return nil, fmt.Errorf("failed to find master grpc port: %v", err)
}
volumePort, err := findAvailablePort()
if err != nil {
return nil, fmt.Errorf("failed to find volume port: %v", err)
}
volumeGrpcPort, err := findAvailablePort()
if err != nil {
return nil, fmt.Errorf("failed to find volume grpc port: %v", err)
}
filerPort, err := findAvailablePort()
if err != nil {
return nil, fmt.Errorf("failed to find filer port: %v", err)
}
filerGrpcPort, err := findAvailablePort()
if err != nil {
return nil, fmt.Errorf("failed to find filer grpc port: %v", err)
}
s3Port, err := findAvailablePort()
if err != nil {
return nil, fmt.Errorf("failed to find s3 port: %v", err)
}
s3GrpcPort, err := findAvailablePort()
if err != nil {
return nil, fmt.Errorf("failed to find s3 grpc port: %v", err)
}
// Create temporary directory for test data
testDir := t.TempDir()
// Ensure no configuration file from previous runs
configFile := filepath.Join(testDir, "mini.options")
_ = os.Remove(configFile)
// Create a cancellable context for coordinated shutdown
ctx, cancel := context.WithCancel(context.Background())
s3Endpoint := fmt.Sprintf("http://127.0.0.1:%d", s3Port)
cluster := &TestCluster{
t: t,
dataDir: testDir,
ctx: ctx,
cancel: cancel,
masterPort: masterPort,
volumePort: volumePort,
filerPort: filerPort,
s3Port: s3Port,
s3Endpoint: s3Endpoint,
}
// Create empty security.toml to disable JWT authentication in tests
securityToml := filepath.Join(testDir, "security.toml")
err = os.WriteFile(securityToml, []byte("# Empty security config for testing\n"), 0644)
if err != nil {
cancel()
return nil, fmt.Errorf("failed to create security.toml: %v", err)
}
// Start weed mini in a goroutine by calling the command directly
cluster.wg.Add(1)
go func() {
defer cluster.wg.Done()
// Protect global state mutation with a mutex
miniClusterMutex.Lock()
defer miniClusterMutex.Unlock()
// Save current directory and args
oldDir, _ := os.Getwd()
oldArgs := os.Args
defer func() {
os.Chdir(oldDir)
os.Args = oldArgs
}()
// Change to test directory so mini picks up security.toml
os.Chdir(testDir)
// Configure args for mini command
os.Args = []string{
"weed",
"-dir=" + testDir,
"-master.port=" + strconv.Itoa(masterPort),
"-master.port.grpc=" + strconv.Itoa(masterGrpcPort),
"-volume.port=" + strconv.Itoa(volumePort),
"-volume.port.grpc=" + strconv.Itoa(volumeGrpcPort),
"-filer.port=" + strconv.Itoa(filerPort),
"-filer.port.grpc=" + strconv.Itoa(filerGrpcPort),
"-s3.port=" + strconv.Itoa(s3Port),
"-s3.port.grpc=" + strconv.Itoa(s3GrpcPort),
"-webdav.port=0", // Disable WebDAV
"-admin.ui=false", // Disable admin UI
"-master.volumeSizeLimitMB=32", // Small volumes for testing
"-ip=127.0.0.1",
"-master.peers=none", // Faster startup
"-s3.iam.readOnly=false", // Enable IAM write operations for tests
}
// Cap glog log file size during tests
glog.MaxSize = 1024 * 1024
// Find and run the mini command
for _, cmd := range command.Commands {
if cmd.Name() == "mini" && cmd.Run != nil {
cmd.Flag.Parse(os.Args[1:])
args := cmd.Flag.Args()
command.MiniClusterCtx = ctx
cmd.Run(cmd, args)
command.MiniClusterCtx = nil
return
}
}
}()
// Wait for S3 service to be ready
err = waitForS3Ready(cluster.s3Endpoint, 30*time.Second)
if err != nil {
cancel()
return nil, fmt.Errorf("S3 service failed to start: %v", err)
}
cluster.isRunning = true
t.Logf("Test cluster started successfully at %s", cluster.s3Endpoint)
return cluster, nil
}
// Stop stops the test cluster
func (c *TestCluster) Stop() {
if c.cancel != nil {
c.cancel()
}
// Give services time to shut down gracefully
if c.isRunning {
time.Sleep(500 * time.Millisecond)
}
// Wait for the mini goroutine to finish
done := make(chan struct{})
go func() {
c.wg.Wait()
close(done)
}()
timer := time.NewTimer(2 * time.Second)
defer timer.Stop()
select {
case <-done:
// Goroutine finished
case <-timer.C:
// Timeout - goroutine doesn't respond to context cancel
// This may indicate the mini cluster didn't shut down cleanly
c.t.Log("Warning: Test cluster shutdown timed out after 2 seconds")
}
// Reset the global cmdMini flags to prevent state leakage to other tests
for _, cmd := range command.Commands {
if cmd.Name() == "mini" {
// Reset flags to defaults
cmd.Flag.VisitAll(func(f *flag.Flag) {
// Reset to default value
f.Value.Set(f.DefValue)
})
break
}
}
}
// waitForS3Ready waits for the S3 service to be ready
func waitForS3Ready(endpoint string, timeout time.Duration) error {
client := &http.Client{Timeout: 1 * time.Second}
deadline := time.Now().Add(timeout)
for time.Now().Before(deadline) {
resp, err := client.Get(endpoint)
if err == nil {
resp.Body.Close()
// Wait a bit more to ensure service is fully ready
time.Sleep(500 * time.Millisecond)
return nil
}
time.Sleep(200 * time.Millisecond)
}
return fmt.Errorf("timeout waiting for S3 service at %s", endpoint)
}
// randomString generates a random string for unique naming
func randomString(length int) string {
const charset = "abcdefghijklmnopqrstuvwxyz0123456789"
b := make([]byte, length)
if _, err := cryptorand.Read(b); err != nil {
panic("failed to generate random string: " + err.Error())
}
// Map bytes onto the charset; the slight modulo bias is acceptable for test names
for i := range b {
b[i] = charset[int(b[i])%len(charset)]
}
return string(b)
}

53
test/s3tables/setup.go

@ -0,0 +1,53 @@
package s3tables
import (
"context"
"net/http"
"sync"
"testing"
"time"
)
// TestCluster manages the weed mini instance for integration testing
type TestCluster struct {
t *testing.T
dataDir string
ctx context.Context
cancel context.CancelFunc
isRunning bool
startOnce sync.Once
wg sync.WaitGroup
masterPort int
volumePort int
filerPort int
s3Port int
s3Endpoint string
}
// S3TablesClient is a simple client for S3 Tables API
type S3TablesClient struct {
endpoint string
region string
accessKey string
secretKey string
client *http.Client
}
// NewS3TablesClient creates a new S3 Tables client
func NewS3TablesClient(endpoint, region, accessKey, secretKey string) *S3TablesClient {
return &S3TablesClient{
endpoint: endpoint,
region: region,
accessKey: accessKey,
secretKey: secretKey,
client: &http.Client{Timeout: 30 * time.Second},
}
}
// Test configuration constants
const (
testRegion = "us-west-2"
testAccessKey = "admin"
testSecretKey = "admin"
testAccountID = "111122223333"
)

22
weed/cluster/lock_client.go

@ -32,17 +32,17 @@ func NewLockClient(grpcDialOption grpc.DialOption, seedFiler pb.ServerAddress) *
}
type LiveLock struct {
key string
renewToken string
expireAtNs int64
hostFiler pb.ServerAddress
cancelCh chan struct{}
grpcDialOption grpc.DialOption
isLocked int32 // 0 = unlocked, 1 = locked; use atomic operations
self string
lc *LockClient
owner string
lockTTL time.Duration
key string
renewToken string
expireAtNs int64
hostFiler pb.ServerAddress
cancelCh chan struct{}
grpcDialOption grpc.DialOption
isLocked int32 // 0 = unlocked, 1 = locked; use atomic operations
self string
lc *LockClient
owner string
lockTTL time.Duration
consecutiveFailures int // Track connection failures to trigger fallback
}

28
weed/command/filer.go

@ -477,23 +477,39 @@ func (fo *FilerOptions) startFiler() {
if filerLocalListener != nil {
go func() {
if err := newHttpServer(defaultMux, tlsConfig).ServeTLS(filerLocalListener, "", ""); err != nil {
glog.Errorf("Filer Fail to serve: %v", e)
glog.Errorf("Filer Fail to serve: %v", err)
}
}()
}
if err := newHttpServer(defaultMux, tlsConfig).ServeTLS(filerListener, "", ""); err != nil {
glog.Fatalf("Filer Fail to serve: %v", e)
httpS := newHttpServer(defaultMux, tlsConfig)
if MiniClusterCtx != nil {
go func() {
<-MiniClusterCtx.Done()
httpS.Shutdown(context.Background())
grpcS.Stop()
}()
}
if err := httpS.ServeTLS(filerListener, "", ""); err != nil && err != http.ErrServerClosed {
glog.Fatalf("Filer Fail to serve: %v", err)
}
} else {
if filerLocalListener != nil {
go func() {
if err := newHttpServer(defaultMux, nil).Serve(filerLocalListener); err != nil {
glog.Errorf("Filer Fail to serve: %v", e)
glog.Errorf("Filer Fail to serve: %v", err)
}
}()
}
if err := newHttpServer(defaultMux, nil).Serve(filerListener); err != nil {
glog.Fatalf("Filer Fail to serve: %v", e)
httpS := newHttpServer(defaultMux, nil)
if MiniClusterCtx != nil {
go func() {
<-MiniClusterCtx.Done()
httpS.Shutdown(context.Background())
grpcS.Stop()
}()
}
if err := httpS.Serve(filerListener); err != nil && err != http.ErrServerClosed {
glog.Fatalf("Filer Fail to serve: %v", err)
}
}
}

8
weed/command/master.go

@ -311,7 +311,13 @@ func startMaster(masterOption MasterOptions, masterWhiteList []string) {
ms.Topo.HashicorpRaft.LeadershipTransfer()
}
})
select {}
if MiniClusterCtx != nil {
<-MiniClusterCtx.Done()
ms.Shutdown()
grpcS.Stop()
} else {
select {}
}
}
func isSingleMasterMode(peers string) bool {

22
weed/command/mini.go

@ -59,6 +59,8 @@ var (
miniEnableS3 *bool
miniEnableAdminUI *bool
miniS3IamReadOnly *bool
// MiniClusterCtx is the context for the mini cluster. If set, the mini cluster will stop when the context is cancelled.
MiniClusterCtx context.Context
)
func init() {
@ -821,7 +823,12 @@ func runMini(cmd *Command, args []string) bool {
// Save configuration to file for persistence and documentation
saveMiniConfiguration(*miniDataFolders)
select {}
if MiniClusterCtx != nil {
<-MiniClusterCtx.Done()
} else {
select {}
}
return true
}
// startMiniServices starts all mini services with proper dependency coordination
@ -928,7 +935,12 @@ func startS3Service() {
func startMiniAdminWithWorker(allServicesReady chan struct{}) {
defer close(allServicesReady) // Ensure channel is always closed on all paths
ctx := context.Background()
var ctx context.Context
if MiniClusterCtx != nil {
ctx = MiniClusterCtx
} else {
ctx = context.Background()
}
// Determine bind IP for health checks
bindIp := getBindIp()
@ -1101,6 +1113,12 @@ func startMiniWorker() {
// Metrics server is already started in the main init function above, so no need to start it again here
// Start the worker
if MiniClusterCtx != nil {
go func() {
<-MiniClusterCtx.Done()
workerInstance.Stop()
}()
}
err = workerInstance.Start()
if err != nil {
glog.Fatalf("Failed to start worker: %v", err)

21
weed/command/s3.go

@ -7,6 +7,7 @@ import (
"fmt"
"io/ioutil"
"net"
"net/http"
"os"
"runtime"
"strings"
@ -405,7 +406,15 @@ func (s3opt *S3Options) startS3Server() bool {
}
}()
}
if err = newHttpServer(router, tlsConfig).ServeTLS(s3ApiListener, "", ""); err != nil {
httpS := newHttpServer(router, tlsConfig)
if MiniClusterCtx != nil {
go func() {
<-MiniClusterCtx.Done()
httpS.Shutdown(context.Background())
grpcS.Stop()
}()
}
if err = httpS.ServeTLS(s3ApiListener, "", ""); err != nil && err != http.ErrServerClosed {
glog.Fatalf("S3 API Server Fail to serve: %v", err)
}
} else {
@ -438,7 +447,15 @@ func (s3opt *S3Options) startS3Server() bool {
}
}()
}
if err = newHttpServer(router, nil).Serve(s3ApiListener); err != nil {
httpS := newHttpServer(router, nil)
if MiniClusterCtx != nil {
go func() {
<-MiniClusterCtx.Done()
httpS.Shutdown(context.Background())
grpcS.Stop()
}()
}
if err = httpS.Serve(s3ApiListener); err != nil && err != http.ErrServerClosed {
glog.Fatalf("S3 API Server Fail to serve: %v", err)
}
}

12
weed/command/volume.go

@ -319,8 +319,16 @@ func (v VolumeServerOptions) startVolumeServer(volumeFolders, maxVolumeCounts, v
stopChan <- true
})
select {
case <-stopChan:
if MiniClusterCtx != nil {
select {
case <-stopChan:
case <-MiniClusterCtx.Done():
shutdown(publicHttpDown, clusterHttpServer, grpcS, volumeServer)
}
} else {
select {
case <-stopChan:
}
}
}

11
weed/command/webdav.go

@ -137,14 +137,21 @@ func (wo *WebDavOption) startWebDav() bool {
glog.Fatalf("WebDav Server listener on %s error: %v", listenAddress, err)
}
if MiniClusterCtx != nil {
go func() {
<-MiniClusterCtx.Done()
httpS.Shutdown(context.Background())
}()
}
if *wo.tlsPrivateKey != "" {
glog.V(0).Infof("Start Seaweed WebDav Server %s at https %s", version.Version(), listenAddress)
if err = httpS.ServeTLS(webDavListener, *wo.tlsCertificate, *wo.tlsPrivateKey); err != nil {
if err = httpS.ServeTLS(webDavListener, *wo.tlsCertificate, *wo.tlsPrivateKey); err != nil && err != http.ErrServerClosed {
glog.Fatalf("WebDav Server Fail to serve: %v", err)
}
} else {
glog.V(0).Infof("Start Seaweed WebDav Server %s at http %s", version.Version(), listenAddress)
if err = httpS.Serve(webDavListener); err != nil {
if err = httpS.Serve(webDavListener); err != nil && err != http.ErrServerClosed {
glog.Fatalf("WebDav Server Fail to serve: %v", err)
}
}

6
weed/s3api/cors/middleware_test.go

@ -453,7 +453,7 @@ func TestMiddlewareVaryHeader(t *testing.T) {
t.Run(tt.name, func(t *testing.T) {
// Setup mocks
bucketChecker := &mockBucketChecker{bucketExists: true}
var errCode s3err.ErrorCode
if tt.bucketConfig == nil {
errCode = s3err.ErrNoSuchCORSConfiguration
@ -503,7 +503,7 @@ func TestMiddlewareVaryHeader(t *testing.T) {
func TestHandleOptionsRequestVaryHeader(t *testing.T) {
// Setup mocks
bucketChecker := &mockBucketChecker{bucketExists: true}
config := &CORSConfiguration{
CORSRules: []CORSRule{
{
@ -528,7 +528,7 @@ func TestHandleOptionsRequestVaryHeader(t *testing.T) {
"bucket": "testbucket",
"object": "testobject",
})
// Set valid CORS headers
req.Header.Set("Origin", "https://example.com")
req.Header.Set("Access-Control-Request-Method", "GET")

4
weed/s3api/s3api_server.go

@ -658,6 +658,10 @@ func (s3a *S3ApiServer) registerRouter(router *mux.Router) {
}
})
// S3 Tables API endpoint
// POST / with X-Amz-Target: S3Tables.<OperationName>
s3a.registerS3TablesRoutes(apiRouter)
// STS API endpoint for AssumeRoleWithWebIdentity
// POST /?Action=AssumeRoleWithWebIdentity&WebIdentityToken=...
if s3a.stsHandlers != nil {

143
weed/s3api/s3api_tables.go

@ -0,0 +1,143 @@
package s3api
import (
"net/http"
"strings"
"github.com/gorilla/mux"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3tables"
)
// s3TablesActionsMap contains all valid S3 Tables operations for O(1) lookup
var s3TablesActionsMap = map[string]struct{}{
"CreateTableBucket": {},
"GetTableBucket": {},
"ListTableBuckets": {},
"DeleteTableBucket": {},
"PutTableBucketPolicy": {},
"GetTableBucketPolicy": {},
"DeleteTableBucketPolicy": {},
"CreateNamespace": {},
"GetNamespace": {},
"ListNamespaces": {},
"DeleteNamespace": {},
"CreateTable": {},
"GetTable": {},
"ListTables": {},
"DeleteTable": {},
"PutTablePolicy": {},
"GetTablePolicy": {},
"DeleteTablePolicy": {},
"TagResource": {},
"ListTagsForResource": {},
"UntagResource": {},
}
// S3TablesApiServer wraps the S3 Tables handler with S3ApiServer's filer access
type S3TablesApiServer struct {
s3a *S3ApiServer
handler *s3tables.S3TablesHandler
}
// NewS3TablesApiServer creates a new S3 Tables API server
func NewS3TablesApiServer(s3a *S3ApiServer) *S3TablesApiServer {
return &S3TablesApiServer{
s3a: s3a,
handler: s3tables.NewS3TablesHandler(),
}
}
// SetRegion sets the AWS region for ARN generation
func (st *S3TablesApiServer) SetRegion(region string) {
st.handler.SetRegion(region)
}
// SetAccountID sets the AWS account ID for ARN generation
func (st *S3TablesApiServer) SetAccountID(accountID string) {
st.handler.SetAccountID(accountID)
}
// S3TablesHandler handles S3 Tables API requests
func (st *S3TablesApiServer) S3TablesHandler(w http.ResponseWriter, r *http.Request) {
st.handler.HandleRequest(w, r, st)
}
// WithFilerClient implements the s3tables.FilerClient interface
func (st *S3TablesApiServer) WithFilerClient(streamingMode bool, fn func(filer_pb.SeaweedFilerClient) error) error {
return st.s3a.WithFilerClient(streamingMode, fn)
}
// registerS3TablesRoutes registers S3 Tables API routes
func (s3a *S3ApiServer) registerS3TablesRoutes(router *mux.Router) {
// Create S3 Tables handler
s3TablesApi := NewS3TablesApiServer(s3a)
// S3 Tables API uses POST with x-amz-target header
// The AWS CLI sends requests with:
// - Content-Type: application/x-amz-json-1.1
// - X-Amz-Target: S3Tables.<OperationName>
// Matcher function to identify S3 Tables requests
s3TablesMatcher := func(r *http.Request, rm *mux.RouteMatch) bool {
// Check for X-Amz-Target header with S3Tables prefix
target := r.Header.Get("X-Amz-Target")
if target != "" && strings.HasPrefix(target, "S3Tables.") {
return true
}
// Also check for specific S3 Tables actions in query string (CLI fallback)
action := r.URL.Query().Get("Action")
if isS3TablesAction(action) {
return true
}
return false
}
// Register the S3 Tables handler wrapped with IAM authentication
router.Methods(http.MethodPost).Path("/").MatcherFunc(s3TablesMatcher).
HandlerFunc(track(s3a.authenticateS3Tables(func(w http.ResponseWriter, r *http.Request) {
s3TablesApi.S3TablesHandler(w, r)
}), "S3Tables"))
glog.V(1).Infof("S3 Tables API enabled")
}
// isS3TablesAction checks if the action is an S3 Tables operation using O(1) map lookup
func isS3TablesAction(action string) bool {
_, ok := s3TablesActionsMap[action]
return ok
}
// authenticateS3Tables wraps the handler with IAM authentication using AuthSignatureOnly
// This authenticates the request but delegates authorization to the S3 Tables handler
// which performs granular permission checks based on the specific operation.
func (s3a *S3ApiServer) authenticateS3Tables(f http.HandlerFunc) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
if !s3a.iam.isEnabled() {
f(w, r)
return
}
// Use AuthSignatureOnly to authenticate the request without authorizing specific actions
identity, errCode := s3a.iam.AuthSignatureOnly(r)
if errCode != s3err.ErrNone {
s3err.WriteErrorResponse(w, r, errCode)
return
}
// Store the authenticated identity in request context
if identity != nil && identity.Name != "" {
ctx := s3_constants.SetIdentityNameInContext(r.Context(), identity.Name)
ctx = s3_constants.SetIdentityInContext(ctx, identity)
r = r.WithContext(ctx)
}
f(w, r)
}
}

139
weed/s3api/s3tables/filer_ops.go

@ -0,0 +1,139 @@
package s3tables
import (
"context"
"errors"
"fmt"
"os"
"time"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)
var (
ErrAttributeNotFound = errors.New("attribute not found")
)
// Filer operations - Common functions for interacting with the filer
// createDirectory creates a new directory at the specified path
func (h *S3TablesHandler) createDirectory(ctx context.Context, client filer_pb.SeaweedFilerClient, path string) error {
dir, name := splitPath(path)
now := time.Now().Unix()
_, err := client.CreateEntry(ctx, &filer_pb.CreateEntryRequest{
Directory: dir,
Entry: &filer_pb.Entry{
Name: name,
IsDirectory: true,
Attributes: &filer_pb.FuseAttributes{
Mtime: now,
Crtime: now,
FileMode: uint32(0755 | os.ModeDir), // Directory mode
},
},
})
return err
}
// setExtendedAttribute sets an extended attribute on an existing entry
func (h *S3TablesHandler) setExtendedAttribute(ctx context.Context, client filer_pb.SeaweedFilerClient, path, key string, data []byte) error {
dir, name := splitPath(path)
// First, get the existing entry
resp, err := filer_pb.LookupEntry(ctx, client, &filer_pb.LookupDirectoryEntryRequest{
Directory: dir,
Name: name,
})
if err != nil {
return err
}
entry := resp.Entry
// Update the extended attributes
if entry.Extended == nil {
entry.Extended = make(map[string][]byte)
}
entry.Extended[key] = data
// Save the updated entry
_, err = client.UpdateEntry(ctx, &filer_pb.UpdateEntryRequest{
Directory: dir,
Entry: entry,
})
return err
}
// getExtendedAttribute gets an extended attribute from an entry
func (h *S3TablesHandler) getExtendedAttribute(ctx context.Context, client filer_pb.SeaweedFilerClient, path, key string) ([]byte, error) {
dir, name := splitPath(path)
resp, err := filer_pb.LookupEntry(ctx, client, &filer_pb.LookupDirectoryEntryRequest{
Directory: dir,
Name: name,
})
if err != nil {
return nil, err
}
if resp.Entry.Extended == nil {
return nil, fmt.Errorf("%w: %s", ErrAttributeNotFound, key)
}
data, ok := resp.Entry.Extended[key]
if !ok {
return nil, fmt.Errorf("%w: %s", ErrAttributeNotFound, key)
}
return data, nil
}
// deleteExtendedAttribute deletes an extended attribute from an entry
func (h *S3TablesHandler) deleteExtendedAttribute(ctx context.Context, client filer_pb.SeaweedFilerClient, path, key string) error {
dir, name := splitPath(path)
// Get the existing entry
resp, err := filer_pb.LookupEntry(ctx, client, &filer_pb.LookupDirectoryEntryRequest{
Directory: dir,
Name: name,
})
if err != nil {
return err
}
entry := resp.Entry
// Remove the extended attribute
if entry.Extended != nil {
delete(entry.Extended, key)
}
// Save the updated entry
_, err = client.UpdateEntry(ctx, &filer_pb.UpdateEntryRequest{
Directory: dir,
Entry: entry,
})
return err
}
// deleteDirectory deletes a directory and all its contents
func (h *S3TablesHandler) deleteDirectory(ctx context.Context, client filer_pb.SeaweedFilerClient, path string) error {
dir, name := splitPath(path)
_, err := client.DeleteEntry(ctx, &filer_pb.DeleteEntryRequest{
Directory: dir,
Name: name,
IsDeleteData: true,
IsRecursive: true,
IgnoreRecursiveError: true,
})
return err
}
// entryExists checks if an entry exists at the given path
func (h *S3TablesHandler) entryExists(ctx context.Context, client filer_pb.SeaweedFilerClient, path string) bool {
dir, name := splitPath(path)
_, err := filer_pb.LookupEntry(ctx, client, &filer_pb.LookupDirectoryEntryRequest{
Directory: dir,
Name: name,
})
return err == nil
}

238
weed/s3api/s3tables/handler.go

@ -0,0 +1,238 @@
package s3tables
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strings"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
)
const (
TablesPath = "/tables"
DefaultAccountID = "000000000000"
DefaultRegion = "us-east-1"
// Extended entry attributes for metadata storage
ExtendedKeyMetadata = "s3tables.metadata"
ExtendedKeyPolicy = "s3tables.policy"
ExtendedKeyTags = "s3tables.tags"
// Maximum request body size (10MB)
maxRequestBodySize = 10 * 1024 * 1024
)
var (
ErrVersionTokenMismatch = errors.New("version token mismatch")
ErrAccessDenied = errors.New("access denied")
)
type ResourceType string
const (
ResourceTypeBucket ResourceType = "bucket"
ResourceTypeTable ResourceType = "table"
)
// S3TablesHandler handles S3 Tables API requests
type S3TablesHandler struct {
region string
accountID string
}
// NewS3TablesHandler creates a new S3 Tables handler
func NewS3TablesHandler() *S3TablesHandler {
return &S3TablesHandler{
region: DefaultRegion,
accountID: DefaultAccountID,
}
}
// SetRegion sets the AWS region for ARN generation
func (h *S3TablesHandler) SetRegion(region string) {
if region != "" {
h.region = region
}
}
// SetAccountID sets the AWS account ID for ARN generation
func (h *S3TablesHandler) SetAccountID(accountID string) {
if accountID != "" {
h.accountID = accountID
}
}
// FilerClient interface for filer operations
type FilerClient interface {
WithFilerClient(streamingMode bool, fn func(client filer_pb.SeaweedFilerClient) error) error
}
// HandleRequest is the main entry point for S3 Tables API requests
func (h *S3TablesHandler) HandleRequest(w http.ResponseWriter, r *http.Request, filerClient FilerClient) {
// S3 Tables API uses x-amz-target header to specify the operation
target := r.Header.Get("X-Amz-Target")
if target == "" {
// Try to get from query parameter for CLI compatibility
target = r.URL.Query().Get("Action")
}
// Extract operation name (e.g., "S3Tables.CreateTableBucket" -> "CreateTableBucket")
operation := target
if idx := strings.LastIndex(target, "."); idx != -1 {
operation = target[idx+1:]
}
glog.V(3).Infof("S3Tables: handling operation %s", operation)
var err error
switch operation {
// Table Bucket operations
case "CreateTableBucket":
err = h.handleCreateTableBucket(w, r, filerClient)
case "GetTableBucket":
err = h.handleGetTableBucket(w, r, filerClient)
case "ListTableBuckets":
err = h.handleListTableBuckets(w, r, filerClient)
case "DeleteTableBucket":
err = h.handleDeleteTableBucket(w, r, filerClient)
// Table Bucket Policy operations
case "PutTableBucketPolicy":
err = h.handlePutTableBucketPolicy(w, r, filerClient)
case "GetTableBucketPolicy":
err = h.handleGetTableBucketPolicy(w, r, filerClient)
case "DeleteTableBucketPolicy":
err = h.handleDeleteTableBucketPolicy(w, r, filerClient)
// Namespace operations
case "CreateNamespace":
err = h.handleCreateNamespace(w, r, filerClient)
case "GetNamespace":
err = h.handleGetNamespace(w, r, filerClient)
case "ListNamespaces":
err = h.handleListNamespaces(w, r, filerClient)
case "DeleteNamespace":
err = h.handleDeleteNamespace(w, r, filerClient)
// Table operations
case "CreateTable":
err = h.handleCreateTable(w, r, filerClient)
case "GetTable":
err = h.handleGetTable(w, r, filerClient)
case "ListTables":
err = h.handleListTables(w, r, filerClient)
case "DeleteTable":
err = h.handleDeleteTable(w, r, filerClient)
// Table Policy operations
case "PutTablePolicy":
err = h.handlePutTablePolicy(w, r, filerClient)
case "GetTablePolicy":
err = h.handleGetTablePolicy(w, r, filerClient)
case "DeleteTablePolicy":
err = h.handleDeleteTablePolicy(w, r, filerClient)
// Tagging operations
case "TagResource":
err = h.handleTagResource(w, r, filerClient)
case "ListTagsForResource":
err = h.handleListTagsForResource(w, r, filerClient)
case "UntagResource":
err = h.handleUntagResource(w, r, filerClient)
default:
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, fmt.Sprintf("Unknown operation: %s", operation))
return
}
if err != nil {
glog.Errorf("S3Tables: error handling %s: %v", operation, err)
}
}
// Principal/authorization helpers
// getAccountID returns the authenticated account ID from the request or the handler's default.
// This is also used as the principal for permission checks, ensuring alignment between
// the caller identity and ownership verification when IAM is enabled.
func (h *S3TablesHandler) getAccountID(r *http.Request) string {
if identityName := s3_constants.GetIdentityNameFromContext(r); identityName != "" {
return identityName
}
if accountID := r.Header.Get(s3_constants.AmzAccountId); accountID != "" {
return accountID
}
return h.accountID
}
// Request/Response helpers
func (h *S3TablesHandler) readRequestBody(r *http.Request, v interface{}) error {
defer r.Body.Close()
// Limit request body size to prevent unbounded reads
limitedReader := io.LimitReader(r.Body, maxRequestBodySize+1)
body, err := io.ReadAll(limitedReader)
if err != nil {
return fmt.Errorf("failed to read request body: %w", err)
}
// Check if body exceeds size limit
if len(body) > maxRequestBodySize {
return fmt.Errorf("request body too large: exceeds maximum size of %d bytes", maxRequestBodySize)
}
if len(body) == 0 {
return nil
}
if err := json.Unmarshal(body, v); err != nil {
return fmt.Errorf("failed to decode request: %w", err)
}
return nil
}
// Response writing helpers
func (h *S3TablesHandler) writeJSON(w http.ResponseWriter, status int, data interface{}) {
w.Header().Set("Content-Type", "application/x-amz-json-1.1")
w.WriteHeader(status)
if data != nil {
if err := json.NewEncoder(w).Encode(data); err != nil {
glog.Errorf("S3Tables: failed to encode response: %v", err)
}
}
}
func (h *S3TablesHandler) writeError(w http.ResponseWriter, status int, code, message string) {
w.Header().Set("Content-Type", "application/x-amz-json-1.1")
w.WriteHeader(status)
errorResponse := map[string]interface{}{
"__type": code,
"message": message,
}
if err := json.NewEncoder(w).Encode(errorResponse); err != nil {
glog.Errorf("S3Tables: failed to encode error response: %v", err)
}
}
// ARN generation helpers
func (h *S3TablesHandler) generateTableBucketARN(ownerAccountID, bucketName string) string {
return fmt.Sprintf("arn:aws:s3tables:%s:%s:bucket/%s", h.region, ownerAccountID, bucketName)
}
func (h *S3TablesHandler) generateTableARN(ownerAccountID, bucketName, tableID string) string {
return fmt.Sprintf("arn:aws:s3tables:%s:%s:bucket/%s/table/%s", h.region, ownerAccountID, bucketName, tableID)
}
func isAuthError(err error) bool {
var authErr *AuthError
return errors.As(err, &authErr) || errors.Is(err, ErrAccessDenied)
}

124
weed/s3api/s3tables/handler_bucket_create.go

@ -0,0 +1,124 @@
package s3tables
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)
// handleCreateTableBucket creates a new table bucket
func (h *S3TablesHandler) handleCreateTableBucket(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
// Check permission
principal := h.getAccountID(r)
if !CanCreateTableBucket(principal, principal, "") {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to create table buckets")
return NewAuthError("CreateTableBucket", principal, "not authorized to create table buckets")
}
var req CreateTableBucketRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
// Validate bucket name
if err := validateBucketName(req.Name); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketPath := getTableBucketPath(req.Name)
// Check if bucket already exists
exists := false
err := filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
_, err := filer_pb.LookupEntry(r.Context(), client, &filer_pb.LookupDirectoryEntryRequest{
Directory: TablesPath,
Name: req.Name,
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
return nil
}
return err
}
exists = true
return nil
})
if err != nil {
glog.Errorf("S3Tables: failed to check bucket existence: %v", err)
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to check bucket existence")
return err
}
if exists {
h.writeError(w, http.StatusConflict, ErrCodeBucketAlreadyExists, fmt.Sprintf("table bucket %s already exists", req.Name))
return errors.New("bucket already exists")
}
// Create the bucket directory and set metadata as extended attributes
now := time.Now()
metadata := &tableBucketMetadata{
Name: req.Name,
CreatedAt: now,
OwnerAccountID: h.getAccountID(r),
}
metadataBytes, err := json.Marshal(metadata)
if err != nil {
glog.Errorf("S3Tables: failed to marshal metadata: %v", err)
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to marshal metadata")
return err
}
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Ensure root tables directory exists
if !h.entryExists(r.Context(), client, TablesPath) {
if err := h.createDirectory(r.Context(), client, TablesPath); err != nil {
return fmt.Errorf("failed to create root tables directory: %w", err)
}
}
// Create bucket directory
if err := h.createDirectory(r.Context(), client, bucketPath); err != nil {
return err
}
// Set metadata as extended attribute
if err := h.setExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata, metadataBytes); err != nil {
return err
}
// Set tags if provided
if len(req.Tags) > 0 {
tagsBytes, err := json.Marshal(req.Tags)
if err != nil {
return fmt.Errorf("failed to marshal tags: %w", err)
}
if err := h.setExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyTags, tagsBytes); err != nil {
return err
}
}
return nil
})
if err != nil {
glog.Errorf("S3Tables: failed to create table bucket %s: %v", req.Name, err)
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to create table bucket")
return err
}
resp := &CreateTableBucketResponse{
ARN: h.generateTableBucketARN(metadata.OwnerAccountID, req.Name),
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
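The handler above relies on `parseBucketNameFromARN` and `generateTableBucketARN` from `utils.go`, which are not part of this diff. As a rough standalone sketch of the table-bucket ARN shape AWS uses (`arn:aws:s3tables:<region>:<account-id>:bucket/<name>`) — the function name and exact validation here are illustrative assumptions, not the utils.go implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// parseBucketNameFromTableBucketARN is a hypothetical standalone sketch of the
// ARN parsing done in utils.go. Table-bucket ARNs take the form:
//   arn:aws:s3tables:<region>:<account-id>:bucket/<bucket-name>
func parseBucketNameFromTableBucketARN(arn string) (string, error) {
	parts := strings.SplitN(arn, ":", 6)
	if len(parts) != 6 || parts[0] != "arn" || parts[2] != "s3tables" {
		return "", fmt.Errorf("invalid table bucket ARN: %s", arn)
	}
	// The resource segment carries the type prefix, e.g. "bucket/my-bucket".
	name, ok := strings.CutPrefix(parts[5], "bucket/")
	if !ok || name == "" {
		return "", fmt.Errorf("invalid table bucket ARN resource: %s", parts[5])
	}
	return name, nil
}

func main() {
	name, err := parseBucketNameFromTableBucketARN("arn:aws:s3tables:us-east-1:123456789012:bucket/my-bucket")
	fmt.Println(name, err)
}
```

The real helper may accept additional ARN variants (tables, namespaces); this sketch only covers the bucket form.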

weed/s3api/s3tables/handler_bucket_get_list_delete.go

@@ -0,0 +1,324 @@
package s3tables
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strings"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)
// handleGetTableBucket gets details of a table bucket
func (h *S3TablesHandler) handleGetTableBucket(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req GetTableBucketRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketPath := getTableBucketPath(bucketName)
var metadata tableBucketMetadata
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &metadata); err != nil {
return fmt.Errorf("failed to unmarshal metadata: %w", err)
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket policy: %w", err)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchBucket, fmt.Sprintf("table bucket %s not found", bucketName))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to get table bucket: %v", err))
}
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanGetTableBucket(principal, metadata.OwnerAccountID, bucketPolicy) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to get table bucket details")
return ErrAccessDenied
}
resp := &GetTableBucketResponse{
ARN: h.generateTableBucketARN(metadata.OwnerAccountID, bucketName),
Name: metadata.Name,
OwnerAccountID: metadata.OwnerAccountID,
CreatedAt: metadata.CreatedAt,
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// handleListTableBuckets lists all table buckets
func (h *S3TablesHandler) handleListTableBuckets(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req ListTableBucketsRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
// Check permission
principal := h.getAccountID(r)
accountID := principal // same caller; results are filtered to this account below
if !CanListTableBuckets(principal, accountID, "") {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to list table buckets")
return NewAuthError("ListTableBuckets", principal, "not authorized to list table buckets")
}
maxBuckets := req.MaxBuckets
if maxBuckets <= 0 {
maxBuckets = 100
}
// Cap to prevent uint32 overflow when used in uint32(maxBuckets*2)
const maxBucketsLimit = 1000
if maxBuckets > maxBucketsLimit {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "MaxBuckets exceeds maximum allowed value")
return fmt.Errorf("invalid maxBuckets value: %d", maxBuckets)
}
var buckets []TableBucketSummary
lastFileName := req.ContinuationToken
err := filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
for len(buckets) < maxBuckets {
resp, err := client.ListEntries(r.Context(), &filer_pb.ListEntriesRequest{
Directory: TablesPath,
Limit: uint32(maxBuckets * 2), // Fetch more than needed to account for filtering
StartFromFileName: lastFileName,
InclusiveStartFrom: lastFileName == "" || lastFileName == req.ContinuationToken,
})
if err != nil {
return err
}
hasMore := false
for {
entry, respErr := resp.Recv()
if respErr != nil {
if respErr == io.EOF {
break
}
return respErr
}
if entry.Entry == nil {
continue
}
// Skip the start item if it was included in the previous page
if len(buckets) == 0 && req.ContinuationToken != "" && entry.Entry.Name == req.ContinuationToken {
continue
}
hasMore = true
lastFileName = entry.Entry.Name
if !entry.Entry.IsDirectory {
continue
}
// Skip entries starting with "."
if strings.HasPrefix(entry.Entry.Name, ".") {
continue
}
// Apply prefix filter
if req.Prefix != "" && !strings.HasPrefix(entry.Entry.Name, req.Prefix) {
continue
}
// Read metadata from extended attribute
data, ok := entry.Entry.Extended[ExtendedKeyMetadata]
if !ok {
continue
}
var metadata tableBucketMetadata
if err := json.Unmarshal(data, &metadata); err != nil {
continue
}
if metadata.OwnerAccountID != accountID {
continue
}
buckets = append(buckets, TableBucketSummary{
ARN: h.generateTableBucketARN(metadata.OwnerAccountID, entry.Entry.Name),
Name: entry.Entry.Name,
CreatedAt: metadata.CreatedAt,
})
if len(buckets) >= maxBuckets {
return nil
}
}
if !hasMore {
break
}
}
return nil
})
if err != nil {
// Check if it's a "not found" error - return empty list in that case
if errors.Is(err, filer_pb.ErrNotFound) {
buckets = []TableBucketSummary{}
} else {
// For other errors, return error response
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to list table buckets: %v", err))
return err
}
}
paginationToken := ""
if len(buckets) >= maxBuckets {
paginationToken = lastFileName
}
resp := &ListTableBucketsResponse{
TableBuckets: buckets,
ContinuationToken: paginationToken,
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// handleDeleteTableBucket deletes a table bucket
func (h *S3TablesHandler) handleDeleteTableBucket(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req DeleteTableBucketRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketPath := getTableBucketPath(bucketName)
// Check if bucket exists and perform ownership + emptiness check in one block
var metadata tableBucketMetadata
var bucketPolicy string
hasChildren := false
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// 1. Get metadata for ownership check
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &metadata); err != nil {
return fmt.Errorf("failed to unmarshal metadata: %w", err)
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
}
// 2. Check permission
principal := h.getAccountID(r)
if !CanDeleteTableBucket(principal, metadata.OwnerAccountID, bucketPolicy) {
return NewAuthError("DeleteTableBucket", principal, fmt.Sprintf("not authorized to delete bucket %s", bucketName))
}
// 3. Check if bucket is empty
resp, err := client.ListEntries(r.Context(), &filer_pb.ListEntriesRequest{
Directory: bucketPath,
Limit: 10,
})
if err != nil {
return err
}
for {
entry, err := resp.Recv()
if err != nil {
if err == io.EOF {
break
}
return err
}
if entry.Entry != nil && !strings.HasPrefix(entry.Entry.Name, ".") {
hasChildren = true
break
}
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchBucket, fmt.Sprintf("table bucket %s not found", bucketName))
} else if isAuthError(err) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, err.Error())
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to delete table bucket: %v", err))
}
return err
}
if hasChildren {
h.writeError(w, http.StatusConflict, ErrCodeBucketNotEmpty, "table bucket is not empty")
return fmt.Errorf("bucket not empty")
}
// Delete the bucket
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
return h.deleteDirectory(r.Context(), client, bucketPath)
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to delete table bucket")
return err
}
h.writeJSON(w, http.StatusOK, nil)
return nil
}
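Both list handlers page through filer entries in name order, using the last returned name as the continuation token and skipping that name on the next call. A simplified model of the token scheme over a plain sorted slice — not the actual filer streaming API:

```go
package main

import (
	"fmt"
	"sort"
)

// listPage mimics the pagination used by handleListTableBuckets and
// handleListNamespaces: entries come back sorted by name, and the continuation
// token is the last name of the previous page, which the next page must skip.
func listPage(all []string, token string, max int) (page []string, next string) {
	sort.Strings(all)
	for _, name := range all {
		if token != "" && name <= token {
			continue // skip everything up to and including the token
		}
		page = append(page, name)
		if len(page) == max {
			next = name
			break
		}
	}
	return page, next
}

func main() {
	names := []string{"alpha", "beta", "gamma", "delta"}
	p1, tok := listPage(names, "", 2)
	p2, _ := listPage(names, tok, 2)
	fmt.Println(p1, p2)
}
```

Like the handlers, this sketch emits a token whenever a page fills, so the final page of an exact multiple yields one extra (empty) round trip.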

weed/s3api/s3tables/handler_namespace.go

@@ -0,0 +1,512 @@
package s3tables
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strings"
"time"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)
// handleCreateNamespace creates a new namespace in a table bucket
func (h *S3TablesHandler) handleCreateNamespace(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req CreateNamespaceRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
if len(req.Namespace) == 0 {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "namespace is required")
return fmt.Errorf("namespace is required")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
namespaceName, err := validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
// Check if table bucket exists
bucketPath := getTableBucketPath(bucketName)
var bucketMetadata tableBucketMetadata
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &bucketMetadata); err != nil {
return fmt.Errorf("failed to unmarshal bucket metadata: %w", err)
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket policy: %w", err)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchBucket, fmt.Sprintf("table bucket %s not found", bucketName))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check table bucket: %v", err))
}
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanCreateNamespace(principal, bucketMetadata.OwnerAccountID, bucketPolicy) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to create namespace in this bucket")
return ErrAccessDenied
}
namespacePath := getNamespacePath(bucketName, namespaceName)
// Check if namespace already exists
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
_, err := h.getExtendedAttribute(r.Context(), client, namespacePath, ExtendedKeyMetadata)
return err
})
if err == nil {
h.writeError(w, http.StatusConflict, ErrCodeNamespaceAlreadyExists, fmt.Sprintf("namespace %s already exists", namespaceName))
return fmt.Errorf("namespace already exists")
} else if !errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check namespace: %v", err))
return err
}
// Create the namespace with bucket owner to maintain consistency
// (authorization above ensures the caller has permission to create in this bucket)
now := time.Now()
metadata := &namespaceMetadata{
Namespace: req.Namespace,
CreatedAt: now,
OwnerAccountID: bucketMetadata.OwnerAccountID,
}
metadataBytes, err := json.Marshal(metadata)
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to marshal namespace metadata")
return fmt.Errorf("failed to marshal metadata: %w", err)
}
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Create namespace directory
if err := h.createDirectory(r.Context(), client, namespacePath); err != nil {
return err
}
// Set metadata as extended attribute
if err := h.setExtendedAttribute(r.Context(), client, namespacePath, ExtendedKeyMetadata, metadataBytes); err != nil {
return err
}
return nil
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to create namespace")
return err
}
resp := &CreateNamespaceResponse{
Namespace: req.Namespace,
TableBucketARN: req.TableBucketARN,
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// handleGetNamespace gets details of a namespace
func (h *S3TablesHandler) handleGetNamespace(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req GetNamespaceRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
namespaceName, err := validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
namespacePath := getNamespacePath(bucketName, namespaceName)
bucketPath := getTableBucketPath(bucketName)
// Get namespace and bucket policy
var metadata namespaceMetadata
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, namespacePath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &metadata); err != nil {
return err
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket policy: %w", err)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchNamespace, fmt.Sprintf("namespace %s not found", flattenNamespace(req.Namespace)))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to get namespace: %v", err))
}
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanGetNamespace(principal, metadata.OwnerAccountID, bucketPolicy) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchNamespace, "namespace not found")
return ErrAccessDenied
}
resp := &GetNamespaceResponse{
Namespace: metadata.Namespace,
CreatedAt: metadata.CreatedAt,
OwnerAccountID: metadata.OwnerAccountID,
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// handleListNamespaces lists all namespaces in a table bucket
func (h *S3TablesHandler) handleListNamespaces(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req ListNamespacesRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
maxNamespaces := req.MaxNamespaces
if maxNamespaces <= 0 {
maxNamespaces = 100
}
bucketPath := getTableBucketPath(bucketName)
// Check permission (check bucket ownership)
var bucketMetadata tableBucketMetadata
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &bucketMetadata); err != nil {
return fmt.Errorf("failed to unmarshal bucket metadata: %w", err)
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket policy: %w", err)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchBucket, fmt.Sprintf("table bucket %s not found", bucketName))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to list namespaces: %v", err))
}
return err
}
principal := h.getAccountID(r)
if !CanListNamespaces(principal, bucketMetadata.OwnerAccountID, bucketPolicy) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchBucket, fmt.Sprintf("table bucket %s not found", bucketName))
return ErrAccessDenied
}
var namespaces []NamespaceSummary
lastFileName := req.ContinuationToken
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
for len(namespaces) < maxNamespaces {
resp, err := client.ListEntries(r.Context(), &filer_pb.ListEntriesRequest{
Directory: bucketPath,
Limit: uint32(maxNamespaces * 2),
StartFromFileName: lastFileName,
InclusiveStartFrom: lastFileName == "" || lastFileName == req.ContinuationToken,
})
if err != nil {
return err
}
hasMore := false
for {
entry, respErr := resp.Recv()
if respErr != nil {
if respErr == io.EOF {
break
}
return respErr
}
if entry.Entry == nil {
continue
}
// Skip the start item if it was included in the previous page
if len(namespaces) == 0 && req.ContinuationToken != "" && entry.Entry.Name == req.ContinuationToken {
continue
}
hasMore = true
lastFileName = entry.Entry.Name
if !entry.Entry.IsDirectory {
continue
}
// Skip hidden entries
if strings.HasPrefix(entry.Entry.Name, ".") {
continue
}
// Apply prefix filter
if req.Prefix != "" && !strings.HasPrefix(entry.Entry.Name, req.Prefix) {
continue
}
// Read metadata from extended attribute
data, ok := entry.Entry.Extended[ExtendedKeyMetadata]
if !ok {
continue
}
var metadata namespaceMetadata
if err := json.Unmarshal(data, &metadata); err != nil {
continue
}
if metadata.OwnerAccountID != bucketMetadata.OwnerAccountID {
continue
}
namespaces = append(namespaces, NamespaceSummary{
Namespace: metadata.Namespace,
CreatedAt: metadata.CreatedAt,
})
if len(namespaces) >= maxNamespaces {
return nil
}
}
if !hasMore {
break
}
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
namespaces = []NamespaceSummary{}
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to list namespaces: %v", err))
return err
}
}
paginationToken := ""
if len(namespaces) >= maxNamespaces {
paginationToken = lastFileName
}
resp := &ListNamespacesResponse{
Namespaces: namespaces,
ContinuationToken: paginationToken,
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// handleDeleteNamespace deletes a namespace from a table bucket
func (h *S3TablesHandler) handleDeleteNamespace(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req DeleteNamespaceRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
namespaceName, err := validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
namespacePath := getNamespacePath(bucketName, namespaceName)
bucketPath := getTableBucketPath(bucketName)
// Check if namespace exists and get metadata for permission check
var metadata namespaceMetadata
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, namespacePath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &metadata); err != nil {
return fmt.Errorf("failed to unmarshal metadata: %w", err)
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket policy: %w", err)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchNamespace, fmt.Sprintf("namespace %s not found", flattenNamespace(req.Namespace)))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to get namespace metadata: %v", err))
}
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanDeleteNamespace(principal, metadata.OwnerAccountID, bucketPolicy) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchNamespace, "namespace not found")
return ErrAccessDenied
}
// Check if namespace is empty
hasChildren := false
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
resp, err := client.ListEntries(r.Context(), &filer_pb.ListEntriesRequest{
Directory: namespacePath,
Limit: 10,
})
if err != nil {
return err
}
for {
entry, err := resp.Recv()
if err != nil {
if err == io.EOF {
break
}
return err
}
if entry.Entry != nil && !strings.HasPrefix(entry.Entry.Name, ".") {
hasChildren = true
break
}
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchNamespace, fmt.Sprintf("namespace %s not found", flattenNamespace(req.Namespace)))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to list namespace entries: %v", err))
}
return err
}
if hasChildren {
h.writeError(w, http.StatusConflict, ErrCodeNamespaceNotEmpty, "namespace is not empty")
return fmt.Errorf("namespace not empty")
}
// Delete the namespace
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
return h.deleteDirectory(r.Context(), client, namespacePath)
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to delete namespace")
return err
}
h.writeJSON(w, http.StatusOK, nil)
return nil
}

weed/s3api/s3tables/handler_policy.go

@@ -0,0 +1,853 @@
package s3tables
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"strings"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)
// extractResourceOwnerAndBucket extracts ownership info and bucket name from resource metadata.
// This helper consolidates the repeated pattern used in handleTagResource, handleListTagsForResource,
// and handleUntagResource.
func (h *S3TablesHandler) extractResourceOwnerAndBucket(
data []byte,
resourcePath string,
rType ResourceType,
) (ownerAccountID, bucketName string, err error) {
// Extract bucket name from resource path (format: /tables/{bucket}/... for both tables and buckets)
parts := strings.Split(strings.Trim(resourcePath, "/"), "/")
if len(parts) >= 2 {
bucketName = parts[1]
}
if rType == ResourceTypeTable {
var meta tableMetadataInternal
if err := json.Unmarshal(data, &meta); err != nil {
return "", "", err
}
ownerAccountID = meta.OwnerAccountID
} else {
var meta tableBucketMetadata
if err := json.Unmarshal(data, &meta); err != nil {
return "", "", err
}
ownerAccountID = meta.OwnerAccountID
}
return ownerAccountID, bucketName, nil
}
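extractResourceOwnerAndBucket pulls the bucket name out of the shared /tables/{bucket}/... layout purely by path position. A tiny standalone sketch of that split — the path value and function name are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// bucketFromResourcePath extracts the bucket name from a resource path of the
// form /tables/{bucket}/..., mirroring the split in extractResourceOwnerAndBucket.
func bucketFromResourcePath(resourcePath string) string {
	parts := strings.Split(strings.Trim(resourcePath, "/"), "/")
	if len(parts) >= 2 {
		return parts[1] // the bucket is always the second path component
	}
	return ""
}

func main() {
	fmt.Println(bucketFromResourcePath("/tables/my-bucket/my-namespace/my-table"))
}
```

Because both table and bucket paths share the same prefix, the helper can use one split for either resource type.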
// handlePutTableBucketPolicy puts a policy on a table bucket
func (h *S3TablesHandler) handlePutTableBucketPolicy(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req PutTableBucketPolicyRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
if req.ResourcePolicy == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "resourcePolicy is required")
return fmt.Errorf("resourcePolicy is required")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
// Check if bucket exists and get metadata for ownership check
bucketPath := getTableBucketPath(bucketName)
var bucketMetadata tableBucketMetadata
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &bucketMetadata); err != nil {
return fmt.Errorf("failed to unmarshal bucket metadata: %w", err)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchBucket, fmt.Sprintf("table bucket %s not found", bucketName))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check table bucket: %v", err))
}
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanPutTableBucketPolicy(principal, bucketMetadata.OwnerAccountID, "") {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to put table bucket policy")
return NewAuthError("PutTableBucketPolicy", principal, "not authorized to put table bucket policy")
}
// Write policy
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
return h.setExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy, []byte(req.ResourcePolicy))
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to put table bucket policy")
return err
}
h.writeJSON(w, http.StatusOK, nil)
return nil
}
// handleGetTableBucketPolicy gets the policy of a table bucket
func (h *S3TablesHandler) handleGetTableBucketPolicy(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req GetTableBucketPolicyRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketPath := getTableBucketPath(bucketName)
var policy []byte
var bucketMetadata tableBucketMetadata
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Get metadata for ownership check
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &bucketMetadata); err != nil {
return fmt.Errorf("failed to unmarshal bucket metadata: %w", err)
}
// Get policy
policy, err = h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
return err
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchBucket, fmt.Sprintf("table bucket %s not found", bucketName))
return err
}
if errors.Is(err, ErrAttributeNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchPolicy, "table bucket policy not found")
return err
}
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to get table bucket policy: %v", err))
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanGetTableBucketPolicy(principal, bucketMetadata.OwnerAccountID, string(policy)) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to get table bucket policy")
return NewAuthError("GetTableBucketPolicy", principal, "not authorized to get table bucket policy")
}
resp := &GetTableBucketPolicyResponse{
ResourcePolicy: string(policy),
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// handleDeleteTableBucketPolicy deletes the policy of a table bucket
func (h *S3TablesHandler) handleDeleteTableBucketPolicy(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req DeleteTableBucketPolicyRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketPath := getTableBucketPath(bucketName)
// Check if bucket exists and get metadata for ownership check
var bucketMetadata tableBucketMetadata
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &bucketMetadata); err != nil {
return fmt.Errorf("failed to unmarshal bucket metadata: %w", err)
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err != nil {
if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to read bucket policy: %w", err)
}
// Policy not found is not an error; bucketPolicy remains empty
} else {
bucketPolicy = string(policyData)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchBucket, fmt.Sprintf("table bucket %s not found", bucketName))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check table bucket: %v", err))
}
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanDeleteTableBucketPolicy(principal, bucketMetadata.OwnerAccountID, bucketPolicy) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to delete table bucket policy")
return NewAuthError("DeleteTableBucketPolicy", principal, "not authorized to delete table bucket policy")
}
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
return h.deleteExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
})
if err != nil && !errors.Is(err, ErrAttributeNotFound) {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to delete table bucket policy")
return err
}
h.writeJSON(w, http.StatusOK, nil)
return nil
}
// handlePutTablePolicy puts a policy on a table
func (h *S3TablesHandler) handlePutTablePolicy(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req PutTablePolicyRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" || len(req.Namespace) == 0 || req.Name == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN, namespace, and name are required")
return fmt.Errorf("missing required parameters")
}
namespaceName, err := validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.ResourcePolicy == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "resourcePolicy is required")
return fmt.Errorf("resourcePolicy is required")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
// Check if table exists
tableName, err := validateTableName(req.Name)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
tablePath := getTablePath(bucketName, namespaceName, tableName)
bucketPath := getTableBucketPath(bucketName)
var metadata tableMetadataInternal
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &metadata); err != nil {
return fmt.Errorf("failed to unmarshal table metadata: %w", err)
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err != nil {
if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to read bucket policy: %w", err)
}
// Policy not found is not an error; bucketPolicy remains empty
} else {
bucketPolicy = string(policyData)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchTable, fmt.Sprintf("table %s not found", tableName))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check table: %v", err))
}
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanPutTablePolicy(principal, metadata.OwnerAccountID, bucketPolicy) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to put table policy")
return NewAuthError("PutTablePolicy", principal, "not authorized to put table policy")
}
// Write policy
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
return h.setExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyPolicy, []byte(req.ResourcePolicy))
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to put table policy")
return err
}
h.writeJSON(w, http.StatusOK, nil)
return nil
}
// handleGetTablePolicy gets the policy of a table
func (h *S3TablesHandler) handleGetTablePolicy(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req GetTablePolicyRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" || len(req.Namespace) == 0 || req.Name == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN, namespace, and name are required")
return fmt.Errorf("missing required parameters")
}
namespaceName, err := validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
tableName, err := validateTableName(req.Name)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
tablePath := getTablePath(bucketName, namespaceName, tableName)
bucketPath := getTableBucketPath(bucketName)
var policy []byte
var metadata tableMetadataInternal
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Get metadata for ownership check
data, err := h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &metadata); err != nil {
return fmt.Errorf("failed to unmarshal table metadata: %w", err)
}
// Get policy
policy, err = h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyPolicy)
if err != nil {
return err
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err != nil {
if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to read bucket policy: %w", err)
}
// Policy not found is not an error; bucketPolicy remains empty
} else {
bucketPolicy = string(policyData)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchTable, fmt.Sprintf("table %s not found", tableName))
return err
}
if errors.Is(err, ErrAttributeNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchPolicy, "table policy not found")
return err
}
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to get table policy: %v", err))
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanGetTablePolicy(principal, metadata.OwnerAccountID, bucketPolicy) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to get table policy")
return NewAuthError("GetTablePolicy", principal, "not authorized to get table policy")
}
resp := &GetTablePolicyResponse{
ResourcePolicy: string(policy),
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// handleDeleteTablePolicy deletes the policy of a table
func (h *S3TablesHandler) handleDeleteTablePolicy(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req DeleteTablePolicyRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" || len(req.Namespace) == 0 || req.Name == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN, namespace, and name are required")
return fmt.Errorf("missing required parameters")
}
namespaceName, err := validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
tableName, err := validateTableName(req.Name)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
tablePath := getTablePath(bucketName, namespaceName, tableName)
bucketPath := getTableBucketPath(bucketName)
// Check if table exists
var metadata tableMetadataInternal
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &metadata); err != nil {
return fmt.Errorf("failed to unmarshal table metadata: %w", err)
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err != nil {
if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to read bucket policy: %w", err)
}
// Policy not found is not an error; bucketPolicy remains empty
} else {
bucketPolicy = string(policyData)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchTable, fmt.Sprintf("table %s not found", tableName))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check table: %v", err))
}
return err
}
// Check permission
principal := h.getAccountID(r)
if !CanDeleteTablePolicy(principal, metadata.OwnerAccountID, bucketPolicy) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to delete table policy")
return NewAuthError("DeleteTablePolicy", principal, "not authorized to delete table policy")
}
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
return h.deleteExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyPolicy)
})
if err != nil && !errors.Is(err, ErrAttributeNotFound) {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to delete table policy")
return err
}
h.writeJSON(w, http.StatusOK, nil)
return nil
}
// handleTagResource adds tags to a resource
func (h *S3TablesHandler) handleTagResource(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req TagResourceRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.ResourceARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "resourceArn is required")
return fmt.Errorf("resourceArn is required")
}
if len(req.Tags) == 0 {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tags are required")
return fmt.Errorf("tags are required")
}
// Parse resource ARN to determine if it's a bucket or table
resourcePath, extendedKey, rType, err := h.resolveResourcePath(req.ResourceARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
// Read existing tags and merge, AND check permissions based on metadata ownership
existingTags := make(map[string]string)
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Read metadata for ownership check
data, err := h.getExtendedAttribute(r.Context(), client, resourcePath, ExtendedKeyMetadata)
if err != nil {
return err
}
ownerAccountID, bucketName, err := h.extractResourceOwnerAndBucket(data, resourcePath, rType)
if err != nil {
return err
}
// Fetch bucket policy if we have a bucket name
if bucketName != "" {
bucketPath := getTableBucketPath(bucketName)
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err != nil {
if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to read bucket policy: %w", err)
}
// Policy not found is not an error; bucketPolicy remains empty
} else {
bucketPolicy = string(policyData)
}
}
// Check permission inside the closure, since the owner account ID was only just read above
principal := h.getAccountID(r)
if !CanManageTags(principal, ownerAccountID, bucketPolicy) {
return NewAuthError("TagResource", principal, "not authorized to tag resource")
}
// Read existing tags
data, err = h.getExtendedAttribute(r.Context(), client, resourcePath, extendedKey)
if err != nil {
if errors.Is(err, ErrAttributeNotFound) {
return nil // No existing tags, which is fine.
}
return err // Propagate other errors.
}
return json.Unmarshal(data, &existingTags)
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
errorCode := ErrCodeNoSuchBucket
if rType == ResourceTypeTable {
errorCode = ErrCodeNoSuchTable
}
h.writeError(w, http.StatusNotFound, errorCode, "resource not found")
} else if isAuthError(err) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, err.Error())
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to read existing tags: %v", err))
}
return err
}
// Merge new tags
for k, v := range req.Tags {
existingTags[k] = v
}
// Write merged tags
tagsBytes, err := json.Marshal(existingTags)
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to marshal tags")
return fmt.Errorf("failed to marshal tags: %w", err)
}
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
return h.setExtendedAttribute(r.Context(), client, resourcePath, extendedKey, tagsBytes)
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to tag resource")
return err
}
h.writeJSON(w, http.StatusOK, nil)
return nil
}
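The merge-then-write flow in handleTagResource (and the key deletion in handleUntagResource) reduces to plain map operations. A minimal standalone sketch of those semantics; the helper names here are illustrative, not part of the package:

```go
package main

import "fmt"

// mergeTags mirrors the TagResource semantics: existing keys are
// overwritten by incoming values, new keys are added, and untouched
// keys are preserved.
func mergeTags(existing, incoming map[string]string) map[string]string {
	merged := make(map[string]string, len(existing)+len(incoming))
	for k, v := range existing {
		merged[k] = v
	}
	for k, v := range incoming {
		merged[k] = v
	}
	return merged
}

// removeTags mirrors the UntagResource semantics: listed keys are
// deleted and unknown keys are silently ignored.
func removeTags(tags map[string]string, keys []string) {
	for _, k := range keys {
		delete(tags, k)
	}
}

func main() {
	tags := mergeTags(map[string]string{"env": "dev", "team": "core"}, map[string]string{"env": "prod"})
	removeTags(tags, []string{"team", "missing"})
	fmt.Println(tags["env"], len(tags)) // prod 1
}
```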
// handleListTagsForResource lists tags for a resource
func (h *S3TablesHandler) handleListTagsForResource(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req ListTagsForResourceRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.ResourceARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "resourceArn is required")
return fmt.Errorf("resourceArn is required")
}
resourcePath, extendedKey, rType, err := h.resolveResourcePath(req.ResourceARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
tags := make(map[string]string)
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Read metadata for ownership check
data, err := h.getExtendedAttribute(r.Context(), client, resourcePath, ExtendedKeyMetadata)
if err != nil {
return err
}
ownerAccountID, bucketName, err := h.extractResourceOwnerAndBucket(data, resourcePath, rType)
if err != nil {
return err
}
// Fetch bucket policy if we have a bucket name
if bucketName != "" {
bucketPath := getTableBucketPath(bucketName)
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err != nil {
if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to read bucket policy: %w", err)
}
// Policy not found is not an error; bucketPolicy remains empty
} else {
bucketPolicy = string(policyData)
}
}
// Check Permission
principal := h.getAccountID(r)
if !CheckPermission("ListTagsForResource", principal, ownerAccountID, bucketPolicy) {
return NewAuthError("ListTagsForResource", principal, "not authorized to list tags for resource")
}
data, err = h.getExtendedAttribute(r.Context(), client, resourcePath, extendedKey)
if err != nil {
if errors.Is(err, ErrAttributeNotFound) {
return nil // No tags is not an error.
}
return err // Propagate other errors.
}
return json.Unmarshal(data, &tags)
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
errorCode := ErrCodeNoSuchBucket
if rType == ResourceTypeTable {
errorCode = ErrCodeNoSuchTable
}
h.writeError(w, http.StatusNotFound, errorCode, "resource not found")
} else if isAuthError(err) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, err.Error())
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to list tags: %v", err))
}
return err
}
resp := &ListTagsForResourceResponse{
Tags: tags,
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// handleUntagResource removes tags from a resource
func (h *S3TablesHandler) handleUntagResource(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req UntagResourceRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.ResourceARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "resourceArn is required")
return fmt.Errorf("resourceArn is required")
}
if len(req.TagKeys) == 0 {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tagKeys are required")
return fmt.Errorf("tagKeys are required")
}
resourcePath, extendedKey, rType, err := h.resolveResourcePath(req.ResourceARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
// Read existing tags, check permission
tags := make(map[string]string)
var bucketPolicy string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Read metadata for ownership check
data, err := h.getExtendedAttribute(r.Context(), client, resourcePath, ExtendedKeyMetadata)
if err != nil {
return err
}
ownerAccountID, bucketName, err := h.extractResourceOwnerAndBucket(data, resourcePath, rType)
if err != nil {
return err
}
// Fetch bucket policy if we have a bucket name
if bucketName != "" {
bucketPath := getTableBucketPath(bucketName)
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err != nil {
if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to read bucket policy: %w", err)
}
// Policy not found is not an error; bucketPolicy remains empty
} else {
bucketPolicy = string(policyData)
}
}
// Check Permission
principal := h.getAccountID(r)
if !CanManageTags(principal, ownerAccountID, bucketPolicy) {
return NewAuthError("UntagResource", principal, "not authorized to untag resource")
}
data, err = h.getExtendedAttribute(r.Context(), client, resourcePath, extendedKey)
if err != nil {
if errors.Is(err, ErrAttributeNotFound) {
return nil
}
return err
}
return json.Unmarshal(data, &tags)
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
errorCode := ErrCodeNoSuchBucket
if rType == ResourceTypeTable {
errorCode = ErrCodeNoSuchTable
}
h.writeError(w, http.StatusNotFound, errorCode, "resource not found")
} else if isAuthError(err) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, err.Error())
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to read existing tags")
}
return err
}
// Remove specified tags
for _, key := range req.TagKeys {
delete(tags, key)
}
// Write updated tags
tagsBytes, err := json.Marshal(tags)
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to marshal tags")
return fmt.Errorf("failed to marshal tags: %w", err)
}
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
return h.setExtendedAttribute(r.Context(), client, resourcePath, extendedKey, tagsBytes)
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to untag resource")
return err
}
h.writeJSON(w, http.StatusOK, nil)
return nil
}
// resolveResourcePath determines the resource path and extended attribute key from a resource ARN
func (h *S3TablesHandler) resolveResourcePath(resourceARN string) (path string, key string, rType ResourceType, err error) {
// Try parsing as table ARN first
bucketName, namespace, tableName, err := parseTableFromARN(resourceARN)
if err == nil {
return getTablePath(bucketName, namespace, tableName), ExtendedKeyTags, ResourceTypeTable, nil
}
// Try parsing as bucket ARN
bucketName, err = parseBucketNameFromARN(resourceARN)
if err == nil {
return getTableBucketPath(bucketName), ExtendedKeyTags, ResourceTypeBucket, nil
}
return "", "", "", fmt.Errorf("invalid resource ARN: %s", resourceARN)
}

weed/s3api/s3tables/handler_table.go
@@ -0,0 +1,780 @@
package s3tables
import (
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"strings"
"time"
"github.com/seaweedfs/seaweedfs/weed/glog"
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)
// handleCreateTable creates a new table in a namespace
func (h *S3TablesHandler) handleCreateTable(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req CreateTableRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
namespaceName, err := validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.Name == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "name is required")
return fmt.Errorf("name is required")
}
if req.Format == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "format is required")
return fmt.Errorf("format is required")
}
// Validate format
if req.Format != "ICEBERG" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "only ICEBERG format is supported")
return fmt.Errorf("invalid format")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
// Validate table name
tableName, err := validateTableName(req.Name)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
// Check if namespace exists
namespacePath := getNamespacePath(bucketName, namespaceName)
var namespaceMetadata namespaceMetadata
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, namespacePath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &namespaceMetadata); err != nil {
return fmt.Errorf("failed to unmarshal namespace metadata: %w", err)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchNamespace, fmt.Sprintf("namespace %s not found", namespaceName))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check namespace: %v", err))
}
return err
}
// Authorize table creation using policy framework (namespace + bucket policies)
accountID := h.getAccountID(r)
bucketPath := getTableBucketPath(bucketName)
namespacePolicy := ""
bucketPolicy := ""
var bucketMetadata tableBucketMetadata
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Fetch bucket metadata to use correct owner for bucket policy evaluation
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err == nil {
if err := json.Unmarshal(data, &bucketMetadata); err != nil {
return fmt.Errorf("failed to unmarshal bucket metadata: %w", err)
}
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket metadata: %w", err)
}
// Fetch namespace policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, namespacePath, ExtendedKeyPolicy)
if err == nil {
namespacePolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch namespace policy: %w", err)
}
// Fetch bucket policy if it exists
policyData, err = h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket policy: %w", err)
}
return nil
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to fetch policies: %v", err))
return err
}
// Check authorization: namespace policy OR bucket policy OR ownership
// Use namespace owner for namespace policy (consistent with namespace authorization)
nsAllowed := CanCreateTable(accountID, namespaceMetadata.OwnerAccountID, namespacePolicy)
// Use bucket owner for bucket policy (bucket policy applies to bucket-level operations)
bucketAllowed := CanCreateTable(accountID, bucketMetadata.OwnerAccountID, bucketPolicy)
if !nsAllowed && !bucketAllowed {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "not authorized to create table in this namespace")
return ErrAccessDenied
}
tablePath := getTablePath(bucketName, namespaceName, tableName)
// Check if table already exists
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
_, err := h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyMetadata)
return err
})
if err == nil {
h.writeError(w, http.StatusConflict, ErrCodeTableAlreadyExists, fmt.Sprintf("table %s already exists", tableName))
return fmt.Errorf("table already exists")
} else if !errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check table: %v", err))
return err
}
// Create the table
now := time.Now()
versionToken := generateVersionToken()
metadata := &tableMetadataInternal{
Name: tableName,
Namespace: namespaceName,
Format: req.Format,
CreatedAt: now,
ModifiedAt: now,
OwnerAccountID: namespaceMetadata.OwnerAccountID, // Inherit namespace owner for consistency
VersionToken: versionToken,
Metadata: req.Metadata,
}
metadataBytes, err := json.Marshal(metadata)
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to marshal table metadata")
return fmt.Errorf("failed to marshal metadata: %w", err)
}
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Create table directory
if err := h.createDirectory(r.Context(), client, tablePath); err != nil {
return err
}
// Create data subdirectory for Iceberg files
dataPath := tablePath + "/data"
if err := h.createDirectory(r.Context(), client, dataPath); err != nil {
return err
}
// Set metadata as extended attribute
if err := h.setExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyMetadata, metadataBytes); err != nil {
return err
}
// Set tags if provided
if len(req.Tags) > 0 {
tagsBytes, err := json.Marshal(req.Tags)
if err != nil {
return fmt.Errorf("failed to marshal tags: %w", err)
}
if err := h.setExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyTags, tagsBytes); err != nil {
return err
}
}
return nil
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to create table")
return err
}
tableARN := h.generateTableARN(metadata.OwnerAccountID, bucketName, namespaceName+"/"+tableName)
resp := &CreateTableResponse{
TableARN: tableARN,
VersionToken: versionToken,
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
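The authorization step in handleCreateTable ORs two independent policy evaluations: access is granted if either the namespace-level or the bucket-level check allows it, each evaluated against its own resource owner. A toy stand-in for that shape (allow here is a placeholder, not the real CanCreateTable policy engine):

```go
package main

import "fmt"

// allow is a toy policy evaluation: ownership short-circuits, otherwise
// a (toy) policy grant of the form "allow:<principal>" is required.
func allow(principal, owner, policy string) bool {
	return principal == owner || policy == "allow:"+principal
}

// authorized mirrors the OR-of-evaluations check in handleCreateTable:
// namespace policy against the namespace owner OR bucket policy against
// the bucket owner.
func authorized(principal, nsOwner, nsPolicy, bucketOwner, bucketPolicy string) bool {
	return allow(principal, nsOwner, nsPolicy) || allow(principal, bucketOwner, bucketPolicy)
}

func main() {
	// Principal owns neither resource, but the bucket policy grants access.
	fmt.Println(authorized("acct-2", "acct-1", "", "acct-1", "allow:acct-2")) // true
}
```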
// handleGetTable gets details of a table
func (h *S3TablesHandler) handleGetTable(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req GetTableRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
var bucketName, namespace, tableName string
var err error
// Support getting by ARN or by bucket/namespace/name
if req.TableARN != "" {
bucketName, namespace, tableName, err = parseTableFromARN(req.TableARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
} else if req.TableBucketARN != "" && len(req.Namespace) > 0 && req.Name != "" {
bucketName, err = parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
namespace, err = validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
tableName, err = validateTableName(req.Name)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
} else {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "either tableARN or (tableBucketARN, namespace, name) is required")
return fmt.Errorf("missing required parameters")
}
tablePath := getTablePath(bucketName, namespace, tableName)
var metadata tableMetadataInternal
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &metadata); err != nil {
return fmt.Errorf("failed to unmarshal table metadata: %w", err)
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchTable, fmt.Sprintf("table %s not found", tableName))
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to get table: %v", err))
}
return err
}
// Authorize access to the table using policy framework
accountID := h.getAccountID(r)
bucketPath := getTableBucketPath(bucketName)
tablePolicy := ""
bucketPolicy := ""
var bucketMetadata tableBucketMetadata
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
// Fetch bucket metadata to use correct owner for bucket policy evaluation
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err == nil {
if err := json.Unmarshal(data, &bucketMetadata); err != nil {
return fmt.Errorf("failed to unmarshal bucket metadata: %w", err)
}
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket metadata: %w", err)
}
// Fetch table policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyPolicy)
if err == nil {
tablePolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch table policy: %w", err)
}
// Fetch bucket policy if it exists
policyData, err = h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket policy: %w", err)
}
return nil
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to fetch policies: %v", err))
return err
}
// Check authorization: table policy OR bucket policy OR ownership
// Use table owner for table policy (table-level access control)
tableAllowed := CanGetTable(accountID, metadata.OwnerAccountID, tablePolicy)
// Use bucket owner for bucket policy (bucket-level access control)
bucketAllowed := CanGetTable(accountID, bucketMetadata.OwnerAccountID, bucketPolicy)
// Return 404 rather than 403 so unauthorized callers cannot probe for table existence
if !tableAllowed && !bucketAllowed {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchTable, fmt.Sprintf("table %s not found", tableName))
return ErrAccessDenied
}
tableARN := h.generateTableARN(metadata.OwnerAccountID, bucketName, namespace+"/"+tableName)
resp := &GetTableResponse{
Name: metadata.Name,
TableARN: tableARN,
Namespace: []string{metadata.Namespace},
Format: metadata.Format,
CreatedAt: metadata.CreatedAt,
ModifiedAt: metadata.ModifiedAt,
OwnerAccountID: metadata.OwnerAccountID,
MetadataLocation: metadata.MetadataLocation,
VersionToken: metadata.VersionToken,
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// handleListTables lists all tables in a namespace or bucket
func (h *S3TablesHandler) handleListTables(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req ListTablesRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN is required")
return fmt.Errorf("tableBucketARN is required")
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
maxTables := req.MaxTables
if maxTables <= 0 {
maxTables = 100
}
// Cap to prevent uint32 overflow when used in uint32(maxTables*2)
const maxTablesLimit = 1000
if maxTables > maxTablesLimit {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "MaxTables exceeds maximum allowed value")
return fmt.Errorf("invalid maxTables value: %d", maxTables)
}
// Pre-validate namespace before calling WithFilerClient to return 400 on validation errors
var namespaceName string
if len(req.Namespace) > 0 {
var err error
namespaceName, err = validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
}
var tables []TableSummary
var paginationToken string
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
var err error
accountID := h.getAccountID(r)
if len(req.Namespace) > 0 {
// Namespace has already been validated above
namespacePath := getNamespacePath(bucketName, namespaceName)
bucketPath := getTableBucketPath(bucketName)
var nsMeta namespaceMetadata
var bucketMeta tableBucketMetadata
var namespacePolicy, bucketPolicy string
// Fetch namespace metadata and policy
data, err := h.getExtendedAttribute(r.Context(), client, namespacePath, ExtendedKeyMetadata)
if err != nil {
return err // Not Found handled by caller
}
if err := json.Unmarshal(data, &nsMeta); err != nil {
return err
}
// Fetch namespace policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, namespacePath, ExtendedKeyPolicy)
if err == nil {
namespacePolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch namespace policy: %w", err)
}
// Fetch bucket metadata and policy
data, err = h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err == nil {
if err := json.Unmarshal(data, &bucketMeta); err != nil {
return fmt.Errorf("failed to unmarshal bucket metadata: %w", err)
}
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket metadata: %w", err)
}
policyData, err = h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket policy: %w", err)
}
// Authorize listing: namespace policy OR bucket policy OR ownership
nsAllowed := CanListTables(accountID, nsMeta.OwnerAccountID, namespacePolicy)
bucketAllowed := CanListTables(accountID, bucketMeta.OwnerAccountID, bucketPolicy)
if !nsAllowed && !bucketAllowed {
return ErrAccessDenied
}
tables, paginationToken, err = h.listTablesInNamespaceWithClient(r, client, bucketName, namespaceName, req.Prefix, req.ContinuationToken, maxTables)
// err here is the block-scoped variable introduced by := on the metadata
// fetch above; return it directly so listing failures are not silently
// dropped by the closure's final `return err`, which sees the outer nil err.
return err
} else {
// List tables across all namespaces in bucket
bucketPath := getTableBucketPath(bucketName)
var bucketMeta tableBucketMetadata
var bucketPolicy string
// Fetch bucket metadata and policy
data, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &bucketMeta); err != nil {
return err
}
// Fetch bucket policy if it exists
policyData, err := h.getExtendedAttribute(r.Context(), client, bucketPath, ExtendedKeyPolicy)
if err == nil {
bucketPolicy = string(policyData)
} else if !errors.Is(err, ErrAttributeNotFound) {
return fmt.Errorf("failed to fetch bucket policy: %w", err)
}
// Authorize listing: bucket policy OR ownership
if !CanListTables(accountID, bucketMeta.OwnerAccountID, bucketPolicy) {
return ErrAccessDenied
}
tables, paginationToken, err = h.listTablesInAllNamespaces(r, client, bucketName, req.Prefix, req.ContinuationToken, maxTables)
// err here is the block-scoped variable introduced by := on the metadata
// fetch above; return it directly so listing failures are not silently
// dropped by the closure's final `return err`, which sees the outer nil err.
return err
}
return err
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
// If the bucket or namespace directory is not found, return an empty result
tables = []TableSummary{}
paginationToken = ""
} else if isAuthError(err) {
h.writeError(w, http.StatusForbidden, ErrCodeAccessDenied, "Access Denied")
return err
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to list tables: %v", err))
return err
}
}
resp := &ListTablesResponse{
Tables: tables,
ContinuationToken: paginationToken,
}
h.writeJSON(w, http.StatusOK, resp)
return nil
}
// listTablesInNamespaceWithClient lists tables in a specific namespace
func (h *S3TablesHandler) listTablesInNamespaceWithClient(r *http.Request, client filer_pb.SeaweedFilerClient, bucketName, namespaceName, prefix, continuationToken string, maxTables int) ([]TableSummary, string, error) {
namespacePath := getNamespacePath(bucketName, namespaceName)
return h.listTablesWithClient(r, client, namespacePath, bucketName, namespaceName, prefix, continuationToken, maxTables)
}
func (h *S3TablesHandler) listTablesWithClient(r *http.Request, client filer_pb.SeaweedFilerClient, dirPath, bucketName, namespaceName, prefix, continuationToken string, maxTables int) ([]TableSummary, string, error) {
var tables []TableSummary
lastFileName := continuationToken
ctx := r.Context()
for len(tables) < maxTables {
resp, err := client.ListEntries(ctx, &filer_pb.ListEntriesRequest{
Directory: dirPath,
Limit: uint32(maxTables * 2),
StartFromFileName: lastFileName,
InclusiveStartFrom: lastFileName == "" || lastFileName == continuationToken,
})
if err != nil {
return nil, "", err
}
hasMore := false
for {
entry, respErr := resp.Recv()
if respErr != nil {
if respErr == io.EOF {
break
}
return nil, "", respErr
}
if entry.Entry == nil {
continue
}
// Skip the start item if it was included in the previous page
if len(tables) == 0 && continuationToken != "" && entry.Entry.Name == continuationToken {
continue
}
hasMore = true
lastFileName = entry.Entry.Name
if !entry.Entry.IsDirectory {
continue
}
// Skip hidden entries
if strings.HasPrefix(entry.Entry.Name, ".") {
continue
}
// Apply prefix filter
if prefix != "" && !strings.HasPrefix(entry.Entry.Name, prefix) {
continue
}
// Read table metadata from extended attribute
data, ok := entry.Entry.Extended[ExtendedKeyMetadata]
if !ok {
continue
}
var metadata tableMetadataInternal
if err := json.Unmarshal(data, &metadata); err != nil {
continue
}
// Note: Authorization (ownership or policy-based access) is checked at the handler level
// before calling this function. This filter is removed to allow policy-based sharing.
// The caller has already been verified to have ListTables permission for this namespace/bucket.
tableARN := h.generateTableARN(metadata.OwnerAccountID, bucketName, namespaceName+"/"+entry.Entry.Name)
tables = append(tables, TableSummary{
Name: entry.Entry.Name,
TableARN: tableARN,
Namespace: []string{namespaceName},
CreatedAt: metadata.CreatedAt,
ModifiedAt: metadata.ModifiedAt,
})
if len(tables) >= maxTables {
return tables, lastFileName, nil
}
}
if !hasMore {
break
}
}
if len(tables) < maxTables {
lastFileName = ""
}
return tables, lastFileName, nil
}
func (h *S3TablesHandler) listTablesInAllNamespaces(r *http.Request, client filer_pb.SeaweedFilerClient, bucketName, prefix, continuationToken string, maxTables int) ([]TableSummary, string, error) {
bucketPath := getTableBucketPath(bucketName)
ctx := r.Context()
var continuationNamespace string
var startTableName string
if continuationToken != "" {
if parts := strings.SplitN(continuationToken, "/", 2); len(parts) == 2 {
continuationNamespace = parts[0]
startTableName = parts[1]
} else {
continuationNamespace = continuationToken
}
}
var tables []TableSummary
lastNamespace := continuationNamespace
for {
// List namespaces in batches
resp, err := client.ListEntries(ctx, &filer_pb.ListEntriesRequest{
Directory: bucketPath,
Limit: 100,
StartFromFileName: lastNamespace,
InclusiveStartFrom: (lastNamespace == continuationNamespace && startTableName != "") || (lastNamespace == "" && continuationNamespace == ""),
})
if err != nil {
return nil, "", err
}
hasMore := false
for {
entry, respErr := resp.Recv()
if respErr != nil {
if respErr == io.EOF {
break
}
return nil, "", respErr
}
if entry.Entry == nil {
continue
}
hasMore = true
lastNamespace = entry.Entry.Name
if !entry.Entry.IsDirectory || strings.HasPrefix(entry.Entry.Name, ".") {
continue
}
namespace := entry.Entry.Name
tableNameFilter := ""
if namespace == continuationNamespace {
tableNameFilter = startTableName
}
nsTables, nsToken, err := h.listTablesInNamespaceWithClient(r, client, bucketName, namespace, prefix, tableNameFilter, maxTables-len(tables))
if err != nil {
glog.Warningf("S3Tables: failed to list tables in namespace %s/%s: %v", bucketName, namespace, err)
continue
}
tables = append(tables, nsTables...)
if namespace == continuationNamespace {
startTableName = ""
}
if len(tables) >= maxTables {
paginationToken := namespace + "/" + nsToken
if nsToken == "" {
// If we hit the limit exactly at the end of a namespace, the next token should be the next namespace
paginationToken = namespace // This will start from the NEXT namespace in the outer loop
}
return tables, paginationToken, nil
}
}
if !hasMore {
break
}
}
return tables, "", nil
}
// handleDeleteTable deletes a table from a namespace
func (h *S3TablesHandler) handleDeleteTable(w http.ResponseWriter, r *http.Request, filerClient FilerClient) error {
var req DeleteTableRequest
if err := h.readRequestBody(r, &req); err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
if req.TableBucketARN == "" || len(req.Namespace) == 0 || req.Name == "" {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, "tableBucketARN, namespace, and name are required")
return fmt.Errorf("missing required parameters")
}
namespaceName, err := validateNamespace(req.Namespace)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
bucketName, err := parseBucketNameFromARN(req.TableBucketARN)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
tableName, err := validateTableName(req.Name)
if err != nil {
h.writeError(w, http.StatusBadRequest, ErrCodeInvalidRequest, err.Error())
return err
}
tablePath := getTablePath(bucketName, namespaceName, tableName)
// Check if table exists and enforce VersionToken if provided
var metadata tableMetadataInternal
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
data, err := h.getExtendedAttribute(r.Context(), client, tablePath, ExtendedKeyMetadata)
if err != nil {
return err
}
if err := json.Unmarshal(data, &metadata); err != nil {
return fmt.Errorf("failed to unmarshal table metadata: %w", err)
}
if req.VersionToken != "" {
if metadata.VersionToken != req.VersionToken {
return ErrVersionTokenMismatch
}
}
return nil
})
if err != nil {
if errors.Is(err, filer_pb.ErrNotFound) {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchTable, fmt.Sprintf("table %s not found", tableName))
} else if errors.Is(err, ErrVersionTokenMismatch) {
h.writeError(w, http.StatusConflict, ErrCodeConflict, "version token mismatch")
} else {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, fmt.Sprintf("failed to check table: %v", err))
}
return err
}
// Check ownership
if accountID := h.getAccountID(r); accountID != metadata.OwnerAccountID {
h.writeError(w, http.StatusNotFound, ErrCodeNoSuchTable, fmt.Sprintf("table %s not found", tableName))
return ErrAccessDenied
}
// Delete the table
err = filerClient.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {
return h.deleteDirectory(r.Context(), client, tablePath)
})
if err != nil {
h.writeError(w, http.StatusInternalServerError, ErrCodeInternalError, "failed to delete table")
return err
}
h.writeJSON(w, http.StatusOK, nil)
return nil
}
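The cross-namespace listing above encodes its continuation token as `namespace/table`, falling back to a bare `namespace` when the page limit lands exactly on a namespace boundary (so the next page starts from the following namespace). A self-contained sketch of that encode/decode round-trip; the helper names are illustrative, not part of the patch:

```go
package main

import (
	"fmt"
	"strings"
)

// splitContinuationToken mirrors the SplitN logic in listTablesInAllNamespaces:
// "ns/table" resumes mid-namespace, a bare "ns" resumes at the next namespace.
func splitContinuationToken(token string) (namespace, startTable string) {
	if token == "" {
		return "", ""
	}
	if parts := strings.SplitN(token, "/", 2); len(parts) == 2 {
		return parts[0], parts[1]
	}
	return token, ""
}

// joinContinuationToken mirrors the token construction when maxTables is hit.
func joinContinuationToken(namespace, lastTable string) string {
	if lastTable == "" {
		// Limit hit exactly at the end of a namespace: the bare namespace
		// token makes the next page start from the NEXT namespace.
		return namespace
	}
	return namespace + "/" + lastTable
}

func main() {
	ns, tbl := splitContinuationToken("analytics/events")
	fmt.Println(ns, tbl)                            // analytics events
	fmt.Println(joinContinuationToken("analytics", "")) // analytics
}
```

This works because validated namespace and table names cannot contain `/`, so `SplitN` with a limit of 2 is unambiguous.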

weed/s3api/s3tables/permissions.go

@@ -0,0 +1,440 @@
package s3tables
import (
"encoding/json"
"fmt"
"strings"
"github.com/seaweedfs/seaweedfs/weed/s3api/policy_engine"
)
// Permission represents a specific action permission
type Permission string
// IAM Policy structures for evaluation
type PolicyDocument struct {
Version string `json:"Version"`
Statement []Statement `json:"Statement"`
}
// UnmarshalJSON handles both single statement object and array of statements
// AWS allows {"Statement": {...}} or {"Statement": [{...}]}
func (pd *PolicyDocument) UnmarshalJSON(data []byte) error {
type Alias PolicyDocument
aux := &struct {
Statement interface{} `json:"Statement"`
*Alias
}{
Alias: (*Alias)(pd),
}
if err := json.Unmarshal(data, &aux); err != nil {
return err
}
// Handle Statement as either a single object or array
switch s := aux.Statement.(type) {
case map[string]interface{}:
// Single statement object - unmarshal to one Statement
stmtData, err := json.Marshal(s)
if err != nil {
return fmt.Errorf("failed to marshal single statement: %w", err)
}
var stmt Statement
if err := json.Unmarshal(stmtData, &stmt); err != nil {
return fmt.Errorf("failed to unmarshal single statement: %w", err)
}
pd.Statement = []Statement{stmt}
case []interface{}:
// Array of statements - normal handling
stmtData, err := json.Marshal(s)
if err != nil {
return fmt.Errorf("failed to marshal statement array: %w", err)
}
if err := json.Unmarshal(stmtData, &pd.Statement); err != nil {
return fmt.Errorf("failed to unmarshal statement array: %w", err)
}
case nil:
// No statements
pd.Statement = []Statement{}
default:
return fmt.Errorf("Statement must be an object or array, got %T", aux.Statement)
}
return nil
}
type Statement struct {
Effect string `json:"Effect"` // "Allow" or "Deny"
Principal interface{} `json:"Principal"` // Can be string, []string, or map
Action interface{} `json:"Action"` // Can be string or []string
Resource interface{} `json:"Resource"` // Can be string or []string
}
// CheckPermissionWithResource checks if a principal has permission to perform an operation on a specific resource
func CheckPermissionWithResource(operation, principal, owner, resourcePolicy, resourceARN string) bool {
// Deny access if identities are empty
if principal == "" || owner == "" {
return false
}
// Owner always has permission
if principal == owner {
return true
}
// If no policy is provided, deny access (default deny)
if resourcePolicy == "" {
return false
}
// Normalize operation to full IAM-style action name (e.g., "s3tables:CreateTableBucket")
// if not already prefixed
fullAction := operation
if !strings.Contains(operation, ":") {
fullAction = "s3tables:" + operation
}
// Parse and evaluate policy
var policy PolicyDocument
if err := json.Unmarshal([]byte(resourcePolicy), &policy); err != nil {
return false
}
// Evaluate policy statements
// Default is deny, so we need an explicit allow
hasAllow := false
for _, stmt := range policy.Statement {
// Check if principal matches
if !matchesPrincipal(stmt.Principal, principal) {
continue
}
// Check if action matches (using normalized full action name)
if !matchesAction(stmt.Action, fullAction) {
continue
}
// Check if resource matches (if resourceARN specified and Resource field exists)
if resourceARN != "" && !matchesResource(stmt.Resource, resourceARN) {
continue
}
// Statement matches - check effect
if stmt.Effect == "Allow" {
hasAllow = true
} else if stmt.Effect == "Deny" {
// Explicit deny always wins
return false
}
}
return hasAllow
}
// CheckPermission checks if a principal has permission to perform an operation
// (without resource-specific validation - for backward compatibility).
// It is equivalent to CheckPermissionWithResource with an empty resource ARN,
// which skips the resource match.
func CheckPermission(operation, principal, owner, resourcePolicy string) bool {
return CheckPermissionWithResource(operation, principal, owner, resourcePolicy, "")
}
// matchesPrincipal checks if the principal matches the statement's principal
func matchesPrincipal(principalSpec interface{}, principal string) bool {
if principalSpec == nil {
return false
}
switch p := principalSpec.(type) {
case string:
// Direct string match or wildcard
if p == "*" || p == principal {
return true
}
// Support wildcard matching for principals (e.g., "arn:aws:iam::*:user/admin")
return policy_engine.MatchesWildcard(p, principal)
case []interface{}:
// Array of principals
for _, item := range p {
if str, ok := item.(string); ok {
if str == "*" || str == principal {
return true
}
// Support wildcard matching
if policy_engine.MatchesWildcard(str, principal) {
return true
}
}
}
case map[string]interface{}:
// AWS-style principal with service prefix, e.g., {"AWS": "arn:aws:iam::..."}
// For S3 Tables, we primarily care about the AWS key
if aws, ok := p["AWS"]; ok {
return matchesPrincipal(aws, principal)
}
}
return false
}
// matchesAction checks if the action matches the statement's action
func matchesAction(actionSpec interface{}, action string) bool {
if actionSpec == nil {
return false
}
switch a := actionSpec.(type) {
case string:
// Direct match or wildcard
return matchesActionPattern(a, action)
case []interface{}:
// Array of actions
for _, item := range a {
if str, ok := item.(string); ok {
if matchesActionPattern(str, action) {
return true
}
}
}
}
return false
}
// matchesActionPattern checks if an action matches a pattern (supports wildcards)
// This uses the policy_engine.MatchesWildcard function for full wildcard support,
// including middle wildcards (e.g., "s3tables:Get*Table") for complete IAM compatibility.
func matchesActionPattern(pattern, action string) bool {
if pattern == "*" {
return true
}
// Exact match
if pattern == action {
return true
}
// Wildcard match using policy engine's wildcard matcher
// Supports both * (any sequence) and ? (single character) anywhere in the pattern
return policy_engine.MatchesWildcard(pattern, action)
}
// matchesResource checks if the resource ARN matches the statement's resource specification
// Returns true if resource matches or if Resource is not specified (implicit match)
func matchesResource(resourceSpec interface{}, resourceARN string) bool {
// If no Resource is specified, match all resources (implicit *)
if resourceSpec == nil {
return true
}
switch r := resourceSpec.(type) {
case string:
// Direct match or wildcard
return matchesResourcePattern(r, resourceARN)
case []interface{}:
// Array of resources - match if any matches
for _, item := range r {
if str, ok := item.(string); ok {
if matchesResourcePattern(str, resourceARN) {
return true
}
}
}
}
return false
}
// matchesResourcePattern checks if a resource ARN matches a pattern (supports wildcards)
func matchesResourcePattern(pattern, resourceARN string) bool {
if pattern == "*" {
return true
}
// Exact match
if pattern == resourceARN {
return true
}
// Wildcard match using policy engine's wildcard matcher
return policy_engine.MatchesWildcard(pattern, resourceARN)
}
// Helper functions for specific permissions
// CanCreateTableBucket checks if principal can create table buckets
func CanCreateTableBucket(principal, owner, resourcePolicy string) bool {
return CheckPermission("CreateTableBucket", principal, owner, resourcePolicy)
}
// CanGetTableBucket checks if principal can get table bucket details
func CanGetTableBucket(principal, owner, resourcePolicy string) bool {
return CheckPermission("GetTableBucket", principal, owner, resourcePolicy)
}
// CanListTableBuckets checks if principal can list table buckets
func CanListTableBuckets(principal, owner, resourcePolicy string) bool {
return CheckPermission("ListTableBuckets", principal, owner, resourcePolicy)
}
// CanDeleteTableBucket checks if principal can delete table buckets
func CanDeleteTableBucket(principal, owner, resourcePolicy string) bool {
return CheckPermission("DeleteTableBucket", principal, owner, resourcePolicy)
}
// CanPutTableBucketPolicy checks if principal can put table bucket policies
func CanPutTableBucketPolicy(principal, owner, resourcePolicy string) bool {
return CheckPermission("PutTableBucketPolicy", principal, owner, resourcePolicy)
}
// CanGetTableBucketPolicy checks if principal can get table bucket policies
func CanGetTableBucketPolicy(principal, owner, resourcePolicy string) bool {
return CheckPermission("GetTableBucketPolicy", principal, owner, resourcePolicy)
}
// CanDeleteTableBucketPolicy checks if principal can delete table bucket policies
func CanDeleteTableBucketPolicy(principal, owner, resourcePolicy string) bool {
return CheckPermission("DeleteTableBucketPolicy", principal, owner, resourcePolicy)
}
// CanCreateNamespace checks if principal can create namespaces
func CanCreateNamespace(principal, owner, resourcePolicy string) bool {
return CheckPermission("CreateNamespace", principal, owner, resourcePolicy)
}
// CanGetNamespace checks if principal can get namespace details
func CanGetNamespace(principal, owner, resourcePolicy string) bool {
return CheckPermission("GetNamespace", principal, owner, resourcePolicy)
}
// CanListNamespaces checks if principal can list namespaces
func CanListNamespaces(principal, owner, resourcePolicy string) bool {
return CheckPermission("ListNamespaces", principal, owner, resourcePolicy)
}
// CanDeleteNamespace checks if principal can delete namespaces
func CanDeleteNamespace(principal, owner, resourcePolicy string) bool {
return CheckPermission("DeleteNamespace", principal, owner, resourcePolicy)
}
// CanCreateTable checks if principal can create tables
func CanCreateTable(principal, owner, resourcePolicy string) bool {
return CheckPermission("CreateTable", principal, owner, resourcePolicy)
}
// CanGetTable checks if principal can get table details
func CanGetTable(principal, owner, resourcePolicy string) bool {
return CheckPermission("GetTable", principal, owner, resourcePolicy)
}
// CanListTables checks if principal can list tables
func CanListTables(principal, owner, resourcePolicy string) bool {
return CheckPermission("ListTables", principal, owner, resourcePolicy)
}
// CanDeleteTable checks if principal can delete tables
func CanDeleteTable(principal, owner, resourcePolicy string) bool {
return CheckPermission("DeleteTable", principal, owner, resourcePolicy)
}
// CanPutTablePolicy checks if principal can put table policies
func CanPutTablePolicy(principal, owner, resourcePolicy string) bool {
return CheckPermission("PutTablePolicy", principal, owner, resourcePolicy)
}
// CanGetTablePolicy checks if principal can get table policies
func CanGetTablePolicy(principal, owner, resourcePolicy string) bool {
return CheckPermission("GetTablePolicy", principal, owner, resourcePolicy)
}
// CanDeleteTablePolicy checks if principal can delete table policies
func CanDeleteTablePolicy(principal, owner, resourcePolicy string) bool {
return CheckPermission("DeleteTablePolicy", principal, owner, resourcePolicy)
}
// CanTagResource checks if principal can tag a resource
func CanTagResource(principal, owner, resourcePolicy string) bool {
return CheckPermission("TagResource", principal, owner, resourcePolicy)
}
// CanUntagResource checks if principal can untag a resource
func CanUntagResource(principal, owner, resourcePolicy string) bool {
return CheckPermission("UntagResource", principal, owner, resourcePolicy)
}
// CanManageTags checks if principal can manage tags (tag or untag)
func CanManageTags(principal, owner, resourcePolicy string) bool {
return CanTagResource(principal, owner, resourcePolicy) || CanUntagResource(principal, owner, resourcePolicy)
}
// AuthError represents an authorization error
type AuthError struct {
Operation string
Principal string
Message string
}
func (e *AuthError) Error() string {
return "unauthorized: " + e.Principal + " is not permitted to perform " + e.Operation + ": " + e.Message
}
// NewAuthError creates a new authorization error
func NewAuthError(operation, principal, message string) *AuthError {
return &AuthError{
Operation: operation,
Principal: principal,
Message: message,
}
}
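CheckPermissionWithResource follows the standard IAM evaluation order: owner short-circuit, default deny, an explicit Allow required, and an explicit Deny overriding any Allow. A minimal self-contained sketch of that ordering, simplified to string principals and actions (these are not the package's actual types):

```go
package main

import "fmt"

type statement struct {
	Effect    string // "Allow" or "Deny"
	Principal string
	Action    string
}

// evaluate applies the same ordering as CheckPermissionWithResource:
// the owner is always allowed; otherwise the default is deny, a matching
// Allow grants access, and a matching Deny overrides any Allow.
func evaluate(principal, owner string, stmts []statement, action string) bool {
	if principal == "" || owner == "" {
		return false // empty identities are always denied
	}
	if principal == owner {
		return true // owner short-circuit
	}
	allowed := false
	for _, s := range stmts {
		if s.Principal != "*" && s.Principal != principal {
			continue
		}
		if s.Action != "*" && s.Action != action {
			continue
		}
		switch s.Effect {
		case "Deny":
			return false // explicit deny always wins
		case "Allow":
			allowed = true
		}
	}
	return allowed
}

func main() {
	stmts := []statement{
		{Effect: "Allow", Principal: "*", Action: "s3tables:GetTable"},
		{Effect: "Deny", Principal: "badUser", Action: "*"},
	}
	fmt.Println(evaluate("reader", "owner1", stmts, "s3tables:GetTable"))  // true
	fmt.Println(evaluate("badUser", "owner1", stmts, "s3tables:GetTable")) // false
}
```

The real implementation additionally matches wildcard patterns via policy_engine.MatchesWildcard and map-style `{"AWS": ...}` principals, but the Allow/Deny ordering is the same.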

weed/s3api/s3tables/permissions_test.go

@@ -0,0 +1,90 @@
package s3tables
import "testing"
func TestMatchesActionPattern(t *testing.T) {
tests := []struct {
name string
pattern string
action string
expected bool
}{
// Exact matches
{"exact match", "GetTable", "GetTable", true},
{"no match", "GetTable", "DeleteTable", false},
// Universal wildcard
{"universal wildcard", "*", "anything", true},
// Suffix wildcards
{"suffix wildcard match", "s3tables:*", "s3tables:GetTable", true},
{"suffix wildcard no match", "s3tables:*", "iam:GetUser", false},
// Middle wildcards (new capability from policy_engine)
{"middle wildcard Get*Table", "s3tables:Get*Table", "s3tables:GetTable", true},
{"middle wildcard Get*Table no match GetTableBucket", "s3tables:Get*Table", "s3tables:GetTableBucket", false},
{"middle wildcard Get*Table no match DeleteTable", "s3tables:Get*Table", "s3tables:DeleteTable", false},
{"middle wildcard *Table*", "s3tables:*Table*", "s3tables:GetTableBucket", true},
{"middle wildcard *Table* match CreateTable", "s3tables:*Table*", "s3tables:CreateTable", true},
// Question mark wildcards
{"question mark single char", "GetTable?", "GetTableX", true},
{"question mark no match", "GetTable?", "GetTableXY", false},
// Combined wildcards
{"combined * and ?", "s3tables:Get?able*", "s3tables:GetTable", true},
{"combined * and ?", "s3tables:Get?able*", "s3tables:GetTables", true},
{"combined no match - ? needs 1 char", "s3tables:Get?able*", "s3tables:Getable", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := matchesActionPattern(tt.pattern, tt.action)
if result != tt.expected {
t.Errorf("matchesActionPattern(%q, %q) = %v, want %v", tt.pattern, tt.action, result, tt.expected)
}
})
}
}
func TestMatchesPrincipal(t *testing.T) {
tests := []struct {
name string
principalSpec interface{}
principal string
expected bool
}{
// String principals
{"exact match", "user123", "user123", true},
{"no match", "user123", "user456", false},
{"universal wildcard", "*", "anyone", true},
// Wildcard principals
{"prefix wildcard", "arn:aws:iam::123456789012:user/*", "arn:aws:iam::123456789012:user/admin", true},
{"prefix wildcard no match", "arn:aws:iam::123456789012:user/*", "arn:aws:iam::987654321098:user/admin", false},
{"middle wildcard", "arn:aws:iam::*:user/admin", "arn:aws:iam::123456789012:user/admin", true},
// Array of principals
{"array match first", []interface{}{"user1", "user2"}, "user1", true},
{"array match second", []interface{}{"user1", "user2"}, "user2", true},
{"array no match", []interface{}{"user1", "user2"}, "user3", false},
{"array wildcard", []interface{}{"user1", "arn:aws:iam::*:user/admin"}, "arn:aws:iam::123:user/admin", true},
// Map-style AWS principals
{"AWS map exact", map[string]interface{}{"AWS": "user123"}, "user123", true},
{"AWS map wildcard", map[string]interface{}{"AWS": "arn:aws:iam::*:user/admin"}, "arn:aws:iam::123:user/admin", true},
{"AWS map array", map[string]interface{}{"AWS": []interface{}{"user1", "user2"}}, "user1", true},
// Nil/empty cases
{"nil principal", nil, "user123", false},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
result := matchesPrincipal(tt.principalSpec, tt.principal)
if result != tt.expected {
t.Errorf("matchesPrincipal(%v, %q) = %v, want %v", tt.principalSpec, tt.principal, result, tt.expected)
}
})
}
}

weed/s3api/s3tables/types.go

@@ -0,0 +1,291 @@
package s3tables
import "time"
// Table bucket types
type TableBucket struct {
ARN string `json:"arn"`
Name string `json:"name"`
OwnerAccountID string `json:"ownerAccountId"`
CreatedAt time.Time `json:"createdAt"`
}
type CreateTableBucketRequest struct {
Name string `json:"name"`
Tags map[string]string `json:"tags,omitempty"`
}
type CreateTableBucketResponse struct {
ARN string `json:"arn"`
}
type GetTableBucketRequest struct {
TableBucketARN string `json:"tableBucketARN"`
}
type GetTableBucketResponse struct {
ARN string `json:"arn"`
Name string `json:"name"`
OwnerAccountID string `json:"ownerAccountId"`
CreatedAt time.Time `json:"createdAt"`
}
type ListTableBucketsRequest struct {
Prefix string `json:"prefix,omitempty"`
ContinuationToken string `json:"continuationToken,omitempty"`
MaxBuckets int `json:"maxBuckets,omitempty"`
}
type TableBucketSummary struct {
ARN string `json:"arn"`
Name string `json:"name"`
CreatedAt time.Time `json:"createdAt"`
}
type ListTableBucketsResponse struct {
TableBuckets []TableBucketSummary `json:"tableBuckets"`
ContinuationToken string `json:"continuationToken,omitempty"`
}
type DeleteTableBucketRequest struct {
TableBucketARN string `json:"tableBucketARN"`
}
// Table bucket policy types
type PutTableBucketPolicyRequest struct {
TableBucketARN string `json:"tableBucketARN"`
ResourcePolicy string `json:"resourcePolicy"`
}
type GetTableBucketPolicyRequest struct {
TableBucketARN string `json:"tableBucketARN"`
}
type GetTableBucketPolicyResponse struct {
ResourcePolicy string `json:"resourcePolicy"`
}
type DeleteTableBucketPolicyRequest struct {
TableBucketARN string `json:"tableBucketARN"`
}
// Namespace types

type Namespace struct {
	Namespace      []string  `json:"namespace"`
	CreatedAt      time.Time `json:"createdAt"`
	OwnerAccountID string    `json:"ownerAccountId"`
}

type CreateNamespaceRequest struct {
	TableBucketARN string   `json:"tableBucketARN"`
	Namespace      []string `json:"namespace"`
}

type CreateNamespaceResponse struct {
	Namespace      []string `json:"namespace"`
	TableBucketARN string   `json:"tableBucketARN"`
}

type GetNamespaceRequest struct {
	TableBucketARN string   `json:"tableBucketARN"`
	Namespace      []string `json:"namespace"`
}

type GetNamespaceResponse struct {
	Namespace      []string  `json:"namespace"`
	CreatedAt      time.Time `json:"createdAt"`
	OwnerAccountID string    `json:"ownerAccountId"`
}

type ListNamespacesRequest struct {
	TableBucketARN    string `json:"tableBucketARN"`
	Prefix            string `json:"prefix,omitempty"`
	ContinuationToken string `json:"continuationToken,omitempty"`
	MaxNamespaces     int    `json:"maxNamespaces,omitempty"`
}

type NamespaceSummary struct {
	Namespace []string  `json:"namespace"`
	CreatedAt time.Time `json:"createdAt"`
}

type ListNamespacesResponse struct {
	Namespaces        []NamespaceSummary `json:"namespaces"`
	ContinuationToken string             `json:"continuationToken,omitempty"`
}

type DeleteNamespaceRequest struct {
	TableBucketARN string   `json:"tableBucketARN"`
	Namespace      []string `json:"namespace"`
}
// Table types

type IcebergSchemaField struct {
	Name     string `json:"name"`
	Type     string `json:"type"`
	Required bool   `json:"required,omitempty"`
}

type IcebergSchema struct {
	Fields []IcebergSchemaField `json:"fields"`
}

type IcebergMetadata struct {
	Schema IcebergSchema `json:"schema"`
}

type TableMetadata struct {
	Iceberg *IcebergMetadata `json:"iceberg,omitempty"`
}

type Table struct {
	Name             string         `json:"name"`
	TableARN         string         `json:"tableARN"`
	Namespace        []string       `json:"namespace"`
	Format           string         `json:"format"`
	CreatedAt        time.Time      `json:"createdAt"`
	ModifiedAt       time.Time      `json:"modifiedAt"`
	OwnerAccountID   string         `json:"ownerAccountId"`
	MetadataLocation string         `json:"metadataLocation,omitempty"`
	Metadata         *TableMetadata `json:"metadata,omitempty"`
}

type CreateTableRequest struct {
	TableBucketARN string            `json:"tableBucketARN"`
	Namespace      []string          `json:"namespace"`
	Name           string            `json:"name"`
	Format         string            `json:"format"`
	Metadata       *TableMetadata    `json:"metadata,omitempty"`
	Tags           map[string]string `json:"tags,omitempty"`
}

type CreateTableResponse struct {
	TableARN         string `json:"tableARN"`
	VersionToken     string `json:"versionToken"`
	MetadataLocation string `json:"metadataLocation,omitempty"`
}

type GetTableRequest struct {
	TableBucketARN string   `json:"tableBucketARN,omitempty"`
	Namespace      []string `json:"namespace,omitempty"`
	Name           string   `json:"name,omitempty"`
	TableARN       string   `json:"tableARN,omitempty"`
}

type GetTableResponse struct {
	Name             string    `json:"name"`
	TableARN         string    `json:"tableARN"`
	Namespace        []string  `json:"namespace"`
	Format           string    `json:"format"`
	CreatedAt        time.Time `json:"createdAt"`
	ModifiedAt       time.Time `json:"modifiedAt"`
	OwnerAccountID   string    `json:"ownerAccountId"`
	MetadataLocation string    `json:"metadataLocation,omitempty"`
	VersionToken     string    `json:"versionToken"`
}

type ListTablesRequest struct {
	TableBucketARN    string   `json:"tableBucketARN"`
	Namespace         []string `json:"namespace,omitempty"`
	Prefix            string   `json:"prefix,omitempty"`
	ContinuationToken string   `json:"continuationToken,omitempty"`
	MaxTables         int      `json:"maxTables,omitempty"`
}

type TableSummary struct {
	Name             string    `json:"name"`
	TableARN         string    `json:"tableARN"`
	Namespace        []string  `json:"namespace"`
	CreatedAt        time.Time `json:"createdAt"`
	ModifiedAt       time.Time `json:"modifiedAt"`
	MetadataLocation string    `json:"metadataLocation,omitempty"`
}

type ListTablesResponse struct {
	Tables            []TableSummary `json:"tables"`
	ContinuationToken string         `json:"continuationToken,omitempty"`
}

type DeleteTableRequest struct {
	TableBucketARN string   `json:"tableBucketARN"`
	Namespace      []string `json:"namespace"`
	Name           string   `json:"name"`
	VersionToken   string   `json:"versionToken,omitempty"`
}
// Table policy types

type PutTablePolicyRequest struct {
	TableBucketARN string   `json:"tableBucketARN"`
	Namespace      []string `json:"namespace"`
	Name           string   `json:"name"`
	ResourcePolicy string   `json:"resourcePolicy"`
}

type GetTablePolicyRequest struct {
	TableBucketARN string   `json:"tableBucketARN"`
	Namespace      []string `json:"namespace"`
	Name           string   `json:"name"`
}

type GetTablePolicyResponse struct {
	ResourcePolicy string `json:"resourcePolicy"`
}

type DeleteTablePolicyRequest struct {
	TableBucketARN string   `json:"tableBucketARN"`
	Namespace      []string `json:"namespace"`
	Name           string   `json:"name"`
}

// Tagging types

type TagResourceRequest struct {
	ResourceARN string            `json:"resourceArn"`
	Tags        map[string]string `json:"tags"`
}

type ListTagsForResourceRequest struct {
	ResourceARN string `json:"resourceArn"`
}

type ListTagsForResourceResponse struct {
	Tags map[string]string `json:"tags"`
}

type UntagResourceRequest struct {
	ResourceARN string   `json:"resourceArn"`
	TagKeys     []string `json:"tagKeys"`
}
// Error types

// S3TablesError is the AWS-style JSON error body returned by S3 Tables operations.
type S3TablesError struct {
	Type    string `json:"__type"`
	Message string `json:"message"`
}

func (e *S3TablesError) Error() string {
	return e.Message
}

// Error codes
const (
	ErrCodeBucketAlreadyExists    = "BucketAlreadyExists"
	ErrCodeBucketNotEmpty         = "BucketNotEmpty"
	ErrCodeNoSuchBucket           = "NoSuchBucket"
	ErrCodeNoSuchNamespace        = "NoSuchNamespace"
	ErrCodeNoSuchTable            = "NoSuchTable"
	ErrCodeNamespaceAlreadyExists = "NamespaceAlreadyExists"
	ErrCodeNamespaceNotEmpty      = "NamespaceNotEmpty"
	ErrCodeTableAlreadyExists     = "TableAlreadyExists"
	ErrCodeAccessDenied           = "AccessDenied"
	ErrCodeInvalidRequest         = "InvalidRequest"
	ErrCodeInternalError          = "InternalError"
	ErrCodeNoSuchPolicy           = "NoSuchPolicy"
	ErrCodeConflict               = "Conflict"
)
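The types above map directly onto the S3 Tables JSON wire format via their struct tags, including the AWS-style `__type` discriminator on errors. A minimal standalone sketch of the resulting encodings, with two of the types copied locally (the sample ARN and account ID are invented):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Local copies of two of the types above, for illustration only.
type CreateNamespaceRequest struct {
	TableBucketARN string   `json:"tableBucketARN"`
	Namespace      []string `json:"namespace"`
}

type S3TablesError struct {
	Type    string `json:"__type"`
	Message string `json:"message"`
}

// encodeReq shows the request body a client would send for CreateNamespace.
func encodeReq() string {
	b, _ := json.Marshal(CreateNamespaceRequest{
		TableBucketARN: "arn:aws:s3tables:us-east-1:123456789012:bucket/analytics",
		Namespace:      []string{"sales"},
	})
	return string(b)
}

// encodeErr shows the error body the service would return on a miss.
func encodeErr() string {
	b, _ := json.Marshal(S3TablesError{Type: "NoSuchNamespace", Message: "namespace not found"})
	return string(b)
}

func main() {
	fmt.Println(encodeReq())
	// {"tableBucketARN":"arn:aws:s3tables:us-east-1:123456789012:bucket/analytics","namespace":["sales"]}
	fmt.Println(encodeErr())
	// {"__type":"NoSuchNamespace","message":"namespace not found"}
}
```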

weed/s3api/s3tables/utils.go

@@ -0,0 +1,268 @@
package s3tables

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
	"net/url"
	"path"
	"regexp"
	"strings"
	"time"
)

const (
	bucketNamePatternStr     = `[a-z0-9-]+`
	tableNamespacePatternStr = `[a-z0-9_]+`
	tableNamePatternStr      = `[a-z0-9_]+`
)

var (
	bucketARNPattern = regexp.MustCompile(`^arn:aws:s3tables:[^:]*:[^:]*:bucket/(` + bucketNamePatternStr + `)$`)
	tableARNPattern  = regexp.MustCompile(`^arn:aws:s3tables:[^:]*:[^:]*:bucket/(` + bucketNamePatternStr + `)/table/(` + tableNamespacePatternStr + `)/(` + tableNamePatternStr + `)$`)
)
// ARN parsing functions

// parseBucketNameFromARN extracts the bucket name from a table bucket ARN.
// ARN format: arn:aws:s3tables:{region}:{account}:bucket/{bucket-name}
func parseBucketNameFromARN(arn string) (string, error) {
	matches := bucketARNPattern.FindStringSubmatch(arn)
	if len(matches) != 2 {
		return "", fmt.Errorf("invalid bucket ARN: %s", arn)
	}
	bucketName := matches[1]
	if !isValidBucketName(bucketName) {
		return "", fmt.Errorf("invalid bucket name in ARN: %s", bucketName)
	}
	return bucketName, nil
}

// parseTableFromARN extracts the bucket name, namespace, and table name from a table ARN.
// ARN format: arn:aws:s3tables:{region}:{account}:bucket/{bucket-name}/table/{namespace}/{table-name}
func parseTableFromARN(arn string) (bucketName, namespace, tableName string, err error) {
	// The regex is aligned with namespace validation (single-segment only).
	matches := tableARNPattern.FindStringSubmatch(arn)
	if len(matches) != 4 {
		return "", "", "", fmt.Errorf("invalid table ARN: %s", arn)
	}
	// The namespace is already constrained by the regex; validate it directly.
	namespace = matches[2]
	_, err = validateNamespace([]string{namespace})
	if err != nil {
		return "", "", "", fmt.Errorf("invalid namespace in ARN: %v", err)
	}
	// URL-decode and validate the table name from the ARN path component.
	tableNameUnescaped, err := url.PathUnescape(matches[3])
	if err != nil {
		return "", "", "", fmt.Errorf("invalid table name encoding in ARN: %v", err)
	}
	if _, err := validateTableName(tableNameUnescaped); err != nil {
		return "", "", "", fmt.Errorf("invalid table name in ARN: %v", err)
	}
	return matches[1], namespace, tableNameUnescaped, nil
}

// Path helpers

// getTableBucketPath returns the filer path for a table bucket.
func getTableBucketPath(bucketName string) string {
	return path.Join(TablesPath, bucketName)
}

// getNamespacePath returns the filer path for a namespace.
func getNamespacePath(bucketName, namespace string) string {
	return path.Join(TablesPath, bucketName, namespace)
}

// getTablePath returns the filer path for a table.
func getTablePath(bucketName, namespace, tableName string) string {
	return path.Join(TablesPath, bucketName, namespace, tableName)
}
// Metadata structures

// tableBucketMetadata stores metadata for a table bucket.
type tableBucketMetadata struct {
	Name           string    `json:"name"`
	CreatedAt      time.Time `json:"createdAt"`
	OwnerAccountID string    `json:"ownerAccountId"`
}

// namespaceMetadata stores metadata for a namespace.
type namespaceMetadata struct {
	Namespace      []string  `json:"namespace"`
	CreatedAt      time.Time `json:"createdAt"`
	OwnerAccountID string    `json:"ownerAccountId"`
}

// tableMetadataInternal stores metadata for a table.
type tableMetadataInternal struct {
	Name             string         `json:"name"`
	Namespace        string         `json:"namespace"`
	Format           string         `json:"format"`
	CreatedAt        time.Time      `json:"createdAt"`
	ModifiedAt       time.Time      `json:"modifiedAt"`
	OwnerAccountID   string         `json:"ownerAccountId"`
	VersionToken     string         `json:"versionToken"`
	MetadataLocation string         `json:"metadataLocation,omitempty"`
	Metadata         *TableMetadata `json:"metadata,omitempty"`
}
// Utility functions

// validateBucketName validates a bucket name and returns an error if invalid.
// Bucket names must contain only lowercase letters, numbers, and hyphens,
// must be between 3 and 63 characters long, must start and end with a letter
// or digit, and must not use reserved prefixes or suffixes.
func validateBucketName(name string) error {
	if name == "" {
		return fmt.Errorf("bucket name is required")
	}
	if len(name) < 3 || len(name) > 63 {
		return fmt.Errorf("bucket name must be between 3 and 63 characters")
	}
	// Must start and end with a letter or digit
	start := name[0]
	end := name[len(name)-1]
	if !((start >= 'a' && start <= 'z') || (start >= '0' && start <= '9')) {
		return fmt.Errorf("bucket name must start with a letter or digit")
	}
	if !((end >= 'a' && end <= 'z') || (end >= '0' && end <= '9')) {
		return fmt.Errorf("bucket name must end with a letter or digit")
	}
	// Allowed characters: a-z, 0-9, -
	for i := 0; i < len(name); i++ {
		ch := name[i]
		if (ch >= 'a' && ch <= 'z') || (ch >= '0' && ch <= '9') || ch == '-' {
			continue
		}
		return fmt.Errorf("bucket name can only contain lowercase letters, numbers, and hyphens")
	}
	// Reserved prefixes
	reservedPrefixes := []string{"xn--", "sthree-", "amzn-s3-demo-", "aws"}
	for _, p := range reservedPrefixes {
		if strings.HasPrefix(name, p) {
			return fmt.Errorf("bucket name cannot start with reserved prefix: %s", p)
		}
	}
	// Reserved suffixes
	reservedSuffixes := []string{"-s3alias", "--ol-s3", "--x-s3", "--table-s3"}
	for _, s := range reservedSuffixes {
		if strings.HasSuffix(name, s) {
			return fmt.Errorf("bucket name cannot end with reserved suffix: %s", s)
		}
	}
	return nil
}

// isValidBucketName reports whether a bucket name is valid (kept for compatibility).
// Deprecated: use validateBucketName instead.
func isValidBucketName(name string) bool {
	return validateBucketName(name) == nil
}

// generateVersionToken generates a unique, unpredictable version token.
func generateVersionToken() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		// Fall back to a timestamp if crypto/rand fails.
		return fmt.Sprintf("%x", time.Now().UnixNano())
	}
	return hex.EncodeToString(b)
}

// splitPath splits a path into its directory and name components using the stdlib.
func splitPath(p string) (dir, name string) {
	dir = path.Dir(p)
	name = path.Base(p)
	return
}
// validateNamespace validates that the provided namespace is supported (single-level).
func validateNamespace(namespace []string) (string, error) {
	if len(namespace) == 0 {
		return "", fmt.Errorf("namespace is required")
	}
	if len(namespace) > 1 {
		return "", fmt.Errorf("multi-level namespaces are not supported")
	}
	name := namespace[0]
	if len(name) < 1 || len(name) > 255 {
		return "", fmt.Errorf("namespace name must be between 1 and 255 characters")
	}
	// Prevent path traversal and multi-segment paths
	if name == "." || name == ".." {
		return "", fmt.Errorf("namespace name cannot be '.' or '..'")
	}
	if strings.Contains(name, "/") {
		return "", fmt.Errorf("namespace name cannot contain '/'")
	}
	// Must start and end with a letter or digit
	start := name[0]
	end := name[len(name)-1]
	if !((start >= 'a' && start <= 'z') || (start >= '0' && start <= '9')) {
		return "", fmt.Errorf("namespace name must start with a letter or digit")
	}
	if !((end >= 'a' && end <= 'z') || (end >= '0' && end <= '9')) {
		return "", fmt.Errorf("namespace name must end with a letter or digit")
	}
	// Allowed characters: a-z, 0-9, _
	for _, ch := range name {
		if (ch >= 'a' && ch <= 'z') || (ch >= '0' && ch <= '9') || ch == '_' {
			continue
		}
		return "", fmt.Errorf("invalid namespace name: only 'a-z', '0-9', and '_' are allowed")
	}
	// Reserved prefix
	if strings.HasPrefix(name, "aws") {
		return "", fmt.Errorf("namespace name cannot start with reserved prefix 'aws'")
	}
	return name, nil
}

// validateTableName validates a table name.
func validateTableName(name string) (string, error) {
	if len(name) < 1 || len(name) > 255 {
		return "", fmt.Errorf("table name must be between 1 and 255 characters")
	}
	if name == "." || name == ".." || strings.Contains(name, "/") {
		return "", fmt.Errorf("invalid table name: cannot be '.', '..', or contain '/'")
	}
	// First character must be a letter or digit
	start := name[0]
	if !((start >= 'a' && start <= 'z') || (start >= '0' && start <= '9')) {
		return "", fmt.Errorf("table name must start with a letter or digit")
	}
	// Allowed characters: a-z, 0-9, _
	for _, ch := range name {
		if (ch >= 'a' && ch <= 'z') || (ch >= '0' && ch <= '9') || ch == '_' {
			continue
		}
		return "", fmt.Errorf("invalid table name: only 'a-z', '0-9', and '_' are allowed")
	}
	return name, nil
}

// flattenNamespace joins namespace elements into a single dot-separated string,
// matching the AWS S3 Tables convention.
func flattenNamespace(namespace []string) string {
	if len(namespace) == 0 {
		return ""
	}
	return strings.Join(namespace, ".")
}
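The ARN helpers above pivot on the two anchored regexes defined at the top of the file. A standalone sketch replicating just the bucket-ARN pattern (the sample ARNs and account ID are invented) shows how a bucket name is extracted and how a plain S3 ARN is rejected:

```go
package main

import (
	"fmt"
	"regexp"
)

// Standalone copy of the bucket-ARN pattern from utils.go, for illustration only.
var bucketARN = regexp.MustCompile(`^arn:aws:s3tables:[^:]*:[^:]*:bucket/([a-z0-9-]+)$`)

// bucketNameFromARN mirrors parseBucketNameFromARN, minus the extra
// bucket-name validation the real helper applies afterwards.
func bucketNameFromARN(arn string) (string, bool) {
	m := bucketARN.FindStringSubmatch(arn)
	if len(m) != 2 {
		return "", false
	}
	return m[1], true
}

func main() {
	name, ok := bucketNameFromARN("arn:aws:s3tables:us-east-1:123456789012:bucket/analytics")
	fmt.Println(name, ok) // analytics true

	// A plain S3 ARN does not match the s3tables service prefix.
	_, ok = bucketNameFromARN("arn:aws:s3:::regular-bucket")
	fmt.Println(ok) // false
}
```

The anchors (`^`/`$`) matter: without them, a bucket name embedded in a longer table ARN would also match.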