* Use Unix sockets for gRPC between co-located services in weed server
Extends the Unix socket gRPC optimization (added for mini mode in #8856)
to `weed server`. Registers Unix socket paths for each service's gRPC
port before startup, so co-located services (master, volume, filer, S3)
communicate via Unix sockets instead of TCP loopback.
Only services actually started in this process get registered. The gRPC
port is resolved early (port + 10000 if unset) so the socket path is
known before any service dials another.
* Refactor gRPC Unix socket registration into a data-driven loop
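As an illustration, a data-driven registration loop of this shape could look like the sketch below; the service table, registry map, and helper names are assumptions for the sketch, not the actual SeaweedFS code:

```go
package main

import "fmt"

// service describes one co-located service's ports. A grpcPort of 0 means
// "derive from port + 10000", mirroring the convention described above.
type service struct {
	name     string
	enabled  bool
	port     int
	grpcPort int
}

// registry maps a gRPC port to a Unix socket path (a hypothetical stand-in
// for the pb package's socket registry).
var registry = map[int]string{}

func registerLocalSockets(services []service, socketDir string) {
	for _, svc := range services {
		if !svc.enabled { // only services actually started in this process
			continue
		}
		grpcPort := svc.grpcPort
		if grpcPort == 0 {
			grpcPort = svc.port + 10000 // resolve early, before any service dials another
		}
		registry[grpcPort] = fmt.Sprintf("%s/%s.%d.sock", socketDir, svc.name, grpcPort)
	}
}

func main() {
	registerLocalSockets([]service{
		{name: "master", enabled: true, port: 9333},
		{name: "volume", enabled: true, port: 8080},
	}, "/tmp/weed-sockets")
	fmt.Println(registry)
}
```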
* Fix stale admin lock metric when lock expires and is reacquired (#8857)
When a lock expired without an explicit unlock and a different client
acquired it, the old client's metric was never cleared, causing
multiple clients to appear as simultaneously holding the lock.
* Use DeleteLabelValues instead of Set(0) to remove stale metric series
Avoids cardinality explosion from accumulated stale series when
client names are dynamic.
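For illustration, the difference between the two approaches with a client_golang GaugeVec; the metric and label names here are hypothetical:

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// lockHolder tracks which client currently holds the admin lock.
var lockHolder = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "admin_lock_holder",
		Help: "1 while a client holds the admin lock",
	},
	[]string{"client"},
)

// onLockTransfer clears the previous holder's series and marks the new one.
func onLockTransfer(oldClient, newClient string) {
	// Set(0) would keep the stale series alive in the exposition forever;
	// with dynamically named clients that accumulates unbounded cardinality.
	// Deleting the series removes it entirely.
	lockHolder.DeleteLabelValues(oldClient)
	lockHolder.WithLabelValues(newClient).Set(1)
}

func main() {
	prometheus.MustRegister(lockHolder)
	onLockTransfer("client-a", "client-b")
}
```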
* rename metadata events
* fix subscription filter to use NewEntry.Name for rename path matching
The server-side subscription filter constructed the new path using
OldEntry.Name instead of NewEntry.Name when checking if a rename
event's destination matches the subscriber's path prefix. This could
cause events to be incorrectly filtered when a rename changes the
file name.
* fix bucket events to handle rename of bucket directories
onBucketEvents only checked IsCreate and IsDelete. A bucket directory
rename via AtomicRenameEntry now emits a single rename event (both
OldEntry and NewEntry non-nil), which matched neither check. Handle
IsRename by deleting the old bucket and creating the new one.
* fix replicator to handle rename events across directory boundaries
Two issues fixed:
1. The replicator filtered events by checking if the key (old path)
was under the source directory. Rename events now use the old path
as key, so renames from outside into the watched directory were
silently dropped. Now both old and new paths are checked, and
cross-boundary renames are converted to create or delete.
2. NewParentPath was passed to the sink without remapping to the
sink's target directory structure, causing the sink to write
entries at the wrong location. Now NewParentPath is remapped
alongside the key.
* fix filer sync to handle rename events crossing directory boundaries
The early directory-prefix filter only checked resp.Directory (old
parent). Rename events now carry the old parent as Directory, so
renames from outside the source path into it were dropped before
reaching the existing cross-boundary handling logic. Check both old
and new directories against sourcePath and excludePaths so the
downstream old-key/new-key logic can properly convert these to
create or delete operations.
* fix metadata event path matching
* fix metadata event consumers for rename targets
* Fix replication rename target keys
Logical rename events now reach replication sinks with distinct source and target paths.
Handle non-filer sinks as delete-plus-create on the translated target key, and make the rename fallback path create at the translated target key too.
Add focused tests covering non-filer renames, filer rename updates, and the fallback path.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix filer sync rename path scoping
Use directory-boundary matching instead of raw prefix checks when classifying source and target paths during filer sync.
Also apply excludePaths per side so renames across excluded boundaries downgrade cleanly to create/delete instead of being misclassified as in-scope updates.
Add focused tests for boundary matching and rename classification.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix replicator directory boundary checks
Use directory-boundary matching instead of raw prefix checks when deciding whether a source or target path is inside the watched tree or an excluded subtree.
This prevents sibling paths such as /foo and /foobar from being misclassified during rename handling, and preserves the earlier rename-target-key fix.
Add focused tests for boundary matching and rename classification across sibling/excluded directories.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Fix etc-remote rename-out handling
Use boundary-safe source/target directory membership when classifying metadata events under DirectoryEtcRemote.
This prevents rename-out events from being processed as config updates, while still treating them as removals where appropriate for the remote sync and remote gateway command paths.
Add focused tests for update/removal classification and sibling-prefix handling.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Defer rename events until commit
Queue logical rename metadata events during atomic and streaming renames and publish them only after the transaction commits successfully.
This prevents subscribers from seeing delete or logical rename events for operations that later fail during delete or commit.
Also serialize notification.Queue swaps in rename tests and add failure-path coverage.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Skip descendant rename target lookups
Avoid redundant target lookups during recursive directory renames once the destination subtree is known absent.
The recursive move path now inserts known-absent descendants directly, and the test harness exercises prefixed directory listing so the optimization is covered by a directory rename regression test.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* Tighten rename review tests
Return filer_pb.ErrNotFound from the bucket tracking store test stub so it follows the FilerStore contract, and add a webhook filter case for same-name renames across parent directories.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
* fix HardLinkId format verb in InsertEntryKnownAbsent error
HardLinkId is a byte slice. %d prints each byte as a decimal number,
which is not useful for an identifier. Use %x to match the log line
two lines above.
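A quick illustration of the two verbs on a byte slice:

```go
package main

import "fmt"

func main() {
	hardLinkId := []byte{0xde, 0xad, 0xbe, 0xef}
	fmt.Printf("%d\n", hardLinkId) // [222 173 190 239]: per-byte decimals, useless as an identifier
	fmt.Printf("%x\n", hardLinkId) // deadbeef: compact hex, matches the adjacent log line
}
```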
* only skip descendant target lookup when source and dest use same store
moveFolderSubEntries unconditionally passed skipTargetLookup=true for
every descendant. This is safe when all paths resolve to the same
underlying store, but with path-specific store configuration a child's
destination may map to a different backend that already holds an entry
at that path. Use FilerStoreWrapper.SameActualStore to check per-child
and fall back to the full CreateEntry path when stores differ.
* add nil and create edge-case tests for metadata event scope helpers
* extract pathIsEqualOrUnder into util.IsEqualOrUnder
Identical implementations existed in both replication/replicator.go and
command/filer_sync.go. Move to util.IsEqualOrUnder (alongside the
existing FullPath.IsUnder) and remove the duplicates.
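A minimal sketch of boundary-safe matching of this kind; the real util.IsEqualOrUnder may differ in details:

```go
package main

import (
	"fmt"
	"strings"
)

// isEqualOrUnder reports whether path is dir itself or a descendant of dir,
// using directory-boundary matching rather than a raw prefix check.
func isEqualOrUnder(path, dir string) bool {
	if path == dir {
		return true
	}
	if dir == "/" {
		return strings.HasPrefix(path, "/")
	}
	return strings.HasPrefix(path, dir+"/")
}

func main() {
	fmt.Println(isEqualOrUnder("/foo/bar", "/foo")) // true
	fmt.Println(isEqualOrUnder("/foobar", "/foo"))  // false: sibling, not a child
}
```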
* use MetadataEventTargetDirectory for new-side directory in filer sync
The new-side directory checks and sourceNewKey computation used
message.NewParentPath directly. If NewParentPath were empty (legacy
events, older filer versions during rolling upgrades), sourceNewKey
would be wrong (/filename instead of /dir/filename) and the
UpdateEntry parent path rewrite would panic on slice bounds.
Derive targetDir once from MetadataEventTargetDirectory, which falls
back to resp.Directory when NewParentPath is empty, and use it
consistently for all new-side checks and the sink parent path.
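A sketch of that fallback; the function and field names are modeled on the description above, not the actual code:

```go
package sketch

// metadataEventTargetDirectory returns the new-side parent directory for a
// rename event, falling back to the old parent when NewParentPath is empty
// (legacy events emitted by older filers during rolling upgrades).
func metadataEventTargetDirectory(directory, newParentPath string) string {
	if newParentPath == "" {
		return directory // keeps keys as /dir/filename and avoids slice-bounds panics
	}
	return newParentPath
}
```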
* Use Unix sockets for gRPC between co-located services in mini mode
In `weed mini`, all services run in one process. Previously, inter-service
gRPC traffic (volume↔master, filer↔master, S3↔filer, worker↔admin, etc.)
went through TCP loopback. This adds a gRPC Unix socket registry in the pb
package: mini mode registers a socket path per gRPC port at startup, each
gRPC server additionally listens on its socket, and GrpcDial transparently
routes to the socket via WithContextDialer when a match is found.
Standalone commands (weed master, weed filer, etc.) are unaffected since
no sockets are registered. TCP listeners are kept for external clients.
* Handle Serve error and clean up socket file in ServeGrpcOnLocalSocket
Log non-expected errors from grpcServer.Serve (ignoring
grpc.ErrServerStopped) and always remove the Unix socket file
when Serve returns, ensuring cleanup on Stop/GracefulStop.
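A minimal sketch of that pattern, assuming the standard grpc-go API:

```go
package sketch

import (
	"errors"
	"log"
	"net"
	"os"

	"google.golang.org/grpc"
)

// serveGrpcOnLocalSocket listens on a Unix socket, ignores the expected
// shutdown error, and always removes the socket file when Serve returns.
func serveGrpcOnLocalSocket(grpcServer *grpc.Server, socketPath string) error {
	lis, err := net.Listen("unix", socketPath)
	if err != nil {
		return err
	}
	go func() {
		defer os.Remove(socketPath) // cleanup also runs on Stop/GracefulStop
		if err := grpcServer.Serve(lis); err != nil && !errors.Is(err, grpc.ErrServerStopped) {
			log.Printf("grpc serve on %s: %v", socketPath, err)
		}
	}()
	return nil
}
```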
* After git reset --hard on a FUSE mount, the kernel dcache can
transiently show the directory then drop it moments later. Add a
1-second stabilisation delay and re-verification in
resetToCommitWithRecovery and tryPullFromCommit so that recovery
retries if the entry vanishes in that window.
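A sketch of the stabilisation check, with hypothetical names:

```go
package sketch

import (
	"fmt"
	"os"
	"time"
)

// awaitStableDir re-verifies that dir still exists after a settle delay,
// retrying while the kernel dcache transiently drops the entry.
func awaitStableDir(dir string, attempts int) error {
	for i := 0; i < attempts; i++ {
		time.Sleep(time.Second) // 1-second stabilisation window
		if _, err := os.Stat(dir); err == nil {
			return nil
		}
	}
	return fmt.Errorf("directory %s did not stabilise after %d attempts", dir, attempts)
}
```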
* fix(worker): pass compaction revision and file sizes in EC volume copy
The worker EC task was sending CopyFile requests without the current
compaction revision (defaulting to 0) and with StopOffset set to
math.MaxInt64. After a vacuum compaction this caused the volume server
to reject the copy or return stale data.
Read the volume file status first and forward the compaction revision
and actual file sizes so the copy is consistent with the compacted
volume.
* propagate erasure coding task context
* fix(worker): validate volume file status and detect short copies
Reject zero dat file size from ReadVolumeFileStatus — a zero-sized
snapshot would produce 0-byte copies and broken EC shards.
After streaming, verify totalBytes matches the expected stopOffset
and return an error on short copies instead of logging success.
* fix(worker): reject zero idx file size in volume status validation
A non-empty dat with zero idx indicates an empty or corrupt volume.
Without this guard, copyFileFromSource gets stopOffset=0, produces a
0-byte .idx, passes the short-copy check, and generateEcShardsLocally
runs against a volume with no index.
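Taken together, the validation could look roughly like this; the status type and field names are stand-ins for the volume server response:

```go
package sketch

import "fmt"

// volumeFileStatus is a hypothetical stand-in for the volume server's
// ReadVolumeFileStatus response.
type volumeFileStatus struct {
	DatFileSize        int64
	IdxFileSize        int64
	CompactionRevision uint32
}

// validate rejects snapshots that would produce broken EC shards.
func (s *volumeFileStatus) validate() error {
	if s.DatFileSize == 0 {
		return fmt.Errorf("zero dat file size: copy would produce empty EC shards")
	}
	if s.IdxFileSize == 0 {
		return fmt.Errorf("zero idx file size: volume has no index")
	}
	return nil
}

// verifyCopy fails on short copies instead of logging success.
func verifyCopy(totalBytes, stopOffset int64) error {
	if totalBytes != stopOffset {
		return fmt.Errorf("short copy: streamed %d bytes, expected %d", totalBytes, stopOffset)
	}
	return nil
}
```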
* fix fake plugin volume file status
* fix plugin volume balance test fixtures
* The upstream rust:alpine manifest list no longer includes linux/386,
breaking multi-platform builds. Switch the Rust volume server builder
stage to alpine:3.23 and install the Rust toolchain via apk instead.
Also add openssl-dev, which is needed for the build.
* fix(filer): apply default disk type after location-prefix resolution in gRPC AssignVolume
The gRPC AssignVolume path was applying the filer's default DiskType to
the request before calling detectStorageOption. This caused the default
to shadow any disk type configured via a filer location-prefix rule,
diverging from the HTTP write path which applies the default only when
no rule matches.
Extract resolveAssignStorageOption to apply the filer default disk type
after detectStorageOption, so location-prefix rules take precedence.
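A sketch of the extracted ordering; the type and function names are illustrative:

```go
package sketch

// storageOption is an illustrative stand-in for the filer's resolved
// storage option.
type storageOption struct {
	DiskType string
}

// resolveAssignStorageOption applies the filer's default disk type only
// after location-prefix resolution, so a matching rule's disk type wins.
func resolveAssignStorageOption(so *storageOption, defaultDiskType string) {
	if so.DiskType == "" { // no location-prefix rule set a disk type
		so.DiskType = defaultDiskType
	}
}
```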
* fix(filer): apply default disk type after location-prefix resolution in TUS upload path
Same class of bug as the gRPC AssignVolume fix: the TUS tusWriteData
handler called detectStorageOption0 but never applied the filer's
default DiskType when no location-prefix rule matched. This made TUS
uploads ignore the -disk flag entirely.
* fix(s3): preserve explicit directory markers during empty folder cleanup
PR #8292 switched empty-folder cleanup from per-folder implicit checks
to bucket-level policy, inadvertently dropping the check that preserved
explicitly created directories (e.g., PUT /bucket/folder/). This caused
user-created folders to be deleted when their last file was removed.
Add IsDirectoryKeyObject check in executeCleanup to skip folders that
have a MIME type set, matching the canonical pattern used throughout the
S3 listing and delete handlers.
* fix: handle ErrNotFound in IsDirectoryKeyObject for race safety
Entry may be deleted between the emptiness check and the directory
marker lookup. Treat not-found as false rather than propagating
the error, avoiding unnecessary error logging in the cleanup path.
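A sketch of the race-safe check; getEntry, the entry type, and the not-found sentinel are local stand-ins for the real filer client helpers:

```go
package sketch

import "errors"

// errNotFound stands in for filer_pb.ErrNotFound.
var errNotFound = errors.New("filer: no entry is found in filer store")

type entry struct {
	IsDirectory bool
	MimeType    string
}

// getEntry stands in for the S3 server's filer lookup.
var getEntry func(bucket, key string) (*entry, error)

// isDirectoryKeyObject reports whether key is an explicitly created
// directory marker (a directory entry with a MIME type set).
func isDirectoryKeyObject(bucket, key string) (bool, error) {
	e, err := getEntry(bucket, key)
	if err != nil {
		if errors.Is(err, errNotFound) {
			// deleted between the emptiness check and this lookup
			return false, nil
		}
		return false, err
	}
	return e.IsDirectory && e.MimeType != "", nil
}
```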
* refactor: consolidate directory marker tests and tidy error handling
- Combine two separate test functions into a table-driven test
- Nest ErrNotFound check inside the err != nil block
* notification.kafka: add SASL authentication and TLS support (#8827)
Wire sarama SASL (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512) and TLS
configuration into the Kafka notification producer and consumer,
enabling connections to secured Kafka clusters.
* notification.kafka: validate mTLS config
* kafka notification: validate partial mTLS config, replace panics with errors
- Reject when only one of tls_client_cert/tls_client_key is provided
- Replace three panic() calls in KafkaInput.initialize with returned errors
* kafka notification: enforce minimum TLS 1.2 for Kafka connections
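A sketch of this configuration wiring with sarama; the options struct and its field names are illustrative, not SeaweedFS's actual config keys (SCRAM mechanisms additionally need a SCRAMClientGeneratorFunc, omitted here):

```go
package sketch

import (
	"crypto/tls"
	"fmt"

	"github.com/IBM/sarama"
)

// options is an illustrative slice of the notification config.
type options struct {
	SASLUser, SASLPassword      string
	TLSClientCert, TLSClientKey string
}

func newKafkaConfig(opt options) (*sarama.Config, error) {
	cfg := sarama.NewConfig()

	if opt.SASLUser != "" {
		cfg.Net.SASL.Enable = true
		cfg.Net.SASL.Mechanism = sarama.SASLTypePlaintext
		cfg.Net.SASL.User = opt.SASLUser
		cfg.Net.SASL.Password = opt.SASLPassword
	}

	// Reject partial mTLS config instead of panicking.
	if (opt.TLSClientCert == "") != (opt.TLSClientKey == "") {
		return nil, fmt.Errorf("tls_client_cert and tls_client_key must be provided together")
	}

	cfg.Net.TLS.Enable = true
	tlsCfg := &tls.Config{MinVersion: tls.VersionTLS12} // enforce minimum TLS 1.2
	if opt.TLSClientCert != "" {
		cert, err := tls.LoadX509KeyPair(opt.TLSClientCert, opt.TLSClientKey)
		if err != nil {
			return nil, err
		}
		tlsCfg.Certificates = []tls.Certificate{cert}
	}
	cfg.Net.TLS.Config = tlsCfg
	return cfg, nil
}
```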
* mount: add option to show system entries
* address Gemini code review's suggested changes
* rename flag from -showSystemEntries to -includeSystemEntries
* meta_cache: purge hidden system entries on filer events
---------
Co-authored-by: Chris Lu <chris.lu@gmail.com>
* plugin scheduler: run iceberg and lifecycle lanes concurrently
The default lane serialises job types under a single admin lock
because volume-management operations share global state. Iceberg
and lifecycle lanes have no such constraint, so run each of their
job types independently in separate goroutines.
* Fix concurrent lane scheduler status
* plugin scheduler: address review feedback
- Extract collectDueJobTypes helper to deduplicate policy loading
between locked and concurrent iteration paths.
- Use atomic.Bool instead of sync.Mutex for hadJobs in the concurrent
path.
- Set lane loop state to "busy" before launching concurrent goroutines
so the lane is not reported as idle while work runs.
- Convert TestLaneRequiresLock to table-driven style.
- Add TestRunLaneSchedulerIterationLockBehavior to verify the scheduler
acquires the admin lock only for lanes that require it.
- Fix flaky TestGetLaneSchedulerStatusShowsActiveConcurrentLaneWork by
not starting background scheduler goroutines that race with the
direct runJobTypeIteration call.
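A sketch of a concurrent lane iteration of this shape; the names are illustrative:

```go
package sketch

import (
	"sync"
	"sync/atomic"
)

// runConcurrentLane runs each job type in its own goroutine; lanes without
// the admin-lock constraint need no serialisation between job types.
func runConcurrentLane(jobTypes []string, runJobType func(string) bool) bool {
	var hadJobs atomic.Bool // cheaper than a mutex for a single flag
	var wg sync.WaitGroup
	for _, jt := range jobTypes {
		wg.Add(1)
		go func(jt string) {
			defer wg.Done()
			if runJobType(jt) {
				hadJobs.Store(true)
			}
		}(jt)
	}
	wg.Wait()
	return hadJobs.Load()
}
```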
* s3api: skip TTL fast-path for versioned buckets (#8757)
PutBucketLifecycleConfiguration was translating Expiration.Days into
filer.conf TTL entries for all buckets. For versioned buckets this is
wrong:
1. TTL volumes expire as a unit, destroying all data — including
noncurrent versions that should be preserved.
2. Filer-backend TTL (RocksDB compaction filter, Redis key expiry)
removes entries without triggering chunk deletion, leaving orphaned
volume data with 0 deleted bytes.
3. On AWS S3, Expiration.Days on a versioned bucket creates a delete
marker — it does not hard-delete data. TTL has no such nuance.
Fix: skip the TTL fast-path when the bucket has versioning enabled or
suspended. All lifecycle rules are evaluated at scan time by the
lifecycle worker instead.
Also fix the lifecycle worker to evaluate Expiration rules against the
latest version in .versions/ directories, which was previously skipped
entirely — only NoncurrentVersionExpiration was handled.
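A sketch of the guard; the lookup helper is a stand-in, and the error handling follows the fail-closed refinement noted in the review-feedback commit below:

```go
package sketch

// getBucketVersioningStatus stands in for the real metadata lookup.
var getBucketVersioningStatus func(bucket string) (string, error)

// shouldUseTTLFastPath gates the Expiration.Days to filer.conf TTL
// translation on the bucket's versioning state.
func shouldUseTTLFastPath(bucket string) bool {
	status, err := getBucketVersioningStatus(bucket)
	if err != nil {
		return false // fail closed: treat transient errors as versioned
	}
	if status == "Enabled" || status == "Suspended" {
		// TTL volumes expire as a unit and would destroy noncurrent
		// versions; defer to the lifecycle worker's scan-time evaluation.
		return false
	}
	return true
}
```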
* lifecycle worker: handle SeaweedList error in versions dir cleanup
Do not assume the directory is empty when the list call fails — log
the error and skip the directory to avoid incorrect deletion.
* address review feedback
- Fetch version file for tag-based rules instead of reading tags from
the .versions directory entry where they are not cached.
- Handle getBucketVersioningStatus error by failing closed (treat as
versioned) to avoid creating TTL entries on transient failures.
- Capture and assert deleteExpiredObjects return values in test.
- Improve test documentation.
* ci: add Trivy CVE scan to container release workflow
* ci: pin trivy-action version and fail on HIGH/CRITICAL CVEs
Address review feedback:
- Pin aquasecurity/trivy-action to v0.28.0 instead of @master
- Add exit-code: '1' so the scan fails the job on findings
- Add comment explaining why only amd64 is scanned
* ci: pin trivy-action to SHA for v0.35.0
Tags ≤0.34.2 were compromised (GHSA-69fq-xp46-6x23). Pin to the full
commit SHA of v0.35.0 to avoid mutable tag risks.
* s3api: accept NoncurrentVersionExpiration, AbortIncompleteMultipartUpload, Expiration.Date
Update PutBucketLifecycleConfigurationHandler to accept all newly-supported
lifecycle rule types. Only Transition and NoncurrentVersionTransition are
still rejected (require storage class tier infrastructure).
Changes:
- Remove ErrNotImplemented for Expiration.Date (handled by worker at scan time)
- Only reject rules with Transition.set or NoncurrentVersionTransition.set
- Extract prefix from Filter.And when present
- Add comment explaining that non-Expiration.Days rules are evaluated by
the lifecycle worker from stored lifecycle XML, not via filer.conf TTL
The lifecycle XML is already stored verbatim in bucket metadata, so new
rule types are preserved on Get even without explicit handler support.
Filer.conf TTL entries are only created for Expiration.Days (fast path).
* s3api: skip TTL fast path for rules with tag or size filters
Rules with tag or size constraints (Filter.Tag, Filter.And with tags
or size bounds, Filter.ObjectSizeGreaterThan/LessThan) must not be
lowered to filer.conf TTL entries, because TTL applies unconditionally
to all objects under the prefix. These rules are evaluated at scan
time by the lifecycle worker which checks each object's tags and size.
Only simple Expiration.Days rules with prefix-only filters use the
TTL fast path (RocksDB compaction filter).
---------
Co-authored-by: Copilot <copilot@github.com>
* lifecycle worker: drive MPU abort from lifecycle rules
Update the multipart upload abort phase to read
AbortIncompleteMultipartUpload.DaysAfterInitiation from the parsed
lifecycle rules. Falls back to the worker config abort_mpu_days when
no lifecycle XML rule specifies the value.
This means per-bucket MPU abort thresholds are now respected when
set via PutBucketLifecycleConfiguration, instead of using a single
global worker config value for all buckets.
* lifecycle worker: only use config AbortMPUDays when no lifecycle XML exists
When a bucket has lifecycle XML (useRuleEval=true) but no
AbortIncompleteMultipartUpload rule, mpuAbortDays should be 0
(no abort), not the worker config default. The config fallback
should only apply to buckets without lifecycle XML.
* lifecycle worker: only skip .uploads at bucket root
* lifecycle worker: use per-upload rule evaluation for MPU abort
Replace the single bucket-wide mpuAbortDays with per-upload evaluation
using s3lifecycle.EvaluateMPUAbort, which respects each rule's prefix
filter and DaysAfterInitiation threshold.
Previously the code took the first enabled abort rule's days value
and applied it to all uploads, ignoring prefix scoping and multiple
rules with different thresholds.
Config fallback (abort_mpu_days) now only applies when lifecycle XML
is truly absent (xmlPresent=false), not when XML exists but has no
abort rules.
Also fix EvaluateMPUAbort to use expectedExpiryTime for midnight-UTC
semantics matching other lifecycle cutoffs.
---------
Co-authored-by: Copilot <copilot@github.com>
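A sketch of per-upload evaluation with the midnight-UTC cutoff; the rule struct and helper names are illustrative, not the real s3lifecycle API:

```go
package sketch

import (
	"strings"
	"time"
)

// abortRule is an illustrative flattened AbortIncompleteMultipartUpload rule.
type abortRule struct {
	Prefix              string
	DaysAfterInitiation int
	Enabled             bool
}

// expectedExpiryTime advances by the configured days and rounds up to the
// next midnight UTC, matching the cutoff semantics of other lifecycle checks.
func expectedExpiryTime(initiated time.Time, days int) time.Time {
	t := initiated.UTC().AddDate(0, 0, days)
	return time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, time.UTC).AddDate(0, 0, 1)
}

// shouldAbortUpload evaluates every rule against this upload's key, so
// prefix scoping and per-rule thresholds are respected.
func shouldAbortUpload(rules []abortRule, key string, initiated, now time.Time) bool {
	for _, r := range rules {
		if !r.Enabled || !strings.HasPrefix(key, r.Prefix) {
			continue
		}
		if !now.Before(expectedExpiryTime(initiated, r.DaysAfterInitiation)) {
			return true // inclusive comparison: triggers at the exact instant
		}
	}
	return false
}
```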
* lifecycle worker: add NoncurrentVersionExpiration support
Add version-aware scanning to the rule-based execution path. When the
walker encounters a .versions directory, processVersionsDirectory():
- Lists all version entries (v_<versionId>)
- Sorts by version timestamp (newest first)
- Walks non-current versions with ShouldExpireNoncurrentVersion()
which handles both NoncurrentDays and NewerNoncurrentVersions
- Extracts successor time from version IDs (both old/new format)
- Skips delete markers in noncurrent version counting
- Falls back to entry Mtime when version ID timestamp is unavailable
Helper functions:
- sortVersionsByTimestamp: insertion sort by version ID timestamp
- getEntryVersionTimestamp: extracts timestamp with Mtime fallback
* lifecycle worker: address review feedback for noncurrent versions
- Use sentinel errLimitReached in versions directory handler
- Set NoncurrentIndex on ObjectInfo for proper NewerNoncurrentVersions
evaluation
* lifecycle worker: fail closed on XML parse error, guard zero Mtime
- Fail closed when lifecycle XML exists but fails to parse, instead
of falling back to TTL which could apply broader rules
- Guard Mtime > 0 before using time.Unix(mtime, 0) to avoid mapping
unset Mtime to 1970, which would misorder versions and cause
premature expiration
* lifecycle worker: count delete markers toward NoncurrentIndex
Noncurrent delete markers should count toward the
NewerNoncurrentVersions retention threshold so data versions
get the correct position index. Previously, skipping delete
markers without incrementing the index could retain too many
versions after delete/recreate cycles.
* lifecycle worker: fix version ordering, error propagation, and fail-closed scope
1. Use full version ID comparison (CompareVersionIds) for sorting
.versions entries, not just decoded timestamps. Two versions with
the same timestamp prefix but different random suffixes were
previously misordered, potentially treating the newest version as
noncurrent and deleting it.
2. Propagate .versions listing failures to the caller instead of
swallowing them with (nil, 0). Transient filer errors on a
.versions directory now surface in the job result.
3. Narrow the fail-closed path to only malformed lifecycle XML
(errMalformedLifecycleXML). Transient filer LookupEntry errors
now fall back to TTL with a warning, matching the original intent
of "fail closed on bad config, not on network blips."
* lifecycle worker: only skip .uploads at bucket root
* lifecycle worker: sort.Slice, mixed-format test, XML presence tracking
- Replace manual insertion sort with sort.Slice in sortVersionsByVersionId
- Add TestCompareVersionIdsMixedFormats covering old/new format ordering
- Distinguish "no lifecycle XML" (nil) from "XML present but no effective
rules" (non-nil empty slice) so buckets with all-disabled rules don't
incorrectly fall back to filer.conf TTL expiration
* lifecycle worker: guard nil Attributes, use TrimSuffix in test
- Guard entry.Attributes != nil before accessing GetFileSize() and
Mtime in both listExpiredObjectsByRules and processVersionsDirectory
- Use strings.TrimPrefix/TrimSuffix in TestVersionsDirectoryNaming
to match the production code pattern
* lifecycle worker: skip TTL scan when XML present, fix test assertions
- When lifecycle XML is present but has no effective rules, skip
object scanning entirely instead of falling back to TTL path
- Test sort output against concrete expected names instead of
re-using the same comparator as the sort itself
* lifecycle worker: fix ExpiredObjectDeleteMarker to match AWS semantics
Rewrite cleanupDeleteMarkers() to only remove delete markers that are
the sole remaining version of an object. Previously, delete markers
were removed unconditionally which could resurface older versions in
versioned buckets.
New algorithm:
1. Walk bucket tree looking for .versions directories
2. Check ExtLatestVersionIsDeleteMarker from directory metadata
3. Count versions in the .versions directory
4. Only remove if count == 1 (delete marker is sole version)
5. Require an ExpiredObjectDeleteMarker=true rule (when lifecycle
XML rules are present)
6. Remove the empty .versions directory after cleanup
This phase runs after NoncurrentVersionExpiration so version counts
are accurate.
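A sketch of steps 2-6, with stand-in helpers for the filer calls:

```go
package sketch

// Stand-ins for the filer operations used by the cleanup phase.
var (
	listVersions    func(versionsDir string) ([]string, error)
	deleteEntry     func(dir, name string) error
	removeDirectory func(dir string) error
)

// maybeCleanupDeleteMarker removes a delete marker only when it is the sole
// remaining version and an ExpiredObjectDeleteMarker rule matches the key.
func maybeCleanupDeleteMarker(versionsDir string, latestIsDeleteMarker, ruleMatches bool) (bool, error) {
	if !latestIsDeleteMarker || !ruleMatches {
		return false, nil
	}
	versions, err := listVersions(versionsDir)
	if err != nil {
		return false, err
	}
	if len(versions) != 1 {
		// removing a marker with older versions behind it would
		// resurface those versions, breaking AWS semantics
		return false, nil
	}
	if err := deleteEntry(versionsDir, versions[0]); err != nil {
		return false, err
	}
	// remove the now-empty .versions directory for this object only
	return true, removeDirectory(versionsDir)
}
```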
* lifecycle worker: respect prefix filter in ExpiredObjectDeleteMarker rules
Previously hasDeleteMarkerRule was a bucket-wide boolean that ignored
rule prefixes. A prefix-scoped rule like "logs/" would incorrectly
clean up delete markers in all paths.
Add matchesDeleteMarkerRule() that checks if a matching enabled
ExpiredObjectDeleteMarker rule exists for the specific object key,
respecting the rule's prefix filter. Falls back to legacy behavior
(allow cleanup) when no lifecycle XML rules are provided.
* lifecycle worker: only skip .uploads at bucket root
Check dir == bucketPath before skipping directories named .uploads.
Previously a user-created directory like data/.uploads/ at any depth
would be incorrectly skipped during lifecycle scanning.
* lifecycle worker: fix delete marker cleanup with XML-present empty rules
1. matchesDeleteMarkerRule now uses nil check (not len==0) for legacy
fallback. A non-nil empty slice means lifecycle XML was present but
had no ExpiredObjectDeleteMarker rules, so cleanup is blocked.
Previously, an empty slice triggered the legacy true path.
2. Use per-directory removedHere flag instead of cumulative cleaned
counter when deciding to remove .versions directories. Previously,
after the first successful cleanup anywhere in the bucket, every
subsequent .versions directory would be removed even if its own
delete marker was not actually deleted.
* lifecycle worker: use full filter matching for delete marker rules
matchesDeleteMarkerRule now uses s3lifecycle.MatchesFilter (exported)
instead of prefix-only matching. This ensures tag and size filters
on ExpiredObjectDeleteMarker rules are respected, preventing broader
deletions than the configured policy intends.
Add TestMatchesDeleteMarkerRule covering: nil rules (legacy), empty
rules (XML present), prefix match/mismatch, disabled rules, rules
without the flag, and tag-filtered rules against tagless markers.
---------
Co-authored-by: Copilot <copilot@github.com>
* s3api: extend lifecycle XML types with NoncurrentVersionExpiration, AbortIncompleteMultipartUpload
Add missing S3 lifecycle rule types to the XML data model:
- NoncurrentVersionExpiration with NoncurrentDays and NewerNoncurrentVersions
- NoncurrentVersionTransition with NoncurrentDays and StorageClass
- AbortIncompleteMultipartUpload with DaysAfterInitiation
- Filter.ObjectSizeGreaterThan and ObjectSizeLessThan
- And.ObjectSizeGreaterThan and ObjectSizeLessThan
- Filter.UnmarshalXML to properly parse Tag, And, and size filter elements
Each new type follows the existing set-field pattern for conditional
XML marshaling. No behavior changes - these types are not yet wired
into handlers or the lifecycle worker.
* s3lifecycle: add lifecycle rule evaluator package
New package weed/s3api/s3lifecycle/ provides a pure-function lifecycle
rule evaluation engine. The evaluator accepts flattened Rule structs and
ObjectInfo metadata, and returns the appropriate Action.
Components:
- evaluator.go: Evaluate() for per-object actions with S3 priority
ordering (delete marker > noncurrent version > current expiration),
ShouldExpireNoncurrentVersion() with NewerNoncurrentVersions support,
EvaluateMPUAbort() for multipart upload rules
- filter.go: prefix, tag, and size-based filter matching
- tags.go: ExtractTags() extracts S3 tags from filer Extended metadata,
HasTagRules() for scan-time optimization
- version_time.go: GetVersionTimestamp() extracts timestamps from
SeaweedFS version IDs (both old and new format)
Comprehensive test coverage: 54 tests covering all action types,
filter combinations, edge cases, and version ID formats.
* s3api: add UnmarshalXML for Expiration, Transition, ExpireDeleteMarker
Add UnmarshalXML methods that set the internal 'set' flag during XML
parsing. Previously these flags were only set programmatically, causing
XML round-trip to drop elements. This ensures lifecycle configurations
stored as XML survive unmarshal/marshal cycles correctly.
Add comprehensive XML round-trip tests for all lifecycle rule types
including NoncurrentVersionExpiration, AbortIncompleteMultipartUpload,
Filter with Tag/And/size constraints, and a complete Terraform-style
lifecycle configuration.
* s3lifecycle: address review feedback
- Fix version_time.go overflow: guard timestampPart > MaxInt64 before
the inversion subtraction to prevent uint64 wrap
- Make all expiry checks inclusive (!now.Before instead of now.After)
so actions trigger at the exact scheduled instant
- Add NoncurrentIndex to ObjectInfo so Evaluate() can properly handle
NewerNoncurrentVersions via ShouldExpireNoncurrentVersion()
- Add test for high-bit overflow version ID
* s3lifecycle: guard ShouldExpireNoncurrentVersion against zero SuccessorModTime
Add early return when obj.IsLatest or obj.SuccessorModTime.IsZero()
to prevent premature expiration of versions with uninitialized
successor timestamps (zero value would compute to epoch, always expired).
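A sketch of the early return; the struct is an illustrative slice of ObjectInfo:

```go
package sketch

import "time"

type objectInfo struct {
	IsLatest         bool
	SuccessorModTime time.Time
}

// noncurrentSince returns when the version became noncurrent, guarding
// against a zero successor time, which would otherwise read as the epoch
// and make every such version look expired.
func noncurrentSince(obj objectInfo) (time.Time, bool) {
	if obj.IsLatest || obj.SuccessorModTime.IsZero() {
		return time.Time{}, false
	}
	return obj.SuccessorModTime, true
}
```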
* lifecycle worker: detect buckets with lifecycle XML, not just filer.conf TTLs
Update the detection phase to check for stored lifecycle XML in bucket
metadata (key: s3-bucket-lifecycle-configuration-xml) in addition to
filer.conf TTL entries. A bucket is proposed for lifecycle processing if
it has lifecycle XML OR filer.conf TTLs (backward compatible).
New proposal parameters:
- has_lifecycle_xml: whether the bucket has stored lifecycle XML
- versioning_status: the bucket's versioning state (Enabled/Suspended/"")
These parameters will be used by the execution phase (subsequent PR)
to determine which evaluation path to use.
* lifecycle worker: update detection function comment to reflect XML support
* lifecycle worker: add lifecycle XML parsing and rule conversion
Add rules.go with:
- parseLifecycleXML() converts stored lifecycle XML to evaluator-friendly
s3lifecycle.Rule structs, handling Filter.Prefix, Filter.Tag, Filter.And,
size constraints, NoncurrentVersionExpiration, AbortIncompleteMultipartUpload,
Expiration.Date, and ExpiredObjectDeleteMarker
- loadLifecycleRulesFromBucket() reads lifecycle XML from bucket metadata
- parseExpirationDate() supports RFC3339 and ISO 8601 date-only formats
Comprehensive tests for all XML variants, filter types, and date formats.
* lifecycle worker: add scan-time rule evaluation for object expiration
Update executeLifecycleForBucket to try lifecycle XML evaluation first,
falling back to TTL-only evaluation when no lifecycle XML exists.
New listExpiredObjectsByRules() function:
- Walks the bucket directory tree
- Builds s3lifecycle.ObjectInfo from each filer entry
- Calls s3lifecycle.Evaluate() to check lifecycle rules
- Skips objects already handled by TTL fast path (TtlSec set)
- Extracts tags only when rules use tag-based filters (optimization)
- Skips .uploads and .versions directories (handled by other phases)
Supports Expiration.Days, Expiration.Date, Filter.Prefix, Filter.Tag,
Filter.And, and Filter.ObjectSize* in the scan-time evaluation path.
Existing TTL-based path remains for backward compatibility.
* lifecycle worker: address review feedback
- Use sentinel error (errLimitReached) instead of string matching
for scan limit detection
- Fix loadLifecycleRulesFromBucket path: use bucketsPath directly
as directory for LookupEntry instead of path.Dir which produced
the wrong parent
* lifecycle worker: fix And filter detection for size-only constraints
The And branch condition only triggered when Prefix or Tags were present,
missing the case where And contains only ObjectSizeGreaterThan or
ObjectSizeLessThan without a prefix or tags.
* lifecycle worker: address review feedback round 3
- rules.go: pass through Filter-level size constraints when Tag is
present without And (Tag+size combination was dropping sizes)
- execution.go: add doc comment to listExpiredObjectsByRules noting
that it handles non-versioned objects only; versioned objects are
handled by processVersionsDirectory
- rules_test.go: add bounds checks before indexing rules[0]
---------
Co-authored-by: Copilot <copilot@github.com>
* s3: support s3:x-amz-server-side-encryption policy condition (#7680)
- Normalize x-amz-server-side-encryption header values to canonical form
(aes256 → AES256, aws:kms mixed-case → aws:kms) so StringEquals
conditions work regardless of client capitalisation
- Exempt UploadPart and UploadPartCopy from SSE Null conditions: these
actions inherit SSE from the initial CreateMultipartUpload request and
do not re-send the header, so Deny/Null("true") should not block them
- Add sse_condition_test.go covering StringEquals, Null, case-insensitive
normalisation, and multipart continuation action exemption
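A sketch of the normalisation step; the helper name is illustrative:

```go
package sketch

import "strings"

// normalizeSSEAlgorithm canonicalises an x-amz-server-side-encryption
// header value so StringEquals conditions match regardless of client
// capitalisation.
func normalizeSSEAlgorithm(v string) string {
	switch strings.ToLower(strings.TrimSpace(v)) {
	case "aes256":
		return "AES256"
	case "aws:kms":
		return "aws:kms"
	}
	return v
}
```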
* s3: address review comments on SSE condition support
- Replace "inherited" sentinel in injectSSEForMultipart with "AES256" so
that StringEquals/Null conditions evaluate against a meaningful value;
add TODO noting that KMS multipart uploads need the actual algorithm
looked up from the upload state
- Rewrite TestSSECaseInsensitiveNormalization to drive normalisation
through EvaluatePolicyForRequest with a real *http.Request so regressions
in the production code path are caught; split into AES256 and aws:kms
variants to cover both normalisation branches
* s3: plumb real inherited SSE from multipart upload state into policy eval
Instead of injecting a static "AES256" sentinel for UploadPart/UploadPartCopy,
look up the actual SSE algorithm from the stored CreateMultipartUpload entry
and pass it through the evaluation chain.
Changes:
- PolicyEvaluationArgs gains InheritedSSEAlgorithm string; set by the
BucketPolicyEngine wrapper for multipart continuation actions
- injectSSEForMultipart(conditions, inheritedSSE) now accepts the real
algorithm; empty string means no SSE → Null("true") fires correctly
- IsMultipartContinuationAction exported so the s3api wrapper can use it
- BucketPolicyEngine gets a MultipartSSELookup callback (set by S3ApiServer)
that fetches the upload entry and reads SeaweedFSSSEKMSKeyID /
SeaweedFSSSES3Encryption to determine the algorithm
- S3ApiServer.getMultipartSSEAlgorithm implements the lookup via getEntry
- Tests updated: three multipart cases (AES256, aws:kms, no-SSE-must-deny)
plus UploadPartCopy coverage
* test: preserve branch when recovering bare git repo
* Replaced the standalone ensureMountClone + gitRun in Phase 5 with a new resetToCommitWithRecovery function that mirrors the existing pullFromCommitWithRecovery pattern
* fix ec.balance failing to rebalance when all nodes share all volumes (#8793)
Two bugs in doBalanceEcRack prevented rebalancing:
1. Sorting by freeEcSlot instead of actual shard count caused incorrect
empty/full node selection when nodes have different total capacities.
2. The volume-level check skipped any volume already present on the
target node. When every node has a shard of every volume (common
with many EC volumes across N nodes with N shards each), no moves
were possible.
Fix: sort by actual shard count, and use a two-pass approach - first
prefer moving shards of volumes not on the target (best diversity),
then fall back to moving specific shard IDs not yet on the target.
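A sketch of the two-pass selection under illustrative types (the real doBalanceEcRack works on the shell's topology structures):

```go
package sketch

// nodeShards maps an EC volume id to the shard ids held on a node.
type nodeShards map[uint32][]int

// pickShardToMove prefers a volume the target does not hold at all, then
// falls back to a specific shard id the target is missing.
func pickShardToMove(source, target nodeShards) (vid uint32, shard int, ok bool) {
	// pass 1: best diversity, move a shard of a volume absent on the target
	for v, shards := range source {
		if len(target[v]) == 0 && len(shards) > 0 {
			return v, shards[0], true
		}
	}
	// pass 2: move a shard id the target does not yet have for that volume
	for v, shards := range source {
		held := make(map[int]bool, len(target[v]))
		for _, s := range target[v] {
			held[s] = true
		}
		for _, s := range shards {
			if !held[s] {
				return v, s, true
			}
		}
	}
	return 0, 0, false
}
```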
* add test simulating real cluster topology from issue #8793
Uses the actual node addresses and mixed max capacities (80 vs 33)
from the reporter's 14-node cluster to verify ec.balance correctly
rebalances with heterogeneous node sizes.
* fix pass comments to match the 0-indexed loop variable