Tree: d95df76bca
add-ec-vacuum
add-filer-iam-grpc
add-iam-grpc-management
add_fasthttp_client
add_remote_storage
adding-message-queue-integration-tests
adjust-fsck-cutoff-default
admin/csrf-s3tables
allow-no-role-arn
also-delete-parent-directory-if-empty
avoid_releasing_temp_file_on_write
changing-to-zap
coderabbitai/autofix/fafd849
codex-rust-volume-server-bootstrap
codex/8712-directory-marker-content-type
codex/admin-oidc-auth-ui
codex/cache-iam-policy-engines
codex/ec-repair-worker
codex/erasure-coding-shard-distribution
codex/list-object-versions-newest-first
codex/s3tables-maint-lifecycle-parity
codex/s3tables-maint-planner-multispec
codex/s3tables-maintenance-designs
collect-public-metrics
copilot/fix-helm-chart-installation
copilot/fix-s3-object-tagging-issue
copilot/make-renew-interval-configurable
copilot/make-renew-interval-configurable-again
copilot/sub-pr-7677
create-table-snapshot-api-design
data_query_pushdown
dependabot/maven/other/java/client/com.google.protobuf-protobuf-java-3.25.5
dependabot/maven/other/java/examples/org.apache.hadoop-hadoop-common-3.4.0
detect-and-plan-ec-tasks
do-not-retry-if-error-is-NotFound
ec-disk-type-support
enhance-erasure-coding
expand-the-s3-PutObject-permission-to-the-multipart-permissions
fasthttp
feat/mount-showSystemEntries
feature-8113-storage-class-disk-routing
feature/mini-port-detection
feature/modernize-s3-tests
feature/s3-multi-cert-support
feature/s3tables-improvements-and-spark-tests
feature/sra-uds-handler
feature/sw-block
filer1_maintenance_branch
fix-8303-s3-lifecycle-ttl-assign
fix-GetObjectLockConfigurationHandler
fix-bucket-name-case-7910
fix-helm-fromtoml-compatibility
fix-mount-http-parallelism
fix-mount-read-throughput-7504
fix-pr-7909
fix-s3-configure-consistency
fix-s3-object-tagging-issue-7589
fix-sts-session-token-7941
fix-versioning-listing-only
fix/8712-directory-markers-content-type
fix/iceberg-stage-create-semantics
fix/lock-table-shared-lock-precedence
fix/mount-cache-consistency
fix/object-lock-delete-enforcement
fix/plugin-ui-remove-scheduler-settings
fix/s3-conditional-headers-toctou-race
fix/s3-delete-directory-marker-non-empty
fix/sts-body-preservation
fix/subscribe-metadata-slow-consumer-blocked
fix/windows-test-file-cleanup
ftp
gh-pages
has-weed-sql-command
iam-multi-file-migration
iam-permissions-and-api
improve-fuse-mount
improve-fuse-mount2
lifecycle/pr1-evaluator
logrus
master
message_send
mount2
mq-subscribe
mq2
nfs-cookie-prefix-list-fixes
optimize-delete-lookups
original_weed_mount
plugin-system-phase1
plugin-ui-enhancements-restored
pr-7412
pr/7984
pr/8140
pr/8680
raft-dual-write
random_access_file
refactor-needle-read-operations
refactor-volume-write
remote_overlay
remove-claude-ci
remove-implicit-directory-handling
revert-5134-patch-1
revert-5819-patch-1
revert-6434-bugfix-missing-s3-audit
s3-remote-cache-singleflight
s3-select
s3tables-by-claude
scheduler-sequential-iteration
sub
tcp_read
test-reverting-lock-table
test_udp
testing
testing-sdx-generation
tikv
track-mount-e2e
upgrade-versions-to-4.00
volume_buffered_writes
worker-execute-ec-tasks
0.72
0.72.release
0.73
0.74
0.75
0.76
0.77
0.90
0.91
0.92
0.93
0.94
0.95
0.96
0.97
0.98
0.99
1.00
1.01
1.02
1.03
1.04
1.05
1.06
1.07
1.08
1.09
1.10
1.11
1.12
1.14
1.15
1.16
1.17
1.18
1.19
1.20
1.21
1.22
1.23
1.24
1.25
1.26
1.27
1.28
1.29
1.30
1.31
1.32
1.33
1.34
1.35
1.36
1.37
1.38
1.40
1.41
1.42
1.43
1.44
1.45
1.46
1.47
1.48
1.49
1.50
1.51
1.52
1.53
1.54
1.55
1.56
1.57
1.58
1.59
1.60
1.61
1.61RC
1.62
1.63
1.64
1.65
1.66
1.67
1.68
1.69
1.70
1.71
1.72
1.73
1.74
1.75
1.76
1.77
1.78
1.79
1.80
1.81
1.82
1.83
1.84
1.85
1.86
1.87
1.88
1.90
1.91
1.92
1.93
1.94
1.95
1.96
1.97
1.98
1.99
1;70
2.00
2.01
2.02
2.03
2.04
2.05
2.06
2.07
2.08
2.09
2.10
2.11
2.12
2.13
2.14
2.15
2.16
2.17
2.18
2.19
2.20
2.21
2.22
2.23
2.24
2.25
2.26
2.27
2.28
2.29
2.30
2.31
2.32
2.33
2.34
2.35
2.36
2.37
2.38
2.39
2.40
2.41
2.42
2.43
2.47
2.48
2.49
2.50
2.51
2.52
2.53
2.54
2.55
2.56
2.57
2.58
2.59
2.60
2.61
2.62
2.63
2.64
2.65
2.66
2.67
2.68
2.69
2.70
2.71
2.72
2.73
2.74
2.75
2.76
2.77
2.78
2.79
2.80
2.81
2.82
2.83
2.84
2.85
2.86
2.87
2.88
2.89
2.90
2.91
2.92
2.93
2.94
2.95
2.96
2.97
2.98
2.99
3.00
3.01
3.02
3.03
3.04
3.05
3.06
3.07
3.08
3.09
3.10
3.11
3.12
3.13
3.14
3.15
3.16
3.18
3.19
3.20
3.21
3.22
3.23
3.24
3.25
3.26
3.27
3.28
3.29
3.30
3.31
3.32
3.33
3.34
3.35
3.36
3.37
3.38
3.39
3.40
3.41
3.42
3.43
3.44
3.45
3.46
3.47
3.48
3.50
3.51
3.52
3.53
3.54
3.55
3.56
3.57
3.58
3.59
3.60
3.61
3.62
3.63
3.64
3.65
3.66
3.67
3.68
3.69
3.71
3.72
3.73
3.74
3.75
3.76
3.77
3.78
3.79
3.80
3.81
3.82
3.83
3.84
3.85
3.86
3.87
3.88
3.89
3.90
3.91
3.92
3.93
3.94
3.95
3.96
3.97
3.98
3.99
4.00
4.01
4.02
4.03
4.04
4.05
4.06
4.07
4.08
4.09
4.12
4.13
4.15
4.16
4.17
dev
helm-3.65.1
v0.69
v0.70beta
v3.33
1010 Commits (d95df76bca58cfaf2ede402feac8779de8588153)
ba624f1f34 (4 days ago)
Rust volume server implementation with CI (#8539)
* Match Go gRPC client transport defaults
* Honor Go HTTP idle timeout
* Honor maintenanceMBps during volume copy
* Honor images.fix.orientation on uploads
* Honor cpuprofile when pprof is disabled
* Match Go memory status payloads
* Propagate request IDs across gRPC calls
* Format pending Rust source updates
* Match Go stats endpoint payloads
* Serve Go volume server UI assets
* Enforce Go HTTP whitelist guards
* Align Rust metrics admin-port test with Go behavior
* Format pending Rust server updates
* Honor access.ui without per-request JWT checks
* Honor keepLocalDatFile in tier upload shortcut
* Honor Go remote volume write mode
* Load tier backends from master config
* Check master config before loading volumes
* Remove vif files on volume destroy
* Delete remote tier data on volume destroy
* Honor vif version defaults and overrides
* Reject mismatched vif bytes offsets
* Load remote-only tiered volumes
* Report Go tail offsets in sync status
* Stream remote dat in incremental copy
* Honor collection vif for EC shard config
* Persist EC expireAtSec in vif metadata
* Stream remote volume reads through HTTP
* Serve HTTP ranges from backend source
* Match Go ReadAllNeedles scan order
* Match Go CopyFile zero-stop metadata
* Delete EC volumes with collection cleanup
* Drop deleted collection metrics
* Match Go tombstone ReadNeedleMeta
* Match Go TTL parsing: all-digit default to minutes, two-pass fit algorithm
* Match Go needle ID/cookie formatting and name size computation
* Match Go image ext checks: webp resize only, no crop; empty healthz body
* Match Go Prometheus metric names and add missing handler counter constants
* Match Go ReplicaPlacement short string parsing with zero-padding
* Add missing EC constants MAX_SHARD_COUNT and MIN_TOTAL_DISKS
* Add walk_ecx_stats for accurate EC volume file counts and size
* Match Go VolumeStatus dat file size, EC shard stats, and disk pct precision
* Match Go needle map: unconditional delete counter, fix redb idx walk offset
* Add CompactMapSegment overflow panic guard matching Go
* Match Go volume: vif creation, version from superblock, TTL expiry, dedup data_size, garbage_level fallback
* Match Go 304 Not Modified: return bare status with no headers
* Match Go JWT error message: use "wrong jwt" instead of detailed error
* Match Go read handler bare 400, delete error prefix, download throttle timeout
* Match Go pretty JSON 1-space indent and "Deletion Failed:" error prefix
* Match Go heartbeat: keep is_heartbeating on error, add EC shard identification
* Match Go needle ReadBytes V2: tolerate EOF on truncated body
* Match Go volume: cookie check on any existing needle, return DataSize, 128KB meta guard
* Match Go DeleteCollection: propagate destroy errors
* Match Go gRPC: BatchDelete no flag, IncrementalCopy error, FetchAndWrite concurrent, VolumeUnmount/DeleteCollection errors, tail draining, query error code
* Match Go Content-Disposition RFC 6266 formatting with RFC 2231 encoding
* Match Go Guard isWriteActive: combine whitelist and signing key check
* Match Go DeleteCollectionMetrics: use partial label matching
* Match Go heartbeat: send state-only delta on volume state changes
* Match Go ReadNeedleMeta paged I/O: read header+tail only, skip data; add EIO tracking
* Match Go ScrubVolume INDEX mode dispatch; add VolumeCopy preallocation and EC NeedleStatus TODOs
* Add read_ec_shard_needle for full needle reconstruction from local EC shards
* Make heartbeat master config helpers pub for VolumeCopy preallocation
* Match Go gRPC: VolumeCopy preallocation, EC NeedleStatus full read, error message wording
* Match Go HTTP responses: omitempty fields, 2-space JSON indent, JWT JSON error, delete pretty/JSONP, 304 Last-Modified, raw write error
* Match Go WriteNeedleBlob V3 timestamp patching, fix makeup_diff double padding, count==0 read handling
* Add rebuild_ecx_file for EC index reconstruction from data shards
* Match Go gRPC: tail header first-chunk-only, EC cleanup on failure, copy append mode, ecx rebuild, compact cancellation
* Add EC volume read and delete support in HTTP handlers
* Add per-shard EC mount/unmount, location predicate search, idx directory for EC
* Add CheckVolumeDataIntegrity on volume load matching Go
* Match Go gRPC: EC multi-disk placement, per-shard mount/unmount, no auto-mount on reconstruct, streaming ReadAll/EcShardRead, ReceiveFile cleanup, version check, proxy streaming, redirect Content-Type
* Match Go heartbeat metric accounting
* Match Go duplicate UUID heartbeat retries
* Delete expired EC volumes during heartbeat
* Match Go volume heartbeat pruning
* Honor master preallocate in volume max
* Report remote storage info in heartbeats
* Emit EC heartbeat deltas on shard changes
* Match Go throttle boundary: use <= instead of <, fix pretty JSON to 1-space
* Match Go write_needle_blob monotonic appendAtNs via get_append_at_ns
* Match Go VolumeUnmount: idempotent success when volume not found
* Match Go TTL Display: return empty string when unit is Empty
Go checks `t.Unit == Empty` separately and returns "" for TTLs
with nonzero count but Empty unit. Rust only checked is_empty()
(count==0 && unit==0), so count>0 with unit=0 would format as
"5 " instead of "".
* Match Go error behavior for truncated needle data in read_body_v2
Go's readNeedleDataVersion2 returns "index out of range %d" errors
(indices 1-7) when needle body or metadata fields are truncated.
Rust was silently tolerating truncation and returning Ok. Now returns
NeedleError::IndexOutOfRange with the matching index for each field.
* Match Go download throttle: return JSON error instead of plain text
* Match Go crop params: default x1/y1 to 0 when not provided
* Match Go ScrubEcVolume: accumulate total_files from EC shards
* Match Go ScrubVolume: count total_files even on scrub error
* Match Go VolumeEcShardsCopy: set ignore_source_file_not_found for .vif
* Match Go VolumeTailSender: send needle_header on every chunk
* Match Go read_super_block: apply replication override from .vif
* Match Go check_volume_data_integrity: verify all 10 entries, detect trailing corruption
* Match Go WriteNeedleBlob: dedup check before writing during replication
* handlers: use meta-only reads for HEAD
* handlers: align range parsing and responses with Go
* handlers: align upload parsing with Go
* deps: enable webp support
* Make 5bytes the default feature for idx entry compatibility
* Match Go TTL: preserve original unit when count fits in byte
* Fix EC locate_needle: use get_actual_size for full needle size
* Fix raw body POST: only parse multipart when Content-Type contains form-data
* Match Go ReceiveFile: return protocol errors in response body, not gRPC status
* add docs
* Match Go VolumeEcShardsCopy: append to .ecj file instead of truncating
* Match Go ParsePath: support _delta suffix on file IDs for sub-file addressing
* Match Go chunk manifest: add Accept-Ranges, Content-Disposition, filename fallback, MIME detection
* Match Go privateStoreHandler: use proper JSON error for unsupported methods
* Match Go Destroy: add only_empty parameter to reject non-empty volume deletion
* Fix compilation: set_read_only_persist and set_writable return ()
These methods fire-and-forget save_vif internally, so gRPC callers
should not try to chain .map_err() on the unit return type.
* Match Go SaveVolumeInfo: check writability and propagate errors in save_vif
* Match Go VolumeDelete: propagate only_empty to delete_volume for defense in depth
The gRPC VolumeDelete handler had a pre-check for only_empty but then
passed false to store.delete_volume(), bypassing the store-level check.
Go passes req.OnlyEmpty directly to DeleteVolume. Now Rust does the same
for defense in depth against TOCTOU races (though the store write lock
makes this unlikely).
* Match Go ProcessRangeRequest: return full content for empty/oversized ranges
Go returns nil from ProcessRangeRequest when ranges are empty or total
range size exceeds content length, causing the caller to serve the full
content as a normal 200 response. Rust was returning an empty 200 body.
* Match Go Query: quote JSON keys in output records
Go's ToJson produces valid JSON with quoted keys like {"name":"Alice"}.
Rust was producing invalid JSON with unquoted keys like {name:"Alice"}.
* Match Go VolumeCopy: reject when no suitable disk location exists
Go returns ErrVolumeNoSpaceLeft when no location matches the disk type
and has sufficient space. Rust had an unsafe fallback that silently
picked the first location regardless of type or available space.
* Match Go DeleteVolumeNeedle: check noWriteOrDelete before allowing delete
Go checks v.noWriteOrDelete before proceeding with needle deletion,
returning "volume is read only" if true. Rust was skipping this check.
* Match Go ReceiveFile: prefer HardDrive location for EC and use response-level write errors
Two fixes: (1) Go prefers HardDriveType disk location for EC volumes,
falling back to first location. Returns "no storage location available"
when no locations exist. (2) Write failures are now response-level
errors (in response body) instead of gRPC status errors, matching Go.
* Match Go CopyFile: sync EC volume journal to disk before copying
Go calls ecVolume.Sync() before copying EC volume files to ensure the
.ecj journal is flushed to disk. Added sync_to_disk() to EcVolume and
call it in the CopyFile EC branch.
* Match Go readSuperBlock: propagate replication parse errors
Go returns an error when parsing the replication string from the .vif
file fails. Rust was silently ignoring the parse failure and using the
super block's replication as-is.
* Match Go TTL expiry: remove append_at_ns > 0 guard
Go computes TTL expiry from AppendAtNs without guarding against zero.
When append_at_ns is 0, the expiry is epoch + TTL which is in the past,
correctly returning NotFound. Rust's extra guard skipped the check,
incorrectly returning success for such needles.
* Match Go delete_collection: skip volumes with compaction in progress
Go checks !v.isCompactionInProgress.Load() before destroying a volume
during collection deletion, skipping compacting volumes. Also changed
destroy errors to log instead of aborting the entire collection delete.
* Match Go MarkReadonly/MarkWritable: always notify master even on local error
Go always notifies the master regardless of whether the local
set_read_only_persist or set_writable step fails. The Rust code was
using `?` which short-circuited on error, skipping the final master
notification. Save the result and defer the `?` until after the
notify call.
* Match Go PostHandler: return 500 for all write errors
Go returns 500 (InternalServerError) for all write failures. Rust was
returning 404 for volume-not-found and 403 for read-only volumes.
* Match Go makeupDiff: validate .cpd compaction revision is old + 1
Go reads the new .cpd file's super block and verifies the compaction
revision is exactly old + 1. Rust only validated the old revision.
* Match Go VolumeStatus: check data backend before returning status
Go checks v.DataBackend != nil before building the status response,
returning an error if missing. Rust was silently returning size 0.
* Match Go PostHandler: always include mime field in upload response JSON
Go always serializes the mime field even when empty ("mime":""). Rust was
omitting it when empty due to Option<String> with skip_serializing_if.
* Match Go FindFreeLocation: account for EC shards in free slot calculation
Go subtracts EC shard equivalents when computing available volume slots.
Rust was only comparing volume count, potentially over-counting free
slots on locations with many EC shards.
* Match Go privateStoreHandler: use INVALID as metrics label for unsupported methods
Go records the method as INVALID in metrics for unsupported HTTP methods.
Rust was using the actual method name.
* Match Go volume: add commit_compact guard and scrub data size validation
Two fixes: (1) commit_compact now checks/sets is_compacting flag to
prevent concurrent commits, matching Go's CompareAndSwap guard.
(2) scrub now validates total needle sizes against .dat file size.
* Match Go gRPC: fix TailSender error propagation, EcShardsInfo all slots, EcShardRead .ecx check
Three fixes: (1) VolumeTailSender now propagates binary search errors
instead of silently falling back to start. (2) VolumeEcShardsInfo
returns entries for all shard slots including unmounted. (3)
VolumeEcShardRead checks .ecx index for deletions instead of .ecj.
* Match Go metrics: add BuildInfo gauge and connection tracking functions
Go exposes a BuildInfo Prometheus metric with version labels, and tracks
open connections via stats.ConnectionOpen/Close. Added both to Rust.
* Match Go NeedleMap.Delete: use !is_deleted() instead of is_valid()
Go's CompactMap.Delete checks !IsDeleted() not IsValid(), so needles
with size==0 (live but anomalous) can still be deleted. The Rust code
was using is_valid() which returns false for size==0, preventing
deletion of such needles.
* Match Go fitTtlCount: always normalize TTL to coarsest unit
Go's fitTtlCount always converts to seconds first, then finds the
coarsest unit that fits in one byte (e.g., 120m → 2h). Rust had an
early return for count<=255 that skipped normalization, producing
different binary encodings for the same duration.
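To make the normalization described in this entry concrete, here is a small, editor-added Go sketch of the idea (convert to seconds, then re-express the TTL in the coarsest unit whose count still fits in one byte). The unit table and function name are illustrative, not the actual SeaweedFS fitTtlCount code.

```go
package main

import "fmt"

// Illustrative unit table, coarsest first. The real implementation may use
// a different set of units and a two-pass fit.
var units = []struct {
	name string
	secs int
}{
	{"w", 7 * 24 * 3600},
	{"d", 24 * 3600},
	{"h", 3600},
	{"m", 60},
}

// fitTTL converts a TTL to seconds, then picks the coarsest unit that divides
// it evenly with a count that fits in a single byte, e.g. 120m -> 2h, 7d -> 1w.
func fitTTL(count, unitSecs int) (int, string) {
	total := count * unitSecs
	if total == 0 {
		return 0, "" // zero TTL stays empty
	}
	for _, u := range units {
		if total%u.secs == 0 && total/u.secs <= 255 {
			return total / u.secs, u.name
		}
	}
	// Does not fit in one byte for any unit; the real code handles this
	// case separately. This sketch just reports minutes.
	return total / 60, "m"
}

func main() {
	fmt.Println(fitTTL(120, 60))    // 2 h
	fmt.Println(fitTTL(24, 3600))   // 1 d
	fmt.Println(fitTTL(7, 24*3600)) // 1 w
}
```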
* Match Go BuildInfo metric: correct name and add missing labels
Go uses SeaweedFS_build_info (Namespace=SeaweedFS, Subsystem=build,
Name=info) with labels [version, commit, sizelimit, goos, goarch].
Rust had SeaweedFS_volumeServer_buildInfo with only [version].
* Match Go HTTP handlers: fix UploadResult fields, DiskStatus JSON, chunk manifest ETag
- UploadResult.mime: add skip_serializing_if to omit empty MIME (Go uses omitempty)
- UploadResult.contentMd5: only include when request provided Content-MD5 header
- Content-MD5 response header: only set when request provided it
- DiskStatuses: use camelCase field names (percentFree, percentUsed, diskType)
to match Go's protobuf JSON marshaling
- Chunk manifest: preserve needle ETag in expanded response headers
* Match Go volume: fix version(), integrity check, scrub, and commit_compact
- version(): use self.version() instead of self.super_block.version in
read_all_needles, check_volume_data_integrity, scan_raw_needles_from
to respect volumeInfo.version override
- check_volume_data_integrity: initialize healthy_index_size to idx_size
(matching Go) and continue on EOF instead of returning error
- scrub(): count deleted needles in total_read since they still occupy
space in the .dat file (matches Go's totalRead += actualSize for deleted)
- commit_compact: clean up .cpd/.cpx files on makeup_diff failure
(matches Go's error path cleanup)
* Match Go write queue: add 4MB batch byte limit
Go's startWorker breaks the batch at either 128 requests or 4MB of
accumulated write data. Rust only had the 128-request limit, allowing
large writes to accumulate unbounded latency.
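As an editorial illustration of the dual limit described in that entry (break the batch at 128 requests or 4 MB of accumulated data, whichever comes first), a self-contained Go sketch; the queue and request types are hypothetical, not the actual write-queue code.

```go
package main

import "fmt"

const (
	maxBatchReqs  = 128
	maxBatchBytes = 4 << 20 // 4 MB accumulated payload limit
)

type writeReq struct {
	data []byte
}

// collectBatch drains pending writes but stops at either the request-count
// limit or the byte limit, so a few large writes cannot grow the batch
// (and its latency) without bound.
func collectBatch(queue <-chan writeReq, first writeReq) []writeReq {
	batch := []writeReq{first}
	size := len(first.data)
	for len(batch) < maxBatchReqs && size < maxBatchBytes {
		select {
		case req := <-queue:
			batch = append(batch, req)
			size += len(req.data)
		default:
			return batch // nothing else pending
		}
	}
	return batch
}

func main() {
	queue := make(chan writeReq, 8)
	for i := 0; i < 8; i++ {
		queue <- writeReq{data: make([]byte, 1<<20)} // 1 MB each
	}
	batch := collectBatch(queue, <-queue)
	fmt.Println("requests in batch:", len(batch)) // 4: stops at the 4 MB limit
}
```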
* Add TTL normalization tests for Go parity verification
Test that fit_ttl_count normalizes 120m→2h, 24h→1d, 7d→1w even
when count fits in a byte, matching Go's fitTtlCount behavior.
* Match Go FindFreeLocation: account for EC shards in free slot calculation
Go's free volume count subtracts both regular volumes and EC volumes
from max_volume_count. Rust was only counting regular volumes, which
could over-report available slots when EC shards are mounted.
* Match Go EC volume: mark deletions in .ecx and replay .ecj at startup
Go's DeleteNeedleFromEcx marks needles as deleted in the .ecx index
in-place (writing TOMBSTONE_FILE_SIZE at the size field) in addition
to appending to the .ecj journal. Go's RebuildEcxFile replays .ecj
entries into .ecx on startup, then removes the .ecj file.
Rust was only appending to .ecj without marking .ecx, which meant
deleted EC needles remained readable via .ecx binary search. This
fix:
- Opens .ecx in read/write mode (was read-only)
- Adds mark_needle_deleted_in_ecx: binary search + in-place write
- Calls it from journal_delete before appending to .ecj
- Adds rebuild_ecx_from_journal: replays .ecj into .ecx on startup
* Match Go check_all_ec_shards_deleted: use MAX_SHARD_COUNT instead of hardcoded 14
Go's TotalShardsCount is DataShardsCount + ParityShardsCount = 14 by
default, but custom EC configs via .vif can have more shards (up to
MaxShardCount = 32). Using MAX_SHARD_COUNT ensures all shard files
are checked regardless of EC configuration.
* Match Go EC locate: subtract 1 from shard size and use datFileSize override
Go's LocateEcShardNeedleInterval passes shard.ecdFileSize-1 to
LocateData (shards are padded, -1 avoids overcounting large block
rows). When datFileSize is known, Go uses datFileSize/DataShards
instead. Rust was passing the raw shard file size without adjustment.
* Fix TTL parsing and DiskStatus field names to match Go exactly
TTL::read: Go's ReadTTL preserves the original unit (7d stays 7d,
not 1w) and errors on count > 255. The previous normalization change
was incorrect — Go only normalizes internally via fitTtlCount, not
during string parsing.
DiskStatus: Go uses encoding/json on protobuf structs, which reads
the json struct tags (snake_case: percent_free, percent_used,
disk_type), not the protobuf JSON names (camelCase). Revert to
snake_case to match Go's actual output.
* Fix heartbeat: check leader != current master before redirect, process duplicated UUIDs first
Match Go's volume_grpc_client_to_master.go behavior:
1. Only trigger leader redirect when the leader address differs from the
current master (prevents unnecessary reconnect loops when master confirms
its own address).
2. Process duplicated_uuids before leader redirect check, matching Go's
ordering where duplicate UUID detection takes priority.
* Remove SetState version check to match Go behavior
Go's SetState unconditionally applies the state without any version
mismatch check. The Rust version had an extra optimistic concurrency
check that would reject valid requests from Go clients that don't
track versions.
* Fix TTL::read() to normalize via fit_ttl_count matching Go's ReadTTL
Go's ReadTTL calls fitTtlCount which converts to seconds and normalizes
to the coarsest unit that fits in a byte count (e.g. 120m->2h, 7d->1w,
24h->1d). The Rust version was preserving the original unit, producing
different binary encodings on disk and in heartbeat messages.
* Always return Content-MD5 header and JSON field on successful writes
Go always sets Content-MD5 in the response regardless of whether the
request included it. The Rust version was conditionally including it
only when the request provided Content-MD5.
* Include name and size in UploadResult JSON even when empty/zero
Go's encoding/json always includes empty strings and zero values in
the upload response. The Rust version was using skip_serializing_if
to omit them, causing JSON structure differences.
* Include deleted needles in scan_raw_needles_from to match Go
Go's ScanVolumeFileFrom visits ALL needles including deleted ones.
Skipping deleted entries during incremental copy would cause tombstones
to not be propagated, making deleted files reappear on the receiving side.
* Match Go NeedleMap.Delete: always write tombstone to idx file
Go's NeedleMap.Delete unconditionally writes a tombstone entry to the
idx file and updates metrics, even if the needle doesn't exist or is
already deleted. This is important for replication where every delete
operation must produce an idx write. The Rust version was skipping the
tombstone write for non-existent or already-deleted needles.
* Limit MIME type to 255 bytes matching Go's CreateNeedleFromRequest
* Title-case Seaweed-* pair keys to match Go HTTP header canonicalization
* Unify DiskType::Hdd into HardDrive to match Go's single HardDriveType
* Skip tombstone entries in walk_ecx_stats total_size matching Go's Raw()
* Return EMPTY TTL when computed seconds is zero matching Go's fitTtlCount
* Include disk-space-low in Volume.is_read_only() matching Go
* Log error on CIDR parse failure in whitelist matching Go's glog.Errorf
* Log cookie mismatch in gRPC Query matching Go's V(0).Infof
* Fix is_expired volume_size comparison to use < matching Go
Go checks `volumeSize < super_block.SuperBlockSize` (strict less-than),
but Rust used `<=`. This meant Rust would fail to expire a volume that
is exactly SUPER_BLOCK_SIZE bytes.
* Apply Go's JWT expiry defaults: 10s write, 60s read
Go calls v.SetDefault("jwt.signing.expires_after_seconds", 10) and
v.SetDefault("jwt.signing.read.expires_after_seconds", 60). Rust
defaulted to 0 for both, which meant tokens would never expire when
security.toml has a signing key but omits expires_after_seconds.
* Stop [grpc.volume].ca from overriding [grpc].ca matching Go
Go reads the gRPC CA file only from config.GetString("grpc.ca"), i.e.
the [grpc] section. The [grpc.volume] section only provides cert and
key. Rust was also reading ca from [grpc.volume] which would silently
override the [grpc].ca value when both were present.
* Fix free_volume_count to use EC shard count matching Go
Was counting EC volumes instead of EC shards, which underestimates EC
space usage. One EC volume with 14 shards uses ~1.4 volume slots, not 1.
Now uses Go's formula: ((max - volumes) * DataShardsCount - ecShardCount) / DataShardsCount.
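The quoted formula, transcribed into a runnable Go sketch (field names are illustrative); with the default 10 data shards, the 14 shards of one EC volume cost 1.4 slots rather than 1.

```go
package main

import "fmt"

// freeVolumeSlots applies the formula quoted in the commit message:
// ((max - volumes) * DataShardsCount - ecShardCount) / DataShardsCount.
func freeVolumeSlots(maxVolumes, volumes, ecShards, dataShards int) int {
	return ((maxVolumes-volumes)*dataShards - ecShards) / dataShards
}

func main() {
	// A 100-slot location with 10 regular volumes and one EC volume's
	// 14 shards mounted: 88 free slots remain (88.6 truncated).
	fmt.Println(freeVolumeSlots(100, 10, 14, 10)) // 88
}
```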
* Include preallocate in compaction space check matching Go
Go uses max(preallocate, estimatedCompactSize) for the free space check.
Rust was only using the estimated volume size, which could start a
compaction that fails mid-way if preallocate exceeds the volume size.
* Check gzip magic bytes before setting Content-Encoding matching Go
Go checks both Accept-Encoding contains "gzip" AND IsGzippedContent
(data starts with 0x1f 0x8b) before setting Content-Encoding: gzip.
Rust only checked Accept-Encoding, which could incorrectly declare
gzip encoding for non-gzip compressed data.
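A brief Go sketch of the two-part check described above: only advertise gzip when the client accepts it and the stored bytes actually start with the gzip magic 0x1f 0x8b. The helper names are illustrative, not the exact SeaweedFS functions.

```go
package main

import (
	"fmt"
	"strings"
)

// isGzippedContent reports whether data begins with the gzip magic bytes.
func isGzippedContent(data []byte) bool {
	return len(data) >= 2 && data[0] == 0x1f && data[1] == 0x8b
}

// shouldDeclareGzip combines both conditions: the client must accept gzip
// AND the payload must really be gzip-compressed.
func shouldDeclareGzip(acceptEncoding string, data []byte) bool {
	return strings.Contains(acceptEncoding, "gzip") && isGzippedContent(data)
}

func main() {
	fmt.Println(shouldDeclareGzip("gzip, deflate", []byte{0x1f, 0x8b, 0x08})) // true
	fmt.Println(shouldDeclareGzip("gzip, deflate", []byte("plain text")))     // false
}
```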
* Only set upload response name when needle HasName matching Go
Go checks reqNeedle.HasName() before setting ret.Name. Rust always set
the name from the filename variable, which could return the fid portion
of the path as the name for raw PUT requests without a filename.
* Treat MaxVolumeCount==0 as unlimited matching Go's hasFreeDiskLocation
Go's hasFreeDiskLocation returns true immediately when MaxVolumeCount
is 0, treating it as unlimited. Rust was computing effective_free as
<= 0 for max==0, rejecting the location. This could fail volume
creation during early startup before the first heartbeat adjusts max.
* Read lastAppendAtNs from deleted V3 entries in integrity check
Go's doCheckAndFixVolumeData reads AppendAtNs from both live entries
(verifyNeedleIntegrity) and deleted tombstones (verifyDeletedNeedleIntegrity).
Rust was skipping deleted entries, which could result in a stale
last_append_at_ns if the last index entry is a deletion.
* Return empty body for empty/oversized range requests matching Go
Go's ProcessRangeRequest returns nil (empty body, 200 OK) when
parsed ranges are empty or combined range size exceeds total content
size. The Rust buffered path incorrectly returned the full file data
for both cases. The streaming path already handled this correctly.
* Dispatch ScrubEcVolume by mode matching Go's INDEX/LOCAL/FULL
Go's ScrubEcVolume switches on mode: INDEX calls v.ScrubIndex()
(ecx integrity only), LOCAL calls v.ScrubLocal(), FULL calls
vs.store.ScrubEcVolume(). Rust was ignoring the mode and always
running verify_ec_shards. Now INDEX mode checks ecx index integrity
(sorted overlap detection + file size validation) without shard I/O,
while LOCAL/FULL modes run the existing shard verification.
* Fix TTL test expectation: 7d normalizes to 1w matching Go's fitTtlCount
Go's ReadTTL calls fitTtlCount which normalizes to the coarsest unit
that fits: 7 days = 1 week, so "7d" becomes {Count:1, Unit:Week}
which displays as "1w". Both Go and Rust normalize identically.
* Add version mismatch check to SetState matching Go's State.Update
Go's State.Update compares the incoming version with the stored
version and returns "version mismatch" error if they differ. This
provides optimistic concurrency control. The Rust implementation
was accepting any version unconditionally.
* Use unquoted keys in Query JSON output matching Go's json.ToJson
Go's json.ToJson produces records with unquoted keys like
{score:12} not {"score":12}. This is a custom format used
internally by SeaweedFS for query results.
* Fix TTL test expectation in VolumeNeedleStatus: 7d normalizes to 1w
Same normalization as the HTTP test: Go's ReadTTL calls fitTtlCount
which converts 7 days to 1 week.
* Include ETag header in 304 Not Modified responses matching Go behavior
Go sets ETag on the response writer (via SetEtag) before the
If-Modified-Since and If-None-Match conditional checks, so both
304 response paths include the ETag header. The Rust implementation
was only adding ETag to 200 responses.
* Remove needle-name fallback in chunk manifest filename resolution
Go's tryHandleChunkedFile only falls back from URL filename to
manifest name. Rust had an extra fallback to needle.name that
Go does not perform, which could produce different
Content-Disposition filenames for chunk manifests.
* Validate JWT nbf (Not Before) claim matching Go's jwt-go/v5
Go's jwt.ParseWithClaims validates the nbf claim when present,
rejecting tokens whose nbf is in the future. The Rust jsonwebtoken
crate defaults validate_nbf to false, so tokens with future nbf
were incorrectly accepted.
* Set isHeartbeating to true at startup matching Go's VolumeServer init
Go unconditionally sets isHeartbeating: true in the VolumeServer
struct literal. Rust was starting with false when masters are
configured, causing /healthz to return 503 until the first
heartbeat succeeds.
* Call store.close() on shutdown matching Go's Shutdown()
Go's Shutdown() calls vs.store.Close() which closes all volumes
and flushes file handles. The Rust server was relying on process
exit for cleanup, which could leave data unflushed.
* Include server ID in maintenance mode error matching Go's format
Go returns "volume server %s is in maintenance mode" with the
store ID. Rust was returning a generic "maintenance mode" message.
* Fix DiskType test: use HardDrive variant matching Go's HddType=""
Go maps both "" and "hdd" to HardDriveType (empty string). The
Rust enum variant is HardDrive, not Hdd. The test referenced a
nonexistent Hdd variant causing compilation failure.
* Do not include ETag in 304 responses matching Go's GetOrHeadHandler
Go sets ETag at L235 AFTER the If-Modified-Since and If-None-Match
304 return paths, so Go's 304 responses do not include the ETag header.
The Rust code was incorrectly including ETag in both 304 response paths.
* Return 400 on malformed query strings in PostHandler matching Go's ParseForm
Go's r.ParseForm() returns HTTP 400 with "form parse error: ..." when
the query string is malformed. Rust was silently falling back to empty
query params via unwrap_or_default().
* Load EC volume version from .vif matching Go's NewEcVolume
Go sets ev.Version = needle.Version(volumeInfo.Version) from the .vif
file. Rust was always using Version::current() (V3), which would produce
wrong needle actual size calculations for volumes created with V1 or V2.
* Sync .ecx file before close matching Go's EcVolume.Close
Go calls ev.ecxFile.Sync() before closing to ensure in-place deletion
marks are flushed to disk. Without this, deletion marks written via
MarkNeedleDeleted could be lost on crash.
* Validate SuperBlock extra data size matching Go's Bytes() guard
Go checks extraSize > 256*256-2 and calls glog.Fatalf to prevent
corrupt super block headers. Rust was silently truncating via u16 cast,
which would write an incorrect extra_size field.
* Update quinn-proto 0.11.13 -> 0.11.14 to fix GHSA-6xvm-j4wr-6v98
Fixes Dependency Review CI failure: quinn-proto < 0.11.14 is vulnerable
to unauthenticated remote DoS via panic in QUIC transport parameter
parsing.
* Skip TestMultipartUploadUsesFormFieldsForTimestampAndTTL for Go server
Go's r.FormValue() cannot read multipart text fields after
r.MultipartReader() consumes the body, so ts/ttl sent as multipart
form fields only work with the Rust volume server. Skip this test
when VOLUME_SERVER_IMPL != "rust" to fix CI failure.
* Flush .ecx in EC volume sync_to_disk matching Go's Sync()
Go's EcVolume.Sync() flushes both the .ecj journal and the .ecx index
to disk. The Rust version only flushed .ecj, leaving in-place deletion
marks in .ecx unpersisted until close(). This could cause data
inconsistency if the server crashes after marking a needle deleted in
.ecx but before close().
* Remove .vif file in EC volume destroy matching Go's Destroy()
Go's EcVolume.Destroy() removes .ecx, .ecj, and .vif files. The Rust
version only removed .ecx and .ecj, leaving orphaned .vif files on
disk after EC volume destruction (e.g., after TTL expiry).
* Fix is_expired to use <= for SuperBlockSize check matching Go
Go checks contentSize <= SuperBlockSize to detect empty volumes (no
needles). Rust used < which would incorrectly allow a volume with
exactly SuperBlockSize bytes (header only, no data) to proceed to
the TTL expiry check and potentially be marked as expired.
* Fix read_append_at_ns to read timestamps from tombstone entries
Go reads the full needle body for all entries including tombstones
(deleted needles with size=0) to extract the actual AppendAtNs
timestamp. The Rust version returned 0 early for size <= 0 entries,
which would cause the binary search in incremental copy to produce
incorrect results for positions containing deleted needles.
Now uses get_actual_size to compute the on-disk size (which handles
tombstones correctly) and only returns 0 when the actual size is 0.
* Add X-Request-Id response header matching Go's requestIDMiddleware
Go sets both X-Request-Id and x-amz-request-id response headers.
The Rust server only set x-amz-request-id, missing X-Request-Id.
* Add skip_serializing_if for UploadResult name and size fields
Go's UploadResult uses json:"name,omitempty" and json:"size,omitempty",
omitting these fields from JSON when they are zero values (empty
string / 0). The Rust struct always serialized them, producing
"name":"" and "size":0 where Go would omit them.
* Support JSONP/pretty-print for write success responses
Go's writeJsonQuiet checks for callback (JSONP) and pretty query
parameters on all JSON responses including write success. The Rust
write success path used axum::Json directly, bypassing JSONP and
pretty-print support. Now uses json_result_with_query to match Go.
* Include actual limit in file size limit error message
Go returns "file over the limited %d bytes" with the actual limit
value included. Rust returned a generic "file size limit exceeded"
without the limit value, making it harder to debug.
* Extract extension from 2-segment URL paths for image operations
Go's parseURLPath extracts the file extension from all URL formats
including 2-segment paths like /vid,fid.jpg. The Rust version only
handled 3-segment paths (/vid/fid/filename.ext), so extensions in
2-segment paths were lost. This caused image resize/crop operations
requested via query params to be silently skipped for those paths.
* Add size_hint to TrackedBody so throttled downloads get Content-Length
TrackedBody (used for download throttling) did not implement
size_hint(), causing HTTP/1.1 to fall back to chunked transfer
encoding instead of setting Content-Length. Go always sets
Content-Length explicitly for non-range responses.
* Add Last-Modified, pairs, and S3 headers to chunk manifest responses
Go sets Last-Modified, needle pairs, and S3 pass-through headers on
the response writer BEFORE calling tryHandleChunkedFile. Since the
Rust chunk manifest handler created fresh response headers and
returned early, these headers were missing from chunk manifest
responses. Now passes last_modified_str into the chunk manifest
handler and applies pairs and S3 pass-through query params
(response-cache-control, response-content-encoding, etc.) to the
chunk manifest response headers.
* Fix multipart fallback to use first part data when no filename
Go reads the first part's data unconditionally, then looks for a
part with a filename. If none found, Go uses the first part's data
(with empty filename). Rust only captured parts with filenames, so
when no part had a filename it fell back to the raw multipart body
bytes (including boundary delimiters), producing corrupt needle data.
* Set HasName and HasMime flags for empty values matching Go
Go's CreateNeedleFromRequest sets HasName and HasMime flags even when
the filename or MIME type is empty (len < 256 is true for len 0).
Rust skipped empty values, causing the on-disk needle format to
differ: Go-written needles include extra bytes for the empty name/mime
size fields, changing the serialized needle size in the idx entry.
This ensures binary format compatibility between Go and Rust servers.
* Add is_stopping guard to vacuum_volume_commit matching Go
Go's CommitCompactVolume (store_vacuum.go L53-54) checks
s.isStopping before committing compaction to prevent file
swaps during shutdown. The Rust handler was missing this
check, which could allow compaction commits while the
server is stopping.
* Remove disk_type from required status fields since Go omits it
Go's default DiskType is "" (HardDriveType), and protobuf's omitempty
tag causes empty strings to be dropped from JSON output.
* test: honor rust env in dual volume harness
* grpc: notify master after volume lifecycle changes
* http: proxy to replicas before download-limit timeout
* test: pass readMode to rust volume harnesses
* fix store free-location predicate selection
* fix volume copy disk placement and heartbeat notification
* fix chunk manifest delete replication
* fix write replication to survive client disconnects
* fix download limit proxy and wait flow
* fix crop gating for streamed reads
* fix upload limit wait counter behavior
* fix chunk manifest image transforms
* fix has_resize_ops to check width/height > 0 instead of is_some()
Go's shouldResizeImages condition is `width > 0 || height > 0`, so
`?width=0` correctly evaluates to false. Rust was using `is_some()`
which made `?width=0` evaluate to true, unnecessarily disabling
streaming reads for those requests.
* fix Content-MD5 to only compute and return when provided by client
Go only computes the MD5 of uncompressed data when a Content-MD5
header or multipart field is provided. Rust was always computing and
returning it. Also fix the mismatch error message to include size,
matching Go's format.
* fix save_vif to compute ExpireAtSec from TTL
Go's SaveVolumeInfo always computes ExpireAtSec = now + ttlSeconds
when the volume has a TTL. The save_vif path (used by set_read_only
and set_writable) was missing this computation, causing .vif files
to be written without the correct expiration timestamp for TTL volumes.
* fix set_writable to not modify no_write_can_delete
Go's MarkVolumeWritable only sets noWriteOrDelete=false and persists.
Rust was additionally setting no_write_can_delete=has_remote_file,
which could incorrectly change the write mode for remote-file volumes
when the master explicitly asks to make the volume writable.
* fix write_needle_blob_and_index to error on too-small V3 blob
Go returns an error when the needle blob is too small for timestamp
patching. Rust was silently skipping the patch and writing the blob
with a stale/zero timestamp, which could cause data integrity issues
during incremental replication that relies on AppendAtNs ordering.
* fix VolumeEcShardsToVolume to validate dataShards range
Go validates that dataShards is > 0 and <= MaxShardCount before
proceeding with EC-to-volume reconstruction. Without this check,
a zero or excessively large data_shards value could cause confusing
downstream failures.
* fix destroy to use VolumeError::NotEmpty instead of generic Io error
The dedicated NotEmpty variant exists in the enum but was not being
used. This makes error matching consistent with Go's ErrVolumeNotEmpty.
* fix SetState to persist state to disk with rollback on failure
Go's State.Update saves VolumeServerState to a state.pb file after
each SetState call, and rolls back the in-memory state if persistence
fails. Rust was only updating in-memory atomics, so maintenance mode
would be lost on server restart. Now saves protobuf-encoded state.pb
and loads it on startup.
* fix VolumeTierMoveDatToRemote to close local dat backend after upload
Go calls v.LoadRemoteFile() after saving volume info, which closes
the local DataBackend before transitioning to remote storage. Without
this, the volume holds a stale file handle to the deleted local .dat
file, causing reads to fail until server restart.
* fix VolumeTierMoveDatFromRemote to close remote dat backend after download
Go calls v.DataBackend.Close() and sets DataBackend=nil after removing
the remote file reference. Without this, the stale remote backend
state lingers and reads may not discover the newly downloaded local
.dat file until server restart.
* fix redirect to use internal url instead of public_url
Go's proxyReqToTargetServer builds the redirect Location header from
loc.Url (the internal URL), not publicUrl. Using public_url could
cause redirect failures when internal and external URLs differ.
* fix redirect test and add state_file_path to integration test
Update redirect unit test to expect internal url (matching the
previous fix). Add missing state_file_path field to the integration
test VolumeServerState constructor.
* fix FetchAndWriteNeedle to await all writes before checking errors
Go uses a WaitGroup to await all writes (local + replicas) before
checking errors. Rust was short-circuiting on local write failure,
which could leave replica writes in-flight without waiting for
completion.
* fix shutdown to send deregister heartbeat before pre_stop delay
Go's StopHeartbeat() closes stopChan immediately on interrupt, causing
the heartbeat goroutine to send the deregister heartbeat right away,
before the preStopSeconds delay. Rust was only setting is_stopping=true
without waking the heartbeat loop, so the deregister was delayed until
after the pre_stop sleep. Now we call volume_state_notify.notify_one()
to wake the heartbeat immediately.
* fix heartbeat response ordering to check duplicate UUIDs first
Go processes heartbeat responses in this order: DuplicatedUuids first,
then volume options (prealloc/size limit), then leader redirect. Rust
was applying volume options before checking for duplicate UUIDs, which
meant volume option changes would take effect even when the response
contained a duplicate UUID error that should cause an immediate return.
* the test thread was blocked
* fix(deps): update aws-lc-sys 0.38.0 → 0.39.0 to resolve security advisories
Bumps aws-lc-rs 1.16.1 → 1.16.2, pulling in aws-lc-sys 0.39.0 which
fixes GHSA-394x-vwmw-crm3 (X.509 Name Constraints wildcard/unicode
bypass) and GHSA-9f94-5g5w-gf6r (CRL Distribution Point scope check
logic error).
* fix: match Go Content-MD5 mismatch error message format
Go uses "Content-MD5 did not match md5 of file data expected [X]
received [Y] size Z" while Rust had a shorter format. Match the
exact Go error string so clients see identical messages.
* fix: match Go Bearer token length check (> 7, not >= 7)
Go requires len(bearer) > 7 ensuring at least one char after
"Bearer ". Rust used >= 7 which would accept an empty token.
* fix(deps): drop legacy rustls 0.21 to resolve rustls-webpki GHSA-pwjx-qhcg-rvj4
aws-sdk-s3's default "rustls" feature enables tls-rustls in
aws-smithy-runtime, which pulls in legacy-rustls-ring (rustls 0.21
→ rustls-webpki 0.101.7, moderate CRL advisory). Replace with
explicit default-https-client which uses only rustls 0.23 /
rustls-webpki 0.103.9.
* fix: use uploaded filename for auto-compression extension detection
Go extracts the file extension from pu.FileName (the uploaded
filename) for auto-compression decisions. Rust was using the URL
path, which typically has no extension for SeaweedFS file IDs.
* fix: add CRC legacy Value() backward-compat check on needle read
Go double-checks CRC: n.Checksum != crc && uint32(n.Checksum) !=
crc.Value(). The Value() path is a deprecated transform for compat
with seaweed versions prior to commit
94bfa2b340
mount: stream all filer mutations over single ordered gRPC stream (#8770)
* filer: add StreamMutateEntry bidi streaming RPC Add a bidirectional streaming RPC that carries all filer mutation types (create, update, delete, rename) over a single ordered stream. This eliminates per-request connection overhead for pipelined operations and guarantees mutation ordering within a stream. The server handler delegates each request to the existing unary handlers (CreateEntry, UpdateEntry, DeleteEntry) and uses a proxy stream adapter for rename operations to reuse StreamRenameEntry logic. The is_last field signals completion for multi-response operations (rename sends multiple events per request; create/update/delete always send exactly one response with is_last=true). * mount: add streaming mutation multiplexer (streamMutateMux) Implement a client-side multiplexer that routes all filer mutation RPCs (create, update, delete, rename) over a single bidirectional gRPC stream. Multiple goroutines submit requests through a send channel; a dedicated sendLoop serializes them on the stream; a recvLoop dispatches responses to waiting callers via per-request channels. Key features: - Lazy stream opening on first use - Automatic reconnection on stream failure - Permanent fallback to unary RPCs if filer returns Unimplemented - Monotonic request_id for response correlation - Multi-response support for rename operations (is_last signaling) The mux is initialized on WFS and closed during unmount cleanup. No call sites use it yet — wiring comes in subsequent commits. * mount: route CreateEntry and UpdateEntry through streaming mux Wire all CreateEntry call sites to use wfs.streamCreateEntry() which routes through the StreamMutateEntry stream when available, falling back to unary RPCs otherwise. Also wire Link's UpdateEntry calls through wfs.streamUpdateEntry(). Updated call sites: - flushMetadataToFiler (file flush after write) - Mkdir (directory creation) - Symlink (symbolic link creation) - createRegularFile non-deferred path (Mknod) - flushFileMetadata (periodic metadata flush) - Link (hard link: update source + create link + rollback) * mount: route UpdateEntry and DeleteEntry through streaming mux Wire remaining mutation call sites through the streaming mux: - saveEntry (Setattr/chmod/chown/utimes) → streamUpdateEntry - Unlink → streamDeleteEntry (replaces RemoveWithResponse) - Rmdir → streamDeleteEntry (replaces RemoveWithResponse) All filer mutations except Rename now go through StreamMutateEntry when the filer supports it, with automatic unary RPC fallback. * mount: route Rename through streaming mux Wire Rename to use streamMutate.Rename() when available, with fallback to the existing StreamRenameEntry unary stream. The streaming mux sends rename as a StreamRenameEntryRequest oneof variant. The server processes it through the existing rename logic and sends multiple StreamRenameEntryResponse events (one per moved entry), with is_last=true on the final response. All filer mutations now go through a single ordered stream. * mount: fix stream mux connection ownership WithGrpcClient(streamingMode=true) closes the gRPC connection when the callback returns, destroying the stream. Own the connection directly via pb.GrpcDial so it stays alive for the stream's lifetime. Close it explicitly in recvLoop on stream failure and in Close on shutdown. * mount: fix rename failure for deferred-create files Three fixes for rename operations over the streaming mux: 1. lookupEntry: fall back to local metadata store when filer returns "not found" for entries in uncached directories. 
Files created with deferFilerCreate=true exist only in the local leveldb store until flushed; lookupEntry skipped the local store when the parent directory had never been readdir'd, causing rename to fail with ENOENT. 2. Rename: wait for pending async flushes and force synchronous flush of dirty metadata before sending rename to the filer. Covers the writebackCache case where close() defers the flush to a background worker that may not complete before rename fires. 3. StreamMutateEntry: propagate rename errors from server to client. Add error/errno fields to StreamMutateEntryResponse so the mount can map filer errors to correct FUSE status codes instead of silently returning OK. Also fix the existing Rename error handler which could return fuse.OK on unrecognized errors. * mount: fix streaming mux error handling, sendLoop lifecycle, and fallback Address PR review comments: 1. Server: populate top-level Error/Errno on StreamMutateEntryResponse for create/update/delete errors, not just rename. Previously update errors were silently dropped and create/delete errors were only in nested response fields that the client didn't check. 2. Client: check nested error fields in CreateEntry (ErrorCode, Error) and DeleteEntry (Error) responses, matching CreateEntryWithResponse behavior. 3. Fix sendLoop lifecycle: give each stream generation a stopSend channel. recvLoop closes it on error to stop the paired sendLoop. Previously a reconnect left the old sendLoop draining sendCh, breaking ordering. 4. Transparent fallback: stream helpers and doRename fall back to unary RPCs on transport errors (ErrStreamTransport), including the first Unimplemented from ensureStream. Previously the first call failed instead of degrading. 5. Filer rotation in openStream: try all filer addresses on dial failure, matching WithFilerClient behavior. Stop early on Unimplemented. 6. Pass metadata-bearing context to StreamMutateEntry RPC call so sw-client-id header is actually sent. 7. Gate lookupEntry local-cache fallback on open dirty handle or pending async flush to avoid resurrecting deleted/renamed entries. 8. Remove dead code in flushFileMetadata (err=nil followed by if err!=nil). 9. Use string matching for rename error-to-errno mapping in the mount to stay portable across Linux/macOS (numeric errno values differ). * mount: make failAllPending idempotent with delete-before-close Change failAllPending to collect pending entries into a local slice (deleting from the sync.Map first) before closing channels. This prevents double-close panics if called concurrently. Also remove the unused err parameter. * mount: add stream generation tracking and teardownStream Introduce a generation counter on streamMutateMux that increments each time a new stream is created. Requests carry the generation they were enqueued for so sendLoop can reject stale requests after reconnect. Add teardownStream(gen) which is idempotent (only acts when gen matches current generation and stream is non-nil). Both sendLoop and recvLoop call it on error, replacing the inline cleanup in recvLoop. sendLoop now actively triggers teardown on send errors instead of silently exiting. ensureStream waits for the prior generation's recvDone before creating a new stream, ensuring all old pending waiters are failed before reconnect. recvLoop now takes the stream, generation, and recvDone channel as parameters to avoid accessing shared fields without the lock. 
* mount: harden Close to prevent races with teardownStream Nil out stream, cancel, and grpcConn under the lock so that any concurrent teardownStream call from recvLoop/sendLoop becomes a no-op. Call failAllPending before closing sendCh to unblock waiters promptly. Guard recvDone with a nil check for the case where Close is called before any stream was ever opened. * mount: make errCh receive ctx-aware in doUnary and Rename Replace the blocking <-sendReq.errCh with a select that also observes ctx.Done(). If sendLoop exits via stopSend without consuming a buffered request, the caller now returns ctx.Err() instead of blocking forever. The buffered errCh (capacity 1) ensures late acknowledgements from sendLoop don't block the sender. * mount: fix sendLoop/Close race and recvLoop/teardown pending channel race Three related fixes: 1. Stop closing sendCh in Close(). Closing the shared producer channel races with callers who passed ensureStream() but haven't sent yet, causing send-on-closed-channel panics. sendCh is now left open; ensureStream checks m.closed to reject new callers. 2. Drain buffered sendCh items on shutdown. sendLoop defers drainSendCh() on exit so buffered requests get an ErrStreamTransport on their errCh instead of blocking forever. Close() drains again for any stragglers enqueued between sendLoop's drain and the final shutdown. 3. Move failAllPending from teardownStream into recvLoop's defer. teardownStream (called from sendLoop on send error) was closing pending response channels while recvLoop could be between pending.Load and the channel send — a send-on-closed-channel panic. recvLoop is now the sole closer of pending channels, eliminating the race. Close() waits on recvDone (with cancel() to guarantee Recv unblocks) so pending cleanup always completes. * filer/mount: add debug logging for hardlink lifecycle Add V(0) logging at every point where a HardLinkId is created, stored, read, or deleted to trace orphaned hardlink references. Logging covers: - gRPC server: CreateEntry/UpdateEntry when request carries HardLinkId - FilerStoreWrapper: InsertEntry/UpdateEntry when entry has HardLinkId - handleUpdateToHardLinks: entry path, HardLinkId, counter, chunk count - setHardLink: KvPut with blob size - maybeReadHardLink: V(1) on read attempt and successful decode - DeleteHardLink: counter decrement/deletion events - Mount Link(): when NewHardLinkId is generated and link is created This helps diagnose how a git pack .rev file ended up with a HardLinkId during a clone (no hard links should be involved). * test: add git clone/pull integration test for FUSE mount Shell script that exercises git operations on a SeaweedFS mount: 1. Creates a bare repo on the mount 2. Clones locally, makes 3 commits, pushes to mount 3. Clones from mount bare repo into an on-mount working dir 4. Verifies clone integrity (files, content, commit hashes) 5. Pushes 2 more commits with renames and deletes 6. Checks out an older revision on the mount clone 7. Returns to branch and pulls with real changes 8. Verifies file content, renames, deletes after pull 9. Checks git log integrity and clean status 27 assertions covering file existence, content, commit hashes, file counts, renames, deletes, and git status. Run against any existing mount: bash test-git-on-mount.sh /path/to/mount * test: add git clone/pull FUSE integration test to CI suite Add TestGitOperations to the existing fuse_integration test framework. The test exercises git's full file operation surface on the mount: 1. 
Creates a bare repo on the mount (acts as remote) 2. Clones locally, makes 3 commits (files, bulk data, renames), pushes 3. Clones from mount bare repo into an on-mount working dir 4. Verifies clone integrity (content, commit hash, file count) 5. Pushes 2 more commits with new files, renames, and deletes 6. Checks out an older revision on the mount clone 7. Returns to branch and pulls with real fast-forward changes 8. Verifies post-pull state: content, renames, deletes, file counts 9. Checks git log integrity (5 commits) and clean status Runs automatically in the existing fuse-integration.yml CI workflow. * mount: fix permission check with uid/gid mapping The permission checks in createRegularFile() and Access() compared the caller's local uid/gid against the entry's filer-side uid/gid without applying the uid/gid mapper. With -map.uid 501:0, a directory created as uid 0 on the filer would not match the local caller uid 501, causing hasAccess() to fall through to "other" permission bits and reject write access (0755 → other has r-x, no w). Fix: map entry uid/gid from filer-space to local-space before the hasAccess() call so both sides are in the same namespace. This fixes rsync -a failing with "Permission denied" on mkstempat when using uid/gid mapping. * mount: fix Mkdir/Symlink returning filer-side uid/gid to kernel Mkdir and Symlink used `defer wfs.mapPbIdFromFilerToLocal(entry)` to restore local uid/gid, but `outputPbEntry` writes the kernel response before the function returns — so the kernel received filer-side uid/gid (e.g., 0:0). macFUSE then caches these and rejects subsequent child operations (mkdir, create) because the caller uid (501) doesn't match the directory owner (0), and "other" bits (0755 → r-x) lack write permission. Fix: replace the defer with an explicit call to mapPbIdFromFilerToLocal before outputPbEntry, so the kernel gets local uid/gid. Also add nil guards for UidGidMapper in Access and createRegularFile to prevent panics in tests that don't configure a mapper. This fixes rsync -a "Permission denied" on mkpathat for nested directories when using uid/gid mapping. * mount: fix Link outputting filer-side uid/gid to kernel, add nil guards Link had the same defer-before-outputPbEntry bug as Mkdir and Symlink: the kernel received filer-side uid/gid because the defer hadn't run yet when outputPbEntry wrote the response. Also add nil guards for UidGidMapper in Access and createRegularFile so tests without a mapper don't panic. Audit of all outputPbEntry/outputFilerEntry call sites: - Mkdir: fixed in prior commit (explicit map before output) - Symlink: fixed in prior commit (explicit map before output) - Link: fixed here (explicit map before output) - Create (existing file): entry from maybeLoadEntry (already mapped) - Create (deferred): entry has local uid/gid (never mapped to filer) - Create (non-deferred): createRegularFile defer runs before return - Mknod: createRegularFile defer runs before return - Lookup: entry from lookupEntry (already mapped) - GetAttr: entry from maybeReadEntry/maybeLoadEntry (already mapped) - readdir: entry from cache (mapIdFromFilerToLocal) or filer (mapped) - saveEntry: no kernel output - flushMetadataToFiler: no kernel output - flushFileMetadata: no kernel output * test: fix git test for same-filesystem FUSE clone When both the bare repo and working clone live on the same FUSE mount, git's local transport uses hardlinks and cross-repo stat calls that fail on FUSE. 
Fix: - Use --no-local on clone to disable local transport optimizations - Use reset --hard instead of checkout to stay on branch - Use fetch + reset --hard origin/<branch> instead of git pull to avoid local transport stat failures during fetch * adjust logging * test: use plain git clone/pull to exercise real FUSE behavior Remove --no-local and fetch+reset workarounds. The test should use the same git commands users run (clone, reset --hard, pull) so it reveals real FUSE issues rather than hiding them. * test: enable V(1) logging for filer/mount and collect logs on failure - Run filer and mount with -v=1 so hardlink lifecycle logs (V(0): create/delete/insert, V(1): read attempts) are captured - On test failure, automatically dump last 16KB of all process logs (master, volume, filer, mount) to test output - Copy process logs to /tmp/seaweedfs-fuse-logs/ for CI artifact upload - Update CI workflow to upload SeaweedFS process logs alongside test output * mount: clone entry for filer flush to prevent uid/gid race flushMetadataToFiler and flushFileMetadata used entry.GetEntry() which returns the file handle's live proto entry pointer, then mutated it in-place via mapPbIdFromLocalToFiler. During the gRPC call window, a concurrent Lookup (which takes entryLock.RLock but NOT fhLockTable) could observe filer-side uid/gid (e.g., 0:0) on the file handle entry and return it to the kernel. The kernel caches these attributes, so subsequent opens by the local user (uid 501) fail with EACCES. Fix: proto.Clone the entry before mapping uid/gid for the filer request. The file handle's live entry is never mutated, so concurrent Lookup always sees local uid/gid. This fixes the intermittent "Permission denied" on .git/FETCH_HEAD after the first git pull on a mount with uid/gid mapping. * mount: add debug logging for stale lock file investigation Add V(0) logging to trace the HEAD.lock recreation issue: - Create: log when O_EXCL fails (file already exists) with uid/gid/mode - completeAsyncFlush: log resolved path, saved path, dirtyMetadata, isDeleted at entry to trace whether async flush fires after rename - flushMetadataToFiler: log the dir/name/fullpath being flushed This will show whether the async flush is recreating the lock file after git renames HEAD.lock → HEAD. * mount: prevent async flush from recreating renamed .lock files When git renames HEAD.lock → HEAD, the async flush from the prior close() can run AFTER the rename and re-insert HEAD.lock into the meta cache via its CreateEntryRequest response event. The next git pull then sees HEAD.lock and fails with "File exists". Fix: add isRenamed flag on FileHandle, set by Rename before waiting for the pending async flush. The async flush checks this flag and skips the metadata flush for renamed files (same pattern as isDeleted for unlinked files). The data pages still flush normally. The Rename handler flushes deferred metadata synchronously (Case 1) before setting isRenamed, ensuring the entry exists on the filer for the rename to proceed. For already-released handles (Case 2), the entry was created by a prior flush. * mount: also mark renamed inodes via entry.Attributes.Inode fallback When GetInode fails (Forget already removed the inode mapping), the Rename handler couldn't find the pending async flush to set isRenamed. The async flush then recreated the .lock file on the filer. Fix: fall back to oldEntry.Attributes.Inode to find the pending async flush when the inode-to-path mapping is gone. 
Also extract MarkInodeRenamed into a method on FileHandleToInode for clarity. * mount: skip async metadata flush when saved path no longer maps to inode The isRenamed flag approach failed for refs/remotes/origin/HEAD.lock because neither GetInode nor oldEntry.Attributes.Inode could find the inode (Forget already evicted the mapping, and the entry's stored inode was 0). Add a direct check in completeAsyncFlush: before flushing metadata, verify that the saved path still maps to this inode in the inode-to-path table. If the path was renamed or removed (inode mismatch or not found), skip the metadata flush to avoid recreating a stale entry. This catches all rename cases regardless of whether the Rename handler could set the isRenamed flag. * mount: wait for pending async flush in Unlink before filer delete Unlink was deleting the filer entry first, then marking the draining async-flush handle as deleted. The async flush worker could race between these two operations and recreate the just-unlinked entry on the filer. This caused git's .lock files (e.g. refs/remotes/origin/HEAD.lock) to persist after git pull, breaking subsequent git operations. Move the isDeleted marking and add waitForPendingAsyncFlush() before the filer delete so any in-flight flush completes first. Even if the worker raced past the isDeleted check, the wait ensures it finishes before the filer delete cleans up any recreated entry. * mount: reduce async flush and metadata flush log verbosity Raise completeAsyncFlush entry log, saved-path-mismatch skip log, and flushMetadataToFiler entry log from V(0) to V(3)/V(4). These fire for every file close with writebackCache and are too noisy for normal use. * filer: reduce hardlink debug log verbosity from V(0) to V(4) HardLinkId logs in filerstore_wrapper, filerstore_hardlink, and filer_grpc_server fire on every hardlinked file operation (git pack files use hardlinks extensively) and produce excessive noise. * mount/filer: reduce noisy V(0) logs for link, rmdir, and empty folder check - weedfs_link.go: hardlink creation logs V(0) → V(4) - weedfs_dir_mkrm.go: non-empty folder rmdir error V(0) → V(1) - empty_folder_cleaner.go: "not empty" check log V(0) → V(4) * filer: handle missing hardlink KV as expected, not error A "kv: not found" on hardlink read is normal when the link blob was already cleaned up but a stale entry still references it. Log at V(1) for not-found; keep Error level for actual KV failures. * test: add waitForDir before git pull in FUSE git operations test After git reset --hard, the FUSE mount's metadata cache may need a moment to settle on slow CI. The git pull subprocess (unpack-objects) could fail to stat the working directory. Poll for up to 5s. * Update git_operations_test.go * wait * test: simplify FUSE test framework to use weed mini Replace the 4-process setup (master + volume + filer + mount) with 2 processes: "weed mini" (all-in-one) + "weed mount". This simplifies startup, reduces port allocation, and is faster on CI. * test: fix mini flag -admin → -admin.ui |
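The ctx-aware errCh receive and the sendCh drain described in this entry follow a common Go pattern. Below is a minimal sketch of that pattern only; the names (sendRequest, sendCh, ErrStreamTransport, doUnary) are illustrative stand-ins, not the actual mount code.

```go
// Sketch of a ctx-aware request/ack exchange with a drain on shutdown.
package mountsketch

import (
	"context"
	"errors"
)

var ErrStreamTransport = errors.New("stream transport closed")

type sendRequest struct {
	payload []byte
	errCh   chan error // buffered (capacity 1) so a late ack never blocks sendLoop
}

func doUnary(ctx context.Context, sendCh chan<- *sendRequest, payload []byte) error {
	req := &sendRequest{payload: payload, errCh: make(chan error, 1)}

	// Enqueue the request, but give up if the context is cancelled first.
	select {
	case sendCh <- req:
	case <-ctx.Done():
		return ctx.Err()
	}

	// Wait for the ack, again observing ctx.Done() so a dead sendLoop
	// cannot strand the caller forever.
	select {
	case err := <-req.errCh:
		return err
	case <-ctx.Done():
		return ctx.Err()
	}
}

// drainSendCh fails any still-buffered requests so their callers unblock.
func drainSendCh(sendCh chan *sendRequest) {
	for {
		select {
		case req := <-sendCh:
			req.errCh <- ErrStreamTransport // capacity-1 buffer: never blocks
		default:
			return
		}
	}
}
```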
5 days ago

c4d642b8aa
fix(ec): gather shards from all disk locations before rebuild (#8633)
* fix(ec): gather shards from all disk locations before rebuild (#8631)
Fix "too few shards given" error during ec.rebuild on multi-disk volume
servers. The root cause has two parts:
1. VolumeEcShardsRebuild only looked at a single disk location for shard
files. On multi-disk servers, the existing local shards could be on one
disk while copied shards were placed on another, causing the rebuild to
see fewer shards than actually available.
2. VolumeEcShardsCopy had a DiskId condition (req.DiskId == 0 &&
len(vs.store.Locations) > 0) that was always true, making the
FindFreeLocation fallback dead code. This meant copies always went to
Locations[0] regardless of where existing shards were.
Changes:
- VolumeEcShardsRebuild now finds the location with the most shards,
then gathers shard files from other locations via hard links (or
symlinks for cross-device) before rebuilding. Gathered files are
cleaned up after rebuild.
- VolumeEcShardsCopy now only uses Locations[DiskId] when DiskId > 0
(explicitly set). Otherwise, it prefers the location that already has
the EC volume, falling back to HDD then any free location.
- generateMissingEcFiles now logs shard counts and provides a clear
error message when not enough shards are found, instead of passing
through to the opaque reedsolomon "too few shards given" error.
* fix(ec): update test to match skip behavior for unrepairable volumes
The test expected an error for volumes with insufficient shards, but
commit
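The gathering step described above (collect shard files from other disk locations via hard links, with symlinks as a cross-device fallback) could look roughly like the sketch below. The helper name and its cleanup contract are assumptions for illustration, not the actual SeaweedFS implementation.

```go
// Sketch: link a shard file from another disk location into the rebuild
// location, falling back to a symlink when the hard link crosses devices.
package ecsketch

import (
	"fmt"
	"os"
	"path/filepath"
)

// gatherShardFile returns the path it created so the caller can remove it
// after the rebuild; it returns "" if the file was already in place.
func gatherShardFile(srcPath, dstDir string) (string, error) {
	dstPath := filepath.Join(dstDir, filepath.Base(srcPath))
	if _, err := os.Stat(dstPath); err == nil {
		return "", nil // already present in the rebuild location
	}
	if err := os.Link(srcPath, dstPath); err != nil {
		// Hard links fail across devices; a symlink still lets the rebuild read the data.
		if symErr := os.Symlink(srcPath, dstPath); symErr != nil {
			return "", fmt.Errorf("gather %s: link: %v, symlink: %w", srcPath, err, symErr)
		}
	}
	return dstPath, nil
}
```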
2 weeks ago

4c88fbfd5e
Fix nil pointer crash during concurrent vacuum compaction (#8592)
* check for nil needle map before compaction sync
When CommitCompact runs concurrently, it sets v.nm = nil under
dataFileAccessLock. CompactByIndex does not hold that lock, so
v.nm.Sync() can hit a nil pointer. Add an early nil check to
return an error instead of crashing.
Fixes #8591
* guard copyDataBasedOnIndexFile size check against nil needle map
The post-compaction size validation at line 538 accesses
v.nm.ContentSize() and v.nm.DeletedSize(). If CommitCompact has
concurrently set v.nm to nil, this causes a SIGSEGV. Skip the
validation when v.nm is nil since the actual data copy uses local
needle maps (oldNm/newNm) and is unaffected.
Fixes #8591
* use atomic.Bool for compaction flags to prevent concurrent vacuum races
The isCompacting and isCommitCompacting flags were plain bools
read and written from multiple goroutines without synchronization.
This allowed concurrent vacuums on the same volume to pass the
guard checks and run simultaneously, leading to the nil pointer
crash. Using atomic.Bool with CompareAndSwap ensures only one
compaction or commit can run per volume at a time.
Fixes #8591
* use go-version-file in CI workflows instead of hardcoded versions
Use go-version-file: 'go.mod' so CI automatically picks up the Go
version from go.mod, avoiding future version drift. Reordered
checkout before setup-go in go.yml and e2e.yml so go.mod is
available. Removed the now-unused GO_VERSION env vars.
* capture v.nm locally in CompactByIndex to close TOCTOU race
A bare nil check on v.nm followed by v.nm.Sync() has a race window
where CommitCompact can set v.nm = nil between the two. Snapshot
the pointer into a local variable so the nil check and Sync operate
on the same reference.
* add dynamic timeouts to plugin worker vacuum gRPC calls
All vacuum gRPC calls used context.Background() with no deadline,
so the plugin scheduler's execution timeout could kill a job while
a large volume compact was still in progress. Use volume-size-scaled
timeouts matching the topology vacuum approach: 3 min/GB for compact,
1 min/GB for check, commit, and cleanup.
Fixes #8591
* Revert "add dynamic timeouts to plugin worker vacuum gRPC calls"
This reverts commit
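The two guards described in this entry (an atomic.Bool CompareAndSwap so only one compaction runs per volume, and a local snapshot of the needle-map pointer so the nil check and the Sync call see the same value) can be sketched as below. The types are simplified stand-ins, not the real Volume/NeedleMapper code.

```go
// Sketch of the compaction guards: CAS on an atomic flag plus a pointer snapshot.
package vacuumsketch

import (
	"fmt"
	"sync/atomic"
)

type needleMapper interface {
	Sync() error
}

type volume struct {
	isCompacting atomic.Bool
	nm           needleMapper // may be set to nil by a concurrent CommitCompact
}

func (v *volume) compact() error {
	// Only one compaction may run per volume; back off if another is in flight.
	if !v.isCompacting.CompareAndSwap(false, true) {
		return fmt.Errorf("volume is already compacting")
	}
	defer v.isCompacting.Store(false)

	nm := v.nm // snapshot: the nil check and Sync use the same pointer, closing the TOCTOU window
	if nm == nil {
		return fmt.Errorf("needle map is nil, skipping compaction sync")
	}
	return nm.Sync()
}
```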
3 weeks ago

af4c3fcb31
ec: fall back to data dir when ecx file not found in idx dir (#8541)
* ec: fall back to data dir when ecx file not found in idx dir (#8540)
  When -dir.idx is configured after EC encoding, the .ecx/.ecj files remain in the data directory. NewEcVolume now falls back to the data directory when the index file is not found in dirIdx.
* ec: add fallback logging and improved error message for ecx lookup
* ec: preserve configured dirIdx, track actual ecx location separately
  The previous fallback set ev.dirIdx = dir when finding .ecx in the data directory, which corrupted IndexBaseFileName() for future writes (e.g., WriteIdxFileFromEcIndex during EC-to-volume conversion would write the .idx file to the data directory instead of the configured index directory). Introduce ecxActualDir to track where .ecx/.ecj were actually found, used only by FileName() for cleanup/destroy. IndexBaseFileName() continues to use the configured dirIdx for new file creation.
* ec: check both idx and data dirs for .ecx in all cleanup and lookup paths
  When -dir.idx is configured after EC encoding, .ecx/.ecj files may reside in the data directory. Several code paths only checked l.IdxDirectory, causing them to miss these files:
  - removeEcVolumeFiles: now removes .ecx/.ecj from both directories
  - loadExistingVolume: ecx existence check falls back to data dir
  - deleteEcShardIdsForEachLocation: ecx existence check and cleanup both cover the data directory
  - VolumeEcShardsRebuild: ecx lookup falls back to data directory so RebuildEcxFile operates on the correct file
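A minimal sketch of the lookup order described here: prefer the configured index directory, then fall back to the data directory, and report where the file was actually found. The helper name is hypothetical.

```go
// Sketch: locate the directory that actually holds <baseName>.ecx.
package ecsketch

import (
	"fmt"
	"os"
	"path/filepath"
)

func locateEcxDir(dirIdx, dirData, baseName string) (string, error) {
	for _, dir := range []string{dirIdx, dirData} {
		if _, err := os.Stat(filepath.Join(dir, baseName+".ecx")); err == nil {
			return dir, nil
		}
	}
	return "", fmt.Errorf("%s.ecx not found in %s or %s", baseName, dirIdx, dirData)
}
```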
3 weeks ago

f5c35240be
Add volume dir tags and EC placement priority (#8472)
* Add volume dir tags to topology Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Add preferred tag config for EC Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Prioritize EC destinations by tags Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Add EC placement planner tag tests Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Refactor EC placement tests to reuse buildActiveTopology Remove buildActiveTopologyWithDiskTags helper function and consolidate tag setup inline in test cases. Tests now use UpdateTopology to apply tags after topology creation, reusing the existing buildActiveTopology function rather than duplicating its logic. All tag scenario tests pass: - TestECPlacementPlannerPrefersTaggedDisks - TestECPlacementPlannerFallsBackWhenTagsInsufficient Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Consolidate normalizeTagList into shared util package Extract normalizeTagList from three locations (volume.go, detection.go, erasure_coding_handler.go) into new weed/util/tag.go as exported NormalizeTagList function. Replace all duplicate implementations with imports and calls to util.NormalizeTagList. This improves code reuse and maintainability by centralizing tag normalization logic. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Add PreferredTags to EC config persistence Add preferred_tags field to ErasureCodingTaskConfig protobuf with field number 5. Update GetConfigSpec to include preferred_tags field in the UI configuration schema. Add PreferredTags to ToTaskPolicy to serialize config to protobuf. Add PreferredTags to FromTaskPolicy to deserialize from protobuf with defensive copy to prevent external mutation. This allows EC preferred tags to be persisted and restored across worker restarts. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Add defensive copy for Tags slice in DiskLocation Copy the incoming tags slice in NewDiskLocation instead of storing by reference. This prevents external callers from mutating the DiskLocation.Tags slice after construction, improving encapsulation and preventing unexpected changes to disk metadata. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Add doc comment to buildCandidateSets method Document the tiered candidate selection and fallback behavior. Explain that for a planner with preferredTags, it accumulates disks matching each tag in order into progressively larger tiers, emits a candidate set once a tier reaches shardsNeeded, and finally falls back to the full candidates set if preferred-tag tiers are insufficient. This clarifies the intended semantics for future maintainers. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Apply final PR review fixes 1. Update parseVolumeTags to replicate single tag entry to all folders instead of leaving some folders with nil tags. This prevents nil pointer dereferences when processing folders without explicit tags. 2. Add defensive copy in ToTaskPolicy for PreferredTags slice to match the pattern used in FromTaskPolicy, preventing external mutation of the returned TaskPolicy. 3. Add clarifying comment in buildCandidateSets explaining that the shardsNeeded <= 0 branch is a defensive check for direct callers, since selectDestinations guarantees shardsNeeded > 0. 
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Fix nil pointer dereference in parseVolumeTags Ensure all folder tags are initialized to either normalized tags or empty slices, not nil. When multiple tag entries are provided and there are more folders than entries, remaining folders now get empty slices instead of nil, preventing nil pointer dereference in downstream code. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Fix NormalizeTagList to return empty slice instead of nil Change NormalizeTagList to always return a non-nil slice. When all tags are empty or whitespace after normalization, return an empty slice instead of nil. This prevents nil pointer dereferences in downstream code that expects a valid (possibly empty) slice. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Add nil safety check for v.tags pointer Add a safety check to handle the case where v.tags might be nil, preventing a nil pointer dereference. If v.tags is nil, use an empty string instead. This is defensive programming to prevent panics in edge cases. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> * Add volume.tags flag to weed server and weed mini commands Add the volume.tags CLI option to both the 'weed server' and 'weed mini' commands. This allows users to specify disk tags when running the combined server modes, just like they can with 'weed volume'. The flag uses the same format and description as the volume command: comma-separated tag groups per data dir with ':' separators (e.g. fast:ssd,archive). Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --------- Co-authored-by: Copilot <copilot@github.com> Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> |
4 weeks ago

7354fa87f1
refactor ec shard distribution (#8465)
* refactor ec shard distribution
* fix shard assignment merge and mount errors
* fix mount error aggregation scope
* make WithFields compatible and wrap errors
1 month ago

9fa95dd2c6
fix: unload leveldb does not take effect (#8431)
1 month ago

e4b70c2521
go fix
1 month ago

01b3125815
[shell]: volume balance capacity by min volume density (#8026)
volume balance by min volume density and active volumes
1 month ago

a9d12a0792
Implement full scrubbing for EC volumes (#8318)
Implement full scrubbing for EC volumes.
1 month ago

11fdb68281
Fix superblock write error checks on volume compaction. (#8352)
1 month ago

0721e3c1e9
Rework volume compaction (a.k.a vacuuming) logic to cleanly support new parameters. (#8337)
We'll leverage this to support an "ignore broken needles" option, necessary to properly recover damaged volumes, as described in https://github.com/seaweedfs/seaweedfs/issues/7442#issuecomment-3897784283.
1 month ago

fbe7dd32c2
Implement full scrubbing for regular volumes (#8254)
Implement full scrubbing for regular volumes.
1 month ago

1ebc9dd530
Have local EC volume scrubbing check needle integrity whenever possible. (#8334)
If local EC scrubbing hits needles whose chunk locations reside entirely in local shards, we can fully reconstruct them and check CRCs for data integrity.
1 month ago

75faf826d4
Fix LevelDB panic on lazy reload (#8269) (#8307)
* fix LevelDB panic on lazy reload
  Implemented a thread-safe reload mechanism using double-checked locking and a retry loop in Get, Put, and Delete. Added a concurrency test to verify the fix and prevent regressions. Fixes #8269
* refactor: use helper for leveldb fix and remove deprecated ioutil
* fix: prevent deadlock by using getFromDb helper
  Extracted DB lookup to internal helper to avoid recursive RLock in Put/Delete methods. Updated Get to use the helper as well.
* fix: resolve syntax error and commit deadlock prevention
  Fixed a duplicate function declaration syntax error. Verified that getFromDb helper correctly prevents recursive RLock scenarios.
* refactor: remove redundant timeout checks
  Removed nested `if m.ldbTimeout > 0` checks in Get, Put, and Delete methods as suggested in PR review.
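The double-checked locking mentioned above is a standard pattern for lazily (re)opening a handle; a minimal sketch, with an illustrative lazyDB type and openHandle helper rather than the actual needle-map code:

```go
// Sketch: lazily reopen a DB handle with double-checked locking.
package ldbsketch

import "sync"

type handle struct{}

func openHandle() (*handle, error) { return &handle{}, nil }

type lazyDB struct {
	mu sync.RWMutex
	db *handle // nil after a timeout-driven unload
}

// getDB returns an open handle, reopening it at most once under the write lock.
func (l *lazyDB) getDB() (*handle, error) {
	l.mu.RLock()
	if l.db != nil {
		db := l.db
		l.mu.RUnlock()
		return db, nil
	}
	l.mu.RUnlock()

	l.mu.Lock()
	defer l.mu.Unlock()
	if l.db == nil { // double-check: another goroutine may have reopened it already
		db, err := openHandle()
		if err != nil {
			return nil, err
		}
		l.db = db
	}
	return l.db, nil
}
```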
2 months ago

e657e7d827
Implement local scrubbing for EC volumes. (#8283)
2 months ago

1a5679a5eb
Implement a `VolumeEcStatus()` RPC for volume servers. (#8006)
Just like `VolumeStatus()`, this call allows inspecting details for a given EC volume - including number of files and their total size.
2 months ago

be6b5db65a
s3: fix health check endpoints returning 404 for HEAD requests #8243 (#8248)
* Fix disk errors handling in vacuum compaction
  When a disk reports IO errors during vacuum compaction (e.g., 'read /mnt/d1/weed/oc_xyz.dat: input/output error'), the vacuum task should signal the error to the master so it can:
  1. Drop the faulty volume replica
  2. Rebuild the replica from healthy copies
  Changes:
  - Add checkReadWriteError() calls in vacuum read paths (ReadNeedleBlob, ReadData, ScanVolumeFile) to flag EIO errors in volume.lastIoError
  - Preserve error wrapping using %w format instead of %v so EIO propagates correctly
  - The existing heartbeat logic will detect lastIoError and remove the bad volume
  Fixes issue #8237
* error
* s3: fix health check endpoints returning 404 for HEAD requests #8243
2 months ago

330ba7d9dc
Fix disk errors handling in vacuum compaction (#8244)
When a disk reports IO errors during vacuum compaction (e.g., 'read /mnt/d1/weed/oc_xyz.dat: input/output error'), the vacuum task should signal the error to the master so it can:
1. Drop the faulty volume replica
2. Rebuild the replica from healthy copies
Changes:
- Add checkReadWriteError() calls in vacuum read paths (ReadNeedleBlob, ReadData, ScanVolumeFile) to flag EIO errors in volume.lastIoError
- Preserve error wrapping using %w format instead of %v so EIO propagates correctly
- The existing heartbeat logic will detect lastIoError and remove the bad volume
Fixes issue #8237
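Why the %w-versus-%v point matters can be shown in a few lines. This is a sketch only; markIoError stands in for setting volume.lastIoError and the function shape is an assumption, not the SeaweedFS API.

```go
// Sketch: wrapping with %w keeps syscall.EIO reachable by errors.Is.
package vacuumsketch

import (
	"errors"
	"fmt"
	"syscall"
)

func checkReadWriteError(err error, markIoError func()) error {
	if err == nil {
		return nil
	}
	// This only works because callers wrap with %w, e.g.
	//   fmt.Errorf("read needle blob: %w", readErr)
	// With %v the error chain is broken and EIO would go undetected.
	if errors.Is(err, syscall.EIO) {
		markIoError()
	}
	return fmt.Errorf("vacuum read: %w", err)
}
```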
2 months ago

c284e51d20
fix: multipart upload ETag calculation (#8238)
* fix multipart etag * address comments * clean up * clean up * optimization * address comments * unquoted etag * dedup * upgrade * clean * etag * return quoted tag * quoted etag * debug * s3api: unify ETag retrieval and quoting across handlers Refactor newListEntry to take *S3ApiServer and use getObjectETag, and update setResponseHeaders to use the same logic. This ensures consistent ETags are returned for both listing and direct access. * s3api: implement ListObjects deduplication for versioned buckets Handle duplicate entries between the main path and the .versions directory by prioritizing the latest version when bucket versioning is enabled. * s3api: cleanup stale main file entries during versioned uploads Add explicit deletion of pre-existing "main" files when creating new versions in versioned buckets. This prevents stale entries from appearing in bucket listings and ensures consistency. * s3api: fix cleanup code placement in versioned uploads Correct the placement of rm calls in completeMultipartUpload and putVersionedObject to ensure stale main files are properly deleted during versioned uploads. * s3api: improve getObjectETag fallback for empty ExtETagKey Ensure that when ExtETagKey exists but contains an empty value, the function falls through to MD5/chunk-based calculation instead of returning an empty string. * s3api: fix test files for new newListEntry signature Update test files to use the new newListEntry signature where the first parameter is *S3ApiServer. Created mockS3ApiServer to properly test owner display name lookup functionality. * s3api: use filer.ETag for consistent Md5 handling in getEtagFromEntry Change getEtagFromEntry fallback to use filer.ETag(entry) instead of filer.ETagChunks to ensure legacy entries with Attributes.Md5 are handled consistently with the rest of the codebase. * s3api: optimize list logic and fix conditional header logging - Hoist bucket versioning check out of per-entry callback to avoid repeated getVersioningState calls - Extract appendOrDedup helper function to eliminate duplicate dedup/append logic across multiple code paths - Change If-Match mismatch logging from glog.Errorf to glog.V(3).Infof and remove DEBUG prefix for consistency * s3api: fix test mock to properly initialize IAM accounts Fixed nil pointer dereference in TestNewListEntryOwnerDisplayName by directly initializing the IdentityAccessManagement.accounts map in the test setup. This ensures newListEntry can properly look up account display names without panicking. * cleanup * s3api: remove premature main file cleanup in versioned uploads Removed incorrect cleanup logic that was deleting main files during versioned uploads. This was causing test failures because it deleted objects that should have been preserved as null versions when versioning was first enabled. The deduplication logic in listing is sufficient to handle duplicate entries without deleting files during upload. * s3api: add empty-value guard to getEtagFromEntry Added the same empty-value guard used in getObjectETag to prevent returning quoted empty strings. When ExtETagKey exists but is empty, the function now falls through to filer.ETag calculation instead of returning "". * s3api: fix listing of directory key objects with matching prefix Revert prefix handling logic to use strings.TrimPrefix instead of checking HasPrefix with empty string result. This ensures that when a directory key object exactly matches the prefix (e.g. 
prefix="dir/", object="dir/"), it is correctly handled as a regular entry instead of being skipped or incorrectly processed as a common prefix. Also fixed missing variable definition. * s3api: refactor list inline dedup to use appendOrDedup helper Refactored the inline deduplication logic in listFilerEntries to use the shared appendOrDedup helper function. This ensures consistent behavior and reduces code duplication. * test: fix port allocation race in s3tables integration test Updated startMiniCluster to find all required ports simultaneously using findAvailablePorts instead of sequentially. This prevents race conditions where the OS reallocates a port that was just released, causing multiple services (e.g. Filer and Volume) to be assigned the same port and fail to start. |
2 months ago

2cda4289f4
Add a version token on RPCs to read/update volume server states. (#8191)
* Add a version token on `GetState()`/`SetState()` RPCs for volume server states.
* Make state version a property of `VolumeServerState` instead of an in-memory counter. Also extend state atomicity to reads, instead of just writes.
2 months ago

f84b70c362
Implement index (fast) scrubbing for regular/EC volumes. (#8207)
Implement index (fast) scrubbing for regular/EC volumes via `ScrubVolume()`/`ScrubEcVolume()`. Also rearranges existing index test files for reuse across unit tests for different modules.
2 months ago

82d9d8687b
Fix concurrent map access in EC shards info (#8222)
* fix concurrent map access in EC shards info #8219
* refactor: simplify Disk.ToDiskInfo to use ecShards snapshot and avoid redundant locking
* refactor: improve GetEcShards with pre-allocation and defer
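The snapshot approach named above (copy the EC shard entries out under the read lock, then build the response without holding it) can be sketched like this; the disk/ecShardInfo types are simplified stand-ins for the topology structures.

```go
// Sketch: take a snapshot of a shared map under RLock before using it.
package topologysketch

import "sync"

type ecShardInfo struct {
	VolumeId   uint32
	ShardCount int
}

type disk struct {
	mu       sync.RWMutex
	ecShards map[uint32]ecShardInfo
}

func (d *disk) snapshotEcShards() []ecShardInfo {
	d.mu.RLock()
	defer d.mu.RUnlock()
	out := make([]ecShardInfo, 0, len(d.ecShards)) // pre-allocate to the known size
	for _, s := range d.ecShards {
		out = append(out, s)
	}
	return out
}
```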
2 months ago

7169fe585d
Delete verify_gc_empty_test.go
2 months ago

2bb21ea276
feat: Add Iceberg REST Catalog server and admin UI (#8175)
* feat: Add Iceberg REST Catalog server Implement Iceberg REST Catalog API on a separate port (default 8181) that exposes S3 Tables metadata through the Apache Iceberg REST protocol. - Add new weed/s3api/iceberg package with REST handlers - Implement /v1/config endpoint returning catalog configuration - Implement namespace endpoints (list/create/get/head/delete) - Implement table endpoints (list/create/load/head/delete/update) - Add -port.iceberg flag to S3 standalone server (s3.go) - Add -s3.port.iceberg flag to combined server mode (server.go) - Add -s3.port.iceberg flag to mini cluster mode (mini.go) - Support prefix-based routing for multiple catalogs The Iceberg REST server reuses S3 Tables metadata storage under /table-buckets and enables DuckDB, Spark, and other Iceberg clients to connect to SeaweedFS as a catalog. * feat: Add Iceberg Catalog pages to admin UI Add admin UI pages to browse Iceberg catalogs, namespaces, and tables. - Add Iceberg Catalog menu item under Object Store navigation - Create iceberg_catalog.templ showing catalog overview with REST info - Create iceberg_namespaces.templ listing namespaces in a catalog - Create iceberg_tables.templ listing tables in a namespace - Add handlers and routes in admin_handlers.go - Add Iceberg data provider methods in s3tables_management.go - Add Iceberg data types in types.go The Iceberg Catalog pages provide visibility into the same S3 Tables data through an Iceberg-centric lens, including REST endpoint examples for DuckDB and PyIceberg. * test: Add Iceberg catalog integration tests and reorg s3tables tests - Reorganize existing s3tables tests to test/s3tables/table-buckets/ - Add new test/s3tables/catalog/ for Iceberg REST catalog tests - Add TestIcebergConfig to verify /v1/config endpoint - Add TestIcebergNamespaces to verify namespace listing - Add TestDuckDBIntegration for DuckDB connectivity (requires Docker) - Update CI workflow to use new test paths * fix: Generate proper random UUIDs for Iceberg tables Address code review feedback: - Replace placeholder UUID with crypto/rand-based UUID v4 generation - Add detailed TODO comments for handleUpdateTable stub explaining the required atomic metadata swap implementation * fix: Serve Iceberg on localhost listener when binding to different interface Address code review feedback: properly serve the localhost listener when the Iceberg server is bound to a non-localhost interface. 
* ci: Add Iceberg catalog integration tests to CI Add new job to run Iceberg catalog tests in CI, along with: - Iceberg package build verification - Iceberg unit tests - Iceberg go vet checks - Iceberg format checks * fix: Address code review feedback for Iceberg implementation - fix: Replace hardcoded account ID with s3_constants.AccountAdminId in buildTableBucketARN() - fix: Improve UUID generation error handling with deterministic fallback (timestamp + PID + counter) - fix: Update handleUpdateTable to return HTTP 501 Not Implemented instead of fake success - fix: Better error handling in handleNamespaceExists to distinguish 404 from 500 errors - fix: Use relative URL in template instead of hardcoded localhost:8181 - fix: Add HTTP timeout to test's waitForService function to avoid hangs - fix: Use dynamic ephemeral ports in integration tests to avoid flaky parallel failures - fix: Add Iceberg port to final port configuration logging in mini.go * fix: Address critical issues in Iceberg implementation - fix: Cache table UUIDs to ensure persistence across LoadTable calls The UUID now remains stable for the lifetime of the server session. TODO: For production, UUIDs should be persisted in S3 Tables metadata. - fix: Remove redundant URL-encoded namespace parsing mux router already decodes %1F to \x1F before passing to handlers. Redundant ReplaceAll call could cause bugs with literal %1F in namespace. * fix: Improve test robustness and reduce code duplication - fix: Make DuckDB test more robust by failing on unexpected errors Instead of silently logging errors, now explicitly check for expected conditions (extension not available) and skip the test appropriately. - fix: Extract username helper method to reduce duplication Created getUsername() helper in AdminHandlers to avoid duplicating the username retrieval logic across Iceberg page handlers. * fix: Add mutex protection to table UUID cache Protects concurrent access to the tableUUIDs map with sync.RWMutex. Uses read-lock for fast path when UUID already cached, and write-lock for generating new UUIDs. Includes double-check pattern to handle race condition between read-unlock and write-lock. * style: fix go fmt errors * feat(iceberg): persist table UUID in S3 Tables metadata * feat(admin): configure Iceberg port in Admin UI and commands * refactor: address review comments (flags, tests, handlers) - command/mini: fix tracking of explicit s3.port.iceberg flag - command/admin: add explicit -iceberg.port flag - admin/handlers: reuse getUsername helper - tests: use 127.0.0.1 for ephemeral ports and os.Stat for file size check * test: check error from FileStat in verify_gc_empty_test |
2 months ago

ff5a8f0579
Implement RPC skeleton for regular/EC volumes scrubbing. (#8187)
* Implement RPC skeleton for regular/EC volumes scrubbing.
  See https://github.com/seaweedfs/seaweedfs/issues/8018 for details.
* Minor proto improvements for `ScrubVolume()`, `ScrubEcVolume()`:
  - Add fields for scrubbing details in `ScrubVolumeResponse` and `ScrubEcVolumeResponse`, instead of reporting these through RPC errors.
  - Return a list of broken shards when scrubbing EC volumes, via `EcShardInfo`.
2 months ago

345ac950b6
Add volume server RPCs to read and update state flags. (#8186)
* Bootstrap persistent state for volume servers.
  This PR implements logic to load/save persistent state information for storages associated with volume servers, and to report state changes back to masters via heartbeat messages. More work ensues! See https://github.com/seaweedfs/seaweedfs/issues/7977 for details.
* Add volume server RPCs to read and update state flags.
2 months ago

f23e09f58b
fix: skip exhausted blocks before creating an interval (#8180)
* fix: skip exhausted blocks before creating an interval
* refactor: optimize interval creation and fix logic duplication
* docs: add docstring for LocateData
* refactor: extract moveToNextBlock helper to deduplicate logic
* fix: use int64 for block index comparison to prevent overflow
* test: add unit test for LocateData boundary crossing (issue #8179)
* fix: skip exhausted blocks to prevent negative interval size and panics (issue #8179)
* refactor: apply review suggestions for test maintainability and code style
2 months ago

550a4ff761
Fix inconsistent TTL reporting in volume.list #8158 (#8164)
* fix inconsistent TTL reporting in volume.list #8158
* simplify volume.list output using vi.String()
2 months ago

5c8de5e282
fix: close volumes and EC shards in tests for Windows compatibility (#8152)
* fix: close volumes and EC shards in tests to prevent Windows cleanup failures
  On Windows, t.TempDir() cleanup fails when test files are still open because Windows enforces mandatory file locking. Add defer v.Close(), defer store.Close(), and EC volume cleanup to ensure all file handles are released before temp directory removal.
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* refactor: extract closeEcVolumes helper to reduce duplication
  Address code review feedback by extracting the repeated EC volume cleanup loop into a closeEcVolumes() helper function.
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2 months ago

25a4691135
Update store_ec_recovery_test.go
2 months ago

7e81c0bf0d
Clarify errors upon needle CRC mismatches. (#8096)
2 months ago

e717a63665
Fix EC shard recovery with improved diagnostics (#8091)
* storage: fix EC shard recovery with improved diagnostics and logging
  - Fix buffer size mismatch in ReconstructData call
  - Add detailed logging of available and missing shards
  - Improve error messages when recovery is impossible
  - Add unit tests for EC recovery shard counting logic
* test: refine EC recovery unit tests
  - Remove redundant tests that only validate setup
  - Use standard strings.Contains instead of custom recursive helper
* adjust tests and minor improvement
2 months ago

3e5d34dd67
skip md5 validation if Content-MD5 is not provided
2 months ago

a473278bfa
Fix: Fail fast on unsupported volume versions (#8047)
* Fix: Fail fast when initializing volume with Version 0
* Fix: Fail fast when loading unsupported volume version (e.g. 0 or 4)
* Refactor: Use IsSupportedVersion helper function for version validation
2 months ago

ba74185700
fix: CompactMap race condition causing runtime panic (#8029)
Fixed critical race condition in CompactMap where Set(), Delete(), and Get() methods had issues with concurrent map access.
Root cause: segmentForKey() can create new map segments, which modifies the cm.segments map. Calling this under a read lock caused concurrent map write panics when multiple goroutines accessed the map simultaneously (e.g., during VolumeNeedleStatus gRPC calls).
Changes:
- Set() method: Changed RLock/RUnlock to Lock/Unlock
- Delete() method: Changed RLock/RUnlock to Lock/Unlock, optimized to avoid creating empty segments when key doesn't exist
- Get() method: Removed segmentForKey() call to avoid race condition, now checks segment existence directly and returns early if segment doesn't exist (optimization: avoids unnecessary segment creation)
This fix resolves the runtime/maps.fatal panic that occurred under concurrent load.
Tested with race detector: go test -v -race ./weed/storage/needle_map/...
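The lock discipline described above (write lock for paths that may create segments, read lock for lookups that never do) can be sketched as follows. The segment layout and segKey function are simplified illustrations, not the real CompactMap internals.

```go
// Sketch: write lock for Set (may create a segment), read lock for Get (never creates).
package needlemapsketch

import "sync"

type segment map[uint64]uint32

type compactMap struct {
	mu       sync.RWMutex
	segments map[uint64]segment
}

func newCompactMap() *compactMap {
	return &compactMap{segments: map[uint64]segment{}}
}

func segKey(key uint64) uint64 { return key >> 20 } // illustrative segmenting only

func (cm *compactMap) Set(key uint64, val uint32) {
	cm.mu.Lock() // write lock: this path may insert into cm.segments
	defer cm.mu.Unlock()
	sk := segKey(key)
	seg, ok := cm.segments[sk]
	if !ok {
		seg = segment{}
		cm.segments[sk] = seg
	}
	seg[key] = val
}

func (cm *compactMap) Get(key uint64) (uint32, bool) {
	cm.mu.RLock() // read lock is safe: no segment is ever created here
	defer cm.mu.RUnlock()
	seg, ok := cm.segments[segKey(key)]
	if !ok {
		return 0, false
	}
	val, ok := seg[key]
	return val, ok
}
```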
2 months ago

6b0eade6d4
storage: upgrade protobuf API in store_state.go
3 months ago

587e782feb
storage: use non-blocking send to StateUpdateChan
3 months ago

2af293ce60
Bootstrap persistent state for volume servers. (#7984)
This PR implements logic to load/save persistent state information for storages associated with volume servers, and to report state changes back to masters via heartbeat messages. More work ensues! See https://github.com/seaweedfs/seaweedfs/issues/7977 for details.
3 months ago

9012069bd7
chore: execute goimports to format the code (#7983)
* chore: execute goimports to format the code
  Signed-off-by: promalert <promalert@outlook.com>
* goimports -w .
---------
Signed-off-by: promalert <promalert@outlook.com>
Co-authored-by: Chris Lu <chris.lu@gmail.com>
3 months ago

e10f11b480
opt: reduce ShardsInfo memory usage with bitmap and sorted slice (#7974)
* opt: reduce ShardsInfo memory usage with bitmap and sorted slice - Replace map[ShardId]*ShardInfo with sorted []ShardInfo slice - Add ShardBits (uint32) bitmap for O(1) existence checks - Use binary search for O(log n) lookups by shard ID - Maintain sorted order for efficient iteration - Add comprehensive unit tests and benchmarks Memory savings: - Map overhead: ~48 bytes per entry eliminated - Pointers: 8 bytes per entry eliminated - Total: ~56 bytes per shard saved Performance improvements: - Has(): O(1) using bitmap - Size(): O(log n) using binary search (was O(1), acceptable tradeoff) - Count(): O(1) using popcount on bitmap - Iteration: Faster due to cache locality * refactor: add methods to ShardBits type - Add Has(), Set(), Clear(), and Count() methods to ShardBits - Simplify ShardsInfo methods by using ShardBits methods - Improves code readability and encapsulation * opt: use ShardBits directly in ShardsCountFromVolumeEcShardInformationMessage Avoid creating a full ShardsInfo object just to count shards. Directly cast vi.EcIndexBits to ShardBits and use Count() method. * opt: use strings.Builder in ShardsInfo.String() for efficiency * refactor: change AsSlice to return []ShardInfo (values instead of pointers) This completes the memory optimization by avoiding unnecessary pointer slices and potential allocations. * refactor: rename ShardsCountFromVolumeEcShardInformationMessage to GetShardCount * fix: prevent deadlock in Add and Subtract methods Copy shards data from 'other' before releasing its lock to avoid potential deadlock when a.Add(b) and b.Add(a) are called concurrently. The previous implementation held other's lock while calling si.Set/Delete, which acquires si's lock. This could deadlock if two goroutines tried to add/subtract each other concurrently. * opt: avoid unnecessary locking in constructor functions ShardsInfoFromVolume and ShardsInfoFromVolumeEcShardInformationMessage now build shards slice and bitmap directly without calling Set(), which acquires a lock on every call. Since the object is local and not yet shared, locking is unnecessary and adds overhead. This improves performance during object construction. * fix: rename 'copy' variable to avoid shadowing built-in function The variable name 'copy' in TestShardsInfo_Copy shadowed the built-in copy() function, which is confusing and bad practice. Renamed to 'siCopy'. * opt: use math/bits.OnesCount32 and reorganize types 1. Replace manual popcount loop with math/bits.OnesCount32 for better performance and idiomatic Go code 2. Move ShardSize type definition to ec_shards_info.go for better code organization since it's primarily used there * refactor: Set() now accepts ShardInfo for future extensibility Changed Set(id ShardId, size ShardSize) to Set(shard ShardInfo) to support future additions to ShardInfo without changing the API. This makes the code more extensible as new fields can be added to ShardInfo (e.g., checksum, location, etc.) without breaking the Set API. * refactor: move ShardInfo and ShardSize to separate file Created ec_shard_info.go to hold the basic shard types (ShardInfo and ShardSize) for better code organization and separation of concerns. * refactor: add ShardInfo constructor and helper functions Added NewShardInfo() constructor and IsValid() method to better encapsulate ShardInfo creation and validation. Updated code to use the constructor for cleaner, more maintainable code. 
* fix: update remaining Set() calls to use NewShardInfo constructor Fixed compilation errors in storage and shell packages where Set() calls were not updated to use the new NewShardInfo() constructor. * fix: remove unreachable code in filer backup commands Removed unreachable return statements after infinite loops in filer_backup.go and filer_meta_backup.go to fix compilation errors. * fix: rename 'new' variable to avoid shadowing built-in Renamed 'new' to 'result' in MinusParityShards, Plus, and Minus methods to avoid shadowing Go's built-in new() function. * fix: update remaining test files to use NewShardInfo constructor Fixed Set() calls in command_volume_list_test.go and ec_rebalance_slots_test.go to use NewShardInfo() constructor. |
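The ShardBits idea described in this entry is small enough to sketch directly: a uint32 bitmap with O(1) membership tests and a popcount-based Count. This mirrors the API shape described above, not necessarily the exact production code.

```go
// Sketch: a uint32 bitmap of EC shard ids with O(1) Has/Set/Clear and popcount Count.
package erasurecodingsketch

import "math/bits"

type ShardId uint8

type ShardBits uint32

func (b ShardBits) Has(id ShardId) bool       { return b&(1<<uint(id)) != 0 }
func (b ShardBits) Set(id ShardId) ShardBits   { return b | (1 << uint(id)) }
func (b ShardBits) Clear(id ShardId) ShardBits { return b &^ (1 << uint(id)) }
func (b ShardBits) Count() int                 { return bits.OnesCount32(uint32(b)) }
```

A bitmap like this pairs naturally with the sorted slice mentioned above: the bitmap answers "is shard N present?" without touching the slice, while the slice keeps per-shard details in a cache-friendly layout.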
3 months ago

ec1c27a4b3
storage/needle: add bounds check for WriteNeedleBlob buffer (#7973)
* storage/needle: add bounds check for WriteNeedleBlob buffer
* storage/needle: use int offsets when checking/writing Version3 timestamp
* Apply suggestion from @gemini-code-assist[bot]
  Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
---------
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
3 months ago

ade2e58cf4
refactoring
3 months ago

4e2af080df
optimize: enable immediate EC shard reporting during startup (#7933)
* optimize: enable immediate EC shard reporting during startup
Ported the immediate EC shard reporting feature from Enterprise to Community version.
This allows the master to be notified about EC shards immediately during volume server startup,
instead of waiting for the first heartbeat.
Changes:
1. Updated NewStore to initialize notification channels BEFORE loading volumes (fixes potential nil panic).
2. Added ecShardNotifyHandler to report EC shards to NewEcShardsChan during startup.
3. Implemented non-blocking channel send for EC reporting to prevent deadlock when loading many EC shards (fixing the enterprise bug
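The non-blocking channel send mentioned in item 3 (and in the later "use non-blocking send to StateUpdateChan" change) is the standard select-with-default idiom; a minimal sketch with illustrative names:

```go
// Sketch: report a newly loaded EC shard only if the notification channel has room,
// otherwise drop the message instead of blocking volume loading at startup.
package storesketch

type ecShardMessage struct {
	VolumeId uint32
	ShardId  uint8
}

func notifyNewEcShard(newEcShardsChan chan<- ecShardMessage, msg ecShardMessage) bool {
	select {
	case newEcShardsChan <- msg:
		return true // master learns about the shard immediately
	default:
		return false // channel full; the next heartbeat still reports the shard
	}
}
```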
3 months ago

6b98b52acc
Fix reporting of EC shard sizes from nodes to masters. (#7835)
SeaweedFS tracks EC shard sizes on topology data structures, but this information is never
relayed to master servers :( The end result is that commands reporting disk usage, such
as `volume.list` and `cluster.status`, yield incorrect figures when EC shards are present.
As an example for a simple 5-node test cluster, before...
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[1 5]
Disk hdd total size:88967096 file_count:172
DataNode 192.168.10.111:9001 total size:88967096 file_count:172
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[0 4]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9002 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[2 6]
Disk hdd total size:77267536 file_count:166
DataNode 192.168.10.111:9003 total size:77267536 file_count:166
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:77267536 file_count:166 replica_placement:2 version:3 modified_at_second:1766349617
volume id:3 size:88967096 file_count:172 replica_placement:2 version:3 modified_at_second:1766349617
ec volume id:1 collection: shards:[3 7]
Disk hdd total size:166234632 file_count:338
DataNode 192.168.10.111:9004 total size:166234632 file_count:338
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[8 9 10 11 12 13]
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:498703896 file_count:1014
DataCenter DefaultDataCenter total size:498703896 file_count:1014
total size:498703896 file_count:1014
```
...and after:
```
> volume.list
Topology volumeSizeLimit:30000 MB hdd(volume:6/40 active:6 free:33 remote:0)
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9001 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[1 5 9] sizes:[1:8.00 MiB 5:8.00 MiB 9:8.00 MiB] total:24.00 MiB
Disk hdd total size:81761800 file_count:161
DataNode 192.168.10.111:9001 total size:81761800 file_count:161
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9002 hdd(volume:1/8 active:1 free:7 remote:0)
Disk hdd(volume:1/8 active:1 free:7 remote:0) id:0
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[11 12 13] sizes:[11:8.00 MiB 12:8.00 MiB 13:8.00 MiB] total:24.00 MiB
Disk hdd total size:88678712 file_count:170
DataNode 192.168.10.111:9002 total size:88678712 file_count:170
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9003 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[0 4 8] sizes:[0:8.00 MiB 4:8.00 MiB 8:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9003 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9004 hdd(volume:2/8 active:2 free:6 remote:0)
Disk hdd(volume:2/8 active:2 free:6 remote:0) id:0
volume id:2 size:81761800 file_count:161 replica_placement:2 version:3 modified_at_second:1766349495
volume id:3 size:88678712 file_count:170 replica_placement:2 version:3 modified_at_second:1766349495
ec volume id:1 collection: shards:[2 6 10] sizes:[2:8.00 MiB 6:8.00 MiB 10:8.00 MiB] total:24.00 MiB
Disk hdd total size:170440512 file_count:331
DataNode 192.168.10.111:9004 total size:170440512 file_count:331
DataCenter DefaultDataCenter hdd(volume:6/40 active:6 free:33 remote:0)
Rack DefaultRack hdd(volume:6/40 active:6 free:33 remote:0)
DataNode 192.168.10.111:9005 hdd(volume:0/8 active:0 free:8 remote:0)
Disk hdd(volume:0/8 active:0 free:8 remote:0) id:0
ec volume id:1 collection: shards:[3 7] sizes:[3:8.00 MiB 7:8.00 MiB] total:16.00 MiB
Disk hdd total size:0 file_count:0
Rack DefaultRack total size:511321536 file_count:993
DataCenter DefaultDataCenter total size:511321536 file_count:993
total size:511321536 file_count:993
```
3 months ago

2f6aa98221
Refactor: Replace removeDuplicateSlashes with NormalizeObjectKey (#7873)
* Replace removeDuplicateSlashes with NormalizeObjectKey Use s3_constants.NormalizeObjectKey instead of removeDuplicateSlashes in most places for consistency. NormalizeObjectKey handles both duplicate slash removal and ensures the path starts with '/', providing more complete normalization. * Fix double slash issues after NormalizeObjectKey After using NormalizeObjectKey, object keys have a leading '/'. This commit ensures: - getVersionedObjectDir strips leading slash before concatenation - getEntry calls receive names without leading slash - String concatenation with '/' doesn't create '//' paths This prevents path construction errors like: /buckets/bucket//object (wrong) /buckets/bucket/object (correct) * ensure object key leading "/" * fix compilation * fix: Strip leading slash from object keys in S3 API responses After introducing NormalizeObjectKey, all internal object keys have a leading slash. However, S3 API responses must return keys without leading slashes to match AWS S3 behavior. Fixed in three functions: - addVersion: Strip slash for version list entries - processRegularFile: Strip slash for regular file entries - processExplicitDirectory: Strip slash for directory entries This ensures ListObjectVersions and similar APIs return keys like 'bar' instead of '/bar', matching S3 API specifications. * fix: Normalize keyMarker for consistent pagination comparison The S3 API provides keyMarker without a leading slash (e.g., 'object-001'), but after introducing NormalizeObjectKey, all internal object keys have leading slashes (e.g., '/object-001'). When comparing keyMarker < normalizedObjectKey in shouldSkipObjectForMarker, the ASCII value of '/' (47) is less than 'o' (111), causing all objects to be incorrectly skipped during pagination. This resulted in page 2 and beyond returning 0 results. Fix: Normalize the keyMarker when creating versionCollector so comparisons work correctly with normalized object keys. Fixes pagination tests: - TestVersioningPaginationOver1000Versions - TestVersioningPaginationMultipleObjectsManyVersions * refactor: Change NormalizeObjectKey to return keys without leading slash BREAKING STRATEGY CHANGE: Previously, NormalizeObjectKey added a leading slash to all object keys, which required stripping it when returning keys to S3 API clients and caused complexity in marker normalization for pagination. NEW STRATEGY: - NormalizeObjectKey now returns keys WITHOUT leading slash (e.g., 'foo/bar' not '/foo/bar') - This matches the S3 API format directly - All path concatenations now explicitly add '/' between bucket and object - No need to strip slashes in responses or normalize markers Changes: 1. Modified NormalizeObjectKey to strip leading slash instead of adding it 2. Fixed all path concatenations to use: - BucketsPath + '/' + bucket + '/' + object instead of: - BucketsPath + '/' + bucket + object 3. Reverted response key stripping in: - addVersion() - processRegularFile() - processExplicitDirectory() 4. Reverted keyMarker normalization in findVersionsRecursively() 5. Updated matchesPrefixFilter() to work with keys without leading slash 6. Fixed paths in handlers: - s3api_object_handlers.go (GetObject, HeadObject, cacheRemoteObjectForStreaming) - s3api_object_handlers_postpolicy.go - s3api_object_handlers_tagging.go - s3api_object_handlers_acl.go - s3api_version_id.go (getVersionedObjectDir, getVersionIdFormat) - s3api_object_versioning.go (getObjectVersionList, updateLatestVersionAfterDeletion) All versioning tests pass including pagination stress tests. 
* adjust format * Update post policy tests to match new NormalizeObjectKey behavior - Update TestPostPolicyKeyNormalization to expect keys without leading slashes - Update TestNormalizeObjectKey to expect keys without leading slashes - Update TestPostPolicyFilenameSubstitution to expect keys without leading slashes - Update path construction in tests to use new pattern: BucketsPath + '/' + bucket + '/' + object * Fix ListObjectVersions prefix filtering Remove leading slash addition to prefix parameter to allow correct filtering of .versions directories when listing object versions with a specific prefix. The prefix parameter should match entry paths relative to bucket root. Adding a leading slash was breaking the prefix filter for paginated requests. Fixes pagination issue where second page returned 0 versions instead of continuing with remaining versions. * no leading slash * Fix urlEscapeObject to add leading slash for filer paths NormalizeObjectKey now returns keys without leading slashes to match S3 API format. However, urlEscapeObject is used for filer paths which require leading slashes. Add leading slash back after normalization to ensure filer paths are correct. Fixes TestS3ApiServer_toFilerPath test failures. * adjust tests * normalize * Fix: Normalize prefixes and markers in LIST operations using NormalizeObjectKey Ensure consistent key normalization across all S3 operations (GET, PUT, LIST). Previously, LIST operations were not applying the same normalization rules (handling backslashes, duplicate slashes, leading slashes) as GET/PUT operations. Changes: - Updated normalizePrefixMarker() to call NormalizeObjectKey for both prefix and marker - This ensures prefixes with leading slashes, backslashes, or duplicate slashes are handled consistently with how object keys are normalized - Fixes Parquet test failures where pads.write_dataset creates implicit directory structures that couldn't be discovered by subsequent LIST operations - Added TestPrefixNormalizationInList and TestListPrefixConsistency tests All existing LIST tests continue to pass with the normalization improvements. * Add debugging logging to LIST operations to track prefix normalization * Fix: Remove leading slash addition from GetPrefix to work with NormalizeObjectKey The NormalizeObjectKey function removes leading slashes to match S3 API format (e.g., 'foo/bar' not '/foo/bar'). However, GetPrefix was adding a leading slash back, which caused LIST operations to fail with incorrect path handling. Now GetPrefix only normalizes duplicate slashes without adding a leading slash, which allows NormalizeObjectKey changes to work correctly for S3 LIST operations. All Parquet integration tests now pass (20/20). * Fix: Handle object paths without leading slash in checkDirectoryObject NormalizeObjectKey() removes the leading slash to match S3 API format. However, checkDirectoryObject() was assuming the object path has a leading slash when processing directory markers (paths ending with '/'). Now we ensure the object has a leading slash before processing it for filer operations. Fixes implicit directory marker test (explicit_dir/) while keeping Parquet integration tests passing (20/20). All tests pass: - Implicit directory tests: 6/6 - Parquet integration tests: 20/20 * Fix: Handle explicit directory markers with trailing slashes Explicit directory markers created with put_object(Key='dir/', ...) are stored in the filer with the trailing slash as part of the name. The checkDirectoryObject() function now checks for both: 1. 
Explicit directories: lookup with trailing slash preserved (e.g., 'explicit_dir/') 2. Implicit directories: lookup without trailing slash (e.g., 'implicit_dir') This ensures both types of directory markers are properly recognized. All tests pass: - Implicit directory tests: 6/6 (including explicit directory marker test) - Parquet integration tests: 20/20 * Fix: Preserve trailing slash in NormalizeObjectKey NormalizeObjectKey now preserves trailing slashes when normalizing object keys. This is important for explicit directory markers like 'explicit_dir/' which rely on the trailing slash to be recognized as directory objects. The normalization process: 1. Notes if trailing slash was present 2. Removes duplicate slashes and converts backslashes 3. Removes leading slash for S3 API format 4. Restores trailing slash if it was in the original This ensures explicit directory markers created with put_object(Key='dir/', ...) are properly normalized and can be looked up by their exact name. All tests pass: - Implicit directory tests: 6/6 - Parquet integration tests: 20/20 * clean object * Fix: Don't restore trailing slash if result is empty When normalizing paths that are only slashes (e.g., '///', '/'), the function should return an empty string, not a single slash. The fix ensures we only restore the trailing slash if the result is non-empty. This fixes the 'just_slashes' test case: - Input: '///' - Expected: '' - Previous: '/' - Fixed: '' All tests now pass: - Unit tests: TestNormalizeObjectKey (13/13) - Implicit directory tests: 6/6 - Parquet integration tests: 20/20 * prefixEndsOnDelimiter * Update s3api_object_handlers_list.go * Update s3api_object_handlers_list.go * handle create directory |
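Taken together, the commits above pin down the intended key normalization: backslashes become slashes, duplicate slashes collapse, there is no leading slash (S3 API format), a trailing slash is preserved for directory markers, and slash-only input becomes the empty string. Below is a minimal sketch consistent with that described behavior, not the actual s3api implementation.

```go
// Sketch of NormalizeObjectKey-style behavior as described in the commits above.
package s3sketch

import "strings"

func normalizeObjectKey(key string) string {
	hadTrailingSlash := strings.HasSuffix(key, "/") || strings.HasSuffix(key, `\`)

	key = strings.ReplaceAll(key, `\`, "/")
	for strings.Contains(key, "//") {
		key = strings.ReplaceAll(key, "//", "/")
	}
	key = strings.Trim(key, "/") // S3 API format: no leading slash

	if key != "" && hadTrailingSlash {
		key += "/" // keep explicit directory markers like "dir/"
	}
	return key
}
```

For example, `///` normalizes to `""`, `/foo//bar` to `foo/bar`, and `explicit_dir/` stays `explicit_dir/`.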
3 months ago
3613279f25
Add 'weed mini' command for S3 beginners and small/dev use cases (#7831)
* Add 'weed mini' command for S3 beginners and small/dev use cases
This new command simplifies starting SeaweedFS by combining all components
in one process with optimized settings for development and small deployments.
Features:
- Starts master, volume, filer, S3, WebDAV, and admin in one command
- Volume size limit: 64MB (optimized for small files)
- Volume max: 0 (auto-configured based on free disk space)
- Pre-stop seconds: 1 (faster shutdown for development)
- Master peers: none (single master mode by default)
- Includes admin UI with one worker for maintenance tasks
- Clean, user-friendly startup message with all endpoint URLs
Usage:
weed mini # Use default temp directory
weed mini -dir=/data # Custom data directory
This makes it much easier for:
- Developers getting started with SeaweedFS
- Testing and development workflows
- Learning S3 API with SeaweedFS
- Small deployments that don't need complex clustering
* Change default volume server port to 9340 to avoid popular port 8080
* Fix nil pointer dereference by initializing all required volume server fields
Added missing VolumeServerOptions field initializations:
- id, publicUrl, diskType
- maintenanceMBPerSecond, ldbTimeout
- concurrentUploadLimitMB, concurrentDownloadLimitMB
- pprof, idxFolder
- inflightUploadDataTimeout, inflightDownloadDataTimeout
- hasSlowRead, readBufferSizeMB
This resolves the panic that occurred when starting the volume server.
* Fix multiple nil pointer dereferences in mini command
Added missing field initializations for:
- Master options: raftHashicorp, raftBootstrap, telemetryUrl, telemetryEnabled
- Filer options: filerGroup, saveToFilerLimit, concurrentUploadLimitMB,
concurrentFileUploadLimit, localSocket, showUIDirectoryDelete,
downloadMaxMBps, diskType, allowedOrigins, exposeDirectoryData, tusBasePath
- Volume options: id, publicUrl, diskType, maintenanceMBPerSecond, ldbTimeout,
concurrentUploadLimitMB, concurrentDownloadLimitMB, pprof, idxFolder,
inflightUploadDataTimeout, inflightDownloadDataTimeout, hasSlowRead, readBufferSizeMB
- WebDAV options: tlsPrivateKey, tlsCertificate, filerRootPath
- Admin options: master
These initializations are required to avoid runtime panics when starting components.
* Fix remaining S3 option nil pointers in mini command
* Update mini command: 256MB volume size and add S3 access instructions for beginners
* mini: set default master.volumeSizeLimitMB to 128MB and update help/banner text
* mini: shorten S3 help text to a concise pointer to docs/Admin UI
* mini: remove duplicated component bullet list, use concise sentence
* mini: tidy help alignment and update example usage
* mini: default -dir to current directory
* mini: load initial S3 credentials from env and write IAM config
* mini: use AWS env vars for initial S3 creds; instruct to create via Admin UI if absent
* Improve startup synchronization with channel-based coordination
- Replace fragile time.Sleep delays with robust channel-based synchronization
- Implement proper service dependency ordering (Master → Volume → Filer → S3/WebDAV/Admin)
- Add sync.WaitGroup for goroutine coordination
- Add startup readiness logging for better visibility
- Implement 10-second timeout for admin server startup
- Remove arbitrary sleep delays for faster, more reliable startup
- Services now start deterministically based on dependencies, not timing
This makes the startup process more reliable and eliminates race conditions on slow systems or under load.
* Refactor service startup logic for better maintainability
Extract service startup into dedicated helper functions:
- startMiniServices(): Orchestrates all service startup with dependency coordination
- startServiceWithCoordination(): Starts services with readiness signaling
- startServiceWithoutReady(): Starts services without readiness signaling
- startS3Service(): Encapsulates S3 initialization logic
Benefits:
- Reduced code duplication in runMini()
- Clearer separation of concerns
- Easier to add new services or modify startup sequence
- More testable code structure
- Improved readability with explicit service names and logging
* Remove unused serviceStartupInfo struct type
- Delete the serviceStartupInfo struct that was defined but never used
- Improves code clarity by removing dead code
- All service startup is now handled directly by helper functions
* Preserve existing IAM config file instead of truncating
- Use os.Stat to check if IAM config file already exists
- Only create and write configuration if file doesn't exist
- Log appropriate messages for each case:
* File exists: skip writing, preserve existing config
* File absent: create with os.OpenFile and write new config
* Stat error: log error without overwriting
- Set *miniIamConfig only when new file is successfully created
- Use os.O_CREATE|os.O_WRONLY flags for safe file creation
- Handles file operations with proper error checking and cleanup
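A rough Go sketch of the preserve-if-exists pattern described in this item, using the 0600 permissions adopted in a later commit; the helper name writeInitialIAMConfig and the JSON payload are illustrative, not the actual mini command code:

package main

import (
	"fmt"
	"os"
)

// writeInitialIAMConfig creates iamPath with the given JSON only when the
// file does not already exist; an existing configuration is left untouched.
func writeInitialIAMConfig(iamPath string, configJSON []byte) (created bool, err error) {
	if _, statErr := os.Stat(iamPath); statErr == nil {
		return false, nil // file exists: preserve existing config
	} else if !os.IsNotExist(statErr) {
		return false, fmt.Errorf("stat %s: %w", iamPath, statErr)
	}
	// 0600 keeps the credentials readable only by the owner.
	f, err := os.OpenFile(iamPath, os.O_CREATE|os.O_WRONLY, 0600)
	if err != nil {
		return false, fmt.Errorf("create %s: %w", iamPath, err)
	}
	defer f.Close() // closed on every code path
	if _, err := f.Write(configJSON); err != nil {
		return false, fmt.Errorf("write %s: %w", iamPath, err)
	}
	return true, nil
}

func main() {
	created, err := writeInitialIAMConfig("/tmp/iam_config.json", []byte(`{"identities":[]}`))
	fmt.Println("created:", created, "err:", err)
}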
* Fix CodeQL security issue: prevent logging of sensitive S3 credentials
- Add createdInitialIAM flag to track when initial IAM config is created from env vars
- Set flag in startS3Service() when new IAM config is successfully written
- Update welcome message to inform user of credential creation without exposing secrets
- Print only the username (mini) and config file location to user
- Never print access keys or secret keys in clear text
- Maintain security while keeping user informed of what was created
- Addresses CodeQL finding: Clear-text logging of sensitive information
* Fix three code review issues in weed mini command
1. Fix deadlock in service startup coordination:
- Run blocking service functions (startMaster, startFiler, etc.) in separate goroutines
- This allows readyChan to be closed and prevents indefinite blocking
- Services now start concurrently instead of sequentially blocking the coordinator
2. Use shared grace.StartDebugServer for consistency:
- Replace inline debug server startup with grace.StartDebugServer
- Improves code consistency with other commands (master, filer, etc.)
- Removes net/http import which is no longer needed
3. Simplify IAM config file cleanup with defer:
- Use 'defer f.Close()' instead of multiple f.Close() calls
- Ensures file is closed regardless of which code path is taken
- Improves robustness and code clarity
* fmt
* Fix: Remove misleading 'service is ready' logs
The previous fix removed 'go' from service function calls but left misleading
'service is ready' log messages. The service helpers now correctly:
- Call fn() directly (blocking) instead of 'go fn()' (non-blocking)
- Remove the 'service is ready' message that was printed before the service
actually started running
- Services run as blocking goroutines within the coordinator goroutine,
which keeps them alive while the program runs
- The readiness channels still work correctly because they're closed when
the coordinator finishes waiting for dependencies
* Update mini.go
* Fix four code review issues in weed mini command
1. Use restrictive file permissions (0600) for IAM config:
- Changed from 0644 to 0600 when creating iam_config.json
- Prevents world-readable access to sensitive AWS credentials
- Protects AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
2. Remove unused sync.WaitGroup:
- Removed WaitGroup that was never waited on
- All services run as blocking goroutines in the coordinator
- Main goroutine blocks indefinitely with select{}
- Removes unnecessary complexity without changing behavior
3. Merge service startup helper functions:
- Combined startServiceWithCoordination and startServiceWithoutReady
- Made readyChan optional (nil for services without readiness signaling)
- Reduces code duplication and improves maintainability
- Both services now use single startServiceWithCoordination function
4. Fix admin server readiness check:
- Removed misleading timeout channel that never closed at startup
- Replaced with simple 2-second sleep before worker startup
- startAdminServer() blocks indefinitely, so channel would only close on shutdown
- Explicit sleep is clearer about the startup coordination intent
* Fix three code quality issues in weed mini command
1. Define volume configuration as named constants:
- Added miniVolumeMaxDataVolumeCounts = "0"
- Added miniVolumeMinFreeSpace = "1"
- Added miniVolumeMinFreeSpacePercent = "1"
- Removed local variable assignments in Volume startup
- Improves maintainability and documents configuration intent
2. Fix deadlock in startServiceWithCoordination:
- Changed from 'defer close(readyChan)' with blocking fn() to running fn() in goroutine
- Close readyChan immediately after launching service goroutine
- Prevents deadlock where fn() never returns, blocking defer execution
- Allows dependent services to start without waiting for blocking call
3. Improve admin server readiness check:
- Replaced fixed 2-second sleep with polling the gRPC port
- Polls up to 20 times (10 seconds total) with 500ms intervals
- Uses net.DialTimeout to check if port is available
- Properly handles IPv6 addresses using net.JoinHostPort
- Logs progress and warnings about connection status
- More robust than sleep against server startup timing variations
4. Add net import for network operations (IPv6 support)
Also fixed IAM config file close error handling to properly check error
from f.Close() and log any failures, preventing silent data loss on NFS.
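A sketch of the TCP port polling described in item 3, assuming hypothetical helper names and placeholder host/port values; the actual mini command helper may differ:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPortReady polls host:port until a TCP connection succeeds or the
// attempts run out. net.JoinHostPort keeps IPv6 addresses well-formed.
func waitForPortReady(host, port string, attempts int, interval time.Duration) bool {
	addr := net.JoinHostPort(host, port)
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, interval)
		if err == nil {
			conn.Close()
			return true
		}
		time.Sleep(interval)
	}
	return false
}

func main() {
	// The admin gRPC check described above: 20 attempts x 500ms, roughly 10 seconds.
	// Host and port here are placeholders, not the mini command's defaults.
	if waitForPortReady("127.0.0.1", "33646", 20, 500*time.Millisecond) {
		fmt.Println("admin gRPC port is accepting connections")
	}
}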
* Document environment variable setup for S3 credentials
Updated welcome message to explain two ways to create S3 credentials:
1. Environment variables (recommended for quick setup):
- Set AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
- Run 'weed mini -dir=/data'
- Creates initial 'mini' user credentials automatically
2. Admin UI (for managing multiple users and policies):
- Open http://localhost:23646 (Admin UI)
- Add identities to create new S3 credentials
This gives users clear guidance on the easiest way to get started with S3
credentials while also explaining the more advanced option for multiple users.
* Print welcome message after all services are running
Moved the welcome message printing from immediately after startMiniServices()
to after all services have been started and are ready. This ensures users see
the welcome message only after startup is complete, not mixed with startup logs.
Changes:
- Extract welcome message logic into printWelcomeMessage() function
- Call printWelcomeMessage() after startMiniServices() completes
- Change message from 'are starting' to 'are running and ready to use'
- This provides cleaner startup output without interleaved logs
* Wait for all services to complete before printing welcome message
The welcome message should only appear after all services are fully running and
the worker is connected. This prevents the message from appearing too early before
startup logs complete.
Changes:
- Pass allServicesReady channel through startMiniServices()
- Add adminReadyChan to track when admin/worker startup completes
- Signal allServicesReady when admin service is fully ready
- Wait for allServicesReady in runMini() before printing welcome message
- This ensures clean output: startup logs first, then welcome message once ready
Now the user sees all startup activity, then a clear welcome message when
everything is truly ready to use.
* Fix welcome message timing: print after worker is fully started
The welcome message was printing too early because allServicesReady was being
closed when the Admin service goroutine started, not when it actually completed.
The Admin service launches startMiniAdminWithWorker() which is a blocking call
that doesn't return until the worker is fully connected.
Now allServicesReady is passed through to startMiniWorker() which closes it
after the worker successfully starts and connects to the admin server.
This ensures the welcome message only appears after:
- Master is ready
- Volume server is ready
- Filer is ready
- S3 service is ready
- WebDAV service is ready
- Admin server is ready
- Worker is connected and running
All startup logs appear first, then the clean welcome message at the end.
* Wait for S3 and WebDAV services to be ready before showing welcome message
The welcome message was printing before S3 and WebDAV servers had fully
initialized. Now the readiness flow is:
1. Master → ready
2. Volume → ready
3. Filer → ready
4. S3 → ready (signals s3ReadyChan)
5. WebDAV → ready (signals webdavReadyChan)
6. Admin/Worker → starts, then waits for both S3 and WebDAV
7. Welcome message prints (all services truly ready)
Changes:
- Add s3ReadyChan and webdavReadyChan to service startup
- Pass S3 and WebDAV ready channels through to Admin service
- Admin/Worker waits for both S3 and WebDAV before closing allServicesReady
- This ensures welcome message appears only when all services are operational
* Admin service should wait for Filer, S3, and WebDAV to be ready
Admin service depends on Filer being operational since it uses the filer
for credential storage. It also makes sense to wait for S3 and WebDAV
since they are user-facing services that should be ready before Admin.
Updated dependencies:
- Admin now waits for: Master, Filer, S3, WebDAV
- This ensures all critical services are operational before Admin starts
- Welcome message will print only after all services including Admin are ready
* Add initialization delay for S3 and WebDAV services
S3 and WebDAV servers need extra time to fully initialize and start listening
after their service functions are launched. Added a 1-second delay after
launching S3 and WebDAV goroutines before signaling readiness.
This ensures the welcome message doesn't print until both services have
emitted their startup logs and are actually serving requests.
* Increase service initialization wait times for more reliable startup
- Increase S3 and WebDAV initialization delay from 1s to 2s to ensure they emit startup logs before welcome message
- Add 1s initialization delay for Filer to ensure it's listening
- Increase admin gRPC polling timeout from 10s to 20s to ensure admin server is fully ready
- This ensures welcome message prints only after all services are fully initialized and ready to accept requests
* Increase service wait time to 10 seconds for reliable startup
All services now wait 10 seconds after launching to ensure they are fully initialized and ready before signaling readiness to dependent services. This ensures the welcome message prints only after all services have fully started.
* Replace fixed 10s delay with intelligent port polling for service readiness
Instead of waiting a fixed 10 seconds for each service, now polls the service
port to check if it's actually accepting connections. This eliminates unnecessary
waiting and allows services to signal readiness as soon as they're ready.
- Polls each service port with up to 30 attempts (6 seconds total)
- Each attempt waits 200ms before retrying
- Stops polling immediately once service is ready
- Falls back gracefully if service is unknown
- Significantly faster startup sequence while maintaining reliability
* Replace channel-based coordination with HTTP pinging for service readiness
Instead of using channels to coordinate service startup, now uses HTTP GET requests
to ping each service endpoint to check if it's ready to accept connections.
Key changes:
- Removed all readiness channels (masterReadyChan, volumeReadyChan, etc.)
- Simplified startMiniServices to use sequential HTTP polling for each service
- startMiniService now just starts the service with logging
- waitForServiceReady uses HTTP client to ping service endpoints (max 6 seconds)
- waitForAdminServerReady uses HTTP GET to check admin server availability
- startMiniAdminWithWorker and startMiniWorker simplified without channel parameters
Benefits:
- Cleaner, more straightforward code
- HTTP pinging is more reliable than TCP port probing
- Services signal readiness through their HTTP endpoints
- Eliminates channel synchronization complexity
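A minimal sketch of the HTTP-ping readiness check described here; the helper name echoes the commit text, but the body, endpoint, and timing are assumptions rather than the actual implementation:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForServiceReady pings url until any HTTP response arrives (even a 404
// proves the listener is up) or roughly 6 seconds of retries have elapsed.
// A timeout only logs a warning, matching the non-fatal behavior described
// in a later commit.
func waitForServiceReady(name, url string) bool {
	client := &http.Client{Timeout: 200 * time.Millisecond}
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			fmt.Printf("%s is ready at %s\n", name, url)
			return true
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Printf("warning: %s did not respond at %s, continuing anyway\n", name, url)
	return false
}

func main() {
	waitForServiceReady("filer", "http://127.0.0.1:8888/") // endpoint is a placeholder
}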
* log level
* Remove overly specific comment from volume size limit in welcome message
The '(good for small files)' comment is too limiting. The 128MB volume size
limit works well for general use cases, not just small files. Simplified the
message to just show the value.
* Ensure allServicesReady channel is always closed via defer
Add 'defer close(allServicesReady)' at the start of startMiniAdminWithWorker
to guarantee the channel is closed on ALL exit paths (normal and error).
This prevents the caller waiting on <-allServicesReady from ever hanging,
while removing the explicit close() at the successful end prevents panic
from double-close.
This makes the code more robust by:
- Guaranteeing channel closure even if worker setup fails
- Eliminating the possibility of caller hanging on errors
- Following Go defer patterns for resource cleanup
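A small runnable sketch of the defer-close guarantee described above, with hypothetical function names and a deliberately failing setup step:

package main

import (
	"errors"
	"fmt"
)

// startMiniAdminWithWorker sketches the pattern: the deferred close guarantees
// allServicesReady is closed on every exit path, so a caller blocked on
// <-allServicesReady never hangs, and no explicit close() remains that could
// cause a double-close panic.
func startMiniAdminWithWorker(allServicesReady chan struct{}) error {
	defer close(allServicesReady)
	if err := setupWorker(); err != nil {
		return err // the deferred close still runs
	}
	return nil
}

func setupWorker() error { return errors.New("illustrative failure") }

func main() {
	ready := make(chan struct{})
	go func() {
		if err := startMiniAdminWithWorker(ready); err != nil {
			fmt.Println("admin/worker setup failed:", err)
		}
	}()
	<-ready // unblocks even though setup failed
	fmt.Println("caller unblocked")
}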
* Enhance health check polling for more robust service coordination
The service startup already uses HTTP health checks via waitForServiceReady()
to verify services are actually accepting connections. This commit improves
the health check implementation:
Changes:
- Elevated success logging to Info level so users see when services become ready
- Improved error messages to clarify that health check timeouts are not fatal
- Services continue startup even if health checks timeout (they may still work)
- Consistent handling of health check results across all services
This provides better visibility into service startup while maintaining the
existing robust coordination via HTTP pinging rather than just TCP port checks.
* Implement stricter error handling for robust mini server startup
Apply all PR review feedback to ensure the mini server fails fast and clearly
when critical components cannot start:
Changes:
1. Remove redundant miniVolumeMinFreeSpacePercent constant
- Simplified util.MustParseMinFreeSpace() call to use single parameter
2. Make service readiness checks fatal errors:
- Master, Volume, Filer, S3, WebDAV health check failures now return errors
- Prevents partially-functional servers from running
- Caller can handle errors gracefully instead of continuing with broken state
3. Make admin server readiness fatal:
- Admin gRPC availability is critical for worker startup
- Use glog.Fatalf to terminate with clear error message
4. Improve IAM config error handling:
- Treat all file operation failures (stat, open, write, close) as fatal
- Prevents silent failures in S3 credential setup
- User gets immediate feedback instead of authentication issues later
5. Use glog.Fatalf for critical worker setup errors:
- Failed to create worker directory, task directories, or worker instance
- Failed to create admin client or start worker
- Ensures mini server doesn't run in broken state
This ensures deterministic startup: services succeed completely or fail with
clear, actionable error messages for the user.
* Make health checks non-fatal for graceful degradation and improve IAM file handling
Address PR feedback to make the mini command more resilient for development:
1. Make health check failures non-fatal
- Master, Volume, Filer, S3, WebDAV health checks now log warnings but allow startup
- Services may still work even if health check endpoints aren't immediately available
- Aligns with intent of a dev-focused tool that should be forgiving of timing issues
- Only prevents startup if startup coordination or critical errors occur
2. Improve IAM config file handling
- Refactored to guarantee file is always closed using separate error variables
- Opens file once and handles write/close errors independently
- Maintains strict error handling while improving code clarity
- All file operation failures still cause fatal errors (as intended)
This makes startup more graceful while maintaining critical error handling for
fundamental failures like missing directories or configuration errors.
* Fix code quality issues in weed mini command
- Fix pointer aliasing: use value copy (*miniBindIp = *miniIp) instead of pointer assignment
- Remove misleading error return from waitForServiceReady() function
- Simplify health check callers to call waitForServiceReady() directly without error handling
- Remove redundant S3 option assignments already set in init() block
- Remove unused allServicesReady parameter from startMiniWorker() function
* Refactor welcome message to use template strings and add startup delay
- Convert welcome message to constant template strings for cleaner code
- Separate credentials instructions into dedicated constant
- Add 500ms delay after worker startup to allow full initialization before welcome message
- Improves output cleanliness by avoiding log interleaving with welcome message
* Fix code style issues in weed mini command
- Fix indentation in IAM config block (lines 424-432) to align with surrounding code
- Remove unused adminServerDone channel that was created but never read
* Address code review feedback for robustness and resource management
- Use defer f.Close() for IAM file handling to ensure file is closed in all code paths, preventing potential file descriptor leaks
- Use 127.0.0.1 instead of *miniIp for service readiness checks to ensure checks always target localhost, improving reliability in environments with firewalls or complex network configurations
- Simplify error handling in waitForAdminServerReady by using single error return instead of separate write/close error variables
* Fix function declaration formatting
- Separate closing brace of startS3Service from startMiniAdminWithWorker declaration with blank line
- Move comment to proper position above function declaration
- Run gofmt for consistent formatting
* Fix IAM config pointer assignment when file already exists
- Add missing *miniIamConfig = iamPath assignment when IAM config file already exists
- Ensures S3 service is properly pointed to the existing IAM configuration
- Retains logging to inform user that existing configuration is being preserved
* Improve pointer assignments and worker synchronization
- Simplify grpcPort and dataDir pointer assignments by directly dereferencing and assigning values instead of taking address of local variables
- Replace time.Sleep(500ms) with proper TCP-based polling to wait for worker gRPC port readiness
- Add waitForWorkerReady function that polls worker's gRPC port with max 6-second timeout
- Add net package import for TCP connection checks
- Improves code idiomaticity and synchronization robustness
* Refactor and simplify error handling for maintainability
- Remove unused error return from startMiniServices (always returned nil)
- Update runMini caller to not expect error from startMiniServices
- Refactor init() into component-specific helper functions:
* initMiniCommonFlags() for common options
* initMiniMasterFlags() for master server options
* initMiniFilerFlags() for filer server options
* initMiniVolumeFlags() for volume server options
* initMiniS3Flags() for S3 server options
* initMiniWebDAVFlags() for WebDAV server options
* initMiniAdminFlags() for admin server options
- Significantly improves code readability and maintainability
- Each component's flags are now in dedicated, focused functions
3 months ago
4aa50bfa6a
fix: EC rebalance fails with replica placement 000 (#7812)
* fix: EC rebalance fails with replica placement 000
This PR fixes several issues with EC shard distribution:
1. Pre-flight check before EC encoding
- Verify target disk type has capacity before encoding starts
- Prevents encoding shards only to fail during rebalance
- Shows helpful error when wrong diskType is specified (e.g., ssd when volumes are on hdd)
2. Fix EC rebalance with replica placement 000
- When DiffRackCount=0, shards should be distributed freely across racks
- The '000' placement means 'no volume replication needed' because EC provides redundancy
- Previously all racks were skipped with error 'shards X > replica placement limit (0)'
3. Add unit tests for EC rebalance slot calculation
- TestECRebalanceWithLimitedSlots: documents the limited slots scenario
- TestECRebalanceZeroFreeSlots: reproduces the 0 free slots error
4. Add Makefile for manual EC testing
- make setup: start cluster and populate data
- make shell: open weed shell for EC commands
- make clean: stop cluster and cleanup
* fix: default -rebalance to true for ec.encode
The -rebalance flag was defaulting to false, which meant ec.encode would only print shard moves but not actually execute them. This is a poor default since the whole point of EC encoding is to distribute shards across servers for fault tolerance.
Now -rebalance defaults to true, so shards are actually distributed after encoding. Users can use -rebalance=false if they only want to see what would happen without making changes.
* test/erasure_coding: improve Makefile safety and docs
- Narrow pkill pattern for volume servers to use TEST_DIR instead of port pattern, avoiding accidental kills of unrelated SeaweedFS processes
- Document external dependencies (curl, jq) in header comments
* shell: refactor buildRackWithEcShards to reuse buildEcShards
Extract common shard bit construction logic to avoid duplication between buildEcShards and buildRackWithEcShards helper functions.
* shell: update test for EC replication 000 behavior
When DiffRackCount=0 (replication "000"), EC shards should be distributed freely across racks since erasure coding provides its own redundancy. Update test expectation to reflect this behavior.
* erasure_coding: add distribution package for proportional EC shard placement
Add a new reusable package for EC shard distribution that:
- Supports configurable EC ratios (not hard-coded 10+4)
- Distributes shards proportionally based on replication policy
- Provides fault tolerance analysis
- Prefers moving parity shards to keep data shards spread out
Key components:
- ECConfig: Configurable data/parity shard counts
- ReplicationConfig: Parsed XYZ replication policy
- ECDistribution: Target shard counts per DC/rack/node
- Rebalancer: Plans shard moves with parity-first strategy
This enables seaweed-enterprise custom EC ratios and weed worker integration while maintaining a clean, testable architecture.
* shell: integrate distribution package for EC rebalancing
Add shell wrappers around the distribution package:
- ProportionalECRebalancer: Plans moves using distribution.Rebalancer
- NewProportionalECRebalancerWithConfig: Supports custom EC configs
- GetDistributionSummary/GetFaultToleranceAnalysis: Helper functions
The shell layer converts between EcNode types and the generic TopologyNode types used by the distribution package.
* test setup
* ec: improve data and parity shard distribution across racks
- Add shardsByTypePerRack helper to track data vs parity shards
- Rewrite doBalanceEcShardsAcrossRacks for two-pass balancing:
  1. Balance data shards (0-9) evenly, max ceil(10/6)=2 per rack
  2. Balance parity shards (10-13) evenly, max ceil(4/6)=1 per rack
- Add balanceShardTypeAcrossRacks for generic shard type balancing
- Add pickRackForShardType to select destination with room for type
- Add unit tests for even data/parity distribution verification
This ensures even read load during normal operation by spreading both data and parity shards across all available racks.
* ec: make data/parity shard counts configurable in ecBalancer
- Add dataShardCount and parityShardCount fields to ecBalancer struct
- Add getDataShardCount() and getParityShardCount() methods with defaults
- Replace direct constant usage with configurable methods
- Fix unused variable warning for parityPerRack
This allows seaweed-enterprise to use custom EC ratios while defaulting to the standard 10+4 scheme.
* Address PR 7812 review comments
Makefile improvements:
- Save PIDs for each volume server for precise termination
- Use PID-based killing in stop target with pkill fallback
- Use more specific pkill patterns with TEST_DIR paths
Documentation:
- Document jq dependency in README.md
Rebalancer fix:
- Fix duplicate shard count updates in applyMovesToAnalysis
- All planners (DC/rack/node) update counts inline during planning
- Remove duplicate updates from applyMovesToAnalysis to avoid double-counting
* test/erasure_coding: use mktemp for test file template
Use mktemp instead of the hardcoded /tmp/testfile_template.bin path to provide better isolation for concurrent test runs.
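A small sketch of the per-rack caps used by the two-pass balancing described above; the helper name is illustrative and the real ecBalancer logic has more moving parts:

package main

import "fmt"

// maxShardsPerRack spreads `count` shards of one type (data or parity) as
// evenly as possible across `racks` racks: ceil(count / racks).
func maxShardsPerRack(count, racks int) int {
	if racks <= 0 {
		return count
	}
	return (count + racks - 1) / racks
}

func main() {
	const dataShards, parityShards, racks = 10, 4, 6
	// Default 10+4 EC scheme across 6 racks:
	//   data shards:   ceil(10/6) = 2 per rack
	//   parity shards: ceil(4/6)  = 1 per rack
	fmt.Println("data per rack:", maxShardsPerRack(dataShards, racks))
	fmt.Println("parity per rack:", maxShardsPerRack(parityShards, racks))
}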
3 months ago
7920ffa98c
Fix uncleanable size=0 orphans with volume.fsck -forcePurging (#7783)
This is a follow-up fix to PR #7332, which partially addressed the issue.
The problem is that size=0 needles are in a gray area:
- IsValid() returns false for size=0 (because size must be > 0)
- IsDeleted() returns false for size=0 (because size must be < 0 or == TombstoneFileSize)
PR #7332 only fixed 2 places, but several other places still had the same bug:
1. needle_map_memory.go:doLoading - line 43 still used oldSize.IsValid()
2. needle_map_memory.go:DoOffsetLoading - used during vacuum and incremental loading
3. needle_map_leveldb.go:generateLevelDbFile - used when generating LevelDB needle maps
4. needle_map_leveldb.go:DoOffsetLoading - used during incremental loading for LevelDB
5. needle_map/compact_map.go:delete - couldn't delete size=0 entries because:
- The condition 'size > 0' failed for size=0
- Even if it passed, negating 0 gives 0 (not marking as deleted)
Changes:
- Changed size.IsValid() to !size.IsDeleted() in doLoading and DoOffsetLoading functions
- Fixed compact_map delete to use TombstoneFileSize for size=0 entries
Fixes #7293
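A sketch of the size=0 gray area with deliberately simplified types; the real Size predicates live in SeaweedFS's needle/storage packages and may differ in representation:

package main

import "fmt"

type Size int64

// Simplified model: a deleted needle is marked by a negative size, with
// TombstoneFileSize as the explicit tombstone marker.
const TombstoneFileSize Size = -1

func (s Size) IsValid() bool   { return s > 0 }
func (s Size) IsDeleted() bool { return s < 0 || s == TombstoneFileSize }

func main() {
	var orphan Size = 0
	// size=0 is the gray area: neither valid nor deleted, so code that
	// branched on IsValid() skipped these entries and they were never purged.
	fmt.Println(orphan.IsValid(), orphan.IsDeleted()) // false false
	// Branching on !IsDeleted() instead treats size=0 as an entry to process,
	// and deletes mark it with TombstoneFileSize so it becomes purgeable.
	fmt.Println(!orphan.IsDeleted()) // true
}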
3 months ago