Three improvements to RedbNeedleMap:
- Use Durability::None on all write transactions, since the .idx file is the
crash-recovery source and the redb index is always rebuildable from it
- Delete stale .rdb before rebuild in load_from_idx to prevent
leftover entries surviving a crash
- Reuse existing .rdb on clean restart by storing .idx file size in
a metadata table; incrementally replay only new .idx entries when
the .idx has grown since last build
Add .rdb (redb index) cleanup to removeVolumeFiles and vacuum commit
in Go code, for compatibility with mixed Rust/Go volume server
deployments. Route .rdb through dirIdx in FileName() like .idx/.ldb.
Closes the metrics gap between Rust and Go volume servers (8 → 23
metrics). Adds handler counters, vacuuming histograms, volume/disk
gauges, inflight request tracking, and concurrent limit gauges.
Centralizes request counting in the store handlers instead of duplicating it in each handler.
- Check for .note file (interrupted VolumeCopy) and remove partial volumes
- Validate EC shards before skipping .dat loading: check shard count,
uniform size, and expected size from .dat file
- Remove stale .cpd/.cpx compaction temp files on startup
- Skip already-loaded volumes on repeated load_existing_volumes calls
- Fix healthz test: set is_heartbeating=true in test state
- Range requests: clamp end to file size instead of returning 416 (matches Go)
- Suffix ranges larger than file size: clamp instead of skipping (matches Go)
- Set ETag response header on 201 Created write responses (matches Go SetEtag)
- Always decompress before image resize/crop even when client accepts gzip
- VolumeTierMoveDatFromRemote: don't check maintenance mode (Go doesn't)
- Fix stale integration test: stats routes are now exposed on admin router
The `/stats/disk`, `/stats/counter`, and `/stats/memory` endpoints were
implemented in `handlers.rs` but were missing from the HTTP router
registration in `volume_server.rs`. Registered them under the UI-enabled
group to match the Go implementation.
- VolumeConfigure: don't fail on unmount of non-existent volume (Go returns nil)
- 304 responses include ETag/Last-Modified headers per HTTP spec
- Conditional header checks run before chunk manifest expansion
- EC encoder uses two-phase approach: 1GB large blocks then 1MB small blocks
- Compaction uses volume-level TTL (not per-needle TTL) for filtering
- VolumeConfigure does unmount/modify-disk/remount cycle matching Go
- VolumeMarkReadonly persists flag to .vif when persist=true
- AllocateVolume accepts version parameter
- Multipart boundary uses leading CRLF per RFC 2046
- MIME type override skipped for chunk manifests
- RedbNeedleMap: pure-Rust disk-backed needle map using redb, with NeedleMap
enum wrapping both in-memory and redb variants
- binary_search_by_append_at_ns: port of Go's BinarySearchByAppendAtNs for
VolumeIncrementalCopy with since_ns > 0
- Proxy/redirect: master volume lookup, HTTP proxy forwarding with ?proxied=true,
and 301 redirects for non-local volumes based on ReadMode config
- Wire new VolumeServerState fields: read_mode, master_url, self_url, http_client
- Extract TTL from ?ttl= query param and set on needle (matches Go's ParseUpload)
- Auto-compress compressible file types (.js, .css, .json, .svg, text/*, etc.)
using gzip, only when compression saves >10% (matches Go's IsCompressableFileType)
- Extract Seaweed-* headers as custom metadata pairs stored as JSON in needle
- Store filename from URL path in needle name field
- Include filename in upload response JSON
- Add unit tests for is_compressible_file_type and try_gzip_data
- BatchDelete now supports EC volumes: looks up needle in .ecx index,
journals deletion to .ecj file (local-only, Go handles distributed part)
- JPEG EXIF orientation auto-fix on upload using kamadak-exif + image crate,
matching Go's FixJpgOrientation behavior (8 orientation transforms)
- Async batched write processing via mpsc queue (up to 128 entries per batch),
groups writes by volume ID and syncs once per volume per batch
- VolumeTierMoveDatToRemote: multipart upload .dat file to S3 with progress
streaming, updates .vif with remote file reference
- VolumeTierMoveDatFromRemote: downloads .dat from S3 with progress streaming,
removes remote file reference from .vif
- S3TierRegistry for managing named remote storage backends
- VolumeInfo (.vif) JSON persistence matching Go's protojson format
- 124 lib + 7 integration = 131 Rust tests pass
- All 109 Go integration tests pass (53 HTTP + 56 gRPC)
- Add StreamingBody (http_body::Body) for chunked reads of files >1MB,
avoiding OOM by reading 64KB at a time via spawn_blocking + pread
- Add NeedleStreamInfo and meta-only read path to avoid loading full
needle body when streaming
- Add RustMultiVolumeCluster test framework and MultiCluster interface
so TestVolumeMoveHandlesInFlightWrites works with Rust volume servers
- Remove TestVolumeMoveHandlesInFlightWrites from CI skip list (now passes)
- All 117 Go integration tests + 8 Rust integration tests pass (100%)
- Add remote_storage module with RemoteStorageClient trait
- Implement S3RemoteStorageClient using aws-sdk-s3 (covers all S3-compatible
providers: AWS, Wasabi, Backblaze, Aliyun, etc.)
- FetchAndWriteNeedle now fetches data from S3, writes locally as needle,
and replicates to peers
- Add 3 integration tests using weed mini as S3 backend:
- Full round-trip fetch from S3
- Byte-range (partial) read from S3
- Error handling for non-existent S3 objects
- VolumeServerStatus now returns real data_center and rack from config
- DiskStatus uses sysinfo crate for actual disk total/free/used/percent
- ReadVolumeFileStatus returns dat file modification timestamps
- FetchAndWriteNeedle produces Go-matching error messages for unknown remote storage types
Remove unused imports, prefix unused variables, and add #[allow(dead_code)]
for fields/methods used only in specific contexts (serde deserialization,
non-unix builds, future error tracking).
VolumeCopy: connects to source as gRPC client, copies .dat/.idx/.vif
files via CopyFile streaming, verifies sizes, mounts the volume.
Needed for volume rebalancing and migration between servers.
VolumeTailReceiver: connects to source's VolumeTailSender, receives
needle header+body chunks, reassembles multi-chunk needles, writes
them locally. Needed for live replication during volume moves.
Also adds helper functions: parse_grpc_address (SeaweedFS address
format parsing), copy_file_from_source (streaming file copy with
progress reporting), find_last_append_at_ns (timestamp extraction
from copied files).
All 128/130 integration tests still pass (the same 2 known-unfixable failures remain).