mount: make metadata cache rebuilds snapshot-consistent (#8531)
* filer: expose metadata events and list snapshots
* mount: invalidate hot directory caches
* mount: read hot directories directly from filer
* mount: add sequenced metadata cache applier
* mount: apply metadata responses through cache applier
* mount: replay snapshot-consistent directory builds
* mount: dedupe self metadata events
* mount: factor directory build cleanup
* mount: replace proto marshal dedup with composite key and ring buffer

  The dedup logic did a full deterministic proto.Marshal on every metadata event just to produce a dedup key. Replace it with a cheap composite string key (TsNs|Directory|OldName|NewName). Also replace the sliding-window slice (which leaked the backing array without bound) with a fixed-size ring buffer that reuses the same array.

* filer: remove mutex and proto.Clone from request-scoped MetadataEventSink

  MetadataEventSink is created per request and accessed only by the goroutine handling the gRPC call. The mutex and double proto.Clone (once in Record, once in Last) were unnecessary overhead on every filer write operation. Store the pointer directly instead.

* mount: skip proto.Clone for caller-owned metadata events

  Add ApplyMetadataResponseOwned, which takes ownership of the response without cloning. Local metadata events (mkdir, create, flush, etc.) are freshly constructed and never shared, so the clone is unnecessary.

* filer: only populate MetadataEvent on successful DeleteEntry

  Avoid calling eventSink.Last() on error paths, where the sink may contain a partial event from an intermediate child deletion during recursive deletes.

* mount: avoid map allocation in collectDirectoryNotifications

  Replace the map with a fixed-size array and linear dedup. There are at most 3 directories to notify (old parent, new parent, and the new child if it is a directory), so a 3-element array avoids the heap allocation on every metadata event.

* mount: fix potential deadlock in enqueueApplyRequest

  Release applyStateMu before the blocking channel send. Previously, if the channel was full (cap 128), the send would block while holding the mutex, preventing Shutdown from acquiring it to set applyClosed.

* mount: restore signature-based self-event filtering as fast path

  Re-add the signature check that was removed when content-based dedup was introduced. Checking signatures is cheap on a small slice and avoids enqueuing and processing events that originated from this mount instance. The content-based dedup remains as a fallback.

* filer: send snapshotTsNs only in first ListEntries response

  The snapshot timestamp is identical for every entry in a single ListEntries stream. Sending it in every response message wastes wire bandwidth for large directories. The client already reads it only from the first response.

* mount: exit read-through mode after successful full directory listing

  MarkDirectoryRefreshed was defined but never called, so directories that entered read-through mode (hot invalidation threshold) stayed there permanently, hitting the filer on every readdir even when cold. Call it after a complete read-through listing finishes.

* mount: include event shape and full paths in dedup key

  The previous dedup key only used Names, which could collapse distinct rename targets. Include the event shape (C/D/U/R), source directory, new parent path, and both entry names so structurally different events are never treated as duplicates.

* mount: drain pending requests on shutdown in runApplyLoop

  After receiving the shutdown sentinel, drain any remaining requests from applyCh non-blockingly and signal each with errMetaCacheClosed so callers waiting on req.done are released.

* mount: include IsDirectory in synthetic delete events

  metadataDeleteEvent now accepts an isDirectory parameter so the applier can distinguish directory deletes from file deletes. Rmdir passes true, Unlink passes false.

* mount: fall back to synthetic event when MetadataEvent is nil

  In mknod and mkdir, if the filer response omits MetadataEvent (e.g. an older filer without the field), synthesize an equivalent local metadata event so the cache is always updated.

* mount: make Flush metadata apply best-effort after successful commit

  After filer_pb.CreateEntryWithResponse succeeds, the entry is persisted. Don't fail the Flush syscall if the local metadata cache apply fails; log and invalidate the directory cache instead. Also fall back to a synthetic event when MetadataEvent is nil.

* mount: make Rename metadata apply best-effort

  The rename has already succeeded on the filer by the time we apply the local metadata event. Log failures instead of returning errors that would be dropped by the caller anyway.

* mount: make saveEntry metadata apply best-effort with fallback

  After UpdateEntryWithResponse succeeds, treat the local metadata apply as non-fatal. Log and invalidate the directory cache on failure. Also fall back to a synthetic event when MetadataEvent is nil.

* filer_pb: preserve snapshotTsNs on error in ReadDirAllEntriesWithSnapshot

  Return the snapshot timestamp even when the first page fails, so callers receive the snapshot boundary when partial data was received.

* filer: send snapshot token for empty directory listings

  When no entries are streamed, send a final ListEntriesResponse with only SnapshotTsNs so clients always receive the snapshot boundary.

* mount: distinguish not-found vs transient errors in lookupEntry

  Return fuse.EIO for non-not-found filer errors instead of unconditionally returning ENOENT, so transient failures don't masquerade as missing entries.

* mount: make CacheRemoteObject metadata apply best-effort

  The file content has already been cached successfully. Don't fail the read if the local metadata cache update fails.

* mount: use consistent snapshot for readdir in direct mode

  Capture the SnapshotTsNs from the first loadDirectoryEntriesDirect call and store it on the DirectoryHandle. Subsequent batch loads pass this stored timestamp so all batches use the same snapshot. Also export DoSeaweedListWithSnapshot so mount can use it directly with snapshot passthrough.

* filer_pb: fix test fake to send SnapshotTsNs only on first response

  Match the server behavior: only the first ListEntriesResponse in a page carries the snapshot timestamp; subsequent entries leave it zero.

* Fix nil pointer dereference in ListEntries stream consumers

  Remove the empty-directory snapshot-only response from ListEntries that sent a ListEntriesResponse with Entry == nil, which crashed every raw stream consumer that assumed resp.Entry is always non-nil. Also add defensive nil checks for resp.Entry in all raw ListEntries stream consumers across: S3 listing, broker topic lookup, broker topic config, admin dashboard, topic retention, hybrid message scanner, Kafka integration, and consumer offset storage.

* Add nil guards for resp.Entry in remaining ListEntries stream consumers

  Covers: S3 object lock check, MQ management dashboard (version/partition/offset loops), and topic retention version loop.

* Make applyLocalMetadataEvent best-effort in Link and Symlink

  The filer operations already succeeded; failing the syscall because the local cache apply failed is wrong. Log a warning and invalidate the parent directory cache instead.

* Make applyLocalMetadataEvent best-effort in Mkdir/Rmdir/Mknod/Unlink

  The filer RPC already committed; don't fail the syscall when the local metadata cache apply fails. Log a warning and invalidate the parent directory cache to force a re-fetch on next access.

* flushFileMetadata: add nil-fallback for metadata event and best-effort apply

  Synthesize a metadata event when resp.GetMetadataEvent() is nil (matching doFlush), and make the apply best-effort with cache invalidation on failure.

* Prevent double-invocation of cleanupBuild in doEnsureVisited

  Add a cleanupDone guard so the deferred cleanup and the inline error-path cleanup don't both call DeleteFolderChildren/AbortDirectoryBuild.

* Fix comment: signature check is O(n), not O(1)

* Prevent deferred cleanup after successful CompleteDirectoryBuild

  Set cleanupDone before returning from the success path so the deferred context-cancellation check cannot undo a published build.

* Invalidate parent directory caches on rename metadata apply failure

  When applyLocalMetadataEvent fails during rename, invalidate the source and destination parent directory caches so subsequent accesses trigger a re-fetch from the filer.

* Add event nil-fallback and cache invalidation to Link and Symlink

  Synthesize metadata events when the server doesn't return one, and invalidate parent directory caches on apply failure.

* Match requested partition when scanning partition directories

  Parse the partition range format (NNNN-NNNN) and match against the requested partition parameter instead of using the first directory.

* Preserve snapshot timestamp across empty directory listings

  Initialize actualSnapshotTsNs from the caller-requested value so it isn't lost when the server returns no entries. Re-add the server-side snapshot-only response for empty directories (all raw stream consumers now have nil guards for Entry).

* Fix CreateEntry error wrapping to support errors.Is/errors.As

  Use errors.New + %w instead of %v for resp.Error so callers can unwrap the underlying error.

* Fix object lock pagination: only advance on non-nil entries

  Move entriesReceived inside the nil check so nil entries don't cause repeated ListEntries calls with the same lastFileName.

* Guard Attributes nil check before accessing Mtime in MQ management

* Do not send nil-Entry response for empty directory listings

  The snapshot-only ListEntriesResponse (with Entry == nil) for empty directories breaks consumers that treat any received response as an entry (Java FilerClient, S3 listing). The Go client-side DoSeaweedListWithSnapshot already preserves the caller-requested snapshot via actualSnapshotTsNs initialization, so the server-side send is unnecessary.

* Fix review findings: subscriber dedup, invalidation normalization, nil guards, shutdown race

  - Remove the self-signature early-return in processEventFn so all events flow through the applier (directory-build buffering sees self-originated events that arrive after a snapshot)
  - Normalize NewParentPath in collectEntryInvalidations to avoid duplicate invalidations when NewParentPath is empty (same-directory update)
  - Guard resp.Entry.Attributes for nil in admin_server.go and topic_retention.go to prevent panics on entries without attributes
  - Fix the enqueueApplyRequest race with shutdown by using select on both applyCh and applyDone, preventing sends after the apply loop exits
  - Add a cleanupDone check to the deferred cleanup in meta_cache_init.go for clarity alongside the existing guard in cleanupBuild
  - Add an empty-directory test case for snapshot consistency

* Propagate authoritative metadata event from CacheRemoteObjectToLocalCluster and generate client-side snapshot for empty directories

  - Add a metadata_event field to CacheRemoteObjectToLocalClusterResponse proto so the filer-emitted event is available to callers
  - Use WithMetadataEventSink in the server handler to capture the event from NotifyUpdateEvent and return it on the response
  - Update filehandle_read.go to prefer the RPC's metadata event over a locally fabricated one, falling back to metadataUpdateEvent when the server doesn't provide one (e.g. older filers)
  - Generate a client-side snapshot cutoff in DoSeaweedListWithSnapshot when the server sends no snapshot (empty directory), so callers like CompleteDirectoryBuild get a meaningful boundary for filtering buffered events

* Skip directory notifications for dirs being built to prevent mid-build cache wipe

  When a metadata event is buffered during a directory build, applyMetadataSideEffects was still firing noteDirectoryUpdate for the building directory. If the directory accumulated enough updates to become "hot", markDirectoryReadThrough would call DeleteFolderChildren, wiping entries that EnsureVisited had already inserted. The build would then complete and mark the directory cached with incomplete data. Fix this by using applyMetadataSideEffectsSkippingBuildingDirs for buffered events, which suppresses directory notifications for dirs currently in buildingDirs while still applying entry invalidations.

* Add test for directory notification suppression during active build

  TestDirectoryNotificationsSuppressedDuringBuild verifies that metadata events targeting a directory under an active EnsureVisited build do NOT fire onDirectoryUpdate for that directory. In production, this prevents markDirectoryReadThrough from calling DeleteFolderChildren mid-build, which would wipe entries already inserted by the listing. The test inserts an entry during a build, sends multiple metadata events for the building directory, asserts no notifications fired for it, verifies the entry survives, and confirms buffered events are replayed after CompleteDirectoryBuild.

* Fix create invalidations, build guard, event shape, context, and snapshot error path

  - collectEntryInvalidations: invalidate the FUSE kernel cache on pure create events (OldEntry == nil && NewEntry != nil), not just updates and deletes
  - completeDirectoryBuildNow: only call markCachedFn when an active build existed (state != nil), preventing an unpopulated directory from being marked as cached
  - Add a metadataCreateEvent helper that produces a create-shaped event (NewEntry only, no OldEntry) and use it in the mkdir, mknod, symlink, and hardlink create fallback paths instead of metadataUpdateEvent, which incorrectly set both OldEntry and NewEntry
  - applyMetadataResponseEnqueue: use context.Background() for the queued mutation so a cancelled caller context cannot abort the apply loop mid-write
  - DoSeaweedListWithSnapshot: move snapshot initialization before the ListEntries call so the error path returns the preserved snapshot instead of 0

* Fix review findings: test loop, cache race, context safety, snapshot consistency

  - Fix the build test loop starting at i=1 instead of i=0, missing new-0.txt verification
  - Re-check IsDirectoryCached after a cache miss to avoid an ENOENT race with markDirectoryReadThrough
  - Use context.Background() in enqueueAndWait so caller cancellation can't abort build/complete mid-way
  - Pass dh.snapshotTsNs in the skip-batch loadDirectoryEntriesDirect call for snapshot consistency
  - Prefer resp.MetadataEvent over the fallback in Unlink event derivation
  - Add a comment on the MetadataEventSink.Record single-event assumption

* Fix empty-directory snapshot clock skew and build cancellation race

  Empty-directory snapshot: remove the client-side time.Now() synthesis when the server returns no entries. Instead return snapshotTsNs=0, and in completeDirectoryBuildNow replay ALL buffered events when the snapshot is 0. This eliminates the clock-skew bug where a client ahead of the filer would filter out legitimate post-list events.

  Build cancellation: use context.Background() for the BeginDirectoryBuild and CompleteDirectoryBuild calls in doEnsureVisited, so errgroup cancellation doesn't cause enqueueAndWait to return early and trigger cleanupBuild while the operation is still queued.

* Add tests for empty-directory build replay and cancellation resilience

  TestEmptyDirectoryBuildReplaysAllBufferedEvents verifies that when CompleteDirectoryBuild receives snapshotTsNs=0 (empty directory, no server snapshot), ALL buffered events are replayed regardless of their TsNs values; no clock-skew-sensitive filtering occurs.

  TestBuildCompletionSurvivesCallerCancellation verifies that once CompleteDirectoryBuild is enqueued, a cancelled caller context does not prevent the build from completing. The apply loop runs with context.Background(), so the directory becomes cached and buffered events are replayed even when the caller gives up waiting.

* Fix directory subtree cleanup, Link rollback, test robustness

  - applyMetadataResponseLocked: when a directory entry is deleted or moved, call DeleteFolderChildren on the old path so cached descendants don't leak as stale entries
  - Link: save the original HardLinkId/Counter before mutation; if CreateEntryWithResponse fails after the source was already updated, roll back the source entry to its original state via UpdateEntry
  - TestBuildCompletionSurvivesCallerCancellation: replace the fixed time.Sleep(50ms) with a deadline-based poll that checks IsDirectoryCached in a loop, failing only after a 2s timeout
  - TestReadDirAllEntriesWithSnapshotEmptyDirectory: assert that ListEntries was actually invoked on the mock client so the test exercises the RPC path
  - newMetadataEvent: add an early return when both oldEntry and newEntry are nil to avoid emitting events with an empty Directory

---------

Co-authored-by: Copilot <copilot@github.com>
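The composite-key and ring-buffer dedup described above can be sketched as follows. This is a minimal illustration, not the SeaweedFS implementation: the `metadataEvent` struct, the exact key layout, and the 64-entry window size are stand-ins for the real `filer_pb.SubscribeMetadataResponse` fields and tuning.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// metadataEvent is a simplified stand-in for filer_pb.SubscribeMetadataResponse;
// the real dedup keys off TsNs, Directory, and the old/new entry names.
type metadataEvent struct {
	TsNs      int64
	Directory string
	OldName   string
	NewName   string
}

// dedupKey builds a cheap composite string key instead of running a full
// deterministic proto.Marshal on every event.
func dedupKey(e metadataEvent) string {
	var b strings.Builder
	b.WriteString(strconv.FormatInt(e.TsNs, 10))
	b.WriteByte('|')
	b.WriteString(e.Directory)
	b.WriteByte('|')
	b.WriteString(e.OldName)
	b.WriteByte('|')
	b.WriteString(e.NewName)
	return b.String()
}

// dedupRing holds recently seen keys in a fixed-size ring buffer; unlike a
// sliding slice window, it never grows or leaks its backing array.
type dedupRing struct {
	keys [64]string
	next int
}

// seen reports whether key was recently recorded, recording it if not.
// Linear scan is fine at this size and avoids any map allocation.
func (r *dedupRing) seen(key string) bool {
	for _, k := range r.keys {
		if k == key {
			return true
		}
	}
	r.keys[r.next] = key
	r.next = (r.next + 1) % len(r.keys)
	return false
}

func main() {
	var ring dedupRing
	e := metadataEvent{TsNs: 99, Directory: "/dir", OldName: "a", NewName: "b"}
	fmt.Println(ring.seen(dedupKey(e))) // false: first occurrence
	fmt.Println(ring.seen(dedupKey(e))) // true: duplicate suppressed
}
```

The trade-off is that two distinct events with identical key fields would collapse, which is why the later commits widen the key with the event shape and full paths.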
committed by GitHub (no known key found for this signature in database; GPG Key ID: B5690EEEBB952194)
42 changed files with 2615 additions and 374 deletions
- other/java/client/src/main/proto/filer.proto (6 changes)
- weed/admin/dash/admin_server.go (11 changes)
- weed/admin/dash/mq_management.go (11 changes)
- weed/admin/dash/topic_retention.go (11 changes)
- weed/filer/filer_notify.go (70 changes)
- weed/filer/metadata_event_sink.go (47 changes)
- weed/filer/metadata_event_sink_test.go (43 changes)
- weed/mount/filehandle_read.go (12 changes)
- weed/mount/inode_to_path.go (41 changes)
- weed/mount/inode_to_path_test.go (41 changes)
- weed/mount/meta_cache/meta_cache.go (641 changes)
- weed/mount/meta_cache/meta_cache_apply_test.go (361 changes)
- weed/mount/meta_cache/meta_cache_build_test.go (459 changes)
- weed/mount/meta_cache/meta_cache_init.go (42 changes)
- weed/mount/meta_cache/meta_cache_subscribe.go (68 changes)
- weed/mount/metadata_events.go (66 changes)
- weed/mount/weedfs.go (76 changes)
- weed/mount/weedfs_dir_mkrm.go (30 changes)
- weed/mount/weedfs_dir_read.go (96 changes)
- weed/mount/weedfs_dir_read_test.go (100 changes)
- weed/mount/weedfs_file_mkrm.go (35 changes)
- weed/mount/weedfs_file_sync.go (15 changes)
- weed/mount/weedfs_link.go (44 changes)
- weed/mount/weedfs_metadata_flush.go (16 changes)
- weed/mount/weedfs_rename.go (24 changes)
- weed/mount/weedfs_rename_test.go (13 changes)
- weed/mount/weedfs_symlink.go (16 changes)
- weed/mount/wfs_save.go (12 changes)
- weed/mq/broker/broker_grpc_lookup.go (4 changes)
- weed/mq/broker/broker_topic_conf_read_write.go (6 changes)
- weed/mq/kafka/consumer_offset/filer_storage.go (2 changes)
- weed/mq/kafka/integration/broker_client.go (16 changes)
- weed/pb/filer.proto (6 changes)
- weed/pb/filer_pb/filer.pb.go (232 changes)
- weed/pb/filer_pb/filer_client.go (83 changes)
- weed/pb/filer_pb/filer_client_snapshot_test.go (165 changes)
- weed/pb/filer_pb/filer_pb_helper.go (22 changes)
- weed/query/engine/hybrid_message_scanner.go (3 changes)
- weed/s3api/s3_objectlock/object_lock_check.go (5 changes)
- weed/s3api/s3api_object_handlers_list.go (3 changes)
- weed/server/filer_grpc_server.go (33 changes)
- weed/server/filer_grpc_server_remote.go (2 changes)
weed/filer/metadata_event_sink.go
@@ -0,0 +1,47 @@
package filer

import (
	"context"

	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
)

type metadataEventSinkKey struct{}

// MetadataEventSink captures the last metadata event emitted while serving a
// request. It is request-scoped and accessed only by the goroutine handling
// the gRPC call, so no mutex is needed.
type MetadataEventSink struct {
	last *filer_pb.SubscribeMetadataResponse
}

func WithMetadataEventSink(ctx context.Context) (context.Context, *MetadataEventSink) {
	sink := &MetadataEventSink{}
	return context.WithValue(ctx, metadataEventSinkKey{}, sink), sink
}

func metadataEventSinkFromContext(ctx context.Context) *MetadataEventSink {
	if ctx == nil {
		return nil
	}
	sink, _ := ctx.Value(metadataEventSinkKey{}).(*MetadataEventSink)
	return sink
}

// Record stores the event, replacing any previously recorded one.
// Each filer RPC emits at most one NotifyUpdateEvent, so only the last
// event is retained. If an RPC were to emit multiple events, only the
// final one would be returned to the caller.
func (s *MetadataEventSink) Record(event *filer_pb.SubscribeMetadataResponse) {
	if s == nil || event == nil {
		return
	}
	s.last = event
}

func (s *MetadataEventSink) Last() *filer_pb.SubscribeMetadataResponse {
	if s == nil {
		return nil
	}
	return s.last
}
weed/filer/metadata_event_sink_test.go
@@ -0,0 +1,43 @@
package filer

import (
	"context"
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/util"
	"github.com/seaweedfs/seaweedfs/weed/util/log_buffer"
)

func TestNotifyUpdateEventRecordsRequestMetadataEvent(t *testing.T) {
	f := &Filer{
		Signature: 42,
		LocalMetaLogBuffer: log_buffer.NewLogBuffer(
			"test",
			time.Hour,
			func(*log_buffer.LogBuffer, time.Time, time.Time, []byte, int64, int64) {},
			nil,
			nil,
		),
	}

	ctx, sink := WithMetadataEventSink(context.Background())
	f.NotifyUpdateEvent(ctx, &Entry{FullPath: util.FullPath("/dir/file.txt")}, nil, true, false, []int32{7})

	event := sink.Last()
	if event == nil {
		t.Fatal("expected metadata event to be recorded")
	}
	if event.Directory != "/dir" {
		t.Fatalf("directory = %q, want /dir", event.Directory)
	}
	if event.EventNotification.OldEntry == nil || event.EventNotification.OldEntry.Name != "file.txt" {
		t.Fatalf("old entry = %+v, want file.txt", event.EventNotification.OldEntry)
	}
	if got := event.EventNotification.Signatures; len(got) != 2 || got[0] != 7 || got[1] != 42 {
		t.Fatalf("signatures = %v, want [7 42]", got)
	}
	if event.TsNs == 0 {
		t.Fatal("expected event timestamp to be set")
	}
}
@ -0,0 +1,361 @@ |
|||
package meta_cache |
|||
|
|||
import ( |
|||
"context" |
|||
"path/filepath" |
|||
"sync" |
|||
"testing" |
|||
"time" |
|||
|
|||
"github.com/seaweedfs/seaweedfs/weed/filer" |
|||
"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb" |
|||
"github.com/seaweedfs/seaweedfs/weed/util" |
|||
) |
|||
|
|||
func TestApplyMetadataResponseAppliesEventsInOrder(t *testing.T) { |
|||
mc, _, notifications, invalidations := newTestMetaCache(t, map[util.FullPath]bool{ |
|||
"/": true, |
|||
"/dir": true, |
|||
}) |
|||
defer mc.Shutdown() |
|||
|
|||
createResp := &filer_pb.SubscribeMetadataResponse{ |
|||
Directory: "/dir", |
|||
EventNotification: &filer_pb.EventNotification{ |
|||
NewEntry: &filer_pb.Entry{ |
|||
Name: "file.txt", |
|||
Attributes: &filer_pb.FuseAttributes{ |
|||
Crtime: 1, |
|||
Mtime: 1, |
|||
FileMode: 0100644, |
|||
FileSize: 11, |
|||
}, |
|||
}, |
|||
}, |
|||
} |
|||
updateResp := &filer_pb.SubscribeMetadataResponse{ |
|||
Directory: "/dir", |
|||
EventNotification: &filer_pb.EventNotification{ |
|||
OldEntry: &filer_pb.Entry{ |
|||
Name: "file.txt", |
|||
}, |
|||
NewEntry: &filer_pb.Entry{ |
|||
Name: "file.txt", |
|||
Attributes: &filer_pb.FuseAttributes{ |
|||
Crtime: 1, |
|||
Mtime: 2, |
|||
FileMode: 0100644, |
|||
FileSize: 29, |
|||
}, |
|||
}, |
|||
NewParentPath: "/dir", |
|||
}, |
|||
} |
|||
deleteResp := &filer_pb.SubscribeMetadataResponse{ |
|||
Directory: "/dir", |
|||
EventNotification: &filer_pb.EventNotification{ |
|||
OldEntry: &filer_pb.Entry{ |
|||
Name: "file.txt", |
|||
}, |
|||
}, |
|||
} |
|||
|
|||
if err := mc.ApplyMetadataResponse(context.Background(), createResp, SubscriberMetadataResponseApplyOptions); err != nil { |
|||
t.Fatalf("apply create: %v", err) |
|||
} |
|||
|
|||
entry, err := mc.FindEntry(context.Background(), util.FullPath("/dir/file.txt")) |
|||
if err != nil { |
|||
t.Fatalf("find created entry: %v", err) |
|||
} |
|||
if entry.FileSize != 11 { |
|||
t.Fatalf("created file size = %d, want 11", entry.FileSize) |
|||
} |
|||
|
|||
if err := mc.ApplyMetadataResponse(context.Background(), updateResp, SubscriberMetadataResponseApplyOptions); err != nil { |
|||
t.Fatalf("apply update: %v", err) |
|||
} |
|||
|
|||
entry, err = mc.FindEntry(context.Background(), util.FullPath("/dir/file.txt")) |
|||
if err != nil { |
|||
t.Fatalf("find updated entry: %v", err) |
|||
} |
|||
if entry.FileSize != 29 { |
|||
t.Fatalf("updated file size = %d, want 29", entry.FileSize) |
|||
} |
|||
|
|||
if err := mc.ApplyMetadataResponse(context.Background(), deleteResp, SubscriberMetadataResponseApplyOptions); err != nil { |
|||
t.Fatalf("apply delete: %v", err) |
|||
} |
|||
|
|||
entry, err = mc.FindEntry(context.Background(), util.FullPath("/dir/file.txt")) |
|||
if err != filer_pb.ErrNotFound { |
|||
t.Fatalf("find deleted entry error = %v, want %v", err, filer_pb.ErrNotFound) |
|||
} |
|||
if entry != nil { |
|||
t.Fatalf("deleted entry still cached: %+v", entry) |
|||
} |
|||
|
|||
if got := countPath(notifications.paths(), util.FullPath("/dir")); got != 3 { |
|||
t.Fatalf("directory notifications for /dir = %d, want 3", got) |
|||
} |
|||
if got := countPath(invalidations.paths(), util.FullPath("/dir/file.txt")); got != 3 { |
|||
t.Fatalf("invalidations for /dir/file.txt = %d, want 3 (create + update + delete)", got) |
|||
} |
|||
} |
|||
|
|||
func TestApplyMetadataResponseRenamesAcrossCachedDirectories(t *testing.T) { |
|||
mc, _, notifications, invalidations := newTestMetaCache(t, map[util.FullPath]bool{ |
|||
"/": true, |
|||
"/src": true, |
|||
"/dst": true, |
|||
}) |
|||
defer mc.Shutdown() |
|||
|
|||
if err := mc.InsertEntry(context.Background(), &filer.Entry{ |
|||
FullPath: "/src/file.tmp", |
|||
Attr: filer.Attr{ |
|||
Crtime: time.Unix(1, 0), |
|||
Mtime: time.Unix(1, 0), |
|||
Mode: 0100644, |
|||
FileSize: 7, |
|||
}, |
|||
}); err != nil { |
|||
t.Fatalf("insert source entry: %v", err) |
|||
} |
|||
|
|||
renameResp := &filer_pb.SubscribeMetadataResponse{ |
|||
Directory: "/src", |
|||
EventNotification: &filer_pb.EventNotification{ |
|||
OldEntry: &filer_pb.Entry{ |
|||
Name: "file.tmp", |
|||
}, |
|||
NewEntry: &filer_pb.Entry{ |
|||
Name: "file.txt", |
|||
Attributes: &filer_pb.FuseAttributes{ |
|||
Crtime: 1, |
|||
Mtime: 2, |
|||
FileMode: 0100644, |
|||
FileSize: 41, |
|||
}, |
|||
}, |
|||
NewParentPath: "/dst", |
|||
}, |
|||
} |
|||
|
|||
if err := mc.ApplyMetadataResponse(context.Background(), renameResp, SubscriberMetadataResponseApplyOptions); err != nil { |
|||
t.Fatalf("apply rename: %v", err) |
|||
} |
|||
|
|||
oldEntry, err := mc.FindEntry(context.Background(), util.FullPath("/src/file.tmp")) |
|||
if err != filer_pb.ErrNotFound { |
|||
t.Fatalf("find old path error = %v, want %v", err, filer_pb.ErrNotFound) |
|||
} |
|||
if oldEntry != nil { |
|||
t.Fatalf("old path still cached: %+v", oldEntry) |
|||
} |
|||
|
|||
newEntry, err := mc.FindEntry(context.Background(), util.FullPath("/dst/file.txt")) |
|||
if err != nil { |
|||
t.Fatalf("find new path: %v", err) |
|||
} |
|||
if newEntry.FileSize != 41 { |
|||
t.Fatalf("renamed file size = %d, want 41", newEntry.FileSize) |
|||
} |
|||
|
|||
if got := countPath(notifications.paths(), util.FullPath("/src")); got != 1 { |
|||
t.Fatalf("directory notifications for /src = %d, want 1", got) |
|||
} |
|||
if got := countPath(notifications.paths(), util.FullPath("/dst")); got != 1 { |
|||
t.Fatalf("directory notifications for /dst = %d, want 1", got) |
|||
} |
|||
if got := countPath(invalidations.paths(), util.FullPath("/src/file.tmp")); got != 1 { |
|||
t.Fatalf("invalidations for /src/file.tmp = %d, want 1", got) |
|||
} |
|||
if got := countPath(invalidations.paths(), util.FullPath("/dst/file.txt")); got != 1 { |
|||
t.Fatalf("invalidations for /dst/file.txt = %d, want 1", got) |
|||
} |
|||
} |
|||
|
|||
func TestApplyMetadataResponseLocalOptionsSkipInvalidations(t *testing.T) { |
|||
mc, _, notifications, invalidations := newTestMetaCache(t, map[util.FullPath]bool{ |
|||
"/": true, |
|||
"/dir": true, |
|||
}) |
|||
defer mc.Shutdown() |
|||
|
|||
if err := mc.InsertEntry(context.Background(), &filer.Entry{ |
|||
FullPath: "/dir/file.txt", |
|||
Attr: filer.Attr{ |
|||
Crtime: time.Unix(1, 0), |
|||
Mtime: time.Unix(1, 0), |
|||
Mode: 0100644, |
|||
FileSize: 7, |
|||
}, |
|||
}); err != nil { |
|||
t.Fatalf("insert source entry: %v", err) |
|||
} |
|||
|
|||
updateResp := &filer_pb.SubscribeMetadataResponse{ |
|||
Directory: "/dir", |
|||
EventNotification: &filer_pb.EventNotification{ |
|||
OldEntry: &filer_pb.Entry{ |
|||
Name: "file.txt", |
|||
}, |
|||
NewEntry: &filer_pb.Entry{ |
|||
Name: "file.txt", |
|||
Attributes: &filer_pb.FuseAttributes{ |
|||
Crtime: 1, |
|||
Mtime: 2, |
|||
FileMode: 0100644, |
|||
FileSize: 17, |
|||
}, |
|||
}, |
|||
NewParentPath: "/dir", |
|||
}, |
|||
} |
|||
|
|||
if err := mc.ApplyMetadataResponse(context.Background(), updateResp, LocalMetadataResponseApplyOptions); err != nil { |
|||
t.Fatalf("apply local update: %v", err) |
|||
} |
|||
|
|||
entry, err := mc.FindEntry(context.Background(), util.FullPath("/dir/file.txt")) |
|||
if err != nil { |
|||
t.Fatalf("find updated entry: %v", err) |
|||
} |
|||
if entry.FileSize != 17 { |
|||
t.Fatalf("updated file size = %d, want 17", entry.FileSize) |
|||
} |
|||
if got := countPath(notifications.paths(), util.FullPath("/dir")); got != 1 { |
|||
t.Fatalf("directory notifications for /dir = %d, want 1", got) |
|||
} |
|||
if got := len(invalidations.paths()); got != 0 { |
|||
t.Fatalf("invalidations = %d, want 0", got) |
|||
} |
|||
} |
|||
|
|||
func TestApplyMetadataResponseDeduplicatesRepeatedFilerEvent(t *testing.T) { |
|||
mc, _, notifications, invalidations := newTestMetaCache(t, map[util.FullPath]bool{ |
|||
"/": true, |
|||
"/dir": true, |
|||
}) |
|||
defer mc.Shutdown() |
|||
|
|||
if err := mc.InsertEntry(context.Background(), &filer.Entry{ |
|||
FullPath: "/dir/file.txt", |
|||
Attr: filer.Attr{ |
|||
Crtime: time.Unix(1, 0), |
|||
Mtime: time.Unix(1, 0), |
|||
Mode: 0100644, |
|||
FileSize: 5, |
|||
}, |
|||
}); err != nil { |
|||
t.Fatalf("insert source entry: %v", err) |
|||
} |
|||
|
|||
updateResp := &filer_pb.SubscribeMetadataResponse{ |
|||
Directory: "/dir", |
|||
EventNotification: &filer_pb.EventNotification{ |
|||
OldEntry: &filer_pb.Entry{ |
|||
Name: "file.txt", |
|||
}, |
|||
		NewEntry: &filer_pb.Entry{
			Name: "file.txt",
			Attributes: &filer_pb.FuseAttributes{
				Crtime:   1,
				Mtime:    2,
				FileMode: 0100644,
				FileSize: 15,
			},
		},
		NewParentPath: "/dir",
		Signatures:    []int32{7},
	},
	TsNs: 99,
}

if err := mc.ApplyMetadataResponse(context.Background(), updateResp, SubscriberMetadataResponseApplyOptions); err != nil {
	t.Fatalf("first apply: %v", err)
}
if err := mc.ApplyMetadataResponse(context.Background(), updateResp, SubscriberMetadataResponseApplyOptions); err != nil {
	t.Fatalf("second apply: %v", err)
}

entry, err := mc.FindEntry(context.Background(), util.FullPath("/dir/file.txt"))
if err != nil {
	t.Fatalf("find updated entry: %v", err)
}
if entry.FileSize != 15 {
	t.Fatalf("updated file size = %d, want 15", entry.FileSize)
}
if got := countPath(notifications.paths(), util.FullPath("/dir")); got != 1 {
	t.Fatalf("directory notifications for /dir = %d, want 1", got)
}
if got := countPath(invalidations.paths(), util.FullPath("/dir/file.txt")); got != 1 {
	t.Fatalf("invalidations for /dir/file.txt = %d, want 1", got)
}
}

func newTestMetaCache(t *testing.T, cached map[util.FullPath]bool) (*MetaCache, map[util.FullPath]bool, *recordedPaths, *recordedPaths) {
	t.Helper()

	mapper, err := NewUidGidMapper("", "")
	if err != nil {
		t.Fatalf("uid/gid mapper: %v", err)
	}

	var cachedMu sync.Mutex
	notifications := &recordedPaths{}
	invalidations := &recordedPaths{}

	mc := NewMetaCache(
		filepath.Join(t.TempDir(), "meta"),
		mapper,
		util.FullPath("/"),
		func(path util.FullPath) {
			cachedMu.Lock()
			defer cachedMu.Unlock()
			cached[path] = true
		},
		func(path util.FullPath) bool {
			cachedMu.Lock()
			defer cachedMu.Unlock()
			return cached[path]
		},
		func(path util.FullPath, entry *filer_pb.Entry) {
			invalidations.record(path)
		},
		func(dir util.FullPath) {
			notifications.record(dir)
		},
	)

	return mc, cached, notifications, invalidations
}

type recordedPaths struct {
	mu    sync.Mutex
	items []util.FullPath
}

func (r *recordedPaths) record(path util.FullPath) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.items = append(r.items, path)
}

func (r *recordedPaths) paths() []util.FullPath {
	r.mu.Lock()
	defer r.mu.Unlock()
	return append([]util.FullPath(nil), r.items...)
}

func countPath(paths []util.FullPath, target util.FullPath) int {
	count := 0
	for _, path := range paths {
		if path == target {
			count++
		}
	}
	return count
}
@@ -0,0 +1,459 @@
package meta_cache

import (
	"context"
	"fmt"
	"io"
	"sync"
	"testing"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/filer"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/util"
	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
)

type buildListStream struct {
	responses   []*filer_pb.ListEntriesResponse
	onFirstRecv func()
	once        sync.Once
	index       int
}

func (s *buildListStream) Recv() (*filer_pb.ListEntriesResponse, error) {
	s.once.Do(func() {
		if s.onFirstRecv != nil {
			s.onFirstRecv()
		}
	})
	if s.index >= len(s.responses) {
		return nil, io.EOF
	}
	resp := s.responses[s.index]
	s.index++
	return resp, nil
}

func (s *buildListStream) Header() (metadata.MD, error) { return metadata.MD{}, nil }
func (s *buildListStream) Trailer() metadata.MD         { return metadata.MD{} }
func (s *buildListStream) CloseSend() error             { return nil }
func (s *buildListStream) Context() context.Context     { return context.Background() }
func (s *buildListStream) SendMsg(any) error            { return nil }
func (s *buildListStream) RecvMsg(any) error            { return nil }

type buildListClient struct {
	filer_pb.SeaweedFilerClient
	responses   []*filer_pb.ListEntriesResponse
	onFirstRecv func()
}

func (c *buildListClient) ListEntries(ctx context.Context, in *filer_pb.ListEntriesRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[filer_pb.ListEntriesResponse], error) {
	return &buildListStream{
		responses:   c.responses,
		onFirstRecv: c.onFirstRecv,
	}, nil
}

type buildFilerAccessor struct {
	client filer_pb.SeaweedFilerClient
}

func (a *buildFilerAccessor) WithFilerClient(_ bool, fn func(filer_pb.SeaweedFilerClient) error) error {
	return fn(a.client)
}

func (a *buildFilerAccessor) AdjustedUrl(*filer_pb.Location) string { return "" }
func (a *buildFilerAccessor) GetDataCenter() string                 { return "" }

func TestEnsureVisitedReplaysBufferedEventsAfterSnapshot(t *testing.T) {
	mc, _, _, _ := newTestMetaCache(t, map[util.FullPath]bool{
		"/": true,
	})
	defer mc.Shutdown()

	var applyErr error
	accessor := &buildFilerAccessor{
		client: &buildListClient{
			responses: []*filer_pb.ListEntriesResponse{
				{
					Entry: &filer_pb.Entry{
						Name: "base.txt",
						Attributes: &filer_pb.FuseAttributes{
							Crtime:   1,
							Mtime:    1,
							FileMode: 0100644,
							FileSize: 3,
						},
					},
					SnapshotTsNs: 100,
				},
			},
			onFirstRecv: func() {
				applyErr = mc.ApplyMetadataResponse(context.Background(), &filer_pb.SubscribeMetadataResponse{
					Directory: "/dir",
					EventNotification: &filer_pb.EventNotification{
						NewEntry: &filer_pb.Entry{
							Name: "after.txt",
							Attributes: &filer_pb.FuseAttributes{
								Crtime:   2,
								Mtime:    2,
								FileMode: 0100644,
								FileSize: 9,
							},
						},
					},
					TsNs: 101,
				}, SubscriberMetadataResponseApplyOptions)
			},
		},
	}

	if err := EnsureVisited(mc, accessor, util.FullPath("/dir")); err != nil {
		t.Fatalf("ensure visited: %v", err)
	}
	if applyErr != nil {
		t.Fatalf("apply buffered event: %v", applyErr)
	}
	if !mc.IsDirectoryCached(util.FullPath("/dir")) {
		t.Fatal("directory /dir should be cached after build completes")
	}

	baseEntry, err := mc.FindEntry(context.Background(), util.FullPath("/dir/base.txt"))
	if err != nil {
		t.Fatalf("find base entry: %v", err)
	}
	if baseEntry.FileSize != 3 {
		t.Fatalf("base entry size = %d, want 3", baseEntry.FileSize)
	}

	afterEntry, err := mc.FindEntry(context.Background(), util.FullPath("/dir/after.txt"))
	if err != nil {
		t.Fatalf("find replayed entry: %v", err)
	}
	if afterEntry.FileSize != 9 {
		t.Fatalf("replayed entry size = %d, want 9", afterEntry.FileSize)
	}
}

// TestDirectoryNotificationsSuppressedDuringBuild verifies that metadata events
// targeting a directory under active build do NOT fire onDirectoryUpdate for
// that directory. In production, onDirectoryUpdate can trigger
// markDirectoryReadThrough → DeleteFolderChildren, which would wipe entries
// that EnsureVisited already inserted mid-build.
func TestDirectoryNotificationsSuppressedDuringBuild(t *testing.T) {
	mc, _, notifications, _ := newTestMetaCache(t, map[util.FullPath]bool{
		"/": true,
	})
	defer mc.Shutdown()

	// Start building /dir (simulates the beginning of EnsureVisited)
	if err := mc.BeginDirectoryBuild(context.Background(), util.FullPath("/dir")); err != nil {
		t.Fatalf("begin build: %v", err)
	}

	// Insert an entry as EnsureVisited would during the filer listing
	if err := mc.InsertEntry(context.Background(), &filer.Entry{
		FullPath: "/dir/existing.txt",
		Attr: filer.Attr{
			Crtime:   time.Unix(1, 0),
			Mtime:    time.Unix(1, 0),
			Mode:     0100644,
			FileSize: 100,
		},
	}); err != nil {
		t.Fatalf("insert entry during build: %v", err)
	}

	// Simulate multiple metadata events arriving for /dir while the build
	// is in progress. Each event would normally call noteDirectoryUpdate,
	// which in production can trigger markDirectoryReadThrough and wipe entries.
	for i := 0; i < 5; i++ {
		resp := &filer_pb.SubscribeMetadataResponse{
			Directory: "/dir",
			EventNotification: &filer_pb.EventNotification{
				NewEntry: &filer_pb.Entry{
					Name: fmt.Sprintf("new-%d.txt", i),
					Attributes: &filer_pb.FuseAttributes{
						Crtime:   int64(10 + i),
						Mtime:    int64(10 + i),
						FileMode: 0100644,
						FileSize: uint64(i + 1),
					},
				},
			},
			TsNs: int64(200 + i),
		}
		if err := mc.ApplyMetadataResponse(context.Background(), resp, SubscriberMetadataResponseApplyOptions); err != nil {
			t.Fatalf("apply event %d: %v", i, err)
		}
	}

	// The building directory /dir must NOT have received any notifications.
	// If it did, markDirectoryReadThrough would wipe the cache mid-build.
	for _, p := range notifications.paths() {
		if p == util.FullPath("/dir") {
			t.Fatal("onDirectoryUpdate was called for /dir during build; this would cause markDirectoryReadThrough to wipe entries mid-build")
		}
	}

	// The entry inserted during the build must still be present
	entry, err := mc.FindEntry(context.Background(), util.FullPath("/dir/existing.txt"))
	if err != nil {
		t.Fatalf("entry wiped during build: %v", err)
	}
	if entry.FileSize != 100 {
		t.Fatalf("entry size = %d, want 100", entry.FileSize)
	}

	// Complete the build — buffered events should be replayed
	if err := mc.CompleteDirectoryBuild(context.Background(), util.FullPath("/dir"), 150); err != nil {
		t.Fatalf("complete build: %v", err)
	}

	// After build completes, the entry from the listing should still exist
	entry, err = mc.FindEntry(context.Background(), util.FullPath("/dir/existing.txt"))
	if err != nil {
		t.Fatalf("entry lost after build completion: %v", err)
	}
	if entry.FileSize != 100 {
		t.Fatalf("entry size after build = %d, want 100", entry.FileSize)
	}

	// Buffered events with TsNs > snapshotTsNs (150) should have been replayed
	for i := 0; i < 5; i++ {
		name := fmt.Sprintf("new-%d.txt", i)
		e, err := mc.FindEntry(context.Background(), util.FullPath("/dir/"+name))
		if err != nil {
			t.Fatalf("replayed entry %s not found: %v", name, err)
		}
		if e.FileSize != uint64(i+1) {
			t.Fatalf("replayed entry %s size = %d, want %d", name, e.FileSize, i+1)
		}
	}
}

// TestEmptyDirectoryBuildReplaysAllBufferedEvents verifies that when a
// directory build completes with snapshotTsNs=0 (empty directory — server
// returned no entries and no snapshot), ALL buffered events are replayed
// without any TsNs filtering. This prevents clock-skew between client and
// filer from dropping legitimate mutations.
func TestEmptyDirectoryBuildReplaysAllBufferedEvents(t *testing.T) {
	mc, _, _, _ := newTestMetaCache(t, map[util.FullPath]bool{
		"/": true,
	})
	defer mc.Shutdown()

	if err := mc.BeginDirectoryBuild(context.Background(), util.FullPath("/empty")); err != nil {
		t.Fatalf("begin build: %v", err)
	}

	// Buffer events with a range of TsNs values — some very old, some recent.
	// With a client-synthesized snapshot, old events could be incorrectly filtered.
	tsValues := []int64{1, 50, 500, 5000, 50000}
	for i, ts := range tsValues {
		resp := &filer_pb.SubscribeMetadataResponse{
			Directory: "/empty",
			EventNotification: &filer_pb.EventNotification{
				NewEntry: &filer_pb.Entry{
					Name: fmt.Sprintf("file-%d.txt", i),
					Attributes: &filer_pb.FuseAttributes{
						Crtime:   ts,
						Mtime:    ts,
						FileMode: 0100644,
						FileSize: uint64(i + 10),
					},
				},
			},
			TsNs: ts,
		}
		if err := mc.ApplyMetadataResponse(context.Background(), resp, SubscriberMetadataResponseApplyOptions); err != nil {
			t.Fatalf("apply event %d: %v", i, err)
		}
	}

	// Complete with snapshotTsNs=0 — simulates empty directory listing
	if err := mc.CompleteDirectoryBuild(context.Background(), util.FullPath("/empty"), 0); err != nil {
		t.Fatalf("complete build: %v", err)
	}

	// Every buffered event must have been replayed, regardless of TsNs
	for i := range tsValues {
		name := fmt.Sprintf("file-%d.txt", i)
		e, err := mc.FindEntry(context.Background(), util.FullPath("/empty/"+name))
		if err != nil {
			t.Fatalf("replayed entry %s not found: %v", name, err)
		}
		if e.FileSize != uint64(i+10) {
			t.Fatalf("replayed entry %s size = %d, want %d", name, e.FileSize, i+10)
		}
	}

	if !mc.IsDirectoryCached(util.FullPath("/empty")) {
		t.Fatal("/empty should be marked cached after build completes")
	}
}

// TestBuildCompletionSurvivesCallerCancellation verifies that once
// CompleteDirectoryBuild is enqueued, a cancelled caller context does not
// prevent the build from completing. The apply loop uses context.Background()
// internally, so the operation finishes even if the caller gives up waiting.
func TestBuildCompletionSurvivesCallerCancellation(t *testing.T) {
	mc, _, _, _ := newTestMetaCache(t, map[util.FullPath]bool{
		"/": true,
	})
	defer mc.Shutdown()

	if err := mc.BeginDirectoryBuild(context.Background(), util.FullPath("/dir")); err != nil {
		t.Fatalf("begin build: %v", err)
	}

	// Insert an entry during the build (as EnsureVisited would)
	if err := mc.InsertEntry(context.Background(), &filer.Entry{
		FullPath: "/dir/kept.txt",
		Attr: filer.Attr{
			Crtime:   time.Unix(1, 0),
			Mtime:    time.Unix(1, 0),
			Mode:     0100644,
			FileSize: 42,
		},
	}); err != nil {
		t.Fatalf("insert entry: %v", err)
	}

	// Buffer an event that should be replayed
	if err := mc.ApplyMetadataResponse(context.Background(), &filer_pb.SubscribeMetadataResponse{
		Directory: "/dir",
		EventNotification: &filer_pb.EventNotification{
			NewEntry: &filer_pb.Entry{
				Name: "buffered.txt",
				Attributes: &filer_pb.FuseAttributes{
					Crtime:   5,
					Mtime:    5,
					FileMode: 0100644,
					FileSize: 77,
				},
			},
		},
		TsNs: 200,
	}, SubscriberMetadataResponseApplyOptions); err != nil {
		t.Fatalf("apply event: %v", err)
	}

	// Complete with an already-cancelled context. The operation should still
	// succeed because enqueueAndWait sets req.ctx = context.Background().
	cancelledCtx, cancel := context.WithCancel(context.Background())
	cancel() // cancel immediately

	// CompleteDirectoryBuild may return ctx.Err() if the select picks
	// ctx.Done() first, but the operation itself still completes in the
	// apply loop. Poll for the observable side effect instead of using
	// a fixed sleep.
	_ = mc.CompleteDirectoryBuild(cancelledCtx, util.FullPath("/dir"), 100)

	// Poll until the build completes or a deadline elapses.
	deadline := time.After(2 * time.Second)
	for !mc.IsDirectoryCached(util.FullPath("/dir")) {
		select {
		case <-deadline:
			t.Fatal("/dir should be cached — CompleteDirectoryBuild must have executed despite cancelled context")
		default:
			time.Sleep(5 * time.Millisecond)
		}
	}

	// The pre-existing entry must survive
	entry, findErr := mc.FindEntry(context.Background(), util.FullPath("/dir/kept.txt"))
	if findErr != nil {
		t.Fatalf("find kept entry: %v", findErr)
	}
	if entry.FileSize != 42 {
		t.Fatalf("kept entry size = %d, want 42", entry.FileSize)
	}

	// The buffered event (TsNs 200 > snapshot 100) must have been replayed
	buffered, findErr := mc.FindEntry(context.Background(), util.FullPath("/dir/buffered.txt"))
	if findErr != nil {
		t.Fatalf("find buffered entry: %v", findErr)
	}
	if buffered.FileSize != 77 {
		t.Fatalf("buffered entry size = %d, want 77", buffered.FileSize)
	}
}

func TestBufferedRenameUpdatesOtherDirectoryBeforeBuildCompletes(t *testing.T) {
	mc, _, _, _ := newTestMetaCache(t, map[util.FullPath]bool{
		"/":    true,
		"/src": true,
	})
	defer mc.Shutdown()

	if err := mc.InsertEntry(context.Background(), &filer.Entry{
		FullPath: "/src/from.txt",
		Attr: filer.Attr{
			Crtime:   time.Unix(1, 0),
			Mtime:    time.Unix(1, 0),
			Mode:     0100644,
			FileSize: 7,
		},
	}); err != nil {
		t.Fatalf("insert source entry: %v", err)
	}

	if err := mc.BeginDirectoryBuild(context.Background(), util.FullPath("/dst")); err != nil {
		t.Fatalf("begin build: %v", err)
	}

	renameResp := &filer_pb.SubscribeMetadataResponse{
		Directory: "/src",
		EventNotification: &filer_pb.EventNotification{
			OldEntry: &filer_pb.Entry{
				Name: "from.txt",
			},
			NewEntry: &filer_pb.Entry{
				Name: "to.txt",
				Attributes: &filer_pb.FuseAttributes{
					Crtime:   2,
					Mtime:    2,
					FileMode: 0100644,
					FileSize: 12,
				},
			},
			NewParentPath: "/dst",
		},
		TsNs: 101,
	}

	if err := mc.ApplyMetadataResponse(context.Background(), renameResp, SubscriberMetadataResponseApplyOptions); err != nil {
		t.Fatalf("apply rename: %v", err)
	}

	oldEntry, err := mc.FindEntry(context.Background(), util.FullPath("/src/from.txt"))
	if err != filer_pb.ErrNotFound {
		t.Fatalf("find old path error = %v, want %v", err, filer_pb.ErrNotFound)
	}
	if oldEntry != nil {
		t.Fatalf("old path should be removed before build completes: %+v", oldEntry)
	}

	newEntry, err := mc.FindEntry(context.Background(), util.FullPath("/dst/to.txt"))
	if err != filer_pb.ErrNotFound {
		t.Fatalf("find buffered new path error = %v, want %v", err, filer_pb.ErrNotFound)
	}
	if newEntry != nil {
		t.Fatalf("new path should stay hidden until build completes: %+v", newEntry)
	}

	if err := mc.CompleteDirectoryBuild(context.Background(), util.FullPath("/dst"), 100); err != nil {
		t.Fatalf("complete build: %v", err)
	}

	newEntry, err = mc.FindEntry(context.Background(), util.FullPath("/dst/to.txt"))
	if err != nil {
		t.Fatalf("find replayed new path: %v", err)
	}
	if newEntry.FileSize != 12 {
		t.Fatalf("replayed new path size = %d, want 12", newEntry.FileSize)
	}
}
@@ -0,0 +1,66 @@
package mount

import (
	"context"

	"github.com/seaweedfs/seaweedfs/weed/mount/meta_cache"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"google.golang.org/protobuf/proto"
)

func (wfs *WFS) applyLocalMetadataEvent(ctx context.Context, event *filer_pb.SubscribeMetadataResponse) error {
	if ctx == nil {
		ctx = context.Background()
	}
	return wfs.metaCache.ApplyMetadataResponseOwned(ctx, event, meta_cache.LocalMetadataResponseApplyOptions)
}

func metadataDeleteEvent(directory, name string, isDirectory bool) *filer_pb.SubscribeMetadataResponse {
	if name == "" {
		return nil
	}
	return &filer_pb.SubscribeMetadataResponse{
		Directory: directory,
		EventNotification: &filer_pb.EventNotification{
			OldEntry: &filer_pb.Entry{Name: name, IsDirectory: isDirectory},
		},
	}
}

func metadataCreateEvent(directory string, entry *filer_pb.Entry) *filer_pb.SubscribeMetadataResponse {
	if entry == nil {
		return nil
	}
	return &filer_pb.SubscribeMetadataResponse{
		Directory: directory,
		EventNotification: &filer_pb.EventNotification{
			NewEntry:      proto.Clone(entry).(*filer_pb.Entry),
			NewParentPath: directory,
		},
	}
}

func metadataUpdateEvent(directory string, entry *filer_pb.Entry) *filer_pb.SubscribeMetadataResponse {
	if entry == nil {
		return nil
	}
	return &filer_pb.SubscribeMetadataResponse{
		Directory: directory,
		EventNotification: &filer_pb.EventNotification{
			OldEntry:      &filer_pb.Entry{Name: entry.Name},
			NewEntry:      proto.Clone(entry).(*filer_pb.Entry),
			NewParentPath: directory,
		},
	}
}

func metadataEventFromRenameResponse(resp *filer_pb.StreamRenameEntryResponse) *filer_pb.SubscribeMetadataResponse {
	if resp == nil || resp.EventNotification == nil {
		return nil
	}
	return &filer_pb.SubscribeMetadataResponse{
		Directory:         resp.Directory,
		EventNotification: proto.Clone(resp.EventNotification).(*filer_pb.EventNotification),
		TsNs:              resp.TsNs,
	}
}
@@ -0,0 +1,100 @@
package mount

import (
	"context"
	"io"
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/mount/meta_cache"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/util"
	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
)

type directoryListStream struct {
	responses []*filer_pb.ListEntriesResponse
	index     int
}

func (s *directoryListStream) Recv() (*filer_pb.ListEntriesResponse, error) {
	if s.index >= len(s.responses) {
		return nil, io.EOF
	}
	resp := s.responses[s.index]
	s.index++
	return resp, nil
}

func (s *directoryListStream) Header() (metadata.MD, error) { return metadata.MD{}, nil }
func (s *directoryListStream) Trailer() metadata.MD         { return metadata.MD{} }
func (s *directoryListStream) CloseSend() error             { return nil }
func (s *directoryListStream) Context() context.Context     { return context.Background() }
func (s *directoryListStream) SendMsg(any) error            { return nil }
func (s *directoryListStream) RecvMsg(any) error            { return nil }

type directoryListClient struct {
	filer_pb.SeaweedFilerClient
	responses []*filer_pb.ListEntriesResponse
}

func (c *directoryListClient) ListEntries(ctx context.Context, in *filer_pb.ListEntriesRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[filer_pb.ListEntriesResponse], error) {
	return &directoryListStream{responses: c.responses}, nil
}

type directoryFilerAccessor struct {
	client filer_pb.SeaweedFilerClient
}

func (a *directoryFilerAccessor) WithFilerClient(_ bool, fn func(filer_pb.SeaweedFilerClient) error) error {
	return fn(a.client)
}

func (a *directoryFilerAccessor) AdjustedUrl(*filer_pb.Location) string { return "" }
func (a *directoryFilerAccessor) GetDataCenter() string                 { return "" }

func TestLoadDirectoryEntriesDirectFiltersHiddenEntriesAndMapsIds(t *testing.T) {
	mapper, err := meta_cache.NewUidGidMapper("10:1000", "20:2000")
	if err != nil {
		t.Fatalf("uid/gid mapper: %v", err)
	}

	client := &directoryFilerAccessor{
		client: &directoryListClient{
			responses: []*filer_pb.ListEntriesResponse{
				{
					Entry: &filer_pb.Entry{
						Name: "topics",
						Attributes: &filer_pb.FuseAttributes{
							Uid: 1000,
							Gid: 2000,
						},
					},
				},
				{
					Entry: &filer_pb.Entry{
						Name: "visible",
						Attributes: &filer_pb.FuseAttributes{
							Uid: 1000,
							Gid: 2000,
						},
					},
				},
			},
		},
	}

	entries, _, err := loadDirectoryEntriesDirect(context.Background(), client, mapper, util.FullPath("/"), "", false, 10, 0)
	if err != nil {
		t.Fatalf("loadDirectoryEntriesDirect: %v", err)
	}
	if got := len(entries); got != 1 {
		t.Fatalf("entry count = %d, want 1", got)
	}
	if entries[0].Name() != "visible" {
		t.Fatalf("entry name = %q, want visible", entries[0].Name())
	}
	if entries[0].Attr.Uid != 10 || entries[0].Attr.Gid != 20 {
		t.Fatalf("mapped uid/gid = %d/%d, want 10/20", entries[0].Attr.Uid, entries[0].Attr.Gid)
	}
}
@@ -0,0 +1,165 @@
package filer_pb

import (
	"context"
	"fmt"
	"io"
	"testing"

	"github.com/seaweedfs/seaweedfs/weed/util"
	"google.golang.org/grpc"
	"google.golang.org/grpc/metadata"
	"google.golang.org/protobuf/proto"
)

type snapshotListStream struct {
	responses []*ListEntriesResponse
	index     int
}

func (s *snapshotListStream) Recv() (*ListEntriesResponse, error) {
	if s.index >= len(s.responses) {
		return nil, io.EOF
	}
	resp := s.responses[s.index]
	s.index++
	return resp, nil
}

func (s *snapshotListStream) Header() (metadata.MD, error) { return metadata.MD{}, nil }
func (s *snapshotListStream) Trailer() metadata.MD         { return metadata.MD{} }
func (s *snapshotListStream) CloseSend() error             { return nil }
func (s *snapshotListStream) Context() context.Context     { return context.Background() }
func (s *snapshotListStream) SendMsg(any) error            { return nil }
func (s *snapshotListStream) RecvMsg(any) error            { return nil }

type snapshotListClient struct {
	SeaweedFilerClient
	entries    []*Entry
	requests   []*ListEntriesRequest
	snapshotTs int64
	listCalled bool
}

func (c *snapshotListClient) ListEntries(ctx context.Context, in *ListEntriesRequest, opts ...grpc.CallOption) (grpc.ServerStreamingClient[ListEntriesResponse], error) {
	c.listCalled = true
	c.requests = append(c.requests, proto.Clone(in).(*ListEntriesRequest))

	start := 0
	if in.StartFromFileName != "" {
		start = len(c.entries)
		for i, entry := range c.entries {
			if entry.Name == in.StartFromFileName {
				start = i
				if !in.InclusiveStartFrom {
					start++
				}
				break
			}
		}
	}

	end := len(c.entries)
	if in.Limit > 0 && start+int(in.Limit) < end {
		end = start + int(in.Limit)
	}

	snapshotTs := in.SnapshotTsNs
	if snapshotTs == 0 {
		snapshotTs = c.snapshotTs
	}

	responses := make([]*ListEntriesResponse, 0, end-start)
	for i, entry := range c.entries[start:end] {
		resp := &ListEntriesResponse{
			Entry: entry,
		}
		if i == 0 {
			resp.SnapshotTsNs = snapshotTs
		}
		responses = append(responses, resp)
	}

	return &snapshotListStream{responses: responses}, nil
}

type snapshotFilerAccessor struct {
	client SeaweedFilerClient
}

func (a *snapshotFilerAccessor) WithFilerClient(_ bool, fn func(SeaweedFilerClient) error) error {
	return fn(a.client)
}

func (a *snapshotFilerAccessor) AdjustedUrl(*Location) string { return "" }
func (a *snapshotFilerAccessor) GetDataCenter() string        { return "" }

func TestReadDirAllEntriesWithSnapshotCarriesSnapshotAcrossPages(t *testing.T) {
	entries := make([]*Entry, 0, 10001)
	for i := 0; i < 10001; i++ {
		entries = append(entries, &Entry{Name: fmt.Sprintf("entry-%05d", i), Attributes: &FuseAttributes{}})
	}

	client := &snapshotListClient{
		entries:    entries,
		snapshotTs: 123456789,
	}
	accessor := &snapshotFilerAccessor{client: client}

	var listed []string
	snapshotTs, err := ReadDirAllEntriesWithSnapshot(context.Background(), accessor, util.FullPath("/dir"), "", func(entry *Entry, isLast bool) error {
		listed = append(listed, entry.Name)
		return nil
	})
	if err != nil {
		t.Fatalf("ReadDirAllEntriesWithSnapshot: %v", err)
	}

	if got := len(listed); got != len(entries) {
		t.Fatalf("listed %d entries, want %d", got, len(entries))
	}
	if snapshotTs != client.snapshotTs {
		t.Fatalf("snapshotTs = %d, want %d", snapshotTs, client.snapshotTs)
	}
	if got := len(client.requests); got != 2 {
		t.Fatalf("request count = %d, want 2", got)
	}
	if client.requests[0].SnapshotTsNs != 0 {
		t.Fatalf("first request snapshot = %d, want 0", client.requests[0].SnapshotTsNs)
	}
	if client.requests[1].SnapshotTsNs != client.snapshotTs {
		t.Fatalf("second request snapshot = %d, want %d", client.requests[1].SnapshotTsNs, client.snapshotTs)
	}
	if client.requests[1].StartFromFileName != entries[9999].Name {
		t.Fatalf("second request marker = %q, want %q", client.requests[1].StartFromFileName, entries[9999].Name)
	}
}

func TestReadDirAllEntriesWithSnapshotEmptyDirectory(t *testing.T) {
	client := &snapshotListClient{
		entries:    nil, // empty directory
		snapshotTs: 999888777,
	}
	accessor := &snapshotFilerAccessor{client: client}

	var listed []string
	snapshotTs, err := ReadDirAllEntriesWithSnapshot(context.Background(), accessor, util.FullPath("/empty"), "", func(entry *Entry, isLast bool) error {
		listed = append(listed, entry.Name)
		return nil
	})
	if err != nil {
		t.Fatalf("ReadDirAllEntriesWithSnapshot: %v", err)
	}
	if len(listed) != 0 {
		t.Fatalf("listed %d entries, want 0", len(listed))
	}
	// When the server sends no entries (empty directory), no snapshot is
	// received. The client returns 0 so callers like CompleteDirectoryBuild
	// know to replay all buffered events without clock-skew filtering.
	if snapshotTs != 0 {
		t.Fatalf("snapshotTs = %d, want 0 for empty directory", snapshotTs)
	}
	if !client.listCalled {
		t.Fatal("ListEntries was not invoked for the empty directory")
	}
}