mount: async flush on close() when writebackCache is enabled (#8727)
* mount: async flush on close() when writebackCache is enabled

  When -writebackCache is enabled, defer data upload and metadata flush
  from Flush() (triggered by close()) to a background goroutine in
  Release(). This allows processes like rsync that write many small files
  to proceed to the next file immediately instead of blocking on two
  network round-trips (volume upload + filer metadata) per file.

  Fixes #8718

* mount: add retry with backoff for async metadata flush

  The metadata flush in completeAsyncFlush now retries up to 3 times with
  exponential backoff (1s, 2s, 4s) on transient gRPC errors. Since the
  chunk data is already safely on volume servers at this point, only the
  filer metadata reference needs persisting — retrying is both safe and
  effective. Data flush (FlushData) is not retried externally because
  UploadWithRetry already handles transient HTTP/gRPC errors internally;
  if it still fails, the chunk memory has been freed.

* test: add integration tests for writebackCache async flush

  Add comprehensive FUSE integration tests for the writebackCache async
  flush feature (issue #8718):

  - Basic operations: write/read, sequential files, large files, empty
    files, overwrites
  - Fsync correctness: fsync forces synchronous flush even in writeback
    mode, immediate read-after-fsync
  - Concurrent small files: multi-worker parallel writes (rsync-like
    workload), multi-directory, rapid create/close
  - Data integrity: append after close, partial writes, file size
    correctness, binary data preservation
  - Performance comparison: writeback vs synchronous flush throughput
  - Stress test: 16 workers x 100 files with content verification
  - Mixed concurrent operations: reads, writes, creates running together

  Also fix pre-existing test infrastructure issues:

  - Rename framework.go to framework_test.go (fixes Go package conflict)
  - Fix undefined totalSize variable in concurrent_operations_test.go

* ci: update fuse-integration workflow to run full test suite

  The workflow previously only ran placeholder tests (simple_test.go,
  working_demo_test.go) in a temp directory due to a Go module conflict.
  Now that framework.go is renamed to framework_test.go, the full test
  suite compiles and runs correctly from test/fuse_integration/.

  Changes:
  - Run go test directly in test/fuse_integration/ (no temp dir copy)
  - Install weed binary to /usr/local/bin for test framework discovery
  - Configure /etc/fuse.conf with user_allow_other for FUSE mounts
  - Install fuse3 for modern FUSE support
  - Stream test output to log file for artifact upload

* mount: fix three P1 races in async flush

  P1-1: Reopen overwrites data still flushing in background

  ReleaseByHandle removes the old handle from fhMap before the deferred
  flush finishes. A reopen of the same inode during that window would
  build from stale filer metadata, overwriting the async flush.

  Fix: Track in-flight async flushes per inode via a pendingAsyncFlush
  map. AcquireHandle now calls waitForPendingAsyncFlush(inode) to block
  until any pending flush completes before reading filer metadata.

  P1-2: Deferred flush races rename and unlink after close

  completeAsyncFlush captured the path once at entry, but rename or
  unlink after close() could cause metadata to be written under the wrong
  name or recreate a deleted file.

  Fix: Re-resolve the path from the inode via GetPath right before the
  metadata flush. GetPath returns the current path (reflecting renames)
  or ENOENT (if unlinked), in which case we skip the metadata flush.

  P1-3: SIGINT/SIGTERM bypasses the async-flush drain

  grace.OnInterrupt runs hooks then calls os.Exit(0), so WaitForAsyncFlush
  after server.Serve() never executes on signal.

  Fix: Add WaitForAsyncFlush (with a 10s timeout) to the WFS interrupt
  handler, before cache cleanup. The timeout prevents hanging on Ctrl-C
  when the filer is unreachable.
* mount: fix P1 races — draining handle stays in fhMap

  P1-1: Reopen TOCTOU

  The gap between ReleaseByHandle removing from fhMap and submitAsyncFlush
  registering in pendingAsyncFlush allowed a concurrent AcquireHandle to
  slip through with stale metadata.

  Fix: Hold pendingAsyncFlushMu across both the counter decrement
  (ReleaseByHandle) and the pending registration. The handle is
  registered as pending before the lock is released, so
  waitForPendingAsyncFlush always sees it.

  P1-2: Rename/unlink can't find the draining handle

  ReleaseByHandle deleted from fhMap immediately. Rename's
  FindFileHandle(inode) at line 251 could not find the handle to update
  entry.Name. Unlink could not coordinate either.

  Fix: When asyncFlushPending is true, ReleaseByHandle/ReleaseByInode
  leave the handle in fhMap (counter=0 but maps intact). The handle stays
  visible to FindFileHandle so rename can update entry.Name.
  completeAsyncFlush re-resolves the path from the inode (GetPath) right
  before the metadata flush for correctness after rename/unlink. After
  the drain, RemoveFileHandle cleans up the maps.

  Double-drain prevention: ReleaseByHandle/ReleaseByInode return nil if
  the counter is already <= 0, so Forget after Release doesn't start a
  second drain goroutine.

  P1-3: SIGINT deletes swap files under running goroutines

  After the 10s timeout, os.RemoveAll deleted the write cache dir
  (containing swap files) while FlushData goroutines were still reading
  from them.

  Fix: Increase the timeout to 30s. If the timeout expires, skip write
  cache dir removal so in-flight goroutines can finish reading swap
  files. The OS (or the next mount) cleans them up. The read cache is
  always removed.

* mount: never skip metadata flush when Forget drops the inode mapping

  Forget removes the inode→path mapping when the kernel's lookup count
  reaches zero, but this does NOT mean the file was unlinked — it only
  means the kernel evicted its cache entry.
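The per-inode coordination described above can be sketched with a small tracker: release registers the inode as pending under the same lock that drops the handle, the background flush signals completion, and a later open blocks on that signal. The type and method names here are illustrative, not the actual SeaweedFS types.

```go
package main

import (
	"fmt"
	"sync"
)

// pendingFlushTracker sketches the pendingAsyncFlush coordination: an
// in-flight flush is recorded per inode, and opens wait for it to drain.
type pendingFlushTracker struct {
	mu      sync.Mutex
	pending map[uint64]chan struct{}
}

func newTracker() *pendingFlushTracker {
	return &pendingFlushTracker{pending: map[uint64]chan struct{}{}}
}

// release marks the inode's flush as in flight and returns a done
// callback for the background goroutine to call when the flush finishes.
// Registration happens under the lock, so there is no TOCTOU window.
func (t *pendingFlushTracker) release(inode uint64) (done func()) {
	t.mu.Lock()
	defer t.mu.Unlock()
	ch := make(chan struct{})
	t.pending[inode] = ch
	return func() {
		t.mu.Lock()
		delete(t.pending, inode)
		t.mu.Unlock()
		close(ch)
	}
}

// waitForPendingAsyncFlush blocks until any in-flight flush for the
// inode completes; it returns immediately when nothing is pending.
func (t *pendingFlushTracker) waitForPendingAsyncFlush(inode uint64) {
	t.mu.Lock()
	ch, ok := t.pending[inode]
	t.mu.Unlock()
	if ok {
		<-ch
	}
}

func main() {
	tr := newTracker()
	done := tr.release(42)
	go func() {
		// pretend the data + metadata flush happens here
		done()
	}()
	tr.waitForPendingAsyncFlush(42) // the open() path blocks until the flush completes
	fmt.Println("flush drained before reopen")
}
```

The channel-close handoff gives the happens-before edge the fix relies on: a reopen that waits here is guaranteed to read post-flush filer metadata.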
  completeAsyncFlush was treating GetPath failure as "file unlinked" and
  skipping the metadata flush, which orphaned the just-uploaded chunks
  for live files.

  Fix: Save dir and name at doFlush defer time. In completeAsyncFlush,
  try GetPath first to pick up renames; if the mapping is gone, fall back
  to the saved dir/name. Always attempt the metadata flush — the filer is
  the authority on whether the file exists, not the local inode cache.

* mount: distinguish Forget from Unlink in the async flush path fallback

  The saved-path fallback (from the previous fix) always flushed metadata
  when GetPath failed, which recreated files that were explicitly
  unlinked after close(). The same stale fallback could recreate the
  pre-rename path if Forget dropped the inode mapping after a rename.

  Root cause: GetPath failure has two meanings:
  1. Forget — the kernel evicted the cache entry (the file still exists)
  2. Unlink — the file was explicitly deleted (and must not be recreated)

  Fix (three coordinated changes):

  - Unlink (weedfs_file_mkrm.go): Before RemovePath, look up the inode
    and find any draining handle via FindFileHandle. Set
    fh.isDeleted = true so the async flush knows the file was explicitly
    removed.
  - Rename (weedfs_rename.go): When renaming a file with a draining
    handle, update asyncFlushDir/asyncFlushName to the post-rename
    location. This keeps the saved-path fallback current so Forget after
    rename doesn't flush to the old (pre-rename) path.
  - completeAsyncFlush (weedfs_async_flush.go): Check fh.isDeleted first
    — if true, skip the metadata flush (the file was unlinked; its chunks
    become orphans for volume.fsck). Otherwise, try GetPath for the
    current path (renames); fall back to the saved path if Forget dropped
    the mapping (the file is live, just evicted from the kernel cache).
* test/ci: address PR review nitpicks

  concurrent_operations_test.go:
  - Restore the precise totalSize assertion instead of info.Size() > 0

  writeback_cache_test.go:
  - Check rand.Read errors in all 3 locations (lines 310, 512, 757)
  - Check the os.MkdirAll error in the stress test (line 752)
  - Remove the dead verifyErrors variable (line 332)
  - Replace both time.Sleep(5s) calls with polling via waitForFileContent
    to avoid flaky tests under CI load (lines 638, 700)

  fuse-integration.yml:
  - Add set -o pipefail so go test failures propagate through tee

* ci: fix fuse3/fuse package conflict on the ubuntu-22.04 runner

  fuse3 is pre-installed on ubuntu-22.04 runners and conflicts with the
  legacy fuse package. Only install libfuse3-dev for the headers.

* mount/page_writer: remove debug println statements

  Remove leftover debug println("read new data1/2") from ReadDataAt in
  MemChunk and SwapFileChunk.

* test: fix findWeedBinary matching the source directory instead of the binary

  findWeedBinary() matched ../../weed (the source directory) via os.Stat
  before checking PATH, then tried to exec a directory, which fails with
  "permission denied" on the CI runner.

  Fix: Check PATH first (reliable in CI, where the binary is installed to
  /usr/local/bin). For relative paths, verify the candidate is a regular
  file (!info.IsDir()). Add ../../weed/weed as a candidate for in-tree
  builds.

* test: fix framework — dynamic ports, output capture, data dirs

  The integration test framework was failing in CI because:
  1. All tests used hardcoded ports (19333/18080/18888), so sequential
     tests could conflict when prior processes hadn't fully released
     their ports yet.
  2. Data subdirectories (data/master, data/volume) were not created
     before starting processes.
  3. Master was started with -peers=none, which is not a valid address.
  4. Process stdout/stderr was not captured, making failures opaque
     ("service not ready within timeout" with no diagnostics).
  5. The unmount fallback used 'umount' instead of 'fusermount -u'.
  6. The mount used -cacheSizeMB (nonexistent) instead of
     -cacheCapacityMB and was missing -allowOthers=false for
     unprivileged CI runners.

  Fixes:
  - Dynamic port allocation via freePort() (net.Listen ":0")
  - Explicit gRPC ports via -port.grpc to avoid default port conflicts
  - Create data/master and data/volume directories in Setup()
  - Remove the invalid -peers=none and -raftBootstrap flags
  - Capture process output to logDir/*.log via a startProcess() helper
  - dumpLog() prints the tail of the log file on service startup failure
  - Use fusermount3/fusermount -u for unmount
  - Fix mount flag names (-cacheCapacityMB, -allowOthers=false)

* test: remove explicit -port.grpc flags from the test framework

  SeaweedFS convention: gRPC port = HTTP port + 10000. Volume and filer
  discover the master gRPC port by this convention. Setting explicit
  -port.grpc on master/volume/filer broke inter-service communication
  because the volume server computed the master gRPC port as HTTP+10000
  but the actual gRPC port was different. Remove all -port.grpc flags and
  let the default convention work. Dynamic HTTP ports already ensure
  uniqueness; the derived gRPC ports (HTTP+10000) will also be unique.

---------

Co-authored-by: Copilot <copilot@github.com>
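The dynamic port allocation and derived-gRPC-port convention from the framework fix can be sketched as below. The freePort name comes from the commit message; the body is a conventional implementation and may differ from the framework's exact code.

```go
package main

import (
	"fmt"
	"net"
)

// freePort binds to ":0" so the kernel assigns an unused TCP port, reads
// the port back, and closes the listener so the service under test can
// bind it. (A small race remains between Close and the service's bind,
// which is why unique dynamic ports beat hardcoded ones in CI.)
func freePort() (int, error) {
	l, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		return 0, err
	}
	defer l.Close()
	return l.Addr().(*net.TCPAddr).Port, nil
}

func main() {
	httpPort, err := freePort()
	if err != nil {
		panic(err)
	}
	// SeaweedFS convention: gRPC port = HTTP port + 10000, so peers can
	// derive it without an explicit -port.grpc flag.
	grpcPort := httpPort + 10000
	fmt.Println(httpPort > 0, grpcPort-httpPort)
}
```

Because each HTTP port is unique, the derived gRPC ports are unique too, which is why dropping the explicit -port.grpc flags is safe.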
16 changed files with 1296 additions and 272 deletions
  207  .github/workflows/fuse-integration.yml
    4  test/fuse_integration/concurrent_operations_test.go
  156  test/fuse_integration/framework_test.go
  825  test/fuse_integration/writeback_cache_test.go
    5  weed/command/mount_std.go
   13  weed/mount/filehandle.go
   55  weed/mount/filehandle_map.go
    4  weed/mount/page_writer/page_chunk_mem.go
    4  weed/mount/page_writer/page_chunk_swapfile.go
   59  weed/mount/weedfs.go
   96  weed/mount/weedfs_async_flush.go
    9  weed/mount/weedfs_file_mkrm.go
   58  weed/mount/weedfs_file_sync.go
   60  weed/mount/weedfs_filehandle.go
    7  weed/mount/weedfs_forget.go
    6  weed/mount/weedfs_rename.go
@@ -0,0 +1,825 @@
package fuse_test

import (
	"bytes"
	"crypto/rand"
	"fmt"
	"os"
	"path/filepath"
	"sync"
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// writebackConfig returns a TestConfig with writebackCache enabled
func writebackConfig() *TestConfig {
	return &TestConfig{
		Collection:  "",
		Replication: "000",
		ChunkSizeMB: 2,
		CacheSizeMB: 100,
		NumVolumes:  3,
		EnableDebug: false,
		MountOptions: []string{
			"-writebackCache",
		},
		SkipCleanup: false,
	}
}

// waitForFileContent polls until a file has the expected content or timeout expires.
// This is needed because writebackCache defers data upload to background goroutines,
// so there is a brief window after close() where the file may not yet be readable.
func waitForFileContent(t *testing.T, path string, expected []byte, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	var lastErr error
	for time.Now().Before(deadline) {
		actual, err := os.ReadFile(path)
		if err == nil && bytes.Equal(expected, actual) {
			return
		}
		if err != nil {
			lastErr = err
		} else {
			lastErr = fmt.Errorf("content mismatch: got %d bytes, want %d bytes", len(actual), len(expected))
		}
		time.Sleep(200 * time.Millisecond)
	}
	t.Fatalf("file %s did not have expected content within %v: %v", path, timeout, lastErr)
}

// waitForFileSize polls until a file reports the expected size or timeout expires.
func waitForFileSize(t *testing.T, path string, expectedSize int64, timeout time.Duration) {
	t.Helper()
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		info, err := os.Stat(path)
		if err == nil && info.Size() == expectedSize {
			return
		}
		time.Sleep(200 * time.Millisecond)
	}
	t.Fatalf("file %s did not reach expected size %d within %v", path, expectedSize, timeout)
}

// TestWritebackCacheBasicOperations tests fundamental file I/O with writebackCache enabled
func TestWritebackCacheBasicOperations(t *testing.T) {
	config := writebackConfig()
	framework := NewFuseTestFramework(t, config)
	defer framework.Cleanup()

	require.NoError(t, framework.Setup(config))

	t.Run("WriteAndReadBack", func(t *testing.T) {
		testWritebackWriteAndReadBack(t, framework)
	})

	t.Run("MultipleFilesSequential", func(t *testing.T) {
		testWritebackMultipleFilesSequential(t, framework)
	})

	t.Run("LargeFile", func(t *testing.T) {
		testWritebackLargeFile(t, framework)
	})

	t.Run("EmptyFile", func(t *testing.T) {
		testWritebackEmptyFile(t, framework)
	})

	t.Run("OverwriteExistingFile", func(t *testing.T) {
		testWritebackOverwriteFile(t, framework)
	})
}

// testWritebackWriteAndReadBack writes a file and verifies it can be read back
// after the async flush completes.
func testWritebackWriteAndReadBack(t *testing.T, framework *FuseTestFramework) {
	filename := "writeback_basic.txt"
	content := []byte("Hello from writebackCache test!")
	mountPath := filepath.Join(framework.GetMountPoint(), filename)

	// Write file — close() returns immediately with async flush
	require.NoError(t, os.WriteFile(mountPath, content, 0644))

	// Wait for async flush to complete and verify content
	waitForFileContent(t, mountPath, content, 30*time.Second)
}

// testWritebackMultipleFilesSequential writes multiple files sequentially
// and verifies all are readable after async flushes complete.
func testWritebackMultipleFilesSequential(t *testing.T, framework *FuseTestFramework) {
	dir := "writeback_sequential"
	framework.CreateTestDir(dir)

	numFiles := 50
	files := make(map[string][]byte, numFiles)

	// Write files sequentially — each close() returns immediately
	for i := 0; i < numFiles; i++ {
		filename := fmt.Sprintf("file_%03d.txt", i)
		content := []byte(fmt.Sprintf("Sequential file %d content: %s", i, time.Now().Format(time.RFC3339Nano)))
		path := filepath.Join(framework.GetMountPoint(), dir, filename)
		require.NoError(t, os.WriteFile(path, content, 0644))
		files[filename] = content
	}

	// Verify all files after a brief wait for async flushes
	for filename, expectedContent := range files {
		path := filepath.Join(framework.GetMountPoint(), dir, filename)
		waitForFileContent(t, path, expectedContent, 30*time.Second)
	}
}

// testWritebackLargeFile writes a large file (multi-chunk) with writebackCache
func testWritebackLargeFile(t *testing.T, framework *FuseTestFramework) {
	filename := "writeback_large.bin"
	mountPath := filepath.Join(framework.GetMountPoint(), filename)

	// 8MB file (spans multiple 2MB chunks)
	content := make([]byte, 8*1024*1024)
	_, err := rand.Read(content)
	require.NoError(t, err)

	require.NoError(t, os.WriteFile(mountPath, content, 0644))

	// Wait for file to be fully flushed
	waitForFileContent(t, mountPath, content, 60*time.Second)
}

// testWritebackEmptyFile creates an empty file with writebackCache
func testWritebackEmptyFile(t *testing.T, framework *FuseTestFramework) {
	filename := "writeback_empty.txt"
	mountPath := filepath.Join(framework.GetMountPoint(), filename)

	// Create empty file
	f, err := os.Create(mountPath)
	require.NoError(t, err)
	require.NoError(t, f.Close())

	// Should exist and be empty
	info, err := os.Stat(mountPath)
	require.NoError(t, err)
	assert.Equal(t, int64(0), info.Size())
}

// testWritebackOverwriteFile tests overwriting an existing file
func testWritebackOverwriteFile(t *testing.T, framework *FuseTestFramework) {
	filename := "writeback_overwrite.txt"
	mountPath := filepath.Join(framework.GetMountPoint(), filename)

	// First write
	content1 := []byte("First version of the file")
	require.NoError(t, os.WriteFile(mountPath, content1, 0644))
	waitForFileContent(t, mountPath, content1, 30*time.Second)

	// Overwrite with different content
	content2 := []byte("Second version — overwritten content that is longer than the first")
	require.NoError(t, os.WriteFile(mountPath, content2, 0644))
	waitForFileContent(t, mountPath, content2, 30*time.Second)
}

// TestWritebackCacheFsync tests that fsync still forces synchronous flush
// even when writebackCache is enabled
func TestWritebackCacheFsync(t *testing.T) {
	config := writebackConfig()
	framework := NewFuseTestFramework(t, config)
	defer framework.Cleanup()

	require.NoError(t, framework.Setup(config))

	t.Run("FsyncForcesFlush", func(t *testing.T) {
		testFsyncForcesFlush(t, framework)
	})

	t.Run("FsyncThenRead", func(t *testing.T) {
		testFsyncThenRead(t, framework)
	})
}

// testFsyncForcesFlush verifies that calling fsync before close ensures
// data is immediately available for reading, bypassing the async path.
func testFsyncForcesFlush(t *testing.T, framework *FuseTestFramework) {
	filename := "writeback_fsync.txt"
	mountPath := filepath.Join(framework.GetMountPoint(), filename)

	content := []byte("Data that must be flushed synchronously via fsync")

	// Open, write, fsync, close
	f, err := os.OpenFile(mountPath, os.O_CREATE|os.O_WRONLY, 0644)
	require.NoError(t, err)

	_, err = f.Write(content)
	require.NoError(t, err)

	// fsync forces synchronous data+metadata flush
	require.NoError(t, f.Sync())
	require.NoError(t, f.Close())

	// Data should be immediately available — no wait needed
	actual, err := os.ReadFile(mountPath)
	require.NoError(t, err)
	assert.Equal(t, content, actual)
}

// testFsyncThenRead verifies that after fsync, a freshly opened read
// returns the correct data without any delay.
func testFsyncThenRead(t *testing.T, framework *FuseTestFramework) {
	filename := "writeback_fsync_read.txt"
	mountPath := filepath.Join(framework.GetMountPoint(), filename)

	content := make([]byte, 64*1024) // 64KB
	_, err := rand.Read(content)
	require.NoError(t, err)

	// Write with explicit fsync
	f, err := os.OpenFile(mountPath, os.O_CREATE|os.O_WRONLY, 0644)
	require.NoError(t, err)
	_, err = f.Write(content)
	require.NoError(t, err)
	require.NoError(t, f.Sync())
	require.NoError(t, f.Close())

	// Immediate read should succeed
	actual, err := os.ReadFile(mountPath)
	require.NoError(t, err)
	assert.Equal(t, content, actual)
}

// TestWritebackCacheConcurrentSmallFiles is the primary test for issue #8718:
// many small files written concurrently should all be eventually readable.
func TestWritebackCacheConcurrentSmallFiles(t *testing.T) {
	config := writebackConfig()
	framework := NewFuseTestFramework(t, config)
	defer framework.Cleanup()

	require.NoError(t, framework.Setup(config))

	t.Run("ConcurrentSmallFiles", func(t *testing.T) {
		testWritebackConcurrentSmallFiles(t, framework)
	})

	t.Run("ConcurrentSmallFilesMultiDir", func(t *testing.T) {
		testWritebackConcurrentSmallFilesMultiDir(t, framework)
	})

	t.Run("RapidCreateCloseSequence", func(t *testing.T) {
		testWritebackRapidCreateClose(t, framework)
	})
}

// testWritebackConcurrentSmallFiles simulates the rsync workload from #8718:
// multiple workers creating many small files in parallel.
func testWritebackConcurrentSmallFiles(t *testing.T, framework *FuseTestFramework) {
	dir := "writeback_concurrent_small"
	framework.CreateTestDir(dir)

	numWorkers := 8
	filesPerWorker := 20
	totalFiles := numWorkers * filesPerWorker

	type fileRecord struct {
		path    string
		content []byte
	}

	var mu sync.Mutex
	var writeErrors []error
	records := make([]fileRecord, 0, totalFiles)

	// Phase 1: Write files concurrently (simulating rsync workers)
	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func(workerID int) {
			defer wg.Done()

			for f := 0; f < filesPerWorker; f++ {
				filename := fmt.Sprintf("w%02d_f%03d.dat", workerID, f)
				path := filepath.Join(framework.GetMountPoint(), dir, filename)

				// Vary sizes: 100B to 100KB
				size := 100 + (workerID*filesPerWorker+f)*500
				if size > 100*1024 {
					size = 100 * 1024
				}
				content := make([]byte, size)
				if _, err := rand.Read(content); err != nil {
					mu.Lock()
					writeErrors = append(writeErrors, fmt.Errorf("worker %d file %d rand: %v", workerID, f, err))
					mu.Unlock()
					return
				}

				if err := os.WriteFile(path, content, 0644); err != nil {
					mu.Lock()
					writeErrors = append(writeErrors, fmt.Errorf("worker %d file %d: %v", workerID, f, err))
					mu.Unlock()
					return
				}

				mu.Lock()
				records = append(records, fileRecord{path: path, content: content})
				mu.Unlock()
			}
		}(w)
	}
	wg.Wait()

	require.Empty(t, writeErrors, "write errors: %v", writeErrors)
	assert.Equal(t, totalFiles, len(records))

	// Phase 2: Wait for async flushes and verify all files
	for _, rec := range records {
		waitForFileContent(t, rec.path, rec.content, 60*time.Second)
	}

	// Phase 3: Verify directory listing has correct count
	entries, err := os.ReadDir(filepath.Join(framework.GetMountPoint(), dir))
	require.NoError(t, err)
	assert.Equal(t, totalFiles, len(entries))
}

// testWritebackConcurrentSmallFilesMultiDir tests concurrent writes across
// multiple directories — a common pattern for parallel copy tools.
func testWritebackConcurrentSmallFilesMultiDir(t *testing.T, framework *FuseTestFramework) {
	baseDir := "writeback_multidir"
	framework.CreateTestDir(baseDir)

	numDirs := 4
	filesPerDir := 25

	type fileRecord struct {
		path    string
		content []byte
	}
	var mu sync.Mutex
	var records []fileRecord
	var writeErrors []error

	var wg sync.WaitGroup
	for d := 0; d < numDirs; d++ {
		subDir := filepath.Join(baseDir, fmt.Sprintf("dir_%02d", d))
		framework.CreateTestDir(subDir)

		wg.Add(1)
		go func(dirID int, dirPath string) {
			defer wg.Done()

			for f := 0; f < filesPerDir; f++ {
				filename := fmt.Sprintf("file_%03d.txt", f)
				path := filepath.Join(framework.GetMountPoint(), dirPath, filename)
				content := []byte(fmt.Sprintf("dir=%d file=%d data=%s", dirID, f, time.Now().Format(time.RFC3339Nano)))

				if err := os.WriteFile(path, content, 0644); err != nil {
					mu.Lock()
					writeErrors = append(writeErrors, fmt.Errorf("dir %d file %d: %v", dirID, f, err))
					mu.Unlock()
					return
				}

				mu.Lock()
				records = append(records, fileRecord{path: path, content: content})
				mu.Unlock()
			}
		}(d, subDir)
	}
	wg.Wait()

	require.Empty(t, writeErrors, "write errors: %v", writeErrors)

	// Verify all files
	for _, rec := range records {
		waitForFileContent(t, rec.path, rec.content, 60*time.Second)
	}
}

// testWritebackRapidCreateClose rapidly creates and closes files to stress
// the async flush goroutine pool.
func testWritebackRapidCreateClose(t *testing.T, framework *FuseTestFramework) {
	dir := "writeback_rapid"
	framework.CreateTestDir(dir)

	numFiles := 200
	type fileRecord struct {
		path    string
		content []byte
	}
	records := make([]fileRecord, numFiles)

	// Rapidly create files without pausing
	for i := 0; i < numFiles; i++ {
		path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("rapid_%04d.bin", i))
		content := []byte(fmt.Sprintf("rapid-file-%d", i))
		require.NoError(t, os.WriteFile(path, content, 0644))
		records[i] = fileRecord{path: path, content: content}
	}

	// Verify all files eventually appear with correct content
	for _, rec := range records {
		waitForFileContent(t, rec.path, rec.content, 60*time.Second)
	}
}

// TestWritebackCacheDataIntegrity tests that data integrity is preserved
// across various write patterns with writebackCache enabled.
func TestWritebackCacheDataIntegrity(t *testing.T) {
	config := writebackConfig()
	framework := NewFuseTestFramework(t, config)
	defer framework.Cleanup()

	require.NoError(t, framework.Setup(config))

	t.Run("AppendAfterClose", func(t *testing.T) {
		testWritebackAppendAfterClose(t, framework)
	})

	t.Run("PartialWrites", func(t *testing.T) {
		testWritebackPartialWrites(t, framework)
	})

	t.Run("FileSizeCorrectness", func(t *testing.T) {
		testWritebackFileSizeCorrectness(t, framework)
	})

	t.Run("BinaryData", func(t *testing.T) {
		testWritebackBinaryData(t, framework)
	})
}

// testWritebackAppendAfterClose writes a file, closes it (triggering async flush),
// waits for flush, then reopens and appends more data.
func testWritebackAppendAfterClose(t *testing.T, framework *FuseTestFramework) {
	filename := "writeback_append.txt"
	mountPath := filepath.Join(framework.GetMountPoint(), filename)

	// First write
	part1 := []byte("First part of the data.\n")
	require.NoError(t, os.WriteFile(mountPath, part1, 0644))

	// Wait for first async flush
	waitForFileContent(t, mountPath, part1, 30*time.Second)

	// Append more data
	part2 := []byte("Second part appended.\n")
	f, err := os.OpenFile(mountPath, os.O_APPEND|os.O_WRONLY, 0644)
	require.NoError(t, err)
	_, err = f.Write(part2)
	require.NoError(t, err)
	require.NoError(t, f.Close())

	// Verify combined content
	expected := append(part1, part2...)
	waitForFileContent(t, mountPath, expected, 30*time.Second)
}

// testWritebackPartialWrites tests writing to specific offsets within a file
func testWritebackPartialWrites(t *testing.T, framework *FuseTestFramework) {
	filename := "writeback_partial.bin"
	mountPath := filepath.Join(framework.GetMountPoint(), filename)

	// Create file with initial content
	initial := bytes.Repeat([]byte("A"), 4096)
	require.NoError(t, os.WriteFile(mountPath, initial, 0644))
	waitForFileContent(t, mountPath, initial, 30*time.Second)

	// Open and write at specific offset
	f, err := os.OpenFile(mountPath, os.O_WRONLY, 0644)
	require.NoError(t, err)
	patch := []byte("PATCHED")
	_, err = f.WriteAt(patch, 100)
	require.NoError(t, err)
	require.NoError(t, f.Close())

	// Build expected content
	expected := make([]byte, 4096)
	copy(expected, initial)
	copy(expected[100:], patch)

	waitForFileContent(t, mountPath, expected, 30*time.Second)
}

// testWritebackFileSizeCorrectness verifies that file sizes are correct
// after async flush completes.
func testWritebackFileSizeCorrectness(t *testing.T, framework *FuseTestFramework) {
	sizes := []int{0, 1, 100, 4096, 65536, 1024 * 1024}

	for _, size := range sizes {
		filename := fmt.Sprintf("writeback_size_%d.bin", size)
		mountPath := filepath.Join(framework.GetMountPoint(), filename)

		content := make([]byte, size)
		if size > 0 {
			_, err := rand.Read(content)
			require.NoError(t, err, "rand.Read failed for size %d", size)
		}

		require.NoError(t, os.WriteFile(mountPath, content, 0644), "failed to write file of size %d", size)

		if size > 0 {
			waitForFileSize(t, mountPath, int64(size), 30*time.Second)
			waitForFileContent(t, mountPath, content, 30*time.Second)
		}
	}
}

// testWritebackBinaryData verifies that arbitrary binary data (including null bytes)
// is preserved correctly through the async flush path.
func testWritebackBinaryData(t *testing.T, framework *FuseTestFramework) {
	filename := "writeback_binary.bin"
	mountPath := filepath.Join(framework.GetMountPoint(), filename)

	// Generate data with all byte values including nulls
	content := make([]byte, 256*100)
	for i := range content {
		content[i] = byte(i % 256)
	}

	require.NoError(t, os.WriteFile(mountPath, content, 0644))
	waitForFileContent(t, mountPath, content, 30*time.Second)
}

// TestWritebackCachePerformance measures whether writebackCache actually
// improves throughput for small file workloads compared to synchronous flush.
func TestWritebackCachePerformance(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping performance test in short mode")
	}

	numFiles := 200
	fileSize := 4096 // 4KB files

	// Generate test data upfront
	testData := make([][]byte, numFiles)
	for i := range testData {
		testData[i] = make([]byte, fileSize)
		_, err := rand.Read(testData[i])
		require.NoError(t, err)
	}

	// Benchmark with writebackCache enabled
	t.Run("WithWritebackCache", func(t *testing.T) {
		config := writebackConfig()
		framework := NewFuseTestFramework(t, config)
		defer framework.Cleanup()
		require.NoError(t, framework.Setup(config))

		dir := "perf_writeback"
		framework.CreateTestDir(dir)

		start := time.Now()
		for i := 0; i < numFiles; i++ {
			path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("file_%04d.bin", i))
			require.NoError(t, os.WriteFile(path, testData[i], 0644))
		}
		writebackDuration := time.Since(start)

		// Wait for all files to be flushed
		for i := 0; i < numFiles; i++ {
			path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("file_%04d.bin", i))
			waitForFileContent(t, path, testData[i], 60*time.Second)
		}

		t.Logf("writebackCache: wrote %d files in %v (%.0f files/sec)",
			numFiles, writebackDuration, float64(numFiles)/writebackDuration.Seconds())
	})

	// Benchmark without writebackCache (synchronous flush)
	t.Run("WithoutWritebackCache", func(t *testing.T) {
		config := DefaultTestConfig()
		framework := NewFuseTestFramework(t, config)
		defer framework.Cleanup()
		require.NoError(t, framework.Setup(config))

		dir := "perf_sync"
		framework.CreateTestDir(dir)

		start := time.Now()
		for i := 0; i < numFiles; i++ {
			path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("file_%04d.bin", i))
			require.NoError(t, os.WriteFile(path, testData[i], 0644))
		}
		syncDuration := time.Since(start)

		t.Logf("synchronous: wrote %d files in %v (%.0f files/sec)",
			numFiles, syncDuration, float64(numFiles)/syncDuration.Seconds())
	})
}

// TestWritebackCacheConcurrentMixedOps tests a mix of operations happening
// concurrently with writebackCache: creates, reads, and overwrites.
func TestWritebackCacheConcurrentMixedOps(t *testing.T) {
	config := writebackConfig()
	framework := NewFuseTestFramework(t, config)
	defer framework.Cleanup()

	require.NoError(t, framework.Setup(config))

	dir := "writeback_mixed"
	framework.CreateTestDir(dir)

	numFiles := 50
	var mu sync.Mutex
	var errors []error
	var completedWrites int64

	addError := func(err error) {
		mu.Lock()
		defer mu.Unlock()
		errors = append(errors, err)
	}

	// Phase 1: Create initial files and wait for async flushes
	initialContents := make(map[int][]byte, numFiles)
	for i := 0; i < numFiles; i++ {
		path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("file_%03d.txt", i))
		content := []byte(fmt.Sprintf("initial content %d", i))
		require.NoError(t, os.WriteFile(path, content, 0644))
		initialContents[i] = content
	}

	// Poll until initial files are flushed (instead of fixed sleep)
	for i := 0; i < numFiles; i++ {
		path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("file_%03d.txt", i))
		waitForFileContent(t, path, initialContents[i], 30*time.Second)
	}

	// Phase 2: Concurrent mixed operations
	var wg sync.WaitGroup

	// Writers: overwrite existing files
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(workerID int) {
			defer wg.Done()
			for j := 0; j < numFiles; j++ {
				if j%4 != workerID {
					continue // each worker handles a subset
				}
				path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("file_%03d.txt", j))
				content := []byte(fmt.Sprintf("overwritten by worker %d at %s", workerID, time.Now().Format(time.RFC3339Nano)))
				if err := os.WriteFile(path, content, 0644); err != nil {
					addError(fmt.Errorf("writer %d file %d: %v", workerID, j, err))
					return
				}
				atomic.AddInt64(&completedWrites, 1)
			}
		}(i)
	}

	// Readers: read files (may see old or new content, but should not error)
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func(readerID int) {
			defer wg.Done()
			for j := 0; j < numFiles; j++ {
				path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("file_%03d.txt", j))
				_, err := os.ReadFile(path)
				if err != nil && !os.IsNotExist(err) {
					addError(fmt.Errorf("reader %d file %d: %v", readerID, j, err))
					return
				}
			}
		}(i)
	}

	// New file creators
	wg.Add(1)
	go func() {
		defer wg.Done()
		for i := 0; i < 20; i++ {
			path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("new_file_%03d.txt", i))
			content := []byte(fmt.Sprintf("new file %d", i))
			if err := os.WriteFile(path, content, 0644); err != nil {
				addError(fmt.Errorf("creator file %d: %v", i, err))
				return
			}
		}
	}()

	wg.Wait()

	require.Empty(t, errors, "mixed operation errors: %v", errors)
	assert.True(t, atomic.LoadInt64(&completedWrites) > 0, "should have completed some writes")

	// Verify new files exist after async flushes complete (poll instead of fixed sleep)
	for i := 0; i < 20; i++ {
		path := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("new_file_%03d.txt", i))
		expected := []byte(fmt.Sprintf("new file %d", i))
		waitForFileContent(t, path, expected, 30*time.Second)
	}
}

// TestWritebackCacheStressSmallFiles is a focused stress test for the
// async flush path with many small files — the core scenario from #8718.
func TestWritebackCacheStressSmallFiles(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping stress test in short mode")
	}

	config := writebackConfig()
	framework := NewFuseTestFramework(t, config)
	defer framework.Cleanup()

	require.NoError(t, framework.Setup(config))

	dir := "writeback_stress"
	framework.CreateTestDir(dir)

	numWorkers := 16
	filesPerWorker := 100
	totalFiles := numWorkers * filesPerWorker

	type fileRecord struct {
		path    string
		content []byte
	}

	var mu sync.Mutex
	var writeErrors []error
	records := make([]fileRecord, 0, totalFiles)

	start := time.Now()

	// Simulate rsync-like workload: many workers each writing small files
	var wg sync.WaitGroup
	for w := 0; w < numWorkers; w++ {
		wg.Add(1)
		go func(workerID int) {
			defer wg.Done()
			for f := 0; f < filesPerWorker; f++ {
				filename := fmt.Sprintf("w%02d/f%04d.dat", workerID, f)
				path := filepath.Join(framework.GetMountPoint(), dir, filename)

				// Ensure subdirectory exists
				if f == 0 {
					subDir := filepath.Join(framework.GetMountPoint(), dir, fmt.Sprintf("w%02d", workerID))
					if err := os.MkdirAll(subDir, 0755); err != nil {
						mu.Lock()
						writeErrors = append(writeErrors, fmt.Errorf("worker %d mkdir: %v", workerID, err))
						mu.Unlock()
						return
					}
				}

				// Small file: 1KB-10KB (typical for rsync of config/source files)
				size := 1024 + (f%10)*1024
				content := make([]byte, size)
				if _, err := rand.Read(content); err != nil {
					mu.Lock()
					writeErrors = append(writeErrors, fmt.Errorf("worker %d file %d rand: %v", workerID, f, err))
					mu.Unlock()
					return
				}

				if err := os.WriteFile(path, content, 0644); err != nil {
					mu.Lock()
					writeErrors = append(writeErrors, fmt.Errorf("worker %d file %d: %v", workerID, f, err))
					mu.Unlock()
					return
				}

				mu.Lock()
				records = append(records, fileRecord{path: path, content: content})
				mu.Unlock()
			}
		}(w)
	}
	wg.Wait()

	writeDuration := time.Since(start)
	t.Logf("wrote %d files in %v (%.0f files/sec)",
		totalFiles, writeDuration, float64(totalFiles)/writeDuration.Seconds())

	require.Empty(t, writeErrors, "write errors: %v", writeErrors)
	assert.Equal(t, totalFiles, len(records))

	// Verify all files are eventually readable with correct content
	var verifyErrors []error
	for _, rec := range records {
		deadline := time.Now().Add(120 * time.Second)
		var lastErr error
		for time.Now().Before(deadline) {
			actual, err := os.ReadFile(rec.path)
			if err == nil && bytes.Equal(rec.content, actual) {
				lastErr = nil
				break
			}
			if err != nil {
				lastErr = err
			} else {
				lastErr = fmt.Errorf("content mismatch for %s: got %d bytes, want %d", rec.path, len(actual), len(rec.content))
			}
			time.Sleep(500 * time.Millisecond)
		}
		if lastErr != nil {
			verifyErrors = append(verifyErrors, lastErr)
		}
	}
	require.Empty(t, verifyErrors, "verification errors after stress test: %v", verifyErrors)

	t.Logf("all %d files verified successfully", totalFiles)
}

@@ -0,0 +1,96 @@
package mount

import (
	"time"

	"github.com/seaweedfs/go-fuse/v2/fuse"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/util"
)

const asyncFlushMetadataRetries = 3

// completeAsyncFlush is called in a background goroutine when a file handle
// with pending async flush work is released. It performs the deferred data
// upload and metadata flush that was skipped in doFlush() for writebackCache mode.
//
// This enables close() to return immediately for small file workloads (e.g., rsync),
// while the actual I/O happens concurrently in the background.
//
// The caller (submitAsyncFlush) owns asyncFlushWg and the per-inode done channel.
func (wfs *WFS) completeAsyncFlush(fh *FileHandle) {
	// Phase 1: Flush dirty pages — seals writable chunks, uploads to volume servers, and waits.
	// The underlying UploadWithRetry already retries transient HTTP/gRPC errors internally,
	// so a failure here indicates a persistent issue; the chunk data has been freed.
	if err := fh.dirtyPages.FlushData(); err != nil {
		glog.Errorf("completeAsyncFlush inode %d: data flush failed: %v", fh.inode, err)
		// Data is lost at this point (chunks freed after internal retry exhaustion).
		// Proceed to cleanup to avoid resource leaks and unmount hangs.
	} else if fh.dirtyMetadata {
		// Phase 2: Flush metadata unless the file was explicitly unlinked.
		//
		// isDeleted is set by the Unlink handler when it finds a draining
		// handle. In that case the filer entry is already gone and
		// flushing would recreate it. The uploaded chunks become orphans
		// and are cleaned up by volume.fsck.
		if fh.isDeleted {
			glog.V(3).Infof("completeAsyncFlush inode %d: file was unlinked, skipping metadata flush", fh.inode)
		} else {
			// Resolve the current path for metadata flush.
			//
			// Try GetPath first — it reflects any rename that happened
			// after close(). If the inode mapping is gone (Forget
			// dropped it after the kernel's lookup count hit zero), fall
			// back to the dir/name saved at doFlush time. Rename also
			// updates the saved path, so the fallback is always current.
			//
			// Forget does NOT mean the file was deleted — it only means
			// the kernel evicted its cache entry.
			dir, name := fh.asyncFlushDir, fh.asyncFlushName
			fileFullPath := util.FullPath(dir).Child(name)

			if resolvedPath, status := wfs.inodeToPath.GetPath(fh.inode); status == fuse.OK {
				dir, name = resolvedPath.DirAndName()
				fileFullPath = resolvedPath
			}

			wfs.flushMetadataWithRetry(fh, dir, name, fileFullPath)
		}
	}

	glog.V(3).Infof("completeAsyncFlush done inode %d fh %d", fh.inode, fh.fh)

	// Phase 3: Destroy the upload pipeline and free resources.
	fh.ReleaseHandle()
}

// flushMetadataWithRetry attempts to flush file metadata to the filer, retrying
// with exponential backoff on transient errors. The chunk data is already on the
// volume servers at this point; only the filer metadata reference needs persisting.
func (wfs *WFS) flushMetadataWithRetry(fh *FileHandle, dir, name string, fileFullPath util.FullPath) {
	for attempt := 0; attempt <= asyncFlushMetadataRetries; attempt++ {
		if attempt > 0 {
			backoff := time.Duration(1<<uint(attempt-1)) * time.Second
			glog.Warningf("completeAsyncFlush %s: retrying metadata flush (attempt %d/%d) after %v",
				fileFullPath, attempt+1, asyncFlushMetadataRetries+1, backoff)
			time.Sleep(backoff)
		}

		if err := wfs.flushMetadataToFiler(fh, dir, name, fh.asyncFlushUid, fh.asyncFlushGid); err != nil {
			if attempt == asyncFlushMetadataRetries {
				glog.Errorf("completeAsyncFlush %s: metadata flush failed after %d attempts: %v — "+
					"chunks are uploaded but NOT referenced in filer metadata; "+
					"they will appear as orphans in volume.fsck",
					fileFullPath, asyncFlushMetadataRetries+1, err)
			}
			continue
		}
		return // success
	}
}

// WaitForAsyncFlush waits for all pending background flush goroutines to complete.
// Called before unmount cleanup to ensure no data is lost.
func (wfs *WFS) WaitForAsyncFlush() {
	wfs.asyncFlushWg.Wait()
}