Bruce Zou
4bc2b6d62b
fix nfs list with prefix batch scan ( #7694 )
* fix nfs list with prefix batch scan
* remove else branch
1 week ago
Chen Pu
40eee23be9
mount: fix weed inode nlookup not equal to kernel inode nlookup ( #7682 )
* mount: fix weed inode nlookup not equal to kernel inode nlookup
* mount: add underflow protection for nlookup decrement in Forget
* mount: use consistent == 0 check for uint64 nlookup
* Update weed/mount/inode_to_path.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* mount: snapshot data before unlock in Forget to avoid using deleted InodeEntry
---------
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
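A minimal Go sketch of the Forget-side bookkeeping this commit describes (underflow-safe decrement, `== 0` check for the uint64 counter, and snapshotting data before unlock). The type and field names (InodeToPath, InodeEntry, nlookup) follow the commit wording but are illustrative, not the exact SeaweedFS definitions:
```go
package main

import "sync"

type InodeEntry struct {
	paths   []string
	nlookup uint64
}

type InodeToPath struct {
	sync.RWMutex
	inode2path map[uint64]*InodeEntry
}

// Forget decrements the kernel lookup count and removes the entry once it
// reaches zero, guarding against uint64 underflow and copying out any data
// that is still needed after the lock is released.
func (i *InodeToPath) Forget(inode, nlookup uint64) (forgotten []string) {
	i.Lock()
	entry, found := i.inode2path[inode]
	if !found {
		i.Unlock()
		return nil
	}
	if entry.nlookup > nlookup {
		entry.nlookup -= nlookup
	} else {
		entry.nlookup = 0 // underflow protection
	}
	if entry.nlookup == 0 { // consistent == 0 check for the unsigned counter
		// snapshot data before unlocking so the deleted entry is never reused
		forgotten = append([]string(nil), entry.paths...)
		delete(i.inode2path, inode)
	}
	i.Unlock()
	return forgotten
}

func main() {}
```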
1 week ago
Chris Lu
9196696278
mount: add mutex to DirectoryHandle to fix race condition ( #7674 )
* mount: add mutex to DirectoryHandle to fix race condition
When using Ganesha NFS on top of FUSE mount, ls operations would hang
forever on directories with hundreds of files. This was caused by a
race condition in DirectoryHandle where multiple concurrent readdir
operations could modify shared state (entryStream, entryStreamOffset,
isFinished) without synchronization.
The fix adds a mutex to DirectoryHandle and holds it for the entire
duration of doReadDirectory. This serializes concurrent readdir calls
on the same handle, which is the correct behavior for a directory
handle and fixes the race condition.
Key changes:
- Added sync.Mutex to DirectoryHandle struct
- Lock the mutex at the start of doReadDirectory
- This ensures thread-safe access to entryStream and other state
The lock is per-handle (not global), so different directories can
still be listed concurrently. Only concurrent operations on the
same directory handle are serialized.
Fixes: https://github.com/seaweedfs/seaweedfs/issues/7672
* mount: add mutex to DirectoryHandle to fix race condition
Same rationale as the first commit above; additional key change:
- Optimized reset() to reuse slice capacity and allow GC of old entries
Fixes: https://github.com/seaweedfs/seaweedfs/issues/7672
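A minimal Go sketch of the per-handle locking described in this commit. The field names (entryStream, entryStreamOffset, isFinished) follow the commit message; the Entry type, reset logic, and doReadDirectory body are illustrative assumptions, not the exact SeaweedFS code:
```go
package main

import "sync"

type Entry struct{ Name string }

type DirectoryHandle struct {
	mu                sync.Mutex // serializes readdir calls on this handle only
	entryStream       []*Entry
	entryStreamOffset uint64
	isFinished        bool
}

// doReadDirectory holds the handle's mutex for its whole duration, so
// concurrent readdir calls on the same handle cannot interleave, while
// handles for different directories still list concurrently.
func (dh *DirectoryHandle) doReadDirectory(offset uint64) []*Entry {
	dh.mu.Lock()
	defer dh.mu.Unlock()

	if offset == 0 {
		dh.reset()
	}
	// ... fill dh.entryStream from the filer and return entries at offset ...
	return dh.entryStream
}

// reset reuses the slice capacity but nils out old pointers so they can be GCed.
func (dh *DirectoryHandle) reset() {
	for i := range dh.entryStream {
		dh.entryStream[i] = nil
	}
	dh.entryStream = dh.entryStream[:0]
	dh.entryStreamOffset = 0
	dh.isFinished = false
}

func main() {}
```
The key point is that the lock is scoped to one handle, so listings of different directories never contend with each other.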
1 week ago
tam-i13
b669607fcd
Add error list each entry func ( #7485 )
* added error return in type ListEachEntryFunc
* return error if errClose
* fix fmt.Errorf
* fix return errClose
* use %w fmt.Errorf
* added entry name to the error message
* add callbackErr in ListDirectoryEntries
* fix error
* add log
* clear err when the scanner stops on io.EOF, so returning err doesn’t surface EOF as a failure.
* more info in error
* add ctx to logs, error handling
* fix return eachEntryFunc
* fix
* fix log
* fix return
* fix foundationdb tests
* fix eachEntryFunc
* fix return resEachEntryFuncErr
* Update weed/filer/filer.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/elastic/v7/elastic_store.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/hbase/hbase_store.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/foundationdb/foundationdb_store.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* Update weed/filer/ydb/ydb_store.go
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
* fix
* add scanErr
---------
Co-authored-by: Roman Tamarov <r.tamarov@kryptonite.ru>
Co-authored-by: Chris Lu <chrislusf@users.noreply.github.com>
Co-authored-by: chrislu <chris.lu@gmail.com>
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
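A minimal Go sketch of the callback change described above: the per-entry callback now returns an error so a failure stops the listing and propagates to the caller, wrapped with %w. The names ListEachEntryFunc and ListDirectoryEntries follow the commit wording; the Entry type and surrounding logic are illustrative assumptions:
```go
package main

import "fmt"

type Entry struct{ Name string }

// ListEachEntryFunc returns an error instead of a bare bool, so a failing
// callback aborts the listing and the failure reaches the caller.
type ListEachEntryFunc func(entry *Entry) error

// ListDirectoryEntries invokes the callback for each entry and wraps any
// callback error with %w, including the entry name for context.
func ListDirectoryEntries(entries []*Entry, eachEntryFunc ListEachEntryFunc) error {
	for _, entry := range entries {
		if err := eachEntryFunc(entry); err != nil {
			return fmt.Errorf("list callback failed on entry %q: %w", entry.Name, err)
		}
	}
	// a real store would also clear io.EOF from its scanner here, so a normal
	// end-of-stream is not surfaced to the caller as a failure
	return nil
}

func main() {
	err := ListDirectoryEntries([]*Entry{{Name: "a"}, {Name: "b"}}, func(e *Entry) error {
		fmt.Println("visit", e.Name)
		return nil
	})
	fmt.Println("err:", err)
}
```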
3 weeks ago
chrislu
a9c9e1bcb3
refactor
1 year ago
chrislu
e86b0bcaaa
simplify
1 year ago
chrislu
57dc39c451
randomizing next file handle id
1 year ago
Chris Lu
952afd810c
Fix deadlock ( #5815 )
* reduce locks to avoid deadlock
Flush -> FlushData -> uploadPipeline.FlushAll
uploaderCount>0
goroutine 1 [sync.Cond.Wait, 71 minutes]:
sync.runtime_notifyListWait(0xc0007ae4d0, 0x0)
/usr/local/go/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0xc001a59290?)
/usr/local/go/src/sync/cond.go:70 +0x85
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*UploadPipeline).waitForCurrentWritersToComplete(0xc0002ee4d0)
/github/workspace/weed/mount/page_writer/upload_pipeline_lock.go:58 +0x32
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*UploadPipeline).FlushAll(0xc0002ee4d0)
/github/workspace/weed/mount/page_writer/upload_pipeline.go:151 +0x25
github.com/seaweedfs/seaweedfs/weed/mount.(*ChunkedDirtyPages).FlushData(0xc00087e840)
/github/workspace/weed/mount/dirty_pages_chunked.go:54 +0x29
github.com/seaweedfs/seaweedfs/weed/mount.(*PageWriter).FlushData(...)
/github/workspace/weed/mount/page_writer.go:50
github.com/seaweedfs/seaweedfs/weed/mount.(*WFS).doFlush(0xc0006ad600, 0xc00030d380, 0x0, 0x0)
/github/workspace/weed/mount/weedfs_file_sync.go:101 +0x169
github.com/seaweedfs/seaweedfs/weed/mount.(*WFS).Flush(0xc0006ad600, 0xc001a594a8?, 0xc0004c1ca0)
/github/workspace/weed/mount/weedfs_file_sync.go:59 +0x48
github.com/hanwen/go-fuse/v2/fuse.doFlush(0xc0000da870?, 0xc0004c1b08)
SaveContent -> MemChunk.RLock ->
ChunkedDirtyPages.saveChunkedFileIntervalToStorage
pages.fh.AddChunks([]*filer_pb.FileChunk{chunk})
fh.entryLock.Lock()
sync.(*RWMutex).Lock(0x0?)
/usr/local/go/src/sync/rwmutex.go:146 +0x31
github.com/seaweedfs/seaweedfs/weed/mount.(*FileHandle).AddChunks(0xc00030d380, {0xc00028bdc8, 0x1, 0x1})
/github/workspace/weed/mount/filehandle.go:93 +0x45
github.com/seaweedfs/seaweedfs/weed/mount.(*ChunkedDirtyPages).saveChunkedFileIntervalToStorage(0xc00087e840, {0x2be7ac0, 0xc00018d9e0}, 0x0, 0x121, 0x17e3c624565ace45, 0x1?)
/github/workspace/weed/mount/dirty_pages_chunked.go:80 +0x2d4
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*MemChunk).SaveContent(0xc0008d9130, 0xc0008093e0)
/github/workspace/weed/mount/page_writer/page_chunk_mem.go:115 +0x112
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*UploadPipeline).moveToSealed.func1()
/github/workspace/weed/mount/page_writer/upload_pipeline.go:187 +0x55
github.com/seaweedfs/seaweedfs/weed/util.(*LimitedConcurrentExecutor).Execute.func1()
/github/workspace/weed/util/limited_executor.go:38 +0x62
created by github.com/seaweedfs/seaweedfs/weed/util.(*LimitedConcurrentExecutor).Execute in goroutine 1
/github/workspace/weed/util/limited_executor.go:33 +0x97
On metadata update
fh.entryLock.Lock()
fh.dirtyPages.Destroy()
up.chunksLock.Lock => each sealed chunk.FreeReference => MemChunk.Lock
goroutine 134 [sync.RWMutex.Lock, 71 minutes]:
sync.runtime_SemacquireRWMutex(0xc0007c3558?, 0xea?, 0x3fb0800?)
/usr/local/go/src/runtime/sema.go:87 +0x25
sync.(*RWMutex).Lock(0xc0007c35a8?)
/usr/local/go/src/sync/rwmutex.go:151 +0x6a
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*MemChunk).FreeResource(0xc0008d9130)
/github/workspace/weed/mount/page_writer/page_chunk_mem.go:38 +0x2a
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*SealedChunk).FreeReference(0xc00071cdb0, {0xc0006ba1a0, 0x20})
/github/workspace/weed/mount/page_writer/upload_pipeline.go:38 +0xb7
github.com/seaweedfs/seaweedfs/weed/mount/page_writer.(*UploadPipeline).Shutdown(0xc0002ee4d0)
/github/workspace/weed/mount/page_writer/upload_pipeline.go:220 +0x185
github.com/seaweedfs/seaweedfs/weed/mount.(*ChunkedDirtyPages).Destroy(0xc0008cea40?)
/github/workspace/weed/mount/dirty_pages_chunked.go:87 +0x17
github.com/seaweedfs/seaweedfs/weed/mount.(*PageWriter).Destroy(...)
/github/workspace/weed/mount/page_writer.go:78
github.com/seaweedfs/seaweedfs/weed/mount.NewSeaweedFileSystem.func3({0xc00069a6c0, 0x30}, 0x6?)
/github/workspace/weed/mount/weedfs.go:119 +0x17a
github.com/seaweedfs/seaweedfs/weed/mount/meta_cache.NewMetaCache.func1({0xc00069a6c0?, 0xc00069a480?}, 0x4015b40?)
/github/workspace/weed/mount/meta_cache/meta_cache.go:37 +0x1c
github.com/seaweedfs/seaweedfs/weed/mount/meta_cache.SubscribeMetaEvents.func1(0xc000661810)
/github/workspace/weed/mount/meta_cache/meta_cache_subscribe.go:43 +0x570
* use locked entry everywhere
* modifiable remote entry
* skip locking after getting lock from fhLockTable
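An illustrative Go sketch of the lock-ordering problem shown in the stack traces above and of the "reduce locks" direction of the fix. The fileHandle type and method names here are hypothetical stand-ins, not the actual SeaweedFS code:
```go
package main

import (
	"fmt"
	"sync"
)

type fileHandle struct {
	entryLock sync.Mutex
	chunks    []string
	uploads   sync.WaitGroup
}

// addChunk is what an upload goroutine calls once a chunk has been saved;
// it needs the entry lock to record the new chunk.
func (fh *fileHandle) addChunk(chunk string) {
	fh.entryLock.Lock()
	defer fh.entryLock.Unlock()
	fh.chunks = append(fh.chunks, chunk)
}

// flushDeadlocks shows the broken ordering: waiting for uploads while holding
// the very lock the upload goroutine needs. (Never call this; it hangs.)
func (fh *fileHandle) flushDeadlocks() {
	fh.entryLock.Lock()
	defer fh.entryLock.Unlock()
	fh.uploads.Wait() // uploader is blocked in addChunk -> deadlock
}

// flushFixed waits for uploads first, then takes the entry lock only for the
// short metadata update, mirroring the "reduce locks" fix.
func (fh *fileHandle) flushFixed() {
	fh.uploads.Wait()
	fh.entryLock.Lock()
	defer fh.entryLock.Unlock()
	fmt.Println("flushed", len(fh.chunks), "chunks")
}

func main() {
	fh := &fileHandle{}
	fh.uploads.Add(1)
	go func() {
		defer fh.uploads.Done()
		fh.addChunk("chunk-1")
	}()
	fh.flushFixed()
}
```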
1 year ago
chrislu
94bc9afd9d
refactor: moved to locked entry
3 years ago
chrislu
26dbc6c905
move to https://github.com/seaweedfs/seaweedfs
3 years ago
chrislu
5b8b022985
remove unused parameter
4 years ago
Robert Coelho
0e6e72d462
mount: ReadDir return EIO on EnsureVisited err
4 years ago
Robert Coelho
1fabbe8a25
mount: cleanup ReadDir rewrite's branches to not assume offsets
4 years ago
Robert Coelho
cb422d96f7
mount: rewrite ReadDir to respect input.Offset to fix partial results
4 years ago
chrislu
bd5c5586b5
generate inode via path and time
4 years ago
chrislu
28b8974a3a
mount: fix directory pagination when using midnight commander
4 years ago
chrislu
63a9d8f01d
ensure inodes are not duplicating unless hardlinked
4 years ago
chrislu
be3fc77391
mount2: use consistent inode
4 years ago
chrislu
91f0481f4e
mount2: SetAttr set mode correctly
4 years ago
chrislu
b93d57da31
mount2: dir read opened file
4 years ago
chrislu
f9d33f70b0
return fuse.Status when looking up by inode
4 years ago
chrislu
49b84b6e2a
list entries while reading from remote
4 years ago
chrislu
65a19e3abc
fix listing with correct inode
4 years ago
chrislu
2facd65998
fix second listing
4 years ago
chrislu
3cbce878f2
mount2: fix directory pagination
4 years ago
chrislu
17ac5244c3
mount2: avoid double listing directories
4 years ago
chrislu
222798d926
mount2: fix for read dir plus on linux
4 years ago
chrislu
fe57a2e770
file set attribute
4 years ago
chrislu
dbeeda8123
listen for metadata updates
4 years ago
chrislu
a1ef0e48a9
doc
4 years ago
chrislu
e85ca10a1a
add mkdir
4 years ago
chrislu
5c48c23235
remove println
4 years ago
chrislu
b0a5193e32
working
4 years ago
chrislu
5a0a709016
it runs, but directory listing output is not showing up
4 years ago
chrislu
866981d8ac
rename
4 years ago
chrislu
72faae91e1
implement read directory and read directory plus
4 years ago