
Update docs; add nofs, policyfs and greyhole

Branch: master
Antonio SJ Musumeci, 1 day ago · commit b8776d205d
1. mkdocs/docs/benchmarking.md (10)
2. mkdocs/docs/error_handling_and_logging.md (2)
3. mkdocs/docs/known_issues_bugs.md (2)
4. mkdocs/docs/project_comparisons.md (180)
5. mkdocs/docs/quickstart.md (2)
6. mkdocs/docs/remote_filesystems.md (4)
7. mkdocs/docs/resource_usage.md (5)
8. mkdocs/docs/runtime_interface.md (6)
9. mkdocs/docs/tooling.md (8)
10. mkdocs/mkdocs.yml (34)

mkdocs/docs/benchmarking.md (10)

@@ -58,13 +58,13 @@ read/writes. Contact the software author or worse case the mergerfs
author so it may be investigated further.
-### write benchmark
+### Write Benchmark
```
$ dd if=/dev/zero of=/mnt/mergerfs/16GB.file bs=1M count=16384 oflag=nocache conv=fdatasync status=progress
```
-### read benchmark
+### Read Benchmark
```
$ dd if=/mnt/mergerfs/16GB.file of=/dev/null bs=1M iflag=nocache conv=fdatasync status=progress
@@ -79,21 +79,21 @@ Below are some basic tests but also try setting `--direct=0`, changing
the `ioengine`, or playing with other options.
-#### write
+#### Write
```
$ fio --name=writetest --filename=/mnt/mergerfs/16GB.file --size=16Gb --rw=write --bs=1M --direct=1 --numjobs=1 --iodepth=8 --group_reporting --runtime=60 --startdelay=0 --ioengine=psync
```
-#### read
+#### Read
```
$ fio --name=readtest --filename=/mnt/mergerfs/16GB.file --size=16Gb --rw=read --bs=1M --direct=1 --numjobs=1 --iodepth=8 --group_reporting --runtime=60 --startdelay=0 --ioengine=psync
```
-### other benchmarks
+### Other Benchmarks
If you are attempting to benchmark other behaviors you must ensure you
clear kernel caches before runs. In fact it would be a good deal to
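The cache-clearing advice above can be scripted along these lines (a sketch using the standard Linux `drop_caches` interface; requires root to take effect):

```shell
# Flush dirty pages to disk, then drop page cache, dentries, and inodes
# so each benchmark run starts from a cold cache. Requires root.
sync
if [ "$(id -u)" -eq 0 ]; then
    echo 3 > /proc/sys/vm/drop_caches
else
    echo "run as root to drop caches" >&2
fi
```

Run this between benchmark passes; otherwise earlier runs' cached data will inflate read numbers.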

mkdocs/docs/error_handling_and_logging.md (2)

@@ -90,7 +90,7 @@ filesystems. Mainly `ext4`.
Filesystems, and therefore mergerfs, are doing **lots** of small
actions at high speed. It simply isn't reasonable to log all the
-actions of the system all time time. However, there is a debug mode
+actions of the system all the time. However, there is a debug mode
which can be toggled on as needed, even at runtime, which will record
some information to help with debugging. The main thing it will record
is a trace of all FUSE messages to the location defined by the

mkdocs/docs/known_issues_bugs.md (2)

@@ -170,7 +170,7 @@ may need to disable the use of the inode in the software if possible
or change the [inodecalc](config/inodecalc.md) setting if possible.
Generally speaking, in modern setups where there are many non-fully
-complient POSIX filesystem in use, reliance on inode to have a
+compliant POSIX filesystems in use, reliance on inode to have a
specific meaning is not always reasonable. FUSE, for instance, has no
requirement that inode values need to be unique or that two files that
happen to be the same underlying file have the same inode.

mkdocs/docs/project_comparisons.md (180)

@@ -17,9 +17,11 @@ and offers better performance. As of 2020 the author of mhddfs has
Below is an example of mhddfs and mergerfs setup to work similarly.
-`mhddfs -o mlimit=4G,allow_other /mnt/drive1,/mnt/drive2 /mnt/pool`
-`mergerfs -o minfreespace=4G,category.create=ff /mnt/drive1:/mnt/drive2 /mnt/pool`
+```
+mhddfs -o mlimit=4G,allow_other /mnt/drive1,/mnt/drive2 /mnt/pool
+mergerfs -o minfreespace=4G,category.create=ff /mnt/drive1:/mnt/drive2 /mnt/pool
+```
## aufs
@@ -30,15 +32,15 @@ Below is an example of mhddfs and mergerfs setup to work similarly.
aufs, another union filesystem, is a kernel based overlay filesystem
with basic file creation placement policies.
-While aufs still is maintained it failed to be included in the
-mainline kernel and is no longer available in most Linux distros
-making it harder to get installed for the average user.
+aufs failed to be included in the mainline kernel and is no longer
+available in most Linux distros, making it harder to get installed for
+the average user. Development has been largely dormant for years.
While aufs can often offer better peak performance due to being
-primarily kernel based (at least when `passthrough.io` is disabled),
-mergerfs provides more configurability and is generally easier to
-use. mergerfs however does not offer the overlay / copy-on-write (CoW)
-features which aufs has.
+primarily kernel based (at least when `passthrough.io` is disabled in
+mergerfs), mergerfs provides more configurability and is generally
+easier to use. mergerfs however does not offer the overlay /
+copy-on-write (CoW) features which aufs has.
## unionfs
@@ -74,14 +76,13 @@ as unionfs, aufs, and overlayfs require.
* [https://docs.kernel.org/filesystems/overlayfs.html](https://docs.kernel.org/filesystems/overlayfs.html)
-overlayfs is effectively the successor to unionfs, unionfs-fuse, and
-aufs and is widely used by Linux container platforms such as Docker and
-Podman. It was developed and is maintained by the same developer who
-created FUSE.
+overlayfs is effectively the functional successor to unionfs,
+unionfs-fuse, and aufs and is widely used by Linux container platforms
+such as Docker and Podman. Both overlayfs and FUSE were originally created by Miklos Szeredi.
If your use case is layering a writable filesystem on top of read-only
filesystems then you should look first to overlayfs. Its feature set
-however is very different from mergerfs and solve different problems.
+however is very different from mergerfs and solves different problems.
## RAID0, JBOD, SPAN, drive concatenation, striping
@@ -110,18 +111,6 @@ used those other technologies. Meaning you can't create a file greater
than 1TB on a pool of two 1TB filesystems.
-## BTRFS Single Data Profile
-[BTRFS'](https://btrfs.readthedocs.io) `single` data profile is
-similar to RAID0, spreading data across multiple devices but offering
-no redundancy. Unlike mergerfs which pools existing filesystems at a
-high level, BTRFS is a complete filesystem that manages storage
-directly. If a single device fails in BTRFS single mode, you lose all
-data. mergerfs takes a different approach: it pools filesystems as-is
-without redundancy, so a device failure only affects data on that one
-device, not the entire pool.
## RAID5, RAID6
* [RAID5](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5)
@@ -134,11 +123,11 @@ data once the device is replaced.
mergerfs offers no parity or redundancy features so in that regard the
technologies are not comparable. [SnapRAID](https://www.snapraid.it)
-or [nonraid](https://github.com/qvr/nonraid) can be used in
+or [NonRAID](https://github.com/qvr/nonraid) can be used in
combination with mergerfs to provide redundancy. Unlike traditional
RAID5 or RAID6 SnapRAID works with drives of different sizes and can
-have more than 2 parity drives. However, parity calculations are not
-done in real-time. However, nonraid is realtime.
+have more than 2 parity drives. Parity calculations with SnapRAID are
+not done in real-time but NonRAID's are.
For more details and comparison of SnapRAID to related technologies
see [https://www.snapraid.it/compare](https://www.snapraid.it/compare).
@@ -148,14 +137,14 @@ see [https://www.snapraid.it/compare](https://www.snapraid.it/compare).
* [https://unraid.net](https://unraid.net)
-UnRAID is a full OS and offers a (FUSE based?) filesystem which
+UnRAID is a full OS and offers a filesystem (likely FUSE based) which
provides a union of filesystems like mergerfs but with the addition of
live parity calculation and storage. Outside parity calculations
mergerfs offers more features and due to the lack of real-time parity
calculation can have higher peak performance. For some users mergerfs
being open source is also preferable.
-For semi-static data mergerfs + [SnapRAID](http://www.snapraid.it)
+For semi-static data mergerfs + [SnapRAID](https://www.snapraid.it)
provides a similar, but not real-time,
solution. [NonRAID](https://github.com/qvr/nonraid) (see below) is a
fork of UnRAID's parity calculation solution and can also be used with
@@ -208,8 +197,10 @@ bulk media is pooled across regular filesystems using mergerfs.
## ZFS AnyRAID
-[ZFS
-AnyRAID](https://hexos.com/blog/introducing-zfs-anyraid-sponsored-by-eshtek)
+* [https://github.com/openzfs/zfs/discussions/16773](https://github.com/openzfs/zfs/discussions/16773)
+[ZFS AnyRAID](https://github.com/openzfs/zfs/discussions/16773)
+([announcement](https://hexos.com/blog/introducing-zfs-anyraid-sponsored-by-eshtek))
is a feature being developed for ZFS to provide more flexibility in
pools by allowing mixed-capacity disks while maintaining live
redundancy.
@@ -220,7 +211,7 @@ already provides this flexibility today: add drives of any size at any
time with no redundancy overhead. If you need redundancy with that
flexibility, ZFS AnyRAID could be an option when available; until
then, mergerfs remains the simpler choice for mixed-capacity pooling
-with redendancy and integrity available via SnapRAID and/or NonRAID.
+with redundancy and integrity available via SnapRAID and/or NonRAID.
## Bcachefs
@@ -258,10 +249,26 @@ a pooling layer, providing built-in redundancy and automatic data
placement rather than relying on external tools or the properties of
underlying filesystems.
-Bcachefs is under active development and as of early 2026 should be
-considered beta quality. It is suitable for testing and non-critical
-deployments, but careful evaluation is recommended before use in
-production systems.
+Bcachefs is under active development but has had a turbulent
+relationship with the mainline kernel: it was merged into Linux 6.7
+(January 2024) but subsequently removed in Linux 6.18 (late 2025)
+following disagreements over development practices. It is now
+distributed as an external DKMS module. It should be considered beta
+quality and evaluated carefully before use in production systems.
+
+## Btrfs Single Data Profile
+
+* [https://btrfs.readthedocs.io](https://btrfs.readthedocs.io)
+
+[Btrfs'](https://btrfs.readthedocs.io) `single` data profile is
+similar to RAID0, spreading data across multiple devices but offering
+no redundancy. Unlike mergerfs which pools existing filesystems at a
+high level, Btrfs is a complete filesystem that manages storage
+directly. If a single device fails in Btrfs single mode, you lose all
+data. mergerfs takes a different approach: it pools filesystems as-is
+without redundancy, so a device failure only affects data on that one
+device, not the entire pool.
## StableBit's DrivePool
@@ -305,7 +312,7 @@ choice. For Linux users seeking lightweight pooling without redundancy
overhead, mergerfs is the better option.
-## Plan9 binds
+## Plan 9 binds
* [https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs#Union_directories_and_namespaces](https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs#Union_directories_and_namespaces)
@@ -321,6 +328,8 @@ Plan 9 isn't a widely used OS so this comparison is mostly academic.
## SnapRAID pooling
+* [https://www.snapraid.it/manual](https://www.snapraid.it/manual)
[SnapRAID](https://www.snapraid.it/manual) has a pooling feature which
creates "a read-only virtual view of all the files in your array using
symbolic links."
@@ -341,6 +350,8 @@ that kind of setup.
## rclone union
+* [https://rclone.org/union](https://rclone.org/union)
rclone's [union](https://rclone.org/union) backend allows you to
create a union of multiple rclone backends and was inspired by
[mergerfs](https://rclone.org/union/#behavior-policies). Given rclone
@@ -355,11 +366,11 @@ given the differing feature sets and focuses of the two projects.
## distributed filesystems
-* AFS
-* Ceph/CephFS
-* GlusterFS
-* LizardFS
-* MooseFS
+* [AFS](https://www.openafs.org)
+* [Ceph/CephFS](https://ceph.io)
+* [GlusterFS](https://www.gluster.org)
+* [LizardFS](https://lizardfs.com)
+* [MooseFS](https://moosefs.com)
* etc.
Distributed remote filesystems come in many forms. Some offering POSIX
@@ -370,7 +381,7 @@ filesystems with duplication.
These filesystems almost always require a significant amount of
compute to run well and are typically deployed on their own
-hardware. Often in an "orchestrators" + "workers" configuration across
+hardware. Often in an orchestrator + worker node configuration across
numerous nodes. This limits their usefulness for casual and homelab
users. There could also be issues with network congestion and general
performance if using a single network and that network is slower than
@@ -382,8 +393,87 @@ complicated setup and more compute resources than mergerfs (while also
offering a different set of capabilities.)
+
+## nofs
+
+* [https://github.com/chapmanjacobd/nofs](https://github.com/chapmanjacobd/nofs)
+
+nofs is a tool that provides mergerfs-like functionality for combining
+multiple filesystems/directories into a unified view but does so
+entirely through subcommands which replicate traditional POSIX
+commands rather than providing a proper filesystem. It takes a lot of
+inspiration from mergerfs in that it supports policy-based branch
+selection (mfs, ff, pfrd, epmfs, and others), read-only and no-create
+branch modes, and recreates POSIX-like commands such as `ls`, `find`,
+`which`, `cp`, `mv`, and `rm`.
+
+Given its design nofs is not suited for general usage as 3rd party
+applications will not be able to take advantage of the unioning
+behavior it offers. It is primarily for simpler situations where
+something like mergerfs is unable to be used.
+
+## policyfs
+
+* [https://policyfs.org](https://policyfs.org)
+* [https://github.com/hieutdo/policyfs](https://github.com/hieutdo/policyfs)
+
+policyfs is a Linux FUSE storage daemon that unifies multiple storage
+paths under a single mountpoint, similarly to mergerfs. Its
+distinguishing features are explicit path-pattern-based routing rules
+for reads and writes, and an optional SQLite metadata index that can
+serve `readdir` and `getattr` operations without spinning up HDDs.
+
+Where mergerfs applies a single policy across all operations (with
+per-function overrides), policyfs lets you write explicit routing
+rules matched against path patterns, giving finer-grained control over
+which physical storage a particular directory subtree reads from or
+writes to. The SQLite metadata index is a notable addition for
+HDD-heavy setups where keeping disks spun down is a priority (though
+how effective it is will depend on access patterns); mergerfs does not
+have an equivalent cache (though it does leverage kernel caches when
+enabled). policyfs also supports deferred physical operations,
+recording delete and rename events for indexed paths and applying them
+later.
+
+policyfs is a young project (first released in late 2025) and should
+be evaluated accordingly for production use. mergerfs has a longer
+track record, a broader policy set, and wider community adoption. If
+spin-down-friendly metadata serving or explicit per-path routing rules
+are important to your workload, policyfs might be an option.
+
+## Greyhole
+
+* [https://www.greyhole.net](https://www.greyhole.net)
+* [https://github.com/gboudreau/Greyhole](https://github.com/gboudreau/Greyhole)
+
+Greyhole is an open-source storage pooling application that works via
+Samba. Rather than implementing a FUSE filesystem, it hooks into Samba
+VFS to intercept file operations and distribute files across multiple
+drives. Its headline feature compared to mergerfs is per-share
+redundancy: you can configure how many copies of each file Greyhole
+keeps, with copies spread across different physical drives to protect
+against drive failure.
+
+Because Greyhole operates through Samba, all file access must go
+through Samba shares; it cannot be used as a general-purpose local
+filesystem mount. mergerfs is a FUSE filesystem and works with any
+application that uses normal POSIX filesystem calls. Greyhole is also
+not well-suited for large numbers of small files or frequently-changing
+files, and write performance is lower because files first land in a
+temporary "landing zone" before being moved to the pool.
+
+For users who need storage pooling with built-in per-file redundancy
+and are already using Samba for file sharing, Greyhole provides both
+in a single tool. For local filesystem pooling without Samba,
+mergerfs is the more appropriate choice; redundancy can then be added
+separately via SnapRAID or NonRAID.
## 9P
* [https://en.wikipedia.org/wiki/9P_(protocol)](https://en.wikipedia.org/wiki/9P_(protocol))
[9P](https://en.wikipedia.org/wiki/9P_(protocol)) is a filesystem
protocol from the Plan 9 operating system. While historically
important, it's not directly relevant for users looking to pool
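The create policies that recur throughout these comparisons (ff, mfs, and friends) amount to branch-selection functions. A toy sketch in POSIX shell, not mergerfs's actual implementation (function names are made up for illustration):

```shell
# 'ff' (first found): return the first writable branch in list order.
pick_ff() {
    for b in "$@"; do
        [ -w "$b" ] && { echo "$b"; return 0; }
    done
    return 1
}

# 'mfs' (most free space): return the branch whose filesystem reports
# the most available 1K blocks (column 4 of POSIX `df -P` output).
pick_mfs() {
    best="" bestfree=-1
    for b in "$@"; do
        free=$(df -P "$b" | awk 'NR==2 {print $4}')
        if [ "$free" -gt "$bestfree" ]; then
            bestfree=$free
            best=$b
        fi
    done
    echo "$best"
}
```

With branches `/mnt/a /mnt/b`, `pick_ff /mnt/a /mnt/b` returns the first writable one while `pick_mfs` compares free space; mergerfs evaluates such policies per FUSE operation category.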

mkdocs/docs/quickstart.md (2)

@@ -1,4 +1,4 @@
-# QuickStart
+# Quick Start
## Install

mkdocs/docs/remote_filesystems.md (4)

@@ -55,7 +55,7 @@ NFS export settings:
other software. By setting `softerr` or `soft` the NFS client will
timeout eventually and return an error.
* `softreval`: NFS client will serve up cached data after `retrans`
-attempts to revalidate the data. Helps with intermitent network
+attempts to revalidate the data. Helps with intermittent network
issues.
* `timeo=150`: Timeout till retrying request.
* `retrans=3`: Number of retrying a request.
@@ -122,7 +122,7 @@ line tool `uuid` or `uuidgen` or through a website such as
`no_root_squash` is required for the same reason mergerfs needs to run
as `root`. Certain behaviors of mergerfs require control over the
-filesystem that only `root` can preform. If squashing is enabled, or
+filesystem that only `root` can perform. If squashing is enabled, or
mergerfs was running as non-root, it would be unable to perform
certain function and you will receive permission errors.
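The client-side options discussed above might combine into an fstab entry along these lines (a config fragment; the server name and paths are placeholders, and the option values are the ones the doc suggests):

```
# /etc/fstab — illustrative NFS mount of a remote mergerfs pool
server:/mnt/pool  /mnt/pool  nfs  soft,softreval,timeo=150,retrans=3  0  0
```

On the server side the export would need `no_root_squash` for the reasons given above.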

mkdocs/docs/resource_usage.md (5)

@@ -23,8 +23,9 @@
value to -990.
* mergerfs increases [its available file descriptor and file size
limit.](https://www.man7.org/linux/man-pages/man3/setrlimit.3p.html)
-* mergerfs lowers its [scheduling
+* mergerfs sets its [scheduling
priority](https://man7.org/linux/man-pages/man3/setpriority.3p.html)
-to -10 ([by default](config/options.md))
+(nice value) to -10 ([by default](config/options.md)), raising its
+CPU scheduling priority above the default of 0.
* The [readahead](config/readahead.md) values of mergerfs itself and
managed filesystems can be modified.

mkdocs/docs/runtime_interface.md (6)

@@ -35,7 +35,7 @@ configure the mounting of mergerfs (/etc/fstab, systemd, etc.).
### Command Line Tooling
-Extended attributes is prevelant enough that there are common tools
+Extended attributes are prevalent enough that there are common tools
available for interacting with them.
In Debian / Ubuntu distributions you can get the tools
@@ -122,14 +122,14 @@ user.mergerfs.category.search="mfs"
[trapexit:/mnt/mergerfs] $ getfattr -n user.mergerfs.category.create .mergerfs
user.mergerfs.category.search="prfd"
-[trapexit:/mnt/mergerfs] $ setfattr -n user.mergerfs.branches -v "'+</mnt/c=RO .mergerfs
+[trapexit:/mnt/mergerfs] $ setfattr -n user.mergerfs.branches -v '+</mnt/c=RO' .mergerfs
[trapexit:/mnt/mergerfs] $ getfattr -n user.mergerfs.branches .mergerfs
user.mergerfs.branches="/mnt/c=RO:/mnt/a=RW:/mnt/b=RW"
```
### Commands
-There are a number of commands / behaviors which can be triggerd by
+There are a number of commands / behaviors which can be triggered by
writing ([setfattr](https://linux.die.net/man/1/setfattr),
[setxattr](https://linux.die.net/man/2/setxattr)) particular xattr
keys of `/mountpoint/.mergerfs`. These keys do not show up in key
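The runtime interface above boils down to reading and writing xattrs on the control file. A guarded sketch (the mountpoint is illustrative and the script only acts if a mergerfs pool is actually mounted there; the `category.create` key and `mfs` policy value are taken from the examples above):

```shell
# Illustrative mergerfs mountpoint; adjust to your own pool.
CTRL=/mnt/mergerfs/.mergerfs

if [ -e "$CTRL" ]; then
    # Read the current create policy from the control file.
    getfattr -n user.mergerfs.category.create "$CTRL"
    # Change the create policy at runtime by writing the xattr.
    setfattr -n user.mergerfs.category.create -v mfs "$CTRL"
else
    echo "no mergerfs mount at $CTRL" >&2
fi
```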

mkdocs/docs/tooling.md (8)

@@ -75,7 +75,7 @@ the read/write workflow, such as
[moveonenospc](config/moveonenospc.md), will no longer work.
Also, understand that this will only work on dynamically linked
-software (a that dynamically linked with the same general libc version
+software (that is dynamically linked with the same general libc version
as the software being used with it.) Anything statically compiled will
not work. Many GoLang and Rust apps are statically compiled.
@@ -83,8 +83,8 @@ The library will not interfere with non-mergerfs filesystems. The
library is written to always fallback to returning the mergerfs opened
file on error.
-While the library was written to account for a number of edgecases
-there could be some yet accounted for so please report any oddities.
+While the library was written to account for a number of edge cases
+there could be some yet unaccounted for so please report any oddities.
Thank you to
[nohajc](https://github.com/nohajc/mergerfs-io-passthrough) for
@@ -112,7 +112,7 @@ the same path as found on the host so the preload library can see
them.
**NOTE:** Since a container can have its own OS setup there is no
-guarantee that `preload.so` from the host install will be compatible
+guarantee that `preload.so` from the host install will be compatible
with the loader found in the container. If that is true it simply
won't work and shouldn't cause any issues.
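Using the preload library amounts to setting `LD_PRELOAD` when launching a dynamically linked application. A guarded sketch; the library path and application name are hypothetical and depend on your install:

```shell
# Hypothetical install path for preload.so; adjust to your system.
PRELOAD=/usr/lib/mergerfs/preload.so

if [ -e "$PRELOAD" ]; then
    # Launch the app with the library injected so opens on mergerfs
    # can be redirected to the underlying branch files.
    LD_PRELOAD="$PRELOAD" some-media-app
else
    echo "preload.so not found at $PRELOAD" >&2
fi
```

As noted above this only helps dynamically linked binaries; statically compiled Go or Rust apps will ignore `LD_PRELOAD` entirely.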

mkdocs/mkdocs.yml (34)

@@ -63,8 +63,8 @@ nav:
- setup/installation.md
- setup/upgrade.md
- setup/build.md
-- terminology.md
-- intro_to_filesystems.md
+- Terminology: terminology.md
+- Intro to Filesystems: intro_to_filesystems.md
- Config:
- config/options.md
- config/deprecated_options.md
@@ -96,17 +96,17 @@ nav:
- config/export-support.md
- config/kernel-permissions-check.md
- config/lazy-umount-mountpoint.md
-- error_handling_and_logging.md
-- resource_usage.md
-- runtime_interface.md
-- remote_filesystems.md
-- tips_notes.md
-- known_issues_bugs.md
-- project_comparisons.md
-- benchmarking.md
-- performance.md
-- tooling.md
-- extended_usage_patterns.md
+- Error Handling and Logging: error_handling_and_logging.md
+- Resource Usage: resource_usage.md
+- Runtime Interface: runtime_interface.md
+- Remote Filesystems: remote_filesystems.md
+- Tips and Notes: tips_notes.md
+- Known Issues and Bugs: known_issues_bugs.md
+- Project Comparisons: project_comparisons.md
+- Benchmarking: benchmarking.md
+- Performance: performance.md
+- Tooling: tooling.md
+- Extended Usage Patterns: extended_usage_patterns.md
- FAQ:
- faq/why_isnt_it_working.md
- faq/configuration_and_policies.md
@@ -116,7 +116,7 @@ nav:
- faq/technical_behavior_and_limitations.md
- faq/have_you_considered.md
- faq/limit_drive_spinup.md
-- related_projects.md
-- media_and_publicity.md
-- support.md
-- sponsorship_and_donations.md
+- Related Projects: related_projects.md
+- Media and Publicity: media_and_publicity.md
+- Support: support.md
+- Sponsorship and Donations: sponsorship_and_donations.md