
Misc doc updates

pull/1479/head
trapexit authored 1 week ago, committed by GitHub
commit 095fcf82a3
Changed files:

1. mkdocs/docs/benchmarking.md (64)
2. mkdocs/docs/config/options.md (2)
3. mkdocs/docs/performance.md (45)
4. mkdocs/mkdocs.yml (2)
5. src/mergerfs.cpp (3)

mkdocs/docs/benchmarking.md (64)

@@ -13,10 +13,11 @@ and/or disabling caching the values returned will not be
 representative of the device's true performance.

 When benchmarking through mergerfs ensure you only use 1 branch to
-remove any possibility of the policies complicating the
-situation. Benchmark the underlying filesystem first and then mount
-mergerfs over it and test again. If you're experiencing speeds below
-your expectation you will need to narrow down precisely which
+remove any possibility of the policies or differences in underlying
+filesystems complicating the situation. Benchmark the underlying
+filesystem first and then mount mergerfs with that same filesystem and
+test that. Throughput **will be** lower but if you are experiencing speeds
+below your expectation you will need to narrow down precisely which
 component is leading to the slowdown. Preferably test the following in
 the order listed (but not combined).
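To make the two-step comparison in the revised paragraph concrete, here is a minimal sketch. The paths are hypothetical and the `dd` flags are just one reasonable choice; the docs' own benchmark commands appear later in this file.

```
# Hypothetical paths: /mnt/disk1 is the branch's filesystem,
# /mnt/pool is a mergerfs mount whose only branch is /mnt/disk1.

# 1) Baseline: write directly to the underlying filesystem.
dd if=/dev/zero of=/mnt/disk1/test.file bs=1M count=1024 conv=fdatasync

# 2) Repeat the identical test through mergerfs backed by that same
#    filesystem. Somewhat lower throughput is expected; a large gap is
#    what warrants narrowing down further.
dd if=/dev/zero of=/mnt/pool/test.file bs=1M count=1024 conv=fdatasync
```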
@@ -24,37 +25,38 @@ the order listed (but not combined).
    reads and writes no-ops. Removing the underlying device /
    filesystem from the equation. This will give us the top theoretical
    speeds.
-2. Mount mergerfs over `tmpfs`. `tmpfs` is a RAM disk. Extremely high
-   speed and very low latency. This is a more realistic best case
-   scenario. Example: `mount -t tmpfs -o size=2G tmpfs /tmp/tmpfs`
-3. Mount mergerfs over a local device. NVMe, SSD, HDD, etc. If you
-   have more than one I'd suggest testing each of them as drives
-   and/or controllers (their drivers) could impact performance.
-4. Finally, if you intend to use mergerfs with a network filesystem,
-   either as the source of data or to combine with another through
-   mergerfs, test each of those alone as above.
+2. Configure mergerfs to use a `tmpfs` branch. `tmpfs` is a RAM
+   disk. Extremely high speed and very low latency. This is a more
+   realistic best case scenario. Example: `mount -t tmpfs -o size=2G
+   tmpfs /tmp/tmpfs`
+3. Configure mergerfs to use a local device filesystem branch. NVMe,
+   SSD, HDD, etc. Test them individually. If you have different
+   interconnects / controllers use the same storage device when
+   testing to ensure consistency.
+4. Configure mergerfs to use any network filesystems you plan to use
+   one at a time. It may also be worth trying a different network
+   filesystem. `NFS` vs `CIFS/SMB/Samba` vs `sshfs`, etc.

 Once you find the component which has the performance issue you can do
 further testing with different options to see if they impact
-performance. For reads and writes the most relevant would be:
-`cache.files`, `async_read`. Less likely but relevant when using NFS
-or with certain filesystems would be `security_capability`, `xattr`,
-and `posix_acl`. If you find a specific system, device, filesystem,
-controller, etc. that performs poorly contact trapexit so he may
-investigate further.
+performance. If you find a specific system, device, filesystem,
+controller, etc. that performs poorly contact the author so it can be
+investigated further.

 Sometimes the problem is really the application accessing or writing
 data through mergerfs. Some software use small buffer sizes which can
 lead to more requests and therefore greater overhead. You can test
-this out yourself by replacing `bs=1M` in the examples below with `ibs`
-or `obs` and using a size of `512` instead of `1M`. In one example
-test using `nullrw` the write speed dropped from 4.9GB/s to 69.7MB/s
-when moving from `1M` to `512`. Similar results were had when testing
-reads. Small writes overhead may be improved by leveraging a write
-cache but in casual tests little gain was found. More tests will need
-to be done before this feature would become available. If you have an
-app that appears slow with mergerfs it could be due to this. Contact
-trapexit so he may investigate further.
+this out yourself by replacing `bs=1M` in the examples below with
+`ibs` or `obs` and using a size of `512` instead of `1M`. In one
+example test using `nullrw` the write speed dropped from 4.9GB/s to
+69.7MB/s when moving from `1M` to `512`. Similar results were had when
+testing reads. Small writes overhead may be improved by leveraging a
+write cache but in casual tests little gain was found. If you have an
+app that appears slow with mergerfs it could be due to this. `strace`
+can be used with the app in question or mergerfs to see the size of
+read/writes. Contact the software author or worst case the mergerfs
+author so it may be investigated further.
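A sketch of the `strace` suggestion added above, plus a way to reproduce the small-buffer slowdown the paragraph describes. The PID and file path are hypothetical.

```
# Attach to a running app and watch the byte counts on read/write
# syscalls (replace 1234 with the real PID of the app or mergerfs).
strace -f -e trace=read,write -p 1234

# Reproduce the effect described above: read in 1M chunks but write
# in 512 byte chunks, greatly increasing the number of requests.
dd if=/dev/zero of=/mnt/pool/test.file ibs=1M obs=512 count=512
```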
### write benchmark
@@ -78,3 +80,9 @@ run before the read and write benchmarks as well just in case.
 sync
 echo 3 | sudo tee /proc/sys/vm/drop_caches
 ```
+
+## Additional Reading
+
+* [Tweaking Performance](performance.md)
+* [Options](config/options.md)
+* [Tips and Notes](tips_notes.md)
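The hunk above ends inside the docs' cache-dropping snippet. For context, a read test along the lines the document describes might look like the following; the path is hypothetical, and caches should be dropped first (or the file should be larger than RAM) for honest numbers.

```
# Flush and drop page/dentry/inode caches, then time a sequential read.
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
dd if=/mnt/pool/test.file of=/dev/null bs=1M
```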

mkdocs/docs/config/options.md (2)

@@ -36,7 +36,7 @@ config file.
 * **[branches-mount-timeout](branches-mount-timeout.md)=UINT**: Number
   of seconds to wait at startup for branches to be a mount other than
   the mountpoint's filesystem. (default: 0)
-* **[branches-mount-timeout-fail](branches-mount-timeout.md##branches-mount-timeout-fail)=BOOL**:
+* **[branches-mount-timeout-fail](branches-mount-timeout.md#branches-mount-timeout-fail)=BOOL**:
   If set to `true` then if `branches-mount-timeout` expires it will
   exit rather than continuing. (default: false)
 * **[minfreespace](minfreespace.md)=SIZE**: The minimum available
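A hedged example of the two options touched by this hunk. The branch paths and the timeout value are hypothetical; see the linked option docs for the real semantics.

```
# Wait up to 60 seconds at startup for each branch path to actually be
# a mount; exit instead of continuing if the timeout expires.
mergerfs -o branches-mount-timeout=60,branches-mount-timeout-fail=true \
  /mnt/disk1:/mnt/disk2 /mnt/pool
```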

mkdocs/docs/performance.md (45)

@@ -1,34 +1,39 @@
 # Tweaking Performance

-mergerfs is at its is a proxy and therefore its theoretical max
-performance is that of the underlying devices. However, given it is a
+`mergerfs` is effectively a filesystem proxy and therefore its
+theoretical max performance is that of the underlying devices
+(ignoring caching performed by the kernel.) However, given it is a
 FUSE based filesystem working from userspace there is an increase in
 overhead relative to kernel based solutions. That said the performance
 can match the theoretical max but it depends greatly on the system's
-configuration. Especially when adding network filesystems into the mix
-there are many variables which can impact performance. Device speeds
-and latency, network speeds and latency, concurrency and parallel
-limits of the hardware, read/write sizes, etc.
+configuration. There are many things which can impact
+performance. Device speeds and latency, network speeds and latency,
+concurrency and parallel limits of the hardware, read/write sizes,
+etc.

 While some settings can impact performance they are all **functional**
 in nature. Meaning they change mergerfs' behavior in some way. As a
-result there is no such thing as a "performance mode".
+result there is really no such thing as a "performance mode".

-If you're having performance issues please look over the suggestions
-below and the [benchmarking section.](benchmarking.md)
+If you're having performance concerns please read over the
+[benchmarking section](benchmarking.md) of these docs and then the
+details below.

-NOTE: Be sure to [read about these features](config/options.md) before
-changing them to understand how functionality will change.
+NOTE: Be sure to [read about available features](config/options.md)
+before changing them to understand how functionality will change.

-* test theoretical performance using `nullrw` or mounting a ram disk
+* test theoretical performance using `nullrw` or using a ram disk as a
+  branch
 * enable [passthrough](config/passthrough.md) (likely to have the
   biggest impact)
 * change read or process [thread pools](config/threads.md)
-* change [func.readdir](config/func_readdir.md)
+* toggle [func.readdir](config/func_readdir.md)
 * increase [readahead](config/readahead.md): `readahead=1024`
-* disable `security_capability` and/or `xattr`
+* disable `security_capability` and/or [xattr](config/xattr.md)
 * increase cache timeouts [cache.attr](config/cache.md#cacheattr),
   [cache.entry](config/cache.md#cacheentry),
   [cache.negative_entry](config/cache.md#cachenegative_entry)
-* enable (or disable) page caching ([cache.files](config/cache.md#cachefiles))
+* toggle [page caching](config/cache.md#cachefiles)
 * enable `parallel-direct-writes`
 * enable [cache.writeback](config/cache.md#cachewriteback)
 * enable [cache.statfs](config/cache.md#cachestatfs)
@@ -41,7 +46,9 @@ changing them to understand how functionality will change.
 * use [tiered cache](usage_patterns.md) devices
 * use LVM and LVM cache to place a SSD in front of your HDDs

 If you come across a setting that significantly impacts performance
 please [contact trapexit](support.md) so he may investigate further. Please test
 both against your normal setup, a singular branch, and with
 `nullrw=true`
+
+## Additional Reading
+
+* [Benchmarking](benchmarking.md)
+* [Options](config/options.md)
+* [Tips and Notes](tips_notes.md)
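To ground the checklist in the hunk above, here is a sketch of a single mergerfs invocation exercising several of the listed knobs. The branch paths and values are hypothetical, and per the NOTE in the diff every one of these options changes behavior, so check the options docs before copying any of it.

```
# Illustrative only: several performance-related options combined.
mergerfs \
  -o cache.files=off \
  -o func.readdir=seq \
  -o readahead=1024 \
  -o security_capability=false \
  -o parallel-direct-writes=true \
  /mnt/hdd1:/mnt/hdd2 /mnt/pool

# For the suggested overhead-only measurement (reads/writes become
# no-ops, no real I/O is performed):
mergerfs -o nullrw=true /mnt/empty /mnt/pool-test
```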

mkdocs/mkdocs.yml (2)

@@ -100,8 +100,8 @@ nav:
     - tips_notes.md
     - known_issues_bugs.md
     - project_comparisons.md
-    - performance.md
     - benchmarking.md
+    - performance.md
     - tooling.md
     - usage_patterns.md
     - FAQ:

src/mergerfs.cpp (3)

@@ -265,7 +265,7 @@ namespace l
     if(uid == 0)
       return;

-    const char s[] = "mergerfs is not running as root and may not work correctly\n";
+    constexpr const char s[] = "mergerfs is not running as root and may not work correctly\n";
     fmt::print(stderr,"warning: {}",s);
     SysLog::warning(s);
   }
@@ -282,6 +282,7 @@ namespace l
     SysLog::open();
     SysLog::info("mergerfs v{} started",MERGERFS_VERSION);
+    SysLog::info("Go to https://trapexit.github.io/mergerfs/support for support");

     memset(&ops,0,sizeof(fuse_operations));
