# Benchmarking

Filesystems are complicated. They do many things and many of those are
interconnected. Additionally, the OS, drivers, hardware, etc. can all
impact performance. Therefore, when benchmarking, it is **necessary**
that the test focus as narrowly as possible.

For most users, throughput is the key benchmark. To test throughput
`dd` is useful but **must** be used with the correct settings in order
to ensure the filesystem or device is actually being tested. The OS
can and will cache data. Without forcing synchronous reads and writes
and/or disabling caching, the values returned will not be
representative of the device's true performance.
When benchmarking through mergerfs, ensure you only use a single
branch to remove any possibility of the policies complicating the
situation. Benchmark the underlying filesystem first and then mount
mergerfs over it and test again. If you're experiencing speeds below
your expectations you will need to narrow down precisely which
component is leading to the slowdown. Preferably test the following in
the order listed (but not combined).
1. Enable `nullrw` mode with `nullrw=true`. This will effectively make
   reads and writes no-ops, removing the underlying device /
   filesystem from the equation. This will give us the top theoretical
   speeds.
2. Mount mergerfs over `tmpfs`. `tmpfs` is a RAM disk. Extremely high
   speed and very low latency. This is a more realistic best case
   scenario. Example: `mount -t tmpfs -o size=2G tmpfs /tmp/tmpfs`
3. Mount mergerfs over a local device. NVMe, SSD, HDD, etc. If you
   have more than one I'd suggest testing each of them as drives
   and/or controllers (their drivers) could impact performance.
4. Finally, if you intend to use mergerfs with a network filesystem,
   either as the source of data or to combine with another through
   mergerfs, test each of those alone as above.
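As a sketch, the first three steps above might be set up as follows.
All paths and mount points here are illustrative, most of these
commands require root, and you should unmount between steps so only
the branch changes:

```shell
# Step 1: nullrw mode -- reads and writes become no-ops, so this
# measures mergerfs/FUSE overhead alone (hypothetical paths).
mergerfs -o nullrw=true /mnt/disk1 /mnt/bench

# Step 2: a tmpfs (RAM disk) as the sole branch.
mkdir -p /tmp/tmpfs
mount -t tmpfs -o size=2G tmpfs /tmp/tmpfs
mergerfs /tmp/tmpfs /mnt/bench

# Step 3: a single local device's filesystem as the sole branch.
mergerfs /mnt/disk1 /mnt/bench
```

Between each setup, `umount /mnt/bench` and rerun the same `dd` tests
below so the only variable is the branch being tested.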
Once you find the component which has the performance issue you can do
further testing with different options to see if they impact
performance. For reads and writes the most relevant would be:
`cache.files`, `async_read`. Less likely, but relevant when using NFS
or with certain filesystems, would be `security_capability`, `xattr`,
and `posix_acl`. If you find a specific system, device, filesystem,
controller, etc. that performs poorly contact trapexit so he may
investigate further.
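A minimal sketch of that testing loop, toggling one option at a time
so comparisons stay clean (paths and option values are illustrative):

```shell
# Unmount, remount with a candidate option changed, rerun the same
# dd test, and compare against the baseline numbers.
umount /mnt/mergerfs
mergerfs -o cache.files=off,async_read=false /mnt/disk1 /mnt/mergerfs
```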
Sometimes the problem is really the application accessing or writing
data through mergerfs. Some software uses small buffer sizes which can
lead to more requests and therefore greater overhead. You can test
this out yourself by replacing `bs=1M` in the examples below with `ibs`
or `obs` and using a size of `512` instead of `1M`. In one example
test using `nullrw` the write speed dropped from 4.9GB/s to 69.7MB/s
when moving from `1M` to `512`. Similar results were seen when testing
reads. The overhead of small writes may be reduced by leveraging a
write cache, but in casual tests little gain was found. More tests
will need to be done before this feature would become available. If
you have an app that appears slow with mergerfs it could be due to
this. Contact trapexit so he may investigate further.
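The effect of request size can be seen even without mergerfs in the
path. Copying from `/dev/zero` to `/dev/null` isolates per-request
overhead from any real device (block counts here are arbitrary):

```shell
# Same amount of data, different write sizes. The second run issues
# roughly 2048x as many write calls, so expect it to be measurably
# slower despite no device being involved.
dd if=/dev/zero of=/dev/null bs=1M count=1024
dd if=/dev/zero of=/dev/null ibs=1M obs=512 count=1024
```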
### write benchmark

```
$ dd if=/dev/zero of=/mnt/mergerfs/1GB.file bs=1M count=1024 oflag=nocache conv=fdatasync status=progress
```

### read benchmark

```
$ dd if=/mnt/mergerfs/1GB.file of=/dev/null bs=1M count=1024 iflag=nocache conv=fdatasync status=progress
```
### other benchmarks

If you are attempting to benchmark other behaviors you must ensure you
clear kernel caches before runs. In fact it would be a good idea to
run the following before the read and write benchmarks as well, just
in case.

```
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches
```