# Project Comparisons

## mhddfs

mhddfs has not been updated in over a decade and has known stability
and security issues. mergerfs provides a superset of mhddfs' features
and offers better performance.

Below is an example of mhddfs and mergerfs configured to work
similarly.

`mhddfs -o mlimit=4G,allow_other /mnt/drive1,/mnt/drive2 /mnt/pool`

`mergerfs -o minfreespace=4G,category.create=ff /mnt/drive1:/mnt/drive2 /mnt/pool`
## aufs

aufs is abandoned and no longer available in most Linux distros. While
aufs can offer better peak performance, mergerfs provides more
configurability and is generally easier to use. mergerfs, however,
does not offer the overlay / copy-on-write (CoW) features which aufs
has.
## Linux unionfs

FILL IN
## unionfs-fuse

unionfs-fuse is more like aufs than mergerfs in that it offers overlay
/ copy-on-write (CoW) features. If you're just looking to create a
union of filesystems and want flexibility in file/directory placement
then mergerfs offers that, whereas unionfs-fuse is more for overlaying
read/write filesystems over read-only ones. unionfs-fuse has largely
been replaced by overlayfs.
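For illustration only, a unionfs-fuse overlay of a writable branch
over a read-only one might look like the following (the branch paths
are placeholders and the binary may be named `unionfs` or
`unionfs-fuse` depending on the distro):

`unionfs -o cow /mnt/rw=RW:/mnt/ro=RO /mnt/union`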
## overlayfs

overlayfs is similar to aufs and unionfs-fuse in that it is also
primarily used to layer a read/write filesystem over one or more
read-only filesystems. It does not have the ability to spread
files/directories across numerous filesystems. It is the successor to
unionfs, unionfs-fuse, and aufs and is widely used by container
platforms such as Docker.

If your usecase is layering a writable filesystem on top of read-only
filesystems then you should look first to overlayfs.
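As a minimal sketch (the paths are placeholders), an overlayfs mount
layers a writable `upperdir` over a read-only `lowerdir`, with
`workdir` being an empty directory on the same filesystem as
`upperdir`:

`mount -t overlay overlay -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work /mnt/merged`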
## RAID0, JBOD, drive concatenation, striping

With simple JBOD / drive concatenation / striping / RAID0 a single
drive failure will result in full pool failure. mergerfs performs a
similar function without the possibility of catastrophic failure and
the difficulties in recovery. Drives may fail, but all other
filesystems and their data will continue to be accessible.

The main practical difference with mergerfs is that you don't have a
single contiguous space as large as you would with those other
technologies, meaning you can't create a 2TB file on a pool of two
1TB filesystems.
## UnRAID

UnRAID is a full OS and offers a (FUSE based?) filesystem which
provides a union of filesystems like mergerfs but with the addition of
live parity calculation and storage. Outside of parity calculations,
mergerfs offers more features and, due to the lack of realtime parity
calculation, can have higher peak performance. Some users also prefer
an open source solution.

For semi-static data, mergerfs + [SnapRaid](http://www.snapraid.it)
provides a similar solution.
## ZFS

mergerfs is very different from ZFS. mergerfs is intended to provide
flexible pooling of arbitrary filesystems (local or remote) of
arbitrary sizes, primarily for `write once, read many` usecases such
as bulk media storage where data integrity and backup are managed in
other ways. In those usecases ZFS can introduce a number of costs and
limitations as described
[here](http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html),
[here](https://markmcb.com/2020/01/07/five-years-of-btrfs/), and
[here](https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWhyNoRealReshaping).
## StableBit's DrivePool

DrivePool works only on Windows so it is not as common an alternative
as other Linux solutions. If you want to use Windows then DrivePool is
a good option. Functionally the two projects work a bit differently.
DrivePool always writes to the filesystem with the most free space and
later rebalances. mergerfs does not currently offer rebalancing but
chooses a branch at file/directory create time. DrivePool's
rebalancing can be configured differently per directory and has file
pattern matching to further customize the behavior. mergerfs, not
having rebalancing, does not have these features, but similar features
are planned for mergerfs v3. DrivePool has builtin file duplication
which mergerfs does not natively support (but it can be done via an
external script.)
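As a rough sketch of such an external script (the paths and the use of
rsync are assumptions for illustration, not a mergerfs feature),
duplicating a directory from one branch onto another could be as
simple as running periodically:

`rsync -a /mnt/drive1/important/ /mnt/drive2/important/`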
There are a lot of misc differences between the two projects but most
features in DrivePool can be replicated with external tools in
combination with mergerfs.

Additionally, DrivePool is a closed source commercial product whereas
mergerfs is an ISC licensed open source project.
## Plan9 binds

Plan9 has the native ability to bind multiple paths/filesystems
together, which can be compared to a simplified union filesystem. Such
bind mounts choose files in a "first found" manner in the order they
are listed, similar to mergerfs' `ff` policy. File creation is limited
to... FILL ME IN. REFERENCE DOCS.
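For illustration only (the paths are hypothetical), a Plan9 union can
be built with `bind`, where `-a` appends the new directory after the
existing one in the union's search order:

`bind -a /n/disk2/media /n/disk1/media`

Lookups in `/n/disk1/media` then search its original contents first
and the bound directory second, which is what gives the `ff`-like
behavior.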