# Project Comparisons
## mhddfs

* [https://romanrm.net/mhddfs](https://romanrm.net/mhddfs)

mhddfs has not been updated in over a decade and has known stability
and security issues. mergerfs provides a superset of mhddfs' features
and offers better performance. In fact, as of 2020, the author of
mhddfs has [moved to using mergerfs](https://romanrm.net/mhddfs#update).

Below is an example of mhddfs and mergerfs configured to work similarly:

`mhddfs -o mlimit=4G,allow_other /mnt/drive1,/mnt/drive2 /mnt/pool`

`mergerfs -o minfreespace=4G,category.create=ff /mnt/drive1:/mnt/drive2 /mnt/pool`
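For a persistent mount, the same mergerfs settings can be expressed as an
`/etc/fstab` entry. A minimal sketch, assuming the same example branches and
mount point:

`/mnt/drive1:/mnt/drive2 /mnt/pool fuse.mergerfs minfreespace=4G,category.create=ff 0 0`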
## aufs

* [https://aufs.sourceforge.net](https://aufs.sourceforge.net)
* [https://en.wikipedia.org/wiki/Aufs](https://en.wikipedia.org/wiki/Aufs)

While aufs is still maintained, it was never accepted into the
mainline kernel and is no longer packaged by most Linux distros,
making it harder for the average user to install.

While aufs can often offer better peak performance due to being
primarily kernel based, mergerfs provides more configurability and is
generally easier to use. mergerfs, however, does not offer the overlay /
copy-on-write (CoW) features which aufs has.
## unionfs

* [https://unionfs.filesystems.org](https://unionfs.filesystems.org)

unionfs for Linux is a "stackable unification file system" which
functions like many other union filesystems. unionfs is no longer
maintained and was last released for Linux v3.14 back in 2014.
Documentation is sparse, so a comparison of features is not possible,
but given the lack of maintenance and support for modern kernels there
is little reason to consider it as a solution.
## unionfs-fuse

* [https://github.com/rpodgorny/unionfs-fuse](https://github.com/rpodgorny/unionfs-fuse)

unionfs-fuse is more like unionfs, aufs, and overlayfs than mergerfs
in that it offers overlay / copy-on-write (CoW) features. If you're
just looking to create a union of filesystems and want flexibility in
file/directory placement, then mergerfs offers that, whereas
unionfs-fuse is better suited to overlaying read/write filesystems over
read-only ones.

Since unionfs-fuse is, as the name suggests, a FUSE based technology,
it can be used without the elevated privileges that kernel solutions
such as unionfs, aufs, and overlayfs require.
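A rough sketch of that overlay style usage (the binary may be installed as
`unionfs` or `unionfs-fuse` depending on the distribution, and the paths are
only examples):

`unionfs -o cow /mnt/writable=RW:/mnt/readonly=RO /mnt/union`

With `cow` enabled, modifying a file from the read-only branch copies it up
to the writable branch instead of failing.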
## overlayfs

* [https://docs.kernel.org/filesystems/overlayfs.html](https://docs.kernel.org/filesystems/overlayfs.html)

overlayfs is effectively the successor to unionfs, unionfs-fuse, and
aufs and is widely used by Linux container platforms such as Docker and
Podman. It was developed and is maintained by the same developer who
created FUSE.

If your use case is layering a writable filesystem on top of read-only
filesystems then you should look first to overlayfs. Its feature set,
however, is very different from mergerfs' and the two solve different
problems.
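For reference, a basic overlayfs mount looks something like the following
(paths are examples; `workdir` must be an empty directory on the same
filesystem as `upperdir`):

`mount -t overlay overlay -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work /mnt/merged`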
## RAID0, JBOD, SPAN, drive concatenation, striping

* [RAID0](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0)
* [JBOD](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures#JBOD)
* [SPAN](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures#Concatenation_(SPAN,_BIG))
* [striping](https://en.wikipedia.org/wiki/Data_striping)

These are block device technologies which in some form aggregate
devices into what appears to be a single device on which a
traditional filesystem can be used. The filesystem has no
understanding of the underlying block layout, and should one of those
underlying devices fail or be removed, the filesystem will be missing
that chunk, which could contain critical information, and the whole
filesystem may become unrecoverable. Even if the data is recoverable,
doing so will require specialized tooling.

In contrast, with mergerfs you can format devices as one normally
would, or take existing filesystems, and combine them in a pool to
aggregate their storage. The failure of any one device will have no
impact on the other devices. The downside to mergerfs' technique is
that you don't actually have a contiguous space as large as if you had
used those other technologies, meaning you can't create a single file
greater than 1TB on a pool of two 1TB filesystems.
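As a sketch of that contrast, two already formatted and populated filesystems
can be pooled as-is, with each file continuing to live entirely on one branch
(the paths and the `mfs` create policy are only examples):

`mergerfs -o category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/pool`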
## UnRAID

* [https://unraid.net](https://unraid.net)

UnRAID is a full OS and offers a (FUSE based?) filesystem which
provides a union of filesystems like mergerfs, but with the addition of
live parity calculation and storage. Outside of parity calculations,
mergerfs offers more features and, due to the lack of real-time parity
calculation, can have higher peak performance. Some users also prefer
an open source solution.

For semi-static data, mergerfs + [SnapRAID](http://www.snapraid.it)
provides a similar, but not real-time, solution, as sketched below.
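A minimal sketch of such a setup, assuming the pool's branches are mounted at
`/mnt/disk1` and `/mnt/disk2` with parity stored on a separate `/mnt/parity1`
(hypothetical paths; see the SnapRAID documentation for a complete
configuration):

```
# /etc/snapraid.conf
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```

Parity is then updated out-of-band by running `snapraid sync` periodically
(e.g. from cron), which is why the protection is not real-time.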
## ZFS

* [https://en.wikipedia.org/wiki/ZFS](https://en.wikipedia.org/wiki/ZFS)

mergerfs is very different from ZFS. mergerfs is intended to provide
flexible pooling of arbitrary filesystems (local or remote) of
arbitrary sizes, particularly for `write once, read many` use cases
such as bulk media storage where data integrity and backups are
managed in other ways. In those use cases ZFS can introduce a number
of costs and limitations as described
[here](http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html),
[here](https://markmcb.com/2020/01/07/five-years-of-btrfs/), and
[here](https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWhyNoRealReshaping).
## StableBit's DrivePool

* [https://stablebit.com](https://stablebit.com)

DrivePool works only on Windows, so it is not as common an alternative
as the other, Linux based solutions. If you want to use Windows then
DrivePool is a good option. Functionally, the two projects work a bit
differently. DrivePool always writes to the filesystem with the most
free space and later rebalances. mergerfs does not currently offer
rebalancing but instead chooses a branch at file/directory creation
time. DrivePool's rebalancing can be configured differently per
directory and supports file pattern matching to further customize the
behavior. mergerfs, not having rebalancing, does not have these
features, but similar features are planned for mergerfs v3. DrivePool
has built-in file duplication which mergerfs does not natively support
(but which can be accomplished with an external script, as sketched
below).
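For example, a simplistic stand-in for DrivePool style duplication is to
periodically mirror selected directories onto a second branch directly
(the branch paths here are hypothetical):

`rsync -a /mnt/disk1/photos/ /mnt/disk2/photos/`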
There are a lot of miscellaneous differences between the two projects, but
most features in DrivePool can be replicated with external tools in
combination with mergerfs.

Additionally, DrivePool is a closed source, commercial product whereas
mergerfs is an ISC licensed, open source project.
## Plan9 binds

* [https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs#Union_directories_and_namespaces](https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs#Union_directories_and_namespaces)

Plan9 has the native ability to bind multiple paths/filesystems
together to create a setup similar to a simplified union
filesystem. Such bind mounts resolve files "first found" in the
order the paths are listed, similar to mergerfs' `ff` policy. Similarly,
when creating a file, it will be created in the first directory of the
union.
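As a loose sketch, in Plan 9 a union of two bin directories might be created
with something like the following, where `-a` binds after the existing
contents and `-c` permits file creation in the union (paths are examples):

`bind -ac /usr/glenda/bin /bin`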
Plan 9 isn't a widely used OS so this comparison is mostly academic.