# Project Comparisons
## mhddfs

* [https://romanrm.net/mhddfs](https://romanrm.net/mhddfs)

mhddfs has not been updated in over a decade and has known stability
and security issues. mergerfs provides a superset of mhddfs' features
and offers better performance. In fact, as of 2020, the author of
mhddfs has [moved to using mergerfs.](https://romanrm.net/mhddfs#update)

Below is an example of mhddfs and mergerfs setups that work similarly.

`mhddfs -o mlimit=4G,allow_other /mnt/drive1,/mnt/drive2 /mnt/pool`

`mergerfs -o minfreespace=4G,category.create=ff /mnt/drive1:/mnt/drive2 /mnt/pool`
## aufs

* [https://aufs.sourceforge.net](https://aufs.sourceforge.net)
* [https://en.wikipedia.org/wiki/Aufs](https://en.wikipedia.org/wiki/Aufs)

While aufs is still maintained, it was never merged into the mainline
kernel and is no longer packaged by most Linux distros, making it
harder for the average user to install.

While aufs can often offer better peak performance due to being
primarily kernel based, mergerfs provides more configurability and is
generally easier to use. mergerfs, however, does not offer the
overlay / copy-on-write (CoW) features which aufs has.
## unionfs

* [https://unionfs.filesystems.org](https://unionfs.filesystems.org)

unionfs for Linux is a "stackable unification file system" which
functions like many other union filesystems. unionfs has not been
maintained and was last released for Linux v3.14 back in 2014.
Documentation is sparse so a comparison of features is not possible,
but given the lack of maintenance and support for modern kernels there
is little reason to consider it as a solution.
## unionfs-fuse

* [https://github.com/rpodgorny/unionfs-fuse](https://github.com/rpodgorny/unionfs-fuse)

unionfs-fuse is more like unionfs, aufs, and overlayfs than mergerfs
in that it offers overlay / copy-on-write (CoW) features. If you're
just looking to create a union of filesystems and want flexibility in
file/directory placement then mergerfs offers that, whereas
unionfs-fuse is more for overlaying read/write filesystems over
read-only ones.

Since unionfs-fuse is, as the name suggests, a FUSE based technology,
it can be used without the elevated privileges that kernel solutions
such as unionfs, aufs, and overlayfs require.
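For illustration, a minimal unionfs-fuse overlay might look like the
following (paths are hypothetical; branch modes are declared inline
and `-o cow` enables copy-on-write):

```shell
# Hypothetical paths. Each branch is explicitly marked RW or RO; with
# -o cow, modifying a file from /mnt/ro copies it up to /mnt/rw first.
unionfs-fuse -o cow /mnt/rw=RW:/mnt/ro=RO /mnt/union
```

Note that on some distros the binary is installed as `unionfs` rather
than `unionfs-fuse`.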
## overlayfs

* [https://docs.kernel.org/filesystems/overlayfs.html](https://docs.kernel.org/filesystems/overlayfs.html)

overlayfs is effectively the successor to unionfs, unionfs-fuse, and
aufs and is widely used by Linux container platforms such as Docker
and Podman. It was developed and is maintained by the same developer
who created FUSE.

If your use case is layering a writable filesystem on top of read-only
filesystems then you should look first to overlayfs. Its feature set,
however, is very different from that of mergerfs and the two solve
different problems.
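As a point of comparison, a typical overlayfs mount (per the kernel
documentation) layers one writable directory over a read-only one; the
paths below are hypothetical:

```shell
# upperdir holds all changes; workdir is internal scratch space and
# must be an empty directory on the same filesystem as upperdir.
mount -t overlay overlay \
  -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
  /mnt/union
```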
## RAID0, JBOD, SPAN, drive concatenation, striping

* [RAID0](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_0)
* [JBOD](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures#JBOD)
* [SPAN](https://en.wikipedia.org/wiki/Non-RAID_drive_architectures#Concatenation_(SPAN,_BIG))
* [striping](https://en.wikipedia.org/wiki/Data_striping)

These are block device technologies which in some form aggregate
devices into what appears to be a singular device on which a
traditional filesystem can be used. The filesystem has no
understanding of the underlying block layout, and should one of those
underlying devices fail or be removed the filesystem will be missing
that chunk, which could contain critical information, and the whole
filesystem may become unrecoverable. Even if the data from the
filesystem is recoverable, doing so will require specialized tooling.

In contrast, with mergerfs you can format devices as one normally
would, or take existing filesystems, and then combine them in a pool
to aggregate their storage. The failure of any one device will have no
impact on the other devices. The downside to mergerfs' technique is
that you don't actually have a contiguous space as large as if you
used those other technologies, meaning you can't create a file larger
than 1TB on a pool of two 1TB filesystems.
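As a sketch of the approach described above (paths hypothetical), two
independently formatted filesystems can be pooled like so, and if one
branch fails the files on the other remain intact:

```shell
# category.create=mfs places new files on the branch with the most
# free space. Each branch stays a normal, independently usable filesystem.
mergerfs -o cache.files=off,category.create=mfs \
  /mnt/disk1:/mnt/disk2 /mnt/pool
```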
## RAID5, RAID6

* [RAID5](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_5)
* [RAID6](https://en.wikipedia.org/wiki/Standard_RAID_levels#RAID_6)

mergerfs offers no parity or redundancy features so in that regard the
technologies are not comparable. [SnapRAID](https://www.snapraid.it)
can be used in combination with mergerfs to provide redundancy. Unlike
traditional RAID5 or RAID6, SnapRAID works with drives of different
sizes and can have more than 2 parity drives. However, parity
calculations are not done in real-time.

See [https://www.snapraid.it/compare](https://www.snapraid.it/compare)
for more details and comparisons.
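To give a sense of the setup, a minimal `snapraid.conf` for two data
drives and one parity drive might look like the following (paths
hypothetical; see the SnapRAID manual for the full set of directives):

```
# /etc/snapraid.conf (illustrative)
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
```

Parity is then updated out-of-band by running `snapraid sync`
(typically from cron), which is why protection is not real-time.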
## UnRAID

* [https://unraid.net](https://unraid.net)

UnRAID is a full OS and offers a (FUSE based?) filesystem which
provides a union of filesystems like mergerfs but with the addition of
live parity calculation and storage. Outside of parity calculation,
mergerfs offers more features and, due to the lack of real-time parity
calculation, can have higher peak performance. Some users also prefer
an open source solution.

For semi-static data, mergerfs + [SnapRAID](http://www.snapraid.it)
provides a similar, but not real-time, solution.
## ZFS

* [https://en.wikipedia.org/wiki/ZFS](https://en.wikipedia.org/wiki/ZFS)

mergerfs is very different from ZFS. mergerfs is intended to provide
flexible pooling of arbitrary filesystems (local or remote) of
arbitrary sizes, particularly in `write once, read many` use cases
such as bulk media storage, where data integrity and backups are
managed in other ways. In those use cases ZFS can introduce a number
of costs and limitations as described
[here](http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html),
[here](https://markmcb.com/2020/01/07/five-years-of-btrfs/), and
[here](https://utcc.utoronto.ca/~cks/space/blog/solaris/ZFSWhyNoRealReshaping).
## StableBit's DrivePool

* [https://stablebit.com](https://stablebit.com)

DrivePool works only on Windows, so it is not as common an alternative
as the other, Linux based, solutions. If you want to use Windows then
DrivePool is a good option. Functionally, the two projects work a bit
differently. DrivePool always writes to the filesystem with the most
free space and later rebalances. mergerfs does not currently offer
rebalancing but chooses a branch at file/directory create time.
DrivePool's rebalancing can be configured differently in any directory
and has file pattern matching to further customize the behavior.
mergerfs, lacking rebalancing, does not have these features, but
similar features are planned for mergerfs v3. DrivePool has builtin
file duplication which mergerfs does not natively support (but it can
be done via an external script).

There are a lot of miscellaneous differences between the two projects
but most features in DrivePool can be replicated with external tools
in combination with mergerfs.

Additionally, DrivePool is a closed source commercial product whereas
mergerfs is an ISC licensed open source project.
## Plan9 binds

* [https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs#Union_directories_and_namespaces](https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs#Union_directories_and_namespaces)

Plan9 has the native ability to bind multiple paths/filesystems
together to create a setup similar to a simplified union filesystem.
Such bind mounts choose files on a "first found" basis in the order
the paths are listed, similar to mergerfs' `ff` policy. Similarly,
when creating a file it will be created in the first directory of the
union.

Plan 9 isn't a widely used OS so this comparison is mostly academic.
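For illustration, such a union might be built in Plan 9's `rc` shell
roughly as follows (paths hypothetical; flag details vary, see Plan
9's `bind(1)`):

```
# bind -a appends /n/extra after the existing /n/main in the union;
# lookups resolve "first found" in bind order, and with -c new files
# may be created in the first bound directory that allows creation.
bind -ac /n/extra /n/main
```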
## SnapRAID pooling

[SnapRAID](https://www.snapraid.it/manual) has a pooling feature which
creates "a read-only virtual view of all the files in your array using
symbolic links."

As mentioned in the description, this "view" is just the creation of
the same directory layout with symlinks to all files. This means that
reads (and writes) to files happen at native speeds, but the view can
not practically be used as a target for writing new files and is only
updated when `snapraid pool` is run. Note that some software treats
symlinks differently than regular files. For instance, some backup
software will skip symlinks by default.

mergerfs has the feature [symlinkify](config/symlinkify.md) which
provides similar behavior but is more flexible in that it is not
read-only. That said, there can still be some software that won't like
that kind of setup.
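A hedged sketch of enabling it (option names per the mergerfs docs,
so check your installed version; paths are hypothetical):

```shell
# Files whose mtime/ctime is older than the timeout (in seconds) are
# reported as symlinks to their underlying branch paths; files written
# to more recently behave normally.
mergerfs -o symlinkify=true,symlinkify_timeout=3600 \
  /mnt/disk1:/mnt/disk2 /mnt/pool
```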