From cd90193deb188ea2e73036f6815a44abe5543ae1 Mon Sep 17 00:00:00 2001
From: Antonio SJ Musumeci
Date: Wed, 19 Oct 2016 09:38:48 -0400
Subject: [PATCH] add some more explanation to the FAQ

---
 README.md      |  8 +++++---
 man/mergerfs.1 | 17 ++++++++++++-----
 2 files changed, 17 insertions(+), 8 deletions(-)

diff --git a/README.md b/README.md
index f7e9ae9c..bd382493 100644
--- a/README.md
+++ b/README.md
@@ -400,15 +400,17 @@ There is a bug in the kernel. A work around appears to be turning off `splice`.
 
 #### Why use mergerfs over mhddfs?
 
-mhddfs is no longer maintained and has some known stability and security issues (see below).
+mhddfs is no longer maintained and has some known stability and security issues (see below). mergerfs provides a superset of mhddfs' features and should offer the same or better performance.
 
 #### Why use mergerfs over aufs?
 
-While aufs can offer better peak performance mergerfs offers more configurability and is generally easier to use. mergerfs however doesn't offer the overlay features which tends to result in whiteout files being left around the underlying filesystems.
+While aufs can offer better peak performance mergerfs offers more configurability and is generally easier to use. mergerfs however doesn't offer the same overlay features (which tend to result in whiteout files being left around the underlying filesystems).
 
 #### Why use mergerfs over LVM/ZFS/BTRFS/RAID0 drive concatenation / striping?
 
-A single drive failure will lead to full pool failure without additional redundancy. mergerfs performs a similar behavior without the catastrophic failure and lack of recovery. Drives can fail and all other data will continue to be accessable.
+With simple JBOD / drive concatenation / striping / RAID0, a single drive failure will lead to full pool failure. mergerfs performs a similar behavior without the catastrophic failure and general lack of recovery. Drives can fail and all other data will continue to be accessible.
+
+When combined with something like [SnapRAID](http://www.snapraid.it) and/or an offsite full backup solution, you can have the flexibility of JBOD without the single point of failure.
 
 #### Can drives be written to directly? Outside of mergerfs while pooled?
 
diff --git a/man/mergerfs.1 b/man/mergerfs.1
index 593f45ca..f8785582 100644
--- a/man/mergerfs.1
+++ b/man/mergerfs.1
@@ -919,20 +919,27 @@ turn them on.
 .PP
 mhddfs is no longer maintained and has some known stability and
 security issues (see below).
+mergerfs provides a superset of mhddfs\[aq] features and should offer
+the same or better performance.
 .SS Why use mergerfs over aufs?
 .PP
 While aufs can offer better peak performance mergerfs offers more
 configurability and is generally easier to use.
-mergerfs however doesn\[aq]t offer the overlay features which tends to
-result in whiteout files being left around the underlying filesystems.
+mergerfs however doesn\[aq]t offer the same overlay features (which
+tend to result in whiteout files being left around the underlying
+filesystems).
 .SS Why use mergerfs over LVM/ZFS/BTRFS/RAID0 drive concatenation / striping?
 .PP
-A single drive failure will lead to full pool failure without additional
-redundancy.
+With simple JBOD / drive concatenation / striping / RAID0, a single
+drive failure will lead to full pool failure.
 mergerfs performs a similar behavior without the catastrophic failure
-and lack of recovery.
+and general lack of recovery.
 Drives can fail and all other data will continue to be accessable.
+.PP
+When combined with something like SnapRAID (http://www.snapraid.it)
+and/or an offsite full backup solution, you can have the flexibility
+of JBOD without the single point of failure.
 .SS Can drives be written to directly? Outside of mergerfs while pooled?
 .PP
 Yes.