SeaweedFS is a simple and highly scalable distributed file system. There are two objectives:

1. to store billions of files!
2. to serve the files fast!
SeaweedFS started as a blob store to handle small files efficiently.
Instead of managing all file metadata in a central master,
the central master only manages volumes on volume servers,
and these volume servers manage files and their metadata.
It is so simple with O(1) disk reads that you are welcome to challenge the performance with your actual use cases.
SeaweedFS started by implementing [Facebook's Haystack design paper](http://www.usenix.org/event/osdi10/tech/full_papers/Beaver.pdf).
Also, SeaweedFS implements erasure coding with ideas from
[f4: Facebook’s Warm BLOB Storage System](https://www.usenix.org/system/files/conference/osdi14/osdi14-paper-muralidhar.pdf), and has a lot of similarities with [Facebook’s Tectonic Filesystem](https://www.usenix.org/system/files/fast21-pan.pdf) and Google's Colossus.
On top of the blob store, optional [Filer] can support directories and POSIX attributes.
Filer is a separate linearly-scalable stateless server with customizable metadata stores.

* Flexible Capacity Expansion: Any server with some disk space can be added to increase the total storage space.
* Adding/Removing servers does **not** cause any data re-balancing unless triggered by admin commands.
* Optional picture resizing.
* Support ETag, Accept-Ranges, Last-Modified, etc.
* Support rebalancing the writable and readonly volumes.
* [Customizable Multiple Storage Tiers][TieredStorage]: Customizable storage disk types to balance performance and cost.
* [Transparent cloud integration][CloudTier]: unlimited capacity via tiered cloud storage for warm data.
* [Erasure Coding for warm storage][ErasureCoding]: Rack-Aware 10.4 erasure coding reduces storage cost and increases availability. The Enterprise version can customize the EC ratio.
[Back to TOC](#table-of-contents)
## Example: Using Seaweed Blob Store ##
By default, the master node runs on port 9333, and the volume nodes run on port 8080.

Let's start one master node, and two volume nodes on port 8080 and 8081. Ideally, they should be started from different machines. We'll use localhost as an example.
SeaweedFS uses HTTP REST operations to read, write, and delete. The responses are in JSON or JSONP format.
A blob, also referred to as a needle, a chunk, or sometimes (mistakenly) a file, is just a byte array. It can have attributes such as a name, MIME type, and create or update time, but basically it is just a byte array of a relatively small size, such as 2 MB ~ 64 MB. The size is not fixed.
To upload a blob, first send an HTTP POST, PUT, or GET request to `/dir/assign` to get an `fid` and a volume server URL. Then send an HTTP multipart POST request to the volume server URL plus `/` plus the `fid` to store the blob content.

Now, you can save the `fid`, `3,01637037d6` in this example, to a database field.
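To make the two steps concrete, here is a minimal Go sketch of the assign-then-upload flow. It assumes a master on localhost:9333 and the usual JSON field names in the assign response (`fid`, `url`, `publicUrl`); adjust for your deployment.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"mime/multipart"
	"net/http"
)

// assignResult mirrors the /dir/assign response; the field names are
// assumed from the default JSON output.
type assignResult struct {
	Fid       string `json:"fid"`
	Url       string `json:"url"`
	PublicUrl string `json:"publicUrl"`
}

func main() {
	// Step 1: ask the master for a file id and a volume server location.
	resp, err := http.Get("http://localhost:9333/dir/assign")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var a assignResult
	if err := json.NewDecoder(resp.Body).Decode(&a); err != nil {
		panic(err)
	}

	// Step 2: upload the blob content to the assigned volume server
	// with a multipart POST to http://<volume server>/<fid>.
	var buf bytes.Buffer
	w := multipart.NewWriter(&buf)
	part, _ := w.CreateFormFile("file", "hello.txt")
	io.WriteString(part, "hello, seaweedfs")
	w.Close()

	uploadURL := fmt.Sprintf("http://%s/%s", a.Url, a.Fid)
	res, err := http.Post(uploadURL, w.FormDataContentType(), &buf)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()

	body, _ := io.ReadAll(res.Body)
	fmt.Println(a.Fid, string(body)) // save a.Fid in your database
}
```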
The file key and file cookie are both coded in hex. You can store the <volume id, file key, file cookie> tuple in your own format, or simply store the fid as a string.
If stored as a string, in theory, you would need 8+1+16+8=33 bytes. A char(33) would be enough, if not more than enough, since most uses will not need 2^32 volumes.

If space is really a concern, you can store the file id in a binary format. You would need one 4-byte integer for the volume id, an 8-byte long for the file key, and a 4-byte integer for the file cookie, so 16 bytes are more than enough.
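For illustration only, here is a Go sketch that packs the example fid `3,01637037d6` into those 16 bytes. It assumes the cookie is the trailing 8 hex characters of the part after the comma, with the file key in front of it; verify this layout against your SeaweedFS version before relying on it.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"strconv"
	"strings"
)

// packFid packs a file id like "3,01637037d6" into 16 bytes:
// a 4-byte volume id, an 8-byte file key, and a 4-byte cookie.
// Assumption: the cookie is the trailing 8 hex characters after the comma.
func packFid(fid string) ([16]byte, error) {
	var out [16]byte
	parts := strings.SplitN(fid, ",", 2)
	if len(parts) != 2 || len(parts[1]) <= 8 {
		return out, fmt.Errorf("unexpected fid format: %q", fid)
	}
	volumeID, err := strconv.ParseUint(parts[0], 10, 32)
	if err != nil {
		return out, err
	}
	keyHex, cookieHex := parts[1][:len(parts[1])-8], parts[1][len(parts[1])-8:]
	key, err := strconv.ParseUint(keyHex, 16, 64)
	if err != nil {
		return out, err
	}
	cookie, err := strconv.ParseUint(cookieHex, 16, 32)
	if err != nil {
		return out, err
	}
	binary.BigEndian.PutUint32(out[0:4], uint32(volumeID))
	binary.BigEndian.PutUint64(out[4:12], key)
	binary.BigEndian.PutUint32(out[12:16], uint32(cookie))
	return out, nil
}

func main() {
	packed, err := packFid("3,01637037d6")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", packed) // 16 bytes: volume 3, key 0x1, cookie 0x637037d6
}
```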
### Rack-Aware and Data Center-Aware Replication ###
SeaweedFS applies the replication strategy at a volume level. So, when you are getting a blob id, you can specify the replication strategy with a `replication` parameter, e.g. `/dir/assign?replication=001`, which places one extra copy on a different server in the same rack.

When requesting a blob key, an optional "dataCenter" parameter can limit the assigned volume to the specific data center. For example, this specifies that the assigned volume should be limited to 'dc1':
```
http://localhost:9333/dir/assign?dataCenter=dc1
```
[Back to TOC](#table-of-contents)
## Blob Store Architecture ##
Usually distributed file systems split each file into chunks. A central server keeps a mapping of filenames to chunks, and also which chunks each chunk server has.

The main drawback is that the central server can't handle many small files efficiently, and since all read requests need to go through it, it might not scale well for many concurrent users.
Instead of managing chunks, SeaweedFS manages data volumes in the master server. Each data volume is 32GB in size, and can hold a lot of blobs. And each storage node can have many data volumes. So the master node only needs to store the metadata about the volumes, which is a fairly small amount of data and is generally stable.

The actual blob metadata, which is each blob's key, offset, and size, is stored in each volume on volume servers. Since each volume server only manages metadata of blobs on its own disk, with only 16 bytes for each blob, all blob access can read the metadata from memory and needs only one disk operation to actually read the blob data.
For comparison, consider that an xfs inode structure in Linux is 536 bytes.
### Write and Read files ###

When a client sends a write request, the master server returns (volume id, file key, file cookie, volume node URL) for the blob. The client then contacts the volume node and POSTs the blob content.

When a client needs to read a blob based on (volume id, file key, file cookie), it asks the master server by the volume id for the (volume node URL, volume node public URL), or retrieves this from a cache. Then the client can GET the content, or just render the URL on web pages and let browsers fetch the content.

Please see the example above for details on the write-read process.
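As a rough illustration of this read path, the sketch below asks the master for the volume location and then fetches the blob. The `/dir/lookup?volumeId=...` endpoint and the `locations`/`url`/`publicUrl` field names follow the common SeaweedFS examples; treat them as assumptions to verify against your version.

```go
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
	"strings"
)

// lookupResult mirrors the /dir/lookup response; the field names are
// assumed from the default JSON output.
type lookupResult struct {
	Locations []struct {
		Url       string `json:"url"`
		PublicUrl string `json:"publicUrl"`
	} `json:"locations"`
}

// readBlob resolves the volume location via the master, then GETs the blob
// from the volume server.
func readBlob(master, fid string) ([]byte, error) {
	// The volume id is the part of the fid before the comma, e.g. "3" in "3,01637037d6".
	volumeID := strings.SplitN(fid, ",", 2)[0]

	resp, err := http.Get(fmt.Sprintf("http://%s/dir/lookup?volumeId=%s", master, volumeID))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var lr lookupResult
	if err := json.NewDecoder(resp.Body).Decode(&lr); err != nil {
		return nil, err
	}
	if len(lr.Locations) == 0 {
		return nil, fmt.Errorf("volume %s not found", volumeID)
	}

	// In a real client, cache the volume id -> location mapping to avoid
	// asking the master on every read.
	blobResp, err := http.Get(fmt.Sprintf("http://%s/%s", lr.Locations[0].Url, fid))
	if err != nil {
		return nil, err
	}
	defer blobResp.Body.Close()
	return io.ReadAll(blobResp.Body)
}

func main() {
	data, err := readBlob("localhost:9333", "3,01637037d6")
	if err != nil {
		panic(err)
	}
	fmt.Println(len(data), "bytes")
}
```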
### Storage Size ###

In the current implementation, each volume can hold 32 gibibytes (32GiB or 8x2^32 bytes). This is because we align content to 8 bytes. We can easily increase this to 64GiB, or 128GiB, or more, by changing 2 lines of code, at the cost of some wasted padding space due to alignment.

There can be 2^32 (about 4 billion) volumes. So the total system capacity is 8 x 2^32 x 2^32 bytes, which is 128 exbibytes (128EiB or 2^67 bytes).

Each individual file size is limited to the volume size.
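A quick way to sanity-check the arithmetic above (a sketch; the 8-byte alignment and the 32-bit offset and volume-id widths are taken from the preceding paragraphs):

```go
package main

import "fmt"

func main() {
	const (
		alignment  = 8                   // content is aligned to 8 bytes
		offsets    = 1 << 32             // a 32-bit offset field
		volumeSize = alignment * offsets // 8 x 2^32 bytes = 32 GiB per volume
		maxVolumes = 1 << 32             // a 32-bit volume id
	)
	fmt.Printf("volume size: %d GiB\n", volumeSize>>30)    // 32
	fmt.Printf("max volumes: %d\n", int64(maxVolumes))     // 4294967296
	// Total capacity: 2^3 * 2^32 * 2^32 = 2^67 bytes = 128 EiB.
}
```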
### Saving memory ###
All blob metadata stored on a volume server is readable from memory without disk access. Each blob takes just a 16-byte map entry of <64bitkey,32bitoffset,32bitsize>. Of course, each map entry has its own space cost for the map. But usually the disk space runs out before the memory does.
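To make the 16-byte entry concrete, an illustrative Go struct might look like this (a sketch of the layout described above, not SeaweedFS's actual code):

```go
package main

import "fmt"

// needleMapEntry is an illustrative sketch of the per-blob index entry a
// volume server keeps in memory, following the <64bitkey,32bitoffset,32bitsize>
// layout described above. The names are not SeaweedFS's actual types.
type needleMapEntry struct {
	Key    uint64 // blob (needle) key within the volume
	Offset uint32 // offset into the volume file, in 8-byte units (hence 8 x 2^32 bytes per volume)
	Size   uint32 // blob size in bytes
}

func main() {
	// Conceptually, the in-memory index maps key -> (offset, size);
	// each entry costs 16 bytes plus the map's own overhead.
	index := map[uint64]needleMapEntry{
		1: {Key: 1, Offset: 0, Size: 43234},
	}
	fmt.Println(index[1].Size)
}
```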
### Tiered Storage to the cloud ###
The local volume servers are faster, while cloud storage has elastic capacity and is cheaper for rarely accessed data. If the hot/warm data is split as 20/80, with 20 servers you can achieve the storage capacity of 100 servers.
[Back to TOC](#table-of-contents)
## SeaweedFS Filer ##
Built on top of the blob store, SeaweedFS Filer adds a directory structure to create a file system. The directory structure is an interface that can be implemented with many key-value stores or databases.

The content of a file is mapped to one or many blobs, distributed across multiple volumes on multiple volume servers.
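As a rough mental model (hypothetical field names, not the filer's actual schema), a filer entry can be pictured as a path plus an ordered list of blob references:

```go
package main

import "fmt"

// FileChunk and FileEntry illustrate how a filer entry maps a file path to
// blobs in the underlying store. The field names are hypothetical.
type FileChunk struct {
	Fid    string // blob id in the blob store, e.g. "3,01637037d6"
	Offset int64  // position of this chunk within the file
	Size   int64  // chunk length in bytes
}

type FileEntry struct {
	FullPath string      // e.g. "/photos/2024/cat.jpg", kept in the metadata store
	Chunks   []FileChunk // the file content, spread across volume servers
}

func main() {
	entry := FileEntry{
		FullPath: "/photos/2024/cat.jpg",
		Chunks: []FileChunk{
			{Fid: "3,01637037d6", Offset: 0, Size: 4 << 20},
		},
	}
	fmt.Println(entry.FullPath, len(entry.Chunks), "chunk(s)")
}
```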
## Compared to Other File Systems ##
Most other distributed file systems seem more complicated than necessary.