* fix: install cronie
* chore: refactor S3Sink configuration
* chore: refactor config
* add filer-backup compose file
* fix: X-Amz-Meta-Mtime and resolve review comments
* fix: attr mtime
* fix: MaxUploadParts is reduced to the maximum allowable
* fix: env handling and force-set max MaxUploadParts
* fix: env WEED_SINK_S3_UPLOADER_PART_SIZE_MB
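For illustration, a minimal sketch of how the uploader settings above could be wired up, assuming the sink uses the aws-sdk-go `s3manager` uploader; only the `WEED_SINK_S3_UPLOADER_PART_SIZE_MB` variable name comes from these commits, the default part size and env parsing are assumptions:
```golang
// Hypothetical sketch, not the actual sink code: derive the part size from
// WEED_SINK_S3_UPLOADER_PART_SIZE_MB and cap the part count at the SDK maximum.
package s3sink

import (
	"os"
	"strconv"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func newUploader(sess *session.Session) *s3manager.Uploader {
	partSizeMB := int64(8) // assumed default, not taken from the commits
	if v := os.Getenv("WEED_SINK_S3_UPLOADER_PART_SIZE_MB"); v != "" {
		if n, err := strconv.ParseInt(v, 10, 64); err == nil && n > 0 {
			partSizeMB = n
		}
	}
	return s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = partSizeMB * 1024 * 1024
		// Force the part count to the maximum the SDK allows (10000 parts).
		u.MaxUploadParts = s3manager.MaxUploadParts
	})
}
```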
* weed/shell: Check other disk types in cluster.check
The `cluster.check` command only took the empty (`""`) and `hdd` disk types
into consideration, but a cluster with only `ssd` or `nvme` disk types would be
equally valid.
This commit simply checks that _any_ disk type is defined and that some
volumes are available for it (see the sketch below).
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
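A minimal sketch of that relaxed check, assuming the volume counts per disk type have already been collected from the topology (the helper name and map shape are illustrative, not the actual `cluster.check` code):
```golang
// Hypothetical helper: accept any disk type, as long as at least one of them
// has volumes available.
package shell

import "fmt"

func checkAnyDiskTypeHasVolumes(volumeCountByDiskType map[string]uint64) error {
	for diskType, count := range volumeCountByDiskType {
		if count > 0 {
			_ = diskType // "" (generic), "hdd", "ssd", "nvme", ... are all acceptable
			return nil
		}
	}
	return fmt.Errorf("no disk type has any volumes available")
}
```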
* weed/shell: Replace loop that copies slice
Use the following construct instead of a `for` loop:
```golang
x = append(x, y...)
```
See https://staticcheck.dev/docs/checks#S1011.
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
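For reference, a self-contained before/after of the rewrite that S1011 suggests (the slices here are only illustrative):
```golang
package main

import "fmt"

func main() {
	x := []int{1, 2}
	y := []int{3, 4}

	// Pattern flagged by staticcheck S1011: copying a slice element by element.
	for _, v := range y {
		x = append(x, v)
	}

	// Equivalent, idiomatic form used by this commit.
	x = append(x, y...)

	fmt.Println(x) // [1 2 3 4 3 4]
}
```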
* weed/shell: Check disk types when filer is in use
Filer stores its metadata logs in generic (i.e. `""`) or HDD disk type volumes,
so make sure those disk types exist and have volumes associated with them when
Filer is deployed in the cluster.
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
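A minimal sketch of that additional constraint, under the same assumptions as the sketch above (illustrative names, not the actual command code):
```golang
// Hypothetical helper: when a filer is deployed, the generic ("") or hdd disk
// types must exist and have volumes, since that is where the filer metadata
// logs are stored.
package shell

import "fmt"

func checkFilerDiskTypes(filerInUse bool, volumeCountByDiskType map[string]uint64) error {
	if !filerInUse {
		return nil
	}
	if volumeCountByDiskType[""] > 0 || volumeCountByDiskType["hdd"] > 0 {
		return nil
	}
	return fmt.Errorf(`filer needs volumes on the generic ("") or hdd disk types, but none are available`)
}
```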
---------
Signed-off-by: Benoît Knecht <bknecht@protonmail.ch>
When a volume server was unavailable for at least one chunk, the response still returned status 206.
Split `StreamContent` into two parts:
- first, prepare: gather the chunk info and return a stream function
- then, write the chunks using that stream function
This allows the error to be caught in the first step, before the response status code is set in `processRangeRequest` (see the sketch below).
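A hypothetical sketch of that two-phase shape (names and signatures are illustrative, not the actual SeaweedFS code): the prepare step resolves the chunks and can fail early, and only the returned function writes to the response:
```golang
package filer

import (
	"fmt"
	"io"
	"net/http"
)

type chunkInfo struct{ url string }

// prepareStreamContent resolves chunk locations up front, so an unavailable
// volume server surfaces as an error before any status code is written.
func prepareStreamContent(chunks []chunkInfo) (func(w io.Writer) error, error) {
	for _, c := range chunks {
		if c.url == "" {
			return nil, fmt.Errorf("volume server unavailable for chunk")
		}
	}
	return func(w io.Writer) error {
		// stream each chunk to w here
		return nil
	}, nil
}

func handleRange(w http.ResponseWriter, chunks []chunkInfo) {
	streamFn, err := prepareStreamContent(chunks)
	if err != nil {
		// Fail before committing to 206 Partial Content.
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusPartialContent)
	_ = streamFn(w)
}
```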