* Added global http client
* Added Do func for global http client
* Changed the code to use the global http client
* Fix http client in volume uploader
* Fixed pkg name
* Fixed http util funcs
* Fixed http client for bench_filer_upload
* Fixed http client for stress_filer_upload
* Fixed http client for filer_server_handlers_proxy
* Fixed http client for command_fs_merge_volumes
* Fixed http client for command_fs_merge_volumes and command_volume_fsck
* Fixed http client for s3api_server
* Added init global client for main funcs
* Rename global_client to client
* Changed:
- fixed NewHttpClient
- added CheckIsHttpsClientEnabled func
- updated security.toml in scaffold
* Reduce the visibility of some functions in the util/http/client pkg
* Added the loadSecurityConfig function
* Use util.LoadSecurityConfiguration() in NewHttpClient func
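Taken together, the commits above centralize HTTP access behind one shared client in the util/http package, configured once from security.toml. A minimal sketch of that shape, assuming a package-level client plus a Do wrapper; the package and function names here are illustrative, not the actual SeaweedFS code:

```go
package httpclient // illustrative package name

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"sync"
)

var (
	client *http.Client
	once   sync.Once
)

// InitGlobalHttpClient (hypothetical name) is called once from each main func.
// httpsEnabled and tlsConfig would come from the security configuration loaded
// by util.LoadSecurityConfiguration() and checked by CheckIsHttpsClientEnabled.
func InitGlobalHttpClient(httpsEnabled bool, tlsConfig *tls.Config) {
	once.Do(func() {
		transport := http.DefaultTransport.(*http.Transport).Clone()
		if httpsEnabled {
			transport.TLSClientConfig = tlsConfig
		}
		client = &http.Client{Transport: transport}
	})
}

// Do forwards a request to the shared client so callers never build their own.
func Do(req *http.Request) (*http.Response, error) {
	if client == nil {
		return nil, fmt.Errorf("global http client is not initialized")
	}
	return client.Do(req)
}
```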
* fix: install cronie
* chore: refactor configure S3Sink
* chore: refactor config
* add filer-backup compose file
* fix: X-Amz-Meta-Mtime and resolve review comments
* fix: attr mtime
* fix: MaxUploadParts is reduced to the maximum allowable
* fix: env and force set max MaxUploadParts
* fix: env WEED_SINK_S3_UPLOADER_PART_SIZE_MB
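The uploader fixes above keep the multipart settings within S3 limits. A rough sketch, assuming the aws-sdk-go s3manager uploader, of reading the part size from WEED_SINK_S3_UPLOADER_PART_SIZE_MB and forcing MaxUploadParts to the SDK maximum; the helper name and default value are illustrative:

```go
package s3sink // illustrative package name

import (
	"os"
	"strconv"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// newUploader builds an uploader whose part size can be overridden through
// WEED_SINK_S3_UPLOADER_PART_SIZE_MB and whose MaxUploadParts is forced to the
// SDK maximum, so a huge file never runs out of parts.
func newUploader(sess *session.Session) *s3manager.Uploader {
	partSizeMB := int64(8) // illustrative default
	if v := os.Getenv("WEED_SINK_S3_UPLOADER_PART_SIZE_MB"); v != "" {
		if n, err := strconv.ParseInt(v, 10, 64); err == nil && n > 0 {
			partSizeMB = n
		}
	}
	return s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = partSizeMB * 1024 * 1024
		if u.PartSize < s3manager.MinUploadPartSize {
			u.PartSize = s3manager.MinUploadPartSize
		}
		u.MaxUploadParts = s3manager.MaxUploadParts // the S3 limit of 10,000 parts
	})
}
```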
* compare chunks by timestamp
* fix slab clearing error
* fix test compilation
* move oldest chunk to sealed, instead of by fullness
* lock on fh.entryViewCache
* remove verbose logs
* revert slab clearing
* less logs
* less logs
* track write and read by timestamp
* remove useless logic
* add entry lock on file handle release
* use mem chunk only, swap file chunk has problems
* comment out code that may be used later
* add debug mode to compare data read and write
* more efficient readResolvedChunks with linked list
* small optimization
* fix test compilation
* minor fix on writer
* add SeparateGarbageChunks
* group chunks into sections
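"Group chunks into sections" buckets a file's chunks into fixed-size offset ranges so that lookups and garbage separation only walk the affected range instead of the whole chunk list. A simplified illustration of that bookkeeping; the section size and type names are made up for this sketch:

```go
package sections // illustrative package name

const sectionSize int64 = 64 * 1024 * 1024 // illustrative fixed section size

type sectionIndex int64

type chunkSection struct {
	start, stop int64    // byte range [start, stop) covered by this section
	chunkIds    []string // chunks overlapping this section (simplified)
}

type chunkGroup struct {
	sections map[sectionIndex]*chunkSection
}

// addChunk registers a chunk with every section its byte range overlaps, so
// later lookups and garbage separation only need to scan the touched sections.
func (g *chunkGroup) addChunk(chunkId string, offset, size int64) {
	for idx := sectionIndex(offset / sectionSize); int64(idx)*sectionSize < offset+size; idx++ {
		sec, found := g.sections[idx]
		if !found {
			sec = &chunkSection{start: int64(idx) * sectionSize, stop: int64(idx+1) * sectionSize}
			g.sections[idx] = sec
		}
		sec.chunkIds = append(sec.chunkIds, chunkId)
	}
}
```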
* turn off debug mode
* fix tests
* fix tests
* tmp enable swap file chunk
* Revert "tmp enable swap file chunk"
This reverts commit 985137ec47.
* simple refactoring
* simple refactoring
* do not re-use swap file chunk. Sealed chunks should not be re-used.
* comment out debugging facilities
* either mem chunk or swap file chunk is fine now
* remove orderedMutex as *semaphore.Weighted
it was not found to be impactful
* optimize size calculation for changing large files
* optimize performance to avoid going through the long list of chunks
* still problems with swap file chunk
* rename
* tiny optimization
* swap file chunk saves only successfully read data
* fix
* enable both mem and swap file chunk
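With both backends enabled, the write buffer can keep a chunk in memory or spill it to a region of a swap file behind one interface. A stripped-down sketch of that shape; the interface and type names are illustrative, not the actual page-writer API:

```go
package pagewriter // illustrative package name

import "os"

// pageChunk is the common shape for a write-buffer chunk: memChunk keeps data
// in RAM, swapFileChunk spills it into a region of a shared swap file on disk.
type pageChunk interface {
	WriteAt(p []byte, off int64) (int, error)
	ReadAt(p []byte, off int64) (int, error)
	Free()
}

type memChunk struct {
	buf []byte // one chunk-sized buffer
}

func (c *memChunk) WriteAt(p []byte, off int64) (int, error) { return copy(c.buf[off:], p), nil }
func (c *memChunk) ReadAt(p []byte, off int64) (int, error)  { return copy(p, c.buf[off:]), nil }
func (c *memChunk) Free()                                    { c.buf = nil }

type swapFileChunk struct {
	file       *os.File
	fileOffset int64 // where this chunk's region starts inside the swap file
}

func (c *swapFileChunk) WriteAt(p []byte, off int64) (int, error) {
	return c.file.WriteAt(p, c.fileOffset+off)
}
func (c *swapFileChunk) ReadAt(p []byte, off int64) (int, error) {
	return c.file.ReadAt(p, c.fileOffset+off)
}
func (c *swapFileChunk) Free() {
	// a real implementation would return the region to the swap file's free list
}
```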
* resolve chunks with range
* rename
* fix chunk interval list
* also change file handle chunk group when adding chunks
* pick in-active chunk with time-decayed counter
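Picking the inactive chunk with a time-decayed counter means the chunk whose recent activity score is lowest is the one to seal or swap out first. A small sketch of an exponentially decayed counter; the half-life handling and names are illustrative:

```go
package chunks // illustrative package name

import (
	"math"
	"time"
)

// decayedCounter scores recent activity: the count is halved for every
// halfLife that has passed since the last update, so old accesses fade out.
type decayedCounter struct {
	count      float64
	lastUpdate time.Time
	halfLife   time.Duration
}

// Touch records one read or write at time now.
func (c *decayedCounter) Touch(now time.Time) {
	c.decay(now)
	c.count++
}

// Value returns the decayed score; the chunk with the smallest value is the
// least active one and is the candidate to seal or swap out first.
func (c *decayedCounter) Value(now time.Time) float64 {
	c.decay(now)
	return c.count
}

func (c *decayedCounter) decay(now time.Time) {
	if !c.lastUpdate.IsZero() {
		halfLives := now.Sub(c.lastUpdate).Seconds() / c.halfLife.Seconds()
		c.count *= math.Pow(0.5, halfLives)
	}
	c.lastUpdate = now
}
```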
* fix compilation
* avoid nil with empty fh.entry
* refactoring
* rename
* rename
* refactor visible intervals to *list.List
* refactor chunkViews to *list.List
* add IntervalList for generic interval list
* change visible interval to use IntervalList in generics
* change chunkViews to *IntervalList[*ChunkView]
* use NewFileChunkSection to create
* rename variables
* refactor
* fix renaming leftover
* renaming
* renaming
* add insert interval
* interval list adds lock
* incrementally add chunks to readers
Fixes:
1. set start and stop offset for the value object
2. clone the value object
3. use pointer instead of copy-by-value when passing to interval.Value
4. use insert interval since adding chunk could be out of order
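The interval-list refactor replaces ad-hoc slices with a generic, lock-protected list of intervals, and out-of-order chunk additions go through an insert that keeps offsets ordered. A much-reduced sketch of the value contract and the insert; this is not the actual SeaweedFS IntervalList API, and the overlap trimming is omitted:

```go
package interval // illustrative package name

import "sync"

// IntervalValue is the contract each interval carries: Clone and SetStartStop
// exist because inserting an overlapping interval may split an existing one
// into two trimmed copies (points 1-3 in the fixes above).
type IntervalValue[T any] interface {
	Clone() T
	SetStartStop(start, stop int64)
}

type Interval[T IntervalValue[T]] struct {
	Start, Stop int64
	Value       T
	Next        *Interval[T]
}

type IntervalList[T IntervalValue[T]] struct {
	mu   sync.Mutex
	head *Interval[T]
}

// InsertInterval places the new interval in offset order even when chunks
// arrive out of order; trimming of overlapped neighbors is omitted here.
func (l *IntervalList[T]) InsertInterval(start, stop int64, value T) {
	l.mu.Lock()
	defer l.mu.Unlock()
	node := &Interval[T]{Start: start, Stop: stop, Value: value}
	if l.head == nil || start < l.head.Start {
		node.Next = l.head
		l.head = node
		return
	}
	prev := l.head
	for prev.Next != nil && prev.Next.Start < start {
		prev = prev.Next
	}
	node.Next = prev.Next
	prev.Next = node
}
```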
* fix tests compilation
* fix tests compilation
Fix https://github.com/seaweedfs/seaweedfs/issues/3714
The destination chunks may be empty. For example, if the file is updated and the volume is vacuumed, the sync would miss the old chunks. That by itself is fine, but the destination entry would then have correct metadata while missing its chunks.
In that case, the simple metadata comparison would wrongly skip the data change, and the file would stay empty unless the file content md5 changed.
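In other words, the sync comparison has to look at the chunk list as well, not only at the metadata. A hedged sketch of that kind of check, using a simplified stand-in entry type rather than the real filer_pb.Entry:

```go
package backup // illustrative package name

import "bytes"

// entry is a simplified stand-in for the filer entry, used only in this sketch.
type entry struct {
	Mtime  int64
	Md5    []byte
	Chunks []string
}

// needsSync reports whether the destination entry must be rewritten. Comparing
// only mtime and md5 is not enough: after a vacuum the source can keep the same
// metadata while the destination has lost its chunks, so the chunk lists are
// compared as well.
func needsSync(source, dest *entry) bool {
	if source.Mtime != dest.Mtime || !bytes.Equal(source.Md5, dest.Md5) {
		return true
	}
	// Same metadata, but if the chunk sets differ (e.g. dest has none), sync anyway.
	return len(source.Chunks) != len(dest.Chunks)
}
```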
* remove old raft servers if they don't answer pings for too long
add ping durations as options
rename ping fields
fix some todos
get masters through masterclient
raft remove server from leader
use raft servers to ping them
CheckMastersAlive for hashicorp raft only
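The cleanup removes a master from the hashicorp raft configuration once it has gone unanswered for longer than the configured ping duration. A rough leader-side sketch; the ping bookkeeping and the helper name are illustrative, while GetConfiguration and RemoveServer are real hashicorp/raft calls:

```go
package master // illustrative package name

import (
	"time"

	"github.com/hashicorp/raft"
)

// removeDeadServers runs on the leader only ("CheckMastersAlive for hashicorp
// raft only"): any server whose last successful ping is older than maxPingAge
// is removed from the raft configuration via RemoveServer.
func removeDeadServers(r *raft.Raft, lastPingOk map[raft.ServerID]time.Time, maxPingAge time.Duration) error {
	if r.State() != raft.Leader {
		return nil
	}
	future := r.GetConfiguration()
	if err := future.Error(); err != nil {
		return err
	}
	for _, server := range future.Configuration().Servers {
		if server.Address == r.Leader() {
			continue // never remove ourselves
		}
		if last, ok := lastPingOk[server.ID]; ok && time.Since(last) < maxPingAge {
			continue // answered a ping recently enough
		}
		// prevIndex 0 and timeout 0 keep the library defaults.
		if err := r.RemoveServer(server.ID, 0, 0).Error(); err != nil {
			return err
		}
	}
	return nil
}
```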
* prepare blocking ping
* pass waitForReady as param
* pass waitForReady through all functions
* waitForReady works
* refactor
* remove unneeded params
* rollback unneeded changes
* fix
I have run a filer.backup test:
replication.toml:
[sink.local]
enabled = true
directory = "/srv/test"
___
system@dat1:/srv/test$ weed filer.backup -filer=app1:8888 -filerProxy
I0228 12:39:28 19571 filer_replication.go:129] Configure sink to local
I0228 12:39:28 19571 filer_backup.go:98] resuming from 2022-02-28 12:04:20.210984693 +0100 CET
I0228 12:39:29 19571 filer_backup.go:113] backup app1:8888 progressed to 2022-02-28 12:04:20.211726749 +0100 CET 0.33/sec
system@dat1:/srv/test$ ls -l
total 16
drwxr-xr-x 2 system system 4096 Feb 28 12:39 a
-rw-r--r-- 1 system system 48 Feb 28 12:39 fu.txt
-rw-r--r-- 1 system system 32 Feb 28 12:39 _index.html
-rw-r--r-- 1 system system 68 Feb 28 12:39 index.php
system@dat1:/srv/test$ cat fu.txt
? ?=?^??`?f^};?{4?Z%?X0=??rV????|"?1??踪~??
system@dat1:/srv/test$
On an active mount on the filer server (app1) it looks like this:
system@app1:/srv/app$ ls -l
total 2
drwxrwxr-x 1 system system 0 Feb 28 12:04 a
-rw-r--r-- 1 system system 20 Feb 28 12:04 fu.txt
-rw-r--r-- 1 system system 4 Feb 28 12:04 _index.html
-rw-r--r-- 1 system system 40 Feb 28 12:04 index.php
system@app1:/srv/app$ cat fu.txt
This is static boy!
The filer was started with: weed filer -master="app1:9333,app2:9333,app3:9333" -encryptVolumeData
It seems like the backed-up file content is still encrypted?