Chris Lu | 4b1ed227d1 | revert fasthttp changes (related to https://github.com/chrislusf/seaweedfs/issues/1907) | 4 years ago
Chris Lu | 7d9dc3c6a2 | use fasthttp lib to read | 4 years ago
Chris Lu | 487e435679 | adjust http max idle connections per host (related to https://github.com/chrislusf/seaweedfs/issues/1802) | 4 years ago
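
For reference, a minimal sketch (not the commit's actual code) of raising the per-host idle-connection limit on a Go net/http client; the field names come from the standard library, but the concrete numbers are placeholders:

```go
// Package example sketches HTTP patterns mentioned in this history.
package example

import (
	"net/http"
	"time"
)

// NewClient returns a client whose transport keeps more idle
// connections per host, so repeated requests to the same volume
// server reuse connections instead of re-dialing.
// The values 1024 and 90s are illustrative, not the commit's choices.
func NewClient() *http.Client {
	return &http.Client{
		Transport: &http.Transport{
			MaxIdleConns:        1024,
			MaxIdleConnsPerHost: 1024,
			IdleConnTimeout:     90 * time.Second,
		},
	}
}
```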
Chris Lu | 141ce67c09 | close http request body | 4 years ago
Chris Lu | 2bd6fd3bbe | remove unused function | 4 years ago
Chris Lu | 73f934d5de | s3: do not close reader too early (fix https://github.com/chrislusf/seaweedfs/issues/1609) | 4 years ago
Chris Lu | 3f7d1d1bf1 | Only wait on retryable requests | 4 years ago
Chris Lu | 4fc0bd1a81 | return http response directly | 4 years ago
Chris Lu | 5f55a87101 | close http response | 4 years ago
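
Several entries in this history ("close http request body", "close http response", and the later-listed "properly close" / "refactoring the close http response" ones) revolve around one Go idiom: drain a response body before closing it so the keep-alive connection can be reused. A generic sketch of that idiom, assuming Go 1.16+ for io.Discard; it is not the project's exact helper:

```go
package example

import (
	"io"
	"net/http"
)

// closeResponse drains and closes a response body so the underlying
// keep-alive connection can go back into the transport's pool.
func closeResponse(resp *http.Response) {
	if resp == nil || resp.Body == nil {
		return
	}
	_, _ = io.Copy(io.Discard, resp.Body) // discard unread bytes; error intentionally ignored
	_ = resp.Body.Close()
}
```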
Chris Lu | 1b3a80dd3d | non-fatal error | 4 years ago
Chris Lu | bbbea8159c | http request use gzip if possible | 4 years ago
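
A hedged sketch of asking for gzip explicitly and decompressing the reply when the server honors it. Note that net/http already negotiates gzip transparently when Accept-Encoding is left unset, so the manual branch below only matters when the header is set by hand; the Get function name is illustrative:

```go
package example

import (
	"compress/gzip"
	"io"
	"net/http"
)

// Get fetches a URL, explicitly requesting gzip and decompressing the
// body when the server responds with Content-Encoding: gzip.
func Get(url string) ([]byte, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Accept-Encoding", "gzip")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var body io.Reader = resp.Body
	if resp.Header.Get("Content-Encoding") == "gzip" {
		gz, err := gzip.NewReader(resp.Body)
		if err != nil {
			return nil, err
		}
		defer gz.Close()
		body = gz
	}
	return io.ReadAll(body)
}
```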
Chris Lu | 2f03481cb2 | in case when content is not compressed | 4 years ago
Chris Lu | 3080c197e3 | rename UnCompressData to DecompressData | 5 years ago
Chris Lu | e912fd15e3 | renaming | 5 years ago
Chris Lu | e0f5996560 | fix "call of Unmarshal passes non-pointer as second argument" | 5 years ago
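
The vet message quoted in this commit comes from handing encoding/json a value instead of a pointer. A tiny illustration; the UploadResult struct here is a stand-in, not the type the commit actually touched:

```go
package example

import "encoding/json"

// UploadResult is a stand-in struct for illustration only.
type UploadResult struct {
	Name string `json:"name"`
	Size int    `json:"size"`
}

// parse decodes data into an UploadResult.
func parse(data []byte) (UploadResult, error) {
	var r UploadResult
	// json.Unmarshal(data, r) would trigger go vet's
	// "call of Unmarshal passes non-pointer as second argument";
	// the destination must be a pointer so the decoded value is stored.
	err := json.Unmarshal(data, &r)
	return r, err
}
```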
Chris Lu | 057722bbf4 | return part of the chunk if chunkview is not the full chunk | 5 years ago
Chris Lu | 2e3f6ad3a9 | filer: remember content is gzipped or not | 5 years ago
Chris Lu | 13e215ee5c | filer: option to encrypt data on volume server | 5 years ago
Chris Lu | 96c1ae8471 | refactoring the close http response | 5 years ago
Chris Lu | cf5064d702 | properly close http response | 5 years ago
Chris Lu | 33b3bd467c | Revert "HEAD operation changes to fasthttp" (reverts commit 58f126fd27) | 5 years ago
Chris Lu | 58f126fd27 | HEAD operation changes to fasthttp | 5 years ago
Chris Lu | a80ecbfe84 | s3: add s3 copy (fix https://github.com/chrislusf/seaweedfs/issues/1190) | 5 years ago
Chris Lu | 6a5c037099 | fix http range requests | 5 years ago
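
A minimal sketch of the range-read pattern behind this fix: send a Range header and insist on a 206 Partial Content reply. ReadRange and its signature are illustrative, not the project's API:

```go
package example

import (
	"fmt"
	"io"
	"net/http"
)

// ReadRange reads size bytes starting at offset from url with an HTTP
// Range request, and fails if the server ignored the range.
func ReadRange(url string, offset, size int64) ([]byte, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+size-1))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusPartialContent {
		return nil, fmt.Errorf("expected 206 Partial Content, got %s", resp.Status)
	}
	return io.ReadAll(resp.Body)
}
```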
Chris Lu | 89e16bd2e8 | skip error when draining reader (fix https://github.com/chrislusf/seaweedfs/issues/1179) | 5 years ago
Chris Lu | 1fd8926ac7 | ignore draining error | 5 years ago
divinerapier | 4cbb6fa199 | feat: drains http body if buffer is too small (Signed-off-by: divinerapier <poriter.coco@gmail.com>) | 5 years ago
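
A sketch of the draining idea shared by this commit and the two "draining" entries above it: read what fits into the caller's buffer, then discard the remainder so the connection stays reusable. ReadInto is a made-up name, and ignoring the drain error mirrors the "ignore draining error" change:

```go
package example

import (
	"io"
	"net/http"
)

// ReadInto fills buf from the response body, then drains whatever is
// left so the keep-alive connection can be reused even when buf is
// smaller than the response.
func ReadInto(resp *http.Response, buf []byte) (int, error) {
	defer resp.Body.Close()

	n, err := io.ReadFull(resp.Body, buf)
	if err == io.EOF || err == io.ErrUnexpectedEOF {
		err = nil // body shorter than buf; not an error for this caller
	}
	_, _ = io.Copy(io.Discard, resp.Body) // drain the rest; error deliberately ignored
	return n, err
}
```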
divinerapier | 84640d07b7 | fix: handle errors for ReadUrl (Signed-off-by: divinerapier <poriter.coco@gmail.com>) | 5 years ago
Chris Lu | 392678f8f3 | upload skipping mimetype if not needed | 5 years ago
Chris Lu | 20d90dea5a | filer: avoid hard-coded upload timeout | 5 years ago
Chris Lu | 98a03b38e5 | avoid util package depending on security package | 6 years ago
chenwanli | 39c7455881 | Set http timeout to 5s | 6 years ago
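
A minimal sketch of a client-wide request timeout; only the 5-second value comes from this commit message, the rest is plain net/http usage (the "avoid hard-coded upload timeout" entry above later makes such a value configurable):

```go
package example

import (
	"net/http"
	"time"
)

// Client applies an overall deadline to each request, so a hung
// volume server cannot block a reader forever.
var Client = &http.Client{
	Timeout: 5 * time.Second,
}
```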
Chris Lu | 1bfb96f34d | optimization for reading whole chunk with gzip encoding | 6 years ago
Chris Lu | a6cfaba018 | able to sync the changes | 6 years ago
Chris Lu | 865a017936 | fix when buffer is not aligned | 6 years ago
Chris Lu | 0d98949199 | tmp commit | 7 years ago
Chris Lu | 07e0d13d2d | filer support reading multiple chunks, with range support | 7 years ago
Chris Lu | 8b0718ac92 | go vet | 7 years ago
Chris Lu | c11d84f314 | fix reading from a url | 7 years ago
Chris Lu | d773e11c7a | file handler directly reads from volume servers; this mostly works fine now! next: need to cache files to local disk | 7 years ago
kelgon | 3bf883327e | (fix #543) added body to error when Post encounters a 4xx response | 8 years ago
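
A hedged sketch of the error-reporting pattern in this fix: on a 4xx reply, read the body and fold it into the returned error so the caller can see why the request was rejected. The Post helper and its content type are illustrative:

```go
package example

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// Post sends body to url and, on a 4xx status, returns an error that
// includes the response body text.
func Post(url, body string) ([]byte, error) {
	resp, err := http.Post(url, "application/octet-stream", strings.NewReader(body))
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	data, err := io.ReadAll(resp.Body)
	if err != nil {
		return nil, err
	}
	if resp.StatusCode >= 400 && resp.StatusCode < 500 {
		return nil, fmt.Errorf("%s: %s", resp.Status, string(data))
	}
	return data, nil
}
```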
sparklxb | c46e91d229 | complement weed mount: add read and delete | 8 years ago
wangjie | 90a6f43c56 | fix the bug that we can't get the filename when downloading a file | 8 years ago
Chris Lu | a57162e8bf | delete operation does not need this checking | 9 years ago
Chris Lu | cdae9fc680 | add "weed copy" command to copy files to filer | 9 years ago
Chris Lu | 5ce6bbf076 | directory structure change to work with glide (glide has its own requirements; my previous workaround caused some code check-in errors, which still need fixing) | 9 years ago
chrislusf | e921cb1a9d | format changes | 9 years ago
tnextday | b177afc326 | `weed download` command uses streaming to download large files | 9 years ago
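
A minimal sketch of the streaming idea behind this change: copy the response body straight to a file instead of buffering the whole object in memory. Download is an illustrative helper, not the actual `weed download` code:

```go
package example

import (
	"io"
	"net/http"
	"os"
)

// Download streams the HTTP response body directly into a local file,
// which keeps memory use flat even for very large objects.
func Download(url, path string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(path)
	if err != nil {
		return err
	}
	defer f.Close()

	_, err = io.Copy(f, resp.Body)
	return err
}
```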
tnextday | daac5de1ba | more checks in `http_util.Delete`; add status code to the `DeleteResult` struct; operation.DeleteFiles may be unsafe, so `ChunkManifest.DeleteChunks` deletes each chunk manually | 9 years ago
chrislusf | 86cd40fba8 | Add "weed backup" command (a precursor for asynchronous replication) | 10 years ago