FEATURE: add JWT to HTTP endpoints of Filer and use them in S3 Client

- one JWT for reading and one for writing, analogous to how the JWT between Master and Volume Server works
- I did not implement the IP `whiteList` parameter on the filer

Additionally, because http_util.DownloadFile now sets the JWT, the `download` command should now work when `jwt.signing.read` is configured. Looking at the code, I think this case did not work before.

## Docs to be adjusted after a release

Page `Amazon-S3-API`:

```
# Authentication with Filer

You can use mTLS for the gRPC connection between S3-API-Proxy and the filer, as explained in [Security-Configuration](Security-Configuration) - controlled by the `grpc.*` configuration in `security.toml`.

Starting with version XX, it is also possible to authenticate the HTTP operations between the S3-API-Proxy and the Filer (especially uploading new files). This is configured by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`.

With both configurations (gRPC and JWT), it is possible to have Filer and S3 communicate in a fully authenticated fashion, so the Filer will reject any unauthenticated communication.
```

Page `Security Overview`:

```
The following items are not covered, yet:

- master server http REST services

Starting with version XX, the Filer HTTP REST services can be secured with a JWT, by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`.

...

Before version XX: "weed filer -disableHttp" disables HTTP operations; only gRPC operations are allowed. This works with "weed mount" by FUSE. It does **not work** with the [S3 Gateway](Amazon S3 API), as it makes HTTP calls to the Filer.

Starting with version XX: secured by JWT, by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`. **This now works with the [S3 Gateway](Amazon S3 API).**

...

# Securing Filer HTTP with JWT

To enable JWT-based access control for the Filer,

1. generate a `security.toml` file by `weed scaffold -config=security`
2. set `filer_jwt.signing.key` to a secret string - and optionally `filer_jwt.signing.read.key` as well to a secret string
3. copy the same `security.toml` file to the filers and all S3 proxies.

If `filer_jwt.signing.key` is configured: when sending upload/update/delete HTTP operations to a filer server, the request header `Authorization` should be the JWT string (`Authorization: Bearer [JwtToken]`). The operation is authorized after the filer validates the JWT with `filer_jwt.signing.key`.

If `filer_jwt.signing.read.key` is configured: when sending GET or HEAD requests to a filer server, the request header `Authorization` should be the JWT string (`Authorization: Bearer [JwtToken]`). The operation is authorized after the filer validates the JWT with `filer_jwt.signing.read.key`.

The S3 API Gateway reads the above JWT keys and sends authenticated HTTP requests to the filer.
```

Page `Security Configuration`:

```
(update scaffold file)
...
[filer_jwt.signing]
key = "blahblahblahblah"

[filer_jwt.signing.read]
key = "blahblahblahblah"
```

Resolves: #158
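
The snippet below is a minimal sketch of what such an authenticated HTTP call to the filer looks like from any client. The filer address, object path, and token value are placeholders (not taken from this change); the token must be one signed with the configured `filer_jwt.signing.key`.

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Placeholder values: a filer listening on localhost:8888 and a JWT that was
	// signed with the key configured as filer_jwt.signing.key in security.toml.
	const filerUrl = "http://localhost:8888/buckets/demo/hello.txt"
	const writeJwt = "<jwt signed with filer_jwt.signing.key>"

	req, err := http.NewRequest("PUT", filerUrl, bytes.NewReader([]byte("hello")))
	if err != nil {
		panic(err)
	}
	// The write JWT travels in the standard Authorization header as a bearer token.
	req.Header.Set("Authorization", "Bearer "+writeJwt)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	// The filer rejects the request if the JWT is missing or invalid.
	fmt.Println(resp.Status)
}
```
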
package util

import (
	"compress/gzip"
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
	"time"

	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/util/mem"
)

var (
	client    *http.Client
	Transport *http.Transport
)

func init() {
	Transport = &http.Transport{
		MaxIdleConns:        1024,
		MaxIdleConnsPerHost: 1024,
	}
	client = &http.Client{
		Transport: Transport,
	}
}

// Post sends a form-encoded POST request and returns the response body.
func Post(url string, values url.Values) ([]byte, error) {
	r, err := client.PostForm(url, values)
	if err != nil {
		return nil, err
	}
	defer r.Body.Close()
	b, err := io.ReadAll(r.Body)
	if r.StatusCode >= 400 {
		if err != nil {
			return nil, fmt.Errorf("%s: %d - %s", url, r.StatusCode, string(b))
		} else {
			return nil, fmt.Errorf("%s: %s", url, r.Status)
		}
	}
	if err != nil {
		return nil, err
	}
	return b, nil
}

// github.com/seaweedfs/seaweedfs/unmaintained/repeated_vacuum/repeated_vacuum.go
// may need increasing http.Client.Timeout

// Get fetches the url (accepting gzip) and returns the body, together with a
// flag that reports whether a failed request is worth retrying.
func Get(url string) ([]byte, bool, error) {

	request, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, true, err
	}
	request.Header.Add("Accept-Encoding", "gzip")

	response, err := client.Do(request)
	if err != nil {
		return nil, true, err
	}
	defer CloseResponse(response)

	var reader io.ReadCloser
	switch response.Header.Get("Content-Encoding") {
	case "gzip":
		reader, err = gzip.NewReader(response.Body)
		if err != nil {
			return nil, true, err
		}
		defer reader.Close()
	default:
		reader = response.Body
	}

	b, err := io.ReadAll(reader)
	if response.StatusCode >= 400 {
		retryable := response.StatusCode >= 500
		return nil, retryable, fmt.Errorf("%s: %s", url, response.Status)
	}
	if err != nil {
		return nil, false, err
	}
	return b, false, nil
}

func Head(url string) (http.Header, error) {
	r, err := client.Head(url)
	if err != nil {
		return nil, err
	}
	defer CloseResponse(r)
	if r.StatusCode >= 400 {
		return nil, fmt.Errorf("%s: %s", url, r.Status)
	}
	return r.Header, nil
}

// Delete issues a DELETE request, attaching the JWT (if any) as a bearer token.
func Delete(url string, jwt string) error {
	req, err := http.NewRequest("DELETE", url, nil)
	// check the error before touching req, so a malformed URL cannot cause a nil dereference
	if err != nil {
		return err
	}
	if jwt != "" {
		req.Header.Set("Authorization", "BEARER "+jwt)
	}
	resp, e := client.Do(req)
	if e != nil {
		return e
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	switch resp.StatusCode {
	case http.StatusNotFound, http.StatusAccepted, http.StatusOK:
		return nil
	}
	m := make(map[string]interface{})
	if e := json.Unmarshal(body, &m); e == nil {
		if s, ok := m["error"].(string); ok {
			return errors.New(s)
		}
	}
	return errors.New(string(body))
}

// DeleteProxied is like Delete but returns the raw response body and status code.
func DeleteProxied(url string, jwt string) (body []byte, httpStatus int, err error) {
	req, err := http.NewRequest("DELETE", url, nil)
	if err != nil {
		return
	}
	if jwt != "" {
		req.Header.Set("Authorization", "BEARER "+jwt)
	}
	resp, err := client.Do(req)
	if err != nil {
		return
	}
	defer resp.Body.Close()
	body, err = io.ReadAll(resp.Body)
	if err != nil {
		return
	}
	httpStatus = resp.StatusCode
	return
}

func GetBufferStream(url string, values url.Values, allocatedBytes []byte, eachBuffer func([]byte)) error {
	r, err := client.PostForm(url, values)
	if err != nil {
		return err
	}
	defer CloseResponse(r)
	if r.StatusCode != 200 {
		return fmt.Errorf("%s: %s", url, r.Status)
	}
	for {
		n, err := r.Body.Read(allocatedBytes)
		if n > 0 {
			eachBuffer(allocatedBytes[:n])
		}
		if err != nil {
			if err == io.EOF {
				return nil
			}
			return err
		}
	}
}

func GetUrlStream(url string, values url.Values, readFn func(io.Reader) error) error {
	r, err := client.PostForm(url, values)
	if err != nil {
		return err
	}
	defer CloseResponse(r)
	if r.StatusCode != 200 {
		return fmt.Errorf("%s: %s", url, r.Status)
	}
	return readFn(r.Body)
}

// DownloadFile fetches fileUrl, forwarding the JWT (if any) as a bearer token,
// and extracts the file name from the Content-Disposition response header.
func DownloadFile(fileUrl string, jwt string) (filename string, header http.Header, resp *http.Response, e error) {
	req, err := http.NewRequest("GET", fileUrl, nil)
	if err != nil {
		return "", nil, nil, err
	}
	if len(jwt) > 0 {
		req.Header.Set("Authorization", "BEARER "+jwt)
	}
	response, err := client.Do(req)
	if err != nil {
		return "", nil, nil, err
	}
	header = response.Header
	contentDisposition := response.Header["Content-Disposition"]
	if len(contentDisposition) > 0 {
		idx := strings.Index(contentDisposition[0], "filename=")
		if idx != -1 {
			filename = contentDisposition[0][idx+len("filename="):]
			filename = strings.Trim(filename, "\"")
		}
	}
	resp = response
	return
}

func Do(req *http.Request) (resp *http.Response, err error) {
	return client.Do(req)
}

func NormalizeUrl(url string) string {
	if strings.HasPrefix(url, "http://") || strings.HasPrefix(url, "https://") {
		return url
	}
	return "http://" + url
}

// ReadUrl reads up to len(buf) bytes from fileUrl into buf, decrypting and
// decompressing as needed, and returns the number of bytes read.
func ReadUrl(fileUrl string, cipherKey []byte, isContentCompressed bool, isFullChunk bool, offset int64, size int, buf []byte) (int64, error) {

	if cipherKey != nil {
		var n int
		_, err := readEncryptedUrl(fileUrl, cipherKey, isContentCompressed, isFullChunk, offset, size, func(data []byte) {
			n = copy(buf, data)
		})
		return int64(n), err
	}

	req, err := http.NewRequest("GET", fileUrl, nil)
	if err != nil {
		return 0, err
	}
	if !isFullChunk {
		req.Header.Add("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+int64(size)-1))
	} else {
		req.Header.Set("Accept-Encoding", "gzip")
	}

	r, err := client.Do(req)
	if err != nil {
		return 0, err
	}
	defer CloseResponse(r)
	if r.StatusCode >= 400 {
		return 0, fmt.Errorf("%s: %s", fileUrl, r.Status)
	}

	var reader io.ReadCloser
	contentEncoding := r.Header.Get("Content-Encoding")
	switch contentEncoding {
	case "gzip":
		reader, err = gzip.NewReader(r.Body)
		if err != nil {
			return 0, err
		}
		defer reader.Close()
	default:
		reader = r.Body
	}

	var (
		i, m int
		n    int64
	)
	// refers to https://github.com/golang/go/blob/master/src/bytes/buffer.go#L199
	// commit id c170b14c2c1cfb2fd853a37add92a82fd6eb4318
	for {
		m, err = reader.Read(buf[i:])
		i += m
		n += int64(m)
		if err == io.EOF {
			return n, nil
		}
		if err != nil {
			return n, err
		}
		if n == int64(len(buf)) {
			break
		}
	}
	// drains the response body to avoid memory leak
	data, _ := io.ReadAll(reader)
	if len(data) != 0 {
		glog.V(1).Infof("%s reader has remaining %d bytes", contentEncoding, len(data))
	}
	return n, err
}

// ReadUrlAsStream streams the requested range to fn in 64 KiB buffers and
// reports whether a failure is retryable.
func ReadUrlAsStream(fileUrl string, cipherKey []byte, isContentGzipped bool, isFullChunk bool, offset int64, size int, fn func(data []byte)) (retryable bool, err error) {

	if cipherKey != nil {
		return readEncryptedUrl(fileUrl, cipherKey, isContentGzipped, isFullChunk, offset, size, fn)
	}

	req, err := http.NewRequest("GET", fileUrl, nil)
	if err != nil {
		return false, err
	}

	if isFullChunk {
		req.Header.Add("Accept-Encoding", "gzip")
	} else {
		req.Header.Add("Range", fmt.Sprintf("bytes=%d-%d", offset, offset+int64(size)-1))
	}

	r, err := client.Do(req)
	if err != nil {
		return true, err
	}
	defer CloseResponse(r)
	if r.StatusCode >= 400 {
		retryable = r.StatusCode == http.StatusNotFound || r.StatusCode >= 500
		return retryable, fmt.Errorf("%s: %s", fileUrl, r.Status)
	}

	var reader io.ReadCloser
	contentEncoding := r.Header.Get("Content-Encoding")
	switch contentEncoding {
	case "gzip":
		reader, err = gzip.NewReader(r.Body)
		if err != nil {
			// without this check a failed gzip.NewReader leaves reader nil
			return true, err
		}
		defer reader.Close()
	default:
		reader = r.Body
	}

	var (
		m int
	)
	buf := mem.Allocate(64 * 1024)
	defer mem.Free(buf)

	for {
		m, err = reader.Read(buf)
		if m > 0 {
			fn(buf[:m])
		}
		if err == io.EOF {
			return false, nil
		}
		if err != nil {
			return true, err
		}
	}
}

func readEncryptedUrl(fileUrl string, cipherKey []byte, isContentCompressed bool, isFullChunk bool, offset int64, size int, fn func(data []byte)) (bool, error) {
	encryptedData, retryable, err := Get(fileUrl)
	if err != nil {
		return retryable, fmt.Errorf("fetch %s: %v", fileUrl, err)
	}
	decryptedData, err := Decrypt(encryptedData, CipherKey(cipherKey))
	if err != nil {
		return false, fmt.Errorf("decrypt %s: %v", fileUrl, err)
	}
	if isContentCompressed {
		decryptedData, err = DecompressData(decryptedData)
		if err != nil {
			glog.V(0).Infof("unzip decrypt %s: %v", fileUrl, err)
		}
	}
	if len(decryptedData) < int(offset)+size {
		return false, fmt.Errorf("read decrypted %s size %d [%d, %d)", fileUrl, len(decryptedData), offset, int(offset)+size)
	}
	if isFullChunk {
		fn(decryptedData)
	} else {
		fn(decryptedData[int(offset) : int(offset)+size])
	}
	return false, nil
}

func ReadUrlAsReaderCloser(fileUrl string, jwt string, rangeHeader string) (*http.Response, io.ReadCloser, error) {
	req, err := http.NewRequest("GET", fileUrl, nil)
	if err != nil {
		return nil, nil, err
	}
	if rangeHeader != "" {
		req.Header.Add("Range", rangeHeader)
	} else {
		req.Header.Add("Accept-Encoding", "gzip")
	}
	if len(jwt) > 0 {
		req.Header.Set("Authorization", "BEARER "+jwt)
	}

	r, err := client.Do(req)
	if err != nil {
		return nil, nil, err
	}
	if r.StatusCode >= 400 {
		CloseResponse(r)
		return nil, nil, fmt.Errorf("%s: %s", fileUrl, r.Status)
	}

	var reader io.ReadCloser
	contentEncoding := r.Header.Get("Content-Encoding")
	switch contentEncoding {
	case "gzip":
		reader, err = gzip.NewReader(r.Body)
		if err != nil {
			return nil, nil, err
		}
	default:
		reader = r.Body
	}

	return r, reader, nil
}

// CloseResponse drains and closes the response body so the underlying
// connection can be reused, logging any leftover bytes.
func CloseResponse(resp *http.Response) {
	if resp == nil || resp.Body == nil {
		return
	}
	reader := &CountingReader{reader: resp.Body}
	io.Copy(io.Discard, reader)
	resp.Body.Close()
	if reader.BytesRead > 0 {
		glog.V(1).Infof("response leftover %d bytes", reader.BytesRead)
	}
}

// CloseRequest drains and closes a request body, logging any leftover bytes.
func CloseRequest(req *http.Request) {
	reader := &CountingReader{reader: req.Body}
	io.Copy(io.Discard, reader)
	req.Body.Close()
	if reader.BytesRead > 0 {
		glog.V(1).Infof("request leftover %d bytes", reader.BytesRead)
	}
}

// CountingReader wraps an io.Reader and counts the bytes read through it.
type CountingReader struct {
	reader    io.Reader
	BytesRead int
}

func (r *CountingReader) Read(p []byte) (n int, err error) {
	n, err = r.reader.Read(p)
	r.BytesRead += n
	return n, err
}

// RetriedFetchChunkData reads a chunk into buffer, trying each url in turn and
// retrying retryable failures with a growing wait time (up to RetryWaitTime).
func RetriedFetchChunkData(buffer []byte, urlStrings []string, cipherKey []byte, isGzipped bool, isFullChunk bool, offset int64) (n int, err error) {

	var shouldRetry bool

	for waitTime := time.Second; waitTime < RetryWaitTime; waitTime += waitTime / 2 {
		for _, urlString := range urlStrings {
			n = 0
			if strings.Contains(urlString, "%") {
				urlString = url.PathEscape(urlString)
			}
			shouldRetry, err = ReadUrlAsStream(urlString+"?readDeleted=true", cipherKey, isGzipped, isFullChunk, offset, len(buffer), func(data []byte) {
				if n < len(buffer) {
					x := copy(buffer[n:], data)
					n += x
				}
			})
			if !shouldRetry {
				break
			}
			if err != nil {
				glog.V(0).Infof("read %s failed, err: %v", urlString, err)
			} else {
				break
			}
		}
		if err != nil && shouldRetry {
			glog.V(0).Infof("retry reading in %v", waitTime)
			time.Sleep(waitTime)
		} else {
			break
		}
	}

	return n, err
}
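
The following is a short, hypothetical usage sketch of the helpers above, assuming the import path implied by this file's own imports (`github.com/seaweedfs/seaweedfs/weed/util`); the volume-server URL and the JWT value are placeholders.

```go
package main

import (
	"fmt"

	"github.com/seaweedfs/seaweedfs/weed/util"
)

func main() {
	// Placeholder volume-server URL and read JWT; both depend on the deployment.
	fileUrl := "http://localhost:8080/3,01637037d6"
	readJwt := "<jwt signed with the configured read key>"

	// Plain GET: the second return value reports whether a failure is retryable.
	if body, retryable, err := util.Get(fileUrl); err != nil {
		fmt.Printf("get failed (retryable=%v): %v\n", retryable, err)
	} else {
		fmt.Printf("read %d bytes\n", len(body))
	}

	// Authenticated download: DownloadFile forwards the JWT as a bearer token
	// and extracts the file name from the Content-Disposition header.
	filename, _, resp, err := util.DownloadFile(fileUrl, readJwt)
	if err == nil {
		defer util.CloseResponse(resp)
		fmt.Println("downloaded", filename)
	}
}
```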