FEATURE: add JWT to HTTP endpoints of Filer and use them in the S3 client

- one JWT for reading and one for writing, analogous to how the JWT between Master and Volume Server works
- I did not implement the IP `whiteList` parameter on the filer

Additionally, because `http_util.DownloadFile` now sets the JWT, the `download` command should now work when `jwt.signing.read` is configured. By looking at the code, I think this case did not work before.

## Docs to be adjusted after a release

Page `Amazon-S3-API`:

```
# Authentication with Filer

You can use mTLS for the gRPC connection between the S3-API-Proxy and the Filer, as explained in [Security-Configuration](Security-Configuration) - controlled by the `grpc.*` configuration in `security.toml`.

Starting with version XX, it is also possible to authenticate the HTTP operations between the S3-API-Proxy and the Filer (especially uploading new files). This is configured by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`.

With both configurations (gRPC and JWT) in place, the Filer and the S3 gateway communicate in a fully authenticated fashion, and the Filer rejects any unauthenticated communication.
```

Page `Security Overview`:

```
The following items are not covered, yet:

- master server http REST services

Starting with version XX, the Filer HTTP REST services can be secured with a JWT, by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`.

...

Before version XX: "weed filer -disableHttp" disables HTTP operations; only gRPC operations are allowed. This works with "weed mount" by FUSE. It does **not work** with the [S3 Gateway](Amazon S3 API), as it makes HTTP calls to the Filer.

Starting with version XX: secured by JWT, by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`. **This now works with the [S3 Gateway](Amazon S3 API).**

...

# Securing Filer HTTP with JWT

To enable JWT-based access control for the Filer,

1. generate a `security.toml` file with `weed scaffold -config=security`
2. set `filer_jwt.signing.key` to a secret string - and optionally `filer_jwt.signing.read.key` as well
3. copy the same `security.toml` file to the filers and all S3 proxies.

If `filer_jwt.signing.key` is configured: When sending upload/update/delete HTTP operations to a filer server, the request header `Authorization` should be the JWT string (`Authorization: Bearer [JwtToken]`). The operation is authorized after the filer validates the JWT with `filer_jwt.signing.key`.

If `filer_jwt.signing.read.key` is configured: When sending GET or HEAD requests to a filer server, the request header `Authorization` should be the JWT string (`Authorization: Bearer [JwtToken]`). The operation is authorized after the filer validates the JWT with `filer_jwt.signing.read.key`.

The S3 API Gateway reads the above JWT keys and sends authenticated HTTP requests to the filer.
```

Page `Security Configuration`:

```
(update scaffold file)
...
[filer_jwt.signing]
key = "blahblahblahblah"

[filer_jwt.signing.read]
key = "blahblahblahblah"
```

Resolves: #158
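For reference, a minimal sketch (not part of this change) of how a standalone client could talk to a JWT-protected filer, reusing the `security.GenJwtForFilerServer` helper that the S3 gateway calls below. The filer address, object path, signing key, and expiry are placeholder values; the key must match `filer_jwt.signing.key` in the filer's `security.toml`, and `uploadWithFilerJwt` is a hypothetical helper for illustration only:

```go
package main

import (
	"log"
	"net/http"
	"strings"

	"github.com/chrislusf/seaweedfs/weed/security"
)

// uploadWithFilerJwt sends an authenticated PUT to a filer HTTP endpoint.
// Placeholder values throughout; the signing key must match filer_jwt.signing.key on the filer.
func uploadWithFilerJwt(filerUrl, path, content string) error {
	// generate a short-lived write JWT (10s expiry here, purely illustrative)
	jwt := security.GenJwtForFilerServer(security.SigningKey("blahblahblahblah"), 10)

	req, err := http.NewRequest("PUT", filerUrl+path, strings.NewReader(content))
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+string(jwt))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	if err := uploadWithFilerJwt("http://localhost:8888", "/buckets/demo/hello.txt", "hello"); err != nil {
		log.Fatal(err)
	}
}
```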
package s3api

import (
	"bytes"
	"crypto/md5"
	"encoding/json"
	"encoding/xml"
	"fmt"
	"github.com/chrislusf/seaweedfs/weed/security"
	"github.com/chrislusf/seaweedfs/weed/util/mem"
	"io"
	"net/http"
	"net/url"
	"sort"
	"strings"
	"time"

	"github.com/chrislusf/seaweedfs/weed/filer"
	"github.com/pquerna/cachecontrol/cacheobject"

	xhttp "github.com/chrislusf/seaweedfs/weed/s3api/http"
	"github.com/chrislusf/seaweedfs/weed/s3api/s3err"

	"github.com/chrislusf/seaweedfs/weed/glog"
	"github.com/chrislusf/seaweedfs/weed/pb/filer_pb"
	weed_server "github.com/chrislusf/seaweedfs/weed/server"
	"github.com/chrislusf/seaweedfs/weed/util"
)

var (
	client *http.Client
)

func init() {
	client = &http.Client{Transport: &http.Transport{
		MaxIdleConns:        1024,
		MaxIdleConnsPerHost: 1024,
	}}
}

// mimeDetect sniffs up to 512 bytes of the request body to fill in a missing
// Content-Type header, then returns a reader that replays the sniffed bytes.
func mimeDetect(r *http.Request, dataReader io.Reader) io.ReadCloser {
	mimeBuffer := make([]byte, 512)
	size, _ := dataReader.Read(mimeBuffer)
	if size > 0 {
		r.Header.Set("Content-Type", http.DetectContentType(mimeBuffer[:size]))
		return io.NopCloser(io.MultiReader(bytes.NewReader(mimeBuffer[:size]), dataReader))
	}
	return io.NopCloser(dataReader)
}
func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request) {

	// http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html

	bucket, object := xhttp.GetBucketAndObject(r)
	glog.V(3).Infof("PutObjectHandler %s %s", bucket, object)

	_, err := validateContentMd5(r.Header)
	if err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidDigest)
		return
	}

	if r.Header.Get("Cache-Control") != "" {
		if _, err = cacheobject.ParseRequestCacheControl(r.Header.Get("Cache-Control")); err != nil {
			s3err.WriteErrorResponse(w, r, s3err.ErrInvalidDigest)
			return
		}
	}

	if r.Header.Get("Expires") != "" {
		if _, err = time.Parse(http.TimeFormat, r.Header.Get("Expires")); err != nil {
			s3err.WriteErrorResponse(w, r, s3err.ErrInvalidDigest)
			return
		}
	}

	dataReader := r.Body
	rAuthType := getRequestAuthType(r)
	if s3a.iam.isEnabled() {
		var s3ErrCode s3err.ErrorCode
		switch rAuthType {
		case authTypeStreamingSigned:
			dataReader, s3ErrCode = s3a.iam.newSignV4ChunkedReader(r)
		case authTypeSignedV2, authTypePresignedV2:
			_, s3ErrCode = s3a.iam.isReqAuthenticatedV2(r)
		case authTypePresigned, authTypeSigned:
			_, s3ErrCode = s3a.iam.reqSignatureV4Verify(r)
		}
		if s3ErrCode != s3err.ErrNone {
			s3err.WriteErrorResponse(w, r, s3ErrCode)
			return
		}
	} else {
		if authTypeStreamingSigned == rAuthType {
			s3err.WriteErrorResponse(w, r, s3err.ErrAuthNotSetup)
			return
		}
	}
	defer dataReader.Close()

	if strings.HasSuffix(object, "/") {
		if err := s3a.mkdir(s3a.option.BucketsPath, bucket+object, nil); err != nil {
			s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
			return
		}
	} else {
		uploadUrl := fmt.Sprintf("http://%s%s/%s%s", s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, bucket, urlPathEscape(object))

		if r.Header.Get("Content-Type") == "" {
			dataReader = mimeDetect(r, dataReader)
		}

		etag, errCode := s3a.putToFiler(r, uploadUrl, dataReader)
		if errCode != s3err.ErrNone {
			s3err.WriteErrorResponse(w, r, errCode)
			return
		}

		setEtag(w, etag)
	}

	writeSuccessResponseEmpty(w, r)
}

// urlPathEscape escapes each path segment individually so that the "/"
// separators in the object key are preserved in the filer URL.
func urlPathEscape(object string) string {
	var escapedParts []string
	for _, part := range strings.Split(object, "/") {
		escapedParts = append(escapedParts, url.PathEscape(part))
	}
	return strings.Join(escapedParts, "/")
}
func (s3a *S3ApiServer) GetObjectHandler(w http.ResponseWriter, r *http.Request) {

	bucket, object := xhttp.GetBucketAndObject(r)
	glog.V(3).Infof("GetObjectHandler %s %s", bucket, object)

	if strings.HasSuffix(r.URL.Path, "/") {
		s3err.WriteErrorResponse(w, r, s3err.ErrNotImplemented)
		return
	}

	destUrl := fmt.Sprintf("http://%s%s/%s%s",
		s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, bucket, urlPathEscape(object))

	s3a.proxyToFiler(w, r, destUrl, false, passThroughResponse)
}

func (s3a *S3ApiServer) HeadObjectHandler(w http.ResponseWriter, r *http.Request) {

	bucket, object := xhttp.GetBucketAndObject(r)
	glog.V(3).Infof("HeadObjectHandler %s %s", bucket, object)

	destUrl := fmt.Sprintf("http://%s%s/%s%s",
		s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, bucket, urlPathEscape(object))

	s3a.proxyToFiler(w, r, destUrl, false, passThroughResponse)
}

func (s3a *S3ApiServer) DeleteObjectHandler(w http.ResponseWriter, r *http.Request) {

	bucket, object := xhttp.GetBucketAndObject(r)
	glog.V(3).Infof("DeleteObjectHandler %s %s", bucket, object)

	destUrl := fmt.Sprintf("http://%s%s/%s%s?recursive=true",
		s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, bucket, urlPathEscape(object))

	s3a.proxyToFiler(w, r, destUrl, true, func(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int) {
		statusCode = http.StatusNoContent
		for k, v := range proxyResponse.Header {
			w.Header()[k] = v
		}
		w.WriteHeader(statusCode)
		return statusCode
	})
}
// ObjectIdentifier carries the key name for the object to delete.
type ObjectIdentifier struct {
	ObjectName string `xml:"Key"`
}

// DeleteObjectsRequest - xml carrying the object key names which need to be deleted.
type DeleteObjectsRequest struct {
	// Element to enable quiet mode for the request
	Quiet bool
	// List of objects to be deleted
	Objects []ObjectIdentifier `xml:"Object"`
}

// DeleteError structure.
type DeleteError struct {
	Code    string
	Message string
	Key     string
}

// DeleteObjectsResponse container for multiple object deletes.
type DeleteObjectsResponse struct {
	XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ DeleteResult" json:"-"`
	// Collection of all deleted objects
	DeletedObjects []ObjectIdentifier `xml:"Deleted,omitempty"`
	// Collection of errors deleting certain objects.
	Errors []DeleteError `xml:"Error,omitempty"`
}
// DeleteMultipleObjectsHandler - Delete multiple objects
func (s3a *S3ApiServer) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Request) {

	bucket, _ := xhttp.GetBucketAndObject(r)
	glog.V(3).Infof("DeleteMultipleObjectsHandler %s", bucket)

	deleteXMLBytes, err := io.ReadAll(r.Body)
	if err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}

	deleteObjects := &DeleteObjectsRequest{}
	if err := xml.Unmarshal(deleteXMLBytes, deleteObjects); err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrMalformedXML)
		return
	}

	var deletedObjects []ObjectIdentifier
	var deleteErrors []DeleteError
	var auditLog *s3err.AccessLog

	directoriesWithDeletion := make(map[string]int)

	if s3err.Logger != nil {
		auditLog = s3err.GetAccessLog(r, http.StatusNoContent, s3err.ErrNone)
	}
	s3a.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {

		// delete file entries
		for _, object := range deleteObjects.Objects {
			lastSeparator := strings.LastIndex(object.ObjectName, "/")
			parentDirectoryPath, entryName, isDeleteData, isRecursive := "", object.ObjectName, true, false
			if lastSeparator > 0 && lastSeparator+1 < len(object.ObjectName) {
				entryName = object.ObjectName[lastSeparator+1:]
				parentDirectoryPath = "/" + object.ObjectName[:lastSeparator]
			}
			parentDirectoryPath = fmt.Sprintf("%s/%s%s", s3a.option.BucketsPath, bucket, parentDirectoryPath)

			err := doDeleteEntry(client, parentDirectoryPath, entryName, isDeleteData, isRecursive)
			if err == nil {
				directoriesWithDeletion[parentDirectoryPath]++
				deletedObjects = append(deletedObjects, object)
			} else if strings.Contains(err.Error(), filer.MsgFailDelNonEmptyFolder) {
				deletedObjects = append(deletedObjects, object)
			} else {
				delete(directoriesWithDeletion, parentDirectoryPath)
				deleteErrors = append(deleteErrors, DeleteError{
					Code:    "",
					Message: err.Error(),
					Key:     object.ObjectName,
				})
			}

			if auditLog != nil {
				auditLog.Key = entryName
				s3err.PostAccessLog(*auditLog)
			}
		}

		// purge empty folders, only checking folders with deletions
		for len(directoriesWithDeletion) > 0 {
			directoriesWithDeletion = s3a.doDeleteEmptyDirectories(client, directoriesWithDeletion)
		}

		return nil
	})

	deleteResp := DeleteObjectsResponse{}
	if !deleteObjects.Quiet {
		deleteResp.DeletedObjects = deletedObjects
	}
	deleteResp.Errors = deleteErrors

	writeSuccessResponseXML(w, r, deleteResp)
}
// doDeleteEmptyDirectories removes, deepest first, directories that just had
// entries deleted and are now empty; the parents of removed directories are
// returned as candidates for the next pass.
func (s3a *S3ApiServer) doDeleteEmptyDirectories(client filer_pb.SeaweedFilerClient, directoriesWithDeletion map[string]int) (newDirectoriesWithDeletion map[string]int) {
	var allDirs []string
	for dir := range directoriesWithDeletion {
		allDirs = append(allDirs, dir)
	}
	sort.Slice(allDirs, func(i, j int) bool {
		return len(allDirs[i]) > len(allDirs[j])
	})
	newDirectoriesWithDeletion = make(map[string]int)
	for _, dir := range allDirs {
		parentDir, dirName := util.FullPath(dir).DirAndName()
		if parentDir == s3a.option.BucketsPath {
			continue
		}
		if err := doDeleteEntry(client, parentDir, dirName, false, false); err != nil {
			glog.V(4).Infof("directory %s has %d deletion but still not empty: %v", dir, directoriesWithDeletion[dir], err)
		} else {
			newDirectoriesWithDeletion[parentDir]++
		}
	}
	return
}
// proxyToFiler forwards the request to the filer, attaching a filer JWT when one
// is configured, and lets responseFn translate the filer response for the client.
func (s3a *S3ApiServer) proxyToFiler(w http.ResponseWriter, r *http.Request, destUrl string, isWrite bool, responseFn func(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int)) {

	glog.V(3).Infof("s3 proxying %s to %s", r.Method, destUrl)

	proxyReq, err := http.NewRequest(r.Method, destUrl, r.Body)
	if err != nil {
		glog.Errorf("NewRequest %s: %v", destUrl, err)
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}

	proxyReq.Header.Set("X-Forwarded-For", r.RemoteAddr)
	for k, v := range r.URL.Query() {
		if _, ok := xhttp.PassThroughHeaders[strings.ToLower(k)]; ok {
			proxyReq.Header[k] = v
		}
	}
	for header, values := range r.Header {
		proxyReq.Header[header] = values
	}

	// ensure that the Authorization header is overriding any previous
	// Authorization header which might be already present in proxyReq
	s3a.maybeAddFilerJwtAuthorization(proxyReq, isWrite)
	resp, postErr := client.Do(proxyReq)

	if postErr != nil {
		glog.Errorf("post to filer: %v", postErr)
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}
	defer util.CloseResponse(resp)

	if resp.StatusCode == http.StatusPreconditionFailed {
		s3err.WriteErrorResponse(w, r, s3err.ErrPreconditionFailed)
		return
	}

	if (resp.ContentLength == -1 || resp.StatusCode == 404) && resp.StatusCode != 304 {
		if r.Method != "DELETE" {
			s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchKey)
			return
		}
	}

	responseStatusCode := responseFn(resp, w)
	s3err.PostLog(r, responseStatusCode, s3err.ErrNone)
}

// passThroughResponse copies the filer response headers, status code and body to
// the client unchanged, except that ranged 200 responses are reported as 206.
func passThroughResponse(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int) {
	for k, v := range proxyResponse.Header {
		w.Header()[k] = v
	}
	if proxyResponse.Header.Get("Content-Range") != "" && proxyResponse.StatusCode == 200 {
		statusCode = http.StatusPartialContent
	} else {
		statusCode = proxyResponse.StatusCode
	}
	// write the status header exactly once
	w.WriteHeader(statusCode)

	buf := mem.Allocate(128 * 1024)
	defer mem.Free(buf)
	if n, err := io.CopyBuffer(w, proxyResponse.Body, buf); err != nil {
		glog.V(1).Infof("passthrough response read %d bytes: %v", n, err)
	}
	return statusCode
}
// putToFiler streams the request body to the filer upload URL while computing the
// MD5-based ETag, attaching a write JWT when one is configured.
func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader io.Reader) (etag string, code s3err.ErrorCode) {

	hash := md5.New()
	var body = io.TeeReader(dataReader, hash)

	proxyReq, err := http.NewRequest("PUT", uploadUrl, body)
	if err != nil {
		glog.Errorf("NewRequest %s: %v", uploadUrl, err)
		return "", s3err.ErrInternalError
	}

	proxyReq.Header.Set("X-Forwarded-For", r.RemoteAddr)

	for header, values := range r.Header {
		for _, value := range values {
			proxyReq.Header.Add(header, value)
		}
	}

	// ensure that the Authorization header is overriding any previous
	// Authorization header which might be already present in proxyReq
	s3a.maybeAddFilerJwtAuthorization(proxyReq, true)
	resp, postErr := client.Do(proxyReq)

	if postErr != nil {
		glog.Errorf("post to filer: %v", postErr)
		return "", s3err.ErrInternalError
	}
	defer resp.Body.Close()

	etag = fmt.Sprintf("%x", hash.Sum(nil))

	resp_body, ra_err := io.ReadAll(resp.Body)
	if ra_err != nil {
		glog.Errorf("upload to filer response read %d: %v", resp.StatusCode, ra_err)
		return etag, s3err.ErrInternalError
	}
	var ret weed_server.FilerPostResult
	unmarshal_err := json.Unmarshal(resp_body, &ret)
	if unmarshal_err != nil {
		glog.Errorf("failing to read upload to %s : %v", uploadUrl, string(resp_body))
		return "", s3err.ErrInternalError
	}
	if ret.Error != "" {
		glog.Errorf("upload to filer error: %v", ret.Error)
		return "", filerErrorToS3Error(ret.Error)
	}

	return etag, s3err.ErrNone
}
// setEtag writes the ETag response header, quoting the value if it is not already quoted.
func setEtag(w http.ResponseWriter, etag string) {
	if etag != "" {
		if strings.HasPrefix(etag, "\"") {
			w.Header().Set("ETag", etag)
		} else {
			w.Header().Set("ETag", "\""+etag+"\"")
		}
	}
}

// filerErrorToS3Error maps well-known filer error messages to S3 error codes.
func filerErrorToS3Error(errString string) s3err.ErrorCode {
	switch {
	case strings.HasPrefix(errString, "existing ") && strings.HasSuffix(errString, "is a directory"):
		return s3err.ErrExistingObjectIsDirectory
	case strings.HasSuffix(errString, "is a file"):
		return s3err.ErrExistingObjectIsFile
	default:
		return s3err.ErrInternalError
	}
}

// maybeAddFilerJwtAuthorization sets the Authorization header on the filer request
// when a JWT could be generated; otherwise it leaves the request unchanged.
func (s3a *S3ApiServer) maybeAddFilerJwtAuthorization(r *http.Request, isWrite bool) {
	encodedJwt := s3a.maybeGetFilerJwtAuthorizationToken(isWrite)

	if encodedJwt == "" {
		return
	}

	r.Header.Set("Authorization", "BEARER "+string(encodedJwt))
}

// maybeGetFilerJwtAuthorizationToken generates a write or read JWT from the filer
// signing keys configured in security.toml.
func (s3a *S3ApiServer) maybeGetFilerJwtAuthorizationToken(isWrite bool) string {
	var encodedJwt security.EncodedJwt
	if isWrite {
		encodedJwt = security.GenJwtForFilerServer(s3a.filerGuard.SigningKey, s3a.filerGuard.ExpiresAfterSec)
	} else {
		encodedJwt = security.GenJwtForFilerServer(s3a.filerGuard.ReadSigningKey, s3a.filerGuard.ReadExpiresAfterSec)
	}
	return string(encodedJwt)
}