FEATURE: add JWT to HTTP endpoints of Filer and use them in S3 Client

- one JWT for reading and one for writing, analogous to how the JWT between Master and Volume Server works
- I did not implement the IP `whiteList` parameter on the filer

Additionally, because http_util.DownloadFile now sets the JWT, the `download` command should now work when `jwt.signing.read` is configured. Judging from the code, this case did not work before.

## Docs to be adjusted after a release

Page `Amazon-S3-API`:

```
# Authentication with Filer

You can use mTLS for the gRPC connection between the S3-API-Proxy and the Filer, as explained in [Security-Configuration](Security-Configuration) - controlled by the `grpc.*` configuration in `security.toml`.

Starting with version XX, it is also possible to authenticate the HTTP operations between the S3-API-Proxy and the Filer (especially uploading new files). This is configured by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`.

With both configurations (gRPC and JWT), Filer and S3 can communicate in a fully authenticated fashion, and the Filer will reject any unauthenticated communication.
```

Page `Security Overview`:

```
The following items are not covered, yet:

- master server http REST services

Starting with version XX, the Filer HTTP REST services can be secured with a JWT, by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`.

...

Before version XX: "weed filer -disableHttp" disables HTTP operations; only gRPC operations are allowed. This works with "weed mount" by FUSE. It does **not work** with the [S3 Gateway](Amazon S3 API), as this makes HTTP calls to the Filer.

Starting with version XX: secured by JWT, by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`. **This now works with the [S3 Gateway](Amazon S3 API).**

...

# Securing Filer HTTP with JWT

To enable JWT-based access control for the Filer,

1. generate a `security.toml` file with `weed scaffold -config=security`
2. set `filer_jwt.signing.key` to a secret string - and optionally `filer_jwt.signing.read.key` as well to a secret string
3. copy the same `security.toml` file to the filers and all S3 proxies.

If `filer_jwt.signing.key` is configured: when sending upload/update/delete HTTP operations to a filer server, the request header `Authorization` should be the JWT string (`Authorization: Bearer [JwtToken]`). The operation is authorized after the filer validates the JWT with `filer_jwt.signing.key`.

If `filer_jwt.signing.read.key` is configured: when sending GET or HEAD requests to a filer server, the request header `Authorization` should be the JWT string (`Authorization: Bearer [JwtToken]`). The operation is authorized after the filer validates the JWT with `filer_jwt.signing.read.key`.

The S3 API Gateway reads the above JWT keys and sends authenticated HTTP requests to the filer.
```

Page `Security Configuration`:

```
(update scaffold file)
...
[filer_jwt.signing]
key = "blahblahblahblah"

[filer_jwt.signing.read]
key = "blahblahblahblah"
```

Resolves: #158
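For illustration, here is a minimal, hypothetical Go sketch of a JWT-authenticated write sent straight to the Filer, using the same `security.GenJwtForFilerServer` helper that the S3 gateway uses further below. The filer address, bucket path, secret and expiry are placeholders, not values from a real deployment.

```go
package main

import (
	"net/http"
	"strings"

	"github.com/seaweedfs/seaweedfs/weed/security"
)

func main() {
	// assumption: the same secret that was configured as filer_jwt.signing.key in security.toml
	signingKey := security.SigningKey("blahblahblahblah")

	// sign a short-lived write token, as the S3 gateway does in maybeGetFilerJwtAuthorizationToken
	token := security.GenJwtForFilerServer(signingKey, 60)

	// hypothetical filer address and bucket path
	req, err := http.NewRequest(http.MethodPut,
		"http://localhost:8888/buckets/mybucket/hello.txt",
		strings.NewReader("hello world"))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+string(token))

	// the filer accepts the upload only if the JWT validates against filer_jwt.signing.key
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
}
```

A read (GET or HEAD) request works the same way, except the token must be signed with `filer_jwt.signing.read.key`.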
package s3api

import (
	"bytes"
	"crypto/md5"
	"encoding/json"
	"encoding/xml"
	"fmt"
	"io"
	"net/http"
	"net/url"
	"strings"
	"time"

	"github.com/aws/aws-sdk-go/service/s3"
	"github.com/pquerna/cachecontrol/cacheobject"
	"golang.org/x/exp/slices"

	"github.com/seaweedfs/seaweedfs/weed/filer"
	"github.com/seaweedfs/seaweedfs/weed/glog"
	"github.com/seaweedfs/seaweedfs/weed/pb/filer_pb"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3_constants"
	"github.com/seaweedfs/seaweedfs/weed/s3api/s3err"
	"github.com/seaweedfs/seaweedfs/weed/security"
	weed_server "github.com/seaweedfs/seaweedfs/weed/server"
	"github.com/seaweedfs/seaweedfs/weed/util"
	"github.com/seaweedfs/seaweedfs/weed/util/mem"
)

const (
	deleteMultipleObjectsLimit = 1000
)

// mimeDetect sniffs the first 512 bytes of the upload, sets the Content-Type header
// from them, and returns a reader that replays the sniffed bytes followed by the rest
// of the body.
func mimeDetect(r *http.Request, dataReader io.Reader) io.ReadCloser {
	mimeBuffer := make([]byte, 512)
	size, _ := dataReader.Read(mimeBuffer)
	if size > 0 {
		r.Header.Set("Content-Type", http.DetectContentType(mimeBuffer[:size]))
		return io.NopCloser(io.MultiReader(bytes.NewReader(mimeBuffer[:size]), dataReader))
	}
	return io.NopCloser(dataReader)
}

func (s3a *S3ApiServer) PutObjectHandler(w http.ResponseWriter, r *http.Request) {

	// http://docs.aws.amazon.com/AmazonS3/latest/dev/UploadingObjects.html

	bucket, object := s3_constants.GetBucketAndObject(r)
	glog.V(3).Infof("PutObjectHandler %s %s", bucket, object)

	_, err := validateContentMd5(r.Header)
	if err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidDigest)
		return
	}

	if r.Header.Get("Cache-Control") != "" {
		if _, err = cacheobject.ParseRequestCacheControl(r.Header.Get("Cache-Control")); err != nil {
			s3err.WriteErrorResponse(w, r, s3err.ErrInvalidDigest)
			return
		}
	}

	if r.Header.Get("Expires") != "" {
		if _, err = time.Parse(http.TimeFormat, r.Header.Get("Expires")); err != nil {
			s3err.WriteErrorResponse(w, r, s3err.ErrMalformedExpires)
			return
		}
	}

	dataReader := r.Body
	rAuthType := getRequestAuthType(r)
	if s3a.iam.isEnabled() {
		var s3ErrCode s3err.ErrorCode
		switch rAuthType {
		case authTypeStreamingSigned:
			dataReader, s3ErrCode = s3a.iam.newSignV4ChunkedReader(r)
		case authTypeSignedV2, authTypePresignedV2:
			_, s3ErrCode = s3a.iam.isReqAuthenticatedV2(r)
		case authTypePresigned, authTypeSigned:
			_, s3ErrCode = s3a.iam.reqSignatureV4Verify(r)
		}
		if s3ErrCode != s3err.ErrNone {
			s3err.WriteErrorResponse(w, r, s3ErrCode)
			return
		}
	} else {
		if authTypeStreamingSigned == rAuthType {
			s3err.WriteErrorResponse(w, r, s3err.ErrAuthNotSetup)
			return
		}
	}
	defer dataReader.Close()

	objectContentType := r.Header.Get("Content-Type")
	if strings.HasSuffix(object, "/") && r.ContentLength == 0 {
		// a zero-length key ending in "/" creates a directory entry instead of a file
		if err := s3a.mkdir(
			s3a.option.BucketsPath, bucket+strings.TrimSuffix(object, "/"),
			func(entry *filer_pb.Entry) {
				if objectContentType == "" {
					objectContentType = "httpd/unix-directory"
				}
				entry.Attributes.Mime = objectContentType
			}); err != nil {
			s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
			return
		}
	} else {
		uploadUrl := s3a.toFilerUrl(bucket, object)
		if objectContentType == "" {
			dataReader = mimeDetect(r, dataReader)
		}

		etag, errCode := s3a.putToFiler(r, uploadUrl, dataReader, "")

		if errCode != s3err.ErrNone {
			s3err.WriteErrorResponse(w, r, errCode)
			return
		}

		setEtag(w, etag)
	}

	writeSuccessResponseEmpty(w, r)
}

// urlPathEscape escapes each path segment individually so that the "/" separators
// are preserved in the resulting filer URL.
func urlPathEscape(object string) string {
	var escapedParts []string
	for _, part := range strings.Split(object, "/") {
		escapedParts = append(escapedParts, url.PathEscape(part))
	}
	return strings.Join(escapedParts, "/")
}

// removeDuplicateSlashes collapses any run of consecutive "/" characters in the
// object key into a single slash.
func removeDuplicateSlashes(object string) string {
	result := strings.Builder{}
	result.Grow(len(object))
	isLastSlash := false
	for _, r := range object {
		switch r {
		case '/':
			if !isLastSlash {
				result.WriteRune(r)
			}
			isLastSlash = true
		default:
			result.WriteRune(r)
			isLastSlash = false
		}
	}
	return result.String()
}

func (s3a *S3ApiServer) toFilerUrl(bucket, object string) string {
	object = urlPathEscape(removeDuplicateSlashes(object))
	destUrl := fmt.Sprintf("http://%s%s/%s%s",
		s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, bucket, object)
	return destUrl
}

func (s3a *S3ApiServer) GetObjectHandler(w http.ResponseWriter, r *http.Request) {

	bucket, object := s3_constants.GetBucketAndObject(r)
	glog.V(3).Infof("GetObjectHandler %s %s", bucket, object)

	if strings.HasSuffix(r.URL.Path, "/") {
		s3err.WriteErrorResponse(w, r, s3err.ErrNotImplemented)
		return
	}

	destUrl := s3a.toFilerUrl(bucket, object)

	s3a.proxyToFiler(w, r, destUrl, false, passThroughResponse)
}

// GetObjectAclHandler Get object ACL
// https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectAcl.html
func (s3a *S3ApiServer) GetObjectAclHandler(w http.ResponseWriter, r *http.Request) {
	bucket, object := s3_constants.GetBucketAndObject(r)

	acp, errCode := s3a.checkAccessForReadObjectAcl(r, bucket, object)
	if errCode != s3err.ErrNone {
		s3err.WriteErrorResponse(w, r, errCode)
		return
	}

	result := &s3.PutBucketAclInput{
		AccessControlPolicy: acp,
	}

	s3err.WriteAwsXMLResponse(w, r, http.StatusOK, &result)
}

func (s3a *S3ApiServer) HeadObjectHandler(w http.ResponseWriter, r *http.Request) {

	bucket, object := s3_constants.GetBucketAndObject(r)
	glog.V(3).Infof("HeadObjectHandler %s %s", bucket, object)

	destUrl := s3a.toFilerUrl(bucket, object)

	s3a.proxyToFiler(w, r, destUrl, false, passThroughResponse)
}

func (s3a *S3ApiServer) DeleteObjectHandler(w http.ResponseWriter, r *http.Request) {

	bucket, object := s3_constants.GetBucketAndObject(r)
	glog.V(3).Infof("DeleteObjectHandler %s %s", bucket, object)

	destUrl := s3a.toFilerUrl(bucket, object)

	s3a.proxyToFiler(w, r, destUrl, true, func(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int) {
		statusCode = http.StatusNoContent
		for k, v := range proxyResponse.Header {
			w.Header()[k] = v
		}
		w.WriteHeader(statusCode)
		return statusCode
	})
}

// ObjectIdentifier carries the key name for an object to delete.
type ObjectIdentifier struct {
	ObjectName string `xml:"Key"`
}

// DeleteObjectsRequest - xml carrying the object key names which need to be deleted.
type DeleteObjectsRequest struct {
	// Element to enable quiet mode for the request
	Quiet bool
	// List of objects to be deleted
	Objects []ObjectIdentifier `xml:"Object"`
}

// DeleteError structure.
type DeleteError struct {
	Code    string
	Message string
	Key     string
}

// DeleteObjectsResponse container for multiple object deletes.
type DeleteObjectsResponse struct {
	XMLName xml.Name `xml:"http://s3.amazonaws.com/doc/2006-03-01/ DeleteResult" json:"-"`

	// Collection of all deleted objects
	DeletedObjects []ObjectIdentifier `xml:"Deleted,omitempty"`

	// Collection of errors deleting certain objects.
	Errors []DeleteError `xml:"Error,omitempty"`
}

// DeleteMultipleObjectsHandler - Delete multiple objects
func (s3a *S3ApiServer) DeleteMultipleObjectsHandler(w http.ResponseWriter, r *http.Request) {

	bucket, _ := s3_constants.GetBucketAndObject(r)
	glog.V(3).Infof("DeleteMultipleObjectsHandler %s", bucket)

	deleteXMLBytes, err := io.ReadAll(r.Body)
	if err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}

	deleteObjects := &DeleteObjectsRequest{}
	if err := xml.Unmarshal(deleteXMLBytes, deleteObjects); err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrMalformedXML)
		return
	}

	if len(deleteObjects.Objects) > deleteMultipleObjectsLimit {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidMaxDeleteObjects)
		return
	}

	var deletedObjects []ObjectIdentifier
	var deleteErrors []DeleteError
	var auditLog *s3err.AccessLog

	directoriesWithDeletion := make(map[string]int)

	if s3err.Logger != nil {
		auditLog = s3err.GetAccessLog(r, http.StatusNoContent, s3err.ErrNone)
	}
	s3a.WithFilerClient(false, func(client filer_pb.SeaweedFilerClient) error {

		// delete file entries
		for _, object := range deleteObjects.Objects {
			lastSeparator := strings.LastIndex(object.ObjectName, "/")
			parentDirectoryPath, entryName, isDeleteData, isRecursive := "", object.ObjectName, true, false
			if lastSeparator > 0 && lastSeparator+1 < len(object.ObjectName) {
				entryName = object.ObjectName[lastSeparator+1:]
				parentDirectoryPath = "/" + object.ObjectName[:lastSeparator]
			}
			parentDirectoryPath = fmt.Sprintf("%s/%s%s", s3a.option.BucketsPath, bucket, parentDirectoryPath)

			err := doDeleteEntry(client, parentDirectoryPath, entryName, isDeleteData, isRecursive)
			if err == nil {
				directoriesWithDeletion[parentDirectoryPath]++
				deletedObjects = append(deletedObjects, object)
			} else if strings.Contains(err.Error(), filer.MsgFailDelNonEmptyFolder) {
				deletedObjects = append(deletedObjects, object)
			} else {
				delete(directoriesWithDeletion, parentDirectoryPath)
				deleteErrors = append(deleteErrors, DeleteError{
					Code:    "",
					Message: err.Error(),
					Key:     object.ObjectName,
				})
			}

			if auditLog != nil {
				auditLog.Key = entryName
				s3err.PostAccessLog(*auditLog)
			}
		}

		// purge empty folders, only checking folders with deletions
		for len(directoriesWithDeletion) > 0 {
			directoriesWithDeletion = s3a.doDeleteEmptyDirectories(client, directoriesWithDeletion)
		}

		return nil
	})

	deleteResp := DeleteObjectsResponse{}
	if !deleteObjects.Quiet {
		deleteResp.DeletedObjects = deletedObjects
	}
	deleteResp.Errors = deleteErrors

	writeSuccessResponseXML(w, r, deleteResp)
}

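// doDeleteEmptyDirectories tries to remove, deepest paths first, each directory that
// just had entries deleted from it; directories that are still non-empty are left in
// place, and the parents of removed directories are returned for the next pass.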
func (s3a *S3ApiServer) doDeleteEmptyDirectories(client filer_pb.SeaweedFilerClient, directoriesWithDeletion map[string]int) (newDirectoriesWithDeletion map[string]int) {
	var allDirs []string
	for dir := range directoriesWithDeletion {
		allDirs = append(allDirs, dir)
	}
	slices.SortFunc(allDirs, func(a, b string) bool {
		return len(a) > len(b)
	})
	newDirectoriesWithDeletion = make(map[string]int)
	for _, dir := range allDirs {
		parentDir, dirName := util.FullPath(dir).DirAndName()
		if parentDir == s3a.option.BucketsPath {
			continue
		}
		if err := doDeleteEntry(client, parentDir, dirName, false, false); err != nil {
			glog.V(4).Infof("directory %s has %d deletion but still not empty: %v", dir, directoriesWithDeletion[dir], err)
		} else {
			newDirectoriesWithDeletion[parentDir]++
		}
	}
	return
}

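// proxyToFiler forwards the incoming S3 request to the filer at destUrl, attaching the
// filer JWT (read or write, depending on isWrite) when one is configured, translates
// well-known filer status codes into S3 errors, and otherwise hands the response to responseFn.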
func (s3a *S3ApiServer) proxyToFiler(w http.ResponseWriter, r *http.Request, destUrl string, isWrite bool, responseFn func(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int)) {

	glog.V(3).Infof("s3 proxying %s to %s", r.Method, destUrl)

	proxyReq, err := http.NewRequest(r.Method, destUrl, r.Body)

	if err != nil {
		glog.Errorf("NewRequest %s: %v", destUrl, err)
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}

	proxyReq.Header.Set("X-Forwarded-For", r.RemoteAddr)
	for k, v := range r.URL.Query() {
		if _, ok := s3_constants.PassThroughHeaders[strings.ToLower(k)]; ok {
			proxyReq.Header[k] = v
		}
	}
	for header, values := range r.Header {
		proxyReq.Header[header] = values
	}

	// ensure that the Authorization header is overriding any previous
	// Authorization header which might be already present in proxyReq
	s3a.maybeAddFilerJwtAuthorization(proxyReq, isWrite)
	resp, postErr := s3a.client.Do(proxyReq)

	if postErr != nil {
		glog.Errorf("post to filer: %v", postErr)
		s3err.WriteErrorResponse(w, r, s3err.ErrInternalError)
		return
	}
	defer util.CloseResponse(resp)

	if resp.StatusCode == http.StatusPreconditionFailed {
		s3err.WriteErrorResponse(w, r, s3err.ErrPreconditionFailed)
		return
	}

	if resp.StatusCode == http.StatusRequestedRangeNotSatisfiable {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidRange)
		return
	}

	if r.Method == "DELETE" {
		if resp.StatusCode == http.StatusNotFound {
			// this is normal
			responseStatusCode := responseFn(resp, w)
			s3err.PostLog(r, responseStatusCode, s3err.ErrNone)
			return
		}
	}

	if resp.StatusCode == http.StatusNotFound {
		s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchKey)
		return
	}

	if resp.Header.Get(s3_constants.X_SeaweedFS_Header_Directory_Key) == "true" {
		responseStatusCode := responseFn(resp, w)
		s3err.PostLog(r, responseStatusCode, s3err.ErrNone)
		return
	}

	// when HEAD a directory, it should be reported as no such key
	// https://github.com/seaweedfs/seaweedfs/issues/3457
	if resp.ContentLength == -1 && resp.StatusCode != http.StatusNotModified {
		s3err.WriteErrorResponse(w, r, s3err.ErrNoSuchKey)
		return
	}

	responseStatusCode := responseFn(resp, w)
	s3err.PostLog(r, responseStatusCode, s3err.ErrNone)
}

func passThroughResponse(proxyResponse *http.Response, w http.ResponseWriter) (statusCode int) {
	for k, v := range proxyResponse.Header {
		w.Header()[k] = v
	}
	// a filer response that carries Content-Range with a 200 status is a range read,
	// so report it to the S3 client as 206 Partial Content; write the status only once
	if proxyResponse.Header.Get("Content-Range") != "" && proxyResponse.StatusCode == 200 {
		statusCode = http.StatusPartialContent
	} else {
		statusCode = proxyResponse.StatusCode
	}
	w.WriteHeader(statusCode)
	buf := mem.Allocate(128 * 1024)
	defer mem.Free(buf)
	if n, err := io.CopyBuffer(w, proxyResponse.Body, buf); err != nil {
		glog.V(1).Infof("passthrough response read %d bytes: %v", n, err)
	}
	return statusCode
}

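// putToFiler streams the object body to the filer upload URL while computing the MD5
// ETag on the fly, attaches the write JWT when one is configured, and maps the filer's
// JSON reply (or error) to an S3 error code.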
func (s3a *S3ApiServer) putToFiler(r *http.Request, uploadUrl string, dataReader io.Reader, destination string) (etag string, code s3err.ErrorCode) {

	hash := md5.New()
	var body = io.TeeReader(dataReader, hash)

	proxyReq, err := http.NewRequest("PUT", uploadUrl, body)

	if err != nil {
		glog.Errorf("NewRequest %s: %v", uploadUrl, err)
		return "", s3err.ErrInternalError
	}

	proxyReq.Header.Set("X-Forwarded-For", r.RemoteAddr)
	if destination != "" {
		proxyReq.Header.Set(s3_constants.SeaweedStorageDestinationHeader, destination)
	}

	for header, values := range r.Header {
		for _, value := range values {
			proxyReq.Header.Add(header, value)
		}
	}
	// ensure that the Authorization header is overriding any previous
	// Authorization header which might be already present in proxyReq
	s3a.maybeAddFilerJwtAuthorization(proxyReq, true)
	resp, postErr := s3a.client.Do(proxyReq)

	if postErr != nil {
		glog.Errorf("post to filer: %v", postErr)
		return "", s3err.ErrInternalError
	}
	defer resp.Body.Close()

	etag = fmt.Sprintf("%x", hash.Sum(nil))

	respBody, readErr := io.ReadAll(resp.Body)
	if readErr != nil {
		glog.Errorf("upload to filer response read %d: %v", resp.StatusCode, readErr)
		return etag, s3err.ErrInternalError
	}
	var ret weed_server.FilerPostResult
	if unmarshalErr := json.Unmarshal(respBody, &ret); unmarshalErr != nil {
		glog.Errorf("failing to read upload to %s : %v", uploadUrl, string(respBody))
		return "", s3err.ErrInternalError
	}
	if ret.Error != "" {
		glog.Errorf("upload to filer error: %v", ret.Error)
		return "", filerErrorToS3Error(ret.Error)
	}

	return etag, s3err.ErrNone
}

func setEtag(w http.ResponseWriter, etag string) {
	if etag != "" {
		if strings.HasPrefix(etag, "\"") {
			w.Header()["ETag"] = []string{etag}
		} else {
			w.Header()["ETag"] = []string{"\"" + etag + "\""}
		}
	}
}

func filerErrorToS3Error(errString string) s3err.ErrorCode {
	switch {
	case strings.HasPrefix(errString, "existing ") && strings.HasSuffix(errString, "is a directory"):
		return s3err.ErrExistingObjectIsDirectory
	case strings.HasSuffix(errString, "is a file"):
		return s3err.ErrExistingObjectIsFile
	default:
		return s3err.ErrInternalError
	}
}

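// maybeAddFilerJwtAuthorization overwrites the Authorization header on the proxied
// request with a freshly signed filer JWT (write key or read key, depending on isWrite);
// it is a no-op when no signing key is configured in security.toml.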
func (s3a *S3ApiServer) maybeAddFilerJwtAuthorization(r *http.Request, isWrite bool) {
	encodedJwt := s3a.maybeGetFilerJwtAuthorizationToken(isWrite)

	if encodedJwt == "" {
		return
	}

	r.Header.Set("Authorization", "BEARER "+string(encodedJwt))
}

func (s3a *S3ApiServer) maybeGetFilerJwtAuthorizationToken(isWrite bool) string {
	var encodedJwt security.EncodedJwt
	if isWrite {
		encodedJwt = security.GenJwtForFilerServer(s3a.filerGuard.SigningKey, s3a.filerGuard.ExpiresAfterSec)
	} else {
		encodedJwt = security.GenJwtForFilerServer(s3a.filerGuard.ReadSigningKey, s3a.filerGuard.ReadExpiresAfterSec)
	}
	return string(encodedJwt)
}