FEATURE: add JWTs to the HTTP endpoints of the Filer and use them in the S3 client
- one JWT for reading and one for writing, analogous to how the JWT
between Master and Volume Server works
- I did not implement the IP `whiteList` parameter on the filer
Additionally, because `http_util.DownloadFile` now sets the JWT,
the `download` command should now work when `jwt.signing.read` is
configured. From reading the code, I believe this case did not work
before.
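For illustration, a minimal sketch of such an authenticated download (the `util.DownloadFile` signature matches its use in the copy handler code further below; the helper name and how the token is obtained are my own, hypothetical additions):

```
package main

import (
	"fmt"

	"github.com/chrislusf/seaweedfs/weed/util"
)

// fetchWithReadJwt is a hypothetical helper: it downloads fileUrl while
// presenting jwt, a token assumed to be signed with the appropriate read key
// (jwt.signing.read.key for volume servers, filer_jwt.signing.read.key for
// the filer).
func fetchWithReadJwt(fileUrl, jwt string) error {
	// util.DownloadFile now attaches the JWT to the request.
	filename, _, resp, err := util.DownloadFile(fileUrl, jwt)
	if err != nil {
		// a missing or badly signed token is rejected by the server
		return fmt.Errorf("download %s: %v", fileUrl, err)
	}
	defer util.CloseResponse(resp)
	fmt.Println("downloaded", filename)
	return nil
}
```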
## Docs to be adjusted after a release
Page `Amazon-S3-API`:
```
# Authentication with Filer
You can use mTLS for the gRPC connection between the S3-API-Proxy and the Filer, as
explained in [Security-Configuration](Security-Configuration) -
controlled by the `grpc.*` configuration in `security.toml`.
Starting with version XX, it is also possible to authenticate the HTTP
operations between the S3-API-Proxy and the Filer (especially
uploading new files). This is configured by setting
`filer_jwt.signing.key` and `filer_jwt.signing.read.key` in
`security.toml`.
With both configurations in place (gRPC and JWT), the Filer and the S3
gateway communicate in a fully authenticated fashion, and the Filer will
reject any unauthenticated communication.
```
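For illustration: on the wire, this authentication is just the S3 proxy attaching a bearer token to each filer request. A minimal sketch with the standard library (the function and its parameters are assumptions, not actual SeaweedFS helpers):

```
package main

import (
	"io"
	"net/http"
)

// uploadToFiler sketches what the S3 proxy does for writes: attach the write
// JWT as a bearer token on the HTTP request to the filer.
func uploadToFiler(filerUrl, jwt string, body io.Reader) (*http.Response, error) {
	req, err := http.NewRequest(http.MethodPut, filerUrl, body)
	if err != nil {
		return nil, err
	}
	if jwt != "" {
		// the filer validates this token against filer_jwt.signing.key
		req.Header.Set("Authorization", "Bearer "+jwt)
	}
	return http.DefaultClient.Do(req)
}
```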
Page `Security Overview`:
```
The following items are not covered yet:
- master server HTTP REST services
Starting with version XX, the Filer HTTP REST services can be secured
with a JWT by setting `filer_jwt.signing.key` and
`filer_jwt.signing.read.key` in `security.toml`.
...
Before version XX: `weed filer -disableHttp` disables HTTP operations, so only gRPC operations are allowed. This works with `weed mount` via FUSE. It does **not work** with the [S3 Gateway](Amazon S3 API), as the gateway makes HTTP calls to the Filer.
Starting with version XX: secured by JWT, by setting `filer_jwt.signing.key` and `filer_jwt.signing.read.key` in `security.toml`. **This now works with the [S3 Gateway](Amazon S3 API).**
...
# Securing Filer HTTP with JWT
To enable JWT-based access control for the Filer:
1. generate a `security.toml` file with `weed scaffold -config=security`
2. set `filer_jwt.signing.key` to a secret string, and optionally `filer_jwt.signing.read.key` to another secret string
3. copy the same `security.toml` file to the filers and all S3 proxies.
If `filer_jwt.signing.key` is configured: When sending upload/update/delete HTTP operations to a filer server, the request header `Authorization` should be the JWT string (`Authorization: Bearer [JwtToken]`). The operation is authorized after the filer validates the JWT with `filer_jwt.signing.key`.
If `filer_jwt.signing.read.key` is configured: When sending GET or HEAD requests to a filer server, the request header `Authorization` should be the JWT string (`Authorization: Bearer [JwtToken]`). The operation is authorized after the filer validates the JWT with `filer_jwt.signing.read.key`.
The S3 API Gateway reads the above JWT keys and sends authenticated
HTTP requests to the filer.
```
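For reference, the validation the filer performs is standard HMAC JWT verification of the bearer token. A minimal sketch using `github.com/golang-jwt/jwt` (an illustration of the scheme, not the filer's actual code):

```
package main

import (
	"fmt"
	"net/http"
	"strings"

	"github.com/golang-jwt/jwt"
)

// validateFilerJwt is a hypothetical middleware-style check: verify the
// bearer token against the configured signing key.
func validateFilerJwt(r *http.Request, signingKey string) error {
	tokenStr := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ")
	if tokenStr == "" {
		return fmt.Errorf("no JWT provided")
	}
	token, err := jwt.Parse(tokenStr, func(t *jwt.Token) (interface{}, error) {
		// reject unexpected signing algorithms
		if _, ok := t.Method.(*jwt.SigningMethodHMAC); !ok {
			return nil, fmt.Errorf("unexpected signing method: %v", t.Header["alg"])
		}
		return []byte(signingKey), nil // filer_jwt.signing.key (or .read.key for GET/HEAD)
	})
	if err != nil || !token.Valid {
		return fmt.Errorf("invalid JWT: %v", err)
	}
	return nil
}
```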
Page `Security Configuration`:
```
(update scaffold file)
...
[filer_jwt.signing]
key = "blahblahblahblah"
[filer_jwt.signing.read]
key = "blahblahblahblah"
```
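The keys are plain strings in `security.toml`, so any TOML reader can load them. A minimal sketch with `github.com/spf13/viper` (an assumed approach; SeaweedFS's own config loading may differ):

```
package main

import (
	"fmt"

	"github.com/spf13/viper"
)

// loadFilerJwtKeys reads the two signing keys from security.toml.
func loadFilerJwtKeys() (writeKey, readKey string, err error) {
	v := viper.New()
	v.SetConfigName("security") // looks for security.toml in the search path
	v.SetConfigType("toml")
	v.AddConfigPath(".")
	if err = v.ReadInConfig(); err != nil {
		return "", "", fmt.Errorf("read security.toml: %v", err)
	}
	// [filer_jwt.signing] key signs upload/update/delete tokens;
	// [filer_jwt.signing.read] key signs GET/HEAD tokens.
	return v.GetString("filer_jwt.signing.key"), v.GetString("filer_jwt.signing.read.key"), nil
}
```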
Resolves: #158
package s3api
import ( "fmt" "github.com/chrislusf/seaweedfs/weed/glog" "github.com/chrislusf/seaweedfs/weed/s3api/s3_constants" "github.com/chrislusf/seaweedfs/weed/s3api/s3err" "modernc.org/strutil" "net/http" "net/url" "strconv" "strings" "time"
"github.com/chrislusf/seaweedfs/weed/util" )
const (
	DirectiveCopy    = "COPY"
	DirectiveReplace = "REPLACE"
)
// CopyObjectHandler handles the S3 CopyObject API. It streams the source
// object from the filer (authenticating with the filer read JWT, if
// configured) and re-uploads it to the destination path via putToFiler.
func (s3a *S3ApiServer) CopyObjectHandler(w http.ResponseWriter, r *http.Request) {

	dstBucket, dstObject := s3_constants.GetBucketAndObject(r)

	// Copy source path.
	cpSrcPath, err := url.QueryUnescape(r.Header.Get("X-Amz-Copy-Source"))
	if err != nil {
		// Save unescaped string as is.
		cpSrcPath = r.Header.Get("X-Amz-Copy-Source")
	}

	srcBucket, srcObject := pathToBucketAndObject(cpSrcPath)

	glog.V(3).Infof("CopyObjectHandler %s %s => %s %s", srcBucket, srcObject, dstBucket, dstObject)

	replaceMeta, replaceTagging := replaceDirective(r.Header)

	// Copying onto itself (or an empty source) with a REPLACE directive only
	// rewrites metadata/tags; no data is moved.
	if (srcBucket == dstBucket && srcObject == dstObject || cpSrcPath == "") && (replaceMeta || replaceTagging) {
		fullPath := util.FullPath(fmt.Sprintf("%s/%s%s", s3a.option.BucketsPath, dstBucket, dstObject))
		dir, name := fullPath.DirAndName()
		entry, err := s3a.getEntry(dir, name)
		if err != nil || entry.IsDirectory {
			s3err.WriteErrorResponse(w, r, s3err.ErrInvalidCopySource)
			return
		}
		entry.Extended, err = processMetadataBytes(r.Header, entry.Extended, replaceMeta, replaceTagging)
		if err != nil {
			glog.Errorf("CopyObjectHandler ValidateTags error %s: %v", r.URL, err)
			s3err.WriteErrorResponse(w, r, s3err.ErrInvalidTag)
			return
		}
		err = s3a.touch(dir, name, entry)
		if err != nil {
			s3err.WriteErrorResponse(w, r, s3err.ErrInvalidCopySource)
			return
		}
		writeSuccessResponseXML(w, r, CopyObjectResult{
			ETag:         fmt.Sprintf("%x", entry.Attributes.Md5),
			LastModified: time.Now().UTC(),
		})
		return
	}

	// If source object is empty or bucket is empty, reply back invalid copy source.
	if srcObject == "" || srcBucket == "" {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidCopySource)
		return
	}
	srcPath := util.FullPath(fmt.Sprintf("%s/%s%s", s3a.option.BucketsPath, srcBucket, srcObject))
	dir, name := srcPath.DirAndName()
	if entry, err := s3a.getEntry(dir, name); err != nil || entry.IsDirectory {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidCopySource)
		return
	}

	if srcBucket == dstBucket && srcObject == dstObject {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidCopyDest)
		return
	}

	dstUrl := fmt.Sprintf("http://%s%s/%s%s",
		s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, dstBucket, urlPathEscape(dstObject))
	srcUrl := fmt.Sprintf("http://%s%s/%s%s",
		s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, srcBucket, urlPathEscape(srcObject))

	// Read the source through the filer, presenting the read JWT if configured.
	_, _, resp, err := util.DownloadFile(srcUrl, s3a.maybeGetFilerJwtAuthorizationToken(false))
	if err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidCopySource)
		return
	}
	defer util.CloseResponse(resp)

	tagErr := processMetadata(r.Header, resp.Header, replaceMeta, replaceTagging, s3a.getTags, dir, name)
	if tagErr != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidCopySource)
		return
	}
	glog.V(2).Infof("copy from %s to %s", srcUrl, dstUrl)
	destination := fmt.Sprintf("%s/%s%s", s3a.option.BucketsPath, dstBucket, dstObject)
	etag, errCode := s3a.putToFiler(r, dstUrl, resp.Body, destination)

	if errCode != s3err.ErrNone {
		s3err.WriteErrorResponse(w, r, errCode)
		return
	}

	setEtag(w, etag)

	response := CopyObjectResult{
		ETag:         etag,
		LastModified: time.Now().UTC(),
	}

	writeSuccessResponseXML(w, r, response)
}
func pathToBucketAndObject(path string) (bucket, object string) {
	path = strings.TrimPrefix(path, "/")
	parts := strings.SplitN(path, "/", 2)
	if len(parts) == 2 {
		return parts[0], "/" + parts[1]
	}
	return parts[0], "/"
}
type CopyPartResult struct {
	LastModified time.Time `xml:"LastModified"`
	ETag         string    `xml:"ETag"`
}
// CopyObjectPartHandler copies a byte range of an existing object into one
// part of an ongoing multipart upload.
// https://docs.aws.amazon.com/AmazonS3/latest/dev/CopyingObjctsUsingRESTMPUapi.html
// https://docs.aws.amazon.com/AmazonS3/latest/API/API_UploadPartCopy.html
func (s3a *S3ApiServer) CopyObjectPartHandler(w http.ResponseWriter, r *http.Request) {

	dstBucket, dstObject := s3_constants.GetBucketAndObject(r)

	// Copy source path.
	cpSrcPath, err := url.QueryUnescape(r.Header.Get("X-Amz-Copy-Source"))
	if err != nil {
		// Save unescaped string as is.
		cpSrcPath = r.Header.Get("X-Amz-Copy-Source")
	}

	srcBucket, srcObject := pathToBucketAndObject(cpSrcPath)
	// If source object is empty or bucket is empty, reply back invalid copy source.
	if srcObject == "" || srcBucket == "" {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidCopySource)
		return
	}

	uploadID := r.URL.Query().Get("uploadId")
	partIDString := r.URL.Query().Get("partNumber")

	partID, err := strconv.Atoi(partIDString)
	if err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidPart)
		return
	}

	glog.V(3).Infof("CopyObjectPartHandler %s %s => %s part %d", srcBucket, srcObject, dstBucket, partID)

	// check partID with maximum part ID for multipart objects
	if partID > globalMaxPartID {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidMaxParts)
		return
	}

	rangeHeader := r.Header.Get("x-amz-copy-source-range")

	dstUrl := fmt.Sprintf("http://%s%s/%s/%04d.part",
		s3a.option.Filer.ToHttpAddress(), s3a.genUploadsFolder(dstBucket), uploadID, partID)
	srcUrl := fmt.Sprintf("http://%s%s/%s%s",
		s3a.option.Filer.ToHttpAddress(), s3a.option.BucketsPath, srcBucket, urlPathEscape(srcObject))

	// Stream the requested range from the filer, presenting the read JWT if configured.
	dataReader, err := util.ReadUrlAsReaderCloser(srcUrl, s3a.maybeGetFilerJwtAuthorizationToken(false), rangeHeader)
	if err != nil {
		s3err.WriteErrorResponse(w, r, s3err.ErrInvalidCopySource)
		return
	}
	defer dataReader.Close()

	glog.V(2).Infof("copy from %s to %s", srcUrl, dstUrl)
	destination := fmt.Sprintf("%s/%s%s", s3a.option.BucketsPath, dstBucket, dstObject)
	etag, errCode := s3a.putToFiler(r, dstUrl, dataReader, destination)

	if errCode != s3err.ErrNone {
		s3err.WriteErrorResponse(w, r, errCode)
		return
	}

	setEtag(w, etag)

	response := CopyPartResult{
		ETag:         etag,
		LastModified: time.Now().UTC(),
	}

	writeSuccessResponseXML(w, r, response)
}
func replaceDirective(reqHeader http.Header) (replaceMeta, replaceTagging bool) {
	return reqHeader.Get(s3_constants.AmzUserMetaDirective) == DirectiveReplace, reqHeader.Get(s3_constants.AmzObjectTaggingDirective) == DirectiveReplace
}
// processMetadata merges the existing entry's metadata and tags into the
// request headers, honoring the COPY/REPLACE directives.
func processMetadata(reqHeader, existing http.Header, replaceMeta, replaceTagging bool, getTags func(parentDirectoryPath string, entryName string) (tags map[string]string, err error), dir, name string) (err error) {
	// Carry over the storage class when the request does not set one.
	if sc := reqHeader.Get(s3_constants.AmzStorageClass); len(sc) == 0 {
		if sc := existing[s3_constants.AmzStorageClass]; len(sc) > 0 {
			reqHeader[s3_constants.AmzStorageClass] = sc
		}
	}

	if !replaceMeta {
		// COPY directive: drop user metadata from the request, keep the existing metadata.
		for header := range reqHeader {
			if strings.HasPrefix(header, s3_constants.AmzUserMetaPrefix) {
				delete(reqHeader, header)
			}
		}
		for k, v := range existing {
			if strings.HasPrefix(k, s3_constants.AmzUserMetaPrefix) {
				reqHeader[k] = v
			}
		}
	}

	if !replaceTagging {
		// COPY directive: drop tags from the request and re-read the existing tags.
		for header := range reqHeader {
			if strings.HasPrefix(header, s3_constants.AmzObjectTagging) {
				delete(reqHeader, header)
			}
		}

		found := false
		for k := range existing {
			if strings.HasPrefix(k, s3_constants.AmzObjectTaggingPrefix) {
				found = true
				break
			}
		}

		if found {
			tags, err := getTags(dir, name)
			if err != nil {
				return err
			}

			var tagArr []string
			for k, v := range tags {
				tagArr = append(tagArr, fmt.Sprintf("%s=%s", k, v))
			}
			tagStr := strutil.JoinFields(tagArr, "&")
			reqHeader.Set(s3_constants.AmzObjectTagging, tagStr)
		}
	}
	return
}
// processMetadataBytes computes the extended attributes for the destination
// entry from the existing attributes and the request headers, honoring the
// COPY/REPLACE directives.
func processMetadataBytes(reqHeader http.Header, existing map[string][]byte, replaceMeta, replaceTagging bool) (metadata map[string][]byte, err error) {
	metadata = make(map[string][]byte)

	if sc := existing[s3_constants.AmzStorageClass]; len(sc) > 0 {
		metadata[s3_constants.AmzStorageClass] = sc
	}
	if sc := reqHeader.Get(s3_constants.AmzStorageClass); len(sc) > 0 {
		metadata[s3_constants.AmzStorageClass] = []byte(sc)
	}

	if replaceMeta {
		for header, values := range reqHeader {
			if strings.HasPrefix(header, s3_constants.AmzUserMetaPrefix) {
				for _, value := range values {
					metadata[header] = []byte(value)
				}
			}
		}
	} else {
		for k, v := range existing {
			if strings.HasPrefix(k, s3_constants.AmzUserMetaPrefix) {
				metadata[k] = v
			}
		}
	}
	if replaceTagging {
		if tags := reqHeader.Get(s3_constants.AmzObjectTagging); tags != "" {
			parsedTags, err := parseTagsHeader(tags)
			if err != nil {
				return nil, err
			}
			err = ValidateTags(parsedTags)
			if err != nil {
				return nil, err
			}
			for k, v := range parsedTags {
				metadata[s3_constants.AmzObjectTagging+"-"+k] = []byte(v)
			}
		}
	} else {
		for k, v := range existing {
			if strings.HasPrefix(k, s3_constants.AmzObjectTagging) {
				metadata[k] = v
			}
		}
		delete(metadata, s3_constants.AmzTagCount)
	}

	return
}
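// Note: maybeGetFilerJwtAuthorizationToken, used by the copy handlers above,
// is defined elsewhere in this package. As a hedged, hypothetical sketch of
// the idea (not the actual implementation; it would also need
// github.com/golang-jwt/jwt imported): pick the write or read signing key
// and return a short-lived HMAC-signed token, or "" when no key is configured.
//
//	func exampleFilerJwt(isWrite bool, writeKey, readKey string) string {
//		key := readKey
//		if isWrite {
//			key = writeKey
//		}
//		if key == "" {
//			return "" // no key configured: send the request unauthenticated
//		}
//		t := jwt.NewWithClaims(jwt.SigningMethodHS256, jwt.StandardClaims{
//			ExpiresAt: time.Now().Add(10 * time.Second).Unix(),
//		})
//		signed, _ := t.SignedString([]byte(key))
//		return signed
//	}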