
Linting, acronyms in names

pull/179/head
Kebert Xela, 6 years ago
commit 078c06b58c
8 changed files:

1. README.md (16)
2. server.go (12)
3. server_test.go (6)
4. static/js/dropzone.js (10)
5. torrent.go (8)
6. torrent/torrent.go (12)
7. torrent_test.go (4)
8. upload.go (6)
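
The renames below all follow Go's initialisms convention, which golint enforces: acronyms in identifiers are written uniformly (URL, IP, ID), never mixed-case (Url, Ip, Id). A minimal sketch of the rule, using hypothetical names that are not from the repo:

```go
package main

import "fmt"

// Hypothetical type, not from the repo: per golint's initialisms rule,
// acronyms keep uniform case, so URL/ID rather than Url/Id.
type fileRecord struct {
	URL string // golint would flag `Url`
	ID  string // golint would flag `Id`
}

// siteURL (not siteUrl) follows the same rule for function names.
func siteURL(host string) string {
	return "https://" + host + "/"
}

func main() {
	rec := fileRecord{URL: siteURL("example.com") + "abc", ID: "abc"}
	fmt.Println(rec.URL) // prints https://example.com/abc
}
```

An acronym at the start of an unexported identifier goes fully lowercase instead (parsedURL, but a local named url would shadow the net/url package, hence the URL rename in torrent.go below).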

README.md (16)

@@ -16,7 +16,7 @@ Self-hosted file/media sharing website.
 ### Screenshots
-<img width="200" src="https://user-images.githubusercontent.com/4650950/51735725-0033cf00-203d-11e9-8a97-f543330a92ec.png" /> <img width="200" src="https://user-images.githubusercontent.com/4650950/51735724-0033cf00-203d-11e9-8fe0-77442eaa8705.png" /> <img width="200" src="https://user-images.githubusercontent.com/4650950/51735726-0033cf00-203d-11e9-9fca-095a97e46ce8.png" /> <img width="200" src="https://user-images.githubusercontent.com/4650950/51735728-0033cf00-203d-11e9-90e9-4f2d36332fc4.png" />
+<img width="200" src="https://user-images.githubusercontent.com/4650950/51735725-0033cf00-203d-11e9-8a97-f543330a92ec.png" /> <img width="200" src="https://user-images.githubusercontent.com/4650950/51735724-0033cf00-203d-11e9-8fe0-77442eaa8705.png" /> <img width="200" src="https://user-images.githubusercontent.com/4650950/51735726-0033cf00-203d-11e9-9fca-095a97e46ce8.png" /> <img width="200" src="https://user-images.githubusercontent.com/4650950/51735728-0033cf00-203d-11e9-90e9-4f2d36332fc4.png" />
 Get release and run

@@ -24,7 +24,7 @@ Get release and run
 1. Grab the latest binary from the [releases](https://github.com/andreimarcu/linx-server/releases)
 2. Run ```./linx-server```
 Usage
 -----

@@ -50,7 +50,7 @@ allowhotlink = true
 - ```-refererpolicy "..."``` -- Referrer-Policy header for pages (default is "same-origin")
 - ```-filereferrerpolicy "..."``` -- Referrer-Policy header for files (default is "same-origin")
 - ```-xframeoptions "..." ``` -- X-Frame-Options header (default is "SAMEORIGIN")
-- ```-remoteuploads``` -- (optionally) enable remote uploads (/upload?url=https://...)
+- ```-remoteuploads``` -- (optionally) enable remote uploads (/upload?url=https://...)
 - ```-nologs``` -- (optionally) disable request logs in stdout
 - ```-force-random-filename``` -- (optionally) force the use of random filenames

@@ -69,15 +69,15 @@ The following storage backends are available:
 |S3|Use with any S3-compatible provider.<br> This implementation will stream files through the linx instance (every download will request and stream the file from the S3 bucket).<br><br>For high-traffic environments, one might consider using an external caching layer such as described [in this article](https://blog.sentry.io/2017/03/01/dodging-s3-downtime-with-nginx-and-haproxy.html).|```-s3-endpoint https://...``` -- S3 endpoint<br>```-s3-region us-east-1``` -- S3 region<br>```-s3-bucket mybucket``` -- S3 bucket to use for files and metadata<br>```-s3-force-path-style``` (optional) -- force path-style addresing (e.g. https://<span></span>s3.amazonaws.com/linx/example.txt)<br><br>Environment variables to provide:<br>```AWS_ACCESS_KEY_ID``` -- the S3 access key<br>```AWS_SECRET_ACCESS_KEY ``` -- the S3 secret key<br>```AWS_SESSION_TOKEN``` (optional) -- the S3 session token|
-#### SSL with built-in server
+#### SSL with built-in server
 - ```-certfile path/to/your.crt``` -- Path to the ssl certificate (required if you want to use the https server)
 - ```-keyfile path/to/your.key``` -- Path to the ssl key (required if you want to use the https server)
-#### Use with http proxy
-- ```-realip``` -- let linx-server know you (nginx, etc) are providing the X-Real-IP and/or X-Forwarded-For headers.
+#### Use with http proxy
+- ```-realIP``` -- let linx-server know you (nginx, etc) are providing the X-Real-IP and/or X-Forwarded-For headers.
 #### Use with fastcgi
-- ```-fastcgi``` -- serve through fastcgi
+- ```-fastcgi``` -- serve through fastcgi

@@ -112,7 +112,7 @@ server {
     ...
     server_name yourlinx.example.org;
     ...
     client_max_body_size 4096M;
     location / {
         fastcgi_pass 127.0.0.1:8080;

server.go (12)

@@ -52,7 +52,7 @@ var Config struct {
 	xFrameOptions  string
 	maxSize        int64
 	maxExpiry      uint64
-	realIp         bool
+	realIP         bool
 	noLogs         bool
 	allowHotlink   bool
 	fastcgi        bool

@@ -83,8 +83,8 @@ func setup() *web.Mux {
 	// middleware
 	mux.Use(middleware.RequestID)
-	if Config.realIp {
+	if Config.realIP {
 		mux.Use(middleware.RealIP)
 	}
 	if !Config.noLogs {

@@ -124,12 +124,12 @@ func setup() *web.Mux {
 			Config.siteURL = Config.siteURL + "/"
 		}
-		parsedUrl, err := url.Parse(Config.siteURL)
+		parsedURL, err := url.Parse(Config.siteURL)
 		if err != nil {
 			log.Fatal("Could not parse siteurl:", err)
 		}
-		Config.sitePath = parsedUrl.Path
+		Config.sitePath = parsedURL.Path
 	} else {
 		Config.sitePath = "/"
 	}

@@ -233,7 +233,7 @@ func main() {
 		"path to ssl certificate (for https)")
 	flag.StringVar(&Config.keyFile, "keyfile", "",
 		"path to ssl key (for https)")
-	flag.BoolVar(&Config.realIp, "realip", false,
+	flag.BoolVar(&Config.realIP, "realIP", false,
 		"use X-Real-IP/X-Forwarded-For headers as original host")
 	flag.BoolVar(&Config.fastcgi, "fastcgi", false,
 		"serve through fastcgi")

server_test.go (6)

@@ -19,7 +19,7 @@ import (
 type RespOkJSON struct {
 	Filename   string
-	Url        string
+	URL        string
 	Delete_Key string
 	Expiry     string
 	Size       string

@@ -1277,7 +1277,7 @@ func TestPutAndGetCLI(t *testing.T) {
 	// request file without wget user agent
 	w = httptest.NewRecorder()
-	req, err = http.NewRequest("GET", myjson.Url, nil)
+	req, err = http.NewRequest("GET", myjson.URL, nil)
 	if err != nil {
 		t.Fatal(err)
 	}

@@ -1290,7 +1290,7 @@ func TestPutAndGetCLI(t *testing.T) {
 	// request file with wget user agent
 	w = httptest.NewRecorder()
-	req, err = http.NewRequest("GET", myjson.Url, nil)
+	req, err = http.NewRequest("GET", myjson.URL, nil)
 	req.Header.Set("User-Agent", "wget")
 	if err != nil {
 		t.Fatal(err)
	}

static/js/dropzone.js (10)

@@ -303,7 +303,7 @@
       }
       return this._updateMaxFilesReachedClass();
     },
-    thumbnail: function(file, dataUrl) {
+    thumbnail: function(file, dataURL) {
       var thumbnailElement, _i, _len, _ref;
       if (file.previewElement) {
         file.previewElement.classList.remove("dz-file-preview");

@@ -311,7 +311,7 @@
         for (_i = 0, _len = _ref.length; _i < _len; _i++) {
           thumbnailElement = _ref[_i];
           thumbnailElement.alt = file.name;
-          thumbnailElement.src = dataUrl;
+          thumbnailElement.src = dataURL;
         }
         return setTimeout(((function(_this) {
           return function() {

@@ -1061,13 +1061,13 @@
         }
         return;
       }
-      return _this.createThumbnailFromUrl(file, fileReader.result, callback);
+      return _this.createThumbnailFromURL(file, fileReader.result, callback);
     };
   })(this);
   return fileReader.readAsDataURL(file);
 };

-Dropzone.prototype.createThumbnailFromUrl = function(file, imageUrl, callback) {
+Dropzone.prototype.createThumbnailFromURL = function(file, imageURL, callback) {
   var img;
   img = document.createElement("img");
   img.onload = (function(_this) {

@@ -1097,7 +1097,7 @@
   if (callback != null) {
     img.onerror = callback;
   }
-  return img.src = imageUrl;
+  return img.src = imageURL;
 };
 Dropzone.prototype.processQueue = function() {

torrent.go (8)

@@ -15,16 +15,16 @@ import (
 )

 func createTorrent(fileName string, f io.Reader, r *http.Request) ([]byte, error) {
-	url := getSiteURL(r) + Config.selifPath + fileName
-	chunk := make([]byte, torrent.TORRENT_PIECE_LENGTH)
+	URL := getSiteURL(r) + Config.selifPath + fileName
+	chunk := make([]byte, torrent.TorrentPieceLength)
 	t := torrent.Torrent{
 		Encoding: "UTF-8",
 		Info: torrent.TorrentInfo{
-			PieceLength: torrent.TORRENT_PIECE_LENGTH,
+			PieceLength: torrent.TorrentPieceLength,
 			Name:        fileName,
 		},
-		UrlList: []string{url},
+		URLList: []string{URL},
 	}

 	for {

torrent/torrent.go (12)

@@ -5,23 +5,23 @@ import (
 )

 const (
-	TORRENT_PIECE_LENGTH = 262144
+	TorrentPieceLength = 262144
 )

 type TorrentInfo struct {
 	PieceLength int    `bencode:"piece length"`
 	Pieces      string `bencode:"pieces"`
 	Name        string `bencode:"name"`
 	Length      int    `bencode:"length"`
 }

 type Torrent struct {
 	Encoding string      `bencode:"encoding"`
 	Info     TorrentInfo `bencode:"info"`
-	UrlList  []string    `bencode:"url-list"`
+	URLList  []string    `bencode:"url-list"`
 }

 func HashPiece(piece []byte) []byte {
 	h := sha1.New()
 	h.Write(piece)
 	return h.Sum(nil)
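
The Pieces field above holds the concatenated 20-byte SHA-1 digests of each fixed-size piece of the file; HashPiece computes one such digest. A standalone sketch of that layout, with a toy piece length and a hypothetical hashPieces helper (the repo hashes one chunk at a time while streaming):

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// Toy stand-in for TorrentPieceLength (262144 in the repo) so the
// example splits a short string into several pieces.
const pieceLength = 4

// hashPieces is a hypothetical helper showing what the bencoded
// "pieces" field holds: the concatenated 20-byte SHA-1 digest of each
// fixed-size piece, with a shorter final piece allowed.
func hashPieces(data []byte) []byte {
	var pieces []byte
	for len(data) > 0 {
		n := pieceLength
		if len(data) < n {
			n = len(data)
		}
		sum := sha1.Sum(data[:n]) // same digest HashPiece computes per chunk
		pieces = append(pieces, sum[:]...)
		data = data[n:]
	}
	return pieces
}

func main() {
	p := hashPieces([]byte("hello world")) // 11 bytes -> pieces of 4, 4, 3
	fmt.Println(len(p))                    // prints 60 (3 pieces x 20 bytes)
}
```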

torrent_test.go (4)

@@ -47,8 +47,8 @@ func TestCreateTorrent(t *testing.T) {
 	}

 	tracker := fmt.Sprintf("%s%s%s", Config.siteURL, Config.selifPath, fileName)
-	if decoded.UrlList[0] != tracker {
-		t.Fatalf("First entry in URL list was %s, expected %s", decoded.UrlList[0], tracker)
+	if decoded.URLList[0] != tracker {
+		t.Fatalf("First entry in URL list was %s, expected %s", decoded.URLList[0], tracker)
 	}
 }

upload.go (6)

@@ -170,15 +170,15 @@ func uploadRemote(c web.C, w http.ResponseWriter, r *http.Request) {
 	}

 	upReq := UploadRequest{}
-	grabUrl, _ := url.Parse(r.FormValue("url"))
-	resp, err := http.Get(grabUrl.String())
+	grabURL, _ := url.Parse(r.FormValue("url"))
+	resp, err := http.Get(grabURL.String())
 	if err != nil {
 		oopsHandler(c, w, r, RespAUTO, "Could not retrieve URL")
 		return
 	}

-	upReq.filename = filepath.Base(grabUrl.Path)
+	upReq.filename = filepath.Base(grabURL.Path)
 	upReq.src = http.MaxBytesReader(w, resp.Body, Config.maxSize)
 	upReq.deleteKey = r.FormValue("deletekey")
 	upReq.randomBarename = r.FormValue("randomize") == "yes"
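
The filename derivation in this hunk (parse the ?url= parameter, take the base of its path) can be sketched in isolation. remoteFilename below is a hypothetical helper, not code from the repo, and unlike uploadRemote it surfaces the parse error instead of discarding it:

```go
package main

import (
	"fmt"
	"net/url"
	"path/filepath"
)

// remoteFilename mirrors how uploadRemote derives a stored filename
// from the ?url= parameter: parse the URL, then take the base of its
// path (query string and host are ignored). Hypothetical helper.
func remoteFilename(raw string) (string, error) {
	grabURL, err := url.Parse(raw)
	if err != nil {
		return "", err
	}
	return filepath.Base(grabURL.Path), nil
}

func main() {
	name, err := remoteFilename("https://example.com/files/photo.jpg?dl=1")
	if err != nil {
		panic(err)
	}
	fmt.Println(name) // prints photo.jpg
}
```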
