Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Target project: tedomum/matrix-media-repo

Commits on Source (68)
Showing 996 additions and 766 deletions
......@@ -3,32 +3,11 @@ on:
push:
jobs:
build:
name: 'Build Go 1.16'
name: 'Build Go 1.18'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-go@v2
with:
go-version: '1.16'
go-version: '1.18'
- run: './build.sh'
complement:
name: 'Complement'
runs-on: ubuntu-latest
container:
# https://github.com/matrix-org/complement/blob/master/dockerfiles/ComplementCIBuildkite.Dockerfile
image: matrixdotorg/complement:latest
env:
CI: true
ports:
- 8448:8448
volumes:
- /var/run/docker.sock:/var/run/docker.sock
steps:
- uses: actions/checkout@v2
- run: docker build -t complement-media-repo -f Complement.Dockerfile .
- run: chmod +x ci-complement.sh
- run: ./ci-complement.sh
env:
CI: "true"
COMPLEMENT_BASE_IMAGE: "complement-media-repo"
COMPLEMENT_VERSION_CHECK_ITERATIONS: "400"
......@@ -15,6 +15,7 @@ assets.bin.go
media-repo*.yaml
homeserver.yaml
s3-probably-safe-to-delete.txt
# Binaries for programs and plugins
*.exe
......
......@@ -7,7 +7,82 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
## [Unreleased]
*Nothing yet*.
*Nothing yet.*
## [1.2.13] - February 12, 2023
### Deprecations
* In version 1.3.0, IPFS will no longer be supported as a datastore. Please migrate your data if you are using the IPFS support.
### Added
* Added the `Cross-Origin-Resource-Policy: cross-origin` header to all downloads, as per [MSC3828](https://github.com/matrix-org/matrix-spec-proposals/pull/3828).
* Added metrics for tracking which S3 operations are performed against datastores.
### Changed
* Swap out the HEIF library for better support towards [ARM64 Docker Images](https://github.com/turt2live/matrix-media-repo/issues/365).
* The development environment now uses Synapse as a homeserver. Test accounts will need recreating.
* Updated to Go 1.18
* Improved error message when thumbnailer cannot determine image dimensions.
### Fixed
* Return default media attributes if none have been explicitly set.
## [1.2.12] - March 31, 2022
### Fixed
* Fixed a permissions check issue on the new statistics endpoint released in v1.2.11
## [1.2.11] - March 31, 2022
### Added
* New config option to set user agent when requesting URL previews.
* Added support for `image/jxl` thumbnailing.
* Built-in early support for content ranges (being able to skip around in audio and video). This is only available if
caching is enabled.
* New config option for changing the log level.
* New (currently undocumented) binary `s3_consistency_check` to find objects in S3 which *might* not be referenced by
the media repo database. Note that this can include uploads in progress.
* Admin endpoint to GET users' usage statistics for a server.
### Removed
* Support for the in-memory cache has been removed. Redis or having no cache are now the only options.
* Support for the Redis config under `features` has been removed. It is now only available at the top level of the
config. See the sample config for more details.
### Fixed
* Fixed media being permanently lost when transferring to an (effectively) readonly S3 datastore.
* Purging non-existent files now won't cause errors.
* Fixed HEIF/HEIC thumbnailing. Note that this thumbnail type might cause increased memory usage.
* Ensure endpoints register in a stable way, making them predictably available.
* Reduced download hits to datastores when using Redis cache.
### Changed
* Updated support for post-[MSC3069](https://github.com/matrix-org/matrix-doc/pull/3069) homeservers.
* Updated the built-in oEmbed `providers.json`
## [1.2.10] - December 23rd, 2021
### Deprecation notices
In a future version (likely the next), the in-memory cache support will be removed. Instead, please use the Redis
caching that is now supported properly by this release, or disable caching if not applicable for your deployment.
### Added
* Added support for setting the Redis database number.
### Fixed
* Fixed an issue with the Redis config not being recognized at the root level.
## [1.2.9] - December 22nd, 2021
......@@ -308,7 +383,11 @@ a large database (more than about 100k uploaded files), run the following steps
* Various other features that would be expected like maximum/minimum size controls, rate limiting, etc. Check out the
sample config for a better idea of what else is possible.
[unreleased]: https://github.com/turt2live/matrix-media-repo/compare/v1.2.9...HEAD
[unreleased]: https://github.com/turt2live/matrix-media-repo/compare/v1.2.13...HEAD
[1.2.13]: https://github.com/turt2live/matrix-media-repo/compare/v1.2.12...v1.2.13
[1.2.12]: https://github.com/turt2live/matrix-media-repo/compare/v1.2.11...v1.2.12
[1.2.11]: https://github.com/turt2live/matrix-media-repo/compare/v1.2.10...v1.2.11
[1.2.10]: https://github.com/turt2live/matrix-media-repo/compare/v1.2.9...v1.2.10
[1.2.9]: https://github.com/turt2live/matrix-media-repo/compare/v1.2.8...v1.2.9
[1.2.8]: https://github.com/turt2live/matrix-media-repo/compare/v1.2.7...v1.2.8
[1.2.7]: https://github.com/turt2live/matrix-media-repo/compare/v1.2.6...v1.2.7
......
# ---- Stage 0 ----
# Builds media repo binaries
FROM golang:1.14-alpine AS builder
# Install build dependencies
RUN apk add --no-cache git musl-dev dos2unix build-base
WORKDIR /opt
COPY . /opt
RUN dos2unix ./build.sh
RUN ./build.sh
# ---- Stage 1 ----
# Final runtime stage.
FROM alpine
COPY --from=builder /opt/bin/media_repo /opt/bin/complement_hs /usr/local/bin/
RUN apk add --no-cache ca-certificates postgresql openssl dos2unix
RUN mkdir -p /data/media
COPY ./docker/complement.yaml /data/media-repo.yaml
ENV REPO_CONFIG=/data/media-repo.yaml
ENV SERVER_NAME=localhost
ENV PGDATA=/data/pgdata
ENV MEDIA_REPO_UNSAFE_FEDERATION=true
COPY ./docker/complement.sh ./docker/complement-run.sh /usr/local/bin/
RUN dos2unix /usr/local/bin/complement.sh /usr/local/bin/complement-run.sh
EXPOSE 8008
EXPOSE 8448
RUN chmod +x /usr/local/bin/complement.sh
RUN chmod +x /usr/local/bin/complement-run.sh
RUN mkdir -p /data/pgdata
RUN mkdir -p /run/postgresql
RUN chown postgres:postgres /data/pgdata
RUN chown postgres:postgres /run/postgresql
RUN su postgres -c initdb
RUN sh /usr/local/bin/complement.sh
CMD /usr/local/bin/complement-run.sh
\ No newline at end of file
# ---- Stage 0 ----
# Builds media repo binaries
FROM golang:1.16-alpine AS builder
FROM golang:1.18-alpine AS builder
# Install build dependencies
RUN apk add --no-cache git musl-dev dos2unix build-base
......@@ -16,7 +16,7 @@ FROM alpine
RUN mkdir /plugins
COPY --from=builder /opt/bin/plugin_antispam_ocr /plugins/
COPY --from=builder /opt/bin/media_repo /opt/bin/import_synapse /opt/bin/gdpr_export /opt/bin/gdpr_import /usr/local/bin/
COPY --from=builder /opt/bin/media_repo /opt/bin/import_synapse /opt/bin/export_synapse_for_import /opt/bin/gdpr_export /opt/bin/gdpr_import /opt/bin/s3_consistency_check /usr/local/bin/
RUN apk add --no-cache \
su-exec \
......
......@@ -28,10 +28,18 @@ once to ensure the assets are set up correctly: follow the
[compilation steps](https://docs.t2bot.io/matrix-media-repo/installing/method/compilation.html)
posted on docs.t2bot.io.
If you'd like to use a regular Matrix client to test the media repo, `docker-compose -f dev/docker-compose.yaml up`
will give you a [Conduit](https://conduit.rs/) homeserver behind an nginx reverse proxy which routes media requests to
`http://host.docker.internal:8001`. To test accurately, it is recommended to add the following homeserver configuration
to your media repo config:
This project offers a development environment you can use to test against a client and homeserver.
As a first-time setup, run:
```bash
docker run --rm -it -v ./dev/synapse-db:/data -e SYNAPSE_SERVER_NAME=localhost -e SYNAPSE_REPORT_STATS=no matrixdotorg/synapse:latest generate
```
After that, run `docker-compose -f dev/docker-compose.yaml up` whenever you want to bring the stack online. The homeserver will
be behind an nginx reverse proxy which routes media requests to `http://host.docker.internal:8001`. To test accurately,
it is recommended to add the following homeserver configuration to your media repo config:
```yaml
name: "localhost"
csApi: "http://localhost:8008" # This is exposed by the nginx container
......
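Once the containers are up, a quick smoke test can confirm the media repo and the nginx proxy are reachable. This is only a sketch: the ports come from the description above, the upload endpoint is the standard Matrix one, and the access token is a placeholder obtained from the dev homeserver.

```bash
# Media repo answering directly on port 8001 (the version endpoint is assumed to need no auth)
curl -s http://localhost:8001/_matrix/media/version

# Upload through the client-server API exposed by nginx on port 8008
curl -s -X POST "http://localhost:8008/_matrix/media/v3/upload?filename=hello.txt" \
  -H "Authorization: Bearer <access token>" \
  -H "Content-Type: text/plain" \
  --data-binary "hello world"
```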
......@@ -113,3 +113,15 @@ func RepoAdminRoute(next func(r *http.Request, rctx rcontext.RequestContext, use
return regularFunc(r, rctx)
}
}
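// GetRequestUserAdminStatus reports whether the requesting user is a global admin and whether
// they are an admin of the homeserver named in the request. If the local admin lookup fails,
// the error is logged and captured, and the user is treated as not being a local admin.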
func GetRequestUserAdminStatus(r *http.Request, rctx rcontext.RequestContext, user UserInfo) (bool, bool) {
isGlobalAdmin := util.IsGlobalAdmin(user.UserId) || user.IsShared
isLocalAdmin, err := matrix.IsUserAdmin(rctx, r.Host, user.AccessToken, r.RemoteAddr)
if err != nil {
sentry.CaptureException(err)
rctx.Log.Error("Error verifying local admin: " + err.Error())
return isGlobalAdmin, false
}
return isGlobalAdmin, isLocalAdmin
}
package custom
import (
"database/sql"
"encoding/json"
"github.com/getsentry/sentry-go"
"io/ioutil"
"net/http"
"github.com/getsentry/sentry-go"
"github.com/gorilla/mux"
"github.com/sirupsen/logrus"
"github.com/turt2live/matrix-media-repo/api"
......@@ -49,9 +51,21 @@ func GetAttributes(r *http.Request, rctx rcontext.RequestContext, user api.UserI
return api.AuthFailed()
}
// Check to see if the media exists
mediaDb := storage.GetDatabase().GetMediaStore(rctx)
media, err := mediaDb.Get(origin, mediaId)
if err != nil && err != sql.ErrNoRows {
rctx.Log.Error(err)
sentry.CaptureException(err)
return api.InternalServerError("failed to get media record")
}
if media == nil || err == sql.ErrNoRows {
return api.NotFoundError()
}
db := storage.GetDatabase().GetMediaAttributesStore(rctx)
attrs, err := db.GetAttributes(origin, mediaId)
attrs, err := db.GetAttributesDefaulted(origin, mediaId)
if err != nil {
rctx.Log.Error(err)
sentry.CaptureException(err)
......
......@@ -48,7 +48,7 @@ func PurgeRemoteMedia(r *http.Request, rctx rcontext.RequestContext, user api.Us
}
func PurgeIndividualRecord(r *http.Request, rctx rcontext.RequestContext, user api.UserInfo) interface{} {
isGlobalAdmin, isLocalAdmin := getPurgeRequestInfo(r, rctx, user)
isGlobalAdmin, isLocalAdmin := api.GetRequestUserAdminStatus(r, rctx, user)
localServerName := r.Host
params := mux.Vars(r)
......@@ -98,7 +98,7 @@ func PurgeIndividualRecord(r *http.Request, rctx rcontext.RequestContext, user a
}
func PurgeQuarantined(r *http.Request, rctx rcontext.RequestContext, user api.UserInfo) interface{} {
isGlobalAdmin, isLocalAdmin := getPurgeRequestInfo(r, rctx, user)
isGlobalAdmin, isLocalAdmin := api.GetRequestUserAdminStatus(r, rctx, user)
localServerName := r.Host
var affected []*types.Media
......@@ -168,7 +168,7 @@ func PurgeOldMedia(r *http.Request, rctx rcontext.RequestContext, user api.UserI
}
func PurgeUserMedia(r *http.Request, rctx rcontext.RequestContext, user api.UserInfo) interface{} {
isGlobalAdmin, isLocalAdmin := getPurgeRequestInfo(r, rctx, user)
isGlobalAdmin, isLocalAdmin := api.GetRequestUserAdminStatus(r, rctx, user)
if !isGlobalAdmin && !isLocalAdmin {
return api.AuthFailed()
}
......@@ -220,7 +220,7 @@ func PurgeUserMedia(r *http.Request, rctx rcontext.RequestContext, user api.User
}
func PurgeRoomMedia(r *http.Request, rctx rcontext.RequestContext, user api.UserInfo) interface{} {
isGlobalAdmin, isLocalAdmin := getPurgeRequestInfo(r, rctx, user)
isGlobalAdmin, isLocalAdmin := api.GetRequestUserAdminStatus(r, rctx, user)
if !isGlobalAdmin && !isLocalAdmin {
return api.AuthFailed()
}
......@@ -300,7 +300,7 @@ func PurgeRoomMedia(r *http.Request, rctx rcontext.RequestContext, user api.User
}
func PurgeDomainMedia(r *http.Request, rctx rcontext.RequestContext, user api.UserInfo) interface{} {
isGlobalAdmin, isLocalAdmin := getPurgeRequestInfo(r, rctx, user)
isGlobalAdmin, isLocalAdmin := api.GetRequestUserAdminStatus(r, rctx, user)
if !isGlobalAdmin && !isLocalAdmin {
return api.AuthFailed()
}
......@@ -343,15 +343,3 @@ func PurgeDomainMedia(r *http.Request, rctx rcontext.RequestContext, user api.Us
return &api.DoNotCacheResponse{Payload: map[string]interface{}{"purged": true, "affected": mxcs}}
}
func getPurgeRequestInfo(r *http.Request, rctx rcontext.RequestContext, user api.UserInfo) (bool, bool) {
isGlobalAdmin := util.IsGlobalAdmin(user.UserId) || user.IsShared
isLocalAdmin, err := matrix.IsUserAdmin(rctx, r.Host, user.AccessToken, r.RemoteAddr)
if err != nil {
sentry.CaptureException(err)
rctx.Log.Error("Error verifying local admin: " + err.Error())
return isGlobalAdmin, false
}
return isGlobalAdmin, isLocalAdmin
}
package custom
import (
"encoding/json"
"fmt"
"github.com/getsentry/sentry-go"
"net/http"
"strconv"
"strings"
"github.com/gorilla/mux"
"github.com/sirupsen/logrus"
"github.com/turt2live/matrix-media-repo/api"
"github.com/turt2live/matrix-media-repo/common/rcontext"
"github.com/turt2live/matrix-media-repo/storage"
"github.com/turt2live/matrix-media-repo/storage/stores"
"github.com/turt2live/matrix-media-repo/types"
"github.com/turt2live/matrix-media-repo/util"
)
......@@ -207,3 +212,131 @@ func GetUploadsUsage(r *http.Request, rctx rcontext.RequestContext, user api.Use
return &api.DoNotCacheResponse{Payload: parsed}
}
// GetUsersUsageStats attempts to provide a loose equivalent to this Synapse admin endpoint:
// https://matrix-org.github.io/synapse/develop/admin_api/statistics.html#users-media-usage-statistics
func GetUsersUsageStats(r *http.Request, rctx rcontext.RequestContext, user api.UserInfo) interface{} {
params := mux.Vars(r)
qs := r.URL.Query()
var err error
serverName := params["serverName"]
isGlobalAdmin, isLocalAdmin := api.GetRequestUserAdminStatus(r, rctx, user)
if !isGlobalAdmin && (serverName != r.Host || !isLocalAdmin) {
return api.AuthFailed()
}
orderBy := qs.Get("order_by")
if len(qs["order_by"]) == 0 {
orderBy = "user_id"
}
if !util.ArrayContains(stores.UsersUsageStatsSorts, orderBy) {
acceptedValsStr, _ := json.Marshal(stores.UsersUsageStatsSorts)
acceptedValsStr = []byte(strings.ReplaceAll(string(acceptedValsStr), "\"", "'"))
return api.BadRequest(
fmt.Sprintf("Query parameter 'order_by' must be one of %s", acceptedValsStr))
}
var start int64 = 0
if len(qs["from"]) > 0 {
start, err = strconv.ParseInt(qs.Get("from"), 10, 64)
if err != nil || start < 0 {
return api.BadRequest("Query parameter 'from' must be a non-negative integer")
}
}
var limit int64 = 100
if len(qs["limit"]) > 0 {
limit, err = strconv.ParseInt(qs.Get("limit"), 10, 64)
if err != nil || limit < 0 {
return api.BadRequest("Query parameter 'limit' must be a non-negative integer")
}
}
const unspecifiedTS int64 = -1
fromTS := unspecifiedTS
if len(qs["from_ts"]) > 0 {
fromTS, err = strconv.ParseInt(qs.Get("from_ts"), 10, 64)
if err != nil || fromTS < 0 {
return api.BadRequest("Query parameter 'from_ts' must be a non-negative integer")
}
}
untilTS := unspecifiedTS
if len(qs["until_ts"]) > 0 {
untilTS, err = strconv.ParseInt(qs.Get("until_ts"), 10, 64)
if err != nil || untilTS < 0 {
return api.BadRequest("Query parameter 'until_ts' must be a non-negative integer")
} else if untilTS <= fromTS {
return api.BadRequest("Query parameter 'until_ts' must be greater than 'from_ts'")
}
}
searchTerm := qs.Get("search_term")
if searchTerm == "" && len(qs["search_term"]) > 0 {
return api.BadRequest("Query parameter 'search_term' cannot be an empty string")
}
isAscendingOrder := true
direction := qs.Get("dir")
if direction == "f" || len(qs["dir"]) == 0 {
// Use default order
} else if direction == "b" {
isAscendingOrder = false
} else {
return api.BadRequest("Query parameter 'dir' must be one of ['f', 'b']")
}
rctx = rctx.LogWithFields(logrus.Fields{
"serverName": serverName,
"order_by": orderBy,
"from": start,
"limit": limit,
"from_ts": fromTS,
"until_ts": untilTS,
"search_term": searchTerm,
"isAscendingOrder": isAscendingOrder,
})
db := storage.GetDatabase().GetMediaStore(rctx)
stats, totalCount, err := db.GetUsersUsageStatsForServer(
serverName,
orderBy,
start,
limit,
fromTS,
untilTS,
searchTerm,
isAscendingOrder)
if err != nil {
rctx.Log.Error(err)
sentry.CaptureException(err)
return api.InternalServerError("Failed to get users' usage stats on specified server")
}
var users []map[string]interface{}
if len(stats) == 0 {
users = []map[string]interface{}{}
} else {
for _, record := range stats {
users = append(users, map[string]interface{}{
"media_count": record.MediaCount,
"media_length": record.MediaLength,
"user_id": record.UserId,
})
}
}
var result = map[string]interface{}{
"users": users,
"total": totalCount,
}
if (start + limit) < totalCount {
result["next_token"] = start + int64(len(stats))
}
return &api.DoNotCacheResponse{Payload: result}
}
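For reference, a sketch of querying the statistics endpoint served by this handler. The path and query parameters come from the route registration and the validation above; the host, server name, and bearer token are placeholders:

```bash
curl -s "http://localhost:8001/_matrix/media/v3/admin/usage/example.org/users-stats?order_by=user_id&dir=b&limit=10" \
  -H "Authorization: Bearer <admin access token>"
# The handler responds with {"users": [{"user_id": ..., "media_count": ..., "media_length": ...}],
# "total": ...}, and includes "next_token" only when more rows remain.
```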
......@@ -6,8 +6,9 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/getsentry/sentry-go"
"io"
"io/ioutil"
"math"
"mime"
"net"
"net/http"
......@@ -15,6 +16,8 @@ import (
"strconv"
"strings"
"github.com/getsentry/sentry-go"
"github.com/alioygur/is"
"github.com/prometheus/client_golang/prometheus"
"github.com/sebest/xff"
......@@ -79,12 +82,14 @@ func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
w.Header().Set("Access-Control-Allow-Origin", "*")
w.Header().Set("Content-Security-Policy", "sandbox; default-src 'none'; script-src 'none'; plugin-types application/pdf; style-src 'unsafe-inline'; media-src 'self'; object-src 'self';")
w.Header().Set("Cross-Origin-Resource-Policy", "cross-origin")
w.Header().Set("X-Content-Security-Policy", "sandbox;")
w.Header().Set("X-Robots-Tag", "noindex, nofollow, noarchive, noimageindex")
w.Header().Set("Server", "matrix-media-repo")
// Process response
var res interface{} = api.AuthFailed()
var rctx rcontext.RequestContext
if util.IsServerOurs(r.Host) || h.ignoreHost {
contextLog.Info("Host is valid - processing request")
cfg := config.GetDomain(r.Host)
......@@ -100,7 +105,7 @@ func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
ctx = context.WithValue(ctx, "mr.logger", contextLog)
ctx = context.WithValue(ctx, "mr.serverConfig", cfg)
ctx = context.WithValue(ctx, "mr.request", r)
rctx := rcontext.RequestContext{Context: ctx, Log: contextLog, Config: *cfg, Request: r}
rctx = rcontext.RequestContext{Context: ctx, Log: contextLog, Config: *cfg, Request: r}
r = r.WithContext(rctx)
metrics.HttpRequests.With(prometheus.Labels{
......@@ -164,6 +169,91 @@ func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}
break
case *r0.DownloadMediaResponse:
// XXX: This range parsing isn't perfect, but works fine enough for now
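// Only single ranges of the form "bytes=<start>-<end>" or "bytes=<start>-" are handled;
// suffix ranges ("bytes=-N") and multipart ranges fall through to a normal full response.
// Open-ended ranges are capped to a configurable chunk size (10MB by default) further down.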
rangeStart := int64(0)
rangeEnd := int64(0)
grabBytes := int64(0)
doRange := false
if r.Header.Get("Range") != "" && result.SizeBytes > 0 && rctx.Request != nil && config.Get().Redis.Enabled {
rnge := r.Header.Get("Range")
if !strings.HasPrefix(rnge, "bytes=") {
statusCode = http.StatusRequestedRangeNotSatisfiable
res = api.BadRequest("Improper range units")
break
}
if !strings.Contains(rnge, ",") && !strings.HasPrefix(rnge, "bytes=-") {
parts := strings.Split(rnge[len("bytes="):], "-")
if len(parts) <= 2 {
rstart, err := strconv.ParseInt(parts[0], 10, 64)
if err != nil {
statusCode = http.StatusRequestedRangeNotSatisfiable
res = api.BadRequest("Improper start of range")
break
}
if rstart < 0 {
statusCode = http.StatusRequestedRangeNotSatisfiable
res = api.BadRequest("Improper start of range: negative")
break
}
rend := int64(-1)
if len(parts) > 1 && parts[1] != "" {
rend, err = strconv.ParseInt(parts[1], 10, 64)
if err != nil {
statusCode = http.StatusRequestedRangeNotSatisfiable
res = api.BadRequest("Improper end of range")
break
}
if rend < 1 {
statusCode = http.StatusRequestedRangeNotSatisfiable
res = api.BadRequest("Improper end of range: negative")
break
}
if rend >= result.SizeBytes {
statusCode = http.StatusRequestedRangeNotSatisfiable
res = api.BadRequest("Improper end of range: out of bounds")
break
}
if rend <= rstart {
statusCode = http.StatusRequestedRangeNotSatisfiable
res = api.BadRequest("Start must be before end")
break
}
if (rstart + rend) >= result.SizeBytes {
statusCode = http.StatusRequestedRangeNotSatisfiable
res = api.BadRequest("Range too large")
break
}
grabBytes = rend - rstart
} else {
add := int64(10485760) // 10mb default
if rctx.Config.Downloads.DefaultRangeChunkSizeBytes > 0 {
add = rctx.Config.Downloads.DefaultRangeChunkSizeBytes
}
rend = int64(math.Min(float64(rstart+add), float64(result.SizeBytes-1)))
grabBytes = (rend - rstart) + 1
}
rangeStart = rstart
rangeEnd = rend
if (rangeEnd-rangeStart) <= 0 || grabBytes <= 0 {
statusCode = http.StatusRequestedRangeNotSatisfiable
res = api.BadRequest("Range invalid at last pass")
break
}
doRange = true
}
}
}
metrics.HttpResponses.With(prometheus.Labels{
"host": r.Host,
"action": h.action,
......@@ -187,6 +277,9 @@ func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Cache-Control", "private, max-age=259200") // 3 days
w.Header().Set("Content-Type", contentType)
if result.SizeBytes > 0 {
if config.Get().Redis.Enabled {
w.Header().Set("Accept-Ranges", "bytes")
}
w.Header().Set("Content-Length", fmt.Sprint(result.SizeBytes))
}
disposition := result.TargetDisposition
......@@ -222,8 +315,36 @@ func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
} else {
w.Header().Set("Content-Disposition", disposition+"; filename*=utf-8''"+url.QueryEscape(fname))
}
defer result.Data.Close()
writeResponseData(w, result.Data, result.SizeBytes)
if doRange {
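// Skip to the start of the requested range by discarding rangeStart bytes from the stream,
// send exactly grabBytes bytes with a 206 Partial Content response, then drain the rest.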
_, err = io.CopyN(ioutil.Discard, result.Data, rangeStart)
if err != nil {
// Should only blow up this request
panic(err)
}
expectedBytes := grabBytes
w.Header().Set("Content-Range", fmt.Sprintf("bytes %d-%d/%d", rangeStart, rangeEnd, result.SizeBytes))
w.Header().Set("Content-Length", fmt.Sprint(expectedBytes))
w.WriteHeader(http.StatusPartialContent)
b, err := io.CopyN(w, result.Data, expectedBytes)
if err != nil {
// Should only blow up this request
panic(err)
}
// Discard anything that remains
_, _ = io.Copy(ioutil.Discard, result.Data)
if expectedBytes > 0 && b != expectedBytes {
// Should only blow up this request
panic(errors.New("mismatch transfer size"))
}
} else {
writeResponseData(w, result.Data, result.SizeBytes)
}
return // Prevent sending conflicting responses
case *r0.IdenticonResponse:
metrics.HttpResponses.With(prometheus.Labels{
......@@ -245,7 +366,7 @@ func (h handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
}).Inc()
w.Header().Set("Cache-Control", "private, max-age=259200") // 3 days
w.Header().Set("Content-Type", "text/html; charset=UTF-8")
w.Header().Set("Content-Security-Policy", "") // We're serving HTML, so take away the CSP
w.Header().Set("Content-Security-Policy", "") // We're serving HTML, so take away the CSP
w.Header().Set("X-Content-Security-Policy", "") // We're serving HTML, so take away the CSP
io.Copy(w, bytes.NewBuffer([]byte(result.HTML)))
return
......
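A usage sketch for the ranged downloads added above (host and media ID are placeholders; per the code, ranges are only honoured when Redis caching is enabled and the file is large enough):

```bash
curl -s -o part.bin -D - \
  -H "Range: bytes=0-1023" \
  "http://localhost:8001/_matrix/media/v3/download/example.org/<mediaId>"
# A ranged response comes back as "HTTP/1.1 206 Partial Content" with Content-Range and
# Content-Length describing the slice; otherwise the full file is returned with a 200.
```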
......@@ -30,6 +30,11 @@ type route struct {
handler handler
}
type definedRoute struct {
path string
route route
}
var srv *http.Server
var waitGroup = &sync.WaitGroup{}
var reload = false
......@@ -66,6 +71,7 @@ func Init() *sync.WaitGroup {
domainUsageHandler := handler{api.RepoAdminRoute(custom.GetDomainUsage), "domain_usage", counter, false}
userUsageHandler := handler{api.RepoAdminRoute(custom.GetUserUsage), "user_usage", counter, false}
uploadsUsageHandler := handler{api.RepoAdminRoute(custom.GetUploadsUsage), "uploads_usage", counter, false}
usersUsageStatsHandler := handler{api.AccessTokenRequiredRoute(custom.GetUsersUsageStats), "users_usage_stats", counter, false}
getBackgroundTaskHandler := handler{api.RepoAdminRoute(custom.GetTask), "get_background_task", counter, false}
listAllBackgroundTasksHandler := handler{api.RepoAdminRoute(custom.ListAllTasks), "list_all_background_tasks", counter, false}
listUnfinishedBackgroundTasksHandler := handler{api.RepoAdminRoute(custom.ListUnfinishedTasks), "list_unfinished_background_tasks", counter, false}
......@@ -85,89 +91,90 @@ func Init() *sync.WaitGroup {
getMediaAttrsHandler := handler{api.AccessTokenRequiredRoute(custom.GetAttributes), "get_media_attributes", counter, false}
setMediaAttrsHandler := handler{api.AccessTokenRequiredRoute(custom.SetAttributes), "set_media_attributes", counter, false}
routes := make(map[string]route)
routes := make([]definedRoute, 0)
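// Using a slice instead of the previous map keeps registration order deterministic, which is
// likely what the changelog entry about endpoints registering "in a stable way" refers to.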
// r0 is typically clients and v1 is typically servers. v1 is deprecated.
// unstable is, well, unstable. unstable/io.t2bot.media is to comply with MSC2324
// v3 is Matrix 1.1 stuff
versions := []string{"r0", "v1", "v3", "unstable", "unstable/io.t2bot.media"}
// Things that don't need a version
routes["/_matrix/media/version"] = route{"GET", versionHandler}
routes = append(routes, definedRoute{"/_matrix/media/version", route{"GET", versionHandler}})
for _, version := range versions {
// Standard routes we have to handle
routes["/_matrix/media/"+version+"/upload"] = route{"POST", uploadHandler}
routes["/_matrix/media/"+version+"/download/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}"] = route{"GET", downloadHandler}
routes["/_matrix/media/"+version+"/download/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}/{filename:.+}"] = route{"GET", downloadHandler}
routes["/_matrix/media/"+version+"/thumbnail/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}"] = route{"GET", thumbnailHandler}
routes["/_matrix/media/"+version+"/preview_url"] = route{"GET", previewUrlHandler}
routes["/_matrix/media/"+version+"/identicon/{seed:.*}"] = route{"GET", identiconHandler}
routes["/_matrix/media/"+version+"/config"] = route{"GET", configHandler}
routes["/_matrix/client/"+version+"/logout"] = route{"POST", logoutHandler}
routes["/_matrix/client/"+version+"/logout/all"] = route{"POST", logoutAllHandler}
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/upload", route{"POST", uploadHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/download/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}/{filename:.+}", route{"GET", downloadHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/download/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}", route{"GET", downloadHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/thumbnail/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}", route{"GET", thumbnailHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/preview_url", route{"GET", previewUrlHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/identicon/{seed:.*}", route{"GET", identiconHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/config", route{"GET", configHandler}})
routes = append(routes, definedRoute{"/_matrix/client/" + version + "/logout", route{"POST", logoutHandler}})
routes = append(routes, definedRoute{"/_matrix/client/" + version + "/logout/all", route{"POST", logoutAllHandler}})
// Routes that we define but are not part of the spec (management)
routes["/_matrix/media/"+version+"/admin/purge_remote"] = route{"POST", purgeRemote} // deprecated
routes["/_matrix/media/"+version+"/admin/purge/remote"] = route{"POST", purgeRemote}
routes["/_matrix/media/"+version+"/admin/purge/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}"] = route{"POST", purgeOneHandler}
routes["/_matrix/media/"+version+"/admin/purge/quarantined"] = route{"POST", purgeQuarantinedHandler}
routes["/_matrix/media/"+version+"/admin/purge/user/{userId:[^/]+}"] = route{"POST", purgeUserMediaHandler}
routes["/_matrix/media/"+version+"/admin/purge/room/{roomId:[^/]+}"] = route{"POST", purgeRoomHandler}
routes["/_matrix/media/"+version+"/admin/purge/server/{serverName:[^/]+}"] = route{"POST", purgeDomainHandler}
routes["/_matrix/media/"+version+"/admin/purge/old"] = route{"POST", purgeOldHandler}
routes["/_matrix/media/"+version+"/admin/room/{roomId:[^/]+}/quarantine"] = route{"POST", quarantineRoomHandler} // deprecated
routes["/_matrix/media/"+version+"/admin/quarantine/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}"] = route{"POST", quarantineHandler}
routes["/_matrix/media/"+version+"/admin/quarantine/room/{roomId:[^/]+}"] = route{"POST", quarantineRoomHandler}
routes["/_matrix/media/"+version+"/admin/quarantine/user/{userId:[^/]+}"] = route{"POST", quarantineUserHandler}
routes["/_matrix/media/"+version+"/admin/quarantine/server/{serverName:[^/]+}"] = route{"POST", quarantineDomainHandler}
routes["/_matrix/media/"+version+"/admin/datastores/{datastoreId:[^/]+}/size_estimate"] = route{"GET", storageEstimateHandler}
routes["/_matrix/media/"+version+"/admin/datastores"] = route{"GET", datastoreListHandler}
routes["/_matrix/media/"+version+"/admin/datastores/{sourceDsId:[^/]+}/transfer_to/{targetDsId:[^/]+}"] = route{"POST", dsTransferHandler}
routes["/_matrix/media/"+version+"/admin/federation/test/{serverName:[a-zA-Z0-9.:\\-_]+}"] = route{"GET", fedTestHandler}
routes["/_matrix/media/"+version+"/admin/usage/{serverName:[a-zA-Z0-9.:\\-_]+}"] = route{"GET", domainUsageHandler}
routes["/_matrix/media/"+version+"/admin/usage/{serverName:[a-zA-Z0-9.:\\-_]+}/users"] = route{"GET", userUsageHandler}
routes["/_matrix/media/"+version+"/admin/usage/{serverName:[a-zA-Z0-9.:\\-_]+}/uploads"] = route{"GET", uploadsUsageHandler}
routes["/_matrix/media/"+version+"/admin/tasks/{taskId:[0-9]+}"] = route{"GET", getBackgroundTaskHandler}
routes["/_matrix/media/"+version+"/admin/tasks/all"] = route{"GET", listAllBackgroundTasksHandler}
routes["/_matrix/media/"+version+"/admin/tasks/unfinished"] = route{"GET", listUnfinishedBackgroundTasksHandler}
routes["/_matrix/media/"+version+"/admin/user/{userId:[^/]+}/export"] = route{"POST", exportUserDataHandler}
routes["/_matrix/media/"+version+"/admin/server/{serverName:[^/]+}/export"] = route{"POST", exportServerDataHandler}
routes["/_matrix/media/"+version+"/admin/export/{exportId:[a-zA-Z0-9.:\\-_]+}/view"] = route{"GET", viewExportHandler}
routes["/_matrix/media/"+version+"/admin/export/{exportId:[a-zA-Z0-9.:\\-_]+}/metadata"] = route{"GET", getExportMetadataHandler}
routes["/_matrix/media/"+version+"/admin/export/{exportId:[a-zA-Z0-9.:\\-_]+}/part/{partId:[0-9]+}"] = route{"GET", downloadExportPartHandler}
routes["/_matrix/media/"+version+"/admin/export/{exportId:[a-zA-Z0-9.:\\-_]+}/delete"] = route{"DELETE", deleteExportHandler}
routes["/_matrix/media/"+version+"/admin/import"] = route{"POST", startImportHandler}
routes["/_matrix/media/"+version+"/admin/import/{importId:[a-zA-Z0-9.:\\-_]+}/part"] = route{"POST", appendToImportHandler}
routes["/_matrix/media/"+version+"/admin/import/{importId:[a-zA-Z0-9.:\\-_]+}/close"] = route{"POST", stopImportHandler}
routes["/_matrix/media/"+version+"/admin/media/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}/attributes"] = route{"GET", getMediaAttrsHandler}
routes["/_matrix/media/"+version+"/admin/media/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}/attributes/set"] = route{"POST", setMediaAttrsHandler}
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/purge_remote", route{"POST", purgeRemote}}) // deprecated
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/purge/remote", route{"POST", purgeRemote}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/purge/quarantined", route{"POST", purgeQuarantinedHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/purge/user/{userId:[^/]+}", route{"POST", purgeUserMediaHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/purge/room/{roomId:[^/]+}", route{"POST", purgeRoomHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/purge/server/{serverName:[^/]+}", route{"POST", purgeDomainHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/purge/old", route{"POST", purgeOldHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/purge/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}", route{"POST", purgeOneHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/room/{roomId:[^/]+}/quarantine", route{"POST", quarantineRoomHandler}}) // deprecated
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/quarantine/room/{roomId:[^/]+}", route{"POST", quarantineRoomHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/quarantine/user/{userId:[^/]+}", route{"POST", quarantineUserHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/quarantine/server/{serverName:[^/]+}", route{"POST", quarantineDomainHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/quarantine/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}", route{"POST", quarantineHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/datastores/{datastoreId:[^/]+}/size_estimate", route{"GET", storageEstimateHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/datastores", route{"GET", datastoreListHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/datastores/{sourceDsId:[^/]+}/transfer_to/{targetDsId:[^/]+}", route{"POST", dsTransferHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/federation/test/{serverName:[a-zA-Z0-9.:\\-_]+}", route{"GET", fedTestHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/usage/{serverName:[a-zA-Z0-9.:\\-_]+}", route{"GET", domainUsageHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/usage/{serverName:[a-zA-Z0-9.:\\-_]+}/users", route{"GET", userUsageHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/usage/{serverName:[a-zA-Z0-9.:\\-_]+}/users-stats", route{"GET", usersUsageStatsHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/usage/{serverName:[a-zA-Z0-9.:\\-_]+}/uploads", route{"GET", uploadsUsageHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/tasks/{taskId:[0-9]+}", route{"GET", getBackgroundTaskHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/tasks/all", route{"GET", listAllBackgroundTasksHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/tasks/unfinished", route{"GET", listUnfinishedBackgroundTasksHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/user/{userId:[^/]+}/export", route{"POST", exportUserDataHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/server/{serverName:[^/]+}/export", route{"POST", exportServerDataHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/export/{exportId:[a-zA-Z0-9.:\\-_]+}/view", route{"GET", viewExportHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/export/{exportId:[a-zA-Z0-9.:\\-_]+}/metadata", route{"GET", getExportMetadataHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/export/{exportId:[a-zA-Z0-9.:\\-_]+}/part/{partId:[0-9]+}", route{"GET", downloadExportPartHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/export/{exportId:[a-zA-Z0-9.:\\-_]+}/delete", route{"DELETE", deleteExportHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/import", route{"POST", startImportHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/import/{importId:[a-zA-Z0-9.:\\-_]+}/part", route{"POST", appendToImportHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/import/{importId:[a-zA-Z0-9.:\\-_]+}/close", route{"POST", stopImportHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/media/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}/attributes", route{"GET", getMediaAttrsHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/admin/media/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}/attributes/set", route{"POST", setMediaAttrsHandler}})
// Routes that we should handle but aren't in the media namespace (synapse compat)
routes["/_matrix/client/"+version+"/admin/purge_media_cache"] = route{"POST", purgeRemote}
routes["/_matrix/client/"+version+"/admin/quarantine_media/{roomId:[^/]+}"] = route{"POST", quarantineRoomHandler}
routes = append(routes, definedRoute{"/_matrix/client/" + version + "/admin/purge_media_cache", route{"POST", purgeRemote}})
routes = append(routes, definedRoute{"/_matrix/client/" + version + "/admin/quarantine_media/{roomId:[^/]+}", route{"POST", quarantineRoomHandler}})
if strings.Index(version, "unstable") == 0 {
routes["/_matrix/media/"+version+"/local_copy/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}"] = route{"GET", localCopyHandler}
routes["/_matrix/media/"+version+"/info/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}"] = route{"GET", infoHandler}
routes["/_matrix/media/"+version+"/download/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}"] = route{"DELETE", purgeOneHandler}
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/local_copy/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}", route{"GET", localCopyHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/info/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}", route{"GET", infoHandler}})
routes = append(routes, definedRoute{"/_matrix/media/" + version + "/download/{server:[a-zA-Z0-9.:\\-_]+}/{mediaId:[^/]+}", route{"DELETE", purgeOneHandler}})
}
}
if config.Get().Features.IPFS.Enabled {
routes[features.IPFSDownloadRoute] = route{"GET", ipfsDownloadHandler}
routes[features.IPFSLiveDownloadRouteR0] = route{"GET", ipfsDownloadHandler}
routes[features.IPFSLiveDownloadRouteV1] = route{"GET", ipfsDownloadHandler}
routes[features.IPFSLiveDownloadRouteUnstable] = route{"GET", ipfsDownloadHandler}
routes = append(routes, definedRoute{features.IPFSDownloadRoute, route{"GET", ipfsDownloadHandler}})
routes = append(routes, definedRoute{features.IPFSLiveDownloadRouteR0, route{"GET", ipfsDownloadHandler}})
routes = append(routes, definedRoute{features.IPFSLiveDownloadRouteV1, route{"GET", ipfsDownloadHandler}})
routes = append(routes, definedRoute{features.IPFSLiveDownloadRouteUnstable, route{"GET", ipfsDownloadHandler}})
}
for routePath, route := range routes {
logrus.Info("Registering route: " + route.method + " " + routePath)
rtr.Handle(routePath, route.handler).Methods(route.method)
rtr.Handle(routePath, optionsHandler).Methods("OPTIONS")
for _, def := range routes {
logrus.Info("Registering route: " + def.route.method + " " + def.path)
rtr.Handle(def.path, def.route.handler).Methods(def.route.method)
rtr.Handle(def.path, optionsHandler).Methods("OPTIONS")
// This is a hack to ensure that trailing slashes also match the routes correctly
rtr.Handle(routePath+"/", route.handler).Methods(route.method)
rtr.Handle(routePath+"/", optionsHandler).Methods("OPTIONS")
rtr.Handle(def.path+"/", def.route.handler).Methods(def.route.method)
rtr.Handle(def.path+"/", optionsHandler).Methods("OPTIONS")
}
// Health check endpoints
......
This diff is collapsed.
#!/bin/bash
set -ex
rm -rfv $PWD/bin/*
mkdir $PWD/bin/dist
......@@ -31,5 +33,3 @@ do
done
rm -rfv $PWD/bin/dist/compile_assets*
rm -rfv $PWD/bin/dist/loadtest*
rm -rfv $PWD/bin/dist/complement*
#!/bin/sh
git clone --depth=1 https://github.com/matrix-org/complement.git CI_COMPLEMENT
cd CI_COMPLEMENT
go test -run '^(TestMediaWithoutFileName)$' -v ./tests
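// complement_hs is a minimal stand-in homeserver used by the Complement test image: it accepts
// any registration (handing the user ID back as the access token), answers /whoami with that
// token, and proxies /_matrix/media/* requests to the media repo listening on 127.0.0.1:8228.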
package main
import (
"encoding/json"
"fmt"
"io"
"io/ioutil"
"log"
"net/http"
"os"
"os/signal"
"strconv"
"strings"
"sync"
"github.com/gorilla/mux"
"github.com/turt2live/matrix-media-repo/util/cleanup"
)
type VersionsResponse struct {
CSAPIVersions []string `json:"versions,flow"`
}
type RegisterRequest struct {
DesiredUsername string `json:"username"`
}
type RegisterResponse struct {
UserID string `json:"user_id"`
AccessToken string `json:"access_token"`
}
type WhoamiResponse struct {
UserID string `json:"user_id"`
}
func requestJson(r *http.Request, i interface{}) error {
b, err := ioutil.ReadAll(r.Body)
if err != nil {
return err
}
return json.Unmarshal(b, &i)
}
func respondJson(w http.ResponseWriter, i interface{}) error {
resp, err := json.Marshal(i)
if err != nil {
return err
}
w.Header().Set("Content-Length",strconv.Itoa(len(resp)))
w.Header().Set("Content-Type", "application/json")
w.WriteHeader(200)
_, err = w.Write(resp)
return err
}
func main() {
// Prepare local server
log.Println("Preparing local server...")
rtr := mux.NewRouter()
rtr.HandleFunc("/_matrix/client/versions", func(w http.ResponseWriter, r *http.Request) {
defer cleanup.DumpAndCloseStream(r.Body)
err := respondJson(w, &VersionsResponse{CSAPIVersions: []string{"r0.6.0"}})
if err != nil {
log.Fatal(err)
}
})
rtr.HandleFunc("/_matrix/client/r0/register", func(w http.ResponseWriter, r *http.Request) {
rr := &RegisterRequest{}
err := requestJson(r, &rr)
if err != nil {
log.Fatal(err)
}
userId := fmt.Sprintf("@%s:%s", rr.DesiredUsername, os.Getenv("SERVER_NAME"))
err = respondJson(w, &RegisterResponse{
AccessToken: userId,
UserID: userId,
})
if err != nil {
log.Fatal(err)
}
})
rtr.HandleFunc("/_matrix/client/r0/account/whoami", func(w http.ResponseWriter, r *http.Request) {
defer cleanup.DumpAndCloseStream(r.Body)
userId := strings.TrimPrefix(r.Header.Get("Authorization"), "Bearer ") // including space after Bearer.
err := respondJson(w, &WhoamiResponse{UserID: userId})
if err != nil {
log.Fatal(err)
}
})
rtr.PathPrefix("/_matrix/media/").HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
// Proxy to the media repo running within the container
defer cleanup.DumpAndCloseStream(r.Body)
r2, err := http.NewRequest(r.Method, "http://127.0.0.1:8228" + r.RequestURI, r.Body)
if err != nil {
log.Fatal(err)
}
for k, v := range r.Header {
r2.Header.Set(k, v[0])
}
r2.Host = os.Getenv("SERVER_NAME")
resp, err := http.DefaultClient.Do(r2)
if err != nil {
log.Fatal(err)
}
for k, v := range resp.Header {
w.Header().Set(k, v[0])
}
defer cleanup.DumpAndCloseStream(resp.Body)
_, err = io.Copy(w, resp.Body)
if err != nil {
log.Fatal(err)
}
})
srv1 := &http.Server{Addr: "0.0.0.0:8008", Handler: rtr}
srv2 := &http.Server{Addr: "0.0.0.0:8448", Handler: rtr}
log.Println("Starting local server...")
waitGroup1 := &sync.WaitGroup{}
waitGroup2 := &sync.WaitGroup{}
go func() {
if err := srv1.ListenAndServe(); err != http.ErrServerClosed {
log.Fatal(err)
}
srv1 = nil
waitGroup1.Done()
}()
go func() {
if err := srv2.ListenAndServeTLS("/data/server.crt", "/data/server.key"); err != http.ErrServerClosed {
log.Fatal(err)
}
srv2 = nil
waitGroup2.Done()
}()
stop := make(chan os.Signal)
signal.Notify(stop, os.Interrupt, os.Kill)
go func() {
defer close(stop)
<-stop
log.Println("Stopping local server...")
_ = srv1.Close()
_ = srv2.Close()
}()
waitGroup1.Add(1)
waitGroup2.Add(1)
waitGroup1.Wait()
waitGroup2.Wait()
log.Println("Goodbye!")
}
......@@ -58,7 +58,12 @@ func main() {
realPsqlPassword = *postgresPassword
}
err := logging.Setup(config.Get().General.LogDirectory, config.Get().General.LogColors, config.Get().General.JsonLogs)
err := logging.Setup(
config.Get().General.LogDirectory,
config.Get().General.LogColors,
config.Get().General.JsonLogs,
config.Get().General.LogLevel,
)
if err != nil {
panic(err)
}
......
......@@ -45,7 +45,12 @@ func main() {
assets.SetupTemplates(*templatesPath)
var err error
err = logging.Setup(config.Get().General.LogDirectory, config.Get().General.LogColors, config.Get().General.JsonLogs)
err = logging.Setup(
config.Get().General.LogDirectory,
config.Get().General.LogColors,
config.Get().General.JsonLogs,
config.Get().General.LogLevel,
)
if err != nil {
panic(err)
}
......@@ -83,7 +88,7 @@ func main() {
if err != nil {
logrus.Error(err)
} else if task.EndTs > 0 {
waitChan<-true
waitChan <- true
return
}
......
......@@ -35,7 +35,12 @@ func main() {
assets.SetupMigrations(*migrationsPath)
var err error
err = logging.Setup(config.Get().General.LogDirectory, config.Get().General.LogColors, config.Get().General.JsonLogs)
err = logging.Setup(
config.Get().General.LogDirectory,
config.Get().General.LogColors,
config.Get().General.JsonLogs,
config.Get().General.LogLevel,
)
if err != nil {
panic(err)
}
......@@ -155,7 +160,7 @@ func main() {
if err != nil {
logrus.Error(err)
} else if task.EndTs > 0 {
waitChan<-true
waitChan <- true
return
}
......
......@@ -72,7 +72,12 @@ func main() {
realPsqlPassword = *postgresPassword
}
err := logging.Setup(config.Get().General.LogDirectory, config.Get().General.LogColors, config.Get().General.JsonLogs)
err := logging.Setup(
config.Get().General.LogDirectory,
config.Get().General.LogColors,
config.Get().General.JsonLogs,
config.Get().General.LogLevel,
)
if err != nil {
panic(err)
}
......