Compare commits: v1.55.2 ... v1.46-modu (1 commit)

| Author | SHA1 | Date |
|---|---|---|
|  | 1894419915 |  |
@@ -3,7 +3,7 @@
# Test for the export-data admin command against sqlite and postgres

set -xe
cd "$(dirname "$0")/../.."
cd `dirname $0`/../..

echo "--- Install dependencies"

@@ -21,7 +21,7 @@ python -m synapse.app.homeserver --generate-keys -c .ci/sqlite-config.yaml
echo "--- Prepare test database"

# Make sure the SQLite3 database is using the latest schema and has no pending background update.
update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
scripts/update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates

# Run the export-data command on the sqlite test database
python -m synapse.app.admin_cmd -c .ci/sqlite-config.yaml export-data @anon-20191002_181700-832:localhost:8800 \

@@ -41,7 +41,7 @@ fi

# Port the SQLite databse to postgres so we can check command works against postgres
echo "+++ Port SQLite3 databse to postgres"
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
scripts/synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml

# Run the export-data command on postgres database
python -m synapse.app.admin_cmd -c .ci/postgres-config.yaml export-data @anon-20191002_181700-832:localhost:8800 \
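A recurring pattern in this compare: for changed lines, pairs like the two `cd` invocations above appear to show the v1.55.2 form first (ShellCheck-clean, quoted `$(...)` command substitution) and the v1.46-modu form second (backticks, unquoted); the 1.47.0~rc2 changelog entry quoted further down ("Update scripts to pass Shellcheck lints") is the change being walked back here. A minimal illustration of why the quoted form is preferred, using a hypothetical path containing a space:

```sh
#!/usr/bin/env bash
# Suppose the script lives at "/tmp/my dir/ci/run.sh" (path made up for illustration).
cd `dirname $0`/../..        # unquoted: word-splits into "/tmp/my" and "dir/ci/../..", so cd goes astray
cd "$(dirname "$0")/../.."   # quoted $(...): a single argument, reliably lands in "/tmp/my dir"
```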
@@ -1,20 +1,16 @@
#!/usr/bin/env bash
# this script is run by GitHub Actions in a plain `focal` container; it installs the
# minimal requirements for tox and hands over to the py3-old tox environment.

# Prevent tzdata from asking for user input
export DEBIAN_FRONTEND=noninteractive
# this script is run by GitHub Actions in a plain `bionic` container; it installs the
# minimal requirements for tox and hands over to the py3-old tox environment.

set -ex

apt-get update
apt-get install -y \
    python3 python3-dev python3-pip python3-venv \
    libxml2-dev libxslt-dev xmlsec1 zlib1g-dev tox libjpeg-dev libwebp-dev
apt-get install -y python3 python3-dev python3-pip libxml2-dev libxslt-dev xmlsec1 zlib1g-dev tox

export LANG="C.UTF-8"

# Prevent virtualenv from auto-updating pip to an incompatible version
export VIRTUALENV_NO_DOWNLOAD=1

exec tox -e py3-old
exec tox -e py3-old,combine
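Two environment toggles in the script above matter if you rerun it outside CI: `DEBIAN_FRONTEND=noninteractive` stops debconf (tzdata in particular) from blocking on prompts, and `VIRTUALENV_NO_DOWNLOAD=1` keeps virtualenv from fetching a newer pip that may no longer support the old Python being tested. A condensed sketch of the same pattern (package list trimmed):

```sh
#!/usr/bin/env bash
set -ex

export DEBIAN_FRONTEND=noninteractive   # never prompt during apt installs
apt-get update
apt-get install -y python3 python3-dev python3-pip tox

export LANG="C.UTF-8"
export VIRTUALENV_NO_DOWNLOAD=1         # keep the bundled pip/setuptools, don't auto-upgrade
exec tox -e py3-old
```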
@@ -7,7 +7,7 @@

set -xe
cd "$(dirname "$0")/../.."
cd `dirname $0`/../..

echo "--- Install dependencies"

@@ -25,19 +25,17 @@ python -m synapse.app.homeserver --generate-keys -c .ci/sqlite-config.yaml
echo "--- Prepare test database"

# Make sure the SQLite3 database is using the latest schema and has no pending background update.
update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
scripts/update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates

# Create the PostgreSQL database.
.ci/scripts/postgres_exec.py "CREATE DATABASE synapse"

echo "+++ Run synapse_port_db against test database"
# TODO: this invocation of synapse_port_db (and others below) used to be prepended with `coverage run`,
# but coverage seems unable to find the entrypoints installed by `pip install -e .`.
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
coverage run scripts/synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml

# We should be able to run twice against the same database.
echo "+++ Run synapse_port_db a second time"
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
coverage run scripts/synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml

#####

@@ -48,7 +46,7 @@ echo "--- Prepare empty SQLite database"
# we do this by deleting the sqlite db, and then doing the same again.
rm .ci/test_db.db

update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
scripts/update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates

# re-create the PostgreSQL database.
.ci/scripts/postgres_exec.py \

@@ -56,4 +54,4 @@ update_synapse_database --database-config .ci/sqlite-config.yaml --run-backgroun
    "CREATE DATABASE synapse"

echo "+++ Run synapse_port_db against empty database"
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
coverage run scripts/synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
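Stripped of CI scaffolding, the port test boils down to four commands. A rough local reproduction, assuming a v1.55.2 checkout installed with `pip install -e .` (so the entry points are on PATH) and a running PostgreSQL matching `.ci/postgres-config.yaml`:

```sh
set -xe
# Bring the SQLite test database fully up to date, including background updates.
update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
# Create the target PostgreSQL database, then port into it.
.ci/scripts/postgres_exec.py "CREATE DATABASE synapse"
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
# Porting a second time against the same database must also succeed.
synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
```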
@@ -3,9 +3,11 @@

# things to include
!docker
!scripts
!synapse
!MANIFEST.in
!README.rst
!setup.py
!synctl

**/__pycache__
.flake8 (11 changed lines)
@@ -1,11 +0,0 @@
# TODO: incorporate this into pyproject.toml if flake8 supports it in the future.
# See https://github.com/PyCQA/flake8/issues/234
[flake8]
# see https://pycodestyle.readthedocs.io/en/latest/intro.html#error-codes
# for error codes. The ones we ignore are:
# W503: line break before binary operator
# W504: line break after binary operator
# E203: whitespace before ':' (which is contrary to pep8?)
# E731: do not assign a lambda expression, use a def
# E501: Line too long (black enforces this for us)
ignore=W503,W504,E203,E731,E501
.github/PULL_REQUEST_TEMPLATE.md (10 changed lines)
@@ -1,14 +1,12 @@
### Pull Request Checklist

<!-- Please read https://matrix-org.github.io/synapse/latest/development/contributing_guide.html before submitting your pull request -->
<!-- Please read CONTRIBUTING.md before submitting your pull request -->

* [ ] Pull request is based on the develop branch
* [ ] Pull request includes a [changelog file](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#changelog). The entry should:
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#changelog). The entry should:
  - Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
  - Use markdown where necessary, mostly for `code blocks`.
  - End with either a period (.) or an exclamation mark (!).
  - Start with a capital letter.
  - Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
* [ ] Pull request includes a [sign off](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#sign-off)
* [ ] [Code style](https://matrix-org.github.io/synapse/latest/code_style.html) is correct
  (run the [linters](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
* [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#sign-off)
* [ ] Code style is correct (run the [linters](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#code-style))
.github/workflows/docker.yml (19 changed lines)
@@ -5,7 +5,7 @@ name: Build docker images
on:
  push:
    tags: ["v*"]
    branches: [ master, main, develop ]
    branches: [ master, main ]
  workflow_dispatch:

permissions:

@@ -34,15 +34,10 @@ jobs:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      # TODO: consider using https://github.com/docker/metadata-action instead of this
      # custom magic
      - name: Calculate docker image tag
        id: set-tag
        run: |
          case "${GITHUB_REF}" in
            refs/heads/develop)
              tag=develop
              ;;
            refs/heads/master|refs/heads/main)
              tag=latest
              ;;

@@ -55,6 +50,18 @@ jobs:
          esac
          echo "::set-output name=tag::$tag"

      # for release builds, we want to get the amd64 image out asap, so first
      # we do an amd64-only build, before following up with a multiarch build.
      - name: Build and push amd64
        uses: docker/build-push-action@v2
        if: "${{ startsWith(github.ref, 'refs/tags/v') }}"
        with:
          push: true
          labels: "gitsha1=${{ github.sha }}"
          tags: "matrixdotorg/synapse:${{ steps.set-tag.outputs.tag }}"
          file: "docker/Dockerfile"
          platforms: linux/amd64

      - name: Build and push all platforms
        uses: docker/build-push-action@v2
        with:
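The "Calculate docker image tag" step above publishes its result through the `::set-output` workflow command (since deprecated by GitHub in favour of `$GITHUB_OUTPUT`), and the build steps read it back as `steps.set-tag.outputs.tag`. Reduced to its essentials (the tag-ref arm of the `case` is elided in the hunk above, so it is omitted here too):

```sh
# run: block of a step declared with `id: set-tag`
case "${GITHUB_REF}" in
  refs/heads/develop)                tag=develop ;;
  refs/heads/master|refs/heads/main) tag=latest  ;;
esac
echo "::set-output name=tag::$tag"
# Consumed later in the same job, e.g.:  tags: "matrixdotorg/synapse:${{ steps.set-tag.outputs.tag }}"
```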
.github/workflows/release-artifacts.yml (21 changed lines)
@@ -7,7 +7,7 @@ on:
|
||||
# of things breaking (but only build one set of debs)
|
||||
pull_request:
|
||||
push:
|
||||
branches: ["develop", "release-*"]
|
||||
branches: ["develop"]
|
||||
|
||||
# we do the full build on tags.
|
||||
tags: ["v*"]
|
||||
@@ -31,7 +31,7 @@ jobs:
|
||||
# if we're running from a tag, get the full list of distros; otherwise just use debian:sid
|
||||
dists='["debian:sid"]'
|
||||
if [[ $GITHUB_REF == refs/tags/* ]]; then
|
||||
dists=$(scripts-dev/build_debian_packages.py --show-dists-json)
|
||||
dists=$(scripts-dev/build_debian_packages --show-dists-json)
|
||||
fi
|
||||
echo "::set-output name=distros::$dists"
|
||||
# map the step outputs to job outputs
|
||||
@@ -74,7 +74,7 @@ jobs:
|
||||
# see https://github.com/docker/build-push-action/issues/252
|
||||
# for the cache magic here
|
||||
run: |
|
||||
./src/scripts-dev/build_debian_packages.py \
|
||||
./src/scripts-dev/build_debian_packages \
|
||||
--docker-build-arg=--cache-from=type=local,src=/tmp/.buildx-cache \
|
||||
--docker-build-arg=--cache-to=type=local,mode=max,dest=/tmp/.buildx-cache-new \
|
||||
--docker-build-arg=--progress=plain \
|
||||
@@ -91,7 +91,17 @@ jobs:
|
||||
|
||||
build-sdist:
|
||||
name: "Build pypi distribution files"
|
||||
uses: "matrix-org/backend-meta/.github/workflows/packaging.yml@v1"
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
- uses: actions/setup-python@v2
|
||||
- run: pip install wheel
|
||||
- run: |
|
||||
python setup.py sdist bdist_wheel
|
||||
- uses: actions/upload-artifact@v2
|
||||
with:
|
||||
name: python-dist
|
||||
path: dist/*
|
||||
|
||||
# if it's a tag, create a release and attach the artifacts to it
|
||||
attach-assets:
|
||||
@@ -112,8 +122,7 @@ jobs:
|
||||
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
|
||||
with:
|
||||
files: |
|
||||
Sdist/*
|
||||
Wheel/*
|
||||
python-dist/*
|
||||
debs.tar.xz
|
||||
# if it's not already published, keep the release as a draft.
|
||||
draft: true
|
||||
|
||||
.github/workflows/tests.yml (116 changed lines)
@@ -10,20 +10,12 @@ concurrency:
|
||||
cancel-in-progress: true
|
||||
|
||||
jobs:
|
||||
check-sampleconfig:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
- uses: actions/setup-python@v2
|
||||
- run: pip install -e .
|
||||
- run: scripts-dev/generate_sample_config.sh --check
|
||||
- run: scripts-dev/config-lint.sh
|
||||
|
||||
lint:
|
||||
runs-on: ubuntu-latest
|
||||
strategy:
|
||||
matrix:
|
||||
toxenv:
|
||||
- "check-sampleconfig"
|
||||
- "check_codestyle"
|
||||
- "check_isort"
|
||||
- "mypy"
|
||||
@@ -51,15 +43,29 @@ jobs:
|
||||
ref: ${{ github.event.pull_request.head.sha }}
|
||||
fetch-depth: 0
|
||||
- uses: actions/setup-python@v2
|
||||
- run: "pip install 'towncrier>=18.6.0rc1'"
|
||||
- run: scripts-dev/check-newsfragment.sh
|
||||
- run: pip install tox
|
||||
- run: scripts-dev/check-newsfragment
|
||||
env:
|
||||
PULL_REQUEST_NUMBER: ${{ github.event.number }}
|
||||
|
||||
lint-sdist:
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
- uses: actions/setup-python@v2
|
||||
with:
|
||||
python-version: "3.x"
|
||||
- run: pip install wheel
|
||||
- run: python setup.py sdist bdist_wheel
|
||||
- uses: actions/upload-artifact@v2
|
||||
with:
|
||||
name: Python Distributions
|
||||
path: dist/*
|
||||
|
||||
# Dummy step to gate other tests on without repeating the whole list
|
||||
linting-done:
|
||||
if: ${{ !cancelled() }} # Run this even if prior jobs were skipped
|
||||
needs: [lint, lint-crlf, lint-newsfile, check-sampleconfig]
|
||||
needs: [lint, lint-crlf, lint-newsfile, lint-sdist]
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- run: "true"
|
||||
@@ -70,7 +76,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
strategy:
|
||||
matrix:
|
||||
python-version: ["3.7", "3.8", "3.9", "3.10"]
|
||||
python-version: ["3.6", "3.7", "3.8", "3.9", "3.10"]
|
||||
database: ["sqlite"]
|
||||
toxenv: ["py"]
|
||||
include:
|
||||
@@ -79,9 +85,9 @@ jobs:
|
||||
toxenv: "py-noextras"
|
||||
|
||||
# Oldest Python with PostgreSQL
|
||||
- python-version: "3.7"
|
||||
- python-version: "3.6"
|
||||
database: "postgres"
|
||||
postgres-version: "10"
|
||||
postgres-version: "9.6"
|
||||
toxenv: "py"
|
||||
|
||||
# Newest Python with newest PostgreSQL
|
||||
@@ -135,7 +141,7 @@ jobs:
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
- name: Test with old deps
|
||||
uses: docker://ubuntu:focal # For old python and sqlite
|
||||
uses: docker://ubuntu:bionic # For old python and sqlite
|
||||
with:
|
||||
workdir: /github/workspace
|
||||
entrypoint: .ci/scripts/test_old_deps.sh
|
||||
@@ -161,7 +167,7 @@ jobs:
|
||||
runs-on: ubuntu-latest
|
||||
strategy:
|
||||
matrix:
|
||||
python-version: ["pypy-3.7"]
|
||||
python-version: ["pypy-3.6"]
|
||||
|
||||
steps:
|
||||
- uses: actions/checkout@v2
|
||||
@@ -207,15 +213,15 @@ jobs:
|
||||
fail-fast: false
|
||||
matrix:
|
||||
include:
|
||||
- sytest-tag: focal
|
||||
- sytest-tag: bionic
|
||||
|
||||
- sytest-tag: focal
|
||||
- sytest-tag: bionic
|
||||
postgres: postgres
|
||||
|
||||
- sytest-tag: testing
|
||||
postgres: postgres
|
||||
|
||||
- sytest-tag: focal
|
||||
- sytest-tag: bionic
|
||||
postgres: multi-postgres
|
||||
workers: workers
|
||||
|
||||
@@ -285,8 +291,8 @@ jobs:
|
||||
strategy:
|
||||
matrix:
|
||||
include:
|
||||
- python-version: "3.7"
|
||||
postgres-version: "10"
|
||||
- python-version: "3.6"
|
||||
postgres-version: "9.6"
|
||||
|
||||
- python-version: "3.10"
|
||||
postgres-version: "14"
|
||||
@@ -317,29 +323,24 @@ jobs:
|
||||
if: ${{ !failure() && !cancelled() }}
|
||||
needs: linting-done
|
||||
runs-on: ubuntu-latest
|
||||
container:
|
||||
# https://github.com/matrix-org/complement/blob/master/dockerfiles/ComplementCIBuildkite.Dockerfile
|
||||
image: matrixdotorg/complement:latest
|
||||
env:
|
||||
CI: true
|
||||
ports:
|
||||
- 8448:8448
|
||||
volumes:
|
||||
- /var/run/docker.sock:/var/run/docker.sock
|
||||
|
||||
steps:
|
||||
# The path is set via a file given by $GITHUB_PATH. We need both Go 1.17 and GOPATH on the path to run Complement.
|
||||
# See https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-system-path
|
||||
- name: "Set Go Version"
|
||||
run: |
|
||||
# Add Go 1.17 to the PATH: see https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md#environment-variables-2
|
||||
echo "$GOROOT_1_17_X64/bin" >> $GITHUB_PATH
|
||||
# Add the Go path to the PATH: We need this so we can call gotestfmt
|
||||
echo "~/go/bin" >> $GITHUB_PATH
|
||||
|
||||
- name: "Install Complement Dependencies"
|
||||
run: |
|
||||
sudo apt-get update && sudo apt-get install -y libolm3 libolm-dev
|
||||
go get -v github.com/haveyoudebuggedit/gotestfmt/v2/cmd/gotestfmt@latest
|
||||
|
||||
- name: Run actions/checkout@v2 for synapse
|
||||
uses: actions/checkout@v2
|
||||
with:
|
||||
path: synapse
|
||||
|
||||
# Attempt to check out the same branch of Complement as the PR. If it
|
||||
# doesn't exist, fallback to HEAD.
|
||||
# doesn't exist, fallback to master.
|
||||
- name: Checkout complement
|
||||
shell: bash
|
||||
run: |
|
||||
@@ -352,8 +353,8 @@ jobs:
|
||||
# for pull requests, otherwise GITHUB_REF).
|
||||
# 2. Attempt to use the base branch, e.g. when merging into release-vX.Y
|
||||
# (GITHUB_BASE_REF for pull requests).
|
||||
# 3. Use the default complement branch ("HEAD").
|
||||
for BRANCH_NAME in "$GITHUB_HEAD_REF" "$GITHUB_BASE_REF" "${GITHUB_REF#refs/heads/}" "HEAD"; do
|
||||
# 3. Use the default complement branch ("master").
|
||||
for BRANCH_NAME in "$GITHUB_HEAD_REF" "$GITHUB_BASE_REF" "${GITHUB_REF#refs/heads/}" "master"; do
|
||||
# Skip empty branch names and merge commits.
|
||||
if [[ -z "$BRANCH_NAME" || $BRANCH_NAME =~ ^refs/pull/.* ]]; then
|
||||
continue
|
||||
@@ -365,8 +366,6 @@ jobs:
|
||||
# Build initial Synapse image
|
||||
- run: docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
|
||||
working-directory: synapse
|
||||
env:
|
||||
DOCKER_BUILDKIT: 1
|
||||
|
||||
# Build a ready-to-run Synapse image based on the initial image above.
|
||||
# This new image includes a config file, keys for signing and TLS, and
|
||||
@@ -375,11 +374,7 @@ jobs:
|
||||
working-directory: complement/dockerfiles
|
||||
|
||||
# Run Complement
|
||||
- run: |
|
||||
set -o pipefail
|
||||
go test -v -json -p 1 -tags synapse_blacklist,msc2403,msc2716,msc3030 ./tests/... 2>&1 | gotestfmt
|
||||
shell: bash
|
||||
name: Run Complement Tests
|
||||
- run: go test -v -tags synapse_blacklist,msc2403,msc2946,msc3083 ./tests/...
|
||||
env:
|
||||
COMPLEMENT_BASE_IMAGE: complement-synapse:latest
|
||||
working-directory: complement
|
||||
@@ -388,22 +383,35 @@ jobs:
|
||||
tests-done:
|
||||
if: ${{ always() }}
|
||||
needs:
|
||||
- check-sampleconfig
|
||||
- lint
|
||||
- lint-crlf
|
||||
- lint-newsfile
|
||||
- lint-sdist
|
||||
- trial
|
||||
- trial-olddeps
|
||||
- sytest
|
||||
- export-data
|
||||
- portdb
|
||||
- complement
|
||||
runs-on: ubuntu-latest
|
||||
steps:
|
||||
- uses: matrix-org/done-action@v2
|
||||
with:
|
||||
needs: ${{ toJSON(needs) }}
|
||||
- name: Set build result
|
||||
env:
|
||||
NEEDS_CONTEXT: ${{ toJSON(needs) }}
|
||||
# the `jq` incantation dumps out a series of "<job> <result>" lines.
|
||||
# we set it to an intermediate variable to avoid a pipe, which makes it
|
||||
# hard to set $rc.
|
||||
run: |
|
||||
rc=0
|
||||
results=$(jq -r 'to_entries[] | [.key,.value.result] | join(" ")' <<< $NEEDS_CONTEXT)
|
||||
while read job result ; do
|
||||
# The newsfile lint may be skipped on non PR builds
|
||||
if [ $result == "skipped" ] && [ $job == "lint-newsfile" ]; then
|
||||
continue
|
||||
fi
|
||||
|
||||
# The newsfile lint may be skipped on non PR builds
|
||||
skippable:
|
||||
lint-newsfile
|
||||
if [ "$result" != "success" ]; then
|
||||
echo "::set-failed ::Job $job returned $result"
|
||||
rc=1
|
||||
fi
|
||||
done <<< $results
|
||||
exit $rc
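The v1.46-modu version of `tests-done` hand-rolls what `matrix-org/done-action` does in v1.55.2: it flattens the `needs` context into "<job> <result>" lines with `jq` and fails the build on anything that is neither a success nor an allowed skip. A small standalone illustration of that `jq` incantation (the JSON is a made-up example of the `needs` context shape):

```sh
NEEDS_CONTEXT='{"lint":{"result":"success"},"trial":{"result":"failure"},"lint-newsfile":{"result":"skipped"}}'
jq -r 'to_entries[] | [.key,.value.result] | join(" ")' <<< "$NEEDS_CONTEXT"
# prints:
#   lint success
#   trial failure
#   lint-newsfile skipped
```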
.github/workflows/twisted_trunk.yml (2 changed lines)
@@ -25,7 +25,7 @@ jobs:
      - run: sudo apt-get -qq install xmlsec1
      - uses: actions/setup-python@v2
        with:
          python-version: 3.7
          python-version: 3.6
      - run: .ci/patch_for_twisted_trunk.sh
      - run: pip install tox
      - run: tox -e py
.gitignore (4 changed lines)
@@ -50,7 +50,3 @@ __pycache__/

# docs
book/

# complement
/complement-*
/master.tar.gz
CHANGES.md (998 changed lines; file diff suppressed because it is too large)
@@ -1,3 +1,4 @@
|
||||
include synctl
|
||||
include LICENSE
|
||||
include VERSION
|
||||
include *.rst
|
||||
@@ -16,6 +17,7 @@ recursive-include synapse/storage *.txt
|
||||
recursive-include synapse/storage *.md
|
||||
|
||||
recursive-include docs *
|
||||
recursive-include scripts *
|
||||
recursive-include scripts-dev *
|
||||
recursive-include synapse *.pyi
|
||||
recursive-include tests *.py
|
||||
@@ -43,7 +45,6 @@ include book.toml
|
||||
include pyproject.toml
|
||||
recursive-include changelog.d *
|
||||
|
||||
include .flake8
|
||||
prune .circleci
|
||||
prune .github
|
||||
prune .ci
|
||||
@@ -51,4 +52,5 @@ prune contrib
|
||||
prune debian
|
||||
prune demo/etc
|
||||
prune docker
|
||||
prune snap
|
||||
prune stubs
|
||||
|
||||
@@ -246,7 +246,7 @@ Password reset
|
||||
==============
|
||||
|
||||
Users can reset their password through their client. Alternatively, a server admin
|
||||
can reset a users password using the `admin API <docs/admin_api/user_admin_api.md#reset-password>`_
|
||||
can reset a users password using the `admin API <docs/admin_api/user_admin_api.rst#reset-password>`_
|
||||
or by directly editing the database as shown below.
|
||||
|
||||
First calculate the hash of the new password::
|
||||
@@ -312,9 +312,6 @@ We recommend using the demo which starts 3 federated instances running on ports
|
||||
|
||||
(to stop, you can use `./demo/stop.sh`)
|
||||
|
||||
See the [demo documentation](https://matrix-org.github.io/synapse/develop/development/demo.html)
|
||||
for more information.
|
||||
|
||||
If you just want to start a single instance of the app and run it directly::
|
||||
|
||||
# Create the homeserver.yaml config once
|
||||
|
||||
@@ -14,7 +14,6 @@ services:
|
||||
# failure
|
||||
restart: unless-stopped
|
||||
# See the readme for a full documentation of the environment settings
|
||||
# NOTE: You must edit homeserver.yaml to use postgres, it defaults to sqlite
|
||||
environment:
|
||||
- SYNAPSE_CONFIG_PATH=/data/homeserver.yaml
|
||||
volumes:
|
||||
|
||||
@@ -92,6 +92,22 @@ new PromConsole.Graph({
|
||||
})
|
||||
</script>
|
||||
|
||||
<h3>Pending calls per tick</h3>
|
||||
<div id="reactor_pending_calls"></div>
|
||||
<script>
|
||||
new PromConsole.Graph({
|
||||
node: document.querySelector("#reactor_pending_calls"),
|
||||
expr: "rate(python_twisted_reactor_pending_calls_sum[30s]) / rate(python_twisted_reactor_pending_calls_count[30s])",
|
||||
name: "[[job]]-[[index]]",
|
||||
min: 0,
|
||||
renderer: "line",
|
||||
height: 150,
|
||||
yAxisFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yHoverFormatter: PromConsole.NumberFormatter.humanize,
|
||||
yTitle: "Pending Calls"
|
||||
})
|
||||
</script>
|
||||
|
||||
<h1>Storage</h1>
|
||||
|
||||
<h3>Queries</h3>
|
||||
|
||||
@@ -84,9 +84,7 @@ AUTH="Authorization: Bearer $TOKEN"
|
||||
###################################################################################################
|
||||
# finally start pruning the room:
|
||||
###################################################################################################
|
||||
# this will really delete local events, so the messages in the room really
|
||||
# disappear unless they are restored by remote federation. This is because
|
||||
# we pass {"delete_local_events":true} to the curl invocation below.
|
||||
POSTDATA='{"delete_local_events":"true"}' # this will really delete local events, so the messages in the room really disappear unless they are restored by remote federation
|
||||
|
||||
for ROOM in "${ROOMS_ARRAY[@]}"; do
|
||||
echo "########################################### $(date) ################# "
|
||||
@@ -106,7 +104,7 @@ for ROOM in "${ROOMS_ARRAY[@]}"; do
|
||||
SLEEP=2
|
||||
set -x
|
||||
# call purge
|
||||
OUT=$(curl --header "$AUTH" -s -d '{"delete_local_events":true}' POST "$API_URL/admin/purge_history/$ROOM/$EVENT_ID")
|
||||
OUT=$(curl --header "$AUTH" -s -d $POSTDATA POST "$API_URL/admin/purge_history/$ROOM/$EVENT_ID")
|
||||
PURGE_ID=$(echo "$OUT" |grep purge_id|cut -d'"' -f4 )
|
||||
if [ "$PURGE_ID" == "" ]; then
|
||||
# probably the history purge is already in progress for $ROOM
|
||||
|
||||
debian/build_virtualenv (11 changed lines)
@@ -15,7 +15,7 @@ export DH_VIRTUALENV_INSTALL_ROOT=/opt/venvs
|
||||
# python won't look in the right directory. At least this way, the error will
|
||||
# be a *bit* more obvious.
|
||||
#
|
||||
SNAKE=$(readlink -e /usr/bin/python3)
|
||||
SNAKE=`readlink -e /usr/bin/python3`
|
||||
|
||||
# try to set the CFLAGS so any compiled C extensions are compiled with the most
|
||||
# generic as possible x64 instructions, so that compiling it on a new Intel chip
|
||||
@@ -24,7 +24,7 @@ SNAKE=$(readlink -e /usr/bin/python3)
|
||||
# TODO: add similar things for non-amd64, or figure out a more generic way to
|
||||
# do this.
|
||||
|
||||
case $(dpkg-architecture -q DEB_HOST_ARCH) in
|
||||
case `dpkg-architecture -q DEB_HOST_ARCH` in
|
||||
amd64)
|
||||
export CFLAGS=-march=x86-64
|
||||
;;
|
||||
@@ -40,7 +40,6 @@ dh_virtualenv \
|
||||
--upgrade-pip \
|
||||
--preinstall="lxml" \
|
||||
--preinstall="mock" \
|
||||
--preinstall="wheel" \
|
||||
--extra-pip-arg="--no-cache-dir" \
|
||||
--extra-pip-arg="--compile" \
|
||||
--extras="all,systemd,test"
|
||||
@@ -57,8 +56,8 @@ case "$DEB_BUILD_OPTIONS" in
|
||||
*)
|
||||
# Copy tests to a temporary directory so that we can put them on the
|
||||
# PYTHONPATH without putting the uninstalled synapse on the pythonpath.
|
||||
tmpdir=$(mktemp -d)
|
||||
trap 'rm -r $tmpdir' EXIT
|
||||
tmpdir=`mktemp -d`
|
||||
trap "rm -r $tmpdir" EXIT
|
||||
|
||||
cp -r tests "$tmpdir"
|
||||
|
||||
@@ -99,7 +98,7 @@ esac
|
||||
--output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml"
|
||||
|
||||
# add a dependency on the right version of python to substvars.
|
||||
PYPKG=$(basename "$SNAKE")
|
||||
PYPKG=`basename $SNAKE`
|
||||
echo "synapse:pydepends=$PYPKG" >> debian/matrix-synapse-py3.substvars
|
||||
|
||||
|
||||
|
||||
debian/changelog (176 changed lines)
@@ -1,179 +1,3 @@
|
||||
matrix-synapse-py3 (1.55.2) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.55.2.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Thu, 24 Mar 2022 19:07:11 +0000
|
||||
|
||||
matrix-synapse-py3 (1.55.1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.55.1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Thu, 24 Mar 2022 17:44:23 +0000
|
||||
|
||||
matrix-synapse-py3 (1.55.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.55.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Mar 2022 13:59:26 +0000
|
||||
|
||||
matrix-synapse-py3 (1.55.0~rc1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.55.0~rc1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Mar 2022 10:59:31 +0000
|
||||
|
||||
matrix-synapse-py3 (1.54.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.54.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 08 Mar 2022 10:54:52 +0000
|
||||
|
||||
matrix-synapse-py3 (1.54.0~rc1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.54.0~rc1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Wed, 02 Mar 2022 10:43:22 +0000
|
||||
|
||||
matrix-synapse-py3 (1.53.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.53.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Feb 2022 11:32:06 +0000
|
||||
|
||||
matrix-synapse-py3 (1.53.0~rc1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.53.0~rc1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Feb 2022 10:40:50 +0000
|
||||
|
||||
matrix-synapse-py3 (1.52.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.52.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 08 Feb 2022 11:34:54 +0000
|
||||
|
||||
matrix-synapse-py3 (1.52.0~rc1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.52.0~rc1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 01 Feb 2022 11:04:09 +0000
|
||||
|
||||
matrix-synapse-py3 (1.51.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.51.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 25 Jan 2022 11:28:51 +0000
|
||||
|
||||
matrix-synapse-py3 (1.51.0~rc2) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.51.0~rc2.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Mon, 24 Jan 2022 12:25:00 +0000
|
||||
|
||||
matrix-synapse-py3 (1.51.0~rc1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.51.0~rc1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Fri, 21 Jan 2022 10:46:02 +0000
|
||||
|
||||
matrix-synapse-py3 (1.50.2) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.50.2.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Mon, 24 Jan 2022 13:37:11 +0000
|
||||
|
||||
matrix-synapse-py3 (1.50.1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.50.1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Jan 2022 16:06:26 +0000
|
||||
|
||||
matrix-synapse-py3 (1.50.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.50.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Jan 2022 10:40:38 +0000
|
||||
|
||||
matrix-synapse-py3 (1.50.0~rc2) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.50.0~rc2.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Fri, 14 Jan 2022 11:18:06 +0000
|
||||
|
||||
matrix-synapse-py3 (1.50.0~rc1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.50.0~rc1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Wed, 05 Jan 2022 12:36:17 +0000
|
||||
|
||||
matrix-synapse-py3 (1.49.2) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.49.2.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 21 Dec 2021 17:31:03 +0000
|
||||
|
||||
matrix-synapse-py3 (1.49.1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.49.1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 21 Dec 2021 11:07:30 +0000
|
||||
|
||||
matrix-synapse-py3 (1.49.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.49.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 14 Dec 2021 12:39:46 +0000
|
||||
|
||||
matrix-synapse-py3 (1.49.0~rc1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.49.0~rc1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Dec 2021 13:52:21 +0000
|
||||
|
||||
matrix-synapse-py3 (1.48.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.48.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 30 Nov 2021 11:24:15 +0000
|
||||
|
||||
matrix-synapse-py3 (1.48.0~rc1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.48.0~rc1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Nov 2021 15:56:03 +0000
|
||||
|
||||
matrix-synapse-py3 (1.47.1) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.47.1.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Fri, 19 Nov 2021 13:44:32 +0000
|
||||
|
||||
matrix-synapse-py3 (1.47.0) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.47.0.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Wed, 17 Nov 2021 13:09:43 +0000
|
||||
|
||||
matrix-synapse-py3 (1.47.0~rc3) stable; urgency=medium
|
||||
|
||||
* New synapse release 1.47.0~rc3.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Tue, 16 Nov 2021 14:32:47 +0000
|
||||
|
||||
matrix-synapse-py3 (1.47.0~rc2) stable; urgency=medium
|
||||
|
||||
[ Dan Callahan ]
|
||||
* Update scripts to pass Shellcheck lints.
|
||||
* Remove unused Vagrant scripts from debian/ directory.
|
||||
* Allow building Debian packages for any architecture, not just amd64.
|
||||
* Preinstall the "wheel" package when building virtualenvs.
|
||||
* Do not error if /etc/default/matrix-synapse is missing.
|
||||
|
||||
[ Synapse Packaging team ]
|
||||
* New synapse release 1.47.0~rc2.
|
||||
|
||||
-- Synapse Packaging team <packages@matrix.org> Wed, 10 Nov 2021 09:41:01 +0000
|
||||
|
||||
matrix-synapse-py3 (1.46.0) stable; urgency=medium
|
||||
|
||||
[ Richard van der Hoff ]
|
||||
|
||||
debian/control (2 changed lines)
@@ -19,7 +19,7 @@ Standards-Version: 3.9.8
Homepage: https://github.com/matrix-org/synapse

Package: matrix-synapse-py3
Architecture: any
Architecture: amd64
Provides: matrix-synapse
Conflicts:
    matrix-synapse (<< 0.34.0.1-0matrix2),
debian/matrix-synapse-py3.config (1 changed line)
@@ -2,7 +2,6 @@

set -e

# shellcheck disable=SC1091
. /usr/share/debconf/confmodule

# try to update the debconf db according to whatever is in the config files
debian/matrix-synapse-py3.postinst (1 changed line)
@@ -1,6 +1,5 @@
#!/bin/sh -e

# shellcheck disable=SC1091
. /usr/share/debconf/confmodule

CONFIGFILE_SERVERNAME="/etc/matrix-synapse/conf.d/server_name.yaml"
debian/matrix-synapse.service (2 changed lines)
@@ -5,7 +5,7 @@ Description=Synapse Matrix homeserver
Type=notify
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=-/etc/default/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse
ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --generate-keys
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/
ExecReload=/bin/kill -HUP $MAINPID
debian/test/.gitignore (new file, 2 lines)
@@ -0,0 +1,2 @@
.vagrant
*.log
debian/test/provision.sh (new file, 23 lines)
@@ -0,0 +1,23 @@
#!/bin/bash
#
# provisioning script for vagrant boxes for testing the matrix-synapse debs.
#
# Will install the most recent matrix-synapse-py3 deb for this platform from
# the /debs directory.

set -e

apt-get update
apt-get install -y lsb-release

deb=`ls /debs/matrix-synapse-py3_*+$(lsb_release -cs)*.deb | sort | tail -n1`

debconf-set-selections <<EOF
matrix-synapse matrix-synapse/report-stats boolean false
matrix-synapse matrix-synapse/server-name string localhost:18448
EOF

dpkg -i "$deb"

sed -i -e '/port: 8...$/{s/8448/18448/; s/8008/18008/}' -e '$aregistration_shared_secret: secret' /etc/matrix-synapse/homeserver.yaml
systemctl restart matrix-synapse
debian/test/stretch/Vagrantfile (new file, 13 lines)
@@ -0,0 +1,13 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

ver = `cd ../../..; dpkg-parsechangelog -S Version`.strip()

Vagrant.configure("2") do |config|
  config.vm.box = "debian/stretch64"

  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.synced_folder "../../../../debs", "/debs", type: "nfs"

  config.vm.provision "shell", path: "../provision.sh"
end
debian/test/xenial/Vagrantfile (new file, 10 lines)
@@ -0,0 +1,10 @@
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/xenial64"

  config.vm.synced_folder ".", "/vagrant", disabled: true
  config.vm.synced_folder "../../../../debs", "/debs"
  config.vm.provision "shell", path: "../provision.sh"
end
demo/.gitignore (11 changed lines)
@@ -1,4 +1,7 @@
# Ignore all the temporary files from the demo servers.
8080/
8081/
8082/
*.db
*.log
*.log.*
*.pid

/media_store.*
/etc
demo/README (new file, 26 lines)
@@ -0,0 +1,26 @@
DO NOT USE THESE DEMO SERVERS IN PRODUCTION

Requires you to have done:
    python setup.py develop

The demo start.sh will start three synapse servers on ports 8080, 8081 and 8082, with host names localhost:$port. This can be easily changed to `hostname`:$port in start.sh if required.

To enable the servers to communicate untrusted ssl certs are used. In order to do this the servers do not check the certs
and are configured in a highly insecure way. Do not use these configuration files in production.

stop.sh will stop the synapse servers and the webclient.

clean.sh will delete the databases and log files.

To start a completely new set of servers, run:

    ./demo/stop.sh; ./demo/clean.sh && ./demo/start.sh

Logs and sqlitedb will be stored in demo/808{0,1,2}.{log,db}

Also note that when joining a public room on a differnt HS via "#foo:bar.net", then you are (in the current impl) joining a room with room_id "foo". This means that it won't work if your HS already has a room with that name.
@@ -4,19 +4,16 @@ set -e
|
||||
|
||||
DIR="$( cd "$( dirname "$0" )" && pwd )"
|
||||
|
||||
# Ensure that the servers are stopped.
|
||||
$DIR/stop.sh
|
||||
|
||||
PID_FILE="$DIR/servers.pid"
|
||||
|
||||
if [ -f "$PID_FILE" ]; then
|
||||
if [ -f $PID_FILE ]; then
|
||||
echo "servers.pid exists!"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
for port in 8080 8081 8082; do
|
||||
rm -rf "${DIR:?}/$port"
|
||||
rm -rf "$DIR/media_store.$port"
|
||||
rm -rf $DIR/$port
|
||||
rm -rf $DIR/media_store.$port
|
||||
done
|
||||
|
||||
rm -rf "${DIR:?}/etc"
|
||||
rm -rf $DIR/etc
|
||||
|
||||
demo/start.sh (149 changed lines)
@@ -4,100 +4,98 @@ DIR="$( cd "$( dirname "$0" )" && pwd )"
|
||||
|
||||
CWD=$(pwd)
|
||||
|
||||
cd "$DIR/.." || exit
|
||||
cd "$DIR/.."
|
||||
|
||||
PYTHONPATH=$(readlink -f "$(pwd)")
|
||||
export PYTHONPATH
|
||||
mkdir -p demo/etc
|
||||
|
||||
export PYTHONPATH=$(readlink -f $(pwd))
|
||||
|
||||
|
||||
echo "$PYTHONPATH"
|
||||
echo $PYTHONPATH
|
||||
|
||||
for port in 8080 8081 8082; do
|
||||
echo "Starting server on port $port... "
|
||||
|
||||
https_port=$((port + 400))
|
||||
mkdir -p demo/$port
|
||||
pushd demo/$port || exit
|
||||
pushd demo/$port
|
||||
|
||||
# Generate the configuration for the homeserver at localhost:848x.
|
||||
#rm $DIR/etc/$port.config
|
||||
python3 -m synapse.app.homeserver \
|
||||
--generate-config \
|
||||
--server-name "localhost:$port" \
|
||||
--config-path "$port.config" \
|
||||
-H "localhost:$https_port" \
|
||||
--config-path "$DIR/etc/$port.config" \
|
||||
--report-stats no
|
||||
|
||||
if ! grep -F "Customisation made by demo/start.sh" -q "$port.config"; then
|
||||
# Generate TLS keys.
|
||||
openssl req -x509 -newkey rsa:4096 \
|
||||
-keyout "localhost:$port.tls.key" \
|
||||
-out "localhost:$port.tls.crt" \
|
||||
-days 365 -nodes -subj "/O=matrix"
|
||||
if ! grep -F "Customisation made by demo/start.sh" -q $DIR/etc/$port.config; then
|
||||
printf '\n\n# Customisation made by demo/start.sh\n' >> $DIR/etc/$port.config
|
||||
|
||||
# Add customisations to the configuration.
|
||||
{
|
||||
printf '\n\n# Customisation made by demo/start.sh\n\n'
|
||||
echo "public_baseurl: http://localhost:$port/"
|
||||
echo 'enable_registration: true'
|
||||
echo ''
|
||||
echo "public_baseurl: http://localhost:$port/" >> $DIR/etc/$port.config
|
||||
|
||||
# Warning, this heredoc depends on the interaction of tabs and spaces.
|
||||
# Please don't accidentaly bork me with your fancy settings.
|
||||
listeners=$(cat <<-PORTLISTENERS
|
||||
# Configure server to listen on both $https_port and $port
|
||||
# This overides some of the default settings above
|
||||
listeners:
|
||||
- port: $https_port
|
||||
type: http
|
||||
tls: true
|
||||
resources:
|
||||
- names: [client, federation]
|
||||
echo 'enable_registration: true' >> $DIR/etc/$port.config
|
||||
|
||||
- port: $port
|
||||
tls: false
|
||||
bind_addresses: ['::1', '127.0.0.1']
|
||||
type: http
|
||||
x_forwarded: true
|
||||
resources:
|
||||
- names: [client, federation]
|
||||
compress: false
|
||||
PORTLISTENERS
|
||||
)
|
||||
# Warning, this heredoc depends on the interaction of tabs and spaces. Please don't
|
||||
# accidentaly bork me with your fancy settings.
|
||||
listeners=$(cat <<-PORTLISTENERS
|
||||
# Configure server to listen on both $https_port and $port
|
||||
# This overides some of the default settings above
|
||||
listeners:
|
||||
- port: $https_port
|
||||
type: http
|
||||
tls: true
|
||||
resources:
|
||||
- names: [client, federation]
|
||||
|
||||
echo "${listeners}"
|
||||
- port: $port
|
||||
tls: false
|
||||
bind_addresses: ['::1', '127.0.0.1']
|
||||
type: http
|
||||
x_forwarded: true
|
||||
resources:
|
||||
- names: [client, federation]
|
||||
compress: false
|
||||
PORTLISTENERS
|
||||
)
|
||||
echo "${listeners}" >> $DIR/etc/$port.config
|
||||
|
||||
# Disable TLS for the servers
|
||||
printf '\n\n# Disable TLS for the servers.'
|
||||
echo '# DO NOT USE IN PRODUCTION'
|
||||
echo 'use_insecure_ssl_client_just_for_testing_do_not_use: true'
|
||||
echo 'federation_verify_certificates: false'
|
||||
# Disable tls for the servers
|
||||
printf '\n\n# Disable tls on the servers.' >> $DIR/etc/$port.config
|
||||
echo '# DO NOT USE IN PRODUCTION' >> $DIR/etc/$port.config
|
||||
echo 'use_insecure_ssl_client_just_for_testing_do_not_use: true' >> $DIR/etc/$port.config
|
||||
echo 'federation_verify_certificates: false' >> $DIR/etc/$port.config
|
||||
|
||||
# Set paths for the TLS certificates.
|
||||
echo "tls_certificate_path: \"$DIR/$port/localhost:$port.tls.crt\""
|
||||
echo "tls_private_key_path: \"$DIR/$port/localhost:$port.tls.key\""
|
||||
# Set tls paths
|
||||
echo "tls_certificate_path: \"$DIR/etc/localhost:$https_port.tls.crt\"" >> $DIR/etc/$port.config
|
||||
echo "tls_private_key_path: \"$DIR/etc/localhost:$https_port.tls.key\"" >> $DIR/etc/$port.config
|
||||
|
||||
# Ignore keys from the trusted keys server
|
||||
echo '# Ignore keys from the trusted keys server'
|
||||
echo 'trusted_key_servers:'
|
||||
echo ' - server_name: "matrix.org"'
|
||||
echo ' accept_keys_insecurely: true'
|
||||
echo ''
|
||||
# Generate tls keys
|
||||
openssl req -x509 -newkey rsa:4096 -keyout $DIR/etc/localhost\:$https_port.tls.key -out $DIR/etc/localhost\:$https_port.tls.crt -days 365 -nodes -subj "/O=matrix"
|
||||
|
||||
# Allow the servers to communicate over localhost.
|
||||
allow_list=$(cat <<-ALLOW_LIST
|
||||
# Allow the servers to communicate over localhost.
|
||||
ip_range_whitelist:
|
||||
- '127.0.0.1/8'
|
||||
- '::1/128'
|
||||
ALLOW_LIST
|
||||
)
|
||||
# Ignore keys from the trusted keys server
|
||||
echo '# Ignore keys from the trusted keys server' >> $DIR/etc/$port.config
|
||||
echo 'trusted_key_servers:' >> $DIR/etc/$port.config
|
||||
echo ' - server_name: "matrix.org"' >> $DIR/etc/$port.config
|
||||
echo ' accept_keys_insecurely: true' >> $DIR/etc/$port.config
|
||||
|
||||
echo "${allow_list}"
|
||||
} >> "$port.config"
|
||||
# Reduce the blacklist
|
||||
blacklist=$(cat <<-BLACK
|
||||
# Set the blacklist so that it doesn't include 127.0.0.1, ::1
|
||||
federation_ip_range_blacklist:
|
||||
- '10.0.0.0/8'
|
||||
- '172.16.0.0/12'
|
||||
- '192.168.0.0/16'
|
||||
- '100.64.0.0/10'
|
||||
- '169.254.0.0/16'
|
||||
- 'fe80::/64'
|
||||
- 'fc00::/7'
|
||||
BLACK
|
||||
)
|
||||
echo "${blacklist}" >> $DIR/etc/$port.config
|
||||
fi
|
||||
|
||||
# Check script parameters
|
||||
if [ $# -eq 1 ]; then
|
||||
if [ "$1" = "--no-rate-limit" ]; then
|
||||
if [ $1 = "--no-rate-limit" ]; then
|
||||
|
||||
# Disable any rate limiting
|
||||
ratelimiting=$(cat <<-RC
|
||||
@@ -139,21 +137,22 @@ for port in 8080 8081 8082; do
|
||||
burst_count: 1000
|
||||
RC
|
||||
)
|
||||
echo "${ratelimiting}" >> "$port.config"
|
||||
echo "${ratelimiting}" >> $DIR/etc/$port.config
|
||||
fi
|
||||
fi
|
||||
|
||||
# Always disable reporting of stats if the option is not there.
|
||||
if ! grep -F "report_stats" -q "$port.config" ; then
|
||||
echo "report_stats: false" >> "$port.config"
|
||||
if ! grep -F "full_twisted_stacktraces" -q $DIR/etc/$port.config; then
|
||||
echo "full_twisted_stacktraces: true" >> $DIR/etc/$port.config
|
||||
fi
|
||||
if ! grep -F "report_stats" -q $DIR/etc/$port.config ; then
|
||||
echo "report_stats: false" >> $DIR/etc/$port.config
|
||||
fi
|
||||
|
||||
# Run the homeserver in the background.
|
||||
python3 -m synapse.app.homeserver \
|
||||
--config-path "$port.config" \
|
||||
--config-path "$DIR/etc/$port.config" \
|
||||
-D \
|
||||
|
||||
popd || exit
|
||||
popd
|
||||
done
|
||||
|
||||
cd "$CWD" || exit
|
||||
cd "$CWD"
|
||||
|
||||
@@ -8,7 +8,7 @@ for pid_file in $FILES; do
|
||||
pid=$(cat "$pid_file")
|
||||
if [[ $pid ]]; then
|
||||
echo "Killing $pid_file with $pid"
|
||||
kill "$pid"
|
||||
kill $pid
|
||||
fi
|
||||
done
|
||||
|
||||
|
||||
@@ -1,20 +1,17 @@
|
||||
# Dockerfile to build the matrixdotorg/synapse docker images.
|
||||
#
|
||||
# Note that it uses features which are only available in BuildKit - see
|
||||
# https://docs.docker.com/go/buildkit/ for more information.
|
||||
#
|
||||
# To build the image, run `docker build` command from the root of the
|
||||
# synapse repository:
|
||||
#
|
||||
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile .
|
||||
# docker build -f docker/Dockerfile .
|
||||
#
|
||||
# There is an optional PYTHON_VERSION build argument which sets the
|
||||
# version of python to build against: for example:
|
||||
#
|
||||
# DOCKER_BUILDKIT=1 docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.10 .
|
||||
# docker build -f docker/Dockerfile --build-arg PYTHON_VERSION=3.6 .
|
||||
#
|
||||
|
||||
ARG PYTHON_VERSION=3.9
|
||||
ARG PYTHON_VERSION=3.8
|
||||
|
||||
###
|
||||
### Stage 0: builder
|
||||
@@ -22,16 +19,7 @@ ARG PYTHON_VERSION=3.9
|
||||
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
|
||||
|
||||
# install the OS build deps
|
||||
#
|
||||
# RUN --mount is specific to buildkit and is documented at
|
||||
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
|
||||
# Here we use it to set up a cache for apt, to improve rebuild speeds on
|
||||
# slow connections.
|
||||
#
|
||||
RUN \
|
||||
--mount=type=cache,target=/var/cache/apt,sharing=locked \
|
||||
--mount=type=cache,target=/var/lib/apt,sharing=locked \
|
||||
apt-get update && apt-get install -y \
|
||||
RUN apt-get update && apt-get install -y \
|
||||
build-essential \
|
||||
libffi-dev \
|
||||
libjpeg-dev \
|
||||
@@ -46,7 +34,8 @@ RUN \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
|
||||
# Copy just what we need to pip install
|
||||
COPY MANIFEST.in README.rst setup.py /synapse/
|
||||
COPY scripts /synapse/scripts/
|
||||
COPY MANIFEST.in README.rst setup.py synctl /synapse/
|
||||
COPY synapse/__init__.py /synapse/synapse/__init__.py
|
||||
COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
|
||||
|
||||
@@ -55,8 +44,7 @@ COPY synapse/python_dependencies.py /synapse/synapse/python_dependencies.py
|
||||
# used while you develop on the source
|
||||
#
|
||||
# This is aiming at installing the `install_requires` and `extras_require` from `setup.py`
|
||||
RUN --mount=type=cache,target=/root/.cache/pip \
|
||||
pip install --prefix="/install" --no-warn-script-location \
|
||||
RUN pip install --prefix="/install" --no-warn-script-location \
|
||||
/synapse[all]
|
||||
|
||||
# Copy over the rest of the project
|
||||
@@ -78,10 +66,7 @@ LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/syna
|
||||
LABEL org.opencontainers.image.source='https://github.com/matrix-org/synapse.git'
|
||||
LABEL org.opencontainers.image.licenses='Apache-2.0'
|
||||
|
||||
RUN \
|
||||
--mount=type=cache,target=/var/cache/apt,sharing=locked \
|
||||
--mount=type=cache,target=/var/lib/apt,sharing=locked \
|
||||
apt-get update && apt-get install -y \
|
||||
RUN apt-get update && apt-get install -y \
|
||||
curl \
|
||||
gosu \
|
||||
libjpeg62-turbo \
|
||||
@@ -97,6 +82,8 @@ COPY --from=builder /install /usr/local
|
||||
COPY ./docker/start.py /start.py
|
||||
COPY ./docker/conf /conf
|
||||
|
||||
VOLUME ["/data"]
|
||||
|
||||
EXPOSE 8008/tcp 8009/tcp 8448/tcp
|
||||
|
||||
ENTRYPOINT ["/start.py"]
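Note the `RUN --mount=type=cache,...` lines in the v1.55.2 Dockerfile above: they are BuildKit-only syntax, which is why both the header comment and the CI job insist on BuildKit being enabled. Building the image locally therefore looks roughly like this, run from the repository root (command assembled from the build invocations shown elsewhere in this compare):

```sh
# BuildKit must be enabled for the cache-mount syntax in docker/Dockerfile (v1.55.2);
# the plainer v1.46 Dockerfile builds without it.
DOCKER_BUILDKIT=1 docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
```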
@@ -16,7 +16,7 @@ ARG distro=""
|
||||
### Stage 0: build a dh-virtualenv
|
||||
###
|
||||
|
||||
# This is only really needed on focal, since other distributions we
|
||||
# This is only really needed on bionic and focal, since other distributions we
|
||||
# care about have a recent version of dh-virtualenv by default. Unfortunately,
|
||||
# it looks like focal is going to be with us for a while.
|
||||
#
|
||||
@@ -36,8 +36,9 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
|
||||
wget
|
||||
|
||||
# fetch and unpack the package
|
||||
# TODO: Upgrade to 1.2.2 once bionic is dropped (1.2.2 requires debhelper 12; bionic has only 11)
|
||||
RUN mkdir /dh-virtualenv
|
||||
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/refs/tags/1.2.2.tar.gz
|
||||
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/ac6e1b1.tar.gz
|
||||
RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz
|
||||
|
||||
# install its build deps. We do another apt-cache-update here, because we might
|
||||
@@ -85,12 +86,12 @@ RUN apt-get update -qq -o Acquire::Languages=none \
|
||||
libpq-dev \
|
||||
xmlsec1
|
||||
|
||||
COPY --from=builder /dh-virtualenv_1.2.2-1_all.deb /
|
||||
COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /
|
||||
|
||||
# install dhvirtualenv. Update the apt cache again first, in case we got a
|
||||
# cached cache from docker the first time.
|
||||
RUN apt-get update -qq -o Acquire::Languages=none \
|
||||
&& apt-get install -yq /dh-virtualenv_1.2.2-1_all.deb
|
||||
&& apt-get install -yq /dh-virtualenv_1.2~dev-1_all.deb
|
||||
|
||||
WORKDIR /synapse/source
|
||||
ENTRYPOINT ["bash","/synapse/source/docker/build_debian.sh"]
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Use the Sytest image that comes with a lot of the build dependencies
|
||||
# pre-installed
|
||||
FROM matrixdotorg/sytest:focal
|
||||
FROM matrixdotorg/sytest:bionic
|
||||
|
||||
# The Sytest image doesn't come with python, so install that
|
||||
RUN apt-get update && apt-get -qq install -y python3 python3-dev python3-pip
|
||||
|
||||
@@ -21,6 +21,3 @@ VOLUME ["/data"]
|
||||
# files to run the desired worker configuration. Will start supervisord.
|
||||
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
|
||||
ENTRYPOINT ["/configure_workers_and_start.py"]
|
||||
|
||||
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
|
||||
CMD /bin/sh /healthcheck.sh
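The `HEALTHCHECK` added to the worker image runs a generated `/healthcheck.sh`; judging from the `healthcheck.sh.j2` template and the URL list built in `configure_workers_and_start.py` further down, the generated script is essentially a chain of curl probes, one per configured process. A sketch of what it ends up containing:

```sh
#!/bin/sh
# The main process plus every worker port must answer on /health,
# otherwise the container is reported unhealthy.
curl -fSs http://localhost:8080/health || exit 1
curl -fSs http://localhost:18009/health || exit 1   # hypothetical worker port, for illustration
```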
@@ -65,12 +65,7 @@ The following environment variables are supported in `generate` mode:
|
||||
* `SYNAPSE_DATA_DIR`: where the generated config will put persistent data
|
||||
such as the database and media store. Defaults to `/data`.
|
||||
* `UID`, `GID`: the user id and group id to use for creating the data
|
||||
directories. If unset, and no user is set via `docker run --user`, defaults
|
||||
to `991`, `991`.
|
||||
|
||||
## Postgres
|
||||
|
||||
By default the config will use SQLite. See the [docs on using Postgres](https://github.com/matrix-org/synapse/blob/develop/docs/postgres.md) for more info on how to use Postgres. Until this section is improved [this issue](https://github.com/matrix-org/synapse/issues/8304) may provide useful information.
|
||||
directories. Defaults to `991`, `991`.
|
||||
|
||||
## Running synapse
|
||||
|
||||
@@ -102,9 +97,7 @@ The following environment variables are supported in `run` mode:
|
||||
`<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
|
||||
* `SYNAPSE_WORKER`: module to execute, used when running synapse with workers.
|
||||
Defaults to `synapse.app.homeserver`, which is suitable for non-worker mode.
|
||||
* `UID`, `GID`: the user and group id to run Synapse as. If unset, and no user
|
||||
is set via `docker run --user`, defaults to `991`, `991`. Note that this user
|
||||
must have permission to read the config files, and write to the data directories.
|
||||
* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
|
||||
* `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
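The v1.55.2 wording of the `UID`/`GID` entry above also documents the `docker run --user` alternative. Purely as an illustration (the host path and ids are made up):

```sh
# Run as an explicit uid/gid instead of relying on the UID/GID environment
# variables or the 991:991 default; that user must be able to read the config
# under /data and write to the data directories.
docker run -d --name synapse \
    --user 1000:1000 \
    -v /path/to/synapse-data:/data \
    matrixdotorg/synapse:latest
```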
For more complex setups (e.g. for workers) you can also pass your args directly to synapse using `run` mode. For example like this:
|
||||
@@ -193,7 +186,7 @@ point to another Dockerfile.
|
||||
## Disabling the healthcheck
|
||||
|
||||
If you are using a non-standard port or tls inside docker you can disable the healthcheck
|
||||
whilst running the above `docker run` commands.
|
||||
whilst running the above `docker run` commands.
|
||||
|
||||
```
|
||||
--no-healthcheck
|
||||
@@ -219,7 +212,7 @@ If you wish to point the healthcheck at a different port with docker command, ad
|
||||
## Setting the healthcheck in docker-compose file
|
||||
|
||||
You can add the following to set a custom healthcheck in a docker compose file.
|
||||
You will need docker-compose version >2.1 for this to work.
|
||||
You will need docker-compose version >2.1 for this to work.
|
||||
|
||||
```
|
||||
healthcheck:
|
||||
@@ -233,5 +226,5 @@ healthcheck:
|
||||
## Using jemalloc
|
||||
|
||||
Jemalloc is embedded in the image and will be used instead of the default allocator.
|
||||
You can read about jemalloc by reading the Synapse
|
||||
You can read about jemalloc by reading the Synapse
|
||||
[README](https://github.com/matrix-org/synapse/blob/HEAD/README.rst#help-synapse-is-slow-and-eats-all-my-ram-cpu).
|
||||
|
||||
@@ -5,7 +5,7 @@
|
||||
set -ex
|
||||
|
||||
# Get the codename from distro env
|
||||
DIST=$(cut -d ':' -f2 <<< "${distro:?}")
|
||||
DIST=`cut -d ':' -f2 <<< $distro`
|
||||
|
||||
# we get a read-only copy of the source: make a writeable copy
|
||||
cp -aT /synapse/source /synapse/build
|
||||
@@ -17,7 +17,7 @@ cd /synapse/build
|
||||
# Section to determine which "component" it should go into (see
|
||||
# https://manpages.debian.org/stretch/reprepro/reprepro.1.en.html#GUESSING)
|
||||
|
||||
DEB_VERSION=$(dpkg-parsechangelog -SVersion)
|
||||
DEB_VERSION=`dpkg-parsechangelog -SVersion`
|
||||
case $DEB_VERSION in
|
||||
*~rc*|*~a*|*~b*|*~c*)
|
||||
sed -ie '/^Section:/c\Section: prerelease' debian/control
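Two small bash idioms in the v1.55.2 lines above are worth unpacking: `${distro:?}` makes the script abort with an error if `$distro` is unset or empty (instead of silently producing an empty `DIST`), and the quoted here-string feeds the value to `cut` without word splitting. For example:

```sh
set -ex                                     # as in the build script above
distro="debian:sid"
DIST=$(cut -d ':' -f2 <<< "${distro:?}")    # DIST=sid
unset distro
DIST=$(cut -d ':' -f2 <<< "${distro:?}")    # prints "distro: parameter null or not set" and aborts
```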
@@ -1,6 +0,0 @@
|
||||
#!/bin/sh
|
||||
# This healthcheck script is designed to return OK when every
|
||||
# host involved returns OK
|
||||
{%- for healthcheck_url in healthcheck_urls %}
|
||||
curl -fSs {{ healthcheck_url }} || exit 1
|
||||
{%- endfor %}
|
||||
@@ -148,6 +148,14 @@ bcrypt_rounds: 12
|
||||
allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
|
||||
enable_group_creation: true
|
||||
|
||||
# The list of identity servers trusted to verify third party
|
||||
# identifiers by this server.
|
||||
#
|
||||
# Also defines the ID server which will be called when an account is
|
||||
# deactivated (one will be picked arbitrarily).
|
||||
trusted_third_party_id_servers:
|
||||
- matrix.org
|
||||
- vector.im
|
||||
|
||||
## Metrics ###
|
||||
|
||||
|
||||
@@ -48,7 +48,7 @@ WORKERS_CONFIG = {
    "app": "synapse.app.user_dir",
    "listener_resources": ["client"],
    "endpoint_patterns": [
        "^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$"
    ],
    "shared_extra_conf": {"update_user_directory": False},
    "worker_extra_conf": "",

@@ -85,10 +85,10 @@ WORKERS_CONFIG = {
    "app": "synapse.app.generic_worker",
    "listener_resources": ["client"],
    "endpoint_patterns": [
        "^/_matrix/client/(v2_alpha|r0|v3)/sync$",
        "^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$",
        "^/_matrix/client/(api/v1|r0|v3)/initialSync$",
        "^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$",
    ],
    "shared_extra_conf": {},
    "worker_extra_conf": "",

@@ -146,11 +146,11 @@ WORKERS_CONFIG = {
    "app": "synapse.app.generic_worker",
    "listener_resources": ["client"],
    "endpoint_patterns": [
        "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact",
        "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send",
        "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
        "^/_matrix/client/(api/v1|r0|v3|unstable)/join/",
        "^/_matrix/client/(api/v1|r0|v3|unstable)/profile/",
    ],
    "shared_extra_conf": {},
    "worker_extra_conf": "",

@@ -158,7 +158,7 @@ WORKERS_CONFIG = {
    "frontend_proxy": {
        "app": "synapse.app.frontend_proxy",
        "listener_resources": ["client", "replication"],
        "endpoint_patterns": ["^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload"],
        "shared_extra_conf": {},
        "worker_extra_conf": (
            "worker_main_http_uri: http://127.0.0.1:%d"

@@ -474,16 +474,10 @@ def generate_worker_files(environ, config_path: str, data_dir: str):

# Determine the load-balancing upstreams to configure
nginx_upstream_config = ""

# At the same time, prepare a list of internal endpoints to healthcheck
# starting with the main process which exists even if no workers do.
healthcheck_urls = ["http://localhost:8080/health"]

for upstream_worker_type, upstream_worker_ports in nginx_upstreams.items():
    body = ""
    for port in upstream_worker_ports:
        body += " server localhost:%d;\n" % (port,)
        healthcheck_urls.append("http://localhost:%d/health" % (port,))

    # Add to the list of configured upstreams
    nginx_upstream_config += NGINX_UPSTREAM_CONFIG_BLOCK.format(

@@ -516,13 +510,6 @@ def generate_worker_files(environ, config_path: str, data_dir: str):
    worker_config=supervisord_config,
)

# healthcheck config
convert(
    "/conf/healthcheck.sh.j2",
    "/healthcheck.sh",
    healthcheck_urls=healthcheck_urls,
)

# Ensure the logging directory exists
log_dir = data_dir + "/logs"
if not os.path.exists(log_dir):

@@ -16,4 +16,4 @@ sudo -u postgres /usr/lib/postgresql/10/bin/pg_ctl -w -D /var/lib/postgresql/dat

# Run the tests
cd /src
export TRIAL_FLAGS="-j 4"
tox --workdir=./.tox-pg-container -e py37-postgres "$@"

||||
@@ -120,7 +120,6 @@ def generate_config_from_template(config_dir, config_path, environ, ownership):

]

if ownership is not None:
    log(f"Setting ownership on /data to {ownership}")
    subprocess.check_output(["chown", "-R", ownership, "/data"])
    args = ["gosu", ownership] + args

@@ -145,18 +144,12 @@ def run_generate_config(environ, ownership):
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")

if ownership is not None:
    # make sure that synapse has perms to write to the data dir.
    log(f"Setting ownership on {data_dir} to {ownership}")
    subprocess.check_output(["chown", ownership, data_dir])

# create a suitable log config from our template
log_config_file = "%s/%s.log.config" % (config_dir, server_name)
if not os.path.exists(log_config_file):
    log("Creating log config %s" % (log_config_file,))
    convert("/conf/log.config", log_config_file, environ)

# generate the main config file, and a signing key.
args = [
    "python",
    "-m",

@@ -175,23 +168,29 @@ def run_generate_config(environ, ownership):
    "--open-private-ports",
]
# log("running %s" % (args, ))
os.execv("/usr/local/bin/python", args)

if ownership is not None:
    # make sure that synapse has perms to write to the data dir.
    subprocess.check_output(["chown", ownership, data_dir])

    args = ["gosu", ownership] + args
    os.execv("/usr/sbin/gosu", args)
else:
    os.execv("/usr/local/bin/python", args)


def main(args, environ):
    mode = args[1] if len(args) > 1 else "run"

    # if we were given an explicit user to switch to, do so
    ownership = None
    if "UID" in environ:
        desired_uid = int(environ["UID"])
        desired_gid = int(environ.get("GID", "991"))
        ownership = f"{desired_uid}:{desired_gid}"
    elif os.getuid() == 0:
        # otherwise, if we are running as root, use user 991
        ownership = "991:991"

    desired_uid = int(environ.get("UID", "991"))
    desired_gid = int(environ.get("GID", "991"))
    synapse_worker = environ.get("SYNAPSE_WORKER", "synapse.app.homeserver")
    if (desired_uid == os.getuid()) and (desired_gid == os.getgid()):
        ownership = None
    else:
        ownership = "{}:{}".format(desired_uid, desired_gid)

    if ownership is None:
        log("Will not perform chmod/gosu as UserID already matches request")

    # In generate mode, generate a configuration and missing keys, then exit
    if mode == "generate":

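For reference, the `generate` mode described above is normally invoked once, before the first real start of the container, along the lines of the following (a sketch; the volume path and server name are placeholders):

```
docker run -it --rm \
    -v /path/to/data:/data \
    -e SYNAPSE_SERVER_NAME=example.com \
    -e SYNAPSE_REPORT_STATS=yes \
    matrixdotorg/synapse:latest generate
```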
@@ -15,12 +15,12 @@ in `homeserver.yaml`, to the list of authorized domains. If you have not set

1. Agree to the terms of service and submit.
1. Copy your site key and secret key and add them to your `homeserver.yaml`
   configuration file
   ```yaml
   recaptcha_public_key: YOUR_SITE_KEY
   recaptcha_private_key: YOUR_SECRET_KEY
   ```
1. Enable the CAPTCHA for new registrations
   ```yaml
   enable_registration_captcha: true
   ```
1. Go to the settings page for the CAPTCHA you just created

||||
docs/MSC1711_certificates_FAQ.md (new file, 335 lines)
@@ -0,0 +1,335 @@
|
||||
# MSC1711 Certificates FAQ
|
||||
|
||||
## Historical Note
|
||||
This document was originally written to guide server admins through the upgrade
|
||||
path towards Synapse 1.0. Specifically,
|
||||
[MSC1711](https://github.com/matrix-org/matrix-doc/blob/main/proposals/1711-x509-for-federation.md)
|
||||
required that all servers present valid TLS certificates on their federation
|
||||
API. Admins were encouraged to achieve compliance from version 0.99.0 (released
|
||||
in February 2019) ahead of version 1.0 (released June 2019) enforcing the
|
||||
certificate checks.
|
||||
|
||||
Much of what follows is now outdated since most admins will have already
|
||||
upgraded, however it may be of use to those with old installs returning to the
|
||||
project.
|
||||
|
||||
If you are setting up a server from scratch you almost certainly should look at
|
||||
the [installation guide](setup/installation.md) instead.
|
||||
|
||||
## Introduction
|
||||
The goal of Synapse 0.99.0 is to act as a stepping stone to Synapse 1.0.0. It
|
||||
supports the r0.1 release of the server to server specification, but is
|
||||
compatible with both the legacy Matrix federation behaviour (pre-r0.1) as well
|
||||
as post-r0.1 behaviour, in order to allow for a smooth upgrade across the
|
||||
federation.
|
||||
|
||||
The most important thing to know is that Synapse 1.0.0 will require a valid TLS
|
||||
certificate on federation endpoints. Self signed certificates will not be
|
||||
sufficient.
|
||||
|
||||
Synapse 0.99.0 makes it easy to configure TLS certificates and will
|
||||
interoperate with both >= 1.0.0 servers as well as existing servers yet to
|
||||
upgrade.
|
||||
|
||||
**It is critical that all admins upgrade to 0.99.0 and configure a valid TLS
certificate.** Admins will have 1 month to do so, after which 1.0.0 will be
released and those servers without a valid certificate will no longer be able
to federate with >= 1.0.0 servers.
|
||||
|
||||
Full details on how to carry out this configuration change is given
|
||||
[below](#configuring-certificates-for-compatibility-with-synapse-100). A
|
||||
timeline and some frequently asked questions are also given below.
|
||||
|
||||
For more details and context on the release of the r0.1 Server/Server API and
|
||||
imminent Matrix 1.0 release, you can also see our
|
||||
[main talk from FOSDEM 2019](https://matrix.org/blog/2019/02/04/matrix-at-fosdem-2019/).
|
||||
|
||||
## Contents
|
||||
* Timeline
|
||||
* Configuring certificates for compatibility with Synapse 1.0
|
||||
* FAQ
|
||||
* Synapse 0.99.0 has just been released, what do I need to do right now?
|
||||
* How do I upgrade?
|
||||
* What will happen if I do not set up a valid federation certificate
|
||||
immediately?
|
||||
* What will happen if I do nothing at all?
|
||||
* When do I need a SRV record or .well-known URI?
|
||||
* Can I still use an SRV record?
|
||||
* I have created a .well-known URI. Do I still need an SRV record?
|
||||
* It used to work just fine, why are you breaking everything?
|
||||
* Can I manage my own certificates rather than having Synapse renew
|
||||
certificates itself?
|
||||
* Do you still recommend against using a reverse proxy on the federation port?
|
||||
* Do I still need to give my TLS certificates to Synapse if I am using a
|
||||
reverse proxy?
|
||||
* Do I need the same certificate for the client and federation port?
|
||||
* How do I tell Synapse to reload my keys/certificates after I replace them?
|
||||
|
||||
## Timeline
|
||||
|
||||
**5th Feb 2019 - Synapse 0.99.0 is released.**
|
||||
|
||||
All server admins are encouraged to upgrade.
|
||||
|
||||
0.99.0:
|
||||
|
||||
- provides support for ACME to make setting up Let's Encrypt certs easy, as
|
||||
well as .well-known support.
|
||||
|
||||
- does not enforce that a valid CA cert is present on the federation API, but
|
||||
rather makes it easy to set one up.
|
||||
|
||||
- provides support for .well-known
|
||||
|
||||
Admins should upgrade and configure a valid CA cert. Homeservers that require a
|
||||
.well-known entry (see below), should retain their SRV record and use it
|
||||
alongside their .well-known record.
|
||||
|
||||
**10th June 2019 - Synapse 1.0.0 is released**
|
||||
|
||||
1.0.0 is scheduled for release on 10th June. In
accordance with the [S2S spec](https://matrix.org/docs/spec/server_server/r0.1.0.html)
1.0.0 will enforce certificate validity. This means that any homeserver without a
valid certificate after this point will no longer be able to federate with
1.0.0 servers.
|
||||
|
||||
## Configuring certificates for compatibility with Synapse 1.0.0
|
||||
|
||||
### If you do not currently have an SRV record
|
||||
|
||||
In this case, your `server_name` points to the host where your Synapse is
|
||||
running. There is no need to create a `.well-known` URI or an SRV record, but
|
||||
you will need to give Synapse a valid, signed, certificate.
|
||||
|
||||
### If you do have an SRV record currently
|
||||
|
||||
If you are using an SRV record, your matrix domain (`server_name`) may not
|
||||
point to the same host that your Synapse is running on (the 'target
|
||||
domain'). (If it does, you can follow the recommendation above; otherwise, read
|
||||
on.)
|
||||
|
||||
Let's assume that your `server_name` is `example.com`, and your Synapse is
|
||||
hosted at a target domain of `customer.example.net`. Currently you should have
|
||||
an SRV record which looks like:
|
||||
|
||||
```
_matrix._tcp.example.com. IN SRV 10 5 8000 customer.example.net.
```

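You can check what your current record resolves to with `dig` (illustrative; substitute your own domain):

```
dig _matrix._tcp.example.com SRV +short
# 10 5 8000 customer.example.net.
```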
In this situation, you have three choices for how to proceed:
|
||||
|
||||
#### Option 1: give Synapse a certificate for your matrix domain
|
||||
|
||||
Synapse 1.0 will expect your server to present a TLS certificate for your
|
||||
`server_name` (`example.com` in the above example). You can achieve this by acquiring a
|
||||
certificate for the `server_name` yourself (for example, using `certbot`), and giving it
|
||||
and the key to Synapse via `tls_certificate_path` and `tls_private_key_path`.
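
A minimal sketch of that flow, assuming you use `certbot` in standalone mode and the default Let's Encrypt paths:

```
certbot certonly --standalone -d example.com
# then point Synapse at the result:
#   tls_certificate_path: /etc/letsencrypt/live/example.com/fullchain.pem
#   tls_private_key_path: /etc/letsencrypt/live/example.com/privkey.pem
```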
|
||||
|
||||
#### Option 2: run Synapse behind a reverse proxy
|
||||
|
||||
If you have an existing reverse proxy set up with correct TLS certificates for
|
||||
your domain, you can simply route all traffic through the reverse proxy by
|
||||
updating the SRV record appropriately (or removing it, if the proxy listens on
|
||||
8448).
|
||||
|
||||
See [the reverse proxy documentation](reverse_proxy.md) for information on setting up a
|
||||
reverse proxy.
|
||||
|
||||
#### Option 3: add a .well-known file to delegate your matrix traffic
|
||||
|
||||
This will allow you to keep Synapse on a separate domain, without having to
|
||||
give it a certificate for the matrix domain.
|
||||
|
||||
You can do this with a `.well-known` file as follows:
|
||||
|
||||
1. Keep the SRV record in place - it is needed for backwards compatibility
|
||||
with Synapse 0.34 and earlier.
|
||||
|
||||
2. Give Synapse a certificate corresponding to the target domain
   (`customer.example.net` in the above example). You can do this by acquiring a
   certificate for the target domain and giving it to Synapse via `tls_certificate_path`
   and `tls_private_key_path`.
|
||||
|
||||
3. Restart Synapse to ensure the new certificate is loaded.
|
||||
|
||||
4. Arrange for a `.well-known` file at
|
||||
`https://<server_name>/.well-known/matrix/server` with contents:
|
||||
|
||||
```json
|
||||
{"m.server": "<target server name>"}
|
||||
```
|
||||
|
||||
where the target server name is resolved as usual (i.e. SRV lookup, falling
|
||||
back to talking to port 8448).
|
||||
|
||||
In the above example, where synapse is listening on port 8000,
|
||||
`https://example.com/.well-known/matrix/server` should have `m.server` set to one of:
|
||||
|
||||
1. `customer.example.net` ─ with a SRV record on
|
||||
`_matrix._tcp.customer.example.com` pointing to port 8000, or:
|
||||
|
||||
2. `customer.example.net` ─ updating synapse to listen on the default port
|
||||
8448, or:
|
||||
|
||||
3. `customer.example.net:8000` ─ ensuring that if there is a reverse proxy
|
||||
on `customer.example.net:8000` it correctly handles HTTP requests with
|
||||
Host header set to `customer.example.net:8000`.
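
Once the file is in place, you can sanity-check the delegation from any machine (illustrative):

```
curl https://example.com/.well-known/matrix/server
# {"m.server": "customer.example.net:8000"}
```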
|
||||
|
||||
## FAQ
|
||||
|
||||
### Synapse 0.99.0 has just been released, what do I need to do right now?
|
||||
|
||||
Upgrade as soon as you can in preparation for Synapse 1.0.0, and update your
|
||||
TLS certificates as [above](#configuring-certificates-for-compatibility-with-synapse-100).
|
||||
|
||||
### What will happen if I do not set up a valid federation certificate immediately?
|
||||
|
||||
Nothing initially, but once 1.0.0 is in the wild it will not be possible to
|
||||
federate with 1.0.0 servers.
|
||||
|
||||
### What will happen if I do nothing at all?
|
||||
|
||||
If the admin takes no action at all, and remains on a Synapse < 0.99.0 then the
homeserver will be unable to federate with those who have implemented
.well-known. Then, as above, once the month upgrade window has expired the
homeserver will not be able to federate with any Synapse >= 1.0.0.
|
||||
|
||||
### When do I need a SRV record or .well-known URI?
|
||||
|
||||
If your homeserver listens on the default federation port (8448), and your
|
||||
`server_name` points to the host that your homeserver runs on, you do not need an
|
||||
SRV record or `.well-known/matrix/server` URI.
|
||||
|
||||
For instance, if you registered `example.com` and pointed its DNS A record at a
|
||||
fresh Upcloud VPS or similar, you could install Synapse 0.99 on that host,
|
||||
giving it a server_name of `example.com`, and it would automatically generate a
|
||||
valid TLS certificate for you via Let's Encrypt and no SRV record or
|
||||
`.well-known` URI would be needed.
|
||||
|
||||
This is the common case, although you can add an SRV record or
|
||||
`.well-known/matrix/server` URI for completeness if you wish.
|
||||
|
||||
**However**, if your server does not listen on port 8448, or if your `server_name`
|
||||
does not point to the host that your homeserver runs on, you will need to let
|
||||
other servers know how to find it.
|
||||
|
||||
In this case, you should see ["If you do have an SRV record
|
||||
currently"](#if-you-do-have-an-srv-record-currently) above.
|
||||
|
||||
### Can I still use an SRV record?
|
||||
|
||||
Firstly, if you didn't need an SRV record before (because your server is
|
||||
listening on port 8448 of your server_name), you certainly don't need one now:
|
||||
the defaults are still the same.
|
||||
|
||||
If you previously had an SRV record, you can keep using it provided you are
|
||||
able to give Synapse a TLS certificate corresponding to your server name. For
|
||||
example, suppose you had the following SRV record, which directs matrix traffic
|
||||
for example.com to matrix.example.com:443:
|
||||
|
||||
```
|
||||
_matrix._tcp.example.com. IN SRV 10 5 443 matrix.example.com
|
||||
```
|
||||
|
||||
In this case, Synapse must be given a certificate for example.com - or be
|
||||
configured to acquire one from Let's Encrypt.
|
||||
|
||||
If you are unable to give Synapse a certificate for your server_name, you will
|
||||
also need to use a .well-known URI instead. However, see also "I have created a
|
||||
.well-known URI. Do I still need an SRV record?".
|
||||
|
||||
### I have created a .well-known URI. Do I still need an SRV record?
|
||||
|
||||
As of Synapse 0.99, Synapse will first check for the existence of a `.well-known`
|
||||
URI and follow any delegation it suggests. It will only then check for the
|
||||
existence of an SRV record.
|
||||
|
||||
That means that the SRV record will often be redundant. However, you should
|
||||
remember that there may still be older versions of Synapse in the federation
|
||||
which do not understand `.well-known` URIs, so if you removed your SRV record you
|
||||
would no longer be able to federate with them.
|
||||
|
||||
It is therefore best to leave the SRV record in place for now. Synapse 0.34 and
|
||||
earlier will follow the SRV record (and not care about the invalid
|
||||
certificate). Synapse 0.99 and later will follow the .well-known URI, with the
|
||||
correct certificate chain.
|
||||
|
||||
### It used to work just fine, why are you breaking everything?
|
||||
|
||||
We have always wanted Matrix servers to be as easy to set up as possible, and
|
||||
so back when we started federation in 2014 we didn't want admins to have to go
|
||||
through the cumbersome process of buying a valid TLS certificate to run a
|
||||
server. This was before Let's Encrypt came along and made getting a free and
|
||||
valid TLS certificate straightforward. So instead, we adopted a system based on
|
||||
[Perspectives](https://en.wikipedia.org/wiki/Convergence_(SSL)): an approach
|
||||
where you check a set of "notary servers" (in practice, homeservers) to vouch
|
||||
for the validity of a certificate rather than having it signed by a CA. As long
|
||||
as enough different notaries agree on the certificate's validity, then it is
|
||||
trusted.
|
||||
|
||||
However, in practice this has never worked properly. Most people only use the
|
||||
default notary server (matrix.org), leading to inadvertent centralisation which
|
||||
we want to eliminate. Meanwhile, we never implemented the full consensus
|
||||
algorithm to query the servers participating in a room to determine consensus
|
||||
on whether a given certificate is valid. This is fiddly to get right
|
||||
(especially in face of sybil attacks), and we found ourselves questioning
|
||||
whether it was worth the effort to finish the work and commit to maintaining a
|
||||
secure certificate validation system as opposed to focusing on core Matrix
|
||||
development.
|
||||
|
||||
Meanwhile, Let's Encrypt came along in 2016, and put the final nail in the
|
||||
coffin of the Perspectives project (which was already pretty dead). So, the
|
||||
Spec Core Team decided that a better approach would be to mandate valid TLS
|
||||
certificates for federation alongside the rest of the Web. More details can be
|
||||
found in
|
||||
[MSC1711](https://github.com/matrix-org/matrix-doc/blob/main/proposals/1711-x509-for-federation.md#background-the-failure-of-the-perspectives-approach).
|
||||
|
||||
This results in a breaking change, which is disruptive, but absolutely critical
|
||||
for the security model. However, the existence of Let's Encrypt as a trivial
|
||||
way to replace the old self-signed certificates with valid CA-signed ones helps
|
||||
smooth things over massively, especially as Synapse can now automate Let's
|
||||
Encrypt certificate generation if needed.
|
||||
|
||||
### Can I manage my own certificates rather than having Synapse renew certificates itself?
|
||||
|
||||
Yes, you are welcome to manage your certificates yourself. Synapse will only
attempt to obtain certificates from Let's Encrypt if you configure it to do
so. The only requirement is that there is a valid TLS cert present for
federation endpoints.
|
||||
|
||||
### Do you still recommend against using a reverse proxy on the federation port?
|
||||
|
||||
We no longer actively recommend against using a reverse proxy. Many admins will
|
||||
find it easier to direct federation traffic to a reverse proxy and manage their
|
||||
own TLS certificates, and this is a supported configuration.
|
||||
|
||||
See [the reverse proxy documentation](reverse_proxy.md) for information on setting up a
|
||||
reverse proxy.
|
||||
|
||||
### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
|
||||
|
||||
Practically speaking, this is no longer necessary.
|
||||
|
||||
If you are using a reverse proxy for all of your TLS traffic, then you can set
|
||||
`no_tls: True`. In that case, the only reason Synapse needs the certificate is
|
||||
to populate a legacy 'tls_fingerprints' field in the federation API. This is
|
||||
ignored by Synapse 0.99.0 and later, and the only time pre-0.99 Synapses will
|
||||
check it is when attempting to fetch the server keys - and generally this is
|
||||
delegated via `matrix.org`, which is on 0.99.0.
|
||||
|
||||
However, there is a bug in Synapse 0.99.0
|
||||
[4554](<https://github.com/matrix-org/synapse/issues/4554>) which prevents
|
||||
Synapse from starting if you do not give it a TLS certificate. To work around
|
||||
this, you can give it any TLS certificate at all. This will be fixed soon.
|
||||
|
||||
### Do I need the same certificate for the client and federation port?
|
||||
|
||||
No. There is nothing stopping you from using different certificates,
|
||||
particularly if you are using a reverse proxy. However, Synapse will use the
|
||||
same certificate on any ports where TLS is configured.
|
||||
|
||||
### How do I tell Synapse to reload my keys/certificates after I replace them?
|
||||
|
||||
Synapse will reload the keys and certificates when it receives a SIGHUP - for
|
||||
example `kill -HUP $(cat homeserver.pid)`. Alternatively, simply restart
|
||||
Synapse, though this will result in downtime while it restarts.
|
||||
@@ -50,10 +50,8 @@ build the documentation with:
|
||||
mdbook build
|
||||
```
|
||||
|
||||
The rendered contents will be outputted to a new `book/` directory at the root of the repository. Please note that
index.html is not built by default, it is created by copying over the file `welcome_and_overview.html` to `index.html`
during deployment. Thus, when running `mdbook serve` locally the book will initially show a 404 in place of the index
due to the above. Do not be alarmed!
||||
|
||||
You can also have mdbook host the docs on a local webserver with hot-reload functionality via:
|
||||
|
||||
|
||||
@@ -13,6 +13,7 @@
|
||||
|
||||
# Upgrading
|
||||
- [Upgrading between Synapse Versions](upgrade.md)
|
||||
- [Upgrading from pre-Synapse 1.0](MSC1711_certificates_FAQ.md)
|
||||
|
||||
# Usage
|
||||
- [Federation](federate.md)
|
||||
@@ -22,14 +23,13 @@
|
||||
- [Structured Logging](structured_logging.md)
|
||||
- [Templates](templates.md)
|
||||
- [User Authentication](usage/configuration/user_authentication/README.md)
|
||||
- [Single-Sign On](usage/configuration/user_authentication/single_sign_on/README.md)
  - [OpenID Connect](openid.md)
  - [SAML](usage/configuration/user_authentication/single_sign_on/saml.md)
  - [CAS](usage/configuration/user_authentication/single_sign_on/cas.md)
- [SSO Mapping Providers](sso_mapping_providers.md)
|
||||
- [Password Auth Providers](password_auth_providers.md)
|
||||
- [JSON Web Tokens](jwt.md)
|
||||
- [Refresh Tokens](usage/configuration/user_authentication/refresh_tokens.md)
|
||||
- [Registration Captcha](CAPTCHA_SETUP.md)
|
||||
- [Application Services](application_services.md)
|
||||
- [Server Notices](server_notices.md)
|
||||
@@ -44,7 +44,6 @@
|
||||
- [Presence router callbacks](modules/presence_router_callbacks.md)
|
||||
- [Account validity callbacks](modules/account_validity_callbacks.md)
|
||||
- [Password auth provider callbacks](modules/password_auth_provider_callbacks.md)
|
||||
- [Background update controller callbacks](modules/background_update_controller_callbacks.md)
|
||||
- [Porting a legacy module to the new interface](modules/porting_legacy_module.md)
|
||||
- [Workers](workers.md)
|
||||
- [Using `synctl` with Workers](synctl_workers.md)
|
||||
@@ -52,7 +51,6 @@
|
||||
- [Administration](usage/administration/README.md)
|
||||
- [Admin API](usage/administration/admin_api/README.md)
|
||||
- [Account Validity](admin_api/account_validity.md)
|
||||
- [Background Updates](usage/administration/admin_api/background_updates.md)
|
||||
- [Delete Group](admin_api/delete_group.md)
|
||||
- [Event Reports](admin_api/event_reports.md)
|
||||
- [Media](admin_api/media_admin_api.md)
|
||||
@@ -65,24 +63,16 @@
|
||||
- [Statistics](admin_api/statistics.md)
|
||||
- [Users](admin_api/user_admin_api.md)
|
||||
- [Server Version](admin_api/version_api.md)
|
||||
- [Federation](usage/administration/admin_api/federation.md)
|
||||
- [Manhole](manhole.md)
|
||||
- [Monitoring](metrics-howto.md)
|
||||
- [Understanding Synapse Through Grafana Graphs](usage/administration/understanding_synapse_through_grafana_graphs.md)
|
||||
- [Useful SQL for Admins](usage/administration/useful_sql_for_admins.md)
|
||||
- [Database Maintenance Tools](usage/administration/database_maintenance_tools.md)
|
||||
- [State Groups](usage/administration/state_groups.md)
|
||||
- [Request log format](usage/administration/request_log.md)
|
||||
- [Admin FAQ](usage/administration/admin_faq.md)
|
||||
- [Scripts]()
|
||||
|
||||
# Development
|
||||
- [Contributing Guide](development/contributing_guide.md)
|
||||
- [Code Style](code_style.md)
|
||||
- [Release Cycle](development/releases.md)
|
||||
- [Git Usage](development/git.md)
|
||||
- [Testing]()
|
||||
- [Demo scripts](development/demo.md)
|
||||
- [OpenTracing](opentracing.md)
|
||||
- [Database Schemas](development/database_schema.md)
|
||||
- [Experimental features](development/experimental_features.md)
|
||||
@@ -103,4 +93,3 @@
|
||||
|
||||
# Other
|
||||
- [Dependency Deprecation Policy](deprecation_policy.md)
|
||||
- [Running Synapse on a Single-Board Computer](other/running_synapse_on_single_board_computers.md)
|
||||
|
||||
@@ -4,9 +4,6 @@ This API allows a server administrator to manage the validity of an account. To
|
||||
use it, you must enable the account validity feature (under
|
||||
`account_validity`) in Synapse's configuration.
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
## Renew account
|
||||
|
||||
This API extends the validity of an account by as much time as configured in the
|
||||
|
||||
@@ -4,11 +4,11 @@ This API lets a server admin delete a local group. Doing so will kick all
|
||||
users out of the group so that their clients will correctly handle the group
|
||||
being deleted.

To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).

The API is:

```
POST /_synapse/admin/v1/delete_group/<group_id>
```

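For example, with `curl` (the group ID, host and token are placeholders; note the URL-encoding of the leading `+`):

```
curl -X POST --header "Authorization: Bearer <access_token>" --data '{}' \
    "http://localhost:8008/_synapse/admin/v1/delete_group/%2Bbadgroup%3Aexample.com"
```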
@@ -2,13 +2,12 @@
|
||||

This API returns information about reported events.

To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).

The API is:
```
GET /_synapse/admin/v1/event_reports?from=0&limit=10
```

It returns a JSON body like the following:
|
||||
|
||||
@@ -95,10 +94,12 @@ The api is:
|
||||
```
|
||||
GET /_synapse/admin/v1/event_reports/<report_id>
|
||||
```
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
It returns a JSON body like the following:

```json
{
    "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
    "event_json": {

@@ -131,7 +132,7 @@ It returns a JSON body like the following:
        },
        "type": "m.room.message",
        "unsigned": {
            "age_ts": 1592291711430
        }
    },
    "id": <report_id>,

@@ -1,13 +1,24 @@
|
||||
# Contents
|
||||
- [Querying media](#querying-media)
|
||||
* [List all media in a room](#list-all-media-in-a-room)
|
||||
* [List all media uploaded by a user](#list-all-media-uploaded-by-a-user)
|
||||
- [Quarantine media](#quarantine-media)
|
||||
* [Quarantining media by ID](#quarantining-media-by-id)
|
||||
* [Remove media from quarantine by ID](#remove-media-from-quarantine-by-id)
|
||||
* [Quarantining media in a room](#quarantining-media-in-a-room)
|
||||
* [Quarantining all media of a user](#quarantining-all-media-of-a-user)
|
||||
* [Protecting media from being quarantined](#protecting-media-from-being-quarantined)
|
||||
* [Unprotecting media from being quarantined](#unprotecting-media-from-being-quarantined)
|
||||
- [Delete local media](#delete-local-media)
|
||||
* [Delete a specific local media](#delete-a-specific-local-media)
|
||||
* [Delete local media by date or size](#delete-local-media-by-date-or-size)
|
||||
* [Delete media uploaded by a user](#delete-media-uploaded-by-a-user)
|
||||
- [Purge Remote Media API](#purge-remote-media-api)
|
||||
|
||||
# Querying media
|
||||
|
||||
These APIs allow extracting media information from the homeserver.
|
||||
|
||||
Details about the format of the `media_id` and storage of the media in the file system
|
||||
are documented under [media repository](../media_repository.md).
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
## List all media in a room
|
||||
|
||||
This API gets a list of known media in a room.
|
||||
@@ -17,6 +28,8 @@ The API is:
|
||||
```
|
||||
GET /_synapse/admin/v1/room/<room_id>/media
|
||||
```
|
||||
The API returns a JSON body like the following:
|
||||
```json
|
||||
@@ -304,5 +317,8 @@ The following fields are returned in the JSON response body:
|
||||
|
||||
* `deleted`: integer - The number of media items successfully deleted
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
If the user re-requests purged remote media, synapse will re-request the media
|
||||
from the originating server.
|
||||
|
||||
@@ -10,15 +10,15 @@ paginate further back in the room from the point being purged from.
|
||||
Note that Synapse requires at least one message in each room, so it will never
|
||||
delete the last message in a room.
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]
|
||||
```
|
||||
|
||||
By default, events sent by local users are not deleted, as they may represent
|
||||
the only copies of this content in existence. (Events sent by remote users are
|
||||
deleted.)
|
||||
@@ -27,7 +27,7 @@ Room state data (such as joins, leaves, topic) is always preserved.
|
||||
|
||||
To delete local message events as well, set `delete_local_events` in the body:
|
||||
|
||||
```json
|
||||
{
|
||||
"delete_local_events": true
|
||||
}
|
||||
@@ -57,6 +57,9 @@ It is possible to poll for updates on recent purges with a second API;
|
||||
GET /_synapse/admin/v1/purge_history_status/<purge_id>
|
||||
```
|
||||
|
||||
Again, you will need to authenticate by providing an `access_token` for a
|
||||
server admin.
|
||||
|
||||
This API returns a JSON body like the following:
|
||||
|
||||
```json
|
||||
@@ -67,8 +70,6 @@ This API returns a JSON body like the following:
|
||||
|
||||
The status will be one of `active`, `complete`, or `failed`.
|
||||
|
||||
If `status` is `failed` there will be a string `error` with the error message.
|
||||
|
||||
## Reclaim disk space (Postgres)
|
||||
|
||||
To reclaim the disk space and return it to the operating system, you need to run
|
||||
|
||||
@@ -5,9 +5,6 @@ to a room with a given `room_id_or_alias`. You can only modify the membership of
|
||||
local users. The server administrator must be in the room and have permission to
|
||||
invite users.
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
## Parameters
|
||||
|
||||
The following parameters are available:
|
||||
@@ -26,9 +23,12 @@ POST /_synapse/admin/v1/join/<room_id_or_alias>
|
||||
}
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
Response:
|
||||
|
||||
```json
|
||||
```
|
||||
{
|
||||
"room_id": "!636q39766251:server.com"
|
||||
}
|
||||
|
||||
@@ -1,12 +1,20 @@
|
||||
# Contents
|
||||
- [List Room API](#list-room-api)
|
||||
- [Room Details API](#room-details-api)
|
||||
- [Room Members API](#room-members-api)
|
||||
- [Room State API](#room-state-api)
|
||||
- [Delete Room API](#delete-room-api)
|
||||
* [Undoing room shutdowns](#undoing-room-shutdowns)
|
||||
- [Make Room Admin API](#make-room-admin-api)
|
||||
- [Forward Extremities Admin API](#forward-extremities-admin-api)
|
||||
- [Event Context API](#event-context-api)
|
||||
|
||||
# List Room API
|
||||
|
||||
The List Room admin API allows server admins to get a list of rooms on their
|
||||
server. There are various parameters available that allow for filtering and
|
||||
sorting the returned list. This API supports pagination.
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following query parameters are available:
|
||||
@@ -30,14 +38,9 @@ The following query parameters are available:

- `history_visibility` - Rooms are ordered alphabetically by visibility of history of the room.
- `state_events` - Rooms are ordered by number of state events. Largest to smallest.
* `dir` - Direction of room order. Either `f` for forwards or `b` for backwards. Setting
  this value to `b` will reverse the above sort order. Defaults to `f`.
* `search_term` - Filter rooms by their room name, canonical alias and room id.
  Specifically, rooms are selected if the search term is contained in

  - the room's name,
  - the local part of the room's canonical alias, or
  - the complete (local and server part) room's id (case sensitive).

  Defaults to no filtering.

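For example, to list rooms whose name, canonical alias or ID contains "synapse" (host and token are placeholders):

```
curl --header "Authorization: Bearer <access_token>" \
    "http://localhost:8008/_synapse/admin/v1/rooms?search_term=synapse"
```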
**Response**
|
||||
|
||||
@@ -84,7 +87,7 @@ GET /_synapse/admin/v1/rooms
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"rooms": [
|
||||
{
|
||||
@@ -167,7 +170,7 @@ GET /_synapse/admin/v1/rooms?order_by=size
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"rooms": [
|
||||
{
|
||||
@@ -205,7 +208,7 @@ A response body like the following is returned:
|
||||
}
|
||||
],
|
||||
"offset": 0,
|
||||
"total_rooms": 150,
|
||||
"total_rooms": 150
|
||||
"next_token": 100
|
||||
}
|
||||
```
|
||||
@@ -221,7 +224,7 @@ GET /_synapse/admin/v1/rooms?order_by=size&from=100
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"rooms": [
|
||||
{
|
||||
@@ -375,86 +378,9 @@ A response body like the following is returned:
|
||||
}
|
||||
```
|
||||
|
||||
# Block Room API
|
||||
The Block Room admin API allows server admins to block and unblock rooms,
|
||||
and query to see if a given room is blocked.
|
||||
This API can be used to pre-emptively block a room, even if it's unknown to this
|
||||
homeserver. Users will be prevented from joining a blocked room.
|
||||
|
||||
## Block or unblock a room
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
PUT /_synapse/admin/v1/rooms/<room_id>/block
|
||||
```
|
||||
|
||||
with a body of:
|
||||
|
||||
```json
|
||||
{
|
||||
"block": true
|
||||
}
|
||||
```
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"block": true
|
||||
}
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
|
||||
- `room_id` - The ID of the room.
|
||||
|
||||
The following JSON body parameters are available:
|
||||
|
||||
- `block` - If `true` the room will be blocked and if `false` the room will be unblocked.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are possible in the JSON response body:
|
||||
|
||||
- `block` - A boolean. `true` if the room is blocked, otherwise `false`
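
Putting it together, a blocking request might look like this (room ID, host and token are placeholders):

```
curl -X PUT --header "Authorization: Bearer <access_token>" \
    --data '{"block": true}' \
    "http://localhost:8008/_synapse/admin/v1/rooms/%21badroom%3Aexample.com/block"
```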
|
||||
|
||||
## Get block status
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v1/rooms/<room_id>/block
|
||||
```
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"block": true,
|
||||
"user_id": "<user_id>"
|
||||
}
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
|
||||
- `room_id` - The ID of the room.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are possible in the JSON response body:
|
||||
|
||||
- `block` - A boolean. `true` if the room is blocked, otherwise `false`
- `user_id` - An optional string. If the room is blocked (`block` is `true`) shows
  the user who added the room to the blocking list. Otherwise it is not displayed.

# Delete Room API
|
||||
|
||||
The Delete Room admin API allows server admins to remove rooms from the server
and block these rooms.

Shuts down a room. Moves all local users and room aliases automatically to a

@@ -465,30 +391,18 @@ The new room will be created with the user specified by the `new_room_user_id` p
as room administrator and will contain a message explaining what happened. Users invited
to the new room will have power level `-10` by default, and thus be unable to speak.

If `block` is `true`, users will be prevented from joining the old room.
This option can in [Version 1](#version-1-old-version) also be used to pre-emptively
block a room, even if it's unknown to this homeserver. In this case, the room will be
blocked, and no further action will be taken. If `block` is `false`, attempting to
delete an unknown room is invalid and will be rejected as a bad request.

This API will remove all trace of the old room from your database after removing
all local users, provided `purge` is `true` (the default). If you do not want
this to happen, set `purge` to `false`.
Depending on the amount of history being purged, a call to the API may take
several minutes or longer.

The local server will only have the power to move local user and room aliases to
|
||||
the new room. Users on other servers will be unaffected.
|
||||
|
||||
## Version 1 (old version)
|
||||
|
||||
This version works synchronously. That means you only get the response once the server has
|
||||
finished the action, which may take a long time. If you request the same action
|
||||
a second time, and the server has not finished the first one, the second request will block.
|
||||
This is fixed in version 2 of this API. The parameters are the same in both APIs.
|
||||
This API will become deprecated in the future.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
@@ -507,6 +421,9 @@ with a body of:
|
||||
}
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an ``access_token`` for a
|
||||
server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -523,44 +440,6 @@ A response body like the following is returned:
|
||||
}
|
||||
```
|
||||
|
||||
The parameters and response values have the same format as
|
||||
[version 2](#version-2-new-version) of the API.
|
||||
|
||||
## Version 2 (new version)
|
||||
|
||||
**Note**: This API is new, experimental and "subject to change".
|
||||
|
||||
This version works asynchronously, meaning you get the response from server immediately
|
||||
while the server works on that task in background. You can then request the status of the action
|
||||
to check if it has completed.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
DELETE /_synapse/admin/v2/rooms/<room_id>
|
||||
```
|
||||
|
||||
with a body of:
|
||||
|
||||
```json
|
||||
{
|
||||
"new_room_user_id": "@someuser:example.com",
|
||||
"room_name": "Content Violation Notification",
|
||||
"message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
|
||||
"block": true,
|
||||
"purge": true
|
||||
}
|
||||
```
|
||||
|
||||
The API starts the shut down and purge running, and returns immediately with a JSON body with
|
||||
a purge id:
|
||||
|
||||
```json
|
||||
{
|
||||
"delete_id": "<opaque id>"
|
||||
}
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
@@ -580,10 +459,8 @@ The following JSON body parameters are available:
|
||||
`new_room_user_id` in the new room. Ideally this will clearly convey why the
|
||||
original room was shut down. Defaults to `Sharing illegal content on this server
|
||||
is not permitted and rooms in violation will be blocked.`
|
||||
* `block` - Optional. If set to `true`, this room will be added to a blocking list,
|
||||
preventing future attempts to join the room. Rooms can be blocked
|
||||
even if they're not yet known to the homeserver (only with
|
||||
[Version 1](#version-1-old-version) of the API). Defaults to `false`.
|
||||
* `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing
|
||||
future attempts to join the room. Defaults to `false`.
|
||||
* `purge` - Optional. If set to `true`, it will remove all traces of the room from your database.
|
||||
Defaults to `true`.
|
||||
* `force_purge` - Optional, and ignored unless `purge` is `true`. If set to `true`, it
|
||||
@@ -593,124 +470,16 @@ The following JSON body parameters are available:
|
||||
|
||||
The JSON body must not be empty. The body must be at least `{}`.
|
||||
|
||||
## Status of deleting rooms
|
||||
|
||||
**Note**: This API is new, experimental and "subject to change".
|
||||
|
||||
It is possible to query the status of the background task for deleting rooms.
|
||||
The status can be queried up to 24 hours after completion of the task,
|
||||
or until Synapse is restarted (whichever happens first).
|
||||
|
||||
### Query by `room_id`
|
||||
|
||||
With this API you can get the status of all active deletion tasks, and all those completed in the last 24h,
|
||||
for the given `room_id`.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v2/rooms/<room_id>/delete_status
|
||||
```
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"results": [
|
||||
{
|
||||
"delete_id": "delete_id1",
|
||||
"status": "failed",
|
||||
"error": "error message",
|
||||
"shutdown_room": {
|
||||
"kicked_users": [],
|
||||
"failed_to_kick_users": [],
|
||||
"local_aliases": [],
|
||||
"new_room_id": null
|
||||
}
|
||||
}, {
|
||||
"delete_id": "delete_id2",
|
||||
"status": "purging",
|
||||
"shutdown_room": {
|
||||
"kicked_users": [
|
||||
"@foobar:example.com"
|
||||
],
|
||||
"failed_to_kick_users": [],
|
||||
"local_aliases": [
|
||||
"#badroom:example.com",
|
||||
"#evilsaloon:example.com"
|
||||
],
|
||||
"new_room_id": "!newroomid:example.com"
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
|
||||
* `room_id` - The ID of the room.
|
||||
|
||||
### Query by `delete_id`
|
||||
|
||||
With this API you can get the status of one specific task by `delete_id`.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v2/rooms/delete_status/<delete_id>
|
||||
```
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"status": "purging",
|
||||
"shutdown_room": {
|
||||
"kicked_users": [
|
||||
"@foobar:example.com"
|
||||
],
|
||||
"failed_to_kick_users": [],
|
||||
"local_aliases": [
|
||||
"#badroom:example.com",
|
||||
"#evilsaloon:example.com"
|
||||
],
|
||||
"new_room_id": "!newroomid:example.com"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
|
||||
* `delete_id` - The ID for this delete.
|
||||
|
||||
### Response
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
- `results` - An array of objects, each containing information about one task.
|
||||
This field is omitted from the result when you query by `delete_id`.
|
||||
Task objects contain the following fields:
|
||||
- `delete_id` - The ID for this purge if you query by `room_id`.
|
||||
- `status` - The status will be one of:
|
||||
- `shutting_down` - The process is removing users from the room.
|
||||
- `purging` - The process is purging the room and event data from database.
|
||||
- `complete` - The process has completed successfully.
|
||||
- `failed` - The process is aborted, an error has occurred.
|
||||
- `error` - A string that shows an error message if `status` is `failed`.
|
||||
Otherwise this field is hidden.
|
||||
- `shutdown_room` - An object containing information about the result of shutting down the room.
|
||||
*Note:* The result is shown after removing the room members.
|
||||
The delete process can still be running. Please pay attention to the `status`.
|
||||
  - `kicked_users` - An array of users (`user_id`) that were kicked.
  - `failed_to_kick_users` - An array of users (`user_id`) that were not kicked.
  - `local_aliases` - An array of strings representing the local aliases that were
    migrated from the old room to the new.
  - `new_room_id` - A string representing the room ID of the new room, or `null` if
    no such room was created.

|
||||
|
||||
## Undoing room deletions
|
||||
|
||||
@@ -751,6 +520,16 @@ With all that being said, if you still want to try and recover the room:
|
||||
4. If `new_room_user_id` was given, a 'Content Violation' will have been
   created. Consider whether you want to delete that room.
|
||||
|
||||
## Deprecated endpoint
|
||||
|
||||
The previous deprecated API will be removed in a future release, it was:
|
||||
|
||||
```
|
||||
POST /_synapse/admin/v1/rooms/<room_id>/delete
|
||||
```
|
||||
|
||||
It behaves in the same way as the current endpoint, except for the path and the method.
|
||||
|
||||
# Make Room Admin API
|
||||
|
||||
Grants another user the highest power available to a local user who is in the room.
|
||||
|
||||
@@ -3,15 +3,15 @@
|
||||
Returns information about all local media usage of users. Gives the
|
||||
possibility to filter them by time and user.
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v1/statistics/users/media
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
|
||||
@@ -1,8 +1,5 @@
|
||||
# User Admin API
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token`
|
||||
for a server admin: see [Admin API](../usage/administration/admin_api).
|
||||
|
||||
## Query User Account
|
||||
|
||||
This API returns information about a specific user account.
|
||||
@@ -13,12 +10,14 @@ The api is:
|
||||
GET /_synapse/admin/v2/users/<user_id>
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
```jsonc
|
||||
```json
|
||||
{
|
||||
"name": "@user:example.com",
|
||||
"displayname": "User", // can be null if not set
|
||||
"displayname": "User",
|
||||
"threepids": [
|
||||
{
|
||||
"medium": "email",
|
||||
@@ -33,11 +32,11 @@ It returns a JSON body like the following:
|
||||
"validated_at": 1586458409743
|
||||
}
|
||||
],
|
||||
"avatar_url": "<avatar_url>", // can be null if not set
|
||||
"is_guest": 0,
|
||||
"avatar_url": "<avatar_url>",
|
||||
"admin": 0,
|
||||
"deactivated": 0,
|
||||
"shadow_banned": 0,
|
||||
"password_hash": "$2b$12$p9B4GkqYdRTPGD",
|
||||
"creation_ts": 1560432506,
|
||||
"appservice_id": null,
|
||||
"consent_server_notice_sent": null,
|
||||
@@ -104,6 +103,9 @@ with a body of:
|
||||
}
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
Returns HTTP status code:
|
||||
- `201` - When a new user object was created.
|
||||
- `200` - When a user was modified.
|
||||
@@ -126,8 +128,7 @@ Body parameters:
|
||||
[Sample Configuration File](../usage/configuration/homeserver_sample_config.html)
|
||||
section `sso` and `oidc_providers`.
|
||||
- `auth_provider` - string. ID of the external identity provider. Value of `idp_id`
  in the homeserver configuration. Note that no error is raised if the provided
  value is not in the homeserver configuration.
|
||||
- `external_id` - string, user ID in the external identity provider.
|
||||
- `avatar_url` - string, optional, must be a
|
||||
[MXC URI](https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris).
|
||||
@@ -154,6 +155,9 @@ By default, the response is ordered by ascending user ID.
|
||||
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -273,6 +277,9 @@ GET /_matrix/client/r0/admin/whois/<userId>
|
||||
See also: [Client Server
|
||||
API Whois](https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid).
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
It returns a JSON body like the following:
|
||||
|
||||
```json
|
||||
@@ -327,12 +334,15 @@ with a body of:
|
||||
}
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
The erase parameter is optional and defaults to `false`.
|
||||
An empty body may be passed for backwards compatibility.
|
||||
|
||||
The following actions are performed when deactivating an user:
|
||||
|
||||
- Try to unbind 3PIDs from the identity server
- Remove all 3PIDs from the homeserver
|
||||
- Delete all devices and E2EE keys
|
||||
- Delete all access tokens
|
||||
@@ -342,11 +352,6 @@ The following actions are performed when deactivating an user:
|
||||
- Remove the user from the user directory
|
||||
- Reject all pending invites
|
||||
- Remove all account validity information related to the user
|
||||
- Remove the arbitrary data store known as *account data*. For example, this includes:
|
||||
- list of ignored users;
|
||||
- push rules;
|
||||
- secret storage keys; and
|
||||
- cross-signing keys.
|
||||
|
||||
The following additional actions are performed during deactivation if `erase`
|
||||
is set to `true`:
|
||||
@@ -360,6 +365,7 @@ The following actions are **NOT** performed. The list may be incomplete.
|
||||
- Remove mappings of SSO IDs
|
||||
- [Delete media uploaded](#delete-media-uploaded-by-a-user) by user (included avatar images)
|
||||
- Delete sent and received messages
|
||||
- Delete E2E cross-signing keys
|
||||
- Remove the user's creation (registration) timestamp
|
||||
- [Remove rate limit overrides](#override-ratelimiting-for-users)
|
||||
- Remove from monthly active users
|
||||
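
For completeness, here is a minimal sketch of calling the deactivation endpoint from Python. The path `/_synapse/admin/v1/deactivate/<user_id>` is not visible in this truncated hunk and is an assumption, as are the URL and token placeholders.

```python
import requests

BASE_URL = "https://synapse.example.com"  # placeholder homeserver URL
HEADERS = {"Authorization": "Bearer <admin access token>"}

def deactivate_user(user_id: str, erase: bool = False) -> None:
    """Deactivate (and optionally erase) an account; see the erase notes above."""
    resp = requests.post(
        f"{BASE_URL}/_synapse/admin/v1/deactivate/{user_id}",  # assumed path
        json={"erase": erase},
        headers=HEADERS,
    )
    resp.raise_for_status()

deactivate_user("@user:server.com", erase=True)
```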
@@ -383,6 +389,9 @@ with a body of:
|
||||
}
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
The parameter `new_password` is required.
|
||||
The parameter `logout_devices` is optional and defaults to `true`.
|
||||
|
||||
@@ -395,6 +404,9 @@ The api is:
|
||||
GET /_synapse/admin/v1/users/<user_id>/admin
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -422,6 +434,10 @@ with a body of:
|
||||
}
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
|
||||
## List room memberships of a user

Gets a list of all `room_id`s that a specific `user_id` is a member of.
|
||||
@@ -432,6 +448,9 @@ The API is:
|
||||
GET /_synapse/admin/v1/users/<user_id>/joined_rooms
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -461,90 +480,10 @@ The following fields are returned in the JSON response body:
|
||||
- `joined_rooms` - An array of `room_id`.
|
||||
- `total` - Number of rooms.
|
||||
|
||||
## Account Data
Gets information about account data for a specific `user_id`.

The API is:

```
GET /_synapse/admin/v1/users/<user_id>/accountdata
```

A response body like the following is returned:

```json
{
    "account_data": {
        "global": {
            "m.secret_storage.key.LmIGHTg5W": {
                "algorithm": "m.secret_storage.v1.aes-hmac-sha2",
                "iv": "fwjNZatxg==",
                "mac": "eWh9kNnLWZUNOgnc="
            },
            "im.vector.hide_profile": {
                "hide_profile": true
            },
            "org.matrix.preview_urls": {
                "disable": false
            },
            "im.vector.riot.breadcrumb_rooms": {
                "rooms": [
                    "!LxcBDAsDUVAfJDEo:matrix.org",
                    "!MAhRxqasbItjOqxu:matrix.org"
                ]
            },
            "m.accepted_terms": {
                "accepted": [
                    "https://example.org/somewhere/privacy-1.2-en.html",
                    "https://example.org/somewhere/terms-2.0-en.html"
                ]
            },
            "im.vector.setting.breadcrumbs": {
                "recent_rooms": [
                    "!MAhRxqasbItqxuEt:matrix.org",
                    "!ZtSaPCawyWtxiImy:matrix.org"
                ]
            }
        },
        "rooms": {
            "!GUdfZSHUJibpiVqHYd:matrix.org": {
                "m.fully_read": {
                    "event_id": "$156334540fYIhZ:matrix.org"
                }
            },
            "!tOZwOOiqwCYQkLhV:matrix.org": {
                "m.fully_read": {
                    "event_id": "$xjsIyp4_NaVl2yPvIZs_k1Jl8tsC_Sp23wjqXPno"
                }
            }
        }
    }
}
```

**Parameters**

The following parameters should be set in the URL:

- `user_id` - fully qualified: for example, `@user:server.com`.

**Response**

The following fields are returned in the JSON response body:

- `account_data` - A map containing the account data for the user
  - `global` - A map containing the global account data for the user
  - `rooms` - A map containing the account data per room for the user

## User media
|
||||
|
||||
### List media uploaded by a user
|
||||
Gets a list of all local media that a specific `user_id` has created.
|
||||
These are media that the user has uploaded themselves
|
||||
([local media](../media_repository.md#local-media)), as well as
|
||||
[URL preview images](../media_repository.md#url-previews) requested by the user if the
|
||||
[feature is enabled](../development/url_previews.md).
|
||||
|
||||
By default, the response is ordered by descending creation date and ascending media ID.
|
||||
The newest media is on top. You can change the order with parameters
|
||||
`order_by` and `dir`.
|
||||
@@ -555,6 +494,9 @@ The API is:
|
||||
GET /_synapse/admin/v1/users/<user_id>/media
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -641,9 +583,7 @@ The following fields are returned in the JSON response body:
|
||||
Media objects contain the following fields:
|
||||
- `created_ts` - integer - Timestamp when the content was uploaded in ms.
|
||||
- `last_access_ts` - integer - Timestamp when the content was last accessed in ms.
|
||||
- `media_id` - string - The id used to refer to the media. Details about the format
|
||||
are documented under
|
||||
[media repository](../media_repository.md).
|
||||
- `media_id` - string - The id used to refer to the media.
|
||||
- `media_length` - integer - Length of the media in bytes.
|
||||
- `media_type` - string - The MIME-type of the media.
|
||||
- `quarantined_by` - string - The user ID that initiated the quarantine request
|
||||
@@ -671,6 +611,9 @@ The API is:
|
||||
DELETE /_synapse/admin/v1/users/<user_id>/media
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -743,6 +686,9 @@ The API is:
|
||||
GET /_synapse/admin/v2/users/<user_id>/devices
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -808,6 +754,9 @@ POST /_synapse/admin/v2/users/<user_id>/delete_devices
|
||||
}
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
An empty JSON dict is returned.
|
||||
|
||||
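
As an illustrative sketch combining this endpoint with the device list API above, the following Python deletes every device a user has. The `devices`/`device_id` response fields and the `devices` request key are assumptions, since the example bodies are elided in this diff.

```python
import requests

BASE_URL = "https://synapse.example.com"  # placeholder homeserver URL
HEADERS = {"Authorization": "Bearer <admin access token>"}

def delete_all_devices(user_id: str) -> None:
    """List a user's devices, then remove them all in one delete_devices call."""
    listing = requests.get(
        f"{BASE_URL}/_synapse/admin/v2/users/{user_id}/devices", headers=HEADERS
    )
    listing.raise_for_status()
    device_ids = [d["device_id"] for d in listing.json().get("devices", [])]
    if device_ids:
        resp = requests.post(
            f"{BASE_URL}/_synapse/admin/v2/users/{user_id}/delete_devices",
            json={"devices": device_ids},  # assumed request body shape
            headers=HEADERS,
        )
        resp.raise_for_status()
```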
**Parameters**
|
||||
@@ -829,6 +778,9 @@ The API is:
|
||||
GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -874,6 +826,9 @@ PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
|
||||
}
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
An empty JSON dict is returned.
|
||||
|
||||
**Parameters**
|
||||
@@ -900,6 +855,9 @@ DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
|
||||
{}
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
An empty JSON dict is returned.
|
||||
|
||||
**Parameters**
|
||||
@@ -918,6 +876,9 @@ The API is:
|
||||
GET /_synapse/admin/v1/users/<user_id>/pushers
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -987,7 +948,7 @@ The following fields are returned in the JSON response body:
|
||||
See also the
|
||||
[Client-Server API Spec on pushers](https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers).
|
||||
|
||||
## Controlling whether a user is shadow-banned
|
||||
## Shadow-banning users
|
||||
|
||||
Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
A shadow-banned user receives successful responses to their client-server API requests,
|
||||
@@ -1000,19 +961,16 @@ or broken behaviour for the client. A shadow-banned user will not receive any
|
||||
notification and it is generally more appropriate to ban or kick abusive users.
|
||||
A shadow-banned user will be unable to contact anyone on the server.
|
||||
|
||||
To shadow-ban a user the API is:
|
||||
The API is:
|
||||
|
||||
```
|
||||
POST /_synapse/admin/v1/users/<user_id>/shadow_ban
|
||||
```
|
||||
|
||||
To un-shadow-ban a user the API is:
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
```
|
||||
DELETE /_synapse/admin/v1/users/<user_id>/shadow_ban
|
||||
```
|
||||
|
||||
An empty JSON dict is returned in both cases.
|
||||
An empty JSON dict is returned.
|
||||
|
||||
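
A minimal sketch of driving both calls from Python follows; the URL and token are placeholders and not part of the original docs.

```python
import requests

BASE_URL = "https://synapse.example.com"  # placeholder homeserver URL
HEADERS = {"Authorization": "Bearer <admin access token>"}

def set_shadow_ban(user_id: str, banned: bool) -> None:
    """POST shadow-bans the user; DELETE lifts the shadow-ban."""
    method = requests.post if banned else requests.delete
    resp = method(
        f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/shadow_ban",
        headers=HEADERS,
    )
    resp.raise_for_status()  # an empty JSON dict is returned in both cases

set_shadow_ban("@abuser:server.com", banned=True)
```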
**Parameters**
|
||||
|
||||
@@ -1034,6 +992,9 @@ The API is:
|
||||
GET /_synapse/admin/v1/users/<user_id>/override_ratelimit
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -1073,6 +1034,9 @@ The API is:
|
||||
POST /_synapse/admin/v1/users/<user_id>/override_ratelimit
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
@@ -1115,6 +1079,9 @@ The API is:
|
||||
DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit
|
||||
```
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
An empty JSON dict is returned.
|
||||
|
||||
```json
|
||||
@@ -1140,8 +1107,10 @@ This endpoint will work even if registration is disabled on the server, unlike
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v1/username_available?username=$localpart
|
||||
POST /_synapse/admin/v1/username_availabile?username=$localpart
|
||||
```
|
||||
|
||||
The request and response format is the same as the
|
||||
[/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
|
||||
The request and response format is the same as the [/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
|
||||
|
||||
To use it, you will need to authenticate by providing an `access_token` for a
|
||||
server admin: [Admin API](../usage/administration/admin_api)
|
||||
|
||||
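
As a sketch, assuming the response mirrors `/register/available` (a 200 with `{"available": true}` on success, and an error response when the name is taken or invalid), availability can be checked from Python like this; the URL and token are placeholders.

```python
import requests

BASE_URL = "https://synapse.example.com"  # placeholder homeserver URL
HEADERS = {"Authorization": "Bearer <admin access token>"}

def username_available(localpart: str) -> bool:
    """Return True if the localpart can still be registered on this homeserver."""
    resp = requests.get(
        f"{BASE_URL}/_synapse/admin/v1/username_available",
        params={"username": localpart},
        headers=HEADERS,
    )
    if resp.status_code == 200:
        return resp.json().get("available", False)
    return False  # taken or invalid names are reported as an error response
```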
@@ -16,6 +16,6 @@ It returns a JSON body like the following:
|
||||
```json
|
||||
{
|
||||
"server_version": "0.99.2rc1 (b=develop, abcdef123)",
|
||||
"python_version": "3.7.8"
|
||||
"python_version": "3.6.8"
|
||||
}
|
||||
```
|
||||
|
||||
@@ -7,7 +7,7 @@
|
||||
|
||||
## Server to Server Stack
|
||||
|
||||
To use the server to server stack, homeservers should only need to
|
||||
To use the server to server stack, home servers should only need to
|
||||
interact with the Messaging layer.
|
||||
|
||||
The server to server side of things is designed into 4 distinct layers:
|
||||
@@ -23,7 +23,7 @@ Server with a domain specific API.
|
||||
|
||||
1. **Messaging Layer**
|
||||
|
||||
This is what the rest of the homeserver hits to send messages, join rooms,
|
||||
This is what the rest of the Home Server hits to send messages, join rooms,
|
||||
etc. It also allows you to register callbacks for when it gets notified by
|
||||
lower levels that e.g. a new message has been received.
|
||||
|
||||
@@ -45,7 +45,7 @@ Server with a domain specific API.
|
||||
|
||||
For incoming PDUs, it has to check the PDUs it references to see
|
||||
if we have missed any. If we have go and ask someone (another
|
||||
homeserver) for it.
|
||||
home server) for it.
|
||||
|
||||
3. **Transaction Layer**
|
||||
|
||||
|
||||
@@ -10,9 +10,7 @@ The necessary tools are detailed below.
|
||||
|
||||
First install them with:
|
||||
|
||||
```sh
|
||||
pip install -e ".[lint,mypy]"
|
||||
```
|
||||
pip install -e ".[lint,mypy]"
|
||||
|
||||
- **black**
|
||||
|
||||
@@ -23,9 +21,7 @@ pip install -e ".[lint,mypy]"
|
||||
Have `black` auto-format your code (it shouldn't change any
|
||||
functionality) with:
|
||||
|
||||
```sh
|
||||
black . --exclude="\.tox|build|env"
|
||||
```
|
||||
black . --exclude="\.tox|build|env"
|
||||
|
||||
- **flake8**
|
||||
|
||||
@@ -34,9 +30,7 @@ pip install -e ".[lint,mypy]"
|
||||
|
||||
Check all application and test code with:
|
||||
|
||||
```sh
|
||||
flake8 synapse tests
|
||||
```
|
||||
flake8 synapse tests
|
||||
|
||||
- **isort**
|
||||
|
||||
@@ -45,9 +39,7 @@ pip install -e ".[lint,mypy]"
|
||||
|
||||
Auto-fix imports with:
|
||||
|
||||
```sh
|
||||
isort -rc synapse tests
|
||||
```
|
||||
isort -rc synapse tests
|
||||
|
||||
`-rc` means to recursively search the given directories.
|
||||
|
||||
@@ -74,19 +66,15 @@ save as it takes a while and is very resource intensive.
|
||||
|
||||
Example:
|
||||
|
||||
```python
|
||||
from synapse.types import UserID
|
||||
...
|
||||
user_id = UserID(local, server)
|
||||
```
|
||||
from synapse.types import UserID
|
||||
...
|
||||
user_id = UserID(local, server)
|
||||
|
||||
is preferred over:
|
||||
|
||||
```python
|
||||
from synapse import types
|
||||
...
|
||||
user_id = types.UserID(local, server)
|
||||
```
|
||||
from synapse import types
|
||||
...
|
||||
user_id = types.UserID(local, server)
|
||||
|
||||
(or any other variant).
|
||||
|
||||
@@ -146,32 +134,30 @@ Some guidelines follow:
|
||||
|
||||
Example:
|
||||
|
||||
```yaml
|
||||
## Frobnication ##
|
||||
## Frobnication ##
|
||||
|
||||
# The frobnicator will ensure that all requests are fully frobnicated.
|
||||
# To enable it, uncomment the following.
|
||||
#
|
||||
#frobnicator_enabled: true
|
||||
# The frobnicator will ensure that all requests are fully frobnicated.
|
||||
# To enable it, uncomment the following.
|
||||
#
|
||||
#frobnicator_enabled: true
|
||||
|
||||
# By default, the frobnicator will frobnicate with the default frobber.
|
||||
# The following will make it use an alternative frobber.
|
||||
#
|
||||
#frobincator_frobber: special_frobber
|
||||
# By default, the frobnicator will frobnicate with the default frobber.
|
||||
# The following will make it use an alternative frobber.
|
||||
#
|
||||
#frobincator_frobber: special_frobber
|
||||
|
||||
# Settings for the frobber
|
||||
#
|
||||
frobber:
|
||||
# frobbing speed. Defaults to 1.
|
||||
#
|
||||
#speed: 10
|
||||
# Settings for the frobber
|
||||
#
|
||||
frobber:
|
||||
# frobbing speed. Defaults to 1.
|
||||
#
|
||||
#speed: 10
|
||||
|
||||
# frobbing distance. Defaults to 1000.
|
||||
#
|
||||
#distance: 100
|
||||
```
|
||||
# frobbing distance. Defaults to 1000.
|
||||
#
|
||||
#distance: 100
|
||||
|
||||
Note that the sample configuration is generated from the synapse code
|
||||
and is maintained by a script, `scripts-dev/generate_sample_config.sh`.
|
||||
and is maintained by a script, `scripts-dev/generate_sample_config`.
|
||||
Making sure that the output from this script matches the desired format
|
||||
is left as an exercise for the reader!
|
||||
|
||||
@@ -99,7 +99,7 @@ construct URIs where users can give their consent.
|
||||
see if an unauthenticated user is viewing the page. This is typically
|
||||
wrapped around the form that would be used to actually agree to the document:
|
||||
|
||||
```html
|
||||
```
|
||||
{% if not public_version %}
|
||||
<!-- The variables used here are only provided when the 'u' param is given to the homeserver -->
|
||||
<form method="post" action="consent">
|
||||
|
||||
@@ -1,8 +1,4 @@
|
||||
# Delegation of incoming federation traffic
|
||||
|
||||
In the following documentation, we use the term `server_name` to refer to that setting
|
||||
in your homeserver configuration file. It appears at the ends of user ids, and tells
|
||||
other homeservers where they can find your server.
|
||||
# Delegation
|
||||
|
||||
By default, other homeservers will expect to be able to reach yours via
|
||||
your `server_name`, on port 8448. For example, if you set your `server_name`
|
||||
@@ -16,21 +12,13 @@ to a different server and/or port (e.g. `synapse.example.com:443`).
|
||||
|
||||
## .well-known delegation
|
||||
|
||||
To use this method, you need to be able to configure the server at
|
||||
`https://<server_name>` to serve a file at
|
||||
`https://<server_name>/.well-known/matrix/server`. There are two ways to do this, shown below.
|
||||
To use this method, you need to be able to alter the
|
||||
`server_name` 's https server to serve the `/.well-known/matrix/server`
|
||||
URL. Having an active server (with a valid TLS certificate) serving your
|
||||
`server_name` domain is out of the scope of this documentation.
|
||||
|
||||
Note that the `.well-known` file is hosted on the default port for `https` (port 443).
|
||||
|
||||
### External server
|
||||
|
||||
For maximum flexibility, you need to configure an external server such as nginx, Apache
|
||||
or HAProxy to serve the `https://<server_name>/.well-known/matrix/server` file. Setting
|
||||
up such a server is out of the scope of this documentation, but note that it is often
|
||||
possible to configure your [reverse proxy](reverse_proxy.md) for this.
|
||||
|
||||
The URL `https://<server_name>/.well-known/matrix/server` should be configured to
return a JSON structure containing the key `m.server` like this:
|
||||
The URL `https://<server_name>/.well-known/matrix/server` should
|
||||
return a JSON structure containing the key `m.server` like so:
|
||||
|
||||
```json
|
||||
{
|
||||
@@ -38,9 +26,8 @@ return a JSON structure containing the key `m.server` like this:
|
||||
}
|
||||
```
|
||||
|
||||
In our example (where we want federation traffic to be routed to
|
||||
`https://synapse.example.com`, on port 443), this would mean that
|
||||
`https://example.com/.well-known/matrix/server` should return:
|
||||
In our example, this would mean that URL `https://example.com/.well-known/matrix/server`
|
||||
should return:
|
||||
|
||||
```json
|
||||
{
|
||||
@@ -51,29 +38,16 @@ In our example (where we want federation traffic to be routed to
|
||||
Note, specifying a port is optional. If no port is specified, then it defaults
|
||||
to 8448.
|
||||
|
||||
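
To illustrate what federating servers do with this file, here is a small, self-contained sketch of the lookup. It is not Synapse's actual resolution code and ignores caching and error-handling details.

```python
import json
import urllib.request
from typing import Optional

def lookup_wellknown(server_name: str) -> Optional[str]:
    """Fetch https://<server_name>/.well-known/matrix/server and return `m.server`."""
    url = f"https://{server_name}/.well-known/matrix/server"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = json.load(resp)
    except Exception:
        return None  # no delegation: federation falls back to <server_name>:8448
    return body.get("m.server")

print(lookup_wellknown("example.com"))  # e.g. "synapse.example.com:443"
```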
### Serving a `.well-known/matrix/server` file with Synapse
|
||||
|
||||
If you are able to set up your domain so that `https://<server_name>` is routed to
|
||||
Synapse (i.e., the only change needed is to direct federation traffic to port 443
|
||||
instead of port 8448), then it is possible to configure Synapse to serve a suitable
|
||||
`.well-known/matrix/server` file. To do so, add the following to your `homeserver.yaml`
|
||||
file:
|
||||
|
||||
```yaml
|
||||
serve_server_wellknown: true
|
||||
```
|
||||
|
||||
**Note**: this *only* works if `https://<server_name>` is routed to Synapse, so is
|
||||
generally not suitable if Synapse is hosted at a subdomain such as
|
||||
`https://synapse.example.com`.
|
||||
With .well-known delegation, federating servers will check for a valid TLS
|
||||
certificate for the delegated hostname (in our example: `synapse.example.com`).
|
||||
|
||||
## SRV DNS record delegation
|
||||
|
||||
It is also possible to do delegation using a SRV DNS record. However, that is generally
|
||||
not recommended, as it can be difficult to configure the TLS certificates correctly in
|
||||
this case, and it offers little advantage over `.well-known` delegation.
|
||||
It is also possible to do delegation using a SRV DNS record. However, that is
|
||||
considered an advanced topic since it's a bit complex to set up, and `.well-known`
|
||||
delegation is already enough in most cases.
|
||||
|
||||
However, if you really need it, you can find some documentation on what such a
|
||||
However, if you really need it, you can find some documentation on how such a
|
||||
record should look like and how Synapse will use it in [the Matrix
|
||||
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names).
|
||||
|
||||
@@ -94,9 +68,27 @@ wouldn't need any delegation set up.
|
||||
domain `server_name` points to, you will need to let other servers know how to
|
||||
find it using delegation.
|
||||
|
||||
### Should I use a reverse proxy for federation traffic?
|
||||
### Do you still recommend against using a reverse proxy on the federation port?
|
||||
|
||||
Generally, using a reverse proxy for both the federation and client traffic is a good
|
||||
idea, since it saves handling TLS traffic in Synapse. See
|
||||
[the reverse proxy documentation](reverse_proxy.md) for information on setting up a
|
||||
We no longer actively recommend against using a reverse proxy. Many admins will
|
||||
find it easier to direct federation traffic to a reverse proxy and manage their
|
||||
own TLS certificates, and this is a supported configuration.
|
||||
|
||||
See [the reverse proxy documentation](reverse_proxy.md) for information on setting up a
|
||||
reverse proxy.
|
||||
|
||||
### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
|
||||
|
||||
This is no longer necessary. If you are using a reverse proxy for all of your
|
||||
TLS traffic, then you can set `no_tls: True` in the Synapse config.
|
||||
|
||||
In that case, the only reason Synapse needs the certificate is to populate a legacy
|
||||
`tls_fingerprints` field in the federation API. This is ignored by Synapse 0.99.0
|
||||
and later, and the only time pre-0.99 Synapses will check it is when attempting to
|
||||
fetch the server keys - and generally this is delegated via `matrix.org`, which
|
||||
is running a modern version of Synapse.
|
||||
|
||||
### Do I need the same certificate for the client and federation port?
|
||||
|
||||
No. There is nothing stopping you from using different certificates,
|
||||
particularly if you are using a reverse proxy.
|
||||
@@ -14,8 +14,8 @@ i.e. when a version reaches End of Life Synapse will withdraw support for that
|
||||
version in future releases.
|
||||
|
||||
Details on the upstream support life cycles for Python and PostgreSQL are
|
||||
documented at [https://endoflife.date/python](https://endoflife.date/python) and
|
||||
[https://endoflife.date/postgresql](https://endoflife.date/postgresql).
|
||||
documented at https://endoflife.date/python and
|
||||
https://endoflife.date/postgresql.
|
||||
|
||||
|
||||
Context
|
||||
|
||||
@@ -8,23 +8,23 @@ easy to run CAS implementation built on top of Django.
|
||||
1. Create a new virtualenv: `python3 -m venv <your virtualenv>`
|
||||
2. Activate your virtualenv: `source /path/to/your/virtualenv/bin/activate`
|
||||
3. Install Django and django-mama-cas:
|
||||
```sh
|
||||
```
|
||||
python -m pip install "django<3" "django-mama-cas==2.4.0"
|
||||
```
|
||||
4. Create a Django project in the current directory:
|
||||
```sh
|
||||
```
|
||||
django-admin startproject cas_test .
|
||||
```
|
||||
5. Follow the [install directions](https://django-mama-cas.readthedocs.io/en/latest/installation.html#configuring) for django-mama-cas
|
||||
6. Setup the SQLite database: `python manage.py migrate`
|
||||
7. Create a user:
|
||||
```sh
|
||||
```
|
||||
python manage.py createsuperuser
|
||||
```
|
||||
1. Use whatever you want as the username and password.
|
||||
2. Leave the other fields blank.
|
||||
8. Use the built-in Django test server to serve the CAS endpoints on port 8000:
|
||||
```sh
|
||||
```
|
||||
python manage.py runserver
|
||||
```
|
||||
|
||||
|
||||
@@ -15,14 +15,7 @@ license - in our case, this is almost always Apache Software License v2 (see
|
||||
|
||||
# 2. What do I need?
|
||||
|
||||
If you are running Windows, the Windows Subsystem for Linux (WSL) is strongly
|
||||
recommended for development. More information about WSL can be found at
|
||||
<https://docs.microsoft.com/en-us/windows/wsl/install>. Running Synapse natively
|
||||
on Windows is not officially supported.
|
||||
|
||||
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://www.python.org/downloads/). Your Python also needs support for [virtual environments](https://docs.python.org/3/library/venv.html). This is usually built-in, but some Linux distributions like Debian and Ubuntu split it out into its own package. Running `sudo apt install python3-venv` should be enough.
|
||||
|
||||
Synapse can connect to PostgreSQL via the [psycopg2](https://pypi.org/project/psycopg2/) Python library. Building this library from source requires access to PostgreSQL's C header files. On Debian or Ubuntu Linux, these can be installed with `sudo apt install libpq-dev`.
|
||||
The code of Synapse is written in Python 3. To do pretty much anything, you'll need [a recent version of Python 3](https://wiki.python.org/moin/BeginnersGuide/Download).
|
||||
|
||||
The source code of Synapse is hosted on GitHub. You will also need [a recent version of git](https://github.com/git-guides/install-git).
|
||||
|
||||
@@ -48,6 +41,8 @@ can find many good git tutorials on the web.
|
||||
|
||||
# 4. Install the dependencies
|
||||
|
||||
## Under Unix (macOS, Linux, BSD, ...)
|
||||
|
||||
Once you have installed Python 3 and added the source, please open a terminal and
|
||||
setup a *virtualenv*, as follows:
|
||||
|
||||
@@ -55,13 +50,16 @@ setup a *virtualenv*, as follows:
|
||||
cd path/where/you/have/cloned/the/repository
|
||||
python3 -m venv ./env
|
||||
source ./env/bin/activate
|
||||
pip install wheel
|
||||
pip install -e ".[all,dev]"
|
||||
pip install tox
|
||||
```
|
||||
|
||||
This will install the developer dependencies for the project.
|
||||
|
||||
## Under Windows
|
||||
|
||||
TBD
|
||||
|
||||
|
||||
# 5. Get in touch.
|
||||
|
||||
@@ -117,7 +115,7 @@ The linters look at your code and do two things:
|
||||
- ensure that your code follows the coding style adopted by the project;
|
||||
- catch a number of errors in your code.
|
||||
|
||||
The linter takes no time at all to run as soon as you've [downloaded the dependencies into your python virtual environment](#4-install-the-dependencies).
|
||||
They're pretty fast, don't hesitate!
|
||||
|
||||
```sh
|
||||
source ./env/bin/activate
|
||||
@@ -172,27 +170,6 @@ To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`:
|
||||
SYNAPSE_TEST_LOG_LEVEL=DEBUG trial tests
|
||||
```
|
||||
|
||||
By default, tests will use an in-memory SQLite database for test data. For additional
|
||||
help with debugging, one can use an on-disk SQLite database file instead, in order to
|
||||
review database state during and after running tests. This can be done by setting
|
||||
the `SYNAPSE_TEST_PERSIST_SQLITE_DB` environment variable. Doing so will cause the
|
||||
database state to be stored in a file named `test.db` under the trial process'
|
||||
working directory. Typically, this ends up being `_trial_temp/test.db`. For example:
|
||||
|
||||
```sh
|
||||
SYNAPSE_TEST_PERSIST_SQLITE_DB=1 trial tests
|
||||
```
|
||||
|
||||
The database file can then be inspected with:
|
||||
|
||||
```sh
|
||||
sqlite3 _trial_temp/test.db
|
||||
```
|
||||
|
||||
Note that the database file is cleared at the beginning of each test run. Thus it
|
||||
will always only contain the data generated by the *last run test*. Though generally
|
||||
when debugging, one is only running a single test anyway.
|
||||
|
||||
### Running tests under PostgreSQL
|
||||
|
||||
Invoking `trial` as above will use an in-memory SQLite database. This is great for
|
||||
@@ -458,17 +435,6 @@ Git allows you to add this signoff automatically when using the `-s`
|
||||
flag to `git commit`, which uses the name and email set in your
|
||||
`user.name` and `user.email` git configs.
|
||||
|
||||
### Private Sign off
|
||||
|
||||
If you would like to provide your legal name privately to the Matrix.org
|
||||
Foundation (instead of in a public commit or comment), you can do so
|
||||
by emailing your legal name and a link to the pull request to
|
||||
[dco@matrix.org](mailto:dco@matrix.org?subject=Private%20sign%20off).
|
||||
It helps to include "sign off" or similar in the subject line. You will then
|
||||
be instructed further.
|
||||
|
||||
Once private sign off is complete, doing so for future contributions will not
|
||||
be required.
|
||||
|
||||
# 10. Turn feedback into better code.
|
||||
|
||||
|
||||
@@ -89,67 +89,11 @@ To do so, use `scripts-dev/make_full_schema.sh`. This will produce new
|
||||
|
||||
Ensure postgres is installed, then run:
|
||||
|
||||
```sh
|
||||
./scripts-dev/make_full_schema.sh -p postgres_username -o output_dir/
|
||||
```
|
||||
./scripts-dev/make_full_schema.sh -p postgres_username -o output_dir/
|
||||
|
||||
NB at the time of writing, this script predates the split into separate `state`/`main`
|
||||
databases so will require updates to handle that correctly.
|
||||
|
||||
## Delta files
|
||||
|
||||
Delta files define the steps required to upgrade the database from an earlier version.
|
||||
They can be written as either a file containing a series of SQL statements, or a Python
|
||||
module.
|
||||
|
||||
Synapse remembers which delta files it has applied to a database (they are stored in the
|
||||
`applied_schema_deltas` table) and will not re-apply them (even if a given file is
|
||||
subsequently updated).
|
||||
|
||||
Delta files should be placed in a directory named `synapse/storage/schema/<database>/delta/<version>/`.
|
||||
They are applied in alphanumeric order, so by convention the first two characters
|
||||
of the filename should be an integer such as `01`, to put the file in the right order.
|
||||
|
||||
### SQL delta files
|
||||
|
||||
These should be named `*.sql`, or — for changes which should only be applied for a
|
||||
given database engine — `*.sql.postgres` or `*.sql.sqlite`. For example, a delta which
|
||||
adds a new column to the `foo` table might be called `01add_bar_to_foo.sql`.
|
||||
|
||||
Note that our SQL parser is a bit simple - it understands comments (`--` and `/*...*/`),
|
||||
but complex statements which require a `;` in the middle of them (such as `CREATE
|
||||
TRIGGER`) are beyond it and you'll have to use a Python delta file.
|
||||
|
||||
### Python delta files
|
||||
|
||||
For more flexibility, a delta file can take the form of a python module. These should
|
||||
be named `*.py`. Note that database-engine-specific modules are not supported here –
|
||||
instead you can write `if isinstance(database_engine, PostgresEngine)` or similar.
|
||||
|
||||
A Python delta module should define either or both of the following functions:
|
||||
|
||||
```python
|
||||
import synapse.config.homeserver
|
||||
import synapse.storage.engines
|
||||
import synapse.storage.types
|
||||
|
||||
|
||||
def run_create(
|
||||
cur: synapse.storage.types.Cursor,
|
||||
database_engine: synapse.storage.engines.BaseDatabaseEngine,
|
||||
) -> None:
|
||||
"""Called whenever an existing or new database is to be upgraded"""
|
||||
...
|
||||
|
||||
def run_upgrade(
|
||||
cur: synapse.storage.types.Cursor,
|
||||
database_engine: synapse.storage.engines.BaseDatabaseEngine,
|
||||
config: synapse.config.homeserver.HomeServerConfig,
|
||||
) -> None:
|
||||
"""Called whenever an existing database is to be upgraded."""
|
||||
...
|
||||
```
|
||||
|
||||
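
As an illustration, a hypothetical delta module (for example `01add_bar_to_foo.py`, mirroring the SQL example above) that adds a boolean column and branches on the database engine might look like the following sketch; the `foo` table and `bar` column are made up.

```python
import synapse.storage.engines
import synapse.storage.types
from synapse.storage.engines import PostgresEngine


def run_create(
    cur: synapse.storage.types.Cursor,
    database_engine: synapse.storage.engines.BaseDatabaseEngine,
) -> None:
    """Hypothetical delta: add a boolean `bar` column to the made-up `foo` table."""
    if isinstance(database_engine, PostgresEngine):
        # Postgres understands boolean literals directly...
        cur.execute("ALTER TABLE foo ADD COLUMN bar BOOLEAN NOT NULL DEFAULT FALSE")
    else:
        # ...whereas SQLite stores booleans as integers (see the next section).
        cur.execute("ALTER TABLE foo ADD COLUMN bar BOOLEAN NOT NULL DEFAULT 0")
```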
## Boolean columns
|
||||
|
||||
Boolean columns require special treatment, since SQLite treats booleans the
|
||||
@@ -158,9 +102,9 @@ same as integers.
|
||||
There are three separate aspects to this:
|
||||
|
||||
* Any new boolean column must be added to the `BOOLEAN_COLUMNS` list in
|
||||
`synapse/_scripts/synapse_port_db.py`. This tells the port script to cast
|
||||
the integer value from SQLite to a boolean before writing the value to the
|
||||
postgres database.
|
||||
`scripts/synapse_port_db`. This tells the port script to cast the integer
|
||||
value from SQLite to a boolean before writing the value to the postgres
|
||||
database.
|
||||
|
||||
* Before SQLite 3.23, `TRUE` and `FALSE` were not recognised as constants by
|
||||
SQLite, and the `IS [NOT] TRUE`/`IS [NOT] FALSE` operators were not
|
||||
|
||||
@@ -1,41 +0,0 @@
|
||||
# Synapse demo setup
|
||||
|
||||
**DO NOT USE THESE DEMO SERVERS IN PRODUCTION**
|
||||
|
||||
Requires you to have a [Synapse development environment setup](https://matrix-org.github.io/synapse/develop/development/contributing_guide.html#4-install-the-dependencies).
|
||||
|
||||
The demo setup allows running three federation Synapse servers, with server
|
||||
names `localhost:8080`, `localhost:8081`, and `localhost:8082`.
|
||||
|
||||
You can access them via any Matrix client over HTTP at `localhost:8080`,
|
||||
`localhost:8081`, and `localhost:8082` or over HTTPS at `localhost:8480`,
|
||||
`localhost:8481`, and `localhost:8482`.
|
||||
|
||||
To enable the servers to communicate, self-signed SSL certificates are generated
|
||||
and the servers are configured in a highly insecure way, including:
|
||||
|
||||
* Not checking certificates over federation.
|
||||
* Not verifying keys.
|
||||
|
||||
The servers are configured to store their data under `demo/8080`, `demo/8081`, and
|
||||
`demo/8082`. This includes configuration, logs, SQLite databases, and media.
|
||||
|
||||
Note that when joining a public room on a different HS via "#foo:bar.net", then
|
||||
you are (in the current impl) joining a room with room_id "foo". This means that
|
||||
it won't work if your HS already has a room with that name.
|
||||
|
||||
## Using the demo scripts
|
||||
|
||||
There are three main scripts with straightforward purposes:
|
||||
|
||||
* `start.sh` will start the Synapse servers, generating any missing configuration.
|
||||
* This accepts a single parameter `--no-rate-limit` to "disable" rate limits
|
||||
(they actually still exist, but are very high).
|
||||
* `stop.sh` will stop the Synapse servers.
|
||||
* `clean.sh` will delete the configuration, databases, log files, etc.
|
||||
|
||||
To start a completely new set of servers, run:
|
||||
|
||||
```sh
|
||||
./demo/stop.sh; ./demo/clean.sh && ./demo/start.sh
|
||||
```
|
||||
@@ -1,37 +0,0 @@
|
||||
# Synapse Release Cycle
|
||||
|
||||
Releases of Synapse follow a two week release cycle with new releases usually
|
||||
occurring on Tuesdays:
|
||||
|
||||
* Day 0: Synapse `N - 1` is released.
|
||||
* Day 7: Synapse `N` release candidate 1 is released.
|
||||
* Days 7 - 13: Synapse `N` release candidates 2+ are released, if bugs are found.
|
||||
* Day 14: Synapse `N` is released.
|
||||
|
||||
Note that this schedule might be modified depending on the availability of the
|
||||
Synapse team, e.g. releases may be skipped to avoid holidays.
|
||||
|
||||
Release announcements can be found in the
|
||||
[release category of the Matrix blog](https://matrix.org/blog/category/releases).
|
||||
|
||||
## Bugfix releases
|
||||
|
||||
If a bug is found after release that is deemed severe enough (by a combination
|
||||
of the impacted users and the impact on those users) then a bugfix release may
|
||||
be issued. This may be at any point in the release cycle.
|
||||
|
||||
## Security releases
|
||||
|
||||
Security fixes will sometimes be backported to the previous version and released
immediately before the next release candidate. An example of this might be:
|
||||
|
||||
* Day 0: Synapse N - 1 is released.
|
||||
* Day 7: Synapse (N - 1).1 is released as Synapse N - 1 + the security fix.
|
||||
* Day 7: Synapse N release candidate 1 is released (including the security fix).
|
||||
|
||||
Depending on the impact and complexity of security fixes, multiple fixes might
|
||||
be held to be released together.
|
||||
|
||||
In some cases, a pre-disclosure of a security release will be issued as a notice
|
||||
to Synapse operators that there is an upcoming security release. These can be
|
||||
found in the [security category of the Matrix blog](https://matrix.org/blog/category/security).
|
||||
@@ -30,72 +30,39 @@ rather than skipping any that arrived late; whereas if you're looking at a
|
||||
historical section of timeline (i.e. `/messages`), you want to see the best
|
||||
representation of the state of the room as others were seeing it at the time.
|
||||
|
||||
## Outliers
|
||||
|
||||
We mark an event as an `outlier` when we haven't figured out the state for the
|
||||
room at that point in the DAG yet. They are "floating" events that we haven't
|
||||
yet correlated to the DAG.
|
||||
|
||||
Outliers typically arise when we fetch the auth chain or state for a given
|
||||
event. When that happens, we just grab the events in the state/auth chain,
|
||||
without calculating the state at those events, or backfilling their
|
||||
`prev_events`.
|
||||
|
||||
So, typically, we won't have the `prev_events` of an `outlier` in the database,
|
||||
(though it's entirely possible that we *might* have them for some other
|
||||
reason). Other things that make outliers different from regular events:
|
||||
|
||||
* We don't have state for them, so there should be no entry in
|
||||
`event_to_state_groups` for an outlier. (In practice this isn't always
|
||||
the case, though I'm not sure why: see https://github.com/matrix-org/synapse/issues/12201).
|
||||
|
||||
* We don't record entries for them in the `event_edges`,
  `event_forward_extremities` or `event_backward_extremities` tables.
|
||||
|
||||
Since outliers are not tied into the DAG, they do not normally form part of the
|
||||
timeline sent down to clients via `/sync` or `/messages`; however there is an
|
||||
exception:
|
||||
|
||||
### Out-of-band membership events
|
||||
|
||||
A special case of outlier events are some membership events for federated rooms
|
||||
that we aren't full members of. For example:
|
||||
|
||||
* invites received over federation, before we join the room
|
||||
* *rejections* for said invites
|
||||
* knock events for rooms that we would like to join but have not yet joined.
|
||||
|
||||
In all the above cases, we don't have the state for the room, which is why they
|
||||
are treated as outliers. They are a bit special though, in that they are
|
||||
proactively sent to clients via `/sync`.
|
||||
|
||||
## Forward extremity
|
||||
|
||||
Most-recent-in-time events in the DAG which are not referenced by any other
|
||||
events' `prev_events` yet. (In this definition, outliers, rejected events, and
|
||||
soft-failed events don't count.)
|
||||
Most-recent-in-time events in the DAG which are not referenced by any other events' `prev_events` yet.
|
||||
|
||||
The forward extremities of a room (or at least, a subset of them, if there are
|
||||
more than ten) are used as the `prev_events` when the next event is sent.
|
||||
The forward extremities of a room are used as the `prev_events` when the next event is sent.
|
||||
|
||||
The "current state" of a room (ie: the state which would be used if we
|
||||
generated a new event) is, therefore, the resolution of the room states
|
||||
at each of the forward extremities.
|
||||
|
||||
## Backward extremity
|
||||
## Backwards extremity
|
||||
|
||||
The current marker of where we have backfilled up to and will generally be the
|
||||
`prev_events` of the oldest-in-time events we have in the DAG. This gives a starting point when
|
||||
backfilling history.
|
||||
oldest-in-time events we know of in the DAG.
|
||||
|
||||
Note that, unlike forward extremities, we typically don't have any backward
|
||||
extremity events themselves in the database - or, if we do, they will be "outliers" (see
|
||||
above). Either way, we don't expect to have the room state at a backward extremity.
|
||||
This is an event where we haven't fetched all of the `prev_events` for.
|
||||
|
||||
Once we have fetched all of its `prev_events`, it's unmarked as a backwards
|
||||
extremity (although we may have formed new backwards extremities from the prev
|
||||
events during the backfilling process).
|
||||
|
||||
|
||||
## Outliers
|
||||
|
||||
We mark an event as an `outlier` when we haven't figured out the state for the
|
||||
room at that point in the DAG yet.
|
||||
|
||||
We won't *necessarily* have the `prev_events` of an `outlier` in the database,
|
||||
but it's entirely possible that we *might*. The status of whether we have all of
|
||||
the `prev_events` is marked as a [backwards extremity](#backwards-extremity).
|
||||
|
||||
For example, when we fetch the event auth chain or state for a given event, we
|
||||
mark all of those claimed auth events as outliers because we haven't done the
|
||||
state calculation ourself.
|
||||
|
||||
When we persist a non-outlier event, if it was previously a backward extremity,
|
||||
we clear it as a backward extremity and set all of its `prev_events` as the new
|
||||
backward extremities if they aren't already persisted as non-outliers. This
|
||||
therefore keeps the backward extremities up-to-date.
|
||||
|
||||
## State groups
|
||||
|
||||
|
||||
@@ -15,7 +15,7 @@ To make Synapse (and therefore Element) use it:
|
||||
sp_config:
|
||||
allow_unknown_attributes: true # Works around a bug with AVA Hashes: https://github.com/IdentityPython/pysaml2/issues/388
|
||||
metadata:
|
||||
local: ["samling.xml"]
|
||||
local: ["samling.xml"]
|
||||
```
|
||||
5. Ensure that your `homeserver.yaml` has a setting for `public_baseurl`:
|
||||
```yaml
|
||||
|
||||
@@ -35,12 +35,7 @@ When Synapse is asked to preview a URL it does the following:
|
||||
5. If the media is HTML:
|
||||
1. Decodes the HTML via the stored file.
|
||||
2. Generates an Open Graph response from the HTML.
|
||||
3. If a JSON oEmbed URL was found in the HTML via autodiscovery:
|
||||
1. Downloads the URL and stores it into a file via the media storage provider
|
||||
and saves the local media metadata.
|
||||
2. Convert the oEmbed response to an Open Graph response.
|
||||
3. Override any Open Graph data from the HTML with data from oEmbed.
|
||||
4. If an image exists in the Open Graph response:
|
||||
3. If an image exists in the Open Graph response:
|
||||
1. Downloads the URL and stores it into a file via the media storage
|
||||
provider and saves the local media metadata.
|
||||
2. Generates thumbnails.
|
||||
|
||||
@@ -63,5 +63,4 @@ release of Synapse.
|
||||
|
||||
If you want to get up and running quickly with a trio of homeservers in a
|
||||
private federation, there is a script in the `demo` directory. This is mainly
|
||||
useful just for development purposes. See
|
||||
[demo scripts](https://matrix-org.github.io/synapse/develop/development/demo.html).
|
||||
useful just for development purposes. See [demo/README](https://github.com/matrix-org/synapse/tree/develop/demo/).
|
||||
|
||||
@@ -22,9 +22,8 @@ will be removed in a future version of Synapse.
|
||||
|
||||
The `token` field should include the JSON web token with the following claims (a minimal sketch of generating such a token is shown after this list):
|
||||
|
||||
* A claim that encodes the local part of the user ID is required. By default,
|
||||
the `sub` (subject) claim is used, or a custom claim can be set in the
|
||||
configuration file.
|
||||
* The `sub` (subject) claim is required and should encode the local part of the
|
||||
user ID.
|
||||
* The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
|
||||
claims are optional, but validated if present.
|
||||
* The issuer (`iss`) claim is optional, but required and validated if configured.
|
||||
|
||||
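
Putting the claims together, a rough sketch of logging in with such a token follows. It assumes the homeserver has JWT login enabled with an HS256 shared secret and that the login type is `org.matrix.login.jwt`; the URL and secret are placeholders, so check them against your configuration.

```python
import time

import jwt  # PyJWT
import requests

SECRET = "<shared secret from the homeserver's JWT configuration>"  # placeholder

now = int(time.time())
token = jwt.encode(
    {"sub": "alice", "iat": now, "exp": now + 300},  # localpart plus optional claims
    SECRET,
    algorithm="HS256",
)

resp = requests.post(
    "https://synapse.example.com/_matrix/client/r0/login",  # placeholder URL
    json={"type": "org.matrix.login.jwt", "token": token},
)
print(resp.json())
```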
@@ -94,6 +94,6 @@ As a simple example, retrieving an event from the database:
|
||||
|
||||
```pycon
|
||||
>>> from twisted.internet import defer
|
||||
>>> defer.ensureDeferred(hs.get_datastores().main.get_event('$1416420717069yeQaw:matrix.org'))
|
||||
>>> defer.ensureDeferred(hs.get_datastore().get_event('$1416420717069yeQaw:matrix.org'))
|
||||
<Deferred at 0x7ff253fc6998 current result: <FrozenEvent event_id='$1416420717069yeQaw:matrix.org', type='m.room.create', state_key=''>>
|
||||
```
|
||||
|
||||
@@ -2,80 +2,29 @@
|
||||
|
||||
*Synapse implementation-specific details for the media repository*
|
||||
|
||||
The media repository
|
||||
* stores avatars, attachments and their thumbnails for media uploaded by local
|
||||
users.
|
||||
* caches avatars, attachments and their thumbnails for media uploaded by remote
|
||||
users.
|
||||
* caches resources and thumbnails used for
|
||||
[URL previews](development/url_previews.md).
|
||||
The media repository is where attachments and avatar photos are stored.
|
||||
It stores attachment content and thumbnails for media uploaded by local users.
|
||||
It caches attachment content and thumbnails for media uploaded by remote users.
|
||||
|
||||
All media in Matrix can be identified by a unique
|
||||
[MXC URI](https://spec.matrix.org/latest/client-server-api/#matrix-content-mxc-uris),
|
||||
consisting of a server name and media ID:
|
||||
```
|
||||
mxc://<server-name>/<media-id>
|
||||
```
|
||||
## Storage
|
||||
|
||||
## Local Media
|
||||
Synapse generates 24 character media IDs for content uploaded by local users.
|
||||
These media IDs consist of upper and lowercase letters and are case-sensitive.
|
||||
Other homeserver implementations may generate media IDs differently.
|
||||
Each item of media is assigned a `media_id` when it is uploaded.
|
||||
The `media_id` is a randomly chosen, URL safe 24 character string.
|
||||
|
||||
Local media is recorded in the `local_media_repository` table, which includes
|
||||
metadata such as MIME types, upload times and file sizes.
|
||||
Note that this table is shared by the URL cache, which has a different media ID
|
||||
scheme.
|
||||
Metadata such as the MIME type, upload time and length are stored in the
|
||||
sqlite3 database indexed by `media_id`.
|
||||
|
||||
### Paths
|
||||
A file with media ID `aabbcccccccccccccccccccc` and its `128x96` `image/jpeg`
|
||||
thumbnail, created by scaling, would be stored at:
|
||||
```
|
||||
local_content/aa/bb/cccccccccccccccccccc
|
||||
local_thumbnails/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale
|
||||
```
|
||||
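
The layout above can be summarised with a small helper; this is only a sketch of the documented path scheme, not Synapse's actual storage code.

```python
def local_media_paths(media_id: str, width: int, height: int,
                      content_type: str, method: str = "scale"):
    """Sketch of the on-disk layout described above (not Synapse's actual code)."""
    prefix = f"{media_id[0:2]}/{media_id[2:4]}/{media_id[4:]}"
    content_path = f"local_content/{prefix}"
    thumbnail_path = (
        f"local_thumbnails/{prefix}/"
        f"{width}-{height}-{content_type.replace('/', '-')}-{method}"
    )
    return content_path, thumbnail_path


# local_media_paths("aabbcccccccccccccccccccc", 128, 96, "image/jpeg") returns
# ("local_content/aa/bb/cccccccccccccccccccc",
#  "local_thumbnails/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale")
```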
Content is stored on the filesystem under a `"local_content"` directory.
|
||||
|
||||
## Remote Media
|
||||
When media from a remote homeserver is requested from Synapse, it is assigned
|
||||
a local `filesystem_id`, with the same format as locally-generated media IDs,
|
||||
as described above.
|
||||
Thumbnails are stored under a `"local_thumbnails"` directory.
|
||||
|
||||
A record of remote media is stored in the `remote_media_cache` table, which
|
||||
can be used to map remote MXC URIs (server names and media IDs) to local
|
||||
`filesystem_id`s.
|
||||
The item with `media_id` `"aabbccccccccdddddddddddd"` is stored under
|
||||
`"local_content/aa/bb/ccccccccdddddddddddd"`. Its thumbnail with width
|
||||
`128` and height `96` and type `"image/jpeg"` is stored under
|
||||
`"local_thumbnails/aa/bb/ccccccccdddddddddddd/128-96-image-jpeg"`
|
||||
|
||||
### Paths
|
||||
A file from `matrix.org` with `filesystem_id` `aabbcccccccccccccccccccc` and its
|
||||
`128x96` `image/jpeg` thumbnail, created by scaling, would be stored at:
|
||||
```
|
||||
remote_content/matrix.org/aa/bb/cccccccccccccccccccc
|
||||
remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale
|
||||
```
|
||||
Older thumbnails may omit the thumbnailing method:
|
||||
```
|
||||
remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg
|
||||
```
|
||||
|
||||
Note that `remote_thumbnail/` does not have an `s`.
|
||||
|
||||
## URL Previews
|
||||
See [URL Previews](development/url_previews.md) for documentation on the URL preview
|
||||
process.
|
||||
|
||||
When generating previews for URLs, Synapse may download and cache various
|
||||
resources, including images. These resources are assigned temporary media IDs
|
||||
of the form `yyyy-mm-dd_aaaaaaaaaaaaaaaa`, where `yyyy-mm-dd` is the current
|
||||
date and `aaaaaaaaaaaaaaaa` is a random sequence of 16 case-sensitive letters.
|
||||
|
||||
The metadata for these cached resources is stored in the
|
||||
`local_media_repository` and `local_media_repository_url_cache` tables.
|
||||
|
||||
Resources for URL previews are deleted after a few days.
|
||||
|
||||
### Paths
|
||||
The file with media ID `yyyy-mm-dd_aaaaaaaaaaaaaaaa` and its `128x96`
|
||||
`image/jpeg` thumbnail, created by scaling, would be stored at:
|
||||
```
|
||||
url_cache/yyyy-mm-dd/aaaaaaaaaaaaaaaa
|
||||
url_cache_thumbnails/yyyy-mm-dd/aaaaaaaaaaaaaaaa/128-96-image-jpeg-scale
|
||||
```
|
||||
Remote content is cached under `"remote_content"` directory. Each item of
|
||||
remote content is assigned a local `"filesystem_id"` to ensure that the
|
||||
directory structure `"remote_content/server_name/aa/bb/ccccccccdddddddddddd"`
|
||||
is appropriate. Thumbnails for remote content are stored under
|
||||
`"remote_thumbnail/server_name/..."`
|
||||
|
||||
@@ -69,9 +69,9 @@ A default policy can be defined as such, in the `retention` section of
|
||||
the configuration file:
|
||||
|
||||
```yaml
|
||||
default_policy:
|
||||
min_lifetime: 1d
|
||||
max_lifetime: 1y
|
||||
default_policy:
|
||||
min_lifetime: 1d
|
||||
max_lifetime: 1y
|
||||
```
|
||||
|
||||
Here, `min_lifetime` and `max_lifetime` have the same meaning and level
|
||||
@@ -95,14 +95,14 @@ depending on an event's room's policy. This can be done by setting the
|
||||
file. An example of such configuration could be:
|
||||
|
||||
```yaml
|
||||
purge_jobs:
|
||||
- longest_max_lifetime: 3d
|
||||
interval: 12h
|
||||
- shortest_max_lifetime: 3d
|
||||
longest_max_lifetime: 1w
|
||||
interval: 1d
|
||||
- shortest_max_lifetime: 1w
|
||||
interval: 2d
|
||||
purge_jobs:
|
||||
- longest_max_lifetime: 3d
|
||||
interval: 12h
|
||||
- shortest_max_lifetime: 3d
|
||||
longest_max_lifetime: 1w
|
||||
interval: 1d
|
||||
- shortest_max_lifetime: 1w
|
||||
interval: 2d
|
||||
```
|
||||
|
||||
In this example, we define three jobs:
|
||||
@@ -141,8 +141,8 @@ purging old events in a room. These limits can be defined as such in the
|
||||
`retention` section of the configuration file:
|
||||
|
||||
```yaml
|
||||
allowed_lifetime_min: 1d
|
||||
allowed_lifetime_max: 1y
|
||||
allowed_lifetime_min: 1d
|
||||
allowed_lifetime_max: 1y
|
||||
```
|
||||
|
||||
The limits are considered when running purge jobs. If necessary, the
|
||||
|
||||
@@ -1,71 +0,0 @@
|
||||
# Background update controller callbacks
|
||||
|
||||
Background update controller callbacks allow module developers to control (e.g. rate-limit)
|
||||
how database background updates are run. A database background update is an operation
|
||||
Synapse runs on its database in the background after it starts. It's usually used to run
|
||||
database operations that would take too long if they were run at the same time as schema
|
||||
updates (which are run on startup) and delay Synapse's startup too much: populating a
|
||||
table with a large amount of data, adding an index on a large table, deleting superfluous data,
|
||||
etc.
|
||||
|
||||
Background update controller callbacks can be registered using the module API's
|
||||
`register_background_update_controller_callbacks` method. Only the first module (in order
|
||||
of appearance in Synapse's configuration file) calling this method can register background
|
||||
update controller callbacks, subsequent calls are ignored.
|
||||
|
||||
The available background update controller callbacks are:
|
||||
|
||||
### `on_update`
|
||||
|
||||
_First introduced in Synapse v1.49.0_
|
||||
|
||||
```python
|
||||
def on_update(update_name: str, database_name: str, one_shot: bool) -> AsyncContextManager[int]
|
||||
```
|
||||
|
||||
Called when about to do an iteration of a background update. The module is given the name
|
||||
of the update, the name of the database, and a flag to indicate whether the background
|
||||
update will happen in one go and may take a long time (e.g. creating indices). If this last
|
||||
argument is set to `False`, the update will be run in batches.
|
||||
|
||||
The module must return an async context manager. It will be entered before Synapse runs a
|
||||
background update; this should return the desired duration of the iteration, in
|
||||
milliseconds.
|
||||
|
||||
The context manager will be exited when the iteration completes. Note that the duration
|
||||
returned by the context manager is a target, and an iteration may take substantially longer
|
||||
or shorter. If the `one_shot` flag is set to `True`, the duration returned is ignored.
|
||||
|
||||
__Note__: Unlike most module callbacks in Synapse, this one is _synchronous_. This is
|
||||
because asynchronous operations are expected to be run by the async context manager.
|
||||
|
||||
This callback is required when registering any other background update controller callback.
|
||||
|
||||
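
For example, a hypothetical module that lets each iteration target 100ms of work and then pauses between batches could look like the following sketch (the module constructor signature follows the usual Synapse module convention and is an assumption here):

```python
import asyncio
from contextlib import asynccontextmanager


class ExampleBackgroundUpdateController:
    """Hypothetical module that throttles background updates."""

    def __init__(self, config: dict, api):
        api.register_background_update_controller_callbacks(on_update=self.on_update)

    @asynccontextmanager
    async def on_update(self, update_name: str, database_name: str, one_shot: bool):
        # Entered before each iteration; yield the target duration in milliseconds.
        yield 100
        # Runs when the iteration completes: pause so updates only run for roughly
        # 100ms out of every second (skipped for one-shot updates).
        if not one_shot:
            await asyncio.sleep(0.9)
```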
### `default_batch_size`
|
||||
|
||||
_First introduced in Synapse v1.49.0_
|
||||
|
||||
```python
|
||||
async def default_batch_size(update_name: str, database_name: str) -> int
|
||||
```
|
||||
|
||||
Called before the first iteration of a background update, with the name of the update and
|
||||
of the database. The module must return the number of elements to process in this first
|
||||
iteration.
|
||||
|
||||
If this callback is not defined, Synapse will use a default value of 100.
|
||||
|
||||
### `min_batch_size`
|
||||
|
||||
_First introduced in Synapse v1.49.0_
|
||||
|
||||
```python
|
||||
async def min_batch_size(update_name: str, database_name: str) -> int
|
||||
```
|
||||
|
||||
Called before running a new batch for a background update, with the name of the update and
|
||||
of the database. The module must return an integer representing the minimum number of
|
||||
elements to process in this iteration. This number must be at least 1, and is used to
|
||||
ensure that progress is always made.
|
||||
|
||||
If this callback is not defined, Synapse will use a default value of 100.
|
||||
@@ -10,8 +10,8 @@ registered by using the Module API's `register_password_auth_provider_callbacks`
|
||||
|
||||
_First introduced in Synapse v1.46.0_
|
||||
|
||||
```python
|
||||
auth_checkers: Dict[Tuple[str, Tuple[str, ...]], Callable]
|
||||
```
|
||||
auth_checkers: Dict[Tuple[str,Tuple], Callable]
|
||||
```
|
||||
|
||||
A dict mapping from tuples of a login type identifier (such as `m.login.password`) and a
|
||||
@@ -85,7 +85,7 @@ If the authentication is unsuccessful, the module must return `None`.
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `None`, Synapse falls through to the next one. The value of the first
|
||||
callback that does not return `None` will be used. If this happens, Synapse will not call
|
||||
any of the subsequent implementations of this callback. If every callback returns `None`,
|
||||
any of the subsequent implementations of this callback. If every callback return `None`,
|
||||
the authentication is denied.
|
||||
|
||||
### `on_logged_out`
|
||||
@@ -105,115 +105,6 @@ device ID), and the (now deactivated) access token.
|
||||
|
||||
If multiple modules implement this callback, Synapse runs them all in order.
|
||||
|
||||
### `get_username_for_registration`
|
||||
|
||||
_First introduced in Synapse v1.52.0_
|
||||
|
||||
```python
|
||||
async def get_username_for_registration(
|
||||
uia_results: Dict[str, Any],
|
||||
params: Dict[str, Any],
|
||||
) -> Optional[str]
|
||||
```
|
||||
|
||||
Called when registering a new user. The module can return a username to set for the user
|
||||
being registered by returning it as a string, or `None` if it doesn't wish to force a
|
||||
username for this user. If a username is returned, it will be used as the local part of a
|
||||
user's full Matrix ID (e.g. it's `alice` in `@alice:example.com`).
|
||||
|
||||
This callback is called once [User-Interactive Authentication](https://spec.matrix.org/latest/client-server-api/#user-interactive-authentication-api)
|
||||
has been completed by the user. It is not called when registering a user via SSO. It is
|
||||
passed two dictionaries, which include the information that the user has provided during
|
||||
the registration process.
|
||||
|
||||
The first dictionary contains the results of the [User-Interactive Authentication](https://spec.matrix.org/latest/client-server-api/#user-interactive-authentication-api)
|
||||
flow followed by the user. Its keys are the identifiers of every step involved in the flow,
|
||||
associated with either a boolean value indicating whether the step was correctly completed,
|
||||
or additional information (e.g. email address, phone number...). A list of most existing
|
||||
identifiers can be found in the [Matrix specification](https://spec.matrix.org/v1.1/client-server-api/#authentication-types).
|
||||
Here's an example featuring all currently supported keys:
|
||||
|
||||
```python
|
||||
{
|
||||
"m.login.dummy": True, # Dummy authentication
|
||||
"m.login.terms": True, # User has accepted the terms of service for the homeserver
|
||||
"m.login.recaptcha": True, # User has completed the recaptcha challenge
|
||||
"m.login.email.identity": { # User has provided and verified an email address
|
||||
"medium": "email",
|
||||
"address": "alice@example.com",
|
||||
"validated_at": 1642701357084,
|
||||
},
|
||||
"m.login.msisdn": { # User has provided and verified a phone number
|
||||
"medium": "msisdn",
|
||||
"address": "33123456789",
|
||||
"validated_at": 1642701357084,
|
||||
},
|
||||
"m.login.registration_token": "sometoken", # User has registered through a registration token
|
||||
}
|
||||
```
|
||||
|
||||
The second dictionary contains the parameters provided by the user's client in the request
|
||||
to `/_matrix/client/v3/register`. See the [Matrix specification](https://spec.matrix.org/latest/client-server-api/#post_matrixclientv3register)
|
||||
for a complete list of these parameters.
|
||||
|
||||
If the module cannot, or does not wish to, generate a username for this user, it must
|
||||
return `None`.
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `None`, Synapse falls through to the next one. The value of the first
|
||||
callback that does not return `None` will be used. If this happens, Synapse will not call
|
||||
any of the subsequent implementations of this callback. If every callback returns `None`,
|
||||
the username provided by the user is used, if any (otherwise one is automatically
|
||||
generated).
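As a hedged illustration, a module could derive the localpart from the verified email address found under the `m.login.email.identity` key shown above; the class name is made up, and registration uses the `register_password_auth_provider_callbacks` method named at the top of this page:

```python
from typing import Any, Dict, Optional


class UsernameFromEmail:
    def __init__(self, config: dict, api):
        api.register_password_auth_provider_callbacks(
            get_username_for_registration=self.get_username_for_registration,
        )

    async def get_username_for_registration(
        self,
        uia_results: Dict[str, Any],
        params: Dict[str, Any],
    ) -> Optional[str]:
        threepid = uia_results.get("m.login.email.identity")
        if not isinstance(threepid, dict):
            # No verified email: defer to the next module, or to the user.
            return None
        # e.g. "alice@example.com" -> "alice"
        return threepid["address"].split("@", 1)[0].lower()
```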
|
||||
|
||||
### `get_displayname_for_registration`
|
||||
|
||||
_First introduced in Synapse v1.54.0_
|
||||
|
||||
```python
|
||||
async def get_displayname_for_registration(
|
||||
uia_results: Dict[str, Any],
|
||||
params: Dict[str, Any],
|
||||
) -> Optional[str]
|
||||
```
|
||||
|
||||
Called when registering a new user. The module can return a display name to set for the
|
||||
user being registered by returning it as a string, or `None` if it doesn't wish to force a
|
||||
display name for this user.
|
||||
|
||||
This callback is called once [User-Interactive Authentication](https://spec.matrix.org/latest/client-server-api/#user-interactive-authentication-api)
|
||||
has been completed by the user. It is not called when registering a user via SSO. It is
|
||||
passed two dictionaries, which include the information that the user has provided during
|
||||
the registration process. These dictionaries are identical to the ones passed to
|
||||
[`get_username_for_registration`](#get_username_for_registration), so refer to the
|
||||
documentation of this callback for more information about them.
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `None`, Synapse falls through to the next one. The value of the first
|
||||
callback that does not return `None` will be used. If this happens, Synapse will not call
|
||||
any of the subsequent implementations of this callback. If every callback returns `None`,
|
||||
the username will be used (e.g. `alice` if the user being registered is `@alice:example.com`).
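A similar sketch for the display name, reusing the same UIA results (names are illustrative):

```python
from typing import Any, Dict, Optional


class DisplayNameFromEmail:
    def __init__(self, config: dict, api):
        api.register_password_auth_provider_callbacks(
            get_displayname_for_registration=self.get_displayname_for_registration,
        )

    async def get_displayname_for_registration(
        self,
        uia_results: Dict[str, Any],
        params: Dict[str, Any],
    ) -> Optional[str]:
        threepid = uia_results.get("m.login.email.identity")
        if isinstance(threepid, dict):
            # e.g. "alice@example.com" -> "Alice"
            return threepid["address"].split("@", 1)[0].capitalize()
        return None
```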
|
||||
|
||||
## `is_3pid_allowed`
|
||||
|
||||
_First introduced in Synapse v1.53.0_
|
||||
|
||||
```python
|
||||
async def is_3pid_allowed(self, medium: str, address: str, registration: bool) -> bool
|
||||
```
|
||||
|
||||
Called when attempting to bind a third-party identifier (i.e. an email address or a phone
|
||||
number). The module is given the medium of the third-party identifier (which is `email` if
|
||||
the identifier is an email address, or `msisdn` if the identifier is a phone number) and
|
||||
its address, as well as a boolean indicating whether the attempt to bind is happening as
|
||||
part of registering a new user. The module must return a boolean indicating whether the
|
||||
identifier can be allowed to be bound to an account on the local homeserver.
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `True`, Synapse falls through to the next one. The value of the first
|
||||
callback that does not return `True` will be used. If this happens, Synapse will not call
|
||||
any of the subsequent implementations of this callback.
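A hedged sketch that only allows email addresses from a particular domain to be bound; the domain is a placeholder, and registration is assumed to go through the same `register_password_auth_provider_callbacks` method as the other callbacks on this page:

```python
class CorporateEmailOnly:
    def __init__(self, config: dict, api):
        api.register_password_auth_provider_callbacks(
            is_3pid_allowed=self.is_3pid_allowed,
        )

    async def is_3pid_allowed(
        self, medium: str, address: str, registration: bool
    ) -> bool:
        if medium != "email":
            # Phone numbers are left to other modules (or to the default).
            return True
        # "example.com" is a placeholder for the domain you want to allow.
        return address.lower().endswith("@example.com")
```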
|
||||
|
||||
## Example
|
||||
|
||||
The example module below implements authentication checkers for two different login types:
|
||||
@@ -222,7 +113,8 @@ The example module below implements authentication checkers for two different lo
|
||||
- Is checked by the method: `self.check_my_login`
|
||||
- `m.login.password` (defined in [the spec](https://matrix.org/docs/spec/client_server/latest#password-based))
|
||||
- Expects a `password` field to be sent to `/login`
|
||||
- Is checked by the method: `self.check_pass`
|
||||
- Is checked by the method: `self.check_pass`
|
||||
|
||||
|
||||
```python
|
||||
from typing import Awaitable, Callable, Optional, Tuple
|
||||
|
||||
@@ -16,12 +16,10 @@ _First introduced in Synapse v1.37.0_
|
||||
async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool, str]
|
||||
```
|
||||
|
||||
Called when receiving an event from a client or via federation. The callback must return
|
||||
either:
|
||||
- an error message string, to indicate the event must be rejected because of spam and
|
||||
give a rejection reason to forward to clients;
|
||||
- the boolean `True`, to indicate that the event is spammy, but not provide further details; or
|
||||
- the boolean `False`, to indicate that the event is not considered spammy.
|
||||
Called when receiving an event from a client or via federation. The module can return
|
||||
either a `bool` to indicate whether the event must be rejected because of spam, or a `str`
|
||||
to indicate the event must be rejected because of spam and to give a rejection reason to
|
||||
forward to clients.
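For illustration, a minimal spam checker that rejects messages containing a blocked link might look like the sketch below (assuming the spam checker category's registration method is `register_spam_checker_callbacks`; the blocked URL is a placeholder):

```python
from typing import Union


class ExampleWordFilter:
    def __init__(self, config: dict, api):
        api.register_spam_checker_callbacks(
            check_event_for_spam=self.check_event_for_spam,
        )

    async def check_event_for_spam(self, event) -> Union[bool, str]:
        body = event.content.get("body", "")
        if "https://spam.example.com" in body:
            # A string both rejects the event and gives clients a reason.
            return "Links to spam.example.com are not allowed"
        # False means "not spammy"; Synapse moves on to the next module, if any.
        return False
```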
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `False`, Synapse falls through to the next one. The value of the first
|
||||
@@ -37,10 +35,7 @@ async def user_may_join_room(user: str, room: str, is_invited: bool) -> bool
|
||||
```
|
||||
|
||||
Called when a user is trying to join a room. The module must return a `bool` to indicate
|
||||
whether the user can join the room. Return `False` to prevent the user from joining the
|
||||
room; otherwise return `True` to permit the joining.
|
||||
|
||||
The user is represented by their Matrix user ID (e.g.
|
||||
whether the user can join the room. The user is represented by their Matrix user ID (e.g.
|
||||
`@alice:example.com`) and the room is represented by its Matrix ID (e.g.
|
||||
`!room:example.com`). The module is also given a boolean to indicate whether the user
|
||||
currently has a pending invite in the room.
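A small sketch restricting one specific room to invited users only (the room ID is a placeholder; registration is assumed to use `register_spam_checker_callbacks`):

```python
class InviteOnlyRoom:
    def __init__(self, config: dict, api):
        api.register_spam_checker_callbacks(
            user_may_join_room=self.user_may_join_room,
        )

    async def user_may_join_room(
        self, user: str, room: str, is_invited: bool
    ) -> bool:
        # "!private:example.com" is a placeholder room ID.
        if room == "!private:example.com":
            return is_invited
        # True defers to the next module (or allows, if this is the last one).
        return True
```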
|
||||
@@ -63,8 +58,7 @@ async def user_may_invite(inviter: str, invitee: str, room_id: str) -> bool
|
||||
|
||||
Called when processing an invitation. The module must return a `bool` indicating whether
|
||||
the inviter can invite the invitee to the given room. Both inviter and invitee are
|
||||
represented by their Matrix user ID (e.g. `@alice:example.com`). Return `False` to prevent
|
||||
the invitation; otherwise return `True` to permit it.
|
||||
represented by their Matrix user ID (e.g. `@alice:example.com`).
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `True`, Synapse falls through to the next one. The value of the first
|
||||
@@ -86,8 +80,7 @@ async def user_may_send_3pid_invite(
|
||||
|
||||
Called when processing an invitation using a third-party identifier (also called a 3PID,
|
||||
e.g. an email address or a phone number). The module must return a `bool` indicating
|
||||
whether the inviter can invite the invitee to the given room. Return `False` to prevent
|
||||
the invitation; otherwise return `True` to permit it.
|
||||
whether the inviter can invite the invitee to the given room.
|
||||
|
||||
The inviter is represented by their Matrix user ID (e.g. `@alice:example.com`), and the
|
||||
invitee is represented by its medium (e.g. "email") and its address
|
||||
@@ -124,7 +117,42 @@ async def user_may_create_room(user: str) -> bool
|
||||
|
||||
Called when processing a room creation request. The module must return a `bool` indicating
|
||||
whether the given user (represented by their Matrix user ID) is allowed to create a room.
|
||||
Return `False` to prevent room creation; otherwise return `True` to permit it.
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `True`, Synapse falls through to the next one. The value of the first
|
||||
callback that does not return `True` will be used. If this happens, Synapse will not call
|
||||
any of the subsequent implementations of this callback.
|
||||
|
||||
### `user_may_create_room_with_invites`
|
||||
|
||||
_First introduced in Synapse v1.44.0_
|
||||
|
||||
```python
|
||||
async def user_may_create_room_with_invites(
|
||||
user: str,
|
||||
invites: List[str],
|
||||
threepid_invites: List[Dict[str, str]],
|
||||
) -> bool
|
||||
```
|
||||
|
||||
Called when processing a room creation request (right after `user_may_create_room`).
|
||||
The module is given the Matrix user ID of the user trying to create a room, as well as a
|
||||
list of Matrix users to invite and a list of third-party identifiers (3PID, e.g. email
|
||||
addresses) to invite.
|
||||
|
||||
A Matrix user to invite is represented by their Matrix user ID, and an invited
|
||||
3PID is represented by a dict that includes the 3PID medium (e.g. "email") through its
|
||||
`medium` key and its address (e.g. "alice@example.com") through its `address` key.
|
||||
|
||||
See [the Matrix specification](https://matrix.org/docs/spec/appendices#pid-types) for more
|
||||
information regarding third-party identifiers.
|
||||
|
||||
If no invite and/or 3PID invite were specified in the room creation request, the
|
||||
corresponding list(s) will be empty.
|
||||
|
||||
**Note**: This callback is not called when a room is cloned (e.g. during a room upgrade)
|
||||
since no invites are sent when cloning a room. To cover this case, modules also need to
|
||||
implement `user_may_create_room`.
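As an illustrative sketch, a module could cap the total number of invites allowed in a room creation request (the limit of 20 is arbitrary, and registration is again assumed to use `register_spam_checker_callbacks`):

```python
from typing import Dict, List


class LimitRoomCreationInvites:
    def __init__(self, config: dict, api):
        api.register_spam_checker_callbacks(
            user_may_create_room_with_invites=self.user_may_create_room_with_invites,
        )

    async def user_may_create_room_with_invites(
        self,
        user: str,
        invites: List[str],
        threepid_invites: List[Dict[str, str]],
    ) -> bool:
        # Reject creation requests that try to invite too many people at once.
        return len(invites) + len(threepid_invites) <= 20
```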
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `True`, Synapse falls through to the next one. The value of the first
|
||||
@@ -141,8 +169,7 @@ async def user_may_create_room_alias(user: str, room_alias: "synapse.types.RoomA
|
||||
|
||||
Called when trying to associate an alias with an existing room. The module must return a
|
||||
`bool` indicating whether the given user (represented by their Matrix user ID) is allowed
|
||||
to set the given alias. Return `False` to prevent the alias creation; otherwise return
|
||||
`True` to permit it.
|
||||
to set the given alias.
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `True`, Synapse falls through to the next one. The value of the first
|
||||
@@ -159,8 +186,7 @@ async def user_may_publish_room(user: str, room_id: str) -> bool
|
||||
|
||||
Called when trying to publish a room to the homeserver's public rooms directory. The
|
||||
module must return a `bool` indicating whether the given user (represented by their
|
||||
Matrix user ID) is allowed to publish the given room. Return `False` to prevent the
|
||||
room from being published; otherwise return `True` to permit its publication.
|
||||
Matrix user ID) is allowed to publish the given room.
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `True`, Synapse falls through to the next one. The value of the first
|
||||
@@ -176,11 +202,8 @@ async def check_username_for_spam(user_profile: Dict[str, str]) -> bool
|
||||
```
|
||||
|
||||
Called when computing search results in the user directory. The module must return a
|
||||
`bool` indicating whether the given user should be excluded from user directory
|
||||
searches. Return `True` to indicate that the user is spammy and exclude them from
|
||||
search results; otherwise return `False`.
|
||||
|
||||
The profile is represented as a dictionary with the following keys:
|
||||
`bool` indicating whether the given user profile can appear in search results. The profile
|
||||
is represented as a dictionary with the following keys:
|
||||
|
||||
* `user_id`: The Matrix ID for this user.
|
||||
* `display_name`: The user's display name.
|
||||
@@ -238,9 +261,8 @@ async def check_media_file_for_spam(
|
||||
) -> bool
|
||||
```
|
||||
|
||||
Called when storing a local or remote file. The module must return a `bool` indicating
|
||||
whether the given file should be excluded from the homeserver's media store. Return
|
||||
`True` to prevent this file from being stored; otherwise return `False`.
|
||||
Called when storing a local or remote file. The module must return a boolean indicating
|
||||
whether the given file can be stored in the homeserver's media store.
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `False`, Synapse falls through to the next one. The value of the first
|
||||
|
||||
@@ -43,14 +43,6 @@ event with new data by returning the new event's data as a dictionary. In order
|
||||
that, it is recommended the module calls `event.get_dict()` to get the current event as a
|
||||
dictionary, and modify the returned dictionary accordingly.
|
||||
|
||||
If `check_event_allowed` raises an exception, the module is assumed to have failed.
|
||||
The event will not be accepted but is not treated as explicitly rejected, either.
|
||||
An HTTP request causing the module check will likely result in a 500 Internal
|
||||
Server Error.
|
||||
|
||||
When the boolean returned by the module is `False`, the event is rejected.
|
||||
(Module developers should not use exceptions for rejection.)
|
||||
|
||||
Note that replacing the event only works for events sent by local users, not for events
|
||||
received over federation.
|
||||
|
||||
@@ -127,126 +119,6 @@ callback returns `True`, Synapse falls through to the next one. The value of the
|
||||
callback that does not return `True` will be used. If this happens, Synapse will not call
|
||||
any of the subsequent implementations of this callback.
|
||||
|
||||
### `on_new_event`
|
||||
|
||||
_First introduced in Synapse v1.47.0_
|
||||
|
||||
```python
|
||||
async def on_new_event(
|
||||
event: "synapse.events.EventBase",
|
||||
state_events: "synapse.types.StateMap",
|
||||
) -> None:
|
||||
```
|
||||
|
||||
Called after sending an event into a room. The module is passed the event, as well
|
||||
as the state of the room _after_ the event. This means that if the event is a state event,
|
||||
it will be included in this state.
|
||||
|
||||
Note that this callback is called when the event has already been processed and stored
|
||||
into the room, which means this callback cannot be used to deny persisting the event. To
|
||||
deny an incoming event, see [`check_event_for_spam`](spam_checker_callbacks.md#check_event_for_spam) instead.
|
||||
|
||||
If multiple modules implement this callback, Synapse runs them all in order.
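A minimal, purely observational sketch (assuming the third-party rules category's registration method is `register_third_party_rules_callbacks`):

```python
import logging

logger = logging.getLogger(__name__)


class ExampleEventObserver:
    def __init__(self, config: dict, api):
        api.register_third_party_rules_callbacks(
            on_new_event=self.on_new_event,
        )

    async def on_new_event(self, event, state_events) -> None:
        # Purely observational: the event has already been persisted here.
        if event.type == "m.room.message":
            logger.info("%s sent a message in %s", event.sender, event.room_id)
```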
|
||||
|
||||
### `check_can_shutdown_room`
|
||||
|
||||
_First introduced in Synapse v1.55.0_
|
||||
|
||||
```python
|
||||
async def check_can_shutdown_room(
|
||||
user_id: str, room_id: str,
|
||||
) -> bool:
|
||||
```
|
||||
|
||||
Called when an admin user requests the shutdown of a room. The module must return a
|
||||
boolean indicating whether the shutdown can go through. If the callback returns `False`,
|
||||
the shutdown will not proceed and the caller will see a `M_FORBIDDEN` error.
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `True`, Synapse falls through to the next one. The value of the first
|
||||
callback that does not return `True` will be used. If this happens, Synapse will not call
|
||||
any of the subsequent implementations of this callback.
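A hedged sketch that only lets one designated admin shut rooms down (the user ID is a placeholder):

```python
class RestrictRoomShutdown:
    def __init__(self, config: dict, api):
        api.register_third_party_rules_callbacks(
            check_can_shutdown_room=self.check_can_shutdown_room,
        )

    async def check_can_shutdown_room(self, user_id: str, room_id: str) -> bool:
        # "@root:example.com" is a placeholder for the one admin allowed to do
        # this; every other requester gets an M_FORBIDDEN error.
        return user_id == "@root:example.com"
```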
|
||||
|
||||
### `check_can_deactivate_user`
|
||||
|
||||
_First introduced in Synapse v1.55.0_
|
||||
|
||||
```python
|
||||
async def check_can_deactivate_user(
|
||||
user_id: str, by_admin: bool,
|
||||
) -> bool:
|
||||
```
|
||||
|
||||
Called when the deactivation of a user is requested. User deactivation can be
|
||||
performed by an admin or the user themselves, so developers are encouraged to check the
|
||||
requester when implementing this callback. The module must return a
|
||||
boolean indicating whether the deactivation can go through. If the callback returns `False`,
|
||||
the deactivation will not proceed and the caller will see a `M_FORBIDDEN` error.
|
||||
|
||||
The module is passed two parameters: `user_id`, the ID of the user being deactivated, and `by_admin`, which is `True` if the request was made by a server admin and `False` otherwise.
|
||||
|
||||
If multiple modules implement this callback, they will be considered in order. If a
|
||||
callback returns `True`, Synapse falls through to the next one. The value of the first
|
||||
callback that does not return `True` will be used. If this happens, Synapse will not call
|
||||
any of the subsequent implementations of this callback.
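A small sketch that only permits admin-initiated deactivations (illustrative only):

```python
class AdminOnlyDeactivation:
    def __init__(self, config: dict, api):
        api.register_third_party_rules_callbacks(
            check_can_deactivate_user=self.check_can_deactivate_user,
        )

    async def check_can_deactivate_user(self, user_id: str, by_admin: bool) -> bool:
        # Only allow deactivation when a server admin requested it;
        # self-service deactivation is refused with M_FORBIDDEN.
        return by_admin
```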
|
||||
|
||||
|
||||
### `on_profile_update`
|
||||
|
||||
_First introduced in Synapse v1.54.0_
|
||||
|
||||
```python
|
||||
async def on_profile_update(
|
||||
user_id: str,
|
||||
new_profile: "synapse.module_api.ProfileInfo",
|
||||
by_admin: bool,
|
||||
deactivation: bool,
|
||||
) -> None:
|
||||
```
|
||||
|
||||
Called after updating a local user's profile. The update can be triggered either by the
|
||||
user themselves or a server admin. The update can also be triggered by a user being
|
||||
deactivated (in which case their display name is set to an empty string (`""`) and the
|
||||
avatar URL is set to `None`). The module is passed the Matrix ID of the user whose profile
|
||||
has been updated, their new profile, as well as a `by_admin` boolean that is `True` if the
|
||||
update was triggered by a server admin (and `False` otherwise), and a `deactivation`
|
||||
boolean that is `True` if the update is a result of the user being deactivated.
|
||||
|
||||
Note that the `by_admin` boolean is also `True` if the profile change happens as a result
|
||||
of the user logging in through Single Sign-On, or if a server admin updates their own
|
||||
profile.
|
||||
|
||||
Per-room profile changes do not trigger this callback to be called. Synapse administrators
|
||||
wishing this callback to be called on every profile change are encouraged to disable
|
||||
per-room profiles globally using the `allow_per_room_profiles` configuration setting in
|
||||
Synapse's configuration file.
|
||||
This callback is not called when registering a user, even when setting it through the
|
||||
[`get_displayname_for_registration`](https://matrix-org.github.io/synapse/latest/modules/password_auth_provider_callbacks.html#get_displayname_for_registration)
|
||||
module callback.
|
||||
|
||||
If multiple modules implement this callback, Synapse runs them all in order.
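An illustrative audit-logging sketch, assuming `ProfileInfo` exposes `display_name` and `avatar_url` attributes:

```python
import logging

logger = logging.getLogger(__name__)


class ProfileAuditLog:
    def __init__(self, config: dict, api):
        api.register_third_party_rules_callbacks(
            on_profile_update=self.on_profile_update,
        )

    async def on_profile_update(
        self, user_id: str, new_profile, by_admin: bool, deactivation: bool
    ) -> None:
        logger.info(
            "profile of %s updated (by_admin=%s, deactivation=%s): %s / %s",
            user_id,
            by_admin,
            deactivation,
            new_profile.display_name,
            new_profile.avatar_url,
        )
```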
|
||||
|
||||
### `on_user_deactivation_status_changed`
|
||||
|
||||
_First introduced in Synapse v1.54.0_
|
||||
|
||||
```python
|
||||
async def on_user_deactivation_status_changed(
|
||||
user_id: str, deactivated: bool, by_admin: bool
|
||||
) -> None:
|
||||
```
|
||||
|
||||
Called after deactivating a local user, or reactivating them through the admin API. The
|
||||
deactivation can be triggered either by the user themselves or a server admin. The module
|
||||
is passed the Matrix ID of the user whose status is changed, as well as a `deactivated`
|
||||
boolean that is `True` if the user is being deactivated and `False` if they're being
|
||||
reactivated, and a `by_admin` boolean that is `True` if the deactivation was triggered by
|
||||
a server admin (and `False` otherwise). This latter `by_admin` boolean is always `True`
|
||||
if the user is being reactivated, as this operation can only be performed through the
|
||||
admin API.
|
||||
|
||||
If multiple modules implement this callback, Synapse runs them all in order.
|
||||
|
||||
## Example
|
||||
|
||||
The example below is a module that implements the third-party rules callback
|
||||
|
||||
@@ -71,15 +71,15 @@ Modules **must** register their web resources in their `__init__` method.
|
||||
## Registering a callback
|
||||
|
||||
Modules can use Synapse's module API to register callbacks. Callbacks are functions that
|
||||
Synapse will call when performing specific actions. Callbacks must be asynchronous (unless
|
||||
specified otherwise), and are split in categories. A single module may implement callbacks
|
||||
from multiple categories, and is under no obligation to implement all callbacks from the
|
||||
categories it registers callbacks for.
|
||||
Synapse will call when performing specific actions. Callbacks must be asynchronous, and
|
||||
are split in categories. A single module may implement callbacks from multiple categories,
|
||||
and is under no obligation to implement all callbacks from the categories it registers
|
||||
callbacks for.
|
||||
|
||||
Modules can register callbacks using one of the module API's `register_[...]_callbacks`
|
||||
methods. The callback functions are passed to these methods as keyword arguments, with
|
||||
the callback name as the argument name and the function as its value. A
|
||||
`register_[...]_callbacks` method exists for each category.
|
||||
the callback name as the argument name and the function as its value. This is demonstrated
|
||||
in the example below. A `register_[...]_callbacks` method exists for each category.
|
||||
|
||||
Callbacks for each category can be found on their respective page of the
|
||||
[Synapse documentation website](https://matrix-org.github.io/synapse).
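As a hedged, minimal sketch, a module constructor might register a single spam checker callback like this (the class name and policy are made up; the constructor follows the usual `(config, api)` convention for Synapse modules):

```python
class ExampleModule:
    def __init__(self, config: dict, api):
        self._api = api
        # Each category has its own register_[...]_callbacks method; callbacks are
        # passed as keyword arguments named after the callback being registered.
        api.register_spam_checker_callbacks(
            user_may_invite=self.user_may_invite,
        )

    async def user_may_invite(
        self, inviter: str, invitee: str, room_id: str
    ) -> bool:
        # Allow everything; a real module would apply its own policy here.
        return True
```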
|
||||
101
docs/openid.md
@@ -21,8 +21,6 @@ such as [Github][github-idp].
|
||||
|
||||
[google-idp]: https://developers.google.com/identity/protocols/oauth2/openid-connect
|
||||
[auth0]: https://auth0.com/
|
||||
[authentik]: https://goauthentik.io/
|
||||
[lemonldap]: https://lemonldap-ng.org/
|
||||
[okta]: https://www.okta.com/
|
||||
[dex-idp]: https://github.com/dexidp/dex
|
||||
[keycloak-idp]: https://www.keycloak.org/docs/latest/server_admin/#sso-protocols
|
||||
@@ -83,7 +81,7 @@ oidc_providers:
|
||||
|
||||
### Dex
|
||||
|
||||
[Dex][dex-idp] is a simple, open-source OpenID Connect Provider.
|
||||
[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
|
||||
Although it is designed to help build a full-blown provider with an
|
||||
external database, it can be configured with static passwords in a config file.
|
||||
|
||||
@@ -211,76 +209,6 @@ oidc_providers:
|
||||
display_name_template: "{{ user.name }}"
|
||||
```
|
||||
|
||||
### Authentik
|
||||
|
||||
[Authentik][authentik] is an open-source IdP solution.
|
||||
|
||||
1. Create a provider in Authentik, with type OAuth2/OpenID.
|
||||
2. The parameters are:
|
||||
- Client Type: Confidential
|
||||
- JWT Algorithm: RS256
|
||||
- Scopes: OpenID, Email and Profile
|
||||
- RSA Key: Select any available key
|
||||
- Redirect URIs: `[synapse public baseurl]/_synapse/client/oidc/callback`
|
||||
3. Create an application for synapse in Authentik and link it to the provider.
|
||||
4. Note the slug of your application, Client ID and Client Secret.
|
||||
|
||||
Synapse config:
|
||||
```yaml
|
||||
oidc_providers:
|
||||
- idp_id: authentik
|
||||
idp_name: authentik
|
||||
discover: true
|
||||
issuer: "https://your.authentik.example.org/application/o/your-app-slug/" # TO BE FILLED: domain and slug
|
||||
client_id: "your client id" # TO BE FILLED
|
||||
client_secret: "your client secret" # TO BE FILLED
|
||||
scopes:
|
||||
- "openid"
|
||||
- "profile"
|
||||
- "email"
|
||||
user_mapping_provider:
|
||||
config:
|
||||
localpart_template: "{{ user.preferred_username }}"
|
||||
display_name_template: "{{ user.preferred_username|capitalize }}" # TO BE FILLED: If your users have names in Authentik and you want those in Synapse, this should be replaced with user.name|capitalize.
|
||||
```
|
||||
|
||||
### LemonLDAP
|
||||
|
||||
[LemonLDAP::NG][lemonldap] is an open-source IdP solution.
|
||||
|
||||
1. Create an OpenID Connect Relying Party in LemonLDAP::NG
|
||||
2. The parameters are:
|
||||
- Client ID under the basic menu of the new Relying Parties (`Options > Basic >
|
||||
Client ID`)
|
||||
- Client secret (`Options > Basic > Client secret`)
|
||||
- JWT Algorithm: RS256 within the security menu of the new Relying Parties
|
||||
(`Options > Security > ID Token signature algorithm` and `Options > Security >
|
||||
Access Token signature algorithm`)
|
||||
- Scopes: OpenID, Email and Profile
|
||||
- Allowed redirection addresses for login (`Options > Basic > Allowed
|
||||
redirection addresses for login` ) :
|
||||
`[synapse public baseurl]/_synapse/client/oidc/callback`
|
||||
|
||||
Synapse config:
|
||||
```yaml
|
||||
oidc_providers:
|
||||
- idp_id: lemonldap
|
||||
idp_name: lemonldap
|
||||
discover: true
|
||||
issuer: "https://auth.example.org/" # TO BE FILLED: replace with your domain
|
||||
client_id: "your client id" # TO BE FILLED
|
||||
client_secret: "your client secret" # TO BE FILLED
|
||||
scopes:
|
||||
- "openid"
|
||||
- "profile"
|
||||
- "email"
|
||||
user_mapping_provider:
|
||||
config:
|
||||
localpart_template: "{{ user.preferred_username }}"
|
||||
# TO BE FILLED: If your users have names in LemonLDAP::NG and you want those in Synapse, this should be replaced with user.name|capitalize or any valid filter.
|
||||
display_name_template: "{{ user.preferred_username|capitalize }}"
|
||||
```
|
||||
|
||||
### GitHub
|
||||
|
||||
[GitHub][github-idp] is a bit special as it is not an OpenID Connect compliant provider, but
|
||||
@@ -390,6 +318,9 @@ oidc_providers:
|
||||
|
||||
### Facebook
|
||||
|
||||
Like GitHub, Facebook provides a custom OAuth2 API rather than an OIDC-compliant
|
||||
one, so it requires a little more configuration.
|
||||
|
||||
0. You will need a Facebook developer account. You can register for one
|
||||
[here](https://developers.facebook.com/async/registration/).
|
||||
1. On the [apps](https://developers.facebook.com/apps/) page of the developer
|
||||
@@ -409,28 +340,24 @@ Synapse config:
|
||||
idp_name: Facebook
|
||||
idp_brand: "facebook" # optional: styling hint for clients
|
||||
discover: false
|
||||
issuer: "https://www.facebook.com"
|
||||
issuer: "https://facebook.com"
|
||||
client_id: "your-client-id" # TO BE FILLED
|
||||
client_secret: "your-client-secret" # TO BE FILLED
|
||||
scopes: ["openid", "email"]
|
||||
authorization_endpoint: "https://facebook.com/dialog/oauth"
|
||||
token_endpoint: "https://graph.facebook.com/v9.0/oauth/access_token"
|
||||
jwks_uri: "https://www.facebook.com/.well-known/oauth/openid/jwks/"
|
||||
authorization_endpoint: https://facebook.com/dialog/oauth
|
||||
token_endpoint: https://graph.facebook.com/v9.0/oauth/access_token
|
||||
user_profile_method: "userinfo_endpoint"
|
||||
userinfo_endpoint: "https://graph.facebook.com/v9.0/me?fields=id,name,email,picture"
|
||||
user_mapping_provider:
|
||||
config:
|
||||
subject_claim: "id"
|
||||
display_name_template: "{{ user.name }}"
|
||||
email_template: "{{ '{{ user.email }}' }}"
|
||||
```
|
||||
|
||||
Relevant documents:
|
||||
* [Manually Build a Login Flow](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow)
|
||||
* [Using Facebook's Graph API](https://developers.facebook.com/docs/graph-api/using-graph-api/)
|
||||
* [Reference to the User endpoint](https://developers.facebook.com/docs/graph-api/reference/user)
|
||||
|
||||
Facebook do have an [OIDC discovery endpoint](https://www.facebook.com/.well-known/openid-configuration),
|
||||
but it has a `response_types_supported` which excludes "code" (which we rely on, and
|
||||
is even mentioned in their [documentation](https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow#login)),
|
||||
so we have to disable discovery and configure the URIs manually.
|
||||
* https://developers.facebook.com/docs/facebook-login/manually-build-a-login-flow
|
||||
* Using Facebook's Graph API: https://developers.facebook.com/docs/graph-api/using-graph-api/
|
||||
* Reference to the User endpoint: https://developers.facebook.com/docs/graph-api/reference/user
|
||||
|
||||
### Gitea
|
||||
|
||||
@@ -524,7 +451,7 @@ The synapse config will look like this:
|
||||
email_template: "{{ user.email }}"
|
||||
```
|
||||
|
||||
### Django OAuth Toolkit
|
||||
## Django OAuth Toolkit
|
||||
|
||||
[django-oauth-toolkit](https://github.com/jazzband/django-oauth-toolkit) is a
|
||||
Django application providing out of the box all the endpoints, data and logic
|
||||
|
||||
@@ -1,75 +0,0 @@
|
||||
## Summary of performance impact of running on resource constrained devices such as SBCs
|
||||
|
||||
I've been running my homeserver on a cubietruck at home now for some time and am often replying to statements like "you need loads of ram to join large rooms" with "it works fine for me". I thought it might be useful to curate a summary of the issues you're likely to run into to help as a scaling-down guide, maybe highlight these for development work or end up as documentation. It seems that once you get up to about 4x1.5GHz arm64 4GiB these issues are no longer a problem.
|
||||
|
||||
- **Platform**: 2x1GHz armhf 2GiB ram [Single-board computers](https://wiki.debian.org/CheapServerBoxHardware), SSD, postgres.
|
||||
|
||||
### Presence
|
||||
|
||||
This is the main reason people have a poor matrix experience on resource constrained homeservers. Element web will frequently be saying the server is offline while the python process will be pegged at 100% cpu. This feature is used to tell when other users are active (have a client app in the foreground) and therefore more likely to respond, but requires a lot of network activity to maintain even when nobody is talking in a room.
|
||||
|
||||

|
||||
|
||||
While synapse does have some performance issues with presence [#3971](https://github.com/matrix-org/synapse/issues/3971), the fundamental problem is that this is an easy feature to implement for a centralised service at nearly no overhead, but federation makes it combinatorial [#8055](https://github.com/matrix-org/synapse/issues/8055). There is also a client-side config option which disables the UI and idle tracking [enable_presence_by_hs_url] to blacklist the largest instances but I didn't notice much difference, so I recommend disabling the feature entirely at the server level as well.
|
||||
|
||||
[enable_presence_by_hs_url]: https://github.com/vector-im/element-web/blob/v1.7.8/config.sample.json#L45
|
||||
|
||||
### Joining
|
||||
|
||||
Joining a "large", federated room will initially fail with the below message in Element web, but waiting a while (10-60mins) and trying again will succeed without any issue. What counts as "large" is not message history, user count, connections to homeservers or even a simple count of the state events; it is instead how long the state resolution algorithm takes. However, each of those numbers is a reasonable proxy, so we can use them as estimates, since user count is one of the few things you see before joining.
|
||||
|
||||

|
||||
|
||||
This is [#1211](https://github.com/matrix-org/synapse/issues/1211) and will also hopefully be mitigated by peeking [matrix-org/matrix-doc#2753](https://github.com/matrix-org/matrix-doc/pull/2753) so at least you don't need to wait for a join to complete before finding out if it's the kind of room you want. Note that you should first disable presence, otherwise it'll just make the situation worse [#3120](https://github.com/matrix-org/synapse/issues/3120). There is a lot of database interaction too, so make sure you've [migrated your data](../postgres.md) from the default sqlite to postgresql. Personally, I recommend patience - once the initial join is complete there's rarely any issues with actually interacting with the room, but if you like you can just block "large" rooms entirely.
|
||||
|
||||
### Sessions
|
||||
|
||||
Anything that requires modifying the device list [#7721](https://github.com/matrix-org/synapse/issues/7721) will take a while to propagate, again taking the client "Offline" until it's complete. This includes signing in and out, editing the public name and verifying e2ee. The main mitigation I recommend is to keep long-running sessions open e.g. by using Firefox SSB "Use this site in App mode" or Chromium PWA "Install Element".
|
||||
|
||||
### Recommended configuration
|
||||
|
||||
Put the below in a new file at /etc/matrix-synapse/conf.d/sbc.yaml to override the defaults in homeserver.yaml.
|
||||
|
||||
```
|
||||
# Disable presence tracking, which is currently fairly resource intensive
|
||||
# More info: https://github.com/matrix-org/synapse/issues/9478
|
||||
use_presence: false
|
||||
|
||||
# Set a small complexity limit, preventing users from joining large rooms
|
||||
# which may be resource-intensive to remain a part of.
|
||||
#
|
||||
# Note that this will not prevent users from joining smaller rooms that
|
||||
# eventually become complex.
|
||||
limit_remote_rooms:
|
||||
enabled: true
|
||||
complexity: 3.0
|
||||
|
||||
# Database configuration
|
||||
database:
|
||||
# Use postgres for the best performance
|
||||
name: psycopg2
|
||||
args:
|
||||
user: matrix-synapse
|
||||
# Generate a long, secure password using a password manager
|
||||
password: hunter2
|
||||
database: matrix-synapse
|
||||
host: localhost
|
||||
```
|
||||
|
||||
Currently the complexity is measured by [current_state_events / 500](https://github.com/matrix-org/synapse/blob/v1.20.1/synapse/storage/databases/main/events_worker.py#L986). You can find join times and your most complex rooms like this:
|
||||
|
||||
```
|
||||
admin@homeserver:~$ zgrep '/client/r0/join/' /var/log/matrix-synapse/homeserver.log* | awk '{print $18, $25}' | sort --human-numeric-sort
|
||||
29.922sec/-0.002sec /_matrix/client/r0/join/%23debian-fasttrack%3Apoddery.com
|
||||
182.088sec/0.003sec /_matrix/client/r0/join/%23decentralizedweb-general%3Amatrix.org
|
||||
911.625sec/-570.847sec /_matrix/client/r0/join/%23synapse%3Amatrix.org
|
||||
|
||||
admin@homeserver:~$ sudo --user postgres psql matrix-synapse --command 'select canonical_alias, joined_members, current_state_events from room_stats_state natural join room_stats_current where canonical_alias is not null order by current_state_events desc fetch first 5 rows only'
|
||||
canonical_alias | joined_members | current_state_events
|
||||
-------------------------------+----------------+----------------------
|
||||
#_oftc_#debian:matrix.org | 871 | 52355
|
||||
#matrix:matrix.org | 6379 | 10684
|
||||
#irc:matrix.org | 461 | 3751
|
||||
#decentralizedweb-general:matrix.org | 997 | 1509
|
||||
#whatsapp:maunium.net | 554 | 854
|
||||
```
|
||||
@@ -1,7 +1,7 @@
|
||||
<h2 style="color:red">
|
||||
This page of the Synapse documentation is now deprecated. For up to date
|
||||
documentation on setting up or writing a password auth provider module, please see
|
||||
<a href="modules/index.md">this page</a>.
|
||||
<a href="modules.md">this page</a>.
|
||||
</h2>
|
||||
|
||||
# Password auth provider modules
|
||||
|
||||
@@ -1,6 +1,6 @@
|
||||
# Using Postgres
|
||||
|
||||
Synapse supports PostgreSQL versions 10 or later.
|
||||
Synapse supports PostgreSQL versions 9.6 or later.
|
||||
|
||||
## Install postgres client libraries
|
||||
|
||||
@@ -29,20 +29,16 @@ connect to a postgres database.
|
||||
|
||||
Assuming your PostgreSQL database user is called `postgres`, first authenticate as the database user with:
|
||||
|
||||
```sh
|
||||
su - postgres
|
||||
# Or, if your system uses sudo to get administrative rights
|
||||
sudo -u postgres bash
|
||||
```
|
||||
su - postgres
|
||||
# Or, if your system uses sudo to get administrative rights
|
||||
sudo -u postgres bash
|
||||
|
||||
Then, create a postgres user and a database with:
|
||||
|
||||
```sh
|
||||
# this will prompt for a password for the new user
|
||||
createuser --pwprompt synapse_user
|
||||
# this will prompt for a password for the new user
|
||||
createuser --pwprompt synapse_user
|
||||
|
||||
createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse
|
||||
```
|
||||
createdb --encoding=UTF8 --locale=C --template=template0 --owner=synapse_user synapse
|
||||
|
||||
The above will create a user called `synapse_user`, and a database called
|
||||
`synapse`.
|
||||
@@ -118,9 +114,6 @@ performance:
|
||||
Note that the appropriate values for those fields depend on the amount
|
||||
of free memory the database host has available.
|
||||
|
||||
Additionally, admins of large deployments might want to consider using huge pages
|
||||
to help manage memory, especially when using large values of `shared_buffers`. You
|
||||
can read more about that [here](https://www.postgresql.org/docs/10/kernel-resources.html#LINUX-HUGE-PAGES).
|
||||
|
||||
## Porting from SQLite
|
||||
|
||||
@@ -152,26 +145,20 @@ Firstly, shut down the currently running synapse server and copy its
|
||||
database file (typically `homeserver.db`) to another location. Once the
|
||||
copy is complete, restart synapse. For instance:
|
||||
|
||||
```sh
|
||||
synctl stop
|
||||
cp homeserver.db homeserver.db.snapshot
|
||||
synctl start
|
||||
```
|
||||
./synctl stop
|
||||
cp homeserver.db homeserver.db.snapshot
|
||||
./synctl start
|
||||
|
||||
Copy the old config file into a new config file:
|
||||
|
||||
```sh
|
||||
cp homeserver.yaml homeserver-postgres.yaml
|
||||
```
|
||||
cp homeserver.yaml homeserver-postgres.yaml
|
||||
|
||||
Edit the database section as described in the section *Synapse config*
|
||||
above and with the SQLite snapshot located at `homeserver.db.snapshot`
|
||||
simply run:
|
||||
|
||||
```sh
|
||||
synapse_port_db --sqlite-database homeserver.db.snapshot \
|
||||
--postgres-config homeserver-postgres.yaml
|
||||
```
|
||||
synapse_port_db --sqlite-database homeserver.db.snapshot \
|
||||
--postgres-config homeserver-postgres.yaml
|
||||
|
||||
The flag `--curses` displays a coloured curses progress UI.
|
||||
|
||||
@@ -183,20 +170,16 @@ To complete the conversion shut down the synapse server and run the port
|
||||
script one last time, e.g. if the SQLite database is at `homeserver.db`
|
||||
run:
|
||||
|
||||
```sh
|
||||
synapse_port_db --sqlite-database homeserver.db \
|
||||
--postgres-config homeserver-postgres.yaml
|
||||
```
|
||||
synapse_port_db --sqlite-database homeserver.db \
|
||||
--postgres-config homeserver-postgres.yaml
|
||||
|
||||
Once that has completed, change the synapse config to point at the
|
||||
PostgreSQL database configuration file `homeserver-postgres.yaml`:
|
||||
|
||||
```sh
|
||||
synctl stop
|
||||
mv homeserver.yaml homeserver-old-sqlite.yaml
|
||||
mv homeserver-postgres.yaml homeserver.yaml
|
||||
synctl start
|
||||
```
|
||||
./synctl stop
|
||||
mv homeserver.yaml homeserver-old-sqlite.yaml
|
||||
mv homeserver-postgres.yaml homeserver.yaml
|
||||
./synctl start
|
||||
|
||||
Synapse should now be running against PostgreSQL.
|
||||
|
||||
|
||||
@@ -52,7 +52,7 @@ to proxied traffic.)
|
||||
|
||||
### nginx
|
||||
|
||||
```nginx
|
||||
```
|
||||
server {
|
||||
listen 443 ssl http2;
|
||||
listen [::]:443 ssl http2;
|
||||
@@ -63,7 +63,7 @@ server {
|
||||
|
||||
server_name matrix.example.com;
|
||||
|
||||
location ~ ^(/_matrix|/_synapse/client) {
|
||||
location ~* ^(\/_matrix|\/_synapse\/client) {
|
||||
# note: do not add a path (even a single /) after the port in `proxy_pass`,
|
||||
# otherwise nginx will canonicalise the URI and cause signature verification
|
||||
# errors.
|
||||
@@ -141,7 +141,7 @@ matrix.example.com {
|
||||
|
||||
### Apache
|
||||
|
||||
```apache
|
||||
```
|
||||
<VirtualHost *:443>
|
||||
SSLEngine on
|
||||
ServerName matrix.example.com
|
||||
@@ -170,7 +170,7 @@ matrix.example.com {
|
||||
|
||||
**NOTE 2**: It appears that Synapse is currently incompatible with the ModSecurity module for Apache (`mod_security2`). If you need it enabled for other services on your web server, you can disable it for Synapse's two VirtualHosts by including the following lines before each of the two `</VirtualHost>` above:
|
||||
|
||||
```apache
|
||||
```
|
||||
<IfModule security2_module>
|
||||
SecRuleEngine off
|
||||
</IfModule>
|
||||
@@ -188,7 +188,7 @@ frontend https
|
||||
http-request set-header X-Forwarded-For %[src]
|
||||
|
||||
# Matrix client traffic
|
||||
acl matrix-host hdr(host) -i matrix.example.com matrix.example.com:443
|
||||
acl matrix-host hdr(host) -i matrix.example.com
|
||||
acl matrix-path path_beg /_matrix
|
||||
acl matrix-path path_beg /_synapse/client
|
||||
|
||||
|
||||
@@ -37,15 +37,15 @@
|
||||
|
||||
# Server admins can expand Synapse's functionality with external modules.
|
||||
#
|
||||
# See https://matrix-org.github.io/synapse/latest/modules/index.html for more
|
||||
# See https://matrix-org.github.io/synapse/latest/modules.html for more
|
||||
# documentation on how to configure or create custom modules for Synapse.
|
||||
#
|
||||
modules:
|
||||
#- module: my_super_module.MySuperClass
|
||||
# config:
|
||||
# do_thing: true
|
||||
#- module: my_other_super_module.SomeClass
|
||||
# config: {}
|
||||
# - module: my_super_module.MySuperClass
|
||||
# config:
|
||||
# do_thing: true
|
||||
# - module: my_other_super_module.SomeClass
|
||||
# config: {}
|
||||
|
||||
|
||||
## Server ##
|
||||
@@ -74,7 +74,13 @@ server_name: "SERVERNAME"
|
||||
#
|
||||
pid_file: DATADIR/homeserver.pid
|
||||
|
||||
# The absolute URL to the web client which / will redirect to.
|
||||
# The absolute URL to the web client which /_matrix/client will redirect
|
||||
# to if 'webclient' is configured under the 'listeners' configuration.
|
||||
#
|
||||
# This option can be also set to the filesystem path to the web client
|
||||
# which will be served at /_matrix/client/ if 'webclient' is configured
|
||||
# under the 'listeners' configuration, however this is a security risk:
|
||||
# https://github.com/matrix-org/synapse#security-note
|
||||
#
|
||||
#web_client_location: https://riot.example.com/
|
||||
|
||||
@@ -85,28 +91,8 @@ pid_file: DATADIR/homeserver.pid
|
||||
# Otherwise, it should be the URL to reach Synapse's client HTTP listener (see
|
||||
# 'listeners' below).
|
||||
#
|
||||
# Defaults to 'https://<server_name>/'.
|
||||
#
|
||||
#public_baseurl: https://example.com/
|
||||
|
||||
# Uncomment the following to tell other servers to send federation traffic on
|
||||
# port 443.
|
||||
#
|
||||
# By default, other servers will try to reach our server on port 8448, which can
|
||||
# be inconvenient in some environments.
|
||||
#
|
||||
# Provided 'https://<server_name>/' on port 443 is routed to Synapse, this
|
||||
# option configures Synapse to serve a file at
|
||||
# 'https://<server_name>/.well-known/matrix/server'. This will tell other
|
||||
# servers to send traffic to port 443 instead.
|
||||
#
|
||||
# See https://matrix-org.github.io/synapse/latest/delegate.html for more
|
||||
# information.
|
||||
#
|
||||
# Defaults to 'false'.
|
||||
#
|
||||
#serve_server_wellknown: true
|
||||
|
||||
# Set the soft limit on the number of file descriptors synapse can use
|
||||
# Zero is used to indicate synapse should set the soft limit to the
|
||||
# hard limit.
|
||||
@@ -158,12 +144,12 @@ presence:
|
||||
# The default room version for newly created rooms.
|
||||
#
|
||||
# Known room versions are listed here:
|
||||
# https://spec.matrix.org/latest/rooms/#complete-list-of-room-versions
|
||||
# https://matrix.org/docs/spec/#complete-list-of-room-versions
|
||||
#
|
||||
# For example, for room version 1, default_room_version should be set
|
||||
# to "1".
|
||||
#
|
||||
#default_room_version: "9"
|
||||
#default_room_version: "6"
|
||||
|
||||
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
|
||||
#
|
||||
@@ -304,6 +290,8 @@ presence:
|
||||
# static: static resources under synapse/static (/_matrix/static). (Mostly
|
||||
# useful for 'fallback authentication'.)
|
||||
#
|
||||
# webclient: A web client. Requires web_client_location to be set.
|
||||
#
|
||||
listeners:
|
||||
# TLS-enabled listener: for when matrix traffic is sent directly to synapse.
|
||||
#
|
||||
@@ -471,20 +459,6 @@ limit_remote_rooms:
|
||||
#
|
||||
#allow_per_room_profiles: false
|
||||
|
||||
# The largest allowed file size for a user avatar. Defaults to no restriction.
|
||||
#
|
||||
# Note that user avatar changes will not work if this is set without
|
||||
# using Synapse's media repository.
|
||||
#
|
||||
#max_avatar_size: 10M
|
||||
|
||||
# The MIME types allowed for user avatars. Defaults to no restriction.
|
||||
#
|
||||
# Note that user avatar changes will not work if this is set without
|
||||
# using Synapse's media repository.
|
||||
#
|
||||
#allowed_avatar_mimetypes: ["image/png", "image/jpeg", "image/gif"]
|
||||
|
||||
# How long to keep redacted events in unredacted form in the database. After
|
||||
# this period redacted events get replaced with their redacted form in the DB.
|
||||
#
|
||||
@@ -653,8 +627,8 @@ retention:
|
||||
#
|
||||
#federation_certificate_verification_whitelist:
|
||||
# - lon.example.com
|
||||
# - "*.domain.com"
|
||||
# - "*.onion"
|
||||
# - *.domain.com
|
||||
# - *.onion
|
||||
|
||||
# List of custom certificate authorities for federation traffic.
|
||||
#
|
||||
@@ -751,16 +725,11 @@ caches:
|
||||
per_cache_factors:
|
||||
#get_users_who_share_room_with_user: 2.0
|
||||
|
||||
# Controls whether cache entries are evicted after a specified time
|
||||
# period. Defaults to true. Uncomment to disable this feature.
|
||||
# Controls how long an entry can be in a cache without having been
|
||||
# accessed before being evicted. Defaults to None, which means
|
||||
# entries are never evicted based on time.
|
||||
#
|
||||
#expire_caches: false
|
||||
|
||||
# If expire_caches is enabled, this flag controls how long an entry can
|
||||
# be in a cache without having been accessed before being evicted.
|
||||
# Defaults to 30m. Uncomment to set a different time to live for cache entries.
|
||||
#
|
||||
#cache_entry_ttl: 30m
|
||||
#expiry_time: 30m
|
||||
|
||||
# Controls how long the results of a /sync request are cached for after
|
||||
# a successful response is returned. A higher duration can help clients with
|
||||
@@ -862,9 +831,6 @@ log_config: "CONFDIR/SERVERNAME.log.config"
|
||||
# - one for ratelimiting how often a user or IP can attempt to validate a 3PID.
|
||||
# - two for ratelimiting how often invites can be sent in a room or to a
|
||||
# specific user.
|
||||
# - one for ratelimiting 3PID invites (i.e. invites sent to a third-party ID
|
||||
# such as an email address or a phone number) based on the account that's
|
||||
# sending the invite.
|
||||
#
|
||||
# The defaults are as shown below.
|
||||
#
|
||||
@@ -914,10 +880,6 @@ log_config: "CONFDIR/SERVERNAME.log.config"
|
||||
# per_user:
|
||||
# per_second: 0.003
|
||||
# burst_count: 5
|
||||
#
|
||||
#rc_third_party_invite:
|
||||
# per_second: 0.2
|
||||
# burst_count: 10
|
||||
|
||||
# Ratelimiting settings for incoming federation
|
||||
#
|
||||
@@ -1227,44 +1189,6 @@ oembed:
|
||||
#
|
||||
#session_lifetime: 24h
|
||||
|
||||
# Time that an access token remains valid for, if the session is
|
||||
# using refresh tokens.
|
||||
# For more information about refresh tokens, please see the manual.
|
||||
# Note that this only applies to clients which advertise support for
|
||||
# refresh tokens.
|
||||
#
|
||||
# Note also that this is calculated at login time and refresh time:
|
||||
# changes are not applied to existing sessions until they are refreshed.
|
||||
#
|
||||
# By default, this is 5 minutes.
|
||||
#
|
||||
#refreshable_access_token_lifetime: 5m
|
||||
|
||||
# Time that a refresh token remains valid for (provided that it is not
|
||||
# exchanged for another one first).
|
||||
# This option can be used to automatically log-out inactive sessions.
|
||||
# Please see the manual for more information.
|
||||
#
|
||||
# Note also that this is calculated at login time and refresh time:
|
||||
# changes are not applied to existing sessions until they are refreshed.
|
||||
#
|
||||
# By default, this is infinite.
|
||||
#
|
||||
#refresh_token_lifetime: 24h
|
||||
|
||||
# Time that an access token remains valid for, if the session is NOT
|
||||
# using refresh tokens.
|
||||
# Please note that not all clients support refresh tokens, so setting
|
||||
# this to a short value may be inconvenient for some users who will
|
||||
# then be logged out frequently.
|
||||
#
|
||||
# Note also that this is calculated at login time: changes are not applied
|
||||
# retrospectively to existing sessions for users that have already logged in.
|
||||
#
|
||||
# By default, this is infinite.
|
||||
#
|
||||
#nonrefreshable_access_token_lifetime: 24h
|
||||
|
||||
# The user must provide all of the below types of 3PID when registering.
|
||||
#
|
||||
#registrations_require_3pid:
|
||||
@@ -1323,7 +1247,7 @@ oembed:
|
||||
# in on this server.
|
||||
#
|
||||
# (By default, no suggestion is made, so it is left up to the client.
|
||||
# This setting is ignored unless public_baseurl is also explicitly set.)
|
||||
# This setting is ignored unless public_baseurl is also set.)
|
||||
#
|
||||
#default_identity_server: https://matrix.org
|
||||
|
||||
@@ -1348,6 +1272,8 @@ oembed:
|
||||
# by the Matrix Identity Service API specification:
|
||||
# https://matrix.org/docs/spec/identity_service/latest
|
||||
#
|
||||
# If a delegate is specified, the config option public_baseurl must also be filled out.
|
||||
#
|
||||
account_threepid_delegates:
|
||||
#email: https://example.com # Delegate email sending to example.com
|
||||
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
|
||||
@@ -1454,16 +1380,6 @@ account_threepid_delegates:
|
||||
#
|
||||
#auto_join_rooms_for_guests: false
|
||||
|
||||
# Whether to inhibit errors raised when registering a new account if the user ID
|
||||
# already exists. If turned on, that requests to /register/available will always
|
||||
# show a user ID as available, and Synapse won't raise an error when starting
|
||||
# a registration with a user ID that already exists. However, Synapse will still
|
||||
# raise an error if the registration completes and the username conflicts.
|
||||
#
|
||||
# Defaults to false.
|
||||
#
|
||||
#inhibit_user_in_use_error: true
|
||||
|
||||
|
||||
## Metrics ###
|
||||
|
||||
@@ -1516,7 +1432,6 @@ room_prejoin_state:
|
||||
# - m.room.encryption
|
||||
# - m.room.name
|
||||
# - m.room.create
|
||||
# - m.room.topic
|
||||
#
|
||||
# Uncomment the following to disable these defaults (so that only the event
|
||||
# types listed in 'additional_event_types' are shared). Defaults to 'false'.
|
||||
@@ -1531,21 +1446,6 @@ room_prejoin_state:
|
||||
#additional_event_types:
|
||||
# - org.example.custom.event.type
|
||||
|
||||
# We record the IP address of clients used to access the API for various
|
||||
# reasons, including displaying it to the user in the "Where you're signed in"
|
||||
# dialog.
|
||||
#
|
||||
# By default, when puppeting another user via the admin API, the client IP
|
||||
# address is recorded against the user who created the access token (ie, the
|
||||
# admin user), and *not* the puppeted user.
|
||||
#
|
||||
# Uncomment the following to also record the IP address against the puppeted
|
||||
# user. (This also means that the puppeted user will count as an "active" user
|
||||
# for the purpose of monthly active user tracking - see 'limit_usage_by_mau' etc
|
||||
# above.)
|
||||
#
|
||||
#track_puppeted_user_ips: true
|
||||
|
||||
|
||||
# A list of application service config files to use
|
||||
#
|
||||
@@ -1913,13 +1813,10 @@ saml2_config:
|
||||
# Defaults to false. Avoid this in production.
|
||||
#
|
||||
# user_profile_method: Whether to fetch the user profile from the userinfo
|
||||
# endpoint, or to rely on the data returned in the id_token from the
|
||||
# token_endpoint.
|
||||
# endpoint. Valid values are: 'auto' or 'userinfo_endpoint'.
|
||||
#
|
||||
# Valid values are: 'auto' or 'userinfo_endpoint'.
|
||||
#
|
||||
# Defaults to 'auto', which uses the userinfo endpoint if 'openid' is
|
||||
# not included in 'scopes'. Set to 'userinfo_endpoint' to always use the
|
||||
# Defaults to 'auto', which fetches the userinfo endpoint if 'openid' is
|
||||
# included in 'scopes'. Set to 'userinfo_endpoint' to always fetch the
|
||||
# userinfo endpoint.
|
||||
#
|
||||
# allow_existing_users: set to 'true' to allow a user logging in via OIDC to
|
||||
@@ -1947,14 +1844,8 @@ saml2_config:
|
||||
#
|
||||
# localpart_template: Jinja2 template for the localpart of the MXID.
|
||||
# If this is not set, the user will be prompted to choose their
|
||||
# own username (see the documentation for the
|
||||
# 'sso_auth_account_details.html' template). This template can
|
||||
# use the 'localpart_from_email' filter.
|
||||
#
|
||||
# confirm_localpart: Whether to prompt the user to validate (or
|
||||
# change) the generated localpart (see the documentation for the
|
||||
# 'sso_auth_account_details.html' template), instead of
|
||||
# registering the account right away.
|
||||
# own username (see 'sso_auth_account_details.html' in the 'sso'
|
||||
# section of this file).
|
||||
#
|
||||
# display_name_template: Jinja2 template for the display name to set
|
||||
# on first login. If unset, no displayname will be set.
|
||||
@@ -2072,10 +1963,11 @@ sso:
|
||||
# phishing attacks from evil.site. To avoid this, include a slash after the
|
||||
# hostname: "https://my.client/".
|
||||
#
|
||||
# The login fallback page (used by clients that don't natively support the
|
||||
# required login flows) is whitelisted in addition to any URLs in this list.
|
||||
# If public_baseurl is set, then the login fallback page (used by clients
|
||||
# that don't natively support the required login flows) is whitelisted in
|
||||
# addition to any URLs in this list.
|
||||
#
|
||||
# By default, this list contains only the login fallback page.
|
||||
# By default, this list is empty.
|
||||
#
|
||||
#client_whitelist:
|
||||
# - https://riot.im/develop
|
||||
@@ -2130,12 +2022,6 @@ sso:
|
||||
#
|
||||
#algorithm: "provided-by-your-issuer"
|
||||
|
||||
# Name of the claim containing a unique identifier for the user.
|
||||
#
|
||||
# Optional, defaults to `sub`.
|
||||
#
|
||||
#subject_claim: "sub"
|
||||
|
||||
# The issuer to validate the "iss" claim against.
|
||||
#
|
||||
# Optional, if provided the "iss" claim will be required and
|
||||
@@ -2457,8 +2343,8 @@ user_directory:
|
||||
# indexes were (re)built was before Synapse 1.44, you'll have to
|
||||
# rebuild the indexes in order to search through all known users.
|
||||
# These indexes are built the first time Synapse starts; admins can
|
||||
# manually trigger a rebuild via API following the instructions at
|
||||
# https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
|
||||
# manually trigger a rebuild following the instructions at
|
||||
# https://matrix-org.github.io/synapse/latest/user_directory.html
|
||||
#
|
||||
# Uncomment to return search results containing all known users, even if that
|
||||
# user does not share a room with the requester.
|
||||
@@ -2735,35 +2621,3 @@ redis:
|
||||
# Optional password if configured on the Redis instance
|
||||
#
|
||||
#password: <secret_password>
|
||||
|
||||
|
||||
## Background Updates ##
|
||||
|
||||
# Background updates are database updates that are run in the background in batches.
|
||||
# The duration, minimum batch size, default batch size, whether to sleep between batches and if so, how long to
|
||||
# sleep can all be configured. This is helpful to speed up or slow down the updates.
|
||||
#
|
||||
background_updates:
|
||||
# How long in milliseconds to run a batch of background updates for. Defaults to 100. Uncomment and set
|
||||
# a time to change the default.
|
||||
#
|
||||
#background_update_duration_ms: 500
|
||||
|
||||
# Whether to sleep between updates. Defaults to True. Uncomment to change the default.
|
||||
#
|
||||
#sleep_enabled: false
|
||||
|
||||
# If sleeping between updates, how long in milliseconds to sleep for. Defaults to 1000. Uncomment
|
||||
# and set a duration to change the default.
|
||||
#
|
||||
#sleep_duration_ms: 300
|
||||
|
||||
# Minimum size a batch of background updates can be. Must be greater than 0. Defaults to 1. Uncomment and
|
||||
# set a size to change the default.
|
||||
#
|
||||
#min_batch_size: 10
|
||||
|
||||
# The batch size to use for the first iteration of a new background update. The default is 100.
|
||||
# Uncomment and set a size to change the default.
|
||||
#
|
||||
#default_batch_size: 50
|
||||
|
||||
@@ -76,12 +76,6 @@ The fingerprint of the repository signing key (as shown by `gpg
|
||||
/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
|
||||
`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
|
||||
|
||||
When installing with Debian packages, you might prefer to place files in
|
||||
`/etc/matrix-synapse/conf.d/` to override your configuration without editing
|
||||
the main configuration file at `/etc/matrix-synapse/homeserver.yaml`.
|
||||
By doing that, you won't be asked if you want to replace your configuration
|
||||
file when you upgrade the Debian package to a later version.
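For example, a small drop-in file is enough to override a single option without touching `homeserver.yaml`. A minimal sketch, using the `track_puppeted_user_ips` option shown in the sample config above purely as an illustration:

```sh
# Create a conf.d drop-in and restart the Debian-packaged service.
sudo tee /etc/matrix-synapse/conf.d/track-puppeted-ips.yaml <<'EOF'
track_puppeted_user_ips: true
EOF
sudo systemctl restart matrix-synapse
```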
|
||||
|
||||
##### Downstream Debian packages
|
||||
|
||||
We do not recommend using the packages from the default Debian `buster`
|
||||
@@ -164,7 +158,7 @@ xbps-install -S synapse
|
||||
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
|
||||
|
||||
- Ports: `cd /usr/ports/net-im/py-matrix-synapse && make install clean`
|
||||
- Packages: `pkg install py38-matrix-synapse`
|
||||
- Packages: `pkg install py37-matrix-synapse`
|
||||
|
||||
#### OpenBSD
|
||||
|
||||
@@ -194,7 +188,7 @@ When following this route please make sure that the [Platform-specific prerequis
|
||||
System requirements:
|
||||
|
||||
- POSIX-compliant system (tested on Linux & OS X)
|
||||
- Python 3.7 or later, up to Python 3.10.
|
||||
- Python 3.6 or later, up to Python 3.9.
|
||||
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
|
||||
|
||||
To install the Synapse homeserver run:
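For context, a typical install into a fresh virtualenv looks something like the following (paths are illustrative; see the full installation instructions for the canonical commands):

```sh
mkdir -p ~/synapse
virtualenv -p python3 ~/synapse/env
source ~/synapse/env/bin/activate
pip install --upgrade pip setuptools
pip install matrix-synapse
```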
|
||||
@@ -362,14 +356,12 @@ make install
|
||||
|
||||
##### Windows
|
||||
|
||||
Running Synapse natively on Windows is not officially supported.
|
||||
|
||||
If you wish to run or develop Synapse on Windows, the Windows Subsystem for
|
||||
Linux provides a Linux environment which is capable of using the Debian, Fedora,
|
||||
or source installation methods. More information about WSL can be found at
|
||||
<https://docs.microsoft.com/en-us/windows/wsl/install> for Windows 10/11 and
|
||||
<https://docs.microsoft.com/en-us/windows/wsl/install-on-server> for
|
||||
Windows Server.
|
||||
If you wish to run or develop Synapse on Windows, the Windows Subsystem For
|
||||
Linux provides a Linux environment on Windows 10 which is capable of using the
|
||||
Debian, Fedora, or source installation methods. More information about WSL can
|
||||
be found at <https://docs.microsoft.com/en-us/windows/wsl/install-win10> for
|
||||
Windows 10 and <https://docs.microsoft.com/en-us/windows/wsl/install-on-server>
|
||||
for Windows Server.
|
||||
|
||||
## Setting up Synapse
|
||||
|
||||
|
||||
@@ -49,12 +49,12 @@ comment these options out and use those specified by the module instead.
|
||||
|
||||
A custom mapping provider must specify the following methods:
|
||||
|
||||
* `def __init__(self, parsed_config)`
|
||||
* `__init__(self, parsed_config)`
|
||||
- Arguments:
|
||||
- `parsed_config` - A configuration object that is the return value of the
|
||||
`parse_config` method. You should set any configuration options needed by
|
||||
the module here.
|
||||
* `def parse_config(config)`
|
||||
* `parse_config(config)`
|
||||
- This method should have the `@staticmethod` decoration.
|
||||
- Arguments:
|
||||
- `config` - A `dict` representing the parsed content of the
|
||||
@@ -63,13 +63,13 @@ A custom mapping provider must specify the following methods:
|
||||
any option values they need here.
|
||||
- Whatever is returned will be passed back to the user mapping provider module's
|
||||
`__init__` method during construction.
|
||||
* `def get_remote_user_id(self, userinfo)`
|
||||
* `get_remote_user_id(self, userinfo)`
|
||||
- Arguments:
|
||||
- `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
|
||||
information from.
|
||||
- This method must return a string, which is the unique, immutable identifier
|
||||
for the user. Commonly the `sub` claim of the response.
|
||||
* `async def map_user_attributes(self, userinfo, token, failures)`
|
||||
* `map_user_attributes(self, userinfo, token, failures)`
|
||||
- This method must be async.
|
||||
- Arguments:
|
||||
- `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
|
||||
@@ -91,7 +91,7 @@ A custom mapping provider must specify the following methods:
|
||||
during a user's first login. Once a localpart has been associated with a
|
||||
remote user ID (see `get_remote_user_id`) it cannot be updated.
|
||||
- `displayname`: An optional string, the display name for the user.
|
||||
* `async def get_extra_attributes(self, userinfo, token)`
|
||||
* `get_extra_attributes(self, userinfo, token)`
|
||||
- This method must be async.
|
||||
- Arguments:
|
||||
- `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
|
||||
@@ -125,15 +125,15 @@ comment these options out and use those specified by the module instead.
|
||||
|
||||
A custom mapping provider must specify the following methods:
|
||||
|
||||
* `def __init__(self, parsed_config, module_api)`
|
||||
* `__init__(self, parsed_config, module_api)`
|
||||
- Arguments:
|
||||
- `parsed_config` - A configuration object that is the return value of the
|
||||
`parse_config` method. You should set any configuration options needed by
|
||||
the module here.
|
||||
- `module_api` - a `synapse.module_api.ModuleApi` object which provides the
|
||||
stable API available for extension modules.
|
||||
* `def parse_config(config)`
|
||||
- **This method should have the `@staticmethod` decoration.**
|
||||
* `parse_config(config)`
|
||||
- This method should have the `@staticmethod` decoration.
|
||||
- Arguments:
|
||||
- `config` - A `dict` representing the parsed content of the
|
||||
`saml_config.user_mapping_provider.config` homeserver config option.
|
||||
@@ -141,15 +141,15 @@ A custom mapping provider must specify the following methods:
|
||||
any option values they need here.
|
||||
- Whatever is returned will be passed back to the user mapping provider module's
|
||||
`__init__` method during construction.
|
||||
* `def get_saml_attributes(config)`
|
||||
- **This method should have the `@staticmethod` decoration.**
|
||||
* `get_saml_attributes(config)`
|
||||
- This method should have the `@staticmethod` decoration.
|
||||
- Arguments:
|
||||
- `config` - An object resulting from a call to `parse_config`.
|
||||
- Returns a tuple of two sets. The first set equates to the SAML auth
|
||||
response attributes that are required for the module to function, whereas
|
||||
the second set consists of those attributes which can be used if available,
|
||||
but are not necessary.
|
||||
* `def get_remote_user_id(self, saml_response, client_redirect_url)`
|
||||
* `get_remote_user_id(self, saml_response, client_redirect_url)`
|
||||
- Arguments:
|
||||
- `saml_response` - A `saml2.response.AuthnResponse` object to extract user
|
||||
information from.
|
||||
@@ -157,7 +157,7 @@ A custom mapping provider must specify the following methods:
|
||||
redirected to.
|
||||
- This method must return a string, which is the unique, immutable identifier
|
||||
for the user. Commonly the `uid` claim of the response.
|
||||
* `def saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url)`
|
||||
* `saml_response_to_user_attributes(self, saml_response, failures, client_redirect_url)`
|
||||
- Arguments:
|
||||
- `saml_response` - A `saml2.response.AuthnResponse` object to extract user
|
||||
information from.
|
||||
|
||||
@@ -81,12 +81,14 @@ remote endpoint at 10.1.2.3:9999.
|
||||
|
||||
## Upgrading from legacy structured logging configuration
|
||||
|
||||
Versions of Synapse prior to v1.54.0 automatically converted the legacy
|
||||
structured logging configuration, which was deprecated in v1.23.0, to the standard
|
||||
library logging configuration.
|
||||
Versions of Synapse prior to v1.23.0 included a custom structured logging
|
||||
configuration which is deprecated. It used a `structured: true` flag and
|
||||
configured `drains` instead of ``handlers`` and `formatters`.
|
||||
|
||||
The following reference can be used to update your configuration. Based on the
|
||||
drain `type`, we can pick a new handler:
|
||||
Synapse currently automatically converts the old configuration to the new
|
||||
configuration, but this will be removed in a future version of Synapse. The
|
||||
following reference can be used to update your configuration. Based on the drain
|
||||
`type`, we can pick a new handler:
|
||||
|
||||
1. For a type of `console`, `console_json`, or `console_json_terse`: a handler
|
||||
with a class of `logging.StreamHandler` and a `stream` of `ext://sys.stdout`
|
||||
@@ -139,7 +141,7 @@ formatters:
|
||||
handlers:
|
||||
console:
|
||||
class: logging.StreamHandler
|
||||
stream: ext://sys.stdout
|
||||
location: ext://sys.stdout
|
||||
file:
|
||||
class: logging.FileHandler
|
||||
formatter: json
|
||||
|
||||
@@ -20,9 +20,7 @@ Finally, to actually run your worker-based synapse, you must pass synctl the `-a
|
||||
commandline option to tell it to operate on all the worker configurations found
|
||||
in the given directory, e.g.:
|
||||
|
||||
```sh
|
||||
synctl -a $CONFIG/workers start
|
||||
```
|
||||
synctl -a $CONFIG/workers start
|
||||
|
||||
Currently one should always restart all workers when restarting or upgrading
|
||||
synapse, unless you explicitly know it's safe not to. For instance, restarting
|
||||
@@ -31,6 +29,4 @@ notifications.
|
||||
|
||||
To manipulate a specific worker, you pass the -w option to synctl:
|
||||
|
||||
```sh
|
||||
synctl -w $CONFIG/workers/worker1.yaml restart
|
||||
```
|
||||
synctl -w $CONFIG/workers/worker1.yaml restart
|
||||
|
||||
@@ -15,7 +15,7 @@ Type=notify
|
||||
NotifyAccess=main
|
||||
User=matrix-synapse
|
||||
WorkingDirectory=/var/lib/matrix-synapse
|
||||
EnvironmentFile=-/etc/default/matrix-synapse
|
||||
EnvironmentFile=/etc/default/matrix-synapse
|
||||
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.generic_worker --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --config-path=/etc/matrix-synapse/workers/%i.yaml
|
||||
ExecReload=/bin/kill -HUP $MAINPID
|
||||
Restart=always
|
||||
|
||||
@@ -10,7 +10,7 @@ Type=notify
|
||||
NotifyAccess=main
|
||||
User=matrix-synapse
|
||||
WorkingDirectory=/var/lib/matrix-synapse
|
||||
EnvironmentFile=-/etc/default/matrix-synapse
|
||||
EnvironmentFile=/etc/default/matrix-synapse
|
||||
ExecStartPre=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --generate-keys
|
||||
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.homeserver --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/
|
||||
ExecReload=/bin/kill -HUP $MAINPID
|
||||
|
||||
@@ -36,13 +36,6 @@ Turns a `mxc://` URL for media content into an HTTP(S) one using the homeserver'
|
||||
|
||||
Example: `message.sender_avatar_url|mxc_to_http(32,32)`
|
||||
|
||||
```python
|
||||
localpart_from_email(address: str) -> str
|
||||
```
|
||||
|
||||
Returns the local part of an email address (e.g. `alice` in `alice@example.com`).
|
||||
|
||||
Example: `user.email_address|localpart_from_email`
|
||||
|
||||
## Email templates
|
||||
|
||||
@@ -78,12 +71,7 @@ Below are the templates Synapse will look for when generating the content of an
|
||||
* `sender_avatar_url`: the avatar URL (as a `mxc://` URL) for the event's
|
||||
sender
|
||||
* `sender_hash`: a hash of the user ID of the sender
|
||||
* `msgtype`: the type of the message
|
||||
* `body_text_html`: html representation of the message
|
||||
* `body_text_plain`: plaintext representation of the message
|
||||
* `image_url`: mxc url of an image, when "msgtype" is "m.image"
|
||||
* `link`: a `matrix.to` link to the room
|
||||
* `avator_url`: url to the room's avator
|
||||
* `reason`: information on the event that triggered the email to be sent. It's an
|
||||
object with the following attributes:
|
||||
* `room_id`: the ID of the room the event was sent in
|
||||
@@ -183,11 +171,8 @@ Below are the templates Synapse will look for when generating pages related to S
|
||||
for the brand of the IdP
|
||||
* `user_attributes`: an object containing details about the user that
|
||||
we received from the IdP. May have the following attributes:
|
||||
* `display_name`: the user's display name
|
||||
* `emails`: a list of email addresses
|
||||
* `localpart`: the local part of the Matrix user ID to register,
|
||||
if `localpart_template` is set in the mapping provider configuration (empty
|
||||
string if not)
|
||||
* display_name: the user's display_name
|
||||
* emails: a list of email addresses
|
||||
The template should render a form which submits the following fields:
|
||||
* `username`: the localpart of the user's chosen user id
|
||||
* `sso_new_user_consent.html`: HTML page allowing the user to consent to the
|
||||
|
||||
@@ -1,12 +1,12 @@
|
||||
# Overview
|
||||
|
||||
This document explains how to enable VoIP relaying on your homeserver with
|
||||
This document explains how to enable VoIP relaying on your Home Server with
|
||||
TURN.
|
||||
|
||||
The synapse Matrix homeserver supports integration with TURN server via the
|
||||
The synapse Matrix Home Server supports integration with TURN server via the
|
||||
[TURN server REST API](<https://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This
|
||||
allows the homeserver to generate credentials that are valid for use on the
|
||||
TURN server through the use of a secret shared between the homeserver and the
|
||||
allows the Home Server to generate credentials that are valid for use on the
|
||||
TURN server through the use of a secret shared between the Home Server and the
|
||||
TURN server.
|
||||
|
||||
The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API) and integrate it with synapse.
|
||||
@@ -15,8 +15,8 @@ The following sections describe how to install [coturn](<https://github.com/cotu
|
||||
|
||||
For TURN relaying with `coturn` to work, it must be hosted on a server/endpoint with a public IP.
|
||||
|
||||
Hosting TURN behind NAT requires port forwarding and a NAT gateway with a public IP.
|
||||
However, even with appropriate configuration, NAT is known to cause issues and to often not work.
|
||||
Hosting TURN behind a NAT (even with appropriate port forwarding) is known to cause issues
|
||||
and to often not work.
|
||||
|
||||
## `coturn` setup
|
||||
|
||||
@@ -40,9 +40,7 @@ This will install and start a systemd service called `coturn`.
|
||||
|
||||
1. Configure it:
|
||||
|
||||
```sh
|
||||
./configure
|
||||
```
|
||||
./configure
|
||||
|
||||
You may need to install `libevent2`: if so, you should do so in
|
||||
the way recommended by your operating system. You can ignore
|
||||
@@ -51,28 +49,22 @@ This will install and start a systemd service called `coturn`.
|
||||
|
||||
1. Build and install it:
|
||||
|
||||
```sh
|
||||
make
|
||||
make install
|
||||
```
|
||||
make
|
||||
make install
|
||||
|
||||
### Configuration
|
||||
|
||||
1. Create or edit the config file in `/etc/turnserver.conf`. The relevant
|
||||
lines, with example values, are:
|
||||
|
||||
```
|
||||
use-auth-secret
|
||||
static-auth-secret=[your secret key here]
|
||||
realm=turn.myserver.org
|
||||
```
|
||||
use-auth-secret
|
||||
static-auth-secret=[your secret key here]
|
||||
realm=turn.myserver.org
|
||||
|
||||
See `turnserver.conf` for explanations of the options. One way to generate
|
||||
the `static-auth-secret` is with `pwgen`:
|
||||
|
||||
```sh
|
||||
pwgen -s 64 1
|
||||
```
|
||||
pwgen -s 64 1
|
||||
|
||||
A `realm` must be specified, but its value is somewhat arbitrary. (It is
|
||||
sent to clients as part of the authentication flow.) It is conventional to
|
||||
@@ -81,9 +73,7 @@ This will install and start a systemd service called `coturn`.
|
||||
1. You will most likely want to configure coturn to write logs somewhere. The
|
||||
easiest way is normally to send them to the syslog:
|
||||
|
||||
```sh
|
||||
syslog
|
||||
```
|
||||
syslog
|
||||
|
||||
(in which case, the logs will be available via `journalctl -u coturn` on a
|
||||
systemd system). Alternatively, coturn can be configured to write to a
|
||||
@@ -93,102 +83,56 @@ This will install and start a systemd service called `coturn`.
|
||||
connect to arbitrary IP addresses and ports. The following configuration is
|
||||
suggested as a minimum starting point:
|
||||
|
||||
```
|
||||
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
|
||||
no-tcp-relay
|
||||
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
|
||||
no-tcp-relay
|
||||
|
||||
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
|
||||
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
|
||||
denied-peer-ip=10.0.0.0-10.255.255.255
|
||||
denied-peer-ip=192.168.0.0-192.168.255.255
|
||||
denied-peer-ip=172.16.0.0-172.31.255.255
|
||||
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
|
||||
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
|
||||
denied-peer-ip=10.0.0.0-10.255.255.255
|
||||
denied-peer-ip=192.168.0.0-192.168.255.255
|
||||
denied-peer-ip=172.16.0.0-172.31.255.255
|
||||
|
||||
# recommended additional local peers to block, to mitigate external access to internal services.
|
||||
# https://www.rtcsec.com/article/slack-webrtc-turn-compromise-and-bug-bounty/#how-to-fix-an-open-turn-relay-to-address-this-vulnerability
|
||||
no-multicast-peers
|
||||
denied-peer-ip=0.0.0.0-0.255.255.255
|
||||
denied-peer-ip=100.64.0.0-100.127.255.255
|
||||
denied-peer-ip=127.0.0.0-127.255.255.255
|
||||
denied-peer-ip=169.254.0.0-169.254.255.255
|
||||
denied-peer-ip=192.0.0.0-192.0.0.255
|
||||
denied-peer-ip=192.0.2.0-192.0.2.255
|
||||
denied-peer-ip=192.88.99.0-192.88.99.255
|
||||
denied-peer-ip=198.18.0.0-198.19.255.255
|
||||
denied-peer-ip=198.51.100.0-198.51.100.255
|
||||
denied-peer-ip=203.0.113.0-203.0.113.255
|
||||
denied-peer-ip=240.0.0.0-255.255.255.255
|
||||
# special case the turn server itself so that client->TURN->TURN->client flows work
|
||||
allowed-peer-ip=10.0.0.1
|
||||
|
||||
# special case the turn server itself so that client->TURN->TURN->client flows work
|
||||
# this should be one of the turn server's listening IPs
|
||||
allowed-peer-ip=10.0.0.1
|
||||
|
||||
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
|
||||
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
|
||||
total-quota=1200
|
||||
```
|
||||
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
|
||||
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
|
||||
total-quota=1200
|
||||
|
||||
1. Also consider supporting TLS/DTLS. To do this, add the following settings
|
||||
to `turnserver.conf`:
|
||||
|
||||
```
|
||||
# TLS certificates, including intermediate certs.
|
||||
# For Let's Encrypt certificates, use `fullchain.pem` here.
|
||||
cert=/path/to/fullchain.pem
|
||||
# TLS certificates, including intermediate certs.
|
||||
# For Let's Encrypt certificates, use `fullchain.pem` here.
|
||||
cert=/path/to/fullchain.pem
|
||||
|
||||
# TLS private key file
|
||||
pkey=/path/to/privkey.pem
|
||||
# TLS private key file
|
||||
pkey=/path/to/privkey.pem
|
||||
|
||||
# Ensure the configuration lines that disable TLS/DTLS are commented-out or removed
|
||||
#no-tls
|
||||
#no-dtls
|
||||
```
|
||||
|
||||
In this case, replace the `turn:` schemes in the `turn_uris` settings below
|
||||
In this case, replace the `turn:` schemes in the `turn_uri` settings below
|
||||
with `turns:`.
|
||||
|
||||
We recommend that you only try to set up TLS/DTLS once you have set up a
|
||||
basic installation and got it working.
|
||||
|
||||
NB: If your TLS certificate was provided by Let's Encrypt, TLS/DTLS will
|
||||
not work with any Matrix client that uses Chromium's WebRTC library. This
|
||||
currently includes Element Android & iOS; for more details, see their
|
||||
[respective](https://github.com/vector-im/element-android/issues/1533)
|
||||
[issues](https://github.com/vector-im/element-ios/issues/2712) as well as the underlying
|
||||
[WebRTC issue](https://bugs.chromium.org/p/webrtc/issues/detail?id=11710).
|
||||
Consider using a ZeroSSL certificate for your TURN server as a working alternative.
|
||||
|
||||
1. Ensure your firewall allows traffic into the TURN server on the ports
|
||||
you've configured it to listen on (By default: 3478 and 5349 for TURN
|
||||
traffic (remember to allow both TCP and UDP traffic), and ports 49152-65535
|
||||
for the UDP relay.)
|
||||
|
||||
1. If your TURN server is behind NAT, the NAT gateway must have an external,
|
||||
publicly-reachable IP address. You must configure coturn to advertise that
|
||||
address to connecting clients:
|
||||
1. We do not recommend running a TURN server behind NAT, and are not aware of
|
||||
anyone doing so successfully.
|
||||
|
||||
```
|
||||
external-ip=EXTERNAL_NAT_IPv4_ADDRESS
|
||||
```
|
||||
If you want to try it anyway, you will at least need to tell coturn its
|
||||
external IP address:
|
||||
|
||||
You may optionally limit the TURN server to listen only on the local
|
||||
address that is mapped by NAT to the external address:
|
||||
external-ip=192.88.99.1
|
||||
|
||||
```
|
||||
listening-ip=INTERNAL_TURNSERVER_IPv4_ADDRESS
|
||||
```
|
||||
... and your NAT gateway must forward all of the relayed ports directly
|
||||
(eg, port 56789 on the external IP must always be forwarded to port
|
||||
56789 on the internal IP).
|
||||
|
||||
If your NAT gateway is reachable over both IPv4 and IPv6, you may
|
||||
configure coturn to advertise each available address:
|
||||
|
||||
```
|
||||
external-ip=EXTERNAL_NAT_IPv4_ADDRESS
|
||||
external-ip=EXTERNAL_NAT_IPv6_ADDRESS
|
||||
```
|
||||
|
||||
When advertising an external IPv6 address, ensure that the firewall and
|
||||
network settings of the system running your TURN server are configured to
|
||||
accept IPv6 traffic, and that the TURN server is listening on the local
|
||||
IPv6 address that is mapped by NAT to the external IPv6 address.
|
||||
If you get this working, let us know!
|
||||
|
||||
1. (Re)start the turn server:
|
||||
|
||||
@@ -205,18 +149,18 @@ This will install and start a systemd service called `coturn`.
|
||||
|
||||
## Synapse setup
|
||||
|
||||
Your homeserver configuration file needs the following extra keys:
|
||||
Your home server configuration file needs the following extra keys:
|
||||
|
||||
1. "`turn_uris`": This needs to be a yaml list of public-facing URIs
|
||||
for your TURN server to be given out to your clients. Add separate
|
||||
entries for each transport your TURN server supports.
|
||||
2. "`turn_shared_secret`": This is the secret shared between your
|
||||
homeserver and your TURN server, so you should set it to the same
|
||||
Home server and your TURN server, so you should set it to the same
|
||||
string you used in turnserver.conf.
|
||||
3. "`turn_user_lifetime`": This is the amount of time credentials
|
||||
generated by your homeserver are valid for (in milliseconds).
|
||||
generated by your Home Server are valid for (in milliseconds).
|
||||
Shorter times offer less potential for abuse at the expense of
|
||||
increased traffic between web clients and your homeserver to
|
||||
increased traffic between web clients and your home server to
|
||||
refresh credentials. The TURN REST API specification recommends
|
||||
one day (86400000).
|
||||
4. "`turn_allow_guests`": Whether to allow guest users to use the
|
||||
@@ -238,12 +182,11 @@ After updating the homeserver configuration, you must restart synapse:
|
||||
|
||||
* If you use synctl:
|
||||
```sh
|
||||
# Depending on how Synapse is installed, synctl may already be on
|
||||
# your PATH. If not, you may need to activate a virtual environment.
|
||||
synctl restart
|
||||
cd /where/you/run/synapse
|
||||
./synctl restart
|
||||
```
|
||||
* If you use systemd:
|
||||
```sh
|
||||
```
|
||||
systemctl restart matrix-synapse.service
|
||||
```
|
||||
... and then reload any clients (or wait an hour for them to refresh their
|
||||
@@ -257,16 +200,15 @@ connecting". Unfortunately, troubleshooting this can be tricky.
|
||||
|
||||
Here are a few things to try:
|
||||
|
||||
* Check that your TURN server is not behind NAT. As above, we're not aware of
|
||||
anyone who has successfully set this up.
|
||||
|
||||
* Check that you have opened your firewall to allow TCP and UDP traffic to the
|
||||
TURN ports (normally 3478 and 5349).
|
||||
TURN ports (normally 3478 and 5479).
|
||||
|
||||
* Check that you have opened your firewall to allow UDP traffic to the UDP
|
||||
relay ports (49152-65535 by default).
|
||||
|
||||
* Try disabling `coturn`'s TLS/DTLS listeners and enable only its (unencrypted)
|
||||
TCP/UDP listeners. (This will only leave signaling traffic unencrypted;
|
||||
voice & video WebRTC traffic is always encrypted.)
|
||||
|
||||
* Some WebRTC implementations (notably, that of Google Chrome) appear to get
|
||||
confused by TURN servers which are reachable over IPv6 (this appears to be
|
||||
an unexpected side-effect of its handling of multiple IP addresses as
|
||||
@@ -276,18 +218,6 @@ Here are a few things to try:
|
||||
Try removing any AAAA records for your TURN server, so that it is only
|
||||
reachable over IPv4.
|
||||
|
||||
* If your TURN server is behind NAT:
|
||||
|
||||
* double-check that your NAT gateway is correctly forwarding all TURN
|
||||
ports (normally 3478 & 5349 for TCP & UDP TURN traffic, and 49152-65535 for the UDP
|
||||
relay) to the NAT-internal address of your TURN server. If advertising
|
||||
both IPv4 and IPv6 external addresses via the `external-ip` option, ensure
|
||||
that the NAT is forwarding both IPv4 and IPv6 traffic to the IPv4 and IPv6
|
||||
internal addresses of your TURN server. When in doubt, remove AAAA records
|
||||
for your TURN server and specify only an IPv4 address as your `external-ip`.
|
||||
|
||||
* ensure that your TURN server uses the NAT gateway as its default route.
|
||||
|
||||
* Enable more verbose logging in coturn via the `verbose` setting:
|
||||
|
||||
```
|
||||
|
||||
285	docs/upgrade.md
@@ -47,7 +47,7 @@ this document.
|
||||
3. Restart Synapse:
|
||||
|
||||
```bash
|
||||
synctl restart
|
||||
./synctl restart
|
||||
```
|
||||
|
||||
To check whether your update was successful, you can check the running
|
||||
@@ -85,179 +85,6 @@ process, for example:
|
||||
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
|
||||
```
|
||||
|
||||
# Upgrading to v1.56.0
|
||||
|
||||
## Groups/communities feature has been deprecated
|
||||
|
||||
The non-standard groups/communities feature in Synapse has been deprecated and will
|
||||
be disabled by default in Synapse v1.58.0.
|
||||
|
||||
You can test disabling it by adding the following to your homeserver configuration:
|
||||
|
||||
```yaml
|
||||
experimental_features:
|
||||
groups_enabled: false
|
||||
```
|
||||
|
||||
# Upgrading to v1.55.0
|
||||
|
||||
## `synctl` script has been moved
|
||||
|
||||
The `synctl` script
|
||||
[has been made](https://github.com/matrix-org/synapse/pull/12140) an
|
||||
[entry point](https://packaging.python.org/en/latest/specifications/entry-points/)
|
||||
and no longer exists at the root of Synapse's source tree. If you wish to use
|
||||
`synctl` to manage your homeserver, you should invoke `synctl` directly, e.g.
|
||||
`synctl start` instead of `./synctl start` or `/path/to/synctl start`.
|
||||
|
||||
You will need to ensure `synctl` is on your `PATH`.
|
||||
- This is automatically the case when using
|
||||
[Debian packages](https://packages.matrix.org/debian/) or
|
||||
[docker images](https://hub.docker.com/r/matrixdotorg/synapse)
|
||||
provided by Matrix.org.
|
||||
- When installing from a wheel, sdist, or PyPI, a `synctl` executable is added
|
||||
to your Python installation's `bin`. This should be on your `PATH`
|
||||
automatically, though you might need to activate a virtual environment
|
||||
depending on how you installed Synapse.
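As a quick sanity check after upgrading, confirm that the entry point resolves before switching any scripts over (a sketch; activate the relevant virtual environment first if you installed into one):

```sh
# Prints the resolved path if synctl is on PATH; then invoke it directly.
command -v synctl
synctl start
```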
|
||||
|
||||
|
||||
## Compatibility dropped for Mjolnir 1.3.1 and earlier
|
||||
|
||||
Synapse v1.55.0 drops support for Mjolnir 1.3.1 and earlier.
|
||||
If you use the Mjolnir module to moderate your homeserver,
|
||||
please upgrade Mjolnir to version 1.3.2 or later before upgrading Synapse.
|
||||
|
||||
|
||||
# Upgrading to v1.54.0
|
||||
|
||||
## Legacy structured logging configuration removal
|
||||
|
||||
This release removes support for the `structured: true` logging configuration
|
||||
which was deprecated in Synapse v1.23.0. If your logging configuration contains
|
||||
`structured: true` then it should be modified based on the
|
||||
[structured logging documentation](structured_logging.md).
|
||||
|
||||
# Upgrading to v1.53.0
|
||||
|
||||
## Dropping support for `webclient` listeners and non-HTTP(S) `web_client_location`
|
||||
|
||||
Per the deprecation notice in Synapse v1.51.0, listeners of type `webclient`
|
||||
are no longer supported and configuring them is now a configuration error.
|
||||
|
||||
A non-HTTP(S) `web_client_location` setting is now a
|
||||
configuration error. Since the `webclient` listener is no longer supported, this
|
||||
setting only applies to the root path `/` of Synapse's web server and no longer
|
||||
the `/_matrix/client/` path.
|
||||
|
||||
## Stabilisation of MSC3231
|
||||
|
||||
The unstable validity-check endpoint for the
|
||||
[Registration Tokens](https://spec.matrix.org/v1.2/client-server-api/#get_matrixclientv1registermloginregistration_tokenvalidity)
|
||||
feature has been stabilised and moved from:
|
||||
|
||||
`/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity`
|
||||
|
||||
to:
|
||||
|
||||
`/_matrix/client/v1/register/m.login.registration_token/validity`
|
||||
|
||||
Please update any relevant reverse proxy or firewall configurations appropriately.
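One way to confirm that a reverse proxy forwards the stable path is to query the new endpoint directly. A sketch, where the server name and registration token are placeholders and the token is passed via the `token` query parameter:

```sh
curl -s 'https://matrix.example.com/_matrix/client/v1/register/m.login.registration_token/validity?token=<token>'
```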
|
||||
|
||||
## Time-based cache expiry is now enabled by default
|
||||
|
||||
Formerly, entries in the cache were not evicted regardless of whether they were accessed after storing.
|
||||
This behavior has now changed. By default entries in the cache are now evicted after 30m of not being accessed.
|
||||
To change the default behavior, go to the `caches` section of the config and change the `expire_caches` and
|
||||
`cache_entry_ttl` flags as necessary. Please note that these flags replace the `expiry_time` flag in the config.
|
||||
The `expiry_time` flag will still continue to work, but it has been deprecated and will be removed in the future.
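For example, to keep roughly the old behaviour you could disable time-based expiry, or tune the TTL instead. A minimal sketch, assuming the Debian file layout and that `homeserver.yaml` has no existing `caches` section (otherwise merge the keys by hand):

```sh
cat >> /etc/matrix-synapse/homeserver.yaml <<'EOF'
caches:
  expire_caches: false    # or leave enabled and set e.g. cache_entry_ttl: 60m
EOF
synctl restart
```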
|
||||
|
||||
## Deprecation of `capability` `org.matrix.msc3283.*`
|
||||
|
||||
The `capabilities` of MSC3283 from the REST API `/_matrix/client/r0/capabilities`
|
||||
becomes stable.
|
||||
|
||||
The old `capabilities`
|
||||
- `org.matrix.msc3283.set_displayname`,
|
||||
- `org.matrix.msc3283.set_avatar_url` and
|
||||
- `org.matrix.msc3283.3pid_changes`
|
||||
|
||||
are deprecated and scheduled to be removed in Synapse v1.54.0.
|
||||
|
||||
The new `capabilities`
|
||||
- `m.set_displayname`,
|
||||
- `m.set_avatar_url` and
|
||||
- `m.3pid_changes`
|
||||
|
||||
are now active by default.
|
||||
|
||||
## Removal of `user_may_create_room_with_invites`
|
||||
|
||||
As announced with the release of [Synapse 1.47.0](#deprecation-of-the-user_may_create_room_with_invites-module-callback),
|
||||
the deprecated `user_may_create_room_with_invites` module callback has been removed.
|
||||
|
||||
Modules relying on it can instead implement [`user_may_invite`](https://matrix-org.github.io/synapse/latest/modules/spam_checker_callbacks.html#user_may_invite)
|
||||
and use the [`get_room_state`](https://github.com/matrix-org/synapse/blob/872f23b95fa980a61b0866c1475e84491991fa20/synapse/module_api/__init__.py#L869-L876)
|
||||
module API to infer whether the invite is happening while creating a room (see [this function](https://github.com/matrix-org/synapse-domain-rule-checker/blob/e7d092dd9f2a7f844928771dbfd9fd24c2332e48/synapse_domain_rule_checker/__init__.py#L56-L89)
|
||||
as an example). Alternately, modules can also implement [`on_create_room`](https://matrix-org.github.io/synapse/latest/modules/third_party_rules_callbacks.html#on_create_room).
|
||||
|
||||
|
||||
# Upgrading to v1.52.0
|
||||
|
||||
## Twisted security release
|
||||
|
||||
Note that [Twisted 22.1.0](https://github.com/twisted/twisted/releases/tag/twisted-22.1.0)
|
||||
has recently been released, which fixes a [security issue](https://github.com/twisted/twisted/security/advisories/GHSA-92x2-jw7w-xvvx)
|
||||
within the Twisted library. We do not believe Synapse is affected by this vulnerability,
|
||||
though we advise server administrators who installed Synapse via pip to upgrade Twisted
|
||||
with `pip install --upgrade Twisted treq` as a matter of good practice. The Docker image
|
||||
`matrixdotorg/synapse` and the Debian packages from `packages.matrix.org` are using the
|
||||
updated library.
|
||||
|
||||
# Upgrading to v1.51.0
|
||||
|
||||
## Deprecation of `webclient` listeners and non-HTTP(S) `web_client_location`
|
||||
|
||||
Listeners of type `webclient` are deprecated and scheduled to be removed in
|
||||
Synapse v1.53.0.
|
||||
|
||||
Similarly, a non-HTTP(S) `web_client_location` configuration is deprecated and
|
||||
will become a configuration error in Synapse v1.53.0.
|
||||
|
||||
|
||||
# Upgrading to v1.50.0
|
||||
|
||||
## Dropping support for old Python and Postgres versions
|
||||
|
||||
In line with our [deprecation policy](deprecation_policy.md),
|
||||
we've dropped support for Python 3.6 and PostgreSQL 9.6, as they are no
|
||||
longer supported upstream.
|
||||
|
||||
This release of Synapse requires Python 3.7+ and PostgreSQL 10+.
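A quick way to check an existing deployment before upgrading; the virtualenv path matches the Debian packages and the PostgreSQL invocation is illustrative:

```sh
# The Python interpreter used by Synapse must be 3.7 or newer.
/opt/venvs/matrix-synapse/bin/python --version
# The PostgreSQL server must be version 10 or newer.
sudo -u postgres psql -c 'SHOW server_version;'
```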
|
||||
|
||||
|
||||
# Upgrading to v1.47.0
|
||||
|
||||
## Removal of old Room Admin API
|
||||
|
||||
The following admin APIs were deprecated in [Synapse 1.34](https://github.com/matrix-org/synapse/blob/v1.34.0/CHANGES.md#deprecations-and-removals)
|
||||
(released on 2021-05-17) and have now been removed:
|
||||
|
||||
- `POST /_synapse/admin/v1/<room_id>/delete`
|
||||
|
||||
Any scripts still using the above APIs should be converted to use the
|
||||
[Delete Room API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#delete-room-api).
|
||||
|
||||
## Deprecation of the `user_may_create_room_with_invites` module callback
|
||||
|
||||
The `user_may_create_room_with_invites` is deprecated and will be removed in a future
|
||||
version of Synapse. Modules implementing this callback can instead implement
|
||||
[`user_may_invite`](https://matrix-org.github.io/synapse/latest/modules/spam_checker_callbacks.html#user_may_invite)
|
||||
and use the [`get_room_state`](https://github.com/matrix-org/synapse/blob/872f23b95fa980a61b0866c1475e84491991fa20/synapse/module_api/__init__.py#L869-L876)
|
||||
module API method to infer whether the invite is happening in the context of creating a
|
||||
room.
|
||||
|
||||
We plan to remove this callback in January 2022.
|
||||
|
||||
# Upgrading to v1.45.0
|
||||
|
||||
## Changes required to media storage provider modules when reading from the Synapse configuration object
|
||||
@@ -1257,7 +1084,8 @@ more details on upgrading your database.
|
||||
|
||||
Synapse v1.0 is the first release to enforce validation of TLS
|
||||
certificates for the federation API. It is therefore essential that your
|
||||
certificates are correctly configured.
|
||||
certificates are correctly configured. See the
|
||||
[FAQ](MSC1711_certificates_FAQ.md) for more information.
|
||||
|
||||
Note, v1.0 installations will also no longer be able to federate with
|
||||
servers that have not correctly configured their certificates.
|
||||
@@ -1322,6 +1150,9 @@ you will need to replace any self-signed certificates with those
|
||||
verified by a root CA. Information on how to do so can be found at the
|
||||
ACME docs.
|
||||
|
||||
For more information on configuring TLS certificates see the
|
||||
[FAQ](MSC1711_certificates_FAQ.md).
|
||||
|
||||
# Upgrading to v0.34.0
|
||||
|
||||
1. This release is the first to fully support Python 3. Synapse will
|
||||
@@ -1332,20 +1163,16 @@ ACME docs.
|
||||
For users who have installed Synapse into a virtualenv, we recommend
|
||||
doing this by creating a new virtualenv. For example:
|
||||
|
||||
```sh
|
||||
virtualenv -p python3 ~/synapse/env3
|
||||
source ~/synapse/env3/bin/activate
|
||||
pip install matrix-synapse
|
||||
```
|
||||
virtualenv -p python3 ~/synapse/env3
|
||||
source ~/synapse/env3/bin/activate
|
||||
pip install matrix-synapse
|
||||
|
||||
You can then start synapse as normal, having activated the new
|
||||
virtualenv:
|
||||
|
||||
```sh
|
||||
cd ~/synapse
|
||||
source env3/bin/activate
|
||||
synctl start
|
||||
```
|
||||
cd ~/synapse
|
||||
source env3/bin/activate
|
||||
synctl start
|
||||
|
||||
Users who have installed from distribution packages should see the
|
||||
relevant package documentation. See below for notes on Debian
|
||||
@@ -1357,38 +1184,34 @@ ACME docs.
|
||||
`<server>.log.config` file. For example, if your `log.config`
|
||||
file contains:
|
||||
|
||||
```yaml
|
||||
handlers:
|
||||
file:
|
||||
class: logging.handlers.RotatingFileHandler
|
||||
formatter: precise
|
||||
filename: homeserver.log
|
||||
maxBytes: 104857600
|
||||
backupCount: 10
|
||||
filters: [context]
|
||||
console:
|
||||
class: logging.StreamHandler
|
||||
formatter: precise
|
||||
filters: [context]
|
||||
```
|
||||
handlers:
|
||||
file:
|
||||
class: logging.handlers.RotatingFileHandler
|
||||
formatter: precise
|
||||
filename: homeserver.log
|
||||
maxBytes: 104857600
|
||||
backupCount: 10
|
||||
filters: [context]
|
||||
console:
|
||||
class: logging.StreamHandler
|
||||
formatter: precise
|
||||
filters: [context]
|
||||
|
||||
Then you should update this to be:
|
||||
|
||||
```yaml
|
||||
handlers:
|
||||
file:
|
||||
class: logging.handlers.RotatingFileHandler
|
||||
formatter: precise
|
||||
filename: homeserver.log
|
||||
maxBytes: 104857600
|
||||
backupCount: 10
|
||||
filters: [context]
|
||||
encoding: utf8
|
||||
console:
|
||||
class: logging.StreamHandler
|
||||
formatter: precise
|
||||
filters: [context]
|
||||
```
|
||||
handlers:
|
||||
file:
|
||||
class: logging.handlers.RotatingFileHandler
|
||||
formatter: precise
|
||||
filename: homeserver.log
|
||||
maxBytes: 104857600
|
||||
backupCount: 10
|
||||
filters: [context]
|
||||
encoding: utf8
|
||||
console:
|
||||
class: logging.StreamHandler
|
||||
formatter: precise
|
||||
filters: [context]
|
||||
|
||||
There is no need to revert this change if downgrading to
|
||||
Python 2.
|
||||
@@ -1474,28 +1297,24 @@ with the HS remotely has been removed.
|
||||
It has been replaced by specifying a list of application service
|
||||
registrations in `homeserver.yaml`:
|
||||
|
||||
```yaml
|
||||
app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
|
||||
```
|
||||
app_service_config_files: ["registration-01.yaml", "registration-02.yaml"]
|
||||
|
||||
Where `registration-01.yaml` looks like:
|
||||
|
||||
```yaml
|
||||
url: <String> # e.g. "https://my.application.service.com"
|
||||
as_token: <String>
|
||||
hs_token: <String>
|
||||
sender_localpart: <String> # This is a new field which denotes the user_id localpart when using the AS token
|
||||
namespaces:
|
||||
users:
|
||||
- exclusive: <Boolean>
|
||||
regex: <String> # e.g. "@prefix_.*"
|
||||
aliases:
|
||||
- exclusive: <Boolean>
|
||||
regex: <String>
|
||||
rooms:
|
||||
- exclusive: <Boolean>
|
||||
regex: <String>
|
||||
```
|
||||
url: <String> # e.g. "https://my.application.service.com"
|
||||
as_token: <String>
|
||||
hs_token: <String>
|
||||
sender_localpart: <String> # This is a new field which denotes the user_id localpart when using the AS token
|
||||
namespaces:
|
||||
users:
|
||||
- exclusive: <Boolean>
|
||||
regex: <String> # e.g. "@prefix_.*"
|
||||
aliases:
|
||||
- exclusive: <Boolean>
|
||||
regex: <String>
|
||||
rooms:
|
||||
- exclusive: <Boolean>
|
||||
regex: <String>
|
||||
|
||||
# Upgrading to v0.8.0
|
||||
|
||||
|
||||
@@ -12,7 +12,7 @@ UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
|
||||
```
|
||||
|
||||
A new server admin user can also be created using the `register_new_matrix_user`
|
||||
command. This is a script that is distributed as part of synapse. It is possibly
|
||||
command. This is a script that is located in the `scripts/` directory, or possibly
|
||||
already on your `$PATH` depending on how Synapse was installed.
|
||||
|
||||
Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
|
||||
|
||||
@@ -1,109 +0,0 @@
|
||||
# Background Updates API
|
||||
|
||||
This API allows a server administrator to manage the background updates being
|
||||
run against the database.
|
||||
|
||||
## Status
|
||||
|
||||
This API gets the current status of the background updates.
|
||||
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v1/background_updates/status
|
||||
```
|
||||
|
||||
Returning:
|
||||
|
||||
```json
|
||||
{
|
||||
"enabled": true,
|
||||
"current_updates": {
|
||||
"<db_name>": {
|
||||
"name": "<background_update_name>",
|
||||
"total_item_count": 50,
|
||||
"total_duration_ms": 10000.0,
|
||||
"average_items_per_ms": 2.2,
|
||||
},
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
`enabled` whether the background updates are enabled or disabled.
|
||||
|
||||
`db_name` the database name (usually Synapse is configured with a single database named 'master').
|
||||
|
||||
For each update:
|
||||
|
||||
`name` the name of the update.
|
||||
`total_item_count` total number of "items" processed (the meaning of 'items' depends on the update in question).
|
||||
`total_duration_ms` how long the background process has been running, not including time spent sleeping.
|
||||
`average_items_per_ms` how many items are processed per millisecond based on an exponential average.
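For example, with curl (the access token, host and port are placeholders; like all `/_synapse/admin` endpoints, this requires authenticating as a server admin):

```sh
curl -s -H 'Authorization: Bearer <admin_access_token>' \
  'http://localhost:8008/_synapse/admin/v1/background_updates/status'
```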
|
||||
|
||||
|
||||
## Enabled
|
||||
|
||||
This API allows pausing background updates.
|
||||
|
||||
Background updates should *not* be paused for significant periods of time, as
|
||||
this can affect the performance of Synapse.
|
||||
|
||||
*Note*: This won't persist over restarts.
|
||||
|
||||
*Note*: This won't cancel any update query that is currently running. This is
|
||||
usually fine since most queries are short lived, except for `CREATE INDEX`
|
||||
background updates which won't be cancelled once started.
|
||||
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
POST /_synapse/admin/v1/background_updates/enabled
|
||||
```
|
||||
|
||||
with the following body:
|
||||
|
||||
```json
|
||||
{
|
||||
"enabled": false
|
||||
}
|
||||
```
|
||||
|
||||
`enabled` sets whether the background updates are enabled or disabled.
|
||||
|
||||
The API returns the `enabled` param.
|
||||
|
||||
```json
|
||||
{
|
||||
"enabled": false
|
||||
}
|
||||
```
|
||||
|
||||
There is also a `GET` version which returns the `enabled` state.
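A sketch of pausing the updates and then reading the state back via the `GET` variant (token, host and port are placeholders):

```sh
curl -s -X POST -H 'Authorization: Bearer <admin_access_token>' \
  -d '{"enabled": false}' \
  'http://localhost:8008/_synapse/admin/v1/background_updates/enabled'

curl -s -H 'Authorization: Bearer <admin_access_token>' \
  'http://localhost:8008/_synapse/admin/v1/background_updates/enabled'
```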
|
||||
|
||||
|
||||
## Run
|
||||
|
||||
This API schedules a specific background update to run. The job starts immediately after calling the API.
|
||||
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
POST /_synapse/admin/v1/background_updates/start_job
|
||||
```
|
||||
|
||||
with the following body:
|
||||
|
||||
```json
|
||||
{
|
||||
"job_name": "populate_stats_process_rooms"
|
||||
}
|
||||
```
|
||||
|
||||
The following JSON body parameters are available:
|
||||
|
||||
- `job_name` - A string specifying which job to run. Valid values are:
|
||||
- `populate_stats_process_rooms` - Recalculate the stats for all rooms.
|
||||
- `regenerate_directory` - Recalculate the [user directory](../../../user_directory.md) if it is stale or out of sync.
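For instance, to schedule the room-stats job named above (token, host and port are placeholders):

```sh
curl -s -X POST -H 'Authorization: Bearer <admin_access_token>' \
  -d '{"job_name": "populate_stats_process_rooms"}' \
  'http://localhost:8008/_synapse/admin/v1/background_updates/start_job'
```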
|
||||
@@ -1,212 +0,0 @@
|
||||
# Federation API
|
||||
|
||||
This API allows a server administrator to manage Synapse's federation with other homeservers.
|
||||
|
||||
Note: This API is new, experimental and "subject to change".
|
||||
|
||||
## List of destinations
|
||||
|
||||
This API gets the current destination retry timing info for all remote servers.
|
||||
|
||||
The list contains all the servers with which the server federates,
|
||||
regardless of whether an error occurred or not.
|
||||
If an error occurs, it may take up to 20 minutes for the error to be displayed here,
|
||||
as a complete retry must have failed.
|
||||
|
||||
The API is:
|
||||
|
||||
A standard request with no filtering:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v1/federation/destinations
|
||||
```
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"destinations":[
|
||||
{
|
||||
"destination": "matrix.org",
|
||||
"retry_last_ts": 1557332397936,
|
||||
"retry_interval": 3000000,
|
||||
"failure_ts": 1557329397936,
|
||||
"last_successful_stream_ordering": null
|
||||
}
|
||||
],
|
||||
"total": 1
|
||||
}
|
||||
```
|
||||
|
||||
To paginate, check for `next_token` and if present, call the endpoint again
|
||||
with `from` set to the value of `next_token`. This will return a new page.
|
||||
|
||||
If the endpoint does not return a `next_token` then there are no more destinations
|
||||
to paginate through.
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following query parameters are available:
|
||||
|
||||
- `from` - Offset in the returned list. Defaults to `0`.
|
||||
- `limit` - Maximum amount of destinations to return. Defaults to `100`.
|
||||
- `order_by` - The method in which to sort the returned list of destinations.
|
||||
Valid values are:
|
||||
- `destination` - Destinations are ordered alphabetically by remote server name.
|
||||
This is the default.
|
||||
- `retry_last_ts` - Destinations are ordered by time of last retry attempt in ms.
|
||||
- `retry_interval` - Destinations are ordered by how long until next retry in ms.
|
||||
- `failure_ts` - Destinations are ordered by when the server started failing in ms.
|
||||
- `last_successful_stream_ordering` - Destinations are ordered by the stream ordering
|
||||
of the most recent successfully-sent PDU.
|
||||
- `dir` - Direction of room order. Either `f` for forwards or `b` for backwards. Setting
|
||||
this value to `b` will reverse the above sort order. Defaults to `f`.
|
||||
|
||||
*Caution:* The database only has an index on the column `destination`.
|
||||
This means that if a different sort order is used,
|
||||
it can cause a large load on the database, especially for large environments.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
- `destinations` - An array of objects, each containing information about a destination.
|
||||
Destination objects contain the following fields:
|
||||
- `destination` - string - Name of the remote server to federate.
|
||||
- `retry_last_ts` - integer - The last time Synapse tried and failed to reach the
|
||||
remote server, in ms. This is `0` if the last attempt to communicate with the
|
||||
remote server was successful.
|
||||
- `retry_interval` - integer - How long since the last time Synapse tried to reach
|
||||
the remote server before trying again, in ms. This is `0` if no further retrying is occurring.
|
||||
- `failure_ts` - nullable integer - The first time Synapse tried and failed to reach the
|
||||
remote server, in ms. This is `null` if communication with the remote server has never failed.
|
||||
- `last_successful_stream_ordering` - nullable integer - The stream ordering of the most
|
||||
recent successfully-sent [PDU](understanding_synapse_through_grafana_graphs.md#federation)
|
||||
to this destination, or `null` if this information has not been tracked yet.
|
||||
- `next_token`: string representing a positive integer - Indication for pagination. See above.
|
||||
- `total` - integer - Total number of destinations.
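Putting the pagination and sorting parameters together, a sketch of walking the list with curl (token, host and port are placeholders):

```sh
# First page, ten entries at a time, ordered by when each destination started failing.
curl -s -H 'Authorization: Bearer <admin_access_token>' \
  'http://localhost:8008/_synapse/admin/v1/federation/destinations?order_by=failure_ts&limit=10'

# If the response contained "next_token": "10", request the next page with from=10.
curl -s -H 'Authorization: Bearer <admin_access_token>' \
  'http://localhost:8008/_synapse/admin/v1/federation/destinations?order_by=failure_ts&limit=10&from=10'
```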
|
||||
|
||||
## Destination Details API
|
||||
|
||||
This API gets the retry timing info for a specific remote server.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v1/federation/destinations/<destination>
|
||||
```
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"destination": "matrix.org",
|
||||
"retry_last_ts": 1557332397936,
|
||||
"retry_interval": 3000000,
|
||||
"failure_ts": 1557329397936,
|
||||
"last_successful_stream_ordering": null
|
||||
}
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
|
||||
- `destination` - Name of the remote server.
|
||||
|
||||
**Response**
|
||||
|
||||
The response fields are the same as those in the `destinations` array of the
|
||||
[List of destinations](#list-of-destinations) response.
|
||||
|
||||
## Destination rooms
|
||||
|
||||
This API gets the rooms that federate with a specific remote server.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
GET /_synapse/admin/v1/federation/destinations/<destination>/rooms
|
||||
```
|
||||
|
||||
A response body like the following is returned:
|
||||
|
||||
```json
|
||||
{
|
||||
"rooms":[
|
||||
{
|
||||
"room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
|
||||
"stream_ordering": 8326
|
||||
},
|
||||
{
|
||||
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
|
||||
"stream_ordering": 93534
|
||||
}
|
||||
],
|
||||
"total": 2
|
||||
}
|
||||
```
|
||||
|
||||
To paginate, check for `next_token` and if present, call the endpoint again
|
||||
with `from` set to the value of `next_token`. This will return a new page.
|
||||
|
||||
If the endpoint does not return a `next_token` then there are no more destinations
|
||||
to paginate through.
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
|
||||
- `destination` - Name of the remote server.
|
||||
|
||||
The following query parameters are available:
|
||||
|
||||
- `from` - Offset in the returned list. Defaults to `0`.
|
||||
- `limit` - Maximum amount of destinations to return. Defaults to `100`.
|
||||
- `dir` - Direction of room order by `room_id`. Either `f` for forwards or `b` for
|
||||
backwards. Defaults to `f`.
|
||||
|
||||
**Response**
|
||||
|
||||
The following fields are returned in the JSON response body:
|
||||
|
||||
- `rooms` - An array of objects, each containing information about a room.
|
||||
Room objects contain the following fields:
|
||||
- `room_id` - string - The ID of the room.
|
||||
- `stream_ordering` - integer - The stream ordering of the most recent
|
||||
successfully-sent [PDU](understanding_synapse_through_grafana_graphs.md#federation)
|
||||
to this destination in this room.
|
||||
- `next_token`: string representing a positive integer - Indication for pagination. See above.
|
||||
- `total` - integer - Total number of destinations.
|
||||
|
||||
## Reset connection timeout
|
||||
|
||||
Synapse makes federation requests to other homeservers. If a federation request fails,
|
||||
Synapse will mark the destination homeserver as offline, preventing any future requests
|
||||
to that server for a "cooldown" period. This period grows over time if the server
|
||||
continues to fail to respond
|
||||
([exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff)).
|
||||
|
||||
Admins can cancel the cooldown period with this API.
|
||||
|
||||
This API resets the retry timing for a specific remote server and tries to connect to
|
||||
the remote server again. It does not wait for the next `retry_interval`.
|
||||
The connection must have previously run into an error and `retry_last_ts`
|
||||
([Destination Details API](#destination-details-api)) must not be equal to `0`.
|
||||
|
||||
The connection attempt is carried out in the background and can take a while
|
||||
even if the API has already returned HTTP status 200.
|
||||
|
||||
The API is:
|
||||
|
||||
```
|
||||
POST /_synapse/admin/v1/federation/destinations/<destination>/reset_connection
|
||||
|
||||
{}
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
The following parameters should be set in the URL:
|
||||
|
||||
- `destination` - Name of the remote server.
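A sketch of resetting the backoff for a single destination (token, host and port are placeholders; the empty JSON body matches the request shown above):

```sh
curl -s -X POST -H 'Authorization: Bearer <admin_access_token>' -d '{}' \
  'http://localhost:8008/_synapse/admin/v1/federation/destinations/matrix.org/reset_connection'
```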
|
||||
@@ -1,103 +0,0 @@
|
||||
## Admin FAQ
|
||||
|
||||
How do I become a server admin?
|
||||
---
|
||||
If your server already has an admin account you should use the user admin API to promote other accounts to become admins. See [User Admin API](../../admin_api/user_admin_api.md#Change-whether-a-user-is-a-server-administrator-or-not)
|
||||
|
||||
If you don't have any admin accounts yet, you won't be able to use the admin API, so you'll have to edit the database manually. Manually editing the database is generally not recommended, so once you have an admin account, use the admin APIs to make further changes.
|
||||
|
||||
```sql
|
||||
UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
|
||||
```
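Once one admin account exists, further promotions can go through the User Admin API linked above rather than SQL. A sketch with curl (user ID, host, port and token are placeholders):

```sh
curl -s -X PUT -H 'Authorization: Bearer <admin_access_token>' \
  -d '{"admin": true}' \
  'http://localhost:8008/_synapse/admin/v1/users/@foo:bar.com/admin'
```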
What servers are my server talking to?
---
Run this SQL query on your database:
```sql
SELECT * FROM destinations;
```

What servers are currently participating in this room?
---
Run this SQL query on your database:
```sql
SELECT DISTINCT split_part(state_key, ':', 2)
FROM current_state_events AS c
INNER JOIN room_memberships AS m USING (room_id, event_id)
WHERE room_id = '!cURbafjkfsMDVwdRDQ:matrix.org' AND membership = 'join';
```

What users are registered on my server?
---
```sql
SELECT name FROM users;
```

Manually resetting passwords:
---
See https://github.com/matrix-org/synapse/blob/master/README.rst#password-reset

I have a problem with my server. Can I just delete my database and start again?
---
Deleting your database is unlikely to make anything better.

It's easy to make the mistake of thinking that you can start again from a clean slate by dropping your database, but things don't work like that in a federated network: lots of other servers have information about your server.

For example: other servers might think that you are in a room, your server will think that you are not, and you'll probably be unable to interact with that room in a sensible way ever again.

In general, there are better solutions to any problem than dropping the database. Come and seek help in https://matrix.to/#/#synapse:matrix.org.

There are two exceptions when it might be sensible to delete your database and start again:
* You have *never* joined any rooms which are federated with other servers. For instance, a local deployment which the outside world can't talk to.
* You are changing the `server_name` in the homeserver configuration. In effect this makes your server a completely new one from the point of view of the network, so in this case it makes sense to start with a clean database.
(In both cases you probably also want to clear out the media_store.)

I've stuffed up access to my room, how can I delete it to free up the alias?
---
Using the following curl command:
```
curl -H 'Authorization: Bearer <access-token>' -X DELETE https://matrix.org/_matrix/client/r0/directory/room/<room-alias>
```
`<access-token>` - can be obtained in Riot by looking in the Riot settings; at the bottom is
Access Token:\<click to reveal\>

`<room-alias>` - the room alias, e.g. `#my_room:matrix.org`. This may also need to be URL-encoded, for example `%23my_room%3Amatrix.org`.

How can I find the lines corresponding to a given HTTP request in my homeserver log?
---

Synapse tags each log line according to the HTTP request it is processing. When it finishes processing each request, it logs a line containing the words `Processed request: `. For example:

```
2019-02-14 22:35:08,196 - synapse.access.http.8008 - 302 - INFO - GET-37 - ::1 - 8008 - {@richvdh:localhost} Processed request: 0.173sec/0.001sec (0.002sec, 0.000sec) (0.027sec/0.026sec/2) 687B 200 "GET /_matrix/client/r0/sync HTTP/1.1" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" [0 dbevts]"
```

Here we can see that the request has been tagged with `GET-37`. (The tag depends on the method of the HTTP request, so might start with `GET-`, `PUT-`, `POST-`, `OPTIONS-` or `DELETE-`.) So to find all lines corresponding to this request, we can do:

```
grep 'GET-37' homeserver.log
```

If you want to paste that output into a github issue or matrix room, please remember to surround it with triple-backticks (```) to make it legible (see https://help.github.com/en/articles/basic-writing-and-formatting-syntax#quoting-code).

What do all those fields in the 'Processed' line mean?
---
See [Request log format](request_log.md).

What are the biggest rooms on my server?
---

```sql
SELECT s.canonical_alias, g.room_id, count(*) AS num_rows
FROM
  state_groups_state AS g,
  room_stats_state AS s
WHERE g.room_id = s.room_id
GROUP BY s.canonical_alias, g.room_id
ORDER BY num_rows DESC
LIMIT 10;
```

You can also use the [List Room API](../../admin_api/rooms.md#list-room-api)
and `order_by` `state_events`.
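
For instance, assuming the homeserver is listening on `localhost:8008` (an illustrative placeholder, as is the access token), a request along these lines should return rooms ordered by their number of state events; see the linked List Room API page for the full set of parameters:

```
curl -H 'Authorization: Bearer <access-token>' \
    'http://localhost:8008/_synapse/admin/v1/rooms?order_by=state_events'
```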

@@ -1,18 +0,0 @@
This blog post by Victor Berger explains how to use many of the tools listed on this page: https://levans.fr/shrink-synapse-database.html

# List of useful tools and scripts for maintaining the Synapse database

## [Purge Remote Media API](../../admin_api/media_admin_api.md#purge-remote-media-api)
The purge remote media API allows server admins to purge old cached remote media.
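
A sketch of calling it with `curl`, assuming the `purge_media_cache` endpoint described on the linked page (the server address, access token and timestamp are illustrative; `before_ts` is a Unix timestamp in milliseconds):

```
curl -X POST -H 'Authorization: Bearer <access-token>' \
    'http://localhost:8008/_synapse/admin/v1/purge_media_cache?before_ts=1672531200000'
```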

## [Purge Local Media API](../../admin_api/media_admin_api.md#delete-local-media)
This API deletes the *local* media from the disk of your own server.

## [Purge History API](../../admin_api/purge_history_api.md)
The purge history API allows server admins to purge historic events from their database, reclaiming disk space.
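
As a rough sketch, assuming the `purge_history` endpoint described on the linked page (the room ID, timestamp, server address and access token are illustrative placeholders; check the linked page for the full set of body options):

```
curl -X POST -H 'Authorization: Bearer <access-token>' \
    -d '{"delete_local_events": false, "purge_up_to_ts": 1672531200000}' \
    'http://localhost:8008/_synapse/admin/v1/purge_history/<room_id>'
```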

## [synapse-compress-state](https://github.com/matrix-org/rust-synapse-compress-state)
Tool for compressing (deduplicating) the `state_groups_state` table.

## [SQL for analyzing Synapse PostgreSQL database stats](useful_sql_for_admins.md)
Some easy SQL that reports useful stats about your Synapse database.