
Compare commits


5 Commits

Author SHA1 Message Date
Olivier Wilkinson (reivilibre)
6e827507f7 Return empty object on success rather than null.
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-20 10:29:36 +01:00
Olivier Wilkinson (reivilibre)
0e99412f4c Add admin API docs for setting admin bits on users
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-20 10:24:38 +01:00
Olivier Wilkinson (reivilibre)
7fd0c90234 Newsfile
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2019-08-19 14:44:53 +01:00
Olivier Wilkinson (reivilibre)
ebd2cd84d5 Add admin API for setting the admin bit of a user. 2019-08-19 14:42:55 +01:00
Olivier Wilkinson (reivilibre)
c497e13734 Introduce set_server_admin as dual to is_server_admin. 2019-08-19 14:41:07 +01:00
432 changed files with 7952 additions and 16834 deletions
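
Taken together, these commits add an admin API for setting (and unsetting) a user's server-admin bit, returning an empty JSON object on success. As a rough sketch of how such an endpoint might be called (the path, placeholders and token below are inferred from the commit messages and are not shown in this excerpt):

    # Grant server-admin status to a user; requires the access token of an existing admin.
    curl -X PUT \
        --header "Authorization: Bearer <admin_access_token>" \
        --data '{"admin": true}' \
        "http://localhost:8008/_synapse/admin/v1/users/<user_id>/admin"
    # Expected success response: {}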

View File

@@ -6,7 +6,6 @@ services:
image: postgres:9.5
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.5
@@ -17,6 +16,6 @@ services:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /src
working_dir: /app
volumes:
- ..:/src
- ..:/app
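
This compose file defines the postgres and testenv services used by the Buildkite docker-compose plugin for the Postgres test jobs (see the pipeline further down). A roughly equivalent local invocation might be the following sketch, assuming this file is .buildkite/docker-compose.py35.pg95.yaml (the file name is inferred from the postgres:9.5/python:3.5 images and the pipeline, not shown in this excerpt):

    # Run the Postgres-backed test environment locally, mirroring the CI command.
    docker-compose -f .buildkite/docker-compose.py35.pg95.yaml run testenv \
        bash -c 'python -m pip install tox && python -m tox -e py35-postgres,codecov'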

View File

@@ -6,7 +6,6 @@ services:
image: postgres:11
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.7
@@ -17,6 +16,6 @@ services:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /src
working_dir: /app
volumes:
- ..:/src
- ..:/app

View File

@@ -6,7 +6,6 @@ services:
image: postgres:9.5
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.7
@@ -17,6 +16,6 @@ services:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /src
working_dir: /app
volumes:
- ..:/src
- ..:/app

33
.buildkite/format_tap.py Normal file
View File

@@ -0,0 +1,33 @@
import sys

from tap.parser import Parser
from tap.line import Result, Unknown, Diagnostic

out = ["### TAP Output for " + sys.argv[2]]

p = Parser()

in_error = False

for line in p.parse_file(sys.argv[1]):
    if isinstance(line, Result):
        if in_error:
            out.append("")
            out.append("</pre></code></details>")
            out.append("")
            out.append("----")
            out.append("")

        in_error = False

        if not line.ok and not line.todo:
            in_error = True

            out.append("FAILURE Test #%d: ``%s``" % (line.number, line.description))
            out.append("")
            out.append("<details><summary>Show log</summary><code><pre>")

    elif isinstance(line, Diagnostic) and in_error:
        out.append(line.text)

if out:
    for line in out[:-3]:
        print(line)
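
The script reads a TAP results file (first argument) and a job label (second argument) and prints a Markdown summary of any failures, wrapping each failure's diagnostics in a collapsible block. A plausible invocation from a CI step might look like this sketch (the file name and label are placeholders; it assumes the tap.py package is installed and that the output is fed to buildkite-agent annotate):

    # Turn SyTest TAP output into a Buildkite annotation (paths and label are examples).
    pip install tap.py
    python .buildkite/format_tap.py results.tap "SyTest py35 SQLite" | buildkite-agent annotate --style error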

View File

@@ -27,7 +27,7 @@ git config --global user.name "A robot"
# Fetch and merge. If it doesn't work, it will raise due to set -e.
git fetch -u origin $GITBASE
git merge --no-edit --no-commit origin/$GITBASE
git merge --no-edit origin/$GITBASE
# Show what we are after.
git --no-pager show -s

240
.buildkite/pipeline.yml Normal file
View File

@@ -0,0 +1,240 @@
env:
  CODECOV_TOKEN: "2dd7eb9b-0eda-45fe-a47c-9b5ac040045f"

steps:
  - command:
      - "python -m pip install tox"
      - "tox -e check_codestyle"
    label: "\U0001F9F9 Check Style"
    plugins:
      - docker#v3.0.1:
          image: "python:3.6"

  - command:
      - "python -m pip install tox"
      - "tox -e packaging"
    label: "\U0001F9F9 packaging"
    plugins:
      - docker#v3.0.1:
          image: "python:3.6"

  - command:
      - "python -m pip install tox"
      - "tox -e check_isort"
    label: "\U0001F9F9 isort"
    plugins:
      - docker#v3.0.1:
          image: "python:3.6"

  - command:
      - "python -m pip install tox"
      - "scripts-dev/check-newsfragment"
    label: ":newspaper: Newsfile"
    branches: "!master !develop !release-*"
    plugins:
      - docker#v3.0.1:
          image: "python:3.6"
          propagate-environment: true

  - command:
      - "python -m pip install tox"
      - "tox -e check-sampleconfig"
    label: "\U0001F9F9 check-sample-config"
    plugins:
      - docker#v3.0.1:
          image: "python:3.6"

  - wait

  - command:
      - "apt-get update && apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev"
      - "python3.5 -m pip install tox"
      - "tox -e py35-old,codecov"
    label: ":python: 3.5 / SQLite / Old Deps"
    env:
      TRIAL_FLAGS: "-j 2"
    plugins:
      - docker#v3.0.1:
image: "ubuntu:xenial" # We use xenail to get an old sqlite and python
          propagate-environment: true
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2

  - command:
      - "python -m pip install tox"
      - "tox -e py35,codecov"
    label: ":python: 3.5 / SQLite"
    env:
      TRIAL_FLAGS: "-j 2"
    plugins:
      - docker#v3.0.1:
          image: "python:3.5"
          propagate-environment: true
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2

  - command:
      - "python -m pip install tox"
      - "tox -e py36,codecov"
    label: ":python: 3.6 / SQLite"
    env:
      TRIAL_FLAGS: "-j 2"
    plugins:
      - docker#v3.0.1:
          image: "python:3.6"
          propagate-environment: true
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2

  - command:
      - "python -m pip install tox"
      - "tox -e py37,codecov"
    label: ":python: 3.7 / SQLite"
    env:
      TRIAL_FLAGS: "-j 2"
    plugins:
      - docker#v3.0.1:
          image: "python:3.7"
          propagate-environment: true
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2

  - label: ":python: 3.5 / :postgres: 9.5"
    agents:
      queue: "medium"
    env:
      TRIAL_FLAGS: "-j 8"
    command:
      - "bash -c 'python -m pip install tox && python -m tox -e py35-postgres,codecov'"
    plugins:
      - docker-compose#v2.1.0:
          run: testenv
          config:
            - .buildkite/docker-compose.py35.pg95.yaml
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2

  - label: ":python: 3.7 / :postgres: 9.5"
    agents:
      queue: "medium"
    env:
      TRIAL_FLAGS: "-j 8"
    command:
      - "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
    plugins:
      - docker-compose#v2.1.0:
          run: testenv
          config:
            - .buildkite/docker-compose.py37.pg95.yaml
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2

  - label: ":python: 3.7 / :postgres: 11"
    agents:
      queue: "medium"
    env:
      TRIAL_FLAGS: "-j 8"
    command:
      - "bash -c 'python -m pip install tox && python -m tox -e py37-postgres,codecov'"
    plugins:
      - docker-compose#v2.1.0:
          run: testenv
          config:
            - .buildkite/docker-compose.py37.pg11.yaml
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2

  - label: "SyTest - :python: 3.5 / SQLite / Monolith"
    agents:
      queue: "medium"
    command:
      - "bash .buildkite/merge_base_branch.sh"
      - "bash /synapse_sytest.sh"
    plugins:
      - docker#v3.0.1:
          image: "matrixdotorg/sytest-synapse:py35"
          propagate-environment: true
          always-pull: true
          workdir: "/src"
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2

  - label: "SyTest - :python: 3.5 / :postgres: 9.6 / Monolith"
    agents:
      queue: "medium"
    env:
      POSTGRES: "1"
    command:
      - "bash .buildkite/merge_base_branch.sh"
      - "bash /synapse_sytest.sh"
    plugins:
      - docker#v3.0.1:
          image: "matrixdotorg/sytest-synapse:py35"
          propagate-environment: true
          always-pull: true
          workdir: "/src"
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2

  - label: "SyTest - :python: 3.5 / :postgres: 9.6 / Workers"
    agents:
      queue: "medium"
    env:
      POSTGRES: "1"
      WORKERS: "1"
      BLACKLIST: "synapse-blacklist-with-workers"
    command:
      - "bash .buildkite/merge_base_branch.sh"
      - "bash -c 'cat /src/sytest-blacklist /src/.buildkite/worker-blacklist > /src/synapse-blacklist-with-workers'"
      - "bash /synapse_sytest.sh"
    plugins:
      - docker#v3.0.1:
          image: "matrixdotorg/sytest-synapse:py35"
          propagate-environment: true
          always-pull: true
          workdir: "/src"
    retry:
      automatic:
        - exit_status: -1
          limit: 2
        - exit_status: 2
          limit: 2
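
Most of these steps simply install tox inside a Python Docker image and run one named tox environment, so an individual check can usually be reproduced locally; for example (a sketch using environment names from the pipeline above, assuming a suitable local Python and tox):

    # Reproduce the lint/packaging checks locally.
    python -m pip install tox
    tox -e check_codestyle
    tox -e packaging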

View File

@@ -1,8 +1,7 @@
[run]
branch = True
parallel = True
include=$TOP/synapse/*
data_file = $TOP/.coverage
include = synapse/*
[report]
precision = 2
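
With parallel = True, coverage writes one data file per process, so the per-process files need to be combined before a report can be produced. A minimal sketch of that step (assuming coverage.py is installed and run from the repository root):

    # Merge per-process coverage data and show the combined report.
    python -m coverage combine
    python -m coverage report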

View File

@@ -7,7 +7,7 @@ about: Create a report to help us improve
<!--
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
You will likely get better support more quickly if you ask in ** #synapse:matrix.org ** ;)
You will likely get better support more quickly if you ask in ** #matrix:matrix.org ** ;)
This is a bug report template. By following the instructions below and
@@ -44,26 +44,22 @@ those (please be careful to remove any personal or private data). Please surroun
<!-- IMPORTANT: please answer the following questions, to help us narrow down the problem -->
<!-- Was this issue identified on matrix.org or another homeserver? -->
- **Homeserver**:
- **Homeserver**:
If not matrix.org:
<!--
What version of Synapse is running?
You can find the Synapse version with this command:
$ curl http://localhost:8008/_synapse/admin/v1/server_version
(You may need to replace `localhost:8008` if Synapse is not configured to
listen on that port.)
What version of Synapse is running?
You can find the Synapse version by inspecting the server headers (replace matrix.org with
your own homeserver domain):
$ curl -v https://matrix.org/_matrix/client/versions 2>&1 | grep "Server:"
-->
- **Version**:
- **Version**:
- **Install method**:
- **Install method**:
<!-- examples: package manager/git clone/pip -->
- **Platform**:
- **Platform**:
<!--
Tell us about the environment in which your homeserver is operating
distro, hardware, if it's running in a vm/container, etc.

7
.gitignore vendored
View File

@@ -7,11 +7,9 @@
*.egg-info
*.lock
*.pyc
*.snap
*.tac
_trial_temp/
_trial_temp*/
/out
# stuff that is likely to exist when you run a server locally
/*.db
@@ -22,7 +20,6 @@ _trial_temp*/
/*.signing.key
/env/
/homeserver*.yaml
/logs
/media_store/
/uploads
@@ -32,9 +29,8 @@ _trial_temp*/
/.vscode/
# build products
!/.coveragerc
/.coverage*
/.mypy_cache/
!/.coveragerc
/.tox
/build/
/coverage.*
@@ -42,3 +38,4 @@ _trial_temp*/
/docs/build/
/htmlcov
/pip-wheel-metadata/

View File

@@ -1,199 +1,3 @@
Synapse 1.4.1 (2019-10-18)
==========================
No changes since 1.4.1rc1.
Synapse 1.4.1rc1 (2019-10-17)
=============================
Bugfixes
--------
- Fix bug where redacted events were sometimes incorrectly censored in the database, breaking APIs that attempted to fetch such events. ([\#6185](https://github.com/matrix-org/synapse/issues/6185), [5b0e9948](https://github.com/matrix-org/synapse/commit/5b0e9948eaae801643e594b5abc8ee4b10bd194e))
Synapse 1.4.0 (2019-10-03)
==========================
Bugfixes
--------
- Redact `client_secret` in server logs. ([\#6158](https://github.com/matrix-org/synapse/issues/6158))
Synapse 1.4.0rc2 (2019-10-02)
=============================
Bugfixes
--------
- Fix bug in background update that adds last seen information to the `devices` table, and improve its performance on Postgres. ([\#6135](https://github.com/matrix-org/synapse/issues/6135))
- Fix bad performance of censoring redactions background task. ([\#6141](https://github.com/matrix-org/synapse/issues/6141))
- Fix fetching censored redactions from DB, which caused APIs like initial sync to fail if it tried to include the censored redaction. ([\#6145](https://github.com/matrix-org/synapse/issues/6145))
- Fix exceptions when storing large retry intervals for down remote servers. ([\#6146](https://github.com/matrix-org/synapse/issues/6146))
Internal Changes
----------------
- Fix up sample config entry for `redaction_retention_period` option. ([\#6117](https://github.com/matrix-org/synapse/issues/6117))
Synapse 1.4.0rc1 (2019-09-26)
=============================
Note that this release includes significant changes around 3pid
verification. Administrators are reminded to review the [upgrade notes](UPGRADE.rst#upgrading-to-v140).
Features
--------
- Changes to 3pid verification:
- Add the ability to send registration emails from the homeserver rather than delegating to an identity server. ([\#5835](https://github.com/matrix-org/synapse/issues/5835), [\#5940](https://github.com/matrix-org/synapse/issues/5940), [\#5993](https://github.com/matrix-org/synapse/issues/5993), [\#5994](https://github.com/matrix-org/synapse/issues/5994), [\#5868](https://github.com/matrix-org/synapse/issues/5868))
- Replace `trust_identity_server_for_password_resets` config option with `account_threepid_delegates`, and make the `id_server` parameteter optional on `*/requestToken` endpoints, as per [MSC2263](https://github.com/matrix-org/matrix-doc/pull/2263). ([\#5876](https://github.com/matrix-org/synapse/issues/5876), [\#5969](https://github.com/matrix-org/synapse/issues/5969), [\#6028](https://github.com/matrix-org/synapse/issues/6028))
  - Switch to using the v2 Identity Service `/lookup` API where available, with fallback to v1. (Implements [MSC2134](https://github.com/matrix-org/matrix-doc/pull/2134) plus `id_access_token` authentication for v2 Identity Service APIs from [MSC2140](https://github.com/matrix-org/matrix-doc/pull/2140)). ([\#5897](https://github.com/matrix-org/synapse/issues/5897))
- Remove `bind_email` and `bind_msisdn` parameters from `/register` ala [MSC2140](https://github.com/matrix-org/matrix-doc/pull/2140). ([\#5964](https://github.com/matrix-org/synapse/issues/5964))
- Add `m.id_access_token` to `unstable_features` in `/versions` as per [MSC2264](https://github.com/matrix-org/matrix-doc/pull/2264). ([\#5974](https://github.com/matrix-org/synapse/issues/5974))
- Use the v2 Identity Service API for 3PID invites. ([\#5979](https://github.com/matrix-org/synapse/issues/5979))
- Add `POST /_matrix/client/unstable/account/3pid/unbind` endpoint from [MSC2140](https://github.com/matrix-org/matrix-doc/pull/2140) for unbinding a 3PID from an identity server without removing it from the homeserver user account. ([\#5980](https://github.com/matrix-org/synapse/issues/5980), [\#6062](https://github.com/matrix-org/synapse/issues/6062))
- Use `account_threepid_delegate.email` and `account_threepid_delegate.msisdn` for validating threepid sessions. ([\#6011](https://github.com/matrix-org/synapse/issues/6011))
- Allow homeserver to handle or delegate email validation when adding an email to a user's account. ([\#6042](https://github.com/matrix-org/synapse/issues/6042))
- Implement new Client Server API endpoints `/account/3pid/add` and `/account/3pid/bind` as per [MSC2290](https://github.com/matrix-org/matrix-doc/pull/2290). ([\#6043](https://github.com/matrix-org/synapse/issues/6043))
- Add an unstable feature flag for separate add/bind 3pid APIs. ([\#6044](https://github.com/matrix-org/synapse/issues/6044))
- Remove `bind` parameter from Client Server POST `/account` endpoint as per [MSC2290](https://github.com/matrix-org/matrix-doc/pull/2290/). ([\#6067](https://github.com/matrix-org/synapse/issues/6067))
- Add `POST /add_threepid/msisdn/submit_token` endpoint for proxying submitToken on an `account_threepid_handler`. ([\#6078](https://github.com/matrix-org/synapse/issues/6078))
- Add `submit_url` response parameter to `*/msisdn/requestToken` endpoints. ([\#6079](https://github.com/matrix-org/synapse/issues/6079))
- Add `m.require_identity_server` flag to /version's unstable_features. ([\#5972](https://github.com/matrix-org/synapse/issues/5972))
- Enhancements to OpenTracing support:
- Make OpenTracing work in worker mode. ([\#5771](https://github.com/matrix-org/synapse/issues/5771))
- Pass OpenTracing contexts between servers when transmitting EDUs. ([\#5852](https://github.com/matrix-org/synapse/issues/5852))
- OpenTracing for device list updates. ([\#5853](https://github.com/matrix-org/synapse/issues/5853))
- Add a tag recording a request's authenticated entity and corresponding servlet in OpenTracing. ([\#5856](https://github.com/matrix-org/synapse/issues/5856))
- Add minimum OpenTracing for client servlets. ([\#5983](https://github.com/matrix-org/synapse/issues/5983))
- Check at setup that OpenTracing is installed if it's enabled in the config. ([\#5985](https://github.com/matrix-org/synapse/issues/5985))
- Trace replication send times. ([\#5986](https://github.com/matrix-org/synapse/issues/5986))
  - Include missing OpenTracing contexts in outbound replication requests. ([\#5982](https://github.com/matrix-org/synapse/issues/5982))
- Fix sending of EDUs when OpenTracing is enabled with an empty whitelist. ([\#5984](https://github.com/matrix-org/synapse/issues/5984))
- Fix invalid references to None while OpenTracing if the log context slips. ([\#5988](https://github.com/matrix-org/synapse/issues/5988), [\#5991](https://github.com/matrix-org/synapse/issues/5991))
- OpenTracing for room and e2e keys. ([\#5855](https://github.com/matrix-org/synapse/issues/5855))
- Add OpenTracing span over HTTP push processing. ([\#6003](https://github.com/matrix-org/synapse/issues/6003))
- Add an admin API to purge old rooms from the database. ([\#5845](https://github.com/matrix-org/synapse/issues/5845))
- Retry well-known lookups if we have recently seen a valid well-known record for the server. ([\#5850](https://github.com/matrix-org/synapse/issues/5850))
- Add support for filtered room-directory search requests over federation ([MSC2197](https://github.com/matrix-org/matrix-doc/pull/2197)), in order to allow upcoming room directory query performance improvements. ([\#5859](https://github.com/matrix-org/synapse/issues/5859))
- Correctly retry all hosts returned from SRV when we fail to connect. ([\#5864](https://github.com/matrix-org/synapse/issues/5864))
- Add admin API endpoint for setting whether or not a user is a server administrator. ([\#5878](https://github.com/matrix-org/synapse/issues/5878))
- Enable cleaning up extremities with dummy events by default to prevent undue build up of forward extremities. ([\#5884](https://github.com/matrix-org/synapse/issues/5884))
- Add config option to sign remote key query responses with a separate key. ([\#5895](https://github.com/matrix-org/synapse/issues/5895))
- Add support for config templating. ([\#5900](https://github.com/matrix-org/synapse/issues/5900))
- Users with the type of "support" or "bot" are no longer required to consent. ([\#5902](https://github.com/matrix-org/synapse/issues/5902))
- Let synctl accept a directory of config files. ([\#5904](https://github.com/matrix-org/synapse/issues/5904))
- Increase max display name size to 256. ([\#5906](https://github.com/matrix-org/synapse/issues/5906))
- Add admin API endpoint for getting whether or not a user is a server administrator. ([\#5914](https://github.com/matrix-org/synapse/issues/5914))
- Redact events in the database that have been redacted for a week. ([\#5934](https://github.com/matrix-org/synapse/issues/5934))
- New prometheus metrics:
- `synapse_federation_known_servers`: represents the total number of servers your server knows about (i.e. is in rooms with), including itself. Enable by setting `metrics_flags.known_servers` to True in the configuration.([\#5981](https://github.com/matrix-org/synapse/issues/5981))
- `synapse_build_info`: exposes the Python version, OS version, and Synapse version of the running server. ([\#6005](https://github.com/matrix-org/synapse/issues/6005))
- Give appropriate exit codes when synctl fails. ([\#5992](https://github.com/matrix-org/synapse/issues/5992))
- Apply the federation blacklist to requests to identity servers. ([\#6000](https://github.com/matrix-org/synapse/issues/6000))
- Add `report_stats_endpoint` option to configure where stats are reported to, if enabled. Contributed by @Sorunome. ([\#6012](https://github.com/matrix-org/synapse/issues/6012))
- Add config option to increase ratelimits for room admins redacting messages. ([\#6015](https://github.com/matrix-org/synapse/issues/6015))
- Stop sending federation transactions to servers which have been down for a long time. ([\#6026](https://github.com/matrix-org/synapse/issues/6026))
- Make the process for mapping SAML2 users to matrix IDs more flexible. ([\#6037](https://github.com/matrix-org/synapse/issues/6037))
- Return a clearer error message when a timeout occurs when attempting to contact an identity server. ([\#6073](https://github.com/matrix-org/synapse/issues/6073))
- Prevent password reset's submit_token endpoint from accepting trailing slashes. ([\#6074](https://github.com/matrix-org/synapse/issues/6074))
- Return 403 on `/register/available` if registration has been disabled. ([\#6082](https://github.com/matrix-org/synapse/issues/6082))
- Explicitly log when a homeserver does not have the `trusted_key_servers` config field configured. ([\#6090](https://github.com/matrix-org/synapse/issues/6090))
- Add support for pruning old rows in `user_ips` table. ([\#6098](https://github.com/matrix-org/synapse/issues/6098))
Bugfixes
--------
- Don't create broken room when `power_level_content_override.users` does not contain `creator_id`. ([\#5633](https://github.com/matrix-org/synapse/issues/5633))
- Fix database index so that different backup versions can have the same sessions. ([\#5857](https://github.com/matrix-org/synapse/issues/5857))
- Fix Synapse looking for config options `password_reset_failure_template` and `password_reset_success_template`, when they are actually `password_reset_template_failure_html`, `password_reset_template_success_html`. ([\#5863](https://github.com/matrix-org/synapse/issues/5863))
- Fix stack overflow when recovering an appservice which had an outage. ([\#5885](https://github.com/matrix-org/synapse/issues/5885))
- Fix error message which referred to `public_base_url` instead of `public_baseurl`. Thanks to @aaronraimist for the fix! ([\#5909](https://github.com/matrix-org/synapse/issues/5909))
- Fix 404 for thumbnail download when `dynamic_thumbnails` is `false` and the thumbnail was dynamically generated. Fix reported by rkfg. ([\#5915](https://github.com/matrix-org/synapse/issues/5915))
- Fix a cache-invalidation bug for worker-based deployments. ([\#5920](https://github.com/matrix-org/synapse/issues/5920))
- Fix admin API for listing media in a room not being available with an external media repo. ([\#5966](https://github.com/matrix-org/synapse/issues/5966))
- Fix list media admin API always returning an error. ([\#5967](https://github.com/matrix-org/synapse/issues/5967))
- Fix room and user stats tracking. ([\#5971](https://github.com/matrix-org/synapse/issues/5971), [\#5998](https://github.com/matrix-org/synapse/issues/5998), [\#6029](https://github.com/matrix-org/synapse/issues/6029))
- Return a `M_MISSING_PARAM` if `sid` is not provided to `/account/3pid`. ([\#5995](https://github.com/matrix-org/synapse/issues/5995))
- `federation_certificate_verification_whitelist` now will not cause `TypeErrors` to be raised (a regression in 1.3). Additionally, it now supports internationalised domain names in their non-canonical representation. ([\#5996](https://github.com/matrix-org/synapse/issues/5996))
- Only count real users when checking for auto-creation of auto-join room. ([\#6004](https://github.com/matrix-org/synapse/issues/6004))
- Ensure support users can be registered even if MAU limit is reached. ([\#6020](https://github.com/matrix-org/synapse/issues/6020))
- Fix bug where login error was shown incorrectly on SSO fallback login. ([\#6024](https://github.com/matrix-org/synapse/issues/6024))
- Fix bug in calculating the federation retry backoff period. ([\#6025](https://github.com/matrix-org/synapse/issues/6025))
- Prevent exceptions being logged when extremity-cleanup events fail due to lack of user consent to the terms of service. ([\#6053](https://github.com/matrix-org/synapse/issues/6053))
- Remove POST method from password-reset `submit_token` endpoint until we implement `submit_url` functionality. ([\#6056](https://github.com/matrix-org/synapse/issues/6056))
- Fix logcontext spam on non-Linux platforms. ([\#6059](https://github.com/matrix-org/synapse/issues/6059))
- Ensure query parameters in email validation links are URL-encoded. ([\#6063](https://github.com/matrix-org/synapse/issues/6063))
- Fix a bug which caused SAML attribute maps to be overridden by defaults. ([\#6069](https://github.com/matrix-org/synapse/issues/6069))
- Fix the logged number of updated items for the `users_set_deactivated_flag` background update. ([\#6092](https://github.com/matrix-org/synapse/issues/6092))
- Add `sid` to `next_link` for email validation. ([\#6097](https://github.com/matrix-org/synapse/issues/6097))
- Threepid validity checks on msisdns should not be dependent on `threepid_behaviour_email`. ([\#6104](https://github.com/matrix-org/synapse/issues/6104))
- Ensure that servers which are not configured to support email address verification do not offer it in the registration flows. ([\#6107](https://github.com/matrix-org/synapse/issues/6107))
Updates to the Docker image
---------------------------
- Avoid changing `UID/GID` if they are already correct. ([\#5970](https://github.com/matrix-org/synapse/issues/5970))
- Provide `SYNAPSE_WORKER` envvar to specify python module. ([\#6058](https://github.com/matrix-org/synapse/issues/6058))
Improved Documentation
----------------------
- Convert documentation to markdown (from rst) ([\#5849](https://github.com/matrix-org/synapse/issues/5849))
- Update `INSTALL.md` to say that Python 2 is no longer supported. ([\#5953](https://github.com/matrix-org/synapse/issues/5953))
- Add developer documentation for using SAML2. ([\#6032](https://github.com/matrix-org/synapse/issues/6032))
- Add some notes on rolling back to v1.3.1. ([\#6049](https://github.com/matrix-org/synapse/issues/6049))
- Update the upgrade notes. ([\#6050](https://github.com/matrix-org/synapse/issues/6050))
Deprecations and Removals
-------------------------
- Remove shared-secret registration from `/_matrix/client/r0/register` endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5877](https://github.com/matrix-org/synapse/issues/5877))
- Deprecate the `trusted_third_party_id_servers` option. ([\#5875](https://github.com/matrix-org/synapse/issues/5875))
Internal Changes
----------------
- Lay the groundwork for structured logging output. ([\#5680](https://github.com/matrix-org/synapse/issues/5680))
- Retry well-known lookup before the cache expires, giving a grace period where the remote well-known can be down but we still use the old result. ([\#5844](https://github.com/matrix-org/synapse/issues/5844))
- Remove log line for debugging issue #5407. ([\#5860](https://github.com/matrix-org/synapse/issues/5860))
- Refactor the Appservice scheduler code. ([\#5886](https://github.com/matrix-org/synapse/issues/5886))
- Compatibility with v2 Identity Service APIs other than /lookup. ([\#5892](https://github.com/matrix-org/synapse/issues/5892), [\#6013](https://github.com/matrix-org/synapse/issues/6013))
- Stop populating some unused tables. ([\#5893](https://github.com/matrix-org/synapse/issues/5893), [\#6047](https://github.com/matrix-org/synapse/issues/6047))
- Add missing index on `users_in_public_rooms` to improve the performance of directory queries. ([\#5894](https://github.com/matrix-org/synapse/issues/5894))
- Improve the logging when we have an error when fetching signing keys. ([\#5896](https://github.com/matrix-org/synapse/issues/5896))
- Add support for database engine-specific schema deltas, based on file extension. ([\#5911](https://github.com/matrix-org/synapse/issues/5911))
- Update Buildkite pipeline to use plugins instead of buildkite-agent commands. ([\#5922](https://github.com/matrix-org/synapse/issues/5922))
- Add link in sample config to the logging config schema. ([\#5926](https://github.com/matrix-org/synapse/issues/5926))
- Remove unnecessary parentheses in return statements. ([\#5931](https://github.com/matrix-org/synapse/issues/5931))
- Remove unused `jenkins/prepare_sytest.sh` file. ([\#5938](https://github.com/matrix-org/synapse/issues/5938))
- Move Buildkite pipeline config to the pipelines repo. ([\#5943](https://github.com/matrix-org/synapse/issues/5943))
- Remove unnecessary return statements in the codebase which were the result of a regex run. ([\#5962](https://github.com/matrix-org/synapse/issues/5962))
- Remove left-over methods from v1 registration API. ([\#5963](https://github.com/matrix-org/synapse/issues/5963))
- Cleanup event auth type initialisation. ([\#5975](https://github.com/matrix-org/synapse/issues/5975))
- Clean up dependency checking at setup. ([\#5989](https://github.com/matrix-org/synapse/issues/5989))
- Update OpenTracing docs to use the unified `trace` method. ([\#5776](https://github.com/matrix-org/synapse/issues/5776))
- Small refactor of function arguments and docstrings in `RoomMemberHandler`. ([\#6009](https://github.com/matrix-org/synapse/issues/6009))
- Remove unused `origin` argument on `FederationHandler.add_display_name_to_third_party_invite`. ([\#6010](https://github.com/matrix-org/synapse/issues/6010))
- Add a `failure_ts` column to the `destinations` database table. ([\#6016](https://github.com/matrix-org/synapse/issues/6016), [\#6072](https://github.com/matrix-org/synapse/issues/6072))
- Clean up some code in the retry logic. ([\#6017](https://github.com/matrix-org/synapse/issues/6017))
- Fix the structured logging tests stomping on the global log configuration for subsequent tests. ([\#6023](https://github.com/matrix-org/synapse/issues/6023))
- Clean up the sample config for SAML authentication. ([\#6064](https://github.com/matrix-org/synapse/issues/6064))
- Change mailer logging to reflect Synapse doesn't just do chat notifications by email now. ([\#6075](https://github.com/matrix-org/synapse/issues/6075))
- Move last-seen info into devices table. ([\#6089](https://github.com/matrix-org/synapse/issues/6089))
- Remove unused parameter to `get_user_id_by_threepid`. ([\#6099](https://github.com/matrix-org/synapse/issues/6099))
- Refactor the user-interactive auth handling. ([\#6105](https://github.com/matrix-org/synapse/issues/6105))
- Refactor code for calculating registration flows. ([\#6106](https://github.com/matrix-org/synapse/issues/6106))
Synapse 1.3.1 (2019-08-17)
==========================

View File

@@ -56,7 +56,7 @@ Code style
All Matrix projects have a well-defined code-style - and sometimes we've even
got as far as documenting it... For instance, synapse's code style doc lives
at https://github.com/matrix-org/synapse/tree/master/docs/code_style.md.
at https://github.com/matrix-org/synapse/tree/master/docs/code_style.rst.
Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it

View File

@@ -36,7 +36,7 @@ that your email address is probably `user@example.com` rather than
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.5, 3.6, or 3.7
- Python 3.5, 3.6, 3.7, or 2.7
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
Synapse is written in Python but some of the libraries it uses are written in
@@ -349,13 +349,6 @@ sudo pip uninstall py-bcrypt
sudo pip install py-bcrypt
```
### Void Linux
Synapse can be found in the void repositories as 'synapse':
xbps-install -Su
xbps-install -S synapse
### FreeBSD
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
@@ -380,7 +373,7 @@ is suitable for local testing, but for any practical use, you will either need
to enable a reverse proxy, or configure Synapse to expose an HTTPS port.
For information on using a reverse proxy, see
[docs/reverse_proxy.md](docs/reverse_proxy.md).
[docs/reverse_proxy.rst](docs/reverse_proxy.rst).
To configure Synapse to expose an HTTPS port, you will need to edit
`homeserver.yaml`, as follows:
@@ -428,7 +421,7 @@ If Synapse is not configured with an SMTP server, password reset via email will
The easiest way to create a new user is to do so from a client like [Riot](https://riot.im).
Alternatively you can do so from the command line if you have installed via pip.
Alternatively you can do so from the command line if you have installed via pip.
This can be done as follows:
@@ -453,7 +446,7 @@ on your server even if `enable_registration` is `false`.
## Setting up a TURN server
For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See [docs/turn-howto.md](docs/turn-howto.md) for details.
a TURN server. See [docs/turn-howto.rst](docs/turn-howto.rst) for details.
## URL previews

View File

@@ -38,14 +38,14 @@ exclude sytest-blacklist
include pyproject.toml
recursive-include changelog.d *
prune .buildkite
prune .circleci
prune .codecov.yml
prune .coveragerc
prune .github
prune debian
prune demo/etc
prune docker
prune mypy.ini
prune snap
prune stubs
prune .circleci
prune .coveragerc
prune debian
prune .codecov.yml
prune .buildkite
exclude jenkins*
recursive-exclude jenkins *.sh

View File

@@ -115,7 +115,7 @@ Registering a new user from a client
By default, registration of new users via Matrix clients is disabled. To enable
it, specify ``enable_registration: true`` in ``homeserver.yaml``. (It is then
recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.md>`_.)
recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.rst>`_.)
Once ``enable_registration`` is set to ``true``, it is possible to register a
user via `riot.im <https://riot.im/app/#/register>`_ or other Matrix clients.
@@ -186,7 +186,7 @@ Almost all installations should opt to use PostreSQL. Advantages include:
synapse itself.
For information on how to install and use PostgreSQL, please see
`docs/postgres.md <docs/postgres.md>`_.
`docs/postgres.rst <docs/postgres.rst>`_.
.. _reverse-proxy:
@@ -201,7 +201,7 @@ It is recommended to put a reverse proxy such as
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
For information on configuring one, see `<docs/reverse_proxy.md>`_.
For information on configuring one, see `<docs/reverse_proxy.rst>`_.
Identity Servers
================
@@ -381,16 +381,3 @@ indicate that your server is also issuing far more outgoing federation
requests than can be accounted for by your users' activity, this is a
likely cause. The misbehavior can be worked around by setting
``use_presence: false`` in the Synapse config file.
People can't accept room invitations from me
--------------------------------------------
The typical failure mode here is that you send an invitation to someone
to join a room or direct chat, but when they go to accept it, they get an
error (typically along the lines of "Invalid signature"). They might see
something like the following in their logs::
2019-09-11 19:32:04,271 - synapse.federation.transport.server - 288 - WARNING - GET-11752 - authenticate_request failed: 401: Invalid signature for server <server> with key ed25519:a_EqML: Unable to verify signature for <server>
This is normally caused by a misconfiguration in your reverse-proxy. See
`<docs/reverse_proxy.rst>`_ and double-check that your settings are correct.

View File

@@ -2,268 +2,58 @@ Upgrading Synapse
=================
Before upgrading check if any special steps are required to upgrade from the
what you currently have installed to current version of Synapse. The extra
what you currently have installed to current version of synapse. The extra
instructions that may be required are listed later in this document.
* If Synapse was installed using `prebuilt packages
<INSTALL.md#prebuilt-packages>`_, you will need to follow the normal process
for upgrading those packages.
1. If synapse was installed in a virtualenv then activate that virtualenv before
upgrading. If synapse is installed in a virtualenv in ``~/synapse/env`` then
run:
* If Synapse was installed from source, then:
1. Activate the virtualenv before upgrading. For example, if Synapse is
installed in a virtualenv in ``~/synapse/env`` then run:
.. code:: bash
.. code:: bash
source ~/synapse/env/bin/activate
2. If Synapse was installed using pip then upgrade to the latest version by
running:
2. If synapse was installed using pip then upgrade to the latest version by
running:
.. code:: bash
.. code:: bash
pip install --upgrade matrix-synapse
pip install --upgrade matrix-synapse[all]
If Synapse was installed using git then upgrade to the latest version by
running:
# restart synapse
synctl restart
.. code:: bash
If synapse was installed using git then upgrade to the latest version by
running:
.. code:: bash
# Pull the latest version of the master branch.
git pull
pip install --upgrade .
3. Restart Synapse:
.. code:: bash
# Update synapse and its python dependencies.
pip install --upgrade .[all]
# restart synapse
./synctl restart
To check whether your update was successful, you can check the running server
version with:
To check whether your update was successful, you can check the Server header
returned by the Client-Server API:
.. code:: bash
# you may need to replace 'localhost:8008' if synapse is not configured
# to listen on port 8008.
curl http://localhost:8008/_synapse/admin/v1/server_version
Rolling back to older versions
------------------------------
Rolling back to previous releases can be difficult, due to database schema
changes between releases. Where we have been able to test the rollback process,
this will be noted below.
In general, you will need to undo any changes made during the upgrade process,
for example:
* pip:
.. code:: bash
source env/bin/activate
# replace `1.3.0` accordingly:
pip install matrix-synapse==1.3.0
* Debian:
.. code:: bash
# replace `1.3.0` and `stretch` accordingly:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.4.0
===================
New custom templates
--------------------
If you have configured a custom template directory with the
``email.template_dir`` option, be aware that there are new templates regarding
registration and threepid management (see below) that must be included.
* ``registration.html`` and ``registration.txt``
* ``registration_success.html`` and ``registration_failure.html``
* ``add_threepid.html`` and ``add_threepid.txt``
* ``add_threepid_failure.html`` and ``add_threepid_success.html``
Synapse will expect these files to exist inside the configured template
directory, and **will fail to start** if they are absent.
To view the default templates, see `synapse/res/templates
<https://github.com/matrix-org/synapse/tree/master/synapse/res/templates>`_.
3pid verification changes
-------------------------
**Note: As of this release, users will be unable to add phone numbers or email
addresses to their accounts, without changes to the Synapse configuration. This
includes adding an email address during registration.**
It is possible for a user to associate an email address or phone number
with their account, for a number of reasons:
* for use when logging in, as an alternative to the user id.
* in the case of email, as an alternative contact to help with account recovery.
* in the case of email, to receive notifications of missed messages.
Before an email address or phone number can be added to a user's account,
or before such an address is used to carry out a password-reset, Synapse must
confirm the operation with the owner of the email address or phone number.
It does this by sending an email or text giving the user a link or token to confirm
receipt. This process is known as '3pid verification'. ('3pid', or 'threepid',
stands for third-party identifier, and we use it to refer to external
identifiers such as email addresses and phone numbers.)
Previous versions of Synapse delegated the task of 3pid verification to an
identity server by default. In most cases this server is ``vector.im`` or
``matrix.org``.
In Synapse 1.4.0, for security and privacy reasons, the homeserver will no
longer delegate this task to an identity server by default. Instead,
the server administrator will need to explicitly decide how they would like the
verification messages to be sent.
In the medium term, the ``vector.im`` and ``matrix.org`` identity servers will
disable support for delegated 3pid verification entirely. However, in order to
ease the transition, they will retain the capability for a limited
period. Delegated email verification will be disabled on Monday 2nd December
2019 (giving roughly 2 months notice). Disabling delegated SMS verification
will follow some time after that once SMS verification support lands in
Synapse.
Once delegated 3pid verification support has been disabled in the ``vector.im`` and
``matrix.org`` identity servers, all Synapse versions that depend on those
instances will be unable to verify email and phone numbers through them. There
are no imminent plans to remove delegated 3pid verification from Sydent
generally. (Sydent is the identity server project that backs the ``vector.im`` and
``matrix.org`` instances).
Email
~~~~~
Following upgrade, to continue verifying email (e.g. as part of the
registration process), admins can either:-
* Configure Synapse to use an email server.
* Run or choose an identity server which allows delegated email verification
and delegate to it.
Configure SMTP in Synapse
+++++++++++++++++++++++++
To configure an SMTP server for Synapse, modify the configuration section
headed ``email``, and be sure to have at least the ``smtp_host, smtp_port``
and ``notif_from`` fields filled out.
You may also need to set ``smtp_user``, ``smtp_pass``, and
``require_transport_security``.
See the `sample configuration file <docs/sample_config.yaml>`_ for more details
on these settings.
Delegate email to an identity server
++++++++++++++++++++++++++++++++++++
Some admins will wish to continue using email verification as part of the
registration process, but will not immediately have an appropriate SMTP server
at hand.
To this end, we will continue to support email verification delegation via the
``vector.im`` and ``matrix.org`` identity servers for two months. Support for
delegated email verification will be disabled on Monday 2nd December.
The ``account_threepid_delegates`` dictionary defines whether the homeserver
should delegate an external server (typically an `identity server
<https://matrix.org/docs/spec/identity_service/r0.2.1>`_) to handle sending
confirmation messages via email and SMS.
So to delegate email verification, in ``homeserver.yaml``, set
``account_threepid_delegates.email`` to the base URL of an identity server. For
example:
.. code:: yaml
account_threepid_delegates:
email: https://example.com # Delegate email sending to example.com
Note that ``account_threepid_delegates.email`` replaces the deprecated
``email.trust_identity_server_for_password_resets``: if
``email.trust_identity_server_for_password_resets`` is set to ``true``, and
``account_threepid_delegates.email`` is not set, then the first entry in
``trusted_third_party_id_servers`` will be used as the
``account_threepid_delegate`` for email. This is to ensure compatibility with
existing Synapse installs that set up external server handling for these tasks
before v1.4.0. If ``email.trust_identity_server_for_password_resets`` is
``true`` and no trusted identity server domains are configured, Synapse will
report an error and refuse to start.
If ``email.trust_identity_server_for_password_resets`` is ``false`` or absent
and no ``email`` delegate is configured in ``account_threepid_delegates``,
then Synapse will send email verification messages itself, using the configured
SMTP server (see above).
that type.
Phone numbers
~~~~~~~~~~~~~
Synapse does not support phone-number verification itself, so the only way to
maintain the ability for users to add phone numbers to their accounts will be
by continuing to delegate phone number verification to the ``matrix.org`` and
``vector.im`` identity servers (or another identity server that supports SMS
sending).
The ``account_threepid_delegates`` dictionary defines whether the homeserver
should delegate an external server (typically an `identity server
<https://matrix.org/docs/spec/identity_service/r0.2.1>`_) to handle sending
confirmation messages via email and SMS.
So to delegate phone number verification, in ``homeserver.yaml``, set
``account_threepid_delegates.msisdn`` to the base URL of an identity
server. For example:
.. code:: yaml
account_threepid_delegates:
msisdn: https://example.com # Delegate sms sending to example.com
The ``matrix.org`` and ``vector.im`` identity servers will continue to support
delegated phone number verification via SMS until such time as it is possible
for admins to configure their servers to perform phone number verification
directly. More details will follow in a future release.
Rolling back to v1.3.1
----------------------
If you encounter problems with v1.4.0, it should be possible to roll back to
v1.3.1, subject to the following:
* The 'room statistics' engine was heavily reworked in this release (see
`#5971 <https://github.com/matrix-org/synapse/pull/5971>`_), including
significant changes to the database schema, which are not easily
reverted. This will cause the room statistics engine to stop updating when
you downgrade.
The room statistics are essentially unused in v1.3.1 (in future versions of
Synapse, they will be used to populate the room directory), so there should
be no loss of functionality. However, the statistics engine will write errors
to the logs, which can be avoided by setting the following in
`homeserver.yaml`:
.. code:: yaml
stats:
enabled: false
Don't forget to re-enable it when you upgrade again, in preparation for its
use in the room directory!
# replace <host.name> with the hostname of your synapse homeserver.
# You may need to specify a port (eg, :8448) if your server is not
# configured on port 443.
curl -kv https://<host.name>/_matrix/client/versions 2>&1 | grep "Server:"
Upgrading to v1.2.0
===================
Some counter metrics have been renamed, with the old names deprecated. See
`the metrics documentation <docs/metrics-howto.md#renaming-of-metrics--deprecation-of-old-names-in-12>`_
`the metrics documentation <docs/metrics-howto.rst#renaming-of-metrics--deprecation-of-old-names-in-12>`_
for details.
Upgrading to v1.1.0
@@ -342,19 +132,6 @@ server for password resets, set ``trust_identity_server_for_password_resets`` to
See the `sample configuration file <docs/sample_config.yaml>`_
for more details on these settings.
New email templates
---------------
Some new templates have been added to the default template directory for the purpose of the
homeserver sending its own password reset emails. If you have configured a custom
``template_dir`` in your Synapse config, these files will need to be added.
``password_reset.html`` and ``password_reset.txt`` are HTML and plain text templates
respectively that contain the contents of what will be emailed to the user upon attempting to
reset their password via email. ``password_reset_success.html`` and
``password_reset_failure.html`` are HTML files that the content of which (assuming no redirect
URL is set) will be shown to the user after they attempt to click the link in the email sent
to them.
Upgrading to v0.99.0
====================

View File

@@ -1 +0,0 @@
Update `user_filters` table to have a unique index, and non-null columns. Thanks to @pik for contributing this.

View File

@@ -1 +0,0 @@
Improve quality of thumbnails for 1-bit/8-bit color palette images.

View File

@@ -1 +0,0 @@
Return an HTTP 404 instead of 400 when requesting a filter by ID that is unknown to the server. Thanks to @krombel for contributing this!

View File

@@ -1 +0,0 @@
Fix a problem where users could be invited twice to the same group.

View File

@@ -1 +0,0 @@
Added domain validation when including a list of invitees upon room creation.

1
changelog.d/5633.bugfix Normal file
View File

@@ -0,0 +1 @@
Don't create broken room when power_level_content_override.users does not contain creator_id.

View File

@@ -1,4 +0,0 @@
Allow devices to be marked as hidden, for use by features such as cross-signing.
This adds a new field with a default value to the devices field in the database,
and so the database upgrade may take a long time depending on how many devices
are in the database.

View File

@@ -1 +0,0 @@
Allow uploading of cross-signing keys.

1
changelog.d/5844.misc Normal file
View File

@@ -0,0 +1 @@
Retry well-known lookup before the cache expires, giving a grace period where the remote well-known can be down but we still use the old result.

1
changelog.d/5856.feature Normal file
View File

@@ -0,0 +1 @@
Add a tag recording a request's authenticated entity and corresponding servlet in opentracing.

1
changelog.d/5857.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix database index so that different backup versions can have the same sessions.

1
changelog.d/5860.misc Normal file
View File

@@ -0,0 +1 @@
Remove log line for debugging issue #5407.

1
changelog.d/5863.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix Synapse looking for config options `password_reset_failure_template` and `password_reset_success_template`, when they are actually `password_reset_template_failure_html`, `password_reset_template_success_html`.

1
changelog.d/5878.feature Normal file
View File

@@ -0,0 +1 @@
Add admin API endpoint for setting whether or not a user is a server administrator.

View File

@@ -1 +0,0 @@
Move lookup-related functions from RoomMemberHandler to IdentityHandler.

View File

@@ -1 +0,0 @@
Improve performance of the public room list directory.

View File

@@ -1 +0,0 @@
Edit header dicts docstrings in SimpleHttpClient to note that `str` or `bytes` can be passed as header keys.

View File

@@ -1 +0,0 @@
Add snapcraft packaging information. Contributed by @devec0.

View File

@@ -1 +0,0 @@
Kill off half-implemented password-reset via sms.

View File

@@ -1 +0,0 @@
Remove `get_user_by_req` opentracing span and add some tags.

View File

@@ -1 +0,0 @@
Fix bug when uploading a large file: Synapse responds with `M_UNKNOWN` while it should be `M_TOO_LARGE` according to spec. Contributed by Anshul Angaria.

View File

@@ -1 +0,0 @@
CAS login now provides a default display name for users if a `displayname_attribute` is set in the configuration file.

View File

@@ -1 +0,0 @@
Drop some unused database tables.

View File

@@ -1 +0,0 @@
Reject all pending invites for a user during deactivation.

View File

@@ -1 +0,0 @@
Add env var to turn on tracking of log context changes.

View File

@@ -1 +0,0 @@
Refactor configuration loading to allow better typechecking.

View File

@@ -1 +0,0 @@
Log responder when responding to media request.

View File

@@ -1 +0,0 @@
Prevent user push rules being deleted from a room when it is upgraded.

View File

@@ -1 +0,0 @@
Don't 500 when trying to exchange a revoked 3PID invite.

View File

@@ -1 +0,0 @@
Improve performance of `find_next_generated_user_id` DB query.

View File

@@ -1 +0,0 @@
Expand type-checking on modules imported by synapse.config.

View File

@@ -1 +0,0 @@
Improve performance of the public room list directory.

View File

@@ -1 +0,0 @@
Improve performance of the public room list directory.

View File

@@ -1 +0,0 @@
Improve performance of the public room list directory.

View File

@@ -1 +0,0 @@
Fix transferring notifications and tags when joining an upgraded room that is new to your server.

View File

@@ -1 +0,0 @@
Use Postgres ANY for selecting many values.

View File

@@ -1 +0,0 @@
Add more caching to `_get_joined_users_from_context` DB query.

View File

@@ -1 +0,0 @@
Add some metrics on the federation sender.

View File

@@ -1 +0,0 @@
Fix bug where guest account registration can wedge after restart.

View File

@@ -1 +0,0 @@
Add some logging to the rooms stats updates, to try to track down a flaky test.

View File

@@ -1 +0,0 @@
Fix monthly active user reaping where reserved users are specified.

View File

@@ -1 +0,0 @@
Fix /federation/v1/state endpoint for recent room versions.

View File

@@ -1 +0,0 @@
Update `user_filters` table to have a unique index, and non-null columns. Thanks to @pik for contributing this.

View File

@@ -1 +0,0 @@
Make the `synapse_port_db` script create the right indexes on a new PostgreSQL database.

View File

@@ -1 +0,0 @@
Remove unused `timeout` parameter from `_get_public_room_list`.

View File

@@ -1 +0,0 @@
Update `user_filters` table to have a unique index, and non-null columns. Thanks to @pik for contributing this.

View File

@@ -1 +0,0 @@
Fix bug where we were updating censored events as bytes rather than text, occasionally causing invalid JSON to be inserted, breaking APIs that attempted to fetch such events.

View File

@@ -1 +0,0 @@
Reject (accidental) attempts to insert bytes into postgres tables.

View File

@@ -1 +0,0 @@
Fix occasional missed updates in the room and user directories.

View File

@@ -1 +0,0 @@
Make `version` optional in body of `PUT /room_keys/version/{version}`, since it's redundant.

View File

@@ -1 +0,0 @@
Add snapcraft packaging information. Contributed by @devec0.

View File

@@ -1 +0,0 @@
Make storage layer responsible for adding device names to key, rather than the handler.

View File

@@ -1 +0,0 @@
Fix tracing of non-JSON APIs, /media, /key etc.

View File

@@ -1 +0,0 @@
Port synapse.rest.admin module to use async/await.

View File

@@ -1 +0,0 @@
Fix logging getting lost for the docker image.

View File

@@ -1 +0,0 @@
Fix bug where presence would not get timed out correctly if a synchrotron worker is used and restarted.

View File

@@ -1 +0,0 @@
Remove some unused event-auth code.

View File

@@ -1 +0,0 @@
synapse_port_db: Add 2 additional BOOLEAN_COLUMNS to be able to convert from database schema v56.

View File

@@ -1 +0,0 @@
Remove Auth.check method.

View File

@@ -1 +0,0 @@
Remove `format_tap.py` script in favour of a perl reimplementation in Sytest's repo.

View File

@@ -37,8 +37,6 @@ from signedjson.sign import verify_signed_json, SignatureVerifyException
CONFIG_JSON = "cmdclient_config.json"
# TODO: The concept of trusted identity servers has been deprecated. This option and checks
# should be removed
TRUSTED_ID_SERVERS = ["localhost:8001"]
@@ -270,7 +268,6 @@ class SynapseCmd(cmd.Cmd):
@defer.inlineCallbacks
def _do_emailrequest(self, args):
# TODO: Update to use v2 Identity Service API endpoint
url = (
self._identityServerUrl()
+ "/_matrix/identity/api/v1/validate/email/requestToken"
@@ -305,7 +302,6 @@ class SynapseCmd(cmd.Cmd):
@defer.inlineCallbacks
def _do_emailvalidate(self, args):
# TODO: Update to use v2 Identity Service API endpoint
url = (
self._identityServerUrl()
+ "/_matrix/identity/api/v1/validate/email/submitToken"
@@ -334,7 +330,6 @@ class SynapseCmd(cmd.Cmd):
@defer.inlineCallbacks
def _do_3pidbind(self, args):
# TODO: Update to use v2 Identity Service API endpoint
url = self._identityServerUrl() + "/_matrix/identity/api/v1/3pid/bind"
json_res = yield self.http_client.do_request(
@@ -403,7 +398,6 @@ class SynapseCmd(cmd.Cmd):
@defer.inlineCallbacks
def _do_invite(self, roomid, userstring):
if not userstring.startswith("@") and self._is_on("complete_usernames"):
# TODO: Update to use v2 Identity Service API endpoint
url = self._identityServerUrl() + "/_matrix/identity/api/v1/lookup"
json_res = yield self.http_client.do_request(
@@ -413,7 +407,6 @@ class SynapseCmd(cmd.Cmd):
mxid = None
if "mxid" in json_res and "signatures" in json_res:
# TODO: Update to use v2 Identity Service API endpoint
url = (
self._identityServerUrl()
+ "/_matrix/identity/api/v1/pubkey/ed25519"

View File

@@ -1,26 +1,39 @@
# Synapse Docker
### Configuration
FIXME: this is out-of-date as of
https://github.com/matrix-org/synapse/issues/5518. Contributions to bring it up
to date would be welcome.
### Automated configuration
It is recommended that you use Docker Compose to run your containers, including
this image and a Postgres server. A sample ``docker-compose.yml`` is provided,
including example labels for reverse proxying and other artifacts.
Read the section about environment variables and set at least mandatory variables,
then run the server:
```
docker-compose up -d
```
If secrets are not specified in the environment variables, they will be generated
as part of the startup. Please ensure these secrets are kept between launches of the
Docker container, as their loss may require users to log in again.
### Manual configuration
A sample ``docker-compose.yml`` is provided, including example labels for
reverse proxying and other artifacts. The docker-compose file is an example,
please comment/uncomment sections that are not suitable for your usecase.
Specify a ``SYNAPSE_CONFIG_PATH``, preferably to a persistent path,
to use manual configuration.
To generate a fresh `homeserver.yaml`, you can use the `generate` command.
(See the [documentation](../../docker/README.md#generating-a-configuration-file)
for more information.) You will need to specify appropriate values for at least the
`SYNAPSE_SERVER_NAME` and `SYNAPSE_REPORT_STATS` environment variables. For example:
to use manual configuration. To generate a fresh ``homeserver.yaml``, simply run:
```
docker-compose run --rm -e SYNAPSE_SERVER_NAME=my.matrix.host -e SYNAPSE_REPORT_STATS=yes synapse generate
docker-compose run --rm -e SYNAPSE_SERVER_NAME=my.matrix.host synapse generate
```
(This will also generate necessary signing keys.)
Then, customize your configuration and run the server:
```

View File

@@ -15,10 +15,13 @@ services:
restart: unless-stopped
# See the readme for a full documentation of the environment settings
environment:
- SYNAPSE_CONFIG_PATH=/etc/homeserver.yaml
- SYNAPSE_SERVER_NAME=my.matrix.host
- SYNAPSE_REPORT_STATS=no
- SYNAPSE_ENABLE_REGISTRATION=yes
- SYNAPSE_LOG_LEVEL=INFO
- POSTGRES_PASSWORD=changeme
volumes:
# You may either store all the files in a local folder
- ./matrix-config:/etc
- ./files:/data
# .. or you may split this between different storage points
# - ./files:/data
@@ -32,23 +35,9 @@ services:
- 8448:8448/tcp
# ... or use a reverse proxy, here is an example for traefik:
labels:
# The following lines are valid for Traefik version 1.x:
- traefik.enable=true
- traefik.frontend.rule=Host:my.matrix.Host
- traefik.port=8008
# Alternatively, for Traefik version 2.0:
- traefik.enable=true
- traefik.http.routers.http-synapse.entryPoints=http
- traefik.http.routers.http-synapse.rule=Host(`my.matrix.host`)
- traefik.http.middlewares.https_redirect.redirectscheme.scheme=https
- traefik.http.middlewares.https_redirect.redirectscheme.permanent=true
- traefik.http.routers.http-synapse.middlewares=https_redirect
- traefik.http.routers.https-synapse.entryPoints=https
- traefik.http.routers.https-synapse.rule=Host(`my.matrix.host`)
- traefik.http.routers.https-synapse.service=synapse
- traefik.http.routers.https-synapse.tls=true
- traefik.http.services.synapse.loadbalancer.server.port=8008
- traefik.http.routers.https-synapse.tls.certResolver=le-ssl
db:
image: docker.io/postgres:10-alpine

12
debian/changelog vendored
View File

@@ -1,15 +1,3 @@
matrix-synapse-py3 (1.4.1) stable; urgency=medium
* New synapse release 1.4.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 18 Oct 2019 10:13:27 +0100
matrix-synapse-py3 (1.4.0) stable; urgency=medium
* New synapse release 1.4.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 03 Oct 2019 13:22:25 +0100
matrix-synapse-py3 (1.3.1) stable; urgency=medium
* New synapse release 1.3.1.

View File

@@ -17,7 +17,7 @@ By default, the image expects a single volume, located at ``/data``, that will h
* the appservices configuration.
You are free to use separate volumes depending on storage endpoints at your
disposal. For instance, ``/data/media`` could be stored on a large but low
disposal. For instance, ``/data/media`` coud be stored on a large but low
performance hdd storage while other files could be stored on high performance
endpoints.
@@ -27,8 +27,8 @@ configuration file there. Multiple application services are supported.
## Generating a configuration file
The first step is to generate a valid config file. To do this, you can run the
image with the `generate` command line option.
The first step is to genearte a valid config file. To do this, you can run the
image with the `generate` commandline option.
You will need to specify values for the `SYNAPSE_SERVER_NAME` and
`SYNAPSE_REPORT_STATS` environment variable, and mount a docker volume to store
@@ -59,7 +59,7 @@ The following environment variables are supported in `generate` mode:
* `SYNAPSE_CONFIG_PATH`: path to the file to be generated. Defaults to
`<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
* `SYNAPSE_DATA_DIR`: where the generated config will put persistent data
such as the database and media store. Defaults to `/data`.
such as the datatase and media store. Defaults to `/data`.
* `UID`, `GID`: the user id and group id to use for creating the data
directories. Defaults to `991`, `991`.
@@ -89,8 +89,6 @@ The following environment variables are supported in run mode:
`/data`.
* `SYNAPSE_CONFIG_PATH`: path to the config file. Defaults to
`<SYNAPSE_CONFIG_DIR>/homeserver.yaml`.
* `SYNAPSE_WORKER`: module to execute, used when running synapse with workers.
Defaults to `synapse.app.homeserver`, which is suitable for non-worker mode.
* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
* `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
@@ -117,7 +115,7 @@ not given).
To migrate from a dynamic configuration file to a static one, run the docker
container once with the environment variables set, and `migrate_config`
command line option. For example:
commandline option. For example:
```
docker run -it --rm \

View File

@@ -24,5 +24,3 @@ loggers:
root:
level: {{ SYNAPSE_LOG_LEVEL or "INFO" }}
handlers: [console]
disable_existing_loggers: false

View File

@@ -41,8 +41,8 @@ def generate_config_from_template(config_dir, config_path, environ, ownership):
config_dir (str): where to put generated config files
config_path (str): where to put the main config file
environ (dict): environment dictionary
ownership (str|None): "<user>:<group>" string which will be used to set
ownership of the generated configs. If None, ownership will not change.
ownership (str): "<user>:<group>" string which will be used to set
ownership of the generated configs
"""
for v in ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"):
if v not in environ:
@@ -105,24 +105,24 @@ def generate_config_from_template(config_dir, config_path, environ, ownership):
log("Generating log config file " + log_config_file)
convert("/conf/log.config", log_config_file, environ)
subprocess.check_output(["chown", "-R", ownership, "/data"])
# Hopefully we already have a signing key, but generate one if not.
args = [
"python",
"-m",
"synapse.app.homeserver",
"--config-path",
config_path,
# tell synapse to put generated keys in /data rather than /compiled
"--keys-directory",
config_dir,
"--generate-keys",
]
if ownership is not None:
subprocess.check_output(["chown", "-R", ownership, "/data"])
args = ["su-exec", ownership] + args
subprocess.check_output(args)
subprocess.check_output(
[
"su-exec",
ownership,
"python",
"-m",
"synapse.app.homeserver",
"--config-path",
config_path,
# tell synapse to put generated keys in /data rather than /compiled
"--keys-directory",
config_dir,
"--generate-keys",
]
)
def run_generate_config(environ, ownership):
@@ -130,7 +130,7 @@ def run_generate_config(environ, ownership):
Args:
environ (dict): env var dict
ownership (str|None): "userid:groupid" arg for chmod. If None, ownership will not change.
ownership (str): "userid:groupid" arg for chmod
Never returns.
"""
@@ -149,6 +149,9 @@ def run_generate_config(environ, ownership):
log("Creating log config %s" % (log_config_file,))
convert("/conf/log.config", log_config_file, environ)
# make sure that synapse has perms to write to the data dir.
subprocess.check_output(["chown", ownership, data_dir])
args = [
"python",
"-m",
@@ -167,34 +170,12 @@ def run_generate_config(environ, ownership):
"--open-private-ports",
]
# log("running %s" % (args, ))
if ownership is not None:
args = ["su-exec", ownership] + args
os.execv("/sbin/su-exec", args)
# make sure that synapse has perms to write to the data dir.
subprocess.check_output(["chown", ownership, data_dir])
else:
os.execv("/usr/local/bin/python", args)
os.execv("/usr/local/bin/python", args)
def main(args, environ):
mode = args[1] if len(args) > 1 else None
desired_uid = int(environ.get("UID", "991"))
desired_gid = int(environ.get("GID", "991"))
synapse_worker = environ.get("SYNAPSE_WORKER", "synapse.app.homeserver")
if (desired_uid == os.getuid()) and (desired_gid == os.getgid()):
ownership = None
else:
ownership = "{}:{}".format(desired_uid, desired_gid)
log(
"Container running as UserID %s:%s, ENV (or defaults) requests %s:%s"
% (os.getuid(), os.getgid(), desired_uid, desired_gid)
)
if ownership is None:
log("Will not perform chmod/su-exec as UserID already matches request")
ownership = "{}:{}".format(environ.get("UID", 991), environ.get("GID", 991))
# In generate mode, generate a configuration and missing keys, then exit
if mode == "generate":
@@ -246,12 +227,16 @@ def main(args, environ):
log("Starting synapse with config file " + config_path)
args = ["python", "-m", synapse_worker, "--config-path", config_path]
if ownership is not None:
args = ["su-exec", ownership] + args
os.execv("/sbin/su-exec", args)
else:
os.execv("/usr/local/bin/python", args)
args = [
"su-exec",
ownership,
"python",
"-m",
"synapse.app.homeserver",
"--config-path",
config_path,
]
os.execv("/sbin/su-exec", args)
if __name__ == "__main__":
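The net effect of the entrypoint change above is that `su-exec` (and the `chown` of the data directory) is only used when the container's current UID/GID differs from the requested `UID`/`GID`. As a hedged illustration only (the helper name and defaults are invented and are not the real `docker/start.py` API), the pattern looks roughly like this:
```python
import os
import subprocess


def exec_as_owner(args, ownership, data_dir="/data"):
    """Illustrative sketch: run `args`, dropping privileges via su-exec
    only when `ownership` is a "uid:gid" string (i.e. the requested user
    differs from the current one); `ownership` is None otherwise."""
    if ownership is not None:
        # give the target user write access to the data dir before exec'ing
        subprocess.check_output(["chown", "-R", ownership, data_dir])
        os.execv("/sbin/su-exec", ["su-exec", ownership] + args)
    else:
        os.execv("/usr/local/bin/python", args)


# e.g. exec_as_owner(["python", "-m", "synapse.app.homeserver",
#                     "--config-path", "/data/homeserver.yaml"], ownership)
```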

View File

@@ -1,31 +1,30 @@
# Overview
Captcha can be enabled for this home server. This file explains how to do that.
The captcha mechanism used is Google's ReCaptcha. This requires API keys from Google.
## Getting keys
Getting keys
------------
Requires a public/private key pair from:
<https://developers.google.com/recaptcha/>
https://developers.google.com/recaptcha/
Must be a reCAPTCHA v2 key using the "I'm not a robot" Checkbox option
## Setting ReCaptcha Keys
Setting ReCaptcha Keys
----------------------
The keys are a config option on the home server config. If they are not
visible, you can generate them via `--generate-config`. Set the following value:
visible, you can generate them via --generate-config. Set the following value::
recaptcha_public_key: YOUR_PUBLIC_KEY
recaptcha_private_key: YOUR_PRIVATE_KEY
recaptcha_public_key: YOUR_PUBLIC_KEY
recaptcha_private_key: YOUR_PRIVATE_KEY
In addition, you MUST enable captchas via:
In addition, you MUST enable captchas via::
enable_registration_captcha: true
## Configuring IP used for auth
enable_registration_captcha: true
Configuring IP used for auth
----------------------------
The ReCaptcha API requires that the IP address of the user who solved the
captcha is sent. If the client is connecting through a proxy or load balancer,
it may be required to use the `X-Forwarded-For` (XFF) header instead of the origin
IP address. This can be configured using the `x_forwarded` directive in the
it may be required to use the X-Forwarded-For (XFF) header instead of the origin
IP address. This can be configured using the x_forwarded directive in the
listeners section of the homeserver.yaml configuration file.

View File

@@ -147,7 +147,7 @@ your domain, you can simply route all traffic through the reverse proxy by
updating the SRV record appropriately (or removing it, if the proxy listens on
8448).
See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
See [reverse_proxy.rst](reverse_proxy.rst) for information on setting up a
reverse proxy.
#### Option 3: add a .well-known file to delegate your matrix traffic
@@ -319,7 +319,7 @@ We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.
See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
See [reverse_proxy.rst](reverse_proxy.rst) for information on setting up a
reverse proxy.
### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?

View File

@@ -1,7 +0,0 @@
# Synapse Documentation
This directory contains documentation specific to the `synapse` homeserver.
All matrix-generic documentation now lives in its own project, located at [matrix-org/matrix-doc](https://github.com/matrix-org/matrix-doc)
(Note: some items here may be moved to [matrix-org/matrix-doc](https://github.com/matrix-org/matrix-doc) at some point in the future.)

6
docs/README.rst Normal file
View File

@@ -0,0 +1,6 @@
All matrix-generic documentation now lives in its own project at
github.com/matrix-org/matrix-doc.git
Only Synapse implementation-specific documentation lives here now
(together with some older stuff that will shortly be migrated over to matrix-doc)

View File

@@ -10,15 +10,3 @@ server admin by updating the database directly, e.g.:
``UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'``
Restarting may be required for the changes to register.
Using an admin access_token
###########################
Many of the API calls listed in the documentation here will require you to include an admin `access_token`.
Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
Once you have your `access_token`, to include it in a request, the best option is to add the token to a request header:
``curl --header "Authorization: Bearer <access_token>" <the_rest_of_your_API_request>``
For more details, please refer to the complete `matrix spec documentation <https://matrix.org/docs/spec/client_server/r0.5.0#using-access-tokens>`_.

View File

@@ -1,18 +0,0 @@
Purge room API
==============
This API will remove all trace of a room from your database.
All local users must have left the room before it can be removed.
The API is:
```
POST /_synapse/admin/v1/purge_room
{
"room_id": "!room:id"
}
```
You must authenticate using the access token of an admin user.
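As an illustration only (not part of the original API docs; the base URL and token below are placeholders), the request could be made from Python with the widely used `requests` library:
```python
import requests

BASE_URL = "https://localhost:8448"    # placeholder homeserver URL
ADMIN_TOKEN = "<admin access_token>"   # placeholder admin token

resp = requests.post(
    BASE_URL + "/_synapse/admin/v1/purge_room",
    headers={"Authorization": "Bearer " + ADMIN_TOKEN},
    json={"room_id": "!room:id"},
)
resp.raise_for_status()  # raises if the server rejected the request
```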

View File

@@ -86,25 +86,6 @@ with a body of:
including an ``access_token`` of a server admin.
Get whether a user is a server administrator or not
===================================================
The api is::
GET /_synapse/admin/v1/users/<user_id>/admin
including an ``access_token`` of a server admin.
A response body like the following is returned:
.. code:: json
{
"admin": true
}
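For illustration (not part of the original docs; the homeserver URL, user ID and token are placeholders), querying this endpoint with Python's `requests` might look like:
```python
import requests
from urllib.parse import quote

user_id = quote("@foo:bar.com", safe="")  # user IDs must be URL-encoded

resp = requests.get(
    "https://localhost:8448/_synapse/admin/v1/users/%s/admin" % user_id,
    headers={"Authorization": "Bearer <admin access_token>"},
)
print(resp.json().get("admin"))  # True if the user is a server admin
```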
Change whether a user is a server administrator or not
======================================================

View File

@@ -1,81 +0,0 @@
> **Warning**
> These architecture notes are spectacularly old, and date back
> to when Synapse was just federation code in isolation. This should be
> merged into the main spec.
# Server to Server
## Server to Server Stack
To use the server to server stack, home servers should only need to
interact with the Messaging layer.
The server to server side of things is designed into 4 distinct layers:
1. Messaging Layer
2. Pdu Layer
3. Transaction Layer
4. Transport Layer
Where the bottom (the transport layer) is what talks to the internet via
HTTP, and the top (the messaging layer) talks to the rest of the Home
Server with a domain specific API.
1. **Messaging Layer**
This is what the rest of the Home Server hits to send messages, join rooms,
etc. It also allows you to register callbacks for when it gets notified by
lower levels that e.g. a new message has been received.
It is responsible for serializing requests to send to the data
layer, and to parse requests received from the data layer.
2. **PDU Layer**
This layer handles:
- duplicate `pdu_id`'s - i.e., it makes sure we ignore them.
- responding to requests for a given `pdu_id`
- responding to requests for all metadata for a given context (i.e. room)
- handling incoming backfill requests
So it has to parse incoming messages to discover which are metadata and
which aren't, and has to correctly clobber existing metadata where
appropriate.
For incoming PDUs, it has to check the PDUs it references to see
if we have missed any. If we have go and ask someone (another
home server) for it.
3. **Transaction Layer**
This layer makes incoming requests idempotent. i.e., it stores
which transaction id's we have seen and what our response were.
If we have already seen a message with the given transaction id,
we do not notify higher levels but simply respond with the
previous response.
`transaction_id` is from "`GET /send/<tx_id>/`"
It's also responsible for batching PDUs into single transaction for
sending to remote destinations, so that we only ever have one
transaction in flight to a given destination at any one time.
This is also responsible for answering requests for things after a
given set of transactions, i.e., ask for everything after 'ver' X.
4. **Transport Layer**
This is responsible for starting a HTTP server and hitting the
correct callbacks on the Transaction layer, as well as sending
both data and requests for data.
## Persistence
We persist things in a single sqlite3 database. All database queries get
run on a separate, dedicated thread. This means that we only ever have one
query running at a time, making it a lot easier to do things in a safe
manner.
The queries are located in the `synapse.persistence.transactions` module,
and the table information in the `synapse.persistence.tables` module.
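To make the idempotency rule of the Transaction Layer (point 3 above) concrete, here is a small hedged sketch; the class and method names are invented for illustration and are not the real `synapse.persistence.transactions` API:
```python
class TransactionLayer:
    """Illustrative only: remember responses keyed by (origin, txn_id)."""

    def __init__(self):
        self._responses = {}  # (origin, transaction_id) -> response

    def on_incoming_transaction(self, origin, transaction_id, pdus, handler):
        key = (origin, transaction_id)
        if key in self._responses:
            # Seen this transaction before: replay the stored response
            # without notifying the higher layers again.
            return self._responses[key]
        response = handler(pdus)  # hand the PDUs up to the PDU layer
        self._responses[key] = response
        return response
```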

View File

@@ -0,0 +1,59 @@
.. WARNING::
These architecture notes are spectacularly old, and date back to when Synapse
was just federation code in isolation. This should be merged into the main
spec.
= Server to Server =
== Server to Server Stack ==
To use the server to server stack, home servers should only need to interact with the Messaging layer.
The server to server side of things is designed into 4 distinct layers:
1. Messaging Layer
2. Pdu Layer
3. Transaction Layer
4. Transport Layer
Where the bottom (the transport layer) is what talks to the internet via HTTP, and the top (the messaging layer) talks to the rest of the Home Server with a domain specific API.
1. Messaging Layer
This is what the rest of the Home Server hits to send messages, join rooms, etc. It also allows you to register callbacks for when it gets notified by lower levels that e.g. a new message has been received.
It is responsible for serializing requests to send to the data layer, and to parse requests received from the data layer.
2. PDU Layer
This layer handles:
* duplicate pdu_id's - i.e., it makes sure we ignore them.
* responding to requests for a given pdu_id
* responding to requests for all metadata for a given context (i.e. room)
* handling incoming backfill requests
So it has to parse incoming messages to discover which are metadata and which aren't, and has to correctly clobber existing metadata where appropriate.
For incoming PDUs, it has to check the PDUs it references to see if we have missed any. If we have go and ask someone (another home server) for it.
3. Transaction Layer
This layer makes incoming requests idempotent. I.e., it stores which transaction id's we have seen and what our response were. If we have already seen a message with the given transaction id, we do not notify higher levels but simply respond with the previous response.
transaction_id is from "GET /send/<tx_id>/"
It's also responsible for batching PDUs into single transaction for sending to remote destinations, so that we only ever have one transaction in flight to a given destination at any one time.
This is also responsible for answering requests for things after a given set of transactions, i.e., ask for everything after 'ver' X.
4. Transport Layer
This is responsible for starting a HTTP server and hitting the correct callbacks on the Transaction layer, as well as sending both data and requests for data.
== Persistence ==
We persist things in a single sqlite3 database. All database queries get run on a separate, dedicated thread. This means that we only ever have one query running at a time, making it a lot easier to do things in a safe manner.
The queries are located in the synapse.persistence.transactions module, and the table information in the synapse.persistence.tables module.

View File

@@ -1,31 +0,0 @@
# Registering an Application Service
The registration of new application services depends on the homeserver used.
In synapse, you need to create a new configuration file for your AS and add it
to the list specified under the `app_service_config_files` config
option in your synapse config.
For example:
```yaml
app_service_config_files:
- /home/matrix/.synapse/<your-AS>.yaml
```
The format of the AS configuration file is as follows:
```yaml
url: <base url of AS>
as_token: <token AS will add to requests to HS>
hs_token: <token HS will add to requests to AS>
sender_localpart: <localpart of AS user>
namespaces:
users: # List of users we're interested in
- exclusive: <bool>
regex: <regex>
- ...
aliases: [] # List of aliases we're interested in
rooms: [] # List of room ids we're interested in
```
See the [spec](https://matrix.org/docs/spec/application_service/unstable.html) for further details on how application services work.

View File

@@ -0,0 +1,35 @@
Registering an Application Service
==================================
The registration of new application services depends on the homeserver used.
In synapse, you need to create a new configuration file for your AS and add it
to the list specified under the ``app_service_config_files`` config
option in your synapse config.
For example:
.. code-block:: yaml
app_service_config_files:
- /home/matrix/.synapse/<your-AS>.yaml
The format of the AS configuration file is as follows:
.. code-block:: yaml
url: <base url of AS>
as_token: <token AS will add to requests to HS>
hs_token: <token HS will add to requests to AS>
sender_localpart: <localpart of AS user>
namespaces:
users: # List of users we're interested in
- exclusive: <bool>
regex: <regex>
- ...
aliases: [] # List of aliases we're interested in
rooms: [] # List of room ids we're interested in
See the spec_ for further details on how application services work.
.. _spec: https://matrix.org/docs/spec/application_service/unstable.html

View File

@@ -1,65 +0,0 @@
# Synapse Architecture
As of the end of Oct 2014, Synapse's overall architecture looks like:
synapse
.-----------------------------------------------------.
| Notifier |
| ^ | |
| | | |
| .------------|------. |
| | handlers/ | | |
| | v | |
| | Event*Handler <--------> rest/* <=> Client
| | Rooms*Handler | |
HS <=> federation/* <==> FederationHandler | |
| | | PresenceHandler | |
| | | TypingHandler | |
| | '-------------------' |
| | | | |
| | state/* | |
| | | | |
| | v v |
| `--------------> storage/* |
| | |
'--------------------------|--------------------------'
v
.----.
| DB |
'----'
- Handlers: business logic of synapse itself. Follows a set contract of BaseHandler:
- BaseHandler gives us onNewRoomEvent which: (TODO: flesh this out and make it less cryptic):
- handle_state(event)
- auth(event)
- persist_event(event)
- notify notifier or federation(event)
- PresenceHandler: use distributor to get EDUs out of Federation.
Very lightweight logic built on the distributor
- TypingHandler: use distributor to get EDUs out of Federation.
Very lightweight logic built on the distributor
- EventsHandler: handles the events stream...
- FederationHandler: - gets PDU from Federation Layer; turns into
an event; follows basehandler functionality.
- RoomsHandler: does all the room logic, including members - lots
of classes in RoomsHandler.
- ProfileHandler: talks to the storage to store/retrieve profile
info.
- EventFactory: generates events of particular event types.
- Notifier: Backs the events handler
- REST: Interfaces handlers and events to the outside world via
HTTP/JSON. Converts events back and forth from JSON.
- Federation: holds the HTTP client & server to talk to other servers.
Does replication to make sure there's nothing missing in the graph.
Handles reliability. Handles txns.
- Distributor: generic event bus. used for presence & typing only
currently. Notifier could be implemented using Distributor - so far
we are only using for things which actually /require/ dynamic
pluggability however as it can obfuscate the actual flow of control.
- Auth: helper singleton to say whether a given event is allowed to do
a given thing (TODO: put this on the diagram)
- State: helper singleton: does state conflict resolution. You give it
an event and it tells you if it actually updates the state or not,
and annotates the event up properly and handles merge conflict
resolution.
- Storage: abstracts the storage engine.

68
docs/architecture.rst Normal file
View File

@@ -0,0 +1,68 @@
Synapse Architecture
====================
As of the end of Oct 2014, Synapse's overall architecture looks like::
synapse
.-----------------------------------------------------.
| Notifier |
| ^ | |
| | | |
| .------------|------. |
| | handlers/ | | |
| | v | |
| | Event*Handler <--------> rest/* <=> Client
| | Rooms*Handler | |
HSes <=> federation/* <==> FederationHandler | |
| | | PresenceHandler | |
| | | TypingHandler | |
| | '-------------------' |
| | | | |
| | state/* | |
| | | | |
| | v v |
| `--------------> storage/* |
| | |
'--------------------------|--------------------------'
v
.----.
| DB |
'----'
* Handlers: business logic of synapse itself. Follows a set contract of BaseHandler:
- BaseHandler gives us onNewRoomEvent which: (TODO: flesh this out and make it less cryptic):
+ handle_state(event)
+ auth(event)
+ persist_event(event)
+ notify notifier or federation(event)
- PresenceHandler: use distributor to get EDUs out of Federation. Very
lightweight logic built on the distributor
- TypingHandler: use distributor to get EDUs out of Federation. Very
lightweight logic built on the distributor
- EventsHandler: handles the events stream...
- FederationHandler: - gets PDU from Federation Layer; turns into an event;
follows basehandler functionality.
- RoomsHandler: does all the room logic, including members - lots of classes in
RoomsHandler.
- ProfileHandler: talks to the storage to store/retrieve profile info.
* EventFactory: generates events of particular event types.
* Notifier: Backs the events handler
* REST: Interfaces handlers and events to the outside world via HTTP/JSON.
Converts events back and forth from JSON.
* Federation: holds the HTTP client & server to talk to other servers. Does
replication to make sure there's nothing missing in the graph. Handles
reliability. Handles txns.
* Distributor: generic event bus. used for presence & typing only currently.
Notifier could be implemented using Distributor - so far we are only using for
things which actually /require/ dynamic pluggability however as it can
obfuscate the actual flow of control.
* Auth: helper singleton to say whether a given event is allowed to do a given
thing (TODO: put this on the diagram)
* State: helper singleton: does state conflict resolution. You give it an event
and it tells you if it actually updates the state or not, and annotates the
event up properly and handles merge conflict resolution.
* Storage: abstracts the storage engine.

View File

@@ -1,169 +0,0 @@
# Code Style
## Formatting tools
The Synapse codebase uses a number of code formatting tools in order to
quickly and automatically check for formatting (and sometimes logical)
errors in code.
The necessary tools are detailed below.
- **black**
The Synapse codebase uses [black](https://pypi.org/project/black/)
as an opinionated code formatter, ensuring all committed code is
properly formatted.
First install `black` with:
pip install --upgrade black
Have `black` auto-format your code (it shouldn't change any
functionality) with:
black . --exclude="\.tox|build|env"
- **flake8**
`flake8` is a code checking tool. We require code to pass `flake8`
before being merged into the codebase.
Install `flake8` with:
pip install --upgrade flake8
Check all application and test code with:
flake8 synapse tests
- **isort**
`isort` ensures imports are nicely formatted, and can suggest and
auto-fix issues such as double-importing.
Install `isort` with:
pip install --upgrade isort
Auto-fix imports with:
isort -rc synapse tests
`-rc` means to recursively search the given directories.
It's worth noting that modern IDEs and text editors can run these tools
automatically on save. It may be worth looking into whether this
functionality is supported in your editor for a more convenient
development workflow. It is not, however, recommended to run `flake8` on
save as it takes a while and is very resource intensive.
## General rules
- **Naming**:
- Use camel case for class and type names
- Use underscores for functions and variables.
- **Docstrings**: should follow the [google code
style](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings).
This is so that we can generate documentation with
[sphinx](http://sphinxcontrib-napoleon.readthedocs.org/en/latest/).
See the
[examples](http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html)
in the sphinx documentation.
- **Imports**:
- Imports should be sorted by `isort` as described above.
- Prefer to import classes and functions rather than packages or
modules.
Example:
from synapse.types import UserID
...
user_id = UserID(local, server)
is preferred over:
from synapse import types
...
user_id = types.UserID(local, server)
(or any other variant).
This goes against the advice in the Google style guide, but it
means that errors in the name are caught early (at import time).
- Avoid wildcard imports (`from synapse.types import *`) and
relative imports (`from .types import UserID`).
## Configuration file format
The [sample configuration file](./sample_config.yaml) acts as a
reference to Synapse's configuration options for server administrators.
Remember that many readers will be unfamiliar with YAML and server
administration in general, so that it is important that the file be as
easy to understand as possible, which includes following a consistent
format.
Some guidelines follow:
- Sections should be separated with a heading consisting of a single
line prefixed and suffixed with `##`. There should be **two** blank
lines before the section header, and **one** after.
- Each option should be listed in the file with the following format:
- A comment describing the setting. Each line of this comment
should be prefixed with a hash (`#`) and a space.
The comment should describe the default behaviour (ie, what
happens if the setting is omitted), as well as what the effect
will be if the setting is changed.
Often, the comment ends with something like "uncomment the
following to <do action>".
- A line consisting of only `#`.
- A commented-out example setting, prefixed with only `#`.
For boolean (on/off) options, convention is that this example
should be the *opposite* to the default (so the comment will end
with "Uncomment the following to enable [or disable]
<feature>." For other options, the example should give some
non-default value which is likely to be useful to the reader.
- There should be a blank line between each option.
- Where several settings are grouped into a single dict, *avoid* the
convention where the whole block is commented out, resulting in
comment lines starting `# #`, as this is hard to read and confusing
to edit. Instead, leave the top-level config option uncommented, and
follow the conventions above for sub-options. Ensure that your code
correctly handles the top-level option being set to `None` (as it
will be if no sub-options are enabled).
- Lines should be wrapped at 80 characters.
Example:
## Frobnication ##
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobincator_frobber: special_frobber
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
Note that the sample configuration is generated from the synapse code
and is maintained by a script, `scripts-dev/generate_sample_config`.
Making sure that the output from this script matches the desired format
is left as an exercise for the reader!

180
docs/code_style.rst Normal file
View File

@@ -0,0 +1,180 @@
Code Style
==========
Formatting tools
----------------
The Synapse codebase uses a number of code formatting tools in order to
quickly and automatically check for formatting (and sometimes logical) errors
in code.
The necessary tools are detailed below.
- **black**
The Synapse codebase uses `black <https://pypi.org/project/black/>`_ as an
opinionated code formatter, ensuring all committed code is properly
formatted.
First install ``black`` with::
pip install --upgrade black
Have ``black`` auto-format your code (it shouldn't change any functionality)
with::
black . --exclude="\.tox|build|env"
- **flake8**
``flake8`` is a code checking tool. We require code to pass ``flake8`` before being merged into the codebase.
Install ``flake8`` with::
pip install --upgrade flake8
Check all application and test code with::
flake8 synapse tests
- **isort**
``isort`` ensures imports are nicely formatted, and can suggest and
auto-fix issues such as double-importing.
Install ``isort`` with::
pip install --upgrade isort
Auto-fix imports with::
isort -rc synapse tests
``-rc`` means to recursively search the given directories.
It's worth noting that modern IDEs and text editors can run these tools
automatically on save. It may be worth looking into whether this
functionality is supported in your editor for a more convenient development
workflow. It is not, however, recommended to run ``flake8`` on save as it
takes a while and is very resource intensive.
General rules
-------------
- **Naming**:
- Use camel case for class and type names
- Use underscores for functions and variables.
- **Docstrings**: should follow the `google code style
<https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings>`_.
This is so that we can generate documentation with `sphinx
<http://sphinxcontrib-napoleon.readthedocs.org/en/latest/>`_. See the
`examples
<http://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html>`_
in the sphinx documentation.
- **Imports**:
- Imports should be sorted by ``isort`` as described above.
- Prefer to import classes and functions rather than packages or modules.
Example::
from synapse.types import UserID
...
user_id = UserID(local, server)
is preferred over::
from synapse import types
...
user_id = types.UserID(local, server)
(or any other variant).
This goes against the advice in the Google style guide, but it means that
errors in the name are caught early (at import time).
- Avoid wildcard imports (``from synapse.types import *``) and relative
imports (``from .types import UserID``).
Configuration file format
-------------------------
The `sample configuration file <./sample_config.yaml>`_ acts as a reference to
Synapse's configuration options for server administrators. Remember that many
readers will be unfamiliar with YAML and server administration in general, so
that it is important that the file be as easy to understand as possible, which
includes following a consistent format.
Some guidelines follow:
* Sections should be separated with a heading consisting of a single line
prefixed and suffixed with ``##``. There should be **two** blank lines
before the section header, and **one** after.
* Each option should be listed in the file with the following format:
* A comment describing the setting. Each line of this comment should be
prefixed with a hash (``#``) and a space.
The comment should describe the default behaviour (ie, what happens if
the setting is omitted), as well as what the effect will be if the
setting is changed.
Often, the comment ends with something like "uncomment the
following to \<do action>".
* A line consisting of only ``#``.
* A commented-out example setting, prefixed with only ``#``.
For boolean (on/off) options, convention is that this example should be
the *opposite* to the default (so the comment will end with "Uncomment
the following to enable [or disable] \<feature\>." For other options,
the example should give some non-default value which is likely to be
useful to the reader.
* There should be a blank line between each option.
* Where several settings are grouped into a single dict, *avoid* the
convention where the whole block is commented out, resulting in comment
lines starting ``# #``, as this is hard to read and confusing to
edit. Instead, leave the top-level config option uncommented, and follow
the conventions above for sub-options. Ensure that your code correctly
handles the top-level option being set to ``None`` (as it will be if no
sub-options are enabled).
* Lines should be wrapped at 80 characters.
Example::
## Frobnication ##
# The frobnicator will ensure that all requests are fully frobnicated.
# To enable it, uncomment the following.
#
#frobnicator_enabled: true
# By default, the frobnicator will frobnicate with the default frobber.
# The following will make it use an alternative frobber.
#
#frobincator_frobber: special_frobber
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
Note that the sample configuration is generated from the synapse code and is
maintained by a script, ``scripts-dev/generate_sample_config``. Making sure
that the output from this script matches the desired format is left as an
exercise for the reader!

View File

@@ -1,37 +0,0 @@
# How to test SAML as a developer without a server
https://capriza.github.io/samling/samling.html (https://github.com/capriza/samling) is a great
resource for being able to tinker with the SAML options within Synapse without needing to
deploy and configure a complicated software stack.
To make Synapse (and therefore Riot) use it:
1. Use the samling.html URL above or deploy your own and visit the IdP Metadata tab.
2. Copy the XML to your clipboard.
3. On your Synapse server, create a new file `samling.xml` next to your `homeserver.yaml` with
the XML from step 2 as the contents.
4. Edit your `homeserver.yaml` to include:
```yaml
saml2_config:
sp_config:
allow_unknown_attributes: true # Works around a bug with AVA Hashes: https://github.com/IdentityPython/pysaml2/issues/388
metadata:
local: ["samling.xml"]
```
5. Run `apt-get install xmlsec1` and `pip install --upgrade --force 'pysaml2>=4.5.0'` to ensure
the dependencies are installed and ready to go.
6. Restart Synapse.
Then in Riot:
1. Visit the login page with a Riot pointing at your homeserver.
2. Click the Single Sign-On button.
3. On the samling page, enter a Name Identifier and add a SAML Attribute for `uid=your_localpart`.
The response must also be signed.
4. Click "Next".
5. Click "Post Response" (change nothing).
6. You should be logged in.
If you try and repeat this process, you may be automatically logged in using the information you
gave previously. To fix this, open your developer console (`F12` or `Ctrl+Shift+I`) while on the
samling page and clear the site data. In Chrome, this will be a button on the Application tab.

View File

@@ -148,7 +148,7 @@ We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.
See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
See [reverse_proxy.rst](reverse_proxy.rst) for information on setting up a
reverse proxy.
#### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
@@ -184,7 +184,7 @@ a complicated dance which requires connections in both directions).
Another common problem is that people on other servers can't join rooms that
you invite them to. This can be caused by an incorrectly-configured reverse
proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions on how to correctly
proxy: see [reverse_proxy.rst](<reverse_proxy.rst>) for instructions on how to correctly
configure a reverse proxy.
## Running a Demo Federation of Synapses

View File

@@ -1,494 +0,0 @@
# Log Contexts
To help track the processing of individual requests, synapse uses a
'`log context`' to track which request it is handling at any given
moment. This is done via a thread-local variable; a `logging.Filter` is
then used to fish the information back out of the thread-local variable
and add it to each log record.
Logcontexts are also used for CPU and database accounting, so that we
can track which requests were responsible for high CPU use or database
activity.
The `synapse.logging.context` module provides facilities for managing
the current log context (as well as providing the `LoggingContextFilter`
class).
Deferreds make the whole thing complicated, so this document describes
how it all works, and how to write code which follows the rules.
## Logcontexts without Deferreds
In the absence of any Deferred voodoo, things are simple enough. As with
any code of this nature, the rule is that our function should leave
things as it found them:
```python
from synapse.logging import context # omitted from future snippets
def handle_request(request_id):
request_context = context.LoggingContext()
calling_context = context.LoggingContext.current_context()
context.LoggingContext.set_current_context(request_context)
try:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
finally:
context.LoggingContext.set_current_context(calling_context)
def do_request_handling():
logger.debug("phew") # this will be logged against request_id
```
LoggingContext implements the context management methods, so the above
can be written much more succinctly as:
```python
def handle_request(request_id):
with context.LoggingContext() as request_context:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
def do_request_handling():
logger.debug("phew")
```
## Using logcontexts with Deferreds
Deferreds --- and in particular, `defer.inlineCallbacks` --- break the
linear flow of code so that there is no longer a single entry point
where we should set the logcontext and a single exit point where we
should remove it.
Consider the example above, where `do_request_handling` needs to do some
blocking operation, and returns a deferred:
```python
@defer.inlineCallbacks
def handle_request(request_id):
with context.LoggingContext() as request_context:
request_context.request = request_id
yield do_request_handling()
logger.debug("finished")
```
In the above flow:
- The logcontext is set
- `do_request_handling` is called, and returns a deferred
- `handle_request` yields the deferred
- The `inlineCallbacks` wrapper of `handle_request` returns a deferred
So we have stopped processing the request (and will probably go on to
start processing the next), without clearing the logcontext.
To circumvent this problem, synapse code assumes that, wherever you have
a deferred, you will want to yield on it. To that end, wherever
functions return a deferred, we adopt the following conventions:
**Rules for functions returning deferreds:**
> - If the deferred is already complete, the function returns with the
> same logcontext it started with.
> - If the deferred is incomplete, the function clears the logcontext
> before returning; when the deferred completes, it restores the
> logcontext before running any callbacks.
That sounds complicated, but actually it means a lot of code (including
the example above) "just works". There are two cases:
- If `do_request_handling` returns a completed deferred, then the
logcontext will still be in place. In this case, execution will
continue immediately after the `yield`; the "finished" line will
be logged against the right context, and the `with` block restores
the original context before we return to the caller.
- If the returned deferred is incomplete, `do_request_handling` clears
the logcontext before returning. The logcontext is therefore clear
when `handle_request` yields the deferred. At that point, the
`inlineCallbacks` wrapper adds a callback to the deferred, and
returns another (incomplete) deferred to the caller, and it is safe
to begin processing the next request.
Once `do_request_handling`'s deferred completes, it will reinstate
the logcontext, before running the callback added by the
`inlineCallbacks` wrapper. That callback runs the second half of
`handle_request`, so again the "finished" line will be logged
against the right context, and the `with` block restores the
original context.
As an aside, it's worth noting that `handle_request` follows our rules
- though that only matters if the caller has its own logcontext which it
cares about.
The following sections describe pitfalls and helpful patterns when
implementing these rules.
Always yield your deferreds
---------------------------
Whenever you get a deferred back from a function, you should `yield` on
it as soon as possible. (Returning it directly to your caller is ok too,
if you're not doing `inlineCallbacks`.) Do not pass go; do not do any
logging; do not call any other functions.
```python
@defer.inlineCallbacks
def fun():
logger.debug("starting")
yield do_some_stuff() # just like this
d = more_stuff()
result = yield d # also fine, of course
return result
def nonInlineCallbacksFun():
logger.debug("just a wrapper really")
return do_some_stuff() # this is ok too - the caller will yield on
# it anyway.
```
Provided this pattern is followed all the way back up to the callchain
to where the logcontext was set, this will make things work out ok:
provided `do_some_stuff` and `more_stuff` follow the rules above, then
so will `fun` (as wrapped by `inlineCallbacks`) and
`nonInlineCallbacksFun`.
It's all too easy to forget to `yield`: for instance if we forgot that
`do_some_stuff` returned a deferred, we might plough on regardless. This
leads to a mess; it will probably work itself out eventually, but not
before a load of stuff has been logged against the wrong context.
(Normally, other things will break, more obviously, if you forget to
`yield`, so this tends not to be a major problem in practice.)
Of course sometimes you need to do something a bit fancier with your
Deferreds - not all code follows the linear A-then-B-then-C pattern.
Notes on implementing more complex patterns are in later sections.
## Where you create a new Deferred, make it follow the rules
Most of the time, a Deferred comes from another synapse function.
Sometimes, though, we need to make up a new Deferred, or we get a
Deferred back from external code. We need to make it follow our rules.
The easy way to do it is with a combination of `defer.inlineCallbacks`,
and `context.PreserveLoggingContext`. Suppose we want to implement
`sleep`, which returns a deferred which will run its callbacks after a
given number of seconds. That might look like:
```python
# not a logcontext-rules-compliant function
def get_sleep_deferred(seconds):
d = defer.Deferred()
reactor.callLater(seconds, d.callback, None)
return d
```
That doesn't follow the rules, but we can fix it by wrapping it with
`PreserveLoggingContext` and `yield` ing on it:
```python
@defer.inlineCallbacks
def sleep(seconds):
with PreserveLoggingContext():
yield get_sleep_deferred(seconds)
```
This technique works equally for external functions which return
deferreds, or deferreds we have made ourselves.
You can also use `context.make_deferred_yieldable`, which just does the
boilerplate for you, so the above could be written:
```python
def sleep(seconds):
return context.make_deferred_yieldable(get_sleep_deferred(seconds))
```
## Fire-and-forget
Sometimes you want to fire off a chain of execution, but not wait for
its result. That might look a bit like this:
```python
@defer.inlineCallbacks
def do_request_handling():
yield foreground_operation()
# *don't* do this
background_operation()
logger.debug("Request handling complete")
@defer.inlineCallbacks
def background_operation():
yield first_background_step()
logger.debug("Completed first step")
yield second_background_step()
logger.debug("Completed second step")
```
The above code does a couple of steps in the background after
`do_request_handling` has finished. The log lines are still logged
against the `request_context` logcontext, which may or may not be
desirable. There are two big problems with the above, however. The first
problem is that, if `background_operation` returns an incomplete
Deferred, it will expect its caller to `yield` immediately, so will have
cleared the logcontext. In this example, that means that 'Request
handling complete' will be logged without any context.
The second problem, which is potentially even worse, is that when the
Deferred returned by `background_operation` completes, it will restore
the original logcontext. There is nothing waiting on that Deferred, so
the logcontext will leak into the reactor and possibly get attached to
some arbitrary future operation.
There are two potential solutions to this.
One option is to surround the call to `background_operation` with a
`PreserveLoggingContext` call. That will reset the logcontext before
starting `background_operation` (so the context restored when the
deferred completes will be the empty logcontext), and will restore the
current logcontext before continuing the foreground process:
```python
@defer.inlineCallbacks
def do_request_handling():
yield foreground_operation()
# start background_operation off in the empty logcontext, to
# avoid leaking the current context into the reactor.
with PreserveLoggingContext():
background_operation()
# this will now be logged against the request context
logger.debug("Request handling complete")
```
Obviously that option means that the operations done in
`background_operation` would be not be logged against a logcontext
(though that might be fixed by setting a different logcontext via a
`with LoggingContext(...)` in `background_operation`).
The second option is to use `context.run_in_background`, which wraps a
function so that it doesn't reset the logcontext even when it returns
an incomplete deferred, and adds a callback to the returned deferred to
reset the logcontext. In other words, it turns a function that follows
the Synapse rules about logcontexts and Deferreds into one which behaves
more like an external function --- the opposite operation to that
described in the previous section. It can be used like this:
```python
@defer.inlineCallbacks
def do_request_handling():
yield foreground_operation()
context.run_in_background(background_operation)
# this will now be logged against the request context
logger.debug("Request handling complete")
```
## Passing synapse deferreds into third-party functions
A typical example of this is where we want to collect together two or
more deferred via `defer.gatherResults`:
```python
d1 = operation1()
d2 = operation2()
d3 = defer.gatherResults([d1, d2])
```
This is really a variation of the fire-and-forget problem above, in that
we are firing off `d1` and `d2` without yielding on them. The difference
is that we now have third-party code attached to their callbacks. Anyway
either technique given in the [Fire-and-forget](#fire-and-forget)
section will work.
Of course, the new Deferred returned by `gatherResults` needs to be
wrapped in order to make it follow the logcontext rules before we can
yield it, as described in [Where you create a new Deferred, make it
follow the
rules](#where-you-create-a-new-deferred-make-it-follow-the-rules).
So, option one: reset the logcontext before starting the operations to
be gathered:
```python
@defer.inlineCallbacks
def do_request_handling():
with PreserveLoggingContext():
d1 = operation1()
d2 = operation2()
result = yield defer.gatherResults([d1, d2])
```
In this case particularly, though, option two, of using
`context.preserve_fn` almost certainly makes more sense, so that
`operation1` and `operation2` are both logged against the original
logcontext. This looks like:
```python
@defer.inlineCallbacks
def do_request_handling():
d1 = context.preserve_fn(operation1)()
d2 = context.preserve_fn(operation2)()
with PreserveLoggingContext():
result = yield defer.gatherResults([d1, d2])
```
## Was all this really necessary?
The conventions used work fine for a linear flow where everything
happens in series via `defer.inlineCallbacks` and `yield`, but are
certainly tricky to follow for any more exotic flows. It's hard not to
wonder if we could have done something else.
We're not going to rewrite Synapse now, so the following is entirely of
academic interest, but I'd like to record some thoughts on an
alternative approach.
I briefly prototyped some code following an alternative set of rules. I
think it would work, but I certainly didn't get as far as thinking how
it would interact with concepts as complicated as the cache descriptors.
My alternative rules were:
- functions always preserve the logcontext of their caller, whether or
not they are returning a Deferred.
- Deferreds returned by synapse functions run their callbacks in the
same context as the function was originally called in.
The main point of this scheme is that everywhere that sets the
logcontext is responsible for clearing it before returning control to
the reactor.
So, for example, if you were the function which started a
`with LoggingContext` block, you wouldn't `yield` within it --- instead
you'd start off the background process, and then leave the `with` block
to wait for it:
```python
def handle_request(request_id):
with context.LoggingContext() as request_context:
request_context.request = request_id
d = do_request_handling()
def cb(r):
logger.debug("finished")
d.addCallback(cb)
return d
```
(in general, mixing `with LoggingContext` blocks and
`defer.inlineCallbacks` in the same function leads to slightly
counter-intuitive code, under this scheme).
Because we leave the original `with` block as soon as the Deferred is
returned (as opposed to waiting for it to be resolved, as we do today),
the logcontext is cleared before control passes back to the reactor; so
if there is some code within `do_request_handling` which needs to wait
for a Deferred to complete, there is no need for it to worry about
clearing the logcontext before doing so:
```python
def handle_request():
r = do_some_stuff()
r.addCallback(do_some_more_stuff)
return r
```
--- and provided `do_some_stuff` follows the rules of returning a
Deferred which runs its callbacks in the original logcontext, all is
happy.
The business of a Deferred which runs its callbacks in the original
logcontext isn't hard to achieve --- we have it today, in the shape of
`context._PreservingContextDeferred`:
```python
def do_some_stuff():
deferred = do_some_io()
pcd = _PreservingContextDeferred(LoggingContext.current_context())
deferred.chainDeferred(pcd)
return pcd
```
It turns out that, thanks to the way that Deferreds chain together, we
automatically get the property of a context-preserving deferred with
`defer.inlineCallbacks`, provided the final Deferred the function
`yields` on has that property. So we can just write:
```python
@defer.inlineCallbacks
def handle_request():
yield do_some_stuff()
yield do_some_more_stuff()
```
To conclude: I think this scheme would have worked equally well, with
less danger of messing it up, and probably made some more esoteric code
easier to write. But again --- changing the conventions of the entire
Synapse codebase is not a sensible option for the marginal improvement
offered.
## A note on garbage-collection of Deferred chains
It turns out that our logcontext rules do not play nicely with Deferred
chains which get orphaned and garbage-collected.
Imagine we have some code that looks like this:
```python
listener_queue = []
def on_something_interesting():
for d in listener_queue:
d.callback("foo")
@defer.inlineCallbacks
def await_something_interesting():
new_deferred = defer.Deferred()
listener_queue.append(new_deferred)
with PreserveLoggingContext():
yield new_deferred
```
Obviously, the idea here is that we have a bunch of things which are
waiting for an event. (It's just an example of the problem here, but a
relatively common one.)
Now let's imagine two further things happen. First of all, whatever was
waiting for the interesting thing goes away. (Perhaps the request times
out, or something *even more* interesting happens.)
Secondly, let's suppose that we decide that the interesting thing is
never going to happen, and we reset the listener queue:
```python
def reset_listener_queue():
listener_queue.clear()
```
So, both ends of the deferred chain have now dropped their references,
and the deferred chain is now orphaned, and will be garbage-collected at
some point. Note that `await_something_interesting` is a generator
function, and when Python garbage-collects generator functions, it gives
them a chance to clean up by making the `yield` raise a `GeneratorExit`
exception. In our case, that means that the `__exit__` handler of
`PreserveLoggingContext` will carefully restore the request context, but
there is now nothing waiting for its return, so the request context is
never cleared.
To reiterate, this problem only arises when *both* ends of a deferred
chain are dropped. Dropping the reference to a deferred you're
supposed to be calling is probably bad practice, so this doesn't
actually happen too much. Unfortunately, when it does happen, it will
lead to leaked logcontexts which are incredibly hard to track down.

Some files were not shown because too many files have changed in this diff