Compare commits

..

12 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Erik Johnston | 6b2d6fdd33 | Add some debugging | 2020-05-15 15:33:11 +01:00 |
| Erik Johnston | d263a4de02 | Enable moving event persistence off of master | 2020-05-14 17:25:42 +01:00 |
| Erik Johnston | 66c1dff3ba | Use new writers config | 2020-05-14 17:25:41 +01:00 |
| Erik Johnston | 96b6023e3b | Make location of events writer configurable | 2020-05-14 17:25:09 +01:00 |
| Erik Johnston | 452019064c | Allow ReplicationRestResource to be added to workers | 2020-05-14 17:25:09 +01:00 |
| Erik Johnston | 7c8e09bcf1 | Add a worker store for search insertion | 2020-05-14 17:25:09 +01:00 |
| Erik Johnston | e7f5ac4ed8 | Fix lint | 2020-05-14 17:09:58 +01:00 |
| Erik Johnston | 208ab7b135 | Fix typing and add assertion. | 2020-05-14 17:09:58 +01:00 |
| Erik Johnston | 41f558ccf7 | Newsfile | 2020-05-14 17:09:58 +01:00 |
| Erik Johnston | 342796d6ac | Move push rules ID gen to push rules worker | 2020-05-14 17:09:58 +01:00 |
| Erik Johnston | bc3fc3927f | Move events ID gens to EventWorkerStore | 2020-05-14 17:09:58 +01:00 |
| Erik Johnston | d67a8b5455 | Move repliction event stream handling out of slave store | 2020-05-14 17:09:58 +01:00 |
741 changed files with 14273 additions and 26036 deletions

View File

@@ -4,16 +4,18 @@ jobs:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
dockerhubuploadlatest:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:latest
- run: docker push matrixdotorg/synapse:latest-py3
workflows:
version: 2
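
Because the hunk above interleaves both sides of the change without `+`/`-` markers, the following is a minimal sketch of how the `dockerhubuploadlatest` job reads as plain CircleCI YAML, using the two-tag variant shown above. The top-level `version: 2` key and the exact indentation are assumptions; the diff only shows fragments.

```yaml
version: 2  # assumed top-level key; only "workflows: version: 2" is visible above
jobs:
  dockerhubuploadlatest:
    machine: true
    steps:
      - checkout
      # Build the image once, applying both the plain and -py3 tags
      # (the other side of the diff drops the -py3 tag).
      - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 .
      - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
      - run: docker push matrixdotorg/synapse:latest
      - run: docker push matrixdotorg/synapse:latest-py3
```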

View File

@@ -1,5 +0,0 @@
**If you are looking for support** please ask in **#synapse:matrix.org**
(using a matrix.org account if necessary). We do not use GitHub issues for
support.
**If you want to report a security issue** please see https://matrix.org/security-disclosure-policy/

View File

@@ -6,11 +6,9 @@ about: Create a report to help us improve
<!--
**THIS IS NOT A SUPPORT CHANNEL!**
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**,
please ask in **#synapse:matrix.org** (using a matrix.org account if necessary)
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**:
You will likely get better support more quickly if you ask in ** #synapse:matrix.org ** ;)
If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/
This is a bug report template. By following the instructions below and
filling out the sections with your information, you will help us to get all

View File

@@ -1,612 +1,21 @@
Synapse 1.19.2 (2020-09-16)
===========================
Due to the issue below, server admins are encouraged to upgrade as soon as possible.
Bugfixes
--------
- Fix joining rooms over federation that include malformed events. ([\#8324](https://github.com/matrix-org/synapse/issues/8324))
Synapse 1.19.1 (2020-08-27)
===========================
No significant changes.
Synapse 1.19.1rc1 (2020-08-25)
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a bug introduced in v1.19.0 where appservices with ratelimiting disabled would still be ratelimited when joining rooms. ([\#8139](https://github.com/matrix-org/synapse/issues/8139))
- Fix a bug introduced in v1.19.0 that would cause e.g. profile updates to fail due to incorrect application of rate limits on join requests. ([\#8153](https://github.com/matrix-org/synapse/issues/8153))
Synapse 1.19.0 (2020-08-17)
===========================
No significant changes since 1.19.0rc1.
Removal warning
---------------
As outlined in the [previous release](https://github.com/matrix-org/synapse/releases/tag/v1.18.0), we are no longer publishing Docker images with the `-py3` tag suffix. On top of that, we have also removed the `latest-py3` tag. Please see [the announcement in the upgrade notes for 1.18.0](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180).
Synapse 1.19.0rc1 (2020-08-13)
==============================
Features
--------
- Add option to allow server admins to join rooms which fail complexity checks. Contributed by @lugino-emeritus. ([\#7902](https://github.com/matrix-org/synapse/issues/7902))
- Add an option to the delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`) to choose whether or not the room is purged. Contributed by @dklimpel. ([\#7964](https://github.com/matrix-org/synapse/issues/7964))
- Add rate limiting to users joining rooms. ([\#8008](https://github.com/matrix-org/synapse/issues/8008))
- Add a `/health` endpoint to every configured HTTP listener that can be used as a health check endpoint by load balancers (see the sketch after this list). ([\#8048](https://github.com/matrix-org/synapse/issues/8048))
- Allow login to be blocked based on the values of SAML attributes. ([\#8052](https://github.com/matrix-org/synapse/issues/8052))
- Allow guest access to the `GET /_matrix/client/r0/rooms/{room_id}/members` endpoint, according to MSC2689. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#7314](https://github.com/matrix-org/synapse/issues/7314))
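
As a hedged illustration of wiring the new `/health` endpoint into a container health check, here is a hypothetical docker-compose excerpt. The port 8008, the image tag, and the availability of `curl` inside the image are assumptions, not taken from this repository.

```yaml
services:
  synapse:
    image: matrixdotorg/synapse:v1.19.0   # placeholder tag
    healthcheck:
      # Probe the /health endpoint exposed on the configured HTTP listener.
      test: ["CMD", "curl", "-fSs", "http://localhost:8008/health"]
      interval: 15s
      timeout: 5s
      retries: 3
```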
Bugfixes
--------
- Fix a bug introduced in Synapse v1.7.2 which caused inaccurate membership counts in the room directory. ([\#7977](https://github.com/matrix-org/synapse/issues/7977))
- Fix a long standing bug: 'Duplicate key value violates unique constraint "event_relations_id"' when message retention is configured. ([\#7978](https://github.com/matrix-org/synapse/issues/7978))
- Fix "no create event in auth events" when trying to reject invitation after inviter leaves. Bug introduced in Synapse v1.10.0. ([\#7980](https://github.com/matrix-org/synapse/issues/7980))
- Fix various comments and minor discrepancies in server notices code. ([\#7996](https://github.com/matrix-org/synapse/issues/7996))
- Fix a long standing bug where HTTP HEAD requests resulted in a 400 error. ([\#7999](https://github.com/matrix-org/synapse/issues/7999))
- Fix a long-standing bug which caused two copies of some log lines to be written when synctl was used along with a MemoryHandler logger. ([\#8011](https://github.com/matrix-org/synapse/issues/8011), [\#8012](https://github.com/matrix-org/synapse/issues/8012))
Updates to the Docker image
---------------------------
- We no longer publish Docker images with the `-py3` tag suffix, as [announced in the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180). ([\#8056](https://github.com/matrix-org/synapse/issues/8056))
Improved Documentation
----------------------
- Document how to set up a client .well-known file and fix several pieces of outdated documentation. ([\#7899](https://github.com/matrix-org/synapse/issues/7899))
- Improve workers docs. ([\#7990](https://github.com/matrix-org/synapse/issues/7990), [\#8000](https://github.com/matrix-org/synapse/issues/8000))
- Fix typo in `docs/workers.md`. ([\#7992](https://github.com/matrix-org/synapse/issues/7992))
- Add documentation for how to undo a room shutdown. ([\#7998](https://github.com/matrix-org/synapse/issues/7998), [\#8010](https://github.com/matrix-org/synapse/issues/8010))
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Reduce the amount of whitespace in JSON stored and sent in responses. Contributed by David Vo. ([\#7372](https://github.com/matrix-org/synapse/issues/7372))
- Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0. ([\#7936](https://github.com/matrix-org/synapse/issues/7936), [\#7979](https://github.com/matrix-org/synapse/issues/7979))
- Convert various parts of the codebase to async/await. ([\#7947](https://github.com/matrix-org/synapse/issues/7947), [\#7948](https://github.com/matrix-org/synapse/issues/7948), [\#7949](https://github.com/matrix-org/synapse/issues/7949), [\#7951](https://github.com/matrix-org/synapse/issues/7951), [\#7963](https://github.com/matrix-org/synapse/issues/7963), [\#7973](https://github.com/matrix-org/synapse/issues/7973), [\#7975](https://github.com/matrix-org/synapse/issues/7975), [\#7976](https://github.com/matrix-org/synapse/issues/7976), [\#7981](https://github.com/matrix-org/synapse/issues/7981), [\#7987](https://github.com/matrix-org/synapse/issues/7987), [\#7989](https://github.com/matrix-org/synapse/issues/7989), [\#8003](https://github.com/matrix-org/synapse/issues/8003), [\#8014](https://github.com/matrix-org/synapse/issues/8014), [\#8016](https://github.com/matrix-org/synapse/issues/8016), [\#8027](https://github.com/matrix-org/synapse/issues/8027), [\#8031](https://github.com/matrix-org/synapse/issues/8031), [\#8032](https://github.com/matrix-org/synapse/issues/8032), [\#8035](https://github.com/matrix-org/synapse/issues/8035), [\#8042](https://github.com/matrix-org/synapse/issues/8042), [\#8044](https://github.com/matrix-org/synapse/issues/8044), [\#8045](https://github.com/matrix-org/synapse/issues/8045), [\#8061](https://github.com/matrix-org/synapse/issues/8061), [\#8062](https://github.com/matrix-org/synapse/issues/8062), [\#8063](https://github.com/matrix-org/synapse/issues/8063), [\#8066](https://github.com/matrix-org/synapse/issues/8066), [\#8069](https://github.com/matrix-org/synapse/issues/8069), [\#8070](https://github.com/matrix-org/synapse/issues/8070))
- Move some database-related log lines from the default logger to the database/transaction loggers. ([\#7952](https://github.com/matrix-org/synapse/issues/7952))
- Add a script to detect source code files using non-unix line terminators. ([\#7965](https://github.com/matrix-org/synapse/issues/7965), [\#7970](https://github.com/matrix-org/synapse/issues/7970))
- Log the SAML session ID during creation. ([\#7971](https://github.com/matrix-org/synapse/issues/7971))
- Implement new experimental push rules for some users. ([\#7997](https://github.com/matrix-org/synapse/issues/7997))
- Remove redundant and unreliable signature check for v1 Identity Service lookup responses. ([\#8001](https://github.com/matrix-org/synapse/issues/8001))
- Improve the performance of the register endpoint. ([\#8009](https://github.com/matrix-org/synapse/issues/8009))
- Reduce less useful output in the newsfragment CI step. Add a link to the changelog section of the contributing guide on error. ([\#8024](https://github.com/matrix-org/synapse/issues/8024))
- Rename storage layer objects to be more sensible. ([\#8033](https://github.com/matrix-org/synapse/issues/8033))
- Change the default log config to reduce disk I/O and storage for new servers. ([\#8040](https://github.com/matrix-org/synapse/issues/8040))
- Add an assertion on `prev_events` in `create_new_client_event`. ([\#8041](https://github.com/matrix-org/synapse/issues/8041))
- Add a comment to `ServerContextFactory` about the use of `SSLv23_METHOD`. ([\#8043](https://github.com/matrix-org/synapse/issues/8043))
- Log `OPTIONS` requests at `DEBUG` rather than `INFO` level to reduce amount logged at `INFO`. ([\#8049](https://github.com/matrix-org/synapse/issues/8049))
- Reduce amount of outbound request logging at `INFO` level. ([\#8050](https://github.com/matrix-org/synapse/issues/8050))
- It is no longer necessary to explicitly define `filters` in the logging configuration. (Continuing to do so is redundant but harmless.) ([\#8051](https://github.com/matrix-org/synapse/issues/8051))
- Add and improve type hints. ([\#8058](https://github.com/matrix-org/synapse/issues/8058), [\#8064](https://github.com/matrix-org/synapse/issues/8064), [\#8060](https://github.com/matrix-org/synapse/issues/8060), [\#8067](https://github.com/matrix-org/synapse/issues/8067))
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
Synapse 1.18.0 (2020-07-30)
===========================
Deprecation Warnings
--------------------
### Docker Tags with `-py3` Suffix
From 10th August 2020, we will no longer publish Docker images with the `-py3` tag suffix. The images tagged with the `-py3` suffix have been identical to the non-suffixed tags since release 0.99.0, and the suffix is obsolete.
On 10th August, we will remove the `latest-py3` tag. Existing per-release tags (such as `v1.18.0-py3`) will not be removed, but no new `-py3` tags will be added.
Scripts relying on the `-py3` suffix will need to be updated.
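
As a hedged illustration of the kind of update such scripts need, here is a hypothetical docker-compose excerpt; the service name and the pinned release tag are placeholders, not taken from this repository.

```yaml
services:
  synapse:
    # Before: matrixdotorg/synapse:latest-py3 (the latest-py3 tag is removed on 10th August 2020)
    # After: use the un-suffixed tag, ideally pinned to a specific release.
    image: matrixdotorg/synapse:v1.18.0
```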
### TCP-based Replication
When setting up worker processes, we now recommend the use of a Redis server for replication. The old direct TCP connection method is deprecated and will be removed in a future release. See [docs/workers.md](https://github.com/matrix-org/synapse/blob/release-v1.18.0/docs/workers.md) for more details.
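
For reference, a minimal sketch of the corresponding `homeserver.yaml` section, assuming a Redis server on localhost; see docs/workers.md for the full worker and replication setup.

```yaml
# Sketch only: enable Redis-based replication between the main process and
# workers, replacing the deprecated direct TCP connections.
redis:
  enabled: true
  host: localhost   # assumed local Redis instance
  port: 6379
  # password: <uncomment and set if your Redis requires authentication>
```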
Improved Documentation
----------------------
- Update worker docs with latest enhancements. ([\#7969](https://github.com/matrix-org/synapse/issues/7969))
Synapse 1.18.0rc2 (2020-07-28)
Synapse 1.13.0rc1 (2020-05-11)
==============================
Bugfixes
--------
- Fix an `AssertionError` exception introduced in v1.18.0rc1. ([\#7876](https://github.com/matrix-org/synapse/issues/7876))
- Fix experimental support for moving typing off master when worker is restarted, which is broken in v1.18.0rc1. ([\#7967](https://github.com/matrix-org/synapse/issues/7967))
Internal Changes
----------------
- Further optimise queueing of inbound replication commands. ([\#7876](https://github.com/matrix-org/synapse/issues/7876))
Synapse 1.18.0rc1 (2020-07-27)
==============================
Features
--------
- Include room states on invite events that are sent to application services. Contributed by @Sorunome. ([\#6455](https://github.com/matrix-org/synapse/issues/6455))
- Add delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). Contributed by @dklimpel. ([\#7613](https://github.com/matrix-org/synapse/issues/7613), [\#7953](https://github.com/matrix-org/synapse/issues/7953))
- Add experimental support for running multiple federation sender processes. ([\#7798](https://github.com/matrix-org/synapse/issues/7798))
- Add the option to validate the `iss` and `aud` claims for JWT logins. ([\#7827](https://github.com/matrix-org/synapse/issues/7827))
- Add support for handling registration requests across multiple client reader workers. ([\#7830](https://github.com/matrix-org/synapse/issues/7830))
- Add an admin API to list the users in a room. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#7842](https://github.com/matrix-org/synapse/issues/7842))
- Allow email subjects to be customised through Synapse's configuration. ([\#7846](https://github.com/matrix-org/synapse/issues/7846))
- Add the ability to re-activate an account from the admin API. ([\#7847](https://github.com/matrix-org/synapse/issues/7847), [\#7908](https://github.com/matrix-org/synapse/issues/7908))
- Add experimental support for running multiple pusher workers. ([\#7855](https://github.com/matrix-org/synapse/issues/7855))
- Add experimental support for moving typing off master. ([\#7869](https://github.com/matrix-org/synapse/issues/7869), [\#7959](https://github.com/matrix-org/synapse/issues/7959))
- Report CPU metrics to prometheus for time spent processing replication commands. ([\#7879](https://github.com/matrix-org/synapse/issues/7879))
- Support oEmbed for media previews. ([\#7920](https://github.com/matrix-org/synapse/issues/7920))
- Abort federation requests where the client disconnects before the ratelimiter expires. ([\#7930](https://github.com/matrix-org/synapse/issues/7930))
- Cache responses to `/_matrix/federation/v1/state_ids` to reduce duplicated work. ([\#7931](https://github.com/matrix-org/synapse/issues/7931))
Bugfixes
--------
- Fix detection of out of sync remote device lists when receiving events from remote users. ([\#7815](https://github.com/matrix-org/synapse/issues/7815))
- Fix bug where Synapse fails to process an incoming event over federation if the server is missing too much of the event's auth chain. ([\#7817](https://github.com/matrix-org/synapse/issues/7817))
- Fix a bug causing Synapse to misinterpret the value `off` for `encryption_enabled_by_default_for_room_type` in its configuration file(s) if that value isn't surrounded by quotes. This bug was introduced in v1.16.0. ([\#7822](https://github.com/matrix-org/synapse/issues/7822))
- Fix bug where we did not always pass in `app_name` or `server_name` to email templates, including e.g. for registration emails. ([\#7829](https://github.com/matrix-org/synapse/issues/7829))
- Errors which occur while using the non-standard JWT login now return the proper error: `403 Forbidden` with an error code of `M_FORBIDDEN`. ([\#7844](https://github.com/matrix-org/synapse/issues/7844))
- Fix "AttributeError: 'str' object has no attribute 'get'" error message when applying per-room message retention policies. The bug was introduced in Synapse 1.7.0. ([\#7850](https://github.com/matrix-org/synapse/issues/7850))
- Fix a bug introduced in Synapse 1.10.0 which could cause a "no create event in auth events" error during room creation. ([\#7854](https://github.com/matrix-org/synapse/issues/7854))
- Fix a bug which allowed empty rooms to be rejoined over federation. ([\#7859](https://github.com/matrix-org/synapse/issues/7859))
- Fix 'Unable to find a suitable guest user ID' error when using multiple client_reader workers. ([\#7866](https://github.com/matrix-org/synapse/issues/7866))
- Fix a long standing bug where the tracing of async functions with opentracing was broken. ([\#7872](https://github.com/matrix-org/synapse/issues/7872), [\#7961](https://github.com/matrix-org/synapse/issues/7961))
- Fix "TypeError in `synapse.notifier`" exceptions. ([\#7880](https://github.com/matrix-org/synapse/issues/7880))
- Fix deprecation warning due to invalid escape sequences. ([\#7895](https://github.com/matrix-org/synapse/issues/7895))
Updates to the Docker image
---------------------------
- Base docker image on Debian Buster rather than Alpine Linux. Contributed by @maquis196. ([\#7839](https://github.com/matrix-org/synapse/issues/7839))
Improved Documentation
----------------------
- Provide instructions on using `register_new_matrix_user` via docker. ([\#7885](https://github.com/matrix-org/synapse/issues/7885))
- Change the sample config postgres user section to use `synapse_user` instead of `synapse` to align with the documentation. ([\#7889](https://github.com/matrix-org/synapse/issues/7889))
- Reorder database paragraphs to promote postgres over sqlite. ([\#7933](https://github.com/matrix-org/synapse/issues/7933))
- Update the dates of ACME v1's end of life in [`ACME.md`](https://github.com/matrix-org/synapse/blob/master/docs/ACME.md). ([\#7934](https://github.com/matrix-org/synapse/issues/7934))
Deprecations and Removals
-------------------------
- Remove unused `synapse_replication_tcp_resource_invalidate_cache` prometheus metric. ([\#7878](https://github.com/matrix-org/synapse/issues/7878))
- Remove Ubuntu Eoan from the list of `.deb` packages that we build as it is now end-of-life. Contributed by @gary-kim. ([\#7888](https://github.com/matrix-org/synapse/issues/7888))
Internal Changes
----------------
- Switch parts of the codebase from `simplejson` to the standard library `json`. ([\#7802](https://github.com/matrix-org/synapse/issues/7802))
- Add type hints to the http server code and remove an unused parameter. ([\#7813](https://github.com/matrix-org/synapse/issues/7813))
- Add type hints to synapse.api.errors module. ([\#7820](https://github.com/matrix-org/synapse/issues/7820))
- Ensure that calls to `json.dumps` are compatible with the standard library json. ([\#7836](https://github.com/matrix-org/synapse/issues/7836))
- Remove redundant `retry_on_integrity_error` wrapper for event persistence code. ([\#7848](https://github.com/matrix-org/synapse/issues/7848))
- Consistently use `db_to_json` to convert from database values to JSON objects. ([\#7849](https://github.com/matrix-org/synapse/issues/7849))
- Convert various parts of the codebase to async/await. ([\#7851](https://github.com/matrix-org/synapse/issues/7851), [\#7860](https://github.com/matrix-org/synapse/issues/7860), [\#7868](https://github.com/matrix-org/synapse/issues/7868), [\#7871](https://github.com/matrix-org/synapse/issues/7871), [\#7873](https://github.com/matrix-org/synapse/issues/7873), [\#7874](https://github.com/matrix-org/synapse/issues/7874), [\#7884](https://github.com/matrix-org/synapse/issues/7884), [\#7912](https://github.com/matrix-org/synapse/issues/7912), [\#7935](https://github.com/matrix-org/synapse/issues/7935), [\#7939](https://github.com/matrix-org/synapse/issues/7939), [\#7942](https://github.com/matrix-org/synapse/issues/7942), [\#7944](https://github.com/matrix-org/synapse/issues/7944))
- Add support for handling registration requests across multiple client reader workers. ([\#7853](https://github.com/matrix-org/synapse/issues/7853))
- Small performance improvement in typing processing. ([\#7856](https://github.com/matrix-org/synapse/issues/7856))
- The default value of `filter_timeline_limit` was changed from -1 (no limit) to 100. ([\#7858](https://github.com/matrix-org/synapse/issues/7858))
- Optimise queueing of inbound replication commands. ([\#7861](https://github.com/matrix-org/synapse/issues/7861))
- Add some type annotations to `HomeServer` and `BaseHandler`. ([\#7870](https://github.com/matrix-org/synapse/issues/7870))
- Clean up `PreserveLoggingContext`. ([\#7877](https://github.com/matrix-org/synapse/issues/7877))
- Change "unknown room version" logging from 'error' to 'warning'. ([\#7881](https://github.com/matrix-org/synapse/issues/7881))
- Stop using `device_max_stream_id` table and just use `device_inbox.stream_id`. ([\#7882](https://github.com/matrix-org/synapse/issues/7882))
- Return an empty body for OPTIONS requests. ([\#7886](https://github.com/matrix-org/synapse/issues/7886))
- Fix typo in generated config file. Contributed by @ThiefMaster. ([\#7890](https://github.com/matrix-org/synapse/issues/7890))
- Import ABC from `collections.abc` for Python 3.10 compatibility. ([\#7892](https://github.com/matrix-org/synapse/issues/7892))
- Remove unused functions `time_function`, `trace_function`, `get_previous_frames`
and `get_previous_frame` from `synapse.logging.utils` module. ([\#7897](https://github.com/matrix-org/synapse/issues/7897))
- Lint the `contrib/` directory in CI and linting scripts, add `synctl` to the linting script for consistency with CI. ([\#7914](https://github.com/matrix-org/synapse/issues/7914))
- Use Element CSS and logo in notification emails when app name is Element. ([\#7919](https://github.com/matrix-org/synapse/issues/7919))
- Optimisation to /sync handling: skip serializing the response if the client has already disconnected. ([\#7927](https://github.com/matrix-org/synapse/issues/7927))
- When a client disconnects, don't log it as 'Error processing request'. ([\#7928](https://github.com/matrix-org/synapse/issues/7928))
- Add debugging to `/sync` response generation (disabled by default). ([\#7929](https://github.com/matrix-org/synapse/issues/7929))
- Update comments that refer to Deferreds for async functions. ([\#7945](https://github.com/matrix-org/synapse/issues/7945))
- Simplify error handling in federation handler. ([\#7950](https://github.com/matrix-org/synapse/issues/7950))
Synapse 1.17.0 (2020-07-13)
===========================
Synapse 1.17.0 is identical to 1.17.0rc1, with the addition of the fix that was included in 1.16.1.
Synapse 1.16.1 (2020-07-10)
===========================
In some distributions of Synapse 1.16.0, we incorrectly included a database migration which added a new, unused table. This release removes the redundant table.
Bugfixes
--------
- Drop table `local_rejections_stream` which was incorrectly added in Synapse 1.16.0. ([\#7816](https://github.com/matrix-org/synapse/issues/7816), [b1beb3ff5](https://github.com/matrix-org/synapse/commit/b1beb3ff5))
Synapse 1.17.0rc1 (2020-07-09)
==============================
Bugfixes
--------
- Fix inconsistent handling of upper and lower case in email addresses when used as identifiers for login, etc. Contributed by @dklimpel. ([\#7021](https://github.com/matrix-org/synapse/issues/7021))
- Fix "Tried to close a non-active scope!" error messages when opentracing is enabled. ([\#7732](https://github.com/matrix-org/synapse/issues/7732))
- Fix incorrect error message when database CTYPE was set incorrectly. ([\#7760](https://github.com/matrix-org/synapse/issues/7760))
- Fix to not ignore `set_tweak` actions in Push Rules that have no `value`, as permitted by the specification. ([\#7766](https://github.com/matrix-org/synapse/issues/7766))
- Fix synctl to handle empty config files correctly. Contributed by @kotovalexarian. ([\#7779](https://github.com/matrix-org/synapse/issues/7779))
- Fixes a long standing bug in worker mode where worker information was saved in the devices table instead of the original IP address and user agent. ([\#7797](https://github.com/matrix-org/synapse/issues/7797))
- Fix 'stuck invites' which happen when we are unable to reject a room invite received over federation. ([\#7804](https://github.com/matrix-org/synapse/issues/7804), [\#7809](https://github.com/matrix-org/synapse/issues/7809), [\#7810](https://github.com/matrix-org/synapse/issues/7810))
Updates to the Docker image
---------------------------
- Include libwebp in the Docker file to properly handle webp image uploads. ([\#7791](https://github.com/matrix-org/synapse/issues/7791))
Improved Documentation
----------------------
- Improve the documentation of the non-standard JSON web token login type. ([\#7776](https://github.com/matrix-org/synapse/issues/7776))
- Update doc links for caddy. Contributed by Nicolai Søborg. ([\#7789](https://github.com/matrix-org/synapse/issues/7789))
Internal Changes
----------------
- Refactor getting replication updates from database. ([\#7740](https://github.com/matrix-org/synapse/issues/7740))
- Send push notifications with a high or low priority depending upon whether they may generate user-observable effects. ([\#7765](https://github.com/matrix-org/synapse/issues/7765))
- Use symbolic names for replication stream names. ([\#7768](https://github.com/matrix-org/synapse/issues/7768))
- Add early returns to `_check_for_soft_fail`. ([\#7769](https://github.com/matrix-org/synapse/issues/7769))
- Fix up `synapse.handlers.federation` to pass mypy. ([\#7770](https://github.com/matrix-org/synapse/issues/7770))
- Convert the appserver handler to async/await. ([\#7775](https://github.com/matrix-org/synapse/issues/7775))
- Allow the use of higher versions of prometheus_client (<0.9.0), which are expected to introduce no breaking changes. Contributed by Oliver Kurz. ([\#7780](https://github.com/matrix-org/synapse/issues/7780))
- Update linting scripts and codebase to be compatible with `isort` v5. ([\#7786](https://github.com/matrix-org/synapse/issues/7786))
- Stop populating unused table `local_invites`. ([\#7793](https://github.com/matrix-org/synapse/issues/7793))
- Ensure that strings (not bytes) are passed into JSON serialization. ([\#7799](https://github.com/matrix-org/synapse/issues/7799))
- Switch from simplejson to the standard library json. ([\#7800](https://github.com/matrix-org/synapse/issues/7800))
- Add `signing_key` property to `HomeServer` to save code duplication. ([\#7805](https://github.com/matrix-org/synapse/issues/7805))
- Improve stacktraces from exceptions in background processes. ([\#7808](https://github.com/matrix-org/synapse/issues/7808))
- Fix various spelling errors in comments and log lines. ([\#7811](https://github.com/matrix-org/synapse/issues/7811))
Synapse 1.16.0 (2020-07-08)
===========================
No significant changes since 1.16.0rc2.
Note that this release deprecates the `m.login.jwt` login method, renaming it
to `org.matrix.login.jwt`, as `m.login.jwt` is not part of the Matrix spec.
Otherwise the behaviour is identical. Synapse will accept both names for now,
but this may change in a future release.
Synapse 1.16.0rc2 (2020-07-02)
==============================
Synapse 1.16.0rc2 includes the security fixes released with Synapse 1.15.2.
Please see [below](#synapse-1152-2020-07-02) for more details.
Improved Documentation
----------------------
- Update postgres image in example `docker-compose.yaml` to tag `12-alpine`. ([\#7696](https://github.com/matrix-org/synapse/issues/7696))
Internal Changes
----------------
- Add some metrics for inbound and outbound federation latencies: `synapse_federation_server_pdu_process_time` and `synapse_event_processing_lag_by_event`. ([\#7771](https://github.com/matrix-org/synapse/issues/7771))
Synapse 1.15.2 (2020-07-02)
===========================
Due to the two security issues highlighted below, server administrators are
encouraged to update Synapse. We are not aware of these vulnerabilities being
exploited in the wild.
Security advisory
-----------------
* A malicious homeserver could force Synapse to reset the state in a room to a
small subset of the correct state. This affects all Synapse deployments which
federate with untrusted servers. ([96e9afe6](https://github.com/matrix-org/synapse/commit/96e9afe62500310977dc3cbc99a8d16d3d2fa15c))
* HTML pages served via Synapse were vulnerable to clickjacking attacks. This
predominantly affects homeservers with single-sign-on enabled, but all server
administrators are encouraged to upgrade. ([ea26e9a9](https://github.com/matrix-org/synapse/commit/ea26e9a98b0541fc886a1cb826a38352b7599dbe))
This was reported by [Quentin Gliech](https://sandhose.fr/).
Synapse 1.16.0rc1 (2020-07-01)
==============================
Features
--------
- Add an option to enable encryption by default for new rooms. ([\#7639](https://github.com/matrix-org/synapse/issues/7639))
- Add support for running multiple media repository workers. See [docs/workers.md](https://github.com/matrix-org/synapse/blob/release-v1.16.0/docs/workers.md) for instructions. ([\#7706](https://github.com/matrix-org/synapse/issues/7706))
- Media can now be marked as safe from being quarantined. ([\#7718](https://github.com/matrix-org/synapse/issues/7718))
- Expand the configuration options for auto-join rooms. ([\#7763](https://github.com/matrix-org/synapse/issues/7763))
Bugfixes
--------
- Remove `user_id` from the response to `GET /_matrix/client/r0/presence/{userId}/status` to match the specification. ([\#7606](https://github.com/matrix-org/synapse/issues/7606))
- In worker mode, ensure that replicated data has not already been received. ([\#7648](https://github.com/matrix-org/synapse/issues/7648))
- Fix intermittent exception during startup, introduced in Synapse 1.14.0. ([\#7663](https://github.com/matrix-org/synapse/issues/7663))
- Include a user-agent for federation and well-known requests. ([\#7677](https://github.com/matrix-org/synapse/issues/7677))
- Accept the proper field (`phone`) for the `m.id.phone` identifier type. The legacy field of `number` is still accepted as a fallback. Bug introduced in v0.20.0. ([\#7687](https://github.com/matrix-org/synapse/issues/7687))
- Fix "Starting db txn 'get_completed_ui_auth_stages' from sentinel context" warning. The bug was introduced in 1.13.0. ([\#7688](https://github.com/matrix-org/synapse/issues/7688))
- Compare the URI and method during user interactive authentication (instead of the URI twice). Bug introduced in 1.13.0. ([\#7689](https://github.com/matrix-org/synapse/issues/7689))
- Fix a long standing bug where the response to the `GET room_keys/version` endpoint had the incorrect type for the `etag` field. ([\#7691](https://github.com/matrix-org/synapse/issues/7691))
- Fix logged error during device resync in opentracing. Broke in v1.14.0. ([\#7698](https://github.com/matrix-org/synapse/issues/7698))
- Do not break push rule evaluation when receiving an event with a non-string body. This is a long-standing bug. ([\#7701](https://github.com/matrix-org/synapse/issues/7701))
- Fix a long-standing bug which resulted in an exception: "TypeError: argument of type 'ObservableDeferred' is not iterable". ([\#7708](https://github.com/matrix-org/synapse/issues/7708))
- The `synapse_port_db` script no longer fails when the `ui_auth_sessions` table is non-empty. This bug has existed since v1.13.0. ([\#7711](https://github.com/matrix-org/synapse/issues/7711))
- Synapse will now fetch media from the proper specified URL (using the r0 prefix instead of the unspecified v1). ([\#7714](https://github.com/matrix-org/synapse/issues/7714))
- Fix the tables ignored by `synapse_port_db` to be in sync with the current database schema. ([\#7717](https://github.com/matrix-org/synapse/issues/7717))
- Fix missing `Content-Length` on HTTP responses from the metrics handler. ([\#7730](https://github.com/matrix-org/synapse/issues/7730))
- Fix large state resolutions from stalling Synapse for seconds at a time. ([\#7735](https://github.com/matrix-org/synapse/issues/7735), [\#7746](https://github.com/matrix-org/synapse/issues/7746))
Improved Documentation
----------------------
- Spelling correction in sample_config.yaml. ([\#7652](https://github.com/matrix-org/synapse/issues/7652))
- Added instructions for how to use Keycloak via OpenID Connect to authenticate with Synapse. ([\#7659](https://github.com/matrix-org/synapse/issues/7659))
- Corrected misspelling of PostgreSQL. ([\#7724](https://github.com/matrix-org/synapse/issues/7724))
Deprecations and Removals
-------------------------
- Deprecate `m.login.jwt` login method in favour of `org.matrix.login.jwt`, as `m.login.jwt` is not part of the Matrix spec. ([\#7675](https://github.com/matrix-org/synapse/issues/7675))
Internal Changes
----------------
- Refactor getting replication updates from database. ([\#7636](https://github.com/matrix-org/synapse/issues/7636))
- Clean-up the login fallback code. ([\#7657](https://github.com/matrix-org/synapse/issues/7657))
- Increase the default SAML session expiry time to 15 minutes. ([\#7664](https://github.com/matrix-org/synapse/issues/7664))
- Convert the device message and pagination handlers to async/await. ([\#7678](https://github.com/matrix-org/synapse/issues/7678))
- Convert typing handler to async/await. ([\#7679](https://github.com/matrix-org/synapse/issues/7679))
- Require `parameterized` package version to be at least 0.7.0. ([\#7680](https://github.com/matrix-org/synapse/issues/7680))
- Refactor handling of `listeners` configuration settings. ([\#7681](https://github.com/matrix-org/synapse/issues/7681))
- Replace uses of `six.iterkeys`/`iteritems`/`itervalues` with `keys()`/`items()`/`values()`. ([\#7692](https://github.com/matrix-org/synapse/issues/7692))
- Add support for using `rust-python-jaeger-reporter` library to reduce jaeger tracing overhead. ([\#7697](https://github.com/matrix-org/synapse/issues/7697))
- Make Tox actions work on Debian 10. ([\#7703](https://github.com/matrix-org/synapse/issues/7703))
- Replace all remaining uses of `six` with native Python 3 equivalents. Contributed by @ilmari. ([\#7704](https://github.com/matrix-org/synapse/issues/7704))
- Fix broken link in sample config. ([\#7712](https://github.com/matrix-org/synapse/issues/7712))
- Speed up state res v2 across large state differences. ([\#7725](https://github.com/matrix-org/synapse/issues/7725))
- Convert directory handler to async/await. ([\#7727](https://github.com/matrix-org/synapse/issues/7727))
- Move `flake8` to the end of `scripts-dev/lint.sh` as it takes the longest and could cause the script to exit early. ([\#7738](https://github.com/matrix-org/synapse/issues/7738))
- Explain the "test" conditional requirement for dependencies is not all of the modules necessary to run the unit tests. ([\#7751](https://github.com/matrix-org/synapse/issues/7751))
- Add some metrics for inbound and outbound federation latencies: `synapse_federation_server_pdu_process_time` and `synapse_event_processing_lag_by_event`. ([\#7755](https://github.com/matrix-org/synapse/issues/7755))
Synapse 1.15.1 (2020-06-16)
===========================
Bugfixes
--------
- Fix a bug introduced in v1.15.0 that would crash Synapse on start when using certain password auth providers. ([\#7684](https://github.com/matrix-org/synapse/issues/7684))
- Fix a bug introduced in v1.15.0 which meant that some 3PID management endpoints were not accessible on the correct URL. ([\#7685](https://github.com/matrix-org/synapse/issues/7685))
Synapse 1.15.0 (2020-06-11)
===========================
No significant changes.
Synapse 1.15.0rc1 (2020-06-09)
==============================
Features
--------
- Advertise support for Client-Server API r0.6.0 and remove related unstable feature flags. ([\#6585](https://github.com/matrix-org/synapse/issues/6585))
- Add an option to disable autojoining rooms for guest accounts. ([\#6637](https://github.com/matrix-org/synapse/issues/6637))
- For SAML authentication, add the ability to pass email addresses to be added to new users' accounts via SAML attributes. Contributed by Christopher Cooper. ([\#7385](https://github.com/matrix-org/synapse/issues/7385))
- Add admin APIs to allow server admins to manage users' devices. Contributed by @dklimpel. ([\#7481](https://github.com/matrix-org/synapse/issues/7481))
- Add support for generating thumbnails for WebP images. Previously, users would see an empty box instead of a preview image. Contributed by @WGH-. ([\#7586](https://github.com/matrix-org/synapse/issues/7586))
- Support the standardized `m.login.sso` user-interactive authentication flow. ([\#7630](https://github.com/matrix-org/synapse/issues/7630))
Bugfixes
--------
- Allow new users to be registered via the admin API even if the monthly active user limit has been reached. Contributed by @dklimpel. ([\#7263](https://github.com/matrix-org/synapse/issues/7263))
- Fix email notifications not being enabled for new users when created via the Admin API. ([\#7267](https://github.com/matrix-org/synapse/issues/7267))
- Fix str placeholders in an instance of `PrepareDatabaseException`. Introduced in Synapse v1.8.0. ([\#7575](https://github.com/matrix-org/synapse/issues/7575))
- Fix a bug in automatic user creation during first time login with `m.login.jwt`. Regression in v1.6.0. Contributed by @olof. ([\#7585](https://github.com/matrix-org/synapse/issues/7585))
- Fix a bug causing the cross-signing keys to be ignored when resyncing a device list. ([\#7594](https://github.com/matrix-org/synapse/issues/7594))
- Fix metrics failing when there is a large number of active background processes. ([\#7597](https://github.com/matrix-org/synapse/issues/7597))
- Fix bug where returning rooms for a group would fail if it included a room that the server was not in. ([\#7599](https://github.com/matrix-org/synapse/issues/7599))
- Fix duplicate key violation when persisting read markers. ([\#7607](https://github.com/matrix-org/synapse/issues/7607))
- Prevent an entire iteration of the device list resync loop from failing if one server responds with a malformed result. ([\#7609](https://github.com/matrix-org/synapse/issues/7609))
- Fix exceptions when fetching events from a remote host fails. ([\#7622](https://github.com/matrix-org/synapse/issues/7622))
- Make `synctl restart` start synapse if it wasn't running. ([\#7624](https://github.com/matrix-org/synapse/issues/7624))
- Pass device information through to the login endpoint when using the login fallback. ([\#7629](https://github.com/matrix-org/synapse/issues/7629))
- Advertise the `m.login.token` login flow when OpenID Connect is enabled. ([\#7631](https://github.com/matrix-org/synapse/issues/7631))
- Fix bug in account data replication stream. ([\#7656](https://github.com/matrix-org/synapse/issues/7656))
Improved Documentation
----------------------
- Update the OpenBSD installation instructions. ([\#7587](https://github.com/matrix-org/synapse/issues/7587))
- Advertise Python 3.8 support in `setup.py`. ([\#7602](https://github.com/matrix-org/synapse/issues/7602))
- Add a link to `#synapse:matrix.org` in the troubleshooting section of the README. ([\#7603](https://github.com/matrix-org/synapse/issues/7603))
- Clarifications to the admin api documentation. ([\#7647](https://github.com/matrix-org/synapse/issues/7647))
Internal Changes
----------------
- Convert the identity handler to async/await. ([\#7561](https://github.com/matrix-org/synapse/issues/7561))
- Improve query performance for fetching state from a PostgreSQL database. Contributed by @ilmari. ([\#7567](https://github.com/matrix-org/synapse/issues/7567))
- Speed up processing of federation stream RDATA rows. ([\#7584](https://github.com/matrix-org/synapse/issues/7584))
- Add comment to systemd example to show postgresql dependency. ([\#7591](https://github.com/matrix-org/synapse/issues/7591))
- Refactor `Ratelimiter` to limit the amount of expensive config value accesses. ([\#7595](https://github.com/matrix-org/synapse/issues/7595))
- Convert groups handlers to async/await. ([\#7600](https://github.com/matrix-org/synapse/issues/7600))
- Clean up exception handling in `SAML2ResponseResource`. ([\#7614](https://github.com/matrix-org/synapse/issues/7614))
- Check that all asynchronous tasks succeed and general cleanup of `MonthlyActiveUsersTestCase` and `TestMauLimit`. ([\#7619](https://github.com/matrix-org/synapse/issues/7619))
- Convert `get_user_id_by_threepid` to async/await. ([\#7620](https://github.com/matrix-org/synapse/issues/7620))
- Switch to upstream `dh-virtualenv` rather than our fork for Debian package builds. ([\#7621](https://github.com/matrix-org/synapse/issues/7621))
- Update CI scripts to check the number in the newsfile fragment. ([\#7623](https://github.com/matrix-org/synapse/issues/7623))
- Check if the localpart of a Matrix ID is reserved for guest users earlier in the registration flow, as well as when responding to requests to `/register/available`. ([\#7625](https://github.com/matrix-org/synapse/issues/7625))
- Minor cleanups to OpenID Connect integration. ([\#7628](https://github.com/matrix-org/synapse/issues/7628))
- Attempt to fix flaky test: `PhoneHomeStatsTestCase.test_performance_100`. ([\#7634](https://github.com/matrix-org/synapse/issues/7634))
- Fix typos of `m.olm.curve25519-aes-sha2` and `m.megolm.v1.aes-sha2` in comments, test files. ([\#7637](https://github.com/matrix-org/synapse/issues/7637))
- Convert user directory, state deltas, and stats handlers to async/await. ([\#7640](https://github.com/matrix-org/synapse/issues/7640))
- Remove some unused constants. ([\#7644](https://github.com/matrix-org/synapse/issues/7644))
- Fix type information on `assert_*_is_admin` methods. ([\#7645](https://github.com/matrix-org/synapse/issues/7645))
- Convert registration handler to async/await. ([\#7649](https://github.com/matrix-org/synapse/issues/7649))
Synapse 1.14.0 (2020-05-28)
===========================
No significant changes.
Synapse 1.14.0rc2 (2020-05-27)
==============================
Bugfixes
--------
- Fix cache config to not apply cache factor to event cache. Regression in v1.14.0rc1. ([\#7578](https://github.com/matrix-org/synapse/issues/7578))
- Fix bug where `ReplicationStreamer` was not always started when replication was enabled. Bug introduced in v1.14.0rc1. ([\#7579](https://github.com/matrix-org/synapse/issues/7579))
- Fix specifying individual cache factors for caches with special characters in their name. Regression in v1.14.0rc1. ([\#7580](https://github.com/matrix-org/synapse/issues/7580))
Improved Documentation
----------------------
- Fix the OIDC `client_auth_method` value in the sample config. ([\#7581](https://github.com/matrix-org/synapse/issues/7581))
Synapse 1.14.0rc1 (2020-05-26)
==============================
Features
--------
- Synapse's cache factor can now be configured in `homeserver.yaml` by the `caches.global_factor` setting. Additionally, `caches.per_cache_factors` controls the cache factors for individual caches (see the sketch after this list). ([\#6391](https://github.com/matrix-org/synapse/issues/6391))
- Add OpenID Connect login/registration support. Contributed by Quentin Gliech, on behalf of [les Connecteurs](https://connecteu.rs). ([\#7256](https://github.com/matrix-org/synapse/issues/7256), [\#7457](https://github.com/matrix-org/synapse/issues/7457))
- Add room details admin endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#7317](https://github.com/matrix-org/synapse/issues/7317))
- Allow for using more than one spam checker module at once. ([\#7435](https://github.com/matrix-org/synapse/issues/7435))
- Add additional authentication checks for `m.room.power_levels` event per [MSC2209](https://github.com/matrix-org/matrix-doc/pull/2209). ([\#7502](https://github.com/matrix-org/synapse/issues/7502))
- Implement room version 6 per [MSC2240](https://github.com/matrix-org/matrix-doc/pull/2240). ([\#7506](https://github.com/matrix-org/synapse/issues/7506))
- Add highly experimental option to move event persistence off master. ([\#7281](https://github.com/matrix-org/synapse/issues/7281), [\#7374](https://github.com/matrix-org/synapse/issues/7374), [\#7436](https://github.com/matrix-org/synapse/issues/7436), [\#7440](https://github.com/matrix-org/synapse/issues/7440), [\#7475](https://github.com/matrix-org/synapse/issues/7475), [\#7490](https://github.com/matrix-org/synapse/issues/7490), [\#7491](https://github.com/matrix-org/synapse/issues/7491), [\#7492](https://github.com/matrix-org/synapse/issues/7492), [\#7493](https://github.com/matrix-org/synapse/issues/7493), [\#7495](https://github.com/matrix-org/synapse/issues/7495), [\#7515](https://github.com/matrix-org/synapse/issues/7515), [\#7516](https://github.com/matrix-org/synapse/issues/7516), [\#7517](https://github.com/matrix-org/synapse/issues/7517), [\#7542](https://github.com/matrix-org/synapse/issues/7542))
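
A minimal `homeserver.yaml` sketch of the cache options mentioned above (`caches.global_factor` and `caches.per_cache_factors`); the individual cache name below is only an illustrative example.

```yaml
caches:
  # Multiplier applied to the default size of every cache.
  global_factor: 1.0
  # Per-cache overrides (illustrative cache name; real names appear in
  # Synapse's cache metrics).
  per_cache_factors:
    get_users_who_share_room_with_user: 2.0
```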
Bugfixes
--------
- Fix a bug where event updates might not be sent over replication to worker processes after the stream falls behind. ([\#7384](https://github.com/matrix-org/synapse/issues/7384))
- Allow expired user accounts to log out their device sessions. ([\#7443](https://github.com/matrix-org/synapse/issues/7443))
- Fix a bug that would cause Synapse not to resync out-of-sync device lists. ([\#7453](https://github.com/matrix-org/synapse/issues/7453))
- Prevent rooms with 0 members or with invalid version strings from breaking group queries. ([\#7465](https://github.com/matrix-org/synapse/issues/7465))
- Workaround for an upstream Twisted bug that caused Synapse to become unresponsive after startup. ([\#7473](https://github.com/matrix-org/synapse/issues/7473))
- Fix Redis reconnection logic that can result in missed updates over replication if master reconnects to Redis without restarting. ([\#7482](https://github.com/matrix-org/synapse/issues/7482))
- When sending `m.room.member` events, omit `displayname` and `avatar_url` if they aren't set instead of setting them to `null`. Contributed by Aaron Raimist. ([\#7497](https://github.com/matrix-org/synapse/issues/7497))
- Fix incorrect `method` label on `synapse_http_matrixfederationclient_{requests,responses}` prometheus metrics. ([\#7503](https://github.com/matrix-org/synapse/issues/7503))
- Ignore incoming presence events from other homeservers if presence is disabled locally. ([\#7508](https://github.com/matrix-org/synapse/issues/7508))
- Fix a long-standing bug that broke the update remote profile background process. ([\#7511](https://github.com/matrix-org/synapse/issues/7511))
- Hash passwords as early as possible during password reset. ([\#7538](https://github.com/matrix-org/synapse/issues/7538))
- Fix bug where a local user leaving a room could fail under rare circumstances. ([\#7548](https://github.com/matrix-org/synapse/issues/7548))
- Fix "Missing RelayState parameter" error when using user interactive authentication with SAML for some SAML providers. ([\#7552](https://github.com/matrix-org/synapse/issues/7552))
- Fix exception `'GenericWorkerReplicationHandler' object has no attribute 'send_federation_ack'`, introduced in v1.13.0. ([\#7564](https://github.com/matrix-org/synapse/issues/7564))
- `synctl` now warns if it was unable to stop Synapse and will not attempt to start Synapse if nothing was stopped. Contributed by Romain Bouyé. ([\#6598](https://github.com/matrix-org/synapse/issues/6598))
Updates to the Docker image
---------------------------
- Update docker runtime image to Alpine v3.11. Contributed by @Starbix. ([\#7398](https://github.com/matrix-org/synapse/issues/7398))
Improved Documentation
----------------------
- Update information about mapping providers for SAML and OpenID. ([\#7458](https://github.com/matrix-org/synapse/issues/7458))
- Add additional reverse proxy example for Caddy v2. Contributed by Jeff Peeler. ([\#7463](https://github.com/matrix-org/synapse/issues/7463))
- Fix copy-paste error in `ServerNoticesConfig` docstring. Contributed by @ptman. ([\#7477](https://github.com/matrix-org/synapse/issues/7477))
- Improve the formatting of `reverse_proxy.md`. ([\#7514](https://github.com/matrix-org/synapse/issues/7514))
- Change the systemd worker service to check that the worker config file exists instead of silently failing. Contributed by David Vo. ([\#7528](https://github.com/matrix-org/synapse/issues/7528))
- Minor clarifications to the TURN docs. ([\#7533](https://github.com/matrix-org/synapse/issues/7533))
Internal Changes
----------------
- Add typing annotations in `synapse.federation`. ([\#7382](https://github.com/matrix-org/synapse/issues/7382))
- Convert the room handler to async/await. ([\#7396](https://github.com/matrix-org/synapse/issues/7396))
- Improve performance of `get_e2e_cross_signing_key`. ([\#7428](https://github.com/matrix-org/synapse/issues/7428))
- Improve performance of `mark_as_sent_devices_by_remote`. ([\#7429](https://github.com/matrix-org/synapse/issues/7429), [\#7562](https://github.com/matrix-org/synapse/issues/7562))
- Add type hints to the SAML handler. ([\#7445](https://github.com/matrix-org/synapse/issues/7445))
- Remove storage method `get_hosts_in_room` that is no longer called anywhere. ([\#7448](https://github.com/matrix-org/synapse/issues/7448))
- Fix some typos in the `notice_expiry` templates. ([\#7449](https://github.com/matrix-org/synapse/issues/7449))
- Convert the federation handler to async/await. ([\#7459](https://github.com/matrix-org/synapse/issues/7459))
- Convert the search handler to async/await. ([\#7460](https://github.com/matrix-org/synapse/issues/7460))
- Add type hints to `synapse.event_auth`. ([\#7505](https://github.com/matrix-org/synapse/issues/7505))
- Convert the room member handler to async/await. ([\#7507](https://github.com/matrix-org/synapse/issues/7507))
- Add type hints to room member handler. ([\#7513](https://github.com/matrix-org/synapse/issues/7513))
- Fix typing annotations in `tests.replication`. ([\#7518](https://github.com/matrix-org/synapse/issues/7518))
- Remove some redundant Python 2 support code. ([\#7519](https://github.com/matrix-org/synapse/issues/7519))
- All endpoints now respond with a 200 OK for `OPTIONS` requests. ([\#7534](https://github.com/matrix-org/synapse/issues/7534), [\#7560](https://github.com/matrix-org/synapse/issues/7560))
- Synapse now exports [detailed allocator statistics](https://doc.pypy.org/en/latest/gc_info.html#gc-get-stats) and basic GC timings as Prometheus metrics (`pypy_gc_time_seconds_total` and `pypy_memory_bytes`) when run under PyPy. Contributed by Ivan Shapovalov. ([\#7536](https://github.com/matrix-org/synapse/issues/7536))
- Remove Ubuntu Cosmic and Disco from the list of distributions which we provide `.deb`s for, due to end-of-life. ([\#7539](https://github.com/matrix-org/synapse/issues/7539))
- Make worker processes return a stubbed-out response to `GET /presence` requests. ([\#7545](https://github.com/matrix-org/synapse/issues/7545))
- Optimise some references to `hs.config`. ([\#7546](https://github.com/matrix-org/synapse/issues/7546))
- On room upgrade, only send the canonical alias event once. ([\#7547](https://github.com/matrix-org/synapse/issues/7547))
- Fix some indentation inconsistencies in the sample config. ([\#7550](https://github.com/matrix-org/synapse/issues/7550))
- Include `synapse.http.site` in type checking. ([\#7553](https://github.com/matrix-org/synapse/issues/7553))
- Fix some test code to not mangle stacktraces, to make it easier to debug errors. ([\#7554](https://github.com/matrix-org/synapse/issues/7554))
- Refresh apt cache when building `dh_virtualenv` docker image. ([\#7555](https://github.com/matrix-org/synapse/issues/7555))
- Stop logging some expected HTTP request errors as exceptions. ([\#7556](https://github.com/matrix-org/synapse/issues/7556), [\#7563](https://github.com/matrix-org/synapse/issues/7563))
- Convert sending mail to async/await. ([\#7557](https://github.com/matrix-org/synapse/issues/7557))
- Simplify `reap_monthly_active_users`. ([\#7558](https://github.com/matrix-org/synapse/issues/7558))
Synapse 1.13.0 (2020-05-19)
===========================
This release brings some potential changes necessary for certain
configurations of Synapse:
@@ -625,53 +34,6 @@ configurations of Synapse:
Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes
and for general upgrade guidance.
Notice of change to the default `git` branch for Synapse
--------------------------------------------------------
With the release of Synapse 1.13.0, the default `git` branch for Synapse has
changed to `develop`, which is the development tip. This is more consistent with
common practice and modern `git` usage.
The `master` branch, which tracks the latest release, is still available. Developers
and distributors with scripts that run builds using the default branch of Synapse
should therefore consider pinning those scripts to `master`.
Internal Changes
----------------
- Update the version of dh-virtualenv we use to build debs, and add focal to the list of target distributions. ([\#7526](https://github.com/matrix-org/synapse/issues/7526))
Synapse 1.13.0rc3 (2020-05-18)
==============================
Bugfixes
--------
- Hash passwords as early as possible during registration. ([\#7523](https://github.com/matrix-org/synapse/issues/7523))
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
Synapse 1.13.0rc1 (2020-05-11)
==============================
Features
--------

View File

@@ -1,48 +1,62 @@
# Contributing code to Synapse
# Contributing code to Matrix
Everyone is welcome to contribute code to [matrix.org
projects](https://github.com/matrix-org), provided that they are willing to
license their contributions under the same license as the project itself. We
follow a simple 'inbound=outbound' model for contributions: the act of
submitting an 'inbound' contribution means that the contributor agrees to
license the code under the same terms as the project's overall 'outbound'
license - in our case, this is almost always Apache Software License v2 (see
[LICENSE](LICENSE)).
Everyone is welcome to contribute code to Matrix
(https://github.com/matrix-org), provided that they are willing to license
their contributions under the same license as the project itself. We follow a
simple 'inbound=outbound' model for contributions: the act of submitting an
'inbound' contribution means that the contributor agrees to license the code
under the same terms as the project's overall 'outbound' license - in our
case, this is almost always Apache Software License v2 (see [LICENSE](LICENSE)).
## How to contribute
The preferred and easiest way to contribute changes is to fork the relevant
project on github, and then [create a pull request](
https://help.github.com/articles/using-pull-requests/) to ask us to pull your
changes into our repo.
The preferred and easiest way to contribute changes to Matrix is to fork the
relevant project on github, and then [create a pull request](
https://help.github.com/articles/using-pull-requests/) to ask us to pull
your changes into our repo.
Some other points to follow:
* Please base your changes on the `develop` branch.
* Please follow the [code style requirements](#code-style).
**The single biggest thing you need to know is: please base your changes on
the develop branch - *not* master.**
* Please include a [changelog entry](#changelog) with each PR.
We use the master branch to track the most recent release, so that folks who
blindly clone the repo and automatically check out master get something that
works. Develop is the unstable branch where all the development actually
happens: the workflow is that contributors should fork the develop branch to
make a 'feature' branch for a particular contribution, and then make a pull
request to merge this back into the matrix.org 'official' develop branch. We
use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
* Please [sign off](#sign-off) your contribution.
We use [Buildkite](https://buildkite.com/matrix-dot-org/synapse) for continuous
integration. If your change breaks the build, this will be shown in GitHub, so
please keep an eye on the pull request for feedback.
* Please keep an eye on the pull request for feedback from the [continuous
integration system](#continuous-integration-and-testing) and try to fix any
errors that come up.
To run unit tests in a local development environment, you can use:
* If you need to [update your PR](#updating-your-pull-request), just add new
commits to your branch rather than rebasing.
- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
(requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
(requires Docker). Entirely self-contained, recommended if you don't want to
set up PostgreSQL yourself.
Docker images are available for running the integration tests (SyTest) locally,
see the [documentation in the SyTest repo](
https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
information.
## Code style
Synapse's code style is documented [here](docs/code_style.md). Please follow
it, including the conventions for the [sample configuration
file](docs/code_style.md#configuration-file-format).
All Matrix projects have a well-defined code-style - and sometimes we've even
got as far as documenting it... For instance, synapse's code style doc lives
[here](docs/code_style.md).
Many of the conventions are enforced by scripts which are run as part of the
[continuous integration system](#continuous-integration-and-testing). To help
check if you have followed the code style, you can run `scripts-dev/lint.sh`
locally. You'll need python 3.6 or later, and to install a number of tools:
To facilitate meeting these criteria you can run `scripts-dev/lint.sh`
locally. Since this runs the tools listed in the above document, you'll need
python 3.6 and to install each tool:
```
# Install the dependencies
@@ -53,11 +67,9 @@ pip install -U black flake8 flake8-comprehensions isort
```
**Note that the script does not just test/check, but also reformats code, so you
may wish to ensure any new code is committed first**.
By default, this script checks all files and can take some time; if you alter
only certain files, you might wish to specify paths as arguments to reduce the
run-time:
may wish to ensure any new code is committed first**. By default this script
checks all files and can take some time; if you alter only certain files, you
might wish to specify paths as arguments to reduce the run-time:
```
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
@@ -70,6 +82,7 @@ Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
## Changelog
All changes, even minor ones, need a corresponding changelog / newsfragment
@@ -85,55 +98,24 @@ in the format of `PRnumber.type`. The type can be one of the following:
* `removal` (also used for deprecations)
* `misc` (for internal-only changes)
This file will become part of our [changelog](
https://github.com/matrix-org/synapse/blob/master/CHANGES.md) at the next
release, so the content of the file should be a short description of your
change in the same style as the rest of the changelog. The file can contain Markdown
formatting, and should end with a full stop (.) or an exclamation mark (!) for
consistency.
The content of the file is your changelog entry, which should be a short
description of your change in the same style as the rest of our [changelog](
https://github.com/matrix-org/synapse/blob/master/CHANGES.md). The file can
contain Markdown formatting, and should end with a full stop (.) or an
exclamation mark (!) for consistency.
Adding credits to the changelog is encouraged, we value your
contributions and would like to have you shouted out in the release notes!
For example, a fix in PR #1234 would have its changelog entry in
`changelog.d/1234.bugfix`, and contain content like:
`changelog.d/1234.bugfix`, and contain content like "The security levels of
Florbs are now validated when received over federation. Contributed by Jane
Matrix.".
> The security levels of Florbs are now validated when received
> via the `/federation/florb` endpoint. Contributed by Jane Matrix.
If there are multiple pull requests involved in a single bugfix/feature/etc,
then the content for each `changelog.d` file should be the same. Towncrier will
merge the matching files together into a single changelog entry when we come to
release.
### How do I know what to call the changelog file before I create the PR?
Obviously, you don't know if you should call your newsfile
`1234.bugfix` or `5678.bugfix` until you create the PR, which leads to a
chicken-and-egg problem.
There are two options for solving this:
1. Open the PR without a changelog file, see what number you got, and *then*
add the changelog file to your branch (see [Updating your pull
request](#updating-your-pull-request)), or:
1. Look at the [list of all
issues/PRs](https://github.com/matrix-org/synapse/issues?q=), add one to the
highest number you see, and quickly open the PR before somebody else claims
your number.
[This
script](https://github.com/richvdh/scripts/blob/master/next_github_number.sh)
might be helpful if you find yourself doing this a lot.
Sorry, we know it's a bit fiddly, but it's *really* helpful for us when we come
to put together a release!
### Debian changelog
## Debian changelog
Changes which affect the debian packaging files (in `debian`) are an
exception to the rule that all changes require a `changelog.d` file.
exception.
In this case, you will need to add an entry to the debian changelog for the
next release. For this, run the following command:
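A typical way to do this is with the `dch` tool from the `devscripts` package
(shown here as a sketch; check the full contributing guide for the project's
exact invocation):

```
# Opens an editor with a new debian/changelog entry for the next release
# (illustrative; requires the devscripts package to be installed)
dch
```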
@@ -218,45 +200,19 @@ Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
## Continuous integration and testing
## Merge Strategy
[Buildkite](https://buildkite.com/matrix-dot-org/synapse) will automatically
run a series of checks and tests against any PR which is opened against the
project; if your change breaks the build, this will be shown in GitHub, with
links to the build results. If your build fails, please try to fix the errors
and update your branch.
We use the commit history of develop/master extensively to identify
when regressions were introduced and what changes have been made.
To run unit tests in a local development environment, you can use:
We aim to have a clean merge history, which means we normally squash-merge
changes into develop. For small changes this means there is no need to rebase
to clean up your PR before merging. Larger changes with an organised set of
commits may be merged as-is, if the history is judged to be useful.
- ``tox -e py35`` (requires tox to be installed by ``pip install tox``)
for SQLite-backed Synapse on Python 3.5.
- ``tox -e py36`` for SQLite-backed Synapse on Python 3.6.
- ``tox -e py36-postgres`` for PostgreSQL-backed Synapse on Python 3.6
(requires a running local PostgreSQL with access to create databases).
- ``./test_postgresql.sh`` for PostgreSQL-backed Synapse on Python 3.5
(requires Docker). Entirely self-contained, recommended if you don't want to
set up PostgreSQL yourself.
Docker images are available for running the integration tests (SyTest) locally,
see the [documentation in the SyTest repo](
https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
information.
## Updating your pull request
If you decide to make changes to your pull request - perhaps to address issues
raised in a review, or to fix problems highlighted by [continuous
integration](#continuous-integration-and-testing) - just add new commits to your
branch, and push to GitHub. The pull request will automatically be updated.
Please **avoid** rebasing your branch, especially once the PR has been
reviewed: doing so makes it very difficult for a reviewer to see what has
changed since a previous review.
## Notes for maintainers on merging PRs etc
There are some notes for those with commit access to the project on how we
manage git [here](docs/dev/git.md).
This use of squash-merging will mean PRs built on each other will be hard to
merge. We suggest avoiding these where possible, and if required, ensuring
each PR has a tidy set of commits to ease merging.
## Conclusion

View File

@@ -1,12 +1,10 @@
- [Choosing your server name](#choosing-your-server-name)
- [Picking a database engine](#picking-a-database-engine)
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
- [Platform-Specific Instructions](#platform-specific-instructions)
- [Prebuilt packages](#prebuilt-packages)
- [Setting up Synapse](#setting-up-synapse)
- [TLS certificates](#tls-certificates)
- [Client Well-Known URI](#client-well-known-uri)
- [Email](#email)
- [Registering a user](#registering-a-user)
- [Setting up a TURN server](#setting-up-a-turn-server)
@@ -29,25 +27,6 @@ that your email address is probably `user@example.com` rather than
`user@email.example.com`) - but doing so may require more advanced setup: see
[Setting up Federation](docs/federate.md).
# Picking a database engine
Synapse offers two database engines:
* [PostgreSQL](https://www.postgresql.org)
* [SQLite](https://sqlite.org/)
Almost all installations should opt to use PostgreSQL. Advantages include:
* significant performance improvements due to the superior threading and
caching model, smarter query optimiser
* allowing the DB to be run on separate hardware
For information on how to install and use PostgreSQL, please see
[docs/postgres.md](docs/postgres.md)
By default Synapse uses SQLite and in doing so trades performance for convenience.
SQLite is only recommended in Synapse for testing purposes or for servers with
light workloads.
# Installing Synapse
## Installing from source
@@ -201,41 +180,35 @@ sudo zypper in python-pip python-setuptools sqlite3 python-virtualenv \
#### OpenBSD
A port of Synapse is available under `net/synapse`. The filesystem
underlying the homeserver directory (defaults to `/var/synapse`) has to be
mounted with `wxallowed` (cf. `mount(8)`), so creating a separate filesystem
and mounting it to `/var/synapse` should be taken into consideration.
To be able to build Synapse's dependency on python the `WRKOBJDIR`
(cf. `bsd.port.mk(5)`) for building python, too, needs to be on a filesystem
mounted with `wxallowed` (cf. `mount(8)`).
Creating a `WRKOBJDIR` for building python under `/usr/local` (which on a
default OpenBSD installation is mounted with `wxallowed`):
Installing prerequisites on OpenBSD:
```
doas mkdir /usr/local/pobj_wxallowed
doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \
libxslt jpeg
```
Assuming `PORTS_PRIVSEP=Yes` (cf. `bsd.port.mk(5)`) and `SUDO=doas` are
configured in `/etc/mk.conf`:
There is currently no port for OpenBSD. Additionally, OpenBSD's security
settings require a slightly more difficult installation process.
```
doas chown _pbuild:_pbuild /usr/local/pobj_wxallowed
```
(XXX: I suspect this is out of date)
Setting the `WRKOBJDIR` for building python:
1. Create a new directory in `/usr/local` called `_synapse`. Also, create a
new user called `_synapse` and set that directory as the new user's home.
This is required because, by default, OpenBSD only allows binaries which need
write and execute permissions on the same memory space to be run from
`/usr/local`.
2. `su` to the new `_synapse` user and change to their home directory.
3. Create a new virtualenv: `virtualenv -p python3 ~/.synapse`
4. Source the virtualenv configuration located at
`/usr/local/_synapse/.synapse/bin/activate`. This is done in `ksh` by
using the `.` command, rather than `bash`'s `source`.
5. Optionally, use `pip` to install `lxml`, which Synapse needs to parse
webpages for their titles.
6. Use `pip` to install this repository: `pip install matrix-synapse`
7. Optionally, change `_synapse`'s shell to `/bin/false` to reduce the
chance of a compromised Synapse server being used to take over your box.
```
echo WRKOBJDIR_lang/python/3.7=/usr/local/pobj_wxallowed \\nWRKOBJDIR_lang/python/2.7=/usr/local/pobj_wxallowed >> /etc/mk.conf
```
Building Synapse:
```
cd /usr/ports/net/synapse
make install
```
After this, you may proceed with the rest of the install directions.
#### Windows
@@ -255,9 +228,9 @@ for a number of platforms.
There is an offical synapse image available at
https://hub.docker.com/r/matrixdotorg/synapse which can be used with
the docker-compose file available at [contrib/docker](contrib/docker). Further
information on this including configuration options is available in the README
on hub.docker.com.
the docker-compose file available at [contrib/docker](contrib/docker). Further information on
this including configuration options is available in the README on
hub.docker.com.
Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
Dockerfile to automate a synapse server in a single Docker image, at
@@ -265,8 +238,7 @@ https://hub.docker.com/r/avhost/docker-matrix/tags/
Slavi Pantaleev has created an Ansible playbook,
which installs the offical Docker image of Matrix Synapse
along with many other Matrix-related services (Postgres database, Element, coturn,
ma1sd, SSL support, etc.).
along with many other Matrix-related services (Postgres database, riot-web, coturn, mxisd, SSL support, etc.).
For more details, see
https://github.com/spantaleev/matrix-docker-ansible-deploy
@@ -299,27 +271,22 @@ The fingerprint of the repository signing key (as shown by `gpg
/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
#### Downstream Debian packages
#### Downstream Debian/Ubuntu packages
We do not recommend using the packages from the default Debian `buster`
repository at this time, as they are old and suffer from known security
vulnerabilities. You can install the latest version of Synapse from
[our repository](#matrixorg-packages) or from `buster-backports`. Please
see the [Debian documentation](https://backports.debian.org/Instructions/)
for information on how to use backports.
If you are using Debian `sid` or testing, Synapse is available in the default
repositories and it should be possible to install it simply with:
For `buster` and `sid`, Synapse is available in the Debian repositories and
it should be possible to install it with simply:
```
sudo apt install matrix-synapse
```
#### Downstream Ubuntu packages
There is also a version of `matrix-synapse` in `stretch-backports`. Please see
the [Debian documentation on
backports](https://backports.debian.org/Instructions/) for information on how
to use them.
We do not recommend using the packages in the default Ubuntu repository
at this time, as they are old and suffer from known security vulnerabilities.
The latest version of Synapse can be installed from [our repository](#matrixorg-packages).
We do not recommend using the packages in downstream Ubuntu at this time, as
they are old and suffer from known security vulnerabilities.
### Fedora
@@ -383,18 +350,6 @@ Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Mo
- Ports: `cd /usr/ports/net-im/py-matrix-synapse && make install clean`
- Packages: `pkg install py37-matrix-synapse`
### OpenBSD
As of OpenBSD 6.7 Synapse is available as a pre-compiled binary. The filesystem
underlying the homeserver directory (defaults to `/var/synapse`) has to be
mounted with `wxallowed` (cf. `mount(8)`), so creating a separate filesystem
and mounting it to `/var/synapse` should be taken into consideration.
Installing Synapse:
```
doas pkg_add synapse
```
### NixOS
@@ -432,11 +387,13 @@ so, you will need to edit `homeserver.yaml`, as follows:
```
* You will also need to uncomment the `tls_certificate_path` and
`tls_private_key_path` lines under the `TLS` section. You will need to manage
provisioning of these certificates yourself — Synapse had built-in ACME
support, but the ACMEv1 protocol Synapse implements is deprecated, not
allowed by LetsEncrypt for new sites, and will break for existing sites in
late 2020. See [ACME.md](docs/ACME.md).
`tls_private_key_path` lines under the `TLS` section. You can either
point these settings at an existing certificate and key, or you can
enable Synapse's built-in ACME (Let's Encrypt) support. Instructions
for having Synapse automatically provision and renew federation
certificates through ACME can be found at [ACME.md](docs/ACME.md).
Note that, as pointed out in that document, this feature will not
work with installs set up after November 2019.
If you are using your own certificate, be sure to use a `.pem` file that
includes the full certificate chain including any intermediate certificates
@@ -446,60 +403,6 @@ so, you will need to edit `homeserver.yaml`, as follows:
For a more detailed guide to configuring your server for federation, see
[federate.md](docs/federate.md).
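For instance, a minimal sketch of the relevant `homeserver.yaml` settings
mentioned above (the paths are placeholders for wherever your certificate and
key actually live):

```
tls_certificate_path: "/path/to/fullchain.pem"
tls_private_key_path: "/path/to/privkey.pem"
```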
## Client Well-Known URI
Setting up the client Well-Known URI is optional but if you set it up, it will
allow users to enter their full username (e.g. `@user:<server_name>`) into clients
which support well-known lookup to automatically configure the homeserver and
identity server URLs. This is useful so that users don't have to memorize or think
about the actual homeserver URL you are using.
The URL `https://<server_name>/.well-known/matrix/client` should return JSON in
the following format.
```
{
"m.homeserver": {
"base_url": "https://<matrix.example.com>"
}
}
```
It can optionally contain identity server information as well.
```
{
"m.homeserver": {
"base_url": "https://<matrix.example.com>"
},
"m.identity_server": {
"base_url": "https://<identity.example.com>"
}
}
```
To work in browser based clients, the file must be served with the appropriate
Cross-Origin Resource Sharing (CORS) headers. A recommended value would be
`Access-Control-Allow-Origin: *` which would allow all browser based clients to
view it.
In nginx this would be something like:
```
location /.well-known/matrix/client {
return 200 '{"m.homeserver": {"base_url": "https://<matrix.example.com>"}}';
add_header Content-Type application/json;
add_header Access-Control-Allow-Origin *;
}
```
You should also ensure the `public_baseurl` option in `homeserver.yaml` is set
correctly. `public_baseurl` should be set to the URL that clients will use to
connect to your server. This is the same URL you put for the `m.homeserver`
`base_url` above.
```
public_baseurl: "https://<matrix.example.com>"
```
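To sanity-check the setup, you can fetch the file directly and confirm that
both the JSON body and the CORS header come back (the hostname is a
placeholder):

```
curl -i https://<server_name>/.well-known/matrix/client
```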
## Email
@@ -518,7 +421,7 @@ email will be disabled.
## Registering a user
The easiest way to create a new user is to do so from a client like [Element](https://element.io/).
The easiest way to create a new user is to do so from a client like [Riot](https://riot.im).
Alternatively you can do so from the command line if you have installed via pip.
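For example, assuming a `registration_shared_secret` is configured in
`homeserver.yaml` and Synapse is listening on its default port, something like
the following should work (a sketch, not the only possible invocation):

```
register_new_matrix_user -c homeserver.yaml http://localhost:8008
```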

View File

@@ -1,11 +1,3 @@
================
Synapse |shield|
================
.. |shield| image:: https://img.shields.io/matrix/synapse:matrix.org?label=support&logo=matrix
:alt: (get support on #synapse:matrix.org)
:target: https://matrix.to/#/#synapse:matrix.org
.. contents::
Introduction
@@ -45,7 +37,7 @@ which handle:
- Eventually-consistent cryptographically secure synchronisation of room
state across a global open network of federated servers and services
- Sending and receiving extensible messages in a room with (optional)
end-to-end encryption
end-to-end encryption[1]
- Inviting, joining, leaving, kicking, banning room members
- Managing user accounts (registration, login, logout)
- Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers,
@@ -82,15 +74,7 @@ at the `Matrix spec <https://matrix.org/docs/spec>`_, and experiment with the
Thanks for using Matrix!
Support
=======
For support installing or managing Synapse, please join |room|_ (from a matrix.org
account if necessary) and ask questions there. We do not use GitHub issues for
support requests, only for bug reports and feature requests.
.. |room| replace:: ``#synapse:matrix.org``
.. _room: https://matrix.to/#/#synapse:matrix.org
[1] End-to-end encryption is currently in beta: `blog post <https://matrix.org/blog/2016/11/21/matrixs-olm-end-to-end-encryption-security-assessment-released-and-implemented-cross-platform-on-riot-at-last>`_.
Synapse Installation
@@ -112,11 +96,12 @@ Unless you are running a test instance of Synapse on your local machine, in
general, you will need to enable TLS support before you can successfully
connect from a client: see `<INSTALL.md#tls-certificates>`_.
An easy way to get started is to login or register via Element at
https://app.element.io/#/login or https://app.element.io/#/register respectively.
An easy way to get started is to login or register via Riot at
https://riot.im/app/#/login or https://riot.im/app/#/register respectively.
You will need to change the server you are logging into from ``matrix.org``
and instead specify a Homeserver URL of ``https://<server_name>:8448``
(or just ``https://<server_name>`` if you are using a reverse proxy).
(Leave the identity server as the default - see `Identity servers`_.)
If you prefer to use another client, refer to our
`client breakdown <https://matrix.org/docs/projects/clients-matrix>`_.
@@ -133,7 +118,7 @@ it, specify ``enable_registration: true`` in ``homeserver.yaml``. (It is then
recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.md>`_.)
Once ``enable_registration`` is set to ``true``, it is possible to register a
user via a Matrix client.
user via `riot.im <https://riot.im/app/#/register>`_ or other Matrix clients.
Your new user name will be formed partly from the ``server_name``, and partly
from a localpart you specify when you create the account. Your name will take
@@ -179,6 +164,30 @@ versions of synapse.
.. _UPGRADE.rst: UPGRADE.rst
Using PostgreSQL
================
Synapse offers two database engines:
* `SQLite <https://sqlite.org/>`_
* `PostgreSQL <https://www.postgresql.org>`_
By default Synapse uses SQLite and in doing so trades performance for convenience.
SQLite is only recommended in Synapse for testing purposes or for servers with
light workloads.
Almost all installations should opt to use PostgreSQL. Advantages include:
* significant performance improvements due to the superior threading and
caching model, smarter query optimiser
* allowing the DB to be run on separate hardware
* allowing basic active/backup high-availability with a "hot spare" synapse
pointing at the same DB master, as well as enabling DB replication in
synapse itself.
For information on how to install and use PostgreSQL, please see
`docs/postgres.md <docs/postgres.md>`_.
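As a rough sketch, a PostgreSQL-backed Synapse will have a ``database`` section
in ``homeserver.yaml`` along the following lines (the credentials and host are
placeholders; see the documentation above for the authoritative details):

.. code:: yaml

   database:
     name: psycopg2
     args:
       user: synapse_user
       password: secretpassword
       database: synapse
       host: localhost
       cp_min: 5
       cp_max: 10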
.. _reverse-proxy:
Using a reverse proxy with Synapse
@@ -187,7 +196,7 @@ Using a reverse proxy with Synapse
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_ or
`Caddy <https://caddyserver.com/docs/proxy>`_ or
`HAProxy <https://www.haproxy.org/>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
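As an illustration only (real deployments should consult the reverse proxy
documentation shipped with Synapse), an nginx virtual host forwarding Matrix
traffic might look roughly like this:

.. code:: nginx

   server {
       listen 443 ssl;
       server_name matrix.example.com;

       location /_matrix {
           proxy_pass http://localhost:8008;
           proxy_set_header X-Forwarded-For $remote_addr;
       }
   }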
@@ -227,9 +236,10 @@ email address.
Password reset
==============
Users can reset their password through their client. Alternatively, a server admin
can reset a user's password using the `admin API <docs/admin_api/user_admin_api.rst#reset-password>`_
or by directly editing the database as shown below.
If a user has registered an email address to their account using an identity
server, they can request a password-reset token via clients such as Riot.
A manual password reset can be done via direct database access as follows.
First calculate the hash of the new password::
@@ -238,7 +248,7 @@ First calculate the hash of the new password::
Confirm password:
$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Then update the ``users`` table in the database::
Then update the `users` table in the database::
UPDATE users SET password_hash='$2a$12$xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
WHERE name='@test:test.com';
@@ -306,9 +316,6 @@ Building internal API documentation::
Troubleshooting
===============
Need help? Join our community support room on Matrix:
`#synapse:matrix.org <https://matrix.to/#/#synapse:matrix.org>`_
Running out of File Handles
---------------------------

View File

@@ -75,34 +75,10 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.18.0
====================
Docker `-py3` suffix will be removed in future versions
-------------------------------------------------------
From 10th August 2020, we will no longer publish Docker images with the `-py3` tag suffix. The images tagged with the `-py3` suffix have been identical to the non-suffixed tags since release 0.99.0, and the suffix is obsolete.
On 10th August, we will remove the `latest-py3` tag. Existing per-release tags (such as `v1.18.0-py3`) will not be removed, but no new `-py3` tags will be added.
Scripts relying on the `-py3` suffix will need to be updated.
Redis replication is now recommended in lieu of TCP replication
---------------------------------------------------------------
When setting up worker processes, we now recommend the use of a Redis server for replication. **The old direct TCP connection method is deprecated and will be removed in a future release.**
See `docs/workers.md <docs/workers.md>`_ for more details.
Upgrading to v1.14.0
====================
This version includes a database update which is run as part of the upgrade,
and which may take a couple of minutes in the case of a large server. Synapse
will not respond to HTTP requests while this update is taking place.
Upgrading to v1.13.0
====================
Incorrect database migration in old synapse versions
----------------------------------------------------
@@ -160,12 +136,12 @@ back to v1.12.4 you need to:
2. Decrease the schema version in the database:
.. code:: sql
UPDATE schema_version SET version = 57;
3. Downgrade Synapse by following the instructions for your installation method
in the "Rolling back to older versions" section above.
Upgrading to v1.12.0
====================

1
changelog.d/6391.feature Normal file
View File

@@ -0,0 +1 @@
Synapse's cache factor can now be configured in `homeserver.yaml` by the `caches.global_factor` setting. Additionally, `caches.per_cache_factors` controls the cache factors for individual caches.

1
changelog.d/7256.feature Normal file
View File

@@ -0,0 +1 @@
Add OpenID Connect login/registration support. Contributed by Quentin Gliech, on behalf of [les Connecteurs](https://connecteu.rs).

1
changelog.d/7281.misc Normal file
View File

@@ -0,0 +1 @@
Add MultiWriterIdGenerator to support multiple concurrent writers of streams.

1
changelog.d/7317.feature Normal file
View File

@@ -0,0 +1 @@
Add room details admin endpoint. Contributed by Awesome Technologies Innovationslabor GmbH.

1
changelog.d/7374.misc Normal file
View File

@@ -0,0 +1 @@
Move catchup of replication streams logic to worker.

1
changelog.d/7382.misc Normal file
View File

@@ -0,0 +1 @@
Add typing annotations in `synapse.federation`.

1
changelog.d/7396.misc Normal file
View File

@@ -0,0 +1 @@
Convert the room handler to async/await.

1
changelog.d/7398.docker Normal file
View File

@@ -0,0 +1 @@
Update docker runtime image to Alpine v3.11. Contributed by @Starbix.

1
changelog.d/7428.misc Normal file
View File

@@ -0,0 +1 @@
Improve performance of `get_e2e_cross_signing_key`.

1
changelog.d/7429.misc Normal file
View File

@@ -0,0 +1 @@
Improve performance of `mark_as_sent_devices_by_remote`.

1
changelog.d/7435.feature Normal file
View File

@@ -0,0 +1 @@
Allow for using more than one spam checker module at once.

1
changelog.d/7436.misc Normal file
View File

@@ -0,0 +1 @@
Support any process writing to cache invalidation stream.

1
changelog.d/7440.misc Normal file
View File

@@ -0,0 +1 @@
Refactor event persistence database functions in preparation for allowing them to be run on non-master processes.

1
changelog.d/7445.misc Normal file
View File

@@ -0,0 +1 @@
Add type hints to the SAML handler.

1
changelog.d/7448.misc Normal file
View File

@@ -0,0 +1 @@
Remove storage method `get_hosts_in_room` that is no longer called anywhere.

1
changelog.d/7449.misc Normal file
View File

@@ -0,0 +1 @@
Fix some typos in the notice_expiry templates.

1
changelog.d/7458.doc Normal file
View File

@@ -0,0 +1 @@
Update information about mapping providers for SAML and OpenID.

1
changelog.d/7459.misc Normal file
View File

@@ -0,0 +1 @@
Convert the federation handler to async/await.

1
changelog.d/7460.misc Normal file
View File

@@ -0,0 +1 @@
Convert the search handler to async/await.

1
changelog.d/7470.misc Normal file
View File

@@ -0,0 +1 @@
Fix linting errors in new version of Flake8.

1
changelog.d/7475.misc Normal file
View File

@@ -0,0 +1 @@
Have all instances correctly respond to the REPLICATE command.

1
changelog.d/7477.doc Normal file
View File

@@ -0,0 +1 @@
Fix copy-paste error in `ServerNoticesConfig` docstring. Contributed by @ptman.

1
changelog.d/7482.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix Redis reconnection logic that can result in missed updates over replication if master reconnects to Redis without restarting.

1
changelog.d/7490.misc Normal file
View File

@@ -0,0 +1 @@
Clean up replication unit tests.

1
changelog.d/7491.misc Normal file
View File

@@ -0,0 +1 @@
Move event stream handling out of slave store.

1
changelog.d/7492.misc Normal file
View File

@@ -0,0 +1 @@
Allow censoring of events to happen on workers.

1
changelog.d/7493.misc Normal file
View File

@@ -0,0 +1 @@
Move EventStream handling into default ReplicationDataHandler.

1
changelog.d/7495.feature Normal file
View File

@@ -0,0 +1 @@
Add `instance_map` config and route replication calls.

View File

@@ -17,6 +17,9 @@
""" Starts a synapse client console. """
from __future__ import print_function
from twisted.internet import reactor, defer, threads
from http import TwistedHttpClient
import argparse
import cmd
import getpass
@@ -25,14 +28,12 @@ import shlex
import sys
import time
import urllib
from http import TwistedHttpClient
import nacl.encoding
import nacl.signing
import urlparse
from signedjson.sign import SignatureVerifyException, verify_signed_json
from twisted.internet import defer, reactor, threads
import nacl.signing
import nacl.encoding
from signedjson.sign import verify_signed_json, SignatureVerifyException
CONFIG_JSON = "cmdclient_config.json"
@@ -492,7 +493,7 @@ class SynapseCmd(cmd.Cmd):
"list messages <roomid> from=END&to=START&limit=3"
"""
args = self._parse(line, ["type", "roomid", "qp"])
if "type" not in args or "roomid" not in args:
if not "type" in args or not "roomid" in args:
print("Must specify type and room ID.")
return
if args["type"] not in ["members", "messages"]:
@@ -507,7 +508,7 @@ class SynapseCmd(cmd.Cmd):
try:
key_value = key_value_str.split("=")
qp[key_value[0]] = key_value[1]
except Exception:
except:
print("Bad query param: %s" % key_value)
return
@@ -584,7 +585,7 @@ class SynapseCmd(cmd.Cmd):
parsed_url = urlparse.urlparse(args["path"])
qp.update(urlparse.parse_qs(parsed_url.query))
args["path"] = parsed_url.path
except Exception:
except:
pass
reactor.callFromThread(
@@ -609,15 +610,13 @@ class SynapseCmd(cmd.Cmd):
@defer.inlineCallbacks
def _do_event_stream(self, timeout):
res = yield defer.ensureDeferred(
self.http_client.get_json(
self._url() + "/events",
{
"access_token": self._tok(),
"timeout": str(timeout),
"from": self.event_stream_token,
},
)
res = yield self.http_client.get_json(
self._url() + "/events",
{
"access_token": self._tok(),
"timeout": str(timeout),
"from": self.event_stream_token,
},
)
print(json.dumps(res, indent=4))
@@ -773,10 +772,10 @@ def main(server_url, identity_server_url, username, token, config_path):
syn_cmd.config = json.load(config)
try:
http_client.verbose = "on" == syn_cmd.config["verbose"]
except Exception:
except:
pass
print("Loaded config from %s" % config_path)
except Exception:
except:
pass
# Twisted-specific: Runs the command processor in Twisted's event loop

View File

@@ -14,14 +14,14 @@
# limitations under the License.
from __future__ import print_function
from twisted.web.client import Agent, readBody
from twisted.web.http_headers import Headers
from twisted.internet import defer, reactor
from pprint import pformat
import json
import urllib
from pprint import pformat
from twisted.internet import defer, reactor
from twisted.web.client import Agent, readBody
from twisted.web.http_headers import Headers
class HttpClient(object):

View File

@@ -50,7 +50,7 @@ services:
- traefik.http.routers.https-synapse.tls.certResolver=le-ssl
db:
image: docker.io/postgres:12-alpine
image: docker.io/postgres:10-alpine
# Change that password, of course!
environment:
- POSTGRES_USER=synapse

View File

@@ -28,24 +28,27 @@ Currently assumes the local address is localhost:<port>
"""
from synapse.federation import ReplicationHandler
from synapse.federation.units import Pdu
from synapse.util import origin_from_ucid
from synapse.app.homeserver import SynapseHomeServer
# from synapse.logging.utils import log_function
from twisted.internet import reactor, defer
from twisted.python import log
import argparse
import curses.wrapper
import json
import logging
import os
import re
import cursesio
from twisted.internet import defer, reactor
from twisted.python import log
from synapse.app.homeserver import SynapseHomeServer
from synapse.federation import ReplicationHandler
from synapse.federation.units import Pdu
from synapse.util import origin_from_ucid
# from synapse.logging.utils import log_function
import curses.wrapper
logger = logging.getLogger("example")
@@ -72,7 +75,7 @@ class InputOutput(object):
"""
try:
m = re.match(r"^join (\S+)$", line)
m = re.match("^join (\S+)$", line)
if m:
# The `sender` wants to join a room.
(room_name,) = m.groups()
@@ -81,7 +84,7 @@ class InputOutput(object):
# self.print_line("OK.")
return
m = re.match(r"^invite (\S+) (\S+)$", line)
m = re.match("^invite (\S+) (\S+)$", line)
if m:
# `sender` wants to invite someone to a room
room_name, invitee = m.groups()
@@ -90,7 +93,7 @@ class InputOutput(object):
# self.print_line("OK.")
return
m = re.match(r"^send (\S+) (.*)$", line)
m = re.match("^send (\S+) (.*)$", line)
if m:
# `sender` wants to message a room
room_name, body = m.groups()
@@ -99,7 +102,7 @@ class InputOutput(object):
# self.print_line("OK.")
return
m = re.match(r"^backfill (\S+)$", line)
m = re.match("^backfill (\S+)$", line)
if m:
# we want to backfill a room
(room_name,) = m.groups()
@@ -198,6 +201,16 @@ class HomeServer(ReplicationHandler):
% (pdu.context, pdu.pdu_type, json.dumps(pdu.content))
)
# def on_state_change(self, pdu):
##self.output.print_line("#%s (state) %s *** %s" %
##(pdu.context, pdu.state_key, pdu.pdu_type)
##)
# if "joinee" in pdu.content:
# self._on_join(pdu.context, pdu.content["joinee"])
# elif "invitee" in pdu.content:
# self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"])
def _on_message(self, pdu):
""" We received a message
"""
@@ -301,7 +314,7 @@ class HomeServer(ReplicationHandler):
return self.replication_layer.backfill(dest, room_name, limit)
def _get_room_remote_servers(self, room_name):
return list(self.joined_rooms.setdefault(room_name).servers)
return [i for i in self.joined_rooms.setdefault(room_name).servers]
def _get_or_create_room(self, room_name):
return self.joined_rooms.setdefault(room_name, Room(room_name))
@@ -321,7 +334,7 @@ def main(stdscr):
user = args.user
server_name = origin_from_ucid(user)
# Set up logging
## Set up logging ##
root_logger = logging.getLogger()
@@ -341,7 +354,7 @@ def main(stdscr):
observer = log.PythonLoggingObserver()
observer.start()
# Set up synapse server
## Set up synapse server
curses_stdio = cursesio.CursesStdIO(stdscr)
input_output = InputOutput(curses_stdio, user)
@@ -355,16 +368,16 @@ def main(stdscr):
input_output.set_home_server(hs)
# Add input_output logger
## Add input_output logger
io_logger = IOLoggerHandler(input_output)
io_logger.setFormatter(formatter)
root_logger.addHandler(io_logger)
# Start!
## Start! ##
try:
port = int(server_name.split(":")[1])
except Exception:
except:
port = 12345
app_hs.get_http_server().start_listening(port)

File diff suppressed because it is too large.

View File

@@ -1,13 +1,5 @@
from __future__ import print_function
import argparse
import cgi
import datetime
import json
import pydot
import urllib2
# Copyright 2014-2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -23,6 +15,15 @@ import urllib2
# limitations under the License.
import sqlite3
import pydot
import cgi
import json
import datetime
import argparse
import urllib2
def make_name(pdu_id, origin):
return "%s@%s" % (pdu_id, origin)
@@ -32,7 +33,7 @@ def make_graph(pdus, room, filename_prefix):
node_map = {}
origins = set()
colors = {"red", "green", "blue", "yellow", "purple"}
colors = set(("red", "green", "blue", "yellow", "purple"))
for pdu in pdus:
origins.add(pdu.get("origin"))
@@ -48,7 +49,7 @@ def make_graph(pdus, room, filename_prefix):
try:
c = colors.pop()
color_map[o] = c
except Exception:
except:
print("Run out of colours!")
color_map[o] = "black"

View File

@@ -13,13 +13,12 @@
# limitations under the License.
import argparse
import cgi
import datetime
import json
import sqlite3
import pydot
import cgi
import json
import datetime
import argparse
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
@@ -99,7 +98,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
for prev_id, _ in event.prev_events:
try:
end_node = node_map[prev_id]
except Exception:
except:
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
node_map[prev_id] = end_node

View File

@@ -1,15 +1,5 @@
from __future__ import print_function
import argparse
import cgi
import datetime
import pydot
import simplejson as json
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
# Copyright 2016 OpenMarket Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
@@ -25,6 +15,18 @@ from synapse.util.frozenutils import unfreeze
# limitations under the License.
import pydot
import cgi
import simplejson as json
import datetime
import argparse
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
from six import string_types
def make_graph(file_name, room_id, file_prefix, limit):
print("Reading lines")
with open(file_name) as f:
@@ -60,7 +62,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for key, value in unfreeze(event.get_dict()["content"]).items():
if value is None:
value = "<null>"
elif isinstance(value, str):
elif isinstance(value, string_types):
pass
else:
value = json.dumps(value)
@@ -106,7 +108,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for prev_id, _ in event.prev_events:
try:
end_node = node_map[prev_id]
except Exception:
except:
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
node_map[prev_id] = end_node

View File

@@ -12,15 +12,15 @@ npm install jquery jsdom
"""
from __future__ import print_function
import json
import subprocess
import time
import gevent
import grequests
from BeautifulSoup import BeautifulSoup
import json
import urllib
import subprocess
import time
ACCESS_TOKEN = ""
# ACCESS_TOKEN="" #
MATRIXBASE = "https://matrix.org/_matrix/client/api/v1/"
MYUSERNAME = "@davetest:matrix.org"

View File

@@ -1,12 +1,10 @@
#!/usr/bin/env python
from __future__ import print_function
from argparse import ArgumentParser
import json
import requests
import sys
import urllib
from argparse import ArgumentParser
import requests
try:
raw_input

View File

@@ -15,9 +15,6 @@
[Unit]
Description=Synapse Matrix homeserver
# If you are using postgresql to persist data, uncomment this line to make sure
# synapse starts after the postgresql service.
# After=postgresql.service
[Service]
Type=notify

View File

@@ -36,6 +36,7 @@ esac
dh_virtualenv \
--install-suffix "matrix-synapse" \
--builtin-venv \
--setuptools \
--python "$SNAKE" \
--upgrade-pip \
--preinstall="lxml" \

88
debian/changelog vendored
View File

@@ -1,94 +1,16 @@
matrix-synapse-py3 (1.19.2) stable; urgency=medium
<<<<<<< HEAD
matrix-synapse-py3 (1.12.3ubuntu1) UNRELEASED; urgency=medium
* New synapse release 1.19.2.
-- Synapse Packaging team <packages@matrix.org> Wed, 16 Sep 2020 12:50:30 +0100
matrix-synapse-py3 (1.19.1) stable; urgency=medium
* New synapse release 1.19.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 27 Aug 2020 10:50:19 +0100
matrix-synapse-py3 (1.19.0) stable; urgency=medium
[ Synapse Packaging team ]
* New synapse release 1.19.0.
[ Aaron Raimist ]
* Fix outdated documentation for SYNAPSE_CACHE_FACTOR
-- Synapse Packaging team <packages@matrix.org> Mon, 17 Aug 2020 14:06:42 +0100
matrix-synapse-py3 (1.18.0) stable; urgency=medium
* New synapse release 1.18.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 30 Jul 2020 10:55:53 +0100
matrix-synapse-py3 (1.17.0) stable; urgency=medium
* New synapse release 1.17.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 13 Jul 2020 10:20:31 +0100
matrix-synapse-py3 (1.16.1) stable; urgency=medium
* New synapse release 1.16.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 10 Jul 2020 12:09:24 +0100
matrix-synapse-py3 (1.17.0rc1) stable; urgency=medium
* New synapse release 1.17.0rc1.
-- Synapse Packaging team <packages@matrix.org> Thu, 09 Jul 2020 16:53:12 +0100
matrix-synapse-py3 (1.16.0) stable; urgency=medium
* New synapse release 1.16.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 08 Jul 2020 11:03:48 +0100
matrix-synapse-py3 (1.15.2) stable; urgency=medium
* New synapse release 1.15.2.
-- Synapse Packaging team <packages@matrix.org> Thu, 02 Jul 2020 10:34:00 -0400
matrix-synapse-py3 (1.15.1) stable; urgency=medium
* New synapse release 1.15.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 16 Jun 2020 10:27:50 +0100
matrix-synapse-py3 (1.15.0) stable; urgency=medium
* New synapse release 1.15.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 11 Jun 2020 13:27:06 +0100
matrix-synapse-py3 (1.14.0) stable; urgency=medium
* New synapse release 1.14.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 28 May 2020 10:37:27 +0000
matrix-synapse-py3 (1.13.0) stable; urgency=medium
[ Patrick Cloke ]
* Add information about .well-known files to Debian installation scripts.
[ Synapse Packaging team ]
* New synapse release 1.13.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 19 May 2020 09:16:56 -0400
-- Patrick Cloke <patrickc@matrix.org> Mon, 06 Apr 2020 10:10:38 -0400
=======
matrix-synapse-py3 (1.12.4) stable; urgency=medium
* New synapse release 1.12.4.
-- Synapse Packaging team <packages@matrix.org> Thu, 23 Apr 2020 10:58:14 -0400
>>>>>>> master
matrix-synapse-py3 (1.12.3) stable; urgency=medium

View File

@@ -1,2 +1,2 @@
# Specify environment variables used when running Synapse
# SYNAPSE_CACHE_FACTOR=0.5 (default)
# SYNAPSE_CACHE_FACTOR=1 (default)

27
debian/synctl.ronn vendored
View File

@@ -46,20 +46,19 @@ Configuration file may be generated as follows:
## ENVIRONMENT
* `SYNAPSE_CACHE_FACTOR`:
Synapse's architecture is quite RAM hungry currently - we deliberately
cache a lot of recent room data and metadata in RAM in order to speed up
common requests. We'll improve this in the future, but for now the easiest
way to either reduce the RAM usage (at the risk of slowing things down)
is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
variable. The default is 0.5, which can be decreased to reduce RAM usage
in memory-constrained environments, or increased if performance starts to
degrade.
However, degraded performance due to a low cache factor, common on
machines with slow disks, often leads to explosions in memory use due to
backlogged requests. In this case, reducing the cache factor will make
things worse. Instead, try increasing it drastically. 2.0 is a good
starting value.
Synapse's architecture is quite RAM hungry currently - a lot of
recent room data and metadata is deliberately cached in RAM in
order to speed up common requests. This will be improved in
future, but for now the easiest way to either reduce the RAM usage
(at the risk of slowing things down) is to set the
SYNAPSE_CACHE_FACTOR environment variable. Roughly speaking, a
SYNAPSE_CACHE_FACTOR of 1.0 will max out at around 3-4GB of
resident memory - this is what we currently run the matrix.org
on. The default setting is currently 0.1, which is probably around
a ~700MB footprint. You can dial it down further to 0.02 if
desired, which targets roughly ~512MB. Conversely you can dial it
up if you need performance for lots of users and have a box with a
lot of RAM.
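For example, on a Debian package install this can be set in
`/etc/default/matrix-synapse` (a sketch; the value shown is illustrative):

    SYNAPSE_CACHE_FACTOR=2.0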
## COPYRIGHT

View File

@@ -16,31 +16,34 @@ ARG PYTHON_VERSION=3.7
###
### Stage 0: builder
###
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
FROM docker.io/python:${PYTHON_VERSION}-alpine3.11 as builder
# install the OS build deps
RUN apk add \
build-base \
libffi-dev \
libjpeg-turbo-dev \
libressl-dev \
libxslt-dev \
linux-headers \
postgresql-dev \
zlib-dev
RUN apt-get update && apt-get install -y \
build-essential \
libpq-dev \
&& rm -rf /var/lib/apt/lists/*
# build things which have slow build steps, before we copy synapse, so that
# the layer can be cached.
#
# (we really just care about caching a wheel here, as the "pip install" below
# will install them again.)
# Build dependencies that are not available as wheels, to speed up rebuilds
RUN pip install --prefix="/install" --no-warn-script-location \
frozendict \
jaeger-client \
opentracing \
prometheus-client \
psycopg2 \
pycparser \
pyrsistent \
pyyaml \
simplejson \
threadloop \
thrift
cryptography \
msgpack-python \
pillow \
pynacl
# now install synapse and all of the python deps to /install.
COPY synapse /synapse/synapse/
COPY scripts /synapse/scripts/
COPY MANIFEST.in README.rst setup.py synctl /synapse/
@@ -52,13 +55,19 @@ RUN pip install --prefix="/install" --no-warn-script-location \
### Stage 1: runtime
###
FROM docker.io/python:${PYTHON_VERSION}-slim
FROM docker.io/python:${PYTHON_VERSION}-alpine3.11
RUN apt-get update && apt-get install -y \
libpq5 \
xmlsec1 \
gosu \
&& rm -rf /var/lib/apt/lists/*
# xmlsec is required for saml support
RUN apk add --no-cache --virtual .runtime_deps \
libffi \
libjpeg-turbo \
libressl \
libxslt \
libpq \
zlib \
su-exec \
tzdata \
xmlsec
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py

View File

@@ -27,18 +27,15 @@ RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
wget
# fetch and unpack the package
RUN mkdir /dh-virtualenv
RUN wget -q -O /dh-virtualenv.tar.gz https://github.com/spotify/dh-virtualenv/archive/ac6e1b1.tar.gz
RUN tar -xv --strip-components=1 -C /dh-virtualenv -f /dh-virtualenv.tar.gz
RUN wget -q -O /dh-virtuenv-1.1.tar.gz https://github.com/spotify/dh-virtualenv/archive/1.1.tar.gz
RUN tar xvf /dh-virtuenv-1.1.tar.gz
# install its build deps. We do another apt-cache-update here, because we might
# be using a stale cache from docker build.
RUN apt-get update -qq -o Acquire::Languages=none \
&& cd /dh-virtualenv \
&& env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -y --no-install-recommends"
# install its build deps
RUN cd dh-virtualenv-1.1/ \
&& env DEBIAN_FRONTEND=noninteractive mk-build-deps -ri -t "apt-get -yqq --no-install-recommends"
# build it
RUN cd /dh-virtualenv && dpkg-buildpackage -us -uc -b
RUN cd dh-virtualenv-1.1 && dpkg-buildpackage -us -uc -b
###
### Stage 1
@@ -71,12 +68,12 @@ RUN apt-get update -qq -o Acquire::Languages=none \
sqlite3 \
libpq-dev
COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /
COPY --from=builder /dh-virtualenv_1.1-1_all.deb /
# install dhvirtualenv. Update the apt cache again first, in case we got a
# cached cache from docker the first time.
RUN apt-get update -qq -o Acquire::Languages=none \
&& apt-get install -yq /dh-virtualenv_1.2~dev-1_all.deb
&& apt-get install -yq /dh-virtualenv_1.1-1_all.deb
WORKDIR /synapse/source
ENTRYPOINT ["bash","/synapse/source/docker/build_debian.sh"]

View File

@@ -94,21 +94,6 @@ The following environment variables are supported in run mode:
* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
* `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
## Generating an (admin) user
After synapse is running, you may wish to create a user via `register_new_matrix_user`.
This requires a `registration_shared_secret` to be set in your config file. Synapse
must be restarted to pick up this change.
You can then call the script:
```
docker exec -it synapse register_new_matrix_user http://localhost:8008 -c /data/homeserver.yaml --help
```
Remember to remove the `registration_shared_secret` and restart if you no longer need it.
## TLS support
The default configuration exposes a single HTTP port: http://localhost:8008. It

View File

@@ -4,10 +4,16 @@ formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
filters:
context:
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse.storage.SQL:

View File

@@ -120,7 +120,7 @@ def generate_config_from_template(config_dir, config_path, environ, ownership):
if ownership is not None:
subprocess.check_output(["chown", "-R", ownership, "/data"])
args = ["gosu", ownership] + args
args = ["su-exec", ownership] + args
subprocess.check_output(args)
@@ -172,8 +172,8 @@ def run_generate_config(environ, ownership):
# make sure that synapse has perms to write to the data dir.
subprocess.check_output(["chown", ownership, data_dir])
args = ["gosu", ownership] + args
os.execv("/usr/sbin/gosu", args)
args = ["su-exec", ownership] + args
os.execv("/sbin/su-exec", args)
else:
os.execv("/usr/local/bin/python", args)
@@ -189,7 +189,7 @@ def main(args, environ):
ownership = "{}:{}".format(desired_uid, desired_gid)
if ownership is None:
log("Will not perform chmod/gosu as UserID already matches request")
log("Will not perform chmod/su-exec as UserID already matches request")
# In generate mode, generate a configuration and missing keys, then exit
if mode == "generate":
@@ -236,8 +236,8 @@ running with 'migrate_config'. See the README for more details.
args = ["python", "-m", synapse_worker, "--config-path", config_path]
if ownership is not None:
args = ["gosu", ownership] + args
os.execv("/usr/sbin/gosu", args)
args = ["su-exec", ownership] + args
os.execv("/sbin/su-exec", args)
else:
os.execv("/usr/local/bin/python", args)

View File

@@ -10,16 +10,5 @@
# homeserver.yaml. Instead, if you are starting from scratch, please generate
# a fresh config using Synapse by following the instructions in INSTALL.md.
# Configuration options that take a time period can be set using a number
# followed by a letter. Letters have the following meanings:
# s = second
# m = minute
# h = hour
# d = day
# w = week
# y = year
# For example, setting redaction_retention_period: 5m would remove redacted
# messages from the database after 5 minutes, rather than 5 months.
################################################################################

View File

@@ -12,14 +12,13 @@ introduced support for automatically provisioning certificates through
In [March 2019](https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430),
Let's Encrypt announced that they were deprecating version 1 of the ACME
protocol, with the plan to disable the use of it for new accounts in
November 2019, for new domains in June 2020, and for existing accounts and
domains in June 2021.
November 2019, and for existing accounts in June 2020.
Synapse doesn't currently support version 2 of the ACME protocol, which
means that:
* for existing installs, Synapse's built-in ACME support will continue
to work until June 2021.
to work until June 2020.
* for new installs, this feature will not work at all.
Either way, it is recommended to move from Synapse's ACME support

View File

@@ -4,21 +4,17 @@ Admin APIs
This directory includes documentation for the various synapse specific admin
APIs available.
Authenticating as a server admin
--------------------------------
Only users that are server admins can use these APIs. A user can be marked as a
server admin by updating the database directly, e.g.:
Many of the API calls in the admin api will require an `access_token` for a
server admin. (Note that a server admin is distinct from a room admin.)
``UPDATE users SET admin = 1 WHERE name = '@foo:bar.com'``
A user can be marked as a server admin by updating the database directly, e.g.:
Restarting may be required for the changes to register.
.. code-block:: sql
UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
A new server admin user can also be created using the
``register_new_matrix_user`` script.
Using an admin access_token
###########################
Many of the API calls listed in the documentation here will require to include an admin `access_token`.
Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
Once you have your `access_token`, to include it in a request, the best option is to add the token to a request header:
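For example (a sketch; substitute your token and whichever admin API you are
calling):

.. code-block:: bash

   curl --header "Authorization: Bearer <access_token>" \
       http://localhost:8008/_synapse/admin/v1/<endpoint>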

View File

@@ -4,11 +4,11 @@ This API lets a server admin delete a local group. Doing so will kick all
users out of the group so that their clients will correctly handle the group
being deleted.
The API is:
```
POST /_synapse/admin/v1/delete_group/<group_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
including an `access_token` of a server admin.
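For instance, a request might look like this (a sketch; the server address and
token are placeholders):

```
curl --header "Authorization: Bearer <access_token>" \
    -X POST http://localhost:8008/_synapse/admin/v1/delete_group/<group_id>
```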

View File

@@ -6,10 +6,9 @@ The API is:
```
GET /_synapse/admin/v1/room/<room_id>/media
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
including an `access_token` of a server admin.
The API returns a JSON body like the following:
It returns a JSON body like the following:
```
{
"local": [
@@ -100,3 +99,4 @@ Response:
"num_quarantined": 10 # The number of media items successfully quarantined
}
```

View File

@@ -15,8 +15,7 @@ The API is:
``POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]``
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
including an ``access_token`` of a server admin.
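As a sketch (server name, identifiers and token are placeholders; any optional
JSON body parameters are omitted here):

.. code:: bash

    curl -X POST --header "Authorization: Bearer <admin_access_token>" \
        'https://example.com/_synapse/admin/v1/purge_history/<room_id>/<event_id>'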
By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
@@ -55,10 +54,8 @@ It is possible to poll for updates on recent purges with a second API;
``GET /_synapse/admin/v1/purge_history_status/<purge_id>``
Again, you will need to authenticate by providing an ``access_token`` for a
server admin.
This API returns a JSON body like the following:
(again, with a suitable ``access_token``). This API returns a JSON body like
the following:
.. code:: json

View File

@@ -6,15 +6,12 @@ media.
The API is::
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>&access_token=<access_token>
{}
\... which will remove all cached media that was last accessed before
Which will remove all cached media that was last accessed before
``<unix_timestamp_in_ms>``.
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.
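As a sketch of the request above (server name, timestamp and token are
placeholders):

.. code:: bash

    curl -X POST --header "Authorization: Bearer <admin_access_token>" \
        --data '{}' \
        'https://example.com/_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>'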

View File

@@ -5,8 +5,6 @@ This API will remove all trace of a room from your database.
All local users must have left the room before it can be removed.
See also: [Delete Room API](rooms.md#delete-room-api)
The API is:
```

View File

@@ -23,8 +23,7 @@ POST /_synapse/admin/v1/join/<room_id_or_alias>
}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
Including an `access_token` of a server admin.
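As a sketch, assuming the JSON body documented above (server name, room and
user identifiers, and token are placeholders):

```bash
curl -X POST --header "Authorization: Bearer <admin_access_token>" \
    --data '{"user_id": "@someuser:example.com"}' \
    'https://example.com/_synapse/admin/v1/join/<room_id_or_alias>'
```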
Response:

View File

@@ -318,134 +318,3 @@ Response:
"state_events": 93534
}
```
# Room Members API
The Room Members admin API allows server admins to get a list of all members of a room.
The response includes the following fields:
* `members` - A list of all the members that are present in the room, represented by their ids.
* `total` - Total number of members in the room.
## Usage
A standard request:
```
GET /_synapse/admin/v1/rooms/<room_id>/members
{}
```
Response:
```
{
"members": [
"@foo:matrix.org",
"@bar:matrix.org",
"@foobar:matrix.org
],
"total": 3
}
```
# Delete Room API
The Delete Room admin API allows server admins to remove rooms from server
and block these rooms.
It is a combination and improvement of "[Shutdown room](shutdown_room.md)"
and "[Purge room](purge_room.md)" API.
Shuts down a room. Moves all local users and room aliases automatically to a
new room if `new_room_user_id` is set. Otherwise local users only
leave the room without any information.
The new room will be created with the user specified by the `new_room_user_id` parameter
as room administrator and will contain a message explaining what happened. Users invited
to the new room will have power level `-10` by default, and thus be unable to speak.
If `block` is `True` it prevents new joins to the old room.
This API will remove all trace of the old room from your database after removing
all local users. If `purge` is `true` (the default), all traces of the old room will
be removed from your database after removing all local users. If you do not want
this to happen, set `purge` to `false`.
Depending on the amount of history being purged a call to the API may take
several minutes or longer.
The local server will only have the power to move local users and room aliases to
the new room. Users on other servers will be unaffected.
The API is:
```
POST /_synapse/admin/v1/rooms/<room_id>/delete
```
with a body of:
```json
{
"new_room_user_id": "@someuser:example.com",
"room_name": "Content Violation Notification",
"message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
"block": true,
"purge": true
}
```
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see [README.rst](README.rst).
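Putting the pieces above together, a sketch of the full request with `curl`
(server name, room ID and token are placeholders):

```bash
curl -X POST --header "Authorization: Bearer <admin_access_token>" \
    --data '{
        "new_room_user_id": "@someuser:example.com",
        "block": true,
        "purge": true
    }' \
    'https://example.com/_synapse/admin/v1/rooms/<room_id>/delete'
```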
A response body like the following is returned:
```json
{
"kicked_users": [
"@foobar:example.com"
],
"failed_to_kick_users": [],
"local_aliases": [
"#badroom:example.com",
"#evilsaloon:example.com"
],
"new_room_id": "!newroomid:example.com"
}
```
## Parameters
The following parameters should be set in the URL:
* `room_id` - The ID of the room.
The following JSON body parameters are available:
* `new_room_user_id` - Optional. If set, a new room will be created with this user ID
as the creator and admin, and all users in the old room will be moved into that
room. If not set, no new room will be created and the users will just be removed
from the old room. The user ID must be on the local server, but does not necessarily
have to belong to a registered user.
* `room_name` - Optional. A string representing the name of the room that new users will be
invited to. Defaults to `Content Violation Notification`
* `message` - Optional. A string containing the first message that will be sent as
`new_room_user_id` in the new room. Ideally this will clearly convey why the
original room was shut down. Defaults to `Sharing illegal content on this server
is not permitted and rooms in violation will be blocked.`
* `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing
future attempts to join the room. Defaults to `false`.
* `purge` - Optional. If set to `true`, it will remove all traces of the room from your database.
Defaults to `true`.
The JSON body must not be empty. The body must be at least `{}`.
## Response
The following fields are returned in the JSON response body:
* `kicked_users` - An array of users (`user_id`) that were kicked.
* `failed_to_kick_users` - An array of users (`user_id`) that were not kicked.
* `local_aliases` - An array of strings representing the local aliases that were migrated from
the old room to the new.
* `new_room_id` - A string representing the room ID of the new room.

View File

@@ -10,8 +10,6 @@ disallow any further invites or joins.
The local server will only have the power to move local users and room aliases to
the new room. Users on other servers will be unaffected.
See also: [Delete Room API](rooms.md#delete-room-api)
## API
You will need to authenticate with an access token for an admin user.
@@ -33,7 +31,7 @@ You will need to authenticate with an access token for an admin user.
* `message` - Optional. A string containing the first message that will be sent as
`new_room_user_id` in the new room. Ideally this will clearly convey why the
original room was shut down.
If not specified, the default value of `room_name` is "Content Violation
Notification". The default value of `message` is "Sharing illegal content on
this server is not permitted and rooms in violation will be blocked."
@@ -72,30 +70,3 @@ Response:
"new_room_id": "!newroomid:example.com",
},
```
## Undoing room shutdowns
*Note*: This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level,
the structure can and does change without notice.
First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible
to recover at all:
* If the room was invite-only, your users will need to be re-invited.
* If the room no longer has any members at all, it'll be impossible to rejoin.
* The first user to rejoin will have to do so via an alias on a different server.
With all that being said, if you still want to try and recover the room:
1. For safety reasons, shut down Synapse.
2. In the database, run `DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';`
* For caution: it's recommended to run this in a transaction: `BEGIN; DELETE ...;`, verify you got 1 result, then `COMMIT;`.
* The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
3. Restart Synapse.
You will have to manually handle, if you so choose, the following:
* Aliases that would have been redirected to the Content Violation room.
* Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
* Removal of the Content Violation room if desired.

View File

@@ -1,47 +1,9 @@
.. contents::
Query User Account
==================
This API returns information about a specific user account.
The api is::
GET /_synapse/admin/v2/users/<user_id>
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
It returns a JSON body like the following:
.. code:: json
{
"displayname": "User",
"threepids": [
{
"medium": "email",
"address": "<user_mail_1>"
},
{
"medium": "email",
"address": "<user_mail_2>"
}
],
"avatar_url": "<avatar_url>",
"admin": false,
"deactivated": false
}
URL parameters:
- ``user_id``: fully-qualified user id: for example, ``@user:server.com``.
Create or modify Account
========================
This API allows an administrator to create or modify a user account with a
specific ``user_id``.
specific ``user_id``. Be aware that ``user_id`` is fully qualified: for example,
``@user:server.com``.
This api is::
@@ -69,36 +31,26 @@ with a body of:
"deactivated": false
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
including an ``access_token`` of a server admin.
URL parameters:
The parameter ``displayname`` is optional and defaults to the value of
``user_id``.
- ``user_id``: fully-qualified user id: for example, ``@user:server.com``.
The parameter ``threepids`` is optional and allows setting the third-party IDs
(email, msisdn) belonging to a user.
Body parameters:
The parameter ``avatar_url`` is optional. Must be a [MXC
URI](https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris).
- ``password``, optional. If provided, the user's password is updated and all
devices are logged out.
The parameter ``admin`` is optional and defaults to ``false``.
- ``displayname``, optional, defaults to the value of ``user_id``.
The parameter ``deactivated`` is optional and defaults to ``false``.
- ``threepids``, optional, allows setting the third-party IDs (email, msisdn)
belonging to a user.
- ``avatar_url``, optional, must be a
`MXC URI <https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris>`_.
- ``admin``, optional, defaults to ``false``.
- ``deactivated``, optional. If unspecified, deactivation state will be left
unchanged on existing accounts and set to ``false`` for new accounts.
The parameter ``password`` is optional. If provided, the user's password is
updated and all devices are logged out.
If the user already exists then optional parameters default to the current value.
In order to re-activate an account ``deactivated`` must be set to ``false``. If
users do not log in via single-sign-on, a new ``password`` must be provided.
List Accounts
=============
@@ -108,8 +60,7 @@ The api is::
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see `README.rst <README.rst>`_.
including an ``access_token`` of a server admin.
The parameter ``from`` is optional but used for pagination, denoting the
offset in the returned results. This should be treated as an opaque value and
@@ -164,17 +115,17 @@ with ``from`` set to the value of ``next_token``. This will return a new page.
If the endpoint does not return a ``next_token`` then there are no more users
to paginate through.
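As a sketch (server name and token are placeholders; when paginating, set
``from`` to the ``next_token`` value returned by the previous page):

.. code:: bash

    curl --header "Authorization: Bearer <admin_access_token>" \
        'https://example.com/_synapse/admin/v2/users?from=0&limit=10&guests=false'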
Query current sessions for a user
=================================
Query Account
=============
This API returns information about the active sessions for a specific user.
This API returns information about a specific user account.
The api is::
GET /_synapse/admin/v1/whois/<user_id>
GET /_synapse/admin/v1/whois/<user_id> (deprecated)
GET /_synapse/admin/v2/users/<user_id>
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
including an ``access_token`` of a server admin.
It returns a JSON body like the following:
@@ -227,10 +178,9 @@ with a body of:
"erase": true
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
including an ``access_token`` of a server admin.
The erase parameter is optional and defaults to ``false``.
The erase parameter is optional and defaults to 'false'.
An empty body may be passed for backwards compatibility.
@@ -252,8 +202,7 @@ with a body of:
"logout_devices": true,
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
including an ``access_token`` of a server admin.
The parameter ``new_password`` is required.
The parameter ``logout_devices`` is optional and defaults to ``true``.
@@ -266,8 +215,7 @@ The api is::
GET /_synapse/admin/v1/users/<user_id>/admin
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
including an ``access_token`` of a server admin.
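As a sketch (server name, user ID and token are placeholders):

.. code:: bash

    curl --header "Authorization: Bearer <admin_access_token>" \
        'https://example.com/_synapse/admin/v1/users/<user_id>/admin'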
A response body like the following is returned:
@@ -295,191 +243,4 @@ with a body of:
"admin": true
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
User devices
============
List all devices
----------------
Gets information about all devices for a specific ``user_id``.
The API is::
GET /_synapse/admin/v2/users/<user_id>/devices
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
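As a sketch (server name, user ID and token are placeholders):

.. code:: bash

    curl --header "Authorization: Bearer <admin_access_token>" \
        'https://example.com/_synapse/admin/v2/users/<user_id>/devices'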
A response body like the following is returned:
.. code:: json
{
"devices": [
{
"device_id": "QBUAZIFURK",
"display_name": "android",
"last_seen_ip": "1.2.3.4",
"last_seen_ts": 1474491775024,
"user_id": "<user_id>"
},
{
"device_id": "AUIECTSRND",
"display_name": "ios",
"last_seen_ip": "1.2.3.5",
"last_seen_ts": 1474491775025,
"user_id": "<user_id>"
}
]
}
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
**Response**
The following fields are returned in the JSON response body:
- ``devices`` - An array of objects, each containing information about a device.
Device objects contain the following fields:
- ``device_id`` - Identifier of device.
- ``display_name`` - Display name set by the user for this device.
Absent if no name has been set.
- ``last_seen_ip`` - The IP address where this device was last seen.
(May be a few minutes out of date, for efficiency reasons).
- ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this
device was last seen. (May be a few minutes out of date, for efficiency reasons).
- ``user_id`` - Owner of device.
Delete multiple devices
-----------------------
Deletes the given devices for a specific ``user_id``, and invalidates
any access token associated with them.
The API is::
POST /_synapse/admin/v2/users/<user_id>/delete_devices
{
"devices": [
"QBUAZIFURK",
"AUIECTSRND"
],
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
An empty JSON dict is returned.
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
The following fields are required in the JSON request body:
- ``devices`` - The list of device IDs to delete.
Show a device
---------------
Gets information on a single device, by ``device_id`` for a specific ``user_id``.
The API is::
GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
A response body like the following is returned:
.. code:: json
{
"device_id": "<device_id>",
"display_name": "android",
"last_seen_ip": "1.2.3.4",
"last_seen_ts": 1474491775024,
"user_id": "<user_id>"
}
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
- ``device_id`` - The device to retrieve.
**Response**
The following fields are returned in the JSON response body:
- ``device_id`` - Identifier of device.
- ``display_name`` - Display name set by the user for this device.
Absent if no name has been set.
- ``last_seen_ip`` - The IP address where this device was last seen.
(May be a few minutes out of date, for efficiency reasons).
- ``last_seen_ts`` - The timestamp (in milliseconds since the unix epoch) when this
device was last seen. (May be a few minutes out of date, for efficiency reasons).
- ``user_id`` - Owner of device.
Update a device
---------------
Updates the metadata on the given ``device_id`` for a specific ``user_id``.
The API is::
PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
{
"display_name": "My other phone"
}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
An empty JSON dict is returned.
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
- ``device_id`` - The device to update.
The following fields are required in the JSON request body:
- ``display_name`` - The new display name for this device. If not given,
the display name is unchanged.
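As a sketch of the request above (server name, identifiers and token are
placeholders):

.. code:: bash

    curl -X PUT --header "Authorization: Bearer <admin_access_token>" \
        --data '{"display_name": "My other phone"}' \
        'https://example.com/_synapse/admin/v2/users/<user_id>/devices/<device_id>'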
Delete a device
---------------
Deletes the given ``device_id`` for a specific ``user_id``,
and invalidates any access token associated with it.
The API is::
DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
{}
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
An empty JSON dict is returned.
**Parameters**
The following parameters should be set in the URL:
- ``user_id`` - fully qualified: for example, ``@user:server.com``.
- ``device_id`` - The device to delete.
including an ``access_token`` of a server admin.

View File

@@ -1,148 +0,0 @@
Some notes on how we use git
============================
On keeping the commit history clean
-----------------------------------
In an ideal world, our git commit history would be a linear progression of
commits each of which contains a single change building on what came
before. Here, by way of an arbitrary example, is the top of `git log --graph
b2dba0607`:
<img src="git/clean.png" alt="clean git graph" width="500px">
Note how the commit comment explains clearly what is changing and why. Also
note the *absence* of merge commits, as well as the absence of commits called
things like (to pick a few culprits):
[“pep8”](https://github.com/matrix-org/synapse/commit/84691da6c), [“fix broken
test”](https://github.com/matrix-org/synapse/commit/474810d9d),
[“oops”](https://github.com/matrix-org/synapse/commit/c9d72e457),
[“typo”](https://github.com/matrix-org/synapse/commit/836358823), or [“Who's
the president?”](https://github.com/matrix-org/synapse/commit/707374d5d).
There are a number of reasons why keeping a clean commit history is a good
thing:
* From time to time, after a change lands, it turns out to be necessary to
revert it, or to backport it to a release branch. Those operations are
*much* easier when the change is contained in a single commit.
* Similarly, it's much easier to answer questions like “is the fix for
`/publicRooms` on the release branch?” if that change consists of a single
commit.
* Likewise: “what has changed on this branch in the last week?” is much
clearer without merges and “pep8” commits everywhere.
* Sometimes we need to figure out where a bug got introduced, or some
behaviour changed. One way of doing that is with `git bisect`: pick an
arbitrary commit between the known good point and the known bad point, and
see how the code behaves. However, that strategy fails if the commit you
chose is the middle of someone's epic branch in which they broke the world
before putting it back together again.
One counterargument is that it is sometimes useful to see how a PR evolved as
it went through review cycles. This is true, but that information is always
available via the GitHub UI (or via the little-known [refs/pull
namespace](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/checking-out-pull-requests-locally)).
Of course, in reality, things are more complicated than that. We have release
branches as well as `develop` and `master`, and we deliberately merge changes
between them. Bugs often slip through and have to be fixed later. That's all
fine: this is not a cast-iron rule which must be obeyed, but an ideal to aim
towards.
Merges, squashes, rebases: wtf?
-------------------------------
Ok, so that's what we'd like to achieve. How do we achieve it?
The TL;DR is: when you come to merge a pull request, you *probably* want to
“squash and merge”:
![squash and merge](git/squash.png).
(This applies whether you are merging your own PR, or that of another
contributor.)
“Squash and merge”<sup id="a1">[1](#f1)</sup> takes all of the changes in the
PR, and bundles them into a single commit. GitHub gives you the opportunity to
edit the commit message before you confirm, and normally you should do so,
because the default will be useless (again: `* woops typo` is not a useful
thing to keep in the historical record).
The main problem with this approach comes when you have a series of pull
requests which build on top of one another: as soon as you squash-merge the
first PR, you'll end up with a stack of conflicts to resolve in all of the
others. In general, it's best to avoid this situation in the first place by
trying not to have multiple related PRs in flight at the same time. Still,
sometimes that's not possible and doing a regular merge is the lesser evil.
Another occasion in which a regular merge makes more sense is a PR where you've
deliberately created a series of commits each of which makes sense in its own
right. For example: [a PR which gradually propagates a refactoring operation
through the codebase](https://github.com/matrix-org/synapse/pull/6837), or [a
PR which is the culmination of several other
PRs](https://github.com/matrix-org/synapse/pull/5987). In this case the ability
to figure out when a particular change/bug was introduced could be very useful.
Ultimately: **this is not a hard-and-fast-rule**. If in doubt, ask yourself “do
each of the commits I am about to merge make sense in their own right”, but
remember that we're just doing our best to balance “keeping the commit history
clean” with other factors.
Git branching model
-------------------
A [lot](https://nvie.com/posts/a-successful-git-branching-model/)
[of](http://scottchacon.com/2011/08/31/github-flow.html)
[words](https://www.endoflineblog.com/gitflow-considered-harmful) have been
written in the past about git branching models (no really, [a
lot](https://martinfowler.com/articles/branching-patterns.html)). I tend to
think the whole thing is overblown. Fundamentally, it's not that
complicated. Here's how we do it.
Let's start with a picture:
![branching model](git/branches.jpg)
It looks complicated, but it's really not. There's one basic rule: *anyone* is
free to merge from *any* more-stable branch to *any* less-stable branch at
*any* time<sup id="a2">[2](#f2)</sup>. (The principle behind this is that if a
change is good enough for the more-stable branch, then it's also good enough to
put in a less-stable branch.)
Meanwhile, merging (or squashing, as per the above) from a less-stable to a
more-stable branch is a deliberate action in which you want to publish a change
or a set of changes to (some subset of) the world: for example, this happens
when a PR is landed, or as part of our release process.
So, what counts as a more- or less-stable branch? A little reflection will show
that our active branches are ordered thus, from more-stable to less-stable:
* `master` (tracks our last release).
* `release-vX.Y.Z` (the branch where we prepare the next release)<sup
id="a3">[3](#f3)</sup>.
* PR branches which are targeting the release.
* `develop` (our "mainline" branch containing our bleeding-edge).
* regular PR branches.
The corollary is: if you have a bugfix that needs to land in both
`release-vX.Y.Z` *and* `develop`, then you should base your PR on
`release-vX.Y.Z`, get it merged there, and then merge from `release-vX.Y.Z` to
`develop`. (If a fix lands in `develop` and we later need it in a
release-branch, we can of course cherry-pick it, but landing it in the release
branch first helps reduce the chance of annoying conflicts.)
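For concreteness, a sketch of that flow on the command line (branch names are
illustrative; in practice the fix would usually be squash-merged via the GitHub
UI rather than merged locally):

```bash
# Land the fix on the release branch first...
git checkout release-v1.2.3
git merge --no-ff fix/publicrooms-regression

# ...then merge the release branch forward into develop.
git checkout develop
git merge release-v1.2.3
```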
---
<b id="f1">[1]</b>: “Squash and merge” is GitHub's term for this
operation. Given that there is no merge involved, I'm not convinced it's the
most intuitive name. [^](#a1)
<b id="f2">[2]</b>: Well, anyone with commit access.[^](#a2)
<b id="f3">[3]</b>: Very, very occasionally (I think this has happened once in
the history of Synapse), we've had two releases in flight at once. Obviously,
`release-v1.2.3` is more-stable than `release-v1.3.0`. [^](#a3)

Binary file not shown. (Before: 70 KiB)

Binary file not shown. (Before: 108 KiB)

Binary file not shown. (Before: 29 KiB)

175
docs/dev/oidc.md Normal file
View File

@@ -0,0 +1,175 @@
# How to test OpenID Connect
Any OpenID Connect Provider (OP) should work with Synapse, as long as it supports the authorization code flow.
There are a few options for that:
- start a local OP. Synapse has been tested with [Hydra][hydra] and [Dex][dex-idp].
Note that for an OP to work, it should be served under a secure (HTTPS) origin.
A certificate signed with a self-signed, locally trusted CA should work. In that case, start Synapse with a `SSL_CERT_FILE` environment variable set to the path of the CA.
- use a publicly available OP. Synapse has been tested with [Google][google-idp].
- setup a SaaS OP, like [Auth0][auth0] and [Okta][okta]. Auth0 has a free tier which has been tested with Synapse.
[google-idp]: https://developers.google.com/identity/protocols/OpenIDConnect#authenticatingtheuser
[auth0]: https://auth0.com/
[okta]: https://www.okta.com/
[dex-idp]: https://github.com/dexidp/dex
[hydra]: https://www.ory.sh/docs/hydra/
## Sample configs
Here are a few configs for providers that should work with Synapse.
### [Dex][dex-idp]
[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
Although it is designed to help build a full-blown provider, with some external database, it can be configured with static passwords in a config file.
Follow the [Getting Started guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md) to install Dex.
Edit `examples/config-dev.yaml` config file from the Dex repo to add a client:
```yaml
staticClients:
- id: synapse
secret: secret
redirectURIs:
- '[synapse base url]/_synapse/oidc/callback'
name: 'Synapse'
```
Run with `dex serve examples/config-dex.yaml`
Synapse config:
```yaml
oidc_config:
enabled: true
skip_verification: true # This is needed as Dex is served on an insecure endpoint
issuer: "http://127.0.0.1:5556/dex"
discover: true
client_id: "synapse"
client_secret: "secret"
scopes:
- openid
- profile
user_mapping_provider:
config:
localpart_template: '{{ user.name }}'
display_name_template: '{{ user.name|capitalize }}'
```
### [Auth0][auth0]
1. Create a regular web application for Synapse
2. Set the Allowed Callback URLs to `[synapse base url]/_synapse/oidc/callback`
3. Add a rule to add the `preferred_username` claim.
<details>
<summary>Code sample</summary>
```js
function addPersistenceAttribute(user, context, callback) {
user.user_metadata = user.user_metadata || {};
user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
context.idToken.preferred_username = user.user_metadata.preferred_username;
auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
.then(function(){
callback(null, user, context);
})
.catch(function(err){
callback(err);
});
}
```
</details>
```yaml
oidc_config:
enabled: true
issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
discover: true
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes:
- openid
- profile
user_mapping_provider:
config:
localpart_template: '{{ user.preferred_username }}'
display_name_template: '{{ user.name }}'
```
### GitHub
GitHub is a bit special as it is not an OpenID Connect compliant provider, but just a regular OAuth2 provider.
The `/user` API endpoint can be used to retrieve information about the user.
As the OIDC login mechanism needs an attribute to uniquely identify users and that endpoint does not return a `sub` property, an alternative `subject_claim` has to be set.
1. Create a new OAuth application: https://github.com/settings/applications/new
2. Set the callback URL to `[synapse base url]/_synapse/oidc/callback`
```yaml
oidc_config:
enabled: true
issuer: "https://github.com/"
discover: false
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
authorization_endpoint: "https://github.com/login/oauth/authorize"
token_endpoint: "https://github.com/login/oauth/access_token"
userinfo_endpoint: "https://api.github.com/user"
scopes:
- read:user
user_mapping_provider:
config:
subject_claim: 'id'
localpart_template: '{{ user.login }}'
display_name_template: '{{ user.name }}'
```
### Google
1. Setup a project in the Google API Console
2. Obtain the OAuth 2.0 credentials (see <https://developers.google.com/identity/protocols/oauth2/openid-connect>)
3. Add this Authorized redirect URI: `[synapse base url]/_synapse/oidc/callback`
```yaml
oidc_config:
enabled: true
issuer: "https://accounts.google.com/"
discover: true
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes:
- openid
- profile
user_mapping_provider:
config:
localpart_template: '{{ user.given_name|lower }}'
display_name_template: '{{ user.name }}'
```
### Twitch
1. Setup a developer account on [Twitch](https://dev.twitch.tv/)
2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/)
3. Add this OAuth Redirect URL: `[synapse base url]/_synapse/oidc/callback`
```yaml
oidc_config:
enabled: true
issuer: "https://id.twitch.tv/oauth2/"
discover: true
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
client_auth_method: "client_secret_post"
scopes:
- openid
user_mapping_provider:
config:
localpart_template: '{{ user.preferred_username }}'
display_name_template: '{{ user.name }}'
```

View File

@@ -1,97 +0,0 @@
# JWT Login Type
Synapse comes with a non-standard login type to support
[JSON Web Tokens](https://en.wikipedia.org/wiki/JSON_Web_Token). In general the
documentation for
[the login endpoint](https://matrix.org/docs/spec/client_server/r0.6.1#login)
is still valid (and the mechanism works similarly to the
[token based login](https://matrix.org/docs/spec/client_server/r0.6.1#token-based)).
To log in using a JSON Web Token, clients should submit a `/login` request as
follows:
```json
{
"type": "org.matrix.login.jwt",
"token": "<jwt>"
}
```
Note that the login type of `m.login.jwt` is supported, but is deprecated. This
will be removed in a future version of Synapse.
The `token` field should include the JSON web token with the following claims:
* The `sub` (subject) claim is required and should encode the local part of the
user ID.
* The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
claims are optional, but validated if present.
* The issuer (`iss`) claim is optional, but required and validated if configured.
* The audience (`aud`) claim is optional, but required and validated if configured.
Providing the audience claim when not configured will cause validation to fail.
In the case that the token is not valid, the homeserver must respond with
`403 Forbidden` and an error code of `M_FORBIDDEN`.
As with other login types, there are additional fields (e.g. `device_id` and
`initial_device_display_name`) which can be included in the above request.
## Preparing Synapse
The JSON Web Token integration in Synapse uses the
[`PyJWT`](https://pypi.org/project/pyjwt/) library, which must be installed
as follows:
* The relevant libraries are included in the Docker images and Debian packages
provided by `matrix.org` so no further action is needed.
* If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
install synapse[pyjwt]` to install the necessary dependencies.
* For other installation mechanisms, see the documentation provided by the
maintainer.
To enable the JSON web token integration, you should then add an `jwt_config` section
to your configuration file (or uncomment the `enabled: true` line in the
existing section). See [sample_config.yaml](./sample_config.yaml) for some
sample settings.
## How to test JWT as a developer
Although JSON Web Tokens are typically generated from an external server, the
examples below use [PyJWT](https://pyjwt.readthedocs.io/en/latest/) directly.
1. Configure Synapse with JWT logins, note that this example uses a pre-shared
secret and an algorithm of HS256:
```yaml
jwt_config:
enabled: true
secret: "my-secret-token"
algorithm: "HS256"
```
2. Generate a JSON web token:
```bash
$ pyjwt --key=my-secret-token --alg=HS256 encode sub=test-user
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc
```
3. Query for the login types and ensure `org.matrix.login.jwt` is there:
```bash
curl http://localhost:8080/_matrix/client/r0/login
```
4. Login used the generated JSON web token from above:
```bash
$ curl http://localhost:8080/_matrix/client/r0/login -X POST \
--data '{"type":"org.matrix.login.jwt","token":"eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0LXVzZXIifQ.Ag71GT8v01UO3w80aqRPTeuVPBIBZkYhNTJJ-_-zQIc"}'
{
"access_token": "<access token>",
"device_id": "ACBDEFGHI",
"home_server": "localhost:8080",
"user_id": "@test-user:localhost:8480"
}
```
You should now be able to use the returned access token to query the client API.

View File

@@ -27,7 +27,7 @@
different thread to Synapse. This can make it more resilient to
heavy load that would otherwise prevent metrics from being retrieved, and
makes it easier to expose the endpoint to internal networks only. The served metrics are available
over HTTP only, and will be available at `/_synapse/metrics`.
over HTTP only, and will be available at `/`.
Add a new listener to homeserver.yaml:

View File

@@ -1,250 +0,0 @@
# Configuring Synapse to authenticate against an OpenID Connect provider
Synapse can be configured to use an OpenID Connect Provider (OP) for
authentication, instead of its own local password database.
Any OP should work with Synapse, as long as it supports the authorization code
flow. There are a few options for that:
- start a local OP. Synapse has been tested with [Hydra][hydra] and
[Dex][dex-idp]. Note that for an OP to work, it should be served under a
secure (HTTPS) origin. A certificate signed with a self-signed, locally
trusted CA should work. In that case, start Synapse with a `SSL_CERT_FILE`
environment variable set to the path of the CA.
- set up a SaaS OP, like [Google][google-idp], [Auth0][auth0] or
[Okta][okta]. Synapse has been tested with Auth0 and Google.
It may also be possible to use other OAuth2 providers which provide the
[authorization code grant type](https://tools.ietf.org/html/rfc6749#section-4.1),
such as [Github][github-idp].
[google-idp]: https://developers.google.com/identity/protocols/oauth2/openid-connect
[auth0]: https://auth0.com/
[okta]: https://www.okta.com/
[dex-idp]: https://github.com/dexidp/dex
[keycloak-idp]: https://www.keycloak.org/docs/latest/server_admin/#sso-protocols
[hydra]: https://www.ory.sh/docs/hydra/
[github-idp]: https://developer.github.com/apps/building-oauth-apps/authorizing-oauth-apps
## Preparing Synapse
The OpenID integration in Synapse uses the
[`authlib`](https://pypi.org/project/Authlib/) library, which must be installed
as follows:
* The relevant libraries are included in the Docker images and Debian packages
provided by `matrix.org` so no further action is needed.
* If you installed Synapse into a virtualenv, run `/path/to/env/bin/pip
install synapse[oidc]` to install the necessary dependencies.
* For other installation mechanisms, see the documentation provided by the
maintainer.
To enable the OpenID integration, you should then add an `oidc_config` section
to your configuration file (or uncomment the `enabled: true` line in the
existing section). See [sample_config.yaml](./sample_config.yaml) for some
sample settings, as well as the text below for example configurations for
specific providers.
## Sample configs
Here are a few configs for providers that should work with Synapse.
### [Dex][dex-idp]
[Dex][dex-idp] is a simple, open-source, certified OpenID Connect Provider.
Although it is designed to help build a full-blown provider with an
external database, it can be configured with static passwords in a config file.
Follow the [Getting Started
guide](https://github.com/dexidp/dex/blob/master/Documentation/getting-started.md)
to install Dex.
Edit `examples/config-dev.yaml` config file from the Dex repo to add a client:
```yaml
staticClients:
- id: synapse
secret: secret
redirectURIs:
- '[synapse public baseurl]/_synapse/oidc/callback'
name: 'Synapse'
```
Run with `dex serve examples/config-dex.yaml`.
Synapse config:
```yaml
oidc_config:
enabled: true
skip_verification: true # This is needed as Dex is served on an insecure endpoint
issuer: "http://127.0.0.1:5556/dex"
client_id: "synapse"
client_secret: "secret"
scopes: ["openid", "profile"]
user_mapping_provider:
config:
localpart_template: "{{ user.name }}"
display_name_template: "{{ user.name|capitalize }}"
```
### [Keycloak][keycloak-idp]
[Keycloak][keycloak-idp] is an opensource IdP maintained by Red Hat.
Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to install Keycloak and set up a realm.
1. Click `Clients` in the sidebar and click `Create`
2. Fill in the fields as below:
| Field | Value |
|-----------|-----------|
| Client ID | `synapse` |
| Client Protocol | `openid-connect` |
3. Click `Save`
4. Fill in the fields as below:
| Field | Value |
|-----------|-----------|
| Client ID | `synapse` |
| Enabled | `On` |
| Client Protocol | `openid-connect` |
| Access Type | `confidential` |
| Valid Redirect URIs | `[synapse public baseurl]/_synapse/oidc/callback` |
5. Click `Save`
6. On the Credentials tab, update the fields:
| Field | Value |
|-------|-------|
| Client Authenticator | `Client ID and Secret` |
7. Click `Regenerate Secret`
8. Copy Secret
```yaml
oidc_config:
enabled: true
issuer: "https://127.0.0.1:8443/auth/realms/{realm_name}"
client_id: "synapse"
client_secret: "copy secret generated from above"
scopes: ["openid", "profile"]
```
### [Auth0][auth0]
1. Create a regular web application for Synapse
2. Set the Allowed Callback URLs to `[synapse public baseurl]/_synapse/oidc/callback`
3. Add a rule to add the `preferred_username` claim.
<details>
<summary>Code sample</summary>
```js
function addPersistenceAttribute(user, context, callback) {
user.user_metadata = user.user_metadata || {};
user.user_metadata.preferred_username = user.user_metadata.preferred_username || user.user_id;
context.idToken.preferred_username = user.user_metadata.preferred_username;
auth0.users.updateUserMetadata(user.user_id, user.user_metadata)
.then(function(){
callback(null, user, context);
})
.catch(function(err){
callback(err);
});
}
```
</details>
Synapse config:
```yaml
oidc_config:
enabled: true
issuer: "https://your-tier.eu.auth0.com/" # TO BE FILLED
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes: ["openid", "profile"]
user_mapping_provider:
config:
localpart_template: "{{ user.preferred_username }}"
display_name_template: "{{ user.name }}"
```
### GitHub
GitHub is a bit special as it is not an OpenID Connect compliant provider, but
just a regular OAuth2 provider.
The [`/user` API endpoint](https://developer.github.com/v3/users/#get-the-authenticated-user)
can be used to retrieve information on the authenticated user. As the Synapse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.
1. Create a new OAuth application: https://github.com/settings/applications/new.
2. Set the callback URL to `[synapse public baseurl]/_synapse/oidc/callback`.
Synapse config:
```yaml
oidc_config:
enabled: true
discover: false
issuer: "https://github.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
authorization_endpoint: "https://github.com/login/oauth/authorize"
token_endpoint: "https://github.com/login/oauth/access_token"
userinfo_endpoint: "https://api.github.com/user"
scopes: ["read:user"]
user_mapping_provider:
config:
subject_claim: "id"
localpart_template: "{{ user.login }}"
display_name_template: "{{ user.name }}"
```
### [Google][google-idp]
1. Set up a project in the Google API Console (see
https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup).
2. Add an "OAuth Client ID" for a Web Application under "Credentials".
3. Copy the Client ID and Client Secret, and add the following to your synapse config:
```yaml
oidc_config:
enabled: true
issuer: "https://accounts.google.com/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
scopes: ["openid", "profile"]
user_mapping_provider:
config:
localpart_template: "{{ user.given_name|lower }}"
display_name_template: "{{ user.name }}"
```
4. Back in the Google console, add this Authorized redirect URI: `[synapse
public baseurl]/_synapse/oidc/callback`.
### Twitch
1. Setup a developer account on [Twitch](https://dev.twitch.tv/)
2. Obtain the OAuth 2.0 credentials by [creating an app](https://dev.twitch.tv/console/apps/)
3. Add this OAuth Redirect URL: `[synapse public baseurl]/_synapse/oidc/callback`
Synapse config:
```yaml
oidc_config:
enabled: true
issuer: "https://id.twitch.tv/oauth2/"
client_id: "your-client-id" # TO BE FILLED
client_secret: "your-client-secret" # TO BE FILLED
client_auth_method: "client_secret_post"
user_mapping_provider:
config:
localpart_template: '{{ user.preferred_username }}'
display_name_template: '{{ user.name }}'
```

View File

@@ -19,103 +19,102 @@ password auth provider module implementations:
Password auth provider classes must provide the following methods:
* `parse_config(config)`
This method is passed the `config` object for this module from the
homeserver configuration file.
*class* `SomeProvider.parse_config`(*config*)
It should perform any appropriate sanity checks on the provided
configuration, and return an object which is then passed into
> This method is passed the `config` object for this module from the
> homeserver configuration file.
>
> It should perform any appropriate sanity checks on the provided
> configuration, and return an object which is then passed into
> `__init__`.
This method should have the `@staticmethod` decoration.
*class* `SomeProvider`(*config*, *account_handler*)
* `__init__(self, config, account_handler)`
The constructor is passed the config object returned by
`parse_config`, and a `synapse.module_api.ModuleApi` object which
allows the password provider to check if accounts exist and/or create
new ones.
> The constructor is passed the config object returned by
> `parse_config`, and a `synapse.module_api.ModuleApi` object which
> allows the password provider to check if accounts exist and/or create
> new ones.
## Optional methods
Password auth provider classes may optionally provide the following methods:
Password auth provider classes may optionally provide the following
methods.
* `get_db_schema_files(self)`
*class* `SomeProvider.get_db_schema_files`()
This method, if implemented, should return an Iterable of
`(name, stream)` pairs of database schema files. Each file is applied
in turn at initialisation, and a record is then made in the database
so that it is not re-applied on the next start.
> This method, if implemented, should return an Iterable of
> `(name, stream)` pairs of database schema files. Each file is applied
> in turn at initialisation, and a record is then made in the database
> so that it is not re-applied on the next start.
* `get_supported_login_types(self)`
`someprovider.get_supported_login_types`()
This method, if implemented, should return a `dict` mapping from a
login type identifier (such as `m.login.password`) to an iterable
giving the fields which must be provided by the user in the submission
to [the `/login` API](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login).
These fields are passed in the `login_dict` dictionary to `check_auth`.
> This method, if implemented, should return a `dict` mapping from a
> login type identifier (such as `m.login.password`) to an iterable
> giving the fields which must be provided by the user in the submission
> to the `/login` api. These fields are passed in the `login_dict`
> dictionary to `check_auth`.
>
> For example, if a password auth provider wants to implement a custom
> login type of `com.example.custom_login`, where the client is expected
> to pass the fields `secret1` and `secret2`, the provider should
> implement this method and return the following dict:
>
> {"com.example.custom_login": ("secret1", "secret2")}
For example, if a password auth provider wants to implement a custom
login type of `com.example.custom_login`, where the client is expected
to pass the fields `secret1` and `secret2`, the provider should
implement this method and return the following dict:
`someprovider.check_auth`(*username*, *login_type*, *login_dict*)
```python
{"com.example.custom_login": ("secret1", "secret2")}
```
> This method is the one that does the real work. If implemented, it
> will be called for each login attempt where the login type matches one
> of the keys returned by `get_supported_login_types`.
>
> It is passed the (possibly UNqualified) `user` provided by the client,
> the login type, and a dictionary of login secrets passed by the
> client.
>
> The method should return a Twisted `Deferred` object, which resolves
> to the canonical `@localpart:domain` user id if authentication is
> successful, and `None` if not.
>
> Alternatively, the `Deferred` can resolve to a `(str, func)` tuple, in
> which case the second field is a callback which will be called with
> the result from the `/login` call (including `access_token`,
> `device_id`, etc.)
* `check_auth(self, username, login_type, login_dict)`
`someprovider.check_3pid_auth`(*medium*, *address*, *password*)
This method does the real work. If implemented, it
will be called for each login attempt where the login type matches one
of the keys returned by `get_supported_login_types`.
> This method, if implemented, is called when a user attempts to
> register or log in with a third party identifier, such as email. It is
> passed the medium (ex. "email"), an address (ex.
> "<jdoe@example.com>") and the user's password.
>
> The method should return a Twisted `Deferred` object, which resolves
> to a `str` containing the user's (canonical) User ID if
> authentication was successful, and `None` if not.
>
> As with `check_auth`, the `Deferred` may alternatively resolve to a
> `(user_id, callback)` tuple.
It is passed the (possibly unqualified) `user` field provided by the client,
the login type, and a dictionary of login secrets passed by the
client.
`someprovider.check_password`(*user_id*, *password*)
The method should return an `Awaitable` object, which resolves
to the canonical `@localpart:domain` user ID if authentication is
successful, and `None` if not.
> This method provides a simpler interface than
> `get_supported_login_types` and `check_auth` for password auth
> providers that just want to provide a mechanism for validating
> `m.login.password` logins.
>
> If implemented, it will be called to check logins with an
> `m.login.password` login type. It is passed a qualified
> `@localpart:domain` user id, and the password provided by the user.
>
> The method should return a Twisted `Deferred` object, which resolves
> to `True` if authentication is successful, and `False` if not.
Alternatively, the `Awaitable` can resolve to a `(str, func)` tuple, in
which case the second field is a callback which will be called with
the result from the `/login` call (including `access_token`,
`device_id`, etc.)
`someprovider.on_logged_out`(*user_id*, *device_id*, *access_token*)
* `check_3pid_auth(self, medium, address, password)`
This method, if implemented, is called when a user attempts to
register or log in with a third party identifier, such as email. It is
passed the medium (ex. "email"), an address (ex.
"<jdoe@example.com>") and the user's password.
The method should return an `Awaitable` object, which resolves
to a `str` containing the user's (canonical) User id if
authentication was successful, and `None` if not.
As with `check_auth`, the `Awaitable` may alternatively resolve to a
`(user_id, callback)` tuple.
* `check_password(self, user_id, password)`
This method provides a simpler interface than
`get_supported_login_types` and `check_auth` for password auth
providers that just want to provide a mechanism for validating
`m.login.password` logins.
If implemented, it will be called to check logins with an
`m.login.password` login type. It is passed a qualified
`@localpart:domain` user id, and the password provided by the user.
The method should return an `Awaitable` object, which resolves
to `True` if authentication is successful, and `False` if not.
* `on_logged_out(self, user_id, device_id, access_token)`
This method, if implemented, is called when a user logs out. It is
passed the qualified user ID, the ID of the deactivated device (if
any: access tokens are occasionally created without an associated
device ID), and the (now deactivated) access token.
It may return an `Awaitable` object; the logout request will
wait for the `Awaitable` to complete, but the result is ignored.
> This method, if implemented, is called when a user logs out. It is
> passed the qualified user ID, the ID of the deactivated device (if
> any: access tokens are occasionally created without an associated
> device ID), and the (now deactivated) access token.
>
> It may return a Twisted `Deferred` object; the logout request will
> wait for the deferred to complete but the result is ignored.

View File

@@ -188,9 +188,6 @@ to do step 2.
It is safe to at any time kill the port script and restart it.
Note that the database may take up significantly more (25% - 100% more)
space on disk after porting to Postgres.
### Using the port script
Firstly, shut down the currently running synapse server and copy its

View File

@@ -3,13 +3,13 @@
It is recommended to put a reverse proxy such as
[nginx](https://nginx.org/en/docs/http/ngx_http_proxy_module.html),
[Apache](https://httpd.apache.org/docs/current/mod/mod_proxy_http.html),
[Caddy](https://caddyserver.com/docs/quick-starts/reverse-proxy) or
[Caddy](https://caddyserver.com/docs/proxy) or
[HAProxy](https://www.haproxy.org/) in front of Synapse. One advantage
of doing so is that it means that you can expose the default https port
(443) to Matrix clients without needing to run Synapse with root
privileges.
**NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
> **NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
the requested URI in any way (for example, by decoding `%xx` escapes).
Beware that Apache *will* canonicalise URIs unless you specify
`nocanon`.
@@ -18,7 +18,7 @@ When setting up a reverse proxy, remember that Matrix clients and other
Matrix servers do not necessarily need to connect to your server via the
same server name or port. Indeed, clients will use port 443 by default,
whereas servers default to port 8448. Where these are different, we
refer to the 'client port' and the 'federation port'. See [the Matrix
refer to the 'client port' and the \'federation port\'. See [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
for more details of the algorithm used for federation connections, and
[delegate.md](<delegate.md>) for instructions on setting up delegation.
@@ -28,107 +28,93 @@ Let's assume that we expect clients to connect to our server at
`https://example.com:8448`. The following sections detail the configuration of
the reverse proxy and the homeserver.
## Reverse-proxy configuration examples
## Webserver configuration examples
**NOTE**: You only need one of these.
> **NOTE**: You only need one of these.
### nginx
```
server {
listen 443 ssl;
listen [::]:443 ssl;
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name matrix.example.com;
# For the federation port
listen 8448 ssl default_server;
listen [::]:8448 ssl default_server;
location /_matrix {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 10M;
}
}
server_name matrix.example.com;
server {
listen 8448 ssl default_server;
listen [::]:8448 ssl default_server;
server_name example.com;
location /_matrix {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 10M;
}
}
```
location / {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
**NOTE**: Do not add a path after the port in `proxy_pass`, otherwise nginx will
> **NOTE**: Do not add a `/` after the port in `proxy_pass`, otherwise nginx will
canonicalise/normalise the URI.
### Caddy 1
### Caddy
```
matrix.example.com {
proxy /_matrix http://localhost:8008 {
transparent
}
}
matrix.example.com {
proxy /_matrix http://localhost:8008 {
transparent
}
}
example.com:8448 {
proxy / http://localhost:8008 {
transparent
}
}
```
### Caddy 2
```
matrix.example.com {
reverse_proxy /_matrix/* http://localhost:8008
}
example.com:8448 {
reverse_proxy http://localhost:8008
}
```
example.com:8448 {
proxy / http://localhost:8008 {
transparent
}
}
### Apache
```
<VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com;
<VirtualHost *:443>
SSLEngine on
ServerName matrix.example.com;
AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
<VirtualHost *:8448>
SSLEngine on
ServerName example.com;
<VirtualHost *:8448>
SSLEngine on
ServerName example.com;
AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
```
AllowEncodedSlashes NoDecode
ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
</VirtualHost>
**NOTE**: ensure the `nocanon` options are included.
> **NOTE**: ensure the `nocanon` options are included.
### HAProxy
```
frontend https
bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
frontend https
bind :::443 v4v6 ssl crt /etc/ssl/haproxy/ strict-sni alpn h2,http/1.1
# Matrix client traffic
acl matrix-host hdr(host) -i matrix.example.com
acl matrix-path path_beg /_matrix
# Matrix client traffic
acl matrix-host hdr(host) -i matrix.example.com
acl matrix-path path_beg /_matrix
use_backend matrix if matrix-host matrix-path
use_backend matrix if matrix-host matrix-path
frontend matrix-federation
bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
default_backend matrix
frontend matrix-federation
bind :::8448 v4v6 ssl crt /etc/ssl/haproxy/synapse.pem alpn h2,http/1.1
default_backend matrix
backend matrix
server matrix 127.0.0.1:8008
```
backend matrix
server matrix 127.0.0.1:8008
## Homeserver Configuration
@@ -139,10 +125,3 @@ client IP addresses are recorded correctly.
Having done so, you can then use `https://matrix.example.com` (instead
of `https://matrix.example.com:8448`) as the "Custom server" when
connecting to Synapse from a client.
## Health check endpoint
Synapse exposes a health check endpoint for use by reverse proxies.
Each configured HTTP listener has a `/health` endpoint which always returns
200 OK (and doesn't get logged).

View File

@@ -10,17 +10,6 @@
# homeserver.yaml. Instead, if you are starting from scratch, please generate
# a fresh config using Synapse by following the instructions in INSTALL.md.
# Configuration options that take a time period can be set using a number
# followed by a letter. Letters have the following meanings:
# s = second
# m = minute
# h = hour
# d = day
# w = week
# y = year
# For example, setting redaction_retention_period: 5m would remove redacted
# messages from the database after 5 minutes, rather than 5 months.
################################################################################
# Configuration file for Synapse.
@@ -113,9 +102,7 @@ pid_file: DATADIR/homeserver.pid
#gc_thresholds: [700, 10, 10]
# Set the limit on the returned events in the timeline in the get
# and sync operations. The default value is 100. -1 means no upper limit.
#
# Uncomment the following to increase the limit to 5000.
# and sync operations. The default value is -1, means no upper limit.
#
#filter_timeline_limit: 5000
@@ -131,6 +118,38 @@ pid_file: DATADIR/homeserver.pid
#
#enable_search: false
# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.
#
#federation_domain_whitelist:
# - lon.example.com
# - nyc.example.com
# - syd.example.com
# Prevent federation requests from being sent to the following
# blacklist IP address CIDR ranges. If this option is not specified, or
# specified with an empty list, no ip range blacklist will be enforced.
#
# As of Synapse v1.4.0 this option also affects any outbound requests to identity
# servers provided by user input.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
federation_ip_range_blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
@@ -159,7 +178,7 @@ pid_file: DATADIR/homeserver.pid
# names: a list of names of HTTP resources. See below for a list of
# valid resource names.
#
# compress: set to true to enable HTTP compression for this resource.
# compress: set to true to enable HTTP comression for this resource.
#
# additional_resources: Only valid for an 'http' listener. A map of
# additional endpoints which should be loaded via dynamic modules.
@@ -264,7 +283,7 @@ listeners:
# number of monthly active users.
#
# 'limit_usage_by_mau' disables/enables monthly active user blocking. When
# enabled and a limit is reached the server returns a 'ResourceLimitError'
# anabled and a limit is reached the server returns a 'ResourceLimitError'
# with error type Codes.RESOURCE_LIMIT_EXCEEDED
#
# 'max_mau_value' is the hard limit of monthly active users above which
@@ -303,31 +322,22 @@ listeners:
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained homeserver settings
# Resource-constrained homeserver Settings
#
# When this is enabled, the room "complexity" will be checked before a user
# joins a new remote room. If it is above the complexity limit, the server will
# disallow joining, or will instantly leave.
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
# limit_remote_rooms.complexity, it will disallow joining or
# instantly leave.
#
# Room complexity is an arbitrary measure based on factors such as the number of
# users in the room.
# limit_remote_rooms.complexity_error can be set to customise the text
# displayed to the user when a room above the complexity threshold has
# its join cancelled.
#
limit_remote_rooms:
# Uncomment to enable room complexity checking.
#
#enabled: true
# the limit above which rooms cannot be joined. The default is 1.0.
#
#complexity: 0.5
# override the error which is returned when the room is too complex.
#
#complexity_error: "This room is too complex."
# allow server admins to join complex rooms. Default is false.
#
#admins_can_join: true
# Uncomment the below lines to enable:
#limit_remote_rooms:
# enabled: true
# complexity: 1.0
# complexity_error: "This room is too complex."
# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
@@ -593,39 +603,6 @@ acme:
# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.
#
#federation_domain_whitelist:
# - lon.example.com
# - nyc.example.com
# - syd.example.com
# Prevent federation requests from being sent to the following
# blacklist IP address CIDR ranges. If this option is not specified, or
# specified with an empty list, no ip range blacklist will be enforced.
#
# As of Synapse v1.4.0 this option also affects any outbound requests to identity
# servers provided by user input.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
federation_ip_range_blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
## Caching ##
# Caching can be configured through the following options.
@@ -661,12 +638,6 @@ caches:
# takes priority over setting through the config file.
# Ex. SYNAPSE_CACHE_FACTOR_GET_USERS_WHO_SHARE_ROOM_WITH_USER=2.0
#
# Some caches have '*' and other characters that are not
# alphanumeric or underscores. These caches can be named with or
# without the special characters stripped. For example, to specify
# the cache factor for `*stateGroupCache*` via an environment
# variable would be `SYNAPSE_CACHE_FACTOR_STATEGROUPCACHE=2.0`.
#
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
@@ -700,7 +671,7 @@ caches:
#database:
# name: psycopg2
# args:
# user: synapse_user
# user: synapse
# password: secretpassword
# database: synapse
# host: localhost
@@ -746,10 +717,6 @@ log_config: "CONFDIR/SERVERNAME.log.config"
# - one for ratelimiting redactions by room admins. If this is not explicitly
# set then it uses the same ratelimiting as per rc_message. This is useful
# to allow room admins to deal with abuse quickly.
# - two for ratelimiting number of rooms a user can join, "local" for when
# users are joining rooms the server is already in (this is cheap) vs
# "remote" for when users are trying to join rooms not on the server (which
# can be more expensive)
#
# The defaults are as shown below.
#
@@ -775,14 +742,6 @@ log_config: "CONFDIR/SERVERNAME.log.config"
#rc_admin_redaction:
# per_second: 1
# burst_count: 50
#
#rc_joins:
# local:
# per_second: 0.1
# burst_count: 3
# remote:
# per_second: 0.01
# burst_count: 3
# Ratelimiting settings for incoming federation
@@ -983,28 +942,25 @@ url_preview_accept_language:
## Captcha ##
# See docs/CAPTCHA_SETUP.md for full details of configuring this.
# See docs/CAPTCHA_SETUP for full details of configuring this.
# This homeserver's ReCAPTCHA public key. Must be specified if
# enable_registration_captcha is enabled.
# This homeserver's ReCAPTCHA public key.
#
#recaptcha_public_key: "YOUR_PUBLIC_KEY"
# This homeserver's ReCAPTCHA private key. Must be specified if
# enable_registration_captcha is enabled.
# This homeserver's ReCAPTCHA private key.
#
#recaptcha_private_key: "YOUR_PRIVATE_KEY"
# Uncomment to enable ReCaptcha checks when registering, preventing signup
# Enables ReCaptcha checks when registering, preventing signup
# unless a captcha is answered. Requires a valid ReCaptcha
# public/private key. Defaults to 'false'.
# public/private key.
#
#enable_registration_captcha: true
#enable_registration_captcha: false
# The API endpoint to use for verifying m.login.recaptcha responses.
# Defaults to "https://www.recaptcha.net/recaptcha/api/siteverify".
#
#recaptcha_siteverify_api: "https://my.recaptcha.site"
#recaptcha_siteverify_api: "https://www.recaptcha.net/recaptcha/api/siteverify"
## TURN ##
@@ -1148,7 +1104,7 @@ account_validity:
# If set, allows registration of standard or admin accounts by anyone who
# has the shared secret, even if registration is otherwise disabled.
#
#registration_shared_secret: <PRIVATE STRING>
# registration_shared_secret: <PRIVATE STRING>
# Set the number of bcrypt rounds used to generate password hash.
# Larger numbers increase the work factor needed to generate the hash.
@@ -1172,6 +1128,24 @@ account_validity:
#
#default_identity_server: https://matrix.org
# The list of identity servers trusted to verify third party
# identifiers by this server.
#
# Also defines the ID server which will be called when an account is
# deactivated (one will be picked arbitrarily).
#
# Note: This option is deprecated. Since v0.99.4, Synapse has tracked which identity
# server a 3PID has been bound to. For 3PIDs bound before then, Synapse runs a
# background migration script, informing itself that the identity server all of its
# 3PIDs have been bound to is likely one of the below.
#
# As of Synapse v1.4.0, all other functionality of this option has been deprecated, and
# it is now solely used for the purposes of the background migration script, and can be
# removed once it has run.
#trusted_third_party_id_servers:
# - matrix.org
# - vector.im
# Handle threepid (email/phone etc) registration and password resets through a set of
# *trusted* identity servers. Note that this allows the configured identity server to
# reset passwords for accounts!
@@ -1222,11 +1196,7 @@ account_threepid_delegates:
#enable_3pid_changes: false
# Users who register on this homeserver will automatically be joined
# to these rooms.
#
# By default, any room aliases included in this list will be created
# as a publicly joinable room when the first user registers for the
# homeserver. This behaviour can be customised with the settings below.
# to these rooms
#
#auto_join_rooms:
# - "#example:example.com"
@@ -1234,69 +1204,10 @@ account_threepid_delegates:
# Where auto_join_rooms are specified, setting this flag ensures that the
# the rooms exist by creating them when the first user on the
# homeserver registers.
#
# By default the auto-created rooms are publicly joinable from any federated
# server. Use the autocreate_auto_join_rooms_federated and
# autocreate_auto_join_room_preset settings below to customise this behaviour.
#
# Setting to false means that if the rooms are not manually created,
# users cannot be auto-joined since they do not exist.
#
# Defaults to true. Uncomment the following line to disable automatically
# creating auto-join rooms.
#
#autocreate_auto_join_rooms: false
# Whether the auto_join_rooms that are auto-created are available via
# federation. Only has an effect if autocreate_auto_join_rooms is true.
#
# Note that whether a room is federated cannot be modified after
# creation.
#
# Defaults to true: the room will be joinable from other servers.
# Uncomment the following to prevent users from other homeservers from
# joining these rooms.
#
#autocreate_auto_join_rooms_federated: false
# The room preset to use when auto-creating one of auto_join_rooms. Only has an
# effect if autocreate_auto_join_rooms is true.
#
# This can be one of "public_chat", "private_chat", or "trusted_private_chat".
# If a value of "private_chat" or "trusted_private_chat" is used then
# auto_join_mxid_localpart must also be configured.
#
# Defaults to "public_chat", meaning that the room is joinable by anyone, including
# federated servers if autocreate_auto_join_rooms_federated is true (the default).
# Uncomment the following to require an invitation to join these rooms.
#
#autocreate_auto_join_room_preset: private_chat
# The local part of the user id which is used to create auto_join_rooms if
# autocreate_auto_join_rooms is true. If this is not provided then the
# initial user account that registers will be used to create the rooms.
#
# The user id is also used to invite new users to any auto-join rooms which
# are set to invite-only.
#
# It *must* be configured if autocreate_auto_join_room_preset is set to
# "private_chat" or "trusted_private_chat".
#
# Note that this must be specified in order for new users to be correctly
# invited to any auto-join rooms which have been set to invite-only (either
# at the time of creation or subsequently).
#
# Note that, if the room already exists, this user must be joined and
# have the appropriate permissions to invite new members.
#
#auto_join_mxid_localpart: system
# When auto_join_rooms is specified, setting this flag to false prevents
# guest accounts from being automatically joined to the rooms.
#
# Defaults to true.
#
#auto_join_rooms_for_guests: false
#autocreate_auto_join_rooms: true
## Metrics ###
@@ -1326,8 +1237,7 @@ metrics_flags:
#known_servers: true
# Whether or not to report anonymized homeserver usage statistics.
#
#report_stats: true|false
# report_stats: true|false
# The endpoint to report the anonymized homeserver usage statistics to.
# Defaults to https://matrix.org/report-usage-stats/push
@@ -1363,13 +1273,13 @@ metrics_flags:
# the registration_shared_secret is used, if one is given; otherwise,
# a secret key is derived from the signing key.
#
#macaroon_secret_key: <PRIVATE STRING>
# macaroon_secret_key: <PRIVATE STRING>
# a secret which is used to calculate HMACs for form values, to stop
# falsification of values. Must be specified for the User Consent
# forms to work.
#
#form_secret: <PRIVATE STRING>
# form_secret: <PRIVATE STRING>
## Signing Keys ##
@@ -1454,8 +1364,6 @@ trusted_key_servers:
#key_server_signing_keys_path: "key_server_signing_keys.key"
## Single sign-on integration ##
# Enable SAML2 for registration and login. Uses pysaml2.
#
# At least one of `sp_config` or `config_path` must be set in this section to
@@ -1522,7 +1430,7 @@ saml2_config:
# The lifetime of a SAML session. This defines how long a user has to
# complete the authentication process, if allow_unsolicited is unset.
# The default is 15 minutes.
# The default is 5 minutes.
#
#saml_session_lifetime: 5m
@@ -1577,17 +1485,6 @@ saml2_config:
#
#grandfathered_mxid_source_attribute: upn
# It is possible to configure Synapse to only allow logins if SAML attributes
# match particular values. The requirements can be listed under
# `attribute_requirements` as shown below. All of the listed attributes must
# match for the login to be permitted.
#
#attribute_requirements:
# - attribute: userGroup
# value: "staff"
# - attribute: department
# value: "sales"
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
@@ -1600,13 +1497,7 @@ saml2_config:
# * HTML page to display to users if something goes wrong during the
# authentication process: 'saml_error.html'.
#
# When rendering, this template is given the following variables:
# * code: an HTML error code corresponding to the error that is being
# returned (typically 400 or 500)
#
# * msg: a textual message describing the error.
#
# The variables will automatically be HTML-escaped.
# This template doesn't currently need any variable to render.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
@@ -1614,119 +1505,92 @@ saml2_config:
#template_dir: "res/templates"
# OpenID Connect integration. The following settings can be used to make Synapse
# use an OpenID Connect Provider for authentication, instead of its internal
# password database.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/openid.md.
# Enable OpenID Connect for registration and login. Uses authlib.
#
oidc_config:
# Uncomment the following to enable authorization against an OpenID Connect
# server. Defaults to false.
#
#enabled: true
# Uncomment the following to disable use of the OIDC discovery mechanism to
# discover endpoints. Defaults to true.
#
#discover: false
# the OIDC issuer. Used to validate tokens and (if discovery is enabled) to
# discover the provider's endpoints.
#
# Required if 'enabled' is true.
#
#issuer: "https://accounts.example.com/"
# oauth2 client id to use.
#
# Required if 'enabled' is true.
#
#client_id: "provided-by-your-issuer"
# oauth2 client secret to use.
#
# Required if 'enabled' is true.
#
#client_secret: "provided-by-your-issuer"
# auth method to use when exchanging the token.
# Valid values are 'client_secret_basic' (default), 'client_secret_post' and
# 'none'.
#
#client_auth_method: client_secret_post
# list of scopes to request. This should normally include the "openid" scope.
# Defaults to ["openid"].
#
#scopes: ["openid", "profile"]
# the oauth2 authorization endpoint. Required if provider discovery is disabled.
#
#authorization_endpoint: "https://accounts.example.com/oauth2/auth"
# the oauth2 token endpoint. Required if provider discovery is disabled.
#
#token_endpoint: "https://accounts.example.com/oauth2/token"
# the OIDC userinfo endpoint. Required if discovery is disabled and the
# "openid" scope is not requested.
#
#userinfo_endpoint: "https://accounts.example.com/userinfo"
# URI where to fetch the JWKS. Required if discovery is disabled and the
# "openid" scope is used.
#
#jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
# Uncomment to skip metadata verification. Defaults to false.
#
# Use this if you are connecting to a provider that is not OpenID Connect
# compliant.
# Avoid this in production.
#
#skip_verification: true
# An external module can be provided here as a custom solution to mapping
# attributes returned from an OIDC provider onto a matrix user.
#
user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
# Default is 'synapse.handlers.oidc_handler.JinjaOidcMappingProvider'.
# enable OpenID Connect. Defaults to false.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/sso_mapping_providers.md#openid-mapping-providers
# for information on implementing a custom mapping provider.
#
#module: mapping_provider.OidcMappingProvider
#enabled: true
# Custom configuration values for the module. This section will be passed as
# a Python dictionary to the user mapping provider module's `parse_config`
# method.
# use the OIDC discovery mechanism to discover endpoints. Defaults to true.
#
# The examples below are intended for the default provider: they should be
# changed if using a custom provider.
#discover: true
# the OIDC issuer. Used to validate tokens and discover the providers endpoints. Required.
#
config:
# name of the claim containing a unique identifier for the user.
# Defaults to `sub`, which OpenID Connect compliant providers should provide.
#
#subject_claim: "sub"
#issuer: "https://accounts.example.com/"
# Jinja2 template for the localpart of the MXID.
#
# When rendering, this template is given the following variables:
# * user: The claims returned by the UserInfo Endpoint and/or in the ID
# Token
#
# This must be configured if using the default mapping provider.
#
localpart_template: "{{ user.preferred_username }}"
# oauth2 client id to use. Required.
#
#client_id: "provided-by-your-issuer"
# Jinja2 template for the display name to set on first login.
# oauth2 client secret to use. Required.
#
#client_secret: "provided-by-your-issuer"
# auth method to use when exchanging the token.
# Valid values are "client_secret_basic" (default), "client_secret_post" and "none".
#
#client_auth_method: "client_auth_basic"
# list of scopes to ask. This should include the "openid" scope. Defaults to ["openid"].
#
#scopes: ["openid"]
# the oauth2 authorization endpoint. Required if provider discovery is disabled.
#
#authorization_endpoint: "https://accounts.example.com/oauth2/auth"
# the oauth2 token endpoint. Required if provider discovery is disabled.
#
#token_endpoint: "https://accounts.example.com/oauth2/token"
# the OIDC userinfo endpoint. Required if discovery is disabled and the "openid" scope is not asked.
#
#userinfo_endpoint: "https://accounts.example.com/userinfo"
# URI where to fetch the JWKS. Required if discovery is disabled and the "openid" scope is used.
#
#jwks_uri: "https://accounts.example.com/.well-known/jwks.json"
# skip metadata verification. Defaults to false.
# Use this if you are connecting to a provider that is not OpenID Connect compliant.
# Avoid this in production.
#
#skip_verification: false
# An external module can be provided here as a custom solution to mapping
# attributes returned from an OIDC provider onto a matrix user.
#
user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
# Default is 'synapse.handlers.oidc_handler.JinjaOidcMappingProvider'.
#
# If unset, no displayname will be set.
#module: mapping_provider.OidcMappingProvider
# Custom configuration values for the module. Below options are intended
# for the built-in provider, they should be changed if using a custom
# module. This section will be passed as a Python dictionary to the
# module's `parse_config` method.
#
#display_name_template: "{{ user.given_name }} {{ user.last_name }}"
# Below is the config of the default mapping provider, based on Jinja2
# templates. Those templates are used to render user attributes, where the
# userinfo object is available through the `user` variable.
#
config:
# name of the claim containing a unique identifier for the user.
# Defaults to `sub`, which OpenID Connect compliant providers should provide.
#
#subject_claim: "sub"
# Jinja2 template for the localpart of the MXID
#
localpart_template: "{{ user.preferred_username }}"
# Jinja2 template for the display name to set on first login. Optional.
#
#display_name_template: "{{ user.given_name }} {{ user.last_name }}"
@@ -1741,8 +1605,7 @@ oidc_config:
# # name: value
# Additional settings to use with single-sign on systems such as OpenID Connect,
# SAML2 and CAS.
# Additional settings to use with single-sign on systems such as SAML2 and CAS.
#
sso:
# A list of client URLs which are whitelisted so that the user does not
@@ -1827,60 +1690,12 @@ sso:
#template_dir: "res/templates"
# JSON web token integration. The following settings can be used to make
# Synapse JSON web tokens for authentication, instead of its internal
# password database.
#
# Each JSON Web Token needs to contain a "sub" (subject) claim, which is
# used as the localpart of the mxid.
#
# Additionally, the expiration time ("exp"), not before time ("nbf"),
# and issued at ("iat") claims are validated if present.
#
# Note that this is a non-standard login type and client support is
# expected to be non-existent.
#
# See https://github.com/matrix-org/synapse/blob/master/docs/jwt.md.
# The JWT needs to contain a globally unique "sub" (subject) claim.
#
#jwt_config:
# Uncomment the following to enable authorization using JSON web
# tokens. Defaults to false.
#
#enabled: true
# This is either the private shared secret or the public key used to
# decode the contents of the JSON web token.
#
# Required if 'enabled' is true.
#
#secret: "provided-by-your-issuer"
# The algorithm used to sign the JSON web token.
#
# Supported algorithms are listed at
# https://pyjwt.readthedocs.io/en/latest/algorithms.html
#
# Required if 'enabled' is true.
#
#algorithm: "provided-by-your-issuer"
# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and
# validated for all JSON web tokens.
#
#issuer: "provided-by-your-issuer"
# A list of audiences to validate the "aud" claim against.
#
# Optional, if provided the "aud" claim will be required and
# validated for all JSON web tokens.
#
# Note that if the "aud" claim is included in a JSON web token then
# validation will fail without configuring audiences.
#
#audiences:
# - "provided-by-your-issuer"
# enabled: true
# secret: "a secret"
# algorithm: "HS256"
password_config:
@@ -1949,8 +1764,8 @@ email:
# Username/password for authentication to the SMTP server. By default, no
# authentication is attempted.
#
#smtp_user: "exampleusername"
#smtp_pass: "examplepassword"
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# Uncomment the following to require TLS transport security for SMTP.
# By default, Synapse will connect over plain text, and will then switch to
@@ -1971,8 +1786,8 @@ email:
#
#notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name defines the default value for '%(app)s' in notif_from and email
# subjects. It defaults to 'Matrix'.
# app_name defines the default value for '%(app)s' in notif_from. It
# defaults to 'Matrix'.
#
#app_name: my_branded_matrix_server
@@ -2041,73 +1856,6 @@ email:
#
#template_dir: "res/templates"
# Subjects to use when sending emails from Synapse.
#
# The placeholder '%(app)s' will be replaced with the value of the 'app_name'
# setting above, or by a value dictated by the Matrix client application.
#
# If a subject isn't overridden in this configuration file, the value used as
# its example will be used.
#
#subjects:
# Subjects for notification emails.
#
# On top of the '%(app)s' placeholder, these can use the following
# placeholders:
#
# * '%(person)s', which will be replaced by the display name of the user(s)
# that sent the message(s), e.g. "Alice and Bob".
# * '%(room)s', which will be replaced by the name of the room the
# message(s) have been sent to, e.g. "My super room".
#
# See the example provided for each setting to see which placeholder can be
# used and how to use them.
#
# Subject to use to notify about one message from one or more user(s) in a
# room which has a name.
#message_from_person_in_room: "[%(app)s] You have a message on %(app)s from %(person)s in the %(room)s room..."
#
# Subject to use to notify about one message from one or more user(s) in a
# room which doesn't have a name.
#message_from_person: "[%(app)s] You have a message on %(app)s from %(person)s..."
#
# Subject to use to notify about multiple messages from one or more users in
# a room which doesn't have a name.
#messages_from_person: "[%(app)s] You have messages on %(app)s from %(person)s..."
#
# Subject to use to notify about multiple messages in a room which has a
# name.
#messages_in_room: "[%(app)s] You have messages on %(app)s in the %(room)s room..."
#
# Subject to use to notify about multiple messages in multiple rooms.
#messages_in_room_and_others: "[%(app)s] You have messages on %(app)s in the %(room)s room and others..."
#
# Subject to use to notify about multiple messages from multiple persons in
# multiple rooms. This is similar to the setting above except it's used when
# the room in which the notification was triggered has no name.
#messages_from_person_and_others: "[%(app)s] You have messages on %(app)s from %(person)s and others..."
#
# Subject to use to notify about an invite to a room which has a name.
#invite_from_person_to_room: "[%(app)s] %(person)s has invited you to join the %(room)s room on %(app)s..."
#
# Subject to use to notify about an invite to a room which doesn't have a
# name.
#invite_from_person: "[%(app)s] %(person)s has invited you to chat on %(app)s..."
# Subject for emails related to account administration.
#
# On top of the '%(app)s' placeholder, these one can use the
# '%(server_name)s' placeholder, which will be replaced by the value of the
# 'server_name' setting in your Synapse configuration.
#
# Subject to use when sending a password reset email.
#password_reset: "[%(server_name)s] Password reset"
#
# Subject to use when sending a verification email to assert an address's
# ownership.
#email_validation: "[%(server_name)s] Validate your email"
# Password providers allow homeserver administrators to integrate
# their Synapse installation with existing authentication methods
@@ -2167,26 +1915,6 @@ spam_checker:
# example_stop_events_from: ['@bad:example.com']
## Rooms ##
# Controls whether locally-created rooms should be end-to-end encrypted by
# default.
#
# Possible options are "all", "invite", and "off". They are defined as:
#
# * "all": any locally-created room
# * "invite": any room created with the "private_chat" or "trusted_private_chat"
# room creation presets
# * "off": this option will take no effect
#
# The default value is "off".
#
# Note that this option will only affect rooms created after it is set. It
# will also not affect rooms created by other servers.
#
#encryption_enabled_by_default_for_room_type: invite
# Uncomment to allow non-server-admin users to create groups on this server
#
#enable_group_creation: true
@@ -2418,57 +2146,3 @@ opentracing:
#
# logging:
# false
## Workers ##
# Disables sending of outbound federation transactions on the main process.
# Uncomment if using a federation sender worker.
#
#send_federation: false
# It is possible to run multiple federation sender workers, in which case the
# work is balanced across them.
#
# This configuration must be shared between all federation sender workers, and if
# changed all federation sender workers must be stopped at the same time and then
# started, to ensure that all instances are running with the same config (otherwise
# events may be dropped).
#
#federation_sender_instances:
# - federation_sender1
# When using workers this should be a map from `worker_name` to the
# HTTP replication listener of the worker, if configured.
#
#instance_map:
# worker1:
# host: localhost
# port: 8034
# Experimental: When using workers you can define which workers should
# handle event persistence and typing notifications. Any worker
# specified here must also be in the `instance_map`.
#
#stream_writers:
# events: worker1
# typing: worker1
# Configuration for Redis when using workers. This *must* be enabled when
# using workers (unless using old style direct TCP configuration).
#
redis:
# Uncomment the below to enable Redis support.
#
#enabled: true
# Optional host and port to use to connect to redis. Defaults to
# localhost and 6379
#
#host: localhost
#port: 6379
# Optional password if configured on the Redis instance
#
#password: <secret_password>

View File

@@ -11,33 +11,24 @@ formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
filters:
context:
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
file:
class: logging.handlers.TimedRotatingFileHandler
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: /var/log/matrix-synapse/homeserver.log
when: midnight
backupCount: 3 # Does not include the current log file.
maxBytes: 104857600
backupCount: 10
filters: [context]
encoding: utf8
# Default to buffering writes to log file for efficiency. This means that
# will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
# logs will still be flushed immediately.
buffer:
class: logging.handlers.MemoryHandler
target: file
# The capacity is the number of log lines that are buffered before
# being written to disk. Increasing this will lead to better
# performance, at the expensive of it taking longer for log lines to
# be written to disk.
capacity: 10
flushLevel: 30 # Flush for WARNING logs as well
# A handler that writes logs to stderr. Unused by default, but can be used
# instead of "buffer" and "file" in the logger handlers.
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse.storage.SQL:
@@ -45,23 +36,8 @@ loggers:
# information such as access tokens.
level: INFO
twisted:
# We send the twisted logging directly to the file handler,
# to work around https://github.com/matrix-org/synapse/issues/3471
# when using "buffer" logger. Use "console" to log to stderr instead.
handlers: [file]
propagate: false
root:
level: INFO
# Write logs to the `buffer` handler, which will buffer them together in memory,
# then write them to a file.
#
# Replace "buffer" with "console" to log to stderr instead. (Note that you'll
# also need to update the configuation for the `twisted` logger above, in
# this case.)
#
handlers: [buffer]
handlers: [file, console]
disable_existing_loggers: false

View File

@@ -138,8 +138,6 @@ A custom mapping provider must specify the following methods:
* `mxid_localpart` - Required. The mxid localpart of the new user.
* `displayname` - The displayname of the new user. If not provided, will default to
the value of `mxid_localpart`.
* `emails` - A list of emails for the new user. If not provided, will
default to an empty list.
### Default SAML Mapping Provider

View File

@@ -1,32 +0,0 @@
### Using synctl with workers
If you want to use `synctl` to manage your synapse processes, you will need to
create an additional configuration file for the main synapse process. That
configuration should look like this:
```yaml
worker_app: synapse.app.homeserver
```
Additionally, each worker app must be configured with the name of a "pid file",
to which it will write its process ID when it starts. For example, for a
synchrotron, you might write:
```yaml
worker_pid_file: /home/matrix/synapse/worker1.pid
```
Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.:
synctl -a $CONFIG/workers start
Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
synapse without restarting all the synchrotrons may result in broken typing
notifications.
To manipulate a specific worker, you pass the -w option to synctl:
synctl -w $CONFIG/workers/worker1.yaml restart

View File

@@ -1,6 +1,6 @@
[Unit]
Description=Synapse %i
AssertPathExists=/etc/matrix-synapse/workers/%i.yaml
# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target

View File

@@ -1,7 +1,7 @@
worker_app: synapse.app.federation_reader
worker_name: federation_reader1
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:

View File

@@ -18,7 +18,7 @@ For TURN relaying with `coturn` to work, it must be hosted on a server/endpoint
Hosting TURN behind a NAT (even with appropriate port forwarding) is known to cause issues
and to often not work.
## `coturn` setup
## `coturn` Setup
### Initial installation
@@ -26,13 +26,7 @@ The TURN daemon `coturn` is available from a variety of sources such as native p
#### Debian installation
Just install the debian package:
```sh
apt install coturn
```
This will install and start a systemd service called `coturn`.
# apt install coturn
#### Source installation
@@ -69,52 +63,38 @@ This will install and start a systemd service called `coturn`.
1. Consider your security settings. TURN lets users request a relay which will
connect to arbitrary IP addresses and ports. The following configuration is
suggested as a minimum starting point:
# VoIP traffic is all UDP. There is no reason to let users connect to arbitrary TCP endpoints via the relay.
no-tcp-relay
# don't let the relay ever try to connect to private IP address ranges within your network (if any)
# given the turn server is likely behind your firewall, remember to include any privileged public IPs too.
denied-peer-ip=10.0.0.0-10.255.255.255
denied-peer-ip=192.168.0.0-192.168.255.255
denied-peer-ip=172.16.0.0-172.31.255.255
# special case the turn server itself so that client->TURN->TURN->client flows work
allowed-peer-ip=10.0.0.1
# consider whether you want to limit the quota of relayed streams per user (or total) to avoid risk of DoS.
user-quota=12 # 4 streams per video call, so 12 streams = 3 simultaneous relayed calls per user.
total-quota=1200
1. Also consider supporting TLS/DTLS. To do this, add the following settings
to `turnserver.conf`:
# TLS certificates, including intermediate certs.
# For Let's Encrypt certificates, use `fullchain.pem` here.
cert=/path/to/fullchain.pem
# TLS private key file
pkey=/path/to/privkey.pem
Ideally coturn should refuse to relay traffic which isn't SRTP; see
<https://github.com/matrix-org/synapse/issues/2009>
1. Ensure your firewall allows traffic into the TURN server on the ports
you've configured it to listen on (by default 3478 and 5349 for the TURN(S)
traffic, remembering to allow both TCP and UDP, and ports 49152-65535
for the UDP relay).
you've configured it to listen on (remember to allow both TCP and UDP TURN
traffic)
1. (Re)start the turn server:
1. If you've configured coturn to support TLS/DTLS, generate or import your
private key and certificate.
* If you used the Debian package (or have set up a systemd unit yourself):
```sh
systemctl restart coturn
```
1. Start the turn server:
* If you installed from source:
bin/turnserver -o
```sh
bin/turnserver -o
```
## Synapse setup
## synapse Setup
Your home server configuration file needs the following extra keys:
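The keys themselves are elided by this diff hunk; as a rough sketch (hostnames and the secret are placeholders, assuming the shared-secret authentication method) they look something like:
```yaml
# Placeholder values -- substitute your own TURN server and secret.
turn_uris: [ "turn:turn.example.com:3478?transport=udp", "turn:turn.example.com:3478?transport=tcp" ]
turn_shared_secret: "YOUR_SHARED_SECRET"
# Lifetime of generated TURN credentials, in milliseconds (here: one day).
turn_user_lifetime: 86400000
turn_allow_guests: true
```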
@@ -146,14 +126,7 @@ As an example, here is the relevant section of the config file for matrix.org:
After updating the homeserver configuration, you must restart synapse:
* If you use synctl:
```sh
cd /where/you/run/synapse
./synctl restart
```
* If you use systemd:
```
systemctl restart synapse.service
```
...and your Home Server now supports VoIP relaying!

View File

@@ -7,6 +7,6 @@ who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to execute the SQL [here](../synapse/storage/databases/main/schema/delta/53/user_dir_populate.sql)
solution to fix it is to execute the SQL [here](../synapse/storage/data_stores/main/schema/delta/53/user_dir_populate.sql)
and then restart synapse. This should then start a background task to
flush the current tables and regenerate the directory.

View File

@@ -1,10 +1,10 @@
# Scaling synapse via workers
For small instances it is recommended to run Synapse in the default monolith mode.
For larger instances where performance is a concern it can be helpful to split
out functionality into multiple separate python processes. These processes are
called 'workers', and are (eventually) intended to scale horizontally
independently.
For small instances it is recommended to run Synapse in monolith mode (the
default). For larger instances where performance is a concern it can be helpful
to split out functionality into multiple separate python processes. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.
Synapse's worker support is under active development and subject to change as
we attempt to rapidly scale ever larger Synapse instances. However we are
@@ -16,123 +16,69 @@ workers only work with PostgreSQL-based Synapse deployments. SQLite should only
be used for demo purposes and any admin considering workers should already be
running PostgreSQL.
## Main process/worker communication
## Master/worker communication
The processes communicate with each other via a Synapse-specific protocol called
'replication' (analogous to MySQL- or Postgres-style database replication) which
feeds streams of newly written data between processes so they can be kept in
sync with the database state.
The workers communicate with the master process via a Synapse-specific protocol
called 'replication' (analogous to MySQL- or Postgres-style database
replication) which feeds a stream of relevant data from the master to the
workers so they can be kept in sync with the master process and database state.
When configured to do so, Synapse uses a
[Redis pub/sub channel](https://redis.io/topics/pubsub) to send the replication
stream between all configured Synapse processes. Additionally, processes may
make HTTP requests to each other, primarily for operations which need to wait
for a reply ─ such as sending an event.
Additionally, workers may make HTTP requests to the master, to send information
in the other direction. Typically this is used for operations which need to
wait for a reply - such as sending an event.
Redis support was added in v1.13.0 with it becoming the recommended method in
v1.18.0. It replaced the old direct TCP connections (which are deprecated as of
v1.18.0) to the main process. With Redis, rather than all the workers connecting
to the main process, all the workers and the main process connect to Redis,
which relays replication commands between processes. This can give a significant
cpu saving on the main process and will be a prerequisite for upcoming
performance improvements.
See the [Architectural diagram](#architectural-diagram) section at the end for
a visualisation of what this looks like.
## Setting up workers
A Redis server is required to manage the communication between the processes.
The Redis server should be installed following the normal procedure for your
distribution (e.g. `apt install redis-server` on Debian). It is safe to use an
existing Redis deployment if you have one.
Once installed, check that Redis is running and accessible from the host running
Synapse, for example by executing `echo PING | nc -q1 localhost 6379` and seeing
a response of `+PONG`.
The appropriate dependencies must also be installed for Synapse. If using a
virtualenv, these can be installed with:
```sh
pip install matrix-synapse[redis]
```
Note that these dependencies are included when synapse is installed with `pip
install matrix-synapse[all]`. They are also included in the debian packages from
`matrix.org` and in the docker images at
https://hub.docker.com/r/matrixdotorg/synapse/.
## Configuration
To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. See
[reverse_proxy.md](reverse_proxy.md) for information on setting up a reverse
proxy.
When using workers, each worker process has its own configuration file which
contains settings specific to that worker, such as the HTTP listener that it
provides (if any), logging configuration, etc.
Normally, the worker processes are configured to read from a shared
configuration file as well as the worker-specific configuration files. This
makes it easier to keep common configuration settings synchronised across all
the processes.
The main process is somewhat special in this respect: it does not normally
need its own configuration file and can take all of its configuration from the
shared configuration file.
### Shared configuration
Normally, only a couple of changes are needed to make an existing configuration
file suitable for use with workers. First, you need to enable an "HTTP replication
listener" for the main process; and secondly, you need to enable redis-based
replication. For example:
the correct worker, or to the main synapse instance. Note that this includes
requests made to the federation port. See [reverse_proxy.md](reverse_proxy.md)
for information on setting up a reverse proxy.
To enable workers, you need to add *two* replication listeners to the
main Synapse configuration file (`homeserver.yaml`). For example:
```yaml
# extend the existing `listeners` section. This defines the ports that the
# main process will listen on.
listeners:
# The TCP replication port
- port: 9092
bind_address: '127.0.0.1'
type: replication
# The HTTP replication port
- port: 9093
bind_address: '127.0.0.1'
type: http
resources:
- names: [replication]
redis:
enabled: true
```
See the sample config for the full documentation of each option.
Under **no circumstances** should these replication API listeners be exposed to
the public internet; they have no authentication and are unencrypted.
Under **no circumstances** should the replication listener be exposed to the
public internet; it has no authentication and is unencrypted.
### Worker configuration
You should then create a set of configs for the various worker processes. Each
worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that
worker, e.g. the HTTP listener that it provides (if any); logging
configuration; etc. You should minimise the number of overrides though to
maintain a usable config.
In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unique name for the worker
(`worker_name`). The currently available worker applications are listed below.
You must also specify the HTTP replication endpoint that it should talk to on
the main synapse process. `worker_replication_host` should specify the host of
the main synapse and `worker_replication_http_port` should point to the HTTP
replication port. If the worker will handle HTTP requests then the
`worker_listeners` option should be set with a `http` listener, in the same way
as the `listeners` option in the shared config.
application (`worker_app`). The currently available worker applications are
listed below. You must also specify the replication endpoints that it should
talk to on the main synapse process. `worker_replication_host` should specify
the host of the main synapse, `worker_replication_port` should point to the TCP
replication listener port and `worker_replication_http_port` should point to
the HTTP replication port.
For example:
```yaml
worker_app: synapse.app.generic_worker
worker_name: worker1
worker_app: synapse.app.synchrotron
# The replication listener on the main synapse process.
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:
@@ -141,43 +87,142 @@ worker_listeners:
resources:
- names:
- client
- federation
worker_log_config: /home/matrix/synapse/config/worker1_log_config.yaml
worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
```
...is a full configuration for a generic worker instance, which will expose a
plain HTTP endpoint on port 8083 separately serving various endpoints, e.g.
`/sync`, which are listed below.
...is a full configuration for a synchrotron worker instance, which will expose a
plain HTTP `/sync` endpoint on port 8083 separately from the `/sync` endpoint provided
by the main synapse.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (`localhost:8083` in the above example).
### Running Synapse with workers
Finally, you need to start your worker processes. This can be done with either
`synctl` or your distribution's preferred service manager such as `systemd`. We
recommend the use of `systemd` where available: for information on setting up
`systemd` to start synapse workers, see
[systemd-with-workers](systemd-with-workers). To use `synctl`, see
[synctl_workers.md](synctl_workers.md).
[systemd-with-workers](systemd-with-workers). To use `synctl`, see below.
### **Experimental** support for replication over redis
As of Synapse v1.13.0, it is possible to configure Synapse to send replication
via a [Redis pub/sub channel](https://redis.io/topics/pubsub). This is an
alternative to direct TCP connections to the master: rather than all the
workers connecting to the master, all the workers and the master connect to
Redis, which relays replication commands between processes. This can give a
significant cpu saving on the master and will be a prerequisite for upcoming
performance improvements.
Note that this support is currently experimental; you may experience lost
messages and similar problems! It is strongly recommended that admins setting
up workers for the first time use direct TCP replication as above.
To configure Synapse to use Redis:
1. Install Redis following the normal procedure for your distribution - for
example, on Debian, `apt install redis-server`. (It is safe to use an
existing Redis deployment if you have one: we use a pub/sub stream named
according to the `server_name` of your synapse server.)
2. Check Redis is running and accessible: you should be able to `echo PING | nc -q1
localhost 6379` and get a response of `+PONG`.
3. Install the python prerequisites. If you installed synapse into a
virtualenv, this can be done with:
```sh
pip install matrix-synapse[redis]
```
The debian packages from matrix.org already include the required
dependencies.
4. Add config to the shared configuration (`homeserver.yaml`):
```yaml
redis:
enabled: true
```
Optional parameters which can go alongside `enabled` are `host`, `port`,
`password`. Normally none of these are required.
5. Restart master and all workers.
Once redis replication is in use, `worker_replication_port` is redundant and
can be removed from the worker configuration files. Similarly, the
configuration for the `listener` for the TCP replication port can be removed
from the main configuration file. Note that the HTTP replication port is
still required.
### Using synctl
If you want to use `synctl` to manage your synapse processes, you will need to
create an additional configuration file for the master synapse process. That
configuration should look like this:
```yaml
worker_app: synapse.app.homeserver
```
Additionally, each worker app must be configured with the name of a "pid file",
to which it will write its process ID when it starts. For example, for a
synchrotron, you might write:
```yaml
worker_pid_file: /home/matrix/synapse/synchrotron.pid
```
Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.:
synctl -a $CONFIG/workers start
Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
synapse without restarting all the synchrotrons may result in broken typing
notifications.
To manipulate a specific worker, you pass the -w option to synctl:
synctl -w $CONFIG/workers/synchrotron.yaml restart
## Available worker applications
### `synapse.app.generic_worker`
### `synapse.app.pusher`
This worker can handle API requests matching the following regular
expressions:
Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set `start_pushers: False` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
### `synapse.app.synchrotron`
The synchrotron handles `sync` requests from clients. In particular, it can
handle REST endpoints matching the following regular expressions:
# Sync requests
^/_matrix/client/(v2_alpha|r0)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0)/events$
^/_matrix/client/(api/v1|r0)/initialSync$
^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
# Federation requests
The above endpoints should all be routed to the synchrotron worker by the
reverse-proxy configuration.
It is possible to run multiple instances of the synchrotron to scale
horizontally. In this case the reverse-proxy should be configured to
load-balance across the instances, though it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting
a userid from the access token is currently left as an exercise for the reader.
### `synapse.app.appservice`
Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set `notify_appservices: False` in the
shared configuration file to stop the main synapse sending these notifications.
Note this worker cannot be load-balanced: only one instance should be active.
### `synapse.app.federation_reader`
Handles a subset of federation endpoints. In particular, it can handle REST
endpoints matching the following regular expressions:
^/_matrix/federation/v1/event/
^/_matrix/federation/v1/state/
^/_matrix/federation/v1/state_ids/
@@ -197,145 +242,40 @@ expressions:
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/exchange_third_party_invite/
^/_matrix/federation/v1/user/devices/
^/_matrix/federation/v1/send/
^/_matrix/federation/v1/get_groups_publicised$
^/_matrix/key/v2/query
# Inbound federation transaction request
^/_matrix/federation/v1/send/
# Client API requests
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
# Registration/login requests
^/_matrix/client/(api/v1|r0|unstable)/login$
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
# Event sending requests
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/
Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/federation/v1/groups/
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight:
The above endpoints should all be routed to the federation_reader worker by the
reverse-proxy configuration.
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
The `^/_matrix/federation/v1/send/` endpoint must only be handled by a single
instance.
Note that a HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.
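For instance, a minimal sketch of such a listener block in the worker's config (the port is illustrative):
```yaml
worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client, federation]
```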
#### Load balancing
It is possible to run multiple instances of this worker app, with incoming requests
being load-balanced between them by the reverse-proxy. However, different endpoints
have different characteristics and so admins
may wish to run multiple groups of workers handling different endpoints so that
load balancing can be done in different ways.
For `/sync` and `/initialSync` requests it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting a
user ID from the access token or `Authorization` header is currently left as an
exercise for the reader. Admins may additionally wish to separate out `/sync`
requests that have a `since` query parameter from those that don't (and
`/initialSync`): requests without `since` are "initial syncs", which happen
when a user logs in on a new device and can be *very* resource intensive, so
isolating them will stop them from interfering with other users' ongoing
syncs.
Federation and client requests can be balanced via simple round robin.
The inbound federation transaction request `^/_matrix/federation/v1/send/`
should be balanced by source IP so that transactions from the same remote server
go to the same process.
Registration/login requests can be handled separately purely to help ensure that
unexpected load doesn't affect new logins and sign ups.
Finally, event sending requests can be balanced by the room ID in the URI (or
the full URI, or even just round robin), the room ID is the path component after
`/rooms/`. If there is a large bridge connected that is sending or may send lots
of events, then a dedicated set of workers can be provisioned to limit the
effects of bursts of events from that bridge on events sent by normal users.
#### Stream writers
Additionally, there is *experimental* support for moving writing of specific
streams (such as events) off of the main process to a particular worker. (This
is only supported with Redis-based replication.)
Currently supported streams are `events` and `typing`.
To enable this, the worker must have an HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. For example, to
move event persistence off to a dedicated worker, the shared configuration would
include:
Note that `federation` must be added to the listener resources in the worker config:
```yaml
instance_map:
    event_persister1:
        host: localhost
        port: 8034

stream_writers:
    events: event_persister1

worker_app: synapse.app.federation_reader
...
worker_listeners:
 - type: http
   port: <port>
   resources:
     - names:
       - federation
```
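The dedicated worker's own configuration then needs a matching name and a replication listener on the port given in `instance_map`. A minimal sketch, reusing the `event_persister1` name and port 8034 from the example above:

```yaml
worker_app: synapse.app.generic_worker
worker_name: event_persister1

# Replication listener that other processes use to reach this worker.
worker_listeners:
  - type: http
    port: 8034
    resources:
      - names: [replication]
```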
### `synapse.app.pusher`
Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set `start_pushers: False` in the
shared configuration file to stop the main synapse sending push notifications.
Note this worker cannot be load-balanced: only one instance should be active.
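As a rough sketch, the relevant pieces of the two files might look like this (the worker name and replication port are illustrative):

```yaml
# In the shared configuration file:
start_pushers: False

# In the pusher worker's configuration file:
worker_app: synapse.app.pusher
worker_name: pusher1
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
```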
### `synapse.app.appservice`
Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set `notify_appservices: False` in the
shared configuration file to stop the main synapse sending appservice notifications.
Note this worker cannot be load-balanced: only one instance should be active.
### `synapse.app.federation_sender`
Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set `send_federation: False` in the
shared configuration file to stop the main synapse sending this traffic.
If running multiple federation senders then you must list each
instance in the `federation_sender_instances` option by their `worker_name`.
All instances must be stopped and started when adding or removing instances.
For example:
```yaml
federation_sender_instances:
- federation_sender1
- federation_sender2
```
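Each instance's own configuration file then just needs to use the matching name; for example, a sketch for the first sender (replication host and port are illustrative):

```yaml
worker_app: synapse.app.federation_sender
worker_name: federation_sender1
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
```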
Note this worker cannot be load-balanced: only one instance should be active.
### `synapse.app.media_repository`
@@ -367,12 +307,47 @@ expose the `media` resource. For example:
- media
```
Note that, if running multiple media repositories, they must be on the same server
and you must configure a single instance to run the background tasks, e.g.:
Note this worker cannot be load-balanced: only one instance should be active.
```yaml
media_instance_running_background_jobs: "media-repository-1"
```
### `synapse.app.client_reader`
Handles client API endpoints. It can handle REST endpoints matching the
following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
^/_matrix/client/(api/v1|r0|unstable)/login$
^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|unstable)/pushrules/.*$
^/_matrix/client/(api/v1|r0|unstable)/groups/.*$
^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/account_data/
^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/rooms/[^/]*/account_data/
Additionally, the following REST endpoints can be handled, but all requests must
be routed to the same instance:
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
### `synapse.app.user_dir`
@@ -408,65 +383,15 @@ file. For example:
worker_main_http_uri: http://127.0.0.1:8008
### Historical apps
### `synapse.app.event_creator`
*Note:* Historically there used to be more apps, however they have been
amalgamated into a single `synapse.app.generic_worker` app. The remaining apps
are ones that do specific processing unrelated to requests, e.g. the `pusher`
that handles sending out push notifications for new events. The intention is for
all these to be folded into the `generic_worker` app and to use config to define
which processes handle the various processing tasks, such as push notifications.
Handles some event creation. It can handle REST endpoints matching:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/
## Migration from old config
There are two main independent changes that have been made: introducing Redis
support and merging apps into `synapse.app.generic_worker`. Both these changes
are backwards compatible and so no changes to the config are required; however,
server admins are encouraged to plan a migration to Redis, as the old-style
direct TCP replication config is deprecated.
To migrate to Redis, add the `redis` config as above, and optionally remove the
TCP `replication` listener from the master process and `worker_replication_port`
from the worker config.
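As a rough sketch, assuming a Redis server running on localhost, that means adding something like the following to the shared configuration (host and port are illustrative) while dropping the now-unused TCP replication settings:

```yaml
redis:
  enabled: true
  host: localhost
  port: 6379
```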
To migrate apps to use `synapse.app.generic_worker` simply update the
`worker_app` option in the worker configs, and wherever the workers are started
(e.g. in systemd service files; this is not required for synctl).
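For example, a worker previously running one of the old apps only needs its `worker_app` line updating, along these lines:

```yaml
# before
#worker_app: synapse.app.client_reader

# after
worker_app: synapse.app.generic_worker
```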
## Architectural diagram
The following shows an example setup using Redis and a reverse proxy:
```
                    Clients & Federation
                              |
                              v
                        +-----------+
                        |           |
                        |  Reverse  |
                        |  Proxy    |
                        |           |
                        +-----------+
                            | | |
                            | | | HTTP requests
        +-------------------+ | +-----------+
        |                 +---+             |
        |                 |                 |
        v                 v                 v
+--------------+  +--------------+  +--------------+  +--------------+
|     Main     |  |   Generic    |  |   Generic    |  |    Event     |
|   Process    |  |   Worker 1   |  |   Worker 2   |  |  Persister   |
+--------------+  +--------------+  +--------------+  +--------------+
      ^    ^          |   ^   |         |   ^   |          ^    ^
      |    |          |   |   |         |   |   |          |    |
      |    |          |   |   |  HTTP   |   |   |          |    |
      |    +----------+<--|---|---------+   |   |          |    |
      |                   |   +-------------|-->+----------+    |
      |                   |                 |                   |
      |                   |                 |                   |
      v                   v                 v                   v
======================================================================
                                                 Redis pub/sub channel
```
It will create events locally and then send them on to the main synapse
instance to be persisted and handled.
View File
@@ -78,9 +78,3 @@ ignore_missing_imports = True
[mypy-authlib.*]
ignore_missing_imports = True
[mypy-rust_python_jaeger_reporter.*]
ignore_missing_imports = True
[mypy-nacl.*]
ignore_missing_imports = True
View File
@@ -24,7 +24,9 @@ DISTS = (
"debian:sid",
"ubuntu:xenial",
"ubuntu:bionic",
"ubuntu:focal",
"ubuntu:cosmic",
"ubuntu:disco",
"ubuntu:eoan",
)
DESC = '''\
View File
@@ -3,60 +3,37 @@
# A script which checks that an appropriate news file has been added on this
# branch.
echo -e "+++ \033[32mChecking newsfragment\033[m"
set -e
# make sure that origin/develop is up to date
git remote set-branches --add origin develop
git fetch -q origin develop
pr="$BUILDKITE_PULL_REQUEST"
git fetch origin develop
# if there are changes in the debian directory, check that the debian changelog
# has been updated
if ! git diff --quiet FETCH_HEAD... -- debian; then
if git diff --quiet FETCH_HEAD... -- debian/changelog; then
echo "Updates to debian directory, but no update to the changelog." >&2
echo "!! Please see the contributing guide for help writing your changelog entry:" >&2
echo "https://github.com/matrix-org/synapse/blob/develop/CONTRIBUTING.md#debian-changelog" >&2
exit 1
fi
fi
# if there are changes *outside* the debian directory, check that the
# newsfragments have been updated.
if ! git diff --name-only FETCH_HEAD... | grep -qv '^debian/'; then
exit 0
if git diff --name-only FETCH_HEAD... | grep -qv '^debian/'; then
tox -e check-newsfragment
fi
# Print a link to the contributing guide if the user makes a mistake
CONTRIBUTING_GUIDE_TEXT="!! Please see the contributing guide for help writing your changelog entry:
https://github.com/matrix-org/synapse/blob/develop/CONTRIBUTING.md#changelog"
# If check-newsfragment returns a non-zero exit code, print the contributing guide and exit
tox -qe check-newsfragment || (echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2 && exit 1)
echo
echo "--------------------------"
echo
matched=0
# check that any new newsfiles on this branch end with a full stop.
for f in `git diff --name-only FETCH_HEAD... -- changelog.d`; do
# check that any modified newsfiles on this branch end with a full stop.
lastchar=`tr -d '\n' < $f | tail -c 1`
if [ $lastchar != '.' -a $lastchar != '!' ]; then
echo -e "\e[31mERROR: newsfragment $f does not end with a '.' or '!'\e[39m" >&2
echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2
exit 1
fi
# see if this newsfile corresponds to the right PR
[[ -n "$pr" && "$f" == changelog.d/"$pr".* ]] && matched=1
done
if [[ -n "$pr" && "$matched" -eq 0 ]]; then
echo -e "\e[31mERROR: Did not find a news fragment with the right number: expected changelog.d/$pr.*.\e[39m" >&2
echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2
exit 1
fi
View File
@@ -1,34 +0,0 @@
#!/bin/bash
#
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This script checks that line terminators in all repository files (excluding
# those in the .git directory) feature unix line terminators.
#
# Usage:
#
# ./check_line_terminators.sh
#
# The script will emit exit code 1 if any files that do not use unix line
# terminators are found, 0 otherwise.
# cd to the root of the repository
cd `dirname $0`/..
# Find and print files with non-unix line terminators
if find . -path './.git/*' -prune -o -type f -print0 | xargs -0 grep -I -l $'\r$'; then
echo -e '\e[31mERROR: found files with CRLF line endings. See above.\e[39m'
exit 1
fi
View File
@@ -2,9 +2,9 @@ import argparse
import json
import logging
import sys
import urllib2
import dns.resolver
import urllib2
from signedjson.key import decode_verify_key_bytes, write_signing_keys
from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64
View File
@@ -3,6 +3,8 @@ import json
import sys
import time
import six
import psycopg2
import yaml
from canonicaljson import encode_canonical_json
@@ -10,7 +12,10 @@ from signedjson.key import read_signing_keys
from signedjson.sign import sign_json
from unpaddedbase64 import encode_base64
db_binary_type = memoryview
if six.PY2:
db_type = six.moves.builtins.buffer
else:
db_type = memoryview
def select_v1_keys(connection):
@@ -67,7 +72,7 @@ def rows_v2(server, json):
valid_until = json["valid_until_ts"]
key_json = encode_canonical_json(json)
for key_id in json["verify_keys"]:
yield (server, key_id, "-", valid_until, valid_until, db_binary_type(key_json))
yield (server, key_id, "-", valid_until, valid_until, db_type(key_json))
def main():

View File

@@ -21,7 +21,8 @@ import argparse
import base64
import json
import sys
from urllib import parse as urlparse
from six.moves.urllib import parse as urlparse
import nacl.signing
import requests
View File
@@ -2,8 +2,8 @@
#
# Runs linting scripts over the local Synapse checkout
# isort - sorts import statements
# black - opinionated code formatter
# flake8 - lints and finds mistakes
# black - opinionated code formatter
set -e
@@ -11,11 +11,11 @@ if [ $# -ge 1 ]
then
files=$*
else
files="synapse tests scripts-dev scripts contrib synctl"
files="synapse tests scripts-dev scripts"
fi
echo "Linting these locations: $files"
isort $files
isort -y -rc $files
flake8 $files
python3 -m black $files
./scripts-dev/config-lint.sh
flake8 $files
View File
@@ -40,7 +40,7 @@ class MockHomeserver(HomeServer):
config.server_name, reactor=reactor, config=config, **kwargs
)
self.version_string = "Synapse/" + get_version_string(synapse)
self.version_string = "Synapse/"+get_version_string(synapse)
if __name__ == "__main__":
@@ -86,7 +86,7 @@ if __name__ == "__main__":
store = hs.get_datastore()
async def run_background_updates():
await store.db_pool.updates.run_background_updates(sleep=False)
await store.db.updates.run_background_updates(sleep=False)
# Stop the reactor to exit the script once every background update is run.
reactor.stop()
View File
@@ -23,6 +23,8 @@ import sys
import time
import traceback
from six import string_types
import yaml
from twisted.internet import defer, reactor
@@ -35,29 +37,30 @@ from synapse.logging.context import (
make_deferred_yieldable,
run_in_background,
)
from synapse.storage.database import DatabasePool, make_conn
from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpdateStore
from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.databases.main.events_bg_updates import (
from synapse.storage.data_stores.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.data_stores.main.deviceinbox import (
DeviceInboxBackgroundUpdateStore,
)
from synapse.storage.data_stores.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.data_stores.main.events_bg_updates import (
EventsBackgroundUpdatesStore,
)
from synapse.storage.databases.main.media_repository import (
from synapse.storage.data_stores.main.media_repository import (
MediaRepositoryBackgroundUpdateStore,
)
from synapse.storage.databases.main.registration import (
from synapse.storage.data_stores.main.registration import (
RegistrationBackgroundUpdateStore,
find_max_generated_user_id_localpart,
)
from synapse.storage.databases.main.room import RoomBackgroundUpdateStore
from synapse.storage.databases.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.databases.main.search import SearchBackgroundUpdateStore
from synapse.storage.databases.main.state import MainStateBackgroundUpdateStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.user_directory import (
from synapse.storage.data_stores.main.room import RoomBackgroundUpdateStore
from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.data_stores.main.search import SearchBackgroundUpdateStore
from synapse.storage.data_stores.main.state import MainStateBackgroundUpdateStore
from synapse.storage.data_stores.main.stats import StatsStore
from synapse.storage.data_stores.main.user_directory import (
UserDirectoryBackgroundUpdateStore,
)
from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStore
from synapse.storage.data_stores.state.bg_updates import StateBackgroundUpdateStore
from synapse.storage.database import Database, make_conn
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util import Clock
@@ -88,7 +91,6 @@ BOOLEAN_COLUMNS = {
"account_validity": ["email_sent"],
"redactions": ["have_censored"],
"room_stats_state": ["is_federatable"],
"local_media_repository": ["safe_from_quarantine"],
}
@@ -127,26 +129,6 @@ APPEND_ONLY_TABLES = [
]
IGNORED_TABLES = {
# We don't port these tables, as they're a faff and we can regenerate
# them anyway.
"user_directory",
"user_directory_search",
"user_directory_search_content",
"user_directory_search_docsize",
"user_directory_search_segdir",
"user_directory_search_segments",
"user_directory_search_stat",
"user_directory_search_pos",
"users_who_share_private_rooms",
"users_in_public_room",
# UI auth sessions have foreign keys so additional care needs to be taken,
# the sessions are transient anyway, so ignore them.
"ui_auth_sessions",
"ui_auth_sessions_credentials",
}
# Error returned by the run function. Used at the top-level part of the script to
# handle errors and return codes.
end_error = None
@@ -173,14 +155,14 @@ class Store(
StatsStore,
):
def execute(self, f, *args, **kwargs):
return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)
return self.db.runInteraction(f.__name__, f, *args, **kwargs)
def execute_sql(self, sql, *args):
def r(txn):
txn.execute(sql, args)
return txn.fetchall()
return self.db_pool.runInteraction("execute_sql", r)
return self.db.runInteraction("execute_sql", r)
def insert_many_txn(self, txn, table, headers, rows):
sql = "INSERT INTO %s (%s) VALUES (%s)" % (
@@ -214,9 +196,6 @@ class MockHomeserver:
def get_reactor(self):
return reactor
def get_instance_name(self):
return "master"
class Porter(object):
def __init__(self, **kwargs):
@@ -225,7 +204,7 @@ class Porter(object):
async def setup_table(self, table):
if table in APPEND_ONLY_TABLES:
# It's safe to just carry on inserting.
row = await self.postgres_store.db_pool.simple_select_one(
row = await self.postgres_store.db.simple_select_one(
table="port_from_sqlite3",
keyvalues={"table_name": table},
retcols=("forward_rowid", "backward_rowid"),
@@ -242,7 +221,7 @@ class Porter(object):
) = await self._setup_sent_transactions()
backward_chunk = 0
else:
await self.postgres_store.db_pool.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": table,
@@ -272,7 +251,7 @@ class Porter(object):
await self.postgres_store.execute(delete_all)
await self.postgres_store.db_pool.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={"table_name": table, "forward_rowid": 1, "backward_rowid": 0},
)
@@ -309,14 +288,21 @@ class Porter(object):
)
return
if table in IGNORED_TABLES:
if table in (
"user_directory",
"user_directory_search",
"users_who_share_rooms",
"users_in_pubic_room",
):
# We don't port these tables, as they're a faff and we can regenreate
# them anyway.
self.progress.update(table, table_size) # Mark table as done
return
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null), as that is
# what synapse expects to be there.
await self.postgres_store.db_pool.simple_insert(
await self.postgres_store.db.simple_insert(
table=table, values={"stream_id": None}
)
self.progress.update(table, table_size) # Mark table as done
@@ -357,7 +343,7 @@ class Porter(object):
return headers, forward_rows, backward_rows
headers, frows, brows = await self.sqlite_store.db_pool.runInteraction(
headers, frows, brows = await self.sqlite_store.db.runInteraction(
"select", r
)
@@ -373,7 +359,7 @@ class Porter(object):
def insert(txn):
self.postgres_store.insert_many_txn(txn, table, headers[1:], rows)
self.postgres_store.db_pool.simple_update_one_txn(
self.postgres_store.db.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": table},
@@ -411,7 +397,7 @@ class Porter(object):
return headers, rows
headers, rows = await self.sqlite_store.db_pool.runInteraction("select", r)
headers, rows = await self.sqlite_store.db.runInteraction("select", r)
if rows:
forward_chunk = rows[-1][0] + 1
@@ -449,7 +435,7 @@ class Porter(object):
],
)
self.postgres_store.db_pool.simple_update_one_txn(
self.postgres_store.db.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": "event_search"},
@@ -492,7 +478,7 @@ class Porter(object):
db_conn, allow_outdated_version=allow_outdated_version
)
prepare_database(db_conn, engine, config=self.hs_config)
store = Store(DatabasePool(hs, db_config, engine), db_conn, hs)
store = Store(Database(hs, db_config, engine), db_conn, hs)
db_conn.commit()
return store
@@ -500,7 +486,7 @@ class Porter(object):
async def run_background_updates_on_postgres(self):
# Manually apply all background updates on the PostgreSQL database.
postgres_ready = (
await self.postgres_store.db_pool.updates.has_completed_background_updates()
await self.postgres_store.db.updates.has_completed_background_updates()
)
if not postgres_ready:
@@ -509,9 +495,9 @@ class Porter(object):
self.progress.set_state("Running background updates on PostgreSQL")
while not postgres_ready:
await self.postgres_store.db_pool.updates.do_next_background_update(100)
await self.postgres_store.db.updates.do_next_background_update(100)
postgres_ready = await (
self.postgres_store.db_pool.updates.has_completed_background_updates()
self.postgres_store.db.updates.has_completed_background_updates()
)
async def run(self):
@@ -532,7 +518,7 @@ class Porter(object):
# Check if all background updates are done, abort if not.
updates_complete = (
await self.sqlite_store.db_pool.updates.has_completed_background_updates()
await self.sqlite_store.db.updates.has_completed_background_updates()
)
if not updates_complete:
end_error = (
@@ -574,24 +560,22 @@ class Porter(object):
)
try:
await self.postgres_store.db_pool.runInteraction(
"alter_table", alter_table
)
await self.postgres_store.db.runInteraction("alter_table", alter_table)
except Exception:
# On Error Resume Next
pass
await self.postgres_store.db_pool.runInteraction(
await self.postgres_store.db.runInteraction(
"create_port_table", create_port_table
)
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = await self.sqlite_store.db_pool.simple_select_onecol(
sqlite_tables = await self.sqlite_store.db.simple_select_onecol(
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
)
postgres_tables = await self.postgres_store.db_pool.simple_select_onecol(
postgres_tables = await self.postgres_store.db.simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
@@ -623,10 +607,8 @@ class Porter(object):
)
)
# Step 5. Set up sequences
self.progress.set_state("Setting up sequence generators")
# Step 5. Do final post-processing
await self._setup_state_group_id_seq()
await self._setup_user_id_seq()
self.progress.done()
except Exception as e:
@@ -650,7 +632,7 @@ class Porter(object):
return bool(col)
if isinstance(col, bytes):
return bytearray(col)
elif isinstance(col, str) and "\0" in col:
elif isinstance(col, string_types) and "\0" in col:
logger.warning(
"DROPPING ROW: NUL value in table %s col %s: %r",
table,
@@ -692,7 +674,7 @@ class Porter(object):
return headers, [r for r in rows if r[ts_ind] < yesterday]
headers, rows = await self.sqlite_store.db_pool.runInteraction("select", r)
headers, rows = await self.sqlite_store.db.runInteraction("select", r)
rows = self._convert_rows("sent_transactions", headers, rows)
@@ -725,7 +707,7 @@ class Porter(object):
next_chunk = await self.sqlite_store.execute(get_start_id)
next_chunk = max(max_inserted_rowid + 1, next_chunk)
await self.postgres_store.db_pool.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": "sent_transactions",
@@ -794,14 +776,7 @@ class Porter(object):
next_id = curr_id + 1
txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
return self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
def _setup_user_id_seq(self):
def r(txn):
next_id = find_max_generated_user_id_localpart(txn) + 1
txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
return self.postgres_store.db.runInteraction("setup_state_group_id_seq", r)
##############################################
View File
@@ -26,11 +26,12 @@ ignore=W503,W504,E203,E731,E501
[isort]
line_length = 88
not_skip = __init__.py
sections=FUTURE,STDLIB,COMPAT,THIRDPARTY,TWISTED,FIRSTPARTY,TESTS,LOCALFOLDER
default_section=THIRDPARTY
known_first_party = synapse
known_tests=tests
known_compat = mock
known_compat = mock,six
known_twisted=twisted,OpenSSL
multi_line_output=3
include_trailing_comma=true
View File
@@ -114,7 +114,6 @@ setup(
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
],
scripts=["synctl"] + glob.glob("scripts/*"),
cmdclass={"test": TestCommand},

Some files were not shown because too many files have changed in this diff Show More