Compare commits
1 commit
squah/leav...release-v1

| Author | SHA1 | Date |
|---|---|---|
|  | 4ffbebf09b |  |

.github/workflows/docker.yml (vendored, 5 changes)
```diff
@@ -5,7 +5,7 @@ name: Build docker images
 on:
   push:
     tags: ["v*"]
-    branches: [ master, main, develop ]
+    branches: [ master, main ]
   workflow_dispatch:

 permissions:
@@ -38,9 +38,6 @@ jobs:
         id: set-tag
         run: |
           case "${GITHUB_REF}" in
-            refs/heads/develop)
-              tag=develop
-              ;;
             refs/heads/master|refs/heads/main)
               tag=latest
               ;;
```
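For orientation, the removed `case` block chooses the Docker tag from the Git ref. A rough Python rendering of that selection logic (the tag and SHA fallbacks are assumptions based on the usual shape of this workflow, not shown in the hunk above):

```python
def docker_tag(github_ref: str, github_sha: str) -> str:
    # Mirrors the shell `case "${GITHUB_REF}"` block in docker.yml.
    if github_ref == "refs/heads/develop":
        return "develop"  # the branch case removed on the release side of this diff
    if github_ref in ("refs/heads/master", "refs/heads/main"):
        return "latest"
    if github_ref.startswith("refs/tags/"):
        # Assumed: tag builds publish under the tag name (e.g. v1.49.0rc1).
        return github_ref.removeprefix("refs/tags/")
    return github_sha  # assumed fallback for other refs
```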
.github/workflows/tests.yml (vendored, 2 changes)
```diff
@@ -374,7 +374,7 @@ jobs:
         working-directory: complement/dockerfiles

       # Run Complement
-      - run: go test -v -tags synapse_blacklist,msc2403 ./tests/...
+      - run: go test -v -tags synapse_blacklist,msc2403,msc2946,msc3083 ./tests/...
         env:
           COMPLEMENT_BASE_IMAGE: complement-synapse:latest
         working-directory: complement
```
CHANGES.md (189 changes)
@@ -1,184 +1,3 @@

Synapse 1.49.0rc1 (2021-12-07)
==============================

We've decided to move the existing, somewhat stagnant pages from the GitHub wiki
to the [documentation website](https://matrix-org.github.io/synapse/latest/).

This was done for two reasons. The first was to ensure that changes are checked by
multiple authors before being committed (everyone makes mistakes!) and the second
was visibility of the documentation. Not everyone knows that Synapse has some very
useful information hidden away in its GitHub wiki pages. Bringing them to the
documentation website should help with visibility, as well as keep all Synapse documentation
in one, easily-searchable location.

Note that contributions to the documentation website happen through [GitHub pull
requests](https://github.com/matrix-org/synapse/pulls). Please visit [#synapse-dev:matrix.org](https://matrix.to/#/#synapse-dev:matrix.org)
if you need help with the process!


Features
--------

- Add [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) experimental client and federation API endpoints to get the closest event to a given timestamp. ([\#9445](https://github.com/matrix-org/synapse/issues/9445))
- Include bundled relation aggregations during a limited `/sync` request and `/relations` request, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675). ([\#11284](https://github.com/matrix-org/synapse/issues/11284), [\#11478](https://github.com/matrix-org/synapse/issues/11478))
- Add plugin support for controlling database background updates. ([\#11306](https://github.com/matrix-org/synapse/issues/11306), [\#11475](https://github.com/matrix-org/synapse/issues/11475), [\#11479](https://github.com/matrix-org/synapse/issues/11479))
- Support the stable API endpoints for [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946): the room `/hierarchy` endpoint. ([\#11329](https://github.com/matrix-org/synapse/issues/11329))
- Add admin API to get some information about federation status with remote servers. ([\#11407](https://github.com/matrix-org/synapse/issues/11407))
- Support expiry of refresh tokens and expiry of the overall session when refresh tokens are in use. ([\#11425](https://github.com/matrix-org/synapse/issues/11425))
- Stabilise support for [MSC2918](https://github.com/matrix-org/matrix-doc/blob/main/proposals/2918-refreshtokens.md#msc2918-refresh-tokens) refresh tokens as they have now been merged into the Matrix specification. ([\#11435](https://github.com/matrix-org/synapse/issues/11435), [\#11522](https://github.com/matrix-org/synapse/issues/11522))
- Update [MSC2918 refresh token](https://github.com/matrix-org/matrix-doc/blob/main/proposals/2918-refreshtokens.md#msc2918-refresh-tokens) support to conform to the latest revision: accept the `refresh_tokens` parameter in the request body rather than in the URL parameters. ([\#11430](https://github.com/matrix-org/synapse/issues/11430))
- Support configuring the lifetime of non-refreshable access tokens separately to refreshable access tokens. ([\#11445](https://github.com/matrix-org/synapse/issues/11445))
- Expose `synapse_homeserver` and `synapse_worker` commands as entry points to run Synapse's main process and worker processes, respectively. Contributed by @Ma27. ([\#11449](https://github.com/matrix-org/synapse/issues/11449))
- `synctl stop` will now wait for Synapse to exit before returning. ([\#11459](https://github.com/matrix-org/synapse/issues/11459), [\#11490](https://github.com/matrix-org/synapse/issues/11490))
- Extend the "delete room" admin API to work correctly on rooms which have previously been partially deleted. ([\#11523](https://github.com/matrix-org/synapse/issues/11523))
- Add support for the `/_matrix/client/v3/login/sso/redirect/{idpId}` API from Matrix v1.1. This endpoint was overlooked when support for v3 endpoints was added in Synapse 1.48.0rc1. ([\#11451](https://github.com/matrix-org/synapse/issues/11451))


Bugfixes
--------

- Fix using [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) batch sending in combination with event persistence workers. Contributed by @tulir at Beeper. ([\#11220](https://github.com/matrix-org/synapse/issues/11220))
- Fix a long-standing bug where all requests that read events from the database could get stuck as a result of losing the database connection, properly this time. Also fix a race condition introduced in the previous insufficient fix in Synapse 1.47.0. ([\#11376](https://github.com/matrix-org/synapse/issues/11376))
- The `/send_join` response now includes the stable `event` field instead of the unstable field from [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083). ([\#11413](https://github.com/matrix-org/synapse/issues/11413))
- Fix a bug introduced in Synapse 1.47.0 where `send_join` could fail due to an outdated `ijson` version. ([\#11439](https://github.com/matrix-org/synapse/issues/11439), [\#11441](https://github.com/matrix-org/synapse/issues/11441), [\#11460](https://github.com/matrix-org/synapse/issues/11460))
- Fix a bug introduced in Synapse 1.36.0 which could cause problems fetching event-signing keys from trusted key servers. ([\#11440](https://github.com/matrix-org/synapse/issues/11440))
- Fix a bug introduced in Synapse 1.47.1 where the media repository would fail to work if the media store path contained any symbolic links. ([\#11446](https://github.com/matrix-org/synapse/issues/11446))
- Fix an `LruCache` corruption bug, introduced in Synapse 1.38.0, that would cause certain requests to fail until the next Synapse restart. ([\#11454](https://github.com/matrix-org/synapse/issues/11454))
- Fix a long-standing bug where invites from ignored users were included in incremental syncs. ([\#11511](https://github.com/matrix-org/synapse/issues/11511))
- Fix a regression in Synapse 1.48.0 where presence workers would not clear their presence updates over replication on shutdown. ([\#11518](https://github.com/matrix-org/synapse/issues/11518))
- Fix a regression in Synapse 1.48.0 where the module API's `looping_background_call` method would spam errors to the logs when given a non-async function. ([\#11524](https://github.com/matrix-org/synapse/issues/11524))


Updates to the Docker image
---------------------------

- Update `Dockerfile-workers` to healthcheck all workers in the container. ([\#11429](https://github.com/matrix-org/synapse/issues/11429))


Improved Documentation
----------------------

- Update the media repository documentation. ([\#11415](https://github.com/matrix-org/synapse/issues/11415))
- Update section about backward extremities in the room DAG concepts doc to correct the misconception about backward extremities indicating whether we have fetched an event's `prev_events`. ([\#11469](https://github.com/matrix-org/synapse/issues/11469))


Internal Changes
----------------

- Add `Final` annotation to string constants in `synapse.api.constants` so that they get typed as `Literal`s. ([\#11356](https://github.com/matrix-org/synapse/issues/11356))
- Add a check to ensure that users cannot start the Synapse master process when `worker_app` is set. ([\#11416](https://github.com/matrix-org/synapse/issues/11416))
- Add a note about postgres memory management and hugepages to postgres doc. ([\#11467](https://github.com/matrix-org/synapse/issues/11467))
- Add missing type hints to `synapse.config` module. ([\#11465](https://github.com/matrix-org/synapse/issues/11465))
- Add missing type hints to `synapse.federation`. ([\#11483](https://github.com/matrix-org/synapse/issues/11483))
- Add type annotations to `tests.storage.test_appservice`. ([\#11488](https://github.com/matrix-org/synapse/issues/11488), [\#11492](https://github.com/matrix-org/synapse/issues/11492))
- Add type annotations to some of the configuration surrounding refresh tokens. ([\#11428](https://github.com/matrix-org/synapse/issues/11428))
- Add type hints to `synapse/tests/rest/admin`. ([\#11501](https://github.com/matrix-org/synapse/issues/11501))
- Add type hints to storage classes. ([\#11411](https://github.com/matrix-org/synapse/issues/11411))
- Add wiki pages to documentation website. ([\#11402](https://github.com/matrix-org/synapse/issues/11402))
- Clean up `tests.storage.test_main` to remove use of legacy code. ([\#11493](https://github.com/matrix-org/synapse/issues/11493))
- Clean up `tests.test_visibility` to remove legacy code. ([\#11495](https://github.com/matrix-org/synapse/issues/11495))
- Convert status codes to `HTTPStatus` in `synapse.rest.admin`. ([\#11452](https://github.com/matrix-org/synapse/issues/11452), [\#11455](https://github.com/matrix-org/synapse/issues/11455))
- Extend the `scripts-dev/sign_json` script to support signing events. ([\#11486](https://github.com/matrix-org/synapse/issues/11486))
- Improve internal types in push code. ([\#11409](https://github.com/matrix-org/synapse/issues/11409))
- Improve type annotations in `synapse.module_api`. ([\#11029](https://github.com/matrix-org/synapse/issues/11029))
- Improve type hints for `LruCache`. ([\#11453](https://github.com/matrix-org/synapse/issues/11453))
- Preparation for database schema simplifications: disambiguate queries on `state_key`. ([\#11497](https://github.com/matrix-org/synapse/issues/11497))
- Refactor `backfilled` into specific behavior function arguments (`_persist_events_and_state_updates` and downstream calls). ([\#11417](https://github.com/matrix-org/synapse/issues/11417))
- Refactor `get_version_string` to fix-up types and duplicated code. ([\#11468](https://github.com/matrix-org/synapse/issues/11468))
- Refactor various parts of the `/sync` handler. ([\#11494](https://github.com/matrix-org/synapse/issues/11494), [\#11515](https://github.com/matrix-org/synapse/issues/11515))
- Remove unnecessary `json.dumps` from `tests.rest.admin`. ([\#11461](https://github.com/matrix-org/synapse/issues/11461))
- Save the OpenID Connect session ID on login. ([\#11482](https://github.com/matrix-org/synapse/issues/11482))
- Update and clean up recently ported documentation pages. ([\#11466](https://github.com/matrix-org/synapse/issues/11466))


Synapse 1.48.0 (2021-11-30)
===========================

This release removes support for the long-deprecated `trust_identity_server_for_password_resets` configuration flag.

This release also fixes some performance issues with some background database updates introduced in Synapse 1.47.0.

No significant changes since 1.48.0rc1.


Synapse 1.48.0rc1 (2021-11-25)
==============================

Features
--------

- Experimental support for the thread relation defined in [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440). ([\#11161](https://github.com/matrix-org/synapse/issues/11161))
- Support filtering by relation senders & types per [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440). ([\#11236](https://github.com/matrix-org/synapse/issues/11236))
- Add support for the `/_matrix/client/v3` and `/_matrix/media/v3` APIs from Matrix v1.1. ([\#11318](https://github.com/matrix-org/synapse/issues/11318), [\#11371](https://github.com/matrix-org/synapse/issues/11371))
- Support the stable version of [MSC2778](https://github.com/matrix-org/matrix-doc/pull/2778): the `m.login.application_service` login type. Contributed by @tulir. ([\#11335](https://github.com/matrix-org/synapse/issues/11335))
- Add a new version of the delete room admin API `DELETE /_synapse/admin/v2/rooms/<room_id>` to run it in the background. Contributed by @dklimpel. ([\#11223](https://github.com/matrix-org/synapse/issues/11223))
- Allow the admin [Delete Room API](https://matrix-org.github.io/synapse/latest/admin_api/rooms.html#delete-room-api) to block a room without the need to join it. ([\#11228](https://github.com/matrix-org/synapse/issues/11228))
- Add an admin API to un-shadow-ban a user. ([\#11347](https://github.com/matrix-org/synapse/issues/11347))
- Add an admin API to run background database schema updates. ([\#11352](https://github.com/matrix-org/synapse/issues/11352))
- Add an admin API for blocking a room. ([\#11324](https://github.com/matrix-org/synapse/issues/11324))
- Update the JWT login type to support a custom `sub` claim. ([\#11361](https://github.com/matrix-org/synapse/issues/11361))
- Store and allow querying of arbitrary event relations. ([\#11391](https://github.com/matrix-org/synapse/issues/11391))


Bugfixes
--------

- Fix a long-standing bug wherein display names or avatar URLs containing null bytes caused an internal server error when stored in the DB. ([\#11230](https://github.com/matrix-org/synapse/issues/11230))
- Prevent [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) historical state events from being pushed to an application service via `/transactions`. ([\#11265](https://github.com/matrix-org/synapse/issues/11265))
- Fix a long-standing bug where uploading extremely thin images (e.g. 1000x1) would fail. Contributed by @Neeeflix. ([\#11288](https://github.com/matrix-org/synapse/issues/11288))
- Fix a bug, introduced in Synapse 1.46.0, which caused the `check_3pid_auth` and `on_logged_out` callbacks in legacy password authentication provider modules to not be registered. Modules using the generic module interface were not affected. ([\#11340](https://github.com/matrix-org/synapse/issues/11340))
- Fix a bug introduced in 1.41.0 where space hierarchy responses would be incorrectly reused if multiple users were to make the same request at the same time. ([\#11355](https://github.com/matrix-org/synapse/issues/11355))
- Fix a bug introduced in 1.45.0 where the `read_templates` method of the module API would error. ([\#11377](https://github.com/matrix-org/synapse/issues/11377))
- Fix an issue introduced in 1.47.0 which prevented servers re-joining rooms they had previously left, if their signing keys were replaced. ([\#11379](https://github.com/matrix-org/synapse/issues/11379))
- Fix a bug introduced in 1.13.0 where creating and publishing a room could cause errors if `room_list_publication_rules` is configured. ([\#11392](https://github.com/matrix-org/synapse/issues/11392))
- Improve performance of various background database updates. ([\#11421](https://github.com/matrix-org/synapse/issues/11421), [\#11422](https://github.com/matrix-org/synapse/issues/11422))


Improved Documentation
----------------------

- Suggest users of the Debian packages add configuration to `/etc/matrix-synapse/conf.d/` to prevent, upon upgrade, being asked to choose between their configuration and the maintainer's. ([\#11281](https://github.com/matrix-org/synapse/issues/11281))
- Fix typos in the documentation for the `username_available` admin API. Contributed by Stanislav Motylkov. ([\#11286](https://github.com/matrix-org/synapse/issues/11286))
- Add Single Sign-On, SAML and CAS pages to the documentation. ([\#11298](https://github.com/matrix-org/synapse/issues/11298))
- Change the two-word 'Home server' to the single word 'homeserver' in documentation. ([\#11320](https://github.com/matrix-org/synapse/issues/11320))
- Fix missing quotes for wildcard domains in `federation_certificate_verification_whitelist`. ([\#11381](https://github.com/matrix-org/synapse/issues/11381))


Deprecations and Removals
-------------------------

- Remove deprecated `trust_identity_server_for_password_resets` configuration flag. ([\#11333](https://github.com/matrix-org/synapse/issues/11333), [\#11395](https://github.com/matrix-org/synapse/issues/11395))


Internal Changes
----------------

- Add type annotations to `synapse.metrics`. ([\#10847](https://github.com/matrix-org/synapse/issues/10847))
- Split out federated PDU retrieval function into a non-cached version. ([\#11242](https://github.com/matrix-org/synapse/issues/11242))
- Clean up code relating to to-device messages and sending ephemeral events to application services. ([\#11247](https://github.com/matrix-org/synapse/issues/11247))
- Fix a small typo in the error response when a relation type other than 'm.annotation' is passed to `GET /rooms/{room_id}/aggregations/{event_id}`. ([\#11278](https://github.com/matrix-org/synapse/issues/11278))
- Drop unused database tables `room_stats_historical` and `user_stats_historical`. ([\#11280](https://github.com/matrix-org/synapse/issues/11280))
- Require all files in `synapse/` and `tests/` to pass mypy unless specifically excluded. ([\#11282](https://github.com/matrix-org/synapse/issues/11282), [\#11285](https://github.com/matrix-org/synapse/issues/11285), [\#11359](https://github.com/matrix-org/synapse/issues/11359))
- Add missing type hints to `synapse.app`. ([\#11287](https://github.com/matrix-org/synapse/issues/11287))
- Remove unused parameters on `FederationEventHandler._check_event_auth`. ([\#11292](https://github.com/matrix-org/synapse/issues/11292))
- Add type hints to `synapse._scripts`. ([\#11297](https://github.com/matrix-org/synapse/issues/11297))
- Fix an issue which prevented the `remove_deleted_devices_from_device_inbox` background database schema update from running when updating from a recent Synapse version. ([\#11303](https://github.com/matrix-org/synapse/issues/11303))
- Add type hints to storage classes. ([\#11307](https://github.com/matrix-org/synapse/issues/11307), [\#11310](https://github.com/matrix-org/synapse/issues/11310), [\#11311](https://github.com/matrix-org/synapse/issues/11311), [\#11312](https://github.com/matrix-org/synapse/issues/11312), [\#11313](https://github.com/matrix-org/synapse/issues/11313), [\#11314](https://github.com/matrix-org/synapse/issues/11314), [\#11316](https://github.com/matrix-org/synapse/issues/11316), [\#11322](https://github.com/matrix-org/synapse/issues/11322), [\#11332](https://github.com/matrix-org/synapse/issues/11332), [\#11339](https://github.com/matrix-org/synapse/issues/11339), [\#11342](https://github.com/matrix-org/synapse/issues/11342))
- Add type hints to `synapse.util`. ([\#11321](https://github.com/matrix-org/synapse/issues/11321), [\#11328](https://github.com/matrix-org/synapse/issues/11328))
- Improve type annotations in Synapse's test suite. ([\#11323](https://github.com/matrix-org/synapse/issues/11323), [\#11330](https://github.com/matrix-org/synapse/issues/11330))
- Test that room alias deletion works as intended. ([\#11327](https://github.com/matrix-org/synapse/issues/11327))
- Add type annotations for some methods and properties in the module API. ([\#11341](https://github.com/matrix-org/synapse/issues/11341))
- Fix running `scripts-dev/complement.sh`, which was broken in v1.47.0rc1. ([\#11368](https://github.com/matrix-org/synapse/issues/11368))
- Rename internal functions for token generation to better reflect what they do. ([\#11369](https://github.com/matrix-org/synapse/issues/11369), [\#11370](https://github.com/matrix-org/synapse/issues/11370))
- Add type hints to configuration classes. ([\#11377](https://github.com/matrix-org/synapse/issues/11377))
- Publish a `develop` image to Docker Hub. ([\#11380](https://github.com/matrix-org/synapse/issues/11380))
- Keep fallback key marked as used if it's re-uploaded. ([\#11382](https://github.com/matrix-org/synapse/issues/11382))
- Use `auto_attribs` on the `attrs` class `RefreshTokenLookupResult`. ([\#11386](https://github.com/matrix-org/synapse/issues/11386))
- Rename unstable `access_token_lifetime` configuration option to `refreshable_access_token_lifetime` to make it clear it only concerns refreshable access tokens. ([\#11388](https://github.com/matrix-org/synapse/issues/11388))
- Do not run the broken MSC2716 tests when running `scripts-dev/complement.sh`. ([\#11389](https://github.com/matrix-org/synapse/issues/11389))
- Remove dead code from supporting ACME. ([\#11393](https://github.com/matrix-org/synapse/issues/11393))
- Refactor including the bundled relations when serializing an event. ([\#11408](https://github.com/matrix-org/synapse/issues/11408))


Synapse 1.47.1 (2021-11-23)
===========================
```diff
@@ -8898,14 +8717,14 @@ General:

 Federation:

-- Add key distribution mechanisms for fetching public keys of unavailable remote homeservers. See [Retrieving Server Keys](https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys) in the spec.
+- Add key distribution mechanisms for fetching public keys of unavailable remote home servers. See [Retrieving Server Keys](https://github.com/matrix-org/matrix-doc/blob/6f2698/specification/30_server_server_api.rst#retrieving-server-keys) in the spec.

 Configuration:

 - Add support for multiple config files.
 - Add support for dictionaries in config files.
 - Remove support for specifying config options on the command line, except for:
-  - `--daemonize` - Daemonize the homeserver.
+  - `--daemonize` - Daemonize the home server.
   - `--manhole` - Turn on the twisted telnet manhole service on the given port.
   - `--database-path` - The path to a sqlite database to use.
   - `--verbose` - The verbosity level.
@@ -9110,7 +8929,7 @@ This version adds support for using a TURN server. See docs/turn-howto.rst on ho

 Homeserver:

 - Add support for redaction of messages.
-- Fix bug where inviting a user on a remote homeserver could take up to 20-30s.
+- Fix bug where inviting a user on a remote home server could take up to 20-30s.
 - Implement a get current room state API.
 - Add support specifying and retrieving turn server configuration.

@@ -9200,7 +9019,7 @@ Changes in synapse 0.2.3 (2014-09-12)

 Homeserver:

-- Fix bug where we stopped sending events to remote homeservers if a user from that homeserver left, even if there were some still in the room.
+- Fix bug where we stopped sending events to remote home servers if a user from that home server left, even if there were some still in the room.
 - Fix bugs in the state conflict resolution where it was incorrectly rejecting events.

 Webclient:
```
book.toml (12 changes)
```diff
@@ -34,6 +34,14 @@ additional-css = [
     "docs/website_files/table-of-contents.css",
     "docs/website_files/remove-nav-buttons.css",
     "docs/website_files/indent-section-headers.css",
+    "docs/website_files/version-picker.css",
 ]
-additional-js = ["docs/website_files/table-of-contents.js"]
+additional-js = [
+    "docs/website_files/table-of-contents.js",
+    "docs/website_files/version-picker.js",
+    "docs/website_files/version.js",
+]
 theme = "docs/website_files/theme"
+
+[preprocessor.schema_versions]
+command = "./scripts-dev/schema_versions.py"
```
```diff
@@ -1 +0,0 @@
-Send and handle cross-signing messages using the stable prefix.
@@ -1 +0,0 @@
-A test helper (`wait_for_background_updates`) no longer depends on classes defining a `store` property.
@@ -1 +0,0 @@
-Add an admin API endpoint to force a local user to leave all non-public rooms in a space.
```
debian/changelog (vendored, 18 changes)
```diff
@@ -1,21 +1,3 @@
-matrix-synapse-py3 (1.49.0~rc1) stable; urgency=medium
-
-  * New synapse release 1.49.0~rc1.
-
- -- Synapse Packaging team <packages@matrix.org>  Tue, 07 Dec 2021 13:52:21 +0000
-
-matrix-synapse-py3 (1.48.0) stable; urgency=medium
-
-  * New synapse release 1.48.0.
-
- -- Synapse Packaging team <packages@matrix.org>  Tue, 30 Nov 2021 11:24:15 +0000
-
-matrix-synapse-py3 (1.48.0~rc1) stable; urgency=medium
-
-  * New synapse release 1.48.0~rc1.
-
- -- Synapse Packaging team <packages@matrix.org>  Thu, 25 Nov 2021 15:56:03 +0000
-
 matrix-synapse-py3 (1.47.1) stable; urgency=medium

   * New synapse release 1.47.1.
```
```diff
@@ -21,6 +21,3 @@ VOLUME ["/data"]
 # files to run the desired worker configuration. Will start supervisord.
 COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
 ENTRYPOINT ["/configure_workers_and_start.py"]
-
-HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
-    CMD /bin/sh /healthcheck.sh
```
```diff
@@ -1,6 +0,0 @@
-#!/bin/sh
-# This healthcheck script is designed to return OK when every
-# host involved returns OK
-{%- for healthcheck_url in healthcheck_urls %}
-curl -fSs {{ healthcheck_url }} || exit 1
-{%- endfor %}
```
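To see what the deleted template produced, here is a minimal sketch rendering it with `jinja2` directly; the two URLs are made-up examples standing in for the main process and one worker:

```python
from jinja2 import Template

# The deleted healthcheck template, inlined for illustration.
TEMPLATE = """\
#!/bin/sh
# This healthcheck script is designed to return OK when every
# host involved returns OK
{%- for healthcheck_url in healthcheck_urls %}
curl -fSs {{ healthcheck_url }} || exit 1
{%- endfor %}
"""

# Hypothetical URLs, in the shape built by configure_workers_and_start.py.
urls = ["http://localhost:8080/health", "http://localhost:18009/health"]
print(Template(TEMPLATE).render(healthcheck_urls=urls))
```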
```diff
@@ -148,6 +148,14 @@ bcrypt_rounds: 12
 allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
 enable_group_creation: true

+# The list of identity servers trusted to verify third party
+# identifiers by this server.
+#
+# Also defines the ID server which will be called when an account is
+# deactivated (one will be picked arbitrarily).
+trusted_third_party_id_servers:
+    - matrix.org
+    - vector.im

 ## Metrics ###
```
```diff
@@ -48,7 +48,7 @@ WORKERS_CONFIG = {
         "app": "synapse.app.user_dir",
         "listener_resources": ["client"],
         "endpoint_patterns": [
-            "^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$"
+            "^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$"
         ],
         "shared_extra_conf": {"update_user_directory": False},
         "worker_extra_conf": "",
@@ -85,10 +85,10 @@ WORKERS_CONFIG = {
         "app": "synapse.app.generic_worker",
         "listener_resources": ["client"],
         "endpoint_patterns": [
-            "^/_matrix/client/(v2_alpha|r0|v3)/sync$",
-            "^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$",
-            "^/_matrix/client/(api/v1|r0|v3)/initialSync$",
-            "^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$",
+            "^/_matrix/client/(v2_alpha|r0)/sync$",
+            "^/_matrix/client/(api/v1|v2_alpha|r0)/events$",
+            "^/_matrix/client/(api/v1|r0)/initialSync$",
+            "^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$",
         ],
         "shared_extra_conf": {},
         "worker_extra_conf": "",
@@ -146,11 +146,11 @@ WORKERS_CONFIG = {
         "app": "synapse.app.generic_worker",
         "listener_resources": ["client"],
         "endpoint_patterns": [
-            "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact",
-            "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send",
-            "^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
-            "^/_matrix/client/(api/v1|r0|v3|unstable)/join/",
-            "^/_matrix/client/(api/v1|r0|v3|unstable)/profile/",
+            "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact",
+            "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
+            "^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
+            "^/_matrix/client/(api/v1|r0|unstable)/join/",
+            "^/_matrix/client/(api/v1|r0|unstable)/profile/",
         ],
         "shared_extra_conf": {},
         "worker_extra_conf": "",
@@ -158,7 +158,7 @@ WORKERS_CONFIG = {
     "frontend_proxy": {
         "app": "synapse.app.frontend_proxy",
         "listener_resources": ["client", "replication"],
-        "endpoint_patterns": ["^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload"],
+        "endpoint_patterns": ["^/_matrix/client/(api/v1|r0|unstable)/keys/upload"],
         "shared_extra_conf": {},
         "worker_extra_conf": (
             "worker_main_http_uri: http://127.0.0.1:%d"
```
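The only change in each block above is the `v3` alternative in the URL regexes. A quick self-contained check of what that buys (pattern strings copied from the hunks):

```python
import re

old = re.compile(r"^/_matrix/client/(api/v1|r0|unstable)/keys/upload")
new = re.compile(r"^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload")

path = "/_matrix/client/v3/keys/upload"
print(bool(old.match(path)))  # False: the legacy pattern misses Matrix v1.1 paths
print(bool(new.match(path)))  # True: the v3 alternative routes them to the worker
```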
```diff
@@ -474,16 +474,10 @@ def generate_worker_files(environ, config_path: str, data_dir: str):

     # Determine the load-balancing upstreams to configure
     nginx_upstream_config = ""
-
-    # At the same time, prepare a list of internal endpoints to healthcheck
-    # starting with the main process which exists even if no workers do.
-    healthcheck_urls = ["http://localhost:8080/health"]

     for upstream_worker_type, upstream_worker_ports in nginx_upstreams.items():
         body = ""
         for port in upstream_worker_ports:
             body += "    server localhost:%d;\n" % (port,)
-            healthcheck_urls.append("http://localhost:%d/health" % (port,))

         # Add to the list of configured upstreams
         nginx_upstream_config += NGINX_UPSTREAM_CONFIG_BLOCK.format(
@@ -516,13 +510,6 @@ def generate_worker_files(environ, config_path: str, data_dir: str):
         worker_config=supervisord_config,
     )

-    # healthcheck config
-    convert(
-        "/conf/healthcheck.sh.j2",
-        "/healthcheck.sh",
-        healthcheck_urls=healthcheck_urls,
-    )

     # Ensure the logging directory exists
     log_dir = data_dir + "/logs"
     if not os.path.exists(log_dir):
```
````diff
@@ -50,10 +50,8 @@ build the documentation with:
 mdbook build
 ```

-The rendered contents will be outputted to a new `book/` directory at the root of the repository. Please note that
-index.html is not built by default, it is created by copying over the file `welcome_and_overview.html` to `index.html`
-during deployment. Thus, when running `mdbook serve` locally the book will initially show a 404 in place of the index
-due to the above. Do not be alarmed!
+The rendered contents will be outputted to a new `book/` directory at the root of the repository. You can
+browse the book by opening `book/index.html` in a web browser.

 You can also have mdbook host the docs on a local webserver with hot-reload functionality via:
````
```diff
@@ -23,10 +23,10 @@
   - [Structured Logging](structured_logging.md)
   - [Templates](templates.md)
   - [User Authentication](usage/configuration/user_authentication/README.md)
-    - [Single-Sign On](usage/configuration/user_authentication/single_sign_on/README.md)
+    - [Single-Sign On]()
       - [OpenID Connect](openid.md)
-      - [SAML](usage/configuration/user_authentication/single_sign_on/saml.md)
-      - [CAS](usage/configuration/user_authentication/single_sign_on/cas.md)
+      - [SAML]()
+      - [CAS]()
       - [SSO Mapping Providers](sso_mapping_providers.md)
   - [Password Auth Providers](password_auth_providers.md)
   - [JSON Web Tokens](jwt.md)
@@ -44,7 +44,6 @@
     - [Presence router callbacks](modules/presence_router_callbacks.md)
     - [Account validity callbacks](modules/account_validity_callbacks.md)
     - [Password auth provider callbacks](modules/password_auth_provider_callbacks.md)
-    - [Background update controller callbacks](modules/background_update_controller_callbacks.md)
     - [Porting a legacy module to the new interface](modules/porting_legacy_module.md)
 - [Workers](workers.md)
   - [Using `synctl` with Workers](synctl_workers.md)
@@ -61,20 +60,13 @@
     - [Registration Tokens](usage/administration/admin_api/registration_tokens.md)
     - [Manipulate Room Membership](admin_api/room_membership.md)
     - [Rooms](admin_api/rooms.md)
-    - [Spaces](usage/administration/admin_api/spaces.md)
     - [Server Notices](admin_api/server_notices.md)
     - [Statistics](admin_api/statistics.md)
     - [Users](admin_api/user_admin_api.md)
     - [Server Version](admin_api/version_api.md)
-    - [Federation](usage/administration/admin_api/federation.md)
 - [Manhole](manhole.md)
 - [Monitoring](metrics-howto.md)
-  - [Understanding Synapse Through Grafana Graphs](usage/administration/understanding_synapse_through_grafana_graphs.md)
-  - [Useful SQL for Admins](usage/administration/useful_sql_for_admins.md)
-  - [Database Maintenance Tools](usage/administration/database_maintenance_tools.md)
-  - [State Groups](usage/administration/state_groups.md)
-  - [Request log format](usage/administration/request_log.md)
-  - [Admin FAQ](usage/administration/admin_faq.md)
+- [Scripts]()

 # Development
@@ -102,4 +94,3 @@

 # Other
 - [Dependency Deprecation Policy](deprecation_policy.md)
 - [Running Synapse on a Single-Board Computer](other/running_synapse_on_single_board_computers.md)
```
```diff
@@ -70,8 +70,6 @@ This API returns a JSON body like the following:

 The status will be one of `active`, `complete`, or `failed`.

-If `status` is `failed` there will be a string `error` with the error message.

 ## Reclaim disk space (Postgres)

 To reclaim the disk space and return it to the operating system, you need to run
```
```diff
@@ -3,11 +3,7 @@
 - [Room Details API](#room-details-api)
 - [Room Members API](#room-members-api)
 - [Room State API](#room-state-api)
-- [Block Room API](#block-room-api)
 - [Delete Room API](#delete-room-api)
-  * [Version 1 (old version)](#version-1-old-version)
-  * [Version 2 (new version)](#version-2-new-version)
-  * [Status of deleting rooms](#status-of-deleting-rooms)
   * [Undoing room shutdowns](#undoing-room-shutdowns)
 - [Make Room Admin API](#make-room-admin-api)
 - [Forward Extremities Admin API](#forward-extremities-admin-api)
```
````diff
@@ -387,83 +383,6 @@ A response body like the following is returned:
 }
 ```

-# Block Room API
-The Block Room admin API allows server admins to block and unblock rooms,
-and query to see if a given room is blocked.
-This API can be used to pre-emptively block a room, even if it's unknown to this
-homeserver. Users will be prevented from joining a blocked room.
-
-## Block or unblock a room
-
-The API is:
-
-```
-PUT /_synapse/admin/v1/rooms/<room_id>/block
-```
-
-with a body of:
-
-```json
-{
-    "block": true
-}
-```
-
-A response body like the following is returned:
-
-```json
-{
-    "block": true
-}
-```
-
-**Parameters**
-
-The following parameters should be set in the URL:
-
-- `room_id` - The ID of the room.
-
-The following JSON body parameters are available:
-
-- `block` - If `true` the room will be blocked and if `false` the room will be unblocked.
-
-**Response**
-
-The following fields are possible in the JSON response body:
-
-- `block` - A boolean. `true` if the room is blocked, otherwise `false`
-
-## Get block status
-
-The API is:
-
-```
-GET /_synapse/admin/v1/rooms/<room_id>/block
-```
-
-A response body like the following is returned:
-
-```json
-{
-    "block": true,
-    "user_id": "<user_id>"
-}
-```
-
-**Parameters**
-
-The following parameters should be set in the URL:
-
-- `room_id` - The ID of the room.
-
-**Response**
-
-The following fields are possible in the JSON response body:
-
-- `block` - A boolean. `true` if the room is blocked, otherwise `false`
-- `user_id` - An optional string. If the room is blocked (`block` is `true`) shows
-  the user who added the room to the blocking list. Otherwise it is not displayed.
````
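As a usage illustration for the section above (not part of the docs diff): blocking a room and checking its status with Python's `requests`; the homeserver URL and admin token are placeholders.

```python
import requests

BASE = "https://homeserver.example.com"  # placeholder homeserver
HEADERS = {"Authorization": "Bearer <admin_access_token>"}  # placeholder token
room_id = "!badroom:example.com"

# Block the room (PUT .../block with {"block": true}).
resp = requests.put(f"{BASE}/_synapse/admin/v1/rooms/{room_id}/block",
                    headers=HEADERS, json={"block": True})
print(resp.json())  # e.g. {"block": true}

# Query the block status (GET .../block).
resp = requests.get(f"{BASE}/_synapse/admin/v1/rooms/{room_id}/block",
                    headers=HEADERS)
print(resp.json())  # e.g. {"block": true, "user_id": "@admin:example.com"}
```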
# Delete Room API

The Delete Room admin API allows server admins to remove rooms from the server

````diff
@@ -477,33 +396,18 @@ The new room will be created with the user specified by the `new_room_user_id` p
 as room administrator and will contain a message explaining what happened. Users invited
 to the new room will have power level `-10` by default, and thus be unable to speak.

-If `block` is `true`, users will be prevented from joining the old room.
-This option can in [Version 1](#version-1-old-version) also be used to pre-emptively
-block a room, even if it's unknown to this homeserver. In this case, the room will be
-blocked, and no further action will be taken. If `block` is `false`, attempting to
-delete an unknown room is invalid and will be rejected as a bad request.
+If `block` is `True` it prevents new joins to the old room.

-If `purge` is `true` (the default), all traces of the old room will
-be removed from your database after removing all local users. If you do not want
-this to happen, set `purge` to `false`.
+This API will remove all trace of the old room from your database after removing
+all local users.

-Depending on the amount of history being purged, a call to the API may take
+Depending on the amount of history being purged a call to the API may take
 several minutes or longer.

 The local server will only have the power to move local user and room aliases to
 the new room. Users on other servers will be unaffected.

-To use it, you will need to authenticate by providing an ``access_token`` for a
-server admin: see [Admin API](../usage/administration/admin_api).
-
-## Version 1 (old version)
-
-This version works synchronously. That means you only get the response once the server has
-finished the action, which may take a long time. If you request the same action
-a second time, and the server has not finished the first one, the second request will block.
-This is fixed in version 2 of this API. The parameters are the same in both APIs.
-This API will become deprecated in the future.

 The API is:

 ```
````
````diff
@@ -522,6 +426,9 @@ with a body of:
 }
 ```

+To use it, you will need to authenticate by providing an ``access_token`` for a
+server admin: see [Admin API](../usage/administration/admin_api).
+
 A response body like the following is returned:

 ```json
````
````diff
@@ -538,44 +445,6 @@ A response body like the following is returned:
 }
 ```

-The parameters and response values have the same format as
-[version 2](#version-2-new-version) of the API.
-
-## Version 2 (new version)
-
-**Note**: This API is new, experimental and "subject to change".
-
-This version works asynchronously, meaning you get the response from the server immediately
-while the server works on that task in the background. You can then request the status of the action
-to check if it has completed.
-
-The API is:
-
-```
-DELETE /_synapse/admin/v2/rooms/<room_id>
-```
-
-with a body of:
-
-```json
-{
-    "new_room_user_id": "@someuser:example.com",
-    "room_name": "Content Violation Notification",
-    "message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
-    "block": true,
-    "purge": true
-}
-```
-
-The API starts the shut down and purge running, and returns immediately with a JSON body with
-a purge id:
-
-```json
-{
-    "delete_id": "<opaque id>"
-}
-```

 **Parameters**

 The following parameters should be set in the URL:
````
```diff
@@ -595,10 +464,8 @@ The following JSON body parameters are available:
   `new_room_user_id` in the new room. Ideally this will clearly convey why the
   original room was shut down. Defaults to `Sharing illegal content on this server
   is not permitted and rooms in violation will be blocked.`
-* `block` - Optional. If set to `true`, this room will be added to a blocking list,
-  preventing future attempts to join the room. Rooms can be blocked
-  even if they're not yet known to the homeserver (only with
-  [Version 1](#version-1-old-version) of the API). Defaults to `false`.
+* `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing
+  future attempts to join the room. Defaults to `false`.
 * `purge` - Optional. If set to `true`, it will remove all traces of the room from your database.
   Defaults to `true`.
 * `force_purge` - Optional, and ignored unless `purge` is `true`. If set to `true`, it
```
````diff
@@ -608,124 +475,16 @@ The following JSON body parameters are available:

 The JSON body must not be empty. The body must be at least `{}`.

-## Status of deleting rooms
-
-**Note**: This API is new, experimental and "subject to change".
-
-It is possible to query the status of the background task for deleting rooms.
-The status can be queried up to 24 hours after completion of the task,
-or until Synapse is restarted (whichever happens first).
-
-### Query by `room_id`
-
-With this API you can get the status of all active deletion tasks, and all those completed in the last 24h,
-for the given `room_id`.
-
-The API is:
-
-```
-GET /_synapse/admin/v2/rooms/<room_id>/delete_status
-```
-
-A response body like the following is returned:
-
-```json
-{
-    "results": [
-        {
-            "delete_id": "delete_id1",
-            "status": "failed",
-            "error": "error message",
-            "shutdown_room": {
-                "kicked_users": [],
-                "failed_to_kick_users": [],
-                "local_aliases": [],
-                "new_room_id": null
-            }
-        }, {
-            "delete_id": "delete_id2",
-            "status": "purging",
-            "shutdown_room": {
-                "kicked_users": [
-                    "@foobar:example.com"
-                ],
-                "failed_to_kick_users": [],
-                "local_aliases": [
-                    "#badroom:example.com",
-                    "#evilsaloon:example.com"
-                ],
-                "new_room_id": "!newroomid:example.com"
-            }
-        }
-    ]
-}
-```
-
-**Parameters**
-
-The following parameters should be set in the URL:
-
-* `room_id` - The ID of the room.
-
-### Query by `delete_id`
-
-With this API you can get the status of one specific task by `delete_id`.
-
-The API is:
-
-```
-GET /_synapse/admin/v2/rooms/delete_status/<delete_id>
-```
-
-A response body like the following is returned:
-
-```json
-{
-    "status": "purging",
-    "shutdown_room": {
-        "kicked_users": [
-            "@foobar:example.com"
-        ],
-        "failed_to_kick_users": [],
-        "local_aliases": [
-            "#badroom:example.com",
-            "#evilsaloon:example.com"
-        ],
-        "new_room_id": "!newroomid:example.com"
-    }
-}
-```
-
-**Parameters**
-
-The following parameters should be set in the URL:
-
-* `delete_id` - The ID for this delete.
-
-### Response
+**Response**

 The following fields are returned in the JSON response body:

-- `results` - An array of objects, each containing information about one task.
-  This field is omitted from the result when you query by `delete_id`.
-  Task objects contain the following fields:
-  - `delete_id` - The ID for this purge if you query by `room_id`.
-  - `status` - The status will be one of:
-    - `shutting_down` - The process is removing users from the room.
-    - `purging` - The process is purging the room and event data from the database.
-    - `complete` - The process has completed successfully.
-    - `failed` - The process is aborted, an error has occurred.
-  - `error` - A string that shows an error message if `status` is `failed`.
-    Otherwise this field is hidden.
-  - `shutdown_room` - An object containing information about the result of shutting down the room.
-    *Note:* The result is shown after removing the room members.
-    The delete process can still be running. Please pay attention to the `status`.
-    - `kicked_users` - An array of users (`user_id`) that were kicked.
-    - `failed_to_kick_users` - An array of users (`user_id`) that were not kicked.
-    - `local_aliases` - An array of strings representing the local aliases that were
-      migrated from the old room to the new.
-    - `new_room_id` - A string representing the room ID of the new room, or `null` if
-      no such room was created.
+* `kicked_users` - An array of users (`user_id`) that were kicked.
+* `failed_to_kick_users` - An array of users (`user_id`) that were not kicked.
+* `local_aliases` - An array of strings representing the local aliases that were migrated from
+  the old room to the new.
+* `new_room_id` - A string representing the room ID of the new room.


 ## Undoing room deletions
````
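To make the asynchronous flow above concrete, here is an illustrative (not normative) Python sketch that submits a version 2 deletion and polls the status endpoint; the server URL and token are placeholders:

```python
import time
import requests

BASE = "https://homeserver.example.com"  # placeholder
HEADERS = {"Authorization": "Bearer <admin_access_token>"}  # placeholder
room_id = "!badroom:example.com"

# Start the background deletion; the response carries an opaque delete_id.
resp = requests.delete(f"{BASE}/_synapse/admin/v2/rooms/{room_id}",
                       headers=HEADERS, json={"block": True, "purge": True})
delete_id = resp.json()["delete_id"]

# Poll until the task leaves the shutting_down/purging states.
while True:
    status = requests.get(f"{BASE}/_synapse/admin/v2/rooms/delete_status/{delete_id}",
                          headers=HEADERS).json()
    if status["status"] in ("complete", "failed"):
        break
    time.sleep(5)
print(status)
```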
````diff
@@ -948,7 +948,7 @@ The following fields are returned in the JSON response body:
 See also the
 [Client-Server API Spec on pushers](https://matrix.org/docs/spec/client_server/latest#get-matrix-client-r0-pushers).

-## Controlling whether a user is shadow-banned
+## Shadow-banning users

 Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
 A shadow-banned user receives successful responses to their client-server API requests,
@@ -961,22 +961,16 @@ or broken behaviour for the client. A shadow-banned user will not receive any
 notification and it is generally more appropriate to ban or kick abusive users.
 A shadow-banned user will be unable to contact anyone on the server.

-To shadow-ban a user the API is:
+The API is:

 ```
 POST /_synapse/admin/v1/users/<user_id>/shadow_ban
 ```

-To un-shadow-ban a user the API is:
-
-```
-DELETE /_synapse/admin/v1/users/<user_id>/shadow_ban
-```
-
 To use it, you will need to authenticate by providing an `access_token` for a
 server admin: [Admin API](../usage/administration/admin_api)

-An empty JSON dict is returned in both cases.
+An empty JSON dict is returned.

 **Parameters**
@@ -1113,7 +1107,7 @@ This endpoint will work even if registration is disabled on the server, unlike
 The API is:

 ```
-GET /_synapse/admin/v1/username_available?username=$localpart
+POST /_synapse/admin/v1/username_availabile?username=$localpart
 ```

 The request and response format is the same as the [/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
````
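Similarly, an illustrative sketch (placeholders throughout) of the shadow-ban endpoint pair discussed above; the DELETE variant is the un-shadow-ban API added per [\#11347] in the changelog earlier in this diff:

```python
import requests

BASE = "https://homeserver.example.com"  # placeholder
HEADERS = {"Authorization": "Bearer <admin_access_token>"}  # placeholder
user_id = "@spammer:example.com"

# Shadow-ban the user; an empty JSON dict ({}) is returned on success.
requests.post(f"{BASE}/_synapse/admin/v1/users/{user_id}/shadow_ban", headers=HEADERS)

# Un-shadow-ban the user (this endpoint is absent on the older side of the diff).
requests.delete(f"{BASE}/_synapse/admin/v1/users/{user_id}/shadow_ban", headers=HEADERS)
```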
```diff
@@ -7,7 +7,7 @@

 ## Server to Server Stack

-To use the server to server stack, homeservers should only need to
+To use the server to server stack, home servers should only need to
 interact with the Messaging layer.

 The server to server side of things is designed into 4 distinct layers:
@@ -23,7 +23,7 @@ Server with a domain specific API.

 1. **Messaging Layer**

-   This is what the rest of the homeserver hits to send messages, join rooms,
+   This is what the rest of the Home Server hits to send messages, join rooms,
    etc. It also allows you to register callbacks for when it gets notified by
    lower levels that e.g. a new message has been received.

@@ -45,7 +45,7 @@ Server with a domain specific API.

    For incoming PDUs, it has to check the PDUs it references to see
    if we have missed any. If we have go and ask someone (another
-   homeserver) for it.
+   home server) for it.

 3. **Transaction Layer**
```
```diff
@@ -38,15 +38,16 @@ Most-recent-in-time events in the DAG which are not referenced by any other even
 The forward extremities of a room are used as the `prev_events` when the next event is sent.


-## Backward extremity
+## Backwards extremity

 The current marker of where we have backfilled up to and will generally be the
-`prev_events` of the oldest-in-time events we have in the DAG. This gives a starting point when
-backfilling history.
+oldest-in-time events we know of in the DAG.

-When we persist a non-outlier event, we clear it as a backward extremity and set
-all of its `prev_events` as the new backward extremities if they aren't already
-persisted in the `events` table.
+This is an event where we haven't fetched all of the `prev_events` for.
+
+Once we have fetched all of its `prev_events`, it's unmarked as a backwards
+extremity (although we may have formed new backwards extremities from the prev
+events during the backfilling process).


 ## Outliers
@@ -55,7 +56,8 @@ We mark an event as an `outlier` when we haven't figured out the state for the
 room at that point in the DAG yet.

 We won't *necessarily* have the `prev_events` of an `outlier` in the database,
-but it's entirely possible that we *might*.
+but it's entirely possible that we *might*. The status of whether we have all of
+the `prev_events` is marked as a [backwards extremity](#backwards-extremity).

 For example, when we fetch the event auth chain or state for a given event, we
 mark all of those claimed auth events as outliers because we haven't done the
```
```diff
@@ -22,9 +22,8 @@ will be removed in a future version of Synapse.

 The `token` field should include the JSON web token with the following claims:

-* A claim that encodes the local part of the user ID is required. By default,
-  the `sub` (subject) claim is used, or a custom claim can be set in the
-  configuration file.
+* The `sub` (subject) claim is required and should encode the local part of the
+  user ID.
 * The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
   claims are optional, but validated if present.
 * The issuer (`iss`) claim is optional, but required and validated if configured.
```
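For orientation (not part of the diff): a client-side sketch of logging in with a JWT that carries the `sub` claim described above. It assumes the `PyJWT` package, a shared-secret JWT configuration on the homeserver, and placeholder server details.

```python
import time

import jwt  # PyJWT
import requests

SECRET = "shared-secret-from-jwt-config"  # placeholder; must match the homeserver's JWT config
claims = {
    "sub": "alice",                 # local part of the user ID
    "exp": int(time.time()) + 300,  # optional, validated if present
    "iat": int(time.time()),        # optional, validated if present
}
token = jwt.encode(claims, SECRET, algorithm="HS256")

resp = requests.post(
    "https://homeserver.example.com/_matrix/client/r0/login",  # placeholder URL
    json={"type": "org.matrix.login.jwt", "token": token},
)
print(resp.json())
```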
````diff
@@ -2,80 +2,29 @@

 *Synapse implementation-specific details for the media repository*

-The media repository
- * stores avatars, attachments and their thumbnails for media uploaded by local
-   users.
- * caches avatars, attachments and their thumbnails for media uploaded by remote
-   users.
- * caches resources and thumbnails used for
-   [URL previews](development/url_previews.md).
+The media repository is where attachments and avatar photos are stored.
+It stores attachment content and thumbnails for media uploaded by local users.
+It caches attachment content and thumbnails for media uploaded by remote users.

-All media in Matrix can be identified by a unique
-[MXC URI](https://spec.matrix.org/latest/client-server-api/#matrix-content-mxc-uris),
-consisting of a server name and media ID:
-```
-mxc://<server-name>/<media-id>
-```
+## Storage

-## Local Media
-Synapse generates 24 character media IDs for content uploaded by local users.
-These media IDs consist of upper and lowercase letters and are case-sensitive.
-Other homeserver implementations may generate media IDs differently.
+Each item of media is assigned a `media_id` when it is uploaded.
+The `media_id` is a randomly chosen, URL safe 24 character string.

-Local media is recorded in the `local_media_repository` table, which includes
-metadata such as MIME types, upload times and file sizes.
-Note that this table is shared by the URL cache, which has a different media ID
-scheme.
+Metadata such as the MIME type, upload time and length are stored in the
+sqlite3 database indexed by `media_id`.

-### Paths
-A file with media ID `aabbcccccccccccccccccccc` and its `128x96` `image/jpeg`
-thumbnail, created by scaling, would be stored at:
-```
-local_content/aa/bb/cccccccccccccccccccc
-local_thumbnails/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale
-```
+Content is stored on the filesystem under a `"local_content"` directory.

-## Remote Media
-When media from a remote homeserver is requested from Synapse, it is assigned
-a local `filesystem_id`, with the same format as locally-generated media IDs,
-as described above.
+Thumbnails are stored under a `"local_thumbnails"` directory.

-A record of remote media is stored in the `remote_media_cache` table, which
-can be used to map remote MXC URIs (server names and media IDs) to local
-`filesystem_id`s.
+The item with `media_id` `"aabbccccccccdddddddddddd"` is stored under
+`"local_content/aa/bb/ccccccccdddddddddddd"`. Its thumbnail with width
+`128` and height `96` and type `"image/jpeg"` is stored under
+`"local_thumbnails/aa/bb/ccccccccdddddddddddd/128-96-image-jpeg"`

-### Paths
-A file from `matrix.org` with `filesystem_id` `aabbcccccccccccccccccccc` and its
-`128x96` `image/jpeg` thumbnail, created by scaling, would be stored at:
-```
-remote_content/matrix.org/aa/bb/cccccccccccccccccccc
-remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale
-```
-Older thumbnails may omit the thumbnailing method:
-```
-remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg
-```
-
-Note that `remote_thumbnail/` does not have an `s`.
-
-## URL Previews
-See [URL Previews](development/url_previews.md) for documentation on the URL preview
-process.
-
-When generating previews for URLs, Synapse may download and cache various
-resources, including images. These resources are assigned temporary media IDs
-of the form `yyyy-mm-dd_aaaaaaaaaaaaaaaa`, where `yyyy-mm-dd` is the current
-date and `aaaaaaaaaaaaaaaa` is a random sequence of 16 case-sensitive letters.
-
-The metadata for these cached resources is stored in the
-`local_media_repository` and `local_media_repository_url_cache` tables.
-
-Resources for URL previews are deleted after a few days.
-
-### Paths
-The file with media ID `yyyy-mm-dd_aaaaaaaaaaaaaaaa` and its `128x96`
-`image/jpeg` thumbnail, created by scaling, would be stored at:
-```
-url_cache/yyyy-mm-dd/aaaaaaaaaaaaaaaa
-url_cache_thumbnails/yyyy-mm-dd/aaaaaaaaaaaaaaaa/128-96-image-jpeg-scale
-```
+Remote content is cached under `"remote_content"` directory. Each item of
+remote content is assigned a local `"filesystem_id"` to ensure that the
+directory structure `"remote_content/server_name/aa/bb/ccccccccdddddddddddd"`
+is appropriate. Thumbnails for remote content are stored under
+`"remote_thumbnail/server_name/..."`
````
@@ -1,71 +0,0 @@
# Background update controller callbacks

Background update controller callbacks allow module developers to control (e.g. rate-limit)
how database background updates are run. A database background update is an operation
that Synapse runs on its database in the background after it starts. It is usually used
for database operations that would take too long if they were run at the same time as
schema updates (which are run on startup) and would delay Synapse's startup too much:
populating a table with a large amount of data, adding an index to a big table, deleting
superfluous data, and so on.

Background update controller callbacks can be registered using the module API's
`register_background_update_controller_callbacks` method. Only the first module (in order
of appearance in Synapse's configuration file) calling this method can register background
update controller callbacks; subsequent calls are ignored.

The available background update controller callbacks are:
### `on_update`

_First introduced in Synapse v1.49.0_

```python
def on_update(update_name: str, database_name: str, one_shot: bool) -> AsyncContextManager[int]
```

Called when Synapse is about to run an iteration of a background update. The module is
given the name of the update, the name of the database, and a flag indicating whether
the background update will happen in one go and may take a long time (e.g. when creating
indices). If this last argument is set to `False`, the update will be run in batches.

The module must return an async context manager. It will be entered before Synapse runs
a background update; it should return the desired duration of the iteration, in
milliseconds.

The context manager will be exited when the iteration completes. Note that the duration
returned by the context manager is a target, and an iteration may take substantially
longer or shorter. If the `one_shot` flag is set to `True`, the duration returned is
ignored.

__Note__: Unlike most module callbacks in Synapse, this one is _synchronous_. This is
because asynchronous operations are expected to be run by the async context manager.

This callback is required when registering any other background update controller
callback.
### `default_batch_size`

_First introduced in Synapse v1.49.0_

```python
async def default_batch_size(update_name: str, database_name: str) -> int
```

Called before the first iteration of a background update, with the name of the update
and of the database. The module must return the number of elements to process in this
first iteration.

If this callback is not defined, Synapse will use a default value of 100.

### `min_batch_size`

_First introduced in Synapse v1.49.0_

```python
async def min_batch_size(update_name: str, database_name: str) -> int
```

Called before running a new batch for a background update, with the name of the update
and of the database. The module must return an integer representing the minimum number
of elements to process in this iteration. This number must be at least 1, and is used to
ensure that progress is always made.

If this callback is not defined, Synapse will use a default value of 100.
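To make the shape of these callbacks concrete, here is a minimal sketch of a module registering all three. The class name and the chosen numbers are ours, purely illustrative; only `register_background_update_controller_callbacks` and the callback signatures come from the documentation above:

```python
import contextlib
from typing import Any, AsyncIterator

class ExampleBackgroundUpdateController:
    def __init__(self, config: Any, api: Any):
        # on_update is required when registering the other two callbacks.
        api.register_background_update_controller_callbacks(
            on_update=self.on_update,
            default_batch_size=self.default_batch_size,
            min_batch_size=self.min_batch_size,
        )

    @contextlib.asynccontextmanager
    async def on_update(
        self, update_name: str, database_name: str, one_shot: bool
    ) -> AsyncIterator[int]:
        # Entered before each iteration: yield the target duration in ms.
        yield 100
        # Exited when the iteration completes; rate-limiting between
        # iterations (e.g. an asyncio.sleep) could be done here.

    async def default_batch_size(self, update_name: str, database_name: str) -> int:
        # Process 100 items in the first iteration.
        return 100

    async def min_batch_size(self, update_name: str, database_name: str) -> int:
        # Never process fewer than 10 items per iteration.
        return 10
```

Note that `on_update` is an ordinary (synchronous) callable returning an async context manager, which is exactly what `contextlib.asynccontextmanager` produces when applied to an async generator.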
@@ -71,15 +71,15 @@ Modules **must** register their web resources in their `__init__` method.
## Registering a callback

Modules can use Synapse's module API to register callbacks. Callbacks are functions that
Synapse will call when performing specific actions. Callbacks must be asynchronous (unless
specified otherwise), and are split into categories. A single module may implement
callbacks from multiple categories, and is under no obligation to implement all callbacks
from the categories it registers callbacks for.

Modules can register callbacks using one of the module API's `register_[...]_callbacks`
methods. The callback functions are passed to these methods as keyword arguments, with
the callback name as the argument name and the function as its value. This is
demonstrated in the example below. A `register_[...]_callbacks` method exists for each
category.

Callbacks for each category can be found on their respective page of the
[Synapse documentation website](https://matrix-org.github.io/synapse).
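As a sketch of that registration pattern (a spam-checker callback is used here purely for illustration; any other category works the same way):

```python
class ExampleModule:
    def __init__(self, config, api):
        self._api = api
        # Register a callback: the keyword is the callback name, the value
        # is the function implementing it.
        api.register_spam_checker_callbacks(
            user_may_create_room=self.user_may_create_room,
        )

    async def user_may_create_room(self, user_id: str) -> bool:
        # Allow everyone except users matching a (hypothetical) blocklist.
        return not user_id.startswith("@blocked-")
```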
@@ -83,7 +83,7 @@ oidc_providers:
### Dex

[Dex][dex-idp] is a simple, open-source OpenID Connect Provider.
Although it is designed to help build a full-blown provider with an
external database, it can be configured with static passwords in a config file.
@@ -523,7 +523,7 @@ The synapse config will look like this:
      email_template: "{{ user.email }}"
```

### Django OAuth Toolkit

[django-oauth-toolkit](https://github.com/jazzband/django-oauth-toolkit) is a
Django application providing, out of the box, all the endpoints, data and logic
needed to add OAuth2 capabilities to your Django projects.
@@ -1,74 +0,0 @@
## Summary of performance impact of running on resource-constrained devices such as SBCs

I've been running my homeserver on a cubietruck at home now for some time and am often replying to statements like "you need loads of ram to join large rooms" with "it works fine for me". I thought it might be useful to curate a summary of the issues you're likely to run into, to help as a scaling-down guide, maybe highlight these for development work, or end up as documentation. It seems that once you get up to about 4x1.5GHz arm64 4GiB these issues are no longer a problem.

- **Platform**: 2x1GHz armhf 2GiB ram [Single-board computers](https://wiki.debian.org/CheapServerBoxHardware), SSD, postgres.

### Presence

This is the main reason people have a poor Matrix experience on resource-constrained homeservers. Element Web will frequently report that the server is offline while the python process is pegged at 100% CPU. This feature is used to tell when other users are active (have a client app in the foreground) and therefore more likely to respond, but it requires a lot of network activity to maintain even when nobody is talking in a room.

![](https://user-images.githubusercontent.com/71895/94848963-a47a3580-041c-11eb-8b6e-acb772b4259e.png)

While synapse does have some performance issues with presence [#3971](https://github.com/matrix-org/synapse/issues/3971), the fundamental problem is that this is an easy feature to implement for a centralised service at nearly no overhead, but federation makes it combinatorial [#8055](https://github.com/matrix-org/synapse/issues/8055). There is also a client-side config option which disables the UI and idle tracking [enable_presence_by_hs_url] to blacklist the largest instances, but I didn't notice much difference, so I recommend disabling the feature entirely at the server level as well.

[enable_presence_by_hs_url]: https://github.com/vector-im/element-web/blob/v1.7.8/config.sample.json#L45

### Joining

Joining a "large", federated room will initially fail with the below message in Element Web, but waiting a while (10-60 minutes) and trying again will succeed without any issue. What counts as "large" is not message history, user count, connections to homeservers, or even a simple count of the state events; it is instead how long the state resolution algorithm takes. However, each of those numbers is a reasonable proxy, so we can use them as estimates, since user count is one of the few things you see before joining.

![](https://user-images.githubusercontent.com/71895/94945781-18771500-04d3-11eb-8419-83c2da73a341.png)

This is [#1211](https://github.com/matrix-org/synapse/issues/1211) and will also hopefully be mitigated by peeking [matrix-org/matrix-doc#2753](https://github.com/matrix-org/matrix-doc/pull/2753), so at least you don't need to wait for a join to complete before finding out if it's the kind of room you want. Note that you should first disable presence, otherwise it'll just make the situation worse [#3120](https://github.com/matrix-org/synapse/issues/3120). There is a lot of database interaction too, so make sure you've [migrated your data](../postgres.md) from the default sqlite to postgresql. Personally, I recommend patience - once the initial join is complete there are rarely any issues with actually interacting with the room - but if you like you can just block "large" rooms entirely.

### Sessions

Anything that requires modifying the device list [#7721](https://github.com/matrix-org/synapse/issues/7721) will take a while to propagate, again taking the client "Offline" until it's complete. This includes signing in and out, editing the public name and verifying e2ee. The main mitigation I recommend is to keep long-running sessions open, e.g. by using Firefox SSB "Use this site in App mode" or Chromium PWA "Install Element".

### Recommended configuration

Put the below in a new file at /etc/matrix-synapse/conf.d/sbc.yaml to override the defaults in homeserver.yaml.
```
# Set to false to disable presence tracking on this homeserver.
use_presence: false

# When this is enabled, the room "complexity" will be checked before a user
# joins a new remote room. If it is above the complexity limit, the server will
# disallow joining, or will instantly leave.
limit_remote_rooms:
  # Uncomment to enable room complexity checking.
  #enabled: true
  complexity: 3.0

# Database configuration
database:
  name: psycopg2
  args:
    user: matrix-synapse
    # Generate a long, secure one with a password manager
    password: hunter2
    database: matrix-synapse
    host: localhost
    cp_min: 5
    cp_max: 10
```
Currently the complexity is measured by [current_state_events / 500](https://github.com/matrix-org/synapse/blob/v1.20.1/synapse/storage/databases/main/events_worker.py#L986). You can find join times and your most complex rooms like this:

```
admin@homeserver:~$ zgrep '/client/r0/join/' /var/log/matrix-synapse/homeserver.log* | awk '{print $18, $25}' | sort --human-numeric-sort
29.922sec/-0.002sec /_matrix/client/r0/join/%23debian-fasttrack%3Apoddery.com
182.088sec/0.003sec /_matrix/client/r0/join/%23decentralizedweb-general%3Amatrix.org
911.625sec/-570.847sec /_matrix/client/r0/join/%23synapse%3Amatrix.org

admin@homeserver:~$ sudo --user postgres psql matrix-synapse --command 'select canonical_alias, joined_members, current_state_events from room_stats_state natural join room_stats_current where canonical_alias is not null order by current_state_events desc fetch first 5 rows only'
            canonical_alias           | joined_members | current_state_events
--------------------------------------+----------------+----------------------
 #_oftc_#debian:matrix.org            |            871 |                52355
 #matrix:matrix.org                   |           6379 |                10684
 #irc:matrix.org                      |            461 |                 3751
 #decentralizedweb-general:matrix.org |            997 |                 1509
 #whatsapp:maunium.net                |            554 |                  854
```
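A quick sanity check of that formula against the query output above (a sketch using the v1.20.1 definition linked above):

```python
def room_complexity(current_state_events: int) -> float:
    # Complexity as defined in Synapse v1.20.1: current_state_events / 500.
    return current_state_events / 500.0

# Matrix HQ from the query above: 10684 state events -> complexity ~21.4,
# well above the 3.0 limit suggested in the configuration.
print(round(room_complexity(10684), 1))
```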
@@ -1,7 +1,7 @@
<h2 style="color:red">
This page of the Synapse documentation is now deprecated. For up to date
documentation on setting up or writing a password auth provider module, please see
<a href="modules/index.md">this page</a>.
</h2>

# Password auth provider modules
@@ -118,9 +118,6 @@ performance:
Note that the appropriate values for those fields depend on the amount
of free memory the database host has available.

Additionally, admins of large deployments might want to consider using huge pages
to help manage memory, especially when using large values of `shared_buffers`. You
can read more about that [here](https://www.postgresql.org/docs/10/kernel-resources.html#LINUX-HUGE-PAGES).

## Porting from SQLite
@@ -647,8 +647,8 @@ retention:
#
#federation_certificate_verification_whitelist:
#  - lon.example.com
#  - "*.domain.com"
#  - "*.onion"

# List of custom certificate authorities for federation traffic.
#
@@ -1209,44 +1209,6 @@ oembed:
#
#session_lifetime: 24h

# Time that an access token remains valid for, if the session is
# using refresh tokens.
# For more information about refresh tokens, please see the manual.
# Note that this only applies to clients which advertise support for
# refresh tokens.
#
# Note also that this is calculated at login time and refresh time:
# changes are not applied to existing sessions until they are refreshed.
#
# By default, this is 5 minutes.
#
#refreshable_access_token_lifetime: 5m

# Time that a refresh token remains valid for (provided that it is not
# exchanged for another one first).
# This option can be used to automatically log-out inactive sessions.
# Please see the manual for more information.
#
# Note also that this is calculated at login time and refresh time:
# changes are not applied to existing sessions until they are refreshed.
#
# By default, this is infinite.
#
#refresh_token_lifetime: 24h

# Time that an access token remains valid for, if the session is NOT
# using refresh tokens.
# Please note that not all clients support refresh tokens, so setting
# this to a short value may be inconvenient for some users who will
# then be logged out frequently.
#
# Note also that this is calculated at login time: changes are not applied
# retrospectively to existing sessions for users that have already logged in.
#
# By default, this is infinite.
#
#nonrefreshable_access_token_lifetime: 24h

# The user must provide all of the below types of 3PID when registering.
#
#registrations_require_3pid:
@@ -2077,12 +2039,6 @@ sso:
#
#algorithm: "provided-by-your-issuer"

# Name of the claim containing a unique identifier for the user.
#
# Optional, defaults to `sub`.
#
#subject_claim: "sub"

# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and
@@ -2404,8 +2360,8 @@ user_directory:
# indexes were (re)built was before Synapse 1.44, you'll have to
# rebuild the indexes in order to search through all known users.
# These indexes are built the first time Synapse starts; admins can
# manually trigger a rebuild via API following the instructions at
# https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
#
# Uncomment to return search results containing all known users, even if that
# user does not share a room with the requester.
@@ -76,12 +76,6 @@ The fingerprint of the repository signing key (as shown by `gpg
/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.

When installing with Debian packages, you might prefer to place files in
`/etc/matrix-synapse/conf.d/` to override your configuration without editing
the main configuration file at `/etc/matrix-synapse/homeserver.yaml`.
By doing that, you won't be asked if you want to replace your configuration
file when you upgrade the Debian package to a later version.

##### Downstream Debian packages

We do not recommend using the packages from the default Debian `buster`
@@ -71,12 +71,7 @@ Below are the templates Synapse will look for when generating the content of an
* `sender_avatar_url`: the avatar URL (as a `mxc://` URL) for the event's
  sender
* `sender_hash`: a hash of the user ID of the sender
* `msgtype`: the type of the message
* `body_text_html`: HTML representation of the message
* `body_text_plain`: plaintext representation of the message
* `image_url`: mxc URL of an image, when "msgtype" is "m.image"
* `link`: a `matrix.to` link to the room
* `avator_url`: URL of the room's avatar
* `reason`: information on the event that triggered the email to be sent. It's an
  object with the following attributes:
  * `room_id`: the ID of the room the event was sent in
@@ -1,12 +1,12 @@
# Overview

This document explains how to enable VoIP relaying on your homeserver with
TURN.

The synapse Matrix homeserver supports integration with TURN server via the
[TURN server REST API](<https://tools.ietf.org/html/draft-uberti-behave-turn-rest-00>). This
allows the homeserver to generate credentials that are valid for use on the
TURN server through the use of a secret shared between the homeserver and the
TURN server.

The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API) and integrate it with synapse.
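For background, the TURN REST API derives time-limited credentials from the shared secret roughly as follows. This is a sketch of the scheme from the draft linked above, not Synapse's own code:

```python
import base64
import hashlib
import hmac
import time

def turn_credentials(username: str, shared_secret: str, lifetime_s: int = 86400):
    # Per draft-uberti-behave-turn-rest-00: the TURN username is
    # "<expiry-timestamp>:<user>", and the password is
    # base64(HMAC-SHA1(shared_secret, username)).
    expiry = int(time.time()) + lifetime_s
    turn_username = f"{expiry}:{username}"
    mac = hmac.new(shared_secret.encode(), turn_username.encode(), hashlib.sha1)
    turn_password = base64.b64encode(mac.digest()).decode()
    return turn_username, turn_password

user, password = turn_credentials("@alice:example.com", "<turn_shared_secret>")
```

Because the TURN server can recompute the same HMAC from its copy of the secret, no per-user state needs to be shared between the homeserver and the TURN server.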
@@ -165,18 +165,18 @@ This will install and start a systemd service called `coturn`.
## Synapse setup

Your homeserver configuration file needs the following extra keys:

1. "`turn_uris`": This needs to be a yaml list of public-facing URIs
   for your TURN server to be given out to your clients. Add separate
   entries for each transport your TURN server supports.
2. "`turn_shared_secret`": This is the secret shared between your
   homeserver and your TURN server, so you should set it to the same
   string you used in turnserver.conf.
3. "`turn_user_lifetime`": This is the amount of time credentials
   generated by your homeserver are valid for (in milliseconds).
   Shorter times offer less potential for abuse at the expense of
   increased traffic between web clients and your homeserver to
   refresh credentials. The TURN REST API specification recommends
   one day (86400000).
4. "`turn_allow_guests`": Whether to allow guest users to use the
@@ -220,7 +220,7 @@ Here are a few things to try:
anyone who has successfully set this up.

* Check that you have opened your firewall to allow TCP and UDP traffic to the
  TURN ports (normally 3478 and 5349).

* Check that you have opened your firewall to allow UDP traffic to the UDP
  relay ports (49152-65535 by default).
@@ -42,6 +42,7 @@ For each update:
- `average_items_per_ms`: how many items are processed per millisecond, based on an exponential average.

## Enabled

This API allows pausing background updates.
@@ -81,29 +82,3 @@ The API returns the `enabled` param.
```

There is also a `GET` version which returns the `enabled` state.

## Run

This API schedules a specific background update to run. The job starts immediately after calling the API.

The API is:

```
POST /_synapse/admin/v1/background_updates/start_job
```

with the following body:

```json
{
    "job_name": "populate_stats_process_rooms"
}
```

The following JSON body parameters are available:

- `job_name` - A string indicating which job to run. Valid values are:
  - `populate_stats_process_rooms` - Recalculate the stats for all rooms.
  - `regenerate_directory` - Recalculate the [user directory](../../../user_directory.md) if it is stale or out of sync.
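As a quick illustration, the job can be started from the standard library alone. The homeserver URL and token are placeholders you must supply:

```python
import json
import urllib.request

BASE = "https://example.com"          # assumption: your homeserver's base URL
ADMIN_TOKEN = "<admin access token>"  # assumption: a server admin's access token

req = urllib.request.Request(
    f"{BASE}/_synapse/admin/v1/background_updates/start_job",
    data=json.dumps({"job_name": "populate_stats_process_rooms"}).encode(),
    headers={
        "Authorization": f"Bearer {ADMIN_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```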
@@ -1,114 +0,0 @@
# Federation API

This API allows a server administrator to manage Synapse's federation with other homeservers.

Note: This API is new, experimental and "subject to change".

## List of destinations

This API gets the current destination retry timing info for all remote servers.

The list contains all the servers with which the server federates,
regardless of whether an error occurred or not.
If an error occurs, it may take up to 20 minutes for the error to be displayed here,
as a complete retry must have failed.

The API is:

A standard request with no filtering:

```
GET /_synapse/admin/v1/federation/destinations
```

A response body like the following is returned:

```json
{
    "destinations": [
        {
            "destination": "matrix.org",
            "retry_last_ts": 1557332397936,
            "retry_interval": 3000000,
            "failure_ts": 1557329397936,
            "last_successful_stream_ordering": null
        }
    ],
    "total": 1
}
```

To paginate, check for `next_token` and if present, call the endpoint again
with `from` set to the value of `next_token`. This will return a new page.

If the endpoint does not return a `next_token` then there are no more destinations
to paginate through.
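A sketch of that pagination loop (using the same placeholder `BASE`/`ADMIN_TOKEN` conventions as the other admin API examples in this document):

```python
import json
import urllib.request

BASE = "https://example.com"          # assumption: your homeserver's base URL
ADMIN_TOKEN = "<admin access token>"  # assumption: a server admin's access token

def get(url: str) -> dict:
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {ADMIN_TOKEN}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

destinations, from_ = [], 0
while True:
    page = get(f"{BASE}/_synapse/admin/v1/federation/destinations?from={from_}")
    destinations.extend(page["destinations"])
    if "next_token" not in page:
        break  # no more pages
    from_ = page["next_token"]

print(f"{len(destinations)} destinations known")
```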
**Parameters**

The following query parameters are available:

- `from` - Offset in the returned list. Defaults to `0`.
- `limit` - Maximum number of destinations to return. Defaults to `100`.
- `order_by` - The method by which to sort the returned list of destinations.
  Valid values are:
  - `destination` - Destinations are ordered alphabetically by remote server name.
    This is the default.
  - `retry_last_ts` - Destinations are ordered by time of last retry attempt in ms.
  - `retry_interval` - Destinations are ordered by how long until next retry in ms.
  - `failure_ts` - Destinations are ordered by when the server started failing in ms.
  - `last_successful_stream_ordering` - Destinations are ordered by the stream ordering
    of the most recent successfully-sent PDU.
- `dir` - Direction of the order. Either `f` for forwards or `b` for backwards. Setting
  this value to `b` will reverse the above sort order. Defaults to `f`.

*Caution:* The database only has an index on the column `destination`.
This means that if a different sort order is used,
this can cause a large load on the database, especially for large environments.

**Response**

The following fields are returned in the JSON response body:

- `destinations` - An array of objects, each containing information about a destination.
  Destination objects contain the following fields:
  - `destination` - string - Name of the remote server to federate with.
  - `retry_last_ts` - integer - The last time Synapse tried and failed to reach the
    remote server, in ms. This is `0` if the last attempt to communicate with the
    remote server was successful.
  - `retry_interval` - integer - How long since the last time Synapse tried to reach
    the remote server before trying again, in ms. This is `0` if no further retrying is occurring.
  - `failure_ts` - nullable integer - The first time Synapse tried and failed to reach the
    remote server, in ms. This is `null` if communication with the remote server has never failed.
  - `last_successful_stream_ordering` - nullable integer - The stream ordering of the most
    recent successfully-sent [PDU](understanding_synapse_through_grafana_graphs.md#federation)
    to this destination, or `null` if this information has not been tracked yet.
- `next_token` - string representing a positive integer - Indication for pagination. See above.
- `total` - integer - Total number of destinations.

# Destination Details API

This API gets the retry timing info for a specific remote server.

The API is:

```
GET /_synapse/admin/v1/federation/destinations/<destination>
```

A response body like the following is returned:

```json
{
    "destination": "matrix.org",
    "retry_last_ts": 1557332397936,
    "retry_interval": 3000000,
    "failure_ts": 1557329397936,
    "last_successful_stream_ordering": null
}
```

**Response**

The response fields are the same as in the `destinations` array of the
[List of destinations](#list-of-destinations) response.
@@ -1,57 +0,0 @@
# Spaces API

This API allows a server administrator to manage spaces.

## Remove local user

This API forces a local user to leave all non-public rooms in a space.

The space itself is always left, regardless of whether it is public.

The operation may succeed partially if the user fails to leave some rooms.

The API is:

```
DELETE /_synapse/admin/v1/rooms/<room_id>/hierarchy/members/<user_id>
```

with an optional body of:

```json
{
    "include_remote_spaces": true
}
```

`include_remote_spaces` controls whether to process subspaces that the
local homeserver is not participating in. The listings of such subspaces
have to be retrieved over federation and their accuracy cannot be
guaranteed.

Returning:

```json
{
    "left_rooms": ["!room1:example.net", "!room2:example.net", ...],
    "inaccessible_rooms": ["!subspace1:example.net", ...],
    "failed_rooms": {
        "!room4:example.net": "Failed to leave room.",
        ...
    }
}
```

`left_rooms`: A list of rooms that the user has been made to leave.

`inaccessible_rooms`: A list of rooms and spaces that the local
homeserver is not in, and which may not have been fully processed. Rooms may
appear here if:

* The room is a space that the local homeserver is not in, and so its
  full list of child rooms could not be determined.
* The room is inaccessible to the local homeserver, and it is not
  known whether the room is a subspace containing further rooms.

`failed_rooms`: A dictionary of errors encountered when leaving rooms.
The keys of the dictionary are room IDs and the values of the dictionary
are error messages.
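A sketch of calling this endpoint (same placeholder `BASE`/`ADMIN_TOKEN` conventions as the other admin API examples in this document):

```python
import json
import urllib.request
from urllib.parse import quote

BASE = "https://example.com"          # assumption: your homeserver's base URL
ADMIN_TOKEN = "<admin access token>"  # assumption: a server admin's access token

def remove_user_from_space(space_id: str, user_id: str) -> dict:
    url = (
        f"{BASE}/_synapse/admin/v1/rooms/{quote(space_id)}"
        f"/hierarchy/members/{quote(user_id)}"
    )
    req = urllib.request.Request(
        url,
        data=json.dumps({"include_remote_spaces": True}).encode(),
        headers={
            "Authorization": f"Bearer {ADMIN_TOKEN}",
            "Content-Type": "application/json",
        },
        method="DELETE",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

result = remove_user_from_space("!space:example.net", "@user:example.com")
print(result["left_rooms"], result["failed_rooms"])
```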
@@ -1,103 +0,0 @@
## Admin FAQ

How do I become a server admin?
---
If your server already has an admin account you should use the User Admin API to promote other accounts to become admins. See [User Admin API](../../admin_api/user_admin_api.md#Change-whether-a-user-is-a-server-administrator-or-not).

If you don't have any admin accounts yet you won't be able to use the admin API, so you'll have to edit the database manually. Manually editing the database is generally not recommended, so once you have an admin account, use the admin APIs to make further changes.

```sql
UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';
```
What servers is my server talking to?
---
Run this SQL query on your db:
```sql
SELECT * FROM destinations;
```

What servers are currently participating in this room?
---
Run this SQL query on your db:
```sql
SELECT DISTINCT split_part(state_key, ':', 2)
FROM current_state_events AS c
INNER JOIN room_memberships AS m USING (room_id, event_id)
WHERE room_id = '!cURbafjkfsMDVwdRDQ:matrix.org' AND membership = 'join';
```

What users are registered on my server?
---
```sql
SELECT name FROM users;
```
Manually resetting passwords
---
See https://github.com/matrix-org/synapse/blob/master/README.rst#password-reset

I have a problem with my server. Can I just delete my database and start again?
---
Deleting your database is unlikely to make anything better.

It's easy to make the mistake of thinking that you can start again from a clean slate by dropping your database, but things don't work like that in a federated network: lots of other servers have information about your server.

For example: other servers might think that you are in a room, your server will think that you are not, and you'll probably be unable to interact with that room in a sensible way ever again.

In general, there are better solutions to any problem than dropping the database. Come and seek help in https://matrix.to/#/#synapse:matrix.org.

There are two exceptions when it might be sensible to delete your database and start again:
* You have *never* joined any rooms which are federated with other servers. For instance, a local deployment which the outside world can't talk to.
* You are changing the `server_name` in the homeserver configuration. In effect this makes your server a completely new one from the point of view of the network, so in this case it makes sense to start with a clean database.

(In both cases you probably also want to clear out the media_store.)
I've stuffed up access to my room, how can I delete it to free up the alias?
---
Using the following curl command:
```
curl -H 'Authorization: Bearer <access-token>' -X DELETE https://matrix.org/_matrix/client/r0/directory/room/<room-alias>
```
`<access-token>` - can be obtained in Riot by looking in the Riot settings; at the bottom is:
Access Token:\<click to reveal\>

`<room-alias>` - the room alias, e.g. #my_room:matrix.org. This possibly needs to be URL encoded as well, for example %23my_room%3Amatrix.org.
How can I find the lines corresponding to a given HTTP request in my homeserver log?
---

Synapse tags each log line according to the HTTP request it is processing. When it finishes processing each request, it logs a line containing the words `Processed request: `. For example:

```
2019-02-14 22:35:08,196 - synapse.access.http.8008 - 302 - INFO - GET-37 - ::1 - 8008 - {@richvdh:localhost} Processed request: 0.173sec/0.001sec (0.002sec, 0.000sec) (0.027sec/0.026sec/2) 687B 200 "GET /_matrix/client/r0/sync HTTP/1.1" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36" [0 dbevts]"
```

Here we can see that the request has been tagged with `GET-37`. (The tag depends on the method of the HTTP request, so might start with `GET-`, `PUT-`, `POST-`, `OPTIONS-` or `DELETE-`.) So to find all lines corresponding to this request, we can do:

```
grep 'GET-37' homeserver.log
```

If you want to paste that output into a github issue or matrix room, please remember to surround it with triple-backticks (```) to make it legible (see https://help.github.com/en/articles/basic-writing-and-formatting-syntax#quoting-code).

What do all those fields in the 'Processed' line mean?
---
See [Request log format](request_log.md).

What are the biggest rooms on my server?
---

```sql
SELECT s.canonical_alias, g.room_id, count(*) AS num_rows
FROM
  state_groups_state AS g,
  room_stats_state AS s
WHERE g.room_id = s.room_id
GROUP BY s.canonical_alias, g.room_id
ORDER BY num_rows DESC
LIMIT 10;
```

You can also use the [List Room API](../../admin_api/rooms.md#list-room-api)
and `order_by` `state_events`.
@@ -1,18 +0,0 @@
This blog post by Victor Berger explains how to use many of the tools listed on this page: https://levans.fr/shrink-synapse-database.html

# List of useful tools and scripts for maintaining the Synapse database:

## [Purge Remote Media API](../../admin_api/media_admin_api.md#purge-remote-media-api)
The purge remote media API allows server admins to purge old cached remote media.

## [Purge Local Media API](../../admin_api/media_admin_api.md#delete-local-media)
This API deletes the *local* media from the disk of your own server.

## [Purge History API](../../admin_api/purge_history_api.md)
The purge history API allows server admins to purge historic events from their database, reclaiming disk space.

## [synapse-compress-state](https://github.com/matrix-org/rust-synapse-compress-state)
Tool for compressing (deduplicating) the `state_groups_state` table.

## [SQL for analyzing Synapse PostgreSQL database stats](useful_sql_for_admins.md)
Some easy SQL that reports useful stats about your Synapse database.
@@ -1,25 +0,0 @@
# How do State Groups work?

As a general rule, I encourage people who want to understand the deepest darkest secrets of the database schema to drop by #synapse-dev:matrix.org and ask questions.

However, one question that comes up frequently is that of how "state groups" work, and why the `state_groups_state` table gets so big, so here's an attempt to answer that question.

We need to be able to relatively quickly calculate the state of a room at any point in that room's history. In other words, we need to know the state of the room at each event in that room. This is done as follows:

A sequence of events where the state is the same are grouped together into a `state_group`; the mapping is recorded in `event_to_state_groups`. (Technically speaking, since a state event usually changes the state in the room, we are recording the state of the room *after* the given event id: which is to say, to a handwavey simplification, the first event in a state group is normally a state event, and others in the same state group are normally non-state-events.)

`state_groups` records, for each state group, the id of the room that we're looking at, and also the id of the first event in that group. (I'm not sure if that event id is used much in practice.)

Now, if we stored all the room state for each `state_group`, that would be a huge amount of data. Instead, for each state group, we normally store the difference between the state in that group and some other state group, and only occasionally (every 100 state changes or so) record the full state.

So, most state groups have an entry in `state_group_edges` (don't ask me why it's not a column in `state_groups`) which records the previous state group in the room, and `state_groups_state` records the differences in state since that previous state group.

A full state group just records the event id for each piece of state in the room at that point.
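To make the delta scheme concrete, here is a sketch (ours, not Synapse's code) of how the full state at a state group could be reconstructed by walking `state_group_edges` back to a full snapshot and applying the deltas forward:

```python
# prev_group: state_group_edges as {state_group: previous_state_group}
# deltas: state_groups_state as {state_group: {(event_type, state_key): event_id}}
def resolve_state(group: int, prev_group: dict, deltas: dict) -> dict:
    # Walk back to the nearest full snapshot (a group with no edge).
    chain, g = [], group
    while g is not None:
        chain.append(g)
        g = prev_group.get(g)
    # Apply deltas forward, oldest first; later groups overwrite earlier keys.
    state: dict = {}
    for g in reversed(chain):
        state.update(deltas.get(g, {}))
    return state

prev_group = {2: 1, 3: 2}  # group 1 is a full snapshot
deltas = {
    1: {("m.room.create", ""): "$create", ("m.room.member", "@a:x"): "$join_a"},
    2: {("m.room.member", "@b:x"): "$join_b"},
    3: {("m.room.member", "@a:x"): "$leave_a"},
}
assert resolve_state(3, prev_group, deltas)[("m.room.member", "@a:x")] == "$leave_a"
```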
## Known bugs with state groups

There are various reasons that we can end up creating many more state groups than we need: see https://github.com/matrix-org/synapse/issues/3364 for more details.

## Compression tool

There is a tool at https://github.com/matrix-org/rust-synapse-compress-state which can compress `state_groups_state` on a room-by-room basis (essentially, it reduces the number of "full" state groups). This can result in dramatic reductions of the storage used.
@@ -1,84 +0,0 @@
## Understanding Synapse through Grafana graphs

It is possible to monitor much of the internal state of Synapse using [Prometheus](https://prometheus.io)
metrics and [Grafana](https://grafana.com/).
A guide for configuring Synapse to provide metrics is available [here](../../metrics-howto.md)
and information on setting up Grafana is [here](https://github.com/matrix-org/synapse/tree/master/contrib/grafana).
In this setup, Prometheus will periodically scrape the information Synapse provides and
store a record of it over time. Grafana is then used as an interface to query and
present this information through a series of pretty graphs.

Once you have Grafana set up, and assuming you're using [our grafana dashboard template](https://github.com/matrix-org/synapse/blob/master/contrib/grafana/synapse.json), look for the following graphs when debugging a slow/overloaded Synapse:

## Message Event Send Time

![image](https://user-images.githubusercontent.com/1342360/82239409-a1c8e900-9930-11ea-8081-e4614e0c63f4.png)

This, along with the CPU and Memory graphs, is a good way to check the general health of your Synapse instance. It represents how long it takes for a user on your homeserver to send a message.

## Transaction Count and Transaction Duration

![image](https://user-images.githubusercontent.com/1342360/82239985-8d392080-9931-11ea-80d9-74200be37312.png)

![image](https://user-images.githubusercontent.com/1342360/82240050-ab068580-9931-11ea-98f1-f94671cbac9a.png)

These graphs show the database transactions that are occurring the most frequently, as well as those that are taking the most time to execute.

![image](https://user-images.githubusercontent.com/1342360/82240192-e86b1300-9931-11ea-9aac-3e2c9bfa6fdc.png)

In the first graph, we can see obvious spikes corresponding to lots of `get_user_by_id` transactions. This would be useful information to figure out which part of the Synapse codebase is potentially creating a heavy load on the system. However, be sure to cross-reference this with Transaction Duration, which states that `get_users_by_id` is actually a very quick database transaction and isn't causing as much load as others, like `persist_events`:

![image](https://user-images.githubusercontent.com/1342360/82240467-62030100-9932-11ea-8db9-917f2d977259.png)

Still, it's probably worth investigating why we're getting users from the database that often, and whether it's possible to reduce the number of queries we make by adjusting our cache factor(s).

The `persist_events` transaction is responsible for saving new room events to the Synapse database, so it can often show a high transaction duration.

## Federation

The charts in the "Federation" section show information about incoming and outgoing federation requests. Federation data can be divided into two basic types:

- PDU (Persistent Data Unit) - room events: messages, state events (join/leave), etc. These are permanently stored in the database.
- EDU (Ephemeral Data Unit) - other data, which need not be stored permanently, such as read receipts and typing notifications.

The "Outgoing EDUs by type" chart shows the EDUs within outgoing federation requests by type: `m.device_list_update`, `m.direct_to_device`, `m.presence`, `m.receipt`, `m.typing`.

If you see a large number of `m.presence` EDUs and are having trouble with too much CPU load, you can disable `presence` in the Synapse config. See also [#3971](https://github.com/matrix-org/synapse/issues/3971).

## Caches

![image](https://user-images.githubusercontent.com/1342360/82240572-8b239180-9932-11ea-96ff-6b5f0e57ebe5.png)

![image](https://user-images.githubusercontent.com/1342360/82240666-b8703f80-9932-11ea-86af-9f663988d8da.png)

This is quite a useful graph. It shows how many times Synapse attempts to retrieve a piece of data from a cache which the cache did not contain, thus resulting in a call to the database. We can see here that the `_get_joined_profile_from_event_id` cache is being requested a lot, and often the data we're after is not cached.

Cross-referencing this with the Eviction Rate graph, which shows that entries are being evicted from `_get_joined_profile_from_event_id` quite often:

![image](https://user-images.githubusercontent.com/1342360/82240766-de95df80-9932-11ea-8c15-5acfc57c48da.png)

we should probably consider raising the size of that cache by raising its cache factor (a multiplier value for the size of an individual cache). Information on doing so is available [here](https://github.com/matrix-org/synapse/blob/ee421e524478c1ad8d43741c27379499c2f6135c/docs/sample_config.yaml#L608-L642) (note that the configuration of individual cache factors through the configuration file is available in Synapse v1.14.0+, whereas doing so through environment variables has been supported for a very long time). Note that this will increase Synapse's overall memory usage.

## Forward Extremities

![image](https://user-images.githubusercontent.com/1342360/82241440-13566680-9934-11ea-8b88-ba468db937ed.png)

Forward extremities are the leaf events at the end of a DAG in a room, i.e. events that have no children. The more that exist in a room, the more [state resolution](https://spec.matrix.org/v1.1/server-server-api/#room-state-resolution) Synapse needs to perform (hint: it's an expensive operation). While Synapse has code to prevent too many of these existing at one time in a room, bugs can sometimes make them crop up again.

If a room has >10 forward extremities, it's worth checking which room is the culprit and potentially removing them using the SQL queries mentioned in [#1760](https://github.com/matrix-org/synapse/issues/1760).

## Garbage Collection

![image](https://user-images.githubusercontent.com/1342360/82241911-da6ac180-9934-11ea-9a0d-a311fe41e8d4.png)

Large spikes in garbage collection times (bigger than shown here; I'm talking in the
multiple-seconds range) can cause lots of problems in Synapse performance. It's more an
indicator and a symptom of other problems, though, so check other graphs for what might be causing it.

## Final Thoughts

If you're still having performance problems with your Synapse instance and you've
tried everything you can, it may just be a lack of system resources. Consider adding
more CPU and RAM, and make use of [worker mode](../../workers.md)
to spread the load across multiple CPU cores / multiple machines for your homeserver.
@@ -1,156 +0,0 @@
## Some useful SQL queries for Synapse Admins

## Size of full matrix db
`SELECT pg_size_pretty( pg_database_size( 'matrix' ) );`
### Result example:
```
pg_size_pretty
----------------
 6420 MB
(1 row)
```
## Show top 20 largest rooms by state events count
```sql
SELECT r.name, s.room_id, s.current_state_events
FROM room_stats_current s
LEFT JOIN room_stats_state r USING (room_id)
ORDER BY current_state_events DESC
LIMIT 20;
```

and by state_group_events count:
```sql
SELECT rss.name, s.room_id, count(s.room_id) FROM state_groups_state s
LEFT JOIN room_stats_state rss USING (room_id)
GROUP BY s.room_id, rss.name
ORDER BY count(s.room_id) DESC
LIMIT 20;
```
The same, but with the join removed for performance reasons:
```sql
SELECT s.room_id, count(s.room_id) FROM state_groups_state s
GROUP BY s.room_id
ORDER BY count(s.room_id) DESC
LIMIT 20;
```
## Show top 20 largest tables by row count
```sql
SELECT relname, n_live_tup as rows
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC
LIMIT 20;
```
This query is quick, but may be very approximate; for the exact number of rows, use `SELECT COUNT(*) FROM <table_name>`.
### Result example:
```
state_groups_state - 161687170
event_auth - 8584785
event_edges - 6995633
event_json - 6585916
event_reference_hashes - 6580990
events - 6578879
received_transactions - 5713989
event_to_state_groups - 4873377
stream_ordering_to_exterm - 4136285
current_state_delta_stream - 3770972
event_search - 3670521
state_events - 2845082
room_memberships - 2785854
cache_invalidation_stream - 2448218
state_groups - 1255467
state_group_edges - 1229849
current_state_events - 1222905
users_in_public_rooms - 364059
device_lists_stream - 326903
user_directory_search - 316433
```
## Show top 20 rooms by new events count in the last 1 day:
```sql
SELECT e.room_id, r.name, COUNT(e.event_id) cnt FROM events e
LEFT JOIN room_stats_state r USING (room_id)
WHERE e.origin_server_ts >= DATE_PART('epoch', NOW() - INTERVAL '1 day') * 1000
GROUP BY e.room_id, r.name
ORDER BY cnt DESC
LIMIT 20;
```

## Show top 20 users on the homeserver by sent events (messages) in the last month:
```sql
SELECT user_id, SUM(total_events)
FROM user_stats_historical
WHERE TO_TIMESTAMP(end_ts/1000) AT TIME ZONE 'UTC' > date_trunc('day', now() - interval '1 month')
GROUP BY user_id
ORDER BY SUM(total_events) DESC
LIMIT 20;
```
## Show last 100 messages from a given user, with room names:
```sql
SELECT e.room_id, r.name, e.event_id, e.type, e.content, j.json FROM events e
LEFT JOIN event_json j USING (event_id)
LEFT JOIN room_stats_state r USING (room_id)
WHERE e.sender = '@LOGIN:example.com'
  AND e.type = 'm.room.message'
ORDER BY stream_ordering DESC
LIMIT 100;
```
## Show top 20 largest tables by storage size
```sql
SELECT nspname || '.' || relname AS "relation",
    pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
  FROM pg_class C
  LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
  WHERE nspname NOT IN ('pg_catalog', 'information_schema')
    AND C.relkind <> 'i'
    AND nspname !~ '^pg_toast'
  ORDER BY pg_total_relation_size(C.oid) DESC
  LIMIT 20;
```
### Result example:
```
public.state_groups_state - 27 GB
public.event_json - 9855 MB
public.events - 3675 MB
public.event_edges - 3404 MB
public.received_transactions - 2745 MB
public.event_reference_hashes - 1864 MB
public.event_auth - 1775 MB
public.stream_ordering_to_exterm - 1663 MB
public.event_search - 1370 MB
public.room_memberships - 1050 MB
public.event_to_state_groups - 948 MB
public.current_state_delta_stream - 711 MB
public.state_events - 611 MB
public.presence_stream - 530 MB
public.current_state_events - 525 MB
public.cache_invalidation_stream - 466 MB
public.receipts_linearized - 279 MB
public.state_groups - 160 MB
public.device_lists_remote_cache - 124 MB
public.state_group_edges - 122 MB
```
## Show rooms with names, sorted by events in these rooms
`echo "select event_json.room_id,room_stats_state.name from event_json,room_stats_state where room_stats_state.room_id=event_json.room_id" | psql matrix-synapse | sort | uniq -c | sort -n`
### Result example:
```
   9459 !FPUfgzXYWTKgIrwKxW:matrix.org   | This Week in Matrix
   9459 !FPUfgzXYWTKgIrwKxW:matrix.org   | This Week in Matrix (TWIM)
  17799 !iDIOImbmXxwNngznsa:matrix.org   | Linux in Russian
  18739 !GnEEPYXUhoaHbkFBNX:matrix.org   | Riot Android
  23373 !QtykxKocfZaZOUrTwp:matrix.org   | Matrix HQ
  39504 !gTQfWzbYncrtNrvEkB:matrix.org   | ru.[matrix]
  43601 !iNmaIQExDMeqdITdHH:matrix.org   | Riot
  43601 !iNmaIQExDMeqdITdHH:matrix.org   | Riot Web/Desktop
```

## Lookup room state info by list of room_id
```sql
SELECT rss.room_id, rss.name, rss.canonical_alias, rss.topic, rss.encryption, rsc.joined_members, rsc.local_users_in_room, rss.join_rules
FROM room_stats_state rss
LEFT JOIN room_stats_current rsc USING (room_id)
WHERE room_id IN (
    '!OGEhHVWSdvArJzumhm:matrix.org',
    '!YTvKGNlinIzlkMTVRl:matrix.org'
);
```
@@ -1,5 +0,0 @@
# Single Sign-On

Synapse supports single sign-on through the SAML, OpenID Connect or CAS protocols.
LDAP and other login methods are supported through first- and third-party password
auth provider modules.
@@ -1,8 +0,0 @@
# CAS

Synapse natively supports authenticating users via the [Central Authentication
Service protocol](https://en.wikipedia.org/wiki/Central_Authentication_Service)
(CAS).

Please see the `cas_config` and `sso` sections of the [Synapse configuration
file](../../../configuration/homeserver_sample_config.md) for more details.
@@ -1,8 +0,0 @@
# SAML

Synapse natively supports authenticating users via the [Security Assertion
Markup Language](https://en.wikipedia.org/wiki/Security_Assertion_Markup_Language)
(SAML) protocol.

Please see the `saml2_config` and `sso` sections of the [Synapse configuration
file](../../../configuration/homeserver_sample_config.md) for more details.
@@ -6,9 +6,9 @@ on this particular server - i.e. ones which your account shares a room with, or
who are present in a publicly viewable room present on the server.

The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
solution is to use the [admin API](usage/administration/admin_api/background_updates.md#run)
and execute the job `regenerate_directory`. This should then start a background task to
flush the current tables and regenerate the directory.

Data model
@@ -24,6 +24,11 @@ Finally, we also stylise the chapter titles in the left sidebar by indenting the
slightly so that they are more visually distinguishable from the section headers
(the bold titles). This is done through the `indent-section-headers.css` file.

In addition to these modifications, we have added a version picker to the documentation.
Users can switch between the documentation for different versions of Synapse.
This functionality was implemented through the `version-picker.js` and
`version-picker.css` files.

More information can be found in mdbook's official documentation for
[injecting page JS/CSS](https://rust-lang.github.io/mdBook/format/config.html)
and
@@ -131,6 +131,18 @@
<i class="fa fa-search"></i>
</button>
{{/if}}
<div class="version-picker">
    <div class="dropdown">
        <div class="select">
            <span></span>
            <i class="fa fa-chevron-down"></i>
        </div>
        <input type="hidden" name="version">
        <ul class="dropdown-menu">
            <!-- Versions will be added dynamically in version-picker.js -->
        </ul>
    </div>
</div>
</div>

<h1 class="menu-title">{{ book_title }}</h1>
@@ -309,4 +321,4 @@
{{/if}}

</body>
</html>
78
docs/website_files/version-picker.css
Normal file
@@ -0,0 +1,78 @@
.version-picker {
    display: flex;
    align-items: center;
}

.version-picker .dropdown {
    width: 130px;
    max-height: 29px;
    margin-left: 10px;
    display: inline-block;
    border-radius: 4px;
    border: 1px solid var(--theme-popup-border);
    position: relative;
    font-size: 13px;
    color: var(--fg);
    height: 100%;
    text-align: left;
}
.version-picker .dropdown .select {
    cursor: pointer;
    display: block;
    padding: 5px 2px 5px 15px;
}
.version-picker .dropdown .select > i {
    font-size: 10px;
    color: var(--fg);
    cursor: pointer;
    float: right;
    line-height: 20px !important;
}
.version-picker .dropdown:hover {
    border: 1px solid var(--theme-popup-border);
}
.version-picker .dropdown:active {
    background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active:hover,
.version-picker .dropdown.active {
    border: 1px solid var(--theme-popup-border);
    border-radius: 2px 2px 0 0;
    background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active .select > i {
    transform: rotate(-180deg);
}
.version-picker .dropdown .dropdown-menu {
    position: absolute;
    background-color: var(--theme-popup-bg);
    width: 100%;
    left: -1px;
    right: 1px;
    margin-top: 1px;
    border: 1px solid var(--theme-popup-border);
    border-radius: 0 0 4px 4px;
    overflow: hidden;
    display: none;
    max-height: 300px;
    overflow-y: auto;
    z-index: 9;
}
.version-picker .dropdown .dropdown-menu li {
    font-size: 12px;
    padding: 6px 20px;
    cursor: pointer;
}
.version-picker .dropdown .dropdown-menu {
    padding: 0;
    list-style: none;
}
.version-picker .dropdown .dropdown-menu li:hover {
    background-color: var(--theme-hover);
}
.version-picker .dropdown .dropdown-menu li.active::before {
    display: inline-block;
    content: "✓";
    margin-inline-start: -14px;
    width: 14px;
}
127
docs/website_files/version-picker.js
Normal file
@@ -0,0 +1,127 @@
const dropdown = document.querySelector('.version-picker .dropdown');
const dropdownMenu = dropdown.querySelector('.dropdown-menu');

fetchVersions(dropdown, dropdownMenu).then(() => {
    initializeVersionDropdown(dropdown, dropdownMenu);
});

/**
 * Initialize the dropdown functionality for version selection.
 *
 * @param {Element} dropdown - The dropdown element.
 * @param {Element} dropdownMenu - The dropdown menu element.
 */
function initializeVersionDropdown(dropdown, dropdownMenu) {
    // Toggle the dropdown menu on click
    dropdown.addEventListener('click', function () {
        this.setAttribute('tabindex', 1);
        this.classList.toggle('active');
        dropdownMenu.style.display = (dropdownMenu.style.display === 'block') ? 'none' : 'block';
    });

    // Remove the 'active' class and hide the dropdown menu on focusout
    dropdown.addEventListener('focusout', function () {
        this.classList.remove('active');
        dropdownMenu.style.display = 'none';
    });

    // Handle item selection within the dropdown menu
    const dropdownMenuItems = dropdownMenu.querySelectorAll('li');
    dropdownMenuItems.forEach(function (item) {
        item.addEventListener('click', function () {
            dropdownMenuItems.forEach(function (item) {
                item.classList.remove('active');
            });
            this.classList.add('active');
            dropdown.querySelector('span').textContent = this.textContent;
            dropdown.querySelector('input').value = this.getAttribute('id');

            window.location.href = changeVersion(window.location.href, this.textContent);
        });
    });
}

/**
 * This function fetches the available versions from a GitHub repository
 * and inserts them into the version picker.
 *
 * @param {Element} dropdown - The dropdown element.
 * @param {Element} dropdownMenu - The dropdown menu element.
 * @returns {Promise<Array<string>>} A promise that resolves with an array of available versions.
 */
function fetchVersions(dropdown, dropdownMenu) {
    return new Promise((resolve, reject) => {
        window.addEventListener("load", () => {
            fetch("https://api.github.com/repos/matrix-org/synapse/git/trees/gh-pages", {
                cache: "force-cache",
            }).then(res =>
                res.json()
            ).then(resObject => {
                const excluded = ['dev-docs', 'v1.91.0', 'v1.80.0', 'v1.69.0'];
                const tree = resObject.tree.filter(item => item.type === "tree" && !excluded.includes(item.path));
                const versions = tree.map(item => item.path).sort(sortVersions);

                // Create a list of <li> items for versions
                versions.forEach((version) => {
                    const li = document.createElement("li");
                    li.textContent = version;
                    li.id = version;

                    if (window.SYNAPSE_VERSION === version) {
                        li.classList.add('active');
                        dropdown.querySelector('span').textContent = version;
                        dropdown.querySelector('input').value = version;
                    }

                    dropdownMenu.appendChild(li);
                });

                resolve(versions);
            }).catch(ex => {
                console.error("Failed to fetch version data", ex);
                reject(ex);
            })
        });
    });
}

/**
 * Custom sorting function to sort an array of version strings.
 *
 * @param {string} a - The first version string to compare.
 * @param {string} b - The second version string to compare.
 * @returns {number} - A negative number if a should come before b, a positive number if b should come before a, or 0 if they are equal.
 */
function sortVersions(a, b) {
    // Put 'develop' and 'latest' at the top
    if (a === 'develop' || a === 'latest') return -1;
    if (b === 'develop' || b === 'latest') return 1;

    const versionA = (a.match(/v\d+(\.\d+)+/) || [])[0];
    const versionB = (b.match(/v\d+(\.\d+)+/) || [])[0];

    return versionB.localeCompare(versionA);
|
||||
}
|
||||
|
||||
/**
|
||||
* Change the version in a URL path.
|
||||
*
|
||||
* @param {string} url - The original URL to be modified.
|
||||
* @param {string} newVersion - The new version to replace the existing version in the URL.
|
||||
* @returns {string} The updated URL with the new version.
|
||||
*/
|
||||
function changeVersion(url, newVersion) {
|
||||
const parsedURL = new URL(url);
|
||||
const pathSegments = parsedURL.pathname.split('/');
|
||||
|
||||
// Modify the version
|
||||
pathSegments[2] = newVersion;
|
||||
|
||||
// Reconstruct the URL
|
||||
parsedURL.pathname = pathSegments.join('/');
|
||||
|
||||
return parsedURL.href;
|
||||
}
|
||||
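For illustration, the ordering rule that `sortVersions` implements — `develop` and `latest` pinned to the top, release tags newest-first — can also be written as a sort key. This is a hypothetical Python sketch, not part of the change; it compares numeric components, which also sidesteps the way plain string comparison (as `localeCompare` does above) would rank `v1.9` above `v1.10`:

```python
import re

def version_key(name: str) -> tuple:
    """Sort key: 'develop'/'latest' first, then release tags newest-first."""
    if name in ("develop", "latest"):
        return (0, ())
    match = re.search(r"v(\d+(?:\.\d+)+)", name)
    if not match:
        return (2, ())  # unrecognised entries sink to the bottom
    # Negate each numeric component so an ascending sort yields newest-first.
    return (1, tuple(-int(part) for part in match.group(1).split(".")))

names = ["v1.9.0", "v1.47.0", "latest", "v1.10.0", "develop"]
print(sorted(names, key=version_key))
# ['latest', 'develop', 'v1.47.0', 'v1.10.0', 'v1.9.0']
```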
1
docs/website_files/version.js
Normal file
@@ -0,0 +1 @@
window.SYNAPSE_VERSION = 'v1.47';
@@ -182,10 +182,10 @@ This worker can handle API requests matching the following regular
expressions:

    # Sync requests
    ^/_matrix/client/(v2_alpha|r0|v3)/sync$
    ^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$
    ^/_matrix/client/(api/v1|r0|v3)/initialSync$
    ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
    ^/_matrix/client/(v2_alpha|r0)/sync$
    ^/_matrix/client/(api/v1|v2_alpha|r0)/events$
    ^/_matrix/client/(api/v1|r0)/initialSync$
    ^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$

    # Federation requests
    ^/_matrix/federation/v1/event/
@@ -210,46 +210,46 @@ expressions:
    ^/_matrix/federation/v1/get_groups_publicised$
    ^/_matrix/key/v2/query
    ^/_matrix/federation/unstable/org.matrix.msc2946/spaces/
    ^/_matrix/federation/(v1|unstable/org.matrix.msc2946)/hierarchy/
    ^/_matrix/federation/unstable/org.matrix.msc2946/hierarchy/

    # Inbound federation transaction request
    ^/_matrix/federation/v1/send/

    # Client API requests
    ^/_matrix/client/(api/v1|r0|v3|unstable)/createRoom$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicRooms$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/joined_members$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
    ^/_matrix/client/(api/v1|r0|unstable)/createRoom$
    ^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
    ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
    ^/_matrix/client/(v1|unstable/org.matrix.msc2946)/rooms/.*/hierarchy$
    ^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/hierarchy$
    ^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/devices$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/query$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/changes$
    ^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
    ^/_matrix/client/(api/v1|r0|unstable)/devices$
    ^/_matrix/client/(api/v1|r0|unstable)/keys/query$
    ^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
    ^/_matrix/client/versions$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_groups$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups/
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
    ^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/search$
    ^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
    ^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
    ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
    ^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/
    ^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$
    ^/_matrix/client/(api/v1|r0|unstable)/search$

    # Registration/login requests
    ^/_matrix/client/(api/v1|r0|v3|unstable)/login$
    ^/_matrix/client/(r0|v3|unstable)/register$
    ^/_matrix/client/(api/v1|r0|unstable)/login$
    ^/_matrix/client/(r0|unstable)/register$
    ^/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity$

    # Event sending requests
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state/
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
    ^/_matrix/client/(api/v1|r0|v3|unstable)/join/
    ^/_matrix/client/(api/v1|r0|v3|unstable)/profile/
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
    ^/_matrix/client/(api/v1|r0|unstable)/join/
    ^/_matrix/client/(api/v1|r0|unstable)/profile/

Additionally, the following REST endpoints can be handled for GET requests:
@@ -261,14 +261,14 @@ room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/messages$
    ^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$

Additionally, the following endpoints should be included if Synapse is configured
to use SSO (you only need to include the ones for whichever SSO provider you're
using):

    # for all SSO providers
    ^/_matrix/client/(api/v1|r0|v3|unstable)/login/sso/redirect
    ^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect
    ^/_synapse/client/pick_idp$
    ^/_synapse/client/pick_username
    ^/_synapse/client/new_user_consent$
@@ -281,7 +281,7 @@ using):
    ^/_synapse/client/saml2/authn_response$

    # CAS requests.
    ^/_matrix/client/(api/v1|r0|v3|unstable)/login/cas/ticket$
    ^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$

Ensure that all SSO logins go to a single process.
For multiple workers not handling the SSO endpoints properly, see
@@ -465,7 +465,7 @@ Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for
Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$
    ^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$

When using this worker you must also set `update_user_directory: False` in the
shared configuration file to stop the main synapse running background
@@ -477,12 +477,12 @@ Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload
    ^/_matrix/client/(api/v1|r0|unstable)/keys/upload

If `use_presence` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/[^/]+/status
    ^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status

This "stub" presence handler will pass through `GET` requests but make the
`PUT` effectively a no-op.
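To make the intent of these pattern lists concrete, here is a hypothetical sketch (not from the change) of the dispatch logic a router or reverse proxy performs with them — match the request path against the worker's patterns and forward on the first hit:

```python
import re

# A hypothetical subset of the worker patterns listed above.
WORKER_PATTERNS = [
    re.compile(r"^/_matrix/client/(v2_alpha|r0)/sync$"),
    re.compile(r"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$"),
    re.compile(r"^/_matrix/federation/v1/event/"),
]

def routes_to_worker(path: str) -> bool:
    """Return True if a request path should be forwarded to the worker."""
    return any(pattern.match(path) for pattern in WORKER_PATTERNS)

assert routes_to_worker("/_matrix/client/r0/sync")
assert not routes_to_worker("/_matrix/client/r0/login")
```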
336
mypy.ini
@@ -10,165 +10,95 @@ warn_unreachable = True
local_partial_types = True
no_implicit_optional = True

# To find all folders that pass mypy you run:
#
#   find synapse/* -type d -not -name __pycache__ -exec bash -c "mypy '{}' > /dev/null" \; -print

files =
  scripts-dev/sign_json,
  setup.py,
  synapse/,
  tests/

# Note: Better exclusion syntax coming in mypy > 0.910
# https://github.com/python/mypy/pull/11329
#
# For now, set the (?x) flag to enable "verbose" regexes
# https://docs.python.org/3/library/re.html#re.X
exclude = (?x)
  ^(
   |synapse/storage/databases/__init__.py
   |synapse/storage/databases/main/__init__.py
   |synapse/storage/databases/main/account_data.py
   |synapse/storage/databases/main/cache.py
   |synapse/storage/databases/main/devices.py
   |synapse/storage/databases/main/e2e_room_keys.py
   |synapse/storage/databases/main/end_to_end_keys.py
   |synapse/storage/databases/main/event_federation.py
   |synapse/storage/databases/main/event_push_actions.py
   |synapse/storage/databases/main/events_bg_updates.py
   |synapse/storage/databases/main/group_server.py
   |synapse/storage/databases/main/metrics.py
   |synapse/storage/databases/main/monthly_active_users.py
   |synapse/storage/databases/main/presence.py
   |synapse/storage/databases/main/purge_events.py
   |synapse/storage/databases/main/push_rule.py
   |synapse/storage/databases/main/receipts.py
   |synapse/storage/databases/main/room.py
   |synapse/storage/databases/main/roommember.py
   |synapse/storage/databases/main/search.py
   |synapse/storage/databases/main/state.py
   |synapse/storage/databases/main/stats.py
   |synapse/storage/databases/main/transactions.py
   |synapse/storage/databases/main/user_directory.py
   |synapse/storage/schema/

   |tests/api/test_auth.py
   |tests/api/test_ratelimiting.py
   |tests/app/test_openid_listener.py
   |tests/appservice/test_scheduler.py
   |tests/config/test_cache.py
   |tests/config/test_tls.py
   |tests/crypto/test_keyring.py
   |tests/events/test_presence_router.py
   |tests/events/test_utils.py
   |tests/federation/test_federation_catch_up.py
   |tests/federation/test_federation_sender.py
   |tests/federation/test_federation_server.py
   |tests/federation/transport/test_knocking.py
   |tests/federation/transport/test_server.py
   |tests/handlers/test_cas.py
   |tests/handlers/test_directory.py
   |tests/handlers/test_e2e_keys.py
   |tests/handlers/test_federation.py
   |tests/handlers/test_oidc.py
   |tests/handlers/test_presence.py
   |tests/handlers/test_profile.py
   |tests/handlers/test_saml.py
   |tests/handlers/test_typing.py
   |tests/http/federation/test_matrix_federation_agent.py
   |tests/http/federation/test_srv_resolver.py
   |tests/http/test_fedclient.py
   |tests/http/test_proxyagent.py
   |tests/http/test_servlet.py
   |tests/http/test_site.py
   |tests/logging/__init__.py
   |tests/logging/test_terse_json.py
   |tests/module_api/test_api.py
   |tests/push/test_email.py
   |tests/push/test_http.py
   |tests/push/test_presentable_names.py
   |tests/push/test_push_rule_evaluator.py
   |tests/rest/admin/test_admin.py
   |tests/rest/admin/test_user.py
   |tests/rest/admin/test_username_available.py
   |tests/rest/client/test_account.py
   |tests/rest/client/test_events.py
   |tests/rest/client/test_filter.py
   |tests/rest/client/test_groups.py
   |tests/rest/client/test_register.py
   |tests/rest/client/test_report_event.py
   |tests/rest/client/test_rooms.py
   |tests/rest/client/test_third_party_rules.py
   |tests/rest/client/test_transactions.py
   |tests/rest/client/test_typing.py
   |tests/rest/client/utils.py
   |tests/rest/key/v2/test_remote_key_resource.py
   |tests/rest/media/v1/test_base.py
   |tests/rest/media/v1/test_media_storage.py
   |tests/rest/media/v1/test_url_preview.py
   |tests/scripts/test_new_matrix_user.py
   |tests/server.py
   |tests/server_notices/test_resource_limits_server_notices.py
   |tests/state/test_v2.py
   |tests/storage/test_account_data.py
   |tests/storage/test_background_update.py
   |tests/storage/test_base.py
   |tests/storage/test_client_ips.py
   |tests/storage/test_database.py
   |tests/storage/test_event_federation.py
   |tests/storage/test_id_generators.py
   |tests/storage/test_roommember.py
   |tests/test_metrics.py
   |tests/test_phone_home.py
   |tests/test_server.py
   |tests/test_state.py
   |tests/test_terms_auth.py
   |tests/unittest.py
   |tests/util/caches/test_cached_call.py
   |tests/util/caches/test_deferred_cache.py
   |tests/util/caches/test_descriptors.py
   |tests/util/caches/test_response_cache.py
   |tests/util/caches/test_ttlcache.py
   |tests/util/test_async_helpers.py
   |tests/util/test_batching_queue.py
   |tests/util/test_dict_cache.py
   |tests/util/test_expiring_cache.py
   |tests/util/test_file_consumer.py
   |tests/util/test_linearizer.py
   |tests/util/test_logcontext.py
   |tests/util/test_lrucache.py
   |tests/util/test_rwlock.py
   |tests/util/test_wheel_timer.py
   |tests/utils.py
   )$
  synapse/__init__.py,
  synapse/api,
  synapse/appservice,
  synapse/config,
  synapse/crypto,
  synapse/event_auth.py,
  synapse/events,
  synapse/federation,
  synapse/groups,
  synapse/handlers,
  synapse/http,
  synapse/logging,
  synapse/metrics,
  synapse/module_api,
  synapse/notifier.py,
  synapse/push,
  synapse/replication,
  synapse/rest,
  synapse/server.py,
  synapse/server_notices,
  synapse/spam_checker_api,
  synapse/state,
  synapse/storage/__init__.py,
  synapse/storage/_base.py,
  synapse/storage/background_updates.py,
  synapse/storage/databases/main/appservice.py,
  synapse/storage/databases/main/client_ips.py,
  synapse/storage/databases/main/events.py,
  synapse/storage/databases/main/keys.py,
  synapse/storage/databases/main/pusher.py,
  synapse/storage/databases/main/registration.py,
  synapse/storage/databases/main/relations.py,
  synapse/storage/databases/main/session.py,
  synapse/storage/databases/main/stream.py,
  synapse/storage/databases/main/ui_auth.py,
  synapse/storage/databases/state,
  synapse/storage/database.py,
  synapse/storage/engines,
  synapse/storage/keys.py,
  synapse/storage/persist_events.py,
  synapse/storage/prepare_database.py,
  synapse/storage/purge_events.py,
  synapse/storage/push_rule.py,
  synapse/storage/relations.py,
  synapse/storage/roommember.py,
  synapse/storage/state.py,
  synapse/storage/types.py,
  synapse/storage/util,
  synapse/streams,
  synapse/types.py,
  synapse/util,
  synapse/visibility.py,
  tests/replication,
  tests/test_event_auth.py,
  tests/test_utils,
  tests/handlers/test_password_providers.py,
  tests/handlers/test_room.py,
  tests/handlers/test_room_summary.py,
  tests/handlers/test_send_email.py,
  tests/handlers/test_sync.py,
  tests/handlers/test_user_directory.py,
  tests/rest/client/test_login.py,
  tests/rest/client/test_auth.py,
  tests/rest/client/test_relations.py,
  tests/rest/media/v1/test_filepath.py,
  tests/rest/media/v1/test_oembed.py,
  tests/storage/test_state.py,
  tests/storage/test_user_directory.py,
  tests/util/test_itertools.py,
  tests/util/test_stream_change_cache.py
[mypy-synapse.api.*]
disallow_untyped_defs = True

[mypy-synapse.app.*]
disallow_untyped_defs = True

[mypy-synapse.config._base]
disallow_untyped_defs = True

[mypy-synapse.crypto.*]
disallow_untyped_defs = True

[mypy-synapse.events.*]
disallow_untyped_defs = True

[mypy-synapse.federation.*]
disallow_untyped_defs = True

[mypy-synapse.federation.transport.client]
disallow_untyped_defs = False

[mypy-synapse.handlers.*]
disallow_untyped_defs = True

[mypy-synapse.metrics.*]
disallow_untyped_defs = True

[mypy-synapse.module_api.*]
disallow_untyped_defs = True

[mypy-synapse.push.*]
disallow_untyped_defs = True

@@ -184,52 +114,105 @@ disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.client_ips]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.directory]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.events_worker]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.room_batch]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.profile]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.state_deltas]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.user_erasure_store]
disallow_untyped_defs = True

[mypy-synapse.storage.util.*]
disallow_untyped_defs = True

[mypy-synapse.streams.*]
disallow_untyped_defs = True

[mypy-synapse.util.*]
[mypy-synapse.util.batching_queue]
disallow_untyped_defs = True

[mypy-synapse.util.caches.treecache]
disallow_untyped_defs = False
[mypy-synapse.util.caches.cached_call]
disallow_untyped_defs = True

[mypy-synapse.util.caches.dictionary_cache]
disallow_untyped_defs = True

[mypy-synapse.util.caches.lrucache]
disallow_untyped_defs = True

[mypy-synapse.util.caches.response_cache]
disallow_untyped_defs = True

[mypy-synapse.util.caches.stream_change_cache]
disallow_untyped_defs = True

[mypy-synapse.util.caches.ttl_cache]
disallow_untyped_defs = True

[mypy-synapse.util.daemonize]
disallow_untyped_defs = True

[mypy-synapse.util.file_consumer]
disallow_untyped_defs = True

[mypy-synapse.util.frozenutils]
disallow_untyped_defs = True

[mypy-synapse.util.hash]
disallow_untyped_defs = True

[mypy-synapse.util.httpresourcetree]
disallow_untyped_defs = True

[mypy-synapse.util.iterutils]
disallow_untyped_defs = True

[mypy-synapse.util.linked_list]
disallow_untyped_defs = True

[mypy-synapse.util.logcontext]
disallow_untyped_defs = True

[mypy-synapse.util.logformatter]
disallow_untyped_defs = True

[mypy-synapse.util.macaroons]
disallow_untyped_defs = True

[mypy-synapse.util.manhole]
disallow_untyped_defs = True

[mypy-synapse.util.module_loader]
disallow_untyped_defs = True

[mypy-synapse.util.msisdn]
disallow_untyped_defs = True

[mypy-synapse.util.patch_inline_callbacks]
disallow_untyped_defs = True

[mypy-synapse.util.ratelimitutils]
disallow_untyped_defs = True

[mypy-synapse.util.retryutils]
disallow_untyped_defs = True

[mypy-synapse.util.rlimit]
disallow_untyped_defs = True

[mypy-synapse.util.stringutils]
disallow_untyped_defs = True

[mypy-synapse.util.templates]
disallow_untyped_defs = True

[mypy-synapse.util.threepids]
disallow_untyped_defs = True

[mypy-synapse.util.wheel_timer]
disallow_untyped_defs = True

[mypy-synapse.util.versionstring]
disallow_untyped_defs = True

[mypy-tests.handlers.test_user_directory]
disallow_untyped_defs = True

[mypy-tests.storage.test_profile]
disallow_untyped_defs = True

[mypy-tests.storage.test_user_directory]
disallow_untyped_defs = True

[mypy-tests.rest.client.test_directory]
disallow_untyped_defs = True

[mypy-tests.federation.transport.test_client]
disallow_untyped_defs = True


;; Dependencies without annotations
;; Before ignoring a module, check to see if type stubs are available.
;; The `typeshed` project maintains stubs here:
@@ -289,9 +272,6 @@ ignore_missing_imports = True
[mypy-opentracing]
ignore_missing_imports = True

[mypy-parameterized.*]
ignore_missing_imports = True

[mypy-phonenumbers.*]
ignore_missing_imports = True
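For context, the `disallow_untyped_defs = True` entries above make mypy reject any function definition in the covered module that lacks annotations; runtime behaviour is unaffected. An illustrative sketch (not part of the diff):

```python
# Under a [mypy-...] section with disallow_untyped_defs = True, mypy reports
# the first function ("Function is missing a type annotation") and accepts
# the second.

def scale(x, factor=2):  # error: untyped definition
    return x * factor

def scale_typed(x: int, factor: int = 2) -> int:  # ok: fully annotated
    return x * factor
```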
@@ -24,7 +24,7 @@
set -e

# Change to the repository root
cd "$(dirname $0)/.."
cd "$(dirname "$0")/.."

# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
@@ -61,8 +61,8 @@ cd "$COMPLEMENT_DIR"
EXTRA_COMPLEMENT_ARGS=""
if [[ -n "$1" ]]; then
    # A test name regex has been set, supply it to Complement
    EXTRA_COMPLEMENT_ARGS+="-run $1 "
    EXTRA_COMPLEMENT_ARGS=(-run "$1")
fi

# Run the tests!
go test -v -tags synapse_blacklist,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403,msc2716 -count=1 "${EXTRA_COMPLEMENT_ARGS[@]}" ./tests/...
@@ -15,25 +15,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.


"""
Script for signing and sending federation requests.

Some tips on doing the join dance with this:

    room_id=...
    user_id=...

    # make_join
    federation_client.py "/_matrix/federation/v1/make_join/$room_id/$user_id?ver=5" > make_join.json

    # sign
    jq -M .event make_join.json | sign_json --sign-event-room-version=$(jq -r .room_version make_join.json) -o signed-join.json

    # send_join
    federation_client.py -X PUT "/_matrix/federation/v2/send_join/$room_id/x" --body $(<signed-join.json) > send_join.json
"""

import argparse
import base64
import json

@@ -22,8 +22,6 @@ import yaml
from signedjson.key import read_signing_keys
from signedjson.sign import sign_json

from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.util import json_encoder


@@ -70,16 +68,6 @@ Example usage:
        ),
    )

    parser.add_argument(
        "--sign-event-room-version",
        type=str,
        help=(
            "Sign the JSON as an event for the given room version, rather than raw JSON. "
            "This means that we will add a 'hashes' object, and redact the event before "
            "signing."
        ),
    )

    input_args = parser.add_mutually_exclusive_group()

    input_args.add_argument("input_data", nargs="?", help="Raw JSON to be signed.")
@@ -128,17 +116,7 @@ Example usage:
        print("Input json was not an object", file=sys.stderr)
        sys.exit(1)

    if args.sign_event_room_version:
        room_version = KNOWN_ROOM_VERSIONS.get(args.sign_event_room_version)
        if not room_version:
            print(
                f"Unknown room version {args.sign_event_room_version}", file=sys.stderr
            )
            sys.exit(1)
        add_hashes_and_signatures(room_version, obj, args.server_name, keys[0])
    else:
        sign_json(obj, args.server_name, keys[0])

    sign_json(obj, args.server_name, keys[0])
    for c in json_encoder.iterencode(obj):
        args.output.write(c)
    args.output.write("\n")
16
setup.py
@@ -17,7 +17,6 @@
# limitations under the License.
import glob
import os
from typing import Any, Dict

from setuptools import Command, find_packages, setup

@@ -50,6 +49,8 @@ here = os.path.abspath(os.path.dirname(__file__))
# [1]: http://tox.readthedocs.io/en/2.5.0/example/basic.html#integration-with-setup-py-test-command
# [2]: https://pypi.python.org/pypi/setuptools_trial
class TestCommand(Command):
    user_options = []

    def initialize_options(self):
        pass

@@ -74,7 +75,7 @@ def read_file(path_segments):

def exec_file(path_segments):
    """Execute a single python file to get the variables defined in it"""
    result: Dict[str, Any] = {}
    result = {}
    code = read_file(path_segments)
    exec(code, result)
    return result
@@ -110,7 +111,6 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
    "types-Pillow>=8.3.4",
    "types-pyOpenSSL>=20.0.7",
    "types-PyYAML>=5.4.10",
    "types-requests>=2.26.0",
    "types-setuptools>=57.4.0",
]

@@ -119,9 +119,7 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
# Tests assume that all optional dependencies are installed.
#
# parameterized_class decorator was introduced in parameterized 0.7.0
#
# We use `mock` library as that backports `AsyncMock` to Python 3.6
CONDITIONAL_REQUIREMENTS["test"] = ["parameterized>=0.7.0", "mock>=4.0.0"]
CONDITIONAL_REQUIREMENTS["test"] = ["parameterized>=0.7.0"]

CONDITIONAL_REQUIREMENTS["dev"] = (
    CONDITIONAL_REQUIREMENTS["lint"]
@@ -152,12 +150,6 @@ setup(
    long_description=long_description,
    long_description_content_type="text/x-rst",
    python_requires="~=3.6",
    entry_points={
        "console_scripts": [
            "synapse_homeserver = synapse.app.homeserver:main",
            "synapse_worker = synapse.app.generic_worker:main",
        ]
    },
    classifiers=[
        "Development Status :: 5 - Production/Stable",
        "Topic :: Communications :: Chat",
@@ -47,7 +47,7 @@ try:
except ImportError:
    pass

__version__ = "1.49.0rc1"
__version__ = "1.47.1"

if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
    # We import here so that we don't have to install a bunch of deps when
@@ -1,6 +1,5 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -20,23 +19,22 @@ import hashlib
import hmac
import logging
import sys
from typing import Callable, Optional

import requests as _requests
import yaml


def request_registration(
    user: str,
    password: str,
    server_location: str,
    shared_secret: str,
    admin: bool = False,
    user_type: Optional[str] = None,
    user,
    password,
    server_location,
    shared_secret,
    admin=False,
    user_type=None,
    requests=_requests,
    _print: Callable[[str], None] = print,
    exit: Callable[[int], None] = sys.exit,
) -> None:
    _print=print,
    exit=sys.exit,
):

    url = "%s/_synapse/admin/v1/register" % (server_location.rstrip("/"),)

@@ -67,13 +65,13 @@ def request_registration(
        mac.update(b"\x00")
        mac.update(user_type.encode("utf8"))

    hex_mac = mac.hexdigest()
    mac = mac.hexdigest()

    data = {
        "nonce": nonce,
        "username": user,
        "password": password,
        "mac": hex_mac,
        "mac": mac,
        "admin": admin,
        "user_type": user_type,
    }
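The `mac` assembled above is the shared-secret registration MAC: an HMAC-SHA1, keyed with the registration shared secret, over the NUL-separated registration fields. A self-contained sketch of the full computation (the values here are placeholders, and the field order follows the shared-secret admin registration API):

```python
import hashlib
import hmac
from typing import Optional

def registration_mac(shared_secret: str, nonce: str, user: str,
                     password: str, admin: bool = False,
                     user_type: Optional[str] = None) -> str:
    """HMAC-SHA1 over the NUL-separated registration fields."""
    mac = hmac.new(shared_secret.encode("utf8"), digestmod=hashlib.sha1)
    mac.update(nonce.encode("utf8"))
    mac.update(b"\x00")
    mac.update(user.encode("utf8"))
    mac.update(b"\x00")
    mac.update(password.encode("utf8"))
    mac.update(b"\x00")
    mac.update(b"admin" if admin else b"notadmin")
    if user_type:
        mac.update(b"\x00")
        mac.update(user_type.encode("utf8"))
    return mac.hexdigest()

# Placeholder values, for illustration only:
print(registration_mac("shared-secret", "a-nonce", "alice", "hunter2"))
```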
@@ -93,17 +91,10 @@ def request_registration(
    _print("Success!")


def register_new_user(
    user: str,
    password: str,
    server_location: str,
    shared_secret: str,
    admin: Optional[bool],
    user_type: Optional[str],
) -> None:
def register_new_user(user, password, server_location, shared_secret, admin, user_type):
    if not user:
        try:
            default_user: Optional[str] = getpass.getuser()
            default_user = getpass.getuser()
        except Exception:
            default_user = None

@@ -132,8 +123,8 @@ def register_new_user(
        sys.exit(1)

    if admin is None:
        admin_inp = input("Make admin [no]: ")
        if admin_inp in ("y", "yes", "true"):
        admin = input("Make admin [no]: ")
        if admin in ("y", "yes", "true"):
            admin = True
        else:
            admin = False
@@ -143,7 +134,7 @@ def register_new_user(
    )


def main() -> None:
def main():

    logging.captureWarnings(True)
@@ -92,7 +92,7 @@ def get_recent_users(txn: LoggingTransaction, since_ms: int) -> List[UserInfo]:
    return user_infos


def main() -> None:
def main():
    parser = argparse.ArgumentParser()
    parser.add_argument(
        "-c",
@@ -142,8 +142,7 @@ def main() -> None:
    engine = create_engine(database_config.config)

    with make_conn(database_config, engine, "review_recent_signups") as db_conn:
        # This generates a type of Cursor, not LoggingTransaction.
        user_infos = get_recent_users(db_conn.cursor(), since_ms)  # type: ignore[arg-type]
        user_infos = get_recent_users(db_conn.cursor(), since_ms)

    for user_info in user_infos:
        if exclude_users_with_email and user_info.emails:
@@ -17,8 +17,6 @@

"""Contains constants from the specification."""

from typing_extensions import Final

# the max size of a (canonical-json-encoded) event
MAX_PDU_SIZE = 65536

@@ -41,125 +39,125 @@ class Membership:

    """Represents the membership states of a user in a room."""

    INVITE: Final = "invite"
    JOIN: Final = "join"
    KNOCK: Final = "knock"
    LEAVE: Final = "leave"
    BAN: Final = "ban"
    LIST: Final = (INVITE, JOIN, KNOCK, LEAVE, BAN)
    INVITE = "invite"
    JOIN = "join"
    KNOCK = "knock"
    LEAVE = "leave"
    BAN = "ban"
    LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)


class PresenceState:
    """Represents the presence state of a user."""

    OFFLINE: Final = "offline"
    UNAVAILABLE: Final = "unavailable"
    ONLINE: Final = "online"
    BUSY: Final = "org.matrix.msc3026.busy"
    OFFLINE = "offline"
    UNAVAILABLE = "unavailable"
    ONLINE = "online"
    BUSY = "org.matrix.msc3026.busy"


class JoinRules:
    PUBLIC: Final = "public"
    KNOCK: Final = "knock"
    INVITE: Final = "invite"
    PRIVATE: Final = "private"
    PUBLIC = "public"
    KNOCK = "knock"
    INVITE = "invite"
    PRIVATE = "private"
    # As defined for MSC3083.
    RESTRICTED: Final = "restricted"
    RESTRICTED = "restricted"


class RestrictedJoinRuleTypes:
    """Understood types for the allow rules in restricted join rules."""

    ROOM_MEMBERSHIP: Final = "m.room_membership"
    ROOM_MEMBERSHIP = "m.room_membership"


class LoginType:
    PASSWORD: Final = "m.login.password"
    EMAIL_IDENTITY: Final = "m.login.email.identity"
    MSISDN: Final = "m.login.msisdn"
    RECAPTCHA: Final = "m.login.recaptcha"
    TERMS: Final = "m.login.terms"
    SSO: Final = "m.login.sso"
    DUMMY: Final = "m.login.dummy"
    REGISTRATION_TOKEN: Final = "org.matrix.msc3231.login.registration_token"
    PASSWORD = "m.login.password"
    EMAIL_IDENTITY = "m.login.email.identity"
    MSISDN = "m.login.msisdn"
    RECAPTCHA = "m.login.recaptcha"
    TERMS = "m.login.terms"
    SSO = "m.login.sso"
    DUMMY = "m.login.dummy"
    REGISTRATION_TOKEN = "org.matrix.msc3231.login.registration_token"


# This is used in the `type` parameter for /register when called by
# an appservice to register a new user.
APP_SERVICE_REGISTRATION_TYPE: Final = "m.login.application_service"
APP_SERVICE_REGISTRATION_TYPE = "m.login.application_service"


class EventTypes:
    Member: Final = "m.room.member"
    Create: Final = "m.room.create"
    Tombstone: Final = "m.room.tombstone"
    JoinRules: Final = "m.room.join_rules"
    PowerLevels: Final = "m.room.power_levels"
    Aliases: Final = "m.room.aliases"
    Redaction: Final = "m.room.redaction"
    ThirdPartyInvite: Final = "m.room.third_party_invite"
    RelatedGroups: Final = "m.room.related_groups"
    Member = "m.room.member"
    Create = "m.room.create"
    Tombstone = "m.room.tombstone"
    JoinRules = "m.room.join_rules"
    PowerLevels = "m.room.power_levels"
    Aliases = "m.room.aliases"
    Redaction = "m.room.redaction"
    ThirdPartyInvite = "m.room.third_party_invite"
    RelatedGroups = "m.room.related_groups"

    RoomHistoryVisibility: Final = "m.room.history_visibility"
    CanonicalAlias: Final = "m.room.canonical_alias"
    Encrypted: Final = "m.room.encrypted"
    RoomAvatar: Final = "m.room.avatar"
    RoomEncryption: Final = "m.room.encryption"
    GuestAccess: Final = "m.room.guest_access"
    RoomHistoryVisibility = "m.room.history_visibility"
    CanonicalAlias = "m.room.canonical_alias"
    Encrypted = "m.room.encrypted"
    RoomAvatar = "m.room.avatar"
    RoomEncryption = "m.room.encryption"
    GuestAccess = "m.room.guest_access"

    # These are used for validation
    Message: Final = "m.room.message"
    Topic: Final = "m.room.topic"
    Name: Final = "m.room.name"
    Message = "m.room.message"
    Topic = "m.room.topic"
    Name = "m.room.name"

    ServerACL: Final = "m.room.server_acl"
    Pinned: Final = "m.room.pinned_events"
    ServerACL = "m.room.server_acl"
    Pinned = "m.room.pinned_events"

    Retention: Final = "m.room.retention"
    Retention = "m.room.retention"

    Dummy: Final = "org.matrix.dummy_event"
    Dummy = "org.matrix.dummy_event"

    SpaceChild: Final = "m.space.child"
    SpaceParent: Final = "m.space.parent"
    SpaceChild = "m.space.child"
    SpaceParent = "m.space.parent"

    MSC2716_INSERTION: Final = "org.matrix.msc2716.insertion"
    MSC2716_BATCH: Final = "org.matrix.msc2716.batch"
    MSC2716_MARKER: Final = "org.matrix.msc2716.marker"
    MSC2716_INSERTION = "org.matrix.msc2716.insertion"
    MSC2716_BATCH = "org.matrix.msc2716.batch"
    MSC2716_MARKER = "org.matrix.msc2716.marker"


class ToDeviceEventTypes:
    RoomKeyRequest: Final = "m.room_key_request"
    RoomKeyRequest = "m.room_key_request"


class DeviceKeyAlgorithms:
    """Spec'd algorithms for the generation of per-device keys"""

    ED25519: Final = "ed25519"
    CURVE25519: Final = "curve25519"
    SIGNED_CURVE25519: Final = "signed_curve25519"
    ED25519 = "ed25519"
    CURVE25519 = "curve25519"
    SIGNED_CURVE25519 = "signed_curve25519"


class EduTypes:
    Presence: Final = "m.presence"
    Presence = "m.presence"


class RejectedReason:
    AUTH_ERROR: Final = "auth_error"
    AUTH_ERROR = "auth_error"


class RoomCreationPreset:
    PRIVATE_CHAT: Final = "private_chat"
    PUBLIC_CHAT: Final = "public_chat"
    TRUSTED_PRIVATE_CHAT: Final = "trusted_private_chat"
    PRIVATE_CHAT = "private_chat"
    PUBLIC_CHAT = "public_chat"
    TRUSTED_PRIVATE_CHAT = "trusted_private_chat"


class ThirdPartyEntityKind:
    USER: Final = "user"
    LOCATION: Final = "location"
    USER = "user"
    LOCATION = "location"


ServerNoticeMsgType: Final = "m.server_notice"
ServerNoticeLimitReached: Final = "m.server_notice.usage_limit_reached"
ServerNoticeMsgType = "m.server_notice"
ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"


class UserTypes:
@@ -167,91 +165,91 @@ class UserTypes:
    'admin' and 'guest' users should also be UserTypes. Normal users are type None
    """

    SUPPORT: Final = "support"
    BOT: Final = "bot"
    ALL_USER_TYPES: Final = (SUPPORT, BOT)
    SUPPORT = "support"
    BOT = "bot"
    ALL_USER_TYPES = (SUPPORT, BOT)


class RelationTypes:
    """The types of relations known to this server."""

    ANNOTATION: Final = "m.annotation"
    REPLACE: Final = "m.replace"
    REFERENCE: Final = "m.reference"
    THREAD: Final = "io.element.thread"
    ANNOTATION = "m.annotation"
    REPLACE = "m.replace"
    REFERENCE = "m.reference"
    THREAD = "io.element.thread"


class LimitBlockingTypes:
    """Reasons that a server may be blocked"""

    MONTHLY_ACTIVE_USER: Final = "monthly_active_user"
    HS_DISABLED: Final = "hs_disabled"
    MONTHLY_ACTIVE_USER = "monthly_active_user"
    HS_DISABLED = "hs_disabled"


class EventContentFields:
    """Fields found in events' content, regardless of type."""

    # Labels for the event, cf https://github.com/matrix-org/matrix-doc/pull/2326
    LABELS: Final = "org.matrix.labels"
    LABELS = "org.matrix.labels"

    # Timestamp to delete the event after
    # cf https://github.com/matrix-org/matrix-doc/pull/2228
    SELF_DESTRUCT_AFTER: Final = "org.matrix.self_destruct_after"
    SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"

    # cf https://github.com/matrix-org/matrix-doc/pull/1772
    ROOM_TYPE: Final = "type"
    ROOM_TYPE = "type"

    # Whether a room can federate.
    FEDERATE: Final = "m.federate"
    FEDERATE = "m.federate"

    # The creator of the room, as used in `m.room.create` events.
    ROOM_CREATOR: Final = "creator"
    ROOM_CREATOR = "creator"

    # Used in m.room.guest_access events.
    GUEST_ACCESS: Final = "guest_access"
    GUEST_ACCESS = "guest_access"

    # Used on normal messages to indicate they were historically imported after the fact
    MSC2716_HISTORICAL: Final = "org.matrix.msc2716.historical"
    MSC2716_HISTORICAL = "org.matrix.msc2716.historical"
    # For "insertion" events to indicate what the next batch ID should be in
    # order to connect to it
    MSC2716_NEXT_BATCH_ID: Final = "org.matrix.msc2716.next_batch_id"
    MSC2716_NEXT_BATCH_ID = "org.matrix.msc2716.next_batch_id"
    # Used on "batch" events to indicate which insertion event it connects to
    MSC2716_BATCH_ID: Final = "org.matrix.msc2716.batch_id"
    MSC2716_BATCH_ID = "org.matrix.msc2716.batch_id"
    # For "marker" events
    MSC2716_MARKER_INSERTION: Final = "org.matrix.msc2716.marker.insertion"
    MSC2716_MARKER_INSERTION = "org.matrix.msc2716.marker.insertion"

    # The authorising user for joining a restricted room.
    AUTHORISING_USER: Final = "join_authorised_via_users_server"
    AUTHORISING_USER = "join_authorised_via_users_server"


class RoomTypes:
    """Understood values of the room_type field of m.room.create events."""

    SPACE: Final = "m.space"
    SPACE = "m.space"


class RoomEncryptionAlgorithms:
    MEGOLM_V1_AES_SHA2: Final = "m.megolm.v1.aes-sha2"
    DEFAULT: Final = MEGOLM_V1_AES_SHA2
    MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2"
    DEFAULT = MEGOLM_V1_AES_SHA2


class AccountDataTypes:
    DIRECT: Final = "m.direct"
    IGNORED_USER_LIST: Final = "m.ignored_user_list"
    DIRECT = "m.direct"
    IGNORED_USER_LIST = "m.ignored_user_list"


class HistoryVisibility:
    INVITED: Final = "invited"
    JOINED: Final = "joined"
    SHARED: Final = "shared"
    WORLD_READABLE: Final = "world_readable"
    INVITED = "invited"
    JOINED = "joined"
    SHARED = "shared"
    WORLD_READABLE = "world_readable"


class GuestAccess:
    CAN_JOIN: Final = "can_join"
    CAN_JOIN = "can_join"
    # anything that is not "can_join" is considered "forbidden", but for completeness:
    FORBIDDEN: Final = "forbidden"
    FORBIDDEN = "forbidden"


class ReadReceiptEventFields:
    MSC2285_HIDDEN: Final = "org.matrix.msc2285.hidden"
    MSC2285_HIDDEN = "org.matrix.msc2285.hidden"
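These hunks strip the `Final` annotations (the release branch predates them). For reference — an illustrative sketch, not from the diff — `Final` is purely a type-checking construct: mypy flags any reassignment of a `Final` name, while runtime behaviour is identical either way:

```python
from typing_extensions import Final

JOIN: Final = "join"  # annotated as Final
LEAVE = "leave"       # plain module-level constant

JOIN = "ban"   # mypy: error: Cannot assign to final name "JOIN"
LEAVE = "ban"  # no complaint from mypy; both lines run fine at runtime
```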
@@ -1,7 +1,7 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018-2019 New Vector Ltd
# Copyright 2019-2021 The Matrix.org Foundation C.I.C.
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -86,9 +86,6 @@ ROOM_EVENT_FILTER_SCHEMA = {
        # cf https://github.com/matrix-org/matrix-doc/pull/2326
        "org.matrix.labels": {"type": "array", "items": {"type": "string"}},
        "org.matrix.not_labels": {"type": "array", "items": {"type": "string"}},
        # MSC3440, filtering by event relations.
        "io.element.relation_senders": {"type": "array", "items": {"type": "string"}},
        "io.element.relation_types": {"type": "array", "items": {"type": "string"}},
    },
}

@@ -149,16 +146,14 @@ def matrix_user_id_validator(user_id_str: str) -> UserID:

class Filtering:
    def __init__(self, hs: "HomeServer"):
        self._hs = hs
        super().__init__()
        self.store = hs.get_datastore()

        self.DEFAULT_FILTER_COLLECTION = FilterCollection(hs, {})

    async def get_user_filter(
        self, user_localpart: str, filter_id: Union[int, str]
    ) -> "FilterCollection":
        result = await self.store.get_user_filter(user_localpart, filter_id)
        return FilterCollection(self._hs, result)
        return FilterCollection(result)

    def add_user_filter(
        self, user_localpart: str, user_filter: JsonDict
@@ -196,22 +191,21 @@ FilterEvent = TypeVar("FilterEvent", EventBase, UserPresenceState, JsonDict)


class FilterCollection:
    def __init__(self, hs: "HomeServer", filter_json: JsonDict):
    def __init__(self, filter_json: JsonDict):
        self._filter_json = filter_json

        room_filter_json = self._filter_json.get("room", {})

        self._room_filter = Filter(
            hs,
            {k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")},
            {k: v for k, v in room_filter_json.items() if k in ("rooms", "not_rooms")}
        )

        self._room_timeline_filter = Filter(hs, room_filter_json.get("timeline", {}))
        self._room_state_filter = Filter(hs, room_filter_json.get("state", {}))
        self._room_ephemeral_filter = Filter(hs, room_filter_json.get("ephemeral", {}))
        self._room_account_data = Filter(hs, room_filter_json.get("account_data", {}))
        self._presence_filter = Filter(hs, filter_json.get("presence", {}))
        self._account_data = Filter(hs, filter_json.get("account_data", {}))
        self._room_timeline_filter = Filter(room_filter_json.get("timeline", {}))
        self._room_state_filter = Filter(room_filter_json.get("state", {}))
        self._room_ephemeral_filter = Filter(room_filter_json.get("ephemeral", {}))
        self._room_account_data = Filter(room_filter_json.get("account_data", {}))
        self._presence_filter = Filter(filter_json.get("presence", {}))
        self._account_data = Filter(filter_json.get("account_data", {}))

        self.include_leave = filter_json.get("room", {}).get("include_leave", False)
        self.event_fields = filter_json.get("event_fields", [])
@@ -238,37 +232,25 @@ class FilterCollection:
    def include_redundant_members(self) -> bool:
        return self._room_state_filter.include_redundant_members

    async def filter_presence(
    def filter_presence(
        self, events: Iterable[UserPresenceState]
    ) -> List[UserPresenceState]:
        return await self._presence_filter.filter(events)
        return self._presence_filter.filter(events)

    async def filter_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
        return await self._account_data.filter(events)
    def filter_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
        return self._account_data.filter(events)

    async def filter_room_state(self, events: Iterable[EventBase]) -> List[EventBase]:
        return await self._room_state_filter.filter(
            await self._room_filter.filter(events)
        )
    def filter_room_state(self, events: Iterable[EventBase]) -> List[EventBase]:
        return self._room_state_filter.filter(self._room_filter.filter(events))

    async def filter_room_timeline(
        self, events: Iterable[EventBase]
    ) -> List[EventBase]:
        return await self._room_timeline_filter.filter(
            await self._room_filter.filter(events)
        )
    def filter_room_timeline(self, events: Iterable[EventBase]) -> List[EventBase]:
        return self._room_timeline_filter.filter(self._room_filter.filter(events))

    async def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> List[JsonDict]:
        return await self._room_ephemeral_filter.filter(
            await self._room_filter.filter(events)
        )
    def filter_room_ephemeral(self, events: Iterable[JsonDict]) -> List[JsonDict]:
        return self._room_ephemeral_filter.filter(self._room_filter.filter(events))

    async def filter_room_account_data(
        self, events: Iterable[JsonDict]
    ) -> List[JsonDict]:
        return await self._room_account_data.filter(
            await self._room_filter.filter(events)
        )
    def filter_room_account_data(self, events: Iterable[JsonDict]) -> List[JsonDict]:
        return self._room_account_data.filter(self._room_filter.filter(events))

    def blocks_all_presence(self) -> bool:
        return (
@@ -292,9 +274,7 @@ class FilterCollection:


class Filter:
    def __init__(self, hs: "HomeServer", filter_json: JsonDict):
        self._hs = hs
        self._store = hs.get_datastore()
    def __init__(self, filter_json: JsonDict):
        self.filter_json = filter_json

        self.limit = filter_json.get("limit", 10)
@@ -317,20 +297,6 @@ class Filter:
        self.labels = filter_json.get("org.matrix.labels", None)
        self.not_labels = filter_json.get("org.matrix.not_labels", [])

        # Ideally these would be rejected at the endpoint if they were provided
        # and not supported, but that would involve modifying the JSON schema
        # based on the homeserver configuration.
        if hs.config.experimental.msc3440_enabled:
            self.relation_senders = self.filter_json.get(
                "io.element.relation_senders", None
            )
            self.relation_types = self.filter_json.get(
                "io.element.relation_types", None
            )
        else:
            self.relation_senders = None
            self.relation_types = None

    def filters_all_types(self) -> bool:
        return "*" in self.not_types

@@ -340,7 +306,7 @@ class Filter:
    def filters_all_rooms(self) -> bool:
        return "*" in self.not_rooms

    def _check(self, event: FilterEvent) -> bool:
    def check(self, event: FilterEvent) -> bool:
        """Checks whether the filter matches the given event.

        Args:
@@ -454,30 +420,8 @@ class Filter:

        return room_ids

    async def _check_event_relations(
        self, events: Iterable[FilterEvent]
    ) -> List[FilterEvent]:
        # The event IDs to check, mypy doesn't understand the isinstance check.
        event_ids = [event.event_id for event in events if isinstance(event, EventBase)]  # type: ignore[attr-defined]
        event_ids_to_keep = set(
            await self._store.events_have_relations(
                event_ids, self.relation_senders, self.relation_types
            )
        )

        return [
            event
            for event in events
            if not isinstance(event, EventBase) or event.event_id in event_ids_to_keep
        ]

    async def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
        result = [event for event in events if self._check(event)]

        if self.relation_senders or self.relation_types:
            return await self._check_event_relations(result)

        return result
    def filter(self, events: Iterable[FilterEvent]) -> List[FilterEvent]:
        return list(filter(self.check, events))

    def with_room_ids(self, room_ids: Iterable[str]) -> "Filter":
        """Returns a new filter with the given room IDs appended.
@@ -489,7 +433,7 @@ class Filter:
            filter: A new filter including the given rooms and the old
                filter's rooms.
        """
        newFilter = Filter(self._hs, self.filter_json)
        newFilter = Filter(self.filter_json)
        newFilter.rooms += room_ids
        return newFilter

@@ -500,3 +444,6 @@ def _matches_wildcard(actual_value: Optional[str], filter_value: str) -> bool:
        return actual_value.startswith(type_prefix)
    else:
        return actual_value == filter_value


DEFAULT_FILTER_COLLECTION = FilterCollection({})
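The `_matches_wildcard` helper at the end of this hunk is how entries such as `{"types": ["m.room.*"]}` are evaluated against an event's type. A minimal standalone sketch of the same trailing-`*` prefix rule (illustrative; the full `Filter.check` logic also handles senders, rooms, labels, and so on):

```python
from typing import Optional

def matches_wildcard(actual_value: Optional[str], filter_value: str) -> bool:
    """Match a value against a filter entry, honouring a trailing '*'."""
    if filter_value.endswith("*") and isinstance(actual_value, str):
        return actual_value.startswith(filter_value[:-1])
    return actual_value == filter_value

assert matches_wildcard("m.room.message", "m.room.*")
assert not matches_wildcard("m.presence", "m.room.*")
```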
@@ -30,8 +30,7 @@ FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
STATIC_PREFIX = "/_matrix/static"
WEB_CLIENT_PREFIX = "/_matrix/client"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_R0_PREFIX = "/_matrix/media/r0"
MEDIA_V3_PREFIX = "/_matrix/media/v3"
MEDIA_PREFIX = "/_matrix/media/r0"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
@@ -13,7 +13,6 @@
# limitations under the License.
import logging
import sys
from typing import Container

from synapse import python_dependencies  # noqa: E402

@@ -28,9 +27,7 @@ except python_dependencies.DependencyException as e:
    sys.exit(1)


def check_bind_error(
    e: Exception, address: str, bind_addresses: Container[str]
) -> None:
def check_bind_error(e, address, bind_addresses):
    """
    This method checks whether an exception occurred while binding on 0.0.0.0.
    If :: is specified in the bind addresses a warning is shown.
@@ -41,9 +38,9 @@ def check_bind_error(
    When binding on 0.0.0.0 after :: this can safely be ignored.

    Args:
        e: Exception that was caught.
        address: Address on which binding was attempted.
        bind_addresses: Addresses on which the service listens.
        e (Exception): Exception that was caught.
        address (str): Address on which binding was attempted.
        bind_addresses (list): Addresses on which the service listens.
    """
    if address == "0.0.0.0" and "::" in bind_addresses:
        logger.warning(
@@ -22,28 +22,13 @@ import socket
import sys
import traceback
import warnings
from typing import (
    TYPE_CHECKING,
    Any,
    Awaitable,
    Callable,
    Collection,
    Dict,
    Iterable,
    List,
    NoReturn,
    Optional,
    Tuple,
    cast,
)
from typing import TYPE_CHECKING, Awaitable, Callable, Iterable

from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import NoReturn

import twisted
from twisted.internet import defer, error, reactor as _reactor
from twisted.internet.interfaces import IOpenSSLContextFactory, IReactorSSL, IReactorTCP
from twisted.internet.protocol import ServerFactory
from twisted.internet.tcp import Port
from twisted.internet import defer, error, reactor
from twisted.logger import LoggingFile, LogLevel
from twisted.protocols.tls import TLSMemoryBIOFactory
from twisted.python.threadpool import ThreadPool
@@ -63,7 +48,6 @@ from synapse.logging.context import PreserveLoggingContext
from synapse.metrics import register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
from synapse.types import ISynapseReactor
from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
from synapse.util.daemonize import daemonize_process
from synapse.util.gai_resolver import GAIResolver
@@ -73,44 +57,33 @@ from synapse.util.versionstring import get_version_string
if TYPE_CHECKING:
    from synapse.server import HomeServer
# Twisted injects the global reactor to make it easier to import, this confuses
# mypy which thinks it is a module. Tell it that it is a more proper type.
reactor = cast(ISynapseReactor, _reactor)

logger = logging.getLogger(__name__)

# list of tuples of function, args list, kwargs dict
_sighup_callbacks: List[
    Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]]
] = []
_sighup_callbacks = []


def register_sighup(func: Callable[..., None], *args: Any, **kwargs: Any) -> None:
def register_sighup(func, *args, **kwargs):
    """
    Register a function to be called when a SIGHUP occurs.

    Args:
        func: Function to be called when sent a SIGHUP signal.
        func (function): Function to be called when sent a SIGHUP signal.
        *args, **kwargs: args and kwargs to be passed to the target function.
    """
    _sighup_callbacks.append((func, args, kwargs))
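To see how this registry is consumed, the SIGHUP handler further down iterates the stored (func, args, kwargs) tuples. A simplified sketch, omitting the reactor-tick deferral and systemd notifications the real handler performs:

import signal

_sighup_callbacks = []  # populated via register_sighup(func, *args, **kwargs)

def _handle_sighup(signum, frame):
    # Invoke every registered callback with the args it was registered with.
    for func, args, kwargs in _sighup_callbacks:
        func(*args, **kwargs)

if hasattr(signal, "SIGHUP"):
    signal.signal(signal.SIGHUP, _handle_sighup)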
def start_worker_reactor(
    appname: str,
    config: HomeServerConfig,
    run_command: Callable[[], None] = reactor.run,
) -> None:
def start_worker_reactor(appname, config, run_command=reactor.run):
    """Run the reactor in the main process

    Daemonizes if necessary, and then configures some resources, before starting
    the reactor. Pulls configuration from the 'worker' settings in 'config'.

    Args:
        appname: application name which will be sent to syslog
        config: config object
        run_command: callable that actually runs the reactor
        appname (str): application name which will be sent to syslog
        config (synapse.config.Config): config object
        run_command (Callable[]): callable that actually runs the reactor
    """

    logger = logging.getLogger(config.worker.worker_app)
@@ -128,32 +101,32 @@ def start_worker_reactor(

def start_reactor(
    appname: str,
    soft_file_limit: int,
    gc_thresholds: Optional[Tuple[int, int, int]],
    pid_file: str,
    daemonize: bool,
    print_pidfile: bool,
    logger: logging.Logger,
    run_command: Callable[[], None] = reactor.run,
) -> None:
    appname,
    soft_file_limit,
    gc_thresholds,
    pid_file,
    daemonize,
    print_pidfile,
    logger,
    run_command=reactor.run,
):
    """Run the reactor in the main process

    Daemonizes if necessary, and then configures some resources, before starting
    the reactor

    Args:
        appname: application name which will be sent to syslog
        soft_file_limit:
        appname (str): application name which will be sent to syslog
        soft_file_limit (int):
        gc_thresholds:
        pid_file: name of pid file to write to if daemonize is True
        daemonize: true to run the reactor in a background process
        print_pidfile: whether to print the pid file, if daemonize is True
        logger: logger instance to pass to Daemonize
        run_command: callable that actually runs the reactor
        pid_file (str): name of pid file to write to if daemonize is True
        daemonize (bool): true to run the reactor in a background process
        print_pidfile (bool): whether to print the pid file, if daemonize is True
        logger (logging.Logger): logger instance to pass to Daemonize
        run_command (Callable[]): callable that actually runs the reactor
    """

    def run() -> None:
    def run():
        logger.info("Running")
        setup_jemalloc_stats()
        change_resource_limit(soft_file_limit)
@@ -212,7 +185,7 @@ def redirect_stdio_to_logs() -> None:
    print("Redirected stdout/stderr to logs")


def register_start(cb: Callable[..., Awaitable], *args: Any, **kwargs: Any) -> None:
def register_start(cb: Callable[..., Awaitable], *args, **kwargs) -> None:
    """Register a callback with the reactor, to be called once it is running

    This can be used to initialise parts of the system which require an asynchronous
@@ -222,7 +195,7 @@ def register_start(cb: Callable[..., Awaitable], *args: Any, **kwargs: Any) -> N
    will exit.
    """

    async def wrapper() -> None:
    async def wrapper():
        try:
            await cb(*args, **kwargs)
        except Exception:
@@ -251,7 +224,7 @@ def register_start(cb: Callable[..., Awaitable], *args: Any, **kwargs: Any) -> N
    reactor.callWhenRunning(lambda: defer.ensureDeferred(wrapper()))
def listen_metrics(bind_addresses: Iterable[str], port: int) -> None:
def listen_metrics(bind_addresses, port):
    """
    Start Prometheus metrics server.
    """
@@ -263,11 +236,11 @@ def listen_metrics(bind_addresses: Iterable[str], port: int) -> None:


def listen_manhole(
    bind_addresses: Collection[str],
    bind_addresses: Iterable[str],
    port: int,
    manhole_settings: ManholeConfig,
    manhole_globals: dict,
) -> None:
):
    # twisted.conch.manhole 21.1.0 uses "int_from_bytes", which produces a confusing
    # warning. It's fixed by https://github.com/twisted/twisted/pull/1522), so
    # suppress the warning for now.
@@ -286,18 +259,12 @@ def listen_manhole(
    )


def listen_tcp(
    bind_addresses: Collection[str],
    port: int,
    factory: ServerFactory,
    reactor: IReactorTCP = reactor,
    backlog: int = 50,
) -> List[Port]:
def listen_tcp(bind_addresses, port, factory, reactor=reactor, backlog=50):
    """
    Create a TCP socket for a port and several addresses

    Returns:
        list of twisted.internet.tcp.Port listening for TCP connections
        list[twisted.internet.tcp.Port]: listening for TCP connections
    """
    r = []
    for address in bind_addresses:
@@ -306,19 +273,12 @@ def listen_tcp(
        except error.CannotListenError as e:
            check_bind_error(e, address, bind_addresses)

    # IReactorTCP returns an object implementing IListeningPort from listenTCP,
    # but we know it will be a Port instance.
    return r  # type: ignore[return-value]
    return r


def listen_ssl(
    bind_addresses: Collection[str],
    port: int,
    factory: ServerFactory,
    context_factory: IOpenSSLContextFactory,
    reactor: IReactorSSL = reactor,
    backlog: int = 50,
) -> List[Port]:
    bind_addresses, port, factory, context_factory, reactor=reactor, backlog=50
):
"""
|
||||
Create an TLS-over-TCP socket for a port and several addresses
|
||||
|
||||
@@ -334,13 +294,10 @@ def listen_ssl(
        except error.CannotListenError as e:
            check_bind_error(e, address, bind_addresses)

    # IReactorSSL incorrectly declares that an int is returned from listenSSL,
    # it actually returns an object implementing IListeningPort, but we know it
    # will be a Port instance.
    return r  # type: ignore[return-value]
    return r


def refresh_certificate(hs: "HomeServer") -> None:
def refresh_certificate(hs: "HomeServer"):
    """
    Refresh the TLS certificates that Synapse is using by re-reading them from
    disk and updating the TLS context factories to use them.
@@ -372,7 +329,7 @@ def refresh_certificate(hs: "HomeServer") -> None:
    logger.info("Context factories updated.")


async def start(hs: "HomeServer") -> None:
async def start(hs: "HomeServer"):
    """
    Start a Synapse server or worker.

@@ -403,7 +360,7 @@ async def start(hs: "HomeServer") -> None:
    if hasattr(signal, "SIGHUP"):

        @wrap_as_background_process("sighup")
        async def handle_sighup(*args: Any, **kwargs: Any) -> None:
        def handle_sighup(*args, **kwargs):
            # Tell systemd our state, if we're using it. This will silently fail if
            # we're not using systemd.
            sdnotify(b"RELOADING=1")
@@ -416,7 +373,7 @@ async def start(hs: "HomeServer") -> None:
        # We defer running the sighup handlers until next reactor tick. This
        # is so that we're in a sane state, e.g. flushing the logs may fail
        # if the sighup happens in the middle of writing a log entry.
        def run_sighup(*args: Any, **kwargs: Any) -> None:
        def run_sighup(*args, **kwargs):
            # `callFromThread` should be "signal safe" as well as thread
            # safe.
            reactor.callFromThread(handle_sighup, *args, **kwargs)
@@ -479,8 +436,12 @@ async def start(hs: "HomeServer") -> None:
        atexit.register(gc.freeze)


def setup_sentry(hs: "HomeServer") -> None:
    """Enable sentry integration, if enabled in configuration"""
def setup_sentry(hs: "HomeServer"):
    """Enable sentry integration, if enabled in configuration

    Args:
        hs
    """

    if not hs.config.metrics.sentry_enabled:
        return
@@ -505,7 +466,7 @@ def setup_sentry(hs: "HomeServer") -> None:
            scope.set_tag("worker_name", name)


def setup_sdnotify(hs: "HomeServer") -> None:
def setup_sdnotify(hs: "HomeServer"):
    """Adds process state hooks to tell systemd what we are up to."""

    # Tell systemd our state, if we're using it. This will silently fail if
@@ -520,7 +481,7 @@ def setup_sdnotify(hs: "HomeServer") -> None:
sdnotify_sockaddr = os.getenv("NOTIFY_SOCKET")


def sdnotify(state: bytes) -> None:
def sdnotify(state):
    """
    Send a notification to systemd, if the NOTIFY_SOCKET env var is set.

@@ -529,7 +490,7 @@ def sdnotify(state: bytes) -> None:
    package which many OSes don't include as a matter of principle.

    Args:
        state: notification to send
        state (bytes): notification to send
    """
    if not isinstance(state, bytes):
        raise TypeError("sdnotify should be called with a bytes")
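A minimal sketch of what sdnotify does under the hood, assuming systemd's notification protocol: write the state string to the datagram socket named by NOTIFY_SOCKET (standalone reimplementation, for illustration):

import os
import socket

def sdnotify_sketch(state: bytes) -> None:
    addr = os.getenv("NOTIFY_SOCKET")
    if addr is None:
        return  # not running under systemd
    if addr.startswith("@"):
        addr = "\0" + addr[1:]  # abstract-namespace socket
    with socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM) as sock:
        sock.connect(addr)
        sock.sendall(state)  # e.g. b"READY=1" or b"RELOADING=1"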
@@ -17,7 +17,6 @@ import logging
import os
import sys
import tempfile
from typing import List, Optional

from twisted.internet import defer, task

@@ -26,7 +25,6 @@ from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.events import EventBase
from synapse.handlers.admin import ExfiltrationWriter
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
@@ -42,7 +40,6 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.server import HomeServer
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.types import StateMap
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string

@@ -68,11 +65,16 @@ class AdminCmdSlavedStore(


class AdminCmdServer(HomeServer):
    DATASTORE_CLASS = AdminCmdSlavedStore  # type: ignore
    DATASTORE_CLASS = AdminCmdSlavedStore


async def export_data_command(hs: HomeServer, args: argparse.Namespace) -> None:
    """Export data for a user."""
async def export_data_command(hs: HomeServer, args):
    """Export data for a user.

    Args:
        hs
        args (argparse.Namespace)
    """

    user_id = args.user_id
    directory = args.output_directory
@@ -90,12 +92,12 @@ class FileExfiltrationWriter(ExfiltrationWriter):
    Note: This writes to disk on the main reactor thread.

    Args:
        user_id: The user whose data is being exfiltrated.
        directory: The directory to write the data to, if None then will write
            to a temporary directory.
        user_id (str): The user whose data is being exfiltrated.
        directory (str|None): The directory to write the data to, if None then
            will write to a temporary directory.
    """

    def __init__(self, user_id: str, directory: Optional[str] = None):
    def __init__(self, user_id, directory=None):
        self.user_id = user_id

        if directory:
@@ -109,7 +111,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
        if list(os.listdir(self.base_directory)):
            raise Exception("Directory must be empty")

    def write_events(self, room_id: str, events: List[EventBase]) -> None:
    def write_events(self, room_id, events):
        room_directory = os.path.join(self.base_directory, "rooms", room_id)
        os.makedirs(room_directory, exist_ok=True)
        events_file = os.path.join(room_directory, "events")
@@ -118,9 +120,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
            for event in events:
                print(json.dumps(event.get_pdu_json()), file=f)

    def write_state(
        self, room_id: str, event_id: str, state: StateMap[EventBase]
    ) -> None:
    def write_state(self, room_id, event_id, state):
        room_directory = os.path.join(self.base_directory, "rooms", room_id)
        state_directory = os.path.join(room_directory, "state")
        os.makedirs(state_directory, exist_ok=True)
@@ -131,9 +131,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
            for event in state.values():
                print(json.dumps(event.get_pdu_json()), file=f)

    def write_invite(
        self, room_id: str, event: EventBase, state: StateMap[EventBase]
    ) -> None:
    def write_invite(self, room_id, event, state):
        self.write_events(room_id, [event])

        # We write the invite state somewhere else as they aren't full events
@@ -147,9 +145,7 @@ class FileExfiltrationWriter(ExfiltrationWriter):
            for event in state.values():
                print(json.dumps(event), file=f)

    def write_knock(
        self, room_id: str, event: EventBase, state: StateMap[EventBase]
    ) -> None:
    def write_knock(self, room_id, event, state):
        self.write_events(room_id, [event])

        # We write the knock state somewhere else as they aren't full events
@@ -163,11 +159,11 @@ class FileExfiltrationWriter(ExfiltrationWriter):
            for event in state.values():
                print(json.dumps(event), file=f)

    def finished(self) -> str:
    def finished(self):
        return self.base_directory


def start(config_options: List[str]) -> None:
def start(config_options):
    parser = argparse.ArgumentParser(description="Synapse Admin Command")
    HomeServerConfig.add_arguments_to_parser(parser)

@@ -235,7 +231,7 @@ def start(config_options: List[str]) -> None:
    # We also make sure that `_base.start` gets run before we actually run the
    # command.

    async def run() -> None:
    async def run():
        with LoggingContext("command"):
            await _base.start(ss)
            await args.func(ss, args)
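FileExfiltrationWriter emits one JSON object per line ("JSON lines"), which keeps memory use flat when exporting large rooms. A hypothetical reader for such an events file, for illustration only:

import json
from typing import Any, List

def read_exported_events(path: str) -> List[Any]:
    # Each line of the "events" file holds one event's PDU JSON.
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]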
@@ -14,10 +14,11 @@
# limitations under the License.
import logging
import sys
from typing import Dict, List, Optional, Tuple
from typing import Dict, Optional

from twisted.internet import address
from twisted.web.resource import Resource
from twisted.web.resource import IResource
from twisted.web.server import Request

import synapse
import synapse.events
@@ -26,8 +27,7 @@ from synapse.api.urls import (
    CLIENT_API_PREFIX,
    FEDERATION_PREFIX,
    LEGACY_MEDIA_PREFIX,
    MEDIA_R0_PREFIX,
    MEDIA_V3_PREFIX,
    MEDIA_PREFIX,
    SERVER_KEY_V2_PREFIX,
)
from synapse.app import _base
@@ -44,7 +44,7 @@ from synapse.config.server import ListenerConfig
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.server import JsonResource, OptionsResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest, SynapseSite
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
@@ -113,14 +113,12 @@ from synapse.storage.databases.main.monthly_active_users import (
)
from synapse.storage.databases.main.presence import PresenceStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.room_batch import RoomBatchStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.transactions import TransactionWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
from synapse.types import JsonDict
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.versionstring import get_version_string

@@ -145,9 +143,7 @@ class KeyUploadServlet(RestServlet):
        self.http_client = hs.get_simple_http_client()
        self.main_uri = hs.config.worker.worker_main_http_uri

    async def on_POST(
        self, request: SynapseRequest, device_id: Optional[str]
    ) -> Tuple[int, JsonDict]:
    async def on_POST(self, request: Request, device_id: Optional[str]):
        requester = await self.auth.get_user_by_req(request, allow_guest=True)
        user_id = requester.user.to_string()
        body = parse_json_object_from_request(request)
@@ -191,8 +187,9 @@ class KeyUploadServlet(RestServlet):
            # If the header exists, add to the comma-separated list of the first
            # instance of the header. Otherwise, generate a new header.
            if x_forwarded_for:
                x_forwarded_for = [x_forwarded_for[0] + b", " + previous_host]
                x_forwarded_for.extend(x_forwarded_for[1:])
                x_forwarded_for = [
                    x_forwarded_for[0] + b", " + previous_host
                ] + x_forwarded_for[1:]
            else:
                x_forwarded_for = [previous_host]
            headers[b"X-Forwarded-For"] = x_forwarded_for
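When the worker proxies the key upload to the main process, it appends the immediate peer to any existing X-Forwarded-For chain. A standalone sketch mirroring the list-concatenation variant shown in the hunk (helper name is illustrative):

from typing import List

def chain_x_forwarded_for(existing: List[bytes], previous_host: bytes) -> List[bytes]:
    # Append the peer to the first header instance, keeping any additional
    # header instances intact; otherwise start a new header.
    if existing:
        return [existing[0] + b", " + previous_host] + existing[1:]
    return [previous_host]

# chain_x_forwarded_for([b"10.0.0.1"], b"192.168.1.5")
#   -> [b"10.0.0.1, 192.168.1.5"]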
@@ -241,7 +238,6 @@ class GenericWorkerSlavedStore(
    SlavedEventStore,
    SlavedKeyStore,
    RoomWorkerStore,
    RoomBatchStore,
    DirectoryStore,
    SlavedApplicationServiceStore,
    SlavedRegistrationStore,
@@ -257,16 +253,13 @@ class GenericWorkerSlavedStore(
    SessionStore,
    BaseSlavedStore,
):
    # Properties that multiple storage classes define. Tell mypy what the
    # expected type is.
    server_name: str
    config: HomeServerConfig
    pass


class GenericWorkerServer(HomeServer):
    DATASTORE_CLASS = GenericWorkerSlavedStore  # type: ignore
    DATASTORE_CLASS = GenericWorkerSlavedStore

    def _listen_http(self, listener_config: ListenerConfig) -> None:
    def _listen_http(self, listener_config: ListenerConfig):
        port = listener_config.port
        bind_addresses = listener_config.bind_addresses

@@ -274,10 +267,10 @@ class GenericWorkerServer(HomeServer):

        site_tag = listener_config.http_options.tag
        if site_tag is None:
            site_tag = str(port)
            site_tag = port

        # We always include a health resource.
        resources: Dict[str, Resource] = {"/health": HealthResource()}
        resources: Dict[str, IResource] = {"/health": HealthResource()}

        for res in listener_config.http_options.resources:
            for name in res.names:
@@ -341,8 +334,7 @@ class GenericWorkerServer(HomeServer):

                    resources.update(
                        {
                            MEDIA_R0_PREFIX: media_repo,
                            MEDIA_V3_PREFIX: media_repo,
                            MEDIA_PREFIX: media_repo,
                            LEGACY_MEDIA_PREFIX: media_repo,
                            "/_synapse/admin": admin_resource,
                        }
@@ -394,7 +386,7 @@ class GenericWorkerServer(HomeServer):

        logger.info("Synapse worker now listening on port %d", port)

    def start_listening(self) -> None:
    def start_listening(self):
        for listener in self.config.worker.worker_listeners:
            if listener.type == "http":
                self._listen_http(listener)
@@ -419,7 +411,7 @@ class GenericWorkerServer(HomeServer):
            self.get_tcp_replication().start_replication(self)


def start(config_options: List[str]) -> None:
def start(config_options):
    try:
        config = HomeServerConfig.load_config("Synapse worker", config_options)
    except ConfigError as e:
@@ -505,10 +497,6 @@ def start(config_options: List[str]) -> None:
    _base.start_worker_reactor("synapse-generic-worker", config)


def main() -> None:
if __name__ == "__main__":
    with LoggingContext("main"):
        start(sys.argv[1:])


if __name__ == "__main__":
    main()

@@ -16,10 +16,10 @@
import logging
import os
import sys
from typing import Dict, Iterable, Iterator, List
from typing import Iterator

from twisted.internet.tcp import Port
from twisted.web.resource import EncodingResourceWrapper, Resource
from twisted.internet import reactor
from twisted.web.resource import EncodingResourceWrapper, IResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File

@@ -29,8 +29,7 @@ from synapse import events
from synapse.api.urls import (
    FEDERATION_PREFIX,
    LEGACY_MEDIA_PREFIX,
    MEDIA_R0_PREFIX,
    MEDIA_V3_PREFIX,
    MEDIA_PREFIX,
    SERVER_KEY_V2_PREFIX,
    STATIC_PREFIX,
    WEB_CLIENT_PREFIX,
@@ -77,27 +76,23 @@ from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.homeserver")


def gz_wrap(r: Resource) -> Resource:
def gz_wrap(r):
    return EncodingResourceWrapper(r, [GzipEncoderFactory()])


class SynapseHomeServer(HomeServer):
    DATASTORE_CLASS = DataStore  # type: ignore
    DATASTORE_CLASS = DataStore

    def _listener_http(
        self, config: HomeServerConfig, listener_config: ListenerConfig
    ) -> Iterable[Port]:
    def _listener_http(self, config: HomeServerConfig, listener_config: ListenerConfig):
        port = listener_config.port
        bind_addresses = listener_config.bind_addresses
        tls = listener_config.tls
        # Must exist since this is an HTTP listener.
        assert listener_config.http_options is not None
        site_tag = listener_config.http_options.tag
        if site_tag is None:
            site_tag = str(port)

        # We always include a health resource.
        resources: Dict[str, Resource] = {"/health": HealthResource()}
        resources = {"/health": HealthResource()}

        for res in listener_config.http_options.resources:
            for name in res.names:
@@ -116,7 +111,7 @@ class SynapseHomeServer(HomeServer):
                ("listeners", site_tag, "additional_resources", "<%s>" % (path,)),
            )
            handler = handler_cls(config, module_api)
            if isinstance(handler, Resource):
            if IResource.providedBy(handler):
                resource = handler
            elif hasattr(handler, "handle_request"):
                resource = AdditionalResource(self, handler.handle_request)
@@ -133,7 +128,7 @@ class SynapseHomeServer(HomeServer):

        # try to find something useful to redirect '/' to
        if WEB_CLIENT_PREFIX in resources:
            root_resource: Resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
            root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
        elif STATIC_PREFIX in resources:
            root_resource = RootOptionsRedirectResource(STATIC_PREFIX)
        else:
@@ -150,8 +145,6 @@ class SynapseHomeServer(HomeServer):
        )

        if tls:
            # refresh_certificate should have been called before this.
            assert self.tls_server_context_factory is not None
            ports = listen_ssl(
                bind_addresses,
                port,
@@ -172,21 +165,20 @@ class SynapseHomeServer(HomeServer):

        return ports

    def _configure_named_resource(
        self, name: str, compress: bool = False
    ) -> Dict[str, Resource]:
    def _configure_named_resource(self, name, compress=False):
        """Build a resource map for a named resource

        Args:
            name: named resource: one of "client", "federation", etc
            compress: whether to enable gzip compression for this resource
            name (str): named resource: one of "client", "federation", etc
            compress (bool): whether to enable gzip compression for this
                resource

        Returns:
            map from path to HTTP resource
            dict[str, Resource]: map from path to HTTP resource
        """
        resources: Dict[str, Resource] = {}
        resources = {}
        if name == "client":
            client_resource: Resource = ClientRestResource(self)
            client_resource = ClientRestResource(self)
            if compress:
                client_resource = gz_wrap(client_resource)

@@ -194,8 +186,6 @@ class SynapseHomeServer(HomeServer):
                {
                    "/_matrix/client/api/v1": client_resource,
                    "/_matrix/client/r0": client_resource,
                    "/_matrix/client/v1": client_resource,
                    "/_matrix/client/v3": client_resource,
                    "/_matrix/client/unstable": client_resource,
                    "/_matrix/client/v2_alpha": client_resource,
                    "/_matrix/client/versions": client_resource,
@@ -217,7 +207,7 @@ class SynapseHomeServer(HomeServer):
        if name == "consent":
            from synapse.rest.consent.consent_resource import ConsentResource

            consent_resource: Resource = ConsentResource(self)
            consent_resource = ConsentResource(self)
            if compress:
                consent_resource = gz_wrap(consent_resource)
            resources.update({"/_matrix/consent": consent_resource})
@@ -247,11 +237,7 @@ class SynapseHomeServer(HomeServer):
            if self.config.server.enable_media_repo:
                media_repo = self.get_media_repository_resource()
                resources.update(
                    {
                        MEDIA_R0_PREFIX: media_repo,
                        MEDIA_V3_PREFIX: media_repo,
                        LEGACY_MEDIA_PREFIX: media_repo,
                    }
                    {MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo}
                )
            elif name == "media":
                raise ConfigError(
@@ -291,7 +277,7 @@ class SynapseHomeServer(HomeServer):

        return resources

    def start_listening(self) -> None:
    def start_listening(self):
        if self.config.redis.redis_enabled:
            # If redis is enabled we connect via the replication command handler
            # in the same way as the workers (since we're effectively a client
@@ -317,9 +303,7 @@ class SynapseHomeServer(HomeServer):
                    ReplicationStreamProtocolFactory(self),
                )
                for s in services:
                    self.get_reactor().addSystemEventTrigger(
                        "before", "shutdown", s.stopListening
                    )
                    reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)
            elif listener.type == "metrics":
                if not self.config.metrics.enable_metrics:
                    logger.warning(
@@ -334,13 +318,14 @@ class SynapseHomeServer(HomeServer):
                logger.warning("Unrecognized listener type: %s", listener.type)

def setup(config_options: List[str]) -> SynapseHomeServer:
def setup(config_options):
    """
    Args:
        config_options: The options passed to Synapse. Usually `sys.argv[1:]`.
        config_options: The options passed to Synapse. Usually
            `sys.argv[1:]`.

    Returns:
        A homeserver instance.
        HomeServer
    """
    try:
        config = HomeServerConfig.load_or_generate_config(
@@ -358,13 +343,6 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
        # generating config files and shouldn't try to continue.
        sys.exit(0)

    if config.worker.worker_app:
        raise ConfigError(
            "You have specified `worker_app` in the config but are attempting to start a non-worker "
            "instance. Please use `python -m synapse.app.generic_worker` instead (or remove the option if this is the main process)."
        )
        sys.exit(1)

    events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
    synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage

@@ -386,7 +364,7 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
    except Exception as e:
        handle_startup_exception(e)

    async def start() -> None:
    async def start():
        # Load the OIDC provider metadatas, if OIDC is enabled.
        if hs.config.oidc.oidc_enabled:
            oidc = hs.get_oidc_handler()
@@ -426,15 +404,39 @@ def format_config_error(e: ConfigError) -> Iterator[str]:

    yield ":\n  %s" % (e.msg,)

    parent_e = e.__cause__
    e = e.__cause__
    indent = 1
    while parent_e:
    while e:
        indent += 1
        yield ":\n%s%s" % ("  " * indent, str(parent_e))
        parent_e = parent_e.__cause__
        yield ":\n%s%s" % ("  " * indent, str(e))
        e = e.__cause__
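format_config_error walks the exception's __cause__ chain, indenting one level per nested cause. A standalone sketch of the same traversal (names are illustrative):

def format_cause_chain(e: BaseException):
    # Yield each chained cause, indented one step deeper than its parent.
    indent = 1
    cause = e.__cause__
    while cause:
        indent += 1
        yield ":\n%s%s" % ("  " * indent, str(cause))
        cause = cause.__cause__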
def run(hs: HomeServer) -> None:
def run(hs: HomeServer):
    PROFILE_SYNAPSE = False
    if PROFILE_SYNAPSE:

        def profile(func):
            from cProfile import Profile
            from threading import current_thread

            def profiled(*args, **kargs):
                profile = Profile()
                profile.enable()
                func(*args, **kargs)
                profile.disable()
                ident = current_thread().ident
                profile.dump_stats(
                    "/tmp/%s.%s.%i.pstat" % (hs.hostname, func.__name__, ident)
                )

            return profiled

        from twisted.python.threadpool import ThreadPool

        ThreadPool._worker = profile(ThreadPool._worker)
        reactor.run = profile(reactor.run)
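The profile decorator above dumps one .pstat file per profiled thread. One way such a dump could be inspected afterwards with the standard library (the path is hypothetical):

import pstats

stats = pstats.Stats("/tmp/example.host._worker.140123456.pstat")  # hypothetical dump file
stats.sort_stats("cumulative").print_stats(20)  # top 20 entries by cumulative time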
    _base.start_reactor(
        "synapse-homeserver",
        soft_file_limit=hs.config.server.soft_file_limit,
@@ -446,7 +448,7 @@ def run(hs: HomeServer) -> None:
    )


def main() -> None:
def main():
    with LoggingContext("main"):
        # check base requirements
        check_requirements()

@@ -15,12 +15,11 @@ import logging
import math
import resource
import sys
from typing import TYPE_CHECKING, List, Sized, Tuple
from typing import TYPE_CHECKING

from prometheus_client import Gauge

from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.types import JsonDict

if TYPE_CHECKING:
    from synapse.server import HomeServer
@@ -29,7 +28,7 @@ logger = logging.getLogger("synapse.app.homeserver")

# Contains the list of processes we will be monitoring
# currently either 0 or 1
_stats_process: List[Tuple[int, "resource.struct_rusage"]] = []
_stats_process = []

# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
@@ -46,15 +45,9 @@ registered_reserved_users_mau_gauge = Gauge(


@wrap_as_background_process("phone_stats_home")
async def phone_stats_home(
    hs: "HomeServer",
    stats: JsonDict,
    stats_process: List[Tuple[int, "resource.struct_rusage"]] = _stats_process,
) -> None:
async def phone_stats_home(hs: "HomeServer", stats, stats_process=_stats_process):
    logger.info("Gathering stats for reporting")
    now = int(hs.get_clock().time())
    # Ensure the homeserver has started.
    assert hs.start_time is not None
    uptime = int(now - hs.start_time)
    if uptime < 0:
        uptime = 0
@@ -153,15 +146,15 @@ async def phone_stats_home(
        logger.warning("Error reporting stats: %s", e)


def start_phone_stats_home(hs: "HomeServer") -> None:
def start_phone_stats_home(hs: "HomeServer"):
    """
    Start the background tasks which report phone home stats.
    """
    clock = hs.get_clock()

    stats: JsonDict = {}
    stats = {}

    def performance_stats_init() -> None:
    def performance_stats_init():
        _stats_process.clear()
        _stats_process.append(
            (int(hs.get_clock().time()), resource.getrusage(resource.RUSAGE_SELF))
@@ -177,10 +170,10 @@ def start_phone_stats_home(hs: "HomeServer") -> None:
        hs.get_datastore().reap_monthly_active_users()

    @wrap_as_background_process("generate_monthly_active_users")
    async def generate_monthly_active_users() -> None:
    async def generate_monthly_active_users():
        current_mau_count = 0
        current_mau_count_by_service = {}
        reserved_users: Sized = ()
        reserved_users = ()
        store = hs.get_datastore()
        if hs.config.server.limit_usage_by_mau or hs.config.server.mau_stats_only:
            current_mau_count = await store.get_monthly_active_count()


@@ -13,7 +13,6 @@
# limitations under the License.
import logging
import re
from enum import Enum
from typing import TYPE_CHECKING, Iterable, List, Match, Optional

from synapse.api.constants import EventTypes
@@ -28,7 +27,7 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)


class ApplicationServiceState(Enum):
class ApplicationServiceState:
    DOWN = "down"
    UP = "up"


@@ -231,32 +231,13 @@ class ApplicationServiceApi(SimpleHttpClient):
                json_body=body,
                args={"access_token": service.hs_token},
            )
            if logger.isEnabledFor(logging.DEBUG):
                logger.debug(
                    "push_bulk to %s succeeded! events=%s",
                    uri,
                    [event.get("event_id") for event in events],
                )
            sent_transactions_counter.labels(service.id).inc()
            sent_events_counter.labels(service.id).inc(len(events))
            return True
        except CodeMessageException as e:
            logger.warning(
                "push_bulk to %s received code=%s msg=%s",
                uri,
                e.code,
                e.msg,
                exc_info=logger.isEnabledFor(logging.DEBUG),
            )
            logger.warning("push_bulk to %s received %s", uri, e.code)
        except Exception as ex:
            logger.warning(
                "push_bulk to %s threw exception(%s) %s args=%s",
                uri,
                type(ex).__name__,
                ex,
                ex.args,
                exc_info=logger.isEnabledFor(logging.DEBUG),
            )
            logger.warning("push_bulk to %s threw exception %s", uri, ex)
        failed_transactions_counter.labels(service.id).inc()
        return False
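A note on the logging pattern in the hunk above: passing exc_info=logger.isEnabledFor(logging.DEBUG) attaches the traceback only when debug logging is on, keeping production logs terse. A minimal standalone example of the same idiom:

import logging

logger = logging.getLogger("example")

try:
    1 / 0
except Exception as ex:
    # Traceback is included only if the logger is at DEBUG level.
    logger.warning(
        "operation failed: %s", ex, exc_info=logger.isEnabledFor(logging.DEBUG)
    )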
@@ -13,13 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from typing import List

from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig


def main(args: List[str]) -> None:
def main(args):
    action = args[1] if len(args) > 1 and args[1] == "read" else None
    # If we're reading a key in the config file, then `args[1]` will be `read` and `args[2]`
    # will be the key to read.

@@ -20,18 +20,7 @@ import os
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import (
    Any,
    Dict,
    Iterable,
    List,
    MutableMapping,
    Optional,
    Tuple,
    Type,
    TypeVar,
    Union,
)
from typing import Any, Iterable, List, MutableMapping, Optional, Union

import attr
import jinja2
@@ -89,7 +78,7 @@ CONFIG_FILE_HEADER = """\
"""


def path_exists(file_path: str) -> bool:
def path_exists(file_path):
    """Check if a file exists

    Unlike os.path.exists, this throws an exception if there is an error
@@ -97,7 +86,7 @@ def path_exists(file_path: str) -> bool:
    the parent dir).

    Returns:
        True if the file exists; False if not.
        bool: True if the file exists; False if not.
    """
    try:
        os.stat(file_path)
@@ -113,15 +102,15 @@ class Config:
    A configuration section, containing configuration keys and values.

    Attributes:
        section: The section title of this config object, such as
        section (str): The section title of this config object, such as
            "tls" or "logger". This is used to refer to it on the root
            logger (for example, `config.tls.some_option`). Must be
            defined in subclasses.
    """

    section: str
    section = None

    def __init__(self, root_config: "RootConfig" = None):
    def __init__(self, root_config=None):
        self.root = root_config

        # Get the path to the default Synapse template directory
@@ -130,7 +119,7 @@ class Config:
        )

    @staticmethod
    def parse_size(value: Union[str, int]) -> int:
    def parse_size(value):
        if isinstance(value, int):
            return value
        sizes = {"K": 1024, "M": 1024 * 1024}
@@ -173,15 +162,15 @@ class Config:
        return int(value) * size
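parse_size accepts either an int (already a byte count) or a string with a K or M suffix. A minimal standalone sketch of the parsing logic with a usage example (the real method additionally validates its input):

from typing import Union

def parse_size(value: Union[str, int]) -> int:
    # Plain ints are already bytes; strings may carry a K or M suffix.
    if isinstance(value, int):
        return value
    sizes = {"K": 1024, "M": 1024 * 1024}
    size = 1
    suffix = value[-1]
    if suffix in sizes:
        value = value[:-1]
        size = sizes[suffix]
    return int(value) * size

assert parse_size("10M") == 10 * 1024 * 1024
assert parse_size(4096) == 4096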
    @staticmethod
    def abspath(file_path: str) -> str:
    def abspath(file_path):
        return os.path.abspath(file_path) if file_path else file_path

    @classmethod
    def path_exists(cls, file_path: str) -> bool:
    def path_exists(cls, file_path):
        return path_exists(file_path)

    @classmethod
    def check_file(cls, file_path: Optional[str], config_name: str) -> str:
    def check_file(cls, file_path, config_name):
        if file_path is None:
            raise ConfigError("Missing config for %s." % (config_name,))
        try:
@@ -194,7 +183,7 @@ class Config:
        return cls.abspath(file_path)

    @classmethod
    def ensure_directory(cls, dir_path: str) -> str:
    def ensure_directory(cls, dir_path):
        dir_path = cls.abspath(dir_path)
        os.makedirs(dir_path, exist_ok=True)
        if not os.path.isdir(dir_path):
@@ -202,7 +191,7 @@ class Config:
        return dir_path

    @classmethod
    def read_file(cls, file_path: Any, config_name: str) -> str:
    def read_file(cls, file_path, config_name):
        """Deprecated: call read_file directly"""
        return read_file(file_path, (config_name,))

@@ -295,9 +284,6 @@ class Config:
        return [env.get_template(filename) for filename in filenames]


TRootConfig = TypeVar("TRootConfig", bound="RootConfig")


class RootConfig:
    """
    Holder of an application's configuration.
@@ -322,9 +308,7 @@ class RootConfig:
                raise Exception("Failed making %s: %r" % (config_class.section, e))
            setattr(self, config_class.section, conf)

    def invoke_all(
        self, func_name: str, *args: Any, **kwargs: Any
    ) -> MutableMapping[str, Any]:
    def invoke_all(self, func_name: str, *args, **kwargs) -> MutableMapping[str, Any]:
        """
        Invoke a function on all instantiated config objects this RootConfig is
        configured to use.
@@ -333,7 +317,6 @@ class RootConfig:
            func_name: Name of function to invoke
            *args
            **kwargs

        Returns:
            ordered dictionary of config section name and the result of the
            function from it.
@@ -349,7 +332,7 @@ class RootConfig:
        return res

    @classmethod
    def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: any) -> None:
    def invoke_all_static(cls, func_name: str, *args, **kwargs):
        """
        Invoke a static function on config objects this RootConfig is
        configured to use.
@@ -358,7 +341,6 @@ class RootConfig:
            func_name: Name of function to invoke
            *args
            **kwargs

        Returns:
            ordered dictionary of config section name and the result of the
            function from it.
@@ -369,16 +351,16 @@ class RootConfig:

    def generate_config(
        self,
        config_dir_path: str,
        data_dir_path: str,
        server_name: str,
        generate_secrets: bool = False,
        report_stats: Optional[bool] = None,
        open_private_ports: bool = False,
        listeners: Optional[List[dict]] = None,
        tls_certificate_path: Optional[str] = None,
        tls_private_key_path: Optional[str] = None,
    ) -> str:
        config_dir_path,
        data_dir_path,
        server_name,
        generate_secrets=False,
        report_stats=None,
        open_private_ports=False,
        listeners=None,
        tls_certificate_path=None,
        tls_private_key_path=None,
    ):
        """
        Build a default configuration file

@@ -386,27 +368,27 @@ class RootConfig:
        (eg with --generate_config).

        Args:
            config_dir_path: The path where the config files are kept. Used to
            config_dir_path (str): The path where the config files are kept. Used to
                create filenames for things like the log config and the signing key.

            data_dir_path: The path where the data files are kept. Used to create
            data_dir_path (str): The path where the data files are kept. Used to create
                filenames for things like the database and media store.

            server_name: The server name. Used to initialise the server_name
            server_name (str): The server name. Used to initialise the server_name
                config param, but also used in the names of some of the config files.

            generate_secrets: True if we should generate new secrets for things
            generate_secrets (bool): True if we should generate new secrets for things
                like the macaroon_secret_key. If False, these parameters will be left
                unset.

            report_stats: Initial setting for the report_stats setting.
            report_stats (bool|None): Initial setting for the report_stats setting.
                If None, report_stats will be left unset.

            open_private_ports: True to leave private ports (such as the non-TLS
            open_private_ports (bool): True to leave private ports (such as the non-TLS
                HTTP listener) open to the internet.

            listeners: A list of descriptions of the listeners synapse should
                start with each of which specifies a port (int), a list of
            listeners (list(dict)|None): A list of descriptions of the listeners
                synapse should start with each of which specifies a port (str), a list of
                resources (list(str)), tls (bool) and type (str). For example:
                [{
                    "port": 8448,
@@ -421,12 +403,16 @@ class RootConfig:
                    "type": "http",
                }],

            tls_certificate_path: The path to the tls certificate.

            tls_private_key_path: The path to the tls private key.
            database (str|None): The database type to configure, either `psycopg2`
                or `sqlite3`.

            tls_certificate_path (str|None): The path to the tls certificate.

            tls_private_key_path (str|None): The path to the tls private key.

        Returns:
            The yaml config file
            str: the yaml config file
        """

        return CONFIG_FILE_HEADER + "\n\n".join(
@@ -446,15 +432,12 @@ class RootConfig:
        )

    @classmethod
    def load_config(
        cls: Type[TRootConfig], description: str, argv: List[str]
    ) -> TRootConfig:
    def load_config(cls, description, argv):
        """Parse the commandline and config files

        Doesn't support config-file-generation: used by the worker apps.

        Returns:
            Config object.
        Returns: Config object.
        """
        config_parser = argparse.ArgumentParser(description=description)
        cls.add_arguments_to_parser(config_parser)
@@ -463,7 +446,7 @@ class RootConfig:
        return obj

    @classmethod
    def add_arguments_to_parser(cls, config_parser: argparse.ArgumentParser) -> None:
    def add_arguments_to_parser(cls, config_parser):
        """Adds all the config flags to an ArgumentParser.

        Doesn't support config-file-generation: used by the worker apps.
@@ -471,7 +454,7 @@ class RootConfig:
        Used for workers where we want to add extra flags/subcommands.

        Args:
            config_parser: App description
            config_parser (ArgumentParser): App description
        """

        config_parser.add_argument(
@@ -494,9 +477,7 @@ class RootConfig:
        cls.invoke_all_static("add_arguments", config_parser)

    @classmethod
    def load_config_with_parser(
        cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
    ) -> Tuple[TRootConfig, argparse.Namespace]:
    def load_config_with_parser(cls, parser, argv):
        """Parse the commandline and config files with the given parser

        Doesn't support config-file-generation: used by the worker apps.
@@ -504,12 +485,13 @@ class RootConfig:
        Used for workers where we want to add extra flags/subcommands.

        Args:
            parser
            argv
            parser (ArgumentParser)
            argv (list[str])

        Returns:
            Returns the parsed config object and the parsed argparse.Namespace
            object from parser.parse_args(..)`
            tuple[HomeServerConfig, argparse.Namespace]: Returns the parsed
                config object and the parsed argparse.Namespace object from
                `parser.parse_args(..)`
        """

        obj = cls()
@@ -538,15 +520,12 @@ class RootConfig:
        return obj, config_args

    @classmethod
    def load_or_generate_config(
        cls: Type[TRootConfig], description: str, argv: List[str]
    ) -> Optional[TRootConfig]:
    def load_or_generate_config(cls, description, argv):
        """Parse the commandline and config files

        Supports generation of config files, so is used for the main homeserver app.

        Returns:
            Config object, or None if --generate-config or --generate-keys was set
        Returns: Config object, or None if --generate-config or --generate-keys was set
        """
        parser = argparse.ArgumentParser(description=description)
        parser.add_argument(
@@ -701,21 +680,16 @@ class RootConfig:

        return obj

    def parse_config_dict(
        self,
        config_dict: Dict[str, Any],
        config_dir_path: Optional[str] = None,
        data_dir_path: Optional[str] = None,
    ) -> None:
    def parse_config_dict(self, config_dict, config_dir_path=None, data_dir_path=None):
        """Read the information from the config dict into this Config object.

        Args:
            config_dict: Configuration data, as read from the yaml
            config_dict (dict): Configuration data, as read from the yaml

            config_dir_path: The path where the config files are kept. Used to
            config_dir_path (str): The path where the config files are kept. Used to
                create filenames for things like the log config and the signing key.

            data_dir_path: The path where the data files are kept. Used to create
            data_dir_path (str): The path where the data files are kept. Used to create
                filenames for things like the database and media store.
        """
        self.invoke_all(
@@ -725,20 +699,17 @@ class RootConfig:
            data_dir_path=data_dir_path,
        )

    def generate_missing_files(
        self, config_dict: Dict[str, Any], config_dir_path: str
    ) -> None:
    def generate_missing_files(self, config_dict, config_dir_path):
        self.invoke_all("generate_files", config_dict, config_dir_path)


def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
def read_config_files(config_files):
    """Read the config files into a dict

    Args:
        config_files: A list of the config files to read
        config_files (iterable[str]): A list of the config files to read

    Returns:
        The configuration dictionary.
    Returns: dict
    """
    specified_config = {}
    for config_file in config_files:
@@ -762,17 +733,17 @@ def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
    return specified_config


def find_config_files(search_paths: List[str]) -> List[str]:
def find_config_files(search_paths):
    """Finds config files using a list of search paths. If a path is a file
    then that file path is added to the list. If a search path is a directory
    then all the "*.yaml" files in that directory are added to the list in
    sorted order.

    Args:
        search_paths: A list of paths to search.
        search_paths (list(str)): A list of paths to search.

    Returns:
        A list of file paths.
        list(str): A list of file paths.
    """

    config_files = []
@@ -806,7 +777,7 @@ def find_config_files(search_paths: List[str]) -> List[str]:
    return config_files
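A sketch of the directory branch of find_config_files described above (standalone reimplementation, for illustration; the real function also warns about stray non-yaml entries):

import glob
import os
from typing import List

def find_config_files_sketch(search_paths: List[str]) -> List[str]:
    config_files = []
    for path in search_paths:
        if os.path.isdir(path):
            # Add every *.yaml file in the directory, in sorted order.
            config_files.extend(sorted(glob.glob(os.path.join(path, "*.yaml"))))
        else:
            config_files.append(path)
    return config_files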
@attr.s(auto_attribs=True)
@attr.s
class ShardedWorkerHandlingConfig:
    """Algorithm for choosing which instance is responsible for handling some
    sharded work.
@@ -816,7 +787,7 @@ class ShardedWorkerHandlingConfig:
    below).
    """

    instances: List[str]
    instances = attr.ib(type=List[str])

    def should_handle(self, instance_name: str, key: str) -> bool:
        """Whether this instance is responsible for handling the given key."""
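The usual pattern behind such a config is to hash the key onto the configured instance list, so every worker independently agrees on an owner. A hedged sketch of that idea (not necessarily Synapse's exact hashing scheme):

from hashlib import sha256
from typing import List

def pick_instance(instances: List[str], key: str) -> str:
    # Deterministically map the key to one instance; every process that
    # shares the same `instances` list computes the same answer.
    dest = int(sha256(key.encode("utf8")).hexdigest(), 16)
    return instances[dest % len(instances)]

def should_handle(instances: List[str], instance_name: str, key: str) -> bool:
    return pick_instance(instances, key) == instance_name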
@@ -1,18 +1,4 @@
|
||||
import argparse
|
||||
from typing import (
|
||||
Any,
|
||||
Dict,
|
||||
Iterable,
|
||||
List,
|
||||
MutableMapping,
|
||||
Optional,
|
||||
Tuple,
|
||||
Type,
|
||||
TypeVar,
|
||||
Union,
|
||||
)
|
||||
|
||||
import jinja2
|
||||
from typing import Any, Iterable, List, Optional
|
||||
|
||||
from synapse.config import (
|
||||
account_validity,
|
||||
@@ -33,7 +19,6 @@ from synapse.config import (
|
||||
logger,
|
||||
metrics,
|
||||
modules,
|
||||
oembed,
|
||||
oidc,
|
||||
password_auth_providers,
|
||||
push,
|
||||
@@ -42,7 +27,6 @@ from synapse.config import (
|
||||
registration,
|
||||
repository,
|
||||
retention,
|
||||
room,
|
||||
room_directory,
|
||||
saml2,
|
||||
server,
|
||||
@@ -67,9 +51,7 @@ MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
|
||||
MISSING_REPORT_STATS_SPIEL: str
|
||||
MISSING_SERVER_NAME: str
|
||||
|
||||
def path_exists(file_path: str) -> bool: ...
|
||||
|
||||
TRootConfig = TypeVar("TRootConfig", bound="RootConfig")
|
||||
def path_exists(file_path: str): ...
|
||||
|
||||
class RootConfig:
|
||||
server: server.ServerConfig
|
||||
@@ -79,7 +61,6 @@ class RootConfig:
|
||||
logging: logger.LoggingConfig
|
||||
ratelimiting: ratelimiting.RatelimitConfig
|
||||
media: repository.ContentRepositoryConfig
|
||||
oembed: oembed.OembedConfig
|
||||
captcha: captcha.CaptchaConfig
|
||||
voip: voip.VoipConfig
registration: registration.RegistrationConfig
@@ -99,7 +80,6 @@ class RootConfig:
authproviders: password_auth_providers.PasswordAuthProviderConfig
push: push.PushConfig
spamchecker: spam_checker.SpamCheckerConfig
room: room.RoomConfig
groups: groups.GroupsConfig
userdirectory: user_directory.UserDirectoryConfig
consent: consent.ConsentConfig
@@ -107,85 +87,72 @@ class RootConfig:
servernotices: server_notices.ServerNoticesConfig
roomdirectory: room_directory.RoomDirectoryConfig
thirdpartyrules: third_party_event_rules.ThirdPartyRulesConfig
tracing: tracer.TracerConfig
tracer: tracer.TracerConfig
redis: redis.RedisConfig
modules: modules.ModulesConfig
caches: cache.CacheConfig
federation: federation.FederationConfig
retention: retention.RetentionConfig

config_classes: List[Type["Config"]] = ...
config_classes: List = ...
def __init__(self) -> None: ...
def invoke_all(
self, func_name: str, *args: Any, **kwargs: Any
) -> MutableMapping[str, Any]: ...
def invoke_all(self, func_name: str, *args: Any, **kwargs: Any): ...
@classmethod
def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: Any) -> None: ...
def __getattr__(self, item: str): ...
def parse_config_dict(
self,
config_dict: Dict[str, Any],
config_dir_path: Optional[str] = ...,
data_dir_path: Optional[str] = ...,
config_dict: Any,
config_dir_path: Optional[Any] = ...,
data_dir_path: Optional[Any] = ...,
) -> None: ...
read_config: Any = ...
def generate_config(
self,
config_dir_path: str,
data_dir_path: str,
server_name: str,
generate_secrets: bool = ...,
report_stats: Optional[bool] = ...,
report_stats: Optional[str] = ...,
open_private_ports: bool = ...,
listeners: Optional[Any] = ...,
database_conf: Optional[Any] = ...,
tls_certificate_path: Optional[str] = ...,
tls_private_key_path: Optional[str] = ...,
) -> str: ...
): ...
@classmethod
def load_or_generate_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> Optional[TRootConfig]: ...
def load_or_generate_config(cls, description: Any, argv: Any): ...
@classmethod
def load_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> TRootConfig: ...
def load_config(cls, description: Any, argv: Any): ...
@classmethod
def add_arguments_to_parser(
cls, config_parser: argparse.ArgumentParser
) -> None: ...
def add_arguments_to_parser(cls, config_parser: Any) -> None: ...
@classmethod
def load_config_with_parser(
cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
) -> Tuple[TRootConfig, argparse.Namespace]: ...
def load_config_with_parser(cls, parser: Any, argv: Any): ...
def generate_missing_files(
self, config_dict: dict, config_dir_path: str
) -> None: ...

class Config:
root: RootConfig
default_template_dir: str
def __init__(self, root_config: Optional[RootConfig] = ...) -> None: ...
def __getattr__(self, item: str, from_root: bool = ...): ...
@staticmethod
def parse_size(value: Union[str, int]) -> int: ...
def parse_size(value: Any): ...
@staticmethod
def parse_duration(value: Union[str, int]) -> int: ...
def parse_duration(value: Any): ...
@staticmethod
def abspath(file_path: Optional[str]) -> str: ...
def abspath(file_path: Optional[str]): ...
@classmethod
def path_exists(cls, file_path: str) -> bool: ...
def path_exists(cls, file_path: str): ...
@classmethod
def check_file(cls, file_path: str, config_name: str) -> str: ...
def check_file(cls, file_path: str, config_name: str): ...
@classmethod
def ensure_directory(cls, dir_path: str) -> str: ...
def ensure_directory(cls, dir_path: str): ...
@classmethod
def read_file(cls, file_path: str, config_name: str) -> str: ...
def read_template(self, filenames: str) -> jinja2.Template: ...
def read_templates(
self,
filenames: List[str],
custom_template_directories: Optional[Iterable[str]] = None,
) -> List[jinja2.Template]: ...
def read_file(cls, file_path: str, config_name: str): ...

def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]: ...
def find_config_files(search_paths: List[str]) -> List[str]: ...
def read_config_files(config_files: List[str]): ...
def find_config_files(search_paths: List[str]): ...

class ShardedWorkerHandlingConfig:
instances: List[str]

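The stub above types helpers such as Config.parse_size and Config.parse_duration as taking either an int or a suffixed string and returning an int. A minimal standalone sketch of size parsing in that spirit (an illustration only, not Synapse's actual implementation):

import re

_MULTIPLIERS = {"K": 1024, "M": 1024 * 1024}

def parse_size(value):
    # Accept an int, or a string such as "10K", and return a byte count.
    if isinstance(value, int):
        return value
    match = re.fullmatch(r"(\d+)([KM]?)", value)
    if not match:
        raise ValueError("invalid size: %r" % (value,))
    number, suffix = match.groups()
    return int(number) * _MULTIPLIERS.get(suffix, 1)

assert parse_size("10K") == 10240
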
@@ -1,5 +1,4 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,14 +13,14 @@
# limitations under the License.

import logging
from typing import Dict, List
from typing import Dict
from urllib import parse as urlparse

import yaml
from netaddr import IPSet

from synapse.appservice import ApplicationService
from synapse.types import JsonDict, UserID
from synapse.types import UserID

from ._base import Config, ConfigError

@@ -31,12 +30,12 @@ logger = logging.getLogger(__name__)
class AppServiceConfig(Config):
section = "appservice"

def read_config(self, config, **kwargs) -> None:
def read_config(self, config, **kwargs):
self.app_service_config_files = config.get("app_service_config_files", [])
self.notify_appservices = config.get("notify_appservices", True)
self.track_appservice_user_ips = config.get("track_appservice_user_ips", False)

def generate_config_section(cls, **kwargs) -> str:
def generate_config_section(cls, **kwargs):
return """\
# A list of application service config files to use
#
@@ -51,9 +50,7 @@ class AppServiceConfig(Config):
"""


def load_appservices(
hostname: str, config_files: List[str]
) -> List[ApplicationService]:
def load_appservices(hostname, config_files):
"""Returns a list of Application Services from the config files."""
if not isinstance(config_files, list):
logger.warning("Expected %s to be a list of AS config files.", config_files)
@@ -96,9 +93,7 @@ def load_appservices(
return appservices


def _load_appservice(
hostname: str, as_info: JsonDict, config_filename: str
) -> ApplicationService:
def _load_appservice(hostname, as_info, config_filename):
required_string_fields = ["id", "as_token", "hs_token", "sender_localpart"]
for field in required_string_fields:
if not isinstance(as_info.get(field), str):
@@ -120,9 +115,9 @@ def _load_appservice(
user_id = user.to_string()

# Rate limiting for users of this AS is on by default (excludes sender)
rate_limited = as_info.get("rate_limited")
if not isinstance(rate_limited, bool):
rate_limited = True
rate_limited = True
if isinstance(as_info.get("rate_limited"), bool):
rate_limited = as_info.get("rate_limited")

# namespace checks
if not isinstance(as_info.get("namespaces"), dict):

@@ -1,4 +1,4 @@
# Copyright 2019-2021 Matrix.org Foundation C.I.C.
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,9 +15,7 @@
import os
import re
import threading
from typing import Callable, Dict, Optional

import attr
from typing import Callable, Dict

from synapse.python_dependencies import DependencyException, check_requirements

@@ -36,13 +34,13 @@ _DEFAULT_FACTOR_SIZE = 0.5
_DEFAULT_EVENT_CACHE_SIZE = "10K"


@attr.s(slots=True, auto_attribs=True)
class CacheProperties:
# The default factor size for all caches
default_factor_size: float = float(
os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
)
resize_all_caches_func: Optional[Callable[[], None]] = None
def __init__(self):
# The default factor size for all caches
self.default_factor_size = float(
os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
)
self.resize_all_caches_func = None


properties = CacheProperties()
@@ -64,7 +62,7 @@ def _canonicalise_cache_name(cache_name: str) -> str:

def add_resizable_cache(
cache_name: str, cache_resize_callback: Callable[[float], None]
) -> None:
):
"""Register a cache that's size can dynamically change

Args:
@@ -93,7 +91,7 @@ class CacheConfig(Config):
_environ = os.environ

@staticmethod
def reset() -> None:
def reset():
"""Resets the caches to their defaults. Used for tests."""
properties.default_factor_size = float(
os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
@@ -102,7 +100,7 @@ class CacheConfig(Config):
with _CACHES_LOCK:
_CACHES.clear()

def generate_config_section(self, **kwargs) -> str:
def generate_config_section(self, **kwargs):
return """\
## Caching ##

@@ -164,7 +162,7 @@ class CacheConfig(Config):
#sync_response_cache_duration: 2m
"""

def read_config(self, config, **kwargs) -> None:
def read_config(self, config, **kwargs):
self.event_cache_size = self.parse_size(
config.get("event_cache_size", _DEFAULT_EVENT_CACHE_SIZE)
)
@@ -219,7 +217,7 @@ class CacheConfig(Config):

expiry_time = cache_config.get("expiry_time")
if expiry_time:
self.expiry_time_msec: Optional[int] = self.parse_duration(expiry_time)
self.expiry_time_msec = self.parse_duration(expiry_time)
else:
self.expiry_time_msec = None

@@ -234,7 +232,7 @@ class CacheConfig(Config):
# needing an instance of Config
properties.resize_all_caches_func = self.resize_all_caches

def resize_all_caches(self) -> None:
def resize_all_caches(self):
"""Ensure all cache sizes are up to date

For each cache, run the mapped callback function with either

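Both sides of the CacheProperties hunk read the default cache factor from the environment at import time; the attrs version just does it declaratively. A rough standalone equivalence check (the env var name here is an assumption, standing in for _CACHE_PREFIX):

import os
import attr

_CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"  # assumed name of the env var
_DEFAULT_FACTOR_SIZE = 0.5

@attr.s(slots=True, auto_attribs=True)
class CacheProperties:
    # auto_attribs turns the annotated field into an attr.ib(), so this
    # default is equivalent to the assignment in the __init__ variant.
    default_factor_size: float = float(
        os.environ.get(_CACHE_PREFIX, _DEFAULT_FACTOR_SIZE)
    )

print(CacheProperties().default_factor_size)  # 0.5 unless the env var is set
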
@@ -1,5 +1,4 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -29,7 +28,7 @@ class CasConfig(Config):

section = "cas"

def read_config(self, config, **kwargs) -> None:
def read_config(self, config, **kwargs):
cas_config = config.get("cas_config", None)
self.cas_enabled = cas_config and cas_config.get("enabled", True)

@@ -52,7 +51,7 @@ class CasConfig(Config):
self.cas_displayname_attribute = None
self.cas_required_attributes = []

def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
# Enable Central Authentication Service (CAS) for registration and login.
#

@@ -1,5 +1,5 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2020-2021 The Matrix.org Foundation C.I.C.
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,7 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import os

@@ -120,7 +119,7 @@ class DatabaseConfig(Config):

self.databases = []

def read_config(self, config, **kwargs) -> None:
def read_config(self, config, **kwargs):
# We *experimentally* support specifying multiple databases via the
# `databases` key. This is a map from a label to database config in the
# same format as the `database` config option, plus an extra
@@ -164,12 +163,12 @@ class DatabaseConfig(Config):
self.databases = [DatabaseConnectionConfig("master", database_config)]
self.set_databasepath(database_path)

def generate_config_section(self, data_dir_path, **kwargs) -> str:
def generate_config_section(self, data_dir_path, **kwargs):
return DEFAULT_CONFIG % {
"database_path": os.path.join(data_dir_path, "homeserver.db")
}

def read_arguments(self, args: argparse.Namespace) -> None:
def read_arguments(self, args):
"""
Cases for the cli input:
- If no databases are configured and no database_path is set, raise.
@@ -195,7 +194,7 @@ class DatabaseConfig(Config):
else:
logger.warning(NON_SQLITE_DATABASE_PATH_WARNING)

def set_databasepath(self, database_path: str) -> None:
def set_databasepath(self, database_path):

if database_path != ":memory:":
database_path = self.abspath(database_path)
@@ -203,7 +202,7 @@ class DatabaseConfig(Config):
self.databases[0].config["args"]["database"] = database_path

@staticmethod
def add_arguments(parser: argparse.ArgumentParser) -> None:
def add_arguments(parser):
db_group = parser.add_argument_group("database")
db_group.add_argument(
"-d",

@@ -137,14 +137,33 @@ class EmailConfig(Config):
if self.root.registration.account_threepid_delegate_email
else ThreepidBehaviour.LOCAL
)
# Prior to Synapse v1.4.0, there was another option that defined whether Synapse would
# use an identity server to password reset tokens on its behalf. We now warn the user
# if they have this set and tell them to use the updated option, while using a default
# identity server in the process.
self.using_identity_server_from_trusted_list = False
if (
not self.root.registration.account_threepid_delegate_email
and config.get("trust_identity_server_for_password_resets", False) is True
):
# Use the first entry in self.trusted_third_party_id_servers instead
if self.trusted_third_party_id_servers:
# XXX: It's a little confusing that account_threepid_delegate_email is modified
# both in RegistrationConfig and here. We should factor this bit out

if config.get("trust_identity_server_for_password_resets"):
raise ConfigError(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
"details and update your config file."
)
first_trusted_identity_server = self.trusted_third_party_id_servers[0]

# trusted_third_party_id_servers does not contain a scheme whereas
# account_threepid_delegate_email is expected to. Presume https
self.root.registration.account_threepid_delegate_email = (
"https://" + first_trusted_identity_server
)
self.using_identity_server_from_trusted_list = True
else:
raise ConfigError(
"Attempted to use an identity server from"
'"trusted_third_party_id_servers" but it is empty.'
)

self.local_threepid_handling_disabled_due_to_email_config = False
if (

@@ -46,6 +46,3 @@ class ExperimentalConfig(Config):

# MSC3266 (room summary api)
self.msc3266_enabled: bool = experimental.get("msc3266_enabled", False)

# MSC3030 (Jump to date API endpoint)
self.msc3030_enabled: bool = experimental.get("msc3030_enabled", False)

@@ -31,8 +31,6 @@ class JWTConfig(Config):
self.jwt_secret = jwt_config["secret"]
self.jwt_algorithm = jwt_config["algorithm"]

self.jwt_subject_claim = jwt_config.get("subject_claim", "sub")

# The issuer and audiences are optional, if provided, it is asserted
# that the claims exist on the JWT.
self.jwt_issuer = jwt_config.get("issuer")
@@ -48,7 +46,6 @@ class JWTConfig(Config):
self.jwt_enabled = False
self.jwt_secret = None
self.jwt_algorithm = None
self.jwt_subject_claim = None
self.jwt_issuer = None
self.jwt_audiences = None

@@ -91,12 +88,6 @@ class JWTConfig(Config):
#
#algorithm: "provided-by-your-issuer"

# Name of the claim containing a unique identifier for the user.
#
# Optional, defaults to `sub`.
#
#subject_claim: "sub"

# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and

@@ -16,7 +16,6 @@
import hashlib
import logging
import os
from typing import Any, Dict

import attr
import jsonschema
@@ -313,7 +312,7 @@ class KeyConfig(Config):
)
return keys

def generate_files(self, config: Dict[str, Any], config_dir_path: str) -> None:
def generate_files(self, config, config_dir_path):
if "signing_key" in config:
return

@@ -1,5 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -19,7 +18,7 @@ import os
import sys
import threading
from string import Template
from typing import TYPE_CHECKING, Any, Dict, Optional
from typing import TYPE_CHECKING

import yaml
from zope.interface import implementer
@@ -41,7 +40,6 @@ from synapse.util.versionstring import get_version_string
from ._base import Config, ConfigError

if TYPE_CHECKING:
from synapse.config.homeserver import HomeServerConfig
from synapse.server import HomeServer

DEFAULT_LOG_CONFIG = Template(
@@ -143,13 +141,13 @@ removed in Synapse 1.3.0. You should instead set up a separate log configuration
class LoggingConfig(Config):
section = "logging"

def read_config(self, config, **kwargs) -> None:
def read_config(self, config, **kwargs):
if config.get("log_file"):
raise ConfigError(LOG_FILE_ERROR)
self.log_config = self.abspath(config.get("log_config"))
self.no_redirect_stdio = config.get("no_redirect_stdio", False)

def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
def generate_config_section(self, config_dir_path, server_name, **kwargs):
log_config = os.path.join(config_dir_path, server_name + ".log.config")
return (
"""\
@@ -163,14 +161,14 @@ class LoggingConfig(Config):
% locals()
)

def read_arguments(self, args: argparse.Namespace) -> None:
def read_arguments(self, args):
if args.no_redirect_stdio is not None:
self.no_redirect_stdio = args.no_redirect_stdio
if args.log_file is not None:
raise ConfigError(LOG_FILE_ERROR)

@staticmethod
def add_arguments(parser: argparse.ArgumentParser) -> None:
def add_arguments(parser):
logging_group = parser.add_argument_group("logging")
logging_group.add_argument(
"-n",
@@ -187,7 +185,7 @@ class LoggingConfig(Config):
help=argparse.SUPPRESS,
)

def generate_files(self, config: Dict[str, Any], config_dir_path: str) -> None:
def generate_files(self, config, config_dir_path):
log_config = config.get("log_config")
if log_config and not os.path.exists(log_config):
log_file = self.abspath("homeserver.log")
@@ -199,9 +197,7 @@ class LoggingConfig(Config):
log_config_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=log_file))


def _setup_stdlib_logging(
config: "HomeServerConfig", log_config_path: Optional[str], logBeginner: LogBeginner
) -> None:
def _setup_stdlib_logging(config, log_config_path, logBeginner: LogBeginner) -> None:
"""
Set up Python standard library logging.
"""
@@ -234,7 +230,7 @@ def _setup_stdlib_logging(
log_metadata_filter = MetadataFilter({"server_name": config.server.server_name})
old_factory = logging.getLogRecordFactory()

def factory(*args: Any, **kwargs: Any) -> logging.LogRecord:
def factory(*args, **kwargs):
record = old_factory(*args, **kwargs)
log_context_filter.filter(record)
log_metadata_filter.filter(record)
@@ -301,7 +297,7 @@ def _load_logging_config(log_config_path: str) -> None:
logging.config.dictConfig(log_config)


def _reload_logging_config(log_config_path: Optional[str]) -> None:
def _reload_logging_config(log_config_path):
"""
Reload the log configuration from the file and apply it.
"""
@@ -315,8 +311,8 @@ def _reload_logging_config(log_config_path: Optional[str]) -> None:

def setup_logging(
hs: "HomeServer",
config: "HomeServerConfig",
use_worker_options: bool = False,
config,
use_worker_options=False,
logBeginner: LogBeginner = globalLogBeginner,
) -> None:
"""

@@ -14,7 +14,7 @@
# limitations under the License.

from collections import Counter
from typing import Any, Collection, Iterable, List, Mapping, Optional, Tuple, Type
from typing import Collection, Iterable, List, Mapping, Optional, Tuple, Type

import attr

@@ -36,7 +36,7 @@ LEGACY_USER_MAPPING_PROVIDER = "synapse.handlers.oidc_handler.JinjaOidcMappingPr
class OIDCConfig(Config):
section = "oidc"

def read_config(self, config, **kwargs) -> None:
def read_config(self, config, **kwargs):
self.oidc_providers = tuple(_parse_oidc_provider_configs(config))
if not self.oidc_providers:
return
@@ -66,7 +66,7 @@ class OIDCConfig(Config):
# OIDC is enabled if we have a provider
return bool(self.oidc_providers)

def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
# List of OpenID Connect (OIDC) / OAuth 2.0 identity providers, for registration
# and login.
@@ -495,89 +495,89 @@ def _parse_oidc_config_dict(
)


@attr.s(slots=True, frozen=True, auto_attribs=True)
@attr.s(slots=True, frozen=True)
class OidcProviderClientSecretJwtKey:
# a pem-encoded signing key
key: str
key = attr.ib(type=str)

# properties to include in the JWT header
jwt_header: Mapping[str, str]
jwt_header = attr.ib(type=Mapping[str, str])

# properties to include in the JWT payload.
jwt_payload: Mapping[str, str]
jwt_payload = attr.ib(type=Mapping[str, str])


@attr.s(slots=True, frozen=True, auto_attribs=True)
@attr.s(slots=True, frozen=True)
class OidcProviderConfig:
# a unique identifier for this identity provider. Used in the 'user_external_ids'
# table, as well as the query/path parameter used in the login protocol.
idp_id: str
idp_id = attr.ib(type=str)

# user-facing name for this identity provider.
idp_name: str
idp_name = attr.ib(type=str)

# Optional MXC URI for icon for this IdP.
idp_icon: Optional[str]
idp_icon = attr.ib(type=Optional[str])

# Optional brand identifier for this IdP.
idp_brand: Optional[str]
idp_brand = attr.ib(type=Optional[str])

# whether the OIDC discovery mechanism is used to discover endpoints
discover: bool
discover = attr.ib(type=bool)

# the OIDC issuer. Used to validate tokens and (if discovery is enabled) to
# discover the provider's endpoints.
issuer: str
issuer = attr.ib(type=str)

# oauth2 client id to use
client_id: str
client_id = attr.ib(type=str)

# oauth2 client secret to use. if `None`, use client_secret_jwt_key to generate
# a secret.
client_secret: Optional[str]
client_secret = attr.ib(type=Optional[str])

# key to use to construct a JWT to use as a client secret. May be `None` if
# `client_secret` is set.
client_secret_jwt_key: Optional[OidcProviderClientSecretJwtKey]
client_secret_jwt_key = attr.ib(type=Optional[OidcProviderClientSecretJwtKey])

# auth method to use when exchanging the token.
# Valid values are 'client_secret_basic', 'client_secret_post' and
# 'none'.
client_auth_method: str
client_auth_method = attr.ib(type=str)

# list of scopes to request
scopes: Collection[str]
scopes = attr.ib(type=Collection[str])

# the oauth2 authorization endpoint. Required if discovery is disabled.
authorization_endpoint: Optional[str]
authorization_endpoint = attr.ib(type=Optional[str])

# the oauth2 token endpoint. Required if discovery is disabled.
token_endpoint: Optional[str]
token_endpoint = attr.ib(type=Optional[str])

# the OIDC userinfo endpoint. Required if discovery is disabled and the
# "openid" scope is not requested.
userinfo_endpoint: Optional[str]
userinfo_endpoint = attr.ib(type=Optional[str])

# URI where to fetch the JWKS. Required if discovery is disabled and the
# "openid" scope is used.
jwks_uri: Optional[str]
jwks_uri = attr.ib(type=Optional[str])

# Whether to skip metadata verification
skip_verification: bool
skip_verification = attr.ib(type=bool)

# Whether to fetch the user profile from the userinfo endpoint. Valid
# values are: "auto" or "userinfo_endpoint".
user_profile_method: str
user_profile_method = attr.ib(type=str)

# whether to allow a user logging in via OIDC to match a pre-existing account
# instead of failing
allow_existing_users: bool
allow_existing_users = attr.ib(type=bool)

# the class of the user mapping provider
user_mapping_provider_class: Type
user_mapping_provider_class = attr.ib(type=Type)

# the config of the user mapping provider
user_mapping_provider_config: Any
user_mapping_provider_config = attr.ib()

# required attributes to require in userinfo to allow login/registration
attribute_requirements: List[SsoAttributeRequirement]
attribute_requirements = attr.ib(type=List[SsoAttributeRequirement])

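Every field in these two dataclasses changes spelling but not meaning: with auto_attribs=True, attrs treats plain annotations as attr.ib() declarations, so key: str and key = attr.ib(type=str) produce the same attribute. A self-contained check of that equivalence:

import attr

@attr.s(slots=True, frozen=True, auto_attribs=True)
class Annotated:
    key: str

@attr.s(slots=True, frozen=True)
class Explicit:
    key = attr.ib(type=str)

# Both spellings yield one attrs field named "key" with type str.
assert [(a.name, a.type) for a in attr.fields(Annotated)] == [
    (a.name, a.type) for a in attr.fields(Explicit)
]
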
@@ -1,5 +1,4 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,8 +11,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
from typing import Optional

from synapse.api.constants import RoomCreationPreset
from synapse.config._base import Config, ConfigError
@@ -42,7 +39,9 @@ class RegistrationConfig(Config):
self.registration_shared_secret = config.get("registration_shared_secret")

self.bcrypt_rounds = config.get("bcrypt_rounds", 12)

self.trusted_third_party_id_servers = config.get(
"trusted_third_party_id_servers", ["matrix.org", "vector.im"]
)
account_threepid_delegates = config.get("account_threepid_delegates") or {}
self.account_threepid_delegate_email = account_threepid_delegates.get("email")
self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
@@ -115,74 +114,26 @@ class RegistrationConfig(Config):
session_lifetime = self.parse_duration(session_lifetime)
self.session_lifetime = session_lifetime

# The `refreshable_access_token_lifetime` applies for tokens that can be renewed
# using a refresh token, as per MSC2918.
# If it is `None`, the refresh token mechanism is disabled.
refreshable_access_token_lifetime = config.get(
"refreshable_access_token_lifetime",
"5m",
# The `access_token_lifetime` applies for tokens that can be renewed
# using a refresh token, as per MSC2918. If it is `None`, the refresh
# token mechanism is disabled.
#
# Since it is incompatible with the `session_lifetime` mechanism, it is set to
# `None` by default if a `session_lifetime` is set.
access_token_lifetime = config.get(
"access_token_lifetime", "5m" if session_lifetime is None else None
)
if refreshable_access_token_lifetime is not None:
refreshable_access_token_lifetime = self.parse_duration(
refreshable_access_token_lifetime
if access_token_lifetime is not None:
access_token_lifetime = self.parse_duration(access_token_lifetime)
self.access_token_lifetime = access_token_lifetime

if session_lifetime is not None and access_token_lifetime is not None:
raise ConfigError(
"The refresh token mechanism is incompatible with the "
"`session_lifetime` option. Consider disabling the "
"`session_lifetime` option or disabling the refresh token "
"mechanism by removing the `access_token_lifetime` option."
)
self.refreshable_access_token_lifetime: Optional[
int
] = refreshable_access_token_lifetime

if (
self.session_lifetime is not None
and "refreshable_access_token_lifetime" in config
):
if self.session_lifetime < self.refreshable_access_token_lifetime:
raise ConfigError(
"Both `session_lifetime` and `refreshable_access_token_lifetime` "
"configuration options have been set, but `refreshable_access_token_lifetime` "
" exceeds `session_lifetime`!"
)

# The `nonrefreshable_access_token_lifetime` applies for tokens that can NOT be
# refreshed using a refresh token.
# If it is None, then these tokens last for the entire length of the session,
# which is infinite by default.
# The intention behind this configuration option is to help with requiring
# all clients to use refresh tokens, if the homeserver administrator requires.
nonrefreshable_access_token_lifetime = config.get(
"nonrefreshable_access_token_lifetime",
None,
)
if nonrefreshable_access_token_lifetime is not None:
nonrefreshable_access_token_lifetime = self.parse_duration(
nonrefreshable_access_token_lifetime
)
self.nonrefreshable_access_token_lifetime = nonrefreshable_access_token_lifetime

if (
self.session_lifetime is not None
and self.nonrefreshable_access_token_lifetime is not None
):
if self.session_lifetime < self.nonrefreshable_access_token_lifetime:
raise ConfigError(
"Both `session_lifetime` and `nonrefreshable_access_token_lifetime` "
"configuration options have been set, but `nonrefreshable_access_token_lifetime` "
" exceeds `session_lifetime`!"
)

refresh_token_lifetime = config.get("refresh_token_lifetime")
if refresh_token_lifetime is not None:
refresh_token_lifetime = self.parse_duration(refresh_token_lifetime)
self.refresh_token_lifetime: Optional[int] = refresh_token_lifetime

if (
self.session_lifetime is not None
and self.refresh_token_lifetime is not None
):
if self.session_lifetime < self.refresh_token_lifetime:
raise ConfigError(
"Both `session_lifetime` and `refresh_token_lifetime` "
"configuration options have been set, but `refresh_token_lifetime` "
" exceeds `session_lifetime`!"
)

# The fallback template used for authenticating using a registration token
self.registration_token_template = self.read_template("registration_token.html")
@@ -220,44 +171,6 @@ class RegistrationConfig(Config):
#
#session_lifetime: 24h

# Time that an access token remains valid for, if the session is
# using refresh tokens.
# For more information about refresh tokens, please see the manual.
# Note that this only applies to clients which advertise support for
# refresh tokens.
#
# Note also that this is calculated at login time and refresh time:
# changes are not applied to existing sessions until they are refreshed.
#
# By default, this is 5 minutes.
#
#refreshable_access_token_lifetime: 5m

# Time that a refresh token remains valid for (provided that it is not
# exchanged for another one first).
# This option can be used to automatically log-out inactive sessions.
# Please see the manual for more information.
#
# Note also that this is calculated at login time and refresh time:
# changes are not applied to existing sessions until they are refreshed.
#
# By default, this is infinite.
#
#refresh_token_lifetime: 24h

# Time that an access token remains valid for, if the session is NOT
# using refresh tokens.
# Please note that not all clients support refresh tokens, so setting
# this to a short value may be inconvenient for some users who will
# then be logged out frequently.
#
# Note also that this is calculated at login time: changes are not applied
# retrospectively to existing sessions for users that have already logged in.
#
# By default, this is infinite.
#
#nonrefreshable_access_token_lifetime: 24h

# The user must provide all of the below types of 3PID when registering.
#
#registrations_require_3pid:
@@ -451,7 +364,7 @@ class RegistrationConfig(Config):
)

@staticmethod
def add_arguments(parser: argparse.ArgumentParser) -> None:
def add_arguments(parser):
reg_group = parser.add_argument_group("registration")
reg_group.add_argument(
"--enable-registration",
@@ -460,6 +373,6 @@ class RegistrationConfig(Config):
help="Enable registration for new users.",
)

def read_arguments(self, args: argparse.Namespace) -> None:
def read_arguments(self, args):
if args.enable_registration is not None:
self.enable_registration = strtobool(str(args.enable_registration))

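All of the lifetime options in this hunk are parsed with parse_duration and then cross-checked against session_lifetime. A condensed sketch of that validation (a hypothetical standalone helper mirroring the checks above, with millisecond inputs):

def check_lifetimes(session_lifetime_ms, refreshable_access_token_lifetime_ms):
    # Raise if a refreshable access token could outlive the whole session.
    if (
        session_lifetime_ms is not None
        and refreshable_access_token_lifetime_ms is not None
        and session_lifetime_ms < refreshable_access_token_lifetime_ms
    ):
        raise ValueError(
            "`refreshable_access_token_lifetime` exceeds `session_lifetime`"
        )

check_lifetimes(24 * 60 * 60 * 1000, 5 * 60 * 1000)  # 24h session, 5m tokens: ok
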
@@ -15,12 +15,11 @@
import logging
import os
from collections import namedtuple
from typing import Dict, List, Tuple
from typing import Dict, List
from urllib.request import getproxies_environment # type: ignore

from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST, generate_ip_set
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.types import JsonDict
from synapse.util.module_loader import load_module

from ._base import Config, ConfigError
@@ -58,9 +57,7 @@ MediaStorageProviderConfig = namedtuple(
)


def parse_thumbnail_requirements(
thumbnail_sizes: List[JsonDict],
) -> Dict[str, Tuple[ThumbnailRequirement, ...]]:
def parse_thumbnail_requirements(thumbnail_sizes):
"""Takes a list of dictionaries with "width", "height", and "method" keys
and creates a map from image media types to the thumbnail size, thumbnailing
method, and thumbnail media type to precalculate
@@ -72,7 +69,7 @@ def parse_thumbnail_requirements(
Dictionary mapping from media type string to list of
ThumbnailRequirement tuples.
"""
requirements: Dict[str, List[ThumbnailRequirement]] = {}
requirements: Dict[str, List] = {}
for size in thumbnail_sizes:
width = size["width"]
height = size["height"]

@@ -1,5 +1,4 @@
# Copyright 2018 New Vector Ltd
# Copyright 2021 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,9 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import List

from synapse.types import JsonDict
from synapse.util import glob_to_regex

from ._base import Config, ConfigError
@@ -24,7 +20,7 @@ from ._base import Config, ConfigError
class RoomDirectoryConfig(Config):
section = "roomdirectory"

def read_config(self, config, **kwargs) -> None:
def read_config(self, config, **kwargs):
self.enable_room_list_search = config.get("enable_room_list_search", True)

alias_creation_rules = config.get("alias_creation_rules")
@@ -51,7 +47,7 @@ class RoomDirectoryConfig(Config):
_RoomDirectoryRule("room_list_publication_rules", {"action": "allow"})
]

def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """
# Uncomment to disable searching the public room list. When disabled
# blocks searching local and remote room lists for local and remote
@@ -117,16 +113,16 @@ class RoomDirectoryConfig(Config):
# action: allow
"""

def is_alias_creation_allowed(self, user_id: str, room_id: str, alias: str) -> bool:
def is_alias_creation_allowed(self, user_id, room_id, alias):
"""Checks if the given user is allowed to create the given alias

Args:
user_id: The user to check.
room_id: The room ID for the alias.
alias: The alias being created.
user_id (str)
room_id (str)
alias (str)

Returns:
True if user is allowed to create the alias
boolean: True if user is allowed to create the alias
"""
for rule in self._alias_creation_rules:
if rule.matches(user_id, room_id, [alias]):
@@ -134,18 +130,16 @@ class RoomDirectoryConfig(Config):

return False

def is_publishing_room_allowed(
self, user_id: str, room_id: str, aliases: List[str]
) -> bool:
def is_publishing_room_allowed(self, user_id, room_id, aliases):
"""Checks if the given user is allowed to publish the room

Args:
user_id: The user ID publishing the room.
room_id: The room being published.
aliases: any local aliases associated with the room
user_id (str)
room_id (str)
aliases (list[str]): any local aliases associated with the room

Returns:
True if user can publish room
boolean: True if user can publish room
"""
for rule in self._room_list_publication_rules:
if rule.matches(user_id, room_id, aliases):
@@ -159,11 +153,11 @@ class _RoomDirectoryRule:
creating an alias or publishing a room.
"""

def __init__(self, option_name: str, rule: JsonDict):
def __init__(self, option_name, rule):
"""
Args:
option_name: Name of the config option this rule belongs to
rule: The rule as specified in the config
option_name (str): Name of the config option this rule belongs to
rule (dict): The rule as specified in the config
"""

action = rule["action"]
@@ -187,18 +181,18 @@ class _RoomDirectoryRule:
except Exception as e:
raise ConfigError("Failed to parse glob into regex") from e

def matches(self, user_id: str, room_id: str, aliases: List[str]) -> bool:
def matches(self, user_id, room_id, aliases):
"""Tests if this rule matches the given user_id, room_id and aliases.

Args:
user_id: The user ID to check.
room_id: The room ID to check.
aliases: The associated aliases to the room. Will be a single element
for testing alias creation, and can be empty for testing room
publishing.
user_id (str)
room_id (str)
aliases (list[str]): The associated aliases to the room. Will be a
single element for testing alias creation, and can be empty for
testing room publishing.

Returns:
True if the rule matches.
boolean
"""

# Note: The regexes are anchored at both ends

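Each _RoomDirectoryRule compiles its user_id, room_id and alias globs into anchored regexes, and a rule only applies when all of them match. A toy version of that matching, using fnmatch in place of Synapse's glob_to_regex (the exact empty-alias semantics are simplified here):

import fnmatch

def rule_matches(rule, user_id, room_id, aliases):
    # rule is a dict of globs, e.g. {"user_id": "@bad:*", "room_id": "*", "alias": "*"}.
    if not fnmatch.fnmatch(user_id, rule["user_id"]):
        return False
    if not fnmatch.fnmatch(room_id, rule["room_id"]):
        return False
    return all(fnmatch.fnmatch(alias, rule["alias"]) for alias in aliases)

print(rule_matches(
    {"user_id": "@bad:*", "room_id": "*", "alias": "*"},
    "@bad:example.org", "!room:example.org", ["#room:example.org"],
))  # True
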
@@ -1,5 +1,5 @@
# Copyright 2018 New Vector Ltd
# Copyright 2019-2021 The Matrix.org Foundation C.I.C.
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,11 +14,10 @@
# limitations under the License.

import logging
from typing import Any, List, Set
from typing import Any, List

from synapse.config.sso import SsoAttributeRequirement
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.types import JsonDict
from synapse.util.module_loader import load_module, load_python_module

from ._base import Config, ConfigError
@@ -34,7 +33,7 @@ LEGACY_USER_MAPPING_PROVIDER = (
)


def _dict_merge(merge_dict: dict, into_dict: dict) -> None:
def _dict_merge(merge_dict, into_dict):
"""Do a deep merge of two dicts

Recursively merges `merge_dict` into `into_dict`:
@@ -44,8 +43,8 @@ def _dict_merge(merge_dict: dict, into_dict: dict) -> None:
the value from `merge_dict`.

Args:
merge_dict: dict to merge
into_dict: target dict to be modified
merge_dict (dict): dict to merge
into_dict (dict): target dict
"""
for k, v in merge_dict.items():
if k not in into_dict:
@@ -65,7 +64,7 @@ def _dict_merge(merge_dict: dict, into_dict: dict) -> None:
class SAML2Config(Config):
section = "saml2"

def read_config(self, config, **kwargs) -> None:
def read_config(self, config, **kwargs):
self.saml2_enabled = False

saml2_config = config.get("saml2_config")
@@ -184,8 +183,8 @@ class SAML2Config(Config):
)

def _default_saml_config_dict(
self, required_attributes: Set[str], optional_attributes: Set[str]
) -> JsonDict:
self, required_attributes: set, optional_attributes: set
):
"""Generate a configuration dictionary with required and optional attributes that
will be needed to process new user registration

@@ -196,7 +195,7 @@ class SAML2Config(Config):
additional information to Synapse user accounts, but are not required

Returns:
A SAML configuration dictionary
dict: A SAML configuration dictionary
"""
import saml2

@@ -223,7 +222,7 @@ class SAML2Config(Config):
},
}

def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
## Single sign-on integration ##

@@ -12,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import itertools
import logging
import os.path
@@ -28,7 +27,6 @@ from netaddr import AddrFormatError, IPNetwork, IPSet
from twisted.conch.ssh.keys import Key

from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.types import JsonDict
from synapse.util.module_loader import load_module
from synapse.util.stringutils import parse_and_validate_server_name

@@ -423,7 +421,7 @@ class ServerConfig(Config):
# before redacting them.
redaction_retention_period = config.get("redaction_retention_period", "7d")
if redaction_retention_period is not None:
self.redaction_retention_period: Optional[int] = self.parse_duration(
self.redaction_retention_period = self.parse_duration(
redaction_retention_period
)
else:
@@ -432,7 +430,7 @@ class ServerConfig(Config):
# How long to keep entries in the `users_ips` table.
user_ips_max_age = config.get("user_ips_max_age", "28d")
if user_ips_max_age is not None:
self.user_ips_max_age: Optional[int] = self.parse_duration(user_ips_max_age)
self.user_ips_max_age = self.parse_duration(user_ips_max_age)
else:
self.user_ips_max_age = None

@@ -1225,7 +1223,7 @@ class ServerConfig(Config):
% locals()
)

def read_arguments(self, args: argparse.Namespace) -> None:
def read_arguments(self, args):
if args.manhole is not None:
self.manhole = args.manhole
if args.daemonize is not None:
@@ -1234,7 +1232,7 @@ class ServerConfig(Config):
self.print_pidfile = args.print_pidfile

@staticmethod
def add_arguments(parser: argparse.ArgumentParser) -> None:
def add_arguments(parser):
server_group = parser.add_argument_group("server")
server_group.add_argument(
"-D",
@@ -1276,16 +1274,14 @@ class ServerConfig(Config):
)


def is_threepid_reserved(
reserved_threepids: List[JsonDict], threepid: JsonDict
) -> bool:
def is_threepid_reserved(reserved_threepids, threepid):
"""Check the threepid against the reserved threepid config
Args:
reserved_threepids: List of reserved threepids
threepid: The threepid to test for
reserved_threepids([dict]) - list of reserved threepids
threepid(dict) - The threepid to test for

Returns:
Is the threepid undertest reserved_user
boolean Is the threepid undertest reserved_user
"""

for tp in reserved_threepids:
@@ -1294,9 +1290,7 @@ def is_threepid_reserved(
return False


def read_gc_thresholds(
thresholds: Optional[List[Any]],
) -> Optional[Tuple[int, int, int]]:
def read_gc_thresholds(thresholds):
"""Reads the three integer thresholds for garbage collection. Ensures that
the thresholds are integers if thresholds are supplied.
"""

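read_gc_thresholds, typed on one side of the hunk and untyped on the other, normalises an optional three-item list into a tuple of ints. A compatible sketch (using ValueError where Synapse raises its own ConfigError):

from typing import Any, List, Optional, Tuple

def read_gc_thresholds(
    thresholds: Optional[List[Any]],
) -> Optional[Tuple[int, int, int]]:
    # None means "leave the interpreter's GC thresholds alone".
    if thresholds is None:
        return None
    try:
        assert len(thresholds) == 3
        return int(thresholds[0]), int(thresholds[1]), int(thresholds[2])
    except Exception:
        raise ValueError(
            "Value of `gc_threshold` must be a list of three integers if set"
        )
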
@@ -1,4 +1,4 @@
# Copyright 2020-2021 The Matrix.org Foundation C.I.C.
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -29,13 +29,13 @@ https://matrix-org.github.io/synapse/latest/templates.html
---------------------------------------------------------------------------------------"""


@attr.s(frozen=True, auto_attribs=True)
@attr.s(frozen=True)
class SsoAttributeRequirement:
"""Object describing a single requirement for SSO attributes."""

attribute: str
attribute = attr.ib(type=str)
# If a value is not given, than the attribute must simply exist.
value: Optional[str]
value = attr.ib(type=Optional[str])

JSON_SCHEMA = {
"type": "object",
@@ -49,7 +49,7 @@ class SSOConfig(Config):

section = "sso"

def read_config(self, config, **kwargs) -> None:
def read_config(self, config, **kwargs):
sso_config: Dict[str, Any] = config.get("sso") or {}

# The sso-specific template_dir
@@ -106,7 +106,7 @@ class SSOConfig(Config):
)
self.sso_client_whitelist.append(login_fallback_url)

def generate_config_section(self, **kwargs) -> str:
def generate_config_section(self, **kwargs):
return """\
# Additional settings to use with single-sign on systems such as OpenID Connect,
# SAML2 and CAS.

@@ -14,6 +14,7 @@

import logging
import os
from datetime import datetime
from typing import List, Optional, Pattern

from OpenSSL import SSL, crypto
@@ -132,6 +133,55 @@ class TlsConfig(Config):
self.tls_certificate: Optional[crypto.X509] = None
self.tls_private_key: Optional[crypto.PKey] = None

def is_disk_cert_valid(self, allow_self_signed=True):
"""
Is the certificate we have on disk valid, and if so, for how long?

Args:
allow_self_signed (bool): Should we allow the certificate we
read to be self signed?

Returns:
int: Days remaining of certificate validity.
None: No certificate exists.
"""
if not os.path.exists(self.tls_certificate_file):
return None

try:
with open(self.tls_certificate_file, "rb") as f:
cert_pem = f.read()
except Exception as e:
raise ConfigError(
"Failed to read existing certificate file %s: %s"
% (self.tls_certificate_file, e)
)

try:
tls_certificate = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
except Exception as e:
raise ConfigError(
"Failed to parse existing certificate file %s: %s"
% (self.tls_certificate_file, e)
)

if not allow_self_signed:
if tls_certificate.get_subject() == tls_certificate.get_issuer():
raise ValueError(
"TLS Certificate is self signed, and this is not permitted"
)

# YYYYMMDDhhmmssZ -- in UTC
expiry_data = tls_certificate.get_notAfter()
if expiry_data is None:
raise ValueError(
"TLS Certificate has no expiry date, and this is not permitted"
)
expires_on = datetime.strptime(expiry_data.decode("ascii"), "%Y%m%d%H%M%SZ")
now = datetime.utcnow()
days_remaining = (expires_on - now).days
return days_remaining

def read_certificate_from_disk(self):
"""
Read the certificates and private key from disk.
@@ -213,8 +263,8 @@ class TlsConfig(Config):
#
#federation_certificate_verification_whitelist:
# - lon.example.com
# - "*.domain.com"
# - "*.onion"
# - *.domain.com
# - *.onion

# List of custom certificate authorities for federation traffic.
#
@@ -245,7 +295,7 @@ class TlsConfig(Config):
cert_path = self.tls_certificate_file
logger.info("Loading TLS certificate from %s", cert_path)
cert_pem = self.read_file(cert_path, "tls_certificate_path")
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem.encode())
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)

return cert

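The restored is_disk_cert_valid helper parses the certificate's ASN.1 notAfter timestamp (YYYYMMDDhhmmssZ, in UTC) by hand. The date arithmetic can be checked in isolation with a made-up expiry value:

from datetime import datetime

expiry_data = b"20300101000000Z"  # example notAfter value
expires_on = datetime.strptime(expiry_data.decode("ascii"), "%Y%m%d%H%M%SZ")
days_remaining = (expires_on - datetime.utcnow()).days
print(days_remaining)  # positive while the certificate is still valid
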
@@ -53,8 +53,8 @@ class UserDirectoryConfig(Config):
# indexes were (re)built was before Synapse 1.44, you'll have to
# rebuild the indexes in order to search through all known users.
# These indexes are built the first time Synapse starts; admins can
# manually trigger a rebuild via API following the instructions at
# https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
# manually trigger a rebuild following the instructions at
# https://matrix-org.github.io/synapse/latest/user_directory.html
#
# Uncomment to return search results containing all known users, even if that
# user does not share a room with the requester.

@@ -1,5 +1,4 @@
# Copyright 2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
from typing import List, Union

import attr
@@ -345,7 +343,7 @@ class WorkerConfig(Config):
#worker_replication_secret: ""
"""

def read_arguments(self, args: argparse.Namespace) -> None:
def read_arguments(self, args):
# We support a bunch of command line arguments that override options in
# the config. A lot of these options have a worker_* prefix when running
# on workers so we also have to override them when command line options

@@ -1,4 +1,5 @@
|
||||
# Copyright 2014-2021 The Matrix.org Foundation C.I.C.
|
||||
# Copyright 2014-2016 OpenMarket Ltd
|
||||
# Copyright 2017, 2018 New Vector Ltd
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
@@ -119,6 +120,16 @@ class VerifyJsonRequest:
|
||||
key_ids=key_ids,
|
||||
)
|
||||
|
||||
def to_fetch_key_request(self) -> "_FetchKeyRequest":
|
||||
"""Create a key fetch request for all keys needed to satisfy the
|
||||
verification request.
|
||||
"""
|
||||
return _FetchKeyRequest(
|
||||
server_name=self.server_name,
|
||||
minimum_valid_until_ts=self.minimum_valid_until_ts,
|
||||
key_ids=self.key_ids,
|
||||
)
|
||||
|
||||
|
||||
class KeyLookupError(ValueError):
|
||||
pass
|
||||
@@ -168,22 +179,8 @@ class Keyring:
|
||||
clock=hs.get_clock(),
|
||||
process_batch_callback=self._inner_fetch_key_requests,
|
||||
)
|
||||
|
||||
self._hostname = hs.hostname
|
||||
|
||||
# build a FetchKeyResult for each of our own keys, to shortcircuit the
|
||||
# fetcher.
|
||||
self._local_verify_keys: Dict[str, FetchKeyResult] = {}
|
||||
for key_id, key in hs.config.key.old_signing_keys.items():
|
||||
self._local_verify_keys[key_id] = FetchKeyResult(
|
||||
verify_key=key, valid_until_ts=key.expired_ts
|
||||
)
|
||||
|
||||
vk = get_verify_key(hs.signing_key)
|
||||
self._local_verify_keys[f"{vk.alg}:{vk.version}"] = FetchKeyResult(
|
||||
verify_key=vk,
|
||||
valid_until_ts=2 ** 63, # fake future timestamp
|
||||
)
|
||||
self.verify_key = get_verify_key(hs.signing_key)
|
||||
self.hostname = hs.hostname
|
||||
|
||||
async def verify_json_for_server(
|
||||
self,
|
||||
@@ -270,32 +267,22 @@ class Keyring:
|
||||
Codes.UNAUTHORIZED,
|
||||
)
|
||||
|
||||
found_keys: Dict[str, FetchKeyResult] = {}
|
||||
# If we are the originating server don't fetch verify key for self over federation
|
||||
if verify_request.server_name == self.hostname:
|
||||
await self._process_json(self.verify_key, verify_request)
|
||||
return
|
||||
|
||||
# If we are the originating server, short-circuit the key-fetch for any keys
|
||||
# we already have
|
||||
if verify_request.server_name == self._hostname:
|
||||
for key_id in verify_request.key_ids:
|
||||
if key_id in self._local_verify_keys:
|
||||
found_keys[key_id] = self._local_verify_keys[key_id]
|
||||
# Add the keys we need to verify to the queue for retrieval. We queue
|
||||
# up requests for the same server so we don't end up with many in flight
|
||||
# requests for the same keys.
|
||||
key_request = verify_request.to_fetch_key_request()
|
||||
found_keys_by_server = await self._server_queue.add_to_queue(
|
||||
key_request, key=verify_request.server_name
|
||||
)
|
||||
|
||||
key_ids_to_find = set(verify_request.key_ids) - found_keys.keys()
|
||||
if key_ids_to_find:
|
||||
# Add the keys we need to verify to the queue for retrieval. We queue
|
||||
# up requests for the same server so we don't end up with many in flight
|
||||
# requests for the same keys.
|
||||
key_request = _FetchKeyRequest(
|
||||
server_name=verify_request.server_name,
|
||||
minimum_valid_until_ts=verify_request.minimum_valid_until_ts,
|
||||
key_ids=list(key_ids_to_find),
|
||||
)
|
||||
found_keys_by_server = await self._server_queue.add_to_queue(
|
||||
key_request, key=verify_request.server_name
|
||||
)
|
||||
|
||||
# Since we batch up requests the returned set of keys may contain keys
|
||||
# from other servers, so we pull out only the ones we care about.
|
||||
found_keys.update(found_keys_by_server.get(verify_request.server_name, {}))
|
||||
# Since we batch up requests the returned set of keys may contain keys
|
||||
# from other servers, so we pull out only the ones we care about.s
|
||||
found_keys = found_keys_by_server.get(verify_request.server_name, {})
|
||||
|
||||
# Verify each signature we got valid keys for, raising if we can't
|
||||
# verify any of them.
@@ -667,25 +654,21 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
perspective_name,
)

request: JsonDict = {}
for queue_value in keys_to_fetch:
# there may be multiple requests for each server, so we have to merge
# them intelligently.
request_for_server = {
key_id: {
"minimum_valid_until_ts": queue_value.minimum_valid_until_ts,
}
for key_id in queue_value.key_ids
}
request.setdefault(queue_value.server_name, {}).update(request_for_server)

logger.debug("Request to notary server %s: %s", perspective_name, request)

try:
query_response = await self.client.post_json(
destination=perspective_name,
path="/_matrix/key/v2/query",
data={"server_keys": request},
data={
"server_keys": {
queue_value.server_name: {
key_id: {
"minimum_valid_until_ts": queue_value.minimum_valid_until_ts,
}
for key_id in queue_value.key_ids
}
for queue_value in keys_to_fetch
}
},
)
except (NotRetryingDestination, RequestSendFailed) as e:
# these both have str() representations which we can't really improve upon
@@ -693,10 +676,6 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
except HttpResponseException as e:
raise KeyLookupError("Remote server returned an error: %s" % (e,))

logger.debug(
"Response from notary server %s: %s", perspective_name, query_response
)

keys: Dict[str, Dict[str, FetchKeyResult]] = {}
added_keys: List[Tuple[str, str, FetchKeyResult]] = []
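For context on the two request-building strategies above: the newer code first merges all queued `_FetchKeyRequest`s into a single mapping, keyed by server name and then by key ID, before POSTing it to the notary. The merged body looks roughly like this (server names, key IDs and timestamps invented):

```python
# Shape of the merged "server_keys" request body sent to
# /_matrix/key/v2/query; all values here are illustrative.
request = {
    "example.org": {
        "ed25519:key1": {"minimum_valid_until_ts": 1638400000000},
        "ed25519:key2": {"minimum_valid_until_ts": 1638400000000},
    },
    "other.example.com": {
        "ed25519:key3": {"minimum_valid_until_ts": 1638400300000},
    },
}
payload = {"server_keys": request}
```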

@@ -128,12 +128,14 @@ class EventBuilder:
)

format_version = self.room_version.event_format
# The types of auth/prev events changes between event versions.
prev_events: Union[List[str], List[Tuple[str, Dict[str, str]]]]
auth_events: Union[List[str], List[Tuple[str, Dict[str, str]]]]
if format_version == EventFormatVersions.V1:
auth_events = await self._store.add_event_hashes(auth_event_ids)
prev_events = await self._store.add_event_hashes(prev_event_ids)
# The types of auth/prev events changes between event versions.
auth_events: Union[
List[str], List[Tuple[str, Dict[str, str]]]
] = await self._store.add_event_hashes(auth_event_ids)
prev_events: Union[
List[str], List[Tuple[str, Dict[str, str]]]
] = await self._store.add_event_hashes(prev_event_ids)
else:
auth_events = auth_event_ids
prev_events = prev_event_ids
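Both sides of this hunk encode the same rule: in the version 1 event format, `prev_events`/`auth_events` are `(event_id, hashes)` pairs, while newer formats use bare event ID strings. Illustrative values (IDs invented):

```python
# Version 1 event format: (event_id, hash dict) tuples.
prev_events_v1 = [("$abc123:example.org", {"sha256": "<base64 hash>"})]

# Later event formats: plain event ID strings.
prev_events_v2 = ["$QxnYZStGa2GrZFzaEomcFUbKRDnBFcQJXRkzUwDcfTc"]
```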

@@ -322,11 +322,6 @@ class _AsyncEventContextImpl(EventContext):
attributes by loading from the database.
"""
if self.state_group is None:
# No state group means the event is an outlier. Usually the state_ids dicts are also
# pre-set to empty dicts, but they get reset when the context is serialized, so set
# them to empty dicts again here.
self._current_state_ids = {}
self._prev_state_ids = {}
return

current_state_ids = await self._storage.state.get_state_ids_for_group(

@@ -1,5 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -306,7 +305,6 @@ def format_event_for_client_v2_without_room_id(d: JsonDict) -> JsonDict:
def serialize_event(
e: Union[JsonDict, EventBase],
time_now_ms: int,
*,
as_client_event: bool = True,
event_format: Callable[[JsonDict], JsonDict] = format_event_for_client_v1,
token_id: Optional[str] = None,
@@ -394,18 +392,15 @@ class EventClientSerializer:
self,
event: Union[JsonDict, EventBase],
time_now: int,
*,
bundle_aggregations: bool = True,
**kwargs: Any,
) -> JsonDict:
"""Serializes a single event.

Args:
event: The event being serialized.
event
time_now: The current time in milliseconds
bundle_aggregations: Whether to include the bundled aggregations for this
event. Only applies to non-state events. (State events never include
bundled aggregations.)
bundle_aggregations: Whether to bundle in related events
**kwargs: Arguments to pass to `serialize_event`

Returns:
@@ -415,109 +410,76 @@ class EventClientSerializer:
if not isinstance(event, EventBase):
return event

event_id = event.event_id
serialized_event = serialize_event(event, time_now, **kwargs)

# Check if there are any bundled aggregations to include with the event.
#
# Do not bundle aggregations if any of the following are true:
#
# * Support is disabled via the configuration or the caller.
# * The event is a state event.
# * The event has been redacted.
if (
self._msc1849_enabled
and bundle_aggregations
and not event.is_state()
and not event.internal_metadata.is_redacted()
# If MSC1849 is enabled then we need to look if there are any relations
# we need to bundle in with the event.
# Do not bundle relations if the event has been redacted
if not event.internal_metadata.is_redacted() and (
self._msc1849_enabled and bundle_aggregations
):
await self._injected_bundled_aggregations(event, time_now, serialized_event)
annotations = await self.store.get_aggregation_groups_for_event(event_id)
references = await self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f"
)

return serialized_event
if annotations.chunk:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.ANNOTATION] = annotations.to_dict()

async def _injected_bundled_aggregations(
self, event: EventBase, time_now: int, serialized_event: JsonDict
) -> None:
"""Potentially injects bundled aggregations into the unsigned portion of the serialized event.
if references.chunk:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.REFERENCE] = references.to_dict()

Args:
event: The event being serialized.
time_now: The current time in milliseconds
serialized_event: The serialized event which may be modified.
edit = None
if event.type == EventTypes.Message:
edit = await self.store.get_applicable_edit(event_id)

"""
# Do not bundle aggregations for an event which represents an edit or an
# annotation. It does not make sense for them to have related events.
relates_to = event.content.get("m.relates_to")
if isinstance(relates_to, (dict, frozendict)):
relation_type = relates_to.get("rel_type")
if relation_type in (RelationTypes.ANNOTATION, RelationTypes.REPLACE):
return
if edit:
# If there is an edit replace the content, preserving existing
# relations.

event_id = event.event_id
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
edit_content = edit.content.copy()

# The bundled aggregations to include.
aggregations = {}
# Unfreeze the event content if necessary, so that we may modify it below
edit_content = unfreeze(edit_content)
serialized_event["content"] = edit_content.get("m.new_content", {})

annotations = await self.store.get_aggregation_groups_for_event(event_id)
if annotations.chunk:
aggregations[RelationTypes.ANNOTATION] = annotations.to_dict()
# Check for existing relations
relations = event.content.get("m.relates_to")
if relations:
# Keep the relations, ensuring we use a dict copy of the original
serialized_event["content"]["m.relates_to"] = relations.copy()
else:
serialized_event["content"].pop("m.relates_to", None)

references = await self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f"
)
if references.chunk:
aggregations[RelationTypes.REFERENCE] = references.to_dict()

edit = None
if event.type == EventTypes.Message:
edit = await self.store.get_applicable_edit(event_id)

if edit:
# If there is an edit replace the content, preserving existing
# relations.

# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
edit_content = edit.content.copy()

# Unfreeze the event content if necessary, so that we may modify it below
edit_content = unfreeze(edit_content)
serialized_event["content"] = edit_content.get("m.new_content", {})

# Check for existing relations
relates_to = event.content.get("m.relates_to")
if relates_to:
# Keep the relations, ensuring we use a dict copy of the original
serialized_event["content"]["m.relates_to"] = relates_to.copy()
else:
serialized_event["content"].pop("m.relates_to", None)

aggregations[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
"sender": edit.sender,
}

# If this event is the start of a thread, include a summary of the replies.
if self._msc3440_enabled:
(
thread_count,
latest_thread_event,
) = await self.store.get_thread_summary(event_id)
if latest_thread_event:
aggregations[RelationTypes.THREAD] = {
# Don't bundle aggregations as this could recurse forever.
"latest_event": await self.serialize_event(
latest_thread_event, time_now, bundle_aggregations=False
),
"count": thread_count,
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
"sender": edit.sender,
}

# If any bundled aggregations were found, include them.
if aggregations:
serialized_event["unsigned"].setdefault("m.relations", {}).update(
aggregations
)
# If this event is the start of a thread, include a summary of the replies.
if self._msc3440_enabled:
(
thread_count,
latest_thread_event,
) = await self.store.get_thread_summary(event_id)
if latest_thread_event:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.THREAD] = {
# Don't bundle aggregations as this could recurse forever.
"latest_event": await self.serialize_event(
latest_thread_event, time_now, bundle_aggregations=False
),
"count": thread_count,
}

return serialized_event

async def serialize_events(
self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any
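Whichever code path runs, the bundled aggregations end up under `unsigned["m.relations"]` on the serialized event, per MSC2675. A hedged sketch of the resulting shape (all values invented; the thread key shown is assumed to be the unstable MSC3440 identifier used at the time):

```python
serialized_event = {
    "event_id": "$parent:example.org",
    "content": {"body": "hello"},
    "unsigned": {
        "m.relations": {
            # RelationTypes.ANNOTATION: grouped reactions.
            "m.annotation": {
                "chunk": [{"type": "m.reaction", "key": "👍", "count": 3}]
            },
            # RelationTypes.REPLACE: pointer to the latest edit.
            "m.replace": {
                "event_id": "$edit:example.org",
                "origin_server_ts": 1638400000000,
                "sender": "@alice:example.org",
            },
            # RelationTypes.THREAD (unstable name, an assumption here).
            "io.element.thread": {
                "latest_event": {"event_id": "$reply:example.org"},
                "count": 7,
            },
        }
    },
}
```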

@@ -128,7 +128,7 @@ class FederationClient(FederationBase):
reset_expiry_on_get=False,
)

def _clear_tried_cache(self) -> None:
def _clear_tried_cache(self):
"""Clear pdu_destination_tried cache"""
now = self._clock.time_msec()

@@ -277,58 +277,6 @@ class FederationClient(FederationBase):

return pdus

async def get_pdu_from_destination_raw(
self,
destination: str,
event_id: str,
room_version: RoomVersion,
outlier: bool = False,
timeout: Optional[int] = None,
) -> Optional[EventBase]:
"""Requests the PDU with given origin and ID from the remote home
server. Does not have any caching or rate limiting!

Args:
destination: Which homeserver to query
event_id: event to fetch
room_version: version of the room
outlier: Indicates whether the PDU is an `outlier`, i.e. if
it's from an arbitrary point in the context as opposed to part
of the current block of PDUs. Defaults to `False`
timeout: How long to try (in ms) each destination for before
moving to the next destination. None indicates no timeout.

Returns:
The requested PDU, or None if we were unable to find it.

Raises:
SynapseError, NotRetryingDestination, FederationDeniedError
"""
transaction_data = await self.transport_layer.get_event(
destination, event_id, timeout=timeout
)

logger.debug(
"retrieved event id %s from %s: %r",
event_id,
destination,
transaction_data,
)

pdu_list: List[EventBase] = [
event_from_pdu_json(p, room_version, outlier=outlier)
for p in transaction_data["pdus"]
]

if pdu_list and pdu_list[0]:
pdu = pdu_list[0]

# Check signatures are correct.
signed_pdu = await self._check_sigs_and_hash(room_version, pdu)
return signed_pdu

return None
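Reviewer note: `get_pdu_from_destination_raw` (present on only one side of this diff) is deliberately a thin wrapper with no caching or rate limiting, so callers own the retry policy. A hedged usage sketch, assuming an already-configured `FederationClient` (destination and event ID invented):

```python
from typing import Optional

from synapse.api.room_versions import RoomVersions
from synapse.events import EventBase
from synapse.federation.federation_client import FederationClient


async def fetch_event_once(client: FederationClient) -> Optional[EventBase]:
    # One uncached attempt against a single destination; exceptions such as
    # NotRetryingDestination are left to the caller.
    return await client.get_pdu_from_destination_raw(
        destination="remote.example.org",
        event_id="$someevent:remote.example.org",
        room_version=RoomVersions.V6,
        timeout=10000,
    )
```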

async def get_pdu(
self,
destinations: Iterable[str],
@@ -373,14 +321,30 @@ class FederationClient(FederationBase):
continue

try:
signed_pdu = await self.get_pdu_from_destination_raw(
destination=destination,
event_id=event_id,
room_version=room_version,
outlier=outlier,
timeout=timeout,
transaction_data = await self.transport_layer.get_event(
destination, event_id, timeout=timeout
)

logger.debug(
"retrieved event id %s from %s: %r",
event_id,
destination,
transaction_data,
)

pdu_list: List[EventBase] = [
event_from_pdu_json(p, room_version, outlier=outlier)
for p in transaction_data["pdus"]
]

if pdu_list and pdu_list[0]:
pdu = pdu_list[0]

# Check signatures are correct.
signed_pdu = await self._check_sigs_and_hash(room_version, pdu)

break

pdu_attempts[destination] = now

except SynapseError as e:
@@ -800,7 +764,7 @@ class FederationClient(FederationBase):
no servers successfully handle the request.
"""

async def send_request(destination: str) -> SendJoinResult:
async def send_request(destination) -> SendJoinResult:
response = await self._do_send_join(room_version, destination, pdu)

# If an event was returned (and expected to be returned):
@@ -1395,28 +1359,11 @@ class FederationClient(FederationBase):
async def send_request(
destination: str,
) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]:
try:
res = await self.transport_layer.get_room_hierarchy(
destination=destination,
room_id=room_id,
suggested_only=suggested_only,
)
except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint,
# fallback to the unstable endpoint. Otherwise consider it a
# legitimate error and raise.
if not self._is_unknown_endpoint(e):
raise

logger.debug(
"Couldn't fetch room hierarchy with the v1 API, falling back to the unstable API"
)

res = await self.transport_layer.get_room_hierarchy_unstable(
destination=destination,
room_id=room_id,
suggested_only=suggested_only,
)
res = await self.transport_layer.get_room_hierarchy(
destination=destination,
room_id=room_id,
suggested_only=suggested_only,
)

room = res.get("room")
if not isinstance(room, dict):
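The removed branch above is the usual stable-then-unstable endpoint dance: only fall back to the unstable API when the error says the remote does not know the stable endpoint. A generic sketch of the pattern, with hypothetical fetchers standing in for the transport-layer methods:

```python
class HttpResponseException(Exception):
    # Simplified stand-in for Synapse's exception of the same name.
    def __init__(self, code: int) -> None:
        self.code = code

async def fetch_v1(destination: str, room_id: str) -> dict:
    raise HttpResponseException(404)  # pretend the remote is too old

async def fetch_unstable(destination: str, room_id: str) -> dict:
    return {"room": {"room_id": room_id}, "children": []}

def is_unknown_endpoint(e: HttpResponseException) -> bool:
    # Synapse's real check also inspects the errcode; 404 is the simple case.
    return e.code == 404

async def get_room_hierarchy(destination: str, room_id: str) -> dict:
    try:
        return await fetch_v1(destination, room_id)
    except HttpResponseException as e:
        if not is_unknown_endpoint(e):
            raise  # a legitimate error from a server that knows the endpoint
        return await fetch_unstable(destination, room_id)
```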
@@ -1466,10 +1413,6 @@ class FederationClient(FederationBase):
if e.code != 502:
raise

logger.debug(
"Couldn't fetch room hierarchy, falling back to the spaces API"
)

# Fallback to the old federation API and translate the results if
# no servers implement the new API.
#
@@ -1517,83 +1460,6 @@ class FederationClient(FederationBase):
self._get_room_hierarchy_cache[(room_id, suggested_only)] = result
return result

async def timestamp_to_event(
self, destination: str, room_id: str, timestamp: int, direction: str
) -> "TimestampToEventResponse":
"""
Calls a remote federating server at `destination` asking for their
closest event to the given timestamp in the given direction. Also
validates the response to always return the expected keys or raises an
error.

Args:
destination: Domain name of the remote homeserver
room_id: Room to fetch the event from
timestamp: The point in time (inclusive) we should navigate from in
the given direction to find the closest event.
direction: ["f"|"b"] to indicate whether we should navigate forward
or backward from the given timestamp to find the closest event.

Returns:
A parsed TimestampToEventResponse including the closest event_id
and origin_server_ts

Raises:
Various exceptions when the request fails
InvalidResponseError when the response does not have the correct
keys or wrong types
"""
remote_response = await self.transport_layer.timestamp_to_event(
destination, room_id, timestamp, direction
)

if not isinstance(remote_response, dict):
raise InvalidResponseError(
"Response must be a JSON dictionary but received %r" % remote_response
)

try:
return TimestampToEventResponse.from_json_dict(remote_response)
except ValueError as e:
raise InvalidResponseError(str(e))

@attr.s(frozen=True, slots=True, auto_attribs=True)
class TimestampToEventResponse:
"""Typed response dictionary for the federation /timestamp_to_event endpoint"""

event_id: str
origin_server_ts: int

# the raw data, including the above keys
data: JsonDict

@classmethod
def from_json_dict(cls, d: JsonDict) -> "TimestampToEventResponse":
"""Parsed response from the federation /timestamp_to_event endpoint

Args:
d: JSON object response to be parsed

Raises:
ValueError if d does not have the correct keys or they are the wrong types
"""

event_id = d.get("event_id")
if not isinstance(event_id, str):
raise ValueError(
"Invalid response: 'event_id' must be a str but received %r" % event_id
)

origin_server_ts = d.get("origin_server_ts")
if not isinstance(origin_server_ts, int):
raise ValueError(
"Invalid response: 'origin_server_ts' must be a int but received %r"
% origin_server_ts
)

return cls(event_id, origin_server_ts, d)
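A quick illustration of the validation above (values invented; assumes the class from this hunk is importable):

```python
from synapse.federation.federation_client import TimestampToEventResponse

# A well-formed response parses into the attrs class.
resp = TimestampToEventResponse.from_json_dict(
    {"event_id": "$abc:example.org", "origin_server_ts": 1638400000000}
)
assert resp.event_id == "$abc:example.org"
assert resp.data["origin_server_ts"] == 1638400000000

# Missing or mistyped keys raise ValueError, which timestamp_to_event
# converts into an InvalidResponseError.
try:
    TimestampToEventResponse.from_json_dict({"event_id": 123})
except ValueError as e:
    print(e)
```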

@attr.s(frozen=True, slots=True, auto_attribs=True)
class FederationSpaceSummaryEventResult:

@@ -1,6 +1,6 @@
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019-2021 Matrix.org Federation C.I.C
# Copyright 2019 Matrix.org Federation C.I.C
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -110,7 +110,6 @@ class FederationServer(FederationBase):
super().__init__(hs)

self.handler = hs.get_federation_handler()
self.storage = hs.get_storage()
self._federation_event_handler = hs.get_federation_event_handler()
self.state = hs.get_state_handler()
self._event_auth_handler = hs.get_event_auth_handler()
@@ -201,48 +200,6 @@ class FederationServer(FederationBase):

return 200, res

async def on_timestamp_to_event_request(
self, origin: str, room_id: str, timestamp: int, direction: str
) -> Tuple[int, Dict[str, Any]]:
"""When we receive a federated `/timestamp_to_event` request,
handle all of the logic for validating and fetching the event.

Args:
origin: The server we received the event from
room_id: Room to fetch the event from
timestamp: The point in time (inclusive) we should navigate from in
the given direction to find the closest event.
direction: ["f"|"b"] to indicate whether we should navigate forward
or backward from the given timestamp to find the closest event.

Returns:
Tuple indicating the response status code and dictionary response
body including `event_id`.
"""
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)

# We only try to fetch data from the local database
event_id = await self.store.get_event_id_for_timestamp(
room_id, timestamp, direction
)
if event_id:
event = await self.store.get_event(
event_id, allow_none=False, allow_rejected=False
)

return 200, {
"event_id": event_id,
"origin_server_ts": event.origin_server_ts,
}

raise SynapseError(
404,
"Unable to find event from %s in direction %s" % (timestamp, direction),
errcode=Codes.NOT_FOUND,
)
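So a federated `/timestamp_to_event` lookup either returns a minimal 200 body or a 404 with `M_NOT_FOUND`. Illustrative payloads (values invented):

```python
# 200 response when an event is found near the requested timestamp:
found = {"event_id": "$abc:example.org", "origin_server_ts": 1638400000000}

# 404 error body when nothing matches in the requested direction:
not_found = {
    "errcode": "M_NOT_FOUND",
    "error": "Unable to find event from 1638400000000 in direction f",
}
```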

async def on_incoming_transaction(
self,
origin: str,
@@ -450,7 +407,7 @@ class FederationServer(FederationBase):
# require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu.

async def process_pdus_for_room(room_id: str) -> None:
async def process_pdus_for_room(room_id: str):
with nested_logging_context(room_id):
logger.debug("Processing PDUs for %s", room_id)

@@ -547,7 +504,7 @@ class FederationServer(FederationBase):

async def on_state_ids_request(
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, JsonDict]:
) -> Tuple[int, Dict[str, Any]]:
if not event_id:
raise NotImplementedError("Specify an event")

@@ -567,9 +524,7 @@ class FederationServer(FederationBase):

return 200, resp

async def _on_state_ids_request_compute(
self, room_id: str, event_id: str
) -> JsonDict:
async def _on_state_ids_request_compute(self, room_id, event_id):
state_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = await self.store.get_auth_chain_ids(room_id, state_ids)
return {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
@@ -658,11 +613,8 @@ class FederationServer(FederationBase):
state = await self.store.get_events(state_ids)

time_now = self._clock.time_msec()
event_json = event.get_pdu_json()
return {
# TODO Remove the unstable prefix when servers have updated.
"org.matrix.msc3083.v2.event": event_json,
"event": event_json,
"org.matrix.msc3083.v2.event": event.get_pdu_json(),
"state": [p.get_pdu_json(time_now) for p in state.values()],
"auth_chain": [p.get_pdu_json(time_now) for p in auth_chain],
}

@@ -1,5 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -24,7 +23,6 @@ from typing import Optional, Tuple

from synapse.federation.units import Transaction
from synapse.logging.utils import log_function
from synapse.storage.databases.main import DataStore
from synapse.types import JsonDict

logger = logging.getLogger(__name__)
@@ -33,7 +31,7 @@ logger = logging.getLogger(__name__)
class TransactionActions:
"""Defines persistence actions that relate to handling Transactions."""

def __init__(self, datastore: DataStore):
def __init__(self, datastore):
self.store = datastore

@log_function

@@ -1,5 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -351,7 +350,7 @@ class BaseFederationRow:
TypeId = "" # Unique string that ids the type. Must be overridden in sub classes.

@staticmethod
def from_data(data: JsonDict) -> "BaseFederationRow":
def from_data(data):
"""Parse the data from the federation stream into a row.

Args:
@@ -360,7 +359,7 @@ class BaseFederationRow:
"""
raise NotImplementedError()

def to_data(self) -> JsonDict:
def to_data(self):
"""Serialize this row to be sent over the federation stream.

Returns:
@@ -369,7 +368,7 @@ class BaseFederationRow:
"""
raise NotImplementedError()

def add_to_buffer(self, buff: "ParsedFederationStreamData") -> None:
def add_to_buffer(self, buff):
"""Add this row to the appropriate field in the buffer ready for this
to be sent over federation.

@@ -392,15 +391,15 @@ class PresenceDestinationsRow(
TypeId = "pd"

@staticmethod
def from_data(data: JsonDict) -> "PresenceDestinationsRow":
def from_data(data):
return PresenceDestinationsRow(
state=UserPresenceState.from_dict(data["state"]), destinations=data["dests"]
)

def to_data(self) -> JsonDict:
def to_data(self):
return {"state": self.state.as_dict(), "dests": self.destinations}

def add_to_buffer(self, buff: "ParsedFederationStreamData") -> None:
def add_to_buffer(self, buff):
buff.presence_destinations.append((self.state, self.destinations))

@@ -418,13 +417,13 @@ class KeyedEduRow(
TypeId = "k"

@staticmethod
def from_data(data: JsonDict) -> "KeyedEduRow":
def from_data(data):
return KeyedEduRow(key=tuple(data["key"]), edu=Edu(**data["edu"]))

def to_data(self) -> JsonDict:
def to_data(self):
return {"key": self.key, "edu": self.edu.get_internal_dict()}

def add_to_buffer(self, buff: "ParsedFederationStreamData") -> None:
def add_to_buffer(self, buff):
buff.keyed_edus.setdefault(self.edu.destination, {})[self.key] = self.edu

@@ -434,13 +433,13 @@ class EduRow(BaseFederationRow, namedtuple("EduRow", ("edu",))): # Edu
TypeId = "e"

@staticmethod
def from_data(data: JsonDict) -> "EduRow":
def from_data(data):
return EduRow(Edu(**data))

def to_data(self) -> JsonDict:
def to_data(self):
return self.edu.get_internal_dict()

def add_to_buffer(self, buff: "ParsedFederationStreamData") -> None:
def add_to_buffer(self, buff):
buff.edus.setdefault(self.edu.destination, []).append(self.edu)
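All of these row classes share one contract: `from_data`/`to_data` round-trip a JSON payload, and `add_to_buffer` files the row into the matching slot of the parsed-stream buffer. A hedged round-trip sketch for `EduRow` (field values invented; assumes `Edu` takes the keyword fields used by `Edu(**data)` in this file):

```python
from synapse.federation.send_queue import EduRow
from synapse.federation.units import Edu

row = EduRow(
    Edu(
        origin="us.example.org",
        destination="them.example.org",
        edu_type="m.typing",
        content={"room_id": "!room:example.org", "typing": True},
    )
)

# to_data()/from_data() should invert each other over the stream.
assert EduRow.from_data(row.to_data()).edu.destination == "them.example.org"
```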

@@ -1,6 +1,5 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2019 New Vector Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -15,8 +14,7 @@
# limitations under the License.
import datetime
import logging
from types import TracebackType
from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple, Type
from typing import TYPE_CHECKING, Dict, Hashable, Iterable, List, Optional, Tuple

import attr
from prometheus_client import Counter
@@ -215,7 +213,7 @@ class PerDestinationQueue:
self._pending_edus_keyed[(edu.edu_type, key)] = edu
self.attempt_new_transaction()

def send_edu(self, edu: Edu) -> None:
def send_edu(self, edu) -> None:
self._pending_edus.append(edu)
self.attempt_new_transaction()

@@ -703,12 +701,7 @@ class _TransactionQueueManager:

return self._pdus, pending_edus

async def __aexit__(
self,
exc_type: Optional[Type[BaseException]],
exc: Optional[BaseException],
tb: Optional[TracebackType],
) -> None:
async def __aexit__(self, exc_type, exc, tb):
if exc_type is not None:
# Failed to send transaction, so we bail out.
return

@@ -21,7 +21,6 @@ from typing import (
Callable,
Collection,
Dict,
Generator,
Iterable,
List,
Mapping,
@@ -149,42 +148,6 @@ class TransportLayerClient:
destination, path=path, args=args, try_trailing_slash_on_400=True
)

@log_function
async def timestamp_to_event(
self, destination: str, room_id: str, timestamp: int, direction: str
) -> Union[JsonDict, List]:
"""
Calls a remote federating server at `destination` asking for their
closest event to the given timestamp in the given direction.

Args:
destination: Domain name of the remote homeserver
room_id: Room to fetch the event from
timestamp: The point in time (inclusive) we should navigate from in
the given direction to find the closest event.
direction: ["f"|"b"] to indicate whether we should navigate forward
or backward from the given timestamp to find the closest event.

Returns:
Response dict received from the remote homeserver.

Raises:
Various exceptions when the request fails
"""
path = _create_path(
FEDERATION_UNSTABLE_PREFIX,
"/org.matrix.msc3030/timestamp_to_event/%s",
room_id,
)

args = {"ts": [str(timestamp)], "dir": [direction]}

remote_response = await self.client.get_json(
destination, path=path, args=args, try_trailing_slash_on_400=True
)

return remote_response
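Concretely, the removed transport method above issues a GET against the MSC3030 unstable prefix. A sketch of the request it builds (destination, room ID and values invented; the room ID is percent-encoded on the wire):

```python
# Approximate path and query string produced by the call above.
path = (
    "/_matrix/federation/unstable/org.matrix.msc3030"
    "/timestamp_to_event/%21room%3Aexample.org"
)
query = {"ts": "1638400000000", "dir": "f"}
# i.e. GET https://remote.example.org<path>?ts=...&dir=f
```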

@log_function
async def send_transaction(
self,
@@ -236,16 +199,11 @@ class TransportLayerClient:

@log_function
async def make_query(
self,
destination: str,
query_type: str,
args: dict,
retry_on_dns_fail: bool,
ignore_backoff: bool = False,
) -> JsonDict:
self, destination, query_type, args, retry_on_dns_fail, ignore_backoff=False
):
path = _create_v1_path("/query/%s", query_type)

return await self.client.get_json(
content = await self.client.get_json(
destination=destination,
path=path,
args=args,
@@ -254,6 +212,8 @@ class TransportLayerClient:
ignore_backoff=ignore_backoff,
)

return content

@log_function
async def make_membership_event(
self,
@@ -1232,24 +1192,10 @@ class TransportLayerClient:
)

async def get_room_hierarchy(
self, destination: str, room_id: str, suggested_only: bool
) -> JsonDict:
"""
Args:
destination: The remote server
room_id: The room ID to ask about.
suggested_only: if True, only suggested rooms will be returned
"""
path = _create_v1_path("/hierarchy/%s", room_id)

return await self.client.get_json(
destination=destination,
path=path,
args={"suggested_only": "true" if suggested_only else "false"},
)

async def get_room_hierarchy_unstable(
self, destination: str, room_id: str, suggested_only: bool
self,
destination: str,
room_id: str,
suggested_only: bool,
) -> JsonDict:
"""
Args:
@@ -1321,7 +1267,7 @@ class SendJoinResponse:

@ijson.coroutine
def _event_parser(event_dict: JsonDict) -> Generator[None, Tuple[str, Any], None]:
def _event_parser(event_dict: JsonDict):
"""Helper function for use with `ijson.kvitems_coro` to parse key-value pairs
to add them to a given dictionary.
"""
@@ -1332,9 +1278,7 @@ def _event_parser(event_dict: JsonDict) -> Generator[None, Tuple[str, Any], None

@ijson.coroutine
def _event_list_parser(
room_version: RoomVersion, events: List[EventBase]
) -> Generator[None, JsonDict, None]:
def _event_list_parser(room_version: RoomVersion, events: List[EventBase]):
"""Helper function for use with `ijson.items_coro` to parse an array of
events and add them to the given list.
"""
@@ -1373,26 +1317,15 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
prefix + "auth_chain.item",
use_float=True,
)
# TODO Remove the unstable prefix when servers have updated.
#
# By re-using the same event dictionary this will cause the parsing of
# org.matrix.msc3083.v2.event and event to stomp over each other.
# Generally this should be fine.
self._coro_unstable_event = ijson.kvitems_coro(
_event_parser(self._response.event_dict),
prefix + "org.matrix.msc3083.v2.event",
use_float=True,
)
self._coro_event = ijson.kvitems_coro(
_event_parser(self._response.event_dict),
prefix + "event",
prefix + "org.matrix.msc3083.v2.event",
use_float=True,
)

def write(self, data: bytes) -> int:
self._coro_state.send(data)
self._coro_auth.send(data)
self._coro_unstable_event.send(data)
self._coro_event.send(data)

return len(data)
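Background on the parser being modified here: `ijson.kvitems_coro` turns a byte stream into key/value pairs pushed into a coroutine, which is how `SendJoinParser` fills in the event dictionary without buffering the whole `/send_join` response. A small self-contained sketch of the same mechanism:

```python
import ijson


@ijson.coroutine
def _collect(into: dict):
    # Receives (key, value) pairs for the object at the watched prefix.
    while True:
        key, value = yield
        into[key] = value


event_dict: dict = {}
coro = ijson.kvitems_coro(_collect(event_dict), "event")
coro.send(b'{"event": {"type": "m.room.member", "membership": "join"}}')
print(event_dict)  # {'type': 'm.room.member', 'membership': 'join'}
```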
Some files were not shown because too many files have changed in this diff