Compare commits: v1.21.1 ... travis/gro
1 commit

| Author | SHA1 | Date |
|---|---|---|
|  | b024acffea |  |

Binary file not shown.

CHANGES.md (352 lines changed)
@@ -1,351 +1,15 @@

Synapse 1.21.1 (2020-10-13)
===========================

This release fixes a regression in v1.21.0 that prevented debian packages from being built.
It is otherwise identical to v1.21.0.


Synapse 1.21.0 (2020-10-12)
===========================

No significant changes since v1.21.0rc3.

As [noted in
v1.20.0](https://github.com/matrix-org/synapse/blob/release-v1.21.0/CHANGES.md#synapse-1200-2020-09-22),
a future release will drop support for accessing Synapse's
[Admin API](https://github.com/matrix-org/synapse/tree/master/docs/admin_api) under the
`/_matrix/client/*` endpoint prefixes. At that point, the Admin API will only
be accessible under `/_synapse/admin`.
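As an editorial illustration only (not part of the upstream changelog), a request that today might target one of the `/_matrix/client/*` aliases should instead use the `/_synapse/admin` prefix. The sketch below assumes the room details admin API at `/_synapse/admin/v1/rooms/<room_id>`; the homeserver URL and token are placeholders.

```python
import requests

# Placeholder values for illustration only.
BASE_URL = "https://matrix.example.com"
ADMIN_TOKEN = "<admin_access_token>"
room_id = "!mscvqgqpHYjBGDxNym:matrix.org"

# Prefer the /_synapse/admin prefix; the /_matrix/client/* aliases are being phased out.
resp = requests.get(
    f"{BASE_URL}/_synapse/admin/v1/rooms/{room_id}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
resp.raise_for_status()
print(resp.json().get("name"))
```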
Synapse 1.21.0rc3 (2020-10-08)
==============================

Bugfixes
--------

- Fix duplication of events on high traffic servers, caused by PostgreSQL `could not serialize access due to concurrent update` errors. ([\#8456](https://github.com/matrix-org/synapse/issues/8456))


Internal Changes
----------------

- Add Groovy Gorilla to the list of distributions we build `.deb`s for. ([\#8475](https://github.com/matrix-org/synapse/issues/8475))


Synapse 1.21.0rc2 (2020-10-02)
==============================

Features
--------

- Convert additional templates from inline HTML to Jinja2 templates. ([\#8444](https://github.com/matrix-org/synapse/issues/8444))

Bugfixes
--------

- Fix a regression in v1.21.0rc1 which broke thumbnails of remote media. ([\#8438](https://github.com/matrix-org/synapse/issues/8438))
- Do not expose the experimental `uk.half-shot.msc2778.login.application_service` flow in the login API, which caused a compatibility problem with Element iOS. ([\#8440](https://github.com/matrix-org/synapse/issues/8440))
- Fix malformed log line in new federation "catch up" logic. ([\#8442](https://github.com/matrix-org/synapse/issues/8442))
- Fix DB query on startup for negative streams which caused long start up times. Introduced in [\#8374](https://github.com/matrix-org/synapse/issues/8374). ([\#8447](https://github.com/matrix-org/synapse/issues/8447))


Synapse 1.21.0rc1 (2020-10-01)
==============================

Features
--------

- Require the user to confirm that their password should be reset after clicking the email confirmation link. ([\#8004](https://github.com/matrix-org/synapse/issues/8004))
- Add an admin API `GET /_synapse/admin/v1/event_reports` to read entries of table `event_reports`. Contributed by @dklimpel. ([\#8217](https://github.com/matrix-org/synapse/issues/8217))
- Consolidate the SSO error template across all configuration. ([\#8248](https://github.com/matrix-org/synapse/issues/8248), [\#8405](https://github.com/matrix-org/synapse/issues/8405))
- Add a configuration option to specify a whitelist of domains that a user can be redirected to after validating their email or phone number. ([\#8275](https://github.com/matrix-org/synapse/issues/8275), [\#8417](https://github.com/matrix-org/synapse/issues/8417))
- Add experimental support for sharding event persister. ([\#8294](https://github.com/matrix-org/synapse/issues/8294), [\#8387](https://github.com/matrix-org/synapse/issues/8387), [\#8396](https://github.com/matrix-org/synapse/issues/8396), [\#8419](https://github.com/matrix-org/synapse/issues/8419))
- Add the room topic and avatar to the room details admin API. ([\#8305](https://github.com/matrix-org/synapse/issues/8305))
- Add an admin API for querying rooms where a user is a member. Contributed by @dklimpel. ([\#8306](https://github.com/matrix-org/synapse/issues/8306))
- Add `uk.half-shot.msc2778.login.application_service` login type to allow appservices to login. ([\#8320](https://github.com/matrix-org/synapse/issues/8320))
- Add a configuration option that allows existing users to log in with OpenID Connect. Contributed by @BBBSnowball and @OmmyZhang. ([\#8345](https://github.com/matrix-org/synapse/issues/8345))
- Add prometheus metrics for replication requests. ([\#8406](https://github.com/matrix-org/synapse/issues/8406))
- Support passing additional single sign-on parameters to the client. ([\#8413](https://github.com/matrix-org/synapse/issues/8413))
- Add experimental reporting of metrics on expensive rooms for state-resolution. ([\#8420](https://github.com/matrix-org/synapse/issues/8420))
- Add experimental prometheus metric to track numbers of "large" rooms for state resolution. ([\#8425](https://github.com/matrix-org/synapse/issues/8425))
- Add prometheus metrics to track federation delays. ([\#8430](https://github.com/matrix-org/synapse/issues/8430))


Bugfixes
--------

- Fix a bug in the media repository where remote thumbnails with the same size but different crop methods would overwrite each other. Contributed by @deepbluev7. ([\#7124](https://github.com/matrix-org/synapse/issues/7124))
- Fix inconsistent handling of non-existent push rules, and stop tracking the `enabled` state of removed push rules. ([\#7796](https://github.com/matrix-org/synapse/issues/7796))
- Fix a longstanding bug when storing a media file with an empty `upload_name`. ([\#7905](https://github.com/matrix-org/synapse/issues/7905))
- Fix messages not being sent over federation until an event is sent into the same room. ([\#8230](https://github.com/matrix-org/synapse/issues/8230), [\#8247](https://github.com/matrix-org/synapse/issues/8247), [\#8258](https://github.com/matrix-org/synapse/issues/8258), [\#8272](https://github.com/matrix-org/synapse/issues/8272), [\#8322](https://github.com/matrix-org/synapse/issues/8322))
- Fix a longstanding bug where files that could not be thumbnailed would result in an Internal Server Error. ([\#8236](https://github.com/matrix-org/synapse/issues/8236), [\#8435](https://github.com/matrix-org/synapse/issues/8435))
- Upgrade minimum version of `canonicaljson` to version 1.4.0, to fix a unicode encoding issue. ([\#8262](https://github.com/matrix-org/synapse/issues/8262))
- Fix longstanding bug which could lead to incomplete database upgrades on SQLite. ([\#8265](https://github.com/matrix-org/synapse/issues/8265))
- Fix stack overflow when stderr is redirected to the logging system, and the logging system encounters an error. ([\#8268](https://github.com/matrix-org/synapse/issues/8268))
- Fix a bug which caused the logging system to report errors if `DEBUG` was enabled and no `context` filter was applied. ([\#8278](https://github.com/matrix-org/synapse/issues/8278))
- Fix edge case where push could get delayed for a user until a later event was pushed. ([\#8287](https://github.com/matrix-org/synapse/issues/8287))
- Fix fetching malformed events from remote servers. ([\#8324](https://github.com/matrix-org/synapse/issues/8324))
- Fix `UnboundLocalError` from occurring when appservices send a malformed register request. ([\#8329](https://github.com/matrix-org/synapse/issues/8329))
- Don't send push notifications to expired user accounts. ([\#8353](https://github.com/matrix-org/synapse/issues/8353))
- Fix a regression in v1.19.0 with reactivating users through the admin API. ([\#8362](https://github.com/matrix-org/synapse/issues/8362))
- Fix a bug where during device registration the length of the device name wasn't limited. ([\#8364](https://github.com/matrix-org/synapse/issues/8364))
- Include `guest_access` in the fields that are checked for null bytes when updating `room_stats_state`. Broke in v1.7.2. ([\#8373](https://github.com/matrix-org/synapse/issues/8373))
- Fix theoretical race condition where events are not sent down `/sync` if the synchrotron worker is restarted without restarting other workers. ([\#8374](https://github.com/matrix-org/synapse/issues/8374))
- Fix a bug which could cause errors in rooms with malformed membership events, on servers using sqlite. ([\#8385](https://github.com/matrix-org/synapse/issues/8385))
- Fix "Re-starting finished log context" warning when receiving an event we already had over federation. ([\#8398](https://github.com/matrix-org/synapse/issues/8398))
- Fix incorrect handling of timeouts on outgoing HTTP requests. ([\#8400](https://github.com/matrix-org/synapse/issues/8400))
- Fix a regression in v1.20.0 in the `synapse_port_db` script regarding the `ui_auth_sessions_ips` table. ([\#8410](https://github.com/matrix-org/synapse/issues/8410))
- Remove unnecessary 3PID registration check when resetting password via an email address. Bug introduced in v0.34.0rc2. ([\#8414](https://github.com/matrix-org/synapse/issues/8414))


Improved Documentation
----------------------

- Add `/_synapse/client` to the reverse proxy documentation. ([\#8227](https://github.com/matrix-org/synapse/issues/8227))
- Add note to the reverse proxy settings documentation about disabling Apache's mod_security2. Contributed by Julian Fietkau (@jfietkau). ([\#8375](https://github.com/matrix-org/synapse/issues/8375))
- Improve description of `server_name` config option in `homeserver.yaml`. ([\#8415](https://github.com/matrix-org/synapse/issues/8415))


Deprecations and Removals
-------------------------

- Drop support for `prometheus_client` older than 0.4.0. ([\#8426](https://github.com/matrix-org/synapse/issues/8426))


Internal Changes
----------------

- Fix tests on distros which disable TLSv1.0. Contributed by @danc86. ([\#8208](https://github.com/matrix-org/synapse/issues/8208))
- Simplify the distributor code to avoid unnecessary work. ([\#8216](https://github.com/matrix-org/synapse/issues/8216))
- Remove the `populate_stats_process_rooms_2` background job and restore functionality to `populate_stats_process_rooms`. ([\#8243](https://github.com/matrix-org/synapse/issues/8243))
- Clean up type hints for `PaginationConfig`. ([\#8250](https://github.com/matrix-org/synapse/issues/8250), [\#8282](https://github.com/matrix-org/synapse/issues/8282))
- Track the latest event for every destination and room for catch-up after federation outage. ([\#8256](https://github.com/matrix-org/synapse/issues/8256))
- Fix non-user visible bug in implementation of `MultiWriterIdGenerator.get_current_token_for_writer`. ([\#8257](https://github.com/matrix-org/synapse/issues/8257))
- Switch to the JSON implementation from the standard library. ([\#8259](https://github.com/matrix-org/synapse/issues/8259))
- Add type hints to `synapse.util.async_helpers`. ([\#8260](https://github.com/matrix-org/synapse/issues/8260))
- Simplify tests that mock asynchronous functions. ([\#8261](https://github.com/matrix-org/synapse/issues/8261))
- Add type hints to `StreamToken` and `RoomStreamToken` classes. ([\#8279](https://github.com/matrix-org/synapse/issues/8279))
- Change `StreamToken.room_key` to be a `RoomStreamToken` instance. ([\#8281](https://github.com/matrix-org/synapse/issues/8281))
- Refactor notifier code to correctly use the max event stream position. ([\#8288](https://github.com/matrix-org/synapse/issues/8288))
- Use slotted classes where possible. ([\#8296](https://github.com/matrix-org/synapse/issues/8296))
- Support testing the local Synapse checkout against the [Complement homeserver test suite](https://github.com/matrix-org/complement/). ([\#8317](https://github.com/matrix-org/synapse/issues/8317))
- Update outdated usages of `metaclass` to python 3 syntax. ([\#8326](https://github.com/matrix-org/synapse/issues/8326))
- Move lint-related dependencies to package-extra field, update CONTRIBUTING.md to utilise this. ([\#8330](https://github.com/matrix-org/synapse/issues/8330), [\#8377](https://github.com/matrix-org/synapse/issues/8377))
- Use the `admin_patterns` helper in additional locations. ([\#8331](https://github.com/matrix-org/synapse/issues/8331))
- Fix test logging to allow braces in log output. ([\#8335](https://github.com/matrix-org/synapse/issues/8335))
- Remove `__future__` imports related to Python 2 compatibility. ([\#8337](https://github.com/matrix-org/synapse/issues/8337))
- Simplify `super()` calls to Python 3 syntax. ([\#8344](https://github.com/matrix-org/synapse/issues/8344))
- Fix bad merge from `release-v1.20.0` branch to `develop`. ([\#8354](https://github.com/matrix-org/synapse/issues/8354))
- Factor out a `_send_dummy_event_for_room` method. ([\#8370](https://github.com/matrix-org/synapse/issues/8370))
- Improve logging of state resolution. ([\#8371](https://github.com/matrix-org/synapse/issues/8371))
- Add type annotations to `SimpleHttpClient`. ([\#8372](https://github.com/matrix-org/synapse/issues/8372))
- Refactor ID generators to use `async with` syntax. ([\#8383](https://github.com/matrix-org/synapse/issues/8383))
- Add `EventStreamPosition` type. ([\#8388](https://github.com/matrix-org/synapse/issues/8388))
- Create a mechanism for marking tests "logcontext clean". ([\#8399](https://github.com/matrix-org/synapse/issues/8399))
- A pair of tiny cleanups in the federation request code. ([\#8401](https://github.com/matrix-org/synapse/issues/8401))
- Add checks on startup that PostgreSQL sequences are consistent with their associated tables. ([\#8402](https://github.com/matrix-org/synapse/issues/8402))
- Do not include appservice users when calculating the total MAU for a server. ([\#8404](https://github.com/matrix-org/synapse/issues/8404))
- Typing fixes for `synapse.handlers.federation`. ([\#8422](https://github.com/matrix-org/synapse/issues/8422))
- Various refactors to simplify stream token handling. ([\#8423](https://github.com/matrix-org/synapse/issues/8423))
- Make stream token serializing/deserializing async. ([\#8427](https://github.com/matrix-org/synapse/issues/8427))


Synapse 1.20.1 (2020-09-24)
===========================

Bugfixes
--------

- Fix a bug introduced in v1.20.0 which caused the `synapse_port_db` script to fail. ([\#8386](https://github.com/matrix-org/synapse/issues/8386))
- Fix a bug introduced in v1.20.0 which caused variables to be incorrectly escaped in Jinja2 templates. ([\#8394](https://github.com/matrix-org/synapse/issues/8394))


Synapse 1.20.0 (2020-09-22)
===========================

No significant changes since v1.20.0rc5.

For the next release
====================

Removal warning
---------------

Historically, the [Synapse Admin
API](https://github.com/matrix-org/synapse/tree/master/docs) has been
accessible under the `/_matrix/client/api/v1/admin`,
`/_matrix/client/unstable/admin`, `/_matrix/client/r0/admin` and
`/_synapse/admin` prefixes. In a future release, we will be dropping support
for accessing Synapse's Admin API using the `/_matrix/client/*` prefixes.

From that point, the Admin API will only be accessible under `/_synapse/admin`.
This makes it easier for homeserver admins to lock down external access to the
Admin API endpoints.


Synapse 1.20.0rc5 (2020-09-18)
==============================

In addition to the below, Synapse 1.20.0rc5 also includes the bug fix that was included in 1.19.3.

Features
--------

- Add flags to the `/versions` endpoint for whether new rooms default to using E2EE. ([\#8343](https://github.com/matrix-org/synapse/issues/8343))


Bugfixes
--------

- Fix rate limiting of federation `/send` requests. ([\#8342](https://github.com/matrix-org/synapse/issues/8342))
- Fix a longstanding bug where back pagination over federation could get stuck if it failed to handle a received event. ([\#8349](https://github.com/matrix-org/synapse/issues/8349))


Internal Changes
----------------

- Blacklist [MSC2753](https://github.com/matrix-org/matrix-doc/pull/2753) SyTests until it is implemented. ([\#8285](https://github.com/matrix-org/synapse/issues/8285))


Synapse 1.19.3 (2020-09-18)
===========================

Bugfixes
--------

- Partially mitigate bug where newly joined servers couldn't get past events in a room when there is a malformed event. ([\#8350](https://github.com/matrix-org/synapse/issues/8350))


Synapse 1.20.0rc4 (2020-09-16)
==============================

Synapse 1.20.0rc4 is identical to 1.20.0rc3, with the addition of the security fix that was included in 1.19.2.


Synapse 1.19.2 (2020-09-16)
===========================

Due to the issue below, server admins are encouraged to upgrade as soon as possible.

Bugfixes
--------

- Fix joining rooms over federation that include malformed events. ([\#8324](https://github.com/matrix-org/synapse/issues/8324))


Synapse 1.20.0rc3 (2020-09-11)
==============================

Bugfixes
--------

- Fix a bug introduced in v1.20.0rc1 where the wrong exception was raised when invalid JSON data is encountered. ([\#8291](https://github.com/matrix-org/synapse/issues/8291))


Synapse 1.20.0rc2 (2020-09-09)
==============================

Bugfixes
--------

- Fix a bug introduced in v1.20.0rc1 causing some features related to notifications to misbehave following the implementation of unread counts. ([\#8280](https://github.com/matrix-org/synapse/issues/8280))


Synapse 1.20.0rc1 (2020-09-08)
==============================

Removal warning
---------------

Some older clients used a [disallowed character](https://matrix.org/docs/spec/client_server/r0.6.1#post-matrix-client-r0-register-email-requesttoken) (`:`) in the `client_secret` parameter of various endpoints. The incorrect behaviour was allowed for backwards compatibility, but is now being removed from Synapse as most users have updated their client. Further context can be found at [\#6766](https://github.com/matrix-org/synapse/issues/6766).

Features
--------

- Add an endpoint to query your shared rooms with another user as an implementation of [MSC2666](https://github.com/matrix-org/matrix-doc/pull/2666). ([\#7785](https://github.com/matrix-org/synapse/issues/7785))
- Iteratively encode JSON to avoid blocking the reactor. ([\#8013](https://github.com/matrix-org/synapse/issues/8013), [\#8116](https://github.com/matrix-org/synapse/issues/8116))
- Add support for shadow-banning users (ignoring any message send requests). ([\#8034](https://github.com/matrix-org/synapse/issues/8034), [\#8092](https://github.com/matrix-org/synapse/issues/8092), [\#8095](https://github.com/matrix-org/synapse/issues/8095), [\#8142](https://github.com/matrix-org/synapse/issues/8142), [\#8152](https://github.com/matrix-org/synapse/issues/8152), [\#8157](https://github.com/matrix-org/synapse/issues/8157), [\#8158](https://github.com/matrix-org/synapse/issues/8158), [\#8176](https://github.com/matrix-org/synapse/issues/8176))
- Use the default template file when its equivalent is not found in a custom template directory. ([\#8037](https://github.com/matrix-org/synapse/issues/8037), [\#8107](https://github.com/matrix-org/synapse/issues/8107), [\#8252](https://github.com/matrix-org/synapse/issues/8252))
- Add unread messages count to sync responses, as specified in [MSC2654](https://github.com/matrix-org/matrix-doc/pull/2654). ([\#8059](https://github.com/matrix-org/synapse/issues/8059), [\#8254](https://github.com/matrix-org/synapse/issues/8254), [\#8270](https://github.com/matrix-org/synapse/issues/8270), [\#8274](https://github.com/matrix-org/synapse/issues/8274))
- Optimise `/federation/v1/user/devices/` API by only returning devices with encryption keys. ([\#8198](https://github.com/matrix-org/synapse/issues/8198))


Bugfixes
--------

- Fix a memory leak by limiting the length of time that messages will be queued for a remote server that has been unreachable. ([\#7864](https://github.com/matrix-org/synapse/issues/7864))
- Fix `Re-starting finished log context PUT-nnnn` warning when event persistence failed. ([\#8081](https://github.com/matrix-org/synapse/issues/8081))
- Synapse now correctly enforces the valid characters in the `client_secret` parameter used in various endpoints. ([\#8101](https://github.com/matrix-org/synapse/issues/8101))
- Fix a bug introduced in v1.7.2 impacting message retention policies that would allow federated homeservers to dictate a retention period that's lower than the configured minimum allowed duration in the configuration file. ([\#8104](https://github.com/matrix-org/synapse/issues/8104))
- Fix a long-standing bug where invalid JSON would be accepted by Synapse. ([\#8106](https://github.com/matrix-org/synapse/issues/8106))
- Fix a bug introduced in Synapse v1.12.0 which could cause `/sync` requests to fail with a 404 if you had a very old outstanding room invite. ([\#8110](https://github.com/matrix-org/synapse/issues/8110))
- Return a proper error code when the rooms of an invalid group are requested. ([\#8129](https://github.com/matrix-org/synapse/issues/8129))
- Fix a bug which could cause a leaked postgres connection if synapse was set to daemonize. ([\#8131](https://github.com/matrix-org/synapse/issues/8131))
- Clarify the error code if a user tries to register with a numeric ID. This bug was introduced in v1.15.0. ([\#8135](https://github.com/matrix-org/synapse/issues/8135))
- Fix a bug where appservices with ratelimiting disabled would still be ratelimited when joining rooms. This bug was introduced in v1.19.0. ([\#8139](https://github.com/matrix-org/synapse/issues/8139))
- Fix logging in via OpenID Connect with a provider that uses integer user IDs. ([\#8190](https://github.com/matrix-org/synapse/issues/8190))
- Fix a longstanding bug where user directory updates could break when unexpected profile data was included in events. ([\#8223](https://github.com/matrix-org/synapse/issues/8223))
- Fix a longstanding bug where stats updates could break when unexpected profile data was included in events. ([\#8226](https://github.com/matrix-org/synapse/issues/8226))
- Fix slow start times for large servers by removing a table scan of the `users` table from startup code. ([\#8271](https://github.com/matrix-org/synapse/issues/8271))


Updates to the Docker image
---------------------------

- Fix builds of the Docker image on non-x86 platforms. ([\#8144](https://github.com/matrix-org/synapse/issues/8144))
- Added curl for healthcheck support and readme updates for the change. Contributed by @maquis196. ([\#8147](https://github.com/matrix-org/synapse/issues/8147))


Improved Documentation
----------------------

- Link to matrix-synapse-rest-password-provider in the password provider documentation. ([\#8111](https://github.com/matrix-org/synapse/issues/8111))
- Updated documentation to note that Synapse does not follow `HTTP 308` redirects due to an upstream library not supporting them. Contributed by Ryan Cole. ([\#8120](https://github.com/matrix-org/synapse/issues/8120))
- Explain better what GDPR-erased means when deactivating a user. ([\#8189](https://github.com/matrix-org/synapse/issues/8189))


Internal Changes
----------------

- Add filter `name` to the `/users` admin API, which filters by user ID or displayname. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#7377](https://github.com/matrix-org/synapse/issues/7377), [\#8163](https://github.com/matrix-org/synapse/issues/8163))
- Reduce run times of some unit tests by advancing the reactor fewer times. ([\#7757](https://github.com/matrix-org/synapse/issues/7757))
- Don't fail `/submit_token` requests on incorrect session ID if `request_token_inhibit_3pid_errors` is turned on. ([\#7991](https://github.com/matrix-org/synapse/issues/7991))
- Convert various parts of the codebase to async/await. ([\#8071](https://github.com/matrix-org/synapse/issues/8071), [\#8072](https://github.com/matrix-org/synapse/issues/8072), [\#8074](https://github.com/matrix-org/synapse/issues/8074), [\#8075](https://github.com/matrix-org/synapse/issues/8075), [\#8076](https://github.com/matrix-org/synapse/issues/8076), [\#8087](https://github.com/matrix-org/synapse/issues/8087), [\#8100](https://github.com/matrix-org/synapse/issues/8100), [\#8119](https://github.com/matrix-org/synapse/issues/8119), [\#8121](https://github.com/matrix-org/synapse/issues/8121), [\#8133](https://github.com/matrix-org/synapse/issues/8133), [\#8156](https://github.com/matrix-org/synapse/issues/8156), [\#8162](https://github.com/matrix-org/synapse/issues/8162), [\#8166](https://github.com/matrix-org/synapse/issues/8166), [\#8168](https://github.com/matrix-org/synapse/issues/8168), [\#8173](https://github.com/matrix-org/synapse/issues/8173), [\#8191](https://github.com/matrix-org/synapse/issues/8191), [\#8192](https://github.com/matrix-org/synapse/issues/8192), [\#8193](https://github.com/matrix-org/synapse/issues/8193), [\#8194](https://github.com/matrix-org/synapse/issues/8194), [\#8195](https://github.com/matrix-org/synapse/issues/8195), [\#8197](https://github.com/matrix-org/synapse/issues/8197), [\#8199](https://github.com/matrix-org/synapse/issues/8199), [\#8200](https://github.com/matrix-org/synapse/issues/8200), [\#8201](https://github.com/matrix-org/synapse/issues/8201), [\#8202](https://github.com/matrix-org/synapse/issues/8202), [\#8207](https://github.com/matrix-org/synapse/issues/8207), [\#8213](https://github.com/matrix-org/synapse/issues/8213), [\#8214](https://github.com/matrix-org/synapse/issues/8214))
- Remove some unused database functions. ([\#8085](https://github.com/matrix-org/synapse/issues/8085))
- Add type hints to various parts of the codebase. ([\#8090](https://github.com/matrix-org/synapse/issues/8090), [\#8127](https://github.com/matrix-org/synapse/issues/8127), [\#8187](https://github.com/matrix-org/synapse/issues/8187), [\#8241](https://github.com/matrix-org/synapse/issues/8241), [\#8140](https://github.com/matrix-org/synapse/issues/8140), [\#8183](https://github.com/matrix-org/synapse/issues/8183), [\#8232](https://github.com/matrix-org/synapse/issues/8232), [\#8235](https://github.com/matrix-org/synapse/issues/8235), [\#8237](https://github.com/matrix-org/synapse/issues/8237), [\#8244](https://github.com/matrix-org/synapse/issues/8244))
- Return the previous stream token if a non-member event is a duplicate. ([\#8093](https://github.com/matrix-org/synapse/issues/8093), [\#8112](https://github.com/matrix-org/synapse/issues/8112))
- Separate `get_current_token` into two since there are two different use cases for it. ([\#8113](https://github.com/matrix-org/synapse/issues/8113))
- Remove `ChainedIdGenerator`. ([\#8123](https://github.com/matrix-org/synapse/issues/8123))
- Reduce the amount of whitespace in JSON stored and sent in responses. ([\#8124](https://github.com/matrix-org/synapse/issues/8124))
- Update the test federation client to handle streaming responses. ([\#8130](https://github.com/matrix-org/synapse/issues/8130))
- Micro-optimisations to `get_auth_chain_ids`. ([\#8132](https://github.com/matrix-org/synapse/issues/8132))
- Refactor `StreamIdGenerator` and `MultiWriterIdGenerator` to have the same interface. ([\#8161](https://github.com/matrix-org/synapse/issues/8161))
- Add functions to `MultiWriterIdGen` used by events stream. ([\#8164](https://github.com/matrix-org/synapse/issues/8164), [\#8179](https://github.com/matrix-org/synapse/issues/8179))
- Fix tests that were broken due to the merge of 1.19.1. ([\#8167](https://github.com/matrix-org/synapse/issues/8167))
- Make `SlavedIdTracker.advance` have the same interface as `MultiWriterIDGenerator`. ([\#8171](https://github.com/matrix-org/synapse/issues/8171))
- Remove unused `is_guest` parameter from, and add safeguard to, `MessageHandler.get_room_data`. ([\#8174](https://github.com/matrix-org/synapse/issues/8174), [\#8181](https://github.com/matrix-org/synapse/issues/8181))
- Standardize the mypy configuration. ([\#8175](https://github.com/matrix-org/synapse/issues/8175))
- Refactor some of `LoginRestServlet`'s helper methods, and move them to `AuthHandler` for easier reuse. ([\#8182](https://github.com/matrix-org/synapse/issues/8182))
- Fix `wait_for_stream_position` to allow multiple waiters on same stream ID. ([\#8196](https://github.com/matrix-org/synapse/issues/8196))
- Make `MultiWriterIDGenerator` work for streams that use negative values. ([\#8203](https://github.com/matrix-org/synapse/issues/8203))
- Refactor queries for device keys and cross-signatures. ([\#8204](https://github.com/matrix-org/synapse/issues/8204), [\#8205](https://github.com/matrix-org/synapse/issues/8205), [\#8222](https://github.com/matrix-org/synapse/issues/8222), [\#8224](https://github.com/matrix-org/synapse/issues/8224), [\#8225](https://github.com/matrix-org/synapse/issues/8225), [\#8231](https://github.com/matrix-org/synapse/issues/8231), [\#8233](https://github.com/matrix-org/synapse/issues/8233), [\#8234](https://github.com/matrix-org/synapse/issues/8234))
- Fix type hints for functions decorated with `@cached`. ([\#8240](https://github.com/matrix-org/synapse/issues/8240))
- Remove obsolete `order` field from federation send queues. ([\#8245](https://github.com/matrix-org/synapse/issues/8245))
- Stop sub-classing from object. ([\#8249](https://github.com/matrix-org/synapse/issues/8249))
- Add more logging to debug slow startup. ([\#8264](https://github.com/matrix-org/synapse/issues/8264))
- Do not attempt to upgrade database schema on worker processes. ([\#8266](https://github.com/matrix-org/synapse/issues/8266), [\#8276](https://github.com/matrix-org/synapse/issues/8276))


Synapse 1.19.1 (2020-08-27)
===========================

No significant changes.


Synapse 1.19.1rc1 (2020-08-25)
==============================

Bugfixes
--------

- Fix a bug introduced in v1.19.0 where appservices with ratelimiting disabled would still be ratelimited when joining rooms. ([\#8139](https://github.com/matrix-org/synapse/issues/8139))
- Fix a bug introduced in v1.19.0 that would cause e.g. profile updates to fail due to incorrect application of rate limits on join requests. ([\#8153](https://github.com/matrix-org/synapse/issues/8153))

Some older clients used a
[disallowed character](https://matrix.org/docs/spec/client_server/r0.6.1#post-matrix-client-r0-register-email-requesttoken)
(`:`) in the `client_secret` parameter of various endpoints. The incorrect
behaviour was allowed for backwards compatibility, but is now being removed
from Synapse as most users have updated their client. Further context can be
found at [\#6766](https://github.com/matrix-org/synapse/issues/6766).


Synapse 1.19.0 (2020-08-17)
@@ -17,9 +17,9 @@ https://help.github.com/articles/using-pull-requests/) to ask us to pull your
changes into our repo.

Some other points to follow:

 * Please base your changes on the `develop` branch.

 * Please follow the [code style requirements](#code-style).

 * Please include a [changelog entry](#changelog) with each PR.

@@ -46,7 +46,7 @@ locally. You'll need python 3.6 or later, and to install a number of tools:

```
# Install the dependencies
pip install -e ".[lint]"
pip install -U black flake8 flake8-comprehensions isort

# Run the linter script
./scripts-dev/lint.sh
```

UPGRADE.rst (53 lines changed)

@@ -75,59 +75,6 @@ for example:

    wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
    dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb

Upgrading to v1.21.0
====================

Forwarding ``/_synapse/client`` through your reverse proxy
----------------------------------------------------------

The `reverse proxy documentation
<https://github.com/matrix-org/synapse/blob/develop/docs/reverse_proxy.md>`_ has been updated
to include reverse proxy directives for ``/_synapse/client/*`` endpoints. As the user password
reset flow now uses endpoints under this prefix, **you must update your reverse proxy
configurations for user password reset to work**.

Additionally, note that the `Synapse worker documentation
<https://github.com/matrix-org/synapse/blob/develop/docs/workers.md>`_ has been updated to
state that the ``/_synapse/client/password_reset/email/submit_token`` endpoint can be handled
by all workers. If you make use of Synapse's worker feature, please update your reverse proxy
configuration to reflect this change.

New HTML templates
------------------

A new HTML template,
`password_reset_confirmation.html <https://github.com/matrix-org/synapse/blob/develop/synapse/res/templates/password_reset_confirmation.html>`_,
has been added to the ``synapse/res/templates`` directory. If you are using a
custom template directory, you may want to copy the template over and modify it.

Note that as of v1.20.0, templates do not need to be included in custom template
directories for Synapse to start. The default templates will be used if a custom
template cannot be found.

This page will appear to the user after clicking a password reset link that has
been emailed to them.

To complete password reset, the page must include a way to make a `POST`
request to
``/_synapse/client/password_reset/{medium}/submit_token``
with the query parameters from the original link, presented as a URL-encoded form. See the file
itself for more details.
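As a rough editorial sketch (not part of the upstream upgrade notes), the request the confirmation page makes could look like the following; the ``sid``, ``token`` and ``client_secret`` values are placeholders standing in for the query parameters carried by the emailed link, and ``email`` is the medium.

.. code:: python

    # Illustrative only: the confirmation page normally submits this as an
    # HTML form; this sketch just shows the shape of the request.
    import requests

    params = {
        # Placeholder values; in practice these come from the password reset email link.
        "sid": "session-id-from-link",
        "token": "token-from-link",
        "client_secret": "secret-from-link",
    }

    resp = requests.post(
        "https://matrix.example.com/_synapse/client/password_reset/email/submit_token",
        data=params,  # sent as a URL-encoded form body
    )
    print(resp.status_code)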
Updated Single Sign-on HTML Templates
-------------------------------------

The ``saml_error.html`` template was removed from Synapse and replaced with the
``sso_error.html`` template. If your Synapse is configured to use SAML and a
custom ``sso_redirect_confirm_template_dir`` configuration then any customisations
of the ``saml_error.html`` template will need to be merged into the ``sso_error.html``
template. These templates are similar, but the parameters are slightly different:

* The ``msg`` parameter should be renamed to ``error_description``.
* There is no longer a ``code`` parameter for the response code.
* A string ``error`` parameter is available that includes a short hint of why a
  user is seeing the error page.

Upgrading to v1.18.0
====================


changelog.d/7864.bugfix (1 line; new file)
@@ -0,0 +1 @@
Fix a memory leak by limiting the length of time that messages will be queued for a remote server that has been unreachable.

changelog.d/8013.feature (1 line; new file)
@@ -0,0 +1 @@
Iteratively encode JSON to avoid blocking the reactor.

changelog.d/8037.feature (1 line; new file)
@@ -0,0 +1 @@
Use the default template file when its equivalent is not found in a custom template directory.

changelog.d/8072.misc (1 line; new file)
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

changelog.d/8074.misc (1 line; new file)
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

changelog.d/8075.misc (1 line; new file)
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

changelog.d/8076.misc (1 line; new file)
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

changelog.d/8081.bugfix (1 line; new file)
@@ -0,0 +1 @@
Fix `Re-starting finished log context PUT-nnnn` warning when event persistence failed.

changelog.d/8085.misc (1 line; new file)
@@ -0,0 +1 @@
Remove some unused database functions.

changelog.d/8087.misc (1 line; new file)
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

changelog.d/8090.misc (1 line; new file)
@@ -0,0 +1 @@
Add type hints to `synapse.handlers.room`.

changelog.d/8092.feature (1 line; new file)
@@ -0,0 +1 @@
Add support for shadow-banning users (ignoring any message send requests).

changelog.d/8093.misc (1 line; new file)
@@ -0,0 +1 @@
Return the previous stream token if a non-member event is a duplicate.

changelog.d/8100.misc (1 line; new file)
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

changelog.d/8101.bugfix (1 line; new file)
@@ -0,0 +1 @@
Synapse now correctly enforces the valid characters in the `client_secret` parameter used in various endpoints.

changelog.d/8107.feature (1 line; new file)
@@ -0,0 +1 @@
Use the default template file when its equivalent is not found in a custom template directory.

changelog.d/8111.doc (1 line; new file)
@@ -0,0 +1 @@
Link to matrix-synapse-rest-password-provider in the password provider documentation.

changelog.d/8112.misc (1 line; new file)
@@ -0,0 +1 @@
Return the previous stream token if a non-member event is a duplicate.

@@ -15,6 +15,8 @@
# limitations under the License.

""" Starts a synapse client console. """
from __future__ import print_function

import argparse
import cmd
import getpass

@@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import json
import urllib
from pprint import pformat

@@ -22,7 +24,7 @@ from twisted.web.client import Agent, readBody
from twisted.web.http_headers import Headers


class HttpClient:
class HttpClient(object):
    """ Interface for talking json over http
    """

@@ -167,7 +169,7 @@ class TwistedHttpClient(HttpClient):
        return d


class _RawProducer:
class _RawProducer(object):
    def __init__(self, data):
        self.data = data
        self.body = data

@@ -184,7 +186,7 @@ class _RawProducer:
        pass


class _JsonProducer:
class _JsonProducer(object):
    """ Used by the twisted http client to create the HTTP body from json
    """

@@ -141,7 +141,7 @@ class CursesStdIO:
        curses.endwin()


class Callback:
class Callback(object):
    def __init__(self, stdio):
        self.stdio = stdio

@@ -55,7 +55,7 @@ def excpetion_errback(failure):
    logging.exception(failure)


class InputOutput:
class InputOutput(object):
    """ This is responsible for basic I/O so that a user can interact with
    the example app.
    """

@@ -132,7 +132,7 @@ class IOLoggerHandler(logging.Handler):
        self.io.print_log(msg)


class Room:
class Room(object):
    """ Used to store (in memory) the current membership state of a room, and
    which home servers we should send PDUs associated with the room to.
    """

@@ -1,3 +1,5 @@
from __future__ import print_function

import argparse
import cgi
import datetime

@@ -1,3 +1,5 @@
from __future__ import print_function

import argparse
import cgi
import datetime

@@ -10,6 +10,8 @@ the bridge.
Requires:
    npm install jquery jsdom
"""
from __future__ import print_function

import json
import subprocess
import time

@@ -1,4 +1,5 @@
#!/usr/bin/env python
from __future__ import print_function

import json
import sys

@@ -7,6 +8,11 @@ from argparse import ArgumentParser

import requests

try:
    raw_input
except NameError:  # Python 3
    raw_input = input


def _mkurl(template, kws):
    for key in kws:

@@ -52,7 +58,7 @@ def main(hs, room_id, access_token, user_id_prefix, why):
    print("The following user IDs will be kicked from %s" % room_name)
    for uid in kick_list:
        print(uid)
    doit = input("Continue? [Y]es\n")
    doit = raw_input("Continue? [Y]es\n")
    if len(doit) > 0 and doit.lower() == "y":
        print("Kicking members...")
        # encode them all
debian/build_virtualenv (2 lines changed; vendored)
@@ -42,7 +42,7 @@ dh_virtualenv \
    --preinstall="mock" \
    --extra-pip-arg="--no-cache-dir" \
    --extra-pip-arg="--compile" \
    --extras="all,systemd,test"
    --extras="all,systemd"

PACKAGE_BUILD_DIR="debian/matrix-synapse-py3"
VIRTUALENV_DIR="${PACKAGE_BUILD_DIR}${DH_VIRTUALENV_INSTALL_ROOT}/matrix-synapse"

debian/changelog (50 lines changed; vendored)
@@ -1,53 +1,3 @@
matrix-synapse-py3 (1.21.1) stable; urgency=medium

  [ Synapse Packaging team ]
  * New synapse release 1.21.1.

  [ Andrew Morgan ]
  * Explicitly install "test" python dependencies.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 13 Oct 2020 10:24:13 +0100

matrix-synapse-py3 (1.21.0) stable; urgency=medium

  * New synapse release 1.21.0.

 -- Synapse Packaging team <packages@matrix.org>  Mon, 12 Oct 2020 15:47:44 +0100

matrix-synapse-py3 (1.20.1) stable; urgency=medium

  * New synapse release 1.20.1.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 24 Sep 2020 16:25:22 +0100

matrix-synapse-py3 (1.20.0) stable; urgency=medium

  [ Synapse Packaging team ]
  * New synapse release 1.20.0.

  [ Dexter Chua ]
  * Use Type=notify in systemd service

 -- Synapse Packaging team <packages@matrix.org>  Tue, 22 Sep 2020 15:19:32 +0100

matrix-synapse-py3 (1.19.3) stable; urgency=medium

  * New synapse release 1.19.3.

 -- Synapse Packaging team <packages@matrix.org>  Fri, 18 Sep 2020 14:59:30 +0100

matrix-synapse-py3 (1.19.2) stable; urgency=medium

  * New synapse release 1.19.2.

 -- Synapse Packaging team <packages@matrix.org>  Wed, 16 Sep 2020 12:50:30 +0100

matrix-synapse-py3 (1.19.1) stable; urgency=medium

  * New synapse release 1.19.1.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 27 Aug 2020 10:50:19 +0100

matrix-synapse-py3 (1.19.0) stable; urgency=medium

  [ Synapse Packaging team ]

debian/matrix-synapse.service (2 lines changed; vendored)
@@ -2,7 +2,7 @@
Description=Synapse Matrix homeserver

[Service]
Type=notify
Type=simple
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse

@@ -19,16 +19,11 @@ ARG PYTHON_VERSION=3.7
FROM docker.io/python:${PYTHON_VERSION}-slim as builder

# install the OS build deps

RUN apt-get update && apt-get install -y \
    build-essential \
    libffi-dev \
    libjpeg-dev \
    libpq-dev \
    libssl-dev \
    libwebp-dev \
    libxml++2.6-dev \
    libxslt1-dev \
    zlib1g-dev \
    && rm -rf /var/lib/apt/lists/*

# Build dependencies that are not available as wheels, to speed up rebuilds

@@ -60,12 +55,9 @@ RUN pip install --prefix="/install" --no-warn-script-location \
FROM docker.io/python:${PYTHON_VERSION}-slim

RUN apt-get update && apt-get install -y \
    curl \
    gosu \
    libjpeg62-turbo \
    libpq5 \
    libwebp6 \
    xmlsec1 \
    gosu \
    && rm -rf /var/lib/apt/lists/*

COPY --from=builder /install /usr/local

@@ -77,6 +69,3 @@ VOLUME ["/data"]
EXPOSE 8008/tcp 8009/tcp 8448/tcp

ENTRYPOINT ["/start.py"]

HEALTHCHECK --interval=1m --timeout=5s \
    CMD curl -fSs http://localhost:8008/health || exit 1

@@ -162,32 +162,3 @@ docker build -t matrixdotorg/synapse -f docker/Dockerfile .

You can choose to build a different docker image by changing the value of the `-f` flag to
point to another Dockerfile.

## Disabling the healthcheck

If you are using a non-standard port or tls inside docker you can disable the healthcheck
whilst running the above `docker run` commands.

```
   --no-healthcheck
```

## Setting custom healthcheck on docker run

If you wish to point the healthcheck at a different port with docker command, add the following

```
   --health-cmd 'curl -fSs http://localhost:1234/health'
```

## Setting the healthcheck in docker-compose file

You can add the following to set a custom healthcheck in a docker compose file.
You will need version >2.1 for this to work.

```
healthcheck:
  test: ["CMD", "curl", "-fSs", "http://localhost:8008/health"]
  interval: 1m
  timeout: 10s
  retries: 3
```
@@ -1,129 +0,0 @@
Show reported events
====================

This API returns information about reported events.

The api is::

    GET /_synapse/admin/v1/event_reports?from=0&limit=10

To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.

It returns a JSON body like the following:

.. code:: jsonc

    {
        "event_reports": [
            {
                "content": {
                    "reason": "foo",
                    "score": -100
                },
                "event_id": "$bNUFCwGzWca1meCGkjp-zwslF-GfVcXukvRLI1_FaVY",
                "event_json": {
                    "auth_events": [
                        "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M",
                        "$oggsNXxzPFRE3y53SUNd7nsj69-QzKv03a1RucHu-ws"
                    ],
                    "content": {
                        "body": "matrix.org: This Week in Matrix",
                        "format": "org.matrix.custom.html",
                        "formatted_body": "<strong>matrix.org</strong>:<br><a href=\"https://matrix.org/blog/\"><strong>This Week in Matrix</strong></a>",
                        "msgtype": "m.notice"
                    },
                    "depth": 546,
                    "hashes": {
                        "sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
                    },
                    "origin": "matrix.org",
                    "origin_server_ts": 1592291711430,
                    "prev_events": [
                        "$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"
                    ],
                    "prev_state": [],
                    "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
                    "sender": "@foobar:matrix.org",
                    "signatures": {
                        "matrix.org": {
                            "ed25519:a_JaEG": "cs+OUKW/iHx5pEidbWxh0UiNNHwe46Ai9LwNz+Ah16aWDNszVIe2gaAcVZfvNsBhakQTew51tlKmL2kspXk/Dg"
                        }
                    },
                    "type": "m.room.message",
                    "unsigned": {
                        "age_ts": 1592291711430
                    }
                },
                "id": 2,
                "reason": "foo",
                "received_ts": 1570897107409,
                "room_alias": "#alias1:matrix.org",
                "room_id": "!ERAgBpSOcCCuTJqQPk:matrix.org",
                "sender": "@foobar:matrix.org",
                "user_id": "@foo:matrix.org"
            },
            {
                "content": {
                    "reason": "bar",
                    "score": -100
                },
                "event_id": "$3IcdZsDaN_En-S1DF4EMCy3v4gNRKeOJs8W5qTOKj4I",
                "event_json": {
                    // hidden items
                    // see above
                },
                "id": 3,
                "reason": "bar",
                "received_ts": 1598889612059,
                "room_alias": "#alias2:matrix.org",
                "room_id": "!eGvUQuTCkHGVwNMOjv:matrix.org",
                "sender": "@foobar:matrix.org",
                "user_id": "@bar:matrix.org"
            }
        ],
        "next_token": 2,
        "total": 4
    }

To paginate, check for ``next_token`` and if present, call the endpoint again
with ``from`` set to the value of ``next_token``. This will return a new page.

If the endpoint does not return a ``next_token`` then there are no more
reports to paginate through.
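As an illustrative sketch only (not part of the original API documentation), pagination with ``next_token`` could be driven from a small script like the following; the homeserver URL and admin token are placeholders.

.. code:: python

    import requests

    BASE_URL = "https://matrix.example.com"  # placeholder homeserver
    HEADERS = {"Authorization": "Bearer <admin_access_token>"}  # placeholder token

    reports = []
    params = {"from": 0, "limit": 100}
    while True:
        resp = requests.get(
            f"{BASE_URL}/_synapse/admin/v1/event_reports",
            headers=HEADERS,
            params=params,
        )
        resp.raise_for_status()
        body = resp.json()
        reports.extend(body["event_reports"])
        # No next_token in the response means this was the last page.
        if "next_token" not in body:
            break
        params["from"] = body["next_token"]

    print(f"fetched {len(reports)} event reports")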
**URL parameters:**

- ``limit``: integer - Is optional but is used for pagination,
  denoting the maximum number of items to return in this call. Defaults to ``100``.
- ``from``: integer - Is optional but used for pagination,
  denoting the offset in the returned results. This should be treated as an opaque value and
  not explicitly set to anything other than the return value of ``next_token`` from a previous call.
  Defaults to ``0``.
- ``dir``: string - Direction of event report order. Whether to fetch the most recent first (``b``) or the
  oldest first (``f``). Defaults to ``b``.
- ``user_id``: string - Is optional and filters to only return users with user IDs that contain this value.
  This is the user who reported the event and wrote the reason.
- ``room_id``: string - Is optional and filters to only return rooms with room IDs that contain this value.

**Response**

The following fields are returned in the JSON response body:

- ``id``: integer - ID of event report.
- ``received_ts``: integer - The timestamp (in milliseconds since the unix epoch) when this report was sent.
- ``room_id``: string - The ID of the room in which the event being reported is located.
- ``event_id``: string - The ID of the reported event.
- ``user_id``: string - This is the user who reported the event and wrote the reason.
- ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
- ``content``: object - Content of reported event.

  - ``reason``: string - Comment made by the ``user_id`` in this report. May be blank.
  - ``score``: integer - Content is reported based upon a negative score, where -100 is "most offensive" and 0 is "inoffensive".

- ``sender``: string - This is the ID of the user who sent the original message/event that was reported.
- ``room_alias``: string - The alias of the room. ``null`` if the room does not have a canonical alias set.
- ``event_json``: object - Details of the original event that was reported.
- ``next_token``: integer - Indication for pagination. See above.
- ``total``: integer - Total number of event reports related to the query (``user_id`` and ``room_id``).

@@ -275,8 +275,6 @@ The following fields are possible in the JSON response body:

* `room_id` - The ID of the room.
* `name` - The name of the room.
* `topic` - The topic of the room.
* `avatar` - The `mxc` URI to the avatar of the room.
* `canonical_alias` - The canonical (main) alias address of the room.
* `joined_members` - How many users are currently in the room.
* `joined_local_members` - How many local users are currently in the room.

@@ -306,8 +304,6 @@ Response:

    {
      "room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
      "name": "Music Theory",
      "avatar": "mxc://matrix.org/AQDaVFlbkQoErdOgqWRgiGSV",
      "topic": "Theory, Composition, Notation, Analysis",
      "canonical_alias": "#musictheory:matrix.org",
      "joined_members": 127
      "joined_local_members": 2,

@@ -108,7 +108,7 @@ The api is::

    GET /_synapse/admin/v2/users?from=0&limit=10&guests=false

To use it, you will need to authenticate by providing an ``access_token`` for a
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see `README.rst <README.rst>`_.

The parameter ``from`` is optional but used for pagination, denoting the

@@ -119,11 +119,8 @@ from a previous call.

The parameter ``limit`` is optional but is used for pagination, denoting the
maximum number of items to return in this call. Defaults to ``100``.

The parameter ``user_id`` is optional and filters to only return users with user IDs
that contain this value. This parameter is ignored when using the ``name`` parameter.

The parameter ``name`` is optional and filters to only return users with user ID localparts
**or** displaynames that contain this value.
The parameter ``user_id`` is optional and filters to only users with user IDs
that contain this value.

The parameter ``guests`` is optional and if ``false`` will **exclude** guest users.
Defaults to ``true`` to include guest users.
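By way of illustration only (this sketch is not part of the original documentation, and the homeserver URL and token are placeholders), listing non-guest users whose ID or displayname contains ``bob`` might look like:

.. code:: python

    import requests

    resp = requests.get(
        "https://matrix.example.com/_synapse/admin/v2/users",  # placeholder homeserver
        headers={"Authorization": "Bearer <admin_access_token>"},  # placeholder admin token
        params={"from": 0, "limit": 10, "guests": "false", "name": "bob"},
    )
    resp.raise_for_status()
    for user in resp.json().get("users", []):
        print(user)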
@@ -214,11 +211,9 @@ Deactivate Account
|
||||
|
||||
This API deactivates an account. It removes active access tokens, resets the
|
||||
password, and deletes third-party IDs (to prevent the user requesting a
|
||||
password reset).
|
||||
|
||||
It can also mark the user as GDPR-erased. This means messages sent by the
|
||||
user will still be visible by anyone that was in the room when these messages
|
||||
were sent, but hidden from users joining the room afterwards.
|
||||
password reset). It can also mark the user as GDPR-erased (stopping their data
|
||||
from distributed further, and deleting it entirely if there are no other
|
||||
references to it).
|
||||
|
||||
The api is::
|
||||
|
||||
@@ -304,43 +299,6 @@ To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.


List room memberships of a user
================================
Gets a list of all ``room_id`` that a specific ``user_id`` is a member of.

The API is::

    GET /_synapse/admin/v1/users/<user_id>/joined_rooms

To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.

A response body like the following is returned:

.. code:: json

    {
        "joined_rooms": [
            "!DuGcnbhHGaSZQoNQR:matrix.org",
            "!ZtSaPCawyWtxfWiIy:matrix.org"
        ],
        "total": 2
    }

**Parameters**

The following parameters should be set in the URL:

- ``user_id`` - fully qualified: for example, ``@user:server.com``.

**Response**

The following fields are returned in the JSON response body:

- ``joined_rooms`` - An array of ``room_id``.
- ``total`` - Number of rooms.
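A matching client sketch for this endpoint. The base URL, admin token and `requests` library are assumptions for illustration; the path and the response fields are the ones documented above.

```python
import requests

BASE_URL = "https://matrix.example.com"   # assumed homeserver URL
ADMIN_TOKEN = "<admin access_token>"      # assumed admin access token

def joined_rooms(user_id: str) -> list:
    """Return the list of room IDs the given fully-qualified user is in."""
    resp = requests.get(
        f"{BASE_URL}/_synapse/admin/v1/users/{user_id}/joined_rooms",
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    )
    resp.raise_for_status()
    body = resp.json()
    # The response body carries `joined_rooms` (array of room_id) and `total`.
    assert body["total"] == len(body["joined_rooms"])
    return body["joined_rooms"]

print(joined_rooms("@user:server.com"))
```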
User devices
============
@@ -47,18 +47,6 @@ you invite them to. This can be caused by an incorrectly-configured reverse
proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions on how to correctly
configure a reverse proxy.

### Known issues

**HTTP `308 Permanent Redirect` redirects are not followed**: Due to missing features
in the HTTP library used by Synapse, 308 redirects are currently not followed by
federating servers, which can cause `M_UNKNOWN` or `401 Unauthorized` errors. This
may affect users who are redirecting apex-to-www (e.g. `example.com` -> `www.example.com`),
and especially users of the Kubernetes *Nginx Ingress* module, which uses 308 redirect
codes by default. For those Kubernetes users, [this Stackoverflow post](https://stackoverflow.com/a/52617528/5096871)
might be helpful. For other users, switching to a `301 Moved Permanently` code may be
an option. 308 redirect codes will be supported properly in a future
release of Synapse.

## Running a demo federation of Synapses

If you want to get up and running quickly with a trio of homeservers in a
@@ -106,17 +106,6 @@ Note that the above may fail with an error about duplicate rows if corruption
has already occurred, and such duplicate rows will need to be manually removed.


## Fixing inconsistent sequences error

Synapse uses Postgres sequences to generate IDs for various tables. A sequence
and associated table can get out of sync if, for example, Synapse has been
downgraded and then upgraded again.

To fix the issue shut down Synapse (including any and all workers) and run the
SQL command included in the error message. Once done Synapse should start
successfully.
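The exact statement to run comes from the error message itself, but the shape of the fix is the same as what `synapse_port_db` does further down in this diff: restart the sequence at one past the largest ID in the associated table. A hedged psycopg2 sketch, assuming the error pointed at the `events_stream_seq` sequence and the `events` table; run it only while Synapse and all workers are stopped.

```python
import psycopg2

# Connection details are assumptions for illustration.
conn = psycopg2.connect(dbname="synapse", user="synapse_user", host="localhost")

with conn, conn.cursor() as cur:
    # Find the highest stream_ordering currently in use ...
    cur.execute("SELECT MAX(stream_ordering) FROM events")
    (curr_id,) = cur.fetchone()
    if curr_id is not None:
        # ... and restart the sequence just past it, mirroring the
        # ALTER SEQUENCE ... RESTART WITH statement used by synapse_port_db.
        cur.execute(
            "ALTER SEQUENCE events_stream_seq RESTART WITH %s", (curr_id + 1,)
        )

conn.close()
```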
## Tuning Postgres

The default settings should be fine for most deployments. For larger
@@ -11,7 +11,7 @@ privileges.

**NOTE**: Your reverse proxy must not `canonicalise` or `normalise`
the requested URI in any way (for example, by decoding `%xx` escapes).
Beware that Apache *will* canonicalise URIs unless you specify
Beware that Apache *will* canonicalise URIs unless you specifify
`nocanon`.

When setting up a reverse proxy, remember that Matrix clients and other

@@ -23,10 +23,6 @@ specification](https://matrix.org/docs/spec/server_server/latest#resolving-serve
for more details of the algorithm used for federation connections, and
[delegate.md](<delegate.md>) for instructions on setting up delegation.

Endpoints that are part of the standardised Matrix specification are
located under `/_matrix`, whereas endpoints specific to Synapse are
located under `/_synapse/client`.

Let's assume that we expect clients to connect to our server at
`https://matrix.example.com`, and other servers to connect at
`https://example.com:8448`. The following sections detail the configuration of
@@ -49,7 +45,7 @@ server {

    server_name matrix.example.com;

    location ~* ^(\/_matrix|\/_synapse\/client) {
    location /_matrix {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        # Nginx by default only allows file uploads up to 1M in size
@@ -69,10 +65,6 @@ matrix.example.com {

    proxy /_matrix http://localhost:8008 {
        transparent
    }

    proxy /_synapse/client http://localhost:8008 {
        transparent
    }
}

example.com:8448 {
@@ -87,7 +79,6 @@ example.com:8448 {

```
matrix.example.com {
    reverse_proxy /_matrix/* http://localhost:8008
    reverse_proxy /_synapse/client/* http://localhost:8008
}

example.com:8448 {
@@ -105,8 +96,6 @@ example.com:8448 {

    AllowEncodedSlashes NoDecode
    ProxyPass /_matrix http://127.0.0.1:8008/_matrix nocanon
    ProxyPassReverse /_matrix http://127.0.0.1:8008/_matrix
    ProxyPass /_synapse/client http://127.0.0.1:8008/_synapse/client nocanon
    ProxyPassReverse /_synapse/client http://127.0.0.1:8008/_synapse/client
</VirtualHost>

<VirtualHost *:8448>
@@ -121,14 +110,6 @@ example.com:8448 {

**NOTE**: ensure the `nocanon` options are included.

**NOTE 2**: It appears that Synapse is currently incompatible with the ModSecurity module for Apache (`mod_security2`). If you need it enabled for other services on your web server, you can disable it for Synapse's two VirtualHosts by including the following lines before each of the two `</VirtualHost>` above:

```
<IfModule security2_module>
    SecRuleEngine off
</IfModule>
```

### HAProxy

```
@@ -138,7 +119,6 @@ frontend https

    # Matrix client traffic
    acl matrix-host hdr(host) -i matrix.example.com
    acl matrix-path path_beg /_matrix
    acl matrix-path path_beg /_synapse/client

    use_backend matrix if matrix-host matrix-path
@@ -166,10 +146,3 @@ connecting to Synapse from a client.

Synapse exposes a health check endpoint for use by reverse proxies.
Each configured HTTP listener has a `/health` endpoint which always returns
200 OK (and doesn't get logged).
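A reverse proxy or monitoring job can poll that endpoint directly. A minimal sketch, assuming Synapse's listener is on `localhost:8008` as in the configuration examples above:

```python
from urllib.request import urlopen

def synapse_is_healthy(base_url: str = "http://localhost:8008") -> bool:
    """Return True if the /health endpoint answers 200 OK."""
    try:
        with urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False

print(synapse_is_healthy())
```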
## Synapse administration endpoints

Endpoints for administering your Synapse instance are placed under
`/_synapse/admin`. These require authentication through an access token of an
admin user. However as access to these endpoints grants the caller a lot of power,
we do not recommend exposing them to the public internet without good reason.
@@ -33,23 +33,10 @@
|
||||
|
||||
## Server ##
|
||||
|
||||
# The public-facing domain of the server
|
||||
#
|
||||
# The server_name name will appear at the end of usernames and room addresses
|
||||
# created on this server. For example if the server_name was example.com,
|
||||
# usernames on this server would be in the format @user:example.com
|
||||
#
|
||||
# In most cases you should avoid using a matrix specific subdomain such as
|
||||
# matrix.example.com or synapse.example.com as the server_name for the same
|
||||
# reasons you wouldn't use user@email.example.com as your email address.
|
||||
# See https://github.com/matrix-org/synapse/blob/master/docs/delegate.md
|
||||
# for information on how to host Synapse on a subdomain while preserving
|
||||
# a clean server_name.
|
||||
#
|
||||
# The server_name cannot be changed later so it is important to
|
||||
# configure this correctly before you start Synapse. It should be all
|
||||
# lowercase and may contain an explicit port.
|
||||
# Examples: matrix.org, localhost:8080
|
||||
# The domain name of the server, with optional explicit port.
|
||||
# This is used by remote servers to connect to this server,
|
||||
# e.g. matrix.org, localhost:8080, etc.
|
||||
# This is also the last part of your UserID.
|
||||
#
|
||||
server_name: "SERVERNAME"
|
||||
|
||||
@@ -391,10 +378,11 @@ retention:
|
||||
# min_lifetime: 1d
|
||||
# max_lifetime: 1y
|
||||
|
||||
# Retention policy limits. If set, and the state of a room contains a
|
||||
# 'm.room.retention' event in its state which contains a 'min_lifetime' or a
|
||||
# 'max_lifetime' that's out of these bounds, Synapse will cap the room's policy
|
||||
# to these limits when running purge jobs.
|
||||
# Retention policy limits. If set, a user won't be able to send a
|
||||
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
|
||||
# that's not within this range. This is especially useful in closed federations,
|
||||
# in which server admins can make sure every federating server applies the same
|
||||
# rules.
|
||||
#
|
||||
#allowed_lifetime_min: 1d
|
||||
#allowed_lifetime_max: 1y
|
||||
@@ -420,19 +408,12 @@ retention:
|
||||
# (e.g. every 12h), but not want that purge to be performed by a job that's
|
||||
# iterating over every room it knows, which could be heavy on the server.
|
||||
#
|
||||
# If any purge job is configured, it is strongly recommended to have at least
|
||||
# a single job with neither 'shortest_max_lifetime' nor 'longest_max_lifetime'
|
||||
# set, or one job without 'shortest_max_lifetime' and one job without
|
||||
# 'longest_max_lifetime' set. Otherwise some rooms might be ignored, even if
|
||||
# 'allowed_lifetime_min' and 'allowed_lifetime_max' are set, because capping a
|
||||
# room's policy to these values is done after the policies are retrieved from
|
||||
# Synapse's database (which is done using the range specified in a purge job's
|
||||
# configuration).
|
||||
#
|
||||
#purge_jobs:
|
||||
# - longest_max_lifetime: 3d
|
||||
# - shortest_max_lifetime: 1d
|
||||
# longest_max_lifetime: 3d
|
||||
# interval: 12h
|
||||
# - shortest_max_lifetime: 3d
|
||||
# longest_max_lifetime: 1y
|
||||
# interval: 1d
|
||||
|
||||
# Inhibits the /requestToken endpoints from returning an error that might leak
|
||||
@@ -445,24 +426,6 @@ retention:
|
||||
#
|
||||
#request_token_inhibit_3pid_errors: true
|
||||
|
||||
# A list of domains that the domain portion of 'next_link' parameters
|
||||
# must match.
|
||||
#
|
||||
# This parameter is optionally provided by clients while requesting
|
||||
# validation of an email or phone number, and maps to a link that
|
||||
# users will be automatically redirected to after validation
|
||||
# succeeds. Clients can make use this parameter to aid the validation
|
||||
# process.
|
||||
#
|
||||
# The whitelist is applied whether the homeserver or an
|
||||
# identity server is handling validation.
|
||||
#
|
||||
# The default value is no whitelist functionality; all domains are
|
||||
# allowed. Setting this value to an empty list will instead disallow
|
||||
# all domains.
|
||||
#
|
||||
#next_link_domain_whitelist: ["matrix.org"]
|
||||
|
||||
|
||||
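The comment above fully specifies the semantics: unset means every domain is allowed, an empty list allows none, and otherwise the domain part of `next_link` must be in the list. A small illustrative sketch of that rule (not Synapse's actual implementation):

```python
from typing import Optional, Sequence
from urllib.parse import urlparse

def next_link_allowed(next_link: str, whitelist: Optional[Sequence[str]]) -> bool:
    """Apply the documented next_link_domain_whitelist semantics."""
    if whitelist is None:
        # Default: no whitelist functionality; all domains are allowed.
        return True
    domain = urlparse(next_link).hostname
    # An empty list disallows all domains; otherwise the domain must match.
    return domain in whitelist

print(next_link_allowed("https://matrix.org/verified", ["matrix.org"]))  # True
print(next_link_allowed("https://evil.example/x", []))                   # False
```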
## TLS ##
|
||||
|
||||
@@ -629,7 +592,6 @@ acme:
|
||||
#tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]
|
||||
|
||||
|
||||
## Federation ##
|
||||
|
||||
# Restrict federation to the following whitelist of domains.
|
||||
# N.B. we recommend also firewalling your federation listener to limit
|
||||
@@ -663,17 +625,6 @@ federation_ip_range_blacklist:
|
||||
- 'fe80::/64'
|
||||
- 'fc00::/7'
|
||||
|
||||
# Report prometheus metrics on the age of PDUs being sent to and received from
|
||||
# the following domains. This can be used to give an idea of "delay" on inbound
|
||||
# and outbound federation, though be aware that any delay can be due to problems
|
||||
# at either end or with the intermediate network.
|
||||
#
|
||||
# By default, no domains are monitored in this way.
|
||||
#
|
||||
#federation_metrics_domains:
|
||||
# - matrix.org
|
||||
# - example.com
|
||||
|
||||
|
||||
## Caching ##
|
||||
|
||||
@@ -1510,14 +1461,11 @@ trusted_key_servers:
|
||||
# At least one of `sp_config` or `config_path` must be set in this section to
|
||||
# enable SAML login.
|
||||
#
|
||||
# You will probably also want to set the following options to `false` to
|
||||
# (You will probably also want to set the following options to `false` to
|
||||
# disable the regular login/registration flows:
|
||||
# * enable_registration
|
||||
# * password_config.enabled
|
||||
#
|
||||
# You will also want to investigate the settings under the "sso" configuration
|
||||
# section below.
|
||||
#
|
||||
# Once SAML support is enabled, a metadata file will be exposed at
|
||||
# https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to
|
||||
# use to configure your SAML IdP with. Alternatively, you can manually configure
|
||||
@@ -1640,6 +1588,31 @@ saml2_config:
|
||||
# - attribute: department
|
||||
# value: "sales"
|
||||
|
||||
# Directory in which Synapse will try to find the template files below.
|
||||
# If not set, default templates from within the Synapse package will be used.
|
||||
#
|
||||
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
|
||||
# If you *do* uncomment it, you will need to make sure that all the templates
|
||||
# below are in the directory.
|
||||
#
|
||||
# Synapse will look for the following templates in this directory:
|
||||
#
|
||||
# * HTML page to display to users if something goes wrong during the
|
||||
# authentication process: 'saml_error.html'.
|
||||
#
|
||||
# When rendering, this template is given the following variables:
|
||||
# * code: an HTML error code corresponding to the error that is being
|
||||
# returned (typically 400 or 500)
|
||||
#
|
||||
# * msg: a textual message describing the error.
|
||||
#
|
||||
# The variables will automatically be HTML-escaped.
|
||||
#
|
||||
# You can see the default templates at:
|
||||
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
|
||||
#
|
||||
#template_dir: "res/templates"
|
||||
|
||||
|
||||
# OpenID Connect integration. The following settings can be used to make Synapse
|
||||
# use an OpenID Connect Provider for authentication, instead of its internal
|
||||
@@ -1714,11 +1687,6 @@ oidc_config:
|
||||
#
|
||||
#skip_verification: true
|
||||
|
||||
# Uncomment to allow a user logging in via OIDC to match a pre-existing account instead
|
||||
# of failing. This could be used if switching from password logins to OIDC. Defaults to false.
|
||||
#
|
||||
#allow_existing_users: true
|
||||
|
||||
# An external module can be provided here as a custom solution to mapping
|
||||
# attributes returned from a OIDC provider onto a matrix user.
|
||||
#
|
||||
@@ -1760,14 +1728,6 @@ oidc_config:
|
||||
#
|
||||
#display_name_template: "{{ user.given_name }} {{ user.last_name }}"
|
||||
|
||||
# Jinja2 templates for extra attributes to send back to the client during
|
||||
# login.
|
||||
#
|
||||
# Note that these are non-standard and clients will ignore them without modifications.
|
||||
#
|
||||
#extra_attributes:
|
||||
#birthdate: "{{ user.birthdate }}"
|
||||
|
||||
|
||||
|
||||
# Enable CAS for registration and login.
|
||||
@@ -2055,13 +2015,9 @@ email:
|
||||
# * The contents of password reset emails sent by the homeserver:
|
||||
# 'password_reset.html' and 'password_reset.txt'
|
||||
#
|
||||
# * An HTML page that a user will see when they follow the link in the password
|
||||
# reset email. The user will be asked to confirm the action before their
|
||||
# password is reset: 'password_reset_confirmation.html'
|
||||
#
|
||||
# * HTML pages for success and failure that a user will see when they confirm
|
||||
# the password reset flow using the page above: 'password_reset_success.html'
|
||||
# and 'password_reset_failure.html'
|
||||
# * HTML pages for success and failure that a user will see when they follow
|
||||
# the link in the password reset email: 'password_reset_success.html' and
|
||||
# 'password_reset_failure.html'
|
||||
#
|
||||
# * The contents of address verification emails sent during registration:
|
||||
# 'registration.html' and 'registration.txt'
|
||||
|
||||
@@ -57,7 +57,7 @@ A custom mapping provider must specify the following methods:

  - This method must return a string, which is the unique identifier for the
    user. Commonly the `sub` claim of the response.
* `map_user_attributes(self, userinfo, token)`
  - This method must be async.
  - This method should be async.
  - Arguments:
    - `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
      information from.

@@ -66,18 +66,6 @@ A custom mapping provider must specify the following methods:

  - Returns a dictionary with two keys:
    - localpart: A required string, used to generate the Matrix ID.
    - displayname: An optional string, the display name for the user.
* `get_extra_attributes(self, userinfo, token)`
  - This method must be async.
  - Arguments:
    - `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
      information from.
    - `token` - A dictionary which includes information necessary to make
      further requests to the OpenID provider.
  - Returns a dictionary that is suitable to be serialized to JSON. This
    will be returned as part of the response during a successful login.

Note that care should be taken to not overwrite any of the parameters
usually returned as part of the [login response](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login).
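A fragmentary sketch of the two methods discussed above. The class name and the claims used are assumptions for illustration; the method signatures, the async requirement, and the `localpart`/`displayname` return shape are the ones documented above, and the extra attributes are merely an example of a JSON-serialisable dictionary.

```python
class ExampleOidcMappingProvider:
    """Illustrative fragment of a custom OIDC mapping provider."""

    async def map_user_attributes(self, userinfo, token) -> dict:
        # `userinfo` is an authlib.oidc.core.claims.UserInfo object; which
        # claims it actually carries depends on your identity provider.
        return {
            "localpart": userinfo["preferred_username"],  # assumed claim
            "displayname": userinfo.get("name"),          # optional
        }

    async def get_extra_attributes(self, userinfo, token) -> dict:
        # Returned (JSON-serialised) as part of a successful login response,
        # so take care not to clash with the standard login response fields.
        return {"birthdate": userinfo.get("birthdate")}
```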
### Default OpenID Mapping Provider
@@ -1,14 +1,9 @@
|
||||
[Unit]
|
||||
Description=Synapse %i
|
||||
AssertPathExists=/etc/matrix-synapse/workers/%i.yaml
|
||||
|
||||
# This service should be restarted when the synapse target is restarted.
|
||||
PartOf=matrix-synapse.target
|
||||
|
||||
# if this is started at the same time as the main, let the main process start
|
||||
# first, to initialise the database schema.
|
||||
After=matrix-synapse.service
|
||||
|
||||
[Service]
|
||||
Type=notify
|
||||
NotifyAccess=main
|
||||
|
||||
@@ -217,7 +217,6 @@ expressions:
|
||||
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
|
||||
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
|
||||
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
|
||||
^/_synapse/client/password_reset/email/submit_token$
|
||||
|
||||
# Registration/login requests
|
||||
^/_matrix/client/(api/v1|r0|unstable)/login$
|
||||
@@ -243,22 +242,6 @@ for the room are in flight:
|
||||
|
||||
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
|
||||
|
||||
Additionally, the following endpoints should be included if Synapse is configured
|
||||
to use SSO (you only need to include the ones for whichever SSO provider you're
|
||||
using):
|
||||
|
||||
# OpenID Connect requests.
|
||||
^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect$
|
||||
^/_synapse/oidc/callback$
|
||||
|
||||
# SAML requests.
|
||||
^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect$
|
||||
^/_matrix/saml2/authn_response$
|
||||
|
||||
# CAS requests.
|
||||
^/_matrix/client/(api/v1|r0|unstable)/login/(cas|sso)/redirect$
|
||||
^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$
|
||||
|
||||
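One way to sanity-check a worker or reverse-proxy routing rule is to run the regular expressions above against sample request paths. A small sketch using only the SSO-related patterns listed above:

```python
import re

# SSO-related endpoints listed above (OpenID Connect, SAML and CAS).
SSO_WORKER_PATTERNS = [
    r"^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect$",
    r"^/_synapse/oidc/callback$",
    r"^/_matrix/saml2/authn_response$",
    r"^/_matrix/client/(api/v1|r0|unstable)/login/(cas|sso)/redirect$",
    r"^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$",
]

def routed_to_worker(path: str) -> bool:
    """Return True if the request path matches one of the worker patterns."""
    return any(re.match(p, path) for p in SSO_WORKER_PATTERNS)

print(routed_to_worker("/_matrix/client/r0/login/sso/redirect"))  # True
print(routed_to_worker("/_matrix/client/r0/account/whoami"))      # False
```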
Note that a HTTP listener with `client` and `federation` resources must be
|
||||
configured in the `worker_listeners` option in the worker config.
|
||||
|
||||
|
||||
mypy.ini
@@ -1,69 +1,11 @@
|
||||
[mypy]
|
||||
namespace_packages = True
|
||||
plugins = mypy_zope:plugin, scripts-dev/mypy_synapse_plugin.py
|
||||
plugins = mypy_zope:plugin
|
||||
follow_imports = silent
|
||||
check_untyped_defs = True
|
||||
show_error_codes = True
|
||||
show_traceback = True
|
||||
mypy_path = stubs
|
||||
files =
|
||||
synapse/api,
|
||||
synapse/appservice,
|
||||
synapse/config,
|
||||
synapse/event_auth.py,
|
||||
synapse/events/builder.py,
|
||||
synapse/events/spamcheck.py,
|
||||
synapse/federation,
|
||||
synapse/handlers/auth.py,
|
||||
synapse/handlers/cas_handler.py,
|
||||
synapse/handlers/directory.py,
|
||||
synapse/handlers/events.py,
|
||||
synapse/handlers/federation.py,
|
||||
synapse/handlers/identity.py,
|
||||
synapse/handlers/initial_sync.py,
|
||||
synapse/handlers/message.py,
|
||||
synapse/handlers/oidc_handler.py,
|
||||
synapse/handlers/pagination.py,
|
||||
synapse/handlers/presence.py,
|
||||
synapse/handlers/room.py,
|
||||
synapse/handlers/room_member.py,
|
||||
synapse/handlers/room_member_worker.py,
|
||||
synapse/handlers/saml_handler.py,
|
||||
synapse/handlers/sync.py,
|
||||
synapse/handlers/ui_auth,
|
||||
synapse/http/federation/well_known_resolver.py,
|
||||
synapse/http/server.py,
|
||||
synapse/http/site.py,
|
||||
synapse/logging,
|
||||
synapse/metrics,
|
||||
synapse/module_api,
|
||||
synapse/notifier.py,
|
||||
synapse/push/pusherpool.py,
|
||||
synapse/push/push_rule_evaluator.py,
|
||||
synapse/replication,
|
||||
synapse/rest,
|
||||
synapse/server.py,
|
||||
synapse/server_notices,
|
||||
synapse/spam_checker_api,
|
||||
synapse/state,
|
||||
synapse/storage/databases/main/events.py,
|
||||
synapse/storage/databases/main/stream.py,
|
||||
synapse/storage/databases/main/ui_auth.py,
|
||||
synapse/storage/database.py,
|
||||
synapse/storage/engines,
|
||||
synapse/storage/persist_events.py,
|
||||
synapse/storage/state.py,
|
||||
synapse/storage/util,
|
||||
synapse/streams,
|
||||
synapse/types.py,
|
||||
synapse/util/async_helpers.py,
|
||||
synapse/util/caches/descriptors.py,
|
||||
synapse/util/caches/stream_change_cache.py,
|
||||
synapse/util/metrics.py,
|
||||
tests/replication,
|
||||
tests/test_utils,
|
||||
tests/rest/client/v2_alpha/test_auth.py,
|
||||
tests/util/test_stream_change_cache.py
|
||||
|
||||
[mypy-pymacaroons.*]
|
||||
ignore_missing_imports = True
|
||||
|
||||
@@ -25,7 +25,6 @@ DISTS = (
|
||||
"ubuntu:xenial",
|
||||
"ubuntu:bionic",
|
||||
"ubuntu:focal",
|
||||
"ubuntu:groovy",
|
||||
)
|
||||
|
||||
DESC = '''\
|
||||
|
||||
@@ -1,22 +0,0 @@
|
||||
#! /bin/bash -eu
|
||||
# This script is designed for developers who want to test their code
|
||||
# against Complement.
|
||||
#
|
||||
# It makes a Synapse image which represents the current checkout,
|
||||
# then downloads Complement and runs it with that image.
|
||||
|
||||
cd "$(dirname $0)/.."
|
||||
|
||||
# Build the base Synapse image from the local checkout
|
||||
docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
|
||||
|
||||
# Download Complement
|
||||
wget -N https://github.com/matrix-org/complement/archive/master.tar.gz
|
||||
tar -xzf master.tar.gz
|
||||
cd complement-master
|
||||
|
||||
# Build the Synapse image from Complement, based on the above image we just built
|
||||
docker build -t complement-synapse -f dockerfiles/Synapse.Dockerfile ./dockerfiles
|
||||
|
||||
# Run the tests on the resulting image!
|
||||
COMPLEMENT_BASE_IMAGE=complement-synapse go test -v -count=1 ./tests
|
||||
@@ -1,5 +1,7 @@
|
||||
#! /usr/bin/python
|
||||
|
||||
from __future__ import print_function
|
||||
|
||||
import argparse
|
||||
import ast
|
||||
import os
|
||||
@@ -11,7 +13,7 @@ import yaml
|
||||
|
||||
class DefinitionVisitor(ast.NodeVisitor):
|
||||
def __init__(self):
|
||||
super().__init__()
|
||||
super(DefinitionVisitor, self).__init__()
|
||||
self.functions = {}
|
||||
self.classes = {}
|
||||
self.names = {}
|
||||
|
||||
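The `DefinitionVisitor` above is a standard `ast.NodeVisitor`: it walks a parsed module and records definitions as it goes. A self-contained sketch of the same pattern, simplified to collecting function names only:

```python
import ast

class FunctionCollector(ast.NodeVisitor):
    """Collects function names, mirroring the DefinitionVisitor pattern above."""

    def __init__(self) -> None:
        super().__init__()
        self.functions = []

    def visit_FunctionDef(self, node: ast.FunctionDef) -> None:
        self.functions.append(node.name)
        # Keep walking so nested definitions are visited too.
        self.generic_visit(node)

source = "def outer():\n    def inner():\n        pass\n"
collector = FunctionCollector()
collector.visit(ast.parse(source))
print(collector.functions)  # ['outer', 'inner']
```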
@@ -1,5 +1,7 @@
|
||||
#!/usr/bin/env python2
|
||||
|
||||
from __future__ import print_function
|
||||
|
||||
import sys
|
||||
|
||||
import pymacaroons
|
||||
|
||||
@@ -15,16 +15,16 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from __future__ import print_function
|
||||
|
||||
import argparse
|
||||
import base64
|
||||
import json
|
||||
import sys
|
||||
from typing import Any, Optional
|
||||
from urllib import parse as urlparse
|
||||
|
||||
import nacl.signing
|
||||
import requests
|
||||
import signedjson.types
|
||||
import srvlookup
|
||||
import yaml
|
||||
from requests.adapters import HTTPAdapter
|
||||
@@ -69,9 +69,7 @@ def encode_canonical_json(value):
|
||||
).encode("UTF-8")
|
||||
|
||||
|
||||
def sign_json(
|
||||
json_object: Any, signing_key: signedjson.types.SigningKey, signing_name: str
|
||||
) -> Any:
|
||||
def sign_json(json_object, signing_key, signing_name):
|
||||
signatures = json_object.pop("signatures", {})
|
||||
unsigned = json_object.pop("unsigned", None)
|
||||
|
||||
@@ -124,14 +122,7 @@ def read_signing_keys(stream):
|
||||
return keys
|
||||
|
||||
|
||||
def request(
|
||||
method: Optional[str],
|
||||
origin_name: str,
|
||||
origin_key: signedjson.types.SigningKey,
|
||||
destination: str,
|
||||
path: str,
|
||||
content: Optional[str],
|
||||
) -> requests.Response:
|
||||
def request_json(method, origin_name, origin_key, destination, path, content):
|
||||
if method is None:
|
||||
if content is None:
|
||||
method = "GET"
|
||||
@@ -168,14 +159,11 @@ def request(
|
||||
if method == "POST":
|
||||
headers["Content-Type"] = "application/json"
|
||||
|
||||
return s.request(
|
||||
method=method,
|
||||
url=dest,
|
||||
headers=headers,
|
||||
verify=False,
|
||||
data=content,
|
||||
stream=True,
|
||||
result = s.request(
|
||||
method=method, url=dest, headers=headers, verify=False, data=content
|
||||
)
|
||||
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
|
||||
return result.json()
|
||||
|
||||
|
||||
def main():
|
||||
@@ -234,7 +222,7 @@ def main():
|
||||
with open(args.signing_key_path) as f:
|
||||
key = read_signing_keys(f)[0]
|
||||
|
||||
result = request(
|
||||
result = request_json(
|
||||
args.method,
|
||||
args.server_name,
|
||||
key,
|
||||
@@ -243,12 +231,7 @@ def main():
|
||||
content=args.body,
|
||||
)
|
||||
|
||||
sys.stderr.write("Status Code: %d\n" % (result.status_code,))
|
||||
|
||||
for chunk in result.iter_content():
|
||||
# we write raw utf8 to stdout.
|
||||
sys.stdout.buffer.write(chunk)
|
||||
|
||||
json.dump(result, sys.stdout)
|
||||
print("")
|
||||
|
||||
|
||||
@@ -321,7 +304,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
|
||||
url = urlparse.urlunparse(
|
||||
("https", netloc, parsed.path, parsed.params, parsed.query, parsed.fragment)
|
||||
)
|
||||
return super().get_connection(url, proxies)
|
||||
return super(MatrixConnectionAdapter, self).get_connection(url, proxies)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
@@ -1,3 +1,5 @@
|
||||
from __future__ import print_function
|
||||
|
||||
import sqlite3
|
||||
import sys
|
||||
|
||||
@@ -13,7 +15,7 @@ from synapse.storage.pdu import PduStore
|
||||
from synapse.storage.signatures import SignatureStore
|
||||
|
||||
|
||||
class Store:
|
||||
class Store(object):
|
||||
_get_pdu_tuples = PduStore.__dict__["_get_pdu_tuples"]
|
||||
_get_pdu_content_hashes_txn = SignatureStore.__dict__["_get_pdu_content_hashes_txn"]
|
||||
_get_prev_pdu_hashes_txn = SignatureStore.__dict__["_get_prev_pdu_hashes_txn"]
|
||||
|
||||
@@ -1,85 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2020 The Matrix.org Foundation C.I.C.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
"""This is a mypy plugin for Synpase to deal with some of the funky typing that
|
||||
can crop up, e.g the cache descriptors.
|
||||
"""
|
||||
|
||||
from typing import Callable, Optional
|
||||
|
||||
from mypy.plugin import MethodSigContext, Plugin
|
||||
from mypy.typeops import bind_self
|
||||
from mypy.types import CallableType
|
||||
|
||||
|
||||
class SynapsePlugin(Plugin):
|
||||
def get_method_signature_hook(
|
||||
self, fullname: str
|
||||
) -> Optional[Callable[[MethodSigContext], CallableType]]:
|
||||
if fullname.startswith(
|
||||
"synapse.util.caches.descriptors._CachedFunction.__call__"
|
||||
):
|
||||
return cached_function_method_signature
|
||||
return None
|
||||
|
||||
|
||||
def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
|
||||
"""Fixes the `_CachedFunction.__call__` signature to be correct.
|
||||
|
||||
It already has *almost* the correct signature, except:
|
||||
|
||||
1. the `self` argument needs to be marked as "bound"; and
|
||||
2. any `cache_context` argument should be removed.
|
||||
"""
|
||||
|
||||
# First we mark this as a bound function signature.
|
||||
signature = bind_self(ctx.default_signature)
|
||||
|
||||
# Secondly, we remove any "cache_context" args.
|
||||
#
|
||||
# Note: We should be only doing this if `cache_context=True` is set, but if
|
||||
# it isn't then the code will raise an exception when its called anyway, so
|
||||
# its not the end of the world.
|
||||
context_arg_index = None
|
||||
for idx, name in enumerate(signature.arg_names):
|
||||
if name == "cache_context":
|
||||
context_arg_index = idx
|
||||
break
|
||||
|
||||
if context_arg_index:
|
||||
arg_types = list(signature.arg_types)
|
||||
arg_types.pop(context_arg_index)
|
||||
|
||||
arg_names = list(signature.arg_names)
|
||||
arg_names.pop(context_arg_index)
|
||||
|
||||
arg_kinds = list(signature.arg_kinds)
|
||||
arg_kinds.pop(context_arg_index)
|
||||
|
||||
signature = signature.copy_modified(
|
||||
arg_types=arg_types, arg_names=arg_names, arg_kinds=arg_kinds,
|
||||
)
|
||||
|
||||
return signature
|
||||
|
||||
|
||||
def plugin(version: str):
|
||||
# This is the entry point of the plugin, and let's us deal with the fact
|
||||
# that the mypy plugin interface is *not* stable by looking at the version
|
||||
# string.
|
||||
#
|
||||
# However, since we pin the version of mypy Synapse uses in CI, we don't
|
||||
# really care.
|
||||
return SynapsePlugin
|
||||
@@ -32,6 +32,8 @@ To use, pipe the above into::
|
||||
PYTHON_PATH=. ./scripts/move_remote_media_to_new_store.py <source repo> <dest repo>
|
||||
"""
|
||||
|
||||
from __future__ import print_function
|
||||
|
||||
import argparse
|
||||
import logging
|
||||
import os
|
||||
|
||||
@@ -14,6 +14,8 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from __future__ import print_function
|
||||
|
||||
from synapse._scripts.register_new_matrix_user import main
|
||||
|
||||
if __name__ == "__main__":
|
||||
|
||||
@@ -89,7 +89,6 @@ BOOLEAN_COLUMNS = {
|
||||
"redactions": ["have_censored"],
|
||||
"room_stats_state": ["is_federatable"],
|
||||
"local_media_repository": ["safe_from_quarantine"],
|
||||
"users": ["shadow_banned"],
|
||||
}
|
||||
|
||||
|
||||
@@ -145,7 +144,6 @@ IGNORED_TABLES = {
|
||||
# the sessions are transient anyway, so ignore them.
|
||||
"ui_auth_sessions",
|
||||
"ui_auth_sessions_credentials",
|
||||
"ui_auth_sessions_ips",
|
||||
}
|
||||
|
||||
|
||||
@@ -629,7 +627,6 @@ class Porter(object):
|
||||
self.progress.set_state("Setting up sequence generators")
|
||||
await self._setup_state_group_id_seq()
|
||||
await self._setup_user_id_seq()
|
||||
await self._setup_events_stream_seqs()
|
||||
|
||||
self.progress.done()
|
||||
except Exception as e:
|
||||
@@ -806,29 +803,6 @@ class Porter(object):
|
||||
|
||||
return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
|
||||
|
||||
def _setup_events_stream_seqs(self):
|
||||
def r(txn):
|
||||
txn.execute("SELECT MAX(stream_ordering) FROM events")
|
||||
curr_id = txn.fetchone()[0]
|
||||
if curr_id:
|
||||
next_id = curr_id + 1
|
||||
txn.execute(
|
||||
"ALTER SEQUENCE events_stream_seq RESTART WITH %s", (next_id,)
|
||||
)
|
||||
|
||||
txn.execute("SELECT -MIN(stream_ordering) FROM events")
|
||||
curr_id = txn.fetchone()[0]
|
||||
if curr_id:
|
||||
next_id = curr_id + 1
|
||||
txn.execute(
|
||||
"ALTER SEQUENCE events_backfill_stream_seq RESTART WITH %s",
|
||||
(next_id,),
|
||||
)
|
||||
|
||||
return self.postgres_store.db_pool.runInteraction(
|
||||
"_setup_events_stream_seqs", r
|
||||
)
|
||||
|
||||
|
||||
##############################################
|
||||
# The following is simply UI stuff
|
||||
|
||||
setup.py
@@ -94,22 +94,6 @@ ALL_OPTIONAL_REQUIREMENTS = dependencies["ALL_OPTIONAL_REQUIREMENTS"]
|
||||
# Make `pip install matrix-synapse[all]` install all the optional dependencies.
|
||||
CONDITIONAL_REQUIREMENTS["all"] = list(ALL_OPTIONAL_REQUIREMENTS)
|
||||
|
||||
# Developer dependencies should not get included in "all".
|
||||
#
|
||||
# We pin black so that our tests don't start failing on new releases.
|
||||
CONDITIONAL_REQUIREMENTS["lint"] = [
|
||||
"isort==5.0.3",
|
||||
"black==19.10b0",
|
||||
"flake8-comprehensions",
|
||||
"flake8",
|
||||
]
|
||||
|
||||
# Dependencies which are exclusively required by unit test code. This is
|
||||
# NOT a list of all modules that are necessary to run the unit tests.
|
||||
# Tests assume that all optional dependencies are installed.
|
||||
#
|
||||
# parameterized_class decorator was introduced in parameterized 0.7.0
|
||||
CONDITIONAL_REQUIREMENTS["test"] = ["mock>=2.0", "parameterized>=0.7.0"]
|
||||
|
||||
setup(
|
||||
name="matrix-synapse",
|
||||
|
||||
@@ -1,47 +0,0 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
# Copyright 2020 The Matrix.org Foundation C.I.C.
|
||||
#
|
||||
# Licensed under the Apache License, Version 2.0 (the "License");
|
||||
# you may not use this file except in compliance with the License.
|
||||
# You may obtain a copy of the License at
|
||||
#
|
||||
# http://www.apache.org/licenses/LICENSE-2.0
|
||||
#
|
||||
# Unless required by applicable law or agreed to in writing, software
|
||||
# distributed under the License is distributed on an "AS IS" BASIS,
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# Stub for frozendict.
|
||||
|
||||
from typing import (
|
||||
Any,
|
||||
Hashable,
|
||||
Iterable,
|
||||
Iterator,
|
||||
Mapping,
|
||||
overload,
|
||||
Tuple,
|
||||
TypeVar,
|
||||
)
|
||||
|
||||
_KT = TypeVar("_KT", bound=Hashable) # Key type.
|
||||
_VT = TypeVar("_VT") # Value type.
|
||||
|
||||
class frozendict(Mapping[_KT, _VT]):
|
||||
@overload
|
||||
def __init__(self, **kwargs: _VT) -> None: ...
|
||||
@overload
|
||||
def __init__(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
|
||||
@overload
|
||||
def __init__(
|
||||
self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT
|
||||
) -> None: ...
|
||||
def __getitem__(self, key: _KT) -> _VT: ...
|
||||
def __contains__(self, key: Any) -> bool: ...
|
||||
def copy(self, **add_or_replace: Any) -> frozendict: ...
|
||||
def __iter__(self) -> Iterator[_KT]: ...
|
||||
def __len__(self) -> int: ...
|
||||
def __repr__(self) -> str: ...
|
||||
def __hash__(self) -> int: ...
|
||||
@@ -48,7 +48,7 @@ try:
|
||||
except ImportError:
|
||||
pass
|
||||
|
||||
__version__ = "1.21.1"
|
||||
__version__ = "1.19.0"
|
||||
|
||||
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
|
||||
# We import here so that we don't have to install a bunch of deps when
|
||||
|
||||
@@ -14,6 +14,8 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from __future__ import print_function
|
||||
|
||||
import argparse
|
||||
import getpass
|
||||
import hashlib
|
||||
|
||||
@@ -58,7 +58,7 @@ class _InvalidMacaroonException(Exception):
|
||||
pass
|
||||
|
||||
|
||||
class Auth:
|
||||
class Auth(object):
|
||||
"""
|
||||
FIXME: This class contains a mix of functions for authenticating users
|
||||
of our client-server API and authenticating events added to room graphs.
|
||||
@@ -218,7 +218,11 @@ class Auth:
|
||||
# Deny the request if the user account has expired.
|
||||
if self._account_validity.enabled and not allow_expired:
|
||||
user_id = user.to_string()
|
||||
if await self.store.is_account_expired(user_id, self.clock.time_msec()):
|
||||
expiration_ts = await self.store.get_expiration_ts_for_user(user_id)
|
||||
if (
|
||||
expiration_ts is not None
|
||||
and self.clock.time_msec() >= expiration_ts
|
||||
):
|
||||
raise AuthError(
|
||||
403, "User account has expired", errcode=Codes.EXPIRED_ACCOUNT
|
||||
)
|
||||
|
||||
@@ -22,7 +22,7 @@ from synapse.config.server import is_threepid_reserved
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class AuthBlocking:
|
||||
class AuthBlocking(object):
|
||||
def __init__(self, hs):
|
||||
self.store = hs.get_datastore()
|
||||
|
||||
|
||||
@@ -28,7 +28,7 @@ MAX_ALIAS_LENGTH = 255
|
||||
MAX_USERID_LENGTH = 255
|
||||
|
||||
|
||||
class Membership:
|
||||
class Membership(object):
|
||||
|
||||
"""Represents the membership states of a user in a room."""
|
||||
|
||||
@@ -40,7 +40,7 @@ class Membership:
|
||||
LIST = (INVITE, JOIN, KNOCK, LEAVE, BAN)
|
||||
|
||||
|
||||
class PresenceState:
|
||||
class PresenceState(object):
|
||||
"""Represents the presence state of a user."""
|
||||
|
||||
OFFLINE = "offline"
|
||||
@@ -48,14 +48,14 @@ class PresenceState:
|
||||
ONLINE = "online"
|
||||
|
||||
|
||||
class JoinRules:
|
||||
class JoinRules(object):
|
||||
PUBLIC = "public"
|
||||
KNOCK = "knock"
|
||||
INVITE = "invite"
|
||||
PRIVATE = "private"
|
||||
|
||||
|
||||
class LoginType:
|
||||
class LoginType(object):
|
||||
PASSWORD = "m.login.password"
|
||||
EMAIL_IDENTITY = "m.login.email.identity"
|
||||
MSISDN = "m.login.msisdn"
|
||||
@@ -65,7 +65,7 @@ class LoginType:
|
||||
DUMMY = "m.login.dummy"
|
||||
|
||||
|
||||
class EventTypes:
|
||||
class EventTypes(object):
|
||||
Member = "m.room.member"
|
||||
Create = "m.room.create"
|
||||
Tombstone = "m.room.tombstone"
|
||||
@@ -96,17 +96,17 @@ class EventTypes:
|
||||
Presence = "m.presence"
|
||||
|
||||
|
||||
class RejectedReason:
|
||||
class RejectedReason(object):
|
||||
AUTH_ERROR = "auth_error"
|
||||
|
||||
|
||||
class RoomCreationPreset:
|
||||
class RoomCreationPreset(object):
|
||||
PRIVATE_CHAT = "private_chat"
|
||||
PUBLIC_CHAT = "public_chat"
|
||||
TRUSTED_PRIVATE_CHAT = "trusted_private_chat"
|
||||
|
||||
|
||||
class ThirdPartyEntityKind:
|
||||
class ThirdPartyEntityKind(object):
|
||||
USER = "user"
|
||||
LOCATION = "location"
|
||||
|
||||
@@ -115,7 +115,7 @@ ServerNoticeMsgType = "m.server_notice"
|
||||
ServerNoticeLimitReached = "m.server_notice.usage_limit_reached"
|
||||
|
||||
|
||||
class UserTypes:
|
||||
class UserTypes(object):
|
||||
"""Allows for user type specific behaviour. With the benefit of hindsight
|
||||
'admin' and 'guest' users should also be UserTypes. Normal users are type None
|
||||
"""
|
||||
@@ -125,7 +125,7 @@ class UserTypes:
|
||||
ALL_USER_TYPES = (SUPPORT, BOT)
|
||||
|
||||
|
||||
class RelationTypes:
|
||||
class RelationTypes(object):
|
||||
"""The types of relations known to this server.
|
||||
"""
|
||||
|
||||
@@ -134,14 +134,14 @@ class RelationTypes:
|
||||
REFERENCE = "m.reference"
|
||||
|
||||
|
||||
class LimitBlockingTypes:
|
||||
class LimitBlockingTypes(object):
|
||||
"""Reasons that a server may be blocked"""
|
||||
|
||||
MONTHLY_ACTIVE_USER = "monthly_active_user"
|
||||
HS_DISABLED = "hs_disabled"
|
||||
|
||||
|
||||
class EventContentFields:
|
||||
class EventContentFields(object):
|
||||
"""Fields found in events' content, regardless of type."""
|
||||
|
||||
# Labels for the event, cf https://github.com/matrix-org/matrix-doc/pull/2326
|
||||
@@ -152,6 +152,6 @@ class EventContentFields:
|
||||
SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"
|
||||
|
||||
|
||||
class RoomEncryptionAlgorithms:
|
||||
class RoomEncryptionAlgorithms(object):
|
||||
MEGOLM_V1_AES_SHA2 = "m.megolm.v1.aes-sha2"
|
||||
DEFAULT = MEGOLM_V1_AES_SHA2
|
||||
|
||||
@@ -21,9 +21,9 @@ import typing
|
||||
from http import HTTPStatus
|
||||
from typing import Dict, List, Optional, Union
|
||||
|
||||
from twisted.web import http
|
||||
from canonicaljson import json
|
||||
|
||||
from synapse.util import json_decoder
|
||||
from twisted.web import http
|
||||
|
||||
if typing.TYPE_CHECKING:
|
||||
from synapse.types import JsonDict
|
||||
@@ -31,7 +31,7 @@ if typing.TYPE_CHECKING:
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class Codes:
|
||||
class Codes(object):
|
||||
UNRECOGNIZED = "M_UNRECOGNIZED"
|
||||
UNAUTHORIZED = "M_UNAUTHORIZED"
|
||||
FORBIDDEN = "M_FORBIDDEN"
|
||||
@@ -87,7 +87,7 @@ class CodeMessageException(RuntimeError):
|
||||
"""
|
||||
|
||||
def __init__(self, code: Union[int, HTTPStatus], msg: str):
|
||||
super().__init__("%d: %s" % (code, msg))
|
||||
super(CodeMessageException, self).__init__("%d: %s" % (code, msg))
|
||||
|
||||
# Some calls to this method pass instances of http.HTTPStatus for `code`.
|
||||
# While HTTPStatus is a subclass of int, it has magic __str__ methods
|
||||
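This `super().__init__(...)` versus `super(CodeMessageException, self).__init__(...)` swap recurs throughout the rest of this diff. On Python 3 the two spellings are equivalent; the explicit two-argument form is simply the older style that also works on Python 2. A tiny demonstration:

```python
class Base:
    def __init__(self, msg: str) -> None:
        self.msg = msg

class Modern(Base):
    def __init__(self) -> None:
        # Zero-argument form, Python 3 only.
        super().__init__("hello")

class Legacy(Base):
    def __init__(self) -> None:
        # Explicit form, as used on the older side of this diff; equivalent on
        # Python 3 and also valid on Python 2.
        super(Legacy, self).__init__("hello")

assert Modern().msg == Legacy().msg == "hello"
```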
@@ -138,7 +138,7 @@ class SynapseError(CodeMessageException):
|
||||
msg: The human-readable error message.
|
||||
errcode: The matrix error code e.g 'M_FORBIDDEN'
|
||||
"""
|
||||
super().__init__(code, msg)
|
||||
super(SynapseError, self).__init__(code, msg)
|
||||
self.errcode = errcode
|
||||
|
||||
def error_dict(self):
|
||||
@@ -159,7 +159,7 @@ class ProxiedRequestError(SynapseError):
|
||||
errcode: str = Codes.UNKNOWN,
|
||||
additional_fields: Optional[Dict] = None,
|
||||
):
|
||||
super().__init__(code, msg, errcode)
|
||||
super(ProxiedRequestError, self).__init__(code, msg, errcode)
|
||||
if additional_fields is None:
|
||||
self._additional_fields = {} # type: Dict
|
||||
else:
|
||||
@@ -181,7 +181,7 @@ class ConsentNotGivenError(SynapseError):
|
||||
msg: The human-readable error message
|
||||
consent_url: The URL where the user can give their consent
|
||||
"""
|
||||
super().__init__(
|
||||
super(ConsentNotGivenError, self).__init__(
|
||||
code=HTTPStatus.FORBIDDEN, msg=msg, errcode=Codes.CONSENT_NOT_GIVEN
|
||||
)
|
||||
self._consent_uri = consent_uri
|
||||
@@ -201,7 +201,7 @@ class UserDeactivatedError(SynapseError):
|
||||
Args:
|
||||
msg: The human-readable error message
|
||||
"""
|
||||
super().__init__(
|
||||
super(UserDeactivatedError, self).__init__(
|
||||
code=HTTPStatus.FORBIDDEN, msg=msg, errcode=Codes.USER_DEACTIVATED
|
||||
)
|
||||
|
||||
@@ -225,7 +225,7 @@ class FederationDeniedError(SynapseError):
|
||||
|
||||
self.destination = destination
|
||||
|
||||
super().__init__(
|
||||
super(FederationDeniedError, self).__init__(
|
||||
code=403,
|
||||
msg="Federation denied with %s." % (self.destination,),
|
||||
errcode=Codes.FORBIDDEN,
|
||||
@@ -244,7 +244,9 @@ class InteractiveAuthIncompleteError(Exception):
|
||||
"""
|
||||
|
||||
def __init__(self, session_id: str, result: "JsonDict"):
|
||||
super().__init__("Interactive auth not yet complete")
|
||||
super(InteractiveAuthIncompleteError, self).__init__(
|
||||
"Interactive auth not yet complete"
|
||||
)
|
||||
self.session_id = session_id
|
||||
self.result = result
|
||||
|
||||
@@ -259,14 +261,14 @@ class UnrecognizedRequestError(SynapseError):
|
||||
message = "Unrecognized request"
|
||||
else:
|
||||
message = args[0]
|
||||
super().__init__(400, message, **kwargs)
|
||||
super(UnrecognizedRequestError, self).__init__(400, message, **kwargs)
|
||||
|
||||
|
||||
class NotFoundError(SynapseError):
|
||||
"""An error indicating we can't find the thing you asked for"""
|
||||
|
||||
def __init__(self, msg: str = "Not found", errcode: str = Codes.NOT_FOUND):
|
||||
super().__init__(404, msg, errcode=errcode)
|
||||
super(NotFoundError, self).__init__(404, msg, errcode=errcode)
|
||||
|
||||
|
||||
class AuthError(SynapseError):
|
||||
@@ -277,7 +279,7 @@ class AuthError(SynapseError):
|
||||
def __init__(self, *args, **kwargs):
|
||||
if "errcode" not in kwargs:
|
||||
kwargs["errcode"] = Codes.FORBIDDEN
|
||||
super().__init__(*args, **kwargs)
|
||||
super(AuthError, self).__init__(*args, **kwargs)
|
||||
|
||||
|
||||
class InvalidClientCredentialsError(SynapseError):
|
||||
@@ -333,7 +335,7 @@ class ResourceLimitError(SynapseError):
|
||||
):
|
||||
self.admin_contact = admin_contact
|
||||
self.limit_type = limit_type
|
||||
super().__init__(code, msg, errcode=errcode)
|
||||
super(ResourceLimitError, self).__init__(code, msg, errcode=errcode)
|
||||
|
||||
def error_dict(self):
|
||||
return cs_error(
|
||||
@@ -350,7 +352,7 @@ class EventSizeError(SynapseError):
|
||||
def __init__(self, *args, **kwargs):
|
||||
if "errcode" not in kwargs:
|
||||
kwargs["errcode"] = Codes.TOO_LARGE
|
||||
super().__init__(413, *args, **kwargs)
|
||||
super(EventSizeError, self).__init__(413, *args, **kwargs)
|
||||
|
||||
|
||||
class EventStreamError(SynapseError):
|
||||
@@ -359,7 +361,7 @@ class EventStreamError(SynapseError):
|
||||
def __init__(self, *args, **kwargs):
|
||||
if "errcode" not in kwargs:
|
||||
kwargs["errcode"] = Codes.BAD_PAGINATION
|
||||
super().__init__(*args, **kwargs)
|
||||
super(EventStreamError, self).__init__(*args, **kwargs)
|
||||
|
||||
|
||||
class LoginError(SynapseError):
|
||||
@@ -382,7 +384,7 @@ class InvalidCaptchaError(SynapseError):
|
||||
error_url: Optional[str] = None,
|
||||
errcode: str = Codes.CAPTCHA_INVALID,
|
||||
):
|
||||
super().__init__(code, msg, errcode)
|
||||
super(InvalidCaptchaError, self).__init__(code, msg, errcode)
|
||||
self.error_url = error_url
|
||||
|
||||
def error_dict(self):
|
||||
@@ -400,7 +402,7 @@ class LimitExceededError(SynapseError):
|
||||
retry_after_ms: Optional[int] = None,
|
||||
errcode: str = Codes.LIMIT_EXCEEDED,
|
||||
):
|
||||
super().__init__(code, msg, errcode)
|
||||
super(LimitExceededError, self).__init__(code, msg, errcode)
|
||||
self.retry_after_ms = retry_after_ms
|
||||
|
||||
def error_dict(self):
|
||||
@@ -416,7 +418,9 @@ class RoomKeysVersionError(SynapseError):
|
||||
Args:
|
||||
current_version: the current version of the store they should have used
|
||||
"""
|
||||
super().__init__(403, "Wrong room_keys version", Codes.WRONG_ROOM_KEYS_VERSION)
|
||||
super(RoomKeysVersionError, self).__init__(
|
||||
403, "Wrong room_keys version", Codes.WRONG_ROOM_KEYS_VERSION
|
||||
)
|
||||
self.current_version = current_version
|
||||
|
||||
|
||||
@@ -425,7 +429,7 @@ class UnsupportedRoomVersionError(SynapseError):
|
||||
not support."""
|
||||
|
||||
def __init__(self, msg: str = "Homeserver does not support this room version"):
|
||||
super().__init__(
|
||||
super(UnsupportedRoomVersionError, self).__init__(
|
||||
code=400, msg=msg, errcode=Codes.UNSUPPORTED_ROOM_VERSION,
|
||||
)
|
||||
|
||||
@@ -436,7 +440,7 @@ class ThreepidValidationError(SynapseError):
|
||||
def __init__(self, *args, **kwargs):
|
||||
if "errcode" not in kwargs:
|
||||
kwargs["errcode"] = Codes.FORBIDDEN
|
||||
super().__init__(*args, **kwargs)
|
||||
super(ThreepidValidationError, self).__init__(*args, **kwargs)
|
||||
|
||||
|
||||
class IncompatibleRoomVersionError(SynapseError):
|
||||
@@ -447,7 +451,7 @@ class IncompatibleRoomVersionError(SynapseError):
|
||||
"""
|
||||
|
||||
def __init__(self, room_version: str):
|
||||
super().__init__(
|
||||
super(IncompatibleRoomVersionError, self).__init__(
|
||||
code=400,
|
||||
msg="Your homeserver does not support the features required to "
|
||||
"join this room",
|
||||
@@ -469,7 +473,7 @@ class PasswordRefusedError(SynapseError):
|
||||
msg: str = "This password doesn't comply with the server's policy",
|
||||
errcode: str = Codes.WEAK_PASSWORD,
|
||||
):
|
||||
super().__init__(
|
||||
super(PasswordRefusedError, self).__init__(
|
||||
code=400, msg=msg, errcode=errcode,
|
||||
)
|
||||
|
||||
@@ -484,7 +488,7 @@ class RequestSendFailed(RuntimeError):
|
||||
"""
|
||||
|
||||
def __init__(self, inner_exception, can_retry):
|
||||
super().__init__(
|
||||
super(RequestSendFailed, self).__init__(
|
||||
"Failed to send request: %s: %s"
|
||||
% (type(inner_exception).__name__, inner_exception)
|
||||
)
|
||||
@@ -538,7 +542,7 @@ class FederationError(RuntimeError):
|
||||
self.source = source
|
||||
|
||||
msg = "%s %s: %s" % (level, code, reason)
|
||||
super().__init__(msg)
|
||||
super(FederationError, self).__init__(msg)
|
||||
|
||||
def get_dict(self):
|
||||
return {
|
||||
@@ -566,7 +570,7 @@ class HttpResponseException(CodeMessageException):
|
||||
msg: reason phrase from HTTP response status line
|
||||
response: body of response
|
||||
"""
|
||||
super().__init__(code, msg)
|
||||
super(HttpResponseException, self).__init__(code, msg)
|
||||
self.response = response
|
||||
|
||||
def to_synapse_error(self):
|
||||
@@ -589,7 +593,7 @@ class HttpResponseException(CodeMessageException):
|
||||
# try to parse the body as json, to get better errcode/msg, but
|
||||
# default to M_UNKNOWN with the HTTP status as the error text
|
||||
try:
|
||||
j = json_decoder.decode(self.response.decode("utf-8"))
|
||||
j = json.loads(self.response.decode("utf-8"))
|
||||
except ValueError:
|
||||
j = {}
|
||||
|
||||
@@ -600,11 +604,3 @@ class HttpResponseException(CodeMessageException):
|
||||
errmsg = j.pop("error", self.msg)
|
||||
|
||||
return ProxiedRequestError(self.code, errmsg, errcode, j)
|
||||
|
||||
|
||||
class ShadowBanError(Exception):
|
||||
"""
|
||||
Raised when a shadow-banned user attempts to perform an action.
|
||||
|
||||
This should be caught and a proper "fake" success response sent to the user.
|
||||
"""
|
||||
|
||||
@@ -15,10 +15,10 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import json
|
||||
from typing import List
|
||||
|
||||
import jsonschema
|
||||
from canonicaljson import json
|
||||
from jsonschema import FormatChecker
|
||||
|
||||
from synapse.api.constants import EventContentFields
|
||||
@@ -130,9 +130,9 @@ def matrix_user_id_validator(user_id_str):
|
||||
return UserID.from_string(user_id_str)
|
||||
|
||||
|
||||
class Filtering:
|
||||
class Filtering(object):
|
||||
def __init__(self, hs):
|
||||
super().__init__()
|
||||
super(Filtering, self).__init__()
|
||||
self.store = hs.get_datastore()
|
||||
|
||||
async def get_user_filter(self, user_localpart, filter_id):
|
||||
@@ -168,7 +168,7 @@ class Filtering:
|
||||
raise SynapseError(400, str(e))
|
||||
|
||||
|
||||
class FilterCollection:
|
||||
class FilterCollection(object):
|
||||
def __init__(self, filter_json):
|
||||
self._filter_json = filter_json
|
||||
|
||||
@@ -249,7 +249,7 @@ class FilterCollection:
|
||||
)
|
||||
|
||||
|
||||
class Filter:
|
||||
class Filter(object):
|
||||
def __init__(self, filter_json):
|
||||
self.filter_json = filter_json
|
||||
|
||||
|
||||
@@ -17,11 +17,10 @@ from collections import OrderedDict
|
||||
from typing import Any, Optional, Tuple
|
||||
|
||||
from synapse.api.errors import LimitExceededError
|
||||
from synapse.types import Requester
|
||||
from synapse.util import Clock
|
||||
|
||||
|
||||
class Ratelimiter:
|
||||
class Ratelimiter(object):
|
||||
"""
|
||||
Ratelimit actions marked by arbitrary keys.
|
||||
|
||||
@@ -44,42 +43,6 @@ class Ratelimiter:
|
||||
# * The rate_hz of this particular entry. This can vary per request
|
||||
self.actions = OrderedDict() # type: OrderedDict[Any, Tuple[float, int, float]]
|
||||
|
||||
def can_requester_do_action(
|
||||
self,
|
||||
requester: Requester,
|
||||
rate_hz: Optional[float] = None,
|
||||
burst_count: Optional[int] = None,
|
||||
update: bool = True,
|
||||
_time_now_s: Optional[int] = None,
|
||||
) -> Tuple[bool, float]:
|
||||
"""Can the requester perform the action?
|
||||
|
||||
Args:
|
||||
requester: The requester to key off when rate limiting. The user property
|
||||
will be used.
|
||||
rate_hz: The long term number of actions that can be performed in a second.
|
||||
Overrides the value set during instantiation if set.
|
||||
burst_count: How many actions that can be performed before being limited.
|
||||
Overrides the value set during instantiation if set.
|
||||
update: Whether to count this check as performing the action
|
||||
_time_now_s: The current time. Optional, defaults to the current time according
|
||||
to self.clock. Only used by tests.
|
||||
|
||||
Returns:
|
||||
A tuple containing:
|
||||
* A bool indicating if they can perform the action now
|
||||
* The reactor timestamp for when the action can be performed next.
|
||||
-1 if rate_hz is less than or equal to zero
|
||||
"""
|
||||
# Disable rate limiting of users belonging to any AS that is configured
|
||||
# not to be rate limited in its registration file (rate_limited: true|false).
|
||||
if requester.app_service and not requester.app_service.is_rate_limited():
|
||||
return True, -1.0
|
||||
|
||||
return self.can_do_action(
|
||||
requester.user.to_string(), rate_hz, burst_count, update, _time_now_s
|
||||
)
|
||||
|
||||
def can_do_action(
|
||||
self,
|
||||
key: Any,
|
||||
|
||||
@@ -18,7 +18,7 @@ from typing import Dict
|
||||
import attr
|
||||
|
||||
|
||||
class EventFormatVersions:
|
||||
class EventFormatVersions(object):
|
||||
"""This is an internal enum for tracking the version of the event format,
|
||||
independently from the room version.
|
||||
"""
|
||||
@@ -35,20 +35,20 @@ KNOWN_EVENT_FORMAT_VERSIONS = {
|
||||
}
|
||||
|
||||
|
||||
class StateResolutionVersions:
|
||||
class StateResolutionVersions(object):
|
||||
"""Enum to identify the state resolution algorithms"""
|
||||
|
||||
V1 = 1 # room v1 state res
|
||||
V2 = 2 # MSC1442 state res: room v2 and later
|
||||
|
||||
|
||||
class RoomDisposition:
|
||||
class RoomDisposition(object):
|
||||
STABLE = "stable"
|
||||
UNSTABLE = "unstable"
|
||||
|
||||
|
||||
@attr.s(slots=True, frozen=True)
|
||||
class RoomVersion:
|
||||
class RoomVersion(object):
|
||||
"""An object which describes the unique attributes of a room version."""
|
||||
|
||||
identifier = attr.ib() # str; the identifier for this version
|
||||
@@ -69,7 +69,7 @@ class RoomVersion:
|
||||
limit_notifications_power_levels = attr.ib(type=bool)
|
||||
|
||||
|
||||
class RoomVersions:
|
||||
class RoomVersions(object):
|
||||
V1 = RoomVersion(
|
||||
"1",
|
||||
RoomDisposition.STABLE,
|
||||
|
||||
@@ -21,7 +21,6 @@ from urllib.parse import urlencode
|
||||
|
||||
from synapse.config import ConfigError
|
||||
|
||||
SYNAPSE_CLIENT_API_PREFIX = "/_synapse/client"
|
||||
CLIENT_API_PREFIX = "/_matrix/client"
|
||||
FEDERATION_PREFIX = "/_matrix/federation"
|
||||
FEDERATION_V1_PREFIX = FEDERATION_PREFIX + "/v1"
|
||||
@@ -34,7 +33,7 @@ MEDIA_PREFIX = "/_matrix/media/r0"
|
||||
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"
|
||||
|
||||
|
||||
class ConsentURIBuilder:
|
||||
class ConsentURIBuilder(object):
|
||||
def __init__(self, hs_config):
|
||||
"""
|
||||
Args:
|
||||
|
||||
@@ -334,13 +334,6 @@ def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
|
||||
This is to workaround https://twistedmatrix.com/trac/ticket/9620, where we
|
||||
can run out of file descriptors and infinite loop if we attempt to do too
|
||||
many DNS queries at once
|
||||
|
||||
XXX: I'm confused by this. reactor.nameResolver does not use twisted.names unless
|
||||
you explicitly install twisted.names as the resolver; rather it uses a GAIResolver
|
||||
backed by the reactor's default threadpool (which is limited to 10 threads). So
|
||||
(a) I don't understand why twisted ticket 9620 is relevant, and (b) I don't
|
||||
understand why we would run out of FDs if we did too many lookups at once.
|
||||
-- richvdh 2020/08/29
|
||||
"""
|
||||
new_resolver = _LimitedHostnameResolver(
|
||||
reactor.nameResolver, max_dns_requests_in_flight
|
||||
@@ -349,7 +342,7 @@ def install_dns_limiter(reactor, max_dns_requests_in_flight=100):
|
||||
reactor.installNameResolver(new_resolver)
|
||||
|
||||
|
||||
class _LimitedHostnameResolver:
|
||||
class _LimitedHostnameResolver(object):
|
||||
"""Wraps a IHostnameResolver, limiting the number of in-flight DNS lookups.
|
||||
"""
|
||||
|
||||
@@ -409,7 +402,7 @@ class _LimitedHostnameResolver:
|
||||
yield deferred
|
||||
|
||||
|
||||
class _DeferredResolutionReceiver:
|
||||
class _DeferredResolutionReceiver(object):
|
||||
"""Wraps a IResolutionReceiver and simply resolves the given deferred when
|
||||
resolution is complete
|
||||
"""
|
||||
|
||||
@@ -14,12 +14,13 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import json
import logging
import os
import sys
import tempfile

from canonicaljson import json

from twisted.internet import defer, task

import synapse
@@ -78,7 +79,8 @@ class AdminCmdServer(HomeServer):
pass


async def export_data_command(hs, args):
@defer.inlineCallbacks
def export_data_command(hs, args):
"""Export data for a user.

Args:
@@ -89,8 +91,10 @@ async def export_data_command(hs, args):
user_id = args.user_id
directory = args.output_directory

res = await hs.get_handlers().admin_handler.export_user_data(
user_id, FileExfiltrationWriter(user_id, directory=directory)
res = yield defer.ensureDeferred(
hs.get_handlers().admin_handler.export_user_data(
user_id, FileExfiltrationWriter(user_id, directory=directory)
)
)
print(res)

@@ -228,15 +232,14 @@ def start(config_options):
# We also make sure that `_base.start` gets run before we actually run the
# command.

async def run():
@defer.inlineCallbacks
def run(_reactor):
with LoggingContext("command"):
_base.start(ss, [])
await args.func(ss, args)
yield _base.start(ss, [])
yield args.func(ss, args)

_base.start_worker_reactor(
"synapse-admin-cmd",
config,
run_command=lambda: task.react(lambda _reactor: defer.ensureDeferred(run())),
"synapse-admin-cmd", config, run_command=lambda: task.react(run)
)



@@ -152,7 +152,7 @@ class PresenceStatusStubServlet(RestServlet):
PATTERNS = client_patterns("/presence/(?P<user_id>[^/]*)/status")

def __init__(self, hs):
super().__init__()
super(PresenceStatusStubServlet, self).__init__()
self.auth = hs.get_auth()

async def on_GET(self, request, user_id):
@@ -176,7 +176,7 @@ class KeyUploadServlet(RestServlet):
Args:
hs (synapse.server.HomeServer): server
"""
super().__init__()
super(KeyUploadServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.http_client = hs.get_simple_http_client()
@@ -646,7 +646,7 @@ class GenericWorkerServer(HomeServer):

class GenericWorkerReplicationHandler(ReplicationDataHandler):
def __init__(self, hs):
super().__init__(hs)
super(GenericWorkerReplicationHandler, self).__init__(hs)

self.store = hs.get_datastore()
self.presence_handler = hs.get_presence_handler() # type: GenericWorkerPresence
@@ -745,7 +745,7 @@ class GenericWorkerReplicationHandler(ReplicationDataHandler):
self.send_handler.wake_destination(server)


class FederationSenderHandler:
class FederationSenderHandler(object):
"""Processes the fedration replication stream

This class is only instantiate on the worker responsible for sending outbound

@@ -15,6 +15,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import print_function

import gc
import logging
import math
@@ -46,7 +48,6 @@ from synapse.api.urls import (
from synapse.app import _base
from synapse.app._base import listen_ssl, listen_tcp, quit_with_error
from synapse.config._base import ConfigError
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig
from synapse.federation.transport.server import TransportLayerServer
@@ -208,15 +209,6 @@ class SynapseHomeServer(HomeServer):

resources["/_matrix/saml2"] = SAML2Resource(self)

if self.get_config().threepid_behaviour_email == ThreepidBehaviour.LOCAL:
from synapse.rest.synapse.client.password_reset import (
PasswordResetSubmitTokenResource,
)

resources[
"/_synapse/client/password_reset/email/submit_token"
] = PasswordResetSubmitTokenResource(self)

if name == "consent":
from synapse.rest.consent.consent_resource import ConsentResource

@@ -419,24 +411,26 @@ def setup(config_options):

return provision

async def reprovision_acme():
@defer.inlineCallbacks
def reprovision_acme():
"""
Provision a certificate from ACME, if required, and reload the TLS
certificate if it's renewed.
"""
reprovisioned = await do_acme()
reprovisioned = yield defer.ensureDeferred(do_acme())
if reprovisioned:
_base.refresh_certificate(hs)

async def start():
@defer.inlineCallbacks
def start():
try:
# Run the ACME provisioning code, if it's enabled.
if hs.config.acme_enabled:
acme = hs.get_acme_handler()
# Start up the webservices which we will respond to ACME
# challenges with, and then provision.
await acme.start_listening()
await do_acme()
yield defer.ensureDeferred(acme.start_listening())
yield defer.ensureDeferred(do_acme())

# Check if it needs to be reprovisioned every day.
hs.get_clock().looping_call(reprovision_acme, 24 * 60 * 60 * 1000)
@@ -445,8 +439,8 @@ def setup(config_options):
if hs.config.oidc_enabled:
oidc = hs.get_oidc_handler()
# Loading the provider metadata also ensures the provider config is valid.
await oidc.load_metadata()
await oidc.load_jwks()
yield defer.ensureDeferred(oidc.load_metadata())
yield defer.ensureDeferred(oidc.load_jwks())

_base.start(hs, config.listeners)

@@ -462,7 +456,7 @@ def setup(config_options):
reactor.stop()
sys.exit(1)

reactor.callWhenRunning(lambda: defer.ensureDeferred(start()))
reactor.callWhenRunning(start)

return hs


@@ -14,25 +14,20 @@
# limitations under the License.
import logging
import re
from typing import TYPE_CHECKING

from synapse.api.constants import EventTypes
from synapse.appservice.api import ApplicationServiceApi
from synapse.types import GroupID, get_domain_from_id
from synapse.util.caches.descriptors import cached

if TYPE_CHECKING:
from synapse.storage.databases.main import DataStore

logger = logging.getLogger(__name__)


class ApplicationServiceState:
class ApplicationServiceState(object):
DOWN = "down"
UP = "up"


class AppServiceTransaction:
class AppServiceTransaction(object):
"""Represents an application service transaction."""

def __init__(self, service, id, events):
@@ -40,19 +35,19 @@ class AppServiceTransaction:
self.id = id
self.events = events

async def send(self, as_api: ApplicationServiceApi) -> bool:
def send(self, as_api):
"""Sends this transaction using the provided AS API interface.

Args:
as_api: The API to use to send.
as_api(ApplicationServiceApi): The API to use to send.
Returns:
True if the transaction was sent.
An Awaitable which resolves to True if the transaction was sent.
"""
return await as_api.push_bulk(
return as_api.push_bulk(
service=self.service, events=self.events, txn_id=self.id
)

async def complete(self, store: "DataStore") -> None:
def complete(self, store):
"""Completes this transaction as successful.

Marks this transaction ID on the application service and removes the
@@ -60,11 +55,13 @@ class AppServiceTransaction:

Args:
store: The database store to operate on.
Returns:
A Deferred which resolves to True if the transaction was completed.
"""
await store.complete_appservice_txn(service=self.service, txn_id=self.id)
return store.complete_appservice_txn(service=self.service, txn_id=self.id)


class ApplicationService:
class ApplicationService(object):
"""Defines an application service. This definition is mostly what is
provided to the /register AS API.


@@ -14,20 +14,18 @@
# limitations under the License.
import logging
import urllib
from typing import TYPE_CHECKING, Optional

from prometheus_client import Counter

from twisted.internet import defer

from synapse.api.constants import EventTypes, ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
from synapse.events.utils import serialize_event
from synapse.http.client import SimpleHttpClient
from synapse.types import JsonDict, ThirdPartyInstanceID
from synapse.types import ThirdPartyInstanceID
from synapse.util.caches.response_cache import ResponseCache

if TYPE_CHECKING:
from synapse.appservice import ApplicationService

logger = logging.getLogger(__name__)

sent_transactions_counter = Counter(
@@ -88,7 +86,7 @@ class ApplicationServiceApi(SimpleHttpClient):
"""

def __init__(self, hs):
super().__init__(hs)
super(ApplicationServiceApi, self).__init__(hs)
self.clock = hs.get_clock()

self.protocol_meta_cache = ResponseCache(
@@ -165,20 +163,19 @@ class ApplicationServiceApi(SimpleHttpClient):
logger.warning("query_3pe to %s threw exception %s", uri, ex)
return []

async def get_3pe_protocol(
self, service: "ApplicationService", protocol: str
) -> Optional[JsonDict]:
def get_3pe_protocol(self, service, protocol):
if service.url is None:
return {}

async def _get() -> Optional[JsonDict]:
@defer.inlineCallbacks
def _get():
uri = "%s%s/thirdparty/protocol/%s" % (
service.url,
APP_SERVICE_PREFIX,
urllib.parse.quote(protocol),
)
try:
info = await self.get_json(uri)
info = yield defer.ensureDeferred(self.get_json(uri, {}))

if not _is_valid_3pe_metadata(info):
logger.warning(
@@ -199,7 +196,7 @@ class ApplicationServiceApi(SimpleHttpClient):
return None

key = (service.id, protocol)
return await self.protocol_meta_cache.wrap(key, _get)
return self.protocol_meta_cache.wrap(key, _get)

async def push_bulk(self, service, events, txn_id=None):
if service.url is None:

@@ -57,7 +57,7 @@ from synapse.metrics.background_process_metrics import run_as_background_process
logger = logging.getLogger(__name__)


class ApplicationServiceScheduler:
class ApplicationServiceScheduler(object):
""" Public facing API for this module. Does the required DI to tie the
components together. This also serves as the "event_pool", which in this
case is a simple array.
@@ -86,7 +86,7 @@ class ApplicationServiceScheduler:
self.queuer.enqueue(service, event)


class _ServiceQueuer:
class _ServiceQueuer(object):
"""Queue of events waiting to be sent to appservices.

Groups events into transactions per-appservice, and sends them on to the
@@ -133,7 +133,7 @@ class _ServiceQueuer:
self.requests_in_flight.discard(service.id)


class _TransactionController:
class _TransactionController(object):
"""Transaction manager.

Builds AppServiceTransactions and runs their lifecycle. Also starts a Recoverer
@@ -209,7 +209,7 @@ class _TransactionController:
return state == ApplicationServiceState.UP or state is None


class _Recoverer:
class _Recoverer(object):
"""Manages retries and backoff for a DOWN appservice.

We have one of these for each appservice which is currently considered DOWN.

@@ -88,7 +88,7 @@ def path_exists(file_path):
return False


class Config:
class Config(object):
"""
A configuration section, containing configuration keys and values.

@@ -194,10 +194,7 @@ class Config:
return file_stream.read()

def read_templates(
self,
filenames: List[str],
custom_template_directory: Optional[str] = None,
autoescape: bool = False,
self, filenames: List[str], custom_template_directory: Optional[str] = None,
) -> List[jinja2.Template]:
"""Load a list of template files from disk using the given variables.

@@ -213,9 +210,6 @@ class Config:
custom_template_directory: A directory to try to look for the templates
before using the default Synapse template directory instead.

autoescape: Whether to autoescape variables before inserting them into the
template.

Raises:
ConfigError: if the file's path is incorrect or otherwise cannot be read.

@@ -239,14 +233,15 @@ class Config:
search_directories.insert(0, custom_template_directory)

loader = jinja2.FileSystemLoader(search_directories)
env = jinja2.Environment(loader=loader, autoescape=autoescape)
env = jinja2.Environment(loader=loader, autoescape=True)

# Update the environment with our custom filters
env.filters.update({"format_ts": _format_ts_filter})
if self.public_baseurl:
env.filters.update(
{"mxc_to_http": _create_mxc_to_http_filter(self.public_baseurl)}
)
env.filters.update(
{
"format_ts": _format_ts_filter,
"mxc_to_http": _create_mxc_to_http_filter(self.public_baseurl),
}
)

for filename in filenames:
# Load the template
@@ -288,7 +283,7 @@ def _create_mxc_to_http_filter(public_baseurl: str) -> Callable:
return mxc_to_http_filter


class RootConfig:
class RootConfig(object):
"""
Holder of an application's configuration.

@@ -837,26 +832,11 @@ class ShardedWorkerHandlingConfig:
def should_handle(self, instance_name: str, key: str) -> bool:
"""Whether this instance is responsible for handling the given key.
"""
# If multiple instances are not defined we always return true

# If multiple instances are not defined we always return true.
if not self.instances or len(self.instances) == 1:
return True

return self.get_instance(key) == instance_name

def get_instance(self, key: str) -> str:
"""Get the instance responsible for handling the given key.

Note: For things like federation sending the config for which instance
is sending is known only to the sender instance if there is only one.
Therefore `should_handle` should be used where possible.
"""

if not self.instances:
return "master"

if len(self.instances) == 1:
return self.instances[0]

# We shard by taking the hash, modulo it by the number of instances and
# then checking whether this instance matches the instance at that
# index.
@@ -866,7 +846,7 @@ class ShardedWorkerHandlingConfig:
dest_hash = sha256(key.encode("utf8")).digest()
dest_int = int.from_bytes(dest_hash, byteorder="little")
remainder = dest_int % (len(self.instances))
return self.instances[remainder]
return self.instances[remainder] == instance_name


__all__ = ["Config", "RootConfig", "ShardedWorkerHandlingConfig"]

@@ -142,4 +142,3 @@ class ShardedWorkerHandlingConfig:
instances: List[str]
def __init__(self, instances: List[str]) -> None: ...
def should_handle(self, instance_name: str, key: str) -> bool: ...
def get_instance(self, key: str) -> str: ...

@@ -12,7 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Iterable
from typing import Any, List

import jsonschema

@@ -20,9 +20,7 @@ from synapse.config._base import ConfigError
from synapse.types import JsonDict


def validate_config(
json_schema: JsonDict, config: Any, config_path: Iterable[str]
) -> None:
def validate_config(json_schema: JsonDict, config: Any, config_path: List[str]) -> None:
"""Validates a config setting against a JsonSchema definition

This can be used to validate a section of the config file against a schema

@@ -33,7 +33,7 @@ _DEFAULT_FACTOR_SIZE = 0.5
_DEFAULT_EVENT_CACHE_SIZE = "10K"


class CacheProperties:
class CacheProperties(object):
def __init__(self):
# The default factor size for all caches
self.default_factor_size = float(

@@ -28,9 +28,6 @@ class CaptchaConfig(Config):
"recaptcha_siteverify_api",
"https://www.recaptcha.net/recaptcha/api/siteverify",
)
self.recaptcha_template = self.read_templates(
["recaptcha.html"], autoescape=True
)[0]

def generate_config_section(self, **kwargs):
return """\

@@ -77,7 +77,7 @@ class ConsentConfig(Config):
section = "consent"

def __init__(self, *args):
super().__init__(*args)
super(ConsentConfig, self).__init__(*args)

self.user_consent_version = None
self.user_consent_template_dir = None
@@ -89,8 +89,6 @@ class ConsentConfig(Config):

def read_config(self, config, **kwargs):
consent_config = config.get("user_consent")
self.terms_template = self.read_templates(["terms.html"], autoescape=True)[0]

if consent_config is None:
return
self.user_consent_version = str(consent_config["version"])

@@ -14,6 +14,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import print_function

# This file can't be called email.py because if it is, we cannot:
import email.utils
@@ -227,7 +228,6 @@ class EmailConfig(Config):
self.email_registration_template_text,
self.email_add_threepid_template_html,
self.email_add_threepid_template_text,
self.email_password_reset_template_confirmation_html,
self.email_password_reset_template_failure_html,
self.email_registration_template_failure_html,
self.email_add_threepid_template_failure_html,
@@ -242,7 +242,6 @@ class EmailConfig(Config):
registration_template_text,
add_threepid_template_html,
add_threepid_template_text,
"password_reset_confirmation.html",
password_reset_template_failure_html,
registration_template_failure_html,
add_threepid_template_failure_html,
@@ -405,13 +404,9 @@ class EmailConfig(Config):
# * The contents of password reset emails sent by the homeserver:
# 'password_reset.html' and 'password_reset.txt'
#
# * An HTML page that a user will see when they follow the link in the password
# reset email. The user will be asked to confirm the action before their
# password is reset: 'password_reset_confirmation.html'
#
# * HTML pages for success and failure that a user will see when they confirm
# the password reset flow using the page above: 'password_reset_success.html'
# and 'password_reset_failure.html'
# * HTML pages for success and failure that a user will see when they follow
# the link in the password reset email: 'password_reset_success.html' and
# 'password_reset_failure.html'
#
# * The contents of address verification emails sent during registration:
# 'registration.html' and 'registration.txt'

@@ -17,8 +17,7 @@ from typing import Optional

from netaddr import IPSet

from synapse.config._base import Config, ConfigError
from synapse.config._util import validate_config
from ._base import Config, ConfigError


class FederationConfig(Config):
@@ -53,18 +52,8 @@ class FederationConfig(Config):
"Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
)

federation_metrics_domains = config.get("federation_metrics_domains") or []
validate_config(
_METRICS_FOR_DOMAINS_SCHEMA,
federation_metrics_domains,
("federation_metrics_domains",),
)
self.federation_metrics_domains = set(federation_metrics_domains)

def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
## Federation ##

# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
@@ -96,18 +85,4 @@ class FederationConfig(Config):
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'

# Report prometheus metrics on the age of PDUs being sent to and received from
# the following domains. This can be used to give an idea of "delay" on inbound
# and outbound federation, though be aware that any delay can be due to problems
# at either end or with the intermediate network.
#
# By default, no domains are monitored in this way.
#
#federation_metrics_domains:
# - matrix.org
# - example.com
"""


_METRICS_FOR_DOMAINS_SCHEMA = {"type": "array", "items": {"type": "string"}}

@@ -92,4 +92,5 @@ class HomeServerConfig(RootConfig):
TracerConfig,
WorkerConfig,
RedisConfig,
FederationConfig,
]

@@ -82,7 +82,7 @@ logger = logging.getLogger(__name__)


@attr.s
class TrustedKeyServer:
class TrustedKeyServer(object):
# string: name of the server.
server_name = attr.ib()


@@ -17,7 +17,6 @@ import logging
import logging.config
import os
import sys
import threading
from string import Template

import yaml
@@ -26,7 +25,6 @@ from twisted.logger import (
ILogObserver,
LogBeginner,
STDLibLogObserver,
eventAsText,
globalLogBeginner,
)

@@ -218,9 +216,8 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
# system.
observer = STDLibLogObserver()

threadlocal = threading.local()

def _log(event):

if "log_text" in event:
if event["log_text"].startswith("DNSDatagramProtocol starting on "):
return
@@ -231,25 +228,7 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
if event["log_text"].startswith("Timing out client"):
return

# this is a workaround to make sure we don't get stack overflows when the
# logging system raises an error which is written to stderr which is redirected
# to the logging system, etc.
if getattr(threadlocal, "active", False):
# write the text of the event, if any, to the *real* stderr (which may
# be redirected to /dev/null, but there's not much we can do)
try:
event_text = eventAsText(event)
print("logging during logging: %s" % event_text, file=sys.__stderr__)
except Exception:
# gah.
pass
return

try:
threadlocal.active = True
return observer(event)
finally:
threadlocal.active = False
return observer(event)

logBeginner.beginLoggingTo([_log], redirectStandardIO=not config.no_redirect_stdio)
if not config.no_redirect_stdio:

@@ -22,7 +22,7 @@ from ._base import Config, ConfigError


@attr.s
class MetricsFlags:
class MetricsFlags(object):
known_servers = attr.ib(default=False, validator=attr.validators.instance_of(bool))

@classmethod

@@ -56,7 +56,6 @@ class OIDCConfig(Config):
self.oidc_userinfo_endpoint = oidc_config.get("userinfo_endpoint")
self.oidc_jwks_uri = oidc_config.get("jwks_uri")
self.oidc_skip_verification = oidc_config.get("skip_verification", False)
self.oidc_allow_existing_users = oidc_config.get("allow_existing_users", False)

ump_config = oidc_config.get("user_mapping_provider", {})
ump_config.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER)
@@ -159,11 +158,6 @@ class OIDCConfig(Config):
#
#skip_verification: true

# Uncomment to allow a user logging in via OIDC to match a pre-existing account instead
# of failing. This could be used if switching from password logins to OIDC. Defaults to false.
#
#allow_existing_users: true

# An external module can be provided here as a custom solution to mapping
# attributes returned from a OIDC provider onto a matrix user.
#
@@ -204,14 +198,6 @@ class OIDCConfig(Config):
# If unset, no displayname will be set.
#
#display_name_template: "{{{{ user.given_name }}}} {{{{ user.last_name }}}}"

# Jinja2 templates for extra attributes to send back to the client during
# login.
#
# Note that these are non-standard and clients will ignore them without modifications.
#
#extra_attributes:
#birthdate: "{{{{ user.birthdate }}}}"
""".format(
mapping_provider=DEFAULT_USER_MAPPING_PROVIDER
)

@@ -17,7 +17,7 @@ from typing import Dict
from ._base import Config


class RateLimitConfig:
class RateLimitConfig(object):
def __init__(
self,
config: Dict[str, float],
@@ -27,7 +27,7 @@ class RateLimitConfig:
self.burst_count = config.get("burst_count", defaults["burst_count"])


class FederationRateLimitConfig:
class FederationRateLimitConfig(object):
_items_and_default = {
"window_size": 1000,
"sleep_limit": 10,

@@ -30,7 +30,7 @@ class AccountValidityConfig(Config):
def __init__(self, config, synapse_config):
if config is None:
return
super().__init__()
super(AccountValidityConfig, self).__init__()
self.enabled = config.get("enabled", False)
self.renew_by_email_enabled = "renew_at" in config

@@ -187,11 +187,6 @@ class RegistrationConfig(Config):
session_lifetime = self.parse_duration(session_lifetime)
self.session_lifetime = session_lifetime

# The success template used during fallback auth.
self.fallback_success_template = self.read_templates(
["auth_success.html"], autoescape=True
)[0]

def generate_config_section(self, generate_secrets=False, **kwargs):
if generate_secrets:
registration_shared_secret = 'registration_shared_secret: "%s"' % (

@@ -22,7 +22,7 @@ from ._base import Config, ConfigError
logger = logging.Logger(__name__)


class RoomDefaultEncryptionTypes:
class RoomDefaultEncryptionTypes(object):
"""Possible values for the encryption_enabled_by_default_for_room_type config option"""

ALL = "all"

@@ -149,7 +149,7 @@ class RoomDirectoryConfig(Config):
return False


class _RoomDirectoryRule:
class _RoomDirectoryRule(object):
"""Helper class to test whether a room directory action is allowed, like
creating an alias or publishing a room.
"""

@@ -169,6 +169,10 @@ class SAML2Config(Config):
saml2_config.get("saml_session_lifetime", "15m")
)

self.saml2_error_html_template = self.read_templates(
["saml_error.html"], saml2_config.get("template_dir")
)

def _default_saml_config_dict(
self, required_attributes: set, optional_attributes: set
):
@@ -221,14 +225,11 @@ class SAML2Config(Config):
# At least one of `sp_config` or `config_path` must be set in this section to
# enable SAML login.
#
# You will probably also want to set the following options to `false` to
# (You will probably also want to set the following options to `false` to
# disable the regular login/registration flows:
# * enable_registration
# * password_config.enabled
#
# You will also want to investigate the settings under the "sso" configuration
# section below.
#
# Once SAML support is enabled, a metadata file will be exposed at
# https://<server>:<port>/_matrix/saml2/metadata.xml, which you may be able to
# use to configure your SAML IdP with. Alternatively, you can manually configure
@@ -350,6 +351,31 @@ class SAML2Config(Config):
# value: "staff"
# - attribute: department
# value: "sales"

# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page to display to users if something goes wrong during the
# authentication process: 'saml_error.html'.
#
# When rendering, this template is given the following variables:
# * code: an HTML error code corresponding to the error that is being
# returned (typically 400 or 500)
#
# * msg: a textual message describing the error.
#
# The variables will automatically be HTML-escaped.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
""" % {
"config_dir_path": config_dir_path
}

@@ -19,7 +19,7 @@ import logging
import os.path
import re
from textwrap import indent
from typing import Any, Dict, Iterable, List, Optional, Set
from typing import Any, Dict, Iterable, List, Optional

import attr
import yaml
@@ -424,7 +424,7 @@ class ServerConfig(Config):
self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))

@attr.s
class LimitRemoteRoomsConfig:
class LimitRemoteRoomsConfig(object):
enabled = attr.ib(
validator=attr.validators.instance_of(bool), default=False
)
@@ -542,19 +542,6 @@ class ServerConfig(Config):
users_new_default_push_rules
) # type: set

# Whitelist of domain names that given next_link parameters must have
next_link_domain_whitelist = config.get(
"next_link_domain_whitelist"
) # type: Optional[List[str]]

self.next_link_domain_whitelist = None # type: Optional[Set[str]]
if next_link_domain_whitelist is not None:
if not isinstance(next_link_domain_whitelist, list):
raise ConfigError("'next_link_domain_whitelist' must be a list")

# Turn the list into a set to improve lookup speed.
self.next_link_domain_whitelist = set(next_link_domain_whitelist)

def has_tls_listener(self) -> bool:
return any(listener.tls for listener in self.listeners)

@@ -641,23 +628,10 @@ class ServerConfig(Config):
"""\
## Server ##

# The public-facing domain of the server
#
# The server_name name will appear at the end of usernames and room addresses
# created on this server. For example if the server_name was example.com,
# usernames on this server would be in the format @user:example.com
#
# In most cases you should avoid using a matrix specific subdomain such as
# matrix.example.com or synapse.example.com as the server_name for the same
# reasons you wouldn't use user@email.example.com as your email address.
# See https://github.com/matrix-org/synapse/blob/master/docs/delegate.md
# for information on how to host Synapse on a subdomain while preserving
# a clean server_name.
#
# The server_name cannot be changed later so it is important to
# configure this correctly before you start Synapse. It should be all
# lowercase and may contain an explicit port.
# Examples: matrix.org, localhost:8080
# The domain name of the server, with optional explicit port.
# This is used by remote servers to connect to this server,
# e.g. matrix.org, localhost:8080, etc.
# This is also the last part of your UserID.
#
server_name: "%(server_name)s"

@@ -987,10 +961,11 @@ class ServerConfig(Config):
# min_lifetime: 1d
# max_lifetime: 1y

# Retention policy limits. If set, and the state of a room contains a
# 'm.room.retention' event in its state which contains a 'min_lifetime' or a
# 'max_lifetime' that's out of these bounds, Synapse will cap the room's policy
# to these limits when running purge jobs.
# Retention policy limits. If set, a user won't be able to send a
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
# that's not within this range. This is especially useful in closed federations,
# in which server admins can make sure every federating server applies the same
# rules.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
@@ -1016,19 +991,12 @@ class ServerConfig(Config):
# (e.g. every 12h), but not want that purge to be performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
# If any purge job is configured, it is strongly recommended to have at least
# a single job with neither 'shortest_max_lifetime' nor 'longest_max_lifetime'
# set, or one job without 'shortest_max_lifetime' and one job without
# 'longest_max_lifetime' set. Otherwise some rooms might be ignored, even if
# 'allowed_lifetime_min' and 'allowed_lifetime_max' are set, because capping a
# room's policy to these values is done after the policies are retrieved from
# Synapse's database (which is done using the range specified in a purge job's
# configuration).
#
#purge_jobs:
# - longest_max_lifetime: 3d
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 1d

# Inhibits the /requestToken endpoints from returning an error that might leak
@@ -1040,24 +1008,6 @@ class ServerConfig(Config):
# act as if no error happened and return a fake session ID ('sid') to clients.
#
#request_token_inhibit_3pid_errors: true

# A list of domains that the domain portion of 'next_link' parameters
# must match.
#
# This parameter is optionally provided by clients while requesting
# validation of an email or phone number, and maps to a link that
# users will be automatically redirected to after validation
# succeeds. Clients can make use this parameter to aid the validation
# process.
#
# The whitelist is applied whether the homeserver or an
# identity server is handling validation.
#
# The default value is no whitelist functionality; all domains are
# allowed. Setting this value to an empty list will instead disallow
# all domains.
#
#next_link_domain_whitelist: ["matrix.org"]
"""
% locals()
)

@@ -62,7 +62,7 @@ class ServerNoticesConfig(Config):
section = "servernotices"

def __init__(self, *args):
super().__init__(*args)
super(ServerNoticesConfig, self).__init__(*args)
self.server_notices_mxid = None
self.server_notices_mxid_display_name = None
self.server_notices_mxid_avatar_url = None

@@ -13,6 +13,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from __future__ import division

import sys

from ._base import Config

@@ -471,6 +471,7 @@ class TlsConfig(Config):
# or by checking matrix.org/federationtester/api/report?server_name=$host
#
#tls_fingerprints: [{"sha256": "<base64_encoded_sha256_fingerprint>"}]

"""
# Lowercase the string representation of boolean values
% {

@@ -13,24 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import List, Union

import attr

from ._base import Config, ConfigError, ShardedWorkerHandlingConfig
from .server import ListenerConfig, parse_listener_def


def _instance_to_list_converter(obj: Union[str, List[str]]) -> List[str]:
"""Helper for allowing parsing a string or list of strings to a config
option expecting a list of strings.
"""

if isinstance(obj, str):
return [obj]
return obj


@attr.s
class InstanceLocationConfig:
"""The host and port to talk to an instance via HTTP replication.
@@ -45,13 +33,11 @@ class WriterLocations:
"""Specifies the instances that write various streams.

Attributes:
events: The instances that write to the event and backfill streams.
typing: The instance that writes to the typing stream.
events: The instance that writes to the event and backfill streams.
events: The instance that writes to the typing stream.
"""

events = attr.ib(
default=["master"], type=List[str], converter=_instance_to_list_converter
)
events = attr.ib(default="master", type=str)
typing = attr.ib(default="master", type=str)


@@ -119,18 +105,15 @@ class WorkerConfig(Config):
writers = config.get("stream_writers") or {}
self.writers = WriterLocations(**writers)

# Check that the configured writers for events and typing also appears in
# Check that the configured writer for events and typing also appears in
# `instance_map`.
for stream in ("events", "typing"):
instances = _instance_to_list_converter(getattr(self.writers, stream))
for instance in instances:
if instance != "master" and instance not in self.instance_map:
raise ConfigError(
"Instance %r is configured to write %s but does not appear in `instance_map` config."
% (instance, stream)
)

self.events_shard_config = ShardedWorkerHandlingConfig(self.writers.events)
instance = getattr(self.writers, stream)
if instance != "master" and instance not in self.instance_map:
raise ConfigError(
"Instance %r is configured to write %s but does not appear in `instance_map` config."
% (instance, stream)
)

def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\

@@ -45,11 +45,7 @@ _TLS_VERSION_MAP = {

class ServerContextFactory(ContextFactory):
"""Factory for PyOpenSSL SSL contexts that are used to handle incoming
connections.

TODO: replace this with an implementation of IOpenSSLServerConnectionCreator,
per https://github.com/matrix-org/synapse/issues/1691
"""
connections."""

def __init__(self, config):
# TODO: once pyOpenSSL exposes TLS_METHOD and SSL_CTX_set_min_proto_version,
@@ -87,7 +83,7 @@ class ServerContextFactory(ContextFactory):


@implementer(IPolicyForHTTPS)
class FederationPolicyForHTTPS:
class FederationPolicyForHTTPS(object):
"""Factory for Twisted SSLClientConnectionCreators that are used to make connections
to remote servers for federation.

@@ -156,7 +152,7 @@ class FederationPolicyForHTTPS:


@implementer(IPolicyForHTTPS)
class RegularPolicyForHTTPS:
class RegularPolicyForHTTPS(object):
"""Factory for Twisted SSLClientConnectionCreators that are used to make connections
to remote servers, for other than federation.

@@ -193,7 +189,7 @@ def _context_info_cb(ssl_connection, where, ret):


@implementer(IOpenSSLClientConnectionCreator)
class SSLClientConnectionCreator:
class SSLClientConnectionCreator(object):
"""Creates openssl connection objects for client connections.

Replaces twisted.internet.ssl.ClientTLSOptions
@@ -218,7 +214,7 @@ class SSLClientConnectionCreator:
return connection


class ConnectionVerifier:
class ConnectionVerifier(object):
"""Set the SNI, and do cert verification

This is a thing which is attached to the TLSMemoryBIOProtocol, and is called by

@@ -42,6 +42,7 @@ from synapse.api.errors import (
)
from synapse.logging.context import (
PreserveLoggingContext,
current_context,
make_deferred_yieldable,
preserve_fn,
run_in_background,
@@ -56,7 +57,7 @@ logger = logging.getLogger(__name__)


@attr.s(slots=True, cmp=False)
class VerifyJsonRequest:
class VerifyJsonRequest(object):
"""
A request to verify a JSON object.

@@ -95,7 +96,7 @@ class KeyLookupError(ValueError):
pass


class Keyring:
class Keyring(object):
def __init__(self, hs, key_fetchers=None):
self.clock = hs.get_clock()

@@ -232,6 +233,8 @@ class Keyring:
"""

try:
ctx = current_context()

# map from server name to a set of outstanding request ids
server_to_request_ids = {}

@@ -262,8 +265,12 @@ class Keyring:

# if there are no more requests for this server, we can drop the lock.
if not server_requests:
logger.debug("Releasing key lookup lock on %s", server_name)
drop_server_lock(server_name)
with PreserveLoggingContext(ctx):
logger.debug("Releasing key lookup lock on %s", server_name)

# ... but not immediately, as that can cause stack explosions if
# we get a long queue of lookups.
self.clock.call_later(0, drop_server_lock, server_name)

return res

@@ -328,32 +335,20 @@ class Keyring:
)

# look for any requests which weren't satisfied
while remaining_requests:
verify_request = remaining_requests.pop()
rq_str = (
"VerifyJsonRequest(server=%s, key_ids=%s, min_valid=%i)"
% (
verify_request.server_name,
verify_request.key_ids,
verify_request.minimum_valid_until_ts,
with PreserveLoggingContext():
for verify_request in remaining_requests:
verify_request.key_ready.errback(
SynapseError(
401,
"No key for %s with ids in %s (min_validity %i)"
% (
verify_request.server_name,
verify_request.key_ids,
verify_request.minimum_valid_until_ts,
),
Codes.UNAUTHORIZED,
)
)
)

# If we run the errback immediately, it may cancel our
# loggingcontext while we are still in it, so instead we
# schedule it for the next time round the reactor.
#
# (this also ensures that we don't get a stack overflow if we
# has a massive queue of lookups waiting for this server).
self.clock.call_later(
0,
verify_request.key_ready.errback,
SynapseError(
401,
"Failed to find any key to satisfy %s" % (rq_str,),
Codes.UNAUTHORIZED,
),
)
except Exception as err:
# we don't really expect to get here, because any errors should already
# have been caught and logged. But if we do, let's log the error and make
@@ -415,30 +410,17 @@ class Keyring:
# key was not valid at this point
continue

# we have a valid key for this request. If we run the callback
# immediately, it may cancel our loggingcontext while we are still in
# it, so instead we schedule it for the next time round the reactor.
#
# (this also ensures that we don't get a stack overflow if we had
# a massive queue of lookups waiting for this server).
logger.debug(
"Found key %s:%s for %s",
server_name,
key_id,
verify_request.request_name,
)
self.clock.call_later(
0,
verify_request.key_ready.callback,
(server_name, key_id, fetch_key_result.verify_key),
)
with PreserveLoggingContext():
verify_request.key_ready.callback(
(server_name, key_id, fetch_key_result.verify_key)
)
completed.append(verify_request)
break

remaining_requests.difference_update(completed)


class KeyFetcher:
class KeyFetcher(object):
async def get_keys(self, keys_to_fetch):
"""
Args:
@@ -474,7 +456,7 @@ class StoreKeyFetcher(KeyFetcher):
return keys


class BaseV2KeyFetcher:
class BaseV2KeyFetcher(object):
def __init__(self, hs):
self.store = hs.get_datastore()
self.config = hs.get_config()
@@ -576,7 +558,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
"""KeyFetcher impl which fetches keys from the "perspectives" servers"""

def __init__(self, hs):
super().__init__(hs)
super(PerspectivesKeyFetcher, self).__init__(hs)
self.clock = hs.get_clock()
self.client = hs.get_http_client()
self.key_servers = self.config.key_servers
@@ -746,7 +728,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
"""KeyFetcher impl which fetches keys from the origin servers"""

def __init__(self, hs):
super().__init__(hs)
super(ServerKeyFetcher, self).__init__(hs)
self.clock = hs.get_clock()
self.client = hs.get_http_client()

@@ -775,8 +757,9 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
except Exception:
logger.exception("Error getting keys %s from %s", key_ids, server_name)

await yieldable_gather_results(get_key, keys_to_fetch.items())
return results
return await yieldable_gather_results(
get_key, keys_to_fetch.items()
).addCallback(lambda _: results)

async def get_server_verify_key_v2_direct(self, server_name, key_ids):
"""
@@ -786,7 +769,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
key_ids (iterable[str]):

Returns:
dict[str, FetchKeyResult]: map from key ID to lookup result
Deferred[dict[str, FetchKeyResult]]: map from key ID to lookup result

Raises:
KeyLookupError if there was a problem making the lookup