Compare commits

144 Commits

Author SHA1 Message Date
David Robertson
0953cad3e4 docstringggggg 2022-05-18 10:19:30 +01:00
David Robertson
ad6a6675bf go away flake8 2022-05-17 14:14:40 +01:00
David Robertson
21d1347f2c Changelog 2022-05-17 14:00:56 +01:00
David Robertson
a9fe3350f8 Drive-by typo fix I spotted while debugging 2022-05-17 13:55:58 +01:00
David Robertson
a1adede444 discard strings with nulls before user dir update 2022-05-17 13:55:58 +01:00
David Robertson
79f1cef5e4 Move non_null_str_or_none to s.utils.stringutils 2022-05-17 13:55:58 +01:00
David Robertson
8c977edec8 Reproduce #12755 2022-05-17 13:55:44 +01:00
David Robertson
1402159bb8 Merge branch 'master' into develop 2022-05-17 11:00:54 +01:00
Erik Johnston
32ef24fbd7 Add index to cache invalidations (#12747)
For workers that rarely write to the cache, the `get_all_updated_caches`
query can become expensive if the worker falls behind when reading the
cache.
2022-05-17 09:34:59 +00:00
Erik Johnston
fcf951d5dc Track in memory events using weakrefs (#10533) 2022-05-17 10:34:27 +01:00
David Robertson
44d7bb13c3 version tweak in changelog 2022-05-17 10:30:31 +01:00
David Robertson
5c3d525cad 1.59.0 2022-05-17 10:27:51 +01:00
David Robertson
1fe202a1a3 Tidy up and type-hint the database engine modules (#12734)
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2022-05-17 00:34:38 +01:00
Andrew Morgan
6d8d1218dd Fix typo in name of 'run_background_tasks_on' option in config manual (#12749) 2022-05-16 17:31:12 +00:00
Andrew Morgan
3eafee629d Revert "changelog"
This reverts commit e24c11afd6.

whoops...
2022-05-16 17:52:22 +01:00
Andrew Morgan
e24c11afd6 changelog 2022-05-16 17:51:43 +01:00
Andrew Morgan
83be72d76c Add StreamKeyType class and replace string literals with constants (#12567) 2022-05-16 15:35:31 +00:00
Erik Johnston
4ea546067d Fix query performance for /sync (#12745) 2022-05-16 16:30:35 +01:00
Šimon Brandner
3ce15cc7be Avoid unnecessary copies when filtering private read receipts. (#12711)
A minor optimization to avoid unnecessary copying/building
identical dictionaries when filtering private read receipts.

Also clarifies comments and cleans-up some tests.
2022-05-16 15:06:23 +00:00
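
A minimal sketch of the copy-avoidance pattern this commit describes, using a simplified data shape (a per-user `"private"` flag rather than Synapse's real receipt structure, so the names here are illustrative only): build a new dict only when something actually has to be removed, otherwise return the original object untouched.

```python
from typing import Dict, Mapping


def filter_private(
    receipts: Mapping[str, Dict[str, bool]], requester: str
) -> Mapping[str, Dict[str, bool]]:
    needs_filtering = any(
        user != requester and data.get("private", False)
        for user, data in receipts.items()
    )
    if not needs_filtering:
        # Fast path: nothing to hide, so no copy is made.
        return receipts
    # Slow path: rebuild the mapping without other users' private receipts.
    return {
        user: data
        for user, data in receipts.items()
        if user == requester or not data.get("private", False)
    }
```
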
David Robertson
b4eb163434 Merge tag 'v1.59.0rc2' into develop
Synapse 1.59.0rc2 (2022-05-16)
==============================

Synapse 1.59 makes several changes that server administrators should be aware of:

- Device name lookup over federation is now disabled by default. ([\#12616](https://github.com/matrix-org/synapse/issues/12616))
- The `synapse.app.appservice` and `synapse.app.user_dir` worker application types are now deprecated. ([\#12452](https://github.com/matrix-org/synapse/issues/12452), [\#12654](https://github.com/matrix-org/synapse/issues/12654))

See [the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#upgrading-to-v1590) for more details.

Additionally, this release removes the non-standard `m.login.jwt` login type from Synapse. It can be replaced with `org.matrix.login.jwt` for identical behaviour. This is only used if `jwt_config.enabled` is set to `true` in the configuration. ([\#12597](https://github.com/matrix-org/synapse/issues/12597))

Bugfixes
--------

- Fix a bug introduced in Synapse 1.58.0 where `/sync` would fail if the most recent event in a room was rejected. ([\#12729](https://github.com/matrix-org/synapse/issues/12729))
2022-05-16 14:55:18 +01:00
Dirk Klimpel
8060034612 Fix typo in listener config (#12742) 2022-05-16 13:50:07 +00:00
Sean Quah
a5c26750b5 Fix room upgrades creating an empty room when auth fails (#12696)
Signed-off-by: Sean Quah <seanq@element.io>
2022-05-16 14:06:04 +01:00
Patrick Cloke
86a515ccbf Consolidate logic for parsing relations. (#12693)
Parse the `m.relates_to` event content field (which describes relations)
in a single place, this is used during:

* Event persistence.
* Validation of the Client-Server API.
* Fetching bundled aggregations.
* Processing of push rules.

Each of these separately implemented the logic and made slightly different
assumptions about what was valid. Some had minor or potential bugs.
2022-05-16 12:42:45 +00:00
David Robertson
6f04ae7033 Move 1.59 warning to the top 2022-05-16 12:53:10 +01:00
David Robertson
c3b232cb39 1.59.0rc2 2022-05-16 12:52:29 +01:00
Erik Johnston
8689230a55 Fix bug /sync returning 404 (#12729)
* Fix bug /sync returning 404

Fixes #12571
2022-05-16 12:06:56 +01:00
Shay
cde8af9a49 Add config flags to allow for cache auto-tuning (#12701) 2022-05-13 12:32:39 -07:00
Till
e8ae472d3b Update configs used by Complement to allow more invites (#12731) 2022-05-13 16:45:47 +01:00
Brendan Abolivier
9013104429 Don't create an empty room when checking for MAU limits (#12713) 2022-05-13 15:30:15 +02:00
David Robertson
aec69d2481 Another batch of type annotations (#12726) 2022-05-13 12:35:31 +01:00
Jess Porter
39bed28b28 SpamChecker metrics (#12513)
* add Measure blocks all over SpamChecker

Signed-off-by: jesopo <github@lolnerd.net>

* fix test_spam_checker_may_join_room and test_threepid_invite_spamcheck

* better changelog entry
2022-05-13 12:17:38 +01:00
Niklas
c9fc2c0d22 Update issuer URL in example OIDC Keycloak config (#12727)
* Update openid.md

Newer versions of Keycloak return a 404 when using the `/auth` prefix.

Related: https://github.com/matrix-org/synapse/issues/12714
2022-05-13 10:15:51 +00:00
Andrew Morgan
57f6c496d0 URL preview cache expiry logs: INFO -> DEBUG, text clarifications (#12720) 2022-05-12 18:16:32 +01:00
David Robertson
17e1eb7749 Reduce the number of "untyped defs" (#12716) 2022-05-12 14:33:50 +00:00
Andy Balaam
de1e599b9d add default_power_level_content_override config option. (#12618)
Co-authored-by: Matthew Hodgson <matthew@matrix.org>
2022-05-12 10:41:35 +00:00
Andrew Morgan
409573f6d0 Fix reference to the wrong symbol in the media admin api docs (#12715) 2022-05-12 09:29:37 +01:00
Sean Quah
bf7ce92bf7 Enable cancellation of GET /members and GET /state requests (#12708)
Enable cancellation of `GET /rooms/$room_id/members`,
`GET /rooms/$room_id/state` and
`GET /rooms/$room_id/state/$state_key/*` requests.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-11 17:22:34 +01:00
David Robertson
db10f2c037 No longer permit empty body when sending receipts (#12709) 2022-05-11 15:34:17 +00:00
Sean Quah
6ee61b9052 Complain if a federation endpoint has the @cancellable flag (#12705)
`BaseFederationServlet` wraps its endpoints in a bunch of async code
that has not been vetted for compatibility with cancellation.
Fail CI if a `@cancellable` flag is applied to a federation endpoint.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-11 14:52:26 +01:00
David Robertson
d38d242411 Reload cache factors from disk on SIGHUP (#12673) 2022-05-11 13:43:22 +00:00
Sean Quah
a559c8b0d9 Respect the @cancellable flag for ReplicationEndpoints (#12700)
While `ReplicationEndpoint`s register themselves via `JsonResource`,
they pass a method that calls the handler, instead of the handler itself,
to `register_paths`. As a result, `JsonResource` will not correctly pick
up the `@cancellable` flag and we have to apply it ourselves.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-11 12:25:39 +01:00
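
A hedged sketch of the fix described above (helper name and the `register_paths` call signature are assumed for illustration): because a wrapper, not the handler itself, is registered, the handler's `@cancellable` flag must be copied onto the wrapper or the HTTP layer will never see it.

```python
from typing import Any, Awaitable, Callable


def register_wrapped_handler(
    http_server: Any,
    handler: Callable[..., Awaitable[Any]],
    path_pattern: str,
) -> None:
    async def _wrapped(request: Any, **kwargs: Any) -> Any:
        # ... the authentication/parsing work the endpoint wrapper does ...
        return await handler(request, **kwargs)

    # Copy the flag across so the request machinery can honour it.
    _wrapped.cancellable = getattr(handler, "cancellable", False)  # type: ignore[attr-defined]
    http_server.register_paths("POST", [path_pattern], _wrapped, "replication")
```
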
Sean Quah
9d8e380d2e Respect the @cancellable flag for RestServlets and BaseFederationServlets (#12699)
Both `RestServlet`s and `BaseFederationServlet`s register their handlers
with `HttpServer.register_paths` / `JsonResource.register_paths`. Update
`JsonResource` to respect the `@cancellable` flag on handlers registered
in this way.

Although `ReplicationEndpoint` also registers itself using
`register_paths`, it does not pass the handler method that would have the
`@cancellable` flag directly, and so needs separate handling.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-11 12:25:13 +01:00
Sean Quah
dffecade7d Respect the @cancellable flag for DirectServe{Html,Json}Resources (#12698)
`DirectServeHtmlResource` and `DirectServeJsonResource` both inherit
from `_AsyncResource`. These classes expect to be subclassed with
`_async_render_*` methods.

This commit has no effect on `JsonResource`, despite inheriting from
`_AsyncResource`. `JsonResource` has its own `_async_render` override
which will need to be updated separately.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-11 12:24:48 +01:00
Patrick Cloke
a4c75918b3 Remove unneeded ActionGenerator class. (#12691)
It simply passes through to `BulkPushRuleEvaluator`, which can be
called directly instead.
2022-05-11 07:15:21 -04:00
Eric Eastwood
84facf769e Fix /messages throwing a 500 when querying for non-existent room (#12683)
Fix https://github.com/matrix-org/synapse/issues/12678

Complement test added:  https://github.com/matrix-org/complement/pull/369

**Before:** 500 internal server error

**After:** According to the [spec](https://spec.matrix.org/latest/client-server-api/#get_matrixclientv3roomsroomidmessages), calling `/messages` against a non-existent `room_id` should throw a 403 forbidden (since you're not part of the room). This also matches the behavior before https://github.com/matrix-org/synapse/pull/12370 which regressed Synapse to the 500 behavior.
```json
{
    "errcode": "M_FORBIDDEN",
    "error": "User @test:my.synapse.server not in room !dne:my.synapse.server, and room previews are disabled"
}
```
2022-05-10 23:39:14 -05:00
Erik Johnston
c72d26c1e1 Refactor EventContext (#12689)
Refactor how the `EventContext` class works, with the intention of reducing the amount of state we fetch from the DB during event processing.

The idea here is to get rid of the cached `current_state_ids` and `prev_state_ids` that live in the `EventContext`, and instead defer straight to the database (and its caching). 

One change that may have a noticeable effect is that we no longer prefill the `get_current_state_ids` cache on a state change. However, that query is relatively light, since it's just a case of reading a table from the DB (unlike fetching state at an event, which is more heavyweight). For deployments with workers this cache isn't even used.


Part of #12684
2022-05-10 19:43:13 +00:00
Sean Quah
c997bfb926 Capture the Deferred for request cancellation in _AsyncResource (#12694)
All async request processing goes through `_AsyncResource`, so this is
the only place where a `Deferred` needs to be captured for cancellation.

Unfortunately, the same isn't true for determining whether a request
can be cancelled. Each of `RestServlet`, `BaseFederationServlet`,
`DirectServe{Html,Json}Resource` and `ReplicationEndpoint` have
different wrappers around the method doing the request handling and they
all need to be handled separately.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-10 20:39:05 +01:00
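
A minimal sketch of the mechanism described above (not Synapse's actual code): wrap the async handler in a `Deferred`, and cancel that `Deferred` if the client disconnects before the response is finished and the handler is flagged as cancellable.

```python
from twisted.internet import defer
from twisted.web.server import NOT_DONE_YET


def render(resource, request):
    # Kick off the async handler and keep hold of the resulting Deferred.
    d = defer.ensureDeferred(resource._async_render(request))
    if getattr(resource._async_render, "cancellable", False):
        # notifyFinish() errbacks if the connection is lost before we respond.
        request.notifyFinish().addErrback(lambda _failure: d.cancel())
    return NOT_DONE_YET
```
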
Patrick Cloke
29f06704b8 Fix incorrect type hint in filtering code. (#12695) 2022-05-10 14:10:22 -04:00
Dirk Klimpel
989fa33096 Add some type hints to datastore. (#12477) 2022-05-10 14:07:48 -04:00
Richard van der Hoff
147f098fb4 Stop writing to event_reference_hashes (#12679)
This table is never read, since #11794. We stop writing to it; in future we can
drop it altogether.
2022-05-10 15:35:08 +01:00
Sean Quah
dbb12a0b54 Add helper class for testing request cancellation (#12630)
Also expose the `SynapseRequest` from `FakeChannel` in tests, so that
we can call `Request.connectionLost` to simulate a client disconnecting.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-10 14:06:56 +01:00
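
For illustration, a test can simulate the client hanging up using Twisted's own API (the helper name here is hypothetical; Synapse's test helpers may wrap this differently):

```python
from twisted.internet.error import ConnectionDone
from twisted.python.failure import Failure


def simulate_disconnect(request) -> None:
    """Pretend the client disconnected before the response was sent."""
    request.connectionLost(Failure(ConnectionDone("Client disconnected")))
```
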
Sean Quah
5cfb004595 Add ability to cancel disconnected requests to SynapseRequest (#12588)
Signed-off-by: Sean Quah <seanq@element.io>
2022-05-10 14:06:08 +01:00
Sean Quah
5c00151c28 Add @cancellable decorator, for use on request handlers (#12586)
Signed-off-by: Sean Quah <seanq@element.io>
2022-05-10 14:05:22 +01:00
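
A minimal sketch of a flag-only decorator like the one described above (attribute name assumed): it does not change behaviour by itself, it merely marks the handler so the HTTP machinery knows that cancelling it mid-way is safe.

```python
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])


def cancellable(method: F) -> F:
    # Tag the handler; the request-handling code checks this flag later.
    method.cancellable = True  # type: ignore[attr-defined]
    return method


class RoomMembersServlet:
    @cancellable
    async def on_GET(self, request: Any, room_id: str) -> Any:
        ...  # read-only work that is safe to abandon part-way through


assert getattr(RoomMembersServlet.on_GET, "cancellable", False)
```
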
David Robertson
2aad0ae57f Merge tag 'v1.59.0rc1' into develop
Synapse 1.59.0rc1 (2022-05-10)
==============================

This release makes several changes that server administrators should be aware of:

- Device name lookup over federation is now disabled by default. ([\#12616](https://github.com/matrix-org/synapse/issues/12616))
- The `synapse.app.appservice` and `synapse.app.user_dir` worker application types are now deprecated. ([\#12452](https://github.com/matrix-org/synapse/issues/12452), [\#12654](https://github.com/matrix-org/synapse/issues/12654))

See [the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#upgrading-to-v1590) for more details.

Additionally, this release removes the non-standard `m.login.jwt` login type from Synapse. It can be replaced with `org.matrix.login.jwt` for identical behaviour. This is only used if `jwt_config.enabled` is set to `true` in the configuration. ([\#12597](https://github.com/matrix-org/synapse/issues/12597))

Features
--------

- Support [MSC3266](https://github.com/matrix-org/matrix-doc/pull/3266) room summaries over federation. ([\#11507](https://github.com/matrix-org/synapse/issues/11507))
- Implement [changes](4a77139249) to [MSC2285 (hidden read receipts)](https://github.com/matrix-org/matrix-spec-proposals/pull/2285). Contributed by @SimonBrandner. ([\#12168](https://github.com/matrix-org/synapse/issues/12168), [\#12635](https://github.com/matrix-org/synapse/issues/12635), [\#12636](https://github.com/matrix-org/synapse/issues/12636), [\#12670](https://github.com/matrix-org/synapse/issues/12670))
- Extend the [module API](https://github.com/matrix-org/synapse/blob/release-v1.59/synapse/module_api/__init__.py) to allow modules to change actions for existing push rules of local users. ([\#12406](https://github.com/matrix-org/synapse/issues/12406))
- Add the `notify_appservices_from_worker` configuration option (superseding `notify_appservices`) to allow a generic worker to be designated as the worker to send traffic to Application Services. ([\#12452](https://github.com/matrix-org/synapse/issues/12452))
- Add the `update_user_directory_from_worker` configuration option (superseding `update_user_directory`) to allow a generic worker to be designated as the worker to update the user directory. ([\#12654](https://github.com/matrix-org/synapse/issues/12654))
- Add new `enable_registration_token_3pid_bypass` configuration option to allow registrations via token as an alternative to verifying a 3pid. ([\#12526](https://github.com/matrix-org/synapse/issues/12526))
- Implement [MSC3786](https://github.com/matrix-org/matrix-spec-proposals/pull/3786): Add a default push rule to ignore `m.room.server_acl` events. ([\#12601](https://github.com/matrix-org/synapse/issues/12601))
- Add new `mau_appservice_trial_days` configuration option to specify a different trial period for users registered via an appservice. ([\#12619](https://github.com/matrix-org/synapse/issues/12619))

Bugfixes
--------

- Fix a bug introduced in Synapse 1.48.0 where the latest thread reply provided failed to include the proper bundled aggregations. ([\#12273](https://github.com/matrix-org/synapse/issues/12273))
- Fix a bug introduced in Synapse 1.22.0 where attempting to send a large amount of read receipts to an application service all at once would result in duplicate content and abnormally high memory usage. Contributed by Brad & Nick @ Beeper. ([\#12544](https://github.com/matrix-org/synapse/issues/12544))
- Fix a bug introduced in Synapse 1.57.0 which could cause `Failed to calculate hosts in room` errors to be logged for outbound federation. ([\#12570](https://github.com/matrix-org/synapse/issues/12570))
- Fix a long-standing bug where status codes would almost always get logged as `200!`, irrespective of the actual status code, when clients disconnect before a request has finished processing. ([\#12580](https://github.com/matrix-org/synapse/issues/12580))
- Fix race when persisting an event and deleting a room that could lead to outbound federation breaking. ([\#12594](https://github.com/matrix-org/synapse/issues/12594))
- Fix a bug introduced in Synapse 1.53.0 where bundled aggregations for annotations/edits were incorrectly calculated. ([\#12633](https://github.com/matrix-org/synapse/issues/12633))
- Fix a long-standing bug where rooms containing power levels with string values could not be upgraded. ([\#12657](https://github.com/matrix-org/synapse/issues/12657))
- Prevent memory leak from reoccurring when presence is disabled. ([\#12656](https://github.com/matrix-org/synapse/issues/12656))

Updates to the Docker image
---------------------------

- Explicitly opt-in to using [BuildKit-specific features](https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md) in the Dockerfile. This fixes issues with building images in some GitLab CI environments. ([\#12541](https://github.com/matrix-org/synapse/issues/12541))
- Update the "Build docker images" GitHub Actions workflow to use `docker/metadata-action` to generate docker image tags, instead of a custom shell script. Contributed by @henryclw. ([\#12573](https://github.com/matrix-org/synapse/issues/12573))

Improved Documentation
----------------------

- Update SQL statements and replace use of old table `user_stats_historical` in docs for Synapse Admins. ([\#12536](https://github.com/matrix-org/synapse/issues/12536))
- Add missing linebreak to `pipx` install instructions. ([\#12579](https://github.com/matrix-org/synapse/issues/12579))
- Add information about the TCP replication module to docs. ([\#12621](https://github.com/matrix-org/synapse/issues/12621))
- Fixes to the formatting of `README.rst`. ([\#12627](https://github.com/matrix-org/synapse/issues/12627))
- Fix docs on how to run specific Complement tests using the `complement.sh` test runner. ([\#12664](https://github.com/matrix-org/synapse/issues/12664))

Deprecations and Removals
-------------------------

- Remove unstable identifiers from [MSC3069](https://github.com/matrix-org/matrix-doc/pull/3069). ([\#12596](https://github.com/matrix-org/synapse/issues/12596))
- Remove the unspecified `m.login.jwt` login type and the unstable `uk.half-shot.msc2778.login.application_service` from
  [MSC2778](https://github.com/matrix-org/matrix-doc/pull/2778). ([\#12597](https://github.com/matrix-org/synapse/issues/12597))
- Synapse now requires at least Python 3.7.1 (up from 3.7.0), for compatibility with the latest Twisted trunk. ([\#12613](https://github.com/matrix-org/synapse/issues/12613))

Internal Changes
----------------

- Use supervisord to supervise Postgres and Caddy in the Complement image to reduce restart time. ([\#12480](https://github.com/matrix-org/synapse/issues/12480))
- Immediately retry any requests that have backed off when a server comes back online. ([\#12500](https://github.com/matrix-org/synapse/issues/12500))
- Use `make_awaitable` instead of `defer.succeed` for return values of mocks in tests. ([\#12505](https://github.com/matrix-org/synapse/issues/12505))
- Consistently check if an object is a `frozendict`. ([\#12564](https://github.com/matrix-org/synapse/issues/12564))
- Protect module callbacks with read semantics against cancellation. ([\#12568](https://github.com/matrix-org/synapse/issues/12568))
- Improve comments and error messages around access tokens. ([\#12577](https://github.com/matrix-org/synapse/issues/12577))
- Improve docstrings for the receipts store. ([\#12581](https://github.com/matrix-org/synapse/issues/12581))
- Use constants for read-receipts in tests. ([\#12582](https://github.com/matrix-org/synapse/issues/12582))
- Log status code of cancelled requests as 499 and avoid logging stack traces for them. ([\#12587](https://github.com/matrix-org/synapse/issues/12587), [\#12663](https://github.com/matrix-org/synapse/issues/12663))
- Remove special-case for `twisted` logger from default log config. ([\#12589](https://github.com/matrix-org/synapse/issues/12589))
- Use `getClientAddress` instead of the deprecated `getClientIP`. ([\#12599](https://github.com/matrix-org/synapse/issues/12599))
- Add link to documentation in Grafana Dashboard. ([\#12602](https://github.com/matrix-org/synapse/issues/12602))
- Reduce log spam when running multiple event persisters. ([\#12610](https://github.com/matrix-org/synapse/issues/12610))
- Add extra debug logging to federation sender. ([\#12614](https://github.com/matrix-org/synapse/issues/12614))
- Prevent remote homeservers from requesting local user device names by default. ([\#12616](https://github.com/matrix-org/synapse/issues/12616))
- Add a consistency check on events which we read from the database. ([\#12620](https://github.com/matrix-org/synapse/issues/12620))
- Remove use of the `constantly` library and switch to enums for `EventRedactBehaviour`. Contributed by @andrewdoh. ([\#12624](https://github.com/matrix-org/synapse/issues/12624))
- Remove unused code related to receipts. ([\#12632](https://github.com/matrix-org/synapse/issues/12632))
- Minor improvements to the scripts for running Synapse in worker mode under Complement. ([\#12637](https://github.com/matrix-org/synapse/issues/12637))
- Move `pympler` back in to the `all` extras. ([\#12652](https://github.com/matrix-org/synapse/issues/12652))
- Fix spelling of `M_UNRECOGNIZED` in comments. ([\#12665](https://github.com/matrix-org/synapse/issues/12665))
- Release script: confirm the commit to be tagged before tagging. ([\#12556](https://github.com/matrix-org/synapse/issues/12556))
- Fix a typo in the announcement text generated by the Synapse release development script. ([\#12612](https://github.com/matrix-org/synapse/issues/12612))

### Typechecking

- Fix scripts-dev to pass typechecking. ([\#12356](https://github.com/matrix-org/synapse/issues/12356))
- Add some type hints to datastore. ([\#12485](https://github.com/matrix-org/synapse/issues/12485))
- Remove unused `# type: ignore`s. ([\#12531](https://github.com/matrix-org/synapse/issues/12531))
- Allow unused `# type: ignore` comments in bleeding edge CI jobs. ([\#12576](https://github.com/matrix-org/synapse/issues/12576))
- Remove redundant lines of config from `mypy.ini`. ([\#12608](https://github.com/matrix-org/synapse/issues/12608))
- Update to mypy 0.950. ([\#12650](https://github.com/matrix-org/synapse/issues/12650))
- Use `Concatenate` to better annotate `_do_execute`. ([\#12666](https://github.com/matrix-org/synapse/issues/12666))
- Use `ParamSpec` to refine type hints. ([\#12667](https://github.com/matrix-org/synapse/issues/12667))
- Fix mypy against latest pillow stubs. ([\#12671](https://github.com/matrix-org/synapse/issues/12671))
2022-05-10 13:17:56 +01:00
Patrick Cloke
b44fbdffa4 Move free functions into PushRuleEvaluatorForEvent. (#12677)
* Move `_condition_checker` into `PushRuleEvaluatorForEvent`.
* Move the condition cache into `PushRuleEvaluatorForEvent`.
* Improve docstrings.
* Inline a method which is only called once.
2022-05-10 07:54:30 -04:00
Patrick Cloke
02cdace707 Add class-diagrams and notes for push. (#12676) 2022-05-10 07:43:34 -04:00
David Robertson
efcd899f69 other fixes 2022-05-10 11:31:10 +01:00
David Robertson
735faab2b8 backquote m.room.server_acl 2022-05-10 11:30:20 +01:00
David Robertson
c707ea736a v1 -> 1 2022-05-10 11:29:49 +01:00
David Robertson
80b3246528 Fix deprecation notice 2022-05-10 11:29:40 +01:00
David Robertson
2bae6d93c9 I manually added O's change, remove newsfile 2022-05-10 11:17:42 +01:00
David Robertson
239da21c1a Add Olivier's last-minute merge 2022-05-10 11:12:53 +01:00
David Robertson
946b8437cf Group release script changes 2022-05-10 11:12:53 +01:00
David Robertson
464fe99f52 Fix changelog link 2022-05-10 11:12:53 +01:00
reivilibre
699192fc1a Add the update_user_directory_from_worker configuration option (superseding update_user_directory) to allow a generic worker to be designated as the worker to update the user directory. (#12654)
Co-authored-by: Shay <hillerys@element.io>
2022-05-10 11:08:45 +01:00
David Robertson
8ef0d85acd Changelog typo 2022-05-10 11:07:44 +01:00
David Robertson
2cdac6f585 Adjust changelog 2022-05-10 11:06:58 +01:00
David Robertson
e5fd23fb6f 1.59.0rc1 2022-05-10 10:45:13 +01:00
Erik Johnston
8dd3e0e084 Immediately retry any requests that have backed off when a server comes back online. (#12500)
Otherwise it can take up to a minute for any in-flight `/send` requests to be retried.
2022-05-10 10:39:54 +01:00
Šimon Brandner
ade3008821 Implement MSC3786: Add a default push rule to ignore m.room.server_acl events (#12601)
Fixes vector-im/element-web#20788
Implements matrix-org/matrix-spec-proposals#3786
2022-05-10 08:57:36 +01:00
Shay
d80a7ab151 Update replication.md with info on TCP module structure (#12621) 2022-05-09 14:46:43 -07:00
Dirk Klimpel
615d96ad6e Update SQL statements in docs for Synapse Admins (#12536) 2022-05-09 14:43:02 -07:00
Richard van der Hoff
34e84fee68 Tweaks to workers-under-complement (#12637)
* Bump the HS startup timeout
* Log prefixes for more processes
* Bump the overall timeout
2022-05-09 22:41:06 +01:00
Val Lorentz
bf0c3ca20a Fix inconsistent spelling of 'M_UNRECOGNIZED'. (#12665) 2022-05-09 20:29:07 +00:00
Sean Quah
a00462dd99 Implement cancellation support/protection for module callbacks (#12568)
There's no guarantee that module callbacks will handle cancellation
appropriately. Protect module callbacks with read semantics from
cancellation and avoid swallowing `CancelledError`s that arise.

Other module callbacks, such as the `on_*` callbacks, are presumed to
live on code paths that involve writes and aren't cancellation-friendly.
These module callbacks have been left alone.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-09 12:31:14 +01:00
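
A hedged sketch of the shielding idea (this is an illustration, not Synapse's actual helper): mirror the callback's `Deferred` behind a second one, so cancelling the outer `Deferred` reports `CancelledError` to the caller without interrupting the callback itself or swallowing its result.

```python
from twisted.internet import defer
from twisted.python.failure import Failure


def stop_cancellation(inner: "defer.Deferred") -> "defer.Deferred":
    outer: "defer.Deferred" = defer.Deferred()

    def relay(result):
        # Only deliver the result if the outer Deferred wasn't cancelled.
        if not outer.called:
            if isinstance(result, Failure):
                outer.errback(result)
            else:
                outer.callback(result)
        return result  # leave the inner callback chain untouched

    inner.addBoth(relay)
    return outer
```
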
David Robertson
8de0facaae Fix mypy against latest pillow stubs (#12671) 2022-05-09 10:48:14 +00:00
Sean Quah
41a882e62d Update changelog for #12587 to be more accurate (#12663)
#12587 fell on the wrong side of the release cutoff relative to the rest of
the related PRs.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-09 11:34:39 +01:00
David Robertson
fa0eab9c8e Use ParamSpec in a few places (#12667) 2022-05-09 10:27:39 +00:00
Erik Johnston
c5969b346d Don't error on unknown receipt types (#12670)
Fixes #12669
2022-05-09 11:09:19 +01:00
Sheogorath
77258b6725 docs(contrib): Add link to documentation in dashboard (#12602) 2022-05-09 10:08:31 +00:00
Eric Eastwood
18d6c18aa1 Fix docs on how to run specific Complement tests after recent complement.sh change (#12664) 2022-05-09 10:38:32 +01:00
David Robertson
26c1ad71c5 Use Concatenate to annotate do_execute (#12666) 2022-05-09 10:28:38 +01:00
David Robertson
0ce2201932 Move pympler back into the all extras (#12652)
* Move `pympler` back into the `all` extras

Undoes a change I made in #12381. I can't fully remember my reasoning,
but this changed the contents of the debian packages in a backwards
incompatible way. We're not aware of anyone who's been bitten by this,
but we still want to fix it.

To the reviewer: please be convinced that the debian packages will still
contain pympler after this change.

* Debian changelog entry to keep the linter happy
2022-05-07 13:40:58 +01:00
David Robertson
051a1c3f22 Convert stringy power levels to integers on room upgrade (#12657) 2022-05-07 13:37:29 +01:00
Erik Johnston
4337d33a73 Prevent memory leak from reoccurring when presence is disabled. (#12656) 2022-05-06 16:41:57 +00:00
David Robertson
2607b3e181 Update mypy to 0.950 and fix complaints (#12650) 2022-05-06 12:35:20 +00:00
reivilibre
c2d50e9f6c Add the notify_appservices_from_worker configuration option (superseding notify_appservices) to allow a generic worker to be designated as the worker to send traffic to Application Services. (#12452) 2022-05-06 11:43:53 +01:00
Andrew Morgan
f1fbf75cfc Merge branch 'master' into develop 2022-05-05 17:43:27 +01:00
DeepBlueV7.X
a377a43386 Support MSC3266 room summaries over federation (#11507)
Signed-off-by: Nicolas Werner <nicolas.werner@hotmail.de>
2022-05-05 15:25:00 +01:00
Andrew Morgan
3a8ee22911 Update v1.58.1 changelog entry with more familiar language 2022-05-05 15:15:32 +01:00
Andrew Morgan
bc149a18f6 link to relevant bug report in v1.58.1 changelog 2022-05-05 15:10:24 +01:00
Andrew Morgan
d2784b6567 Minor wording change to v1.58.1 release notes 2022-05-05 15:06:39 +01:00
Andrew Morgan
6a17a291a6 1.58.1 2022-05-05 15:05:58 +01:00
Andrew Morgan
e923fc20bd Include extra dependency groups 'systemd' and 'cache_memory' in debian packages (#12640)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2022-05-05 13:51:15 +00:00
Šimon Brandner
ef86cf3d28 Update _on_new_receipts() to work with MSC2285 changes. (#12636) 2022-05-05 13:25:51 +00:00
reivilibre
07fa53ec40 Improve comments and error messages around access tokens. (#12577) 2022-05-05 13:39:59 +01:00
Henry
b8fa24b022 Use docker/metadata-action to generate docker image tags (#12573)
Update the "Build docker images" GitHub Actions workflow to use
`docker/metadata-action` to generate docker image tags, instead of a
custom shell script.

Signed-off-by: Henry <97804910+henryclw@users.noreply.github.com>
2022-05-05 12:36:42 +00:00
Šimon Brandner
9ae0253f4e Use private instead of hidden in MSC2285 related code. (#12635) 2022-05-05 12:31:25 +00:00
Patrick Cloke
f90d381c7b Edits/annotations should not have any bundled aggregations calculated. (#12633)
Fixes a regression from 8b309adb43 (#11660)
and b65acead42 (#11752) where events which
themselves were an edit or an annotation could have bundled aggregations calculated,
which is not allowed.
2022-05-05 08:15:12 -04:00
Patrick Cloke
ddc8bba00f Remove unused receipt datastore methods. (#12632)
The last usage was removed in 5a1dd297c3 (#8059).
2022-05-05 07:51:19 -04:00
Will Hunt
cc7656099d Fix typo in some instances of enable_registration_token_3pid_bypass. (#12639) 2022-05-05 07:11:52 -04:00
Erik Johnston
c0379d6e5b Reduce log spam when running multiple event persisters (#12610) 2022-05-05 10:20:23 +01:00
Will Hunt
2d74a8c178 Add mau_appservice_trial_days config (#12619)
* Add mau_appservice_trial_days

* Add a test

* Tweaks

* changelog

* Ensure we sync after the delay

* Fix types

* Add config statement

* Fix test

* Reinstate logging that got removed

* Fix feature name
2022-05-04 19:33:26 +01:00
Patrick Cloke
7fbf42499d Use getClientAddress instead of getClientIP. (#12599)
getClientIP was deprecated in Twisted 18.4.0, which also added
getClientAddress. The Synapse minimum version for Twisted is
currently 18.9.0, so all supported versions have the new API.
2022-05-04 14:11:21 -04:00
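
An illustrative (hypothetical) helper showing the API swap: `getClientAddress()` returns an address object, from which the host can be read for IPv4/IPv6 connections.

```python
from twisted.internet.address import IPv4Address, IPv6Address


def client_ip(request) -> str:
    addr = request.getClientAddress()  # replaces the deprecated getClientIP()
    if isinstance(addr, (IPv4Address, IPv6Address)):
        return addr.host
    return "-"  # e.g. a UNIX-socket client has no IP address
```
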
Šimon Brandner
116a4c8340 Implement changes to MSC2285 (hidden read receipts) (#12168)
* Changes hidden read receipts to be a separate receipt type
  (instead of a field on `m.read`).
* Updates the `/receipts` endpoint to accept `m.fully_read`.
2022-05-04 11:59:22 -04:00
Andrew Morgan
332cce8dcf Disable device name lookup over federation by default (#12616) 2022-05-04 16:41:40 +01:00
Patrick Cloke
ba3fd54bad Remove unstable/unspecced login types. (#12597)
* `m.login.jwt`, which was never specced and has been deprecated
  since Synapse 1.16.0. (`org.matrix.login.jwt` can be used instead.)
* `uk.half-shot.msc2778.login.application_service`, which was
  stabilized as part of the Matrix spec v1.2 release.
2022-05-04 13:53:21 +00:00
Sean Quah
b2df0716bc Improve logging for cancelled requests (#12587)
Don't log stack traces for cancelled requests and use a custom HTTP
status code of 499.

Signed-off-by: Sean Quah <seanq@element.io>
2022-05-04 13:38:55 +01:00
Patrick Cloke
75dff3dc98 Include bundled aggregations for the latest event in a thread. (#12273)
The `latest_event` field of the bundled aggregations for `m.thread` relations
did not include bundled aggregations itself. This resulted in clients needing to
immediately request the event from the server (and thus making it useless that
the latest event itself was serialized instead of just including an event ID).
2022-05-04 08:38:18 -04:00
andrew do
01e625513a remove constantly lib use and switch to enums. (#12624) 2022-05-04 11:26:11 +00:00
Richard van der Hoff
873d467976 Fixes to the formatting of README.rst (#12627)
Fixes a couple of formatting errors which were introduced in #12475.
2022-05-04 11:02:19 +01:00
Richard van der Hoff
96e0cdbc5a Add a consistency check on events read from the database (#12620)
I've seen a few errors which can only plausibly be explained by the calculated
event id for an event being different from the ID of the event in the
database. It should be cheap to check this, so let's do so and raise an
exception.
2022-05-03 21:27:52 +01:00
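
A minimal sketch (hypothetical names) of the kind of cheap consistency check described above: the event ID reported by the parsed event must match the ID under which the row was stored.

```python
def check_event_round_trip(db_event_id: str, parsed_event) -> None:
    if parsed_event.event_id != db_event_id:
        raise RuntimeError(
            "Event %s in the database was parsed with event_id %s"
            % (db_event_id, parsed_event.event_id)
        )
```
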
David Robertson
9ce51a47f6 Bump Synapse minimum Python version to 3.7.1 (#12613) 2022-05-03 19:22:06 +01:00
Patrick Cloke
aa5f5ede33 Remove unstable identifiers for MSC3069. (#12596) 2022-05-03 12:43:12 -04:00
Richard van der Hoff
d66d68f917 Add extra debug logging to federation sender (#12614)
... in order to debug some problems we've been having with certain events not
being sent when expected.
2022-05-03 16:32:40 +01:00
Andrew Morgan
c4514b97db Add missing space before 'docker' link in release announcement script (#12612) 2022-05-03 14:46:42 +00:00
Richard van der Hoff
77dee1b451 fix imports
broken in 5938928 :-S
2022-05-03 13:59:28 +01:00
Richard van der Hoff
5938928c59 minor wording fix in docstring 2022-05-03 13:50:50 +01:00
Richard van der Hoff
db2edf5a65 Exclude OOB memberships from the federation sender (#12570)
As the comment says, there is no need to process such events, and indeed we
need to avoid doing so.

Fixes #12509.
2022-05-03 12:47:56 +00:00
Andrew Morgan
13e4386710 Merge branch 'master' into develop 2022-05-03 11:51:24 +01:00
David Robertson
bf2fea8f7d Add sanity checks to the release script (#12556)
Check we're on the right branch before tagging, and on the right tag before uploading

* Abort if we're on the wrong branch
* Check we have the right tag checked out
* Clarify that `publish` only releases to GitHub
2022-05-03 10:50:03 +00:00
Erik Johnston
ae7858f184 Fix race when persisting an event and deleting a room (#12594)
This works by taking a row level lock on the `rooms` table at the start of both transactions, ensuring that they don't run at the same time. In the event persistence transaction we also check that there is an entry still in the `rooms` table.

I can't figure out how to do this in SQLite. I was just going to lock the table, but it seems that we don't support that in SQLite either, so I'm *really* confused as to how we maintain integrity in SQLite when using `lock_table`....
2022-05-03 11:47:21 +01:00
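
A hedged sketch of the Postgres locking approach described above (table/column names and placeholder style assumed): both the event-persistence and room-deletion transactions take a row-level lock on the room up front, so they cannot interleave, and persistence aborts if the room row has already gone.

```python
def lock_room_for_persist(txn, room_id: str) -> None:
    # Row lock: blocks until any concurrent delete-room transaction finishes.
    txn.execute("SELECT room_id FROM rooms WHERE room_id = ? FOR UPDATE", (room_id,))
    if txn.fetchone() is None:
        raise RuntimeError("Room %s no longer exists" % (room_id,))
```
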
David Robertson
01dcf7532d Prune mypy ignore_missing_imports list (#12608) 2022-05-03 11:03:20 +01:00
Richard van der Hoff
8d156ec0ba Remove special-case for twisted logger (#12589)
This was originally added when we first added a `MemoryHandler` to the default
log config back in https://github.com/matrix-org/synapse/pull/8040, to ensure
that we didn't explode with an infinite loop if there was an error formatting
the logs.

Since then, we made additional improvements to logging which make this
workaround redundant. In particular:

 * we no longer attempt to log un-UTF8-decodable byte sequences, which were the
   most likely cause of an error in the first place.

 * https://github.com/matrix-org/synapse/pull/8268 ensures that in the unlikely
   case that there *is* an error, it won't cause an infinite loop.
2022-04-29 22:05:18 +01:00
David Robertson
57fac2a234 Allow unused ignores in "bleeding edge" CI (#12576)
* Allow unused ignores in "bleeding edge" CI

Where "bleeding edge" means the Twisted Trunk and Latest Deps jobs.

Follow up from #12531.
Resolves #12574.

* Use `--extras all` in latest deps mypy CI

Twisted trunk job already does this.

Missed in #12531.

* changelog
2022-04-29 17:57:23 +01:00
Patrick Cloke
3ae56d125c Improve the docstrings for the receipts store. (#12581) 2022-04-28 17:58:58 +00:00
Šimon Brandner
0d9eaa19fd Use constants for receipt types in tests. (#12582) 2022-04-28 13:34:33 -04:00
Sean Quah
0b684b59e5 Fix logging of incorrect status codes for disconnected requests (#12580)
The status code of requests must always be set, regardless of client
disconnection, otherwise they will always be logged as 200!.

Broken for `respond_with_json` in
f48792eec4.
Broken for `respond_with_json_bytes` in
3e58ce72b4.
Broken for `respond_with_html_bytes` in
ea26e9a98b.

Signed-off-by: Sean Quah <seanq@element.io>
2022-04-28 15:49:50 +00:00
DeepBlueV7.X
629aa51743 Add linebreak to pipx install quote in README (#12579) 2022-04-28 13:54:46 +01:00
David Robertson
5d3509dfda Revert accidental direct-to-develop commits.
This reverts commit 5a320baa45.
This reverts commit f282d5fc11.
This reverts commit ce6ecdd4b4.
2022-04-28 11:33:05 +01:00
David Robertson
5a320baa45 changelog 2022-04-28 11:31:26 +01:00
David Robertson
f282d5fc11 Use --extras all in latest deps mypy CI
Twisted trunk job already does this.

Missed in #12531.
2022-04-28 11:29:13 +01:00
David Robertson
ce6ecdd4b4 Allow unused ignores in "bleeding edge" CI
Where "bleeding edge" means the Twisted Trunk and Latest Deps jobs.

Follow up from #12531.
Resolves #12574.
2022-04-28 11:28:22 +01:00
Sean Quah
78b99de7c2 Prefer make_awaitable over defer.succeed in tests (#12505)
When configuring the return values of mocks, prefer awaitables from
`make_awaitable` over `defer.succeed`. `Deferred`s are only awaitable
once, so it is inappropriate for a mock to return the same `Deferred`
multiple times.

Also update `run_in_background` to support functions that return
arbitrary awaitables.

Signed-off-by: Sean Quah <seanq@element.io>
2022-04-27 14:58:26 +01:00
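
A hedged sketch of the pattern described above: an already-completed `Future` can be awaited any number of times, whereas the `Deferred` from `defer.succeed()` can only be awaited once, which makes it a poor fixed return value for a mock. The names below mirror the description in the commit message rather than quoting Synapse's test helpers.

```python
from asyncio import Future
from unittest.mock import Mock


def make_awaitable(result):
    future: Future = Future()
    future.set_result(result)
    return future


store = Mock()
store.get_user_by_id = Mock(return_value=make_awaitable({"name": "@alice:example.org"}))
# Code under test may now `await store.get_user_by_id(...)` repeatedly;
# a single defer.succeed(...) would fail on the second await.
```
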
Brendan Abolivier
5ef673de4f Add a module API to allow modules to edit push rule actions (#12406)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2022-04-27 13:55:33 +00:00
reivilibre
d743b25c8f Use supervisord to supervise Postgres and Caddy in the Complement image. (#12480)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2022-04-27 14:39:41 +01:00
David Robertson
30c8e7e408 Make scripts-dev pass mypy --disallow-untyped-defs (#12356)
Not enforced in config yet. One day.
2022-04-27 13:10:31 +00:00
David Robertson
6463244375 Remove unused # type: ignores (#12531)
Over time we've begun to use newer versions of mypy, typeshed, stub
packages---and of course we've improved our own annotations. This makes
some type ignore comments no longer necessary. I have removed them.

There was one exception: a module that imports `select.epoll`. The
ignore is redundant on Linux, but I've kept it ignored for those of us
who work on the source tree using not-Linux. (#11771)

I'm more interested in the config line which enforces this. I want
unused ignores to be reported, because I think it's useful feedback when
annotating to know when you've fixed a problem you had to previously
ignore.

* Installing extras before typechecking

Lacking an easy way to install all extras generically, let's bite the bullet and
install the hand-maintained `all` extra before typechecking.

Now that https://github.com/matrix-org/backend-meta/pull/6 is merged to
the release/v1 branch.
2022-04-27 14:03:44 +01:00
Patrick Cloke
8a23bde823 Consistently use collections.abc.Mapping to check frozendict. (#12564) 2022-04-27 09:00:07 -04:00
Will Hunt
e8d1ec0e92 Add option to enable token registration without requiring 3pids (#12526) 2022-04-27 12:57:53 +00:00
Dirk Klimpel
b76f1a4d5f Add some type hints to datastore (#12485) 2022-04-27 13:05:00 +01:00
Nick Mills-Barrett
63ba9ba38b Bound ephemeral events by key (#12544)
Co-authored-by: Brad Murray <bradtgmurray@gmail.com>
Co-authored-by: Andrew Morgan <andrewm@element.io>
2022-04-26 20:14:21 +01:00
David Robertson
9986621bc8 Merge tag 'v1.58.0rc2' into develop
Synapse 1.58.0rc2 (2022-04-26)
==============================

This release candidate fixes bugs related to Synapse 1.58.0rc1's logic for handling device list updates.

Bugfixes
--------

- Fix a bug introduced in Synapse 1.58.0rc1 where the main process could consume excessive amounts of CPU and memory while handling sentry logging failures. ([\#12554](https://github.com/matrix-org/synapse/issues/12554))
- Fix a bug introduced in Synapse 1.58.0rc1 where opentracing contexts were not correctly sent to whitelisted remote servers with device lists updates. ([\#12555](https://github.com/matrix-org/synapse/issues/12555))

Internal Changes
----------------

- Reduce unnecessary work when handling remote device list updates. ([\#12557](https://github.com/matrix-org/synapse/issues/12557))
2022-04-26 18:07:15 +01:00
Jason Robinson
706456de1f Mark Dockerfile as requiring BuildKit (#12541)
Co-authored-by: David Robertson <davidr@element.io>
2022-04-26 15:31:52 +01:00
295 changed files with 7110 additions and 2517 deletions


@@ -34,32 +34,24 @@ jobs:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# TODO: consider using https://github.com/docker/metadata-action instead of this
# custom magic
- name: Calculate docker image tag
id: set-tag
run: |
case "${GITHUB_REF}" in
refs/heads/develop)
tag=develop
;;
refs/heads/master|refs/heads/main)
tag=latest
;;
refs/tags/*)
tag=${GITHUB_REF#refs/tags/}
;;
*)
tag=${GITHUB_SHA}
;;
esac
echo "::set-output name=tag::$tag"
uses: docker/metadata-action@master
with:
images: matrixdotorg/synapse
flavor: |
latest=false
tags: |
type=raw,value=develop,enable=${{ github.ref == 'refs/heads/develop' }}
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/master' }}
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
type=pep440,pattern={{raw}}
- name: Build and push all platforms
uses: docker/build-push-action@v2
with:
push: true
labels: "gitsha1=${{ github.sha }}"
tags: "matrixdotorg/synapse:${{ steps.set-tag.outputs.tag }}"
tags: "${{ steps.set-tag.outputs.tags }}"
file: "docker/Dockerfile"
platforms: linux/amd64,linux/arm64


@@ -32,12 +32,15 @@ jobs:
with:
python-version: "3.x"
poetry-version: "1.2.0b1"
extras: "all"
# Dump installed versions for debugging.
- run: poetry run pip list > before.txt
# Upgrade all runtime dependencies only. This is intended to mimic a fresh
# `pip install matrix-synapse[all]` as closely as possible.
- run: poetry update --no-dev
- run: poetry run pip list > after.txt && (diff -u before.txt after.txt || true)
- name: Remove warn_unused_ignores from mypy config
run: sed '/warn_unused_ignores = True/d' -i mypy.ini
- run: poetry run mypy
trial:
runs-on: ubuntu-latest


@@ -20,13 +20,9 @@ jobs:
- run: scripts-dev/config-lint.sh
lint:
# This does a vanilla `poetry install` - no extras. I'm slightly anxious
# that we might skip some typechecks on code that uses extras. However,
# I think the right way to fix this is to mark any extras needed for
# typechecking as development dependencies. To detect this, we ought to
# turn up mypy's strictness: disallow unknown imports and accept fewer
# uses of `Any`.
uses: "matrix-org/backend-meta/.github/workflows/python-poetry-ci.yml@v1"
with:
typechecking-extras: "all"
lint-crlf:
runs-on: ubuntu-latest


@@ -24,6 +24,8 @@ jobs:
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry install --no-interaction --extras "all test"
- name: Remove warn_unused_ignores from mypy config
run: sed '/warn_unused_ignores = True/d' -i mypy.ini
- run: poetry run mypy
trial:


@@ -1,3 +1,139 @@
Synapse 1.59.0 (2022-05-17)
===========================
Synapse 1.59 makes several changes that server administrators should be aware of:
- Device name lookup over federation is now disabled by default. ([\#12616](https://github.com/matrix-org/synapse/issues/12616))
- The `synapse.app.appservice` and `synapse.app.user_dir` worker application types are now deprecated. ([\#12452](https://github.com/matrix-org/synapse/issues/12452), [\#12654](https://github.com/matrix-org/synapse/issues/12654))
See [the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#upgrading-to-v1590) for more details.
Additionally, this release removes the non-standard `m.login.jwt` login type from Synapse. It can be replaced with `org.matrix.login.jwt` for identical behaviour. This is only used if `jwt_config.enabled` is set to `true` in the configuration. ([\#12597](https://github.com/matrix-org/synapse/issues/12597))
Bugfixes
--------
- Fix DB performance regression introduced in Synapse 1.59.0rc2. ([\#12745](https://github.com/matrix-org/synapse/issues/12745))
Synapse 1.59.0rc2 (2022-05-16)
==============================
Note: this release candidate includes a performance regression which can cause database disruption. Other release candidates in the v1.59.0 series are not affected, and a fix will be included in the v1.59.0 final release.
Bugfixes
--------
- Fix a bug introduced in Synapse 1.58.0 where `/sync` would fail if the most recent event in a room was rejected. ([\#12729](https://github.com/matrix-org/synapse/issues/12729))
Synapse 1.59.0rc1 (2022-05-10)
==============================
Features
--------
- Support [MSC3266](https://github.com/matrix-org/matrix-doc/pull/3266) room summaries over federation. ([\#11507](https://github.com/matrix-org/synapse/issues/11507))
- Implement [changes](https://github.com/matrix-org/matrix-spec-proposals/pull/2285/commits/4a77139249c2e830aec3c7d6bd5501a514d1cc27) to [MSC2285 (hidden read receipts)](https://github.com/matrix-org/matrix-spec-proposals/pull/2285). Contributed by @SimonBrandner. ([\#12168](https://github.com/matrix-org/synapse/issues/12168), [\#12635](https://github.com/matrix-org/synapse/issues/12635), [\#12636](https://github.com/matrix-org/synapse/issues/12636), [\#12670](https://github.com/matrix-org/synapse/issues/12670))
- Extend the [module API](https://github.com/matrix-org/synapse/blob/release-v1.59/synapse/module_api/__init__.py) to allow modules to change actions for existing push rules of local users. ([\#12406](https://github.com/matrix-org/synapse/issues/12406))
- Add the `notify_appservices_from_worker` configuration option (superseding `notify_appservices`) to allow a generic worker to be designated as the worker to send traffic to Application Services. ([\#12452](https://github.com/matrix-org/synapse/issues/12452))
- Add the `update_user_directory_from_worker` configuration option (superseding `update_user_directory`) to allow a generic worker to be designated as the worker to update the user directory. ([\#12654](https://github.com/matrix-org/synapse/issues/12654))
- Add new `enable_registration_token_3pid_bypass` configuration option to allow registrations via token as an alternative to verifying a 3pid. ([\#12526](https://github.com/matrix-org/synapse/issues/12526))
- Implement [MSC3786](https://github.com/matrix-org/matrix-spec-proposals/pull/3786): Add a default push rule to ignore `m.room.server_acl` events. ([\#12601](https://github.com/matrix-org/synapse/issues/12601))
- Add new `mau_appservice_trial_days` configuration option to specify a different trial period for users registered via an appservice. ([\#12619](https://github.com/matrix-org/synapse/issues/12619))
Bugfixes
--------
- Fix a bug introduced in Synapse 1.48.0 where the latest thread reply provided failed to include the proper bundled aggregations. ([\#12273](https://github.com/matrix-org/synapse/issues/12273))
- Fix a bug introduced in Synapse 1.22.0 where attempting to send a large amount of read receipts to an application service all at once would result in duplicate content and abnormally high memory usage. Contributed by Brad & Nick @ Beeper. ([\#12544](https://github.com/matrix-org/synapse/issues/12544))
- Fix a bug introduced in Synapse 1.57.0 which could cause `Failed to calculate hosts in room` errors to be logged for outbound federation. ([\#12570](https://github.com/matrix-org/synapse/issues/12570))
- Fix a long-standing bug where status codes would almost always get logged as `200!`, irrespective of the actual status code, when clients disconnect before a request has finished processing. ([\#12580](https://github.com/matrix-org/synapse/issues/12580))
- Fix race when persisting an event and deleting a room that could lead to outbound federation breaking. ([\#12594](https://github.com/matrix-org/synapse/issues/12594))
- Fix a bug introduced in Synapse 1.53.0 where bundled aggregations for annotations/edits were incorrectly calculated. ([\#12633](https://github.com/matrix-org/synapse/issues/12633))
- Fix a long-standing bug where rooms containing power levels with string values could not be upgraded. ([\#12657](https://github.com/matrix-org/synapse/issues/12657))
- Prevent memory leak from reoccurring when presence is disabled. ([\#12656](https://github.com/matrix-org/synapse/issues/12656))
Updates to the Docker image
---------------------------
- Explicitly opt-in to using [BuildKit-specific features](https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md) in the Dockerfile. This fixes issues with building images in some GitLab CI environments. ([\#12541](https://github.com/matrix-org/synapse/issues/12541))
- Update the "Build docker images" GitHub Actions workflow to use `docker/metadata-action` to generate docker image tags, instead of a custom shell script. Contributed by @henryclw. ([\#12573](https://github.com/matrix-org/synapse/issues/12573))
Improved Documentation
----------------------
- Update SQL statements and replace use of old table `user_stats_historical` in docs for Synapse Admins. ([\#12536](https://github.com/matrix-org/synapse/issues/12536))
- Add missing linebreak to `pipx` install instructions. ([\#12579](https://github.com/matrix-org/synapse/issues/12579))
- Add information about the TCP replication module to docs. ([\#12621](https://github.com/matrix-org/synapse/issues/12621))
- Fixes to the formatting of `README.rst`. ([\#12627](https://github.com/matrix-org/synapse/issues/12627))
- Fix docs on how to run specific Complement tests using the `complement.sh` test runner. ([\#12664](https://github.com/matrix-org/synapse/issues/12664))
Deprecations and Removals
-------------------------
- Remove unstable identifiers from [MSC3069](https://github.com/matrix-org/matrix-doc/pull/3069). ([\#12596](https://github.com/matrix-org/synapse/issues/12596))
- Remove the unspecified `m.login.jwt` login type and the unstable `uk.half-shot.msc2778.login.application_service` from
[MSC2778](https://github.com/matrix-org/matrix-doc/pull/2778). ([\#12597](https://github.com/matrix-org/synapse/issues/12597))
- Synapse now requires at least Python 3.7.1 (up from 3.7.0), for compatibility with the latest Twisted trunk. ([\#12613](https://github.com/matrix-org/synapse/issues/12613))
Internal Changes
----------------
- Use supervisord to supervise Postgres and Caddy in the Complement image to reduce restart time. ([\#12480](https://github.com/matrix-org/synapse/issues/12480))
- Immediately retry any requests that have backed off when a server comes back online. ([\#12500](https://github.com/matrix-org/synapse/issues/12500))
- Use `make_awaitable` instead of `defer.succeed` for return values of mocks in tests. ([\#12505](https://github.com/matrix-org/synapse/issues/12505))
- Consistently check if an object is a `frozendict`. ([\#12564](https://github.com/matrix-org/synapse/issues/12564))
- Protect module callbacks with read semantics against cancellation. ([\#12568](https://github.com/matrix-org/synapse/issues/12568))
- Improve comments and error messages around access tokens. ([\#12577](https://github.com/matrix-org/synapse/issues/12577))
- Improve docstrings for the receipts store. ([\#12581](https://github.com/matrix-org/synapse/issues/12581))
- Use constants for read-receipts in tests. ([\#12582](https://github.com/matrix-org/synapse/issues/12582))
- Log status code of cancelled requests as 499 and avoid logging stack traces for them. ([\#12587](https://github.com/matrix-org/synapse/issues/12587), [\#12663](https://github.com/matrix-org/synapse/issues/12663))
- Remove special-case for `twisted` logger from default log config. ([\#12589](https://github.com/matrix-org/synapse/issues/12589))
- Use `getClientAddress` instead of the deprecated `getClientIP`. ([\#12599](https://github.com/matrix-org/synapse/issues/12599))
- Add link to documentation in Grafana Dashboard. ([\#12602](https://github.com/matrix-org/synapse/issues/12602))
- Reduce log spam when running multiple event persisters. ([\#12610](https://github.com/matrix-org/synapse/issues/12610))
- Add extra debug logging to federation sender. ([\#12614](https://github.com/matrix-org/synapse/issues/12614))
- Prevent remote homeservers from requesting local user device names by default. ([\#12616](https://github.com/matrix-org/synapse/issues/12616))
- Add a consistency check on events which we read from the database. ([\#12620](https://github.com/matrix-org/synapse/issues/12620))
- Remove use of the `constantly` library and switch to enums for `EventRedactBehaviour`. Contributed by @andrewdoh. ([\#12624](https://github.com/matrix-org/synapse/issues/12624))
- Remove unused code related to receipts. ([\#12632](https://github.com/matrix-org/synapse/issues/12632))
- Minor improvements to the scripts for running Synapse in worker mode under Complement. ([\#12637](https://github.com/matrix-org/synapse/issues/12637))
- Move `pympler` back in to the `all` extras. ([\#12652](https://github.com/matrix-org/synapse/issues/12652))
- Fix spelling of `M_UNRECOGNIZED` in comments. ([\#12665](https://github.com/matrix-org/synapse/issues/12665))
- Release script: confirm the commit to be tagged before tagging. ([\#12556](https://github.com/matrix-org/synapse/issues/12556))
- Fix a typo in the announcement text generated by the Synapse release development script. ([\#12612](https://github.com/matrix-org/synapse/issues/12612))
### Typechecking
- Fix scripts-dev to pass typechecking. ([\#12356](https://github.com/matrix-org/synapse/issues/12356))
- Add some type hints to datastore. ([\#12485](https://github.com/matrix-org/synapse/issues/12485))
- Remove unused `# type: ignore`s. ([\#12531](https://github.com/matrix-org/synapse/issues/12531))
- Allow unused `# type: ignore` comments in bleeding edge CI jobs. ([\#12576](https://github.com/matrix-org/synapse/issues/12576))
- Remove redundant lines of config from `mypy.ini`. ([\#12608](https://github.com/matrix-org/synapse/issues/12608))
- Update to mypy 0.950. ([\#12650](https://github.com/matrix-org/synapse/issues/12650))
- Use `Concatenate` to better annotate `_do_execute`. ([\#12666](https://github.com/matrix-org/synapse/issues/12666))
- Use `ParamSpec` to refine type hints. ([\#12667](https://github.com/matrix-org/synapse/issues/12667))
- Fix mypy against latest pillow stubs. ([\#12671](https://github.com/matrix-org/synapse/issues/12671))
Synapse 1.58.1 (2022-05-05)
===========================
This patch release includes a fix to the Debian packages, installing the
`systemd` and `cache_memory` extra package groups, which were incorrectly
omitted in v1.58.0. This primarily prevented Synapse from starting
when the `systemd.journal.JournalHandler` log handler was configured.
See [#12631](https://github.com/matrix-org/synapse/issues/12631) for further information.
Otherwise, no significant changes since 1.58.0.
Synapse 1.58.0 (2022-05-03)
===========================


@@ -55,7 +55,7 @@ solutions. The hope is for Matrix to act as the building blocks for a new
generation of fully open and interoperable messaging and VoIP apps for the
internet.
Synapse is a Matrix "homeserver" implementation developed by the matrix.org core
Synapse is a Matrix "homeserver" implementation developed by the matrix.org core
team, written in Python 3/Twisted.
In Matrix, every user runs one or more Matrix clients, which connect through to
@@ -294,13 +294,13 @@ directory of your choice::
cd synapse
Synapse has a number of external dependencies. We maintain a fixed development
environment using [poetry](https://python-poetry.org/). First, install poetry. We recommend
environment using `Poetry <https://python-poetry.org/>`_. First, install poetry. We recommend::
pip install --user pipx
pipx install poetry
as described `here <https://python-poetry.org/docs/#installing-with-pipx>`_.
(See `poetry's installation docs <https://python-poetry.org/docs/#installation>`
(See `poetry's installation docs <https://python-poetry.org/docs/#installation>`_
for other installation methods.) Then ask poetry to create a virtual environment
from the project and install Synapse's dependencies::
@@ -309,11 +309,11 @@ from the project and install Synapse's dependencies::
This will run a process of downloading and installing all the needed
dependencies into a virtual env.
We recommend using the demo which starts 3 federated instances running on ports `8080` - `8082`
We recommend using the demo which starts 3 federated instances running on ports `8080` - `8082`::
poetry run ./demo/start.sh
(to stop, you can use `poetry run ./demo/stop.sh`)
(to stop, you can use ``poetry run ./demo/stop.sh``)
See the `demo documentation <https://matrix-org.github.io/synapse/develop/development/demo.html>`_
for more information.

1
changelog.d/10533.misc Normal file
View File

@@ -0,0 +1 @@
Improve event caching mechanism to avoid having multiple copies of an event in memory at a time.

1
changelog.d/12477.misc Normal file
View File

@@ -0,0 +1 @@
Add some type hints to datastore.

View File

@@ -0,0 +1 @@
Measure the time taken in spam-checking callbacks and expose those measurements as metrics.

1
changelog.d/12567.misc Normal file
View File

@@ -0,0 +1 @@
Replace string literal instances of stream key types with typed constants.

1
changelog.d/12586.misc Normal file
View File

@@ -0,0 +1 @@
Add `@cancellable` decorator, for use on endpoint methods that can be cancelled when clients disconnect.

1
changelog.d/12588.misc Normal file
View File

@@ -0,0 +1 @@
Add ability to cancel disconnected requests to `SynapseRequest`.

View File

@@ -0,0 +1 @@
Add a `default_power_level_content_override` config option to set default room power levels per room preset.

1
changelog.d/12630.misc Normal file
View File

@@ -0,0 +1 @@
Add a helper class for testing request cancellation.

View File

@@ -0,0 +1 @@
Synapse will now reload [cache config](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#caching) when it receives a [SIGHUP](https://en.wikipedia.org/wiki/SIGHUP) signal.

1
changelog.d/12676.misc Normal file
View File

@@ -0,0 +1 @@
Improve documentation of the `synapse.push` module.

1
changelog.d/12677.misc Normal file
View File

@@ -0,0 +1 @@
Refactor functions on `PushRuleEvaluatorForEvent`.

1
changelog.d/12679.misc Normal file
View File

@@ -0,0 +1 @@
Preparation for database schema simplifications: stop writing to `event_reference_hashes`.

1
changelog.d/12683.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.57.0 where `/messages` would throw a 500 error when querying for a non-existent room.

1
changelog.d/12689.misc Normal file
View File

@@ -0,0 +1 @@
Refactor `EventContext` class.

1
changelog.d/12691.misc Normal file
View File

@@ -0,0 +1 @@
Remove an unneeded class in the push code.

1
changelog.d/12693.misc Normal file
View File

@@ -0,0 +1 @@
Consolidate parsing of relation information from events.

1
changelog.d/12694.misc Normal file
View File

@@ -0,0 +1 @@
Capture the `Deferred` for request cancellation in `_AsyncResource`.

1
changelog.d/12695.misc Normal file
View File

@@ -0,0 +1 @@
Fix an incorrect type hint for `Filter._check_event_relations`.

1
changelog.d/12696.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long-standing bug where an empty room would be created when a user with an insufficient power level tried to upgrade a room.

1
changelog.d/12698.misc Normal file
View File

@@ -0,0 +1 @@
Respect the `@cancellable` flag for `DirectServe{Html,Json}Resource`s.

1
changelog.d/12699.misc Normal file
View File

@@ -0,0 +1 @@
Respect the `@cancellable` flag for `RestServlet`s and `BaseFederationServlet`s.

1
changelog.d/12700.misc Normal file
View File

@@ -0,0 +1 @@
Respect the `@cancellable` flag for `ReplicationEndpoint`s.

View File

@@ -0,0 +1 @@
Add a config option to allow for auto-tuning of caches.

1
changelog.d/12705.misc Normal file
View File

@@ -0,0 +1 @@
Complain if a federation endpoint has the `@cancellable` flag, since some of the wrapper code may not handle cancellation correctly yet.

1
changelog.d/12708.misc Normal file
View File

@@ -0,0 +1 @@
Enable cancellation of `GET /rooms/$room_id/members`, `GET /rooms/$room_id/state` and `GET /rooms/$room_id/state/$event_type/*` requests.

View File

@@ -0,0 +1 @@
Require a body in POST requests to `/rooms/{roomId}/receipt/{receiptType}/{eventId}`, as required by the [Matrix specification](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidreceiptreceipttypeeventid). This breaks compatibility with Element Android 1.2.0 and earlier: users of those clients will be unable to send read receipts.

1
changelog.d/12711.misc Normal file
View File

@@ -0,0 +1 @@
Optimize private read receipt filtering.

1
changelog.d/12713.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.30.0 where empty rooms could be automatically created if a monthly active users limit is set.

1
changelog.d/12715.doc Normal file
View File

@@ -0,0 +1 @@
Fix a typo in the Media Admin API documentation.

1
changelog.d/12716.misc Normal file
View File

@@ -0,0 +1 @@
Add type annotations to increase the number of modules passing `disallow-untyped-defs`.

1
changelog.d/12720.misc Normal file
View File

@@ -0,0 +1 @@
Drop the logging level of status messages for the URL preview cache expiry job from INFO to DEBUG.

1
changelog.d/12726.misc Normal file
View File

@@ -0,0 +1 @@
Add type annotations to increase the number of modules passing `disallow-untyped-defs`.

1
changelog.d/12727.doc Normal file
View File

@@ -0,0 +1 @@
Update the OpenID Connect example for Keycloak to be compatible with newer versions of Keycloak. Contributed by @nhh.

1
changelog.d/12731.misc Normal file
View File

@@ -0,0 +1 @@
Update configs used by Complement to allow more invites/3PID validations during tests.

1
changelog.d/12734.misc Normal file
View File

@@ -0,0 +1 @@
Tidy up and type-hint the database engine modules.

1
changelog.d/12742.doc Normal file
View File

@@ -0,0 +1 @@
Fix typo in server listener documentation.

1
changelog.d/12747.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix poor database performance when reading the cache invalidation stream for large servers with lots of workers.

1
changelog.d/12749.doc Normal file
View File

@@ -0,0 +1 @@
Fix typo in 'run_background_tasks_on' option name in configuration manual documentation.

1
changelog.d/12762.misc Normal file
View File

@@ -0,0 +1 @@
Fix a long-standing bug where the user directory background process would fail to make forward progress if a user included a null codepoint in their display name or avatar.

View File

@@ -66,6 +66,18 @@
],
"title": "Dashboards",
"type": "dashboards"
},
{
"asDropdown": false,
"icon": "external link",
"includeVars": false,
"keepTime": false,
"tags": [],
"targetBlank": true,
"title": "Synapse Documentation",
"tooltip": "Open Documentation",
"type": "link",
"url": "https://matrix-org.github.io/synapse/latest/"
}
],
"panels": [
@@ -10889,4 +10901,4 @@
"title": "Synapse",
"uid": "000000012",
"version": 100
}
}

View File

@@ -37,7 +37,11 @@ python3 -m venv "$TEMP_VENV"
source "$TEMP_VENV/bin/activate"
pip install -U pip
pip install poetry==1.2.0b1
poetry export --extras all --extras test -o exported_requirements.txt
poetry export \
--extras all \
--extras test \
--extras systemd \
-o exported_requirements.txt
deactivate
rm -rf "$TEMP_VENV"

29
debian/changelog vendored
View File

@@ -1,3 +1,32 @@
matrix-synapse-py3 (1.59.0) stable; urgency=medium
* New Synapse release 1.59.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 17 May 2022 10:26:50 +0100
matrix-synapse-py3 (1.59.0~rc2) stable; urgency=medium
* New Synapse release 1.59.0rc2.
-- Synapse Packaging team <packages@matrix.org> Mon, 16 May 2022 12:52:15 +0100
matrix-synapse-py3 (1.59.0~rc1) stable; urgency=medium
* Adjust how the `exported-requirements.txt` file is generated as part of
the process of building these packages. This affects the package
maintainers only; end-users are unaffected.
* New Synapse release 1.59.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 10 May 2022 10:45:08 +0100
matrix-synapse-py3 (1.58.1) stable; urgency=medium
* Include python dependencies from the `systemd` and `cache_memory` extras package groups, which
were incorrectly omitted from the 1.58.0 package.
* New Synapse release 1.58.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 05 May 2022 14:58:23 +0100
matrix-synapse-py3 (1.58.0) stable; urgency=medium
* New Synapse release 1.58.0.

View File

@@ -1,3 +1,4 @@
# syntax=docker/dockerfile:1
# Dockerfile to build the matrixdotorg/synapse docker images.
#
# Note that it uses features which are only available in BuildKit - see

View File

@@ -20,6 +20,9 @@ RUN rm /etc/nginx/sites-enabled/default
# Copy Synapse worker, nginx and supervisord configuration template files
COPY ./docker/conf-workers/* /conf/
# Copy a script to prefix log lines with the supervisor program name
COPY ./docker/prefix-log /usr/local/bin/
# Expose nginx listener port
EXPOSE 8080/tcp

View File

@@ -34,13 +34,16 @@ WORKDIR /data
# Copy the caddy config
COPY conf-workers/caddy.complement.json /root/caddy.json
COPY conf-workers/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf
COPY conf-workers/caddy.supervisord.conf /etc/supervisor/conf.d/caddy.conf
# Copy the entrypoint
COPY conf-workers/start-complement-synapse-workers.sh /
# Expose caddy's listener ports
EXPOSE 8008 8448
ENTRYPOINT /start-complement-synapse-workers.sh
ENTRYPOINT ["/start-complement-synapse-workers.sh"]
# Update the healthcheck to have a shorter check interval
HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \

View File

@@ -0,0 +1,7 @@
[program:caddy]
command=/usr/local/bin/prefix-log /root/caddy run --config /root/caddy.json
autorestart=unexpected
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0

View File

@@ -0,0 +1,16 @@
[program:postgres]
command=/usr/local/bin/prefix-log /usr/bin/pg_ctlcluster 13 main start --foreground
# Lower priority number = starts first
priority=1
autorestart=unexpected
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
# Use 'Fast Shutdown' mode which aborts current transactions and closes connections quickly.
# (Default (TERM) is 'Smart Shutdown' which stops accepting new connections but
# lets existing connections close gracefully.)
stopsignal=INT

View File

@@ -12,12 +12,6 @@ function log {
# Replace the server name in the caddy config
sed -i "s/{{ server_name }}/${SERVER_NAME}/g" /root/caddy.json
log "starting postgres"
pg_ctlcluster 13 main start
log "starting caddy"
/root/caddy start --config /root/caddy.json
# Set the server name of the homeserver
export SYNAPSE_SERVER_NAME=${SERVER_NAME}

View File

@@ -53,6 +53,18 @@ rc_joins:
per_second: 9999
burst_count: 9999
rc_3pid_validation:
per_second: 1000
burst_count: 1000
rc_invites:
per_room:
per_second: 1000
burst_count: 1000
per_user:
per_second: 1000
burst_count: 1000
federation_rr_transactions_per_room_per_second: 9999
## Experimental Features ##

View File

@@ -87,6 +87,18 @@ rc_joins:
per_second: 9999
burst_count: 9999
rc_3pid_validation:
per_second: 1000
burst_count: 1000
rc_invites:
per_room:
per_second: 1000
burst_count: 1000
per_user:
per_second: 1000
burst_count: 1000
federation_rr_transactions_per_room_per_second: 9999
## API Configuration ##

View File

@@ -9,7 +9,7 @@ user=root
files = /etc/supervisor/conf.d/*.conf
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
command=/usr/local/bin/prefix-log /usr/sbin/nginx -g "daemon off;"
priority=500
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
@@ -19,7 +19,7 @@ username=www-data
autorestart=true
[program:redis]
command=/usr/bin/redis-server /etc/redis/redis.conf --daemonize no
command=/usr/local/bin/prefix-log /usr/bin/redis-server /etc/redis/redis.conf --daemonize no
priority=1
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
@@ -29,7 +29,7 @@ username=redis
autorestart=true
[program:synapse_main]
command=/usr/local/bin/python -m synapse.app.homeserver --config-path="{{ main_config_path }}" --config-path=/conf/workers/shared.yaml
command=/usr/local/bin/prefix-log /usr/local/bin/python -m synapse.app.homeserver --config-path="{{ main_config_path }}" --config-path=/conf/workers/shared.yaml
priority=10
# Log startup failures to supervisord's stdout/err
# Regular synapse logs will still go in the configured data directory

View File

@@ -2,11 +2,7 @@ version: 1
formatters:
precise:
{% if worker_name %}
format: '%(asctime)s - worker:{{ worker_name }} - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
{% else %}
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
{% endif %}
handlers:
{% if LOG_FILE_PATH %}

View File

@@ -69,10 +69,10 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"worker_extra_conf": "enable_media_repo: true",
},
"appservice": {
"app": "synapse.app.appservice",
"app": "synapse.app.generic_worker",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": {"notify_appservices": False},
"shared_extra_conf": {"notify_appservices_from_worker": "appservice"},
"worker_extra_conf": "",
},
"federation_sender": {
@@ -171,7 +171,7 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
# Templates for sections that may be inserted multiple times in config files
SUPERVISORD_PROCESS_CONFIG_BLOCK = """
[program:synapse_{name}]
command=/usr/local/bin/python -m {app} \
command=/usr/local/bin/prefix-log /usr/local/bin/python -m {app} \
--config-path="{config_path}" \
--config-path=/conf/workers/shared.yaml \
--config-path=/conf/workers/{name}.yaml

12
docker/prefix-log Executable file
View File

@@ -0,0 +1,12 @@
#!/bin/bash
#
# Prefixes all lines on stdout and stderr with the process name (as determined by
# the SUPERVISOR_PROCESS_NAME env var, which is automatically set by Supervisor).
#
# Usage:
# prefix-log command [args...]
#
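# The two redirections below route stdout and stderr through awk, which prepends
# "<process name> | " to each line before writing it back to the original stream.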
exec 1> >(awk '{print "'"${SUPERVISOR_PROCESS_NAME}"' | "$0}' >&1)
exec 2> >(awk '{print "'"${SUPERVISOR_PROCESS_NAME}"' | "$0}' >&2)
exec "$@"

View File

@@ -289,7 +289,7 @@ POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
URL Parameters
* `unix_timestamp_in_ms`: string representing a positive integer - Unix timestamp in milliseconds.
* `before_ts`: string representing a positive integer - Unix timestamp in milliseconds.
All cached media that was last accessed before this timestamp will be removed.
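For instance, a purge request could look like the following (the host, access token and timestamp are illustrative placeholders, not values from this repository):
```sh
# Hypothetical example: purge cached remote media not accessed since the given time.
# <admin_access_token> must belong to a server admin.
curl -X POST \
  -H "Authorization: Bearer <admin_access_token>" \
  "https://matrix.example.com/_synapse/admin/v1/purge_media_cache?before_ts=1652745600000"
```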
Response:

View File

@@ -270,13 +270,13 @@ COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh
To run a specific test file, you can pass the test name at the end of the command. The name passed comes from the naming structure in your Complement tests. If you're unsure of the name, you can do a full run and copy it from the test output:
```sh
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh TestBackfillingHistory
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages
```
To run a specific test, you can specify the whole name structure:
```sh
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh TestBackfillingHistory/parallel/Backfilled_historical_events_resolve_with_proper_state_in_correct_order
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages/parallel/Historical_events_resolve_in_the_correct_order
```

View File

@@ -17,9 +17,6 @@ follows:
}
```
Note that the login type of `m.login.jwt` is supported, but is deprecated. This
will be removed in a future version of Synapse.
The `token` field should include the JSON web token with the following claims:
* A claim that encodes the local part of the user ID is required. By default,

View File

@@ -159,7 +159,7 @@ Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to
oidc_providers:
- idp_id: keycloak
idp_name: "My KeyCloak server"
issuer: "https://127.0.0.1:8443/auth/realms/{realm_name}"
issuer: "https://127.0.0.1:8443/realms/{realm_name}"
client_id: "synapse"
client_secret: "copy secret generated from above"
scopes: ["openid", "profile"]

View File

@@ -35,3 +35,8 @@ See [the TCP replication documentation](tcp_replication.md).
There are read-only version of the synapse storage layer in
`synapse/replication/slave/storage` that use the response of the
replication API to invalidate their caches.
### The TCP Replication Module
Information about how the tcp replication module is structured, including how
the classes interact, can be found in
`synapse/replication/tcp/__init__.py`

View File

@@ -289,7 +289,7 @@ presence:
# federation: the server-server API (/_matrix/federation). Also implies
# 'media', 'keys', 'openid'
#
# keys: the key discovery API (/_matrix/keys).
# keys: the key discovery API (/_matrix/key).
#
# media: the media API (/_matrix/media).
#
@@ -407,6 +407,11 @@ manhole_settings:
# sign up in a short space of time never to return after their initial
# session.
#
# The option `mau_appservice_trial_days` is similar to `mau_trial_days`, but
# applies a different trial number if the user was registered by an appservice.
# A value of 0 means no trial days are applied. Appservices not listed in this
# dictionary use the value of `mau_trial_days` instead.
#
# 'mau_limit_alerting' is a means of limiting client side alerting
# should the mau limit be reached. This is useful for small instances
# where the admin has 5 mau seats (say) for 5 specific people and no
@@ -417,6 +422,8 @@ manhole_settings:
#max_mau_value: 50
#mau_trial_days: 2
#mau_limit_alerting: false
#mau_appservice_trial_days:
# "appservice-id": 1
# If enabled, the metrics for the number of monthly active users will
# be populated, however no one will be limited. If limit_usage_by_mau
@@ -709,11 +716,11 @@ retention:
#
#allow_profile_lookup_over_federation: false
# Uncomment to disable device display name lookup over federation. By default, the
# Federation API allows other homeservers to obtain device display names of any user
# on this homeserver. Defaults to 'true'.
# Uncomment to allow device display name lookup over federation. By default, the
# Federation API prevents other homeservers from obtaining the display names of
# user devices on this homeserver. Defaults to 'false'.
#
#allow_device_name_lookup_over_federation: false
#allow_device_name_lookup_over_federation: true
## Caching ##
@@ -723,6 +730,12 @@ retention:
# A cache 'factor' is a multiplier that can be applied to each of
# Synapse's caches in order to increase or decrease the maximum
# number of entries that can be stored.
#
# The configuration for cache factors (caches.global_factor and
# caches.per_cache_factors) can be reloaded while the application is running,
# by sending a SIGHUP signal to the Synapse process. Changes to other parts of
# the caching config will NOT be applied after a SIGHUP is received; a restart
# is necessary.
# The number of events to cache in memory. Not affected by
# caches.global_factor.
@@ -771,6 +784,24 @@ caches:
#
#cache_entry_ttl: 30m
# This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
# `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
# a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
# this option, and all three of the options must be specified for this feature to work.
#cache_autotuning:
# This flag sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
# They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
# the flag below, or until the `min_cache_ttl` is hit.
#max_cache_memory_usage: 1024M
# This flag sets a rough target for the desired memory usage of the caches.
#target_cache_memory_usage: 758M
# `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
# caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
# from being emptied while Synapse is evicting due to memory.
#min_cache_ttl: 5m
# Controls how long the results of a /sync request are cached for after
# a successful response is returned. A higher duration can help clients with
# intermittent connections, at the cost of higher memory usage.
@@ -1323,6 +1354,12 @@ oembed:
#
#registration_requires_token: true
# Allow users to submit a token during registration to bypass any required 3pid
# steps configured in `registrations_require_3pid`.
# Defaults to false, requiring that registration tokens (if enabled) complete a 3pid flow.
#
#enable_registration_token_3pid_bypass: false
# If set, allows registration of standard or admin accounts by anyone who
# has the shared secret, even if registration is otherwise disabled.
#
@@ -2449,6 +2486,40 @@ push:
#
#encryption_enabled_by_default_for_room_type: invite
# Override the default power levels for rooms created on this server, per
# room creation preset.
#
# The appropriate dictionary for the room preset will be applied on top
# of the existing power levels content.
#
# Useful if you know that your users need special permissions in rooms
# that they create (e.g. to send particular types of state events without
# needing an elevated power level). This takes the same shape as the
# `power_level_content_override` parameter in the /createRoom API, but
# is applied before that parameter.
#
# Valid keys are some or all of `private_chat`, `trusted_private_chat`
# and `public_chat`. Inside each of those should be any of the
# properties allowed in `power_level_content_override` in the
# /createRoom API. If any property is missing, its default value will
# continue to be used. If any property is present, it will overwrite
# the existing default completely (so if the `events` property exists,
# the default event power levels will be ignored).
#
#default_power_level_content_override:
# private_chat:
# "events":
# "com.example.myeventtype" : 0
# "m.room.avatar": 50
# "m.room.canonical_alias": 50
# "m.room.encryption": 100
# "m.room.history_visibility": 100
# "m.room.name": 50
# "m.room.power_levels": 100
# "m.room.server_acl": 100
# "m.room.tombstone": 100
# "events_default": 1
# Uncomment to allow non-server-admin users to create groups on this server
#

View File

@@ -62,13 +62,6 @@ loggers:
# information such as access tokens.
level: INFO
twisted:
# We send the twisted logging directly to the file handler,
# to work around https://github.com/matrix-org/synapse/issues/3471
# when using "buffer" logger. Use "console" to log to stderr instead.
handlers: [file]
propagate: false
root:
level: INFO

View File

@@ -89,6 +89,50 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.59.0
## Device name lookup over federation has been disabled by default
The names of user devices are no longer visible to users on other homeservers by default.
Device IDs are unaffected, as these are necessary to facilitate end-to-end encryption.
To re-enable this functionality, set the
[`allow_device_name_lookup_over_federation`](https://matrix-org.github.io/synapse/v1.59/usage/configuration/config_documentation.html#federation)
homeserver config option to `true`.
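For example, to restore the old behaviour, the homeserver configuration could include:
```yaml
# Re-enable device display name lookup over federation (off by default from 1.59.0).
allow_device_name_lookup_over_federation: true
```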
## Deprecation of the `synapse.app.appservice` and `synapse.app.user_dir` worker application types
The `synapse.app.appservice` worker application type allowed you to configure a
single worker to notify application services of new events, as long
as this functionality was disabled on the main process with `notify_appservices: False`.
Further, the `synapse.app.user_dir` worker application type allowed you to configure
a single worker to be responsible for updating the user directory, as long as this
was disabled on the main process with `update_user_directory: False`.
To unify Synapse's worker types, the `synapse.app.appservice` worker application
type and the `notify_appservices` configuration option have been deprecated.
The `synapse.app.user_dir` worker application type and `update_user_directory`
configuration option have also been deprecated.
To get the same functionality as was provided by the deprecated options, it's now recommended that the `synapse.app.generic_worker`
worker application type is used and that the `notify_appservices_from_worker` and/or
`update_user_directory_from_worker` options are set to the name of a worker.
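As a rough sketch (the worker names `appservice_worker` and `userdir_worker` are placeholders), the shared configuration might contain:
```yaml
# Shared configuration: delegate these duties to named generic workers.
notify_appservices_from_worker: appservice_worker
update_user_directory_from_worker: userdir_worker
```
with each named worker running the `synapse.app.generic_worker` application type.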
For the time being, the old options can be used alongside the new options to make
it easier to transition between the two configurations, however please note that:
- the options must not contradict each other (otherwise Synapse won't start); and
- the `notify_appservices` and `update_user_directory` options will be removed in a future release of Synapse.
Please see the [*Notifying Application Services*][v1_59_notify_ases_from] and
[*Updating the User Directory*][v1_59_update_user_dir] sections of the worker
documentation for more information.
[v1_59_notify_ases_from]: workers.md#notifying-application-services
[v1_59_update_user_dir]: workers.md#updating-the-user-directory
# Upgrading to v1.58.0
## Groups/communities feature has been disabled by default
@@ -96,6 +140,7 @@ process, for example:
The non-standard groups/communities feature in Synapse has been disabled by default
and will be removed in Synapse v1.61.0.
# Upgrading to v1.57.0
## Changes to database schema for application services

View File

@@ -28,7 +28,7 @@ See the following for how to decode the dense data available from the default lo
| NNNN | Total time waiting for response to DB queries across all parallel DB work from this request |
| OOOO | Count of DB transactions performed |
| PPPP | Response body size |
| QQQQ | Response status code (prefixed with ! if the socket was closed before the response was generated) |
| QQQQ | Response status code<br/>Suffixed with `!` if the socket was closed before the response was generated.<br/>A `499!` status code indicates that Synapse also cancelled request processing after the socket was closed.<br/> |
| RRRR | Request |
| SSSS | User-agent |
| TTTT | Events fetched from DB to service this request (note that this does not include events fetched from the cache) |

View File

@@ -1,7 +1,10 @@
## Some useful SQL queries for Synapse Admins
## Size of full matrix db
`SELECT pg_size_pretty( pg_database_size( 'matrix' ) );`
```sql
SELECT pg_size_pretty( pg_database_size( 'matrix' ) );
```
### Result example:
```
pg_size_pretty
@@ -9,39 +12,19 @@ pg_size_pretty
6420 MB
(1 row)
```
## Show top 20 largest rooms by state events count
```sql
SELECT r.name, s.room_id, s.current_state_events
FROM room_stats_current s
LEFT JOIN room_stats_state r USING (room_id)
ORDER BY current_state_events DESC
LIMIT 20;
```
and by state_group_events count:
```sql
SELECT rss.name, s.room_id, count(s.room_id) FROM state_groups_state s
LEFT JOIN room_stats_state rss USING (room_id)
GROUP BY s.room_id, rss.name
ORDER BY count(s.room_id) DESC
LIMIT 20;
```
The same query, but with the join removed for performance reasons:
```sql
SELECT s.room_id, count(s.room_id) FROM state_groups_state s
GROUP BY s.room_id
ORDER BY count(s.room_id) DESC
LIMIT 20;
```
## Show top 20 largest tables by row count
```sql
SELECT relname, n_live_tup as rows
FROM pg_stat_user_tables
SELECT relname, n_live_tup AS "rows"
FROM pg_stat_user_tables
ORDER BY n_live_tup DESC
LIMIT 20;
```
This query is quick, but may be very approximate, for exact number of rows use `SELECT COUNT(*) FROM <table_name>`.
This query is quick, but may be very approximate; for an exact number of rows use:
```sql
SELECT COUNT(*) FROM <table_name>;
```
### Result example:
```
state_groups_state - 161687170
@@ -66,46 +49,19 @@ device_lists_stream - 326903
user_directory_search - 316433
```
## Show top 20 rooms by new event count in the last day:
```sql
SELECT e.room_id, r.name, COUNT(e.event_id) cnt FROM events e
LEFT JOIN room_stats_state r USING (room_id)
WHERE e.origin_server_ts >= DATE_PART('epoch', NOW() - INTERVAL '1 day') * 1000 GROUP BY e.room_id, r.name ORDER BY cnt DESC LIMIT 20;
```
## Show top 20 users on the homeserver by sent events (messages) in the last month:
```sql
SELECT user_id, SUM(total_events)
FROM user_stats_historical
WHERE TO_TIMESTAMP(end_ts/1000) AT TIME ZONE 'UTC' > date_trunc('day', now() - interval '1 month')
GROUP BY user_id
ORDER BY SUM(total_events) DESC
LIMIT 20;
```
## Show last 100 messages from a given user, with room names:
```sql
SELECT e.room_id, r.name, e.event_id, e.type, e.content, j.json FROM events e
LEFT JOIN event_json j USING (event_id)
LEFT JOIN room_stats_state r USING (room_id)
WHERE sender = '@LOGIN:example.com'
AND e.type = 'm.room.message'
ORDER BY stream_ordering DESC
LIMIT 100;
```
## Show top 20 largest tables by storage size
```sql
SELECT nspname || '.' || relname AS "relation",
pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
pg_size_pretty(pg_total_relation_size(c.oid)) AS "total_size"
FROM pg_class c
LEFT JOIN pg_namespace n ON (n.oid = c.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
AND C.relkind <> 'i'
AND c.relkind <> 'i'
AND nspname !~ '^pg_toast'
ORDER BY pg_total_relation_size(C.oid) DESC
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 20;
```
### Result example:
```
public.state_groups_state - 27 GB
@@ -130,8 +86,93 @@ public.device_lists_remote_cache - 124 MB
public.state_group_edges - 122 MB
```
## Show top 20 largest rooms by state events count
You get the same information when you use the
[admin API](../../admin_api/rooms.md#list-room-api)
and set parameter `order_by=state_events`.
```sql
SELECT r.name, s.room_id, s.current_state_events
FROM room_stats_current s
LEFT JOIN room_stats_state r USING (room_id)
ORDER BY current_state_events DESC
LIMIT 20;
```
and by state_group_events count:
```sql
SELECT rss.name, s.room_id, COUNT(s.room_id)
FROM state_groups_state s
LEFT JOIN room_stats_state rss USING (room_id)
GROUP BY s.room_id, rss.name
ORDER BY COUNT(s.room_id) DESC
LIMIT 20;
```
The same query, but with the join removed for performance reasons:
```sql
SELECT s.room_id, COUNT(s.room_id)
FROM state_groups_state s
GROUP BY s.room_id
ORDER BY COUNT(s.room_id) DESC
LIMIT 20;
```
## Show top 20 rooms by new event count in the last day:
```sql
SELECT e.room_id, r.name, COUNT(e.event_id) cnt
FROM events e
LEFT JOIN room_stats_state r USING (room_id)
WHERE e.origin_server_ts >= DATE_PART('epoch', NOW() - INTERVAL '1 day') * 1000
GROUP BY e.room_id, r.name
ORDER BY cnt DESC
LIMIT 20;
```
## Show top 20 users on the homeserver by sent events (messages) in the last month:
Caution: this query does not use any indexes, so it can be slow and may create load on the database.
```sql
SELECT COUNT(*), sender
FROM events
WHERE (type = 'm.room.encrypted' OR type = 'm.room.message')
AND origin_server_ts >= DATE_PART('epoch', NOW() - INTERVAL '1 month') * 1000
GROUP BY sender
ORDER BY COUNT(*) DESC
LIMIT 20;
```
## Show last 100 messages from a given user, with room names:
```sql
SELECT e.room_id, r.name, e.event_id, e.type, e.content, j.json
FROM events e
LEFT JOIN event_json j USING (event_id)
LEFT JOIN room_stats_state r USING (room_id)
WHERE sender = '@LOGIN:example.com'
AND e.type = 'm.room.message'
ORDER BY stream_ordering DESC
LIMIT 100;
```
## Show rooms with names, sorted by number of events in these rooms
`echo "select event_json.room_id,room_stats_state.name from event_json,room_stats_state where room_stats_state.room_id=event_json.room_id" | psql synapse | sort | uniq -c | sort -n`
**Sort and order with bash**
```bash
echo "SELECT event_json.room_id, room_stats_state.name FROM event_json, room_stats_state \
WHERE room_stats_state.room_id = event_json.room_id" | psql -d synapse -h localhost -U synapse_user -t \
| sort | uniq -c | sort -n
```
Documentation for `psql` command line parameters: https://www.postgresql.org/docs/current/app-psql.html
**Sort and order with SQL**
```sql
SELECT COUNT(*), event_json.room_id, room_stats_state.name
FROM event_json, room_stats_state
WHERE room_stats_state.room_id = event_json.room_id
GROUP BY event_json.room_id, room_stats_state.name
ORDER BY COUNT(*) DESC
LIMIT 50;
```
### Result example:
```
9459 !FPUfgzXYWTKgIrwKxW:matrix.org | This Week in Matrix
@@ -145,12 +186,22 @@ public.state_group_edges - 122 MB
```
## Lookup room state info by list of room_id
You get the same information when you use the
[admin API](../../admin_api/rooms.md#room-details-api).
```sql
SELECT rss.room_id, rss.name, rss.canonical_alias, rss.topic, rss.encryption, rsc.joined_members, rsc.local_users_in_room, rss.join_rules
FROM room_stats_state rss
LEFT JOIN room_stats_current rsc USING (room_id)
WHERE room_id IN (
'!OGEhHVWSdvArJzumhm:matrix.org',
'!YTvKGNlinIzlkMTVRl:matrix.org'
)
```
SELECT rss.room_id, rss.name, rss.canonical_alias, rss.topic, rss.encryption,
rsc.joined_members, rsc.local_users_in_room, rss.join_rules
FROM room_stats_state rss
LEFT JOIN room_stats_current rsc USING (room_id)
WHERE room_id IN (
'!OGEhHVWSdvArJzumhm:matrix.org',
'!YTvKGNlinIzlkMTVRl:matrix.org'
);
```
## Show users and devices that have not been online for a while
```sql
SELECT user_id, device_id, user_agent, TO_TIMESTAMP(last_seen / 1000) AS "last_seen"
FROM devices
WHERE last_seen < DATE_PART('epoch', NOW() - INTERVAL '3 month') * 1000;
```

View File

@@ -467,13 +467,13 @@ Sub-options for each listener include:
Valid resource names are:
* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies 'media' and 'static'.
* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies `media` and `static`.
* `consent`: user consent forms (/_matrix/consent). See [here](../../consent_tracking.md) for more.
* `federation`: the server-server API (/_matrix/federation). Also implies `media`, `keys`, `openid`
* `keys`: the key discovery API (/_matrix/keys).
* `keys`: the key discovery API (/_matrix/key).
* `media`: the media API (/_matrix/media).
@@ -627,6 +627,20 @@ Example configuration:
mau_trial_days: 5
```
---
Config option: `mau_appservice_trial_days`
The option `mau_appservice_trial_days` is similar to `mau_trial_days`, but applies a different
trial number if the user was registered by an appservice. A value
of 0 means no trial days are applied. Appservices not listed in this dictionary
use the value of `mau_trial_days` instead.
Example configuration:
```yaml
mau_appservice_trial_days:
my_appservice_id: 3
another_appservice_id: 6
```
---
Config option: `mau_limit_alerting`
The option `mau_limit_alerting` is a means of limiting client-side alerting
@@ -1035,13 +1049,13 @@ allow_profile_lookup_over_federation: false
---
Config option: `allow_device_name_lookup_over_federation`
Set this option to false to disable device display name lookup over federation. By default, the
Federation API allows other homeservers to obtain device display names of any user
Set this option to true to allow device display name lookup over federation. By default, the
Federation API prevents other homeservers from obtaining the display names of any user devices
on this homeserver.
Example configuration:
```yaml
allow_device_name_lookup_over_federation: false
allow_device_name_lookup_over_federation: true
```
---
## Caching ##
@@ -1105,7 +1119,17 @@ Caching can be configured through the following sub-options:
with intermittent connections, at the cost of higher memory usage.
By default, this is zero, which means that sync responses are not cached
at all.
* `cache_autotuning` and its sub-options `max_cache_memory_usage`, `target_cache_memory_usage`, and
`min_cache_ttl` work in conjunction with each other to maintain a balance between cache memory
usage and cache entry availability. You must be using [jemalloc](https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu)
to utilize this option, and all three of the options must be specified for this feature to work.
* `max_cache_memory_usage` sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
the flag below, or until the `min_cache_ttl` is hit.
* `target_cache_memory_usage` sets a rough target for the desired memory usage of the caches.
* `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
from being emptied while Synapse is evicting due to memory.
Example configuration:
```yaml
@@ -1113,9 +1137,29 @@ caches:
global_factor: 1.0
per_cache_factors:
get_users_who_share_room_with_user: 2.0
expire_caches: false
sync_response_cache_duration: 2m
cache_autotuning:
max_cache_memory_usage: 1024M
target_cache_memory_usage: 758M
min_cache_ttl: 5m
```
### Reloading cache factors
The cache factors (i.e. `caches.global_factor` and `caches.per_cache_factors`) may be reloaded at any time by sending a
[`SIGHUP`](https://en.wikipedia.org/wiki/SIGHUP) signal to Synapse using e.g.
```commandline
kill -HUP [PID_OF_SYNAPSE_PROCESS]
```
If you are running multiple workers, you must individually update the worker
config file and send this signal to each worker process.
If you're using the [example systemd service](https://github.com/matrix-org/synapse/blob/develop/contrib/systemd/matrix-synapse.service)
file in Synapse's `contrib` directory, you can send a `SIGHUP` signal by using
`systemctl reload matrix-synapse`.
---
## Database ##
Config options related to database settings.
@@ -3284,6 +3328,32 @@ room_list_publication_rules:
room_id: "*"
action: allow
```
---
Config option: `default_power_level_content_override`
The `default_power_level_content_override` option controls the default power
levels for rooms.
Useful if you know that your users need special permissions in rooms
that they create (e.g. to send particular types of state events without
needing an elevated power level). This takes the same shape as the
`power_level_content_override` parameter in the /createRoom API, but
is applied before that parameter.
Note that each key provided inside a preset (for example `events` in the example
below) will overwrite all existing defaults inside that key. So in the example
below, newly-created private_chat rooms will have no rules for any event types
except `com.example.foo`.
Example configuration:
```yaml
default_power_level_content_override:
private_chat: { "events": { "com.example.foo" : 0 } }
trusted_private_chat: null
public_chat: null
```
---
## Opentracing ##
Configuration options related to Opentracing support.
@@ -3384,7 +3454,7 @@ stream_writers:
typing: worker1
```
---
Config option: `run_background_task_on`
Config option: `run_background_tasks_on`
The worker that is used to run background tasks (e.g. cleaning up expired
data). If not provided this defaults to the main process.
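An example configuration, following the pattern used elsewhere in this document (the worker name is a placeholder):
```yaml
run_background_tasks_on: background_worker
```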

View File

@@ -426,7 +426,7 @@ the shared configuration would include:
run_background_tasks_on: background_worker
```
You might also wish to investigate the `update_user_directory` and
You might also wish to investigate the `update_user_directory_from_worker` and
`media_instance_running_background_jobs` settings.
An example for a dedicated background worker instance:
@@ -435,6 +435,40 @@ An example for a dedicated background worker instance:
{{#include systemd-with-workers/workers/background_worker.yaml}}
```
#### Updating the User Directory
You can designate one generic worker to update the user directory.
Specify its name in the shared configuration as follows:
```yaml
update_user_directory_from_worker: worker_name
```
This work cannot be load-balanced; please ensure the main process is restarted
after setting this option in the shared configuration!
This style of configuration supersedes the legacy `synapse.app.user_dir`
worker application type.
#### Notifying Application Services
You can designate one generic worker to send output traffic to Application Services.
Specify its name in the shared configuration as follows:
```yaml
notify_appservices_from_worker: worker_name
```
This work cannot be load-balanced; please ensure the main process is restarted
after setting this option in the shared configuration!
This style of configuration supersedes the legacy `synapse.app.appservice`
worker application type.
### `synapse.app.pusher`
Handles sending push notifications to sygnal and email. Doesn't handle any
@@ -453,6 +487,9 @@ pusher_instances:
### `synapse.app.appservice`
**Deprecated as of Synapse v1.59.** [Use `synapse.app.generic_worker` with the
`notify_appservices_from_worker` option instead.](#notifying-application-services)
Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set `notify_appservices: False` in the
shared configuration file to stop the main synapse sending appservice notifications.
@@ -520,6 +557,9 @@ Note that if a reverse proxy is used , then `/_matrix/media/` must be routed for
### `synapse.app.user_dir`
**Deprecated as of Synapse v1.59.** [Use `synapse.app.generic_worker` with the
`update_user_directory_from_worker` option instead.](#updating-the-user-directory)
Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions:

110
mypy.ini
View File

@@ -7,6 +7,7 @@ show_error_codes = True
show_traceback = True
mypy_path = stubs
warn_unreachable = True
warn_unused_ignores = True
local_partial_types = True
no_implicit_optional = True
@@ -23,10 +24,6 @@ files =
# https://docs.python.org/3/library/re.html#re.X
exclude = (?x)
^(
|scripts-dev/build_debian_packages.py
|scripts-dev/federation_client.py
|scripts-dev/release.py
|synapse/storage/databases/__init__.py
|synapse/storage/databases/main/cache.py
|synapse/storage/databases/main/devices.py
@@ -122,18 +119,47 @@ disallow_untyped_defs = True
[mypy-synapse.federation.transport.client]
disallow_untyped_defs = False
[mypy-synapse.groups.*]
disallow_untyped_defs = True
[mypy-synapse.handlers.*]
disallow_untyped_defs = True
[mypy-synapse.http.federation.*]
disallow_untyped_defs = True
[mypy-synapse.http.connectproxyclient]
disallow_untyped_defs = True
[mypy-synapse.http.proxyagent]
disallow_untyped_defs = True
[mypy-synapse.http.request_metrics]
disallow_untyped_defs = True
[mypy-synapse.http.server]
disallow_untyped_defs = True
[mypy-synapse.logging._remote]
disallow_untyped_defs = True
[mypy-synapse.logging.context]
disallow_untyped_defs = True
[mypy-synapse.logging.formatter]
disallow_untyped_defs = True
[mypy-synapse.logging.handlers]
disallow_untyped_defs = True
[mypy-synapse.metrics.*]
disallow_untyped_defs = True
[mypy-synapse.metrics._reactor_metrics]
# This module imports select.epoll. That exists on Linux, but doesn't on macOS.
# See https://github.com/matrix-org/synapse/pull/11771.
warn_unused_ignores = False
[mypy-synapse.module_api.*]
disallow_untyped_defs = True
@@ -155,6 +181,9 @@ disallow_untyped_defs = True
[mypy-synapse.state.*]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.background_updates]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.account_data]
disallow_untyped_defs = True
@@ -194,18 +223,39 @@ disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.state_deltas]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.stream]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.transactions]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.user_erasure_store]
disallow_untyped_defs = True
[mypy-synapse.storage.engines.*]
disallow_untyped_defs = True
[mypy-synapse.storage.prepare_database]
disallow_untyped_defs = True
[mypy-synapse.storage.persist_events]
disallow_untyped_defs = True
[mypy-synapse.storage.state]
disallow_untyped_defs = True
[mypy-synapse.storage.types]
disallow_untyped_defs = True
[mypy-synapse.storage.util.*]
disallow_untyped_defs = True
[mypy-synapse.streams.*]
disallow_untyped_defs = True
[mypy-synapse.types]
disallow_untyped_defs = True
[mypy-synapse.util.*]
disallow_untyped_defs = True
@@ -239,63 +289,26 @@ disallow_untyped_defs = True
[mypy-authlib.*]
ignore_missing_imports = True
[mypy-bcrypt]
ignore_missing_imports = True
[mypy-canonicaljson]
ignore_missing_imports = True
[mypy-constantly]
ignore_missing_imports = True
[mypy-daemonize]
ignore_missing_imports = True
[mypy-h11]
ignore_missing_imports = True
[mypy-hiredis]
ignore_missing_imports = True
[mypy-hyperlink]
ignore_missing_imports = True
[mypy-ijson.*]
ignore_missing_imports = True
[mypy-importlib_metadata.*]
ignore_missing_imports = True
[mypy-jaeger_client.*]
ignore_missing_imports = True
[mypy-josepy.*]
ignore_missing_imports = True
[mypy-jwt.*]
ignore_missing_imports = True
[mypy-lxml]
ignore_missing_imports = True
[mypy-msgpack]
ignore_missing_imports = True
[mypy-nacl.*]
ignore_missing_imports = True
# Note: WIP stubs available at
# https://github.com/microsoft/python-type-stubs/tree/64934207f523ad6b611e6cfe039d85d7175d7d0d/netaddr
[mypy-netaddr]
ignore_missing_imports = True
[mypy-parameterized.*]
ignore_missing_imports = True
[mypy-phonenumbers.*]
ignore_missing_imports = True
[mypy-prometheus_client.*]
ignore_missing_imports = True
[mypy-pymacaroons.*]
ignore_missing_imports = True
@@ -308,23 +321,14 @@ ignore_missing_imports = True
[mypy-saml2.*]
ignore_missing_imports = True
[mypy-sentry_sdk]
ignore_missing_imports = True
[mypy-service_identity.*]
ignore_missing_imports = True
[mypy-signedjson.*]
[mypy-srvlookup.*]
ignore_missing_imports = True
[mypy-treq.*]
ignore_missing_imports = True
[mypy-twisted.*]
ignore_missing_imports = True
[mypy-zope]
ignore_missing_imports = True
[mypy-incremental.*]
ignore_missing_imports = True

97
poetry.lock generated
View File

@@ -309,14 +309,15 @@ smmap = ">=3.0.1,<6"
[[package]]
name = "gitpython"
version = "3.1.14"
description = "Python Git Library"
version = "3.1.27"
description = "GitPython is a python library used to interact with Git repositories"
category = "dev"
optional = false
python-versions = ">=3.4"
python-versions = ">=3.7"
[package.dependencies]
gitdb = ">=4.0.1,<5"
typing-extensions = {version = ">=3.7.4.3", markers = "python_version < \"3.8\""}
[[package]]
name = "hiredis"
@@ -571,7 +572,7 @@ python-versions = "*"
[[package]]
name = "mypy"
version = "0.931"
version = "0.950"
description = "Optional static typing for Python"
category = "dev"
optional = false
@@ -579,13 +580,14 @@ python-versions = ">=3.6"
[package.dependencies]
mypy-extensions = ">=0.4.3"
tomli = ">=1.1.0"
tomli = {version = ">=1.1.0", markers = "python_version < \"3.11\""}
typed-ast = {version = ">=1.4.0,<2", markers = "python_version < \"3.8\""}
typing-extensions = ">=3.10"
[package.extras]
dmypy = ["psutil (>=4.0)"]
python2 = ["typed-ast (>=1.4.0,<2)"]
reports = ["lxml"]
[[package]]
name = "mypy-extensions"
@@ -597,14 +599,14 @@ python-versions = "*"
[[package]]
name = "mypy-zope"
version = "0.3.5"
version = "0.3.7"
description = "Plugin for mypy to support zope interfaces"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
mypy = "0.931"
mypy = "0.950"
"zope.interface" = "*"
"zope.schema" = "*"
@@ -1019,7 +1021,7 @@ jeepney = ">=0.6"
[[package]]
name = "sentry-sdk"
version = "1.5.7"
version = "1.5.11"
description = "Python client for Sentry (https://sentry.io)"
category = "main"
optional = true
@@ -1315,6 +1317,14 @@ category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "types-commonmark"
version = "0.9.2"
description = "Typing stubs for commonmark"
category = "dev"
optional = false
python-versions = "*"
[[package]]
name = "types-cryptography"
version = "3.3.15"
@@ -1361,7 +1371,7 @@ python-versions = "*"
[[package]]
name = "types-pillow"
version = "9.0.6"
version = "9.0.15"
description = "Typing stubs for Pillow"
category = "dev"
optional = false
@@ -1536,7 +1546,7 @@ docs = ["sphinx", "repoze.sphinx.autointerface"]
test = ["zope.i18nmessageid", "zope.testing", "zope.testrunner"]
[extras]
all = ["matrix-synapse-ldap3", "psycopg2", "psycopg2cffi", "psycopg2cffi-compat", "pysaml2", "authlib", "lxml", "sentry-sdk", "jaeger-client", "opentracing", "pyjwt", "txredisapi", "hiredis"]
all = ["matrix-synapse-ldap3", "psycopg2", "psycopg2cffi", "psycopg2cffi-compat", "pysaml2", "authlib", "lxml", "sentry-sdk", "jaeger-client", "opentracing", "pyjwt", "txredisapi", "hiredis", "Pympler"]
cache_memory = ["Pympler"]
jwt = ["pyjwt"]
matrix-synapse-ldap3 = ["matrix-synapse-ldap3"]
@@ -1552,8 +1562,8 @@ url_preview = ["lxml"]
[metadata]
lock-version = "1.1"
python-versions = "^3.7"
content-hash = "f482a4f594a165dfe01ce253a22510d5faf38647ab0dcebc35789350cafd9bf0"
python-versions = "^3.7.1"
content-hash = "d39d5ac5d51c014581186b7691999b861058b569084c525523baf70b77f292b1"
[metadata.files]
attrs = [
@@ -1766,8 +1776,8 @@ gitdb = [
{file = "gitdb-4.0.9.tar.gz", hash = "sha256:bac2fd45c0a1c9cf619e63a90d62bdc63892ef92387424b855792a6cabe789aa"},
]
gitpython = [
{file = "GitPython-3.1.14-py3-none-any.whl", hash = "sha256:3283ae2fba31c913d857e12e5ba5f9a7772bbc064ae2bb09efafa71b0dd4939b"},
{file = "GitPython-3.1.14.tar.gz", hash = "sha256:be27633e7509e58391f10207cd32b2a6cf5b908f92d9cd30da2e514e1137af61"},
{file = "GitPython-3.1.27-py3-none-any.whl", hash = "sha256:5b68b000463593e05ff2b261acff0ff0972df8ab1b70d3cdbd41b546c8b8fc3d"},
{file = "GitPython-3.1.27.tar.gz", hash = "sha256:1c885ce809e8ba2d88a29befeb385fcea06338d3640712b59ca623c220bb5704"},
]
hiredis = [
{file = "hiredis-2.0.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:b4c8b0bc5841e578d5fb32a16e0c305359b987b850a06964bd5a62739d688048"},
@@ -2080,34 +2090,37 @@ msgpack = [
{file = "msgpack-1.0.3.tar.gz", hash = "sha256:51fdc7fb93615286428ee7758cecc2f374d5ff363bdd884c7ea622a7a327a81e"},
]
mypy = [
{file = "mypy-0.931-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:3c5b42d0815e15518b1f0990cff7a705805961613e701db60387e6fb663fe78a"},
{file = "mypy-0.931-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c89702cac5b302f0c5d33b172d2b55b5df2bede3344a2fbed99ff96bddb2cf00"},
{file = "mypy-0.931-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:300717a07ad09525401a508ef5d105e6b56646f7942eb92715a1c8d610149714"},
{file = "mypy-0.931-cp310-cp310-win_amd64.whl", hash = "sha256:7b3f6f557ba4afc7f2ce6d3215d5db279bcf120b3cfd0add20a5d4f4abdae5bc"},
{file = "mypy-0.931-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:1bf752559797c897cdd2c65f7b60c2b6969ffe458417b8d947b8340cc9cec08d"},
{file = "mypy-0.931-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:4365c60266b95a3f216a3047f1d8e3f895da6c7402e9e1ddfab96393122cc58d"},
{file = "mypy-0.931-cp36-cp36m-win_amd64.whl", hash = "sha256:1b65714dc296a7991000b6ee59a35b3f550e0073411ac9d3202f6516621ba66c"},
{file = "mypy-0.931-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:e839191b8da5b4e5d805f940537efcaa13ea5dd98418f06dc585d2891d228cf0"},
{file = "mypy-0.931-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:50c7346a46dc76a4ed88f3277d4959de8a2bd0a0fa47fa87a4cde36fe247ac05"},
{file = "mypy-0.931-cp37-cp37m-win_amd64.whl", hash = "sha256:d8f1ff62f7a879c9fe5917b3f9eb93a79b78aad47b533911b853a757223f72e7"},
{file = "mypy-0.931-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:f9fe20d0872b26c4bba1c1be02c5340de1019530302cf2dcc85c7f9fc3252ae0"},
{file = "mypy-0.931-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:1b06268df7eb53a8feea99cbfff77a6e2b205e70bf31743e786678ef87ee8069"},
{file = "mypy-0.931-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:8c11003aaeaf7cc2d0f1bc101c1cc9454ec4cc9cb825aef3cafff8a5fdf4c799"},
{file = "mypy-0.931-cp38-cp38-win_amd64.whl", hash = "sha256:d9d2b84b2007cea426e327d2483238f040c49405a6bf4074f605f0156c91a47a"},
{file = "mypy-0.931-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:ff3bf387c14c805ab1388185dd22d6b210824e164d4bb324b195ff34e322d166"},
{file = "mypy-0.931-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:5b56154f8c09427bae082b32275a21f500b24d93c88d69a5e82f3978018a0266"},
{file = "mypy-0.931-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:8ca7f8c4b1584d63c9a0f827c37ba7a47226c19a23a753d52e5b5eddb201afcd"},
{file = "mypy-0.931-cp39-cp39-win_amd64.whl", hash = "sha256:74f7eccbfd436abe9c352ad9fb65872cc0f1f0a868e9d9c44db0893440f0c697"},
{file = "mypy-0.931-py3-none-any.whl", hash = "sha256:1171f2e0859cfff2d366da2c7092b06130f232c636a3f7301e3feb8b41f6377d"},
{file = "mypy-0.931.tar.gz", hash = "sha256:0038b21890867793581e4cb0d810829f5fd4441aa75796b53033af3aa30430ce"},
{file = "mypy-0.950-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cf9c261958a769a3bd38c3e133801ebcd284ffb734ea12d01457cb09eacf7d7b"},
{file = "mypy-0.950-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b5b5bd0ffb11b4aba2bb6d31b8643902c48f990cc92fda4e21afac658044f0c0"},
{file = "mypy-0.950-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5e7647df0f8fc947388e6251d728189cfadb3b1e558407f93254e35abc026e22"},
{file = "mypy-0.950-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:eaff8156016487c1af5ffa5304c3e3fd183edcb412f3e9c72db349faf3f6e0eb"},
{file = "mypy-0.950-cp310-cp310-win_amd64.whl", hash = "sha256:563514c7dc504698fb66bb1cf897657a173a496406f1866afae73ab5b3cdb334"},
{file = "mypy-0.950-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:dd4d670eee9610bf61c25c940e9ade2d0ed05eb44227275cce88701fee014b1f"},
{file = "mypy-0.950-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ca75ecf2783395ca3016a5e455cb322ba26b6d33b4b413fcdedfc632e67941dc"},
{file = "mypy-0.950-cp36-cp36m-win_amd64.whl", hash = "sha256:6003de687c13196e8a1243a5e4bcce617d79b88f83ee6625437e335d89dfebe2"},
{file = "mypy-0.950-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4c653e4846f287051599ed8f4b3c044b80e540e88feec76b11044ddc5612ffed"},
{file = "mypy-0.950-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:e19736af56947addedce4674c0971e5dceef1b5ec7d667fe86bcd2b07f8f9075"},
{file = "mypy-0.950-cp37-cp37m-win_amd64.whl", hash = "sha256:ef7beb2a3582eb7a9f37beaf38a28acfd801988cde688760aea9e6cc4832b10b"},
{file = "mypy-0.950-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:0112752a6ff07230f9ec2f71b0d3d4e088a910fdce454fdb6553e83ed0eced7d"},
{file = "mypy-0.950-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ee0a36edd332ed2c5208565ae6e3a7afc0eabb53f5327e281f2ef03a6bc7687a"},
{file = "mypy-0.950-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:77423570c04aca807508a492037abbd72b12a1fb25a385847d191cd50b2c9605"},
{file = "mypy-0.950-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5ce6a09042b6da16d773d2110e44f169683d8cc8687e79ec6d1181a72cb028d2"},
{file = "mypy-0.950-cp38-cp38-win_amd64.whl", hash = "sha256:5b231afd6a6e951381b9ef09a1223b1feabe13625388db48a8690f8daa9b71ff"},
{file = "mypy-0.950-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:0384d9f3af49837baa92f559d3fa673e6d2652a16550a9ee07fc08c736f5e6f8"},
{file = "mypy-0.950-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1fdeb0a0f64f2a874a4c1f5271f06e40e1e9779bf55f9567f149466fc7a55038"},
{file = "mypy-0.950-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:61504b9a5ae166ba5ecfed9e93357fd51aa693d3d434b582a925338a2ff57fd2"},
{file = "mypy-0.950-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a952b8bc0ae278fc6316e6384f67bb9a396eb30aced6ad034d3a76120ebcc519"},
{file = "mypy-0.950-cp39-cp39-win_amd64.whl", hash = "sha256:eaea21d150fb26d7b4856766e7addcf929119dd19fc832b22e71d942835201ef"},
{file = "mypy-0.950-py3-none-any.whl", hash = "sha256:a4d9898f46446bfb6405383b57b96737dcfd0a7f25b748e78ef3e8c576bba3cb"},
{file = "mypy-0.950.tar.gz", hash = "sha256:1b333cfbca1762ff15808a0ef4f71b5d3eed8528b23ea1c3fb50543c867d68de"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
mypy-zope = [
{file = "mypy-zope-0.3.5.tar.gz", hash = "sha256:489e7da1c2af887f2cfe3496995fc247f296512b495b57817edddda9d22308f3"},
{file = "mypy_zope-0.3.5-py3-none-any.whl", hash = "sha256:3bd0cc9a3e5933b02931af4b214ba32a4f4ff98adb30c979ce733857db91a18b"},
{file = "mypy-zope-0.3.7.tar.gz", hash = "sha256:9da171e78e8ef7ac8922c86af1a62f1b7f3244f121020bd94a2246bc3f33c605"},
{file = "mypy_zope-0.3.7-py3-none-any.whl", hash = "sha256:9c7637d066e4d1bafa0651abc091c752009769098043b236446e6725be2bc9c2"},
]
netaddr = [
{file = "netaddr-0.8.0-py2.py3-none-any.whl", hash = "sha256:9666d0232c32d2656e5e5f8d735f58fd6c7457ce52fc21c98d45f2af78f990ac"},
@@ -2377,8 +2390,8 @@ secretstorage = [
{file = "SecretStorage-3.3.1.tar.gz", hash = "sha256:fd666c51a6bf200643495a04abb261f83229dcb6fd8472ec393df7ffc8b6f195"},
]
sentry-sdk = [
{file = "sentry-sdk-1.5.7.tar.gz", hash = "sha256:aa52da941c56b5a76fd838f8e9e92a850bf893a9eb1e33ffce6c21431d07ee30"},
{file = "sentry_sdk-1.5.7-py2.py3-none-any.whl", hash = "sha256:411a8495bd18cf13038e5749e4710beb4efa53da6351f67b4c2f307c2d9b6d49"},
{file = "sentry-sdk-1.5.11.tar.gz", hash = "sha256:6c01d9d0b65935fd275adc120194737d1df317dce811e642cbf0394d0d37a007"},
{file = "sentry_sdk-1.5.11-py2.py3-none-any.whl", hash = "sha256:c17179183cac614e900cbd048dab03f49a48e2820182ec686c25e7ce46f8548f"},
]
service-identity = [
{file = "service-identity-21.1.0.tar.gz", hash = "sha256:6e6c6086ca271dc11b033d17c3a8bea9f24ebff920c587da090afc9519419d34"},
@@ -2588,6 +2601,10 @@ types-bleach = [
{file = "types-bleach-4.1.4.tar.gz", hash = "sha256:2d30c2c4fb6854088ac636471352c9a51bf6c089289800d2a8060820a01cd43a"},
{file = "types_bleach-4.1.4-py3-none-any.whl", hash = "sha256:edffe173ed6d7b6f3543036a96204a9319c3bf6c3645917b14274e43f000cc9b"},
]
types-commonmark = [
{file = "types-commonmark-0.9.2.tar.gz", hash = "sha256:b894b67750c52fd5abc9a40a9ceb9da4652a391d75c1b480bba9cef90f19fc86"},
{file = "types_commonmark-0.9.2-py3-none-any.whl", hash = "sha256:56f20199a1f9a2924443211a0ef97f8b15a8a956a7f4e9186be6950bf38d6d02"},
]
types-cryptography = [
{file = "types-cryptography-3.3.15.tar.gz", hash = "sha256:a7983a75a7b88a18f88832008f0ef140b8d1097888ec1a0824ec8fb7e105273b"},
{file = "types_cryptography-3.3.15-py3-none-any.whl", hash = "sha256:d9b0dd5465d7898d400850e7f35e5518aa93a7e23d3e11757cd81b4777089046"},
@@ -2609,8 +2626,8 @@ types-opentracing = [
{file = "types_opentracing-2.4.7-py3-none-any.whl", hash = "sha256:861fb8103b07cf717f501dd400cb274ca9992552314d4d6c7a824b11a215e512"},
]
types-pillow = [
{file = "types-Pillow-9.0.6.tar.gz", hash = "sha256:79b350b1188c080c27558429f1e119e69c9f020b877a82df761d9283070e0185"},
{file = "types_Pillow-9.0.6-py3-none-any.whl", hash = "sha256:bd1e0a844fc718398aa265bf50fcad550fc520cc54f80e5ffeb7b3226b3cc507"},
{file = "types-Pillow-9.0.15.tar.gz", hash = "sha256:d2e385fe5c192e75970f18accce69f5c2a9f186f3feb578a9b91cd6fdf64211d"},
{file = "types_Pillow-9.0.15-py3-none-any.whl", hash = "sha256:c9646595dfafdf8b63d4b1443292ead17ee0fc7b18a143e497b68e0ea2dc1eb6"},
]
types-psycopg2 = [
{file = "types-psycopg2-2.9.9.tar.gz", hash = "sha256:4f9d4d52eeb343dc00fd5ed4f1513a8a5c18efba0a072eb82706d15cf4f20a2e"},

View File

@@ -54,7 +54,7 @@ skip_gitignore = true
[tool.poetry]
name = "matrix-synapse"
version = "1.58.0"
version = "1.59.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@@ -100,7 +100,7 @@ synapse_review_recent_signups = "synapse._scripts.review_recent_signups:main"
update_synapse_database = "synapse._scripts.update_synapse_database:main"
[tool.poetry.dependencies]
python = "^3.7"
python = "^3.7.1"
# Mandatory Dependencies
# ----------------------
@@ -142,8 +142,10 @@ netaddr = ">=0.7.18"
# add a lower bound to the Jinja2 dependency.
Jinja2 = ">=3.0"
bleach = ">=1.4.3"
# We use `ParamSpec`, which was added in `typing-extensions` 3.10.0.0.
typing-extensions = ">=3.10.0"
# We use `ParamSpec` and `Concatenate`, which were added in `typing-extensions` 3.10.0.0.
# Additionally we need https://github.com/python/typing/pull/817 to allow types to be
# generic over ParamSpecs.
typing-extensions = ">=3.10.0.1"
# We enforce that we have a `cryptography` version that bundles an `openssl`
# with the latest security patches.
cryptography = ">=3.4.7"
@@ -231,10 +233,11 @@ all = [
"jaeger-client", "opentracing",
# jwt
"pyjwt",
#redis
"txredisapi", "hiredis"
# redis
"txredisapi", "hiredis",
# cache_memory
"pympler",
# omitted:
# - cache_memory: this is an experimental option
# - test: it's useful to have this separate from dev deps in the olddeps job
# - systemd: this is a system-based requirement
]
@@ -248,9 +251,10 @@ flake8-bugbear = "==21.3.2"
flake8 = "*"
# Typechecking
mypy = "==0.931"
mypy-zope = "==0.3.5"
mypy = "*"
mypy-zope = "*"
types-bleach = ">=4.1.0"
types-commonmark = ">=0.9.2"
types-jsonschema = ">=3.2.0"
types-opentracing = ">=2.4.2"
types-Pillow = ">=8.3.4"
@@ -270,7 +274,8 @@ idna = ">=2.5"
# The following are used by the release script
click = "==8.1.0"
GitPython = "==3.1.14"
# GitPython was == 3.1.14; bumped to 3.1.20, the first release with type hints.
GitPython = ">=3.1.20"
commonmark = "==0.9.1"
pygithub = "==1.55"
# The following are executed as commands by the release script.

View File

@@ -17,7 +17,8 @@ import subprocess
import sys
import threading
from concurrent.futures import ThreadPoolExecutor
from typing import Optional, Sequence
from types import FrameType
from typing import Collection, Optional, Sequence, Set
DISTS = (
"debian:buster", # oldstable: EOL 2022-08
@@ -41,15 +42,17 @@ projdir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
class Builder(object):
def __init__(
self, redirect_stdout=False, docker_build_args: Optional[Sequence[str]] = None
self,
redirect_stdout: bool = False,
docker_build_args: Optional[Sequence[str]] = None,
):
self.redirect_stdout = redirect_stdout
self._docker_build_args = tuple(docker_build_args or ())
self.active_containers = set()
self.active_containers: Set[str] = set()
self._lock = threading.Lock()
self._failed = False
def run_build(self, dist, skip_tests=False):
def run_build(self, dist: str, skip_tests: bool = False) -> None:
"""Build deb for a single distribution"""
if self._failed:
@@ -63,7 +66,7 @@ class Builder(object):
self._failed = True
raise
def _inner_build(self, dist, skip_tests=False):
def _inner_build(self, dist: str, skip_tests: bool = False) -> None:
tag = dist.split(":", 1)[1]
# Make the dir where the debs will live.
@@ -138,7 +141,7 @@ class Builder(object):
stdout.close()
print("Completed build of %s" % (dist,))
def kill_containers(self):
def kill_containers(self) -> None:
with self._lock:
active = list(self.active_containers)
@@ -156,8 +159,10 @@ class Builder(object):
self.active_containers.remove(c)
def run_builds(builder, dists, jobs=1, skip_tests=False):
def sig(signum, _frame):
def run_builds(
builder: Builder, dists: Collection[str], jobs: int = 1, skip_tests: bool = False
) -> None:
def sig(signum: int, _frame: Optional[FrameType]) -> None:
print("Caught SIGINT")
builder.kill_containers()

View File

@@ -43,6 +43,8 @@ fi
# Build the base Synapse image from the local checkout
docker build -t matrixdotorg/synapse -f "docker/Dockerfile" .
extra_test_args=()
# If we're using workers, modify the docker files slightly.
if [[ -n "$WORKERS" ]]; then
# Build the workers docker image (from the base Synapse image).
@@ -52,7 +54,14 @@ if [[ -n "$WORKERS" ]]; then
COMPLEMENT_DOCKERFILE=SynapseWorkers.Dockerfile
# And provide some more configuration to complement.
export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=60
# It can take quite a while to spin up a worker-mode Synapse for the first
# time (the main problem is that we start 14 python processes for each test,
# and complement likes to do two of them in parallel).
export COMPLEMENT_SPAWN_HS_TIMEOUT_SECS=120
# ... and it takes longer than 10m to run the whole suite.
extra_test_args+=("-timeout=60m")
else
export COMPLEMENT_BASE_IMAGE=complement-synapse
COMPLEMENT_DOCKERFILE=Dockerfile
@@ -64,4 +73,4 @@ docker build -t $COMPLEMENT_BASE_IMAGE -f "docker/complement/$COMPLEMENT_DOCKERF
# Run the tests!
echo "Images built; running complement"
cd "$COMPLEMENT_DIR"
go test -v -tags synapse_blacklist,msc2716,msc3030,faster_joins -count=1 "$@" ./tests/...
go test -v -tags synapse_blacklist,msc2716,msc3030,faster_joins -count=1 "${extra_test_args[@]}" "$@" ./tests/...

View File

@@ -38,7 +38,7 @@ import argparse
import base64
import json
import sys
from typing import Any, Optional
from typing import Any, Dict, Optional, Tuple
from urllib import parse as urlparse
import requests
@@ -47,13 +47,14 @@ import signedjson.types
import srvlookup
import yaml
from requests.adapters import HTTPAdapter
from urllib3 import HTTPConnectionPool
# uncomment the following to enable debug logging of http requests
# from http.client import HTTPConnection
# HTTPConnection.debuglevel = 1
def encode_base64(input_bytes):
def encode_base64(input_bytes: bytes) -> str:
"""Encode bytes as a base64 string without any padding."""
input_len = len(input_bytes)
@@ -63,7 +64,7 @@ def encode_base64(input_bytes):
return output_string
def encode_canonical_json(value):
def encode_canonical_json(value: object) -> bytes:
return json.dumps(
value,
# Encode code-points outside of ASCII as UTF-8 rather than \u escapes
@@ -130,7 +131,7 @@ def request(
sig,
destination,
)
authorization_headers.append(header.encode("ascii"))
authorization_headers.append(header)
print("Authorization: %s" % header, file=sys.stderr)
dest = "matrix://%s%s" % (destination, path)
@@ -139,7 +140,10 @@ def request(
s = requests.Session()
s.mount("matrix://", MatrixConnectionAdapter())
headers = {"Host": destination, "Authorization": authorization_headers[0]}
headers: Dict[str, str] = {
"Host": destination,
"Authorization": authorization_headers[0],
}
if method == "POST":
headers["Content-Type"] = "application/json"
@@ -154,7 +158,7 @@ def request(
)
def main():
def main() -> None:
parser = argparse.ArgumentParser(
description="Signs and sends a federation request to a matrix homeserver"
)
@@ -212,6 +216,7 @@ def main():
if not args.server_name or not args.signing_key:
read_args_from_config(args)
assert isinstance(args.signing_key, str)
algorithm, version, key_base64 = args.signing_key.split()
key = signedjson.key.decode_signing_key_base64(algorithm, version, key_base64)
@@ -233,7 +238,7 @@ def main():
print("")
def read_args_from_config(args):
def read_args_from_config(args: argparse.Namespace) -> None:
with open(args.config, "r") as fh:
config = yaml.safe_load(fh)
@@ -250,7 +255,7 @@ def read_args_from_config(args):
class MatrixConnectionAdapter(HTTPAdapter):
@staticmethod
def lookup(s, skip_well_known=False):
def lookup(s: str, skip_well_known: bool = False) -> Tuple[str, int]:
if s[-1] == "]":
# ipv6 literal (with no port)
return s, 8448
@@ -276,7 +281,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
return s, 8448
@staticmethod
def get_well_known(server_name):
def get_well_known(server_name: str) -> Optional[str]:
uri = "https://%s/.well-known/matrix/server" % (server_name,)
print("fetching %s" % (uri,), file=sys.stderr)
@@ -299,7 +304,9 @@ class MatrixConnectionAdapter(HTTPAdapter):
print("Invalid response from %s: %s" % (uri, e), file=sys.stderr)
return None
def get_connection(self, url, proxies=None):
def get_connection(
self, url: str, proxies: Optional[Dict[str, str]] = None
) -> HTTPConnectionPool:
parsed = urlparse.urlparse(url)
(host, port) = self.lookup(parsed.netloc)

View File

@@ -16,7 +16,7 @@
can crop up, e.g. the cache descriptors.
"""
from typing import Callable, Optional
from typing import Callable, Optional, Type
from mypy.nodes import ARG_NAMED_OPT
from mypy.plugin import MethodSigContext, Plugin
@@ -94,7 +94,7 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
return signature
def plugin(version: str):
def plugin(version: str) -> Type[SynapsePlugin]:
# This is the entry point of the plugin, and lets us deal with the fact
# that the mypy plugin interface is *not* stable by looking at the version
# string.

View File

@@ -25,7 +25,7 @@ import sys
import urllib.request
from os import path
from tempfile import TemporaryDirectory
from typing import List, Optional
from typing import Any, List, Optional, cast
import attr
import click
@@ -36,7 +36,9 @@ from github import Github
from packaging import version
def run_until_successful(command, *args, **kwargs):
def run_until_successful(
command: str, *args: Any, **kwargs: Any
) -> subprocess.CompletedProcess:
while True:
completed_process = subprocess.run(command, *args, **kwargs)
exit_code = completed_process.returncode
@@ -50,7 +52,7 @@ def run_until_successful(command, *args, **kwargs):
@click.group()
def cli():
def cli() -> None:
"""An interactive script to walk through the parts of creating a release.
Requires the dev dependencies be installed, which can be done via:
@@ -81,19 +83,13 @@ def cli():
@cli.command()
def prepare():
def prepare() -> None:
"""Do the initial stages of creating a release, including creating release
branch, updating changelog and pushing to GitHub.
"""
# Make sure we're in a git repo.
try:
repo = git.Repo()
except git.InvalidGitRepositoryError:
raise click.ClickException("Not in Synapse repo.")
if repo.is_dirty():
raise click.ClickException("Uncommitted changes exist.")
repo = get_repo_and_check_clean_checkout()
click.secho("Updating git repo...")
repo.remote().fetch()
@@ -161,22 +157,21 @@ def prepare():
click.get_current_context().abort()
# Switch to the release branch.
parsed_new_version: version.Version = version.parse(new_version)
# Cast safety: parse() won't return a version.LegacyVersion from our
# version string format.
parsed_new_version = cast(version.Version, version.parse(new_version))
# We assume for debian changelogs that we only do RCs or full releases.
assert not parsed_new_version.is_devrelease
assert not parsed_new_version.is_postrelease
release_branch_name = (
f"release-v{parsed_new_version.major}.{parsed_new_version.minor}"
)
release_branch_name = get_release_branch_name(parsed_new_version)
release_branch = find_ref(repo, release_branch_name)
if release_branch:
if release_branch.is_remote():
# If the release branch only exists on the remote we check it out
# locally.
repo.git.checkout(release_branch_name)
release_branch = repo.active_branch
else:
# If a branch doesn't exist we create one. We ask which branch it
# should be based off, defaulting to sensible values depending on the
@@ -198,13 +193,15 @@ def prepare():
click.get_current_context().abort()
# Check out the base branch and ensure it's up to date
repo.head.reference = base_branch
repo.head.set_reference(base_branch, "check out the base branch")
repo.head.reset(index=True, working_tree=True)
if not base_branch.is_remote():
update_branch(repo)
# Create the new release branch
release_branch = repo.create_head(release_branch_name, commit=base_branch)
# Type ignore will no longer be needed after GitPython 3.1.28.
# See https://github.com/gitpython-developers/GitPython/pull/1419
repo.create_head(release_branch_name, commit=base_branch) # type: ignore[arg-type]
# Switch to the release branch and ensure it's up to date.
repo.git.checkout(release_branch_name)
@@ -265,17 +262,11 @@ def prepare():
@cli.command()
@click.option("--gh-token", envvar=["GH_TOKEN", "GITHUB_TOKEN"])
def tag(gh_token: Optional[str]):
def tag(gh_token: Optional[str]) -> None:
"""Tags the release and generates a draft GitHub release"""
# Make sure we're in a git repo.
try:
repo = git.Repo()
except git.InvalidGitRepositoryError:
raise click.ClickException("Not in Synapse repo.")
if repo.is_dirty():
raise click.ClickException("Uncommitted changes exist.")
repo = get_repo_and_check_clean_checkout()
click.secho("Updating git repo...")
repo.remote().fetch()
@@ -288,12 +279,26 @@ def tag(gh_token: Optional[str]):
if tag_name in repo.tags:
raise click.ClickException(f"Tag {tag_name} already exists!\n")
# Check we're on the right release branch
release_branch = get_release_branch_name(current_version)
if repo.active_branch.name != release_branch:
click.echo(
f"Need to be on the release branch ({release_branch}) before tagging. "
f"Currently on ({repo.active_branch.name})."
)
click.get_current_context().abort()
# Get the appropriate changelogs and tag.
changes = get_changes_for_version(current_version)
click.echo_via_pager(changes)
if click.confirm("Edit text?", default=False):
changes = click.edit(changes, require_save=False)
edited_changes = click.edit(changes, require_save=False)
# This assert is for mypy's benefit. click's docs are a little unclear, but
# when `require_save=False`, not saving the temp file in the editor returns
# the original string.
assert edited_changes is not None
changes = edited_changes
repo.create_tag(tag_name, message=changes, sign=True)
@@ -347,22 +352,16 @@ def tag(gh_token: Optional[str]):
@cli.command()
@click.option("--gh-token", envvar=["GH_TOKEN", "GITHUB_TOKEN"], required=True)
def publish(gh_token: str):
"""Publish release."""
def publish(gh_token: str) -> None:
"""Publish release on GitHub."""
# Make sure we're in a git repo.
try:
repo = git.Repo()
except git.InvalidGitRepositoryError:
raise click.ClickException("Not in Synapse repo.")
if repo.is_dirty():
raise click.ClickException("Uncommitted changes exist.")
get_repo_and_check_clean_checkout()
current_version = get_package_version()
tag_name = f"v{current_version}"
if not click.confirm(f"Publish {tag_name}?", default=True):
if not click.confirm(f"Publish release {tag_name} on GitHub?", default=True):
return
# Publish the draft release
@@ -390,12 +389,19 @@ def publish(gh_token: str):
@cli.command()
def upload():
def upload() -> None:
"""Upload release to pypi."""
current_version = get_package_version()
tag_name = f"v{current_version}"
# Check we have the right tag checked out.
repo = get_repo_and_check_clean_checkout()
tag = repo.tag(f"refs/tags/{tag_name}")
if repo.head.commit != tag.commit:
click.echo("Tag {tag_name} (tag.commit) is not currently checked out!")
click.get_current_context().abort()
pypi_asset_names = [
f"matrix_synapse-{current_version}-py3-none-any.whl",
f"matrix-synapse-{current_version}.tar.gz",
@@ -418,7 +424,7 @@ def upload():
@cli.command()
def announce():
def announce() -> None:
"""Generate markdown to announce the release."""
current_version = get_package_version()
@@ -428,7 +434,7 @@ def announce():
f"""
Hi everyone. Synapse {current_version} has just been released.
[notes](https://github.com/matrix-org/synapse/releases/tag/{tag_name}) |\
[notes](https://github.com/matrix-org/synapse/releases/tag/{tag_name}) | \
[docker](https://hub.docker.com/r/matrixdotorg/synapse/tags?name={tag_name}) | \
[debs](https://packages.matrix.org/debian/) | \
[pypi](https://pypi.org/project/matrix-synapse/{current_version}/)"""
@@ -459,20 +465,36 @@ def get_package_version() -> version.Version:
return version.Version(version_string)
def get_release_branch_name(version_number: version.Version) -> str:
return f"release-v{version_number.major}.{version_number.minor}"
def get_repo_and_check_clean_checkout() -> git.Repo:
"""Get the project repo and check it's not got any uncommitted changes."""
try:
repo = git.Repo()
except git.InvalidGitRepositoryError:
raise click.ClickException("Not in Synapse repo.")
if repo.is_dirty():
raise click.ClickException("Uncommitted changes exist.")
return repo
def find_ref(repo: git.Repo, ref_name: str) -> Optional[git.HEAD]:
"""Find the branch/ref, looking first locally then in the remote."""
if ref_name in repo.refs:
return repo.refs[ref_name]
if ref_name in repo.references:
return repo.references[ref_name]
elif ref_name in repo.remote().refs:
return repo.remote().refs[ref_name]
else:
return None
def update_branch(repo: git.Repo):
def update_branch(repo: git.Repo) -> None:
"""Ensure branch is up to date if it has a remote"""
if repo.active_branch.tracking_branch():
repo.git.merge(repo.active_branch.tracking_branch().name)
tracking_branch = repo.active_branch.tracking_branch()
if tracking_branch:
repo.git.merge(tracking_branch.name)
def get_changes_for_version(wanted_version: version.Version) -> str:
@@ -536,7 +558,9 @@ def get_changes_for_version(wanted_version: version.Version) -> str:
return "\n".join(version_changelog)
def generate_and_write_changelog(current_version: version.Version, new_version: str):
def generate_and_write_changelog(
current_version: version.Version, new_version: str
) -> None:
# We do this by getting a draft so that we can edit it before writing to the
# changelog.
result = run_until_successful(
@@ -558,8 +582,8 @@ def generate_and_write_changelog(current_version: version.Version, new_version:
f.write(existing_content)
# Remove all the news fragments
for f in glob.iglob("changelog.d/*.*"):
os.remove(f)
for filename in glob.iglob("changelog.d/*.*"):
os.remove(filename)
if __name__ == "__main__":

View File

@@ -27,7 +27,7 @@ from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.util import json_encoder
def main():
def main() -> None:
parser = argparse.ArgumentParser(
description="""Adds a signature to a JSON object.

View File

@@ -85,12 +85,19 @@ class SortedDict(Dict[_KT, _VT]):
def popitem(self, index: int = ...) -> Tuple[_KT, _VT]: ...
def peekitem(self, index: int = ...) -> Tuple[_KT, _VT]: ...
def setdefault(self, key: _KT, default: Optional[_VT] = ...) -> _VT: ...
@overload
def update(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
@overload
def update(self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT) -> None: ...
@overload
def update(self, **kwargs: _VT) -> None: ...
# Mypy now reports the first overload as an error, because typeshed widened the type
# of `__map` to its internal `_typeshed.SupportsKeysAndGetItem` type in
# https://github.com/python/typeshed/pull/6653
# Since sorteddicts don't change the signature of `update` from that of `dict`, we
# let the stubs for `update` inherit from the stubs for `dict`. (I suspect we could
# do the same for many other methods.) We leave the stubs commented to better track
# how this file has evolved from the original stubs.
# @overload
# def update(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
# @overload
# def update(self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT) -> None: ...
# @overload
# def update(self, **kwargs: _VT) -> None: ...
def __reduce__(
self,
) -> Tuple[
@@ -115,9 +122,7 @@ class SortedKeysView(KeysView[_KT_co], Sequence[_KT_co]):
def __getitem__(self, index: slice) -> List[_KT_co]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
class SortedItemsView( # type: ignore
ItemsView[_KT_co, _VT_co], Sequence[Tuple[_KT_co, _VT_co]]
):
class SortedItemsView(ItemsView[_KT_co, _VT_co], Sequence[Tuple[_KT_co, _VT_co]]):
def __iter__(self) -> Iterator[Tuple[_KT_co, _VT_co]]: ...
@overload
def __getitem__(self, index: int) -> Tuple[_KT_co, _VT_co]: ...

View File

@@ -187,7 +187,7 @@ class Auth:
Once get_user_by_req has set up the opentracing span, this does the actual work.
"""
try:
ip_addr = request.getClientIP()
ip_addr = request.getClientAddress().host
user_agent = get_request_user_agent(request)
access_token = self.get_access_token_from_request(request)
@@ -356,7 +356,7 @@ class Auth:
return None, None, None
if app_service.ip_range_whitelist:
ip_address = IPAddress(request.getClientIP())
ip_address = IPAddress(request.getClientAddress().host)
if ip_address not in app_service.ip_range_whitelist:
return None, None, None
@@ -417,7 +417,8 @@ class Auth:
"""
if rights == "access":
# first look in the database
# First look in the database to see if the access token is present
# as an opaque token.
r = await self.store.get_user_by_access_token(token)
if r:
valid_until_ms = r.valid_until_ms
@@ -434,7 +435,8 @@ class Auth:
return r
# otherwise it needs to be a valid macaroon
# If the token isn't found in the database, then it could still be a
# macaroon, so we check that here.
try:
user_id, guest = self._parse_and_validate_macaroon(token, rights)
@@ -482,8 +484,12 @@ class Auth:
TypeError,
ValueError,
) as e:
logger.warning("Invalid macaroon in auth: %s %s", type(e), e)
raise InvalidClientTokenError("Invalid macaroon passed.")
logger.warning(
"Invalid access token in auth: %s %s.",
type(e),
e,
)
raise InvalidClientTokenError("Invalid access token passed.")
def _parse_and_validate_macaroon(
self, token: str, rights: str = "access"
@@ -504,10 +510,7 @@ class Auth:
try:
macaroon = pymacaroons.Macaroon.deserialize(token)
except Exception: # deserialize can throw more-or-less anything
# doesn't look like a macaroon: treat it as an opaque token which
# must be in the database.
# TODO: it would be nice to get rid of this, but apparently some
# people use access tokens which aren't macaroons
# The access token doesn't look like a macaroon.
raise _InvalidMacaroonException()
try:

View File

@@ -255,7 +255,5 @@ class GuestAccess:
class ReceiptTypes:
READ: Final = "m.read"
class ReadReceiptEventFields:
MSC2285_HIDDEN: Final = "org.matrix.msc2285.hidden"
READ_PRIVATE: Final = "org.matrix.msc2285.read.private"
FULLY_READ: Final = "m.fully_read"

View File

@@ -19,6 +19,7 @@ from typing import (
TYPE_CHECKING,
Awaitable,
Callable,
Collection,
Dict,
Iterable,
List,
@@ -444,9 +445,9 @@ class Filter:
return room_ids
async def _check_event_relations(
self, events: Iterable[FilterEvent]
self, events: Collection[FilterEvent]
) -> List[FilterEvent]:
# The event IDs to check, mypy doesn't understand the ifinstance check.
# The event IDs to check, mypy doesn't understand the isinstance check.
event_ids = [event.event_id for event in events if isinstance(event, EventBase)] # type: ignore[attr-defined]
event_ids_to_keep = set(
await self._store.events_have_relations(

View File

@@ -38,6 +38,7 @@ from typing import (
from cryptography.utils import CryptographyDeprecationWarning
from matrix_common.versionstring import get_distribution_version_string
from typing_extensions import ParamSpec
import twisted
from twisted.internet import defer, error, reactor as _reactor
@@ -48,10 +49,12 @@ from twisted.logger import LoggingFile, LogLevel
from twisted.protocols.tls import TLSMemoryBIOFactory
from twisted.python.threadpool import ThreadPool
import synapse
import synapse.util.caches
from synapse.api.constants import MAX_PDU_SIZE
from synapse.app import check_bind_error
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config import ConfigError
from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ManholeConfig
from synapse.crypto import context_factory
@@ -60,6 +63,7 @@ from synapse.events.spamcheck import load_legacy_spam_checkers
from synapse.events.third_party_rules import load_legacy_third_party_event_rules
from synapse.handlers.auth import load_legacy_password_auth_providers
from synapse.logging.context import PreserveLoggingContext
from synapse.logging.opentracing import init_tracer
from synapse.metrics import install_gc_manager, register_threadpool
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.jemalloc import setup_jemalloc_stats
@@ -81,11 +85,12 @@ logger = logging.getLogger(__name__)
# list of tuples of function, args list, kwargs dict
_sighup_callbacks: List[
Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]]
Tuple[Callable[..., None], Tuple[object, ...], Dict[str, object]]
] = []
P = ParamSpec("P")
def register_sighup(func: Callable[..., None], *args: Any, **kwargs: Any) -> None:
def register_sighup(func: Callable[P, None], *args: P.args, **kwargs: P.kwargs) -> None:
"""
Register a function to be called when a SIGHUP occurs.
@@ -93,7 +98,9 @@ def register_sighup(func: Callable[..., None], *args: Any, **kwargs: Any) -> Non
func: Function to be called when sent a SIGHUP signal.
*args, **kwargs: args and kwargs to be passed to the target function.
"""
_sighup_callbacks.append((func, args, kwargs))
# This type-ignore should be redundant once we use a mypy release with
# https://github.com/python/mypy/pull/12668.
_sighup_callbacks.append((func, args, kwargs)) # type: ignore[arg-type]
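
As a minimal standalone sketch (not Synapse code) of what the `ParamSpec`-typed signature buys: mypy can now check that the extra arguments actually match the callback's real signature.

from typing import Callable
from typing_extensions import ParamSpec

P = ParamSpec("P")

def call_with(func: Callable[P, None], *args: P.args, **kwargs: P.kwargs) -> None:
    # With ParamSpec, mypy checks that *args/**kwargs match func's signature.
    func(*args, **kwargs)

def refresh(name: str) -> None:
    print("refreshing", name)

call_with(refresh, "main")    # OK
# call_with(refresh, 123)     # flagged by mypy: int is not compatible with str
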
def start_worker_reactor(
@@ -214,7 +221,9 @@ def redirect_stdio_to_logs() -> None:
print("Redirected stdout/stderr to logs")
def register_start(cb: Callable[..., Awaitable], *args: Any, **kwargs: Any) -> None:
def register_start(
cb: Callable[P, Awaitable], *args: P.args, **kwargs: P.kwargs
) -> None:
"""Register a callback with the reactor, to be called once it is running
This can be used to initialise parts of the system which require an asynchronous
@@ -426,12 +435,16 @@ async def start(hs: "HomeServer") -> None:
signal.signal(signal.SIGHUP, run_sighup)
register_sighup(refresh_certificate, hs)
register_sighup(reload_cache_config, hs.config)
# Apply the cache config.
hs.config.caches.resize_all_caches()
# Load the certificate from disk.
refresh_certificate(hs)
# Start the tracer
synapse.logging.opentracing.init_tracer(hs) # type: ignore[attr-defined] # noqa
init_tracer(hs) # noqa
# Instantiate the modules so they can register their web resources to the module API
# before we start the listeners.
@@ -480,6 +493,43 @@ async def start(hs: "HomeServer") -> None:
atexit.register(gc.freeze)
def reload_cache_config(config: HomeServerConfig) -> None:
"""Reload cache config from disk and immediately apply it.resize caches accordingly.
If the config is invalid, a `ConfigError` is logged and no changes are made.
Otherwise, this:
- replaces the `caches` section on the given `config` object,
- resizes all caches according to the new cache factors.
Note that the following cache config keys are read, but not applied:
- event_cache_size: used to set a max_size and _original_max_size on
EventsWorkerStore._get_event_cache when it is created. We'd have to update
the _original_max_size (and maybe more) to apply this at runtime.
- sync_response_cache_duration: would have to update the timeout_sec attribute on
HomeServer -> SyncHandler -> ResponseCache.
- track_memory_usage. This affects synapse.util.caches.TRACK_MEMORY_USAGE which
influences Synapse's self-reported metrics.
Also, the HTTPConnectionPool in SimpleHTTPClient sets its maxPersistentPerHost
parameter based on the global_factor. This won't be applied on a config reload.
"""
try:
previous_cache_config = config.reload_config_section("caches")
except ConfigError as e:
logger.warning("Failed to reload cache config")
for f in format_config_error(e):
logger.warning(f)
else:
logger.debug(
"New cache config. Was:\n %s\nNow:\n",
previous_cache_config.__dict__,
config.caches.__dict__,
)
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
config.caches.resize_all_caches()
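
Operationally, the reload above is driven by SIGHUP (this diff registers `reload_cache_config` as a SIGHUP callback). A minimal sketch of triggering it from the same host, assuming a hypothetical pid file location:

import os
import signal

# Hypothetical pid file path; use whatever the deployment actually writes.
with open("/var/run/matrix-synapse.pid") as f:
    synapse_pid = int(f.read().strip())

# Synapse's SIGHUP handler runs the registered callbacks, including the
# cache config reload above.
os.kill(synapse_pid, signal.SIGHUP)
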
def setup_sentry(hs: "HomeServer") -> None:
"""Enable sentry integration, if enabled in configuration"""

View File

@@ -210,7 +210,7 @@ def start(config_options: List[str]) -> None:
config.logging.no_redirect_stdio = True
# Explicitly disable background processes
config.server.update_user_directory = False
config.worker.should_update_user_directory = False
config.worker.run_background_tasks = False
config.worker.start_pushers = False
config.worker.pusher_shard_config.instances = []

View File

@@ -441,38 +441,6 @@ def start(config_options: List[str]) -> None:
"synapse.app.user_dir",
)
if config.worker.worker_app == "synapse.app.appservice":
if config.appservice.notify_appservices:
sys.stderr.write(
"\nThe appservices must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``notify_appservices: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the appservice to start since they will be disabled in the main config
config.appservice.notify_appservices = True
else:
# For other worker types we force this to off.
config.appservice.notify_appservices = False
if config.worker.worker_app == "synapse.app.user_dir":
if config.server.update_user_directory:
sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``update_user_directory: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the pushers to start since they will be disabled in the main config
config.server.update_user_directory = True
else:
# For other worker types we force this to off.
config.server.update_user_directory = False
synapse.events.USE_FROZEN_DICTS = config.server.use_frozen_dicts
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage

View File

@@ -16,7 +16,7 @@
import logging
import os
import sys
from typing import Dict, Iterable, Iterator, List
from typing import Dict, Iterable, List
from matrix_common.versionstring import get_distribution_version_string
@@ -45,7 +45,7 @@ from synapse.app._base import (
redirect_stdio_to_logs,
register_start,
)
from synapse.config._base import ConfigError
from synapse.config._base import ConfigError, format_config_error
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig
@@ -399,38 +399,6 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
return hs
def format_config_error(e: ConfigError) -> Iterator[str]:
"""
Formats a config error neatly
The idea is to format the immediate error, plus the "causes" of those errors,
hopefully in a way that makes sense to the user. For example:
Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
Failed to parse config for module 'JinjaOidcMappingProvider':
invalid jinja template:
unexpected end of template, expected 'end of print statement'.
Args:
e: the error to be formatted
Returns: An iterator which yields string fragments to be formatted
"""
yield "Error in configuration"
if e.path:
yield " at '%s'" % (".".join(e.path),)
yield ":\n %s" % (e.msg,)
parent_e = e.__cause__
indent = 1
while parent_e:
indent += 1
yield ":\n%s%s" % (" " * indent, str(parent_e))
parent_e = parent_e.__cause__
def run(hs: HomeServer) -> None:
_base.start_reactor(
"synapse-homeserver",

View File

@@ -17,6 +17,7 @@ import urllib.parse
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple
from prometheus_client import Counter
from typing_extensions import TypeGuard
from synapse.api.constants import EventTypes, Membership, ThirdPartyEntityKind
from synapse.api.errors import CodeMessageException
@@ -66,7 +67,7 @@ def _is_valid_3pe_metadata(info: JsonDict) -> bool:
return True
def _is_valid_3pe_result(r: JsonDict, field: str) -> bool:
def _is_valid_3pe_result(r: object, field: str) -> TypeGuard[JsonDict]:
if not isinstance(r, dict):
return False

View File

@@ -16,14 +16,18 @@
import argparse
import errno
import logging
import os
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import (
Any,
ClassVar,
Collection,
Dict,
Iterable,
Iterator,
List,
MutableMapping,
Optional,
@@ -40,6 +44,8 @@ import yaml
from synapse.util.templates import _create_mxc_to_http_filter, _format_ts_filter
logger = logging.getLogger(__name__)
class ConfigError(Exception):
"""Represents a problem parsing the configuration
@@ -55,6 +61,38 @@ class ConfigError(Exception):
self.path = path
def format_config_error(e: ConfigError) -> Iterator[str]:
"""
Formats a config error neatly
The idea is to format the immediate error, plus the "causes" of those errors,
hopefully in a way that makes sense to the user. For example:
Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
Failed to parse config for module 'JinjaOidcMappingProvider':
invalid jinja template:
unexpected end of template, expected 'end of print statement'.
Args:
e: the error to be formatted
Returns: An iterator which yields string fragments to be formatted
"""
yield "Error in configuration"
if e.path:
yield " at '%s'" % (".".join(e.path),)
yield ":\n %s" % (e.msg,)
parent_e = e.__cause__
indent = 1
while parent_e:
indent += 1
yield ":\n%s%s" % (" " * indent, str(parent_e))
parent_e = parent_e.__cause__
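
For reference, a minimal sketch of consuming the yielded fragments now that the helper lives in `synapse.config._base`; the error message and path below are illustrative only, and assume the `ConfigError(msg, path)` constructor shown earlier in this diff.

from synapse.config._base import ConfigError, format_config_error

# Illustrative error only; real ConfigErrors are raised during config parsing.
err = ConfigError(
    "Failed to parse config for module 'JinjaOidcMappingProvider'",
    ("oidc_config", "user_mapping_provider", "config", "display_name_template"),
)
print("".join(format_config_error(err)))
# Prints "Error in configuration at '<dotted path>':" followed by the message,
# with any chained causes indented on subsequent lines.
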
# We split these messages out to allow packages to override with package
# specific instructions.
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS = """\
@@ -119,7 +157,7 @@ class Config:
defined in subclasses.
"""
section: str
section: ClassVar[str]
def __init__(self, root_config: "RootConfig" = None):
self.root = root_config
@@ -309,9 +347,12 @@ class RootConfig:
class, lower-cased and with "Config" removed.
"""
config_classes = []
config_classes: List[Type[Config]] = []
def __init__(self, config_files: Collection[str] = ()):
# Capture absolute paths here, so we can reload config after we daemonize.
self.config_files = [os.path.abspath(path) for path in config_files]
def __init__(self):
for config_class in self.config_classes:
if config_class.section is None:
raise ValueError("%r requires a section name" % (config_class,))
@@ -512,12 +553,10 @@ class RootConfig:
object from parser.parse_args(..)`
"""
obj = cls()
config_args = parser.parse_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)
obj = cls(config_files)
if not config_files:
parser.error("Must supply a config file.")
@@ -627,7 +666,7 @@ class RootConfig:
generate_missing_configs = config_args.generate_missing_configs
obj = cls()
obj = cls(config_files)
if config_args.generate_config:
if config_args.report_stats is None:
@@ -727,6 +766,34 @@ class RootConfig:
) -> None:
self.invoke_all("generate_files", config_dict, config_dir_path)
def reload_config_section(self, section_name: str) -> Config:
"""Reconstruct the given config section, leaving all others unchanged.
This works in three steps:
1. Create a new instance of the relevant `Config` subclass.
2. Call `read_config` on that instance to parse the new config.
3. Replace the existing config instance with the new one.
:raises ValueError: if the given `section_name` does not exist.
:raises ConfigError: for any other problems reloading config.
:returns: the previous config object, which no longer has a reference to this
RootConfig.
"""
existing_config: Optional[Config] = getattr(self, section_name, None)
if existing_config is None:
raise ValueError(f"Unknown config section '{section_name}'")
logger.info("Reloading config section '%s'", section_name)
new_config_data = read_config_files(self.config_files)
new_config = type(existing_config)(self)
new_config.read_config(new_config_data)
setattr(self, section_name, new_config)
existing_config.root = None
return existing_config
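
And a minimal sketch of the intended calling pattern, mirroring `reload_cache_config` earlier in this diff; it assumes `config` is an already-loaded HomeServerConfig and `logger` is set up.

# ConfigError / format_config_error are imported from synapse.config._base as above.
try:
    previous_caches = config.reload_config_section("caches")
except ConfigError as e:
    logger.warning("".join(format_config_error(e)))
else:
    # `previous_caches` keeps the old values (its .root reference is cleared);
    # apply the freshly parsed cache factors to the live caches.
    config.caches.resize_all_caches()
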
def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
"""Read the config files into a dict

View File

@@ -1,15 +1,19 @@
import argparse
from typing import (
Any,
Collection,
Dict,
Iterable,
Iterator,
List,
Literal,
MutableMapping,
Optional,
Tuple,
Type,
TypeVar,
Union,
overload,
)
import jinja2
@@ -64,6 +68,8 @@ class ConfigError(Exception):
self.msg = msg
self.path = path
def format_config_error(e: ConfigError) -> Iterator[str]: ...
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
MISSING_REPORT_STATS_SPIEL: str
MISSING_SERVER_NAME: str
@@ -117,7 +123,8 @@ class RootConfig:
background_updates: background_updates.BackgroundUpdateConfig
config_classes: List[Type["Config"]] = ...
def __init__(self) -> None: ...
config_files: List[str]
def __init__(self, config_files: Collection[str] = ...) -> None: ...
def invoke_all(
self, func_name: str, *args: Any, **kwargs: Any
) -> MutableMapping[str, Any]: ...
@@ -157,6 +164,12 @@ class RootConfig:
def generate_missing_files(
self, config_dict: dict, config_dir_path: str
) -> None: ...
@overload
def reload_config_section(
self, section_name: Literal["caches"]
) -> cache.CacheConfig: ...
@overload
def reload_config_section(self, section_name: str) -> Config: ...
class Config:
root: RootConfig

View File

@@ -33,7 +33,6 @@ class AppServiceConfig(Config):
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
self.app_service_config_files = config.get("app_service_config_files", [])
self.notify_appservices = config.get("notify_appservices", True)
self.track_appservice_user_ips = config.get("track_appservice_user_ips", False)
def generate_config_section(cls, **kwargs: Any) -> str:
@@ -56,7 +55,8 @@ def load_appservices(
) -> List[ApplicationService]:
"""Returns a list of Application Services from the config files."""
if not isinstance(config_files, list):
logger.warning("Expected %s to be a list of AS config files.", config_files)
# type-ignore: this function gets arbitrary json value; we do use this path.
logger.warning("Expected %s to be a list of AS config files.", config_files) # type: ignore[unreachable]
return []
# Dicts of value -> filename

View File

@@ -69,11 +69,11 @@ def _canonicalise_cache_name(cache_name: str) -> str:
def add_resizable_cache(
cache_name: str, cache_resize_callback: Callable[[float], None]
) -> None:
"""Register a cache that's size can dynamically change
"""Register a cache whose size can dynamically change
Args:
cache_name: A reference to the cache
cache_resize_callback: A callback function that will be ran whenever
cache_resize_callback: A callback function that will run whenever
the cache needs to be resized
"""
# Some caches have '*' in them which we strip out.
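
To illustrate the registration mechanism described in the docstring above, a toy sketch (assuming the `add_resizable_cache` helper is importable from `synapse.config.cache`, where this hunk lives):

from synapse.config.cache import add_resizable_cache

class ToyCache:
    def __init__(self, base_size: int = 100) -> None:
        self._base_size = base_size
        self.max_size = base_size

    def set_cache_factor(self, factor: float) -> None:
        # Invoked by resize_all_caches() with either the per-cache or global factor.
        self.max_size = int(self._base_size * factor)

cache = ToyCache()
add_resizable_cache("toy_cache", cache.set_cache_factor)
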
@@ -96,6 +96,13 @@ class CacheConfig(Config):
section = "caches"
_environ = os.environ
event_cache_size: int
cache_factors: Dict[str, float]
global_factor: float
track_memory_usage: bool
expiry_time_msec: Optional[int]
sync_response_cache_duration: int
@staticmethod
def reset() -> None:
"""Resets the caches to their defaults. Used for tests."""
@@ -115,6 +122,12 @@ class CacheConfig(Config):
# A cache 'factor' is a multiplier that can be applied to each of
# Synapse's caches in order to increase or decrease the maximum
# number of entries that can be stored.
#
# The configuration for cache factors (caches.global_factor and
# caches.per_cache_factors) can be reloaded while the application is running,
# by sending a SIGHUP signal to the Synapse process. Changes to other parts of
# the caching config will NOT be applied after a SIGHUP is received; a restart
# is necessary.
# The number of events to cache in memory. Not affected by
# caches.global_factor.
@@ -163,6 +176,24 @@ class CacheConfig(Config):
#
#cache_entry_ttl: 30m
# This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
# `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
# a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
# this option, and all three of the options must be specified for this feature to work.
#cache_autotuning:
# This flag sets a ceiling on how much memory the caches can use before they begin to be continuously evicted.
# They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
# the flag below, or until the `min_cache_ttl` is hit.
#max_cache_memory_usage: 1024M
# This flag sets a rough target for the desired memory usage of the caches.
#target_cache_memory_usage: 758M
# `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
# caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
# from being emptied while Synapse is evicting due to memory pressure.
#min_cache_ttl: 5m
# Controls how long the results of a /sync request are cached for after
# a successful response is returned. A higher duration can help clients with
# intermittent connections, at the cost of higher memory usage.
@@ -174,21 +205,21 @@ class CacheConfig(Config):
"""
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
"""Populate this config object with values from `config`.
This method does NOT resize existing or future caches: use `resize_all_caches`.
We use two separate methods so that we can reject bad config before applying it.
"""
self.event_cache_size = self.parse_size(
config.get("event_cache_size", _DEFAULT_EVENT_CACHE_SIZE)
)
self.cache_factors: Dict[str, float] = {}
self.cache_factors = {}
cache_config = config.get("caches") or {}
self.global_factor = cache_config.get(
"global_factor", properties.default_factor_size
)
self.global_factor = cache_config.get("global_factor", _DEFAULT_FACTOR_SIZE)
if not isinstance(self.global_factor, (int, float)):
raise ConfigError("caches.global_factor must be a number.")
# Set the global one so that it's reflected in new caches
properties.default_factor_size = self.global_factor
# Load cache factors from the config
individual_factors = cache_config.get("per_cache_factors") or {}
if not isinstance(individual_factors, dict):
@@ -230,7 +261,7 @@ class CacheConfig(Config):
cache_entry_ttl = cache_config.get("cache_entry_ttl", "30m")
if expire_caches:
self.expiry_time_msec: Optional[int] = self.parse_duration(cache_entry_ttl)
self.expiry_time_msec = self.parse_duration(cache_entry_ttl)
else:
self.expiry_time_msec = None
@@ -250,23 +281,38 @@ class CacheConfig(Config):
)
self.expiry_time_msec = self.parse_duration(expiry_time)
self.cache_autotuning = cache_config.get("cache_autotuning")
if self.cache_autotuning:
max_memory_usage = self.cache_autotuning.get("max_cache_memory_usage")
self.cache_autotuning["max_cache_memory_usage"] = self.parse_size(
max_memory_usage
)
target_mem_size = self.cache_autotuning.get("target_cache_memory_usage")
self.cache_autotuning["target_cache_memory_usage"] = self.parse_size(
target_mem_size
)
min_cache_ttl = self.cache_autotuning.get("min_cache_ttl")
self.cache_autotuning["min_cache_ttl"] = self.parse_duration(min_cache_ttl)
self.sync_response_cache_duration = self.parse_duration(
cache_config.get("sync_response_cache_duration", 0)
)
# Resize all caches (if necessary) with the new factors we've loaded
self.resize_all_caches()
# Store this function so that it can be called from other classes without
# needing an instance of Config
properties.resize_all_caches_func = self.resize_all_caches
def resize_all_caches(self) -> None:
"""Ensure all cache sizes are up to date
"""Ensure all cache sizes are up-to-date.
For each cache, run the mapped callback function with either
a specific cache factor or the default, global one.
"""
# Set the global factor size, so that new caches are appropriately sized.
properties.default_factor_size = self.global_factor
# Store this function so that it can be called from other classes without
# needing an instance of CacheConfig
properties.resize_all_caches_func = self.resize_all_caches
# block other threads from modifying _CACHES while we iterate it.
with _CACHES_LOCK:
for cache_name, callback in _CACHES.items():

View File

@@ -32,7 +32,7 @@ class ExperimentalConfig(Config):
# MSC2716 (importing historical messages)
self.msc2716_enabled: bool = experimental.get("msc2716_enabled", False)
# MSC2285 (hidden read receipts)
# MSC2285 (private read receipts)
self.msc2285_enabled: bool = experimental.get("msc2285_enabled", False)
# MSC3244 (room version capabilities)
@@ -81,3 +81,6 @@ class ExperimentalConfig(Config):
# MSC2815 (allow room moderators to view redacted event content)
self.msc2815_enabled: bool = experimental.get("msc2815_enabled", False)
# MSC3786 (Add a default push rule to ignore m.room.server_acl events)
self.msc3786_enabled: bool = experimental.get("msc3786_enabled", False)

View File

@@ -46,7 +46,7 @@ class FederationConfig(Config):
)
self.allow_device_name_lookup_over_federation = config.get(
"allow_device_name_lookup_over_federation", True
"allow_device_name_lookup_over_federation", False
)
def generate_config_section(self, **kwargs: Any) -> str:
@@ -81,11 +81,11 @@ class FederationConfig(Config):
#
#allow_profile_lookup_over_federation: false
# Uncomment to disable device display name lookup over federation. By default, the
# Federation API allows other homeservers to obtain device display names of any user
# on this homeserver. Defaults to 'true'.
# Uncomment to allow device display name lookup over federation. By default, the
# Federation API prevents other homeservers from obtaining the display names of
# user devices on this homeserver. Defaults to 'false'.
#
#allow_device_name_lookup_over_federation: false
#allow_device_name_lookup_over_federation: true
"""

View File

@@ -110,13 +110,6 @@ loggers:
# information such as access tokens.
level: INFO
twisted:
# We send the twisted logging directly to the file handler,
# to work around https://github.com/matrix-org/synapse/issues/3471
# when using "buffer" logger. Use "console" to log to stderr instead.
handlers: [file]
propagate: false
root:
level: INFO

View File

@@ -43,6 +43,9 @@ class RegistrationConfig(Config):
self.registration_requires_token = config.get(
"registration_requires_token", False
)
self.enable_registration_token_3pid_bypass = config.get(
"enable_registration_token_3pid_bypass", False
)
self.registration_shared_secret = config.get("registration_shared_secret")
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
@@ -309,6 +312,12 @@ class RegistrationConfig(Config):
#
#registration_requires_token: true
# Allow users to submit a token during registration to bypass any required 3pid
# steps configured in `registrations_require_3pid`.
# Defaults to false, requiring that registration tokens (if enabled) complete a 3pid flow.
#
#enable_registration_token_3pid_bypass: false
# If set, allows registration of standard or admin accounts by anyone who
# has the shared secret, even if registration is otherwise disabled.
#

View File

@@ -63,6 +63,19 @@ class RoomConfig(Config):
"Invalid value for encryption_enabled_by_default_for_room_type"
)
self.default_power_level_content_override = config.get(
"default_power_level_content_override",
None,
)
if self.default_power_level_content_override is not None:
for preset in self.default_power_level_content_override:
if preset not in vars(RoomCreationPreset).values():
raise ConfigError(
"Unrecognised room preset %s in default_power_level_content_override"
% preset
)
# We validate the actual overrides when we try to apply them.
def generate_config_section(self, **kwargs: Any) -> str:
return """\
## Rooms ##
@@ -83,4 +96,38 @@ class RoomConfig(Config):
# will also not affect rooms created by other servers.
#
#encryption_enabled_by_default_for_room_type: invite
# Override the default power levels for rooms created on this server, per
# room creation preset.
#
# The appropriate dictionary for the room preset will be applied on top
# of the existing power levels content.
#
# Useful if you know that your users need special permissions in rooms
# that they create (e.g. to send particular types of state events without
# needing an elevated power level). This takes the same shape as the
# `power_level_content_override` parameter in the /createRoom API, but
# is applied before that parameter.
#
# Valid keys are some or all of `private_chat`, `trusted_private_chat`
# and `public_chat`. Inside each of those should be any of the
# properties allowed in `power_level_content_override` in the
# /createRoom API. If any property is missing, its default value will
# continue to be used. If any property is present, it will overwrite
# the existing default completely (so if the `events` property exists,
# the default event power levels will be ignored).
#
#default_power_level_content_override:
# private_chat:
# "events":
# "com.example.myeventtype" : 0
# "m.room.avatar": 50
# "m.room.canonical_alias": 50
# "m.room.encryption": 100
# "m.room.history_visibility": 100
# "m.room.name": 50
# "m.room.power_levels": 100
# "m.room.server_acl": 100
# "m.room.tombstone": 100
# "events_default": 1
"""

View File

@@ -186,7 +186,7 @@ KNOWN_RESOURCES = {
class HttpResourceConfig:
names: List[str] = attr.ib(
factory=list,
validator=attr.validators.deep_iterable(attr.validators.in_(KNOWN_RESOURCES)), # type: ignore
validator=attr.validators.deep_iterable(attr.validators.in_(KNOWN_RESOURCES)),
)
compress: bool = attr.ib(
default=False,
@@ -231,9 +231,7 @@ class ManholeConfig:
class LimitRemoteRoomsConfig:
enabled: bool = attr.ib(validator=attr.validators.instance_of(bool), default=False)
complexity: Union[float, int] = attr.ib(
validator=attr.validators.instance_of(
(float, int) # type: ignore[arg-type] # noqa
),
validator=attr.validators.instance_of((float, int)), # noqa
default=1.0,
)
complexity_error: str = attr.ib(
@@ -321,10 +319,6 @@ class ServerConfig(Config):
self.presence_router_config,
) = load_module(presence_router_config, ("presence", "presence_router"))
# Whether to update the user directory or not. This should be set to
# false only if we are updating the user directory in a worker
self.update_user_directory = config.get("update_user_directory", True)
# whether to enable the media repository endpoints. This should be set
# to false if the media repository is running as a separate endpoint;
# doing so ensures that we will not run cache cleanup jobs on the
@@ -415,6 +409,7 @@ class ServerConfig(Config):
)
self.mau_trial_days = config.get("mau_trial_days", 0)
self.mau_appservice_trial_days = config.get("mau_appservice_trial_days", {})
self.mau_limit_alerting = config.get("mau_limit_alerting", True)
# How long to keep redacted events in the database in unredacted form
@@ -1001,7 +996,7 @@ class ServerConfig(Config):
# federation: the server-server API (/_matrix/federation). Also implies
# 'media', 'keys', 'openid'
#
# keys: the key discovery API (/_matrix/keys).
# keys: the key discovery API (/_matrix/key).
#
# media: the media API (/_matrix/media).
#
@@ -1107,6 +1102,11 @@ class ServerConfig(Config):
# sign up in a short space of time never to return after their initial
# session.
#
# The option `mau_appservice_trial_days` is similar to `mau_trial_days`, but
# applies a different trial number if the user was registered by an appservice.
# A value of 0 means no trial days are applied. Appservices not listed in this
# dictionary use the value of `mau_trial_days` instead.
#
# 'mau_limit_alerting' is a means of limiting client side alerting
# should the mau limit be reached. This is useful for small instances
# where the admin has 5 mau seats (say) for 5 specific people and no
@@ -1117,6 +1117,8 @@ class ServerConfig(Config):
#max_mau_value: 50
#mau_trial_days: 2
#mau_limit_alerting: false
#mau_appservice_trial_days:
# "appservice-id": 1
# If enabled, the metrics for the number of monthly active users will
# be populated, however no one will be limited. If limit_usage_by_mau

Some files were not shown because too many files have changed in this diff.