Compare commits


30 Commits

Author SHA1 Message Date
Andrew Morgan
e689cae47d update changelog for 1.55.1 2022-03-24 17:54:43 +00:00
Andrew Morgan
088f3ae182 1.55.1 2022-03-24 17:47:03 +00:00
Andrew Morgan
8810c93e82 Replace instances of deprecated Jinja2.Markup with markupsafe.Markup (#12289)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-03-24 17:16:13 +00:00
Andrew Morgan
6b26536a52 Changelog: sso -> Single Sign-On 2022-03-22 14:21:49 +00:00
Andrew Morgan
a701a09f9b changelog: move notice from rc to final release 2022-03-22 14:05:17 +00:00
Andrew Morgan
34baf76451 1.55.0 2022-03-22 14:02:52 +00:00
Michael Telatynski
01211e0c16 Tweak copy for sso account details template (#12265)
* Tweak copy for sso account details template
* Update sso footer copyright year
* Add newsfragment

Signed-off-by: Michael Telatynski <7t3chguy@gmail.com>
2022-03-22 10:22:25 +00:00
David Robertson
d9bc65918e Call out synctl change 2022-03-21 17:27:59 +00:00
reivilibre
6134b3079e Reword 'Choose your user name' as 'Choose your account name' in the SSO registration template, in order to comply with SIWA guidelines. (#12260)
* Reword as 'Choose your account name'

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2022-03-21 12:16:46 +00:00
Patrick Cloke
f70afbd565 Re-generate changelog. 2022-03-16 12:20:05 -04:00
Patrick Cloke
96274565ff Fix bundling aggregations if unsigned is not a returned event field. (#12234)
An error occurred if a filter was supplied with `event_fields` which did not include
`unsigned`.

In that case, bundled aggregations are still added as the spec states it is allowed
for servers to add additional fields.
2022-03-16 12:17:39 -04:00
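The fix (visible in the `EventClientSerializer` diff further down this page) amounts to re-creating the `unsigned` dict rather than assuming it survived filtering. A minimal sketch of the failure mode, using illustrative data rather than Synapse's actual serializer:

```python
# Sketch of the bug fixed in #12234 (illustrative data, not Synapse's code).
serialized_event = {"type": "m.room.message"}  # "unsigned" stripped by event_fields

# Before the fix, this raised KeyError, surfacing as a 500 on /sync:
#     serialized_event["unsigned"].setdefault("m.relations", {})

# After the fix, the field is re-created; the spec permits servers to add it back:
serialized_event.setdefault("unsigned", {}).setdefault("m.relations", {}).update(
    {"m.thread": {"count": 2}}  # hypothetical bundled aggregation payload
)
```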
David Robertson
9e90d643e6 Changelog tweaks 2022-03-15 11:16:36 +00:00
David Robertson
d1130a249b 1.55.0rc1 2022-03-15 11:00:01 +00:00
Sean Quah
2fcf4b3f6c Add cancellation support to @cached and @cachedList decorators (#12183)
These decorators mostly support cancellation already. Add cancellation
tests and fix use of finished logging contexts by delaying cancellation,
as suggested by @erikjohnston.

Signed-off-by: Sean Quah <seanq@element.io>
2022-03-14 19:04:29 +00:00
Sean Quah
605d161d7d Add cancellation support to ReadWriteLock (#12120)
Also convert `ReadWriteLock` to use async context managers.

Signed-off-by: Sean Quah <seanq@element.io>
2022-03-14 18:49:07 +00:00
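A quick before/after usage sketch of this API change (the lock and key names are illustrative; the docstring diff near the end of this page shows the same shift):

```python
# Previously, acquiring the lock awaited a plain context manager:
#     with await read_write_lock.read("some_key"):
#         ...
# With this change, the lock methods return async context managers:
async def read_something(read_write_lock, key: str) -> None:
    async with read_write_lock.read(key):
        ...  # the read lock is held for the duration of this block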
Sean Quah
8e5706d144 Fix broken background updates when using sqlite with enable_search off (#12215)
Signed-off-by: Sean Quah <seanq@element.io>
2022-03-14 17:52:58 +00:00
Sean Quah
90b2327066 Add delay_cancellation utility function (#12180)
`delay_cancellation` behaves like `stop_cancellation`, except it
delays `CancelledError`s until the original `Deferred` resolves.
This is handy for unifying cleanup paths and ensuring that uncancelled
coroutines don't use finished logcontexts.

Signed-off-by: Sean Quah <seanq@element.io>
2022-03-14 17:52:15 +00:00
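A sketch of how such a wrapper can be built from Twisted primitives, following the description above (close to, but not necessarily identical to, the code added in #12180): on cancellation, the outer `Deferred` is paused so the `CancelledError` is only delivered once the wrapped `Deferred` completes.

```python
from twisted.internet import defer
from twisted.python.failure import Failure

def delay_cancellation_sketch(deferred: "defer.Deferred") -> "defer.Deferred":
    def handle_cancel(new_deferred: "defer.Deferred") -> None:
        # Pause the wrapper so the queued CancelledError doesn't propagate
        # yet, then release it once the wrapped deferred has resolved.
        new_deferred.pause()
        new_deferred.errback(Failure(defer.CancelledError()))
        deferred.addBoth(lambda _: new_deferred.unpause())

    new_deferred: "defer.Deferred" = defer.Deferred(handle_cancel)
    deferred.chainDeferred(new_deferred)
    return new_deferred
```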
Patrick Cloke
54f674f7a9 Deprecate the groups/communities endpoints and add an experimental configuration flag. (#12200) 2022-03-12 13:23:37 -05:00
Shay
ef3619e61d Add config settings for background update parameters (#11980) 2022-03-11 10:46:45 -08:00
Brendan Abolivier
e6a106fd5e Implement a Jinja2 filter to extract localparts from email addresses (#12212) 2022-03-11 15:15:11 +00:00
reivilibre
4a53f35737 Improve code documentation for the typing stream over replication. (#12211) 2022-03-11 14:00:15 +00:00
Nick Mills-Barrett
735e89bd3a Add an additional HTTP pusher + push rule tests. (#12188)
And rename the field used for caching from _id to _cache_key.
2022-03-11 08:45:26 -05:00
Brendan Abolivier
003cc6910a Update the SSO username picker template to comply with SIWA guidelines (#12210)
Fixes https://github.com/matrix-org/synapse/issues/12205
2022-03-11 13:20:00 +00:00
Dirk Klimpel
32c828d0f7 Add type hints to tests/rest. (#12208)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-03-11 12:42:22 +00:00
Patrick Cloke
e10a2fe0c2 Add some type hints to the tests.handlers module. (#12207) 2022-03-11 07:07:15 -05:00
Patrick Cloke
bc9dff1d95 Remove unnecessary pass statements. (#12206) 2022-03-11 07:06:21 -05:00
Andrew Morgan
3b12f6d61b Note that contributors can sign off privately (#12204)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-03-11 11:10:20 +00:00
Richard van der Hoff
483f2aa2ec Retention test: avoid relying on state at purged events (#12202)
This test was relying on poking events which weren't in the database into
filter_events_for_client.
2022-03-11 10:33:49 +00:00
~creme
7577894bec Document that most streams can only have a single writer. (#12196)
This includes the `typing`, `to_device`, `account_data`, `receipts`, and `presence`
streams (really anything except the `events` stream).
2022-03-10 18:15:19 +00:00
Shay
ed9aea42fa fix misleading comment in check_event_for_spam (#12203) 2022-03-10 09:40:07 -08:00
115 changed files with 1771 additions and 466 deletions

View File

@@ -1,3 +1,111 @@
Synapse 1.55.1 (2022-03-24)
===========================
This is a patch release that fixes an incompatibility with version 3.1.0 of the [Jinja](https://pypi.org/project/Jinja2/) library, released on March 24th, 2022. Deployments of Synapse using the `matrixdotorg/synapse` Docker image or Debian packages from packages.matrix.org are not affected.
Internal Changes
----------------
- Remove uses of the long-deprecated `jinja2.Markup` which would prevent Synapse from starting with Jinja 3.1.0 or above installed. ([\#12289](https://github.com/matrix-org/synapse/issues/12289))
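For anyone carrying custom code with the same problem, the change is mechanical; a minimal sketch (MarkupSafe has always provided `Markup`, and Jinja itself depends on it):

```python
# Removed in Jinja 3.1.0:
#     from jinja2 import Markup
# Works with both old and new Jinja releases:
from markupsafe import Markup

safe_html = Markup("<b>bold</b>")  # marks the string as safe for template output
```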
Synapse 1.55.0 (2022-03-22)
===========================
This release removes a workaround introduced in Synapse 1.50.0 for Mjolnir compatibility. **This breaks compatibility with Mjolnir 1.3.1 and earlier. ([\#11700](https://github.com/matrix-org/synapse/issues/11700))**; Mjolnir users should upgrade Mjolnir before upgrading Synapse to this version.
This release also moves the location of the `synctl` script; see the [upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#synctl-script-has-been-moved) for more details.
Internal Changes
----------------
- Tweak copy for default Single Sign-On account details template to better adhere to mobile app store guidelines. ([\#12265](https://github.com/matrix-org/synapse/issues/12265), [\#12260](https://github.com/matrix-org/synapse/issues/12260))
Synapse 1.55.0rc1 (2022-03-15)
==============================
Features
--------
- Add third-party rules callbacks `check_can_shutdown_room` and `check_can_deactivate_user`. ([\#12028](https://github.com/matrix-org/synapse/issues/12028))
- Improve performance of logging in for large accounts. ([\#12132](https://github.com/matrix-org/synapse/issues/12132))
- Add experimental env var `SYNAPSE_ASYNC_IO_REACTOR` that causes Synapse to use the asyncio reactor for Twisted. ([\#12135](https://github.com/matrix-org/synapse/issues/12135))
- Support the stable identifiers from [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440): threads. ([\#12151](https://github.com/matrix-org/synapse/issues/12151))
- Add a new Jinja2 template filter to extract the local part of an email address. ([\#12212](https://github.com/matrix-org/synapse/issues/12212))
Bugfixes
--------
- Use the proper serialization format for bundled thread aggregations. The bug has existed since Synapse v1.48.0. ([\#12090](https://github.com/matrix-org/synapse/issues/12090))
- Fix a long-standing bug when redacting events with relations. ([\#12113](https://github.com/matrix-org/synapse/issues/12113), [\#12121](https://github.com/matrix-org/synapse/issues/12121), [\#12130](https://github.com/matrix-org/synapse/issues/12130), [\#12189](https://github.com/matrix-org/synapse/issues/12189))
- Fix a bug introduced in Synapse 1.7.2 whereby background updates are never run with the default background batch size. ([\#12157](https://github.com/matrix-org/synapse/issues/12157))
- Fix a bug where non-standard information was returned from the `/hierarchy` API. Introduced in Synapse v1.41.0. ([\#12175](https://github.com/matrix-org/synapse/issues/12175))
- Fix a bug introduced in Synapse 1.54.0 that broke background updates on sqlite homeservers while search was disabled. ([\#12215](https://github.com/matrix-org/synapse/issues/12215))
- Fix a long-standing bug where a `filter` argument with `event_fields` which did not include the `unsigned` field could result in a 500 error on `/sync`. ([\#12234](https://github.com/matrix-org/synapse/issues/12234))
Improved Documentation
----------------------
- Fix complexity checking config example in [Resource Constrained Devices](https://matrix-org.github.io/synapse/v1.54/other/running_synapse_on_single_board_computers.html) docs page. ([\#11998](https://github.com/matrix-org/synapse/issues/11998))
- Improve documentation for demo scripts. ([\#12143](https://github.com/matrix-org/synapse/issues/12143))
- Updates to the Room DAG concepts development document. ([\#12179](https://github.com/matrix-org/synapse/issues/12179))
- Document that the `typing`, `to_device`, `account_data`, `receipts`, and `presence` streams can each only have a single writer worker. ([\#12196](https://github.com/matrix-org/synapse/issues/12196))
- Document that contributors can sign off privately by email. ([\#12204](https://github.com/matrix-org/synapse/issues/12204))
Deprecations and Removals
-------------------------
- **Remove workaround introduced in Synapse 1.50.0 for Mjolnir compatibility. Breaks compatibility with Mjolnir 1.3.1 and earlier. ([\#11700](https://github.com/matrix-org/synapse/issues/11700))**
- **`synctl` has been moved into `synapse._scripts` and is exposed as an entry point; see [upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#synctl-script-has-been-moved). ([\#12140](https://github.com/matrix-org/synapse/issues/12140))**
- Remove backwards compatibility with pagination tokens from the `/relations` and `/aggregations` endpoints that were generated by Synapse < v1.52.0. ([\#12138](https://github.com/matrix-org/synapse/issues/12138))
- The groups/communities feature in Synapse has been deprecated. ([\#12200](https://github.com/matrix-org/synapse/issues/12200))
Internal Changes
----------------
- Simplify the `ApplicationService` class' set of public methods related to interest checking. ([\#11915](https://github.com/matrix-org/synapse/issues/11915))
- Add config settings for background update parameters. ([\#11980](https://github.com/matrix-org/synapse/issues/11980))
- Correct type hints for txredis. ([\#12042](https://github.com/matrix-org/synapse/issues/12042))
- Limit the size of `aggregation_key` on annotations. ([\#12101](https://github.com/matrix-org/synapse/issues/12101))
- Add type hints to tests files. ([\#12108](https://github.com/matrix-org/synapse/issues/12108), [\#12146](https://github.com/matrix-org/synapse/issues/12146), [\#12207](https://github.com/matrix-org/synapse/issues/12207), [\#12208](https://github.com/matrix-org/synapse/issues/12208))
- Move scripts to Synapse package and expose as setuptools entry points. ([\#12118](https://github.com/matrix-org/synapse/issues/12118))
- Add support for cancellation to `ReadWriteLock`. ([\#12120](https://github.com/matrix-org/synapse/issues/12120))
- Fix data validation to compare to lists, not sequences. ([\#12128](https://github.com/matrix-org/synapse/issues/12128))
- Fix CI not attaching source distributions and wheels to the GitHub releases. ([\#12131](https://github.com/matrix-org/synapse/issues/12131))
- Remove unused mocks from `test_typing`. ([\#12136](https://github.com/matrix-org/synapse/issues/12136))
- Give `scripts-dev` scripts suffixes for neater CI config. ([\#12137](https://github.com/matrix-org/synapse/issues/12137))
- Move the snapcraft configuration file to `contrib`. ([\#12142](https://github.com/matrix-org/synapse/issues/12142))
- Enable [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) Complement tests in CI. ([\#12144](https://github.com/matrix-org/synapse/issues/12144))
- Enable [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) Complement tests in CI. ([\#12145](https://github.com/matrix-org/synapse/issues/12145))
- Add test for `ObservableDeferred`'s cancellation behaviour. ([\#12149](https://github.com/matrix-org/synapse/issues/12149))
- Use `ParamSpec` in type hints for `synapse.logging.context`. ([\#12150](https://github.com/matrix-org/synapse/issues/12150))
- Prune unused jobs from `tox` config. ([\#12152](https://github.com/matrix-org/synapse/issues/12152))
- Move CI checks out of tox, to facilitate a move to using poetry. ([\#12153](https://github.com/matrix-org/synapse/issues/12153))
- Avoid generating state groups for local out-of-band leaves. ([\#12154](https://github.com/matrix-org/synapse/issues/12154))
- Avoid trying to calculate the state at outlier events. ([\#12155](https://github.com/matrix-org/synapse/issues/12155), [\#12173](https://github.com/matrix-org/synapse/issues/12173), [\#12202](https://github.com/matrix-org/synapse/issues/12202))
- Fix some type annotations. ([\#12156](https://github.com/matrix-org/synapse/issues/12156))
- Add type hints for `ObservableDeferred` attributes. ([\#12159](https://github.com/matrix-org/synapse/issues/12159))
- Use a prebuilt Action for the `tests-done` CI job. ([\#12161](https://github.com/matrix-org/synapse/issues/12161))
- Reduce number of DB queries made during processing of `/sync`. ([\#12163](https://github.com/matrix-org/synapse/issues/12163))
- Add `delay_cancellation` utility function, which behaves like `stop_cancellation` but waits until the original `Deferred` resolves before raising a `CancelledError`. ([\#12180](https://github.com/matrix-org/synapse/issues/12180))
- Retry HTTP replication failures; this should prevent 502s when restarting stateful workers (main, event persisters, stream writers). Contributed by Nick @ Beeper. ([\#12182](https://github.com/matrix-org/synapse/issues/12182))
- Add cancellation support to `@cached` and `@cachedList` decorators. ([\#12183](https://github.com/matrix-org/synapse/issues/12183))
- Remove unused variables. ([\#12187](https://github.com/matrix-org/synapse/issues/12187))
- Add combined test for HTTP pusher and push rule. Contributed by Nick @ Beeper. ([\#12188](https://github.com/matrix-org/synapse/issues/12188))
- Rename `HomeServer.get_tcp_replication` to `get_replication_command_handler`. ([\#12192](https://github.com/matrix-org/synapse/issues/12192))
- Remove some dead code. ([\#12197](https://github.com/matrix-org/synapse/issues/12197))
- Fix a misleading comment in the function `check_event_for_spam`. ([\#12203](https://github.com/matrix-org/synapse/issues/12203))
- Remove unnecessary `pass` statements. ([\#12206](https://github.com/matrix-org/synapse/issues/12206))
- Update the SSO username picker template to comply with SIWA guidelines. ([\#12210](https://github.com/matrix-org/synapse/issues/12210))
- Improve code documentation for the typing stream over replication. ([\#12211](https://github.com/matrix-org/synapse/issues/12211))
Synapse 1.54.0 (2022-03-08)
===========================

View File

@@ -33,7 +33,7 @@ site-url = "/synapse/"
additional-css = [
"docs/website_files/table-of-contents.css",
"docs/website_files/remove-nav-buttons.css",
"docs/website_files/section-headers.css",
"docs/website_files/indent-section-headers.css",
]
additional-js = ["docs/website_files/table-of-contents.js"]
theme = "docs/website_files/theme"

View File

@@ -1 +0,0 @@
Remove workaround introduced in Synapse 1.50.0 for Mjolnir compatibility. Breaks compatibility with Mjolnir 1.3.1 and earlier.

View File

@@ -1 +0,0 @@
Simplify the `ApplicationService` class' set of public methods related to interest checking.

View File

@@ -1 +0,0 @@
Fix complexity checking config example in [Resource Constrained Devices](https://matrix-org.github.io/synapse/v1.54/other/running_synapse_on_single_board_computers.html) docs page.

View File

@@ -1 +0,0 @@
Add third-party rules callbacks `check_can_shutdown_room` and `check_can_deactivate_user`.

View File

@@ -1 +0,0 @@
Correct type hints for txredis.

View File

@@ -1 +0,0 @@
Use the proper serialization format for bundled thread aggregations. The bug has existed since Synapse v1.48.0.

View File

@@ -1 +0,0 @@
Limit the size of `aggregation_key` on annotations.

View File

@@ -1 +0,0 @@
Add type hints to `tests/rest/client`.

View File

@@ -1 +0,0 @@
Fix a long-standing bug when redacting events with relations.

View File

@@ -1 +0,0 @@
Move scripts to Synapse package and expose as setuptools entry points.

View File

@@ -1 +0,0 @@
Fix a long-standing bug when redacting events with relations.

View File

@@ -1 +0,0 @@
Fix data validation to compare to lists, not sequences.

View File

@@ -1 +0,0 @@
Fix a long-standing bug when redacting events with relations.

View File

@@ -1 +0,0 @@
Fix CI not attaching source distributions and wheels to the GitHub releases.

View File

@@ -1 +0,0 @@
Improve performance of logging in for large accounts.

View File

@@ -1 +0,0 @@
Add experimental env var `SYNAPSE_ASYNC_IO_REACTOR` that causes Synapse to use the asyncio reactor for Twisted.

View File

@@ -1 +0,0 @@
Remove unused mocks from `test_typing`.

View File

@@ -1 +0,0 @@
Give `scripts-dev` scripts suffixes for neater CI config.

View File

@@ -1 +0,0 @@
Remove backwards compatibility with pagination tokens from the `/relations` and `/aggregations` endpoints that were generated by Synapse < v1.52.0.

View File

@@ -1 +0,0 @@
Move `synctl` into `synapse._scripts` and expose as an entry point.

View File

@@ -1 +0,0 @@
Move the snapcraft configuration file to `contrib`.

View File

@@ -1 +0,0 @@
Improve documentation for demo scripts.

View File

@@ -1 +0,0 @@
Enable [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) Complement tests in CI.

View File

@@ -1 +0,0 @@
Enable [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) Complement tests in CI.

View File

@@ -1 +0,0 @@
Add type hints to `tests/rest`.

View File

@@ -1 +0,0 @@
Add test for `ObservableDeferred`'s cancellation behaviour.

View File

@@ -1 +0,0 @@
Use `ParamSpec` in type hints for `synapse.logging.context`.

View File

@@ -1 +0,0 @@
Support the stable identifiers from [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440): threads.

View File

@@ -1 +0,0 @@
Prune unused jobs from `tox` config.

View File

@@ -1 +0,0 @@
Move CI checks out of tox, to facilitate a move to using poetry.

View File

@@ -1 +0,0 @@
Avoid generating state groups for local out-of-band leaves.

View File

@@ -1 +0,0 @@
Avoid trying to calculate the state at outlier events.

View File

@@ -1 +0,0 @@
Fix some type annotations.

View File

@@ -1 +0,0 @@
Fix a bug introduced in #4864 whereby background updates are never run with the default background batch size.

View File

@@ -1 +0,0 @@
Add type hints for `ObservableDeferred` attributes.

View File

@@ -1 +0,0 @@
Use a prebuilt Action for the `tests-done` CI job.

View File

@@ -1 +0,0 @@
Reduce number of DB queries made during processing of `/sync`.

View File

@@ -1 +0,0 @@
Avoid trying to calculate the state at outlier events.

View File

@@ -1 +0,0 @@
Fix a bug where non-standard information was returned from the `/hierarchy` API. Introduced in Synapse v1.41.0.

View File

@@ -1 +0,0 @@
Updates to the Room DAG concepts development document.

View File

@@ -1 +0,0 @@
Retry HTTP replication failures; this should prevent 502s when restarting stateful workers (main, event persisters, stream writers). Contributed by Nick @ Beeper.

View File

@@ -1 +0,0 @@
Remove unused variables.

View File

@@ -1 +0,0 @@
Fix a long-standing bug when redacting events with relations.

View File

@@ -1 +0,0 @@
Rename `HomeServer.get_tcp_replication` to `get_replication_command_handler`.

View File

@@ -1 +0,0 @@
Remove some dead code.

debian/changelog
View File

@@ -1,3 +1,21 @@
matrix-synapse-py3 (1.55.1) stable; urgency=medium
* New synapse release 1.55.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 24 Mar 2022 17:44:23 +0000
matrix-synapse-py3 (1.55.0) stable; urgency=medium
* New synapse release 1.55.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Mar 2022 13:59:26 +0000
matrix-synapse-py3 (1.55.0~rc1) stable; urgency=medium
* New synapse release 1.55.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Mar 2022 10:59:31 +0000
matrix-synapse-py3 (1.54.0) stable; urgency=medium
* New synapse release 1.54.0.

View File

@@ -458,6 +458,17 @@ Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
### Private Sign off
If you would like to provide your legal name privately to the Matrix.org
Foundation (instead of in a public commit or comment), you can do so
by emailing your legal name and a link to the pull request to
[dco@matrix.org](mailto:dco@matrix.org?subject=Private%20sign%20off).
It helps to include "sign off" or similar in the subject line. You will then
be instructed further.
Once private sign off is complete, doing so for future contributions will not
be required.
# 10. Turn feedback into better code.

View File

@@ -1947,8 +1947,14 @@ saml2_config:
#
# localpart_template: Jinja2 template for the localpart of the MXID.
# If this is not set, the user will be prompted to choose their
# own username (see 'sso_auth_account_details.html' in the 'sso'
# section of this file).
# own username (see the documentation for the
# 'sso_auth_account_details.html' template). This template can
# use the 'localpart_from_email' filter.
#
# confirm_localpart: Whether to prompt the user to validate (or
# change) the generated localpart (see the documentation for the
# 'sso_auth_account_details.html' template), instead of
# registering the account right away.
#
# display_name_template: Jinja2 template for the display name to set
# on first login. If unset, no displayname will be set.
@@ -2729,3 +2735,35 @@ redis:
# Optional password if configured on the Redis instance
#
#password: <secret_password>
## Background Updates ##
# Background updates are database updates that are run in the background in batches.
# The duration, minimum batch size, default batch size, whether to sleep between batches and if so, how long to
# sleep can all be configured. This is helpful to speed up or slow down the updates.
#
background_updates:
# How long in milliseconds to run a batch of background updates for. Defaults to 100. Uncomment and set
# a time to change the default.
#
#background_update_duration_ms: 500
# Whether to sleep between updates. Defaults to True. Uncomment to change the default.
#
#sleep_enabled: false
# If sleeping between updates, how long in milliseconds to sleep for. Defaults to 1000. Uncomment
# and set a duration to change the default.
#
#sleep_duration_ms: 300
# Minimum size a batch of background updates can be. Must be greater than 0. Defaults to 1. Uncomment and
# set a size to change the default.
#
#min_batch_size: 10
# The batch size to use for the first iteration of a new background update. The default is 100.
# Uncomment and set a size to change the default.
#
#default_batch_size: 50

View File

@@ -36,6 +36,13 @@ Turns a `mxc://` URL for media content into an HTTP(S) one using the homeserver'
Example: `message.sender_avatar_url|mxc_to_http(32,32)`
```python
localpart_from_email(address: str) -> str
```
Returns the local part of an email address (e.g. `alice` in `alice@example.com`).
Example: `user.email_address|localpart_from_email`
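A rough sketch of what the filter does (the real implementation lives in `synapse.util.templates`; this simplified version is only illustrative):

```python
def localpart_from_email(address: str) -> str:
    # Everything before the final "@" is treated as the local part,
    # e.g. "alice@example.com" -> "alice".
    return address.rsplit("@", 1)[0]
```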
## Email templates
@@ -176,8 +183,11 @@ Below are the templates Synapse will look for when generating pages related to S
for the brand of the IdP
* `user_attributes`: an object containing details about the user that
we received from the IdP. May have the following attributes:
* display_name: the user's display_name
* emails: a list of email addresses
* `display_name`: the user's display name
* `emails`: a list of email addresses
* `localpart`: the local part of the Matrix user ID to register,
if `localpart_template` is set in the mapping provider configuration (empty
string if not)
The template should render a form which submits the following fields:
* `username`: the localpart of the user's chosen user id
* `sso_new_user_consent.html`: HTML page allowing the user to consent to the

View File

@@ -85,6 +85,20 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.56.0
## Groups/communities feature has been deprecated
The non-standard groups/communities feature in Synapse has been deprecated and will
be disabled by default in Synapse v1.58.0.
You can test disabling it by adding the following to your homeserver configuration:
```yaml
experimental_features:
groups_enabled: false
```
# Upgrading to v1.55.0
## `synctl` script has been moved

View File

@@ -0,0 +1,7 @@
/*
* Indents each chapter title in the left sidebar so that they aren't
* at the same level as the section headers.
*/
.chapter-item {
margin-left: 1em;
}

View File

@@ -1,20 +0,0 @@
/*
* Indents each chapter title in the left sidebar so that they aren't
* at the same level as the section headers.
*/
.chapter-item {
margin-left: 1em;
}
/*
* Prevents a large gap between successive section headers.
*
* mdbook sets 'margin-top: 2.5em' on h2 and h3 headers. This makes sense when separating
* a header from the paragraph beforehand, but has the downside of introducing a large
* gap between headers that are next to each other with no text in between.
*
* This rule reduces the margin in this case.
*/
h1 + h2, h2 + h3 {
margin-top: 1.0em;
}

View File

@@ -351,8 +351,11 @@ is only supported with Redis-based replication.)
To enable this, the worker must have a HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. The same worker
can handle multiple streams. For example, to move event persistence off to a
dedicated worker, the shared configuration would include:
can handle multiple streams, but unless otherwise documented, each stream can only
have a single writer.
For example, to move event persistence off to a dedicated worker, the shared
configuration would include:
```yaml
instance_map:
@@ -370,8 +373,8 @@ streams and the endpoints associated with them:
##### The `events` stream
The `events` stream also experimentally supports having multiple writers, where
work is sharded between them by room ID. Note that you *must* restart all worker
The `events` stream experimentally supports having multiple writers, where work
is sharded between them by room ID. Note that you *must* restart all worker
instances when adding or removing event persisters. An example `stream_writers`
configuration with multiple writers:
@@ -384,38 +387,38 @@ stream_writers:
##### The `typing` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `typing` stream:
The following endpoints should be routed directly to the worker configured as
the stream writer for the `typing` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing
##### The `to_device` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `to_device` stream:
The following endpoints should be routed directly to the worker configured as
the stream writer for the `to_device` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/sendToDevice/
##### The `account_data` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `account_data` stream:
The following endpoints should be routed directly to the worker configured as
the stream writer for the `account_data` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/.*/tags
^/_matrix/client/(api/v1|r0|v3|unstable)/.*/account_data
##### The `receipts` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `receipts` stream:
The following endpoints should be routed directly to the worker configured as
the stream writer for the `receipts` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/receipt
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/read_markers
##### The `presence` stream
The following endpoints should be routed directly to the workers configured as
stream writers for the `presence` stream:
The following endpoints should be routed directly to the worker configured as
the stream writer for the `presence` stream:
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/

View File

@@ -90,7 +90,6 @@ exclude = (?x)
|tests/push/test_push_rule_evaluator.py
|tests/rest/client/test_transactions.py
|tests/rest/media/v1/test_media_storage.py
|tests/rest/media/v1/test_url_preview.py
|tests/scripts/test_new_matrix_user.py
|tests/server.py
|tests/server_notices/test_resource_limits_server_notices.py

View File

@@ -68,7 +68,7 @@ try:
except ImportError:
pass
__version__ = "1.54.0"
__version__ = "1.55.1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -322,7 +322,8 @@ class GenericWorkerServer(HomeServer):
presence.register_servlets(self, resource)
groups.register_servlets(self, resource)
if self.config.experimental.groups_enabled:
groups.register_servlets(self, resource)
resources.update({CLIENT_API_PREFIX: resource})

View File

@@ -19,6 +19,7 @@ from synapse.config import (
api,
appservice,
auth,
background_updates,
cache,
captcha,
cas,
@@ -113,6 +114,7 @@ class RootConfig:
caches: cache.CacheConfig
federation: federation.FederationConfig
retention: retention.RetentionConfig
background_updates: background_updates.BackgroundUpdateConfig
config_classes: List[Type["Config"]] = ...
def __init__(self) -> None: ...

View File

@@ -0,0 +1,68 @@
# Copyright 2022 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from ._base import Config
class BackgroundUpdateConfig(Config):
section = "background_updates"
def generate_config_section(self, **kwargs) -> str:
return """\
## Background Updates ##
# Background updates are database updates that are run in the background in batches.
# The duration, minimum batch size, default batch size, whether to sleep between batches and if so, how long to
# sleep can all be configured. This is helpful to speed up or slow down the updates.
#
background_updates:
# How long in milliseconds to run a batch of background updates for. Defaults to 100. Uncomment and set
# a time to change the default.
#
#background_update_duration_ms: 500
# Whether to sleep between updates. Defaults to True. Uncomment to change the default.
#
#sleep_enabled: false
# If sleeping between updates, how long in milliseconds to sleep for. Defaults to 1000. Uncomment
# and set a duration to change the default.
#
#sleep_duration_ms: 300
# Minimum size a batch of background updates can be. Must be greater than 0. Defaults to 1. Uncomment and
# set a size to change the default.
#
#min_batch_size: 10
# The batch size to use for the first iteration of a new background update. The default is 100.
# Uncomment and set a size to change the default.
#
#default_batch_size: 50
"""
def read_config(self, config, **kwargs) -> None:
bg_update_config = config.get("background_updates") or {}
self.update_duration_ms = bg_update_config.get(
"background_update_duration_ms", 100
)
self.sleep_enabled = bg_update_config.get("sleep_enabled", True)
self.sleep_duration_ms = bg_update_config.get("sleep_duration_ms", 1000)
self.min_batch_size = bg_update_config.get("min_batch_size", 1)
self.default_batch_size = bg_update_config.get("default_batch_size", 100)

View File

@@ -74,3 +74,6 @@ class ExperimentalConfig(Config):
# MSC3720 (Account status endpoint)
self.msc3720_enabled: bool = experimental.get("msc3720_enabled", False)
# The deprecated groups feature.
self.groups_enabled: bool = experimental.get("groups_enabled", True)

View File

@@ -16,6 +16,7 @@ from .account_validity import AccountValidityConfig
from .api import ApiConfig
from .appservice import AppServiceConfig
from .auth import AuthConfig
from .background_updates import BackgroundUpdateConfig
from .cache import CacheConfig
from .captcha import CaptchaConfig
from .cas import CasConfig
@@ -99,4 +100,5 @@ class HomeServerConfig(RootConfig):
WorkerConfig,
RedisConfig,
ExperimentalConfig,
BackgroundUpdateConfig,
]

View File

@@ -182,8 +182,14 @@ class OIDCConfig(Config):
#
# localpart_template: Jinja2 template for the localpart of the MXID.
# If this is not set, the user will be prompted to choose their
# own username (see 'sso_auth_account_details.html' in the 'sso'
# section of this file).
# own username (see the documentation for the
# 'sso_auth_account_details.html' template). This template can
# use the 'localpart_from_email' filter.
#
# confirm_localpart: Whether to prompt the user to validate (or
# change) the generated localpart (see the documentation for the
# 'sso_auth_account_details.html' template), instead of
# registering the account right away.
#
# display_name_template: Jinja2 template for the display name to set
# on first login. If unset, no displayname will be set.

View File

@@ -245,8 +245,8 @@ class SpamChecker:
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
sent by a local user. If it is sent by a user on another server, then
users receive a blank event.
sent by a local user. If it is sent by a user on another server, the
event is soft-failed.
Args:
event: the event to be checked

View File

@@ -530,9 +530,12 @@ class EventClientSerializer:
# Include the bundled aggregations in the event.
if serialized_aggregations:
serialized_event["unsigned"].setdefault("m.relations", {}).update(
serialized_aggregations
)
# There is likely already an "unsigned" field, but a filter might
# have stripped it off (via the event_fields option). The server is
# allowed to return additional fields, so add it back.
serialized_event.setdefault("unsigned", {}).setdefault(
"m.relations", {}
).update(serialized_aggregations)
def serialize_events(
self,

View File

@@ -289,7 +289,7 @@ class OpenIdUserInfo(BaseFederationServlet):
return 200, {"sub": user_id}
DEFAULT_SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
"federation": FEDERATION_SERVLET_CLASSES,
"room_list": (PublicRoomList,),
"group_server": GROUP_SERVER_SERVLET_CLASSES,
@@ -298,6 +298,10 @@ DEFAULT_SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
"openid": (OpenIdUserInfo,),
}
DEFAULT_SERVLET_GROUPS = ("federation", "room_list", "openid")
GROUP_SERVLET_GROUPS = ("group_server", "group_local", "group_attestation")
def register_servlets(
hs: "HomeServer",
@@ -320,16 +324,19 @@ def register_servlets(
Defaults to ``DEFAULT_SERVLET_GROUPS``.
"""
if not servlet_groups:
servlet_groups = DEFAULT_SERVLET_GROUPS.keys()
servlet_groups = DEFAULT_SERVLET_GROUPS
# Only allow the groups servlets if the deprecated groups feature is enabled.
if hs.config.experimental.groups_enabled:
servlet_groups = servlet_groups + GROUP_SERVLET_GROUPS
for servlet_group in servlet_groups:
# Skip unknown servlet groups.
if servlet_group not in DEFAULT_SERVLET_GROUPS:
if servlet_group not in SERVLET_GROUPS:
raise RuntimeError(
f"Attempting to register unknown federation servlet: '{servlet_group}'"
)
for servletclass in DEFAULT_SERVLET_GROUPS[servlet_group]:
for servletclass in SERVLET_GROUPS[servlet_group]:
# Only allow the `/timestamp_to_event` servlet if msc3030 is enabled
if (
servletclass == FederationTimestampLookupServlet

View File

@@ -371,7 +371,6 @@ class DeviceHandler(DeviceWorkerHandler):
log_kv(
{"reason": "User doesn't have device id.", "device_id": device_id}
)
pass
else:
raise
@@ -414,7 +413,6 @@ class DeviceHandler(DeviceWorkerHandler):
# no match
set_tag("error", True)
set_tag("reason", "User doesn't have that device id.")
pass
else:
raise

View File

@@ -45,6 +45,7 @@ from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
from synapse.util import Clock, json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.templates import _localpart_from_email_filter
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -1228,6 +1229,7 @@ class OidcSessionData:
class UserAttributeDict(TypedDict):
localpart: Optional[str]
confirm_localpart: bool
display_name: Optional[str]
emails: List[str]
@@ -1307,6 +1309,11 @@ def jinja_finalize(thing: Any) -> Any:
env = Environment(finalize=jinja_finalize)
env.filters.update(
{
"localpart_from_email": _localpart_from_email_filter,
}
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -1316,6 +1323,7 @@ class JinjaOidcMappingConfig:
display_name_template: Optional[Template]
email_template: Optional[Template]
extra_attributes: Dict[str, Template]
confirm_localpart: bool = False
class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
@@ -1357,12 +1365,17 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
"invalid jinja template", path=["extra_attributes", key]
) from e
confirm_localpart = config.get("confirm_localpart") or False
if not isinstance(confirm_localpart, bool):
raise ConfigError("must be a bool", path=["confirm_localpart"])
return JinjaOidcMappingConfig(
subject_claim=subject_claim,
localpart_template=localpart_template,
display_name_template=display_name_template,
email_template=email_template,
extra_attributes=extra_attributes,
confirm_localpart=confirm_localpart,
)
def get_remote_user_id(self, userinfo: UserInfo) -> str:
@@ -1398,7 +1411,10 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
emails.append(email)
return UserAttributeDict(
localpart=localpart, display_name=display_name, emails=emails
localpart=localpart,
display_name=display_name,
emails=emails,
confirm_localpart=self._config.confirm_localpart,
)
async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict:

View File

@@ -350,7 +350,7 @@ class PaginationHandler:
"""
self._purges_in_progress_by_room.add(room_id)
try:
with await self.pagination_lock.write(room_id):
async with self.pagination_lock.write(room_id):
await self.storage.purge_events.purge_history(
room_id, token, delete_local_events
)
@@ -406,7 +406,7 @@ class PaginationHandler:
room_id: room to be purged
force: set true to skip checking for joined users.
"""
with await self.pagination_lock.write(room_id):
async with self.pagination_lock.write(room_id):
# first check that we have no users in this room
if not force:
joined = await self.store.is_host_joined(room_id, self._server_name)
@@ -448,7 +448,7 @@ class PaginationHandler:
room_token = from_token.room_key
with await self.pagination_lock.read(room_id):
async with self.pagination_lock.read(room_id):
(
membership,
member_event_id,
@@ -615,7 +615,7 @@ class PaginationHandler:
self._purges_in_progress_by_room.add(room_id)
try:
with await self.pagination_lock.write(room_id):
async with self.pagination_lock.write(room_id):
self._delete_by_id[delete_id].status = DeleteStatus.STATUS_SHUTTING_DOWN
self._delete_by_id[
delete_id

View File

@@ -267,7 +267,6 @@ class BasePresenceHandler(abc.ABC):
is_syncing: Whether or not the user is now syncing
sync_time_msec: Time in ms when the user was last syncing
"""
pass
async def update_external_syncs_clear(self, process_id: str) -> None:
"""Marks all users that had been marked as syncing by a given process
@@ -277,7 +276,6 @@ class BasePresenceHandler(abc.ABC):
This is a no-op when presence is handled by a different worker.
"""
pass
async def process_replication_rows(
self, stream_name: str, instance_name: str, token: int, rows: list

View File

@@ -132,6 +132,7 @@ class UserAttributes:
# if `None`, the mapper has not picked a userid, and the user should be prompted to
# enter one.
localpart: Optional[str]
confirm_localpart: bool = False
display_name: Optional[str] = None
emails: Collection[str] = attr.Factory(list)
@@ -561,9 +562,10 @@ class SsoHandler:
# Must provide either attributes or session, not both
assert (attributes is not None) != (session is not None)
if (attributes and attributes.localpart is None) or (
session and session.chosen_localpart is None
):
if (
attributes
and (attributes.localpart is None or attributes.confirm_localpart is True)
) or (session and session.chosen_localpart is None):
return b"/_synapse/client/pick_username/account_details"
elif self._consent_at_registration and not (
session and session.terms_accepted_version

View File

@@ -160,8 +160,9 @@ class FollowerTypingHandler:
"""Should be called whenever we receive updates for typing stream."""
if self._latest_room_serial > token:
# The master has gone backwards. To prevent inconsistent data, just
# clear everything.
# The typing worker has gone backwards (e.g. it may have restarted).
# To prevent inconsistent data, just clear everything.
logger.info("Typing handler stream went backwards; resetting")
self._reset()
# Set the latest serial token to whatever the server gave us.

View File

@@ -120,7 +120,6 @@ class ByteParser(ByteWriteable, Generic[T], abc.ABC):
"""Called when response has finished streaming and the parser should
return the final result (or error).
"""
pass
@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -601,7 +600,6 @@ class MatrixFederationHttpClient:
response.code,
response_phrase,
)
pass
else:
logger.info(
"{%s} [%s] Got response headers: %d %s",

View File

@@ -233,7 +233,6 @@ class HttpServer(Protocol):
servlet_classname (str): The name of the handler to be used in prometheus
and opentracing logs.
"""
pass
class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):

View File

@@ -169,7 +169,7 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "content.msgtype",
"pattern": "m.notice",
"_id": "_suppress_notices",
"_cache_key": "_suppress_notices",
}
],
"actions": ["dont_notify"],
@@ -183,13 +183,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "type",
"pattern": "m.room.member",
"_id": "_member",
"_cache_key": "_member",
},
{
"kind": "event_match",
"key": "content.membership",
"pattern": "invite",
"_id": "_invite_member",
"_cache_key": "_invite_member",
},
{"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
],
@@ -212,7 +212,7 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "type",
"pattern": "m.room.member",
"_id": "_member",
"_cache_key": "_member",
}
],
"actions": ["dont_notify"],
@@ -237,12 +237,12 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "content.body",
"pattern": "@room",
"_id": "_roomnotif_content",
"_cache_key": "_roomnotif_content",
},
{
"kind": "sender_notification_permission",
"key": "room",
"_id": "_roomnotif_pl",
"_cache_key": "_roomnotif_pl",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": True}],
@@ -254,13 +254,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "type",
"pattern": "m.room.tombstone",
"_id": "_tombstone",
"_cache_key": "_tombstone",
},
{
"kind": "event_match",
"key": "state_key",
"pattern": "",
"_id": "_tombstone_statekey",
"_cache_key": "_tombstone_statekey",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": True}],
@@ -272,7 +272,7 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "type",
"pattern": "m.reaction",
"_id": "_reaction",
"_cache_key": "_reaction",
}
],
"actions": ["dont_notify"],
@@ -288,7 +288,7 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "type",
"pattern": "m.call.invite",
"_id": "_call",
"_cache_key": "_call",
}
],
"actions": [
@@ -302,12 +302,12 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/underride/.m.rule.room_one_to_one",
"conditions": [
{"kind": "room_member_count", "is": "2", "_id": "member_count"},
{"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.message",
"_id": "_message",
"_cache_key": "_message",
},
],
"actions": [
@@ -321,12 +321,12 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
{
"rule_id": "global/underride/.m.rule.encrypted_room_one_to_one",
"conditions": [
{"kind": "room_member_count", "is": "2", "_id": "member_count"},
{"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.encrypted",
"_id": "_encrypted",
"_cache_key": "_encrypted",
},
],
"actions": [
@@ -342,7 +342,7 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "type",
"pattern": "m.room.message",
"_id": "_message",
"_cache_key": "_message",
}
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
@@ -356,7 +356,7 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "type",
"pattern": "m.room.encrypted",
"_id": "_encrypted",
"_cache_key": "_encrypted",
}
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],
@@ -368,19 +368,19 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
"kind": "event_match",
"key": "type",
"pattern": "im.vector.modular.widgets",
"_id": "_type_modular_widgets",
"_cache_key": "_type_modular_widgets",
},
{
"kind": "event_match",
"key": "content.type",
"pattern": "jitsi",
"_id": "_content_type_jitsi",
"_cache_key": "_content_type_jitsi",
},
{
"kind": "event_match",
"key": "state_key",
"pattern": "*",
"_id": "_is_state_event",
"_cache_key": "_is_state_event",
},
],
"actions": ["notify", {"set_tweak": "highlight", "value": False}],

View File

@@ -274,17 +274,17 @@ def _condition_checker(
cache: Dict[str, bool],
) -> bool:
for cond in conditions:
_id = cond.get("_id", None)
if _id:
res = cache.get(_id, None)
_cache_key = cond.get("_cache_key", None)
if _cache_key:
res = cache.get(_cache_key, None)
if res is False:
return False
elif res is True:
continue
res = evaluator.matches(cond, uid, display_name)
if _id:
cache[_id] = bool(res)
if _cache_key:
cache[_cache_key] = bool(res)
if not res:
return False

View File

@@ -40,7 +40,7 @@ def format_push_rules_for_user(
# Remove internal stuff.
for c in r["conditions"]:
c.pop("_id", None)
c.pop("_cache_key", None)
pattern_type = c.pop("pattern_type", None)
if pattern_type == "user_id":

View File

@@ -18,6 +18,7 @@ from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, TypeVar
import bleach
import jinja2
from markupsafe import Markup
from synapse.api.constants import EventTypes, Membership, RoomTypes
from synapse.api.errors import StoreError
@@ -867,7 +868,7 @@ class Mailer:
)
def safe_markup(raw_html: str) -> jinja2.Markup:
def safe_markup(raw_html: str) -> Markup:
"""
Sanitise a raw HTML string to a set of allowed tags and attributes, and linkify any bare URLs.
@@ -877,7 +878,7 @@ def safe_markup(raw_html: str) -> jinja2.Markup:
Returns:
A Markup object ready to safely use in a Jinja template.
"""
return jinja2.Markup(
return Markup(
bleach.linkify(
bleach.clean(
raw_html,
@@ -891,7 +892,7 @@ def safe_markup(raw_html: str) -> jinja2.Markup:
)
def safe_text(raw_text: str) -> jinja2.Markup:
def safe_text(raw_text: str) -> Markup:
"""
Sanitise text (escape any HTML tags), and then linkify any bare URLs.
@@ -901,7 +902,7 @@ def safe_text(raw_text: str) -> jinja2.Markup:
Returns:
A Markup object ready to safely use in a Jinja template.
"""
return jinja2.Markup(
return Markup(
bleach.linkify(bleach.clean(raw_text, tags=[], attributes=[], strip=False))
)

View File

@@ -75,6 +75,7 @@ REQUIREMENTS = [
"attrs>=19.2.0,!=21.1.0",
"netaddr>=0.7.18",
"Jinja2>=2.9",
"MarkupSafe>=2.0",
"bleach>=1.4.3",
# We use `ParamSpec`, which was added in `typing-extensions` 3.10.0.0.
"typing-extensions>=3.10.0",

View File

@@ -709,7 +709,7 @@ class ReplicationCommandHandler:
self.send_command(RemoteServerUpCommand(server))
def stream_update(self, stream_name: str, token: Optional[int], data: Any) -> None:
"""Called when a new update is available to stream to clients.
"""Called when a new update is available to stream to Redis subscribers.
We need to check if the client is interested in the stream or not
"""

View File

@@ -67,8 +67,8 @@ class ReplicationStreamProtocolFactory(ServerFactory):
class ReplicationStreamer:
"""Handles replication connections.
This needs to be poked when new replication data may be available. When new
data is available it will propagate to all connected clients.
This needs to be poked when new replication data may be available.
When new data is available it will propagate to all Redis subscribers.
"""
def __init__(self, hs: "HomeServer"):
@@ -109,7 +109,7 @@ class ReplicationStreamer:
def on_notifier_poke(self) -> None:
"""Checks if there is actually any new data and sends it to the
connections if there are.
Redis subscribers if there are.
This should get called each time new data is available, even if it
is currently being executed, so that nothing gets missed

View File

@@ -316,7 +316,19 @@ class PresenceFederationStream(Stream):
class TypingStream(Stream):
@attr.s(slots=True, frozen=True, auto_attribs=True)
class TypingStreamRow:
"""
An entry in the typing stream.
Describes all the users that are 'typing' right now in one room.
When a user stops typing, it will be streamed as a new update with that
user absent; you can think of the `user_ids` list as overwriting the
entire list that was there previously.
"""
# The room that this update is for.
room_id: str
# All the users that are 'typing' right now in the specified room.
user_ids: List[str]
NAME = "typing"

View File

@@ -130,22 +130,22 @@
</head>
<body>
<header>
<h1>Your account is nearly ready</h1>
<p>Check your details before creating an account on {{ server_name }}</p>
<h1>Create your account</h1>
<p>This is required. Continue to create your account on {{ server_name }}. You can't change this later.</p>
</header>
<main>
<form method="post" class="form__input" id="form">
<div class="username_input" id="username_input">
<label for="field-username">Username</label>
<label for="field-username">Username (required)</label>
<div class="prefix">@</div>
<input type="text" name="username" id="field-username" autofocus>
<input type="text" name="username" id="field-username" value="{{ user_attributes.localpart }}" autofocus>
<div class="postfix">:{{ server_name }}</div>
</div>
<output for="username_input" id="field-username-output"></output>
<input type="submit" value="Continue" class="primary-button">
{% if user_attributes.avatar_url or user_attributes.display_name or user_attributes.emails %}
<section class="idp-pick-details">
<h2>{% if idp.idp_icon %}<img src="{{ idp.idp_icon | mxc_to_http(24, 24) }}"/>{% endif %}Information from {{ idp.idp_name }}</h2>
<h2>{% if idp.idp_icon %}<img src="{{ idp.idp_icon | mxc_to_http(24, 24) }}"/>{% endif %}Optional data from {{ idp.idp_name }}</h2>
{% if user_attributes.avatar_url %}
<label class="idp-detail idp-avatar" for="idp-avatar">
<div class="check-row">

View File

@@ -62,7 +62,7 @@ function validateUsername(username) {
usernameField.parentElement.classList.remove("invalid");
usernameOutput.classList.remove("error");
if (!username) {
return reportError("Please provide a username");
return reportError("This is required. Please provide a username");
}
if (username.length > 255) {
return reportError("Too long, please choose something shorter");

View File

@@ -15,5 +15,5 @@
</g>
</g>
</svg>
<p>An open network for secure, decentralized communication.<br>© 2021 The Matrix.org Foundation C.I.C.</p>
<p>An open network for secure, decentralized communication.<br>© 2022 The Matrix.org Foundation C.I.C.</p>
</footer>

View File

@@ -118,7 +118,8 @@ class ClientRestResource(JsonResource):
thirdparty.register_servlets(hs, client_resource)
sendtodevice.register_servlets(hs, client_resource)
user_directory.register_servlets(hs, client_resource)
groups.register_servlets(hs, client_resource)
if hs.config.experimental.groups_enabled:
groups.register_servlets(hs, client_resource)
room_upgrade_rest_servlet.register_servlets(hs, client_resource)
room_batch.register_servlets(hs, client_resource)
capabilities.register_servlets(hs, client_resource)

View File

@@ -293,7 +293,8 @@ def register_servlets_for_client_rest_resource(
ResetPasswordRestServlet(hs).register(http_server)
SearchUsersRestServlet(hs).register(http_server)
UserRegisterServlet(hs).register(http_server)
DeleteGroupAdminRestServlet(hs).register(http_server)
if hs.config.experimental.groups_enabled:
DeleteGroupAdminRestServlet(hs).register(http_server)
AccountValidityRenewServlet(hs).register(http_server)
# Load the media repo ones if we're using them. Otherwise load the servlets which

View File

@@ -298,7 +298,6 @@ class Responder:
Returns:
Resolves once the response has finished being written
"""
pass
def __enter__(self) -> None:
pass

View File

@@ -92,12 +92,20 @@ class AccountDetailsResource(DirectServeHtmlResource):
self._sso_handler.render_error(request, "bad_session", e.msg, code=e.code)
return
# The configuration might mandate going through this step to validate an
# automatically generated localpart, so session.chosen_localpart might already
# be set.
localpart = ""
if session.chosen_localpart is not None:
localpart = session.chosen_localpart
idp_id = session.auth_provider_id
template_params = {
"idp": self._sso_handler.get_identity_providers()[idp_id],
"user_attributes": {
"display_name": session.display_name,
"emails": session.emails,
"localpart": localpart,
},
}

View File

@@ -328,7 +328,6 @@ class HomeServer(metaclass=abc.ABCMeta):
Does nothing in this base class; overridden in derived classes to start the
appropriate listeners.
"""
pass
def setup_background_tasks(self) -> None:
"""

View File

@@ -60,18 +60,19 @@ class _BackgroundUpdateHandler:
class _BackgroundUpdateContextManager:
BACKGROUND_UPDATE_INTERVAL_MS = 1000
BACKGROUND_UPDATE_DURATION_MS = 100
def __init__(self, sleep: bool, clock: Clock):
def __init__(
self, sleep: bool, clock: Clock, sleep_duration_ms: int, update_duration: int
):
self._sleep = sleep
self._clock = clock
self._sleep_duration_ms = sleep_duration_ms
self._update_duration_ms = update_duration
async def __aenter__(self) -> int:
if self._sleep:
await self._clock.sleep(self.BACKGROUND_UPDATE_INTERVAL_MS / 1000)
await self._clock.sleep(self._sleep_duration_ms / 1000)
return self.BACKGROUND_UPDATE_DURATION_MS
return self._update_duration_ms
async def __aexit__(self, *exc) -> None:
pass
@@ -133,9 +134,6 @@ class BackgroundUpdater:
process and autotuning the batch size.
"""
MINIMUM_BACKGROUND_BATCH_SIZE = 1
DEFAULT_BACKGROUND_BATCH_SIZE = 100
def __init__(self, hs: "HomeServer", database: "DatabasePool"):
self._clock = hs.get_clock()
self.db_pool = database
@@ -160,6 +158,14 @@ class BackgroundUpdater:
# enable/disable background updates via the admin API.
self.enabled = True
self.minimum_background_batch_size = hs.config.background_updates.min_batch_size
self.default_background_batch_size = (
hs.config.background_updates.default_batch_size
)
self.update_duration_ms = hs.config.background_updates.update_duration_ms
self.sleep_duration_ms = hs.config.background_updates.sleep_duration_ms
self.sleep_enabled = hs.config.background_updates.sleep_enabled
def register_update_controller_callbacks(
self,
on_update: ON_UPDATE_CALLBACK,
@@ -216,7 +222,9 @@ class BackgroundUpdater:
if self._on_update_callback is not None:
return self._on_update_callback(update_name, database_name, oneshot)
return _BackgroundUpdateContextManager(sleep, self._clock)
return _BackgroundUpdateContextManager(
sleep, self._clock, self.sleep_duration_ms, self.update_duration_ms
)
async def _default_batch_size(self, update_name: str, database_name: str) -> int:
"""The batch size to use for the first iteration of a new background
@@ -225,7 +233,7 @@ class BackgroundUpdater:
if self._default_batch_size_callback is not None:
return await self._default_batch_size_callback(update_name, database_name)
return self.DEFAULT_BACKGROUND_BATCH_SIZE
return self.default_background_batch_size
async def _min_batch_size(self, update_name: str, database_name: str) -> int:
"""A lower bound on the batch size of a new background update.
@@ -235,7 +243,7 @@ class BackgroundUpdater:
if self._min_batch_size_callback is not None:
return await self._min_batch_size_callback(update_name, database_name)
return self.MINIMUM_BACKGROUND_BATCH_SIZE
return self.minimum_background_batch_size
def get_current_update(self) -> Optional[BackgroundUpdatePerformance]:
"""Returns the current background update, if any."""
@@ -254,9 +262,12 @@ class BackgroundUpdater:
if self.enabled:
# if we start a new background update, not all updates are done.
self._all_done = False
run_as_background_process("background_updates", self.run_background_updates)
sleep = self.sleep_enabled
run_as_background_process(
"background_updates", self.run_background_updates, sleep
)
async def run_background_updates(self, sleep: bool = True) -> None:
async def run_background_updates(self, sleep: bool) -> None:
if self._running or not self.enabled:
return
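Taken together, the new attributes map onto a `background_updates` section of the homeserver config. A sketch using the built-in defaults, with key names taken from the `override_config` in the new test case further down:

background_updates:
  background_update_duration_ms: 100
  sleep_enabled: true
  sleep_duration_ms: 1000
  min_batch_size: 1
  default_batch_size: 100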

View File

@@ -48,8 +48,6 @@ class ExternalIDReuseException(Exception):
"""Exception if writing an external id for a user fails,
because this external id is given to another user."""
pass
@attr.s(frozen=True, slots=True, auto_attribs=True)
class TokenLookupResult:

View File

@@ -125,9 +125,6 @@ class SearchBackgroundUpdateStore(SearchWorkerStore):
):
super().__init__(database, db_conn, hs)
if not hs.config.server.enable_search:
return
self.db_pool.updates.register_background_update_handler(
self.EVENT_SEARCH_UPDATE_NAME, self._background_reindex_search
)
@@ -243,9 +240,13 @@ class SearchBackgroundUpdateStore(SearchWorkerStore):
return len(event_search_rows)
result = await self.db_pool.runInteraction(
self.EVENT_SEARCH_UPDATE_NAME, reindex_search_txn
)
if self.hs.config.server.enable_search:
result = await self.db_pool.runInteraction(
self.EVENT_SEARCH_UPDATE_NAME, reindex_search_txn
)
else:
# Don't index anything if search is not enabled.
result = 0
if not result:
await self.db_pool.updates._end_background_update(

View File

@@ -36,7 +36,6 @@ def run_upgrade(cur, database_engine, config, *args, **kwargs):
config_files = config.appservice.app_service_config_files
except AttributeError:
logger.warning("Could not get app_service_config_files from config")
pass
appservices = load_appservices(config.server.server_name, config_files)

View File

@@ -18,9 +18,10 @@ import collections
import inspect
import itertools
import logging
from contextlib import contextmanager
from contextlib import asynccontextmanager, contextmanager
from typing import (
Any,
AsyncIterator,
Awaitable,
Callable,
Collection,
@@ -40,7 +41,7 @@ from typing import (
)
import attr
from typing_extensions import ContextManager, Literal
from typing_extensions import AsyncContextManager, Literal
from twisted.internet import defer
from twisted.internet.defer import CancelledError
@@ -491,7 +492,7 @@ class ReadWriteLock:
Example:
with await read_write_lock.read("test_key"):
async with read_write_lock.read("test_key"):
# do some work
"""
@@ -514,22 +515,24 @@ class ReadWriteLock:
# Latest writer queued
self.key_to_current_writer: Dict[str, defer.Deferred] = {}
async def read(self, key: str) -> ContextManager:
new_defer: "defer.Deferred[None]" = defer.Deferred()
def read(self, key: str) -> AsyncContextManager:
@asynccontextmanager
async def _ctx_manager() -> AsyncIterator[None]:
new_defer: "defer.Deferred[None]" = defer.Deferred()
curr_readers = self.key_to_current_readers.setdefault(key, set())
curr_writer = self.key_to_current_writer.get(key, None)
curr_readers = self.key_to_current_readers.setdefault(key, set())
curr_writer = self.key_to_current_writer.get(key, None)
curr_readers.add(new_defer)
curr_readers.add(new_defer)
# We wait for the latest writer to finish writing. We can safely ignore
# any existing readers... as they're readers.
if curr_writer:
await make_deferred_yieldable(curr_writer)
@contextmanager
def _ctx_manager() -> Iterator[None]:
try:
# We wait for the latest writer to finish writing. We can safely ignore
# any existing readers... as they're readers.
# May raise a `CancelledError` if the `Deferred` wrapping us is
# cancelled. The `Deferred` we are waiting on must not be cancelled,
# since we do not own it.
if curr_writer:
await make_deferred_yieldable(stop_cancellation(curr_writer))
yield
finally:
with PreserveLoggingContext():
@@ -538,29 +541,35 @@ class ReadWriteLock:
return _ctx_manager()
async def write(self, key: str) -> ContextManager:
new_defer: "defer.Deferred[None]" = defer.Deferred()
def write(self, key: str) -> AsyncContextManager:
@asynccontextmanager
async def _ctx_manager() -> AsyncIterator[None]:
new_defer: "defer.Deferred[None]" = defer.Deferred()
curr_readers = self.key_to_current_readers.get(key, set())
curr_writer = self.key_to_current_writer.get(key, None)
curr_readers = self.key_to_current_readers.get(key, set())
curr_writer = self.key_to_current_writer.get(key, None)
# We wait on all latest readers and writer.
to_wait_on = list(curr_readers)
if curr_writer:
to_wait_on.append(curr_writer)
# We wait on all latest readers and writer.
to_wait_on = list(curr_readers)
if curr_writer:
to_wait_on.append(curr_writer)
# We can clear the list of current readers since the new writer waits
# for them to finish.
curr_readers.clear()
self.key_to_current_writer[key] = new_defer
# We can clear the list of current readers since `new_defer` waits
# for them to finish.
curr_readers.clear()
self.key_to_current_writer[key] = new_defer
await make_deferred_yieldable(defer.gatherResults(to_wait_on))
@contextmanager
def _ctx_manager() -> Iterator[None]:
to_wait_on_defer = defer.gatherResults(to_wait_on)
try:
# Wait for all current readers and the latest writer to finish.
# May raise a `CancelledError` immediately after the wait if the
# `Deferred` wrapping us is cancelled. We must only release the lock
# once we have acquired it, hence the use of `delay_cancellation`
# rather than `stop_cancellation`.
await make_deferred_yieldable(delay_cancellation(to_wait_on_defer))
yield
finally:
# Release the lock.
with PreserveLoggingContext():
new_defer.callback(None)
# `self.key_to_current_writer[key]` may be missing if there was another
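The net effect of the conversion is that callers switch from `with await lock.read(key)` to a plain `async with`. A minimal usage sketch (the key name and the work inside the blocks are placeholders):

lock = ReadWriteLock()

async def do_read() -> None:
    async with lock.read("some_key"):
        ...  # many readers may hold the lock concurrently

async def do_write() -> None:
    async with lock.write("some_key"):
        ...  # waits for current readers/writer; later readers queue behind us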
@@ -686,12 +695,48 @@ def stop_cancellation(deferred: "defer.Deferred[T]") -> "defer.Deferred[T]":
Synapse logcontext rules.
Returns:
A new `Deferred`, which will contain the result of the original `Deferred`,
but will not propagate cancellation through to the original. When cancelled,
the new `Deferred` will fail with a `CancelledError` and will not follow the
Synapse logcontext rules. `make_deferred_yieldable` should be used to wrap
the new `Deferred`.
A new `Deferred`, which will contain the result of the original `Deferred`.
The new `Deferred` will not propagate cancellation through to the original.
When cancelled, the new `Deferred` will fail with a `CancelledError`.
The new `Deferred` will not follow the Synapse logcontext rules and should be
wrapped with `make_deferred_yieldable`.
"""
new_deferred: defer.Deferred[T] = defer.Deferred()
new_deferred: "defer.Deferred[T]" = defer.Deferred()
deferred.chainDeferred(new_deferred)
return new_deferred
def delay_cancellation(deferred: "defer.Deferred[T]") -> "defer.Deferred[T]":
"""Delay cancellation of a `Deferred` until it resolves.
Has the same effect as `stop_cancellation`, but the returned `Deferred` will not
resolve with a `CancelledError` until the original `Deferred` resolves.
Args:
deferred: The `Deferred` to protect against cancellation. May optionally follow
the Synapse logcontext rules.
Returns:
A new `Deferred`, which will contain the result of the original `Deferred`.
The new `Deferred` will not propagate cancellation through to the original.
When cancelled, the new `Deferred` will wait until the original `Deferred`
resolves before failing with a `CancelledError`.
The new `Deferred` will follow the Synapse logcontext rules if `deferred`
follows the Synapse logcontext rules. Otherwise the new `Deferred` should be
wrapped with `make_deferred_yieldable`.
"""
def handle_cancel(new_deferred: "defer.Deferred[T]") -> None:
# before the new deferred is cancelled, we `pause` it to stop the cancellation
# propagating. we then `unpause` it once the wrapped deferred completes, to
# propagate the exception.
new_deferred.pause()
new_deferred.errback(Failure(CancelledError()))
deferred.addBoth(lambda _: new_deferred.unpause())
new_deferred: "defer.Deferred[T]" = defer.Deferred(handle_cancel)
deferred.chainDeferred(new_deferred)
return new_deferred
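A sketch of the difference in behaviour (only the two utilities and Synapse's `run_in_background` helper are real; `do_expensive_work` is hypothetical). Cancelling a `stop_cancellation` wrapper errbacks immediately, whereas a `delay_cancellation` wrapper holds the `CancelledError` back until the wrapped `Deferred` resolves, so the protected work can finish with its logcontext intact:

d = run_in_background(do_expensive_work)  # do_expensive_work is hypothetical
protected = delay_cancellation(d)
protected.cancel()
# `d` is not cancelled. `protected` only errbacks with CancelledError once
# `do_expensive_work` has completed, so its logcontext is never reused after
# being marked finished.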

View File

@@ -41,6 +41,7 @@ from twisted.python.failure import Failure
from synapse.logging.context import make_deferred_yieldable, preserve_fn
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import delay_cancellation
from synapse.util.caches.deferred_cache import DeferredCache
from synapse.util.caches.lrucache import LruCache
@@ -350,6 +351,11 @@ class DeferredCacheDescriptor(_CacheDescriptorBase):
ret = defer.maybeDeferred(preserve_fn(self.orig), obj, *args, **kwargs)
ret = cache.set(cache_key, ret, callback=invalidate_callback)
# We started a new call to `self.orig`, so we must always wait for it to
# complete. Otherwise we might mark our current logging context as
# finished while `self.orig` is still using it in the background.
ret = delay_cancellation(ret)
return make_deferred_yieldable(ret)
wrapped = cast(_CachedFunction, _wrapped)
@@ -510,6 +516,11 @@ class DeferredCacheListDescriptor(_CacheDescriptorBase):
d = defer.gatherResults(cached_defers, consumeErrors=True).addCallbacks(
lambda _: results, unwrapFirstError
)
if missing:
# We started a new call to `self.orig`, so we must always wait for it to
# complete. Otherwise we might mark our current logging context as
# finished while `self.orig` is still using it in the background.
d = delay_cancellation(d)
return make_deferred_yieldable(d)
else:
return defer.succeed(results)
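Why this matters for `@cached`: a pending cache entry can be shared by several callers, so one caller cancelling its `Deferred` must not tear down the lookup that others are waiting on. A rough sketch of the behaviour this enables (the store class and method are illustrative):

class MyStore:
    @cached()
    async def get_thing(self, thing_id: str) -> str:
        ...  # hypothetical database lookup, shared via the cache

d1 = store.get_thing("a")
d2 = store.get_thing("a")  # joins the same pending cache entry
d1.cancel()
# d1 eventually fails with CancelledError, but `delay_cancellation` keeps the
# underlying lookup running, so d2 still receives the result.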

View File

@@ -22,8 +22,6 @@ class TreeCacheNode(dict):
leaves.
"""
pass
class TreeCache:
"""

View File

@@ -64,6 +64,7 @@ def build_jinja_env(
{
"format_ts": _format_ts_filter,
"mxc_to_http": _create_mxc_to_http_filter(config.server.public_baseurl),
"localpart_from_email": _localpart_from_email_filter,
}
)
@@ -112,3 +113,7 @@ def _create_mxc_to_http_filter(
def _format_ts_filter(value: int, format: str) -> str:
return time.strftime(format, time.localtime(value / 1000))
def _localpart_from_email_filter(address: str) -> str:
return address.rsplit("@", 1)[0]
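The filter is registered alongside the existing ones for all Synapse Jinja2 templates, so a template can derive a default username from an email attribute, e.g.:

{# "alice@example.com" -> "alice" #}
{{ user_attributes.emails[0] | localpart_from_email }}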

View File

@@ -0,0 +1,58 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import yaml
from synapse.storage.background_updates import BackgroundUpdater
from tests.unittest import HomeserverTestCase, override_config
class BackgroundUpdateConfigTestCase(HomeserverTestCase):
# Tests that the default values in the config are correctly loaded. Note that the default
# values are loaded when the corresponding config options are commented out, which is why there isn't
# a config specified here.
def test_default_configuration(self):
background_updater = BackgroundUpdater(
self.hs, self.hs.get_datastores().main.db_pool
)
self.assertEqual(background_updater.minimum_background_batch_size, 1)
self.assertEqual(background_updater.default_background_batch_size, 100)
self.assertEqual(background_updater.sleep_enabled, True)
self.assertEqual(background_updater.sleep_duration_ms, 1000)
self.assertEqual(background_updater.update_duration_ms, 100)
# Tests that non-default values for the config options are properly picked up and passed on.
@override_config(
yaml.safe_load(
"""
background_updates:
background_update_duration_ms: 1000
sleep_enabled: false
sleep_duration_ms: 600
min_batch_size: 5
default_batch_size: 50
"""
)
)
def test_custom_configuration(self):
background_updater = BackgroundUpdater(
self.hs, self.hs.get_datastores().main.db_pool
)
self.assertEqual(background_updater.minimum_background_batch_size, 5)
self.assertEqual(background_updater.default_background_batch_size, 50)
self.assertEqual(background_updater.sleep_enabled, False)
self.assertEqual(background_updater.sleep_duration_ms, 600)
self.assertEqual(background_updater.update_duration_ms, 1000)

View File

@@ -15,11 +15,15 @@
from collections import Counter
from unittest.mock import Mock
from twisted.test.proto_helpers import MemoryReactor
import synapse.rest.admin
import synapse.storage
from synapse.api.constants import EventTypes, JoinRules
from synapse.api.room_versions import RoomVersions
from synapse.rest.client import knock, login, room
from synapse.server import HomeServer
from synapse.util import Clock
from tests import unittest
@@ -32,7 +36,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
knock.register_servlets,
]
def prepare(self, reactor, clock, hs):
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.admin_handler = hs.get_admin_handler()
self.user1 = self.register_user("user1", "password")
@@ -41,7 +45,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
self.user2 = self.register_user("user2", "password")
self.token2 = self.login("user2", "password")
def test_single_public_joined_room(self):
def test_single_public_joined_room(self) -> None:
"""Test that we write *all* events for a public room"""
room_id = self.helper.create_room_as(
self.user1, tok=self.token1, is_public=True
@@ -74,7 +78,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
self.assertEqual(counter[(EventTypes.Member, self.user2)], 1)
def test_single_private_joined_room(self):
def test_single_private_joined_room(self) -> None:
"""Tests that we correctly write state when we can't see all events in
a room.
"""
@@ -112,7 +116,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
self.assertEqual(counter[(EventTypes.Member, self.user2)], 1)
def test_single_left_room(self):
def test_single_left_room(self) -> None:
"""Tests that we don't see events in the room after we leave."""
room_id = self.helper.create_room_as(self.user1, tok=self.token1)
self.helper.send(room_id, body="Hello!", tok=self.token1)
@@ -144,7 +148,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
self.assertEqual(counter[(EventTypes.Member, self.user2)], 2)
def test_single_left_rejoined_private_room(self):
def test_single_left_rejoined_private_room(self) -> None:
"""Tests that see the correct events in private rooms when we
repeatedly join and leave.
"""
@@ -185,7 +189,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
self.assertEqual(counter[(EventTypes.Member, self.user2)], 3)
def test_invite(self):
def test_invite(self) -> None:
"""Tests that pending invites get handled correctly."""
room_id = self.helper.create_room_as(self.user1, tok=self.token1)
self.helper.send(room_id, body="Hello!", tok=self.token1)
@@ -204,7 +208,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
self.assertEqual(args[1].content["membership"], "invite")
self.assertTrue(args[2]) # Assert there is at least one bit of state
def test_knock(self):
def test_knock(self) -> None:
"""Tests that knock get handled correctly."""
# create a knockable v7 room
room_id = self.helper.create_room_as(

Some files were not shown because too many files have changed in this diff.