Compare commits
2 commits: v1.55.2 ... anoa/docs_

| Author | SHA1 | Date |
|---|---|---|
|  | 3360be1829 |  |
|  | 19ca533bcc |  |

CHANGES.md (120 lines changed)
@@ -1,123 +1,3 @@
Synapse 1.55.2 (2022-03-24)
===========================

This patch version reverts the earlier fixes from Synapse 1.55.1, which could cause problems in certain deployments, and instead adds a cap to the version of Jinja to be installed. Again, this is to fix an incompatibility with version 3.1.0 of the [Jinja](https://pypi.org/project/Jinja2/) library, and again, deployments of Synapse using the `matrixdotorg/synapse` Docker image or Debian packages from packages.matrix.org are not affected.

Internal Changes
----------------

- Pin Jinja to <3.1.0, as Synapse fails to start with Jinja 3.1.0. ([\#12297](https://github.com/matrix-org/synapse/issues/12297))
- Revert changes from 1.55.1 as they caused problems with older versions of Jinja. ([\#12296](https://github.com/matrix-org/synapse/issues/12296))


Synapse 1.55.1 (2022-03-24)
===========================

This is a patch release that fixes an incompatibility with version 3.1.0 of the [Jinja](https://pypi.org/project/Jinja2/) library, released on March 24th, 2022. Deployments of Synapse using the `matrixdotorg/synapse` Docker image or Debian packages from packages.matrix.org are not affected.

Internal Changes
----------------

- Remove uses of the long-deprecated `jinja2.Markup` which would prevent Synapse from starting with Jinja 3.1.0 or above installed. ([\#12289](https://github.com/matrix-org/synapse/issues/12289))
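For context, the 1.55.1 fix and the 1.55.2 pin both revolve around Jinja 3.1.0 dropping the long-deprecated `jinja2.Markup` alias (its canonical home is the `markupsafe` package). A minimal sketch of an import pattern that works on either side of the 3.1.0 boundary — an illustration, not Synapse's actual code, which simply switched its imports:

```python
try:
    from jinja2 import Markup  # alias removed in Jinja 3.1.0
except ImportError:
    from markupsafe import Markup  # canonical location of Markup
```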

Synapse 1.55.0 (2022-03-22)
===========================

This release removes a workaround introduced in Synapse 1.50.0 for Mjolnir compatibility. **This breaks compatibility with Mjolnir 1.3.1 and earlier. ([\#11700](https://github.com/matrix-org/synapse/issues/11700))**; Mjolnir users should upgrade Mjolnir before upgrading Synapse to this version.

This release also moves the location of the `synctl` script; see the [upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#synctl-script-has-been-moved) for more details.


Internal Changes
----------------

- Tweak copy for default Single Sign-On account details template to better adhere to mobile app store guidelines. ([\#12265](https://github.com/matrix-org/synapse/issues/12265), [\#12260](https://github.com/matrix-org/synapse/issues/12260))


Synapse 1.55.0rc1 (2022-03-15)
==============================

Features
--------

- Add third-party rules callbacks `check_can_shutdown_room` and `check_can_deactivate_user`. ([\#12028](https://github.com/matrix-org/synapse/issues/12028))
- Improve performance of logging in for large accounts. ([\#12132](https://github.com/matrix-org/synapse/issues/12132))
- Add experimental env var `SYNAPSE_ASYNC_IO_REACTOR` that causes Synapse to use the asyncio reactor for Twisted. ([\#12135](https://github.com/matrix-org/synapse/issues/12135))
- Support the stable identifiers from [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440): threads. ([\#12151](https://github.com/matrix-org/synapse/issues/12151))
- Add a new Jinja2 template filter to extract the local part of an email address. ([\#12212](https://github.com/matrix-org/synapse/issues/12212))
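As a small illustration of the experimental reactor flag above — a sketch, assuming the variable is read like the `SYNAPSE_TEST_PATCH_LOG_CONTEXTS` flag that appears later in this diff (so any non-empty value, even "0", would count as set) and that it is set before Synapse installs its reactor:

```python
import os

# Hypothetical launcher snippet: opt in to Twisted's asyncio reactor.
os.environ["SYNAPSE_ASYNC_IO_REACTOR"] = "1"  # must happen before importing synapse
```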

Bugfixes
--------

- Use the proper serialization format for bundled thread aggregations. The bug has existed since Synapse v1.48.0. ([\#12090](https://github.com/matrix-org/synapse/issues/12090))
- Fix a long-standing bug when redacting events with relations. ([\#12113](https://github.com/matrix-org/synapse/issues/12113), [\#12121](https://github.com/matrix-org/synapse/issues/12121), [\#12130](https://github.com/matrix-org/synapse/issues/12130), [\#12189](https://github.com/matrix-org/synapse/issues/12189))
- Fix a bug introduced in Synapse 1.7.2 whereby background updates are never run with the default background batch size. ([\#12157](https://github.com/matrix-org/synapse/issues/12157))
- Fix a bug where non-standard information was returned from the `/hierarchy` API. Introduced in Synapse v1.41.0. ([\#12175](https://github.com/matrix-org/synapse/issues/12175))
- Fix a bug introduced in Synapse 1.54.0 that broke background updates on sqlite homeservers while search was disabled. ([\#12215](https://github.com/matrix-org/synapse/issues/12215))
- Fix a long-standing bug when a `filter` argument with `event_fields` which did not include the `unsigned` field could result in a 500 error on `/sync`. ([\#12234](https://github.com/matrix-org/synapse/issues/12234))
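The last bugfix is easiest to see in miniature: if a `filter` with `event_fields` strips `unsigned` from a serialized event, code that indexes `serialized_event["unsigned"]` raises, while `setdefault` (the approach taken in the fix further down this diff) tolerates the missing key. A runnable sketch with hypothetical event data:

```python
serialized_event = {"type": "m.room.message"}  # "unsigned" stripped by the filter

# Before the fix: serialized_event["unsigned"] would raise KeyError here.
serialized_event.setdefault("unsigned", {}).setdefault("m.relations", {}).update(
    {"m.thread": {"count": 2}}  # hypothetical bundled aggregation
)
assert serialized_event["unsigned"]["m.relations"]["m.thread"]["count"] == 2
```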

Improved Documentation
----------------------

- Fix complexity checking config example in [Resource Constrained Devices](https://matrix-org.github.io/synapse/v1.54/other/running_synapse_on_single_board_computers.html) docs page. ([\#11998](https://github.com/matrix-org/synapse/issues/11998))
- Improve documentation for demo scripts. ([\#12143](https://github.com/matrix-org/synapse/issues/12143))
- Updates to the Room DAG concepts development document. ([\#12179](https://github.com/matrix-org/synapse/issues/12179))
- Document that the `typing`, `to_device`, `account_data`, `receipts`, and `presence` stream writers can only be used on a single worker. ([\#12196](https://github.com/matrix-org/synapse/issues/12196))
- Document that contributors can sign off privately by email. ([\#12204](https://github.com/matrix-org/synapse/issues/12204))


Deprecations and Removals
-------------------------

- **Remove workaround introduced in Synapse 1.50.0 for Mjolnir compatibility. Breaks compatibility with Mjolnir 1.3.1 and earlier. ([\#11700](https://github.com/matrix-org/synapse/issues/11700))**
- **`synctl` has been moved into `synapse._scripts` and is exposed as an entry point; see [upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/upgrade.md#synctl-script-has-been-moved). ([\#12140](https://github.com/matrix-org/synapse/issues/12140))**
- Remove backwards compatibility with pagination tokens from the `/relations` and `/aggregations` endpoints generated from Synapse < v1.52.0. ([\#12138](https://github.com/matrix-org/synapse/issues/12138))
- The groups/communities feature in Synapse has been deprecated. ([\#12200](https://github.com/matrix-org/synapse/issues/12200))


Internal Changes
----------------

- Simplify the `ApplicationService` class' set of public methods related to interest checking. ([\#11915](https://github.com/matrix-org/synapse/issues/11915))
- Add config settings for background update parameters. ([\#11980](https://github.com/matrix-org/synapse/issues/11980))
- Correct type hints for txredis. ([\#12042](https://github.com/matrix-org/synapse/issues/12042))
- Limit the size of `aggregation_key` on annotations. ([\#12101](https://github.com/matrix-org/synapse/issues/12101))
- Add type hints to tests files. ([\#12108](https://github.com/matrix-org/synapse/issues/12108), [\#12146](https://github.com/matrix-org/synapse/issues/12146), [\#12207](https://github.com/matrix-org/synapse/issues/12207), [\#12208](https://github.com/matrix-org/synapse/issues/12208))
- Move scripts to Synapse package and expose as setuptools entry points. ([\#12118](https://github.com/matrix-org/synapse/issues/12118))
- Add support for cancellation to `ReadWriteLock`. ([\#12120](https://github.com/matrix-org/synapse/issues/12120))
- Fix data validation to compare to lists, not sequences. ([\#12128](https://github.com/matrix-org/synapse/issues/12128))
- Fix CI not attaching source distributions and wheels to the GitHub releases. ([\#12131](https://github.com/matrix-org/synapse/issues/12131))
- Remove unused mocks from `test_typing`. ([\#12136](https://github.com/matrix-org/synapse/issues/12136))
- Give `scripts-dev` scripts suffixes for neater CI config. ([\#12137](https://github.com/matrix-org/synapse/issues/12137))
- Move the snapcraft configuration file to `contrib`. ([\#12142](https://github.com/matrix-org/synapse/issues/12142))
- Enable [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) Complement tests in CI. ([\#12144](https://github.com/matrix-org/synapse/issues/12144))
- Enable [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) Complement tests in CI. ([\#12145](https://github.com/matrix-org/synapse/issues/12145))
- Add test for `ObservableDeferred`'s cancellation behaviour. ([\#12149](https://github.com/matrix-org/synapse/issues/12149))
- Use `ParamSpec` in type hints for `synapse.logging.context`. ([\#12150](https://github.com/matrix-org/synapse/issues/12150))
- Prune unused jobs from `tox` config. ([\#12152](https://github.com/matrix-org/synapse/issues/12152))
- Move CI checks out of tox, to facilitate a move to using poetry. ([\#12153](https://github.com/matrix-org/synapse/issues/12153))
- Avoid generating state groups for local out-of-band leaves. ([\#12154](https://github.com/matrix-org/synapse/issues/12154))
- Avoid trying to calculate the state at outlier events. ([\#12155](https://github.com/matrix-org/synapse/issues/12155), [\#12173](https://github.com/matrix-org/synapse/issues/12173), [\#12202](https://github.com/matrix-org/synapse/issues/12202))
- Fix some type annotations. ([\#12156](https://github.com/matrix-org/synapse/issues/12156))
- Add type hints for `ObservableDeferred` attributes. ([\#12159](https://github.com/matrix-org/synapse/issues/12159))
- Use a prebuilt Action for the `tests-done` CI job. ([\#12161](https://github.com/matrix-org/synapse/issues/12161))
- Reduce number of DB queries made during processing of `/sync`. ([\#12163](https://github.com/matrix-org/synapse/issues/12163))
- Add `delay_cancellation` utility function, which behaves like `stop_cancellation` but waits until the original `Deferred` resolves before raising a `CancelledError`. ([\#12180](https://github.com/matrix-org/synapse/issues/12180))
- Retry HTTP replication failures; this should prevent 502s when restarting stateful workers (main, event persisters, stream writers). Contributed by Nick @ Beeper. ([\#12182](https://github.com/matrix-org/synapse/issues/12182))
- Add cancellation support to `@cached` and `@cachedList` decorators. ([\#12183](https://github.com/matrix-org/synapse/issues/12183))
- Remove unused variables. ([\#12187](https://github.com/matrix-org/synapse/issues/12187))
- Add combined test for HTTP pusher and push rule. Contributed by Nick @ Beeper. ([\#12188](https://github.com/matrix-org/synapse/issues/12188))
- Rename `HomeServer.get_tcp_replication` to `get_replication_command_handler`. ([\#12192](https://github.com/matrix-org/synapse/issues/12192))
- Remove some dead code. ([\#12197](https://github.com/matrix-org/synapse/issues/12197))
- Fix a misleading comment in the function `check_event_for_spam`. ([\#12203](https://github.com/matrix-org/synapse/issues/12203))
- Remove unnecessary `pass` statements. ([\#12206](https://github.com/matrix-org/synapse/issues/12206))
- Update the SSO username picker template to comply with SIWA guidelines. ([\#12210](https://github.com/matrix-org/synapse/issues/12210))
- Improve code documentation for the typing stream over replication. ([\#12211](https://github.com/matrix-org/synapse/issues/12211))


Synapse 1.54.0 (2022-03-08)
===========================

book.toml
@@ -33,7 +33,7 @@ site-url = "/synapse/"
additional-css = [
    "docs/website_files/table-of-contents.css",
    "docs/website_files/remove-nav-buttons.css",
    "docs/website_files/indent-section-headers.css",
    "docs/website_files/section-headers.css",
]
additional-js = ["docs/website_files/table-of-contents.js"]
theme = "docs/website_files/theme"
changelog.d/11700.removal (new file, +1):
Remove workaround introduced in Synapse 1.50.0 for Mjolnir compatibility. Breaks compatibility with Mjolnir 1.3.1 and earlier.

changelog.d/11915.misc (new file, +1):
Simplify the `ApplicationService` class' set of public methods related to interest checking.

changelog.d/11998.doc (new file, +1):
Fix complexity checking config example in [Resource Constrained Devices](https://matrix-org.github.io/synapse/v1.54/other/running_synapse_on_single_board_computers.html) docs page.

changelog.d/12028.feature (new file, +1):
Add third-party rules callbacks `check_can_shutdown_room` and `check_can_deactivate_user`.

changelog.d/12042.misc (new file, +1):
Correct type hints for txredis.

changelog.d/12090.bugfix (new file, +1):
Use the proper serialization format for bundled thread aggregations. The bug has existed since Synapse v1.48.0.

changelog.d/12101.misc (new file, +1):
Limit the size of `aggregation_key` on annotations.

changelog.d/12108.misc (new file, +1):
Add type hints to `tests/rest/client`.

changelog.d/12113.bugfix (new file, +1):
Fix a long-standing bug when redacting events with relations.

changelog.d/12118.misc (new file, +1):
Move scripts to Synapse package and expose as setuptools entry points.

changelog.d/12121.bugfix (new file, +1):
Fix a long-standing bug when redacting events with relations.

changelog.d/12128.misc (new file, +1):
Fix data validation to compare to lists, not sequences.

changelog.d/12130.bugfix (new file, +1):
Fix a long-standing bug when redacting events with relations.

changelog.d/12131.misc (new file, +1):
Fix CI not attaching source distributions and wheels to the GitHub releases.

changelog.d/12132.feature (new file, +1):
Improve performance of logging in for large accounts.

changelog.d/12135.feature (new file, +1):
Add experimental env var `SYNAPSE_ASYNC_IO_REACTOR` that causes Synapse to use the asyncio reactor for Twisted.

changelog.d/12136.misc (new file, +1):
Remove unused mocks from `test_typing`.

changelog.d/12137.misc (new file, +1):
Give `scripts-dev` scripts suffixes for neater CI config.

changelog.d/12138.removal (new file, +1):
Remove backwards compatibility with pagination tokens from the `/relations` and `/aggregations` endpoints generated from Synapse < v1.52.0.

changelog.d/12140.misc (new file, +1):
Move `synctl` into `synapse._scripts` and expose as an entry point.

changelog.d/12142.misc (new file, +1):
Move the snapcraft configuration file to `contrib`.

changelog.d/12143.doc (new file, +1):
Improve documentation for demo scripts.

changelog.d/12144.misc (new file, +1):
Enable [MSC3030](https://github.com/matrix-org/matrix-doc/pull/3030) Complement tests in CI.

changelog.d/12145.misc (new file, +1):
Enable [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) Complement tests in CI.

changelog.d/12146.misc (new file, +1):
Add type hints to `tests/rest`.

changelog.d/12149.misc (new file, +1):
Add test for `ObservableDeferred`'s cancellation behaviour.

changelog.d/12150.misc (new file, +1):
Use `ParamSpec` in type hints for `synapse.logging.context`.

changelog.d/12151.feature (new file, +1):
Support the stable identifiers from [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440): threads.

changelog.d/12152.misc (new file, +1):
Prune unused jobs from `tox` config.

changelog.d/12153.misc (new file, +1):
Move CI checks out of tox, to facilitate a move to using poetry.

changelog.d/12154.misc (new file, +1):
Avoid generating state groups for local out-of-band leaves.

changelog.d/12155.misc (new file, +1):
Avoid trying to calculate the state at outlier events.

changelog.d/12156.misc (new file, +1):
Fix some type annotations.

changelog.d/12157.bugfix (new file, +1):
Fix a bug introduced in #4864 whereby background updates are never run with the default background batch size.

changelog.d/12159.misc (new file, +1):
Add type hints for `ObservableDeferred` attributes.

changelog.d/12161.misc (new file, +1):
Use a prebuilt Action for the `tests-done` CI job.

changelog.d/12163.misc (new file, +1):
Reduce number of DB queries made during processing of `/sync`.

changelog.d/12173.misc (new file, +1):
Avoid trying to calculate the state at outlier events.

changelog.d/12175.bugfix (new file, +1):
Fix a bug where non-standard information was returned from the `/hierarchy` API. Introduced in Synapse v1.41.0.

changelog.d/12179.doc (new file, +1):
Updates to the Room DAG concepts development document.

changelog.d/12182.misc (new file, +1):
Retry HTTP replication failures; this should prevent 502s when restarting stateful workers (main, event persisters, stream writers). Contributed by Nick @ Beeper.

changelog.d/12187.misc (new file, +1):
Remove unused variables.

changelog.d/12189.bugfix (new file, +1):
Fix a long-standing bug when redacting events with relations.

changelog.d/12192.misc (new file, +1):
Rename `HomeServer.get_tcp_replication` to `get_replication_command_handler`.

changelog.d/12197.misc (new file, +1):
Remove some dead code.
debian/changelog (vendored, 24 lines changed)
@@ -1,27 +1,3 @@
matrix-synapse-py3 (1.55.2) stable; urgency=medium

  * New synapse release 1.55.2.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 24 Mar 2022 19:07:11 +0000

matrix-synapse-py3 (1.55.1) stable; urgency=medium

  * New synapse release 1.55.1.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 24 Mar 2022 17:44:23 +0000

matrix-synapse-py3 (1.55.0) stable; urgency=medium

  * New synapse release 1.55.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 22 Mar 2022 13:59:26 +0000

matrix-synapse-py3 (1.55.0~rc1) stable; urgency=medium

  * New synapse release 1.55.0~rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 15 Mar 2022 10:59:31 +0000

matrix-synapse-py3 (1.54.0) stable; urgency=medium

  * New synapse release 1.54.0.
docs/development/contributing_guide.md
@@ -458,17 +458,6 @@ Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.

### Private Sign off

If you would like to provide your legal name privately to the Matrix.org
Foundation (instead of in a public commit or comment), you can do so
by emailing your legal name and a link to the pull request to
[dco@matrix.org](mailto:dco@matrix.org?subject=Private%20sign%20off).
It helps to include "sign off" or similar in the subject line. You will then
be instructed further.

Once private sign off is complete, doing so for future contributions will not
be required.

# 10. Turn feedback into better code.
docs/sample_config.yaml
@@ -1947,14 +1947,8 @@ saml2_config:
  #
  #   localpart_template: Jinja2 template for the localpart of the MXID.
  #      If this is not set, the user will be prompted to choose their
  #      own username (see the documentation for the
  #      'sso_auth_account_details.html' template). This template can
  #      use the 'localpart_from_email' filter.
  #
  #   confirm_localpart: Whether to prompt the user to validate (or
  #      change) the generated localpart (see the documentation for the
  #      'sso_auth_account_details.html' template), instead of
  #      registering the account right away.
  #      own username (see 'sso_auth_account_details.html' in the 'sso'
  #      section of this file).
  #
  #   display_name_template: Jinja2 template for the display name to set
  #      on first login. If unset, no displayname will be set.

@@ -2735,35 +2729,3 @@ redis:
  # Optional password if configured on the Redis instance
  #
  #password: <secret_password>


## Background Updates ##

# Background updates are database updates that are run in the background in batches.
# The duration, minimum batch size, default batch size, whether to sleep between batches and if so, how long to
# sleep can all be configured. This is helpful to speed up or slow down the updates.
#
background_updates:
    # How long in milliseconds to run a batch of background updates for. Defaults to 100. Uncomment and set
    # a time to change the default.
    #
    #background_update_duration_ms: 500

    # Whether to sleep between updates. Defaults to True. Uncomment to change the default.
    #
    #sleep_enabled: false

    # If sleeping between updates, how long in milliseconds to sleep for. Defaults to 1000. Uncomment
    # and set a duration to change the default.
    #
    #sleep_duration_ms: 300

    # Minimum size a batch of background updates can be. Must be greater than 0. Defaults to 1. Uncomment and
    # set a size to change the default.
    #
    #min_batch_size: 10

    # The batch size to use for the first iteration of a new background update. The default is 100.
    # Uncomment and set a size to change the default.
    #
    #default_batch_size: 50
docs/templates.md
@@ -36,13 +36,6 @@ Turns a `mxc://` URL for media content into an HTTP(S) one using the homeserver'

Example: `message.sender_avatar_url|mxc_to_http(32,32)`

```python
localpart_from_email(address: str) -> str
```

Returns the local part of an email address (e.g. `alice` in `alice@example.com`).

Example: `user.email_address|localpart_from_email`
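A minimal sketch of what this filter computes — an illustration, not necessarily the exact implementation (which lives in `synapse.util.templates`, per the import removed later in this diff):

```python
def localpart_from_email(address: str) -> str:
    # Everything before the final "@" is the local part.
    return address.rsplit("@", 1)[0]

assert localpart_from_email("alice@example.com") == "alice"
```
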
## Email templates
@@ -183,11 +176,8 @@ Below are the templates Synapse will look for when generating pages related to S
      for the brand of the IdP
    * `user_attributes`: an object containing details about the user that
      we received from the IdP. May have the following attributes:
        * `display_name`: the user's display name
        * `emails`: a list of email addresses
        * `localpart`: the local part of the Matrix user ID to register,
          if `localpart_template` is set in the mapping provider configuration (empty
          string if not)
        * display_name: the user's display_name
        * emails: a list of email addresses
    The template should render a form which submits the following fields:
    * `username`: the localpart of the user's chosen user id
* `sso_new_user_consent.html`: HTML page allowing the user to consent to the
docs/upgrade.md
@@ -85,20 +85,6 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```

# Upgrading to v1.56.0

## Groups/communities feature has been deprecated

The non-standard groups/communities feature in Synapse has been deprecated and will
be disabled by default in Synapse v1.58.0.

You can test disabling it by adding the following to your homeserver configuration:

```yaml
experimental_features:
  groups_enabled: false
```

# Upgrading to v1.55.0

## `synctl` script has been moved
docs/website_files/indent-section-headers.css (deleted file)
@@ -1,7 +0,0 @@
/*
 * Indents each chapter title in the left sidebar so that they aren't
 * at the same level as the section headers.
 */
.chapter-item {
    margin-left: 1em;
}
docs/website_files/section-headers.css (new file, +20)
@@ -0,0 +1,20 @@
/*
 * Indents each chapter title in the left sidebar so that they aren't
 * at the same level as the section headers.
 */
.chapter-item {
    margin-left: 1em;
}

/*
 * Prevents a large gap between successive section headers.
 *
 * mdbook sets 'margin-top: 2.5em' on h2 and h3 headers. This makes sense when separating
 * a header from the paragraph beforehand, but has the downside of introducing a large
 * gap between headers that are next to each other with no text in between.
 *
 * This rule reduces the margin in this case.
 */
h1 + h2, h2 + h3 {
    margin-top: 1.0em;
}
docs/workers.md
@@ -351,11 +351,8 @@ is only supported with Redis-based replication.)

To enable this, the worker must have a HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. The same worker
can handle multiple streams, but unless otherwise documented, each stream can only
have a single writer.

For example, to move event persistence off to a dedicated worker, the shared
configuration would include:
can handle multiple streams. For example, to move event persistence off to a
dedicated worker, the shared configuration would include:

```yaml
instance_map:

@@ -373,8 +370,8 @@ streams and the endpoints associated with them:

##### The `events` stream

The `events` stream experimentally supports having multiple writers, where work
is sharded between them by room ID. Note that you *must* restart all worker
The `events` stream also experimentally supports having multiple writers, where
work is sharded between them by room ID. Note that you *must* restart all worker
instances when adding or removing event persisters. An example `stream_writers`
configuration with multiple writers:

@@ -387,38 +384,38 @@ stream_writers:

##### The `typing` stream

The following endpoints should be routed directly to the worker configured as
the stream writer for the `typing` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `typing` stream:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing

##### The `to_device` stream

The following endpoints should be routed directly to the worker configured as
the stream writer for the `to_device` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `to_device` stream:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/sendToDevice/

##### The `account_data` stream

The following endpoints should be routed directly to the worker configured as
the stream writer for the `account_data` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `account_data` stream:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/.*/tags
    ^/_matrix/client/(api/v1|r0|v3|unstable)/.*/account_data

##### The `receipts` stream

The following endpoints should be routed directly to the worker configured as
the stream writer for the `receipts` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `receipts` stream:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/receipt
    ^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/read_markers

##### The `presence` stream

The following endpoints should be routed directly to the worker configured as
the stream writer for the `presence` stream:
The following endpoints should be routed directly to the workers configured as
stream writers for the `presence` stream:

    ^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
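Routing rules like the above can be sanity-checked by exercising the patterns directly; a small sketch using a hypothetical room ID and user ID with the regexes quoted in this section:

```python
import re

ENDPOINT_PATTERNS = {
    "typing": r"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/typing",
    "to_device": r"^/_matrix/client/(api/v1|r0|v3|unstable)/sendToDevice/",
    "presence": r"^/_matrix/client/(api/v1|r0|v3|unstable)/presence/",
}

assert re.match(ENDPOINT_PATTERNS["typing"],
                "/_matrix/client/v3/rooms/!room:example.org/typing")
assert re.match(ENDPOINT_PATTERNS["presence"],
                "/_matrix/client/v3/presence/@alice:example.org/status")
```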

mypy.ini (1 line changed)
@@ -90,6 +90,7 @@ exclude = (?x)
  |tests/push/test_push_rule_evaluator.py
  |tests/rest/client/test_transactions.py
  |tests/rest/media/v1/test_media_storage.py
  |tests/rest/media/v1/test_url_preview.py
  |tests/scripts/test_new_matrix_user.py
  |tests/server.py
  |tests/server_notices/test_resource_limits_server_notices.py
synapse/__init__.py
@@ -68,7 +68,7 @@ try:
except ImportError:
    pass

__version__ = "1.55.2"
__version__ = "1.54.0"

if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
    # We import here so that we don't have to install a bunch of deps when
synapse/app/generic_worker.py
@@ -322,8 +322,7 @@ class GenericWorkerServer(HomeServer):

            presence.register_servlets(self, resource)

            if self.config.experimental.groups_enabled:
                groups.register_servlets(self, resource)
            groups.register_servlets(self, resource)

            resources.update({CLIENT_API_PREFIX: resource})
synapse/config/_base.pyi
@@ -19,7 +19,6 @@ from synapse.config import (
    api,
    appservice,
    auth,
    background_updates,
    cache,
    captcha,
    cas,
@@ -114,7 +113,6 @@ class RootConfig:
    caches: cache.CacheConfig
    federation: federation.FederationConfig
    retention: retention.RetentionConfig
    background_updates: background_updates.BackgroundUpdateConfig

    config_classes: List[Type["Config"]] = ...
    def __init__(self) -> None: ...
synapse/config/background_updates.py (deleted file)
@@ -1,68 +0,0 @@
# Copyright 2022 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from ._base import Config


class BackgroundUpdateConfig(Config):
    section = "background_updates"

    def generate_config_section(self, **kwargs) -> str:
        return """\
        ## Background Updates ##

        # Background updates are database updates that are run in the background in batches.
        # The duration, minimum batch size, default batch size, whether to sleep between batches and if so, how long to
        # sleep can all be configured. This is helpful to speed up or slow down the updates.
        #
        background_updates:
            # How long in milliseconds to run a batch of background updates for. Defaults to 100. Uncomment and set
            # a time to change the default.
            #
            #background_update_duration_ms: 500

            # Whether to sleep between updates. Defaults to True. Uncomment to change the default.
            #
            #sleep_enabled: false

            # If sleeping between updates, how long in milliseconds to sleep for. Defaults to 1000. Uncomment
            # and set a duration to change the default.
            #
            #sleep_duration_ms: 300

            # Minimum size a batch of background updates can be. Must be greater than 0. Defaults to 1. Uncomment and
            # set a size to change the default.
            #
            #min_batch_size: 10

            # The batch size to use for the first iteration of a new background update. The default is 100.
            # Uncomment and set a size to change the default.
            #
            #default_batch_size: 50
        """

    def read_config(self, config, **kwargs) -> None:
        bg_update_config = config.get("background_updates") or {}

        self.update_duration_ms = bg_update_config.get(
            "background_update_duration_ms", 100
        )

        self.sleep_enabled = bg_update_config.get("sleep_enabled", True)

        self.sleep_duration_ms = bg_update_config.get("sleep_duration_ms", 1000)

        self.min_batch_size = bg_update_config.get("min_batch_size", 1)

        self.default_batch_size = bg_update_config.get("default_batch_size", 100)
synapse/config/experimental.py
@@ -74,6 +74,3 @@ class ExperimentalConfig(Config):

        # MSC3720 (Account status endpoint)
        self.msc3720_enabled: bool = experimental.get("msc3720_enabled", False)

        # The deprecated groups feature.
        self.groups_enabled: bool = experimental.get("groups_enabled", True)
synapse/config/homeserver.py
@@ -16,7 +16,6 @@ from .account_validity import AccountValidityConfig
from .api import ApiConfig
from .appservice import AppServiceConfig
from .auth import AuthConfig
from .background_updates import BackgroundUpdateConfig
from .cache import CacheConfig
from .captcha import CaptchaConfig
from .cas import CasConfig
@@ -100,5 +99,4 @@ class HomeServerConfig(RootConfig):
        WorkerConfig,
        RedisConfig,
        ExperimentalConfig,
        BackgroundUpdateConfig,
    ]
synapse/config/oidc.py
@@ -182,14 +182,8 @@ class OIDCConfig(Config):
          #
          #   localpart_template: Jinja2 template for the localpart of the MXID.
          #      If this is not set, the user will be prompted to choose their
          #      own username (see the documentation for the
          #      'sso_auth_account_details.html' template). This template can
          #      use the 'localpart_from_email' filter.
          #
          #   confirm_localpart: Whether to prompt the user to validate (or
          #      change) the generated localpart (see the documentation for the
          #      'sso_auth_account_details.html' template), instead of
          #      registering the account right away.
          #      own username (see 'sso_auth_account_details.html' in the 'sso'
          #      section of this file).
          #
          #   display_name_template: Jinja2 template for the display name to set
          #      on first login. If unset, no displayname will be set.
synapse/events/spamcheck.py
@@ -245,8 +245,8 @@ class SpamChecker:
        """Checks if a given event is considered "spammy" by this server.

        If the server considers an event spammy, then it will be rejected if
        sent by a local user. If it is sent by a user on another server, the
        event is soft-failed.
        sent by a local user. If it is sent by a user on another server, then
        users receive a blank event.

        Args:
            event: the event to be checked
synapse/events/utils.py
@@ -530,12 +530,9 @@ class EventClientSerializer:

        # Include the bundled aggregations in the event.
        if serialized_aggregations:
            # There is likely already an "unsigned" field, but a filter might
            # have stripped it off (via the event_fields option). The server is
            # allowed to return additional fields, so add it back.
            serialized_event.setdefault("unsigned", {}).setdefault(
                "m.relations", {}
            ).update(serialized_aggregations)
            serialized_event["unsigned"].setdefault("m.relations", {}).update(
                serialized_aggregations
            )

    def serialize_events(
        self,
synapse/federation/transport/server/__init__.py
@@ -289,7 +289,7 @@ class OpenIdUserInfo(BaseFederationServlet):
        return 200, {"sub": user_id}


SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
DEFAULT_SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
    "federation": FEDERATION_SERVLET_CLASSES,
    "room_list": (PublicRoomList,),
    "group_server": GROUP_SERVER_SERVLET_CLASSES,
@@ -298,10 +298,6 @@ SERVLET_GROUPS: Dict[str, Iterable[Type[BaseFederationServlet]]] = {
    "openid": (OpenIdUserInfo,),
}

DEFAULT_SERVLET_GROUPS = ("federation", "room_list", "openid")

GROUP_SERVLET_GROUPS = ("group_server", "group_local", "group_attestation")


def register_servlets(
    hs: "HomeServer",
@@ -324,19 +320,16 @@ def register_servlets(
            Defaults to ``DEFAULT_SERVLET_GROUPS``.
    """
    if not servlet_groups:
        servlet_groups = DEFAULT_SERVLET_GROUPS
        # Only allow the groups servlets if the deprecated groups feature is enabled.
        if hs.config.experimental.groups_enabled:
            servlet_groups = servlet_groups + GROUP_SERVLET_GROUPS
        servlet_groups = DEFAULT_SERVLET_GROUPS.keys()

    for servlet_group in servlet_groups:
        # Skip unknown servlet groups.
        if servlet_group not in SERVLET_GROUPS:
        if servlet_group not in DEFAULT_SERVLET_GROUPS:
            raise RuntimeError(
                f"Attempting to register unknown federation servlet: '{servlet_group}'"
            )

        for servletclass in SERVLET_GROUPS[servlet_group]:
        for servletclass in DEFAULT_SERVLET_GROUPS[servlet_group]:
            # Only allow the `/timestamp_to_event` servlet if msc3030 is enabled
            if (
                servletclass == FederationTimestampLookupServlet
synapse/handlers/device.py
@@ -371,6 +371,7 @@ class DeviceHandler(DeviceWorkerHandler):
                log_kv(
                    {"reason": "User doesn't have device id.", "device_id": device_id}
                )
                pass
            else:
                raise

@@ -413,6 +414,7 @@ class DeviceHandler(DeviceWorkerHandler):
                # no match
                set_tag("error", True)
                set_tag("reason", "User doesn't have that device id.")
                pass
            else:
                raise
synapse/handlers/oidc.py
@@ -45,7 +45,6 @@ from synapse.types import JsonDict, UserID, map_username_to_mxid_localpart
from synapse.util import Clock, json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.macaroons import get_value_from_macaroon, satisfy_expiry
from synapse.util.templates import _localpart_from_email_filter

if TYPE_CHECKING:
    from synapse.server import HomeServer
@@ -1229,7 +1228,6 @@ class OidcSessionData:

class UserAttributeDict(TypedDict):
    localpart: Optional[str]
    confirm_localpart: bool
    display_name: Optional[str]
    emails: List[str]

@@ -1309,11 +1307,6 @@ def jinja_finalize(thing: Any) -> Any:


env = Environment(finalize=jinja_finalize)
env.filters.update(
    {
        "localpart_from_email": _localpart_from_email_filter,
    }
)


@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -1323,7 +1316,6 @@ class JinjaOidcMappingConfig:
    display_name_template: Optional[Template]
    email_template: Optional[Template]
    extra_attributes: Dict[str, Template]
    confirm_localpart: bool = False


class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
@@ -1365,17 +1357,12 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
                    "invalid jinja template", path=["extra_attributes", key]
                ) from e

        confirm_localpart = config.get("confirm_localpart") or False
        if not isinstance(confirm_localpart, bool):
            raise ConfigError("must be a bool", path=["confirm_localpart"])

        return JinjaOidcMappingConfig(
            subject_claim=subject_claim,
            localpart_template=localpart_template,
            display_name_template=display_name_template,
            email_template=email_template,
            extra_attributes=extra_attributes,
            confirm_localpart=confirm_localpart,
        )

    def get_remote_user_id(self, userinfo: UserInfo) -> str:
@@ -1411,10 +1398,7 @@ class JinjaOidcMappingProvider(OidcMappingProvider[JinjaOidcMappingConfig]):
                emails.append(email)

        return UserAttributeDict(
            localpart=localpart,
            display_name=display_name,
            emails=emails,
            confirm_localpart=self._config.confirm_localpart,
            localpart=localpart, display_name=display_name, emails=emails
        )

    async def get_extra_attributes(self, userinfo: UserInfo, token: Token) -> JsonDict:
synapse/handlers/pagination.py
@@ -350,7 +350,7 @@ class PaginationHandler:
        """
        self._purges_in_progress_by_room.add(room_id)
        try:
            async with self.pagination_lock.write(room_id):
            with await self.pagination_lock.write(room_id):
                await self.storage.purge_events.purge_history(
                    room_id, token, delete_local_events
                )
@@ -406,7 +406,7 @@ class PaginationHandler:
            room_id: room to be purged
            force: set true to skip checking for joined users.
        """
        async with self.pagination_lock.write(room_id):
        with await self.pagination_lock.write(room_id):
            # first check that we have no users in this room
            if not force:
                joined = await self.store.is_host_joined(room_id, self._server_name)
@@ -448,7 +448,7 @@ class PaginationHandler:

        room_token = from_token.room_key

        async with self.pagination_lock.read(room_id):
        with await self.pagination_lock.read(room_id):
            (
                membership,
                member_event_id,
@@ -615,7 +615,7 @@ class PaginationHandler:

        self._purges_in_progress_by_room.add(room_id)
        try:
            async with self.pagination_lock.write(room_id):
            with await self.pagination_lock.write(room_id):
                self._delete_by_id[delete_id].status = DeleteStatus.STATUS_SHUTTING_DOWN
                self._delete_by_id[
                    delete_id
synapse/handlers/presence.py
@@ -267,6 +267,7 @@ class BasePresenceHandler(abc.ABC):
            is_syncing: Whether or not the user is now syncing
            sync_time_msec: Time in ms when the user was last syncing
        """
        pass

    async def update_external_syncs_clear(self, process_id: str) -> None:
        """Marks all users that had been marked as syncing by a given process
@@ -276,6 +277,7 @@ class BasePresenceHandler(abc.ABC):

        This is a no-op when presence is handled by a different worker.
        """
        pass

    async def process_replication_rows(
        self, stream_name: str, instance_name: str, token: int, rows: list
synapse/handlers/sso.py
@@ -132,7 +132,6 @@ class UserAttributes:
    # if `None`, the mapper has not picked a userid, and the user should be prompted to
    # enter one.
    localpart: Optional[str]
    confirm_localpart: bool = False
    display_name: Optional[str] = None
    emails: Collection[str] = attr.Factory(list)

@@ -562,10 +561,9 @@ class SsoHandler:
        # Must provide either attributes or session, not both
        assert (attributes is not None) != (session is not None)

        if (
            attributes
            and (attributes.localpart is None or attributes.confirm_localpart is True)
        ) or (session and session.chosen_localpart is None):
        if (attributes and attributes.localpart is None) or (
            session and session.chosen_localpart is None
        ):
            return b"/_synapse/client/pick_username/account_details"
        elif self._consent_at_registration and not (
            session and session.terms_accepted_version
synapse/handlers/typing.py
@@ -160,9 +160,8 @@ class FollowerTypingHandler:
        """Should be called whenever we receive updates for typing stream."""

        if self._latest_room_serial > token:
            # The typing worker has gone backwards (e.g. it may have restarted).
            # To prevent inconsistent data, just clear everything.
            logger.info("Typing handler stream went backwards; resetting")
            # The master has gone backwards. To prevent inconsistent data, just
            # clear everything.
            self._reset()

        # Set the latest serial token to whatever the server gave us.
synapse/http/matrixfederationclient.py
@@ -120,6 +120,7 @@ class ByteParser(ByteWriteable, Generic[T], abc.ABC):
        """Called when response has finished streaming and the parser should
        return the final result (or error).
        """
        pass


@attr.s(slots=True, frozen=True, auto_attribs=True)
@@ -600,6 +601,7 @@ class MatrixFederationHttpClient:
                    response.code,
                    response_phrase,
                )
                pass
            else:
                logger.info(
                    "{%s} [%s] Got response headers: %d %s",

synapse/http/server.py
@@ -233,6 +233,7 @@ class HttpServer(Protocol):
            servlet_classname (str): The name of the handler to be used in prometheus
                and opentracing logs.
        """
        pass


class _AsyncResource(resource.Resource, metaclass=abc.ABCMeta):
synapse/push/baserules.py
@@ -169,7 +169,7 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "content.msgtype",
                "pattern": "m.notice",
                "_cache_key": "_suppress_notices",
                "_id": "_suppress_notices",
            }
        ],
        "actions": ["dont_notify"],
@@ -183,13 +183,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "type",
                "pattern": "m.room.member",
                "_cache_key": "_member",
                "_id": "_member",
            },
            {
                "kind": "event_match",
                "key": "content.membership",
                "pattern": "invite",
                "_cache_key": "_invite_member",
                "_id": "_invite_member",
            },
            {"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
        ],
@@ -212,7 +212,7 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "type",
                "pattern": "m.room.member",
                "_cache_key": "_member",
                "_id": "_member",
            }
        ],
        "actions": ["dont_notify"],
@@ -237,12 +237,12 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "content.body",
                "pattern": "@room",
                "_cache_key": "_roomnotif_content",
                "_id": "_roomnotif_content",
            },
            {
                "kind": "sender_notification_permission",
                "key": "room",
                "_cache_key": "_roomnotif_pl",
                "_id": "_roomnotif_pl",
            },
        ],
        "actions": ["notify", {"set_tweak": "highlight", "value": True}],
@@ -254,13 +254,13 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "type",
                "pattern": "m.room.tombstone",
                "_cache_key": "_tombstone",
                "_id": "_tombstone",
            },
            {
                "kind": "event_match",
                "key": "state_key",
                "pattern": "",
                "_cache_key": "_tombstone_statekey",
                "_id": "_tombstone_statekey",
            },
        ],
        "actions": ["notify", {"set_tweak": "highlight", "value": True}],
@@ -272,7 +272,7 @@ BASE_APPEND_OVERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "type",
                "pattern": "m.reaction",
                "_cache_key": "_reaction",
                "_id": "_reaction",
            }
        ],
        "actions": ["dont_notify"],
@@ -288,7 +288,7 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "type",
                "pattern": "m.call.invite",
                "_cache_key": "_call",
                "_id": "_call",
            }
        ],
        "actions": [
@@ -302,12 +302,12 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
    {
        "rule_id": "global/underride/.m.rule.room_one_to_one",
        "conditions": [
            {"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
            {"kind": "room_member_count", "is": "2", "_id": "member_count"},
            {
                "kind": "event_match",
                "key": "type",
                "pattern": "m.room.message",
                "_cache_key": "_message",
                "_id": "_message",
            },
        ],
        "actions": [
@@ -321,12 +321,12 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
    {
        "rule_id": "global/underride/.m.rule.encrypted_room_one_to_one",
        "conditions": [
            {"kind": "room_member_count", "is": "2", "_cache_key": "member_count"},
            {"kind": "room_member_count", "is": "2", "_id": "member_count"},
            {
                "kind": "event_match",
                "key": "type",
                "pattern": "m.room.encrypted",
                "_cache_key": "_encrypted",
                "_id": "_encrypted",
            },
        ],
        "actions": [
@@ -342,7 +342,7 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "type",
                "pattern": "m.room.message",
                "_cache_key": "_message",
                "_id": "_message",
            }
        ],
        "actions": ["notify", {"set_tweak": "highlight", "value": False}],
@@ -356,7 +356,7 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "type",
                "pattern": "m.room.encrypted",
                "_cache_key": "_encrypted",
                "_id": "_encrypted",
            }
        ],
        "actions": ["notify", {"set_tweak": "highlight", "value": False}],
@@ -368,19 +368,19 @@ BASE_APPEND_UNDERRIDE_RULES: List[Dict[str, Any]] = [
                "kind": "event_match",
                "key": "type",
                "pattern": "im.vector.modular.widgets",
                "_cache_key": "_type_modular_widgets",
                "_id": "_type_modular_widgets",
            },
            {
                "kind": "event_match",
                "key": "content.type",
                "pattern": "jitsi",
                "_cache_key": "_content_type_jitsi",
                "_id": "_content_type_jitsi",
            },
            {
                "kind": "event_match",
                "key": "state_key",
                "pattern": "*",
                "_cache_key": "_is_state_event",
                "_id": "_is_state_event",
            },
        ],
        "actions": ["notify", {"set_tweak": "highlight", "value": False}],
synapse/push/bulk_push_rule_evaluator.py
@@ -274,17 +274,17 @@ def _condition_checker(
    cache: Dict[str, bool],
) -> bool:
    for cond in conditions:
        _cache_key = cond.get("_cache_key", None)
        if _cache_key:
            res = cache.get(_cache_key, None)
        _id = cond.get("_id", None)
        if _id:
            res = cache.get(_id, None)
            if res is False:
                return False
            elif res is True:
                continue

        res = evaluator.matches(cond, uid, display_name)
        if _cache_key:
            cache[_cache_key] = bool(res)
        if _id:
            cache[_id] = bool(res)

        if not res:
            return False

synapse/push/clientformat.py
@@ -40,7 +40,7 @@ def format_push_rules_for_user(

        # Remove internal stuff.
        for c in r["conditions"]:
            c.pop("_cache_key", None)
            c.pop("_id", None)

            pattern_type = c.pop("pattern_type", None)
            if pattern_type == "user_id":
synapse/python_dependencies.py
@@ -74,8 +74,7 @@ REQUIREMENTS = [
    # Note: 21.1.0 broke `/sync`, see #9936
    "attrs>=19.2.0,!=21.1.0",
    "netaddr>=0.7.18",
    # Jinja2 3.1.0 removes the deprecated jinja2.Markup class, which we rely on.
    "Jinja2<3.1.0",
    "Jinja2>=2.9",
    "bleach>=1.4.3",
    # We use `ParamSpec`, which was added in `typing-extensions` 3.10.0.0.
    "typing-extensions>=3.10.0",
synapse/replication/tcp/handler.py
@@ -709,7 +709,7 @@ class ReplicationCommandHandler:
            self.send_command(RemoteServerUpCommand(server))

    def stream_update(self, stream_name: str, token: Optional[int], data: Any) -> None:
        """Called when a new update is available to stream to Redis subscribers.
        """Called when a new update is available to stream to clients.

        We need to check if the client is interested in the stream or not
        """

synapse/replication/tcp/resource.py
@@ -67,8 +67,8 @@ class ReplicationStreamProtocolFactory(ServerFactory):
class ReplicationStreamer:
    """Handles replication connections.

    This needs to be poked when new replication data may be available.
    When new data is available it will propagate to all Redis subscribers.
    This needs to be poked when new replication data may be available. When new
    data is available it will propagate to all connected clients.
    """

    def __init__(self, hs: "HomeServer"):
@@ -109,7 +109,7 @@ class ReplicationStreamer:

    def on_notifier_poke(self) -> None:
        """Checks if there is actually any new data and sends it to the
        Redis subscribers if there are.
        connections if there are.

        This should get called each time new data is available, even if it
        is currently being executed, so that nothing gets missed
synapse/replication/tcp/streams/_base.py
@@ -316,19 +316,7 @@ class PresenceFederationStream(Stream):
class TypingStream(Stream):
    @attr.s(slots=True, frozen=True, auto_attribs=True)
    class TypingStreamRow:
        """
        An entry in the typing stream.
        Describes all the users that are 'typing' right now in one room.

        When a user stops typing, it will be streamed as a new update with that
        user absent; you can think of the `user_ids` list as overwriting the
        entire list that was there previously.
        """

        # The room that this update is for.
        room_id: str

        # All the users that are 'typing' right now in the specified room.
        user_ids: List[str]

    NAME = "typing"
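The "overwriting" semantics described in the docstring, illustrated with hypothetical data: each row carries the complete set of currently-typing users, so a user stopping typing is conveyed by their absence from the next row.

```python
# Two consecutive rows for the same room; the second replaces the first
# wholesale, which is how "@bob stopped typing" is communicated.
row_1 = {"room_id": "!room:example.org", "user_ids": ["@alice:example.org", "@bob:example.org"]}
row_2 = {"room_id": "!room:example.org", "user_ids": ["@alice:example.org"]}
```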
|
||||
|
||||
@@ -130,22 +130,22 @@
  </head>
  <body>
    <header>
      <h1>Create your account</h1>
      <p>This is required. Continue to create your account on {{ server_name }}. You can't change this later.</p>
      <h1>Your account is nearly ready</h1>
      <p>Check your details before creating an account on {{ server_name }}</p>
    </header>
    <main>
      <form method="post" class="form__input" id="form">
        <div class="username_input" id="username_input">
          <label for="field-username">Username (required)</label>
          <label for="field-username">Username</label>
          <div class="prefix">@</div>
          <input type="text" name="username" id="field-username" value="{{ user_attributes.localpart }}" autofocus>
          <input type="text" name="username" id="field-username" autofocus>
          <div class="postfix">:{{ server_name }}</div>
        </div>
        <output for="username_input" id="field-username-output"></output>
        <input type="submit" value="Continue" class="primary-button">
        {% if user_attributes.avatar_url or user_attributes.display_name or user_attributes.emails %}
        <section class="idp-pick-details">
          <h2>{% if idp.idp_icon %}<img src="{{ idp.idp_icon | mxc_to_http(24, 24) }}"/>{% endif %}Optional data from {{ idp.idp_name }}</h2>
          <h2>{% if idp.idp_icon %}<img src="{{ idp.idp_icon | mxc_to_http(24, 24) }}"/>{% endif %}Information from {{ idp.idp_name }}</h2>
          {% if user_attributes.avatar_url %}
          <label class="idp-detail idp-avatar" for="idp-avatar">
            <div class="check-row">
@@ -62,7 +62,7 @@ function validateUsername(username) {
    usernameField.parentElement.classList.remove("invalid");
    usernameOutput.classList.remove("error");
    if (!username) {
        return reportError("This is required. Please provide a username");
        return reportError("Please provide a username");
    }
    if (username.length > 255) {
        return reportError("Too long, please choose something shorter");
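The client-side check above is mirrored by server-side validation. As a rough illustration (not Synapse's actual validator), a localpart check in Python with the same empty/too-long error messages might look like:

```python
import re

# Historical Matrix localpart grammar; assumed here for illustration.
LOCALPART_RE = re.compile(r"^[a-z0-9._=/+-]+$")

def validate_username(localpart: str) -> str:
    if not localpart:
        raise ValueError("Please provide a username")
    if len(localpart) > 255:
        raise ValueError("Too long, please choose something shorter")
    if not LOCALPART_RE.match(localpart):
        raise ValueError("Invalid characters in username")
    return localpart

print(validate_username("alice"))  # alice
```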
@@ -15,5 +15,5 @@
      </g>
    </g>
  </svg>
  <p>An open network for secure, decentralized communication.<br>&copy; 2022 The Matrix.org Foundation C.I.C.</p>
  <p>An open network for secure, decentralized communication.<br>&copy; 2021 The Matrix.org Foundation C.I.C.</p>
</footer>
@@ -118,8 +118,7 @@ class ClientRestResource(JsonResource):
        thirdparty.register_servlets(hs, client_resource)
        sendtodevice.register_servlets(hs, client_resource)
        user_directory.register_servlets(hs, client_resource)
        if hs.config.experimental.groups_enabled:
            groups.register_servlets(hs, client_resource)
        groups.register_servlets(hs, client_resource)
        room_upgrade_rest_servlet.register_servlets(hs, client_resource)
        room_batch.register_servlets(hs, client_resource)
        capabilities.register_servlets(hs, client_resource)
@@ -293,8 +293,7 @@ def register_servlets_for_client_rest_resource(
    ResetPasswordRestServlet(hs).register(http_server)
    SearchUsersRestServlet(hs).register(http_server)
    UserRegisterServlet(hs).register(http_server)
    if hs.config.experimental.groups_enabled:
        DeleteGroupAdminRestServlet(hs).register(http_server)
    DeleteGroupAdminRestServlet(hs).register(http_server)
    AccountValidityRenewServlet(hs).register(http_server)

    # Load the media repo ones if we're using them. Otherwise load the servlets which
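Both hunks touch the same `groups_enabled` guard, so on one side of the diff these endpoints are mounted unconditionally. The guarded pattern itself is simple; a toy sketch (assumed names) of config-gated servlet registration:

```python
class ExperimentalConfig:
    groups_enabled: bool = False

def register_servlets(config: ExperimentalConfig, resource: list) -> None:
    resource.append("sendtodevice")
    if config.groups_enabled:
        # Deprecated endpoints are only mounted when explicitly enabled.
        resource.append("groups")
    resource.append("room_upgrade")

mounted: list = []
register_servlets(ExperimentalConfig(), mounted)
print(mounted)  # ['sendtodevice', 'room_upgrade']
```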
@@ -298,6 +298,7 @@ class Responder:
        Returns:
            Resolves once the response has finished being written
        """
        pass

    def __enter__(self) -> None:
        pass
@@ -92,20 +92,12 @@ class AccountDetailsResource(DirectServeHtmlResource):
            self._sso_handler.render_error(request, "bad_session", e.msg, code=e.code)
            return

        # The configuration might mandate going through this step to validate an
        # automatically generated localpart, so session.chosen_localpart might already
        # be set.
        localpart = ""
        if session.chosen_localpart is not None:
            localpart = session.chosen_localpart

        idp_id = session.auth_provider_id
        template_params = {
            "idp": self._sso_handler.get_identity_providers()[idp_id],
            "user_attributes": {
                "display_name": session.display_name,
                "emails": session.emails,
                "localpart": localpart,
            },
        }
@@ -328,6 +328,7 @@ class HomeServer(metaclass=abc.ABCMeta):
        Does nothing in this base class; overridden in derived classes to start the
        appropriate listeners.
        """
        pass

    def setup_background_tasks(self) -> None:
        """
@@ -60,19 +60,18 @@ class _BackgroundUpdateHandler:


class _BackgroundUpdateContextManager:
    def __init__(
        self, sleep: bool, clock: Clock, sleep_duration_ms: int, update_duration: int
    ):
    BACKGROUND_UPDATE_INTERVAL_MS = 1000
    BACKGROUND_UPDATE_DURATION_MS = 100

    def __init__(self, sleep: bool, clock: Clock):
        self._sleep = sleep
        self._clock = clock
        self._sleep_duration_ms = sleep_duration_ms
        self._update_duration_ms = update_duration

    async def __aenter__(self) -> int:
        if self._sleep:
            await self._clock.sleep(self._sleep_duration_ms / 1000)
            await self._clock.sleep(self.BACKGROUND_UPDATE_INTERVAL_MS / 1000)

        return self._update_duration_ms
        return self.BACKGROUND_UPDATE_DURATION_MS

    async def __aexit__(self, *exc) -> None:
        pass
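One side of this hunk makes the pacing knobs constructor arguments instead of hard-coded class constants. A runnable asyncio approximation of that async context manager, with assumed names:

```python
import asyncio

class PacingContextManager:
    def __init__(self, sleep: bool, sleep_duration_ms: int, update_duration_ms: int):
        self._sleep = sleep
        self._sleep_duration_ms = sleep_duration_ms
        self._update_duration_ms = update_duration_ms

    async def __aenter__(self) -> int:
        if self._sleep:
            # Back off before the next batch so foreground work can run.
            await asyncio.sleep(self._sleep_duration_ms / 1000)
        # Tell the caller how long the next batch should aim to take.
        return self._update_duration_ms

    async def __aexit__(self, *exc) -> None:
        pass

async def main() -> None:
    async with PacingContextManager(True, 10, 100) as target_ms:
        print(f"run a batch sized for ~{target_ms} ms")

asyncio.run(main())
```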
@@ -134,6 +133,9 @@ class BackgroundUpdater:
    process and autotuning the batch size.
    """

    MINIMUM_BACKGROUND_BATCH_SIZE = 1
    DEFAULT_BACKGROUND_BATCH_SIZE = 100

    def __init__(self, hs: "HomeServer", database: "DatabasePool"):
        self._clock = hs.get_clock()
        self.db_pool = database
@@ -158,14 +160,6 @@ class BackgroundUpdater:
        # enable/disable background updates via the admin API.
        self.enabled = True

        self.minimum_background_batch_size = hs.config.background_updates.min_batch_size
        self.default_background_batch_size = (
            hs.config.background_updates.default_batch_size
        )
        self.update_duration_ms = hs.config.background_updates.update_duration_ms
        self.sleep_duration_ms = hs.config.background_updates.sleep_duration_ms
        self.sleep_enabled = hs.config.background_updates.sleep_enabled
    def register_update_controller_callbacks(
        self,
        on_update: ON_UPDATE_CALLBACK,

@@ -222,9 +216,7 @@ class BackgroundUpdater:
        if self._on_update_callback is not None:
            return self._on_update_callback(update_name, database_name, oneshot)

        return _BackgroundUpdateContextManager(
            sleep, self._clock, self.sleep_duration_ms, self.update_duration_ms
        )
        return _BackgroundUpdateContextManager(sleep, self._clock)
    async def _default_batch_size(self, update_name: str, database_name: str) -> int:
        """The batch size to use for the first iteration of a new background

@@ -233,7 +225,7 @@ class BackgroundUpdater:
        if self._default_batch_size_callback is not None:
            return await self._default_batch_size_callback(update_name, database_name)

        return self.default_background_batch_size
        return self.DEFAULT_BACKGROUND_BATCH_SIZE

    async def _min_batch_size(self, update_name: str, database_name: str) -> int:
        """A lower bound on the batch size of a new background update.

@@ -243,7 +235,7 @@ class BackgroundUpdater:
        if self._min_batch_size_callback is not None:
            return await self._min_batch_size_callback(update_name, database_name)

        return self.minimum_background_batch_size
        return self.MINIMUM_BACKGROUND_BATCH_SIZE
    def get_current_update(self) -> Optional[BackgroundUpdatePerformance]:
        """Returns the current background update, if any."""

@@ -262,12 +254,9 @@ class BackgroundUpdater:
        if self.enabled:
            # if we start a new background update, not all updates are done.
            self._all_done = False
            sleep = self.sleep_enabled
            run_as_background_process(
                "background_updates", self.run_background_updates, sleep
            )
            run_as_background_process("background_updates", self.run_background_updates)

    async def run_background_updates(self, sleep: bool) -> None:
    async def run_background_updates(self, sleep: bool = True) -> None:
        if self._running or not self.enabled:
            return
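The `BackgroundUpdater` docstring mentions autotuning the batch size. The usual rule (a hedged sketch, not copied from Synapse) scales the next batch so it runs for roughly the configured `update_duration_ms`:

```python
def next_batch_size(items_updated: int, duration_ms: float,
                    target_duration_ms: int, minimum: int) -> int:
    # Estimate per-item cost from the last batch, then size the next batch
    # to hit the target duration, never dropping below the minimum.
    per_item_ms = duration_ms / max(items_updated, 1)
    return max(int(target_duration_ms / per_item_ms), minimum)

# Last batch: 100 items in 50ms -> next batch targets 100ms -> 200 items.
print(next_batch_size(items_updated=100, duration_ms=50.0,
                      target_duration_ms=100, minimum=1))
```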
@@ -48,6 +48,8 @@ class ExternalIDReuseException(Exception):
    """Exception if writing an external id for a user fails,
    because this external id is given to an other user."""

    pass


@attr.s(frozen=True, slots=True, auto_attribs=True)
class TokenLookupResult:
@@ -125,6 +125,9 @@ class SearchBackgroundUpdateStore(SearchWorkerStore):
    ):
        super().__init__(database, db_conn, hs)

        if not hs.config.server.enable_search:
            return

        self.db_pool.updates.register_background_update_handler(
            self.EVENT_SEARCH_UPDATE_NAME, self._background_reindex_search
        )
@@ -240,13 +243,9 @@ class SearchBackgroundUpdateStore(SearchWorkerStore):

            return len(event_search_rows)

        if self.hs.config.server.enable_search:
            result = await self.db_pool.runInteraction(
                self.EVENT_SEARCH_UPDATE_NAME, reindex_search_txn
            )
        else:
            # Don't index anything if search is not enabled.
            result = 0
        result = await self.db_pool.runInteraction(
            self.EVENT_SEARCH_UPDATE_NAME, reindex_search_txn
        )

        if not result:
            await self.db_pool.updates._end_background_update(
@@ -36,6 +36,7 @@ def run_upgrade(cur, database_engine, config, *args, **kwargs):
        config_files = config.appservice.app_service_config_files
    except AttributeError:
        logger.warning("Could not get app_service_config_files from config")
        pass

    appservices = load_appservices(config.server.server_name, config_files)
@@ -18,10 +18,9 @@ import collections
import inspect
import itertools
import logging
from contextlib import asynccontextmanager, contextmanager
from contextlib import contextmanager
from typing import (
    Any,
    AsyncIterator,
    Awaitable,
    Callable,
    Collection,

@@ -41,7 +40,7 @@ from typing import (
)

import attr
from typing_extensions import AsyncContextManager, Literal
from typing_extensions import ContextManager, Literal

from twisted.internet import defer
from twisted.internet.defer import CancelledError
@@ -492,7 +491,7 @@ class ReadWriteLock:

    Example:

        async with read_write_lock.read("test_key"):
        with await read_write_lock.read("test_key"):
            # do some work
    """
@@ -515,24 +514,22 @@ class ReadWriteLock:
        # Latest writer queued
        self.key_to_current_writer: Dict[str, defer.Deferred] = {}

    def read(self, key: str) -> AsyncContextManager:
        @asynccontextmanager
        async def _ctx_manager() -> AsyncIterator[None]:
            new_defer: "defer.Deferred[None]" = defer.Deferred()
    async def read(self, key: str) -> ContextManager:
        new_defer: "defer.Deferred[None]" = defer.Deferred()

            curr_readers = self.key_to_current_readers.setdefault(key, set())
            curr_writer = self.key_to_current_writer.get(key, None)
        curr_readers = self.key_to_current_readers.setdefault(key, set())
        curr_writer = self.key_to_current_writer.get(key, None)

            curr_readers.add(new_defer)
        curr_readers.add(new_defer)

        # We wait for the latest writer to finish writing. We can safely ignore
        # any existing readers... as they're readers.
        if curr_writer:
            await make_deferred_yieldable(curr_writer)

        @contextmanager
        def _ctx_manager() -> Iterator[None]:
            try:
                # We wait for the latest writer to finish writing. We can safely ignore
                # any existing readers... as they're readers.
                # May raise a `CancelledError` if the `Deferred` wrapping us is
                # cancelled. The `Deferred` we are waiting on must not be cancelled,
                # since we do not own it.
                if curr_writer:
                    await make_deferred_yieldable(stop_cancellation(curr_writer))
                yield
            finally:
                with PreserveLoggingContext():
@@ -541,35 +538,29 @@ class ReadWriteLock:

        return _ctx_manager()

    def write(self, key: str) -> AsyncContextManager:
        @asynccontextmanager
        async def _ctx_manager() -> AsyncIterator[None]:
            new_defer: "defer.Deferred[None]" = defer.Deferred()
    async def write(self, key: str) -> ContextManager:
        new_defer: "defer.Deferred[None]" = defer.Deferred()

            curr_readers = self.key_to_current_readers.get(key, set())
            curr_writer = self.key_to_current_writer.get(key, None)
        curr_readers = self.key_to_current_readers.get(key, set())
        curr_writer = self.key_to_current_writer.get(key, None)

            # We wait on all latest readers and writer.
            to_wait_on = list(curr_readers)
            if curr_writer:
                to_wait_on.append(curr_writer)
        # We wait on all latest readers and writer.
        to_wait_on = list(curr_readers)
        if curr_writer:
            to_wait_on.append(curr_writer)

            # We can clear the list of current readers since `new_defer` waits
            # for them to finish.
            curr_readers.clear()
            self.key_to_current_writer[key] = new_defer
        # We can clear the list of current readers since the new writer waits
        # for them to finish.
        curr_readers.clear()
        self.key_to_current_writer[key] = new_defer

            to_wait_on_defer = defer.gatherResults(to_wait_on)
        await make_deferred_yieldable(defer.gatherResults(to_wait_on))

        @contextmanager
        def _ctx_manager() -> Iterator[None]:
            try:
                # Wait for all current readers and the latest writer to finish.
                # May raise a `CancelledError` immediately after the wait if the
                # `Deferred` wrapping us is cancelled. We must only release the lock
                # once we have acquired it, hence the use of `delay_cancellation`
                # rather than `stop_cancellation`.
                await make_deferred_yieldable(delay_cancellation(to_wait_on_defer))
                yield
            finally:
                # Release the lock.
                with PreserveLoggingContext():
                    new_defer.callback(None)
                # `self.key_to_current_writer[key]` may be missing if there was another
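To make the two shapes concrete: one side returns a synchronous context manager that is entered with `with await lock.read(key):`, while the other returns an async context manager used as `async with lock.read(key):`. A compact, hedged re-implementation of the reader/writer bookkeeping using asyncio events instead of Twisted Deferreds:

```python
import asyncio
from contextlib import asynccontextmanager
from typing import AsyncIterator, Dict, Set

class ReadWriteLock:
    """Readers share; a writer waits for current readers and the last writer."""

    def __init__(self) -> None:
        self._readers: Dict[str, Set[asyncio.Event]] = {}
        self._writer: Dict[str, asyncio.Event] = {}

    @asynccontextmanager
    async def read(self, key: str) -> AsyncIterator[None]:
        done = asyncio.Event()
        self._readers.setdefault(key, set()).add(done)
        writer = self._writer.get(key)
        if writer is not None:
            await writer.wait()  # only the latest writer gates new readers
        try:
            yield
        finally:
            done.set()

    @asynccontextmanager
    async def write(self, key: str) -> AsyncIterator[None]:
        done = asyncio.Event()
        to_wait_on = list(self._readers.get(key, set()))
        prev_writer = self._writer.get(key)
        if prev_writer is not None:
            to_wait_on.append(prev_writer)
        # The new writer now gates everyone, so current readers can be forgotten.
        self._readers.get(key, set()).clear()
        self._writer[key] = done
        for event in to_wait_on:
            await event.wait()
        try:
            yield
        finally:
            done.set()

async def main() -> None:
    lock = ReadWriteLock()
    async with lock.read("k"):
        print("reading")
    async with lock.write("k"):
        print("writing")

asyncio.run(main())
```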
@@ -695,48 +686,12 @@ def stop_cancellation(deferred: "defer.Deferred[T]") -> "defer.Deferred[T]":
        Synapse logcontext rules.

    Returns:
        A new `Deferred`, which will contain the result of the original `Deferred`.
        The new `Deferred` will not propagate cancellation through to the original.
        When cancelled, the new `Deferred` will fail with a `CancelledError`.

        The new `Deferred` will not follow the Synapse logcontext rules and should be
        wrapped with `make_deferred_yieldable`.
        A new `Deferred`, which will contain the result of the original `Deferred`,
        but will not propagate cancellation through to the original. When cancelled,
        the new `Deferred` will fail with a `CancelledError` and will not follow the
        Synapse logcontext rules. `make_deferred_yieldable` should be used to wrap
        the new `Deferred`.
    """
    new_deferred: "defer.Deferred[T]" = defer.Deferred()
    deferred.chainDeferred(new_deferred)
    return new_deferred
def delay_cancellation(deferred: "defer.Deferred[T]") -> "defer.Deferred[T]":
    """Delay cancellation of a `Deferred` until it resolves.

    Has the same effect as `stop_cancellation`, but the returned `Deferred` will not
    resolve with a `CancelledError` until the original `Deferred` resolves.

    Args:
        deferred: The `Deferred` to protect against cancellation. May optionally follow
            the Synapse logcontext rules.

    Returns:
        A new `Deferred`, which will contain the result of the original `Deferred`.
        The new `Deferred` will not propagate cancellation through to the original.
        When cancelled, the new `Deferred` will wait until the original `Deferred`
        resolves before failing with a `CancelledError`.

        The new `Deferred` will follow the Synapse logcontext rules if `deferred`
        follows the Synapse logcontext rules. Otherwise the new `Deferred` should be
        wrapped with `make_deferred_yieldable`.
    """

    def handle_cancel(new_deferred: "defer.Deferred[T]") -> None:
        # before the new deferred is cancelled, we `pause` it to stop the cancellation
        # propagating. we then `unpause` it once the wrapped deferred completes, to
        # propagate the exception.
        new_deferred.pause()
        new_deferred.errback(Failure(CancelledError()))

        deferred.addBoth(lambda _: new_deferred.unpause())

    new_deferred: "defer.Deferred[T]" = defer.Deferred(handle_cancel)
    new_deferred: defer.Deferred[T] = defer.Deferred()
    deferred.chainDeferred(new_deferred)
    return new_deferred
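The two helpers differ only in when the `CancelledError` surfaces. A hedged, runnable demonstration with plain Twisted Deferreds (the definitions mirror the diff above; the comments describe the observed behavior, not guarantees):

```python
from twisted.internet import defer
from twisted.internet.defer import CancelledError
from twisted.python.failure import Failure

def stop_cancellation(deferred):
    # No canceller: cancelling `new_deferred` fails it immediately and the
    # original Deferred is left running.
    new_deferred = defer.Deferred()
    deferred.chainDeferred(new_deferred)
    return new_deferred

def delay_cancellation(deferred):
    def handle_cancel(new_deferred):
        # Hold the CancelledError back until the wrapped Deferred settles.
        new_deferred.pause()
        new_deferred.errback(Failure(CancelledError()))
        deferred.addBoth(lambda _: new_deferred.unpause())

    new_deferred = defer.Deferred(handle_cancel)
    deferred.chainDeferred(new_deferred)
    return new_deferred

# stop_cancellation: the wrapper fails as soon as it is cancelled.
d1 = defer.Deferred()
w1 = stop_cancellation(d1)
w1.addErrback(lambda f: print("stop:", f.type.__name__))   # fires at cancel()
w1.cancel()
d1.callback("d1 still completed normally")

# delay_cancellation: the wrapper only fails once the original resolves.
d2 = defer.Deferred()
w2 = delay_cancellation(d2)
w2.cancel()                                                # nothing yet
w2.addErrback(lambda f: print("delay:", f.type.__name__))  # fires on callback below
d2.callback("d2 completed; now the CancelledError is released")
```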
@@ -41,7 +41,6 @@ from twisted.python.failure import Failure

from synapse.logging.context import make_deferred_yieldable, preserve_fn
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import delay_cancellation
from synapse.util.caches.deferred_cache import DeferredCache
from synapse.util.caches.lrucache import LruCache
@@ -351,11 +350,6 @@ class DeferredCacheDescriptor(_CacheDescriptorBase):
                ret = defer.maybeDeferred(preserve_fn(self.orig), obj, *args, **kwargs)
                ret = cache.set(cache_key, ret, callback=invalidate_callback)

                # We started a new call to `self.orig`, so we must always wait for it to
                # complete. Otherwise we might mark our current logging context as
                # finished while `self.orig` is still using it in the background.
                ret = delay_cancellation(ret)

            return make_deferred_yieldable(ret)

        wrapped = cast(_CachedFunction, _wrapped)
@@ -516,11 +510,6 @@ class DeferredCacheListDescriptor(_CacheDescriptorBase):
                d = defer.gatherResults(cached_defers, consumeErrors=True).addCallbacks(
                    lambda _: results, unwrapFirstError
                )
                if missing:
                    # We started a new call to `self.orig`, so we must always wait for it to
                    # complete. Otherwise we might mark our current logging context as
                    # finished while `self.orig` is still using it in the background.
                    d = delay_cancellation(d)
                return make_deferred_yieldable(d)
            else:
                return defer.succeed(results)
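The comments in these hunks explain the invariant: cached work is shared, so one caller's cancellation must not tear down a computation that other callers (and the logging context) still depend on. In asyncio terms the analogous tool is `asyncio.shield`; an illustrative (non-Synapse) sketch:

```python
import asyncio

async def expensive() -> str:
    await asyncio.sleep(0.05)
    return "result"

async def main() -> None:
    shared = asyncio.ensure_future(expensive())  # the cached computation
    handle = asyncio.shield(shared)              # one caller's view of it
    handle.cancel()                              # cancels the view only
    print(await shared)                          # shared work still completes

asyncio.run(main())
```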
@@ -22,6 +22,8 @@ class TreeCacheNode(dict):
    leaves.
    """

    pass


class TreeCache:
    """
@@ -64,7 +64,6 @@ def build_jinja_env(
        {
            "format_ts": _format_ts_filter,
            "mxc_to_http": _create_mxc_to_http_filter(config.server.public_baseurl),
            "localpart_from_email": _localpart_from_email_filter,
        }
    )

@@ -113,7 +112,3 @@ def _create_mxc_to_http_filter(

def _format_ts_filter(value: int, format: str) -> str:
    return time.strftime(format, time.localtime(value / 1000))


def _localpart_from_email_filter(address: str) -> str:
    return address.rsplit("@", 1)[0]
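A quick check of the `localpart_from_email` filter wiring, using a throwaway Jinja environment (sketch only; Synapse registers it inside `build_jinja_env` as shown above):

```python
import jinja2

env = jinja2.Environment()
env.filters["localpart_from_email"] = lambda address: address.rsplit("@", 1)[0]

template = env.from_string("{{ 'alice@example.com' | localpart_from_email }}")
print(template.render())  # alice
```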
@@ -1,58 +0,0 @@
# Copyright 2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import yaml

from synapse.storage.background_updates import BackgroundUpdater

from tests.unittest import HomeserverTestCase, override_config


class BackgroundUpdateConfigTestCase(HomeserverTestCase):
    # Tests that the default values in the config are correctly loaded. Note that the default
    # values are loaded when the corresponding config options are commented out, which is why there isn't
    # a config specified here.
    def test_default_configuration(self):
        background_updater = BackgroundUpdater(
            self.hs, self.hs.get_datastores().main.db_pool
        )

        self.assertEqual(background_updater.minimum_background_batch_size, 1)
        self.assertEqual(background_updater.default_background_batch_size, 100)
        self.assertEqual(background_updater.sleep_enabled, True)
        self.assertEqual(background_updater.sleep_duration_ms, 1000)
        self.assertEqual(background_updater.update_duration_ms, 100)

    # Tests that non-default values for the config options are properly picked up and passed on.
    @override_config(
        yaml.safe_load(
            """
            background_updates:
                background_update_duration_ms: 1000
                sleep_enabled: false
                sleep_duration_ms: 600
                min_batch_size: 5
                default_batch_size: 50
            """
        )
    )
    def test_custom_configuration(self):
        background_updater = BackgroundUpdater(
            self.hs, self.hs.get_datastores().main.db_pool
        )

        self.assertEqual(background_updater.minimum_background_batch_size, 5)
        self.assertEqual(background_updater.default_background_batch_size, 50)
        self.assertEqual(background_updater.sleep_enabled, False)
        self.assertEqual(background_updater.sleep_duration_ms, 600)
        self.assertEqual(background_updater.update_duration_ms, 1000)
@@ -15,15 +15,11 @@
from collections import Counter
from unittest.mock import Mock

from twisted.test.proto_helpers import MemoryReactor

import synapse.rest.admin
import synapse.storage
from synapse.api.constants import EventTypes, JoinRules
from synapse.api.room_versions import RoomVersions
from synapse.rest.client import knock, login, room
from synapse.server import HomeServer
from synapse.util import Clock

from tests import unittest
@@ -36,7 +32,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        knock.register_servlets,
    ]

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
    def prepare(self, reactor, clock, hs):
        self.admin_handler = hs.get_admin_handler()

        self.user1 = self.register_user("user1", "password")

@@ -45,7 +41,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.user2 = self.register_user("user2", "password")
        self.token2 = self.login("user2", "password")

    def test_single_public_joined_room(self) -> None:
    def test_single_public_joined_room(self):
        """Test that we write *all* events for a public room"""
        room_id = self.helper.create_room_as(
            self.user1, tok=self.token1, is_public=True

@@ -78,7 +74,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 1)

    def test_single_private_joined_room(self) -> None:
    def test_single_private_joined_room(self):
        """Tests that we correctly write state when we can't see all events in
        a room.
        """

@@ -116,7 +112,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 1)

    def test_single_left_room(self) -> None:
    def test_single_left_room(self):
        """Tests that we don't see events in the room after we leave."""
        room_id = self.helper.create_room_as(self.user1, tok=self.token1)
        self.helper.send(room_id, body="Hello!", tok=self.token1)

@@ -148,7 +144,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 2)

    def test_single_left_rejoined_private_room(self) -> None:
    def test_single_left_rejoined_private_room(self):
        """Tests that see the correct events in private rooms when we
        repeatedly join and leave.
        """

@@ -189,7 +185,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(counter[(EventTypes.Member, self.user1)], 1)
        self.assertEqual(counter[(EventTypes.Member, self.user2)], 3)

    def test_invite(self) -> None:
    def test_invite(self):
        """Tests that pending invites get handled correctly."""
        room_id = self.helper.create_room_as(self.user1, tok=self.token1)
        self.helper.send(room_id, body="Hello!", tok=self.token1)

@@ -208,7 +204,7 @@ class ExfiltrateData(unittest.HomeserverTestCase):
        self.assertEqual(args[1].content["membership"], "invite")
        self.assertTrue(args[2])  # Assert there is at least one bit of state

    def test_knock(self) -> None:
    def test_knock(self):
        """Tests that knock get handled correctly."""
        # create a knockable v7 room
        room_id = self.helper.create_room_as(
@@ -15,12 +15,8 @@ from unittest.mock import Mock

import pymacaroons

from twisted.test.proto_helpers import MemoryReactor

from synapse.api.errors import AuthError, ResourceLimitError
from synapse.rest import admin
from synapse.server import HomeServer
from synapse.util import Clock

from tests import unittest
from tests.test_utils import make_awaitable

@@ -31,7 +27,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
        admin.register_servlets,
    ]

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
    def prepare(self, reactor, clock, hs):
        self.auth_handler = hs.get_auth_handler()
        self.macaroon_generator = hs.get_macaroon_generator()

@@ -46,23 +42,23 @@ class AuthTestCase(unittest.HomeserverTestCase):

        self.user1 = self.register_user("a_user", "pass")

    def test_macaroon_caveats(self) -> None:
    def test_macaroon_caveats(self):
        token = self.macaroon_generator.generate_guest_access_token("a_user")
        macaroon = pymacaroons.Macaroon.deserialize(token)

        def verify_gen(caveat: str) -> bool:
        def verify_gen(caveat):
            return caveat == "gen = 1"

        def verify_user(caveat: str) -> bool:
        def verify_user(caveat):
            return caveat == "user_id = a_user"

        def verify_type(caveat: str) -> bool:
        def verify_type(caveat):
            return caveat == "type = access"

        def verify_nonce(caveat: str) -> bool:
        def verify_nonce(caveat):
            return caveat.startswith("nonce =")

        def verify_guest(caveat: str) -> bool:
        def verify_guest(caveat):
            return caveat == "guest = true"

        v = pymacaroons.Verifier()

@@ -73,7 +69,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
        v.satisfy_general(verify_guest)
        v.verify(macaroon, self.hs.config.key.macaroon_secret_key)

    def test_short_term_login_token_gives_user_id(self) -> None:
    def test_short_term_login_token_gives_user_id(self):
        token = self.macaroon_generator.generate_short_term_login_token(
            self.user1, "", duration_in_ms=5000
        )

@@ -88,7 +84,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            AuthError,
        )

    def test_short_term_login_token_gives_auth_provider(self) -> None:
    def test_short_term_login_token_gives_auth_provider(self):
        token = self.macaroon_generator.generate_short_term_login_token(
            self.user1, auth_provider_id="my_idp"
        )

@@ -96,7 +92,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
        self.assertEqual(self.user1, res.user_id)
        self.assertEqual("my_idp", res.auth_provider_id)

    def test_short_term_login_token_cannot_replace_user_id(self) -> None:
    def test_short_term_login_token_cannot_replace_user_id(self):
        token = self.macaroon_generator.generate_short_term_login_token(
            self.user1, "", duration_in_ms=5000
        )

@@ -116,7 +112,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            AuthError,
        )

    def test_mau_limits_disabled(self) -> None:
    def test_mau_limits_disabled(self):
        self.auth_blocking._limit_usage_by_mau = False
        # Ensure does not throw exception
        self.get_success(

@@ -131,7 +127,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            )
        )

    def test_mau_limits_exceeded_large(self) -> None:
    def test_mau_limits_exceeded_large(self):
        self.auth_blocking._limit_usage_by_mau = True
        self.hs.get_datastores().main.get_monthly_active_count = Mock(
            return_value=make_awaitable(self.large_number_of_users)

@@ -154,7 +150,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            ResourceLimitError,
        )

    def test_mau_limits_parity(self) -> None:
    def test_mau_limits_parity(self):
        # Ensure we're not at the unix epoch.
        self.reactor.advance(1)
        self.auth_blocking._limit_usage_by_mau = True

@@ -193,7 +189,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            )
        )

    def test_mau_limits_not_exceeded(self) -> None:
    def test_mau_limits_not_exceeded(self):
        self.auth_blocking._limit_usage_by_mau = True

        self.hs.get_datastores().main.get_monthly_active_count = Mock(

@@ -215,7 +211,7 @@ class AuthTestCase(unittest.HomeserverTestCase):
            )
        )

    def _get_macaroon(self) -> pymacaroons.Macaroon:
    def _get_macaroon(self):
        token = self.macaroon_generator.generate_short_term_login_token(
            self.user1, "", duration_in_ms=5000
        )
Some files were not shown because too many files have changed in this diff.