# Compare commits

Comparing `ts/spam-er` ... `erikj/push` (101 commits).
Commit SHA1s in this comparison:

```text
055dc16d49  9319bf036c  e5c2ea6341  8141a0d0b3  8e47d72992  da10dfc311
f9d470b2da  8b33331cb5  2ebb0c6f99  3bbe3074fb  6fd8b850ed  4b5a1a45da
2dd2ca17a0  39dee30f01  456a394bf7  10280fc943  4bd06c9c98  c8c12ac13a
9bb3bbe153  68ff8f3575  11efe7231f  f69785e875  151cb6e2f4  d882ee6219
94cd2cad4f  155399a145  71e8afe34d  2be5a2b07b  96df31239c  177b884ad7
eb4aaa1b4b  ab2a615cfb  684feeaf2f  66a5f6c400  f16ec055cc  b935c9529c
d25935cd3d  47619017f9  5675cebfaa  6ff99e3bea  a1cb05b3e8  d38c73e9ab
0fce474a40  19d79b6ebe  3d8839c30c  50ae4eafe1  682431efbe  635f0d916b
df4963548b  a167304c8b  deca250e3f  d24a1486e5  1aa30f7b3e  c22314c4e8
d4713d3e33  8afb7b55d0  37935b5183  0d17357fcd  182ca78a12  5331fb5b47
6edefef602  942c30b16b  24b590de32  a34a41f135  1402159bb8  32ef24fbd7
fcf951d5dc  44d7bb13c3  5c3d525cad  1fe202a1a3  6d8d1218dd  3eafee629d
e24c11afd6  83be72d76c  4ea546067d  3ce15cc7be  b4eb163434  8060034612
a5c26750b5  86a515ccbf  6f04ae7033  c3b232cb39  8689230a55  cde8af9a49
e8ae472d3b  9013104429  aec69d2481  39bed28b28  c9fc2c0d22  57f6c496d0
17e1eb7749  de1e599b9d  409573f6d0  bf7ce92bf7  db10f2c037  6ee61b9052
d38d242411  a559c8b0d9  9d8e380d2e  dffecade7d  a4c75918b3
```
#### CHANGES.md (38 changed lines)

````diff
@@ -1,7 +1,18 @@
-Synapse 1.59.0rc1 (2022-05-10)
-==============================
-
-This release makes several changes that server administrators should be aware of:
-
+Synapse 1.59.1 (2022-05-18)
+===========================
+
+This release fixes a long-standing issue which could prevent Synapse's user directory from updating properly.
+
+Bugfixes
+--------
+
+- Fix a long-standing bug where the user directory background process would fail to make forward progress if a user included a null codepoint in their display name or avatar. Contributed by Nick @ Beeper. ([\#12762](https://github.com/matrix-org/synapse/issues/12762))
+
+
+Synapse 1.59.0 (2022-05-17)
+===========================
+
+Synapse 1.59 makes several changes that server administrators should be aware of:
+
 - Device name lookup over federation is now disabled by default. ([\#12616](https://github.com/matrix-org/synapse/issues/12616))
 - The `synapse.app.appservice` and `synapse.app.user_dir` worker application types are now deprecated. ([\#12452](https://github.com/matrix-org/synapse/issues/12452), [\#12654](https://github.com/matrix-org/synapse/issues/12654))

@@ -10,6 +21,27 @@ See [the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/
 
 Additionally, this release removes the non-standard `m.login.jwt` login type from Synapse. It can be replaced with `org.matrix.login.jwt` for identical behaviour. This is only used if `jwt_config.enabled` is set to `true` in the configuration. ([\#12597](https://github.com/matrix-org/synapse/issues/12597))
 
+
+Bugfixes
+--------
+
+- Fix DB performance regression introduced in Synapse 1.59.0rc2. ([\#12745](https://github.com/matrix-org/synapse/issues/12745))
+
+
+Synapse 1.59.0rc2 (2022-05-16)
+==============================
+
+Note: this release candidate includes a performance regression which can cause database disruption. Other release candidates in the v1.59.0 series are not affected, and a fix will be included in the v1.59.0 final release.
+
+Bugfixes
+--------
+
+- Fix a bug introduced in Synapse 1.58.0 where `/sync` would fail if the most recent event in a room was rejected. ([\#12729](https://github.com/matrix-org/synapse/issues/12729))
+
+
 Synapse 1.59.0rc1 (2022-05-10)
 ==============================
 
 Features
 --------
````
#### New changelog fragments (`changelog.d/`, one new file each)

- `changelog.d/10533.misc`: Improve event caching mechanism to avoid having multiple copies of an event in memory at a time.
- `changelog.d/12498.misc`: Preparation for faster-room-join work: return subsets of room state which we already have, immediately.
- `changelog.d/12513.feature`: Measure the time taken in spam-checking callbacks and expose those measurements as metrics.
- `changelog.d/12567.misc`: Replace string literal instances of stream key types with typed constants.
- `changelog.d/12618.feature`: Add a `default_power_level_content_override` config option to set default room power levels per room preset.
- `changelog.d/12623.feature`: Add support for [MSC3787: Allowing knocks to restricted rooms](https://github.com/matrix-org/matrix-spec-proposals/pull/3787).
- `changelog.d/12672.feature`: Send `USER_IP` commands on a different Redis channel, in order to reduce traffic to workers that do not process these commands.
- `changelog.d/12673.feature`: Synapse will now reload [cache config](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#caching) when it receives a [SIGHUP](https://en.wikipedia.org/wiki/SIGHUP) signal.
- `changelog.d/12680.misc`: Remove code which updates the unused database column `application_services_state.last_txn`.
- `changelog.d/12687.bugfix`: Add a unique index to `state_group_edges` to prevent duplicates being accidentally introduced and the consequential impact on performance.
- `changelog.d/12691.misc`: Remove an unneeded class in the push code.
- `changelog.d/12693.misc`: Consolidate parsing of relation information from events.
- `changelog.d/12696.bugfix`: Fix a long-standing bug where an empty room would be created when a user with an insufficient power level tried to upgrade a room.
- `changelog.d/12698.misc`: Respect the `@cancellable` flag for `DirectServe{Html,Json}Resource`s.
- `changelog.d/12699.misc`: Respect the `@cancellable` flag for `RestServlet`s and `BaseFederationServlet`s.
- `changelog.d/12700.misc`: Respect the `@cancellable` flag for `ReplicationEndpoint`s.
- `changelog.d/12701.feature`: Add a config option to allow for auto-tuning of caches.
- `changelog.d/12703.misc`: Convert namespace class `Codes` into a string enum.
- `changelog.d/12705.misc`: Complain if a federation endpoint has the `@cancellable` flag, since some of the wrapper code may not handle cancellation correctly yet.
- `changelog.d/12708.misc`: Enable cancellation of `GET /rooms/$room_id/members`, `GET /rooms/$room_id/state` and `GET /rooms/$room_id/state/$event_type/*` requests.
- `changelog.d/12709.removal`: Require a body in POST requests to `/rooms/{roomId}/receipt/{receiptType}/{eventId}`, as required by the [Matrix specification](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidreceiptreceipttypeeventid). This breaks compatibility with Element Android 1.2.0 and earlier: users of those clients will be unable to send read receipts.
- `changelog.d/12711.misc`: Optimize private read receipt filtering.
- `changelog.d/12713.bugfix`: Fix a bug introduced in Synapse 1.30.0 where empty rooms could be automatically created if a monthly active users limit is set.
- `changelog.d/12715.doc`: Fix a typo in the Media Admin API documentation.
- `changelog.d/12716.misc`: Add type annotations to increase the number of modules passing `disallow-untyped-defs`.
- `changelog.d/12717.misc`: Add some type hints to datastore.
- `changelog.d/12720.misc`: Drop the logging level of status messages for the URL preview cache expiry job from INFO to DEBUG.
- `changelog.d/12721.bugfix`: Fix push to dismiss notifications when read on another client. Contributed by @SpiritCroc @ Beeper.
- `changelog.d/12723.misc`: Downgrade some OIDC errors to warnings in the logs, to reduce the noise of Sentry reports.
- `changelog.d/12726.misc`: Add type annotations to increase the number of modules passing `disallow-untyped-defs`.
- `changelog.d/12727.doc`: Update the OpenID Connect example for Keycloak to be compatible with newer versions of Keycloak. Contributed by @nhh.
- `changelog.d/12731.misc`: Update configs used by Complement to allow more invites/3PID validations during tests.
- `changelog.d/12734.misc`: Tidy up and type-hint the database engine modules.
- `changelog.d/12742.doc`: Fix a typo in the server listener documentation.
- `changelog.d/12747.bugfix`: Fix poor database performance when reading the cache invalidation stream for large servers with lots of workers.
- `changelog.d/12748.doc`: Link to the configuration manual from the welcome page of the documentation.
- `changelog.d/12749.doc`: Fix a typo in the `run_background_tasks_on` option name in the configuration manual documentation.
- `changelog.d/12753.misc`: Add some type hints to datastore.
- `changelog.d/12759.doc`: Add information regarding the `rc_invites` ratelimiting option to the configuration docs.
- `changelog.d/12761.doc`: Add documentation for cancellation of request processing.
- `changelog.d/12762.misc`: Fix a long-standing bug where the user directory background process would fail to make forward progress if a user included a null codepoint in their display name or avatar.
- `changelog.d/12765.doc`: Recommend using docker to run tests against postgres.
- `changelog.d/12769.misc`: Tweak the mypy plugin so that `@cached` can accept `on_invalidate=None`.
- `changelog.d/12770.bugfix`: Delete events from the `federation_inbound_events_staging` table when a room is purged through the admin API.
- `changelog.d/12772.misc`: Move methods that call `add_push_rule` to the `PushRuleStore` class.
- `changelog.d/12773.doc`: Add missing user directory endpoint from the generic worker documentation. Contributed by @olmari.
- `changelog.d/12774.misc`: Make handling of the federation Authorization header (more) compliant with RFC 7230.
- `changelog.d/12775.misc`: Refactor `resolve_state_groups_for_events` to not pull out full state when no state resolution happens.
- `changelog.d/12776.doc`: Add additional info to documentation of config option `cache_autotuning`.
- `changelog.d/12777.doc`: Update configuration manual documentation to document size-related suffixes.
- `changelog.d/12779.bugfix`: Give a meaningful error message when a client tries to create a room with an invalid alias localpart.
- `changelog.d/12781.misc`: Do not keep going if there are 5 back-to-back background update failures.
- `changelog.d/12783.misc`: Fix federation when using the demo scripts.
- `changelog.d/12785.doc`: Fix invalid YAML syntax in the example documentation for the `url_preview_accept_language` config option.
- `changelog.d/12786.feature`: Implement [MSC3818: Copy room type on upgrade](https://github.com/matrix-org/matrix-spec-proposals/pull/3818).
- `changelog.d/12789.misc`: The `hash_password` script now fails when it is called without specifying a config file.
- `changelog.d/12790.misc`: Simplify `disallow_untyped_defs` config in `mypy.ini`.
- `changelog.d/12791.misc`: Update EventContext `get_current_event_ids` and `get_prev_event_ids` to accept state filters and update calls where possible.
- `changelog.d/12792.feature`: Implement [MSC3818: Copy room type on upgrade](https://github.com/matrix-org/matrix-spec-proposals/pull/3818).
- `changelog.d/12794.bugfix`: Fix a bug introduced in 1.43.0 where a file (`providers.json`) was never closed. Contributed by @arkamar.
- `changelog.d/12803.bugfix`: Fix a long-standing bug where finished log contexts would be re-started when failing to contact remote homeservers.
- `changelog.d/12809.feature`: Send `USER_IP` commands on a different Redis channel, in order to reduce traffic to workers that do not process these commands.
- `changelog.d/12811.misc`: Reduce the amount of state we pull from the DB.
- `changelog.d/12828.misc`: Pull out less state when handling gaps in the room DAG.
#### debian/changelog (vendored, 18 changed lines)

````diff
@@ -1,3 +1,21 @@
+matrix-synapse-py3 (1.59.1) stable; urgency=medium
+
+  * New Synapse release 1.59.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Wed, 18 May 2022 11:41:46 +0100
+
+matrix-synapse-py3 (1.59.0) stable; urgency=medium
+
+  * New Synapse release 1.59.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 17 May 2022 10:26:50 +0100
+
+matrix-synapse-py3 (1.59.0~rc2) stable; urgency=medium
+
+  * New Synapse release 1.59.0rc2.
+
+ -- Synapse Packaging team <packages@matrix.org>  Mon, 16 May 2022 12:52:15 +0100
+
 matrix-synapse-py3 (1.59.0~rc1) stable; urgency=medium
 
   * Adjust how the `exported-requirements.txt` file is generated as part of
````
#### (Demo start script)

````diff
@@ -12,6 +12,7 @@ export PYTHONPATH
 
 echo "$PYTHONPATH"
 
+# Create servers which listen on HTTP at 808x and HTTPS at 848x.
 for port in 8080 8081 8082; do
     echo "Starting server on port $port... "

@@ -19,10 +20,12 @@ for port in 8080 8081 8082; do
     mkdir -p demo/$port
     pushd demo/$port || exit
 
-    # Generate the configuration for the homeserver at localhost:848x.
+    # Generate the configuration for the homeserver at localhost:848x. Note that
+    # the homeserver name needs to match the HTTPS listening port for federation
+    # to work properly.
     python3 -m synapse.app.homeserver \
         --generate-config \
-        --server-name "localhost:$port" \
+        --server-name "localhost:$https_port" \
        --config-path "$port.config" \
        --report-stats no
````
#### (Test configurations used by Complement, two files)

````diff
@@ -53,6 +53,18 @@ rc_joins:
   per_second: 9999
   burst_count: 9999
 
+rc_3pid_validation:
+  per_second: 1000
+  burst_count: 1000
+
+rc_invites:
+  per_room:
+    per_second: 1000
+    burst_count: 1000
+  per_user:
+    per_second: 1000
+    burst_count: 1000
+
 federation_rr_transactions_per_room_per_second: 9999
 
 ## Experimental Features ##

@@ -87,6 +87,18 @@ rc_joins:
   per_second: 9999
   burst_count: 9999
 
+rc_3pid_validation:
+  per_second: 1000
+  burst_count: 1000
+
+rc_invites:
+  per_room:
+    per_second: 1000
+    burst_count: 1000
+  per_user:
+    per_second: 1000
+    burst_count: 1000
+
 federation_rr_transactions_per_room_per_second: 9999
 
 ## API Configuration ##
````
#### (Documentation table of contents)

````diff
@@ -89,6 +89,7 @@
 - [Database Schemas](development/database_schema.md)
 - [Experimental features](development/experimental_features.md)
 - [Synapse Architecture]()
+  - [Cancellation](development/synapse_architecture/cancellation.md)
 - [Log Contexts](log_contexts.md)
 - [Replication](replication.md)
 - [TCP Replication](tcp_replication.md)
````
#### (Media Admin API documentation)

````diff
@@ -289,7 +289,7 @@ POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
 
 URL Parameters
 
-* `unix_timestamp_in_ms`: string representing a positive integer - Unix timestamp in milliseconds.
+* `before_ts`: string representing a positive integer - Unix timestamp in milliseconds.
   All cached media that was last accessed before this timestamp will be removed.
 
 Response:
````
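For illustration, a call to this endpoint looks something like the following (the base URL, admin access token, and timestamp are placeholders):

```shell
curl -X POST \
  -H "Authorization: Bearer <admin_access_token>" \
  "https://synapse.example.com/_synapse/admin/v1/purge_media_cache?before_ts=1652745600000"
```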
#### (Contributing guide: running unit tests against PostgreSQL)

````diff
@@ -206,7 +206,32 @@ This means that we need to run our unit tests against PostgreSQL too. Our CI does
 this automatically for pull requests and release candidates, but it's sometimes
 useful to reproduce this locally.
 
-To do so, [configure Postgres](../postgres.md) and run `trial` with the
+#### Using Docker
+
+The easiest way to do so is to run Postgres via a docker container. In one
+terminal:
+
+```shell
+docker run --rm -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=postgres -e POSTGRES_DB=postgres -p 5432:5432 postgres:14
+```
+
+If you see an error like
+
+```
+docker: Error response from daemon: driver failed programming external connectivity on endpoint nice_ride (b57bbe2e251b70015518d00c9981e8cb8346b5c785250341a6c53e3c899875f1): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use.
+```
+
+then something is already bound to port 5432. You're probably already running postgres locally.
+
+Once you have a postgres server running, invoke `trial` in a second terminal:
+
+```shell
+SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_HOST=127.0.0.1 SYNAPSE_POSTGRES_USER=postgres SYNAPSE_POSTGRES_PASSWORD=mysecretpassword poetry run trial tests
+```
+
+#### Using an existing Postgres installation
+
+If you have postgres already installed on your system, you can run `trial` with the
 following environment variables matching your configuration:
 
 - `SYNAPSE_POSTGRES` to anything nonempty

@@ -229,8 +254,8 @@ You don't need to specify the host, user, port or password if your Postgres
 server is set to authenticate you over the UNIX socket (i.e. if the `psql` command
 works without further arguments).
 
-Your Postgres account needs to be able to create databases.
+Your Postgres account needs to be able to create databases; see the postgres
+docs for [`ALTER ROLE`](https://www.postgresql.org/docs/current/sql-alterrole.html).
 
 ## Run the integration tests ([Sytest](https://github.com/matrix-org/sytest)).
````
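Regarding the "address already in use" error above: to see which process is holding port 5432, a quick check (assuming `lsof` is available) is:

```shell
# -n and -P skip DNS and port-name lookups, so output appears immediately.
lsof -nP -i :5432
```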
#### (Demo scripts documentation)

````diff
@@ -5,7 +5,7 @@
 Requires you to have a [Synapse development environment setup](https://matrix-org.github.io/synapse/develop/development/contributing_guide.html#4-install-the-dependencies).
 
 The demo setup allows running three federation Synapse servers, with server
-names `localhost:8080`, `localhost:8081`, and `localhost:8082`.
+names `localhost:8480`, `localhost:8481`, and `localhost:8482`.
 
 You can access them via any Matrix client over HTTP at `localhost:8080`,
 `localhost:8081`, and `localhost:8082` or over HTTPS at `localhost:8480`,

@@ -20,9 +20,10 @@ and the servers are configured in a highly insecure way, including:
 The servers are configured to store their data under `demo/8080`, `demo/8081`, and
 `demo/8082`. This includes configuration, logs, SQLite databases, and media.
 
-Note that when joining a public room on a different HS via "#foo:bar.net", then
-you are (in the current impl) joining a room with room_id "foo". This means that
-it won't work if your HS already has a room with that name.
+Note that when joining a public room on a different homeserver via "#foo:bar.net",
+then you are (in the current implementation) joining a room with room_id "foo".
+This means that it won't work if your homeserver already has a room with that
+name.
 
 ## Using the demo scripts
````
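For reference, the demo servers are typically started and stopped with the scripts in the repository's `demo/` directory (a sketch; exact usage may differ):

```shell
./demo/start.sh   # start the three homeservers
./demo/stop.sh    # stop them again
```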
#### docs/development/synapse_architecture/cancellation.md (new file, 392 lines)

# Cancellation
Sometimes, requests take a long time to service and clients disconnect
before Synapse produces a response. To avoid wasting resources, Synapse
can cancel request processing for select endpoints marked with the
`@cancellable` decorator.

Synapse makes use of Twisted's `Deferred.cancel()` feature to make
cancellation work. The `@cancellable` decorator does nothing by itself
and merely acts as a flag, signalling to developers and other code alike
that a method can be cancelled.
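For illustration, such a flag-only decorator can be as small as an attribute assignment that the request-handling machinery inspects later (a standalone sketch, not necessarily the exact Synapse implementation):

```python
from typing import Any, Awaitable, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Awaitable[Any]])


def cancellable(method: F) -> F:
    """Mark a servlet method as cancellable.

    This is a flag only: it changes no behaviour by itself. Request-handling
    code checks the attribute when a client disconnects and, if it is set,
    cancels the method's Deferred.
    """
    method.cancellable = True  # type: ignore[attr-defined]
    return method
```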
## Enabling cancellation for an endpoint
1. Check that the endpoint method, and any `async` functions in its call
   tree handle cancellation correctly. See
   [Handling cancellation correctly](#handling-cancellation-correctly)
   for a list of things to look out for.
2. Add the `@cancellable` decorator to the `on_GET/POST/PUT/DELETE`
   method. It's not recommended to make non-`GET` methods cancellable,
   since cancellation midway through some database updates is less
   likely to be handled correctly.

## Mechanics
There are two stages to cancellation: downward propagation of a
`cancel()` call, followed by upwards propagation of a `CancelledError`
out of a blocked `await`.
Both Twisted and asyncio have a cancellation mechanism.

|         | Method              | Exception                               | Exception inherits from |
|---------|---------------------|-----------------------------------------|-------------------------|
| Twisted | `Deferred.cancel()` | `twisted.internet.defer.CancelledError` | `Exception` (!)         |
| asyncio | `Task.cancel()`     | `asyncio.CancelledError`                | `BaseException`         |

### Deferred.cancel()
When Synapse starts handling a request, it runs the async method
responsible for handling it using `defer.ensureDeferred`, which returns
a `Deferred`. For example:

```python
def do_something() -> Deferred[None]:
    ...

@cancellable
async def on_GET() -> Tuple[int, JsonDict]:
    d = make_deferred_yieldable(do_something())
    await d
    return 200, {}

request = defer.ensureDeferred(on_GET())
```

When a client disconnects early, Synapse checks for the presence of the
`@cancellable` decorator on `on_GET`. Since `on_GET` is cancellable,
`Deferred.cancel()` is called on the `Deferred` from
`defer.ensureDeferred`, ie. `request`. Twisted knows which `Deferred`
`request` is waiting on and passes the `cancel()` call on to `d`.

The `Deferred` being waited on, `d`, may have its own handling for
`cancel()` and pass the call on to other `Deferred`s.

Eventually, a `Deferred` handles the `cancel()` call by resolving itself
with a `CancelledError`.

### CancelledError
The `CancelledError` gets raised out of the `await` and bubbles up, as
per normal Python exception handling.
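The following standalone sketch (not part of the documentation file itself) shows the whole chain end to end: cancelling the outermost `Deferred` propagates down to the `Deferred` being awaited, and a `CancelledError` is raised out of the `await`:

```python
from twisted.internet import task
from twisted.internet.defer import CancelledError, Deferred, ensureDeferred


async def handler() -> None:
    d: "Deferred[None]" = Deferred()  # never fires on its own
    try:
        await d
    except CancelledError:
        print("CancelledError raised out of the await")
        raise  # let it bubble up, as Synapse code should


def main(reactor):
    request = ensureDeferred(handler())
    # Simulate a client disconnecting: cancel the outermost Deferred.
    request.cancel()
    # Trap the expected CancelledError so the demo exits cleanly.
    return request.addErrback(lambda failure: failure.trap(CancelledError))


task.react(main)
```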
## Handling cancellation correctly
In general, when writing code that might be subject to cancellation, two
things must be considered:
* The effect of `CancelledError`s raised out of `await`s.
* The effect of `Deferred`s being `cancel()`ed.

Examples of code that handles cancellation incorrectly include:
* `try-except` blocks which swallow `CancelledError`s.
* Code that shares the same `Deferred`, which may be cancelled, between
  multiple requests.
* Code that starts some processing that's exempt from cancellation, but
  uses a logging context from cancellable code. The logging context
  will be finished upon cancellation, while the uncancelled processing
  is still using it.

Some common patterns are listed below in more detail.

### `async` function calls
Most functions in Synapse are relatively straightforward from a
cancellation standpoint: they don't do anything with `Deferred`s and
purely call and `await` other `async` functions.

An `async` function handles cancellation correctly if its own code
handles cancellation correctly and all the `async` functions it calls
handle cancellation correctly. For example:
```python
async def do_two_things() -> None:
    check_something()
    await do_something()
    await do_something_else()
```
`do_two_things` handles cancellation correctly if `do_something` and
`do_something_else` handle cancellation correctly.

That is, when checking whether a function handles cancellation
correctly, its implementation and all its `async` function calls need to
be checked, recursively.

As `check_something` is not `async`, it does not need to be checked.

### CancelledErrors
Because Twisted's `CancelledError`s are `Exception`s, it's easy to
accidentally catch and suppress them. Care must be taken to ensure that
`CancelledError`s are allowed to propagate upwards.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
try:
    await do_something()
except Exception:
    # `CancelledError` gets swallowed here.
    logger.info(...)
```
</td>
<td width="50%" valign="top">

**Good**:
```python
try:
    await do_something()
except CancelledError:
    raise
except Exception:
    logger.info(...)
```
</td>
</tr>
<tr>
<td width="50%" valign="top">

**OK**:
```python
try:
    check_something()
    # A `CancelledError` won't ever be raised here.
except Exception:
    logger.info(...)
```
</td>
<td width="50%" valign="top">

**Good**:
```python
try:
    await do_something()
except ValueError:
    logger.info(...)
```
</td>
</tr>
</table>

#### defer.gatherResults
`defer.gatherResults` produces a `Deferred` which:
* broadcasts `cancel()` calls to every `Deferred` being waited on.
* wraps the first exception it sees in a `FirstError`.

Together, this means that `CancelledError`s will be wrapped in
a `FirstError` unless unwrapped. Such `FirstError`s are liable to be
swallowed, so they must be unwrapped.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
async def do_something() -> None:
    await make_deferred_yieldable(
        defer.gatherResults([...], consumeErrors=True)
    )

try:
    await do_something()
except CancelledError:
    raise
except Exception:
    # `FirstError(CancelledError)` gets swallowed here.
    logger.info(...)
```

</td>
<td width="50%" valign="top">

**Good**:
```python
async def do_something() -> None:
    await make_deferred_yieldable(
        defer.gatherResults([...], consumeErrors=True)
    ).addErrback(unwrapFirstError)

try:
    await do_something()
except CancelledError:
    raise
except Exception:
    logger.info(...)
```
</td>
</tr>
</table>

### Creation of `Deferred`s
If a function creates a `Deferred`, the effect of cancelling it must be considered. `Deferred`s that get shared are likely to have unintended behaviour when cancelled.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
cache: Dict[str, Deferred[None]] = {}

def wait_for_room(room_id: str) -> Deferred[None]:
    deferred = cache.get(room_id)
    if deferred is None:
        deferred = Deferred()
        cache[room_id] = deferred
    # `deferred` can have multiple waiters.
    # All of them will observe a `CancelledError`
    # if any one of them is cancelled.
    return make_deferred_yieldable(deferred)

# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
<td width="50%" valign="top">

**Good**:
```python
cache: Dict[str, Deferred[None]] = {}

def wait_for_room(room_id: str) -> Deferred[None]:
    deferred = cache.get(room_id)
    if deferred is None:
        deferred = Deferred()
        cache[room_id] = deferred
    # `deferred` will never be cancelled now.
    # A `CancelledError` will still come out of
    # the `await`.
    # `delay_cancellation` may also be used.
    return make_deferred_yieldable(stop_cancellation(deferred))

# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
</tr>
<tr>
<td width="50%" valign="top">
</td>
<td width="50%" valign="top">

**Good**:
```python
cache: Dict[str, List[Deferred[None]]] = {}

def wait_for_room(room_id: str) -> Deferred[None]:
    if room_id not in cache:
        cache[room_id] = []
    # Each request gets its own `Deferred` to wait on.
    deferred = Deferred()
    cache[room_id].append(deferred)
    return make_deferred_yieldable(deferred)

# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
</table>

### Uncancelled processing
Some `async` functions may kick off some `async` processing which is
intentionally protected from cancellation, by `stop_cancellation` or
other means. If the `async` processing inherits the logcontext of the
request which initiated it, care must be taken to ensure that the
logcontext is not finished before the `async` processing completes.

<table width="100%">
<tr>
<td width="50%" valign="top">

**Bad**:
```python
cache: Optional[ObservableDeferred[None]] = None

async def do_something_else(
    to_resolve: Deferred[None]
) -> None:
    await ...
    logger.info("done!")
    to_resolve.callback(None)

async def do_something() -> None:
    if not cache:
        to_resolve = Deferred()
        cache = ObservableDeferred(to_resolve)
        # `do_something_else` will never be cancelled and
        # can outlive the `request-1` logging context.
        run_in_background(do_something_else, to_resolve)

    await make_deferred_yieldable(cache.observe())

with LoggingContext("request-1"):
    await do_something()
```
</td>
<td width="50%" valign="top">

**Good**:
```python
cache: Optional[ObservableDeferred[None]] = None

async def do_something_else(
    to_resolve: Deferred[None]
) -> None:
    await ...
    logger.info("done!")
    to_resolve.callback(None)

async def do_something() -> None:
    if not cache:
        to_resolve = Deferred()
        cache = ObservableDeferred(to_resolve)
        run_in_background(do_something_else, to_resolve)
        # We'll wait until `do_something_else` is
        # done before raising a `CancelledError`.
        await make_deferred_yieldable(
            delay_cancellation(cache.observe())
        )
    else:
        await make_deferred_yieldable(cache.observe())

with LoggingContext("request-1"):
    await do_something()
```
</td>
</tr>
<tr>
<td width="50%">

**OK**:
```python
cache: Optional[ObservableDeferred[None]] = None

async def do_something_else(
    to_resolve: Deferred[None]
) -> None:
    await ...
    logger.info("done!")
    to_resolve.callback(None)

async def do_something() -> None:
    if not cache:
        to_resolve = Deferred()
        cache = ObservableDeferred(to_resolve)
        # `do_something_else` will get its own independent
        # logging context. `request-1` will not count any
        # metrics from `do_something_else`.
        run_as_background_process(
            "do_something_else",
            do_something_else,
            to_resolve,
        )

    await make_deferred_yieldable(cache.observe())

with LoggingContext("request-1"):
    await do_something()
```
</td>
<td width="50%">
</td>
</tr>
</table>
#### (OpenID Connect documentation)

````diff
@@ -159,7 +159,7 @@ Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to
 oidc_providers:
   - idp_id: keycloak
     idp_name: "My KeyCloak server"
-    issuer: "https://127.0.0.1:8443/auth/realms/{realm_name}"
+    issuer: "https://127.0.0.1:8443/realms/{realm_name}"
     client_id: "synapse"
     client_secret: "copy secret generated from above"
     scopes: ["openid", "profile"]

@@ -293,7 +293,7 @@ can be used to retrieve information on the authenticated user. As the Synapse
 login mechanism needs an attribute to uniquely identify users, and that endpoint
 does not return a `sub` property, an alternative `subject_claim` has to be set.
 
-1. Create a new OAuth application: https://github.com/settings/applications/new.
+1. Create a new OAuth application: [https://github.com/settings/applications/new](https://github.com/settings/applications/new).
 2. Set the callback URL to `[synapse public baseurl]/_synapse/client/oidc/callback`.
 
 Synapse config:

@@ -322,10 +322,10 @@ oidc_providers:
 
 [Google][google-idp] is an OpenID certified authentication and authorisation provider.
 
-1. Set up a project in the Google API Console (see
-   https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup).
-2. Add an "OAuth Client ID" for a Web Application under "Credentials".
-3. Copy the Client ID and Client Secret, and add the following to your synapse config:
+1. Set up a project in the Google API Console (see
+   [documentation](https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup)).
+3. Add an "OAuth Client ID" for a Web Application under "Credentials".
+4. Copy the Client ID and Client Secret, and add the following to your synapse config:
    ```yaml
    oidc_providers:
      - idp_id: google

@@ -501,8 +501,8 @@ As well as the private key file, you will need:
 * Team ID: a 10-character ID associated with your developer account.
 * Key ID: the 10-character identifier for the key.
 
-https://help.apple.com/developer-account/?lang=en#/dev77c875b7e has more
-documentation on setting up SiWA.
+[Apple's developer documentation](https://help.apple.com/developer-account/?lang=en#/dev77c875b7e)
+has more information on setting up SiWA.
 
 The synapse config will look like this:

@@ -535,8 +535,8 @@ needed to add OAuth2 capabilities to your Django projects. It supports
 
 Configuration on Django's side:
 
-1. Add an application: https://example.com/admin/oauth2_provider/application/add/ and choose parameters like this:
-   * `Redirect uris`: https://synapse.example.com/_synapse/client/oidc/callback
+1. Add an application: `https://example.com/admin/oauth2_provider/application/add/` and choose parameters like this:
+   * `Redirect uris`: `https://synapse.example.com/_synapse/client/oidc/callback`
    * `Client type`: `Confidential`
   * `Authorization grant type`: `Authorization code`
   * `Algorithm`: `HMAC with SHA-2 256`
````
#### (Sample homeserver configuration)

````diff
@@ -289,7 +289,7 @@ presence:
 # federation: the server-server API (/_matrix/federation). Also implies
 #   'media', 'keys', 'openid'
 #
-# keys: the key discovery API (/_matrix/keys).
+# keys: the key discovery API (/_matrix/key).
 #
 # media: the media API (/_matrix/media).
 #

@@ -730,6 +730,12 @@ retention:
 # A cache 'factor' is a multiplier that can be applied to each of
 # Synapse's caches in order to increase or decrease the maximum
 # number of entries that can be stored.
 #
+# The configuration for cache factors (caches.global_factor and
+# caches.per_cache_factors) can be reloaded while the application is running,
+# by sending a SIGHUP signal to the Synapse process. Changes to other parts of
+# the caching config will NOT be applied after a SIGHUP is received; a restart
+# is necessary.
+
 # The number of events to cache in memory. Not affected by
 # caches.global_factor.

@@ -778,6 +784,24 @@ caches:
 #
 #cache_entry_ttl: 30m
 
+# This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
+# `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
+# a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
+# this option, and all three of the options must be specified for this feature to work.
+#cache_autotuning:
+  # This flag sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
+  # They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
+  # the flag below, or until the `min_cache_ttl` is hit.
+  #max_cache_memory_usage: 1024M
+
+  # This flag sets a rough target for the desired memory usage of the caches.
+  #target_cache_memory_usage: 758M
+
+  # `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
+  # caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
+  # from being emptied while Synapse is evicting due to memory.
+  #min_cache_ttl: 5m
+
 # Controls how long the results of a /sync request are cached for after
 # a successful response is returned. A higher duration can help clients with
 # intermittent connections, at the cost of higher memory usage.

@@ -2462,6 +2486,40 @@ push:
 #
 #encryption_enabled_by_default_for_room_type: invite
 
+# Override the default power levels for rooms created on this server, per
+# room creation preset.
+#
+# The appropriate dictionary for the room preset will be applied on top
+# of the existing power levels content.
+#
+# Useful if you know that your users need special permissions in rooms
+# that they create (e.g. to send particular types of state events without
+# needing an elevated power level). This takes the same shape as the
+# `power_level_content_override` parameter in the /createRoom API, but
+# is applied before that parameter.
+#
+# Valid keys are some or all of `private_chat`, `trusted_private_chat`
+# and `public_chat`. Inside each of those should be any of the
+# properties allowed in `power_level_content_override` in the
+# /createRoom API. If any property is missing, its default value will
+# continue to be used. If any property is present, it will overwrite
+# the existing default completely (so if the `events` property exists,
+# the default event power levels will be ignored).
+#
+#default_power_level_content_override:
+#  private_chat:
+#    "events":
+#      "com.example.myeventtype": 0
+#      "m.room.avatar": 50
+#      "m.room.canonical_alias": 50
+#      "m.room.encryption": 100
+#      "m.room.history_visibility": 100
+#      "m.room.name": 50
+#      "m.room.power_levels": 100
+#      "m.room.server_acl": 100
+#      "m.room.tombstone": 100
+#    "events_default": 1
+
 # Uncomment to allow non-server-admin users to create groups on this server
 #
````
#### (Upgrade notes)

A new section (96 lines) is added ahead of the existing "Upgrading to v1.59.0" notes:

# Upgrading to v1.60.0

## Adding a new unique index to `state_group_edges` could fail if your database is corrupted

This release of Synapse will add a unique index to the `state_group_edges` table, in order
to prevent accidentally introducing duplicate information (for example, because a database
backup was restored multiple times).

Duplicate rows being present in this table could cause drastic performance problems; see
[issue 11779](https://github.com/matrix-org/synapse/issues/11779) for more details.

If duplicate rows have already been introduced into your Synapse database,
adding the index could fail with either of these errors:

**On Postgres:**
```
synapse.storage.background_updates - 623 - INFO - background_updates-0 - Adding index state_group_edges_unique_idx to state_group_edges
synapse.storage.background_updates - 282 - ERROR - background_updates-0 - Error doing update
...
psycopg2.errors.UniqueViolation: could not create unique index "state_group_edges_unique_idx"
DETAIL:  Key (state_group, prev_state_group)=(2, 1) is duplicated.
```
(The numbers may be different.)

**On SQLite:**
```
synapse.storage.background_updates - 623 - INFO - background_updates-0 - Adding index state_group_edges_unique_idx to state_group_edges
synapse.storage.background_updates - 282 - ERROR - background_updates-0 - Error doing update
...
sqlite3.IntegrityError: UNIQUE constraint failed: state_group_edges.state_group, state_group_edges.prev_state_group
```

<details>
<summary><b>Expand this section for steps to resolve this problem</b></summary>

### On Postgres

Connect to your database with `psql`.

```sql
BEGIN;
DELETE FROM state_group_edges WHERE (ctid, state_group, prev_state_group) IN (
  SELECT row_id, state_group, prev_state_group
  FROM (
    SELECT
      ctid AS row_id,
      MIN(ctid) OVER (PARTITION BY state_group, prev_state_group) AS min_row_id,
      state_group,
      prev_state_group
    FROM state_group_edges
  ) AS t1
  WHERE row_id <> min_row_id
);
COMMIT;
```

### On SQLite

At the command line, use `sqlite3 path/to/your-homeserver-database.db`:

```sql
BEGIN;
DELETE FROM state_group_edges WHERE (rowid, state_group, prev_state_group) IN (
  SELECT row_id, state_group, prev_state_group
  FROM (
    SELECT
      rowid AS row_id,
      MIN(rowid) OVER (PARTITION BY state_group, prev_state_group) AS min_row_id,
      state_group,
      prev_state_group
    FROM state_group_edges
  )
  WHERE row_id <> min_row_id
);
COMMIT;
```

### For more details

[This comment on issue 11779](https://github.com/matrix-org/synapse/issues/11779#issuecomment-1131545970)
has queries that can be used to check a database for this problem in advance.
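As a rough illustration (a generic duplicate check, not necessarily one of the queries from that comment), duplicate rows can be detected with:

```sql
-- Lists each (state_group, prev_state_group) pair that appears more than once.
SELECT state_group, prev_state_group, COUNT(*) AS copies
FROM state_group_edges
GROUP BY state_group, prev_state_group
HAVING COUNT(*) > 1;
```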
</details>

(The existing "Upgrading to v1.59.0" notes, beginning with "Device name lookup over federation has been disabled by default", follow unchanged.)
#### (Configuration manual)

````diff
@@ -23,6 +23,14 @@ followed by a letter. Letters have the following meanings:
 For example, setting `redaction_retention_period: 5m` would remove redacted
 messages from the database after 5 minutes, rather than 5 months.
 
+In addition, configuration options referring to size use the following suffixes:
+
+* `M` = MiB, or 1,048,576 bytes
+* `K` = KiB, or 1024 bytes
+
+For example, setting `max_avatar_size: 10M` means that Synapse will not accept files larger than 10,485,760 bytes
+for a user avatar.
+
 ### YAML
 The configuration file is a [YAML](https://yaml.org/) file, which means that certain syntax rules
 apply if you want your config file to be read properly. A few helpful things to know:
````
````diff
@@ -467,13 +475,13 @@ Sub-options for each listener include:
 
 Valid resource names are:
 
-* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies 'media' and 'static'.
+* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies `media` and `static`.
 
 * `consent`: user consent forms (/_matrix/consent). See [here](../../consent_tracking.md) for more.
 
 * `federation`: the server-server API (/_matrix/federation). Also implies `media`, `keys`, `openid`
 
-* `keys`: the key discovery API (/_matrix/keys).
+* `keys`: the key discovery API (/_matrix/key).
 
 * `media`: the media API (/_matrix/media).

@@ -1119,7 +1127,22 @@ Caching can be configured through the following sub-options:
   with intermittent connections, at the cost of higher memory usage.
   By default, this is zero, which means that sync responses are not cached
   at all.
+
+* `cache_autotuning` and its sub-options `max_cache_memory_usage`, `target_cache_memory_usage`, and
+  `min_cache_ttl` work in conjunction with each other to maintain a balance between cache memory
+  usage and cache entry availability. You must be using [jemalloc](https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu)
+  to utilize this option, and all three of the options must be specified for this feature to work. This option
+  defaults to off; enable it by providing values for the sub-options listed below. Please note that the feature will not work
+  and may cause unstable behavior (such as excessive emptying of caches or exceptions) if all of the values are not provided.
+  Please see the [Config Conventions](#config-conventions) for information on how to specify memory size and cache expiry
+  durations.
+  * `max_cache_memory_usage` sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
+    They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
+    the setting below, or until the `min_cache_ttl` is hit. There is no default value for this option.
+  * `target_memory_usage` sets a rough target for the desired memory usage of the caches. There is no default value
+    for this option.
+  * `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
+    caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
+    from being emptied while Synapse is evicting due to memory. There is no default value for this option.
 
 Example configuration:
 ```yaml
````
````diff
@@ -1127,9 +1150,29 @@ caches:
   global_factor: 1.0
   per_cache_factors:
     get_users_who_share_room_with_user: 2.0
+  expire_caches: false
+  sync_response_cache_duration: 2m
+  cache_autotuning:
+    max_cache_memory_usage: 1024M
+    target_cache_memory_usage: 758M
+    min_cache_ttl: 5m
 ```
 
+### Reloading cache factors
+
+The cache factors (i.e. `caches.global_factor` and `caches.per_cache_factors`) may be reloaded at any time by sending a
+[`SIGHUP`](https://en.wikipedia.org/wiki/SIGHUP) signal to Synapse using e.g.
+
+```commandline
+kill -HUP [PID_OF_SYNAPSE_PROCESS]
+```
+
+If you are running multiple workers, you must individually update the worker
+config file and send this signal to each worker process.
+
+If you're using the [example systemd service](https://github.com/matrix-org/synapse/blob/develop/contrib/systemd/matrix-synapse.service)
+file in Synapse's `contrib` directory, you can send a `SIGHUP` signal by using
+`systemctl reload matrix-synapse`.
+
 ---
 ## Database ##
 Config options related to database settings.
````
````diff
@@ -1164,7 +1207,7 @@ For more information on using Synapse with Postgres,
 see [here](../../postgres.md).
 
 Example SQLite configuration:
-```
+```yaml
 database:
   name: sqlite3
   args:

@@ -1172,7 +1215,7 @@ database:
 ```
 
 Example Postgres configuration:
-```
+```yaml
 database:
   name: psycopg2
   txn_limit: 10000
````
````diff
@@ -1327,6 +1370,20 @@ This option sets ratelimiting how often invites can be sent in a room or to a
 specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10` and
 `per_user` defaults to `per_second: 0.003`, `burst_count: 5`.
 
+Client requests that invite user(s) when [creating a
+room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3createroom)
+will count against the `rc_invites.per_room` limit, whereas
+client requests to [invite a single user to a
+room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidinvite)
+will count against both the `rc_invites.per_user` and `rc_invites.per_room` limits.
+
+Federation requests to invite a user will count against the `rc_invites.per_user`
+limit only, as Synapse presumes ratelimiting by room will be done by the sending server.
+
+The `rc_invites.per_user` limit applies to the *receiver* of the invite, rather than the
+sender, meaning that a `rc_invites.per_user.burst_count` of 5 mandates that a single user
+cannot *receive* more than a burst of 5 invites at a time.
+
 Example configuration:
 ```yaml
 rc_invites:
````
````diff
@@ -1635,10 +1692,10 @@ Defaults to "en".
 Example configuration:
 ```yaml
 url_preview_accept_language:
-  - en-UK
-  - en-US;q=0.9
-  - fr;q=0.8
-  - *;q=0.7
+  - 'en-UK'
+  - 'en-US;q=0.9'
+  - 'fr;q=0.8'
+  - '*;q=0.7'
 ```
 ----
 Config option: `oembed`
````
````diff
@@ -3298,6 +3355,32 @@ room_list_publication_rules:
     room_id: "*"
     action: allow
 ```
 
+---
+Config option: `default_power_level_content_override`
+
+The `default_power_level_content_override` option controls the default power
+levels for rooms.
+
+Useful if you know that your users need special permissions in rooms
+that they create (e.g. to send particular types of state events without
+needing an elevated power level). This takes the same shape as the
+`power_level_content_override` parameter in the /createRoom API, but
+is applied before that parameter.
+
+Note that each key provided inside a preset (for example `events` in the example
+below) will overwrite all existing defaults inside that key. So in the example
+below, newly-created private_chat rooms will have no rules for any event types
+except `com.example.foo`.
+
+Example configuration:
+```yaml
+default_power_level_content_override:
+  private_chat: { "events": { "com.example.foo": 0 } }
+  trusted_private_chat: null
+  public_chat: null
+```
+
 ---
 ## Opentracing ##
 Configuration options related to Opentracing support.
````
@@ -3398,7 +3481,7 @@ stream_writers:
|
||||
typing: worker1
|
||||
```
|
||||
---
|
||||
Config option: `run_background_task_on`
|
||||
Config option: `run_background_tasks_on`
|
||||
|
||||
The worker that is used to run background tasks (e.g. cleaning up expired
|
||||
data). If not provided this defaults to the main process.
|
||||
|
||||
@@ -7,10 +7,10 @@ team.
|
||||
## Installing and using Synapse
|
||||
|
||||
This documentation covers topics for **installation**, **configuration** and
|
||||
**maintainence** of your Synapse process:
|
||||
**maintenance** of your Synapse process:
|
||||
|
||||
* Learn how to [install](setup/installation.md) and
|
||||
[configure](usage/configuration/index.html) your own instance, perhaps with [Single
|
||||
[configure](usage/configuration/config_documentation.md) your own instance, perhaps with [Single
|
||||
Sign-On](usage/configuration/user_authentication/index.html).
|
||||
|
||||
* See how to [upgrade](upgrade.md) between Synapse versions.
|
||||
@@ -65,7 +65,7 @@ following documentation:
|
||||
|
||||
Want to help keep Synapse going but don't know how to code? Synapse is a
|
||||
[Matrix.org Foundation](https://matrix.org) project. Consider becoming a
|
||||
supportor on [Liberapay](https://liberapay.com/matrixdotorg),
|
||||
supporter on [Liberapay](https://liberapay.com/matrixdotorg),
|
||||
[Patreon](https://patreon.com/matrixdotorg) or through
|
||||
[PayPal](https://paypal.me/matrixdotorg) via a one-time donation.
|
||||
|
||||
|
||||
@@ -251,6 +251,8 @@ information.
|
||||
# Presence requests
|
||||
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
|
||||
|
||||
# User directory search requests
|
||||
^/_matrix/client/(r0|v3|unstable)/user_directory/search$
|
||||
|
||||
Additionally, the following REST endpoints can be handled for GET requests:
|
||||
|
||||
@@ -448,6 +450,14 @@ update_user_directory_from_worker: worker_name
|
||||
This work cannot be load-balanced; please ensure the main process is restarted
|
||||
after setting this option in the shared configuration!
|
||||
|
||||
User directory updates allow REST endpoints matching the following regular
|
||||
expressions to work:
|
||||
|
||||
^/_matrix/client/(r0|v3|unstable)/user_directory/search$
|
||||
|
||||
The above endpoints can be routed to any worker, though you may choose to route
|
||||
it to the chosen user directory worker.
|
||||
|
||||
This style of configuration supersedes the legacy `synapse.app.user_dir`
|
||||
worker application type.
|
||||
|
||||
|
||||
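A quick way to sanity-check which requests would hit the user directory worker is to match request paths against the regular expression quoted above. A small sketch (the regex is the one from the documentation; the function name is made up):

```python
import re

# Endpoint pattern quoted in the worker documentation above.
USER_DIRECTORY_PATTERNS = [
    re.compile(r"^/_matrix/client/(r0|v3|unstable)/user_directory/search$"),
]

def is_user_directory_request(path: str) -> bool:
    """True if the request path may be routed to the user directory worker."""
    return any(p.match(path) for p in USER_DIRECTORY_PATTERNS)

assert is_user_directory_request("/_matrix/client/v3/user_directory/search")
assert not is_user_directory_request("/_matrix/client/v3/presence/@a:b/status")
```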
mypy.ini

@@ -10,6 +10,7 @@ warn_unreachable = True
warn_unused_ignores = True
local_partial_types = True
no_implicit_optional = True
disallow_untyped_defs = True

files =
  docker/,
@@ -27,9 +28,6 @@ exclude = (?x)
  |synapse/storage/databases/__init__.py
  |synapse/storage/databases/main/cache.py
  |synapse/storage/databases/main/devices.py
  |synapse/storage/databases/main/event_federation.py
  |synapse/storage/databases/main/push_rule.py
  |synapse/storage/databases/main/roommember.py
  |synapse/storage/schema/

  |tests/api/test_auth.py
@@ -89,131 +87,39 @@ exclude = (?x)
  |tests/utils.py
  )$

[mypy-synapse._scripts.*]
disallow_untyped_defs = True

[mypy-synapse.api.*]
disallow_untyped_defs = True

[mypy-synapse.app.*]
disallow_untyped_defs = True

[mypy-synapse.appservice.*]
disallow_untyped_defs = True

[mypy-synapse.config.*]
disallow_untyped_defs = True

[mypy-synapse.crypto.*]
disallow_untyped_defs = True

[mypy-synapse.event_auth]
disallow_untyped_defs = True

[mypy-synapse.events.*]
disallow_untyped_defs = True

[mypy-synapse.federation.*]
disallow_untyped_defs = True

[mypy-synapse.federation.transport.client]
disallow_untyped_defs = False

[mypy-synapse.handlers.*]
disallow_untyped_defs = True
[mypy-synapse.http.client]
disallow_untyped_defs = False

[mypy-synapse.http.server]
disallow_untyped_defs = True
[mypy-synapse.http.matrixfederationclient]
disallow_untyped_defs = False

[mypy-synapse.logging.context]
disallow_untyped_defs = True
[mypy-synapse.logging.opentracing]
disallow_untyped_defs = False

[mypy-synapse.metrics.*]
disallow_untyped_defs = True
[mypy-synapse.logging.scopecontextmanager]
disallow_untyped_defs = False

[mypy-synapse.metrics._reactor_metrics]
disallow_untyped_defs = False
# This module imports select.epoll. That exists on Linux, but doesn't on macOS.
# See https://github.com/matrix-org/synapse/pull/11771.
warn_unused_ignores = False

[mypy-synapse.module_api.*]
disallow_untyped_defs = True

[mypy-synapse.notifier]
disallow_untyped_defs = True

[mypy-synapse.push.*]
disallow_untyped_defs = True

[mypy-synapse.replication.*]
disallow_untyped_defs = True

[mypy-synapse.rest.*]
disallow_untyped_defs = True

[mypy-synapse.server_notices.*]
disallow_untyped_defs = True

[mypy-synapse.state.*]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.account_data]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.client_ips]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.directory]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.e2e_room_keys]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.end_to_end_keys]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.event_push_actions]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.events_bg_updates]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.events_worker]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.room]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.room_batch]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.profile]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.stats]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.state_deltas]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.transactions]
disallow_untyped_defs = True

[mypy-synapse.storage.databases.main.user_erasure_store]
disallow_untyped_defs = True

[mypy-synapse.storage.util.*]
disallow_untyped_defs = True

[mypy-synapse.streams.*]
disallow_untyped_defs = True

[mypy-synapse.util.*]
disallow_untyped_defs = True

[mypy-synapse.util.caches.treecache]
disallow_untyped_defs = False

[mypy-synapse.server]
disallow_untyped_defs = False

[mypy-synapse.storage.database]
disallow_untyped_defs = False

[mypy-tests.*]
disallow_untyped_defs = False

[mypy-tests.handlers.test_user_directory]
disallow_untyped_defs = True


@@ -54,7 +54,7 @@ skip_gitignore = true

[tool.poetry]
name = "matrix-synapse"
version = "1.59.0rc1"
version = "1.59.1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"

@@ -21,7 +21,7 @@ from typing import Callable, Optional, Type
from mypy.nodes import ARG_NAMED_OPT
from mypy.plugin import MethodSigContext, Plugin
from mypy.typeops import bind_self
from mypy.types import CallableType, NoneType
from mypy.types import CallableType, NoneType, UnionType


class SynapsePlugin(Plugin):
@@ -72,13 +72,20 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:

    # Third, we add an optional "on_invalidate" argument.
    #
    # This is a callable which accepts no input and returns nothing.
    calltyp = CallableType(
        arg_types=[],
        arg_kinds=[],
        arg_names=[],
        ret_type=NoneType(),
        fallback=ctx.api.named_generic_type("builtins.function", []),
    # This is either
    # - a callable which accepts no input and returns nothing, or
    # - None.
    calltyp = UnionType(
        [
            NoneType(),
            CallableType(
                arg_types=[],
                arg_kinds=[],
                arg_names=[],
                ret_type=NoneType(),
                fallback=ctx.api.named_generic_type("builtins.function", []),
            ),
        ]
    )

    arg_types.append(calltyp)
@@ -95,7 +102,7 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:


def plugin(version: str) -> Type[SynapsePlugin]:
    # This is the entry point of the plugin, and let's us deal with the fact
    # This is the entry point of the plugin, and lets us deal with the fact
    # that the mypy plugin interface is *not* stable by looking at the version
    # string.
    #

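The plugin change above widens the synthesized `on_invalidate` parameter of `@cached`-decorated methods from a bare callable to a union of callable and `None`. A hedged illustration of what now type-checks, using a hypothetical stand-in signature (not Synapse code):

```python
from typing import Callable, Optional

# Hypothetical stand-in for a @cached-decorated method's effective signature
# after the plugin change: on_invalidate may now be an explicit None.
async def get_user(
    user_id: str, on_invalidate: Optional[Callable[[], None]] = None
) -> dict:
    return {"user_id": user_id}

# Previously only a callable was accepted; under the new UnionType, passing
# None explicitly type-checks as well:
#     await get_user("@alice:example.com", on_invalidate=None)
```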
@@ -46,14 +46,14 @@ def main() -> None:
            "Path to server config file. "
            "Used to read in bcrypt_rounds and password_pepper."
        ),
        required=True,
    )

    args = parser.parse_args()
    if "config" in args and args.config:
        config = yaml.safe_load(args.config)
        bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
        password_config = config.get("password_config", None) or {}
        password_pepper = password_config.get("pepper", password_pepper)
    config = yaml.safe_load(args.config)
    bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
    password_config = config.get("password_config", None) or {}
    password_pepper = password_config.get("pepper", password_pepper)
    password = args.password

    if not password:

@@ -61,7 +61,6 @@ class Auth:
        self.hs = hs
        self.clock = hs.get_clock()
        self.store = hs.get_datastores().main
        self.state = hs.get_state_handler()
        self._account_validity_handler = hs.get_account_validity_handler()

        self.token_cache: LruCache[str, Tuple[str, bool]] = LruCache(
@@ -81,7 +80,7 @@ class Auth:
        user_id: str,
        current_state: Optional[StateMap[EventBase]] = None,
        allow_departed_users: bool = False,
    ) -> EventBase:
    ) -> Tuple[str, Optional[str]]:
        """Check if the user is in the room, or was at some point.
        Args:
            room_id: The room to check.
@@ -99,29 +98,28 @@ class Auth:
        Raises:
            AuthError if the user is/was not in the room.
        Returns:
            Membership event for the user if the user was in the
            room. This will be the join event if they are currently joined to
            the room. This will be the leave event if they have left the room.
            The current membership of the user in the room and the
            membership event ID of the user.
        """
        if current_state:
            member = current_state.get((EventTypes.Member, user_id), None)
        else:
            member = await self.state.get_current_state(
                room_id=room_id, event_type=EventTypes.Member, state_key=user_id
            )

        if member:
            membership = member.membership
        (
            membership,
            member_event_id,
        ) = await self.store.get_local_current_membership_for_user_in_room(
            user_id=user_id,
            room_id=room_id,
        )

        if membership:
            if membership == Membership.JOIN:
                return member
                return membership, member_event_id

            # XXX this looks totally bogus. Why do we not allow users who have been banned,
            # or those who were members previously and have been re-invited?
            if allow_departed_users and membership == Membership.LEAVE:
                forgot = await self.store.did_forget(user_id, room_id)
                if not forgot:
                    return member
                    return membership, member_event_id

        raise AuthError(403, "User %s not in room %s" % (user_id, room_id))

@@ -602,7 +600,8 @@ class Auth:
        # We currently require the user is a "moderator" in the room. We do this
        # by checking if they would (theoretically) be able to change the
        # m.room.canonical_alias events
        power_level_event = await self.state.get_current_state(

        power_level_event = await self.store.get_current_state_event(
            room_id, EventTypes.PowerLevels, ""
        )

@@ -693,12 +692,11 @@ class Auth:
            # * The user is a non-guest user, and was ever in the room
            # * The user is a guest user, and has joined the room
            # else it will throw.
            member_event = await self.check_user_in_room(
            return await self.check_user_in_room(
                room_id, user_id, allow_departed_users=allow_departed_users
            )
            return member_event.membership, member_event.event_id
        except AuthError:
            visibility = await self.state.get_current_state(
            visibility = await self.store.get_current_state_event(
                room_id, EventTypes.RoomHistoryVisibility, ""
            )
            if (

@@ -65,6 +65,8 @@ class JoinRules:
    PRIVATE: Final = "private"
    # As defined for MSC3083.
    RESTRICTED: Final = "restricted"
    # As defined for MSC3787.
    KNOCK_RESTRICTED: Final = "knock_restricted"


class RestrictedJoinRuleTypes:

@@ -17,6 +17,7 @@

import logging
import typing
from enum import Enum
from http import HTTPStatus
from typing import Any, Dict, List, Optional, Union

@@ -30,7 +31,11 @@ if typing.TYPE_CHECKING:
logger = logging.getLogger(__name__)


class Codes:
class Codes(str, Enum):
    """
    All known error codes, as an enum of strings.
    """

    UNRECOGNIZED = "M_UNRECOGNIZED"
    UNAUTHORIZED = "M_UNAUTHORIZED"
    FORBIDDEN = "M_FORBIDDEN"
@@ -265,7 +270,9 @@ class UnrecognizedRequestError(SynapseError):
    """An error indicating we don't understand the request you're trying to make"""

    def __init__(
        self, msg: str = "Unrecognized request", errcode: str = Codes.UNRECOGNIZED
        self,
        msg: str = "Unrecognized request",
        errcode: str = Codes.UNRECOGNIZED,
    ):
        super().__init__(400, msg, errcode)


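Because `Codes` now subclasses both `str` and `Enum`, each member still compares and serializes like the plain string constant it used to be, while gaining enum behaviour. A small standalone demo with a trimmed-down copy of the enum (the real class has many more members):

```python
from enum import Enum

class Codes(str, Enum):
    # Trimmed-down copy for illustration.
    FORBIDDEN = "M_FORBIDDEN"
    UNRECOGNIZED = "M_UNRECOGNIZED"

# str subclassing keeps wire compatibility: members equal their raw values...
assert Codes.FORBIDDEN == "M_FORBIDDEN"
# ...and .value can be dropped straight into JSON error bodies.
assert Codes.FORBIDDEN.value == "M_FORBIDDEN"
# Enum membership checks now work too.
assert "M_FORBIDDEN" in {c.value for c in Codes}
```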
@@ -81,6 +81,9 @@ class RoomVersion:
    msc2716_historical: bool
    # MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
    msc2716_redactions: bool
    # MSC3787: Adds support for a `knock_restricted` join rule, mixing concepts of
    # knocks and restricted join rules into the same join condition.
    msc3787_knock_restricted_join_rule: bool


class RoomVersions:
@@ -99,6 +102,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V2 = RoomVersion(
        "2",
@@ -115,6 +119,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V3 = RoomVersion(
        "3",
@@ -131,6 +136,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V4 = RoomVersion(
        "4",
@@ -147,6 +153,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V5 = RoomVersion(
        "5",
@@ -163,6 +170,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V6 = RoomVersion(
        "6",
@@ -179,6 +187,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    MSC2176 = RoomVersion(
        "org.matrix.msc2176",
@@ -195,6 +204,7 @@ class RoomVersions:
        msc2403_knocking=False,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V7 = RoomVersion(
        "7",
@@ -211,6 +221,7 @@ class RoomVersions:
        msc2403_knocking=True,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V8 = RoomVersion(
        "8",
@@ -227,6 +238,7 @@ class RoomVersions:
        msc2403_knocking=True,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    V9 = RoomVersion(
        "9",
@@ -243,6 +255,7 @@ class RoomVersions:
        msc2403_knocking=True,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=False,
    )
    MSC2716v3 = RoomVersion(
        "org.matrix.msc2716v3",
@@ -259,6 +272,24 @@ class RoomVersions:
        msc2403_knocking=True,
        msc2716_historical=True,
        msc2716_redactions=True,
        msc3787_knock_restricted_join_rule=False,
    )
    MSC3787 = RoomVersion(
        "org.matrix.msc3787",
        RoomDisposition.UNSTABLE,
        EventFormatVersions.V3,
        StateResolutionVersions.V2,
        enforce_key_validity=True,
        special_case_aliases_auth=False,
        strict_canonicaljson=True,
        limit_notifications_power_levels=True,
        msc2176_redaction_rules=False,
        msc3083_join_rules=True,
        msc3375_redaction_rules=True,
        msc2403_knocking=True,
        msc2716_historical=False,
        msc2716_redactions=False,
        msc3787_knock_restricted_join_rule=True,
    )


@@ -276,6 +307,7 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
        RoomVersions.V8,
        RoomVersions.V9,
        RoomVersions.MSC2716v3,
        RoomVersions.MSC3787,
    )
}


@@ -49,9 +49,12 @@ from twisted.logger import LoggingFile, LogLevel
from twisted.protocols.tls import TLSMemoryBIOFactory
from twisted.python.threadpool import ThreadPool

import synapse.util.caches
from synapse.api.constants import MAX_PDU_SIZE
from synapse.app import check_bind_error
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config import ConfigError
from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ManholeConfig
from synapse.crypto import context_factory
@@ -432,6 +435,10 @@ async def start(hs: "HomeServer") -> None:
        signal.signal(signal.SIGHUP, run_sighup)

        register_sighup(refresh_certificate, hs)
        register_sighup(reload_cache_config, hs.config)

        # Apply the cache config.
        hs.config.caches.resize_all_caches()

    # Load the certificate from disk.
    refresh_certificate(hs)
@@ -486,6 +493,43 @@ async def start(hs: "HomeServer") -> None:
        atexit.register(gc.freeze)


def reload_cache_config(config: HomeServerConfig) -> None:
    """Reload cache config from disk and immediately apply it, resizing caches accordingly.

    If the config is invalid, a `ConfigError` is logged and no changes are made.

    Otherwise, this:
        - replaces the `caches` section on the given `config` object,
        - resizes all caches according to the new cache factors.

    Note that the following cache config keys are read, but not applied:
        - event_cache_size: used to set a max_size and _original_max_size on
          EventsWorkerStore._get_event_cache when it is created. We'd have to update
          the _original_max_size (and maybe
        - sync_response_cache_duration: would have to update the timeout_sec attribute on
          HomeServer -> SyncHandler -> ResponseCache.
        - track_memory_usage. This affects synapse.util.caches.TRACK_MEMORY_USAGE which
          influences Synapse's self-reported metrics.

    Also, the HTTPConnectionPool in SimpleHTTPClient sets its maxPersistentPerHost
    parameter based on the global_factor. This won't be applied on a config reload.
    """
    try:
        previous_cache_config = config.reload_config_section("caches")
    except ConfigError as e:
        logger.warning("Failed to reload cache config")
        for f in format_config_error(e):
            logger.warning(f)
    else:
        logger.debug(
            "New cache config. Was:\n %s\nNow:\n %s",
            previous_cache_config.__dict__,
            config.caches.__dict__,
        )
        synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
        config.caches.resize_all_caches()


def setup_sentry(hs: "HomeServer") -> None:
    """Enable sentry integration, if enabled in configuration"""


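With `reload_cache_config` registered as a SIGHUP handler, cache factors can be changed without a restart: edit the config file, then signal the running process. A hedged sketch of the operator side (the pid-file path is an assumption; match it to your deployment):

```python
import os
import signal

# Assumed pid-file location for a daemonized Synapse.
PID_FILE = "/var/run/matrix-synapse.pid"

def reload_synapse_caches() -> None:
    """Ask a running Synapse to re-read caches.global_factor and
    caches.per_cache_factors from disk (other cache keys need a restart)."""
    with open(PID_FILE) as f:
        pid = int(f.read().strip())
    os.kill(pid, signal.SIGHUP)
```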
@@ -16,7 +16,7 @@
import logging
import os
import sys
from typing import Dict, Iterable, Iterator, List
from typing import Dict, Iterable, List

from matrix_common.versionstring import get_distribution_version_string

@@ -45,7 +45,7 @@ from synapse.app._base import (
    redirect_stdio_to_logs,
    register_start,
)
from synapse.config._base import ConfigError
from synapse.config._base import ConfigError, format_config_error
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig
@@ -399,38 +399,6 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
    return hs


def format_config_error(e: ConfigError) -> Iterator[str]:
    """
    Formats a config error neatly

    The idea is to format the immediate error, plus the "causes" of those errors,
    hopefully in a way that makes sense to the user. For example:

        Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
          Failed to parse config for module 'JinjaOidcMappingProvider':
            invalid jinja template:
              unexpected end of template, expected 'end of print statement'.

    Args:
        e: the error to be formatted

    Returns: An iterator which yields string fragments to be formatted
    """
    yield "Error in configuration"

    if e.path:
        yield " at '%s'" % (".".join(e.path),)

    yield ":\n  %s" % (e.msg,)

    parent_e = e.__cause__
    indent = 1
    while parent_e:
        indent += 1
        yield ":\n%s%s" % ("  " * indent, str(parent_e))
        parent_e = parent_e.__cause__


def run(hs: HomeServer) -> None:
    _base.start_reactor(
        "synapse-homeserver",

@@ -16,14 +16,18 @@

import argparse
import errno
import logging
import os
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import (
    Any,
    ClassVar,
    Collection,
    Dict,
    Iterable,
    Iterator,
    List,
    MutableMapping,
    Optional,
@@ -40,6 +44,8 @@ import yaml

from synapse.util.templates import _create_mxc_to_http_filter, _format_ts_filter

logger = logging.getLogger(__name__)


class ConfigError(Exception):
    """Represents a problem parsing the configuration
@@ -55,6 +61,38 @@ class ConfigError(Exception):
        self.path = path


def format_config_error(e: ConfigError) -> Iterator[str]:
    """
    Formats a config error neatly

    The idea is to format the immediate error, plus the "causes" of those errors,
    hopefully in a way that makes sense to the user. For example:

        Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
          Failed to parse config for module 'JinjaOidcMappingProvider':
            invalid jinja template:
              unexpected end of template, expected 'end of print statement'.

    Args:
        e: the error to be formatted

    Returns: An iterator which yields string fragments to be formatted
    """
    yield "Error in configuration"

    if e.path:
        yield " at '%s'" % (".".join(e.path),)

    yield ":\n  %s" % (e.msg,)

    parent_e = e.__cause__
    indent = 1
    while parent_e:
        indent += 1
        yield ":\n%s%s" % ("  " * indent, str(parent_e))
        parent_e = parent_e.__cause__


# We split these messages out to allow packages to override with package
# specific instructions.
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS = """\
@@ -119,7 +157,7 @@ class Config:
    defined in subclasses.
    """

    section: str
    section: ClassVar[str]

    def __init__(self, root_config: "RootConfig" = None):
        self.root = root_config
@@ -309,9 +347,12 @@ class RootConfig:
    class, lower-cased and with "Config" removed.
    """

    config_classes = []
    config_classes: List[Type[Config]] = []

    def __init__(self, config_files: Collection[str] = ()):
        # Capture absolute paths here, so we can reload config after we daemonize.
        self.config_files = [os.path.abspath(path) for path in config_files]

    def __init__(self):
        for config_class in self.config_classes:
            if config_class.section is None:
                raise ValueError("%r requires a section name" % (config_class,))
@@ -512,12 +553,10 @@ class RootConfig:
        object from parser.parse_args(..)`
        """

        obj = cls()

        config_args = parser.parse_args(argv)

        config_files = find_config_files(search_paths=config_args.config_path)

        obj = cls(config_files)
        if not config_files:
            parser.error("Must supply a config file.")

@@ -627,7 +666,7 @@ class RootConfig:

        generate_missing_configs = config_args.generate_missing_configs

        obj = cls()
        obj = cls(config_files)

        if config_args.generate_config:
            if config_args.report_stats is None:
@@ -727,6 +766,34 @@ class RootConfig:
    ) -> None:
        self.invoke_all("generate_files", config_dict, config_dir_path)

    def reload_config_section(self, section_name: str) -> Config:
        """Reconstruct the given config section, leaving all others unchanged.

        This works in three steps:

            1. Create a new instance of the relevant `Config` subclass.
            2. Call `read_config` on that instance to parse the new config.
            3. Replace the existing config instance with the new one.

        :raises ValueError: if the given `section` does not exist.
        :raises ConfigError: for any other problems reloading config.

        :returns: the previous config object, which no longer has a reference to this
            RootConfig.
        """
        existing_config: Optional[Config] = getattr(self, section_name, None)
        if existing_config is None:
            raise ValueError(f"Unknown config section '{section_name}'")
        logger.info("Reloading config section '%s'", section_name)

        new_config_data = read_config_files(self.config_files)
        new_config = type(existing_config)(self)
        new_config.read_config(new_config_data)
        setattr(self, section_name, new_config)

        existing_config.root = None
        return existing_config


def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
    """Read the config files into a dict

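Taken together with `format_config_error`, the reload path can be exercised like this. A hedged usage sketch, where `root_config` stands in for an already-loaded `HomeServerConfig`:

```python
from synapse.config._base import ConfigError, format_config_error

# Assumed: `root_config` is a loaded HomeServerConfig instance.
try:
    old_caches = root_config.reload_config_section("caches")
except ValueError:
    print("No such config section")
except ConfigError as e:
    # format_config_error yields fragments; join them into one message.
    print("".join(format_config_error(e)))
else:
    # The old section object is detached (its root set to None) and returned,
    # which lets callers diff old vs. new settings.
    print(old_caches.__dict__, root_config.caches.__dict__)
```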
@@ -1,15 +1,19 @@
import argparse
from typing import (
    Any,
    Collection,
    Dict,
    Iterable,
    Iterator,
    List,
    Literal,
    MutableMapping,
    Optional,
    Tuple,
    Type,
    TypeVar,
    Union,
    overload,
)

import jinja2
@@ -64,6 +68,8 @@ class ConfigError(Exception):
        self.msg = msg
        self.path = path

def format_config_error(e: ConfigError) -> Iterator[str]: ...

MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
MISSING_REPORT_STATS_SPIEL: str
MISSING_SERVER_NAME: str
@@ -117,7 +123,8 @@ class RootConfig:
    background_updates: background_updates.BackgroundUpdateConfig

    config_classes: List[Type["Config"]] = ...
    def __init__(self) -> None: ...
    config_files: List[str]
    def __init__(self, config_files: Collection[str] = ...) -> None: ...
    def invoke_all(
        self, func_name: str, *args: Any, **kwargs: Any
    ) -> MutableMapping[str, Any]: ...
@@ -157,6 +164,12 @@ class RootConfig:
    def generate_missing_files(
        self, config_dict: dict, config_dir_path: str
    ) -> None: ...
    @overload
    def reload_config_section(
        self, section_name: Literal["caches"]
    ) -> cache.CacheConfig: ...
    @overload
    def reload_config_section(self, section_name: str) -> Config: ...

class Config:
    root: RootConfig


@@ -69,11 +69,11 @@ def _canonicalise_cache_name(cache_name: str) -> str:
def add_resizable_cache(
    cache_name: str, cache_resize_callback: Callable[[float], None]
) -> None:
    """Register a cache that's size can dynamically change
    """Register a cache whose size can dynamically change

    Args:
        cache_name: A reference to the cache
        cache_resize_callback: A callback function that will be ran whenever
        cache_resize_callback: A callback function that will run whenever
            the cache needs to be resized
    """
    # Some caches have '*' in them which we strip out.
@@ -96,6 +96,13 @@ class CacheConfig(Config):
    section = "caches"
    _environ = os.environ

    event_cache_size: int
    cache_factors: Dict[str, float]
    global_factor: float
    track_memory_usage: bool
    expiry_time_msec: Optional[int]
    sync_response_cache_duration: int

    @staticmethod
    def reset() -> None:
        """Resets the caches to their defaults. Used for tests."""
@@ -115,6 +122,12 @@ class CacheConfig(Config):
        # A cache 'factor' is a multiplier that can be applied to each of
        # Synapse's caches in order to increase or decrease the maximum
        # number of entries that can be stored.
        #
        # The configuration for cache factors (caches.global_factor and
        # caches.per_cache_factors) can be reloaded while the application is running,
        # by sending a SIGHUP signal to the Synapse process. Changes to other parts of
        # the caching config will NOT be applied after a SIGHUP is received; a restart
        # is necessary.

        # The number of events to cache in memory. Not affected by
        # caches.global_factor.
@@ -163,6 +176,24 @@ class CacheConfig(Config):
        #
        #cache_entry_ttl: 30m

        # This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
        # `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
        # a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
        # this option, and all three of the options must be specified for this feature to work.
        #cache_autotuning:
          # This flag sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
          # They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
          # the flag below, or until the `min_cache_ttl` is hit.
          #max_cache_memory_usage: 1024M

          # This flag sets a rough target for the desired memory usage of the caches.
          #target_cache_memory_usage: 758M

          # `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
          # caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
          # from being emptied while Synapse is evicting due to memory.
          #min_cache_ttl: 5m

        # Controls how long the results of a /sync request are cached for after
        # a successful response is returned. A higher duration can help clients with
        # intermittent connections, at the cost of higher memory usage.
@@ -174,21 +205,21 @@ class CacheConfig(Config):
        """

    def read_config(self, config: JsonDict, **kwargs: Any) -> None:
        """Populate this config object with values from `config`.

        This method does NOT resize existing or future caches: use `resize_all_caches`.
        We use two separate methods so that we can reject bad config before applying it.
        """
        self.event_cache_size = self.parse_size(
            config.get("event_cache_size", _DEFAULT_EVENT_CACHE_SIZE)
        )
        self.cache_factors: Dict[str, float] = {}
        self.cache_factors = {}

        cache_config = config.get("caches") or {}
        self.global_factor = cache_config.get(
            "global_factor", properties.default_factor_size
        )
        self.global_factor = cache_config.get("global_factor", _DEFAULT_FACTOR_SIZE)
        if not isinstance(self.global_factor, (int, float)):
            raise ConfigError("caches.global_factor must be a number.")

        # Set the global one so that it's reflected in new caches
        properties.default_factor_size = self.global_factor

        # Load cache factors from the config
        individual_factors = cache_config.get("per_cache_factors") or {}
        if not isinstance(individual_factors, dict):
@@ -230,7 +261,7 @@ class CacheConfig(Config):
        cache_entry_ttl = cache_config.get("cache_entry_ttl", "30m")

        if expire_caches:
            self.expiry_time_msec: Optional[int] = self.parse_duration(cache_entry_ttl)
            self.expiry_time_msec = self.parse_duration(cache_entry_ttl)
        else:
            self.expiry_time_msec = None

@@ -250,23 +281,38 @@ class CacheConfig(Config):
            )
            self.expiry_time_msec = self.parse_duration(expiry_time)

        self.cache_autotuning = cache_config.get("cache_autotuning")
        if self.cache_autotuning:
            max_memory_usage = self.cache_autotuning.get("max_cache_memory_usage")
            self.cache_autotuning["max_cache_memory_usage"] = self.parse_size(
                max_memory_usage
            )

            target_mem_size = self.cache_autotuning.get("target_cache_memory_usage")
            self.cache_autotuning["target_cache_memory_usage"] = self.parse_size(
                target_mem_size
            )

            min_cache_ttl = self.cache_autotuning.get("min_cache_ttl")
            self.cache_autotuning["min_cache_ttl"] = self.parse_duration(min_cache_ttl)

        self.sync_response_cache_duration = self.parse_duration(
            cache_config.get("sync_response_cache_duration", 0)
        )

        # Resize all caches (if necessary) with the new factors we've loaded
        self.resize_all_caches()

        # Store this function so that it can be called from other classes without
        # needing an instance of Config
        properties.resize_all_caches_func = self.resize_all_caches

    def resize_all_caches(self) -> None:
        """Ensure all cache sizes are up to date
        """Ensure all cache sizes are up-to-date.

        For each cache, run the mapped callback function with either
        a specific cache factor or the default, global one.
        """
        # Set the global factor size, so that new caches are appropriately sized.
        properties.default_factor_size = self.global_factor

        # Store this function so that it can be called from other classes without
        # needing an instance of CacheConfig
        properties.resize_all_caches_func = self.resize_all_caches

        # block other threads from modifying _CACHES while we iterate it.
        with _CACHES_LOCK:
            for cache_name, callback in _CACHES.items():

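The three autotuning options interact as the comments describe: evict while over `max_cache_memory_usage` until memory drops below `target_cache_memory_usage`, but never evict entries younger than `min_cache_ttl`. A hedged sketch of that decision (illustrative pseudologic, not Synapse's actual evictor):

```python
def should_evict(
    memory_usage: int,      # current jemalloc-reported usage, bytes
    max_usage: int,         # parsed max_cache_memory_usage
    target_usage: int,      # parsed target_cache_memory_usage
    entry_age_ms: int,      # age of the candidate cache entry
    min_cache_ttl_ms: int,  # parsed min_cache_ttl
    evicting: bool,         # whether an eviction pass is already underway
) -> bool:
    """Illustrates the documented autotune semantics for one cache entry."""
    if not evicting and memory_usage <= max_usage:
        return False  # under the ceiling: leave caches alone
    if memory_usage <= target_usage:
        return False  # we've evicted enough; stop the pass
    # Over target: evict, but protect hot/new entries.
    return entry_age_ms >= min_cache_ttl_ms
```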
@@ -57,9 +57,9 @@ class OembedConfig(Config):
        """
        # Whether to use the packaged providers.json file.
        if not oembed_config.get("disable_default_providers") or False:
            providers = json.load(
                pkg_resources.resource_stream("synapse", "res/providers.json")
            )
            with pkg_resources.resource_stream("synapse", "res/providers.json") as s:
                providers = json.load(s)

            yield from self._parse_and_validate_provider(
                providers, config_path=("oembed",)
            )

@@ -63,6 +63,19 @@ class RoomConfig(Config):
                "Invalid value for encryption_enabled_by_default_for_room_type"
            )

        self.default_power_level_content_override = config.get(
            "default_power_level_content_override",
            None,
        )
        if self.default_power_level_content_override is not None:
            for preset in self.default_power_level_content_override:
                if preset not in vars(RoomCreationPreset).values():
                    raise ConfigError(
                        "Unrecognised room preset %s in default_power_level_content_override"
                        % preset
                    )
                # We validate the actual overrides when we try to apply them.

    def generate_config_section(self, **kwargs: Any) -> str:
        return """\
        ## Rooms ##
@@ -83,4 +96,38 @@ class RoomConfig(Config):
        # will also not affect rooms created by other servers.
        #
        #encryption_enabled_by_default_for_room_type: invite

        # Override the default power levels for rooms created on this server, per
        # room creation preset.
        #
        # The appropriate dictionary for the room preset will be applied on top
        # of the existing power levels content.
        #
        # Useful if you know that your users need special permissions in rooms
        # that they create (e.g. to send particular types of state events without
        # needing an elevated power level). This takes the same shape as the
        # `power_level_content_override` parameter in the /createRoom API, but
        # is applied before that parameter.
        #
        # Valid keys are some or all of `private_chat`, `trusted_private_chat`
        # and `public_chat`. Inside each of those should be any of the
        # properties allowed in `power_level_content_override` in the
        # /createRoom API. If any property is missing, its default value will
        # continue to be used. If any property is present, it will overwrite
        # the existing default completely (so if the `events` property exists,
        # the default event power levels will be ignored).
        #
        #default_power_level_content_override:
        #   private_chat:
        #       "events":
        #           "com.example.myeventtype" : 0
        #           "m.room.avatar": 50
        #           "m.room.canonical_alias": 50
        #           "m.room.encryption": 100
        #           "m.room.history_visibility": 100
        #           "m.room.name": 50
        #           "m.room.power_levels": 100
        #           "m.room.server_acl": 100
        #           "m.room.tombstone": 100
        #       "events_default": 1
        """

@@ -996,7 +996,7 @@ class ServerConfig(Config):
        # federation: the server-server API (/_matrix/federation). Also implies
        #     'media', 'keys', 'openid'
        #
        # keys: the key discovery API (/_matrix/keys).
        # keys: the key discovery API (/_matrix/key).
        #
        # media: the media API (/_matrix/media).
        #

@@ -414,7 +414,12 @@ def _is_membership_change_allowed(
            raise AuthError(403, "You are banned from this room")
        elif join_rule == JoinRules.PUBLIC:
            pass
        elif room_version.msc3083_join_rules and join_rule == JoinRules.RESTRICTED:
        elif (
            room_version.msc3083_join_rules and join_rule == JoinRules.RESTRICTED
        ) or (
            room_version.msc3787_knock_restricted_join_rule
            and join_rule == JoinRules.KNOCK_RESTRICTED
        ):
            # This is the same as public, but the event must contain a reference
            # to the server who authorised the join. If the event does not contain
            # the proper content it is rejected.
@@ -440,8 +445,13 @@ def _is_membership_change_allowed(
            if authorising_user_level < invite_level:
                raise AuthError(403, "Join event authorised by invalid server.")

        elif join_rule == JoinRules.INVITE or (
            room_version.msc2403_knocking and join_rule == JoinRules.KNOCK
        elif (
            join_rule == JoinRules.INVITE
            or (room_version.msc2403_knocking and join_rule == JoinRules.KNOCK)
            or (
                room_version.msc3787_knock_restricted_join_rule
                and join_rule == JoinRules.KNOCK_RESTRICTED
            )
        ):
            if not caller_in_room and not caller_invited:
                raise AuthError(403, "You are not invited to this room.")
@@ -462,7 +472,10 @@ def _is_membership_change_allowed(
        if user_level < ban_level or user_level <= target_level:
            raise AuthError(403, "You don't have permission to ban")
    elif room_version.msc2403_knocking and Membership.KNOCK == membership:
        if join_rule != JoinRules.KNOCK:
        if join_rule != JoinRules.KNOCK and (
            not room_version.msc3787_knock_restricted_join_rule
            or join_rule != JoinRules.KNOCK_RESTRICTED
        ):
            raise AuthError(403, "You don't have permission to knock")
        elif target_user_id != event.user_id:
            raise AuthError(403, "You cannot knock for other users")

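In effect, `knock_restricted` is accepted wherever either `knock` or `restricted` would be. A hedged boolean summary of the rules introduced above (an illustration, not the full auth check):

```python
def join_rule_allows(join_rule: str, room_version_flags: set) -> dict:
    """Summarizes which membership paths a join rule opens up
    (a simplification of _is_membership_change_allowed above)."""
    knock_restricted = (
        join_rule == "knock_restricted" and "msc3787" in room_version_flags
    )
    return {
        # Joinable with a reference to an authorising server (restricted path):
        "restricted_join": join_rule == "restricted" or knock_restricted,
        # Joinable after invite, or knockable:
        "knock": join_rule == "knock" or knock_restricted,
    }

assert join_rule_allows("knock_restricted", {"msc3787"}) == {
    "restricted_join": True,
    "knock": True,
}
```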
@@ -15,6 +15,7 @@
# limitations under the License.

import abc
import collections.abc
import os
from typing import (
    TYPE_CHECKING,
@@ -32,9 +33,11 @@ from typing import (
    overload,
)

import attr
from typing_extensions import Literal
from unpaddedbase64 import encode_base64

from synapse.api.constants import RelationTypes
from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions
from synapse.types import JsonDict, RoomStreamToken
from synapse.util.caches import intern_dict
@@ -615,3 +618,45 @@ def make_event_from_dict(
    return event_type(
        event_dict, room_version, internal_metadata_dict or {}, rejected_reason
    )


@attr.s(slots=True, frozen=True, auto_attribs=True)
class _EventRelation:
    # The target event of the relation.
    parent_id: str
    # The relation type.
    rel_type: str
    # The aggregation key. Will be None if the rel_type is not m.annotation or is
    # not a string.
    aggregation_key: Optional[str]


def relation_from_event(event: EventBase) -> Optional[_EventRelation]:
    """
    Attempt to parse relation information from an event.

    Returns:
        The event relation information, if it is valid. None, otherwise.
    """
    relation = event.content.get("m.relates_to")
    if not relation or not isinstance(relation, collections.abc.Mapping):
        # No relation information.
        return None

    # Relations must have a type and parent event ID.
    rel_type = relation.get("rel_type")
    if not isinstance(rel_type, str):
        return None

    parent_id = relation.get("event_id")
    if not isinstance(parent_id, str):
        return None

    # Annotations have a key field.
    aggregation_key = None
    if rel_type == RelationTypes.ANNOTATION:
        aggregation_key = relation.get("key")
        if not isinstance(aggregation_key, str):
            aggregation_key = None

    return _EventRelation(parent_id, rel_type, aggregation_key)

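For reference, the shape `relation_from_event` expects inside `event.content` is the standard `m.relates_to` block. A hedged standalone re-implementation exercising the same validation (dict-based, so it runs without Synapse):

```python
import collections.abc
from typing import Optional, Tuple

def parse_relation(content: dict) -> Optional[Tuple[str, str, Optional[str]]]:
    """Mirrors relation_from_event's checks on a bare content dict:
    returns (parent_id, rel_type, aggregation_key) or None."""
    relation = content.get("m.relates_to")
    if not relation or not isinstance(relation, collections.abc.Mapping):
        return None
    rel_type = relation.get("rel_type")
    parent_id = relation.get("event_id")
    if not isinstance(rel_type, str) or not isinstance(parent_id, str):
        return None
    key = relation.get("key")
    aggregation_key = (
        key if rel_type == "m.annotation" and isinstance(key, str) else None
    )
    return parent_id, rel_type, aggregation_key

# An annotation (reaction) relating to $parent with key "👍":
content = {"m.relates_to": {"rel_type": "m.annotation", "event_id": "$parent", "key": "👍"}}
assert parse_relation(content) == ("$parent", "m.annotation", "👍")
```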
@@ -24,6 +24,7 @@ from synapse.types import JsonDict, StateMap
if TYPE_CHECKING:
    from synapse.storage import Storage
    from synapse.storage.databases.main import DataStore
    from synapse.storage.state import StateFilter


@attr.s(slots=True, auto_attribs=True)
@@ -196,7 +197,9 @@ class EventContext:

        return self._state_group

    async def get_current_state_ids(self) -> Optional[StateMap[str]]:
    async def get_current_state_ids(
        self, state_filter: Optional["StateFilter"] = None
    ) -> Optional[StateMap[str]]:
        """
        Gets the room state map, including this event - ie, the state in ``state_group``

@@ -204,6 +207,9 @@ class EventContext:
        not make it into the room state. This method will raise an exception if
        ``rejected`` is set.

        Args:
            state_filter: specifies the type of state event to fetch from DB, example: EventTypes.JoinRules

        Returns:
            Returns None if state_group is None, which happens when the associated
            event is an outlier.
@@ -216,7 +222,7 @@ class EventContext:

        assert self._state_delta_due_to_event is not None

        prev_state_ids = await self.get_prev_state_ids()
        prev_state_ids = await self.get_prev_state_ids(state_filter)

        if self._state_delta_due_to_event:
            prev_state_ids = dict(prev_state_ids)
@@ -224,12 +230,17 @@ class EventContext:

        return prev_state_ids

    async def get_prev_state_ids(self) -> StateMap[str]:
    async def get_prev_state_ids(
        self, state_filter: Optional["StateFilter"] = None
    ) -> StateMap[str]:
        """
        Gets the room state map, excluding this event.

        For a non-state event, this will be the same as get_current_state_ids().

        Args:
            state_filter: specifies the type of state event to fetch from DB, example: EventTypes.JoinRules

        Returns:
            Returns {} if state_group is None, which happens when the associated
            event is an outlier.
@@ -239,7 +250,7 @@ class EventContext:
        """
        assert self.state_group_before_event is not None
        return await self._storage.state.get_state_ids_for_group(
            self.state_group_before_event
            self.state_group_before_event, state_filter
        )

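The new `state_filter` parameter lets callers fetch only the state rows they need rather than a room's full state map. A hedged usage sketch, assuming Synapse's `StateFilter.from_types` helper and an `EventContext` instance:

```python
from typing import Optional

from synapse.api.constants import EventTypes
from synapse.events.snapshot import EventContext
from synapse.storage.state import StateFilter

async def get_join_rules_event_id(context: EventContext) -> Optional[str]:
    # Fetch only the m.room.join_rules row instead of the full state map.
    state_ids = await context.get_prev_state_ids(
        StateFilter.from_types([(EventTypes.JoinRules, "")])
    )
    return state_ids.get((EventTypes.JoinRules, ""))
```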
@@ -32,6 +32,7 @@ from synapse.rest.media.v1.media_storage import ReadableFileWrapper
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.types import RoomAlias, UserProfile
from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
from synapse.util.metrics import Measure

if TYPE_CHECKING:
    import synapse.events
@@ -162,7 +163,10 @@ def load_legacy_spam_checkers(hs: "synapse.server.HomeServer") -> None:


class SpamChecker:
    def __init__(self) -> None:
    def __init__(self, hs: "synapse.server.HomeServer") -> None:
        self.hs = hs
        self.clock = hs.get_clock()

        self._check_event_for_spam_callbacks: List[CHECK_EVENT_FOR_SPAM_CALLBACK] = []
        self._user_may_join_room_callbacks: List[USER_MAY_JOIN_ROOM_CALLBACK] = []
        self._user_may_invite_callbacks: List[USER_MAY_INVITE_CALLBACK] = []
@@ -255,7 +259,10 @@ class SpamChecker:
            will be used as the error message returned to the user.
        """
        for callback in self._check_event_for_spam_callbacks:
            res: Union[bool, str] = await delay_cancellation(callback(event))
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                res: Union[bool, str] = await delay_cancellation(callback(event))
            if res:
                return res

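Each spam-checker callback is now wrapped in `Measure`, so per-callback timings surface in Synapse's metrics under the callback's qualified name. A hedged sketch of the wrapping pattern with a stand-in context manager (so it runs without Synapse):

```python
import time
from contextlib import contextmanager

@contextmanager
def measure(name: str):
    """Stand-in for synapse.util.metrics.Measure: times a block and
    reports it under `name` (here we just print)."""
    start = time.monotonic()
    try:
        yield
    finally:
        print(f"{name}: {time.monotonic() - start:.6f}s")

def check_event_for_spam(event: dict) -> bool:
    return "buy now" in event.get("body", "")

# The metric name mirrors the diff: module + qualified callback name.
name = "{}.{}".format(
    check_event_for_spam.__module__, check_event_for_spam.__qualname__
)
with measure(name):
    spam = check_event_for_spam({"body": "hello"})
```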
@@ -276,9 +283,12 @@ class SpamChecker:
            Whether the user may join the room
        """
        for callback in self._user_may_join_room_callbacks:
            may_join_room = await delay_cancellation(
                callback(user_id, room_id, is_invited)
            )
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                may_join_room = await delay_cancellation(
                    callback(user_id, room_id, is_invited)
                )
            if may_join_room is False:
                return False

@@ -300,9 +310,12 @@ class SpamChecker:
            True if the user may send an invite, otherwise False
        """
        for callback in self._user_may_invite_callbacks:
            may_invite = await delay_cancellation(
                callback(inviter_userid, invitee_userid, room_id)
            )
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                may_invite = await delay_cancellation(
                    callback(inviter_userid, invitee_userid, room_id)
                )
            if may_invite is False:
                return False

@@ -328,9 +341,12 @@ class SpamChecker:
            True if the user may send the invite, otherwise False
        """
        for callback in self._user_may_send_3pid_invite_callbacks:
            may_send_3pid_invite = await delay_cancellation(
                callback(inviter_userid, medium, address, room_id)
            )
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                may_send_3pid_invite = await delay_cancellation(
                    callback(inviter_userid, medium, address, room_id)
                )
            if may_send_3pid_invite is False:
                return False

@@ -348,7 +364,10 @@ class SpamChecker:
            True if the user may create a room, otherwise False
        """
        for callback in self._user_may_create_room_callbacks:
            may_create_room = await delay_cancellation(callback(userid))
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                may_create_room = await delay_cancellation(callback(userid))
            if may_create_room is False:
                return False

@@ -369,9 +388,12 @@ class SpamChecker:
            True if the user may create a room alias, otherwise False
        """
        for callback in self._user_may_create_room_alias_callbacks:
            may_create_room_alias = await delay_cancellation(
                callback(userid, room_alias)
            )
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                may_create_room_alias = await delay_cancellation(
                    callback(userid, room_alias)
                )
            if may_create_room_alias is False:
                return False

@@ -390,7 +412,10 @@ class SpamChecker:
            True if the user may publish the room, otherwise False
        """
        for callback in self._user_may_publish_room_callbacks:
            may_publish_room = await delay_cancellation(callback(userid, room_id))
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                may_publish_room = await delay_cancellation(callback(userid, room_id))
            if may_publish_room is False:
                return False

@@ -412,9 +437,13 @@ class SpamChecker:
            True if the user is spammy.
        """
        for callback in self._check_username_for_spam_callbacks:
            # Make a copy of the user profile object to ensure the spam checker cannot
            # modify it.
            if await delay_cancellation(callback(user_profile.copy())):
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                # Make a copy of the user profile object to ensure the spam checker cannot
                # modify it.
                res = await delay_cancellation(callback(user_profile.copy()))
            if res:
                return True

        return False
@@ -442,9 +471,12 @@ class SpamChecker:
        """

        for callback in self._check_registration_for_spam_callbacks:
            behaviour = await delay_cancellation(
                callback(email_threepid, username, request_info, auth_provider_id)
            )
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                behaviour = await delay_cancellation(
                    callback(email_threepid, username, request_info, auth_provider_id)
                )
            assert isinstance(behaviour, RegistrationBehaviour)
            if behaviour != RegistrationBehaviour.ALLOW:
                return behaviour
@@ -486,7 +518,10 @@ class SpamChecker:
        """

        for callback in self._check_media_file_for_spam_callbacks:
            spam = await delay_cancellation(callback(file_wrapper, file_info))
            with Measure(
                self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
            ):
                spam = await delay_cancellation(callback(file_wrapper, file_info))
            if spam:
                return True

Some files were not shown because too many files have changed in this diff.