
Compare commits


160 Commits

Author SHA1 Message Date
H. Shay
78c0d07e0c bump max request body size in replication test cases to accommodate larger body size due to batched events 2022-10-06 17:34:34 -07:00
H. Shay
2612196ce7 fix tests to reflect new reality 2022-10-06 12:10:06 -07:00
H. Shay
ece0f566e8 fix merge conflict 2022-10-06 12:09:53 -07:00
Shay
a61efa073e Merge branch 'develop' into shay/more_batching 2022-10-06 11:33:43 -07:00
H. Shay
e55e811b8c add power level event to batched events 2022-10-06 11:30:32 -07:00
H. Shay
8260088d78 update callsites of action_for_event_by_user 2022-10-06 11:30:11 -07:00
H. Shay
dcc096b435 change action for event by user to take a list of events, context and pass batch of events to _get_power_levels_and_sender_level 2022-10-06 11:28:56 -07:00
H. Shay
c7f677fd1d change _get_power_levels_and_sender_level to check for events in batch 2022-10-06 11:27:42 -07:00
David Robertson
cb20b885cb Always close _all_ ijson coroutines, even if doing so raises Exceptions (#14065) 2022-10-06 18:17:50 +00:00
Sean Quah
44741aa85b Merge tag 'v1.69.0rc2' into develop
Synapse 1.69.0rc2 (2022-10-06)
==============================

Please note that legacy Prometheus metric names are now deprecated and will be removed in Synapse 1.73.0.
Server administrators should update their dashboards and alerting rules to avoid using the deprecated metric names.
See the [upgrade notes](https://matrix-org.github.io/synapse/v1.69/upgrade.html#upgrading-to-v1690) for more details.

Deprecations and Removals
-------------------------

- Deprecate the `generate_short_term_login_token` method in favor of an async `create_login_token` method in the Module API. ([\#13842](https://github.com/matrix-org/synapse/issues/13842))

Internal Changes
----------------

- Ensure Synapse v1.69 works with upcoming database changes in v1.70. ([\#14045](https://github.com/matrix-org/synapse/issues/14045))
- Fix a bug introduced in Synapse v1.68.0 where messages could not be sent in rooms with non-integer `notifications` power level. ([\#14073](https://github.com/matrix-org/synapse/issues/14073))
- Temporarily pin build-system requirements to workaround an incompatibility with poetry-core 1.3.0. This will be reverted before the v1.69.0 release proper, see [\#14079](https://github.com/matrix-org/synapse/issues/14079). ([\#14080](https://github.com/matrix-org/synapse/issues/14080))
2022-10-06 18:48:47 +01:00
Andre Klärner
b753f63000 The changelog entry ending in a . or ! is not optional (#14087) 2022-10-06 17:11:32 +00:00
Sean Quah
a98ac3cc1e Update 1.69.0rc2 changelog 2022-10-06 14:55:37 +01:00
Sean Quah
b42177f94f Replace incorrect 1.69.0rc3 version with 1.69.0rc2 2022-10-06 14:48:04 +01:00
Sean Quah
bb69dbf3e3 1.69.0rc3 2022-10-06 14:46:57 +01:00
David Robertson
e9a0419c8d Fix sending events into rooms with non-integer power levels (#14073) 2022-10-06 13:00:03 +00:00
David Robertson
720b12c209 Pin build-system requirements (#14080)
* Pin build-system requirements

* Changelog
2022-10-06 12:55:07 +01:00
Quentin Gliech
79c592cec6 Deprecate the generate_short_term_login_token method in favor of an async create_login_token method in the Module API. (#13842)
Signed-off-by: Quentin Gliech <quenting@element.io>
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2022-10-06 10:22:36 +00:00
Sean Quah
f6f6bdc7b3 1.69.0rc2 2022-10-06 10:33:23 +01:00
Richard van der Hoff
a09a7d40e3 openid.md: fix a typo in the facebook example 2022-10-06 09:47:54 +01:00
H. Shay
57ccbd8181 Merge branch 'develop' into shay/more_batching 2022-10-05 14:10:39 -07:00
H. Shay
8209485e3a newsfragment 2022-10-05 13:56:53 -07:00
H. Shay
a5679093a0 update callsites to reflect new signature of auth function 2022-10-05 13:50:59 -07:00
H. Shay
157e4fdf81 when authing batched events, check if the events needed to auth the event are present in the batch and if so, pass them to the auth functions 2022-10-05 13:50:29 -07:00
H. Shay
82de07126a update auth functions to take optional parameter of batched events needed to auth 2022-10-05 13:49:32 -07:00
Shay
7b7478e8b6 Batch up notifications after event persistence (#14033) 2022-10-05 10:12:48 -07:00
Richard van der Hoff
51436c8dd5 Complement test image: capture logs from nginx (#14063)
Have nginx send its logs to stderr/out, so that we can debug
https://github.com/matrix-org/synapse/issues/13334.
2022-10-05 17:37:34 +01:00
Patrick Cloke
0b037d6c91 Fix handling of public rooms filter with a network tuple. (#14053)
Fixes two related bugs:

* The handling of `[null]` for a `room_types` filter was incorrect.
* The ordering of arguments when providing both a network tuple
  and room type field was incorrect.
2022-10-05 12:49:52 +00:00
Patrick Cloke
e3d4755454 Fix backwards compatibility with upcoming threads schema changes. (#14045)
Ensure that the upsert will work properly by first updating any existing
rows (in the same way that the background update to backfill data works).
2022-10-05 07:56:05 -04:00
Patrick Cloke
dcced5a8d7 Use threaded receipts when fetching events for push. (#13878)
Update the HTTP and email pushers to consider threaded read receipts
when fetching unread events.
2022-10-04 12:07:02 -04:00
dependabot[bot]
c3b0e5e178 Bump types-pyyaml from 6.0.4 to 6.0.12 (#14041)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-04 15:53:47 +00:00
Nick Mills-Barrett
0506bb100e Remove get rooms for user with stream ordering (#13991)
By getting the joined rooms before the current token we avoid any reading
history to confirm a user *was* in a room. We can then use any membership
change events, which we already fetch during sync, to determine the final
list of joined room IDs.
2022-10-04 16:42:59 +01:00
Patrick Cloke
2b6d41ebd6 Recursively fetch the thread for receipts & notifications. (#13824)
Consider an event to be part of a thread if you can follow a
chain of relations up to a thread root.

Part of MSC3773 & MSC3771.
2022-10-04 11:36:16 -04:00
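For illustration, a minimal sketch of that chain-following rule, with a toy relation map standing in for Synapse's actual relation storage (the data and helper here are illustrative, not Synapse's API):

```
from typing import Dict, Optional, Tuple

# Toy relation map: event_id -> (parent_event_id, relation_type).
# In Synapse this data lives in the database; the map is illustrative.
RELATIONS: Dict[str, Tuple[str, str]] = {
    "$reply": ("$msg_in_thread", "m.reference"),
    "$msg_in_thread": ("$thread_root", "m.thread"),
}

def find_thread_root(event_id: str, max_depth: int = 100) -> Optional[str]:
    """Walk relations upwards; return the thread root if the chain reaches one."""
    current = event_id
    for _ in range(max_depth):
        relation = RELATIONS.get(current)
        if relation is None:
            return None  # chain ended without reaching a thread relation
        parent_id, rel_type = relation
        if rel_type == "m.thread":
            return parent_id  # the target of an m.thread relation is the root
        current = parent_id  # keep walking up the chain
    return None

print(find_thread_root("$reply"))  # -> $thread_root
```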
ffaex
3e74ad20db update authlib from 0.15.5 to 1.1.0 (#14006)
Signed-off-by: Fyodor Fomichev <fyodor.fomichev@gmail.com>
2022-10-04 14:49:07 +00:00
Patrick Cloke
a7ba457b2b Mark events as read using threaded read receipts from MSC3771. (#13877)
Applies the proper logic for unthreaded and threaded receipts to either
apply to all events in the room or only events in the same thread, respectively.
2022-10-04 10:46:42 -04:00
Sean Quah
f0019f3f3b Merge tag 'v1.69.0rc1' into develop
Synapse 1.69.0rc1 (2022-10-04)
==============================

Please note that legacy Prometheus metric names are now deprecated and will be removed in Synapse 1.73.0.
Server administrators should update their dashboards and alerting rules to avoid using the deprecated metric names.
See the [upgrade notes](https://matrix-org.github.io/synapse/v1.69/upgrade.html#upgrading-to-v1690) for more details.

Features
--------

- Allow application services to set the `origin_server_ts` of a state event by providing the query parameter `ts` in [`PUT /_matrix/client/r0/rooms/{roomId}/state/{eventType}/{stateKey}`](https://spec.matrix.org/v1.4/client-server-api/#put_matrixclientv3roomsroomidstateeventtypestatekey), per [MSC3316](https://github.com/matrix-org/matrix-doc/pull/3316). Contributed by @lukasdenk. ([\#11866](https://github.com/matrix-org/synapse/issues/11866))
- Allow server admins to require a manual approval process before new accounts can be used (using [MSC3866](https://github.com/matrix-org/matrix-spec-proposals/pull/3866)). ([\#13556](https://github.com/matrix-org/synapse/issues/13556))
- Exponentially backoff from backfilling the same event over and over. ([\#13635](https://github.com/matrix-org/synapse/issues/13635), [\#13936](https://github.com/matrix-org/synapse/issues/13936))
- Add cache invalidation across workers to module API. ([\#13667](https://github.com/matrix-org/synapse/issues/13667), [\#13947](https://github.com/matrix-org/synapse/issues/13947))
- Experimental implementation of [MSC3882](https://github.com/matrix-org/matrix-spec-proposals/pull/3882) to allow an existing device/session to generate a login token for use on a new device/session. ([\#13722](https://github.com/matrix-org/synapse/issues/13722), [\#13868](https://github.com/matrix-org/synapse/issues/13868))
- Experimental support for thread-specific receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771)). ([\#13782](https://github.com/matrix-org/synapse/issues/13782), [\#13893](https://github.com/matrix-org/synapse/issues/13893), [\#13932](https://github.com/matrix-org/synapse/issues/13932), [\#13937](https://github.com/matrix-org/synapse/issues/13937), [\#13939](https://github.com/matrix-org/synapse/issues/13939))
- Add experimental support for [MSC3881: Remotely toggle push notifications for another client](https://github.com/matrix-org/matrix-spec-proposals/pull/3881). ([\#13799](https://github.com/matrix-org/synapse/issues/13799), [\#13831](https://github.com/matrix-org/synapse/issues/13831), [\#13860](https://github.com/matrix-org/synapse/issues/13860))
- Keep track when an event pulled over federation fails its signature check so we can intelligently back-off in the future. ([\#13815](https://github.com/matrix-org/synapse/issues/13815))
- Improve validation for the unspecced, internal-only `_matrix/client/unstable/add_threepid/msisdn/submit_token` endpoint. ([\#13832](https://github.com/matrix-org/synapse/issues/13832))
- Faster remote room joins: record _when_ we first partial-join to a room. ([\#13892](https://github.com/matrix-org/synapse/issues/13892))
- Support a `dir` parameter on the `/relations` endpoint per [MSC3715](https://github.com/matrix-org/matrix-doc/pull/3715). ([\#13920](https://github.com/matrix-org/synapse/issues/13920))
- Ask mail servers receiving emails from Synapse to not send automatic replies (e.g. out-of-office responses). ([\#13957](https://github.com/matrix-org/synapse/issues/13957))

Bugfixes
--------

- Send push notifications for invites received over federation. ([\#13719](https://github.com/matrix-org/synapse/issues/13719), [\#14014](https://github.com/matrix-org/synapse/issues/14014))
- Fix a long-standing bug where typing events would be accepted from remote servers not present in a room. Also fix a bug where incoming typing events would cause other incoming events to get stuck during a fast join. ([\#13830](https://github.com/matrix-org/synapse/issues/13830))
- Fix a bug introduced in Synapse v1.53.0 where the experimental implementation of [MSC3715](https://github.com/matrix-org/matrix-spec-proposals/pull/3715) would give incorrect results when paginating forward. ([\#13840](https://github.com/matrix-org/synapse/issues/13840))
- Fix access token leak to logs from proxy agent. ([\#13855](https://github.com/matrix-org/synapse/issues/13855))
- Fix the `have_seen_event` cache not being invalidated after we persist an event, which caused inefficiencies such as extra `/state` federation calls. ([\#13863](https://github.com/matrix-org/synapse/issues/13863))
- Faster room joins: Fix a bug introduced in 1.66.0 where an error would be logged when syncing after joining a room. ([\#13872](https://github.com/matrix-org/synapse/issues/13872))
- Fix a bug introduced in 1.66.0 where some required fields in the pushrules sent to clients were not present anymore. Contributed by Nico. ([\#13904](https://github.com/matrix-org/synapse/issues/13904))
- Fix packaging to include `Cargo.lock` in `sdist`. ([\#13909](https://github.com/matrix-org/synapse/issues/13909))
- Fix a long-standing bug where device updates could cause delays sending out to-device messages over federation. ([\#13922](https://github.com/matrix-org/synapse/issues/13922))
- Fix a bug introduced in v1.68.0 where Synapse would require `setuptools_rust` at runtime, even though the package is only required at build time. ([\#13952](https://github.com/matrix-org/synapse/issues/13952))
- Fix a long-standing bug where `POST /_matrix/client/v3/keys/query` requests could result in excessively large SQL queries. ([\#13956](https://github.com/matrix-org/synapse/issues/13956))
- Fix a performance regression in the `get_users_in_room` database query. Introduced in v1.67.0. ([\#13972](https://github.com/matrix-org/synapse/issues/13972))
- Fix a bug introduced in v1.68.0 where the Rust extension wasn't built in `release` mode when using `poetry install`. ([\#14009](https://github.com/matrix-org/synapse/issues/14009))
- Do not return an unspecified `original_event` field when using the stable `/relations` endpoint. Introduced in Synapse v1.57.0. ([\#14025](https://github.com/matrix-org/synapse/issues/14025))
- Correctly handle a race with device lists when a remote user leaves during a partial join. ([\#13885](https://github.com/matrix-org/synapse/issues/13885))
- Correctly handle sending local device list updates to remote servers during a partial join. ([\#13934](https://github.com/matrix-org/synapse/issues/13934))

Improved Documentation
----------------------

- Add `worker_main_http_uri` for the worker generator bash script. ([\#13772](https://github.com/matrix-org/synapse/issues/13772))
- Update URL for the NixOS module for Synapse. ([\#13818](https://github.com/matrix-org/synapse/issues/13818))
- Fix a mistake in sso_mapping_providers.md: `map_user_attributes` is expected to return `display_name`, not `displayname`. ([\#13836](https://github.com/matrix-org/synapse/issues/13836))
- Fix a cross-link from the registration admin API to the `registration_shared_secret` configuration documentation. ([\#13870](https://github.com/matrix-org/synapse/issues/13870))
- Update the man page for the `hash_password` script to correct the default number of bcrypt rounds performed. ([\#13911](https://github.com/matrix-org/synapse/issues/13911), [\#13930](https://github.com/matrix-org/synapse/issues/13930))
- Emphasize the right reasons when to use `(room_id, event_id)` in a database schema. ([\#13915](https://github.com/matrix-org/synapse/issues/13915))
- Add instruction to contributing guide for running unit tests in parallel. Contributed by @ashfame. ([\#13928](https://github.com/matrix-org/synapse/issues/13928))
- Clarify that the `auto_join_rooms` config option can also be used with Space aliases. ([\#13931](https://github.com/matrix-org/synapse/issues/13931))
- Add some cross references to worker documentation. ([\#13974](https://github.com/matrix-org/synapse/issues/13974))
- Linkify URLs in config documentation. ([\#14003](https://github.com/matrix-org/synapse/issues/14003))

Deprecations and Removals
-------------------------

- Remove the `complete_sso_login` method from the Module API which was deprecated in Synapse 1.13.0. ([\#13843](https://github.com/matrix-org/synapse/issues/13843))
- Announce that legacy metric names are deprecated, will be turned off by default in Synapse v1.71.0 and removed altogether in Synapse v1.73.0. See the upgrade notes for more information. ([\#14024](https://github.com/matrix-org/synapse/issues/14024))

Internal Changes
----------------

- Speed up creation of DM rooms. ([\#13487](https://github.com/matrix-org/synapse/issues/13487), [\#13800](https://github.com/matrix-org/synapse/issues/13800))
- Port push rules to using Rust. ([\#13768](https://github.com/matrix-org/synapse/issues/13768), [\#13838](https://github.com/matrix-org/synapse/issues/13838), [\#13889](https://github.com/matrix-org/synapse/issues/13889))
- Optimise get rooms for user calls. Contributed by Nick @ Beeper (@fizzadar). ([\#13787](https://github.com/matrix-org/synapse/issues/13787))
- Update the script which makes full schema dumps. ([\#13792](https://github.com/matrix-org/synapse/issues/13792))
- Use shared methods for cache invalidation when persisting events, remove duplicate codepaths. Contributed by Nick @ Beeper (@fizzadar). ([\#13796](https://github.com/matrix-org/synapse/issues/13796))
- Improve the `synapse.api.auth.Auth` mock used in unit tests. ([\#13809](https://github.com/matrix-org/synapse/issues/13809))
- Faster Remote Room Joins: tell remote homeservers that we are unable to authorise them if they query a room which has partial state on our server. ([\#13823](https://github.com/matrix-org/synapse/issues/13823))
- Carry IdP Session IDs through user-mapping sessions. ([\#13839](https://github.com/matrix-org/synapse/issues/13839))
- Fix the release script not publishing binary wheels. ([\#13850](https://github.com/matrix-org/synapse/issues/13850))
- Raise issue if complement fails with latest deps. ([\#13859](https://github.com/matrix-org/synapse/issues/13859))
- Correct the comments in the complement dockerfile. ([\#13867](https://github.com/matrix-org/synapse/issues/13867))
- Create a new snapshot of the database schema. ([\#13873](https://github.com/matrix-org/synapse/issues/13873))
- Faster room joins: Send device list updates to most servers in rooms with partial state. ([\#13874](https://github.com/matrix-org/synapse/issues/13874), [\#14013](https://github.com/matrix-org/synapse/issues/14013))
- Add comments to the Prometheus recording rules to make it clear which set of rules you need for Grafana or Prometheus Console. ([\#13876](https://github.com/matrix-org/synapse/issues/13876))
- Only pull relevant backfill points from the database based on the current depth and limit (instead of all) every time we want to `/backfill`. ([\#13879](https://github.com/matrix-org/synapse/issues/13879))
- Faster room joins: Avoid waiting for full state when processing `/keys/changes` requests. ([\#13888](https://github.com/matrix-org/synapse/issues/13888))
- Improve backfill robustness by trying more servers when we get a `4xx` error back. ([\#13890](https://github.com/matrix-org/synapse/issues/13890))
- Fix mypy errors with canonicaljson 1.6.3. ([\#13905](https://github.com/matrix-org/synapse/issues/13905))
- Faster remote room joins: correctly handle remote device list updates during a partial join. ([\#13913](https://github.com/matrix-org/synapse/issues/13913))
- Complement image: propagate SIGTERM to all workers. ([\#13914](https://github.com/matrix-org/synapse/issues/13914))
- Update an inaccurate comment in Synapse's upsert database helper. ([\#13924](https://github.com/matrix-org/synapse/issues/13924))
- Update mypy (0.950 -> 0.981) and mypy-zope (0.3.7 -> 0.3.11). ([\#13925](https://github.com/matrix-org/synapse/issues/13925), [\#13993](https://github.com/matrix-org/synapse/issues/13993))
- Use dedicated `get_local_users_in_room(room_id)` function to find local users when calculating users to copy over during a room upgrade. ([\#13960](https://github.com/matrix-org/synapse/issues/13960))
- Refactor language in user directory `_track_user_joined_room` code to make it more clear that we use both local and remote users. ([\#13966](https://github.com/matrix-org/synapse/issues/13966))
- Revert catch-all exceptions being recorded as event pull attempt failures (only handle what we know about). ([\#13969](https://github.com/matrix-org/synapse/issues/13969))
- Speed up calculating push actions in large rooms. ([\#13973](https://github.com/matrix-org/synapse/issues/13973), [\#13992](https://github.com/matrix-org/synapse/issues/13992))
- Enable update notifications from GitHub's dependabot. ([\#13976](https://github.com/matrix-org/synapse/issues/13976))
- Prototype a workflow to automatically add changelogs to dependabot PRs. ([\#13998](https://github.com/matrix-org/synapse/issues/13998), [\#14011](https://github.com/matrix-org/synapse/issues/14011), [\#14017](https://github.com/matrix-org/synapse/issues/14017), [\#14021](https://github.com/matrix-org/synapse/issues/14021), [\#14027](https://github.com/matrix-org/synapse/issues/14027))
- Fix type annotations to be compatible with new annotations in development versions of twisted. ([\#14012](https://github.com/matrix-org/synapse/issues/14012))
- Clear out stale entries in `event_push_actions_staging` table. ([\#14020](https://github.com/matrix-org/synapse/issues/14020))
- Bump versions of GitHub actions. ([\#13978](https://github.com/matrix-org/synapse/issues/13978), [\#13979](https://github.com/matrix-org/synapse/issues/13979), [\#13980](https://github.com/matrix-org/synapse/issues/13980), [\#13982](https://github.com/matrix-org/synapse/issues/13982), [\#14015](https://github.com/matrix-org/synapse/issues/14015), [\#14019](https://github.com/matrix-org/synapse/issues/14019), [\#14022](https://github.com/matrix-org/synapse/issues/14022), [\#14023](https://github.com/matrix-org/synapse/issues/14023))
2022-10-04 15:26:37 +01:00
Patrick Cloke
d8663f5e63 Advertise supporting version 1.3 of the Matrix spec. (#14032)
Now that all features / changes in 1.3 are supported in Synapse.
2022-10-04 10:21:16 -04:00
David Robertson
04ce2edddc Fix building wheels on OSX (#14046)
* Fix building wheels on OSX

Follow-up to #13983. I missed a breaking change in setup-python v4.
Serves me right for rushing to cut through the dependabot spam.

* Changelog

* Merge changelog
2022-10-04 14:59:30 +01:00
Patrick Cloke
b4ec4f5e71 Track notification counts per thread (implement MSC3773). (#13776)
When retrieving counts of notifications segment the results based on the
thread ID, but choose whether to return them as individual threads or as
a single summed field by letting the client opt-in via a sync flag.

The summarization code is also updated to be per thread, instead of per
room.
2022-10-04 09:47:04 -04:00
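A toy sketch of the segmentation described here; the data shape and flag are illustrative, not Synapse's API:

```
from collections import Counter
from typing import Dict, List, Optional, Tuple

# (thread_id, notification count); None means un-threaded ("main timeline").
notifications: List[Tuple[Optional[str], int]] = [
    ("$thread1", 2),
    ("$thread2", 1),
    (None, 3),
]

def counts(by_thread: bool) -> Dict[str, int]:
    """Return counts per thread, or a single summed field, per a client flag."""
    if by_thread:
        per_thread: Counter = Counter()
        for thread_id, n in notifications:
            per_thread[thread_id or "main"] += n
        return dict(per_thread)
    return {"total": sum(n for _, n in notifications)}

print(counts(by_thread=True))   # {'$thread1': 2, '$thread2': 1, 'main': 3}
print(counts(by_thread=False))  # {'total': 6}
```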
dependabot[bot]
94017e867d Bump isort from 5.7.0 to 5.10.1 (#13988)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-04 13:19:40 +01:00
Sean Quah
17bc4ecff2 Update changelog 2022-10-04 12:50:02 +01:00
dependabot[bot]
6ec9fcc808 Bump jsonschema from 4.4.0 to 4.16.0 (#13985)
* Bump jsonschema from 4.4.0 to 4.16.0

Bumps [jsonschema](https://github.com/python-jsonschema/jsonschema) from 4.4.0 to 4.16.0.
- [Release notes](https://github.com/python-jsonschema/jsonschema/releases)
- [Changelog](https://github.com/python-jsonschema/jsonschema/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/python-jsonschema/jsonschema/compare/v4.4.0...v4.16.0)

---
updated-dependencies:
- dependency-name: jsonschema
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-04 12:34:08 +01:00
dependabot[bot]
b95d8b463e Bump types-pyopenssl from 22.0.0 to 22.0.10 (#13984)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-04 12:27:41 +01:00
dependabot[bot]
b64b7297f2 Bump actions/setup-python from 2 to 4 (#13983)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-04 12:24:36 +01:00
dependabot[bot]
0e5106a0cc Bump types-requests from 2.27.11 to 2.28.11 (#13987)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-04 12:24:18 +01:00
dependabot[bot]
23a4973b83 Bump types-setuptools from 57.4.9 to 65.4.0.0 (#13986)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-10-04 12:21:20 +01:00
Patrick Cloke
e70c6b720e Disable pushing for server ACL events (MSC3786). (#13997)
Switches to the stable identifier for MSC3786 and enables it
by default.

This disables pushes of m.room.server_acl events.
2022-10-04 07:08:27 -04:00
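For reference, the override rule defined by MSC3786 looks roughly like this; the exact dict Synapse registers may differ:

```
# Push rule per MSC3786: match m.room.server_acl state events and do nothing,
# so they never generate a notification.
SERVER_ACL_RULE = {
    "rule_id": ".m.rule.room.server_acl",
    "default": True,
    "enabled": True,
    "conditions": [
        {"kind": "event_match", "key": "type", "pattern": "m.room.server_acl"},
        {"kind": "event_match", "key": "state_key", "pattern": ""},
    ],
    "actions": [],  # no actions: matching events produce no push
}
```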
Patrick Cloke
27fa0fa698 Send the appservice access token as a header. (#13996)
Implements MSC2832 by sending application service access
tokens in the Authorization header.

The access token is also still sent as a query parameter until
the application service ecosystem has fully migrated to using
headers. In the future this could be made opt-in, or removed
completely.
2022-10-04 07:06:41 -04:00
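A minimal sketch of the dual delivery described here; the URL, transaction ID and token are placeholders:

```
from urllib.parse import urlencode

def build_as_transaction_request(base_url: str, txn_id: str, token: str):
    """Return (url, headers) for pushing a transaction to an appservice."""
    # Legacy delivery: token in the query string (still sent for now).
    query = urlencode({"access_token": token})
    url = f"{base_url}/_matrix/app/v1/transactions/{txn_id}?{query}"
    # MSC2832 delivery: token in the Authorization header.
    headers = {"Authorization": f"Bearer {token}"}
    return url, headers

url, headers = build_as_transaction_request(
    "http://localhost:29328", "10722", "as_token_example"
)
print(url)
print(headers)
```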
Sean Quah
471e13a103 1.69.0rc1 2022-10-04 11:18:37 +01:00
Akshit Tyagi
1613857b90 Linkify config documentation. (#14003) 2022-10-04 09:03:31 +00:00
Eric Eastwood
70a4317692 Track when the pulled event signature fails (#13815)
Because we're doing the recording in `_check_sigs_and_hash_for_pulled_events_and_fetch` (previously named `_check_sigs_and_hash_and_fetch`), this means we will track signature failures for `backfill`, `get_room_state`, `get_event_auth`, and `get_missing_events` (all pulled event scenarios). And we also record signature failures from `get_pdu`.

Part of https://github.com/matrix-org/synapse/issues/13700

Part of https://github.com/matrix-org/synapse/issues/13676 and https://github.com/matrix-org/synapse/issues/13356

This PR will be especially important for https://github.com/matrix-org/synapse/pull/13816 so we can avoid the costly `_get_state_ids_after_missing_prev_event` down the line when `/messages` calls backfill.
2022-10-03 14:53:29 -05:00
David Robertson
92ae90aca2 Dependabot changelog: don't require a changelog in lints 2022-10-03 19:30:01 +01:00
dependabot[bot]
0c9c159f45 Bump actions/upload-artifact from 2 to 3 (#14023)
* Bump actions/upload-artifact from 2 to 3

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 2 to 3.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-03 19:11:22 +01:00
David Robertson
a34638c126 Dependabot changelog: try using branch name
Gonna hope this doesn't use the merge ref and is good enough
2022-10-03 19:09:33 +01:00
Erik Johnston
5a6d025246 Clear out old rows from event_push_actions_staging (#14020)
On matrix.org we have ~5 million stale rows in `event_push_actions_staging`, let's add a background job to make sure we clear them out.
2022-10-03 18:44:44 +01:00
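A rough sketch of such a batched clean-up job, assuming an `inserted_ts` column; the schema and batch size are illustrative, not Synapse's exact ones:

```
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE event_push_actions_staging (event_id TEXT, inserted_ts BIGINT)"
)
conn.executemany(
    "INSERT INTO event_push_actions_staging VALUES (?, ?)",
    [("$stale1", 0), ("$stale2", 0), ("$fresh", int(time.time() * 1000) + 10_000)],
)

BATCH_SIZE = 1000

def clear_stale_staging_rows(conn: sqlite3.Connection, cutoff_ts_ms: int) -> int:
    """Delete staging rows older than the cutoff, in small batches so the
    job never holds a long-running transaction."""
    total = 0
    while True:
        cur = conn.execute(
            "DELETE FROM event_push_actions_staging WHERE rowid IN ("
            " SELECT rowid FROM event_push_actions_staging"
            " WHERE inserted_ts < ? LIMIT ?)",
            (cutoff_ts_ms, BATCH_SIZE),
        )
        conn.commit()
        total += cur.rowcount
        if cur.rowcount < BATCH_SIZE:
            return total

print(clear_stale_staging_rows(conn, int(time.time() * 1000)))  # -> 2
```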
reivilibre
b381701f8c Announce that legacy metric names are deprecated, will be turned off by default in Synapse v1.71.0 and removed altogether in Synapse v1.73.0. (#14024) 2022-10-03 17:16:15 +00:00
dependabot[bot]
4cb0f5a99d Bump actions/cache from 2 to 3 (#13980)
* Bump actions/cache from 2 to 3

Bumps [actions/cache](https://github.com/actions/cache) from 2 to 3.
- [Release notes](https://github.com/actions/cache/releases)
- [Changelog](https://github.com/actions/cache/blob/main/RELEASES.md)
- [Commits](https://github.com/actions/cache/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/cache
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-03 17:54:06 +01:00
David Robertson
a2c7259827 Dependabot changelog: allow dependabot to trigger actions 2022-10-03 17:53:37 +01:00
dependabot[bot]
8e8638ac40 Bump actions/download-artifact from 2 to 3 (#13979)
* Bump actions/download-artifact from 2 to 3

Bumps [actions/download-artifact](https://github.com/actions/download-artifact) from 2 to 3.
- [Release notes](https://github.com/actions/download-artifact/releases)
- [Commits](https://github.com/actions/download-artifact/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/download-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-03 17:50:06 +01:00
Patrick Cloke
b706111b78 Do not return unspecced original_event field when using the stable /relations endpoint. (#14025)
Keep the old behavior (of including the original_event field) for any
requests to the /unstable version of the endpoint, but do not include
the field when the /v1 version is used.

This should avoid new clients from depending on this field, but will
not help with current dependencies.
2022-10-03 16:47:15 +00:00
dependabot[bot]
d42541733d Bump docker/login-action from 1 to 2 (#13978)
* Bump docker/login-action from 1 to 2

Bumps [docker/login-action](https://github.com/docker/login-action) from 1 to 2.
- [Release notes](https://github.com/docker/login-action/releases)
- [Commits](https://github.com/docker/login-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/login-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: GitHub Actions <github-actions[bot]@users.noreply.github.com>
2022-10-03 17:32:48 +01:00
David Robertson
9f218b73e9 Dependabot changelog: pass GITHUB_TOKEN to gh 2022-10-03 17:31:01 +01:00
David Robertson
2d5ce8c087 Dependabot changelogs: trigger CI explicitly (#14027)
* Dependabot changelogs: trigger CI explicitly

* Changelog

* Use merge ref, not head ref

ref ref ref ref ref

* explanatory note
2022-10-03 17:16:45 +01:00
dependabot[bot]
4cceb6ba66 Bump docker/build-push-action from 2 to 3 (#14022)
* Bump docker/build-push-action from 2 to 3

Bumps [docker/build-push-action](https://github.com/docker/build-push-action) from 2 to 3.
- [Release notes](https://github.com/docker/build-push-action/releases)
- [Commits](https://github.com/docker/build-push-action/compare/v2...v3)

---
updated-dependencies:
- dependency-name: docker/build-push-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: David Robertson <davidr@element.io>
2022-10-03 17:15:27 +01:00
David Robertson
ddcb52e455 More WIP dependabot changelog CI
Fixes a typo in #14021, and allows the workflow to trigger on `reopen`
for easier debugging.
2022-10-03 16:10:57 +01:00
dependabot[bot]
6c85796769 Bump docker/setup-qemu-action from 1 to 2 (#14019)
* Bump docker/setup-qemu-action from 1 to 2

Bumps [docker/setup-qemu-action](https://github.com/docker/setup-qemu-action) from 1 to 2.
- [Release notes](https://github.com/docker/setup-qemu-action/releases)
- [Commits](https://github.com/docker/setup-qemu-action/compare/v1...v2)

---
updated-dependencies:
- dependency-name: docker/setup-qemu-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: David Robertson <davidr@element.io>
2022-10-03 16:07:39 +01:00
David Robertson
3ac0e76b79 yet another attempt to make dependabot changelogs work (#14021)
* yet another attempt to make dependabot changelogs work

* Changelog
2022-10-03 15:59:32 +01:00
dependabot[bot]
a52d27a68b Bump docker/setup-buildx-action from 1 to 2 (#14015)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: David Robertson <davidr@element.io>
2022-10-03 14:13:15 +00:00
Erik Johnston
2c237debd3 Fix bug where we didn't delete staging push actions (#14014)
Introduced in #13719
2022-10-03 13:45:19 +00:00
David Robertson
cc8a3582e0 Try again to automate dependabot changelogs (#14017)
* Try again to automate dependabot changelogs

* Changelog
2022-10-03 14:34:50 +01:00
lukasdenk
719488dda8 Add query parameter ts to allow appservices set the origin_server_ts for state events. (#11866)
MSC3316 declares that both /rooms/{roomId}/send and /rooms/{roomId}/state
should accept a ts parameter for appservices. This change expands support
to /state and adds tests.
2022-10-03 13:30:45 +00:00
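For illustration, building such a request URL; the homeserver, token and IDs are placeholders:

```
from urllib.parse import quote, urlencode

def state_event_url(hs: str, room_id: str, event_type: str, state_key: str,
                    as_token: str, ts_ms: int) -> str:
    """URL for an appservice PUT to /state, with `ts` setting origin_server_ts."""
    query = urlencode({"access_token": as_token, "ts": ts_ms})
    return (
        f"{hs}/_matrix/client/v3/rooms/{quote(room_id)}"
        f"/state/{quote(event_type)}/{quote(state_key)}?{query}"
    )

print(state_event_url(
    "https://homeserver.example", "!room:example.org",
    "m.room.topic", "", "as_token_example", 1_600_000_000_000,
))
```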
David Robertson
a423f45294 Fix twisted trunk mypy errors (#14012) 2022-10-03 13:26:49 +00:00
dependabot[bot]
7f4f2a3782 Bump actions/checkout from 2 to 3 (#13982)
* Bump actions/checkout from 2 to 3

Bumps [actions/checkout](https://github.com/actions/checkout) from 2 to 3.
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v2...v3)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

* Changelog

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: David Robertson <davidr@element.io>
2022-10-03 14:20:32 +01:00
Erik Johnston
606b2d9009 Add cache to get_partial_state_servers_at_join (#14013) 2022-10-03 13:13:11 +00:00
Sean Quah
d65862c41f Refactor _get_e2e_device_keys_txn to split large queries (#13956)
Instead of running a single large query, run a single query for
user-only lookups and additional queries for batches of user device
lookups.

Resolves #13580.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-10-03 13:46:36 +01:00
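A toy sketch of the splitting strategy; the batch size and data are illustrative:

```
from typing import Iterable, List, Tuple

DEVICE_BATCH_SIZE = 256  # illustrative, not Synapse's actual value

def chunk(pairs: List[Tuple[str, str]], size: int) -> Iterable[List[Tuple[str, str]]]:
    """Yield fixed-size batches of (user_id, device_id) pairs."""
    for i in range(0, len(pairs), size):
        yield pairs[i : i + size]

user_only = ["@alice:example.org", "@bob:example.org"]
user_device_pairs = [("@carol:example.org", f"DEVICE{i}") for i in range(1000)]

queries = 1 if user_only else 0  # one query covers all user-only lookups
queries += sum(1 for _ in chunk(user_device_pairs, DEVICE_BATCH_SIZE))
print(f"{queries} bounded queries instead of one oversized query")
```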
David Robertson
061739d10f Change dependabot CL suffix (#14011)
* Change dependabot CL suffix

* Changelog
2022-10-03 11:51:57 +01:00
David Robertson
7a441c4f30 Prototype automatic changelogs for dependabot PRs (#13998) 2022-10-03 11:30:21 +01:00
Erik Johnston
343038c3c3 Always build Rust extension in release mode (#14009)
Fixes #13990
2022-10-03 11:20:18 +01:00
Eric Eastwood
2769ef4df1 Revert the general exception recording introduced in #13814 (#13969)
* Don't catch all exceptions, to avoid swallowing things like CancelledError

See https://github.com/matrix-org/synapse/pull/13815#discussion_r983384698

* Remove general exception tracking

* Add changelog
2022-10-03 10:14:45 +01:00
Eric Eastwood
a52c40e2a6 Fix get_users_in_room mis-use in transfer_room_state_on_room_upgrade (#13960)
Spawning from looking into `get_users_in_room` while investigating https://github.com/matrix-org/synapse/issues/13942#issuecomment-1262787050.

See https://github.com/matrix-org/synapse/pull/13575#discussion_r953023755 for the original exploration around finding `get_users_in_room` mis-uses.

Related to the following PRs where we also cleaned up some `get_users_in_room` mis-uses:

 - https://github.com/matrix-org/synapse/pull/13605
 - https://github.com/matrix-org/synapse/pull/13608
 - https://github.com/matrix-org/synapse/pull/13606
 - https://github.com/matrix-org/synapse/pull/13958
2022-09-30 20:10:50 -05:00
Eric Eastwood
ad4c14e4b0 Clarifications in user directory for users who share rooms tracking (#13966)
Spawned while working on [`get_users_in_room` mis-uses](https://github.com/matrix-org/synapse/pull/13958#discussion_r984074897) and thinking we could use `get_local_users_in_room` here but we can't.

At first glance, it seemed like this was only using local users, given all of the `is_mine_id(user_id)` checks, but it does actually use remote users as well. This just makes what the code does a little clearer and mentions remote users explicitly, so that it will be more obvious in the future.
2022-09-30 14:40:18 -05:00
Erik Johnston
535f8c8f7d Skip filtering during push if there are no push actions (#13992) 2022-09-30 17:40:33 +01:00
David Robertson
285d72556b Update mypy and mypy-zope, attempt 3 (#13993)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-09-30 17:36:28 +01:00
David Robertson
8e52cb0bce Revert "Update mypy and mypy-zope (#13925)"
This reverts commit 6d543d6d9f.
2022-09-30 16:37:48 +01:00
David Robertson
6d543d6d9f Update mypy and mypy-zope (#13925)
* Update mypy and mypy-zope

* Unignore assigning to LogRecord attributes

Presumably https://github.com/python/typeshed/pull/8064 makes this ok

Cherry-picked from #13521

* Remove unused ignores due to mypy ParamSpec fixes

https://github.com/python/mypy/pull/12668

Cherry-picked from #13521

* Remove additional unused ignores

* Fix new mypy complaints related to `assertGreater`

Presumably due to https://github.com/python/typeshed/pull/8077

* Changelog

* Reword changelog

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-09-30 16:34:47 +01:00
Richard van der Hoff
b2aadd81a8 Enable dependabot updates (#13976)
Fixes https://github.com/matrix-org/synapse/issues/11828
2022-09-30 14:28:34 +01:00
Erik Johnston
285b9e9b6c Speed up calculating push actions in large rooms (#13973)
We move the expensive check of visibility to after calculating push actions, avoiding the expensive check for users who won't get pushed anyway.

I think this should have a big impact on rooms with large numbers of local users that have pushed disabled.
2022-09-30 14:27:00 +01:00
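A toy sketch of that reordering, with placeholder functions standing in for the real checks:

```
from typing import Dict, List

def calculate_push_actions(user_id: str) -> List[str]:
    """Placeholder for the (cheap) push-rule evaluation."""
    return ["notify"] if user_id == "@alice:example.org" else []

def is_event_visible_to(user_id: str) -> bool:
    """Placeholder for the (expensive) history-visibility check."""
    return True

users = ["@alice:example.org", "@bob:example.org", "@carol:example.org"]

actions: Dict[str, List[str]] = {}
for user in users:
    push = calculate_push_actions(user)     # cheap step, run for everyone
    if push and is_event_visible_to(user):  # expensive step only when needed
        actions[user] = push

print(actions)  # only @alice triggered the visibility check
```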
David Robertson
5507bfa769 Discourage automatic replies to Synapse's emails (#13957)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-09-30 13:23:37 +00:00
Dirk Klimpel
4490697b98 Add some cross references to worker documentation (#13974)
Signed-off-by: Dirk Klimpel <dirk@klimpel.org>
2022-09-30 13:41:32 +01:00
Erik Johnston
3dfc4a08dc Fix performance regression in get_users_in_room (#13972)
Fixes #13942. Introduced in #13575.

Basically, let's only get the ordered set of hosts out of the DB if we need an ordered set of hosts. Since we split the function up, the caching won't be as good, but I think it will still be fine as e.g. multiple backfill requests for the same room will hit the cache.
2022-09-30 13:15:32 +01:00
David Robertson
e8f30a76ca Fix overflows in /messages backfill calculation (#13936)
* Reproduce bug
* Compute `least_function` first
* Substitute `least_function` with an f-string
* Bugfix: avoid overflow

Co-authored-by: Eric Eastwood <erice@element.io>
2022-09-30 11:54:53 +01:00
Brendan Abolivier
1cc2ca81ba Add missing version information in the ModuleApi (#13947) 2022-09-30 12:27:21 +02:00
David Robertson
6f0c3e669d Don't require setuptools_rust at runtime (#13952) 2022-09-29 20:16:08 +00:00
David Robertson
15754d720f Update UPSERT comment now that native upserts are the default (#13924) 2022-09-29 19:10:47 +01:00
Erik Johnston
ebd9e2dac6 Implement push rule evaluation in Rust. (#13838) 2022-09-29 16:12:09 +01:00
Nick Mills-Barrett
a466164647 Optimise get_rooms_for_user (drop with_stream_ordering) (#13787) 2022-09-29 13:55:12 +00:00
Brendan Abolivier
be76cd8200 Allow admins to require a manual approval process before new accounts can be used (using MSC3866) (#13556) 2022-09-29 15:23:24 +02:00
Patrick Cloke
8625ad8099 Explicit cast to enforce type hints. (#13939) 2022-09-29 07:22:41 -04:00
Hugh Nimmo-Smith
e5fdf16d46 Expose MSC3882 only be under an unstable endpoint. (#13868) 2022-09-29 07:22:27 -04:00
Patrick Cloke
568016929f Clarify that a method returns only unthreaded receipts. (#13937)
By renaming it and updating the docstring.

Additionally, refactors a method which is used only by tests.
2022-09-29 07:07:31 -04:00
Nicolas Werner
99a7e7e023 Always send default and rule_id to clients (#13904) 2022-09-29 11:57:00 +01:00
reivilibre
73ecff7e9e Improve backfill robustness by trying more servers. (#13890)
Co-authored-by: Eric Eastwood <erice@element.io>
2022-09-29 10:00:02 +00:00
Erik Johnston
5f659d4a88 Handle local device list updates during partial join (#13934) 2022-09-28 23:22:35 +01:00
Eric Eastwood
df8b91ed2b Limit and filter the number of backfill points to get from the database (#13879)
There is no need to grab thousands of backfill points when we only need 5 to make the `/backfill` request with. We need to grab a few extra in case the first few aren't visible in the history.

Previously, we grabbed thousands of backfill points from the database, then sorted and filtered them in the app. Fetching the 4.6k backfill points for `#matrix:matrix.org` from the database takes ~50ms - ~570ms so it's not like this saves a lot of time 🤷. But it might save us more time now that `get_backfill_points_in_room`/`get_insertion_event_backward_extremities_in_room` are more complicated after https://github.com/matrix-org/synapse/pull/13635 

This PR moves the filtering and limiting to the SQL query so we just have less data to work with in the first place.

Part of https://github.com/matrix-org/synapse/issues/13356
2022-09-28 15:26:16 -05:00
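A sketch of what pushing the filter into SQL looks like; the table and column names are illustrative, not Synapse's exact schema:

```
# Fetch only the deepest few candidate backfill points at or below the
# current depth, rather than pulling every extremity and filtering in Python.
SELECT_BACKFILL_POINTS = """
    SELECT event_id, depth
    FROM backward_extremities
    WHERE room_id = ? AND depth <= ?
    ORDER BY depth DESC
    LIMIT ?
"""

# e.g. txn.execute(SELECT_BACKFILL_POINTS, (room_id, current_depth, 5))
```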
Patrick Cloke
d768c50c0e Add upgrade notes for changes to receipts replication. (#13932) 2022-09-28 11:33:31 -04:00
Patrick Cloke
1386ce4735 Revert "Stop returning an unused column when handling new receipts. (#13933)" (#13935)
This reverts commit 7766bd5b35 (#13933).

The unused column is actually used, but much further down in the function.
2022-09-28 11:01:41 -04:00
Patrick Cloke
7766bd5b35 Stop returning an unused column when handling new receipts. (#13933) 2022-09-28 10:58:25 -04:00
Andrew Morgan
25c0e69392 Document that the 'auto_join_rooms' option works with Spaces (#13931) 2022-09-28 14:13:19 +00:00
Andrew Morgan
220b21936e Correct default number of bcrypt hash rounds in .1 man page (#13930) 2022-09-28 14:45:13 +01:00
Erik Johnston
4b17a5ace8 Handle remote device list updates during partial join (#13913)
c.f. #12993 (comment), point 3

This stores all device list updates that we receive while partial joins are ongoing, and processes them once we have the full state.

Note: We don't actually process the device lists in the same way as if we weren't partially joined. Instead of updating the remote device list cache, we simply notify local users that a change in the remote user's devices has happened. I think this is safe: if the local user requests the keys for the remote user and we don't have them, we'll simply fetch them as normal.
2022-09-28 13:42:43 +00:00
Kateřina Churanová
6caa303083 fix: Push notifications for invite over federation (#13719) 2022-09-28 12:31:53 +00:00
Ashish Kumar
5c429b86b4 Add instruction for running unit tests in parallel (#13928) 2022-09-28 11:14:35 +00:00
Shay
8ab16a92ed Persist CreateRoom events to DB in a batch (#13800) 2022-09-28 10:11:48 +00:00
Shay
a2cf66a94d Preparatory work for batching events to send (#13487)
This PR begins work on batching up events during the creation of a room. The PR splits out the creation and sending/persisting of the events. The first three events in the creation of the room (creating the room, joining the creator to the room, and the power levels event) are sent sequentially, while the subsequent events are created and collected to be sent at the end of the function. This is currently done by appending them to a list and then iterating over the list to send; the next step (after this PR) would be to send and persist the collected events as a batch.
2022-09-28 10:39:03 +01:00
Eric Eastwood
29269d9d3f Fix have_seen_event cache not being invalidated (#13863)
Fix https://github.com/matrix-org/synapse/issues/13856
Fix https://github.com/matrix-org/synapse/issues/13865

> Discovered while trying to make Synapse fast enough for [this MSC2716 test for importing many batches](https://github.com/matrix-org/complement/pull/214#discussion_r741678240). As an example, disabling the `have_seen_event` cache saves 10 seconds for each `/messages` request in that MSC2716 Complement test because we're not making as many federation requests for `/state` (speeding up `have_seen_event` itself is related to https://github.com/matrix-org/synapse/issues/13625) 
> 
> But this will also make `/messages` faster in general so we can include it in the [faster `/messages` milestone](https://github.com/matrix-org/synapse/milestone/11).
> 
> *-- https://github.com/matrix-org/synapse/issues/13856*


### The problem

`_invalidate_caches_for_event` doesn't run in monolith mode, which means we never even tried to clear the `have_seen_event` and other caches. And even in worker mode, it only runs on the workers, not the master (AFAICT).

Additionally there was bug with the key being wrong so `_invalidate_caches_for_event` never invalidates the `have_seen_event` cache even when it does run.

Because we were using the `@cachedList` wrong, it was putting items in the cache under keys like `((room_id, event_id),)` with a `set` in a `set` (ex. `(('!TnCIJPKzdQdUlIyXdQ:test', '$Iu0eqEBN7qcyF1S9B3oNB3I91v2o5YOgRNPwi_78s-k'),)`) and we were trying to invalidate with just `(room_id, event_id)`, which did nothing.
2022-09-27 15:55:43 -05:00
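A toy reproduction of the key mismatch described here:

```
# Entries were stored under ((room_id, event_id),) but invalidated with
# (room_id, event_id), so the invalidation never matched anything.
cache = {}

room_id = "!TnCIJPKzdQdUlIyXdQ:test"
event_id = "$Iu0eqEBN7qcyF1S9B3oNB3I91v2o5YOgRNPwi_78s-k"

# What the @cachedList wrapper effectively stored:
cache[((room_id, event_id),)] = True

# What the invalidation tried to remove (key not found, so this is a no-op):
cache.pop((room_id, event_id), None)

print(((room_id, event_id),) in cache)  # True: the stale entry survives
```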
Eric Eastwood
35e9d6a616 Emphasize the right reasons to use (room_id, event_id) in a schema (#13915)
* Emphasize the right reasons to use (room_id, event_id)

Follow-up to:
 - https://github.com/matrix-org/synapse/pull/13701
 - https://github.com/matrix-org/synapse/pull/13771
2022-09-27 19:43:16 +00:00
David Robertson
f5aaa55e27 Add new columns tracking when we partial-joined (#13892) 2022-09-27 17:26:35 +01:00
Patrick Cloke
87fe9db467 Support the stable dir parameter for /relations. (#13920)
Since MSC3715 has passed FCP, the stable parameter can be used.

This currently falls back to the unstable parameter if the stable
parameter is not provided (and MSC3715 support is enabled in
the configuration).
2022-09-27 10:47:34 -04:00
Erik Johnston
299b00d968 Prioritize outbound to-device over device list updates (#13922)
Otherwise device list changes for large accounts can temporarily delay to-device messages.
2022-09-27 15:17:41 +01:00
David Robertson
ac1b0d03a5 Merge branch 'master' into develop 2022-09-27 15:05:16 +01:00
David Robertson
9bd442e202 Update upgrade notes (#13923)
* Update upgrade notes

* Synapse Synapse Synapse Synapse

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2022-09-27 15:04:24 +01:00
Mathieu Velten
42dd992bb7 Dockerfile for tests: align comments with current behavior (#13867)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2022-09-27 15:55:43 +02:00
Quentin Gliech
50c92f3a69 Carry IdP Session IDs through user-mapping sessions. (#13839)
Since #11482, we're saving session IDs from upstream IdPs, but we've been losing them when the user goes through a user mapping session on account registration.
2022-09-27 14:38:14 +01:00
David Robertson
a95ce6dd08 Merge branch 'master' into develop 2022-09-27 14:32:23 +01:00
Erik Johnston
e8318a4333 Handle the case of remote users leaving a partial join room for device lists (#13885) 2022-09-27 13:01:08 +01:00
Sean Quah
85e161631a Faster room joins: Fix spurious error when joining a room (#13872)
During a `lazy_load_members` `/sync`, we look through auth events in
rooms with partial state to find prior membership events. When such a
membership is not found, an error is logged.

Since the first join event for a user never has a prior membership event
to cite, the error would always be logged when one appeared in the room
timeline.

Avoid logging errors for such events.

Introduced in #13477.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-09-27 11:17:23 +01:00
Richard van der Hoff
d6b85a2a7d Complement image: propagate SIGTERM to all workers (#13914)
This should mean that logs from worker processes are flushed before shutdown.

When a test completes, Complement stops the docker container, which means that
synapse will receive a SIGTERM. Currently, the `complement_fork_starter` exits
immediately (without notifying the worker processes), which means that the
workers never get a chance to flush their logs before the whole container is
vaped. We can fix this by propagating the SIGTERM to the children.
2022-09-26 23:07:02 +01:00
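A minimal sketch of such propagation on POSIX, assuming the parent tracks the PIDs of the workers it forked:

```
import os
import signal
import sys
from typing import List

worker_pids: List[int] = []  # populated as workers are forked

def handle_sigterm(signum: int, frame) -> None:
    # Forward the signal so each worker can flush its logs and exit cleanly.
    for pid in worker_pids:
        try:
            os.kill(pid, signal.SIGTERM)
        except ProcessLookupError:
            pass  # worker already gone
    # Wait for the children before exiting ourselves.
    for pid in worker_pids:
        os.waitpid(pid, 0)
    sys.exit(0)

signal.signal(signal.SIGTERM, handle_sigterm)
```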
Patrick Cloke
2fae1a3f78 Improve tests for get_unread_push_actions_for_user_in_range_*. (#13893)
* Adds a docstring.
* Reduces a small amount of duplicated code.
* Improves tests.
2022-09-26 18:28:12 +00:00
Andrew Morgan
58ab96747c Update the manpage documentation for the hash_password script (#13911) 2022-09-26 18:18:41 +00:00
David Robertson
0a38c7ec6d Snapshot schema 72 (#13873)
Including another batch of fixes to the schema dump script
2022-09-26 18:28:32 +01:00
Mathieu Velten
41461fd4d6 typing: check origin server of typing event against room's servers (#13830)
This is also using the partial state approximation if needed so we do
not block here during a fast join.

Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2022-09-26 17:33:32 +02:00
Nick Mills-Barrett
6b4593a80f Simplify cache invalidation after event persist txn (#13796)
This moves all the invalidations into a single place and de-duplicates
the code involved in invalidating caches for a given event by using
the base class method.
2022-09-26 16:26:35 +01:00
enterprisey
f34b0bc262 Update NixOS module URL (#13818)
* Update NixOS module URL

* Create 13818.doc
2022-09-26 15:26:59 +01:00
Erik Johnston
c5defa4cba Correctly add Cargo.lock to sdist (#13909) 2022-09-26 13:58:10 +00:00
David Robertson
dcdd50e458 Fix mypy errors with latest canonicaljson (#13905)
* Lockfile: update canonicaljson 1.6.0 -> 1.6.3

* Fix mypy errors with latest canonicaljson

The change to `_encode_json_bytes` definition wasn't sufficient:

```
synapse/http/server.py:751: error: Incompatible types in assignment (expression has type "Callable[[Arg(object, 'json_object')], bytes]", variable has type "Callable[[Arg(object, 'data')], bytes]")  [assignment]
```

Which I think is mypy warning us that the two functions accept different
sets of kwargs. Fair enough!

* Changelog
2022-09-26 13:30:00 +01:00
Eric Eastwood
ac1a31740b Only try to backfill event if we haven't tried before recently (#13635)
Only try to backfill an event if we haven't tried recently (exponential backoff). No need to keep retrying the same backfill point that fails over and over.

Fix https://github.com/matrix-org/synapse/issues/13622
Fix https://github.com/matrix-org/synapse/issues/8451

Follow-up to https://github.com/matrix-org/synapse/pull/13589

Part of https://github.com/matrix-org/synapse/issues/13356
2022-09-23 14:01:29 -05:00
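A sketch of such a backoff schedule; the base delay and cap are illustrative, not Synapse's actual values:

```
BASE_DELAY_MS = 1000
MAX_DELAY_MS = 24 * 60 * 60 * 1000  # cap the wait at one day

def next_retry_delay_ms(num_failed_attempts: int) -> int:
    """Delay before retrying a backfill point that has failed N times:
    roughly doubles with each failure, up to the cap."""
    return min(BASE_DELAY_MS * (2 ** num_failed_attempts), MAX_DELAY_MS)

for attempts in range(5):
    print(attempts, next_retry_delay_ms(attempts))
```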
Sean Quah
f49f73c0da Faster room joins: Avoid blocking /keys/changes (#13888)
Part of the work for #12993.

Once #12993 is fully resolved, we expect `/keys/changes` to behave
sensibly when joined to a room with partial state.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-09-23 17:55:15 +01:00
Eric Eastwood
db868db594 Fix access token leak to logs from proxyagent (#13855)
This can happen specifically with an application service `/transactions/10722?access_token=leaked` request

Fix https://github.com/matrix-org/synapse/issues/13010

---

Saw an example leak in https://github.com/matrix-org/synapse/issues/13423#issuecomment-1205348482

```
2022-08-04 14:47:57,925 - synapse.http.client - 401 - DEBUG - as-sender-signal-1 - Sending request PUT http://localhost:29328/transactions/10722?access_token=<redacted>
2022-08-04 14:47:57,926 - synapse.http.proxyagent - 223 - DEBUG - as-sender-signal-1 - Requesting b'http://localhost:29328/transactions/10722?access_token=leaked' via <HostnameEndpoint localhost:29328>
```
2022-09-23 11:49:39 -05:00
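A simplified stand-in for the kind of redaction this fix implies (the regex is illustrative, not Synapse's actual helper):

```
import re

def redact_uri(uri: str) -> str:
    """Strip any access_token query value from a URI before logging it."""
    return re.sub(r"(access_token=)[^&]*", r"\1<redacted>", uri)

print(redact_uri("http://localhost:29328/transactions/10722?access_token=leaked"))
# -> http://localhost:29328/transactions/10722?access_token=<redacted>
```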
Patrick Cloke
e3512a7719 Update Cargo.lock file. (#13889) 2022-09-23 11:59:39 -04:00
Patrick Cloke
efd108b45d Accept & store thread IDs for receipts (implement MSC3771). (#13782)
Updates the `/receipts` endpoint and receipt EDU handler to parse a
`thread_id` from the body and insert it in the database.
2022-09-23 14:33:28 +00:00
Sean Quah
03c2bfb7f8 Send device list updates out to servers in partially joined rooms (#13874)
Use the provided list of servers in the room from the `/send_join`
response, since we will not know which users are in the room.  This
isn't sufficient to ensure that all remote servers receive the right
device list updates, since the `/send_join` response may be inaccurate
or we may calculate the membership state of new users in the room
incorrectly.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-09-23 13:44:03 +01:00
reivilibre
c06b2b7142 Faster Remote Room Joins: tell remote homeservers that we are unable to authorise them if they query a room which has partial state on our server. (#13823) 2022-09-23 11:47:16 +01:00
reivilibre
ac7e5683d6 Add comments to the Prometheus recording rules to make it clear which set of rules you need for Grafana or Prometheus Console. (#13876) 2022-09-23 11:46:45 +01:00
Erik Johnston
c9316f9f76 Merge branch 'release-v1.68' into develop 2022-09-23 10:43:06 +01:00
David Robertson
f7c89c44c5 Raise issue if complement fails with latest deps (#13859) 2022-09-22 17:33:37 +01:00
Patrick Cloke
8c3dcdf1b9 Fix the cross-link from register admin API to config docs. (#13870) 2022-09-22 09:11:50 -04:00
Patrick Cloke
b7272b73aa Properly paginate forward in the /relations API. (#13840)
This fixes a bug where the `/relations` API with `dir=f` would
skip the first item of each page (except the first page), causing
incomplete data to be returned to the client.
2022-09-22 12:47:49 +00:00
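A toy model of this kind of off-by-one: if the pagination token is advanced one position too far, the first item of every subsequent page is dropped:

```
items = list(range(10))

def paginate_buggy(from_idx: int, limit: int):
    page = items[from_idx : from_idx + limit]
    # Bug: the token already points past the page, but we skip one more.
    return page, from_idx + limit + 1

def paginate_fixed(from_idx: int, limit: int):
    page = items[from_idx : from_idx + limit]
    return page, from_idx + limit

tok, pages = 0, []
while True:
    page, tok = paginate_buggy(tok, 3)
    if not page:
        break
    pages.append(page)

print(pages)  # [[0, 1, 2], [4, 5, 6], [8, 9]] -- items 3 and 7 were skipped
```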
David Robertson
1a1abdda42 Last batch of Pydantic for synapse/rest/client/account.py (#13832)
* Validation for `/add_threepid/msisdn/submit_token`

* Don't validate deprecated endpoint

* Changelog
2022-09-21 22:23:44 +01:00
Brendan Abolivier
efabf44c76 Add version flag for MSC3881 (#13860) 2022-09-21 18:18:44 +02:00
Brendan Abolivier
ccca14140a Track device IDs for pushers (#13831)
Second half of the MSC3881 implementation
2022-09-21 15:31:53 +00:00
Hugh Nimmo-Smith
0fd2f2d460 Implementation of MSC3882 login token request (#13722) 2022-09-21 15:12:29 +00:00
villepeh
269eddad6f Add worker_main_http_uri to the contrib bash script (#13772)
* Add worker_main_http_uri, replace >> with >

Co-authored-by: Dirk Klimpel <5740567+dklimpel@users.noreply.github.com>
Co-authored-by: Erik Johnston <erik@matrix.org>
2022-09-21 15:58:46 +01:00
Brendan Abolivier
8ae42ab8fa Support enabling/disabling pushers (from MSC3881) (#13799)
Partial implementation of MSC3881
2022-09-21 14:39:01 +00:00
Mathieu Velten
6bd8763804 Add cache invalidation across workers to module API (#13667)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
2022-09-21 15:32:01 +02:00
Peter Scheu
16e1a9d9a7 Correct documentation for map_user_attributes of OpenID Mapping Providers (#13836)
Co-authored-by: David Robertson <davidr@element.io>
2022-09-21 13:08:16 +00:00
Quentin Gliech
e0804ef898 Improve the synapse.api.auth.Auth mock used in unit tests. (#13809)
To return the proper type (`Requester`) instead of a `dict`.
2022-09-21 12:40:34 +00:00
reivilibre
a35842caec Fix the release script not publishing binary wheels. (#13850) 2022-09-21 09:43:08 +00:00
Olivier Wilkinson (reivilibre)
2b522cceb6 Merge branch 'release-v1.68' into develop 2022-09-20 14:54:05 +01:00
Quentin Gliech
85fc7ea1a1 Remove the complete_sso_login method from the Module API which was deprecated in Synapse 1.13.0. (#13843)
Signed-off-by: Quentin Gliech <quenting@element.io>
2022-09-20 15:18:07 +02:00
David Robertson
fff9b955fa Generate separate snapshots for logical databases (#13792)
* Generate separate snapshots for sqlite, postgres and common
* Cleanup postgres dbs in the TRAP
* Say which logical DB we're applying updates to
* Run background updates on the state DB
* Add new option for accepting a SCHEMA_NUMBER
2022-09-20 14:14:12 +01:00
Erik Johnston
42d261c32f Port the push rule classes to Rust. (#13768) 2022-09-20 12:10:31 +01:00
233 changed files with 12242 additions and 3358 deletions

.github/dependabot.yml (new file, 17 lines)

@@ -0,0 +1,17 @@
version: 2
updates:
  # "pip" is the correct setting for poetry, per https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#package-ecosystem
  - package-ecosystem: "pip"
    directory: "/"
    schedule:
      interval: "weekly"

  - package-ecosystem: "docker"
    directory: "/docker"
    schedule:
      interval: "weekly"

  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"


@@ -0,0 +1,46 @@
name: Write changelog for dependabot PR
on:
pull_request:
types:
- opened
- reopened # For debugging!
permissions:
# Needed to be able to push the commit. See
# https://docs.github.com/en/code-security/dependabot/working-with-dependabot/automating-dependabot-with-github-actions#enable-auto-merge-on-a-pull-request
# for a similar example
contents: write
jobs:
add-changelog:
runs-on: 'ubuntu-latest'
if: ${{ github.actor == 'dependabot[bot]' }}
steps:
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.ref }}
- name: Write, commit and push changelog
run: |
echo "${{ github.event.pull_request.title }}." > "changelog.d/${{ github.event.pull_request.number }}".misc
git add changelog.d
git config user.email "github-actions[bot]@users.noreply.github.com"
git config user.name "GitHub Actions"
git commit -m "Changelog"
git push
shell: bash
# The `git push` above does not trigger CI on the dependabot PR.
#
# By default, workflows can't trigger other workflows when they're just using the
# default `GITHUB_TOKEN` access token. (This is intended to stop you from writing
# recursive workflow loops by accident, because that'll get very expensive very
# quickly.) Instead, you have to manually call out to another workflow, or else
# make your changes (i.e. the `git push` above) using a personal access token.
# See
# https://docs.github.com/en/actions/using-workflows/triggering-a-workflow#triggering-a-workflow-from-a-workflow
#
# I have tried and failed to find a way to trigger CI on the "merge ref" of the PR.
# See git commit history for previous attempts. If anyone desperately wants to try
# again in the future, make a matrix-bot account and use its access token to git push.
# THIS WORKFLOW HAS WRITE PERMISSIONS---do not add other jobs here unless they
# are sufficiently locked down to dependabot only as above.


@@ -17,19 +17,19 @@ jobs:
steps:
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v1
uses: docker/setup-qemu-action@v2
with:
platforms: arm64
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
uses: docker/setup-buildx-action@v2
- name: Inspect builder
run: docker buildx inspect
- name: Log in to DockerHub
uses: docker/login-action@v1
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
@@ -48,7 +48,7 @@ jobs:
type=pep440,pattern={{raw}}
- name: Build and push all platforms
uses: docker/build-push-action@v2
uses: docker/build-push-action@v3
with:
push: true
labels: "gitsha1=${{ github.sha }}"


@@ -17,7 +17,7 @@ jobs:
name: GitHub Pages
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Setup mdbook
uses: peaceiris/actions-mdbook@4b5ef36b314c2599664ca107bb8c02412548d79d # v1.1.14


@@ -25,7 +25,7 @@ jobs:
mypy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
@@ -59,7 +59,7 @@ jobs:
postgres-version: "14"
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
@@ -76,7 +76,7 @@ jobs:
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
postgres:${{ matrix.postgres-version }}
- uses: actions/setup-python@v2
- uses: actions/setup-python@v4
with:
python-version: "3.x"
- run: pip install .[all,test]
@@ -133,7 +133,7 @@ jobs:
BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
@@ -155,7 +155,7 @@ jobs:
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
@@ -182,8 +182,8 @@ jobs:
database: Postgres
steps:
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
- name: Run actions/checkout@v3 for synapse
uses: actions/checkout@v3
with:
path: synapse
@@ -201,15 +201,16 @@ jobs:
open-issue:
if: "failure() && github.event_name != 'push' && github.event_name != 'pull_request'"
needs:
# TODO: should mypy be included here? It feels more brittle than the other two.
# TODO: should mypy be included here? It feels more brittle than the others.
- mypy
- trial
- sytest
- complement
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: JasonEtco/create-an-issue@5d9504915f79f9cc6d791934b8ef34f2353dd74d # v2.5.0, 2020-12-06
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}


@@ -11,6 +11,7 @@ on:
# we do the full build on tags.
tags: ["v*"]
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -24,8 +25,8 @@ jobs:
name: "Calculate list of debian distros"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- id: set-distros
run: |
# if we're running from a tag, get the full list of distros; otherwise just use debian:sid
@@ -49,18 +50,18 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@v2
uses: actions/checkout@v3
with:
path: src
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
uses: docker/setup-buildx-action@v2
with:
install: true
- name: Set up docker layer caching
uses: actions/cache@v2
uses: actions/cache@v3
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
@@ -68,7 +69,7 @@ jobs:
${{ runner.os }}-buildx-
- name: Set up python
uses: actions/setup-python@v2
uses: actions/setup-python@v4
- name: Build the packages
# see https://github.com/docker/build-push-action/issues/252
@@ -84,7 +85,7 @@ jobs:
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
- name: Upload debs as artifacts
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
with:
name: debs
path: debs/*
@@ -106,7 +107,11 @@ jobs:
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v3
- uses: actions/setup-python@v4
with:
# setup-python@v4 doesn't impose a default python version. Need to use 3.x
# here, because `python` on osx points to Python 2.7.
python-version: "3.x"
- name: Install cibuildwheel
run: python -m pip install cibuildwheel==2.9.0 poetry==1.2.0
@@ -145,7 +150,7 @@ jobs:
- name: Build sdist
run: python -m build --sdist
- uses: actions/upload-artifact@v2
- uses: actions/upload-artifact@v3
with:
name: Sdist
path: dist/*.tar.gz
@@ -162,7 +167,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Download all workflow run artifacts
uses: actions/download-artifact@v2
uses: actions/download-artifact@v3
- name: Build a tarball for the debs
run: tar -cvJf debs.tar.xz debs
- name: Attach to release


@@ -4,6 +4,7 @@ on:
push:
branches: ["develop", "release-*"]
pull_request:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
@@ -30,8 +31,8 @@ jobs:
check-sampleconfig:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- uses: matrix-org/setup-python-poetry@v1
with:
extras: "all"
@@ -41,8 +42,8 @@ jobs:
check-schema-delta:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- run: "pip install 'click==8.1.1' 'GitPython>=3.1.20'"
- run: scripts-dev/check_schema_delta.py --force-colors
@@ -54,19 +55,19 @@ jobs:
lint-crlf:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Check line endings
run: scripts-dev/check_line_terminators.sh
lint-newsfile:
if: ${{ github.base_ref == 'develop' || contains(github.base_ref, 'release-') }}
if: ${{ (github.base_ref == 'develop' || contains(github.base_ref, 'release-')) && github.actor != 'dependabot[bot]' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: actions/setup-python@v2
- uses: actions/setup-python@v4
- run: "pip install 'towncrier>=18.6.0rc1'"
- run: scripts-dev/check-newsfragment.sh
env:
@@ -75,7 +76,7 @@ jobs:
lint-pydantic:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
with:
ref: ${{ github.event.pull_request.head.sha }}
- uses: matrix-org/setup-python-poetry@v1
@@ -89,7 +90,7 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
@@ -107,7 +108,7 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
@@ -140,8 +141,8 @@ jobs:
needs: linting-done
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
- id: get-matrix
run: .ci/scripts/calculate_jobs.py
outputs:
@@ -157,7 +158,7 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.trial_test_matrix) }}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.job.postgres-version }}
if: ${{ matrix.job.postgres-version }}
@@ -199,7 +200,7 @@ jobs:
needs: linting-done
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
@@ -270,7 +271,7 @@ jobs:
extras: ["all"]
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
# Install libs necessary for PyPy to build binary wheels for dependencies
- run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
- uses: matrix-org/setup-python-poetry@v1
@@ -313,7 +314,7 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.sytest_test_matrix) }}
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
@@ -331,7 +332,7 @@ jobs:
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.job.*, ', ') }})
@@ -361,7 +362,7 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
with:
@@ -402,7 +403,7 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
with:
@@ -444,8 +445,8 @@ jobs:
database: Postgres
steps:
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
- name: Run actions/checkout@v3 for synapse
uses: actions/checkout@v3
with:
path: synapse
@@ -473,7 +474,7 @@ jobs:
- changes
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1


@@ -15,7 +15,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
@@ -40,7 +40,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
@@ -81,7 +81,7 @@ jobs:
- ${{ github.workspace }}:/src
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- name: Install Rust
uses: actions-rs/toolchain@v1
@@ -112,7 +112,7 @@ jobs:
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v2
uses: actions/upload-artifact@v3
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
@@ -138,8 +138,8 @@ jobs:
database: Postgres
steps:
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
- name: Run actions/checkout@v3 for synapse
uses: actions/checkout@v3
with:
path: synapse
@@ -177,7 +177,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v3
- uses: JasonEtco/create-an-issue@5d9504915f79f9cc6d791934b8ef34f2353dd74d # v2.5.0, 2020-12-06
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

.rustfmt.toml Normal file

@@ -0,0 +1 @@
group_imports = "StdExternalCrate"


@@ -1,3 +1,124 @@
Synapse 1.69.0rc2 (2022-10-06)
==============================
Please note that legacy Prometheus metric names are now deprecated and will be removed in Synapse 1.73.0.
Server administrators should update their dashboards and alerting rules to avoid using the deprecated metric names.
See the [upgrade notes](https://matrix-org.github.io/synapse/v1.69/upgrade.html#upgrading-to-v1690) for more details.
Deprecations and Removals
-------------------------
- Deprecate the `generate_short_term_login_token` method in favor of an async `create_login_token` method in the Module API. ([\#13842](https://github.com/matrix-org/synapse/issues/13842))
Internal Changes
----------------
- Ensure Synapse v1.69 works with upcoming database changes in v1.70. ([\#14045](https://github.com/matrix-org/synapse/issues/14045))
- Fix a bug introduced in Synapse v1.68.0 where messages could not be sent in rooms with non-integer `notifications` power level. ([\#14073](https://github.com/matrix-org/synapse/issues/14073))
- Temporarily pin build-system requirements to workaround an incompatibility with poetry-core 1.3.0. This will be reverted before the v1.69.0 release proper, see [\#14079](https://github.com/matrix-org/synapse/issues/14079). ([\#14080](https://github.com/matrix-org/synapse/issues/14080))
Synapse 1.69.0rc1 (2022-10-04)
==============================
Features
--------
- Allow application services to set the `origin_server_ts` of a state event by providing the query parameter `ts` in [`PUT /_matrix/client/r0/rooms/{roomId}/state/{eventType}/{stateKey}`](https://spec.matrix.org/v1.4/client-server-api/#put_matrixclientv3roomsroomidstateeventtypestatekey), per [MSC3316](https://github.com/matrix-org/matrix-doc/pull/3316). Contributed by @lukasdenk. ([\#11866](https://github.com/matrix-org/synapse/issues/11866))
- Allow server admins to require a manual approval process before new accounts can be used (using [MSC3866](https://github.com/matrix-org/matrix-spec-proposals/pull/3866)). ([\#13556](https://github.com/matrix-org/synapse/issues/13556))
- Exponentially backoff from backfilling the same event over and over. ([\#13635](https://github.com/matrix-org/synapse/issues/13635), [\#13936](https://github.com/matrix-org/synapse/issues/13936))
- Add cache invalidation across workers to module API. ([\#13667](https://github.com/matrix-org/synapse/issues/13667), [\#13947](https://github.com/matrix-org/synapse/issues/13947))
- Experimental implementation of [MSC3882](https://github.com/matrix-org/matrix-spec-proposals/pull/3882) to allow an existing device/session to generate a login token for use on a new device/session. ([\#13722](https://github.com/matrix-org/synapse/issues/13722), [\#13868](https://github.com/matrix-org/synapse/issues/13868))
- Experimental support for thread-specific receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771)). ([\#13782](https://github.com/matrix-org/synapse/issues/13782), [\#13893](https://github.com/matrix-org/synapse/issues/13893), [\#13932](https://github.com/matrix-org/synapse/issues/13932), [\#13937](https://github.com/matrix-org/synapse/issues/13937), [\#13939](https://github.com/matrix-org/synapse/issues/13939))
- Add experimental support for [MSC3881: Remotely toggle push notifications for another client](https://github.com/matrix-org/matrix-spec-proposals/pull/3881). ([\#13799](https://github.com/matrix-org/synapse/issues/13799), [\#13831](https://github.com/matrix-org/synapse/issues/13831), [\#13860](https://github.com/matrix-org/synapse/issues/13860))
- Keep track of when an event pulled over federation fails its signature check so we can intelligently back off in the future. ([\#13815](https://github.com/matrix-org/synapse/issues/13815))
- Improve validation for the unspecced, internal-only `_matrix/client/unstable/add_threepid/msisdn/submit_token` endpoint. ([\#13832](https://github.com/matrix-org/synapse/issues/13832))
- Faster remote room joins: record _when_ we first partial-join to a room. ([\#13892](https://github.com/matrix-org/synapse/issues/13892))
- Support a `dir` parameter on the `/relations` endpoint per [MSC3715](https://github.com/matrix-org/matrix-doc/pull/3715). ([\#13920](https://github.com/matrix-org/synapse/issues/13920))
- Ask mail servers receiving emails from Synapse to not send automatic replies (e.g. out-of-office responses). ([\#13957](https://github.com/matrix-org/synapse/issues/13957))
Bugfixes
--------
- Send push notifications for invites received over federation. ([\#13719](https://github.com/matrix-org/synapse/issues/13719), [\#14014](https://github.com/matrix-org/synapse/issues/14014))
- Fix a long-standing bug where typing events would be accepted from remote servers not present in a room. Also fix a bug where incoming typing events would cause other incoming events to get stuck during a fast join. ([\#13830](https://github.com/matrix-org/synapse/issues/13830))
- Fix a bug introduced in Synapse v1.53.0 where the experimental implementation of [MSC3715](https://github.com/matrix-org/matrix-spec-proposals/pull/3715) would give incorrect results when paginating forward. ([\#13840](https://github.com/matrix-org/synapse/issues/13840))
- Fix access token leak to logs from proxy agent. ([\#13855](https://github.com/matrix-org/synapse/issues/13855))
- Fix `have_seen_event` cache not being invalidated after we persist an event, which caused inefficiencies such as extra `/state` federation calls. ([\#13863](https://github.com/matrix-org/synapse/issues/13863))
- Faster room joins: Fix a bug introduced in 1.66.0 where an error would be logged when syncing after joining a room. ([\#13872](https://github.com/matrix-org/synapse/issues/13872))
- Fix a bug introduced in 1.66.0 where some required fields in the pushrules sent to clients were not present anymore. Contributed by Nico. ([\#13904](https://github.com/matrix-org/synapse/issues/13904))
- Fix packaging to include `Cargo.lock` in `sdist`. ([\#13909](https://github.com/matrix-org/synapse/issues/13909))
- Fix a long-standing bug where device updates could cause delays sending out to-device messages over federation. ([\#13922](https://github.com/matrix-org/synapse/issues/13922))
- Fix a bug introduced in v1.68.0 where Synapse would require `setuptools_rust` at runtime, even though the package is only required at build time. ([\#13952](https://github.com/matrix-org/synapse/issues/13952))
- Fix a long-standing bug where `POST /_matrix/client/v3/keys/query` requests could result in excessively large SQL queries. ([\#13956](https://github.com/matrix-org/synapse/issues/13956))
- Fix a performance regression in the `get_users_in_room` database query. Introduced in v1.67.0. ([\#13972](https://github.com/matrix-org/synapse/issues/13972))
- Fix a bug introduced in v1.68.0 where the Rust extension wasn't built in `release` mode when using `poetry install`. ([\#14009](https://github.com/matrix-org/synapse/issues/14009))
- Do not return an unspecified `original_event` field when using the stable `/relations` endpoint. Introduced in Synapse v1.57.0. ([\#14025](https://github.com/matrix-org/synapse/issues/14025))
- Correctly handle a race with device lists when a remote user leaves during a partial join. ([\#13885](https://github.com/matrix-org/synapse/issues/13885))
- Correctly handle sending local device list updates to remote servers during a partial join. ([\#13934](https://github.com/matrix-org/synapse/issues/13934))
Improved Documentation
----------------------
- Add `worker_main_http_uri` for the worker generator bash script. ([\#13772](https://github.com/matrix-org/synapse/issues/13772))
- Update URL for the NixOS module for Synapse. ([\#13818](https://github.com/matrix-org/synapse/issues/13818))
- Fix a mistake in sso_mapping_providers.md: `map_user_attributes` is expected to return `display_name`, not `displayname`. ([\#13836](https://github.com/matrix-org/synapse/issues/13836))
- Fix a cross-link from the registration admin API to the `registration_shared_secret` configuration documentation. ([\#13870](https://github.com/matrix-org/synapse/issues/13870))
- Update the man page for the `hash_password` script to correct the default number of bcrypt rounds performed. ([\#13911](https://github.com/matrix-org/synapse/issues/13911), [\#13930](https://github.com/matrix-org/synapse/issues/13930))
- Emphasize the right reasons to use `(room_id, event_id)` in a database schema. ([\#13915](https://github.com/matrix-org/synapse/issues/13915))
- Add instruction to contributing guide for running unit tests in parallel. Contributed by @ashfame. ([\#13928](https://github.com/matrix-org/synapse/issues/13928))
- Clarify that the `auto_join_rooms` config option can also be used with Space aliases. ([\#13931](https://github.com/matrix-org/synapse/issues/13931))
- Add some cross references to worker documentation. ([\#13974](https://github.com/matrix-org/synapse/issues/13974))
- Linkify URLs in the configuration documentation. ([\#14003](https://github.com/matrix-org/synapse/issues/14003))
Deprecations and Removals
-------------------------
- Remove the `complete_sso_login` method from the Module API which was deprecated in Synapse 1.13.0. ([\#13843](https://github.com/matrix-org/synapse/issues/13843))
- Announce that legacy metric names are deprecated, will be turned off by default in Synapse v1.71.0 and removed altogether in Synapse v1.73.0. See the upgrade notes for more information. ([\#14024](https://github.com/matrix-org/synapse/issues/14024))
Internal Changes
----------------
- Speed up creation of DM rooms. ([\#13487](https://github.com/matrix-org/synapse/issues/13487), [\#13800](https://github.com/matrix-org/synapse/issues/13800))
- Port push rules to using Rust. ([\#13768](https://github.com/matrix-org/synapse/issues/13768), [\#13838](https://github.com/matrix-org/synapse/issues/13838), [\#13889](https://github.com/matrix-org/synapse/issues/13889))
- Optimise get rooms for user calls. Contributed by Nick @ Beeper (@fizzadar). ([\#13787](https://github.com/matrix-org/synapse/issues/13787))
- Update the script which makes full schema dumps. ([\#13792](https://github.com/matrix-org/synapse/issues/13792))
- Use shared methods for cache invalidation when persisting events, remove duplicate codepaths. Contributed by Nick @ Beeper (@fizzadar). ([\#13796](https://github.com/matrix-org/synapse/issues/13796))
- Improve the `synapse.api.auth.Auth` mock used in unit tests. ([\#13809](https://github.com/matrix-org/synapse/issues/13809))
- Faster Remote Room Joins: tell remote homeservers that we are unable to authorise them if they query a room which has partial state on our server. ([\#13823](https://github.com/matrix-org/synapse/issues/13823))
- Carry IdP Session IDs through user-mapping sessions. ([\#13839](https://github.com/matrix-org/synapse/issues/13839))
- Fix the release script not publishing binary wheels. ([\#13850](https://github.com/matrix-org/synapse/issues/13850))
- Raise issue if complement fails with latest deps. ([\#13859](https://github.com/matrix-org/synapse/issues/13859))
- Correct the comments in the complement dockerfile. ([\#13867](https://github.com/matrix-org/synapse/issues/13867))
- Create a new snapshot of the database schema. ([\#13873](https://github.com/matrix-org/synapse/issues/13873))
- Faster room joins: Send device list updates to most servers in rooms with partial state. ([\#13874](https://github.com/matrix-org/synapse/issues/13874), [\#14013](https://github.com/matrix-org/synapse/issues/14013))
- Add comments to the Prometheus recording rules to make it clear which set of rules you need for Grafana or Prometheus Console. ([\#13876](https://github.com/matrix-org/synapse/issues/13876))
- Only pull relevant backfill points from the database based on the current depth and limit (instead of all) every time we want to `/backfill`. ([\#13879](https://github.com/matrix-org/synapse/issues/13879))
- Faster room joins: Avoid waiting for full state when processing `/keys/changes` requests. ([\#13888](https://github.com/matrix-org/synapse/issues/13888))
- Improve backfill robustness by trying more servers when we get a `4xx` error back. ([\#13890](https://github.com/matrix-org/synapse/issues/13890))
- Fix mypy errors with canonicaljson 1.6.3. ([\#13905](https://github.com/matrix-org/synapse/issues/13905))
- Faster remote room joins: correctly handle remote device list updates during a partial join. ([\#13913](https://github.com/matrix-org/synapse/issues/13913))
- Complement image: propagate SIGTERM to all workers. ([\#13914](https://github.com/matrix-org/synapse/issues/13914))
- Update an inaccurate comment in Synapse's upsert database helper. ([\#13924](https://github.com/matrix-org/synapse/issues/13924))
- Update mypy (0.950 -> 0.981) and mypy-zope (0.3.7 -> 0.3.11). ([\#13925](https://github.com/matrix-org/synapse/issues/13925), [\#13993](https://github.com/matrix-org/synapse/issues/13993))
- Use dedicated `get_local_users_in_room(room_id)` function to find local users when calculating users to copy over during a room upgrade. ([\#13960](https://github.com/matrix-org/synapse/issues/13960))
- Refactor language in user directory `_track_user_joined_room` code to make it more clear that we use both local and remote users. ([\#13966](https://github.com/matrix-org/synapse/issues/13966))
- Revert catch-all exceptions being recorded as event pull attempt failures (only handle what we know about). ([\#13969](https://github.com/matrix-org/synapse/issues/13969))
- Speed up calculating push actions in large rooms. ([\#13973](https://github.com/matrix-org/synapse/issues/13973), [\#13992](https://github.com/matrix-org/synapse/issues/13992))
- Enable update notifications from GitHub's Dependabot. ([\#13976](https://github.com/matrix-org/synapse/issues/13976))
- Prototype a workflow to automatically add changelogs to dependabot PRs. ([\#13998](https://github.com/matrix-org/synapse/issues/13998), [\#14011](https://github.com/matrix-org/synapse/issues/14011), [\#14017](https://github.com/matrix-org/synapse/issues/14017), [\#14021](https://github.com/matrix-org/synapse/issues/14021), [\#14027](https://github.com/matrix-org/synapse/issues/14027))
- Fix type annotations to be compatible with new annotations in development versions of twisted. ([\#14012](https://github.com/matrix-org/synapse/issues/14012))
- Clear out stale entries in `event_push_actions_staging` table. ([\#14020](https://github.com/matrix-org/synapse/issues/14020))
- Bump versions of GitHub actions. ([\#13978](https://github.com/matrix-org/synapse/issues/13978), [\#13979](https://github.com/matrix-org/synapse/issues/13979), [\#13980](https://github.com/matrix-org/synapse/issues/13980), [\#13982](https://github.com/matrix-org/synapse/issues/13982), [\#14015](https://github.com/matrix-org/synapse/issues/14015), [\#14019](https://github.com/matrix-org/synapse/issues/14019), [\#14022](https://github.com/matrix-org/synapse/issues/14022), [\#14023](https://github.com/matrix-org/synapse/issues/14023))
Synapse 1.68.0 (2022-09-27)
===========================

Cargo.lock generated

@@ -2,6 +2,27 @@
# It is not intended for manual editing.
version = 3
[[package]]
name = "aho-corasick"
version = "0.7.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4f55bd91a0978cbfd91c457a164bab8b4001c833b7f323132c0a4e1922dd44e"
dependencies = [
"memchr",
]
[[package]]
name = "anyhow"
version = "1.0.65"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "98161a4e3e2184da77bb14f02184cdd111e83bbbcc9979dfee3c44b9a85f5602"
[[package]]
name = "arc-swap"
version = "1.5.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "983cd8b9d4b02a6dc6ffa557262eb5858a27a0038ffffe21a0f133eaa819a164"
[[package]]
name = "autocfg"
version = "1.1.0"
@@ -81,6 +102,18 @@ version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "adab1eaa3408fb7f0c777a73e7465fd5656136fc93b670eb6df3c88c2c1344e3"
[[package]]
name = "itoa"
version = "1.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c8af84674fe1f223a982c933a0ee1086ac4d4052aa0fb8060c12c6ad838e754"
[[package]]
name = "lazy_static"
version = "1.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e2abad23fbc42b3700f2f279844dc832adb2b2eb069b2df918f455c4e18cc646"
[[package]]
name = "libc"
version = "0.2.132"
@@ -97,6 +130,30 @@ dependencies = [
"scopeguard",
]
[[package]]
name = "log"
version = "0.4.17"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "abb12e687cfb44aa40f41fc3978ef76448f9b6038cad6aef4259d3c095a2382e"
dependencies = [
"cfg-if",
]
[[package]]
name = "memchr"
version = "2.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2dffe52ecf27772e601905b7522cb4ef790d2cc203488bbd0e2fe85fcb74566d"
[[package]]
name = "memoffset"
version = "0.6.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5aa361d4faea93603064a027415f07bd8e1d5c88c9fbf68bf56a285428fd79ce"
dependencies = [
"autocfg",
]
[[package]]
name = "once_cell"
version = "1.13.1"
@@ -137,13 +194,15 @@ dependencies = [
[[package]]
name = "pyo3"
version = "0.16.6"
version = "0.17.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0220c44442c9b239dd4357aa856ac468a4f5e1f0df19ddb89b2522952eb4c6ca"
checksum = "12f72538a0230791398a0986a6518ebd88abc3fded89007b506ed072acc831e1"
dependencies = [
"anyhow",
"cfg-if",
"indoc",
"libc",
"memoffset",
"parking_lot",
"pyo3-build-config",
"pyo3-ffi",
@@ -153,9 +212,9 @@ dependencies = [
[[package]]
name = "pyo3-build-config"
version = "0.16.6"
version = "0.17.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9c819d397859445928609d0ec5afc2da5204e0d0f73d6bf9e153b04e83c9cdc2"
checksum = "fc4cf18c20f4f09995f3554e6bcf9b09bd5e4d6b67c562fdfaafa644526ba479"
dependencies = [
"once_cell",
"target-lexicon",
@@ -163,19 +222,30 @@ dependencies = [
[[package]]
name = "pyo3-ffi"
version = "0.16.6"
version = "0.17.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ca882703ab55f54702d7bfe1189b41b0af10272389f04cae38fe4cd56c65f75f"
checksum = "a41877f28d8ebd600b6aa21a17b40c3b0fc4dfe73a27b6e81ab3d895e401b0e9"
dependencies = [
"libc",
"pyo3-build-config",
]
[[package]]
name = "pyo3-macros"
version = "0.16.6"
name = "pyo3-log"
version = "0.7.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "568749402955ad7be7bad9a09b8593851cd36e549ac90bfd44079cea500f3f21"
checksum = "e5695ccff5060c13ca1751cf8c857a12da9b0bf0378cb071c5e0326f7c7e4c1b"
dependencies = [
"arc-swap",
"log",
"pyo3",
]
[[package]]
name = "pyo3-macros"
version = "0.17.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2e81c8d4bcc2f216dc1b665412df35e46d12ee8d3d046b381aad05f1fcf30547"
dependencies = [
"proc-macro2",
"pyo3-macros-backend",
@@ -185,15 +255,25 @@ dependencies = [
[[package]]
name = "pyo3-macros-backend"
version = "0.16.6"
version = "0.17.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "611f64e82d98f447787e82b8e7b0ebc681e1eb78fc1252668b2c605ffb4e1eb8"
checksum = "85752a767ee19399a78272cc2ab625cd7d373b2e112b4b13db28de71fa892784"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "pythonize"
version = "0.17.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0f7f0c136f5fbc01868185eef462800e49659eb23acca83b9e884367a006acb6"
dependencies = [
"pyo3",
"serde",
]
[[package]]
name = "quote"
version = "1.0.21"
@@ -212,12 +292,66 @@ dependencies = [
"bitflags",
]
[[package]]
name = "regex"
version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4c4eb3267174b8c6c2f654116623910a0fef09c4753f8dd83db29c48a0df988b"
dependencies = [
"aho-corasick",
"memchr",
"regex-syntax",
]
[[package]]
name = "regex-syntax"
version = "0.6.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3f87b73ce11b1619a3c6332f45341e0047173771e8b8b73f87bfeefb7b56244"
[[package]]
name = "ryu"
version = "1.0.11"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4501abdff3ae82a1c1b477a17252eb69cee9e66eb915c1abaa4f44d873df9f09"
[[package]]
name = "scopeguard"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "serde"
version = "1.0.145"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "728eb6351430bccb993660dfffc5a72f91ccc1295abaa8ce19b27ebe4f75568b"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.145"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81fa1584d3d1bcacd84c277a0dfe21f5b0f6accf4a23d04d4c6d61f1af522b4c"
dependencies = [
"proc-macro2",
"quote",
"syn",
]
[[package]]
name = "serde_json"
version = "1.0.85"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e55a28e3aaef9d5ce0506d0a14dbba8054ddc7e499ef522dd8b26859ec9d4a44"
dependencies = [
"itoa",
"ryu",
"serde",
]
[[package]]
name = "smallvec"
version = "1.9.0"
@@ -245,9 +379,17 @@ dependencies = [
name = "synapse"
version = "0.1.0"
dependencies = [
"anyhow",
"blake2",
"hex",
"lazy_static",
"log",
"pyo3",
"pyo3-log",
"pythonize",
"regex",
"serde",
"serde_json",
]
[[package]]


@@ -15,6 +15,9 @@ def build(setup_kwargs: Dict[str, Any]) -> None:
path=cargo_toml_path,
binding=Binding.PyO3,
py_limited_api=True,
# We force always building in release mode, as we can't tell the
# difference between using `poetry` in development vs production.
debug=False,
)
setup_kwargs.setdefault("rust_extensions", []).append(extension)
setup_kwargs["zip_safe"] = False


@@ -0,0 +1 @@
Experimental support for thread-specific notifications ([MSC3773](https://github.com/matrix-org/matrix-spec-proposals/pull/3773)).


@@ -0,0 +1 @@
Experimental support for thread-specific receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771)).


@@ -0,0 +1 @@
Experimental support for thread-specific receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771)).


@@ -0,0 +1 @@
Experimental support for thread-specific receipts ([MSC3771](https://github.com/matrix-org/matrix-spec-proposals/pull/3771)).

changelog.d/13983.misc Normal file

@@ -0,0 +1 @@
Bump actions/setup-python from 2 to 4.

changelog.d/13984.misc Normal file

@@ -0,0 +1 @@
Bump types-pyopenssl from 22.0.0 to 22.0.10.

changelog.d/13985.misc Normal file

@@ -0,0 +1 @@
Bump jsonschema from 4.4.0 to 4.16.0.

changelog.d/13986.misc Normal file

@@ -0,0 +1 @@
Bump types-setuptools from 57.4.9 to 65.4.0.0.

changelog.d/13987.misc Normal file

@@ -0,0 +1 @@
Bump types-requests from 2.27.11 to 2.28.11.

changelog.d/13988.misc Normal file

@@ -0,0 +1 @@
Bump isort from 5.7.0 to 5.10.1.

changelog.d/13991.misc Normal file

@@ -0,0 +1 @@
Optimise queries used to get a user's rooms during sync. Contributed by Nick @ Beeper (@fizzadar).


@@ -0,0 +1 @@
Send application service access tokens as a header (and query parameter). Implement [MSC2832](https://github.com/matrix-org/matrix-spec-proposals/pull/2832).


@@ -0,0 +1 @@
Ignore server ACL changes when generating pushes. Implement [MSC3786](https://github.com/matrix-org/matrix-spec-proposals/pull/3786).

changelog.d/14006.misc Normal file

@@ -0,0 +1 @@
Update authlib from 0.15.5 to 1.1.0.


@@ -0,0 +1 @@
Advertise Matrix 1.3 support on `/_matrix/client/versions`.

changelog.d/14033.misc Normal file

@@ -0,0 +1 @@
Don't repeatedly wake up the same users for batched events.

changelog.d/14041.misc Normal file

@@ -0,0 +1 @@
Bump types-pyyaml from 6.0.4 to 6.0.12.

changelog.d/14046.misc Normal file

@@ -0,0 +1 @@
Bump actions/setup-python from 2 to 4.

changelog.d/14053.bugfix Normal file

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.53.0 when querying `/publicRooms` with both a `room_type` filter and a `third_party_instance_id`.

changelog.d/14063.misc Normal file

@@ -0,0 +1 @@
Complement test image: capture logs from nginx.

changelog.d/14065.misc Normal file

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.35.0 where errors parsing a `/send_join` or `/state` response would produce excessive, low-quality Sentry events.

changelog.d/14070.misc Normal file

@@ -0,0 +1 @@
Batch up initial power level event when creating rooms.

changelog.d/14087.doc Normal file

@@ -0,0 +1 @@
The changelog entry ending in a full stop or exclamation mark is not optional.


@@ -1,7 +1,12 @@
groups:
- name: synapse
rules:
# These 3 rules are used in the included Prometheus console
###
### Prometheus Console Only
### The following rules are only needed if you use the Prometheus Console
### in contrib/prometheus/consoles/synapse.html
###
- record: 'synapse_federation_client_sent'
labels:
type: "EDU"
@@ -15,7 +20,6 @@ groups:
type: "Query"
expr: 'sum(synapse_federation_client_sent_queries) by (job)'
# These 3 rules are used in the included Prometheus console
- record: 'synapse_federation_server_received'
labels:
type: "EDU"
@@ -29,7 +33,6 @@ groups:
type: "Query"
expr: 'sum(synapse_federation_server_received_queries) by (job)'
# These 2 rules are used in the included Prometheus console
- record: 'synapse_federation_transaction_queue_pending'
labels:
type: "EDU"
@@ -38,8 +41,16 @@ groups:
labels:
type: "PDU"
expr: 'synapse_federation_transaction_queue_pending_pdus + 0'
###
### End of 'Prometheus Console Only' rules block
###
# These 3 rules are used in the included Grafana dashboard
###
### Grafana Only
### The following rules are only needed if you use the Grafana dashboard
### in contrib/grafana/synapse.json
###
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep_total{origin_type="remote"})
labels:
@@ -53,11 +64,11 @@ groups:
labels:
type: bridges
# This rule is used in the included Grafana dashboard
- record: synapse_storage_events_persisted_by_event_type
expr: sum without(origin_entity, origin_type) (synapse_storage_events_persisted_events_sep_total)
# This rule is used in the included Grafana dashboard
- record: synapse_storage_events_persisted_by_origin
expr: sum without(type) (synapse_storage_events_persisted_events_sep_total)
###
### End of 'Grafana Only' rules block
###


@@ -7,7 +7,7 @@ You can alternatively create multiple worker configuration files with a simple `
#!/bin/bash
for i in {1..5}
do
cat << EOF >> generic_worker$i.yaml
cat << EOF > generic_worker$i.yaml
worker_app: synapse.app.generic_worker
worker_name: generic_worker$i
@@ -15,6 +15,8 @@ worker_name: generic_worker$i
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_main_http_uri: http://localhost:8008/
worker_listeners:
- type: http
port: 808$i

debian/changelog vendored

@@ -1,3 +1,17 @@
matrix-synapse-py3 (1.69.0~rc2) stable; urgency=medium
* New Synapse release 1.69.0rc2.
-- Synapse Packaging team <packages@matrix.org> Thu, 06 Oct 2022 14:45:00 +0100
matrix-synapse-py3 (1.69.0~rc1) stable; urgency=medium
* The man page for the hash_password script has been updated to reflect
the correct default value of 'bcrypt_rounds'.
* New Synapse release 1.69.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Oct 2022 11:17:16 +0100
matrix-synapse-py3 (1.68.0) stable; urgency=medium
* New Synapse release 1.68.0.


@@ -10,7 +10,7 @@
.P
\fBhash_password\fR takes a password as a parameter, either on the command line or on \fBSTDIN\fR if not supplied\.
.P
It accepts a YAML file which can be used to specify parameters like the number of rounds for bcrypt and the password_config section having the pepper value used for the hashing\. By default \fBbcrypt_rounds\fR is set to \fB10\fR\.
It accepts a YAML file which can be used to specify parameters like the number of rounds for bcrypt and the password_config section having the pepper value used for the hashing\. By default \fBbcrypt_rounds\fR is set to \fB12\fR\.
.P
The hashed password is written on the \fBSTDOUT\fR\.
.SH "FILES"

View File

@@ -14,7 +14,7 @@ or the `STDIN` if not supplied.
It accepts a YAML file which can be used to specify parameters like the
number of rounds for bcrypt and the password_config section having the pepper
value used for the hashing. By default `bcrypt_rounds` is set to **10**.
value used for the hashing. By default `bcrypt_rounds` is set to **12**.
The hashed password is written on the `STDOUT`.


@@ -40,7 +40,11 @@ FROM matrixdotorg/synapse:$SYNAPSE_VERSION
COPY --from=deps_base /etc/nginx /etc/nginx
RUN rm /etc/nginx/sites-enabled/default
RUN mkdir /var/log/nginx /var/lib/nginx
RUN chown www-data /var/log/nginx /var/lib/nginx
RUN chown www-data /var/lib/nginx
# have nginx log to stderr/out
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
# Copy Synapse worker, nginx and supervisord configuration template files
COPY ./docker/conf-workers/* /conf/

View File

@@ -8,19 +8,15 @@
ARG SYNAPSE_VERSION=latest
# first of all, we create a base image with a postgres server and database,
# which we can copy into the target image. For repeated rebuilds, this is
# much faster than apt installing postgres each time.
#
# This trick only works because (a) the Synapse image happens to have all the
# shared libraries that postgres wants, (b) we use a postgres image based on
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
# now build the final image, based on the Synapse image.
FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION
# copy the postgres installation over from the image we built above
# First of all, we copy postgres server from the official postgres image,
# since for repeated rebuilds, this is much faster than apt installing
# postgres each time.
# This trick only works because (a) the Synapse image happens to have all the
# shared libraries that postgres wants, (b) we use a postgres image based on
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres:13-bullseye /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres:13-bullseye /usr/share/postgresql /usr/share/postgresql
@@ -28,7 +24,7 @@ FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data
# initialise the database cluster in /var/lib/postgresql
# We also initialize the database at build time, rather than runtime, so that it's faster to spin up the image.
RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password
# Configure a password and create a database for Synapse

View File

@@ -5,7 +5,7 @@ non-interactive way. This is generally used for bootstrapping a Synapse
instance with administrator accounts.
To authenticate yourself to the server, you will need both the shared secret
([`registration_shared_secret`](../configuration/config_documentation.md#registration_shared_secret)
([`registration_shared_secret`](../usage/configuration/config_documentation.md#registration_shared_secret)
in the homeserver configuration), and a one-time nonce. If the registration
shared secret is not configured, this API is not enabled.

View File

@@ -167,6 +167,12 @@ was broken. They are slower than the linters but will typically catch more error
poetry run trial tests
```
You can run unit tests in parallel by specifying the `-jX` argument to `trial`, where `X` is the number of parallel runners you want. To use 4 CPU cores, you would run them like:
```sh
poetry run trial -j4 tests
```
If you wish to only run *some* unit tests, you may specify
another module instead of `tests` - or a test class or a method:
@@ -384,7 +390,7 @@ This file will become part of our [changelog](
https://github.com/matrix-org/synapse/blob/master/CHANGES.md) at the next
release, so the content of the file should be a short description of your
change in the same style as the rest of the changelog. The file can contain Markdown
formatting, and should end with a full stop (.) or an exclamation mark (!) for
formatting, and must end with a full stop (.) or an exclamation mark (!) for
consistency.
Adding credits to the changelog is encouraged, we value your

View File

@@ -195,23 +195,24 @@ There are three separate aspects to this:
## `event_id` global uniqueness
In room versions `1` and `2` it's possible to end up with two events with the
same `event_id` (in the same or different rooms). After room version `3`, that
can only happen with a hash collision, which we basically hope will never
happen.
There are several places in Synapse and even Matrix APIs like [`GET
`event_id`s can be considered globally unique, although there has been a lot of
debate on this topic in places like
[MSC2779](https://github.com/matrix-org/matrix-spec-proposals/issues/2779) and
[MSC2848](https://github.com/matrix-org/matrix-spec-proposals/pull/2848) which
has no resolution yet (as of 2022-09-01). There are several places in Synapse
and even in the Matrix APIs like [`GET
/_matrix/federation/v1/event/{eventId}`](https://spec.matrix.org/v1.1/server-server-api/#get_matrixfederationv1eventeventid)
where we assume that event IDs are globally unique.
But hash collisions are still possible, and by treating event IDs as room
scoped, we can reduce the possibility of a hash collision. When scoping
`event_id` in the database schema, it should be also accompanied by `room_id`
(`PRIMARY KEY (room_id, event_id)`) and lookups should be done through the pair
`(room_id, event_id)`.
When scoping `event_id` in a database schema, it is often nice to accompany it
with `room_id` (`PRIMARY KEY (room_id, event_id)` and a `FOREIGN KEY(room_id)
REFERENCES rooms(room_id)`) which makes flexible lookups easy. For example, it
makes it very easy to find and clean up everything in a room when it needs to be
purged (no need to use a sub-`select` query or a join from the `events` table).
A note on collisions: In room versions `1` and `2` it's possible to end up with
two events with the same `event_id` (in the same or different rooms). After room
version `3`, that can only happen with a hash collision, which we basically hope
will never happen (SHA256 has a massive key space).
There has been a lot of debate on this in places like
https://github.com/matrix-org/matrix-spec-proposals/issues/2779 and
[MSC2848](https://github.com/matrix-org/matrix-spec-proposals/pull/2848) which
has no resolution yet (as of 2022-09-01).
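As a minimal sketch of the advice above, using Python's built-in `sqlite3` module (the `event_foo` table and its columns are invented for illustration; a production schema would differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE rooms (room_id TEXT PRIMARY KEY);

    -- Scope event rows to their room, as recommended above.
    CREATE TABLE event_foo (
        room_id  TEXT NOT NULL,
        event_id TEXT NOT NULL,
        data     TEXT,
        PRIMARY KEY (room_id, event_id),
        FOREIGN KEY (room_id) REFERENCES rooms (room_id)
    );
    """
)

# Lookups go through the (room_id, event_id) pair...
row = conn.execute(
    "SELECT data FROM event_foo WHERE room_id = ? AND event_id = ?",
    ("!room:example.org", "$event1"),
).fetchone()

# ...and purging a room is a single statement, with no sub-select or
# join against an `events` table.
conn.execute("DELETE FROM event_foo WHERE room_id = ?", ("!room:example.org",))
```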

View File

@@ -135,6 +135,8 @@ Synapse 1.2 updates the Prometheus metrics to match the naming
convention of the upstream `prometheus_client`. The old names are
considered deprecated and will be removed in a future version of
Synapse.
**The old names will be disabled by default in Synapse v1.71.0 and removed
altogether in Synapse v1.73.0.**
| New Name | Old Name |
| ---------------------------------------------------------------------------- | ---------------------------------------------------------------------- |
@@ -146,6 +148,13 @@ Synapse.
| synapse_federation_client_events_processed_total | synapse_federation_client_events_processed |
| synapse_event_processing_loop_count_total | synapse_event_processing_loop_count |
| synapse_event_processing_loop_room_count_total | synapse_event_processing_loop_room_count |
| synapse_util_caches_cache_hits | synapse_util_caches_cache:hits |
| synapse_util_caches_cache_size | synapse_util_caches_cache:size |
| synapse_util_caches_cache_evicted_size | synapse_util_caches_cache:evicted_size |
| synapse_util_caches_cache | synapse_util_caches_cache:total |
| synapse_util_caches_response_cache_size | synapse_util_caches_response_cache:size |
| synapse_util_caches_response_cache_hits | synapse_util_caches_response_cache:hits |
| synapse_util_caches_response_cache_evicted_size | synapse_util_caches_response_cache:evicted_size |
| synapse_util_metrics_block_count_total | synapse_util_metrics_block_count |
| synapse_util_metrics_block_time_seconds_total | synapse_util_metrics_block_time_seconds |
| synapse_util_metrics_block_ru_utime_seconds_total | synapse_util_metrics_block_ru_utime_seconds |
@@ -261,7 +270,7 @@ Standard Metric Names
As of synapse version 0.18.2, the format of the process-wide metrics has
been changed to fit prometheus standard naming conventions. Additionally
the units have been changed to seconds, from miliseconds.
the units have been changed to seconds, from milliseconds.
| New name | Old name |
| ---------------------------------------- | --------------------------------- |

View File

@@ -423,7 +423,7 @@ Synapse config:
user_mapping_provider:
config:
display_name_template: "{{ user.name }}"
email_template: "{{ '{{ user.email }}' }}"
email_template: "{{ user.email }}"
```
Relevant documents:

View File

@@ -181,7 +181,7 @@ doas pkg_add synapse
#### NixOS
Robin Lambertz has packaged Synapse for NixOS at:
<https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/misc/matrix-synapse.nix>
<https://github.com/NixOS/nixpkgs/blob/master/nixos/modules/services/matrix/synapse.nix>
### Installing as a Python module from PyPI

View File

@@ -73,8 +73,8 @@ A custom mapping provider must specify the following methods:
* `async def map_user_attributes(self, userinfo, token, failures)`
- This method must be async.
- Arguments:
- `userinfo` - A `authlib.oidc.core.claims.UserInfo` object to extract user
information from.
- `userinfo` - An [`authlib.oidc.core.claims.UserInfo`](https://docs.authlib.org/en/latest/specs/oidc.html#authlib.oidc.core.UserInfo)
object to extract user information from.
- `token` - A dictionary which includes information necessary to make
further requests to the OpenID provider.
- `failures` - An `int` that represents the amount of times the returned
@@ -91,7 +91,13 @@ A custom mapping provider must specify the following methods:
`None`, the user is prompted to pick their own username. This is only used
during a user's first login. Once a localpart has been associated with a
remote user ID (see `get_remote_user_id`) it cannot be updated.
- `displayname`: An optional string, the display name for the user.
- `confirm_localpart`: A boolean. If set to `True`, when a `localpart`
string is returned from this method, Synapse will prompt the user to
either accept this localpart or pick their own username. Otherwise this
option has no effect. If omitted, defaults to `False`.
- `display_name`: An optional string, the display name for the user.
- `emails`: A list of strings, the email address(es) to associate with
this user. If omitted, defaults to an empty list.
* `async def get_extra_attributes(self, userinfo, token)`
- This method must be async.
- Arguments:
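Hypothetically, a provider implementing the `map_user_attributes` contract described above might look like the following sketch. The claim names read from `userinfo` (`preferred_username`, `name`, `email`) are assumptions for illustration; only the returned dictionary keys follow this documentation:

```python
from typing import Any, Dict


class ExampleMappingProvider:
    """Hypothetical provider sketch; only the returned keys follow the docs above."""

    async def map_user_attributes(
        self, userinfo: Any, token: Dict[str, Any], failures: int
    ) -> Dict[str, Any]:
        # `userinfo` is dict-like; append `failures` so a retried
        # localpart does not collide with one that is already taken.
        suffix = str(failures) if failures else ""
        return {
            "localpart": userinfo["preferred_username"] + suffix,
            "display_name": userinfo.get("name"),  # note: `display_name`, not `displayname`
            "emails": [userinfo["email"]] if userinfo.get("email") else [],
            "confirm_localpart": False,
        }
```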

View File

@@ -0,0 +1,14 @@
worker_app: synapse.app.media_repository
worker_name: media_worker
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: 8085
resources:
- names: [media]
worker_log_config: /etc/matrix-synapse/media-worker-log.yaml

View File

@@ -88,13 +88,103 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.69.0
## Changes to the receipts replication streams
Synapse now includes information indicating if a receipt applies to a thread when
replicating it to other workers. This is a forwards- and backwards-incompatible
change: v1.68 and older workers cannot process receipts replicated by v1.69 workers, and
vice versa.
Once all workers are upgraded to v1.69 (or downgraded to v1.68), receipts
replication will resume as normal.
## Deprecation of legacy Prometheus metric names
In current versions of Synapse, some Prometheus metrics are emitted under two different names,
with one of the names being older but non-compliant with OpenMetrics and Prometheus conventions
and one of the names being newer but compliant.
Synapse v1.71.0 will turn the old metric names off *by default*.
For administrators who still rely on them and have not had a chance to update their
uses of the metrics, it's possible to specify `enable_legacy_metrics: true` in
the configuration to re-enable them temporarily.
Synapse v1.73.0 will **remove legacy metric names altogether** and it will no longer
be possible to re-enable them.
The Grafana dashboard, Prometheus recording rules and Prometheus Consoles included
in the `contrib` directory in the Synapse repository have been updated to no longer
rely on the legacy names. These can be used on a current version of Synapse
because current versions of Synapse emit both old and new names.
You may need to update your alerting rules or any other rules that depend on
the names of Prometheus metrics.
If you want to test your changes before legacy names are disabled by default,
you may specify `enable_legacy_metrics: false` in your homeserver configuration.
A list of affected metrics is available on the [Metrics How-to page](https://matrix-org.github.io/synapse/v1.69/metrics-howto.html?highlight=metrics%20deprecated#renaming-of-metrics--deprecation-of-old-names-in-12).
## Deprecation of the `generate_short_term_login_token` module API method
The following method of the module API has been deprecated, and is scheduled to
be removed in v1.71.0:
```python
def generate_short_term_login_token(
self,
user_id: str,
duration_in_ms: int = (2 * 60 * 1000),
auth_provider_id: str = "",
auth_provider_session_id: Optional[str] = None,
) -> str:
...
```
It has been replaced by an asynchronous equivalent:
```python
async def create_login_token(
self,
user_id: str,
duration_in_ms: int = (2 * 60 * 1000),
auth_provider_id: Optional[str] = None,
auth_provider_session_id: Optional[str] = None,
) -> str:
...
```
Synapse will log a warning when a module uses the deprecated method, to help
administrators find modules using it.
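For module authors, migrating is mostly a matter of awaiting the new method from an async code path. A minimal sketch, assuming a module holding a `ModuleApi` reference (the helper name below is invented for the example):

```python
from synapse.module_api import ModuleApi


async def issue_login_token(module_api: ModuleApi, user_id: str) -> str:
    # Previously (deprecated, synchronous):
    #     token = module_api.generate_short_term_login_token(user_id)
    # Now: await the async replacement instead.
    return await module_api.create_login_token(
        user_id,
        duration_in_ms=2 * 60 * 1000,
    )
```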
# Upgrading to v1.68.0
As announced in the upgrade notes for v1.67.0, Synapse now requires a SQLite
version of 3.27.0 or higher if SQLite is in use and source checkouts of Synapse
now require a recent Rust compiler.
Two changes announced in the upgrade notes for v1.67.0 have now landed in v1.68.0.
Installations using
## SQLite version requirement
Synapse now requires a SQLite version of 3.27.0 or higher if SQLite is configured as
Synapse's database.
Installations using
- Docker images [from `matrixdotorg`](https://hub.docker.com/r/matrixdotorg/synapse),
- Debian packages [from Matrix.org](https://packages.matrix.org/), or
- a PostgreSQL database
are not affected.
## Rust requirement when building from source.
Building from a source checkout of Synapse now requires a recent Rust compiler
(currently Rust 1.58.1, but see also the
[Platform Dependency Policy](https://matrix-org.github.io/synapse/latest/deprecation_policy.html)).
Installations using
- Docker images [from `matrixdotorg`](https://hub.docker.com/r/matrixdotorg/synapse),
- Debian packages [from Matrix.org](https://packages.matrix.org/), or
@@ -134,12 +224,12 @@ The simplest way of installing Rust is via [rustup.rs](https://rustup.rs/)
## SQLite version requirement in the next release
From the next major release (v1.68.0) Synapse will require SQLite 3.27.0 or
From the next major release (v1.68.0) Synapse will require SQLite 3.27.0 or
higher. Synapse v1.67.0 will be the last major release supporting SQLite
versions 3.22 to 3.26.
Those using Docker images or Debian packages from Matrix.org will not be
affected. If you have installed from source, you should check the version of
SQLite used by Python with:
```shell
python -c "import sqlite3; print(sqlite3.sqlite_version)"
```

@@ -179,7 +179,7 @@ This will tell other servers to send traffic to port 443 instead.
This option currently defaults to false.
See [Delegation of incoming federation traffic](../../delegate.md) for more
information.
Example configuration:
@@ -2229,6 +2229,9 @@ homeserver. If the room already exists, make certain it is a publicly joinable
room, i.e. the join rule of the room must be set to 'public'. You can find more options
relating to auto-joining rooms below.
As Spaces are just rooms under the hood, Space aliases may also be
used.
Example configuration:
```yaml
auto_join_rooms:
@@ -2240,7 +2243,7 @@ auto_join_rooms:
Where `auto_join_rooms` are specified, setting this flag ensures that
the rooms exist by creating them when the first user on the
homeserver registers. This option will not create Spaces.
By default the auto-created rooms are publicly joinable from any federated
server. Use the `autocreate_auto_join_rooms_federated` and
@@ -2258,7 +2261,7 @@ autocreate_auto_join_rooms: false
---
### `autocreate_auto_join_rooms_federated`
Whether the rooms listed in `auto_join_rooms` that are auto-created are available
via federation. Only has an effect if `autocreate_auto_join_rooms` is true.
Note that whether a room is federated cannot be modified after
@@ -2433,6 +2436,31 @@ Example configuration:
enable_metrics: true
```
---
### `enable_legacy_metrics`
Set to `true` to publish both legacy and non-legacy Prometheus metric names,
or to `false` to only publish non-legacy Prometheus metric names.
Defaults to `true`. Has no effect if `enable_metrics` is `false`.
**In Synapse v1.71.0, this will default to `false` before being removed in Synapse v1.73.0.**
Legacy metric names include:
- metrics containing colons in the name, such as `synapse_util_caches_response_cache:hits`, because colons are supposed to be reserved for user-defined recording rules;
- counters that don't end with the `_total` suffix, such as `synapse_federation_client_sent_edus`, therefore not adhering to the OpenMetrics standard.
These legacy metric names are unconventional and not compliant with OpenMetrics standards.
They are included for backwards compatibility.
Example configuration:
```yaml
enable_legacy_metrics: false
```
See https://github.com/matrix-org/synapse/issues/11106 for context.
*Since v1.67.0.*
**Will be removed in v1.73.0.**
---
### `sentry`
Use this option to enable sentry integration. Provide the DSN assigned to you by sentry
@@ -2949,7 +2977,7 @@ Options for each entry include:
* `module`: The class name of a custom mapping module. Default is
`synapse.handlers.oidc.JinjaOidcMappingProvider`.
See [OpenID Mapping Providers](../../sso_mapping_providers.md#openid-mapping-providers)
for information on implementing a custom mapping provider.
* `config`: Configuration for the mapping provider module. This section will
@@ -3390,13 +3418,15 @@ This option has the following sub-options:
the user directory. If false, search results will only contain users
visible in public rooms and users sharing a room with the requester.
Defaults to false.
NB. If you set this to true, and the last time the user_directory search
indexes were (re)built was before Synapse 1.44, you'll have to
rebuild the indexes in order to search through all known users.
These indexes are built the first time Synapse starts; admins can
manually trigger a rebuild via the API following the instructions
[for running background updates](../administration/admin_api/background_updates.md#run).
Set to true to return search results containing all known users, even if that
user does not share a room with the requester.
* `prefer_local_users`: Defines whether to prefer local users in search query results.
If set to true, local users are more likely to appear above remote users when searching the


@@ -93,7 +93,6 @@ listener" for the main process; and secondly, you need to enable redis-based
replication. Optionally, a shared secret can be used to authenticate HTTP
traffic between workers. For example:
```yaml
# extend the existing `listeners` section. This defines the ports that the
# main process will listen on.
@@ -129,7 +128,8 @@ In the config file for each worker, you must specify:
* The HTTP replication endpoint that it should talk to on the main synapse process
(`worker_replication_host` and `worker_replication_http_port`)
* If handling HTTP requests, a `worker_listeners` option with an `http`
listener, in the same way as the [`listeners`](usage/configuration/config_documentation.md#listeners)
option in the shared config.
* If handling the `^/_matrix/client/v3/keys/upload` endpoint, the HTTP URI for
the main process (`worker_main_http_uri`).
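Putting these together, a minimal worker configuration file might look like the
following sketch (the worker name and ports are illustrative):
```yaml
worker_app: synapse.app.generic_worker
worker_name: generic_worker1

# The HTTP replication endpoint on the main process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

# An `http` listener, mirroring the shared `listeners` option.
worker_listeners:
  - type: http
    port: 8083
    resources:
      - names: [client, federation]
```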
@@ -285,8 +285,9 @@ For multiple workers not handling the SSO endpoints properly, see
[#7530](https://github.com/matrix-org/synapse/issues/7530) and
[#9427](https://github.com/matrix-org/synapse/issues/9427).
Note that a [HTTP listener](usage/configuration/config_documentation.md#listeners)
with `client` and `federation` `resources` must be configured in the `worker_listeners`
option in the worker config.
#### Load balancing
@@ -326,7 +327,8 @@ effects of bursts of events from that bridge on events sent by normal users.
Additionally, the writing of specific streams (such as events) can be moved off
of the main process to a particular worker.
To enable this, the worker must have a
[HTTP `replication` listener](usage/configuration/config_documentation.md#listeners) configured,
have a `worker_name` and be listed in the `instance_map` config. The same worker
can handle multiple streams, but unless otherwise documented, each stream can only
have a single writer.
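For example, moving the `events` stream to a dedicated writer might look like
this sketch in the shared configuration (the worker name is illustrative):
```yaml
instance_map:
  event_persister1:
    host: 127.0.0.1
    port: 9091

stream_writers:
  events: event_persister1
```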
@@ -410,7 +412,7 @@ the stream writer for the `presence` stream:
There is also support for moving background tasks to a separate
worker. Background tasks are run periodically or started via replication. Exactly
which tasks are configured to run depends on your Synapse configuration (e.g. if
stats is enabled). This worker doesn't handle any REST endpoints itself.
To enable this, the worker must have a `worker_name` and can be configured to run
background tasks. For example, to move background tasks to a dedicated worker,
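the shared configuration might gain an entry along these lines (a sketch; the
worker name is illustrative):
```yaml
run_background_tasks_from: background_worker
```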
@@ -457,8 +459,8 @@ worker application type.
#### Notifying Application Services
You can designate one generic worker to send output traffic to Application Services.
Doesn't handle any REST endpoints itself, but you should specify its name in the
shared configuration as follows:
```yaml
notify_appservices_from_worker: worker_name
@@ -536,16 +538,12 @@ file to stop the main synapse running background jobs related to managing the
media repository. Note that doing so will prevent the main process from being
able to handle the above endpoints.
In the `media_repository` worker configuration file, configure the
[HTTP listener](usage/configuration/config_documentation.md#listeners) to
expose the `media` resource. For example:
```yaml
{{#include systemd-with-workers/workers/media_worker.yaml}}
```
Note that if running multiple media repositories they must be on the same server


@@ -106,6 +106,9 @@ disallow_untyped_defs = False
[mypy-tests.handlers.test_user_directory]
disallow_untyped_defs = True
[mypy-tests.push.test_bulk_push_rule_evaluator]
disallow_untyped_defs = True
[mypy-tests.test_server]
disallow_untyped_defs = True

poetry.lock

@@ -13,18 +13,15 @@ tests = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy", "pympl
tests_no_zope = ["cloudpickle", "coverage[toml] (>=5.0.2)", "hypothesis", "mypy", "pympler", "pytest (>=4.3.0)", "pytest-mypy-plugins", "six"]
[[package]]
name = "authlib"
version = "0.15.5"
description = "The ultimate Python library in building OAuth and OpenID Connect servers."
name = "Authlib"
version = "1.1.0"
description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients."
category = "main"
optional = true
python-versions = "*"
[package.dependencies]
cryptography = "*"
[package.extras]
client = ["requests"]
cryptography = ">=3.2"
[[package]]
name = "automat"
@@ -95,14 +92,15 @@ webencodings = "*"
[[package]]
name = "canonicaljson"
version = "1.6.0"
version = "1.6.3"
description = "Canonical JSON"
category = "main"
optional = false
python-versions = "~=3.7"
python-versions = ">=3.7"
[package.dependencies]
simplejson = ">=3.14.0"
typing-extensions = {version = ">=4.0.0", markers = "python_version < \"3.8\""}
[package.extras]
frozendict = ["frozendict (>=1.0)"]
@@ -398,15 +396,16 @@ scripts = ["click (>=6.0)", "twisted (>=16.4.0)"]
[[package]]
name = "isort"
version = "5.7.0"
version = "5.10.1"
description = "A Python utility / library to sort Python imports."
category = "dev"
optional = false
python-versions = ">=3.6,<4.0"
python-versions = ">=3.6.1,<4.0"
[package.extras]
colors = ["colorama (>=0.4.3,<0.5.0)"]
pipfile_deprecated_finder = ["pipreqs", "requirementslib"]
plugins = ["setuptools"]
requirements_deprecated_finder = ["pip-api", "pipreqs"]
[[package]]
@@ -454,7 +453,7 @@ i18n = ["Babel (>=2.7)"]
[[package]]
name = "jsonschema"
version = "4.4.0"
version = "4.16.0"
description = "An implementation of JSON Schema validation for Python"
category = "main"
optional = false
@@ -464,12 +463,13 @@ python-versions = ">=3.7"
attrs = ">=17.4.0"
importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
importlib-resources = {version = ">=1.4.0", markers = "python_version < \"3.9\""}
pkgutil-resolve-name = {version = ">=1.3.10", markers = "python_version < \"3.9\""}
pyrsistent = ">=0.14.0,<0.17.0 || >0.17.0,<0.17.1 || >0.17.1,<0.17.2 || >0.17.2"
typing-extensions = {version = "*", markers = "python_version < \"3.8\""}
[package.extras]
format = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3987", "uri-template", "webcolors (>=1.11)"]
format-nongpl = ["fqdn", "idna", "isoduration", "jsonpointer (>1.13)", "rfc3339-validator", "rfc3986-validator (>0.1.0)", "uri-template", "webcolors (>=1.11)"]
[[package]]
name = "keyring"
@@ -572,11 +572,11 @@ python-versions = "*"
[[package]]
name = "mypy"
version = "0.950"
version = "0.981"
description = "Optional static typing for Python"
category = "dev"
optional = false
python-versions = ">=3.6"
python-versions = ">=3.7"
[package.dependencies]
mypy-extensions = ">=0.4.3"
@@ -599,14 +599,14 @@ python-versions = "*"
[[package]]
name = "mypy-zope"
version = "0.3.7"
version = "0.3.11"
description = "Plugin for mypy to support zope interfaces"
category = "dev"
optional = false
python-versions = "*"
[package.dependencies]
mypy = "0.950"
mypy = "0.981"
"zope.interface" = "*"
"zope.schema" = "*"
@@ -689,6 +689,14 @@ python-versions = "*"
[package.extras]
testing = ["coverage", "nose"]
[[package]]
name = "pkgutil_resolve_name"
version = "1.3.10"
description = "Resolve a name to an object."
category = "main"
optional = false
python-versions = ">=3.6"
[[package]]
name = "platformdirs"
version = "2.5.1"
@@ -1441,8 +1449,8 @@ optional = false
python-versions = "*"
[[package]]
name = "types-pyopenssl"
version = "22.0.0"
name = "types-pyOpenSSL"
version = "22.0.10"
description = "Typing stubs for pyOpenSSL"
category = "dev"
optional = false
@@ -1452,8 +1460,8 @@ python-versions = "*"
types-cryptography = "*"
[[package]]
name = "types-pyyaml"
version = "6.0.4"
name = "types-PyYAML"
version = "6.0.12"
description = "Typing stubs for PyYAML"
category = "dev"
optional = false
@@ -1461,7 +1469,7 @@ python-versions = "*"
[[package]]
name = "types-requests"
version = "2.27.11"
version = "2.28.11"
description = "Typing stubs for requests"
category = "dev"
optional = false
@@ -1472,7 +1480,7 @@ types-urllib3 = "<1.27"
[[package]]
name = "types-setuptools"
version = "57.4.9"
version = "65.4.0.0"
description = "Typing stubs for setuptools"
category = "dev"
optional = false
@@ -1625,16 +1633,16 @@ url_preview = ["lxml"]
[metadata]
lock-version = "1.1"
python-versions = "^3.7.1"
content-hash = "1b14fc274d9e2a495a7f864150f3ffcf4d9f585e09a67e53301ae4ef3c2f3e48"
content-hash = "9d74da808739e4c3d15a2d3473f01ad419f62aec8bf28613b03bd69136c4745b"
[metadata.files]
attrs = [
{file = "attrs-21.4.0-py2.py3-none-any.whl", hash = "sha256:2d27e3784d7a565d36ab851fe94887c5eccd6a463168875832a1be79c82828b4"},
{file = "attrs-21.4.0.tar.gz", hash = "sha256:626ba8234211db98e869df76230a137c4c40a12d72445c45d5f5b716f076e2fd"},
]
Authlib = [
{file = "Authlib-1.1.0-py2.py3-none-any.whl", hash = "sha256:be4b6a1dea51122336c210a6945b27a105b9ac572baffd15b07bcff4376c1523"},
{file = "Authlib-1.1.0.tar.gz", hash = "sha256:0a270c91409fc2b7b0fbee6996e09f2ee3187358762111a9a4225c874b94e891"},
]
automat = [
{file = "Automat-20.2.0-py2.py3-none-any.whl", hash = "sha256:b6feb6455337df834f6c9962d6ccf771515b7d939bca142b29c20c2376bc6111"},
@@ -1682,8 +1690,8 @@ bleach = [
{file = "bleach-4.1.0.tar.gz", hash = "sha256:0900d8b37eba61a802ee40ac0061f8c2b5dee29c1927dd1d233e075ebf5a71da"},
]
canonicaljson = [
{file = "canonicaljson-1.6.0-py3-none-any.whl", hash = "sha256:7230c2a2a3db07874f622af84effe41a655e07bf23734830e18a454e65d5b998"},
{file = "canonicaljson-1.6.0.tar.gz", hash = "sha256:8739d5fd91aca7281d425660ae65af7663808c8177778965f67e90b16a2b2427"},
{file = "canonicaljson-1.6.3-py3-none-any.whl", hash = "sha256:6ba3cf1702fa3d209b3e915a4e9a3e4ef194f1e8fca189c1f0b7a2a7686a27e6"},
{file = "canonicaljson-1.6.3.tar.gz", hash = "sha256:ca59760bc274a899a0da75809d6909ae43e5123381fd6ef040a44d1952c0b448"},
]
certifi = [
{file = "certifi-2021.10.8-py2.py3-none-any.whl", hash = "sha256:d62a0163eb4c2344ac042ab2bdf75399a71a2d8c7d47eac2e2ee91b9d6339569"},
@@ -1969,8 +1977,8 @@ incremental = [
{file = "incremental-21.3.0.tar.gz", hash = "sha256:02f5de5aff48f6b9f665d99d48bfc7ec03b6e3943210de7cfc88856d755d6f57"},
]
isort = [
{file = "isort-5.7.0-py3-none-any.whl", hash = "sha256:fff4f0c04e1825522ce6949973e83110a6e907750cd92d128b0d14aaaadbffdc"},
{file = "isort-5.7.0.tar.gz", hash = "sha256:c729845434366216d320e936b8ad6f9d681aab72dc7cbc2d51bedc3582f3ad1e"},
{file = "isort-5.10.1-py3-none-any.whl", hash = "sha256:6f62d78e2f89b4500b080fe3a81690850cd254227f27f75c3a0c491a1f351ba7"},
{file = "isort-5.10.1.tar.gz", hash = "sha256:e8443a5e7a020e9d7f97f1d7d9cd17c88bcb3bc7e218bf9cf5095fe550be2951"},
]
jaeger-client = [
{file = "jaeger-client-4.8.0.tar.gz", hash = "sha256:3157836edab8e2c209bd2d6ae61113db36f7ee399e66b1dcbb715d87ab49bfe0"},
@@ -1984,18 +1992,15 @@ jinja2 = [
{file = "Jinja2-3.0.3.tar.gz", hash = "sha256:611bb273cd68f3b993fabdc4064fc858c5b47a973cb5aa7999ec1ba405c87cd7"},
]
jsonschema = [
{file = "jsonschema-4.4.0-py3-none-any.whl", hash = "sha256:77281a1f71684953ee8b3d488371b162419767973789272434bbc3f29d9c8823"},
{file = "jsonschema-4.4.0.tar.gz", hash = "sha256:636694eb41b3535ed608fe04129f26542b59ed99808b4f688aa32dcf55317a83"},
{file = "jsonschema-4.16.0-py3-none-any.whl", hash = "sha256:9e74b8f9738d6a946d70705dc692b74b5429cd0960d58e79ffecfc43b2221eb9"},
{file = "jsonschema-4.16.0.tar.gz", hash = "sha256:165059f076eff6971bae5b742fc029a7b4ef3f9bcf04c14e4776a7605de14b23"},
]
keyring = [
{file = "keyring-23.5.0-py3-none-any.whl", hash = "sha256:b0d28928ac3ec8e42ef4cc227822647a19f1d544f21f96457965dc01cf555261"},
{file = "keyring-23.5.0.tar.gz", hash = "sha256:9012508e141a80bd1c0b6778d5c610dd9f8c464d75ac6774248500503f972fb9"},
]
ldap3 = [
{file = "ldap3-2.9.1-py2.6.egg", hash = "sha256:5ab7febc00689181375de40c396dcad4f2659cd260fc5e94c508b6d77c17e9d5"},
{file = "ldap3-2.9.1-py2.7.egg", hash = "sha256:2bc966556fc4d4fa9f445a1c31dc484ee81d44a51ab0e2d0fd05b62cac75daa6"},
{file = "ldap3-2.9.1-py2.py3-none-any.whl", hash = "sha256:5869596fc4948797020d3f03b7939da938778a0f9e2009f7a072ccf92b8e8d70"},
{file = "ldap3-2.9.1-py3.9.egg", hash = "sha256:5630d1383e09ba94839e253e013f1aa1a2cf7a547628ba1265cb7b9a844b5687"},
{file = "ldap3-2.9.1.tar.gz", hash = "sha256:f3e7fc4718e3f09dda568b57100095e0ce58633bcabbed8667ce3f8fbaa4229f"},
]
lxml = [
@@ -2161,37 +2166,38 @@ msgpack = [
{file = "msgpack-1.0.3.tar.gz", hash = "sha256:51fdc7fb93615286428ee7758cecc2f374d5ff363bdd884c7ea622a7a327a81e"},
]
mypy = [
{file = "mypy-0.950-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:cf9c261958a769a3bd38c3e133801ebcd284ffb734ea12d01457cb09eacf7d7b"},
{file = "mypy-0.950-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:b5b5bd0ffb11b4aba2bb6d31b8643902c48f990cc92fda4e21afac658044f0c0"},
{file = "mypy-0.950-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:5e7647df0f8fc947388e6251d728189cfadb3b1e558407f93254e35abc026e22"},
{file = "mypy-0.950-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:eaff8156016487c1af5ffa5304c3e3fd183edcb412f3e9c72db349faf3f6e0eb"},
{file = "mypy-0.950-cp310-cp310-win_amd64.whl", hash = "sha256:563514c7dc504698fb66bb1cf897657a173a496406f1866afae73ab5b3cdb334"},
{file = "mypy-0.950-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:dd4d670eee9610bf61c25c940e9ade2d0ed05eb44227275cce88701fee014b1f"},
{file = "mypy-0.950-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:ca75ecf2783395ca3016a5e455cb322ba26b6d33b4b413fcdedfc632e67941dc"},
{file = "mypy-0.950-cp36-cp36m-win_amd64.whl", hash = "sha256:6003de687c13196e8a1243a5e4bcce617d79b88f83ee6625437e335d89dfebe2"},
{file = "mypy-0.950-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:4c653e4846f287051599ed8f4b3c044b80e540e88feec76b11044ddc5612ffed"},
{file = "mypy-0.950-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:e19736af56947addedce4674c0971e5dceef1b5ec7d667fe86bcd2b07f8f9075"},
{file = "mypy-0.950-cp37-cp37m-win_amd64.whl", hash = "sha256:ef7beb2a3582eb7a9f37beaf38a28acfd801988cde688760aea9e6cc4832b10b"},
{file = "mypy-0.950-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:0112752a6ff07230f9ec2f71b0d3d4e088a910fdce454fdb6553e83ed0eced7d"},
{file = "mypy-0.950-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:ee0a36edd332ed2c5208565ae6e3a7afc0eabb53f5327e281f2ef03a6bc7687a"},
{file = "mypy-0.950-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:77423570c04aca807508a492037abbd72b12a1fb25a385847d191cd50b2c9605"},
{file = "mypy-0.950-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:5ce6a09042b6da16d773d2110e44f169683d8cc8687e79ec6d1181a72cb028d2"},
{file = "mypy-0.950-cp38-cp38-win_amd64.whl", hash = "sha256:5b231afd6a6e951381b9ef09a1223b1feabe13625388db48a8690f8daa9b71ff"},
{file = "mypy-0.950-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:0384d9f3af49837baa92f559d3fa673e6d2652a16550a9ee07fc08c736f5e6f8"},
{file = "mypy-0.950-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:1fdeb0a0f64f2a874a4c1f5271f06e40e1e9779bf55f9567f149466fc7a55038"},
{file = "mypy-0.950-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:61504b9a5ae166ba5ecfed9e93357fd51aa693d3d434b582a925338a2ff57fd2"},
{file = "mypy-0.950-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:a952b8bc0ae278fc6316e6384f67bb9a396eb30aced6ad034d3a76120ebcc519"},
{file = "mypy-0.950-cp39-cp39-win_amd64.whl", hash = "sha256:eaea21d150fb26d7b4856766e7addcf929119dd19fc832b22e71d942835201ef"},
{file = "mypy-0.950-py3-none-any.whl", hash = "sha256:a4d9898f46446bfb6405383b57b96737dcfd0a7f25b748e78ef3e8c576bba3cb"},
{file = "mypy-0.950.tar.gz", hash = "sha256:1b333cfbca1762ff15808a0ef4f71b5d3eed8528b23ea1c3fb50543c867d68de"},
{file = "mypy-0.981-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:4bc460e43b7785f78862dab78674e62ec3cd523485baecfdf81a555ed29ecfa0"},
{file = "mypy-0.981-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:756fad8b263b3ba39e4e204ee53042671b660c36c9017412b43af210ddee7b08"},
{file = "mypy-0.981-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:a16a0145d6d7d00fbede2da3a3096dcc9ecea091adfa8da48fa6a7b75d35562d"},
{file = "mypy-0.981-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ce65f70b14a21fdac84c294cde75e6dbdabbcff22975335e20827b3b94bdbf49"},
{file = "mypy-0.981-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6e35d764784b42c3e256848fb8ed1d4292c9fc0098413adb28d84974c095b279"},
{file = "mypy-0.981-cp310-cp310-win_amd64.whl", hash = "sha256:e53773073c864d5f5cec7f3fc72fbbcef65410cde8cc18d4f7242dea60dac52e"},
{file = "mypy-0.981-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:6ee196b1d10b8b215e835f438e06965d7a480f6fe016eddbc285f13955cca659"},
{file = "mypy-0.981-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8ad21d4c9d3673726cf986ea1d0c9fb66905258709550ddf7944c8f885f208be"},
{file = "mypy-0.981-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:d1debb09043e1f5ee845fa1e96d180e89115b30e47c5d3ce53bc967bab53f62d"},
{file = "mypy-0.981-cp37-cp37m-win_amd64.whl", hash = "sha256:9f362470a3480165c4c6151786b5379351b790d56952005be18bdbdd4c7ce0ae"},
{file = "mypy-0.981-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:c9e0efb95ed6ca1654951bd5ec2f3fa91b295d78bf6527e026529d4aaa1e0c30"},
{file = "mypy-0.981-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:e178eaffc3c5cd211a87965c8c0df6da91ed7d258b5fc72b8e047c3771317ddb"},
{file = "mypy-0.981-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:06e1eac8d99bd404ed8dd34ca29673c4346e76dd8e612ea507763dccd7e13c7a"},
{file = "mypy-0.981-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:fa38f82f53e1e7beb45557ff167c177802ba7b387ad017eab1663d567017c8ee"},
{file = "mypy-0.981-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:64e1f6af81c003f85f0dfed52db632817dabb51b65c0318ffbf5ff51995bbb08"},
{file = "mypy-0.981-cp38-cp38-win_amd64.whl", hash = "sha256:e1acf62a8c4f7c092462c738aa2c2489e275ed386320c10b2e9bff31f6f7e8d6"},
{file = "mypy-0.981-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:b6ede64e52257931315826fdbfc6ea878d89a965580d1a65638ef77cb551f56d"},
{file = "mypy-0.981-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:eb3978b191b9fa0488524bb4ffedf2c573340e8c2b4206fc191d44c7093abfb7"},
{file = "mypy-0.981-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:77f8fcf7b4b3cc0c74fb33ae54a4cd00bb854d65645c48beccf65fa10b17882c"},
{file = "mypy-0.981-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f64d2ce043a209a297df322eb4054dfbaa9de9e8738291706eaafda81ab2b362"},
{file = "mypy-0.981-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:2ee3dbc53d4df7e6e3b1c68ac6a971d3a4fb2852bf10a05fda228721dd44fae1"},
{file = "mypy-0.981-cp39-cp39-win_amd64.whl", hash = "sha256:8e8e49aa9cc23aa4c926dc200ce32959d3501c4905147a66ce032f05cb5ecb92"},
{file = "mypy-0.981-py3-none-any.whl", hash = "sha256:794f385653e2b749387a42afb1e14c2135e18daeb027e0d97162e4b7031210f8"},
{file = "mypy-0.981.tar.gz", hash = "sha256:ad77c13037d3402fbeffda07d51e3f228ba078d1c7096a73759c9419ea031bf4"},
]
mypy-extensions = [
{file = "mypy_extensions-0.4.3-py2.py3-none-any.whl", hash = "sha256:090fedd75945a69ae91ce1303b5824f428daf5a028d2f6ab8a299250a846f15d"},
{file = "mypy_extensions-0.4.3.tar.gz", hash = "sha256:2d82818f5bb3e369420cb3c4060a7970edba416647068eb4c5343488a6c604a8"},
]
mypy-zope = [
{file = "mypy-zope-0.3.7.tar.gz", hash = "sha256:9da171e78e8ef7ac8922c86af1a62f1b7f3244f121020bd94a2246bc3f33c605"},
{file = "mypy_zope-0.3.7-py3-none-any.whl", hash = "sha256:9c7637d066e4d1bafa0651abc091c752009769098043b236446e6725be2bc9c2"},
{file = "mypy-zope-0.3.11.tar.gz", hash = "sha256:d4255f9f04d48c79083bbd4e2fea06513a6ac7b8de06f8c4ce563fd85142ca05"},
{file = "mypy_zope-0.3.11-py3-none-any.whl", hash = "sha256:ec080a6508d1f7805c8d2054f9fdd13c849742ce96803519e1fdfa3d3cab7140"},
]
netaddr = [
{file = "netaddr-0.8.0-py2.py3-none-any.whl", hash = "sha256:9666d0232c32d2656e5e5f8d735f58fd6c7457ce52fc21c98d45f2af78f990ac"},
@@ -2257,6 +2263,10 @@ pkginfo = [
{file = "pkginfo-1.8.2-py2.py3-none-any.whl", hash = "sha256:c24c487c6a7f72c66e816ab1796b96ac6c3d14d49338293d2141664330b55ffc"},
{file = "pkginfo-1.8.2.tar.gz", hash = "sha256:542e0d0b6750e2e21c20179803e40ab50598d8066d51097a0e382cba9eb02bff"},
]
pkgutil_resolve_name = [
{file = "pkgutil_resolve_name-1.3.10-py3-none-any.whl", hash = "sha256:ca27cc078d25c5ad71a9de0a7a330146c4e014c2462d9af19c6b828280649c5e"},
{file = "pkgutil_resolve_name-1.3.10.tar.gz", hash = "sha256:357d6c9e6a755653cfd78893817c0853af365dd51ec97f3d358a819373bbd174"},
]
platformdirs = [
{file = "platformdirs-2.5.1-py3-none-any.whl", hash = "sha256:bcae7cab893c2d310a711b70b24efb93334febe65f8de776ee320b517471e227"},
{file = "platformdirs-2.5.1.tar.gz", hash = "sha256:7535e70dfa32e84d4b34996ea99c5e432fa29a708d0f4e394bbcb2a8faa4f16d"},
@@ -2285,34 +2295,12 @@ psycopg2cffi-compat = [
{file = "psycopg2cffi-compat-1.1.tar.gz", hash = "sha256:d25e921748475522b33d13420aad5c2831c743227dc1f1f2585e0fdb5c914e05"},
]
pyasn1 = [
{file = "pyasn1-0.4.8-py2.4.egg", hash = "sha256:fec3e9d8e36808a28efb59b489e4528c10ad0f480e57dcc32b4de5c9d8c9fdf3"},
{file = "pyasn1-0.4.8-py2.5.egg", hash = "sha256:0458773cfe65b153891ac249bcf1b5f8f320b7c2ce462151f8fa74de8934becf"},
{file = "pyasn1-0.4.8-py2.6.egg", hash = "sha256:5c9414dcfede6e441f7e8f81b43b34e834731003427e5b09e4e00e3172a10f00"},
{file = "pyasn1-0.4.8-py2.7.egg", hash = "sha256:6e7545f1a61025a4e58bb336952c5061697da694db1cae97b116e9c46abcf7c8"},
{file = "pyasn1-0.4.8-py2.py3-none-any.whl", hash = "sha256:39c7e2ec30515947ff4e87fb6f456dfc6e84857d34be479c9d4a4ba4bf46aa5d"},
{file = "pyasn1-0.4.8-py3.1.egg", hash = "sha256:78fa6da68ed2727915c4767bb386ab32cdba863caa7dbe473eaae45f9959da86"},
{file = "pyasn1-0.4.8-py3.2.egg", hash = "sha256:08c3c53b75eaa48d71cf8c710312316392ed40899cb34710d092e96745a358b7"},
{file = "pyasn1-0.4.8-py3.3.egg", hash = "sha256:03840c999ba71680a131cfaee6fab142e1ed9bbd9c693e285cc6aca0d555e576"},
{file = "pyasn1-0.4.8-py3.4.egg", hash = "sha256:7ab8a544af125fb704feadb008c99a88805126fb525280b2270bb25cc1d78a12"},
{file = "pyasn1-0.4.8-py3.5.egg", hash = "sha256:e89bf84b5437b532b0803ba5c9a5e054d21fec423a89952a74f87fa2c9b7bce2"},
{file = "pyasn1-0.4.8-py3.6.egg", hash = "sha256:014c0e9976956a08139dc0712ae195324a75e142284d5f87f1a87ee1b068a359"},
{file = "pyasn1-0.4.8-py3.7.egg", hash = "sha256:99fcc3c8d804d1bc6d9a099921e39d827026409a58f2a720dcdb89374ea0c776"},
{file = "pyasn1-0.4.8.tar.gz", hash = "sha256:aef77c9fb94a3ac588e87841208bdec464471d9871bd5050a287cc9a475cd0ba"},
]
pyasn1-modules = [
{file = "pyasn1-modules-0.2.8.tar.gz", hash = "sha256:905f84c712230b2c592c19470d3ca8d552de726050d1d1716282a1f6146be65e"},
{file = "pyasn1_modules-0.2.8-py2.4.egg", hash = "sha256:0fe1b68d1e486a1ed5473f1302bd991c1611d319bba158e98b106ff86e1d7199"},
{file = "pyasn1_modules-0.2.8-py2.5.egg", hash = "sha256:fe0644d9ab041506b62782e92b06b8c68cca799e1a9636ec398675459e031405"},
{file = "pyasn1_modules-0.2.8-py2.6.egg", hash = "sha256:a99324196732f53093a84c4369c996713eb8c89d360a496b599fb1a9c47fc3eb"},
{file = "pyasn1_modules-0.2.8-py2.7.egg", hash = "sha256:0845a5582f6a02bb3e1bde9ecfc4bfcae6ec3210dd270522fee602365430c3f8"},
{file = "pyasn1_modules-0.2.8-py2.py3-none-any.whl", hash = "sha256:a50b808ffeb97cb3601dd25981f6b016cbb3d31fbf57a8b8a87428e6158d0c74"},
{file = "pyasn1_modules-0.2.8-py3.1.egg", hash = "sha256:f39edd8c4ecaa4556e989147ebf219227e2cd2e8a43c7e7fcb1f1c18c5fd6a3d"},
{file = "pyasn1_modules-0.2.8-py3.2.egg", hash = "sha256:b80486a6c77252ea3a3e9b1e360bc9cf28eaac41263d173c032581ad2f20fe45"},
{file = "pyasn1_modules-0.2.8-py3.3.egg", hash = "sha256:65cebbaffc913f4fe9e4808735c95ea22d7a7775646ab690518c056784bc21b4"},
{file = "pyasn1_modules-0.2.8-py3.4.egg", hash = "sha256:15b7c67fabc7fc240d87fb9aabf999cf82311a6d6fb2c70d00d3d0604878c811"},
{file = "pyasn1_modules-0.2.8-py3.5.egg", hash = "sha256:426edb7a5e8879f1ec54a1864f16b882c2837bfd06eee62f2c982315ee2473ed"},
{file = "pyasn1_modules-0.2.8-py3.6.egg", hash = "sha256:cbac4bc38d117f2a49aeedec4407d23e8866ea4ac27ff2cf7fb3e5b570df19e0"},
{file = "pyasn1_modules-0.2.8-py3.7.egg", hash = "sha256:c29a5e5cc7a3f05926aff34e097e84f8589cd790ce0ed41b67aed6857b26aafd"},
]
pycodestyle = [
{file = "pycodestyle-2.8.0-py2.py3-none-any.whl", hash = "sha256:720f8b39dde8b293825e7ff02c475f3077124006db4f440dcbc9a20b76548a20"},
@@ -2450,6 +2438,13 @@ pyyaml = [
{file = "PyYAML-6.0-cp310-cp310-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl", hash = "sha256:f84fbc98b019fef2ee9a1cb3ce93e3187a6df0b2538a651bfb890254ba9f90b5"},
{file = "PyYAML-6.0-cp310-cp310-win32.whl", hash = "sha256:2cd5df3de48857ed0544b34e2d40e9fac445930039f3cfe4bcc592a1f836d513"},
{file = "PyYAML-6.0-cp310-cp310-win_amd64.whl", hash = "sha256:daf496c58a8c52083df09b80c860005194014c3698698d1a57cbcfa182142a3a"},
{file = "PyYAML-6.0-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:d4b0ba9512519522b118090257be113b9468d804b19d63c71dbcf4a48fa32358"},
{file = "PyYAML-6.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:81957921f441d50af23654aa6c5e5eaf9b06aba7f0a19c18a538dc7ef291c5a1"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:afa17f5bc4d1b10afd4466fd3a44dc0e245382deca5b3c353d8b757f9e3ecb8d"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:dbad0e9d368bb989f4515da330b88a057617d16b6a8245084f1b05400f24609f"},
{file = "PyYAML-6.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:432557aa2c09802be39460360ddffd48156e30721f5e8d917f01d31694216782"},
{file = "PyYAML-6.0-cp311-cp311-win32.whl", hash = "sha256:bfaef573a63ba8923503d27530362590ff4f576c626d86a9fed95822a8255fd7"},
{file = "PyYAML-6.0-cp311-cp311-win_amd64.whl", hash = "sha256:01b45c0191e6d66c470b6cf1b9531a771a83c1c4208272ead47a3ae4f2f603bf"},
{file = "PyYAML-6.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:897b80890765f037df3403d22bab41627ca8811ae55e9a722fd0392850ec4d86"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:50602afada6d6cbfad699b0c7bb50d5ccffa7e46a3d738092afddc1f9758427f"},
{file = "PyYAML-6.0-cp36-cp36m-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:48c346915c114f5fdb3ead70312bd042a953a8ce5c7106d5bfb1a5254e47da92"},
@@ -2753,21 +2748,21 @@ types-psycopg2 = [
{file = "types-psycopg2-2.9.9.tar.gz", hash = "sha256:4f9d4d52eeb343dc00fd5ed4f1513a8a5c18efba0a072eb82706d15cf4f20a2e"},
{file = "types_psycopg2-2.9.9-py3-none-any.whl", hash = "sha256:cec9291d4318ad70b407310f8304b3d40f6d0358f09870448f7a65e3027c80af"},
]
types-pyOpenSSL = [
{file = "types-pyOpenSSL-22.0.10.tar.gz", hash = "sha256:f943b834f5b97e5e808764c2f6e37be1a2e226c46792296f61558196acfcc3a1"},
{file = "types_pyOpenSSL-22.0.10-py3-none-any.whl", hash = "sha256:63baea211768bea580a769ac5c0d637ae8cd3150314aadc5726ca22e4c4f241a"},
]
types-PyYAML = [
{file = "types-PyYAML-6.0.12.tar.gz", hash = "sha256:f6f350418125872f3f0409d96a62a5a5ceb45231af5cc07ee0034ec48a3c82fa"},
{file = "types_PyYAML-6.0.12-py3-none-any.whl", hash = "sha256:29228db9f82df4f1b7febee06bbfb601677882e98a3da98132e31c6874163e15"},
]
types-requests = [
{file = "types-requests-2.27.11.tar.gz", hash = "sha256:6a7ed24b21780af4a5b5e24c310b2cd885fb612df5fd95584d03d87e5f2a195a"},
{file = "types_requests-2.27.11-py3-none-any.whl", hash = "sha256:506279bad570c7b4b19ac1f22e50146538befbe0c133b2cea66a9b04a533a859"},
{file = "types-requests-2.28.11.tar.gz", hash = "sha256:7ee827eb8ce611b02b5117cfec5da6455365b6a575f5e3ff19f655ba603e6b4e"},
{file = "types_requests-2.28.11-py3-none-any.whl", hash = "sha256:af5f55e803cabcfb836dad752bd6d8a0fc8ef1cd84243061c0e27dee04ccf4fd"},
]
types-setuptools = [
{file = "types-setuptools-57.4.9.tar.gz", hash = "sha256:536ef74744f8e1e4be4fc719887f886e74e4cf3c792b4a06984320be4df450b5"},
{file = "types_setuptools-57.4.9-py3-none-any.whl", hash = "sha256:948dc6863373750e2cd0b223a84f1fb608414cde5e55cf38ea657b93aeb411d2"},
{file = "types-setuptools-65.4.0.0.tar.gz", hash = "sha256:d9021d6a70690b34e7bd2947e7ab10167c646fbf062508cb56581be2e2a1615e"},
{file = "types_setuptools-65.4.0.0-py3-none-any.whl", hash = "sha256:ce178b3f7dbd6c0e67f8eee7ae29c1be280ade7e5188bdd9e620843de4060d85"},
]
types-urllib3 = [
{file = "types-urllib3-1.26.10.tar.gz", hash = "sha256:a26898f530e6c3f43f25b907f2b884486868ffd56a9faa94cbf9b3eb6e165d6a"},


@@ -57,7 +57,7 @@ manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.68.0"
version = "1.69.0rc2"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"
@@ -267,7 +267,7 @@ all = [
[tool.poetry.dev-dependencies]
## We pin black so that our tests don't start failing on new releases.
isort = "==5.7.0"
isort = "==5.10.1"
black = "==22.3.0"
flake8-comprehensions = "*"
flake8-bugbear = "==21.3.2"
@@ -307,7 +307,7 @@ twine = "*"
towncrier = ">=18.6.0rc1"
[build-system]
requires = ["poetry-core>=1.0.0", "setuptools_rust>=1.3"]
requires = ["poetry-core==1.2.0", "setuptools_rust==1.5.2"]
build-backend = "poetry.core.masonry.api"


@@ -11,14 +11,24 @@ rust-version = "1.58.1"
[lib]
name = "synapse"
crate-type = ["cdylib"]
# We generate a `cdylib` for Python and a standard `lib` for running
# tests/benchmarks.
crate-type = ["lib", "cdylib"]
[package.metadata.maturin]
# This is where we tell maturin where to place the built library.
name = "synapse.synapse_rust"
[dependencies]
pyo3 = { version = "0.16.5", features = ["extension-module", "macros", "abi3", "abi3-py37"] }
anyhow = "1.0.63"
lazy_static = "1.4.0"
log = "0.4.17"
pyo3 = { version = "0.17.1", features = ["extension-module", "macros", "anyhow", "abi3", "abi3-py37"] }
pyo3-log = "0.7.0"
pythonize = "0.17.0"
regex = "1.6.0"
serde = { version = "1.0.144", features = ["derive"] }
serde_json = "1.0.85"
[build-dependencies]
blake2 = "0.10.4"

rust/benches/evaluator.rs

@@ -0,0 +1,149 @@
// Copyright 2022 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(test)]
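// Note: `#![feature(test)]` is unstable, so these benchmarks only build on a
// nightly Rust toolchain (e.g. `cargo +nightly bench`).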
use synapse::push::{
evaluator::PushRuleEvaluator, Condition, EventMatchCondition, FilteredPushRules, PushRules,
};
use test::Bencher;
extern crate test;
#[bench]
fn bench_match_exact(b: &mut Bencher) {
let flattened_keys = [
("type".to_string(), "m.text".to_string()),
("room_id".to_string(), "!room:server".to_string()),
("content.body".to_string(), "test message".to_string()),
]
.into_iter()
.collect();
let eval = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
Default::default(),
Default::default(),
true,
)
.unwrap();
let condition = Condition::Known(synapse::push::KnownCondition::EventMatch(
EventMatchCondition {
key: "room_id".into(),
pattern: Some("!room:server".into()),
pattern_type: None,
},
));
let matched = eval.match_condition(&condition, None, None).unwrap();
assert!(matched, "Didn't match");
b.iter(|| eval.match_condition(&condition, None, None).unwrap());
}
#[bench]
fn bench_match_word(b: &mut Bencher) {
let flattened_keys = [
("type".to_string(), "m.text".to_string()),
("room_id".to_string(), "!room:server".to_string()),
("content.body".to_string(), "test message".to_string()),
]
.into_iter()
.collect();
let eval = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
Default::default(),
Default::default(),
true,
)
.unwrap();
let condition = Condition::Known(synapse::push::KnownCondition::EventMatch(
EventMatchCondition {
key: "content.body".into(),
pattern: Some("test".into()),
pattern_type: None,
},
));
let matched = eval.match_condition(&condition, None, None).unwrap();
assert!(matched, "Didn't match");
b.iter(|| eval.match_condition(&condition, None, None).unwrap());
}
#[bench]
fn bench_match_word_miss(b: &mut Bencher) {
let flattened_keys = [
("type".to_string(), "m.text".to_string()),
("room_id".to_string(), "!room:server".to_string()),
("content.body".to_string(), "test message".to_string()),
]
.into_iter()
.collect();
let eval = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
Default::default(),
Default::default(),
true,
)
.unwrap();
let condition = Condition::Known(synapse::push::KnownCondition::EventMatch(
EventMatchCondition {
key: "content.body".into(),
pattern: Some("foobar".into()),
pattern_type: None,
},
));
let matched = eval.match_condition(&condition, None, None).unwrap();
assert!(!matched, "Unexpectedly matched");
b.iter(|| eval.match_condition(&condition, None, None).unwrap());
}
#[bench]
fn bench_eval_message(b: &mut Bencher) {
let flattened_keys = [
("type".to_string(), "m.text".to_string()),
("room_id".to_string(), "!room:server".to_string()),
("content.body".to_string(), "test message".to_string()),
]
.into_iter()
.collect();
let eval = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
Default::default(),
Default::default(),
true,
)
.unwrap();
let rules =
FilteredPushRules::py_new(PushRules::new(Vec::new()), Default::default(), false, false);
b.iter(|| eval.run(&rules, Some("bob"), Some("person")));
}

rust/benches/glob.rs

@@ -0,0 +1,40 @@
// Copyright 2022 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(test)]
use synapse::push::utils::{glob_to_regex, GlobMatchType};
use test::Bencher;
extern crate test;
#[bench]
fn bench_whole(b: &mut Bencher) {
b.iter(|| glob_to_regex("test", GlobMatchType::Whole));
}
#[bench]
fn bench_word(b: &mut Bencher) {
b.iter(|| glob_to_regex("test", GlobMatchType::Word));
}
#[bench]
fn bench_whole_wildcard_run(b: &mut Bencher) {
b.iter(|| glob_to_regex("test***??*?*?foo", GlobMatchType::Whole));
}
#[bench]
fn bench_word_wildcard_run(b: &mut Bencher) {
b.iter(|| glob_to_regex("test***??*?*?foo", GlobMatchType::Whole));
}


@@ -22,7 +22,7 @@ fn main() -> Result<(), std::io::Error> {
for entry in entries {
if entry.is_dir() {
dirs.push(entry);
} else {
paths.push(entry.to_str().expect("valid rust paths").to_string());
}


@@ -1,5 +1,7 @@
use pyo3::prelude::*;
pub mod push;
/// Returns the hash of all the rust source files at the time it was compiled.
///
/// Used by python to detect if the rust library is outdated.
@@ -17,8 +19,13 @@ fn sum_as_string(a: usize, b: usize) -> PyResult<String> {
/// The entry point for defining the Python module.
#[pymodule]
fn synapse_rust(py: Python<'_>, m: &PyModule) -> PyResult<()> {
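// Route Rust `log` output into Python's logging framework via pyo3-log.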
pyo3_log::init();
m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
m.add_function(wrap_pyfunction!(get_rust_file_digest, m)?)?;
push::register_module(py, m)?;
Ok(())
}

rust/src/push/base_rules.rs

@@ -0,0 +1,336 @@
// Copyright 2022 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Contains the definitions of the "base" push rules.
use std::borrow::Cow;
use std::collections::HashMap;
use lazy_static::lazy_static;
use serde_json::Value;
use super::KnownCondition;
use crate::push::Action;
use crate::push::Condition;
use crate::push::EventMatchCondition;
use crate::push::PushRule;
use crate::push::SetTweak;
use crate::push::TweakValue;
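// Tweak actions shared by several of the default rules defined below.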
const HIGHLIGHT_ACTION: Action = Action::SetTweak(SetTweak {
set_tweak: Cow::Borrowed("highlight"),
value: None,
other_keys: Value::Null,
});
const HIGHLIGHT_FALSE_ACTION: Action = Action::SetTweak(SetTweak {
set_tweak: Cow::Borrowed("highlight"),
value: Some(TweakValue::Other(Value::Bool(false))),
other_keys: Value::Null,
});
const SOUND_ACTION: Action = Action::SetTweak(SetTweak {
set_tweak: Cow::Borrowed("sound"),
value: Some(TweakValue::String(Cow::Borrowed("default"))),
other_keys: Value::Null,
});
const RING_ACTION: Action = Action::SetTweak(SetTweak {
set_tweak: Cow::Borrowed("sound"),
value: Some(TweakValue::String(Cow::Borrowed("ring"))),
other_keys: Value::Null,
});
pub const BASE_PREPEND_OVERRIDE_RULES: &[PushRule] = &[PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.master"),
priority_class: 5,
conditions: Cow::Borrowed(&[]),
actions: Cow::Borrowed(&[Action::DontNotify]),
default: true,
default_enabled: false,
}];
pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.suppress_notices"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("content.msgtype"),
pattern: Some(Cow::Borrowed("m.notice")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[Action::DontNotify]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.invite_for_me"),
priority_class: 5,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.room.member")),
pattern_type: None,
})),
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("content.membership"),
pattern: Some(Cow::Borrowed("invite")),
pattern_type: None,
})),
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("state_key"),
pattern: None,
pattern_type: Some(Cow::Borrowed("user_id")),
})),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION, SOUND_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.member_event"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.room.member")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[Action::DontNotify]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.contains_display_name"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::ContainsDisplayName)]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_ACTION, SOUND_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.roomnotif"),
priority_class: 5,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::SenderNotificationPermission {
key: Cow::Borrowed("room"),
}),
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("content.body"),
pattern: Some(Cow::Borrowed("@room")),
pattern_type: None,
})),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.tombstone"),
priority_class: 5,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.room.tombstone")),
pattern_type: None,
})),
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("state_key"),
pattern: Some(Cow::Borrowed("")),
pattern_type: None,
})),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.reaction"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.reaction")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[Action::DontNotify]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/override/.m.rule.room.server_acl"),
priority_class: 5,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.room.server_acl")),
pattern_type: None,
})),
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("state_key"),
pattern: Some(Cow::Borrowed("")),
pattern_type: None,
})),
]),
actions: Cow::Borrowed(&[]),
default: true,
default_enabled: true,
},
];
pub const BASE_APPEND_CONTENT_RULES: &[PushRule] = &[PushRule {
rule_id: Cow::Borrowed("global/content/.m.rule.contains_user_name"),
priority_class: 4,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("content.body"),
pattern: None,
pattern_type: Some(Cow::Borrowed("user_localpart")),
},
))]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_ACTION, SOUND_ACTION]),
default: true,
default_enabled: true,
}];
pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
PushRule {
rule_id: Cow::Borrowed("global/underride/.m.rule.call"),
priority_class: 1,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.call.invite")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[Action::Notify, RING_ACTION, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.m.rule.room_one_to_one"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.room.message")),
pattern_type: None,
})),
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.m.rule.encrypted_room_one_to_one"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.room.encrypted")),
pattern_type: None,
})),
Condition::Known(KnownCondition::RoomMemberCount {
is: Some(Cow::Borrowed("2")),
}),
]),
actions: Cow::Borrowed(&[Action::Notify, SOUND_ACTION, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.org.matrix.msc3772.thread_reply"),
priority_class: 1,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::RelationMatch {
rel_type: Cow::Borrowed("m.thread"),
event_type_pattern: None,
sender: None,
sender_type: Some(Cow::Borrowed("user_id")),
})]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.m.rule.message"),
priority_class: 1,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.room.message")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.m.rule.encrypted"),
priority_class: 1,
conditions: Cow::Borrowed(&[Condition::Known(KnownCondition::EventMatch(
EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("m.room.encrypted")),
pattern_type: None,
},
))]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed("global/underride/.im.vector.jitsi"),
priority_class: 1,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("type"),
pattern: Some(Cow::Borrowed("im.vector.modular.widgets")),
pattern_type: None,
})),
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("content.type"),
pattern: Some(Cow::Borrowed("jitsi")),
pattern_type: None,
})),
Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: Cow::Borrowed("state_key"),
pattern: Some(Cow::Borrowed("*")),
pattern_type: None,
})),
]),
actions: Cow::Borrowed(&[Action::Notify, HIGHLIGHT_FALSE_ACTION]),
default: true,
default_enabled: true,
},
];
lazy_static! {
pub static ref BASE_RULES_BY_ID: HashMap<&'static str, &'static PushRule> =
BASE_PREPEND_OVERRIDE_RULES
.iter()
.chain(BASE_APPEND_OVERRIDE_RULES.iter())
.chain(BASE_APPEND_CONTENT_RULES.iter())
.chain(BASE_APPEND_UNDERRIDE_RULES.iter())
.map(|rule| { (&*rule.rule_id, rule) })
.collect();
}

rust/src/push/evaluator.rs

@@ -0,0 +1,374 @@
// Copyright 2022 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::{
borrow::Cow,
collections::{BTreeMap, BTreeSet},
};
use anyhow::{Context, Error};
use lazy_static::lazy_static;
use log::warn;
use pyo3::prelude::*;
use regex::Regex;
use super::{
utils::{get_glob_matcher, get_localpart_from_id, GlobMatchType},
Action, Condition, EventMatchCondition, FilteredPushRules, KnownCondition,
};
lazy_static! {
/// Used to parse the `is` clause in the room member count condition.
static ref INEQUALITY_EXPR: Regex = Regex::new(r"^([=<>]*)([0-9]+)$").expect("valid regex");
}
/// Allows running a set of push rules against a particular event.
#[pyclass]
pub struct PushRuleEvaluator {
/// A mapping of "flattened" keys to string values in the event, e.g.
/// includes things like "type" and "content.msgtype".
flattened_keys: BTreeMap<String, String>,
/// The "content.body", if any.
body: String,
/// The number of users in the room.
room_member_count: u64,
/// The `notifications` section of the current power levels in the room.
notification_power_levels: BTreeMap<String, i64>,
/// The relations related to the event as a mapping from relation type to
/// set of sender/event type 2-tuples.
relations: BTreeMap<String, BTreeSet<(String, String)>>,
/// Is running "relation" conditions enabled?
relation_match_enabled: bool,
/// The power level of the sender of the event, or None if event is an
/// outlier.
sender_power_level: Option<i64>,
}
#[pymethods]
impl PushRuleEvaluator {
/// Create a new `PushRuleEvaluator`. See struct docstring for details.
#[new]
pub fn py_new(
flattened_keys: BTreeMap<String, String>,
room_member_count: u64,
sender_power_level: Option<i64>,
notification_power_levels: BTreeMap<String, i64>,
relations: BTreeMap<String, BTreeSet<(String, String)>>,
relation_match_enabled: bool,
) -> Result<Self, Error> {
let body = flattened_keys
.get("content.body")
.cloned()
.unwrap_or_default();
Ok(PushRuleEvaluator {
flattened_keys,
body,
room_member_count,
notification_power_levels,
relations,
relation_match_enabled,
sender_power_level,
})
}
/// Run the evaluator with the given push rules, for the given user ID and
/// display name of the user.
///
/// Passing in None will skip evaluating rules matching user ID and display
/// name.
///
/// Returns the set of actions, if any, that match (filtering out any
/// `dont_notify` actions).
pub fn run(
&self,
push_rules: &FilteredPushRules,
user_id: Option<&str>,
display_name: Option<&str>,
) -> Vec<Action> {
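// Rules are evaluated in order; the first enabled rule whose conditions all
// match supplies the actions, so we can return as soon as one matches.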
'outer: for (push_rule, enabled) in push_rules.iter() {
if !enabled {
continue;
}
for condition in push_rule.conditions.iter() {
match self.match_condition(condition, user_id, display_name) {
Ok(true) => {}
Ok(false) => continue 'outer,
Err(err) => {
warn!("Condition match failed {err}");
continue 'outer;
}
}
}
let actions = push_rule
.actions
.iter()
// Filter out "dont_notify" actions, as we don't store them.
.filter(|a| **a != Action::DontNotify)
.cloned()
.collect();
return actions;
}
Vec::new()
}
/// Check if the given condition matches.
fn matches(
&self,
condition: Condition,
user_id: Option<&str>,
display_name: Option<&str>,
) -> bool {
match self.match_condition(&condition, user_id, display_name) {
Ok(true) => true,
Ok(false) => false,
Err(err) => {
warn!("Condition match failed {err}");
false
}
}
}
}
impl PushRuleEvaluator {
/// Match a given `Condition` for a push rule.
pub fn match_condition(
&self,
condition: &Condition,
user_id: Option<&str>,
display_name: Option<&str>,
) -> Result<bool, Error> {
let known_condition = match condition {
Condition::Known(known) => known,
Condition::Unknown(_) => {
return Ok(false);
}
};
let result = match known_condition {
KnownCondition::EventMatch(event_match) => {
self.match_event_match(event_match, user_id)?
}
KnownCondition::ContainsDisplayName => {
if let Some(dn) = display_name {
if !dn.is_empty() {
get_glob_matcher(dn, GlobMatchType::Word)?.is_match(&self.body)?
} else {
// We specifically ignore empty display names, as otherwise
// they would always match.
false
}
} else {
false
}
}
KnownCondition::RoomMemberCount { is } => {
if let Some(is) = is {
self.match_member_count(is)?
} else {
false
}
}
KnownCondition::SenderNotificationPermission { key } => {
if let Some(sender_power_level) = &self.sender_power_level {
let required_level = self
.notification_power_levels
.get(key.as_ref())
.copied()
.unwrap_or(50);
*sender_power_level >= required_level
} else {
false
}
}
KnownCondition::RelationMatch {
rel_type,
event_type_pattern,
sender,
sender_type,
} => {
self.match_relations(rel_type, sender, sender_type, user_id, event_type_pattern)?
}
};
Ok(result)
}
/// Evaluates a relation condition.
fn match_relations(
&self,
rel_type: &str,
sender: &Option<Cow<str>>,
sender_type: &Option<Cow<str>>,
user_id: Option<&str>,
event_type_pattern: &Option<Cow<str>>,
) -> Result<bool, Error> {
// First check if relation matching is enabled...
if !self.relation_match_enabled {
return Ok(false);
}
// ... and if there are any relations to match against.
let relations = if let Some(relations) = self.relations.get(rel_type) {
relations
} else {
return Ok(false);
};
// Extract the sender pattern from the condition
let sender_pattern = if let Some(sender) = sender {
Some(sender.as_ref())
} else if let Some(sender_type) = sender_type {
if sender_type == "user_id" {
if let Some(user_id) = user_id {
Some(user_id)
} else {
return Ok(false);
}
} else {
warn!("Unrecognized sender_type: {sender_type}");
return Ok(false);
}
} else {
None
};
let mut sender_compiled_pattern = if let Some(pattern) = sender_pattern {
Some(get_glob_matcher(pattern, GlobMatchType::Whole)?)
} else {
None
};
let mut type_compiled_pattern = if let Some(pattern) = event_type_pattern {
Some(get_glob_matcher(pattern, GlobMatchType::Whole)?)
} else {
None
};
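        // Check each relation of the requested type; the condition matches as
        // soon as one relation satisfies both the sender and event type patterns.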
for (relation_sender, event_type) in relations {
if let Some(pattern) = &mut sender_compiled_pattern {
if !pattern.is_match(relation_sender)? {
continue;
}
}
if let Some(pattern) = &mut type_compiled_pattern {
if !pattern.is_match(event_type)? {
continue;
}
}
return Ok(true);
}
Ok(false)
}
    /// Evaluates an `event_match` condition.
fn match_event_match(
&self,
event_match: &EventMatchCondition,
user_id: Option<&str>,
) -> Result<bool, Error> {
let pattern = if let Some(pattern) = &event_match.pattern {
pattern
} else if let Some(pattern_type) = &event_match.pattern_type {
// The `pattern_type` can either be "user_id" or "user_localpart",
// either way if we don't have a `user_id` then the condition can't
// match.
let user_id = if let Some(user_id) = user_id {
user_id
} else {
return Ok(false);
};
match &**pattern_type {
"user_id" => user_id,
"user_localpart" => get_localpart_from_id(user_id)?,
_ => return Ok(false),
}
} else {
return Ok(false);
};
let haystack = if let Some(haystack) = self.flattened_keys.get(&*event_match.key) {
haystack
} else {
return Ok(false);
};
// For the content.body we match against "words", but for everything
// else we match against the entire value.
let match_type = if event_match.key == "content.body" {
GlobMatchType::Word
} else {
GlobMatchType::Whole
};
let mut compiled_pattern = get_glob_matcher(pattern, match_type)?;
compiled_pattern.is_match(haystack)
}
    /// Match the member count against an 'is' condition.
    /// The `is` condition can be things like '>2', '==3', or even just '4'.
fn match_member_count(&self, is: &str) -> Result<bool, Error> {
let captures = INEQUALITY_EXPR.captures(is).context("bad 'is' clause")?;
let ineq = captures.get(1).map_or("==", |m| m.as_str());
let rhs: u64 = captures
.get(2)
.context("missing number")?
.as_str()
.parse()?;
let matches = match ineq {
"" | "==" => self.room_member_count == rhs,
"<" => self.room_member_count < rhs,
">" => self.room_member_count > rhs,
">=" => self.room_member_count >= rhs,
"<=" => self.room_member_count <= rhs,
_ => false,
};
Ok(matches)
}
}
#[test]
fn push_rule_evaluator() {
let mut flattened_keys = BTreeMap::new();
flattened_keys.insert("content.body".to_string(), "foo bar bob hello".to_string());
let evaluator = PushRuleEvaluator::py_new(
flattened_keys,
10,
Some(0),
BTreeMap::new(),
BTreeMap::new(),
true,
)
.unwrap();
let result = evaluator.run(&FilteredPushRules::default(), None, Some("bob"));
assert_eq!(result.len(), 3);
}
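
// A small additional sketch (not part of the original change) exercising the
// `is` clause parsing directly: a bare number means equality, and the usual
// comparison operators are supported.
#[test]
fn member_count_is_clause() {
    let evaluator = PushRuleEvaluator::py_new(
        BTreeMap::new(),
        10,
        None,
        BTreeMap::new(),
        BTreeMap::new(),
        false,
    )
    .unwrap();
    assert!(evaluator.match_member_count("10").unwrap());
    assert!(evaluator.match_member_count(">2").unwrap());
    assert!(!evaluator.match_member_count("<10").unwrap());
}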

rust/src/push/mod.rs (new file)
@@ -0,0 +1,505 @@
// Copyright 2022 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! An implementation of Matrix push rules.
//!
//! The `Cow<_>` type is used extensively within this module to allow creating
//! the base rules as constants (constants in Rust can't currently perform
//! explicit allocation).
//!
//! ---
//!
//! Push rules are the system used to determine which events trigger a push
//! (and a bump in notification counts).
//!
//! This consists of a list of "push rules" for each user, where a push rule is a
//! pair of "conditions" and "actions". When a user receives an event Synapse
//! iterates over the list of push rules until it finds one where all the conditions
//! match the event, at which point "actions" describe the outcome (e.g. notify,
//! highlight, etc).
//!
//! Push rules are split up into 5 different "kinds" (aka "priority classes"), which
//! are run in order:
//! 1. Override — highest priority rules, e.g. always ignore notices
//! 2. Content — content specific rules, e.g. @ notifications
//! 3. Room — per room rules, e.g. enable/disable notifications for all messages
//! in a room
//! 4. Sender — per sender rules, e.g. never notify for messages from a given
//! user
//! 5. Underride — the lowest priority "default" rules, e.g. notify for every
//! message.
//!
//! The set of "base rules" are the list of rules that every user has by default. A
//! user can modify their copy of the push rules in one of three ways:
//! 1. Adding a new push rule of a certain kind
//! 2. Changing the actions of a base rule
//! 3. Enabling/disabling a base rule.
//!
//! The base rules are split according to whether they come before or after a
//! particular kind, so the order of push rule evaluation is: base rules before
//! the "override" kind, user-defined "override" rules, base rules after the
//! "override" kind, and so on.
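//
// For illustration only (not part of the original change), a user-defined
// content rule might have a condition stored as the JSON:
//   {"kind": "event_match", "key": "content.body", "pattern": "coffee"}
// together with actions such as:
//   ["notify", {"set_tweak": "highlight"}]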
use std::borrow::Cow;
use std::collections::{BTreeMap, HashMap, HashSet};
use anyhow::{Context, Error};
use log::warn;
use pyo3::prelude::*;
use pythonize::{depythonize, pythonize};
use serde::de::Error as _;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use self::evaluator::PushRuleEvaluator;
mod base_rules;
pub mod evaluator;
pub mod utils;
/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &PyModule) -> PyResult<()> {
let child_module = PyModule::new(py, "push")?;
child_module.add_class::<PushRule>()?;
child_module.add_class::<PushRules>()?;
child_module.add_class::<FilteredPushRules>()?;
child_module.add_class::<PushRuleEvaluator>()?;
child_module.add_function(wrap_pyfunction!(get_base_rule_ids, m)?)?;
m.add_submodule(child_module)?;
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import push` work.
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.push", child_module)?;
Ok(())
}
#[pyfunction]
fn get_base_rule_ids() -> HashSet<&'static str> {
base_rules::BASE_RULES_BY_ID.keys().copied().collect()
}
/// A single push rule for a user.
#[derive(Debug, Clone)]
#[pyclass(frozen)]
pub struct PushRule {
/// A unique ID for this rule
pub rule_id: Cow<'static, str>,
/// The "kind" of push rule this is (see `PRIORITY_CLASS_MAP` in Python)
#[pyo3(get)]
pub priority_class: i32,
/// The conditions that must all match for actions to be applied
pub conditions: Cow<'static, [Condition]>,
/// The actions to apply if all conditions are met
pub actions: Cow<'static, [Action]>,
/// Whether this is a base rule
#[pyo3(get)]
pub default: bool,
/// Whether this is enabled by default
#[pyo3(get)]
pub default_enabled: bool,
}
#[pymethods]
impl PushRule {
#[staticmethod]
pub fn from_db(
rule_id: String,
priority_class: i32,
conditions: &str,
actions: &str,
) -> Result<PushRule, Error> {
let conditions = serde_json::from_str(conditions).context("parsing conditions")?;
let actions = serde_json::from_str(actions).context("parsing actions")?;
Ok(PushRule {
rule_id: Cow::Owned(rule_id),
priority_class,
conditions,
actions,
default: false,
default_enabled: true,
})
}
#[getter]
fn rule_id(&self) -> &str {
&self.rule_id
}
#[getter]
fn actions(&self) -> Vec<Action> {
self.actions.clone().into_owned()
}
#[getter]
fn conditions(&self) -> Vec<Condition> {
self.conditions.clone().into_owned()
}
fn __repr__(&self) -> String {
format!(
"<PushRule rule_id={}, conditions={:?}, actions={:?}>",
self.rule_id, self.conditions, self.actions
)
}
}
/// The "action" Synapse should perform for a matching push rule.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum Action {
DontNotify,
Notify,
Coalesce,
SetTweak(SetTweak),
// An unrecognized custom action.
Unknown(Value),
}
impl IntoPy<PyObject> for Action {
fn into_py(self, py: Python<'_>) -> PyObject {
        // When we pass the `Action` enum to Python we want it to be converted
        // to a dict. We use `pythonize`, which converts the value using the
        // `serde` serialization.
pythonize(py, &self).expect("valid action")
}
}
/// The body of a `SetTweak` push action.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
pub struct SetTweak {
set_tweak: Cow<'static, str>,
#[serde(skip_serializing_if = "Option::is_none")]
value: Option<TweakValue>,
// This picks up any other fields that may have been added by clients.
    // These get added when we convert the `Action` to a Python object.
#[serde(flatten)]
other_keys: Value,
}
/// The value of a `set_tweak`.
///
/// We need this (rather than using `Value` directly) so that we can use
/// `&'static str` in the value when defining the constant base rules.
#[derive(Serialize, Deserialize, Debug, Clone, PartialEq, Eq)]
#[serde(untagged)]
pub enum TweakValue {
String(Cow<'static, str>),
Other(Value),
}
impl Serialize for Action {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
match self {
Action::DontNotify => serializer.serialize_str("dont_notify"),
Action::Notify => serializer.serialize_str("notify"),
Action::Coalesce => serializer.serialize_str("coalesce"),
Action::SetTweak(tweak) => tweak.serialize(serializer),
Action::Unknown(value) => value.serialize(serializer),
}
}
}
/// Simple helper type for deserializing `Action` from JSON.
#[derive(Deserialize)]
#[serde(untagged)]
enum ActionDeserializeHelper {
Str(String),
SetTweak(SetTweak),
Unknown(Value),
}
impl<'de> Deserialize<'de> for Action {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let helper: ActionDeserializeHelper = Deserialize::deserialize(deserializer)?;
match helper {
ActionDeserializeHelper::Str(s) => match &*s {
"dont_notify" => Ok(Action::DontNotify),
"notify" => Ok(Action::Notify),
"coalesce" => Ok(Action::Coalesce),
_ => Err(D::Error::custom("unrecognized action")),
},
ActionDeserializeHelper::SetTweak(set_tweak) => Ok(Action::SetTweak(set_tweak)),
ActionDeserializeHelper::Unknown(value) => Ok(Action::Unknown(value)),
}
}
}
/// A condition used in push rules to match against an event.
///
/// We need this split as `serde` doesn't give us the ability to have a
/// "catchall" variant in tagged enums.
#[derive(Serialize, Deserialize, Debug, Clone)]
#[serde(untagged)]
pub enum Condition {
/// A recognized condition that we can match against
Known(KnownCondition),
/// An unrecognized condition that we ignore.
Unknown(Value),
}
/// The set of "known" conditions that we can handle.
#[derive(Serialize, Deserialize, Debug, Clone)]
#[serde(rename_all = "snake_case")]
#[serde(tag = "kind")]
pub enum KnownCondition {
EventMatch(EventMatchCondition),
ContainsDisplayName,
RoomMemberCount {
#[serde(skip_serializing_if = "Option::is_none")]
is: Option<Cow<'static, str>>,
},
SenderNotificationPermission {
key: Cow<'static, str>,
},
#[serde(rename = "org.matrix.msc3772.relation_match")]
RelationMatch {
rel_type: Cow<'static, str>,
#[serde(skip_serializing_if = "Option::is_none", rename = "type")]
event_type_pattern: Option<Cow<'static, str>>,
#[serde(skip_serializing_if = "Option::is_none")]
sender: Option<Cow<'static, str>>,
#[serde(skip_serializing_if = "Option::is_none")]
sender_type: Option<Cow<'static, str>>,
},
}
impl IntoPy<PyObject> for Condition {
fn into_py(self, py: Python<'_>) -> PyObject {
pythonize(py, &self).expect("valid condition")
}
}
impl<'source> FromPyObject<'source> for Condition {
fn extract(ob: &'source PyAny) -> PyResult<Self> {
Ok(depythonize(ob)?)
}
}
/// The body of a [`Condition::EventMatch`]
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct EventMatchCondition {
pub key: Cow<'static, str>,
#[serde(skip_serializing_if = "Option::is_none")]
pub pattern: Option<Cow<'static, str>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub pattern_type: Option<Cow<'static, str>>,
}
/// The collection of push rules for a user.
#[derive(Debug, Clone, Default)]
#[pyclass(frozen)]
pub struct PushRules {
/// Custom push rules that override a base rule.
overridden_base_rules: HashMap<Cow<'static, str>, PushRule>,
/// Custom rules that come between the prepend/append override base rules.
override_rules: Vec<PushRule>,
/// Custom rules that come before the base content rules.
content: Vec<PushRule>,
/// Custom rules that come before the base room rules.
room: Vec<PushRule>,
/// Custom rules that come before the base sender rules.
sender: Vec<PushRule>,
/// Custom rules that come before the base underride rules.
underride: Vec<PushRule>,
}
#[pymethods]
impl PushRules {
#[new]
pub fn new(rules: Vec<PushRule>) -> PushRules {
let mut push_rules: PushRules = Default::default();
for rule in rules {
if let Some(&o) = base_rules::BASE_RULES_BY_ID.get(&*rule.rule_id) {
push_rules.overridden_base_rules.insert(
rule.rule_id.clone(),
PushRule {
actions: rule.actions.clone(),
..o.clone()
},
);
continue;
}
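            // Map the priority class (see `PRIORITY_CLASS_MAP` in Python) to the
            // corresponding kind, from "override" (5) down to "underride" (1).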
match rule.priority_class {
5 => push_rules.override_rules.push(rule),
4 => push_rules.content.push(rule),
3 => push_rules.room.push(rule),
2 => push_rules.sender.push(rule),
1 => push_rules.underride.push(rule),
_ => {
warn!(
"Unrecognized priority class for rule {}: {}",
rule.rule_id, rule.priority_class
);
}
}
}
push_rules
}
/// Returns the list of all rules, including base rules, in the order they
    /// should be executed.
fn rules(&self) -> Vec<PushRule> {
self.iter().cloned().collect()
}
}
impl PushRules {
/// Iterates over all the rules, including base rules, in the order they
    /// should be executed.
pub fn iter(&self) -> impl Iterator<Item = &PushRule> {
base_rules::BASE_PREPEND_OVERRIDE_RULES
.iter()
.chain(self.override_rules.iter())
.chain(base_rules::BASE_APPEND_OVERRIDE_RULES.iter())
.chain(self.content.iter())
.chain(base_rules::BASE_APPEND_CONTENT_RULES.iter())
.chain(self.room.iter())
.chain(self.sender.iter())
.chain(self.underride.iter())
.chain(base_rules::BASE_APPEND_UNDERRIDE_RULES.iter())
.map(|rule| {
self.overridden_base_rules
.get(&*rule.rule_id)
.unwrap_or(rule)
})
}
}
/// A wrapper around `PushRules` that checks the enabled state of rules and
/// filters out disabled experimental rules.
#[derive(Debug, Clone, Default)]
#[pyclass(frozen)]
pub struct FilteredPushRules {
push_rules: PushRules,
enabled_map: BTreeMap<String, bool>,
msc3772_enabled: bool,
}
#[pymethods]
impl FilteredPushRules {
#[new]
pub fn py_new(
push_rules: PushRules,
enabled_map: BTreeMap<String, bool>,
msc3772_enabled: bool,
) -> Self {
Self {
push_rules,
enabled_map,
msc3772_enabled,
}
}
/// Returns the list of all rules and their enabled state, including base
    /// rules, in the order they should be executed.
fn rules(&self) -> Vec<(PushRule, bool)> {
self.iter().map(|(r, e)| (r.clone(), e)).collect()
}
}
impl FilteredPushRules {
/// Iterates over all the rules and their enabled state, including base
    /// rules, in the order they should be executed.
fn iter(&self) -> impl Iterator<Item = (&PushRule, bool)> {
self.push_rules
.iter()
.filter(|rule| {
// Ignore disabled experimental push rules
if !self.msc3772_enabled
&& rule.rule_id == "global/underride/.org.matrix.msc3772.thread_reply"
{
return false;
}
true
})
.map(|r| {
let enabled = *self
.enabled_map
.get(&*r.rule_id)
.unwrap_or(&r.default_enabled);
(r, enabled)
})
}
}
#[test]
fn test_serialize_condition() {
let condition = Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
key: "content.body".into(),
pattern: Some("coffee".into()),
pattern_type: None,
}));
let json = serde_json::to_string(&condition).unwrap();
assert_eq!(
json,
r#"{"kind":"event_match","key":"content.body","pattern":"coffee"}"#
)
}
#[test]
fn test_deserialize_condition() {
let json = r#"{"kind":"event_match","key":"content.body","pattern":"coffee"}"#;
let _: Condition = serde_json::from_str(json).unwrap();
}
#[test]
fn test_deserialize_custom_condition() {
let json = r#"{"kind":"custom_tag"}"#;
let condition: Condition = serde_json::from_str(json).unwrap();
assert!(matches!(condition, Condition::Unknown(_)));
let new_json = serde_json::to_string(&condition).unwrap();
assert_eq!(json, new_json);
}
#[test]
fn test_deserialize_action() {
let _: Action = serde_json::from_str(r#""notify""#).unwrap();
let _: Action = serde_json::from_str(r#""dont_notify""#).unwrap();
let _: Action = serde_json::from_str(r#""coalesce""#).unwrap();
let _: Action = serde_json::from_str(r#"{"set_tweak": "highlight"}"#).unwrap();
}
#[test]
fn test_custom_action() {
let json = r#"{"some_custom":"action_fields"}"#;
let action: Action = serde_json::from_str(json).unwrap();
assert!(matches!(action, Action::Unknown(_)));
let new_json = serde_json::to_string(&action).unwrap();
assert_eq!(json, new_json);
}
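
// Additional sketch test (not part of the original change): the experimental
// MSC3772 condition deserializes via its prefixed `kind`, with the optional
// fields defaulting to `None` when absent.
#[test]
fn test_deserialize_relation_match_condition() {
    let json = r#"{"kind":"org.matrix.msc3772.relation_match","rel_type":"m.annotation"}"#;
    let condition: Condition = serde_json::from_str(json).unwrap();
    assert!(matches!(
        condition,
        Condition::Known(KnownCondition::RelationMatch { .. })
    ));
}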

rust/src/push/utils.rs (new file)
@@ -0,0 +1,215 @@
// Copyright 2022 The Matrix.org Foundation C.I.C.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use anyhow::bail;
use anyhow::Context;
use anyhow::Error;
use lazy_static::lazy_static;
use regex;
use regex::Regex;
use regex::RegexBuilder;
lazy_static! {
/// Matches runs of non-wildcard characters followed by wildcard characters.
static ref WILDCARD_RUN: Regex = Regex::new(r"([^\?\*]*)([\?\*]*)").expect("valid regex");
}
/// Extract the localpart from a Matrix-style ID.
pub(crate) fn get_localpart_from_id(id: &str) -> Result<&str, Error> {
let (localpart, _) = id
.split_once(':')
.with_context(|| format!("ID does not contain colon: {id}"))?;
// We need to strip off the first character, which is the ID type.
if localpart.is_empty() {
bail!("Invalid ID {id}");
}
Ok(&localpart[1..])
}
/// Used by `glob_to_regex` to specify what to match the regex against.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum GlobMatchType {
/// The generated regex will match against the entire input.
Whole,
/// The generated regex will match against words.
Word,
}
/// Convert a "glob" style expression to a regex, anchoring either to the entire
/// input or to individual words.
pub fn glob_to_regex(glob: &str, match_type: GlobMatchType) -> Result<Regex, Error> {
let mut chunks = Vec::new();
// Patterns with wildcards must be simplified to avoid performance cliffs
// - The glob `?**?**?` is equivalent to the glob `???*`
// - The glob `???*` is equivalent to the regex `.{3,}`
for captures in WILDCARD_RUN.captures_iter(glob) {
if let Some(chunk) = captures.get(1) {
chunks.push(regex::escape(chunk.as_str()));
}
if let Some(wildcards) = captures.get(2) {
if wildcards.as_str() == "" {
continue;
}
let question_marks = wildcards.as_str().chars().filter(|c| *c == '?').count();
if wildcards.as_str().contains('*') {
chunks.push(format!(".{{{question_marks},}}"));
} else {
chunks.push(format!(".{{{question_marks}}}"));
}
}
}
let joined = chunks.join("");
let regex_str = match match_type {
GlobMatchType::Whole => format!(r"\A{joined}\z"),
// `^|\W` and `\W|$` handle the case where `pattern` starts or ends with a non-word
// character.
GlobMatchType::Word => format!(r"(?:^|\b|\W){joined}(?:\b|\W|$)"),
};
Ok(RegexBuilder::new(&regex_str)
.case_insensitive(true)
.build()?)
}
/// Compiles the glob into a `Matcher`.
pub fn get_glob_matcher(glob: &str, match_type: GlobMatchType) -> Result<Matcher, Error> {
    // There are a number of shortcuts we can make if the glob doesn't contain a
    // wildcard.
let matcher = if glob.contains(['*', '?']) {
let regex = glob_to_regex(glob, match_type)?;
Matcher::Regex(regex)
} else if match_type == GlobMatchType::Whole {
        // If there aren't any wildcards and we're matching the whole thing,
        // then we can simply do a case-insensitive string match.
Matcher::Whole(glob.to_lowercase())
} else {
        // Otherwise, if we're matching against words then we can first check
        // if the haystack contains the glob at all.
Matcher::Word {
word: glob.to_lowercase(),
regex: None,
}
};
Ok(matcher)
}
/// Matches against a glob
pub enum Matcher {
/// Plain regex matching.
Regex(Regex),
/// Case-insensitive equality.
Whole(String),
    /// Word matching. `regex` is a cache of calling [`glob_to_regex`] on `word`.
Word { word: String, regex: Option<Regex> },
}
impl Matcher {
/// Checks if the glob matches the given haystack.
pub fn is_match(&mut self, haystack: &str) -> Result<bool, Error> {
        // We want to do case-insensitive matching, so we convert to
        // lowercase first.
let haystack = haystack.to_lowercase();
match self {
Matcher::Regex(regex) => Ok(regex.is_match(&haystack)),
Matcher::Whole(whole) => Ok(whole == &haystack),
Matcher::Word { word, regex } => {
// If we're looking for a literal word, then we first check if
// the haystack contains the word as a substring.
if !haystack.contains(&*word) {
return Ok(false);
}
// If it does contain the word as a substring, then we need to
// check if it is an actual word by testing it against the regex.
let regex = if let Some(regex) = regex {
regex
} else {
let compiled_regex = glob_to_regex(word, GlobMatchType::Word)?;
regex.insert(compiled_regex)
};
Ok(regex.is_match(&haystack))
}
}
}
}
#[test]
fn test_get_localpart_from_id() {
get_localpart_from_id("").unwrap_err();
get_localpart_from_id(":").unwrap_err();
get_localpart_from_id(":asd").unwrap_err();
get_localpart_from_id("::as::asad").unwrap_err();
assert_eq!(get_localpart_from_id("@test:foo").unwrap(), "test");
assert_eq!(get_localpart_from_id("@:").unwrap(), "");
assert_eq!(get_localpart_from_id("@test:foo:907").unwrap(), "test");
}
#[test]
fn test_glob() -> Result<(), Error> {
assert_eq!(
glob_to_regex("simple", GlobMatchType::Whole)?.as_str(),
r"\Asimple\z"
);
assert_eq!(
glob_to_regex("simple*", GlobMatchType::Whole)?.as_str(),
r"\Asimple.{0,}\z"
);
assert_eq!(
glob_to_regex("simple?", GlobMatchType::Whole)?.as_str(),
r"\Asimple.{1}\z"
);
assert_eq!(
glob_to_regex("simple?*?*", GlobMatchType::Whole)?.as_str(),
r"\Asimple.{2,}\z"
);
assert_eq!(
glob_to_regex("simple???", GlobMatchType::Whole)?.as_str(),
r"\Asimple.{3}\z"
);
assert_eq!(
glob_to_regex("escape.", GlobMatchType::Whole)?.as_str(),
r"\Aescape\.\z"
);
assert!(glob_to_regex("simple", GlobMatchType::Whole)?.is_match("simple"));
assert!(!glob_to_regex("simple", GlobMatchType::Whole)?.is_match("simples"));
assert!(glob_to_regex("simple*", GlobMatchType::Whole)?.is_match("simples"));
assert!(glob_to_regex("simple?", GlobMatchType::Whole)?.is_match("simples"));
assert!(glob_to_regex("simple*", GlobMatchType::Whole)?.is_match("simple"));
assert!(glob_to_regex("simple", GlobMatchType::Word)?.is_match("some simple."));
assert!(glob_to_regex("simple", GlobMatchType::Word)?.is_match("simple"));
assert!(!glob_to_regex("simple", GlobMatchType::Word)?.is_match("simples"));
assert!(glob_to_regex("@user:foo", GlobMatchType::Word)?.is_match("Some @user:foo test"));
assert!(glob_to_regex("@user:foo", GlobMatchType::Word)?.is_match("@user:foo"));
Ok(())
}
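
// Additional sketch test (not part of the original change) illustrating the
// `Matcher` fast paths: globs without wildcards use a case-insensitive string
// comparison for `Whole` matches, and a substring pre-check for `Word` matches
// before falling back to the compiled regex.
#[test]
fn test_matcher_shortcuts() -> Result<(), Error> {
    let mut whole = get_glob_matcher("simple", GlobMatchType::Whole)?;
    assert!(whole.is_match("SIMPLE")?);
    assert!(!whole.is_match("simples")?);

    let mut word = get_glob_matcher("simple", GlobMatchType::Word)?;
    assert!(word.is_match("a simple test")?);
    assert!(!word.is_match("simplest")?);
    Ok(())
}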

@@ -88,10 +88,9 @@ def make_wrapper(factory: Callable[P, R]) -> Callable[P, R]:
@functools.wraps(factory)
def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
# type-ignore: should be redundant once we can use https://github.com/python/mypy/pull/12668
if "strict" not in kwargs: # type: ignore[attr-defined]
if "strict" not in kwargs:
raise MissingStrictInConstrainedTypeException(factory.__name__)
if not kwargs["strict"]: # type: ignore[index]
if not kwargs["strict"]:
raise MissingStrictInConstrainedTypeException(factory.__name__)
return factory(*args, **kwargs)

@@ -2,23 +2,16 @@
#
# This script generates SQL files for creating a brand new Synapse DB with the latest
# schema, on both SQLite3 and Postgres.
#
# It does so by having Synapse generate an up-to-date SQLite DB, then running
# synapse_port_db to convert it to Postgres. It then dumps the contents of both.
export PGHOST="localhost"
POSTGRES_DB_NAME="synapse_full_schema.$$"
SQLITE_SCHEMA_FILE="schema.sql.sqlite"
SQLITE_ROWS_FILE="rows.sql.sqlite"
POSTGRES_SCHEMA_FILE="full.sql.postgres"
POSTGRES_ROWS_FILE="rows.sql.postgres"
POSTGRES_MAIN_DB_NAME="synapse_full_schema_main.$$"
POSTGRES_COMMON_DB_NAME="synapse_full_schema_common.$$"
POSTGRES_STATE_DB_NAME="synapse_full_schema_state.$$"
REQUIRED_DEPS=("matrix-synapse" "psycopg2")
usage() {
echo
echo "Usage: $0 -p <postgres_username> -o <path> [-c] [-n] [-h]"
echo "Usage: $0 -p <postgres_username> -o <path> [-c] [-n <schema number>] [-h]"
echo
echo "-p <postgres_username>"
echo " Username to connect to local postgres instance. The password will be requested"
@@ -27,11 +20,19 @@ usage() {
echo " CI mode. Prints every command that the script runs."
echo "-o <path>"
echo " Directory to output full schema files to."
echo "-n <schema number>"
echo " Schema number for the new snapshot. Used to set the location of files within "
echo " the output directory, mimicking that of synapse/storage/schemas."
echo " Defaults to 9999."
echo "-h"
echo " Display this help text."
echo ""
echo " NB: make sure to run this against the *oldest* supported version of postgres,"
echo " or else pg_dump might output non-backwards-compatible syntax."
}
while getopts "p:co:h" opt; do
SCHEMA_NUMBER="9999"
while getopts "p:co:hn:" opt; do
case $opt in
p)
export PGUSER=$OPTARG
@@ -48,6 +49,9 @@ while getopts "p:co:h" opt; do
usage
exit
;;
n)
SCHEMA_NUMBER="$OPTARG"
;;
\?)
echo "ERROR: Invalid option: -$OPTARG" >&2
usage
@@ -95,12 +99,21 @@ cd "$(dirname "$0")/.."
TMPDIR=$(mktemp -d)
KEY_FILE=$TMPDIR/test.signing.key # default Synapse signing key path
SQLITE_CONFIG=$TMPDIR/sqlite.conf
SQLITE_DB=$TMPDIR/homeserver.db
SQLITE_MAIN_DB=$TMPDIR/main.db
SQLITE_STATE_DB=$TMPDIR/state.db
SQLITE_COMMON_DB=$TMPDIR/common.db
POSTGRES_CONFIG=$TMPDIR/postgres.conf
# Ensure these files are deleted on script exit
# TODO: the trap should also drop the temp postgres DB
trap 'rm -rf $TMPDIR' EXIT
cleanup() {
echo "Cleaning up temporary sqlite database and config files..."
rm -r "$TMPDIR"
echo "Cleaning up temporary Postgres database..."
dropdb --if-exists "$POSTGRES_COMMON_DB_NAME"
dropdb --if-exists "$POSTGRES_MAIN_DB_NAME"
dropdb --if-exists "$POSTGRES_STATE_DB_NAME"
}
trap 'cleanup' EXIT
cat > "$SQLITE_CONFIG" <<EOF
server_name: "test"
@@ -110,10 +123,22 @@ macaroon_secret_key: "abcde"
report_stats: false
database:
name: "sqlite3"
args:
database: "$SQLITE_DB"
databases:
common:
name: "sqlite3"
data_stores: []
args:
database: "$SQLITE_COMMON_DB"
main:
name: "sqlite3"
data_stores: ["main"]
args:
database: "$SQLITE_MAIN_DB"
state:
name: "sqlite3"
data_stores: ["state"]
args:
database: "$SQLITE_STATE_DB"
# Suppress the key server warning.
trusted_key_servers: []
@@ -127,13 +152,32 @@ macaroon_secret_key: "abcde"
report_stats: false
database:
name: "psycopg2"
args:
user: "$PGUSER"
host: "$PGHOST"
password: "$PGPASSWORD"
database: "$POSTGRES_DB_NAME"
databases:
common:
name: "psycopg2"
data_stores: []
args:
user: "$PGUSER"
host: "$PGHOST"
password: "$PGPASSWORD"
database: "$POSTGRES_COMMON_DB_NAME"
main:
name: "psycopg2"
data_stores: ["main"]
args:
user: "$PGUSER"
host: "$PGHOST"
password: "$PGPASSWORD"
database: "$POSTGRES_MAIN_DB_NAME"
state:
name: "psycopg2"
data_stores: ["state"]
args:
user: "$PGUSER"
host: "$PGHOST"
password: "$PGPASSWORD"
database: "$POSTGRES_STATE_DB_NAME"
# Suppress the key server warning.
trusted_key_servers: []
@@ -148,33 +192,105 @@ echo "Running db background jobs..."
synapse/_scripts/update_synapse_database.py --database-config "$SQLITE_CONFIG" --run-background-updates
# Create the PostgreSQL database.
echo "Creating postgres database..."
createdb --lc-collate=C --lc-ctype=C --template=template0 "$POSTGRES_DB_NAME"
echo "Creating postgres databases..."
createdb --lc-collate=C --lc-ctype=C --template=template0 "$POSTGRES_COMMON_DB_NAME"
createdb --lc-collate=C --lc-ctype=C --template=template0 "$POSTGRES_MAIN_DB_NAME"
createdb --lc-collate=C --lc-ctype=C --template=template0 "$POSTGRES_STATE_DB_NAME"
echo "Running db background jobs..."
synapse/_scripts/update_synapse_database.py --database-config "$POSTGRES_CONFIG" --run-background-updates
# Delete schema_version, applied_schema_deltas and applied_module_schemas tables
# Also delete any shadow tables from fts4
echo "Dropping unwanted db tables..."
SQL="
# Some common tables are created and updated by Synapse itself and do not belong in the
# schema.
DROP_APP_MANAGED_TABLES="
DROP TABLE schema_version;
DROP TABLE schema_compat_version;
DROP TABLE applied_schema_deltas;
DROP TABLE applied_module_schemas;
"
sqlite3 "$SQLITE_DB" <<< "$SQL"
psql "$POSTGRES_DB_NAME" -w <<< "$SQL"
# Other common tables are not created by Synapse and do belong in the schema.
# TODO: we could derive DROP_COMMON_TABLES from the dump of the common-only DB. But
# since there's only one table there, I haven't bothered to do so.
DROP_COMMON_TABLES="$DROP_APP_MANAGED_TABLES
DROP TABLE background_updates;
"
echo "Dumping SQLite3 schema to '$OUTPUT_DIR/$SQLITE_SCHEMA_FILE' and '$OUTPUT_DIR/$SQLITE_ROWS_FILE'..."
sqlite3 "$SQLITE_DB" ".schema --indent" > "$OUTPUT_DIR/$SQLITE_SCHEMA_FILE"
sqlite3 "$SQLITE_DB" ".dump --data-only --nosys" > "$OUTPUT_DIR/$SQLITE_ROWS_FILE"
sqlite3 "$SQLITE_COMMON_DB" <<< "$DROP_APP_MANAGED_TABLES"
sqlite3 "$SQLITE_MAIN_DB" <<< "$DROP_COMMON_TABLES"
sqlite3 "$SQLITE_STATE_DB" <<< "$DROP_COMMON_TABLES"
psql "$POSTGRES_COMMON_DB_NAME" -w <<< "$DROP_APP_MANAGED_TABLES"
psql "$POSTGRES_MAIN_DB_NAME" -w <<< "$DROP_COMMON_TABLES"
psql "$POSTGRES_STATE_DB_NAME" -w <<< "$DROP_COMMON_TABLES"
echo "Dumping Postgres schema to '$OUTPUT_DIR/$POSTGRES_SCHEMA_FILE' and '$OUTPUT_DIR/$POSTGRES_ROWS_FILE'..."
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner "$POSTGRES_DB_NAME" | sed -e '/^$/d' -e '/^--/d' -e 's/public\.//g' -e '/^SET /d' -e '/^SELECT /d' > "$OUTPUT_DIR/$POSTGRES_SCHEMA_FILE"
pg_dump --format=plain --data-only --inserts --no-tablespaces --no-acl --no-owner "$POSTGRES_DB_NAME" | sed -e '/^$/d' -e '/^--/d' -e 's/public\.//g' -e '/^SET /d' -e '/^SELECT /d' > "$OUTPUT_DIR/$POSTGRES_ROWS_FILE"
# For Reasons(TM), SQLite's `.schema` also dumps out "shadow tables", the implementation
# details behind full text search tables. Omit these from the dumps.
echo "Cleaning up temporary Postgres database..."
dropdb $POSTGRES_DB_NAME
sqlite3 "$SQLITE_MAIN_DB" <<< "
DROP TABLE event_search_content;
DROP TABLE event_search_segments;
DROP TABLE event_search_segdir;
DROP TABLE event_search_docsize;
DROP TABLE event_search_stat;
DROP TABLE user_directory_search_content;
DROP TABLE user_directory_search_segments;
DROP TABLE user_directory_search_segdir;
DROP TABLE user_directory_search_docsize;
DROP TABLE user_directory_search_stat;
"
echo "Dumping SQLite3 schema..."
mkdir -p "$OUTPUT_DIR/"{common,main,state}"/full_schemas/$SCHEMA_NUMBER"
sqlite3 "$SQLITE_COMMON_DB" ".schema" > "$OUTPUT_DIR/common/full_schemas/$SCHEMA_NUMBER/full.sql.sqlite"
sqlite3 "$SQLITE_COMMON_DB" ".dump --data-only --nosys" >> "$OUTPUT_DIR/common/full_schemas/$SCHEMA_NUMBER/full.sql.sqlite"
sqlite3 "$SQLITE_MAIN_DB" ".schema" > "$OUTPUT_DIR/main/full_schemas/$SCHEMA_NUMBER/full.sql.sqlite"
sqlite3 "$SQLITE_MAIN_DB" ".dump --data-only --nosys" >> "$OUTPUT_DIR/main/full_schemas/$SCHEMA_NUMBER/full.sql.sqlite"
sqlite3 "$SQLITE_STATE_DB" ".schema" > "$OUTPUT_DIR/state/full_schemas/$SCHEMA_NUMBER/full.sql.sqlite"
sqlite3 "$SQLITE_STATE_DB" ".dump --data-only --nosys" >> "$OUTPUT_DIR/state/full_schemas/$SCHEMA_NUMBER/full.sql.sqlite"
cleanup_pg_schema() {
# Cleanup as follows:
# - Remove empty lines. pg_dump likes to output a lot of these.
# - Remove comment-only lines. pg_dump also likes to output a lot of these to visually
# separate tables etc.
# - Remove "public." prefix --- the schema name.
# - Remove "SET" commands. Last time I ran this, the output commands were
# SET statement_timeout = 0;
# SET lock_timeout = 0;
# SET idle_in_transaction_session_timeout = 0;
# SET client_encoding = 'UTF8';
# SET standard_conforming_strings = on;
# SET check_function_bodies = false;
# SET xmloption = content;
# SET client_min_messages = warning;
# SET row_security = off;
# SET default_table_access_method = heap;
# - Very carefully remove specific SELECT statements. We CANNOT blanket remove all
# SELECT statements because some of those have side-effects which we do want in the
# schema. Last time I ran this, the only SELECTS were
# SELECT pg_catalog.set_config('search_path', '', false);
# and
# SELECT pg_catalog.setval(text, bigint, bool);
# We do want to remove the former, but the latter is important. If the last argument
# is `true` or omitted, this marks the given integer as having been consumed,
# and it will NOT appear as the nextval.
sed -e '/^$/d' \
-e '/^--/d' \
-e 's/public\.//g' \
-e '/^SET /d' \
-e '/^SELECT pg_catalog.set_config/d'
}
echo "Dumping Postgres schema..."
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner "$POSTGRES_COMMON_DB_NAME" | cleanup_pg_schema > "$OUTPUT_DIR/common/full_schemas/$SCHEMA_NUMBER/full.sql.postgres"
pg_dump --format=plain --data-only --inserts --no-tablespaces --no-acl --no-owner "$POSTGRES_COMMON_DB_NAME" | cleanup_pg_schema >> "$OUTPUT_DIR/common/full_schemas/$SCHEMA_NUMBER/full.sql.postgres"
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner "$POSTGRES_MAIN_DB_NAME" | cleanup_pg_schema > "$OUTPUT_DIR/main/full_schemas/$SCHEMA_NUMBER/full.sql.postgres"
pg_dump --format=plain --data-only --inserts --no-tablespaces --no-acl --no-owner "$POSTGRES_MAIN_DB_NAME" | cleanup_pg_schema >> "$OUTPUT_DIR/main/full_schemas/$SCHEMA_NUMBER/full.sql.postgres"
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner "$POSTGRES_STATE_DB_NAME" | cleanup_pg_schema > "$OUTPUT_DIR/state/full_schemas/$SCHEMA_NUMBER/full.sql.postgres"
pg_dump --format=plain --data-only --inserts --no-tablespaces --no-acl --no-owner "$POSTGRES_STATE_DB_NAME" | cleanup_pg_schema >> "$OUTPUT_DIR/state/full_schemas/$SCHEMA_NUMBER/full.sql.postgres"
echo "Done! Files dumped to: $OUTPUT_DIR"

@@ -29,7 +29,7 @@ class SynapsePlugin(Plugin):
self, fullname: str
) -> Optional[Callable[[MethodSigContext], CallableType]]:
if fullname.startswith(
"synapse.util.caches.descriptors._CachedFunction.__call__"
"synapse.util.caches.descriptors.CachedFunction.__call__"
) or fullname.startswith(
"synapse.util.caches.descriptors._LruCachedFunction.__call__"
):
@@ -38,7 +38,7 @@ class SynapsePlugin(Plugin):
def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
"""Fixes the `_CachedFunction.__call__` signature to be correct.
"""Fixes the `CachedFunction.__call__` signature to be correct.
It already has *almost* the correct signature, except:

@@ -0,0 +1,50 @@
from typing import Any, Collection, Dict, Mapping, Optional, Sequence, Set, Tuple, Union
from synapse.types import JsonDict
class PushRule:
@property
def rule_id(self) -> str: ...
@property
def priority_class(self) -> int: ...
@property
def conditions(self) -> Sequence[Mapping[str, str]]: ...
@property
def actions(self) -> Sequence[Union[Mapping[str, Any], str]]: ...
@property
def default(self) -> bool: ...
@property
def default_enabled(self) -> bool: ...
@staticmethod
def from_db(
rule_id: str, priority_class: int, conditions: str, actions: str
) -> "PushRule": ...
class PushRules:
def __init__(self, rules: Collection[PushRule]): ...
def rules(self) -> Collection[PushRule]: ...
class FilteredPushRules:
def __init__(
self, push_rules: PushRules, enabled_map: Dict[str, bool], msc3772_enabled: bool
): ...
def rules(self) -> Collection[Tuple[PushRule, bool]]: ...
def get_base_rule_ids() -> Collection[str]: ...
class PushRuleEvaluator:
def __init__(
self,
flattened_keys: Mapping[str, str],
room_member_count: int,
sender_power_level: Optional[int],
notification_power_levels: Mapping[str, int],
relations: Mapping[str, Set[Tuple[str, str]]],
relation_match_enabled: bool,
): ...
def run(
self,
push_rules: FilteredPushRules,
user_id: Optional[str],
display_name: Optional[str],
) -> Collection[dict]: ...
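
# Illustrative usage from Python (a sketch, not part of the stub; `flattened`
# and `filtered_rules` are assumed to be built elsewhere):
#
#     evaluator = PushRuleEvaluator(flattened, 3, 50, {}, {}, False)
#     actions = evaluator.run(filtered_rules, "@user:example.com", "Alice")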

@@ -107,10 +107,11 @@ BOOLEAN_COLUMNS = {
"redactions": ["have_censored"],
"room_stats_state": ["is_federatable"],
"local_media_repository": ["safe_from_quarantine"],
"users": ["shadow_banned"],
"users": ["shadow_banned", "approved"],
"e2e_fallback_keys_json": ["used"],
"access_tokens": ["used"],
"device_lists_changes_in_room": ["converted_to_destinations"],
"pushers": ["enabled"],
}

@@ -48,10 +48,13 @@ class MockHomeserver(HomeServer):
def run_background_updates(hs: HomeServer) -> None:
store = hs.get_datastores().main
main = hs.get_datastores().main
state = hs.get_datastores().state
async def run_background_updates() -> None:
await store.db_pool.updates.run_background_updates(sleep=False)
await main.db_pool.updates.run_background_updates(sleep=False)
if state:
await state.db_pool.updates.run_background_updates(sleep=False)
# Stop the reactor to exit the script once every background update is run.
reactor.stop()
@@ -97,8 +100,11 @@ def main() -> None:
# Load, process and sanity-check the config.
hs_config = yaml.safe_load(args.database_config)
if "database" not in hs_config:
sys.stderr.write("The configuration file must have a 'database' section.\n")
if "database" not in hs_config and "databases" not in hs_config:
sys.stderr.write(
"The configuration file must have a 'database' or 'databases' section. "
"See https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#database"
)
sys.exit(4)
config = HomeServerConfig()

@@ -31,6 +31,9 @@ MAX_ALIAS_LENGTH = 255
# the maximum length for a user id is 255 characters
MAX_USERID_LENGTH = 255
# Constant value used for the pseudo-thread which is the main timeline.
MAIN_TIMELINE: Final = "main"
class Membership:
@@ -269,3 +272,14 @@ class PublicRoomsFilterFields:
GENERIC_SEARCH_TERM: Final = "generic_search_term"
ROOM_TYPES: Final = "room_types"
class ApprovalNoticeMedium:
"""Identifier for the medium this server will use to serve notice of approval for a
specific user's registration.
As defined in https://github.com/matrix-org/matrix-spec-proposals/blob/babolivier/m_not_approved/proposals/3866-user-not-approved-error.md
"""
NONE = "org.matrix.msc3866.none"
EMAIL = "org.matrix.msc3866.email"

@@ -100,6 +100,14 @@ class Codes(str, Enum):
UNREDACTED_CONTENT_DELETED = "FI.MAU.MSC2815_UNREDACTED_CONTENT_DELETED"
# Returned for federation requests where we can't process a request as we
# can't ensure the sending server is in a room which is partial-stated on
# our side.
# Part of MSC3895.
UNABLE_DUE_TO_PARTIAL_STATE = "ORG.MATRIX.MSC3895_UNABLE_DUE_TO_PARTIAL_STATE"
USER_AWAITING_APPROVAL = "ORG.MATRIX.MSC3866_USER_AWAITING_APPROVAL"
class CodeMessageException(RuntimeError):
"""An exception with integer code and message string attributes.
@@ -560,6 +568,20 @@ class UnredactedContentDeletedError(SynapseError):
return cs_error(self.msg, self.errcode, **extra)
class NotApprovedError(SynapseError):
def __init__(
self,
msg: str,
approval_notice_medium: str,
):
super().__init__(
code=403,
msg=msg,
errcode=Codes.USER_AWAITING_APPROVAL,
additional_fields={"approval_notice_medium": approval_notice_medium},
)
def cs_error(msg: str, code: str = Codes.UNKNOWN, **kwargs: Any) -> "JsonDict":
"""Utility method for constructing an error response for client-server
interactions.

@@ -84,6 +84,7 @@ ROOM_EVENT_FILTER_SCHEMA = {
"contains_url": {"type": "boolean"},
"lazy_load_members": {"type": "boolean"},
"include_redundant_members": {"type": "boolean"},
"org.matrix.msc3773.unread_thread_notifications": {"type": "boolean"},
# Include or exclude events with the provided labels.
# cf https://github.com/matrix-org/matrix-doc/pull/2326
"org.matrix.labels": {"type": "array", "items": {"type": "string"}},
@@ -240,6 +241,9 @@ class FilterCollection:
def include_redundant_members(self) -> bool:
return self._room_state_filter.include_redundant_members
def unread_thread_notifications(self) -> bool:
return self._room_timeline_filter.unread_thread_notifications
async def filter_presence(
self, events: Iterable[UserPresenceState]
) -> List[UserPresenceState]:
@@ -304,6 +308,12 @@ class Filter:
self.include_redundant_members = filter_json.get(
"include_redundant_members", False
)
if hs.config.experimental.msc3773_enabled:
self.unread_thread_notifications: bool = filter_json.get(
"org.matrix.msc3773.unread_thread_notifications", False
)
else:
self.unread_thread_notifications = False
self.types = filter_json.get("types", None)
self.not_types = filter_json.get("not_types", [])

@@ -98,9 +98,7 @@ def register_sighup(func: Callable[P, None], *args: P.args, **kwargs: P.kwargs)
func: Function to be called when sent a SIGHUP signal.
*args, **kwargs: args and kwargs to be passed to the target function.
"""
# This type-ignore should be redundant once we use a mypy release with
# https://github.com/python/mypy/pull/12668.
_sighup_callbacks.append((func, args, kwargs)) # type: ignore[arg-type]
_sighup_callbacks.append((func, args, kwargs))
def start_worker_reactor(

@@ -53,9 +53,9 @@ logger = logging.getLogger("synapse.app.admin_cmd")
class AdminCmdSlavedStore(
SlavedFilteringStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedDeviceStore,
TagsWorkerStore,
DeviceInboxWorkerStore,
AccountDataWorkerStore,

@@ -51,11 +51,18 @@ import argparse
import importlib
import itertools
import multiprocessing
import os
import signal
import sys
from typing import Any, Callable, List
from types import FrameType
from typing import Any, Callable, List, Optional
from twisted.internet.main import installReactor
# a list of the original signal handlers, before we installed our custom ones.
# We restore these in our child processes.
_original_signal_handlers: dict[int, Any] = {}
class ProxiedReactor:
"""
@@ -105,6 +112,11 @@ def _worker_entrypoint(
sys.argv = args
# reset the custom signal handlers that we installed, so that the children start
# from a clean slate.
for sig, handler in _original_signal_handlers.items():
signal.signal(sig, handler)
from twisted.internet.epollreactor import EPollReactor
proxy_reactor._install_real_reactor(EPollReactor())
@@ -167,13 +179,29 @@ def main() -> None:
update_proc.join()
print("===== PREPARED DATABASE =====", file=sys.stderr)
processes: List[multiprocessing.Process] = []
# Install signal handlers to propagate signals to all our children, so that they
# shut down cleanly. This also inhibits our own exit, but that's good: we want to
# wait until the children have exited.
def handle_signal(signum: int, frame: Optional[FrameType]) -> None:
print(
f"complement_fork_starter: Caught signal {signum}. Stopping children.",
file=sys.stderr,
)
for p in processes:
if p.pid:
os.kill(p.pid, signum)
for sig in (signal.SIGINT, signal.SIGTERM):
_original_signal_handlers[sig] = signal.signal(sig, handle_signal)
# At this point, we've imported all the main entrypoints for all the workers.
# Now we basically just fork() out to create the workers we need.
# Because we're using fork(), all the workers get a clone of this launcher's
# memory space and don't need to repeat the work of loading the code!
# Instead of using fork() directly, we use the multiprocessing library,
# which uses fork() on Unix platforms.
processes = []
for (func, worker_args) in zip(worker_functions, args_by_worker):
process = multiprocessing.Process(
target=_worker_entrypoint, args=(func, proxy_reactor, worker_args)

@@ -120,7 +120,11 @@ class ApplicationServiceApi(SimpleHttpClient):
uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
try:
response = await self.get_json(uri, {"access_token": service.hs_token})
response = await self.get_json(
uri,
{"access_token": service.hs_token},
headers={"Authorization": f"Bearer {service.hs_token}"},
)
if response is not None: # just an empty json object
return True
except CodeMessageException as e:
@@ -140,7 +144,11 @@ class ApplicationServiceApi(SimpleHttpClient):
uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
try:
response = await self.get_json(uri, {"access_token": service.hs_token})
response = await self.get_json(
uri,
{"access_token": service.hs_token},
headers={"Authorization": f"Bearer {service.hs_token}"},
)
if response is not None: # just an empty json object
return True
except CodeMessageException as e:
@@ -181,7 +189,9 @@ class ApplicationServiceApi(SimpleHttpClient):
**fields,
b"access_token": service.hs_token,
}
response = await self.get_json(uri, args=args)
response = await self.get_json(
uri, args=args, headers={"Authorization": f"Bearer {service.hs_token}"}
)
if not isinstance(response, list):
logger.warning(
"query_3pe to %s returned an invalid response %r", uri, response
@@ -217,7 +227,11 @@ class ApplicationServiceApi(SimpleHttpClient):
urllib.parse.quote(protocol),
)
try:
info = await self.get_json(uri, {"access_token": service.hs_token})
info = await self.get_json(
uri,
{"access_token": service.hs_token},
headers={"Authorization": f"Bearer {service.hs_token}"},
)
if not _is_valid_3pe_metadata(info):
logger.warning(
@@ -313,6 +327,7 @@ class ApplicationServiceApi(SimpleHttpClient):
uri=uri,
json_body=body,
args={"access_token": service.hs_token},
headers={"Authorization": f"Bearer {service.hs_token}"},
)
if logger.isEnabledFor(logging.DEBUG):
logger.debug(

@@ -14,10 +14,25 @@
from typing import Any
import attr
from synapse.config._base import Config
from synapse.types import JsonDict
@attr.s(auto_attribs=True, frozen=True, slots=True)
class MSC3866Config:
"""Configuration for MSC3866 (mandating approval for new users)"""
# Whether the base support for the approval process is enabled. This includes the
# ability for administrators to check and update the approval of users, even if no
# approval is currently required.
enabled: bool = False
# Whether to require that new users are approved by an admin before their account
# can be used. Note that this setting is ignored if 'enabled' is false.
require_approval_for_new_accounts: bool = False
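    # Example homeserver configuration enabling the approval requirement (a
    # sketch; the YAML layout is inferred from how this config is read):
    #
    #   experimental_features:
    #     msc3866:
    #       enabled: true
    #       require_approval_for_new_accounts: true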
class ExperimentalConfig(Config):
"""Config section for enabling experimental features"""
@@ -63,7 +78,8 @@ class ExperimentalConfig(Config):
# MSC3706 (server-side support for partial state in /send_join responses)
self.msc3706_enabled: bool = experimental.get("msc3706_enabled", False)
# experimental support for faster joins over federation (msc2775, msc3706)
# experimental support for faster joins over federation
# (MSC2775, MSC3706, MSC3895)
# requires a target server with msc3706_enabled enabled.
self.faster_joins_enabled: bool = experimental.get("faster_joins", False)
@@ -79,11 +95,12 @@ class ExperimentalConfig(Config):
# MSC2815 (allow room moderators to view redacted event content)
self.msc2815_enabled: bool = experimental.get("msc2815_enabled", False)
# MSC3786 (Add a default push rule to ignore m.room.server_acl events)
self.msc3786_enabled: bool = experimental.get("msc3786_enabled", False)
# MSC3771: Thread read receipts
self.msc3771_enabled: bool = experimental.get("msc3771_enabled", False)
# MSC3772: A push rule for mutual relations.
self.msc3772_enabled: bool = experimental.get("msc3772_enabled", False)
# MSC3773: Thread notifications
self.msc3773_enabled: bool = experimental.get("msc3773_enabled", False)
# MSC3715: dir param on /relations.
self.msc3715_enabled: bool = experimental.get("msc3715_enabled", False)
@@ -93,3 +110,17 @@ class ExperimentalConfig(Config):
# MSC3852: Expose last seen user agent field on /_matrix/client/v3/devices.
self.msc3852_enabled: bool = experimental.get("msc3852_enabled", False)
# MSC3866: M_USER_AWAITING_APPROVAL error code
raw_msc3866_config = experimental.get("msc3866", {})
self.msc3866 = MSC3866Config(**raw_msc3866_config)
# MSC3881: Remotely toggle push notifications for another client
self.msc3881_enabled: bool = experimental.get("msc3881_enabled", False)
# MSC3882: Allow an existing session to sign in a new session
self.msc3882_enabled: bool = experimental.get("msc3882_enabled", False)
self.msc3882_ui_auth: bool = experimental.get("msc3882_ui_auth", True)
self.msc3882_token_timeout = self.parse_duration(
experimental.get("msc3882_token_timeout", "5m")
)

@@ -43,32 +43,6 @@ class MetricsConfig(Config):
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
self.enable_metrics = config.get("enable_metrics", False)
"""
### `enable_legacy_metrics` (experimental)
**Experimental: this option may be removed or have its behaviour
changed at any time, with no notice.**
Set to `true` to publish both legacy and non-legacy Prometheus metric names,
or to `false` to only publish non-legacy Prometheus metric names.
Defaults to `true`. Has no effect if `enable_metrics` is `false`.
Legacy metric names include:
- metrics containing colons in the name, such as `synapse_util_caches_response_cache:hits`, because colons are supposed to be reserved for user-defined recording rules;
- counters that don't end with the `_total` suffix, such as `synapse_federation_client_sent_edus`, therefore not adhering to the OpenMetrics standard.
These legacy metric names are unconventional and not compliant with OpenMetrics standards.
They are included for backwards compatibility.
Example configuration:
```yaml
enable_legacy_metrics: false
```
See https://github.com/matrix-org/synapse/issues/11106 for context.
*Since v1.67.0.*
"""
self.enable_legacy_metrics = config.get("enable_legacy_metrics", True)
self.report_stats = config.get("report_stats", None)

@@ -15,7 +15,18 @@
import logging
import typing
from typing import Any, Collection, Dict, Iterable, List, Optional, Set, Tuple, Union
from typing import (
Any,
Collection,
Dict,
Iterable,
List,
Mapping,
Optional,
Set,
Tuple,
Union,
)
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
@@ -134,6 +145,7 @@ def validate_event_for_room_version(event: "EventBase") -> None:
async def check_state_independent_auth_rules(
store: _EventSourceStore,
event: "EventBase",
batched_auth_events: Optional[Mapping[str, "EventBase"]] = None,
) -> None:
"""Check that an event complies with auth rules that are independent of room state
@@ -143,6 +155,8 @@ async def check_state_independent_auth_rules(
Args:
store: the datastore; used to fetch the auth events for validation
event: the event being checked.
batched_auth_events: if the event being authed is part of a batch, any events
from the same batch that may be necessary to auth the current event
Raises:
AuthError if the checks fail
@@ -162,6 +176,9 @@ async def check_state_independent_auth_rules(
redact_behaviour=EventRedactBehaviour.as_is,
allow_rejected=True,
)
if batched_auth_events:
auth_events.update(batched_auth_events)
room_id = event.room_id
auth_dict: MutableStateMap[str] = {}
expected_auth_types = auth_types_for_event(event.room_version, event)

@@ -289,6 +289,10 @@ class _EventInternalMetadata:
"""
return self._dict.get("historical", False)
def is_notifiable(self) -> bool:
"""Whether this event can trigger a push notification"""
return not self.is_outlier() or self.is_out_of_band_membership()
class EventBase(metaclass=abc.ABCMeta):
@property

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Awaitable, Callable, Optional
from synapse.api.constants import MAX_DEPTH, EventContentFields, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
@@ -58,7 +58,12 @@ class FederationBase:
@trace
async def _check_sigs_and_hash(
self, room_version: RoomVersion, pdu: EventBase
self,
room_version: RoomVersion,
pdu: EventBase,
record_failure_callback: Optional[
Callable[[EventBase, str], Awaitable[None]]
] = None,
) -> EventBase:
"""Checks that event is correctly signed by the sending server.
@@ -70,6 +75,11 @@ class FederationBase:
Args:
room_version: The room version of the PDU
pdu: the event to be checked
record_failure_callback: A callback to run whenever the given event
fails signature or hash checks. This includes exceptions
                fails signature or hash checks. This includes exceptions
                that would normally be thrown/raised, but also things like
checking for event tampering where we just return the redacted
event.
Returns:
* the original event if the checks pass
@@ -80,7 +90,12 @@ class FederationBase:
InvalidEventSignatureError if the signature check failed. Nothing
will be logged in this case.
"""
await _check_sigs_on_pdu(self.keyring, room_version, pdu)
try:
await _check_sigs_on_pdu(self.keyring, room_version, pdu)
except InvalidEventSignatureError as exc:
if record_failure_callback:
await record_failure_callback(pdu, str(exc))
raise exc
if not check_event_content_hash(pdu):
# let's try to distinguish between failures because the event was
@@ -116,6 +131,10 @@ class FederationBase:
"event_id": pdu.event_id,
}
)
if record_failure_callback:
await record_failure_callback(
pdu, "Event content has been tampered with"
)
return redacted_event
spam_check = await self.spam_checker.check_event_for_spam(pdu)

@@ -278,7 +278,7 @@ class FederationClient(FederationBase):
pdus = [event_from_pdu_json(p, room_version) for p in transaction_data_pdus]
# Check signatures and hash of pdus, removing any from the list that fail checks
pdus[:] = await self._check_sigs_and_hash_and_fetch(
pdus[:] = await self._check_sigs_and_hash_for_pulled_events_and_fetch(
dest, pdus, room_version=room_version
)
@@ -328,7 +328,17 @@ class FederationClient(FederationBase):
# Check signatures are correct.
try:
signed_pdu = await self._check_sigs_and_hash(room_version, pdu)
async def _record_failure_callback(
event: EventBase, cause: str
) -> None:
await self.store.record_event_failed_pull_attempt(
event.room_id, event.event_id, cause
)
signed_pdu = await self._check_sigs_and_hash(
room_version, pdu, _record_failure_callback
)
except InvalidEventSignatureError as e:
errmsg = f"event id {pdu.event_id}: {e}"
logger.warning("%s", errmsg)
@@ -547,24 +557,28 @@ class FederationClient(FederationBase):
len(auth_event_map),
)
valid_auth_events = await self._check_sigs_and_hash_and_fetch(
valid_auth_events = await self._check_sigs_and_hash_for_pulled_events_and_fetch(
destination, auth_event_map.values(), room_version
)
valid_state_events = await self._check_sigs_and_hash_and_fetch(
destination, state_event_map.values(), room_version
valid_state_events = (
await self._check_sigs_and_hash_for_pulled_events_and_fetch(
destination, state_event_map.values(), room_version
)
)
return valid_state_events, valid_auth_events
@trace
async def _check_sigs_and_hash_and_fetch(
async def _check_sigs_and_hash_for_pulled_events_and_fetch(
self,
origin: str,
pdus: Collection[EventBase],
room_version: RoomVersion,
) -> List[EventBase]:
"""Checks the signatures and hashes of a list of events.
"""
Checks the signatures and hashes of a list of pulled events we got from
federation and records any signature failures as failed pull attempts.
If a PDU fails its signature check then we check if we have it in
the database, and if not then request it from the sender's server (if that
@@ -597,11 +611,17 @@ class FederationClient(FederationBase):
valid_pdus: List[EventBase] = []
async def _record_failure_callback(event: EventBase, cause: str) -> None:
await self.store.record_event_failed_pull_attempt(
event.room_id, event.event_id, cause
)
async def _execute(pdu: EventBase) -> None:
valid_pdu = await self._check_sigs_and_hash_and_fetch_one(
pdu=pdu,
origin=origin,
room_version=room_version,
record_failure_callback=_record_failure_callback,
)
if valid_pdu:
@@ -618,6 +638,9 @@ class FederationClient(FederationBase):
pdu: EventBase,
origin: str,
room_version: RoomVersion,
record_failure_callback: Optional[
Callable[[EventBase, str], Awaitable[None]]
] = None,
) -> Optional[EventBase]:
"""Takes a PDU and checks its signatures and hashes.
@@ -634,6 +657,11 @@ class FederationClient(FederationBase):
origin
pdu
room_version
record_failure_callback: A callback to run whenever the given event
fails signature or hash checks. This includes exceptions
that would normally be thrown/raised but also things like
checking for event tampering where we just return the redacted
event.
Returns:
The PDU (possibly redacted) if it has valid signatures and hashes.
@@ -641,7 +669,9 @@ class FederationClient(FederationBase):
"""
try:
return await self._check_sigs_and_hash(room_version, pdu)
return await self._check_sigs_and_hash(
room_version, pdu, record_failure_callback
)
except InvalidEventSignatureError as e:
logger.warning(
"Signature on retrieved event %s was invalid (%s). "
@@ -694,7 +724,7 @@ class FederationClient(FederationBase):
auth_chain = [event_from_pdu_json(p, room_version) for p in res["auth_chain"]]
signed_auth = await self._check_sigs_and_hash_and_fetch(
signed_auth = await self._check_sigs_and_hash_for_pulled_events_and_fetch(
destination, auth_chain, room_version=room_version
)
@@ -1401,7 +1431,7 @@ class FederationClient(FederationBase):
event_from_pdu_json(e, room_version) for e in content.get("events", [])
]
signed_events = await self._check_sigs_and_hash_and_fetch(
signed_events = await self._check_sigs_and_hash_for_pulled_events_and_fetch(
destination, events, room_version=room_version
)
except HttpResponseException as e:

View File

@@ -530,13 +530,10 @@ class FederationServer(FederationBase):
async def on_room_state_request(
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, JsonDict]:
await self._event_auth_handler.assert_host_in_room(room_id, origin)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
# we grab the linearizer to protect ourselves from servers which hammer
# us. In theory we might already have the response to this query
# in the cache so we could return it without waiting for the linearizer
@@ -560,13 +557,10 @@ class FederationServer(FederationBase):
if not event_id:
raise NotImplementedError("Specify an event")
await self._event_auth_handler.assert_host_in_room(room_id, origin)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
resp = await self._state_ids_resp_cache.wrap(
(room_id, event_id),
self._on_state_ids_request_compute,
@@ -955,6 +949,7 @@ class FederationServer(FederationBase):
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, Dict[str, Any]]:
async with self._server_linearizer.queue((origin, room_id)):
await self._event_auth_handler.assert_host_in_room(room_id, origin)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)

View File

@@ -646,10 +646,25 @@ class _TransactionQueueManager:
# We start by fetching device related EDUs, i.e device updates and to
# device messages. We have to keep 2 free slots for presence and rr_edus.
limit = MAX_EDUS_PER_TRANSACTION - 2
device_edu_limit = MAX_EDUS_PER_TRANSACTION - 2
# We prioritize to-device messages so that existing encryption channels
# work. We also keep a few slots spare (by reducing the limit) so that
# we can still trickle out some device list updates.
(
to_device_edus,
device_stream_id,
) = await self.queue._get_to_device_message_edus(device_edu_limit - 10)
if to_device_edus:
self._device_stream_id = device_stream_id
else:
self.queue._last_device_stream_id = device_stream_id
device_edu_limit -= len(to_device_edus)
device_update_edus, dev_list_id = await self.queue._get_device_update_edus(
limit
device_edu_limit
)
if device_update_edus:
@@ -657,18 +672,6 @@ class _TransactionQueueManager:
else:
self.queue._last_device_list_stream_id = dev_list_id
limit -= len(device_update_edus)
(
to_device_edus,
device_stream_id,
) = await self.queue._get_to_device_message_edus(limit)
if to_device_edus:
self._device_stream_id = device_stream_id
else:
self.queue._last_device_stream_id = device_stream_id
pending_edus = device_update_edus + to_device_edus
# Now add the read receipt EDU.
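
The reordering above also changes the EDU budget arithmetic: to-device messages are now fetched first against a limit of MAX_EDUS_PER_TRANSACTION - 2 - 10, keeping two slots for presence and read receipts and holding ten back so device list updates can still trickle out. A rough worked example, with budget_edus as a hypothetical helper and the transaction cap assumed to be 100:

MAX_EDUS_PER_TRANSACTION = 100  # assumed value, mirroring the constant above

def budget_edus(n_to_device: int, n_device_updates: int):
    # Keep 2 slots free for presence and read-receipt EDUs.
    device_edu_limit = MAX_EDUS_PER_TRANSACTION - 2
    # To-device messages go first, but 10 slots are held back so device
    # list updates can still trickle out.
    to_device = min(n_to_device, device_edu_limit - 10)
    device_edu_limit -= to_device
    device_updates = min(n_device_updates, device_edu_limit)
    return to_device, device_updates

print(budget_edus(200, 50))  # (88, 10): to-device dominates, updates trickle
print(budget_edus(5, 50))    # (5, 50): plenty of room for both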

View File

@@ -45,6 +45,7 @@ from synapse.federation.units import Transaction
from synapse.http.matrixfederationclient import ByteParser
from synapse.http.types import QueryParams
from synapse.types import JsonDict
from synapse.util import ExceptionBundle
logger = logging.getLogger(__name__)
@@ -926,8 +927,7 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
return len(data)
def finish(self) -> SendJoinResponse:
for c in self._coros:
c.close()
_close_coros(self._coros)
if self._response.event_dict:
self._response.event = make_event_from_dict(
@@ -970,6 +970,27 @@ class _StateParser(ByteParser[StateRequestResponse]):
return len(data)
def finish(self) -> StateRequestResponse:
for c in self._coros:
c.close()
_close_coros(self._coros)
return self._response
def _close_coros(coros: Iterable[Generator[None, bytes, None]]) -> None:
"""Close each of the given coroutines.
Always calls .close() on each coroutine, even if doing so raises an exception.
Any exceptions raised are aggregated into an ExceptionBundle.
:raises ExceptionBundle: if at least one coroutine fails to close.
"""
exceptions = []
for c in coros:
try:
c.close()
except Exception as e:
exceptions.append(e)
if exceptions:
# raise from the first exception so that the traceback has slightly more context
raise ExceptionBundle(
f"There were {len(exceptions)} errors closing coroutines", exceptions
) from exceptions[0]
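
The point of _close_coros is that a failing .close() must not prevent the remaining coroutines from being closed. A runnable sketch of that behaviour, with a stand-in ExceptionBundle (the real one lives in synapse.util) and toy push-parser generators:

from typing import Generator, Iterable, List

class ExceptionBundle(Exception):
    # Stand-in for synapse.util.ExceptionBundle.
    def __init__(self, message: str, exceptions: List[Exception]):
        super().__init__(message)
        self.exceptions = exceptions

def _close_coros(coros: Iterable[Generator[None, bytes, None]]) -> None:
    exceptions = []
    for c in coros:
        try:
            c.close()
        except Exception as e:
            exceptions.append(e)
    if exceptions:
        raise ExceptionBundle(
            f"There were {len(exceptions)} errors closing coroutines", exceptions
        ) from exceptions[0]

def parser(name: str, fail_on_close: bool) -> Generator[None, bytes, None]:
    try:
        while True:
            yield  # consume chunks, as an ijson push-parser would
    finally:
        if fail_on_close:
            raise RuntimeError(f"{name} failed to close")

coros = [parser("a", True), parser("b", False), parser("c", True)]
for c in coros:
    next(c)  # prime the generators
try:
    _close_coros(coros)
except ExceptionBundle as e:
    print(len(e.exceptions))  # 2 -- but all three coroutines were closed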

View File

@@ -32,6 +32,7 @@ class AdminHandler:
self.store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self._state_storage_controller = self._storage_controllers.state
self._msc3866_enabled = hs.config.experimental.msc3866.enabled
async def get_whois(self, user: UserID) -> JsonDict:
connections = []
@@ -75,6 +76,10 @@ class AdminHandler:
"is_guest",
}
if self._msc3866_enabled:
# Only include the approved flag if support for MSC3866 is enabled.
user_info_to_return.add("approved")
# Restrict returned keys to a known set.
user_info_dict = {
key: value

View File

@@ -63,7 +63,6 @@ from synapse.http.server import finish_request, respond_with_html
from synapse.http.site import SynapseRequest
from synapse.logging.context import defer_to_thread
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.roommember import ProfileInfo
from synapse.types import JsonDict, Requester, UserID
from synapse.util import stringutils as stringutils
from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
@@ -1010,6 +1009,17 @@ class AuthHandler:
return res[0]
return None
async def is_user_approved(self, user_id: str) -> bool:
"""Checks if a user is approved and therefore can be allowed to log in.
Args:
user_id: the user to check the approval status of.
Returns:
A boolean that is True if the user is approved, False otherwise.
"""
return await self.store.is_user_approved(user_id)
async def _find_user_id_and_pwd_hash(
self, user_id: str
) -> Optional[Tuple[str, str]]:
@@ -1687,41 +1697,10 @@ class AuthHandler:
respond_with_html(request, 403, self._sso_account_deactivated_template)
return
profile = await self.store.get_profileinfo(
user_profile_data = await self.store.get_profileinfo(
UserID.from_string(registered_user_id).localpart
)
self._complete_sso_login(
registered_user_id,
auth_provider_id,
request,
client_redirect_url,
extra_attributes,
new_user=new_user,
user_profile_data=profile,
auth_provider_session_id=auth_provider_session_id,
)
def _complete_sso_login(
self,
registered_user_id: str,
auth_provider_id: str,
request: Request,
client_redirect_url: str,
extra_attributes: Optional[JsonDict] = None,
new_user: bool = False,
user_profile_data: Optional[ProfileInfo] = None,
auth_provider_session_id: Optional[str] = None,
) -> None:
"""
The synchronous portion of complete_sso_login.
This exists purely for backwards compatibility of synapse.module_api.ModuleApi.
"""
if user_profile_data is None:
user_profile_data = ProfileInfo(None, None)
# Store any extra attributes which will be passed in the login response.
# Note that this is per-user so it may overwrite a previous value, this
# is considered OK since the newest SSO attributes should be most valid.

View File

@@ -130,6 +130,9 @@ class CasHandler:
except PartialDownloadError as pde:
# Twisted raises this error if the connection is closed,
# even if that's being used old-http style to signal end-of-data
# Assertion is for mypy's benefit. Error.response is Optional[bytes],
# but a PartialDownloadError should always have a non-None response.
assert pde.response is not None
body = pde.response
except HttpResponseException as e:
description = (

View File

@@ -195,7 +195,9 @@ class DeviceWorkerHandler:
possibly_changed = set(changed)
possibly_left = set()
for room_id in rooms_changed:
current_state_ids = await self._state_storage.get_current_state_ids(room_id)
current_state_ids = await self._state_storage.get_current_state_ids(
room_id, await_full_state=False
)
# The user may have left the room
# TODO: Check if they actually did or if we were just invited.
@@ -234,7 +236,8 @@ class DeviceWorkerHandler:
# mapping from event_id -> state_dict
prev_state_ids = await self._state_storage.get_state_ids_for_events(
event_ids
event_ids,
await_full_state=False,
)
# Check if we've joined the room? If so we just blindly add all the users to
@@ -270,11 +273,9 @@ class DeviceWorkerHandler:
possibly_left = possibly_changed | possibly_left
# Double check if we still share rooms with the given user.
users_rooms = await self.store.get_rooms_for_users_with_stream_ordering(
possibly_left
)
users_rooms = await self.store.get_rooms_for_users(possibly_left)
for changed_user_id, entries in users_rooms.items():
if any(e.room_id in room_ids for e in entries):
if any(rid in room_ids for rid in entries):
possibly_left.discard(changed_user_id)
else:
possibly_joined.discard(changed_user_id)
@@ -306,6 +307,17 @@ class DeviceWorkerHandler:
"self_signing_key": self_signing_key,
}
async def handle_room_un_partial_stated(self, room_id: str) -> None:
"""Handles sending appropriate device list updates in a room that has
gone from partial to full state.
"""
# TODO(faster_joins): worker mode support
# https://github.com/matrix-org/synapse/issues/12994
logger.error(
"Trying handling device list state for partial join: not supported on workers."
)
class DeviceHandler(DeviceWorkerHandler):
def __init__(self, hs: "HomeServer"):
@@ -688,11 +700,15 @@ class DeviceHandler(DeviceWorkerHandler):
# Ignore any users that aren't ours
if self.hs.is_mine_id(user_id):
hosts = set(
await self._storage_controllers.state.get_current_hosts_in_room(
await self._storage_controllers.state.get_current_hosts_in_room_or_partial_state_approximation(
room_id
)
)
hosts.discard(self.server_name)
# For rooms with partial state, `hosts` is merely an
# approximation. When we transition to a full state room, we
# will have to send out device list updates to any servers we
# missed.
# Check if we've already sent this update to some hosts
if current_stream_id == stream_id:
@@ -739,6 +755,95 @@ class DeviceHandler(DeviceWorkerHandler):
finally:
self._handle_new_device_update_is_processing = False
async def handle_room_un_partial_stated(self, room_id: str) -> None:
"""Handles sending appropriate device list updates in a room that has
gone from partial to full state.
"""
# We defer to the device list updater to handle pending remote device
# list updates.
await self.device_list_updater.handle_room_un_partial_stated(room_id)
# Replay local updates.
(
join_event_id,
device_lists_stream_id,
) = await self.store.get_join_event_id_and_device_lists_stream_id_for_partial_state(
room_id
)
# Get the local device list changes that have happened in the room since
# we started joining. If there are no updates there's nothing left to do.
changes = await self.store.get_device_list_changes_in_room(
room_id, device_lists_stream_id
)
local_changes = {(u, d) for u, d in changes if self.hs.is_mine_id(u)}
if not local_changes:
return
# Note: We have persisted the full state at this point, we just haven't
# cleared the `partial_room` flag.
join_state_ids = await self._state_storage.get_state_ids_for_event(
join_event_id, await_full_state=False
)
current_state_ids = await self.store.get_partial_current_state_ids(room_id)
# Now we need to work out all servers that might have been in the room
# at any point during our join.
# First we look for any membership states that have changed between the
# initial join and now...
all_keys = set(join_state_ids)
all_keys.update(current_state_ids)
potentially_changed_hosts = set()
for etype, state_key in all_keys:
if etype != EventTypes.Member:
continue
prev = join_state_ids.get((etype, state_key))
current = current_state_ids.get((etype, state_key))
if prev != current:
potentially_changed_hosts.add(get_domain_from_id(state_key))
# ... then we add all the hosts that are currently joined to the room...
current_hosts_in_room = await self.store.get_current_hosts_in_room(room_id)
potentially_changed_hosts.update(current_hosts_in_room)
# ... and finally we remove any hosts that we were told about, as we
# will have sent device list updates to those hosts when they happened.
known_hosts_at_join = await self.store.get_partial_state_servers_at_join(
room_id
)
potentially_changed_hosts.difference_update(known_hosts_at_join)
potentially_changed_hosts.discard(self.server_name)
if not potentially_changed_hosts:
# Nothing to do.
return
logger.info(
"Found %d changed hosts to send device list updates to",
len(potentially_changed_hosts),
)
for user_id, device_id in local_changes:
await self.store.add_device_list_outbound_pokes(
user_id=user_id,
device_id=device_id,
room_id=room_id,
stream_id=None,
hosts=potentially_changed_hosts,
context=None,
)
# Notify things that device lists need to be sent out.
self.notifier.notify_replication()
for host in potentially_changed_hosts:
self.federation_sender.send_device_messages(host, immediate=False)
def _update_device_from_client_ips(
device: JsonDict, client_ips: Mapping[Tuple[str, str], Mapping[str, Any]]
@@ -829,6 +934,16 @@ class DeviceListUpdater:
)
return
# Check if we are partially joining any rooms. If so we need to store
# all device list updates so that we can handle them correctly once we
# know who is in the room.
partial_rooms = await self.store.get_partial_state_rooms_and_servers()
if partial_rooms:
await self.store.add_remote_device_list_to_pending(
user_id,
device_id,
)
room_ids = await self.store.get_rooms_for_user(user_id)
if not room_ids:
# We don't share any rooms with this user. Ignore update, as we
@@ -1168,3 +1283,35 @@ class DeviceListUpdater:
device_ids.append(verify_key.version)
return device_ids
async def handle_room_un_partial_stated(self, room_id: str) -> None:
"""Handles sending appropriate device list updates in a room that has
gone from partial to full state.
"""
pending_updates = (
await self.store.get_pending_remote_device_list_updates_for_room(room_id)
)
for user_id, device_id in pending_updates:
logger.info(
"Got pending device list update in room %s: %s / %s",
room_id,
user_id,
device_id,
)
position = await self.store.add_device_change_to_streams(
user_id,
[device_id],
room_ids=[room_id],
)
if not position:
# This should only happen if there are no updates, which
# shouldn't happen when we've passed in a non-empty set of
# device IDs.
continue
self.device_handler.notifier.on_new_event(
StreamKeyType.DEVICE_LIST, position, rooms=[room_id]
)
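
The host-diffing step in handle_room_un_partial_stated above boils down to set arithmetic: diff the membership state between the join event and now, union in the currently joined hosts, then subtract the hosts we already knew about at join time (they were told as updates happened). A compact sketch with made-up data:

def get_domain_from_id(user_id: str) -> str:
    # "@alice:one.example" -> "one.example"
    return user_id.split(":", 1)[1]

join_state = {("m.room.member", "@a:one.example"): "$ev1"}
current_state = {
    ("m.room.member", "@a:one.example"): "$ev2",  # membership changed
    ("m.room.member", "@b:two.example"): "$ev3",  # joined since the join event
}

potentially_changed = set()
for etype, state_key in set(join_state) | set(current_state):
    if etype != "m.room.member":
        continue
    if join_state.get((etype, state_key)) != current_state.get((etype, state_key)):
        potentially_changed.add(get_domain_from_id(state_key))

current_hosts = {"one.example", "three.example"}
potentially_changed |= current_hosts

known_at_join = {"three.example"}  # already received updates as they happened
potentially_changed -= known_at_join
potentially_changed.discard("my.server")  # never send to ourselves

print(sorted(potentially_changed))  # ['one.example', 'two.example']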

View File

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Collection, List, Optional, Union
from typing import TYPE_CHECKING, Collection, List, Mapping, Optional, Union
from synapse import event_auth
from synapse.api.constants import (
@@ -29,9 +29,7 @@ from synapse.event_auth import (
)
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.events.snapshot import EventContext
from synapse.types import StateMap, get_domain_from_id
from synapse.util.metrics import Measure
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -52,12 +50,21 @@ class EventAuthHandler:
async def check_auth_rules_from_context(
self,
event: EventBase,
context: EventContext,
batched_auth_events: Optional[Mapping[str, EventBase]] = None,
) -> None:
"""Check an event passes the auth rules at its own auth events"""
await check_state_independent_auth_rules(self._store, event)
"""Check an event passes the auth rules at its own auth events
Args:
event: event to be authed
batched_auth_events: if the event being authed is part of a batch, any events
from the same batch that may be necessary to auth the current event
"""
await check_state_independent_auth_rules(
self._store, event, batched_auth_events
)
auth_event_ids = event.auth_event_ids()
auth_events_by_id = await self._store.get_events(auth_event_ids)
if batched_auth_events:
auth_events_by_id.update(batched_auth_events)
check_state_dependent_auth_rules(event, auth_events_by_id.values())
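
Events authed as part of a batch are not yet in the database, so the batch has to be merged over the store lookup before the state-dependent checks run. A minimal sketch of that merge, with plain dicts standing in for the store and for EventBase objects:

def get_auth_events(store_events, auth_event_ids, batched_auth_events=None):
    # Look up whatever the store already has...
    auth_events_by_id = {
        eid: store_events[eid] for eid in auth_event_ids if eid in store_events
    }
    # ...then overlay events from the current batch, which are not yet
    # persisted and so cannot come back from the store.
    if batched_auth_events:
        auth_events_by_id.update(batched_auth_events)
    return auth_events_by_id

store = {"$create": {"type": "m.room.create"}}
batch = {"$power": {"type": "m.room.power_levels"}}
print(get_auth_events(store, ["$create", "$power"], batch))
# both events resolved: $create from the store, $power from the batch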
def compute_auth_events(
@@ -156,9 +163,33 @@ class EventAuthHandler:
Codes.UNABLE_TO_GRANT_JOIN,
)
async def check_host_in_room(self, room_id: str, host: str) -> bool:
with Measure(self._clock, "check_host_in_room"):
return await self._store.is_host_joined(room_id, host)
async def is_host_in_room(self, room_id: str, host: str) -> bool:
return await self._store.is_host_joined(room_id, host)
async def assert_host_in_room(
self, room_id: str, host: str, allow_partial_state_rooms: bool = False
) -> None:
"""
Asserts that the host is in the room, or raises an AuthError.
If the room is partial-stated, we raise an AuthError with the
UNABLE_DUE_TO_PARTIAL_STATE error code, unless `allow_partial_state_rooms` is true.
If allow_partial_state_rooms is True and the room is partial-stated,
this function may return an incorrect result as we are not able to fully
track server membership in a room without full state.
"""
if not allow_partial_state_rooms and await self._store.is_partial_state_room(
room_id
):
raise AuthError(
403,
"Unable to authorise you right now; room is partial-stated here.",
errcode=Codes.UNABLE_DUE_TO_PARTIAL_STATE,
)
if not await self.is_host_in_room(room_id, host):
raise AuthError(403, "Host not in room.")
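
assert_host_in_room folds the repeated "check then raise AuthError(403)" pattern seen at the call sites into one guard, and additionally refuses to answer for partial-state rooms. A self-contained sketch of the control flow, with a faked store and an illustrative errcode string in place of Codes.UNABLE_DUE_TO_PARTIAL_STATE:

import asyncio

class AuthError(Exception):
    def __init__(self, code: int, msg: str, errcode: str = "M_FORBIDDEN"):
        super().__init__(msg)
        self.code, self.errcode = code, errcode

class FakeStore:
    partial_rooms = {"!partial:example"}
    joined = {("!room:example", "other.example")}

    async def is_partial_state_room(self, room_id: str) -> bool:
        return room_id in self.partial_rooms

    async def is_host_joined(self, room_id: str, host: str) -> bool:
        return (room_id, host) in self.joined

async def assert_host_in_room(
    store: FakeStore, room_id: str, host: str, allow_partial_state_rooms: bool = False
) -> None:
    if not allow_partial_state_rooms and await store.is_partial_state_room(room_id):
        # errcode is illustrative; the real value comes from Codes.
        raise AuthError(403, "room is partial-stated here", "UNABLE_DUE_TO_PARTIAL_STATE")
    if not await store.is_host_joined(room_id, host):
        raise AuthError(403, "Host not in room.")

async def main() -> None:
    store = FakeStore()
    await assert_host_in_room(store, "!room:example", "other.example")  # passes
    try:
        await assert_host_in_room(store, "!partial:example", "other.example")
    except AuthError as e:
        print(e.errcode)  # UNABLE_DUE_TO_PARTIAL_STATE

asyncio.run(main())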
async def check_restricted_join_rules(
self,

View File

@@ -38,7 +38,7 @@ from signedjson.sign import verify_signed_json
from unpaddedbase64 import decode_base64
from synapse import event_auth
from synapse.api.constants import EventContentFields, EventTypes, Membership
from synapse.api.constants import MAX_DEPTH, EventContentFields, EventTypes, Membership
from synapse.api.errors import (
AuthError,
CodeMessageException,
@@ -149,6 +149,8 @@ class FederationHandler:
self.http_client = hs.get_proxied_blacklisted_http_client()
self._replication = hs.get_replication_data_handler()
self._federation_event_handler = hs.get_federation_event_handler()
self._device_handler = hs.get_device_handler()
self._bulk_push_rule_evaluator = hs.get_bulk_push_rule_evaluator()
self._clean_room_for_join_client = ReplicationCleanRoomRestServlet.make_client(
hs
@@ -209,7 +211,7 @@ class FederationHandler:
current_depth: int,
limit: int,
*,
processing_start_time: int,
processing_start_time: Optional[int],
) -> bool:
"""
Checks whether the `current_depth` is at or approaching any backfill
@@ -221,13 +223,22 @@ class FederationHandler:
room_id: The room to backfill in.
current_depth: The depth to check at for any upcoming backfill points.
limit: The max number of events to request from the remote federated server.
processing_start_time: The time when `maybe_backfill` started
processing. Only used for timing.
processing_start_time: The time when `maybe_backfill` started processing.
Only used for timing. If `None`, no timing observation will be made.
"""
backwards_extremities = [
_BackfillPoint(event_id, depth, _BackfillPointType.BACKWARDS_EXTREMITY)
for event_id, depth in await self.store.get_oldest_event_ids_with_depth_in_room(
room_id
for event_id, depth in await self.store.get_backfill_points_in_room(
room_id=room_id,
current_depth=current_depth,
# We only need to end up with 5 extremities combined with the
# insertion event extremities to make the `/backfill` request
# but fetch an order of magnitude more to make sure there is
# enough even after we filter them by whether visible in the
# history. This isn't fool-proof as all backfill points within
# our limit could be filtered out but seems like a good amount
# to try with at least.
limit=50,
)
]
@@ -236,7 +247,12 @@ class FederationHandler:
insertion_events_to_be_backfilled = [
_BackfillPoint(event_id, depth, _BackfillPointType.INSERTION_PONT)
for event_id, depth in await self.store.get_insertion_event_backward_extremities_in_room(
room_id
room_id=room_id,
current_depth=current_depth,
# We only need to end up with 5 extremities combined with
# the backfill points to make the `/backfill` request ...
# (see the other comment above for more context).
limit=50,
)
]
logger.debug(
@@ -245,10 +261,6 @@ class FederationHandler:
insertion_events_to_be_backfilled,
)
if not backwards_extremities and not insertion_events_to_be_backfilled:
logger.debug("Not backfilling as no extremeties found.")
return False
# we now have a list of potential places to backpaginate from. We prefer to
# start with the most recent (ie, max depth), so let's sort the list.
sorted_backfill_points: List[_BackfillPoint] = sorted(
@@ -269,6 +281,33 @@ class FederationHandler:
sorted_backfill_points,
)
# If we have no backfill points lower than the `current_depth` then
# we can either a) bail or b) still attempt to backfill. We opt to try
# backfilling anyway just in case we do get relevant events.
if not sorted_backfill_points and current_depth != MAX_DEPTH:
logger.debug(
"_maybe_backfill_inner: all backfill points are *after* current depth. Trying again with later backfill points."
)
return await self._maybe_backfill_inner(
room_id=room_id,
# We use `MAX_DEPTH` so that we find all backfill points next
# time (all events are below the `MAX_DEPTH`)
current_depth=MAX_DEPTH,
limit=limit,
# We don't want to start another timing observation from this
# nested recursive call. The top-most call can record the overall
# time; otherwise the smaller one will throw off the results.
processing_start_time=None,
)
# Even after recursing with `MAX_DEPTH`, we didn't find any
# backward extremities to backfill from.
if not sorted_backfill_points:
logger.debug(
"_maybe_backfill_inner: Not backfilling as no backward extremeties found."
)
return False
# If we're approaching an extremity we trigger a backfill, otherwise we
# no-op.
#
@@ -278,47 +317,16 @@ class FederationHandler:
# chose more than one times the limit in case of failure, but choosing a
# much larger factor will result in triggering a backfill request much
# earlier than necessary.
#
# XXX: shouldn't we do this *after* the filter by depth below? Again, we don't
# care about events that have happened after our current position.
#
max_depth = sorted_backfill_points[0].depth
if current_depth - 2 * limit > max_depth:
max_depth_of_backfill_points = sorted_backfill_points[0].depth
if current_depth - 2 * limit > max_depth_of_backfill_points:
logger.debug(
"Not backfilling as we don't need to. %d < %d - 2 * %d",
max_depth,
max_depth_of_backfill_points,
current_depth,
limit,
)
return False
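
The renamed max_depth_of_backfill_points makes the trigger condition easier to read: we only backfill once current_depth is within 2 * limit of the deepest backfill point. A small worked example of that arithmetic:

def should_backfill(current_depth: int, limit: int, max_depth_of_backfill_points: int) -> bool:
    # Skip backfilling while we are still more than 2*limit events away
    # from the deepest backfill point.
    return not (current_depth - 2 * limit > max_depth_of_backfill_points)

print(should_backfill(current_depth=100, limit=10, max_depth_of_backfill_points=50))  # False
print(should_backfill(current_depth=100, limit=10, max_depth_of_backfill_points=85))  # True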
# We ignore extremities that have a greater depth than our current depth
# as:
# 1. we don't really care about getting events that have happened
# after our current position; and
# 2. we have likely previously tried and failed to backfill from that
# extremity, so to avoid getting "stuck" requesting the same
# backfill repeatedly we drop those extremities.
#
# However, we need to check that the filtered extremities are non-empty.
# If they are empty then we can either a) bail or b) still attempt to
# backfill. We opt to try backfilling anyway just in case we do get
# relevant events.
#
filtered_sorted_backfill_points = [
t for t in sorted_backfill_points if t.depth <= current_depth
]
if filtered_sorted_backfill_points:
logger.debug(
"_maybe_backfill_inner: backfill points before current depth: %s",
filtered_sorted_backfill_points,
)
sorted_backfill_points = filtered_sorted_backfill_points
else:
logger.debug(
"_maybe_backfill_inner: all backfill points are *after* current depth. Backfilling anyway."
)
# For performance's sake, we only want to paginate from a particular extremity
# if we can actually see the events we'll get. Otherwise, we'd just spend a lot
# of resources to get redacted events. We check each extremity in turn and
@@ -404,11 +412,22 @@ class FederationHandler:
# First we try hosts that are already in the room.
# TODO: HEURISTIC ALERT.
likely_domains = (
await self._storage_controllers.state.get_current_hosts_in_room(room_id)
await self._storage_controllers.state.get_current_hosts_in_room_ordered(
room_id
)
)
async def try_backfill(domains: Collection[str]) -> bool:
# TODO: Should we try multiple of these at a time?
# Number of contacted remote homeservers that have denied our backfill
# request with a 4xx code.
denied_count = 0
# Maximum number of contacted remote homeservers that can deny our
# backfill request with 4xx codes before we give up.
max_denied_count = 5
for dom in domains:
# We don't want to ask our own server for information we don't have
if dom == self.server_name:
@@ -427,13 +446,33 @@ class FederationHandler:
continue
except HttpResponseException as e:
if 400 <= e.code < 500:
raise e.to_synapse_error()
logger.warning(
"Backfill denied from %s because %s [%d/%d]",
dom,
e,
denied_count,
max_denied_count,
)
denied_count += 1
if denied_count >= max_denied_count:
return False
continue
logger.info("Failed to backfill from %s because %s", dom, e)
continue
except CodeMessageException as e:
if 400 <= e.code < 500:
raise
logger.warning(
"Backfill denied from %s because %s [%d/%d]",
dom,
e,
denied_count,
max_denied_count,
)
denied_count += 1
if denied_count >= max_denied_count:
return False
continue
logger.info("Failed to backfill from %s because %s", dom, e)
continue
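
Instead of aborting on the first 4xx response, the handler now counts denials across candidate servers and gives up after five. A sketch of that counter pattern with simulated status codes (in Synapse the code comes from HttpResponseException / CodeMessageException):

def try_backfill(domains, responses, max_denied_count: int = 5) -> bool:
    denied_count = 0
    for dom in domains:
        status = responses.get(dom, 200)
        if 400 <= status < 500:
            denied_count += 1
            if denied_count >= max_denied_count:
                return False  # too many denials; stop hammering servers
            continue
        if status != 200:
            continue  # transient failure; try the next domain
        return True  # backfill succeeded
    return False

responses = {f"hs{i}.example": 403 for i in range(5)}
responses["hs5.example"] = 200
# The five 403s exhaust the budget before hs5 is ever contacted.
print(try_backfill(list(responses), responses))  # False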
@@ -452,10 +491,15 @@ class FederationHandler:
return False
processing_end_time = self.clock.time_msec()
backfill_processing_before_timer.observe(
(processing_end_time - processing_start_time) / 1000
)
# If we have the `processing_start_time`, then we can make an
# observation. We wouldn't have the `processing_start_time` in the case
# where `_maybe_backfill_inner` is recursively called to find any
# backfill points regardless of `current_depth`.
if processing_start_time is not None:
processing_end_time = self.clock.time_msec()
backfill_processing_before_timer.observe(
(processing_end_time - processing_start_time) / 1000
)
success = await try_backfill(likely_domains)
if success:
@@ -583,7 +627,11 @@ class FederationHandler:
# Mark the room as having partial state.
# The background process is responsible for unmarking this flag,
# even if the join fails.
await self.store.store_partial_state_room(room_id, ret.servers_in_room)
await self.store.store_partial_state_room(
room_id=room_id,
servers=ret.servers_in_room,
device_lists_stream_id=self.store.get_device_stream_token(),
)
try:
max_stream_id = (
@@ -608,6 +656,14 @@ class FederationHandler:
room_id,
)
raise LimitExceededError(msg=e.msg, errcode=e.errcode, retry_after_ms=0)
else:
# Record the join event id for future use (when we finish the full
# join). We have to do this after persisting the event to keep foreign
# key constraints intact.
if ret.partial_state:
await self.store.write_partial_state_rooms_join_event_id(
room_id, event.event_id
)
finally:
# Always kick off the background process that asynchronously fetches
# state for the room.
@@ -804,7 +860,7 @@ class FederationHandler:
)
# now check that we are *still* in the room
is_in_room = await self._event_auth_handler.check_host_in_room(
is_in_room = await self._event_auth_handler.is_host_in_room(
room_id, self.server_name
)
if not is_in_room:
@@ -872,7 +928,7 @@ class FederationHandler:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_join_request`
await self._event_auth_handler.check_auth_rules_from_context(event, context)
await self._event_auth_handler.check_auth_rules_from_context(event)
return event
async def on_invite_request(
@@ -946,9 +1002,17 @@ class FederationHandler:
)
context = EventContext.for_outlier(self._storage_controllers)
await self._federation_event_handler.persist_events_and_notify(
event.room_id, [(event, context)]
await self._bulk_push_rule_evaluator.action_for_event_by_user(
[(event, context)]
)
try:
await self._federation_event_handler.persist_events_and_notify(
event.room_id, [(event, context)]
)
except Exception:
await self.store.remove_push_actions_from_staging(event.event_id)
raise
return event
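
Because push actions are now computed (and staged) before the event is persisted, a failed persist has to clean the staged rows up again. A sketch of that compensation pattern: stage_push_actions is a hypothetical stand-in for the staging that action_for_event_by_user performs internally, while remove_push_actions_from_staging mirrors the store method used above.

import asyncio

class FakeStore:
    def __init__(self):
        self.staged = set()

    async def stage_push_actions(self, event_id: str) -> None:
        self.staged.add(event_id)

    async def remove_push_actions_from_staging(self, event_id: str) -> None:
        self.staged.discard(event_id)

async def persist_with_cleanup(store: FakeStore, event_id: str, persist) -> None:
    await store.stage_push_actions(event_id)
    try:
        await persist(event_id)
    except Exception:
        # Undo the staging so failed events do not leak staged rows.
        await store.remove_push_actions_from_staging(event_id)
        raise

async def main() -> None:
    store = FakeStore()

    async def failing_persist(event_id: str) -> None:
        raise RuntimeError("persist failed")

    try:
        await persist_with_cleanup(store, "$ev", failing_persist)
    except RuntimeError:
        pass
    print(store.staged)  # set() -- staging was cleaned up

asyncio.run(main())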
@@ -1047,7 +1111,7 @@ class FederationHandler:
try:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_leave_request`
await self._event_auth_handler.check_auth_rules_from_context(event, context)
await self._event_auth_handler.check_auth_rules_from_context(event)
except AuthError as e:
logger.warning("Failed to create new leave %r because %s", event, e)
raise e
@@ -1106,7 +1170,7 @@ class FederationHandler:
try:
# The remote hasn't signed it yet, obviously. We'll do the full checks
# when we get the event back in `on_send_knock_request`
await self._event_auth_handler.check_auth_rules_from_context(event, context)
await self._event_auth_handler.check_auth_rules_from_context(event)
except AuthError as e:
logger.warning("Failed to create new knock %r because %s", event, e)
raise e
@@ -1150,9 +1214,7 @@ class FederationHandler:
async def on_backfill_request(
self, origin: str, room_id: str, pdu_list: List[str], limit: int
) -> List[EventBase]:
in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
await self._event_auth_handler.assert_host_in_room(room_id, origin)
# Synapse asks for 100 events per backfill request. Do not allow more.
limit = min(limit, 100)
@@ -1198,21 +1260,17 @@ class FederationHandler:
event_id, allow_none=True, allow_rejected=True
)
if event:
in_room = await self._event_auth_handler.check_host_in_room(
event.room_id, origin
)
if not in_room:
raise AuthError(403, "Host not in room.")
events = await filter_events_for_server(
self._storage_controllers, origin, [event]
)
event = events[0]
return event
else:
if not event:
return None
await self._event_auth_handler.assert_host_in_room(event.room_id, origin)
events = await filter_events_for_server(
self._storage_controllers, origin, [event]
)
event = events[0]
return event
async def on_get_missing_events(
self,
origin: str,
@@ -1221,9 +1279,7 @@ class FederationHandler:
latest_events: List[str],
limit: int,
) -> List[EventBase]:
in_room = await self._event_auth_handler.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
await self._event_auth_handler.assert_host_in_room(room_id, origin)
# Only allow up to 20 events to be retrieved per request.
limit = min(limit, 20)
@@ -1257,7 +1313,7 @@ class FederationHandler:
"state_key": target_user_id,
}
if await self._event_auth_handler.check_host_in_room(room_id, self.hs.hostname):
if await self._event_auth_handler.is_host_in_room(room_id, self.hs.hostname):
room_version_obj = await self.store.get_room_version(room_id)
builder = self.event_builder_factory.for_room_version(
room_version_obj, event_dict
@@ -1280,9 +1336,7 @@ class FederationHandler:
try:
validate_event_for_room_version(event)
await self._event_auth_handler.check_auth_rules_from_context(
event, context
)
await self._event_auth_handler.check_auth_rules_from_context(event)
except AuthError as e:
logger.warning("Denying new third party invite %r because %s", event, e)
raise e
@@ -1332,7 +1386,7 @@ class FederationHandler:
try:
validate_event_for_room_version(event)
await self._event_auth_handler.check_auth_rules_from_context(event, context)
await self._event_auth_handler.check_auth_rules_from_context(event)
except AuthError as e:
logger.warning("Denying third party invite %r because %s", event, e)
raise e
@@ -1622,6 +1676,9 @@ class FederationHandler:
# https://github.com/matrix-org/synapse/issues/12994
await self.state_handler.update_current_state(room_id)
logger.info("Handling any pending device list updates")
await self._device_handler.handle_room_un_partial_stated(room_id)
logger.info("Clearing partial-state flag for %s", room_id)
success = await self.store.clear_partial_state_room(room_id)
if success:

View File

@@ -238,7 +238,7 @@ class FederationEventHandler:
#
# Note that if we were never in the room then we would have already
# dropped the event, since we wouldn't know the room version.
is_in_room = await self._event_auth_handler.check_host_in_room(
is_in_room = await self._event_auth_handler.is_host_in_room(
room_id, self._server_name
)
if not is_in_room:
@@ -866,11 +866,6 @@ class FederationEventHandler:
event.room_id, event_id, str(err)
)
return
except Exception as exc:
await self._store.record_event_failed_pull_attempt(
event.room_id, event_id, str(exc)
)
raise exc
try:
try:
@@ -913,11 +908,6 @@ class FederationEventHandler:
logger.warning("Pulled event %s failed history check.", event_id)
else:
raise
except Exception as exc:
await self._store.record_event_failed_pull_attempt(
event.room_id, event_id, str(exc)
)
raise exc
@trace
async def _compute_event_context_with_maybe_missing_prevs(
@@ -2128,7 +2118,7 @@ class FederationEventHandler:
)
else:
await self._bulk_push_rule_evaluator.action_for_event_by_user(
event, context
[(event, context)]
)
try:
@@ -2170,6 +2160,7 @@ class FederationEventHandler:
if instance != self._instance_name:
# Limit the number of events sent over replication. We choose 200
# here as that is what we default to in `max_request_body_size(..)`
result = {}
try:
for batch in batch_iter(event_and_contexts, 200):
result = await self._send_events(
@@ -2249,8 +2240,8 @@ class FederationEventHandler:
event_pos = PersistedEventPosition(
self._instance_name, event.internal_metadata.stream_ordering
)
await self._notifier.on_new_room_event(
event, event_pos, max_stream_token, extra_users=extra_users
await self._notifier.on_new_room_events(
[(event, event_pos)], max_stream_token, extra_users=extra_users
)
if event.type == EventTypes.Member and event.membership == Membership.JOIN:

File diff suppressed because it is too large

View File

@@ -16,14 +16,17 @@ from typing import TYPE_CHECKING, List, Optional, Union
import attr
from synapse.api.errors import SynapseError, UnrecognizedRequestError
from synapse.push.baserules import BASE_RULE_IDS
from synapse.storage.push_rule import RuleNotFoundException
from synapse.synapse_rust.push import get_base_rule_ids
from synapse.types import JsonDict
if TYPE_CHECKING:
from synapse.server import HomeServer
BASE_RULE_IDS = get_base_rule_ids()
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RuleSpec:
scope: str

View File

@@ -63,6 +63,8 @@ class ReceiptsHandler:
self.clock = self.hs.get_clock()
self.state = hs.get_state_handler()
self._msc3771_enabled = hs.config.experimental.msc3771_enabled
async def _received_remote_receipt(self, origin: str, content: JsonDict) -> None:
"""Called when we receive an EDU of type m.receipt from a remote HS."""
receipts = []
@@ -70,7 +72,7 @@ class ReceiptsHandler:
# If we're not in the room just ditch the event entirely. This is
# probably an old server that has come back and thinks we're still in
# the room (or we've been rejoined to the room by a state reset).
is_in_room = await self.event_auth_handler.check_host_in_room(
is_in_room = await self.event_auth_handler.is_host_in_room(
room_id, self.server_name
)
if not is_in_room:
@@ -91,13 +93,23 @@ class ReceiptsHandler:
)
continue
# Check if these receipts apply to a thread.
thread_id = None
data = user_values.get("data", {})
if self._msc3771_enabled and isinstance(data, dict):
thread_id = data.get("thread_id")
# If the thread ID is invalid, consider it missing.
if not isinstance(thread_id, str):
thread_id = None
receipts.append(
ReadReceipt(
room_id=room_id,
receipt_type=receipt_type,
user_id=user_id,
event_ids=user_values["event_ids"],
data=user_values.get("data", {}),
thread_id=thread_id,
data=data,
)
)
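
The MSC3771 handling above treats a malformed thread_id as absent rather than rejecting the whole receipt. A tiny sketch of that extraction:

from typing import Any, Optional

def extract_thread_id(data: Any, msc3771_enabled: bool) -> Optional[str]:
    thread_id = None
    if msc3771_enabled and isinstance(data, dict):
        thread_id = data.get("thread_id")
        # If the thread ID is invalid, consider it missing.
        if not isinstance(thread_id, str):
            thread_id = None
    return thread_id

print(extract_thread_id({"thread_id": "$root"}, True))   # $root
print(extract_thread_id({"thread_id": 42}, True))        # None
print(extract_thread_id("not-a-dict", True))             # None
print(extract_thread_id({"thread_id": "$root"}, False))  # None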
@@ -114,6 +126,7 @@ class ReceiptsHandler:
receipt.receipt_type,
receipt.user_id,
receipt.event_ids,
receipt.thread_id,
receipt.data,
)
@@ -146,7 +159,12 @@ class ReceiptsHandler:
return True
async def received_client_receipt(
self, room_id: str, receipt_type: str, user_id: str, event_id: str
self,
room_id: str,
receipt_type: str,
user_id: str,
event_id: str,
thread_id: Optional[str],
) -> None:
"""Called when a client tells us a local user has read up to the given
event_id in the room.
@@ -156,6 +174,7 @@ class ReceiptsHandler:
receipt_type=receipt_type,
user_id=user_id,
event_ids=[event_id],
thread_id=thread_id,
data={"ts": int(self.clock.time_msec())},
)

View File

@@ -220,6 +220,7 @@ class RegistrationHandler:
by_admin: bool = False,
user_agent_ips: Optional[List[Tuple[str, str]]] = None,
auth_provider_id: Optional[str] = None,
approved: bool = False,
) -> str:
"""Registers a new client on the server.
@@ -246,6 +247,8 @@ class RegistrationHandler:
user_agent_ips: Tuples of user-agents and IP addresses used
during the registration process.
auth_provider_id: The SSO IdP the user used, if any.
approved: True if the new user should be considered already
approved by an administrator.
Returns:
The registered user_id.
Raises:
@@ -307,6 +310,7 @@ class RegistrationHandler:
user_type=user_type,
address=address,
shadow_banned=shadow_banned,
approved=approved,
)
profile = await self.store.get_profileinfo(localpart)
@@ -695,6 +699,7 @@ class RegistrationHandler:
user_type: Optional[str] = None,
address: Optional[str] = None,
shadow_banned: bool = False,
approved: bool = False,
) -> None:
"""Register user in the datastore.
@@ -713,6 +718,7 @@ class RegistrationHandler:
api.constants.UserTypes, or None for a normal user.
address: the IP address used to perform the registration.
shadow_banned: Whether to shadow-ban the user
approved: Whether to mark the user as approved by an administrator
"""
if self.hs.config.worker.worker_app:
await self._register_client(
@@ -726,6 +732,7 @@ class RegistrationHandler:
user_type=user_type,
address=address,
shadow_banned=shadow_banned,
approved=approved,
)
else:
await self.store.register_user(
@@ -738,6 +745,7 @@ class RegistrationHandler:
admin=admin,
user_type=user_type,
shadow_banned=shadow_banned,
approved=approved,
)
# Only call the account validity module(s) on the main process, to avoid
@@ -997,7 +1005,7 @@ class RegistrationHandler:
assert user_tuple
token_id = user_tuple.token_id
await self.pusher_pool.add_pusher(
await self.pusher_pool.add_or_update_pusher(
user_id=user_id,
access_token=token_id,
kind="email",
@@ -1005,7 +1013,7 @@ class RegistrationHandler:
app_display_name="Email Notifications",
device_display_name=threepid["address"],
pushkey=threepid["address"],
lang=None, # We don't know a user's language here
lang=None,
data={},
)

View File

@@ -78,6 +78,7 @@ class RelationsHandler:
direction: str = "b",
from_token: Optional[StreamToken] = None,
to_token: Optional[StreamToken] = None,
include_original_event: bool = False,
) -> JsonDict:
"""Get related events of a event, ordered by topological ordering.
@@ -94,6 +95,7 @@ class RelationsHandler:
oldest first (`"f"`).
from_token: Fetch rows from the given token, or from the start if None.
to_token: Fetch rows up to the given token, or up to the end if None.
include_original_event: Whether to include the parent event.
Returns:
The pagination chunk.
@@ -138,25 +140,24 @@ class RelationsHandler:
is_peeking=(member_event_id is None),
)
now = self._clock.time_msec()
# Do not bundle aggregations when retrieving the original event because
# we want the content before relations are applied to it.
original_event = self._event_serializer.serialize_event(
event, now, bundle_aggregations=None
)
# The relations returned for the requested event do include their
# bundled aggregations.
aggregations = await self.get_bundled_aggregations(
events, requester.user.to_string()
)
serialized_events = self._event_serializer.serialize_events(
events, now, bundle_aggregations=aggregations
)
return_value = {
"chunk": serialized_events,
"original_event": original_event,
now = self._clock.time_msec()
return_value: JsonDict = {
"chunk": self._event_serializer.serialize_events(
events, now, bundle_aggregations=aggregations
),
}
if include_original_event:
# Do not bundle aggregations when retrieving the original event because
# we want the content before relations are applied to it.
return_value["original_event"] = self._event_serializer.serialize_event(
event, now, bundle_aggregations=None
)
if next_token:
return_value["next_batch"] = await next_token.to_string(self._main_store)

Some files were not shown because too many files have changed in this diff