Compare commits

90 Commits

Author SHA1 Message Date
Erik Johnston
5de571987e WIP: Type TreeCache 2022-07-18 13:05:07 +01:00
Erik Johnston
129691f190 Comment TreeCache 2022-07-18 10:49:12 +01:00
Erik Johnston
462db2a171 Comment LruCache 2022-07-18 10:48:24 +01:00
Erik Johnston
7f7b36d56d Comment LruCache 2022-07-18 10:40:36 +01:00
Erik Johnston
057ae8b61c Comments 2022-07-17 11:34:34 +01:00
Erik Johnston
7aceec3ed9 Fix up 2022-07-15 16:52:16 +01:00
Erik Johnston
cad555f07c Better stuff 2022-07-15 16:31:53 +01:00
Erik Johnston
23c2f394a5 Fix mypy 2022-07-15 16:30:56 +01:00
Erik Johnston
602a81f5a2 don't update access 2022-07-15 16:24:10 +01:00
Erik Johnston
f046366d2a Fix test 2022-07-15 15:54:01 +01:00
Erik Johnston
a22716c5c5 Fix literal 2022-07-15 15:31:57 +01:00
Erik Johnston
40a8fba5f6 Newsfile 2022-07-15 15:27:03 +01:00
Erik Johnston
326a175987 Make DictionaryCache have better expiry properties 2022-07-15 15:26:02 +01:00
Erik Johnston
0731e0829c Don't pull out the full state when storing state (#13274) 2022-07-15 12:59:45 +00:00
Patrick Cloke
3343035a06 Use a real room in the notification rotation tests. (#13260)
Instead of manually inserting fake data. This fixes some issues with
having to manually calculate stream orderings and other oddities.
2022-07-15 08:22:43 -04:00
David Robertson
7281591f4c Use state before join to determine if we _should_perform_remote_join (#13270)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2022-07-15 12:20:47 +00:00
Sean Quah
d765ada84f Update locked frozendict version to 2.3.2 (#13284)
`frozendict` 2.3.2 includes a fix for a memory leak in
`frozendict.__hash__`. This likely has no impact outside of the
deprecated `/initialSync` endpoint, which uses `StreamToken`s,
containing `RoomStreamToken`s, containing `frozendict`s, as cache keys.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-07-15 13:18:51 +01:00
Richard van der Hoff
b116d3ce00 Bg update to populate new events table columns (#13215)
These columns were added back in Synapse 1.52, and have been populated for new
events since then. It's now (beyond) time to back-populate them for existing
events.
2022-07-15 12:47:26 +01:00
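
A sketch of the batching pattern such a background update follows (process a chunk, persist a progress marker, repeat), written as a standalone illustration with sqlite3 and the `state_key` column named in the changelog entry further down. This is not Synapse's actual background-update machinery; the helper name and batch size are invented:

```python
import sqlite3
from typing import Optional

BATCH_SIZE = 100

def backfill_batch(conn: sqlite3.Connection, last_seen: str) -> Optional[str]:
    """Populate `state_key` for one batch of existing rows; return the new
    progress marker, or None once every row has been populated."""
    rows = conn.execute(
        "SELECT event_id FROM events WHERE event_id > ? ORDER BY event_id LIMIT ?",
        (last_seen, BATCH_SIZE),
    ).fetchall()
    if not rows:
        return None  # nothing left: the background update can be retired
    for (event_id,) in rows:
        # In Synapse the value would be derived from the stored event;
        # an empty string stands in for that here.
        conn.execute(
            "UPDATE events SET state_key = ? WHERE event_id = ?",
            ("", event_id),
        )
    conn.commit()
    return rows[-1][0]  # persisted as progress so a restart resumes here
```
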
Erik Johnston
7be954f59b Fix a bug which could lead to incorrect state (#13278)
There are two fixes here:
1. A long-standing bug where we incorrectly calculated `delta_ids`; and
2. A bug introduced in #13267 where we got current state incorrect.
2022-07-15 11:06:41 +00:00
Richard van der Hoff
512486bbeb Docker: copy postgres from base image (#13279)
When building the docker images for complement testing, copy a preinstalled
postgres over from a base image, rather than apt installing it. This avoids
network traffic and is much faster.
2022-07-15 11:13:40 +01:00
Nick Mills-Barrett
cc21a431f3 Async get event cache prep (#13242)
Some experimental prep work to enable external event caching based on #9379 & #12955. Doesn't actually move the cache at all, just lays the groundwork for async-implemented caches.

Signed off by Nick @ Beeper (@Fizzadar)
2022-07-15 09:30:46 +00:00
Nick Mills-Barrett
21eeacc995 Federation Sender & Appservice Pusher Stream Optimisations (#13251)
* Replace `get_new_events_for_appservice` with `get_all_new_events_stream`

The functions were near identical and this brings the AS worker closer
to the way federation senders work which can allow for multiple workers
to handle AS traffic.

* Pull received TS alongside events when processing the stream

This avoids an extra query per event when both the federation sender
and the appservice pusher process events.
2022-07-15 09:36:56 +01:00
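
The second change is a classic N+1 elimination: rather than issuing a separate timestamp lookup for each event pulled off the stream, the timestamp is selected alongside the event. A rough standalone illustration (sqlite3; the queries are simplified stand-ins for Synapse's, though `received_ts` and `stream_ordering` are real columns on its `events` table):

```python
import sqlite3

def events_with_ts_per_event(conn: sqlite3.Connection, from_id: int, to_id: int):
    # Before: one query for the stream, then one extra query per event
    # to fetch its received timestamp.
    events = conn.execute(
        "SELECT event_id FROM events"
        " WHERE stream_ordering > ? AND stream_ordering <= ?"
        " ORDER BY stream_ordering",
        (from_id, to_id),
    ).fetchall()
    return [
        (
            event_id,
            conn.execute(
                "SELECT received_ts FROM events WHERE event_id = ?", (event_id,)
            ).fetchone()[0],
        )
        for (event_id,) in events
    ]

def events_with_ts_single_query(conn: sqlite3.Connection, from_id: int, to_id: int):
    # After: the timestamp is pulled alongside each event in the same query.
    return conn.execute(
        "SELECT event_id, received_ts FROM events"
        " WHERE stream_ordering > ? AND stream_ordering <= ?"
        " ORDER BY stream_ordering",
        (from_id, to_id),
    ).fetchall()
```
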
Richard van der Hoff
fe15a865a5 Rip out auth-event reconciliation code (#12943)
There is a corner in `_check_event_auth` (long known as "the weird corner") where, if we get an event with auth_events which don't match those we were expecting, we attempt to resolve the difference between our state and the remote's with a state resolution.

This isn't specced, and there's general agreement we shouldn't be doing it.

However, it turns out that the faster-joins code was relying on it, so we need to introduce something similar (but rather simpler) for that.
2022-07-14 21:52:26 +00:00
Richard van der Hoff
df55b377be CHANGES.md: fix link to upgrade notes 2022-07-14 15:07:52 +01:00
Erik Johnston
0ca4172b5d Don't pull out state in compute_event_context for unconflicted state (#13267) 2022-07-14 13:57:02 +00:00
David Robertson
599c403d99 Allow rate limiters to passively record actions they cannot limit (#13253)
Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2022-07-13 19:09:42 +00:00
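
"Passively recording" here means counting an action against a rate-limit bucket without ever rejecting it, so that actions the server must accept anyway still consume budget for later active checks. A toy leaky-bucket sketch of the distinction (a hypothetical class, not Synapse's actual `Ratelimiter` API):

```python
import time
from typing import Dict, Tuple

class Ratelimiter:
    def __init__(self, burst: int, per_second: float) -> None:
        self.burst, self.per_second = burst, per_second
        self.tokens: Dict[str, Tuple[float, float]] = {}  # key -> (count, last_ts)

    def _refill(self, key: str) -> float:
        count, last = self.tokens.get(key, (0.0, time.monotonic()))
        now = time.monotonic()
        count = max(0.0, count - (now - last) * self.per_second)
        self.tokens[key] = (count, now)
        return count

    def can_do_action(self, key: str) -> bool:
        # Active path: reject if the bucket is full, otherwise record and allow.
        count = self._refill(key)
        if count >= self.burst:
            return False
        self.tokens[key] = (count + 1, self.tokens[key][1])
        return True

    def record_action(self, key: str) -> None:
        # Passive path: always count the action, never reject it, so that
        # unavoidable actions still use up budget for later active checks.
        count = self._refill(key)
        self.tokens[key] = (count + 1, self.tokens[key][1])
```
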
David Robertson
0eb7e69768 Notifier: accept callbacks to fire on room joins (#13254) 2022-07-13 19:48:24 +01:00
Jacek Kuśnierz
cc1071598a Call the v2 identity service /3pid/unbind endpoint, rather than v1. (#13240)
* Drop support for v1 unbind

Signed-off-by: Jacek Kusnierz <jacek.kusnierz@tum.de>

* Add changelog

Signed-off-by: Jacek Kusnierz <jacek.kusnierz@tum.de>

* Update changelog.d/13240.misc
2022-07-13 19:43:17 +01:00
Shay
ad5761b65c Add support for room version 10 (#13220) 2022-07-13 11:36:02 -07:00
jejo86
2341032cf2 Document advising against publicly exposing the Admin API and provide a usage example (#13231)
* Admin API request explanation improved

Pointed out that the Admin API is not accessible by default from any remote computer, but only from the machine `matrix-synapse` is running on.
Added a full, working example, making sure to include the cURL flag `-X` (which needs to be prepended to `GET`, `POST`, `PUT`, etc.) and listing the full query string including protocol, IP address and port.

* Admin API request explanation improved

* Apply suggestions from code review

Update changelog. Reword prose.

Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
2022-07-13 19:33:33 +01:00
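
For reference, the Admin API requires a server admin's access token and, in a default setup, is only reachable from the machine Synapse runs on. A hypothetical Python equivalent of such a request (the List Room endpoint is real and shown in the diff further down; the token and port are placeholders):

```python
import requests

ACCESS_TOKEN = "syt_..."  # placeholder: a server admin's access token

resp = requests.get(
    "http://127.0.0.1:8008/_synapse/admin/v1/rooms?limit=10",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print(resp.json())
```
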
Nick Mills-Barrett
982fe29655 Optimise room creation event lookups part 2 (#13224) 2022-07-13 19:32:46 +01:00
Patrick Cloke
1d5c80b161 Reduce duplicate code in receipts servlets. (#13198) 2022-07-13 13:23:16 -04:00
Brad Murray
3371e1abcb Add prometheus counters for content types other than events (#13175) 2022-07-13 15:18:20 +01:00
Patrick Cloke
4db7862e0f Drop unused tables from groups/communities. (#12967)
These tables have been unused since Synapse v1.61.0, although schema version 72
was added in Synapse v1.62.0.
2022-07-13 09:55:14 -04:00
Patrick Cloke
90e9b4fa1e Do not fail build if complement with workers fails. (#13266) 2022-07-13 08:30:42 -04:00
Thomas Weston
0312ff44c6 Fix "add user" admin api error when request contains a "msisdn" threepid (#13263)
Co-authored-by: Thomas Weston <thomas.weston@clearspancloud.com>
Co-authored-by: David Robertson <david.m.robertson1@gmail.com>
2022-07-13 11:33:21 +01:00
Patrick Cloke
1381563988 Inline URL preview documentation. (#13261)
Inline URL preview documentation near the implementation.
2022-07-12 15:01:58 -04:00
Richard van der Hoff
a366b75b72 Drop unused table event_reference_hashes (#13218)
This is unused since Synapse 1.60.0 (#12679). It's time for it to go.
2022-07-12 18:52:06 +00:00
Jacek Kuśnierz
7218a0ca18 Drop support for calling /_matrix/client/v3/account/3pid/bind without an id_access_token (#13239)
Fixes #13201

Signed-off-by: Jacek Kusnierz jacek.kusnierz@tum.de
2022-07-12 18:48:29 +00:00
David Robertson
52a0c8f2f7 Rename test case method to add_hashes_and_signatures_from_other_server (#13255) 2022-07-12 18:46:32 +00:00
Richard van der Hoff
fa71bb18b5 Drop support for delegating email validation (#13192)
* Drop support for delegating email validation

Delegating email validation to an IS is insecure (since it allows the owner of
the IS to do a password reset on your HS), and has long been deprecated. It
will now cause a config error at startup.

* Update unit test which checks for email verification

Give it an `email` config instead of a threepid delegate

* Remove unused method `requestEmailToken`

* Simplify config handling for email verification

Rather than an enum and a boolean, all we need here is a single bool, which
says whether we are or are not doing email verification.

* update docs

* changelog

* upgrade.md: fix typo

* update version number

this will be in 1.64, not 1.63

* update version number

this one too
2022-07-12 19:18:53 +01:00
Sean Quah
3f178332d6 Log the stack when waiting for an entire room to be un-partial stated (#13257)
The stack is already logged when waiting for an event to be un-partial
stated. Log the stack for rooms as well, to aid in debugging.
2022-07-12 18:57:38 +01:00
Shay
6f30eb5b8e Add info about configuration in the url preview docs (#13233)
Cross-link doc pages for easier navigation.
2022-07-12 13:48:47 -04:00
Quentin Gliech
b19060a29b Make the AS login method call Auth.get_user_by_req for checking the AS token. (#13094)
This gets rid of another usage of get_appservice_by_req, with all the benefits, including correctly tracking the appservice IP and setting the tracing attributes.

Signed-off-by: Quentin Gliech <quenting@element.io>
2022-07-12 18:06:29 +01:00
andrew do
2d82cdafd2 expose whether a room is a space in the Admin API (#13208) 2022-07-12 15:30:53 +01:00
Sean Quah
f14c632134 Update changelog once more 2022-07-12 13:01:42 +01:00
Sean Quah
ac7aec0cd3 Reorder and tidy up changelog 2022-07-12 12:52:47 +01:00
Sean Quah
6173d585df 1.63.0rc1 2022-07-12 11:26:25 +01:00
Erik Johnston
e5716b631c Don't pull out the full state when calculating push actions (#13078) 2022-07-11 20:08:39 +00:00
villepeh
bc8eefc1e1 Add a sample bash script to docs for creating multiple worker files (#13032)
Signed-off-by: Ville Petteri Huh.
2022-07-11 18:33:53 +01:00
Nick Mills-Barrett
92202ce867 Reduce event lookups during room creation by passing known event IDs (#13210)
Inspired by the room batch handler, this uses previous event inserts to
pre-populate prev events during room creation, reducing the number of
queries required to create a room.

Signed off by Nick @ Beeper (@Fizzadar)
2022-07-11 18:00:12 +01:00
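
The idea, sketched below with toy stand-ins rather than the real handler code: each initial state event's `prev_events` is simply the event created just before it, so the known ID can be threaded forward instead of being looked up from the room's forward extremities for every event.

```python
import asyncio
from typing import Any, Dict, List, Tuple

class ToyEventCreator:
    """Stand-in for Synapse's event creation handler."""

    def __init__(self) -> None:
        self.counter = 0
        self.persisted: List[Tuple[str, List[str]]] = []

    async def create_and_send(
        self, event: Dict[str, Any], prev_event_ids: List[str]
    ) -> str:
        # In Synapse, an empty prev_event_ids would mean "query the room's
        # forward extremities" -- an extra database lookup per event.
        self.counter += 1
        event_id = f"$event{self.counter}"
        self.persisted.append((event_id, list(prev_event_ids)))
        return event_id

async def send_initial_events(
    creator: ToyEventCreator, events: List[Dict[str, Any]]
) -> None:
    prev_event_ids: List[str] = []
    for event in events:
        # Thread the just-created event ID forward as the next event's
        # prev_events, skipping the forward-extremity lookup each time.
        event_id = await creator.create_and_send(event, prev_event_ids)
        prev_event_ids = [event_id]

creator = ToyEventCreator()
asyncio.run(
    send_initial_events(creator, [{"type": "m.room.create"}, {"type": "m.room.member"}])
)
print(creator.persisted)  # [('$event1', []), ('$event2', ['$event1'])]
```
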
David Teller
11f811470f Uniformize spam-checker API, part 5: expand other spam-checker callbacks to return Tuple[Codes, dict] (#13044)
Signed-off-by: David Teller <davidt@element.io>
Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2022-07-11 16:52:10 +00:00
Travis Ralston
d736d5cfad Fix to-device messages not being sent to MSC3202-enabled appservices (#13235)
The field name was simply incorrect, leading to errors.
2022-07-11 17:22:17 +01:00
Erik Johnston
f1711e1f5c Remove delay when rotating event push actions (#13211)
We want to be as up to date as possible, and sleeping doesn't help here
and can mean we fall behind.
2022-07-11 16:51:30 +01:00
Andrew Morgan
5ef2f87569 Document the 'databases' homeserver config option (#13212) 2022-07-11 14:05:24 +00:00
Erik Johnston
e610128c50 Add a filter_event_for_clients_with_state function (#13222) 2022-07-11 14:14:09 +01:00
Travis Ralston
a113011794 Fix appservice EDUs failing to send if the EDU doesn't have a room ID (#13236)
* Fix appservice EDUs failing to send if the EDU doesn't have a room ID

As is in the case of presence.

* changelog

* linter

* fix linter again
2022-07-11 14:12:28 +01:00
David Robertson
28d96cb2b4 Ensure portdb selects _all_ rows with negative rowids (#13226) 2022-07-11 10:36:18 +01:00
Sumner Evans
739adf1551 editorconfig: add max_line_length for Python files (#13228)
See the documentation for the property here:
https://github.com/editorconfig/editorconfig/wiki/EditorConfig-Properties#max_line_length

Signed-off-by: Sumner Evans <me@sumnerevans.com>
2022-07-08 16:40:25 +00:00
Erik Johnston
757bc0caef Fix notification count after a highlighted message (#13223)
Fixes #13196

Broken by #13005
2022-07-08 14:00:29 +01:00
Eric Eastwood
a962c5a56d Fix exception when using MSC3030 to look for remote federated events before room creation (#13197)
Complement tests: https://github.com/matrix-org/complement/pull/405

This happens when you have some messages imported before the room is created,
and then use MSC3030 from a remote federated server to look backwards before
the room creation. The server won't find anything locally, but will ask over
federation, which will have the remote event. The previous logic would choke
on the local event not being assigned.

```
Failed to fetch /timestamp_to_event from hs2 because of exception(UnboundLocalError) local variable 'local_event' referenced before assignment args=("local variable 'local_event' referenced before assignment",)
```
2022-07-07 11:52:45 -05:00
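
The quoted traceback is the standard Python failure mode of a variable assigned on only one branch. A minimal reproduction of the pattern, and the straightforward fix of initialising the variable up front:

```python
def timestamp_to_event_buggy(found_locally: bool) -> str:
    if found_locally:
        local_event = "local event"
    # When nothing is found locally, `local_event` was never assigned, so
    # using it raises the UnboundLocalError quoted in the log above.
    return local_event

def timestamp_to_event_fixed(found_locally: bool) -> str:
    local_event = None  # initialise up front so the fallback path is safe
    if found_locally:
        local_event = "local event"
    if local_event is None:
        return "remote event found via federation"  # fallback path
    return local_event

print(timestamp_to_event_fixed(False))  # no crash on the federation path
```
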
reivilibre
0c95313a44 Add --build-only option to complement.sh to prevent actually running Complement. (#13158) 2022-07-07 14:18:38 +00:00
Petr Vaněk
bb20113c8f Remove obsolete RoomEventsStoreTestCase (#13200)
All tests are prefixed with `STALE_` and are therefore silently skipped.
They were moved to `STALE_` in version `v0.5.0` in commit 2fcce3b3c5 -
`Remove stale tests`.

The tests in the `RoomEventsStoreTestCase` class have not been used for the
last 8 years; I believe it is best to remove them entirely.

Signed-off-by: Petr Vaněk <arkamar@atlas.cz>
2022-07-07 13:47:26 +01:00
Sean Quah
1391a76cd2 Faster room joins: fix race in recalculation of current room state (#13151)
Bounce recalculation of current state to the correct event persister and
move recalculation of current state into the event persistence queue, to
avoid concurrent updates to a room's current state.

Also give recalculation of a room's current state a real stream
ordering.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-07-07 12:19:31 +00:00
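
The underlying pattern is serialising all updates to a room's current state through one queue so they cannot interleave. A generic asyncio sketch of that idea (Synapse's actual persistence queue, worker routing, and stream ordering handling are considerably more involved):

```python
import asyncio
from collections import defaultdict
from typing import Awaitable, Callable, Dict

class PerRoomQueue:
    """Run tasks touching the same room strictly one at a time."""

    def __init__(self) -> None:
        self._locks: Dict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

    async def run(self, room_id: str, task: Callable[[], Awaitable[None]]) -> None:
        async with self._locks[room_id]:
            await task()

queue = PerRoomQueue()

async def update_current_state(room_id: str, reason: str) -> None:
    await asyncio.sleep(0)  # stand-in for the real database work
    print(f"{room_id}: current state updated ({reason})")

async def persist_event(room_id: str) -> None:
    await queue.run(room_id, lambda: update_current_state(room_id, "persist"))

async def recalculate_current_state(room_id: str) -> None:
    # Routed through the same queue, so it can no longer interleave with a
    # concurrent event persist that also rewrites the room's current state.
    await queue.run(room_id, lambda: update_current_state(room_id, "recalc"))

async def main() -> None:
    await asyncio.gather(
        persist_event("!room:example.org"),
        recalculate_current_state("!room:example.org"),
    )

asyncio.run(main())
```
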
Nick Mills-Barrett
2b5ab8e367 Use a single query in ProfileHandler.get_profile (#13209) 2022-07-07 11:02:09 +00:00
dependabot[bot]
4aaeb87dad Bump lxml from 4.8.0 to 4.9.1 (#13207)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: David Robertson <davidr@element.io>
2022-07-07 10:56:52 +00:00
reivilibre
fb7d24ab6d Check that auto_vacuum is disabled when porting a SQLite database to Postgres, as VACUUMs must not be performed between runs of the script. (#13195) 2022-07-07 10:08:04 +00:00
David Teller
57f6f59e3e Make _get_state_map_for_room not break when room state events don't contain an event id. (#13174)
The `_get_state_map_for_room` method breaks in the presence of some ill-formed events in the database. This reimplements it to use `get_current_state`, which is more robust to such events.
2022-07-07 08:14:32 +00:00
Patrick Cloke
dcc7873700 Add information on how the Synapse team does reviews. (#13132) 2022-07-06 07:30:58 -04:00
Erik Johnston
a0f51b059c Fix bug where we failed to delete old push actions (#13194)
This happened if we encountered a stream ordering in `event_push_actions` that had more rows than the batch size of the delete: if we don't delete any rows in an iteration, then the next time round we get the exact same stream ordering and get stuck.
2022-07-06 12:09:19 +01:00
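
A self-contained reproduction of that failure mode, plus one way to guarantee progress by making the delete boundary inclusive (not necessarily the exact fix adopted in #13194):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event_push_actions (stream_ordering INTEGER)")
# Five rows sharing one stream ordering, with a batch size smaller than that.
conn.executemany(
    "INSERT INTO event_push_actions VALUES (?)", [(1,)] * 5 + [(2,), (3,)]
)
BATCH = 3

def delete_batch_stuck() -> int:
    # Old approach: find the stream ordering BATCH rows in, then delete
    # strictly below it. If one stream ordering has >= BATCH rows, the
    # boundary equals the minimum and nothing is ever deleted.
    row = conn.execute(
        "SELECT stream_ordering FROM event_push_actions"
        " ORDER BY stream_ordering LIMIT 1 OFFSET ?",
        (BATCH,),
    ).fetchone()
    cur = conn.execute(
        "DELETE FROM event_push_actions WHERE stream_ordering < ?", (row[0],)
    )
    return cur.rowcount

def delete_batch_fixed() -> int:
    # Making the boundary inclusive guarantees that each iteration deletes
    # at least every row at the boundary's stream ordering.
    row = conn.execute(
        "SELECT stream_ordering FROM event_push_actions"
        " ORDER BY stream_ordering LIMIT 1 OFFSET ?",
        (BATCH - 1,),
    ).fetchone()
    cur = conn.execute(
        "DELETE FROM event_push_actions WHERE stream_ordering <= ?", (row[0],)
    )
    return cur.rowcount

print(delete_batch_stuck())  # 0 -- no progress, would loop forever
print(delete_batch_fixed())  # 5 -- clears the whole blocking stream ordering
```
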
Sean Quah
68db233f0c Handle race between persisting an event and un-partial stating a room (#13100)
Whenever we want to persist an event, we first compute an event context,
which includes the state at the event and a flag indicating whether the
state is partial. After a lot of processing, we finally try to store the
event in the database, which can fail for partial state events when the
containing room has been un-partial stated in the meantime.

We detect the race as a foreign key constraint failure in the data store
layer and turn it into a special `PartialStateConflictError` exception,
which makes its way up to the method in which we computed the event
context.

To make things difficult, the exception needs to cross a replication
request: `/fed_send_events` for events coming over federation and
`/send_event` for events from clients. We transport the
`PartialStateConflictError` as a `409 Conflict` over replication and
turn `409`s back into `PartialStateConflictError`s on the worker making
the request.

All client events go through
`EventCreationHandler.handle_new_client_event`, which is called in
*a lot* of places. Instead of trying to update all the code which
creates client events, we turn the `PartialStateConflictError` into a
`429 Too Many Requests` in
`EventCreationHandler.handle_new_client_event` and hope that clients
take it as a hint to retry their request.

On the federation event side, there are 7 places which compute event
contexts. 4 of them use outlier event contexts:
`FederationEventHandler._auth_and_persist_outliers_inner`,
`FederationHandler.do_knock`, `FederationHandler.on_invite_request` and
`FederationHandler.do_remotely_reject_invite`. These events won't have
the partial state flag, so we do not need to do anything for them.

The remaining 3 paths which create events are
`FederationEventHandler.process_remote_join`,
`FederationEventHandler.on_send_membership_event` and
`FederationEventHandler._process_received_pdu`.

We can't experience the race in `process_remote_join`, unless we're
handling an additional join into a partial state room, which currently
blocks, so we make no attempt to handle it correctly.

`on_send_membership_event` is only called by
`FederationServer._on_send_membership_event`, so we catch the
`PartialStateConflictError` there and retry just once.

`_process_received_pdu` is called by `on_receive_pdu` for incoming
events and `_process_pulled_event` for backfill. The latter should never
try to persist partial state events, so we ignore it. We catch the
`PartialStateConflictError` in `on_receive_pdu` and retry just once.

Referring to the graph of code paths in
https://github.com/matrix-org/synapse/issues/12988#issuecomment-1156857648
may make the above easier to follow.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-07-05 16:12:52 +01:00
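
Compressed into a toy example, the flow is: raise a dedicated exception at the storage layer, transport it as a 409 across the replication boundary, and retry once on the requesting side. All names other than `PartialStateConflictError` are invented:

```python
class PartialStateConflictError(Exception):
    """The room was un-partial-stated while we were persisting the event."""

def persist_event(event: dict, room_is_partial_state: bool) -> None:
    if event.get("partial_state") and not room_is_partial_state:
        # In Synapse this surfaces as a foreign key constraint failure in
        # the data store; here we just model the conflict directly.
        raise PartialStateConflictError()

def replication_endpoint(event: dict, room_is_partial_state: bool) -> int:
    # The exception has to cross a replication request, so it is
    # transported as an HTTP status code...
    try:
        persist_event(event, room_is_partial_state)
        return 200
    except PartialStateConflictError:
        return 409

def send_event_with_retry(event: dict) -> int:
    # ...and turned back into a retry on the worker making the request.
    status = replication_endpoint(event, room_is_partial_state=False)
    if status == 409:
        # Recompute the event context against the now-complete state,
        # then retry exactly once.
        event = {**event, "partial_state": False}
        status = replication_endpoint(event, room_is_partial_state=False)
    return status

print(send_event_with_retry({"partial_state": True}))  # 200 after one retry
```
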
David Robertson
6ba732fefe Type tests.utils (#13028)
* Cast to postgres types when handling postgres db

* Remove unused method

* Easy annotations

* Annotate create_room

* Use `ParamSpec` to annotate looping_call

* Annotate `default_config`

* Track `now` as a float

`time_ms` returns an int like the proper Synapse `Clock`

* Introduce a `Timer` dataclass

* Introduce a Looper type

* Suppress checking of a mock

* tests.utils is typed

* Changelog

* Whoops, import ParamSpec from typing_extensions

* ditch the psycopg2 casts
2022-07-05 15:13:47 +01:00
reivilibre
68695d8007 Factor out some common Complement CI setup commands to a script. (#13157) 2022-07-05 14:24:42 +01:00
Erik Johnston
578a5e24a9 Use upserts for updating event_push_summary (#13153) 2022-07-05 13:51:04 +01:00
David Robertson
347165bc06 Merge branch 'master' into develop 2022-07-05 13:25:29 +01:00
Eric Eastwood
2c2a42cc10 Fix application service not being able to join remote federated room without a profile set (#13131)
Fix https://github.com/matrix-org/synapse/issues/4778

Complement tests: https://github.com/matrix-org/complement/pull/399
2022-07-05 05:56:06 -05:00
David Robertson
b51a0f4be0 Mention the spamchecker plugins 2022-07-05 11:19:54 +01:00
David Robertson
cf63d57dce 1.62.0 2022-07-05 11:14:27 +01:00
reivilibre
65e675504f Add the ability to set the log level using the SYNAPSE_TEST_LOG_LEVEL environment when using complement.sh. (#13152) 2022-07-05 09:46:20 +00:00
Dirk Klimpel
e514495465 Add missing links to config options (#13166) 2022-07-05 10:10:26 +01:00
David Robertson
d102ad67fd annotate tests.server.FakeChannel (#13136) 2022-07-04 18:08:56 +01:00
Brendan Abolivier
5b5c943e7d Revert "Up the dependency on canonicaljson to ^1.5.0"
This reverts commit dcc4e0621c.
2022-07-04 17:48:09 +01:00
Brendan Abolivier
dcc4e0621c Up the dependency on canonicaljson to ^1.5.0 2022-07-04 17:47:51 +01:00
Andrew Morgan
6180e1bc4b Merge tag 'v1.62.0rc3' into develop
Synapse 1.62.0rc3 (2022-07-04)
==============================

Bugfixes
--------

- Update the version of the [ldap3 plugin](https://github.com/matrix-org/matrix-synapse-ldap3/) included in the `matrixdotorg/synapse` DockerHub images and the Debian packages hosted on `packages.matrix.org` to 0.2.1. This fixes [a bug](https://github.com/matrix-org/matrix-synapse-ldap3/pull/163) with usernames containing uppercase characters. ([\#13156](https://github.com/matrix-org/synapse/issues/13156))
- Fix a bug introduced in Synapse 1.62.0rc1 affecting unread counts for users on small servers. ([\#13168](https://github.com/matrix-org/synapse/issues/13168))
2022-07-04 17:35:06 +01:00
Andrew Morgan
95a260da73 Update changelog for v1.62.0rc2 2022-07-04 16:29:04 +01:00
Andrew Morgan
046d87756b 1.62.0rc3 2022-07-04 16:16:47 +01:00
Erik Johnston
723ce73d02 Fix stuck notification counts on small servers (#13168) 2022-07-04 16:02:21 +01:00
Andrew Morgan
9820665597 Remove tests/utils.py from mypy's exclude list (#13159) 2022-07-04 15:15:33 +01:00
Till
fa10468eb4 [Complement] Allow device_name lookup over federation (#13167) 2022-07-04 12:34:50 +00:00
David Robertson
8d7491a152 matrix-synapse-ldap3: 0.2.0 -> 0.2.1 (#13156) 2022-07-01 17:01:54 +00:00
178 changed files with 4156 additions and 2473 deletions


@@ -0,0 +1,36 @@
#!/bin/sh
#
# Common commands to set up Complement's prerequisites in a GitHub Actions CI run.
#
# Must be called after Synapse has been checked out to `synapse/`.
#
set -eu
alias block='{ set +x; } 2>/dev/null; func() { echo "::group::$*"; set -x; }; func'
alias endblock='{ set +x; } 2>/dev/null; func() { echo "::endgroup::"; set -x; }; func'
block Set Go Version
# The path is set via a file given by $GITHUB_PATH. We need both Go 1.17 and GOPATH on the path to run Complement.
# See https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-system-path
# Add Go 1.17 to the PATH: see https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md#environment-variables-2
echo "$GOROOT_1_17_X64/bin" >> $GITHUB_PATH
# Add the Go path to the PATH: We need this so we can call gotestfmt
echo "~/go/bin" >> $GITHUB_PATH
endblock
block Install Complement Dependencies
sudo apt-get -qq update && sudo apt-get install -qqy libolm3 libolm-dev
go get -v github.com/haveyoudebuggedit/gotestfmt/v2/cmd/gotestfmt@latest
endblock
block Install custom gotestfmt template
mkdir .gotestfmt/github -p
cp synapse/.ci/complement_package.gotpl .gotestfmt/github/package.gotpl
endblock
block Check out Complement
# Attempt to check out the same branch of Complement as the PR. If it
# doesn't exist, fallback to HEAD.
synapse/.ci/scripts/checkout_complement.sh
endblock


@@ -7,3 +7,4 @@ root = true
[*.py]
indent_style = space
indent_size = 4
max_line_length = 88


@@ -328,38 +328,14 @@ jobs:
- arrangement: monolith
database: Postgres
- arrangement: workers
database: Postgres
steps:
# The path is set via a file given by $GITHUB_PATH. We need both Go 1.17 and GOPATH on the path to run Complement.
# See https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-system-path
- name: "Set Go Version"
run: |
# Add Go 1.17 to the PATH: see https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md#environment-variables-2
echo "$GOROOT_1_17_X64/bin" >> $GITHUB_PATH
# Add the Go path to the PATH: We need this so we can call gotestfmt
echo "~/go/bin" >> $GITHUB_PATH
- name: "Install Complement Dependencies"
run: |
sudo apt-get -qq update && sudo apt-get install -qqy libolm3 libolm-dev
go get -v github.com/haveyoudebuggedit/gotestfmt/v2/cmd/gotestfmt@latest
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
with:
path: synapse
- name: "Install custom gotestfmt template"
run: |
mkdir .gotestfmt/github -p
cp synapse/.ci/complement_package.gotpl .gotestfmt/github/package.gotpl
# Attempt to check out the same branch of Complement as the PR. If it
# doesn't exist, fallback to HEAD.
- name: Checkout complement
run: synapse/.ci/scripts/checkout_complement.sh
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- run: |
set -o pipefail
@@ -367,6 +343,30 @@ jobs:
shell: bash
name: Run Complement Tests
# XXX When complement with workers is stable, move this back into the standard
# "complement" matrix above.
#
# See https://github.com/matrix-org/synapse/issues/13161
complement-workers:
if: "${{ !failure() && !cancelled() }}"
needs: linting-done
runs-on: ubuntu-latest
steps:
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- run: |
set -o pipefail
POSTGRES=1 WORKERS=1 COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | gotestfmt
shell: bash
name: Run Complement Tests
# a job which marks all the other jobs as complete, thus allowing PRs to be merged.
tests-done:
if: ${{ always() }}


@@ -114,25 +114,14 @@ jobs:
database: Postgres
steps:
# The path is set via a file given by $GITHUB_PATH. We need both Go 1.17 and GOPATH on the path to run Complement.
# See https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#adding-a-system-path
- name: "Set Go Version"
run: |
# Add Go 1.17 to the PATH: see https://github.com/actions/virtual-environments/blob/main/images/linux/Ubuntu2004-Readme.md#environment-variables-2
echo "$GOROOT_1_17_X64/bin" >> $GITHUB_PATH
# Add the Go path to the PATH: We need this so we can call gotestfmt
echo "~/go/bin" >> $GITHUB_PATH
- name: "Install Complement Dependencies"
run: |
sudo apt-get update && sudo apt-get install -y libolm3 libolm-dev
go get -v github.com/haveyoudebuggedit/gotestfmt/v2/cmd/gotestfmt@latest
- name: Run actions/checkout@v2 for synapse
uses: actions/checkout@v2
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
# This step is specific to the 'Twisted trunk' test run:
- name: Patch dependencies
run: |
@@ -146,16 +135,6 @@ jobs:
# NOT IN 1.1.12 poetry lock --check
working-directory: synapse
- name: "Install custom gotestfmt template"
run: |
mkdir .gotestfmt/github -p
cp synapse/.ci/complement_package.gotpl .gotestfmt/github/package.gotpl
# Attempt to check out the same branch of Complement as the PR. If it
# doesn't exist, fallback to HEAD.
- name: Checkout complement
run: synapse/.ci/scripts/checkout_complement.sh
- run: |
set -o pipefail
TEST_ONLY_SKIP_DEP_HASH_VERIFICATION=1 POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | gotestfmt


@@ -1,3 +1,100 @@
Synapse vNext
=============
As of this release, Synapse no longer allows the tasks of verifying email address ownership, and password reset confirmation, to be delegated to an identity server. For more information, see the [upgrade notes](https://matrix-org.github.io/synapse/v1.64/upgrade.html#upgrading-to-v1640).
Synapse 1.63.0rc1 (2022-07-12)
==============================
Features
--------
- Add a rate limit for local users sending invites. ([\#13125](https://github.com/matrix-org/synapse/issues/13125))
- Implement [MSC3827](https://github.com/matrix-org/matrix-spec-proposals/pull/3827): Filtering of `/publicRooms` by room type. ([\#13031](https://github.com/matrix-org/synapse/issues/13031))
- Improve validation logic in Synapse's REST endpoints. ([\#13148](https://github.com/matrix-org/synapse/issues/13148))
Bugfixes
--------
- Fix a long-standing bug where application services were not able to join remote federated rooms without a profile. ([\#13131](https://github.com/matrix-org/synapse/issues/13131))
- Fix a long-standing bug where `_get_state_map_for_room` might raise errors when third party event rules callbacks are present. ([\#13174](https://github.com/matrix-org/synapse/issues/13174))
- Fix a long-standing bug where the `synapse_port_db` script could fail to copy rows with negative row ids. ([\#13226](https://github.com/matrix-org/synapse/issues/13226))
- Fix a bug introduced in 1.54.0 where appservices would not receive room-less EDUs, like presence, when both [MSC2409](https://github.com/matrix-org/matrix-spec-proposals/pull/2409) and [MSC3202](https://github.com/matrix-org/matrix-spec-proposals/pull/3202) are enabled. ([\#13236](https://github.com/matrix-org/synapse/issues/13236))
- Fix a bug introduced in 1.62.0 where rows were not deleted from `event_push_actions` table on large servers. ([\#13194](https://github.com/matrix-org/synapse/issues/13194))
- Fix a bug introduced in 1.62.0 where notification counts would get stuck after a highlighted message. ([\#13223](https://github.com/matrix-org/synapse/issues/13223))
- Fix exception when using experimental [MSC3030](https://github.com/matrix-org/matrix-spec-proposals/pull/3030) `/timestamp_to_event` endpoint to look for remote federated imported events before room creation. ([\#13197](https://github.com/matrix-org/synapse/issues/13197))
- Fix [MSC3202](https://github.com/matrix-org/matrix-spec-proposals/pull/3202)-enabled appservices not receiving to-device messages, preventing messages from being decrypted. ([\#13235](https://github.com/matrix-org/synapse/issues/13235))
Updates to the Docker image
---------------------------
- Bump the version of `lxml` in matrix.org Docker images Debian packages from 4.8.0 to 4.9.1. ([\#13207](https://github.com/matrix-org/synapse/issues/13207))
Improved Documentation
----------------------
- Add an explanation of the `--report-stats` argument to the docs. ([\#13029](https://github.com/matrix-org/synapse/issues/13029))
- Add a helpful example bash script to the contrib directory for creating multiple worker configuration files of the same type. Contributed by @villepeh. ([\#13032](https://github.com/matrix-org/synapse/issues/13032))
- Add missing links to config options. ([\#13166](https://github.com/matrix-org/synapse/issues/13166))
- Add documentation for anonymised homeserver statistics collection. ([\#13086](https://github.com/matrix-org/synapse/issues/13086))
- Add documentation for the existing `databases` option in the homeserver configuration manual. ([\#13212](https://github.com/matrix-org/synapse/issues/13212))
- Clean up references to sample configuration and redirect users to the configuration manual instead. ([\#13077](https://github.com/matrix-org/synapse/issues/13077), [\#13139](https://github.com/matrix-org/synapse/issues/13139))
- Document how the Synapse team does reviews. ([\#13132](https://github.com/matrix-org/synapse/issues/13132))
- Fix wrong section header for `allow_public_rooms_over_federation` in the homeserver config documentation. ([\#13116](https://github.com/matrix-org/synapse/issues/13116))
Deprecations and Removals
-------------------------
- Remove the obsolete `RoomEventsStoreTestCase`, unused for 8 years. Contributed by @arkamar. ([\#13200](https://github.com/matrix-org/synapse/issues/13200))
Internal Changes
----------------
- Add type annotations to `synapse.logging`, `tests.server` and `tests.utils`. ([\#13028](https://github.com/matrix-org/synapse/issues/13028), [\#13103](https://github.com/matrix-org/synapse/issues/13103), [\#13159](https://github.com/matrix-org/synapse/issues/13159), [\#13136](https://github.com/matrix-org/synapse/issues/13136))
- Enforce type annotations for `tests.test_server`. ([\#13135](https://github.com/matrix-org/synapse/issues/13135))
- Support temporary experimental return values for spam checker module callbacks. ([\#13044](https://github.com/matrix-org/synapse/issues/13044))
- Add support to `complement.sh` for skipping the docker build. ([\#13143](https://github.com/matrix-org/synapse/issues/13143), [\#13158](https://github.com/matrix-org/synapse/issues/13158))
- Add support to `complement.sh` for setting the log level using the `SYNAPSE_TEST_LOG_LEVEL` environment variable. ([\#13152](https://github.com/matrix-org/synapse/issues/13152))
- Enable Complement testing in the 'Twisted Trunk' CI runs. ([\#13079](https://github.com/matrix-org/synapse/issues/13079), [\#13157](https://github.com/matrix-org/synapse/issues/13157))
- Improve startup times in Complement test runs against workers, particularly in CPU-constrained environments. ([\#13127](https://github.com/matrix-org/synapse/issues/13127))
- Update config used by Complement to allow device name lookup over federation. ([\#13167](https://github.com/matrix-org/synapse/issues/13167))
- Faster room joins: handle race between persisting an event and un-partial stating a room. ([\#13100](https://github.com/matrix-org/synapse/issues/13100))
- Faster room joins: fix race in recalculation of current room state. ([\#13151](https://github.com/matrix-org/synapse/issues/13151))
- Faster room joins: skip waiting for full state when processing incoming events over federation. ([\#13144](https://github.com/matrix-org/synapse/issues/13144))
- Raise a `DependencyError` on missing dependencies instead of a `ConfigError`. ([\#13113](https://github.com/matrix-org/synapse/issues/13113))
- Avoid stripping line breaks from SQL sent to the database. ([\#13129](https://github.com/matrix-org/synapse/issues/13129))
- Apply ratelimiting earlier in processing of `/send` requests. ([\#13134](https://github.com/matrix-org/synapse/issues/13134))
- Improve exception handling when processing events received over federation. ([\#13145](https://github.com/matrix-org/synapse/issues/13145))
- Check that `auto_vacuum` is disabled when porting a SQLite database to Postgres, as `VACUUM`s must not be performed between runs of the script. ([\#13195](https://github.com/matrix-org/synapse/issues/13195))
- Reduce DB usage of `/sync` when a large number of unread messages have recently been sent in a room. ([\#13119](https://github.com/matrix-org/synapse/issues/13119), [\#13153](https://github.com/matrix-org/synapse/issues/13153))
- Reduce memory consumption when processing incoming events in large rooms. ([\#13078](https://github.com/matrix-org/synapse/issues/13078), [\#13222](https://github.com/matrix-org/synapse/issues/13222))
- Reduce number of queries used to get profile information. Contributed by Nick @ Beeper (@fizzadar). ([\#13209](https://github.com/matrix-org/synapse/issues/13209))
- Reduce number of events queried during room creation. Contributed by Nick @ Beeper (@fizzadar). ([\#13210](https://github.com/matrix-org/synapse/issues/13210))
- More aggressively rotate push actions. ([\#13211](https://github.com/matrix-org/synapse/issues/13211))
- Add `max_line_length` setting for Python files to the `.editorconfig`. Contributed by @sumnerevans @ Beeper. ([\#13228](https://github.com/matrix-org/synapse/issues/13228))
Synapse 1.62.0 (2022-07-05)
===========================
No significant changes since 1.62.0rc3.
Authors of spam-checker plugins should consult the [upgrade notes](https://github.com/matrix-org/synapse/blob/release-v1.62/docs/upgrade.md#upgrading-to-v1620) to learn about the enriched signatures for spam checker callbacks, which are supported with this release of Synapse.
Synapse 1.62.0rc3 (2022-07-04)
==============================
Bugfixes
--------
- Update the version of the [ldap3 plugin](https://github.com/matrix-org/matrix-synapse-ldap3/) included in the `matrixdotorg/synapse` DockerHub images and the Debian packages hosted on `packages.matrix.org` to 0.2.1. This fixes [a bug](https://github.com/matrix-org/matrix-synapse-ldap3/pull/163) with usernames containing uppercase characters. ([\#13156](https://github.com/matrix-org/synapse/issues/13156))
- Fix a bug introduced in Synapse 1.62.0rc1 affecting unread counts for users on small servers. ([\#13168](https://github.com/matrix-org/synapse/issues/13168))
Synapse 1.62.0rc2 (2022-07-01)
==============================

changelog.d/12943.misc

@@ -0,0 +1 @@
Remove code which incorrectly attempted to reconcile state with remote servers when processing incoming events.


@@ -0,0 +1 @@
Drop tables used for groups/communities.


@@ -1 +0,0 @@
Add an explanation of the `--report-stats` argument to the docs.


@@ -1 +0,0 @@
Implement [MSC3827](https://github.com/matrix-org/matrix-spec-proposals/pull/3827): Filtering of /publicRooms by room type.


@@ -1,3 +0,0 @@
Clean up references to sample configuration and redirect users to the configuration manual instead.


@@ -1 +0,0 @@
Enable Complement testing in the 'Twisted Trunk' CI runs.


@@ -1 +0,0 @@
Add documentation for anonymised homeserver statistics collection.

changelog.d/13094.misc

@@ -0,0 +1 @@
Make the AS login method call `Auth.get_user_by_req` for checking the AS token.


@@ -1 +0,0 @@
Add missing type hints to `synapse.logging`.


@@ -1 +0,0 @@
Raise a `DependencyError` on missing dependencies instead of a `ConfigError`.


@@ -1 +0,0 @@
Fix wrong section header for `allow_public_rooms_over_federation` in the homeserver config documentation.


@@ -1 +0,0 @@
Reduce DB usage of `/sync` when a large number of unread messages have recently been sent in a room.


@@ -1 +0,0 @@
Add a rate limit for local users sending invites.


@@ -1 +0,0 @@
Improve startup times in Complement test runs against workers, particularly in CPU-constrained environments.


@@ -1 +0,0 @@
Only one-line SQL statements for logging and tracing.


@@ -1 +0,0 @@
Apply ratelimiting earlier in processing of /send request.


@@ -1 +0,0 @@
Enforce type annotations for `tests.test_server`.


@@ -1 +0,0 @@
Add a link to the configuration manual from the homeserver sample config documentation.


@@ -1 +0,0 @@
Add support to `complement.sh` for skipping the docker build.


@@ -1 +0,0 @@
Faster joins: skip waiting for full state when processing incoming events over federation.


@@ -1 +0,0 @@
Improve exception handling when processing events received over federation.


@@ -1 +0,0 @@
Improve validation logic in Synapse's REST endpoints.

changelog.d/13175.misc

@@ -0,0 +1 @@
Add prometheus counters for ephemeral events and to-device messages pushed to app services. Contributed by Brad @ Beeper.


@@ -0,0 +1 @@
Drop support for delegating email verification to an external server.

changelog.d/13198.misc

@@ -0,0 +1 @@
Refactor receipts servlet logic to avoid duplicated code.


@@ -0,0 +1 @@
Add a `room_type` field in the responses for the list room and room details admin API. Contributed by @andrewdoh.

changelog.d/13215.misc

@@ -0,0 +1 @@
Preparation for database schema simplifications: populate `state_key` and `rejection_reason` for existing rows in the `events` table.

changelog.d/13218.misc

@@ -0,0 +1 @@
Remove unused database table `event_reference_hashes`.


@@ -0,0 +1 @@
Add support for room version 10.

changelog.d/13224.misc

@@ -0,0 +1 @@
Further reduce queries used sending events when creating new rooms. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13231.doc

@@ -0,0 +1 @@
Provide an example of using the Admin API. Contributed by @jejo86.

changelog.d/13233.doc

@@ -0,0 +1 @@
Move the documentation for how URL previews work to the URL preview module.


@@ -0,0 +1 @@
Drop support for calling `/_matrix/client/v3/account/3pid/bind` without an `id_access_token`, which was not permitted by the spec. Contributed by @Vetchu.

changelog.d/13240.misc

@@ -0,0 +1 @@
Call the v2 identity service `/3pid/unbind` endpoint, rather than v1.

changelog.d/13242.misc

@@ -0,0 +1 @@
Use an asynchronous cache wrapper for the get event cache. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13251.misc

@@ -0,0 +1 @@
Optimise federation sender and appservice pusher event stream processing queries. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13253.misc

@@ -0,0 +1 @@
Preparatory work for a per-room rate limiter on joins.

changelog.d/13254.misc

@@ -0,0 +1 @@
Preparatory work for a per-room rate limiter on joins.

changelog.d/13255.misc

@@ -0,0 +1 @@
Preparatory work for a per-room rate limiter on joins.

changelog.d/13257.misc

@@ -0,0 +1 @@
Log the stack when waiting for an entire room to be un-partial stated.

changelog.d/13260.misc

@@ -0,0 +1 @@
Clean-up tests for notifications.

changelog.d/13261.doc

@@ -0,0 +1 @@
Move the documentation for how URL previews work to the URL preview module.

changelog.d/13263.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.15.0 where adding a user through the Synapse Admin API with a phone number would fail if the "enable_email_notifs" and "email_notifs_for_new_users" options were enabled. Contributed by @thomasweston12.

changelog.d/13266.misc

@@ -0,0 +1 @@
Do not fail build if complement with workers fails.

changelog.d/13267.misc

@@ -0,0 +1 @@
Don't pull out state in `compute_event_context` for unconflicted state.

changelog.d/13270.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.40 where a user invited to a restricted room would be briefly unable to join.

changelog.d/13274.misc

@@ -0,0 +1 @@
Don't pull out state in `compute_event_context` for unconflicted state.

changelog.d/13278.bugfix

@@ -0,0 +1 @@
Fix long-standing bug where in rare instances Synapse could store the incorrect state for a room after a state resolution.

changelog.d/13279.misc

@@ -0,0 +1 @@
Reduce the rebuild time for the complement-synapse docker image.

changelog.d/13284.misc

@@ -0,0 +1 @@
Update locked version of `frozendict` to 2.3.2, which has a fix for a memory leak.

changelog.d/13292.misc

@@ -0,0 +1 @@
Make `DictionaryCache` expire full entries if they haven't been queried in a while, even if specific keys have been queried recently.


@@ -0,0 +1,31 @@
# Creating multiple workers with a bash script
Setting up multiple worker configuration files manually can be time-consuming.
You can alternatively create multiple worker configuration files with a simple `bash` script. For example:
```sh
#!/bin/bash
for i in {1..5}
do
cat << EOF >> generic_worker$i.yaml
worker_app: synapse.app.generic_worker
worker_name: generic_worker$i
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: 808$i
resources:
- names: [client, federation]
worker_log_config: /etc/matrix-synapse/generic-worker-log.yaml
EOF
done
```
This would create five generic workers with a unique `worker_name` field in each file and listening on ports 8081-8085.
Customise the script to your needs.

debian/changelog

@@ -1,3 +1,21 @@
matrix-synapse-py3 (1.63.0~rc1) stable; urgency=medium
* New Synapse release 1.63.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 12 Jul 2022 11:26:02 +0100
matrix-synapse-py3 (1.62.0) stable; urgency=medium
* New Synapse release 1.62.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 05 Jul 2022 11:14:15 +0100
matrix-synapse-py3 (1.62.0~rc3) stable; urgency=medium
* New Synapse release 1.62.0rc3.
-- Synapse Packaging team <packages@matrix.org> Mon, 04 Jul 2022 16:07:01 +0100
matrix-synapse-py3 (1.62.0~rc2) stable; urgency=medium
* New Synapse release 1.62.0rc2.


@@ -67,6 +67,13 @@ The following environment variables are supported in `generate` mode:
* `UID`, `GID`: the user id and group id to use for creating the data
directories. If unset, and no user is set via `docker run --user`, defaults
to `991`, `991`.
* `SYNAPSE_LOG_LEVEL`: the log level to use (one of `DEBUG`, `INFO`, `WARNING` or `ERROR`).
Defaults to `INFO`.
* `SYNAPSE_LOG_SENSITIVE`: if set and the log level is set to `DEBUG`, Synapse
will log sensitive information such as access tokens.
This should not be needed unless you are a developer attempting to debug something
particularly tricky.
## Postgres


@@ -4,42 +4,58 @@
#
# Instructions for building this image from those it depends on is detailed in this guide:
# https://github.com/matrix-org/synapse/blob/develop/docker/README-testing.md#testing-with-postgresql-and-single-or-multi-process-synapse
ARG SYNAPSE_VERSION=latest
# first of all, we create a base image with a postgres server and database,
# which we can copy into the target image. For repeated rebuilds, this is
# much faster than apt installing postgres each time.
#
# This trick only works because (a) the Synapse image happens to have all the
# shared libraries that postgres wants, (b) we use a postgres image based on
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
FROM postgres:13-bullseye AS postgres_base
# initialise the database cluster in /var/lib/postgresql
RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password
# Configure a password and create a database for Synapse
RUN echo "ALTER USER postgres PASSWORD 'somesecret'" | gosu postgres postgres --single
RUN echo "CREATE DATABASE synapse" | gosu postgres postgres --single
# now build the final image, based on the Synapse image.
FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION
# copy the postgres installation over from the image we built above
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres_base /var/lib/postgresql /var/lib/postgresql
COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql
RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data
# Install postgresql
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install --no-install-recommends -yqq postgresql-13
# Extend the shared homeserver config to disable rate-limiting,
# set Complement's static shared secret, enable registration, amongst other
# tweaks to get Synapse ready for testing.
# To do this, we copy the old template out of the way and then include it
# with Jinja2.
RUN mv /conf/shared.yaml.j2 /conf/shared-orig.yaml.j2
COPY conf/workers-shared-extra.yaml.j2 /conf/shared.yaml.j2
# Configure a user and create a database for Synapse
RUN pg_ctlcluster 13 main start && su postgres -c "echo \
\"ALTER USER postgres PASSWORD 'somesecret'; \
CREATE DATABASE synapse \
ENCODING 'UTF8' \
LC_COLLATE='C' \
LC_CTYPE='C' \
template=template0;\" | psql" && pg_ctlcluster 13 main stop
WORKDIR /data
# Extend the shared homeserver config to disable rate-limiting,
# set Complement's static shared secret, enable registration, amongst other
# tweaks to get Synapse ready for testing.
# To do this, we copy the old template out of the way and then include it
# with Jinja2.
RUN mv /conf/shared.yaml.j2 /conf/shared-orig.yaml.j2
COPY conf/workers-shared-extra.yaml.j2 /conf/shared.yaml.j2
COPY conf/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf
WORKDIR /data
# Copy the entrypoint
COPY conf/start_for_complement.sh /
COPY conf/postgres.supervisord.conf /etc/supervisor/conf.d/postgres.conf
# Expose nginx's listener ports
EXPOSE 8008 8448
# Copy the entrypoint
COPY conf/start_for_complement.sh /
ENTRYPOINT ["/start_for_complement.sh"]
# Expose nginx's listener ports
EXPOSE 8008 8448
ENTRYPOINT ["/start_for_complement.sh"]
# Update the healthcheck to have a shorter check interval
HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
CMD /bin/sh /healthcheck.sh
# Update the healthcheck to have a shorter check interval
HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
CMD /bin/sh /healthcheck.sh


@@ -1,5 +1,5 @@
[program:postgres]
command=/usr/local/bin/prefix-log /usr/bin/pg_ctlcluster 13 main start --foreground
command=/usr/local/bin/prefix-log gosu postgres postgres
# Only start if START_POSTGRES=1
autostart=%(ENV_START_POSTGRES)s


@@ -81,6 +81,8 @@ rc_invites:
federation_rr_transactions_per_room_per_second: 9999
allow_device_name_lookup_over_federation: true
## Experimental Features ##
experimental_features:


@@ -49,11 +49,17 @@ handlers:
class: logging.StreamHandler
formatter: precise
{% if not SYNAPSE_LOG_SENSITIVE %}
{#
If SYNAPSE_LOG_SENSITIVE is unset, then override synapse.storage.SQL to INFO
so that DEBUG entries (containing sensitive information) are not emitted.
#}
loggers:
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
level: INFO
{% endif %}
root:
level: {{ SYNAPSE_LOG_LEVEL or "INFO" }}


@@ -29,6 +29,10 @@
# * SYNAPSE_USE_EXPERIMENTAL_FORKING_LAUNCHER: Whether to use the forking launcher,
# only intended for usage in Complement at the moment.
# No stability guarantees are provided.
# * SYNAPSE_LOG_LEVEL: Set this to DEBUG, INFO, WARNING or ERROR to change the
# log level. INFO is the default.
# * SYNAPSE_LOG_SENSITIVE: If unset, SQL and SQL values won't be logged,
# regardless of the SYNAPSE_LOG_LEVEL setting.
#
# NOTE: According to Complement's ENTRYPOINT expectations for a homeserver image (as defined
# in the project's README), this script may be run multiple times, and functionality should
@@ -38,7 +42,7 @@ import os
import subprocess
import sys
from pathlib import Path
from typing import Any, Dict, List, Mapping, MutableMapping, NoReturn, Set
from typing import Any, Dict, List, Mapping, MutableMapping, NoReturn, Optional, Set
import yaml
from jinja2 import Environment, FileSystemLoader
@@ -552,13 +556,17 @@ def generate_worker_log_config(
Returns: the path to the generated file
"""
# Check whether we should write worker logs to disk, in addition to the console
extra_log_template_args = {}
extra_log_template_args: Dict[str, Optional[str]] = {}
if environ.get("SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK"):
extra_log_template_args["LOG_FILE_PATH"] = "{dir}/logs/{name}.log".format(
dir=data_dir, name=worker_name
)
extra_log_template_args["LOG_FILE_PATH"] = f"{data_dir}/logs/{worker_name}.log"
extra_log_template_args["SYNAPSE_LOG_LEVEL"] = environ.get("SYNAPSE_LOG_LEVEL")
extra_log_template_args["SYNAPSE_LOG_SENSITIVE"] = environ.get(
"SYNAPSE_LOG_SENSITIVE"
)
# Render and write the file
log_config_filepath = "/conf/workers/{name}.log.config".format(name=worker_name)
log_config_filepath = f"/conf/workers/{worker_name}.log.config"
convert(
"/conf/log.config",
log_config_filepath,


@@ -35,7 +35,6 @@
- [Application Services](application_services.md)
- [Server Notices](server_notices.md)
- [Consent Tracking](consent_tracking.md)
- [URL Previews](development/url_previews.md)
- [User Directory](user_directory.md)
- [Message Retention Policies](message_retention_policies.md)
- [Pluggable Modules](modules/index.md)
@@ -81,6 +80,7 @@
# Development
- [Contributing Guide](development/contributing_guide.md)
- [Code Style](code_style.md)
- [Reviewing Code](development/reviews.md)
- [Release Cycle](development/releases.md)
- [Git Usage](development/git.md)
- [Testing]()


@@ -59,6 +59,7 @@ The following fields are possible in the JSON response body:
- `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
- `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
- `state_events` - Total number of state_events of a room. Complexity of the room.
- `room_type` - The type of the room taken from the room's creation event; for example "m.space" if the room is a space. If the room does not define a type, the value will be `null`.
* `offset` - The current pagination offset in rooms. This parameter should be
used instead of `next_token` for room offset as `next_token` is
not intended to be parsed.
@@ -101,7 +102,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
"state_events": 93534,
"room_type": "m.space"
},
... (8 hidden items) ...
{
@@ -118,7 +120,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
"state_events": 8345,
"room_type": null
}
],
"offset": 0,
@@ -151,7 +154,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8
"state_events": 8,
"room_type": null
}
],
"offset": 0,
@@ -184,7 +188,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
"state_events": 93534,
"room_type": null
},
... (98 hidden items) ...
{
@@ -201,7 +206,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
"state_events": 8345,
"room_type": "m.space"
}
],
"offset": 0,
@@ -238,7 +244,9 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
"state_events": 93534,
"room_type": "m.space"
},
... (48 hidden items) ...
{
@@ -255,7 +263,9 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
"state_events": 8345,
"room_type": null
}
],
"offset": 100,
@@ -290,6 +300,8 @@ The following fields are possible in the JSON response body:
* `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
* `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
* `state_events` - Total number of state_events of a room. Complexity of the room.
* `room_type` - The type of the room taken from the room's creation event; for example "m.space" if the room is a space.
If the room does not define a type, the value will be `null`.
The API is:
@@ -317,7 +329,8 @@ A response body like the following is returned:
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
"state_events": 93534,
"room_type": "m.space"
}
```


@@ -544,7 +544,7 @@ Gets a list of all local media that a specific `user_id` has created.
These are media that the user has uploaded themselves
([local media](../media_repository.md#local-media)), as well as
[URL preview images](../media_repository.md#url-previews) requested by the user if the
[feature is enabled](../development/url_previews.md).
[feature is enabled](../usage/configuration/config_documentation.md#url_preview_enabled).
By default, the response is ordered by descending creation date and ascending media ID.
The newest media is on top. You can change the order with parameters


@@ -309,6 +309,10 @@ The above will run a monolithic (single-process) Synapse with SQLite as the data
- Passing `POSTGRES=1` as an environment variable to use the Postgres database instead.
- Passing `WORKERS=1` as an environment variable to use a workerised setup instead. This option implies the use of Postgres.
To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`, e.g.:
```sh
SYNAPSE_TEST_LOG_LEVEL=DEBUG COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages
```
### Prettier formatting with `gotestfmt`
@@ -347,7 +351,7 @@ To prepare a Pull Request, please:
3. `git push` your commit to your fork of Synapse;
4. on GitHub, [create the Pull Request](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-a-pull-request);
5. add a [changelog entry](#changelog) and push it to your Pull Request;
6. for most contributors, that's all - however, if you are a member of the organization `matrix-org`, on GitHub, please request a review from `matrix.org / Synapse Core`.
6. that's it for now, a non-draft pull request will automatically request review from the team;
7. if you need to update your PR, please avoid rebasing and just add new commits to your branch.
@@ -523,10 +527,13 @@ From this point, you should:
1. Look at the results of the CI pipeline.
- If there is any error, fix the error.
2. If a developer has requested changes, make these changes and let us know if it is ready for a developer to review again.
- A pull request is a conversation; if you disagree with the suggestions, please respond and discuss it.
3. Create a new commit with the changes.
- Please do NOT overwrite the history. New commits make the reviewer's life easier.
- Push these commits to your Pull Request.
4. Back to 1.
5. Once the pull request is ready for review again please re-request review from whichever developer did your initial
review (or leave a comment in the pull request that you believe all required changes have been done).
Once both the CI and the developers are happy, the patch will be merged into Synapse and released shortly!


@@ -0,0 +1,41 @@
Some notes on how we do reviews
===============================
The Synapse team works off a shared review queue -- any new pull request for
Synapse (or related projects) has a review requested from the entire team. Team
members should process this queue in the following priority order:
* Any high urgency pull requests (e.g. fixes for broken continuous integration
or fixes for release blockers);
* Follow-up reviews for pull requests which have previously received reviews;
* Any remaining pull requests.
For the latter two categories above, older pull requests should be prioritised.
It is explicit that there is no priority given to pull requests from the team
(vs from the community). If a pull request requires a quick turnaround, please
explicitly communicate this via [#synapse-dev:matrix.org](https://matrix.to/#/#synapse-dev:matrix.org)
or as a comment on the pull request.
Once an initial review has been completed and the author has made additional changes,
follow-up reviews should go back to the same reviewer. This helps build a shared
context and conversation between author and reviewer.
As a team we aim to keep the number of inflight pull requests to a minimum to ensure
that ongoing work is finished before starting new work.
Performing a review
-------------------
To communicate to the rest of the team the status of each pull request, team
members should do the following:
* Assign themselves to the pull request (they should remain assigned to the
pull request until it is merged or closed, or until they are no longer the reviewer);
* Review the pull request by leaving comments, questions, and suggestions;
* Mark the pull request appropriately (as needing changes or accepted).
If you are unsure about a particular part of the pull request (or are not confident
in your understanding of part of the code) then ask questions or request review
from the team again. When requesting review from the team be sure to leave a comment
explaining why you're putting it back in the queue.


@@ -1,61 +0,0 @@
URL Previews
============
The `GET /_matrix/media/r0/preview_url` endpoint provides a generic preview API
for URLs which outputs [Open Graph](https://ogp.me/) responses (with some Matrix
specific additions).
This does have trade-offs compared to other designs:
* Pros:
* Simple and flexible; can be used by any clients at any point
* Cons:
* If each homeserver provides one of these independently, all the HSes in a
room may needlessly DoS the target URI
* The URL metadata must be stored somewhere, rather than just using Matrix
itself to store the media.
* Matrix cannot be used to distribute the metadata between homeservers.
When Synapse is asked to preview a URL it does the following:
1. Checks against a URL blacklist (defined as `url_preview_url_blacklist` in the
config).
2. Checks the in-memory cache by URLs and returns the result if it exists. (This
is also used to de-duplicate processing of multiple in-flight requests at once.)
3. Kicks off a background process to generate a preview:
1. Checks the database cache by URL and timestamp and returns the result if it
has not expired and was successful (a 2xx return code).
2. Checks if the URL matches an [oEmbed](https://oembed.com/) pattern. If it
does, updates the URL to download.
3. Downloads the URL and stores it into a file via the media storage provider
and saves the local media metadata.
4. If the media is an image:
1. Generates thumbnails.
2. Generates an Open Graph response based on image properties.
5. If the media is HTML:
1. Decodes the HTML via the stored file.
2. Generates an Open Graph response from the HTML.
3. If a JSON oEmbed URL was found in the HTML via autodiscovery:
1. Downloads the URL and stores it into a file via the media storage provider
and saves the local media metadata.
2. Converts the oEmbed response to an Open Graph response.
3. Overrides any Open Graph data from the HTML with data from oEmbed.
4. If an image exists in the Open Graph response:
1. Downloads the URL and stores it into a file via the media storage
provider and saves the local media metadata.
2. Generates thumbnails.
3. Updates the Open Graph response based on image properties.
6. If the media is JSON and an oEmbed URL was found:
1. Converts the oEmbed response to an Open Graph response.
2. If a thumbnail or image is in the oEmbed response:
1. Downloads the URL and stores it into a file via the media storage
provider and saves the local media metadata.
2. Generates thumbnails.
3. Updates the Open Graph response based on image properties.
7. Stores the result in the database cache.
4. Returns the result.
The in-memory cache expires after 1 hour.
Expired entries in the database cache (and their associated media files) are
deleted every 10 seconds. The default expiration time is 1 hour from download.
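The in-memory layer described in step 2 is essentially an expiring cache that also de-duplicates concurrent lookups; a minimal asyncio sketch of that pattern (illustrative only; Synapse uses its own caching utilities):

```python
import asyncio
import time
from typing import Awaitable, Callable, Dict, Tuple

CACHE_TTL = 3600.0  # the doc's one-hour in-memory expiry, in seconds

_cache: Dict[str, Tuple[float, "asyncio.Task[dict]"]] = {}

async def get_preview(url: str, fetch: Callable[[str], Awaitable[dict]]) -> dict:
    now = time.monotonic()
    entry = _cache.get(url)
    if entry is not None and now - entry[0] < CACHE_TTL:
        # Concurrent callers share the same in-flight task (de-duplication);
        # a completed task simply returns its cached result again.
        return await asyncio.shield(entry[1])
    task = asyncio.ensure_future(fetch(url))
    _cache[url] = (now, task)
    return await asyncio.shield(task)
```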


@@ -7,8 +7,7 @@ The media repository
users.
* caches avatars, attachments and their thumbnails for media uploaded by remote
users.
* caches resources and thumbnails used for
[URL previews](development/url_previews.md).
* caches resources and thumbnails used for URL previews.
All media in Matrix can be identified by a unique
[MXC URI](https://spec.matrix.org/latest/client-server-api/#matrix-content-mxc-uris),
@@ -59,8 +58,6 @@ remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg
Note that `remote_thumbnail/` does not have an `s`.
## URL Previews
See [URL Previews](development/url_previews.md) for documentation on the URL preview
process.
When generating previews for URLs, Synapse may download and cache various
resources, including images. These resources are assigned temporary media IDs


@@ -143,6 +143,14 @@ to do step 2.
It is safe to kill the port script at any time and restart it.
However, under no circumstances should the SQLite database be `VACUUM`ed between
multiple runs of the script. Doing so can lead to an inconsistent copy of your database
being ported into Postgres.
To avoid accidental error, the script will check that SQLite's `auto_vacuum` mechanism
is disabled, but the script is not able to protect against a manual `VACUUM` operation
performed either by the administrator or by any automated task that the administrator
may have configured.
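For administrators who want to verify the setting by hand before running the port, a minimal sketch using Python's built-in `sqlite3` module (the database path is a placeholder):

```python
import sqlite3

# Hypothetical path to the Synapse SQLite database.
conn = sqlite3.connect("homeserver.db")
(auto_vacuum,) = conn.execute("PRAGMA auto_vacuum").fetchone()
# 0 = off (required by the port script), 1 = full, 2 = incremental.
print("auto_vacuum:", auto_vacuum)
conn.close()
```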
Note that the database may take up significantly more (25% - 100% more)
space on disk after porting to Postgres.


@@ -89,6 +89,21 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.64.0
## Delegation of email validation no longer supported
As of this version, Synapse no longer allows the tasks of verifying email address
ownership and password reset confirmation to be delegated to an identity server.
To continue to allow users to add email addresses to their homeserver accounts,
and perform password resets, make sure that Synapse is configured with a
working email server in the `email` configuration section (including, at a
minimum, a `notif_from` setting.)
Specifying an `email` setting under `account_threepid_delegates` will now cause
an error at startup.
# Upgrading to v1.62.0
## New signatures for spam checker callbacks


@@ -18,6 +18,11 @@ already on your `$PATH` depending on how Synapse was installed.
Finding your user's `access_token` is client-dependent, but will usually be shown in the client's settings.
## Making an Admin API request
For security reasons, we [recommend](reverse_proxy.md#synapse-administration-endpoints)
that the Admin API (`/_synapse/admin/...`) should be hidden from public view using a
reverse proxy. This means you should typically query the Admin API from a terminal on
the machine which runs Synapse.
Once you have your `access_token`, you will need to authenticate each request to an Admin API endpoint by
providing the token as either a query parameter or a request header. To add it as a request header in cURL:
@@ -25,5 +30,17 @@ providing the token as either a query parameter or a request header. To add it a
curl --header "Authorization: Bearer <access_token>" <the_rest_of_your_API_request>
```
For example, suppose we want to
[query the account](user_admin_api.md#query-user-account) of the user
`@foo:bar.com`. We need an admin access token (e.g.
`syt_AjfVef2_L33JNpafeif_0feKJfeaf0CQpoZk`), and we need to know which port
Synapse's [`client` listener](config_documentation.md#listeners) is listening
on (e.g. `8008`). Then we can use the following command to request the account
information from the Admin API.
```sh
curl --header "Authorization: Bearer syt_AjfVef2_L33JNpafeif_0feKJfeaf0CQpoZk" -X GET http://127.0.0.1:8008/_synapse/admin/v2/users/@foo:bar.com
```
For more details on access tokens in Matrix, please refer to the complete
[matrix spec documentation](https://matrix.org/docs/spec/client_server/r0.6.1#using-access-tokens).
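The same request can of course be scripted; a minimal sketch using the `requests` library (the token and port are the placeholder values from the example above):

```python
import requests

resp = requests.get(
    "http://127.0.0.1:8008/_synapse/admin/v2/users/@foo:bar.com",
    headers={
        "Authorization": "Bearer syt_AjfVef2_L33JNpafeif_0feKJfeaf0CQpoZk"
    },
)
resp.raise_for_status()
print(resp.json())  # the account details for @foo:bar.com
```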


@@ -591,7 +591,7 @@ Example configuration:
dummy_events_threshold: 5
```
---
Config option `delete_stale_devices_after`
### `delete_stale_devices_after`
An optional duration. If set, Synapse will run a daily background task to log out and
delete any device that hasn't been accessed for more than the specified amount of time.
@@ -1257,6 +1257,98 @@ database:
cp_max: 10
```
---
### `databases`
The `databases` option allows specifying a mapping between certain database tables and
database host details, spreading the load of a single Synapse instance across multiple
database backends. This is often referred to as "database sharding". This option is only
supported for PostgreSQL database backends.
**Important note:** This is a supported option, but is not currently used in production by the
Matrix.org Foundation. Proceed with caution and always make backups.
`databases` is a dictionary of arbitrarily-named database entries. Each entry is equivalent
to the value of the `database` homeserver config option (see above), with the addition of
a `data_stores` key. `data_stores` is an array of strings that specifies the data store(s)
(a defined label for a set of tables) that should be stored on the associated database
backend entry.
The currently defined values for `data_stores` are:
* `"state"`: Database that relates to state groups will be stored in this database.
Specifically, that means the following tables:
* `state_groups`
* `state_group_edges`
* `state_groups_state`
And the following sequences:
* `state_groups_seq_id`
* `"main"`: All other database tables and sequences.
All databases will end up with additional tables used for tracking database schema migrations
and any pending background updates. Synapse will create these automatically on startup when checking for
and/or performing database schema migrations.
To migrate an existing database configuration (e.g. all tables on a single database) to a different
configuration (e.g. the "main" data store on one database, and "state" on another), do the following:
1. Take a backup of your existing database. Things can and do go wrong and database corruption is no joke!
2. Ensure all pending database migrations have been applied and background updates have run. The simplest
way to do this is to use the `update_synapse_database` script supplied with your Synapse installation.
```sh
update_synapse_database --database-config homeserver.yaml --run-background-updates
```
3. Copy over the necessary tables and sequences from one database to the other. Tables relating to database
migrations, schemas, schema versions and background updates should **not** be copied.
As an example, say that you'd like to split out the "state" data store from an existing database which
currently contains all data stores.
Simply copy the tables and sequences defined above for the "state" datastore from the existing database
to the secondary database. As noted above, additional tables will be created in the secondary database
when Synapse is started.
4. Modify/create the `databases` option in your `homeserver.yaml` to match the desired database configuration.
5. Start Synapse. Check that it starts up successfully and that things generally seem to be working.
6. Drop the old tables that were copied in step 3.
Only one of the options `database` or `databases` may be specified in your config, but not both.
Example configuration:
```yaml
databases:
basement_box:
name: psycopg2
txn_limit: 10000
data_stores: ["main"]
args:
user: synapse_user
password: secretpassword
database: synapse_main
host: localhost
port: 5432
cp_min: 5
cp_max: 10
my_other_database:
name: psycopg2
txn_limit: 10000
data_stores: ["state"]
args:
user: synapse_user
password: secretpassword
database: synapse_state
host: localhost
port: 5432
cp_min: 5
cp_max: 10
```
---
## Logging ##
Config options related to logging.
@@ -1843,7 +1935,7 @@ Example configuration:
turn_shared_secret: "YOUR_SHARED_SECRET"
```
----
Config options: `turn_username` and `turn_password`
### `turn_username` and `turn_password`
The Username and password if the TURN server needs them and does not use a token.
@@ -2076,30 +2168,26 @@ default_identity_server: https://matrix.org
---
### `account_threepid_delegates`
Handle threepid (email/phone etc) registration and password resets through a set of
*trusted* identity servers. Note that this allows the configured identity server to
reset passwords for accounts!
Delegate verification of phone numbers to an identity server.
Be aware that if `email` is not set, and SMTP options have not been
configured in the email config block, registration and user password resets via
email will be globally disabled.
When a user wishes to add a phone number to their account, we need to verify that they
actually own that phone number, which requires sending them a text message (SMS).
Currently Synapse does not support sending those texts itself and instead delegates the
task to an identity server. The base URI for the identity server to be used is
specified by the `account_threepid_delegates.msisdn` option.
Additionally, if `msisdn` is not set, registration and password resets via msisdn
will be disabled regardless, and users will not be able to associate an msisdn
identifier to their account. This is due to Synapse currently not supporting
any method of sending SMS messages on its own.
If this is left unspecified, Synapse will not allow users to add phone numbers to
their account.
To enable using an identity server for operations regarding a particular third-party
identifier type, set the value to the URL of that identity server as shown in the
examples below.
(Servers handling these requests must answer the `/requestToken` endpoints defined
by the Matrix Identity Service API
[specification](https://matrix.org/docs/spec/identity_service/latest).)
Servers handling these requests must answer the `/requestToken` endpoints defined
by the Matrix Identity Service API [specification](https://matrix.org/docs/spec/identity_service/latest).
*Updated in Synapse 1.64.0*: No longer accepts an `email` option.
Example configuration:
```yaml
account_threepid_delegates:
email: https://example.com # Delegate email sending to example.com
msisdn: http://localhost:8090 # Delegate SMS sending to this local process
```
---
@@ -3373,7 +3461,7 @@ alias_creation_rules:
action: deny
```
---
Config options: `room_list_publication_rules`
### `room_list_publication_rules`
The `room_list_publication_rules` option controls who can publish and
which rooms can be published in the public room list.


@@ -73,7 +73,6 @@ exclude = (?x)
|tests/util/test_lrucache.py
|tests/util/test_rwlock.py
|tests/util/test_wheel_timer.py
|tests/utils.py
)$
[mypy-synapse.federation.transport.client]
@@ -127,6 +126,9 @@ disallow_untyped_defs = True
[mypy-tests.federation.transport.test_client]
disallow_untyped_defs = True
[mypy-tests.utils]
disallow_untyped_defs = True
;; Dependencies without annotations
;; Before ignoring a module, check to see if type stubs are available.

poetry.lock (generated)

@@ -290,7 +290,7 @@ importlib-metadata = {version = "*", markers = "python_version < \"3.8\""}
[[package]]
name = "frozendict"
version = "2.3.0"
version = "2.3.2"
description = "A simple immutable dictionary"
category = "main"
optional = false
@@ -502,7 +502,7 @@ pyasn1 = ">=0.4.6"
[[package]]
name = "lxml"
version = "4.8.0"
version = "4.9.1"
description = "Powerful and Pythonic XML processing library combining libxml2/libxslt with the ElementTree API."
category = "main"
optional = true
@@ -540,7 +540,7 @@ test = ["tox", "twisted", "aiounittest"]
[[package]]
name = "matrix-synapse-ldap3"
version = "0.2.0"
version = "0.2.1"
description = "An LDAP3 auth provider for Synapse"
category = "main"
optional = true
@@ -552,7 +552,7 @@ service-identity = "*"
Twisted = ">=15.1.0"
[package.extras]
dev = ["matrix-synapse", "tox", "ldaptor", "mypy (==0.910)", "types-setuptools", "black (==21.9b0)", "flake8 (==4.0.1)", "isort (==5.9.3)"]
dev = ["matrix-synapse", "tox", "ldaptor", "mypy (==0.910)", "types-setuptools", "black (==22.3.0)", "flake8 (==4.0.1)", "isort (==5.9.3)"]
[[package]]
name = "mccabe"
@@ -1753,23 +1753,23 @@ flake8-comprehensions = [
{file = "flake8_comprehensions-3.8.0-py3-none-any.whl", hash = "sha256:9406314803abe1193c064544ab14fdc43c58424c0882f6ff8a581eb73fc9bb58"},
]
frozendict = [
{file = "frozendict-2.3.0-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e18e2abd144a9433b0a8334582843b2aa0d3b9ac8b209aaa912ad365115fe2e1"},
{file = "frozendict-2.3.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:96dc7a02e78da5725e5e642269bb7ae792e0c9f13f10f2e02689175ebbfedb35"},
{file = "frozendict-2.3.0-cp310-cp310-win_amd64.whl", hash = "sha256:752a6dcfaf9bb20a7ecab24980e4dbe041f154509c989207caf185522ef85461"},
{file = "frozendict-2.3.0-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:5346d9fc1c936c76d33975a9a9f1a067342963105d9a403a99e787c939cc2bb2"},
{file = "frozendict-2.3.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:60dd2253f1bacb63a7c486ec541a968af4f985ffb06602ee8954a3d39ec6bd2e"},
{file = "frozendict-2.3.0-cp36-cp36m-win_amd64.whl", hash = "sha256:b2e044602ce17e5cd86724add46660fb9d80169545164e763300a3b839cb1b79"},
{file = "frozendict-2.3.0-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a27a69b1ac3591e4258325108aee62b53c0eeb6ad0a993ae68d3c7eaea980420"},
{file = "frozendict-2.3.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4f45ef5f6b184d84744fff97b61f6b9a855e24d36b713ea2352fc723a047afa5"},
{file = "frozendict-2.3.0-cp37-cp37m-win_amd64.whl", hash = "sha256:2d3f5016650c0e9a192f5024e68fb4d63f670d0ee58b099ed3f5b4c62ea30ecb"},
{file = "frozendict-2.3.0-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6cf605916f50aabaaba5624c81eb270200f6c2c466c46960237a125ec8fe3ae0"},
{file = "frozendict-2.3.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f6da06e44904beae4412199d7e49be4f85c6cc168ab06b77c735ea7da5ce3454"},
{file = "frozendict-2.3.0-cp38-cp38-win_amd64.whl", hash = "sha256:1f34793fb409c4fa70ffd25bea87b01f3bd305fb1c6b09e7dff085b126302206"},
{file = "frozendict-2.3.0-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:fd72494a559bdcd28aa71f4aa81860269cd0b7c45fff3e2614a0a053ecfd2a13"},
{file = "frozendict-2.3.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:00ea9166aa68cc5feed05986206fdbf35e838a09cb3feef998cf35978ff8a803"},
{file = "frozendict-2.3.0-cp39-cp39-win_amd64.whl", hash = "sha256:9ffaf440648b44e0bc694c1a4701801941378ba3ba6541e17750ae4b4aeeb116"},
{file = "frozendict-2.3.0-py3-none-any.whl", hash = "sha256:8578fe06815fcdcc672bd5603eebc98361a5317c1c3a13b28c6c810f6ea3b323"},
{file = "frozendict-2.3.0.tar.gz", hash = "sha256:da4231adefc5928e7810da2732269d3ad7b5616295b3e693746392a8205ea0b5"},
{file = "frozendict-2.3.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:4fb171d1e84d17335365877e19d17440373b47ca74a73c06f65ac0b16d01e87f"},
{file = "frozendict-2.3.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0a3640e9d7533d164160b758351aa49d9e85bbe0bd76d219d4021e90ffa6a52"},
{file = "frozendict-2.3.2-cp310-cp310-win_amd64.whl", hash = "sha256:87cfd00fafbc147d8cd2590d1109b7db8ac8d7d5bdaa708ba46caee132b55d4d"},
{file = "frozendict-2.3.2-cp36-cp36m-macosx_10_9_x86_64.whl", hash = "sha256:fb09761e093cfabb2f179dbfdb2521e1ec5701df714d1eb5c51fa7849027be19"},
{file = "frozendict-2.3.2-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:82176dc7adf01cf8f0193e909401939415a230a1853f4a672ec1629a06ceae18"},
{file = "frozendict-2.3.2-cp36-cp36m-win_amd64.whl", hash = "sha256:c1c70826aa4a50fa283fe161834ac4a3ac7c753902c980bb8b595b0998a38ddb"},
{file = "frozendict-2.3.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:1db5035ddbed995badd1a62c4102b5e207b5aeb24472df2c60aba79639d7996b"},
{file = "frozendict-2.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4246fc4cb1413645ba4d3513939b90d979a5bae724be605a10b2b26ee12f839c"},
{file = "frozendict-2.3.2-cp37-cp37m-win_amd64.whl", hash = "sha256:680cd42fb0a255da1ce45678ccbd7f69da750d5243809524ebe8f45b2eda6e6b"},
{file = "frozendict-2.3.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:6a7f3a181d6722c92a9fab12d0c5c2b006a18ca5666098531f316d1e1c8984e3"},
{file = "frozendict-2.3.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b1cb866eabb3c1384a7fe88e1e1033e2b6623073589012ab637c552bf03f6364"},
{file = "frozendict-2.3.2-cp38-cp38-win_amd64.whl", hash = "sha256:952c5e5e664578c5c2ce8489ee0ab6a1855da02b58ef593ee728fc10d672641a"},
{file = "frozendict-2.3.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:608b77904cd0117cd816df605a80d0043a5326ee62529327d2136c792165a823"},
{file = "frozendict-2.3.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0eed41fd326f0bcc779837d8d9e1374da1bc9857fe3b9f2910195bbd5fff3aeb"},
{file = "frozendict-2.3.2-cp39-cp39-win_amd64.whl", hash = "sha256:bde28db6b5868dd3c45b3555f9d1dc5a1cca6d93591502fa5dcecce0dde6a335"},
{file = "frozendict-2.3.2-py3-none-any.whl", hash = "sha256:6882a9bbe08ab9b5ff96ce11bdff3fe40b114b9813bc6801261e2a7b45e20012"},
{file = "frozendict-2.3.2.tar.gz", hash = "sha256:7fac4542f0a13fbe704db4942f41ba3abffec5af8b100025973e59dff6a09d0d"},
]
gitdb = [
{file = "gitdb-4.0.9-py3-none-any.whl", hash = "sha256:8033ad4e853066ba6ca92050b9df2f89301b8fc8bf7e9324d412a63f8bf1a8fd"},
@@ -1937,67 +1937,76 @@ ldap3 = [
{file = "ldap3-2.9.1.tar.gz", hash = "sha256:f3e7fc4718e3f09dda568b57100095e0ce58633bcabbed8667ce3f8fbaa4229f"},
]
lxml = [
{file = "lxml-4.8.0-cp27-cp27m-macosx_10_14_x86_64.whl", hash = "sha256:e1ab2fac607842ac36864e358c42feb0960ae62c34aa4caaf12ada0a1fb5d99b"},
{file = "lxml-4.8.0-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:28d1af847786f68bec57961f31221125c29d6f52d9187c01cd34dc14e2b29430"},
{file = "lxml-4.8.0-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:b92d40121dcbd74831b690a75533da703750f7041b4bf951befc657c37e5695a"},
{file = "lxml-4.8.0-cp27-cp27m-win32.whl", hash = "sha256:e01f9531ba5420838c801c21c1b0f45dbc9607cb22ea2cf132844453bec863a5"},
{file = "lxml-4.8.0-cp27-cp27m-win_amd64.whl", hash = "sha256:6259b511b0f2527e6d55ad87acc1c07b3cbffc3d5e050d7e7bcfa151b8202df9"},
{file = "lxml-4.8.0-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:1010042bfcac2b2dc6098260a2ed022968dbdfaf285fc65a3acf8e4eb1ffd1bc"},
{file = "lxml-4.8.0-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:fa56bb08b3dd8eac3a8c5b7d075c94e74f755fd9d8a04543ae8d37b1612dd170"},
{file = "lxml-4.8.0-cp310-cp310-macosx_10_15_x86_64.whl", hash = "sha256:31ba2cbc64516dcdd6c24418daa7abff989ddf3ba6d3ea6f6ce6f2ed6e754ec9"},
{file = "lxml-4.8.0-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:31499847fc5f73ee17dbe1b8e24c6dafc4e8d5b48803d17d22988976b0171f03"},
{file = "lxml-4.8.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:5f7d7d9afc7b293147e2d506a4596641d60181a35279ef3aa5778d0d9d9123fe"},
{file = "lxml-4.8.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:a3c5f1a719aa11866ffc530d54ad965063a8cbbecae6515acbd5f0fae8f48eaa"},
{file = "lxml-4.8.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:6268e27873a3d191849204d00d03f65c0e343b3bcb518a6eaae05677c95621d1"},
{file = "lxml-4.8.0-cp310-cp310-win32.whl", hash = "sha256:330bff92c26d4aee79c5bc4d9967858bdbe73fdbdbacb5daf623a03a914fe05b"},
{file = "lxml-4.8.0-cp310-cp310-win_amd64.whl", hash = "sha256:b2582b238e1658c4061ebe1b4df53c435190d22457642377fd0cb30685cdfb76"},
{file = "lxml-4.8.0-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:a2bfc7e2a0601b475477c954bf167dee6d0f55cb167e3f3e7cefad906e7759f6"},
{file = "lxml-4.8.0-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:a1547ff4b8a833511eeaceacbcd17b043214fcdb385148f9c1bc5556ca9623e2"},
{file = "lxml-4.8.0-cp35-cp35m-win32.whl", hash = "sha256:a9f1c3489736ff8e1c7652e9dc39f80cff820f23624f23d9eab6e122ac99b150"},
{file = "lxml-4.8.0-cp35-cp35m-win_amd64.whl", hash = "sha256:530f278849031b0eb12f46cca0e5db01cfe5177ab13bd6878c6e739319bae654"},
{file = "lxml-4.8.0-cp36-cp36m-macosx_10_14_x86_64.whl", hash = "sha256:078306d19a33920004addeb5f4630781aaeabb6a8d01398045fcde085091a169"},
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:86545e351e879d0b72b620db6a3b96346921fa87b3d366d6c074e5a9a0b8dadb"},
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:24f5c5ae618395ed871b3d8ebfcbb36e3f1091fd847bf54c4de623f9107942f3"},
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:bbab6faf6568484707acc052f4dfc3802bdb0cafe079383fbaa23f1cdae9ecd4"},
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:7993232bd4044392c47779a3c7e8889fea6883be46281d45a81451acfd704d7e"},
{file = "lxml-4.8.0-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6d6483b1229470e1d8835e52e0ff3c6973b9b97b24cd1c116dca90b57a2cc613"},
{file = "lxml-4.8.0-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:ad4332a532e2d5acb231a2e5d33f943750091ee435daffca3fec0a53224e7e33"},
{file = "lxml-4.8.0-cp36-cp36m-win32.whl", hash = "sha256:db3535733f59e5605a88a706824dfcb9bd06725e709ecb017e165fc1d6e7d429"},
{file = "lxml-4.8.0-cp36-cp36m-win_amd64.whl", hash = "sha256:5f148b0c6133fb928503cfcdfdba395010f997aa44bcf6474fcdd0c5398d9b63"},
{file = "lxml-4.8.0-cp37-cp37m-macosx_10_14_x86_64.whl", hash = "sha256:8a31f24e2a0b6317f33aafbb2f0895c0bce772980ae60c2c640d82caac49628a"},
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:719544565c2937c21a6f76d520e6e52b726d132815adb3447ccffbe9f44203c4"},
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:c0b88ed1ae66777a798dc54f627e32d3b81c8009967c63993c450ee4cbcbec15"},
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:fa9b7c450be85bfc6cd39f6df8c5b8cbd76b5d6fc1f69efec80203f9894b885f"},
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e9f84ed9f4d50b74fbc77298ee5c870f67cb7e91dcdc1a6915cb1ff6a317476c"},
{file = "lxml-4.8.0-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:1d650812b52d98679ed6c6b3b55cbb8fe5a5460a0aef29aeb08dc0b44577df85"},
{file = "lxml-4.8.0-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:80bbaddf2baab7e6de4bc47405e34948e694a9efe0861c61cdc23aa774fcb141"},
{file = "lxml-4.8.0-cp37-cp37m-win32.whl", hash = "sha256:6f7b82934c08e28a2d537d870293236b1000d94d0b4583825ab9649aef7ddf63"},
{file = "lxml-4.8.0-cp37-cp37m-win_amd64.whl", hash = "sha256:e1fd7d2fe11f1cb63d3336d147c852f6d07de0d0020d704c6031b46a30b02ca8"},
{file = "lxml-4.8.0-cp38-cp38-macosx_10_14_x86_64.whl", hash = "sha256:5045ee1ccd45a89c4daec1160217d363fcd23811e26734688007c26f28c9e9e7"},
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:0c1978ff1fd81ed9dcbba4f91cf09faf1f8082c9d72eb122e92294716c605428"},
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:52cbf2ff155b19dc4d4100f7442f6a697938bf4493f8d3b0c51d45568d5666b5"},
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:ce13d6291a5f47c1c8dbd375baa78551053bc6b5e5c0e9bb8e39c0a8359fd52f"},
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:e11527dc23d5ef44d76fef11213215c34f36af1608074561fcc561d983aeb870"},
{file = "lxml-4.8.0-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:60d2f60bd5a2a979df28ab309352cdcf8181bda0cca4529769a945f09aba06f9"},
{file = "lxml-4.8.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:62f93eac69ec0f4be98d1b96f4d6b964855b8255c345c17ff12c20b93f247b68"},
{file = "lxml-4.8.0-cp38-cp38-win32.whl", hash = "sha256:20b8a746a026017acf07da39fdb10aa80ad9877046c9182442bf80c84a1c4696"},
{file = "lxml-4.8.0-cp38-cp38-win_amd64.whl", hash = "sha256:891dc8f522d7059ff0024cd3ae79fd224752676447f9c678f2a5c14b84d9a939"},
{file = "lxml-4.8.0-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:b6fc2e2fb6f532cf48b5fed57567ef286addcef38c28874458a41b7837a57807"},
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:74eb65ec61e3c7c019d7169387d1b6ffcfea1b9ec5894d116a9a903636e4a0b1"},
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:627e79894770783c129cc5e89b947e52aa26e8e0557c7e205368a809da4b7939"},
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:545bd39c9481f2e3f2727c78c169425efbfb3fbba6e7db4f46a80ebb249819ca"},
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5a58d0b12f5053e270510bf12f753a76aaf3d74c453c00942ed7d2c804ca845c"},
{file = "lxml-4.8.0-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:ec4b4e75fc68da9dc0ed73dcdb431c25c57775383fec325d23a770a64e7ebc87"},
{file = "lxml-4.8.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5804e04feb4e61babf3911c2a974a5b86f66ee227cc5006230b00ac6d285b3a9"},
{file = "lxml-4.8.0-cp39-cp39-win32.whl", hash = "sha256:aa0cf4922da7a3c905d000b35065df6184c0dc1d866dd3b86fd961905bbad2ea"},
{file = "lxml-4.8.0-cp39-cp39-win_amd64.whl", hash = "sha256:dd10383f1d6b7edf247d0960a3db274c07e96cf3a3fc7c41c8448f93eac3fb1c"},
{file = "lxml-4.8.0-pp37-pypy37_pp73-macosx_10_14_x86_64.whl", hash = "sha256:2403a6d6fb61c285969b71f4a3527873fe93fd0abe0832d858a17fe68c8fa507"},
{file = "lxml-4.8.0-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:986b7a96228c9b4942ec420eff37556c5777bfba6758edcb95421e4a614b57f9"},
{file = "lxml-4.8.0-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:6fe4ef4402df0250b75ba876c3795510d782def5c1e63890bde02d622570d39e"},
{file = "lxml-4.8.0-pp38-pypy38_pp73-macosx_10_14_x86_64.whl", hash = "sha256:f10ce66fcdeb3543df51d423ede7e238be98412232fca5daec3e54bcd16b8da0"},
{file = "lxml-4.8.0-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:730766072fd5dcb219dd2b95c4c49752a54f00157f322bc6d71f7d2a31fecd79"},
{file = "lxml-4.8.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:8b99ec73073b37f9ebe8caf399001848fced9c08064effdbfc4da2b5a8d07b93"},
{file = "lxml-4.8.0.tar.gz", hash = "sha256:f63f62fc60e6228a4ca9abae28228f35e1bd3ce675013d1dfb828688d50c6e23"},
{file = "lxml-4.9.1-cp27-cp27m-macosx_10_15_x86_64.whl", hash = "sha256:98cafc618614d72b02185ac583c6f7796202062c41d2eeecdf07820bad3295ed"},
{file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c62e8dd9754b7debda0c5ba59d34509c4688f853588d75b53c3791983faa96fc"},
{file = "lxml-4.9.1-cp27-cp27m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:21fb3d24ab430fc538a96e9fbb9b150029914805d551deeac7d7822f64631dfc"},
{file = "lxml-4.9.1-cp27-cp27m-win32.whl", hash = "sha256:86e92728ef3fc842c50a5cb1d5ba2bc66db7da08a7af53fb3da79e202d1b2cd3"},
{file = "lxml-4.9.1-cp27-cp27m-win_amd64.whl", hash = "sha256:4cfbe42c686f33944e12f45a27d25a492cc0e43e1dc1da5d6a87cbcaf2e95627"},
{file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:dad7b164905d3e534883281c050180afcf1e230c3d4a54e8038aa5cfcf312b84"},
{file = "lxml-4.9.1-cp27-cp27mu-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:a614e4afed58c14254e67862456d212c4dcceebab2eaa44d627c2ca04bf86837"},
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:f9ced82717c7ec65a67667bb05865ffe38af0e835cdd78728f1209c8fffe0cad"},
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:d9fc0bf3ff86c17348dfc5d322f627d78273eba545db865c3cd14b3f19e57fa5"},
{file = "lxml-4.9.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:e5f66bdf0976ec667fc4594d2812a00b07ed14d1b44259d19a41ae3fff99f2b8"},
{file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:fe17d10b97fdf58155f858606bddb4e037b805a60ae023c009f760d8361a4eb8"},
{file = "lxml-4.9.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:8caf4d16b31961e964c62194ea3e26a0e9561cdf72eecb1781458b67ec83423d"},
{file = "lxml-4.9.1-cp310-cp310-win32.whl", hash = "sha256:4780677767dd52b99f0af1f123bc2c22873d30b474aa0e2fc3fe5e02217687c7"},
{file = "lxml-4.9.1-cp310-cp310-win_amd64.whl", hash = "sha256:b122a188cd292c4d2fcd78d04f863b789ef43aa129b233d7c9004de08693728b"},
{file = "lxml-4.9.1-cp311-cp311-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:be9eb06489bc975c38706902cbc6888f39e946b81383abc2838d186f0e8b6a9d"},
{file = "lxml-4.9.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:f1be258c4d3dc609e654a1dc59d37b17d7fef05df912c01fc2e15eb43a9735f3"},
{file = "lxml-4.9.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:927a9dd016d6033bc12e0bf5dee1dde140235fc8d0d51099353c76081c03dc29"},
{file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:9232b09f5efee6a495a99ae6824881940d6447debe272ea400c02e3b68aad85d"},
{file = "lxml-4.9.1-cp35-cp35m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:04da965dfebb5dac2619cb90fcf93efdb35b3c6994fea58a157a834f2f94b318"},
{file = "lxml-4.9.1-cp35-cp35m-win32.whl", hash = "sha256:4d5bae0a37af799207140652a700f21a85946f107a199bcb06720b13a4f1f0b7"},
{file = "lxml-4.9.1-cp35-cp35m-win_amd64.whl", hash = "sha256:4878e667ebabe9b65e785ac8da4d48886fe81193a84bbe49f12acff8f7a383a4"},
{file = "lxml-4.9.1-cp36-cp36m-macosx_10_15_x86_64.whl", hash = "sha256:1355755b62c28950f9ce123c7a41460ed9743c699905cbe664a5bcc5c9c7c7fb"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:bcaa1c495ce623966d9fc8a187da80082334236a2a1c7e141763ffaf7a405067"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6eafc048ea3f1b3c136c71a86db393be36b5b3d9c87b1c25204e7d397cee9536"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:13c90064b224e10c14dcdf8086688d3f0e612db53766e7478d7754703295c7c8"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:206a51077773c6c5d2ce1991327cda719063a47adc02bd703c56a662cdb6c58b"},
{file = "lxml-4.9.1-cp36-cp36m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:e8f0c9d65da595cfe91713bc1222af9ecabd37971762cb830dea2fc3b3bb2acf"},
{file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_aarch64.whl", hash = "sha256:8f0a4d179c9a941eb80c3a63cdb495e539e064f8054230844dcf2fcb812b71d3"},
{file = "lxml-4.9.1-cp36-cp36m-musllinux_1_1_x86_64.whl", hash = "sha256:830c88747dce8a3e7525defa68afd742b4580df6aa2fdd6f0855481e3994d391"},
{file = "lxml-4.9.1-cp36-cp36m-win32.whl", hash = "sha256:1e1cf47774373777936c5aabad489fef7b1c087dcd1f426b621fda9dcc12994e"},
{file = "lxml-4.9.1-cp36-cp36m-win_amd64.whl", hash = "sha256:5974895115737a74a00b321e339b9c3f45c20275d226398ae79ac008d908bff7"},
{file = "lxml-4.9.1-cp37-cp37m-macosx_10_15_x86_64.whl", hash = "sha256:1423631e3d51008871299525b541413c9b6c6423593e89f9c4cfbe8460afc0a2"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:2aaf6a0a6465d39b5ca69688fce82d20088c1838534982996ec46633dc7ad6cc"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:9f36de4cd0c262dd9927886cc2305aa3f2210db437aa4fed3fb4940b8bf4592c"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:ae06c1e4bc60ee076292e582a7512f304abdf6c70db59b56745cca1684f875a4"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:57e4d637258703d14171b54203fd6822fda218c6c2658a7d30816b10995f29f3"},
{file = "lxml-4.9.1-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:6d279033bf614953c3fc4a0aa9ac33a21e8044ca72d4fa8b9273fe75359d5cca"},
{file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:a60f90bba4c37962cbf210f0188ecca87daafdf60271f4c6948606e4dabf8785"},
{file = "lxml-4.9.1-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:6ca2264f341dd81e41f3fffecec6e446aa2121e0b8d026fb5130e02de1402785"},
{file = "lxml-4.9.1-cp37-cp37m-win32.whl", hash = "sha256:27e590352c76156f50f538dbcebd1925317a0f70540f7dc8c97d2931c595783a"},
{file = "lxml-4.9.1-cp37-cp37m-win_amd64.whl", hash = "sha256:eea5d6443b093e1545ad0210e6cf27f920482bfcf5c77cdc8596aec73523bb7e"},
{file = "lxml-4.9.1-cp38-cp38-macosx_10_15_x86_64.whl", hash = "sha256:f05251bbc2145349b8d0b77c0d4e5f3b228418807b1ee27cefb11f69ed3d233b"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:487c8e61d7acc50b8be82bda8c8d21d20e133c3cbf41bd8ad7eb1aaeb3f07c97"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:8d1a92d8e90b286d491e5626af53afef2ba04da33e82e30744795c71880eaa21"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:b570da8cd0012f4af9fa76a5635cd31f707473e65a5a335b186069d5c7121ff2"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:5ef87fca280fb15342726bd5f980f6faf8b84a5287fcc2d4962ea8af88b35130"},
{file = "lxml-4.9.1-cp38-cp38-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:93e414e3206779ef41e5ff2448067213febf260ba747fc65389a3ddaa3fb8715"},
{file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:6653071f4f9bac46fbc30f3c7838b0e9063ee335908c5d61fb7a4a86c8fd2036"},
{file = "lxml-4.9.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:32a73c53783becdb7eaf75a2a1525ea8e49379fb7248c3eeefb9412123536387"},
{file = "lxml-4.9.1-cp38-cp38-win32.whl", hash = "sha256:1a7c59c6ffd6ef5db362b798f350e24ab2cfa5700d53ac6681918f314a4d3b94"},
{file = "lxml-4.9.1-cp38-cp38-win_amd64.whl", hash = "sha256:1436cf0063bba7888e43f1ba8d58824f085410ea2025befe81150aceb123e345"},
{file = "lxml-4.9.1-cp39-cp39-macosx_10_15_x86_64.whl", hash = "sha256:4beea0f31491bc086991b97517b9683e5cfb369205dac0148ef685ac12a20a67"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:41fb58868b816c202e8881fd0f179a4644ce6e7cbbb248ef0283a34b73ec73bb"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.manylinux_2_24_aarch64.whl", hash = "sha256:bd34f6d1810d9354dc7e35158aa6cc33456be7706df4420819af6ed966e85448"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:edffbe3c510d8f4bf8640e02ca019e48a9b72357318383ca60e3330c23aaffc7"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:6d949f53ad4fc7cf02c44d6678e7ff05ec5f5552b235b9e136bd52e9bf730b91"},
{file = "lxml-4.9.1-cp39-cp39-manylinux_2_5_x86_64.manylinux1_x86_64.whl", hash = "sha256:079b68f197c796e42aa80b1f739f058dcee796dc725cc9a1be0cdb08fc45b000"},
{file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:9c3a88d20e4fe4a2a4a84bf439a5ac9c9aba400b85244c63a1ab7088f85d9d25"},
{file = "lxml-4.9.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:4e285b5f2bf321fc0857b491b5028c5f276ec0c873b985d58d7748ece1d770dd"},
{file = "lxml-4.9.1-cp39-cp39-win32.whl", hash = "sha256:ef72013e20dd5ba86a8ae1aed7f56f31d3374189aa8b433e7b12ad182c0d2dfb"},
{file = "lxml-4.9.1-cp39-cp39-win_amd64.whl", hash = "sha256:10d2017f9150248563bb579cd0d07c61c58da85c922b780060dcc9a3aa9f432d"},
{file = "lxml-4.9.1-pp37-pypy37_pp73-macosx_10_15_x86_64.whl", hash = "sha256:0538747a9d7827ce3e16a8fdd201a99e661c7dee3c96c885d8ecba3c35d1032c"},
{file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:0645e934e940107e2fdbe7c5b6fb8ec6232444260752598bc4d09511bd056c0b"},
{file = "lxml-4.9.1-pp37-pypy37_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:6daa662aba22ef3258934105be2dd9afa5bb45748f4f702a3b39a5bf53a1f4dc"},
{file = "lxml-4.9.1-pp38-pypy38_pp73-macosx_10_15_x86_64.whl", hash = "sha256:603a464c2e67d8a546ddaa206d98e3246e5db05594b97db844c2f0a1af37cf5b"},
{file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c4b2e0559b68455c085fb0f6178e9752c4be3bba104d6e881eb5573b399d1eb2"},
{file = "lxml-4.9.1-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:0f3f0059891d3254c7b5fb935330d6db38d6519ecd238ca4fce93c234b4a0f73"},
{file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_12_i686.manylinux2010_i686.manylinux_2_24_i686.whl", hash = "sha256:c852b1530083a620cb0de5f3cd6826f19862bafeaf77586f1aef326e49d95f0c"},
{file = "lxml-4.9.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.manylinux_2_24_x86_64.whl", hash = "sha256:287605bede6bd36e930577c5925fcea17cb30453d96a7b4c63c14a257118dbb9"},
{file = "lxml-4.9.1.tar.gz", hash = "sha256:fe749b052bb7233fe5d072fcb549221a8cb1a16725c47c37e42b0b9cb3ff2c3f"},
]
markupsafe = [
{file = "MarkupSafe-2.1.0-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:3028252424c72b2602a323f70fbf50aa80a5d3aa616ea6add4ba21ae9cc9da4c"},
@@ -2046,8 +2055,8 @@ matrix-common = [
{file = "matrix_common-1.2.1.tar.gz", hash = "sha256:a99dcf02a6bd95b24a5a61b354888a2ac92bf2b4b839c727b8dd9da2cdfa3853"},
]
matrix-synapse-ldap3 = [
{file = "matrix-synapse-ldap3-0.2.0.tar.gz", hash = "sha256:91a0715b43a41ec3033244174fca20846836da98fda711fb01687f7199eecd2e"},
{file = "matrix_synapse_ldap3-0.2.0-py3-none-any.whl", hash = "sha256:0128ca7c3058987adc2e8a88463bb46879915bfd3d373309632813b353e30f9f"},
{file = "matrix-synapse-ldap3-0.2.1.tar.gz", hash = "sha256:bfb4390f4a262ffb0d6f057ff3aeb1e46d4e52ff420a064d795fb4f555f00285"},
{file = "matrix_synapse_ldap3-0.2.1-py3-none-any.whl", hash = "sha256:1b3310a60f1d06466f35905a269b6df95747fd1305f2b7fe638f373963b2aa2c"},
]
mccabe = [
{file = "mccabe-0.6.1-py2.py3-none-any.whl", hash = "sha256:ab8a6258860da4b6677da4bd2fe5dc2c659cff31b3ee4f7f5d64e79735b80d42"},


@@ -54,7 +54,7 @@ skip_gitignore = true
[tool.poetry]
name = "matrix-synapse"
version = "1.62.0rc2"
version = "1.63.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"


@@ -44,8 +44,14 @@ usage() {
Usage: $0 [-f] <go test arguments>...
Run the complement test suite on Synapse.
-f Skip rebuilding the docker images, and just use the most recent
'complement-synapse:latest' image
-f, --fast
Skip rebuilding the docker images, and just use the most recent
'complement-synapse:latest' image.
Conflicts with --build-only.
--build-only
Only build the Docker images. Don't actually run Complement.
Conflicts with -f/--fast.
For help on arguments to 'go test', run 'go help testflag'.
EOF
@@ -53,6 +59,7 @@ EOF
# parse our arguments
skip_docker_build=""
skip_complement_run=""
while [ $# -ge 1 ]; do
arg=$1
case "$arg" in
@@ -60,9 +67,12 @@ while [ $# -ge 1 ]; do
usage
exit 1
;;
"-f")
"-f"|"--fast")
skip_docker_build=1
;;
"--build-only")
skip_complement_run=1
;;
*)
# unknown arg: presumably an argument to gotest. break the loop.
break
@@ -106,6 +116,11 @@ if [ -z "$skip_docker_build" ]; then
echo_if_github "::endgroup::"
fi
if [ -n "$skip_complement_run" ]; then
echo "Skipping Complement run as requested."
exit
fi
export COMPLEMENT_BASE_IMAGE=complement-synapse
extra_test_args=()
@@ -145,6 +160,18 @@ else
test_tags="$test_tags,faster_joins"
fi
if [[ -n "$SYNAPSE_TEST_LOG_LEVEL" ]]; then
# Set the log level to what is desired
export PASS_SYNAPSE_LOG_LEVEL="$SYNAPSE_TEST_LOG_LEVEL"
# Allow logging sensitive things (currently SQL queries & parameters).
# (This won't have any effect if we're not logging at DEBUG level overall.)
# Since this is just a test suite, this is fine and won't reveal anyone's
# personal information
export PASS_SYNAPSE_LOG_SENSITIVE=1
fi
# Run the tests!
echo "Images built; running complement"
cd "$COMPLEMENT_DIR"


@@ -166,22 +166,6 @@ IGNORED_TABLES = {
"ui_auth_sessions",
"ui_auth_sessions_credentials",
"ui_auth_sessions_ips",
# Groups/communities is no longer supported.
"group_attestations_remote",
"group_attestations_renewals",
"group_invites",
"group_roles",
"group_room_categories",
"group_rooms",
"group_summary_roles",
"group_summary_room_categories",
"group_summary_rooms",
"group_summary_users",
"group_users",
"groups",
"local_group_membership",
"local_group_updates",
"remote_profile_cache",
}
@@ -418,12 +402,15 @@ class Porter:
self.progress.update(table, table_size) # Mark table as done
return
# We sweep over rowids in two directions: one forwards (rowids 1, 2, 3, ...)
# and another backwards (rowids 0, -1, -2, ...).
forward_select = (
"SELECT rowid, * FROM %s WHERE rowid >= ? ORDER BY rowid LIMIT ?" % (table,)
)
backward_select = (
"SELECT rowid, * FROM %s WHERE rowid <= ? ORDER BY rowid LIMIT ?" % (table,)
"SELECT rowid, * FROM %s WHERE rowid <= ? ORDER BY rowid DESC LIMIT ?"
% (table,)
)
do_forward = [True]
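To make the two-directional paging concrete, here is a standalone `sqlite3` sketch of the corrected queries (the table name and batch size are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (body TEXT)")
# SQLite rowids may be zero or negative, hence the separate backwards sweep.
conn.executemany(
    "INSERT INTO events (rowid, body) VALUES (?, ?)",
    [(i, f"event {i}") for i in (-2, -1, 0, 1, 2, 3)],
)

batch = 2
# Forward sweep: rowids 1, 2, 3, ...
forward = conn.execute(
    "SELECT rowid, body FROM events WHERE rowid >= ? ORDER BY rowid LIMIT ?",
    (1, batch),
).fetchall()
# Backward sweep: rowids 0, -1, -2, ... Without DESC (the bug fixed above),
# each page would start from the lowest rowids rather than walking downwards.
backward = conn.execute(
    "SELECT rowid, body FROM events WHERE rowid <= ? ORDER BY rowid DESC LIMIT ?",
    (0, batch),
).fetchall()
print(forward)   # [(1, 'event 1'), (2, 'event 2')]
print(backward)  # [(0, 'event 0'), (-1, 'event -1')]
```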
@@ -621,6 +608,25 @@ class Porter:
self.postgres_store.db_pool.updates.has_completed_background_updates()
)
@staticmethod
def _is_sqlite_autovacuum_enabled(txn: LoggingTransaction) -> bool:
"""
Returns true if auto_vacuum is enabled in SQLite.
https://www.sqlite.org/pragma.html#pragma_auto_vacuum
Vacuuming changes the rowids on rows in the database.
Auto-vacuuming is therefore dangerous when used in conjunction with this script.
Note that the auto_vacuum setting can't be changed without performing
a VACUUM after trying to change the pragma.
"""
txn.execute("PRAGMA auto_vacuum")
row = txn.fetchone()
assert row is not None, "`PRAGMA auto_vacuum` did not give a row."
(autovacuum_setting,) = row
# 0 means off. 1 means full. 2 means incremental.
return autovacuum_setting != 0
async def run(self) -> None:
"""Ports the SQLite database to a PostgreSQL database.
@@ -637,6 +643,21 @@ class Porter:
allow_outdated_version=True,
)
# For safety, ensure auto_vacuums are disabled.
if await self.sqlite_store.db_pool.runInteraction(
"is_sqlite_autovacuum_enabled", self._is_sqlite_autovacuum_enabled
):
end_error = (
"auto_vacuum is enabled in the SQLite database."
" (This is not the default configuration.)\n"
" This script relies on rowids being consistent and must not"
" be used if the database could be vacuumed between re-runs.\n"
" To disable auto_vacuum, you need to stop Synapse and run the following SQL:\n"
" PRAGMA auto_vacuum=off;\n"
" VACUUM;"
)
return
# Check if all background updates are done, abort if not.
updates_complete = (
await self.sqlite_store.db_pool.updates.has_completed_background_updates()


@@ -297,8 +297,14 @@ class AuthError(SynapseError):
other poorly-defined times.
"""
def __init__(self, code: int, msg: str, errcode: str = Codes.FORBIDDEN):
super().__init__(code, msg, errcode)
def __init__(
self,
code: int,
msg: str,
errcode: str = Codes.FORBIDDEN,
additional_fields: Optional[dict] = None,
):
super().__init__(code, msg, errcode, additional_fields)
class InvalidClientCredentialsError(SynapseError):
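A sketch of how a caller might use the new parameter (illustrative values; the extra field name here is made up for the example):

```python
from synapse.api.errors import AuthError, Codes

# Illustrative: attach extra fields to the JSON body of the error response.
raise AuthError(
    403,
    "You are not allowed to perform this action",
    errcode=Codes.FORBIDDEN,
    additional_fields={"org.example.reason": "quota_exceeded"},
)
```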


@@ -27,6 +27,33 @@ class Ratelimiter:
"""
Ratelimit actions marked by arbitrary keys.
(Note that the source code speaks of "actions" and "burst_count" rather than
"tokens" and a "bucket_size".)
This is a "leaky bucket as a meter". For each key to be tracked there is a bucket
containing some number 0 <= T <= `burst_count` of tokens corresponding to previously
permitted requests for that key. Each bucket starts empty, and gradually leaks
tokens at a rate of `rate_hz`.
Upon an incoming request, we must determine:
- the key that this request falls under (which bucket to inspect), and
- the cost C of this request in tokens.
Then, if there is room in the bucket for C tokens (T + C <= `burst_count`),
the request is permitted and `cost` tokens are added to the bucket.
Otherwise the request is denied, and the bucket continues to hold T tokens.
This means that the limiter enforces an average request frequency of `rate_hz`,
while accumulating a buffer of up to `burst_count` requests which can be consumed
instantaneously.
The tricky bit is the leaking. We do not want to have a periodic process which
leaks every bucket! Instead, we track
- the time point when the bucket was last completely empty, and
- how many tokens have been added to the bucket since then.
Then for each incoming request, we can calculate how many tokens have leaked
since this time point, and use that to decide if we should accept or reject the
request.
Args:
clock: A homeserver clock, for retrieving the current time
rate_hz: The long term number of actions that can be performed in a second.
@@ -41,14 +68,30 @@ class Ratelimiter:
self.burst_count = burst_count
self.store = store
# A ordered dictionary keeping track of actions, when they were last
# performed and how often. Each entry is a mapping from a key of arbitrary type
# to a tuple representing:
# * How many times an action has occurred since a point in time
# * The point in time
# * The rate_hz of this particular entry. This can vary per request
# An ordered dictionary representing the token buckets tracked by this rate
# limiter. Each entry maps a key of arbitrary type to a tuple representing:
# * The number of tokens currently in the bucket,
# * The time point when the bucket was last completely empty, and
# * The rate_hz (leak rate) of this particular bucket.
self.actions: OrderedDict[Hashable, Tuple[float, float, float]] = OrderedDict()
def _get_key(
self, requester: Optional[Requester], key: Optional[Hashable]
) -> Hashable:
"""Use the requester's MXID as a fallback key if no key is provided."""
if key is None:
if not requester:
raise ValueError("Must supply at least one of `requester` or `key`")
key = requester.user.to_string()
return key
def _get_action_counts(
self, key: Hashable, time_now_s: float
) -> Tuple[float, float, float]:
"""Retrieve the action counts, with a fallback representing an empty bucket."""
return self.actions.get(key, (0.0, time_now_s, 0.0))
async def can_do_action(
self,
requester: Optional[Requester],
@@ -88,11 +131,7 @@ class Ratelimiter:
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
if key is None:
if not requester:
raise ValueError("Must supply at least one of `requester` or `key`")
key = requester.user.to_string()
key = self._get_key(requester, key)
if requester:
# Disable rate limiting of users belonging to any AS that is configured
@@ -121,7 +160,7 @@ class Ratelimiter:
self._prune_message_counts(time_now_s)
# Check if there is an existing count entry for this key
action_count, time_start, _ = self.actions.get(key, (0.0, time_now_s, 0.0))
action_count, time_start, _ = self._get_action_counts(key, time_now_s)
# Check whether performing another action is allowed
time_delta = time_now_s - time_start
@@ -164,6 +203,37 @@ class Ratelimiter:
return allowed, time_allowed
def record_action(
self,
requester: Optional[Requester],
key: Optional[Hashable] = None,
n_actions: int = 1,
_time_now_s: Optional[float] = None,
) -> None:
"""Record that an action(s) took place, even if they violate the rate limit.
This is useful for tracking the frequency of events that happen across
federation which we still want to impose local rate limits on. For instance, if
we are alice.com monitoring a particular room, we cannot prevent bob.com
from joining users to that room. However, we can track the number of recent
joins in the room and refuse to serve new joins ourselves if there have been too
many in the room across both homeservers.
Args:
requester: The requester that is doing the action, if any.
key: An arbitrary key used to classify an action. Defaults to the
requester's user ID.
n_actions: The number of times the user wants to do this action. If the user
cannot do all of the actions, the user's action count is not incremented
at all.
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.
"""
key = self._get_key(requester, key)
time_now_s = _time_now_s if _time_now_s is not None else self.clock.time()
action_count, time_start, rate_hz = self._get_action_counts(key, time_now_s)
self.actions[key] = (action_count + n_actions, time_start, rate_hz)
def _prune_message_counts(self, time_now_s: float) -> None:
"""Remove message count entries that have not exceeded their defined
rate_hz limit
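Taken together, the docstring above describes a "leaky bucket as a meter". A self-contained sketch of the same bookkeeping, including the unconditional `record_action`, follows (illustrative names, not Synapse's actual API):

```python
import time
from typing import Dict, Hashable, Tuple


class LeakyBucketMeter:
    """Per-key bucket: (tokens added since last empty, time when last empty)."""

    def __init__(self, rate_hz: float, burst_count: float) -> None:
        self.rate_hz = rate_hz          # long-term leak rate, tokens/second
        self.burst_count = burst_count  # bucket capacity (burst allowance)
        self.buckets: Dict[Hashable, Tuple[float, float]] = {}

    def _current(self, key: Hashable, now: float) -> Tuple[float, float, float]:
        added, empty_at = self.buckets.get(key, (0.0, now))
        # Tokens that have leaked out since the bucket was last empty.
        leaked = (now - empty_at) * self.rate_hz
        fill = added - leaked
        if fill <= 0.0:
            # Fully drained: restart tracking from now.
            return 0.0, now, 0.0
        return added, empty_at, fill

    def can_do_action(self, key: Hashable, cost: float = 1.0) -> bool:
        now = time.monotonic()
        added, empty_at, fill = self._current(key, now)
        if fill + cost > self.burst_count:
            return False  # no room: reject and leave the bucket untouched
        self.buckets[key] = (added + cost, empty_at)
        return True

    def record_action(self, key: Hashable, cost: float = 1.0) -> None:
        # Count the action even when it exceeds the limit (cf. record_action above).
        now = time.monotonic()
        added, empty_at, _ = self._current(key, now)
        self.buckets[key] = (added + cost, empty_at)


limiter = LeakyBucketMeter(rate_hz=0.5, burst_count=5.0)
assert limiter.can_do_action("@alice:example.com")
```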


@@ -84,6 +84,8 @@ class RoomVersion:
# MSC3787: Adds support for a `knock_restricted` join rule, mixing concepts of
# knocks and restricted join rules into the same join condition.
msc3787_knock_restricted_join_rule: bool
# MSC3667: Enforce integer power levels
msc3667_int_only_power_levels: bool
class RoomVersions:
@@ -103,6 +105,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V2 = RoomVersion(
"2",
@@ -120,6 +123,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V3 = RoomVersion(
"3",
@@ -137,6 +141,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V4 = RoomVersion(
"4",
@@ -154,6 +159,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V5 = RoomVersion(
"5",
@@ -171,6 +177,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V6 = RoomVersion(
"6",
@@ -188,6 +195,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
@@ -205,6 +213,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V7 = RoomVersion(
"7",
@@ -222,6 +231,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V8 = RoomVersion(
"8",
@@ -239,6 +249,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
V9 = RoomVersion(
"9",
@@ -256,6 +267,7 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
MSC2716v3 = RoomVersion(
"org.matrix.msc2716v3",
@@ -273,6 +285,7 @@ class RoomVersions:
msc2716_historical=True,
msc2716_redactions=True,
msc3787_knock_restricted_join_rule=False,
msc3667_int_only_power_levels=False,
)
MSC3787 = RoomVersion(
"org.matrix.msc3787",
@@ -290,6 +303,25 @@ class RoomVersions:
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=False,
)
V10 = RoomVersion(
"10",
RoomDisposition.STABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=True,
msc3667_int_only_power_levels=True,
)
@@ -308,6 +340,7 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.V9,
RoomVersions.MSC2716v3,
RoomVersions.MSC3787,
RoomVersions.V10,
)
}
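As a quick illustration (a sketch, not part of the diff), the new version is looked up like any other and its feature flags consulted:

from synapse.api.room_versions import KNOWN_ROOM_VERSIONS

v10 = KNOWN_ROOM_VERSIONS["10"]
assert v10.msc3787_knock_restricted_join_rule  # knock_restricted join rule supported
assert v10.msc3667_int_only_power_levels       # stringy power levels rejected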

View File

@@ -39,6 +39,7 @@ from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.server import HomeServer
from synapse.storage.database import DatabasePool, LoggingDatabaseConnection
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.types import StateMap
from synapse.util import SYNAPSE_VERSION
@@ -60,7 +61,17 @@ class AdminCmdSlavedStore(
BaseSlavedStore,
RoomWorkerStore,
):
pass
def __init__(
self,
database: DatabasePool,
db_conn: LoggingDatabaseConnection,
hs: "HomeServer",
):
super().__init__(database, db_conn, hs)
# Annoyingly `filter_events_for_client` assumes that this exists. We
# should refactor it to take a `Clock` directly.
self.clock = hs.get_clock()
class AdminCmdServer(HomeServer):

View File

@@ -44,7 +44,6 @@ from synapse.app._base import (
register_start,
)
from synapse.config._base import ConfigError, format_config_error
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig
from synapse.federation.transport.server import TransportLayerServer
@@ -202,7 +201,7 @@ class SynapseHomeServer(HomeServer):
}
)
if self.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
if self.config.email.can_verify_email:
from synapse.rest.synapse.client.password_reset import (
PasswordResetSubmitTokenResource,
)

View File

@@ -53,6 +53,18 @@ sent_events_counter = Counter(
"synapse_appservice_api_sent_events", "Number of events sent to the AS", ["service"]
)
sent_ephemeral_counter = Counter(
"synapse_appservice_api_sent_ephemeral",
"Number of ephemeral events sent to the AS",
["service"],
)
sent_todevice_counter = Counter(
"synapse_appservice_api_sent_todevice",
"Number of todevice messages sent to the AS",
["service"],
)
HOUR_IN_MS = 60 * 60 * 1000
@@ -310,6 +322,8 @@ class ApplicationServiceApi(SimpleHttpClient):
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(serialized_events))
sent_ephemeral_counter.labels(service.id).inc(len(ephemeral))
sent_todevice_counter.labels(service.id).inc(len(to_device_messages))
return True
except CodeMessageException as e:
logger.warning(

View File

@@ -319,7 +319,9 @@ class _ServiceQueuer:
rooms_of_interesting_users.update(event.room_id for event in events)
# EDUs
rooms_of_interesting_users.update(
ephemeral["room_id"] for ephemeral in ephemerals
ephemeral["room_id"]
for ephemeral in ephemerals
if ephemeral.get("room_id") is not None
)
# Look up the AS users in those rooms
@@ -329,8 +331,9 @@ class _ServiceQueuer:
)
# Add recipients of to-device messages.
# device_message["user_id"] is the ID of the recipient.
users.update(device_message["user_id"] for device_message in to_device_messages)
users.update(
device_message["to_user_id"] for device_message in to_device_messages
)
# Compute and return the counts / fallback key usage states
otk_counts = await self._store.count_bulk_e2e_one_time_keys_for_as(users)

View File

@@ -18,7 +18,6 @@
import email.utils
import logging
import os
from enum import Enum
from typing import Any
import attr
@@ -131,41 +130,22 @@ class EmailConfig(Config):
self.email_enable_notifs = email_config.get("enable_notifs", False)
self.threepid_behaviour_email = (
# Have Synapse handle the email sending if account_threepid_delegates.email
# is not defined
# msisdn is currently always remote while Synapse does not support any method of
# sending SMS messages
ThreepidBehaviour.REMOTE
if self.root.registration.account_threepid_delegate_email
else ThreepidBehaviour.LOCAL
)
if config.get("trust_identity_server_for_password_resets"):
raise ConfigError(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the configuration manual at docs/usage/configuration/config_documentation.md for "
"details and update your config file."
"is no longer supported. Please remove it from the config file."
)
self.local_threepid_handling_disabled_due_to_email_config = False
if (
self.threepid_behaviour_email == ThreepidBehaviour.LOCAL
and email_config == {}
):
# We cannot warn the user this has happened here
# Instead do so when a user attempts to reset their password
self.local_threepid_handling_disabled_due_to_email_config = True
self.threepid_behaviour_email = ThreepidBehaviour.OFF
# If we have email config settings, assume that we can verify ownership of
# email addresses.
self.can_verify_email = email_config != {}
# Get lifetime of a validation token in milliseconds
self.email_validation_token_lifetime = self.parse_duration(
email_config.get("validation_token_lifetime", "1h")
)
if self.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
if self.can_verify_email:
missing = []
if not self.email_notif_from:
missing.append("email.notif_from")
@@ -356,18 +336,3 @@ class EmailConfig(Config):
"Config option email.invite_client_location must be a http or https URL",
path=("email", "invite_client_location"),
)
class ThreepidBehaviour(Enum):
"""
Enum to define the behaviour of Synapse with regards to when it contacts an identity
server for 3pid registration and password resets
REMOTE = use an external server to send tokens
LOCAL = send tokens ourselves
OFF = disable registration via 3pid and password resets
"""
REMOTE = "remote"
LOCAL = "local"
OFF = "off"

View File

@@ -20,6 +20,13 @@ from synapse.config._base import Config, ConfigError
from synapse.types import JsonDict, RoomAlias, UserID
from synapse.util.stringutils import random_string_with_symbols, strtobool
NO_EMAIL_DELEGATE_ERROR = """\
Delegation of email verification to an identity server is no longer supported. To
continue to allow users to add email addresses to their accounts, and use them for
password resets, configure Synapse with an SMTP server via the `email` setting, and
remove `account_threepid_delegates.email`.
"""
class RegistrationConfig(Config):
section = "registration"
@@ -51,7 +58,9 @@ class RegistrationConfig(Config):
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
account_threepid_delegates = config.get("account_threepid_delegates") or {}
self.account_threepid_delegate_email = account_threepid_delegates.get("email")
if "email" in account_threepid_delegates:
raise ConfigError(NO_EMAIL_DELEGATE_ERROR)
self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
self.default_identity_server = config.get("default_identity_server")
self.allow_guest_access = config.get("allow_guest_access", False)

View File

@@ -740,6 +740,32 @@ def _check_power_levels(
except Exception:
raise SynapseError(400, "Not a valid power level: %s" % (v,))
# Reject events with stringy power levels if required by room version
if (
event.type == EventTypes.PowerLevels
and room_version_obj.msc3667_int_only_power_levels
):
for k, v in event.content.items():
if k in {
"users_default",
"events_default",
"state_default",
"ban",
"redact",
"kick",
"invite",
}:
if not isinstance(v, int):
raise SynapseError(400, f"{v!r} must be an integer.")
if k in {"events", "notifications", "users"}:
if not isinstance(v, dict) or not all(
isinstance(sub_value, int) for sub_value in v.values()
):
raise SynapseError(
400,
f"{v!r} must be a dict wherein all the values are integers.",
)
key = (event.type, event.state_key)
current_state = auth_events.get(key)
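To illustrate what the new check enforces (a sketch, not part of the diff; the contents are illustrative), in a room version with `msc3667_int_only_power_levels` set:

# Accepted: all power levels are integers.
valid_content = {"ban": 50, "users": {"@alice:example.com": 100}}

# Rejected with a 400 SynapseError: stringy power levels.
invalid_content = {"ban": "50", "users": {"@alice:example.com": "100"}}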

View File

@@ -120,7 +120,7 @@ class EventBuilder:
The signed and hashed event.
"""
if auth_event_ids is None:
state_ids = await self._state.get_current_state_ids(
state_ids = await self._state.compute_state_after_events(
self.room_id, prev_event_ids
)
auth_event_ids = self._event_auth_handler.compute_auth_events(

View File

@@ -21,7 +21,6 @@ from typing import (
Awaitable,
Callable,
Collection,
Dict,
List,
Optional,
Tuple,
@@ -32,10 +31,11 @@ from typing import (
from typing_extensions import Literal
import synapse
from synapse.api.errors import Codes
from synapse.rest.media.v1._base import FileInfo
from synapse.rest.media.v1.media_storage import ReadableFileWrapper
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.types import RoomAlias, UserProfile
from synapse.types import JsonDict, RoomAlias, UserProfile
from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
from synapse.util.metrics import Measure
@@ -50,12 +50,12 @@ CHECK_EVENT_FOR_SPAM_CALLBACK = Callable[
Awaitable[
Union[
str,
"synapse.api.errors.Codes",
Codes,
# Highly experimental, not officially part of the spamchecker API, may
# disappear without warning depending on the results of ongoing
# experiments.
# Use this to return additional information as part of an error.
Tuple["synapse.api.errors.Codes", Dict],
Tuple[Codes, JsonDict],
# Deprecated
bool,
]
@@ -70,7 +70,12 @@ USER_MAY_JOIN_ROOM_CALLBACK = Callable[
Awaitable[
Union[
Literal["NOT_SPAM"],
"synapse.api.errors.Codes",
Codes,
# Highly experimental, not officially part of the spamchecker API, may
# disappear without warning depending on the results of ongoing
# experiments.
# Use this to return additional information as part of an error.
Tuple[Codes, JsonDict],
# Deprecated
bool,
]
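For module authors, a hedged sketch of a callback using the experimental tuple return (the room ID and reason are illustrative, not part of the diff):

from synapse.api.errors import Codes

async def user_may_join_room(user_id: str, room_id: str, is_invited: bool):
    if room_id == "!quarantined:example.com" and not is_invited:
        # Experimental tuple form: an error code plus additional fields to
        # return to the client alongside the error.
        return Codes.FORBIDDEN, {"reason": "joins to this room are restricted"}
    return "NOT_SPAM"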
@@ -81,7 +86,12 @@ USER_MAY_INVITE_CALLBACK = Callable[
Awaitable[
Union[
Literal["NOT_SPAM"],
"synapse.api.errors.Codes",
Codes,
# Highly experimental, not officially part of the spamchecker API, may
# disappear without warning depending on the results of ongoing
# experiments.
# Use this to return additional information as part of an error.
Tuple[Codes, JsonDict],
# Deprecated
bool,
]
@@ -92,7 +102,12 @@ USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[
Awaitable[
Union[
Literal["NOT_SPAM"],
"synapse.api.errors.Codes",
Codes,
# Highly experimental, not officially part of the spamchecker API, may
# disappear without warning depending on the results of ongoing
# experiments.
# Use this to return additional information as part of an error.
Tuple[Codes, JsonDict],
# Deprecated
bool,
]
@@ -103,7 +118,12 @@ USER_MAY_CREATE_ROOM_CALLBACK = Callable[
Awaitable[
Union[
Literal["NOT_SPAM"],
"synapse.api.errors.Codes",
Codes,
# Highly experimental, not officially part of the spamchecker API, may
# disappear without warning depending on the results of ongoing
# experiments.
# Use this to return additional information as part of an error.
Tuple[Codes, JsonDict],
# Deprecated
bool,
]
@@ -114,7 +134,12 @@ USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[
Awaitable[
Union[
Literal["NOT_SPAM"],
"synapse.api.errors.Codes",
Codes,
# Highly experimental, not officially part of the spamchecker API, may
# disappear without warning depending on the results of ongoing
# experiments.
# Use this to return additional information as part of an error.
Tuple[Codes, JsonDict],
# Deprecated
bool,
]
@@ -125,7 +150,12 @@ USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[
Awaitable[
Union[
Literal["NOT_SPAM"],
"synapse.api.errors.Codes",
Codes,
# Highly experimental, not officially part of the spamchecker API, may
# disappear without warning depending on the results of ongoing
# experiments.
# Use this to return additional information as part of an error.
Tuple[Codes, JsonDict],
# Deprecated
bool,
]
@@ -154,7 +184,12 @@ CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK = Callable[
Awaitable[
Union[
Literal["NOT_SPAM"],
"synapse.api.errors.Codes",
Codes,
# Highly experimental, not officially part of the spamchecker API, may
# disappear without warning depending on the results of ongoing
# experiments.
# Use this to return additional information as part of an error.
Tuple[Codes, JsonDict],
# Deprecated
bool,
]
@@ -345,7 +380,7 @@ class SpamChecker:
async def check_event_for_spam(
self, event: "synapse.events.EventBase"
) -> Union[Tuple["synapse.api.errors.Codes", Dict], str]:
) -> Union[Tuple[Codes, JsonDict], str]:
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
@@ -376,7 +411,16 @@ class SpamChecker:
elif res is True:
# This spam-checker rejects the event with deprecated
# return value `True`
return (synapse.api.errors.Codes.FORBIDDEN, {})
return synapse.api.errors.Codes.FORBIDDEN, {}
elif (
isinstance(res, tuple)
and len(res) == 2
and isinstance(res[0], synapse.api.errors.Codes)
and isinstance(res[1], dict)
):
return res
elif isinstance(res, synapse.api.errors.Codes):
return res, {}
elif not isinstance(res, str):
# mypy complains that we can't reach this code because of the
# return type in CHECK_EVENT_FOR_SPAM_CALLBACK, but we don't know
@@ -422,7 +466,7 @@ class SpamChecker:
async def user_may_join_room(
self, user_id: str, room_id: str, is_invited: bool
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
) -> Union[Tuple[Codes, JsonDict], Literal["NOT_SPAM"]]:
"""Checks if a given users is allowed to join a room.
Not called when a user creates a room.
@@ -432,7 +476,7 @@ class SpamChecker:
is_invited: Whether the user is invited into the room
Returns:
NOT_SPAM if the operation is permitted, Codes otherwise.
NOT_SPAM if the operation is permitted, a (Codes, dict) tuple otherwise.
"""
for callback in self._user_may_join_room_callbacks:
with Measure(
@@ -443,21 +487,28 @@ class SpamChecker:
if res is True or res is self.NOT_SPAM:
continue
elif res is False:
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
elif isinstance(res, synapse.api.errors.Codes):
return res, {}
elif (
isinstance(res, tuple)
and len(res) == 2
and isinstance(res[0], synapse.api.errors.Codes)
and isinstance(res[1], dict)
):
return res
else:
logger.warning(
"Module returned invalid value, rejecting join as spam"
)
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
# No spam-checker has rejected the request, let it pass.
return self.NOT_SPAM
async def user_may_invite(
self, inviter_userid: str, invitee_userid: str, room_id: str
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
"""Checks if a given user may send an invite
Args:
@@ -479,21 +530,28 @@ class SpamChecker:
if res is True or res is self.NOT_SPAM:
continue
elif res is False:
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
elif isinstance(res, synapse.api.errors.Codes):
return res, {}
elif (
isinstance(res, tuple)
and len(res) == 2
and isinstance(res[0], synapse.api.errors.Codes)
and isinstance(res[1], dict)
):
return res
else:
logger.warning(
"Module returned invalid value, rejecting invite as spam"
)
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
# No spam-checker has rejected the request, let it pass.
return self.NOT_SPAM
async def user_may_send_3pid_invite(
self, inviter_userid: str, medium: str, address: str, room_id: str
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
"""Checks if a given user may invite a given threepid into the room
Note that if the threepid is already associated with a Matrix user ID, Synapse
@@ -519,20 +577,27 @@ class SpamChecker:
if res is True or res is self.NOT_SPAM:
continue
elif res is False:
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
elif isinstance(res, synapse.api.errors.Codes):
return res, {}
elif (
isinstance(res, tuple)
and len(res) == 2
and isinstance(res[0], synapse.api.errors.Codes)
and isinstance(res[1], dict)
):
return res
else:
logger.warning(
"Module returned invalid value, rejecting 3pid invite as spam"
)
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
return self.NOT_SPAM
async def user_may_create_room(
self, userid: str
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
"""Checks if a given user may create a room
Args:
@@ -546,20 +611,27 @@ class SpamChecker:
if res is True or res is self.NOT_SPAM:
continue
elif res is False:
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
elif isinstance(res, synapse.api.errors.Codes):
return res, {}
elif (
isinstance(res, tuple)
and len(res) == 2
and isinstance(res[0], synapse.api.errors.Codes)
and isinstance(res[1], dict)
):
return res
else:
logger.warning(
"Module returned invalid value, rejecting room creation as spam"
)
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
return self.NOT_SPAM
async def user_may_create_room_alias(
self, userid: str, room_alias: RoomAlias
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
"""Checks if a given user may create a room alias
Args:
@@ -575,20 +647,27 @@ class SpamChecker:
if res is True or res is self.NOT_SPAM:
continue
elif res is False:
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
elif isinstance(res, synapse.api.errors.Codes):
return res, {}
elif (
isinstance(res, tuple)
and len(res) == 2
and isinstance(res[0], synapse.api.errors.Codes)
and isinstance(res[1], dict)
):
return res
else:
logger.warning(
"Module returned invalid value, rejecting room create as spam"
)
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
return self.NOT_SPAM
async def user_may_publish_room(
self, userid: str, room_id: str
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
"""Checks if a given user may publish a room to the directory
Args:
@@ -603,14 +682,21 @@ class SpamChecker:
if res is True or res is self.NOT_SPAM:
continue
elif res is False:
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
elif isinstance(res, synapse.api.errors.Codes):
return res, {}
elif (
isinstance(res, tuple)
and len(res) == 2
and isinstance(res[0], synapse.api.errors.Codes)
and isinstance(res[1], dict)
):
return res
else:
logger.warning(
"Module returned invalid value, rejecting room publication as spam"
)
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
return self.NOT_SPAM
@@ -678,7 +764,7 @@ class SpamChecker:
async def check_media_file_for_spam(
self, file_wrapper: ReadableFileWrapper, file_info: FileInfo
) -> Union["synapse.api.errors.Codes", Literal["NOT_SPAM"]]:
) -> Union[Tuple[Codes, dict], Literal["NOT_SPAM"]]:
"""Checks if a piece of newly uploaded media should be blocked.
This will be called for local uploads, downloads of remote media, each
@@ -715,13 +801,20 @@ class SpamChecker:
if res is False or res is self.NOT_SPAM:
continue
elif res is True:
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
elif isinstance(res, synapse.api.errors.Codes):
return res, {}
elif (
isinstance(res, tuple)
and len(res) == 2
and isinstance(res[0], synapse.api.errors.Codes)
and isinstance(res[1], dict)
):
return res
else:
logger.warning(
"Module returned invalid value, rejecting media file as spam"
)
return synapse.api.errors.Codes.FORBIDDEN
return synapse.api.errors.Codes.FORBIDDEN, {}
return self.NOT_SPAM
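All of the checks above apply the same normalisation to a rejecting module return value; summarised as a standalone sketch (the helper is illustrative, not part of the diff):

from typing import Any, Tuple

from synapse.api.errors import Codes

def normalise_rejection(res: Any) -> Tuple[Codes, dict]:
    # Sketch of how SpamChecker normalises a rejecting return value.
    if res is False:
        # Deprecated boolean rejection (for the user_may_* callbacks).
        return Codes.FORBIDDEN, {}
    if isinstance(res, Codes):
        # A bare error code carries no additional fields.
        return res, {}
    if isinstance(res, tuple) and len(res) == 2 and isinstance(res[1], dict):
        # The experimental (Codes, dict) form is passed through unchanged.
        return res
    # Anything else is invalid and treated as a rejection.
    return Codes.FORBIDDEN, {}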

View File

@@ -464,14 +464,7 @@ class ThirdPartyEventRules:
Returns:
A dict mapping (event type, state key) to state event.
"""
state_ids = await self._storage_controllers.state.get_current_state_ids(room_id)
room_state_events = await self.store.get_events(state_ids.values())
state_events = {}
for key, event_id in state_ids.items():
state_events[key] = room_state_events[event_id]
return state_events
return await self._storage_controllers.state.get_current_state(room_id)
async def on_profile_update(
self, user_id: str, new_profile: ProfileInfo, by_admin: bool, deactivation: bool

View File

@@ -67,6 +67,7 @@ from synapse.replication.http.federation import (
ReplicationFederationSendEduRestServlet,
ReplicationGetQueryRestServlet,
)
from synapse.storage.databases.main.events import PartialStateConflictError
from synapse.storage.databases.main.lock import Lock
from synapse.types import JsonDict, StateMap, get_domain_from_id
from synapse.util import json_decoder, unwrapFirstError
@@ -882,9 +883,20 @@ class FederationServer(FederationBase):
logger.warning("%s", errmsg)
raise SynapseError(403, errmsg, Codes.FORBIDDEN)
return await self._federation_event_handler.on_send_membership_event(
origin, event
)
try:
return await self._federation_event_handler.on_send_membership_event(
origin, event
)
except PartialStateConflictError:
# The room was un-partial stated while we were persisting the event.
# Try once more, with full state this time.
logger.info(
"Room %s was un-partial stated during `on_send_membership_event`, trying again.",
room_id,
)
return await self._federation_event_handler.on_send_membership_event(
origin, event
)
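This introduces a retry-exactly-once idiom, reused below for the PDU path; as a standalone sketch (the helper name is illustrative, not part of the diff):

from synapse.storage.databases.main.events import PartialStateConflictError

async def retry_once_on_unpartial_state(func, *args):
    # Un-partial stating happens at most once per room, so a single retry
    # with full state cannot hit this error a second time.
    try:
        return await func(*args)
    except PartialStateConflictError:
        return await func(*args)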
async def on_event_auth(
self, origin: str, room_id: str, event_id: str

View File

@@ -351,7 +351,11 @@ class FederationSender(AbstractFederationSender):
self._is_processing = True
while True:
last_token = await self.store.get_federation_out_pos("events")
next_token, events = await self.store.get_all_new_events_stream(
(
next_token,
events,
event_to_received_ts,
) = await self.store.get_all_new_events_stream(
last_token, self._last_poked_id, limit=100
)
@@ -476,7 +480,7 @@ class FederationSender(AbstractFederationSender):
await self._send_pdu(event, sharded_destinations)
now = self.clock.time_msec()
ts = await self.store.get_received_ts(event.event_id)
ts = event_to_received_ts[event.event_id]
assert ts is not None
synapse.metrics.event_processing_lag_by_event.labels(
"federation_sender"
@@ -509,7 +513,7 @@ class FederationSender(AbstractFederationSender):
if events:
now = self.clock.time_msec()
ts = await self.store.get_received_ts(events[-1].event_id)
ts = event_to_received_ts[events[-1].event_id]
assert ts is not None
synapse.metrics.event_processing_lag.labels(

View File

@@ -104,14 +104,15 @@ class ApplicationServicesHandler:
with Measure(self.clock, "notify_interested_services"):
self.is_processing = True
try:
limit = 100
upper_bound = -1
while upper_bound < self.current_max:
last_token = await self.store.get_appservice_last_pos()
(
upper_bound,
events,
) = await self.store.get_new_events_for_appservice(
self.current_max, limit
event_to_received_ts,
) = await self.store.get_all_new_events_stream(
last_token, self.current_max, limit=100, get_prev_content=True
)
events_by_room: Dict[str, List[EventBase]] = {}
@@ -150,7 +151,7 @@ class ApplicationServicesHandler:
)
now = self.clock.time_msec()
ts = await self.store.get_received_ts(event.event_id)
ts = event_to_received_ts[event.event_id]
assert ts is not None
synapse.metrics.event_processing_lag_by_event.labels(
@@ -187,7 +188,7 @@ class ApplicationServicesHandler:
if events:
now = self.clock.time_msec()
ts = await self.store.get_received_ts(events[-1].event_id)
ts = event_to_received_ts[events[-1].event_id]
assert ts is not None
synapse.metrics.event_processing_lag.labels(

View File

@@ -149,7 +149,8 @@ class DirectoryHandler:
raise AuthError(
403,
"This user is not permitted to create this alias",
spam_check,
errcode=spam_check[0],
additional_fields=spam_check[1],
)
if not self.config.roomdirectory.is_alias_creation_allowed(
@@ -441,7 +442,8 @@ class DirectoryHandler:
raise AuthError(
403,
"This user is not permitted to publish rooms to the room list",
spam_check,
errcode=spam_check[0],
additional_fields=spam_check[1],
)
if requester.is_guest:

View File

@@ -45,6 +45,7 @@ from synapse.api.errors import (
FederationDeniedError,
FederationError,
HttpResponseException,
LimitExceededError,
NotFoundError,
RequestSendFailed,
SynapseError,
@@ -64,6 +65,7 @@ from synapse.replication.http.federation import (
ReplicationCleanRoomRestServlet,
ReplicationStoreRoomOnOutlierMembershipRestServlet,
)
from synapse.storage.databases.main.events import PartialStateConflictError
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
from synapse.storage.state import StateFilter
from synapse.types import JsonDict, StateMap, get_domain_from_id
@@ -549,15 +551,29 @@ class FederationHandler:
# https://github.com/matrix-org/synapse/issues/12998
await self.store.store_partial_state_room(room_id, ret.servers_in_room)
max_stream_id = await self._federation_event_handler.process_remote_join(
origin,
room_id,
auth_chain,
state,
event,
room_version_obj,
partial_state=ret.partial_state,
)
try:
max_stream_id = (
await self._federation_event_handler.process_remote_join(
origin,
room_id,
auth_chain,
state,
event,
room_version_obj,
partial_state=ret.partial_state,
)
)
except PartialStateConflictError as e:
# The homeserver was already in the room and it is no longer partial
# stated. We ought to be doing a local join instead. Turn the error into
# a 429, as a hint to the client to try again.
# TODO(faster_joins): `_should_perform_remote_join` suggests that we may
# do a remote join for restricted rooms even if we have full state.
logger.error(
"Room %s was un-partial stated while processing remote join.",
room_id,
)
raise LimitExceededError(msg=e.msg, errcode=e.errcode, retry_after_ms=0)
if ret.partial_state:
# Kick off the process of asynchronously fetching the state for this
@@ -828,7 +844,8 @@ class FederationHandler:
raise SynapseError(
403,
"This user is not permitted to send invites to this server/user",
spam_check,
errcode=spam_check[0],
additional_fields=spam_check[1],
)
membership = event.content.get("membership")
@@ -1543,14 +1560,9 @@ class FederationHandler:
# all the events are updated, so we can update current state and
# clear the lazy-loading flag.
logger.info("Updating current state for %s", room_id)
# TODO(faster_joins): support workers
# TODO(faster_joins): notify workers in notify_room_un_partial_stated
# https://github.com/matrix-org/synapse/issues/12994
assert (
self._storage_controllers.persistence is not None
), "worker-mode deployments not currently supported here"
await self._storage_controllers.persistence.update_current_state(
room_id
)
await self.state_handler.update_current_state(room_id)
logger.info("Clearing partial-state flag for %s", room_id)
success = await self.store.clear_partial_state_room(room_id)
@@ -1567,11 +1579,6 @@ class FederationHandler:
# we raced against more events arriving with partial state. Go round
# the loop again. We've already logged a warning, so no need for more.
# TODO(faster_joins): there is still a race here, whereby incoming events which raced
# with us will fail to be persisted after the call to `clear_partial_state_room` due to
# having partial state.
# https://github.com/matrix-org/synapse/issues/12988
#
continue
events = await self.store.get_events_as_list(

View File

@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import itertools
import logging
from http import HTTPStatus
@@ -64,6 +65,7 @@ from synapse.replication.http.federation import (
ReplicationFederationSendEventsRestServlet,
)
from synapse.state import StateResolutionStore
from synapse.storage.databases.main.events import PartialStateConflictError
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
from synapse.storage.state import StateFilter
from synapse.types import (
@@ -275,7 +277,16 @@ class FederationEventHandler:
affected=pdu.event_id,
)
await self._process_received_pdu(origin, pdu, state_ids=None)
try:
await self._process_received_pdu(origin, pdu, state_ids=None)
except PartialStateConflictError:
# The room was un-partial stated while we were processing the PDU.
# Try once more, with full state this time.
logger.info(
"Room %s was un-partial stated while processing the PDU, trying again.",
room_id,
)
await self._process_received_pdu(origin, pdu, state_ids=None)
async def on_send_membership_event(
self, origin: str, event: EventBase
@@ -306,6 +317,9 @@ class FederationEventHandler:
Raises:
SynapseError if the event is not accepted into the room
PartialStateConflictError if the room was un-partial stated in between
computing the state at the event and persisting it. The caller should
retry exactly once in this case.
"""
logger.debug(
"on_send_membership_event: Got event: %s, signatures: %s",
@@ -334,7 +348,7 @@ class FederationEventHandler:
event.internal_metadata.send_on_behalf_of = origin
context = await self._state_handler.compute_event_context(event)
context = await self._check_event_auth(origin, event, context)
await self._check_event_auth(origin, event, context)
if context.rejected:
raise SynapseError(
403, f"{event.membership} event was rejected", Codes.FORBIDDEN
@@ -423,6 +437,8 @@ class FederationEventHandler:
Raises:
SynapseError if the response is in some way invalid.
PartialStateConflictError if the homeserver is already in the room and it
has been un-partial stated.
"""
create_event = None
for e in state:
@@ -470,7 +486,7 @@ class FederationEventHandler:
partial_state=partial_state,
)
context = await self._check_event_auth(origin, event, context)
await self._check_event_auth(origin, event, context)
if context.rejected:
raise SynapseError(400, "Join event was rejected")
@@ -1084,10 +1100,14 @@ class FederationEventHandler:
state_ids: Normally None, but if we are handling a gap in the graph
(ie, we are missing one or more prev_events), the resolved state at the
event
event. Must not be partial state.
backfilled: True if this is part of a historical batch of events (inhibits
notification to clients, and validation of device keys.)
Raises:
PartialStateConflictError: if the room was un-partial stated in between
computing the state at the event and persisting it. The caller should retry
exactly once in this case. Will never be raised if `state_ids` is provided.
"""
logger.debug("Processing event: %s", event)
assert not event.internal_metadata.outlier
@@ -1097,11 +1117,7 @@ class FederationEventHandler:
state_ids_before_event=state_ids,
)
try:
context = await self._check_event_auth(
origin,
event,
context,
)
await self._check_event_auth(origin, event, context)
except AuthError as e:
# This happens only if we couldn't find the auth events. We'll already have
# logged a warning, so now we just convert to a FederationError.
@@ -1476,11 +1492,8 @@ class FederationEventHandler:
)
async def _check_event_auth(
self,
origin: str,
event: EventBase,
context: EventContext,
) -> EventContext:
self, origin: str, event: EventBase, context: EventContext
) -> None:
"""
Checks whether an event should be rejected (for failing auth checks).
@@ -1490,9 +1503,6 @@ class FederationEventHandler:
context:
The event context.
Returns:
The updated context object.
Raises:
AuthError if we were unable to find copies of the event's auth events.
(Most other failures just cause us to set `context.rejected`.)
@@ -1507,7 +1517,7 @@ class FederationEventHandler:
logger.warning("While validating received event %r: %s", event, e)
# TODO: use a different rejected reason here?
context.rejected = RejectedReason.AUTH_ERROR
return context
return
# next, check that we have all of the event's auth events.
#
@@ -1519,6 +1529,9 @@ class FederationEventHandler:
)
# ... and check that the event passes auth at those auth events.
# https://spec.matrix.org/v1.3/server-server-api/#checks-performed-on-receipt-of-a-pdu:
# 4. Passes authorization rules based on the event's auth events,
# otherwise it is rejected.
try:
await check_state_independent_auth_rules(self._store, event)
check_state_dependent_auth_rules(event, claimed_auth_events)
@@ -1527,55 +1540,90 @@ class FederationEventHandler:
"While checking auth of %r against auth_events: %s", event, e
)
context.rejected = RejectedReason.AUTH_ERROR
return context
return
# now check auth against what we think the auth events *should* be.
event_types = event_auth.auth_types_for_event(event.room_version, event)
prev_state_ids = await context.get_prev_state_ids(
StateFilter.from_types(event_types)
)
# now check the auth rules pass against the room state before the event
# https://spec.matrix.org/v1.3/server-server-api/#checks-performed-on-receipt-of-a-pdu:
# 5. Passes authorization rules based on the state before the event,
# otherwise it is rejected.
#
# ... however, if we only have partial state for the room, then there is a good
# chance that we'll be missing some of the state needed to auth the new event.
# So, we state-resolve the auth events that we are given against the state that
# we know about, which ensures things like bans are applied. (Note that we'll
# already have checked we have all the auth events, in
# _load_or_fetch_auth_events_for_event above)
if context.partial_state:
room_version = await self._store.get_room_version_id(event.room_id)
auth_events_ids = self._event_auth_handler.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events_x = await self._store.get_events(auth_events_ids)
calculated_auth_event_map = {
(e.type, e.state_key): e for e in auth_events_x.values()
}
local_state_id_map = await context.get_prev_state_ids()
claimed_auth_events_id_map = {
(ev.type, ev.state_key): ev.event_id for ev in claimed_auth_events
}
try:
updated_auth_events = await self._update_auth_events_for_auth(
event,
calculated_auth_event_map=calculated_auth_event_map,
state_for_auth_id_map = (
await self._state_resolution_handler.resolve_events_with_store(
event.room_id,
room_version,
[local_state_id_map, claimed_auth_events_id_map],
event_map=None,
state_res_store=StateResolutionStore(self._store),
)
)
except Exception:
# We don't really mind if the above fails, so lets not fail
# processing if it does. However, it really shouldn't fail so
# let's still log as an exception since we'll still want to fix
# any bugs.
logger.exception(
"Failed to double check auth events for %s with remote. "
"Ignoring failure and continuing processing of event.",
event.event_id,
)
updated_auth_events = None
if updated_auth_events:
context = await self._update_context_for_auth_events(
event, context, updated_auth_events
)
auth_events_for_auth = updated_auth_events
else:
auth_events_for_auth = calculated_auth_event_map
event_types = event_auth.auth_types_for_event(event.room_version, event)
state_for_auth_id_map = await context.get_prev_state_ids(
StateFilter.from_types(event_types)
)
calculated_auth_event_ids = self._event_auth_handler.compute_auth_events(
event, state_for_auth_id_map, for_verification=True
)
# if those are the same, we're done here.
if collections.Counter(event.auth_event_ids()) == collections.Counter(
calculated_auth_event_ids
):
return
# otherwise, re-run the auth checks based on what we calculated.
calculated_auth_events = await self._store.get_events_as_list(
calculated_auth_event_ids
)
# log the differences
claimed_auth_event_map = {(e.type, e.state_key): e for e in claimed_auth_events}
calculated_auth_event_map = {
(e.type, e.state_key): e for e in calculated_auth_events
}
logger.info(
"event's auth_events are different to our calculated auth_events. "
"Claimed but not calculated: %s. Calculated but not claimed: %s",
[
ev
for k, ev in claimed_auth_event_map.items()
if k not in calculated_auth_event_map
or calculated_auth_event_map[k].event_id != ev.event_id
],
[
ev
for k, ev in calculated_auth_event_map.items()
if k not in claimed_auth_event_map
or claimed_auth_event_map[k].event_id != ev.event_id
],
)
try:
check_state_dependent_auth_rules(event, auth_events_for_auth.values())
check_state_dependent_auth_rules(event, calculated_auth_events)
except AuthError as e:
logger.warning("Failed auth resolution for %r because %s", event, e)
logger.warning(
"While checking auth of %r against room state before the event: %s",
event,
e,
)
context.rejected = RejectedReason.AUTH_ERROR
return context
async def _maybe_kick_guest_users(self, event: EventBase) -> None:
if event.type != EventTypes.GuestAccess:
return
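The `collections.Counter` comparison in `_check_event_auth` above treats the two auth-event ID lists as multisets: order-insensitive, but sensitive to multiplicity. In isolation (a sketch; the event IDs are illustrative):

import collections

claimed = ["$create", "$power_levels", "$member"]
calculated = ["$member", "$create", "$power_levels"]

# Equal as multisets even though the orderings differ.
assert collections.Counter(claimed) == collections.Counter(calculated)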
@@ -1685,93 +1733,6 @@ class FederationEventHandler:
soft_failed_event_counter.inc()
event.internal_metadata.soft_failed = True
async def _update_auth_events_for_auth(
self,
event: EventBase,
calculated_auth_event_map: StateMap[EventBase],
) -> Optional[StateMap[EventBase]]:
"""Helper for _check_event_auth. See there for docs.
Checks whether a given event has the expected auth events. If it
doesn't then we talk to the remote server to compare state to see if
we can come to a consensus (e.g. if one server missed some valid
state).
This attempts to resolve any potential divergence of state between
servers, but is not essential and so failures should not block further
processing of the event.
Args:
event:
calculated_auth_event_map:
Our calculated auth_events based on the state of the room
at the event's position in the DAG.
Returns:
updated auth event map, or None if no changes are needed.
"""
assert not event.internal_metadata.outlier
# check for events which are in the event's claimed auth_events, but not
# in our calculated event map.
event_auth_events = set(event.auth_event_ids())
different_auth = event_auth_events.difference(
e.event_id for e in calculated_auth_event_map.values()
)
if not different_auth:
return None
logger.info(
"auth_events refers to events which are not in our calculated auth "
"chain: %s",
different_auth,
)
# XXX: currently this checks for redactions but I'm not convinced that is
# necessary?
different_events = await self._store.get_events_as_list(different_auth)
# double-check they're all in the same room - we should already have checked
# this but it doesn't hurt to check again.
for d in different_events:
assert (
d.room_id == event.room_id
), f"Event {event.event_id} refers to auth_event {d.event_id} which is in a different room"
# now we state-resolve between our own idea of the auth events, and the remote's
# idea of them.
local_state = calculated_auth_event_map.values()
remote_auth_events = dict(calculated_auth_event_map)
remote_auth_events.update({(d.type, d.state_key): d for d in different_events})
remote_state = remote_auth_events.values()
room_version = await self._store.get_room_version_id(event.room_id)
new_state = await self._state_handler.resolve_events(
room_version, (local_state, remote_state), event
)
different_state = {
(d.type, d.state_key): d
for d in new_state.values()
if calculated_auth_event_map.get((d.type, d.state_key)) != d
}
if not different_state:
logger.info("State res returned no new state")
return None
logger.info(
"After state res: updating auth_events with new state %s",
different_state.values(),
)
# take a copy of calculated_auth_event_map before we modify it.
auth_events = dict(calculated_auth_event_map)
auth_events.update(different_state)
return auth_events
async def _load_or_fetch_auth_events_for_event(
self, destination: str, event: EventBase
) -> Collection[EventBase]:
@@ -1869,61 +1830,6 @@ class FederationEventHandler:
await self._auth_and_persist_outliers(room_id, remote_auth_events)
async def _update_context_for_auth_events(
self, event: EventBase, context: EventContext, auth_events: StateMap[EventBase]
) -> EventContext:
"""Update the state_ids in an event context after auth event resolution,
storing the changes as a new state group.
Args:
event: The event we're handling the context for
context: initial event context
auth_events: Events to update in the event context.
Returns:
new event context
"""
# exclude the state key of the new event from the current_state in the context.
if event.is_state():
event_key: Optional[Tuple[str, str]] = (event.type, event.state_key)
else:
event_key = None
state_updates = {
k: a.event_id for k, a in auth_events.items() if k != event_key
}
current_state_ids = await context.get_current_state_ids()
current_state_ids = dict(current_state_ids) # type: ignore
current_state_ids.update(state_updates)
prev_state_ids = await context.get_prev_state_ids()
prev_state_ids = dict(prev_state_ids)
prev_state_ids.update({k: a.event_id for k, a in auth_events.items()})
# create a new state group as a delta from the existing one.
prev_group = context.state_group
state_group = await self._state_storage_controller.store_state_group(
event.event_id,
event.room_id,
prev_group=prev_group,
delta_ids=state_updates,
current_state_ids=current_state_ids,
)
return EventContext.with_state(
storage=self._storage_controllers,
state_group=state_group,
state_group_before_event=context.state_group_before_event,
state_delta_due_to_event=state_updates,
prev_group=prev_group,
delta_ids=state_updates,
partial_state=context.partial_state,
)
async def _run_push_actions_and_persist_event(
self, event: EventBase, context: EventContext, backfilled: bool = False
) -> None:
@@ -1933,6 +1839,9 @@ class FederationEventHandler:
event: The event itself.
context: The event context.
backfilled: True if the event was backfilled.
Raises:
PartialStateConflictError: if attempting to persist a partial state event in
a room that has been un-partial stated.
"""
# this method should not be called on outliers (those code paths call
# persist_events_and_notify directly.)
@@ -1985,6 +1894,10 @@ class FederationEventHandler:
Returns:
The stream ID after which all events have been persisted.
Raises:
PartialStateConflictError: if attempting to persist a partial state event in
a room that has been un-partial stated.
"""
if not event_and_contexts:
return self._store.get_room_max_stream_ordering()
@@ -1993,14 +1906,19 @@ class FederationEventHandler:
if instance != self._instance_name:
# Limit the number of events sent over replication. We choose 200
# here as that is what we default to in `max_request_body_size(..)`
for batch in batch_iter(event_and_contexts, 200):
result = await self._send_events(
instance_name=instance,
store=self._store,
room_id=room_id,
event_and_contexts=batch,
backfilled=backfilled,
)
try:
for batch in batch_iter(event_and_contexts, 200):
result = await self._send_events(
instance_name=instance,
store=self._store,
room_id=room_id,
event_and_contexts=batch,
backfilled=backfilled,
)
except SynapseError as e:
if e.code == HTTPStatus.CONFLICT:
raise PartialStateConflictError()
raise
return result["max_stream_id"]
else:
assert self._storage_controllers.persistence

View File

@@ -26,7 +26,6 @@ from synapse.api.errors import (
SynapseError,
)
from synapse.api.ratelimiting import Ratelimiter
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.http import RequestTimedOutError
from synapse.http.client import SimpleHttpClient
from synapse.http.site import SynapseRequest
@@ -163,8 +162,7 @@ class IdentityHandler:
sid: str,
mxid: str,
id_server: str,
id_access_token: Optional[str] = None,
use_v2: bool = True,
id_access_token: str,
) -> JsonDict:
"""Bind a 3PID to an identity server
@@ -174,8 +172,7 @@ class IdentityHandler:
mxid: The MXID to bind the 3PID to
id_server: The domain of the identity server to query
id_access_token: The access token to authenticate to the identity
server with, if necessary. Required if use_v2 is true
use_v2: Whether to use v2 Identity Service API endpoints. Defaults to True
server with
Raises:
SynapseError: On any of the following conditions
@@ -187,24 +184,15 @@ class IdentityHandler:
"""
logger.debug("Proxying threepid bind request for %s to %s", mxid, id_server)
# If an id_access_token is not supplied, force usage of v1
if id_access_token is None:
use_v2 = False
if not valid_id_server_location(id_server):
raise SynapseError(
400,
"id_server must be a valid hostname with optional port and path components",
)
# Decide which API endpoint URLs to use
headers = {}
bind_data = {"sid": sid, "client_secret": client_secret, "mxid": mxid}
if use_v2:
bind_url = "https://%s/_matrix/identity/v2/3pid/bind" % (id_server,)
headers["Authorization"] = create_id_access_token_header(id_access_token) # type: ignore
else:
bind_url = "https://%s/_matrix/identity/api/v1/3pid/bind" % (id_server,)
bind_url = "https://%s/_matrix/identity/v2/3pid/bind" % (id_server,)
headers = {"Authorization": create_id_access_token_header(id_access_token)}
try:
# Use the blacklisting http client as this call is only to identity servers
@@ -223,21 +211,14 @@ class IdentityHandler:
return data
except HttpResponseException as e:
if e.code != 404 or not use_v2:
logger.error("3PID bind failed with Matrix error: %r", e)
raise e.to_synapse_error()
logger.error("3PID bind failed with Matrix error: %r", e)
raise e.to_synapse_error()
except RequestTimedOutError:
raise SynapseError(500, "Timed out contacting identity server")
except CodeMessageException as e:
data = json_decoder.decode(e.msg) # XXX WAT?
return data
logger.info("Got 404 when POSTing JSON %s, falling back to v1 URL", bind_url)
res = await self.bind_threepid(
client_secret, sid, mxid, id_server, id_access_token, use_v2=False
)
return res
async def try_unbind_threepid(self, mxid: str, threepid: dict) -> bool:
"""Attempt to remove a 3PID from an identity server, or if one is not provided, all
identity servers we're aware the binding is present on
@@ -300,8 +281,8 @@ class IdentityHandler:
"id_server must be a valid hostname with optional port and path components",
)
url = "https://%s/_matrix/identity/api/v1/3pid/unbind" % (id_server,)
url_bytes = b"/_matrix/identity/api/v1/3pid/unbind"
url = "https://%s/_matrix/identity/v2/3pid/unbind" % (id_server,)
url_bytes = b"/_matrix/identity/v2/3pid/unbind"
content = {
"mxid": mxid,
@@ -434,48 +415,6 @@ class IdentityHandler:
return session_id
async def requestEmailToken(
self,
id_server: str,
email: str,
client_secret: str,
send_attempt: int,
next_link: Optional[str] = None,
) -> JsonDict:
"""
Request an external server send an email on our behalf for the purposes of threepid
validation.
Args:
id_server: The identity server to proxy to
email: The email to send the message to
client_secret: The unique client_secret sends by the user
send_attempt: Which attempt this is
next_link: A link to redirect the user to once they submit the token
Returns:
The json response body from the server
"""
params = {
"email": email,
"client_secret": client_secret,
"send_attempt": send_attempt,
}
if next_link:
params["next_link"] = next_link
try:
data = await self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/email/requestToken",
params,
)
return data
except HttpResponseException as e:
logger.info("Proxied requestToken failed: %r", e)
raise e.to_synapse_error()
except RequestTimedOutError:
raise SynapseError(500, "Timed out contacting identity server")
async def requestMsisdnToken(
self,
id_server: str,
@@ -549,18 +488,7 @@ class IdentityHandler:
validation_session = None
# Try to validate as email
if self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.REMOTE:
# Remote emails will only be used if a valid identity server is provided.
assert (
self.hs.config.registration.account_threepid_delegate_email is not None
)
# Ask our delegated email identity server
validation_session = await self.threepid_from_creds(
self.hs.config.registration.account_threepid_delegate_email,
threepid_creds,
)
elif self.hs.config.email.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
if self.hs.config.email.can_verify_email:
# Get a validated session matching these details
validation_session = await self.store.get_threepid_validation_session(
"email", client_secret, sid=sid, validated=True

Some files were not shown because too many files have changed in this diff.