Compare commits

512 Commits

Author SHA1 Message Date
Andrew Morgan
529462b5c0 1.12.1 2020-04-02 11:32:16 +01:00
Andrew Morgan
677d0edbac Note where bugs were introduced 2020-03-31 11:58:48 +01:00
Andrew Morgan
3fb9fc40f5 1.12.1rc1 2020-03-31 11:49:43 +01:00
Erik Johnston
5d99bde788 Newsfile 2020-03-31 11:30:34 +01:00
Andrew Morgan
2cf115f0ea Rewrite changelog 2020-03-31 11:30:16 +01:00
Andrew Morgan
2cb38ca871 Add changelog 2020-03-31 11:30:05 +01:00
David Vo
5bd2b27525 Only import sqlite3 when type checking
Fixes: #7127
Signed-off-by: David Vo <david@vovo.id.au>
2020-03-31 11:27:17 +01:00
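The fix above is the standard `TYPE_CHECKING` pattern: the module is imported only for the benefit of type checkers, so Python builds without sqlite3 support no longer fail at import time. A minimal sketch (the function name is illustrative, not the actual Synapse code):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Evaluated by mypy only; at runtime sqlite3 is never imported.
    import sqlite3


def describe_connection(conn: "sqlite3.Connection") -> str:
    # The quoted annotation keeps this valid even though the module
    # isn't imported at runtime.
    return "sqlite connection, row factory: %r" % (conn.row_factory,)
```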
Andrew Morgan
b5d0b038f4 Fix another instance 2020-03-31 11:26:37 +01:00
Andrew Morgan
b5ecafd157 Only setdefault for signatures if device has key_json 2020-03-31 11:26:29 +01:00
Erik Johnston
db098ec994 Fix starting workers when federation sending not split out. 2020-03-31 11:25:21 +01:00
Richard van der Hoff
88bb6c27e1 matrix.org was fine 2020-03-23 13:38:30 +00:00
Neil Johnson
066804f591 Update CHANGES.md 2020-03-23 13:36:16 +00:00
Richard van der Hoff
56b5f1d0ee changelog typos 2020-03-23 13:23:21 +00:00
Richard van der Hoff
a438950a00 1.12.0 changelog 2020-03-23 13:00:40 +00:00
Richard van der Hoff
2fa55c0cc6 1.12.0 2020-03-23 12:13:09 +00:00
Richard van der Hoff
c8c926f9c9 more changelog 2020-03-19 11:26:51 +00:00
Richard van der Hoff
163f23785a changelog fixes 2020-03-19 11:25:32 +00:00
Richard van der Hoff
5aa6dff99e fix typo 2020-03-19 11:15:48 +00:00
Richard van der Hoff
e43e78b985 1.12.0rc1 2020-03-19 11:07:16 +00:00
Richard van der Hoff
782b811789 update grafana dashboard 2020-03-19 10:45:40 +00:00
Richard van der Hoff
e913823a22 Fix concurrent modification errors in pusher metrics (#7106)
add a lock to try to make this metric actually work
2020-03-19 10:28:49 +00:00
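The "add a lock" fix addresses a metric dict being mutated by application threads while Prometheus's collection thread iterates it. A hedged sketch of the pattern (illustrative names, not the actual Synapse collector):

```python
import threading


class ActivePusherMetric:
    """Illustrative: a count read by the Prometheus collection thread
    while application threads update it."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counts = {}  # e.g. (app_id, kind) -> number of active pushers

    def set(self, key, value):
        with self._lock:
            self._counts[key] = value

    def collect(self):
        # Snapshot under the lock so the collector never iterates the
        # dict while another thread is resizing it.
        with self._lock:
            return dict(self._counts)
```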
Richard van der Hoff
8c75667ad7 Add prometheus metrics for the number of active pushers (#7103) 2020-03-19 10:00:24 +00:00
Richard van der Hoff
443162e577 Move pusherpool startup into _base.setup (#7104)
This should be safe to do on all workers/masters because it is guarded by
a config option which will ensure it is only actually done on the worker
assigned as a pusher.
2020-03-19 09:48:45 +00:00
Erik Johnston
4a17a647a9 Improve get auth chain difference algorithm. (#7095)
It was originally implemented by pulling the full auth chain of all
state sets out of the database and doing set comparison. However, that
can take a lot of work if the state and auth chains are large.

Instead, let's try to fetch the auth chains at the same time and
calculate the difference on the fly, allowing us to bail early if all
the auth chains converge. Assuming that the auth chains do converge more
often than not, this should improve performance. Hopefully.
2020-03-18 16:46:41 +00:00
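The approach described — walking all auth chains in parallel and computing the difference on the fly — can be sketched over an in-memory event map as below. It is a simplification: the real change works against the database and adds extra bookkeeping to bail out early once the chains converge.

```python
def auth_chain_difference(event_map, state_sets):
    """Events reachable from some of the state sets but not from all.

    event_map: event_id -> list of auth event_ids (the auth DAG).
    state_sets: list of iterables of event_ids to start from.
    """
    n = len(state_sets)
    seen_by = {}  # event_id -> set of state-set indices that reach it
    frontiers = [set(s) for s in state_sets]
    # Advance every chain one generation at a time, rather than
    # materialising each full chain up front and set-comparing at the end.
    while any(frontiers):
        for i in range(n):
            next_frontier = set()
            for event_id in frontiers[i]:
                reached = seen_by.setdefault(event_id, set())
                if i not in reached:
                    reached.add(i)
                    next_frontier.update(event_map.get(event_id, ()))
            frontiers[i] = next_frontier
    return {e for e, reached in seen_by.items() if len(reached) < n}
```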
Patrick Cloke
88b41986db Add an option to the set password API to choose whether to logout other devices. (#7085) 2020-03-18 07:50:00 -04:00
Richard von Kellner
6d110ddea4 Update INSTALL.md updated CentOS8 install instructions (#6925) 2020-03-17 21:48:23 +00:00
Richard van der Hoff
c37db0211e Share SSL contexts for non-federation requests (#7094)
Extends #5794 etc to the SimpleHttpClient so that it also applies to non-federation requests.

Fixes #7092.
2020-03-17 21:32:25 +00:00
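Sharing one TLS context means certificates are loaded and verification options configured once, and the context's session cache is reused across connections. A rough stdlib illustration of the idea (Synapse's actual client is Twisted-based):

```python
import ssl
import urllib.request

# Built once at startup; reused for every outbound HTTPS request.
_SHARED_SSL_CONTEXT = ssl.create_default_context()


def fetch(url: str) -> bytes:
    with urllib.request.urlopen(url, context=_SHARED_SSL_CONTEXT) as resp:
        return resp.read()
```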
The Stranjer
5e477c1deb Set charset to utf-8 when adding headers for certain text content types (#7044)
Fixes #7043
2020-03-17 13:29:09 +00:00
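The fix boils down to appending an explicit charset parameter when the content type is textual, so clients stop guessing the encoding. A sketch (the set of types is illustrative):

```python
TEXT_CONTENT_TYPES = {"text/css", "text/csv", "text/html", "text/plain"}


def add_charset(content_type: str) -> str:
    # Annotate textual types only; binary types pass through untouched.
    if content_type.lower() in TEXT_CONTENT_TYPES:
        return content_type + "; charset=utf-8"
    return content_type
```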
Patrick Cloke
7581d30e9f Remove unused federation endpoint (query_auth) (#7026) 2020-03-17 08:04:49 -04:00
Patrick Cloke
60724c46b7 Remove special casing of m.room.aliases events (#7034) 2020-03-17 07:37:04 -04:00
Richard van der Hoff
6a35046363 Revert "Add options to disable setting profile info to prevent changes. (#7053)"
This reverts commit 54dd28621b, reversing
changes made to 6640460d05.
2020-03-17 11:25:01 +00:00
Brendan Abolivier
7df04ca0e6 Populate the room version from state events (#7070)
Fixes #7065 

This is basically the same as https://github.com/matrix-org/synapse/pull/6847 except it tries to populate events from `state_events` rather than `current_state_events`, since the latter might have been cleared from the state of some rooms too early, leaving them with a `NULL` room version.
2020-03-16 22:31:47 +00:00
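Conceptually, the migration reads each room's `m.room.create` event out of `state_events` and copies its `room_version` field into `rooms`. A hedged sketch with a simplified schema (in particular, the event JSON is assumed to sit on the same row, which is not how Synapse actually stores it):

```python
import json


def populate_room_versions(txn):
    txn.execute(
        "SELECT room_id, json FROM state_events "
        "WHERE type = 'm.room.create' AND state_key = ''"
    )
    for room_id, event_json in txn.fetchall():
        content = json.loads(event_json).get("content", {})
        # A create event without a room_version field means version "1".
        version = content.get("room_version", "1")
        txn.execute(
            "UPDATE rooms SET room_version = ? WHERE room_id = ?",
            (version, room_id),
        )
```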
Brendan Abolivier
beb19cf61a Fix buggy condition in account validity handler (#7074) 2020-03-16 12:16:30 +00:00
Brendan Abolivier
d8d91983bc Merge pull request #7067 from matrix-org/babolivier/saml_error_moar
Move the default SAML2 error HTML to a dedicated file
2020-03-13 19:53:19 +00:00
Brendan Abolivier
ebfcbbff9c Use innerText instead of innerHTML 2020-03-13 19:09:22 +00:00
Patrick Cloke
77d0a4507b Add type annotations and comments to auth handler (#7063) 2020-03-12 11:36:27 -04:00
Brendan Abolivier
0de9f9486a Lint 2020-03-11 20:39:18 +00:00
Brendan Abolivier
f9e98176bf Put the file in the templates directory 2020-03-11 20:31:42 +00:00
Brendan Abolivier
bd5e555b0d Merge pull request #7066 from matrix-org/babolivier/dummy_events_state
Skip the correct visibility checks when checking the visibility of the state at a given event
2020-03-11 20:07:58 +00:00
Brendan Abolivier
900bca9707 Update wording and config 2020-03-11 19:40:30 +00:00
Brendan Abolivier
e55a240681 Changelog 2020-03-11 19:37:04 +00:00
Brendan Abolivier
b8cfe79ffc Move the default SAML2 error HTML to a dedicated file
Also add some JS to it to process any error we might have in the URI
(see #6893).
2020-03-11 19:33:16 +00:00
Brendan Abolivier
8120a238a4 Refactor a bit 2020-03-11 18:49:41 +00:00
Brendan Abolivier
37a9873f63 Also don't fail on aliases events in this case 2020-03-11 18:43:41 +00:00
Brendan Abolivier
e38c44b418 Lint 2020-03-11 18:06:07 +00:00
Brendan Abolivier
1cde4cf3f1 Changelog 2020-03-11 18:03:56 +00:00
Brendan Abolivier
2dce68c651 Also don't filter out events sent by ignored users when checking state visibility 2020-03-11 17:53:22 +00:00
Brendan Abolivier
9c0775e86a Fix condition 2020-03-11 17:53:18 +00:00
Brendan Abolivier
69ce55c510 Don't filter out dummy events when we're checking the visibility of state 2020-03-11 17:52:54 +00:00
Brendan Abolivier
54dd28621b Add options to disable setting profile info to prevent changes. (#7053) 2020-03-10 22:23:01 +00:00
Dirk Klimpel
751d51dd12 Update sample_config.yaml 2020-03-10 21:41:25 +01:00
Dirk Klimpel
42ac4ca477 Update synapse/config/registration.py
Co-Authored-By: Brendan Abolivier <github@brendanabolivier.com>
2020-03-10 21:26:55 +01:00
Brendan Abolivier
6640460d05 Merge pull request #7058 from matrix-org/babolivier/saml_error_html
SAML2: render a comprehensible error page if something goes wrong
2020-03-10 18:42:15 +00:00
Brendan Abolivier
8f826f98ac Rephrase default message 2020-03-10 17:22:45 +00:00
Brendan Abolivier
dc6fb56c5f Hopefully mypy is happy now 2020-03-10 14:40:28 +00:00
Brendan Abolivier
fe593ef990 Attempt at appeasing the gods of mypy 2020-03-10 14:19:06 +00:00
Brendan Abolivier
5ec2077bf9 Lint 2020-03-10 14:04:20 +00:00
Brendan Abolivier
156f271867 Changelog 2020-03-10 14:01:24 +00:00
Brendan Abolivier
51c094c4ac Update sample config 2020-03-10 14:00:29 +00:00
Brendan Abolivier
6b0efe73e2 SAML2: render a comprehensible error page if something goes wrong
If an error happened while processing a SAML AuthN response, or a client
ends up doing a `GET` request to `/authn_response`, then render a
customisable error page rather than a confusing error.
2020-03-10 13:59:22 +00:00
dklimpel
39f6595b4a lint, fix tests 2020-03-09 22:13:20 +01:00
dklimpel
885134529f updates after review 2020-03-09 22:09:29 +01:00
dklimpel
7e5f40e771 fix tests 2020-03-09 21:00:36 +01:00
dklimpel
50ea178c20 lint 2020-03-09 19:57:04 +01:00
dklimpel
04f4b5f6f8 add tests 2020-03-09 19:51:31 +01:00
Brendan Abolivier
14b2ebe767 Merge pull request #7055 from matrix-org/babolivier/get_time_of_last_push_action_before
Move get_time_of_last_push_action_before to the EventPushActionsWorkerStore
2020-03-09 14:53:50 +00:00
Brendan Abolivier
f9e3a3f4d0 Changelog
It's the same as in #6964 since it's the most likely cause of the bug
and that change hasn't been released yet.
2020-03-09 14:21:01 +00:00
Brendan Abolivier
aee2bae952 Fix undefined room_id in make_summary_text
This would break notifications about un-named rooms when processing
notifications in a batch.
2020-03-09 14:10:19 +00:00
Brendan Abolivier
87c65576e0 Move get_time_of_last_push_action_before to the EventPushActionsWorkerStore
Fixes #7054

I also had a look at the rest of the functions in
`EventPushActionsStore` and in the push notifications send code and it
looks to me like there shouldn't be any other method with this issue in
this part of the codebase.
2020-03-09 13:58:38 +00:00
Patrick Cloke
06eb5cae08 Remove special auth and redaction rules for aliases events in experimental room ver. (#7037) 2020-03-09 08:58:25 -04:00
Patrick Cloke
66315d862f Update routing of fallback auth in the worker docs. (#7048) 2020-03-09 07:19:24 -04:00
Brendan Abolivier
bbf725e7da Merge pull request #7045 from matrix-org/babolivier/room_keys_check
Make sure that is_verified is a boolean when processing room keys
2020-03-09 09:54:48 +00:00
dklimpel
99bbe177b6 add disable_3pid_changes 2020-03-08 21:58:12 +01:00
dklimpel
20545a2199 lint2 2020-03-08 15:28:00 +01:00
dklimpel
ce460dc31c lint 2020-03-08 15:22:43 +01:00
dklimpel
fb078f921b changelog 2020-03-08 15:19:07 +01:00
dklimpel
1f5f3ae8b1 Add options to disable setting profile info to prevent changes. 2020-03-08 14:49:33 +01:00
Neil Pilgrim
2bff4457d9 Add type hints to logging/context.py (#6309)
* Add type hints to logging/context.py

Signed-off-by: neiljp (Neil Pilgrim) <github@kepier.clara.net>
2020-03-07 17:57:26 +00:00
Neil Johnson
1d66dce83e Break down monthly active users by appservice_id (#7030)
* Break down monthly active users by appservice_id and emit via prometheus.

Co-authored-by: Brendan Abolivier <babolivier@matrix.org>
2020-03-06 18:14:19 +00:00
Brendan Abolivier
54b78a0e3b Lint 2020-03-06 15:11:13 +00:00
Brendan Abolivier
297aaf4816 Mention the session ID in the error message 2020-03-06 15:07:41 +00:00
Brendan Abolivier
45df9d35a9 Lint 2020-03-06 11:10:52 +00:00
Brendan Abolivier
a27056d539 Changelog 2020-03-06 11:06:47 +00:00
Brendan Abolivier
80e580ae92 Make sure that is_verified is a boolean when processing room keys 2020-03-06 11:05:00 +00:00
Patrick Cloke
87972f07e5 Convert remote key resource REST layer to async/await. (#7020) 2020-03-05 11:29:56 -05:00
Richard van der Hoff
78a15b1f9d Store room_versions in EventBase objects (#6875)
This is a bit fiddly because it all has to be done on one fell swoop:

* Wherever we create a new event, pass in the room version (and check it matches the format version)
* When we prune an event, use the room version of the unpruned event to create the pruned version.
* When we pass an event over the replication protocol, pass the room version over alongside it, and use it when deserialising the event again.
2020-03-05 15:46:44 +00:00
Brendan Abolivier
fe678a0900 Merge pull request #7035 from matrix-org/babolivier/hide_dummy_events
Hide extremities dummy events from clients
2020-03-05 10:51:19 +00:00
Brendan Abolivier
83b6c69d3d Changelog 2020-03-04 17:29:09 +00:00
Brendan Abolivier
31a2116331 Hide extremities dummy events from clients 2020-03-04 17:28:13 +00:00
Patrick Cloke
13892776ef Allow deleting an alias if the user has sufficient power level (#6986) 2020-03-04 11:30:46 -05:00
Richard van der Hoff
8ef8fb2c1c Read the room version from database when fetching events (#6874)
This is a precursor to giving EventBase objects the knowledge of which room version they belong to.
2020-03-04 13:11:04 +00:00
Brendan Abolivier
43f874055d Merge branch 'master' into develop 2020-03-03 15:20:49 +00:00
Brendan Abolivier
6b0ef34706 Update debian changelog 2020-03-03 15:01:43 +00:00
Brendan Abolivier
fe6ab0439d Merge branch 'babolivier/v1.11.1-changelog' into 'release-v1.11.1'
v1.11.1

See merge request new-vector/synapse!6
2020-03-03 14:58:37 +00:00
Brendan Abolivier
fd983fad96 v1.11.1 2020-03-03 14:58:37 +00:00
Patrick Cloke
7dcbc33a1b Validate the alt_aliases property of canonical alias events (#6971) 2020-03-03 07:12:45 -05:00
Brendan Abolivier
6a8880b9c3 Merge branch 'babolivier/complete_sso_login_saml' into 'release-v1.11.1'
Fix wrong handler being used in SAML handler

See merge request new-vector/synapse!5
2020-03-03 11:29:07 +00:00
Brendan Abolivier
a0178df104 Fix wrong handler being used in SAML handler 2020-03-03 11:29:07 +00:00
Brendan Abolivier
6f67a8b570 Merge branch 'babolivier/sso_module_api' into 'release-v1.11.1'
Factor out complete_sso_login and expose it to the Module API

See merge request new-vector/synapse!4
2020-03-03 10:54:44 +00:00
Brendan Abolivier
65c73cdfec Factor out complete_sso_login and expose it to the Module API 2020-03-03 10:54:44 +00:00
Richard van der Hoff
809e8567f6 Merge branch 'rav/sso-confirm-whitelist' into 'release-v1.11.1'
Add a whitelist for the SSO confirmation step.

See merge request new-vector/synapse!3
2020-03-02 17:05:09 +00:00
Richard van der Hoff
b68041df3d Add a whitelist for the SSO confirmation step. 2020-03-02 17:05:09 +00:00
Erik Johnston
b29474e0aa Always return a deferred from get_current_state_deltas. (#7019)
This currently causes presence notify code to log exceptions when there
is no state changes to process. This doesn't actually cause any problems
as we'd simply do nothing anyway.
2020-03-02 16:52:15 +00:00
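The underlying bug class is a function that returns a bare value on one path and a Deferred on another. The usual fix is to wrap the short-circuit path in an already-fired Deferred, roughly like this toy version:

```python
from twisted.internet import defer


def get_current_state_deltas(deltas_by_token, since_token):
    """Toy version: always hand callers a Deferred, never a bare list."""
    changed = [d for t, d in deltas_by_token.items() if t > since_token]
    if not changed:
        # The short-circuit path previously returned a bare [], which
        # made callers that yield on the result log exceptions.
        return defer.succeed([])
    # Stands in for the real path, which returns a Deferred from a
    # database query.
    return defer.succeed(changed)
```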
Richard van der Hoff
27d099edd6 Merge remote-tracking branch 'origin/release-v1.11.1' into release-v1.11.1 2020-03-02 16:43:33 +00:00
Brendan Abolivier
2e7fad87d4 Merge branch 'anoabolivier/sso-confirm' into 'release-v1.11.1'
Add a confirmation step to the SSO login flow

See merge request new-vector/synapse!2
2020-03-02 16:36:32 +00:00
Brendan Abolivier
b2bd54a2e3 Add a confirmation step to the SSO login flow 2020-03-02 16:36:32 +00:00
Erik Johnston
3ab8e9c293 Fix py35-old CI by using native tox. (#7018)
I'm not really sure how this was going wrong, but this seems like the
right approach anyway.
2020-03-02 16:17:11 +00:00
Richard van der Hoff
174aaa1d62 remove spurious changelog 2020-03-02 14:53:56 +00:00
Richard van der Hoff
036c6cea07 Merge branch 'release-v1.11.1' into develop 2020-03-02 14:53:10 +00:00
Dirk Klimpel
bbeee33d63 Fix setting a user as an admin with the new API (#6928)
Fix #6910
2020-03-02 13:28:50 +00:00
Matthew Hodgson
cc7ab0d84a rst->md 2020-03-01 21:21:36 +00:00
Uday Bansal
e4ffb14d57 Fix last date for ACMEv1 install (#7015)
Support for getting TLS certificates through ACMEv1 ended in November 2019.

Signed-off-by: Uday Bansal <43824981+udaybansal19@users.noreply.github.com>
2020-02-29 23:37:23 +00:00
Sandro
d96ac97d29 Fix mounting of homeserver.yaml when it does not exist on host (#6913)
Signed-off-by: Sandro Jäckel <sandro.jaeckel@gmail.com>
2020-02-29 23:32:26 +00:00
Patrick Cloke
12d4259000 Add some type annotations to the federation base & client classes (#6995) 2020-02-28 07:31:07 -05:00
Dirk Klimpel
9b06d8f8a6 Fix setting a user as an admin with the new API (#6928)
Fix #6910
2020-02-28 09:58:05 +00:00
Patrick Cloke
ab0073a6c0 Merge remote-tracking branch 'origin/release-v1.11.1' into develop 2020-02-27 13:47:44 -05:00
Erik Johnston
2201bc9795 Don't refuse to start worker if media listener configured. (#7002)
Instead, let's just warn if the worker has a media listener configured but
has the media repository disabled.

Previously non media repository workers would just ignore the media
listener.
2020-02-27 16:33:21 +00:00
Richard van der Hoff
cab4a52535 set worker_app for frontend proxy test (#7003)
to stop the federationhandler trying to do master stuff
2020-02-27 13:08:43 +00:00
James
b32ac60c22 Expose common commands via snap run interface to allow easier invocation (#6315)
Signed-off-by: James Hebden <james@ec0.io>
2020-02-27 12:47:40 +00:00
Richard van der Hoff
132b673dbe Add some type annotations in synapse.storage (#6987)
I cracked, and added some type definitions in synapse.storage.
2020-02-27 11:53:40 +00:00
Richard van der Hoff
3e99528f2b Store room version on invite (#6983)
When we get an invite over federation, store the room version in the rooms table.

The general idea here is that, when we pull the invite out again, we'll want to know what room_version it belongs to (so that we can later redact it if need be). So we need to store it somewhere...
2020-02-26 16:58:33 +00:00
Patrick Cloke
380122866f Cast a coroutine into a Deferred in the federation base (#6996)
Properly convert a coroutine into a Deferred in federation_base to fix an error when joining a room.
2020-02-26 11:32:13 -05:00
Erik Johnston
1f773eec91 Port PresenceHandler to async/await (#6991) 2020-02-26 15:33:26 +00:00
Uday Bansal
7728d87fd7 Updated warning for incorrect database collation/ctype (#6985)
Signed-off-by: Uday Bansal <43824981+udaybansal19@users.noreply.github.com>
2020-02-26 15:17:03 +00:00
Andrew Morgan
8c75b621bf Ensure 'deactivated' parameter is a boolean on user admin API, Fix error handling of call to deactivate user (#6990) 2020-02-26 12:22:55 +00:00
Richard van der Hoff
c1156d3e2b Sanity-check database before running upgrades (#6982)
Some of the database deltas rely on `config.server_name` being set correctly,
so we should check that it is before running the deltas.

Fixes #6870.
2020-02-25 17:46:34 +00:00
Richard van der Hoff
e66f099ca9 Sanity-check database before running upgrades (#6982)
Some of the database deltas rely on `config.server_name` being set correctly,
so we should check that it is before running the deltas.

Fixes #6870.
2020-02-25 17:46:00 +00:00
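A pre-upgrade sanity check of this shape compares evidence of the database's owning server against the configured `server_name` before any deltas run. A hedged sketch (it infers the owner from user IDs in a `users` table; not the exact production check):

```python
def check_database_before_upgrade(cur, server_name):
    """Refuse to run deltas against a database created for another server."""
    cur.execute("SELECT name FROM users LIMIT 1")
    row = cur.fetchone()
    if row is None:
        return  # empty database: nothing to sanity-check yet
    # User IDs look like @alice:example.com; the domain is the server name.
    user_domain = row[0].split(":", 1)[1]
    if user_domain != server_name:
        raise RuntimeError(
            "Database was created for server %r, but config says %r"
            % (user_domain, server_name)
        )
```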
Erik Johnston
bbf8886a05 Merge worker apps into one. (#6964) 2020-02-25 16:56:55 +00:00
Fridtjof Mund
4aea0bd292 contrib/docker: remove quotes for POSTGRES_INITDB_ARGS (#6984)
I made a mistake in https://github.com/matrix-org/synapse/pull/6921 - the quotes break the postgres container's startup script (or docker-compose), which makes initdb fail: https://github.com/matrix-org/synapse/pull/6921#issuecomment-590657154

Signed-off-by: Fridtjof Mund <fridtjof@das-labor.org>
2020-02-25 10:48:13 +00:00
Richard van der Hoff
691659568f Remove redundant store_room call (#6979)
`_process_received_pdu` is only called by `on_receive_pdu`, which ignores any
events for unknown rooms, so this is redundant.
2020-02-24 17:20:44 +00:00
Richard van der Hoff
a301934f46 Upsert room version when we join over federation (#6968)
This is intended as a precursor to storing room versions when we receive an
invite over federation, but has the happy side-effect of fixing #3374 at last.

In short: change the store_room with try/except to a proper upsert which
updates the right columns.
2020-02-24 15:46:41 +00:00
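"Change the store_room with try/except to a proper upsert" maps onto SQL's `INSERT ... ON CONFLICT DO UPDATE`. A sketch with simplified columns:

```python
def upsert_room(txn, room_id, room_version):
    # One statement instead of INSERT-then-catch-duplicate: on conflict,
    # update only the column we want corrected (here room_version),
    # leaving the rest of the existing row alone.
    txn.execute(
        """
        INSERT INTO rooms (room_id, room_version) VALUES (?, ?)
        ON CONFLICT (room_id) DO UPDATE SET room_version = excluded.room_version
        """,
        (room_id, room_version),
    )
```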
Richard van der Hoff
4c2ed3f20e Fix minor issues with email config (#6962)
* Give `notif_template_html`, `notif_template_text` default values (fixes #6960)
 * Don't complain if `smtp_host` and `smtp_port` are unset, since they have sensible defaults (fixes #6961)
 * Set the example for `enable_notifs` to `True`, for consistency and because it's more useful
 * Raise errors as ConfigError rather than RuntimeError for nicer formatting
2020-02-24 15:18:38 +00:00
Patrick Cloke
af6c389501 No longer use room alias events to calculate room names for push notifications. (#6966) 2020-02-21 12:50:48 -05:00
Dirk Klimpel
7b0e2d961c Change displayname of user as admin in rooms (#6876) 2020-02-21 17:44:03 +00:00
Patrick Cloke
fcf4599488 Stop returning aliases as part of the room list. (#6970) 2020-02-21 12:40:23 -05:00
Patrick Cloke
7936d2a96e Publishing/removing from the directory requires a power level greater than that for canonical aliases. 2020-02-21 07:18:33 -05:00
Patrick Cloke
509e381afa Clarify list/set/dict/tuple comprehensions and enforce via flake8 (#6957)
Ensure good comprehension hygiene using flake8-comprehensions.
2020-02-21 07:15:07 -05:00
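flake8-comprehensions flags generator expressions needlessly wrapped in constructor calls. The kinds of rewrites it enforces:

```python
data = ["a", "b", "a"]

# Set/dict comprehensions instead of set()/dict() over a generator:
unique = {x for x in data}           # not: set(x for x in data)
lengths = {x: len(x) for x in data}  # not: dict((x, len(x)) for x in data)

# A list comprehension instead of list() around a generator:
upper = [x.upper() for x in data]    # not: list(x.upper() for x in data)
```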
Richard van der Hoff
272eee1ae1 Merge pull request #6967 from matrix-org/rav/increase_max_events_behind
Increase MAX_EVENTS_BEHIND for replication clients
2020-02-21 10:17:28 +00:00
Richard van der Hoff
4f7e4fc2fb Merge branch 'master' into develop 2020-02-21 09:37:03 +00:00
Richard van der Hoff
1fcb9a1a7a changelog 2020-02-21 09:06:18 +00:00
Erik Johnston
0bd8cf435e Increase MAX_EVENTS_BEHIND for replication clients 2020-02-21 09:04:33 +00:00
Richard van der Hoff
9c1b83b007 1.11.0 2020-02-21 08:56:04 +00:00
Andrew Morgan
8f6d9c4cf0 Small grammar fixes to the ACME v1 deprecation notice (#6944)
Some small fixes to the copy in #6907.
2020-02-21 08:53:01 +00:00
Patrick Cloke
99eed85a77 Do not send alias events when creating / upgrading a room (#6941)
Stop emitting room alias update events during room creation/upgrade.
2020-02-20 16:24:04 -05:00
Hubert Chathi
a90d0dc5c2 don't insert into the device table for remote cross-signing keys (#6956) 2020-02-20 09:59:00 -05:00
Ruben Barkow-Kuder
4fb5f4d0ce Add some clarifications to README.md in the database schema directory. (#6615)
Signed-off-by: Ruben Barkow-Kuder <github@r.z11.de>
2020-02-20 10:37:57 +00:00
Erik Johnston
7b7c3cedf2 Minor perf fixes to get_auth_chain_ids. 2020-02-19 15:47:11 +00:00
Erik Johnston
fc87d2ffb3 Freeze allocated objects on startup. (#6953)
This may make gc go a bit faster as the gc will know things like
caches/data stores etc. are frozen without having to check.
2020-02-19 15:09:00 +00:00
Erik Johnston
2b37eabca1 Reduce auth chains fetched during v2 state res. (#6952)
The state res v2 algorithm only cares about the difference between auth
chains, so we can pass in the known common state to the `get_auth_chain`
storage function so that it can ignore those events.
2020-02-19 15:04:47 +00:00
Richard van der Hoff
0001e8397e update changes.md 2020-02-19 13:54:05 +00:00
Richard van der Hoff
197b08de35 1.11.0rc1 2020-02-19 13:48:32 +00:00
Erik Johnston
099c96b89b Revert get_auth_chain_ids changes (#6951) 2020-02-19 11:37:35 +00:00
Richard van der Hoff
2fb7794e60 Merge pull request #6949 from matrix-org/rav/list_room_aliases_peekable
Make room alias lists peekable
2020-02-19 11:19:11 +00:00
Brendan Abolivier
bbe39f808c Merge pull request #6940 from matrix-org/babolivier/federate.md
Clean up and update federation docs
2020-02-19 10:58:59 +00:00
Richard van der Hoff
880aaac1d8 Move MSC2432 stuff onto unstable prefix (#6948)
it's not in the spec yet, so it needs to be unstable. Also add a feature flag for it, and a test for admin users.
2020-02-19 10:40:27 +00:00
Richard van der Hoff
abf1e5c526 Tiny optimisation for _get_handler_for_request (#6950)
we have hundreds of path_regexes (see #5118), so let's not convert the same
bytes to str for each of them.
2020-02-19 10:38:20 +00:00
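The optimisation is simply hoisting the bytes-to-str conversion out of the matching loop, so hundreds of patterns are tested against one decoded string. Sketch (illustrative structure):

```python
def find_handler(handlers, request_path_bytes):
    """handlers: list of (compiled_regex, callback) pairs.

    Decode the request path once, not once per pattern: with hundreds
    of patterns the repeated bytes-to-str conversion is pure waste.
    """
    request_path = request_path_bytes.decode("ascii")
    for pattern, callback in handlers:
        m = pattern.match(request_path)
        if m:
            return callback, m.groupdict()
    return None, None
```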
Erik Johnston
0d0bc35792 Increase DB/CPU perf of _is_server_still_joined check. (#6936)
* Increase DB/CPU perf of `_is_server_still_joined` check.

For rooms with large amount of state a single user leaving could cause
us to go and load a lot of membership events and then pull out
membership state in a large number of batches.

* Newsfile

* Update synapse/storage/persist_events.py

Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>

* Fix adding if too soon

* Update docstring

* Review comments

* Woops typo

Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-02-19 10:15:49 +00:00
Brendan Abolivier
5e4a438556 Merge pull request #6945 from matrix-org/babolivier/fix-retention-debug-log
Fix log in message retention purge jobs
2020-02-19 10:12:55 +00:00
Brendan Abolivier
71d65407e7 Incorporate review 2020-02-19 10:03:19 +00:00
Brendan Abolivier
fa64f836ec Update changelog.d/6945.bugfix
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-02-19 09:54:13 +00:00
Erik Johnston
5a5abd55e8 Limit size of get_auth_chain_ids query (#6947) 2020-02-19 09:39:26 +00:00
Richard van der Hoff
603618c002 changelog 2020-02-19 08:53:32 +00:00
Richard van der Hoff
709e81f518 Make room alias lists peekable
As per
https://github.com/matrix-org/matrix-doc/pull/2432#pullrequestreview-360566830,
make room alias lists accessible to users outside world_readable rooms.
2020-02-19 08:53:32 +00:00
Richard van der Hoff
a0a1fd0bec Add allow_departed_users param to check_in_room_or_world_readable
... and set it everywhere it's called.

while we're here, rename it for consistency with `check_user_in_room` (and to
help check that I haven't missed any instances)
2020-02-19 08:52:51 +00:00
Richard van der Hoff
b58d17e44f Refactor the membership check methods in Auth
these were getting a bit unwieldy, so let's combine `check_joined_room` and
`check_user_was_in_room` into a single `check_user_in_room`.
2020-02-18 23:21:44 +00:00
Brendan Abolivier
771d70e89c Changelog 2020-02-18 17:31:02 +00:00
Brendan Abolivier
f31a94a6dd Fix log in message retention purge jobs 2020-02-18 17:29:57 +00:00
Brendan Abolivier
61b457e3ec Incorporate review 2020-02-18 17:20:03 +00:00
Richard van der Hoff
adfaea8c69 Implement GET /_matrix/client/r0/rooms/{roomId}/aliases (#6939)
per matrix-org/matrix-doc#2432
2020-02-18 16:23:25 +00:00
Richard van der Hoff
3f1cd14791 Merge pull request #6872 from matrix-org/rav/dictproperty
Rewrite _EventInternalMetadata to back it with a dict
2020-02-18 16:21:02 +00:00
Brendan Abolivier
a0d2f9d089 Phrasing 2020-02-18 16:16:49 +00:00
Brendan Abolivier
d484126bf7 Merge pull request #6907 from matrix-org/babolivier/acme-config
Add mention and warning about ACME v1 deprecation to the TLS config
2020-02-18 16:11:31 +00:00
Erik Johnston
8a380d0fe2 Increase perf of get_auth_chain_ids used in state res v2. (#6937)
We do this by moving the recursive query to be fully in the DB.
2020-02-18 15:39:09 +00:00
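"Moving the recursive query to be fully in the DB" typically means a recursive common table expression over the auth-edge table, replacing one round trip per generation with a single query. A hedged sketch (column names assumed):

```python
AUTH_CHAIN_SQL = """
WITH RECURSIVE auth_chain(event_id) AS (
    SELECT auth_id FROM event_auth WHERE event_id = ?
    UNION
    SELECT ea.auth_id
    FROM event_auth AS ea
    JOIN auth_chain AS ac ON ea.event_id = ac.event_id
)
SELECT event_id FROM auth_chain
"""


def get_auth_chain_ids(txn, event_id):
    # One round trip: the database walks the auth DAG itself.
    txn.execute(AUTH_CHAIN_SQL, (event_id,))
    return [row[0] for row in txn.fetchall()]
```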
Erik Johnston
818def8248 Fix worker docs to point /publicised_groups API correctly. (#6938) 2020-02-18 15:27:45 +00:00
Brendan Abolivier
9801a042f3 Make the log more noticeable 2020-02-18 15:15:43 +00:00
Brendan Abolivier
bfbe2f5b08 Print the error as an error log and raise the same exception we got 2020-02-18 15:10:41 +00:00
Brendan Abolivier
7a782c32a2 Merge pull request #6909 from matrix-org/babolivier/acme-install
Update INSTALL.md to recommend reverse proxying and warn about ACMEv1 deprecation
2020-02-18 15:06:06 +00:00
Brendan Abolivier
b1255077f5 Changelog 2020-02-18 14:35:51 +00:00
Brendan Abolivier
d009535639 Add mention of SRV records as an advanced topic 2020-02-18 14:07:41 +00:00
Brendan Abolivier
ba7a523854 Argh trailing spaces 2020-02-18 13:57:15 +00:00
Brendan Abolivier
e837be5b5c Fix links in the reverse proxy doc 2020-02-18 13:53:58 +00:00
Brendan Abolivier
3c67eee6dc Make federate.md more of a summary of the steps to follow to set up replication 2020-02-18 13:51:03 +00:00
Patrick Cloke
fe3941f6e3 Stop sending events when creating or deleting aliases (#6904)
Stop sending events when creating or deleting associations (room aliases). Send an updated canonical alias event if one of the alt_aliases is deleted.
2020-02-18 07:29:44 -05:00
Brendan Abolivier
8ee0d74516 Split the delegating documentation out of federate.md and trim it down 2020-02-18 12:05:45 +00:00
Richard van der Hoff
3be2abd0a9 Kill off deprecated "config-on-the-fly" docker mode (#6918)
Lots of people seem to get confused by this mode, and it's been deprecated
since Synapse 1.1.0. It's time for it to go.
2020-02-18 11:41:53 +00:00
Richard van der Hoff
bc831d1d9a #6924 has been released in 1.10.1 2020-02-17 16:34:13 +00:00
Richard van der Hoff
0a714c3abf Merge branch 'master' into develop 2020-02-17 16:33:21 +00:00
Richard van der Hoff
7718fabb7a Merge branch 'release-v1.10.1' 2020-02-17 16:33:04 +00:00
Richard van der Hoff
fd6d83ed96 1.10.1 2020-02-17 16:27:33 +00:00
Richard van der Hoff
d2455ec3aa wait for current_state_events_membership before delete_old_current_state_events (#6924) 2020-02-17 16:19:32 +00:00
Andrew Morgan
3404ad289b Raise the default power levels for invites, tombstones and server acls (#6834) 2020-02-17 13:23:37 +00:00
Richard van der Hoff
46fa66bbfd wait for current_state_events_membership before delete_old_current_state_events (#6924) 2020-02-17 11:30:50 +00:00
Patrick Cloke
10027c80b0 Add type hints to the spam check module (#6915)
Add typing information to the spam checker modules.
2020-02-14 12:49:40 -05:00
Richard van der Hoff
5a78f47f6e changelog 2020-02-14 16:42:40 +00:00
Richard van der Hoff
9551911f88 Rewrite _EventInternalMetadata to back it with a _dict
Mostly, this gives mypy an easier time.
2020-02-14 16:42:40 +00:00
Richard van der Hoff
43b2be9764 Replace _event_dict_property with DictProperty
this amounts to the same thing, but replaces `_event_dict` with `_dict`, and
removes some of the function layers generated by `property`.
2020-02-14 16:42:37 +00:00
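A descriptor of this shape proxies attribute access through to the instance's backing `_dict`, which removes the per-attribute `property` wrappers and gives mypy concrete attributes to type. A hedged reconstruction of the pattern:

```python
class DictProperty:
    """Descriptor exposing one key of the instance's `_dict` as an attribute."""

    def __init__(self, key):
        self.key = key

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        try:
            return instance._dict[self.key]
        except KeyError:
            raise AttributeError(self.key) from None

    def __set__(self, instance, value):
        instance._dict[self.key] = value


class EventInternalMetadata:
    outlier = DictProperty("outlier")

    def __init__(self, internal_metadata_dict):
        self._dict = dict(internal_metadata_dict)


meta = EventInternalMetadata({"outlier": True})
assert meta.outlier is True
```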
Fridtjof Mund
32873efa87 contrib/docker: Ensure correct encoding and locale settings on DB creation (#6921)
Signed-off-by: Fridtjof Mund <fridtjof@das-labor.org>
2020-02-14 16:27:29 +00:00
Richard van der Hoff
97a42bbc3a Add a warning about indentation to generated config (#6920)
Fixes #6916.
2020-02-14 16:22:30 +00:00
Patrick Cloke
02e89021f5 Convert the directory handler tests to use HomeserverTestCase (#6919)
Convert directory handler tests to use HomeserverTestCase.
2020-02-14 09:05:43 -05:00
Patrick Cloke
49f877d32e Filter the results of user directory searching via the spam checker (#6888)
Add a method to the spam checker to filter the user directory results.
2020-02-14 07:17:54 -05:00
Brendan Abolivier
ffe1fc111d Update INSTALL.md
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-02-13 18:16:48 +00:00
Brendan Abolivier
79460ce9c9 Changelog 2020-02-13 17:24:14 +00:00
Brendan Abolivier
71cc6bab5f Update INSTALL.md to recommend reverse proxying and warn about ACMEv1 deprecation 2020-02-13 17:22:44 +00:00
Brendan Abolivier
36af094017 Linters are hard but in the end they just want what's best for us 2020-02-13 17:03:41 +00:00
Brendan Abolivier
65bdc35a1f Lint 2020-02-13 16:14:15 +00:00
Brendan Abolivier
df1c98c22a Update changelog for #6905 to group it with upcoming PRs 2020-02-13 16:12:20 +00:00
Brendan Abolivier
f3f142259e Changelog 2020-02-13 16:10:16 +00:00
Brendan Abolivier
0cb83cde70 Lint 2020-02-13 16:06:31 +00:00
Brendan Abolivier
ef9c275d96 Add a separator for the config warning 2020-02-13 15:44:14 +00:00
Brendan Abolivier
12bbcc255a Add a comprehensive error when failing to register for an ACME account 2020-02-13 14:58:34 +00:00
Brendan Abolivier
5820ed905f Add mention and warning about ACME v1 deprecation to the Synapse config 2020-02-13 14:20:08 +00:00
Patrick Cloke
361de49c90 Add documentation for the spam checker module (#6906)
Add documentation for the spam checker.
2020-02-13 07:40:57 -05:00
Brendan Abolivier
f48bf4febd Merge pull request #6905 from matrix-org/babolivier/acme.md
Update ACME.md to mention ACME v1 deprecation
2020-02-13 12:13:18 +00:00
Aaron Raimist
dc3f998706 Remove m.lazy_load_members from unstable features since it is in CS r0.5.0 (#6877)
Fixes #5528
2020-02-13 12:02:32 +00:00
Brendan Abolivier
862669d6cc Update docs/ACME.md 2020-02-13 11:29:08 +00:00
Brendan Abolivier
459d089af7 Mention that using Synapse to serve certificates requires restarts 2020-02-12 21:05:30 +00:00
Brendan Abolivier
e88a5dd108 Changelog 2020-02-12 20:15:41 +00:00
Brendan Abolivier
e45a7c0939 Remove duplicated info about certbot et al 2020-02-12 20:14:59 +00:00
Brendan Abolivier
f092029d2d Update ACME.md to mention ACME v1 deprecation 2020-02-12 20:14:16 +00:00
Brendan Abolivier
6cd34da8b1 Merge pull request #6891 from matrix-org/babolivier/retention-doc-amend
Spell out that the last event sent to a room won't be deleted by a purge
2020-02-12 20:12:20 +00:00
Andrew Morgan
d8994942f2 Return a 404 for admin api user lookup if user not found (#6901) 2020-02-12 18:14:10 +00:00
Brendan Abolivier
08e050c3fd Rephrase 2020-02-12 15:39:40 +00:00
Brendan Abolivier
47acbc519f Merge branch 'master' into develop 2020-02-12 13:24:09 +00:00
Brendan Abolivier
d9239b5257 Merge tag 'v1.10.0'
Synapse 1.10.0 (2020-02-12)
===========================

**WARNING to client developers**: As of this release Synapse validates `client_secret` parameters in the Client-Server API as per the spec. See [\#6766](https://github.com/matrix-org/synapse/issues/6766) for details.

Updates to the Docker image
---------------------------

- Update the docker images to Alpine Linux 3.11. ([\#6897](https://github.com/matrix-org/synapse/issues/6897))

Synapse 1.10.0rc5 (2020-02-11)
==============================

Bugfixes
--------

- Fix the filtering introduced in 1.10.0rc3 to also apply to the state blocks returned by `/sync`. ([\#6884](https://github.com/matrix-org/synapse/issues/6884))

Synapse 1.10.0rc4 (2020-02-11)
==============================

This release candidate was built incorrectly and is superseded by 1.10.0rc5.

Synapse 1.10.0rc3 (2020-02-10)
==============================

Features
--------

- Filter out `m.room.aliases` from the CS API to mitigate abuse while a better solution is specced. ([\#6878](https://github.com/matrix-org/synapse/issues/6878))

Internal Changes
----------------

- Fix continuous integration failures with old versions of `pip`, which were introduced by a release of the `zipp` library. ([\#6880](https://github.com/matrix-org/synapse/issues/6880))

Synapse 1.10.0rc2 (2020-02-06)
==============================

Bugfixes
--------

- Fix an issue with cross-signing where device signatures were not sent to remote servers. ([\#6844](https://github.com/matrix-org/synapse/issues/6844))
- Fix to the unknown remote device detection which was introduced in 1.10.0rc1. ([\#6848](https://github.com/matrix-org/synapse/issues/6848))

Internal Changes
----------------

- Detect unexpected sender keys on remote encrypted events and resync device lists. ([\#6850](https://github.com/matrix-org/synapse/issues/6850))

Synapse 1.10.0rc1 (2020-01-31)
==============================

Features
--------

- Add experimental support for updated authorization rules for aliases events, from [MSC2260](https://github.com/matrix-org/matrix-doc/pull/2260). ([\#6787](https://github.com/matrix-org/synapse/issues/6787), [\#6790](https://github.com/matrix-org/synapse/issues/6790), [\#6794](https://github.com/matrix-org/synapse/issues/6794))

Bugfixes
--------

- Warn if postgres database has a non-C locale, as that can cause issues when upgrading locales (e.g. due to upgrading OS). ([\#6734](https://github.com/matrix-org/synapse/issues/6734))
- Minor fixes to `PUT /_synapse/admin/v2/users` admin api. ([\#6761](https://github.com/matrix-org/synapse/issues/6761))
- Validate `client_secret` parameter using the regex provided by the Client-Server API, temporarily allowing `:` characters for older clients. The `:` character will be removed in a future release. ([\#6767](https://github.com/matrix-org/synapse/issues/6767))
- Fix persisting redaction events that have been redacted (or otherwise don't have a redacts key). ([\#6771](https://github.com/matrix-org/synapse/issues/6771))
- Fix outbound federation request metrics. ([\#6795](https://github.com/matrix-org/synapse/issues/6795))
- Fix bug where querying a remote user's device keys that weren't cached resulted in only returning a single device. ([\#6796](https://github.com/matrix-org/synapse/issues/6796))
- Fix race in federation sender worker that delayed sending of device updates. ([\#6799](https://github.com/matrix-org/synapse/issues/6799), [\#6800](https://github.com/matrix-org/synapse/issues/6800))
- Fix bug where Synapse didn't invalidate cache of remote users' devices when Synapse left a room. ([\#6801](https://github.com/matrix-org/synapse/issues/6801))
- Fix waking up other workers when remote server is detected to have come back online. ([\#6811](https://github.com/matrix-org/synapse/issues/6811))

Improved Documentation
----------------------

- Clarify documentation related to `user_dir` and `federation_reader` workers. ([\#6775](https://github.com/matrix-org/synapse/issues/6775))

Internal Changes
----------------

- Record room versions in the `rooms` table. ([\#6729](https://github.com/matrix-org/synapse/issues/6729), [\#6788](https://github.com/matrix-org/synapse/issues/6788), [\#6810](https://github.com/matrix-org/synapse/issues/6810))
- Propagate cache invalidates from workers to other workers. ([\#6748](https://github.com/matrix-org/synapse/issues/6748))
- Remove some unnecessary admin handler abstraction methods. ([\#6751](https://github.com/matrix-org/synapse/issues/6751))
- Add some debugging for media storage providers. ([\#6757](https://github.com/matrix-org/synapse/issues/6757))
- Detect unknown remote devices and mark cache as stale. ([\#6776](https://github.com/matrix-org/synapse/issues/6776), [\#6819](https://github.com/matrix-org/synapse/issues/6819))
- Attempt to resync remote users' devices when detected as stale. ([\#6786](https://github.com/matrix-org/synapse/issues/6786))
- Delete current state from the database when server leaves a room. ([\#6792](https://github.com/matrix-org/synapse/issues/6792))
- When a client asks for a remote user's device keys check if the local cache for that user has been marked as potentially stale. ([\#6797](https://github.com/matrix-org/synapse/issues/6797))
- Add background update to clean out left rooms from current state. ([\#6802](https://github.com/matrix-org/synapse/issues/6802), [\#6816](https://github.com/matrix-org/synapse/issues/6816))
- Refactoring work in preparation for changing the event redaction algorithm. ([\#6803](https://github.com/matrix-org/synapse/issues/6803), [\#6805](https://github.com/matrix-org/synapse/issues/6805), [\#6806](https://github.com/matrix-org/synapse/issues/6806), [\#6807](https://github.com/matrix-org/synapse/issues/6807), [\#6820](https://github.com/matrix-org/synapse/issues/6820))
2020-02-12 13:23:22 +00:00
Brendan Abolivier
7b8d654a61 Move the warning at the top of the release changes 2020-02-12 12:20:37 +00:00
Brendan Abolivier
fdb816713a 1.10.0 2020-02-12 12:19:19 +00:00
Richard van der Hoff
3dd2b5f5e3 bump the version of Alpine Linux used in the docker images (#6897) 2020-02-12 12:02:53 +00:00
Patrick Cloke
ba547ec3a9 Use BSD-compatible in-place editing for sed. (#6887) 2020-02-12 07:02:19 -05:00
Brendan Abolivier
a0c4769f1a Update the changelog file 2020-02-11 17:56:42 +00:00
Brendan Abolivier
6b21986e4e Also spell it out in the purge history API doc 2020-02-11 17:56:04 +00:00
Brendan Abolivier
705c978366 Changelog 2020-02-11 17:38:27 +00:00
Brendan Abolivier
a443d2a25d Spell out that Synapse never purges the last event sent in a room 2020-02-11 17:37:09 +00:00
Richard van der Hoff
88d41e94f5 Merge branch 'release-v1.10.0' into develop 2020-02-11 11:12:31 +00:00
Richard van der Hoff
856b2a9555 1.10.0rc5 2020-02-11 11:06:28 +00:00
Richard van der Hoff
78d170262c changelog wording 2020-02-11 10:53:25 +00:00
Richard van der Hoff
aa7e4291ee Update CHANGES.md 2020-02-11 10:51:13 +00:00
Richard van der Hoff
9e45d573d4 changelog formatting 2020-02-11 10:50:55 +00:00
Richard van der Hoff
605cd089f7 Merge branch 'release-v1.10.0' into develop 2020-02-11 10:43:47 +00:00
Richard van der Hoff
3edc65dd24 1.10.0rc4 2020-02-11 10:43:16 +00:00
Patrick Cloke
a92e703ab9 Reject device display names that are too long (#6882)
* Reject device display names that are too long.

Too long is currently defined as 100 characters in length.

* Add a regression test for rejecting a too long device display name.
2020-02-10 16:35:26 -05:00
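The validation itself is a length check at the API boundary; the 100-character cap comes from the commit message above. Sketch:

```python
MAX_DEVICE_DISPLAY_NAME_LEN = 100


def check_device_display_name(display_name):
    if display_name is not None and len(display_name) > MAX_DEVICE_DISPLAY_NAME_LEN:
        # Reject rather than silently truncate, so clients see the limit.
        raise ValueError(
            "Device display name is too long (max %d)"
            % MAX_DEVICE_DISPLAY_NAME_LEN
        )
```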
Matthew Hodgson
01209382fb filter out m.room.aliases from /sync state blocks (#6884)
We forgot to filter out aliases from /sync state blocks as well as the timeline.
2020-02-10 18:07:35 +00:00
Patrick Cloke
3a3118f4ec Add an additional test to the SyTest blacklist for worker mode. (#6883) 2020-02-10 11:47:18 -05:00
Richard van der Hoff
db0fee738d Merge tag 'v1.10.0rc3' into develop
Synapse 1.10.0rc3 (2020-02-10)
==============================

Features
--------

- Filter out m.room.aliases from the CS API to mitigate abuse while a better solution is specced. ([\#6878](https://github.com/matrix-org/synapse/issues/6878))

Internal Changes
----------------

- Fix continuous integration failures with old versions of `pip`, which were introduced by a release of the `zipp` library. ([\#6880](https://github.com/matrix-org/synapse/issues/6880))
2020-02-10 10:15:32 +00:00
Richard van der Hoff
3de57e7062 1.10.0rc3 2020-02-10 09:56:42 +00:00
Matthew Hodgson
8e64c5a24c filter out m.room.aliases from the CS API until a better solution is specced (#6878)
We're in the middle of properly mitigating spam caused by malicious aliases being added to a room. However, until this work fully lands, we temporarily filter out all m.room.aliases events from /sync and /messages on the CS API, to remove abusive aliases. This is considered acceptable as m.room.aliases events were never a reliable record of the given alias->id mapping and were purely informational, and in their current state do more harm than good.
2020-02-10 09:36:23 +00:00
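The mitigation is a predicate applied to the event lists returned by /sync and /messages. Sketch:

```python
def filter_out_aliases(events):
    # m.room.aliases events are purely informational, so dropping them
    # from client-facing responses loses nothing of record.
    return [e for e in events if e.get("type") != "m.room.aliases"]
```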
Richard van der Hoff
cc0800ebfc Merge remote-tracking branch 'origin/release-v1.10.0' into develop 2020-02-10 00:41:49 +00:00
Richard van der Hoff
fe73f0d533 Update setuptools for python 3.5 tests (#6880)
Workaround for jaraco/zipp#40
2020-02-10 00:41:20 +00:00
Erik Johnston
21db35f77e Add support for putting fed user query API on workers (#6873) 2020-02-07 15:45:39 +00:00
Richard van der Hoff
e1d858984d Remove unused get_room_stats_state method. (#6869) 2020-02-07 15:30:26 +00:00
Richard van der Hoff
799001f2c0 Add a make_event_from_dict method (#6858)
... and use it in places where it's trivial to do so.

This will make it easier to pass room versions into the FrozenEvent
constructors.
2020-02-07 15:30:04 +00:00
Erik Johnston
b08b0a22d5 Add typing to synapse.federation.sender (#6871) 2020-02-07 13:56:38 +00:00
Erik Johnston
de2d267375 Allow moving group read APIs to workers (#6866) 2020-02-07 11:14:19 +00:00
Dirk Klimpel
56ca93ef59 Admin api to add an email address (#6789) 2020-02-07 10:29:36 +00:00
Richard van der Hoff
f4884444c3 remove unused room_version_to_event_format (#6857) 2020-02-07 09:26:57 +00:00
Richard van der Hoff
e1b240329e Merge pull request #6856 from matrix-org/rav/redact_changes/6
Pass room_version into `event_from_pdu_json`
2020-02-07 09:22:15 +00:00
Patrick Cloke
7765bf3989 Limit the number of events that can be requested when backfilling events (#6864)
Limit the maximum number of events requested when backfilling events.
2020-02-06 13:25:24 -05:00
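Limiting a remote-supplied count is a one-line clamp applied before any expensive work. Sketch (the cap value is illustrative):

```python
MAX_BACKFILL_EVENTS = 100  # illustrative cap, not the actual configured value


def clamp_backfill_limit(requested: int) -> int:
    # Never trust a remote-supplied limit: clamp it so a single
    # /backfill request can't ask for an unbounded number of events.
    return max(0, min(requested, MAX_BACKFILL_EVENTS))
```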
Richard van der Hoff
928edef979 Pass room_version into event_from_pdu_json
It's called from all over the shop, so this one's a bit messy.
2020-02-06 16:08:27 +00:00
Richard van der Hoff
b0c8bdd49d pass room version into FederationClient.send_join (#6854)
... which allows us to sanity-check the create event.
2020-02-06 15:50:39 +00:00
timfi
bce557175b Allow empty federation_certificate_verification_whitelist (#6849) 2020-02-06 14:45:01 +00:00
PeerD
99fcc96289 Third party event rules Update (#6781) 2020-02-06 14:15:29 +00:00
Erik Johnston
ed630ea17c Reduce amount of logging at INFO level. (#6862)
A lot of the things we log at INFO are now a bit superfluous, so let's
make them DEBUG logs to reduce the amount we log by default.

Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-authored-by: Brendan Abolivier <github@brendanabolivier.com>
2020-02-06 13:31:05 +00:00
Richard van der Hoff
9bcd37146e Merge pull request #6823 from matrix-org/rav/redact_changes/5
pass room versions around
2020-02-06 11:32:33 +00:00
Erik Johnston
2201ef8556 Merge tag 'v1.10.0rc2' into develop
Synapse 1.10.0rc2 (2020-02-06)
==============================

Bugfixes
--------

- Fix an issue with cross-signing where device signatures were not sent to remote servers. ([\#6844](https://github.com/matrix-org/synapse/issues/6844))
- Fix to the unknown remote device detection which was introduced in 1.10.0rc1. ([\#6848](https://github.com/matrix-org/synapse/issues/6848))

Internal Changes
----------------

- Detect unexpected sender keys on remote encrypted events and resync device lists. ([\#6850](https://github.com/matrix-org/synapse/issues/6850))
2020-02-06 11:04:03 +00:00
Erik Johnston
f663118155 Update changelog 2020-02-06 10:52:25 +00:00
Erik Johnston
b5176166b7 Update changelog 2020-02-06 10:51:02 +00:00
Erik Johnston
4a50b674f2 Update changelog 2020-02-06 10:45:29 +00:00
Erik Johnston
6a7e90ad78 1.10.0rc2 2020-02-06 10:40:08 +00:00
Robin Vleij
f0561fcffd Update documentation (#6859)
Update documentation to reflect the correct format of user_id (fully qualified).
2020-02-05 21:27:38 +00:00
Patrick Cloke
5e019069ab Merge pull request #6855 from matrix-org/clokep/readme-pip-install
Add quotes around the pip install target to avoid my shell complaining
2020-02-05 13:29:52 -05:00
Patrick Cloke
39c2d26e0b Add quotes around pip install target (my shell complained without them). 2020-02-05 12:53:18 -05:00
Richard van der Hoff
ff70ec0a00 Newsfile 2020-02-05 17:43:57 +00:00
Richard van der Hoff
ee0525b2b2 Simplify room_version handling in FederationClient.send_invite 2020-02-05 17:43:57 +00:00
Richard van der Hoff
f84700fba8 Pass room version object into FederationClient.get_pdu 2020-02-05 17:25:46 +00:00
Richard van der Hoff
577f460369 Merge pull request #6840 from matrix-org/rav/federation_client_async
Port much of `synapse.federation.federation_client` to async/await
2020-02-05 16:56:39 +00:00
Richard van der Hoff
6bbd890f05 make FederationClient._do_send_invite async 2020-02-05 15:50:31 +00:00
Richard van der Hoff
146fec0820 Apply suggestions from code review
Co-Authored-By: Erik Johnston <erik@matrix.org>
2020-02-05 15:47:00 +00:00
Erik Johnston
a58860e480 Check sender_key matches on inbound encrypted events. (#6850)
If they don't then the device lists are probably out of sync.
2020-02-05 14:02:39 +00:00
Hubert Chathi
60d0672426 Merge pull request #6844 from matrix-org/uhoreg/cross_signing_fix_device_fed
add device signatures to device key query results
2020-02-05 10:54:49 +00:00
Michael Kaye
a831d2e4e3 Reduce performance logging to DEBUG (#6833)
* Reduce txn performance logging to DEBUG
* Changelog.d
2020-02-05 08:57:37 +00:00
Richard van der Hoff
d88e0ec080 Database updates to populate rooms.room_version (#6847)
We're going to need this so that we can figure out how to handle redactions when fetching events from the database.
2020-02-04 21:31:08 +00:00
Erik Johnston
6475382d80 Fix detecting unknown devices from remote encrypted events. (#6848)
We were looking at the wrong event type (`m.room.encryption` vs
`m.room.encrypted`).

Also fixup the duplicate `EvenTypes` entries.

Introduced in #6776.
2020-02-04 17:25:54 +00:00
Hubert Chathi
74bf3fdbb9 Merge pull request #6844 from matrix-org/uhoreg/cross_signing_fix_device_fed
add device signatures to device key query results
2020-02-04 12:03:54 -05:00
Michael Kaye
c87572d6e4 Update CONTRIBUTING.md about merging PRs. (#6846) 2020-02-04 16:21:09 +00:00
Richard van der Hoff
5ef91b96f1 Merge remote-tracking branch 'origin/develop' into rav/federation_client_async 2020-02-04 12:07:05 +00:00
Richard van der Hoff
c7d6d5c69e Merge pull request #6837 from matrix-org/rav/federation_async
Port much of `synapse.handlers.federation` to async/await.
2020-02-04 12:06:18 +00:00
Hubert Chathi
245ee14220 add changelog 2020-02-04 00:21:07 -05:00
Hubert Chathi
23d8a55c7a add device signatures to device key query results 2020-02-04 00:13:12 -05:00
Richard van der Hoff
ea23210b2d make FederationClient.send_invite async 2020-02-03 22:29:49 +00:00
Richard van der Hoff
4b4536dd02 newsfile 2020-02-03 22:28:45 +00:00
Richard van der Hoff
6deeefb68c make FederationClient.get_missing_events async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
abadf44eb2 make FederationClient._do_send_leave async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
e88b90aaeb make FederationClient.send_leave.send_request async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
638001116d make FederationClient._do_send_join async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
3960527c2e make FederationClient.send_join.send_request async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
ad09ee9262 make FederationClient.make_membership_event.send_request async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
1330c311b7 make FederationClient._try_destination_list async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
a46fabf17b make FederationClient.send_leave async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
8af9f11bea make FederationClient.send_join async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
3f11cbb404 make FederationClient.make_membership_event async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
24d814ca23 make FederationClient.get_event_auth async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
d73683c363 make FederationClient.get_room_state_ids async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
0cb0c7bcd5 make FederationClient.get_pdu async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
0536d0c9be make FederationClient.backfill async 2020-02-03 22:28:45 +00:00
Richard van der Hoff
5d17c31596 make FederationHandler.send_invite async 2020-02-03 22:28:11 +00:00
Richard van der Hoff
e81c093974 make FederationHandler.on_get_missing_events async 2020-02-03 19:15:08 +00:00
Erik Johnston
b9391c9575 Add typing to SyncHandler (#6821)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-02-03 18:05:44 +00:00
Erik Johnston
ae5b3104f0 Fix stacktraces when using ObservableDeferred and async/await (#6836) 2020-02-03 17:10:54 +00:00
Richard van der Hoff
e49eb1a886 changelog 2020-02-03 16:30:21 +00:00
Richard van der Hoff
f64c96662e make FederationHandler.user_joined_room async 2020-02-03 16:29:30 +00:00
Richard van der Hoff
52642860da make FederationHandler._clean_room_for_join async 2020-02-03 16:29:30 +00:00
Richard van der Hoff
814cc00cb9 make FederationHandler._notify_persisted_event async 2020-02-03 16:29:30 +00:00
Richard van der Hoff
05299599b6 make FederationHandler.persist_events_and_notify async 2020-02-03 16:29:30 +00:00
Richard van der Hoff
3b7e0e002b make FederationHandler._make_and_verify_event async 2020-02-03 16:22:30 +00:00
Richard van der Hoff
4286e429a7 make FederationHandler.do_remotely_reject_invite async 2020-02-03 16:19:18 +00:00
Richard van der Hoff
c3f296af32 make FederationHandler._check_for_soft_fail async 2020-02-03 16:16:31 +00:00
Richard van der Hoff
dbdf843012 make FederationHandler._persist_auth_tree async 2020-02-03 16:14:58 +00:00
Richard van der Hoff
ebd6a15af3 make FederationHandler.do_invite_join async 2020-02-03 16:13:13 +00:00
Richard van der Hoff
94f7b4cd54 make FederationHandler.on_event_auth async 2020-02-03 16:06:46 +00:00
Richard van der Hoff
863087d186 make FederationHandler.on_exchange_third_party_invite_request async 2020-02-03 16:02:50 +00:00
Richard van der Hoff
957129f4a7 make FederationHandler.construct_auth_difference async 2020-02-03 16:00:46 +00:00
Richard van der Hoff
0d5f2f4bb0 make FederationHandler._update_context_for_auth_events async 2020-02-03 15:55:35 +00:00
Richard van der Hoff
a25ddf26a3 make FederationHandler._update_auth_events_and_context_for_auth async 2020-02-03 15:53:54 +00:00
Richard van der Hoff
bc9b75c6f0 make FederationHandler.do_auth async 2020-02-03 15:51:24 +00:00
Richard van der Hoff
8033b257a7 make FederationHandler._prep_event async 2020-02-03 15:49:32 +00:00
Richard van der Hoff
1cdc253e0a make FederationHandler._handle_new_event async 2020-02-03 15:48:33 +00:00
Richard van der Hoff
c556ed9e15 make FederationHandler._handle_new_events async 2020-02-03 15:43:51 +00:00
Richard van der Hoff
6e89ec5e32 make FederationHandler.on_make_leave_request async 2020-02-03 15:40:41 +00:00
Richard van der Hoff
d184cbc031 make FederationHandler.on_send_leave_request async 2020-02-03 15:39:24 +00:00
Richard van der Hoff
98681f90cb make FederationHandler.on_make_join_request async 2020-02-03 15:38:02 +00:00
Richard van der Hoff
af8ba6b525 make FederationHandler.on_invite_request async 2020-02-03 15:33:42 +00:00
Richard van der Hoff
7571bf86f0 make FederationHandler.on_send_join_request async 2020-02-03 15:32:48 +00:00
Richard van der Hoff
b3e44f0bdf make FederationHandler.on_query_auth async 2020-02-03 15:30:23 +00:00
Andrew Morgan
370080531e Allow URL-encoded user IDs on user admin api paths (#6825) 2020-02-03 13:18:42 +00:00
Richard van der Hoff
b0d112e78b Fix room_version in on_invite_request flow (#6827)
I messed this up a bit in #6805, but fortunately we weren't actually doing
anything with the room_version so it didn't matter that it was a str not a RoomVersion.
2020-02-03 13:15:23 +00:00
Erik Johnston
68ef7ebbef Update changelog 2020-01-31 15:45:08 +00:00
Erik Johnston
0f8ffa38b5 Fix link in upgrade.rst 2020-01-31 15:38:16 +00:00
Erik Johnston
ac0d45b78b 1.10.0rc1 2020-01-31 15:35:37 +00:00
Erik Johnston
83b0ea047b Fix deleting of stale marker for device lists (#6819)
We were in fact only deleting the stale marker when we got an incremental
update, rather than when we did a full resync.
2020-01-31 14:04:15 +00:00
Richard van der Hoff
7f93eb1903 pass room_version into compute_event_signature (#6807) 2020-01-31 13:47:43 +00:00
Richard van der Hoff
a5afdd15e5 Merge pull request #6806 from matrix-org/rav/redact_changes/3
Pass room_version into add_hashes_and_signatures
2020-01-31 10:57:03 +00:00
Richard van der Hoff
160522e32c Merge pull request #6820 from matrix-org/rav/get_room_version_id
Make `get_room_version` return a RoomVersion object
2020-01-31 10:56:42 +00:00
Richard van der Hoff
f6fa2c0b31 newsfile 2020-01-31 10:30:29 +00:00
Richard van der Hoff
08f41a6f05 Add get_room_version method
So that we can start factoring out some of this boilerplatey boilerplate.
2020-01-31 10:28:15 +00:00
Richard van der Hoff
d7bf793cc1 s/get_room_version/get_room_version_id/
... to make way for a forthcoming get_room_version which returns a RoomVersion
object.
2020-01-31 10:06:21 +00:00
Erik Johnston
7d846e8704 Fix bug with getting missing auth event during join 500'ed (#6810) 2020-01-31 09:49:13 +00:00
Richard van der Hoff
540c5e168b changelog 2020-01-30 22:15:50 +00:00
Richard van der Hoff
2a81393a4b Pass room_version into add_hashes_and_signatures 2020-01-30 22:15:50 +00:00
Richard van der Hoff
54f3f369bd Pass room_version into create_local_event_from_event_dict 2020-01-30 22:15:50 +00:00
Richard van der Hoff
ef6bdafb29 Store the room version in EventBuilder 2020-01-30 22:15:50 +00:00
Richard van der Hoff
46a446828d pass room version into FederationHandler.on_invite_request (#6805) 2020-01-30 22:13:02 +00:00
Erik Johnston
e0992fcc5b Log when we delete room in bg update (#6816) 2020-01-30 17:55:34 +00:00
Richard van der Hoff
184303b865 MSC2260: Block direct sends of m.room.aliases events (#6794)
as per MSC2260
2020-01-30 17:20:55 +00:00
Erik Johnston
57ad702af0 Background update to clean out rooms from current state (#6802) 2020-01-30 17:17:44 +00:00
Erik Johnston
b660327056 Resync remote device list when detected as stale. (#6786) 2020-01-30 17:06:38 +00:00
Erik Johnston
c3d4ad8afd Fix sending server up commands from workers (#6811)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2020-01-30 16:42:11 +00:00
Erik Johnston
a5bab2d058 When server leaves room check for stale device lists. (#6801)
When a server leaves a room it may stop sharing a room with remote
users, and thus not get any updates to their device lists. So we need to
check for this case and delete those device lists from the cache.

We don't need to do this if we stop sharing a room because the remote
user leaves the room, because we track that case via looking at
membership changes.
2020-01-30 16:10:30 +00:00
Erik Johnston
c80a9fe13d When a client asks for remote keys check if should resync. (#6797)
If we detect that the remote users' keys may have changed then we should
attempt to resync against the remote server rather than using the
(potentially) stale local cache.
2020-01-30 15:06:58 +00:00
Richard van der Hoff
5a246611e3 Type definitions for use in refactoring for redaction changes (#6803)
* Bump signedjson to 1.1

... so that we can use the type definitions

* Fix breakage caused by upgrade to signedjson 1.1

Thanks, @illicitonion...
2020-01-30 11:25:59 +00:00
Erik Johnston
a855b7c3a8 Remove unused DeviceRow class (#6800) 2020-01-29 12:06:31 +00:00
Richard van der Hoff
281551f720 Merge pull request #6790 from matrix-org/rav/msc2260.1
MSC2260: change the default power level for m.room.aliases events
2020-01-29 11:53:11 +00:00
Richard van der Hoff
750d4d7599 changelog 2020-01-29 11:52:52 +00:00
Richard van der Hoff
dcd85b976d Make /directory/room/<alias> handle restrictive power levels
Fixes a bug where the alias would be added, but `PUT /directory/room/<alias>`
would return a 403.
2020-01-29 11:52:52 +00:00
Richard van der Hoff
b36095ae5c Set the PL for aliases events to 0. 2020-01-29 11:52:52 +00:00
Richard van der Hoff
ee42a5513e Factor out a copy_power_levels_contents method
I'm going to need another copy (hah!) of this.
2020-01-29 11:52:52 +00:00
Erik Johnston
6b9e1014cf Fix race in federation sender that delayed device updates. (#6799)
We were sending device updates down both the federation stream and
device streams. This meant there was a race if the federation sender
worker processed the federation stream first, as when the sender checked
if there were new device updates the slaved ID generator hadn't been
updated with the new stream IDs and so returned nothing.

This situation is correctly handled by events/receipts/etc by not
sending updates down the federation stream and instead having the
federation sender worker listen on the other streams and poke the
transaction queues as appropriate.
2020-01-29 11:23:01 +00:00
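The race is easiest to see with a toy model of the sender's query (illustrative code, not Synapse's actual classes):

```python
def get_new_device_updates(updates, last_sent, current_token):
    # Only rows up to the stream position this worker believes exists
    # are considered; `updates` is a list of (stream_id, update) pairs.
    return [(sid, u) for sid, u in updates if last_sent < sid <= current_token]

updates = [(5, "device update for @alice:example.com")]

# Race: the federation stream wakes the sender before the device stream
# has advanced the worker's token past 5, so the update is missed...
print(get_new_device_updates(updates, last_sent=4, current_token=4))  # []

# ...but once the device stream catches up, the same query finds it.
print(get_new_device_updates(updates, last_sent=4, current_token=5))  # [(5, ...)]
```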
Erik Johnston
611215a49c Delete current state when server leaves a room (#6792)
Otherwise it's just stale data, which may get deleted later anyway so
can't be relied on. It's also a bit of a shotgun if we're trying to get
the current state of a room we're not in.
2020-01-29 11:01:32 +00:00
Erik Johnston
2cad8baa70 Fix bug when querying remote user keys that require a resync. (#6796)
We ended up only returning a single device, rather than all of them.
2020-01-29 09:56:41 +00:00
Erik Johnston
fcfb591b31 Fix outbound federation request metrics (#6795) 2020-01-28 18:59:48 +00:00
Richard van der Hoff
cc109b79dd Merge pull request #6787 from matrix-org/rav/msc2260
Implement updated auth rules from MSC2260
2020-01-28 15:11:22 +00:00
Richard van der Hoff
a1f307f7d1 fix bad variable ref 2020-01-28 14:55:22 +00:00
Erik Johnston
e17a110661 Detect unknown remote devices and mark cache as stale (#6776)
We just mark the fact that the cache may be stale in the database for
now.
2020-01-28 14:43:21 +00:00
Richard van der Hoff
fbe0a82c0d update changelog 2020-01-28 14:20:10 +00:00
Richard van der Hoff
99e205fc21 changelog 2020-01-28 14:20:10 +00:00
Richard van der Hoff
49d3bca37b Implement updated auth rules from MSC2260 2020-01-28 14:20:10 +00:00
Richard van der Hoff
a8ce7aeb43 Pass room version object into event_auth.check and check_redaction (#6788)
These are easier to work with than the strings and we normally have one around.

This fixes `FederationHandler._persist_auth_tree` which was passing a
RoomVersion object into event_auth.check instead of a string.
2020-01-28 14:18:29 +00:00
Erik Johnston
02b44db922 Warn if postgres database has non-C locale. (#6734)
Using a non-C locale can cause issues when upgrading the OS.
2020-01-28 13:44:21 +00:00
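For administrators who want to check whether their database is affected, a query like the following works (assumes psycopg2 is installed; connection parameters are placeholders):

```python
import psycopg2

conn = psycopg2.connect(dbname="synapse", user="synapse_user")
with conn.cursor() as cur:
    # datcollate/datctype are standard PostgreSQL catalog columns.
    cur.execute(
        "SELECT datcollate, datctype FROM pg_database WHERE datname = %s",
        ("synapse",),
    )
    collate, ctype = cur.fetchone()

# Non-C locales make string sort order depend on the OS's collation
# tables, which can change on upgrade and silently corrupt indexes.
if (collate, ctype) != ("C", "C"):
    print(f"warning: database locale is {collate}/{ctype}, not C/C")
```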
Erik Johnston
33f904835a Merge branch 'master' into develop 2020-01-28 13:39:39 +00:00
Erik Johnston
77d9357226 1.9.1 2020-01-28 13:09:36 +00:00
Erik Johnston
bdbeeb94ec Fix setting mau_limit_reserved_threepids config (#6793)
Calling the invalidation function during initialisation of the data
stores introduces a circular dependency, causing Synapse to fail to
start.
2020-01-28 13:05:24 +00:00
Erik Johnston
8df862e45d Add rooms.room_version column (#6729)
This is so that we don't have to rely on pulling it out from `current_state_events` table.
2020-01-27 14:30:57 +00:00
Erik Johnston
d5275fc55f Propagate cache invalidates from workers to other workers. (#6748)
Previously, if a worker invalidated a cache the invalidation would be streamed to master, which did not forward it on to the other workers.
2020-01-27 13:47:50 +00:00
Brendan Abolivier
f74d178b17 Merge pull request #6775 from matrix-org/jaywink/worker-docs-tweaks
Clarifications to the workers documentation
2020-01-27 12:30:54 +00:00
Jason Robinson
cf9d56e5cf Formatting of changelog
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
2020-01-27 14:09:59 +02:00
Jason Robinson
1fe5001369 Fix federation_reader listeners doc as per PR review
Signed-off-by: Jason Robinson <jasonr@matrix.org>
2020-01-27 10:21:25 +02:00
Andrew Morgan
9f7aaf90b5 Validate client_secret parameter (#6767) 2020-01-24 14:28:40 +00:00
Jason Robinson
aa6ad288f1 Clarifications to the workers documentation
* Add note that user_dir requires disabling user dir
  updates from the main synapse process.
* Add note that federation_reader should have
  the federation listener resource.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2020-01-24 11:08:50 +02:00
Erik Johnston
fa4d609e20 Make 'event.redacts' never raise. (#6771)
There are quite a few places that we assume that a redaction event has a
corresponding `redacts` key, which is not always the case. So let's
cheekily make it so that event.redacts just returns None instead.
2020-01-23 15:19:03 +00:00
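The described behaviour amounts to something like this (simplified event class for illustration):

```python
class Event:
    def __init__(self, event_dict):
        self._event_dict = event_dict

    @property
    def redacts(self):
        # Returns None when the `redacts` key is absent, instead of
        # raising, so callers no longer need to guard every access.
        return self._event_dict.get("redacts")

print(Event({"type": "m.room.redaction"}).redacts)  # None, not a KeyError
```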
Brendan Abolivier
51fc3f693e Merge branch 'master' into develop 2020-01-23 13:45:23 +00:00
Brendan Abolivier
9bae740527 Fixup changelog 2020-01-23 13:13:19 +00:00
Brendan Abolivier
1755326d8a Fixup changelog 2020-01-23 13:11:07 +00:00
Brendan Abolivier
1dc5a791cf Fixup changelog 2020-01-23 12:59:29 +00:00
Brendan Abolivier
ba64c3b615 Merge branch 'release-v1.9.0' of github.com:matrix-org/synapse into release-v1.9.0 2020-01-23 12:58:03 +00:00
Brendan Abolivier
f3eac2b3e9 1.9.0 2020-01-23 12:57:55 +00:00
Richard van der Hoff
6b7462a13f a bit of debugging for media storage providers (#6757)
* a bit of debugging for media storage providers

* changelog
2020-01-23 12:11:44 +00:00
Richard van der Hoff
5bd3cb7260 Minor fixes to user admin api (#6761)
* don't insist on a password (this is valid if you have an SSO login)
* fix reference to undefined `requester`
2020-01-23 12:03:58 +00:00
Andrew Morgan
04345338e1 Merge branch 'release-v1.9.0' into develop 2020-01-23 11:38:29 +00:00
Andrew Morgan
d31f5f4d89 Update admin room docs with correct endpoints (#6770) 2020-01-23 11:37:26 +00:00
Andrew Morgan
ce84dd9e20 Remove unnecessary abstractions in admin handler (#6751) 2020-01-22 15:09:57 +00:00
Brendan Abolivier
33f7e5ce2a Fixup warning about workers changes 2020-01-22 14:49:21 +00:00
Brendan Abolivier
91085ef49e Add deprecation headers 2020-01-22 14:30:22 +00:00
Brendan Abolivier
ffa637050d Fixup changelog 2020-01-22 14:19:23 +00:00
Brendan Abolivier
0d0f32bc53 1.9.0rc1 2020-01-22 14:03:46 +00:00
Andrew Morgan
90a28fb475 Admin API to list, filter and sort rooms (#6720) 2020-01-22 13:36:43 +00:00
Brendan Abolivier
ae6cf586b0 Merge pull request #6764 from matrix-org/babolivier/fix-thumbnail
Fix typo in _select_thumbnail
2020-01-22 13:21:00 +00:00
Brendan Abolivier
6ae0c8db33 Lint + changelog 2020-01-22 12:38:18 +00:00
Brendan Abolivier
d9a8728b11 Remove unused import 2020-01-22 12:30:49 +00:00
Brendan Abolivier
67aa18e8dc Add tests for thumbnailing 2020-01-22 12:28:07 +00:00
Brendan Abolivier
ed83c3a018 Fix typo in _select_thumbnail 2020-01-22 12:27:42 +00:00
Andrew Morgan
aa9b00fb2f Fix and add test to deprecated quarantine media admin api (#6756) 2020-01-22 11:05:50 +00:00
Neil Johnson
5e52d8563b Allow monthly active user limiting support for worker mode, fixes #4639. (#6742) 2020-01-22 11:05:14 +00:00
Erik Johnston
5d7a6ad223 Allow streaming cache invalidate all to workers. (#6749) 2020-01-22 10:37:00 +00:00
Erik Johnston
2093f83ea0 Remove unused CI docker compose files (#6754)
These now exist in the pipelines repo.
2020-01-22 10:36:48 +00:00
Ivan Vilata-i-Balaguer
837f62266b Avoid attribute error when password_config present but empty (#6753)
The old statement returned `None` for such a `password_config` (like the one
created on first run), thus retrieval of the `pepper` key failed with
`AttributeError`.

Fixes #5315

Signed-off-by: Ivan Vilata i Balaguer <ivan@selidor.net>
2020-01-22 07:32:52 +00:00
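The shape of the fix is a common idiom (shown here with PyYAML, illustratively): an empty `password_config:` block parses to `None`, so it needs normalising before any key lookup:

```python
import yaml

# An empty block yields {"password_config": None}, not an empty dict.
config = yaml.safe_load("password_config:")

# `or {}` turns the None into a dict, avoiding the AttributeError.
password_config = config.get("password_config") or {}
pepper = password_config.get("pepper", "")
print(repr(pepper))  # ''
```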
Brendan Abolivier
07124d028d Port synapse_port_db to async/await (#6718)
* Raise an exception if there are pending background updates

So we return with a non-0 code

* Changelog

* Port synapse_port_db to async/await

* Port update_database to async/await

* Add version string to mocked homeservers

* Remove unused imports

* Convert overseen bits to async/await

* Fixup logging contexts

* Fix imports

* Add a way to print an error without raising an exception

* Incorporate review
2020-01-21 19:04:58 +00:00
Erik Johnston
0e68760078 Add a DeltaState to track changes to be made to current state (#6716) 2020-01-20 18:07:20 +00:00
Erik Johnston
b0a66ab83c Fixup synapse.rest to pass mypy (#6732) 2020-01-20 17:38:21 +00:00
Erik Johnston
74b74462f1 Fix /events/:event_id deprecated API. (#6731) 2020-01-20 17:38:09 +00:00
Erik Johnston
0f6e525be3 Fixup synapse.api to pass mypy (#6733) 2020-01-20 17:34:13 +00:00
Erik Johnston
ceecedc68b Fix changing password via user admin API. (#6730) 2020-01-20 17:23:59 +00:00
Andrew Morgan
e9e066055f Fix empty account_validity config block (#6747) 2020-01-20 16:21:59 +00:00
Andrew Morgan
351fdfede6 Update changelog.d/6747.bugfix
Co-Authored-By: Erik Johnston <erik@matrix.org>
2020-01-20 15:58:44 +00:00
Erik Johnston
2f23eb27b3 Revert "Newsfile"
This reverts commit 11c23af465.
2020-01-20 15:12:58 +00:00
Erik Johnston
11c23af465 Newsfile 2020-01-20 15:11:38 +00:00
Andrew Morgan
026f4bdf3c Add changelog 2020-01-20 14:12:21 +00:00
Andrew Morgan
198d52da3a Fix empty account_validity config block 2020-01-20 14:05:29 +00:00
Brendan Abolivier
a17f64361c Add more logging around message retention policies support (#6717)
So we can debug issues like #6683 more easily
2020-01-17 20:51:44 +00:00
Erik Johnston
5909751936 Fix up changelog 2020-01-17 15:13:27 +00:00
Richard van der Hoff
0b885d62ef bump version to v1.9.0.dev2 2020-01-17 14:58:58 +00:00
Satsuki Yanagi
722b4f302d Fix syntax error in run_upgrade for schema 57 (#6728)
Fix #6727
Related #6655

Co-authored-by: Erik Johnston <erikj@jki.re>
2020-01-17 14:30:35 +00:00
Brendan Abolivier
3b72bb780a Merge pull request #6714 from matrix-org/babolivier/retention_select_event
Fix instantiation of message retention purge jobs
2020-01-17 14:23:51 +00:00
Richard van der Hoff
1dee1e900b bump version to v1.9.0.dev1 2020-01-17 10:44:12 +00:00
Richard van der Hoff
59dc87c618 Merge pull request #6724 from matrix-org/rav/log_saml_attributes
Log saml assertions rather than the whole response
2020-01-17 10:33:24 +00:00
Richard van der Hoff
2b6a77fcde Delegate remote_user_id mapping to the saml mapping provider (#6723)
Turns out that figuring out a remote user id for the SAML user isn't quite as obvious as it seems. Factor it out to the SamlMappingProvider so that it's easy to control.
2020-01-17 10:32:47 +00:00
Erik Johnston
a8a50f5b57 Wake up transaction queue when remote server comes back online (#6706)
This will be used to retry outbound transactions to a remote server if
we think it might have come back up.
2020-01-17 10:27:19 +00:00
Richard van der Hoff
5ce0b17e38 Clarify the account_validity and email sections of the sample configuration. (#6685)
Generally try to make this more comprehensible, and make it match the
conventions.

I've removed the documentation for all the settings which allow you to change
the names of the template files, because I can't really see why they are
useful.
2020-01-17 10:04:15 +00:00
Richard van der Hoff
95c5b9bfb3 changelog 2020-01-16 22:29:06 +00:00
Richard van der Hoff
acc7820574 Log saml assertions rather than the whole response
... since the whole response is huge.

We even need to break up the assertions, since Kibana otherwise truncates them.
2020-01-16 22:26:34 +00:00
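A sketch of what chunked logging of an assertion might look like (hypothetical helper, not Synapse's actual code):

```python
import logging

logger = logging.getLogger(__name__)

def log_assertion(text, chunk_size=1000):
    # Emit the assertion in fixed-size parts so log pipelines that
    # truncate long lines still keep all of the content.
    for n, start in enumerate(range(0, len(text), chunk_size), start=1):
        logger.info("SAML assertion part %d: %s", n, text[start:start + chunk_size])
```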
Richard van der Hoff
14d8f342d5 move batch_iter to a separate module 2020-01-16 22:25:32 +00:00
Brendan Abolivier
4fb3cb208a Precise changelog 2020-01-16 20:27:07 +00:00
Brendan Abolivier
dac148341b Fixup diff 2020-01-16 20:25:09 +00:00
Brendan Abolivier
842c2cfbf1 Remove get_room_event_after_stream_ordering entirely 2020-01-16 20:24:17 +00:00
Erik Johnston
d386f2f339 Add StateMap type alias (#6715) 2020-01-16 13:31:22 +00:00
Brendan Abolivier
e601f35d3b Lint 2020-01-16 09:55:11 +00:00
Andrew Morgan
7b14c4a018 Add tips for the changelog to the pull request template (#6663) 2020-01-16 09:46:36 +00:00
Neil Johnson
38e0e59f42 Add org.matrix.e2e_cross_signing to unstable_features in /versions as per MSC1756 (#6712) 2020-01-16 09:46:14 +00:00
Erik Johnston
48c3a96886 Port synapse.replication.tcp to async/await (#6666)
* Port synapse.replication.tcp to async/await

* Newsfile

* Correctly document type of on_<FOO> functions as async

* Don't be overenthusiastic with the asyncing....
2020-01-16 09:16:12 +00:00
Brendan Abolivier
48e57a6452 Rename changelog 2020-01-15 19:40:52 +00:00
Brendan Abolivier
914e73cdd9 Changelog 2020-01-15 19:36:19 +00:00
Brendan Abolivier
066b9f52b8 Correctly order when selecting before stream ordering 2020-01-15 19:32:47 +00:00
Brendan Abolivier
8363588237 Fix typo 2020-01-15 19:13:22 +00:00
Brendan Abolivier
855af069a4 Fix instantiation of message retention purge jobs
When figuring out which topological token to start a purge job at, we
need to do the following:

1. Figure out a timestamp before which events will be purged
2. Select the first stream ordering after that timestamp
3. Select info about the first event after that stream ordering
4. Build a topological token from that info

In some situations (e.g. quiet rooms with a short max_lifetime), there
might not be an event after the stream ordering at step 3, therefore we
abort the purge with the error `No event found`. To mitigate that, this
patch fetches the first event _before_ the stream ordering, instead of
after.
2020-01-15 18:56:18 +00:00
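A minimal sketch of the corrected lookup at step 3, using plain lists in place of Synapse's storage layer:

```python
import bisect

def event_for_purge(stream_orderings, events, target):
    """stream_orderings is sorted ascending; events[i] corresponds to
    stream_orderings[i]. Picks the last event at or before `target`
    rather than requiring one after it."""
    idx = bisect.bisect_right(stream_orderings, target) - 1
    if idx < 0:
        return None  # nothing old enough to purge from
    return events[idx]

# In a quiet room there may be no event *after* the target stream
# ordering, but there is usually one before it, so the purge no longer
# aborts with "No event found".
print(event_for_purge([10, 20, 30], ["a", "b", "c"], 25))  # "b"
```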
Erik Johnston
19a1aac48c Fix purge_room admin API (#6711) 2020-01-15 18:13:47 +00:00
Andrew Morgan
edc244eec4 Remove duplicate session check in web fallback servlet (#6702) 2020-01-15 18:05:18 +00:00
Richard van der Hoff
608bf7d741 Merge pull request #6688 from matrix-org/rav/module_api_extensions
Cleanups and additions to the module API
2020-01-15 16:43:13 +00:00
Richard van der Hoff
107f256cd8 Merge branch 'develop' into rav/module_api_extensions 2020-01-15 16:00:24 +00:00
Richard van der Hoff
8f5d7302ac Implement RedirectException (#6687)
Allow REST endpoint implementations to raise a RedirectException, which will
redirect the user's browser to a given location.
2020-01-15 15:58:55 +00:00
Erik Johnston
28c98e51ff Add local_current_membership table (#6655)
Currently we rely on `current_state_events` to figure out what rooms a
user was in and their last membership event in there. However, if the
server leaves the room then the table may be cleaned up and that
information is lost. So lets add a table that separately holds that
information.
2020-01-15 14:59:33 +00:00
Erik Johnston
b5ce7f5874 Process EDUs in parallel with PDUs. (#6697)
This means that things like to-device messages don't get blocked behind
processing PDUs, which can potentially take *ages*.
2020-01-14 14:08:35 +00:00
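Sketched in miniature (illustrative only), the idea is to let EDU handling proceed concurrently instead of queueing it behind the PDUs:

```python
import asyncio

async def handle_transaction(pdus, edus, process_pdus, process_edu):
    # PDUs must still be processed in order, but each EDU (e.g. a
    # to-device message) can proceed without waiting for them.
    await asyncio.gather(process_pdus(pdus), *(process_edu(e) for e in edus))
```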
Erik Johnston
e8b68a4e4b Fixup synapse.replication to pass mypy checks (#6667) 2020-01-14 14:08:06 +00:00
Andrew Morgan
1177d3f3a3 Quarantine media by ID or user ID (#6681) 2020-01-13 18:10:43 +00:00
Richard van der Hoff
47f4f493f0 Document more supported endpoints for workers (#6698) 2020-01-13 15:32:02 +00:00
Richard van der Hoff
326c893d24 Kill off RegistrationError (#6691)
This is pretty pointless. Let's just use SynapseError.
2020-01-13 12:48:22 +00:00
Richard van der Hoff
2d07c73777 Don't assign numeric IDs for empty usernames (#6690)
Fix a bug where we would assign a numeric user ID if somebody tried registering
with an empty username
2020-01-13 12:47:30 +00:00
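A sketch of the corrected behaviour (illustrative function, not Synapse's registration code):

```python
def choose_localpart(requested, generate_numeric_id):
    # Only an *omitted* username should trigger numeric ID generation;
    # an empty string is invalid input and should be rejected.
    if requested is None:
        return generate_numeric_id()
    if requested == "":
        raise ValueError("username must not be empty")
    return requested
```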
Richard van der Hoff
3cfac9593c Merge pull request #6689 from matrix-org/rav/saml_mapping_provider_updates
Updates to the SAML mapping provider API
2020-01-13 12:44:55 +00:00
Richard van der Hoff
8039685051 Allow additional_resources to implement Resource directly (#6686)
AdditionalResource really doesn't add any value, and it gets in the way for
resources which want to support child resources or the like. So, if the
resource object already implements the IResource interface, don't bother
wrapping it.
2020-01-13 12:42:44 +00:00
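The change described above amounts to something like this (hypothetical wrapper name; assumes Twisted is installed):

```python
from twisted.web.resource import IResource

def maybe_wrap(resource, make_additional_resource):
    # Only wrap when the module's resource doesn't already provide
    # Twisted's IResource interface; a native IResource can support
    # child resources and the like directly.
    if IResource.providedBy(resource):
        return resource
    return make_additional_resource(resource)
```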
Richard van der Hoff
feee819973 Fix exceptions on requests for non-ascii urls (#6682)
Fixes #6402
2020-01-13 12:41:51 +00:00
Richard van der Hoff
da4e52544e comment for run_in_background 2020-01-12 21:53:47 +00:00
Richard van der Hoff
d56e95ea8b changelog 2020-01-12 21:42:15 +00:00
Richard van der Hoff
dc69a1cf43 Pass client redirect URL into SAML mapping providers 2020-01-12 21:40:49 +00:00
Richard van der Hoff
47e63cc67a Pass the module_api into the SamlMappingProvider
... for consistency with other modules, and because we'll need it sooner or
later and it will be a pain to introduce later.
2020-01-12 21:40:49 +00:00
Richard van der Hoff
96ed33739a changelog 2020-01-12 21:36:10 +00:00
Richard van der Hoff
01243b98e1 Handle config not being set for synapse plugin modules
Some modules don't need any config, so having to define a `config` property
just to keep the loader happy is a bit annoying.
2020-01-12 21:34:36 +00:00
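A sketch of the loader behaviour described above (hypothetical names, modelled loosely on how plugin modules get loaded):

```python
import importlib

def load_module(provider):
    module_name, class_name = provider["module"].rsplit(".", 1)
    provider_class = getattr(importlib.import_module(module_name), class_name)
    # Default to an empty config so modules that take no options don't
    # have to define a `config` stanza just to keep the loader happy.
    module_config = provider.get("config") or {}
    if hasattr(provider_class, "parse_config"):
        module_config = provider_class.parse_config(module_config)
    return provider_class, module_config
```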
Richard van der Hoff
473d3801b6 Cleanups and additions to the module API
Add some useful things, such as error types and logcontext handling, to the
API.

Make `hs` a private member to dissuade people from using it (hopefully
they aren't already).

Add a couple of new methods (`record_user_external_id` and
`generate_short_term_login_token`).
2020-01-12 21:31:44 +00:00
Richard van der Hoff
1d16f5ea0e Merge pull request #6675 from matrix-org/rav/die_sqlite37_die_die_die
Refuse to start if sqlite is older than 3.11.0
2020-01-10 12:17:22 +00:00
Richard van der Hoff
937dea42e7 update install notes for CentOS 2020-01-09 18:11:04 +00:00
Richard van der Hoff
c3843fd075 changelog 2020-01-09 18:11:04 +00:00
Richard van der Hoff
bf46821180 Refuse to start if sqlite is older than 3.11.0 2020-01-09 18:11:04 +00:00
Richard van der Hoff
e48ba84e0b Check postgres version in check_database
this saves doing it on each connection, and will allow us to pass extra options
in.
2020-01-09 18:05:59 +00:00
Richard van der Hoff
e97d1cf001 Modify check_database to take a connection rather than a cursor
We might not need the cursor at all.
2020-01-09 18:05:50 +00:00
Erik Johnston
645b1f0ea1 Merge branch 'master' of github.com:matrix-org/synapse into develop 2020-01-09 17:14:02 +00:00
Erik Johnston
c2ba994dbb Add note about log_file no longer be accepted (#6674) 2020-01-09 17:13:36 +00:00
Manuel Stahl
d2906fe666 Allow admin users to create or modify users without a shared secret (#6495)
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
2020-01-09 13:31:00 +00:00
Erik Johnston
d773290cb1 Merge branch 'master' into develop 2020-01-09 13:25:48 +00:00
Erik Johnston
7c232bd98b Merge pull request #6664 from matrix-org/erikj/media_admin_apis
Fix media repo admin APIs when using a media worker.
2020-01-08 15:50:06 +00:00
Erik Johnston
d74054afda Shuffle the code 2020-01-08 14:57:45 +00:00
Erik Johnston
bca3455b38 Comments 2020-01-08 14:27:35 +00:00
Erik Johnston
187dc6ad02 Do not rely on streaming events, as media repo doesn't 2020-01-08 14:24:28 +00:00
Brendan Abolivier
e16521faab Merge pull request #6665 from matrix-org/babolivier/retention_doc_typo
Fix typo in message retention policies doc
2020-01-08 13:57:02 +00:00
Erik Johnston
4e2a072a05 Newsfile 2020-01-08 13:28:19 +00:00
Brendan Abolivier
32ad2a3349 Changelog 2020-01-08 13:28:12 +00:00
Brendan Abolivier
3889fcd9d7 Fix typo in message retention policies doc 2020-01-08 13:27:29 +00:00
Richard van der Hoff
b064a41291 Merge remote-tracking branch 'origin/release-v1.8.0' into develop 2020-01-08 13:27:17 +00:00
Erik Johnston
1adf27c82a Import RoomStore in media worker to fix admin APIs 2020-01-08 13:26:20 +00:00
Erik Johnston
3cf7d6d5b6 Move media admin store functions to worker store 2020-01-08 13:26:20 +00:00
Brendan Abolivier
cff1cb8685 Merge pull request #6624 from matrix-org/babolivier/retention_doc
Add complete documentation of the message retention policies support
2020-01-08 11:24:47 +00:00
Fabian Meyer
dd57715de2 contrib/docker-compose: fixing mount that overrides containers' /etc (#6656)
The mount in the form of ./matrix-config:/etc overwrites the contents of the container's /etc folder. Since all valid CA certificates are stored in /etc, synapse.push.httppusher, for example, cannot validate the certificate from matrix.org.
2020-01-08 07:25:05 +00:00
Matthew Hodgson
91718b3f23 typo 2020-01-07 15:46:04 +00:00
Erik Johnston
be29ed7ad8 Correctly proxy remote group HTTP errors. (#6654)
e.g. if the remote returns a 404 then that shouldn't be treated as an error
but should be proxied through.
2020-01-07 15:36:41 +00:00
Brendan Abolivier
2b6b7f482a Merge pull request #6621 from matrix-org/babolivier/purge_job_config_typo
Fix a typo in the purge jobs configuration example
2020-01-07 16:17:40 +01:00
Brendan Abolivier
3675fb9bc6 Fix reference 2020-01-07 15:15:16 +00:00
Brendan Abolivier
7ba98a2874 Incorporate review 2020-01-07 15:14:33 +00:00
Brendan Abolivier
4be582d7c8 Merge branch 'develop' into babolivier/retention_doc 2020-01-07 15:07:19 +00:00
Brendan Abolivier
01fbd95736 Apply suggestions from code review
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-01-07 15:59:38 +01:00
Brendan Abolivier
03edfc5850 Update changelog.d/6624.doc
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-01-07 15:59:05 +01:00
Brendan Abolivier
391fb47791 Reword 2020-01-07 14:54:32 +00:00
Brendan Abolivier
3a86477162 Change the example from 5min to 12h
Having a purge job run every 5 minutes is probably not something we want to advise admins to do as a sort-of default.
2020-01-07 14:53:07 +00:00
Brendan Abolivier
b7dec300b7 Fix vacuum instructions for sqlite 2020-01-03 13:51:59 +01:00
Brendan Abolivier
51b8a21f0c Rename changelog 2020-01-03 13:49:12 +01:00
Brendan Abolivier
9279a2c4e4 Add a complete documentation of the message retention policies support 2020-01-03 13:45:03 +01:00
Brendan Abolivier
9c59bc59c8 Changelog 2020-01-03 13:00:32 +01:00
Brendan Abolivier
dd2954f78d Update sample config 2020-01-03 12:58:12 +01:00
Brendan Abolivier
4efe1d4d3f Fix a typo in the purge jobs configuration example 2020-01-03 12:57:24 +01:00
307 changed files with 14208 additions and 7867 deletions

View File

@@ -1,22 +0,0 @@
version: '3.1'
services:
postgres:
image: postgres:9.5
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.5
depends_on:
- postgres
env_file: .env
environment:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /src
volumes:
- ..:/src

View File

@@ -1,22 +0,0 @@
version: '3.1'
services:
postgres:
image: postgres:11
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.7
depends_on:
- postgres
env_file: .env
environment:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /src
volumes:
- ..:/src

View File

@@ -1,22 +0,0 @@
version: '3.1'
services:
postgres:
image: postgres:9.5
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.7
depends_on:
- postgres
env_file: .env
environment:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /src
volumes:
- ..:/src

View File

@@ -0,0 +1,13 @@
#!/bin/bash
# this script is run by buildkite in a plain `xenial` container; it installs the
# minimal requirements for tox and hands over to the py35-old tox environment.
set -ex
apt-get update
apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev tox
export LANG="C.UTF-8"
exec tox -e py35-old,combine

View File

@@ -39,3 +39,5 @@ Server correctly handles incoming m.device_list_update
# this fails reliably with a torture level of 100 due to https://github.com/matrix-org/synapse/issues/6536
Outbound federation requests missing prev_events and then asks for /state_ids and resolves the state
Can get rooms/{roomId}/members at a given point

View File

@@ -3,6 +3,10 @@
<!-- Please read CONTRIBUTING.md before submitting your pull request -->
* [ ] Pull request is based on the develop branch
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#changelog)
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#changelog). The entry should:
- Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
- Use markdown where necessary, mostly for `code blocks`.
- End with either a period (.) or an exclamation mark (!).
- Start with a capital letter.
* [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#sign-off)
* [ ] Code style is correct (run the [linters](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#code-style))

View File

@@ -1,6 +1,460 @@
Synapse 1.12.1 (2020-04-02)
===========================
No significant changes since 1.12.1rc1.
Synapse 1.12.1rc1 (2020-03-31)
==============================
Bugfixes
--------
- Fix starting workers when federation sending not split out. ([\#7133](https://github.com/matrix-org/synapse/issues/7133)). Introduced in v1.12.0.
- Avoid importing `sqlite3` when using the postgres backend. Contributed by David Vo. ([\#7155](https://github.com/matrix-org/synapse/issues/7155)). Introduced in v1.12.0rc1.
- Fix a bug which could cause outbound federation traffic to stop working if a client uploaded an incorrect e2e device signature. ([\#7177](https://github.com/matrix-org/synapse/issues/7177)). Introduced in v1.11.0.
Synapse 1.12.0 (2020-03-23)
===========================
No significant changes since 1.12.0rc1.
Debian packages and Docker images are rebuilt using the latest versions of
dependency libraries, including Twisted 20.3.0. **Please see security advisory
below**.
Security advisory
-----------------
Synapse may be vulnerable to request-smuggling attacks when it is used with a
reverse-proxy. The vulnerabilities are fixed in Twisted 20.3.0, and are
described in
[CVE-2020-10108](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10108)
and
[CVE-2020-10109](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10109).
For a good introduction to this class of request-smuggling attacks, see
https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn.
We are not aware of these vulnerabilities being exploited in the wild, and
do not believe that they are exploitable with current versions of any reverse
proxies. Nevertheless, we recommend that all Synapse administrators ensure that
they have the latest versions of the Twisted library to ensure that their
installation remains secure.
* Administrators using the [`matrix.org` Docker
image](https://hub.docker.com/r/matrixdotorg/synapse/) or the [Debian/Ubuntu
packages from
`matrix.org`](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#matrixorg-packages)
should ensure that they have version 1.12.0 installed: these images include
Twisted 20.3.0.
* Administrators who have [installed Synapse from
source](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#installing-from-source)
should upgrade Twisted within their virtualenv by running:
```sh
<path_to_virtualenv>/bin/pip install 'Twisted>=20.3.0'
```
* Administrators who have installed Synapse from distribution packages should
consult the information from their distributions.
The `matrix.org` Synapse instance was not vulnerable to these vulnerabilities.
Advance notice of change to the default `git` branch for Synapse
----------------------------------------------------------------
Currently, the default `git` branch for Synapse is `master`, which tracks the
latest release.
After the release of Synapse 1.13.0, we intend to change this default to
`develop`, which is the development tip. This is more consistent with common
practice and modern `git` usage.
Although we try to keep `develop` in a stable state, there may be occasions
where regressions creep in. Developers and distributors who have scripts which
run builds using the default branch of `Synapse` should therefore consider
pinning their scripts to `master`.
Synapse 1.12.0rc1 (2020-03-19)
==============================
Features
--------
- Changes related to room alias management ([MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432)):
- Publishing/removing a room from the room directory now requires the user to have a power level capable of modifying the canonical alias, instead of the room aliases. ([\#6965](https://github.com/matrix-org/synapse/issues/6965))
- Validate the `alt_aliases` property of canonical alias events. ([\#6971](https://github.com/matrix-org/synapse/issues/6971))
- Users with a power level sufficient to modify the canonical alias of a room can now delete room aliases. ([\#6986](https://github.com/matrix-org/synapse/issues/6986))
- Implement updated authorization rules and redaction rules for aliases events, from [MSC2261](https://github.com/matrix-org/matrix-doc/pull/2261) and [MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432). ([\#7037](https://github.com/matrix-org/synapse/issues/7037))
- Stop sending m.room.aliases events during room creation and upgrade. ([\#6941](https://github.com/matrix-org/synapse/issues/6941))
- Synapse no longer uses room alias events to calculate room names for push notifications. ([\#6966](https://github.com/matrix-org/synapse/issues/6966))
- The room list endpoint no longer returns a list of aliases. ([\#6970](https://github.com/matrix-org/synapse/issues/6970))
- Remove special handling of aliases events from [MSC2260](https://github.com/matrix-org/matrix-doc/pull/2260) added in v1.10.0rc1. ([\#7034](https://github.com/matrix-org/synapse/issues/7034))
- Expose the `synctl`, `hash_password` and `generate_config` commands in the snapcraft package. Contributed by @devec0. ([\#6315](https://github.com/matrix-org/synapse/issues/6315))
- Check that server_name is correctly set before running database updates. ([\#6982](https://github.com/matrix-org/synapse/issues/6982))
- Break down monthly active users by `appservice_id` and emit via Prometheus. ([\#7030](https://github.com/matrix-org/synapse/issues/7030))
- Render a configurable and comprehensible error page if something goes wrong during the SAML2 authentication process. ([\#7058](https://github.com/matrix-org/synapse/issues/7058), [\#7067](https://github.com/matrix-org/synapse/issues/7067))
- Add an optional parameter to control whether other sessions are logged out when a user's password is modified. ([\#7085](https://github.com/matrix-org/synapse/issues/7085))
- Add prometheus metrics for the number of active pushers. ([\#7103](https://github.com/matrix-org/synapse/issues/7103), [\#7106](https://github.com/matrix-org/synapse/issues/7106))
- Improve performance when making HTTPS requests to sygnal, sydent, etc, by sharing the SSL context object between connections. ([\#7094](https://github.com/matrix-org/synapse/issues/7094))
Bugfixes
--------
- When a user's profile is updated via the admin API, also generate a displayname/avatar update for that user in each room. ([\#6572](https://github.com/matrix-org/synapse/issues/6572))
- Fix a couple of bugs in email configuration handling. ([\#6962](https://github.com/matrix-org/synapse/issues/6962))
- Fix an issue affecting worker-based deployments where replication would stop working, necessitating a full restart, after joining a large room. ([\#6967](https://github.com/matrix-org/synapse/issues/6967))
- Fix `duplicate key` error which was logged when rejoining a room over federation. ([\#6968](https://github.com/matrix-org/synapse/issues/6968))
- Prevent user from setting 'deactivated' to anything other than a bool on the v2 PUT /users Admin API. ([\#6990](https://github.com/matrix-org/synapse/issues/6990))
- Fix py35-old CI by using native tox package. ([\#7018](https://github.com/matrix-org/synapse/issues/7018))
- Fix a bug causing `org.matrix.dummy_event` to be included in responses from `/sync`. ([\#7035](https://github.com/matrix-org/synapse/issues/7035))
- Fix a bug that renders UTF-8 text files incorrectly when loaded from media. Contributed by @TheStranjer. ([\#7044](https://github.com/matrix-org/synapse/issues/7044))
- Fix a bug that would cause Synapse to respond with an error about event visibility if a client tried to request the state of a room at a given token. ([\#7066](https://github.com/matrix-org/synapse/issues/7066))
- Repair a data-corruption issue which was introduced in Synapse 1.10, and fixed in Synapse 1.11, and which could cause `/sync` to return with 404 errors about missing events and unknown rooms. ([\#7070](https://github.com/matrix-org/synapse/issues/7070))
- Fix a bug causing account validity renewal emails to be sent even if the feature is turned off in some cases. ([\#7074](https://github.com/matrix-org/synapse/issues/7074))
Improved Documentation
----------------------
- Updated CentOS8 install instructions. Contributed by Richard Kellner. ([\#6925](https://github.com/matrix-org/synapse/issues/6925))
- Fix `POSTGRES_INITDB_ARGS` in the `contrib/docker/docker-compose.yml` example docker-compose configuration. ([\#6984](https://github.com/matrix-org/synapse/issues/6984))
- Change date in [INSTALL.md](./INSTALL.md#tls-certificates) for last date of getting TLS certificates to November 2019. ([\#7015](https://github.com/matrix-org/synapse/issues/7015))
- Document that the fallback auth endpoints must be routed to the same worker node as the register endpoints. ([\#7048](https://github.com/matrix-org/synapse/issues/7048))
Deprecations and Removals
-------------------------
- Remove the unused query_auth federation endpoint per [MSC2451](https://github.com/matrix-org/matrix-doc/pull/2451). ([\#7026](https://github.com/matrix-org/synapse/issues/7026))
Internal Changes
----------------
- Add type hints to `logging/context.py`. ([\#6309](https://github.com/matrix-org/synapse/issues/6309))
- Add some clarifications to `README.md` in the database schema directory. ([\#6615](https://github.com/matrix-org/synapse/issues/6615))
- Refactoring work in preparation for changing the event redaction algorithm. ([\#6874](https://github.com/matrix-org/synapse/issues/6874), [\#6875](https://github.com/matrix-org/synapse/issues/6875), [\#6983](https://github.com/matrix-org/synapse/issues/6983), [\#7003](https://github.com/matrix-org/synapse/issues/7003))
- Improve performance of v2 state resolution for large rooms. ([\#6952](https://github.com/matrix-org/synapse/issues/6952), [\#7095](https://github.com/matrix-org/synapse/issues/7095))
- Reduce time spent doing GC, by freezing objects on startup. ([\#6953](https://github.com/matrix-org/synapse/issues/6953))
- Minor performance fixes to `get_auth_chain_ids`. ([\#6954](https://github.com/matrix-org/synapse/issues/6954))
- Don't record remote cross-signing keys in the `devices` table. ([\#6956](https://github.com/matrix-org/synapse/issues/6956))
- Use flake8-comprehensions to enforce good hygiene of list/set/dict comprehensions. ([\#6957](https://github.com/matrix-org/synapse/issues/6957))
- Merge worker apps together. ([\#6964](https://github.com/matrix-org/synapse/issues/6964), [\#7002](https://github.com/matrix-org/synapse/issues/7002), [\#7055](https://github.com/matrix-org/synapse/issues/7055), [\#7104](https://github.com/matrix-org/synapse/issues/7104))
- Remove redundant `store_room` call from `FederationHandler._process_received_pdu`. ([\#6979](https://github.com/matrix-org/synapse/issues/6979))
- Update warning for incorrect database collation/ctype to include link to documentation. ([\#6985](https://github.com/matrix-org/synapse/issues/6985))
- Add some type annotations to the database storage classes. ([\#6987](https://github.com/matrix-org/synapse/issues/6987))
- Port `synapse.handlers.presence` to async/await. ([\#6991](https://github.com/matrix-org/synapse/issues/6991), [\#7019](https://github.com/matrix-org/synapse/issues/7019))
- Add some type annotations to the federation base & client classes. ([\#6995](https://github.com/matrix-org/synapse/issues/6995))
- Port `synapse.rest.keys` to async/await. ([\#7020](https://github.com/matrix-org/synapse/issues/7020))
- Add a type check to `is_verified` when processing room keys. ([\#7045](https://github.com/matrix-org/synapse/issues/7045))
- Add type annotations and comments to the auth handler. ([\#7063](https://github.com/matrix-org/synapse/issues/7063))
Synapse 1.11.1 (2020-03-03)
===========================
This release includes a security fix impacting installations using Single Sign-On (i.e. SAML2 or CAS) for authentication. Administrators of such installations are encouraged to upgrade as soon as possible.
The release also includes fixes for a couple of other bugs.
Bugfixes
--------
- Add a confirmation step to the SSO login flow before redirecting users to the redirect URL. ([b2bd54a2](https://github.com/matrix-org/synapse/commit/b2bd54a2e31d9a248f73fadb184ae9b4cbdb49f9), [65c73cdf](https://github.com/matrix-org/synapse/commit/65c73cdfec1876a9fec2fd2c3a74923cd146fe0b), [a0178df1](https://github.com/matrix-org/synapse/commit/a0178df10422a76fd403b82d2b2a4ed28a9a9d1e))
- Fixed set a user as an admin with the admin API `PUT /_synapse/admin/v2/users/<user_id>`. Contributed by @dklimpel. ([\#6910](https://github.com/matrix-org/synapse/issues/6910))
- Fix bug introduced in Synapse 1.11.0 which sometimes caused errors when joining rooms over federation, with `'coroutine' object has no attribute 'event_id'`. ([\#6996](https://github.com/matrix-org/synapse/issues/6996))
Synapse 1.11.0 (2020-02-21)
===========================
Improved Documentation
----------------------
- Small grammatical fixes to the ACME v1 deprecation notice. ([\#6944](https://github.com/matrix-org/synapse/issues/6944))
Synapse 1.11.0rc1 (2020-02-19)
==============================
Features
--------
- Admin API to add or modify threepids of user accounts. ([\#6769](https://github.com/matrix-org/synapse/issues/6769))
- Limit the number of events that can be requested by the backfill federation API to 100. ([\#6864](https://github.com/matrix-org/synapse/issues/6864))
- Add ability to run some group APIs on workers. ([\#6866](https://github.com/matrix-org/synapse/issues/6866))
- Reject device display names over 100 characters in length to prevent abuse. ([\#6882](https://github.com/matrix-org/synapse/issues/6882))
- Add ability to route federation user device queries to workers. ([\#6873](https://github.com/matrix-org/synapse/issues/6873))
- The result of a user directory search can now be filtered via the spam checker. ([\#6888](https://github.com/matrix-org/synapse/issues/6888))
- Implement new `GET /_matrix/client/unstable/org.matrix.msc2432/rooms/{roomId}/aliases` endpoint as per [MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432). ([\#6939](https://github.com/matrix-org/synapse/issues/6939), [\#6948](https://github.com/matrix-org/synapse/issues/6948), [\#6949](https://github.com/matrix-org/synapse/issues/6949))
- Stop sending `m.room.alias` events when adding / removing aliases. Check `alt_aliases` in the latest `m.room.canonical_alias` event when deleting an alias. ([\#6904](https://github.com/matrix-org/synapse/issues/6904))
- Change the default power levels of invites, tombstones and server ACLs for new rooms. ([\#6834](https://github.com/matrix-org/synapse/issues/6834))
Bugfixes
--------
- Fixed third party event rules function `on_create_room`'s return value being ignored. ([\#6781](https://github.com/matrix-org/synapse/issues/6781))
- Allow URL-encoded User IDs on `/_synapse/admin/v2/users/<user_id>[/admin]` endpoints. Thanks to @NHAS for reporting. ([\#6825](https://github.com/matrix-org/synapse/issues/6825))
- Fix Synapse refusing to start if `federation_certificate_verification_whitelist` option is blank. ([\#6849](https://github.com/matrix-org/synapse/issues/6849))
- Fix errors from logging in the purge jobs related to the message retention policies support. ([\#6945](https://github.com/matrix-org/synapse/issues/6945))
- Return a 404 instead of 200 for querying information of a non-existent user through the admin API. ([\#6901](https://github.com/matrix-org/synapse/issues/6901))
Updates to the Docker image
---------------------------
- The deprecated "generate-config-on-the-fly" mode is no longer supported. ([\#6918](https://github.com/matrix-org/synapse/issues/6918))
Improved Documentation
----------------------
- Add details of PR merge strategy to contributing docs. ([\#6846](https://github.com/matrix-org/synapse/issues/6846))
- Spell out that the last event sent to a room won't be deleted by a purge. ([\#6891](https://github.com/matrix-org/synapse/issues/6891))
- Update Synapse's documentation to warn about the deprecation of ACME v1. ([\#6905](https://github.com/matrix-org/synapse/issues/6905), [\#6907](https://github.com/matrix-org/synapse/issues/6907), [\#6909](https://github.com/matrix-org/synapse/issues/6909))
- Add documentation for the spam checker. ([\#6906](https://github.com/matrix-org/synapse/issues/6906))
- Fix worker docs to correctly point to the `/publicised_groups` API. ([\#6938](https://github.com/matrix-org/synapse/issues/6938))
- Clean up and update docs on setting up federation. ([\#6940](https://github.com/matrix-org/synapse/issues/6940))
- Add a warning about indentation to generated configuration files. ([\#6920](https://github.com/matrix-org/synapse/issues/6920))
- Databases created using the compose file in contrib/docker will now always have correct encoding and locale settings. Contributed by Fridtjof Mund. ([\#6921](https://github.com/matrix-org/synapse/issues/6921))
- Update pip install directions in readme to avoid error when using zsh. ([\#6855](https://github.com/matrix-org/synapse/issues/6855))
Deprecations and Removals
-------------------------
- Remove `m.lazy_load_members` from `unstable_features` since lazy loading is in the stable Client-Server API version r0.5.0. ([\#6877](https://github.com/matrix-org/synapse/issues/6877))
Internal Changes
----------------
- Add type hints to `SyncHandler`. ([\#6821](https://github.com/matrix-org/synapse/issues/6821))
- Refactoring work in preparation for changing the event redaction algorithm. ([\#6823](https://github.com/matrix-org/synapse/issues/6823), [\#6827](https://github.com/matrix-org/synapse/issues/6827), [\#6854](https://github.com/matrix-org/synapse/issues/6854), [\#6856](https://github.com/matrix-org/synapse/issues/6856), [\#6857](https://github.com/matrix-org/synapse/issues/6857), [\#6858](https://github.com/matrix-org/synapse/issues/6858))
- Fix stacktraces when using `ObservableDeferred` and async/await. ([\#6836](https://github.com/matrix-org/synapse/issues/6836))
- Port much of `synapse.handlers.federation` to async/await. ([\#6837](https://github.com/matrix-org/synapse/issues/6837), [\#6840](https://github.com/matrix-org/synapse/issues/6840))
- Populate `rooms.room_version` database column at startup, rather than in a background update. ([\#6847](https://github.com/matrix-org/synapse/issues/6847))
- Reduce amount we log at `INFO` level. ([\#6833](https://github.com/matrix-org/synapse/issues/6833), [\#6862](https://github.com/matrix-org/synapse/issues/6862))
- Remove unused `get_room_stats_state` method. ([\#6869](https://github.com/matrix-org/synapse/issues/6869))
- Add typing to `synapse.federation.sender` and port to async/await. ([\#6871](https://github.com/matrix-org/synapse/issues/6871))
- Refactor `_EventInternalMetadata` object to improve type safety. ([\#6872](https://github.com/matrix-org/synapse/issues/6872))
- Add an additional entry to the SyTest blacklist for worker mode. ([\#6883](https://github.com/matrix-org/synapse/issues/6883))
- Fix the use of sed in the linting scripts when using BSD sed. ([\#6887](https://github.com/matrix-org/synapse/issues/6887))
- Add type hints to the spam checker module. ([\#6915](https://github.com/matrix-org/synapse/issues/6915))
- Convert the directory handler tests to use HomeserverTestCase. ([\#6919](https://github.com/matrix-org/synapse/issues/6919))
- Increase DB/CPU perf of `_is_server_still_joined` check. ([\#6936](https://github.com/matrix-org/synapse/issues/6936))
- Tiny optimisation for incoming HTTP request dispatch. ([\#6950](https://github.com/matrix-org/synapse/issues/6950))
Synapse 1.10.1 (2020-02-17)
===========================
Bugfixes
--------
- Fix a bug introduced in Synapse 1.10.0 which would cause room state to be cleared in the database if Synapse was upgraded direct from 1.2.1 or earlier to 1.10.0. ([\#6924](https://github.com/matrix-org/synapse/issues/6924))
Synapse 1.10.0 (2020-02-12)
===========================
**WARNING to client developers**: As of this release Synapse validates `client_secret` parameters in the Client-Server API as per the spec. See [\#6766](https://github.com/matrix-org/synapse/issues/6766) for details.
Updates to the Docker image
---------------------------
- Update the docker images to Alpine Linux 3.11. ([\#6897](https://github.com/matrix-org/synapse/issues/6897))
Synapse 1.10.0rc5 (2020-02-11)
==============================
Bugfixes
--------
- Fix the filtering introduced in 1.10.0rc3 to also apply to the state blocks returned by `/sync`. ([\#6884](https://github.com/matrix-org/synapse/issues/6884))
Synapse 1.10.0rc4 (2020-02-11)
==============================
This release candidate was built incorrectly and is superseded by 1.10.0rc5.
Synapse 1.10.0rc3 (2020-02-10)
==============================
Features
--------
- Filter out `m.room.aliases` from the CS API to mitigate abuse while a better solution is specced. ([\#6878](https://github.com/matrix-org/synapse/issues/6878))
Internal Changes
----------------
- Fix continuous integration failures with old versions of `pip`, which were introduced by a release of the `zipp` library. ([\#6880](https://github.com/matrix-org/synapse/issues/6880))
Synapse 1.10.0rc2 (2020-02-06)
==============================
Bugfixes
--------
- Fix an issue with cross-signing where device signatures were not sent to remote servers. ([\#6844](https://github.com/matrix-org/synapse/issues/6844))
- Fix to the unknown remote device detection which was introduced in 1.10.0rc1. ([\#6848](https://github.com/matrix-org/synapse/issues/6848))
Internal Changes
----------------
- Detect unexpected sender keys on remote encrypted events and resync device lists. ([\#6850](https://github.com/matrix-org/synapse/issues/6850))
Synapse 1.10.0rc1 (2020-01-31)
==============================
Features
--------
- Add experimental support for updated authorization rules for aliases events, from [MSC2260](https://github.com/matrix-org/matrix-doc/pull/2260). ([\#6787](https://github.com/matrix-org/synapse/issues/6787), [\#6790](https://github.com/matrix-org/synapse/issues/6790), [\#6794](https://github.com/matrix-org/synapse/issues/6794))
Bugfixes
--------
- Warn if postgres database has a non-C locale, as that can cause issues when upgrading locales (e.g. due to upgrading OS). ([\#6734](https://github.com/matrix-org/synapse/issues/6734))
- Minor fixes to `PUT /_synapse/admin/v2/users` admin api. ([\#6761](https://github.com/matrix-org/synapse/issues/6761))
- Validate `client_secret` parameter using the regex provided by the Client-Server API, temporarily allowing `:` characters for older clients. The `:` character will be removed in a future release. ([\#6767](https://github.com/matrix-org/synapse/issues/6767))
- Fix persisting redaction events that have been redacted (or otherwise don't have a redacts key). ([\#6771](https://github.com/matrix-org/synapse/issues/6771))
- Fix outbound federation request metrics. ([\#6795](https://github.com/matrix-org/synapse/issues/6795))
- Fix bug where querying a remote user's device keys that weren't cached resulted in only returning a single device. ([\#6796](https://github.com/matrix-org/synapse/issues/6796))
- Fix race in federation sender worker that delayed sending of device updates. ([\#6799](https://github.com/matrix-org/synapse/issues/6799), [\#6800](https://github.com/matrix-org/synapse/issues/6800))
- Fix bug where Synapse didn't invalidate cache of remote users' devices when Synapse left a room. ([\#6801](https://github.com/matrix-org/synapse/issues/6801))
- Fix waking up other workers when remote server is detected to have come back online. ([\#6811](https://github.com/matrix-org/synapse/issues/6811))
Improved Documentation
----------------------
- Clarify documentation related to `user_dir` and `federation_reader` workers. ([\#6775](https://github.com/matrix-org/synapse/issues/6775))
Internal Changes
----------------
- Record room versions in the `rooms` table. ([\#6729](https://github.com/matrix-org/synapse/issues/6729), [\#6788](https://github.com/matrix-org/synapse/issues/6788), [\#6810](https://github.com/matrix-org/synapse/issues/6810))
- Propagate cache invalidates from workers to other workers. ([\#6748](https://github.com/matrix-org/synapse/issues/6748))
- Remove some unnecessary admin handler abstraction methods. ([\#6751](https://github.com/matrix-org/synapse/issues/6751))
- Add some debugging for media storage providers. ([\#6757](https://github.com/matrix-org/synapse/issues/6757))
- Detect unknown remote devices and mark cache as stale. ([\#6776](https://github.com/matrix-org/synapse/issues/6776), [\#6819](https://github.com/matrix-org/synapse/issues/6819))
- Attempt to resync remote users' devices when detected as stale. ([\#6786](https://github.com/matrix-org/synapse/issues/6786))
- Delete current state from the database when server leaves a room. ([\#6792](https://github.com/matrix-org/synapse/issues/6792))
- When a client asks for a remote user's device keys check if the local cache for that user has been marked as potentially stale. ([\#6797](https://github.com/matrix-org/synapse/issues/6797))
- Add background update to clean out left rooms from current state. ([\#6802](https://github.com/matrix-org/synapse/issues/6802), [\#6816](https://github.com/matrix-org/synapse/issues/6816))
- Refactoring work in preparation for changing the event redaction algorithm. ([\#6803](https://github.com/matrix-org/synapse/issues/6803), [\#6805](https://github.com/matrix-org/synapse/issues/6805), [\#6806](https://github.com/matrix-org/synapse/issues/6806), [\#6807](https://github.com/matrix-org/synapse/issues/6807), [\#6820](https://github.com/matrix-org/synapse/issues/6820))
Synapse 1.9.1 (2020-01-28)
==========================
Bugfixes
--------
- Fix bug where setting `mau_limit_reserved_threepids` config would cause Synapse to refuse to start. ([\#6793](https://github.com/matrix-org/synapse/issues/6793))
Synapse 1.9.0 (2020-01-23)
==========================
**WARNING**: As of this release, Synapse no longer supports versions of SQLite before 3.11, and will refuse to start when configured to use an older version. Administrators are recommended to migrate their database to Postgres (see instructions [here](docs/postgres.md)).
If your Synapse deployment uses workers, note that the reverse-proxy configurations for the `synapse.app.media_repository`, `synapse.app.federation_reader` and `synapse.app.event_creator` workers have changed, with the addition of a few paths (see the updated configurations [here](docs/workers.md#available-worker-applications)). Existing configurations will continue to work.
Improved Documentation
----------------------
- Fix endpoint documentation for the List Rooms admin API. ([\#6770](https://github.com/matrix-org/synapse/issues/6770))
Synapse 1.9.0rc1 (2020-01-22)
=============================
Features
--------
- Allow admin to create or modify a user. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5742](https://github.com/matrix-org/synapse/issues/5742))
- Add new quarantine media admin APIs to quarantine by media ID or by user who uploaded the media. ([\#6681](https://github.com/matrix-org/synapse/issues/6681), [\#6756](https://github.com/matrix-org/synapse/issues/6756))
- Add `org.matrix.e2e_cross_signing` to `unstable_features` in `/versions` as per [MSC1756](https://github.com/matrix-org/matrix-doc/pull/1756). ([\#6712](https://github.com/matrix-org/synapse/issues/6712))
- Add a new admin API to list and filter rooms on the server. ([\#6720](https://github.com/matrix-org/synapse/issues/6720))
Bugfixes
--------
- Correctly proxy HTTP errors due to API calls to remote group servers. ([\#6654](https://github.com/matrix-org/synapse/issues/6654))
- Fix media repo admin APIs when using a media worker. ([\#6664](https://github.com/matrix-org/synapse/issues/6664))
- Fix "CRITICAL" errors being logged when a request is received for a uri containing non-ascii characters. ([\#6682](https://github.com/matrix-org/synapse/issues/6682))
- Fix a bug where we would assign a numeric user ID if somebody tried registering with an empty username. ([\#6690](https://github.com/matrix-org/synapse/issues/6690))
- Fix `purge_room` admin API. ([\#6711](https://github.com/matrix-org/synapse/issues/6711))
- Fix a bug causing Synapse to not always purge quiet rooms with a low `max_lifetime` in their message retention policies when running the automated purge jobs. ([\#6714](https://github.com/matrix-org/synapse/issues/6714))
- Fix the `synapse_port_db` not correctly running background updates. Thanks @tadzik for reporting. ([\#6718](https://github.com/matrix-org/synapse/issues/6718))
- Fix changing password via user admin API. ([\#6730](https://github.com/matrix-org/synapse/issues/6730))
- Fix `/events/:event_id` deprecated API. ([\#6731](https://github.com/matrix-org/synapse/issues/6731))
- Fix monthly active user limiting support for worker mode, fixes [#4639](https://github.com/matrix-org/synapse/issues/4639). ([\#6742](https://github.com/matrix-org/synapse/issues/6742))
- Fix bug when setting `account_validity` to an empty block in the config. Thanks to @Sorunome for reporting. ([\#6747](https://github.com/matrix-org/synapse/issues/6747))
- Fix `AttributeError: 'NoneType' object has no attribute 'get'` in `hash_password` when configuration has an empty `password_config`. Contributed by @ivilata. ([\#6753](https://github.com/matrix-org/synapse/issues/6753))
- Fix the `docker-compose.yaml` overriding the entire `/etc` folder of the container. Contributed by Fabian Meyer. ([\#6656](https://github.com/matrix-org/synapse/issues/6656))
Improved Documentation
----------------------
- Fix a typo in the configuration example for purge jobs in the sample configuration file. ([\#6621](https://github.com/matrix-org/synapse/issues/6621))
- Add complete documentation of the message retention policies support. ([\#6624](https://github.com/matrix-org/synapse/issues/6624), [\#6665](https://github.com/matrix-org/synapse/issues/6665))
- Add some helpful tips about changelog entries to the GitHub pull request template. ([\#6663](https://github.com/matrix-org/synapse/issues/6663))
- Clarify the `account_validity` and `email` sections of the sample configuration. ([\#6685](https://github.com/matrix-org/synapse/issues/6685))
- Add more endpoints to the documentation for Synapse workers. ([\#6698](https://github.com/matrix-org/synapse/issues/6698))
Deprecations and Removals
-------------------------
- Synapse no longer supports versions of SQLite before 3.11, and will refuse to start when configured to use an older version. Administrators are recommended to migrate their database to Postgres (see instructions [here](docs/postgres.md)). ([\#6675](https://github.com/matrix-org/synapse/issues/6675))
Internal Changes
----------------
- Add `local_current_membership` table for tracking local user membership state in rooms. ([\#6655](https://github.com/matrix-org/synapse/issues/6655), [\#6728](https://github.com/matrix-org/synapse/issues/6728))
- Port `synapse.replication.tcp` to async/await. ([\#6666](https://github.com/matrix-org/synapse/issues/6666))
- Fixup `synapse.replication` to pass mypy checks. ([\#6667](https://github.com/matrix-org/synapse/issues/6667))
- Allow `additional_resources` to implement `IResource` directly. ([\#6686](https://github.com/matrix-org/synapse/issues/6686))
- Allow REST endpoint implementations to raise a `RedirectException`, which will redirect the user's browser to a given location. ([\#6687](https://github.com/matrix-org/synapse/issues/6687))
- Updates and extensions to the module API. ([\#6688](https://github.com/matrix-org/synapse/issues/6688))
- Updates to the SAML mapping provider API. ([\#6689](https://github.com/matrix-org/synapse/issues/6689), [\#6723](https://github.com/matrix-org/synapse/issues/6723))
- Remove redundant `RegistrationError` class. ([\#6691](https://github.com/matrix-org/synapse/issues/6691))
- Don't block processing of incoming EDUs behind processing PDUs in the same transaction. ([\#6697](https://github.com/matrix-org/synapse/issues/6697))
- Remove duplicate check for the `session` query parameter on the `/auth/xxx/fallback/web` Client-Server endpoint. ([\#6702](https://github.com/matrix-org/synapse/issues/6702))
- Attempt to retry sending a transaction when we detect a remote server has come back online, rather than waiting for a transaction to be triggered by new data. ([\#6706](https://github.com/matrix-org/synapse/issues/6706))
- Add `StateMap` type alias to simplify types. ([\#6715](https://github.com/matrix-org/synapse/issues/6715))
- Add a `DeltaState` to track changes to be made to current state during event persistence. ([\#6716](https://github.com/matrix-org/synapse/issues/6716))
- Add more logging around message retention policies support. ([\#6717](https://github.com/matrix-org/synapse/issues/6717))
- When processing a SAML response, log the assertions for easier configuration. ([\#6724](https://github.com/matrix-org/synapse/issues/6724))
- Fixup `synapse.rest` to pass mypy. ([\#6732](https://github.com/matrix-org/synapse/issues/6732), [\#6764](https://github.com/matrix-org/synapse/issues/6764))
- Fixup `synapse.api` to pass mypy. ([\#6733](https://github.com/matrix-org/synapse/issues/6733))
- Allow streaming cache 'invalidate all' to workers. ([\#6749](https://github.com/matrix-org/synapse/issues/6749))
- Remove unused CI docker compose files. ([\#6754](https://github.com/matrix-org/synapse/issues/6754))
Synapse 1.8.0 (2020-01-09)
==========================
**WARNING**: As of this release Synapse will refuse to start if the `log_file` config option is specified. Support for the option was removed in v1.3.0.
Bugfixes
--------
@@ -16,7 +470,7 @@ Features
- Add v2 APIs for the `send_join` and `send_leave` federation endpoints (as described in [MSC1802](https://github.com/matrix-org/matrix-doc/pull/1802)). ([\#6349](https://github.com/matrix-org/synapse/issues/6349))
- Add a develop script to generate full SQL schemas. ([\#6394](https://github.com/matrix-org/synapse/issues/6394))
- Add custom SAML username mapping functinality through an external provider plugin. ([\#6411](https://github.com/matrix-org/synapse/issues/6411))
- Add custom SAML username mapping functionality through an external provider plugin. ([\#6411](https://github.com/matrix-org/synapse/issues/6411))
- Automatically delete empty groups/communities. ([\#6453](https://github.com/matrix-org/synapse/issues/6453))
- Add option `limit_profile_requests_to_users_who_share_rooms` to prevent requirement of a local user sharing a room with another user to query their profile information. ([\#6523](https://github.com/matrix-org/synapse/issues/6523))
- Add an `export_signing_key` script to extract the public part of signing keys when rotating them. ([\#6546](https://github.com/matrix-org/synapse/issues/6546))


@@ -60,7 +60,7 @@ python 3.6 and to install each tool:
```
# Install the dependencies
pip install -U black flake8 isort
pip install -U black flake8 flake8-comprehensions isort
# Run the linter script
./scripts-dev/lint.sh
@@ -101,8 +101,8 @@ in the format of `PRnumber.type`. The type can be one of the following:
The content of the file is your changelog entry, which should be a short
description of your change in the same style as the rest of our [changelog](
https://github.com/matrix-org/synapse/blob/master/CHANGES.md). The file can
contain Markdown formatting, and should end with a full stop ('.') for
consistency.
contain Markdown formatting, and should end with a full stop (.) or an
exclamation mark (!) for consistency.
Adding credits to the changelog is encouraged; we value your
contributions and would like to have you shouted out in the release notes!
@@ -200,6 +200,20 @@ Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
## Merge Strategy
We use the commit history of develop/master extensively to identify
when regressions were introduced and what changes have been made.
We aim to have a clean merge history, which means we normally squash-merge
changes into develop. For small changes this means there is no need to rebase
to clean up your PR before merging. Larger changes with an organised set of
commits may be merged as-is, if the history is judged to be useful.
This use of squash-merging will mean PRs built on each other will be hard to
merge. We suggest avoiding these where possible, and if required, ensuring
each PR has a tidy set of commits to ease merging.
## Conclusion
That's it! Matrix is a very open and collaborative project as you might expect


@@ -124,15 +124,29 @@ sudo pacman -S base-devel python python-pip \
#### CentOS/Fedora
Installing prerequisites on CentOS 7 or Fedora 25:
Installing prerequisites on CentOS 8 or Fedora>26:
```
sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
libwebp-devel tk-devel redhat-rpm-config \
python3-virtualenv libffi-devel openssl-devel
sudo dnf groupinstall "Development Tools"
```
Installing prerequisites on CentOS 7 or Fedora<=25:
```
sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
lcms2-devel libwebp-devel tcl-devel tk-devel redhat-rpm-config \
python-virtualenv libffi-devel openssl-devel
python3-virtualenv libffi-devel openssl-devel
sudo yum groupinstall "Development Tools"
```
Note that Synapse does not support versions of SQLite before 3.11, and CentOS 7
uses SQLite 3.7. You may be able to work around this by installing a more
recent SQLite version, but it is recommended that you instead use a Postgres
database: see [docs/postgres.md](docs/postgres.md).
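If you are unsure which SQLite version your Python installation is linked
against, you can check it from Python itself (a minimal sketch; any Python 3
will do):

```python
# Print the SQLite version linked into Python's sqlite3 module.
# Synapse requires 3.11 or newer.
import sqlite3

print(sqlite3.sqlite_version)
```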
#### macOS
Installing prerequisites on macOS:
@@ -383,15 +397,17 @@ Once you have installed synapse as above, you will need to configure it.
## TLS certificates
The default configuration exposes a single HTTP port: http://localhost:8008. It
is suitable for local testing, but for any practical use, you will either need
to enable a reverse proxy, or configure Synapse to expose an HTTPS port.
The default configuration exposes a single HTTP port on the local
interface: `http://localhost:8008`. It is suitable for local testing,
but for any practical use, you will need Synapse's APIs to be served
over HTTPS.
For information on using a reverse proxy, see
The recommended way to do so is to set up a reverse proxy on port
`8448`. You can find documentation on doing so in
[docs/reverse_proxy.md](docs/reverse_proxy.md).
To configure Synapse to expose an HTTPS port, you will need to edit
`homeserver.yaml`, as follows:
Alternatively, you can configure Synapse to expose an HTTPS port. To do
so, you will need to edit `homeserver.yaml`, as follows:
* First, under the `listeners` section, uncomment the configuration for the
TLS-enabled listener. (Remove the hash sign (`#`) at the start of
@@ -409,10 +425,13 @@ To configure Synapse to expose an HTTPS port, you will need to edit
point these settings at an existing certificate and key, or you can
enable Synapse's built-in ACME (Let's Encrypt) support. Instructions
for having Synapse automatically provision and renew federation
certificates through ACME can be found at [ACME.md](docs/ACME.md). If you
are using your own certificate, be sure to use a `.pem` file that includes
the full certificate chain including any intermediate certificates (for
instance, if using certbot, use `fullchain.pem` as your certificate, not
certificates through ACME can be found at [ACME.md](docs/ACME.md).
Note that, as pointed out in that document, this feature will not
work with installs set up after November 2019.
If you are using your own certificate, be sure to use a `.pem` file that
includes the full certificate chain including any intermediate certificates
(for instance, if using certbot, use `fullchain.pem` as your certificate, not
`cert.pem`).
For a more detailed guide to configuring your server for federation, see


@@ -272,7 +272,7 @@ to install using pip and a virtualenv::
virtualenv -p python3 env
source env/bin/activate
python -m pip install --no-use-pep517 -e .[all]
python -m pip install --no-use-pep517 -e ".[all]"
This will run a process of downloading and installing all the needed
dependencies into a virtual env.


@@ -75,6 +75,24 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.10.0
====================
Synapse will now log a warning on start up if used with a PostgreSQL database
that has a non-recommended locale set.
See `docs/postgres.md <docs/postgres.md>`_ for details.
Upgrading to v1.8.0
===================
Specifying a ``log_file`` config option will now cause Synapse to refuse to
start, and should be replaced with the ``log_config`` option. Support for
the ``log_file`` option was removed in v1.3.0 and has since had no effect.
Upgrading to v1.7.0
===================


@@ -15,10 +15,9 @@ services:
restart: unless-stopped
# See the readme for a full documentation of the environment settings
environment:
- SYNAPSE_CONFIG_PATH=/etc/homeserver.yaml
- SYNAPSE_CONFIG_PATH=/data/homeserver.yaml
volumes:
# You may either store all the files in a local folder
- ./matrix-config:/etc
- ./files:/data
# .. or you may split this between different storage points
# - ./files:/data
@@ -56,6 +55,9 @@ services:
environment:
- POSTGRES_USER=synapse
- POSTGRES_PASSWORD=changeme
# ensure the database gets created correctly
# https://github.com/matrix-org/synapse/blob/master/docs/postgres.md#set-up-database
- POSTGRES_INITDB_ARGS=--encoding=UTF-8 --lc-collate=C --lc-ctype=C
volumes:
# You may store the database tables in a local folder..
- ./schemas:/var/lib/postgresql/data


@@ -1,6 +1,6 @@
# Using the Synapse Grafana dashboard
0. Set up Prometheus and Grafana. Out of scope for this readme. Useful documentation about using Grafana with Prometheus: http://docs.grafana.org/features/datasources/prometheus/
1. Have your Prometheus scrape your Synapse. https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.rst
1. Have your Prometheus scrape your Synapse. https://github.com/matrix-org/synapse/blob/master/docs/metrics-howto.md
2. Import dashboard into Grafana. Download `synapse.json`. Import it to Grafana and select the correct Prometheus datasource. http://docs.grafana.org/reference/export_import/
3. Set up additional recording rules


@@ -18,7 +18,7 @@
"gnetId": null,
"graphTooltip": 0,
"id": 1,
"iteration": 1561447718159,
"iteration": 1584612489167,
"links": [
{
"asDropdown": true,
@@ -34,6 +34,7 @@
"panels": [
{
"collapsed": false,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -52,12 +53,14 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 1
},
"hiddenSeries": false,
"id": 75,
"legend": {
"avg": false,
@@ -72,7 +75,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -151,6 +156,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 9,
@@ -158,6 +164,7 @@
"x": 12,
"y": 1
},
"hiddenSeries": false,
"id": 33,
"legend": {
"avg": false,
@@ -172,7 +179,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -302,12 +311,14 @@
"dashes": false,
"datasource": "$datasource",
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 10
},
"hiddenSeries": false,
"id": 107,
"legend": {
"avg": false,
@@ -322,7 +333,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -425,12 +438,14 @@
"dashes": false,
"datasource": "$datasource",
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 19
},
"hiddenSeries": false,
"id": 118,
"legend": {
"avg": false,
@@ -445,7 +460,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -542,6 +559,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -1361,6 +1379,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -1732,6 +1751,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -2439,6 +2459,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -2635,6 +2656,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -2650,11 +2672,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 61
"y": 33
},
"id": 79,
"legend": {
@@ -2670,6 +2693,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -2684,8 +2710,13 @@
"expr": "sum(rate(synapse_federation_client_sent_transactions{instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "txn rate",
"legendFormat": "successful txn rate",
"refId": "A"
},
{
"expr": "sum(rate(synapse_util_metrics_block_count{block_name=\"_send_new_transaction\",instance=\"$instance\"}[$bucket_size]) - ignoring (block_name) rate(synapse_federation_client_sent_transactions{instance=\"$instance\"}[$bucket_size]))",
"legendFormat": "failed txn rate",
"refId": "B"
}
],
"thresholds": [],
@@ -2736,11 +2767,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 61
"y": 33
},
"id": 83,
"legend": {
@@ -2756,6 +2788,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -2829,11 +2864,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 70
"y": 42
},
"id": 109,
"legend": {
@@ -2849,6 +2885,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -2923,11 +2962,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 70
"y": 42
},
"id": 111,
"legend": {
@@ -2943,6 +2983,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3009,6 +3052,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -3024,12 +3068,14 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 7,
"h": 8,
"w": 12,
"x": 0,
"y": 62
"y": 34
},
"hiddenSeries": false,
"id": 51,
"legend": {
"avg": false,
@@ -3044,6 +3090,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3112,6 +3161,95 @@
"align": false,
"alignLevel": null
}
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "$datasource",
"description": "",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 34
},
"hiddenSeries": false,
"id": 134,
"legend": {
"avg": false,
"current": false,
"hideZero": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "topk(10,synapse_pushers{job=~\"$job\",index=~\"$index\", instance=\"$instance\"})",
"legendFormat": "{{kind}} {{app_id}}",
"refId": "A"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "Active pusher instances by app",
"tooltip": {
"shared": false,
"sort": 2,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
},
{
"format": "short",
"label": null,
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
}
],
"repeat": null,
@@ -3120,6 +3258,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -3523,6 +3662,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -3540,6 +3680,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 13,
@@ -3562,6 +3703,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3630,6 +3774,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 13,
@@ -3652,6 +3797,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3720,6 +3868,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 13,
@@ -3742,6 +3891,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3810,6 +3962,7 @@
"editable": true,
"error": false,
"fill": 1,
"fillGradient": 0,
"grid": {},
"gridPos": {
"h": 13,
@@ -3832,6 +3985,9 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -3921,6 +4077,7 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -4010,6 +4167,7 @@
"linewidth": 2,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -4076,6 +4234,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -4540,6 +4699,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -5060,6 +5220,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -5079,7 +5240,7 @@
"h": 7,
"w": 12,
"x": 0,
"y": 67
"y": 39
},
"id": 2,
"legend": {
@@ -5095,6 +5256,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5198,7 +5360,7 @@
"h": 7,
"w": 12,
"x": 12,
"y": 67
"y": 39
},
"id": 41,
"legend": {
@@ -5214,6 +5376,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5286,7 +5449,7 @@
"h": 7,
"w": 12,
"x": 0,
"y": 74
"y": 46
},
"id": 42,
"legend": {
@@ -5302,6 +5465,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5373,7 +5537,7 @@
"h": 7,
"w": 12,
"x": 12,
"y": 74
"y": 46
},
"id": 43,
"legend": {
@@ -5389,6 +5553,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5460,7 +5625,7 @@
"h": 7,
"w": 12,
"x": 0,
"y": 81
"y": 53
},
"id": 113,
"legend": {
@@ -5476,6 +5641,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5546,7 +5712,7 @@
"h": 7,
"w": 12,
"x": 12,
"y": 81
"y": 53
},
"id": 115,
"legend": {
@@ -5562,6 +5728,7 @@
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5573,7 +5740,7 @@
"steppedLine": false,
"targets": [
{
"expr": "rate(synapse_replication_tcp_protocol_close_reason{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"expr": "rate(synapse_replication_tcp_protocol_close_reason{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "{{job}}-{{index}} {{reason_type}}",
@@ -5628,6 +5795,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -5643,11 +5811,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 13
"y": 40
},
"id": 67,
"legend": {
@@ -5663,7 +5832,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5679,7 +5850,7 @@
"format": "time_series",
"interval": "",
"intervalFactor": 1,
"legendFormat": "{{job}}-{{index}} ",
"legendFormat": "{{job}}-{{index}} {{name}}",
"refId": "A"
}
],
@@ -5731,11 +5902,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 13
"y": 40
},
"id": 71,
"legend": {
@@ -5751,7 +5923,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5819,11 +5993,12 @@
"dashes": false,
"datasource": "$datasource",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 0,
"y": 22
"y": 49
},
"id": 121,
"interval": "",
@@ -5840,7 +6015,9 @@
"linewidth": 1,
"links": [],
"nullPointMode": "connected",
"options": {},
"options": {
"dataLinks": []
},
"paceLength": 10,
"percentage": false,
"pointradius": 5,
@@ -5909,6 +6086,7 @@
},
{
"collapsed": true,
"datasource": null,
"gridPos": {
"h": 1,
"w": 24,
@@ -6607,7 +6785,7 @@
}
],
"refresh": "5m",
"schemaVersion": 18,
"schemaVersion": 22,
"style": "dark",
"tags": [
"matrix"
@@ -6616,7 +6794,7 @@
"list": [
{
"current": {
"tags": [],
"selected": true,
"text": "Prometheus",
"value": "Prometheus"
},
@@ -6638,6 +6816,7 @@
"auto_count": 100,
"auto_min": "30s",
"current": {
"selected": false,
"text": "auto",
"value": "$__auto_interval_bucket_size"
},
@@ -6719,9 +6898,9 @@
"allFormat": "regex wildcard",
"allValue": "",
"current": {
"text": "All",
"text": "synapse",
"value": [
"$__all"
"synapse"
]
},
"datasource": "$datasource",
@@ -6751,7 +6930,9 @@
"allValue": ".*",
"current": {
"text": "All",
"value": "$__all"
"value": [
"$__all"
]
},
"datasource": "$datasource",
"definition": "",
@@ -6810,5 +6991,5 @@
"timezone": "",
"title": "Synapse",
"uid": "000000012",
"version": 10
"version": 19
}

debian/changelog

@@ -1,3 +1,51 @@
matrix-synapse-py3 (1.12.1) stable; urgency=medium
* New synapse release 1.12.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 02 Apr 2020 11:30:47 +0000
matrix-synapse-py3 (1.12.0) stable; urgency=medium
* New synapse release 1.12.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 23 Mar 2020 12:13:03 +0000
matrix-synapse-py3 (1.11.1) stable; urgency=medium
* New synapse release 1.11.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Mar 2020 15:01:22 +0000
matrix-synapse-py3 (1.11.0) stable; urgency=medium
* New synapse release 1.11.0.
-- Synapse Packaging team <packages@matrix.org> Fri, 21 Feb 2020 08:54:34 +0000
matrix-synapse-py3 (1.10.1) stable; urgency=medium
* New synapse release 1.10.1.
-- Synapse Packaging team <packages@matrix.org> Mon, 17 Feb 2020 16:27:28 +0000
matrix-synapse-py3 (1.10.0) stable; urgency=medium
* New synapse release 1.10.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 12 Feb 2020 12:18:54 +0000
matrix-synapse-py3 (1.9.1) stable; urgency=medium
* New synapse release 1.9.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 28 Jan 2020 13:09:23 +0000
matrix-synapse-py3 (1.9.0) stable; urgency=medium
* New synapse release 1.9.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 23 Jan 2020 12:56:31 +0000
matrix-synapse-py3 (1.8.0) stable; urgency=medium
[ Richard van der Hoff ]


@@ -16,7 +16,7 @@ ARG PYTHON_VERSION=3.7
###
### Stage 0: builder
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.10 as builder
FROM docker.io/python:${PYTHON_VERSION}-alpine3.11 as builder
# install the OS build deps


@@ -110,12 +110,12 @@ argument to `docker run`.
## Legacy dynamic configuration file support
For backwards-compatibility only, the docker image supports creating a dynamic
configuration file based on environment variables. This is now deprecated, but
is enabled when the `SYNAPSE_SERVER_NAME` variable is set (and `generate` is
not given).
The docker image used to support creating a dynamic configuration file based
on environment variables. This is no longer supported, and an error will be
raised if you try to run synapse without a config file.
To migrate from a dynamic configuration file to a static one, run the docker
It is, however, possible to generate a static configuration file based on
the environment variables that were previously used. To do this, run the docker
container once with the environment variables set and with the `migrate_config`
command line option. For example:
@@ -127,15 +127,20 @@ docker run -it --rm \
matrixdotorg/synapse:latest migrate_config
```
This will generate the same configuration file as the legacy mode used, but
will store it in `/data/homeserver.yaml` instead of a temporary location. You
can then use it as shown above at [Running synapse](#running-synapse).
This will generate the same configuration file as the legacy mode used, and
will store it in `/data/homeserver.yaml`. You can then use it as shown above at
[Running synapse](#running-synapse).
Note that the defaults used in this configuration file may be different to
those when generating a new config file with `generate`: for example, TLS is
enabled by default in this mode. You are encouraged to inspect the generated
configuration file and edit it to ensure it meets your needs.
## Building the image
If you need to build the image from a Synapse checkout, use the following `docker
build` command from the repo's root:
```
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```


@@ -188,11 +188,6 @@ def main(args, environ):
else:
ownership = "{}:{}".format(desired_uid, desired_gid)
log(
"Container running as UserID %s:%s, ENV (or defaults) requests %s:%s"
% (os.getuid(), os.getgid(), desired_uid, desired_gid)
)
if ownership is None:
log("Will not perform chmod/su-exec as UserID already matches request")
@@ -213,38 +208,30 @@ def main(args, environ):
if mode is not None:
error("Unknown execution mode '%s'" % (mode,))
if "SYNAPSE_SERVER_NAME" in environ:
# backwards-compatibility generate-a-config-on-the-fly mode
if "SYNAPSE_CONFIG_PATH" in environ:
error(
"SYNAPSE_SERVER_NAME can only be combined with SYNAPSE_CONFIG_PATH "
"in `generate` or `migrate_config` mode. To start synapse using a "
"config file, unset the SYNAPSE_SERVER_NAME environment variable."
)
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
config_path = "/compiled/homeserver.yaml"
log(
"Generating config file '%s' on-the-fly from environment variables.\n"
"Note that this mode is deprecated. You can migrate to a static config\n"
"file by running with 'migrate_config'. See the README for more details."
% (config_path,)
)
generate_config_from_template("/compiled", config_path, environ, ownership)
else:
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get(
"SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml"
)
if not os.path.exists(config_path):
if not os.path.exists(config_path):
if "SYNAPSE_SERVER_NAME" in environ:
error(
"Config file '%s' does not exist. You should either create a new "
"config file by running with the `generate` argument (and then edit "
"the resulting file before restarting) or specify the path to an "
"existing config file with the SYNAPSE_CONFIG_PATH variable."
"""\
Config file '%s' does not exist.
The synapse docker image no longer supports generating a config file on-the-fly
based on environment variables. You can migrate to a static config file by
running with 'migrate_config'. See the README for more details.
"""
% (config_path,)
)
error(
"Config file '%s' does not exist. You should either create a new "
"config file by running with the `generate` argument (and then edit "
"the resulting file before restarting) or specify the path to an "
"existing config file with the SYNAPSE_CONFIG_PATH variable."
% (config_path,)
)
log("Starting synapse with config file " + config_path)
args = ["python", "-m", synapse_worker, "--config-path", config_path]


@@ -1,4 +1,4 @@
# The config is maintained as an up-to-date snapshot of the default
# This file is maintained as an up-to-date snapshot of the default
# homeserver.yaml configuration generated by Synapse.
#
# It is intended to act as a reference for the default configuration,
@@ -10,3 +10,5 @@
# homeserver.yaml. Instead, if you are starting from scratch, please generate
# a fresh config using Synapse by following the instructions in INSTALL.md.
################################################################################


@@ -1,12 +1,48 @@
# ACME
Synapse v1.0 will require valid TLS certificates for communication between
servers (port `8448` by default) in addition to those that are client-facing
(port `443`). If you do not already have a valid certificate for your domain,
the easiest way to get one is with Synapse's new ACME support, which will use
the ACME protocol to provision a certificate automatically. Synapse v0.99.0+
will provision server-to-server certificates automatically for you for free
through [Let's Encrypt](https://letsencrypt.org/) if you tell it to.
From version 1.0 (June 2019) onwards, Synapse requires valid TLS
certificates for communication between servers (by default on port
`8448`) in addition to those that are client-facing (port `443`). To
help homeserver admins fulfil this new requirement, Synapse v0.99.0
introduced support for automatically provisioning certificates through
[Let's Encrypt](https://letsencrypt.org/) using the ACME protocol.
## Deprecation of ACME v1
In [March 2019](https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430),
Let's Encrypt announced that they were deprecating version 1 of the ACME
protocol, with the plan to disable the use of it for new accounts in
November 2019, and for existing accounts in June 2020.
Synapse doesn't currently support version 2 of the ACME protocol, which
means that:
* for existing installs, Synapse's built-in ACME support will continue
to work until June 2020.
* for new installs, this feature will not work at all.
Either way, it is recommended to move from Synapse's ACME support
feature to an external automated tool such as [certbot](https://github.com/certbot/certbot)
(or browse [this list](https://letsencrypt.org/fr/docs/client-options/)
for an alternative ACME client).
It's also recommended to use a reverse proxy for the server-facing
communications (more documentation about this can be found
[here](/docs/reverse_proxy.md)) as well as the client-facing ones and
have it serve the certificates.
In case you can't do that and need Synapse to serve them itself, make
sure to set the `tls_certificate_path` configuration setting to the path
of the certificate (make sure to use the certificate containing the full
certification chain, e.g. `fullchain.pem` if using certbot) and
`tls_private_key_path` to the path of the matching private key. Note
that in this case you will need to restart Synapse after each
certificate renewal so that Synapse stops using the old certificate.
If you still want to use Synapse's built-in ACME support, the rest of
this document explains how to set it up.
## Initial setup
In the case that your `server_name` config variable is the same as
the hostname that the client connects to, then the same certificate can be
@@ -32,11 +68,6 @@ If you already have certificates, you will need to back up or delete them
(files `example.com.tls.crt` and `example.com.tls.key` in Synapse's root
directory), Synapse's ACME implementation will not overwrite them.
You may wish to use alternate methods such as Certbot to obtain a certificate
from Let's Encrypt, depending on your server configuration. Of course, if you
already have a valid certificate for your homeserver's domain, that can be
placed in Synapse's config directory without the need for any ACME setup.
## ACME setup
The main steps for enabling ACME support in short summary are:


@@ -22,19 +22,81 @@ It returns a JSON body like the following:
}
```
# Quarantine media in a room
This API 'quarantines' all the media in a room.
The API is:
```
POST /_synapse/admin/v1/quarantine_media/<room_id>
{}
```
# Quarantine media
Quarantining media means that it is marked as inaccessible by users. It applies
to any local media, and any locally-cached copies of remote media.
The media file itself (and any thumbnails) is not deleted from the server.
## Quarantining media by ID
This API quarantines a single piece of local or remote media.
Request:
```
POST /_synapse/admin/v1/media/quarantine/<server_name>/<media_id>
{}
```
Where `server_name` is in the form of `example.org`, and `media_id` is in the
form of `abcdefg12345...`.
Response:
```
{}
```
## Quarantining media in a room
This API quarantines all local and remote media in a room.
Request:
```
POST /_synapse/admin/v1/room/<room_id>/media/quarantine
{}
```
Where `room_id` is in the form of `!roomid12345:example.org`.
Response:
```
{
"num_quarantined": 10 # The number of media items successfully quarantined
}
```
Note that there is a legacy endpoint, `POST
/_synapse/admin/v1/quarantine_media/<room_id>`, that operates the same.
However, it is deprecated and may be removed in a future release.
## Quarantining all media of a user
This API quarantines all *local* media that a *local* user has uploaded. That is to say, if
you would like to quarantine media uploaded by a user on a remote homeserver, you should
instead use one of the other APIs.
Request:
```
POST /_synapse/admin/v1/user/<user_id>/media/quarantine
{}
```
Where `user_id` is in the form of `@bob:example.org`.
Response:
```
{
"num_quarantined": 10 # The number of media items successfully quarantined
}
```
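For illustration, a minimal Python sketch (using the `requests` library) that
calls the room quarantine endpoint; the homeserver URL and admin access token
below are placeholders, not values from this documentation:

```python
# Sketch: quarantine all media in a room via the admin API.
# BASE_URL and ADMIN_TOKEN are deployment-specific placeholders.
from urllib.parse import quote

import requests

BASE_URL = "https://synapse.example.com"
ADMIN_TOKEN = "<admin access token>"
room_id = "!roomid12345:example.org"

resp = requests.post(
    f"{BASE_URL}/_synapse/admin/v1/room/{quote(room_id, safe='')}/media/quarantine",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"num_quarantined": 10}
```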


@@ -8,6 +8,9 @@ Depending on the amount of history being purged a call to the API may take
several minutes or longer. During this period users will not be able to
paginate further back in the room from the point being purged from.
Note that Synapse requires at least one message in each room, so it will never
delete the last message in a room.
The API is:
``POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]``

docs/admin_api/rooms.md

@@ -0,0 +1,173 @@
# List Room API
The List Room admin API allows server admins to get a list of rooms on their
server. There are various parameters available that allow for filtering and
sorting the returned list. This API supports pagination.
## Parameters
The following query parameters are available:
* `from` - Offset in the returned list. Defaults to `0`.
* `limit` - Maximum amount of rooms to return. Defaults to `100`.
* `order_by` - The method in which to sort the returned list of rooms. Valid values are:
- `alphabetical` - Rooms are ordered alphabetically by room name. This is the default.
- `size` - Rooms are ordered by the number of members. Largest to smallest.
* `dir` - Direction of room order. Either `f` for forwards or `b` for backwards. Setting
this value to `b` will reverse the above sort order. Defaults to `f`.
* `search_term` - Filter rooms by their room name. Search term can be contained in any
part of the room name. Defaults to no filtering.
The following fields are possible in the JSON response body:
* `rooms` - An array of objects, each containing information about a room.
- Room objects contain the following fields:
- `room_id` - The ID of the room.
- `name` - The name of the room.
- `canonical_alias` - The canonical (main) alias address of the room.
- `joined_members` - How many users are currently in the room.
* `offset` - The current pagination offset in rooms. This parameter should be
used instead of `next_token` for room offset as `next_token` is
not intended to be parsed.
* `total_rooms` - The total number of rooms this query can return. Using this
and `offset`, you have enough information to know the current
progression through the list.
* `next_batch` - If this field is present, we know that there are potentially
more rooms on the server that did not all fit into this response.
We can use `next_batch` to get the "next page" of results. To do
so, simply repeat your request, setting the `from` parameter to
the value of `next_batch`.
* `prev_batch` - If this field is present, it is possible to paginate backwards.
Use `prev_batch` for the `from` value in the next request to
get the "previous page" of results.
## Usage
A standard request with no filtering:
```
GET /_synapse/admin/v1/rooms
{}
```
Response:
```
{
"rooms": [
{
"room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
"name": "Matrix HQ",
"canonical_alias": "#matrix:matrix.org",
"joined_members": 8326
},
... (8 hidden items) ...
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314
}
],
"offset": 0,
"total_rooms": 10
}
```
Filtering by room name:
```
GET /_synapse/admin/v1/rooms?search_term=TWIM
{}
```
Response:
```
{
"rooms": [
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314
}
],
"offset": 0,
"total_rooms": 1
}
```
Paginating through a list of rooms:
```
GET /_synapse/admin/v1/rooms?order_by=size
{}
```
Response:
```
{
"rooms": [
{
"room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
"name": "Matrix HQ",
"canonical_alias": "#matrix:matrix.org",
"joined_members": 8326
},
... (98 hidden items) ...
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314
}
],
"offset": 0,
"total_rooms": 150
"next_token": 100
}
```
The presence of the `next_token` parameter tells us that there are more rooms
than returned in this request, and we need to make another request to get them.
To get the next batch of room results, we repeat our request, setting the `from`
parameter to the value of `next_token`.
```
GET /_synapse/admin/v1/rooms?order_by=size&from=100
{}
```
Response:
```
{
"rooms": [
{
"room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
"name": "Music Theory",
"canonical_alias": "#musictheory:matrix.org",
"joined_members": 127
},
... (48 hidden items) ...
{
"room_id": "!twcBhHVdZlQWuuxBhN:termina.org.uk",
"name": "weechat-matrix",
"canonical_alias": "#weechat-matrix:termina.org.uk",
"joined_members": 137
}
],
"offset": 100,
"prev_batch": 0,
"total_rooms": 150
}
```
Once the `next_token` parameter is no longer present, we know we've reached the
end of the list.
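As a usage illustration, here is a minimal client-side Python sketch that pages
through the list until no continuation token is returned. It accepts either
`next_token` (as in the examples above) or `next_batch` (as in the field list);
the `requests` library and an admin token are assumed:

```python
# Sketch: page through the List Room admin API.
import requests

BASE_URL = "https://synapse.example.com"  # placeholder
HEADERS = {"Authorization": "Bearer <admin access token>"}

rooms = []
params = {"order_by": "size", "from": 0}
while True:
    resp = requests.get(
        f"{BASE_URL}/_synapse/admin/v1/rooms", headers=HEADERS, params=params
    )
    resp.raise_for_status()
    body = resp.json()
    rooms.extend(body["rooms"])
    token = body.get("next_token", body.get("next_batch"))
    if token is None:
        break  # no continuation token: we've reached the end of the list
    params["from"] = token

print(f"Fetched {len(rooms)} of {body['total_rooms']} rooms")
```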


@@ -1,3 +1,46 @@
Create or modify Account
========================
This API allows an administrator to create or modify a user account with a
specific ``user_id``. Be aware that ``user_id`` is fully qualified: for example,
``@user:server.com``.
This API is::
PUT /_synapse/admin/v2/users/<user_id>
with a body of:
.. code:: json
{
"password": "user_password",
"displayname": "User",
"threepids": [
{
"medium": "email",
"address": "<user_mail_1>"
},
{
"medium": "email",
"address": "<user_mail_2>"
}
],
"avatar_url": "<avatar_url>",
"admin": false,
"deactivated": false
}
including an ``access_token`` of a server admin.
The parameter ``displayname`` is optional and defaults to ``user_id``.
The parameter ``threepids`` is optional.
The parameter ``avatar_url`` is optional.
The parameter ``admin`` is optional and defaults to ``false``.
The parameter ``deactivated`` is optional and defaults to ``false``.
The parameter ``password`` is optional. If provided the user's password is updated and all devices are logged out.
If the user already exists then optional parameters default to the current value.
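For illustration, a minimal Python sketch of such a request using the
``requests`` library (the homeserver URL and admin token are placeholders):

.. code:: python

    # Sketch: create or modify a user via the admin API.
    import requests

    resp = requests.put(
        "https://synapse.example.com/_synapse/admin/v2/users/@user:server.com",
        headers={"Authorization": "Bearer <admin access token>"},
        json={"password": "user_password", "displayname": "User", "admin": False},
    )
    resp.raise_for_status()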
List Accounts
=============
@@ -50,7 +93,8 @@ This API returns information about a specific user account.
The API is::
GET /_synapse/admin/v1/whois/<user_id>
GET /_synapse/admin/v1/whois/<user_id> (deprecated)
GET /_synapse/admin/v2/users/<user_id>
including an ``access_token`` of a server admin.
@@ -125,11 +169,14 @@ with a body of:
.. code:: json
{
"new_password": "<secret>"
"new_password": "<secret>",
"logout_devices": true,
}
including an ``access_token`` of a server admin.
The parameter ``new_password`` is required.
The parameter ``logout_devices`` is optional and defaults to ``true``.
Get whether a user is a server administrator or not
===================================================


@@ -30,7 +30,7 @@ The necessary tools are detailed below.
Install `flake8` with:
pip install --upgrade flake8
pip install --upgrade flake8 flake8-comprehensions
Check all application and test code with:

docs/delegate.md

@@ -0,0 +1,94 @@
# Delegation
By default, other homeservers will expect to be able to reach yours via
your `server_name`, on port 8448. For example, if you set your `server_name`
to `example.com` (so that your user names look like `@user:example.com`),
other servers will try to connect to yours at `https://example.com:8448/`.
Delegation is a Matrix feature allowing a homeserver admin to retain a
`server_name` of `example.com` so that user IDs, room aliases, etc continue
to look like `*:example.com`, whilst having federation traffic routed
to a different server and/or port (e.g. `synapse.example.com:443`).
## .well-known delegation
To use this method, you need to be able to alter the
`server_name`'s HTTPS server to serve the `/.well-known/matrix/server`
URL. Having an active server (with a valid TLS certificate) serving your
`server_name` domain is out of the scope of this documentation.
The URL `https://<server_name>/.well-known/matrix/server` should
return a JSON structure containing the key `m.server` like so:
```json
{
"m.server": "<synapse.server.name>[:<yourport>]"
}
```
In our example, this would mean that URL `https://example.com/.well-known/matrix/server`
should return:
```json
{
"m.server": "synapse.example.com:443"
}
```
Note, specifying a port is optional. If no port is specified, then it defaults
to 8448.
With .well-known delegation, federating servers will check for a valid TLS
certificate for the delegated hostname (in our example: `synapse.example.com`).
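A quick way to sanity-check the record is to fetch and parse it yourself; a
minimal Python sketch, using the example's hostname and the `requests` library:

```python
# Sketch: fetch and parse the .well-known delegation record.
import requests

resp = requests.get("https://example.com/.well-known/matrix/server", timeout=10)
resp.raise_for_status()
print(resp.json().get("m.server"))  # expected here: "synapse.example.com:443"
```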
## SRV DNS record delegation
It is also possible to do delegation using a SRV DNS record. However, that is
considered an advanced topic since it's a bit complex to set up, and `.well-known`
delegation is already enough in most cases.
However, if you really need it, you can find documentation on what such a
record should look like and how Synapse will use it in [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names).
## Delegation FAQ
### When do I need delegation?
If your homeserver's APIs are accessible on the default federation port (8448)
and the domain your `server_name` points to, you do not need any delegation.
For instance, if you registered `example.com` and pointed its DNS A record at a
fresh server, you could install Synapse on that host, giving it a `server_name`
of `example.com`, and once a reverse proxy has been set up to proxy all requests
sent to the port `8448` and serve TLS certificates for `example.com`, you
wouldn't need any delegation set up.
**However**, if your homeserver's APIs aren't accessible on port 8448 and on the
domain `server_name` points to, you will need to let other servers know how to
find it using delegation.
### Do you still recommend against using a reverse proxy on the federation port?
We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.
See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
reverse proxy.
### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
This is no longer necessary. If you are using a reverse proxy for all of your
TLS traffic, then you can set `no_tls: True` in the Synapse config.
In that case, the only reason Synapse needs the certificate is to populate a legacy
`tls_fingerprints` field in the federation API. This is ignored by Synapse 0.99.0
and later, and the only time pre-0.99 Synapses will check it is when attempting to
fetch the server keys - and generally this is delegated via `matrix.org`, which
is running a modern version of Synapse.
### Do I need the same certificate for the client and federation port?
No. There is nothing stopping you from using different certificates,
particularly if you are using a reverse proxy.


@@ -1,163 +1,41 @@
Setting up Federation
Setting up federation
=====================
Federation is the process by which users on different servers can participate
in the same room. For this to work, those other servers must be able to contact
yours to send messages.
The ``server_name`` configured in the Synapse configuration file (often
``homeserver.yaml``) defines how resources (users, rooms, etc.) will be
identified (eg: ``@user:example.com``, ``#room:example.com``). By
default, it is also the domain that other servers will use to
try to reach your server (via port 8448). This is easy to set
up and will work provided you set the ``server_name`` to match your
machine's public DNS hostname, and provide Synapse with a TLS certificate
which is valid for your ``server_name``.
The `server_name` configured in the Synapse configuration file (often
`homeserver.yaml`) defines how resources (users, rooms, etc.) will be
identified (eg: `@user:example.com`, `#room:example.com`). By default,
it is also the domain that other servers will use to try to reach your
server (via port 8448). This is easy to set up and will work provided
you set the `server_name` to match your machine's public DNS hostname.
For this default configuration to work, you will need to listen for TLS
connections on port 8448. The preferred way to do that is by using a
reverse proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions
on how to correctly set one up.
In some cases you might not want to run Synapse on the machine that has
the `server_name` as its public DNS hostname, or you might want federation
traffic to use a different port than 8448. For example, you might want to
have your user names look like `@user:example.com`, but you want to run
Synapse on `synapse.example.com` on port 443. This can be done using
delegation, which allows an admin to control where federation traffic should
be sent. See [delegate.md](delegate.md) for instructions on how to set this up.
Once federation has been configured, you should be able to join a room over
federation. A good place to start is ``#synapse:matrix.org`` - a room for
federation. A good place to start is `#synapse:matrix.org` - a room for
Synapse admins.
## Delegation
For a more flexible configuration, you can have ``server_name``
resources (eg: ``@user:example.com``) served by a different host and
port (eg: ``synapse.example.com:443``). There are two ways to do this:
- adding a ``/.well-known/matrix/server`` URL served on ``https://example.com``.
- adding a DNS ``SRV`` record in the DNS zone of domain
``example.com``.
Without configuring delegation, the matrix federation will
expect to find your server via ``example.com:8448``. The following methods
allow you retain a `server_name` of `example.com` so that your user IDs, room
aliases, etc continue to look like `*:example.com`, whilst having your
federation traffic routed to a different server.
### .well-known delegation
To use this method, you need to be able to alter the
``server_name`` 's https server to serve the ``/.well-known/matrix/server``
URL. Having an active server (with a valid TLS certificate) serving your
``server_name`` domain is out of the scope of this documentation.
The URL ``https://<server_name>/.well-known/matrix/server`` should
return a JSON structure containing the key ``m.server`` like so:
{
"m.server": "<synapse.server.name>[:<yourport>]"
}
In our example, this would mean that URL ``https://example.com/.well-known/matrix/server``
should return:
{
"m.server": "synapse.example.com:443"
}
Note, specifying a port is optional. If a port is not specified an SRV lookup
is performed, as described below. If the target of the
delegation does not have an SRV record, then the port defaults to 8448.
Most installations will not need to configure .well-known. However, it can be
useful in cases where the admin is hosting on behalf of someone else and
therefore cannot gain access to the necessary certificate. With .well-known,
federation servers will check for a valid TLS certificate for the delegated
hostname (in our example: ``synapse.example.com``).
### DNS SRV delegation
To use this delegation method, you need to have write access to your
``server_name`` 's domain zone DNS records (in our example it would be
``example.com`` DNS zone).
This method requires the target server to provide a
valid TLS certificate for the original ``server_name``.
You need to add a SRV record in your ``server_name`` 's DNS zone with
this format:
_matrix._tcp.<yourdomain.com> <ttl> IN SRV <priority> <weight> <port> <synapse.server.name>
In our example, we would need to add this SRV record in the
``example.com`` DNS zone:
_matrix._tcp.example.com. 3600 IN SRV 10 5 443 synapse.example.com.
Once done and set up, you can check the DNS record with ``dig -t srv
_matrix._tcp.<server_name>``. In our example, we would expect this:
$ dig -t srv _matrix._tcp.example.com
_matrix._tcp.example.com. 3600 IN SRV 10 0 443 synapse.example.com.
Note that the target of a SRV record cannot be an alias (CNAME record): it has to point
directly to the server hosting the synapse instance.
### Delegation FAQ
#### When do I need a SRV record or .well-known URI?
If your homeserver listens on the default federation port (8448), and your
`server_name` points to the host that your homeserver runs on, you do not need an SRV
record or `.well-known/matrix/server` URI.
For instance, if you registered `example.com` and pointed its DNS A record at a
fresh server, you could install Synapse on that host,
giving it a `server_name` of `example.com`, and once [ACME](acme.md) support is enabled,
it would automatically generate a valid TLS certificate for you via Let's Encrypt
and no SRV record or .well-known URI would be needed.
**However**, if your server does not listen on port 8448, or if your `server_name`
does not point to the host that your homeserver runs on, you will need to let
other servers know how to find it. The way to do this is via .well-known or an
SRV record.
#### I have created a .well-known URI. Do I also need an SRV record?
No. You can use either `.well-known` delegation or use an SRV record for delegation. You
do not need to use both to delegate to the same location.
#### Can I manage my own certificates rather than having Synapse renew certificates itself?
Yes, you are welcome to manage your certificates yourself. Synapse will only
attempt to obtain certificates from Let's Encrypt if you configure it to do
so. The only requirement is that there is a valid TLS cert present for
federation endpoints.
#### Do you still recommend against using a reverse proxy on the federation port?
We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.
See [reverse_proxy.md](reverse_proxy.md) for information on setting up a
reverse proxy.
#### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
Practically speaking, this is no longer necessary.
If you are using a reverse proxy for all of your TLS traffic, then you can set
`no_tls: True` in the Synapse config. In that case, the only reason Synapse
needs the certificate is to populate a legacy `tls_fingerprints` field in the
federation API. This is ignored by Synapse 0.99.0 and later, and the only time
pre-0.99 Synapses will check it is when attempting to fetch the server keys -
and generally this is delegated via `matrix.org`, which will be running a modern
version of Synapse.
#### Do I need the same certificate for the client and federation port?
No. There is nothing stopping you from using different certificates,
particularly if you are using a reverse proxy. However, Synapse will use the
same certificate on any ports where TLS is configured.
## Troubleshooting
You can use the [federation tester](
<https://matrix.org/federationtester>) to check if your homeserver is
configured correctly. Alternatively try the [JSON API used by the federation tester](https://matrix.org/federationtester/api/report?server_name=DOMAIN).
Note that you'll have to modify this URL to replace ``DOMAIN`` with your
``server_name``. Hitting the API directly provides extra detail.
You can use the [federation tester](https://matrix.org/federationtester)
to check if your homeserver is configured correctly. Alternatively try the
[JSON API used by the federation tester](https://matrix.org/federationtester/api/report?server_name=DOMAIN).
Note that you'll have to modify this URL to replace `DOMAIN` with your
`server_name`. Hitting the API directly provides extra detail.
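For example, a small Python sketch that queries the tester's JSON API (the
`FederationOK` field name reflects the tester's output at the time of writing
and should be treated as an assumption):

```python
# Sketch: query the federation tester's JSON API for a server_name.
import requests

server_name = "example.com"  # replace with your server_name
report = requests.get(
    "https://matrix.org/federationtester/api/report",
    params={"server_name": server_name},
    timeout=30,
).json()
print(report.get("FederationOK"))  # overall pass/fail flag
```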
The typical failure mode for federation is that when the server tries to join
a room, it is rejected with "401: Unauthorized". Generally this means that other
@@ -169,8 +47,8 @@ you invite them to. This can be caused by an incorrectly-configured reverse
proxy: see [reverse_proxy.md](<reverse_proxy.md>) for instructions on how to correctly
configure a reverse proxy.
## Running a Demo Federation of Synapses
## Running a demo federation of Synapses
If you want to get up and running quickly with a trio of homeservers in a
private federation, there is a script in the ``demo`` directory. This is mainly
private federation, there is a script in the `demo` directory. This is mainly
useful just for development purposes. See [demo/README](<../demo/README>).


@@ -0,0 +1,195 @@
# Message retention policies
Synapse admins can enable support for message retention policies on
their homeserver. Message retention policies exist at a room level,
follow the semantics described in
[MSC1763](https://github.com/matrix-org/matrix-doc/blob/matthew/msc1763/proposals/1763-configurable-retention-periods.md),
and allow server and room admins to configure how long messages should
be kept in a homeserver's database before being purged from it.
**Please note that, as this feature isn't part of the Matrix
specification yet, this implementation is to be considered as
experimental.**
A message retention policy is mainly defined by its `max_lifetime`
parameter, which defines how long a message can be kept around after
it was sent to the room. If a room doesn't have a message retention
policy, and there's no default one for a given server, then no message
sent in that room is ever purged on that server.
MSC1763 also specifies semantics for a `min_lifetime` parameter which
defines the amount of time after which an event _can_ get purged (after
it was sent to the room), but Synapse doesn't currently support it
beyond registering it.
Both `max_lifetime` and `min_lifetime` are optional parameters.
Note that message retention policies don't apply to state events.
Once an event reaches its expiry date (defined as the time it was sent
plus the value for `max_lifetime` in the room), two things happen:
* Synapse stops serving the event to clients via any endpoint.
* The message gets picked up by the next purge job (see the "Purge jobs"
section) and is removed from Synapse's database.
Since purge jobs don't run continuously, this means that an event might
stay in a server's database for longer than the value for `max_lifetime`
in the room would allow, though hidden from clients.
Similarly, if a server (with support for message retention policies
enabled) receives from another server an event that should have been
purged according to its room's policy, then the receiving server will
process and store that event until it's picked up by the next purge job,
though it will always hide it from clients.
Synapse requires at least one message in each room, so it will never
delete the last message in a room. It will, however, hide it from
clients.
## Server configuration
Support for this feature can be enabled and configured in the
`retention` section of the Synapse configuration file (see the
[sample file](https://github.com/matrix-org/synapse/blob/v1.7.3/docs/sample_config.yaml#L332-L393)).
To enable support for message retention policies, set `enabled` in this
section to `true`.
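For example, a minimal `retention` section enabling the feature could look like this:

```yaml
retention:
  enabled: true
```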
### Default policy
A default message retention policy is a policy defined in Synapse's
configuration that is used by Synapse for every room that doesn't have a
message retention policy configured in its state. This allows server
admins to ensure that messages are never kept indefinitely in a server's
database.
A default policy can be defined as such, in the `retention` section of
the configuration file:
```yaml
default_policy:
  min_lifetime: 1d
  max_lifetime: 1y
```
Here, `min_lifetime` and `max_lifetime` have the same meaning and level
of support as previously described. They can be expressed either as a
duration (using the units `s` (seconds), `m` (minutes), `h` (hours),
`d` (days), `w` (weeks) and `y` (years)) or as a number of milliseconds.
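For example, `max_lifetime: 1d` and `max_lifetime: 86400000` (one day
expressed in milliseconds) are equivalent.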
### Purge jobs
Purge jobs are the jobs that Synapse runs in the background to purge
expired events from the database. They are only run if support for
message retention policies is enabled in the server's configuration. If
the server admin doesn't configure purge jobs,
Synapse will use a default configuration, which is described in the
[sample configuration file](https://github.com/matrix-org/synapse/blob/master/docs/sample_config.yaml#L332-L393).
Some server admins might want finer control over when events are removed
depending on an event's room's policy. This can be done by setting the
`purge_jobs` sub-section in the `retention` section of the configuration
file. An example of such configuration could be:
```yaml
purge_jobs:
  - longest_max_lifetime: 3d
    interval: 12h
  - shortest_max_lifetime: 3d
    longest_max_lifetime: 1w
    interval: 1d
  - shortest_max_lifetime: 1w
    interval: 2d
```
In this example, we define three jobs:
* one that runs twice a day (every 12 hours) and purges events in rooms
whose policy's `max_lifetime` is lower than or equal to 3 days.
* one that runs once a day and purges events in rooms whose policy's
`max_lifetime` is between 3 days and a week.
* one that runs once every 2 days and purges events in rooms whose
policy's `max_lifetime` is greater than a week.
Note that this example is tailored to show different configurations and
features slightly more jobs than is probably necessary (in practice, a
server admin would probably consider it better to replace the last two
jobs with one that runs once a day and handles rooms whose
policy's `max_lifetime` is greater than 3 days).
Keep in mind, when configuring these jobs, that a purge job can become
quite heavy on the server if it targets many rooms; therefore, prefer
having jobs with a low interval that target a limited set of rooms. Also
include a job with no minimum and one with no maximum to ensure
your configuration handles every policy.
As previously mentioned in this documentation, while a purge job that
runs e.g. every day means that an expired event might stay in the
database for up to a day after its expiry, Synapse hides expired events
from clients as soon as they expire, so the event is not visible to
local users between its expiry date and the moment it gets purged from
the server's database.
### Lifetime limits
**Note: this feature is mainly useful within a closed federation or on
servers that don't federate, because there currently is no way to
enforce these limits in an open federation.**
Server admins can restrict the values their local users are allowed to
use for both `min_lifetime` and `max_lifetime`. These limits can be
defined as such in the `retention` section of the configuration file:
```yaml
allowed_lifetime_min: 1d
allowed_lifetime_max: 1y
```
Here, `allowed_lifetime_min` is the lowest value a local user can set
for both `min_lifetime` and `max_lifetime`, and `allowed_lifetime_max`
is the highest value. Both parameters are optional (e.g. setting
`allowed_lifetime_min` but not `allowed_lifetime_max` only enforces a
minimum and no maximum).
Like other settings in this section, these parameters can be expressed
either as a duration or as a number of milliseconds.
## Room configuration
To configure a room's message retention policy, a room's admin or
moderator needs to send a state event in that room with the type
`m.room.retention` and the following content:
```json
{
"max_lifetime": ...
}
```
In this event's content, the `max_lifetime` parameter has the same
meaning as previously described, and needs to be expressed in
milliseconds. The event's content can also include a `min_lifetime`
parameter, which has the same meaning and limited support as previously
described.
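For example, a hypothetical policy that expires events one day after they
are sent (one day being 86400000 milliseconds) could be set with this
event content:

```json
{
  "max_lifetime": 86400000
}
```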
Note that, of all the servers in the room, only those with support for
message retention policies will actually remove expired events. This
support is currently not enabled by default in Synapse.
## Note on reclaiming disk space
While purge jobs actually delete data from the database, the disk space
used by the database might not decrease immediately on the database's
host. However, even though the database engine won't free up the disk
space, it will reuse the space previously occupied by the purged data
when writing new data.
If you want to reclaim the freed disk space anyway and return it to the
operating system, run `VACUUM FULL;` (or
`VACUUM;` for SQLite databases) on Synapse's database (see the related
[PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-vacuum.html)).
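As a sketch, on a PostgreSQL host this could be run as follows:

```sql
-- Run while connected to Synapse's database, e.g. via `psql synapse`.
-- VACUUM FULL rewrites tables and takes exclusive locks, so it is best
-- run while Synapse is stopped or during a quiet period.
VACUUM FULL;
```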

View File

@@ -32,7 +32,7 @@ Assuming your PostgreSQL database user is called `postgres`, first authenticate
su - postgres
# Or, if your system uses sudo to get administrative rights
sudo -u postgres bash
Then, create a user ``synapse_user`` with:
createuser --pwprompt synapse_user
@@ -63,6 +63,24 @@ You may need to enable password authentication so `synapse_user` can
connect to the database. See
<https://www.postgresql.org/docs/11/auth-pg-hba-conf.html>.
### Fixing incorrect `COLLATE` or `CTYPE`
Synapse will refuse to set up a new database if it has the wrong values of
`COLLATE` and `CTYPE` set, and will log warnings on existing databases. Using
different locales can cause issues if the locale library is updated from
underneath the database, or if a different version of the locale is used on any
replicas.
The safest way to fix the issue is to take a dump and recreate the database with
the correct `COLLATE` and `CTYPE` parameters (as per
[docs/postgres.md](docs/postgres.md)). It is also possible to change the
parameters on a live database and run a `REINDEX` on the entire database,
however extreme care must be taken to avoid database corruption.
Note that the above may fail with an error about duplicate rows if corruption
has already occurred, and such duplicate rows will need to be manually removed.
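A minimal sketch of the dump-and-recreate approach, assuming the database
is called `synapse` and is owned by `synapse_user` (stop Synapse first,
and adjust names and locale to your setup):

```sh
sudo -u postgres pg_dump synapse > synapse_dump.sql
sudo -u postgres dropdb synapse
sudo -u postgres createdb --encoding=UTF8 --locale=C --template=template0 \
    --owner=synapse_user synapse
sudo -u postgres psql synapse < synapse_dump.sql
```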
## Tuning Postgres
The default settings should be fine for most deployments. For larger

View File

@@ -18,9 +18,10 @@ When setting up a reverse proxy, remember that Matrix clients and other
Matrix servers do not necessarily need to connect to your server via the
same server name or port. Indeed, clients will use port 443 by default,
whereas servers default to port 8448. Where these are different, we
refer to the 'client port' and the 'federation port'. See [Setting
up federation](federate.md) for more details of the algorithm used for
federation connections.
refer to the 'client port' and the 'federation port'. See [the Matrix
specification](https://matrix.org/docs/spec/server_server/latest#resolving-server-names)
for more details of the algorithm used for federation connections, and
[delegate.md](<delegate.md>) for instructions on setting up delegation.
Let's assume that we expect clients to connect to our server at
`https://matrix.example.com`, and other servers to connect at

View File

@@ -1,4 +1,4 @@
# The config is maintained as an up-to-date snapshot of the default
# This file is maintained as an up-to-date snapshot of the default
# homeserver.yaml configuration generated by Synapse.
#
# It is intended to act as a reference for the default configuration,
@@ -10,6 +10,16 @@
# homeserver.yaml. Instead, if you are starting from scratch, please generate
# a fresh config using Synapse by following the instructions in INSTALL.md.
################################################################################
# Configuration file for Synapse.
#
# This is a YAML file: see [1] for a quick introduction. Note in particular
# that *indentation is important*: all the elements of a list or dictionary
# should have the same indentation.
#
# [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
## Server ##
# The domain name of the server, with optional explicit port.
@@ -387,17 +397,17 @@ retention:
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
# of outdated messages on a very frequent basis (e.g. every 5min), but not want
# that purge to be performed by a job that's iterating over every room it knows,
# which would be quite heavy on the server.
# of outdated messages on a more frequent basis than for the rest of the rooms
# (e.g. every 12h), but not want that purge to be performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 5m:
# interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 24h
# interval: 1d
## TLS ##
@@ -466,6 +476,11 @@ retention:
# ACME support: This will configure Synapse to request a valid TLS certificate
# for your configured `server_name` via Let's Encrypt.
#
# Note that ACME v1 is now deprecated, and Synapse currently doesn't support
# ACME v2. This means that this feature currently won't work with installs set
# up after November 2019. For more info, and alternative solutions, see
# https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
#
# Note that provisioning a certificate in this way requires port 80 to be
# routed to Synapse so that it can complete the http-01 ACME challenge.
# By default, if you enable ACME support, Synapse will attempt to listen on
@@ -874,23 +889,6 @@ media_store_path: "DATADIR/media_store"
# Optional account validity configuration. This allows for accounts to be denied
# any request after a given period.
#
# ``enabled`` defines whether the account validity feature is enabled. Defaults
# to False.
#
# ``period`` allows setting the period after which an account is valid
# after its registration. When renewing the account, its validity period
# will be extended by this amount of time. This parameter is required when using
# the account validity feature.
#
# ``renew_at`` is the amount of time before an account's expiry date at which
# Synapse will send an email to the account's email address with a renewal link.
# This needs the ``email`` and ``public_baseurl`` configuration sections to be
# filled.
#
# ``renew_email_subject`` is the subject of the email sent out with the renewal
# link. ``%(app)s`` can be used as a placeholder for the ``app_name`` parameter
# from the ``email`` section.
#
# Once this feature is enabled, Synapse will look for registered users without an
# expiration date at startup and will add one to every account it found using the
# current settings at that time.
@@ -901,21 +899,55 @@ media_store_path: "DATADIR/media_store"
# date will be randomly selected within a range [now + period - d ; now + period],
# where d is equal to 10% of the validity period.
#
#account_validity:
# enabled: true
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %(app)s account"
# # Directory in which Synapse will try to find the HTML files to serve to the
# # user when trying to renew an account. Optional, defaults to
# # synapse/res/templates.
# template_dir: "res/templates"
# # HTML to be displayed to the user after they successfully renewed their
# # account. Optional.
# account_renewed_html_path: "account_renewed.html"
# # HTML to be displayed when the user tries to renew an account with an invalid
# # renewal token. Optional.
# invalid_token_html_path: "invalid_token.html"
account_validity:
# The account validity feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# The period after which an account is valid after its registration. When
# renewing the account, its validity period will be extended by this amount
# of time. This parameter is required when using the account validity
# feature.
#
#period: 6w
# The amount of time before an account's expiry date at which Synapse will
# send an email to the account's email address with a renewal link. By
# default, no such emails are sent.
#
# If you enable this setting, you will also need to fill out the 'email' and
# 'public_baseurl' configuration sections.
#
#renew_at: 1w
# The subject of the email sent out with the renewal link. '%(app)s' can be
# used as a placeholder for the 'app_name' parameter from the 'email'
# section.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
# If this is not set, a default value is used.
#
#renew_email_subject: "Renew your %(app)s account"
# Directory in which Synapse will try to find templates for the HTML files to
# serve to the user when trying to renew an account. If not set, default
# templates from within the Synapse package will be used.
#
#template_dir: "res/templates"
# File within 'template_dir' giving the HTML to be displayed to the user after
# they successfully renewed their account. If not set, default text is used.
#
#account_renewed_html_path: "account_renewed.html"
# File within 'template_dir' giving the HTML to be displayed when the user
# tries to renew an account with an invalid renewal token. If not set,
# default text is used.
#
#invalid_token_html_path: "invalid_token.html"
# Time that a user's session remains valid for, after they log in.
#
@@ -1315,6 +1347,25 @@ saml2_config:
#
#grandfathered_mxid_source_attribute: upn
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page to display to users if something goes wrong during the
# authentication process: 'saml_error.html'.
#
# This template doesn't currently need any variable to render.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
# Enable CAS for registration and login.
@@ -1328,6 +1379,56 @@ saml2_config:
# # name: value
# Additional settings to use with single-sign on systems such as SAML2 and CAS.
#
sso:
# A list of client URLs which are whitelisted so that the user does not
# have to confirm giving access to their account to the URL. Any client
# whose URL starts with an entry in the following list will not be subject
# to an additional confirmation step after the SSO login is completed.
#
# WARNING: An entry such as "https://my.client" is insecure, because it
# will also match "https://my.client.evil.site", exposing your users to
# phishing attacks from evil.site. To avoid this, include a slash after the
# hostname: "https://my.client/".
#
# By default, this list is empty.
#
#client_whitelist:
# - https://riot.im/develop
# - https://my.custom.client/
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page for a confirmation step before redirecting back to the client
# with the login token: 'sso_redirect_confirm.html'.
#
# When rendering, this template is given three variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * display_url: the same as `redirect_url`, but with the query
# parameters stripped. The intention is to have a
# human-readable URL to show to users, not to use it as
# the final address to redirect to. Needs manual escaping
# (see https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * server_name: the homeserver's name.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
# The JWT needs to contain a globally unique "sub" (subject) claim.
#
#jwt_config:
@@ -1353,107 +1454,111 @@ password_config:
#pepper: "EVEN_MORE_SECRET"
# Configuration for sending emails from Synapse.
#
email:
# The hostname of the outgoing SMTP server to use. Defaults to 'localhost'.
#
#smtp_host: mail.server
# Enable sending emails for password resets, notification events or
# account expiry notices
#
# If your SMTP server requires authentication, the optional smtp_user &
# smtp_pass variables should be used
#
#email:
# enable_notifs: false
# smtp_host: "localhost"
# smtp_port: 25 # SSL: 465, STARTTLS: 587
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
#
# # notif_from defines the "From" address to use when sending emails.
# # It must be set if email sending is enabled.
# #
# # The placeholder '%(app)s' will be replaced by the application name,
# # which is normally 'app_name' (below), but may be overridden by the
# # Matrix client application.
# #
# # Note that the placeholder must be written '%(app)s', including the
# # trailing 's'.
# #
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
#
# # app_name defines the default value for '%(app)s' in notif_from. It
# # defaults to 'Matrix'.
# #
# #app_name: my_branded_matrix_server
#
# # Enable email notifications by default
# #
# notif_for_new_users: true
#
# # Defining a custom URL for Riot is only needed if email notifications
# # should contain links to a self-hosted installation of Riot; when set
# # the "app_name" setting is ignored
# #
# riot_base_url: "http://localhost/riot"
#
# # Configure the time that a validation email or text message code
# # will expire after sending
# #
# # This is currently used for password resets
# #
# #validation_token_lifetime: 1h
#
# # Template directory. All template files should be stored within this
# # directory. If not set, default templates from within the Synapse
# # package will be used
# #
# # For the list of default templates, please see
# # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
# #
# #template_dir: res/templates
#
# # Templates for email notifications
# #
# notif_template_html: notif_mail.html
# notif_template_text: notif_mail.txt
#
# # Templates for account expiry notices
# #
# expiry_template_html: notice_expiry.html
# expiry_template_text: notice_expiry.txt
#
# # Templates for password reset emails sent by the homeserver
# #
# #password_reset_template_html: password_reset.html
# #password_reset_template_text: password_reset.txt
#
# # Templates for registration emails sent by the homeserver
# #
# #registration_template_html: registration.html
# #registration_template_text: registration.txt
#
# # Templates for validation emails sent by the homeserver when adding an email to
# # your user account
# #
# #add_threepid_template_html: add_threepid.html
# #add_threepid_template_text: add_threepid.txt
#
# # Templates for password reset success and failure pages that a user
# # will see after attempting to reset their password
# #
# #password_reset_template_success_html: password_reset_success.html
# #password_reset_template_failure_html: password_reset_failure.html
#
# # Templates for registration success and failure pages that a user
# # will see after attempting to register using an email or phone
# #
# #registration_template_success_html: registration_success.html
# #registration_template_failure_html: registration_failure.html
#
# # Templates for success and failure pages that a user will see after attempting
# # to add an email or phone to their account
# #
# #add_threepid_success_html: add_threepid_success.html
# #add_threepid_failure_html: add_threepid_failure.html
# The port on the mail server for outgoing SMTP. Defaults to 25.
#
#smtp_port: 587
# Username/password for authentication to the SMTP server. By default, no
# authentication is attempted.
#
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# Uncomment the following to require TLS transport security for SMTP.
# By default, Synapse will connect over plain text, and will then switch to
# TLS via STARTTLS *if the SMTP server supports it*. If this option is set,
# Synapse will refuse to connect unless the server supports STARTTLS.
#
#require_transport_security: true
# notif_from defines the "From" address to use when sending emails.
# It must be set if email sending is enabled.
#
# The placeholder '%(app)s' will be replaced by the application name,
# which is normally 'app_name' (below), but may be overridden by the
# Matrix client application.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
#notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name defines the default value for '%(app)s' in notif_from. It
# defaults to 'Matrix'.
#
#app_name: my_branded_matrix_server
# Uncomment the following to enable sending emails for messages that the user
# has missed. Disabled by default.
#
#enable_notifs: true
# Uncomment the following to disable automatic subscription to email
# notifications for new users. Enabled by default.
#
#notif_for_new_users: false
# Custom URL for client links within the email notifications. By default
# links will be based on "https://matrix.to".
#
# (This setting used to be called riot_base_url; the old name is still
# supported for backwards-compatibility but is now deprecated.)
#
#client_base_url: "http://localhost/riot"
# Configure the time that a validation email will expire after sending.
# Defaults to 1h.
#
#validation_token_lifetime: 15m
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * The contents of email notifications of missed events: 'notif_mail.html' and
# 'notif_mail.txt'.
#
# * The contents of account expiry notice emails: 'notice_expiry.html' and
# 'notice_expiry.txt'.
#
# * The contents of password reset emails sent by the homeserver:
# 'password_reset.html' and 'password_reset.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in the password reset email: 'password_reset_success.html' and
# 'password_reset_failure.html'
#
# * The contents of address verification emails sent during registration:
# 'registration.html' and 'registration.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent during registration:
# 'registration_success.html' and 'registration_failure.html'
#
# * The contents of address verification emails sent when an address is added
# to a Matrix account: 'add_threepid.html' and 'add_threepid.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent when an address is added
# to a Matrix account: 'add_threepid_success.html' and
# 'add_threepid_failure.html'
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
#password_providers:

docs/spam_checker.md
View File

@@ -0,0 +1,88 @@
# Handling spam in Synapse
Synapse has support to customize spam checking behavior. It can plug into a
variety of events and affect how they are presented to users on your homeserver.
The spam checking behavior is implemented as a Python class, which must be
importable by the running Synapse.
## Python spam checker class
The Python class is instantiated with two objects:
* Any configuration (see below).
* An instance of `synapse.spam_checker_api.SpamCheckerApi`.
It then implements methods which return a boolean to alter behavior in Synapse.
There's a generic method for checking every event (`check_event_for_spam`), as
well as some specific methods:
* `user_may_invite`
* `user_may_create_room`
* `user_may_create_room_alias`
* `user_may_publish_room`
The details of each of these methods (as well as their inputs and outputs)
are documented in the `synapse.events.spamcheck.SpamChecker` class.
The `SpamCheckerApi` class provides a way for the custom spam checker class to
call back into the homeserver internals. It currently implements the following
methods:
* `get_state_events_in_room`
### Example
```python
class ExampleSpamChecker:
    def __init__(self, config, api):
        self.config = config
        self.api = api

    def check_event_for_spam(self, foo):
        return False  # allow all events

    def user_may_invite(self, inviter_userid, invitee_userid, room_id):
        return True  # allow all invites

    def user_may_create_room(self, userid):
        return True  # allow all room creations

    def user_may_create_room_alias(self, userid, room_alias):
        return True  # allow all room aliases

    def user_may_publish_room(self, userid, room_id):
        return True  # allow publishing of all rooms

    def check_username_for_spam(self, user_profile):
        return False  # allow all usernames
```
## Configuration
Modify the `spam_checker` section of your `homeserver.yaml` in the following
manner:
`module` should point to the fully qualified Python class that implements your
custom logic, e.g. `my_module.ExampleSpamChecker`.
`config` is a dictionary that gets passed to the spam checker class.
### Example
This section might look like:
```yaml
spam_checker:
  module: my_module.ExampleSpamChecker
  config:
    # Enable or disable a specific option in ExampleSpamChecker.
    my_custom_option: true
```
## Examples
The [Mjolnir](https://github.com/matrix-org/mjolnir) project is a full-fledged
example using the Synapse spam checking API, including a bot for dynamic
configuration.

View File

@@ -209,7 +209,7 @@ Where `<token>` may be either:
* a numeric stream_id to stream updates since (exclusive)
* `NOW` to stream all subsequent updates.
The `<stream_name>` is the name of a replication stream to subscribe
to (see [here](../synapse/replication/tcp/streams/_base.py) for a list
of streams). It can also be `ALL` to subscribe to all known streams,
in which case the `<token>` must be set to `NOW`.
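For example, a worker that wants all subsequent updates for every stream
might send:
> REPLICATE ALL NOW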
@@ -234,6 +234,10 @@ in which case the `<token>` must be set to `NOW`.
Used exclusively in tests
### REMOTE_SERVER_UP (S, C)
Inform other processes that a remote server may have come back online.
See `synapse/replication/tcp/commands.py` for a detailed description and
the format of each command.
@@ -250,6 +254,11 @@ and the key to invalidate. For example:
> RDATA caches 550953771 ["get_user_by_id", ["@bob:example.com"], 1550574873251]
Alternatively, an entire cache can be invalidated by sending down a `null`
instead of the key. For example:
> RDATA caches 550953772 ["get_user_by_id", null, 1550574873252]
However, there are times when a number of caches need to be invalidated
at the same time with the same key. To reduce traffic we batch those
invalidations into a single poke by defining a special cache name that

View File

@@ -168,20 +168,42 @@ endpoints matching the following regular expressions:
^/_matrix/federation/v1/make_join/
^/_matrix/federation/v1/make_leave/
^/_matrix/federation/v1/send_join/
^/_matrix/federation/v2/send_join/
^/_matrix/federation/v1/send_leave/
^/_matrix/federation/v2/send_leave/
^/_matrix/federation/v1/invite/
^/_matrix/federation/v2/invite/
^/_matrix/federation/v1/query_auth/
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/exchange_third_party_invite/
^/_matrix/federation/v1/user/devices/
^/_matrix/federation/v1/send/
^/_matrix/federation/v1/get_groups_publicised$
^/_matrix/key/v2/query
Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/federation/v1/groups/
The above endpoints should all be routed to the federation_reader worker by the
reverse-proxy configuration.
The `^/_matrix/federation/v1/send/` endpoint must only be handled by a single
instance.
Note that `federation` must be added to the listener resources in the worker config:
```yaml
worker_app: synapse.app.federation_reader
...
worker_listeners:
  - type: http
    port: <port>
    resources:
      - names:
          - federation
```
### `synapse.app.federation_sender`
Handles sending federation traffic to other servers. Doesn't handle any
@@ -199,7 +221,9 @@ Handles the media repository. It can handle all endpoints starting with:
... and the following regular expressions matching media-specific administration APIs:
^/_synapse/admin/v1/purge_media_cache$
^/_synapse/admin/v1/room/.*/media$
^/_synapse/admin/v1/room/.*/media.*$
^/_synapse/admin/v1/user/.*/media.*$
^/_synapse/admin/v1/media/.*$
^/_synapse/admin/v1/quarantine_media/.*$
You should also set `enable_media_repo: False` in the shared configuration
@@ -236,15 +260,20 @@ following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|unstable)/pushrules/.*$
^/_matrix/client/(api/v1|r0|unstable)/groups/.*$
Additionally, the following REST endpoints can be handled, but all requests must
be routed to the same instance:
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
@@ -260,6 +289,10 @@ the following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
When using this worker you must also set `update_user_directory: False` in the
shared configuration file to stop the main synapse process from running
background jobs related to updating the user directory.
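For example, in the shared configuration file (a minimal sketch):

```yaml
update_user_directory: false
```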
### `synapse.app.frontend_proxy`
Proxies some frequently-requested client endpoints to add caching and remove
@@ -288,6 +321,7 @@ file. For example:
Handles some event creation. It can handle REST endpoints matching:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/

View File

@@ -7,6 +7,9 @@ show_error_codes = True
show_traceback = True
mypy_path = stubs
[mypy-pymacaroons.*]
ignore_missing_imports = True
[mypy-zope]
ignore_missing_imports = True
@@ -63,3 +66,12 @@ ignore_missing_imports = True
[mypy-sentry_sdk]
ignore_missing_imports = True
[mypy-PIL.*]
ignore_missing_imports = True
[mypy-lxml]
ignore_missing_imports = True
[mypy-jwt.*]
ignore_missing_imports = True

View File

@@ -3,7 +3,8 @@
# Exits with 0 if there are no problems, or another code otherwise.
# Fix non-lowercase true/false values
sed -i -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml
sed -i.bak -E "s/: +True/: true/g; s/: +False/: false/g;" docs/sample_config.yaml
rm docs/sample_config.yaml.bak
# Check if anything changed
git diff --exit-code docs/sample_config.yaml

View File

@@ -103,7 +103,7 @@ def main():
yaml.safe_dump(result, sys.stdout, default_flow_style=False)
rows = list(row for server, json in result.items() for row in rows_v2(server, json))
rows = [row for server, json in result.items() for row in rows_v2(server, json)]
cursor = connection.cursor()
cursor.executemany(

View File

@@ -22,10 +22,12 @@ import yaml
from twisted.internet import defer, reactor
import synapse
from synapse.config.homeserver import HomeServerConfig
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("update_database")
@@ -38,6 +40,8 @@ class MockHomeserver(HomeServer):
config.server_name, reactor=reactor, config=config, **kwargs
)
self.version_string = "Synapse/"+get_version_string(synapse)
if __name__ == "__main__":
parser = argparse.ArgumentParser(
@@ -81,15 +85,17 @@ if __name__ == "__main__":
hs.setup()
store = hs.get_datastore()
@defer.inlineCallbacks
def run_background_updates():
yield store.db.updates.run_background_updates(sleep=False)
async def run_background_updates():
await store.db.updates.run_background_updates(sleep=False)
# Stop the reactor to exit the script once every background update is run.
reactor.stop()
# Apply all background updates on the database.
reactor.callWhenRunning(
lambda: run_as_background_process("background_updates", run_background_updates)
)
def run():
# Apply all background updates on the database.
defer.ensureDeferred(
run_as_background_process("background_updates", run_background_updates)
)
reactor.callWhenRunning(run)
reactor.run()

View File

@@ -52,7 +52,7 @@ if __name__ == "__main__":
if "config" in args and args.config:
config = yaml.safe_load(args.config)
bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
password_config = config.get("password_config", {})
password_config = config.get("password_config", None) or {}
password_pepper = password_config.get("pepper", password_pepper)
password = args.password

View File

@@ -27,13 +27,16 @@ from six import string_types
import yaml
from twisted.enterprise import adbapi
from twisted.internet import defer, reactor
import synapse
from synapse.config.database import DatabaseConnectionConfig
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import PreserveLoggingContext
from synapse.storage._base import LoggingTransaction
from synapse.logging.context import (
LoggingContext,
make_deferred_yieldable,
run_in_background,
)
from synapse.storage.data_stores.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.data_stores.main.deviceinbox import (
DeviceInboxBackgroundUpdateStore,
@@ -61,6 +64,7 @@ from synapse.storage.database import Database, make_conn
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util import Clock
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse_port_db")
@@ -125,6 +129,13 @@ APPEND_ONLY_TABLES = [
]
# Error returned by the run function. Used at the top-level part of the script to
# handle errors and return codes.
end_error = None
# The exec_info for the error, if any. If error is defined but not exec_info the script
# will show only the error message without the stacktrace, if exec_info is defined but
# not the error then the script will show nothing outside of what's printed in the run
# function. If both are defined, the script will print both the error and the stacktrace.
end_error_exec_info = None
@@ -177,6 +188,7 @@ class MockHomeserver:
self.clock = Clock(reactor)
self.config = config
self.hostname = config.server_name
self.version_string = "Synapse/"+get_version_string(synapse)
def get_clock(self):
return self.clock
@@ -189,11 +201,10 @@ class Porter(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
@defer.inlineCallbacks
def setup_table(self, table):
async def setup_table(self, table):
if table in APPEND_ONLY_TABLES:
# It's safe to just carry on inserting.
row = yield self.postgres_store.db.simple_select_one(
row = await self.postgres_store.db.simple_select_one(
table="port_from_sqlite3",
keyvalues={"table_name": table},
retcols=("forward_rowid", "backward_rowid"),
@@ -207,10 +218,10 @@ class Porter(object):
forward_chunk,
already_ported,
total_to_port,
) = yield self._setup_sent_transactions()
) = await self._setup_sent_transactions()
backward_chunk = 0
else:
yield self.postgres_store.db.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": table,
@@ -227,7 +238,7 @@ class Porter(object):
backward_chunk = row["backward_rowid"]
if total_to_port is None:
already_ported, total_to_port = yield self._get_total_count_to_port(
already_ported, total_to_port = await self._get_total_count_to_port(
table, forward_chunk, backward_chunk
)
else:
@@ -238,9 +249,9 @@ class Porter(object):
)
txn.execute("TRUNCATE %s CASCADE" % (table,))
yield self.postgres_store.execute(delete_all)
await self.postgres_store.execute(delete_all)
yield self.postgres_store.db.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={"table_name": table, "forward_rowid": 1, "backward_rowid": 0},
)
@@ -248,16 +259,13 @@ class Porter(object):
forward_chunk = 1
backward_chunk = 0
already_ported, total_to_port = yield self._get_total_count_to_port(
already_ported, total_to_port = await self._get_total_count_to_port(
table, forward_chunk, backward_chunk
)
defer.returnValue(
(table, already_ported, total_to_port, forward_chunk, backward_chunk)
)
return table, already_ported, total_to_port, forward_chunk, backward_chunk
@defer.inlineCallbacks
def handle_table(
async def handle_table(
self, table, postgres_size, table_size, forward_chunk, backward_chunk
):
logger.info(
@@ -275,7 +283,7 @@ class Porter(object):
self.progress.add_table(table, postgres_size, table_size)
if table == "event_search":
yield self.handle_search_table(
await self.handle_search_table(
postgres_size, table_size, forward_chunk, backward_chunk
)
return
@@ -294,7 +302,7 @@ class Porter(object):
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null), as that is
# what synapse expects to be there.
yield self.postgres_store.db.simple_insert(
await self.postgres_store.db.simple_insert(
table=table, values={"stream_id": None}
)
self.progress.update(table, table_size) # Mark table as done
@@ -335,7 +343,7 @@ class Porter(object):
return headers, forward_rows, backward_rows
headers, frows, brows = yield self.sqlite_store.db.runInteraction(
headers, frows, brows = await self.sqlite_store.db.runInteraction(
"select", r
)
@@ -361,7 +369,7 @@ class Porter(object):
},
)
yield self.postgres_store.execute(insert)
await self.postgres_store.execute(insert)
postgres_size += len(rows)
@@ -369,8 +377,7 @@ class Porter(object):
else:
return
@defer.inlineCallbacks
def handle_search_table(
async def handle_search_table(
self, postgres_size, table_size, forward_chunk, backward_chunk
):
select = (
@@ -390,7 +397,7 @@ class Porter(object):
return headers, rows
headers, rows = yield self.sqlite_store.db.runInteraction("select", r)
headers, rows = await self.sqlite_store.db.runInteraction("select", r)
if rows:
forward_chunk = rows[-1][0] + 1
@@ -438,7 +445,7 @@ class Porter(object):
},
)
yield self.postgres_store.execute(insert)
await self.postgres_store.execute(insert)
postgres_size += len(rows)
@@ -447,20 +454,15 @@ class Porter(object):
else:
return
def setup_db(self, db_config: DatabaseConnectionConfig, engine):
db_conn = make_conn(db_config, engine)
prepare_database(db_conn, engine, config=None)
db_conn.commit()
return db_conn
@defer.inlineCallbacks
def build_db_store(self, db_config: DatabaseConnectionConfig):
def build_db_store(
self, db_config: DatabaseConnectionConfig, allow_outdated_version: bool = False,
):
"""Builds and returns a database store using the provided configuration.
Args:
config: The database configuration
db_config: The database configuration
allow_outdated_version: True to suppress errors about the database server
version being too old to run a complete synapse
Returns:
The built Store object.
@@ -468,24 +470,23 @@ class Porter(object):
self.progress.set_state("Preparing %s" % db_config.config["name"])
engine = create_engine(db_config.config)
conn = self.setup_db(db_config, engine)
hs = MockHomeserver(self.hs_config)
store = Store(Database(hs, db_config, engine), conn, hs)
yield store.db.runInteraction(
"%s_engine.check_database" % db_config.config["name"],
engine.check_database,
)
with make_conn(db_config, engine) as db_conn:
engine.check_database(
db_conn, allow_outdated_version=allow_outdated_version
)
prepare_database(db_conn, engine, config=self.hs_config)
store = Store(Database(hs, db_config, engine), db_conn, hs)
db_conn.commit()
return store
@defer.inlineCallbacks
def run_background_updates_on_postgres(self):
async def run_background_updates_on_postgres(self):
# Manually apply all background updates on the PostgreSQL database.
postgres_ready = (
yield self.postgres_store.db.updates.has_completed_background_updates()
await self.postgres_store.db.updates.has_completed_background_updates()
)
if not postgres_ready:
@@ -494,35 +495,44 @@ class Porter(object):
self.progress.set_state("Running background updates on PostgreSQL")
while not postgres_ready:
yield self.postgres_store.db.updates.do_next_background_update(100)
postgres_ready = yield (
await self.postgres_store.db.updates.do_next_background_update(100)
postgres_ready = await (
self.postgres_store.db.updates.has_completed_background_updates()
)
@defer.inlineCallbacks
def run(self):
async def run(self):
"""Ports the SQLite database to a PostgreSQL database.
When a fatal error is met, its message is assigned to the global "end_error"
variable. When this error comes with a stacktrace, its exec_info is assigned to
the global "end_error_exec_info" variable.
"""
global end_error
try:
self.sqlite_store = yield self.build_db_store(
DatabaseConnectionConfig("master-sqlite", self.sqlite_config)
# we allow people to port away from outdated versions of sqlite.
self.sqlite_store = self.build_db_store(
DatabaseConnectionConfig("master-sqlite", self.sqlite_config),
allow_outdated_version=True,
)
# Check if all background updates are done, abort if not.
updates_complete = (
yield self.sqlite_store.db.updates.has_completed_background_updates()
await self.sqlite_store.db.updates.has_completed_background_updates()
)
if not updates_complete:
sys.stderr.write(
end_error = (
"Pending background updates exist in the SQLite3 database."
" Please start Synapse again and wait until every update has finished"
" before running this script.\n"
)
defer.returnValue(None)
return
self.postgres_store = yield self.build_db_store(
self.postgres_store = self.build_db_store(
self.hs_config.get_single_database()
)
yield self.run_background_updates_on_postgres()
await self.run_background_updates_on_postgres()
self.progress.set_state("Creating port tables")
@@ -550,22 +560,22 @@ class Porter(object):
)
try:
yield self.postgres_store.db.runInteraction("alter_table", alter_table)
await self.postgres_store.db.runInteraction("alter_table", alter_table)
except Exception:
# On Error Resume Next
pass
yield self.postgres_store.db.runInteraction(
await self.postgres_store.db.runInteraction(
"create_port_table", create_port_table
)
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store.db.simple_select_onecol(
sqlite_tables = await self.sqlite_store.db.simple_select_onecol(
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
)
postgres_tables = yield self.postgres_store.db.simple_select_onecol(
postgres_tables = await self.postgres_store.db.simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
@@ -576,28 +586,34 @@ class Porter(object):
# Step 3. Figure out what still needs copying
self.progress.set_state("Checking on port progress")
setup_res = yield defer.gatherResults(
[
self.setup_table(table)
for table in tables
if table not in ["schema_version", "applied_schema_deltas"]
and not table.startswith("sqlite_")
],
consumeErrors=True,
setup_res = await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(self.setup_table, table)
for table in tables
if table not in ["schema_version", "applied_schema_deltas"]
and not table.startswith("sqlite_")
],
consumeErrors=True,
)
)
# Step 4. Do the copying.
self.progress.set_state("Copying to postgres")
yield defer.gatherResults(
[self.handle_table(*res) for res in setup_res], consumeErrors=True
await make_deferred_yieldable(
defer.gatherResults(
[run_in_background(self.handle_table, *res) for res in setup_res],
consumeErrors=True,
)
)
# Step 5. Do final post-processing
yield self._setup_state_group_id_seq()
await self._setup_state_group_id_seq()
self.progress.done()
except Exception:
except Exception as e:
global end_error_exec_info
end_error = e
end_error_exec_info = sys.exc_info()
logger.exception("")
finally:
@@ -637,8 +653,7 @@ class Porter(object):
return outrows
@defer.inlineCallbacks
def _setup_sent_transactions(self):
async def _setup_sent_transactions(self):
# Only save things from the last day
yesterday = int(time.time() * 1000) - 86400000
@@ -659,7 +674,7 @@ class Porter(object):
return headers, [r for r in rows if r[ts_ind] < yesterday]
headers, rows = yield self.sqlite_store.db.runInteraction("select", r)
headers, rows = await self.sqlite_store.db.runInteraction("select", r)
rows = self._convert_rows("sent_transactions", headers, rows)
@@ -672,7 +687,7 @@ class Porter(object):
txn, "sent_transactions", headers[1:], rows
)
yield self.postgres_store.execute(insert)
await self.postgres_store.execute(insert)
else:
max_inserted_rowid = 0
@@ -689,10 +704,10 @@ class Porter(object):
else:
return 1
next_chunk = yield self.sqlite_store.execute(get_start_id)
next_chunk = await self.sqlite_store.execute(get_start_id)
next_chunk = max(max_inserted_rowid + 1, next_chunk)
yield self.postgres_store.db.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": "sent_transactions",
@@ -708,46 +723,49 @@ class Porter(object):
(size,) = txn.fetchone()
return int(size)
remaining_count = yield self.sqlite_store.execute(get_sent_table_size)
remaining_count = await self.sqlite_store.execute(get_sent_table_size)
total_count = remaining_count + inserted_rows
defer.returnValue((next_chunk, inserted_rows, total_count))
return next_chunk, inserted_rows, total_count
@defer.inlineCallbacks
def _get_remaining_count_to_port(self, table, forward_chunk, backward_chunk):
frows = yield self.sqlite_store.execute_sql(
async def _get_remaining_count_to_port(self, table, forward_chunk, backward_chunk):
frows = await self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid >= ?" % (table,), forward_chunk
)
brows = yield self.sqlite_store.execute_sql(
brows = await self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid <= ?" % (table,), backward_chunk
)
defer.returnValue(frows[0][0] + brows[0][0])
return frows[0][0] + brows[0][0]
@defer.inlineCallbacks
def _get_already_ported_count(self, table):
rows = yield self.postgres_store.execute_sql(
async def _get_already_ported_count(self, table):
rows = await self.postgres_store.execute_sql(
"SELECT count(*) FROM %s" % (table,)
)
defer.returnValue(rows[0][0])
return rows[0][0]
@defer.inlineCallbacks
def _get_total_count_to_port(self, table, forward_chunk, backward_chunk):
remaining, done = yield defer.gatherResults(
[
self._get_remaining_count_to_port(table, forward_chunk, backward_chunk),
self._get_already_ported_count(table),
],
consumeErrors=True,
async def _get_total_count_to_port(self, table, forward_chunk, backward_chunk):
remaining, done = await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self._get_remaining_count_to_port,
table,
forward_chunk,
backward_chunk,
),
run_in_background(self._get_already_ported_count, table),
],
)
)
remaining = int(remaining) if remaining else 0
done = int(done) if done else 0
defer.returnValue((done, remaining + done))
return done, remaining + done
def _setup_state_group_id_seq(self):
def r(txn):
@@ -1013,7 +1031,12 @@ if __name__ == "__main__":
hs_config=config,
)
reactor.callWhenRunning(porter.run)
@defer.inlineCallbacks
def run():
with LoggingContext("synapse_port_db_run"):
yield defer.ensureDeferred(porter.run())
reactor.callWhenRunning(run)
reactor.run()
@@ -1022,7 +1045,11 @@ if __name__ == "__main__":
else:
start()
if end_error_exec_info:
exc_type, exc_value, exc_traceback = end_error_exec_info
traceback.print_exception(exc_type, exc_value, exc_traceback)
if end_error:
if end_error_exec_info:
exc_type, exc_value, exc_traceback = end_error_exec_info
traceback.print_exception(exc_type, exc_value, exc_traceback)
sys.stderr.write(end_error)
sys.exit(5)

View File

@@ -1,20 +1,31 @@
name: matrix-synapse
base: core18
version: git
summary: Reference Matrix homeserver
description: |
Synapse is the reference Matrix homeserver.
Matrix is a federated and decentralised instant messaging and VoIP system.
grade: stable
confinement: strict
apps:
matrix-synapse:
command: synctl --no-daemonize start $SNAP_COMMON/homeserver.yaml
stop-command: synctl -c $SNAP_COMMON stop
plugs: [network-bind, network]
daemon: simple
hash-password:
command: hash_password
generate-config:
command: generate_config
generate-signing-key:
command: generate_signing_key.py
register-new-matrix-user:
command: register_new_matrix_user
plugs: [network]
synctl:
command: synctl
parts:
matrix-synapse:
source: .

View File

@@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.8.0"
__version__ = "1.12.1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -14,7 +14,7 @@
# limitations under the License.
import logging
from typing import Dict, Tuple
from typing import Optional
from six import itervalues
@@ -34,8 +34,10 @@ from synapse.api.errors import (
MissingClientTokenError,
ResourceLimitError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.config.server import is_threepid_reserved
from synapse.types import UserID
from synapse.events import EventBase
from synapse.types import StateMap, UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
@@ -78,32 +80,48 @@ class Auth(object):
self._account_validity = hs.config.account_validity
@defer.inlineCallbacks
def check_from_context(self, room_version, event, context, do_sig_check=True):
def check_from_context(self, room_version: str, event, context, do_sig_check=True):
prev_state_ids = yield context.get_prev_state_ids()
auth_events_ids = yield self.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in itervalues(auth_events)}
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
event_auth.check(
room_version, event, auth_events=auth_events, do_sig_check=do_sig_check
room_version_obj, event, auth_events=auth_events, do_sig_check=do_sig_check
)
@defer.inlineCallbacks
def check_joined_room(self, room_id, user_id, current_state=None):
"""Check if the user is currently joined in the room
def check_user_in_room(
self,
room_id: str,
user_id: str,
current_state: Optional[StateMap[EventBase]] = None,
allow_departed_users: bool = False,
):
"""Check if the user is in the room, or was at some point.
Args:
room_id(str): The room to check.
user_id(str): The user to check.
current_state(dict): Optional map of the current state of the room.
room_id: The room to check.
user_id: The user to check.
current_state: Optional map of the current state of the room.
If provided then that map is used to check whether they are a
member of the room. Otherwise the current membership is
loaded from the database.
allow_departed_users: if True, accept users that were previously
members but have now departed.
Raises:
AuthError if the user is not in the room.
AuthError if the user is/was not in the room.
Returns:
A deferred membership event for the user if the user is in
the room.
Deferred[Optional[EventBase]]:
Membership event for the user if the user was in the
room. This will be the join event if they are currently joined to
the room. This will be the leave event if they have left the room.
"""
if current_state:
member = current_state.get((EventTypes.Member, user_id), None)
@@ -111,37 +129,19 @@ class Auth(object):
member = yield self.state.get_current_state(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
)
self._check_joined_room(member, user_id, room_id)
return member
@defer.inlineCallbacks
def check_user_was_in_room(self, room_id, user_id):
"""Check if the user was in the room at some point.
Args:
room_id(str): The room to check.
user_id(str): The user to check.
Raises:
AuthError if the user was never in the room.
Returns:
A deferred membership event for the user if the user was in the
room. This will be the join event if they are currently joined to
the room. This will be the leave event if they have left the room.
"""
member = yield self.state.get_current_state(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
)
membership = member.membership if member else None
if membership not in (Membership.JOIN, Membership.LEAVE):
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
if membership == Membership.JOIN:
return member
if membership == Membership.LEAVE:
# XXX this looks totally bogus. Why do we not allow users who have been banned,
# or those who were members previously and have been re-invited?
if allow_departed_users and membership == Membership.LEAVE:
forgot = yield self.store.did_forget(user_id, room_id)
if forgot:
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
if not forgot:
return member
return member
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
@defer.inlineCallbacks
def check_host_in_room(self, room_id, host):
@@ -149,12 +149,6 @@ class Auth(object):
latest_event_ids = yield self.store.is_host_joined(room_id, host)
return latest_event_ids
def _check_joined_room(self, member, user_id, room_id):
if not member or member.membership != Membership.JOIN:
raise AuthError(
403, "User %s not in room %s (%s)" % (user_id, room_id, repr(member))
)
def can_federate(self, event, auth_events):
creation_event = auth_events.get((EventTypes.Create, ""))
@@ -509,10 +503,7 @@ class Auth(object):
return self.store.is_server_admin(user)
def compute_auth_events(
self,
event,
current_state_ids: Dict[Tuple[str, str], str],
for_verification: bool = False,
self, event, current_state_ids: StateMap[str], for_verification: bool = False,
):
"""Given an event and current state return the list of event IDs used
to auth an event.
@@ -547,13 +538,13 @@ class Auth(object):
return defer.succeed(auth_ids)
@defer.inlineCallbacks
def check_can_change_room_list(self, room_id, user):
"""Check if the user is allowed to edit the room's entry in the
def check_can_change_room_list(self, room_id: str, user: UserID):
"""Determine whether the user is allowed to edit the room's entry in the
published room list.
Args:
room_id (str)
user (UserID)
room_id
user
"""
is_admin = yield self.is_server_admin(user)
@@ -561,11 +552,11 @@ class Auth(object):
return True
user_id = user.to_string()
yield self.check_joined_room(room_id, user_id)
yield self.check_user_in_room(room_id, user_id)
# We currently require the user is a "moderator" in the room. We do this
# by checking if they would (theoretically) be able to change the
# m.room.aliases events
# m.room.canonical_alias events
power_level_event = yield self.state.get_current_state(
room_id, EventTypes.PowerLevels, ""
)
@@ -575,16 +566,11 @@ class Auth(object):
auth_events[(EventTypes.PowerLevels, "")] = power_level_event
send_level = event_auth.get_send_level(
EventTypes.CanonicalAlias, "", power_level_event
)
user_level = event_auth.get_user_power_level(user_id, auth_events)
return user_level >= send_level
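The method now returns a boolean instead of raising, leaving the error message to the caller. A rough model of the comparison, with Matrix's usual default levels filled in (the real helpers live in synapse.event_auth; this is an illustration, not that code):

# Rough model of the power-level gate above; defaults are the usual
# Matrix defaults, not values read from this diff.
def can_change_room_list(power_levels, user_id):
    events = power_levels.get("events", {})
    send_level = events.get(
        "m.room.canonical_alias", power_levels.get("state_default", 50)
    )
    user_level = power_levels.get("users", {}).get(
        user_id, power_levels.get("users_default", 0)
    )
    return user_level >= send_level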
@staticmethod
def has_access_token(request):
@@ -634,10 +620,18 @@ class Auth(object):
return query_params[0].decode("ascii")
@defer.inlineCallbacks
def check_user_in_room_or_world_readable(
    self, room_id: str, user_id: str, allow_departed_users: bool = False
):
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.
Args:
room_id: room to check
user_id: user to check
allow_departed_users: if True, accept users that were previously
members but have now departed
Returns:
Deferred[tuple[str, str|None]]: Resolves to the current membership of
the user in the room and the membership event ID of the user. If
@@ -646,12 +640,14 @@ class Auth(object):
"""
try:
# check_user_in_room will return the most recent membership
# event for the user if:
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.check_user_in_room(
    room_id, user_id, allow_departed_users=allow_departed_users
)
return member_event.membership, member_event.event_id
except AuthError:
visibility = yield self.state.get_current_state(
@@ -663,7 +659,9 @@ class Auth(object):
):
return Membership.JOIN, None
raise AuthError(
403, "Guest access not allowed", errcode=Codes.GUEST_ACCESS_FORBIDDEN
403,
"User %s not in room %s, and room previews are disabled"
% (user_id, room_id),
)
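End to end, the method tries membership first and only falls back to room visibility on failure; a hedged async sketch of that flow (object and helper names assumed, not taken from this diff):

# Sketch of the fallback flow above: membership check, then world_readable.
async def membership_or_world_readable(auth, state, room_id, user_id):
    try:
        member = await auth.check_user_in_room(
            room_id, user_id, allow_departed_users=True
        )
        return member.membership, member.event_id
    except AuthError:  # AuthError as defined in synapse.api.errors
        vis = await state.get_current_state(
            room_id, "m.room.history_visibility", ""
        )
        if vis and vis.content.get("history_visibility") == "world_readable":
            return "join", None
        raise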
@defer.inlineCallbacks

View File

@@ -77,12 +77,11 @@ class EventTypes(object):
Aliases = "m.room.aliases"
Redaction = "m.room.redaction"
ThirdPartyInvite = "m.room.third_party_invite"
Encryption = "m.room.encryption"
RelatedGroups = "m.room.related_groups"
RoomHistoryVisibility = "m.room.history_visibility"
CanonicalAlias = "m.room.canonical_alias"
Encryption = "m.room.encryption"
Encrypted = "m.room.encrypted"
RoomAvatar = "m.room.avatar"
RoomEncryption = "m.room.encryption"
GuestAccess = "m.room.guest_access"

View File

@@ -17,13 +17,15 @@
"""Contains exceptions and error codes."""
import logging
from typing import Dict, List
from six import iteritems
from six.moves import http_client
from canonicaljson import json
from twisted.web import http
logger = logging.getLogger(__name__)
@@ -64,6 +66,7 @@ class Codes(object):
EXPIRED_ACCOUNT = "ORG_MATRIX_EXPIRED_ACCOUNT"
INVALID_SIGNATURE = "M_INVALID_SIGNATURE"
USER_DEACTIVATED = "M_USER_DEACTIVATED"
BAD_ALIAS = "M_BAD_ALIAS"
class CodeMessageException(RuntimeError):
@@ -80,6 +83,29 @@ class CodeMessageException(RuntimeError):
self.msg = msg
class RedirectException(CodeMessageException):
"""A pseudo-error indicating that we want to redirect the client to a different
location
Attributes:
cookies: a list of Set-Cookie values to add to the response. For example:
b"sessionId=a3fWa; Expires=Wed, 21 Oct 2015 07:28:00 GMT"
"""
def __init__(self, location: bytes, http_code: int = http.FOUND):
"""
Args:
location: the URI to redirect to
http_code: the HTTP response code
"""
msg = "Redirect to %s" % (location.decode("utf-8"),)
super().__init__(code=http_code, msg=msg)
self.location = location
self.cookies = [] # type: List[bytes]
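A hypothetical use of the new class, bouncing a client elsewhere while attaching a session cookie (the URL and cookie contents here are invented):

# Hypothetical usage of RedirectException as defined above.
def handle_sso_callback():
    e = RedirectException(b"/_matrix/client/login/done")
    e.cookies.append(b"sessionId=a3fWa; Path=/; HttpOnly")
    raise e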
class SynapseError(CodeMessageException):
"""A base exception type for matrix errors which have an errcode and error
message (as well as an HTTP status code).
@@ -158,12 +184,6 @@ class UserDeactivatedError(SynapseError):
)
class RegistrationError(SynapseError):
"""An error raised when a registration event fails."""
pass
class FederationDeniedError(SynapseError):
"""An error raised when the server tries to federate with a server which
is not on its federation whitelist.
@@ -383,11 +403,9 @@ class UnsupportedRoomVersionError(SynapseError):
"""The client's request to create a room used a room version that the server does
not support."""
def __init__(self, msg="Homeserver does not support this room version"):
    super(UnsupportedRoomVersionError, self).__init__(
        code=400, msg=msg, errcode=Codes.UNSUPPORTED_ROOM_VERSION,
)
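With msg now a parameter, call sites can supply a more specific reason, for example (message invented):

raise UnsupportedRoomVersionError(
    "Room version org.matrix.msc2432 is experimental and not enabled here"
)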

View File

@@ -15,6 +15,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List
from six import text_type
import jsonschema
@@ -293,7 +295,7 @@ class Filter(object):
room_id = None
ev_type = "m.presence"
contains_url = False
labels = []  # type: List[str]
else:
sender = event.get("sender", None)
if not sender:

View File

@@ -12,7 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from collections import OrderedDict
from typing import Any, Optional, Tuple
from synapse.api.errors import LimitExceededError
@@ -23,7 +24,9 @@ class Ratelimiter(object):
"""
def __init__(self):
self.message_counts = (
    OrderedDict()
)  # type: OrderedDict[Any, Tuple[float, int, Optional[float]]]
def can_do_action(self, key, time_now_s, rate_hz, burst_count, update=True):
"""Can the entity (e.g. user or IP address) perform the action?

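For readers unfamiliar with the pattern, here is a self-contained leaky-bucket check in the same spirit. It is a simplification, not the real can_do_action (which also tracks a pending-action timestamp in the third tuple slot):

# Simplified leaky-bucket rate limit; buckets maps key -> (count, start_time).
def can_do_action(buckets, key, time_now_s, rate_hz, burst_count):
    action_count, time_start = buckets.get(key, (0.0, time_now_s))
    # Leak tokens accrued since the last action, never below zero.
    action_count = max(0.0, action_count - (time_now_s - time_start) * rate_hz)
    if action_count + 1.0 > burst_count:
        return False
    buckets[key] = (action_count + 1.0, time_now_s)
    return True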
View File

@@ -57,6 +57,9 @@ class RoomVersion(object):
state_res = attr.ib() # int; one of the StateResolutionVersions
enforce_key_validity = attr.ib() # bool
# bool: before MSC2261/MSC2432, m.room.aliases had special auth rules and redaction rules
special_case_aliases_auth = attr.ib(type=bool, default=False)
class RoomVersions(object):
V1 = RoomVersion(
@@ -65,6 +68,7 @@ class RoomVersions(object):
EventFormatVersions.V1,
StateResolutionVersions.V1,
enforce_key_validity=False,
special_case_aliases_auth=True,
)
V2 = RoomVersion(
"2",
@@ -72,6 +76,7 @@ class RoomVersions(object):
EventFormatVersions.V1,
StateResolutionVersions.V2,
enforce_key_validity=False,
special_case_aliases_auth=True,
)
V3 = RoomVersion(
"3",
@@ -79,6 +84,7 @@ class RoomVersions(object):
EventFormatVersions.V2,
StateResolutionVersions.V2,
enforce_key_validity=False,
special_case_aliases_auth=True,
)
V4 = RoomVersion(
"4",
@@ -86,6 +92,7 @@ class RoomVersions(object):
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=False,
special_case_aliases_auth=True,
)
V5 = RoomVersion(
"5",
@@ -93,6 +100,15 @@ class RoomVersions(object):
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=True,
)
MSC2432_DEV = RoomVersion(
"org.matrix.msc2432",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
)
@@ -104,5 +120,6 @@ KNOWN_ROOM_VERSIONS = {
RoomVersions.V3,
RoomVersions.V4,
RoomVersions.V5,
RoomVersions.MSC2432_DEV,
)
} # type: Dict[str, RoomVersion]
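Downstream code can now branch on the new capability flag rather than comparing version strings; an invented example:

# Illustrative lookup against the table above (usage invented).
room_ver = KNOWN_ROOM_VERSIONS.get("org.matrix.msc2432")
if room_ver and not room_ver.special_case_aliases_auth:
    pass  # m.room.aliases gets no special auth/redaction treatment here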

View File

@@ -141,7 +141,7 @@ def start_reactor(
def quit_with_error(error_string):
message_lines = error_string.split("\n")
line_length = max(len(l) for l in message_lines if len(l) < 80) + 2
sys.stderr.write("*" * line_length + "\n")
for line in message_lines:
sys.stderr.write(" %s\n" % (line.rstrip(),))
@@ -276,9 +276,19 @@ def start(hs, listeners=None):
# It is now safe to start your Synapse.
hs.start_listening(listeners)
hs.get_datastore().db.start_profiling()
hs.get_pusherpool().start()
setup_sentry(hs)
setup_sdnotify(hs)
# We now freeze all allocated objects in the hope that (almost)
# everything currently allocated will stay in use for the rest of the
# process's lifetime. Doing so means less work on each GC (hopefully).
#
# gc.freeze() is only available on Python 3.7+
if sys.version_info >= (3, 7):
gc.collect()
gc.freeze()
except Exception:
traceback.print_exc(file=sys.stderr)
reactor = hs.get_reactor()

View File

@@ -84,8 +84,7 @@ class AdminCmdServer(HomeServer):
class AdminCmdReplicationHandler(ReplicationClientHandler):
async def on_rdata(self, stream_name, token, rows):
pass
def get_streams_to_replicate(self):

View File

@@ -13,162 +13,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.appservice")
class AppserviceSlaveStore(
DirectoryStore,
SlavedEventStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
):
pass
class AppserviceServer(HomeServer):
DATASTORE_CLASS = AppserviceSlaveStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
)
logger.info("Synapse appservice now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ASReplicationHandler(self)
class ASReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(ASReplicationHandler, self).__init__(hs.get_datastore())
self.appservice_handler = hs.get_application_service_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
if stream_name == "events":
max_stream_id = self.store.get_room_max_stream_ordering()
run_in_background(self._notify_app_services, max_stream_id)
@defer.inlineCallbacks
def _notify_app_services(self, room_stream_id):
try:
yield self.appservice_handler.notify_interested_services(room_stream_id)
except Exception:
logger.exception("Error notifying application services of event")
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse appservice", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.appservice"
events.USE_FROZEN_DICTS = config.use_frozen_dicts
if config.notify_appservices:
sys.stderr.write(
"\nThe appservices must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``notify_appservices: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the appservice handler to start since it will be disabled in the main config
config.notify_appservices = True
ps = AppserviceServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ps, config, use_worker_options=True)
ps.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ps, config.worker_listeners
)
_base.start_worker_reactor("synapse-appservice", config)
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):

View File

@@ -13,188 +13,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.login import LoginRestServlet
from synapse.rest.client.v1.push_rule import PushRuleRestServlet
from synapse.rest.client.v1.room import (
JoinedRoomMemberListRestServlet,
PublicRoomListRestServlet,
RoomEventContextServlet,
RoomMemberListRestServlet,
RoomMessageListRestServlet,
RoomStateRestServlet,
)
from synapse.rest.client.v1.voip import VoipRestServlet
from synapse.rest.client.v2_alpha.account import ThreepidRestServlet
from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet
from synapse.rest.client.v2_alpha.register import RegisterRestServlet
from synapse.rest.client.versions import VersionsRestServlet
from synapse.server import HomeServer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.client_reader")
class ClientReaderSlavedStore(
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedReceiptsStore,
SlavedPushRuleStore,
SlavedGroupServerStore,
SlavedAccountDataStore,
SlavedEventStore,
SlavedKeyStore,
RoomStore,
DirectoryStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedTransactionStore,
SlavedProfileStore,
SlavedClientIpStore,
BaseSlavedStore,
):
pass
class ClientReaderServer(HomeServer):
DATASTORE_CLASS = ClientReaderSlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
PublicRoomListRestServlet(self).register(resource)
RoomMemberListRestServlet(self).register(resource)
JoinedRoomMemberListRestServlet(self).register(resource)
RoomStateRestServlet(self).register(resource)
RoomEventContextServlet(self).register(resource)
RoomMessageListRestServlet(self).register(resource)
RegisterRestServlet(self).register(resource)
LoginRestServlet(self).register(resource)
ThreepidRestServlet(self).register(resource)
KeyQueryServlet(self).register(resource)
KeyChangesServlet(self).register(resource)
VoipRestServlet(self).register(resource)
PushRuleRestServlet(self).register(resource)
VersionsRestServlet(self).register(resource)
resources.update({"/_matrix/client": resource})
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
)
logger.info("Synapse client reader now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse client reader", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.client_reader"
events.USE_FROZEN_DICTS = config.use_frozen_dicts
ss = ClientReaderServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-client-reader", config)
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):

View File

@@ -13,187 +13,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v1.profile import (
ProfileAvatarURLRestServlet,
ProfileDisplaynameRestServlet,
ProfileRestServlet,
)
from synapse.rest.client.v1.room import (
JoinRoomAliasServlet,
RoomMembershipRestServlet,
RoomSendEventRestServlet,
RoomStateEventRestServlet,
)
from synapse.server import HomeServer
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.event_creator")
class EventCreatorSlavedStore(
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
# rather than going via the correct worker.
UserDirectoryStore,
DirectoryStore,
SlavedTransactionStore,
SlavedProfileStore,
SlavedAccountDataStore,
SlavedPusherStore,
SlavedReceiptsStore,
SlavedPushRuleStore,
SlavedDeviceStore,
SlavedClientIpStore,
SlavedApplicationServiceStore,
SlavedEventStore,
SlavedRegistrationStore,
RoomStore,
BaseSlavedStore,
):
pass
class EventCreatorServer(HomeServer):
DATASTORE_CLASS = EventCreatorSlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
RoomSendEventRestServlet(self).register(resource)
RoomMembershipRestServlet(self).register(resource)
RoomStateEventRestServlet(self).register(resource)
JoinRoomAliasServlet(self).register(resource)
ProfileAvatarURLRestServlet(self).register(resource)
ProfileDisplaynameRestServlet(self).register(resource)
ProfileRestServlet(self).register(resource)
resources.update(
{
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
}
)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
)
logger.info("Synapse event creator now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse event creator", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.event_creator"
assert config.worker_replication_http_port is not None
# This should only be done on the user directory worker or the master
config.update_user_directory = False
events.USE_FROZEN_DICTS = config.use_frozen_dicts
ss = EventCreatorServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-event-creator", config)
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):

View File

@@ -13,169 +13,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.urls import FEDERATION_PREFIX, SERVER_KEY_V2_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.server import HomeServer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.federation_reader")
class FederationReaderSlavedStore(
SlavedAccountDataStore,
SlavedProfileStore,
SlavedApplicationServiceStore,
SlavedPusherStore,
SlavedPushRuleStore,
SlavedReceiptsStore,
SlavedEventStore,
SlavedKeyStore,
SlavedRegistrationStore,
RoomStore,
DirectoryStore,
SlavedTransactionStore,
BaseSlavedStore,
):
pass
class FederationReaderServer(HomeServer):
DATASTORE_CLASS = FederationReaderSlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "federation":
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
if name == "openid" and "federation" not in res["names"]:
# Only load the openid resource separately if federation resource
# is not specified since federation resource includes openid
# resource.
resources.update(
{
FEDERATION_PREFIX: TransportLayerServer(
self, servlet_groups=["openid"]
)
}
)
if name in ["keys", "federation"]:
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
reactor=self.get_reactor(),
)
logger.info("Synapse federation reader now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse federation reader", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.federation_reader"
events.USE_FROZEN_DICTS = config.use_frozen_dicts
ss = FederationReaderServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-federation-reader", config)
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):

View File

@@ -13,281 +13,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams._base import ReceiptsStream
from synapse.server import HomeServer
from synapse.storage.database import Database
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.federation_sender")
class FederationSenderSlaveStore(
SlavedDeviceInboxStore,
SlavedTransactionStore,
SlavedReceiptsStore,
SlavedEventStore,
SlavedRegistrationStore,
SlavedDeviceStore,
SlavedPresenceStore,
):
def __init__(self, database: Database, db_conn, hs):
super(FederationSenderSlaveStore, self).__init__(database, db_conn, hs)
# We pull out the current federation stream position now so that we
# always have a known value for the federation position in memory so
# that we don't have to bounce via a deferred once when we start the
# replication streams.
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = "SELECT stream_id FROM federation_stream_position WHERE type = ?"
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()
txn.execute(sql, ("federation",))
rows = txn.fetchall()
txn.close()
return rows[0][0] if rows else -1
class FederationSenderServer(HomeServer):
DATASTORE_CLASS = FederationSenderSlaveStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
)
logger.info("Synapse federation_sender now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return FederationSenderReplicationHandler(self)
class FederationSenderReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(FederationSenderReplicationHandler, self).__init__(hs.get_datastore())
self.send_handler = FederationSenderHandler(hs, self)
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(FederationSenderReplicationHandler, self).on_rdata(
stream_name, token, rows
)
self.send_handler.process_replication_rows(stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(
FederationSenderReplicationHandler, self
).get_streams_to_replicate()
args.update(self.send_handler.stream_positions())
return args
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse federation sender", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.federation_sender"
events.USE_FROZEN_DICTS = config.use_frozen_dicts
if config.send_federation:
sys.stderr.write(
"\nThe send_federation must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``send_federation: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the federation sender to start since it will be disabled in the main config
config.send_federation = True
ss = FederationSenderServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-federation-sender", config)
class FederationSenderHandler(object):
"""Processes the replication stream and forwards the appropriate entries
to the federation sender.
"""
def __init__(self, hs, replication_client):
self.store = hs.get_datastore()
self._is_mine_id = hs.is_mine_id
self.federation_sender = hs.get_federation_sender()
self.replication_client = replication_client
self.federation_position = self.store.federation_out_pos_startup
self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
self._last_ack = self.federation_position
self._room_serials = {}
self._room_typing = {}
def on_start(self):
# There may be some events that are persisted but haven't been sent,
# so send them now.
self.federation_sender.notify_new_events(
self.store.get_room_max_stream_ordering()
)
def stream_positions(self):
return {"federation": self.federation_position}
def process_replication_rows(self, stream_name, token, rows):
# The federation stream contains things that we want to send out, e.g.
# presence, typing, etc.
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
run_in_background(self.update_token, token)
# We also need to poke the federation sender when new events happen
elif stream_name == "events":
self.federation_sender.notify_new_events(token)
# ... and when new receipts happen
elif stream_name == ReceiptsStream.NAME:
run_as_background_process(
"process_receipts_for_federation", self._on_new_receipts, rows
)
@defer.inlineCallbacks
def _on_new_receipts(self, rows):
"""
Args:
rows (iterable[synapse.replication.tcp.streams.ReceiptsStreamRow]):
new receipts to be processed
"""
for receipt in rows:
# we only want to send on receipts for our own users
if not self._is_mine_id(receipt.user_id):
continue
receipt_info = ReadReceipt(
receipt.room_id,
receipt.receipt_type,
receipt.user_id,
[receipt.event_id],
receipt.data,
)
yield self.federation_sender.send_read_receipt(receipt_info)
@defer.inlineCallbacks
def update_token(self, token):
try:
self.federation_position = token
# We linearize here to ensure we don't have races updating the token
with (yield self._fed_position_linearizer.queue(None)):
if self._last_ack < self.federation_position:
yield self.store.update_federation_out_pos(
"federation", self.federation_position
)
# We ACK this token over replication so that the master can drop
# its in memory queues
self.replication_client.send_federation_ack(
self.federation_position
)
self._last_ack = self.federation_position
except Exception:
logger.exception("Error updating federation stream position")
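The persist-then-ack pattern above generalises; a compact sketch with asyncio.Lock standing in for Synapse's Linearizer (the store/client interfaces are assumed for illustration, not taken from this diff):

import asyncio

# Sketch of the persist-then-ack pattern: serialise writers so acks never regress.
class PositionTracker:
    def __init__(self, store, client, start_pos):
        self._store, self._client = store, client
        self._pos = self._last_ack = start_pos
        self._lock = asyncio.Lock()

    async def update(self, token):
        self._pos = token
        async with self._lock:
            if self._last_ack < self._pos:
                await self._store.update_federation_out_pos("federation", self._pos)
                self._client.send_federation_ack(self._pos)
                self._last_ack = self._pos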
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):

View File

@@ -13,241 +13,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.errors import HttpResponseException, SynapseError
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha._base import client_patterns
from synapse.server import HomeServer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.frontend_proxy")
class PresenceStatusStubServlet(RestServlet):
PATTERNS = client_patterns("/presence/(?P<user_id>[^/]*)/status")
def __init__(self, hs):
super(PresenceStatusStubServlet, self).__init__()
self.http_client = hs.get_simple_http_client()
self.auth = hs.get_auth()
self.main_uri = hs.config.worker_main_http_uri
@defer.inlineCallbacks
def on_GET(self, request, user_id):
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
headers = {"Authorization": auth_headers}
try:
result = yield self.http_client.get_json(
self.main_uri + request.uri.decode("ascii"), headers=headers
)
except HttpResponseException as e:
raise e.to_synapse_error()
return 200, result
@defer.inlineCallbacks
def on_PUT(self, request, user_id):
yield self.auth.get_user_by_req(request)
return 200, {}
class KeyUploadServlet(RestServlet):
PATTERNS = client_patterns("/keys/upload(/(?P<device_id>[^/]+))?$")
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): server
"""
super(KeyUploadServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker_main_http_uri
@defer.inlineCallbacks
def on_POST(self, request, device_id):
requester = yield self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
if device_id is not None:
# passing the device_id here is deprecated; however, we allow it
# for now for compatibility with older clients.
if requester.device_id is not None and device_id != requester.device_id:
logger.warning(
"Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id,
device_id,
)
else:
device_id = requester.device_id
if device_id is None:
raise SynapseError(
400, "To upload keys, you must pass device_id when authenticating"
)
if body:
# They're actually trying to upload something, proxy to main synapse.
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
headers = {"Authorization": auth_headers}
result = yield self.http_client.post_json_get_json(
self.main_uri + request.uri.decode("ascii"), body, headers=headers
)
return 200, result
else:
# Just interested in counts.
result = yield self.store.count_e2e_one_time_keys(user_id, device_id)
return 200, {"one_time_key_counts": result}
class FrontendProxySlavedStore(
SlavedDeviceStore,
SlavedClientIpStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
BaseSlavedStore,
):
pass
class FrontendProxyServer(HomeServer):
DATASTORE_CLASS = FrontendProxySlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
KeyUploadServlet(self).register(resource)
# If presence is disabled, use the stub servlet that does
# not allow sending presence
if not self.config.use_presence:
PresenceStatusStubServlet(self).register(resource)
resources.update(
{
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
}
)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
reactor=self.get_reactor(),
)
logger.info("Synapse client reader now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse frontend proxy", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.frontend_proxy"
assert config.worker_main_http_uri is not None
events.USE_FROZEN_DICTS = config.use_frozen_dicts
ss = FrontendProxyServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-frontend-proxy", config)
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):

View File

@@ -0,0 +1,935 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
import synapse.events
from synapse.api.constants import EventTypes
from synapse.api.errors import HttpResponseException, SynapseError
from synapse.api.urls import (
CLIENT_API_PREFIX,
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,
SERVER_KEY_V2_PREFIX,
)
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.federation.transport.server import TransportLayerServer
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.directory import DirectoryStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.keys import SlavedKeyStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.profile import SlavedProfileStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams._base import (
DeviceListsStream,
ReceiptsStream,
ToDeviceStream,
)
from synapse.replication.tcp.streams.events import EventsStreamEventRow, EventsStreamRow
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.login import LoginRestServlet
from synapse.rest.client.v1.profile import (
ProfileAvatarURLRestServlet,
ProfileDisplaynameRestServlet,
ProfileRestServlet,
)
from synapse.rest.client.v1.push_rule import PushRuleRestServlet
from synapse.rest.client.v1.room import (
JoinedRoomMemberListRestServlet,
JoinRoomAliasServlet,
PublicRoomListRestServlet,
RoomEventContextServlet,
RoomInitialSyncRestServlet,
RoomMemberListRestServlet,
RoomMembershipRestServlet,
RoomMessageListRestServlet,
RoomSendEventRestServlet,
RoomStateEventRestServlet,
RoomStateRestServlet,
)
from synapse.rest.client.v1.voip import VoipRestServlet
from synapse.rest.client.v2_alpha import groups, sync, user_directory
from synapse.rest.client.v2_alpha._base import client_patterns
from synapse.rest.client.v2_alpha.account import ThreepidRestServlet
from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet
from synapse.rest.client.v2_alpha.register import RegisterRestServlet
from synapse.rest.client.versions import VersionsRestServlet
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.server import HomeServer
from synapse.storage.data_stores.main.media_repository import MediaRepositoryStore
from synapse.storage.data_stores.main.monthly_active_users import (
MonthlyActiveUsersWorkerStore,
)
from synapse.storage.data_stores.main.presence import UserPresenceState
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.generic_worker")
class PresenceStatusStubServlet(RestServlet):
"""If presence is disabled this servlet can be used to stub out setting
presence status, while proxying the getters to the master instance.
"""
PATTERNS = client_patterns("/presence/(?P<user_id>[^/]*)/status")
def __init__(self, hs):
super(PresenceStatusStubServlet, self).__init__()
self.http_client = hs.get_simple_http_client()
self.auth = hs.get_auth()
self.main_uri = hs.config.worker_main_http_uri
async def on_GET(self, request, user_id):
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders("Authorization", [])
headers = {"Authorization": auth_headers}
try:
result = await self.http_client.get_json(
self.main_uri + request.uri.decode("ascii"), headers=headers
)
except HttpResponseException as e:
raise e.to_synapse_error()
return 200, result
async def on_PUT(self, request, user_id):
await self.auth.get_user_by_req(request)
return 200, {}
class KeyUploadServlet(RestServlet):
"""An implementation of the `KeyUploadServlet` that responds to read only
requests, but otherwise proxies through to the master instance.
"""
PATTERNS = client_patterns("/keys/upload(/(?P<device_id>[^/]+))?$")
def __init__(self, hs):
"""
Args:
hs (synapse.server.HomeServer): server
"""
super(KeyUploadServlet, self).__init__()
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.http_client = hs.get_simple_http_client()
self.main_uri = hs.config.worker_main_http_uri
async def on_POST(self, request, device_id):
requester = await self.auth.get_user_by_req(request, allow_guest=True)
user_id = requester.user.to_string()
body = parse_json_object_from_request(request)
if device_id is not None:
# passing the device_id here is deprecated; however, we allow it
# for now for compatibility with older clients.
if requester.device_id is not None and device_id != requester.device_id:
logger.warning(
"Client uploading keys for a different device "
"(logged in as %s, uploading for %s)",
requester.device_id,
device_id,
)
else:
device_id = requester.device_id
if device_id is None:
raise SynapseError(
400, "To upload keys, you must pass device_id when authenticating"
)
if body:
# They're actually trying to upload something, proxy to main synapse.
# Pass through the auth headers, if any, in case the access token
# is there.
auth_headers = request.requestHeaders.getRawHeaders(b"Authorization", [])
headers = {"Authorization": auth_headers}
result = await self.http_client.post_json_get_json(
self.main_uri + request.uri.decode("ascii"), body, headers=headers
)
return 200, result
else:
# Just interested in counts.
result = await self.store.count_e2e_one_time_keys(user_id, device_id)
return 200, {"one_time_key_counts": result}
UPDATE_SYNCING_USERS_MS = 10 * 1000
class GenericWorkerPresence(object):
def __init__(self, hs):
self.hs = hs
self.is_mine_id = hs.is_mine_id
self.http_client = hs.get_simple_http_client()
self.store = hs.get_datastore()
self.user_to_num_current_syncs = {}
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
active_presence = self.store.take_presence_startup_info()
self.user_to_current_state = {state.user_id: state for state in active_presence}
# user_id -> last_sync_ms. Lists the users that have stopped syncing
# but we haven't notified the master of that yet
self.users_going_offline = {}
self._send_stop_syncing_loop = self.clock.looping_call(
self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
)
self.process_id = random_string(16)
logger.info("Presence process_id is %r", self.process_id)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
if self.hs.config.use_presence:
self.hs.get_tcp_replication().send_user_sync(
user_id, is_syncing, last_sync_ms
)
def mark_as_coming_online(self, user_id):
"""A user has started syncing. Send a UserSync to the master, unless they
had recently stopped syncing.
Args:
user_id (str)
"""
going_offline = self.users_going_offline.pop(user_id, None)
if not going_offline:
# Safe to skip because we haven't yet told the master they were offline
self.send_user_sync(user_id, True, self.clock.time_msec())
def mark_as_going_offline(self, user_id):
"""A user has stopped syncing. We wait before notifying the master as
its likely they'll come back soon. This allows us to avoid sending
a stopped syncing immediately followed by a started syncing notification
to the master
Args:
user_id (str)
"""
self.users_going_offline[user_id] = self.clock.time_msec()
def send_stop_syncing(self):
"""Check if there are any users who have stopped syncing a while ago
and haven't come back yet. If there are, poke the master about them.
"""
now = self.clock.time_msec()
for user_id, last_sync_ms in list(self.users_going_offline.items()):
if now - last_sync_ms > UPDATE_SYNCING_USERS_MS:
self.users_going_offline.pop(user_id, None)
self.send_user_sync(user_id, False, last_sync_ms)
def set_state(self, user, state, ignore_status_msg=False):
# TODO: How's this supposed to work?
return defer.succeed(None)
get_states = __func__(PresenceHandler.get_states)
get_state = __func__(PresenceHandler.get_state)
current_state_for_users = __func__(PresenceHandler.current_state_for_users)
def user_syncing(self, user_id, affect_presence):
if affect_presence:
curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
self.user_to_num_current_syncs[user_id] = curr_sync + 1
# If we went from no in-flight syncs to some, notify replication
if self.user_to_num_current_syncs[user_id] == 1:
self.mark_as_coming_online(user_id)
def _end():
# We check that the user_id is in user_to_num_current_syncs because
# user_to_num_current_syncs may have been cleared if we are
# shutting down.
if affect_presence and user_id in self.user_to_num_current_syncs:
self.user_to_num_current_syncs[user_id] -= 1
# If we went from one in-flight sync to none, notify replication
if self.user_to_num_current_syncs[user_id] == 0:
self.mark_as_going_offline(user_id)
@contextlib.contextmanager
def _user_syncing():
try:
yield
finally:
_end()
return defer.succeed(_user_syncing())
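# Illustrative use of user_syncing above: awaiting the returned deferred
# yields a context manager that decrements the in-flight sync count on exit.
#
#   cm = yield presence_handler.user_syncing("@alice:example.org", True)
#   with cm:
#       ...  # serve the /sync request; the user counts as syncing meanwhile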
@defer.inlineCallbacks
def notify_from_replication(self, states, stream_id):
parties = yield get_interested_parties(self.store, states)
room_ids_to_states, users_to_states = parties
self.notifier.on_new_event(
"presence_key",
stream_id,
rooms=room_ids_to_states.keys(),
users=users_to_states.keys(),
)
@defer.inlineCallbacks
def process_replication_rows(self, token, rows):
states = [
UserPresenceState(
row.user_id,
row.state,
row.last_active_ts,
row.last_federation_update_ts,
row.last_user_sync_ts,
row.status_msg,
row.currently_active,
)
for row in rows
]
for state in states:
self.user_to_current_state[state.user_id] = state
stream_id = token
yield self.notify_from_replication(states, stream_id)
def get_currently_syncing_users(self):
if self.hs.config.use_presence:
return [
user_id
for user_id, count in self.user_to_num_current_syncs.items()
if count > 0
]
else:
return set()
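# A self-contained sketch of the offline-debounce idea implemented above,
# using only the standard library (a monotonic clock instead of
# hs.get_clock(); the names below are illustrative, not part of Synapse):
import time


class OfflineDebouncer:
    """Delay 'gone offline' notifications so a quick reconnect cancels them."""

    def __init__(self, delay_ms=UPDATE_SYNCING_USERS_MS):
        self.delay_ms = delay_ms
        self.users_going_offline = {}

    def going_offline(self, user_id):
        self.users_going_offline[user_id] = time.monotonic() * 1000

    def coming_online(self, user_id):
        # True if the master still needs to be told the user is online;
        # a pending offline entry means it was never told they went away.
        return self.users_going_offline.pop(user_id, None) is None

    def flush(self, notify):
        # Called periodically, like send_stop_syncing above.
        now = time.monotonic() * 1000
        for user_id, last_ms in list(self.users_going_offline.items()):
            if now - last_ms > self.delay_ms:
                self.users_going_offline.pop(user_id, None)
                notify(user_id, last_ms)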
class GenericWorkerTyping(object):
def __init__(self, hs):
self._latest_room_serial = 0
self._reset()
def _reset(self):
"""
Reset the typing handler's data caches.
"""
# map room IDs to serial numbers
self._room_serials = {}
# map room IDs to sets of users currently typing
self._room_typing = {}
def stream_positions(self):
# We must update this typing token from the response of the previous
# sync. In particular, the stream id may "reset" back to zero/a low
# value which we *must* use for the next replication request.
return {"typing": self._latest_room_serial}
def process_replication_rows(self, token, rows):
if self._latest_room_serial > token:
# The master has gone backwards. To prevent inconsistent data, just
# clear everything.
self._reset()
# Set the latest serial token to whatever the server gave us.
self._latest_room_serial = token
for row in rows:
self._room_serials[row.room_id] = token
self._room_typing[row.room_id] = row.user_ids
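# Illustrative usage of the typing cache above (TypingRow is a hypothetical
# stand-in for the real replication row type; hs is unused by __init__):
from collections import namedtuple

TypingRow = namedtuple("TypingRow", ["room_id", "user_ids"])

typing = GenericWorkerTyping(hs=None)
typing.process_replication_rows(7, [TypingRow("!room:example.org", {"@a:example.org"})])
assert typing.stream_positions() == {"typing": 7}
# A lower token from a restarted master clears the caches before applying it:
typing.process_replication_rows(2, [])
assert typing._room_serials == {}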
class GenericWorkerSlavedStore(
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
# rather than going via the correct worker.
UserDirectoryStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedReceiptsStore,
SlavedPushRuleStore,
SlavedGroupServerStore,
SlavedAccountDataStore,
SlavedPusherStore,
SlavedEventStore,
SlavedKeyStore,
RoomStore,
DirectoryStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedTransactionStore,
SlavedProfileStore,
SlavedClientIpStore,
SlavedPresenceStore,
SlavedFilteringStore,
MonthlyActiveUsersWorkerStore,
MediaRepositoryStore,
BaseSlavedStore,
):
def __init__(self, database, db_conn, hs):
super(GenericWorkerSlavedStore, self).__init__(database, db_conn, hs)
# We pull out the current federation stream position now so that we
# always have a known value for the federation position in memory so
# that we don't have to bounce via a deferred once when we start the
# replication streams.
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = "SELECT stream_id FROM federation_stream_position WHERE type = ?"
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()
txn.execute(sql, ("federation",))
rows = txn.fetchall()
txn.close()
return rows[0][0] if rows else -1
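# Note: the -1 above is a "no position recorded yet" sentinel; on a fresh
# database the federation stream is consumed from the start. The query is
# equivalent to the following (PostgreSQL parameter style shown):
#
#   SELECT stream_id FROM federation_stream_position WHERE type = 'federation';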
class GenericWorkerServer(HomeServer):
DATASTORE_CLASS = GenericWorkerSlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
PublicRoomListRestServlet(self).register(resource)
RoomMemberListRestServlet(self).register(resource)
JoinedRoomMemberListRestServlet(self).register(resource)
RoomStateRestServlet(self).register(resource)
RoomEventContextServlet(self).register(resource)
RoomMessageListRestServlet(self).register(resource)
RegisterRestServlet(self).register(resource)
LoginRestServlet(self).register(resource)
ThreepidRestServlet(self).register(resource)
KeyQueryServlet(self).register(resource)
KeyChangesServlet(self).register(resource)
VoipRestServlet(self).register(resource)
PushRuleRestServlet(self).register(resource)
VersionsRestServlet(self).register(resource)
RoomSendEventRestServlet(self).register(resource)
RoomMembershipRestServlet(self).register(resource)
RoomStateEventRestServlet(self).register(resource)
JoinRoomAliasServlet(self).register(resource)
ProfileAvatarURLRestServlet(self).register(resource)
ProfileDisplaynameRestServlet(self).register(resource)
ProfileRestServlet(self).register(resource)
KeyUploadServlet(self).register(resource)
sync.register_servlets(self, resource)
events.register_servlets(self, resource)
InitialSyncRestServlet(self).register(resource)
RoomInitialSyncRestServlet(self).register(resource)
user_directory.register_servlets(self, resource)
# If presence is disabled, use the stub servlet that does
# not allow sending presence
if not self.config.use_presence:
PresenceStatusStubServlet(self).register(resource)
groups.register_servlets(self, resource)
resources.update({CLIENT_API_PREFIX: resource})
elif name == "federation":
resources.update({FEDERATION_PREFIX: TransportLayerServer(self)})
elif name == "media":
if self.config.can_load_media_repo:
media_repo = self.get_media_repository_resource()
# We need to serve the admin servlets for media on the
# worker.
admin_resource = JsonResource(self, canonical_json=False)
register_servlets_for_media_repo(self, admin_resource)
resources.update(
{
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
"/_synapse/admin": admin_resource,
}
)
else:
logger.warning(
"A 'media' listener is configured but the media"
" repository is disabled. Ignoring."
)
if name == "openid" and "federation" not in res["names"]:
# Only load the openid resource separately if the federation
# resource is not specified, since the federation resource
# includes the openid resource.
resources.update(
{
FEDERATION_PREFIX: TransportLayerServer(
self, servlet_groups=["openid"]
)
}
)
if name in ["keys", "federation"]:
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
reactor=self.get_reactor(),
)
logger.info("Synapse worker now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def remove_pusher(self, app_id, push_key, user_id):
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
def build_tcp_replication(self):
return GenericWorkerReplicationHandler(self)
def build_presence_handler(self):
return GenericWorkerPresence(self)
def build_typing_handler(self):
return GenericWorkerTyping(self)
class GenericWorkerReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(GenericWorkerReplicationHandler, self).__init__(hs.get_datastore())
self.store = hs.get_datastore()
self.typing_handler = hs.get_typing_handler()
# NB this is a SynchrotronPresence, not a normal PresenceHandler
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
self.notify_pushers = hs.config.start_pushers
self.pusher_pool = hs.get_pusherpool()
if hs.config.send_federation:
self.send_handler = FederationSenderHandler(hs, self)
else:
self.send_handler = None
async def on_rdata(self, stream_name, token, rows):
await super(GenericWorkerReplicationHandler, self).on_rdata(
stream_name, token, rows
)
run_in_background(self.process_and_notify, stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(GenericWorkerReplicationHandler, self).get_streams_to_replicate()
args.update(self.typing_handler.stream_positions())
if self.send_handler:
args.update(self.send_handler.stream_positions())
return args
def get_currently_syncing_users(self):
return self.presence_handler.get_currently_syncing_users()
async def process_and_notify(self, stream_name, token, rows):
try:
if self.send_handler:
self.send_handler.process_replication_rows(stream_name, token, rows)
if stream_name == "events":
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
if row.type != EventsStreamEventRow.TypeId:
continue
assert isinstance(row, EventsStreamRow)
event = await self.store.get_event(
row.data.event_id, allow_rejected=True
)
if event.rejected_reason:
continue
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
max_token = self.store.get_room_max_stream_ordering()
self.notifier.on_new_room_event(
event, token, max_token, extra_users
)
await self.pusher_pool.on_new_notifications(token, token)
elif stream_name == "push_rules":
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows]
)
elif stream_name in ("account_data", "tag_account_data"):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows]
)
elif stream_name == "receipts":
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows]
)
await self.pusher_pool.on_new_receipts(
token, token, {row.room_id for row in rows}
)
elif stream_name == "typing":
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows]
)
elif stream_name == "to_device":
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event("to_device_key", token, users=entities)
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = await self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event("device_list_key", token, rooms=all_room_ids)
elif stream_name == "presence":
await self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows]
)
elif stream_name == "pushers":
for row in rows:
if row.deleted:
self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
await self.start_pusher(row.user_id, row.app_id, row.pushkey)
except Exception:
logger.exception("Error processing replication")
def stop_pusher(self, user_id, app_id, pushkey):
if not self.notify_pushers:
return
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = self.pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
async def start_pusher(self, user_id, app_id, pushkey):
if not self.notify_pushers:
return
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return await self.pusher_pool.start_pusher_by_id(app_id, pushkey, user_id)
def on_remote_server_up(self, server: str):
"""Called when get a new REMOTE_SERVER_UP command."""
# Let's wake up the transaction queue for the server in case we have
# pending stuff to send to it.
if self.send_handler:
self.send_handler.wake_destination(server)
class FederationSenderHandler(object):
"""Processes the replication stream and forwards the appropriate entries
to the federation sender.
"""
def __init__(self, hs: GenericWorkerServer, replication_client):
self.store = hs.get_datastore()
self._is_mine_id = hs.is_mine_id
self.federation_sender = hs.get_federation_sender()
self.replication_client = replication_client
self.federation_position = self.store.federation_out_pos_startup
self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
self._last_ack = self.federation_position
self._room_serials = {}
self._room_typing = {}
def on_start(self):
# There may be some events that are persisted but haven't been sent,
# so send them now.
self.federation_sender.notify_new_events(
self.store.get_room_max_stream_ordering()
)
def wake_destination(self, server: str):
self.federation_sender.wake_destination(server)
def stream_positions(self):
return {"federation": self.federation_position}
def process_replication_rows(self, stream_name, token, rows):
# The federation stream contains things that we want to send out, e.g.
# presence, typing, etc.
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
run_in_background(self.update_token, token)
# We also need to poke the federation sender when new events happen
elif stream_name == "events":
self.federation_sender.notify_new_events(token)
# ... and when new receipts happen
elif stream_name == ReceiptsStream.NAME:
run_as_background_process(
"process_receipts_for_federation", self._on_new_receipts, rows
)
# ... as well as device updates and messages
elif stream_name == DeviceListsStream.NAME:
hosts = {row.destination for row in rows}
for host in hosts:
self.federation_sender.send_device_messages(host)
elif stream_name == ToDeviceStream.NAME:
# The to_device stream includes stuff to be pushed to both local
# clients and remote servers, so we ignore entities that start with
# '@' (since they'll be local users rather than destinations).
hosts = {row.entity for row in rows if not row.entity.startswith("@")}
for host in hosts:
self.federation_sender.send_device_messages(host)
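# Example of the entity split above: for rows carrying
# ["@alice:example.org", "matrix.org"], the '@'-prefixed local user is
# ignored here and only "matrix.org" is woken as a federation destination.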
async def _on_new_receipts(self, rows):
"""
Args:
rows (iterable[synapse.replication.tcp.streams.ReceiptsStreamRow]):
new receipts to be processed
"""
for receipt in rows:
# we only want to send on receipts for our own users
if not self._is_mine_id(receipt.user_id):
continue
receipt_info = ReadReceipt(
receipt.room_id,
receipt.receipt_type,
receipt.user_id,
[receipt.event_id],
receipt.data,
)
await self.federation_sender.send_read_receipt(receipt_info)
async def update_token(self, token):
try:
self.federation_position = token
# We linearize here to ensure we don't have races updating the token
with (await self._fed_position_linearizer.queue(None)):
if self._last_ack < self.federation_position:
await self.store.update_federation_out_pos(
"federation", self.federation_position
)
# We ACK this token over replication so that the master can drop
# its in-memory queues
self.replication_client.send_federation_ack(
self.federation_position
)
self._last_ack = self.federation_position
except Exception:
logger.exception("Error updating federation stream position")
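# Sketch of the ordering guarantee above: update_token may be invoked
# concurrently, so self.federation_position is bumped eagerly, but the
# persist-then-ack sequence runs strictly one at a time:
#
#   with (await self._fed_position_linearizer.queue(None)):
#       ...  # persist position, then ACK it so the master can drop queues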
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse worker", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
# For backwards compatibility, allow any of the old app names.
assert config.worker_app in (
"synapse.app.appservice",
"synapse.app.client_reader",
"synapse.app.event_creator",
"synapse.app.federation_reader",
"synapse.app.federation_sender",
"synapse.app.frontend_proxy",
"synapse.app.generic_worker",
"synapse.app.media_repository",
"synapse.app.pusher",
"synapse.app.synchrotron",
"synapse.app.user_dir",
)
if config.worker_app == "synapse.app.appservice":
if config.notify_appservices:
sys.stderr.write(
"\nThe appservices must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``notify_appservices: false`` to the main config"
"\n"
)
sys.exit(1)
# Force appservice notifications on for this worker, since they are disabled in the main config
config.notify_appservices = True
else:
# For other worker types we force this to off.
config.notify_appservices = False
if config.worker_app == "synapse.app.pusher":
if config.start_pushers:
sys.stderr.write(
"\nThe pushers must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``start_pushers: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the pushers to start since they will be disabled in the main config
config.start_pushers = True
else:
# For other worker types we force this to off.
config.start_pushers = False
if config.worker_app == "synapse.app.user_dir":
if config.update_user_directory:
sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``update_user_directory: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the user directory updates on for this worker, since they are disabled in the main config
config.update_user_directory = True
else:
# For other worker types we force this to off.
config.update_user_directory = False
if config.worker_app == "synapse.app.federation_sender":
if config.send_federation:
sys.stderr.write(
"\nThe send_federation must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``send_federation: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the federation sender on for this worker, since it is disabled in the main config
config.send_federation = True
else:
# For other worker types we force this to off.
config.send_federation = False
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
ss = GenericWorkerServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-generic-worker", config)
if __name__ == "__main__":
with LoggingContext("main"):
start(sys.argv[1:])
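# The four per-worker config blocks in start() above share one pattern; a
# hedged refactor sketch (the helper below is illustrative, not part of
# Synapse):
def _override_worker_flag(config, worker_app, flag_name):
    """Force `flag_name` on for its dedicated worker, off for all others."""
    if config.worker_app == worker_app:
        if getattr(config, flag_name):
            sys.stderr.write(
                "\n%s must be disabled in the main synapse process"
                "\nbefore it can be run in a separate worker.\n" % (flag_name,)
            )
            sys.exit(1)
        setattr(config, flag_name, True)
    else:
        setattr(config, flag_name, False)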

View File

@@ -31,7 +31,7 @@ from prometheus_client import Gauge
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.python.failure import Failure
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.resource import EncodingResourceWrapper, IResource, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
@@ -109,7 +109,16 @@ class SynapseHomeServer(HomeServer):
for path, resmodule in additional_resources.items():
handler_cls, config = load_module(resmodule)
handler = handler_cls(config, module_api)
resources[path] = AdditionalResource(self, handler.handle_request)
if IResource.providedBy(handler):
resource = handler
elif hasattr(handler, "handle_request"):
resource = AdditionalResource(self, handler.handle_request)
else:
raise ConfigError(
"additional_resource %s does not implement a known interface"
% (resmodule["module"],)
)
resources[path] = resource
# try to find something useful to redirect '/' to
if WEB_CLIENT_PREFIX in resources:
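# Illustrative module that the branch above now accepts directly, since it
# provides IResource (names hypothetical):
#
#   from twisted.web.resource import Resource
#
#   class MyHandler(Resource):
#       isLeaf = True
#       def __init__(self, config, module_api):
#           Resource.__init__(self)
#       def render_GET(self, request):
#           return b"ok"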
@@ -289,6 +298,11 @@ class SynapseHomeServer(HomeServer):
# Gauges to expose monthly active user control metrics
current_mau_gauge = Gauge("synapse_admin_mau:current", "Current MAU")
current_mau_by_service_gauge = Gauge(
"synapse_admin_mau_current_mau_by_service",
"Current MAU by service",
["app_service"],
)
max_mau_gauge = Gauge("synapse_admin_mau:max", "MAU Limit")
registered_reserved_users_mau_gauge = Gauge(
"synapse_admin_mau:registered_reserved_users",
@@ -394,7 +408,6 @@ def setup(config_options):
_base.start(hs, config.listeners)
hs.get_pusherpool().start()
hs.get_datastore().db.updates.start_doing_background_updates()
except Exception:
# Print the exception and bail out.
@@ -576,12 +589,20 @@ def run(hs):
@defer.inlineCallbacks
def generate_monthly_active_users():
current_mau_count = 0
current_mau_count_by_service = {}
reserved_users = ()
store = hs.get_datastore()
if hs.config.limit_usage_by_mau or hs.config.mau_stats_only:
current_mau_count = yield store.get_monthly_active_count()
current_mau_count_by_service = (
yield store.get_monthly_active_count_by_service()
)
reserved_users = yield store.get_registered_reserved_users()
current_mau_gauge.set(float(current_mau_count))
for app_service, count in current_mau_count_by_service.items():
current_mau_by_service_gauge.labels(app_service).set(float(count))
registered_reserved_users_mau_gauge.set(float(len(reserved_users)))
max_mau_gauge.set(float(hs.config.max_mau_value))
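# Sketch of what the new labeled gauge exports, assuming two appservices
# "irc" and "slack" (values illustrative):
#
#   synapse_admin_mau_current_mau_by_service{app_service="irc"} 12.0
#   synapse_admin_mau_current_mau_by_service{app_service="slack"} 3.0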

View File

@@ -13,160 +13,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from twisted.internet import reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.urls import LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.server import HomeServer
from synapse.storage.data_stores.main.media_repository import MediaRepositoryStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.media_repository")
class MediaRepositorySlavedStore(
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
SlavedTransactionStore,
BaseSlavedStore,
MediaRepositoryStore,
):
pass
class MediaRepositoryServer(HomeServer):
DATASTORE_CLASS = MediaRepositorySlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "media":
media_repo = self.get_media_repository_resource()
# We need to serve the admin servlets for media on the
# worker.
admin_resource = JsonResource(self, canonical_json=False)
register_servlets_for_media_repo(self, admin_resource)
resources.update(
{
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
"/_synapse/admin": admin_resource,
}
)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
)
logger.info("Synapse media repository now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return ReplicationClientHandler(self.get_datastore())
def start(config_options):
try:
config = HomeServerConfig.load_config(
"Synapse media repository", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.media_repository"
if config.enable_media_repo:
_base.quit_with_error(
"enable_media_repo must be disabled in the main synapse process\n"
"before the media repo can be run in a separate worker.\n"
"Please add ``enable_media_repo: false`` to the main config\n"
)
events.USE_FROZEN_DICTS = config.use_frozen_dicts
ss = MediaRepositoryServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-media-repository", config)
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):

View File

@@ -13,214 +13,12 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.pusher")
class PusherSlaveStore(
SlavedEventStore,
SlavedPusherStore,
SlavedReceiptsStore,
SlavedAccountDataStore,
RoomStore,
):
update_pusher_last_stream_ordering_and_success = __func__(
DataStore.update_pusher_last_stream_ordering_and_success
)
update_pusher_failing_since = __func__(DataStore.update_pusher_failing_since)
update_pusher_last_stream_ordering = __func__(
DataStore.update_pusher_last_stream_ordering
)
get_throttle_params_by_room = __func__(DataStore.get_throttle_params_by_room)
set_throttle_params = __func__(DataStore.set_throttle_params)
get_time_of_last_push_action_before = __func__(
DataStore.get_time_of_last_push_action_before
)
get_profile_displayname = __func__(DataStore.get_profile_displayname)
class PusherServer(HomeServer):
DATASTORE_CLASS = PusherSlaveStore
def remove_pusher(self, app_id, push_key, user_id):
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
)
logger.info("Synapse pusher now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return PusherReplicationHandler(self)
class PusherReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(PusherReplicationHandler, self).__init__(hs.get_datastore())
self.pusher_pool = hs.get_pusherpool()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.poke_pushers, stream_name, token, rows)
@defer.inlineCallbacks
def poke_pushers(self, stream_name, token, rows):
try:
if stream_name == "pushers":
for row in rows:
if row.deleted:
yield self.stop_pusher(row.user_id, row.app_id, row.pushkey)
else:
yield self.start_pusher(row.user_id, row.app_id, row.pushkey)
elif stream_name == "events":
yield self.pusher_pool.on_new_notifications(token, token)
elif stream_name == "receipts":
yield self.pusher_pool.on_new_receipts(
token, token, set(row.room_id for row in rows)
)
except Exception:
logger.exception("Error poking pushers")
def stop_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
pushers_for_user = self.pusher_pool.pushers.get(user_id, {})
pusher = pushers_for_user.pop(key, None)
if pusher is None:
return
logger.info("Stopping pusher %r / %r", user_id, key)
pusher.on_stop()
def start_pusher(self, user_id, app_id, pushkey):
key = "%s:%s" % (app_id, pushkey)
logger.info("Starting pusher %r / %r", user_id, key)
return self.pusher_pool.start_pusher_by_id(app_id, pushkey, user_id)
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse pusher", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.pusher"
events.USE_FROZEN_DICTS = config.use_frozen_dicts
if config.start_pushers:
sys.stderr.write(
"\nThe pushers must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``start_pushers: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the pushers to start since they will be disabled in the main config
config.start_pushers = True
ps = PusherServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ps, config, use_worker_options=True)
ps.setup()
def start():
_base.start(ps, config.worker_listeners)
ps.get_pusherpool().start()
reactor.addSystemEventTrigger("before", "startup", start)
_base.start_worker_reactor("synapse-pusher", config)
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):
ps = start(sys.argv[1:])
start(sys.argv[1:])

View File

@@ -13,451 +13,11 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import logging
import sys
from six import iteritems
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse.api.constants import EventTypes
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.deviceinbox import SlavedDeviceInboxStore
from synapse.replication.slave.storage.devices import SlavedDeviceStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.filtering import SlavedFilteringStore
from synapse.replication.slave.storage.groups import SlavedGroupServerStore
from synapse.replication.slave.storage.presence import SlavedPresenceStore
from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams.events import EventsStreamEventRow, EventsStreamRow
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.data_stores.main.presence import UserPresenceState
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.synchrotron")
class SynchrotronSlavedStore(
SlavedReceiptsStore,
SlavedAccountDataStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedFilteringStore,
SlavedPresenceStore,
SlavedGroupServerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedPushRuleStore,
SlavedEventStore,
SlavedClientIpStore,
RoomStore,
BaseSlavedStore,
):
pass
UPDATE_SYNCING_USERS_MS = 10 * 1000
class SynchrotronPresence(object):
def __init__(self, hs):
self.hs = hs
self.is_mine_id = hs.is_mine_id
self.http_client = hs.get_simple_http_client()
self.store = hs.get_datastore()
self.user_to_num_current_syncs = {}
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
active_presence = self.store.take_presence_startup_info()
self.user_to_current_state = {state.user_id: state for state in active_presence}
# user_id -> last_sync_ms. Lists the users that have stopped syncing
# but we haven't notified the master of that yet
self.users_going_offline = {}
self._send_stop_syncing_loop = self.clock.looping_call(
self.send_stop_syncing, 10 * 1000
)
self.process_id = random_string(16)
logger.info("Presence process_id is %r", self.process_id)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
if self.hs.config.use_presence:
self.hs.get_tcp_replication().send_user_sync(
user_id, is_syncing, last_sync_ms
)
def mark_as_coming_online(self, user_id):
"""A user has started syncing. Send a UserSync to the master, unless they
had recently stopped syncing.
Args:
user_id (str)
"""
going_offline = self.users_going_offline.pop(user_id, None)
if not going_offline:
# Safe to skip because we haven't yet told the master they were offline
self.send_user_sync(user_id, True, self.clock.time_msec())
def mark_as_going_offline(self, user_id):
"""A user has stopped syncing. We wait before notifying the master as
its likely they'll come back soon. This allows us to avoid sending
a stopped syncing immediately followed by a started syncing notification
to the master
Args:
user_id (str)
"""
self.users_going_offline[user_id] = self.clock.time_msec()
def send_stop_syncing(self):
"""Check if there are any users who have stopped syncing a while ago
and haven't come back yet. If there are poke the master about them.
"""
now = self.clock.time_msec()
for user_id, last_sync_ms in list(self.users_going_offline.items()):
if now - last_sync_ms > 10 * 1000:
self.users_going_offline.pop(user_id, None)
self.send_user_sync(user_id, False, last_sync_ms)
def set_state(self, user, state, ignore_status_msg=False):
# TODO: How's this supposed to work?
return defer.succeed(None)
get_states = __func__(PresenceHandler.get_states)
get_state = __func__(PresenceHandler.get_state)
current_state_for_users = __func__(PresenceHandler.current_state_for_users)
def user_syncing(self, user_id, affect_presence):
if affect_presence:
curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
self.user_to_num_current_syncs[user_id] = curr_sync + 1
# If we went from no in-flight syncs to some, notify replication
if self.user_to_num_current_syncs[user_id] == 1:
self.mark_as_coming_online(user_id)
def _end():
# We check that the user_id is in user_to_num_current_syncs because
# user_to_num_current_syncs may have been cleared if we are
# shutting down.
if affect_presence and user_id in self.user_to_num_current_syncs:
self.user_to_num_current_syncs[user_id] -= 1
# If we went from one in-flight sync to none, notify replication
if self.user_to_num_current_syncs[user_id] == 0:
self.mark_as_going_offline(user_id)
@contextlib.contextmanager
def _user_syncing():
try:
yield
finally:
_end()
return defer.succeed(_user_syncing())
@defer.inlineCallbacks
def notify_from_replication(self, states, stream_id):
parties = yield get_interested_parties(self.store, states)
room_ids_to_states, users_to_states = parties
self.notifier.on_new_event(
"presence_key",
stream_id,
rooms=room_ids_to_states.keys(),
users=users_to_states.keys(),
)
@defer.inlineCallbacks
def process_replication_rows(self, token, rows):
states = [
UserPresenceState(
row.user_id,
row.state,
row.last_active_ts,
row.last_federation_update_ts,
row.last_user_sync_ts,
row.status_msg,
row.currently_active,
)
for row in rows
]
for state in states:
self.user_to_current_state[state.user_id] = state
stream_id = token
yield self.notify_from_replication(states, stream_id)
def get_currently_syncing_users(self):
if self.hs.config.use_presence:
return [
user_id
for user_id, count in iteritems(self.user_to_num_current_syncs)
if count > 0
]
else:
return set()
class SynchrotronTyping(object):
def __init__(self, hs):
self._latest_room_serial = 0
self._reset()
def _reset(self):
"""
Reset the typing handler's data caches.
"""
# map room IDs to serial numbers
self._room_serials = {}
# map room IDs to sets of users currently typing
self._room_typing = {}
def stream_positions(self):
# We must update this typing token from the response of the previous
# sync. In particular, the stream id may "reset" back to zero/a low
# value which we *must* use for the next replication request.
return {"typing": self._latest_room_serial}
def process_replication_rows(self, token, rows):
if self._latest_room_serial > token:
# The master has gone backwards. To prevent inconsistent data, just
# clear everything.
self._reset()
# Set the latest serial token to whatever the server gave us.
self._latest_room_serial = token
for row in rows:
self._room_serials[row.room_id] = token
self._room_typing[row.room_id] = row.user_ids
class SynchrotronApplicationService(object):
def notify_interested_services(self, event):
pass
class SynchrotronServer(HomeServer):
DATASTORE_CLASS = SynchrotronSlavedStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
sync.register_servlets(self, resource)
events.register_servlets(self, resource)
InitialSyncRestServlet(self).register(resource)
RoomInitialSyncRestServlet(self).register(resource)
resources.update(
{
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
}
)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
)
logger.info("Synapse synchrotron now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return SyncReplicationHandler(self)
def build_presence_handler(self):
return SynchrotronPresence(self)
def build_typing_handler(self):
return SynchrotronTyping(self)
class SyncReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(SyncReplicationHandler, self).__init__(hs.get_datastore())
self.store = hs.get_datastore()
self.typing_handler = hs.get_typing_handler()
# NB this is a SynchrotronPresence, not a normal PresenceHandler
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.process_and_notify, stream_name, token, rows)
def get_streams_to_replicate(self):
args = super(SyncReplicationHandler, self).get_streams_to_replicate()
args.update(self.typing_handler.stream_positions())
return args
def get_currently_syncing_users(self):
return self.presence_handler.get_currently_syncing_users()
async def process_and_notify(self, stream_name, token, rows):
try:
if stream_name == "events":
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
if row.type != EventsStreamEventRow.TypeId:
continue
assert isinstance(row, EventsStreamRow)
event = await self.store.get_event(
row.data.event_id, allow_rejected=True
)
if event.rejected_reason:
continue
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
max_token = self.store.get_room_max_stream_ordering()
self.notifier.on_new_room_event(
event, token, max_token, extra_users
)
elif stream_name == "push_rules":
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows]
)
elif stream_name in ("account_data", "tag_account_data"):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows]
)
elif stream_name == "receipts":
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows]
)
elif stream_name == "typing":
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows]
)
elif stream_name == "to_device":
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event("to_device_key", token, users=entities)
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = await self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event("device_list_key", token, rooms=all_room_ids)
elif stream_name == "presence":
await self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows]
)
except Exception:
logger.exception("Error processing replication")
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse synchrotron", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.synchrotron"
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
ss = SynchrotronServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
application_service_handler=SynchrotronApplicationService(),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-synchrotron", config)
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):

View File

@@ -14,218 +14,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import sys
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.http.server import JsonResource
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams.events import (
EventsStream,
EventsStreamCurrentStateRow,
)
from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.storage.database import Database
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.user_dir")
class UserDirectorySlaveStore(
SlavedEventStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
UserDirectoryStore,
BaseSlavedStore,
):
def __init__(self, database: Database, db_conn, hs):
super(UserDirectorySlaveStore, self).__init__(database, db_conn, hs)
events_max = self._stream_id_gen.get_current_token()
curr_state_delta_prefill, min_curr_state_delta_id = self.db.get_cache_dict(
db_conn,
"current_state_delta_stream",
entity_column="room_id",
stream_column="stream_id",
max_value=events_max, # As we share the stream id with events token
limit=1000,
)
self._curr_state_delta_stream_cache = StreamChangeCache(
"_curr_state_delta_stream_cache",
min_curr_state_delta_id,
prefilled_cache=curr_state_delta_prefill,
)
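# The prefill above seeds the StreamChangeCache so that "has this room's
# state changed since token T?" is answered from memory for recent tokens
# instead of querying current_state_delta_stream on every update.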
def stream_positions(self):
result = super(UserDirectorySlaveStore, self).stream_positions()
return result
def process_replication_rows(self, stream_name, token, rows):
if stream_name == EventsStream.NAME:
self._stream_id_gen.advance(token)
for row in rows:
if row.type != EventsStreamCurrentStateRow.TypeId:
continue
self._curr_state_delta_stream_cache.entity_has_changed(
row.data.room_id, token
)
return super(UserDirectorySlaveStore, self).process_replication_rows(
stream_name, token, rows
)
class UserDirectoryServer(HomeServer):
DATASTORE_CLASS = UserDirectorySlaveStore
def _listen_http(self, listener_config):
port = listener_config["port"]
bind_addresses = listener_config["bind_addresses"]
site_tag = listener_config.get("tag", port)
resources = {}
for res in listener_config["resources"]:
for name in res["names"]:
if name == "metrics":
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
elif name == "client":
resource = JsonResource(self, canonical_json=False)
user_directory.register_servlets(self, resource)
resources.update(
{
"/_matrix/client/r0": resource,
"/_matrix/client/unstable": resource,
"/_matrix/client/v2_alpha": resource,
"/_matrix/client/api/v1": resource,
}
)
root_resource = create_resource_tree(resources, NoResource())
_base.listen_tcp(
bind_addresses,
port,
SynapseSite(
"synapse.access.http.%s" % (site_tag,),
site_tag,
listener_config,
root_resource,
self.version_string,
),
)
logger.info("Synapse user_dir now listening on port %d", port)
def start_listening(self, listeners):
for listener in listeners:
if listener["type"] == "http":
self._listen_http(listener)
elif listener["type"] == "manhole":
_base.listen_tcp(
listener["bind_addresses"],
listener["port"],
manhole(
username="matrix", password="rabbithole", globals={"hs": self}
),
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
)
)
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)
def build_tcp_replication(self):
return UserDirectoryReplicationHandler(self)
class UserDirectoryReplicationHandler(ReplicationClientHandler):
def __init__(self, hs):
super(UserDirectoryReplicationHandler, self).__init__(hs.get_datastore())
self.user_directory = hs.get_user_directory_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(UserDirectoryReplicationHandler, self).on_rdata(
stream_name, token, rows
)
if stream_name == EventsStream.NAME:
run_in_background(self._notify_directory)
@defer.inlineCallbacks
def _notify_directory(self):
try:
yield self.user_directory.notify_new_event()
except Exception:
logger.exception("Error notifiying user directory of state update")
def start(config_options):
try:
config = HomeServerConfig.load_config("Synapse user directory", config_options)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.exit(1)
assert config.worker_app == "synapse.app.user_dir"
events.USE_FROZEN_DICTS = config.use_frozen_dicts
if config.update_user_directory:
sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker."
"\nPlease add ``update_user_directory: false`` to the main config"
"\n"
)
sys.exit(1)
# Force the user directory updates on for this worker, since they are disabled in the main config
config.update_user_directory = True
ss = UserDirectoryServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
)
_base.start_worker_reactor("synapse-user-dir", config)
from synapse.app.generic_worker import start
from synapse.util.logcontext import LoggingContext
if __name__ == "__main__":
with LoggingContext("main"):

View File

@@ -53,6 +53,18 @@ Missing mandatory `server_name` config option.
"""
CONFIG_FILE_HEADER = """\
# Configuration file for Synapse.
#
# This is a YAML file: see [1] for a quick introduction. Note in particular
# that *indentation is important*: all the elements of a list or dictionary
# should have the same indentation.
#
# [1] https://docs.ansible.com/ansible/latest/reference_appendices/YAMLSyntax.html
"""
def path_exists(file_path):
"""Check if a file exists
@@ -344,7 +356,7 @@ class RootConfig(object):
str: the yaml config file
"""
return "\n\n".join(
return CONFIG_FILE_HEADER + "\n\n".join(
dedent(conf)
for conf in self.invoke_all(
"generate_config_section",
@@ -574,8 +586,8 @@ class RootConfig(object):
if not path_exists(config_dir_path):
os.makedirs(config_dir_path)
with open(config_path, "w") as config_file:
config_file.write("# vim:ft=yaml\n\n")
config_file.write(config_str)
config_file.write("\n\n# vim:ft=yaml")
config_dict = yaml.safe_load(config_str)
obj.generate_missing_files(config_dict, config_dir_path)
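# With the two changes above, a freshly generated config file now begins
# with CONFIG_FILE_HEADER and ends with the vim modeline, e.g.:
#
#   # Configuration file for Synapse.
#   # ...
#   <generated sections>
#
#   # vim:ft=yaml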

View File

@@ -24,6 +24,7 @@ from synapse.config import (
server,
server_notices_config,
spam_checker,
sso,
stats,
third_party_event_rules,
tls,
@@ -57,6 +58,7 @@ class RootConfig:
key: key.KeyConfig
saml2: saml2_config.SAML2Config
cas: cas.CasConfig
sso: sso.SSOConfig
jwt: jwt_config.JWTConfig
password: password.PasswordConfig
email: emailconfig.EmailConfig

View File

@@ -27,6 +27,12 @@ import pkg_resources
from ._base import Config, ConfigError
MISSING_PASSWORD_RESET_CONFIG_ERROR = """\
Password reset emails are enabled on this homeserver due to a partial
'email' block. However, the following required keys are missing:
%s
"""
class EmailConfig(Config):
section = "email"
@@ -37,10 +43,12 @@ class EmailConfig(Config):
self.email_enable_notifs = False
email_config = config.get("email", {})
email_config = config.get("email")
if email_config is None:
email_config = {}
self.email_smtp_host = email_config.get("smtp_host", None)
self.email_smtp_port = email_config.get("smtp_port", None)
self.email_smtp_host = email_config.get("smtp_host", "localhost")
self.email_smtp_port = email_config.get("smtp_port", 25)
self.email_smtp_user = email_config.get("smtp_user", None)
self.email_smtp_pass = email_config.get("smtp_pass", None)
self.require_transport_security = email_config.get(
@@ -74,9 +82,9 @@ class EmailConfig(Config):
self.email_template_dir = os.path.abspath(template_dir)
self.email_enable_notifs = email_config.get("enable_notifs", False)
account_validity_renewal_enabled = config.get("account_validity", {}).get(
"renew_at"
)
account_validity_config = config.get("account_validity") or {}
account_validity_renewal_enabled = account_validity_config.get("renew_at")
self.threepid_behaviour_email = (
# Have Synapse handle the email sending if account_threepid_delegates.email
@@ -140,24 +148,18 @@ class EmailConfig(Config):
bleach
if self.threepid_behaviour_email == ThreepidBehaviour.LOCAL:
required = ["smtp_host", "smtp_port", "notif_from"]
missing = []
for k in required:
if k not in email_config:
missing.append("email." + k)
if not self.email_notif_from:
missing.append("email.notif_from")
# public_baseurl is required to build password reset and validation links that
# will be emailed to users
if config.get("public_baseurl") is None:
missing.append("public_baseurl")
if len(missing) > 0:
raise RuntimeError(
"Password resets emails are configured to be sent from "
"this homeserver due to a partial 'email' block. "
"However, the following required keys are missing: %s"
% (", ".join(missing),)
if missing:
raise ConfigError(
MISSING_PASSWORD_RESET_CONFIG_ERROR % (", ".join(missing),)
)
# These email templates have placeholders in them, and thus must be
@@ -243,32 +245,25 @@ class EmailConfig(Config):
)
if self.email_enable_notifs:
required = [
"smtp_host",
"smtp_port",
"notif_from",
"notif_template_html",
"notif_template_text",
]
missing = []
for k in required:
if k not in email_config:
missing.append(k)
if len(missing) > 0:
raise RuntimeError(
"email.enable_notifs is True but required keys are missing: %s"
% (", ".join(["email." + k for k in missing]),)
)
if not self.email_notif_from:
missing.append("email.notif_from")
if config.get("public_baseurl") is None:
raise RuntimeError(
"email.enable_notifs is True but no public_baseurl is set"
missing.append("public_baseurl")
if missing:
raise ConfigError(
"email.enable_notifs is True but required keys are missing: %s"
% (", ".join(missing),)
)
self.email_notif_template_html = email_config["notif_template_html"]
self.email_notif_template_text = email_config["notif_template_text"]
self.email_notif_template_html = email_config.get(
"notif_template_html", "notif_mail.html"
)
self.email_notif_template_text = email_config.get(
"notif_template_text", "notif_mail.txt"
)
for f in self.email_notif_template_text, self.email_notif_template_html:
p = os.path.join(self.email_template_dir, f)
@@ -278,7 +273,9 @@ class EmailConfig(Config):
self.email_notif_for_new_users = email_config.get(
"notif_for_new_users", True
)
self.email_riot_base_url = email_config.get("riot_base_url", None)
self.email_riot_base_url = email_config.get(
"client_base_url", email_config.get("riot_base_url", None)
)
if account_validity_renewal_enabled:
self.email_expiry_template_html = email_config.get(
@@ -294,107 +291,112 @@ class EmailConfig(Config):
raise ConfigError("Unable to find email template file %s" % (p,))
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """
# Enable sending emails for password resets, notification events or
# account expiry notices
return """\
# Configuration for sending emails from Synapse.
#
# If your SMTP server requires authentication, the optional smtp_user &
# smtp_pass variables should be used
#
#email:
# enable_notifs: false
# smtp_host: "localhost"
# smtp_port: 25 # SSL: 465, STARTTLS: 587
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
#
# # notif_from defines the "From" address to use when sending emails.
# # It must be set if email sending is enabled.
# #
# # The placeholder '%(app)s' will be replaced by the application name,
# # which is normally 'app_name' (below), but may be overridden by the
# # Matrix client application.
# #
# # Note that the placeholder must be written '%(app)s', including the
# # trailing 's'.
# #
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
#
# # app_name defines the default value for '%(app)s' in notif_from. It
# # defaults to 'Matrix'.
# #
# #app_name: my_branded_matrix_server
#
# # Enable email notifications by default
# #
# notif_for_new_users: true
#
# # Defining a custom URL for Riot is only needed if email notifications
# # should contain links to a self-hosted installation of Riot; when set
# # the "app_name" setting is ignored
# #
# riot_base_url: "http://localhost/riot"
#
# # Configure the time that a validation email or text message code
# # will expire after sending
# #
# # This is currently used for password resets
# #
# #validation_token_lifetime: 1h
#
# # Template directory. All template files should be stored within this
# # directory. If not set, default templates from within the Synapse
# # package will be used
# #
# # For the list of default templates, please see
# # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
# #
# #template_dir: res/templates
#
# # Templates for email notifications
# #
# notif_template_html: notif_mail.html
# notif_template_text: notif_mail.txt
#
# # Templates for account expiry notices
# #
# expiry_template_html: notice_expiry.html
# expiry_template_text: notice_expiry.txt
#
# # Templates for password reset emails sent by the homeserver
# #
# #password_reset_template_html: password_reset.html
# #password_reset_template_text: password_reset.txt
#
# # Templates for registration emails sent by the homeserver
# #
# #registration_template_html: registration.html
# #registration_template_text: registration.txt
#
# # Templates for validation emails sent by the homeserver when adding an email to
# # your user account
# #
# #add_threepid_template_html: add_threepid.html
# #add_threepid_template_text: add_threepid.txt
#
# # Templates for password reset success and failure pages that a user
# # will see after attempting to reset their password
# #
# #password_reset_template_success_html: password_reset_success.html
# #password_reset_template_failure_html: password_reset_failure.html
#
# # Templates for registration success and failure pages that a user
# # will see after attempting to register using an email or phone
# #
# #registration_template_success_html: registration_success.html
# #registration_template_failure_html: registration_failure.html
#
# # Templates for success and failure pages that a user will see after attempting
# # to add an email or phone to their account
# #
# #add_threepid_success_html: add_threepid_success.html
# #add_threepid_failure_html: add_threepid_failure.html
email:
# The hostname of the outgoing SMTP server to use. Defaults to 'localhost'.
#
#smtp_host: mail.server
# The port on the mail server for outgoing SMTP. Defaults to 25.
#
#smtp_port: 587
# Username/password for authentication to the SMTP server. By default, no
# authentication is attempted.
#
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# Uncomment the following to require TLS transport security for SMTP.
# By default, Synapse will connect over plain text, and will then switch to
# TLS via STARTTLS *if the SMTP server supports it*. If this option is set,
# Synapse will refuse to connect unless the server supports STARTTLS.
#
#require_transport_security: true
# notif_from defines the "From" address to use when sending emails.
# It must be set if email sending is enabled.
#
# The placeholder '%(app)s' will be replaced by the application name,
# which is normally 'app_name' (below), but may be overridden by the
# Matrix client application.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
#notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name defines the default value for '%(app)s' in notif_from. It
# defaults to 'Matrix'.
#
#app_name: my_branded_matrix_server
# Uncomment the following to enable sending emails for messages that the user
# has missed. Disabled by default.
#
#enable_notifs: true
# Uncomment the following to disable automatic subscription to email
# notifications for new users. Enabled by default.
#
#notif_for_new_users: false
# Custom URL for client links within the email notifications. By default
# links will be based on "https://matrix.to".
#
# (This setting used to be called riot_base_url; the old name is still
# supported for backwards-compatibility but is now deprecated.)
#
#client_base_url: "http://localhost/riot"
# Configure the time that a validation email will expire after sending.
# Defaults to 1h.
#
#validation_token_lifetime: 15m
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * The contents of email notifications of missed events: 'notif_mail.html' and
# 'notif_mail.txt'.
#
# * The contents of account expiry notice emails: 'notice_expiry.html' and
# 'notice_expiry.txt'.
#
# * The contents of password reset emails sent by the homeserver:
# 'password_reset.html' and 'password_reset.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in the password reset email: 'password_reset_success.html' and
# 'password_reset_failure.html'
#
# * The contents of address verification emails sent during registration:
# 'registration.html' and 'registration.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent during registration:
# 'registration_success.html' and 'registration_failure.html'
#
# * The contents of address verification emails sent when an address is added
# to a Matrix account: 'add_threepid.html' and 'add_threepid.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent when an address is added
# to a Matrix account: 'add_threepid_success.html' and
# 'add_threepid_failure.html'
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
"""

View File

@@ -38,6 +38,7 @@ from .saml2_config import SAML2Config
from .server import ServerConfig
from .server_notices_config import ServerNoticesConfig
from .spam_checker import SpamCheckerConfig
from .sso import SSOConfig
from .stats import StatsConfig
from .third_party_event_rules import ThirdPartyRulesConfig
from .tls import TlsConfig
@@ -65,6 +66,7 @@ class HomeServerConfig(RootConfig):
KeyConfig,
SAML2Config,
CasConfig,
SSOConfig,
JWTConfig,
PasswordConfig,
EmailConfig,

View File

@@ -35,7 +35,7 @@ class PushConfig(Config):
# Now check for the one in the 'email' section and honour it,
# with a warning.
push_config = config.get("email", {})
push_config = config.get("email") or {}
redact_content = push_config.get("redact_content")
if redact_content is not None:
print(

View File

@@ -27,6 +27,9 @@ class AccountValidityConfig(Config):
section = "accountvalidity"
def __init__(self, config, synapse_config):
if config is None:
return
super(AccountValidityConfig, self).__init__()
self.enabled = config.get("enabled", False)
self.renew_by_email_enabled = "renew_at" in config
@@ -91,7 +94,7 @@ class RegistrationConfig(Config):
)
self.account_validity = AccountValidityConfig(
config.get("account_validity", {}), config
config.get("account_validity") or {}, config
)
self.registrations_require_3pid = config.get("registrations_require_3pid", [])
@@ -159,23 +162,6 @@ class RegistrationConfig(Config):
# Optional account validity configuration. This allows for accounts to be denied
# any request after a given period.
#
# ``enabled`` defines whether the account validity feature is enabled. Defaults
# to False.
#
# ``period`` allows setting the period after which an account is valid
# after its registration. When renewing the account, its validity period
# will be extended by this amount of time. This parameter is required when using
# the account validity feature.
#
# ``renew_at`` is the amount of time before an account's expiry date at which
# Synapse will send an email to the account's email address with a renewal link.
# This needs the ``email`` and ``public_baseurl`` configuration sections to be
# filled.
#
# ``renew_email_subject`` is the subject of the email sent out with the renewal
# link. ``%%(app)s`` can be used as a placeholder for the ``app_name`` parameter
# from the ``email`` section.
#
# Once this feature is enabled, Synapse will look for registered users without an
# expiration date at startup and will add one to every account it found using the
# current settings at that time.
@@ -186,21 +172,55 @@ class RegistrationConfig(Config):
# date will be randomly selected within a range [now + period - d ; now + period],
# where d is equal to 10%% of the validity period.
#
#account_validity:
# enabled: true
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %%(app)s account"
# # Directory in which Synapse will try to find the HTML files to serve to the
# # user when trying to renew an account. Optional, defaults to
# # synapse/res/templates.
# template_dir: "res/templates"
# # HTML to be displayed to the user after they successfully renewed their
# # account. Optional.
# account_renewed_html_path: "account_renewed.html"
# # HTML to be displayed when the user tries to renew an account with an invalid
# # renewal token. Optional.
# invalid_token_html_path: "invalid_token.html"
account_validity:
# The account validity feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# The period after which an account is valid after its registration. When
# renewing the account, its validity period will be extended by this amount
# of time. This parameter is required when using the account validity
# feature.
#
#period: 6w
# The amount of time before an account's expiry date at which Synapse will
# send an email to the account's email address with a renewal link. By
# default, no such emails are sent.
#
# If you enable this setting, you will also need to fill out the 'email' and
# 'public_baseurl' configuration sections.
#
#renew_at: 1w
# The subject of the email sent out with the renewal link. '%%(app)s' can be
# used as a placeholder for the 'app_name' parameter from the 'email'
# section.
#
# Note that the placeholder must be written '%%(app)s', including the
# trailing 's'.
#
# If this is not set, a default value is used.
#
#renew_email_subject: "Renew your %%(app)s account"
# Directory in which Synapse will try to find templates for the HTML files to
# serve to the user when trying to renew an account. If not set, default
# templates from within the Synapse package will be used.
#
#template_dir: "res/templates"
# File within 'template_dir' giving the HTML to be displayed to the user after
# they successfully renewed their account. If not set, default text is used.
#
#account_renewed_html_path: "account_renewed.html"
# File within 'template_dir' giving the HTML to be displayed when the user
# tries to renew an account with an invalid renewal token. If not set,
# default text is used.
#
#invalid_token_html_path: "invalid_token.html"
# Time that a user's session remains valid for, after they log in.
#
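A standalone sketch of the early-return guard added to AccountValidityConfig, using a hypothetical cut-down class rather than the full implementation; a bare "account_validity:" key parses to None, which used to crash the .get() calls below it:

class AccountValidityConfigSketch:
    def __init__(self, config):
        if config is None:
            return  # bare "account_validity:" key: leave everything unset
        self.enabled = config.get("enabled", False)
        self.renew_by_email_enabled = "renew_at" in config

AccountValidityConfigSketch(None)                # no longer raises
AccountValidityConfigSketch({"renew_at": "1w"})  # normal path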

View File

@@ -15,6 +15,9 @@
# limitations under the License.
import logging
import os
import pkg_resources
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module, load_python_module
@@ -121,6 +124,7 @@ class SAML2Config(Config):
required_methods = [
"get_saml_attributes",
"saml_response_to_user_attributes",
"get_remote_user_id",
]
missing_methods = [
method
@@ -159,6 +163,14 @@ class SAML2Config(Config):
saml2_config.get("saml_session_lifetime", "5m")
)
template_dir = saml2_config.get("template_dir")
if not template_dir:
template_dir = pkg_resources.resource_filename("synapse", "res/templates",)
self.saml2_error_html_content = self.read_file(
os.path.join(template_dir, "saml_error.html"), "saml2_config.saml_error",
)
def _default_saml_config_dict(
self, required_attributes: set, optional_attributes: set
):
@@ -324,6 +336,25 @@ class SAML2Config(Config):
# The default is 'uid'.
#
#grandfathered_mxid_source_attribute: upn
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page to display to users if something goes wrong during the
# authentication process: 'saml_error.html'.
#
# This template doesn't currently need any variable to render.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
""" % {
"config_dir_path": config_dir_path
}
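The template_dir fallback added above follows a pattern reused elsewhere in this changeset: when the admin has not configured a directory, load the templates shipped inside the Synapse package. A minimal sketch:

import os
import pkg_resources

template_dir = None  # i.e. saml2_config.get("template_dir") was unset
if not template_dir:
    # resolve the res/templates directory bundled with the synapse package
    template_dir = pkg_resources.resource_filename("synapse", "res/templates")
error_template_path = os.path.join(template_dir, "saml_error.html")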

View File

@@ -294,6 +294,14 @@ class ServerConfig(Config):
self.retention_default_min_lifetime = None
self.retention_default_max_lifetime = None
if self.retention_enabled:
logger.info(
"Message retention policies support enabled with the following default"
" policy: min_lifetime = %s ; max_lifetime = %s",
self.retention_default_min_lifetime,
self.retention_default_max_lifetime,
)
self.retention_allowed_lifetime_min = retention_config.get(
"allowed_lifetime_min"
)
@@ -948,17 +956,17 @@ class ServerConfig(Config):
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
# of outdated messages on a very frequent basis (e.g. every 5min), but not want
# that purge to be performed by a job that's iterating over every room it knows,
# which would be quite heavy on the server.
# of outdated messages on a more frequent basis than for the rest of the rooms
# (e.g. every 12h), but not want that purge to be performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 5m:
# interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 24h
# interval: 1d
"""
% locals()
)
@@ -1058,12 +1066,12 @@ KNOWN_RESOURCES = (
def _check_resource_config(listeners):
resource_names = set(
resource_names = {
res_name
for listener in listeners
for res in listener.get("resources", [])
for res_name in res.get("names", [])
)
}
for resource in resource_names:
if resource not in KNOWN_RESOURCES:

92 synapse/config/sso.py Normal file
View File

@@ -0,0 +1,92 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, Dict
import pkg_resources
from ._base import Config
class SSOConfig(Config):
"""SSO Configuration
"""
section = "sso"
def read_config(self, config, **kwargs):
sso_config = config.get("sso") or {} # type: Dict[str, Any]
# Pick a template directory in order of:
# * The sso-specific template_dir
# * /path/to/synapse/install/res/templates
template_dir = sso_config.get("template_dir")
if not template_dir:
template_dir = pkg_resources.resource_filename("synapse", "res/templates",)
self.sso_redirect_confirm_template_dir = template_dir
self.sso_client_whitelist = sso_config.get("client_whitelist") or []
def generate_config_section(self, **kwargs):
return """\
# Additional settings to use with single-sign on systems such as SAML2 and CAS.
#
sso:
# A list of client URLs which are whitelisted so that the user does not
# have to confirm giving access to their account to the URL. Any client
# whose URL starts with an entry in the following list will not be subject
# to an additional confirmation step after the SSO login is completed.
#
# WARNING: An entry such as "https://my.client" is insecure, because it
# will also match "https://my.client.evil.site", exposing your users to
# phishing attacks from evil.site. To avoid this, include a slash after the
# hostname: "https://my.client/".
#
# By default, this list is empty.
#
#client_whitelist:
# - https://riot.im/develop
# - https://my.custom.client/
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * HTML page for a confirmation step before redirecting back to the client
# with the login token: 'sso_redirect_confirm.html'.
#
# When rendering, this template is given three variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * display_url: the same as `redirect_url`, but with the query
# parameters stripped. The intention is to have a
# human-readable URL to show to users, not to use it as
# the final address to redirect to. Needs manual escaping
# (see https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * server_name: the homeserver's name.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
"""

View File

@@ -32,6 +32,17 @@ from synapse.util import glob_to_regex
logger = logging.getLogger(__name__)
ACME_SUPPORT_ENABLED_WARN = """\
This server uses Synapse's built-in ACME support. Note that ACME v1 has been
deprecated by Let's Encrypt, and that Synapse doesn't currently support ACME v2,
which means that this feature will not work with Synapse installs set up after
November 2019, and that it may stop working on June 2020 for installs set up
before that date.
For more info and alternative solutions, see
https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
--------------------------------------------------------------------------------"""
class TlsConfig(Config):
section = "tls"
@@ -44,6 +55,9 @@ class TlsConfig(Config):
self.acme_enabled = acme_config.get("enabled", False)
if self.acme_enabled:
logger.warning(ACME_SUPPORT_ENABLED_WARN)
# hyperlink complains on py2 if this is not a Unicode
self.acme_url = six.text_type(
acme_config.get("url", "https://acme-v01.api.letsencrypt.org/directory")
@@ -109,6 +123,8 @@ class TlsConfig(Config):
fed_whitelist_entries = config.get(
"federation_certificate_verification_whitelist", []
)
if fed_whitelist_entries is None:
fed_whitelist_entries = []
# Support globs (*) in whitelist values
self.federation_certificate_verification_whitelist = [] # type: List[str]
@@ -244,7 +260,7 @@ class TlsConfig(Config):
crypto.FILETYPE_ASN1, self.tls_certificate
)
sha256_fingerprint = encode_base64(sha256(x509_certificate_bytes).digest())
sha256_fingerprints = set(f["sha256"] for f in self.tls_fingerprints)
sha256_fingerprints = {f["sha256"] for f in self.tls_fingerprints}
if sha256_fingerprint not in sha256_fingerprints:
self.tls_fingerprints.append({"sha256": sha256_fingerprint})
@@ -360,6 +376,11 @@ class TlsConfig(Config):
# ACME support: This will configure Synapse to request a valid TLS certificate
# for your configured `server_name` via Let's Encrypt.
#
# Note that ACME v1 is now deprecated, and Synapse currently doesn't support
# ACME v2. This means that this feature currently won't work with installs set
# up after November 2019. For more info, and alternative solutions, see
# https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
#
# Note that provisioning a certificate in this way requires port 80 to be
# routed to Synapse so that it can complete the http-01 ACME challenge.
# By default, if you enable ACME support, Synapse will attempt to listen on

View File

@@ -75,7 +75,7 @@ class ServerContextFactory(ContextFactory):
@implementer(IPolicyForHTTPS)
class ClientTLSOptionsFactory(object):
class FederationPolicyForHTTPS(object):
"""Factory for Twisted SSLClientConnectionCreators that are used to make connections
to remote servers for federation.
@@ -103,15 +103,15 @@ class ClientTLSOptionsFactory(object):
# let us do).
minTLS = _TLS_VERSION_MAP[config.federation_client_minimum_tls_version]
self._verify_ssl = CertificateOptions(
_verify_ssl = CertificateOptions(
trustRoot=trust_root, insecurelyLowerMinimumTo=minTLS
)
self._verify_ssl_context = self._verify_ssl.getContext()
self._verify_ssl_context.set_info_callback(self._context_info_cb)
self._verify_ssl_context = _verify_ssl.getContext()
self._verify_ssl_context.set_info_callback(_context_info_cb)
self._no_verify_ssl = CertificateOptions(insecurelyLowerMinimumTo=minTLS)
self._no_verify_ssl_context = self._no_verify_ssl.getContext()
self._no_verify_ssl_context.set_info_callback(self._context_info_cb)
_no_verify_ssl = CertificateOptions(insecurelyLowerMinimumTo=minTLS)
self._no_verify_ssl_context = _no_verify_ssl.getContext()
self._no_verify_ssl_context.set_info_callback(_context_info_cb)
def get_options(self, host: bytes):
@@ -136,23 +136,6 @@ class ClientTLSOptionsFactory(object):
return SSLClientConnectionCreator(host, ssl_context, should_verify)
@staticmethod
def _context_info_cb(ssl_connection, where, ret):
"""The 'information callback' for our openssl context object."""
# we assume that the app_data on the connection object has been set to
# a TLSMemoryBIOProtocol object. (This is done by SSLClientConnectionCreator)
tls_protocol = ssl_connection.get_app_data()
try:
# ... we further assume that SSLClientConnectionCreator has set the
# '_synapse_tls_verifier' attribute to a ConnectionVerifier object.
tls_protocol._synapse_tls_verifier.verify_context_info_cb(
ssl_connection, where
)
except: # noqa: E722, taken from the twisted implementation
logger.exception("Error during info_callback")
f = Failure()
tls_protocol.failVerification(f)
def creatorForNetloc(self, hostname, port):
"""Implements the IPolicyForHTTPS interace so that this can be passed
directly to agents.
@@ -160,6 +143,43 @@ class ClientTLSOptionsFactory(object):
return self.get_options(hostname)
@implementer(IPolicyForHTTPS)
class RegularPolicyForHTTPS(object):
"""Factory for Twisted SSLClientConnectionCreators that are used to make connections
to remote servers, for other than federation.
Always uses the same OpenSSL context object, which uses the default OpenSSL CA
trust root.
"""
def __init__(self):
trust_root = platformTrust()
self._ssl_context = CertificateOptions(trustRoot=trust_root).getContext()
self._ssl_context.set_info_callback(_context_info_cb)
def creatorForNetloc(self, hostname, port):
return SSLClientConnectionCreator(hostname, self._ssl_context, True)
def _context_info_cb(ssl_connection, where, ret):
"""The 'information callback' for our openssl context objects.
Note: Once this is set as the info callback on a Context object, the Context should
only be used with the SSLClientConnectionCreator.
"""
# we assume that the app_data on the connection object has been set to
# a TLSMemoryBIOProtocol object. (This is done by SSLClientConnectionCreator)
tls_protocol = ssl_connection.get_app_data()
try:
# ... we further assume that SSLClientConnectionCreator has set the
# '_synapse_tls_verifier' attribute to a ConnectionVerifier object.
tls_protocol._synapse_tls_verifier.verify_context_info_cb(ssl_connection, where)
except: # noqa: E722, taken from the twisted implementation
logger.exception("Error during info_callback")
f = Failure()
tls_protocol.failVerification(f)
@implementer(IOpenSSLClientConnectionCreator)
class SSLClientConnectionCreator(object):
"""Creates openssl connection objects for client connections.

View File

@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
#
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -17,13 +18,17 @@
import collections.abc
import hashlib
import logging
from typing import Dict
from canonicaljson import encode_canonical_json
from signedjson.sign import sign_json
from signedjson.types import SigningKey
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.events.utils import prune_event, prune_event_dict
from synapse.types import JsonDict
logger = logging.getLogger(__name__)
@@ -112,20 +117,30 @@ def compute_event_reference_hash(event, hash_algorithm=hashlib.sha256):
return hashed.name, hashed.digest()
def compute_event_signature(event_dict, signature_name, signing_key):
def compute_event_signature(
room_version: RoomVersion,
event_dict: JsonDict,
signature_name: str,
signing_key: SigningKey,
) -> Dict[str, Dict[str, str]]:
"""Compute the signature of the event for the given name and key.
Args:
event_dict (dict): The event as a dict
signature_name (str): The name of the entity signing the event
room_version: the version of the room that this event is in.
(the room version determines the redaction algorithm and hence the
json to be signed)
event_dict: The event as a dict
signature_name: The name of the entity signing the event
(typically the server's hostname).
signing_key (syutil.crypto.SigningKey): The key to sign with
signing_key: The key to sign with
Returns:
dict[str, dict[str, str]]: Returns a dictionary in the same format of
an event's signatures field.
a dictionary in the same format of an event's signatures field.
"""
redact_json = prune_event_dict(event_dict)
redact_json = prune_event_dict(room_version, event_dict)
redact_json.pop("age_ts", None)
redact_json.pop("unsigned", None)
if logger.isEnabledFor(logging.DEBUG):
@@ -137,23 +152,26 @@ def compute_event_signature(event_dict, signature_name, signing_key):
def add_hashes_and_signatures(
event_dict, signature_name, signing_key, hash_algorithm=hashlib.sha256
room_version: RoomVersion,
event_dict: JsonDict,
signature_name: str,
signing_key: SigningKey,
):
"""Add content hash and sign the event
Args:
event_dict (dict): The event to add hashes to and sign
signature_name (str): The name of the entity signing the event
room_version: the version of the room this event is in
event_dict: The event to add hashes to and sign
signature_name: The name of the entity signing the event
(typically the server's hostname).
signing_key (syutil.crypto.SigningKey): The key to sign with
hash_algorithm: A hasher from `hashlib`, e.g. hashlib.sha256, to use
to hash the event
signing_key: The key to sign with
"""
name, digest = compute_content_hash(event_dict, hash_algorithm=hash_algorithm)
name, digest = compute_content_hash(event_dict, hash_algorithm=hashlib.sha256)
event_dict.setdefault("hashes", {})[name] = encode_base64(digest)
event_dict["signatures"] = compute_event_signature(
event_dict, signature_name=signature_name, signing_key=signing_key
room_version, event_dict, signature_name=signature_name, signing_key=signing_key
)
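A hypothetical call under the new signatures, showing the room version being threaded through (field values invented for illustration):

from signedjson.key import generate_signing_key

from synapse.api.room_versions import RoomVersions
from synapse.crypto.event_signing import add_hashes_and_signatures

event_dict = {
    "type": "m.room.message",
    "room_id": "!room:example.com",
    "sender": "@alice:example.com",
    "content": {"body": "hello"},
}
# the RoomVersion selects the redaction algorithm applied before signing
add_hashes_and_signatures(
    RoomVersions.V5, event_dict, "example.com", generate_signing_key("key1")
)
assert "hashes" in event_dict and "signatures" in event_dict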

View File

@@ -326,9 +326,7 @@ class Keyring(object):
verify_requests (list[VerifyJsonRequest]): list of verify requests
"""
remaining_requests = set(
(rq for rq in verify_requests if not rq.key_ready.called)
)
remaining_requests = {rq for rq in verify_requests if not rq.key_ready.called}
@defer.inlineCallbacks
def do_iterations():
@@ -396,7 +394,7 @@ class Keyring(object):
results = yield fetcher.get_keys(missing_keys)
completed = list()
completed = []
for verify_request in remaining_requests:
server_name = verify_request.server_name

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -23,17 +24,27 @@ from unpaddedbase64 import decode_base64
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, EventSizeError, SynapseError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions
from synapse.api.room_versions import (
KNOWN_ROOM_VERSIONS,
EventFormatVersions,
RoomVersion,
)
from synapse.types import UserID, get_domain_from_id
logger = logging.getLogger(__name__)
def check(room_version, event, auth_events, do_sig_check=True, do_size_check=True):
def check(
room_version_obj: RoomVersion,
event,
auth_events,
do_sig_check=True,
do_size_check=True,
):
""" Checks if this event is correctly authed.
Args:
room_version (str): the version of the room
room_version_obj: the version of the room
event: the event being checked.
auth_events (dict: event-key -> event): the existing room state.
@@ -89,7 +100,12 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
if not event.signatures.get(event_id_domain):
raise AuthError(403, "Event not signed by sending server")
# Implementation of https://matrix.org/docs/spec/rooms/v1#authorization-rules
#
# 1. If type is m.room.create:
if event.type == EventTypes.Create:
# 1b. If the domain of the room_id does not match the domain of the sender,
# reject.
sender_domain = get_domain_from_id(event.sender)
room_id_domain = get_domain_from_id(event.room_id)
if room_id_domain != sender_domain:
@@ -97,37 +113,45 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
403, "Creation event's room_id domain does not match sender's"
)
room_version = event.content.get("room_version", "1")
if room_version not in KNOWN_ROOM_VERSIONS:
# 1c. If content.room_version is present and is not a recognised version, reject
room_version_prop = event.content.get("room_version", "1")
if room_version_prop not in KNOWN_ROOM_VERSIONS:
raise AuthError(
403, "room appears to have unsupported version %s" % (room_version,)
403,
"room appears to have unsupported version %s" % (room_version_prop,),
)
# FIXME
logger.debug("Allowing! %s", event)
return
# 3. If event does not have a m.room.create in its auth_events, reject.
creation_event = auth_events.get((EventTypes.Create, ""), None)
if not creation_event:
raise AuthError(403, "No create event in auth events")
# additional check for m.federate
creating_domain = get_domain_from_id(event.room_id)
originating_domain = get_domain_from_id(event.sender)
if creating_domain != originating_domain:
if not _can_federate(event, auth_events):
raise AuthError(403, "This room has been marked as unfederatable.")
# FIXME: Temp hack
if event.type == EventTypes.Aliases:
# 4. If type is m.room.aliases
if event.type == EventTypes.Aliases and room_version_obj.special_case_aliases_auth:
# 4a. If event has no state_key, reject
if not event.is_state():
raise AuthError(403, "Alias event must be a state event")
if not event.state_key:
raise AuthError(403, "Alias event must have non-empty state_key")
# 4b. If sender's domain doesn't matches [sic] state_key, reject
sender_domain = get_domain_from_id(event.sender)
if event.state_key != sender_domain:
raise AuthError(
403, "Alias event's state_key does not match sender's domain"
)
# 4c. Otherwise, allow.
logger.debug("Allowing! %s", event)
return
@@ -160,7 +184,7 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
_check_power_levels(event, auth_events)
if event.type == EventTypes.Redaction:
check_redaction(room_version, event, auth_events)
check_redaction(room_version_obj, event, auth_events)
logger.debug("Allowing! %s", event)
@@ -386,7 +410,7 @@ def _can_send_event(event, auth_events):
return True
def check_redaction(room_version, event, auth_events):
def check_redaction(room_version_obj: RoomVersion, event, auth_events):
"""Check whether the event sender is allowed to redact the target event.
Returns:
@@ -406,11 +430,7 @@ def check_redaction(room_version, event, auth_events):
if user_level >= redact_level:
return False
v = KNOWN_ROOM_VERSIONS.get(room_version)
if not v:
raise RuntimeError("Unrecognized room version %r" % (room_version,))
if v.event_format == EventFormatVersions.V1:
if room_version_obj.event_format == EventFormatVersions.V1:
redacter_domain = get_domain_from_id(event.event_id)
redactee_domain = get_domain_from_id(event.redacts)
if redacter_domain == redactee_domain:
@@ -634,7 +654,7 @@ def get_public_keys(invite_event):
return public_keys
def auth_types_for_event(event) -> Set[Tuple[str]]:
def auth_types_for_event(event) -> Set[Tuple[str, str]]:
"""Given an event, return a list of (EventType, StateKey) that may be
needed to auth the event. The returned list may be a superset of what
would actually be required depending on the full state of the room.
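The new gate on rule 4 consults a per-room-version flag instead of hard-coding the alias special case. A sketch of how the flag reads (attribute name taken from the hunk above; the concrete values are left to the room-version definitions):

from synapse.api.room_versions import RoomVersions

# where the flag is set, m.room.aliases short-circuits the auth rules;
# otherwise the event falls through to the normal checks
for v in (RoomVersions.V1, RoomVersions.V5):
    print(v.identifier, v.special_case_aliases_auth)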

View File

@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2019 New Vector Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,15 +15,17 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
import os
from distutils.util import strtobool
from typing import Dict, Optional, Type
import six
from unpaddedbase64 import encode_base64
from synapse.api.errors import UnsupportedRoomVersionError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions
from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions
from synapse.types import JsonDict
from synapse.util.caches import intern_dict
from synapse.util.frozenutils import freeze
@@ -36,34 +39,115 @@ from synapse.util.frozenutils import freeze
USE_FROZEN_DICTS = strtobool(os.environ.get("SYNAPSE_USE_FROZEN_DICTS", "0"))
class DictProperty:
"""An object property which delegates to the `_dict` within its parent object."""
__slots__ = ["key"]
def __init__(self, key: str):
self.key = key
def __get__(self, instance, owner=None):
# if the property is accessed as a class property rather than an instance
# property, return the property itself rather than the value
if instance is None:
return self
try:
return instance._dict[self.key]
except KeyError as e1:
# We want this to look like a regular attribute error (mostly so that
# hasattr() works correctly), so we convert the KeyError into an
# AttributeError.
#
# To exclude the KeyError from the traceback, we explicitly
# 'raise from e1.__context__' (which is better than 'raise from None',
# because that would omit any *earlier* exceptions).
#
raise AttributeError(
"'%s' has no '%s' property" % (type(instance), self.key)
) from e1.__context__
def __set__(self, instance, v):
instance._dict[self.key] = v
def __delete__(self, instance):
try:
del instance._dict[self.key]
except KeyError as e1:
raise AttributeError(
"'%s' has no '%s' property" % (type(instance), self.key)
) from e1.__context__
class DefaultDictProperty(DictProperty):
"""An extension of DictProperty which provides a default if the property is
not present in the parent's _dict.
Note that this means that hasattr() on the property always returns True.
"""
__slots__ = ["default"]
def __init__(self, key, default):
super().__init__(key)
self.default = default
def __get__(self, instance, owner=None):
if instance is None:
return self
return instance._dict.get(self.key, self.default)
class _EventInternalMetadata(object):
def __init__(self, internal_metadata_dict):
self.__dict__ = dict(internal_metadata_dict)
__slots__ = ["_dict"]
def get_dict(self):
return dict(self.__dict__)
def __init__(self, internal_metadata_dict: JsonDict):
# we have to copy the dict, because it turns out that the same dict is
# reused. TODO: fix that
self._dict = dict(internal_metadata_dict)
def is_outlier(self):
return getattr(self, "outlier", False)
outlier = DictProperty("outlier") # type: bool
out_of_band_membership = DictProperty("out_of_band_membership") # type: bool
send_on_behalf_of = DictProperty("send_on_behalf_of") # type: str
recheck_redaction = DictProperty("recheck_redaction") # type: bool
soft_failed = DictProperty("soft_failed") # type: bool
proactively_send = DictProperty("proactively_send") # type: bool
redacted = DictProperty("redacted") # type: bool
txn_id = DictProperty("txn_id") # type: str
token_id = DictProperty("token_id") # type: str
stream_ordering = DictProperty("stream_ordering") # type: int
def is_out_of_band_membership(self):
# XXX: These are set by StreamWorkerStore._set_before_and_after.
# I'm pretty sure that these are never persisted to the database, so shouldn't
# be here
before = DictProperty("before") # type: str
after = DictProperty("after") # type: str
order = DictProperty("order") # type: int
def get_dict(self) -> JsonDict:
return dict(self._dict)
def is_outlier(self) -> bool:
return self._dict.get("outlier", False)
def is_out_of_band_membership(self) -> bool:
"""Whether this is an out of band membership, like an invite or an invite
rejection. This is needed as those events are marked as outliers, but
they still need to be processed as if they're new events (e.g. updating
invite state in the database, relaying to clients, etc).
"""
return getattr(self, "out_of_band_membership", False)
return self._dict.get("out_of_band_membership", False)
def get_send_on_behalf_of(self):
def get_send_on_behalf_of(self) -> Optional[str]:
"""Whether this server should send the event on behalf of another server.
This is used by the federation "send_join" API to forward the initial join
event for a server in the room.
returns a str with the name of the server this event is sent on behalf of.
"""
return getattr(self, "send_on_behalf_of", None)
return self._dict.get("send_on_behalf_of")
def need_to_check_redaction(self):
def need_to_check_redaction(self) -> bool:
"""Whether the redaction event needs to be rechecked when fetching
from the database.
@@ -76,9 +160,9 @@ class _EventInternalMetadata(object):
Returns:
bool
"""
return getattr(self, "recheck_redaction", False)
return self._dict.get("recheck_redaction", False)
def is_soft_failed(self):
def is_soft_failed(self) -> bool:
"""Whether the event has been soft failed.
Soft failed events should be handled as usual, except:
@@ -90,7 +174,7 @@ class _EventInternalMetadata(object):
Returns:
bool
"""
return getattr(self, "soft_failed", False)
return self._dict.get("soft_failed", False)
def should_proactively_send(self):
"""Whether the event, if ours, should be sent to other clients and
@@ -102,7 +186,7 @@ class _EventInternalMetadata(object):
Returns:
bool
"""
return getattr(self, "proactively_send", True)
return self._dict.get("proactively_send", True)
def is_redacted(self):
"""Whether the event has been redacted.
@@ -113,62 +197,53 @@ class _EventInternalMetadata(object):
Returns:
bool
"""
return getattr(self, "redacted", False)
return self._dict.get("redacted", False)
def _event_dict_property(key):
# We want to be able to use hasattr with the event dict properties.
# However, (on python3) hasattr expects AttributeError to be raised. Hence,
# we need to transform the KeyError into an AttributeError
def getter(self):
try:
return self._event_dict[key]
except KeyError:
raise AttributeError(key)
class EventBase(metaclass=abc.ABCMeta):
@property
@abc.abstractmethod
def format_version(self) -> int:
"""The EventFormatVersion implemented by this event"""
...
def setter(self, v):
try:
self._event_dict[key] = v
except KeyError:
raise AttributeError(key)
def delete(self):
try:
del self._event_dict[key]
except KeyError:
raise AttributeError(key)
return property(getter, setter, delete)
class EventBase(object):
def __init__(
self,
event_dict,
signatures={},
unsigned={},
internal_metadata_dict={},
rejected_reason=None,
event_dict: JsonDict,
room_version: RoomVersion,
signatures: Dict[str, Dict[str, str]],
unsigned: JsonDict,
internal_metadata_dict: JsonDict,
rejected_reason: Optional[str],
):
assert room_version.event_format == self.format_version
self.room_version = room_version
self.signatures = signatures
self.unsigned = unsigned
self.rejected_reason = rejected_reason
self._event_dict = event_dict
self._dict = event_dict
self.internal_metadata = _EventInternalMetadata(internal_metadata_dict)
auth_events = _event_dict_property("auth_events")
depth = _event_dict_property("depth")
content = _event_dict_property("content")
hashes = _event_dict_property("hashes")
origin = _event_dict_property("origin")
origin_server_ts = _event_dict_property("origin_server_ts")
prev_events = _event_dict_property("prev_events")
redacts = _event_dict_property("redacts")
room_id = _event_dict_property("room_id")
sender = _event_dict_property("sender")
user_id = _event_dict_property("sender")
auth_events = DictProperty("auth_events")
depth = DictProperty("depth")
content = DictProperty("content")
hashes = DictProperty("hashes")
origin = DictProperty("origin")
origin_server_ts = DictProperty("origin_server_ts")
prev_events = DictProperty("prev_events")
redacts = DefaultDictProperty("redacts", None)
room_id = DictProperty("room_id")
sender = DictProperty("sender")
state_key = DictProperty("state_key")
type = DictProperty("type")
user_id = DictProperty("sender")
@property
def event_id(self) -> str:
raise NotImplementedError()
@property
def membership(self):
@@ -177,19 +252,19 @@ class EventBase(object):
def is_state(self):
return hasattr(self, "state_key") and self.state_key is not None
def get_dict(self):
d = dict(self._event_dict)
def get_dict(self) -> JsonDict:
d = dict(self._dict)
d.update({"signatures": self.signatures, "unsigned": dict(self.unsigned)})
return d
def get(self, key, default=None):
return self._event_dict.get(key, default)
return self._dict.get(key, default)
def get_internal_metadata_dict(self):
return self.internal_metadata.get_dict()
def get_pdu_json(self, time_now=None):
def get_pdu_json(self, time_now=None) -> JsonDict:
pdu_json = self.get_dict()
if time_now is not None and "age_ts" in pdu_json["unsigned"]:
@@ -206,16 +281,16 @@ class EventBase(object):
raise AttributeError("Unrecognized attribute %s" % (instance,))
def __getitem__(self, field):
return self._event_dict[field]
return self._dict[field]
def __contains__(self, field):
return field in self._event_dict
return field in self._dict
def items(self):
return list(self._event_dict.items())
return list(self._dict.items())
def keys(self):
return six.iterkeys(self._event_dict)
return six.iterkeys(self._dict)
def prev_event_ids(self):
"""Returns the list of prev event IDs. The order matches the order
@@ -239,7 +314,13 @@ class EventBase(object):
class FrozenEvent(EventBase):
format_version = EventFormatVersions.V1 # All events of this type are V1
def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):
def __init__(
self,
event_dict: JsonDict,
room_version: RoomVersion,
internal_metadata_dict: JsonDict = {},
rejected_reason: Optional[str] = None,
):
event_dict = dict(event_dict)
# Signatures is a dict of dicts, and this is faster than doing a
@@ -260,19 +341,21 @@ class FrozenEvent(EventBase):
else:
frozen_dict = event_dict
self.event_id = event_dict["event_id"]
self.type = event_dict["type"]
if "state_key" in event_dict:
self.state_key = event_dict["state_key"]
self._event_id = event_dict["event_id"]
super(FrozenEvent, self).__init__(
super().__init__(
frozen_dict,
room_version=room_version,
signatures=signatures,
unsigned=unsigned,
internal_metadata_dict=internal_metadata_dict,
rejected_reason=rejected_reason,
)
@property
def event_id(self) -> str:
return self._event_id
def __str__(self):
return self.__repr__()
@@ -287,7 +370,13 @@ class FrozenEvent(EventBase):
class FrozenEventV2(EventBase):
format_version = EventFormatVersions.V2 # All events of this type are V2
def __init__(self, event_dict, internal_metadata_dict={}, rejected_reason=None):
def __init__(
self,
event_dict: JsonDict,
room_version: RoomVersion,
internal_metadata_dict: JsonDict = {},
rejected_reason: Optional[str] = None,
):
event_dict = dict(event_dict)
# Signatures is a dict of dicts, and this is faster than doing a
@@ -311,12 +400,10 @@ class FrozenEventV2(EventBase):
frozen_dict = event_dict
self._event_id = None
self.type = event_dict["type"]
if "state_key" in event_dict:
self.state_key = event_dict["state_key"]
super(FrozenEventV2, self).__init__(
super().__init__(
frozen_dict,
room_version=room_version,
signatures=signatures,
unsigned=unsigned,
internal_metadata_dict=internal_metadata_dict,
@@ -383,28 +470,7 @@ class FrozenEventV3(FrozenEventV2):
return self._event_id
def room_version_to_event_format(room_version):
"""Converts a room version string to the event format
Args:
room_version (str)
Returns:
int
Raises:
UnsupportedRoomVersionError if the room version is unknown
"""
v = KNOWN_ROOM_VERSIONS.get(room_version)
if not v:
# this can happen if support is withdrawn for a room version
raise UnsupportedRoomVersionError()
return v.event_format
def event_type_from_format_version(format_version):
def _event_type_from_format_version(format_version: int) -> Type[EventBase]:
"""Returns the python type to use to construct an Event object for the
given event format version.
@@ -424,3 +490,14 @@ def event_type_from_format_version(format_version):
return FrozenEventV3
else:
raise Exception("No event format %r" % (format_version,))
def make_event_from_dict(
event_dict: JsonDict,
room_version: RoomVersion = RoomVersions.V1,
internal_metadata_dict: JsonDict = {},
rejected_reason: Optional[str] = None,
) -> EventBase:
"""Construct an EventBase from the given event dict"""
event_type = _event_type_from_format_version(room_version.event_format)
return event_type(event_dict, room_version, internal_metadata_dict, rejected_reason)
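A self-contained demo of the descriptor pattern introduced above: attribute access delegates to the backing dict, and a missing key surfaces as AttributeError so that hasattr() keeps working. The classes are assumed importable from synapse.events:

from synapse.events import DefaultDictProperty, DictProperty

class Demo:
    depth = DictProperty("depth")
    sender = DictProperty("sender")
    redacts = DefaultDictProperty("redacts", None)

    def __init__(self, d):
        self._dict = d  # the backing dict the descriptors read and write

e = Demo({"depth": 3})
print(e.depth)               # 3
print(e.redacts)             # None, via the default
print(hasattr(e, "sender"))  # False: the KeyError becomes AttributeError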

View File

@@ -12,8 +12,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional
import attr
from nacl.signing import SigningKey
from twisted.internet import defer
@@ -23,13 +25,14 @@ from synapse.api.room_versions import (
KNOWN_EVENT_FORMAT_VERSIONS,
KNOWN_ROOM_VERSIONS,
EventFormatVersions,
RoomVersion,
)
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.types import EventID
from synapse.events import EventBase, _EventInternalMetadata, make_event_from_dict
from synapse.types import EventID, JsonDict
from synapse.util import Clock
from synapse.util.stringutils import random_string
from . import _EventInternalMetadata, event_type_from_format_version
@attr.s(slots=True, cmp=False, frozen=True)
class EventBuilder(object):
@@ -40,7 +43,7 @@ class EventBuilder(object):
content/unsigned/internal_metadata fields are still mutable)
Attributes:
format_version (int): Event format version
room_version: Version of the target room
room_id (str)
type (str)
sender (str)
@@ -63,7 +66,7 @@ class EventBuilder(object):
_hostname = attr.ib()
_signing_key = attr.ib()
format_version = attr.ib()
room_version = attr.ib(type=RoomVersion)
room_id = attr.ib()
type = attr.ib()
@@ -108,7 +111,8 @@ class EventBuilder(object):
)
auth_ids = yield self._auth.compute_auth_events(self, state_ids)
if self.format_version == EventFormatVersions.V1:
format_version = self.room_version.event_format
if format_version == EventFormatVersions.V1:
auth_events = yield self._store.add_event_hashes(auth_ids)
prev_events = yield self._store.add_event_hashes(prev_event_ids)
else:
@@ -148,7 +152,7 @@ class EventBuilder(object):
clock=self._clock,
hostname=self._hostname,
signing_key=self._signing_key,
format_version=self.format_version,
room_version=self.room_version,
event_dict=event_dict,
internal_metadata_dict=self.internal_metadata.get_dict(),
)
@@ -201,7 +205,7 @@ class EventBuilderFactory(object):
clock=self.clock,
hostname=self.hostname,
signing_key=self.signing_key,
format_version=room_version.event_format,
room_version=room_version,
type=key_values["type"],
state_key=key_values.get("state_key"),
room_id=key_values["room_id"],
@@ -214,29 +218,19 @@ class EventBuilderFactory(object):
def create_local_event_from_event_dict(
clock,
hostname,
signing_key,
format_version,
event_dict,
internal_metadata_dict=None,
):
clock: Clock,
hostname: str,
signing_key: SigningKey,
room_version: RoomVersion,
event_dict: JsonDict,
internal_metadata_dict: Optional[JsonDict] = None,
) -> EventBase:
"""Takes a fully formed event dict, ensuring that fields like `origin`
and `origin_server_ts` have correct values for a locally produced event,
then signs and hashes it.
Args:
clock (Clock)
hostname (str)
signing_key
format_version (int)
event_dict (dict)
internal_metadata_dict (dict|None)
Returns:
FrozenEvent
"""
format_version = room_version.event_format
if format_version not in KNOWN_EVENT_FORMAT_VERSIONS:
raise Exception("No event format defined for version %r" % (format_version,))
@@ -257,9 +251,9 @@ def create_local_event_from_event_dict(
event_dict.setdefault("signatures", {})
add_hashes_and_signatures(event_dict, hostname, signing_key)
return event_type_from_format_version(format_version)(
event_dict, internal_metadata_dict=internal_metadata_dict
add_hashes_and_signatures(room_version, event_dict, hostname, signing_key)
return make_event_from_dict(
event_dict, room_version, internal_metadata_dict=internal_metadata_dict
)
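A hypothetical end-to-end use of the new helper, replacing the old two-step format-version lookup (field values invented for illustration):

from synapse.api.room_versions import RoomVersions
from synapse.events import make_event_from_dict

event = make_event_from_dict(
    {
        "event_id": "$abc123",  # v1-format events carry their own event_id
        "type": "m.room.message",
        "room_id": "!room:example.com",
        "sender": "@alice:example.com",
        "content": {"body": "hi"},
    },
    RoomVersions.V1,
)
print(event.event_id, event.type)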

View File

@@ -12,7 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict, Optional, Tuple, Union
from typing import Optional, Union
from six import iteritems
@@ -23,6 +23,7 @@ from twisted.internet import defer
from synapse.appservice import ApplicationService
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.types import StateMap
@attr.s(slots=True)
@@ -106,13 +107,11 @@ class EventContext:
_state_group = attr.ib(default=None, type=Optional[int])
state_group_before_event = attr.ib(default=None, type=Optional[int])
prev_group = attr.ib(default=None, type=Optional[int])
delta_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
delta_ids = attr.ib(default=None, type=Optional[StateMap[str]])
app_service = attr.ib(default=None, type=Optional[ApplicationService])
_current_state_ids = attr.ib(
default=None, type=Optional[Dict[Tuple[str, str], str]]
)
_prev_state_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
_current_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
_prev_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
@staticmethod
def with_state(

View File

@@ -15,12 +15,17 @@
# limitations under the License.
import inspect
from typing import Dict
from synapse.spam_checker_api import SpamCheckerApi
MYPY = False
if MYPY:
import synapse.server
class SpamChecker(object):
def __init__(self, hs):
def __init__(self, hs: "synapse.server.HomeServer"):
self.spam_checker = None
module = None
@@ -40,7 +45,7 @@ class SpamChecker(object):
else:
self.spam_checker = module(config=config)
def check_event_for_spam(self, event):
def check_event_for_spam(self, event: "synapse.events.EventBase") -> bool:
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
@@ -48,26 +53,30 @@ class SpamChecker(object):
users receive a blank event.
Args:
event (synapse.events.EventBase): the event to be checked
event: the event to be checked
Returns:
bool: True if the event is spammy.
True if the event is spammy.
"""
if self.spam_checker is None:
return False
return self.spam_checker.check_event_for_spam(event)
def user_may_invite(self, inviter_userid, invitee_userid, room_id):
def user_may_invite(
self, inviter_userid: str, invitee_userid: str, room_id: str
) -> bool:
"""Checks if a given user may send an invite
If this method returns false, the invite will be rejected.
Args:
userid (string): The sender's user ID
inviter_userid: The user ID of the sender of the invitation
invitee_userid: The user ID targeted in the invitation
room_id: The room ID
Returns:
bool: True if the user may send an invite, otherwise False
True if the user may send an invite, otherwise False
"""
if self.spam_checker is None:
return True
@@ -76,52 +85,78 @@ class SpamChecker(object):
inviter_userid, invitee_userid, room_id
)
def user_may_create_room(self, userid):
def user_may_create_room(self, userid: str) -> bool:
"""Checks if a given user may create a room
If this method returns false, the creation request will be rejected.
Args:
userid (string): The sender's user ID
userid: The ID of the user attempting to create a room
Returns:
bool: True if the user may create a room, otherwise False
True if the user may create a room, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_create_room(userid)
def user_may_create_room_alias(self, userid, room_alias):
def user_may_create_room_alias(self, userid: str, room_alias: str) -> bool:
"""Checks if a given user may create a room alias
If this method returns false, the association request will be rejected.
Args:
userid (string): The sender's user ID
room_alias (string): The alias to be created
userid: The ID of the user attempting to create a room alias
room_alias: The alias to be created
Returns:
bool: True if the user may create a room alias, otherwise False
True if the user may create a room alias, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_create_room_alias(userid, room_alias)
def user_may_publish_room(self, userid, room_id):
def user_may_publish_room(self, userid: str, room_id: str) -> bool:
"""Checks if a given user may publish a room to the directory
If this method returns false, the publish request will be rejected.
Args:
userid (string): The sender's user ID
room_id (string): The ID of the room that would be published
userid: The user ID attempting to publish the room
room_id: The ID of the room that would be published
Returns:
bool: True if the user may publish the room, otherwise False
True if the user may publish the room, otherwise False
"""
if self.spam_checker is None:
return True
return self.spam_checker.user_may_publish_room(userid, room_id)
def check_username_for_spam(self, user_profile: Dict[str, str]) -> bool:
"""Checks if a user ID or display name are considered "spammy" by this server.
If the server considers a username spammy, then it will not be included in
user directory results.
Args:
user_profile: The user information to check; it contains the keys:
* user_id
* display_name
* avatar_url
Returns:
True if the user is spammy.
"""
if self.spam_checker is None:
return False
# For backwards compatibility, if the method does not exist on the spam checker, fallback to not interfering.
checker = getattr(self.spam_checker, "check_username_for_spam", None)
if not checker:
return False
# Make a copy of the user profile object to ensure the spam checker
# cannot modify it.
return checker(user_profile.copy())
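A module plugged into this interface implements the documented hooks; check_username_for_spam is the only optional one, thanks to the getattr() fallback above. A minimal illustrative sketch (the class name and config keys are hypothetical, not part of this change):

class ExampleSpamChecker(object):
    def __init__(self, config):
        self._blocked_words = config.get("blocked_words", [])

    def check_event_for_spam(self, event):
        # Reject events whose message body mentions a blocked word.
        body = event.content.get("body", "")
        return any(word in body for word in self._blocked_words)

    def user_may_invite(self, inviter_userid, invitee_userid, room_id):
        return True

    def user_may_create_room(self, userid):
        return True

    def user_may_create_room_alias(self, userid, room_alias):
        return True

    def user_may_publish_room(self, userid, room_id):
        return True

    def check_username_for_spam(self, user_profile):
        # Optional hook: hide users whose display name matches.
        display_name = user_profile.get("display_name") or ""
        return any(word in display_name for word in self._blocked_words)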

View File

@@ -74,15 +74,16 @@ class ThirdPartyEventRules(object):
is_requester_admin (bool): If the requester is an admin
Returns:
defer.Deferred
defer.Deferred[bool]: Whether room creation is allowed or denied.
"""
if self.third_party_rules is None:
return
return True
yield self.third_party_rules.on_create_room(
ret = yield self.third_party_rules.on_create_room(
requester, config, is_requester_admin
)
return ret
@defer.inlineCallbacks
def check_threepid_can_be_invited(self, medium, address, room_id):

View File

@@ -12,8 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
import re
from typing import Mapping, Union
from six import string_types
@@ -22,6 +23,7 @@ from frozendict import frozendict
from twisted.internet import defer
from synapse.api.constants import EventTypes, RelationTypes
from synapse.api.room_versions import RoomVersion
from synapse.util.async_helpers import yieldable_gather_results
from . import EventBase
@@ -34,26 +36,20 @@ from . import EventBase
SPLIT_FIELD_REGEX = re.compile(r"(?<!\\)\.")
def prune_event(event):
def prune_event(event: EventBase) -> EventBase:
""" Returns a pruned version of the given event, which removes all keys we
don't know about or think could potentially be dodgy.
This is used when we "redact" an event. We want to remove all fields that
the user has specified, but we do want to keep necessary information like
type, state_key etc.
Args:
event (FrozenEvent)
Returns:
FrozenEvent
"""
pruned_event_dict = prune_event_dict(event.get_dict())
pruned_event_dict = prune_event_dict(event.room_version, event.get_dict())
from . import event_type_from_format_version
from . import make_event_from_dict
pruned_event = event_type_from_format_version(event.format_version)(
pruned_event_dict, event.internal_metadata.get_dict()
pruned_event = make_event_from_dict(
pruned_event_dict, event.room_version, event.internal_metadata.get_dict()
)
# Mark the event as redacted
@@ -62,15 +58,12 @@ def prune_event(event):
return pruned_event
def prune_event_dict(event_dict):
def prune_event_dict(room_version: RoomVersion, event_dict: dict) -> dict:
"""Redacts the event_dict in the same way as `prune_event`, except it
operates on dicts rather than event objects
Args:
event_dict (dict)
Returns:
dict: A copy of the pruned event dict
A copy of the pruned event dict
"""
allowed_keys = [
@@ -117,7 +110,7 @@ def prune_event_dict(event_dict):
"kick",
"redact",
)
elif event_type == EventTypes.Aliases:
elif event_type == EventTypes.Aliases and room_version.special_case_aliases_auth:
add_fields("aliases")
elif event_type == EventTypes.RoomHistoryVisibility:
add_fields("history_visibility")
@@ -422,3 +415,37 @@ class EventClientSerializer(object):
return yieldable_gather_results(
self.serialize_event, events, time_now=time_now, **kwargs
)
def copy_power_levels_contents(
old_power_levels: Mapping[str, Union[int, Mapping[str, int]]]
):
"""Copy the content of a power_levels event, unfreezing frozendicts along the way
Raises:
TypeError if the input does not look like a valid power levels event content
"""
if not isinstance(old_power_levels, collections.Mapping):
raise TypeError("Not a valid power-levels content: %r" % (old_power_levels,))
power_levels = {}
for k, v in old_power_levels.items():
if isinstance(v, int):
power_levels[k] = v
continue
if isinstance(v, collections.Mapping):
power_levels[k] = h = {}
for k1, v1 in v.items():
# we should only have one level of nesting
if not isinstance(v1, int):
raise TypeError(
"Invalid power_levels value for %s.%s: %r" % (k, k1, v1)
)
h[k1] = v1
continue
raise TypeError("Invalid power_levels value for %s: %r" % (k, v))
return power_levels
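Illustrative usage of the new helper (values are hypothetical):

old_content = {"ban": 50, "users": {"@admin:example.com": 100}}
new_content = copy_power_levels_contents(old_content)

# The copy is built from plain dicts, so it is safe to mutate even if
# old_content was a frozendict:
new_content["users"]["@alice:example.com"] = 50

# Malformed values are rejected:
# copy_power_levels_contents({"users": {"@a:b.c": "100"}})  # raises TypeError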

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,27 +15,32 @@
# limitations under the License.
import logging
from collections import namedtuple
from typing import Iterable, List
import six
from twisted.internet import defer
from twisted.internet.defer import DeferredList
from twisted.internet.defer import Deferred, DeferredList
from twisted.python.failure import Failure
from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions
from synapse.api.room_versions import (
KNOWN_ROOM_VERSIONS,
EventFormatVersions,
RoomVersion,
)
from synapse.crypto.event_signing import check_event_content_hash
from synapse.events import event_type_from_format_version
from synapse.crypto.keyring import Keyring
from synapse.events import EventBase, make_event_from_dict
from synapse.events.utils import prune_event
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import (
LoggingContext,
PreserveLoggingContext,
make_deferred_yieldable,
preserve_fn,
)
from synapse.types import get_domain_from_id
from synapse.util import unwrapFirstError
from synapse.types import JsonDict, get_domain_from_id
logger = logging.getLogger(__name__)
@@ -49,92 +55,23 @@ class FederationBase(object):
self.store = hs.get_datastore()
self._clock = hs.get_clock()
@defer.inlineCallbacks
def _check_sigs_and_hash_and_fetch(
self, origin, pdus, room_version, outlier=False, include_none=False
):
"""Takes a list of PDUs and checks the signatures and hashs of each
one. If a PDU fails its signature check then we check if we have it in
the database and if not then request if from the originating server of
that PDU.
If a PDU fails its content hash check then it is redacted.
The given list of PDUs are not modified, instead the function returns
a new list.
Args:
origin (str)
pdus (list)
room_version (str)
outlier (bool): Whether the events are outliers or not
include_none (bool): Whether to include None in the returned list
for events that have failed their checks
Returns:
Deferred : A list of PDUs that have valid signatures and hashes.
"""
deferreds = self._check_sigs_and_hashes(room_version, pdus)
@defer.inlineCallbacks
def handle_check_result(pdu, deferred):
try:
res = yield make_deferred_yieldable(deferred)
except SynapseError:
res = None
if not res:
# Check local db.
res = yield self.store.get_event(
pdu.event_id, allow_rejected=True, allow_none=True
)
if not res and pdu.origin != origin:
try:
res = yield self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
room_version=room_version,
outlier=outlier,
timeout=10000,
)
except SynapseError:
pass
if not res:
logger.warning(
"Failed to find copy of %s with valid signature", pdu.event_id
)
return res
handle = preserve_fn(handle_check_result)
deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)]
valid_pdus = yield make_deferred_yieldable(
defer.gatherResults(deferreds2, consumeErrors=True)
).addErrback(unwrapFirstError)
if include_none:
return valid_pdus
else:
return [p for p in valid_pdus if p]
def _check_sigs_and_hash(self, room_version, pdu):
def _check_sigs_and_hash(self, room_version: str, pdu: EventBase) -> Deferred:
return make_deferred_yieldable(
self._check_sigs_and_hashes(room_version, [pdu])[0]
)
def _check_sigs_and_hashes(self, room_version, pdus):
def _check_sigs_and_hashes(
self, room_version: str, pdus: List[EventBase]
) -> List[Deferred]:
"""Checks that each of the received events is correctly signed by the
sending server.
Args:
room_version (str): The room version of the PDUs
pdus (list[FrozenEvent]): the events to be checked
room_version: The room version of the PDUs
pdus: the events to be checked
Returns:
list[Deferred]: for each input event, a deferred which:
For each input event, a deferred which:
* returns the original event if the checks pass
* returns a redacted version of the event (if the signature
matched but the hash did not)
@@ -145,7 +82,7 @@ class FederationBase(object):
ctx = LoggingContext.current_context()
def callback(_, pdu):
def callback(_, pdu: EventBase):
with PreserveLoggingContext(ctx):
if not check_event_content_hash(pdu):
# let's try to distinguish between failures because the event was
@@ -182,7 +119,7 @@ class FederationBase(object):
return pdu
def errback(failure, pdu):
def errback(failure: Failure, pdu: EventBase):
failure.trap(SynapseError)
with PreserveLoggingContext(ctx):
logger.warning(
@@ -208,16 +145,18 @@ class PduToCheckSig(
pass
def _check_sigs_on_pdus(keyring, room_version, pdus):
def _check_sigs_on_pdus(
keyring: Keyring, room_version: str, pdus: Iterable[EventBase]
) -> List[Deferred]:
"""Check that the given events are correctly signed
Args:
keyring (synapse.crypto.Keyring): keyring object to do the checks
room_version (str): the room version of the PDUs
pdus (Collection[EventBase]): the events to be checked
keyring: keyring object to do the checks
room_version: the room version of the PDUs
pdus: the events to be checked
Returns:
List[Deferred]: a Deferred for each event in pdus, which will either succeed if
A Deferred for each event in pdus, which will either succeed if
the signatures are valid, or fail (with a SynapseError) if not.
"""
@@ -322,7 +261,7 @@ def _check_sigs_on_pdus(keyring, room_version, pdus):
return [_flatten_deferred_list(p.deferreds) for p in pdus_to_check]
def _flatten_deferred_list(deferreds):
def _flatten_deferred_list(deferreds: List[Deferred]) -> Deferred:
"""Given a list of deferreds, either return the single deferred,
combine into a DeferredList, or return an already resolved deferred.
"""
@@ -334,7 +273,7 @@ def _flatten_deferred_list(deferreds):
return defer.succeed(None)
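The body of _flatten_deferred_list is elided from this diff; a sketch consistent with the docstring, assuming Twisted's DeferredList semantics:

def flatten_deferred_list(deferreds):
    if len(deferreds) > 1:
        # Fail as soon as any constituent deferred fails.
        return DeferredList(deferreds, fireOnOneErrback=True, consumeErrors=True)
    elif len(deferreds) == 1:
        return deferreds[0]
    else:
        return defer.succeed(None)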
def _is_invite_via_3pid(event):
def _is_invite_via_3pid(event: EventBase) -> bool:
return (
event.type == EventTypes.Member
and event.membership == Membership.INVITE
@@ -342,16 +281,15 @@ def _is_invite_via_3pid(event):
)
def event_from_pdu_json(pdu_json, event_format_version, outlier=False):
"""Construct a FrozenEvent from an event json received over federation
def event_from_pdu_json(
pdu_json: JsonDict, room_version: RoomVersion, outlier: bool = False
) -> EventBase:
"""Construct an EventBase from an event json received over federation
Args:
pdu_json (object): pdu as received over federation
event_format_version (int): The event format version
outlier (bool): True to mark this event as an outlier
Returns:
FrozenEvent
pdu_json: pdu as received over federation
room_version: The version of the room this event belongs to
outlier: True to mark this event as an outlier
Raises:
SynapseError: if the pdu is missing required fields or is otherwise
@@ -370,8 +308,7 @@ def event_from_pdu_json(pdu_json, event_format_version, outlier=False):
elif depth > MAX_DEPTH:
raise SynapseError(400, "Depth too large", Codes.BAD_JSON)
event = event_type_from_format_version(event_format_version)(pdu_json)
event = make_event_from_dict(pdu_json, room_version)
event.internal_metadata.outlier = outlier
return event
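The same caller-side migration recurs throughout the rest of this changeset: callers used to derive an event format version from a room version string, and now pass the RoomVersion object straight through. In outline:

# Before (illustrative):
#     room_version_id = yield self.store.get_room_version(room_id)  # a str
#     format_ver = room_version_to_event_format(room_version_id)
#     event = event_from_pdu_json(pdu_json, format_ver)
#
# After:
#     room_version = await self.store.get_room_version(room_id)  # a RoomVersion
#     event = event_from_pdu_json(pdu_json, room_version)
#
# Callers that only want the version string now call get_room_version_id().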

View File

@@ -17,10 +17,23 @@
import copy
import itertools
import logging
from typing import (
Any,
Awaitable,
Callable,
Dict,
Iterable,
List,
Optional,
Sequence,
Tuple,
TypeVar,
)
from prometheus_client import Counter
from twisted.internet import defer
from twisted.internet.defer import Deferred
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import (
@@ -29,16 +42,19 @@ from synapse.api.errors import (
FederationDeniedError,
HttpResponseException,
SynapseError,
UnsupportedRoomVersionError,
)
from synapse.api.room_versions import (
KNOWN_ROOM_VERSIONS,
EventFormatVersions,
RoomVersion,
RoomVersions,
)
from synapse.events import builder, room_version_to_event_format
from synapse.events import EventBase, builder
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.context import make_deferred_yieldable, preserve_fn
from synapse.logging.utils import log_function
from synapse.types import JsonDict
from synapse.util import unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
@@ -50,6 +66,8 @@ sent_queries_counter = Counter("synapse_federation_client_sent_queries", "", ["t
PDU_RETRY_TIME_MS = 1 * 60 * 1000
T = TypeVar("T")
class InvalidResponseError(RuntimeError):
"""Helper for _try_destination_list: indicates that the server returned a response
@@ -168,56 +186,55 @@ class FederationClient(FederationBase):
sent_queries_counter.labels("client_one_time_keys").inc()
return self.transport_layer.claim_client_keys(destination, content, timeout)
@defer.inlineCallbacks
@log_function
def backfill(self, dest, room_id, limit, extremities):
"""Requests some more historic PDUs for the given context from the
async def backfill(
self, dest: str, room_id: str, limit: int, extremities: Iterable[str]
) -> Optional[List[EventBase]]:
"""Requests some more historic PDUs for the given room from the
given destination server.
Args:
dest (str): The remote homeserver to ask.
room_id (str): The room_id to backfill.
limit (int): The maximum number of PDUs to return.
extremities (list): List of PDU id and origins of the first pdus
we have seen from the context
Returns:
Deferred: Results in the received PDUs.
limit (int): The maximum number of events to return.
extremities (list): our current backwards extremities, to backfill from
"""
logger.debug("backfill extrem=%s", extremities)
# If there are no extremeties then we've (probably) reached the start.
# If there are no extremities then we've (probably) reached the start.
if not extremities:
return
return None
transaction_data = yield self.transport_layer.backfill(
transaction_data = await self.transport_layer.backfill(
dest, room_id, extremities, limit
)
logger.debug("backfill transaction_data=%r", transaction_data)
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
room_version = await self.store.get_room_version(room_id)
pdus = [
event_from_pdu_json(p, format_ver, outlier=False)
event_from_pdu_json(p, room_version, outlier=False)
for p in transaction_data["pdus"]
]
# FIXME: We should handle signature failures more gracefully.
pdus[:] = yield make_deferred_yieldable(
pdus[:] = await make_deferred_yieldable(
defer.gatherResults(
self._check_sigs_and_hashes(room_version, pdus), consumeErrors=True
self._check_sigs_and_hashes(room_version.identifier, pdus),
consumeErrors=True,
).addErrback(unwrapFirstError)
)
return pdus
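backfill is the first of many methods in this file converted from @defer.inlineCallbacks to a native coroutine; the mechanical shape of the conversion is:

# Before:
#     @defer.inlineCallbacks
#     def backfill(self, dest, room_id, limit, extremities):
#         transaction_data = yield self.transport_layer.backfill(...)
#         return pdus
#
# After:
#     async def backfill(self, dest, room_id, limit, extremities):
#         transaction_data = await self.transport_layer.backfill(...)
#         return pdus
#
# Deferred-returning helpers can still be awaited directly, as the
# `await make_deferred_yieldable(...)` call above shows.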
@defer.inlineCallbacks
@log_function
def get_pdu(
self, destinations, event_id, room_version, outlier=False, timeout=None
):
async def get_pdu(
self,
destinations: Iterable[str],
event_id: str,
room_version: RoomVersion,
outlier: bool = False,
timeout: Optional[int] = None,
) -> Optional[EventBase]:
"""Requests the PDU with given origin and ID from the remote home
servers.
@@ -225,18 +242,17 @@ class FederationClient(FederationBase):
one succeeds.
Args:
destinations (list): Which homeservers to query
event_id (str): event to fetch
room_version (str): version of the room
outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if
destinations: Which homeservers to query
event_id: event to fetch
room_version: version of the room
outlier: Indicates whether the PDU is an `outlier`, i.e. if
it's from an arbitrary point in the context as opposed to part
of the current block of PDUs. Defaults to `False`
timeout (int): How long to try (in ms) each destination for before
timeout: How long to try (in ms) each destination for before
moving to the next destination. None indicates no timeout.
Returns:
Deferred: Results in the requested PDU, or None if we were unable to find
it.
The requested PDU, or None if we were unable to find it.
"""
# TODO: Rate limit the number of times we try and get the same event.
@@ -247,8 +263,6 @@ class FederationClient(FederationBase):
pdu_attempts = self.pdu_destination_tried.setdefault(event_id, {})
format_ver = room_version_to_event_format(room_version)
signed_pdu = None
for destination in destinations:
now = self._clock.time_msec()
@@ -257,7 +271,7 @@ class FederationClient(FederationBase):
continue
try:
transaction_data = yield self.transport_layer.get_event(
transaction_data = await self.transport_layer.get_event(
destination, event_id, timeout=timeout
)
@@ -269,15 +283,17 @@ class FederationClient(FederationBase):
)
pdu_list = [
event_from_pdu_json(p, format_ver, outlier=outlier)
event_from_pdu_json(p, room_version, outlier=outlier)
for p in transaction_data["pdus"]
]
] # type: List[EventBase]
if pdu_list and pdu_list[0]:
pdu = pdu_list[0]
# Check signatures are correct.
signed_pdu = yield self._check_sigs_and_hash(room_version, pdu)
signed_pdu = await self._check_sigs_and_hash(
room_version.identifier, pdu
)
break
@@ -307,15 +323,16 @@ class FederationClient(FederationBase):
return signed_pdu
@defer.inlineCallbacks
def get_room_state_ids(self, destination: str, room_id: str, event_id: str):
async def get_room_state_ids(
self, destination: str, room_id: str, event_id: str
) -> Tuple[List[str], List[str]]:
"""Calls the /state_ids endpoint to fetch the state at a particular point
in the room, and the auth events for the given event
Returns:
Tuple[List[str], List[str]]: a tuple of (state event_ids, auth event_ids)
a tuple of (state event_ids, auth event_ids)
"""
result = yield self.transport_layer.get_room_state_ids(
result = await self.transport_layer.get_room_state_ids(
destination, room_id, event_id=event_id
)
@@ -329,37 +346,116 @@ class FederationClient(FederationBase):
return state_event_ids, auth_event_ids
@defer.inlineCallbacks
@log_function
def get_event_auth(self, destination, room_id, event_id):
res = yield self.transport_layer.get_event_auth(destination, room_id, event_id)
async def _check_sigs_and_hash_and_fetch(
self,
origin: str,
pdus: List[EventBase],
room_version: str,
outlier: bool = False,
include_none: bool = False,
) -> List[EventBase]:
"""Takes a list of PDUs and checks the signatures and hashs of each
one. If a PDU fails its signature check then we check if we have it in
the database and if not then request if from the originating server of
that PDU.
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
If a PDU fails its content hash check then it is redacted.
The given list of PDUs are not modified, instead the function returns
a new list.
Args:
origin
pdus
room_version
outlier: Whether the events are outliers or not
include_none: Whether to include None in the returned list
for events that have failed their checks
Returns:
A list of PDUs that have valid signatures and hashes.
"""
deferreds = self._check_sigs_and_hashes(room_version, pdus)
@defer.inlineCallbacks
def handle_check_result(pdu: EventBase, deferred: Deferred):
try:
res = yield make_deferred_yieldable(deferred)
except SynapseError:
res = None
if not res:
# Check local db.
res = yield self.store.get_event(
pdu.event_id, allow_rejected=True, allow_none=True
)
if not res and pdu.origin != origin:
try:
res = yield defer.ensureDeferred(
self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
room_version=room_version, # type: ignore
outlier=outlier,
timeout=10000,
)
)
except SynapseError:
pass
if not res:
logger.warning(
"Failed to find copy of %s with valid signature", pdu.event_id
)
return res
handle = preserve_fn(handle_check_result)
deferreds2 = [handle(pdu, deferred) for pdu, deferred in zip(pdus, deferreds)]
valid_pdus = await make_deferred_yieldable(
defer.gatherResults(deferreds2, consumeErrors=True)
).addErrback(unwrapFirstError)
if include_none:
return valid_pdus
else:
return [p for p in valid_pdus if p]
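Note the bridging in the opposite direction inside handle_check_result: inlineCallbacks code cannot yield a coroutine directly, so the now-async get_pdu is wrapped in defer.ensureDeferred. A self-contained sketch of that pattern (names are illustrative):

from twisted.internet import defer

async def fetch_pdu(destination):
    # Stand-in for an awaitable transport call.
    return {"event_id": "$abc", "origin": destination}

@defer.inlineCallbacks
def legacy_caller():
    # Yielding a coroutine is not allowed; ensureDeferred adapts it.
    pdu = yield defer.ensureDeferred(fetch_pdu("example.com"))
    return pdu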
async def get_event_auth(self, destination, room_id, event_id):
res = await self.transport_layer.get_event_auth(destination, room_id, event_id)
room_version = await self.store.get_room_version(room_id)
auth_chain = [
event_from_pdu_json(p, format_ver, outlier=True) for p in res["auth_chain"]
event_from_pdu_json(p, room_version, outlier=True)
for p in res["auth_chain"]
]
signed_auth = yield self._check_sigs_and_hash_and_fetch(
destination, auth_chain, outlier=True, room_version=room_version
signed_auth = await self._check_sigs_and_hash_and_fetch(
destination, auth_chain, outlier=True, room_version=room_version.identifier
)
signed_auth.sort(key=lambda e: e.depth)
return signed_auth
@defer.inlineCallbacks
def _try_destination_list(self, description, destinations, callback):
async def _try_destination_list(
self,
description: str,
destinations: Iterable[str],
callback: Callable[[str], Awaitable[T]],
) -> T:
"""Try an operation on a series of servers, until it succeeds
Args:
description (unicode): description of the operation we're doing, for logging
description: description of the operation we're doing, for logging
destinations (Iterable[unicode]): list of server_names to try
destinations: list of server_names to try
callback (callable): Function to run for each server. Passed a single
argument: the server_name to try. May return a deferred.
callback: Function to run for each server. Passed a single
argument: the server_name to try.
If the callback raises a CodeMessageException with a 300/400 code,
attempts to perform the operation stop immediately and the exception is
@@ -370,7 +466,7 @@ class FederationClient(FederationBase):
suppressed if the exception is an InvalidResponseError.
Returns:
The [Deferred] result of callback, if it succeeds
The result of callback, if it succeeds
Raises:
SynapseError if the chosen remote server returns a 300/400 code, or
@@ -381,10 +477,12 @@ class FederationClient(FederationBase):
continue
try:
res = yield callback(destination)
res = await callback(destination)
return res
except InvalidResponseError as e:
logger.warning("Failed to %s via %s: %s", description, destination, e)
except UnsupportedRoomVersionError:
raise
except HttpResponseException as e:
if not 500 <= e.code < 600:
raise e.to_synapse_error()
@@ -398,14 +496,20 @@ class FederationClient(FederationBase):
)
except Exception:
logger.warning(
"Failed to %s via %s", description, destination, exc_info=1
"Failed to %s via %s", description, destination, exc_info=True
)
raise SynapseError(502, "Failed to %s via any server" % (description,))
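Condensed, the helper's control flow looks like this (a sketch omitting the 300/400-vs-5xx distinction and the UnsupportedRoomVersionError re-raise shown above):

async def try_destination_list(description, destinations, callback):
    for destination in destinations:
        try:
            return await callback(destination)
        except InvalidResponseError as e:
            logger.warning("Failed to %s via %s: %s", description, destination, e)
        except Exception:
            logger.warning(
                "Failed to %s via %s", description, destination, exc_info=True
            )
    raise SynapseError(502, "Failed to %s via any server" % (description,))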
def make_membership_event(
self, destinations, room_id, user_id, membership, content, params
):
async def make_membership_event(
self,
destinations: Iterable[str],
room_id: str,
user_id: str,
membership: str,
content: dict,
params: Dict[str, str],
) -> Tuple[str, EventBase, RoomVersion]:
"""
Creates an m.room.member event, with context, without participating in the room.
@@ -417,26 +521,28 @@ class FederationClient(FederationBase):
Note that this does not append any events to any graphs.
Args:
destinations (Iterable[str]): Candidate homeservers which are probably
destinations: Candidate homeservers which are probably
participating in the room.
room_id (str): The room in which the event will happen.
user_id (str): The user whose membership is being evented.
membership (str): The "membership" property of the event. Must be
one of "join" or "leave".
content (dict): Any additional data to put into the content field
of the event.
params (dict[str, str|Iterable[str]]): Query parameters to include in the
request.
Return:
Deferred[tuple[str, FrozenEvent, int]]: resolves to a tuple of
`(origin, event, event_format)` where origin is the remote
homeserver which generated the event, and event_format is one of
`synapse.api.room_versions.EventFormatVersions`.
room_id: The room in which the event will happen.
user_id: The user whose membership is being evented.
membership: The "membership" property of the event. Must be one of
"join" or "leave".
content: Any additional data to put into the content field of the
event.
params: Query parameters to include in the request.
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
Returns:
`(origin, event, room_version)` where origin is the remote
homeserver which generated the event, and room_version is the
version of the room.
Fails with a ``RuntimeError`` if no servers were reachable.
Raises:
UnsupportedRoomVersionError: if remote responds with
a room version we don't understand.
SynapseError: if the chosen remote server returns a 300/400 code.
RuntimeError: if no servers were reachable.
"""
valid_memberships = {Membership.JOIN, Membership.LEAVE}
if membership not in valid_memberships:
@@ -445,16 +551,17 @@ class FederationClient(FederationBase):
% (membership, ",".join(valid_memberships))
)
@defer.inlineCallbacks
def send_request(destination):
ret = yield self.transport_layer.make_membership_event(
async def send_request(destination: str) -> Tuple[str, EventBase, RoomVersion]:
ret = await self.transport_layer.make_membership_event(
destination, room_id, user_id, membership, params
)
# Note: If not supplied, the room version may be either v1 or v2,
# however either way the event format version will be v1.
room_version = ret.get("room_version", RoomVersions.V1.identifier)
event_format = room_version_to_event_format(room_version)
room_version_id = ret.get("room_version", RoomVersions.V1.identifier)
room_version = KNOWN_ROOM_VERSIONS.get(room_version_id)
if not room_version:
raise UnsupportedRoomVersionError()
pdu_dict = ret.get("event", None)
if not isinstance(pdu_dict, dict):
@@ -474,92 +581,87 @@ class FederationClient(FederationBase):
self._clock,
self.hostname,
self.signing_key,
format_version=event_format,
room_version=room_version,
event_dict=pdu_dict,
)
return (destination, ev, event_format)
return destination, ev, room_version
return self._try_destination_list(
return await self._try_destination_list(
"make_" + membership, destinations, send_request
)
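From the caller's perspective the return type changes from (origin, event, event_format) to (origin, event, room_version). A hypothetical call site:

origin, event, room_version = await client.make_membership_event(
    destinations=["hs1.example.com", "hs2.example.com"],
    room_id="!room:example.com",
    user_id="@alice:example.com",
    membership="join",
    content={},
    params={},
)
# room_version is a full RoomVersion object, so callers no longer need to
# look it up in KNOWN_ROOM_VERSIONS themselves (e.g. room_version.event_format).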
def send_join(self, destinations, pdu, event_format_version):
async def send_join(
self, destinations: Iterable[str], pdu: EventBase, room_version: RoomVersion
) -> Dict[str, Any]:
"""Sends a join event to one of a list of homeservers.
Doing so will cause the remote server to add the event to the graph,
and send the event out to the rest of the federation.
Args:
destinations (str): Candidate homeservers which are probably
destinations: Candidate homeservers which are probably
participating in the room.
pdu (BaseEvent): event to be sent
event_format_version (int): The event format version
pdu: event to be sent
room_version: the version of the room (according to the server that
did the make_join)
Return:
Deferred: resolves to a dict with members ``origin`` (a string
giving the serer the event was sent to, ``state`` (?) and
Returns:
a dict with members ``origin`` (a string
giving the server the event was sent to, ``state`` (?) and
``auth_chain``.
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
Raises:
SynapseError: if the chosen remote server returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
RuntimeError: if no servers were reachable.
"""
def check_authchain_validity(signed_auth_chain):
for e in signed_auth_chain:
if e.type == EventTypes.Create:
create_event = e
break
else:
raise InvalidResponseError("no %s in auth chain" % (EventTypes.Create,))
# the room version should be sane.
room_version = create_event.content.get("room_version", "1")
if room_version not in KNOWN_ROOM_VERSIONS:
# This shouldn't be possible, because the remote server should have
# rejected the join attempt during make_join.
raise InvalidResponseError(
"room appears to have unsupported version %s" % (room_version,)
)
@defer.inlineCallbacks
def send_request(destination):
content = yield self._do_send_join(destination, pdu)
async def send_request(destination) -> Dict[str, Any]:
content = await self._do_send_join(destination, pdu)
logger.debug("Got content: %s", content)
state = [
event_from_pdu_json(p, event_format_version, outlier=True)
event_from_pdu_json(p, room_version, outlier=True)
for p in content.get("state", [])
]
auth_chain = [
event_from_pdu_json(p, event_format_version, outlier=True)
event_from_pdu_json(p, room_version, outlier=True)
for p in content.get("auth_chain", [])
]
pdus = {p.event_id: p for p in itertools.chain(state, auth_chain)}
room_version = None
create_event = None
for e in state:
if (e.type, e.state_key) == (EventTypes.Create, ""):
room_version = e.content.get(
"room_version", RoomVersions.V1.identifier
)
create_event = e
break
if room_version is None:
if create_event is None:
# If the state doesn't have a create event then the room is
# invalid, and it would fail auth checks anyway.
raise SynapseError(400, "No create event in state")
valid_pdus = yield self._check_sigs_and_hash_and_fetch(
# the room version should be sane.
create_room_version = create_event.content.get(
"room_version", RoomVersions.V1.identifier
)
if create_room_version != room_version.identifier:
# either the server that fulfilled the make_join, or the server that is
# handling the send_join, is lying.
raise InvalidResponseError(
"Unexpected room version %s in create event"
% (create_room_version,)
)
valid_pdus = await self._check_sigs_and_hash_and_fetch(
destination,
list(pdus.values()),
outlier=True,
room_version=room_version,
room_version=room_version.identifier,
)
valid_pdus_map = {p.event_id: p for p in valid_pdus}
@@ -583,7 +685,17 @@ class FederationClient(FederationBase):
for s in signed_state:
s.internal_metadata = copy.deepcopy(s.internal_metadata)
check_authchain_validity(signed_auth)
# double-check that the same create event has ended up in the auth chain
auth_chain_create_events = [
e.event_id
for e in signed_auth
if (e.type, e.state_key) == (EventTypes.Create, "")
]
if auth_chain_create_events != [create_event.event_id]:
raise InvalidResponseError(
"Unexpected create event(s) in auth chain: %s"
% (auth_chain_create_events,)
)
return {
"state": signed_state,
@@ -591,14 +703,13 @@ class FederationClient(FederationBase):
"origin": destination,
}
return self._try_destination_list("send_join", destinations, send_request)
return await self._try_destination_list("send_join", destinations, send_request)
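The removed check_authchain_validity is folded into send_request as two stricter checks, in outline:

# 1. The create event found in the returned state must claim the room
#    version negotiated during make_join:
#        claimed = create_event.content.get("room_version", RoomVersions.V1.identifier)
#        if claimed != room_version.identifier:
#            raise InvalidResponseError(...)
#
# 2. The signed auth chain must contain exactly that create event, so a
#    misbehaving server cannot smuggle in a different m.room.create.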
@defer.inlineCallbacks
def _do_send_join(self, destination, pdu):
async def _do_send_join(self, destination: str, pdu: EventBase):
time_now = self._clock.time_msec()
try:
content = yield self.transport_layer.send_join_v2(
content = await self.transport_layer.send_join_v2(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
@@ -620,7 +731,7 @@ class FederationClient(FederationBase):
logger.debug("Couldn't send_join with the v2 API, falling back to the v1 API")
resp = yield self.transport_layer.send_join_v1(
resp = await self.transport_layer.send_join_v1(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
@@ -631,51 +742,45 @@ class FederationClient(FederationBase):
# content.
return resp[1]
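_do_send_join (like _do_send_leave and _do_send_invite below) tries the v2 endpoint first and falls back to v1; the precise error handling is elided from this diff, but the shape is:

#     try:
#         content = await self.transport_layer.send_join_v2(...)
#         return content
#     except HttpResponseException:
#         ...  # fall back only for "unrecognised endpoint" style responses
#     resp = await self.transport_layer.send_join_v1(...)
#     return resp[1]  # the v1 API returns a (200, content) tuple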
@defer.inlineCallbacks
def send_invite(self, destination, room_id, event_id, pdu):
room_version = yield self.store.get_room_version(room_id)
async def send_invite(
self, destination: str, room_id: str, event_id: str, pdu: EventBase,
) -> EventBase:
room_version = await self.store.get_room_version(room_id)
content = yield self._do_send_invite(destination, pdu, room_version)
content = await self._do_send_invite(destination, pdu, room_version)
pdu_dict = content["event"]
logger.debug("Got response to send_invite: %s", pdu_dict)
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(pdu_dict, format_ver)
pdu = event_from_pdu_json(pdu_dict, room_version)
# Check signatures are correct.
pdu = yield self._check_sigs_and_hash(room_version, pdu)
pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
# FIXME: We should handle signature failures more gracefully.
return pdu
@defer.inlineCallbacks
def _do_send_invite(self, destination, pdu, room_version):
async def _do_send_invite(
self, destination: str, pdu: EventBase, room_version: RoomVersion
) -> JsonDict:
"""Actually sends the invite, first trying v2 API and falling back to
v1 API if necessary.
Args:
destination (str): Target server
pdu (FrozenEvent)
room_version (str)
Returns:
dict: The event as a dict as returned by the remote server
The event as a dict as returned by the remote server
"""
time_now = self._clock.time_msec()
try:
content = yield self.transport_layer.send_invite_v2(
content = await self.transport_layer.send_invite_v2(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content={
"event": pdu.get_pdu_json(time_now),
"room_version": room_version,
"room_version": room_version.identifier,
"invite_room_state": pdu.unsigned.get("invite_room_state", []),
},
)
@@ -693,8 +798,7 @@ class FederationClient(FederationBase):
# Otherwise, we assume that the remote server doesn't understand
# the v2 invite API. That's ok provided the room uses old-style event
# IDs.
v = KNOWN_ROOM_VERSIONS.get(room_version)
if v.event_format != EventFormatVersions.V1:
if room_version.event_format != EventFormatVersions.V1:
raise SynapseError(
400,
"User's homeserver does not support this room version",
@@ -708,7 +812,7 @@ class FederationClient(FederationBase):
# Didn't work, try v1 API.
# Note the v1 API returns a tuple of `(200, content)`
_, content = yield self.transport_layer.send_invite_v1(
_, content = await self.transport_layer.send_invite_v1(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
@@ -716,7 +820,7 @@ class FederationClient(FederationBase):
)
return content
def send_leave(self, destinations, pdu):
async def send_leave(self, destinations: Iterable[str], pdu: EventBase) -> None:
"""Sends a leave event to one of a list of homeservers.
Doing so will cause the remote server to add the event to the graph,
@@ -725,34 +829,29 @@ class FederationClient(FederationBase):
This is mostly useful to reject received invites.
Args:
destinations (str): Candidate homeservers which are probably
destinations: Candidate homeservers which are probably
participating in the room.
pdu (BaseEvent): event to be sent
pdu: event to be sent
Return:
Deferred: resolves to None.
Raises:
SynapseError if the chosen remote server returns a 300/400 code.
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
Fails with a ``RuntimeError`` if no servers were reachable.
RuntimeError if no servers were reachable.
"""
@defer.inlineCallbacks
def send_request(destination):
content = yield self._do_send_leave(destination, pdu)
async def send_request(destination: str) -> None:
content = await self._do_send_leave(destination, pdu)
logger.debug("Got content: %s", content)
return None
return self._try_destination_list("send_leave", destinations, send_request)
return await self._try_destination_list(
"send_leave", destinations, send_request
)
@defer.inlineCallbacks
def _do_send_leave(self, destination, pdu):
async def _do_send_leave(self, destination, pdu):
time_now = self._clock.time_msec()
try:
content = yield self.transport_layer.send_leave_v2(
content = await self.transport_layer.send_leave_v2(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
@@ -774,7 +873,7 @@ class FederationClient(FederationBase):
logger.debug("Couldn't send_leave with the v2 API, falling back to the v1 API")
resp = yield self.transport_layer.send_leave_v1(
resp = await self.transport_layer.send_leave_v1(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
@@ -806,34 +905,33 @@ class FederationClient(FederationBase):
third_party_instance_id=third_party_instance_id,
)
@defer.inlineCallbacks
def get_missing_events(
async def get_missing_events(
self,
destination,
room_id,
earliest_events_ids,
latest_events,
limit,
min_depth,
timeout,
):
destination: str,
room_id: str,
earliest_events_ids: Sequence[str],
latest_events: Iterable[EventBase],
limit: int,
min_depth: int,
timeout: int,
) -> List[EventBase]:
"""Tries to fetch events we are missing. This is called when we receive
an event without having received all of its ancestors.
Args:
destination (str)
room_id (str)
earliest_events_ids (list): List of event ids. Effectively the
destination
room_id
earliest_events_ids: List of event ids. Effectively the
events we expected to receive, but haven't. `get_missing_events`
should only return events that didn't happen before these.
latest_events (list): List of events we have received that we don't
latest_events: List of events we have received that we don't
have all previous events for.
limit (int): Maximum number of events to return.
min_depth (int): Minimum depth of events tor return.
timeout (int): Max time to wait in ms
limit: Maximum number of events to return.
min_depth: Minimum depth of events to return.
timeout: Max time to wait in ms
"""
try:
content = yield self.transport_layer.get_missing_events(
content = await self.transport_layer.get_missing_events(
destination=destination,
room_id=room_id,
earliest_events=earliest_events_ids,
@@ -843,15 +941,14 @@ class FederationClient(FederationBase):
timeout=timeout,
)
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
room_version = await self.store.get_room_version(room_id)
events = [
event_from_pdu_json(e, format_ver) for e in content.get("events", [])
event_from_pdu_json(e, room_version) for e in content.get("events", [])
]
signed_events = yield self._check_sigs_and_hash_and_fetch(
destination, events, outlier=False, room_version=room_version
signed_events = await self._check_sigs_and_hash_and_fetch(
destination, events, outlier=False, room_version=room_version.identifier
)
except HttpResponseException as e:
if not e.code == 400:

View File

@@ -15,6 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Dict
import six
from six import iteritems
@@ -22,6 +23,7 @@ from six import iteritems
from canonicaljson import json
from prometheus_client import Counter
from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
@@ -36,20 +38,23 @@ from synapse.api.errors import (
UnsupportedRoomVersionError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import room_version_to_event_format
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
from synapse.http.endpoint import parse_server_name
from synapse.logging.context import nested_logging_context
from synapse.logging.context import (
make_deferred_yieldable,
nested_logging_context,
run_in_background,
)
from synapse.logging.opentracing import log_kv, start_active_span_from_edu, trace
from synapse.logging.utils import log_function
from synapse.replication.http.federation import (
ReplicationFederationSendEduRestServlet,
ReplicationGetQueryRestServlet,
)
from synapse.types import get_domain_from_id
from synapse.util import glob_to_regex
from synapse.types import JsonDict, get_domain_from_id
from synapse.util import glob_to_regex, unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
@@ -76,6 +81,8 @@ class FederationServer(FederationBase):
self.handler = hs.get_handlers().federation_handler
self.state = hs.get_state_handler()
self.device_handler = hs.get_device_handler()
self._server_linearizer = Linearizer("fed_server")
self._transaction_linearizer = Linearizer("fed_txn_handler")
@@ -160,6 +167,43 @@ class FederationServer(FederationBase):
)
return 400, response
# We process PDUs and EDUs in parallel. This is important as we don't
# want to block things like to-device messages from reaching clients
# behind the potentially expensive handling of PDUs.
pdu_results, _ = await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self._handle_pdus_in_txn, origin, transaction, request_time
),
run_in_background(self._handle_edus_in_txn, origin, transaction),
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)
response = {"pdus": pdu_results}
logger.debug("Returning: %s", str(response))
await self.transaction_actions.set_response(origin, transaction, 200, response)
return 200, response
async def _handle_pdus_in_txn(
self, origin: str, transaction: Transaction, request_time: int
) -> Dict[str, dict]:
"""Process the PDUs in a received transaction.
Args:
origin: the server making the request
transaction: incoming transaction
request_time: timestamp that the HTTP request arrived at
Returns:
A map from event ID of a processed PDU to any errors we should
report back to the sending server.
"""
received_pdus_counter.inc(len(transaction.pdus))
origin_host, _ = parse_server_name(origin)
@@ -195,20 +239,13 @@ class FederationServer(FederationBase):
except NotFoundError:
logger.info("Ignoring PDU for unknown room_id: %s", room_id)
continue
try:
format_ver = room_version_to_event_format(room_version)
except UnsupportedRoomVersionError:
except UnsupportedRoomVersionError as e:
# this can happen if support for a given room version is withdrawn,
# so that we still get events for said room.
logger.info(
"Ignoring PDU for room %s with unknown version %s",
room_id,
room_version,
)
logger.info("Ignoring PDU: %s", e)
continue
event = event_from_pdu_json(p, format_ver)
event = event_from_pdu_json(p, room_version)
pdus_by_room.setdefault(room_id, []).append(event)
pdu_results = {}
@@ -250,20 +287,28 @@ class FederationServer(FederationBase):
process_pdus_for_room, pdus_by_room.keys(), TRANSACTION_CONCURRENCY_LIMIT
)
if hasattr(transaction, "edus"):
for edu in (Edu(**x) for x in transaction.edus):
await self.received_edu(origin, edu.edu_type, edu.content)
return pdu_results
response = {"pdus": pdu_results}
async def _handle_edus_in_txn(self, origin: str, transaction: Transaction):
"""Process the EDUs in a received transaction.
"""
logger.debug("Returning: %s", str(response))
async def _process_edu(edu_dict):
received_edus_counter.inc()
await self.transaction_actions.set_response(origin, transaction, 200, response)
return 200, response
edu = Edu(
origin=origin,
destination=self.server_name,
edu_type=edu_dict["edu_type"],
content=edu_dict["content"],
)
await self.registry.on_edu(edu.edu_type, origin, edu.content)
async def received_edu(self, origin, edu_type, content):
received_edus_counter.inc()
await self.registry.on_edu(edu_type, origin, content)
await concurrently_execute(
_process_edu,
getattr(transaction, "edus", []),
TRANSACTION_CONCURRENCY_LIMIT,
)
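For intuition, here is a toy asyncio stand-in for synapse.util.async_helpers.concurrently_execute, which bounds how many EDUs (or, earlier, rooms) are processed at once; this is not Synapse's implementation:

import asyncio

async def concurrently_execute(func, args, limit):
    # Run func over args with at most `limit` invocations in flight.
    sem = asyncio.Semaphore(limit)

    async def one(arg):
        async with sem:
            await func(arg)

    await asyncio.gather(*(one(arg) for arg in args))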
async def on_context_state_request(self, origin, room_id, event_id):
origin_host, _ = parse_server_name(origin)
@@ -288,7 +333,7 @@ class FederationServer(FederationBase):
)
)
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
resp["room_version"] = room_version
return 200, resp
@@ -339,7 +384,7 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
if room_version not in supported_versions:
logger.warning(
"Room version %s not in %s", room_version, supported_versions
@@ -350,21 +395,22 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
async def on_invite_request(self, origin, content, room_version):
if room_version not in KNOWN_ROOM_VERSIONS:
async def on_invite_request(
self, origin: str, content: JsonDict, room_version_id: str
):
room_version = KNOWN_ROOM_VERSIONS.get(room_version_id)
if not room_version:
raise SynapseError(
400,
"Homeserver does not support this room version",
Codes.UNSUPPORTED_ROOM_VERSION,
)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(content, format_ver)
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
pdu = await self._check_sigs_and_hash(room_version, pdu)
ret_pdu = await self.handler.on_invite_request(origin, pdu)
pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
ret_pdu = await self.handler.on_invite_request(origin, pdu, room_version)
time_now = self._clock.time_msec()
return {"event": ret_pdu.get_pdu_json(time_now)}
@@ -372,15 +418,14 @@ class FederationServer(FederationBase):
logger.debug("on_send_join_request: content: %s", content)
room_version = await self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(content, format_ver)
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
res_pdus = await self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
@@ -394,7 +439,7 @@ class FederationServer(FederationBase):
await self.check_server_matches_acl(origin_host, room_id)
pdu = await self.handler.on_make_leave_request(origin, room_id, user_id)
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@@ -403,15 +448,14 @@ class FederationServer(FederationBase):
logger.debug("on_send_leave_request: content: %s", content)
room_version = await self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(content, format_ver)
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
pdu = await self._check_sigs_and_hash(room_version.identifier, pdu)
await self.handler.on_send_leave_request(origin, pdu)
return {}
@@ -426,64 +470,13 @@ class FederationServer(FederationBase):
res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
return 200, res
async def on_query_auth_request(self, origin, content, room_id, event_id):
"""
Content is a dict with keys::
auth_chain (list): A list of events that give the auth chain.
missing (list): A list of event_ids indicating what the other
side (`origin`) think we're missing.
rejects (dict): A mapping from event_id to a 2-tuple of reason
string and a proof (or None) of why the event was rejected.
The keys of this dict give the list of events the `origin` has
rejected.
Args:
origin (str)
content (dict)
event_id (str)
Returns:
Deferred: Results in `dict` with the same format as `content`
"""
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
room_version = await self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
auth_chain = [
event_from_pdu_json(e, format_ver) for e in content["auth_chain"]
]
signed_auth = await self._check_sigs_and_hash_and_fetch(
origin, auth_chain, outlier=True, room_version=room_version
)
ret = await self.handler.on_query_auth(
origin,
event_id,
room_id,
signed_auth,
content.get("rejects", []),
content.get("missing", []),
)
time_now = self._clock.time_msec()
send_content = {
"auth_chain": [e.get_pdu_json(time_now) for e in ret["auth_chain"]],
"rejects": ret.get("rejects", []),
"missing": ret.get("missing", []),
}
return 200, send_content
@log_function
def on_query_client_keys(self, origin, content):
return self.on_query_request("client_keys", content)
def on_query_user_devices(self, origin, user_id):
return self.on_query_request("user_devices", user_id)
async def on_query_user_devices(self, origin: str, user_id: str):
keys = await self.device_handler.on_federation_query_user_devices(user_id)
return 200, keys
@trace
async def on_claim_client_keys(self, origin, content):
@@ -524,7 +517,7 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
logger.info(
logger.debug(
"on_get_missing_events: earliest_events: %r, latest_events: %r,"
" limit: %d",
earliest_events,
@@ -537,11 +530,11 @@ class FederationServer(FederationBase):
)
if len(missing_events) < 5:
logger.info(
logger.debug(
"Returning %d events: %r", len(missing_events), missing_events
)
else:
logger.info("Returning %d events", len(missing_events))
logger.debug("Returning %d events", len(missing_events))
time_now = self._clock.time_msec()
@@ -618,7 +611,7 @@ class FederationServer(FederationBase):
logger.info("Accepting join PDU %s from %s", pdu.event_id, origin)
# We've already checked that we know the room version by this point
room_version = await self.store.get_room_version(pdu.room_id)
room_version = await self.store.get_room_version_id(pdu.room_id)
# Check signature.
try:

View File

@@ -69,8 +69,6 @@ class FederationRemoteSendQueue(object):
self.edus = SortedDict() # stream position -> Edu
self.device_messages = SortedDict() # stream position -> destination
self.pos = 1
self.pos_time = SortedDict()
@@ -92,7 +90,6 @@ class FederationRemoteSendQueue(object):
"keyed_edu",
"keyed_edu_changed",
"edus",
"device_messages",
"pos_time",
"presence_destinations",
]:
@@ -132,9 +129,9 @@ class FederationRemoteSendQueue(object):
for key in keys[:i]:
del self.presence_changed[key]
user_ids = set(
user_ids = {
user_id for uids in self.presence_changed.values() for user_id in uids
)
}
keys = self.presence_destinations.keys()
i = self.presence_destinations.bisect_left(position_to_delete)
@@ -171,12 +168,6 @@ class FederationRemoteSendQueue(object):
for key in keys[:i]:
del self.edus[key]
# Delete things out of device map
keys = self.device_messages.keys()
i = self.device_messages.bisect_left(position_to_delete)
for key in keys[:i]:
del self.device_messages[key]
def notify_new_events(self, current_id):
"""As per FederationSender"""
# We don't need to replicate this as it gets sent down a different
@@ -249,9 +240,8 @@ class FederationRemoteSendQueue(object):
def send_device_messages(self, destination):
"""As per FederationSender"""
pos = self._next_pos()
self.device_messages[pos] = destination
self.notifier.on_new_replication_data()
# We don't need to replicate this as it gets sent down a different
# stream.
def get_current_token(self):
return self.pos - 1
@@ -259,7 +249,9 @@ class FederationRemoteSendQueue(object):
def federation_ack(self, token):
self._clear_queue_before_pos(token)
def get_replication_rows(self, from_token, to_token, limit, federation_ack=None):
async def get_replication_rows(
self, from_token, to_token, limit, federation_ack=None
):
"""Get rows to be sent over federation between the two tokens
Args:
@@ -337,14 +329,6 @@ class FederationRemoteSendQueue(object):
for (pos, edu) in edus:
rows.append((pos, EduRow(edu)))
# Fetch changed device messages
i = self.device_messages.bisect_right(from_token)
j = self.device_messages.bisect_right(to_token) + 1
device_messages = {v: k for k, v in self.device_messages.items()[i:j]}
for (destination, pos) in iteritems(device_messages):
rows.append((pos, DeviceRow(destination=destination)))
# Sort rows based on pos
rows.sort()
@@ -470,28 +454,9 @@ class EduRow(BaseFederationRow, namedtuple("EduRow", ("edu",))): # Edu
buff.edus.setdefault(self.edu.destination, []).append(self.edu)
class DeviceRow(BaseFederationRow, namedtuple("DeviceRow", ("destination",))): # str
"""Streams the fact that either a) there is pending to device messages for
users on the remote, or b) a local users device has changed and needs to
be sent to the remote.
"""
TypeId = "d"
@staticmethod
def from_data(data):
return DeviceRow(destination=data["destination"])
def to_data(self):
return {"destination": self.destination}
def add_to_buffer(self, buff):
buff.device_destinations.add(self.destination)
TypeToRow = {
Row.TypeId: Row
for Row in (PresenceRow, PresenceDestinationsRow, KeyedEduRow, EduRow, DeviceRow)
for Row in (PresenceRow, PresenceDestinationsRow, KeyedEduRow, EduRow,)
}
@@ -502,7 +467,6 @@ ParsedFederationStreamData = namedtuple(
"presence_destinations", # list of tuples of UserPresenceState and destinations
"keyed_edus", # dict of destination -> { key -> Edu }
"edus", # dict of destination -> [Edu]
"device_destinations", # set of destinations
),
)
@@ -521,11 +485,7 @@ def process_rows_for_federation(transaction_queue, rows):
# them into the appropriate collection and then send them off.
buff = ParsedFederationStreamData(
presence=[],
presence_destinations=[],
keyed_edus={},
edus={},
device_destinations=set(),
presence=[], presence_destinations=[], keyed_edus={}, edus={},
)
# Parse the rows in the stream and add to the buffer
@@ -553,6 +513,3 @@ def process_rows_for_federation(transaction_queue, rows):
for destination, edu_list in iteritems(buff.edus):
for edu in edu_list:
transaction_queue.send_edu(edu, None)
for destination in buff.device_destinations:
transaction_queue.send_device_messages(destination)

View File

@@ -14,6 +14,7 @@
# limitations under the License.
import logging
from typing import Dict, Hashable, Iterable, List, Optional, Set
from six import itervalues
@@ -21,7 +22,9 @@ from prometheus_client import Counter
from twisted.internet import defer
import synapse
import synapse.metrics
from synapse.events import EventBase
from synapse.federation.sender.per_destination_queue import PerDestinationQueue
from synapse.federation.sender.transaction_manager import TransactionManager
from synapse.federation.units import Edu
@@ -38,6 +41,8 @@ from synapse.metrics import (
events_processed_counter,
)
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.presence import UserPresenceState
from synapse.types import ReadReceipt
from synapse.util.metrics import Measure, measure_func
logger = logging.getLogger(__name__)
@@ -54,7 +59,7 @@ sent_pdus_destination_dist_total = Counter(
class FederationSender(object):
def __init__(self, hs):
def __init__(self, hs: "synapse.server.HomeServer"):
self.hs = hs
self.server_name = hs.hostname
@@ -67,7 +72,7 @@ class FederationSender(object):
self._transaction_manager = TransactionManager(hs)
# map from destination to PerDestinationQueue
self._per_destination_queues = {} # type: dict[str, PerDestinationQueue]
self._per_destination_queues = {} # type: Dict[str, PerDestinationQueue]
LaterGauge(
"synapse_federation_transaction_queue_pending_destinations",
@@ -83,7 +88,7 @@ class FederationSender(object):
# Map of user_id -> UserPresenceState for all the pending presence
# to be sent out by user_id. Entries here get processed and put in
# pending_presence_by_dest
self.pending_presence = {}
self.pending_presence = {} # type: Dict[str, UserPresenceState]
LaterGauge(
"synapse_federation_transaction_queue_pending_pdus",
@@ -115,20 +120,17 @@ class FederationSender(object):
# and that there is a pending call to _flush_rrs_for_room in the system.
self._queues_awaiting_rr_flush_by_room = (
{}
) # type: dict[str, set[PerDestinationQueue]]
) # type: Dict[str, Set[PerDestinationQueue]]
self._rr_txn_interval_per_room_ms = (
1000.0 / hs.get_config().federation_rr_transactions_per_room_per_second
1000.0 / hs.config.federation_rr_transactions_per_room_per_second
)
def _get_per_destination_queue(self, destination):
def _get_per_destination_queue(self, destination: str) -> PerDestinationQueue:
"""Get or create a PerDestinationQueue for the given destination
Args:
destination (str): server_name of remote server
Returns:
PerDestinationQueue
destination: server_name of remote server
"""
queue = self._per_destination_queues.get(destination)
if not queue:
@@ -136,7 +138,7 @@ class FederationSender(object):
self._per_destination_queues[destination] = queue
return queue
def notify_new_events(self, current_id):
def notify_new_events(self, current_id: int) -> None:
"""This gets called when we have some new events we might want to
send out to other servers.
"""
@@ -150,13 +152,12 @@ class FederationSender(object):
"process_event_queue_for_federation", self._process_event_queue_loop
)
@defer.inlineCallbacks
def _process_event_queue_loop(self):
async def _process_event_queue_loop(self) -> None:
try:
self._is_processing = True
while True:
last_token = yield self.store.get_federation_out_pos("events")
next_token, events = yield self.store.get_all_new_events_stream(
last_token = await self.store.get_federation_out_pos("events")
next_token, events = await self.store.get_all_new_events_stream(
last_token, self._last_poked_id, limit=100
)
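The loop above resumes from a persisted stream position, so a restarted sender neither skips nor re-sends events: progress is only advanced after a batch has been handled. The same pattern with an in-memory stand-in for the store (all names illustrative):

import asyncio

class FakeStore:
    def __init__(self, events):
        self._events = events  # list of (stream_id, event) pairs
        self._out_pos = 0

    async def get_federation_out_pos(self):
        return self._out_pos

    async def get_all_new_events_stream(self, last_token, limit):
        batch = [e for e in self._events if e[0] > last_token][:limit]
        next_token = batch[-1][0] if batch else last_token
        return next_token, batch

    async def update_federation_out_pos(self, token):
        self._out_pos = token  # persisted only after the batch is handled

async def run(store):
    while True:
        last_token = await store.get_federation_out_pos()
        next_token, events = await store.get_all_new_events_stream(last_token, limit=100)
        if not events:
            break
        for _, event in events:
            print("handling", event)
        await store.update_federation_out_pos(next_token)

asyncio.run(run(FakeStore([(1, "$a"), (2, "$b")])))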
@@ -165,8 +166,7 @@ class FederationSender(object):
if not events and next_token >= self._last_poked_id:
break
@defer.inlineCallbacks
def handle_event(event):
async def handle_event(event: EventBase) -> None:
# Only send events for this server.
send_on_behalf_of = event.internal_metadata.get_send_on_behalf_of()
is_mine = self.is_mine_id(event.sender)
@@ -183,7 +183,7 @@ class FederationSender(object):
# Otherwise if the last member on a server in a room is
# banned then it won't receive the event because it won't
# be in the room after the ban.
destinations = yield self.state.get_hosts_in_room_at_events(
destinations = await self.state.get_hosts_in_room_at_events(
event.room_id, event_ids=event.prev_event_ids()
)
except Exception:
@@ -205,17 +205,16 @@ class FederationSender(object):
self._send_pdu(event, destinations)
@defer.inlineCallbacks
def handle_room_events(events):
async def handle_room_events(events: Iterable[EventBase]) -> None:
with Measure(self.clock, "handle_room_events"):
for event in events:
yield handle_event(event)
await handle_event(event)
events_by_room = {}
events_by_room = {} # type: Dict[str, List[EventBase]]
for event in events:
events_by_room.setdefault(event.room_id, []).append(event)
yield make_deferred_yieldable(
await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(handle_room_events, evs)
@@ -225,11 +224,11 @@ class FederationSender(object):
)
)
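The grouping above lets different rooms be processed concurrently while events within a room stay strictly ordered. A rough asyncio equivalent of that fan-out (stand-in names, not the Synapse helpers):

import asyncio
from collections import namedtuple

Event = namedtuple("Event", ["room_id", "event_id"])

async def handle_event(event):
    await asyncio.sleep(0)  # stand-in for the real per-event work
    print("sent", event.event_id)

async def handle_room_events(events):
    for event in events:    # sequential within a single room
        await handle_event(event)

async def process(events):
    events_by_room = {}
    for event in events:
        events_by_room.setdefault(event.room_id, []).append(event)
    # rooms run concurrently; per-room ordering is preserved
    await asyncio.gather(*(handle_room_events(evs) for evs in events_by_room.values()))

asyncio.run(process([Event("!a", "$1"), Event("!a", "$2"), Event("!b", "$3")]))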
yield self.store.update_federation_out_pos("events", next_token)
await self.store.update_federation_out_pos("events", next_token)
if events:
now = self.clock.time_msec()
ts = yield self.store.get_received_ts(events[-1].event_id)
ts = await self.store.get_received_ts(events[-1].event_id)
synapse.metrics.event_processing_lag.labels(
"federation_sender"
@@ -253,7 +252,7 @@ class FederationSender(object):
finally:
self._is_processing = False
def _send_pdu(self, pdu, destinations):
def _send_pdu(self, pdu: EventBase, destinations: Iterable[str]) -> None:
# We loop through all destinations to see whether we already have
# a transaction in progress. If we do, stick it in the pending_pdus
# table and we'll get back to it later.
@@ -275,11 +274,11 @@ class FederationSender(object):
self._get_per_destination_queue(destination).send_pdu(pdu, order)
@defer.inlineCallbacks
def send_read_receipt(self, receipt):
def send_read_receipt(self, receipt: ReadReceipt):
"""Send a RR to any other servers in the room
Args:
receipt (synapse.types.ReadReceipt): receipt to be sent
receipt: receipt to be sent
"""
# Some background on the rate-limiting going on here.
@@ -342,7 +341,7 @@ class FederationSender(object):
else:
queue.flush_read_receipts_for_room(room_id)
def _schedule_rr_flush_for_room(self, room_id, n_domains):
def _schedule_rr_flush_for_room(self, room_id: str, n_domains: int) -> None:
# that is going to cause approximately len(domains) transactions, so now back
# off for that multiplied by RR_TXN_INTERVAL_PER_ROOM
backoff_ms = self._rr_txn_interval_per_room_ms * n_domains
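To make the arithmetic concrete: if the config allowed, say, 50 read-receipt transactions per room per second (an example value, not a default), the per-domain interval is 1000 / 50 = 20 ms, so flushing a room with 10 interested domains backs off for about 200 ms:

rr_txns_per_room_per_second = 50  # example config value
interval_ms = 1000.0 / rr_txns_per_room_per_second
print(interval_ms * 10)           # 10 domains -> 200.0 ms back-off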
@@ -351,7 +350,7 @@ class FederationSender(object):
self.clock.call_later(backoff_ms, self._flush_rrs_for_room, room_id)
self._queues_awaiting_rr_flush_by_room[room_id] = set()
def _flush_rrs_for_room(self, room_id):
def _flush_rrs_for_room(self, room_id: str) -> None:
queues = self._queues_awaiting_rr_flush_by_room.pop(room_id)
logger.debug("Flushing RRs in %s to %s", room_id, queues)
@@ -367,14 +366,11 @@ class FederationSender(object):
@preserve_fn # the caller should not yield on this
@defer.inlineCallbacks
def send_presence(self, states):
def send_presence(self, states: List[UserPresenceState]):
"""Send the new presence states to the appropriate destinations.
This actually queues up the presence states ready for sending and
triggers a background task to process them and send out the transactions.
Args:
states (list(UserPresenceState))
"""
if not self.hs.config.use_presence:
# No-op if presence is disabled.
@@ -411,11 +407,10 @@ class FederationSender(object):
finally:
self._processing_pending_presence = False
def send_presence_to_destinations(self, states, destinations):
def send_presence_to_destinations(
self, states: List[UserPresenceState], destinations: List[str]
) -> None:
"""Send the given presence states to the given destinations.
Args:
states (list[UserPresenceState])
destinations (list[str])
"""
@@ -430,12 +425,9 @@ class FederationSender(object):
@measure_func("txnqueue._process_presence")
@defer.inlineCallbacks
def _process_presence_inner(self, states):
def _process_presence_inner(self, states: List[UserPresenceState]):
"""Given a list of states populate self.pending_presence_by_dest and
poke to send a new transaction to each destination
Args:
states (list(UserPresenceState))
"""
hosts_and_states = yield get_interested_remotes(self.store, states, self.state)
@@ -445,14 +437,20 @@ class FederationSender(object):
continue
self._get_per_destination_queue(destination).send_presence(states)
def build_and_send_edu(self, destination, edu_type, content, key=None):
def build_and_send_edu(
self,
destination: str,
edu_type: str,
content: dict,
key: Optional[Hashable] = None,
):
"""Construct an Edu object, and queue it for sending
Args:
destination (str): name of server to send to
edu_type (str): type of EDU to send
content (dict): content of EDU
key (Any|None): clobbering key for this edu
destination: name of server to send to
edu_type: type of EDU to send
content: content of EDU
key: clobbering key for this edu
"""
if destination == self.server_name:
logger.info("Not sending EDU to ourselves")
@@ -467,12 +465,12 @@ class FederationSender(object):
self.send_edu(edu, key)
def send_edu(self, edu, key):
def send_edu(self, edu: Edu, key: Optional[Hashable]):
"""Queue an EDU for sending
Args:
edu (Edu): edu to send
key (Any|None): clobbering key for this edu
edu: edu to send
key: clobbering key for this edu
"""
queue = self._get_per_destination_queue(edu.destination)
if key:
@@ -480,12 +478,25 @@ class FederationSender(object):
else:
queue.send_edu(edu)
def send_device_messages(self, destination):
def send_device_messages(self, destination: str):
if destination == self.server_name:
logger.info("Not sending device update to ourselves")
logger.warning("Not sending device update to ourselves")
return
self._get_per_destination_queue(destination).attempt_new_transaction()
def get_current_token(self):
def wake_destination(self, destination: str):
"""Called when we want to retry sending transactions to a remote.
This is mainly useful if the remote server has been down and we think it
might have come back.
"""
if destination == self.server_name:
logger.warning("Not waking up ourselves")
return
self._get_per_destination_queue(destination).attempt_new_transaction()
def get_current_token(self) -> int:
return 0


@@ -15,11 +15,11 @@
# limitations under the License.
import datetime
import logging
from typing import Dict, Hashable, Iterable, List, Tuple
from prometheus_client import Counter
from twisted.internet import defer
import synapse.server
from synapse.api.errors import (
FederationDeniedError,
HttpResponseException,
@@ -31,6 +31,7 @@ from synapse.handlers.presence import format_user_presence_state
from synapse.metrics import sent_transactions_counter
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.presence import UserPresenceState
from synapse.types import ReadReceipt
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
# This is defined in the Matrix spec and enforced by the receiver.
@@ -55,13 +56,18 @@ class PerDestinationQueue(object):
Manages the per-destination transmission queues.
Args:
hs (synapse.HomeServer):
transaction_sender (TransactionManager):
destination (str): the server_name of the destination that we are managing
hs
transaction_sender
destination: the server_name of the destination that we are managing
transmission for.
"""
def __init__(self, hs, transaction_manager, destination):
def __init__(
self,
hs: "synapse.server.HomeServer",
transaction_manager: "synapse.federation.sender.TransactionManager",
destination: str,
):
self._server_name = hs.hostname
self._clock = hs.get_clock()
self._store = hs.get_datastore()
@@ -71,20 +77,20 @@ class PerDestinationQueue(object):
self.transmission_loop_running = False
# a list of tuples of (pending pdu, order)
self._pending_pdus = [] # type: list[tuple[EventBase, int]]
self._pending_edus = [] # type: list[Edu]
self._pending_pdus = [] # type: List[Tuple[EventBase, int]]
self._pending_edus = [] # type: List[Edu]
# Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
# based on their key (e.g. typing events by room_id)
# Map of (edu_type, key) -> Edu
self._pending_edus_keyed = {} # type: dict[tuple[str, str], Edu]
self._pending_edus_keyed = {} # type: Dict[Tuple[str, Hashable], Edu]
# Map of user_id -> UserPresenceState of pending presence to be sent to this
# destination
self._pending_presence = {} # type: dict[str, UserPresenceState]
self._pending_presence = {} # type: Dict[str, UserPresenceState]
# room_id -> receipt_type -> user_id -> receipt_dict
self._pending_rrs = {}
self._pending_rrs = {} # type: Dict[str, Dict[str, Dict[str, dict]]]
self._rrs_pending_flush = False
# stream_id of last successfully sent to-device message.
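The clobbering behaviour means only the newest EDU per (edu_type, key) survives until the next transaction is built; e.g. two typing updates for the same room collapse into one. Reduced to plain dicts (illustrative names and payloads):

pending_edus_keyed = {}  # (edu_type, key) -> edu payload

def send_keyed_edu(edu_type, key, payload):
    # a newer EDU with the same key simply replaces the older one
    pending_edus_keyed[(edu_type, key)] = payload

send_keyed_edu("m.typing", "!room:a", {"typing": True})
send_keyed_edu("m.typing", "!room:a", {"typing": False})  # clobbers the first
print(len(pending_edus_keyed))  # 1 -- only the latest typing state gets sent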
@@ -94,50 +100,50 @@ class PerDestinationQueue(object):
# stream_id of last successfully sent device list update.
self._last_device_list_stream_id = 0
def __str__(self):
def __str__(self) -> str:
return "PerDestinationQueue[%s]" % self._destination
def pending_pdu_count(self):
def pending_pdu_count(self) -> int:
return len(self._pending_pdus)
def pending_edu_count(self):
def pending_edu_count(self) -> int:
return (
len(self._pending_edus)
+ len(self._pending_presence)
+ len(self._pending_edus_keyed)
)
def send_pdu(self, pdu, order):
def send_pdu(self, pdu: EventBase, order: int) -> None:
"""Add a PDU to the queue, and start the transmission loop if neccessary
Args:
pdu (EventBase): pdu to send
order (int):
pdu: pdu to send
order
"""
self._pending_pdus.append((pdu, order))
self.attempt_new_transaction()
def send_presence(self, states):
def send_presence(self, states: Iterable[UserPresenceState]) -> None:
"""Add presence updates to the queue. Start the transmission loop if neccessary.
Args:
states (iterable[UserPresenceState]): presence to send
states: presence to send
"""
self._pending_presence.update({state.user_id: state for state in states})
self.attempt_new_transaction()
def queue_read_receipt(self, receipt):
def queue_read_receipt(self, receipt: ReadReceipt) -> None:
"""Add a RR to the list to be sent. Doesn't start the transmission loop yet
(see flush_read_receipts_for_room)
Args:
receipt (synapse.api.receipt_info.ReceiptInfo): receipt to be queued
receipt: receipt to be queued
"""
self._pending_rrs.setdefault(receipt.room_id, {}).setdefault(
receipt.receipt_type, {}
)[receipt.user_id] = {"event_ids": receipt.event_ids, "data": receipt.data}
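That chained setdefault builds the room -> receipt-type -> user mapping in a single statement. Unrolled with plain dicts and made-up values:

pending_rrs = {}  # room_id -> receipt_type -> user_id -> receipt dict

def queue_read_receipt(room_id, receipt_type, user_id, event_ids, data):
    by_type = pending_rrs.setdefault(room_id, {})
    by_user = by_type.setdefault(receipt_type, {})
    by_user[user_id] = {"event_ids": event_ids, "data": data}

queue_read_receipt("!room:a", "m.read", "@alice:a", ["$e1"], {"ts": 1})
queue_read_receipt("!room:a", "m.read", "@bob:a", ["$e2"], {"ts": 2})
print(len(pending_rrs["!room:a"]["m.read"]))  # 2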
def flush_read_receipts_for_room(self, room_id):
def flush_read_receipts_for_room(self, room_id: str) -> None:
# if we don't have any read-receipts for this room, it may be that we've already
# sent them out, so we don't need to flush.
if room_id not in self._pending_rrs:
@@ -145,15 +151,15 @@ class PerDestinationQueue(object):
self._rrs_pending_flush = True
self.attempt_new_transaction()
def send_keyed_edu(self, edu, key):
def send_keyed_edu(self, edu: Edu, key: Hashable) -> None:
self._pending_edus_keyed[(edu.edu_type, key)] = edu
self.attempt_new_transaction()
def send_edu(self, edu):
def send_edu(self, edu) -> None:
self._pending_edus.append(edu)
self.attempt_new_transaction()
def attempt_new_transaction(self):
def attempt_new_transaction(self) -> None:
"""Try to start a new transaction to this destination
If there is already a transaction in progress to this destination,
@@ -176,23 +182,22 @@ class PerDestinationQueue(object):
self._transaction_transmission_loop,
)
@defer.inlineCallbacks
def _transaction_transmission_loop(self):
pending_pdus = []
async def _transaction_transmission_loop(self) -> None:
pending_pdus = [] # type: List[Tuple[EventBase, int]]
try:
self.transmission_loop_running = True
# This will throw if we wouldn't retry. We do this here so we fail
# quickly, but we will later check this again in the http client,
# which is why we throw the result away.
yield get_retry_limiter(self._destination, self._clock, self._store)
await get_retry_limiter(self._destination, self._clock, self._store)
pending_pdus = []
while True:
# We have to keep 2 free slots for presence and rr_edus
limit = MAX_EDUS_PER_TRANSACTION - 2
device_update_edus, dev_list_id = yield self._get_device_update_edus(
device_update_edus, dev_list_id = await self._get_device_update_edus(
limit
)
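MAX_EDUS_PER_TRANSACTION is the spec-imposed cap of 100 EDUs per transaction; the loop budgets 98 of those for device-list and to-device EDUs and holds two back for presence and read receipts. In the full function (not all of which appears in this excerpt) the limit is decremented by however many device-update EDUs were fetched before to-device messages are requested. Roughly:

MAX_EDUS_PER_TRANSACTION = 100        # cap defined by the Matrix spec
limit = MAX_EDUS_PER_TRANSACTION - 2  # hold 2 slots back: presence + read receipts
used_by_device_updates = 37           # example: device-list EDUs are fetched first
to_device_budget = limit - used_by_device_updates
print(to_device_budget)               # 61 slots left for to-device messages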
@@ -201,7 +206,7 @@ class PerDestinationQueue(object):
(
to_device_edus,
device_stream_id,
) = yield self._get_to_device_message_edus(limit)
) = await self._get_to_device_message_edus(limit)
pending_edus = device_update_edus + to_device_edus
@@ -268,7 +273,7 @@ class PerDestinationQueue(object):
# END CRITICAL SECTION
success = yield self._transaction_manager.send_new_transaction(
success = await self._transaction_manager.send_new_transaction(
self._destination, pending_pdus, pending_edus
)
if success:
@@ -279,7 +284,7 @@ class PerDestinationQueue(object):
# Remove the acknowledged device messages from the database
# Only bother if we actually sent some device messages
if to_device_edus:
yield self._store.delete_device_msgs_for_remote(
await self._store.delete_device_msgs_for_remote(
self._destination, device_stream_id
)
@@ -288,7 +293,7 @@ class PerDestinationQueue(object):
logger.info(
"Marking as sent %r %r", self._destination, dev_list_id
)
yield self._store.mark_as_sent_devices_by_remote(
await self._store.mark_as_sent_devices_by_remote(
self._destination, dev_list_id
)
@@ -333,7 +338,7 @@ class PerDestinationQueue(object):
# We want to be *very* sure we clear this after we stop processing
self.transmission_loop_running = False
def _get_rr_edus(self, force_flush):
def _get_rr_edus(self, force_flush: bool) -> Iterable[Edu]:
if not self._pending_rrs:
return
if not force_flush and not self._rrs_pending_flush:
@@ -350,17 +355,16 @@ class PerDestinationQueue(object):
self._rrs_pending_flush = False
yield edu
def _pop_pending_edus(self, limit):
def _pop_pending_edus(self, limit: int) -> List[Edu]:
pending_edus = self._pending_edus
pending_edus, self._pending_edus = pending_edus[:limit], pending_edus[limit:]
return pending_edus
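_pop_pending_edus drains at most limit EDUs in a single slice-and-reassign, leaving the remainder queued for the next transaction:

pending = ["edu1", "edu2", "edu3", "edu4", "edu5"]
limit = 3
popped, pending = pending[:limit], pending[limit:]
print(popped)   # ['edu1', 'edu2', 'edu3']
print(pending)  # ['edu4', 'edu5']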
@defer.inlineCallbacks
def _get_device_update_edus(self, limit):
async def _get_device_update_edus(self, limit: int) -> Tuple[List[Edu], int]:
last_device_list = self._last_device_list_stream_id
# Retrieve list of new device updates to send to the destination
now_stream_id, results = yield self._store.get_device_updates_by_remote(
now_stream_id, results = await self._store.get_device_updates_by_remote(
self._destination, last_device_list, limit=limit
)
edus = [
@@ -377,11 +381,10 @@ class PerDestinationQueue(object):
return (edus, now_stream_id)
@defer.inlineCallbacks
def _get_to_device_message_edus(self, limit):
async def _get_to_device_message_edus(self, limit: int) -> Tuple[List[Edu], int]:
last_device_stream_id = self._last_device_stream_id
to_device_stream_id = self._store.get_to_device_stream_token()
contents, stream_id = yield self._store.get_new_device_msgs_for_remote(
contents, stream_id = await self._store.get_new_device_msgs_for_remote(
self._destination, last_device_stream_id, to_device_stream_id, limit
)
edus = [


@@ -13,14 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import List
from canonicaljson import json
from twisted.internet import defer
import synapse.server
from synapse.api.errors import HttpResponseException
from synapse.events import EventBase
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Transaction
from synapse.federation.units import Edu, Transaction
from synapse.logging.opentracing import (
extract_text_map,
set_tag,
@@ -39,7 +40,7 @@ class TransactionManager(object):
shared between PerDestinationQueue objects
"""
def __init__(self, hs):
def __init__(self, hs: "synapse.server.HomeServer"):
self._server_name = hs.hostname
self.clock = hs.get_clock() # nb must be called this for @measure_func
self._store = hs.get_datastore()
@@ -50,8 +51,9 @@ class TransactionManager(object):
self._next_txn_id = int(self.clock.time_msec())
@measure_func("_send_new_transaction")
@defer.inlineCallbacks
def send_new_transaction(self, destination, pending_pdus, pending_edus):
async def send_new_transaction(
self, destination: str, pending_pdus: List[EventBase], pending_edus: List[Edu]
):
# Make a transaction-sending opentracing span. This span follows on from
# all the edus in that transaction. This needs to be done since there is
@@ -127,7 +129,7 @@ class TransactionManager(object):
return data
try:
response = yield self._transport_layer.send_transaction(
response = await self._transport_layer.send_transaction(
transaction, json_data_cb
)
code = 200


@@ -15,6 +15,7 @@
# limitations under the License.
import logging
from typing import Any, Dict
from six.moves import urllib
@@ -352,7 +353,9 @@ class TransportLayerClient(object):
else:
path = _create_v1_path("/publicRooms")
args = {"include_all_networks": "true" if include_all_networks else "false"}
args = {
"include_all_networks": "true" if include_all_networks else "false"
} # type: Dict[str, Any]
if third_party_instance_id:
args["third_party_instance_id"] = (third_party_instance_id,)
if limit:


@@ -18,6 +18,7 @@
import functools
import logging
import re
from typing import Optional, Tuple, Type
from twisted.internet.defer import maybeDeferred
@@ -44,6 +45,7 @@ from synapse.logging.opentracing import (
tags,
whitelisted_homeserver,
)
from synapse.server import HomeServer
from synapse.types import ThirdPartyInstanceID, get_domain_from_id
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.versionstring import get_version_string
@@ -101,12 +103,17 @@ class NoAuthenticationError(AuthenticationError):
class Authenticator(object):
def __init__(self, hs):
def __init__(self, hs: HomeServer):
self._clock = hs.get_clock()
self.keyring = hs.get_keyring()
self.server_name = hs.hostname
self.store = hs.get_datastore()
self.federation_domain_whitelist = hs.config.federation_domain_whitelist
self.notifer = hs.get_notifier()
self.replication_client = None
if hs.config.worker.worker_app:
self.replication_client = hs.get_tcp_replication()
# A method just so we can pass 'self' as the authenticator to the Servlets
async def authenticate_request(self, request, content):
@@ -151,7 +158,7 @@ class Authenticator(object):
origin, json_request, now, "Incoming request"
)
logger.info("Request from %s", origin)
logger.debug("Request from %s", origin)
request.authenticated_entity = origin
# If we get a valid signed request from the other side, it's probably
@@ -166,6 +173,17 @@ class Authenticator(object):
try:
logger.info("Marking origin %r as up", origin)
await self.store.set_destination_retry_timings(origin, None, 0, 0)
# Inform the relevant places that the remote server is back up.
self.notifer.notify_remote_server_up(origin)
if self.replication_client:
# If we're on a worker we try and inform master about this. The
# replication client doesn't hook into the notifier to avoid
# infinite loops where we send a `REMOTE_SERVER_UP` command to
# master, which then echoes it back to us which in turn pokes
# the notifier.
self.replication_client.send_remote_server_up(origin)
except Exception:
logger.exception("Error resetting retry timings on %s", origin)
@@ -250,6 +268,8 @@ class BaseFederationServlet(object):
returned.
"""
PATH = "" # Overridden in subclasses, the regex to match against the path.
REQUIRE_AUTH = True
PREFIX = FEDERATION_V1_PREFIX # Allows specifying the API version
@@ -330,9 +350,6 @@ class BaseFederationServlet(object):
return response
# Extra logic that functools.wraps() doesn't finish
new_func.__self__ = func.__self__
return new_func
def register(self, server):
@@ -562,7 +579,7 @@ class FederationV1InviteServlet(BaseFederationServlet):
# state resolution algorithm, and we don't use that for processing
# invites
content = await self.handler.on_invite_request(
origin, content, room_version=RoomVersions.V1.identifier
origin, content, room_version_id=RoomVersions.V1.identifier
)
# V1 federation API is defined to return a content of `[200, {...}]`
@@ -589,7 +606,7 @@ class FederationV2InviteServlet(BaseFederationServlet):
event.setdefault("unsigned", {})["invite_room_state"] = invite_room_state
content = await self.handler.on_invite_request(
origin, event, room_version=room_version
origin, event, room_version_id=room_version
)
return 200, content
@@ -626,17 +643,6 @@ class FederationClientKeysClaimServlet(BaseFederationServlet):
return 200, response
class FederationQueryAuthServlet(BaseFederationServlet):
PATH = "/query_auth/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
async def on_POST(self, origin, content, query, context, event_id):
new_content = await self.handler.on_query_auth_request(
origin, content, context, event_id
)
return 200, new_content
class FederationGetMissingEventsServlet(BaseFederationServlet):
# TODO(paul): Why does this path alone end with "/?" optional?
PATH = "/get_missing_events/(?P<room_id>[^/]*)/?"
@@ -807,7 +813,7 @@ class PublicRoomList(BaseFederationServlet):
if not self.allow_access:
raise FederationDeniedError(origin)
limit = int(content.get("limit", 100))
limit = int(content.get("limit", 100)) # type: Optional[int]
since_token = content.get("since", None)
search_filter = content.get("filter", None)
@@ -954,7 +960,7 @@ class FederationGroupsAddRoomsConfigServlet(BaseFederationServlet):
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
result = await self.groups_handler.update_room_in_group(
result = await self.handler.update_room_in_group(
group_id, requester_user_id, room_id, config_key, content
)
@@ -1395,7 +1401,6 @@ FEDERATION_SERVLET_CLASSES = (
FederationV2SendLeaveServlet,
FederationV1InviteServlet,
FederationV2InviteServlet,
FederationQueryAuthServlet,
FederationGetMissingEventsServlet,
FederationEventAuthServlet,
FederationClientKeysQueryServlet,
@@ -1405,11 +1410,13 @@ FEDERATION_SERVLET_CLASSES = (
On3pidBindServlet,
FederationVersionServlet,
RoomComplexityServlet,
)
) # type: Tuple[Type[BaseFederationServlet], ...]
OPENID_SERVLET_CLASSES = (OpenIdUserInfo,)
OPENID_SERVLET_CLASSES = (
OpenIdUserInfo,
) # type: Tuple[Type[BaseFederationServlet], ...]
ROOM_LIST_CLASSES = (PublicRoomList,)
ROOM_LIST_CLASSES = (PublicRoomList,) # type: Tuple[Type[PublicRoomList], ...]
GROUP_SERVER_SERVLET_CLASSES = (
FederationGroupsProfileServlet,
@@ -1430,17 +1437,19 @@ GROUP_SERVER_SERVLET_CLASSES = (
FederationGroupsAddRoomsServlet,
FederationGroupsAddRoomsConfigServlet,
FederationGroupsSettingJoinPolicyServlet,
)
) # type: Tuple[Type[BaseFederationServlet], ...]
GROUP_LOCAL_SERVLET_CLASSES = (
FederationGroupsLocalInviteServlet,
FederationGroupsRemoveLocalUserServlet,
FederationGroupsBulkPublicisedServlet,
)
) # type: Tuple[Type[BaseFederationServlet], ...]
GROUP_ATTESTATION_SERVLET_CLASSES = (FederationGroupsRenewAttestaionServlet,)
GROUP_ATTESTATION_SERVLET_CLASSES = (
FederationGroupsRenewAttestaionServlet,
) # type: Tuple[Type[BaseFederationServlet], ...]
DEFAULT_SERVLET_GROUPS = (
"federation",


@@ -19,11 +19,15 @@ server protocol.
import logging
import attr
from synapse.types import JsonDict
from synapse.util.jsonobject import JsonEncodedObject
logger = logging.getLogger(__name__)
@attr.s(slots=True)
class Edu(JsonEncodedObject):
""" An Edu represents a piece of data sent from one homeserver to another.
@@ -32,11 +36,24 @@ class Edu(JsonEncodedObject):
internal ID or previous references graph.
"""
valid_keys = ["origin", "destination", "edu_type", "content"]
edu_type = attr.ib(type=str)
content = attr.ib(type=dict)
origin = attr.ib(type=str)
destination = attr.ib(type=str)
required_keys = ["edu_type"]
def get_dict(self) -> JsonDict:
return {
"edu_type": self.edu_type,
"content": self.content,
}
internal_keys = ["origin", "destination"]
def get_internal_dict(self) -> JsonDict:
return {
"edu_type": self.edu_type,
"content": self.content,
"origin": self.origin,
"destination": self.destination,
}
def get_context(self):
return getattr(self, "content", {}).get("org.matrix.opentracing_context", "{}")
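For context, the change above replaces JsonEncodedObject's valid_keys/internal_keys machinery with an explicit attrs class and two serialisations: get_dict for the wire format inside a transaction (where origin and destination live on the transaction itself), and get_internal_dict, which keeps the routing fields for internal use such as replication rows. A stripped-down, self-contained equivalent:

import attr

@attr.s(slots=True)
class Edu:
    edu_type = attr.ib(type=str)
    content = attr.ib(type=dict)
    origin = attr.ib(type=str)
    destination = attr.ib(type=str)

    def get_dict(self):
        # wire format: routing fields are carried by the transaction, not the EDU
        return {"edu_type": self.edu_type, "content": self.content}

    def get_internal_dict(self):
        # internal format: keeps origin/destination for replication
        return {**self.get_dict(), "origin": self.origin, "destination": self.destination}

edu = Edu(edu_type="m.typing", content={}, origin="a.example", destination="b.example")
print(edu.get_dict())           # {'edu_type': 'm.typing', 'content': {}}
print(edu.get_internal_dict())  # adds origin and destination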


@@ -36,7 +36,7 @@ logger = logging.getLogger(__name__)
# TODO: Flairs
class GroupsServerHandler(object):
class GroupsServerWorkerHandler(object):
def __init__(self, hs):
self.hs = hs
self.store = hs.get_datastore()
@@ -51,9 +51,6 @@ class GroupsServerHandler(object):
self.transport_client = hs.get_federation_transport_client()
self.profile_handler = hs.get_profile_handler()
# Ensure attestations get renewed
hs.get_groups_attestation_renewer()
@defer.inlineCallbacks
def check_group_is_ours(
self, group_id, requester_user_id, and_exists=False, and_is_admin=None
@@ -167,68 +164,6 @@ class GroupsServerHandler(object):
"user": membership_info,
}
@defer.inlineCallbacks
def update_group_summary_room(
self, group_id, requester_user_id, room_id, category_id, content
):
"""Add/update a room to the group summary
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
RoomID.from_string(room_id) # Ensure valid room id
order = content.get("order", None)
is_public = _parse_visibility_from_contents(content)
yield self.store.add_room_to_summary(
group_id=group_id,
room_id=room_id,
category_id=category_id,
order=order,
is_public=is_public,
)
return {}
@defer.inlineCallbacks
def delete_group_summary_room(
self, group_id, requester_user_id, room_id, category_id
):
"""Remove a room from the summary
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_room_from_summary(
group_id=group_id, room_id=room_id, category_id=category_id
)
return {}
@defer.inlineCallbacks
def set_group_join_policy(self, group_id, requester_user_id, content):
"""Sets the group join policy.
Currently supported policies are:
- "invite": an invite must be received and accepted in order to join.
- "open": anyone can join.
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
join_policy = _parse_join_policy_from_contents(content)
if join_policy is None:
raise SynapseError(400, "No value specified for 'm.join_policy'")
yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
return {}
@defer.inlineCallbacks
def get_group_categories(self, group_id, requester_user_id):
"""Get all categories in a group (as seen by user)
@@ -248,42 +183,10 @@ class GroupsServerHandler(object):
group_id=group_id, category_id=category_id
)
logger.info("group %s", res)
return res
@defer.inlineCallbacks
def update_group_category(self, group_id, requester_user_id, category_id, content):
"""Add/Update a group category
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
is_public = _parse_visibility_from_contents(content)
profile = content.get("profile")
yield self.store.upsert_group_category(
group_id=group_id,
category_id=category_id,
is_public=is_public,
profile=profile,
)
return {}
@defer.inlineCallbacks
def delete_group_category(self, group_id, requester_user_id, category_id):
"""Delete a group category
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_group_category(
group_id=group_id, category_id=category_id
)
return {}
@defer.inlineCallbacks
def get_group_roles(self, group_id, requester_user_id):
"""Get all roles in a group (as seen by user)
@@ -302,74 +205,6 @@ class GroupsServerHandler(object):
res = yield self.store.get_group_role(group_id=group_id, role_id=role_id)
return res
@defer.inlineCallbacks
def update_group_role(self, group_id, requester_user_id, role_id, content):
"""Add/update a role in a group
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
is_public = _parse_visibility_from_contents(content)
profile = content.get("profile")
yield self.store.upsert_group_role(
group_id=group_id, role_id=role_id, is_public=is_public, profile=profile
)
return {}
@defer.inlineCallbacks
def delete_group_role(self, group_id, requester_user_id, role_id):
"""Remove role from group
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_group_role(group_id=group_id, role_id=role_id)
return {}
@defer.inlineCallbacks
def update_group_summary_user(
self, group_id, requester_user_id, user_id, role_id, content
):
"""Add/update a users entry in the group summary
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
order = content.get("order", None)
is_public = _parse_visibility_from_contents(content)
yield self.store.add_user_to_summary(
group_id=group_id,
user_id=user_id,
role_id=role_id,
order=order,
is_public=is_public,
)
return {}
@defer.inlineCallbacks
def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id):
"""Remove a user from the group summary
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_user_from_summary(
group_id=group_id, user_id=user_id, role_id=role_id
)
return {}
@defer.inlineCallbacks
def get_group_profile(self, group_id, requester_user_id):
"""Get the group profile as seen by requester_user_id
@@ -394,24 +229,6 @@ class GroupsServerHandler(object):
else:
raise SynapseError(404, "Unknown group")
@defer.inlineCallbacks
def update_group_profile(self, group_id, requester_user_id, content):
"""Update the group profile
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
profile = {}
for keyname in ("name", "avatar_url", "short_description", "long_description"):
if keyname in content:
value = content[keyname]
if not isinstance(value, string_types):
raise SynapseError(400, "%r value is not a string" % (keyname,))
profile[keyname] = value
yield self.store.update_group_profile(group_id, profile)
@defer.inlineCallbacks
def get_users_in_group(self, group_id, requester_user_id):
"""Get the users in group as seen by requester_user_id.
@@ -530,6 +347,196 @@ class GroupsServerHandler(object):
return {"chunk": chunk, "total_room_count_estimate": len(room_results)}
class GroupsServerHandler(GroupsServerWorkerHandler):
def __init__(self, hs):
super(GroupsServerHandler, self).__init__(hs)
# Ensure attestations get renewed
hs.get_groups_attestation_renewer()
@defer.inlineCallbacks
def update_group_summary_room(
self, group_id, requester_user_id, room_id, category_id, content
):
"""Add/update a room to the group summary
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
RoomID.from_string(room_id) # Ensure valid room id
order = content.get("order", None)
is_public = _parse_visibility_from_contents(content)
yield self.store.add_room_to_summary(
group_id=group_id,
room_id=room_id,
category_id=category_id,
order=order,
is_public=is_public,
)
return {}
@defer.inlineCallbacks
def delete_group_summary_room(
self, group_id, requester_user_id, room_id, category_id
):
"""Remove a room from the summary
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_room_from_summary(
group_id=group_id, room_id=room_id, category_id=category_id
)
return {}
@defer.inlineCallbacks
def set_group_join_policy(self, group_id, requester_user_id, content):
"""Sets the group join policy.
Currently supported policies are:
- "invite": an invite must be received and accepted in order to join.
- "open": anyone can join.
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
join_policy = _parse_join_policy_from_contents(content)
if join_policy is None:
raise SynapseError(400, "No value specified for 'm.join_policy'")
yield self.store.set_group_join_policy(group_id, join_policy=join_policy)
return {}
@defer.inlineCallbacks
def update_group_category(self, group_id, requester_user_id, category_id, content):
"""Add/Update a group category
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
is_public = _parse_visibility_from_contents(content)
profile = content.get("profile")
yield self.store.upsert_group_category(
group_id=group_id,
category_id=category_id,
is_public=is_public,
profile=profile,
)
return {}
@defer.inlineCallbacks
def delete_group_category(self, group_id, requester_user_id, category_id):
"""Delete a group category
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_group_category(
group_id=group_id, category_id=category_id
)
return {}
@defer.inlineCallbacks
def update_group_role(self, group_id, requester_user_id, role_id, content):
"""Add/update a role in a group
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
is_public = _parse_visibility_from_contents(content)
profile = content.get("profile")
yield self.store.upsert_group_role(
group_id=group_id, role_id=role_id, is_public=is_public, profile=profile
)
return {}
@defer.inlineCallbacks
def delete_group_role(self, group_id, requester_user_id, role_id):
"""Remove role from group
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_group_role(group_id=group_id, role_id=role_id)
return {}
@defer.inlineCallbacks
def update_group_summary_user(
self, group_id, requester_user_id, user_id, role_id, content
):
"""Add/update a users entry in the group summary
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
order = content.get("order", None)
is_public = _parse_visibility_from_contents(content)
yield self.store.add_user_to_summary(
group_id=group_id,
user_id=user_id,
role_id=role_id,
order=order,
is_public=is_public,
)
return {}
@defer.inlineCallbacks
def delete_group_summary_user(self, group_id, requester_user_id, user_id, role_id):
"""Remove a user from the group summary
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
yield self.store.remove_user_from_summary(
group_id=group_id, user_id=user_id, role_id=role_id
)
return {}
@defer.inlineCallbacks
def update_group_profile(self, group_id, requester_user_id, content):
"""Update the group profile
"""
yield self.check_group_is_ours(
group_id, requester_user_id, and_exists=True, and_is_admin=requester_user_id
)
profile = {}
for keyname in ("name", "avatar_url", "short_description", "long_description"):
if keyname in content:
value = content[keyname]
if not isinstance(value, string_types):
raise SynapseError(400, "%r value is not a string" % (keyname,))
profile[keyname] = value
yield self.store.update_group_profile(group_id, profile)
@defer.inlineCallbacks
def add_room_to_group(self, group_id, requester_user_id, room_id, content):
"""Add room to group
@@ -601,7 +608,7 @@ class GroupsServerHandler(object):
user_results = yield self.store.get_users_in_group(
group_id, include_private=True
)
if user_id in [user_result["user_id"] for user_result in user_results]:
if user_id in (user_result["user_id"] for user_result in user_results):
raise SynapseError(400, "User already in group")
content = {


@@ -44,7 +44,11 @@ class AccountValidityHandler(object):
self._account_validity = self.hs.config.account_validity
if self._account_validity.renew_by_email_enabled and load_jinja2_templates:
if (
self._account_validity.enabled
and self._account_validity.renew_by_email_enabled
and load_jinja2_templates
):
# Don't do email-specific configuration if renewal by email is disabled.
try:
app_name = self.hs.config.email_app_name


@@ -25,6 +25,15 @@ from synapse.app import check_bind_error
logger = logging.getLogger(__name__)
ACME_REGISTER_FAIL_ERROR = """
--------------------------------------------------------------------------------
Failed to register with the ACME provider. This is likely happening because the installation
is new, and ACME v1 has been deprecated by Let's Encrypt and disabled for
new installations since November 2019.
At the moment, Synapse doesn't support ACME v2. For more information and alternative
solutions, please read https://github.com/matrix-org/synapse/blob/master/docs/ACME.md#deprecation-of-acme-v1
--------------------------------------------------------------------------------"""
class AcmeHandler(object):
def __init__(self, hs):
@@ -71,7 +80,12 @@ class AcmeHandler(object):
# want it to control where we save the certificates, we have to reach in
# and trigger the registration machinery ourselves.
self._issuer._registered = False
yield self._issuer._ensure_registered()
try:
yield self._issuer._ensure_registered()
except Exception:
logger.error(ACME_REGISTER_FAIL_ERROR)
raise
@defer.inlineCallbacks
def provision_certificate(self):


@@ -14,9 +14,11 @@
# limitations under the License.
import logging
from typing import List
from synapse.api.constants import Membership
from synapse.types import RoomStreamToken
from synapse.events import FrozenEvent
from synapse.types import RoomStreamToken, StateMap
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
@@ -51,68 +53,17 @@ class AdminHandler(BaseHandler):
return ret
async def get_users(self):
"""Function to retrieve a list of users in users table.
Args:
Returns:
defer.Deferred: resolves to list[dict[str, Any]]
"""
ret = await self.store.get_users()
async def get_user(self, user):
"""Function to get user details"""
ret = await self.store.get_user_by_id(user.to_string())
if ret:
profile = await self.store.get_profileinfo(user.localpart)
threepids = await self.store.user_get_threepids(user.to_string())
ret["displayname"] = profile.display_name
ret["avatar_url"] = profile.avatar_url
ret["threepids"] = threepids
return ret
async def get_users_paginate(self, start, limit, name, guests, deactivated):
"""Function to retrieve a paginated list of users from
users list. This will return a json list of users.
Args:
start (int): start number to begin the query from
limit (int): number of rows to retrieve
name (string): filter for user names
guests (bool): whether to include guest users
deactivated (bool): whether to include deactivated users
Returns:
defer.Deferred: resolves to json list[dict[str, Any]]
"""
ret = await self.store.get_users_paginate(
start, limit, name, guests, deactivated
)
return ret
async def search_users(self, term):
"""Function to search users list for one or more users with
the matched term.
Args:
term (str): search term
Returns:
defer.Deferred: resolves to list[dict[str, Any]]
"""
ret = await self.store.search_users(term)
return ret
def get_user_server_admin(self, user):
"""
Get the admin bit on a user.
Args:
user_id (UserID): the (necessarily local) user to manipulate
"""
return self.store.is_server_admin(user)
def set_user_server_admin(self, user, admin):
"""
Set the admin bit on a user.
Args:
user_id (UserID): the (necessarily local) user to manipulate
admin (bool): whether or not the user should be an admin of this server
"""
return self.store.set_server_admin(user, admin)
async def export_user_data(self, user_id, writer):
"""Write all data we have on the user to the given writer.
@@ -125,7 +76,7 @@ class AdminHandler(BaseHandler):
The returned value is that returned by `writer.finished()`.
"""
# Get all rooms the user is in or has been in
rooms = await self.store.get_rooms_for_user_where_membership_is(
rooms = await self.store.get_rooms_for_local_user_where_membership_is(
user_id,
membership_list=(
Membership.JOIN,
@@ -250,35 +201,26 @@ class ExfiltrationWriter(object):
"""Interface used to specify how to write exported data.
"""
def write_events(self, room_id, events):
def write_events(self, room_id: str, events: List[FrozenEvent]):
"""Write a batch of events for a room.
Args:
room_id (str)
events (list[FrozenEvent])
"""
pass
def write_state(self, room_id, event_id, state):
def write_state(self, room_id: str, event_id: str, state: StateMap[FrozenEvent]):
"""Write the state at the given event in the room.
This only gets called for backward extremities rather than for each
event.
Args:
room_id (str)
event_id (str)
state (dict[tuple[str, str], FrozenEvent])
"""
pass
def write_invite(self, room_id, event, state):
def write_invite(self, room_id: str, event: FrozenEvent, state: StateMap[dict]):
"""Write an invite for the room, with associated invite state.
Args:
room_id (str)
event (FrozenEvent)
state (dict[tuple[str, str], dict]): A subset of the state at the
room_id
event
state: A subset of the state at the
invite, with a subset of the event keys (type, state_key,
content and sender)
"""


@@ -17,9 +17,11 @@
import logging
import time
import unicodedata
import urllib.parse
from typing import Any, Dict, Iterable, List, Optional
import attr
import bcrypt
import bcrypt # type: ignore[import]
import pymacaroons
from twisted.internet import defer
@@ -38,9 +40,12 @@ from synapse.api.errors import (
from synapse.api.ratelimiting import Ratelimiter
from synapse.handlers.ui_auth import INTERACTIVE_AUTH_CHECKERS
from synapse.handlers.ui_auth.checkers import UserInteractiveAuthChecker
from synapse.http.server import finish_request
from synapse.http.site import SynapseRequest
from synapse.logging.context import defer_to_thread
from synapse.module_api import ModuleApi
from synapse.types import UserID
from synapse.push.mailer import load_jinja2_templates
from synapse.types import Requester, UserID
from synapse.util.caches.expiringcache import ExpiringCache
from ._base import BaseHandler
@@ -58,11 +63,11 @@ class AuthHandler(BaseHandler):
"""
super(AuthHandler, self).__init__(hs)
self.checkers = {} # type: dict[str, UserInteractiveAuthChecker]
self.checkers = {} # type: Dict[str, UserInteractiveAuthChecker]
for auth_checker_class in INTERACTIVE_AUTH_CHECKERS:
inst = auth_checker_class(hs)
if inst.is_enabled():
self.checkers[inst.AUTH_TYPE] = inst
self.checkers[inst.AUTH_TYPE] = inst # type: ignore
self.bcrypt_rounds = hs.config.bcrypt_rounds
@@ -108,8 +113,20 @@ class AuthHandler(BaseHandler):
self._clock = self.hs.get_clock()
# Load the SSO redirect confirmation page HTML template
self._sso_redirect_confirm_template = load_jinja2_templates(
hs.config.sso_redirect_confirm_template_dir, ["sso_redirect_confirm.html"],
)[0]
self._server_name = hs.config.server_name
# cast to tuple for use with str.startswith
self._whitelisted_sso_clients = tuple(hs.config.sso_client_whitelist)
@defer.inlineCallbacks
def validate_user_via_ui_auth(self, requester, request_body, clientip):
def validate_user_via_ui_auth(
self, requester: Requester, request_body: Dict[str, Any], clientip: str
):
"""
Checks that the user is who they claim to be, via a UI auth.
@@ -118,11 +135,11 @@ class AuthHandler(BaseHandler):
that it isn't stolen by re-authenticating them.
Args:
requester (Requester): The user, as given by the access token
requester: The user, as given by the access token
request_body (dict): The body of the request sent by the client
request_body: The body of the request sent by the client
clientip (str): The IP address of the client.
clientip: The IP address of the client.
Returns:
defer.Deferred[dict]: the parameters for this request (which may
@@ -193,7 +210,9 @@ class AuthHandler(BaseHandler):
return self.checkers.keys()
@defer.inlineCallbacks
def check_auth(self, flows, clientdict, clientip):
def check_auth(
self, flows: List[List[str]], clientdict: Dict[str, Any], clientip: str
):
"""
Takes a dictionary sent by the client in the login / registration
protocol and handles the User-Interactive Auth flow.
@@ -208,14 +227,14 @@ class AuthHandler(BaseHandler):
decorator.
Args:
flows (list): A list of login flows. Each flow is an ordered list of
strings representing auth-types. At least one full
flow must be completed in order for auth to be successful.
flows: A list of login flows. Each flow is an ordered list of
strings representing auth-types. At least one full
flow must be completed in order for auth to be successful.
clientdict: The dictionary from the client root level, not the
'auth' key: this method prompts for auth if none is sent.
clientip (str): The IP address of the client.
clientip: The IP address of the client.
Returns:
defer.Deferred[dict, dict, str]: a deferred tuple of
@@ -235,7 +254,7 @@ class AuthHandler(BaseHandler):
"""
authdict = None
sid = None
sid = None # type: Optional[str]
if clientdict and "auth" in clientdict:
authdict = clientdict["auth"]
del clientdict["auth"]
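The flow check itself reduces to: has the client completed every stage of at least one advertised flow? A compact sketch of that predicate (names illustrative):

def flow_satisfied(flows, completed_stages):
    # flows: each flow is an ordered list of auth-type strings
    return any(all(stage in completed_stages for stage in flow) for flow in flows)

flows = [["m.login.recaptcha", "m.login.terms"], ["m.login.sso"]]
print(flow_satisfied(flows, {"m.login.recaptcha", "m.login.terms"}))  # True
print(flow_satisfied(flows, {"m.login.recaptcha"}))                   # False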
@@ -268,9 +287,9 @@ class AuthHandler(BaseHandler):
creds = session["creds"]
# check auth type currently being presented
errordict = {}
errordict = {} # type: Dict[str, Any]
if "type" in authdict:
login_type = authdict["type"]
login_type = authdict["type"] # type: str
try:
result = yield self._check_auth_dict(authdict, clientip)
if result:
@@ -311,7 +330,7 @@ class AuthHandler(BaseHandler):
raise InteractiveAuthIncompleteError(ret)
@defer.inlineCallbacks
def add_oob_auth(self, stagetype, authdict, clientip):
def add_oob_auth(self, stagetype: str, authdict: Dict[str, Any], clientip: str):
"""
Adds the result of out-of-band authentication into an existing auth
session. Currently used for adding the result of fallback auth.
@@ -333,7 +352,7 @@ class AuthHandler(BaseHandler):
return True
return False
def get_session_id(self, clientdict):
def get_session_id(self, clientdict: Dict[str, Any]) -> Optional[str]:
"""
Gets the session ID for a client given the client dictionary
@@ -341,7 +360,7 @@ class AuthHandler(BaseHandler):
clientdict: The dictionary sent by the client in the request
Returns:
str|None: The string session ID the client sent. If the client did
The string session ID the client sent. If the client did
not send a session ID, returns None.
"""
sid = None
@@ -351,40 +370,42 @@ class AuthHandler(BaseHandler):
sid = authdict["session"]
return sid
def set_session_data(self, session_id, key, value):
def set_session_data(self, session_id: str, key: str, value: Any) -> None:
"""
Store a key-value pair into the sessions data associated with this
request. This data is stored server-side and cannot be modified by
the client.
Args:
session_id (string): The ID of this session as returned from check_auth
key (string): The key to store the data under
value (any): The data to store
session_id: The ID of this session as returned from check_auth
key: The key to store the data under
value: The data to store
"""
sess = self._get_session_info(session_id)
sess.setdefault("serverdict", {})[key] = value
self._save_session(sess)
def get_session_data(self, session_id, key, default=None):
def get_session_data(
self, session_id: str, key: str, default: Optional[Any] = None
) -> Any:
"""
Retrieve data stored with set_session_data
Args:
session_id (string): The ID of this session as returned from check_auth
key (string): The key to store the data under
default (any): Value to return if the key has not been set
session_id: The ID of this session as returned from check_auth
key: The key to store the data under
default: Value to return if the key has not been set
"""
sess = self._get_session_info(session_id)
return sess.setdefault("serverdict", {}).get(key, default)
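Together, set_session_data and get_session_data give servlets a small server-side scratch space keyed by session ID, which the client cannot tamper with. The pattern reduced to plain dicts (illustrative, with no persistence or expiry):

sessions = {}  # session_id -> session dict

def set_session_data(session_id, key, value):
    sess = sessions.setdefault(session_id, {"id": session_id})
    sess.setdefault("serverdict", {})[key] = value

def get_session_data(session_id, key, default=None):
    sess = sessions.setdefault(session_id, {"id": session_id})
    return sess.setdefault("serverdict", {}).get(key, default)

set_session_data("abc", "mxid", "@alice:example.org")
print(get_session_data("abc", "mxid"))                 # @alice:example.org
print(get_session_data("abc", "missing", "fallback"))  # fallback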
@defer.inlineCallbacks
def _check_auth_dict(self, authdict, clientip):
def _check_auth_dict(self, authdict: Dict[str, Any], clientip: str):
"""Attempt to validate the auth dict provided by a client
Args:
authdict (object): auth dict provided by the client
clientip (str): IP address of the client
authdict: auth dict provided by the client
clientip: IP address of the client
Returns:
Deferred: result of the stage verification.
@@ -410,10 +431,10 @@ class AuthHandler(BaseHandler):
(canonical_id, callback) = yield self.validate_login(user_id, authdict)
return canonical_id
def _get_params_recaptcha(self):
def _get_params_recaptcha(self) -> dict:
return {"public_key": self.hs.config.recaptcha_public_key}
def _get_params_terms(self):
def _get_params_terms(self) -> dict:
return {
"policies": {
"privacy_policy": {
@@ -430,7 +451,9 @@ class AuthHandler(BaseHandler):
}
}
def _auth_dict_for_flows(self, flows, session):
def _auth_dict_for_flows(
self, flows: List[List[str]], session: Dict[str, Any]
) -> Dict[str, Any]:
public_flows = []
for f in flows:
public_flows.append(f)
@@ -440,7 +463,7 @@ class AuthHandler(BaseHandler):
LoginType.TERMS: self._get_params_terms,
}
params = {}
params = {} # type: Dict[str, Any]
for f in public_flows:
for stage in f:
@@ -453,7 +476,13 @@ class AuthHandler(BaseHandler):
"params": params,
}
def _get_session_info(self, session_id):
def _get_session_info(self, session_id: Optional[str]) -> dict:
"""
Gets or creates a session given a session ID.
The session can be used to track data across multiple requests, e.g. for
interactive authentication.
"""
if session_id not in self.sessions:
session_id = None
@@ -466,7 +495,9 @@ class AuthHandler(BaseHandler):
return self.sessions[session_id]
@defer.inlineCallbacks
def get_access_token_for_user_id(self, user_id, device_id, valid_until_ms):
def get_access_token_for_user_id(
self, user_id: str, device_id: Optional[str], valid_until_ms: Optional[int]
):
"""
Creates a new access token for the user with the given user ID.
@@ -476,11 +507,11 @@ class AuthHandler(BaseHandler):
The device will be recorded in the table if it is not there already.
Args:
user_id (str): canonical User ID
device_id (str|None): the device ID to associate with the tokens.
user_id: canonical User ID
device_id: the device ID to associate with the tokens.
None to leave the tokens unassociated with a device (deprecated:
we should always have a device ID)
valid_until_ms (int|None): when the token is valid until. None for
valid_until_ms: when the token is valid until. None for
no expiry.
Returns:
The access token for the user's session.
@@ -515,13 +546,13 @@ class AuthHandler(BaseHandler):
return access_token
@defer.inlineCallbacks
def check_user_exists(self, user_id):
def check_user_exists(self, user_id: str):
"""
Checks to see if a user with the given id exists. Will check case
insensitively, but return None if there are multiple inexact matches.
Args:
(unicode|bytes) user_id: complete @user:id
user_id: complete @user:id
Returns:
defer.Deferred: (unicode) canonical_user_id, or None if zero or
@@ -536,7 +567,7 @@ class AuthHandler(BaseHandler):
return None
@defer.inlineCallbacks
def _find_user_id_and_pwd_hash(self, user_id):
def _find_user_id_and_pwd_hash(self, user_id: str):
"""Checks to see if a user with the given id exists. Will check case
insensitively, but will return None if there are multiple inexact
matches.
@@ -566,7 +597,7 @@ class AuthHandler(BaseHandler):
)
return result
def get_supported_login_types(self):
def get_supported_login_types(self) -> Iterable[str]:
"""Get a the login types supported for the /login API
By default this is just 'm.login.password' (unless password_enabled is
@@ -574,20 +605,20 @@ class AuthHandler(BaseHandler):
other login types.
Returns:
Iterable[str]: login types
login types
"""
return self._supported_login_types
@defer.inlineCallbacks
def validate_login(self, username, login_submission):
def validate_login(self, username: str, login_submission: Dict[str, Any]):
"""Authenticates the user for the /login API
Also used by the user-interactive auth flow to validate
m.login.password auth types.
Args:
username (str): username supplied by the user
login_submission (dict): the whole of the login submission
username: username supplied by the user
login_submission: the whole of the login submission
(including 'type' and other relevant fields)
Returns:
Deferred[str, func]: canonical user id, and optional callback
@@ -675,13 +706,13 @@ class AuthHandler(BaseHandler):
raise LoginError(403, "Invalid password", errcode=Codes.FORBIDDEN)
@defer.inlineCallbacks
def check_password_provider_3pid(self, medium, address, password):
def check_password_provider_3pid(self, medium: str, address: str, password: str):
"""Check if a password provider is able to validate a thirdparty login
Args:
medium (str): The medium of the 3pid (ex. email).
address (str): The address of the 3pid (ex. jdoe@example.com).
password (str): The password of the user.
medium: The medium of the 3pid (ex. email).
address: The address of the 3pid (ex. jdoe@example.com).
password: The password of the user.
Returns:
Deferred[(str|None, func|None)]: A tuple of `(user_id,
@@ -709,15 +740,15 @@ class AuthHandler(BaseHandler):
return None, None
@defer.inlineCallbacks
def _check_local_password(self, user_id, password):
def _check_local_password(self, user_id: str, password: str):
"""Authenticate a user against the local password database.
user_id is checked case insensitively, but will return None if there are
multiple inexact matches.
Args:
user_id (unicode): complete @user:id
password (unicode): the provided password
user_id: complete @user:id
password: the provided password
Returns:
Deferred[unicode] the canonical_user_id, or Deferred[None] if
unknown user/bad password
@@ -740,7 +771,7 @@ class AuthHandler(BaseHandler):
return user_id
@defer.inlineCallbacks
def validate_short_term_login_token_and_get_user_id(self, login_token):
def validate_short_term_login_token_and_get_user_id(self, login_token: str):
auth_api = self.hs.get_auth()
user_id = None
try:
@@ -754,11 +785,11 @@ class AuthHandler(BaseHandler):
return user_id
@defer.inlineCallbacks
def delete_access_token(self, access_token):
def delete_access_token(self, access_token: str):
"""Invalidate a single access token
Args:
access_token (str): access token to be deleted
access_token: access token to be deleted
Returns:
Deferred
@@ -783,15 +814,17 @@ class AuthHandler(BaseHandler):
@defer.inlineCallbacks
def delete_access_tokens_for_user(
self, user_id, except_token_id=None, device_id=None
self,
user_id: str,
except_token_id: Optional[str] = None,
device_id: Optional[str] = None,
):
"""Invalidate access tokens belonging to a user
Args:
user_id (str): ID of user the tokens belong to
except_token_id (str|None): access_token ID which should *not* be
deleted
device_id (str|None): ID of device the tokens are associated with.
user_id: ID of user the tokens belong to
except_token_id: access_token ID which should *not* be deleted
device_id: ID of device the tokens are associated with.
If None, tokens associated with any device (or no device) will
be deleted
Returns:
@@ -815,7 +848,15 @@ class AuthHandler(BaseHandler):
)
@defer.inlineCallbacks
def add_threepid(self, user_id, medium, address, validated_at):
def add_threepid(self, user_id: str, medium: str, address: str, validated_at: int):
# check if medium has a valid value
if medium not in ["email", "msisdn"]:
raise SynapseError(
code=400,
msg=("'%s' is not a valid value for 'medium'" % (medium,)),
errcode=Codes.INVALID_PARAM,
)
# 'Canonicalise' email addresses down to lower case.
# We're now moving towards the homeserver being the entity that
# is responsible for validating threepids used for resetting passwords
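The lower-casing the comment above describes is elided from this hunk; a minimal sketch of what it amounts to (an assumption, not the verbatim source):

    # Sketch only (assumed): canonicalise email 3pids to lower case
    # before validation and storage.
    if medium == "email":
        address = address.lower()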
@@ -833,19 +874,20 @@ class AuthHandler(BaseHandler):
)
@defer.inlineCallbacks
def delete_threepid(self, user_id, medium, address, id_server=None):
def delete_threepid(
self, user_id: str, medium: str, address: str, id_server: Optional[str] = None
):
"""Attempts to unbind the 3pid on the identity servers and deletes it
from the local database.
Args:
user_id (str)
medium (str)
address (str)
id_server (str|None): Use the given identity server when unbinding
user_id: ID of user to remove the 3pid from.
medium: The medium of the 3pid being removed: "email" or "msisdn".
address: The 3pid address to remove.
id_server: Use the given identity server when unbinding
any threepids. If None then will attempt to unbind using the
identity server specified when binding (if known).
Returns:
Deferred[bool]: Returns True if successfully unbound the 3pid on
the identity server, False if identity server doesn't support the
@@ -864,17 +906,18 @@ class AuthHandler(BaseHandler):
yield self.store.user_delete_threepid(user_id, medium, address)
return result
def _save_session(self, session):
def _save_session(self, session: Dict[str, Any]) -> None:
"""Update the last used time on the session to now and add it back to the session store."""
# TODO: Persistent storage
logger.debug("Saving session %s", session)
session["last_used"] = self.hs.get_clock().time_msec()
self.sessions[session["id"]] = session
def hash(self, password):
def hash(self, password: str):
"""Computes a secure hash of password.
Args:
password (unicode): Password to hash.
password: Password to hash.
Returns:
Deferred(unicode): Hashed password.
@@ -891,12 +934,12 @@ class AuthHandler(BaseHandler):
return defer_to_thread(self.hs.get_reactor(), _do_hash)
def validate_hash(self, password, stored_hash):
def validate_hash(self, password: str, stored_hash: bytes):
"""Validates that self.hash(password) == stored_hash.
Args:
password (unicode): Password to hash.
stored_hash (bytes): Expected hash value.
password: Password to hash.
stored_hash: Expected hash value.
Returns:
Deferred(bool): Whether self.hash(password) == stored_hash.
@@ -919,13 +962,74 @@ class AuthHandler(BaseHandler):
else:
return defer.succeed(False)
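As context for the hash/validate_hash signatures above: both operations are CPU-bound bcrypt calls, so they are pushed off the reactor thread. A minimal standalone sketch using plain Twisted (Synapse's own defer_to_thread helper also takes the reactor explicitly, and the real implementation may mix in a configured pepper):

    # Minimal sketch, not the verbatim implementation.
    import bcrypt
    from twisted.internet.threads import deferToThread

    def validate_hash_sketch(password: str, stored_hash: bytes):
        def _do_validate() -> bool:
            # bcrypt comparison is CPU-bound, so run it in a worker
            # thread rather than blocking the reactor.
            return bcrypt.checkpw(password.encode("utf-8"), stored_hash)

        return deferToThread(_do_validate)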
def complete_sso_login(
self,
registered_user_id: str,
request: SynapseRequest,
client_redirect_url: str,
):
"""Having figured out a mxid for this user, complete the HTTP request
Args:
registered_user_id: The registered user ID to complete SSO login for.
request: The request to complete.
client_redirect_url: The URL to which to redirect the user at the end of the
process.
"""
# Create a login token
login_token = self.macaroon_gen.generate_short_term_login_token(
registered_user_id
)
# Append the login token to the original redirect URL (i.e. with its query
# parameters kept intact) to build the URL to which the template needs to
# redirect the users once they have clicked on the confirmation link.
redirect_url = self.add_query_param_to_url(
client_redirect_url, "loginToken", login_token
)
# if the client is whitelisted, we can redirect straight to it
if client_redirect_url.startswith(self._whitelisted_sso_clients):
request.redirect(redirect_url)
finish_request(request)
return
# Otherwise, serve the redirect confirmation page.
# Remove the query parameters from the redirect URL to get a shorter version of
# it. This shorter form is only displayed as a human-readable URL in the
# template; it is not the URL we actually redirect users to.
redirect_url_no_params = client_redirect_url.split("?")[0]
html = self._sso_redirect_confirm_template.render(
display_url=redirect_url_no_params,
redirect_url=redirect_url,
server_name=self._server_name,
).encode("utf-8")
request.setResponseCode(200)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%d" % (len(html),))
request.write(html)
finish_request(request)
@staticmethod
def add_query_param_to_url(url: str, param_name: str, param: Any):
url_parts = list(urllib.parse.urlparse(url))
query = dict(urllib.parse.parse_qsl(url_parts[4]))
query.update({param_name: param})
url_parts[4] = urllib.parse.urlencode(query)
return urllib.parse.urlunparse(url_parts)
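Usage sketch for the helper above (illustrative values):

    # Existing query parameters are preserved and the token appended.
    url = AuthHandler.add_query_param_to_url(
        "https://client.example.com/?state=abc", "loginToken", "sometoken"
    )
    # -> https://client.example.com/?state=abc&loginToken=sometoken

Note that dict(parse_qsl(...)) collapses repeated query keys; that is acceptable here since only a single loginToken parameter is ever added.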
@attr.s
class MacaroonGenerator(object):
hs = attr.ib()
def generate_access_token(self, user_id, extra_caveats=None):
def generate_access_token(
self, user_id: str, extra_caveats: Optional[List[str]] = None
) -> str:
extra_caveats = extra_caveats or []
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = access")
@@ -938,16 +1042,9 @@ class MacaroonGenerator(object):
macaroon.add_first_party_caveat(caveat)
return macaroon.serialize()
def generate_short_term_login_token(self, user_id, duration_in_ms=(2 * 60 * 1000)):
"""
Args:
user_id (unicode):
duration_in_ms (int):
Returns:
unicode
"""
def generate_short_term_login_token(
self, user_id: str, duration_in_ms: int = (2 * 60 * 1000)
) -> str:
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = login")
now = self.hs.get_clock().time_msec()
@@ -955,12 +1052,12 @@ class MacaroonGenerator(object):
macaroon.add_first_party_caveat("time < %d" % (expiry,))
return macaroon.serialize()
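A hedged sketch of verifying such a token with pymacaroons: the "type = login" and "time < %d" caveats mirror those added above, while the user_id and gen caveats come from the base macaroon, which is elided from this hunk, so those checks are assumptions:

    import pymacaroons

    def verify_login_token_sketch(serialized: str, key: str, now_ms: int) -> None:
        macaroon = pymacaroons.Macaroon.deserialize(serialized)
        v = pymacaroons.Verifier()
        v.satisfy_exact("type = login")
        v.satisfy_general(lambda c: c.startswith("user_id = "))  # assumed base caveat
        v.satisfy_exact("gen = 1")  # assumed base caveat
        v.satisfy_general(
            lambda c: c.startswith("time < ") and now_ms < int(c[len("time < "):])
        )
        v.verify(macaroon, key)  # raises if any caveat is unmet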
def generate_delete_pusher_token(self, user_id):
def generate_delete_pusher_token(self, user_id: str) -> str:
macaroon = self._generate_base_macaroon(user_id)
macaroon.add_first_party_caveat("type = delete_pusher")
return macaroon.serialize()
def _generate_base_macaroon(self, user_id):
def _generate_base_macaroon(self, user_id: str) -> pymacaroons.Macaroon:
macaroon = pymacaroons.Macaroon(
location=self.hs.config.server_name,
identifier="key",


@@ -140,7 +140,7 @@ class DeactivateAccountHandler(BaseHandler):
user_id (str): The user ID to reject pending invites for.
"""
user = UserID.from_string(user_id)
pending_invites = await self.store.get_invited_rooms_for_user(user_id)
pending_invites = await self.store.get_invited_rooms_for_local_user(user_id)
for room in pending_invites:
try:


@@ -26,6 +26,7 @@ from synapse.api.errors import (
FederationDeniedError,
HttpResponseException,
RequestSendFailed,
SynapseError,
)
from synapse.logging.opentracing import log_kv, set_tag, trace
from synapse.types import RoomStreamToken, get_domain_from_id
@@ -39,6 +40,8 @@ from ._base import BaseHandler
logger = logging.getLogger(__name__)
MAX_DEVICE_DISPLAY_NAME_LEN = 100
class DeviceWorkerHandler(BaseHandler):
def __init__(self, hs):
@@ -225,6 +228,22 @@ class DeviceWorkerHandler(BaseHandler):
return result
@defer.inlineCallbacks
def on_federation_query_user_devices(self, user_id):
stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master")
self_signing_key = yield self.store.get_e2e_cross_signing_key(
user_id, "self_signing"
)
return {
"user_id": user_id,
"stream_id": stream_id,
"devices": devices,
"master_key": master_key,
"self_signing_key": self_signing_key,
}
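The federation query response assembled above is shaped like this (key set as in the return statement; values illustrative):

    # Illustrative user_devices query response; values made up.
    example_response = {
        "user_id": "@alice:example.org",
        "stream_id": 12345,
        "devices": [{"device_id": "JLAFKJWSCS"}],  # per-device key blobs elided
        "master_key": None,  # cross-signing keys may be absent
        "self_signing_key": None,
    }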
class DeviceHandler(DeviceWorkerHandler):
def __init__(self, hs):
@@ -239,9 +258,6 @@ class DeviceHandler(DeviceWorkerHandler):
federation_registry.register_edu_handler(
"m.device_list_update", self.device_list_updater.incoming_device_list_update
)
federation_registry.register_query_handler(
"user_devices", self.on_federation_query_user_devices
)
hs.get_distributor().observe("user_left_room", self.user_left_room)
@@ -391,9 +407,18 @@ class DeviceHandler(DeviceWorkerHandler):
defer.Deferred:
"""
# Reject a new displayname which is too long.
new_display_name = content.get("display_name")
if new_display_name and len(new_display_name) > MAX_DEVICE_DISPLAY_NAME_LEN:
raise SynapseError(
400,
"Device display name is too long (max %i)"
% (MAX_DEVICE_DISPLAY_NAME_LEN,),
)
try:
yield self.store.update_device(
user_id, device_id, new_display_name=content.get("display_name")
user_id, device_id, new_display_name=new_display_name
)
yield self.notify_device_update(user_id, [device_id])
except errors.StoreError as e:
@@ -456,22 +481,6 @@ class DeviceHandler(DeviceWorkerHandler):
self.notifier.on_new_event("device_list_key", position, users=[from_user_id])
@defer.inlineCallbacks
def on_federation_query_user_devices(self, user_id):
stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master")
self_signing_key = yield self.store.get_e2e_cross_signing_key(
user_id, "self_signing"
)
return {
"user_id": user_id,
"stream_id": stream_id,
"devices": devices,
"master_key": master_key,
"self_signing_key": self_signing_key,
}
@defer.inlineCallbacks
def user_left_room(self, user, room_id):
user_id = user.to_string()
@@ -598,7 +607,13 @@ class DeviceListUpdater(object):
# happens if we've missed updates.
resync = yield self._need_to_do_resync(user_id, pending_updates)
logger.debug("Need to re-sync devices for %r? %r", user_id, resync)
if logger.isEnabledFor(logging.INFO):
logger.info(
"Received device list update for %s, requiring resync: %s. Devices: %s",
user_id,
resync,
", ".join(u[0] for u in pending_updates),
)
if resync:
yield self.user_device_resync(user_id)
@@ -727,6 +742,6 @@ class DeviceListUpdater(object):
# We clobber the seen updates since we've re-synced from a given
# point.
self._seen_updates[user_id] = set([stream_id])
self._seen_updates[user_id] = {stream_id}
defer.returnValue(result)


@@ -14,12 +14,14 @@
# limitations under the License.
import logging
from typing import Any, Dict
from canonicaljson import json
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.logging.context import run_in_background
from synapse.logging.opentracing import (
get_active_span_text_map,
log_kv,
@@ -47,6 +49,8 @@ class DeviceMessageHandler(object):
"m.direct_to_device", self.on_direct_to_device_edu
)
self._device_list_updater = hs.get_device_handler().device_list_updater
@defer.inlineCallbacks
def on_direct_to_device_edu(self, origin, content):
local_messages = {}
@@ -65,6 +69,9 @@ class DeviceMessageHandler(object):
logger.warning("Request for keys for non-local user %s", user_id)
raise SynapseError(400, "Not a user here")
if not by_device:
continue
messages_by_device = {
device_id: {
"content": message_content,
@@ -73,8 +80,11 @@ class DeviceMessageHandler(object):
}
for device_id, message_content in by_device.items()
}
if messages_by_device:
local_messages[user_id] = messages_by_device
local_messages[user_id] = messages_by_device
yield self._check_for_unknown_devices(
message_type, sender_user_id, by_device
)
stream_id = yield self.store.add_messages_from_remote_to_device_inbox(
origin, message_id, local_messages
@@ -84,6 +94,55 @@ class DeviceMessageHandler(object):
"to_device_key", stream_id, users=local_messages.keys()
)
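For reference, the m.direct_to_device EDU content consumed by on_direct_to_device_edu is shaped roughly like this (per the Matrix federation spec; values illustrative):

    example_edu_content = {
        "sender": "@alice:example.org",
        "type": "m.room_key_request",
        "message_id": "hiezohf6Hoo7kaev",
        # user id -> device id (or "*") -> message body
        "messages": {
            "@bob:this.server": {
                "DEVICEID": {"requesting_device_id": "RJYKSTBOIE"}
            }
        },
    }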
@defer.inlineCallbacks
def _check_for_unknown_devices(
self,
message_type: str,
sender_user_id: str,
by_device: Dict[str, Dict[str, Any]],
):
"""Checks inbound device messages for unkown remote devices, and if
found marks the remote cache for the user as stale.
"""
if message_type != "m.room_key_request":
return
# Get the sending device IDs
requesting_device_ids = set()
for message_content in by_device.values():
device_id = message_content.get("requesting_device_id")
requesting_device_ids.add(device_id)
# Check if we are tracking the devices of the remote user.
room_ids = yield self.store.get_rooms_for_user(sender_user_id)
if not room_ids:
logger.info(
"Received device message from remote device we don't"
" share a room with: %s %s",
sender_user_id,
requesting_device_ids,
)
return
# If we are tracking them, check that we know about the sending
# devices.
cached_devices = yield self.store.get_cached_devices_for_user(sender_user_id)
unknown_devices = requesting_device_ids - set(cached_devices)
if unknown_devices:
logger.info(
"Received device message from remote device not in our cache: %s %s",
sender_user_id,
unknown_devices,
)
yield self.store.mark_remote_user_device_cache_as_stale(sender_user_id)
# Immediately attempt a resync in the background
run_in_background(
self._device_list_updater.user_device_resync, sender_user_id
)
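The requesting_device_id field inspected above comes from the m.room_key_request message body, which looks roughly like this (per the Matrix client-server spec; values illustrative):

    example_key_request = {
        "action": "request",
        "requesting_device_id": "RJYKSTBOIE",
        "request_id": "1495474790150.19",
        "body": {
            "algorithm": "m.megolm.v1.aes-sha2",
            "room_id": "!room:example.org",
            "sender_key": "<sender curve25519 key>",
            "session_id": "<megolm session id>",
        },
    }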
@defer.inlineCallbacks
def send_device_message(self, sender_user_id, message_type, messages):
set_tag("number_of_messages", len(messages))

Some files were not shown because too many files have changed in this diff.