Compare commits

280 Commits

Author SHA1 Message Date
Patrick Cloke
3c8a57f080 1.13.0rc3 2020-05-18 10:58:51 -04:00
Patrick Cloke
56db0b1365 Hash passwords earlier in the registration process (#7523) 2020-05-18 09:46:18 -04:00
Richard van der Hoff
75fbc1a0c6 1.13.0rc2 2020-05-14 11:10:11 +01:00
Richard van der Hoff
623abbeb8d 1.13.0rc2 2020-05-14 11:01:24 +01:00
Richard van der Hoff
a0e063387d Stop get_joined_users corruption from custom statuses (#7376)
Fix a bug where the `get_joined_users` cache could be corrupted by custom
status events (or other state events with a state_key matching the user ID).

The bug was introduced by #2229, but has largely gone unnoticed since then.

Fixes #7099, #7373.
2020-05-14 10:07:54 +01:00
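
For illustration, a hypothetical state event of the kind that triggered the corruption: any non-membership state event whose `state_key` happens to equal a user ID (the event type and content below are made up):

```python
# Hypothetical custom-status state event: its state_key matches a user ID,
# so the get_joined_users cache wrongly treated it as membership state.
custom_status_event = {
    "type": "im.vector.user_status",    # any non-m.room.member type triggers it
    "state_key": "@alice:example.com",  # equals a user ID
    "content": {"status": "on holiday"},
}
```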
Patrick Cloke
5d64fefd6c Do not validate that the client dict is stable during UI Auth. (#7483)
This backs out some of the validation for the client dictionary, and instead
logs when it changes during a user-interactive authentication session.
2020-05-13 14:26:44 -04:00
Patrick Cloke
edd3b0747c Fix new flake8 errors (#7489)
This is a cherry-pick of 1a1da60ad2 (#7470)
to the release-v1.13.0 branch.
2020-05-13 08:24:50 -04:00
Andrew Morgan
fa4af2c3af Don't UPGRADE database rows 2020-05-11 16:08:37 +01:00
Andrew Morgan
e0caeedab3 RST indenting 2020-05-11 16:05:23 +01:00
Andrew Morgan
8cced49764 Put rollback instructions in upgrade notes 2020-05-11 16:04:00 +01:00
Andrew Morgan
647a995b96 Fix changelog typo 2020-05-11 15:40:36 +01:00
Andrew Morgan
c3416c888a Oh yeah, RST 2020-05-11 15:37:51 +01:00
Andrew Morgan
b41d7b3969 Absolute URL it is then 2020-05-11 15:36:32 +01:00
Andrew Morgan
3916c655f8 Fix upgrade notes link 2020-05-11 15:34:59 +01:00
Andrew Morgan
1f73d28b6c Provide summary of upgrade issues in changelog. Fix ) 2020-05-11 15:33:53 +01:00
Andrew Morgan
71c9307509 Move next version notes from changelog to upgrade notes 2020-05-11 15:10:08 +01:00
Andrew Morgan
200ad02624 Changelog fixes 2020-05-11 14:56:29 +01:00
Andrew Morgan
20ffaa7209 1.13.0rc1 2020-05-11 14:54:38 +01:00
Neil Johnson
85155654c5 Documentation on setting up redis (#7446) 2020-05-11 13:21:15 +01:00
Patrick Cloke
0ad6d28b0d Rework UI Auth session validation for registration (#7455)
Be less strict about validation of UI authentication sessions during
registration to match client expectations.
2020-05-08 16:08:58 -04:00
Richard van der Hoff
aa5aa6f96a Fix errors from malformed log line (#7454) 2020-05-07 19:51:38 +01:00
Richard van der Hoff
da9b2db3af Drop support for redis.dbid (#7450)
Since we only use pubsub, the dbid is irrelevant.
2020-05-07 16:46:15 +01:00
Brendan Abolivier
d9b8d27494 Add a configuration setting for the dummy event threshold (#7422)
Add dummy_events_threshold which allows configuring the number of forward extremities a room needs for Synapse to send dummy events in it.
2020-05-07 10:35:23 +01:00
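
A minimal `homeserver.yaml` sketch of the new option, assuming the default of 10:

```yaml
# Send a dummy event into a room once it accumulates this many forward
# extremities.
dummy_events_threshold: 10
```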
Patrick Cloke
d7c2df2fa3 Improve per-block CPU and DB usage metrics (#7426) 2020-05-06 16:43:39 -04:00
Andrew Morgan
4162c39dcf Port group attestation renewal slow down from matrix-org-hotfixes (#7442) 2020-05-06 20:21:38 +01:00
Richard van der Hoff
e053c86a96 Make redis go faster with hiredis (#7439)
For the record, the reason we need this is as follows:

Each RDATA command comes down the redis pipe as a subscription message. txredisapi as written needs at least three reactor ticks to read each subscription message from the TCP buffer. Hence, once the process gets loaded, it starts getting behind, and eventually redis knifes the connection. It then takes ages for the master to work its way through the backlog before it reconnects again, during which any commands from any workers are dropped.
2020-05-06 17:36:46 +01:00
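
With the hiredis package installed, txredisapi parses replies using hiredis's C reader instead of pure Python. A minimal sketch of that reader (the wire bytes below are a made-up pubsub message):

```python
import hiredis

reader = hiredis.Reader()
# Feed raw bytes straight from the TCP buffer; complete replies are parsed
# in C in a single pass rather than over several reactor ticks.
reader.feed(b"*3\r\n$7\r\nmessage\r\n$11\r\nreplication\r\n$5\r\nRDATA\r\n")
print(reader.gets())            # [b'message', b'replication', b'RDATA']
assert reader.gets() is False   # False means no complete reply is buffered
```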
Andrew Morgan
aee9130a83 Stop Auth methods from polling the config on every req. (#7420) 2020-05-06 15:54:58 +01:00
Erik Johnston
b26f3e582c Merge pull request #7423 from matrix-org/erikj/faster_device_lists_fetch
Speed up fetching device lists changes in sync.
2020-05-06 11:14:13 +01:00
Richard van der Hoff
c255b0ffdc Merge pull request #7427 from matrix-org/rav/fix_dropped_messages
Fix lost events on replication reconnection
2020-05-06 10:54:25 +01:00
Richard van der Hoff
a8c17da245 Merge branch 'release-v1.13.0' into rav/fix_dropped_messages 2020-05-05 23:01:12 +01:00
Richard van der Hoff
1242267316 Merge branch 'release-v1.13.0' into rav/fix_dropped_messages 2020-05-05 22:38:44 +01:00
Richard van der Hoff
7bf788ac73 changelog 2020-05-05 22:38:16 +01:00
Richard van der Hoff
7f7eedbebb Wait for a POSITION on the right connection before accepting RDATA
... otherwise we can believe we're up to date when we're not.
2020-05-05 22:38:16 +01:00
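
A rough sketch of the guard (handler and attribute names are hypothetical): RDATA for a stream is queued until a POSITION for that stream has been processed on the same connection:

```python
# Hypothetical shape of the fix: track, per connection, which streams have
# been synced via POSITION before applying their RDATA rows.
async def on_RDATA(self, conn, cmd):
    if cmd.stream_name not in self._streams_synced[conn]:
        # Not yet caught up on this connection: queue the row for later.
        self._pending_rows[conn].setdefault(cmd.stream_name, []).append(cmd.row)
        return
    await self._process_rows(cmd.stream_name, cmd.token, [cmd.row])

async def on_POSITION(self, conn, cmd):
    await self._catch_up_from_database(cmd.stream_name, cmd.token)
    self._streams_synced[conn].add(cmd.stream_name)
```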
Brendan Abolivier
5b8023dc7f Move logs about discarded RDATA to debug (#7421) 2020-05-05 21:07:33 +02:00
Richard van der Hoff
d78265af0c Wait to subscribe before sending REPLICATE 2020-05-05 19:31:37 +01:00
Richard van der Hoff
13dd458b8d Merge branch 'release-v1.13.0' into erikj/faster_device_lists_fetch 2020-05-05 18:14:00 +01:00
Richard van der Hoff
714560e325 Update changelog.d/7423.misc 2020-05-05 18:03:59 +01:00
Erik Johnston
79fe3e068b Newsfile 2020-05-05 17:40:29 +01:00
Erik Johnston
f9073893af Speed up fetching device lists changes in sync.
Currently we copy `users_who_share_room` needlessly about three times,
which is expensive when the set is large (which it can easily be).
2020-05-05 17:40:29 +01:00
Patrick Cloke
fe69fb6263 Add backwards compatibility codepath to LoggingContext. (#7408) 2020-05-05 09:21:34 -04:00
Erik Johnston
7941a70fa8 Fix bug in EventContext.deserialize. (#7393)
This caused `prev_state_ids` to be incorrect if the state event was not
replacing an existing state entry.
2020-05-05 14:17:27 +01:00
Brendan Abolivier
9858d5c362 Fix ordering in MANIFEST.in 2020-05-04 16:33:30 +02:00
Brendan Abolivier
ad088716bc Merge pull request #7404 from matrix-org/babolivier/fix_manifest
Fix MANIFEST.in
2020-05-04 16:24:15 +02:00
Brendan Abolivier
068da604c2 Fix MANIFEST.in
An update of check-manifest shone some light on some issues with MANIFEST.in, specifically that we didn't ignore/prune the contrib directory, and that we were using prune instead of exclude for files. This fixes both issues.

Fixes #7403
2020-05-04 15:18:06 +02:00
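
The setuptools distinction at issue, as a short illustrative snippet (file names are examples only): `prune` drops whole directory trees, while `exclude` drops individual files.

```
# prune removes whole directory trees
prune contrib
# exclude removes individual files
exclude sytest-blacklist
```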
Erik Johnston
350421e058 Fix redis password support. (#7401)
We forgot to set the password on the subscriber connection, and also forgot
to call the super methods in the overridden connectionMade/connectionLost
functions.
2020-05-04 14:04:09 +01:00
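
A hypothetical sketch of the second half of the fix: overridden Twisted protocol hooks must call the base-class implementations, otherwise connection setup and teardown are silently skipped (the password likewise has to be threaded through to the subscriber connection's factory):

```python
import txredisapi

class RedisSubscriber(txredisapi.SubscriberProtocol):
    def connectionMade(self):
        super().connectionMade()        # previously omitted
        self.subscribe("replication")   # hypothetical channel name

    def connectionLost(self, reason):
        super().connectionLost(reason)  # previously omitted
```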
Patrick Cloke
032e5a2aca Convert synapse.server_notices to async/await. (#7394) 2020-05-01 15:28:59 -04:00
Patrick Cloke
b0cbc57375 Convert the synapse.notifier module to async/await. (#7395) 2020-05-01 15:14:49 -04:00
Richard van der Hoff
97ef1471c6 Wait for current_state_events_membership before populate_stats_process_rooms (#7387)
populate_stats_process_rooms was added in #5971 / v1.4.0; current_state_events_membership was added in #5706 / v1.3.0.

Fixes #7380.
2020-05-01 18:05:58 +01:00
Erik Johnston
0e719f2398 Thread through instance name to replication client. (#7369)
For in-memory streams, when fetching updates on workers we need to query the source of the stream, which is currently hard-coded to be master. This PR threads the source instance we received via `POSITION` through to the update function in each stream, which can then be passed to the replication client for in-memory streams.
2020-05-01 17:19:56 +01:00
Erik Johnston
3085cde577 Use stream.current_token() and remove stream_positions() (#7172)
We move the processing of typing and federation replication traffic into their handlers so that `Stream.current_token()` points to a valid token. This allows us to remove `get_streams_to_replicate()` and `stream_positions()`.
2020-05-01 15:21:35 +01:00
Andrew Morgan
6b22921b19 async/await is_server_admin (#7363) 2020-05-01 15:15:36 +01:00
Andrew Morgan
2e8955f4a6 Further improvements to requesting the public rooms list on a homeserver which has it set to private (#7368) 2020-05-01 15:15:08 +01:00
Richard van der Hoff
b2dba06079 Workaround for assertion errors from db_query_to_update_function (#7378)
Hopefully this is no worse than what we have on master...
2020-05-01 09:25:16 +01:00
Patrick Cloke
627b0f5f27 Persist user interactive authentication sessions (#7302)
By persisting the user interactive authentication sessions to the database, this fixes
situations where a user hits different workers throughout their auth session, and also
allows sessions to persist through restarts of Synapse.
2020-04-30 13:47:49 -04:00
Andrew Morgan
9d8ecc9e6c Apply federation check for /publicRooms with filter list (#7367) 2020-04-30 11:38:07 +01:00
Erik Johnston
37f6823f5b Add instance name to RDATA/POSITION commands (#7364)
This is primarily for allowing us to send those commands from workers, but for now simply allows us to ignore echoed RDATA/POSITION commands that we sent (we get echoes of sent commands when using redis). Currently we log a WARNING on the master process every time we receive an echoed RDATA.
2020-04-29 16:23:08 +01:00
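
A minimal sketch of the echo filtering this enables (attribute names hypothetical):

```python
def on_RDATA(self, cmd):
    # Commands now carry the sender's instance name, so we can quietly drop
    # echoes of our own messages instead of logging a WARNING.
    if cmd.instance_name == self._instance_name:
        return
    ...  # normal RDATA handling continues here
```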
Erik Johnston
3eab76ad43 Don't relay REMOTE_SERVER_UP cmds to same conn. (#7352)
For direct TCP connections we need the master to relay REMOTE_SERVER_UP
commands to the other connections so that all instances get notified
about it. The old implementation just relayed to all connections,
assuming that sending back to the original sender of the command was
safe. This is not true for redis, where commands sent get echoed back to
the sender, which was causing master to effectively infinite loop
sending and then re-receiving REMOTE_SERVER_UP commands that it sent.

The fix is to ensure that we only relay to *other* connections and not
to the connection we received the notification from.

Fixes #7334.
2020-04-29 14:10:59 +01:00
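
A sketch of the fix (hypothetical handler shape): relay to every connection except the one the command arrived on, which breaks the echo loop:

```python
def on_REMOTE_SERVER_UP(self, origin_conn, cmd):
    for conn in self._connections:
        if conn is not origin_conn:  # never send back to the sender
            conn.send_command(cmd)
```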
Richard van der Hoff
c2e1a2110f Fix limit logic for EventsStream (#7358)
* Factor out functions for injecting events into database

I want to add some more flexibility to the tools for injecting events into the
database, and I don't want to clutter up HomeserverTestCase with them, so let's
factor them out to a new file.

* Rework TestReplicationDataHandler

This wasn't very easy to work with: the mock wrapping was largely superfluous,
and it's useful to be able to inspect the received rows, and clear out the
received list.

* Fix AssertionErrors being thrown by EventsStream

Part of the problem was that there was an off-by-one error in the assertion,
but also the limit logic was too simple. Fix it all up and add some tests.
2020-04-29 12:30:36 +01:00
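
A sketch of the corrected limit logic (variable names hypothetical): if the events query filled its quota, every other query is truncated to the same upper bound, so rows in the gap are picked up by the next batch instead of being dropped:

```python
upper_bound = current_token
if len(event_rows) == target_row_count:
    # The events query hit the limit: don't return rows from the other
    # queries beyond the last event token, or they would be skipped later.
    upper_bound = event_rows[-1][0]
    state_rows = [row for row in state_rows if row[0] <= upper_bound]
```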
Andrew Morgan
eeef9633af Fix fallback value for account_threepid_delegates.email (#7316) 2020-04-29 11:30:06 +01:00
Andrew Morgan
c58ae367d8 Clean up admin api docs (#7361) 2020-04-28 20:06:03 +01:00
Manuel Stahl
04dd7d182d Return total number of users and profile attributes in admin users endpoint (#6881)
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
2020-04-28 18:19:36 +01:00
Erik Johnston
fce663889b Add some replication tests (#7278)
Specifically some tests for the typing stream, which means we test streams that fetch missing updates via HTTP (rather than via the DB).

We also shuffle things around a bit so that we create two separate `HomeServer` objects, rather than trying to insert a slaved store into places.

Note: `test_typing.py` is heavily inspired by `test_receipts.py`
2020-04-28 17:42:03 +01:00
Andrew Morgan
ce207aa0ff Fix typo 'datbases' in ConfigError 2020-04-28 16:53:10 +01:00
Richard van der Hoff
fb8ff79efd Fix collation for postgres for unit tests (#7359)
When running the UTs against a postgres database, we need to set the collation
correctly.
2020-04-28 14:21:48 +01:00
Erik Johnston
38919b521e Run replication streamers on workers (#7146)
Currently we never write to streams from workers, but that will change soon
2020-04-28 13:34:12 +01:00
Richard van der Hoff
07337fe30b Fix incorrect metrics reporting for renew_attestations (#7344)
We need to wait for the renewals to finish, so that the metrics are correctly
reported.
2020-04-27 22:20:10 +01:00
Brendan Abolivier
036fab5d8a Document monitoring workers (#7357)
It doesn't seem to be documented anywhere and means that you suddenly start losing metrics without any obvious reason when you go from monolith to workers (e.g. #7312).
2020-04-27 21:36:47 +02:00
lub
aa2492907f Add some explanation to application_services.md (#7091)
Signed-off-by: Simon Körner <git@lubiland.de>
2020-04-27 15:03:09 +01:00
Brendan Abolivier
cc9eceb00d Don't crash when one of the configuration files is empty (#7341)
If the admin adds a `.yaml` file that's either empty or doesn't parse into a dict to a config directory (e.g. `conf.d` for debs installs), stuff like https://github.com/matrix-org/synapse/issues/7322 would happen. This PR checks that the file is correctly parsed into a dict, or ignores it with a warning if it parses into any other type (including `None` for empty files).

Fixes https://github.com/matrix-org/synapse/issues/7322
2020-04-27 15:01:03 +02:00
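
A minimal sketch of the guard, assuming PyYAML: an empty file parses to `None` rather than a dict, so the result's type is checked before merging:

```python
import logging
import yaml

logger = logging.getLogger(__name__)

def read_config_file(path: str) -> dict:
    with open(path) as f:
        data = yaml.safe_load(f)
    if not isinstance(data, dict):
        logger.warning("Skipping %s: parsed as %r, expected a mapping", path, type(data))
        return {}
    return data
```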
Patrick Cloke
7bfe0902ce Add documentation to the sample config about the templates for SSO. (#7343) 2020-04-24 15:03:49 -04:00
Patrick Cloke
33bceb7f70 Convert some of the federation handler methods to async/await. (#7338) 2020-04-24 14:36:38 -04:00
Richard van der Hoff
69a1ac00b2 Merge pull request #7337 from matrix-org/rav/fix_update_limit_assertion
Fix assertions being thrown by the EventsStream update function
2020-04-24 14:00:29 +01:00
Richard van der Hoff
3655eafe85 changelog 2020-04-24 13:59:21 +01:00
Richard van der Hoff
ce428a1abe Fix EventsStream raising assertions when it falls behind
Figuring out how to correctly limit updates from this stream without dropping
entries is far more complicated than just counting the number of rows being
returned. We need to consider each query separately and, if any one query hits
the limit, truncate the results from the others.

I think this also fixes some potentially long-standing bugs where events or
state changes could get missed if we hit the limit on either query.
2020-04-24 13:59:21 +01:00
Patrick Cloke
68384d96fd Merge branch 'master' into develop 2020-04-23 12:04:50 -04:00
Patrick Cloke
204664d1ad Merge tag 'v1.12.4'
Synapse v1.12.4

Features:

* Always send users their own device updates. (#7160)
* Add support for handling GET requests for account_data on a worker. (#7311)

Bugfixes:

* Fix a bug that prevented cross-signing with users on worker-mode synapses. (#7255)
* Do not treat display names as globs in push rules. (#7271)
* Fix a bug with cross-signing devices belonging to remote users who did not share a
  room with any user on the local homeserver. (#7289)
2020-04-23 12:03:33 -04:00
Patrick Cloke
ce9b62e13f 1.12.4 2020-04-23 10:59:10 -04:00
Richard van der Hoff
9cbdfb3a2f Make it clear that the limit for an update_function is a target 2020-04-23 15:45:12 +01:00
Richard van der Hoff
23b28266ac Remove 'limit' param from get_repl_stream_updates API
there doesn't seem to be much point in passing this limit all around, since
both sides agree it's meant to be 100.
2020-04-23 15:44:35 +01:00
Brendan Abolivier
2e3b9a0fcb Revert "Revert "Merge pull request #7315 from matrix-org/babolivier/request_token""
This reverts commit 1adf6a5587.
2020-04-23 11:23:53 +02:00
Brendan Abolivier
fb825759e3 Merge branch 'master' into develop 2020-04-23 11:23:33 +02:00
Brendan Abolivier
1adf6a5587 Revert "Merge pull request #7315 from matrix-org/babolivier/request_token"
This reverts commit 6f4319368b, reversing
changes made to 0d775fcc2d.
2020-04-23 11:23:10 +02:00
Brendan Abolivier
6f4319368b Merge pull request #7315 from matrix-org/babolivier/request_token
Config option to inhibit 3PID errors on /requestToken
2020-04-23 10:38:57 +02:00
Richard van der Hoff
71a1abb8a1 Stop the master relaying USER_SYNC for other workers (#7318)
Long story short: if we're handling presence on the current worker, we shouldn't be sending USER_SYNC commands over replication.

In an attempt to figure out what is going on here, I ended up refactoring some bits of the presencehandler code, so the first 4 commits here are non-functional refactors to move this code slightly closer to sanity. (There's still plenty to do here :/). Suggest reviewing individual commits.

Fixes (I hope) #7257.
2020-04-22 22:39:04 +01:00
Brendan Abolivier
69ad7cc13b Config option to inhibit 3PID errors on /requestToken
Adds a request_token_inhibit_errors configuration flag (disabled by
default) which, if enabled, changes the behaviour of all /requestToken
endpoints so that they return a 200 and a fake sid if the 3PID was/was
not found associated with an account (depending on the endpoint),
instead of an error.

Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2020-04-22 23:38:42 +02:00
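
The corresponding `homeserver.yaml` sketch:

```yaml
# When enabled, /requestToken endpoints return a 200 and a fake sid rather
# than an error that reveals whether the 3PID is registered (default: false).
request_token_inhibit_errors: true
```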
Erik Johnston
841c581c40 Fix replication metrics when using redis (#7325) 2020-04-22 16:26:19 +01:00
Richard van der Hoff
f16beaa969 Merge tag 'v1.12.4rc1' into develop
Synapse 1.12.4rc1 (2020-04-22)
==============================

Features
--------

- Always send users their own device updates. ([\#7160](https://github.com/matrix-org/synapse/issues/7160))
- Add support for handling GET requests for `account_data` on a worker. ([\#7311](https://github.com/matrix-org/synapse/issues/7311))

Bugfixes
--------

- Fix a bug that prevented cross-signing with users on worker-mode synapses. ([\#7255](https://github.com/matrix-org/synapse/issues/7255))
- Do not treat display names as globs in push rules. ([\#7271](https://github.com/matrix-org/synapse/issues/7271))
- Fix a bug with cross-signing devices belonging to remote users who did not share a room with any user on the local homeserver. ([\#7289](https://github.com/matrix-org/synapse/issues/7289))
2020-04-22 14:54:08 +01:00
Richard van der Hoff
ba0aac5e44 formatting for the changelog 2020-04-22 14:50:51 +01:00
Richard van der Hoff
82d8b1dd1f Another go at fixing one-word commands (#7326)
I messed this up last time I tried (#7239 / e13c6c7).
2020-04-22 14:34:31 +01:00
Richard van der Hoff
dc8003f921 1.12.4rc1 2020-04-22 14:29:39 +01:00
Richard van der Hoff
83af1079d6 1.12.4rc1 2020-04-22 14:28:23 +01:00
Richard van der Hoff
7c7618c7e3 fix changelog name 2020-04-22 14:01:11 +01:00
Richard van der Hoff
13683a3a22 Extend StreamChangeCache to support multiple entities per stream ID (#7303)
First some background: StreamChangeCache is used to keep track of what "entities" have 
changed since a given stream ID. So for example, we might use it to keep track of when the last
to-device message for a given user was received [1], and hence whether we need to pull any to-device messages from the database on a sync [2].

Now, it turns out that StreamChangeCache didn't support more than one thing being changed at
a given stream_id (this was part of the problem with #7206). However, it's entirely valid to send
to-device messages to more than one user at a time.

As it turns out, this did in fact work, because *some* methods of StreamChangeCache coped
ok with having multiple things changing on the same stream ID, and it seems we never actually
use the methods which don't work on the stream change caches where we allow multiple
changes at the same stream ID. But that feels horribly fragile, hence: let's update
StreamChangeCache to properly support this, and add some typing and some more tests while
we're at it.

[1]: https://github.com/matrix-org/synapse/blob/release-v1.12.3/synapse/storage/data_stores/main/deviceinbox.py#L301
[2]: https://github.com/matrix-org/synapse/blob/release-v1.12.3/synapse/storage/data_stores/main/deviceinbox.py#L47-L51
2020-04-22 13:45:40 +01:00
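
A simplified sketch of the change: each stream id now maps to a set of entities, so two to-device messages sent at the same stream id both survive (`sortedcontainers` is already a Synapse dependency; method names abbreviated):

```python
from typing import Set

from sortedcontainers import SortedDict

class StreamChangeCache:
    def __init__(self) -> None:
        # stream id -> set of entities changed at that id (previously 1:1)
        self._cache: "SortedDict[int, Set[str]]" = SortedDict()

    def entity_has_changed(self, entity: str, stream_pos: int) -> None:
        self._cache.setdefault(stream_pos, set()).add(entity)

    def get_all_entities_changed(self, stream_pos: int):
        return [
            entity
            for key in self._cache.irange(minimum=stream_pos + 1)
            for entity in self._cache[key]
        ]
```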
Dirk Klimpel
6b6685db9f Extend room admin api with additional attributes (#7225) 2020-04-22 13:38:41 +01:00
Richard van der Hoff
2aa5bf13c8 Merge branch 'release-v1.12.4' into develop 2020-04-22 13:09:23 +01:00
Erik Johnston
51f7eaf908 Add ability to run replication protocol over redis. (#7040)
This is configured via the `redis` config options.
2020-04-22 13:07:41 +01:00
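
A minimal sketch of that config block (host/port shown with typical values; as noted above, no database id is needed since only pub/sub is used):

```yaml
redis:
  enabled: true
  host: localhost
  port: 6379
  # password: <secret>   # optional
```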
Patrick Cloke
51f358e2fe Do not treat display names as globs for push rules. (#7271) 2020-04-22 13:07:12 +01:00
Michael Kaye
5308239d5d Reduce logging verbosity of URL cache cleanup. (#7295) 2020-04-22 07:45:16 -04:00
Andrew Morgan
f89ad3b6df Query missing cross-signing keys on local sig upload (#7289) 2020-04-22 12:29:36 +01:00
Andrew Morgan
ff5604e7f1 import urllib.parse when using urllib.parse.quote (#7319) 2020-04-22 12:18:51 +01:00
Michael Kaye
336989a57f Reduce federation logging on success (#7321)
Splitting based on the response code means we can avoid double-logging information identical to that from line 164, while still logging at INFO if we don't get a good response and need to retry.
2020-04-22 11:18:18 +01:00
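
A sketch of the split: 2xx responses drop to DEBUG, while anything else stays at INFO because it implies a retry:

```python
import logging

logger = logging.getLogger(__name__)

def log_response(destination: str, code: int) -> None:
    if 200 <= code < 300:
        logger.debug("Got response %d from %s", code, destination)
    else:
        logger.info("Got response %d from %s; will retry", code, destination)
```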
Richard van der Hoff
556566f0b8 Fix changelog file
I updated the PR and forgot to update the changelog.
2020-04-21 13:20:16 +01:00
Richard van der Hoff
974c0d726a Support GET account_data requests on a worker (#7311) 2020-04-21 10:46:30 +01:00
Richard van der Hoff
461f01ad43 Merge branch 'release-v1.12.4' into develop 2020-04-20 17:55:39 +01:00
Richard van der Hoff
d41c8f6d4d Revert "Query missing cross-signing keys on local sig upload"
This was incorrectly merged to the release branch before it was ready.

This reverts commit 72fe2affb6.
2020-04-20 17:54:35 +01:00
Richard van der Hoff
a46ff43319 Merge branch 'release-v1.12.4' into develop 2020-04-20 17:42:27 +01:00
David Baker
40f79f58bf Always send the user updates to their own device list (#7160) 2020-04-20 17:20:38 +01:00
Lars Franke
13917232d5 Fix indention in generated config file (#7300)
Also adjust sample_config.yaml

Signed-off-by: Lars Franke <frcl@mailbox.org>
2020-04-20 16:51:27 +01:00
Patrick Cloke
f5ea8b48bd Reject unknown UI auth sessions (instead of silently generating a new one) (#7268) 2020-04-20 08:54:42 -04:00
Richard van der Hoff
0f8f02bc39 On catchup, process each row with its own stream id (#7286)
Other parts of the code (such as the StreamChangeCache) assume that there will
not be multiple changes with the same stream id.

This code was introduced in #7024, and I hope this fixes #7206.
2020-04-20 11:43:29 +01:00
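
A sketch of the per-id processing (row shape and downstream consumer are hypothetical): catchup rows are handled one stream id at a time, so consumers such as StreamChangeCache never see two changes filed under a single id:

```python
from itertools import groupby

def process_rows(stream_id, rows):  # hypothetical downstream consumer
    print(stream_id, rows)

def handle_catchup(rows):  # rows: list of (stream_id, row), sorted by stream_id
    for stream_id, batch in groupby(rows, key=lambda r: r[0]):
        process_rows(stream_id, [row for _, row in batch])

handle_catchup([(1, "a"), (2, "b"), (2, "c")])
```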
Patrick Cloke
054c231e58 Use a template for the SSO success page to allow for customization. (#7279) 2020-04-17 13:34:55 -04:00
James
701788a227 Added explicit Python build tools to snap requirements (#7213)
Signed-off-by: James Hebden <james@ec0.io>
2020-04-17 17:28:00 +01:00
Andrew Morgan
72fe2affb6 Query missing cross-signing keys on local sig upload
Add changelog

Save retrieved keys to the db

lint

Fix and de-brittle remote result dict processing

Use query_user_devices instead, assume only master, self_signing key types

Make changelog more useful

Remove very specific exception handling

Wrap get_verify_key_from_cross_signing_key in a try/except

Note that _get_e2e_cross_signing_verify_key can raise a SynapseError

lint

Add comment explaining why this is useful

Only fetch master and self_signing key types

Fix log statements, docstrings

Remove extraneous items from remote query try/except

lint

Factor key retrieval out into a separate function

Send device updates, modeled after SigningKeyEduUpdater._handle_signing_key_updates

Update method docstring
2020-04-17 15:47:49 +01:00
Richard van der Hoff
67ff7b8ba0 Improve type checking in replication.tcp.Stream (#7291)
The general idea here is to get rid of the type: ignore annotations on all of the current_token and update_function assignments, which would have caught #7290.

After a bit of experimentation, it seems like the least-awful way to do this is to pass the offending functions in as parameters to the Stream constructor. Unfortunately that means that the concrete implementations no longer have the same constructor signature as Stream itself, which means that it gets hard to correctly annotate STREAMS_MAP.

I've also introduced a couple of new types, to take out some duplication.
2020-04-17 14:49:55 +01:00
nataraj-hates-MS-for-stealing-github
0d775fcc2d Improve example TURN configuration in documentation (#7284) 2020-04-17 08:04:23 -04:00
Tristan Lins
c07fca9e2f Clarify the comments for media_storage_providers options (#7272) 2020-04-17 07:09:33 -04:00
Patrick Cloke
01294e6b3a Do not treat display names as globs for push rules. (#7271) 2020-04-16 10:52:55 -04:00
Richard van der Hoff
d7d42387f5 Fix 'generator object is not subscriptable' error (#7290)
Some of the query functions return generators rather than lists, so we can't
index into the result. Happily we already have a copy of the results.

(think this was introduced in #7024)
2020-04-16 14:37:06 +01:00
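
The pattern in miniature: materialize the generator before indexing.

```python
def query_rows():
    # Stand-in for a storage function that yields rows rather than
    # returning a list.
    yield from ((token, "event%d" % token) for token in range(3))

rows = list(query_rows())  # a generator has no rows[-1]; listify first
last_token = rows[-1][0]
```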
Patrick Cloke
eed7c5b89e Convert auth handler to async/await (#7261) 2020-04-15 12:40:18 -04:00
Patrick Cloke
17a2433b0d Add notes to the changelog about an additional SSO template. (#7259) 2020-04-15 10:36:33 -04:00
Andrew Morgan
a48138784e Allow specifying the value of Accept-Language header for URL previews (#7265) 2020-04-15 13:35:29 +01:00
Richard van der Hoff
6fb63d6426 Add some warnings and notes about slow upgrade (#7275)
Fixes #7144.
2020-04-15 12:40:43 +01:00
Richard van der Hoff
f2049a8d21 Fix a potentially-huge sql query (#7274)
We could end up looking up tens of thousands of events, which could cause large
amounts of data to be logged to the postgres log.
2020-04-15 10:16:35 +01:00
Zay11Zay
f1097e7720 Fix the parameters of a test fixture (#7243) 2020-04-14 15:37:28 -04:00
Andrew Morgan
ac6a84818f Only register devices edu handler on the master process (#7255) 2020-04-14 11:36:24 +01:00
Andrew Morgan
fef82f4e22 Only register devices edu handler on the master process (#7255) 2020-04-14 10:09:58 +01:00
Ryan Hovland
4a0dadafbe Add setting to nginx configuration to allow larger file uploads (#7251) 2020-04-13 17:23:36 +01:00
PeerD
f41b742161 Check on room creation if the user is allowed to publish the room to the room directory. (#7260) 2020-04-13 07:42:32 -04:00
Matthew Hodgson
118b58f0c9 typos 2020-04-11 20:55:18 +01:00
Andrew Morgan
ac978ab3da Default PL100 to enable encryption in a room (#7230) 2020-04-09 18:45:38 +01:00
Patrick Cloke
b85d7652ff Do not allow a deactivated user to login via SSO. (#7240) 2020-04-09 13:28:13 -04:00
Andrew Morgan
967f99b9f8 Unblacklist /upgrade creates a new room (#7228)
This commit was originally merged in #7228 but reverted in #7254 as the
associated sytest was not ready yet. Now that it is, we can merge this
again.
2020-04-09 13:53:35 +01:00
Andrew Morgan
a026bdaab7 Add matrix-synapse-shared-secret-auth as an example password provider (#7248) 2020-04-09 12:49:05 +01:00
Andrew Morgan
e3cd28d8a4 Revert "Unblacklist /upgrade creates a new room (#7228)" (#7254)
This reverts commit 59f0ca8b87.
2020-04-09 12:47:16 +01:00
Andrew Morgan
59f0ca8b87 Unblacklist /upgrade creates a new room (#7228) 2020-04-09 12:44:48 +01:00
Richard van der Hoff
5a709630bf Fix --help commandline argument (#7249)
I don't really remember why this was so complicated; I think it dates
back to the time when we had to instantiate the Config classes before
we could call `add_arguments` - ie before #5597. In any case, I don't
think there's a good reason for it any more, and the impact of it
being complicated is that `--help` doesn't work correctly.
2020-04-09 12:44:37 +01:00
Andrew Morgan
7f7f9968b3 Make it clearer to the user that they haven't overridden onLogin/Register (#7237) 2020-04-09 12:38:38 +01:00
Andrew Morgan
55d46da59a Upgrade jQuery to 3.x on fallback login/registration screens (#7236) 2020-04-09 12:23:30 +01:00
Richard van der Hoff
24722de7c8 Fix bad merge of CHANGES.md 2020-04-08 17:41:46 +01:00
Richard van der Hoff
23f8d285eb Remove redundant checks on daemonize from synctl (#7233)
We pass --daemonize on the commandline, which (since at least #4853) overrides
whatever is in the config file, so there is no need for it to be set in the config
file.
2020-04-08 11:59:47 +01:00
Richard van der Hoff
cae4121484 Make systemd-with-workers doc official (#7234)
Simplify and update this documentation, and make it part of the core dist.
2020-04-08 11:59:26 +01:00
Brendan Abolivier
c11d24d48c Fix changelog for #7235 2020-04-08 11:59:51 +02:00
Andrew Morgan
29b7e22b93 Add documentation to password_providers config option (#7238) 2020-04-08 00:46:50 +01:00
Richard van der Hoff
f31e65a749 bg update to clear out duplicate outbound_device_list_pokes (#7193)
We seem to have some duplicates, which could do with being cleared out.
2020-04-07 23:06:39 +01:00
Richard van der Hoff
aedeedc206 Merge pull request #7239 from matrix-org/rav/replication_cleanup
Miscellaneous cleanups to replication code
2020-04-07 22:00:55 +01:00
Patrick Cloke
1722b8a527 Convert delete_url_cache_media to async/await. (#7241) 2020-04-07 21:56:34 +01:00
Patrick Cloke
d78cb31588 Add typing information to federation_server. (#7219) 2020-04-07 15:03:23 -04:00
Richard van der Hoff
bd2ea3432b changelog 2020-04-07 17:44:51 +01:00
Richard van der Hoff
e13c6c7a96 Handle one-word replication commands correctly
`REPLICATE` is now a valid command, and it's nice if you can issue it from the
console without remembering to call it `REPLICATE ` with a trailing space.
2020-04-07 17:43:46 +01:00
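
A tiny illustration of why `str.partition` suits this: it yields an empty remainder for a bare command name, where parsing that assumes a name followed by a space mis-handles "REPLICATE" sent without a trailing space:

```python
line = "REPLICATE"
name, _sep, rest = line.partition(" ")
assert (name, rest) == ("REPLICATE", "")
```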
Richard van der Hoff
c3e4b4edb2 Fix warnings about not calling superclass constructor
Separate `SimpleCommand` from `Command`, so that things which don't want to use
the `data` property don't have to, and thus fix the warnings PyCharm was giving
me about not calling `__init__` in the base class.
2020-04-07 17:40:22 +01:00
Richard van der Hoff
6a519a0ca0 Remove vestigal references to SYNC replication command
We've ripped pretty much all of this out: let's remove the remains.
2020-04-07 17:40:07 +01:00
Brendan Abolivier
ec5ac8e2b1 Fix typo in the login fallback javascript (#7235)
* Fix typo in the login fallback javascript

* Changelog
2020-04-07 18:31:50 +02:00
Richard van der Hoff
2e105c156b Remove sent outbound device list pokes from the database (#7192)
They just get in the way.
2020-04-07 15:19:19 +01:00
Erik Johnston
ce72355d7f Fix race in replication (#7226)
Fixes a race between handling `POSITION` and `RDATA` commands. We do this by simply linearizing handling of them.
2020-04-07 11:01:04 +01:00
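
A hypothetical sketch of that linearization using Synapse's `Linearizer` (handler shape invented):

```python
from synapse.util.async_helpers import Linearizer

class ReplicationCommandHandler:
    def __init__(self) -> None:
        self._linearizer = Linearizer("replication_position_rdata")

    async def handle_command(self, stream_name: str, cmd) -> None:
        # POSITION and RDATA for a stream queue behind each other here,
        # so their handling can no longer interleave.
        with await self._linearizer.queue(stream_name):
            await self._process(cmd)
```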
Erik Johnston
82498ee901 Move server command handling out of TCP protocol (#7187)
This completes the merging of server and client command processing.
2020-04-07 10:51:07 +01:00
Patrick Cloke
71953139d1 Add information about .well-known to Debian installation. (#7227) 2020-04-06 17:02:44 -04:00
Andrew Morgan
4b0f00ad0c Remove stream before/after debug log lines (#7207) 2020-04-06 12:40:34 +01:00
Andrew Morgan
b21000a44f Improve error responses when a remote server doesn't allow you to access its public rooms list (#6899) 2020-04-06 12:35:30 +01:00
Erik Johnston
5016b162fc Move client command handling out of TCP protocol (#7185)
The aim here is to move the command handling out of the TCP protocol classes and to also merge the client and server command handling (so that we can reuse them for redis protocol). This PR simply moves the client paths to the new `ReplicationCommandHandler`, a future PR will move the server paths too.
2020-04-06 09:58:42 +01:00
Brendan Abolivier
d73bf18d13 Server notices: Dissociate room creation/lookup from invite (#7199)
Fixes #6815

Before figuring out whether we should alert a user on MAU, we call get_notice_room_for_user to get some info on the existing server notices room for this user. This function, if the room doesn't exist, creates it and invites the user in it. This means that, if we decide later that no server notice is needed, the user gets invited in a room with no message in it. This happens at every restart of the server, since the room ID returned by get_notice_room_for_user is cached.

This PR fixes that by moving the inviting bit to a dedicated function, that's only called when the server actually needs to send a notice to the user. A potential issue with this approach is that the room that's created by get_notice_room_for_user doesn't match how that same function looks for an existing room (i.e. it creates a room that doesn't have an invite or a join for the current user in it, so it could lead to a new room being created each time a user syncs), but I'm not sure this is a problem given it's cached until the server restarts, so that function won't run very often.

It also renames get_notice_room_for_user into get_or_create_notice_room_for_user to make what it does clearer.
2020-04-04 17:27:45 +02:00
Patrick Cloke
694d8bed0e Support CAS in UI Auth flows. (#7186) 2020-04-03 15:35:05 -04:00
Martin Milata
b0db928c63 Extend web_client_location to handle absolute URLs (#7006)
Log warning when filesystem path is used.

Signed-off-by: Martin Milata <martin@martinmilata.cz>
2020-04-03 11:57:34 -04:00
Amber Brown
334bfdbc90 Add some benchmarks for LruCache (#6446) 2020-04-03 16:31:52 +01:00
Andrew Morgan
07b88c546d Convert http.HTTPStatus objects to their int equivalent (#7188) 2020-04-03 14:26:07 +01:00
Richard van der Hoff
0f05fd1530 Reduce the number of calls to resource.getrusage (#7183)
Let's just call `getrusage` once on each logcontext change, rather than twice.
2020-04-03 13:21:30 +01:00
Richard van der Hoff
fd4c975b5b Merge pull request #7190 from matrix-org/rav/one_bg_update_at_a_time
Only run one background update at a time
2020-04-03 13:17:30 +01:00
Richard van der Hoff
bae32740da Remove some run_in_background calls in replication code (#7203)
By running this stuff with `run_in_background`, it won't be correctly reported
against the relevant CPU usage stats.

Fixes #7202
2020-04-03 12:29:30 +01:00
Richard van der Hoff
6dd6a3557c Merge branch 'master' into develop 2020-04-03 11:29:43 +01:00
Richard van der Hoff
0cbb4808ed Revert "Revert "Merge pull request #7153 from matrix-org/babolivier/sso_whitelist_login_fallback""
This reverts commit 0122ef1037.
2020-04-03 11:28:49 +01:00
Richard van der Hoff
14a8e71297 Revert "Revert "Improve the UX of the login fallback when using SSO (#7152)""
This reverts commit 8d4cbdeaa9.
2020-04-03 11:28:43 +01:00
Richard van der Hoff
883ac4b1bb Merge tag 'v1.12.3'
Synapse 1.12.3 (2020-04-03)
===========================

- Remove the pin to Pillow 7.0 which was introduced in Synapse 1.12.2, and
correctly fix the issue with building the Debian packages. ([\#7212](https://github.com/matrix-org/synapse/issues/7212))
2020-04-03 11:25:56 +01:00
Richard van der Hoff
cb40b0cb80 Merge tag 'v1.12.2'
Synapse 1.12.2 (2020-04-02)
===========================

This release fixes [an
issue](https://github.com/matrix-org/synapse/issues/7208) with building the
debian packages.

No other significant changes since 1.12.1.
2020-04-03 11:25:42 +01:00
Richard van der Hoff
0122ef1037 Revert "Merge pull request #7153 from matrix-org/babolivier/sso_whitelist_login_fallback"
This was incorrectly merged to master.

This reverts commit 319c41f573, reversing
changes made to 229eb81498.
2020-04-03 11:17:39 +01:00
Richard van der Hoff
8d4cbdeaa9 Revert "Improve the UX of the login fallback when using SSO (#7152)"
This was incorrectly merged to `master` instead of develop.

This reverts commit 90246344e3.
2020-04-03 11:16:41 +01:00
Richard van der Hoff
553c8a9b6b tweak changelog 2020-04-03 11:00:57 +01:00
Richard van der Hoff
29ce90358c 1.12.3 2020-04-03 10:57:07 +01:00
Richard van der Hoff
fcc2de7a0c Update docstring per review comments 2020-04-03 10:51:32 +01:00
Richard van der Hoff
daa1ac89a0 Fix device list update stream ids going backward (#7158)
Occasionally we could get a federation device list update transaction which
looked like:

```
[
    {'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D2', 'prev_id': [], 'stream_id': 12, 'deleted': True}},
    {'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D1', 'prev_id': [12], 'stream_id': 11, 'deleted': True}},
    {'edu_type': 'm.device_list_update', 'content': {'user_id': '@user:test', 'device_id': 'D3', 'prev_id': [11], 'stream_id': 13, 'deleted': True}}
]
```

Having `stream_ids` which are lower than `prev_ids` looks odd. It might work
(I'm not actually sure), but in any case it doesn't seem like a reasonable
thing to expect other implementations to support.
2020-04-03 10:40:22 +01:00
Richard van der Hoff
6d7cec7a57 Fix the debian build in a better way. (#7212) 2020-04-03 10:23:36 +01:00
Andrew Morgan
f7d6e849b3 Fix changelog wording 2020-04-02 19:08:06 +01:00
Andrew Morgan
08edefe694 1.12.2 2020-04-02 19:02:45 +01:00
Andrew Morgan
ec56620ff6 Pin Pillow>=4.3.0,<7.1.0 to fix dep issue 2020-04-02 18:58:08 +01:00
Andrew Morgan
b730480abb 1.12.1 2020-04-02 18:57:31 +01:00
Andrew Morgan
61bb834364 Merge branch 'master' into develop
* master:
  1.12.1
  Note where bugs were introduced
  1.12.1rc1
  Newsfile
  Rewrite changelog
  Add changelog
  Only import sqlite3 when type checking
  Fix another instance
  Only setdefault for signatures if device has key_json
  Fix starting workers when federation sending not split out.
  Attempt to clarify Python version requirements (#7161)
  Improve the UX of the login fallback when using SSO (#7152)
  Update the wording of the config comment
  Lint
  Changelog
  Regenerate sample config
  Whitelist the login fallback by default for SSO
2020-04-02 16:24:38 +01:00
Andrew Morgan
84a901cf0c Merge tag 'v1.12.1'
Synapse 1.12.1 (2020-04-02)
===========================

No significant changes since 1.12.1rc1.

Synapse 1.12.1rc1 (2020-03-31)
==============================

Bugfixes
--------

- Fix starting workers when federation sending not split out. ([\#7133](https://github.com/matrix-org/synapse/issues/7133)). Introduced in v1.12.0.
- Avoid importing `sqlite3` when using the postgres backend. Contributed by David Vo. ([\#7155](https://github.com/matrix-org/synapse/issues/7155)). Introduced in v1.12.0rc1.
- Fix a bug which could cause outbound federation traffic to stop working if a client uploaded an incorrect e2e device signature. ([\#7177](https://github.com/matrix-org/synapse/issues/7177)). Introduced in v1.11.0.

* tag 'v1.12.1':
  1.12.1
  Note where bugs were introduced
  1.12.1rc1
  Newsfile
  Rewrite changelog
  Add changelog
  Only import sqlite3 when type checking
  Fix another instance
  Only setdefault for signatures if device has key_json
  Fix starting workers when federation sending not split out.
2020-04-02 16:20:54 +01:00
Richard van der Hoff
af47264b78 review comment 2020-04-02 12:04:55 +01:00
Andrew Morgan
529462b5c0 1.12.1 2020-04-02 11:32:16 +01:00
Patrick Cloke
b9930d24a0 Support SAML in the user interactive authentication workflow. (#7102) 2020-04-01 08:48:00 -04:00
Patrick Cloke
468dcc767b Allow admins to create aliases when they are not in the room (#7191) 2020-04-01 08:27:05 -04:00
siroccal
250f87d0de Update postgres.md (#7119) 2020-04-01 12:44:51 +01:00
Erik Johnston
dfa0782254 Remove connections per replication stream metric. (#7195)
This broke in a recent PR (#7024) and is no longer useful due to all
replication clients implicitly subscribing to all streams, so let's
just remove it.
2020-04-01 10:40:46 +01:00
Richard van der Hoff
b413ab8aa6 changelog 2020-03-31 17:44:36 +01:00
Richard van der Hoff
7b608cf468 Only run one background update at a time 2020-03-31 17:43:58 +01:00
Richard van der Hoff
b4c2234232 Make do_next_background_update return a bool
returning a None or an int that we don't use is confusing.
2020-03-31 17:43:58 +01:00
Richard van der Hoff
51f4d52cb4 Set a logging context while running the bg updates
This mostly just reduces the amount of "running from sentinel context" spam
during unittest setup.
2020-03-31 17:43:58 +01:00
Richard van der Hoff
26d17b9bdc Make has_completed_background_updates async
(Almost) everywhere that uses it is happy with an awaitable.
2020-03-31 17:43:58 +01:00
Richard van der Hoff
cfe8c8ab8e Remove unused start_background_update
This was only used in a unit test, so let's just inline it in the test.
2020-03-31 17:24:06 +01:00
Jostein Kjønigsen
2e826cd80c Improve TURN documentation. (#7167) 2020-03-31 15:50:48 +01:00
Richard van der Hoff
60adcbed91 Fix "'NoneType' has no attribute start|stop" logcontext errors (#7181)
Fixes #7179.
2020-03-31 15:18:41 +01:00
Karlinde
fe1580bfd9 Fill in the 'default' field for user-defined push rules (#6639)
Signed-off-by: Karl Linderhed <git@karlinde.se>
2020-03-31 15:08:56 +01:00
Andrew Morgan
b994e86e35 Only setdefault for signatures if device has key_json (#7177) 2020-03-31 14:51:22 +01:00
Patrick Cloke
0a7b0882c1 Fix use of async/await in media code (#7184) 2020-03-31 09:33:02 -04:00
Richard van der Hoff
62a7289133 Fix a bug which could cause incorrect 'cyclic dependency' error. (#7178)
If there was an exception setting up one of the attributes of the Homeserver
god object, then future attempts to fetch that attribute would raise a
confusing "Cyclic dependency" error. Let's make sure that we clear the
`building` flag so that we just get the original exception.

Ref: #7169
2020-03-31 13:09:16 +01:00
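
A hypothetical sketch of the fix: clear the "building" marker even when the builder raises, so a retry re-raises the real error instead of a spurious "Cyclic dependency" (caching of the built result is elided):

```python
class DependencyBuilder:
    def __init__(self, builders):
        self._builders = builders
        self._building = {}

    def get(self, name):
        if name in self._building:
            raise ValueError("Cyclic dependency while building %s" % (name,))
        self._building[name] = True
        try:
            return self._builders[name]()
        finally:
            del self._building[name]  # previously left set after an exception
```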
Andrew Morgan
677d0edbac Note where bugs were introduced 2020-03-31 11:58:48 +01:00
Andrew Morgan
3fb9fc40f5 1.12.1rc1 2020-03-31 11:49:43 +01:00
Erik Johnston
5d99bde788 Newsfile 2020-03-31 11:30:34 +01:00
Andrew Morgan
2cf115f0ea Rewrite changelog 2020-03-31 11:30:16 +01:00
Andrew Morgan
2cb38ca871 Add changelog 2020-03-31 11:30:05 +01:00
David Vo
5bd2b27525 Only import sqlite3 when type checking
Fixes: #7127
Signed-off-by: David Vo <david@vovo.id.au>
2020-03-31 11:27:17 +01:00
Andrew Morgan
b5d0b038f4 Fix another instance 2020-03-31 11:26:37 +01:00
Andrew Morgan
b5ecafd157 Only setdefault for signatures if device has key_json 2020-03-31 11:26:29 +01:00
Erik Johnston
db098ec994 Fix starting workers when federation sending not split out. 2020-03-31 11:25:21 +01:00
Richard van der Hoff
7966a1cde9 Rewrite prune_old_outbound_device_pokes for efficiency (#7159)
make sure we clear out all but one update for the user
2020-03-30 19:06:52 +01:00
Andrew Morgan
7042840b32 Transfer alias mappings when joining an upgraded room (#6946) 2020-03-30 17:53:25 +01:00
Andrew Morgan
d9f29f8dae Fix a small typo in the metrics_flags config option. (#7171) 2020-03-30 17:38:21 +01:00
Erik Johnston
4f21c33be3 Remove usage of "conn_id" for presence. (#7128)
* Remove `conn_id` usage for UserSyncCommand.

Each TCP replication connection is assigned a "conn_id", which is used
to give an ID to a remotely connected worker. In a redis world, there
will no longer be a one-to-one mapping between connection and instance,
so instead we need to replace such usages with an ID generated by the
remote instances and included in the replication commands.

This really only affects UserSyncCommand.

* Add CLEAR_USER_SYNCS command that is sent on shutdown.

This should help with the case where a synchrotron gets restarted
gracefully, rather than rely on 5 minute timeout.
2020-03-30 16:37:24 +01:00
David Baker
07569f25d1 Merge pull request #7160 from matrix-org/dbkr/always_send_own_device_list_updates
Always send the user updates to their own device list
2020-03-30 14:34:28 +01:00
Andrew Morgan
104844c1e1 Add explanatory comment 2020-03-30 14:00:11 +01:00
Richard van der Hoff
6486c96b65 Merge pull request #7157 from matrix-org/rev.outbound_device_pokes_tests
Add tests for outbound device pokes
2020-03-30 13:59:07 +01:00
Richard van der Hoff
e577c5d607 Attempt to clarify Python version requirements (#7161)
In particular, we depend on `typing.TYPE_CHECKING`, which is only present in
3.5.2.

It turns out that Ubuntu Xenial, despite having a package called `python 3
(3.5.1-3)`, actually has python 3.5.2, so I think this is fine.
2020-03-30 13:55:01 +01:00
Patrick Cloke
c5f89fba55 Add developer documentation for running a local CAS server (#7147) 2020-03-30 07:28:42 -04:00
David Baker
7406477525 black 2020-03-30 10:18:33 +01:00
David Baker
9fc588e6dc Just add own user ID to the list we track device changes for 2020-03-30 10:11:26 +01:00
Richard van der Hoff
b7da598a61 Always whitelist the login fallback for SSO (#7153)
That fallback sets the redirect URL to itself (so it can process the login
token then return gracefully to the client). This would make it pointless to
ask the user for confirmation, since the URL the confirmation page would be
showing wouldn't be the client's.
2020-03-27 20:24:52 +00:00
Brendan Abolivier
84f7eaed16 Improve the UX of the login fallback when using SSO (#7152)
* Don't show the login forms if we're currently logging in with a
  password or a token.
* Directly submit the SSO login form, showing only a spinner to the
  user, in order to eliminate the clunkiness of SSO through this
  fallback.
2020-03-27 20:19:54 +00:00
Dirk Klimpel
fb69690761 Admin API to join users to a room. (#7051) 2020-03-27 19:16:43 +00:00
Dirk Klimpel
8327eb9280 Add options to prevent users from changing their profile. (#7096) 2020-03-27 19:15:23 +00:00
txt-file
ae219fb411 update debian installation instructions to recommend installing virtualenv instead of python3-virtualenv (#6892)
* change debian package from python3-virtualenv to virtualenv

The virtualenv package is needed for the virtualenv command. The
virtualenv package depends on python3-virtualenv (at least since
debian jessie) so there is no need to specify python3-virtualenv
additionally.

Signed-off-by: Vieno Hakkerinen <vieno@hakkerinen.eu>

* Add changelog

Co-authored-by: Andrew Morgan <andrew@amorgan.xyz>
2020-03-27 15:02:00 +00:00
Brendan Abolivier
90246344e3 Improve the UX of the login fallback when using SSO (#7152)
* Don't show the login forms if we're currently logging in with a
  password or a token.
* Directly submit the SSO login form, showing only a spinner to the
  user, in order to eliminate the clunkiness of SSO through this
  fallback.
2020-03-27 15:44:13 +01:00
Brendan Abolivier
319c41f573 Merge pull request #7153 from matrix-org/babolivier/sso_whitelist_login_fallback
Always whitelist the login fallback for SSO
2020-03-27 15:34:41 +01:00
Brendan Abolivier
63aea691a7 Update the wording of the config comment 2020-03-27 15:09:12 +01:00
Andrew Morgan
12aa5a7fa7 Ensure is_verified on /_matrix/client/r0/room_keys/keys is a boolean (#7150) 2020-03-27 13:30:22 +00:00
David Vo
fbf0782c63 Only import sqlite3 when type checking (#7155)
Fixes: #7127
Signed-off-by: David Vo <david@vovo.id.au>
2020-03-27 13:20:00 +00:00
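
The pattern in brief: the import only runs under a type checker, so a postgres-only install never loads `sqlite3` at runtime (the function below is an invented example):

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    import sqlite3  # evaluated by mypy only, never at runtime

def row_to_dict(row: "sqlite3.Row") -> dict:  # quoted: resolved lazily
    return dict(zip(row.keys(), row))
```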
David Baker
16ee97988a Fix undefined variable & remove debug logging 2020-03-27 12:39:54 +00:00
David Baker
a07e03ce90 black 2020-03-27 12:35:32 +00:00
David Baker
d9965fb8d6 changelog 2020-03-27 12:30:59 +00:00
David Baker
09cc058a4c Always send the user updates to their own device list
This will allow clients to notify users about new devices even if
the user isn't in any rooms (yet).
2020-03-27 12:26:47 +00:00
Richard van der Hoff
665630fcaa Add tests for outbound device pokes 2020-03-27 12:01:37 +00:00
Jason Robinson
7496d3d2f6 Merge pull request #7151 from matrix-org/jaywink/saml-redirect-fix
Allow RedirectResponse in SAML response handler
2020-03-26 22:10:31 +02:00
Patrick Cloke
fa4f12102d Refactor the CAS code (move the logic out of the REST layer to a handler) (#7136) 2020-03-26 15:05:26 -04:00
Jason Robinson
55ca6cf88c Update changelog.d/7151.bugfix
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-03-26 20:35:50 +02:00
Brendan Abolivier
bdf3cdaec8 Lint 2020-03-26 19:06:44 +01:00
Brendan Abolivier
48b37f61ce Changelog 2020-03-26 19:02:59 +01:00
Brendan Abolivier
7083147961 Regenerate sample config 2020-03-26 19:01:54 +01:00
Brendan Abolivier
c2ab0b3066 Whitelist the login fallback by default for SSO 2020-03-26 18:58:58 +01:00
Nektarios Katakis
825fb5d0a5 Don't default to an invalid sqlite config if no database configuration is provided (#6573) 2020-03-26 17:13:14 +00:00
Jason Robinson
060e7dce09 Allow RedirectResponse in SAML response handler
Allow custom SAML handlers to redirect after processing an auth response.

Fixes #7149

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2020-03-26 19:02:35 +02:00
Dirk Klimpel
e8e2ddb60a Allow server admins to define and enforce a password policy (MSC2000). (#7118) 2020-03-26 16:51:13 +00:00
Patrick Cloke
1c1242acba Validate that the session is not modified during UI-Auth (#7068) 2020-03-26 07:39:34 -04:00
Aaron Raimist
6ca5e56fd1 Remove unused captcha_bypass_secret option (#7137)
Signed-off-by: Aaron Raimist <aaron@raim.ist>
2020-03-25 17:49:34 +00:00
Erik Johnston
4cff617df1 Move catchup of replication streams to worker. (#7024)
This changes the replication protocol so that the server does not send down `RDATA` for rows that happened before the client connected. Instead, the server will send a `POSITION` and clients then query the database (or master out of band) to get up to date.
2020-03-25 14:54:01 +00:00
Andrew Morgan
7bab642707 Various cleanups to INSTALL.md (#7141) 2020-03-25 13:56:40 +00:00
Erik Johnston
b1cfaf08af Merge pull request #7133 from matrix-org/erikj/fix_worker_startup
Fix starting workers when federation sending not split out.
2020-03-25 09:42:39 +00:00
Richard van der Hoff
28d9d6e8a9 Remove spurious "name" parameter to default_config
this is never set to anything other than "test", and is a source of unnecessary
boilerplate.
2020-03-24 18:33:49 +00:00
Richard van der Hoff
39230d2171 Clean up some LoggingContext stuff (#7120)
* Pull Sentinel out of LoggingContext

... and drop a few unnecessary references to it

* Factor out LoggingContext.current_context

move `current_context` and `set_context` out to top-level functions.

Mostly this means that I can more easily trace what's actually referring to
LoggingContext, but I think it's generally neater.

* move copy-to-parent into `stop`

this really just makes `start` and `stop` more symmetric. It also means that it
behaves correctly if you manually `set_log_context` rather than using the
context manager.

* Replace `LoggingContext.alive` with `finished`

Turn `alive` into `finished` and make it a bit better defined.
2020-03-24 14:45:33 +00:00
Naugrimm
1fcf9c6f95 Fix CAS redirect url (#6634)
Build the same service URL when requesting the CAS ticket and when calling the proxyValidate URL.
2020-03-24 11:59:04 +00:00
Erik Johnston
d6828c129f Newsfile 2020-03-24 10:36:44 +00:00
Erik Johnston
c816072d47 Fix starting workers when federation sending not split out. 2020-03-24 10:35:00 +00:00
Patrick Cloke
190ab593b7 Use the proper error code when a canonical alias that does not exist is used. (#7109) 2020-03-23 15:21:54 -04:00
Kartikaya Gupta (kats)
e341518f92 Update pre-built package name for FreeBSD. (#7107)
Signed-off-by: Kartikaya Gupta <kats@trevize.staktrace.com>
2020-03-23 15:31:02 +00:00
Richard van der Hoff
a564b92d37 Convert *StreamRow classes to inner classes (#7116)
This just helps keep the rows closer to their streams, so that it's easier to
see what the format of each stream is.
2020-03-23 13:59:11 +00:00
Richard van der Hoff
5126cb1253 Merge branch 'master' into develop 2020-03-23 13:54:29 +00:00
Richard van der Hoff
229eb81498 Merge tag 'v1.12.0'
Synapse 1.12.0 (2020-03-23)
===========================

No significant changes since 1.12.0rc1.

Debian packages and Docker images are rebuilt using the latest versions of
dependency libraries, including Twisted 20.3.0. **Please see security advisory
below**.

Security advisory
-----------------

Synapse may be vulnerable to request-smuggling attacks when it is used with a
reverse-proxy. The vulnerabilities are fixed in Twisted 20.3.0, and are
described in
[CVE-2020-10108](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10108)
and
[CVE-2020-10109](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-10109).
For a good introduction to this class of request-smuggling attacks, see
https://portswigger.net/research/http-desync-attacks-request-smuggling-reborn.

We are not aware of these vulnerabilities being exploited in the wild, and
do not believe that they are exploitable with current versions of any reverse
proxies. Nevertheless, we recommend that all Synapse administrators ensure that
they have the latest versions of the Twisted library to ensure that their
installation remains secure.

* Administrators using the [`matrix.org` Docker
  image](https://hub.docker.com/r/matrixdotorg/synapse/) or the [Debian/Ubuntu
  packages from
  `matrix.org`](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#matrixorg-packages)
  should ensure that they have version 1.12.0 installed: these images include
  Twisted 20.3.0.
* Administrators who have [installed Synapse from
  source](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#installing-from-source)
  should upgrade Twisted within their virtualenv by running:
  ```sh
  <path_to_virtualenv>/bin/pip install 'Twisted>=20.3.0'
  ```
* Administrators who have installed Synapse from distribution packages should
  consult the information from their distributions.

The `matrix.org` Synapse instance was not vulnerable to these vulnerabilities.

Advance notice of change to the default `git` branch for Synapse
----------------------------------------------------------------

Currently, the default `git` branch for Synapse is `master`, which tracks the
latest release.

After the release of Synapse 1.13.0, we intend to change this default to
`develop`, which is the development tip. This is more consistent with common
practice and modern `git` usage.

Although we try to keep `develop` in a stable state, there may be occasions
where regressions creep in. Developers and distributors who have scripts which
run builds using the default branch of `Synapse` should therefore consider
pinning their scripts to `master`.

Synapse 1.12.0rc1 (2020-03-19)
==============================

Features
--------

- Changes related to room alias management ([MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432)):
  - Publishing/removing a room from the room directory now requires the user to have a power level capable of modifying the canonical alias, instead of the room aliases. ([\#6965](https://github.com/matrix-org/synapse/issues/6965))
  - Validate the `alt_aliases` property of canonical alias events. ([\#6971](https://github.com/matrix-org/synapse/issues/6971))
  - Users with a power level sufficient to modify the canonical alias of a room can now delete room aliases. ([\#6986](https://github.com/matrix-org/synapse/issues/6986))
  - Implement updated authorization rules and redaction rules for aliases events, from [MSC2261](https://github.com/matrix-org/matrix-doc/pull/2261) and [MSC2432](https://github.com/matrix-org/matrix-doc/pull/2432). ([\#7037](https://github.com/matrix-org/synapse/issues/7037))
  - Stop sending m.room.aliases events during room creation and upgrade. ([\#6941](https://github.com/matrix-org/synapse/issues/6941))
  - Synapse no longer uses room alias events to calculate room names for push notifications. ([\#6966](https://github.com/matrix-org/synapse/issues/6966))
  - The room list endpoint no longer returns a list of aliases. ([\#6970](https://github.com/matrix-org/synapse/issues/6970))
  - Remove special handling of aliases events from [MSC2260](https://github.com/matrix-org/matrix-doc/pull/2260) added in v1.10.0rc1. ([\#7034](https://github.com/matrix-org/synapse/issues/7034))
- Expose the `synctl`, `hash_password` and `generate_config` commands in the snapcraft package. Contributed by @devec0. ([\#6315](https://github.com/matrix-org/synapse/issues/6315))
- Check that server_name is correctly set before running database updates. ([\#6982](https://github.com/matrix-org/synapse/issues/6982))
- Break down monthly active users by `appservice_id` and emit via Prometheus. ([\#7030](https://github.com/matrix-org/synapse/issues/7030))
- Render a configurable and comprehensible error page if something goes wrong during the SAML2 authentication process. ([\#7058](https://github.com/matrix-org/synapse/issues/7058), [\#7067](https://github.com/matrix-org/synapse/issues/7067))
- Add an optional parameter to control whether other sessions are logged out when a user's password is modified. ([\#7085](https://github.com/matrix-org/synapse/issues/7085))
- Add prometheus metrics for the number of active pushers. ([\#7103](https://github.com/matrix-org/synapse/issues/7103), [\#7106](https://github.com/matrix-org/synapse/issues/7106))
- Improve performance when making HTTPS requests to sygnal, sydent, etc, by sharing the SSL context object between connections. ([\#7094](https://github.com/matrix-org/synapse/issues/7094))

Bugfixes
--------

- When a user's profile is updated via the admin API, also generate a displayname/avatar update for that user in each room. ([\#6572](https://github.com/matrix-org/synapse/issues/6572))
- Fix a couple of bugs in email configuration handling. ([\#6962](https://github.com/matrix-org/synapse/issues/6962))
- Fix an issue affecting worker-based deployments where replication would stop working, necessitating a full restart, after joining a large room. ([\#6967](https://github.com/matrix-org/synapse/issues/6967))
- Fix `duplicate key` error which was logged when rejoining a room over federation. ([\#6968](https://github.com/matrix-org/synapse/issues/6968))
- Prevent user from setting 'deactivated' to anything other than a bool on the v2 PUT /users Admin API. ([\#6990](https://github.com/matrix-org/synapse/issues/6990))
- Fix py35-old CI by using native tox package. ([\#7018](https://github.com/matrix-org/synapse/issues/7018))
- Fix a bug causing `org.matrix.dummy_event` to be included in responses from `/sync`. ([\#7035](https://github.com/matrix-org/synapse/issues/7035))
- Fix a bug that renders UTF-8 text files incorrectly when loaded from media. Contributed by @TheStranjer. ([\#7044](https://github.com/matrix-org/synapse/issues/7044))
- Fix a bug that would cause Synapse to respond with an error about event visibility if a client tried to request the state of a room at a given token. ([\#7066](https://github.com/matrix-org/synapse/issues/7066))
- Repair a data-corruption issue which was introduced in Synapse 1.10, and fixed in Synapse 1.11, and which could cause `/sync` to return with 404 errors about missing events and unknown rooms. ([\#7070](https://github.com/matrix-org/synapse/issues/7070))
- Fix a bug causing account validity renewal emails to be sent even if the feature is turned off in some cases. ([\#7074](https://github.com/matrix-org/synapse/issues/7074))

Improved Documentation
----------------------

- Updated CentOS8 install instructions. Contributed by Richard Kellner. ([\#6925](https://github.com/matrix-org/synapse/issues/6925))
- Fix `POSTGRES_INITDB_ARGS` in the `contrib/docker/docker-compose.yml` example docker-compose configuration. ([\#6984](https://github.com/matrix-org/synapse/issues/6984))
- Change date in [INSTALL.md](./INSTALL.md#tls-certificates) for last date of getting TLS certificates to November 2019. ([\#7015](https://github.com/matrix-org/synapse/issues/7015))
- Document that the fallback auth endpoints must be routed to the same worker node as the register endpoints. ([\#7048](https://github.com/matrix-org/synapse/issues/7048))

Deprecations and Removals
-------------------------

- Remove the unused query_auth federation endpoint per [MSC2451](https://github.com/matrix-org/matrix-doc/pull/2451). ([\#7026](https://github.com/matrix-org/synapse/issues/7026))

Internal Changes
----------------

- Add type hints to `logging/context.py`. ([\#6309](https://github.com/matrix-org/synapse/issues/6309))
- Add some clarifications to `README.md` in the database schema directory. ([\#6615](https://github.com/matrix-org/synapse/issues/6615))
- Refactoring work in preparation for changing the event redaction algorithm. ([\#6874](https://github.com/matrix-org/synapse/issues/6874), [\#6875](https://github.com/matrix-org/synapse/issues/6875), [\#6983](https://github.com/matrix-org/synapse/issues/6983), [\#7003](https://github.com/matrix-org/synapse/issues/7003))
- Improve performance of v2 state resolution for large rooms. ([\#6952](https://github.com/matrix-org/synapse/issues/6952), [\#7095](https://github.com/matrix-org/synapse/issues/7095))
- Reduce time spent doing GC, by freezing objects on startup. ([\#6953](https://github.com/matrix-org/synapse/issues/6953))
- Minor performance fixes to `get_auth_chain_ids`. ([\#6954](https://github.com/matrix-org/synapse/issues/6954))
- Don't record remote cross-signing keys in the `devices` table. ([\#6956](https://github.com/matrix-org/synapse/issues/6956))
- Use flake8-comprehensions to enforce good hygiene of list/set/dict comprehensions. ([\#6957](https://github.com/matrix-org/synapse/issues/6957))
- Merge worker apps together. ([\#6964](https://github.com/matrix-org/synapse/issues/6964), [\#7002](https://github.com/matrix-org/synapse/issues/7002), [\#7055](https://github.com/matrix-org/synapse/issues/7055), [\#7104](https://github.com/matrix-org/synapse/issues/7104))
- Remove redundant `store_room` call from `FederationHandler._process_received_pdu`. ([\#6979](https://github.com/matrix-org/synapse/issues/6979))
- Update warning for incorrect database collation/ctype to include link to documentation. ([\#6985](https://github.com/matrix-org/synapse/issues/6985))
- Add some type annotations to the database storage classes. ([\#6987](https://github.com/matrix-org/synapse/issues/6987))
- Port `synapse.handlers.presence` to async/await. ([\#6991](https://github.com/matrix-org/synapse/issues/6991), [\#7019](https://github.com/matrix-org/synapse/issues/7019))
- Add some type annotations to the federation base & client classes. ([\#6995](https://github.com/matrix-org/synapse/issues/6995))
- Port `synapse.rest.keys` to async/await. ([\#7020](https://github.com/matrix-org/synapse/issues/7020))
- Add a type check to `is_verified` when processing room keys. ([\#7045](https://github.com/matrix-org/synapse/issues/7045))
- Add type annotations and comments to the auth handler. ([\#7063](https://github.com/matrix-org/synapse/issues/7063))
2020-03-23 13:54:17 +00:00
Richard van der Hoff
b3cee0ce67 Fix processing of groups stream, and use symbolic names for streams (#7117)
`groups` != `receipts`

Introduced in #6964
2020-03-23 11:39:36 +00:00
Dionysis Grigoropoulos
96071eea8f Set Referrer-Policy to no-referrer for media (#7009) 2020-03-23 09:48:28 +00:00
Patrick Cloke
477c4f5b1c Clean-up some auth/login REST code (#7115) 2020-03-20 16:22:47 -04:00
Richard van der Hoff
c165c1233b Improve database configuration docs (#6988)
Attempts to clarify the sample config for databases, and add some stuff about
tcp keepalives to `postgres.md`.
2020-03-20 15:24:22 +00:00
Erik Johnston
fdb1344716 Remove concept of a non-limited stream. (#7011) 2020-03-20 14:40:47 +00:00
Patrick Cloke
caec7d4fa0 Convert some of the media REST code to async/await (#7110) 2020-03-20 07:20:02 -04:00
Patrick Cloke
c2db6599c8 Fix a bug in the federation API which could cause occasional "Failed to get PDU" errors (#7089). 2020-03-19 08:22:56 -04:00
Erik Johnston
a319cb1dd1 Change device list streams to have one row per ID (#7010)
* Add 'device_lists_outbound_pokes' as extra table.

This makes sure we check all the relevant tables to get the current max
stream ID.

Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.

* Change device lists stream to have one row per id.

This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.

* Change device list replication to match new semantics.

Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).

* Newsfile

* Remove handling of multiple rows per ID

* Fix worker handling

* Comments from review
2020-03-19 11:36:53 +00:00
Erik Johnston
6e6476ef07 Comments from review 2020-03-18 10:13:55 +00:00
Richard van der Hoff
4ce50519cd Update postgres.md
fix broken link
2020-03-17 18:08:43 +00:00
Erik Johnston
65a941d1f8 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/fixup_devices_stream 2020-03-02 16:55:55 +00:00
Erik Johnston
e53744c737 Fix worker handling 2020-03-02 12:52:28 +00:00
Erik Johnston
f70f44abc7 Remove handling of multiple rows per ID 2020-02-28 11:45:35 +00:00
Erik Johnston
59ad93d2a4 Newsfile 2020-02-28 11:27:37 +00:00
Erik Johnston
9ce4e344a8 Change device list replication to match new semantics.
Instead of sending down batches of user ID/host tuples, send down a row
per entity (user ID or host).
2020-02-28 11:25:34 +00:00
Erik Johnston
f5caa1864e Change device lists stream to have one row per id.
This will make it possible to process the streams more incrementally,
avoiding having to process large chunks at once.
2020-02-28 11:21:25 +00:00
Erik Johnston
c3c6c0e622 Add 'device_lists_outbound_pokes' as extra table.
This makes sure we check all the relevant tables to get the current max
stream ID.

Currently not doing so isn't problematic as the max stream ID in
`device_lists_outbound_pokes` is the same as in `device_lists_stream`,
however that will change.
2020-02-28 11:15:11 +00:00
260 changed files with 12784 additions and 5094 deletions


@@ -5,8 +5,6 @@ Message history can be paginated
Can re-join room if re-invited
/upgrade creates a new room
The only membership state included in an initial sync is for all the senders in the timeline
Local device key changes get to remote servers


@@ -1,12 +1,251 @@
Synapse 1.12.0 (2020-03-23)
Synapse 1.13.0rc3 (2020-05-18)
==============================
Bugfixes
--------
- Hash passwords as early as possible during registration. ([\#7523](https://github.com/matrix-org/synapse/issues/7523))
Synapse 1.13.0rc2 (2020-05-14)
==============================
Bugfixes
--------
- Fix a long-standing bug which could cause messages not to be sent over federation, when state events with state keys matching user IDs (such as custom user statuses) were received. ([\#7376](https://github.com/matrix-org/synapse/issues/7376))
- Restore compatibility with non-compliant clients during the user interactive authentication process, fixing a problem introduced in v1.13.0rc1. ([\#7483](https://github.com/matrix-org/synapse/issues/7483))
Internal Changes
----------------
- Fix linting errors in new version of Flake8. ([\#7470](https://github.com/matrix-org/synapse/issues/7470))
Synapse 1.13.0rc1 (2020-05-11)
==============================
This release brings some potential changes necessary for certain
configurations of Synapse:
* If your Synapse is configured to use SSO and have a custom
`sso_redirect_confirm_template_dir` configuration option set, you will need
to duplicate the new `sso_auth_confirm.html`, `sso_auth_success.html` and
`sso_account_deactivated.html` templates into that directory.
* Synapse plugins using the `complete_sso_login` method of
`synapse.module_api.ModuleApi` should instead switch to the async/await
version, `complete_sso_login_async`, which includes additional checks. The
former version is now deprecated.
* A bug was introduced in Synapse 1.4.0 which could cause the room directory
to be incomplete or empty if Synapse was upgraded directly from v1.2.1 or
earlier, to versions between v1.4.0 and v1.12.x.
Please review [UPGRADE.rst](UPGRADE.rst) for more details on these changes
and for general upgrade guidance.
Features
--------
- Extend the `web_client_location` option to accept an absolute URL to use as a redirect. Adds a warning when running the web client on the same hostname as homeserver. Contributed by Martin Milata. ([\#7006](https://github.com/matrix-org/synapse/issues/7006))
- Set `Referrer-Policy` header to `no-referrer` on media downloads. ([\#7009](https://github.com/matrix-org/synapse/issues/7009))
- Add support for running replication over Redis when using workers. ([\#7040](https://github.com/matrix-org/synapse/issues/7040), [\#7325](https://github.com/matrix-org/synapse/issues/7325), [\#7352](https://github.com/matrix-org/synapse/issues/7352), [\#7401](https://github.com/matrix-org/synapse/issues/7401), [\#7427](https://github.com/matrix-org/synapse/issues/7427), [\#7439](https://github.com/matrix-org/synapse/issues/7439), [\#7446](https://github.com/matrix-org/synapse/issues/7446), [\#7450](https://github.com/matrix-org/synapse/issues/7450), [\#7454](https://github.com/matrix-org/synapse/issues/7454))
- Admin API `POST /_synapse/admin/v1/join/<roomIdOrAlias>` to join users to a room like `auto_join_rooms` for creation of users. ([\#7051](https://github.com/matrix-org/synapse/issues/7051))
- Add options to prevent users from changing their profile or associated 3PIDs. ([\#7096](https://github.com/matrix-org/synapse/issues/7096))
- Support SSO in the user interactive authentication workflow. ([\#7102](https://github.com/matrix-org/synapse/issues/7102), [\#7186](https://github.com/matrix-org/synapse/issues/7186), [\#7279](https://github.com/matrix-org/synapse/issues/7279), [\#7343](https://github.com/matrix-org/synapse/issues/7343))
- Allow server admins to define and enforce a password policy ([MSC2000](https://github.com/matrix-org/matrix-doc/issues/2000)). ([\#7118](https://github.com/matrix-org/synapse/issues/7118))
- Improve the support for SSO authentication on the login fallback page. ([\#7152](https://github.com/matrix-org/synapse/issues/7152), [\#7235](https://github.com/matrix-org/synapse/issues/7235))
- Always whitelist the login fallback in the SSO configuration if `public_baseurl` is set. ([\#7153](https://github.com/matrix-org/synapse/issues/7153))
- Admin users are no longer required to be in a room to create an alias for it. ([\#7191](https://github.com/matrix-org/synapse/issues/7191))
- Require admin privileges to enable room encryption by default. This does not affect existing rooms. ([\#7230](https://github.com/matrix-org/synapse/issues/7230))
- Add a config option for specifying the value of the Accept-Language HTTP header when generating URL previews. ([\#7265](https://github.com/matrix-org/synapse/issues/7265))
- Allow `/requestToken` endpoints to hide the existence (or lack thereof) of 3PID associations on the homeserver. ([\#7315](https://github.com/matrix-org/synapse/issues/7315))
- Add a configuration setting to tweak the threshold for dummy events. ([\#7422](https://github.com/matrix-org/synapse/issues/7422))
Bugfixes
--------
- Don't attempt to use an invalid sqlite config if no database configuration is provided. Contributed by @nekatak. ([\#6573](https://github.com/matrix-org/synapse/issues/6573))
- Fix single-sign on with CAS systems: pass the same service URL when requesting the CAS ticket and when calling the `proxyValidate` URL. Contributed by @Naugrimm. ([\#6634](https://github.com/matrix-org/synapse/issues/6634))
- Fix missing field `default` when fetching user-defined push rules. ([\#6639](https://github.com/matrix-org/synapse/issues/6639))
- Improve error responses when accessing remote public room lists. ([\#6899](https://github.com/matrix-org/synapse/issues/6899), [\#7368](https://github.com/matrix-org/synapse/issues/7368))
- Transfer alias mappings on room upgrade. ([\#6946](https://github.com/matrix-org/synapse/issues/6946))
- Ensure that a user interactive authentication session is tied to a single request. ([\#7068](https://github.com/matrix-org/synapse/issues/7068), [\#7455](https://github.com/matrix-org/synapse/issues/7455))
- Fix a bug in the federation API which could cause occasional "Failed to get PDU" errors. ([\#7089](https://github.com/matrix-org/synapse/issues/7089))
- Return the proper error (`M_BAD_ALIAS`) when a non-existent canonical alias is provided. ([\#7109](https://github.com/matrix-org/synapse/issues/7109))
- Fix a bug which meant that groups updates were not correctly replicated between workers. ([\#7117](https://github.com/matrix-org/synapse/issues/7117))
- Fix starting workers when federation sending not split out. ([\#7133](https://github.com/matrix-org/synapse/issues/7133))
- Ensure `is_verified` is a boolean in responses to `GET /_matrix/client/r0/room_keys/keys`. Also warn the user if they forgot the `version` query param. ([\#7150](https://github.com/matrix-org/synapse/issues/7150))
- Fix error page being shown when a custom SAML handler attempted to redirect when processing an auth response. ([\#7151](https://github.com/matrix-org/synapse/issues/7151))
- Avoid importing `sqlite3` when using the postgres backend. Contributed by David Vo. ([\#7155](https://github.com/matrix-org/synapse/issues/7155))
- Fix excessive CPU usage by `prune_old_outbound_device_pokes` job. ([\#7159](https://github.com/matrix-org/synapse/issues/7159))
- Fix a bug which could cause outbound federation traffic to stop working if a client uploaded an incorrect e2e device signature. ([\#7177](https://github.com/matrix-org/synapse/issues/7177))
- Fix a bug which could cause incorrect 'cyclic dependency' error. ([\#7178](https://github.com/matrix-org/synapse/issues/7178))
- Fix a bug that could cause a user to be invited to a server notices (aka System Alerts) room without any notice being sent. ([\#7199](https://github.com/matrix-org/synapse/issues/7199))
- Fix some worker-mode replication handling not being correctly recorded in CPU usage stats. ([\#7203](https://github.com/matrix-org/synapse/issues/7203))
- Do not allow a deactivated user to login via SSO. ([\#7240](https://github.com/matrix-org/synapse/issues/7240), [\#7259](https://github.com/matrix-org/synapse/issues/7259))
- Fix --help command-line argument. ([\#7249](https://github.com/matrix-org/synapse/issues/7249))
- Fix room publish permissions not being checked on room creation. ([\#7260](https://github.com/matrix-org/synapse/issues/7260))
- Reject unknown session IDs during user interactive authentication instead of silently creating a new session. ([\#7268](https://github.com/matrix-org/synapse/issues/7268))
- Fix a SQL query introduced in Synapse 1.12.0 which could cause large amounts of logging to the postgres slow-query log. ([\#7274](https://github.com/matrix-org/synapse/issues/7274))
- Persist user interactive authentication sessions across workers and Synapse restarts. ([\#7302](https://github.com/matrix-org/synapse/issues/7302))
- Fixed backwards compatibility logic of the first value of `trusted_third_party_id_servers` being used for `account_threepid_delegates.email`, which occurs when the former, deprecated option is set and the latter is not. ([\#7316](https://github.com/matrix-org/synapse/issues/7316))
- Fix a bug where event updates might not be sent over replication to worker processes after the stream falls behind. ([\#7337](https://github.com/matrix-org/synapse/issues/7337), [\#7358](https://github.com/matrix-org/synapse/issues/7358))
- Fix bad error handling that would cause Synapse to crash if it's provided with a YAML configuration file that's either empty or doesn't parse into a key-value map. ([\#7341](https://github.com/matrix-org/synapse/issues/7341))
- Fix incorrect metrics reporting for `renew_attestations` background task. ([\#7344](https://github.com/matrix-org/synapse/issues/7344))
- Prevent non-federating rooms from appearing in responses to federated `POST /publicRooms` requests when a filter was included. ([\#7367](https://github.com/matrix-org/synapse/issues/7367))
- Fix a bug which would cause the room directory to be incorrectly populated if Synapse was upgraded directly from v1.2.1 or earlier to v1.4.0 or later. Note that this fix does not apply retrospectively; see the [upgrade notes](UPGRADE.rst#upgrading-to-v1130) for more information. ([\#7387](https://github.com/matrix-org/synapse/issues/7387))
- Fix bug in `EventContext.deserialize`. ([\#7393](https://github.com/matrix-org/synapse/issues/7393))
Improved Documentation
----------------------
- Update Debian installation instructions to recommend installing the `virtualenv` package instead of `python3-virtualenv`. ([\#6892](https://github.com/matrix-org/synapse/issues/6892))
- Improve the documentation for database configuration. ([\#6988](https://github.com/matrix-org/synapse/issues/6988))
- Improve the documentation of application service configuration files. ([\#7091](https://github.com/matrix-org/synapse/issues/7091))
- Update pre-built package name for FreeBSD. ([\#7107](https://github.com/matrix-org/synapse/issues/7107))
- Update postgres docs with login troubleshooting information. ([\#7119](https://github.com/matrix-org/synapse/issues/7119))
- Clean up INSTALL.md a bit. ([\#7141](https://github.com/matrix-org/synapse/issues/7141))
- Add documentation for running a local CAS server for testing. ([\#7147](https://github.com/matrix-org/synapse/issues/7147))
- Improve README.md by being explicit about public IP recommendation for TURN relaying. ([\#7167](https://github.com/matrix-org/synapse/issues/7167))
- Fix a small typo in the `metrics_flags` config option. ([\#7171](https://github.com/matrix-org/synapse/issues/7171))
- Update the contributed documentation on managing synapse workers with systemd, and bring it into the core distribution. ([\#7234](https://github.com/matrix-org/synapse/issues/7234))
- Add documentation to the `password_providers` config option. Add known password provider implementations to docs. ([\#7238](https://github.com/matrix-org/synapse/issues/7238), [\#7248](https://github.com/matrix-org/synapse/issues/7248))
- Modify suggested nginx reverse proxy configuration to match Synapse's default file upload size. Contributed by @ProCycleDev. ([\#7251](https://github.com/matrix-org/synapse/issues/7251))
- Documentation of media_storage_providers options updated to avoid misunderstandings. Contributed by Tristan Lins. ([\#7272](https://github.com/matrix-org/synapse/issues/7272))
- Add documentation on monitoring workers with Prometheus. ([\#7357](https://github.com/matrix-org/synapse/issues/7357))
- Clarify endpoint usage in the users admin api documentation. ([\#7361](https://github.com/matrix-org/synapse/issues/7361))
Deprecations and Removals
-------------------------
- Remove nonfunctional `captcha_bypass_secret` option from `homeserver.yaml`. ([\#7137](https://github.com/matrix-org/synapse/issues/7137))
Internal Changes
----------------
- Add benchmarks for LruCache. ([\#6446](https://github.com/matrix-org/synapse/issues/6446))
- Return total number of users and profile attributes in admin users endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#6881](https://github.com/matrix-org/synapse/issues/6881))
- Change device list streams to have one row per ID. ([\#7010](https://github.com/matrix-org/synapse/issues/7010))
- Remove concept of a non-limited stream. ([\#7011](https://github.com/matrix-org/synapse/issues/7011))
- Move catchup of replication streams logic to worker. ([\#7024](https://github.com/matrix-org/synapse/issues/7024), [\#7195](https://github.com/matrix-org/synapse/issues/7195), [\#7226](https://github.com/matrix-org/synapse/issues/7226), [\#7239](https://github.com/matrix-org/synapse/issues/7239), [\#7286](https://github.com/matrix-org/synapse/issues/7286), [\#7290](https://github.com/matrix-org/synapse/issues/7290), [\#7318](https://github.com/matrix-org/synapse/issues/7318), [\#7326](https://github.com/matrix-org/synapse/issues/7326), [\#7378](https://github.com/matrix-org/synapse/issues/7378), [\#7421](https://github.com/matrix-org/synapse/issues/7421))
- Convert some of synapse.rest.media to async/await. ([\#7110](https://github.com/matrix-org/synapse/issues/7110), [\#7184](https://github.com/matrix-org/synapse/issues/7184), [\#7241](https://github.com/matrix-org/synapse/issues/7241))
- De-duplicate / remove unused REST code for login and auth. ([\#7115](https://github.com/matrix-org/synapse/issues/7115))
- Convert `*StreamRow` classes to inner classes. ([\#7116](https://github.com/matrix-org/synapse/issues/7116))
- Clean up some LoggingContext code. ([\#7120](https://github.com/matrix-org/synapse/issues/7120), [\#7181](https://github.com/matrix-org/synapse/issues/7181), [\#7183](https://github.com/matrix-org/synapse/issues/7183), [\#7408](https://github.com/matrix-org/synapse/issues/7408), [\#7426](https://github.com/matrix-org/synapse/issues/7426))
- Add explicit `instance_id` for USER_SYNC commands and remove implicit `conn_id` usage. ([\#7128](https://github.com/matrix-org/synapse/issues/7128))
- Refactored the CAS authentication logic to a separate class. ([\#7136](https://github.com/matrix-org/synapse/issues/7136))
- Run replication streamers on workers. ([\#7146](https://github.com/matrix-org/synapse/issues/7146))
- Add tests for outbound device pokes. ([\#7157](https://github.com/matrix-org/synapse/issues/7157))
- Fix device list update stream ids going backward. ([\#7158](https://github.com/matrix-org/synapse/issues/7158))
- Use `stream.current_token()` and remove `stream_positions()`. ([\#7172](https://github.com/matrix-org/synapse/issues/7172))
- Move client command handling out of TCP protocol. ([\#7185](https://github.com/matrix-org/synapse/issues/7185))
- Move server command handling out of TCP protocol. ([\#7187](https://github.com/matrix-org/synapse/issues/7187))
- Fix consistency of HTTP status codes reported in log lines. ([\#7188](https://github.com/matrix-org/synapse/issues/7188))
- Only run one background database update at a time. ([\#7190](https://github.com/matrix-org/synapse/issues/7190))
- Remove sent outbound device list pokes from the database. ([\#7192](https://github.com/matrix-org/synapse/issues/7192))
- Add a background database update job to clear out duplicate `device_lists_outbound_pokes`. ([\#7193](https://github.com/matrix-org/synapse/issues/7193))
- Remove some extraneous debugging log lines. ([\#7207](https://github.com/matrix-org/synapse/issues/7207))
- Add explicit Python build tooling as dependencies for the snapcraft build. ([\#7213](https://github.com/matrix-org/synapse/issues/7213))
- Add typing information to federation server code. ([\#7219](https://github.com/matrix-org/synapse/issues/7219))
- Extend room admin api (`GET /_synapse/admin/v1/rooms`) with additional attributes. ([\#7225](https://github.com/matrix-org/synapse/issues/7225))
- Unblacklist '/upgrade creates a new room' sytest for workers. ([\#7228](https://github.com/matrix-org/synapse/issues/7228))
- Remove redundant checks on `daemonize` from synctl. ([\#7233](https://github.com/matrix-org/synapse/issues/7233))
- Upgrade jQuery to v3.4.1 on fallback login/registration pages. ([\#7236](https://github.com/matrix-org/synapse/issues/7236))
- Change log line that told user to implement onLogin/onRegister fallback js functions to a warning, instead of an info, so it's more visible. ([\#7237](https://github.com/matrix-org/synapse/issues/7237))
- Correct the parameters of a test fixture. Contributed by Isaiah Singletary. ([\#7243](https://github.com/matrix-org/synapse/issues/7243))
- Convert auth handler to async/await. ([\#7261](https://github.com/matrix-org/synapse/issues/7261))
- Add some unit tests for replication. ([\#7278](https://github.com/matrix-org/synapse/issues/7278))
- Improve typing annotations in `synapse.replication.tcp.streams.Stream`. ([\#7291](https://github.com/matrix-org/synapse/issues/7291))
- Reduce log verbosity of url cache cleanup tasks. ([\#7295](https://github.com/matrix-org/synapse/issues/7295))
- Fix sample SAML Service Provider configuration. Contributed by @frcl. ([\#7300](https://github.com/matrix-org/synapse/issues/7300))
- Fix StreamChangeCache to work with multiple entities changing on the same stream id. ([\#7303](https://github.com/matrix-org/synapse/issues/7303))
- Fix an incorrect import in IdentityHandler. ([\#7319](https://github.com/matrix-org/synapse/issues/7319))
- Reduce logging verbosity for successful federation requests. ([\#7321](https://github.com/matrix-org/synapse/issues/7321))
- Convert some federation handler code to async/await. ([\#7338](https://github.com/matrix-org/synapse/issues/7338))
- Fix collation for postgres for unit tests. ([\#7359](https://github.com/matrix-org/synapse/issues/7359))
- Convert RegistrationWorkerStore.is_server_admin and dependent code to async/await. ([\#7363](https://github.com/matrix-org/synapse/issues/7363))
- Add an `instance_name` to `RDATA` and `POSITION` replication commands. ([\#7364](https://github.com/matrix-org/synapse/issues/7364))
- Thread through instance name to replication client. ([\#7369](https://github.com/matrix-org/synapse/issues/7369))
- Convert synapse.server_notices to async/await. ([\#7394](https://github.com/matrix-org/synapse/issues/7394))
- Convert synapse.notifier to async/await. ([\#7395](https://github.com/matrix-org/synapse/issues/7395))
- Fix issues with the Python package manifest. ([\#7404](https://github.com/matrix-org/synapse/issues/7404))
- Prevent methods in `synapse.handlers.auth` from polling the homeserver config every request. ([\#7420](https://github.com/matrix-org/synapse/issues/7420))
- Speed up fetching device lists changes when handling `/sync` requests. ([\#7423](https://github.com/matrix-org/synapse/issues/7423))
- Run group attestation renewal in series rather than parallel for performance. ([\#7442](https://github.com/matrix-org/synapse/issues/7442))
Synapse 1.12.4 (2020-04-23)
===========================
No significant changes since 1.12.0rc1.
No significant changes.
Synapse 1.12.4rc1 (2020-04-22)
==============================
Features
--------
- Always send users their own device updates. ([\#7160](https://github.com/matrix-org/synapse/issues/7160))
- Add support for handling GET requests for `account_data` on a worker. ([\#7311](https://github.com/matrix-org/synapse/issues/7311))
Bugfixes
--------
- Fix a bug that prevented cross-signing with users on worker-mode synapses. ([\#7255](https://github.com/matrix-org/synapse/issues/7255))
- Do not treat display names as globs in push rules. ([\#7271](https://github.com/matrix-org/synapse/issues/7271))
- Fix a bug with cross-signing devices belonging to remote users who did not share a room with any user on the local homeserver. ([\#7289](https://github.com/matrix-org/synapse/issues/7289))
Synapse 1.12.3 (2020-04-03)
===========================
- Remove the pin to Pillow 7.0 which was introduced in Synapse 1.12.2, and
correctly fix the issue with building the Debian packages. ([\#7212](https://github.com/matrix-org/synapse/issues/7212))
Synapse 1.12.2 (2020-04-02)
===========================
This release works around [an issue](https://github.com/matrix-org/synapse/issues/7208) with building the debian packages.
No other significant changes since 1.12.1.
Synapse 1.12.1 (2020-04-02)
===========================
No significant changes since 1.12.1rc1.
Synapse 1.12.1rc1 (2020-03-31)
==============================
Bugfixes
--------
- Fix starting workers when federation sending not split out. ([\#7133](https://github.com/matrix-org/synapse/issues/7133)). Introduced in v1.12.0.
- Avoid importing `sqlite3` when using the postgres backend. Contributed by David Vo. ([\#7155](https://github.com/matrix-org/synapse/issues/7155)). Introduced in v1.12.0rc1.
- Fix a bug which could cause outbound federation traffic to stop working if a client uploaded an incorrect e2e device signature. ([\#7177](https://github.com/matrix-org/synapse/issues/7177)). Introduced in v1.11.0.
Synapse 1.12.0 (2020-03-23)
===========================
Debian packages and Docker images are rebuilt using the latest versions of
dependency libraries, including Twisted 20.3.0. **Please see security advisory
below**.
Potential slow database update during upgrade
---------------------------------------------
Synapse 1.12.0 includes a database update which is run as part of the upgrade,
and which may take some time (several hours in the case of a large
server). Synapse will not respond to HTTP requests while this update is taking
place. For information on checking whether you are affected, and a workaround if you
are, see the [upgrade notes](UPGRADE.rst#upgrading-to-v1120).
Security advisory
-----------------


@@ -2,7 +2,6 @@
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
- [Platform-Specific Instructions](#platform-specific-instructions)
- [Troubleshooting Installation](#troubleshooting-installation)
- [Prebuilt packages](#prebuilt-packages)
- [Setting up Synapse](#setting-up-synapse)
- [TLS certificates](#tls-certificates)
@@ -10,6 +9,7 @@
- [Registering a user](#registering-a-user)
- [Setting up a TURN server](#setting-up-a-turn-server)
- [URL previews](#url-previews)
- [Troubleshooting Installation](#troubleshooting-installation)
# Choosing your server name
@@ -36,7 +36,7 @@ that your email address is probably `user@example.com` rather than
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.5, 3.6, 3.7 or 3.8.
- Python 3.5.2 or later, up to Python 3.8.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
Synapse is written in Python but some of the libraries it uses are written in
@@ -70,7 +70,7 @@ pip install -U matrix-synapse
```
Before you can start Synapse, you will need to generate a configuration
file. To do this, run (in your virtualenv, as before)::
file. To do this, run (in your virtualenv, as before):
```
cd ~/synapse
@@ -84,22 +84,24 @@ python -m synapse.app.homeserver \
... substituting an appropriate value for `--server-name`.
This command will generate you a config file that you can then customise, but it will
also generate a set of keys for you. These keys will allow your Home Server to
identify itself to other Home Servers, so don't lose or delete them. It would be
also generate a set of keys for you. These keys will allow your homeserver to
identify itself to other homeservers, so don't lose or delete them. It would be
wise to back them up somewhere safe. (If, for whatever reason, you do need to
change your Home Server's keys, you may find that other Home Servers have the
change your homeserver's keys, you may find that other homeservers have the
old key cached. If you update the signing key, you should change the name of the
key in the `<server name>.signing.key` file (the second word) to something
different. See the
[spec](https://matrix.org/docs/spec/server_server/latest.html#retrieving-server-keys)
for more information on key management.)
for more information on key management).
To actually run your new homeserver, pick a working directory for Synapse to
run (e.g. `~/synapse`), and::
run (e.g. `~/synapse`), and:
cd ~/synapse
source env/bin/activate
synctl start
```
cd ~/synapse
source env/bin/activate
synctl start
```
### Platform-Specific Instructions
@@ -110,7 +112,7 @@ Installing prerequisites on Ubuntu or Debian:
```
sudo apt-get install build-essential python3-dev libffi-dev \
python3-pip python3-setuptools sqlite3 \
libssl-dev python3-virtualenv libjpeg-dev libxslt1-dev
libssl-dev virtualenv libjpeg-dev libxslt1-dev
```
#### ArchLinux
@@ -188,7 +190,7 @@ doas pkg_add python libffi py-pip py-setuptools sqlite3 py-virtualenv \
There is currently no port for OpenBSD. Additionally, OpenBSD's security
settings require a slightly more difficult installation process.
XXX: I suspect this is out of date.
(XXX: I suspect this is out of date)
1. Create a new directory in `/usr/local` called `_synapse`. Also, create a
new user called `_synapse` and set that directory as the new user's home.
@@ -196,7 +198,7 @@ XXX: I suspect this is out of date.
write and execute permissions on the same memory space to be run from
`/usr/local`.
2. `su` to the new `_synapse` user and change to their home directory.
3. Create a new virtualenv: `virtualenv -p python2.7 ~/.synapse`
3. Create a new virtualenv: `virtualenv -p python3 ~/.synapse`
4. Source the virtualenv configuration located at
`/usr/local/_synapse/.synapse/bin/activate`. This is done in `ksh` by
using the `.` command, rather than `bash`'s `source`.
@@ -217,45 +219,6 @@ be found at https://docs.microsoft.com/en-us/windows/wsl/install-win10 for
Windows 10 and https://docs.microsoft.com/en-us/windows/wsl/install-on-server
for Windows Server.
### Troubleshooting Installation
XXX a bunch of this is no longer relevant.
Synapse requires pip 8 or later, so if your OS provides too old a version you
may need to manually upgrade it::
sudo pip install --upgrade pip
Installing may fail with `Could not find any downloads that satisfy the requirement pymacaroons-pynacl (from matrix-synapse==0.12.0)`.
You can fix this by manually upgrading pip and virtualenv::
sudo pip install --upgrade virtualenv
You can next rerun `virtualenv -p python3 synapse` to update the virtual env.
Installing may fail during installing virtualenv with `InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.`
You can fix this by manually installing ndg-httpsclient::
pip install --upgrade ndg-httpsclient
Installing may fail with `mock requires setuptools>=17.1. Aborting installation`.
You can fix this by upgrading setuptools::
pip install --upgrade setuptools
If pip crashes mid-installation for any reason (e.g. lost terminal), pip may
refuse to run until you remove the temporary installation directory it
created. To reset the installation::
rm -rf /tmp/pip_install_matrix
pip seems to leak *lots* of memory during installation. For instance, a Linux
host with 512MB of RAM may run out of memory whilst installing Twisted. If this
happens, you will have to individually install the dependencies which are
failing, e.g.::
pip install twisted
## Prebuilt packages
As an alternative to installing from source, prebuilt packages are available
@@ -314,7 +277,7 @@ For `buster` and `sid`, Synapse is available in the Debian repositories and
it should be possible to install it with simply:
```
sudo apt install matrix-synapse
sudo apt install matrix-synapse
```
There is also a version of `matrix-synapse` in `stretch-backports`. Please see
@@ -375,15 +338,17 @@ sudo pip install py-bcrypt
Synapse can be found in the void repositories as 'synapse':
xbps-install -Su
xbps-install -S synapse
```
xbps-install -Su
xbps-install -S synapse
```
### FreeBSD
Synapse can be installed via FreeBSD Ports or Packages contributed by Brendan Molloy from:
- Ports: `cd /usr/ports/net-im/py-matrix-synapse && make install clean`
- Packages: `pkg install py27-matrix-synapse`
- Packages: `pkg install py37-matrix-synapse`
### NixOS
@@ -420,6 +385,7 @@ so, you will need to edit `homeserver.yaml`, as follows:
resources:
- names: [client, federation]
```
* You will also need to uncomment the `tls_certificate_path` and
`tls_private_key_path` lines under the `TLS` section. You can either
point these settings at an existing certificate and key, or you can
@@ -427,15 +393,15 @@ so, you will need to edit `homeserver.yaml`, as follows:
for having Synapse automatically provision and renew federation
certificates through ACME can be found at [ACME.md](docs/ACME.md).
Note that, as pointed out in that document, this feature will not
work with installs set up after November 2019.
work with installs set up after November 2019.
If you are using your own certificate, be sure to use a `.pem` file that
includes the full certificate chain including any intermediate certificates
(for instance, if using certbot, use `fullchain.pem` as your certificate, not
`cert.pem`).
For a more detailed guide to configuring your server for federation, see
[federate.md](docs/federate.md)
[federate.md](docs/federate.md).
## Email
@@ -482,7 +448,7 @@ on your server even if `enable_registration` is `false`.
## Setting up a TURN server
For reliable VoIP calls to be routed via this homeserver, you MUST configure
a TURN server. See [docs/turn-howto.md](docs/turn-howto.md) for details.
a TURN server. See [docs/turn-howto.md](docs/turn-howto.md) for details.
## URL previews
@@ -491,10 +457,24 @@ turn it on you must enable the `url_preview_enabled: True` config parameter
and explicitly specify the IP ranges that Synapse is not allowed to spider for
previewing in the `url_preview_ip_range_blacklist` configuration parameter.
This is critical from a security perspective to stop arbitrary Matrix users
spidering 'internal' URLs on your network. At the very least we recommend that
spidering 'internal' URLs on your network. At the very least we recommend that
your loopback and RFC1918 IP addresses are blacklisted.
This also requires the optional lxml and netaddr python dependencies to be
installed. This in turn requires the libxml2 library to be available - on
This also requires the optional `lxml` and `netaddr` python dependencies to be
installed. This in turn requires the `libxml2` library to be available - on
Debian/Ubuntu this means `apt-get install libxml2-dev`, or equivalent for
your OS.
# Troubleshooting Installation
`pip` seems to leak *lots* of memory during installation. For instance, a Linux
host with 512MB of RAM may run out of memory whilst installing Twisted. If this
happens, you will have to individually install the dependencies which are
failing, e.g.:
```
pip install twisted
```
If you have any other problems, feel free to ask in
[#synapse:matrix.org](https://matrix.to/#/#synapse:matrix.org).


@@ -30,23 +30,24 @@ recursive-include synapse/static *.gif
recursive-include synapse/static *.html
recursive-include synapse/static *.js
exclude Dockerfile
exclude .codecov.yml
exclude .coveragerc
exclude .dockerignore
exclude test_postgresql.sh
exclude .editorconfig
exclude Dockerfile
exclude mypy.ini
exclude sytest-blacklist
exclude test_postgresql.sh
include pyproject.toml
recursive-include changelog.d *
prune .buildkite
prune .circleci
prune .codecov.yml
prune .coveragerc
prune .github
prune contrib
prune debian
prune demo/etc
prune docker
prune mypy.ini
prune snap
prune stubs


@@ -75,6 +75,139 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.13.0
====================
Incorrect database migration in old synapse versions
----------------------------------------------------
A bug was introduced in Synapse 1.4.0 which could cause the room directory to
be incomplete or empty if Synapse was upgraded directly from v1.2.1 or
earlier, to versions between v1.4.0 and v1.12.x.
This will *not* be a problem for Synapse installations which were:
* created at v1.4.0 or later,
* upgraded via v1.3.x, or
* upgraded straight from v1.2.1 or earlier to v1.13.0 or later.
If completeness of the room directory is a concern, installations which are
affected can be repaired as follows:
1. Run the following SQL from a ``psql`` or ``sqlite3`` console (a scripted version is sketched after these steps):
.. code:: sql
INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
('populate_stats_process_rooms', '{}', 'current_state_events_membership');
INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
('populate_stats_process_users', '{}', 'populate_stats_process_rooms');
2. Restart synapse.
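
As referenced in step 1, here is a minimal sketch of queueing the two background
updates on a PostgreSQL deployment using the ``psycopg2`` driver. The connection
DSN is an assumption and should be adjusted to your deployment; SQLite users can
run the same statements from the ``sqlite3`` console instead.

.. code:: python

   # Queue the two stats background updates (step 1 above).
   import psycopg2

   conn = psycopg2.connect("dbname=synapse user=synapse_user host=localhost")  # placeholder DSN
   with conn, conn.cursor() as cur:
       cur.execute(
           """
           INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
           ('populate_stats_process_rooms', '{}', 'current_state_events_membership')
           """
       )
       cur.execute(
           """
           INSERT INTO background_updates (update_name, progress_json, depends_on) VALUES
           ('populate_stats_process_users', '{}', 'populate_stats_process_rooms')
           """
       )
   conn.close()
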
New Single Sign-on HTML Templates
---------------------------------
New templates (``sso_auth_confirm.html``, ``sso_auth_success.html``, and
``sso_account_deactivated.html``) were added to Synapse. If your Synapse is
configured to use SSO and a custom ``sso_redirect_confirm_template_dir``
configuration then these templates will need to be copied from
`synapse/res/templates <synapse/res/templates>`_ into that directory.
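
A minimal sketch of the copy step; both paths here are placeholders for your
actual Synapse checkout and custom template directory, not documented defaults:

.. code:: python

   # Copy the new SSO templates into a custom template directory.
   import shutil
   from pathlib import Path

   src = Path("synapse/res/templates")               # path within the Synapse source tree
   dst = Path("/etc/matrix-synapse/sso-templates")   # your sso_redirect_confirm_template_dir
   for name in ("sso_auth_confirm.html", "sso_auth_success.html", "sso_account_deactivated.html"):
       shutil.copy(src / name, dst / name)
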
Synapse SSO Plugins Method Deprecation
--------------------------------------
Plugins using the ``complete_sso_login`` method of
``synapse.module_api.ModuleApi`` should update to using the async/await
version ``complete_sso_login_async`` which includes additional checks. The
non-async version is considered deprecated.
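
A sketch of the migration follows; the argument names shown are assumptions
about the method signature rather than documented API, so consult
``synapse.module_api.ModuleApi`` for the exact form:

.. code:: python

   # Before (deprecated, synchronous):
   #     module_api.complete_sso_login(registered_user_id, request, client_redirect_url)

   # After: await the async variant, which performs additional checks.
   async def finish_login(module_api, registered_user_id, request, client_redirect_url):
       await module_api.complete_sso_login_async(
           registered_user_id, request, client_redirect_url
       )
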
Rolling back to v1.12.4 after a failed upgrade
----------------------------------------------
v1.13.0 includes a lot of large changes. If something problematic occurs, you
may want to roll back to a previous version of Synapse. Because v1.13.0 also
includes a new database schema version, reverting that version is also required
alongside the generic rollback instructions mentioned above. In short, to roll
back to v1.12.4 you need to:
1. Stop the server
2. Decrease the schema version in the database (a scripted version is sketched after these steps):
.. code:: sql
UPDATE schema_version SET version = 57;
3. Downgrade Synapse by following the instructions for your installation method
in the "Rolling back to older versions" section above.
Upgrading to v1.12.0
====================
This version includes a database update which is run as part of the upgrade,
and which may take some time (several hours in the case of a large
server). Synapse will not respond to HTTP requests while this update is taking
place.
This is only likely to be a problem in the case of a server which is
participating in many rooms.
0. As with all upgrades, it is recommended that you have a recent backup of
your database which can be used for recovery in the event of any problems.
1. As an initial check to see if you will be affected, you can try running the
following query from the `psql` or `sqlite3` console. It is safe to run it
while Synapse is still running.
.. code:: sql
SELECT MAX(q.v) FROM (
SELECT (
SELECT ej.json AS v
FROM state_events se INNER JOIN event_json ej USING (event_id)
WHERE se.room_id=rooms.room_id AND se.type='m.room.create' AND se.state_key=''
LIMIT 1
) FROM rooms WHERE rooms.room_version IS NULL
) q;
This query will take about the same amount of time as the upgrade process: i.e.,
if it takes 5 minutes, then it is likely that Synapse will be unresponsive for
5 minutes during the upgrade. (A sketch which times this query appears at the
end of this section.)
If you consider an outage of this duration to be acceptable, no further
action is necessary and you can simply start Synapse 1.12.0.
If you would prefer to reduce the downtime, continue with the steps below.
2. The easiest workaround for this issue is to manually
create a new index before upgrading. On PostgreSQL, this can be done as follows:
.. code:: sql
CREATE INDEX CONCURRENTLY tmp_upgrade_1_12_0_index
ON state_events(room_id) WHERE type = 'm.room.create';
The above query may take some time, but is also safe to run while Synapse is
running.
We assume that no SQLite users have databases large enough to be
affected. If you *are* affected, you can run a similar query, omitting the
``CONCURRENTLY`` keyword. Note however that this operation may in itself cause
Synapse to stop running for some time. Synapse admins are reminded that
`SQLite is not recommended for use outside a test
environment <https://github.com/matrix-org/synapse/blob/master/README.rst#using-postgresql>`_.
3. Once the index has been created, the ``SELECT`` query in step 1 above should
complete quickly. It is therefore safe to upgrade to Synapse 1.12.0.
4. Once Synapse 1.12.0 has successfully started and is responding to HTTP
requests, the temporary index can be removed:
.. code:: sql
DROP INDEX tmp_upgrade_1_12_0_index;
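
As promised above, a minimal sketch which times the step-1 check query to
estimate the likely outage. It assumes a PostgreSQL database reachable via the
``psycopg2`` driver, with placeholder connection details:

.. code:: python

   import time
   import psycopg2

   CHECK_QUERY = """
   SELECT MAX(q.v) FROM (
       SELECT (
           SELECT ej.json AS v
           FROM state_events se INNER JOIN event_json ej USING (event_id)
           WHERE se.room_id=rooms.room_id AND se.type='m.room.create' AND se.state_key=''
           LIMIT 1
       ) FROM rooms WHERE rooms.room_version IS NULL
   ) q;
   """

   conn = psycopg2.connect("dbname=synapse user=synapse_user host=localhost")  # placeholder DSN
   start = time.monotonic()
   with conn.cursor() as cur:
       cur.execute(CHECK_QUERY)
       cur.fetchone()
   print("check query took %.1fs; expect a similar pause during the upgrade"
         % (time.monotonic() - start))
   conn.close()
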
Upgrading to v1.10.0
====================


@@ -1,150 +1,2 @@
# Setup Synapse with Workers and Systemd
This is a setup for managing synapse with systemd including support for
managing workers. It provides a `matrix-synapse`, as well as a
`matrix-synapse-worker@` service for any workers you require. Additionally to
group the required services it sets up a `matrix.target`. You can use this to
automatically start any bot- or bridge-services. More on this in
[Bots and Bridges](#bots-and-bridges).
See the folder [system](system) for any service and target files.
The folder [workers](workers) contains an example configuration for the
`federation_reader` worker. Pay special attention to the name of the
configuration file. In order to work with the `matrix-synapse-worker@.service`
service, it needs to have the exact same name as the worker app.
This setup expects neither the homeserver nor any workers to fork. Forking is
handled by systemd.
## Setup
1. Adjust your matrix configs. Make sure that the worker config files have the
exact same name as the worker app. Compare `matrix-synapse-worker@.service` for
why. You can find an example worker config in the [workers](workers) folder. See
below for relevant settings in the `homeserver.yaml`.
2. Copy the `*.service` and `*.target` files in [system](system) to
`/etc/systemd/system`.
3. `systemctl enable matrix-synapse.service` this adds the homeserver
app to the `matrix.target`
4. *Optional.* `systemctl enable
matrix-synapse-worker@federation_reader.service` this adds the federation_reader
app to the `matrix-synapse.service`
5. *Optional.* Repeat step 4 for any additional workers you require.
6. *Optional.* Add any bots or bridges by enabling them.
7. Start all matrix related services via `systemctl start matrix.target`
8. *Optional.* Enable autostart of all matrix related services on system boot
via `systemctl enable matrix.target`
## Usage
After you have completed the setup, you can use the following commands to
manage your synapse installation:
```
# Start matrix-synapse, all workers and any enabled bots or bridges.
systemctl start matrix.target
# Restart matrix-synapse and all workers (not necessarily restarting bots
# or bridges, see "Bots and Bridges")
systemctl restart matrix-synapse.service
# Stop matrix-synapse and all workers (not necessarily restarting bots
# or bridges, see "Bots and Bridges")
systemctl stop matrix-synapse.service
# Restart a specific worker (e.g. federation_reader); the homeserver is
# unaffected by this.
systemctl restart matrix-synapse-worker@federation_reader.service
# Add a new worker (assuming all configs are setup already)
systemctl enable matrix-synapse-worker@federation_writer.service
systemctl restart matrix-synapse.service
```
## The Configs
Make sure the `worker_app` is set in the `homeserver.yaml` and it does not fork.
```
worker_app: synapse.app.homeserver
daemonize: false
```
None of the workers should fork, as forking is handled by systemd. Hence make
sure this is present in all worker config files.
```
worker_daemonize: false
```
The config files of all workers are expected to be located in
`/etc/matrix-synapse/workers`. If you want to use a different location you have
to edit the provided `*.service` files accordingly.
## Bots and Bridges
Most bots and bridges do not care if the homeserver goes down or is restarted.
Depending on the implementation this may crash them though. So look up the docs
or ask the community of the specific bridge or bot you want to run to make sure
you choose the correct setup.
Whichever configuration you choose, after the setup the following will enable
automatically starting (and potentially restarting) your bot/bridge with the
`matrix.target`.
```
systemctl enable <yourBotOrBridgeName>.service
```
**Note** that the bots/bridges will only be started along with synapse if you
start the `matrix.target`, not if you start the `matrix-synapse.service`. This
is on purpose. Think of `matrix-synapse.service` as *just* synapse, and
`matrix.target` as anything matrix related, including synapse and any and all
enabled bots and bridges.
### Start with synapse but ignore synapse going down
If the bridge can handle shutdowns of the homeserver you'll want to install the
service in the `matrix.target` and optionally add a
`After=matrix-synapse.service` dependency to have the bot/bridge start after
synapse on starting everything.
In this case the service file should look like this.
```
[Unit]
# ...
# Optional: this only ensures that, if you start everything, synapse will
# be started before the bot/bridge.
After=matrix-synapse.service
[Service]
# ...
[Install]
WantedBy=matrix.target
```
### Stop/restart when synapse stops/restarts
If the bridge can't handle shutdowns of the homeserver you'll still want to
install the service in the `matrix.target` but also have to specify the
`After=matrix-synapse.service` *and* `BindsTo=matrix-synapse.service`
dependencies to have the bot/bridge stop/restart with synapse.
In this case the service file should look like this.
```
[Unit]
# ...
# Mandatory
After=matrix-synapse.service
BindsTo=matrix-synapse.service
[Service]
# ...
[Install]
WantedBy=matrix.target
```
The documentation for using systemd to manage synapse workers is now part of
the main synapse distribution. See [docs/systemd-with-workers](../../docs/systemd-with-workers).


@@ -1,19 +0,0 @@
[Unit]
Description=Synapse Matrix Worker
After=matrix-synapse.service
BindsTo=matrix-synapse.service
[Service]
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.%i --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --config-path=/etc/matrix-synapse/workers/%i.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
SyslogIdentifier=matrix-synapse-%i
[Install]
WantedBy=matrix-synapse.service


@@ -1,7 +0,0 @@
[Unit]
Description=Contains matrix services like synapse, bridges and bots
After=network.target
AllowIsolate=no
[Install]
WantedBy=multi-user.target

debian/changelog

@@ -1,3 +1,40 @@
<<<<<<< HEAD
matrix-synapse-py3 (1.12.3ubuntu1) UNRELEASED; urgency=medium
* Add information about .well-known files to Debian installation scripts.
-- Patrick Cloke <patrickc@matrix.org> Mon, 06 Apr 2020 10:10:38 -0400
=======
matrix-synapse-py3 (1.12.4) stable; urgency=medium
* New synapse release 1.12.4.
-- Synapse Packaging team <packages@matrix.org> Thu, 23 Apr 2020 10:58:14 -0400
>>>>>>> master
matrix-synapse-py3 (1.12.3) stable; urgency=medium
[ Richard van der Hoff ]
* Update the Debian build scripts to handle the new installation paths
for the support libraries introduced by Pillow 7.1.1.
[ Synapse Packaging team ]
* New synapse release 1.12.3.
-- Synapse Packaging team <packages@matrix.org> Fri, 03 Apr 2020 10:55:03 +0100
matrix-synapse-py3 (1.12.2) stable; urgency=medium
* New synapse release 1.12.2.
-- Synapse Packaging team <packages@matrix.org> Mon, 02 Apr 2020 19:02:17 +0000
matrix-synapse-py3 (1.12.1) stable; urgency=medium
* New synapse release 1.12.1.
-- Synapse Packaging team <packages@matrix.org> Mon, 02 Apr 2020 11:30:47 +0000
matrix-synapse-py3 (1.12.0) stable; urgency=medium
* New synapse release 1.12.0.


@@ -1,14 +1,14 @@
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the matrix-synapse package.
# This file is distributed under the same license as the matrix-synapse-py3 package.
# FIRST AUTHOR <EMAIL@ADDRESS>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: matrix-synapse\n"
"Report-Msgid-Bugs-To: matrix-synapse@packages.debian.org\n"
"POT-Creation-Date: 2017-02-21 07:51+0000\n"
"Project-Id-Version: matrix-synapse-py3\n"
"Report-Msgid-Bugs-To: matrix-synapse-py3@packages.debian.org\n"
"POT-Creation-Date: 2020-04-06 16:39-0400\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -28,7 +28,10 @@ msgstr ""
#: ../templates:1001
msgid ""
"The name that this homeserver will appear as, to clients and other servers "
"via federation. This name should match the SRV record published in DNS."
"via federation. This is normally the public hostname of the server running "
"synapse, but can be different if you set up delegation. Please refer to the "
"delegation documentation in this case: https://github.com/matrix-org/synapse/"
"blob/master/docs/delegate.md."
msgstr ""
#. Type: boolean

debian/rules

@@ -15,17 +15,38 @@ override_dh_installinit:
# we don't really want to strip the symbols from our object files.
override_dh_strip:
# dh_shlibdeps calls dpkg-shlibdeps, which finds all the binary files
# (executables and shared libs) in the package, and looks for the shared
# libraries that they depend on. It then adds a dependency on the package that
# contains that library to the package.
#
# We make two modifications to that process...
#
override_dh_shlibdeps:
# make the postgres package's dependencies a recommendation
# rather than a hard dependency.
# Firstly, postgres is not a hard dependency for us, so we want to make
# the things that psycopg2 depends on (such as libpq) be
# recommendations rather than hard dependencies. We do so by
# running dpkg-shlibdeps manually on psycopg2's libs.
#
find debian/$(PACKAGE_NAME)/ -path '*/site-packages/psycopg2/*.so' | \
xargs dpkg-shlibdeps -Tdebian/$(PACKAGE_NAME).substvars \
-pshlibs1 -dRecommends
# all the other dependencies can be normal 'Depends' requirements,
# except for PIL's, which is self-contained and which confuses
# dpkg-shlibdeps.
dh_shlibdeps -X site-packages/PIL/.libs -X site-packages/psycopg2
# secondly, we exclude PIL's libraries from the process. They are known
# to be self-contained, but they have interdependencies and
# dpkg-shlibdeps doesn't know how to resolve them.
#
# As of Pillow 7.1.0, these libraries are in
# site-packages/Pillow.libs. Previously, they were in
# site-packages/PIL/.libs.
#
# (we also need to exclude psycopg2, of course, since we've already
# dealt with that.)
#
dh_shlibdeps \
-X site-packages/PIL/.libs \
-X site-packages/Pillow.libs \
-X site-packages/psycopg2
override_dh_virtualenv:
./debian/build_virtualenv

debian/templates

@@ -2,8 +2,10 @@ Template: matrix-synapse/server-name
Type: string
_Description: Name of the server:
The name that this homeserver will appear as, to clients and other
servers via federation. This name should match the SRV record
published in DNS.
servers via federation. This is normally the public hostname of the
server running synapse, but can be different if you set up delegation.
Please refer to the delegation documentation in this case:
https://github.com/matrix-org/synapse/blob/master/docs/delegate.md.
Template: matrix-synapse/report-stats
Type: boolean


@@ -0,0 +1,34 @@
# Edit Room Membership API
This API allows an administrator to join a user account with a given `user_id`
to a room with a given `room_id_or_alias`. You can only modify the membership of
local users. The server administrator must be in the room and have permission to
invite users.
## Parameters
The following parameters are available:
* `user_id` - Fully qualified user ID: for example, `@user:server.com`.
* `room_id_or_alias` - The room identifier or alias to join: for example,
`!636q39766251:server.com`.
## Usage
```
POST /_synapse/admin/v1/join/<room_id_or_alias>
{
"user_id": "@user:server.com"
}
```
The request must include an `access_token` of a server admin.
Response:
```
{
"room_id": "!636q39766251:server.com"
}
```
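
As a concrete illustration, here is a minimal Python sketch of this call using the `requests` library; the homeserver URL, token, room and user below are placeholders rather than values prescribed by the API:

```python
from urllib.parse import quote

import requests

HOMESERVER = "http://localhost:8008"  # placeholder homeserver URL
ADMIN_TOKEN = "<admin access token>"  # placeholder admin token

room = "#somewhere:server.com"        # a room ID or alias
resp = requests.post(
    f"{HOMESERVER}/_synapse/admin/v1/join/{quote(room, safe='')}",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    json={"user_id": "@user:server.com"},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"room_id": "!636q39766251:server.com"}
```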


@@ -11,8 +11,21 @@ The following query parameters are available:
* `from` - Offset in the returned list. Defaults to `0`.
* `limit` - Maximum number of rooms to return. Defaults to `100`.
* `order_by` - The method in which to sort the returned list of rooms. Valid values are:
- `alphabetical` - Rooms are ordered alphabetically by room name. This is the default.
- `size` - Rooms are ordered by the number of members. Largest to smallest.
- `alphabetical` - Same as `name`. This is deprecated.
- `size` - Same as `joined_members`. This is deprecated.
- `name` - Rooms are ordered alphabetically by room name. This is the default.
- `canonical_alias` - Rooms are ordered alphabetically by main alias address of the room.
- `joined_members` - Rooms are ordered by the number of members. Largest to smallest.
- `joined_local_members` - Rooms are ordered by the number of local members. Largest to smallest.
- `version` - Rooms are ordered by room version. Largest to smallest.
- `creator` - Rooms are ordered alphabetically by creator of the room.
- `encryption` - Rooms are ordered alphabetically by the end-to-end encryption algorithm.
- `federatable` - Rooms are ordered by whether the room is federatable.
- `public` - Rooms are ordered by visibility in room list.
- `join_rules` - Rooms are ordered alphabetically by join rules of the room.
- `guest_access` - Rooms are ordered alphabetically by guest access option of the room.
- `history_visibility` - Rooms are ordered alphabetically by visibility of history of the room.
- `state_events` - Rooms are ordered by number of state events. Largest to smallest.
* `dir` - Direction of room order. Either `f` for forwards or `b` for backwards. Setting
this value to `b` will reverse the above sort order. Defaults to `f`.
* `search_term` - Filter rooms by their room name. Search term can be contained in any
@@ -26,6 +39,16 @@ The following fields are possible in the JSON response body:
- `name` - The name of the room.
- `canonical_alias` - The canonical (main) alias address of the room.
- `joined_members` - How many users are currently in the room.
- `joined_local_members` - How many local users are currently in the room.
- `version` - The version of the room as a string.
- `creator` - The `user_id` of the room creator.
- `encryption` - Algorithm of end-to-end encryption of messages. Is `null` if encryption is not active.
- `federatable` - Whether users on other servers can join this room.
- `public` - Whether the room is visible in room directory.
- `join_rules` - The type of rules used for users wishing to join this room. One of: ["public", "knock", "invite", "private"].
- `guest_access` - Whether guests can join the room. One of: ["can_join", "forbidden"].
- `history_visibility` - Who can see the room history. One of: ["invited", "joined", "shared", "world_readable"].
- `state_events` - Total number of state events in the room. A rough measure of the room's complexity.
* `offset` - The current pagination offset in rooms. This parameter should be
used instead of `next_token` for room offset as `next_token` is
not intended to be parsed.
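
As an illustration, here is a hedged Python sketch (placeholder homeserver URL and token, and assuming the List Room API endpoint `GET /_synapse/admin/v1/rooms`) that fetches the ten largest rooms by member count:

```python
import requests

HOMESERVER = "http://localhost:8008"  # placeholder homeserver URL
ADMIN_TOKEN = "<admin access token>"  # placeholder admin token

resp = requests.get(
    f"{HOMESERVER}/_synapse/admin/v1/rooms",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    # joined_members sorts largest to smallest by default
    params={"order_by": "joined_members", "limit": 10},
)
resp.raise_for_status()
for room in resp.json()["rooms"]:
    print(room["joined_members"], room["name"])
```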
@@ -60,14 +83,34 @@ Response:
"room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
"name": "Matrix HQ",
"canonical_alias": "#matrix:matrix.org",
"joined_members": 8326
"joined_members": 8326,
"joined_local_members": 2,
"version": "1",
"creator": "@foo:matrix.org",
"encryption": null,
"federatable": true,
"public": true,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
},
... (8 hidden items) ...
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314
"joined_members": 314,
"joined_local_members": 20,
"version": "4",
"creator": "@foo:matrix.org",
"encryption": "m.megolm.v1.aes-sha2",
"federatable": true,
"public": false,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
}
],
"offset": 0,
@@ -92,7 +135,17 @@ Response:
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314
"joined_members": 314,
"joined_local_members": 20,
"version": "4",
"creator": "@foo:matrix.org",
"encryption": "m.megolm.v1.aes-sha2",
"federatable": true,
"public": false,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8
}
],
"offset": 0,
@@ -117,14 +170,34 @@ Response:
"room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
"name": "Matrix HQ",
"canonical_alias": "#matrix:matrix.org",
"joined_members": 8326
"joined_members": 8326,
"joined_local_members": 2,
"version": "1",
"creator": "@foo:matrix.org",
"encryption": null,
"federatable": true,
"public": true,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
},
... (98 hidden items) ...
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314
"joined_members": 314,
"joined_local_members": 20,
"version": "4",
"creator": "@foo:matrix.org",
"encryption": "m.megolm.v1.aes-sha2",
"federatable": true,
"public": false,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
}
],
"offset": 0,
@@ -154,6 +227,16 @@ Response:
"name": "Music Theory",
"canonical_alias": "#musictheory:matrix.org",
"joined_members": 127
"joined_local_members": 2,
"version": "1",
"creator": "@foo:matrix.org",
"encryption": null,
"federatable": true,
"public": true,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 93534
},
... (48 hidden items) ...
{
@@ -161,6 +244,16 @@ Response:
"name": "weechat-matrix",
"canonical_alias": "#weechat-matrix:termina.org.uk",
"joined_members": 137
"joined_local_members": 20,
"version": "4",
"creator": "@foo:termina.org.uk",
"encryption": null,
"federatable": true,
"public": true,
"join_rules": "invite",
"guest_access": null,
"history_visibility": "shared",
"state_events": 8345
}
],
"offset": 100,


@@ -33,12 +33,22 @@ with a body of:
including an ``access_token`` of a server admin.
The parameter ``displayname`` is optional and defaults to ``user_id``.
The parameter ``threepids`` is optional.
The parameter ``avatar_url`` is optional.
The parameter ``admin`` is optional and defaults to 'false'.
The parameter ``deactivated`` is optional and defaults to 'false'.
The parameter ``password`` is optional. If provided the user's password is updated and all devices are logged out.
The parameter ``displayname`` is optional and defaults to the value of
``user_id``.
The parameter ``threepids`` is optional and allows setting the third-party IDs
(email, msisdn) belonging to a user.
The parameter ``avatar_url`` is optional. Must be an [MXC
URI](https://matrix.org/docs/spec/client_server/r0.6.0#matrix-content-mxc-uris).
The parameter ``admin`` is optional and defaults to ``false``.
The parameter ``deactivated`` is optional and defaults to ``false``.
The parameter ``password`` is optional. If provided, the user's password is
updated and all devices are logged out.
If the user already exists then optional parameters default to the current value.
List Accounts
@@ -51,16 +61,25 @@ The api is::
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
including an ``access_token`` of a server admin.
The parameters ``from`` and ``limit`` are required only for pagination.
By default, a ``limit`` of 100 is used.
The parameter ``user_id`` can be used to select only users with user ids that
contain this value.
The parameter ``guests=false`` can be used to exclude guest users,
default is to include guest users.
The parameter ``deactivated=true`` can be used to include deactivated users,
default is to exclude deactivated users.
If the endpoint does not return a ``next_token`` then there are no more users left.
It returns a JSON body like the following:
The parameter ``from`` is optional but used for pagination, denoting the
offset in the returned results. This should be treated as an opaque value and
not explicitly set to anything other than the return value of ``next_token``
from a previous call.
The parameter ``limit`` is optional but is used for pagination, denoting the
maximum number of items to return in this call. Defaults to ``100``.
The parameter ``user_id`` is optional and filters to only users with user IDs
that contain this value.
The parameter ``guests`` is optional and if ``false`` will **exclude** guest users.
Defaults to ``true`` to include guest users.
The parameter ``deactivated`` is optional and if ``true`` will **include** deactivated users.
Defaults to ``false`` to exclude deactivated users.
A JSON body is returned with the following shape:
.. code:: json
@@ -72,19 +91,29 @@ It returns a JSON body like the following:
"is_guest": 0,
"admin": 0,
"user_type": null,
"deactivated": 0
"deactivated": 0,
"displayname": "<User One>",
"avatar_url": null
}, {
"name": "<user_id2>",
"password_hash": "<password_hash2>",
"is_guest": 0,
"admin": 1,
"user_type": null,
"deactivated": 0
"deactivated": 0,
"displayname": "<User Two>",
"avatar_url": "<avatar_url>"
}
],
"next_token": "100"
"next_token": "100",
"total": 200
}
To paginate, check for ``next_token`` and if present, call the endpoint again
with ``from`` set to the value of ``next_token``. This will return a new page.
If the endpoint does not return a ``next_token`` then there are no more users
to paginate through.
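
A small Python sketch of that pagination loop (the homeserver URL and admin token are placeholders):

.. code:: python

    import requests

    HOMESERVER = "http://localhost:8008"  # placeholder homeserver URL
    ADMIN_TOKEN = "<admin access token>"  # placeholder admin token

    params = {"limit": 100}
    while True:
        resp = requests.get(
            HOMESERVER + "/_synapse/admin/v2/users",
            headers={"Authorization": "Bearer " + ADMIN_TOKEN},
            params=params,
        )
        resp.raise_for_status()
        body = resp.json()
        for user in body["users"]:
            print(user["name"])
        if "next_token" not in body:  # no more users to paginate through
            break
        params["from"] = body["next_token"]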
Query Account
=============


@@ -23,9 +23,13 @@ namespaces:
users: # List of users we're interested in
- exclusive: <bool>
regex: <regex>
group_id: <group>
- ...
aliases: [] # List of aliases we're interested in
rooms: [] # List of room ids we're interested in
```
`exclusive`: If enabled, only this application service is allowed to register users in its namespace(s).
`group_id`: All users of this application service are dynamically joined to this group. This is useful for e.g. user organisation or flairs.
See the [spec](https://matrix.org/docs/spec/application_service/unstable.html) for further details on how application services work.
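
To make the `exclusive` flag concrete, here is an illustrative Python sketch (the regex and user IDs are invented for the example; this is not Synapse's actual matching code):

```python
import re

# Invented namespace entry, as it might appear in a registration file.
namespace = {"exclusive": True, "regex": "@_irc_.*:example.org"}
pattern = re.compile(namespace["regex"])

def registration_allowed(user_id: str, by_this_appservice: bool) -> bool:
    """An exclusive namespace reserves matching user IDs for this
    application service; user IDs outside the namespace are unaffected."""
    if namespace["exclusive"] and pattern.fullmatch(user_id):
        return by_this_appservice
    return True

print(registration_allowed("@_irc_alice:example.org", False))  # False
print(registration_allowed("@bob:example.org", False))         # True
```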

docs/dev/cas.md Normal file

@@ -0,0 +1,64 @@
# How to test CAS as a developer without a server
The [django-mama-cas](https://github.com/jbittel/django-mama-cas) project is an
easy-to-run CAS implementation built on top of Django.
## Prerequisites
1. Create a new virtualenv: `python3 -m venv <your virtualenv>`
2. Activate your virtualenv: `source /path/to/your/virtualenv/bin/activate`
3. Install Django and django-mama-cas:
```
python -m pip install "django<3" "django-mama-cas==2.4.0"
```
4. Create a Django project in the current directory:
```
django-admin startproject cas_test .
```
5. Follow the [install directions](https://django-mama-cas.readthedocs.io/en/latest/installation.html#configuring) for django-mama-cas
6. Set up the SQLite database: `python manage.py migrate`
7. Create a user:
```
python manage.py createsuperuser
```
1. Use whatever you want as the username and password.
2. Leave the other fields blank.
8. Use the built-in Django test server to serve the CAS endpoints on port 8000:
```
python manage.py runserver
```
You should now have a Django project configured to serve CAS authentication with
a single user created.
## Configure Synapse (and Riot) to use CAS
1. Modify your `homeserver.yaml` to enable CAS and point it to your locally
running Django test server:
```yaml
cas_config:
enabled: true
server_url: "http://localhost:8000"
service_url: "http://localhost:8081"
#displayname_attribute: name
#required_attributes:
# name: value
```
2. Restart Synapse.
Note that the above configuration assumes the homeserver is running on port 8081
and that the CAS server is on port 8000, both on localhost.
## Testing the configuration
Then in Riot:
1. Visit the login page with a Riot pointing at your homeserver.
2. Click the Single Sign-On button.
3. Login using the credentials created with `createsuperuser`.
4. You should be logged in.
If you want to repeat this process you'll need to log out manually first:
1. Visit http://localhost:8000/admin/
2. Click "logout" in the top right.


@@ -18,9 +18,13 @@ To make Synapse (and therefore Riot) use it:
metadata:
local: ["samling.xml"]
```
5. Run `apt-get install xmlsec1` and `pip install --upgrade --force 'pysaml2>=4.5.0'` to ensure
5. Ensure that your `homeserver.yaml` has a setting for `public_baseurl`:
```yaml
public_baseurl: http://localhost:8080/
```
6. Run `apt-get install xmlsec1` and `pip install --upgrade --force 'pysaml2>=4.5.0'` to ensure
the dependencies are installed and ready to go.
6. Restart Synapse.
7. Restart Synapse.
Then in Riot:


@@ -29,14 +29,13 @@ from synapse.logging import context # omitted from future snippets
def handle_request(request_id):
request_context = context.LoggingContext()
calling_context = context.LoggingContext.current_context()
context.LoggingContext.set_current_context(request_context)
calling_context = context.set_current_context(request_context)
try:
request_context.request = request_id
do_request_handling()
logger.debug("finished")
finally:
context.LoggingContext.set_current_context(calling_context)
context.set_current_context(calling_context)
def do_request_handling():
logger.debug("phew") # this will be logged against request_id


@@ -60,6 +60,31 @@
1. Restart Prometheus.
## Monitoring workers
To monitor a Synapse installation using
[workers](https://github.com/matrix-org/synapse/blob/master/docs/workers.md),
every worker needs to be monitored independently, in addition to
the main homeserver process. This is because workers don't send
their metrics to the main homeserver process, but expose them
directly (if they are configured to do so).
To allow collecting metrics from a worker, you need to add a
`metrics` listener to its configuration, by adding the following
under `worker_listeners`:
```yaml
- type: metrics
bind_address: ''
port: 9101
```
The `bind_address` and `port` parameters should be set so that
the resulting listener can be reached by prometheus, and they
don't clash with an existing worker.
With this example, the worker's metrics would then be available
on `http://127.0.0.1:9101`.
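
As a quick sanity check that such a listener is reachable, here is a small Python sketch (assuming the example port above, and that the listener serves the standard Prometheus text format at `/metrics`):

```python
import requests

resp = requests.get("http://127.0.0.1:9101/metrics")
resp.raise_for_status()
for line in resp.text.splitlines():
    if line.startswith("synapse_"):  # show only synapse-specific series
        print(line)
```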
## Renaming of metrics & deprecation of old names in 1.2
Synapse 1.2 updates the Prometheus metrics to match the naming


@@ -9,7 +9,11 @@ into Synapse, and provides a number of methods by which it can integrate
with the authentication system.
This document serves as a reference for those looking to implement their
own password auth providers.
own password auth providers. Additionally, here is a list of known
password auth provider module implementations:
* [matrix-synapse-ldap3](https://github.com/matrix-org/matrix-synapse-ldap3/)
* [matrix-synapse-shared-secret-auth](https://github.com/devture/matrix-synapse-shared-secret-auth)
## Required methods


@@ -61,7 +61,33 @@ Note that the PostgreSQL database *must* have the correct encoding set
You may need to enable password authentication so `synapse_user` can
connect to the database. See
<https://www.postgresql.org/docs/11/auth-pg-hba-conf.html>.
<https://www.postgresql.org/docs/current/auth-pg-hba-conf.html>.
If you get an error along the lines of `FATAL: Ident authentication failed for
user "synapse_user"`, you may need to use an authentication method other than
`ident`:
* If the `synapse_user` user has a password, add the password to the `database:`
section of `homeserver.yaml`. Then add the following to `pg_hba.conf`:
```
host synapse synapse_user ::1/128 md5 # or `scram-sha-256` instead of `md5` if you use that
```
* If the `synapse_user` user does not have a password, then a password doesn't
have to be added to `homeserver.yaml`. But the following does need to be added
to `pg_hba.conf`:
```
host synapse synapse_user ::1/128 trust
```
Note that line order matters in `pg_hba.conf`, so make sure that if you do add a
new line, it is inserted before:
```
host all all ::1/128 ident
```
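
To verify that `synapse_user` can now authenticate, a minimal `psycopg2` sketch along these lines may help (the credentials are placeholders and should match your `database` section):

```python
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    user="synapse_user",
    password="<password>",  # placeholder
    dbname="synapse",
)
with conn.cursor() as cur:
    cur.execute("SHOW SERVER_ENCODING")
    print(cur.fetchone())  # should report ('UTF8',)
conn.close()
```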
### Fixing incorrect `COLLATE` or `CTYPE`
@@ -72,8 +98,7 @@ underneath the database, or if a different version of the locale is used on any
replicas.
The safest way to fix the issue is to take a dump and recreate the database with
the correct `COLLATE` and `CTYPE` parameters (as per
[docs/postgres.md](docs/postgres.md)). It is also possible to change the
the correct `COLLATE` and `CTYPE` parameters (as shown above). It is also possible to change the
parameters on a live database and run a `REINDEX` on the entire database,
however extreme care must be taken to avoid database corruption.
@@ -105,19 +130,41 @@ of free memory the database host has available.
When you are ready to start using PostgreSQL, edit the `database`
section in your config file to match the following lines:
database:
name: psycopg2
args:
user: <user>
password: <pass>
database: <db>
host: <host>
cp_min: 5
cp_max: 10
```yaml
database:
name: psycopg2
args:
user: <user>
password: <pass>
database: <db>
host: <host>
cp_min: 5
cp_max: 10
```
All key/value pairs in `args` are passed to the `psycopg2.connect(..)`
function, except keys beginning with `cp_`, which are consumed by the
twisted adbapi connection pool.
twisted adbapi connection pool. See the [libpq
documentation](https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS)
for a list of options which can be passed.
You should consider tuning the `args.keepalives_*` options if there is any danger of
the connection between your homeserver and database dropping, otherwise Synapse
may block for an extended period while it waits for a response from the
database server. Example values might be:
```yaml
# seconds of inactivity after which TCP should send a keepalive message to the server
keepalives_idle: 10
# the number of seconds after which a TCP keepalive message that is not
# acknowledged by the server should be retransmitted
keepalives_interval: 10
# the number of TCP keepalives that can be lost before the client's connection
# to the server is considered dead
keepalives_count: 3
```
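
The split described above (everything in `args` except the `cp_*` keys going to `psycopg2.connect`, the `cp_*` keys going to the connection pool) can be sketched as follows, with illustrative values:

```python
import psycopg2

# Illustrative `args` as they might appear in homeserver.yaml.
args = {
    "user": "synapse_user",
    "password": "<pass>",
    "database": "synapse",
    "host": "localhost",
    "keepalives_idle": 10,
    "cp_min": 5,
    "cp_max": 10,
}

# cp_* keys configure the twisted adbapi connection pool; everything
# else is passed straight through to psycopg2.connect().
pool_args = {k: v for k, v in args.items() if k.startswith("cp_")}
db_args = {k: v for k, v in args.items() if not k.startswith("cp_")}

conn = psycopg2.connect(**db_args)
print(pool_args)  # {'cp_min': 5, 'cp_max': 10}
```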
## Porting from SQLite


@@ -42,6 +42,9 @@ the reverse proxy and the homeserver.
location /_matrix {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 10M;
}
}


@@ -33,10 +33,15 @@ server_name: "SERVERNAME"
#
pid_file: DATADIR/homeserver.pid
# The path to the web client which will be served at /_matrix/client/
# if 'webclient' is configured under the 'listeners' configuration.
# The absolute URL to the web client which /_matrix/client will redirect
# to if 'webclient' is configured under the 'listeners' configuration.
#
#web_client_location: "/path/to/web/root"
# This option can be also set to the filesystem path to the web client
# which will be served at /_matrix/client/ if 'webclient' is configured
# under the 'listeners' configuration, however this is a security risk:
# https://github.com/matrix-org/synapse#security-note
#
#web_client_location: https://riot.example.com/
# The public-facing base URL that clients use to access this HS
# (not including _matrix/...). This is the same URL a user would
@@ -248,6 +253,18 @@ listeners:
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
# Forward extremities can build up in a room due to networking delays between
# homeservers. Once this happens in a large room, calculation of the state of
# that room can become quite expensive. To mitigate this, once the number of
# forward extremities reaches a given threshold, Synapse will send an
# org.matrix.dummy_event event, which will reduce the forward extremities
# in the room.
#
# This setting defines the threshold (i.e. number of forward extremities in the
# room) at which dummy events are sent. The default value is 10.
#
#dummy_events_threshold: 5
## Homeserver blocking ##
@@ -409,6 +426,16 @@ retention:
# longest_max_lifetime: 1y
# interval: 1d
# Inhibits the /requestToken endpoints from returning an error that might leak
# information about whether an e-mail address is in use or not on this
# homeserver.
# Note that for some endpoints the error situation is the e-mail already being
# used, and for others the error is the e-mail being unused.
# If this option is enabled, instead of returning an error, these endpoints will
# act as if no error happened and return a fake session ID ('sid') to clients.
#
#request_token_inhibit_3pid_errors: true
## TLS ##
@@ -578,13 +605,46 @@ acme:
## Database ##
# The 'database' setting defines the database that synapse uses to store all of
# its data.
#
# 'name' gives the database engine to use: either 'sqlite3' (for SQLite) or
# 'psycopg2' (for PostgreSQL).
#
# 'args' gives options which are passed through to the database engine,
# except for options starting 'cp_', which are used to configure the Twisted
# connection pool. For a reference to valid arguments, see:
# * for sqlite: https://docs.python.org/3/library/sqlite3.html#sqlite3.connect
# * for postgres: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
# * for the connection pool: https://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.ConnectionPool.html#__init__
#
#
# Example SQLite configuration:
#
#database:
# name: sqlite3
# args:
# database: /path/to/homeserver.db
#
#
# Example Postgres configuration:
#
#database:
# name: psycopg2
# args:
# user: synapse
# password: secretpassword
# database: synapse
# host: localhost
# cp_min: 5
# cp_max: 10
#
# For more information on using Synapse with Postgres, see `docs/postgres.md`.
#
database:
# The database engine name
name: "sqlite3"
# Arguments to pass to the engine
name: sqlite3
args:
# Path to the database
database: "DATADIR/homeserver.db"
database: DATADIR/homeserver.db
# Number of events to cache in memory.
#
@@ -697,12 +757,11 @@ media_store_path: "DATADIR/media_store"
#
#media_storage_providers:
# - module: file_system
# # Whether to write new local files.
# # Whether to store newly uploaded local files
# store_local: false
# # Whether to write new remote media
# # Whether to store newly downloaded remote files
# store_remote: false
# # Whether to block upload requests waiting for write to this
# # provider to complete
# # Whether to wait for successful storage for local uploads
# store_synchronous: false
# config:
# directory: /mnt/some/other/directory
@@ -821,6 +880,31 @@ media_store_path: "DATADIR/media_store"
#
#max_spider_size: 10M
# A list of values for the Accept-Language HTTP header used when
# downloading webpages during URL preview generation. This allows
# Synapse to specify the preferred languages that URL previews should
# be in when communicating with remote servers.
#
# Each value is an IETF language tag; a 2-3 letter identifier for a
# language, optionally followed by subtags separated by '-', specifying
# a country or region variant.
#
# Multiple values can be provided, and a weight can be added to each by
# using quality value syntax (;q=). '*' translates to any language.
#
# Defaults to "en".
#
# Example:
#
# url_preview_accept_language:
# - en-UK
# - en-US;q=0.9
# - fr;q=0.8
# - *;q=0.7
#
#url_preview_accept_language:
# - en
## Captcha ##
# See docs/CAPTCHA_SETUP for full details of configuring this.
@@ -839,10 +923,6 @@ media_store_path: "DATADIR/media_store"
#
#enable_registration_captcha: false
# A secret key used to bypass the captcha test entirely.
#
#captcha_bypass_secret: "YOUR_SECRET_HERE"
# The API endpoint to use for verifying m.login.recaptcha responses.
#
#recaptcha_siteverify_api: "https://www.recaptcha.net/recaptcha/api/siteverify"
@@ -1057,6 +1137,29 @@ account_threepid_delegates:
#email: https://example.com # Delegate email sending to example.com
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
# Whether users are allowed to change their displayname after it has
# been initially set. Useful when provisioning users based on the
# contents of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_displayname: false
# Whether users are allowed to change their avatar after it has been
# initially set. Useful when provisioning users based on the contents
# of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_avatar_url: false
# Whether users can change the 3PIDs associated with their accounts
# (email address and msisdn).
#
# Defaults to 'true'
#
#enable_3pid_changes: false
# Users who register on this homeserver will automatically be joined
# to these rooms
#
@@ -1092,7 +1195,7 @@ account_threepid_delegates:
# enabled by default, either for performance reasons or limited use.
#
metrics_flags:
# Publish synapse_federation_known_servers, a g auge of the number of
# Publish synapse_federation_known_servers, a gauge of the number of
# servers this homeserver knows about, including itself. May cause
# performance problems on large homeservers.
#
@@ -1258,32 +1361,32 @@ saml2_config:
# remote:
# - url: https://our_idp/metadata.xml
#
# # By default, the user has to go to our login page first. If you'd like
# # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# # 'service.sp' section:
# #
# #service:
# # sp:
# # allow_unsolicited: true
# # By default, the user has to go to our login page first. If you'd like
# # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# # 'service.sp' section:
# #
# #service:
# # sp:
# # allow_unsolicited: true
#
# # The examples below are just used to generate our metadata xml, and you
# # may well not need them, depending on your setup. Alternatively you
# # may need a whole lot more detail - see the pysaml2 docs!
# # The examples below are just used to generate our metadata xml, and you
# # may well not need them, depending on your setup. Alternatively you
# # may need a whole lot more detail - see the pysaml2 docs!
#
# description: ["My awesome SP", "en"]
# name: ["Test SP", "en"]
# description: ["My awesome SP", "en"]
# name: ["Test SP", "en"]
#
# organization:
# name: Example com
# display_name:
# - ["Example co", "en"]
# url: "http://example.com"
# organization:
# name: Example com
# display_name:
# - ["Example co", "en"]
# url: "http://example.com"
#
# contact_person:
# - given_name: Bob
# sur_name: "the Sysadmin"
# email_address": ["admin@example.com"]
# contact_type": technical
# contact_person:
# - given_name: Bob
# sur_name: "the Sysadmin"
# email_address": ["admin@example.com"]
# contact_type": technical
# Instead of putting the config inline as above, you can specify a
# separate pysaml2 configuration file:
@@ -1392,6 +1495,10 @@ sso:
# phishing attacks from evil.site. To avoid this, include a slash after the
# hostname: "https://my.client/".
#
# If public_baseurl is set, then the login fallback page (used by clients
# that don't natively support the required login flows) is whitelisted in
# addition to any URLs in this list.
#
# By default, this list is empty.
#
#client_whitelist:
@@ -1423,6 +1530,30 @@ sso:
#
# * server_name: the homeserver's name.
#
# * HTML page which notifies the user that they are authenticating to confirm
# an operation on their account during the user interactive authentication
# process: 'sso_auth_confirm.html'.
#
# When rendering, this template is given the following variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * description: the operation which the user is being asked to confirm
#
# * HTML page shown after a successful user interactive authentication session:
# 'sso_auth_success.html'.
#
# Note that this page must include the JavaScript which notifies of a successful authentication
# (see https://matrix.org/docs/spec/client_server/r0.6.0#fallback).
#
# This template has no additional variables.
#
# * HTML page shown during single sign-on if a deactivated user (according to Synapse's database)
# attempts to login: 'sso_account_deactivated.html'.
#
# This template has no additional variables.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
@@ -1453,6 +1584,41 @@ password_config:
#
#pepper: "EVEN_MORE_SECRET"
# Define and enforce a password policy. Each parameter is optional.
# This is an implementation of MSC2000.
#
policy:
# Whether to enforce the password policy.
# Defaults to 'false'.
#
#enabled: true
# Minimum accepted length for a password.
# Defaults to 0.
#
#minimum_length: 15
# Whether a password must contain at least one digit.
# Defaults to 'false'.
#
#require_digit: true
# Whether a password must contain at least one symbol.
# A symbol is any character that's not a number or a letter.
# Defaults to 'false'.
#
#require_symbol: true
# Whether a password must contain at least one lowercase letter.
# Defaults to 'false'.
#
#require_lowercase: true
# Whether a password must contain at least one uppercase letter.
# Defaults to 'false'.
#
#require_uppercase: true
# Configuration for sending emails from Synapse.
#
@@ -1561,7 +1727,19 @@ email:
#template_dir: "res/templates"
#password_providers:
# Password providers allow homeserver administrators to integrate
# their Synapse installation with existing authentication methods
# e.g. LDAP, external tokens, etc.
#
# For more information and known implementations, please see
# https://github.com/matrix-org/synapse/blob/master/docs/password_auth_providers.md
#
# Note: instances wishing to use SAML or CAS authentication should
# instead use the `saml2_config` or `cas_config` options,
# respectively.
#
password_providers:
# # Example config for an LDAP auth provider
# - module: "ldap_auth_provider.LdapAuthProvider"
# config:
# enabled: true


@@ -0,0 +1,67 @@
# Setting up Synapse with Workers and Systemd
This is a setup for managing synapse with systemd, including support for
managing workers. It provides a `matrix-synapse` service for the master, as
well as a `matrix-synapse-worker@` service template for any workers you
require. Additionally, to group the required services, it sets up a
`matrix-synapse.target`.
See the folder [system](system) for the systemd unit files.
The folder [workers](workers) contains an example configuration for the
`federation_reader` worker.
## Synapse configuration files
See [workers.md](../workers.md) for information on how to set up the
configuration files and reverse-proxy correctly. You can find an example worker
config in the [workers](workers) folder.
Systemd manages daemonization itself, so ensure that none of the configuration
files set either `daemonize` or `worker_daemonize`.
The config files of all workers are expected to be located in
`/etc/matrix-synapse/workers`. If you want to use a different location, edit
the provided `*.service` files accordingly.
There is no need for a separate configuration file for the master process.
## Set up
1. Adjust synapse configuration files as above.
1. Copy the `*.service` and `*.target` files in [system](system) to
`/etc/systemd/system`.
1. Run `systemctl daemon-reload` to tell systemd to load the new unit files.
1. Run `systemctl enable matrix-synapse.service`. This will configure the
synapse master process to be started as part of the `matrix-synapse.target`
target.
1. For each worker process to be enabled, run `systemctl enable
matrix-synapse-worker@<worker_name>.service`. For each `<worker_name>`, there
should be a corresponding configuration file
`/etc/matrix-synapse/workers/<worker_name>.yaml`.
1. Start all the synapse processes with `systemctl start matrix-synapse.target`.
1. Tell systemd to start synapse on boot with `systemctl enable matrix-synapse.target`.
## Usage
Once the services are correctly set up, you can use the following commands
to manage your synapse installation:
```sh
# Restart Synapse master and all workers
systemctl restart matrix-synapse.target
# Stop Synapse and all workers
systemctl stop matrix-synapse.target
# Restart the master alone
systemctl start matrix-synapse.service
# Restart a specific worker (e.g. federation_reader); the master is
# unaffected by this.
systemctl restart matrix-synapse-worker@federation_reader.service
# Add a new worker (assuming all configs are set up already)
systemctl enable matrix-synapse-worker@federation_writer.service
systemctl restart matrix-synapse.target
```


@@ -0,0 +1,20 @@
[Unit]
Description=Synapse %i
# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target
[Service]
Type=notify
NotifyAccess=main
User=matrix-synapse
WorkingDirectory=/var/lib/matrix-synapse
EnvironmentFile=/etc/default/matrix-synapse
ExecStart=/opt/venvs/matrix-synapse/bin/python -m synapse.app.generic_worker --config-path=/etc/matrix-synapse/homeserver.yaml --config-path=/etc/matrix-synapse/conf.d/ --config-path=/etc/matrix-synapse/workers/%i.yaml
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=3
SyslogIdentifier=matrix-synapse-%i
[Install]
WantedBy=matrix-synapse.target


@@ -1,5 +1,8 @@
[Unit]
Description=Synapse Matrix Homeserver
Description=Synapse master
# This service should be restarted when the synapse target is restarted.
PartOf=matrix-synapse.target
[Service]
Type=notify
@@ -15,4 +18,4 @@ RestartSec=3
SyslogIdentifier=matrix-synapse
[Install]
WantedBy=matrix.target
WantedBy=matrix-synapse.target


@@ -0,0 +1,6 @@
[Unit]
Description=Synapse parent target
After=network.target
[Install]
WantedBy=multi-user.target


@@ -10,5 +10,4 @@ worker_listeners:
resources:
- names: [federation]
worker_daemonize: false
worker_log_config: /etc/matrix-synapse/federation-reader-log.yaml


@@ -14,16 +14,18 @@ example flow would be (where '>' indicates master to worker and
'<' worker to master flows):
> SERVER example.com
< REPLICATE events 53
> RDATA events 54 ["$foo1:bar.com", ...]
> RDATA events 55 ["$foo4:bar.com", ...]
< REPLICATE
> POSITION events master 53
> RDATA events master 54 ["$foo1:bar.com", ...]
> RDATA events master 55 ["$foo4:bar.com", ...]
The example shows the server accepting a new connection and sending its
identity with the `SERVER` command, followed by the client asking to
subscribe to the `events` stream from the token `53`. The server then
periodically sends `RDATA` commands which have the format
`RDATA <stream_name> <token> <row>`, where the format of `<row>` is
defined by the individual streams.
The example shows the server accepting a new connection and sending its identity
with the `SERVER` command, followed by the client asking the server to respond with the
position of all streams. The server then periodically sends `RDATA` commands
which have the format `RDATA <stream_name> <instance_name> <token> <row>`, where
the format of `<row>` is defined by the individual streams. The
`<instance_name>` is the name of the Synapse process that generated the data
(usually "master").
Error reporting happens by either the client or server sending an ERROR
command, and usually the connection will be closed.
Since the protocol is a simple line-based one, it's possible to manually
connect to the server using a tool like netcat. A few things should be
noted when manually using the protocol:
- When subscribing to a stream using `REPLICATE`, the special token
`NOW` can be used to get all future updates. The special stream name
`ALL` can be used with `NOW` to subscribe to all available streams.
- The federation stream is only available if federation sending has
been disabled on the main process.
- The server will only time connections out that have sent a `PING`
@@ -55,7 +54,7 @@ The basic structure of the protocol is line based, where the initial
word of each line specifies the command. The rest of the line is parsed
based on the command. For example, the RDATA command is defined as:
RDATA <stream_name> <token> <row_json>
RDATA <stream_name> <instance_name> <token> <row_json>
(Note that <row_json> may contain spaces, but cannot contain
newlines.)
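
A hedged Python sketch of parsing such a line (not Synapse's actual parser) might look like this:

```python
import json

def parse_rdata(line: str):
    """Parse `RDATA <stream_name> <instance_name> <token> <row_json>`.
    <row_json> may contain spaces, so split at most four times."""
    cmd, stream_name, instance_name, token, row_json = line.split(" ", 4)
    assert cmd == "RDATA"
    # The token is either a numeric stream position or the literal "batch".
    position = None if token == "batch" else int(token)
    return stream_name, instance_name, position, json.loads(row_json)

print(parse_rdata(
    'RDATA caches master 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]'
))
```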
@@ -91,9 +90,7 @@ The client:
- Sends a `NAME` command, allowing the server to associate a human
friendly name with the connection. This is optional.
- Sends a `PING` as above
- For each stream the client wishes to subscribe to it sends a
`REPLICATE` with the `stream_name` and token it wants to subscribe
from.
- Sends a `REPLICATE` to get the current position of all streams.
- On receipt of a `SERVER` command, checks that the server name
matches the expected server name.
@@ -140,14 +137,12 @@ the wire:
> PING 1490197665618
< NAME synapse.app.appservice
< PING 1490197665618
< REPLICATE events 1
< REPLICATE backfill 1
< REPLICATE caches 1
> POSITION events 1
> POSITION backfill 1
> POSITION caches 1
> RDATA caches 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
> RDATA events 14 ["$149019767112vOHxz:localhost:8823",
< REPLICATE
> POSITION events master 1
> POSITION backfill master 1
> POSITION caches master 1
> RDATA caches master 2 ["get_user_by_id",["@01register-user:localhost:8823"],1490197670513]
> RDATA events master 14 ["$149019767112vOHxz:localhost:8823",
"!AFDCvgApUmpdfVjIXm:localhost:8823","m.room.guest_access","",null]
< PING 1490197675618
> ERROR server stopping
@@ -158,10 +153,10 @@ position without needing to send data with the `RDATA` command.
An example of a batched set of `RDATA` is:
> RDATA caches batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
> RDATA caches batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
> RDATA caches batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
> RDATA caches 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
> RDATA caches master batch ["get_user_by_id",["@test:localhost:8823"],1490197670513]
> RDATA caches master batch ["get_user_by_id",["@test2:localhost:8823"],1490197670513]
> RDATA caches master batch ["get_user_by_id",["@test3:localhost:8823"],1490197670513]
> RDATA caches master 54 ["get_user_by_id",["@test4:localhost:8823"],1490197670513]
In this case the client shouldn't advance its caches token until it
sees the last `RDATA`.
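
A self-contained sketch of that buffering rule (illustrative, not Synapse's implementation; a `batch` token is represented as `None`, as in the parsing sketch above):

```python
pending = []
positions = {"caches": 0}

def apply_row(row):
    print("applying", row)

def on_rdata(stream, token, row):
    """Buffer rows while the token is the literal "batch" (None here);
    flush and advance the stream position only on the final RDATA."""
    pending.append(row)
    if token is None:
        return
    for buffered in pending:
        apply_row(buffered)
    pending.clear()
    positions[stream] = token

on_rdata("caches", None, ["get_user_by_id", ["@test:localhost:8823"], 1490197670513])
on_rdata("caches", 54, ["get_user_by_id", ["@test4:localhost:8823"], 1490197670513])
print(positions)  # {'caches': 54}
```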
@@ -181,9 +176,14 @@ client (C):
#### POSITION (S)
The position of the stream has been updated. Sent to the client
after all missing updates for a stream have been sent to the client
and they're now up to date.
On receipt of a POSITION command clients should check if they have missed any
updates, and if so then fetch them out of band. Sent in response to a
REPLICATE command (but can happen at any time).
The POSITION command includes the source of the stream. Currently all streams
are written by a single process (usually "master"). If fetching missing
updates via HTTP API, rather than via the DB, then processes should make the
request to the appropriate process.
#### ERROR (S, C)
@@ -199,24 +199,17 @@ client (C):
#### REPLICATE (C)
Asks the server to replicate a given stream. The syntax is:
```
REPLICATE <stream_name> <token>
```
Where `<token>` may be either:
* a numeric stream_id to stream updates since (exclusive)
* `NOW` to stream all subsequent updates.
The `<stream_name>` is the name of a replication stream to subscribe
to (see [here](../synapse/replication/tcp/streams/_base.py) for a list
of streams). It can also be `ALL` to subscribe to all known streams,
in which case the `<token>` must be set to `NOW`.
Asks the server for the current position of all streams.
#### USER_SYNC (C)
A user has started or stopped syncing
A user has started or stopped syncing on this process.
#### CLEAR_USER_SYNC (C)
The server should clear all associated user sync data from the worker.
This is used when a worker is shutting down.
#### FEDERATION_ACK (C)
@@ -230,10 +223,6 @@ in which case the `<token>` must be set to `NOW`.
Inform the server a cache should be invalidated
#### SYNC (S, C)
Used exclusively in tests
### REMOTE_SERVER_UP (S, C)
Inform other processes that a remote server may have come back online.
@@ -252,12 +241,12 @@ Each individual cache invalidation results in a row being sent down
replication, which includes the cache name (the name of the function)
and the key to invalidate. For example:
> RDATA caches 550953771 ["get_user_by_id", ["@bob:example.com"], 1550574873251]
> RDATA caches master 550953771 ["get_user_by_id", ["@bob:example.com"], 1550574873251]
Alternatively, an entire cache can be invalidated by sending down a `null`
instead of the key. For example:
> RDATA caches 550953772 ["get_user_by_id", null, 1550574873252]
> RDATA caches master 550953772 ["get_user_by_id", null, 1550574873252]
However, there are times when a number of caches need to be invalidated
at the same time with the same key. To reduce traffic we batch those

View File

@@ -11,6 +11,13 @@ TURN server.
The following sections describe how to install [coturn](<https://github.com/coturn/coturn>) (which implements the TURN REST API) and integrate it with synapse.
## Requirements
For TURN relaying with `coturn` to work, it must be hosted on a server/endpoint with a public IP.
Hosting TURN behind a NAT (even with appropriate port forwarding) is known to cause issues
and often does not work.
## `coturn` Setup
### Initial installation
@@ -113,7 +120,7 @@ Your home server configuration file needs the following extra keys:
As an example, here is the relevant section of the config file for matrix.org:
turn_uris: [ "turn:turn.matrix.org:3478?transport=udp", "turn:turn.matrix.org:3478?transport=tcp" ]
turn_shared_secret: n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons
turn_shared_secret: "n0t4ctuAllymatr1Xd0TorgSshar3d5ecret4obvIousreAsons"
turn_user_lifetime: 86400000
turn_allow_guests: True


@@ -1,23 +1,31 @@
# Scaling synapse via workers
Synapse has experimental support for splitting out functionality into
multiple separate python processes, helping greatly with scalability. These
For small instances it is recommended to run Synapse in monolith mode (the
default). For larger instances where performance is a concern it can be helpful
to split out functionality into multiple separate python processes. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.
All of the below is highly experimental and subject to change as Synapse evolves,
but documenting it here to help folks needing highly scalable Synapses similar
to the one running matrix.org!
Synapse's worker support is under active development and subject to change as
we attempt to rapidly scale ever larger Synapse instances. However we are
documenting it here to help admins needing a highly scalable Synapse instance
similar to the one running `matrix.org`.
All processes continue to share the same database instance, and as such, workers
only work with postgres based synapse deployments (sharing a single sqlite
across multiple processes is a recipe for disaster, plus you should be using
postgres anyway if you care about scalability).
All processes continue to share the same database instance, and as such,
workers only work with PostgreSQL-based Synapse deployments. SQLite should only
be used for demo purposes and any admin considering workers should already be
running PostgreSQL.
The workers communicate with the master synapse process via a synapse-specific
TCP protocol called 'replication' - analogous to MySQL or Postgres style
database replication; feeding a stream of relevant data to the workers so they
can be kept in sync with the main synapse process and database state.
## Master/worker communication
The workers communicate with the master process via a Synapse-specific protocol
called 'replication' (analogous to MySQL- or Postgres-style database
replication) which feeds a stream of relevant data from the master to the
workers so they can be kept in sync with the master process and database state.
Additionally, workers may make HTTP requests to the master, to send information
in the other direction. Typically this is used for operations which need to
wait for a reply - such as sending an event.
## Configuration
@@ -27,72 +35,61 @@ the correct worker, or to the main synapse instance. Note that this includes
requests made to the federation port. See [reverse_proxy.md](reverse_proxy.md)
for information on setting up a reverse proxy.
To enable workers, you need to add two replication listeners to the master
synapse, e.g.:
To enable workers, you need to add *two* replication listeners to the
main Synapse configuration file (`homeserver.yaml`). For example:
listeners:
# The TCP replication port
- port: 9092
bind_address: '127.0.0.1'
type: replication
# The HTTP replication port
- port: 9093
bind_address: '127.0.0.1'
type: http
resources:
- names: [replication]
```yaml
listeners:
# The TCP replication port
- port: 9092
bind_address: '127.0.0.1'
type: replication
# The HTTP replication port
- port: 9093
bind_address: '127.0.0.1'
type: http
resources:
- names: [replication]
```
Under **no circumstances** should these replication API listeners be exposed to
the public internet; it currently implements no authentication whatsoever and is
unencrypted.
the public internet; they have no authentication and are unencrypted.
(Roughly, the TCP port is used for streaming data from the master to the
workers, and the HTTP port for the workers to send data to the main
synapse process.)
You should then create a set of configs for the various worker processes. Each
worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that
worker, e.g. the HTTP listener that it provides (if any); logging
configuration; etc. You should minimise the number of overrides though to
maintain a usable config.
You then create a set of configs for the various worker processes. These
should be worker configuration files, and should be stored in a dedicated
subdirectory, to allow synctl to manipulate them. An additional configuration
for the master synapse process will need to be created because the process will
not be started automatically. That configuration should look like this:
In the config file for each worker, you must specify the type of worker
application (`worker_app`). The currently available worker applications are
listed below. You must also specify the replication endpoints that it should
talk to on the main synapse process. `worker_replication_host` should specify
the host of the main synapse, `worker_replication_port` should point to the TCP
replication listener port and `worker_replication_http_port` should point to
the HTTP replication port.
worker_app: synapse.app.homeserver
daemonize: true
For example:
Each worker configuration file inherits the configuration of the main homeserver
configuration file. You can then override configuration specific to that worker,
e.g. the HTTP listener that it provides (if any); logging configuration; etc.
You should minimise the number of overrides though to maintain a usable config.
```yaml
worker_app: synapse.app.synchrotron
You must specify the type of worker application (`worker_app`). The currently
available worker applications are listed below. You must also specify the
replication endpoints that it's talking to on the main synapse process.
`worker_replication_host` should specify the host of the main synapse,
`worker_replication_port` should point to the TCP replication listener port and
`worker_replication_http_port` should point to the HTTP replication port.
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
Currently, the `event_creator` and `federation_reader` workers require specifying
`worker_replication_http_port`.
worker_listeners:
- type: http
port: 8083
resources:
- names:
- client
For instance:
worker_app: synapse.app.synchrotron
# The replication listener on the synapse to talk to.
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: 8083
resources:
- names:
- client
worker_daemonize: True
worker_pid_file: /home/matrix/synapse/synchrotron.pid
worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
worker_log_config: /home/matrix/synapse/config/synchrotron_log_config.yaml
```
...is a full configuration for a synchrotron worker instance, which will expose a
plain HTTP `/sync` endpoint on port 8083 separately from the `/sync` endpoint provided
@@ -101,7 +98,75 @@ by the main synapse.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (`localhost:8083` in the above example).
Finally, to actually run your worker-based synapse, you must pass synctl the -a
Finally, you need to start your worker processes. This can be done with either
`synctl` or your distribution's preferred service manager such as `systemd`. We
recommend the use of `systemd` where available: for information on setting up
`systemd` to start synapse workers, see
[systemd-with-workers](systemd-with-workers). To use `synctl`, see below.
### **Experimental** support for replication over redis
As of Synapse v1.13.0, it is possible to configure Synapse to send replication
via a [Redis pub/sub channel](https://redis.io/topics/pubsub). This is an
alternative to direct TCP connections to the master: rather than all the
workers connecting to the master, all the workers and the master connect to
Redis, which relays replication commands between processes. This can give a
significant cpu saving on the master and will be a prerequisite for upcoming
performance improvements.
Note that this support is currently experimental; you may experience lost
messages and similar problems! It is strongly recommended that admins setting
up workers for the first time use direct TCP replication as above.
To configure Synapse to use Redis:
1. Install Redis following the normal procedure for your distribution - for
example, on Debian, `apt install redis-server`. (It is safe to use an
existing Redis deployment if you have one: we use a pub/sub stream named
according to the `server_name` of your synapse server.)
2. Check Redis is running and accessible: you should be able to `echo PING | nc -q1
localhost 6379` and get a response of `+PONG` (an equivalent Python check is
sketched after this list).
3. Install the python prerequisites. If you installed synapse into a
virtualenv, this can be done with:
```sh
pip install matrix-synapse[redis]
```
The debian packages from matrix.org already include the required
dependencies.
4. Add config to the shared configuration (`homeserver.yaml`):
```yaml
redis:
enabled: true
```
Optional parameters which can go alongside `enabled` are `host`, `port`,
`password`. Normally none of these are required.
5. Restart master and all workers.
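
The connectivity check from step 2 can also be done from Python with a plain socket; Redis understands inline commands terminated by CRLF, so this mirrors the `nc` invocation:

```python
import socket

# Equivalent of `echo PING | nc -q1 localhost 6379`.
with socket.create_connection(("localhost", 6379), timeout=2) as sock:
    sock.sendall(b"PING\r\n")
    print(sock.recv(64))  # expect b'+PONG\r\n'
```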
Once redis replication is in use, `worker_replication_port` is redundant and
can be removed from the worker configuration files. Similarly, the
configuration for the `listener` for the TCP replication port can be removed
from the main configuration file. Note that the HTTP replication port is
still required.
### Using synctl
If you want to use `synctl` to manage your synapse processes, you will need to
create an additional configuration file for the master synapse process. That
configuration should look like this:
```yaml
worker_app: synapse.app.homeserver
```
Additionally, each worker app must be configured with the name of a "pid file",
to which it will write its process ID when it starts. For example, for a
synchrotron, you might write:
```yaml
worker_pid_file: /home/matrix/synapse/synchrotron.pid
```
Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.:
@@ -268,6 +333,8 @@ Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|unstable)/pushrules/.*$
^/_matrix/client/(api/v1|r0|unstable)/groups/.*$
^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/account_data/
^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/rooms/[^/]*/account_data/
Additionally, the following REST endpoints can be handled, but all requests must
be routed to the same instance:

View File

@@ -33,6 +33,10 @@ parts:
python-version: python3
python-packages:
- '.[all]'
- pip
- setuptools
- setuptools-scm
- wheel
build-packages:
- libffi-dev
- libturbojpeg0-dev


@@ -0,0 +1,13 @@
from .sorteddict import (
SortedDict,
SortedKeysView,
SortedItemsView,
SortedValuesView,
)
__all__ = [
"SortedDict",
"SortedKeysView",
"SortedItemsView",
"SortedValuesView",
]


@@ -0,0 +1,124 @@
# stub for SortedDict. This is a lightly edited copy of
# https://github.com/grantjenks/python-sortedcontainers/blob/eea42df1f7bad2792e8da77335ff888f04b9e5ae/sortedcontainers/sorteddict.pyi
# (from https://github.com/grantjenks/python-sortedcontainers/pull/107)
from typing import (
Any,
Callable,
Dict,
Hashable,
Iterator,
Iterable,
ItemsView,
KeysView,
List,
Mapping,
Optional,
Sequence,
Type,
TypeVar,
Tuple,
Union,
ValuesView,
overload,
)
_T = TypeVar("_T")
_S = TypeVar("_S")
_T_h = TypeVar("_T_h", bound=Hashable)
_KT = TypeVar("_KT", bound=Hashable) # Key type.
_VT = TypeVar("_VT") # Value type.
_KT_co = TypeVar("_KT_co", covariant=True, bound=Hashable)
_VT_co = TypeVar("_VT_co", covariant=True)
_SD = TypeVar("_SD", bound=SortedDict)
_Key = Callable[[_T], Any]
class SortedDict(Dict[_KT, _VT]):
@overload
def __init__(self, **kwargs: _VT) -> None: ...
@overload
def __init__(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
@overload
def __init__(
self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT
) -> None: ...
@overload
def __init__(self, __key: _Key[_KT], **kwargs: _VT) -> None: ...
@overload
def __init__(
self, __key: _Key[_KT], __map: Mapping[_KT, _VT], **kwargs: _VT
) -> None: ...
@overload
def __init__(
self, __key: _Key[_KT], __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT
) -> None: ...
@property
def key(self) -> Optional[_Key[_KT]]: ...
@property
def iloc(self) -> SortedKeysView[_KT]: ...
def clear(self) -> None: ...
def __delitem__(self, key: _KT) -> None: ...
def __iter__(self) -> Iterator[_KT]: ...
def __reversed__(self) -> Iterator[_KT]: ...
def __setitem__(self, key: _KT, value: _VT) -> None: ...
def _setitem(self, key: _KT, value: _VT) -> None: ...
def copy(self: _SD) -> _SD: ...
def __copy__(self: _SD) -> _SD: ...
@classmethod
@overload
def fromkeys(cls, seq: Iterable[_T_h]) -> SortedDict[_T_h, None]: ...
@classmethod
@overload
def fromkeys(cls, seq: Iterable[_T_h], value: _S) -> SortedDict[_T_h, _S]: ...
def keys(self) -> SortedKeysView[_KT]: ...
def items(self) -> SortedItemsView[_KT, _VT]: ...
def values(self) -> SortedValuesView[_VT]: ...
@overload
def pop(self, key: _KT) -> _VT: ...
@overload
def pop(self, key: _KT, default: _T = ...) -> Union[_VT, _T]: ...
def popitem(self, index: int = ...) -> Tuple[_KT, _VT]: ...
def peekitem(self, index: int = ...) -> Tuple[_KT, _VT]: ...
def setdefault(self, key: _KT, default: Optional[_VT] = ...) -> _VT: ...
@overload
def update(self, __map: Mapping[_KT, _VT], **kwargs: _VT) -> None: ...
@overload
def update(self, __iterable: Iterable[Tuple[_KT, _VT]], **kwargs: _VT) -> None: ...
@overload
def update(self, **kwargs: _VT) -> None: ...
def __reduce__(
self,
) -> Tuple[
Type[SortedDict[_KT, _VT]], Tuple[Callable[[_KT], Any], List[Tuple[_KT, _VT]]],
]: ...
def __repr__(self) -> str: ...
def _check(self) -> None: ...
def islice(
self, start: Optional[int] = ..., stop: Optional[int] = ..., reverse: bool = ...,
) -> Iterator[_KT]: ...
def bisect_left(self, value: _KT) -> int: ...
def bisect_right(self, value: _KT) -> int: ...
class SortedKeysView(KeysView[_KT_co], Sequence[_KT_co]):
@overload
def __getitem__(self, index: int) -> _KT_co: ...
@overload
def __getitem__(self, index: slice) -> List[_KT_co]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
class SortedItemsView( # type: ignore
ItemsView[_KT_co, _VT_co], Sequence[Tuple[_KT_co, _VT_co]]
):
def __iter__(self) -> Iterator[Tuple[_KT_co, _VT_co]]: ...
@overload
def __getitem__(self, index: int) -> Tuple[_KT_co, _VT_co]: ...
@overload
def __getitem__(self, index: slice) -> List[Tuple[_KT_co, _VT_co]]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
class SortedValuesView(ValuesView[_VT_co], Sequence[_VT_co]):
@overload
def __getitem__(self, index: int) -> _VT_co: ...
@overload
def __getitem__(self, index: slice) -> List[_VT_co]: ...
def __delitem__(self, index: Union[int, slice]) -> None: ...
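For context, a quick sketch of the sortedcontainers behaviour these hints describe (values illustrative; this snippet is not part of the stub):

from sortedcontainers import SortedDict

sd = SortedDict()
sd[3] = "c"
sd[1] = "a"
sd[2] = "b"
assert list(sd) == [1, 2, 3]        # iteration always follows key order
assert sd.peekitem(0) == (1, "a")   # positional access without removal
assert sd.bisect_left(2) == 1       # bisect over the sorted keys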

43
stubs/txredisapi.pyi Normal file
View File

@@ -0,0 +1,43 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Contains *incomplete* type hints for txredisapi.
"""
from typing import List, Optional, Union
class RedisProtocol:
def publish(self, channel: str, message: bytes): ...
class SubscriberProtocol:
password: Optional[str]
def subscribe(self, channels: Union[str, List[str]]): ...
def connectionMade(self): ...
def connectionLost(self, reason): ...
def lazyConnection(
host: str = ...,
port: int = ...,
dbid: Optional[int] = ...,
reconnect: bool = ...,
charset: str = ...,
password: Optional[str] = ...,
connectTimeout: Optional[int] = ...,
replyTimeout: Optional[int] = ...,
convertNumbers: bool = ...,
) -> RedisProtocol: ...
class SubscriberFactory:
def buildProtocol(self, addr): ...
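A minimal, hedged sketch exercising this surface (channel name and payload are illustrative; error handling omitted):

from twisted.internet import reactor
from txredisapi import lazyConnection

def main():
    # the handle is usable straight away; commands are buffered until the
    # underlying TCP connection is established
    redis = lazyConnection(host="localhost", port=6379)
    d = redis.publish("synapse_replication", b"PING")
    d.addBoth(lambda _: reactor.stop())

reactor.callWhenRunning(main)
reactor.run()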

View File

@@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.12.0"
__version__ = "1.13.0rc3"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -26,16 +26,15 @@ from twisted.internet import defer
import synapse.logging.opentracing as opentracing
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, LimitBlockingTypes, Membership, UserTypes
from synapse.api.auth_blocking import AuthBlocking
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import (
AuthError,
Codes,
InvalidClientTokenError,
MissingClientTokenError,
ResourceLimitError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.config.server import is_threepid_reserved
from synapse.events import EventBase
from synapse.types import StateMap, UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
@@ -77,7 +76,11 @@ class Auth(object):
self.token_cache = LruCache(CACHE_SIZE_FACTOR * 10000)
register_cache("cache", "token_cache", self.token_cache)
self._auth_blocking = AuthBlocking(self.hs)
self._account_validity = hs.config.account_validity
self._track_appservice_user_ips = hs.config.track_appservice_user_ips
self._macaroon_secret_key = hs.config.macaroon_secret_key
@defer.inlineCallbacks
def check_from_context(self, room_version: str, event, context, do_sig_check=True):
@@ -191,7 +194,7 @@ class Auth(object):
opentracing.set_tag("authenticated_entity", user_id)
opentracing.set_tag("appservice_id", app_service.id)
if ip_addr and self.hs.config.track_appservice_user_ips:
if ip_addr and self._track_appservice_user_ips:
yield self.store.insert_client_ip(
user_id=user_id,
access_token=access_token,
@@ -454,7 +457,7 @@ class Auth(object):
# access_tokens include a nonce for uniqueness: any value is acceptable
v.satisfy_general(lambda c: c.startswith("nonce = "))
v.verify(macaroon, self.hs.config.macaroon_secret_key)
v.verify(macaroon, self._macaroon_secret_key)
def _verify_expiry(self, caveat):
prefix = "time < "
@@ -537,8 +540,7 @@ class Auth(object):
return defer.succeed(auth_ids)
@defer.inlineCallbacks
def check_can_change_room_list(self, room_id: str, user: UserID):
async def check_can_change_room_list(self, room_id: str, user: UserID):
"""Determine whether the user is allowed to edit the room's entry in the
published room list.
@@ -547,17 +549,17 @@ class Auth(object):
user
"""
is_admin = yield self.is_server_admin(user)
is_admin = await self.is_server_admin(user)
if is_admin:
return True
user_id = user.to_string()
yield self.check_user_in_room(room_id, user_id)
await self.check_user_in_room(room_id, user_id)
# We currently require the user is a "moderator" in the room. We do this
# by checking if they would (theoretically) be able to change the
# m.room.canonical_alias events
power_level_event = yield self.state.get_current_state(
power_level_event = await self.state.get_current_state(
room_id, EventTypes.PowerLevels, ""
)
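The hunk above converts an @defer.inlineCallbacks generator into a native coroutine. The same yield-to-await substitution applies mechanically; a minimal before/after sketch (the store and method names are illustrative):

from twisted.internet import defer

# before: Twisted generator-based coroutine
@defer.inlineCallbacks
def lookup_old(store):
    result = yield store.get_thing()
    return result

# after: native coroutine; callers now await instead of yielding
async def lookup_new(store):
    return await store.get_thing()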
@@ -664,71 +666,5 @@ class Auth(object):
% (user_id, room_id),
)
@defer.inlineCallbacks
def check_auth_blocking(self, user_id=None, threepid=None, user_type=None):
"""Checks if the user should be rejected for some external reason,
such as monthly active user limiting or global disable flag
Args:
user_id(str|None): If present, checks for presence against existing
MAU cohort
threepid(dict|None): If present, checks for presence against configured
reserved threepid. Used in cases where the user is trying to register
with a MAU-blocked server; normally they would be rejected, but their
threepid is on the reserved list. user_id and
threepid should never be set at the same time.
user_type(str|None): If present, is used to decide whether to check against
certain blocking reasons like MAU.
"""
# Never fail an auth check for the server notices user or support users
# This can be a problem where event creation is prohibited due to blocking
if user_id is not None:
if user_id == self.hs.config.server_notices_mxid:
return
if (yield self.store.is_support_user(user_id)):
return
if self.hs.config.hs_disabled:
raise ResourceLimitError(
403,
self.hs.config.hs_disabled_message,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=self.hs.config.admin_contact,
limit_type=LimitBlockingTypes.HS_DISABLED,
)
if self.hs.config.limit_usage_by_mau is True:
assert not (user_id and threepid)
# If the user is already part of the MAU cohort or a trial user
if user_id:
timestamp = yield self.store.user_last_seen_monthly_active(user_id)
if timestamp:
return
is_trial = yield self.store.is_trial_user(user_id)
if is_trial:
return
elif threepid:
# If the user does not exist yet, but is signing up with a
# reserved threepid then pass auth check
if is_threepid_reserved(
self.hs.config.mau_limits_reserved_threepids, threepid
):
return
elif user_type == UserTypes.SUPPORT:
# If the user does not exist yet and is of type "support",
# allow registration. Support users are excluded from MAU checks.
return
# Else if there is no room in the MAU bucket, bail
current_mau = yield self.store.get_monthly_active_count()
if current_mau >= self.hs.config.max_mau_value:
raise ResourceLimitError(
403,
"Monthly Active User Limit Exceeded",
admin_contact=self.hs.config.admin_contact,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
limit_type=LimitBlockingTypes.MONTHLY_ACTIVE_USER,
)
def check_auth_blocking(self, *args, **kwargs):
return self._auth_blocking.check_auth_blocking(*args, **kwargs)

View File

@@ -0,0 +1,104 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.constants import LimitBlockingTypes, UserTypes
from synapse.api.errors import Codes, ResourceLimitError
from synapse.config.server import is_threepid_reserved
logger = logging.getLogger(__name__)
class AuthBlocking(object):
def __init__(self, hs):
self.store = hs.get_datastore()
self._server_notices_mxid = hs.config.server_notices_mxid
self._hs_disabled = hs.config.hs_disabled
self._hs_disabled_message = hs.config.hs_disabled_message
self._admin_contact = hs.config.admin_contact
self._max_mau_value = hs.config.max_mau_value
self._limit_usage_by_mau = hs.config.limit_usage_by_mau
self._mau_limits_reserved_threepids = hs.config.mau_limits_reserved_threepids
@defer.inlineCallbacks
def check_auth_blocking(self, user_id=None, threepid=None, user_type=None):
"""Checks if the user should be rejected for some external reason,
such as monthly active user limiting or global disable flag
Args:
user_id(str|None): If present, checks for presence against existing
MAU cohort
threepid(dict|None): If present, checks for presence against configured
reserved threepid. Used in cases where the user is trying to register
with a MAU-blocked server; normally they would be rejected, but their
threepid is on the reserved list. user_id and
threepid should never be set at the same time.
user_type(str|None): If present, is used to decide whether to check against
certain blocking reasons like MAU.
"""
# Never fail an auth check for the server notices user or support users
# This can be a problem where event creation is prohibited due to blocking
if user_id is not None:
if user_id == self._server_notices_mxid:
return
if (yield self.store.is_support_user(user_id)):
return
if self._hs_disabled:
raise ResourceLimitError(
403,
self._hs_disabled_message,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
admin_contact=self._admin_contact,
limit_type=LimitBlockingTypes.HS_DISABLED,
)
if self._limit_usage_by_mau is True:
assert not (user_id and threepid)
# If the user is already part of the MAU cohort or a trial user
if user_id:
timestamp = yield self.store.user_last_seen_monthly_active(user_id)
if timestamp:
return
is_trial = yield self.store.is_trial_user(user_id)
if is_trial:
return
elif threepid:
# If the user does not exist yet, but is signing up with a
# reserved threepid then pass auth check
if is_threepid_reserved(self._mau_limits_reserved_threepids, threepid):
return
elif user_type == UserTypes.SUPPORT:
# If the user does not exist yet and is of type "support",
# allow registration. Support users are excluded from MAU checks.
return
# Else if there is no room in the MAU bucket, bail
current_mau = yield self.store.get_monthly_active_count()
if current_mau >= self._max_mau_value:
raise ResourceLimitError(
403,
"Monthly Active User Limit Exceeded",
admin_contact=self._admin_contact,
errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
limit_type=LimitBlockingTypes.MONTHLY_ACTIVE_USER,
)
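A hedged sketch of how this check is typically driven from a registration-style code path (the surrounding flow is illustrative, not Synapse's actual call site):

from twisted.internet import defer
from synapse.api.errors import ResourceLimitError

@defer.inlineCallbacks
def try_register(auth_blocking, user_id):
    try:
        # raises ResourceLimitError (403 / M_RESOURCE_LIMIT_EXCEEDED) when
        # the server is disabled or the MAU cap has been reached
        yield auth_blocking.check_auth_blocking(user_id=user_id)
    except ResourceLimitError as e:
        # surface the admin contact carried on the error, then re-raise
        print("registration blocked, contact %s" % (e.admin_contact,))
        raise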

View File

@@ -61,6 +61,7 @@ class LoginType(object):
MSISDN = "m.login.msisdn"
RECAPTCHA = "m.login.recaptcha"
TERMS = "m.login.terms"
SSO = "org.matrix.login.sso"
DUMMY = "m.login.dummy"
# Only for C/S API v1
@@ -96,6 +97,8 @@ class EventTypes(object):
Retention = "m.room.retention"
Presence = "m.presence"
class RejectedReason(object):
AUTH_ERROR = "auth_error"

View File

@@ -64,6 +64,13 @@ class Codes(object):
INCOMPATIBLE_ROOM_VERSION = "M_INCOMPATIBLE_ROOM_VERSION"
WRONG_ROOM_KEYS_VERSION = "M_WRONG_ROOM_KEYS_VERSION"
EXPIRED_ACCOUNT = "ORG_MATRIX_EXPIRED_ACCOUNT"
PASSWORD_TOO_SHORT = "M_PASSWORD_TOO_SHORT"
PASSWORD_NO_DIGIT = "M_PASSWORD_NO_DIGIT"
PASSWORD_NO_UPPERCASE = "M_PASSWORD_NO_UPPERCASE"
PASSWORD_NO_LOWERCASE = "M_PASSWORD_NO_LOWERCASE"
PASSWORD_NO_SYMBOL = "M_PASSWORD_NO_SYMBOL"
PASSWORD_IN_DICTIONARY = "M_PASSWORD_IN_DICTIONARY"
WEAK_PASSWORD = "M_WEAK_PASSWORD"
INVALID_SIGNATURE = "M_INVALID_SIGNATURE"
USER_DEACTIVATED = "M_USER_DEACTIVATED"
BAD_ALIAS = "M_BAD_ALIAS"
@@ -79,7 +86,14 @@ class CodeMessageException(RuntimeError):
def __init__(self, code, msg):
super(CodeMessageException, self).__init__("%d: %s" % (code, msg))
self.code = code
# Some calls to this method pass instances of http.HTTPStatus for `code`.
# While HTTPStatus is a subclass of int, it has a magic __str__ method
# which emits `HTTPStatus.FORBIDDEN` when converted to a str, instead of `403`.
# This causes inconsistency in our log lines.
#
# To eliminate this behaviour, we convert them to their integer equivalents here.
self.code = int(code)
self.msg = msg
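The behaviour being worked around is easy to reproduce (output shown in comments; assumes the pre-3.11 HTTPStatus __str__ behaviour current at the time):

from http import HTTPStatus

code = HTTPStatus.FORBIDDEN
print(str(code))       # "HTTPStatus.FORBIDDEN" - the enum's __str__
print("%d" % code)     # "403" - %d formats via the int value
print(str(int(code)))  # "403" - what the int() conversion guarantees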
@@ -439,6 +453,20 @@ class IncompatibleRoomVersionError(SynapseError):
return cs_error(self.msg, self.errcode, room_version=self._room_version)
class PasswordRefusedError(SynapseError):
"""A password has been refused, either during password reset/change or registration.
"""
def __init__(
self,
msg="This password doesn't comply with the server's policy",
errcode=Codes.WEAK_PASSWORD,
):
super(PasswordRefusedError, self).__init__(
code=400, msg=msg, errcode=errcode,
)
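A minimal sketch of a policy check raising these codes (the thresholds and checks are illustrative, not Synapse's actual policy handler):

from synapse.api.errors import Codes, PasswordRefusedError

def check_password(password: str, minimum_length: int = 15) -> None:
    if len(password) < minimum_length:
        raise PasswordRefusedError(
            msg="Password is too short", errcode=Codes.PASSWORD_TOO_SHORT,
        )
    if not any(c.isdigit() for c in password):
        raise PasswordRefusedError(errcode=Codes.PASSWORD_NO_DIGIT)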
class RequestSendFailed(RuntimeError):
"""Sending a HTTP request over federation failed due to not being able to
talk to the remote server for some reason.

View File

@@ -22,6 +22,7 @@ import sys
import traceback
from daemonize import Daemonize
from typing_extensions import NoReturn
from twisted.internet import defer, error, reactor
from twisted.protocols.tls import TLSMemoryBIOFactory
@@ -139,9 +140,9 @@ def start_reactor(
run()
def quit_with_error(error_string):
def quit_with_error(error_string: str) -> NoReturn:
message_lines = error_string.split("\n")
line_length = max(len(l) for l in message_lines if len(l) < 80) + 2
line_length = max(len(line) for line in message_lines if len(line) < 80) + 2
sys.stderr.write("*" * line_length + "\n")
for line in message_lines:
sys.stderr.write(" %s\n" % (line.rstrip(),))
@@ -270,7 +271,7 @@ def start(hs, listeners=None):
# Start the tracer
synapse.logging.opentracing.init_tracer( # type: ignore[attr-defined] # noqa
hs.config
hs
)
# It is now safe to start your Synapse.
@@ -316,7 +317,7 @@ def setup_sentry(hs):
scope.set_tag("matrix_server_name", hs.config.server_name)
app = hs.config.worker_app if hs.config.worker_app else "synapse.app.homeserver"
name = hs.config.worker_name if hs.config.worker_name else "master"
name = hs.get_instance_name()
scope.set_tag("worker_app", app)
scope.set_tag("worker_name", name)

View File

@@ -43,7 +43,6 @@ from synapse.replication.slave.storage.push_rule import SlavedPushRuleStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string
@@ -79,17 +78,6 @@ class AdminCmdServer(HomeServer):
def start_listening(self, listeners):
pass
def build_tcp_replication(self):
return AdminCmdReplicationHandler(self)
class AdminCmdReplicationHandler(ReplicationClientHandler):
async def on_rdata(self, stream_name, token, rows):
pass
def get_streams_to_replicate(self):
return {}
@defer.inlineCallbacks
def export_data_command(hs, args):

View File

@@ -17,6 +17,9 @@
import contextlib
import logging
import sys
from typing import Dict, Iterable
from typing_extensions import ContextManager
from twisted.internet import defer, reactor
from twisted.web.resource import NoResource
@@ -38,14 +41,14 @@ from synapse.config.homeserver import HomeServerConfig
from synapse.config.logger import setup_logging
from synapse.federation import send_queue
from synapse.federation.transport.server import TransportLayerServer
from synapse.handlers.presence import PresenceHandler, get_interested_parties
from synapse.handlers.presence import BasePresenceHandler, get_interested_parties
from synapse.http.server import JsonResource
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseSite
from synapse.logging.context import LoggingContext, run_in_background
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.slave.storage._base import BaseSlavedStore, __func__
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.account_data import SlavedAccountDataStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
@@ -64,13 +67,25 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams._base import (
from synapse.replication.tcp.client import ReplicationDataHandler
from synapse.replication.tcp.commands import ClearUserSyncsCommand
from synapse.replication.tcp.streams import (
AccountDataStream,
DeviceListsStream,
GroupServerStream,
PresenceStream,
PushersStream,
PushRulesStream,
ReceiptsStream,
TagAccountDataStream,
ToDeviceStream,
TypingStream,
)
from synapse.replication.tcp.streams.events import (
EventsStream,
EventsStreamEventRow,
EventsStreamRow,
)
from synapse.replication.tcp.streams.events import EventsStreamEventRow, EventsStreamRow
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
@@ -98,6 +113,10 @@ from synapse.rest.client.v1.voip import VoipRestServlet
from synapse.rest.client.v2_alpha import groups, sync, user_directory
from synapse.rest.client.v2_alpha._base import client_patterns
from synapse.rest.client.v2_alpha.account import ThreepidRestServlet
from synapse.rest.client.v2_alpha.account_data import (
AccountDataServlet,
RoomAccountDataServlet,
)
from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet
from synapse.rest.client.v2_alpha.register import RegisterRestServlet
from synapse.rest.client.versions import VersionsRestServlet
@@ -108,12 +127,12 @@ from synapse.storage.data_stores.main.monthly_active_users import (
MonthlyActiveUsersWorkerStore,
)
from synapse.storage.data_stores.main.presence import UserPresenceState
from synapse.storage.data_stores.main.ui_auth import UIAuthWorkerStore
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.generic_worker")
@@ -210,21 +229,31 @@ class KeyUploadServlet(RestServlet):
return 200, {"one_time_key_counts": result}
class _NullContextManager(ContextManager[None]):
"""A context manager which does nothing."""
def __exit__(self, exc_type, exc_val, exc_tb):
pass
UPDATE_SYNCING_USERS_MS = 10 * 1000
class GenericWorkerPresence(object):
class GenericWorkerPresence(BasePresenceHandler):
def __init__(self, hs):
super().__init__(hs)
self.hs = hs
self.is_mine_id = hs.is_mine_id
self.http_client = hs.get_simple_http_client()
self.store = hs.get_datastore()
self.user_to_num_current_syncs = {}
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
active_presence = self.store.take_presence_startup_info()
self.user_to_current_state = {state.user_id: state for state in active_presence}
self._presence_enabled = hs.config.use_presence
# The number of ongoing syncs on this process, by user id.
# Empty if _presence_enabled is false.
self._user_to_num_current_syncs = {} # type: Dict[str, int]
self.notifier = hs.get_notifier()
self.instance_id = hs.get_instance_id()
# user_id -> last_sync_ms. Lists the users that have stopped syncing
# but we haven't notified the master of that yet
@@ -234,13 +263,24 @@ class GenericWorkerPresence(object):
self.send_stop_syncing, UPDATE_SYNCING_USERS_MS
)
self.process_id = random_string(16)
logger.info("Presence process_id is %r", self.process_id)
hs.get_reactor().addSystemEventTrigger(
"before",
"shutdown",
run_as_background_process,
"generic_presence.on_shutdown",
self._on_shutdown,
)
def _on_shutdown(self):
if self._presence_enabled:
self.hs.get_tcp_replication().send_command(
ClearUserSyncsCommand(self.instance_id)
)
def send_user_sync(self, user_id, is_syncing, last_sync_ms):
if self.hs.config.use_presence:
if self._presence_enabled:
self.hs.get_tcp_replication().send_user_sync(
user_id, is_syncing, last_sync_ms
self.instance_id, user_id, is_syncing, last_sync_ms
)
def mark_as_coming_online(self, user_id):
@@ -280,28 +320,33 @@ class GenericWorkerPresence(object):
# TODO How's this supposed to work?
return defer.succeed(None)
get_states = __func__(PresenceHandler.get_states)
get_state = __func__(PresenceHandler.get_state)
current_state_for_users = __func__(PresenceHandler.current_state_for_users)
async def user_syncing(
self, user_id: str, affect_presence: bool
) -> ContextManager[None]:
"""Record that a user is syncing.
def user_syncing(self, user_id, affect_presence):
if affect_presence:
curr_sync = self.user_to_num_current_syncs.get(user_id, 0)
self.user_to_num_current_syncs[user_id] = curr_sync + 1
Called by the sync and events servlets to record that a user has connected to
this worker and is waiting for some events.
"""
if not affect_presence or not self._presence_enabled:
return _NullContextManager()
# If we went from no in flight sync to some, notify replication
if self.user_to_num_current_syncs[user_id] == 1:
self.mark_as_coming_online(user_id)
curr_sync = self._user_to_num_current_syncs.get(user_id, 0)
self._user_to_num_current_syncs[user_id] = curr_sync + 1
# If we went from no in flight sync to some, notify replication
if self._user_to_num_current_syncs[user_id] == 1:
self.mark_as_coming_online(user_id)
def _end():
# We check that the user_id is in user_to_num_current_syncs because
# user_to_num_current_syncs may have been cleared if we are
# shutting down.
if affect_presence and user_id in self.user_to_num_current_syncs:
self.user_to_num_current_syncs[user_id] -= 1
if user_id in self._user_to_num_current_syncs:
self._user_to_num_current_syncs[user_id] -= 1
# If we went from one in flight sync to none, notify replication
if self.user_to_num_current_syncs[user_id] == 0:
if self._user_to_num_current_syncs[user_id] == 0:
self.mark_as_going_offline(user_id)
@contextlib.contextmanager
@@ -311,7 +356,7 @@ class GenericWorkerPresence(object):
finally:
_end()
return defer.succeed(_user_syncing())
return _user_syncing()
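A hedged sketch of how a servlet consumes this (the handler wiring is illustrative):

async def handle_sync(presence_handler, user_id):
    # the returned context manager brackets the lifetime of the request,
    # keeping the in-flight sync count (and hence presence) accurate
    context = await presence_handler.user_syncing(user_id, affect_presence=True)
    with context:
        ...  # wait for, and collect, events for the response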
@defer.inlineCallbacks
def notify_from_replication(self, states, stream_id):
@@ -346,15 +391,12 @@ class GenericWorkerPresence(object):
stream_id = token
yield self.notify_from_replication(states, stream_id)
def get_currently_syncing_users(self):
if self.hs.config.use_presence:
return [
user_id
for user_id, count in self.user_to_num_current_syncs.items()
if count > 0
]
else:
return set()
def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
return [
user_id
for user_id, count in self._user_to_num_current_syncs.items()
if count > 0
]
class GenericWorkerTyping(object):
@@ -371,12 +413,6 @@ class GenericWorkerTyping(object):
# map room IDs to sets of users currently typing
self._room_typing = {}
def stream_positions(self):
# We must update this typing token from the response of the previous
# sync. In particular, the stream id may "reset" back to zero/a low
# value which we *must* use for the next replication request.
return {"typing": self._latest_room_serial}
def process_replication_rows(self, token, rows):
if self._latest_room_serial > token:
# The master has gone backwards. To prevent inconsistent data, just
@@ -390,11 +426,15 @@ class GenericWorkerTyping(object):
self._room_serials[row.room_id] = token
self._room_typing[row.room_id] = row.user_ids
def get_current_token(self) -> int:
return self._latest_room_serial
class GenericWorkerSlavedStore(
# FIXME(#3714): We need to add UserDirectoryStore as we write directly
# rather than going via the correct worker.
UserDirectoryStore,
UIAuthWorkerStore,
SlavedDeviceInboxStore,
SlavedDeviceStore,
SlavedReceiptsStore,
@@ -475,6 +515,8 @@ class GenericWorkerServer(HomeServer):
ProfileDisplaynameRestServlet(self).register(resource)
ProfileRestServlet(self).register(resource)
KeyUploadServlet(self).register(resource)
AccountDataServlet(self).register(resource)
RoomAccountDataServlet(self).register(resource)
sync.register_servlets(self, resource)
events.register_servlets(self, resource)
@@ -577,7 +619,7 @@ class GenericWorkerServer(HomeServer):
def remove_pusher(self, app_id, push_key, user_id):
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
def build_tcp_replication(self):
def build_replication_data_handler(self):
return GenericWorkerReplicationHandler(self)
def build_presence_handler(self):
@@ -587,14 +629,13 @@ class GenericWorkerServer(HomeServer):
return GenericWorkerTyping(self)
class GenericWorkerReplicationHandler(ReplicationClientHandler):
class GenericWorkerReplicationHandler(ReplicationDataHandler):
def __init__(self, hs):
super(GenericWorkerReplicationHandler, self).__init__(hs.get_datastore())
self.store = hs.get_datastore()
self.typing_handler = hs.get_typing_handler()
# NB this is a SynchrotronPresence, not a normal PresenceHandler
self.presence_handler = hs.get_presence_handler()
self.presence_handler = hs.get_presence_handler() # type: GenericWorkerPresence
self.notifier = hs.get_notifier()
self.notify_pushers = hs.config.start_pushers
@@ -605,28 +646,18 @@ class GenericWorkerReplicationHandler(ReplicationClientHandler):
else:
self.send_handler = None
async def on_rdata(self, stream_name, token, rows):
await super(GenericWorkerReplicationHandler, self).on_rdata(
stream_name, token, rows
)
run_in_background(self.process_and_notify, stream_name, token, rows)
async def on_rdata(self, stream_name, instance_name, token, rows):
await super().on_rdata(stream_name, instance_name, token, rows)
await self._process_and_notify(stream_name, instance_name, token, rows)
def get_streams_to_replicate(self):
args = super(GenericWorkerReplicationHandler, self).get_streams_to_replicate()
args.update(self.typing_handler.stream_positions())
if self.send_handler:
args.update(self.send_handler.stream_positions())
return args
def get_currently_syncing_users(self):
return self.presence_handler.get_currently_syncing_users()
async def process_and_notify(self, stream_name, token, rows):
async def _process_and_notify(self, stream_name, instance_name, token, rows):
try:
if self.send_handler:
self.send_handler.process_replication_rows(stream_name, token, rows)
await self.send_handler.process_replication_rows(
stream_name, token, rows
)
if stream_name == "events":
if stream_name == EventsStream.NAME:
# We shouldn't get multiple rows per token for events stream, so
# we don't need to optimise this for multiple rows.
for row in rows:
@@ -649,43 +680,44 @@ class GenericWorkerReplicationHandler(ReplicationClientHandler):
)
await self.pusher_pool.on_new_notifications(token, token)
elif stream_name == "push_rules":
elif stream_name == PushRulesStream.NAME:
self.notifier.on_new_event(
"push_rules_key", token, users=[row.user_id for row in rows]
)
elif stream_name in ("account_data", "tag_account_data"):
elif stream_name in (AccountDataStream.NAME, TagAccountDataStream.NAME):
self.notifier.on_new_event(
"account_data_key", token, users=[row.user_id for row in rows]
)
elif stream_name == "receipts":
elif stream_name == ReceiptsStream.NAME:
self.notifier.on_new_event(
"receipt_key", token, rooms=[row.room_id for row in rows]
)
await self.pusher_pool.on_new_receipts(
token, token, {row.room_id for row in rows}
)
elif stream_name == "typing":
elif stream_name == TypingStream.NAME:
self.typing_handler.process_replication_rows(token, rows)
self.notifier.on_new_event(
"typing_key", token, rooms=[row.room_id for row in rows]
)
elif stream_name == "to_device":
elif stream_name == ToDeviceStream.NAME:
entities = [row.entity for row in rows if row.entity.startswith("@")]
if entities:
self.notifier.on_new_event("to_device_key", token, users=entities)
elif stream_name == "device_lists":
elif stream_name == DeviceListsStream.NAME:
all_room_ids = set()
for row in rows:
room_ids = await self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
if row.entity.startswith("@"):
room_ids = await self.store.get_rooms_for_user(row.entity)
all_room_ids.update(room_ids)
self.notifier.on_new_event("device_list_key", token, rooms=all_room_ids)
elif stream_name == "presence":
elif stream_name == PresenceStream.NAME:
await self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
elif stream_name == GroupServerStream.NAME:
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows]
)
elif stream_name == "pushers":
elif stream_name == PushersStream.NAME:
for row in rows:
if row.deleted:
self.stop_pusher(row.user_id, row.app_id, row.pushkey)
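Matching on each stream's NAME constant, rather than a bare string, means a typo fails at import time instead of silently never matching; a condensed sketch of the dispatch pattern (arguments illustrative):

from synapse.replication.tcp.streams import ReceiptsStream, TypingStream

async def dispatch(notifier, typing_handler, stream_name, token, rows):
    if stream_name == ReceiptsStream.NAME:  # == "receipts"
        notifier.on_new_event("receipt_key", token, rooms=[r.room_id for r in rows])
    elif stream_name == TypingStream.NAME:  # == "typing"
        typing_handler.process_replication_rows(token, rows)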
@@ -752,15 +784,12 @@ class FederationSenderHandler(object):
def wake_destination(self, server: str):
self.federation_sender.wake_destination(server)
def stream_positions(self):
return {"federation": self.federation_position}
def process_replication_rows(self, stream_name, token, rows):
async def process_replication_rows(self, stream_name, token, rows):
# The federation stream contains things that we want to send out, e.g.
# presence, typing, etc.
if stream_name == "federation":
send_queue.process_rows_for_federation(self.federation_sender, rows)
run_in_background(self.update_token, token)
await self.update_token(token)
# We also need to poke the federation sender when new events happen
elif stream_name == "events":
@@ -768,13 +797,14 @@ class FederationSenderHandler(object):
# ... and when new receipts happen
elif stream_name == ReceiptsStream.NAME:
run_as_background_process(
"process_receipts_for_federation", self._on_new_receipts, rows
)
await self._on_new_receipts(rows)
# ... as well as device updates and messages
elif stream_name == DeviceListsStream.NAME:
hosts = {row.destination for row in rows}
# The entities are either user IDs (starting with '@') whose devices
# have changed, or remote servers that we need to tell about
# changes.
hosts = {row.entity for row in rows if not row.entity.startswith("@")}
for host in hosts:
self.federation_sender.send_device_messages(host)
@@ -789,7 +819,7 @@ class FederationSenderHandler(object):
async def _on_new_receipts(self, rows):
"""
Args:
rows (iterable[synapse.replication.tcp.streams.ReceiptsStreamRow]):
rows (Iterable[synapse.replication.tcp.streams.ReceiptsStream.ReceiptsStreamRow]):
new receipts to be processed
"""
for receipt in rows:
@@ -860,6 +890,9 @@ def start(config_options):
# Force the appservice to start since they will be disabled in the main config
config.notify_appservices = True
else:
# For other worker types we force this to off.
config.notify_appservices = False
if config.worker_app == "synapse.app.pusher":
if config.start_pushers:
@@ -873,6 +906,9 @@ def start(config_options):
# Force the pushers to start since they will be disabled in the main config
config.start_pushers = True
else:
# For other worker types we force this to off.
config.start_pushers = False
if config.worker_app == "synapse.app.user_dir":
if config.update_user_directory:
@@ -886,6 +922,9 @@ def start(config_options):
# Force the user directory to start since it will be disabled in the main config
config.update_user_directory = True
else:
# For other worker types we force this to off.
config.update_user_directory = False
if config.worker_app == "synapse.app.federation_sender":
if config.send_federation:
@@ -899,20 +938,28 @@ def start(config_options):
# Force the federation sender to start since it will be disabled in the main config
config.send_federation = True
else:
# For other worker types we force this to off.
config.send_federation = False
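The four blocks above share one shape; a hypothetical helper condensing it (the helper does not exist in the diff, and the error raised when a flag is already set in the main config is omitted):

def force_worker_flag(config, flag_name: str, owning_worker: str) -> None:
    # the worker type that owns the feature forces it on;
    # every other worker type forces it off
    setattr(config, flag_name, config.worker_app == owning_worker)

force_worker_flag(config, "start_pushers", "synapse.app.pusher")
force_worker_flag(config, "send_federation", "synapse.app.federation_sender")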
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
ss = GenericWorkerServer(
hs = GenericWorkerServer(
config.server_name,
config=config,
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)
setup_logging(hs, config, use_worker_options=True)
hs.setup()
# Ensure the replication streamer is always started in case we write to any
# streams. Will no-op if no streams can be written to by this worker.
hs.get_replication_streamer()
ss.setup()
reactor.addSystemEventTrigger(
"before", "startup", _base.start, ss, config.worker_listeners
"before", "startup", _base.start, hs, config.worker_listeners
)
_base.start_worker_reactor("synapse-generic-worker", config)

View File

@@ -241,16 +241,26 @@ class SynapseHomeServer(HomeServer):
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
if name == "webclient":
webclient_path = self.get_config().web_client_location
webclient_loc = self.get_config().web_client_location
if webclient_path is None:
if webclient_loc is None:
logger.warning(
"Not enabling webclient resource, as web_client_location is unset."
)
elif webclient_loc.startswith("http://") or webclient_loc.startswith(
"https://"
):
resources[WEB_CLIENT_PREFIX] = RootRedirect(webclient_loc)
else:
logger.warning(
"Running webclient on the same domain is not recommended: "
"https://github.com/matrix-org/synapse#security-note - "
"after you move webclient to different host you can set "
"web_client_location to its full URL to enable redirection."
)
# GZip is disabled here due to
# https://twistedmatrix.com/trac/ticket/7678
resources[WEB_CLIENT_PREFIX] = File(webclient_path)
resources[WEB_CLIENT_PREFIX] = File(webclient_loc)
if name == "metrics" and self.get_config().enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
@@ -263,6 +273,12 @@ class SynapseHomeServer(HomeServer):
def start_listening(self, listeners):
config = self.get_config()
if config.redis_enabled:
# If redis is enabled we connect via the replication command handler
# in the same way as the workers (since we're effectively a client
# rather than a server).
self.get_tcp_replication().start_replication(self)
for listener in listeners:
if listener["type"] == "http":
self._listening_services.extend(self._listener_http(config, listener))

View File

@@ -294,7 +294,6 @@ class RootConfig(object):
report_stats=None,
open_private_ports=False,
listeners=None,
database_conf=None,
tls_certificate_path=None,
tls_private_key_path=None,
acme_domain=None,
@@ -367,7 +366,6 @@ class RootConfig(object):
report_stats=report_stats,
open_private_ports=open_private_ports,
listeners=listeners,
database_conf=database_conf,
tls_certificate_path=tls_certificate_path,
tls_private_key_path=tls_private_key_path,
acme_domain=acme_domain,
@@ -470,8 +468,8 @@ class RootConfig(object):
Returns: Config object, or None if --generate-config or --generate-keys was set
"""
config_parser = argparse.ArgumentParser(add_help=False)
config_parser.add_argument(
parser = argparse.ArgumentParser(description=description)
parser.add_argument(
"-c",
"--config-path",
action="append",
@@ -480,7 +478,7 @@ class RootConfig(object):
" may specify directories containing *.yaml files.",
)
generate_group = config_parser.add_argument_group("Config generation")
generate_group = parser.add_argument_group("Config generation")
generate_group.add_argument(
"--generate-config",
action="store_true",
@@ -528,12 +526,13 @@ class RootConfig(object):
),
)
config_args, remaining_args = config_parser.parse_known_args(argv)
cls.invoke_all_static("add_arguments", parser)
config_args = parser.parse_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)
if not config_files:
config_parser.error(
parser.error(
"Must supply a config file.\nA config file can be automatically"
' generated using "--generate-config -H SERVER_NAME'
' -c CONFIG-FILE"'
@@ -552,7 +551,7 @@ class RootConfig(object):
if config_args.generate_config:
if config_args.report_stats is None:
config_parser.error(
parser.error(
"Please specify either --report-stats=yes or --report-stats=no\n\n"
+ MISSING_REPORT_STATS_SPIEL
)
@@ -611,15 +610,6 @@ class RootConfig(object):
)
generate_missing_configs = True
parser = argparse.ArgumentParser(
parents=[config_parser],
description=description,
formatter_class=argparse.RawDescriptionHelpFormatter,
)
obj.invoke_all_static("add_arguments", parser)
args = parser.parse_args(remaining_args)
config_dict = read_config_files(config_files)
if generate_missing_configs:
obj.generate_missing_files(config_dict, config_dir_path)
@@ -628,7 +618,7 @@ class RootConfig(object):
obj.parse_config_dict(
config_dict, config_dir_path=config_dir_path, data_dir_path=data_dir_path
)
obj.invoke_all("read_arguments", args)
obj.invoke_all("read_arguments", config_args)
return obj
@@ -667,6 +657,12 @@ def read_config_files(config_files):
for config_file in config_files:
with open(config_file) as file_stream:
yaml_config = yaml.safe_load(file_stream)
if not isinstance(yaml_config, dict):
err = "File %r is empty or doesn't parse into a key-value map. IGNORING."
print(err % (config_file,))
continue
specified_config.update(yaml_config)
if "server_name" not in specified_config:

View File

@@ -24,7 +24,6 @@ class CaptchaConfig(Config):
self.enable_registration_captcha = config.get(
"enable_registration_captcha", False
)
self.captcha_bypass_secret = config.get("captcha_bypass_secret")
self.recaptcha_siteverify_api = config.get(
"recaptcha_siteverify_api",
"https://www.recaptcha.net/recaptcha/api/siteverify",
@@ -49,10 +48,6 @@ class CaptchaConfig(Config):
#
#enable_registration_captcha: false
# A secret key used to bypass the captcha test entirely.
#
#captcha_bypass_secret: "YOUR_SECRET_HERE"
# The API endpoint to use for verifying m.login.recaptcha responses.
#
#recaptcha_siteverify_api: "https://www.recaptcha.net/recaptcha/api/siteverify"

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,14 +15,65 @@
# limitations under the License.
import logging
import os
from textwrap import indent
import yaml
from synapse.config._base import Config, ConfigError
logger = logging.getLogger(__name__)
NON_SQLITE_DATABASE_PATH_WARNING = """\
Ignoring 'database_path' setting: not using a sqlite3 database.
--------------------------------------------------------------------------------
"""
DEFAULT_CONFIG = """\
## Database ##
# The 'database' setting defines the database that synapse uses to store all of
# its data.
#
# 'name' gives the database engine to use: either 'sqlite3' (for SQLite) or
# 'psycopg2' (for PostgreSQL).
#
# 'args' gives options which are passed through to the database engine,
# except for options starting 'cp_', which are used to configure the Twisted
# connection pool. For a reference to valid arguments, see:
# * for sqlite: https://docs.python.org/3/library/sqlite3.html#sqlite3.connect
# * for postgres: https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-PARAMKEYWORDS
# * for the connection pool: https://twistedmatrix.com/documents/current/api/twisted.enterprise.adbapi.ConnectionPool.html#__init__
#
#
# Example SQLite configuration:
#
#database:
# name: sqlite3
# args:
# database: /path/to/homeserver.db
#
#
# Example Postgres configuration:
#
#database:
# name: psycopg2
# args:
# user: synapse
# password: secretpassword
# database: synapse
# host: localhost
# cp_min: 5
# cp_max: 10
#
# For more information on using Synapse with Postgres, see `docs/postgres.md`.
#
database:
name: sqlite3
args:
database: %(database_path)s
# Number of events to cache in memory.
#
#event_cache_size: 10K
"""
class DatabaseConnectionConfig:
"""Contains the connection config for a particular database.
@@ -36,10 +88,12 @@ class DatabaseConnectionConfig:
"""
def __init__(self, name: str, db_config: dict):
if db_config["name"] not in ("sqlite3", "psycopg2"):
raise ConfigError("Unsupported database type %r" % (db_config["name"],))
db_engine = db_config.get("name", "sqlite3")
if db_config["name"] == "sqlite3":
if db_engine not in ("sqlite3", "psycopg2"):
raise ConfigError("Unsupported database type %r" % (db_engine,))
if db_engine == "sqlite3":
db_config.setdefault("args", {}).update(
{"cp_min": 1, "cp_max": 1, "check_same_thread": False}
)
@@ -56,6 +110,11 @@ class DatabaseConnectionConfig:
class DatabaseConfig(Config):
section = "database"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.databases = []
def read_config(self, config, **kwargs):
self.event_cache_size = self.parse_size(config.get("event_cache_size", "10K"))
@@ -76,12 +135,13 @@ class DatabaseConfig(Config):
multi_database_config = config.get("databases")
database_config = config.get("database")
database_path = config.get("database_path")
if multi_database_config and database_config:
raise ConfigError("Can't specify both 'database' and 'datbases' in config")
raise ConfigError("Can't specify both 'database' and 'databases' in config")
if multi_database_config:
if config.get("database_path"):
if database_path:
raise ConfigError("Can't specify 'database_path' with 'databases'")
self.databases = [
@@ -89,65 +149,55 @@ class DatabaseConfig(Config):
for name, db_conf in multi_database_config.items()
]
else:
if database_config is None:
database_config = {"name": "sqlite3", "args": {}}
if database_config:
self.databases = [DatabaseConnectionConfig("master", database_config)]
self.set_databasepath(config.get("database_path"))
if database_path:
if self.databases and self.databases[0].name != "sqlite3":
logger.warning(NON_SQLITE_DATABASE_PATH_WARNING)
return
def generate_config_section(self, data_dir_path, database_conf, **kwargs):
if not database_conf:
database_path = os.path.join(data_dir_path, "homeserver.db")
database_conf = (
"""# The database engine name
name: "sqlite3"
# Arguments to pass to the engine
args:
# Path to the database
database: "%(database_path)s"
"""
% locals()
)
else:
database_conf = indent(yaml.dump(database_conf), " " * 10).lstrip()
database_config = {"name": "sqlite3", "args": {}}
self.databases = [DatabaseConnectionConfig("master", database_config)]
self.set_databasepath(database_path)
return (
"""\
## Database ##
database:
%(database_conf)s
# Number of events to cache in memory.
#
#event_cache_size: 10K
"""
% locals()
)
def generate_config_section(self, data_dir_path, **kwargs):
return DEFAULT_CONFIG % {
"database_path": os.path.join(data_dir_path, "homeserver.db")
}
def read_arguments(self, args):
self.set_databasepath(args.database_path)
"""
Cases for the cli input:
- If no databases are configured and no database_path is set, raise.
- No databases and only database_path available ==> sqlite3 db.
- If there are multiple databases and a database_path, raise an error.
- If the database set in the config file is sqlite then
overwrite with the command line argument.
"""
if args.database_path is None:
if not self.databases:
raise ConfigError("No database config provided")
return
if len(self.databases) == 0:
database_config = {"name": "sqlite3", "args": {}}
self.databases = [DatabaseConnectionConfig("master", database_config)]
self.set_databasepath(args.database_path)
return
if self.get_single_database().name == "sqlite3":
self.set_databasepath(args.database_path)
else:
logger.warning(NON_SQLITE_DATABASE_PATH_WARNING)
def set_databasepath(self, database_path):
if database_path is None:
return
if database_path != ":memory:":
database_path = self.abspath(database_path)
# We only support setting a database path if we have a single sqlite3
# database.
if len(self.databases) != 1:
raise ConfigError("Cannot specify 'database_path' with multiple databases")
database = self.get_single_database()
if database.config["name"] != "sqlite3":
# We don't raise here as we haven't done so before for this case.
logger.warn("Ignoring 'database_path' for non-sqlite3 database")
return
database.config["args"]["database"] = database_path
self.databases[0].config["args"]["database"] = database_path
@staticmethod
def add_arguments(parser):
@@ -162,7 +212,7 @@ class DatabaseConfig(Config):
def get_single_database(self) -> DatabaseConnectionConfig:
"""Returns the database if there is only one, useful for e.g. tests
"""
if len(self.databases) != 1:
if not self.databases:
raise Exception("More than one database exists")
return self.databases[0]

View File

@@ -108,9 +108,14 @@ class EmailConfig(Config):
if self.trusted_third_party_id_servers:
# XXX: It's a little confusing that account_threepid_delegate_email is modified
# both in RegistrationConfig and here. We should factor this bit out
self.account_threepid_delegate_email = self.trusted_third_party_id_servers[
0
] # type: Optional[str]
first_trusted_identity_server = self.trusted_third_party_id_servers[0]
# trusted_third_party_id_servers does not contain a scheme whereas
# account_threepid_delegate_email is expected to. Presume https
self.account_threepid_delegate_email = (
"https://" + first_trusted_identity_server
) # type: Optional[str]
self.using_identity_server_from_trusted_list = True
else:
raise ConfigError(

View File

@@ -31,6 +31,7 @@ from .password import PasswordConfig
from .password_auth_providers import PasswordAuthProviderConfig
from .push import PushConfig
from .ratelimiting import RatelimitConfig
from .redis import RedisConfig
from .registration import RegistrationConfig
from .repository import ContentRepositoryConfig
from .room_directory import RoomDirectoryConfig
@@ -82,4 +83,5 @@ class HomeServerConfig(RootConfig):
RoomDirectoryConfig,
ThirdPartyRulesConfig,
TracerConfig,
RedisConfig,
]

View File

@@ -86,7 +86,7 @@ class MetricsConfig(Config):
# enabled by default, either for performance reasons or limited use.
#
metrics_flags:
# Publish synapse_federation_known_servers, a g auge of the number of
# Publish synapse_federation_known_servers, a gauge of the number of
# servers this homeserver knows about, including itself. May cause
# performance problems on large homeservers.
#

View File

@@ -31,6 +31,10 @@ class PasswordConfig(Config):
self.password_localdb_enabled = password_config.get("localdb_enabled", True)
self.password_pepper = password_config.get("pepper", "")
# Password policy
self.password_policy = password_config.get("policy") or {}
self.password_policy_enabled = self.password_policy.get("enabled", False)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
password_config:
@@ -48,4 +52,39 @@ class PasswordConfig(Config):
# DO NOT CHANGE THIS AFTER INITIAL SETUP!
#
#pepper: "EVEN_MORE_SECRET"
# Define and enforce a password policy. Each parameter is optional.
# This is an implementation of MSC2000.
#
policy:
# Whether to enforce the password policy.
# Defaults to 'false'.
#
#enabled: true
# Minimum accepted length for a password.
# Defaults to 0.
#
#minimum_length: 15
# Whether a password must contain at least one digit.
# Defaults to 'false'.
#
#require_digit: true
# Whether a password must contain at least one symbol.
# A symbol is any character that's not a number or a letter.
# Defaults to 'false'.
#
#require_symbol: true
# Whether a password must contain at least one lowercase letter.
# Defaults to 'false'.
#
#require_lowercase: true
# Whether a password must contain at least one uppercase letter.
# Defaults to 'false'.
#
#require_uppercase: true
"""

View File

@@ -35,7 +35,7 @@ class PasswordAuthProviderConfig(Config):
if ldap_config.get("enabled", False):
providers.append({"module": LDAP_PROVIDER, "config": ldap_config})
providers.extend(config.get("password_providers", []))
providers.extend(config.get("password_providers") or [])
for provider in providers:
mod_name = provider["module"]
@@ -52,7 +52,19 @@ class PasswordAuthProviderConfig(Config):
def generate_config_section(self, **kwargs):
return """\
#password_providers:
# Password providers allow homeserver administrators to integrate
# their Synapse installation with existing authentication methods
# e.g. LDAP, external tokens, etc.
#
# For more information and known implementations, please see
# https://github.com/matrix-org/synapse/blob/master/docs/password_auth_providers.md
#
# Note: instances wishing to use SAML or CAS authentication should
# instead use the `saml2_config` or `cas_config` options,
# respectively.
#
password_providers:
# # Example config for an LDAP auth provider
# - module: "ldap_auth_provider.LdapAuthProvider"
# config:
# enabled: true

34
synapse/config/redis.py Normal file
View File

@@ -0,0 +1,34 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.config._base import Config
from synapse.python_dependencies import check_requirements
class RedisConfig(Config):
section = "redis"
def read_config(self, config, **kwargs):
redis_config = config.get("redis", {})
self.redis_enabled = redis_config.get("enabled", False)
if not self.redis_enabled:
return
check_requirements("redis")
self.redis_host = redis_config.get("host", "localhost")
self.redis_port = redis_config.get("port", 6379)
self.redis_password = redis_config.get("password")
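For reference, a hedged sketch of what a matching homeserver.yaml section parses to (hostname and password illustrative):

# homeserver.yaml equivalent:
#   redis:
#     enabled: true
#     host: redis.example.com
#     port: 6379
#     password: hunter2
config = {
    "redis": {
        "enabled": True,
        "host": "redis.example.com",
        "port": 6379,
        "password": "hunter2",
    }
}
# read_config(config) then sets redis_enabled=True, redis_host="redis.example.com",
# redis_port=6379 and redis_password="hunter2"; host defaults to "localhost"
# and port to 6379 when omitted.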

View File

@@ -129,6 +129,10 @@ class RegistrationConfig(Config):
raise ConfigError("Invalid auto_join_rooms entry %s" % (room_alias,))
self.autocreate_auto_join_rooms = config.get("autocreate_auto_join_rooms", True)
self.enable_set_displayname = config.get("enable_set_displayname", True)
self.enable_set_avatar_url = config.get("enable_set_avatar_url", True)
self.enable_3pid_changes = config.get("enable_3pid_changes", True)
self.disable_msisdn_registration = config.get(
"disable_msisdn_registration", False
)
@@ -330,6 +334,29 @@ class RegistrationConfig(Config):
#email: https://example.com # Delegate email sending to example.com
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
# Whether users are allowed to change their displayname after it has
# been initially set. Useful when provisioning users based on the
# contents of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_displayname: false
# Whether users are allowed to change their avatar after it has been
# initially set. Useful when provisioning users based on the contents
# of a third-party directory.
#
# Does not apply to server administrators. Defaults to 'true'
#
#enable_set_avatar_url: false
# Whether users can change the 3PIDs associated with their accounts
# (email address and msisdn).
#
# Defaults to 'true'
#
#enable_3pid_changes: false
# Users who register on this homeserver will automatically be joined
# to these rooms
#

View File

@@ -192,6 +192,10 @@ class ContentRepositoryConfig(Config):
self.url_preview_url_blacklist = config.get("url_preview_url_blacklist", ())
self.url_preview_accept_language = config.get(
"url_preview_accept_language"
) or ["en"]
def generate_config_section(self, data_dir_path, **kwargs):
media_store = os.path.join(data_dir_path, "media_store")
uploads_path = os.path.join(data_dir_path, "uploads")
@@ -220,12 +224,11 @@ class ContentRepositoryConfig(Config):
#
#media_storage_providers:
# - module: file_system
# # Whether to write new local files.
# # Whether to store newly uploaded local files
# store_local: false
# # Whether to write new remote media
# # Whether to store newly downloaded remote files
# store_remote: false
# # Whether to block upload requests waiting for write to this
# # provider to complete
# # Whether to wait for successful storage for local uploads
# store_synchronous: false
# config:
# directory: /mnt/some/other/directory
@@ -329,6 +332,31 @@ class ContentRepositoryConfig(Config):
# The largest allowed URL preview spidering size in bytes
#
#max_spider_size: 10M
# A list of values for the Accept-Language HTTP header used when
# downloading webpages during URL preview generation. This allows
# Synapse to specify the preferred languages that URL previews should
# be in when communicating with remote servers.
#
# Each value is an IETF language tag; a 2-3 letter identifier for a
# language, optionally followed by subtags separated by '-', specifying
# a country or region variant.
#
# Multiple values can be provided, and a weight can be added to each by
# using quality value syntax (;q=). '*' translates to any language.
#
# Defaults to "en".
#
# Example:
#
# url_preview_accept_language:
# - en-UK
# - en-US;q=0.9
# - fr;q=0.8
# - *;q=0.7
#
url_preview_accept_language:
# - en
"""
% locals()
)

View File

@@ -248,32 +248,32 @@ class SAML2Config(Config):
# remote:
# - url: https://our_idp/metadata.xml
#
# # By default, the user has to go to our login page first. If you'd like
# # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# # 'service.sp' section:
# #
# #service:
# # sp:
# # allow_unsolicited: true
# # By default, the user has to go to our login page first. If you'd like
# # to allow IdP-initiated login, set 'allow_unsolicited: true' in a
# # 'service.sp' section:
# #
# #service:
# # sp:
# # allow_unsolicited: true
#
# # The examples below are just used to generate our metadata xml, and you
# # may well not need them, depending on your setup. Alternatively you
# # may need a whole lot more detail - see the pysaml2 docs!
# # The examples below are just used to generate our metadata xml, and you
# # may well not need them, depending on your setup. Alternatively you
# # may need a whole lot more detail - see the pysaml2 docs!
#
# description: ["My awesome SP", "en"]
# name: ["Test SP", "en"]
# description: ["My awesome SP", "en"]
# name: ["Test SP", "en"]
#
# organization:
# name: Example com
# display_name:
# - ["Example co", "en"]
# url: "http://example.com"
# organization:
# name: Example com
# display_name:
# - ["Example co", "en"]
# url: "http://example.com"
#
# contact_person:
# - given_name: Bob
# sur_name: "the Sysadmin"
# email_address": ["admin@example.com"]
# contact_type": technical
# contact_person:
# - given_name: Bob
# sur_name: "the Sysadmin"
# email_address": ["admin@example.com"]
# contact_type": technical
# Instead of putting the config inline as above, you can specify a
# separate pysaml2 configuration file:

View File

@@ -505,10 +505,24 @@ class ServerConfig(Config):
"cleanup_extremities_with_dummy_events", True
)
# The number of forward extremities in a room needed to send a dummy event.
self.dummy_events_threshold = config.get("dummy_events_threshold", 10)
self.enable_ephemeral_messages = config.get("enable_ephemeral_messages", False)
# Inhibits the /requestToken endpoints from returning an error that might leak
# information about whether an e-mail address is in use or not on this
# homeserver; instead, these endpoints return a 200 with a fake sid when this
# kind of error is met, without sending anything.
# This is a compromise between sending an email, which could be a spam vector,
# and letting the client know which email address is bound to an account and
# which one isn't.
self.request_token_inhibit_3pid_errors = config.get(
"request_token_inhibit_3pid_errors", False,
)
def has_tls_listener(self) -> bool:
return any(l["tls"] for l in self.listeners)
return any(listener["tls"] for listener in self.listeners)
def generate_config_section(
self, server_name, data_dir_path, open_private_ports, listeners, **kwargs
@@ -604,10 +618,15 @@ class ServerConfig(Config):
#
pid_file: %(pid_file)s
# The absolute URL to the web client which /_matrix/client will redirect
# to if 'webclient' is configured under the 'listeners' configuration.
#
# This option can also be set to the filesystem path to the web client
# which will be served at /_matrix/client/ if 'webclient' is configured
# under the 'listeners' configuration, however this is a security risk:
# https://github.com/matrix-org/synapse#security-note
#
#web_client_location: https://riot.example.com/
# The public-facing base URL that clients use to access this HS
# (not including _matrix/...). This is the same URL a user would
@@ -807,6 +826,18 @@ class ServerConfig(Config):
# bind_addresses: ['::1', '127.0.0.1']
# type: manhole
# Forward extremities can build up in a room due to networking delays between
# homeservers. Once this happens in a large room, calculation of the state of
# that room can become quite expensive. To mitigate this, once the number of
# forward extremities reaches a given threshold, Synapse will send an
# org.matrix.dummy_event event, which will reduce the forward extremities
# in the room.
#
# This setting defines the threshold (i.e. number of forward extremities in the
# room) at which dummy events are sent. The default value is 10.
#
#dummy_events_threshold: 5
## Homeserver blocking ##
@@ -967,6 +998,16 @@ class ServerConfig(Config):
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 1d
# Inhibits the /requestToken endpoints from returning an error that might leak
# information about whether an e-mail address is in use or not on this
# homeserver.
# Note that for some endpoints the error situation is the e-mail already being
# used, and for others it is the e-mail being unused.
# If this option is enabled, instead of returning an error, these endpoints will
# act as if no error happened and return a fake session ID ('sid') to clients.
#
#request_token_inhibit_3pid_errors: true
"""
% locals()
)
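A hedged sketch of the dummy-event threshold logic described in the section above (names are illustrative, not Synapse's internals):

def should_send_dummy_event(num_forward_extremities: int, threshold: int = 10) -> bool:
    # Once a room accumulates at least `threshold` forward extremities,
    # an org.matrix.dummy_event is sent to consolidate them.
    return num_forward_extremities >= threshold

assert should_send_dummy_event(10)
assert not should_send_dummy_event(3, threshold=5)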


@@ -12,6 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from typing import Any, Dict
import pkg_resources
@@ -36,9 +37,32 @@ class SSOConfig(Config):
template_dir = pkg_resources.resource_filename("synapse", "res/templates",)
self.sso_redirect_confirm_template_dir = template_dir
self.sso_account_deactivated_template = self.read_file(
os.path.join(
self.sso_redirect_confirm_template_dir, "sso_account_deactivated.html"
),
"sso_account_deactivated_template",
)
self.sso_auth_success_template = self.read_file(
os.path.join(
self.sso_redirect_confirm_template_dir, "sso_auth_success.html"
),
"sso_auth_success_template",
)
self.sso_client_whitelist = sso_config.get("client_whitelist") or []
# Attempt to also whitelist the server's login fallback, since that fallback sets
# the redirect URL to itself (so it can process the login token then return
# gracefully to the client). This would make it pointless to ask the user for
# confirmation, since the URL the confirmation page would be showing wouldn't be
# the client's.
# public_baseurl is an optional setting, so we only add the fallback's URL to the
# list if it's provided (because we can't figure out what that URL is otherwise).
if self.public_baseurl:
login_fallback_url = self.public_baseurl + "_matrix/static/client/login"
self.sso_client_whitelist.append(login_fallback_url)
def generate_config_section(self, **kwargs):
return """\
# Additional settings to use with single-sign on systems such as SAML2 and CAS.
@@ -54,6 +78,10 @@ class SSOConfig(Config):
# phishing attacks from evil.site. To avoid this, include a slash after the
# hostname: "https://my.client/".
#
# If public_baseurl is set, then the login fallback page (used by clients
# that don't natively support the required login flows) is whitelisted in
# addition to any URLs in this list.
#
# By default, this list is empty.
#
#client_whitelist:
@@ -85,6 +113,30 @@ class SSOConfig(Config):
#
# * server_name: the homeserver's name.
#
# * HTML page which notifies the user that they are authenticating to confirm
# an operation on their account during the user interactive authentication
# process: 'sso_auth_confirm.html'.
#
# When rendering, this template is given the following variables:
# * redirect_url: the URL the user is about to be redirected to. Needs
# manual escaping (see
# https://jinja.palletsprojects.com/en/2.11.x/templates/#html-escaping).
#
# * description: the operation which the user is being asked to confirm
#
# * HTML page shown after a successful user interactive authentication session:
# 'sso_auth_success.html'.
#
# Note that this page must include the JavaScript which notifies of a successful authentication
# (see https://matrix.org/docs/spec/client_server/r0.6.0#fallback).
#
# This template has no additional variables.
#
# * HTML page shown during single sign-on if a deactivated user (according to Synapse's database)
# attempts to log in: 'sso_account_deactivated.html'.
#
# This template has no additional variables.
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
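Since the sso_auth_confirm.html notes above call for manually escaping redirect_url, here is a hedged sketch of rendering such a template with Jinja2 (the template body is invented for illustration):

from jinja2 import Template

# The `| e` filter applies the manual HTML escaping the notes above require.
template = Template('<a href="{{ redirect_url | e }}">{{ description | e }}</a>')
html = template.render(
    redirect_url="https://client.example/?next=<untrusted>",
    description="remove a device",
)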


@@ -43,8 +43,8 @@ from synapse.api.errors import (
SynapseError,
)
from synapse.logging.context import (
PreserveLoggingContext,
current_context,
make_deferred_yieldable,
preserve_fn,
run_in_background,
@@ -236,7 +236,7 @@ class Keyring(object):
"""
try:
ctx = current_context()
# map from server name to a set of outstanding request ids
server_to_request_ids = {}


@@ -322,11 +322,14 @@ class _AsyncEventContextImpl(EventContext):
self._current_state_ids = yield self._storage.state.get_state_ids_for_group(
self.state_group
)
if self._event_state_key is not None:
self._prev_state_ids = dict(self._current_state_ids)
key = (self._event_type, self._event_state_key)
if self._prev_state_id:
self._prev_state_ids[key] = self._prev_state_id
else:
self._prev_state_ids.pop(key, None)
else:
self._prev_state_ids = self._current_state_ids
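A worked example of the branch above, with invented values: for a state event whose (type, state_key) had no prior event, the key is dropped from prev_state_ids rather than mapped to a stale ID.

current_state_ids = {("m.room.member", "@alice:example.com"): "$this_event"}
event_type, event_state_key = "m.room.member", "@alice:example.com"
prev_state_id = None  # no earlier event existed for this state key

prev_state_ids = dict(current_state_ids)
key = (event_type, event_state_key)
if prev_state_id:
    prev_state_ids[key] = prev_state_id
else:
    # The key was absent before this event, e.g. a first-time join.
    prev_state_ids.pop(key, None)

assert prev_state_ids == {}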


@@ -25,19 +25,15 @@ from twisted.python.failure import Failure
from synapse.api.constants import MAX_DEPTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import EventFormatVersions, RoomVersion
from synapse.crypto.event_signing import check_event_content_hash
from synapse.crypto.keyring import Keyring
from synapse.events import EventBase, make_event_from_dict
from synapse.events.utils import prune_event
from synapse.http.servlet import assert_params_in_dict
from synapse.logging.context import (
PreserveLoggingContext,
current_context,
make_deferred_yieldable,
)
from synapse.types import JsonDict, get_domain_from_id
@@ -55,13 +51,15 @@ class FederationBase(object):
self.store = hs.get_datastore()
self._clock = hs.get_clock()
def _check_sigs_and_hash(
self, room_version: RoomVersion, pdu: EventBase
) -> Deferred:
return make_deferred_yieldable(
self._check_sigs_and_hashes(room_version, [pdu])[0]
)
def _check_sigs_and_hashes(
self, room_version: RoomVersion, pdus: List[EventBase]
) -> List[Deferred]:
"""Checks that each of the received events is correctly signed by the
sending server.
@@ -80,7 +78,7 @@ class FederationBase(object):
"""
deferreds = _check_sigs_on_pdus(self.keyring, room_version, pdus)
ctx = current_context()
def callback(_, pdu: EventBase):
with PreserveLoggingContext(ctx):
@@ -146,7 +144,7 @@ class PduToCheckSig(
def _check_sigs_on_pdus(
keyring: Keyring, room_version: RoomVersion, pdus: Iterable[EventBase]
) -> List[Deferred]:
"""Check that the given events are correctly signed
@@ -191,10 +189,6 @@ def _check_sigs_on_pdus(
for p in pdus
]
# First we check that the sender event is signed by the sender's domain
# (except if its a 3pid invite, in which case it may be sent by any server)
pdus_to_check_sender = [p for p in pdus_to_check if not _is_invite_via_3pid(p.pdu)]
@@ -204,7 +198,7 @@ def _check_sigs_on_pdus(
(
p.sender_domain,
p.redacted_pdu_json,
p.pdu.origin_server_ts if room_version.enforce_key_validity else 0,
p.pdu.event_id,
)
for p in pdus_to_check_sender
@@ -227,7 +221,7 @@ def _check_sigs_on_pdus(
# event id's domain (normally only the case for joins/leaves), and add additional
# checks. Only do this if the room version has a concept of event ID domain
# (ie, the room version uses old-style non-hash event IDs).
if room_version.event_format == EventFormatVersions.V1:
pdus_to_check_event_id = [
p
for p in pdus_to_check
@@ -239,7 +233,7 @@ def _check_sigs_on_pdus(
(
get_domain_from_id(p.pdu.event_id),
p.redacted_pdu_json,
p.pdu.origin_server_ts if room_version.enforce_key_validity else 0,
p.pdu.event_id,
)
for p in pdus_to_check_event_id


@@ -220,8 +220,7 @@ class FederationClient(FederationBase):
# FIXME: We should handle signature failures more gracefully.
pdus[:] = await make_deferred_yieldable(
defer.gatherResults(
self._check_sigs_and_hashes(room_version, pdus), consumeErrors=True,
).addErrback(unwrapFirstError)
)
@@ -291,9 +290,7 @@ class FederationClient(FederationBase):
pdu = pdu_list[0]
# Check signatures are correct.
signed_pdu = await self._check_sigs_and_hash(room_version, pdu)
break
@@ -350,7 +347,7 @@ class FederationClient(FederationBase):
self,
origin: str,
pdus: List[EventBase],
room_version: RoomVersion,
outlier: bool = False,
include_none: bool = False,
) -> List[EventBase]:
@@ -396,7 +393,7 @@ class FederationClient(FederationBase):
self.get_pdu(
destinations=[pdu.origin],
event_id=pdu.event_id,
room_version=room_version,
outlier=outlier,
timeout=10000,
)
@@ -434,7 +431,7 @@ class FederationClient(FederationBase):
]
signed_auth = await self._check_sigs_and_hash_and_fetch(
destination, auth_chain, outlier=True, room_version=room_version
)
signed_auth.sort(key=lambda e: e.depth)
@@ -661,7 +658,7 @@ class FederationClient(FederationBase):
destination,
list(pdus.values()),
outlier=True,
room_version=room_version,
)
valid_pdus_map = {p.event_id: p for p in valid_pdus}
@@ -756,7 +753,7 @@ class FederationClient(FederationBase):
pdu = event_from_pdu_json(pdu_dict, room_version)
# Check signatures are correct.
pdu = await self._check_sigs_and_hash(room_version, pdu)
# FIXME: We should handle signature failures more gracefully.
@@ -886,18 +883,37 @@ class FederationClient(FederationBase):
def get_public_rooms(
self,
remote_server: str,
limit: Optional[int] = None,
since_token: Optional[str] = None,
search_filter: Optional[Dict] = None,
include_all_networks: bool = False,
third_party_instance_id: Optional[str] = None,
):
"""Get the list of public rooms from a remote homeserver
Args:
remote_server: The name of the remote server
limit: Maximum amount of rooms to return
since_token: Used for result pagination
search_filter: A filter dictionary to send the remote homeserver
and filter the result set
include_all_networks: Whether to include results from all third party instances
third_party_instance_id: If set, only include results from the given third
party instance
Returns:
Deferred[Dict[str, Any]]: The response from the remote server, or None if
`remote_server` is the same as the local server_name
Raises:
HttpResponseException: There was an exception returned from the remote server
SynapseError: M_FORBIDDEN when the remote server has disallowed publicRoom
requests over federation
"""
return self.transport_layer.get_public_rooms(
remote_server,
limit,
since_token,
search_filter,
@@ -948,7 +964,7 @@ class FederationClient(FederationBase):
]
signed_events = await self._check_sigs_and_hash_and_fetch(
destination, events, outlier=False, room_version=room_version
)
except HttpResponseException as e:
if not e.code == 400:
@@ -960,14 +976,13 @@ class FederationClient(FederationBase):
return signed_events
async def forward_third_party_invite(self, destinations, room_id, event_dict):
for destination in destinations:
if destination == self.server_name:
continue
try:
await self.transport_layer.exchange_third_party_invite(
destination=destination, room_id=room_id, event_dict=event_dict
)
return None
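A hypothetical call against the get_public_rooms API documented above (the server name, filter, and surrounding coroutine are invented):

async def list_remote_rooms(federation_client):
    # Fetch a filtered page of a remote server's public rooms list.
    response = await federation_client.get_public_rooms(
        "remote.example.org",
        limit=20,
        search_filter={"generic_search_term": "synapse"},
    )
    for room in response.get("chunk", []):
        print(room["room_id"], room.get("name"))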


@@ -15,7 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Any, Callable, Dict, List, Match, Optional, Tuple, Union
import six
from six import iteritems
@@ -38,6 +38,7 @@ from synapse.api.errors import (
UnsupportedRoomVersionError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
@@ -94,7 +95,9 @@ class FederationServer(FederationBase):
# come in waves.
self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)
async def on_backfill_request(
self, origin: str, room_id: str, versions: List[str], limit: int
) -> Tuple[int, Dict[str, Any]]:
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
@@ -107,23 +110,25 @@ class FederationServer(FederationBase):
return 200, res
async def on_incoming_transaction(
self, origin: str, transaction_data: JsonDict
) -> Tuple[int, Dict[str, Any]]:
# keep this as early as possible to make the calculated origin ts as
# accurate as possible.
request_time = self._clock.time_msec()
transaction = Transaction(**transaction_data)
if not transaction.transaction_id: # type: ignore
raise Exception("Transaction missing transaction_id")
logger.debug("[%s] Got transaction", transaction.transaction_id) # type: ignore
# use a linearizer to ensure that we don't process the same transaction
# multiple times in parallel.
with (
await self._transaction_linearizer.queue(
(origin, transaction.transaction_id) # type: ignore
)
):
result = await self._handle_incoming_transaction(
@@ -132,31 +137,33 @@ class FederationServer(FederationBase):
return result
async def _handle_incoming_transaction(
self, origin: str, transaction: Transaction, request_time: int
) -> Tuple[int, Dict[str, Any]]:
""" Process an incoming transaction and return the HTTP response
Args:
origin: the server making the request
transaction: incoming transaction
request_time: timestamp that the HTTP request arrived at
Returns:
HTTP response code and body
"""
response = await self.transaction_actions.have_responded(origin, transaction)
if response:
logger.debug(
"[%s] We've already responded to this request",
transaction.transaction_id, # type: ignore
)
return response
logger.debug("[%s] Transaction is new", transaction.transaction_id) # type: ignore
# Reject if PDU count > 50 or EDU count > 100
if len(transaction.pdus) > 50 or ( # type: ignore
hasattr(transaction, "edus") and len(transaction.edus) > 100 # type: ignore
):
logger.info("Transaction PDU or EDU count too large. Returning 400")
@@ -204,13 +211,13 @@ class FederationServer(FederationBase):
report back to the sending server.
"""
received_pdus_counter.inc(len(transaction.pdus)) # type: ignore
origin_host, _ = parse_server_name(origin)
pdus_by_room = {} # type: Dict[str, List[EventBase]]
for p in transaction.pdus: # type: ignore
if "unsigned" in p:
unsigned = p["unsigned"]
if "age" in unsigned:
@@ -254,7 +261,7 @@ class FederationServer(FederationBase):
# require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu.
async def process_pdus_for_room(room_id: str):
logger.debug("Processing PDUs for %s", room_id)
try:
await self.check_server_matches_acl(origin_host, room_id)
@@ -310,7 +317,9 @@ class FederationServer(FederationBase):
TRANSACTION_CONCURRENCY_LIMIT,
)
async def on_context_state_request(
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, Dict[str, Any]]:
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
@@ -338,7 +347,9 @@ class FederationServer(FederationBase):
return 200, resp
async def on_state_ids_request(
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, Dict[str, Any]]:
if not event_id:
raise NotImplementedError("Specify an event")
@@ -354,7 +365,9 @@ class FederationServer(FederationBase):
return 200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
async def _on_context_state_request_compute(
self, room_id: str, event_id: str
) -> Dict[str, list]:
if event_id:
pdus = await self.handler.get_state_for_pdu(room_id, event_id)
else:
@@ -367,7 +380,9 @@ class FederationServer(FederationBase):
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
}
async def on_pdu_request(
self, origin: str, event_id: str
) -> Tuple[int, Union[JsonDict, str]]:
pdu = await self.handler.get_persisted_pdu(origin, event_id)
if pdu:
@@ -375,12 +390,16 @@ class FederationServer(FederationBase):
else:
return 404, ""
async def on_query_request(
self, query_type: str, args: Dict[str, str]
) -> Tuple[int, Dict[str, Any]]:
received_queries_counter.labels(query_type).inc()
resp = await self.registry.on_query(query_type, args)
return 200, resp
async def on_make_join_request(
self, origin: str, room_id: str, user_id: str, supported_versions: List[str]
) -> Dict[str, Any]:
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
@@ -397,7 +416,7 @@ class FederationServer(FederationBase):
async def on_invite_request(
self, origin: str, content: JsonDict, room_version_id: str
) -> Dict[str, Any]:
room_version = KNOWN_ROOM_VERSIONS.get(room_version_id)
if not room_version:
raise SynapseError(
@@ -409,12 +428,14 @@ class FederationServer(FederationBase):
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
pdu = await self._check_sigs_and_hash(room_version, pdu)
ret_pdu = await self.handler.on_invite_request(origin, pdu, room_version)
time_now = self._clock.time_msec()
return {"event": ret_pdu.get_pdu_json(time_now)}
async def on_send_join_request(
self, origin: str, content: JsonDict, room_id: str
) -> Dict[str, Any]:
logger.debug("on_send_join_request: content: %s", content)
room_version = await self.store.get_room_version(room_id)
@@ -425,7 +446,7 @@ class FederationServer(FederationBase):
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
res_pdus = await self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
@@ -434,7 +455,9 @@ class FederationServer(FederationBase):
"auth_chain": [p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]],
}
async def on_make_leave_request(
self, origin: str, room_id: str, user_id: str
) -> Dict[str, Any]:
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
pdu = await self.handler.on_make_leave_request(origin, room_id, user_id)
@@ -444,7 +467,9 @@ class FederationServer(FederationBase):
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
async def on_send_leave_request(
self, origin: str, content: JsonDict, room_id: str
) -> dict:
logger.debug("on_send_leave_request: content: %s", content)
room_version = await self.store.get_room_version(room_id)
@@ -455,12 +480,14 @@ class FederationServer(FederationBase):
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
pdu = await self._check_sigs_and_hash(room_version, pdu)
await self.handler.on_send_leave_request(origin, pdu)
return {}
async def on_event_auth(
self, origin: str, room_id: str, event_id: str
) -> Tuple[int, Dict[str, Any]]:
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
@@ -471,15 +498,21 @@ class FederationServer(FederationBase):
return 200, res
@log_function
async def on_query_client_keys(
self, origin: str, content: Dict[str, str]
) -> Tuple[int, Dict[str, Any]]:
return await self.on_query_request("client_keys", content)
async def on_query_user_devices(
self, origin: str, user_id: str
) -> Tuple[int, Dict[str, Any]]:
keys = await self.device_handler.on_federation_query_user_devices(user_id)
return 200, keys
@trace
async def on_claim_client_keys(
self, origin: str, content: JsonDict
) -> Dict[str, Any]:
query = []
for user_id, device_keys in content.get("one_time_keys", {}).items():
for device_id, algorithm in device_keys.items():
@@ -488,7 +521,7 @@ class FederationServer(FederationBase):
log_kv({"message": "Claiming one time keys.", "user, device pairs": query})
results = await self.store.claim_e2e_one_time_keys(query)
json_result = {} # type: Dict[str, Dict[str, dict]]
for user_id, device_keys in results.items():
for device_id, keys in device_keys.items():
for key_id, json_bytes in keys.items():
@@ -511,8 +544,13 @@ class FederationServer(FederationBase):
return {"one_time_keys": json_result}
async def on_get_missing_events(
self,
origin: str,
room_id: str,
earliest_events: List[str],
latest_events: List[str],
limit: int,
) -> Dict[str, list]:
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
@@ -541,11 +579,11 @@ class FederationServer(FederationBase):
return {"events": [ev.get_pdu_json(time_now) for ev in missing_events]}
@log_function
async def on_openid_userinfo(self, token: str) -> Optional[str]:
ts_now_ms = self._clock.time_msec()
return await self.store.get_user_id_for_open_id_token(token, ts_now_ms)
def _transaction_from_pdus(self, pdu_list: List[EventBase]) -> Transaction:
"""Returns a new Transaction containing the given PDUs suitable for
transmission.
"""
@@ -558,7 +596,7 @@ class FederationServer(FederationBase):
destination=None,
)
async def _handle_received_pdu(self, origin: str, pdu: EventBase) -> None:
""" Process a PDU received in a federation /send/ transaction.
If the event is invalid, then this method throws a FederationError.
@@ -579,10 +617,8 @@ class FederationServer(FederationBase):
until we try to backfill across the discontinuity.
Args:
origin: server which sent the pdu
pdu: received pdu
Raises: FederationError if the signatures / hash do not match, or
if the event was unacceptable for any other reason (eg, too large,
@@ -611,7 +647,7 @@ class FederationServer(FederationBase):
logger.info("Accepting join PDU %s from %s", pdu.event_id, origin)
# We've already checked that we know the room version by this point
room_version = await self.store.get_room_version(pdu.room_id)
# Check signature.
try:
@@ -625,25 +661,27 @@ class FederationServer(FederationBase):
return "<ReplicationLayer(%s)>" % self.server_name
async def exchange_third_party_invite(
self, sender_user_id: str, target_user_id: str, room_id: str, signed: Dict
):
ret = await self.handler.exchange_third_party_invite(
sender_user_id, target_user_id, room_id, signed
)
return ret
async def on_exchange_third_party_invite_request(
self, room_id: str, event_dict: Dict
):
ret = await self.handler.on_exchange_third_party_invite_request(
room_id, event_dict
)
return ret
async def check_server_matches_acl(self, server_name: str, room_id: str):
"""Check if the given server is allowed by the server ACLs in the room
Args:
server_name: name of server, *without any port part*
room_id: ID of the room to check
Raises:
AuthError if the server does not match the ACL
@@ -661,15 +699,15 @@ class FederationServer(FederationBase):
raise AuthError(code=403, msg="Server is banned from room")
def server_matches_acl_event(server_name: str, acl_event: EventBase) -> bool:
"""Check if the given server is allowed by the ACL event
Args:
server_name: name of server, without any port part
acl_event: m.room.server_acl event
Returns:
True if this server is allowed by the ACLs
"""
logger.debug("Checking %s against acl %s", server_name, acl_event.content)
@@ -713,7 +751,7 @@ def server_matches_acl_event(server_name, acl_event):
return False
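For context, server ACL entries are glob patterns. A hedged sketch of the matching semantics assumed here ('*' matching any run of characters, '?' exactly one), not the exact implementation:

import re

def glob_entry_matches(server_name: str, acl_entry: str) -> bool:
    # Translate the glob into an anchored regular expression.
    regex = re.escape(acl_entry).replace(r"\*", ".*").replace(r"\?", ".")
    return re.fullmatch(regex, server_name) is not None

assert glob_entry_matches("sub.evil.example", "*.evil.example")
assert not glob_entry_matches("good.example", "*.evil.example")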
def _acl_entry_matches(server_name: str, acl_entry: str) -> Match:
if not isinstance(acl_entry, six.string_types):
logger.warning(
"Ignoring non-str ACL entry '%s' (is %s)", acl_entry, type(acl_entry)
@@ -732,13 +770,13 @@ class FederationHandlerRegistry(object):
self.edu_handlers = {}
self.query_handlers = {}
def register_edu_handler(self, edu_type: str, handler: Callable[[str, dict], None]):
"""Sets the handler callable that will be used to handle an incoming
federation EDU of the given type.
Args:
edu_type: The type of the incoming EDU to register handler for
handler: A callable invoked on incoming EDU
of the given type. The arguments are the origin server name and
the EDU contents.
"""
@@ -749,14 +787,16 @@ class FederationHandlerRegistry(object):
self.edu_handlers[edu_type] = handler
def register_query_handler(
self, query_type: str, handler: Callable[[dict], defer.Deferred]
):
"""Sets the handler callable that will be used to handle an incoming
federation query of the given type.
Args:
query_type: Category name of the query, which should match
the string used by make_query.
handler: Invoked to handle
incoming queries of this type. The return will be yielded
on and the result used as the response to the query request.
"""
@@ -767,10 +807,11 @@ class FederationHandlerRegistry(object):
self.query_handlers[query_type] = handler
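A hypothetical registration against the registry API described above (handler bodies are invented; coroutine handlers are assumed acceptable where Deferreds are expected):

async def on_typing_edu(origin: str, content: dict) -> None:
    # Handle an m.typing EDU sent by `origin`.
    pass

registry.register_edu_handler("m.typing", on_typing_edu)

async def on_profile_query(args: dict) -> dict:
    return {"displayname": "Example User"}

registry.register_query_handler("profile", on_profile_query)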
async def on_edu(self, edu_type: str, origin: str, content: dict):
handler = self.edu_handlers.get(edu_type)
if not handler:
logger.warning("No handler registered for EDU type %s", edu_type)
return
with start_active_span_from_edu(content, "handle_edu"):
try:
@@ -780,7 +821,7 @@ class FederationHandlerRegistry(object):
except Exception:
logger.exception("Failed to handle edu %r", edu_type)
def on_query(self, query_type: str, args: dict) -> defer.Deferred:
handler = self.query_handlers.get(query_type)
if not handler:
logger.warning("No handler registered for query type %s", query_type)
@@ -807,7 +848,7 @@ class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
super(ReplicationFederationHandlerRegistry, self).__init__()
async def on_edu(self, edu_type: str, origin: str, content: dict):
"""Overrides FederationHandlerRegistry
"""
if not self.config.use_presence and edu_type == "m.presence":
@@ -821,7 +862,7 @@ class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
return await self._send_edu(edu_type=edu_type, origin=origin, content=content)
async def on_query(self, query_type: str, args: dict):
"""Overrides FederationHandlerRegistry
"""
handler = self.query_handlers.get(query_type)


@@ -477,7 +477,7 @@ def process_rows_for_federation(transaction_queue, rows):
Args:
transaction_queue (FederationSender)
rows (list(synapse.replication.tcp.streams.federation.FederationStream.FederationStreamRow))
"""
# The federation stream contains a bunch of different types of


@@ -499,4 +499,13 @@ class FederationSender(object):
self._get_per_destination_queue(destination).attempt_new_transaction()
def get_current_token(self) -> int:
# Dummy implementation for case where federation sender isn't offloaded
# to a worker.
return 0
async def get_replication_rows(
self, from_token, to_token, limit, federation_ack=None
):
# Dummy implementation for case where federation sender isn't offloaded
# to a worker.
return []


@@ -15,13 +15,14 @@
# limitations under the License.
import logging
from typing import Any, Dict, Optional
from six.moves import urllib
from twisted.internet import defer
from synapse.api.constants import Membership
from synapse.api.errors import Codes, HttpResponseException, SynapseError
from synapse.api.urls import (
FEDERATION_UNSTABLE_PREFIX,
FEDERATION_V1_PREFIX,
@@ -326,18 +327,25 @@ class TransportLayerClient(object):
@log_function
def get_public_rooms(
self,
remote_server: str,
limit: Optional[int] = None,
since_token: Optional[str] = None,
search_filter: Optional[Dict] = None,
include_all_networks: bool = False,
third_party_instance_id: Optional[str] = None,
):
"""Get the list of public rooms from a remote homeserver
See synapse.federation.federation_client.FederationClient.get_public_rooms for
more information.
"""
if search_filter:
# this uses MSC2197 (Search Filtering over Federation)
path = _create_v1_path("/publicRooms")
data = {
"include_all_networks": "true" if include_all_networks else "false"
} # type: Dict[str, Any]
if third_party_instance_id:
data["third_party_instance_id"] = third_party_instance_id
if limit:
@@ -347,9 +355,19 @@ class TransportLayerClient(object):
data["filter"] = search_filter
try:
response = yield self.client.post_json(
destination=remote_server, path=path, data=data, ignore_backoff=True
)
except HttpResponseException as e:
if e.code == 403:
raise SynapseError(
403,
"You are not allowed to view the public rooms list of %s"
% (remote_server,),
errcode=Codes.FORBIDDEN,
)
raise
else:
path = _create_v1_path("/publicRooms")
@@ -363,9 +381,19 @@ class TransportLayerClient(object):
if since_token:
args["since"] = [since_token]
try:
response = yield self.client.get_json(
destination=remote_server, path=path, args=args, ignore_backoff=True
)
except HttpResponseException as e:
if e.code == 403:
raise SynapseError(
403,
"You are not allowed to view the public rooms list of %s"
% (remote_server,),
errcode=Codes.FORBIDDEN,
)
raise
return response
@@ -399,20 +427,30 @@ class TransportLayerClient(object):
{
"device_keys": {
"<user_id>": ["<device_id>"]
}
}
Response:
{
"device_keys": {
"<user_id>": {
"<device_id>": {...}
}
},
"master_key": {
"<user_id>": {...}
}
},
"self_signing_key": {
"<user_id>": {...}
}
}
Args:
destination(str): The server to query.
query_content(dict): The user ids to query.
Returns:
A dict containing device and cross-signing keys.
"""
path = _create_v1_path("/user/keys/query")
@@ -429,14 +467,30 @@ class TransportLayerClient(object):
Response:
{
"stream_id": "...",
"devices": [ { ... } ],
"master_key": {
"user_id": "<user_id>",
"usage": [...],
"keys": {...},
"signatures": {
"<user_id>": {...}
}
},
"self_signing_key": {
"user_id": "<user_id>",
"usage": [...],
"keys": {...},
"signatures": {
"<user_id>": {...}
}
}
}
Args:
destination(str): The server to query.
query_content(dict): The user ids to query.
Returns:
A dict containing device and cross-signing keys.
"""
path = _create_v1_path("/user/devices/%s", user_id)
@@ -454,8 +508,10 @@ class TransportLayerClient(object):
{
"one_time_keys": {
"<user_id>": {
"<device_id>": "<algorithm>"
}
}
}
Response:
{
@@ -463,13 +519,16 @@ class TransportLayerClient(object):
"<user_id>": {
"<device_id>": {
"<algorithm>:<key_id>": "<key_base64>"
}
}
}
}
Args:
destination(str): The server to query.
query_content(dict): The user ids to query.
Returns:
A dict containing the one-time keys.
"""
path = _create_v1_path("/user/keys/claim")


@@ -37,13 +37,13 @@ An attestation is a signed blob of json that looks like:
import logging
import random
from typing import Tuple
from signedjson.sign import sign_json
from twisted.internet import defer
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import get_domain_from_id
@@ -162,19 +162,19 @@ class GroupAttestionRenewer(object):
def _start_renew_attestations(self):
return run_as_background_process("renew_attestations", self._renew_attestations)
async def _renew_attestations(self):
"""Called periodically to check if we need to update any of our attestations
"""
now = self.clock.time_msec()
rows = await self.store.get_attestations_need_renewals(
now + UPDATE_ATTESTATION_TIME_MS
)
async def _renew_attestation(group_user: Tuple[str, str]):
group_id, user_id = group_user
try:
if not self.is_mine_id(group_id):
destination = get_domain_from_id(group_id)
@@ -208,7 +208,4 @@ class GroupAttestionRenewer(object):
)
for row in rows:
await _renew_attestation((row["group_id"], row["user_id"]))


@@ -748,17 +748,18 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
raise NotImplementedError()
async def remove_user_from_group(
self, group_id, user_id, requester_user_id, content
):
"""Remove a user from the group; either a user is leaving or an admin
kicked them.
"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
is_kick = False
if requester_user_id != user_id:
is_admin = await self.store.is_user_admin_in_group(
group_id, requester_user_id
)
if not is_admin:
@@ -766,30 +767,29 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
is_kick = True
await self.store.remove_user_from_group(group_id, user_id)
if is_kick:
if self.hs.is_mine_id(user_id):
groups_local = self.hs.get_groups_local_handler()
await groups_local.user_removed_from_group(group_id, user_id, {})
else:
await self.transport_client.remove_user_from_group_notification(
get_domain_from_id(user_id), group_id, user_id, {}
)
if not self.hs.is_mine_id(user_id):
await self.store.maybe_delete_remote_profile_cache(user_id)
# Delete group if the last user has left
users = await self.store.get_users_in_group(group_id, include_private=True)
if not users:
await self.store.delete_group(group_id)
return {}
async def create_group(self, group_id, requester_user_id, content):
group = await self.check_group_is_ours(group_id, requester_user_id)
logger.info("Attempting to create group with ID: %r", group_id)
@@ -799,7 +799,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
if group:
raise SynapseError(400, "Group already exists")
is_admin = await self.auth.is_server_admin(
UserID.from_string(requester_user_id)
)
if not is_admin:
@@ -822,7 +822,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
long_description = profile.get("long_description")
user_profile = content.get("user_profile", {})
await self.store.create_group(
group_id,
requester_user_id,
name=name,
@@ -834,7 +834,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
if not self.hs.is_mine_id(requester_user_id):
remote_attestation = content["attestation"]
await self.attestations.verify_attestation(
remote_attestation, user_id=requester_user_id, group_id=group_id
)
@@ -845,7 +845,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
local_attestation = None
remote_attestation = None
await self.store.add_user_to_group(
group_id,
requester_user_id,
is_admin=True,
@@ -855,7 +855,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
)
if not self.hs.is_mine_id(requester_user_id):
await self.store.add_remote_profile_cache(
requester_user_id,
displayname=user_profile.get("displayname"),
avatar_url=user_profile.get("avatar_url"),
@@ -863,8 +863,7 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
return {"group_id": group_id}
async def delete_group(self, group_id, requester_user_id):
"""Deletes a group, kicking out all current members.
Only group admins or server admins can call this request
@@ -877,14 +876,14 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
Deferred
"""
await self.check_group_is_ours(group_id, requester_user_id, and_exists=True)
# Only server admins or group admins can delete groups.
is_admin = await self.store.is_user_admin_in_group(group_id, requester_user_id)
if not is_admin:
is_admin = await self.auth.is_server_admin(
UserID.from_string(requester_user_id)
)
@@ -892,18 +891,17 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
raise SynapseError(403, "User is not an admin")
# Before deleting the group lets kick everyone out of it
users = await self.store.get_users_in_group(group_id, include_private=True)
async def _kick_user_from_group(user_id):
if self.hs.is_mine_id(user_id):
groups_local = self.hs.get_groups_local_handler()
await groups_local.user_removed_from_group(group_id, user_id, {})
else:
await self.transport_client.remove_user_from_group_notification(
get_domain_from_id(user_id), group_id, user_id, {}
)
await self.store.maybe_delete_remote_profile_cache(user_id)
# We kick users out in the order of:
# 1. Non-admins
@@ -922,11 +920,11 @@ class GroupsServerHandler(GroupsServerWorkerHandler):
else:
non_admins.append(u["user_id"])
await concurrently_execute(_kick_user_from_group, non_admins, 10)
await concurrently_execute(_kick_user_from_group, admins, 10)
await _kick_user_from_group(requester_user_id)
await self.store.delete_group(group_id)
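The kick-out above relies on concurrently_execute to cap fan-out. A minimal sketch of equivalent semantics using asyncio (illustrative, not Synapse's helper):

import asyncio
from typing import Awaitable, Callable, Iterable, TypeVar

T = TypeVar("T")

async def concurrently_execute_sketch(
    func: Callable[[T], Awaitable[None]], args: Iterable[T], limit: int
) -> None:
    # Run `func` over `args` with at most `limit` calls in flight at once.
    sem = asyncio.Semaphore(limit)

    async def _run_one(arg: T) -> None:
        async with sem:
            await func(arg)

    await asyncio.gather(*(_run_one(a) for a in args))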
def _parse_join_policy_from_contents(content):


@@ -126,30 +126,28 @@ class BaseHandler(object):
retry_after_ms=int(1000 * (time_allowed - time_now))
)
async def maybe_kick_guest_users(self, event, context=None):
# Technically this function invalidates current_state by changing it.
# Hopefully this isn't that important to the caller.
if event.type == EventTypes.GuestAccess:
guest_access = event.content.get("guest_access", "forbidden")
if guest_access != "can_join":
if context:
current_state_ids = await context.get_current_state_ids()
current_state = await self.store.get_events(
list(current_state_ids.values())
)
else:
current_state = await self.state_handler.get_current_state(
event.room_id
)
current_state = list(current_state.values())
logger.info("maybe_kick_guest_users %r", current_state)
await self.kick_guest_users(current_state)
async def kick_guest_users(self, current_state):
for member_event in current_state:
try:
if member_event.type != EventTypes.Member:
@@ -180,7 +178,7 @@ class BaseHandler(object):
# homeserver.
requester = synapse.types.create_requester(target_user, is_guest=True)
handler = self.hs.get_room_member_handler()
await handler.update_membership(
requester,
target_user,
member_event.room_id,


@@ -18,14 +18,12 @@ import logging
import time
import unicodedata
import urllib.parse
from typing import Any, Callable, Dict, Iterable, List, Optional, Tuple, Union
import attr
import bcrypt # type: ignore[import]
import pymacaroons
import synapse.util.stringutils as stringutils
from synapse.api.constants import LoginType
from synapse.api.errors import (
@@ -43,10 +41,10 @@ from synapse.handlers.ui_auth.checkers import UserInteractiveAuthChecker
from synapse.http.server import finish_request
from synapse.http.site import SynapseRequest
from synapse.logging.context import defer_to_thread
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.module_api import ModuleApi
from synapse.push.mailer import load_jinja2_templates
from synapse.types import Requester, UserID
from ._base import BaseHandler
@@ -71,15 +69,6 @@ class AuthHandler(BaseHandler):
self.bcrypt_rounds = hs.config.bcrypt_rounds
account_handler = ModuleApi(hs, self)
self.password_providers = [
module(config=config, account_handler=account_handler)
@@ -91,6 +80,7 @@ class AuthHandler(BaseHandler):
self.hs = hs # FIXME better possibility to access registrationHandler later?
self.macaroon_gen = hs.get_macaroon_generator()
self._password_enabled = hs.config.password_enabled
self._sso_enabled = hs.config.saml2_enabled or hs.config.cas_enabled
# we keep this as a list despite the O(N^2) implication so that we can
# keep PASSWORD first and avoid confusing clients which pick the first
@@ -106,6 +96,13 @@ class AuthHandler(BaseHandler):
if t not in login_types:
login_types.append(t)
self._supported_login_types = login_types
# Login types and UI Auth types have a heavy overlap, but are not
# necessarily identical. Login types have SSO (and other login types)
# added in the rest layer, see synapse.rest.client.v1.login.LoginRestServlet.on_GET.
ui_auth_types = login_types.copy()
if self._sso_enabled:
ui_auth_types.append(LoginType.SSO)
self._supported_ui_auth_types = ui_auth_types
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.
@@ -113,20 +110,52 @@ class AuthHandler(BaseHandler):
self._clock = self.hs.get_clock()
# Expire old UI auth sessions after a period of time.
if hs.config.worker_app is None:
self._clock.looping_call(
run_as_background_process,
5 * 60 * 1000,
"expire_old_sessions",
self._expire_old_sessions,
)
# Load the SSO HTML templates.
# The following template is shown to the user during a client login via SSO,
# after the SSO completes and before redirecting them back to their client.
# It notifies the user they are about to give access to their matrix account
# to the client.
self._sso_redirect_confirm_template = load_jinja2_templates(
hs.config.sso_redirect_confirm_template_dir, ["sso_redirect_confirm.html"],
)[0]
# The following template is shown during user interactive authentication
# in the fallback auth scenario. It notifies the user that they are
# authenticating for an operation to occur on their account.
self._sso_auth_confirm_template = load_jinja2_templates(
hs.config.sso_redirect_confirm_template_dir, ["sso_auth_confirm.html"],
)[0]
# The following template is shown after a successful user interactive
# authentication session. It tells the user they can close the window.
self._sso_auth_success_template = hs.config.sso_auth_success_template
# The following template is shown during the SSO authentication process if
# the account is deactivated.
self._sso_account_deactivated_template = (
hs.config.sso_account_deactivated_template
)
self._server_name = hs.config.server_name
# cast to tuple for use with str.startswith
self._whitelisted_sso_clients = tuple(hs.config.sso_client_whitelist)
async def validate_user_via_ui_auth(
self,
requester: Requester,
request: SynapseRequest,
request_body: Dict[str, Any],
clientip: str,
description: str,
) -> dict:
"""
Checks that the user is who they claim to be, via a UI auth.
@@ -137,12 +166,17 @@ class AuthHandler(BaseHandler):
Args:
requester: The user, as given by the access token
request: The request sent by the client.
request_body: The body of the request sent by the client
clientip: The IP address of the client.
description: A human readable string to be displayed to the user that
describes the operation happening on their account.
Returns:
The parameters for this request (which may
have been given only in a previous call).
Raises:
@@ -169,10 +203,12 @@ class AuthHandler(BaseHandler):
)
# build a list of supported flows
flows = [[login_type] for login_type in self._supported_ui_auth_types]
try:
result, params, _ = await self.check_auth(
flows, request, request_body, clientip, description
)
except LoginError:
# Update the ratelimiter to say we failed (`can_do_action` doesn't raise).
self._failed_uia_attempts_ratelimiter.can_do_action(
@@ -185,7 +221,7 @@ class AuthHandler(BaseHandler):
raise
# find the completed login type
for login_type in self._supported_ui_auth_types:
if login_type not in result:
continue
@@ -209,18 +245,18 @@ class AuthHandler(BaseHandler):
"""
return self.checkers.keys()
async def check_auth(
self,
flows: List[List[str]],
request: SynapseRequest,
clientdict: Dict[str, Any],
clientip: str,
description: str,
) -> Tuple[dict, dict, str]:
"""
Takes a dictionary sent by the client in the login / registration
protocol and handles the User-Interactive Auth flow.
If no auth flows have been completed successfully, raises an
InteractiveAuthIncompleteError. To handle this, you can use
synapse.rest.client.v2_alpha._base.interactive_auth_handler as a
@@ -231,14 +267,18 @@ class AuthHandler(BaseHandler):
strings representing auth-types. At least one full
flow must be completed in order for auth to be successful.
request: The request sent by the client.
clientdict: The dictionary from the client root level, not the
'auth' key: this method prompts for auth if none is sent.
clientip: The IP address of the client.
description: A human readable string to be displayed to the user that
describes the operation happening on their account.
Returns:
A tuple of (creds, params, session_id).
'creds' contains the authenticated credentials of each stage.
@@ -260,9 +300,26 @@ class AuthHandler(BaseHandler):
del clientdict["auth"]
if "session" in authdict:
sid = authdict["session"]
# Convert the URI and method to strings.
uri = request.uri.decode("utf-8")
method = request.method.decode("utf-8")
# If there's no session ID, create a new session.
if not sid:
session = await self.store.create_ui_auth_session(
clientdict, uri, method, description
)
else:
try:
session = await self.store.get_ui_auth_session(sid)
except StoreError:
raise SynapseError(400, "Unknown session ID: %s" % (sid,))
# If the client provides parameters, update what is persisted,
# otherwise use whatever was last provided.
#
# This was designed to allow the client to omit the parameters
# and just supply the session in subsequent calls so it split
# auth between devices by just sharing the session, (eg. so you
@@ -270,31 +327,60 @@ class AuthHandler(BaseHandler):
# email auth link on there). It's probably too open to abuse
# because it lets unauthenticated clients store arbitrary objects
# on a homeserver.
#
# Revisit: Assuming the REST APIs do sensible validation, the data
# isn't arbitrary.
#
# Note that the registration endpoint explicitly removes the
# "initial_device_display_name" parameter if it is provided
# without a "password" parameter. See the changes to
# synapse.rest.client.v2_alpha.register.RegisterRestServlet.on_POST
# in commit 544722bad23fc31056b9240189c3cbbbf0ffd3f9.
if not clientdict:
clientdict = session.clientdict
# Ensure that the queried operation does not vary between stages of
# the UI authentication session. This is done by generating a stable
# comparator and storing it during the initial query. Subsequent
# queries ensure that this comparator has not changed.
#
# The comparator is based on the requested URI and HTTP method. The
# client dict (minus the auth dict) should also be checked, but some
# clients are not spec compliant, just warn for now if the client
# dict changes.
if (session.uri, session.method) != (uri, method):
raise SynapseError(
403,
"Requested operation has changed during the UI authentication session.",
)
if session.clientdict != clientdict:
logger.warning(
"Requested operation has changed during the UI "
"authentication session. A future version of Synapse "
"will remove this capability."
)
# For backwards compatibility, changes to the client dict are
# persisted as clients modify them throughout their user interactive
# authentication flow.
await self.store.set_ui_auth_clientdict(sid, clientdict)
if not authdict:
raise InteractiveAuthIncompleteError(
self._auth_dict_for_flows(flows, session.session_id)
)
# check auth type currently being presented
errordict = {} # type: Dict[str, Any]
if "type" in authdict:
login_type = authdict["type"] # type: str
try:
result = await self._check_auth_dict(authdict, clientip)
if result:
creds[login_type] = result
await self.store.mark_ui_auth_stage_complete(
session.session_id, login_type, result
)
except LoginError as e:
if login_type == LoginType.EMAIL_IDENTITY:
# riot used to have a bug where it would request a new
@@ -310,6 +396,7 @@ class AuthHandler(BaseHandler):
# so that the client can have another go.
errordict = e.error_dict()
creds = await self.store.get_completed_ui_auth_stages(session.session_id)
for f in flows:
if len(set(f) - set(creds)) == 0:
# it's very useful to know what args are stored, but this can
@@ -322,15 +409,17 @@ class AuthHandler(BaseHandler):
creds,
list(clientdict),
)
return creds, clientdict, session["id"]
ret = self._auth_dict_for_flows(flows, session)
return creds, clientdict, session.session_id
ret = self._auth_dict_for_flows(flows, session.session_id)
ret["completed"] = list(creds)
ret.update(errordict)
raise InteractiveAuthIncompleteError(ret)
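The flow-completion test above ("have all stages of some flow been completed?") is easy to see in isolation; a small sketch with invented stage names:

```python
# A flow is satisfied once every stage in it appears among the
# completed credentials (stage names invented for the example).
flows = [["m.login.recaptcha", "m.login.terms"], ["m.login.password"]]
creds = {"m.login.recaptcha": True, "m.login.terms": True}

for f in flows:
    if len(set(f) - set(creds)) == 0:
        print("flow satisfied:", f)  # ['m.login.recaptcha', 'm.login.terms']
```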
@defer.inlineCallbacks
def add_oob_auth(self, stagetype: str, authdict: Dict[str, Any], clientip: str):
async def add_oob_auth(
self, stagetype: str, authdict: Dict[str, Any], clientip: str
) -> bool:
"""
Adds the result of out-of-band authentication into an existing auth
session. Currently used for adding the result of fallback auth.
@@ -340,15 +429,11 @@ class AuthHandler(BaseHandler):
if "session" not in authdict:
raise LoginError(400, "", Codes.MISSING_PARAM)
sess = self._get_session_info(authdict["session"])
if "creds" not in sess:
sess["creds"] = {}
creds = sess["creds"]
result = yield self.checkers[stagetype].check_auth(authdict, clientip)
result = await self.checkers[stagetype].check_auth(authdict, clientip)
if result:
creds[stagetype] = result
self._save_session(sess)
await self.store.mark_ui_auth_stage_complete(
authdict["session"], stagetype, result
)
return True
return False
@@ -370,7 +455,7 @@ class AuthHandler(BaseHandler):
sid = authdict["session"]
return sid
def set_session_data(self, session_id: str, key: str, value: Any) -> None:
async def set_session_data(self, session_id: str, key: str, value: Any) -> None:
"""
Store a key-value pair into the sessions data associated with this
request. This data is stored server-side and cannot be modified by
@@ -381,11 +466,12 @@ class AuthHandler(BaseHandler):
key: The key to store the data under
value: The data to store
"""
sess = self._get_session_info(session_id)
sess.setdefault("serverdict", {})[key] = value
self._save_session(sess)
try:
await self.store.set_ui_auth_session_data(session_id, key, value)
except StoreError:
raise SynapseError(400, "Unknown session ID: %s" % (session_id,))
def get_session_data(
async def get_session_data(
self, session_id: str, key: str, default: Optional[Any] = None
) -> Any:
"""
@@ -396,11 +482,22 @@ class AuthHandler(BaseHandler):
key: The key to store the data under
default: Value to return if the key has not been set
"""
sess = self._get_session_info(session_id)
return sess.setdefault("serverdict", {}).get(key, default)
try:
return await self.store.get_ui_auth_session_data(session_id, key, default)
except StoreError:
raise SynapseError(400, "Unknown session ID: %s" % (session_id,))
@defer.inlineCallbacks
def _check_auth_dict(self, authdict: Dict[str, Any], clientip: str):
async def _expire_old_sessions(self):
"""
Invalidate any user interactive authentication sessions that have expired.
"""
now = self._clock.time_msec()
expiration_time = now - self.SESSION_EXPIRE_MS
await self.store.delete_old_ui_auth_sessions(expiration_time)
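The cutoff arithmetic is straightforward; a sketch assuming the 48-hour `SESSION_EXPIRE_MS` this handler uses (how often Synapse runs the sweep is not shown in this hunk):

```python
import time

SESSION_EXPIRE_MS = 48 * 60 * 60 * 1000  # 48 hours, in milliseconds (assumed)

def expiration_cutoff(now_ms: int) -> int:
    # Sessions last used before this timestamp get deleted.
    return now_ms - SESSION_EXPIRE_MS

now = int(time.time() * 1000)
print("delete UI auth sessions last used before", expiration_cutoff(now))
```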
async def _check_auth_dict(
self, authdict: Dict[str, Any], clientip: str
) -> Union[Dict[str, Any], str]:
"""Attempt to validate the auth dict provided by a client
Args:
@@ -408,7 +505,7 @@ class AuthHandler(BaseHandler):
clientip: IP address of the client
Returns:
Deferred: result of the stage verification.
Result of the stage verification.
Raises:
StoreError if there was a problem accessing the database
@@ -418,7 +515,7 @@ class AuthHandler(BaseHandler):
login_type = authdict["type"]
checker = self.checkers.get(login_type)
if checker is not None:
res = yield checker.check_auth(authdict, clientip=clientip)
res = await checker.check_auth(authdict, clientip=clientip)
return res
# build a v1-login-style dict out of the authdict and fall back to the
@@ -428,7 +525,7 @@ class AuthHandler(BaseHandler):
if user_id is None:
raise SynapseError(400, "", Codes.MISSING_PARAM)
(canonical_id, callback) = yield self.validate_login(user_id, authdict)
(canonical_id, callback) = await self.validate_login(user_id, authdict)
return canonical_id
def _get_params_recaptcha(self) -> dict:
@@ -452,7 +549,7 @@ class AuthHandler(BaseHandler):
}
def _auth_dict_for_flows(
self, flows: List[List[str]], session: Dict[str, Any]
self, flows: List[List[str]], session_id: str,
) -> Dict[str, Any]:
public_flows = []
for f in flows:
@@ -471,31 +568,12 @@ class AuthHandler(BaseHandler):
params[stage] = get_params[stage]()
return {
"session": session["id"],
"session": session_id,
"flows": [{"stages": f} for f in public_flows],
"params": params,
}
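For illustration, the shape of the dict this returns for a hypothetical session and flow (all values invented):

```python
auth_dict = {
    "session": "dGhpc2lzYXNlc3Npb24",  # invented session ID
    "flows": [{"stages": ["m.login.recaptcha", "m.login.terms"]}],
    "params": {
        "m.login.recaptcha": {"public_key": "recaptcha-site-key"},
        "m.login.terms": {"policies": {}},
    },
}
```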
def _get_session_info(self, session_id: Optional[str]) -> dict:
"""
Gets or creates a session given a session ID.
The session can be used to track data across multiple requests, e.g. for
interactive authentication.
"""
if session_id not in self.sessions:
session_id = None
if not session_id:
# create a new session
while session_id is None or session_id in self.sessions:
session_id = stringutils.random_string(24)
self.sessions[session_id] = {"id": session_id}
return self.sessions[session_id]
@defer.inlineCallbacks
def get_access_token_for_user_id(
async def get_access_token_for_user_id(
self, user_id: str, device_id: Optional[str], valid_until_ms: Optional[int]
):
"""
@@ -525,10 +603,10 @@ class AuthHandler(BaseHandler):
)
logger.info("Logging in user %s on device %s%s", user_id, device_id, fmt_expiry)
yield self.auth.check_auth_blocking(user_id)
await self.auth.check_auth_blocking(user_id)
access_token = self.macaroon_gen.generate_access_token(user_id)
yield self.store.add_access_token_to_user(
await self.store.add_access_token_to_user(
user_id, access_token, device_id, valid_until_ms
)
@@ -538,15 +616,14 @@ class AuthHandler(BaseHandler):
# device, so we double-check it here.
if device_id is not None:
try:
yield self.store.get_device(user_id, device_id)
await self.store.get_device(user_id, device_id)
except StoreError:
yield self.store.delete_access_token(access_token)
await self.store.delete_access_token(access_token)
raise StoreError(400, "Login raced against device deletion")
return access_token
@defer.inlineCallbacks
def check_user_exists(self, user_id: str):
async def check_user_exists(self, user_id: str) -> Optional[str]:
"""
Checks to see if a user with the given id exists. Will check case
insensitively, but return None if there are multiple inexact matches.
@@ -555,28 +632,25 @@ class AuthHandler(BaseHandler):
user_id: complete @user:id
Returns:
defer.Deferred: (unicode) canonical_user_id, or None if zero or
multiple matches
Raises:
UserDeactivatedError if a user is found but is deactivated.
The canonical_user_id, or None if zero or multiple matches
"""
res = yield self._find_user_id_and_pwd_hash(user_id)
res = await self._find_user_id_and_pwd_hash(user_id)
if res is not None:
return res[0]
return None
@defer.inlineCallbacks
def _find_user_id_and_pwd_hash(self, user_id: str):
async def _find_user_id_and_pwd_hash(
self, user_id: str
) -> Optional[Tuple[str, str]]:
"""Checks to see if a user with the given id exists. Will check case
insensitively, but will return None if there are multiple inexact
matches.
Returns:
tuple: A 2-tuple of `(canonical_user_id, password_hash)`
None: if there is not exactly one match
A 2-tuple of `(canonical_user_id, password_hash)` or `None`
if there is not exactly one match
"""
user_infos = yield self.store.get_users_by_id_case_insensitive(user_id)
user_infos = await self.store.get_users_by_id_case_insensitive(user_id)
result = None
if not user_infos:
@@ -609,8 +683,9 @@ class AuthHandler(BaseHandler):
"""
return self._supported_login_types
@defer.inlineCallbacks
def validate_login(self, username: str, login_submission: Dict[str, Any]):
async def validate_login(
self, username: str, login_submission: Dict[str, Any]
) -> Tuple[str, Optional[Callable[[Dict[str, str]], None]]]:
"""Authenticates the user for the /login API
Also used by the user-interactive auth flow to validate
@@ -621,7 +696,7 @@ class AuthHandler(BaseHandler):
login_submission: the whole of the login submission
(including 'type' and other relevant fields)
Returns:
Deferred[str, func]: canonical user id, and optional callback
A tuple of the canonical user id, and optional callback
to be called once the access token and device id are issued
Raises:
StoreError if there was a problem accessing the database
@@ -650,7 +725,7 @@ class AuthHandler(BaseHandler):
for provider in self.password_providers:
if hasattr(provider, "check_password") and login_type == LoginType.PASSWORD:
known_login_type = True
is_valid = yield provider.check_password(qualified_user_id, password)
is_valid = await provider.check_password(qualified_user_id, password)
if is_valid:
return qualified_user_id, None
@@ -682,7 +757,7 @@ class AuthHandler(BaseHandler):
% (login_type, missing_fields),
)
result = yield provider.check_auth(username, login_type, login_dict)
result = await provider.check_auth(username, login_type, login_dict)
if result:
if isinstance(result, str):
result = (result, None)
@@ -691,8 +766,8 @@ class AuthHandler(BaseHandler):
if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
known_login_type = True
canonical_user_id = yield self._check_local_password(
qualified_user_id, password
canonical_user_id = await self._check_local_password(
qualified_user_id, password # type: ignore
)
if canonical_user_id:
@@ -705,8 +780,9 @@ class AuthHandler(BaseHandler):
# login, it turns all LoginErrors into a 401 anyway.
raise LoginError(403, "Invalid password", errcode=Codes.FORBIDDEN)
@defer.inlineCallbacks
def check_password_provider_3pid(self, medium: str, address: str, password: str):
async def check_password_provider_3pid(
self, medium: str, address: str, password: str
) -> Tuple[Optional[str], Optional[Callable[[Dict[str, str]], None]]]:
"""Check if a password provider is able to validate a thirdparty login
Args:
@@ -715,9 +791,8 @@ class AuthHandler(BaseHandler):
password: The password of the user.
Returns:
Deferred[(str|None, func|None)]: A tuple of `(user_id,
callback)`. If authentication is successful, `user_id` is a `str`
containing the authenticated, canonical user ID. `callback` is
A tuple of `(user_id, callback)`. If authentication is successful,
`user_id` is the authenticated, canonical user ID. `callback` is
then either a function to be later run after the server has
completed login/registration, or `None`. If authentication was
unsuccessful, `user_id` and `callback` are both `None`.
@@ -729,7 +804,7 @@ class AuthHandler(BaseHandler):
# success, to a str (which is the user_id) or a tuple of
# (user_id, callback_func), where callback_func should be run
# after we've finished everything else
result = yield provider.check_3pid_auth(medium, address, password)
result = await provider.check_3pid_auth(medium, address, password)
if result:
# Check if the return value is a str or a tuple
if isinstance(result, str):
@@ -739,8 +814,7 @@ class AuthHandler(BaseHandler):
return None, None
@defer.inlineCallbacks
def _check_local_password(self, user_id: str, password: str):
async def _check_local_password(self, user_id: str, password: str) -> Optional[str]:
"""Authenticate a user against the local password database.
user_id is checked case insensitively, but will return None if there are
@@ -750,28 +824,26 @@ class AuthHandler(BaseHandler):
user_id: complete @user:id
password: the provided password
Returns:
Deferred[unicode] the canonical_user_id, or Deferred[None] if
unknown user/bad password
The canonical_user_id, or None if unknown user/bad password
"""
lookupres = yield self._find_user_id_and_pwd_hash(user_id)
lookupres = await self._find_user_id_and_pwd_hash(user_id)
if not lookupres:
return None
(user_id, password_hash) = lookupres
# If the password hash is None, the account has likely been deactivated
if not password_hash:
deactivated = yield self.store.get_user_deactivated_status(user_id)
deactivated = await self.store.get_user_deactivated_status(user_id)
if deactivated:
raise UserDeactivatedError("This account has been deactivated")
result = yield self.validate_hash(password, password_hash)
result = await self.validate_hash(password, password_hash)
if not result:
logger.warning("Failed password login for user %s", user_id)
return None
return user_id
@defer.inlineCallbacks
def validate_short_term_login_token_and_get_user_id(self, login_token: str):
async def validate_short_term_login_token_and_get_user_id(self, login_token: str):
auth_api = self.hs.get_auth()
user_id = None
try:
@@ -781,26 +853,23 @@ class AuthHandler(BaseHandler):
except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
yield self.auth.check_auth_blocking(user_id)
await self.auth.check_auth_blocking(user_id)
return user_id
@defer.inlineCallbacks
def delete_access_token(self, access_token: str):
async def delete_access_token(self, access_token: str):
"""Invalidate a single access token
Args:
access_token: access token to be deleted
Returns:
Deferred
"""
user_info = yield self.auth.get_user_by_access_token(access_token)
yield self.store.delete_access_token(access_token)
user_info = await self.auth.get_user_by_access_token(access_token)
await self.store.delete_access_token(access_token)
# see if any of our auth providers want to know about this
for provider in self.password_providers:
if hasattr(provider, "on_logged_out"):
yield provider.on_logged_out(
await provider.on_logged_out(
user_id=str(user_info["user"]),
device_id=user_info["device_id"],
access_token=access_token,
@@ -808,12 +877,11 @@ class AuthHandler(BaseHandler):
# delete pushers associated with this access token
if user_info["token_id"] is not None:
yield self.hs.get_pusherpool().remove_pushers_by_access_token(
await self.hs.get_pusherpool().remove_pushers_by_access_token(
str(user_info["user"]), (user_info["token_id"],)
)
@defer.inlineCallbacks
def delete_access_tokens_for_user(
async def delete_access_tokens_for_user(
self,
user_id: str,
except_token_id: Optional[str] = None,
@@ -827,10 +895,8 @@ class AuthHandler(BaseHandler):
device_id: ID of device the tokens are associated with.
If None, tokens associated with any device (or no device) will
be deleted
Returns:
Deferred
"""
tokens_and_devices = yield self.store.user_delete_access_tokens(
tokens_and_devices = await self.store.user_delete_access_tokens(
user_id, except_token_id=except_token_id, device_id=device_id
)
@@ -838,17 +904,18 @@ class AuthHandler(BaseHandler):
for provider in self.password_providers:
if hasattr(provider, "on_logged_out"):
for token, token_id, device_id in tokens_and_devices:
yield provider.on_logged_out(
await provider.on_logged_out(
user_id=user_id, device_id=device_id, access_token=token
)
# delete pushers associated with the access tokens
yield self.hs.get_pusherpool().remove_pushers_by_access_token(
await self.hs.get_pusherpool().remove_pushers_by_access_token(
user_id, (token_id for _, token_id, _ in tokens_and_devices)
)
@defer.inlineCallbacks
def add_threepid(self, user_id: str, medium: str, address: str, validated_at: int):
async def add_threepid(
self, user_id: str, medium: str, address: str, validated_at: int
):
# check if medium has a valid value
if medium not in ["email", "msisdn"]:
raise SynapseError(
@@ -869,14 +936,13 @@ class AuthHandler(BaseHandler):
if medium == "email":
address = address.lower()
yield self.store.user_add_threepid(
await self.store.user_add_threepid(
user_id, medium, address, validated_at, self.hs.get_clock().time_msec()
)
@defer.inlineCallbacks
def delete_threepid(
async def delete_threepid(
self, user_id: str, medium: str, address: str, id_server: Optional[str] = None
):
) -> bool:
"""Attempts to unbind the 3pid on the identity servers and deletes it
from the local database.
@@ -889,7 +955,7 @@ class AuthHandler(BaseHandler):
identity server specified when binding (if known).
Returns:
Deferred[bool]: Returns True if successfully unbound the 3pid on
Returns True if successfully unbound the 3pid on
the identity server, False if identity server doesn't support the
unbind API.
"""
@@ -899,28 +965,21 @@ class AuthHandler(BaseHandler):
address = address.lower()
identity_handler = self.hs.get_handlers().identity_handler
result = yield identity_handler.try_unbind_threepid(
result = await identity_handler.try_unbind_threepid(
user_id, {"medium": medium, "address": address, "id_server": id_server}
)
yield self.store.user_delete_threepid(user_id, medium, address)
await self.store.user_delete_threepid(user_id, medium, address)
return result
def _save_session(self, session: Dict[str, Any]) -> None:
"""Update the last used time on the session to now and add it back to the session store."""
# TODO: Persistent storage
logger.debug("Saving session %s", session)
session["last_used"] = self.hs.get_clock().time_msec()
self.sessions[session["id"]] = session
def hash(self, password: str):
async def hash(self, password: str) -> str:
"""Computes a secure hash of password.
Args:
password: Password to hash.
Returns:
Deferred(unicode): Hashed password.
Hashed password.
"""
def _do_hash():
@@ -932,9 +991,11 @@ class AuthHandler(BaseHandler):
bcrypt.gensalt(self.bcrypt_rounds),
).decode("ascii")
return defer_to_thread(self.hs.get_reactor(), _do_hash)
return await defer_to_thread(self.hs.get_reactor(), _do_hash)
def validate_hash(self, password: str, stored_hash: bytes):
async def validate_hash(
self, password: str, stored_hash: Union[bytes, str]
) -> bool:
"""Validates that self.hash(password) == stored_hash.
Args:
@@ -942,7 +1003,7 @@ class AuthHandler(BaseHandler):
stored_hash: Expected hash value.
Returns:
Deferred(bool): Whether self.hash(password) == stored_hash.
Whether self.hash(password) == stored_hash.
"""
def _do_validate_hash():
@@ -958,11 +1019,57 @@ class AuthHandler(BaseHandler):
if not isinstance(stored_hash, bytes):
stored_hash = stored_hash.encode("ascii")
return defer_to_thread(self.hs.get_reactor(), _do_validate_hash)
return await defer_to_thread(self.hs.get_reactor(), _do_validate_hash)
else:
return defer.succeed(False)
return False
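A standalone sketch of the bcrypt pattern these two methods wrap; the configured password pepper and the `defer_to_thread` offloading are deliberately omitted:

```python
import bcrypt

def do_hash(password: str, rounds: int = 12) -> str:
    # Roughly AuthHandler.hash without the pepper/thread handling.
    return bcrypt.hashpw(
        password.encode("utf-8"), bcrypt.gensalt(rounds)
    ).decode("ascii")

def do_validate(password: str, stored_hash: str) -> bool:
    # Roughly AuthHandler.validate_hash for a well-formed stored hash.
    return bcrypt.checkpw(password.encode("utf-8"), stored_hash.encode("ascii"))

hashed = do_hash("s3cret")
assert do_validate("s3cret", hashed)
assert not do_validate("wrong", hashed)
```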
def complete_sso_login(
async def start_sso_ui_auth(self, redirect_url: str, session_id: str) -> str:
"""
Get the HTML for the SSO redirect confirmation page.
Args:
redirect_url: The URL to redirect to the SSO provider.
session_id: The user interactive authentication session ID.
Returns:
The HTML to render.
"""
try:
session = await self.store.get_ui_auth_session(session_id)
except StoreError:
raise SynapseError(400, "Unknown session ID: %s" % (session_id,))
return self._sso_auth_confirm_template.render(
description=session.description, redirect_url=redirect_url,
)
async def complete_sso_ui_auth(
self, registered_user_id: str, session_id: str, request: SynapseRequest,
):
"""Having figured out a mxid for this user, complete the HTTP request
Args:
registered_user_id: The registered user ID to complete SSO login for.
session_id: The ID of the user-interactive auth session.
request: The request to complete.
"""
# Mark the stage of the authentication as successful.
# Save the user who authenticated with SSO; this will be used to ensure
# that the account being modified is also the person who logged in.
await self.store.mark_ui_auth_stage_complete(
session_id, LoginType.SSO, registered_user_id
)
# Render the HTML and return.
html_bytes = self._sso_auth_success_template.encode("utf-8")
request.setResponseCode(200)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
request.write(html_bytes)
finish_request(request)
async def complete_sso_login(
self,
registered_user_id: str,
request: SynapseRequest,
@@ -976,6 +1083,32 @@ class AuthHandler(BaseHandler):
client_redirect_url: The URL to which to redirect the user at the end of the
process.
"""
# If the account has been deactivated, do not proceed with the login
# flow.
deactivated = await self.store.get_user_deactivated_status(registered_user_id)
if deactivated:
html_bytes = self._sso_account_deactivated_template.encode("utf-8")
request.setResponseCode(403)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
request.write(html_bytes)
finish_request(request)
return
self._complete_sso_login(registered_user_id, request, client_redirect_url)
def _complete_sso_login(
self,
registered_user_id: str,
request: SynapseRequest,
client_redirect_url: str,
):
"""
The synchronous portion of complete_sso_login.
This exists purely for backwards compatibility of synapse.module_api.ModuleApi.
"""
# Create a login token
login_token = self.macaroon_gen.generate_short_term_login_token(
registered_user_id
@@ -1001,7 +1134,7 @@ class AuthHandler(BaseHandler):
# URL we redirect users to.
redirect_url_no_params = client_redirect_url.split("?")[0]
html = self._sso_redirect_confirm_template.render(
html_bytes = self._sso_redirect_confirm_template.render(
display_url=redirect_url_no_params,
redirect_url=redirect_url,
server_name=self._server_name,
@@ -1009,8 +1142,8 @@ class AuthHandler(BaseHandler):
request.setResponseCode(200)
request.setHeader(b"Content-Type", b"text/html; charset=utf-8")
request.setHeader(b"Content-Length", b"%d" % (len(html),))
request.write(html)
request.setHeader(b"Content-Length", b"%d" % (len(html_bytes),))
request.write(html_bytes)
finish_request(request)
@staticmethod

View File

@@ -0,0 +1,221 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import xml.etree.ElementTree as ET
from typing import Dict, Optional, Tuple
from six.moves import urllib
from twisted.web.client import PartialDownloadError
from synapse.api.errors import Codes, LoginError
from synapse.http.site import SynapseRequest
from synapse.types import UserID, map_username_to_mxid_localpart
logger = logging.getLogger(__name__)
class CasHandler:
"""
Utility class to handle the response from a CAS SSO service.
Args:
hs (synapse.server.HomeServer)
"""
def __init__(self, hs):
self._hostname = hs.hostname
self._auth_handler = hs.get_auth_handler()
self._registration_handler = hs.get_registration_handler()
self._cas_server_url = hs.config.cas_server_url
self._cas_service_url = hs.config.cas_service_url
self._cas_displayname_attribute = hs.config.cas_displayname_attribute
self._cas_required_attributes = hs.config.cas_required_attributes
self._http_client = hs.get_proxied_http_client()
def _build_service_param(self, args: Dict[str, str]) -> str:
"""
Generates a value to use as the "service" parameter when redirecting or
querying the CAS service.
Args:
args: Additional arguments to include in the final redirect URL.
Returns:
The URL to use as a "service" parameter.
"""
return "%s%s?%s" % (
self._cas_service_url,
"/_matrix/client/r0/login/cas/ticket",
urllib.parse.urlencode(args),
)
async def _validate_ticket(
self, ticket: str, service_args: Dict[str, str]
) -> Tuple[str, Optional[str]]:
"""
Validate a CAS ticket with the server, parse the response, and return the user and display name.
Args:
ticket: The CAS ticket from the client.
service_args: Additional arguments to include in the service URL.
Should be the same as those passed to `get_redirect_url`.
"""
uri = self._cas_server_url + "/proxyValidate"
args = {
"ticket": ticket,
"service": self._build_service_param(service_args),
}
try:
body = await self._http_client.get_raw(uri, args)
except PartialDownloadError as pde:
# Twisted raises this error if the connection is closed,
# even if that's being used old-http style to signal end-of-data
body = pde.response
user, attributes = self._parse_cas_response(body)
displayname = attributes.pop(self._cas_displayname_attribute, None)
for required_attribute, required_value in self._cas_required_attributes.items():
# If required attribute was not in CAS Response - Forbidden
if required_attribute not in attributes:
raise LoginError(401, "Unauthorized", errcode=Codes.UNAUTHORIZED)
# Also need to check value
if required_value is not None:
actual_value = attributes[required_attribute]
# If required attribute value does not match expected - Forbidden
if required_value != actual_value:
raise LoginError(401, "Unauthorized", errcode=Codes.UNAUTHORIZED)
return user, displayname
def _parse_cas_response(
self, cas_response_body: str
) -> Tuple[str, Dict[str, Optional[str]]]:
"""
Retrieve the user and other parameters from the CAS response.
Args:
cas_response_body: The response from the CAS query.
Returns:
A tuple of the user and a mapping of other attributes.
"""
user = None
attributes = {}
try:
root = ET.fromstring(cas_response_body)
if not root.tag.endswith("serviceResponse"):
raise Exception("root of CAS response is not serviceResponse")
success = root[0].tag.endswith("authenticationSuccess")
for child in root[0]:
if child.tag.endswith("user"):
user = child.text
if child.tag.endswith("attributes"):
for attribute in child:
# ElementTree library expands the namespace in
# attribute tags to the full URL of the namespace.
# We don't care about namespace here and it will always
# be encased in curly braces, so we remove them.
tag = attribute.tag
if "}" in tag:
tag = tag.split("}")[1]
attributes[tag] = attribute.text
if user is None:
raise Exception("CAS response does not contain user")
except Exception:
logger.exception("Error parsing CAS response")
raise LoginError(401, "Invalid CAS response", errcode=Codes.UNAUTHORIZED)
if not success:
raise LoginError(
401, "Unsuccessful CAS response", errcode=Codes.UNAUTHORIZED
)
return user, attributes
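A worked example of the parsing logic above against a fabricated successful CAS 2.0 response:

```python
import xml.etree.ElementTree as ET

body = """\
<cas:serviceResponse xmlns:cas="http://www.yale.edu/tp/cas">
  <cas:authenticationSuccess>
    <cas:user>alice</cas:user>
    <cas:attributes>
      <cas:displayName>Alice</cas:displayName>
    </cas:attributes>
  </cas:authenticationSuccess>
</cas:serviceResponse>
"""

root = ET.fromstring(body)
assert root.tag.endswith("serviceResponse")

user, attributes = None, {}
for child in root[0]:
    if child.tag.endswith("user"):
        user = child.text
    if child.tag.endswith("attributes"):
        for attribute in child:
            # ElementTree expands the namespace into the tag as
            # "{uri}name"; keep only the part after the brace.
            tag = attribute.tag
            attributes[tag.split("}")[1] if "}" in tag else tag] = attribute.text

print(user, attributes)  # alice {'displayName': 'Alice'}
```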
def get_redirect_url(self, service_args: Dict[str, str]) -> str:
"""
Generates a URL for the CAS server where the client should be redirected.
Args:
service_args: Additional arguments to include in the final redirect URL.
Returns:
The URL to redirect the client to.
"""
args = urllib.parse.urlencode(
{"service": self._build_service_param(service_args)}
)
return "%s/login?%s" % (self._cas_server_url, args)
async def handle_ticket(
self,
request: SynapseRequest,
ticket: str,
client_redirect_url: Optional[str],
session: Optional[str],
) -> None:
"""
Called once the user has successfully authenticated with the SSO.
Validates a CAS ticket sent by the client and completes the auth process.
If the user interactive authentication session is provided, marks the
UI Auth session as complete, then returns an HTML page notifying the
user they are done.
Otherwise, this registers the user if necessary, and then returns a
redirect (with a login token) to the client.
Args:
request: the incoming request from the browser. We'll
respond to it with a redirect or an HTML page.
ticket: The CAS ticket provided by the client.
client_redirect_url: the redirectUrl parameter from the `/cas/ticket` HTTP request, if given.
This should be the same as the redirectUrl from the original `/login/sso/redirect` request.
session: The session parameter from the `/cas/ticket` HTTP request, if given.
This should be the UI Auth session id.
"""
args = {}
if client_redirect_url:
args["redirectUrl"] = client_redirect_url
if session:
args["session"] = session
username, user_display_name = await self._validate_ticket(ticket, args)
localpart = map_username_to_mxid_localpart(username)
user_id = UserID(localpart, self._hostname).to_string()
registered_user_id = await self._auth_handler.check_user_exists(user_id)
if session:
await self._auth_handler.complete_sso_ui_auth(
registered_user_id, session, request,
)
else:
if not registered_user_id:
registered_user_id = await self._registration_handler.register_user(
localpart=localpart, default_display_name=user_display_name
)
await self._auth_handler.complete_sso_login(
registered_user_id, request, client_redirect_url
)

View File

@@ -125,8 +125,14 @@ class DeviceWorkerHandler(BaseHandler):
users_who_share_room = yield self.store.get_users_who_share_room_with_user(
user_id
)
tracked_users = set(users_who_share_room)
# Always tell the user about their own devices
tracked_users.add(user_id)
changed = yield self.store.get_users_whose_devices_changed(
from_token.device_list_key, users_who_share_room
from_token.device_list_key, tracked_users
)
# Then work out if any users have since joined
@@ -332,8 +338,10 @@ class DeviceHandler(DeviceWorkerHandler):
else:
raise
yield self._auth_handler.delete_access_tokens_for_user(
user_id, device_id=device_id
yield defer.ensureDeferred(
self._auth_handler.delete_access_tokens_for_user(
user_id, device_id=device_id
)
)
yield self.store.delete_e2e_keys_by_device(user_id=user_id, device_id=device_id)
@@ -385,8 +393,10 @@ class DeviceHandler(DeviceWorkerHandler):
# Delete access tokens and e2e keys for each device. Not optimised as it is not
# considered as part of a critical path.
for device_id in device_ids:
yield self._auth_handler.delete_access_tokens_for_user(
user_id, device_id=device_id
yield defer.ensureDeferred(
self._auth_handler.delete_access_tokens_for_user(
user_id, device_id=device_id
)
)
yield self.store.delete_e2e_keys_by_device(
user_id=user_id, device_id=device_id
@@ -456,7 +466,11 @@ class DeviceHandler(DeviceWorkerHandler):
room_ids = yield self.store.get_rooms_for_user(user_id)
yield self.notifier.on_new_event("device_list_key", position, rooms=room_ids)
# specify the user ID too since the user should always get their own device list
# updates, even if they aren't in any rooms.
yield self.notifier.on_new_event(
"device_list_key", position, users=[user_id], rooms=room_ids
)
if hosts:
logger.info(

View File

@@ -86,8 +86,7 @@ class DirectoryHandler(BaseHandler):
room_alias, room_id, servers, creator=creator
)
@defer.inlineCallbacks
def create_association(
async def create_association(
self,
requester: Requester,
room_alias: RoomAlias,
@@ -127,8 +126,12 @@ class DirectoryHandler(BaseHandler):
errcode=Codes.EXCLUSIVE,
)
else:
if self.require_membership and check_membership:
rooms_for_user = yield self.store.get_rooms_for_user(user_id)
# Server admins are not subject to the same constraints as normal
# users when creating an alias (e.g. being in the room).
is_admin = await self.auth.is_server_admin(requester.user)
if (self.require_membership and check_membership) and not is_admin:
rooms_for_user = await self.store.get_rooms_for_user(user_id)
if room_id not in rooms_for_user:
raise AuthError(
403, "You must be in the room to create an alias for it"
@@ -145,7 +148,7 @@ class DirectoryHandler(BaseHandler):
# per alias creation rule?
raise SynapseError(403, "Not allowed to create alias")
can_create = yield self.can_modify_alias(room_alias, user_id=user_id)
can_create = await self.can_modify_alias(room_alias, user_id=user_id)
if not can_create:
raise AuthError(
400,
@@ -153,10 +156,9 @@ class DirectoryHandler(BaseHandler):
errcode=Codes.EXCLUSIVE,
)
yield self._create_association(room_alias, room_id, servers, creator=user_id)
await self._create_association(room_alias, room_id, servers, creator=user_id)
@defer.inlineCallbacks
def delete_association(self, requester: Requester, room_alias: RoomAlias):
async def delete_association(self, requester: Requester, room_alias: RoomAlias):
"""Remove an alias from the directory
(this is only meant for human users; AS users should call
@@ -180,7 +182,7 @@ class DirectoryHandler(BaseHandler):
user_id = requester.user.to_string()
try:
can_delete = yield self._user_can_delete_alias(room_alias, user_id)
can_delete = await self._user_can_delete_alias(room_alias, user_id)
except StoreError as e:
if e.code == 404:
raise NotFoundError("Unknown room alias")
@@ -189,7 +191,7 @@ class DirectoryHandler(BaseHandler):
if not can_delete:
raise AuthError(403, "You don't have permission to delete the alias.")
can_delete = yield self.can_modify_alias(room_alias, user_id=user_id)
can_delete = await self.can_modify_alias(room_alias, user_id=user_id)
if not can_delete:
raise SynapseError(
400,
@@ -197,10 +199,10 @@ class DirectoryHandler(BaseHandler):
errcode=Codes.EXCLUSIVE,
)
room_id = yield self._delete_association(room_alias)
room_id = await self._delete_association(room_alias)
try:
yield self._update_canonical_alias(requester, user_id, room_id, room_alias)
await self._update_canonical_alias(requester, user_id, room_id, room_alias)
except AuthError as e:
logger.info("Failed to update alias events: %s", e)
@@ -292,15 +294,14 @@ class DirectoryHandler(BaseHandler):
Codes.NOT_FOUND,
)
@defer.inlineCallbacks
def _update_canonical_alias(
async def _update_canonical_alias(
self, requester: Requester, user_id: str, room_id: str, room_alias: RoomAlias
):
"""
Send an updated canonical alias event if the removed alias was set as
the canonical alias or listed in the alt_aliases field.
"""
alias_event = yield self.state.get_current_state(
alias_event = await self.state.get_current_state(
room_id, EventTypes.CanonicalAlias, ""
)
@@ -331,7 +332,7 @@ class DirectoryHandler(BaseHandler):
del content["alt_aliases"]
if send_update:
yield self.event_creation_handler.create_and_send_nonmember_event(
await self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.CanonicalAlias,
@@ -372,8 +373,7 @@ class DirectoryHandler(BaseHandler):
# either no interested services, or no service with an exclusive lock
return defer.succeed(True)
@defer.inlineCallbacks
def _user_can_delete_alias(self, alias: RoomAlias, user_id: str):
async def _user_can_delete_alias(self, alias: RoomAlias, user_id: str):
"""Determine whether a user can delete an alias.
One of the following must be true:
@@ -384,24 +384,23 @@ class DirectoryHandler(BaseHandler):
for the current room.
"""
creator = yield self.store.get_room_alias_creator(alias.to_string())
creator = await self.store.get_room_alias_creator(alias.to_string())
if creator is not None and creator == user_id:
return True
# Resolve the alias to the corresponding room.
room_mapping = yield self.get_association(alias)
room_mapping = await self.get_association(alias)
room_id = room_mapping["room_id"]
if not room_id:
return False
res = yield self.auth.check_can_change_room_list(
res = await self.auth.check_can_change_room_list(
room_id, UserID.from_string(user_id)
)
return res
@defer.inlineCallbacks
def edit_published_room_list(
async def edit_published_room_list(
self, requester: Requester, room_id: str, visibility: str
):
"""Edit the entry of the room in the published room list.
@@ -429,11 +428,11 @@ class DirectoryHandler(BaseHandler):
403, "This user is not permitted to publish rooms to the room list"
)
room = yield self.store.get_room(room_id)
room = await self.store.get_room(room_id)
if room is None:
raise SynapseError(400, "Unknown room")
can_change_room_list = yield self.auth.check_can_change_room_list(
can_change_room_list = await self.auth.check_can_change_room_list(
room_id, requester.user
)
if not can_change_room_list:
@@ -445,8 +444,8 @@ class DirectoryHandler(BaseHandler):
making_public = visibility == "public"
if making_public:
room_aliases = yield self.store.get_aliases_for_room(room_id)
canonical_alias = yield self.store.get_canonical_alias_for_room(room_id)
room_aliases = await self.store.get_aliases_for_room(room_id)
canonical_alias = await self.store.get_canonical_alias_for_room(room_id)
if canonical_alias:
room_aliases.append(canonical_alias)
@@ -458,7 +457,7 @@ class DirectoryHandler(BaseHandler):
# per alias creation rule?
raise SynapseError(403, "Not allowed to publish room")
yield self.store.set_room_is_public(room_id, making_public)
await self.store.set_room_is_public(room_id, making_public)
@defer.inlineCallbacks
def edit_published_appservice_room_list(

View File

@@ -54,19 +54,23 @@ class E2eKeysHandler(object):
self._edu_updater = SigningKeyEduUpdater(hs, self)
federation_registry = hs.get_federation_registry()
self._is_master = hs.config.worker_app is None
if not self._is_master:
self._user_device_resync_client = ReplicationUserDevicesResyncRestServlet.make_client(
hs
)
else:
# Only register this edu handler on master as it requires writing
# device updates to the db
#
# FIXME: switch to m.signing_key_update when MSC1756 is merged into the spec
federation_registry.register_edu_handler(
"org.matrix.signing_key_update",
self._edu_updater.incoming_signing_key_update,
)
federation_registry = hs.get_federation_registry()
# FIXME: switch to m.signing_key_update when MSC1756 is merged into the spec
federation_registry.register_edu_handler(
"org.matrix.signing_key_update",
self._edu_updater.incoming_signing_key_update,
)
# doesn't really work as part of the generic query API, because the
# query request requires an object POST, but we abuse the
# "query handler" interface.
@@ -170,8 +174,8 @@ class E2eKeysHandler(object):
"""This is called when we are querying the device list of a user on
a remote homeserver and their device list is not in the device list
cache. If we share a room with this user and we're not querying for a
specific user we will update the cache
with their device list."""
specific user we will update the cache with their device list.
"""
destination_query = remote_queries_not_in_cache[destination]
@@ -957,13 +961,19 @@ class E2eKeysHandler(object):
return signature_list, failures
@defer.inlineCallbacks
def _get_e2e_cross_signing_verify_key(self, user_id, key_type, from_user_id=None):
"""Fetch the cross-signing public key from storage and interpret it.
def _get_e2e_cross_signing_verify_key(
self, user_id: str, key_type: str, from_user_id: str = None
):
"""Fetch locally or remotely query for a cross-signing public key.
First, attempt to fetch the cross-signing public key from storage.
If that fails, query the keys from the homeserver they belong to
and update our local copy.
Args:
user_id (str): the user whose key should be fetched
key_type (str): the type of key to fetch
from_user_id (str): the user that we are fetching the keys for.
user_id: the user whose key should be fetched
key_type: the type of key to fetch
from_user_id: the user that we are fetching the keys for.
This affects what signatures are fetched.
Returns:
@@ -972,16 +982,140 @@ class E2eKeysHandler(object):
Raises:
NotFoundError: if the key is not found
SynapseError: if `user_id` is invalid
"""
user = UserID.from_string(user_id)
key = yield self.store.get_e2e_cross_signing_key(
user_id, key_type, from_user_id
)
if key is None:
logger.debug("no %s key found for %s", key_type, user_id)
if key:
# We found a copy of this key in our database. Decode and return it
key_id, verify_key = get_verify_key_from_cross_signing_key(key)
return key, key_id, verify_key
# If we couldn't find the key locally, and we're looking for keys of
# another user then attempt to fetch the missing key from the remote
# user's server.
#
# We may run into this in possible edge cases where a user tries to
# cross-sign a remote user, but does not share any rooms with them yet.
# Thus, we would not have their key list yet. We instead fetch the key,
# store it and notify clients of new, associated device IDs.
if self.is_mine(user) or key_type not in ["master", "self_signing"]:
# Note that master and self_signing keys are the only cross-signing keys we
# can request over federation
raise NotFoundError("No %s key found for %s" % (key_type, user_id))
key_id, verify_key = get_verify_key_from_cross_signing_key(key)
(
key,
key_id,
verify_key,
) = yield self._retrieve_cross_signing_keys_for_remote_user(user, key_type)
if key is None:
raise NotFoundError("No %s key found for %s" % (key_type, user_id))
return key, key_id, verify_key
@defer.inlineCallbacks
def _retrieve_cross_signing_keys_for_remote_user(
self, user: UserID, desired_key_type: str,
):
"""Queries cross-signing keys for a remote user and saves them to the database
Only the key specified by `desired_key_type` will be returned, while all retrieved keys
will be saved regardless
Args:
user: The user to query remote keys for
desired_key_type: The type of key to receive. One of "master", "self_signing"
Returns:
Deferred[Tuple[Optional[Dict], Optional[str], Optional[VerifyKey]]]: A tuple
of the retrieved key content, the key's ID and the matching VerifyKey.
If the key cannot be retrieved, all values in the tuple will instead be None.
"""
try:
remote_result = yield self.federation.query_user_devices(
user.domain, user.to_string()
)
except Exception as e:
logger.warning(
"Unable to query %s for cross-signing keys of user %s: %s %s",
user.domain,
user.to_string(),
type(e),
e,
)
return None, None, None
# Process each of the retrieved cross-signing keys
desired_key = None
desired_key_id = None
desired_verify_key = None
retrieved_device_ids = []
for key_type in ["master", "self_signing"]:
key_content = remote_result.get(key_type + "_key")
if not key_content:
continue
# Ensure these keys belong to the correct user
if "user_id" not in key_content:
logger.warning(
"Invalid %s key retrieved, missing user_id field: %s",
key_type,
key_content,
)
continue
if user.to_string() != key_content["user_id"]:
logger.warning(
"Found %s key of user %s when querying for keys of user %s",
key_type,
key_content["user_id"],
user.to_string(),
)
continue
# Validate the key contents
try:
# verify_key is a VerifyKey from signedjson, which uses
# .version to denote the portion of the key ID after the
# algorithm and colon, which is the device ID
key_id, verify_key = get_verify_key_from_cross_signing_key(key_content)
except ValueError as e:
logger.warning(
"Invalid %s key retrieved: %s - %s %s",
key_type,
key_content,
type(e),
e,
)
continue
# Note down the device ID attached to this key
retrieved_device_ids.append(verify_key.version)
# If this is the desired key type, save it and its ID/VerifyKey
if key_type == desired_key_type:
desired_key = key_content
desired_verify_key = verify_key
desired_key_id = key_id
# At the same time, store this key in the db for subsequent queries
yield self.store.set_e2e_cross_signing_key(
user.to_string(), key_type, key_content
)
# Notify clients that new devices for this user have been discovered
if retrieved_device_ids:
# XXX is this necessary?
yield self.device_handler.notify_device_update(
user.to_string(), retrieved_device_ids
)
return desired_key, desired_key_id, desired_verify_key
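A trimmed sketch of that validation loop, run against fabricated remote data; the stub stands in for `get_verify_key_from_cross_signing_key`:

```python
queried_user = "@bob:remote.example"  # fabricated
remote_result = {
    "master_key": {
        "user_id": "@bob:remote.example",
        "usage": ["master"],
        "keys": {"ed25519:base64devid": "base64publickey"},
    }
}

def first_key_id(key_content):
    # Stand-in for get_verify_key_from_cross_signing_key, which also
    # decodes and validates the public key itself.
    return next(iter(key_content["keys"]))

retrieved_device_ids = []
for key_type in ["master", "self_signing"]:
    key_content = remote_result.get(key_type + "_key")
    if not key_content:
        continue
    if key_content.get("user_id") != queried_user:
        continue  # reject keys claiming to belong to someone else
    key_id = first_key_id(key_content)
    # The portion of the key ID after "algorithm:" is the device ID.
    retrieved_device_ids.append(key_id.split(":", 1)[1])

print(retrieved_device_ids)  # ['base64devid']
```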
def _check_cross_signing_key(key, user_id, key_type, signing_key=None):
"""Check a cross-signing key uploaded by a user. Performs some basic sanity

View File

@@ -19,6 +19,7 @@ import random
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase
from synapse.handlers.presence import format_user_presence_state
from synapse.logging.utils import log_function
from synapse.types import UserID
from synapse.visibility import filter_events_for_client
@@ -97,6 +98,8 @@ class EventStreamHandler(BaseHandler):
explicit_room_id=room_id,
)
time_now = self.clock.time_msec()
# When the user joins a new room, or another user joins a currently
# joined room, we need to send down presence for those users.
to_add = []
@@ -112,19 +115,20 @@ class EventStreamHandler(BaseHandler):
users = await self.state.get_current_users_in_room(
event.room_id
)
states = await presence_handler.get_states(users, as_event=True)
to_add.extend(states)
else:
users = [event.state_key]
ev = await presence_handler.get_state(
UserID.from_string(event.state_key), as_event=True
)
to_add.append(ev)
states = await presence_handler.get_states(users)
to_add.extend(
{
"type": EventTypes.Presence,
"content": format_user_presence_state(state, time_now),
}
for state in states
)
events.extend(to_add)
time_now = self.clock.time_msec()
chunks = await self._event_serializer.serialize_events(
events,
time_now,

View File

@@ -49,6 +49,7 @@ from synapse.event_auth import auth_types_for_event
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.events.validator import EventValidator
from synapse.handlers._base import BaseHandler
from synapse.logging.context import (
make_deferred_yieldable,
nested_logging_context,
@@ -69,10 +70,9 @@ from synapse.types import JsonDict, StateMap, UserID, get_domain_from_id
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.distributor import user_joined_room
from synapse.util.retryutils import NotRetryingDestination
from synapse.util.stringutils import shortstr
from synapse.visibility import filter_events_for_server
from ._base import BaseHandler
logger = logging.getLogger(__name__)
@@ -93,27 +93,6 @@ class _NewEventInfo:
auth_events = attr.ib(type=Optional[StateMap[EventBase]], default=None)
def shortstr(iterable, maxitems=5):
"""If iterable has maxitems or fewer, return the stringification of a list
containing those items.
Otherwise, return the stringification of a list with the first maxitems items,
followed by "...".
Args:
iterable (Iterable): iterable to truncate
maxitems (int): number of items to return before truncating
Returns:
unicode
"""
items = list(itertools.islice(iterable, maxitems + 1))
if len(items) <= maxitems:
return str(items)
return "[" + ", ".join(repr(r) for r in items[:maxitems]) + ", ...]"
class FederationHandler(BaseHandler):
"""Handles events that originated from federation.
Responsible for:
@@ -364,7 +343,7 @@ class FederationHandler(BaseHandler):
ours = await self.state_store.get_state_groups_ids(room_id, seen)
# state_maps is a list of mappings from (type, state_key) to event_id
state_maps = list(ours.values()) # type: list[StateMap[str]]
state_maps = list(ours.values()) # type: List[StateMap[str]]
# we don't need this any more, let's delete it.
del ours
@@ -1715,16 +1694,15 @@ class FederationHandler(BaseHandler):
return None
@defer.inlineCallbacks
def get_state_for_pdu(self, room_id, event_id):
async def get_state_for_pdu(self, room_id: str, event_id: str) -> List[EventBase]:
"""Returns the state at the event. i.e. not including said event.
"""
event = yield self.store.get_event(
event = await self.store.get_event(
event_id, allow_none=False, check_room_id=room_id
)
state_groups = yield self.state_store.get_state_groups(room_id, [event_id])
state_groups = await self.state_store.get_state_groups(room_id, [event_id])
if state_groups:
_, state = list(iteritems(state_groups)).pop()
@@ -1735,7 +1713,7 @@ class FederationHandler(BaseHandler):
if "replaces_state" in event.unsigned:
prev_id = event.unsigned["replaces_state"]
if prev_id != event.event_id:
prev_event = yield self.store.get_event(prev_id)
prev_event = await self.store.get_event(prev_id)
results[(event.type, event.state_key)] = prev_event
else:
del results[(event.type, event.state_key)]
@@ -1745,15 +1723,14 @@ class FederationHandler(BaseHandler):
else:
return []
@defer.inlineCallbacks
def get_state_ids_for_pdu(self, room_id, event_id):
async def get_state_ids_for_pdu(self, room_id: str, event_id: str) -> List[str]:
"""Returns the state at the event. i.e. not including said event.
"""
event = yield self.store.get_event(
event = await self.store.get_event(
event_id, allow_none=False, check_room_id=room_id
)
state_groups = yield self.state_store.get_state_groups_ids(room_id, [event_id])
state_groups = await self.state_store.get_state_groups_ids(room_id, [event_id])
if state_groups:
_, state = list(state_groups.items()).pop()
@@ -1772,49 +1749,50 @@ class FederationHandler(BaseHandler):
else:
return []
@defer.inlineCallbacks
@log_function
def on_backfill_request(self, origin, room_id, pdu_list, limit):
in_room = yield self.auth.check_host_in_room(room_id, origin)
async def on_backfill_request(
self, origin: str, room_id: str, pdu_list: List[str], limit: int
) -> List[EventBase]:
in_room = await self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
# Synapse asks for 100 events per backfill request. Do not allow more.
limit = min(limit, 100)
events = yield self.store.get_backfill_events(room_id, pdu_list, limit)
events = await self.store.get_backfill_events(room_id, pdu_list, limit)
events = yield filter_events_for_server(self.storage, origin, events)
events = await filter_events_for_server(self.storage, origin, events)
return events
@defer.inlineCallbacks
@log_function
def get_persisted_pdu(self, origin, event_id):
async def get_persisted_pdu(
self, origin: str, event_id: str
) -> Optional[EventBase]:
"""Get an event from the database for the given server.
Args:
origin [str]: hostname of server which is requesting the event; we
origin: hostname of server which is requesting the event; we
will check that the server is allowed to see it.
event_id [str]: id of the event being requested
event_id: id of the event being requested
Returns:
Deferred[EventBase|None]: None if we know nothing about the event;
otherwise the (possibly-redacted) event.
None if we know nothing about the event; otherwise the (possibly-redacted) event.
Raises:
AuthError if the server is not currently in the room
"""
event = yield self.store.get_event(
event = await self.store.get_event(
event_id, allow_none=True, allow_rejected=True
)
if event:
in_room = yield self.auth.check_host_in_room(event.room_id, origin)
in_room = await self.auth.check_host_in_room(event.room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
events = yield filter_events_for_server(self.storage, origin, [event])
events = await filter_events_for_server(self.storage, origin, [event])
event = events[0]
return event
else:
@@ -2418,7 +2396,7 @@ class FederationHandler(BaseHandler):
"""
# exclude the state key of the new event from the current_state in the context.
if event.is_state():
event_key = (event.type, event.state_key)
event_key = (event.type, event.state_key) # type: Optional[Tuple[str, str]]
else:
event_key = None
state_updates = {
@@ -2584,9 +2562,8 @@ class FederationHandler(BaseHandler):
"missing": [e.event_id for e in missing_locals],
}
@defer.inlineCallbacks
@log_function
def exchange_third_party_invite(
async def exchange_third_party_invite(
self, sender_user_id, target_user_id, room_id, signed
):
third_party_invite = {"signed": signed}
@@ -2602,16 +2579,16 @@ class FederationHandler(BaseHandler):
"state_key": target_user_id,
}
if (yield self.auth.check_host_in_room(room_id, self.hs.hostname)):
room_version = yield self.store.get_room_version_id(room_id)
if await self.auth.check_host_in_room(room_id, self.hs.hostname):
room_version = await self.store.get_room_version_id(room_id)
builder = self.event_builder_factory.new(room_version, event_dict)
EventValidator().validate_builder(builder)
event, context = yield self.event_creation_handler.create_new_client_event(
event, context = await self.event_creation_handler.create_new_client_event(
builder=builder
)
event_allowed = yield self.third_party_event_rules.check_event_allowed(
event_allowed = await self.third_party_event_rules.check_event_allowed(
event, context
)
if not event_allowed:
@@ -2623,7 +2600,7 @@ class FederationHandler(BaseHandler):
403, "This event is not allowed in this context", Codes.FORBIDDEN
)
event, context = yield self.add_display_name_to_third_party_invite(
event, context = await self.add_display_name_to_third_party_invite(
room_version, event_dict, event, context
)
@@ -2634,19 +2611,19 @@ class FederationHandler(BaseHandler):
event.internal_metadata.send_on_behalf_of = self.hs.hostname
try:
yield self.auth.check_from_context(room_version, event, context)
await self.auth.check_from_context(room_version, event, context)
except AuthError as e:
logger.warning("Denying new third party invite %r because %s", event, e)
raise e
yield self._check_signature(event, context)
await self._check_signature(event, context)
# We retrieve the room member handler here as to not cause a cyclic dependency
member_handler = self.hs.get_room_member_handler()
yield member_handler.send_membership_event(None, event, context)
await member_handler.send_membership_event(None, event, context)
else:
destinations = {x.split(":", 1)[-1] for x in (sender_user_id, room_id)}
yield self.federation_client.forward_third_party_invite(
await self.federation_client.forward_third_party_invite(
destinations, room_id, event_dict
)

View File

@@ -284,15 +284,14 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
set_group_join_policy = _create_rerouter("set_group_join_policy")
@defer.inlineCallbacks
def create_group(self, group_id, user_id, content):
async def create_group(self, group_id, user_id, content):
"""Create a group
"""
logger.info("Asking to create group with ID: %r", group_id)
if self.is_mine_id(group_id):
res = yield self.groups_server_handler.create_group(
res = await self.groups_server_handler.create_group(
group_id, user_id, content
)
local_attestation = None
@@ -301,10 +300,10 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
local_attestation = self.attestations.create_attestation(group_id, user_id)
content["attestation"] = local_attestation
content["user_profile"] = yield self.profile_handler.get_profile(user_id)
content["user_profile"] = await self.profile_handler.get_profile(user_id)
try:
res = yield self.transport_client.create_group(
res = await self.transport_client.create_group(
get_domain_from_id(group_id), group_id, user_id, content
)
except HttpResponseException as e:
@@ -313,7 +312,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
raise SynapseError(502, "Failed to contact group server")
remote_attestation = res["attestation"]
yield self.attestations.verify_attestation(
await self.attestations.verify_attestation(
remote_attestation,
group_id=group_id,
user_id=user_id,
@@ -321,7 +320,7 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
)
is_publicised = content.get("publicise", False)
token = yield self.store.register_user_group_membership(
token = await self.store.register_user_group_membership(
group_id,
user_id,
membership="join",
@@ -482,12 +481,13 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
return {"state": "invite", "user_profile": user_profile}
@defer.inlineCallbacks
def remove_user_from_group(self, group_id, user_id, requester_user_id, content):
async def remove_user_from_group(
self, group_id, user_id, requester_user_id, content
):
"""Remove a user from a group
"""
if user_id == requester_user_id:
token = yield self.store.register_user_group_membership(
token = await self.store.register_user_group_membership(
group_id, user_id, membership="leave"
)
self.notifier.on_new_event("groups_key", token, users=[user_id])
@@ -496,13 +496,13 @@ class GroupsLocalHandler(GroupsLocalWorkerHandler):
# retry if the group server is currently down.
if self.is_mine_id(group_id):
res = yield self.groups_server_handler.remove_user_from_group(
res = await self.groups_server_handler.remove_user_from_group(
group_id, user_id, requester_user_id, content
)
else:
content["requester_user_id"] = requester_user_id
try:
res = yield self.transport_client.remove_user_from_group(
res = await self.transport_client.remove_user_from_group(
get_domain_from_id(group_id),
group_id,
requester_user_id,

View File

@@ -18,7 +18,7 @@
"""Utilities for interacting with Identity Servers"""
import logging
import urllib
import urllib.parse
from canonicaljson import json
from signedjson.key import decode_verify_key_bytes

View File

@@ -381,10 +381,16 @@ class InitialSyncHandler(BaseHandler):
return []
states = await presence_handler.get_states(
[m.user_id for m in room_members], as_event=True
[m.user_id for m in room_members]
)
return states
return [
{
"type": EventTypes.Presence,
"content": format_user_presence_state(s, time_now),
}
for s in states
]
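This hunk (like the events-stream one earlier) replaces `as_event=True` with an explicit reshaping of presence states into `m.presence` event dicts; a minimal sketch with a stand-in formatter:

```python
def format_user_presence_state(state, now_ms):
    # Stand-in for synapse.handlers.presence.format_user_presence_state;
    # the real helper also emits fields such as last_active_ago.
    return {"user_id": state["user_id"], "presence": state["presence"]}

states = [{"user_id": "@alice:example.com", "presence": "online"}]
now_ms = 1_589_800_000_000

presence_events = [
    {"type": "m.presence", "content": format_user_presence_state(s, now_ms)}
    for s in states
]
print(presence_events)
```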
async def get_receipts():
receipts = await self.store.get_linearized_receipts_for_room(

View File

@@ -419,6 +419,8 @@ class EventCreationHandler(object):
self._ephemeral_events_enabled = hs.config.enable_ephemeral_messages
self._dummy_events_threshold = hs.config.dummy_events_threshold
@defer.inlineCallbacks
def create_event(
self,
@@ -626,8 +628,7 @@ class EventCreationHandler(object):
msg = self._block_events_without_consent_error % {"consent_uri": consent_uri}
raise ConsentNotGivenError(msg=msg, consent_uri=consent_uri)
@defer.inlineCallbacks
def send_nonmember_event(self, requester, event, context, ratelimit=True):
async def send_nonmember_event(self, requester, event, context, ratelimit=True):
"""
Persists and notifies local clients and federation of an event.
@@ -647,7 +648,7 @@ class EventCreationHandler(object):
assert self.hs.is_mine(user), "User must be our own: %s" % (user,)
if event.is_state():
prev_state = yield self.deduplicate_state_event(event, context)
prev_state = await self.deduplicate_state_event(event, context)
if prev_state is not None:
logger.info(
"Not bothering to persist state event %s duplicated by %s",
@@ -656,7 +657,7 @@ class EventCreationHandler(object):
)
return prev_state
yield self.handle_new_client_event(
await self.handle_new_client_event(
requester=requester, event=event, context=context, ratelimit=ratelimit
)
@@ -683,8 +684,7 @@ class EventCreationHandler(object):
return prev_event
return
@defer.inlineCallbacks
def create_and_send_nonmember_event(
async def create_and_send_nonmember_event(
self, requester, event_dict, ratelimit=True, txn_id=None
):
"""
@@ -698,8 +698,8 @@ class EventCreationHandler(object):
# a situation where event persistence can't keep up, causing
# extremities to pile up, which in turn leads to state resolution
# taking longer.
with (yield self.limiter.queue(event_dict["room_id"])):
event, context = yield self.create_event(
with (await self.limiter.queue(event_dict["room_id"])):
event, context = await self.create_event(
requester, event_dict, token_id=requester.access_token_id, txn_id=txn_id
)
@@ -709,7 +709,7 @@ class EventCreationHandler(object):
spam_error = "Spam is not permitted here"
raise SynapseError(403, spam_error, Codes.FORBIDDEN)
yield self.send_nonmember_event(
await self.send_nonmember_event(
requester, event, context, ratelimit=ratelimit
)
return event
@@ -770,8 +770,7 @@ class EventCreationHandler(object):
return (event, context)
@measure_func("handle_new_client_event")
@defer.inlineCallbacks
def handle_new_client_event(
async def handle_new_client_event(
self, requester, event, context, ratelimit=True, extra_users=[]
):
"""Processes a new event. This includes checking auth, persisting it,
@@ -794,9 +793,9 @@ class EventCreationHandler(object):
):
room_version = event.content.get("room_version", RoomVersions.V1.identifier)
else:
room_version = yield self.store.get_room_version_id(event.room_id)
room_version = await self.store.get_room_version_id(event.room_id)
event_allowed = yield self.third_party_event_rules.check_event_allowed(
event_allowed = await self.third_party_event_rules.check_event_allowed(
event, context
)
if not event_allowed:
@@ -805,7 +804,7 @@ class EventCreationHandler(object):
)
try:
yield self.auth.check_from_context(room_version, event, context)
await self.auth.check_from_context(room_version, event, context)
except AuthError as err:
logger.warning("Denying new event %r because %s", event, err)
raise err
@@ -818,7 +817,7 @@ class EventCreationHandler(object):
logger.exception("Failed to encode content: %r", event.content)
raise
yield self.action_generator.handle_push_actions_for_event(event, context)
await self.action_generator.handle_push_actions_for_event(event, context)
# reraise does not allow inlineCallbacks to preserve the stacktrace, so we
# hack around with a try/finally instead.
@@ -826,7 +825,7 @@ class EventCreationHandler(object):
try:
# If we're a worker we need to hit out to the master.
if self.config.worker_app:
yield self.send_event_to_master(
await self.send_event_to_master(
event_id=event.event_id,
store=self.store,
requester=requester,
@@ -838,7 +837,7 @@ class EventCreationHandler(object):
success = True
return
yield self.persist_and_notify_client_event(
await self.persist_and_notify_client_event(
requester, event, context, ratelimit=ratelimit, extra_users=extra_users
)
@@ -852,7 +851,38 @@ class EventCreationHandler(object):
)
@defer.inlineCallbacks
def persist_and_notify_client_event(
def _validate_canonical_alias(
self, directory_handler, room_alias_str, expected_room_id
):
"""
Ensure that the given room alias points to the expected room ID.
Args:
directory_handler: The directory handler object.
room_alias_str: The room alias to check.
expected_room_id: The room ID that the alias should point to.
"""
room_alias = RoomAlias.from_string(room_alias_str)
try:
mapping = yield directory_handler.get_association(room_alias)
except SynapseError as e:
# Turn M_NOT_FOUND errors into M_BAD_ALIAS errors.
if e.errcode == Codes.NOT_FOUND:
raise SynapseError(
400,
"Room alias %s does not point to the room" % (room_alias_str,),
Codes.BAD_ALIAS,
)
raise
if mapping["room_id"] != expected_room_id:
raise SynapseError(
400,
"Room alias %s does not point to the room" % (room_alias_str,),
Codes.BAD_ALIAS,
)
async def persist_and_notify_client_event(
self, requester, event, context, ratelimit=True, extra_users=[]
):
"""Called when we have fully built the event, have already
@@ -869,7 +899,7 @@ class EventCreationHandler(object):
# user is actually admin or not).
is_admin_redaction = False
if event.type == EventTypes.Redaction:
original_event = yield self.store.get_event(
original_event = await self.store.get_event(
event.redacts,
redact_behaviour=EventRedactBehaviour.AS_IS,
get_prev_content=False,
@@ -881,11 +911,11 @@ class EventCreationHandler(object):
original_event and event.sender != original_event.sender
)
yield self.base_handler.ratelimit(
await self.base_handler.ratelimit(
requester, is_admin_redaction=is_admin_redaction
)
yield self.base_handler.maybe_kick_guest_users(event, context)
await self.base_handler.maybe_kick_guest_users(event, context)
if event.type == EventTypes.CanonicalAlias:
# Validate a newly added alias or newly added alt_aliases.
@@ -895,7 +925,7 @@ class EventCreationHandler(object):
original_event_id = event.unsigned.get("replaces_state")
if original_event_id:
original_event = yield self.store.get_event(original_event_id)
original_event = await self.store.get_event(original_event_id)
if original_event:
original_alias = original_event.content.get("alias", None)
@@ -905,15 +935,9 @@ class EventCreationHandler(object):
room_alias_str = event.content.get("alias", None)
directory_handler = self.hs.get_handlers().directory_handler
if room_alias_str and room_alias_str != original_alias:
room_alias = RoomAlias.from_string(room_alias_str)
mapping = yield directory_handler.get_association(room_alias)
if mapping["room_id"] != event.room_id:
raise SynapseError(
400,
"Room alias %s does not point to the room" % (room_alias_str,),
Codes.BAD_ALIAS,
)
await self._validate_canonical_alias(
directory_handler, room_alias_str, event.room_id
)
# Check that alt_aliases is the proper form.
alt_aliases = event.content.get("alt_aliases", [])
@@ -931,16 +955,9 @@ class EventCreationHandler(object):
new_alt_aliases = set(alt_aliases) - set(original_alt_aliases)
if new_alt_aliases:
for alias_str in new_alt_aliases:
room_alias = RoomAlias.from_string(alias_str)
mapping = yield directory_handler.get_association(room_alias)
if mapping["room_id"] != event.room_id:
raise SynapseError(
400,
"Room alias %s does not point to the room"
% (room_alias_str,),
Codes.BAD_ALIAS,
)
await self._validate_canonical_alias(
directory_handler, alias_str, event.room_id
)
federation_handler = self.hs.get_handlers().federation_handler
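_validate_canonical_alias above replaces two near-identical inline blocks (one for alias, one for each entry in alt_aliases) and also normalises the failure mode: a directory lookup that fails with M_NOT_FOUND is re-raised as M_BAD_ALIAS, the same error a mismatched mapping produces. A hypothetical caller showing that contract:

from synapse.api.errors import Codes, SynapseError

async def check_alias(handler, directory_handler, room_id):
    try:
        await handler._validate_canonical_alias(
            directory_handler, "#stale:example.com", room_id
        )
    except SynapseError as e:
        # Unknown alias and alias-points-elsewhere now look identical
        # to the client: 400 with errcode M_BAD_ALIAS.
        assert e.code == 400 and e.errcode == Codes.BAD_ALIAS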
@@ -950,7 +967,7 @@ class EventCreationHandler(object):
def is_inviter_member_event(e):
return e.type == EventTypes.Member and e.sender == event.sender
current_state_ids = yield context.get_current_state_ids()
current_state_ids = await context.get_current_state_ids()
state_to_include_ids = [
e_id
@@ -959,7 +976,7 @@ class EventCreationHandler(object):
or k == (EventTypes.Member, event.sender)
]
state_to_include = yield self.store.get_events(state_to_include_ids)
state_to_include = await self.store.get_events(state_to_include_ids)
event.unsigned["invite_room_state"] = [
{
@@ -977,8 +994,8 @@ class EventCreationHandler(object):
# way? If we have been invited by a remote server, we need
# to get them to sign the event.
returned_invite = yield defer.ensureDeferred(
federation_handler.send_invite(invitee.domain, event)
returned_invite = await federation_handler.send_invite(
invitee.domain, event
)
event.unsigned.pop("room_state", None)
@@ -986,7 +1003,7 @@ class EventCreationHandler(object):
event.signatures.update(returned_invite.signatures)
if event.type == EventTypes.Redaction:
original_event = yield self.store.get_event(
original_event = await self.store.get_event(
event.redacts,
redact_behaviour=EventRedactBehaviour.AS_IS,
get_prev_content=False,
@@ -1002,14 +1019,14 @@ class EventCreationHandler(object):
if original_event.room_id != event.room_id:
raise SynapseError(400, "Cannot redact event from a different room")
prev_state_ids = yield context.get_prev_state_ids()
auth_events_ids = yield self.auth.compute_auth_events(
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = await self.auth.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = await self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events.values()}
room_version = yield self.store.get_room_version_id(event.room_id)
room_version = await self.store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
if event_auth.check_redaction(
@@ -1028,11 +1045,11 @@ class EventCreationHandler(object):
event.internal_metadata.recheck_redaction = False
if event.type == EventTypes.Create:
prev_state_ids = yield context.get_prev_state_ids()
prev_state_ids = await context.get_prev_state_ids()
if prev_state_ids:
raise AuthError(403, "Changing the room create event is forbidden")
event_stream_id, max_stream_id = yield self.storage.persistence.persist_event(
event_stream_id, max_stream_id = await self.storage.persistence.persist_event(
event, context=context
)
@@ -1040,7 +1057,7 @@ class EventCreationHandler(object):
# If there's an expiry timestamp on the event, schedule its expiry.
self._message_handler.maybe_schedule_expiry(event)
yield self.pusher_pool.on_new_notifications(event_stream_id, max_stream_id)
await self.pusher_pool.on_new_notifications(event_stream_id, max_stream_id)
def _notify():
try:
@@ -1064,14 +1081,13 @@ class EventCreationHandler(object):
except Exception:
logger.exception("Error bumping presence active time")
@defer.inlineCallbacks
def _send_dummy_events_to_fill_extremities(self):
async def _send_dummy_events_to_fill_extremities(self):
"""Background task to send dummy events into rooms that have a large
number of extremities
"""
self._expire_rooms_to_exclude_from_dummy_event_insertion()
room_ids = yield self.store.get_rooms_with_many_extremities(
min_count=10,
room_ids = await self.store.get_rooms_with_many_extremities(
min_count=self._dummy_events_threshold,
limit=5,
room_id_filter=self._rooms_to_exclude_from_dummy_event_insertion.keys(),
)
@@ -1080,9 +1096,9 @@ class EventCreationHandler(object):
# For each room we need to find a joined member we can use to send
# the dummy event with.
latest_event_ids = yield self.store.get_prev_events_for_room(room_id)
latest_event_ids = await self.store.get_prev_events_for_room(room_id)
members = yield self.state.get_current_users_in_room(
members = await self.state.get_current_users_in_room(
room_id, latest_event_ids=latest_event_ids
)
dummy_event_sent = False
@@ -1091,7 +1107,7 @@ class EventCreationHandler(object):
continue
requester = create_requester(user_id)
try:
event, context = yield self.create_event(
event, context = await self.create_event(
requester,
{
"type": "org.matrix.dummy_event",
@@ -1104,7 +1120,7 @@ class EventCreationHandler(object):
event.internal_metadata.proactively_send = False
yield self.send_nonmember_event(
await self.send_nonmember_event(
requester, event, context, ratelimit=False
)
dummy_event_sent = True

View File
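_send_dummy_events_to_fill_extremities now reads its trigger point from the new dummy_events_threshold config instead of the hard-coded min_count=10. Why a single dummy event helps, in sketch form (conceptual, not Synapse code):

# Forward extremities are events with no children. A new event naming all
# of them as prev_events becomes their common child:
extremities = ["$e1", "$e2", "$e3"]
dummy = {
    "type": "org.matrix.dummy_event",  # the type sent by the task above
    "content": {},
    "prev_events": extremities,
}
# Once persisted, the room has a single forward extremity (the dummy
# event), so state resolution has fewer branches to reconcile.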

@@ -0,0 +1,93 @@
# -*- coding: utf-8 -*-
# Copyright 2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import re
from synapse.api.errors import Codes, PasswordRefusedError
logger = logging.getLogger(__name__)
class PasswordPolicyHandler(object):
def __init__(self, hs):
self.policy = hs.config.password_policy
self.enabled = hs.config.password_policy_enabled
# Regexps for the spec'd policy parameters.
self.regexp_digit = re.compile("[0-9]")
self.regexp_symbol = re.compile("[^a-zA-Z0-9]")
self.regexp_uppercase = re.compile("[A-Z]")
self.regexp_lowercase = re.compile("[a-z]")
def validate_password(self, password):
"""Checks whether a given password complies with the server's policy.
Args:
password (str): The password to check against the server's policy.
Raises:
PasswordRefusedError: The password doesn't comply with the server's policy.
"""
if not self.enabled:
return
minimum_accepted_length = self.policy.get("minimum_length", 0)
if len(password) < minimum_accepted_length:
raise PasswordRefusedError(
msg=(
"The password must be at least %d characters long"
% minimum_accepted_length
),
errcode=Codes.PASSWORD_TOO_SHORT,
)
if (
self.policy.get("require_digit", False)
and self.regexp_digit.search(password) is None
):
raise PasswordRefusedError(
msg="The password must include at least one digit",
errcode=Codes.PASSWORD_NO_DIGIT,
)
if (
self.policy.get("require_symbol", False)
and self.regexp_symbol.search(password) is None
):
raise PasswordRefusedError(
msg="The password must include at least one symbol",
errcode=Codes.PASSWORD_NO_SYMBOL,
)
if (
self.policy.get("require_uppercase", False)
and self.regexp_uppercase.search(password) is None
):
raise PasswordRefusedError(
msg="The password must include at least one uppercase letter",
errcode=Codes.PASSWORD_NO_UPPERCASE,
)
if (
self.policy.get("require_lowercase", False)
and self.regexp_lowercase.search(password) is None
):
raise PasswordRefusedError(
msg="The password must include at least one lowercase letter",
errcode=Codes.PASSWORD_NO_LOWERCASE,
)

View File
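A condensed, standalone approximation of PasswordPolicyHandler.validate_password, useful for seeing the policy shape at a glance (the dict keys mirror the checks above; the values and helper are illustrative):

import re

policy = {"minimum_length": 10, "require_digit": True, "require_symbol": True}

def complies(password):
    if len(password) < policy.get("minimum_length", 0):
        return False
    if policy.get("require_digit") and re.search("[0-9]", password) is None:
        return False
    if policy.get("require_symbol") and re.search("[^a-zA-Z0-9]", password) is None:
        return False
    return True

print(complies("hunter2"))          # False: too short, no symbol
print(complies("correct-horse-9"))  # True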

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -21,10 +22,10 @@ The methods that define policy are:
- PresenceHandler._handle_timeouts
- should_notify
"""
import abc
import logging
from contextlib import contextmanager
from typing import Dict, List, Set
from typing import Dict, Iterable, List, Set
from six import iteritems, itervalues
@@ -41,7 +42,7 @@ from synapse.logging.utils import log_function
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.presence import UserPresenceState
from synapse.types import UserID, get_domain_from_id
from synapse.types import JsonDict, UserID, get_domain_from_id
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.descriptors import cached
from synapse.util.metrics import Measure
@@ -99,13 +100,106 @@ EXTERNAL_PROCESS_EXPIRY = 5 * 60 * 1000
assert LAST_ACTIVE_GRANULARITY < IDLE_TIMER
class PresenceHandler(object):
class BasePresenceHandler(abc.ABC):
"""Parts of the PresenceHandler that are shared between workers and master"""
def __init__(self, hs: "synapse.server.HomeServer"):
self.clock = hs.get_clock()
self.store = hs.get_datastore()
active_presence = self.store.take_presence_startup_info()
self.user_to_current_state = {state.user_id: state for state in active_presence}
@abc.abstractmethod
async def user_syncing(
self, user_id: str, affect_presence: bool
) -> ContextManager[None]:
"""Returns a context manager that should surround any stream requests
from the user.
This allows us to keep track of who is currently streaming and who isn't
without having to have timers outside of this module to avoid flickering
when users disconnect/reconnect.
Args:
user_id: the user that is starting a sync
affect_presence: If false this function will be a no-op.
Useful for streams that are not associated with an actual
client that is being used by a user.
"""
@abc.abstractmethod
def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
"""Get an iterable of syncing users on this worker, to send to the presence handler
This is called when a replication connection is established. It should return
a list of user ids, which are then sent as USER_SYNC commands to inform the
process handling presence about those users.
Returns:
An iterable of user_id strings.
"""
async def get_state(self, target_user: UserID) -> UserPresenceState:
results = await self.get_states([target_user.to_string()])
return results[0]
async def get_states(
self, target_user_ids: Iterable[str]
) -> List[UserPresenceState]:
"""Get the presence state for users."""
updates_d = await self.current_state_for_users(target_user_ids)
updates = list(updates_d.values())
for user_id in set(target_user_ids) - {u.user_id for u in updates}:
updates.append(UserPresenceState.default(user_id))
return updates
async def current_state_for_users(
self, user_ids: Iterable[str]
) -> Dict[str, UserPresenceState]:
"""Get the current presence state for multiple users.
Returns:
dict: `user_id` -> `UserPresenceState`
"""
states = {
user_id: self.user_to_current_state.get(user_id, None)
for user_id in user_ids
}
missing = [user_id for user_id, state in iteritems(states) if not state]
if missing:
# There are things not in our in-memory cache. Let's pull them out of

# the database.
res = await self.store.get_presence_for_users(missing)
states.update(res)
missing = [user_id for user_id, state in iteritems(states) if not state]
if missing:
new = {
user_id: UserPresenceState.default(user_id) for user_id in missing
}
states.update(new)
self.user_to_current_state.update(new)
return states
@abc.abstractmethod
async def set_state(
self, target_user: UserID, state: JsonDict, ignore_status_msg: bool = False
) -> None:
"""Set the presence state of the user. """
class PresenceHandler(BasePresenceHandler):
def __init__(self, hs: "synapse.server.HomeServer"):
super().__init__(hs)
self.hs = hs
self.is_mine_id = hs.is_mine_id
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.store = hs.get_datastore()
self.wheel_timer = WheelTimer()
self.notifier = hs.get_notifier()
self.federation = hs.get_federation_sender()
@@ -115,13 +209,6 @@ class PresenceHandler(object):
federation_registry.register_edu_handler("m.presence", self.incoming_presence)
active_presence = self.store.take_presence_startup_info()
# A dictionary of the current state of users. This is prefilled with
# non-offline presence from the DB. We should fetch from the DB if
# we can't find a users presence in here.
self.user_to_current_state = {state.user_id: state for state in active_presence}
LaterGauge(
"synapse_handlers_presence_user_to_current_state_size",
"",
@@ -130,7 +217,7 @@ class PresenceHandler(object):
)
now = self.clock.time_msec()
for state in active_presence:
for state in self.user_to_current_state.values():
self.wheel_timer.insert(
now=now, obj=state.user_id, then=state.last_active_ts + IDLE_TIMER
)
@@ -361,10 +448,18 @@ class PresenceHandler(object):
timers_fired_counter.inc(len(states))
syncing_user_ids = {
user_id
for user_id, count in self.user_to_num_current_syncs.items()
if count
}
for user_ids in self.external_process_to_current_syncs.values():
syncing_user_ids.update(user_ids)
changes = handle_timeouts(
states,
is_mine_fn=self.is_mine_id,
syncing_user_ids=self.get_currently_syncing_users(),
syncing_user_ids=syncing_user_ids,
now=now,
)
@@ -462,22 +557,9 @@ class PresenceHandler(object):
return _user_syncing()
def get_currently_syncing_users(self):
"""Get the set of user ids that are currently syncing on this HS.
Returns:
set(str): A set of user_id strings.
"""
if self.hs.config.use_presence:
syncing_user_ids = {
user_id
for user_id, count in self.user_to_num_current_syncs.items()
if count
}
for user_ids in self.external_process_to_current_syncs.values():
syncing_user_ids.update(user_ids)
return syncing_user_ids
else:
return set()
def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
# since we are the process handling presence, there is nothing to do here.
return []
async def update_external_syncs_row(
self, process_id, user_id, is_syncing, sync_time_msec
@@ -554,34 +636,6 @@ class PresenceHandler(object):
res = await self.current_state_for_users([user_id])
return res[user_id]
async def current_state_for_users(self, user_ids):
"""Get the current presence state for multiple users.
Returns:
dict: `user_id` -> `UserPresenceState`
"""
states = {
user_id: self.user_to_current_state.get(user_id, None)
for user_id in user_ids
}
missing = [user_id for user_id, state in iteritems(states) if not state]
if missing:
# There are things not in our in memory cache. Lets pull them out of
# the database.
res = await self.store.get_presence_for_users(missing)
states.update(res)
missing = [user_id for user_id, state in iteritems(states) if not state]
if missing:
new = {
user_id: UserPresenceState.default(user_id) for user_id in missing
}
states.update(new)
self.user_to_current_state.update(new)
return states
async def _persist_and_notify(self, states):
"""Persist states in the database, poke the notifier and send to
interested remote servers
@@ -669,40 +723,6 @@ class PresenceHandler(object):
federation_presence_counter.inc(len(updates))
await self._update_states(updates)
async def get_state(self, target_user, as_event=False):
results = await self.get_states([target_user.to_string()], as_event=as_event)
return results[0]
async def get_states(self, target_user_ids, as_event=False):
"""Get the presence state for users.
Args:
target_user_ids (list)
as_event (bool): Whether to format it as a client event or not.
Returns:
list
"""
updates = await self.current_state_for_users(target_user_ids)
updates = list(updates.values())
for user_id in set(target_user_ids) - {u.user_id for u in updates}:
updates.append(UserPresenceState.default(user_id))
now = self.clock.time_msec()
if as_event:
return [
{
"type": "m.presence",
"content": format_user_presence_state(state, now),
}
for state in updates
]
else:
return updates
async def set_state(self, target_user, state, ignore_status_msg=False):
"""Set the presence state of the user.
"""
@@ -747,7 +767,7 @@ class PresenceHandler(object):
return False
async def get_all_presence_updates(self, last_id, current_id):
async def get_all_presence_updates(self, last_id, current_id, limit):
"""
Gets a list of presence update rows from between the given stream ids.
Each row has:
@@ -762,7 +782,7 @@ class PresenceHandler(object):
"""
# TODO(markjh): replicate the unpersisted changes.
# This could use the in-memory stores for recent changes.
rows = await self.store.get_all_presence_updates(last_id, current_id)
rows = await self.store.get_all_presence_updates(last_id, current_id, limit)
return rows
def notify_new_event(self):
@@ -889,7 +909,7 @@ class PresenceHandler(object):
user_ids = await self.state.get_current_users_in_room(room_id)
user_ids = list(filter(self.is_mine_id, user_ids))
states = await self.current_state_for_users(user_ids)
states_d = await self.current_state_for_users(user_ids)
# Filter out old presence, i.e. offline presence states where
# the user hasn't been active for a week. We can change this
@@ -899,7 +919,7 @@ class PresenceHandler(object):
now = self.clock.time_msec()
states = [
state
for state in states.values()
for state in states_d.values()
if state.state != PresenceState.OFFLINE
or now - state.last_active_ts < 7 * 24 * 60 * 60 * 1000
or state.status_msg is not None

View File
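The split above is the key structural change in this file: read paths (get_state, get_states, current_state_for_users) move into BasePresenceHandler so worker processes can share them, while write paths stay abstract and are implemented by PresenceHandler on the process that owns presence. The shape, reduced to a toy (names illustrative):

import abc

class BaseHandler(abc.ABC):
    def __init__(self):
        self.user_to_current_state = {}

    async def current_state_for_users(self, user_ids):
        # Shared read path: in-memory cache first (DB fallback omitted).
        return {u: self.user_to_current_state.get(u) for u in user_ids}

    @abc.abstractmethod
    async def set_state(self, user_id, state):
        """Write path: only the process handling presence implements it."""

class MasterHandler(BaseHandler):
    async def set_state(self, user_id, state):
        self.user_to_current_state[user_id] = state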

@@ -141,8 +141,9 @@ class BaseProfileHandler(BaseHandler):
return result["displayname"]
@defer.inlineCallbacks
def set_displayname(self, target_user, requester, new_displayname, by_admin=False):
async def set_displayname(
self, target_user, requester, new_displayname, by_admin=False
):
"""Set the displayname of a user
Args:
@@ -157,6 +158,15 @@ class BaseProfileHandler(BaseHandler):
if not by_admin and target_user != requester.user:
raise AuthError(400, "Cannot set another user's displayname")
if not by_admin and not self.hs.config.enable_set_displayname:
profile = await self.store.get_profileinfo(target_user.localpart)
if profile.display_name:
raise SynapseError(
400,
"Changing display name is disabled on this server",
Codes.FORBIDDEN,
)
if len(new_displayname) > MAX_DISPLAYNAME_LEN:
raise SynapseError(
400, "Displayname is too long (max %i)" % (MAX_DISPLAYNAME_LEN,)
@@ -171,15 +181,15 @@ class BaseProfileHandler(BaseHandler):
if by_admin:
requester = create_requester(target_user)
yield self.store.set_profile_displayname(target_user.localpart, new_displayname)
await self.store.set_profile_displayname(target_user.localpart, new_displayname)
if self.hs.config.user_directory_search_all_users:
profile = yield self.store.get_profileinfo(target_user.localpart)
yield self.user_directory_handler.handle_local_profile_change(
profile = await self.store.get_profileinfo(target_user.localpart)
await self.user_directory_handler.handle_local_profile_change(
target_user.to_string(), profile
)
yield self._update_join_states(requester, target_user)
await self._update_join_states(requester, target_user)
@defer.inlineCallbacks
def get_avatar_url(self, target_user):
@@ -208,8 +218,9 @@ class BaseProfileHandler(BaseHandler):
return result["avatar_url"]
@defer.inlineCallbacks
def set_avatar_url(self, target_user, requester, new_avatar_url, by_admin=False):
async def set_avatar_url(
self, target_user, requester, new_avatar_url, by_admin=False
):
"""target_user is the user whose avatar_url is to be changed;
auth_user is the user attempting to make this change."""
if not self.hs.is_mine(target_user):
@@ -218,6 +229,13 @@ class BaseProfileHandler(BaseHandler):
if not by_admin and target_user != requester.user:
raise AuthError(400, "Cannot set another user's avatar_url")
if not by_admin and not self.hs.config.enable_set_avatar_url:
profile = await self.store.get_profileinfo(target_user.localpart)
if profile.avatar_url:
raise SynapseError(
400, "Changing avatar is disabled on this server", Codes.FORBIDDEN
)
if len(new_avatar_url) > MAX_AVATAR_URL_LEN:
raise SynapseError(
400, "Avatar URL is too long (max %i)" % (MAX_AVATAR_URL_LEN,)
@@ -227,15 +245,15 @@ class BaseProfileHandler(BaseHandler):
if by_admin:
requester = create_requester(target_user)
yield self.store.set_profile_avatar_url(target_user.localpart, new_avatar_url)
await self.store.set_profile_avatar_url(target_user.localpart, new_avatar_url)
if self.hs.config.user_directory_search_all_users:
profile = yield self.store.get_profileinfo(target_user.localpart)
yield self.user_directory_handler.handle_local_profile_change(
profile = await self.store.get_profileinfo(target_user.localpart)
await self.user_directory_handler.handle_local_profile_change(
target_user.to_string(), profile
)
yield self._update_join_states(requester, target_user)
await self._update_join_states(requester, target_user)
@defer.inlineCallbacks
def on_profile_query(self, args):
@@ -263,21 +281,20 @@ class BaseProfileHandler(BaseHandler):
return response
@defer.inlineCallbacks
def _update_join_states(self, requester, target_user):
async def _update_join_states(self, requester, target_user):
if not self.hs.is_mine(target_user):
return
yield self.ratelimit(requester)
await self.ratelimit(requester)
room_ids = yield self.store.get_rooms_for_user(target_user.to_string())
room_ids = await self.store.get_rooms_for_user(target_user.to_string())
for room_id in room_ids:
handler = self.hs.get_room_member_handler()
try:
# Assume the target_user isn't a guest,
# because we don't let guests set profile or avatar data.
yield handler.update_membership(
await handler.update_membership(
requester,
target_user,
room_id,

View File
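Note that the new enable_set_displayname / enable_set_avatar_url checks only reject when the profile field is already populated, so even with the option disabled a user can set an initial value once, and admins bypass the check entirely. The gate, condensed (an illustrative helper, not Synapse code):

def may_change_displayname(by_admin, enable_set_displayname, current_displayname):
    if by_admin or enable_set_displayname:
        return True
    # Option disabled: permit a first-time set, refuse a change.
    return not current_displayname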

@@ -132,7 +132,7 @@ class RegistrationHandler(BaseHandler):
def register_user(
self,
localpart=None,
password=None,
password_hash=None,
guest_access_token=None,
make_guest=False,
admin=False,
@@ -145,9 +145,9 @@ class RegistrationHandler(BaseHandler):
"""Registers a new client on the server.
Args:
localpart : The local part of the user ID to register. If None,
localpart: The local part of the user ID to register. If None,
one will be generated.
password (unicode) : The password to assign to this user so they can
password_hash (str|None): The hashed password to assign to this user so they can
log in again. This can be None, which means they cannot log in again
via a password (e.g. the user is an application service user).
user_type (str|None): type of user. One of the values from
@@ -164,9 +164,6 @@ class RegistrationHandler(BaseHandler):
yield self.check_registration_ratelimit(address)
yield self.auth.check_auth_blocking(threepid=threepid)
password_hash = None
if password:
password_hash = yield self._auth_handler.hash(password)
if localpart is not None:
yield self.check_username(localpart, guest_access_token=guest_access_token)
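With password replaced by password_hash, the plaintext password never reaches register_user; callers hash it first via the auth handler. A sketch of the new calling convention (the accessor names are assumed from the Synapse HomeServer object, and register_user here is still Deferred-returning, so plain yield works):

from twisted.internet import defer

@defer.inlineCallbacks
def register_with_password(hs, localpart, password):
    password_hash = None
    if password:
        # Hash before handing anything to the registration layer.
        password_hash = yield hs.get_auth_handler().hash(password)
    user_id = yield hs.get_registration_handler().register_user(
        localpart=localpart, password_hash=password_hash
    )
    return user_id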
@@ -242,7 +239,7 @@ class RegistrationHandler(BaseHandler):
fail_count += 1
if not self.hs.config.user_consent_at_registration:
yield self._auto_join_rooms(user_id)
yield defer.ensureDeferred(self._auto_join_rooms(user_id))
else:
logger.info(
"Skipping auto-join for %s because consent is required at registration",
@@ -264,8 +261,7 @@ class RegistrationHandler(BaseHandler):
return user_id
@defer.inlineCallbacks
def _auto_join_rooms(self, user_id):
async def _auto_join_rooms(self, user_id):
"""Automatically joins users to auto join rooms - creating the room in the first place
if the user is the first to be created.
@@ -279,9 +275,9 @@ class RegistrationHandler(BaseHandler):
# that an auto-generated support or bot user is not a real user and will never be
# the user to create the room
should_auto_create_rooms = False
is_real_user = yield self.store.is_real_user(user_id)
is_real_user = await self.store.is_real_user(user_id)
if self.hs.config.autocreate_auto_join_rooms and is_real_user:
count = yield self.store.count_real_users()
count = await self.store.count_real_users()
should_auto_create_rooms = count == 1
for r in self.hs.config.auto_join_rooms:
logger.info("Auto-joining %s to %s", user_id, r)
@@ -300,7 +296,7 @@ class RegistrationHandler(BaseHandler):
# getting the RoomCreationHandler during init gives a dependency
# loop
yield self.hs.get_room_creation_handler().create_room(
await self.hs.get_room_creation_handler().create_room(
fake_requester,
config={
"preset": "public_chat",
@@ -309,7 +305,7 @@ class RegistrationHandler(BaseHandler):
ratelimit=False,
)
else:
yield self._join_user_to_room(fake_requester, r)
await self._join_user_to_room(fake_requester, r)
except ConsentNotGivenError as e:
# Technically not necessary to pull out this error though
# moving away from bare excepts is a good thing to do.
@@ -317,15 +313,14 @@ class RegistrationHandler(BaseHandler):
except Exception as e:
logger.error("Failed to join new user to %r: %r", r, e)
@defer.inlineCallbacks
def post_consent_actions(self, user_id):
async def post_consent_actions(self, user_id):
"""A series of registration actions that can only be carried out once consent
has been granted
Args:
user_id (str): The user to join
"""
yield self._auto_join_rooms(user_id)
await self._auto_join_rooms(user_id)
@defer.inlineCallbacks
def appservice_register(self, user_localpart, as_token):
@@ -392,14 +387,13 @@ class RegistrationHandler(BaseHandler):
self._next_generated_user_id += 1
return str(id)
@defer.inlineCallbacks
def _join_user_to_room(self, requester, room_identifier):
async def _join_user_to_room(self, requester, room_identifier):
room_member_handler = self.hs.get_room_member_handler()
if RoomID.is_valid(room_identifier):
room_id = room_identifier
elif RoomAlias.is_valid(room_identifier):
room_alias = RoomAlias.from_string(room_identifier)
room_id, remote_room_hosts = yield room_member_handler.lookup_room_alias(
room_id, remote_room_hosts = await room_member_handler.lookup_room_alias(
room_alias
)
room_id = room_id.to_string()
@@ -408,7 +402,7 @@ class RegistrationHandler(BaseHandler):
400, "%s was not legal room ID or room alias" % (room_identifier,)
)
yield room_member_handler.update_membership(
await room_member_handler.update_membership(
requester=requester,
target=requester.user,
room_id=room_id,
@@ -540,14 +534,15 @@ class RegistrationHandler(BaseHandler):
user_id, ["guest = true"]
)
else:
access_token = yield self._auth_handler.get_access_token_for_user_id(
user_id, device_id=device_id, valid_until_ms=valid_until_ms
access_token = yield defer.ensureDeferred(
self._auth_handler.get_access_token_for_user_id(
user_id, device_id=device_id, valid_until_ms=valid_until_ms
)
)
return (device_id, access_token)
@defer.inlineCallbacks
def post_registration_actions(self, user_id, auth_result, access_token):
async def post_registration_actions(self, user_id, auth_result, access_token):
"""A user has completed registration
Args:
@@ -558,7 +553,7 @@ class RegistrationHandler(BaseHandler):
device, or None if `inhibit_login` enabled.
"""
if self.hs.config.worker_app:
yield self._post_registration_client(
await self._post_registration_client(
user_id=user_id, auth_result=auth_result, access_token=access_token
)
return
@@ -570,19 +565,18 @@ class RegistrationHandler(BaseHandler):
if is_threepid_reserved(
self.hs.config.mau_limits_reserved_threepids, threepid
):
yield self.store.upsert_monthly_active_user(user_id)
await self.store.upsert_monthly_active_user(user_id)
yield self._register_email_threepid(user_id, threepid, access_token)
await self._register_email_threepid(user_id, threepid, access_token)
if auth_result and LoginType.MSISDN in auth_result:
threepid = auth_result[LoginType.MSISDN]
yield self._register_msisdn_threepid(user_id, threepid)
await self._register_msisdn_threepid(user_id, threepid)
if auth_result and LoginType.TERMS in auth_result:
yield self._on_user_consented(user_id, self.hs.config.user_consent_version)
await self._on_user_consented(user_id, self.hs.config.user_consent_version)
@defer.inlineCallbacks
def _on_user_consented(self, user_id, consent_version):
async def _on_user_consented(self, user_id, consent_version):
"""A user consented to the terms on registration
Args:
@@ -591,8 +585,8 @@ class RegistrationHandler(BaseHandler):
consented to.
"""
logger.info("%s has consented to the privacy policy", user_id)
yield self.store.user_set_consent_version(user_id, consent_version)
yield self.post_consent_actions(user_id)
await self.store.user_set_consent_version(user_id, consent_version)
await self.post_consent_actions(user_id)
@defer.inlineCallbacks
def _register_email_threepid(self, user_id, threepid, token):
@@ -617,8 +611,13 @@ class RegistrationHandler(BaseHandler):
logger.info("Can't add incomplete 3pid")
return
yield self._auth_handler.add_threepid(
user_id, threepid["medium"], threepid["address"], threepid["validated_at"]
yield defer.ensureDeferred(
self._auth_handler.add_threepid(
user_id,
threepid["medium"],
threepid["address"],
threepid["validated_at"],
)
)
# And we add an email pusher for them by default, but only
@@ -670,6 +669,11 @@ class RegistrationHandler(BaseHandler):
return None
raise
yield self._auth_handler.add_threepid(
user_id, threepid["medium"], threepid["address"], threepid["validated_at"]
yield defer.ensureDeferred(
self._auth_handler.add_threepid(
user_id,
threepid["medium"],
threepid["address"],
threepid["validated_at"],
)
)

View File
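The defer.ensureDeferred wrappers above are the bridge in the opposite direction from the rest of the port: a function still written with @defer.inlineCallbacks cannot usefully yield a coroutine (the trampoline only recognises Deferreds), so the coroutine returned by the now-async add_threepid / get_access_token_for_user_id is converted to a Deferred first. Minimal sketch:

from twisted.internet import defer

async def add_threepid_example():
    return "ok"

@defer.inlineCallbacks
def legacy_caller():
    # Yielding add_threepid_example() directly would hand back the
    # unawaited coroutine object; ensureDeferred schedules it properly.
    result = yield defer.ensureDeferred(add_threepid_example())
    return result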

@@ -148,17 +148,16 @@ class RoomCreationHandler(BaseHandler):
return ret
@defer.inlineCallbacks
def _upgrade_room(
async def _upgrade_room(
self, requester: Requester, old_room_id: str, new_version: RoomVersion
):
user_id = requester.user.to_string()
# start by allocating a new room id
r = yield self.store.get_room(old_room_id)
r = await self.store.get_room(old_room_id)
if r is None:
raise NotFoundError("Unknown room id %s" % (old_room_id,))
new_room_id = yield self._generate_room_id(
new_room_id = await self._generate_room_id(
creator_id=user_id, is_public=r["is_public"], room_version=new_version,
)
@@ -169,7 +168,7 @@ class RoomCreationHandler(BaseHandler):
(
tombstone_event,
tombstone_context,
) = yield self.event_creation_handler.create_event(
) = await self.event_creation_handler.create_event(
requester,
{
"type": EventTypes.Tombstone,
@@ -183,12 +182,12 @@ class RoomCreationHandler(BaseHandler):
},
token_id=requester.access_token_id,
)
old_room_version = yield self.store.get_room_version_id(old_room_id)
yield self.auth.check_from_context(
old_room_version = await self.store.get_room_version_id(old_room_id)
await self.auth.check_from_context(
old_room_version, tombstone_event, tombstone_context
)
yield self.clone_existing_room(
await self.clone_existing_room(
requester,
old_room_id=old_room_id,
new_room_id=new_room_id,
@@ -197,32 +196,31 @@ class RoomCreationHandler(BaseHandler):
)
# now send the tombstone
yield self.event_creation_handler.send_nonmember_event(
await self.event_creation_handler.send_nonmember_event(
requester, tombstone_event, tombstone_context
)
old_room_state = yield tombstone_context.get_current_state_ids()
old_room_state = await tombstone_context.get_current_state_ids()
# update any aliases
yield self._move_aliases_to_new_room(
await self._move_aliases_to_new_room(
requester, old_room_id, new_room_id, old_room_state
)
# Copy over user push rules, tags and migrate room directory state
yield self.room_member_handler.transfer_room_state_on_room_upgrade(
await self.room_member_handler.transfer_room_state_on_room_upgrade(
old_room_id, new_room_id
)
# finally, shut down the PLs in the old room, and update them in the new
# room.
yield self._update_upgraded_room_pls(
await self._update_upgraded_room_pls(
requester, old_room_id, new_room_id, old_room_state,
)
return new_room_id
@defer.inlineCallbacks
def _update_upgraded_room_pls(
async def _update_upgraded_room_pls(
self,
requester: Requester,
old_room_id: str,
@@ -249,7 +247,7 @@ class RoomCreationHandler(BaseHandler):
)
return
old_room_pl_state = yield self.store.get_event(old_room_pl_event_id)
old_room_pl_state = await self.store.get_event(old_room_pl_event_id)
# we try to stop regular users from speaking by setting the PL required
# to send regular events and invites to 'Moderator' level. That's normally
@@ -278,7 +276,7 @@ class RoomCreationHandler(BaseHandler):
if updated:
try:
yield self.event_creation_handler.create_and_send_nonmember_event(
await self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.PowerLevels,
@@ -292,7 +290,7 @@ class RoomCreationHandler(BaseHandler):
except AuthError as e:
logger.warning("Unable to update PLs in old room: %s", e)
yield self.event_creation_handler.create_and_send_nonmember_event(
await self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.PowerLevels,
@@ -304,8 +302,7 @@ class RoomCreationHandler(BaseHandler):
ratelimit=False,
)
@defer.inlineCallbacks
def clone_existing_room(
async def clone_existing_room(
self,
requester: Requester,
old_room_id: str,
@@ -338,7 +335,7 @@ class RoomCreationHandler(BaseHandler):
# Check if old room was non-federatable
# Get old room's create event
old_room_create_event = yield self.store.get_create_event_for_room(old_room_id)
old_room_create_event = await self.store.get_create_event_for_room(old_room_id)
# Check if the create event specified a non-federatable room
if not old_room_create_event.content.get("m.federate", True):
@@ -361,11 +358,11 @@ class RoomCreationHandler(BaseHandler):
(EventTypes.PowerLevels, ""),
)
old_room_state_ids = yield self.store.get_filtered_current_state_ids(
old_room_state_ids = await self.store.get_filtered_current_state_ids(
old_room_id, StateFilter.from_types(types_to_copy)
)
# map from event_id to BaseEvent
old_room_state_events = yield self.store.get_events(old_room_state_ids.values())
old_room_state_events = await self.store.get_events(old_room_state_ids.values())
for k, old_event_id in iteritems(old_room_state_ids):
old_event = old_room_state_events.get(old_event_id)
@@ -400,7 +397,7 @@ class RoomCreationHandler(BaseHandler):
if current_power_level < needed_power_level:
power_levels["users"][user_id] = needed_power_level
yield self._send_events_for_new_room(
await self._send_events_for_new_room(
requester,
new_room_id,
# we expect to override all the presets with initial_state, so this is
@@ -412,12 +409,12 @@ class RoomCreationHandler(BaseHandler):
)
# Transfer membership events
old_room_member_state_ids = yield self.store.get_filtered_current_state_ids(
old_room_member_state_ids = await self.store.get_filtered_current_state_ids(
old_room_id, StateFilter.from_types([(EventTypes.Member, None)])
)
# map from event_id to BaseEvent
old_room_member_state_events = yield self.store.get_events(
old_room_member_state_events = await self.store.get_events(
old_room_member_state_ids.values()
)
for k, old_event in iteritems(old_room_member_state_events):
@@ -426,7 +423,7 @@ class RoomCreationHandler(BaseHandler):
"membership" in old_event.content
and old_event.content["membership"] == "ban"
):
yield self.room_member_handler.update_membership(
await self.room_member_handler.update_membership(
requester,
UserID.from_string(old_event["state_key"]),
new_room_id,
@@ -438,8 +435,7 @@ class RoomCreationHandler(BaseHandler):
# XXX invites/joins
# XXX 3pid invites
@defer.inlineCallbacks
def _move_aliases_to_new_room(
async def _move_aliases_to_new_room(
self,
requester: Requester,
old_room_id: str,
@@ -448,13 +444,13 @@ class RoomCreationHandler(BaseHandler):
):
directory_handler = self.hs.get_handlers().directory_handler
aliases = yield self.store.get_aliases_for_room(old_room_id)
aliases = await self.store.get_aliases_for_room(old_room_id)
# check to see if we have a canonical alias.
canonical_alias_event = None
canonical_alias_event_id = old_room_state.get((EventTypes.CanonicalAlias, ""))
if canonical_alias_event_id:
canonical_alias_event = yield self.store.get_event(canonical_alias_event_id)
canonical_alias_event = await self.store.get_event(canonical_alias_event_id)
# first we try to remove the aliases from the old room (we suppress sending
# the room_aliases event until the end).
@@ -472,7 +468,7 @@ class RoomCreationHandler(BaseHandler):
for alias_str in aliases:
alias = RoomAlias.from_string(alias_str)
try:
yield directory_handler.delete_association(requester, alias)
await directory_handler.delete_association(requester, alias)
removed_aliases.append(alias_str)
except SynapseError as e:
logger.warning("Unable to remove alias %s from old room: %s", alias, e)
@@ -485,7 +481,7 @@ class RoomCreationHandler(BaseHandler):
# we can now add any aliases we successfully removed to the new room.
for alias in removed_aliases:
try:
yield directory_handler.create_association(
await directory_handler.create_association(
requester,
RoomAlias.from_string(alias),
new_room_id,
@@ -502,7 +498,7 @@ class RoomCreationHandler(BaseHandler):
# alias event for the new room with a copy of the information.
try:
if canonical_alias_event:
yield self.event_creation_handler.create_and_send_nonmember_event(
await self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.CanonicalAlias,
@@ -518,8 +514,9 @@ class RoomCreationHandler(BaseHandler):
# we returned the new room to the client at this point.
logger.error("Unable to send updated alias events in new room: %s", e)
@defer.inlineCallbacks
def create_room(self, requester, config, ratelimit=True, creator_join_profile=None):
async def create_room(
self, requester, config, ratelimit=True, creator_join_profile=None
):
""" Creates a new room.
Args:
@@ -547,7 +544,7 @@ class RoomCreationHandler(BaseHandler):
"""
user_id = requester.user.to_string()
yield self.auth.check_auth_blocking(user_id)
await self.auth.check_auth_blocking(user_id)
if (
self._server_notices_mxid is not None
@@ -556,11 +553,11 @@ class RoomCreationHandler(BaseHandler):
# allow the server notices mxid to create rooms
is_requester_admin = True
else:
is_requester_admin = yield self.auth.is_server_admin(requester.user)
is_requester_admin = await self.auth.is_server_admin(requester.user)
# Check whether the third party rules allows/changes the room create
# request.
event_allowed = yield self.third_party_event_rules.on_create_room(
event_allowed = await self.third_party_event_rules.on_create_room(
requester, config, is_requester_admin=is_requester_admin
)
if not event_allowed:
@@ -574,7 +571,7 @@ class RoomCreationHandler(BaseHandler):
raise SynapseError(403, "You are not permitted to create rooms")
if ratelimit:
yield self.ratelimit(requester)
await self.ratelimit(requester)
room_version_id = config.get(
"room_version", self.config.default_room_version.identifier
@@ -597,7 +594,7 @@ class RoomCreationHandler(BaseHandler):
raise SynapseError(400, "Invalid characters in room alias")
room_alias = RoomAlias(config["room_alias_name"], self.hs.hostname)
mapping = yield self.store.get_association_from_room_alias(room_alias)
mapping = await self.store.get_association_from_room_alias(room_alias)
if mapping:
raise SynapseError(400, "Room alias already taken", Codes.ROOM_IN_USE)
@@ -612,7 +609,7 @@ class RoomCreationHandler(BaseHandler):
except Exception:
raise SynapseError(400, "Invalid user_id: %s" % (i,))
yield self.event_creation_handler.assert_accepted_privacy_policy(requester)
await self.event_creation_handler.assert_accepted_privacy_policy(requester)
power_level_content_override = config.get("power_level_content_override")
if (
@@ -631,13 +628,13 @@ class RoomCreationHandler(BaseHandler):
visibility = config.get("visibility", None)
is_public = visibility == "public"
room_id = yield self._generate_room_id(
room_id = await self._generate_room_id(
creator_id=user_id, is_public=is_public, room_version=room_version,
)
directory_handler = self.hs.get_handlers().directory_handler
if room_alias:
yield directory_handler.create_association(
await directory_handler.create_association(
requester=requester,
room_id=room_id,
room_alias=room_alias,
@@ -645,6 +642,13 @@ class RoomCreationHandler(BaseHandler):
check_membership=False,
)
if is_public:
if not self.config.is_publishing_room_allowed(user_id, room_id, room_alias):
# Let's just return a generic message, as there may be all sorts of
# reasons why we said no. TODO: Allow configurable error messages
# per alias creation rule?
raise SynapseError(403, "Not allowed to publish room")
preset_config = config.get(
"preset",
RoomCreationPreset.PRIVATE_CHAT
@@ -663,7 +667,7 @@ class RoomCreationHandler(BaseHandler):
# override any attempt to set room versions via the creation_content
creation_content["room_version"] = room_version.identifier
yield self._send_events_for_new_room(
await self._send_events_for_new_room(
requester,
room_id,
preset_config=preset_config,
@@ -677,7 +681,7 @@ class RoomCreationHandler(BaseHandler):
if "name" in config:
name = config["name"]
yield self.event_creation_handler.create_and_send_nonmember_event(
await self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.Name,
@@ -691,7 +695,7 @@ class RoomCreationHandler(BaseHandler):
if "topic" in config:
topic = config["topic"]
yield self.event_creation_handler.create_and_send_nonmember_event(
await self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.Topic,
@@ -709,7 +713,7 @@ class RoomCreationHandler(BaseHandler):
if is_direct:
content["is_direct"] = is_direct
yield self.room_member_handler.update_membership(
await self.room_member_handler.update_membership(
requester,
UserID.from_string(invitee),
room_id,
@@ -723,7 +727,7 @@ class RoomCreationHandler(BaseHandler):
id_access_token = invite_3pid.get("id_access_token") # optional
address = invite_3pid["address"]
medium = invite_3pid["medium"]
yield self.hs.get_room_member_handler().do_3pid_invite(
await self.hs.get_room_member_handler().do_3pid_invite(
room_id,
requester.user,
medium,
@@ -741,8 +745,7 @@ class RoomCreationHandler(BaseHandler):
return result
@defer.inlineCallbacks
def _send_events_for_new_room(
async def _send_events_for_new_room(
self,
creator, # A Requester object.
room_id,
@@ -762,11 +765,10 @@ class RoomCreationHandler(BaseHandler):
return e
@defer.inlineCallbacks
def send(etype, content, **kwargs):
async def send(etype, content, **kwargs):
event = create(etype, content, **kwargs)
logger.debug("Sending %s in new room", etype)
yield self.event_creation_handler.create_and_send_nonmember_event(
await self.event_creation_handler.create_and_send_nonmember_event(
creator, event, ratelimit=False
)
@@ -777,10 +779,10 @@ class RoomCreationHandler(BaseHandler):
event_keys = {"room_id": room_id, "sender": creator_id, "state_key": ""}
creation_content.update({"creator": creator_id})
yield send(etype=EventTypes.Create, content=creation_content)
await send(etype=EventTypes.Create, content=creation_content)
logger.debug("Sending %s in new room", EventTypes.Member)
yield self.room_member_handler.update_membership(
await self.room_member_handler.update_membership(
creator,
creator.user,
room_id,
@@ -793,7 +795,7 @@ class RoomCreationHandler(BaseHandler):
# of the first events that get sent into a room.
pl_content = initial_state.pop((EventTypes.PowerLevels, ""), None)
if pl_content is not None:
yield send(etype=EventTypes.PowerLevels, content=pl_content)
await send(etype=EventTypes.PowerLevels, content=pl_content)
else:
power_level_content = {
"users": {creator_id: 100},
@@ -806,6 +808,7 @@ class RoomCreationHandler(BaseHandler):
EventTypes.RoomAvatar: 50,
EventTypes.Tombstone: 100,
EventTypes.ServerACL: 100,
EventTypes.RoomEncryption: 100,
},
"events_default": 0,
"state_default": 50,
@@ -825,33 +828,33 @@ class RoomCreationHandler(BaseHandler):
if power_level_content_override:
power_level_content.update(power_level_content_override)
yield send(etype=EventTypes.PowerLevels, content=power_level_content)
await send(etype=EventTypes.PowerLevels, content=power_level_content)
if room_alias and (EventTypes.CanonicalAlias, "") not in initial_state:
yield send(
await send(
etype=EventTypes.CanonicalAlias,
content={"alias": room_alias.to_string()},
)
if (EventTypes.JoinRules, "") not in initial_state:
yield send(
await send(
etype=EventTypes.JoinRules, content={"join_rule": config["join_rules"]}
)
if (EventTypes.RoomHistoryVisibility, "") not in initial_state:
yield send(
await send(
etype=EventTypes.RoomHistoryVisibility,
content={"history_visibility": config["history_visibility"]},
)
if config["guest_can_join"]:
if (EventTypes.GuestAccess, "") not in initial_state:
yield send(
await send(
etype=EventTypes.GuestAccess, content={"guest_access": "can_join"}
)
for (etype, state_key), content in initial_state.items():
yield send(etype=etype, state_key=state_key, content=content)
await send(etype=etype, state_key=state_key, content=content)
@defer.inlineCallbacks
def _generate_room_id(

View File
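The send() closure above is a small but tidy pattern for the room-bootstrap events: it stamps the common room_id / sender / state_key keys onto every event so each call site only names the type and content. Reduced to its essence (create_and_send stands in for the real event-creation call):

async def send_initial_events(creator_id, room_id, create_and_send):
    event_keys = {"room_id": room_id, "sender": creator_id, "state_key": ""}

    async def send(etype, content, **kwargs):
        # Common keys first, then per-call overrides such as state_key.
        event = {"type": etype, "content": content, **event_keys, **kwargs}
        await create_and_send(event)

    await send("m.room.create", {"creator": creator_id})
    await send("m.room.power_levels", {"users": {creator_id: 100}})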

@@ -15,6 +15,7 @@
import logging
from collections import namedtuple
from typing import Any, Dict, Optional
from six import iteritems
@@ -89,7 +90,11 @@ class RoomListHandler(BaseHandler):
logger.info("Bypassing cache as search request.")
return self._get_public_room_list(
limit, since_token, search_filter, network_tuple=network_tuple
limit,
since_token,
search_filter,
network_tuple=network_tuple,
from_federation=from_federation,
)
key = (limit, since_token, network_tuple)
@@ -105,22 +110,22 @@ class RoomListHandler(BaseHandler):
@defer.inlineCallbacks
def _get_public_room_list(
self,
limit=None,
since_token=None,
search_filter=None,
network_tuple=EMPTY_THIRD_PARTY_ID,
from_federation=False,
):
limit: Optional[int] = None,
since_token: Optional[str] = None,
search_filter: Optional[Dict] = None,
network_tuple: ThirdPartyInstanceID = EMPTY_THIRD_PARTY_ID,
from_federation: bool = False,
) -> Dict[str, Any]:
"""Generate a public room list.
Args:
limit (int|None): Maximum amount of rooms to return.
since_token (str|None)
search_filter (dict|None): Dictionary to filter rooms by.
network_tuple (ThirdPartyInstanceID): Which public list to use.
limit: Maximum amount of rooms to return.
since_token:
search_filter: Dictionary to filter rooms by.
network_tuple: Which public list to use.
This can be (None, None) to indicate the main list, or a particular
appservice and network id to use an appservice specific one.
Setting to None returns all public rooms across all lists.
from_federation (bool): Whether this request originated from a
from_federation: Whether this request originated from a
federating server or a client. Used for room filtering.
"""

View File
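The room-list hunk above also shows the typing convention this series adopts: parameter types move out of the Args: docstring and into real annotations, leaving the docstring for semantics only. The pattern in isolation:

from typing import Any, Dict, Optional

def get_public_room_list(
    limit: Optional[int] = None, search_filter: Optional[Dict] = None
) -> Dict[str, Any]:
    """Generate a public room list.

    Args:
        limit: Maximum amount of rooms to return.
        search_filter: Dictionary to filter rooms by.
    """
    return {}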

@@ -142,8 +142,7 @@ class RoomMemberHandler(object):
"""
raise NotImplementedError()
@defer.inlineCallbacks
def _local_membership_update(
async def _local_membership_update(
self,
requester,
target,
@@ -164,7 +163,7 @@ class RoomMemberHandler(object):
if requester.is_guest:
content["kind"] = "guest"
event, context = yield self.event_creation_handler.create_event(
event, context = await self.event_creation_handler.create_event(
requester,
{
"type": EventTypes.Member,
@@ -182,18 +181,18 @@ class RoomMemberHandler(object):
)
# Check if this event matches the previous membership event for the user.
duplicate = yield self.event_creation_handler.deduplicate_state_event(
duplicate = await self.event_creation_handler.deduplicate_state_event(
event, context
)
if duplicate is not None:
# Discard the new event since this membership change is a no-op.
return duplicate
yield self.event_creation_handler.handle_new_client_event(
await self.event_creation_handler.handle_new_client_event(
requester, event, context, extra_users=[target], ratelimit=ratelimit
)
prev_state_ids = yield context.get_prev_state_ids()
prev_state_ids = await context.get_prev_state_ids()
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
@@ -203,15 +202,15 @@ class RoomMemberHandler(object):
# info.
newly_joined = True
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
prev_member_event = await self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
yield self._user_joined_room(target, room_id)
await self._user_joined_room(target, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
prev_member_event = await self.store.get_event(prev_member_event_id)
if prev_member_event.membership == Membership.JOIN:
yield self._user_left_room(target, room_id)
await self._user_left_room(target, room_id)
return event
@@ -253,8 +252,7 @@ class RoomMemberHandler(object):
for tag, tag_content in room_tags.items():
yield self.store.add_tag_to_room(user_id, new_room_id, tag, tag_content)
@defer.inlineCallbacks
def update_membership(
async def update_membership(
self,
requester,
target,
@@ -269,8 +267,8 @@ class RoomMemberHandler(object):
):
key = (room_id,)
with (yield self.member_linearizer.queue(key)):
result = yield self._update_membership(
with (await self.member_linearizer.queue(key)):
result = await self._update_membership(
requester,
target,
room_id,
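update_membership serialises all membership changes for a room through a Linearizer keyed on the room ID, so concurrent requests queue rather than interleave; the with (await ...queue(key)) form is how the async code takes that lock. Sketch (Linearizer is the real helper from synapse.util.async_helpers; the body is illustrative):

from synapse.util.async_helpers import Linearizer

member_linearizer = Linearizer(name="member")

async def update(room_id):
    # queue() yields a context manager; one holder per key at a time.
    with (await member_linearizer.queue((room_id,))):
        ...  # perform the membership change for this room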
@@ -285,8 +283,7 @@ class RoomMemberHandler(object):
return result
@defer.inlineCallbacks
def _update_membership(
async def _update_membership(
self,
requester,
target,
@@ -321,7 +318,7 @@ class RoomMemberHandler(object):
# if this is a join with a 3pid signature, we may need to turn a 3pid
# invite into a normal invite before we can handle the join.
if third_party_signed is not None:
yield self.federation_handler.exchange_third_party_invite(
await self.federation_handler.exchange_third_party_invite(
third_party_signed["sender"],
target.to_string(),
room_id,
@@ -332,7 +329,7 @@ class RoomMemberHandler(object):
remote_room_hosts = []
if effective_membership_state not in ("leave", "ban"):
is_blocked = yield self.store.is_room_blocked(room_id)
is_blocked = await self.store.is_room_blocked(room_id)
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
@@ -351,7 +348,7 @@ class RoomMemberHandler(object):
is_requester_admin = True
else:
is_requester_admin = yield self.auth.is_server_admin(requester.user)
is_requester_admin = await self.auth.is_server_admin(requester.user)
if not is_requester_admin:
if self.config.block_non_admin_invites:
@@ -370,9 +367,9 @@ class RoomMemberHandler(object):
if block_invite:
raise SynapseError(403, "Invites have been disabled on this server")
latest_event_ids = yield self.store.get_prev_events_for_room(room_id)
latest_event_ids = await self.store.get_prev_events_for_room(room_id)
current_state_ids = yield self.state_handler.get_current_state_ids(
current_state_ids = await self.state_handler.get_current_state_ids(
room_id, latest_event_ids=latest_event_ids
)
@@ -381,7 +378,7 @@ class RoomMemberHandler(object):
# transitions and generic otherwise
old_state_id = current_state_ids.get((EventTypes.Member, target.to_string()))
if old_state_id:
old_state = yield self.store.get_event(old_state_id, allow_none=True)
old_state = await self.store.get_event(old_state_id, allow_none=True)
old_membership = old_state.content.get("membership") if old_state else None
if action == "unban" and old_membership != "ban":
raise SynapseError(
@@ -413,7 +410,7 @@ class RoomMemberHandler(object):
old_membership == Membership.INVITE
and effective_membership_state == Membership.LEAVE
):
is_blocked = yield self._is_server_notice_room(room_id)
is_blocked = await self._is_server_notice_room(room_id)
if is_blocked:
raise SynapseError(
http_client.FORBIDDEN,
@@ -424,18 +421,18 @@ class RoomMemberHandler(object):
if action == "kick":
raise AuthError(403, "The target user is not in the room")
is_host_in_room = yield self._is_host_in_room(current_state_ids)
is_host_in_room = await self._is_host_in_room(current_state_ids)
if effective_membership_state == Membership.JOIN:
if requester.is_guest:
guest_can_join = yield self._can_guest_join(current_state_ids)
guest_can_join = await self._can_guest_join(current_state_ids)
if not guest_can_join:
# This should be an auth check, but guests are a local concept,
# so don't really fit into the general auth process.
raise AuthError(403, "Guest access not allowed")
if not is_host_in_room:
inviter = yield self._get_inviter(target.to_string(), room_id)
inviter = await self._get_inviter(target.to_string(), room_id)
if inviter and not self.hs.is_mine(inviter):
remote_room_hosts.append(inviter.domain)
@@ -443,13 +440,13 @@ class RoomMemberHandler(object):
profile = self.profile_handler
if not content_specified:
content["displayname"] = yield profile.get_displayname(target)
content["avatar_url"] = yield profile.get_avatar_url(target)
content["displayname"] = await profile.get_displayname(target)
content["avatar_url"] = await profile.get_avatar_url(target)
if requester.is_guest:
content["kind"] = "guest"
remote_join_response = yield self._remote_join(
remote_join_response = await self._remote_join(
requester, remote_room_hosts, room_id, target, content
)
@@ -458,7 +455,7 @@ class RoomMemberHandler(object):
elif effective_membership_state == Membership.LEAVE:
if not is_host_in_room:
# perhaps we've been invited
inviter = yield self._get_inviter(target.to_string(), room_id)
inviter = await self._get_inviter(target.to_string(), room_id)
if not inviter:
raise SynapseError(404, "Not a known room")
@@ -472,12 +469,12 @@ class RoomMemberHandler(object):
else:
# send the rejection to the inviter's HS.
remote_room_hosts = remote_room_hosts + [inviter.domain]
res = yield self._remote_reject_invite(
res = await self._remote_reject_invite(
requester, remote_room_hosts, room_id, target, content,
)
return res
res = yield self._local_membership_update(
res = await self._local_membership_update(
requester=requester,
target=target,
room_id=room_id,
@@ -519,6 +516,9 @@ class RoomMemberHandler(object):
yield self.store.set_room_is_public(old_room_id, False)
yield self.store.set_room_is_public(room_id, True)
# Transfer alias mappings in the room directory
yield self.store.update_aliases_for_room(old_room_id, room_id)
# Check if any groups we own contain the predecessor room
local_group_ids = yield self.store.get_local_groups_for_room(old_room_id)
for group_id in local_group_ids:
@@ -569,8 +569,7 @@ class RoomMemberHandler(object):
)
continue
@defer.inlineCallbacks
def send_membership_event(self, requester, event, context, ratelimit=True):
async def send_membership_event(self, requester, event, context, ratelimit=True):
"""
Change the membership status of a user in a room.
@@ -596,27 +595,27 @@ class RoomMemberHandler(object):
else:
requester = types.create_requester(target_user)
prev_event = yield self.event_creation_handler.deduplicate_state_event(
prev_event = await self.event_creation_handler.deduplicate_state_event(
event, context
)
if prev_event is not None:
return
prev_state_ids = yield context.get_prev_state_ids()
prev_state_ids = await context.get_prev_state_ids()
if event.membership == Membership.JOIN:
if requester.is_guest:
guest_can_join = yield self._can_guest_join(prev_state_ids)
guest_can_join = await self._can_guest_join(prev_state_ids)
if not guest_can_join:
# This should be an auth check, but guests are a local concept,
# so don't really fit into the general auth process.
raise AuthError(403, "Guest access not allowed")
if event.membership not in (Membership.LEAVE, Membership.BAN):
is_blocked = yield self.store.is_room_blocked(room_id)
is_blocked = await self.store.is_room_blocked(room_id)
if is_blocked:
raise SynapseError(403, "This room has been blocked on this server")
yield self.event_creation_handler.handle_new_client_event(
await self.event_creation_handler.handle_new_client_event(
requester, event, context, extra_users=[target_user], ratelimit=ratelimit
)
@@ -630,15 +629,15 @@ class RoomMemberHandler(object):
# info.
newly_joined = True
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
prev_member_event = await self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
yield self._user_joined_room(target_user, room_id)
await self._user_joined_room(target_user, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
prev_member_event = yield self.store.get_event(prev_member_event_id)
prev_member_event = await self.store.get_event(prev_member_event_id)
if prev_member_event.membership == Membership.JOIN:
yield self._user_left_room(target_user, room_id)
await self._user_left_room(target_user, room_id)
@defer.inlineCallbacks
def _can_guest_join(self, current_state_ids):
@@ -696,8 +695,7 @@ class RoomMemberHandler(object):
if invite:
return UserID.from_string(invite.sender)
@defer.inlineCallbacks
def do_3pid_invite(
async def do_3pid_invite(
self,
room_id,
inviter,
@@ -709,7 +707,7 @@ class RoomMemberHandler(object):
id_access_token=None,
):
if self.config.block_non_admin_invites:
is_requester_admin = yield self.auth.is_server_admin(requester.user)
is_requester_admin = await self.auth.is_server_admin(requester.user)
if not is_requester_admin:
raise SynapseError(
403, "Invites have been disabled on this server", Codes.FORBIDDEN
@@ -717,9 +715,9 @@ class RoomMemberHandler(object):
# We need to rate limit *before* we send out any 3PID invites, so we
# can't just rely on the standard ratelimiting of events.
yield self.base_handler.ratelimit(requester)
await self.base_handler.ratelimit(requester)
can_invite = yield self.third_party_event_rules.check_threepid_can_be_invited(
can_invite = await self.third_party_event_rules.check_threepid_can_be_invited(
medium, address, room_id
)
if not can_invite:
@@ -734,16 +732,16 @@ class RoomMemberHandler(object):
403, "Looking up third-party identifiers is denied from this server"
)
invitee = yield self.identity_handler.lookup_3pid(
invitee = await self.identity_handler.lookup_3pid(
id_server, medium, address, id_access_token
)
if invitee:
yield self.update_membership(
await self.update_membership(
requester, UserID.from_string(invitee), room_id, "invite", txn_id=txn_id
)
else:
yield self._make_and_store_3pid_invite(
await self._make_and_store_3pid_invite(
requester,
id_server,
medium,
@@ -754,8 +752,7 @@ class RoomMemberHandler(object):
id_access_token=id_access_token,
)
@defer.inlineCallbacks
def _make_and_store_3pid_invite(
async def _make_and_store_3pid_invite(
self,
requester,
id_server,
@@ -766,7 +763,7 @@ class RoomMemberHandler(object):
txn_id,
id_access_token=None,
):
room_state = yield self.state_handler.get_current_state(room_id)
room_state = await self.state_handler.get_current_state(room_id)
inviter_display_name = ""
inviter_avatar_url = ""
@@ -804,7 +801,7 @@ class RoomMemberHandler(object):
public_keys,
fallback_public_key,
display_name,
) = yield self.identity_handler.ask_id_server_for_third_party_invite(
) = await self.identity_handler.ask_id_server_for_third_party_invite(
requester=requester,
id_server=id_server,
medium=medium,
@@ -820,7 +817,7 @@ class RoomMemberHandler(object):
id_access_token=id_access_token,
)
yield self.event_creation_handler.create_and_send_nonmember_event(
await self.event_creation_handler.create_and_send_nonmember_event(
requester,
{
"type": EventTypes.ThirdPartyInvite,
@@ -914,8 +911,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
return complexity["v1"] > max_complexity
@defer.inlineCallbacks
def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
async def _remote_join(self, requester, remote_room_hosts, room_id, user, content):
"""Implements RoomMemberHandler._remote_join
"""
# filter ourselves out of remote_room_hosts: do_invite_join ignores it
@@ -930,7 +926,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
if self.hs.config.limit_remote_rooms.enabled:
# Fetch the room complexity
too_complex = yield self._is_remote_room_too_complex(
too_complex = await self._is_remote_room_too_complex(
room_id, remote_room_hosts
)
if too_complex is True:
@@ -944,12 +940,10 @@ class RoomMemberMasterHandler(RoomMemberHandler):
# join dance for now, since we're kinda implicitly checking
# that we are allowed to join when we decide whether or not we
# need to do the invite/join dance.
yield defer.ensureDeferred(
self.federation_handler.do_invite_join(
remote_room_hosts, room_id, user.to_string(), content
)
await self.federation_handler.do_invite_join(
remote_room_hosts, room_id, user.to_string(), content
)
yield self._user_joined_room(user, room_id)
await self._user_joined_room(user, room_id)
# Check the room we just joined wasn't too large, if we didn't fetch the
# complexity of it before.
@@ -959,7 +953,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
return
# Check again, but with the local state events
too_complex = yield self._is_local_room_too_complex(room_id)
too_complex = await self._is_local_room_too_complex(room_id)
if too_complex is False:
# We're under the limit.
@@ -967,7 +961,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
# The room is too large. Leave.
requester = types.create_requester(user, None, False, None)
yield self.update_membership(
await self.update_membership(
requester=requester, target=user, room_id=room_id, action="leave"
)
raise SynapseError(
@@ -1005,12 +999,12 @@ class RoomMemberMasterHandler(RoomMemberHandler):
def _user_joined_room(self, target, room_id):
"""Implements RoomMemberHandler._user_joined_room
"""
return user_joined_room(self.distributor, target, room_id)
return defer.succeed(user_joined_room(self.distributor, target, room_id))
def _user_left_room(self, target, room_id):
"""Implements RoomMemberHandler._user_left_room
"""
return user_left_room(self.distributor, target, room_id)
return defer.succeed(user_left_room(self.distributor, target, room_id))
@defer.inlineCallbacks
def forget(self, user, room_id):
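The hunks above follow one mechanical pattern: Twisted @defer.inlineCallbacks generators become native coroutines, with each "yield deferred" turned into "await deferred". A minimal sketch of that conversion, assuming Twisted is installed; FakeStore and get_event are illustrative stand-ins, not Synapse's real storage API:

from twisted.internet import defer


class FakeStore:
    def get_event(self, event_id):
        # Deferreds are awaitable (Twisted >= 16.4), so both styles below work.
        return defer.succeed({"event_id": event_id, "membership": "join"})


@defer.inlineCallbacks
def get_membership_old(store, event_id):
    event = yield store.get_event(event_id)  # old style: yield each Deferred
    return event["membership"]


async def get_membership_new(store, event_id):
    event = await store.get_event(event_id)  # new style: await the same Deferred
    return event["membership"]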


@@ -14,7 +14,7 @@
# limitations under the License.
import logging
import re
from typing import Tuple
from typing import Optional, Tuple
import attr
import saml2
@@ -26,6 +26,7 @@ from synapse.config import ConfigError
from synapse.http.server import finish_request
from synapse.http.servlet import parse_string
from synapse.module_api import ModuleApi
from synapse.module_api.errors import RedirectException
from synapse.types import (
UserID,
map_username_to_mxid_localpart,
@@ -43,11 +44,15 @@ class Saml2SessionData:
# time the session was created, in milliseconds
creation_time = attr.ib()
# The user interactive authentication session ID associated with this SAML
# session (or None if this SAML session is for an initial login).
ui_auth_session_id = attr.ib(type=Optional[str], default=None)
class SamlHandler:
def __init__(self, hs):
self._saml_client = Saml2Client(hs.config.saml2_sp_config)
self._auth = hs.get_auth()
self._auth_handler = hs.get_auth_handler()
self._registration_handler = hs.get_registration_handler()
@@ -76,12 +81,14 @@ class SamlHandler:
self._error_html_content = hs.config.saml2_error_html_content
def handle_redirect_request(self, client_redirect_url):
def handle_redirect_request(self, client_redirect_url, ui_auth_session_id=None):
"""Handle an incoming request to /login/sso/redirect
Args:
client_redirect_url (bytes): the URL that we should redirect the
client to when everything is done
ui_auth_session_id (Optional[str]): The session ID of the ongoing UI Auth (or
None if this is a login).
Returns:
bytes: URL to redirect to
@@ -91,7 +98,9 @@ class SamlHandler:
)
now = self._clock.time_msec()
self._outstanding_requests_dict[reqid] = Saml2SessionData(creation_time=now)
self._outstanding_requests_dict[reqid] = Saml2SessionData(
creation_time=now, ui_auth_session_id=ui_auth_session_id,
)
for key, value in info["headers"]:
if key == "Location":
@@ -118,7 +127,12 @@ class SamlHandler:
self.expire_sessions()
try:
user_id = await self._map_saml_response_to_user(resp_bytes, relay_state)
user_id, current_session = await self._map_saml_response_to_user(
resp_bytes, relay_state
)
except RedirectException:
# Raise the exception as per the wishes of the SAML module response
raise
except Exception as e:
# If decoding the response or mapping it to a user failed, then log the
# error and tell the user that something went wrong.
@@ -133,9 +147,28 @@ class SamlHandler:
finish_request(request)
return
self._auth_handler.complete_sso_login(user_id, request, relay_state)
# Complete the interactive auth session or the login.
if current_session and current_session.ui_auth_session_id:
await self._auth_handler.complete_sso_ui_auth(
user_id, current_session.ui_auth_session_id, request
)
async def _map_saml_response_to_user(self, resp_bytes, client_redirect_url):
else:
await self._auth_handler.complete_sso_login(user_id, request, relay_state)
async def _map_saml_response_to_user(
self, resp_bytes: str, client_redirect_url: str
) -> Tuple[str, Optional[Saml2SessionData]]:
"""
Given a SAML response, retrieve the cached session and user for it.
Args:
resp_bytes: The SAML response.
client_redirect_url: The redirect URL passed in by the client.
Returns:
Tuple of the user ID and SAML session associated with this response.
"""
try:
saml2_auth = self._saml_client.parse_authn_request_response(
resp_bytes,
@@ -163,7 +196,9 @@ class SamlHandler:
logger.info("SAML2 mapped attributes: %s", saml2_auth.ava)
self._outstanding_requests_dict.pop(saml2_auth.in_response_to, None)
current_session = self._outstanding_requests_dict.pop(
saml2_auth.in_response_to, None
)
remote_user_id = self._user_mapping_provider.get_remote_user_id(
saml2_auth, client_redirect_url
@@ -184,7 +219,7 @@ class SamlHandler:
)
if registered_user_id is not None:
logger.info("Found existing mapping %s", registered_user_id)
return registered_user_id
return registered_user_id, current_session
# backwards-compatibility hack: see if there is an existing user with a
# suitable mapping from the uid
@@ -209,7 +244,7 @@ class SamlHandler:
await self._datastore.record_user_external_id(
self._auth_provider_id, remote_user_id, registered_user_id
)
return registered_user_id
return registered_user_id, current_session
# Map saml response to user attributes using the configured mapping provider
for i in range(1000):
@@ -256,7 +291,7 @@ class SamlHandler:
await self._datastore.record_user_external_id(
self._auth_provider_id, remote_user_id, registered_user_id
)
return registered_user_id
return registered_user_id, current_session
def expire_sessions(self):
expire_before = self._clock.time_msec() - self._saml2_session_lifetime
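The SAML changes above thread an optional UI-auth session ID through the map of outstanding requests, so a SAML response can complete either a normal login or an in-progress user-interactive auth. A hedged sketch of that bookkeeping, heavily simplified; "req-1" and "sess-42" are made-up values:

from typing import Dict, Optional

import attr


@attr.s
class Saml2SessionData:
    creation_time = attr.ib(type=int)
    ui_auth_session_id = attr.ib(type=Optional[str], default=None)


outstanding: Dict[str, Saml2SessionData] = {}

# On redirect: remember the request, tagged with the UI-auth session, if any.
outstanding["req-1"] = Saml2SessionData(creation_time=0, ui_auth_session_id="sess-42")

# On response: pop the matching entry; None means unknown or expired.
session = outstanding.pop("req-1", None)
if session and session.ui_auth_session_id:
    print("complete UI auth for session", session.ui_auth_session_id)
else:
    print("complete a normal SSO login")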


@@ -15,8 +15,6 @@
import logging
from typing import Optional
from twisted.internet import defer
from synapse.api.errors import Codes, StoreError, SynapseError
from synapse.types import Requester
@@ -32,9 +30,9 @@ class SetPasswordHandler(BaseHandler):
super(SetPasswordHandler, self).__init__(hs)
self._auth_handler = hs.get_auth_handler()
self._device_handler = hs.get_device_handler()
self._password_policy_handler = hs.get_password_policy_handler()
@defer.inlineCallbacks
def set_password(
async def set_password(
self,
user_id: str,
new_password: str,
@@ -44,10 +42,11 @@ class SetPasswordHandler(BaseHandler):
if not self.hs.config.password_localdb_enabled:
raise SynapseError(403, "Password change disabled", errcode=Codes.FORBIDDEN)
password_hash = yield self._auth_handler.hash(new_password)
self._password_policy_handler.validate_password(new_password)
password_hash = await self._auth_handler.hash(new_password)
try:
yield self.store.user_set_password_hash(user_id, password_hash)
await self.store.user_set_password_hash(user_id, password_hash)
except StoreError as e:
if e.code == 404:
raise SynapseError(404, "Unknown user", Codes.NOT_FOUND)
@@ -59,12 +58,12 @@ class SetPasswordHandler(BaseHandler):
except_access_token_id = requester.access_token_id if requester else None
# First delete all of their other devices.
yield self._device_handler.delete_all_devices_for_user(
await self._device_handler.delete_all_devices_for_user(
user_id, except_device_id=except_device_id
)
# and now delete any access tokens which weren't associated with
# devices (or were associated with this device).
yield self._auth_handler.delete_access_tokens_for_user(
await self._auth_handler.delete_access_tokens_for_user(
user_id, except_token_id=except_access_token_id
)
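The set_password hunk above adds a password-policy check and runs it before the hash is computed, so invalid passwords are rejected without paying for the deliberately slow hash. A minimal sketch of that ordering; validate_password and hash_password here are stand-ins, not Synapse's handlers (Synapse's auth handler uses bcrypt):

import hashlib


def validate_password(password: str) -> None:
    # Stand-in policy: real deployments configure length/character rules.
    if len(password) < 8:
        raise ValueError("Password does not meet the policy")


def hash_password(password: str) -> str:
    # Placeholder digest; the real code hashes with bcrypt.
    return hashlib.sha256(password.encode()).hexdigest()


def set_password(new_password: str) -> str:
    validate_password(new_password)  # reject early, before hashing
    return hash_password(new_password)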


@@ -26,7 +26,7 @@ from prometheus_client import Counter
from synapse.api.constants import EventTypes, Membership
from synapse.api.filtering import FilterCollection
from synapse.events import EventBase
from synapse.logging.context import LoggingContext
from synapse.logging.context import current_context
from synapse.push.clientformat import format_push_rules_for_user
from synapse.storage.roommember import MemberSummary
from synapse.storage.state import StateFilter
@@ -301,7 +301,7 @@ class SyncHandler(object):
else:
sync_type = "incremental_sync"
context = LoggingContext.current_context()
context = current_context()
if context:
context.tag = sync_type
@@ -1143,9 +1143,18 @@ class SyncHandler(object):
user_id
)
# Always tell the user about their own devices. We check as the user
# ID is almost certainly already included (unless they're not in any
# rooms) and taking a copy of the set is relatively expensive.
if user_id not in users_who_share_room:
users_who_share_room = set(users_who_share_room)
users_who_share_room.add(user_id)
tracked_users = users_who_share_room
# Step 1a, check for changes in devices of users we share a room with
users_that_have_changed = await self.store.get_users_whose_devices_changed(
since_token.device_list_key, users_who_share_room
since_token.device_list_key, tracked_users
)
# Step 1b, check for newly joined rooms
@@ -1634,7 +1643,7 @@ class SyncHandler(object):
)
# We loop through all room ids, even if there are no new events, in case
# there are non room events taht we need to notify about.
# there are non room events that we need to notify about.
for room_id in sync_result_builder.joined_room_ids:
room_entry = room_to_events.get(room_id, None)
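The sync hunk above avoids copying the potentially large set of users we share rooms with: the copy is taken only when our own user ID actually needs adding. A small self-contained sketch of the same idiom:

def users_to_track(users_who_share_room: frozenset, user_id: str):
    if user_id not in users_who_share_room:
        users = set(users_who_share_room)  # pay for the copy only here
        users.add(user_id)
        return users
    return users_who_share_room  # common path: already included, no copy


print(users_to_track(frozenset({"@a:hs"}), "@a:hs"))  # unchanged, no copy
print(users_to_track(frozenset({"@a:hs"}), "@b:hs"))  # copy with @b:hs added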


@@ -15,6 +15,7 @@
import logging
from collections import namedtuple
from typing import List
from twisted.internet import defer
@@ -257,7 +258,13 @@ class TypingHandler(object):
"typing_key", self._latest_room_serial, rooms=[member.room_id]
)
async def get_all_typing_updates(self, last_id, current_id):
async def get_all_typing_updates(
self, last_id: int, current_id: int, limit: int
) -> List[dict]:
"""Get up to `limit` typing updates between the given tokens, earliest
updates first.
"""
if last_id == current_id:
return []
@@ -275,7 +282,7 @@ class TypingHandler(object):
typing = self._room_typing[room_id]
rows.append((serial, room_id, list(typing)))
rows.sort()
return rows
return rows[:limit]
def get_current_token(self):
return self._latest_room_serial
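The typing-handler change above adds a limit parameter to get_all_typing_updates: rows are sorted ascending by stream serial and then truncated, so callers page through the earliest updates first. A simplified, runnable sketch (the real method reads from the handler's per-room serial maps rather than a list):

def get_all_typing_updates(rows, last_id: int, current_id: int, limit: int):
    if last_id == current_id:
        return []
    rows = sorted(r for r in rows if last_id < r[0] <= current_id)
    return rows[:limit]


updates = [(3, "!b:hs", ["@x:hs"]), (1, "!a:hs", [])]
print(get_all_typing_updates(updates, last_id=0, current_id=3, limit=1))
# -> [(1, '!a:hs', [])]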


@@ -434,21 +434,27 @@ class MatrixFederationHttpClient(object):
logger.info("Failed to send request: %s", e)
raise_from(RequestSendFailed(e, can_retry=True), e)
logger.info(
"{%s} [%s] Got response headers: %d %s",
request.txn_id,
request.destination,
response.code,
response.phrase.decode("ascii", errors="replace"),
)
incoming_responses_counter.labels(method_bytes, response.code).inc()
set_tag(tags.HTTP_STATUS_CODE, response.code)
if 200 <= response.code < 300:
logger.debug(
"{%s} [%s] Got response headers: %d %s",
request.txn_id,
request.destination,
response.code,
response.phrase.decode("ascii", errors="replace"),
)
pass
else:
logger.info(
"{%s} [%s] Got response headers: %d %s",
request.txn_id,
request.destination,
response.code,
response.phrase.decode("ascii", errors="replace"),
)
# :'(
# Update transactions table?
d = treq.content(response)
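The federation-client hunk above splits the single INFO log for response headers by status code: 2xx responses drop to DEBUG while everything else stays at INFO, cutting log noise on busy servers without hiding failures. The same idea in isolation:

import logging

logger = logging.getLogger(__name__)


def log_response_headers(txn_id, destination, code, phrase):
    log = logger.debug if 200 <= code < 300 else logger.info
    log("{%s} [%s] Got response headers: %d %s", txn_id, destination, code, phrase)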


@@ -19,7 +19,7 @@ import threading
from prometheus_client.core import Counter, Histogram
from synapse.logging.context import LoggingContext
from synapse.logging.context import current_context
from synapse.metrics import LaterGauge
logger = logging.getLogger(__name__)
@@ -148,7 +148,7 @@ LaterGauge(
class RequestMetrics(object):
def start(self, time_sec, name, method):
self.start = time_sec
self.start_context = LoggingContext.current_context()
self.start_context = current_context()
self.name = name
self.method = method
@@ -163,7 +163,7 @@ class RequestMetrics(object):
with _in_flight_requests_lock:
_in_flight_requests.discard(self)
context = LoggingContext.current_context()
context = current_context()
tag = ""
if context:


@@ -193,6 +193,12 @@ class SynapseRequest(Request):
self.finish_time = time.time()
Request.connectionLost(self, reason)
if self.logcontext is None:
logger.info(
"Connection from %s lost before request headers were read", self.client
)
return
# we only get here if the connection to the client drops before we send
# the response.
#
@@ -236,13 +242,6 @@ class SynapseRequest(Request):
def _finished_processing(self):
"""Log the completion of this request and update the metrics
"""
if self.logcontext is None:
# this can happen if the connection closed before we read the
# headers (so render was never called). In that case we'll already
# have logged a warning, so just bail out.
return
usage = self.logcontext.get_resource_usage()
if self._processing_finished_time is None:
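The SynapseRequest hunks above move the missing-logcontext guard from _finished_processing into connectionLost, so a connection that dies before the request headers are read is logged at the point it is noticed rather than silently skipped later. A simplified sketch of the relocated guard (names trimmed down from the real class):

import logging

logger = logging.getLogger(__name__)


def connection_lost(logcontext, client):
    if logcontext is None:
        # The connection dropped before we read the request headers.
        logger.info("Connection from %s lost before request headers were read", client)
        return
    # ...normal teardown and metrics bookkeeping continue here...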


@@ -42,7 +42,7 @@ from synapse.logging._terse_json import (
TerseJSONToConsoleLogObserver,
TerseJSONToTCPLogObserver,
)
from synapse.logging.context import LoggingContext
from synapse.logging.context import current_context
def stdlib_log_level_to_twisted(level: str) -> LogLevel:
@@ -86,7 +86,7 @@ class LogContextObserver(object):
].startswith("Timing out client"):
return
context = LoggingContext.current_context()
context = current_context()
# Copy the context information to the log event.
if context is not None:


@@ -27,6 +27,7 @@ import inspect
import logging
import threading
import types
import warnings
from typing import TYPE_CHECKING, Optional, Tuple, TypeVar, Union
from typing_extensions import Literal
@@ -51,7 +52,7 @@ try:
is_thread_resource_usage_supported = True
def get_thread_resource_usage():
def get_thread_resource_usage() -> "Optional[resource._RUsage]":
return resource.getrusage(RUSAGE_THREAD)
@@ -60,7 +61,7 @@ except Exception:
# won't track resource usage.
is_thread_resource_usage_supported = False
def get_thread_resource_usage():
def get_thread_resource_usage() -> "Optional[resource._RUsage]":
return None
@@ -175,7 +176,54 @@ class ContextResourceUsage(object):
return res
LoggingContextOrSentinel = Union["LoggingContext", "LoggingContext.Sentinel"]
LoggingContextOrSentinel = Union["LoggingContext", "_Sentinel"]
class _Sentinel(object):
"""Sentinel to represent the root context"""
__slots__ = ["previous_context", "finished", "request", "scope", "tag"]
def __init__(self) -> None:
# Minimal set for compatibility with LoggingContext
self.previous_context = None
self.finished = False
self.request = None
self.scope = None
self.tag = None
def __str__(self):
return "sentinel"
def copy_to(self, record):
pass
def copy_to_twisted_log_entry(self, record):
record["request"] = None
record["scope"] = None
def start(self, rusage: "Optional[resource._RUsage]"):
pass
def stop(self, rusage: "Optional[resource._RUsage]"):
pass
def add_database_transaction(self, duration_sec):
pass
def add_database_scheduled(self, sched_sec):
pass
def record_event_fetch(self, event_count):
pass
def __nonzero__(self):
return False
__bool__ = __nonzero__ # python3
SENTINEL_CONTEXT = _Sentinel()
class LoggingContext(object):
@@ -199,76 +247,33 @@ class LoggingContext(object):
"_resource_usage",
"usage_start",
"main_thread",
"alive",
"finished",
"request",
"tag",
"scope",
]
thread_local = threading.local()
class Sentinel(object):
"""Sentinel to represent the root context"""
__slots__ = ["previous_context", "alive", "request", "scope", "tag"]
def __init__(self) -> None:
# Minimal set for compatibility with LoggingContext
self.previous_context = None
self.alive = None
self.request = None
self.scope = None
self.tag = None
def __str__(self):
return "sentinel"
def copy_to(self, record):
pass
def copy_to_twisted_log_entry(self, record):
record["request"] = None
record["scope"] = None
def start(self):
pass
def stop(self):
pass
def add_database_transaction(self, duration_sec):
pass
def add_database_scheduled(self, sched_sec):
pass
def record_event_fetch(self, event_count):
pass
def __nonzero__(self):
return False
__bool__ = __nonzero__ # python3
sentinel = Sentinel()
def __init__(self, name=None, parent_context=None, request=None) -> None:
self.previous_context = LoggingContext.current_context()
self.previous_context = current_context()
self.name = name
# track the resources used by this context so far
self._resource_usage = ContextResourceUsage()
# If alive has the thread resource usage when the logcontext last
# became active.
self.usage_start = None
# The thread resource usage when the logcontext became active. None
# if the context is not currently active.
self.usage_start = None # type: Optional[resource._RUsage]
self.main_thread = get_thread_id()
self.request = None
self.tag = ""
self.alive = True
self.scope = None # type: Optional[_LogContextScope]
# keep track of whether we have hit the __exit__ block for this context
# (suggesting that the thing that created the context thinks it should
# be finished, and that re-activating it would suggest an error).
self.finished = False
self.parent_context = parent_context
if self.parent_context is not None:
@@ -287,40 +292,51 @@ class LoggingContext(object):
def current_context(cls) -> LoggingContextOrSentinel:
"""Get the current logging context from thread local storage
This exists for backwards compatibility. ``current_context()`` should be
called directly.
Returns:
LoggingContext: the current logging context
"""
return getattr(cls.thread_local, "current_context", cls.sentinel)
warnings.warn(
"synapse.logging.context.LoggingContext.current_context() is deprecated "
"in favor of synapse.logging.context.current_context().",
DeprecationWarning,
stacklevel=2,
)
return current_context()
@classmethod
def set_current_context(
cls, context: LoggingContextOrSentinel
) -> LoggingContextOrSentinel:
"""Set the current logging context in thread local storage
This exists for backwards compatibility. ``set_current_context()`` should be
called directly.
Args:
context(LoggingContext): The context to activate.
Returns:
The context that was previously active
"""
current = cls.current_context()
if current is not context:
current.stop()
cls.thread_local.current_context = context
context.start()
return current
warnings.warn(
"synapse.logging.context.LoggingContext.set_current_context() is deprecated "
"in favor of synapse.logging.context.set_current_context().",
DeprecationWarning,
stacklevel=2,
)
return set_current_context(context)
def __enter__(self) -> "LoggingContext":
"""Enters this logging context into thread local storage"""
old_context = self.set_current_context(self)
old_context = set_current_context(self)
if self.previous_context != old_context:
logger.warning(
"Expected previous context %r, found %r",
self.previous_context,
old_context,
)
self.alive = True
return self
def __exit__(self, type, value, traceback) -> None:
@@ -329,24 +345,19 @@ class LoggingContext(object):
Returns:
None to avoid suppressing any exceptions that were thrown.
"""
current = self.set_current_context(self.previous_context)
current = set_current_context(self.previous_context)
if current is not self:
if current is self.sentinel:
if current is SENTINEL_CONTEXT:
logger.warning("Expected logging context %s was lost", self)
else:
logger.warning(
"Expected logging context %s but found %s", self, current
)
self.alive = False
# if we have a parent, pass our CPU usage stats on
if self.parent_context is not None and hasattr(
self.parent_context, "_resource_usage"
):
self.parent_context._resource_usage += self._resource_usage
# reset them in case we get entered again
self._resource_usage.reset()
# the fact that we are here suggests that the caller thinks that everything
# is done and dusted for this logcontext, and further activity will not get
# recorded against the correct metrics.
self.finished = True
def copy_to(self, record) -> None:
"""Copy logging fields from this context to a log record or
@@ -366,35 +377,63 @@ class LoggingContext(object):
record["request"] = self.request
record["scope"] = self.scope
def start(self) -> None:
def start(self, rusage: "Optional[resource._RUsage]") -> None:
"""
Record that this logcontext is currently running.
This should not be called directly: use set_current_context
Args:
rusage: the resources used by the current thread, at the point of
switching to this logcontext. May be None if this platform doesn't
support getrusage.
"""
if get_thread_id() != self.main_thread:
logger.warning("Started logcontext %s on different thread", self)
return
if self.finished:
logger.warning("Re-starting finished log context %s", self)
# If we haven't already started record the thread resource usage so
# far
if not self.usage_start:
self.usage_start = get_thread_resource_usage()
if self.usage_start:
logger.warning("Re-starting already-active log context %s", self)
else:
self.usage_start = rusage
def stop(self) -> None:
if get_thread_id() != self.main_thread:
logger.warning("Stopped logcontext %s on different thread", self)
return
def stop(self, rusage: "Optional[resource._RUsage]") -> None:
"""
Record that this logcontext is no longer running.
# When we stop, let's record the cpu used since we started
if not self.usage_start:
# Log a warning on platforms that support thread usage tracking
if is_thread_resource_usage_supported:
This should not be called directly: use set_current_context
Args:
rusage: the resources used by the current thread, at the point of
switching away from this logcontext. May be None if this platform
doesn't support getrusage.
"""
try:
if get_thread_id() != self.main_thread:
logger.warning("Stopped logcontext %s on different thread", self)
return
if not rusage:
return
# Record the cpu used since we started
if not self.usage_start:
logger.warning(
"Called stop on logcontext %s without calling start", self
"Called stop on logcontext %s without recording a start rusage",
self,
)
return
return
utime_delta, stime_delta = self._get_cputime()
self._resource_usage.ru_utime += utime_delta
self._resource_usage.ru_stime += stime_delta
self.usage_start = None
utime_delta, stime_delta = self._get_cputime(rusage)
self.add_cputime(utime_delta, stime_delta)
finally:
self.usage_start = None
def get_resource_usage(self) -> ContextResourceUsage:
"""Get resources used by this logcontext so far.
@@ -409,25 +448,25 @@ class LoggingContext(object):
# If we are on the correct thread and we're currently running then we
# can include resource usage so far.
is_main_thread = get_thread_id() == self.main_thread
if self.alive and self.usage_start and is_main_thread:
utime_delta, stime_delta = self._get_cputime()
if self.usage_start and is_main_thread:
rusage = get_thread_resource_usage()
assert rusage is not None
utime_delta, stime_delta = self._get_cputime(rusage)
res.ru_utime += utime_delta
res.ru_stime += stime_delta
return res
def _get_cputime(self) -> Tuple[float, float]:
"""Get the cpu usage time so far
def _get_cputime(self, current: "resource._RUsage") -> Tuple[float, float]:
"""Get the cpu usage time between start() and the given rusage
Args:
rusage: the current resource usage
Returns: Tuple[float, float]: seconds in user mode, seconds in system mode
"""
assert self.usage_start is not None
current = get_thread_resource_usage()
# Indicate to mypy that we know that self.usage_start is not None.
assert self.usage_start is not None
utime_delta = current.ru_utime - self.usage_start.ru_utime
stime_delta = current.ru_stime - self.usage_start.ru_stime
@@ -450,30 +489,52 @@ class LoggingContext(object):
return utime_delta, stime_delta
def add_cputime(self, utime_delta: float, stime_delta: float) -> None:
"""Update the CPU time usage of this context (and any parents, recursively).
Args:
utime_delta: additional user time, in seconds, spent in this context.
stime_delta: additional system time, in seconds, spent in this context.
"""
self._resource_usage.ru_utime += utime_delta
self._resource_usage.ru_stime += stime_delta
if self.parent_context:
self.parent_context.add_cputime(utime_delta, stime_delta)
def add_database_transaction(self, duration_sec: float) -> None:
"""Record the use of a database transaction and the length of time it took.
Args:
duration_sec: The number of seconds the database transaction took.
"""
if duration_sec < 0:
raise ValueError("DB txn time can only be non-negative")
self._resource_usage.db_txn_count += 1
self._resource_usage.db_txn_duration_sec += duration_sec
if self.parent_context:
self.parent_context.add_database_transaction(duration_sec)
def add_database_scheduled(self, sched_sec: float) -> None:
"""Record a use of the database pool
Args:
sched_sec (float): number of seconds it took us to get a
connection
sched_sec: number of seconds it took us to get a connection
"""
if sched_sec < 0:
raise ValueError("DB scheduling time can only be non-negative")
self._resource_usage.db_sched_duration_sec += sched_sec
if self.parent_context:
self.parent_context.add_database_scheduled(sched_sec)
def record_event_fetch(self, event_count: int) -> None:
"""Record a number of events being fetched from the db
Args:
event_count (int): number of events being fetched
event_count: number of events being fetched
"""
self._resource_usage.evt_db_fetch_count += event_count
if self.parent_context:
self.parent_context.record_event_fetch(event_count)
class LoggingContextFilter(logging.Filter):
@@ -492,7 +553,7 @@ class LoggingContextFilter(logging.Filter):
Returns:
True to include the record in the log output.
"""
context = LoggingContext.current_context()
context = current_context()
for key, value in self.defaults.items():
setattr(record, key, value)
@@ -512,27 +573,24 @@ class PreserveLoggingContext(object):
__slots__ = ["current_context", "new_context", "has_parent"]
def __init__(self, new_context: Optional[LoggingContextOrSentinel] = None) -> None:
if new_context is None:
self.new_context = LoggingContext.sentinel # type: LoggingContextOrSentinel
else:
self.new_context = new_context
def __init__(
self, new_context: LoggingContextOrSentinel = SENTINEL_CONTEXT
) -> None:
self.new_context = new_context
def __enter__(self) -> None:
"""Captures the current logging context"""
self.current_context = LoggingContext.set_current_context(self.new_context)
self.current_context = set_current_context(self.new_context)
if self.current_context:
self.has_parent = self.current_context.previous_context is not None
if not self.current_context.alive:
logger.debug("Entering dead context: %s", self.current_context)
def __exit__(self, type, value, traceback) -> None:
"""Restores the current logging context"""
context = LoggingContext.set_current_context(self.current_context)
context = set_current_context(self.current_context)
if context != self.new_context:
if context is LoggingContext.sentinel:
if not context:
logger.warning("Expected logging context %s was lost", self.new_context)
else:
logger.warning(
@@ -541,9 +599,37 @@ class PreserveLoggingContext(object):
context,
)
if self.current_context is not LoggingContext.sentinel:
if not self.current_context.alive:
logger.debug("Restoring dead context: %s", self.current_context)
_thread_local = threading.local()
_thread_local.current_context = SENTINEL_CONTEXT
def current_context() -> LoggingContextOrSentinel:
"""Get the current logging context from thread local storage"""
return getattr(_thread_local, "current_context", SENTINEL_CONTEXT)
def set_current_context(context: LoggingContextOrSentinel) -> LoggingContextOrSentinel:
"""Set the current logging context in thread local storage
Args:
context(LoggingContext): The context to activate.
Returns:
The context that was previously active
"""
# everything blows up if we allow current_context to be set to None, so sanity-check
# that now.
if context is None:
raise TypeError("'context' argument may not be None")
current = current_context()
if current is not context:
rusage = get_thread_resource_usage()
current.stop(rusage)
_thread_local.current_context = context
context.start(rusage)
return current
def nested_logging_context(
@@ -572,7 +658,7 @@ def nested_logging_context(
if parent_context is not None:
context = parent_context # type: LoggingContextOrSentinel
else:
context = LoggingContext.current_context()
context = current_context()
return LoggingContext(
parent_context=context, request=str(context.request) + "-" + suffix
)
@@ -604,7 +690,7 @@ def run_in_background(f, *args, **kwargs):
CRITICAL error about an unhandled error will be logged without much
indication about where it came from.
"""
current = LoggingContext.current_context()
current = current_context()
try:
res = f(*args, **kwargs)
except: # noqa: E722
@@ -625,7 +711,7 @@ def run_in_background(f, *args, **kwargs):
# The function may have reset the context before returning, so
# we need to restore it now.
ctx = LoggingContext.set_current_context(current)
ctx = set_current_context(current)
# The original context will be restored when the deferred
# completes, but there is nothing waiting for it, so it will
@@ -674,7 +760,7 @@ def make_deferred_yieldable(deferred):
# ok, we can't be sure that a yield won't block, so let's reset the
# logcontext, and add a callback to the deferred to restore it.
prev_context = LoggingContext.set_current_context(LoggingContext.sentinel)
prev_context = set_current_context(SENTINEL_CONTEXT)
deferred.addBoth(_set_context_cb, prev_context)
return deferred
@@ -684,7 +770,7 @@ ResultT = TypeVar("ResultT")
def _set_context_cb(result: ResultT, context: LoggingContext) -> ResultT:
"""A callback function which just sets the logging context"""
LoggingContext.set_current_context(context)
set_current_context(context)
return result
@@ -752,7 +838,7 @@ def defer_to_threadpool(reactor, threadpool, f, *args, **kwargs):
Deferred: A Deferred which fires a callback with the result of `f`, or an
errback if `f` throws an exception.
"""
logcontext = LoggingContext.current_context()
logcontext = current_context()
def g():
with LoggingContext(parent_context=logcontext):
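The logging-context refactor above moves current_context/set_current_context out to module level, replaces the LoggingContext.sentinel inner class with a falsy module-level SENTINEL_CONTEXT, and passes rusage snapshots through start/stop. A hedged usage sketch, assuming this version of Synapse is importable:

from synapse.logging.context import LoggingContext, current_context

with LoggingContext(request="GET-123"):
    # Inside the block, the thread-local current context is ours.
    assert current_context().request == "GET-123"

# Outside, we are back on the sentinel context, which is falsy.
assert not current_context()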


@@ -171,7 +171,7 @@ import logging
import re
import types
from functools import wraps
from typing import Dict
from typing import TYPE_CHECKING, Dict
from canonicaljson import json
@@ -179,6 +179,9 @@ from twisted.internet import defer
from synapse.config import ConfigError
if TYPE_CHECKING:
from synapse.server import HomeServer
# Helper class
@@ -297,14 +300,11 @@ def _noop_context_manager(*args, **kwargs):
# Setup
def init_tracer(config):
def init_tracer(hs: "HomeServer"):
"""Set the whitelists and initialise the JaegerClient tracer
Args:
config (HomeserverConfig): The config used by the homeserver
"""
global opentracing
if not config.opentracer_enabled:
if not hs.config.opentracer_enabled:
# We don't have a tracer
opentracing = None
return
@@ -315,18 +315,15 @@ def init_tracer(config):
"installed."
)
# Include the worker name
name = config.worker_name if config.worker_name else "master"
# Pull out the jaeger config if it was given. Otherwise set it to something sensible.
# See https://github.com/jaegertracing/jaeger-client-python/blob/master/jaeger_client/config.py
set_homeserver_whitelist(config.opentracer_whitelist)
set_homeserver_whitelist(hs.config.opentracer_whitelist)
JaegerConfig(
config=config.jaeger_config,
service_name="{} {}".format(config.server_name, name),
scope_manager=LogContextScopeManager(config),
config=hs.config.jaeger_config,
service_name="{} {}".format(hs.config.server_name, hs.get_instance_name()),
scope_manager=LogContextScopeManager(hs.config),
).initialize_tracer()


@@ -19,7 +19,7 @@ from opentracing import Scope, ScopeManager
import twisted
from synapse.logging.context import LoggingContext, nested_logging_context
from synapse.logging.context import current_context, nested_logging_context
logger = logging.getLogger(__name__)
@@ -49,11 +49,8 @@ class LogContextScopeManager(ScopeManager):
(Scope) : the Scope that is active, or None if not
available.
"""
ctx = LoggingContext.current_context()
if ctx is LoggingContext.sentinel:
return None
else:
return ctx.scope
ctx = current_context()
return ctx.scope
def activate(self, span, finish_on_close):
"""
@@ -70,9 +67,9 @@ class LogContextScopeManager(ScopeManager):
"""
enter_logcontext = False
ctx = LoggingContext.current_context()
ctx = current_context()
if ctx is LoggingContext.sentinel:
if not ctx:
# We don't want this scope to affect the logging context.
logger.error("Tried to activate scope outside of loggingcontext")
return Scope(None, span)


@@ -86,7 +86,7 @@ class ModuleApi(object):
Deferred[str|None]: Canonical (case-corrected) user_id, or None
if the user is not registered.
"""
return self._auth_handler.check_user_exists(user_id)
return defer.ensureDeferred(self._auth_handler.check_user_exists(user_id))
@defer.inlineCallbacks
def register(self, localpart, displayname=None, emails=[]):
@@ -196,7 +196,9 @@ class ModuleApi(object):
yield self._hs.get_device_handler().delete_device(user_id, device_id)
else:
# no associated device. Just delete the access token.
yield self._auth_handler.delete_access_token(access_token)
yield defer.ensureDeferred(
self._auth_handler.delete_access_token(access_token)
)
def run_db_interaction(self, desc, func, *args, **kwargs):
"""Run a function with a database connection
@@ -220,6 +222,8 @@ class ModuleApi(object):
want their access token sent to `client_redirect_url`, or redirect them to that
URL with a token directly if the URL matches with one of the whitelisted clients.
This is deprecated in favor of complete_sso_login_async.
Args:
registered_user_id: The MXID that has been registered as a previous step of
this SSO login.
@@ -227,6 +231,24 @@ class ModuleApi(object):
client_redirect_url: The URL to which to offer to redirect the user (or to
redirect them directly if whitelisted).
"""
self._auth_handler.complete_sso_login(
self._auth_handler._complete_sso_login(
registered_user_id, request, client_redirect_url,
)
async def complete_sso_login_async(
self, registered_user_id: str, request: SynapseRequest, client_redirect_url: str
):
"""Complete a SSO login by redirecting the user to a page to confirm whether they
want their access token sent to `client_redirect_url`, or redirect them to that
URL with a token directly if the URL matches with one of the whitelisted clients.
Args:
registered_user_id: The MXID that has been registered as a previous step of
this SSO login.
request: The request to respond to.
client_redirect_url: The URL to which to offer to redirect the user (or to
redirect them directly if whitelisted).
"""
await self._auth_handler.complete_sso_login(
registered_user_id, request, client_redirect_url,
)
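The module_api hunks above bridge the sync/async boundary: module-facing methods keep returning Deferreds for backwards compatibility while the underlying handlers become coroutines, by wrapping each coroutine in defer.ensureDeferred. A minimal sketch of the pattern; check_user_exists here is an illustrative stand-in for the real handler method:

from twisted.internet import defer


async def check_user_exists(user_id):
    return user_id  # stand-in for the now-async auth handler method


def module_api_check_user_exists(user_id):
    # Module-facing wrapper: still returns a Deferred, as modules expect.
    return defer.ensureDeferred(check_user_exists(user_id))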

Some files were not shown because too many files have changed in this diff.