
Compare commits


142 Commits

Author SHA1 Message Date
Erik Johnston ac2aad2123 Newsfile 2020-08-04 09:56:43 +01:00
Erik Johnston 14ddce892f Remove consensus logic from inbound federation.
The logic is "designed" to "handle" the case where the servers view of
the state at an event doesn't match what the remote server set as the
auth events. With some hand waving the server would try and come to some
sort of conclusion of which side was correct, involving state
resolution, but this could come up with interesting results.

The entire process is unspecced and buggy, so let's just remove it.
2020-08-04 09:50:03 +01:00
Andrew Morgan 481f76c7aa Remove signature check on v1 identity server lookups (#8001)
We've [decided](https://github.com/matrix-org/synapse/issues/5253#issuecomment-665976308) to remove the signature check for v1 lookups.

The signature check has already been removed in v2 lookups, and v1 lookups are currently deprecated. As mentioned in the above linked issue, this verification was causing problems for the vector.im and matrix.org IS deployments, and this change is the simplest justifiable solution.

Implementations are encouraged to use the v2 lookup API as it has [increased privacy benefits](https://github.com/matrix-org/matrix-doc/pull/2134).
2020-08-03 21:56:43 +01:00
Andrew Morgan 5d92a1428c Prevent join->join membership transitions changing member count (#7977)
`StatsHandler` handles updates to the `current_state_delta_stream`, and updates room stats such as the number of state events, joined users, etc.

However, it counts every new join membership as a new user entering a room (and that user being in another room), whereas it's possible for a user's membership status to go from join -> join, for instance when they change their per-room profile information.

This PR adds a check for join->join membership transitions, and bails out early, as none of the further checks are necessary at that point.

Due to this bug, membership stats in many rooms have ended up being wildly larger than their true values. I am not sure if we also want to include a migration step which recalculates these statistics (possibly using the `_populate_stats_process_rooms` bg update).

Bug introduced in the initial implementation https://github.com/matrix-org/synapse/pull/4338.
2020-08-03 21:54:24 +01:00
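The early-bail check described in #7977 can be sketched as follows (a simplified illustration with hypothetical field names, not `StatsHandler` itself): a join -> join transition, such as a per-room profile update, must leave the joined-member count untouched.

```python
# Simplified sketch of the join -> join early-bail (hypothetical names,
# not Synapse's actual StatsHandler code).
def apply_membership_delta(stats, prev_membership, new_membership):
    if prev_membership == "join" and new_membership == "join":
        return  # profile change only: nobody entered or left the room
    if prev_membership == "join":
        stats["joined_members"] -= 1
    if new_membership == "join":
        stats["joined_members"] += 1

stats = {"joined_members": 0}
apply_membership_delta(stats, None, "join")    # user joins
apply_membership_delta(stats, "join", "join")  # display name change
print(stats["joined_members"])  # still 1
```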
Patrick Cloke 6812509807 Implement handling of HTTP HEAD requests. (#7999) 2020-08-03 08:45:42 -04:00
Patrick Cloke 2a89ce8cd4 Convert the crypto module to async/await. (#8003) 2020-08-03 08:29:01 -04:00
Michael Albert b6c6fb7950 Allow guests to operate in encrypted rooms (#7314)
Signed-off-by: Michael Albert <michael.albert@awesome-technologies.de>
2020-08-03 12:13:49 +01:00
Patrick Cloke 3b415e23a5 Convert replication code to async/await. (#7987) 2020-08-03 07:12:55 -04:00
Patrick Cloke db5970ac6d Convert ACME code to async/await. (#7989) 2020-08-03 07:09:33 -04:00
Patrick Cloke d1008fe949 Fix some comments and types in service notices (#7996) 2020-07-31 16:22:06 -04:00
Erik Johnston 394be6a0e6 Merge pull request #8008 from matrix-org/erikj/add_rate_limiting_to_joins
Add ratelimiting on joins
2020-07-31 18:21:48 +01:00
Erik Johnston faba873d4b Merge branch 'develop' of github.com:matrix-org/synapse into erikj/add_rate_limiting_to_joins 2020-07-31 15:07:01 +01:00
Erik Johnston 9b3ab57acd Newsfile 2020-07-31 15:06:56 +01:00
Erik Johnston 18de00adb4 Add ratelimiting on joins 2020-07-31 15:06:56 +01:00
Travis Ralston e2a4ba6f9b Add docs for undoing room shutdowns (#7998)
These docs were tested successfully in production by a customer, so it's probably fine.
2020-07-31 04:41:44 +01:00
Stuart Mumford 6d4b790021 Update workers docs (#7990) 2020-07-30 17:30:11 +01:00
Richard van der Hoff 0a7fb24716 Fix invite rejection when we have no forward-extremities (#7980)
Thanks to some slightly overzealous cleanup in
`delete_old_current_state_events`, it's possible to end up with no
`event_forward_extremities` in a room where we have outstanding local
invites. The user would then get a "no create event in auth events" error when
trying to reject the invite.

We can hack around it by using the dangling invite as the prev event.
2020-07-30 16:58:57 +01:00
Erik Johnston 606805bf06 Fix typo in docs/workers.md (#7992) 2020-07-30 16:28:36 +01:00
Olivier Wilkinson (reivilibre) 3aa36b782c Merge branch 'master' into develop 2020-07-30 15:18:36 +01:00
Patrick Cloke c978f6c451 Convert federation client to async/await. (#7975) 2020-07-30 08:01:33 -04:00
Patrick Cloke 4cce8ef74e Convert appservice to async. (#7973) 2020-07-30 07:27:39 -04:00
Patrick Cloke b3a97d6dac Convert some of the data store to async. (#7976) 2020-07-30 07:20:41 -04:00
Olivier Wilkinson (reivilibre) 320ef98852 Fix formatting of changelog and upgrade notes
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2020-07-30 11:59:11 +01:00
Patrick Cloke 3950ae51ef Ensure that remove_pusher is always async (#7981) 2020-07-30 06:56:55 -04:00
Olivier Wilkinson (reivilibre) fc0ef72d9c Add deprecation warnings
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2020-07-30 11:55:04 +01:00
Olivier Wilkinson (reivilibre) a9631b7b4b 1.18.0 2020-07-30 10:56:54 +01:00
Erik Johnston 2c1b9d6763 Update worker docs with recent enhancements (#7969) 2020-07-29 23:22:13 +01:00
Patrick Cloke a53e0160a2 Ensure the msg property of HttpResponseException is a string. (#7979) 2020-07-29 13:56:06 -04:00
Patrick Cloke d90087cffa Remove from the event_relations table when purging historical events. (#7978) 2020-07-29 13:55:01 -04:00
Patrick Cloke 3a00bd1378 Add additional logging for SAML sessions. (#7971) 2020-07-29 13:54:44 -04:00
Brendan Abolivier f23c77389d Add MSC reference to changelog for #7736 2020-07-29 18:31:03 +01:00
Brendan Abolivier 8dff4a1242 Re-implement unread counts (#7736) 2020-07-29 18:26:55 +01:00
Aaron Raimist 2184f61fae Various improvements to the docs (#7899) 2020-07-29 10:35:44 -04:00
Patrick Cloke 3345c166a4 Convert storage layer to async/await. (#7963) 2020-07-28 16:09:53 -04:00
Dirk Klimpel e866e3b896 Add an option to disable purge in delete room admin API (#7964)
Add option `purge` to `POST /_synapse/admin/v1/rooms/<room_id>/delete`
Fixes: #3761

Signed-off-by: Dirk Klimpel dirk@klimpel.org
2020-07-28 20:08:23 +01:00
Andrew Morgan 8a25332d94 Move some log lines from default logger to sql/transaction loggers (#7952)
Idea from matrix-org/synapse-dinsic#49
2020-07-28 18:52:13 +01:00
Patrick Cloke 2c1e1b153d Use the JSON module from the std library instead of simplejson. (#7936) 2020-07-28 10:28:59 -04:00
Richard van der Hoff 8078dec3be Fix exit code for check_line_terminators.sh (#7970)
If there are *no* files with CRLF line endings, then the xargs exits with a
non-zero exit code (as expected), but then, since that is the last thing to
happen in the script, the script as a whole exits non-zero, making the whole
thing fail.

Using `if/then/fi` instead of `&& (...)` means that the script exits with a
zero exit code.
2020-07-28 08:52:25 -04:00
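The pitfall described in #7970 can be demonstrated with a minimal sketch (not the actual `check_line_terminators.sh`): `grep -l` exits non-zero when it finds no CRLF files, which is the *good* case, and with `grep ... && (...)` that non-zero status would become the script's exit code. Wrapping the check in `if/then/fi` makes the clean case exit zero.

```shell
# Minimal sketch of the exit-code pitfall (illustrative, not the real script).
tmp=$(mktemp -d)
printf 'unix line\n' > "$tmp/ok.txt"
cr=$(printf '\r')
# In an if-condition, grep's "no matches" exit status is handled, not fatal.
if grep -rl "$cr\$" "$tmp"; then
    result="crlf found"
else
    result="clean"
fi
rm -r "$tmp"
echo "$result"
```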
lugino-emeritus 3857de2194 Option to allow server admins to join complex rooms (#7902)
Fixes #7901.

Signed-off-by: Niklas Tittjung <nik_t.01@web.de>
2020-07-28 13:41:44 +01:00
Richard van der Hoff 349119a340 Merge tag 'v1.18.0rc2' into develop
Synapse 1.18.0rc2 (2020-07-28)
==============================

Bugfixes
--------

- Fix an `AssertionError` exception introduced in v1.18.0rc1. ([\#7876](https://github.com/matrix-org/synapse/issues/7876))
- Fix experimental support for moving typing off master when worker is restarted, which is broken in v1.18.0rc1. ([\#7967](https://github.com/matrix-org/synapse/issues/7967))

Internal Changes
----------------

- Further optimise queueing of inbound replication commands. ([\#7876](https://github.com/matrix-org/synapse/issues/7876))
2020-07-28 11:31:31 +01:00
Richard van der Hoff 7000a215e6 1.18.0rc2 2020-07-28 11:22:32 +01:00
Erik Johnston a8f7ed28c6 Typing worker needs to handle stream update requests (#7967)
IIRC this doesn't break tests because it's only hit on reconnection, or something.

Basically, when a process needs to fetch missing updates for the `typing` stream it needs to query the writer instance via HTTP (as we don't write typing notifications to the DB), the problem was that the endpoint (`streams`) was only registered on master and specifically not on the typing writer worker.
2020-07-28 11:04:53 +01:00
Erik Johnston aaf9ce72a0 Fix typo in metrics docs (#7966) 2020-07-28 10:03:18 +01:00
Andrew Morgan c4ce0da6fe Add script for finding files with unix line terminators (#7965)
This PR adds a script to check for unix line terminators in the repo. It will be used to address https://github.com/matrix-org/synapse/issues/7943 by adding the check to CI.

I've changed the original script slightly as proposed in https://github.com/matrix-org/pipelines/pull/81#discussion_r460580664
2020-07-28 01:26:50 +01:00
Patrick Cloke 68626ff8e9 Convert the remaining media repo code to async / await. (#7947) 2020-07-27 14:40:11 -04:00
Richard van der Hoff f57b99af22 Handle replication commands synchronously where possible (#7876)
Most of the stuff we do for replication commands can be done synchronously. There's no point spinning up background processes if we're not going to need them.
2020-07-27 18:54:43 +01:00
Patrick Cloke 8553f46498 Convert synapse.events to async/await. (#7949) 2020-07-27 13:40:22 -04:00
Patrick Cloke 5f65e62681 Convert groups and visibility code to async / await. (#7951) 2020-07-27 12:32:08 -04:00
Patrick Cloke 8144bc26a7 Convert push to async/await. (#7948) 2020-07-27 12:21:34 -04:00
Richard van der Hoff 7c2e2c2077 update changelog 2020-07-27 17:08:41 +01:00
Richard van der Hoff f88c48f3b8 1.18.0rc1 2020-07-27 16:57:40 +01:00
Erik Johnston 1ef9efc1e0 Fix error reporting when using opentracing.trace (#7961) 2020-07-27 16:20:24 +01:00
Erik Johnston 84d099ae11 Fix typing replication not being handled on master (#7959)
Handling of incoming typing stream updates from replication was not
hooked up on master, affecting setups where typing was handled on a
different worker.

This is really only a problem if the master process is also handling
sync requests, which is unlikely for those that are at the stage of
moving typing off.

The other observable effect is that if a worker restarts or a
replication connection drops then the typing worker will issue a
`POSITION typing`, triggering the master process to try and stream *all*
typing updates from position 0.

Fixes #7907
2020-07-27 14:10:53 +01:00
Patrick Cloke d8a9cd8d3e Remove hacky error handling for inlineDeferreds. (#7950) 2020-07-27 08:35:56 -04:00
Andrew Morgan c4268e3da6 Convert tests/rest/admin/test_room.py to unix file endings (#7953)
Converts tests/rest/admin/test_room.py to have unix file endings after they were accidentally changed in #7613.

Keeping the same changelog as #7613 as it hasn't gone out in a release yet.
2020-07-27 13:22:52 +01:00
Patrick Cloke 3fc8fdd150 Support oEmbed for media previews. (#7920)
Fixes previews of Twitter URLs by using their oEmbed endpoint to grab content.
2020-07-27 07:50:44 -04:00
Patrick Cloke b975fa2e99 Convert state resolution to async/await (#7942) 2020-07-24 10:59:51 -04:00
Patrick Cloke e739b20588 Fix up types and comments that refer to Deferreds. (#7945) 2020-07-24 10:53:25 -04:00
Patrick Cloke 53f7b49f5b Do not convert async functions to Deferreds in the interactive_auth_handler (#7944) 2020-07-24 09:43:49 -04:00
Patrick Cloke 5ea29d7f85 Convert more of the media code to async/await (#7873) 2020-07-24 09:39:02 -04:00
Patrick Cloke 6a080ea184 Return an empty body for OPTIONS requests. (#7886) 2020-07-24 07:08:07 -04:00
Richard van der Hoff 1ec688bf21 Downgrade warning on client disconnect to INFO (#7928)
Clients disconnecting before we finish processing the request happens from time
to time. We don't need to yell about it
2020-07-24 09:55:47 +01:00
Patrick Cloke fefe9943ef Convert presence handler helpers to async/await. (#7939) 2020-07-23 16:47:36 -04:00
Patrick Cloke 83434df381 Update the auth providers to be async. (#7935) 2020-07-23 15:45:39 -04:00
Richard van der Hoff 7078866969 Put a cache on /state_ids (#7931)
If we send out an event which refers to `prev_events` which other servers in
the federation are missing, then (after a round or two of backfill attempts),
they will end up asking us for `/state_ids` at a particular point in the DAG.

As per https://github.com/matrix-org/synapse/issues/7893, this is quite
expensive, and we tend to see lots of very similar requests around the same
time.

We can therefore handle this much more efficiently by using a cache, which (a)
ensures that if we see the same request from multiple servers (or even the same
server, multiple times), then they share the result, and (b) any other servers
that miss the initial excitement can also benefit from the work.

[It's interesting to note that `/state` has a cache for exactly this
reason. `/state` is now essentially unused and replaced with `/state_ids`, but
evidently when we replaced it we forgot to add a cache to the new endpoint.]
2020-07-23 18:38:19 +01:00
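The deduplicating cache described in #7931 can be sketched roughly as follows (an illustration of the idea, not Synapse's actual `/state_ids` cache): concurrent requests for the same key share one in-flight computation, and slightly later callers reuse the finished result.

```python
import asyncio

class ResponseCache:
    """Share one expensive computation between concurrent requests for the
    same key. A sketch of the idea only; no expiry/eviction is shown."""

    def __init__(self):
        self._cache = {}

    async def wrap(self, key, compute):
        if key not in self._cache:
            # First caller kicks off the work; everyone else awaits it.
            self._cache[key] = asyncio.ensure_future(compute())
        return await self._cache[key]

calls = 0

async def expensive_state_ids():
    global calls
    calls += 1
    await asyncio.sleep(0.01)  # stand-in for expensive DB work
    return ["$event_a", "$event_b"]

async def main():
    cache = ResponseCache()
    # Three servers ask for /state_ids at the same point in the DAG at once:
    return await asyncio.gather(
        *(cache.wrap("room1|$event_x", expensive_state_ids) for _ in range(3))
    )

results = asyncio.run(main())
print(calls)  # the expensive work ran once
```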
Richard van der Hoff 4876af06dd Abort federation requests if the client disconnects early (#7930)
For inbound federation requests, if a given remote server makes too many
requests at once, we start stacking them up rather than processing them
immediately.

However, that means that there is a fair chance that the requesting server will
disconnect before we start processing the request. In that case, if it was a
read-only request (ie, a GET request), there is absolutely no point in
building a response (and some requests are quite expensive to handle).

Even in the case of a POST request, one of two things will happen:

 * Most likely, the requesting server will retry the request and we'll get the
   information anyway.

 * Even if it doesn't, the requesting server has to assume that we didn't get
   the memo, and act accordingly.

In short, we're better off aborting the request at this point rather than
ploughing on with what might be a quite expensive request.
2020-07-23 16:52:33 +01:00
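The early-abort in #7930 amounts to a check like the following sketch (the request object here is hypothetical; Synapse's real code hooks into Twisted's connection-lost notification): before doing expensive work on a request that sat in the queue, bail out if the remote side has already gone away.

```python
# Hedged sketch of aborting queued requests from disconnected clients.
class RequestDisconnected(Exception):
    pass

def process_queued_request(request, handler):
    if request.disconnected:
        # For a read-only GET, nobody will ever see the response anyway.
        raise RequestDisconnected("client went away while queued")
    return handler(request)

class FakeRequest:
    def __init__(self, disconnected):
        self.disconnected = disconnected

live = process_queued_request(FakeRequest(False), lambda r: "response")
print(live)  # "response"
```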
Michael Kaye ff22672fd6 Reorder database docs to promote postgresql. (#7933) 2020-07-23 07:48:49 -04:00
Patrick Cloke 68cd935826 Convert the federation agent and related code to async/await. (#7874) 2020-07-23 07:05:57 -04:00
Patrick Cloke 13d77464c9 Follow-up to admin API to re-activate accounts (#7908) 2020-07-22 12:33:19 -04:00
Patrick Cloke cc9bb3dc3f Convert the message handler to async/await. (#7884) 2020-07-22 12:29:15 -04:00
Brendan Abolivier a4cf94a3c2 Merge pull request #7934 from matrix-org/babolivier/acme_eol
Update the dates for ACME v1 EOL
2020-07-22 16:45:09 +01:00
Brendan Abolivier 55f2617f8c Update the dates for ACME v1 EOL
As per https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430
2020-07-22 16:18:40 +01:00
Richard van der Hoff 923c995023 Skip serializing /sync response if client has disconnected (#7927)
... it's a load of work which may be entirely redundant.
2020-07-22 13:44:16 +01:00
Richard van der Hoff b74919c72e Add debugging to sync response generation (#7929) 2020-07-22 13:43:10 +01:00
Richard van der Hoff 931b026844 Remove an unused prometheus metric (#7878) 2020-07-22 00:40:55 +01:00
Richard van der Hoff 05060e0223 Track command processing as a background process (#7879)
I'm going to be doing more stuff synchronously, and I don't want to lose the
CPU metrics down the sofa.
2020-07-22 00:40:42 +01:00
Richard van der Hoff 15997618e2 Clean up PreserveLoggingContext (#7877)
This had some dead code and some just plain wrong docstrings.
2020-07-22 00:40:27 +01:00
Richard van der Hoff 2ccd48e921 fix an incorrect comment 2020-07-22 00:24:56 +01:00
Patrick Cloke de119063f2 Convert room list handler to async/await. (#7912) 2020-07-21 07:51:48 -04:00
Jason Robinson 759481af6d Element CSS and logo in email templates (#7919)
Use Element CSS and logo in notification emails when app name is Element.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2020-07-21 11:58:01 +01:00
Andrew Morgan b7ddece2a6 Lint the contrib/ directory in CI and linting scripts, add synctl to linting script (#7914)
Run `isort`, `flake8` and `black` over the `contrib/` directory and `synctl` script. The latter was already being done in CI, but now the linting script does it too.

Fixes https://github.com/matrix-org/synapse/issues/7910
2020-07-20 21:43:49 +01:00
Karthikeyan Singaravelan 5662e2b0f3 Remove unused code from synapse.logging.utils. (#7897) 2020-07-20 15:20:53 -04:00
Adrian 64d2280299 Fix a typo in the sample config. (#7890) 2020-07-20 13:42:52 -04:00
Karthikeyan Singaravelan a7b06a81f0 Fix deprecation warning: import ABC from collections.abc (#7892) 2020-07-20 13:33:04 -04:00
Andrew Morgan 5ecf98f59e Change sample config's postgres user to synapse_user (#7889)
The [postgres setup docs](https://github.com/matrix-org/synapse/blob/develop/docs/postgres.md#set-up-database) recommend setting up your database with user `synapse_user`.

However, uncommenting the postgres defaults in the sample config leaves you with user `synapse`.

This PR switches the sample config to recommend `synapse_user`. Took me a second to figure this out, so I assume this will be beneficial to others.
2020-07-20 18:29:25 +01:00
Karthikeyan Singaravelan 438020732e Fix deprecation warning due to invalid escape sequences (#7895)
* Fix deprecation warnings due to invalid escape sequences.

* Add changelog

Signed-off-by: Karthikeyan Singaravelan <tir.karthi@gmail.com>
2020-07-20 16:45:51 +01:00
Gary Kim f2af3e4fc5 Remove Ubuntu Eoan that is now EOL (#7888) 2020-07-17 15:38:41 -04:00
Patrick Cloke d1d5fa66e4 Fix the trace function for async functions. (#7872)
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-07-17 13:32:01 -04:00
Michael Kaye 1ec2961b3b Add help for creating a user via docker (#7885) 2020-07-17 13:25:48 -04:00
Christopher May-Townsend a5545cf86d Switch to Debian:Slim from Alpine for the docker image (#7839)
As mentioned in #7397, switching to a debian base should help with multi-arch work to save time on compiling. This is unashamedly based on #6373, but without the extra functionality. Switch python version back to generic 3.7 to always pull the latest. Essentially, keeping this as small as possible. The image is bigger though unfortunately.
2020-07-17 17:40:53 +01:00
Erik Johnston 2d2acc1cf2 Stop using 'device_max_stream_id' (#7882)
It serves no purpose, and updating it every time we write to the device
inbox stream means all such transactions will conflict, causing lots of
transaction failures and retries.
2020-07-17 17:03:27 +01:00
Erik Johnston a3ad045286 Fix TypeError in synapse.notifier (#7880)
Fixes #7774
2020-07-17 14:11:05 +01:00
Patrick Cloke 852930add7 Add a default limit (of 100) to get/sync operations. (#7858) 2020-07-17 07:59:23 -04:00
Erik Johnston 4642fd66df Change "unknown room ver" logging to warning. (#7881)
It's somewhat expected for us to have unknown room versions in the
database due to room version experiments.
2020-07-17 12:10:43 +01:00
Patrick Cloke 6b3ac3b8cd Convert device handler to async/await (#7871) 2020-07-17 07:09:25 -04:00
Patrick Cloke 00e57b755c Convert synapse.app to async/await. (#7868) 2020-07-17 07:08:56 -04:00
Patrick Cloke 6fca1b3506 Convert _base, profile, and _receipts handlers to async/await (#7860) 2020-07-17 07:08:30 -04:00
Michael Albert fff483ea96 Add admin endpoint to get members in a room. (#7842) 2020-07-16 16:43:23 -04:00
Patrick Cloke f460da6031 Consistently use db_to_json to convert from database values to JSON objects. (#7849) 2020-07-16 11:32:19 -04:00
Luke Faraone b0f031f92a Combine nginx federation server blocks (#7823)
I'm pretty sure there's no technical reason these have to be distinct server blocks, so collapse into one and go with the more terse location block.

Signed-off-by: Luke W Faraone <luke@faraone.cc>
2020-07-16 16:01:45 +01:00
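For illustration, a combined block in the spirit of #7823 might look like the following sketch (the hostname and upstream port are assumptions; adjust to your deployment):

```nginx
# One server block for both client (443) and federation (8448) traffic,
# with a single terse location instead of two duplicated server blocks.
server {
    listen 443 ssl;
    listen 8448 ssl;
    server_name matrix.example.com;

    location ~ ^(/_matrix|/_synapse/client) {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```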
Richard van der Hoff e5300063ed Optimise queueing of inbound replication commands (#7861)
When we get behind on replication, we tend to stack up background processes
behind a linearizer. Bg processes are heavy (particularly with respect to
prometheus metrics) and linearizers aren't terribly efficient once the queue
gets long either.

A better approach is to maintain a queue of requests to be processed, and
nominate a single process to work its way through the queue.

Fixes: #7444
2020-07-16 15:49:37 +01:00
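The queue-with-a-single-drainer pattern described in #7861 can be sketched like this (a rough illustration, not Synapse's actual code): commands are appended to a plain list, and whichever caller finds the queue idle becomes the sole processor, so we never spin up one background process per command.

```python
import asyncio

class CommandQueue:
    """Sketch: many submitters, one drainer working through the queue."""

    def __init__(self, handle):
        self._handle = handle
        self._queue = []
        self._processing = False

    async def submit(self, cmd):
        self._queue.append(cmd)
        if self._processing:
            return  # the existing drainer will pick this up
        self._processing = True
        try:
            while self._queue:
                await self._handle(self._queue.pop(0))
        finally:
            self._processing = False

handled = []

async def handle(cmd):
    handled.append(cmd)
    await asyncio.sleep(0)  # yield, as real command handling would

async def main():
    q = CommandQueue(handle)
    # Five concurrent submissions: the first becomes the drainer.
    await asyncio.gather(*(q.submit(i) for i in range(5)))

asyncio.run(main())
print(handled)  # [0, 1, 2, 3, 4]
```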
Richard van der Hoff 346476df21 Reject attempts to join empty rooms over federation (#7859)
We shouldn't allow others to make_join through us if we've left the room;
reject such attempts with a 404.

Fixes #7835. Fixes #6958.
2020-07-16 15:17:31 +01:00
Erik Johnston f2e38ca867 Allow moving typing off master (#7869) 2020-07-16 15:12:54 +01:00
Erik Johnston 649a7ead5c Add ability to run multiple pusher instances (#7855)
This reuses the same scheme as federation sender sharding
2020-07-16 14:06:28 +01:00
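Sharding work across instances, as in #7855, boils down to a deterministic mapping from a key to an instance. A hedged sketch (the hash function and names here are assumptions, not Synapse's exact scheme):

```python
import binascii

# Deterministically assign each user's pushers to one of N named instances.
def pick_pusher_instance(instances, user_id):
    index = binascii.crc32(user_id.encode("utf-8")) % len(instances)
    return instances[index]

instances = ["pusher1", "pusher2", "pusher3"]
chosen = pick_pusher_instance(instances, "@alice:example.com")
```

The important property is stability: every process computing this mapping agrees on which instance owns a given user, with no coordination needed.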
Richard van der Hoff a827838706 Merge pull request #7866 from matrix-org/rav/fix_guest_user_id
Fix guest user registration with lots of client readers
2020-07-16 13:54:45 +01:00
Richard van der Hoff a973bcb8a4 Add some tiny type annotations (#7870)
I found these made pycharm have more of a clue as to what was going on in other places.
2020-07-16 13:52:29 +01:00
Richard van der Hoff 16368c8a34 changelog 2020-07-16 13:01:11 +01:00
Richard van der Hoff c445bc0cad Use a postgres sequence to generate guest user IDs 2020-07-16 13:00:25 +01:00
Richard van der Hoff 3c36ae17a5 Use SequenceGenerator for state group ID allocation 2020-07-16 11:25:08 +01:00
Richard van der Hoff 42509b8fb6 Use PostgresSequenceGenerator from MultiWriterIdGenerator
Partly just to show it works, but also to remove a bit of code duplication.
2020-07-16 11:25:08 +01:00
Richard van der Hoff 90b0cdda42 Add some helper classes for generating ID sequences 2020-07-16 11:25:08 +01:00
Olivier Wilkinson (reivilibre) 12528dc42f Remove obsolete comment.
It was correct at the time our friend Jorik wrote it (checking
git blame), but the world has moved on and it is no longer a
generator.

Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2020-07-16 11:12:48 +01:00
Patrick Cloke 35450519de Ensure that calls to json.dumps are compatible with the standard library json. (#7836) 2020-07-15 13:40:54 -04:00
Richard van der Hoff a57df9b827 Avoid brand new rooms in delete_old_current_state_events (#7854)
When considering rooms to clean up in `delete_old_current_state_events`, skip
rooms which we are creating, which otherwise look a bit like rooms we have
left.

Fixes #7834.
2020-07-15 18:33:03 +01:00
Erik Johnston 97e1159ac1 Merge branch 'erikj/faster_typing' of github.com:matrix-org/synapse into develop 2020-07-15 16:54:30 +01:00
Patrick Cloke 8c7d0f163d Allow accounts to be re-activated from the admin APIs. (#7847) 2020-07-15 11:00:21 -04:00
Erik Johnston 9006e125af Fix tests 2020-07-15 15:47:27 +01:00
Erik Johnston 62352c3a1b Fix typo 2020-07-15 15:46:16 +01:00
Erik Johnston 3032b54ac9 Newsfile 2020-07-15 15:45:19 +01:00
Erik Johnston 3a3a618460 Use get_users_in_room rather than state handler in typing for speed 2020-07-15 15:42:07 +01:00
Erik Johnston f13061d515 Fix client reader sharding tests (#7853)
* Fix client reader sharding tests

* Newsfile

* Fix typing

* Update changelog.d/7853.misc

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Move mocking of http_client to tests

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
2020-07-15 15:27:35 +01:00
Patrick Cloke b11450dedc Convert E2E key and room key handlers to async/await. (#7851) 2020-07-15 08:48:58 -04:00
Patrick Cloke 111e70d75c Return the proper 403 Forbidden error during errors with JWT logins. (#7844) 2020-07-15 07:10:21 -04:00
Richard van der Hoff 1d9dca02f9 remove retry_on_integrity_error wrapper for persist_events (#7848)
As far as I can tell from the sentry logs, the only time this has actually done
anything in the last two years is when we had two master workers running at
once, and even then, it made a bit of a mess of it (see
https://github.com/matrix-org/synapse/issues/7845#issuecomment-658238739).

Generally I feel like this code is doing more harm than good.
2020-07-15 10:34:53 +01:00
Patrick Cloke 8d0097bef1 Fix bug in per-room message retention policies. (#7850) 2020-07-14 15:51:13 -04:00
Brendan Abolivier 85223106f3 Allow email subjects to be customised through Synapse's configuration (#7846) 2020-07-14 19:10:42 +01:00
Dirk Klimpel 491f0dab1b Add delete room admin endpoint (#7613)
The Delete Room admin API allows server admins to remove rooms from server
and block these rooms.
`DELETE /_synapse/admin/v1/rooms/<room_id>`
It is a combination and improvement of "[Shutdown room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/shutdown_room.md)" and "[Purge room](https://github.com/matrix-org/synapse/blob/develop/docs/admin_api/purge_room.md)" API.

Fixes: #6425 

It also fixes a bug in [synapse/storage/data_stores/main/room.py](synapse/storage/data_stores/main/room.py) in `get_room_with_stats`:
it should return `None` if the room is unknown, but instead raised an `IndexError`.
https://github.com/matrix-org/synapse/blob/901b1fa561e3cc661d78aa96d59802cf2078cb0d/synapse/storage/data_stores/main/room.py#L99-L105

Related to:
- #5575
- https://github.com/Awesome-Technologies/synapse-admin/issues/17

Signed-off-by: Dirk Klimpel dirk@klimpel.org
2020-07-14 12:36:23 +01:00
Patrick Cloke 77d2c05410 Add the option to validate the iss and aud claims for JWT logins. (#7827) 2020-07-14 07:16:43 -04:00
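The claim validation added in #7827 can be sketched as below (names are illustrative, and signature verification — which any real JWT login must also do — is omitted):

```python
import time

# Hedged sketch of iss/aud claim validation for a decoded JWT payload.
def validate_jwt_claims(claims, expected_iss, expected_aud):
    if claims.get("iss") != expected_iss:
        raise ValueError("invalid issuer")
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]  # aud may be a list
    if expected_aud not in audiences:
        raise ValueError("invalid audience")
    exp = claims.get("exp")
    if exp is not None and exp < time.time():
        raise ValueError("token expired")

claims = {
    "iss": "https://issuer.example.com",
    "aud": "synapse",
    "exp": time.time() + 60,
}
validate_jwt_claims(claims, "https://issuer.example.com", "synapse")  # passes
```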
Patrick Cloke 4db1509516 Improve the type hints of synapse.api.errors. (#7820) 2020-07-14 07:03:58 -04:00
Luke Faraone 93c8b077ed Clearly state built-in ACME no longer works (#7824)
I'm tempted to remove this section entirely, but it's helpful for admins who are trying to figure out why their Synapse is crashing on start with ACME errors.

Signed-off-by: Luke W Faraone <luke@faraone.cc>
2020-07-14 10:49:10 +01:00
Erik Johnston f886a69916 Correctly pass app_name to all email templates. (#7829)
We didn't do this for e.g. registration emails.
2020-07-14 10:00:53 +01:00
Patrick Cloke 457096e6df Support handling registration requests across multiple client readers. (#7830) 2020-07-13 13:31:46 -04:00
Brendan Abolivier 504c8f3483 Fix handling of "off" in encryption_enabled_by_default_for_room_type (#7822)
Fixes https://github.com/matrix-org/synapse/issues/7821, introduced in https://github.com/matrix-org/synapse/pull/7639

Turns out PyYAML translates `off` into a `False` boolean if it's
unquoted (see https://stackoverflow.com/questions/36463531/pyyaml-automatically-converting-certain-keys-to-boolean-values),
which seems to be a liberal interpretation of this bit of the YAML spec: https://yaml.org/spec/1.1/current.html#id864510

An alternative fix would be to implement the solution mentioned in the
SO post linked above, but I'm aware it might break existing setups
(which might use these values in the configuration file), so it's
probably better just to add an extra check for this one. We should be
aware that this is a thing for the next time we do this, though.

I didn't find any other occurrence of this bug elsewhere in the
codebase.
2020-07-13 17:14:42 +01:00
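The extra check described in #7822 amounts to mapping the boolean back to its intended string. A minimal sketch (the function name is hypothetical): because YAML 1.1 parses unquoted `off` as boolean `False`, config code reading `encryption_enabled_by_default_for_room_type` must translate `False` back to `"off"` before validating.

```python
# Sketch of normalising PyYAML's bool-ified `off` back to the string value.
def normalize_encryption_default(value):
    if value is False:
        return "off"  # unquoted `off` arrives from PyYAML as a boolean
    if value in ("all", "invite", "off"):
        return value
    raise ValueError(
        "invalid value for encryption_enabled_by_default_for_room_type: %r"
        % (value,)
    )

print(normalize_encryption_default(False))    # off
print(normalize_encryption_default("invite")) # invite
```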
Richard van der Hoff fa361c8f65 Update grafana dashboard 2020-07-13 14:48:21 +01:00
Richard van der Hoff 59e64b6d5b Merge branch 'master' into develop 2020-07-13 11:42:52 +01:00
Patrick Cloke 66a4af8d96 Do not use canonicaljson to magically handle decoding bytes from JSON. (#7802) 2020-07-10 14:30:08 -04:00
Patrick Cloke d9e47af617 Add types to the server code and remove unused parameter (#7813) 2020-07-10 14:28:42 -04:00
Sorunome 1bca21e1da Include room states on invite events sent to ASes (#6455) 2020-07-10 18:44:56 +01:00
Richard van der Hoff 6cef918a4b Merge branch 'release-v1.17.0' into develop 2020-07-10 18:38:50 +01:00
Erik Johnston f299441cc6 Add ability to shard the federation sender (#7798) 2020-07-10 18:26:36 +01:00
Erik Johnston f1245dc3c0 Fix resync remote devices on receive PDU in worker mode. (#7815)
The replication client requires that arguments are given as keyword
arguments, which was not done in this case. We also pull out the logic
so that we can catch and handle any exceptions raised, rather than
leaving them unhandled.
2020-07-10 18:23:17 +01:00
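The two fixes in #7815 can be sketched together (client and method names here are illustrative, not Synapse's exact API): call the replication endpoint with keyword arguments only, and catch exceptions so a failed resync cannot take down the PDU-handling path.

```python
import asyncio
import logging

logger = logging.getLogger(__name__)

# Hedged sketch: keyword-only replication call, with exceptions contained.
async def try_resync_device_list(client, user_id):
    try:
        # keyword argument, as the replication client requires
        await client.user_device_resync(user_id=user_id)
    except Exception:
        logger.exception("Failed to resync device list for %s", user_id)

class FlakyReplicationClient:
    async def user_device_resync(self, *, user_id):
        raise RuntimeError("replication request failed")

# The exception is logged and swallowed rather than propagating.
asyncio.run(try_resync_device_list(FlakyReplicationClient(), "@bob:example.com"))
print("pdu handling continues")
```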
Erik Johnston e29c44340b Fix recursion error when fetching auth chain over federation (#7817)
When fetching the state of a room over federation we receive the event
IDs of the state and auth chain. We then fetch those events that we
don't already have.

However, we used a function that recursively fetched any missing auth
events for the fetched events, which can lead to a lot of recursion if
the server is missing most of the auth chain. This work is entirely
pointless, because we would have already queued up the missing events in
the auth chain to be fetched.

Let's just disable the recursion, since it only gets called from one
place anyway.
2020-07-10 18:15:35 +01:00
297 changed files with 10190 additions and 6040 deletions
Synapse 1.18.0 (2020-07-30)
===========================

Deprecation Warnings
--------------------

### Docker Tags with `-py3` Suffix

From 10th August 2020, we will no longer publish Docker images with the `-py3` tag suffix. The images tagged with the `-py3` suffix have been identical to the non-suffixed tags since release 0.99.0, and the suffix is obsolete.

On 10th August, we will remove the `latest-py3` tag. Existing per-release tags (such as `v1.18.0-py3`) will not be removed, but no new `-py3` tags will be added.

Scripts relying on the `-py3` suffix will need to be updated.

### TCP-based Replication

When setting up worker processes, we now recommend the use of a Redis server for replication. The old direct TCP connection method is deprecated and will be removed in a future release. See [docs/workers.md](https://github.com/matrix-org/synapse/blob/release-v1.18.0/docs/workers.md) for more details.

Improved Documentation
----------------------

- Update worker docs with latest enhancements. ([\#7969](https://github.com/matrix-org/synapse/issues/7969))
Synapse 1.18.0rc2 (2020-07-28)
==============================

Bugfixes
--------

- Fix an `AssertionError` exception introduced in v1.18.0rc1. ([\#7876](https://github.com/matrix-org/synapse/issues/7876))
- Fix experimental support for moving typing off master when worker is restarted, which is broken in v1.18.0rc1. ([\#7967](https://github.com/matrix-org/synapse/issues/7967))

Internal Changes
----------------

- Further optimise queueing of inbound replication commands. ([\#7876](https://github.com/matrix-org/synapse/issues/7876))
Synapse 1.18.0rc1 (2020-07-27)
==============================

Features
--------

- Include room states on invite events that are sent to application services. Contributed by @Sorunome. ([\#6455](https://github.com/matrix-org/synapse/issues/6455))
- Add delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). Contributed by @dklimpel. ([\#7613](https://github.com/matrix-org/synapse/issues/7613), [\#7953](https://github.com/matrix-org/synapse/issues/7953))
- Add experimental support for running multiple federation sender processes. ([\#7798](https://github.com/matrix-org/synapse/issues/7798))
- Add the option to validate the `iss` and `aud` claims for JWT logins. ([\#7827](https://github.com/matrix-org/synapse/issues/7827))
- Add support for handling registration requests across multiple client reader workers. ([\#7830](https://github.com/matrix-org/synapse/issues/7830))
- Add an admin API to list the users in a room. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#7842](https://github.com/matrix-org/synapse/issues/7842))
- Allow email subjects to be customised through Synapse's configuration. ([\#7846](https://github.com/matrix-org/synapse/issues/7846))
- Add the ability to re-activate an account from the admin API. ([\#7847](https://github.com/matrix-org/synapse/issues/7847), [\#7908](https://github.com/matrix-org/synapse/issues/7908))
- Add experimental support for running multiple pusher workers. ([\#7855](https://github.com/matrix-org/synapse/issues/7855))
- Add experimental support for moving typing off master. ([\#7869](https://github.com/matrix-org/synapse/issues/7869), [\#7959](https://github.com/matrix-org/synapse/issues/7959))
- Report CPU metrics to prometheus for time spent processing replication commands. ([\#7879](https://github.com/matrix-org/synapse/issues/7879))
- Support oEmbed for media previews. ([\#7920](https://github.com/matrix-org/synapse/issues/7920))
- Abort federation requests where the client disconnects before the ratelimiter expires. ([\#7930](https://github.com/matrix-org/synapse/issues/7930))
- Cache responses to `/_matrix/federation/v1/state_ids` to reduce duplicated work. ([\#7931](https://github.com/matrix-org/synapse/issues/7931))
Bugfixes
--------
- Fix detection of out of sync remote device lists when receiving events from remote users. ([\#7815](https://github.com/matrix-org/synapse/issues/7815))
- Fix bug where Synapse fails to process an incoming event over federation if the server is missing too much of the event's auth chain. ([\#7817](https://github.com/matrix-org/synapse/issues/7817))
- Fix a bug causing Synapse to misinterpret the value `off` for `encryption_enabled_by_default_for_room_type` in its configuration file(s) if that value isn't surrounded by quotes. This bug was introduced in v1.16.0. ([\#7822](https://github.com/matrix-org/synapse/issues/7822))
- Fix bug where we did not always pass in `app_name` or `server_name` to email templates, including e.g. for registration emails. ([\#7829](https://github.com/matrix-org/synapse/issues/7829))
- Errors which occur while using the non-standard JWT login now return the proper error: `403 Forbidden` with an error code of `M_FORBIDDEN`. ([\#7844](https://github.com/matrix-org/synapse/issues/7844))
- Fix "AttributeError: 'str' object has no attribute 'get'" error message when applying per-room message retention policies. The bug was introduced in Synapse 1.7.0. ([\#7850](https://github.com/matrix-org/synapse/issues/7850))
- Fix a bug introduced in Synapse 1.10.0 which could cause a "no create event in auth events" error during room creation. ([\#7854](https://github.com/matrix-org/synapse/issues/7854))
- Fix a bug which allowed empty rooms to be rejoined over federation. ([\#7859](https://github.com/matrix-org/synapse/issues/7859))
- Fix 'Unable to find a suitable guest user ID' error when using multiple client_reader workers. ([\#7866](https://github.com/matrix-org/synapse/issues/7866))
- Fix a long standing bug where the tracing of async functions with opentracing was broken. ([\#7872](https://github.com/matrix-org/synapse/issues/7872), [\#7961](https://github.com/matrix-org/synapse/issues/7961))
- Fix "TypeError in `synapse.notifier`" exceptions. ([\#7880](https://github.com/matrix-org/synapse/issues/7880))
- Fix deprecation warning due to invalid escape sequences. ([\#7895](https://github.com/matrix-org/synapse/issues/7895))
Updates to the Docker image
---------------------------
- Base docker image on Debian Buster rather than Alpine Linux. Contributed by @maquis196. ([\#7839](https://github.com/matrix-org/synapse/issues/7839))
Improved Documentation
----------------------
- Provide instructions on using `register_new_matrix_user` via docker. ([\#7885](https://github.com/matrix-org/synapse/issues/7885))
- Change the sample config postgres user section to use `synapse_user` instead of `synapse` to align with the documentation. ([\#7889](https://github.com/matrix-org/synapse/issues/7889))
- Reorder database paragraphs to promote postgres over sqlite. ([\#7933](https://github.com/matrix-org/synapse/issues/7933))
- Update the dates of ACME v1's end of life in [`ACME.md`](https://github.com/matrix-org/synapse/blob/master/docs/ACME.md). ([\#7934](https://github.com/matrix-org/synapse/issues/7934))
Deprecations and Removals
-------------------------
- Remove unused `synapse_replication_tcp_resource_invalidate_cache` prometheus metric. ([\#7878](https://github.com/matrix-org/synapse/issues/7878))
- Remove Ubuntu Eoan from the list of `.deb` packages that we build as it is now end-of-life. Contributed by @gary-kim. ([\#7888](https://github.com/matrix-org/synapse/issues/7888))
Internal Changes
----------------
- Switch parts of the codebase from `simplejson` to the standard library `json`. ([\#7802](https://github.com/matrix-org/synapse/issues/7802))
- Add type hints to the http server code and remove an unused parameter. ([\#7813](https://github.com/matrix-org/synapse/issues/7813))
- Add type hints to synapse.api.errors module. ([\#7820](https://github.com/matrix-org/synapse/issues/7820))
- Ensure that calls to `json.dumps` are compatible with the standard library json. ([\#7836](https://github.com/matrix-org/synapse/issues/7836))
- Remove redundant `retry_on_integrity_error` wrapper for event persistence code. ([\#7848](https://github.com/matrix-org/synapse/issues/7848))
- Consistently use `db_to_json` to convert from database values to JSON objects. ([\#7849](https://github.com/matrix-org/synapse/issues/7849))
- Convert various parts of the codebase to async/await. ([\#7851](https://github.com/matrix-org/synapse/issues/7851), [\#7860](https://github.com/matrix-org/synapse/issues/7860), [\#7868](https://github.com/matrix-org/synapse/issues/7868), [\#7871](https://github.com/matrix-org/synapse/issues/7871), [\#7873](https://github.com/matrix-org/synapse/issues/7873), [\#7874](https://github.com/matrix-org/synapse/issues/7874), [\#7884](https://github.com/matrix-org/synapse/issues/7884), [\#7912](https://github.com/matrix-org/synapse/issues/7912), [\#7935](https://github.com/matrix-org/synapse/issues/7935), [\#7939](https://github.com/matrix-org/synapse/issues/7939), [\#7942](https://github.com/matrix-org/synapse/issues/7942), [\#7944](https://github.com/matrix-org/synapse/issues/7944))
- Add support for handling registration requests across multiple client reader workers. ([\#7853](https://github.com/matrix-org/synapse/issues/7853))
- Small performance improvement in typing processing. ([\#7856](https://github.com/matrix-org/synapse/issues/7856))
- The default value of `filter_timeline_limit` was changed from -1 (no limit) to 100. ([\#7858](https://github.com/matrix-org/synapse/issues/7858))
- Optimise queueing of inbound replication commands. ([\#7861](https://github.com/matrix-org/synapse/issues/7861))
- Add some type annotations to `HomeServer` and `BaseHandler`. ([\#7870](https://github.com/matrix-org/synapse/issues/7870))
- Clean up `PreserveLoggingContext`. ([\#7877](https://github.com/matrix-org/synapse/issues/7877))
- Change "unknown room version" logging from 'error' to 'warning'. ([\#7881](https://github.com/matrix-org/synapse/issues/7881))
- Stop using `device_max_stream_id` table and just use `device_inbox.stream_id`. ([\#7882](https://github.com/matrix-org/synapse/issues/7882))
- Return an empty body for OPTIONS requests. ([\#7886](https://github.com/matrix-org/synapse/issues/7886))
- Fix typo in generated config file. Contributed by @ThiefMaster. ([\#7890](https://github.com/matrix-org/synapse/issues/7890))
- Import ABC from `collections.abc` for Python 3.10 compatibility. ([\#7892](https://github.com/matrix-org/synapse/issues/7892))
- Remove unused functions `time_function`, `trace_function`, `get_previous_frames`
and `get_previous_frame` from `synapse.logging.utils` module. ([\#7897](https://github.com/matrix-org/synapse/issues/7897))
- Lint the `contrib/` directory in CI and linting scripts, add `synctl` to the linting script for consistency with CI. ([\#7914](https://github.com/matrix-org/synapse/issues/7914))
- Use Element CSS and logo in notification emails when app name is Element. ([\#7919](https://github.com/matrix-org/synapse/issues/7919))
- Optimisation to /sync handling: skip serializing the response if the client has already disconnected. ([\#7927](https://github.com/matrix-org/synapse/issues/7927))
- When a client disconnects, don't log it as 'Error processing request'. ([\#7928](https://github.com/matrix-org/synapse/issues/7928))
- Add debugging to `/sync` response generation (disabled by default). ([\#7929](https://github.com/matrix-org/synapse/issues/7929))
- Update comments that refer to Deferreds for async functions. ([\#7945](https://github.com/matrix-org/synapse/issues/7945))
- Simplify error handling in federation handler. ([\#7950](https://github.com/matrix-org/synapse/issues/7950))
Synapse 1.17.0 (2020-07-13)
===========================
@@ -1,10 +1,12 @@
- [Choosing your server name](#choosing-your-server-name)
- [Picking a database engine](#picking-a-database-engine)
- [Installing Synapse](#installing-synapse)
- [Installing from source](#installing-from-source)
- [Platform-Specific Instructions](#platform-specific-instructions)
- [Prebuilt packages](#prebuilt-packages)
- [Setting up Synapse](#setting-up-synapse)
- [TLS certificates](#tls-certificates)
- [Client Well-Known URI](#client-well-known-uri)
- [Email](#email)
- [Registering a user](#registering-a-user)
- [Setting up a TURN server](#setting-up-a-turn-server)
@@ -27,6 +29,25 @@ that your email address is probably `user@example.com` rather than
`user@email.example.com`) - but doing so may require more advanced setup: see
[Setting up Federation](docs/federate.md).
# Picking a database engine
Synapse offers two database engines:
* [PostgreSQL](https://www.postgresql.org)
* [SQLite](https://sqlite.org/)
Almost all installations should opt to use PostgreSQL. Advantages include:
* significant performance improvements due to the superior threading and
caching model, smarter query optimiser
* allowing the DB to be run on separate hardware
For information on how to install and use PostgreSQL, please see
[docs/postgres.md](docs/postgres.md)
By default Synapse uses SQLite and in doing so trades performance for convenience.
SQLite is only recommended in Synapse for testing purposes or for servers with
light workloads.
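For illustration, pointing Synapse at PostgreSQL is done through the `database` section of `homeserver.yaml`. The following is a sketch with placeholder credentials; see [docs/postgres.md](docs/postgres.md) for the authoritative settings:

```yaml
database:
  # Use the PostgreSQL driver instead of the sqlite3 default.
  name: psycopg2
  args:
    user: synapse_user
    password: changeme        # placeholder - substitute your own secret
    database: synapse
    host: localhost
    # Connection-pool bounds (tune for your workload).
    cp_min: 5
    cp_max: 10
```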
# Installing Synapse

## Installing from source
@@ -234,9 +255,9 @@ for a number of platforms.
There is an official synapse image available at
https://hub.docker.com/r/matrixdotorg/synapse which can be used with
the docker-compose file available at [contrib/docker](contrib/docker). Further
information on this including configuration options is available in the README
on hub.docker.com.

Alternatively, Andreas Peters (previously Silvio Fricke) has contributed a
Dockerfile to automate a synapse server in a single Docker image, at
@@ -244,7 +265,8 @@ https://hub.docker.com/r/avhost/docker-matrix/tags/
Slavi Pantaleev has created an Ansible playbook,
which installs the official Docker image of Matrix Synapse
along with many other Matrix-related services (Postgres database, Element, coturn,
ma1sd, SSL support, etc.).

For more details, see
https://github.com/spantaleev/matrix-docker-ansible-deploy
@@ -277,22 +299,27 @@ The fingerprint of the repository signing key (as shown by `gpg
/usr/share/keyrings/matrix-org-archive-keyring.gpg`) is
`AAF9AE843A7584B5A3E4CD2BCF45A512DE2DA058`.
#### Downstream Debian packages

We do not recommend using the packages from the default Debian `buster`
repository at this time, as they are old and suffer from known security
vulnerabilities. You can install the latest version of Synapse from
[our repository](#matrixorg-packages) or from `buster-backports`. Please
see the [Debian documentation](https://backports.debian.org/Instructions/)
for information on how to use backports.

If you are using Debian `sid` or testing, Synapse is available in the default
repositories and it should be possible to install it simply with:

```
sudo apt install matrix-synapse
```

#### Downstream Ubuntu packages

We do not recommend using the packages in the default Ubuntu repository
at this time, as they are old and suffer from known security vulnerabilities.
The latest version of Synapse can be installed from [our repository](#matrixorg-packages).

### Fedora
@@ -405,13 +432,11 @@ so, you will need to edit `homeserver.yaml`, as follows:
```
* You will also need to uncomment the `tls_certificate_path` and
  `tls_private_key_path` lines under the `TLS` section. You will need to manage
  provisioning of these certificates yourself — Synapse had built-in ACME
  support, but the ACMEv1 protocol Synapse implements is deprecated, not
  allowed by LetsEncrypt for new sites, and will break for existing sites in
  late 2020. See [ACME.md](docs/ACME.md).

If you are using your own certificate, be sure to use a `.pem` file that
includes the full certificate chain including any intermediate certificates
@@ -421,6 +446,60 @@ so, you will need to edit `homeserver.yaml`, as follows:
For a more detailed guide to configuring your server for federation, see
[federate.md](docs/federate.md).
## Client Well-Known URI
Setting up the client Well-Known URI is optional but if you set it up, it will
allow users to enter their full username (e.g. `@user:<server_name>`) into clients
which support well-known lookup to automatically configure the homeserver and
identity server URLs. This is useful so that users don't have to memorize or think
about the actual homeserver URL you are using.
The URL `https://<server_name>/.well-known/matrix/client` should return JSON in
the following format.
```
{
"m.homeserver": {
"base_url": "https://<matrix.example.com>"
}
}
```
It can optionally contain identity server information as well.
```
{
"m.homeserver": {
"base_url": "https://<matrix.example.com>"
},
"m.identity_server": {
"base_url": "https://<identity.example.com>"
}
}
```
To work in browser based clients, the file must be served with the appropriate
Cross-Origin Resource Sharing (CORS) headers. A recommended value would be
`Access-Control-Allow-Origin: *` which would allow all browser based clients to
view it.
In nginx this would be something like:
```
location /.well-known/matrix/client {
return 200 '{"m.homeserver": {"base_url": "https://<matrix.example.com>"}}';
add_header Content-Type application/json;
add_header Access-Control-Allow-Origin *;
}
```
You should also ensure the `public_baseurl` option in `homeserver.yaml` is set
correctly. `public_baseurl` should be set to the URL that clients will use to
connect to your server. This is the same URL you put for the `m.homeserver`
`base_url` above.
```
public_baseurl: "https://<matrix.example.com>"
```
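As a quick sanity check of the steps above, the served well-known document can be validated with a few lines of standard-library Python. This is a minimal sketch: the helper names and the example server name are illustrative, not part of Synapse.

```python
import json
from urllib.request import urlopen


def validate_well_known(body):
    """Return the homeserver base_url if the well-known document is well-formed."""
    base_url = body.get("m.homeserver", {}).get("base_url")
    if not isinstance(base_url, str) or not base_url.startswith("https://"):
        raise ValueError("missing or invalid m.homeserver.base_url")
    return base_url


def check_server(server_name):
    """Fetch https://<server_name>/.well-known/matrix/client and validate it."""
    url = "https://%s/.well-known/matrix/client" % server_name
    with urlopen(url) as resp:
        return validate_well_known(json.load(resp))
```

For example, `validate_well_known({"m.homeserver": {"base_url": "https://matrix.example.com"}})` returns the base URL, while a document missing the key raises `ValueError`.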
## Email
@@ -439,7 +518,7 @@ email will be disabled.
## Registering a user

The easiest way to create a new user is to do so from a client like [Element](https://element.io/).
Alternatively you can do so from the command line if you have installed via pip.
@@ -45,7 +45,7 @@ which handle:
- Eventually-consistent cryptographically secure synchronisation of room
  state across a global open network of federated servers and services
- Sending and receiving extensible messages in a room with (optional)
  end-to-end encryption
- Inviting, joining, leaving, kicking, banning room members
- Managing user accounts (registration, login, logout)
- Using 3rd Party IDs (3PIDs) such as email addresses, phone numbers,
@@ -82,9 +82,6 @@ at the `Matrix spec <https://matrix.org/docs/spec>`_, and experiment with the
Thanks for using Matrix!
Support
=======
@@ -115,12 +112,11 @@ Unless you are running a test instance of Synapse on your local machine, in
general, you will need to enable TLS support before you can successfully
connect from a client: see `<INSTALL.md#tls-certificates>`_.

An easy way to get started is to login or register via Element at
https://app.element.io/#/login or https://app.element.io/#/register respectively.
You will need to change the server you are logging into from ``matrix.org``
and instead specify a Homeserver URL of ``https://<server_name>:8448``
(or just ``https://<server_name>`` if you are using a reverse proxy).

If you prefer to use another client, refer to our
`client breakdown <https://matrix.org/docs/projects/clients-matrix>`_.
@@ -137,7 +133,7 @@ it, specify ``enable_registration: true`` in ``homeserver.yaml``. (It is then
recommended to also set up CAPTCHA - see `<docs/CAPTCHA_SETUP.md>`_.)

Once ``enable_registration`` is set to ``true``, it is possible to register a
user via a Matrix client.

Your new user name will be formed partly from the ``server_name``, and partly
from a localpart you specify when you create the account. Your name will take
@@ -183,30 +179,6 @@ versions of synapse.
.. _UPGRADE.rst: UPGRADE.rst
.. _reverse-proxy:

Using a reverse proxy with Synapse
@@ -255,10 +227,9 @@ email address.
Password reset
==============

Users can reset their password through their client. Alternatively, a server admin
can reset a user's password using the `admin API <docs/admin_api/user_admin_api.rst#reset-password>`_
or by directly editing the database as shown below.

First calculate the hash of the new password::
@@ -75,6 +75,24 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.18.0
====================
Docker `-py3` suffix will be removed in future versions
-------------------------------------------------------
From 10th August 2020, we will no longer publish Docker images with the `-py3` tag suffix. The images tagged with the `-py3` suffix have been identical to the non-suffixed tags since release 0.99.0, and the suffix is obsolete.
On 10th August, we will remove the `latest-py3` tag. Existing per-release tags (such as `v1.18.0-py3`) will not be removed, but no new `-py3` tags will be added.
Scripts relying on the `-py3` suffix will need to be updated.
Redis replication is now recommended in lieu of TCP replication
---------------------------------------------------------------
When setting up worker processes, we now recommend the use of a Redis server for replication. **The old direct TCP connection method is deprecated and will be removed in a future release.**
See `docs/workers.md <docs/workers.md>`_ for more details.
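For reference, switching a worker deployment to Redis-backed replication is a small configuration change on the main process and on each worker. The following is a sketch with placeholder host/port; `docs/workers.md <docs/workers.md>`_ is authoritative:

```yaml
# homeserver.yaml (and each worker's config file): enable the Redis-based
# replication channel instead of direct TCP replication.
redis:
  enabled: true
  host: localhost   # placeholder - your Redis server
  port: 6379
```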
Upgrading to v1.14.0
====================
- Allow guest access to the `GET /_matrix/client/r0/rooms/{room_id}/members` endpoint, according to MSC2689. Contributed by Awesome Technologies Innovationslabor GmbH.
- Add unread messages count to sync responses, as specified in [MSC2654](https://github.com/matrix-org/matrix-doc/pull/2654).
- Document how to set up a Client Well-Known file and fix several pieces of outdated documentation.
- Add option to allow server admins to join rooms which fail complexity checks. Contributed by @lugino-emeritus.
- Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0.
- Convert various parts of the codebase to async/await.
- Convert various parts of the codebase to async/await.
- Convert various parts of the codebase to async/await.
- Convert various parts of the codebase to async/await.
- Move some database-related log lines from the default logger to the database/transaction loggers.
- Convert various parts of the codebase to async/await.
- Add an option to purge room or not with delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). Contributed by @dklimpel.
- Add a script to detect source code files using non-unix line terminators.
- Add a script to detect source code files using non-unix line terminators.
- Log the SAML session ID during creation.
- Convert various parts of the codebase to async/await.
- Convert various parts of the codebase to async/await.
- Convert various parts of the codebase to async/await.
- Fix a bug introduced in Synapse v1.7.2 which caused inaccurate membership counts in the room directory.
- Fix a long standing bug: 'Duplicate key value violates unique constraint "event_relations_id"' when message retention is configured.
- Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0.
- Fix "no create event in auth events" when trying to reject invitation after inviter leaves. Bug introduced in Synapse v1.10.0.
- Convert various parts of the codebase to async/await.
- Convert various parts of the codebase to async/await.
- Convert various parts of the codebase to async/await.
- Improve workers docs.
- Fix typo in `docs/workers.md`.
- Fix various comments and minor discrepancies in server notices code.
- Add documentation for how to undo a room shutdown.
- Fix a long standing bug where HTTP HEAD requests resulted in a 400 error.
- Remove redundant and unreliable signature check for v1 Identity Service lookup responses.
- Convert various parts of the codebase to async/await.
- Add rate limiting to users joining rooms.
- Fix bug where state (e.g. power levels) would reset incorrectly when receiving an event from a remote server.
@@ -17,9 +17,6 @@
""" Starts a synapse client console. """
from __future__ import print_function

import argparse
import cmd
import getpass
@@ -28,12 +25,14 @@ import shlex
import sys
import time
import urllib
from http import TwistedHttpClient

import nacl.encoding
import nacl.signing
import urlparse

from signedjson.sign import SignatureVerifyException, verify_signed_json
from twisted.internet import defer, reactor, threads

CONFIG_JSON = "cmdclient_config.json"
@@ -493,7 +492,7 @@ class SynapseCmd(cmd.Cmd):
"list messages <roomid> from=END&to=START&limit=3"
"""
args = self._parse(line, ["type", "roomid", "qp"])
if "type" not in args or "roomid" not in args:
    print("Must specify type and room ID.")
    return
if args["type"] not in ["members", "messages"]:
@@ -508,7 +507,7 @@
try:
    key_value = key_value_str.split("=")
    qp[key_value[0]] = key_value[1]
except Exception:
    print("Bad query param: %s" % key_value)
    return
@@ -585,7 +584,7 @@ class SynapseCmd(cmd.Cmd):
    parsed_url = urlparse.urlparse(args["path"])
    qp.update(urlparse.parse_qs(parsed_url.query))
    args["path"] = parsed_url.path
except Exception:
    pass

reactor.callFromThread(
@@ -610,13 +609,15 @@ class SynapseCmd(cmd.Cmd):
@defer.inlineCallbacks
def _do_event_stream(self, timeout):
    res = yield defer.ensureDeferred(
        self.http_client.get_json(
            self._url() + "/events",
            {
                "access_token": self._tok(),
                "timeout": str(timeout),
                "from": self.event_stream_token,
            },
        )
    )
    print(json.dumps(res, indent=4))
@@ -772,10 +773,10 @@ def main(server_url, identity_server_url, username, token, config_path):
syn_cmd.config = json.load(config)
try:
    http_client.verbose = "on" == syn_cmd.config["verbose"]
except Exception:
    pass
print("Loaded config from %s" % config_path)
except Exception:
    pass

# Twisted-specific: Runs the command processor in Twisted's event loop
@@ -14,14 +14,14 @@
# limitations under the License.
from __future__ import print_function

import json
import urllib
from pprint import pformat

from twisted.internet import defer, reactor
from twisted.web.client import Agent, readBody
from twisted.web.http_headers import Headers


class HttpClient(object):
@@ -28,27 +28,24 @@ Currently assumes the local address is localhost:<port>
""" """
from synapse.federation import ReplicationHandler
from synapse.federation.units import Pdu
from synapse.util import origin_from_ucid
from synapse.app.homeserver import SynapseHomeServer
# from synapse.logging.utils import log_function
from twisted.internet import reactor, defer
from twisted.python import log
import argparse import argparse
import curses.wrapper
import json import json
import logging import logging
import os import os
import re import re
import cursesio import cursesio
import curses.wrapper
from twisted.internet import defer, reactor
from twisted.python import log
from synapse.app.homeserver import SynapseHomeServer
from synapse.federation import ReplicationHandler
from synapse.federation.units import Pdu
from synapse.util import origin_from_ucid
# from synapse.logging.utils import log_function
logger = logging.getLogger("example") logger = logging.getLogger("example")
@@ -75,7 +72,7 @@ class InputOutput(object):
""" """
try: try:
m = re.match("^join (\S+)$", line) m = re.match(r"^join (\S+)$", line)
if m: if m:
# The `sender` wants to join a room. # The `sender` wants to join a room.
(room_name,) = m.groups() (room_name,) = m.groups()
@@ -84,7 +81,7 @@ class InputOutput(object):
# self.print_line("OK.") # self.print_line("OK.")
return return
m = re.match("^invite (\S+) (\S+)$", line) m = re.match(r"^invite (\S+) (\S+)$", line)
if m: if m:
# `sender` wants to invite someone to a room # `sender` wants to invite someone to a room
room_name, invitee = m.groups() room_name, invitee = m.groups()
@@ -93,7 +90,7 @@ class InputOutput(object):
# self.print_line("OK.") # self.print_line("OK.")
return return
m = re.match("^send (\S+) (.*)$", line) m = re.match(r"^send (\S+) (.*)$", line)
if m: if m:
# `sender` wants to message a room # `sender` wants to message a room
room_name, body = m.groups() room_name, body = m.groups()
@@ -102,7 +99,7 @@ class InputOutput(object):
# self.print_line("OK.") # self.print_line("OK.")
return return
m = re.match("^backfill (\S+)$", line) m = re.match(r"^backfill (\S+)$", line)
if m: if m:
# we want to backfill a room # we want to backfill a room
(room_name,) = m.groups() (room_name,) = m.groups()
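The pattern changes in these hunks only add the `r` prefix: `\S` inside a plain string literal is an invalid escape sequence (a `DeprecationWarning`, and a `SyntaxWarning` on newer Pythons), while a raw string passes it through to the regex engine unchanged. A minimal sketch of the same command dispatch, with an illustrative input line:

```python
import re

# Raw strings keep "\S" as a regex escape rather than a string escape.
line = "send #matrix hello world"
m = re.match(r"^send (\S+) (.*)$", line)
room_name, body = m.groups()
# room_name -> "#matrix", body -> "hello world"
```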
@@ -201,16 +198,6 @@ class HomeServer(ReplicationHandler):
% (pdu.context, pdu.pdu_type, json.dumps(pdu.content)) % (pdu.context, pdu.pdu_type, json.dumps(pdu.content))
) )
# def on_state_change(self, pdu):
##self.output.print_line("#%s (state) %s *** %s" %
##(pdu.context, pdu.state_key, pdu.pdu_type)
##)
# if "joinee" in pdu.content:
# self._on_join(pdu.context, pdu.content["joinee"])
# elif "invitee" in pdu.content:
# self._on_invite(pdu.origin, pdu.context, pdu.content["invitee"])
def _on_message(self, pdu): def _on_message(self, pdu):
""" We received a message """ We received a message
""" """
@@ -314,7 +301,7 @@ class HomeServer(ReplicationHandler):
return self.replication_layer.backfill(dest, room_name, limit) return self.replication_layer.backfill(dest, room_name, limit)
def _get_room_remote_servers(self, room_name): def _get_room_remote_servers(self, room_name):
return [i for i in self.joined_rooms.setdefault(room_name).servers] return list(self.joined_rooms.setdefault(room_name).servers)
def _get_or_create_room(self, room_name): def _get_or_create_room(self, room_name):
return self.joined_rooms.setdefault(room_name, Room(room_name)) return self.joined_rooms.setdefault(room_name, Room(room_name))
@@ -334,7 +321,7 @@ def main(stdscr):
user = args.user user = args.user
server_name = origin_from_ucid(user) server_name = origin_from_ucid(user)
## Set up logging ## # Set up logging
root_logger = logging.getLogger() root_logger = logging.getLogger()
@@ -354,7 +341,7 @@ def main(stdscr):
observer = log.PythonLoggingObserver() observer = log.PythonLoggingObserver()
observer.start() observer.start()
## Set up synapse server # Set up synapse server
curses_stdio = cursesio.CursesStdIO(stdscr) curses_stdio = cursesio.CursesStdIO(stdscr)
input_output = InputOutput(curses_stdio, user) input_output = InputOutput(curses_stdio, user)
@@ -368,16 +355,16 @@ def main(stdscr):
input_output.set_home_server(hs) input_output.set_home_server(hs)
## Add input_output logger # Add input_output logger
io_logger = IOLoggerHandler(input_output) io_logger = IOLoggerHandler(input_output)
io_logger.setFormatter(formatter) io_logger.setFormatter(formatter)
root_logger.addHandler(io_logger) root_logger.addHandler(io_logger)
## Start! ## # Start!
try: try:
port = int(server_name.split(":")[1]) port = int(server_name.split(":")[1])
except: except Exception:
port = 12345 port = 12345
app_hs.get_http_server().start_listening(port) app_hs.get_http_server().start_listening(port)
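The `except:` to `except Exception:` changes throughout these hunks matter because a bare `except` also swallows `SystemExit` and `KeyboardInterrupt`, which should normally propagate. The port fallback just above can be sketched as (`parse_port` is a hypothetical helper):

```python
def parse_port(server_name, default=12345):
    # As in the hunk above: take the port after the colon, or fall back.
    try:
        return int(server_name.split(":")[1])
    except Exception:  # IndexError (no colon) or ValueError (not a number)
        return default

# parse_port("localhost:8080") -> 8080; parse_port("localhost") -> 12345
```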
@@ -1,7 +1,44 @@
{ {
"__inputs": [
{
"name": "DS_PROMETHEUS",
"label": "Prometheus",
"description": "",
"type": "datasource",
"pluginId": "prometheus",
"pluginName": "Prometheus"
}
],
"__requires": [
{
"type": "grafana",
"id": "grafana",
"name": "Grafana",
"version": "6.7.4"
},
{
"type": "panel",
"id": "graph",
"name": "Graph",
"version": ""
},
{
"type": "panel",
"id": "heatmap",
"name": "Heatmap",
"version": ""
},
{
"type": "datasource",
"id": "prometheus",
"name": "Prometheus",
"version": "1.0.0"
}
],
"annotations": { "annotations": {
"list": [ "list": [
{ {
"$$hashKey": "object:76",
"builtIn": 1, "builtIn": 1,
"datasource": "$datasource", "datasource": "$datasource",
"enable": false, "enable": false,
@@ -17,8 +54,8 @@
"editable": true, "editable": true,
"gnetId": null, "gnetId": null,
"graphTooltip": 0, "graphTooltip": 0,
"id": 1, "id": null,
"iteration": 1591098104645, "iteration": 1594646317221,
"links": [ "links": [
{ {
"asDropdown": true, "asDropdown": true,
@@ -34,7 +71,7 @@
"panels": [ "panels": [
{ {
"collapsed": false, "collapsed": false,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -269,7 +306,6 @@
"show": false "show": false
}, },
"links": [], "links": [],
"options": {},
"reverseYBuckets": false, "reverseYBuckets": false,
"targets": [ "targets": [
{ {
@@ -559,7 +595,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -1423,7 +1459,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -1795,7 +1831,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -2531,7 +2567,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -2823,7 +2859,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -2844,7 +2880,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 33 "y": 6
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 79, "id": 79,
@@ -2940,7 +2976,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 33 "y": 6
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 83, "id": 83,
@@ -3038,7 +3074,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 42 "y": 15
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 109, "id": 109,
@@ -3137,7 +3173,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 42 "y": 15
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 111, "id": 111,
@@ -3223,14 +3259,14 @@
"dashLength": 10, "dashLength": 10,
"dashes": false, "dashes": false,
"datasource": "$datasource", "datasource": "$datasource",
"description": "", "description": "Number of events queued up on the master process for processing by the federation sender",
"fill": 1, "fill": 1,
"fillGradient": 0, "fillGradient": 0,
"gridPos": { "gridPos": {
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 51 "y": 24
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 140, "id": 140,
@@ -3354,6 +3390,103 @@
"align": false, "align": false,
"alignLevel": null "alignLevel": null
} }
},
{
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": "${DS_PROMETHEUS}",
"description": "The number of events in the in-memory queues ",
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 8,
"w": 12,
"x": 12,
"y": 24
},
"hiddenSeries": false,
"id": 142,
"legend": {
"avg": false,
"current": false,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"nullPointMode": "null",
"options": {
"dataLinks": []
},
"percentage": false,
"pointradius": 2,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"expr": "synapse_federation_transaction_queue_pending_pdus{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
"interval": "",
"legendFormat": "pending PDUs {{job}}-{{index}}",
"refId": "A"
},
{
"expr": "synapse_federation_transaction_queue_pending_edus{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
"interval": "",
"legendFormat": "pending EDUs {{job}}-{{index}}",
"refId": "B"
}
],
"thresholds": [],
"timeFrom": null,
"timeRegions": [],
"timeShift": null,
"title": "In-memory federation transmission queues",
"tooltip": {
"shared": true,
"sort": 0,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"buckets": null,
"mode": "time",
"name": null,
"show": true,
"values": []
},
"yaxes": [
{
"$$hashKey": "object:317",
"format": "short",
"label": "events",
"logBase": 1,
"max": null,
"min": "0",
"show": true
},
{
"$$hashKey": "object:318",
"format": "short",
"label": "",
"logBase": 1,
"max": null,
"min": null,
"show": true
}
],
"yaxis": {
"align": false,
"alignLevel": null
}
} }
], ],
"title": "Federation", "title": "Federation",
@@ -3361,7 +3494,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -3567,7 +3700,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -3588,7 +3721,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 52 "y": 79
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 48, "id": 48,
@@ -3682,7 +3815,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 52 "y": 79
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 104, "id": 104,
@@ -3802,7 +3935,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 59 "y": 86
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 10, "id": 10,
@@ -3898,7 +4031,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 59 "y": 86
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 11, "id": 11,
@@ -3987,7 +4120,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -4011,7 +4144,7 @@
"h": 13, "h": 13,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 67 "y": 80
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 12, "id": 12,
@@ -4106,7 +4239,7 @@
"h": 13, "h": 13,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 67 "y": 80
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 26, "id": 26,
@@ -4201,7 +4334,7 @@
"h": 13, "h": 13,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 80 "y": 93
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 13, "id": 13,
@@ -4297,7 +4430,7 @@
"h": 13, "h": 13,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 80 "y": 93
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 27, "id": 27,
@@ -4392,7 +4525,7 @@
"h": 13, "h": 13,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 93 "y": 106
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 28, "id": 28,
@@ -4486,7 +4619,7 @@
"h": 13, "h": 13,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 93 "y": 106
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 25, "id": 25,
@@ -4572,7 +4705,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -5062,7 +5195,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -5083,7 +5216,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 66 "y": 121
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 91, "id": 91,
@@ -5179,7 +5312,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 66 "y": 121
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 21, "id": 21,
@@ -5271,7 +5404,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 75 "y": 130
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 89, "id": 89,
@@ -5369,7 +5502,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 75 "y": 130
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 93, "id": 93,
@@ -5459,7 +5592,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 84 "y": 139
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 95, "id": 95,
@@ -5552,12 +5685,12 @@
"mode": "spectrum" "mode": "spectrum"
}, },
"dataFormat": "tsbuckets", "dataFormat": "tsbuckets",
"datasource": "Prometheus", "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 84 "y": 139
}, },
"heatmap": {}, "heatmap": {},
"hideZeroBuckets": true, "hideZeroBuckets": true,
@@ -5567,7 +5700,6 @@
"show": true "show": true
}, },
"links": [], "links": [],
"options": {},
"reverseYBuckets": false, "reverseYBuckets": false,
"targets": [ "targets": [
{ {
@@ -5609,7 +5741,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -5630,7 +5762,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 39 "y": 66
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 2, "id": 2,
@@ -5754,7 +5886,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 39 "y": 66
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 41, "id": 41,
@@ -5847,7 +5979,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 46 "y": 73
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 42, "id": 42,
@@ -5939,7 +6071,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 46 "y": 73
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 43, "id": 43,
@@ -6031,7 +6163,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 53 "y": 80
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 113, "id": 113,
@@ -6129,7 +6261,7 @@
"h": 7, "h": 7,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 53 "y": 80
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 115, "id": 115,
@@ -6215,7 +6347,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -6236,7 +6368,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 58 "y": 40
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 67, "id": 67,
@@ -6267,7 +6399,7 @@
"steppedLine": false, "steppedLine": false,
"targets": [ "targets": [
{ {
"expr": " synapse_event_persisted_position{instance=\"$instance\",job=\"synapse\"} - ignoring(index, job, name) group_right() synapse_event_processing_positions{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}", "expr": "max(synapse_event_persisted_position{instance=\"$instance\"}) - ignoring(instance,index, job, name) group_right() synapse_event_processing_positions{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
"format": "time_series", "format": "time_series",
"interval": "", "interval": "",
"intervalFactor": 1, "intervalFactor": 1,
@@ -6328,7 +6460,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 58 "y": 40
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 71, "id": 71,
@@ -6362,6 +6494,7 @@
"expr": "time()*1000-synapse_event_processing_last_ts{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}", "expr": "time()*1000-synapse_event_processing_last_ts{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
"format": "time_series", "format": "time_series",
"hide": false, "hide": false,
"interval": "",
"intervalFactor": 1, "intervalFactor": 1,
"legendFormat": "{{job}}-{{index}} {{name}}", "legendFormat": "{{job}}-{{index}} {{name}}",
"refId": "B" "refId": "B"
@@ -6420,7 +6553,7 @@
"h": 9, "h": 9,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 67 "y": 49
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 121, "id": 121,
@@ -6509,7 +6642,7 @@
}, },
{ {
"collapsed": true, "collapsed": true,
"datasource": null, "datasource": "${DS_PROMETHEUS}",
"gridPos": { "gridPos": {
"h": 1, "h": 1,
"w": 24, "w": 24,
@@ -6539,7 +6672,7 @@
"h": 8, "h": 8,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 41 "y": 86
}, },
"heatmap": {}, "heatmap": {},
"hideZeroBuckets": true, "hideZeroBuckets": true,
@@ -6549,7 +6682,6 @@
"show": true "show": true
}, },
"links": [], "links": [],
"options": {},
"reverseYBuckets": false, "reverseYBuckets": false,
"targets": [ "targets": [
{ {
@@ -6599,7 +6731,7 @@
"h": 8, "h": 8,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 41 "y": 86
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 124, "id": 124,
@@ -6700,7 +6832,7 @@
"h": 8, "h": 8,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 49 "y": 94
}, },
"heatmap": {}, "heatmap": {},
"hideZeroBuckets": true, "hideZeroBuckets": true,
@@ -6710,7 +6842,6 @@
"show": true "show": true
}, },
"links": [], "links": [],
"options": {},
"reverseYBuckets": false, "reverseYBuckets": false,
"targets": [ "targets": [
{ {
@@ -6760,7 +6891,7 @@
"h": 8, "h": 8,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 49 "y": 94
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 128, "id": 128,
@@ -6879,7 +7010,7 @@
"h": 8, "h": 8,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 57 "y": 102
}, },
"heatmap": {}, "heatmap": {},
"hideZeroBuckets": true, "hideZeroBuckets": true,
@@ -6889,7 +7020,6 @@
"show": true "show": true
}, },
"links": [], "links": [],
"options": {},
"reverseYBuckets": false, "reverseYBuckets": false,
"targets": [ "targets": [
{ {
@@ -6939,7 +7069,7 @@
"h": 8, "h": 8,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 57 "y": 102
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 130, "id": 130,
@@ -7058,7 +7188,7 @@
"h": 8, "h": 8,
"w": 12, "w": 12,
"x": 0, "x": 0,
"y": 65 "y": 110
}, },
"heatmap": {}, "heatmap": {},
"hideZeroBuckets": true, "hideZeroBuckets": true,
@@ -7068,12 +7198,12 @@
"show": true "show": true
}, },
"links": [], "links": [],
"options": {},
"reverseYBuckets": false, "reverseYBuckets": false,
"targets": [ "targets": [
{ {
"expr": "rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0)", "expr": "rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "heatmap", "format": "heatmap",
"interval": "",
"intervalFactor": 1, "intervalFactor": 1,
"legendFormat": "{{le}}", "legendFormat": "{{le}}",
"refId": "A" "refId": "A"
@@ -7118,7 +7248,7 @@
"h": 8, "h": 8,
"w": 12, "w": 12,
"x": 12, "x": 12,
"y": 65 "y": 110
}, },
"hiddenSeries": false, "hiddenSeries": false,
"id": 132, "id": 132,
@@ -7149,29 +7279,33 @@
"steppedLine": false, "steppedLine": false,
"targets": [ "targets": [
{ {
"expr": "histogram_quantile(0.5, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0)) ", "expr": "histogram_quantile(0.5, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"format": "time_series", "format": "time_series",
"interval": "",
"intervalFactor": 1, "intervalFactor": 1,
"legendFormat": "50%", "legendFormat": "50%",
"refId": "A" "refId": "A"
}, },
{ {
"expr": "histogram_quantile(0.75, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))", "expr": "histogram_quantile(0.75, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"format": "time_series", "format": "time_series",
"interval": "",
"intervalFactor": 1, "intervalFactor": 1,
"legendFormat": "75%", "legendFormat": "75%",
"refId": "B" "refId": "B"
}, },
{ {
"expr": "histogram_quantile(0.90, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))", "expr": "histogram_quantile(0.90, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"format": "time_series", "format": "time_series",
"interval": "",
"intervalFactor": 1, "intervalFactor": 1,
"legendFormat": "90%", "legendFormat": "90%",
"refId": "C" "refId": "C"
}, },
{ {
"expr": "histogram_quantile(0.99, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))", "expr": "histogram_quantile(0.99, rate(synapse_state_number_state_groups_in_resolution_bucket{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"format": "time_series", "format": "time_series",
"interval": "",
"intervalFactor": 1, "intervalFactor": 1,
"legendFormat": "99%", "legendFormat": "99%",
"refId": "D" "refId": "D"
@@ -7181,7 +7315,7 @@
"timeFrom": null, "timeFrom": null,
"timeRegions": [], "timeRegions": [],
"timeShift": null, "timeShift": null,
"title": "Number of state resolution performed, by number of state groups involved (quantiles)", "title": "Number of state resolutions performed, by number of state groups involved (quantiles)",
"tooltip": { "tooltip": {
"shared": true, "shared": true,
"sort": 0, "sort": 0,
@@ -7233,6 +7367,7 @@
"list": [ "list": [
{ {
"current": { "current": {
"selected": false,
"text": "Prometheus", "text": "Prometheus",
"value": "Prometheus" "value": "Prometheus"
}, },
@@ -7309,14 +7444,12 @@
}, },
{ {
"allValue": null, "allValue": null,
"current": { "current": {},
"text": "matrix.org",
"value": "matrix.org"
},
"datasource": "$datasource", "datasource": "$datasource",
"definition": "", "definition": "",
"hide": 0, "hide": 0,
"includeAll": false, "includeAll": false,
"index": -1,
"label": null, "label": null,
"multi": false, "multi": false,
"name": "instance", "name": "instance",
@@ -7335,17 +7468,13 @@
{ {
"allFormat": "regex wildcard", "allFormat": "regex wildcard",
"allValue": "", "allValue": "",
"current": { "current": {},
"text": "synapse",
"value": [
"synapse"
]
},
"datasource": "$datasource", "datasource": "$datasource",
"definition": "", "definition": "",
"hide": 0, "hide": 0,
"hideLabel": false, "hideLabel": false,
"includeAll": true, "includeAll": true,
"index": -1,
"label": "Job", "label": "Job",
"multi": true, "multi": true,
"multiFormat": "regex values", "multiFormat": "regex values",
@@ -7366,16 +7495,13 @@
{ {
"allFormat": "regex wildcard", "allFormat": "regex wildcard",
"allValue": ".*", "allValue": ".*",
"current": { "current": {},
"selected": false,
"text": "All",
"value": "$__all"
},
"datasource": "$datasource", "datasource": "$datasource",
"definition": "", "definition": "",
"hide": 0, "hide": 0,
"hideLabel": false, "hideLabel": false,
"includeAll": true, "includeAll": true,
"index": -1,
"label": "", "label": "",
"multi": true, "multi": true,
"multiFormat": "regex values", "multiFormat": "regex values",
@@ -7428,5 +7554,8 @@
"timezone": "", "timezone": "",
"title": "Synapse", "title": "Synapse",
"uid": "000000012", "uid": "000000012",
"version": 29 "variables": {
"list": []
},
"version": 32
} }
@@ -1,5 +1,13 @@
from __future__ import print_function from __future__ import print_function
import argparse
import cgi
import datetime
import json
import pydot
import urllib2
# Copyright 2014-2016 OpenMarket Ltd # Copyright 2014-2016 OpenMarket Ltd
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
@@ -15,15 +23,6 @@ from __future__ import print_function
# limitations under the License. # limitations under the License.
import sqlite3
import pydot
import cgi
import json
import datetime
import argparse
import urllib2
def make_name(pdu_id, origin): def make_name(pdu_id, origin):
return "%s@%s" % (pdu_id, origin) return "%s@%s" % (pdu_id, origin)
@@ -33,7 +32,7 @@ def make_graph(pdus, room, filename_prefix):
node_map = {} node_map = {}
origins = set() origins = set()
colors = set(("red", "green", "blue", "yellow", "purple")) colors = {"red", "green", "blue", "yellow", "purple"}
for pdu in pdus: for pdu in pdus:
origins.add(pdu.get("origin")) origins.add(pdu.get("origin"))
@@ -49,7 +48,7 @@ def make_graph(pdus, room, filename_prefix):
try: try:
c = colors.pop() c = colors.pop()
color_map[o] = c color_map[o] = c
except: except Exception:
print("Run out of colours!") print("Run out of colours!")
color_map[o] = "black" color_map[o] = "black"
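The set-literal change above (`set((...))` to `{...}`) feeds the colour-assignment loop that follows; the pop-until-empty fallback can be sketched as (server names are illustrative, and `KeyError` is what `set.pop` actually raises where the original used a bare `except`):

```python
colors = {"red", "green", "blue"}  # set literal, as in the updated line
color_map = {}
for origin in ["hs1", "hs2", "hs3", "hs4"]:
    try:
        color_map[origin] = colors.pop()
    except KeyError:  # set exhausted: fall back to black
        color_map[origin] = "black"
# With three colours and four origins, the last origin falls back to "black".
```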
@@ -13,12 +13,13 @@
# limitations under the License. # limitations under the License.
import sqlite3
import pydot
import cgi
import json
import datetime
import argparse import argparse
import cgi
import datetime
import json
import sqlite3
import pydot
from synapse.events import FrozenEvent from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze from synapse.util.frozenutils import unfreeze
@@ -98,7 +99,7 @@ def make_graph(db_name, room_id, file_prefix, limit):
for prev_id, _ in event.prev_events: for prev_id, _ in event.prev_events:
try: try:
end_node = node_map[prev_id] end_node = node_map[prev_id]
except: except Exception:
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,)) end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
node_map[prev_id] = end_node node_map[prev_id] = end_node
@@ -1,5 +1,15 @@
from __future__ import print_function from __future__ import print_function
import argparse
import cgi
import datetime
import pydot
import simplejson as json
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
# Copyright 2016 OpenMarket Ltd # Copyright 2016 OpenMarket Ltd
# #
# Licensed under the Apache License, Version 2.0 (the "License"); # Licensed under the Apache License, Version 2.0 (the "License");
@@ -15,16 +25,6 @@ from __future__ import print_function
# limitations under the License. # limitations under the License.
import pydot
import cgi
import simplejson as json
import datetime
import argparse
from synapse.events import FrozenEvent
from synapse.util.frozenutils import unfreeze
def make_graph(file_name, room_id, file_prefix, limit): def make_graph(file_name, room_id, file_prefix, limit):
print("Reading lines") print("Reading lines")
with open(file_name) as f: with open(file_name) as f:
@@ -106,7 +106,7 @@ def make_graph(file_name, room_id, file_prefix, limit):
for prev_id, _ in event.prev_events: for prev_id, _ in event.prev_events:
try: try:
end_node = node_map[prev_id] end_node = node_map[prev_id]
except: except Exception:
end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,)) end_node = pydot.Node(name=prev_id, label="<<b>%s</b>>" % (prev_id,))
node_map[prev_id] = end_node node_map[prev_id] = end_node
@@ -12,15 +12,15 @@ npm install jquery jsdom
""" """
from __future__ import print_function from __future__ import print_function
import gevent
import grequests
from BeautifulSoup import BeautifulSoup
import json import json
import urllib
import subprocess import subprocess
import time import time
# ACCESS_TOKEN="" # import gevent
import grequests
from BeautifulSoup import BeautifulSoup
ACCESS_TOKEN = ""
MATRIXBASE = "https://matrix.org/_matrix/client/api/v1/" MATRIXBASE = "https://matrix.org/_matrix/client/api/v1/"
MYUSERNAME = "@davetest:matrix.org" MYUSERNAME = "@davetest:matrix.org"
@@ -1,10 +1,12 @@
#!/usr/bin/env python #!/usr/bin/env python
from __future__ import print_function from __future__ import print_function
from argparse import ArgumentParser
import json import json
import requests
import sys import sys
import urllib import urllib
from argparse import ArgumentParser
import requests
try: try:
raw_input raw_input
@@ -1,3 +1,19 @@
matrix-synapse-py3 (1.xx.0) stable; urgency=medium
[ Synapse Packaging team ]
* New synapse release 1.xx.0.
[ Aaron Raimist ]
* Fix outdated documentation for SYNAPSE_CACHE_FACTOR
-- Synapse Packaging team <packages@matrix.org> XXXXX
matrix-synapse-py3 (1.18.0) stable; urgency=medium
* New synapse release 1.18.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 30 Jul 2020 10:55:53 +0100
matrix-synapse-py3 (1.17.0) stable; urgency=medium matrix-synapse-py3 (1.17.0) stable; urgency=medium
* New synapse release 1.17.0. * New synapse release 1.17.0.
@@ -1,2 +1,2 @@
# Specify environment variables used when running Synapse # Specify environment variables used when running Synapse
# SYNAPSE_CACHE_FACTOR=1 (default) # SYNAPSE_CACHE_FACTOR=0.5 (default)
@@ -46,19 +46,20 @@ Configuration file may be generated as follows:
## ENVIRONMENT ## ENVIRONMENT
* `SYNAPSE_CACHE_FACTOR`: * `SYNAPSE_CACHE_FACTOR`:
Synapse's architecture is quite RAM hungry currently - a lot of Synapse's architecture is quite RAM hungry currently - we deliberately
recent room data and metadata is deliberately cached in RAM in cache a lot of recent room data and metadata in RAM in order to speed up
order to speed up common requests. This will be improved in common requests. We'll improve this in the future, but for now the easiest
future, but for now the easiest way to either reduce the RAM usage way to either reduce the RAM usage (at the risk of slowing things down)
(at the risk of slowing things down) is to set the is to set the almost-undocumented ``SYNAPSE_CACHE_FACTOR`` environment
SYNAPSE_CACHE_FACTOR environment variable. Roughly speaking, a variable. The default is 0.5, which can be decreased to reduce RAM usage
SYNAPSE_CACHE_FACTOR of 1.0 will max out at around 3-4GB of in memory constrained environments, or increased if performance starts to
resident memory - this is what we currently run the matrix.org degrade.
on. The default setting is currently 0.1, which is probably around
a ~700MB footprint. You can dial it down further to 0.02 if However, degraded performance due to a low cache factor, common on
desired, which targets roughly ~512MB. Conversely you can dial it machines with slow disks, often leads to explosions in memory use due to
up if you need performance for lots of users and have a box with a backlogged requests. In this case, reducing the cache factor will make
lot of RAM. things worse. Instead, try increasing it drastically. 2.0 is a good
starting value.
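Following the advice above, a deployment hitting backlog-driven memory spikes would raise the factor rather than lower it; a minimal sketch (the value is the "good starting value" suggested above, not a universal recommendation):

```shell
# Raise the cache factor before (re)starting Synapse so the new value
# is picked up; tune 2.0 to your deployment's RAM budget.
export SYNAPSE_CACHE_FACTOR=2.0
# synctl restart   # then restart Synapse for the change to take effect
```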
## COPYRIGHT ## COPYRIGHT
@@ -16,35 +16,31 @@ ARG PYTHON_VERSION=3.7
 ###
 ### Stage 0: builder
 ###
-FROM docker.io/python:${PYTHON_VERSION}-alpine3.11 as builder
+FROM docker.io/python:${PYTHON_VERSION}-slim as builder

 # install the OS build deps
-RUN apk add \
-        build-base \
-        libffi-dev \
-        libjpeg-turbo-dev \
-        libwebp-dev \
-        libressl-dev \
-        libxslt-dev \
-        linux-headers \
-        postgresql-dev \
-        zlib-dev
-
-# build things which have slow build steps, before we copy synapse, so that
-# the layer can be cached.
-#
-# (we really just care about caching a wheel here, as the "pip install" below
-# will install them again.)
+RUN apt-get update && apt-get install -y \
+    build-essential \
+    libpq-dev \
+ && rm -rf /var/lib/apt/lists/*

+# Build dependencies that are not available as wheels, to speed up rebuilds
 RUN pip install --prefix="/install" --no-warn-script-location \
-        cryptography \
-        msgpack-python \
-        pillow \
-        pynacl
+        frozendict \
+        jaeger-client \
+        opentracing \
+        prometheus-client \
+        psycopg2 \
+        pycparser \
+        pyrsistent \
+        pyyaml \
+        simplejson \
+        threadloop \
+        thrift

 # now install synapse and all of the python deps to /install.
 COPY synapse /synapse/synapse/
 COPY scripts /synapse/scripts/
 COPY MANIFEST.in README.rst setup.py synctl /synapse/
@@ -56,20 +52,13 @@ RUN pip install --prefix="/install" --no-warn-script-location \
 ###
 ### Stage 1: runtime
 ###
-FROM docker.io/python:${PYTHON_VERSION}-alpine3.11
+FROM docker.io/python:${PYTHON_VERSION}-slim

-# xmlsec is required for saml support
-RUN apk add --no-cache --virtual .runtime_deps \
-        libffi \
-        libjpeg-turbo \
-        libwebp \
-        libressl \
-        libxslt \
-        libpq \
-        zlib \
-        su-exec \
-        tzdata \
-        xmlsec
+RUN apt-get update && apt-get install -y \
+    libpq5 \
+    xmlsec1 \
+    gosu \
+ && rm -rf /var/lib/apt/lists/*

 COPY --from=builder /install /usr/local
 COPY ./docker/start.py /start.py
+15
@@ -94,6 +94,21 @@ The following environment variables are supported in run mode:
* `UID`, `GID`: the user and group id to run Synapse as. Defaults to `991`, `991`.
* `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.

## Generating an (admin) user

After synapse is running, you may wish to create a user via `register_new_matrix_user`.

This requires a `registration_shared_secret` to be set in your config file. Synapse
must be restarted to pick up this change.

You can then call the script:

```
docker exec -it synapse register_new_matrix_user http://localhost:8008 -c /data/homeserver.yaml --help
```

Remember to remove the `registration_shared_secret` and restart if you no longer need it.

## TLS support

The default configuration exposes a single HTTP port: http://localhost:8008. It
+6 -6
@@ -120,7 +120,7 @@ def generate_config_from_template(config_dir, config_path, environ, ownership):
     if ownership is not None:
         subprocess.check_output(["chown", "-R", ownership, "/data"])
-        args = ["su-exec", ownership] + args
+        args = ["gosu", ownership] + args

         subprocess.check_output(args)

@@ -172,8 +172,8 @@ def run_generate_config(environ, ownership):
         # make sure that synapse has perms to write to the data dir.
         subprocess.check_output(["chown", ownership, data_dir])

-        args = ["su-exec", ownership] + args
-        os.execv("/sbin/su-exec", args)
+        args = ["gosu", ownership] + args
+        os.execv("/usr/sbin/gosu", args)
     else:
         os.execv("/usr/local/bin/python", args)

@@ -189,7 +189,7 @@ def main(args, environ):
     ownership = "{}:{}".format(desired_uid, desired_gid)

     if ownership is None:
-        log("Will not perform chmod/su-exec as UserID already matches request")
+        log("Will not perform chmod/gosu as UserID already matches request")

     # In generate mode, generate a configuration and missing keys, then exit
     if mode == "generate":

@@ -236,8 +236,8 @@ running with 'migrate_config'. See the README for more details.
     args = ["python", "-m", synapse_worker, "--config-path", config_path]

     if ownership is not None:
-        args = ["su-exec", ownership] + args
-        os.execv("/sbin/su-exec", args)
+        args = ["gosu", ownership] + args
+        os.execv("/usr/sbin/gosu", args)
     else:
         os.execv("/usr/local/bin/python", args)
+11
@@ -10,5 +10,16 @@
# homeserver.yaml. Instead, if you are starting from scratch, please generate
# a fresh config using Synapse by following the instructions in INSTALL.md.

# Configuration options that take a time period can be set using a number
# followed by a letter. Letters have the following meanings:
# s = second
# m = minute
# h = hour
# d = day
# w = week
# y = year
# For example, setting redaction_retention_period: 5m would remove redacted
# messages from the database after 5 minutes, rather than 5 months.

################################################################################
+3 -2
@@ -12,13 +12,14 @@ introduced support for automatically provisioning certificates through
 In [March 2019](https://community.letsencrypt.org/t/end-of-life-plan-for-acmev1/88430),
 Let's Encrypt announced that they were deprecating version 1 of the ACME
 protocol, with the plan to disable the use of it for new accounts in
-November 2019, and for existing accounts in June 2020.
+November 2019, for new domains in June 2020, and for existing accounts and
+domains in June 2021.

 Synapse doesn't currently support version 2 of the ACME protocol, which
 means that:

 * for existing installs, Synapse's built-in ACME support will continue
-  to work until June 2020.
+  to work until June 2021.
 * for new installs, this feature will not work at all.

 Either way, it is recommended to move from Synapse's ACME support
+2
@@ -5,6 +5,8 @@ This API will remove all trace of a room from your database.
 All local users must have left the room before it can be removed.

+See also: [Delete Room API](rooms.md#delete-room-api)

 The API is:

 ```
+131
@@ -318,3 +318,134 @@ Response:
  "state_events": 93534
}
```
# Room Members API
The Room Members admin API allows server admins to get a list of all members of a room.
The response includes the following fields:
* `members` - A list of all the members that are present in the room, represented by their ids.
* `total` - Total number of members in the room.
## Usage
A standard request:
```
GET /_synapse/admin/v1/rooms/<room_id>/members
{}
```
Response:
```
{
"members": [
"@foo:matrix.org",
"@bar:matrix.org",
    "@foobar:matrix.org"
],
"total": 3
}
```
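For instance, the request above can also be assembled programmatically with the Python standard library. The base URL, room ID, and token below are placeholders, and the request is only constructed here, not sent:

```python
from urllib.parse import quote
from urllib.request import Request

def members_request(base_url: str, room_id: str, access_token: str) -> Request:
    # Room IDs contain '!' and ':' and must be URL-encoded in the path.
    path = "/_synapse/admin/v1/rooms/%s/members" % quote(room_id, safe="")
    return Request(
        base_url + path,
        headers={"Authorization": "Bearer " + access_token},
        method="GET",
    )

req = members_request("http://localhost:8008", "!room:example.com", "<admin token>")
print(req.full_url)
```

Passing the resulting `Request` to `urllib.request.urlopen` (or issuing the same URL and header with `curl`) performs the lookup.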
# Delete Room API

The Delete Room admin API allows server admins to remove rooms from the server
and block these rooms.
It is a combination and improvement of the "[Shutdown room](shutdown_room.md)"
and "[Purge room](purge_room.md)" APIs.

Shuts down a room. Moves all local users and room aliases automatically to a
new room if `new_room_user_id` is set. Otherwise local users simply
leave the room without any notification.

The new room will be created with the user specified by the `new_room_user_id` parameter
as room administrator and will contain a message explaining what happened. Users invited
to the new room will have power level `-10` by default, and thus be unable to speak.

If `block` is `true`, it prevents new joins to the old room.

If `purge` is `true` (the default), all traces of the old room will
be removed from your database after all local users have been removed. If you do not want
this to happen, set `purge` to `false`.
Depending on the amount of history being purged a call to the API may take
several minutes or longer.
The local server will only have the power to move local user and room aliases to
the new room. Users on other servers will be unaffected.
The API is:
```json
POST /_synapse/admin/v1/rooms/<room_id>/delete
```
with a body of:
```json
{
"new_room_user_id": "@someuser:example.com",
"room_name": "Content Violation Notification",
"message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service.",
"block": true,
"purge": true
}
```
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see [README.rst](README.rst).
A response body like the following is returned:
```json
{
"kicked_users": [
"@foobar:example.com"
],
"failed_to_kick_users": [],
"local_aliases": [
"#badroom:example.com",
"#evilsaloon:example.com"
],
"new_room_id": "!newroomid:example.com"
}
```
## Parameters
The following parameters should be set in the URL:
* `room_id` - The ID of the room.
The following JSON body parameters are available:
* `new_room_user_id` - Optional. If set, a new room will be created with this user ID
as the creator and admin, and all users in the old room will be moved into that
room. If not set, no new room will be created and the users will just be removed
from the old room. The user ID must be on the local server, but does not necessarily
have to belong to a registered user.
* `room_name` - Optional. A string representing the name of the room that new users will be
invited to. Defaults to `Content Violation Notification`
* `message` - Optional. A string containing the first message that will be sent as
`new_room_user_id` in the new room. Ideally this will clearly convey why the
original room was shut down. Defaults to `Sharing illegal content on this server
is not permitted and rooms in violation will be blocked.`
* `block` - Optional. If set to `true`, this room will be added to a blocking list, preventing
future attempts to join the room. Defaults to `false`.
* `purge` - Optional. If set to `true`, it will remove all traces of the room from your database.
Defaults to `true`.
The JSON body must not be empty. The body must be at least `{}`.
## Response
The following fields are returned in the JSON response body:
* `kicked_users` - An array of users (`user_id`) that were kicked.
* `failed_to_kick_users` - An array of users (`user_id`) that were not kicked.
* `local_aliases` - An array of strings representing the local aliases that were migrated from
the old room to the new.
* `new_room_id` - A string representing the room ID of the new room.
+23 -1
@@ -10,6 +10,8 @@ disallow any further invites or joins.
 The local server will only have the power to move local user and room aliases to
 the new room. Users on other servers will be unaffected.

+See also: [Delete Room API](rooms.md#delete-room-api)

 ## API

 You will need to authenticate with an access token for an admin user.
@@ -31,7 +33,7 @@ You will need to authenticate with an access token for an admin user.
 * `message` - Optional. A string containing the first message that will be sent as
   `new_room_user_id` in the new room. Ideally this will clearly convey why the
   original room was shut down.

 If not specified, the default value of `room_name` is "Content Violation
 Notification". The default value of `message` is "Sharing illegal content on
 this server is not permitted and rooms in violation will be blocked."
@@ -70,3 +72,23 @@ Response:
    "new_room_id": "!newroomid:example.com",
},
```
## Undoing room shutdowns
*Note*: This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level,
the structure can and does change without notice.
First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
never happened - work has to be done to move forward instead of resetting the past.
1. For safety reasons, it is recommended to shut down Synapse prior to continuing.
2. In the database, run `DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';`
* For caution: it's recommended to run this in a transaction: `BEGIN; DELETE ...;`, verify you got 1 result, then `COMMIT;`.
* The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
3. Restart Synapse (required).
You will have to manually handle, if you so choose, the following:
* Aliases that would have been redirected to the Content Violation room.
* Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
* Removal of the Content Violation room if desired.
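The cautious transaction in step 2 can be sketched as follows. SQLite is used here purely so the pattern is self-contained; a real Synapse database is normally Postgres, where you would run the equivalent `BEGIN`/`DELETE`/`COMMIT` in `psql`, and the table schema below is a stand-in rather than Synapse's real one:

```python
import sqlite3

# Stand-in for the real database; isolation_level=None puts sqlite3 in
# autocommit mode, so we issue BEGIN/COMMIT/ROLLBACK ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE blocked_rooms (room_id TEXT, user_id TEXT)")
conn.execute(
    "INSERT INTO blocked_rooms VALUES ('!example:example.org', '@admin:example.org')"
)

cur = conn.cursor()
cur.execute("BEGIN")
cur.execute(
    "DELETE FROM blocked_rooms WHERE room_id = ?", ("!example:example.org",)
)
# Verify exactly one row was deleted before committing; otherwise roll back.
if cur.rowcount == 1:
    cur.execute("COMMIT")
else:
    cur.execute("ROLLBACK")
```

The rowcount check is the "verify you got 1 result" step from the guide: it protects against a typo'd room ID deleting nothing, or a malformed `WHERE` clause deleting everything.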
+5 -1
@@ -91,10 +91,14 @@ Body parameters:
 - ``admin``, optional, defaults to ``false``.
-- ``deactivated``, optional, defaults to ``false``.
+- ``deactivated``, optional. If unspecified, deactivation state will be left
+  unchanged on existing accounts and set to ``false`` for new accounts.

 If the user already exists then optional parameters default to the current value.

+In order to re-activate an account ``deactivated`` must be set to ``false``. If
+users do not log in via single-sign-on, a new ``password`` must be provided.

 List Accounts
 =============
+13 -6
@@ -20,12 +20,18 @@ follows:
 Note that the login type of `m.login.jwt` is supported, but is deprecated. This
 will be removed in a future version of Synapse.

-The `jwt` should encode the local part of the user ID as the standard `sub`
-claim. In the case that the token is not valid, the homeserver must respond with
-`401 Unauthorized` and an error code of `M_UNAUTHORIZED`.
-
-(Note that this differs from the token based logins which return a
-`403 Forbidden` and an error code of `M_FORBIDDEN` if an error occurs.)
+The `token` field should include the JSON web token with the following claims:
+
+* The `sub` (subject) claim is required and should encode the local part of the
+  user ID.
+* The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
+  claims are optional, but validated if present.
+* The issuer (`iss`) claim is optional, but required and validated if configured.
+* The audience (`aud`) claim is optional, but required and validated if configured.
+  Providing the audience claim when not configured will cause validation to fail.
+
+In the case that the token is not valid, the homeserver must respond with
+`403 Forbidden` and an error code of `M_FORBIDDEN`.
As with other login types, there are additional fields (e.g. `device_id` and
`initial_device_display_name`) which can be included in the above request.
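As a sketch of what such a token looks like on the wire, the HS256 signing scheme behind the claims list can be reproduced with the Python standard library alone (PyJWT's `jwt.encode` wraps exactly this); the localpart and secret below are placeholders:

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_login_jwt(localpart: str, secret: str) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    # `sub` is required; `iat` and `exp` are optional but validated if present
    claims = {"sub": localpart, "iat": now, "exp": now + 300}
    signing_input = (
        _b64url(json.dumps(header).encode()) + "." + _b64url(json.dumps(claims).encode())
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + _b64url(sig)

token = make_login_jwt("alice", "my-shared-secret")
```

The resulting token is then sent as the `token` field of the login request body.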
@@ -55,7 +61,8 @@ sample settings.
 Although JSON Web Tokens are typically generated from an external server, the
 examples below use [PyJWT](https://pyjwt.readthedocs.io/en/latest/) directly.

-1. Configure Synapse with JWT logins:
+1. Configure Synapse with JWT logins, noting that this example uses a pre-shared
+   secret and an algorithm of HS256:

   ```yaml
   jwt_config:
+1 -1
@@ -27,7 +27,7 @@
   different thread to Synapse. This can make it more resilient to
   heavy load meaning metrics cannot be retrieved, and can be exposed
   to just internal networks easier. The served metrics are available
-  over HTTP only, and will be available at `/`.
+  over HTTP only, and will be available at `/_synapse/metrics`.

   Add a new listener to homeserver.yaml:
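A listener of type `metrics` is what the text above refers to; the port and bind address below are illustrative values, so check the sample config for your Synapse version:

```yaml
listeners:
  - type: metrics
    port: 9000
    bind_addresses:
      - '127.0.0.1'
```

With this in place, Prometheus can scrape `http://127.0.0.1:9000/_synapse/metrics`.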
+82 -81
@@ -19,102 +19,103 @@ password auth provider module implementations:

 Password auth provider classes must provide the following methods:

-*class* `SomeProvider.parse_config`(*config*)
-
-> This method is passed the `config` object for this module from the
-> homeserver configuration file.
->
-> It should perform any appropriate sanity checks on the provided
-> configuration, and return an object which is then passed into
-> `__init__`.
-
-*class* `SomeProvider`(*config*, *account_handler*)
-
-> The constructor is passed the config object returned by
-> `parse_config`, and a `synapse.module_api.ModuleApi` object which
-> allows the password provider to check if accounts exist and/or create
-> new ones.
+* `parse_config(config)`
+
+  This method is passed the `config` object for this module from the
+  homeserver configuration file.
+
+  It should perform any appropriate sanity checks on the provided
+  configuration, and return an object which is then passed into
+  `__init__`.
+
+  This method should have the `@staticmethod` decoration.
+
+* `__init__(self, config, account_handler)`
+
+  The constructor is passed the config object returned by
+  `parse_config`, and a `synapse.module_api.ModuleApi` object which
+  allows the password provider to check if accounts exist and/or create
+  new ones.

 ## Optional methods

-Password auth provider classes may optionally provide the following
-methods.
-
-*class* `SomeProvider.get_db_schema_files`()
-
-> This method, if implemented, should return an Iterable of
-> `(name, stream)` pairs of database schema files. Each file is applied
-> in turn at initialisation, and a record is then made in the database
-> so that it is not re-applied on the next start.
-
-`someprovider.get_supported_login_types`()
-
-> This method, if implemented, should return a `dict` mapping from a
-> login type identifier (such as `m.login.password`) to an iterable
-> giving the fields which must be provided by the user in the submission
-> to the `/login` api. These fields are passed in the `login_dict`
-> dictionary to `check_auth`.
->
-> For example, if a password auth provider wants to implement a custom
-> login type of `com.example.custom_login`, where the client is expected
-> to pass the fields `secret1` and `secret2`, the provider should
-> implement this method and return the following dict:
->
->     {"com.example.custom_login": ("secret1", "secret2")}
-
-`someprovider.check_auth`(*username*, *login_type*, *login_dict*)
-
-> This method is the one that does the real work. If implemented, it
-> will be called for each login attempt where the login type matches one
-> of the keys returned by `get_supported_login_types`.
->
-> It is passed the (possibly UNqualified) `user` provided by the client,
-> the login type, and a dictionary of login secrets passed by the
-> client.
->
-> The method should return a Twisted `Deferred` object, which resolves
-> to the canonical `@localpart:domain` user id if authentication is
-> successful, and `None` if not.
->
-> Alternatively, the `Deferred` can resolve to a `(str, func)` tuple, in
-> which case the second field is a callback which will be called with
-> the result from the `/login` call (including `access_token`,
-> `device_id`, etc.)
-
-`someprovider.check_3pid_auth`(*medium*, *address*, *password*)
-
-> This method, if implemented, is called when a user attempts to
-> register or log in with a third party identifier, such as email. It is
-> passed the medium (ex. "email"), an address (ex.
-> "<jdoe@example.com>") and the user's password.
->
-> The method should return a Twisted `Deferred` object, which resolves
-> to a `str` containing the user's (canonical) User ID if
-> authentication was successful, and `None` if not.
->
-> As with `check_auth`, the `Deferred` may alternatively resolve to a
-> `(user_id, callback)` tuple.
-
-`someprovider.check_password`(*user_id*, *password*)
-
-> This method provides a simpler interface than
-> `get_supported_login_types` and `check_auth` for password auth
-> providers that just want to provide a mechanism for validating
-> `m.login.password` logins.
->
-> If implemented, it will be called to check logins with an
-> `m.login.password` login type. It is passed a qualified
-> `@localpart:domain` user id, and the password provided by the user.
->
-> The method should return a Twisted `Deferred` object, which resolves
-> to `True` if authentication is successful, and `False` if not.
-
-`someprovider.on_logged_out`(*user_id*, *device_id*, *access_token*)
-
-> This method, if implemented, is called when a user logs out. It is
-> passed the qualified user ID, the ID of the deactivated device (if
-> any: access tokens are occasionally created without an associated
-> device ID), and the (now deactivated) access token.
->
-> It may return a Twisted `Deferred` object; the logout request will
-> wait for the deferred to complete but the result is ignored.
+Password auth provider classes may optionally provide the following methods:
+
+* `get_db_schema_files(self)`
+
+  This method, if implemented, should return an Iterable of
+  `(name, stream)` pairs of database schema files. Each file is applied
+  in turn at initialisation, and a record is then made in the database
+  so that it is not re-applied on the next start.
+
+* `get_supported_login_types(self)`
+
+  This method, if implemented, should return a `dict` mapping from a
+  login type identifier (such as `m.login.password`) to an iterable
+  giving the fields which must be provided by the user in the submission
+  to [the `/login` API](https://matrix.org/docs/spec/client_server/latest#post-matrix-client-r0-login).
+  These fields are passed in the `login_dict` dictionary to `check_auth`.
+
+  For example, if a password auth provider wants to implement a custom
+  login type of `com.example.custom_login`, where the client is expected
+  to pass the fields `secret1` and `secret2`, the provider should
+  implement this method and return the following dict:
+
+  ```python
+  {"com.example.custom_login": ("secret1", "secret2")}
+  ```
+
+* `check_auth(self, username, login_type, login_dict)`
+
+  This method does the real work. If implemented, it
+  will be called for each login attempt where the login type matches one
+  of the keys returned by `get_supported_login_types`.
+
+  It is passed the (possibly unqualified) `user` field provided by the client,
+  the login type, and a dictionary of login secrets passed by the
+  client.
+
+  The method should return an `Awaitable` object, which resolves
+  to the canonical `@localpart:domain` user ID if authentication is
+  successful, and `None` if not.
+
+  Alternatively, the `Awaitable` can resolve to a `(str, func)` tuple, in
+  which case the second field is a callback which will be called with
+  the result from the `/login` call (including `access_token`,
+  `device_id`, etc.)
+
+* `check_3pid_auth(self, medium, address, password)`
+
+  This method, if implemented, is called when a user attempts to
+  register or log in with a third party identifier, such as email. It is
+  passed the medium (ex. "email"), an address (ex.
+  "<jdoe@example.com>") and the user's password.
+
+  The method should return an `Awaitable` object, which resolves
+  to a `str` containing the user's (canonical) user ID if
+  authentication was successful, and `None` if not.
+
+  As with `check_auth`, the `Awaitable` may alternatively resolve to a
+  `(user_id, callback)` tuple.
+
+* `check_password(self, user_id, password)`
+
+  This method provides a simpler interface than
+  `get_supported_login_types` and `check_auth` for password auth
+  providers that just want to provide a mechanism for validating
+  `m.login.password` logins.
+
+  If implemented, it will be called to check logins with an
+  `m.login.password` login type. It is passed a qualified
+  `@localpart:domain` user id, and the password provided by the user.
+
+  The method should return an `Awaitable` object, which resolves
+  to `True` if authentication is successful, and `False` if not.
+
+* `on_logged_out(self, user_id, device_id, access_token)`
+
+  This method, if implemented, is called when a user logs out. It is
+  passed the qualified user ID, the ID of the deactivated device (if
+  any: access tokens are occasionally created without an associated
+  device ID), and the (now deactivated) access token.
+
+  It may return an `Awaitable` object; the logout request will
+  wait for the `Awaitable` to complete, but the result is ignored.
+3
@@ -188,6 +188,9 @@ to do step 2.

 It is safe to kill the port script at any time and restart it.

+Note that the database may take up significantly more (25% - 100% more)
+space on disk after porting to Postgres.

 ### Using the port script

 Firstly, shut down the currently running synapse server and copy its
+5 -11
@@ -38,6 +38,11 @@ the reverse proxy and the homeserver.
 server {
     listen 443 ssl;
     listen [::]:443 ssl;
+
+    # For the federation port
+    listen 8448 ssl default_server;
+    listen [::]:8448 ssl default_server;

     server_name matrix.example.com;

     location /_matrix {
@@ -48,17 +53,6 @@ server {
         client_max_body_size 10M;
     }
 }
-
-server {
-    listen 8448 ssl default_server;
-    listen [::]:8448 ssl default_server;
-    server_name example.com;
-
-    location / {
-        proxy_pass http://localhost:8008;
-        proxy_set_header X-Forwarded-For $remote_addr;
-    }
-}
 ```

**NOTE**: Do not add a path after the port in `proxy_pass`, otherwise nginx will
+209 -55
@@ -10,6 +10,17 @@
# homeserver.yaml. Instead, if you are starting from scratch, please generate
# a fresh config using Synapse by following the instructions in INSTALL.md.

# Configuration options that take a time period can be set using a number
# followed by a letter. Letters have the following meanings:
# s = second
# m = minute
# h = hour
# d = day
# w = week
# y = year
# For example, setting redaction_retention_period: 5m would remove redacted
# messages from the database after 5 minutes, rather than 5 months.

################################################################################
# Configuration file for Synapse.

@@ -102,7 +113,9 @@ pid_file: DATADIR/homeserver.pid
 #gc_thresholds: [700, 10, 10]

 # Set the limit on the returned events in the timeline in the get
-# and sync operations. The default value is -1, means no upper limit.
+# and sync operations. The default value is 100. -1 means no upper limit.
+#
+# Uncomment the following to increase the limit to 5000.
 #
 #filter_timeline_limit: 5000
@@ -118,38 +131,6 @@ pid_file: DATADIR/homeserver.pid
#
#enable_search: false
# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.
#
#federation_domain_whitelist:
# - lon.example.com
# - nyc.example.com
# - syd.example.com
# Prevent federation requests from being sent to the following
# blacklist IP address CIDR ranges. If this option is not specified, or
# specified with an empty list, no ip range blacklist will be enforced.
#
# As of Synapse v1.4.0 this option also affects any outbound requests to identity
# servers provided by user input.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
federation_ip_range_blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
@@ -178,7 +159,7 @@ federation_ip_range_blacklist:
 #   names: a list of names of HTTP resources. See below for a list of
 #       valid resource names.
 #
-#   compress: set to true to enable HTTP comression for this resource.
+#   compress: set to true to enable HTTP compression for this resource.
 #
 #   additional_resources: Only valid for an 'http' listener. A map of
 #     additional endpoints which should be loaded via dynamic modules.
@@ -344,6 +325,10 @@ limit_remote_rooms:
  #
  #complexity_error: "This room is too complex."

  # allow server admins to join complex rooms. Default is false.
  #
  #admins_can_join: true

# Whether to require a user to be in the room to add an alias to it.
# Defaults to 'true'.
#
@@ -608,6 +593,39 @@ acme:
# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.
#
#federation_domain_whitelist:
# - lon.example.com
# - nyc.example.com
# - syd.example.com
# Prevent federation requests from being sent to the following
# blacklist IP address CIDR ranges. If this option is not specified, or
# specified with an empty list, no ip range blacklist will be enforced.
#
# As of Synapse v1.4.0 this option also affects any outbound requests to identity
# servers provided by user input.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
federation_ip_range_blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
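As an illustration of how these CIDR ranges behave, the following standalone Python sketch (not Synapse code) uses the standard-library `ipaddress` module to test whether an address falls inside the example blacklist above:

```python
from ipaddress import ip_address, ip_network

# The CIDR ranges from the federation_ip_range_blacklist example above.
BLACKLIST = [ip_network(r) for r in [
    '127.0.0.0/8', '10.0.0.0/8', '172.16.0.0/12', '192.168.0.0/16',
    '100.64.0.0/10', '169.254.0.0/16', '::1/128', 'fe80::/64', 'fc00::/7',
]]

def is_blacklisted(ip: str) -> bool:
    """Return True if `ip` falls inside any blacklisted range.

    Membership tests between mismatched IP versions simply return False,
    so v4 and v6 ranges can live in the same list.
    """
    addr = ip_address(ip)
    return any(addr in net for net in BLACKLIST)

print(is_blacklisted("192.168.1.10"))   # True  (RFC 1918 private range)
print(is_blacklisted("93.184.216.34"))  # False (public address)
```

Note that, as the comment in the sample says, Synapse always treats `0.0.0.0` and `::` as blacklisted regardless of this list.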
## Caching ##

# Caching can be configured through the following options.
@@ -682,7 +700,7 @@ caches:
#database:
#  name: psycopg2
#  args:
#    user: synapse_user
#    password: secretpassword
#    database: synapse
#    host: localhost
@@ -728,6 +746,10 @@ log_config: "CONFDIR/SERVERNAME.log.config"
# - one for ratelimiting redactions by room admins. If this is not explicitly
#   set then it uses the same ratelimiting as per rc_message. This is useful
#   to allow room admins to deal with abuse quickly.
# - two for ratelimiting number of rooms a user can join, "local" for when
# users are joining rooms the server is already in (this is cheap) vs
# "remote" for when users are trying to join rooms not on the server (which
# can be more expensive)
#
# The defaults are as shown below.
#
@@ -753,6 +775,14 @@ log_config: "CONFDIR/SERVERNAME.log.config"
#rc_admin_redaction:
#  per_second: 1
#  burst_count: 50
#
#rc_joins:
# local:
# per_second: 0.1
# burst_count: 3
# remote:
# per_second: 0.01
# burst_count: 3
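The `per_second` / `burst_count` pairs behave like a token bucket: up to `burst_count` actions may happen back to back, with the allowance refilling at `per_second` tokens per second. A minimal Python sketch of that semantic (not Synapse's actual limiter), using the example `local` join limits:

```python
class RateLimiter:
    """Token-bucket sketch of per_second / burst_count ratelimiting."""

    def __init__(self, per_second: float, burst_count: int):
        self.per_second = per_second
        self.burst_count = burst_count
        self.tokens = float(burst_count)  # bucket starts full
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst_count,
                          self.tokens + (now - self.last) * self.per_second)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# With the example local settings (0.1/s, burst 3), three joins in quick
# succession are allowed and the fourth is ratelimited:
limiter = RateLimiter(per_second=0.1, burst_count=3)
print([limiter.allow(0.0) for _ in range(4)])  # [True, True, True, False]
print(limiter.allow(10.0))                     # True: one token refilled
```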
# Ratelimiting settings for incoming federation
@@ -1142,24 +1172,6 @@ account_validity:
#
#default_identity_server: https://matrix.org
# The list of identity servers trusted to verify third party
# identifiers by this server.
#
# Also defines the ID server which will be called when an account is
# deactivated (one will be picked arbitrarily).
#
# Note: This option is deprecated. Since v0.99.4, Synapse has tracked which identity
# server a 3PID has been bound to. For 3PIDs bound before then, Synapse runs a
# background migration script, informing itself that the identity server all of its
# 3PIDs have been bound to is likely one of the below.
#
# As of Synapse v1.4.0, all other functionality of this option has been deprecated, and
# it is now solely used for the purposes of the background migration script, and can be
# removed once it has run.
#trusted_third_party_id_servers:
# - matrix.org
# - vector.im
# Handle threepid (email/phone etc) registration and password resets through a set of
# *trusted* identity servers. Note that this allows the configured identity server to
# reset passwords for accounts!
@@ -1811,6 +1823,9 @@ sso:
# Each JSON Web Token needs to contain a "sub" (subject) claim, which is
# used as the localpart of the mxid.
#
# Additionally, the expiration time ("exp"), not before time ("nbf"),
# and issued at ("iat") claims are validated if present.
#
# Note that this is a non-standard login type and client support is
# expected to be non-existent.
#
@@ -1838,6 +1853,24 @@ sso:
#
#algorithm: "provided-by-your-issuer"
# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and
# validated for all JSON web tokens.
#
#issuer: "provided-by-your-issuer"
# A list of audiences to validate the "aud" claim against.
#
# Optional, if provided the "aud" claim will be required and
# validated for all JSON web tokens.
#
# Note that if the "aud" claim is included in a JSON web token then
# validation will fail without configuring audiences.
#
#audiences:
# - "provided-by-your-issuer"
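For illustration, a token carrying the claims described above can be assembled with nothing but the Python standard library. This is only a sketch of what a conforming HS256 token contains; the secret and claim values are placeholders, and a real deployment would normally mint tokens with a JWT library on the issuer side:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(secret: str, claims: dict) -> str:
    """Build an HS256 JSON Web Token carrying the given claims."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

now = int(time.time())
token = make_jwt("my-shared-secret", {   # hypothetical shared secret
    "sub": "alice",                      # becomes the localpart of the mxid
    "iss": "provided-by-your-issuer",    # checked if 'issuer' is configured
    "aud": "provided-by-your-issuer",    # checked if 'audiences' is configured
    "iat": now,                          # issued-at, validated if present
    "exp": now + 300,                    # expiry, validated if present
})
print(token.count("."))  # 2 -> header.payload.signature
```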
password_config:
   # Uncomment to disable password login
@@ -1927,8 +1960,8 @@ email:
#
#notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"

# app_name defines the default value for '%(app)s' in notif_from and email
# subjects. It defaults to 'Matrix'.
#
#app_name: my_branded_matrix_server
@@ -1997,6 +2030,73 @@ email:
#
#template_dir: "res/templates"
# Subjects to use when sending emails from Synapse.
#
# The placeholder '%(app)s' will be replaced with the value of the 'app_name'
# setting above, or by a value dictated by the Matrix client application.
#
# If a subject isn't overridden in this configuration file, the value used as
# its example will be used.
#
#subjects:
# Subjects for notification emails.
#
# On top of the '%(app)s' placeholder, these can use the following
# placeholders:
#
# * '%(person)s', which will be replaced by the display name of the user(s)
# that sent the message(s), e.g. "Alice and Bob".
# * '%(room)s', which will be replaced by the name of the room the
# message(s) have been sent to, e.g. "My super room".
#
# See the example provided for each setting to see which placeholder can be
# used and how to use them.
#
# Subject to use to notify about one message from one or more user(s) in a
# room which has a name.
#message_from_person_in_room: "[%(app)s] You have a message on %(app)s from %(person)s in the %(room)s room..."
#
# Subject to use to notify about one message from one or more user(s) in a
# room which doesn't have a name.
#message_from_person: "[%(app)s] You have a message on %(app)s from %(person)s..."
#
# Subject to use to notify about multiple messages from one or more users in
# a room which doesn't have a name.
#messages_from_person: "[%(app)s] You have messages on %(app)s from %(person)s..."
#
# Subject to use to notify about multiple messages in a room which has a
# name.
#messages_in_room: "[%(app)s] You have messages on %(app)s in the %(room)s room..."
#
# Subject to use to notify about multiple messages in multiple rooms.
#messages_in_room_and_others: "[%(app)s] You have messages on %(app)s in the %(room)s room and others..."
#
# Subject to use to notify about multiple messages from multiple persons in
# multiple rooms. This is similar to the setting above except it's used when
# the room in which the notification was triggered has no name.
#messages_from_person_and_others: "[%(app)s] You have messages on %(app)s from %(person)s and others..."
#
# Subject to use to notify about an invite to a room which has a name.
#invite_from_person_to_room: "[%(app)s] %(person)s has invited you to join the %(room)s room on %(app)s..."
#
# Subject to use to notify about an invite to a room which doesn't have a
# name.
#invite_from_person: "[%(app)s] %(person)s has invited you to chat on %(app)s..."
# Subjects for emails related to account administration.
#
# On top of the '%(app)s' placeholder, these can use the
# '%(server_name)s' placeholder, which will be replaced by the value of the
# 'server_name' setting in your Synapse configuration.
#
# Subject to use when sending a password reset email.
#password_reset: "[%(server_name)s] Password reset"
#
# Subject to use when sending a verification email to assert an address's
# ownership.
#email_validation: "[%(server_name)s] Validate your email"
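These subjects are ordinary Python `%`-style templates, so a subject line is produced by mapping interpolation. A small sketch using the example values mentioned in the comments above:

```python
# A subject template in the same form as the settings above.
subject_template = (
    "[%(app)s] You have a message on %(app)s from %(person)s "
    "in the %(room)s room..."
)

# Synapse fills the placeholders from context; here we do it by hand
# with the example values from the documentation above.
subject = subject_template % {
    "app": "Matrix",
    "person": "Alice and Bob",
    "room": "My super room",
}
print(subject)
# [Matrix] You have a message on Matrix from Alice and Bob in the My super room room...
```

Note that `%(app)s` may appear more than once in a template and is substituted everywhere.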
# Password providers allow homeserver administrators to integrate
# their Synapse installation with existing authentication methods
@@ -2307,3 +2407,57 @@ opentracing:
#
#    logging:
#      false
## Workers ##
# Disables sending of outbound federation transactions on the main process.
# Uncomment if using a federation sender worker.
#
#send_federation: false
# It is possible to run multiple federation sender workers, in which case the
# work is balanced across them.
#
# This configuration must be shared between all federation sender workers, and if
# changed all federation sender workers must be stopped at the same time and then
# started, to ensure that all instances are running with the same config (otherwise
# events may be dropped).
#
#federation_sender_instances:
# - federation_sender1
# When using workers this should be a map from `worker_name` to the
# HTTP replication listener of the worker, if configured.
#
#instance_map:
# worker1:
# host: localhost
# port: 8034
# Experimental: When using workers you can define which workers should
# handle event persistence and typing notifications. Any worker
# specified here must also be in the `instance_map`.
#
#stream_writers:
# events: worker1
# typing: worker1
# Configuration for Redis when using workers. This *must* be enabled when
# using workers (unless using old style direct TCP configuration).
#
redis:
# Uncomment the below to enable Redis support.
#
#enabled: true
# Optional host and port to use to connect to redis. Defaults to
# localhost and 6379
#
#host: localhost
#port: 6379
# Optional password if configured on the Redis instance
#
#password: <secret_password>
@@ -0,0 +1,32 @@
### Using synctl with workers
If you want to use `synctl` to manage your synapse processes, you will need to
create an additional configuration file for the main synapse process. That
configuration should look like this:
```yaml
worker_app: synapse.app.homeserver
```
Additionally, each worker app must be configured with the name of a "pid file",
to which it will write its process ID when it starts. For example, for a
synchrotron, you might write:
```yaml
worker_pid_file: /home/matrix/synapse/worker1.pid
```
Finally, to actually run your worker-based synapse, you must pass synctl the `-a`
commandline option to tell it to operate on all the worker configurations found
in the given directory, e.g.:
synctl -a $CONFIG/workers start
Currently one should always restart all workers when restarting or upgrading
synapse, unless you explicitly know it's safe not to. For instance, restarting
synapse without restarting all the synchrotrons may result in broken typing
notifications.
To manipulate a specific worker, you pass the -w option to synctl:
synctl -w $CONFIG/workers/worker1.yaml restart
@@ -1,10 +1,10 @@
# Scaling synapse via workers
For small instances it is recommended to run Synapse in the default monolith
mode. For larger instances where performance is a concern it can be helpful to
split out functionality into multiple separate python processes. These
processes are called 'workers', and are (eventually) intended to scale
horizontally independently.
Synapse's worker support is under active development and subject to change as
we attempt to rapidly scale ever larger Synapse instances. However we are
@@ -16,69 +16,115 @@ workers only work with PostgreSQL-based Synapse deployments. SQLite should only
be used for demo purposes and any admin considering workers should already be
running PostgreSQL.
## Main process/worker communication

The processes communicate with each other via a Synapse-specific protocol called
'replication' (analogous to MySQL- or Postgres-style database replication) which
feeds streams of newly written data between processes so they can be kept in
sync with the database state.

When configured to do so, Synapse uses a
[Redis pub/sub channel](https://redis.io/topics/pubsub) to send the replication
stream between all configured Synapse processes. Additionally, processes may
make HTTP requests to each other, primarily for operations which need to wait
for a reply ─ such as sending an event.
Redis support was added in v1.13.0 with it becoming the recommended method in
v1.18.0. It replaced the old direct TCP connections (which is deprecated as of
v1.18.0) to the main process. With Redis, rather than all the workers connecting
to the main process, all the workers and the main process connect to Redis,
which relays replication commands between processes. This can give a significant
cpu saving on the main process and will be a prerequisite for upcoming
performance improvements.
See the [Architectural diagram](#architectural-diagram) section at the end for
a visualisation of what this looks like.
## Setting up workers
A Redis server is required to manage the communication between the processes.
The Redis server should be installed following the normal procedure for your
distribution (e.g. `apt install redis-server` on Debian). It is safe to use an
existing Redis deployment if you have one.
Once installed, check that Redis is running and accessible from the host running
Synapse, for example by executing `echo PING | nc -q1 localhost 6379` and seeing
a response of `+PONG`.
The appropriate dependencies must also be installed for Synapse. If using a
virtualenv, these can be installed with:
```sh
pip install matrix-synapse[redis]
```
Note that these dependencies are included when synapse is installed with `pip
install matrix-synapse[all]`. They are also included in the debian packages from
`matrix.org` and in the docker images at
https://hub.docker.com/r/matrixdotorg/synapse/.
To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. See
[reverse_proxy.md](reverse_proxy.md) for information on setting up a reverse
proxy.

To enable workers you should create a configuration file for each worker
process. Each worker configuration file inherits the configuration of the shared
homeserver configuration file. You can then override configuration specific to
that worker, e.g. the HTTP listener that it provides (if any); logging
configuration; etc. You should minimise the number of overrides though to
maintain a usable config.
### Shared Configuration
Next you need to add both an HTTP replication listener, used for HTTP requests
between processes, and redis config to the shared Synapse configuration file
(`homeserver.yaml`). For example:
```yaml
# extend the existing `listeners` section. This defines the ports that the
# main process will listen on.
listeners:
  # The HTTP replication port
  - port: 9093
    bind_address: '127.0.0.1'
    type: http
    resources:
     - names: [replication]

redis:
  enabled: true
```
See the sample config for the full documentation of each option.

Under **no circumstances** should the replication listener be exposed to the
public internet; it has no authentication and is unencrypted.


### Worker Configuration
In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unique name for the worker
(`worker_name`). The currently available worker applications are listed below.
You must also specify the HTTP replication endpoint that it should talk to on
the main synapse process. `worker_replication_host` should specify the host of
the main synapse and `worker_replication_http_port` should point to the HTTP
replication port. If the worker will handle HTTP requests then the
`worker_listeners` option should be set with an `http` listener, in the same way
as the `listeners` option in the shared config.
For example:
```yaml
worker_app: synapse.app.generic_worker
worker_name: worker1

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
@@ -87,13 +133,14 @@ worker_listeners:
   resources:
     - names:
       - client
       - federation

worker_log_config: /home/matrix/synapse/config/worker1_log_config.yaml
```
...is a full configuration for a generic worker instance, which will expose a
plain HTTP endpoint on port 8083 separately serving various endpoints, e.g.
`/sync`, which are listed below.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (`localhost:8083` in the above example).
@@ -102,127 +149,24 @@ Finally, you need to start your worker processes. This can be done with either
`synctl` or your distribution's preferred service manager such as `systemd`. We
recommend the use of `systemd` where available: for information on setting up
`systemd` to start synapse workers, see
[systemd-with-workers](systemd-with-workers). To use `synctl`, see
[synctl_workers.md](synctl_workers.md).
## Available worker applications
### `synapse.app.generic_worker`

This worker can handle API requests matching the following regular
expressions:

# Sync requests
^/_matrix/client/(v2_alpha|r0)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0)/events$
^/_matrix/client/(api/v1|r0)/initialSync$
^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$

# Federation requests
^/_matrix/federation/v1/event/
^/_matrix/federation/v1/state/
^/_matrix/federation/v1/state_ids/
@@ -242,40 +186,145 @@ endpoints matching the following regular expressions:
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/exchange_third_party_invite/
^/_matrix/federation/v1/user/devices/
^/_matrix/federation/v1/get_groups_publicised$
^/_matrix/key/v2/query
# Inbound federation transaction request
^/_matrix/federation/v1/send/
# Client API requests
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
# Registration/login requests
^/_matrix/client/(api/v1|r0|unstable)/login$
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
# Event sending requests
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/
Additionally, the following REST endpoints can be handled for GET requests:

^/_matrix/federation/v1/groups/
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight:

^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
Note that an HTTP listener with `client` and `federation` resources must be
configured in the `worker_listeners` option in the worker config.
#### Load balancing
It is possible to run multiple instances of this worker app, with incoming requests
being load-balanced between them by the reverse-proxy. However, different endpoints
have different characteristics and so admins
may wish to run multiple groups of workers handling different endpoints so that
load balancing can be done in different ways.
For `/sync` and `/initialSync` requests it will be more efficient if all
requests from a particular user are routed to a single instance. Extracting a
user ID from the access token or `Authorization` header is currently left as an
exercise for the reader. Admins may additionally wish to separate out `/sync`
requests that have a `since` query parameter from those that don't (and
`/initialSync`): the requests that don't are known as "initial sync" requests,
which happen when a user logs in on a new device and can be *very* resource
intensive, so isolating them will stop them from interfering with other users'
ongoing syncs.
Federation and client requests can be balanced via simple round robin.
The inbound federation transaction request `^/_matrix/federation/v1/send/`
should be balanced by source IP so that transactions from the same remote server
go to the same process.
Registration/login requests can be handled separately purely to help ensure that
unexpected load doesn't affect new logins and sign ups.
Finally, event sending requests can be balanced by the room ID in the URI (or
the full URI, or even just round robin); the room ID is the path component after
`/rooms/`. If there is a large bridge connected that is sending or may send lots
of events, then a dedicated set of workers can be provisioned to limit the
effects of bursts of events from that bridge on events sent by normal users.
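As a sketch of how this routing might look in practice, here is a hypothetical nginx fragment (the upstream names, worker ports, and `hash` keys are illustrative assumptions, not part of Synapse's documentation); it would live inside an existing `server` block alongside the rest of the reverse-proxy config:

```nginx
# Inbound federation transactions: balance by source IP so transactions
# from the same remote server always reach the same worker.
upstream federation_receivers {
    hash $remote_addr;
    server localhost:8083;
    server localhost:8084;
}

# Sync workers: a crude per-user affinity by hashing the Authorization
# header, so one user's requests tend to hit one instance.
upstream sync_workers {
    hash $http_authorization consistent;
    server localhost:8085;
    server localhost:8086;
}

location ~ ^/_matrix/federation/v1/send/ {
    proxy_pass http://federation_receivers;
}

location ~ ^/_matrix/client/(v2_alpha|r0)/sync$ {
    proxy_pass http://sync_workers;
}
```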
#### Stream writers
Additionally, there is *experimental* support for moving writing of specific
streams (such as events) off of the main process to a particular worker. (This
is only supported with Redis-based replication.)
Currently supported streams are `events` and `typing`.
To enable this, the worker must have a HTTP replication listener configured,
have a `worker_name` and be listed in the `instance_map` config. For example to
move event persistence off to a dedicated worker, the shared configuration would
include:
```yaml
instance_map:
  event_persister1:
    host: localhost
    port: 8034

stream_writers:
  events: event_persister1
```
### `synapse.app.pusher`
Handles sending push notifications to sygnal and email. Doesn't handle any
REST endpoints itself, but you should set `start_pushers: False` in the
shared configuration file to stop the main synapse sending push notifications.
Note this worker cannot be load-balanced: only one instance should be active.
### `synapse.app.appservice`
Handles sending output traffic to Application Services. Doesn't handle any
REST endpoints itself, but you should set `notify_appservices: False` in the
shared configuration file to stop the main synapse sending appservice notifications.
Note this worker cannot be load-balanced: only one instance should be active.
### `synapse.app.federation_sender`

Handles sending federation traffic to other servers. Doesn't handle any
REST endpoints itself, but you should set `send_federation: False` in the
shared configuration file to stop the main synapse sending this traffic.
Note this worker cannot be load-balanced: only one instance should be active. If running multiple federation senders then you must list each
instance in the `federation_sender_instances` option by their `worker_name`.
All instances must be stopped and started when adding or removing instances.
For example:
```yaml
federation_sender_instances:
- federation_sender1
- federation_sender2
```
### `synapse.app.media_repository`
@@ -314,46 +363,6 @@ and you must configure a single instance to run the background tasks, e.g.:

```yaml
media_instance_running_background_jobs: "media-repository-1"
```
### `synapse.app.client_reader`
Handles client API endpoints. It can handle REST endpoints matching the
following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
^/_matrix/client/(api/v1|r0|unstable)/login$
^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
Additionally, the following REST endpoints can be handled for GET requests:
^/_matrix/client/(api/v1|r0|unstable)/pushrules/.*$
^/_matrix/client/(api/v1|r0|unstable)/groups/.*$
^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/account_data/
^/_matrix/client/(api/v1|r0|unstable)/user/[^/]*/rooms/[^/]*/account_data/
Additionally, the following REST endpoints can be handled, but all requests must
be routed to the same instance:
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(r0|unstable)/auth/.*/fallback/web$
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
### `synapse.app.user_dir`

Handles searches in the user directory. It can handle REST endpoints matching
@@ -388,15 +397,48 @@ file. For example:
    worker_main_http_uri: http://127.0.0.1:8008
### Historical apps

*Note:* Historically there used to be more apps, however they have been
amalgamated into a single `synapse.app.generic_worker` app. The remaining apps
are ones that do specific processing unrelated to requests, e.g. the `pusher`
that handles sending out push notifications for new events. The intention is for
all these to be folded into the `generic_worker` app and to use config to define
which processes handle the various processing such as push notifications.

## Architectural diagram
The following shows an example setup using Redis and a reverse proxy:
```
Clients & Federation
|
v
+-----------+
| |
| Reverse |
| Proxy |
| |
+-----------+
| | |
| | | HTTP requests
+-------------------+ | +-----------+
| +---+ |
| | |
v v v
+--------------+ +--------------+ +--------------+ +--------------+
| Main | | Generic | | Generic | | Event |
| Process | | Worker 1 | | Worker 2 | | Persister |
+--------------+ +--------------+ +--------------+ +--------------+
^ ^ | ^ | | ^ | ^ ^
| | | | | | | | | |
| | | | | HTTP | | | | |
| +----------+<--|---|---------+ | | | |
| | +-------------|-->+----------+ |
| | | |
| | | |
v v v v
====================================================================
Redis pub/sub channel
```
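Since stream writers (and the replication channel shown in this diagram) require Redis-based replication, the shared configuration also needs Redis enabled; a minimal sketch, assuming a Redis server running on localhost with the default port:

```yaml
redis:
    enabled: true
    host: localhost
    port: 6379
```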
@@ -24,7 +24,6 @@ DISTS = (
     "debian:sid",
     "ubuntu:xenial",
     "ubuntu:bionic",
-    "ubuntu:eoan",
     "ubuntu:focal",
 )
@@ -0,0 +1,34 @@
#!/bin/bash
#
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# This script checks that line terminators in all repository files (excluding
# those in the .git directory) feature unix line terminators.
#
# Usage:
#
# ./check_line_terminators.sh
#
# The script will emit exit code 1 if any files that do not use unix line
# terminators are found, 0 otherwise.
# cd to the root of the repository
cd `dirname $0`/..
# Find and print files with non-unix line terminators
if find . -path './.git/*' -prune -o -type f -print0 | xargs -0 grep -I -l $'\r$'; then
echo -e '\e[31mERROR: found files with CRLF line endings. See above.\e[39m'
exit 1
fi
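The core CRLF detection used by the script above can be exercised in isolation; a minimal sketch assuming GNU grep and bash (for the `$'\r$'` pattern), using a throwaway directory:

```shell
# Sketch: `grep -I -l $'\r$'` lists (non-binary) files containing a
# carriage return at end-of-line, i.e. DOS/Windows line terminators.
tmpdir=$(mktemp -d)
printf 'unix line\n'  > "$tmpdir/ok.txt"
printf 'dos line\r\n' > "$tmpdir/bad.txt"

# Only the CRLF file should be listed.
matches=$(find "$tmpdir" -type f -print0 | xargs -0 grep -I -l $'\r$')
echo "$matches"

rm -rf "$tmpdir"
```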
@@ -11,7 +11,7 @@ if [ $# -ge 1 ]
 then
     files=$*
 else
-    files="synapse tests scripts-dev scripts"
+    files="synapse tests scripts-dev scripts contrib synctl"
 fi

 echo "Linting these locations: $files"
@@ -48,6 +48,7 @@ from synapse.storage.data_stores.main.media_repository import (
 )
 from synapse.storage.data_stores.main.registration import (
     RegistrationBackgroundUpdateStore,
+    find_max_generated_user_id_localpart,
 )
 from synapse.storage.data_stores.main.room import RoomBackgroundUpdateStore
 from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
@@ -68,7 +69,7 @@ logger = logging.getLogger("synapse_port_db")
 BOOLEAN_COLUMNS = {
-    "events": ["processed", "outlier", "contains_url"],
+    "events": ["processed", "outlier", "contains_url", "count_as_unread"],
     "rooms": ["is_public"],
     "event_edges": ["is_state"],
     "presence_list": ["accepted"],
@@ -622,8 +623,10 @@ class Porter(object):
                 )
             )

-            # Step 5. Do final post-processing
+            # Step 5. Set up sequences
+            self.progress.set_state("Setting up sequence generators")
             await self._setup_state_group_id_seq()
+            await self._setup_user_id_seq()

             self.progress.done()
         except Exception as e:
@@ -793,6 +796,13 @@ class Porter(object):
         return self.postgres_store.db.runInteraction("setup_state_group_id_seq", r)

+    def _setup_user_id_seq(self):
+        def r(txn):
+            next_id = find_max_generated_user_id_localpart(txn) + 1
+            txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
+
+        return self.postgres_store.db.runInteraction("setup_user_id_seq", r)

 ##############################################
 # The following is simply UI stuff
@@ -22,6 +22,7 @@ class RedisProtocol:
     def publish(self, channel: str, message: bytes): ...

 class SubscriberProtocol:
+    def __init__(self, *args, **kwargs): ...
     password: Optional[str]
     def subscribe(self, channels: Union[str, List[str]]): ...
     def connectionMade(self): ...
@@ -17,6 +17,7 @@
 """ This is a reference implementation of a Matrix homeserver.
 """

+import json
 import os
 import sys

@@ -25,6 +26,9 @@ if sys.version_info < (3, 5):
     print("Synapse requires Python 3.5 or above.")
     sys.exit(1)

+# Twisted and canonicaljson will fail to import when this file is executed to
+# get the __version__ during a fresh install. That's OK and subsequent calls to
+# actually start Synapse will import these libraries fine.
 try:
     from twisted.internet import protocol
     from twisted.internet.protocol import Factory
@@ -36,7 +40,15 @@ try:
 except ImportError:
     pass

-__version__ = "1.17.0"
+# Use the standard library json implementation instead of simplejson.
+try:
+    from canonicaljson import set_json_library
+
+    set_json_library(json)
+except ImportError:
+    pass
+
+__version__ = "1.18.0"

 if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
     # We import here so that we don't have to install a bunch of deps when
@@ -82,7 +82,7 @@ class Auth(object):
     @defer.inlineCallbacks
     def check_from_context(self, room_version: str, event, context, do_sig_check=True):
-        prev_state_ids = yield context.get_prev_state_ids()
+        prev_state_ids = yield defer.ensureDeferred(context.get_prev_state_ids())
         auth_events_ids = yield self.compute_auth_events(
             event, prev_state_ids, for_verification=True
         )
@@ -127,8 +127,10 @@ class Auth(object):
         if current_state:
             member = current_state.get((EventTypes.Member, user_id), None)
         else:
-            member = yield self.state.get_current_state(
-                room_id=room_id, event_type=EventTypes.Member, state_key=user_id
+            member = yield defer.ensureDeferred(
+                self.state.get_current_state(
+                    room_id=room_id, event_type=EventTypes.Member, state_key=user_id
+                )
             )

         membership = member.membership if member else None
@@ -665,8 +667,10 @@ class Auth(object):
                 )
             return member_event.membership, member_event.event_id
         except AuthError:
-            visibility = yield self.state.get_current_state(
-                room_id, EventTypes.RoomHistoryVisibility, ""
+            visibility = yield defer.ensureDeferred(
+                self.state.get_current_state(
+                    room_id, EventTypes.RoomHistoryVisibility, ""
+                )
             )
             if (
                 visibility
@@ -17,13 +17,17 @@
 """Contains exceptions and error codes."""

 import logging
+import typing
 from http import HTTPStatus
-from typing import Dict, List
+from typing import Dict, List, Optional, Union

 from canonicaljson import json

 from twisted.web import http

+if typing.TYPE_CHECKING:
+    from synapse.types import JsonDict
+
 logger = logging.getLogger(__name__)
@@ -78,11 +82,11 @@ class CodeMessageException(RuntimeError):
     """An exception with integer code and message string attributes.

     Attributes:
-        code (int): HTTP error code
-        msg (str): string describing the error
+        code: HTTP error code
+        msg: string describing the error
     """

-    def __init__(self, code, msg):
+    def __init__(self, code: Union[int, HTTPStatus], msg: str):
         super(CodeMessageException, self).__init__("%d: %s" % (code, msg))

         # Some calls to this method pass instances of http.HTTPStatus for `code`.
@@ -123,16 +127,16 @@ class SynapseError(CodeMessageException):
     message (as well as an HTTP status code).

     Attributes:
-        errcode (str): Matrix error code e.g 'M_FORBIDDEN'
+        errcode: Matrix error code e.g 'M_FORBIDDEN'
     """

-    def __init__(self, code, msg, errcode=Codes.UNKNOWN):
+    def __init__(self, code: int, msg: str, errcode: str = Codes.UNKNOWN):
         """Constructs a synapse error.

         Args:
-            code (int): The integer error code (an HTTP response code)
-            msg (str): The human-readable error message.
-            errcode (str): The matrix error code e.g 'M_FORBIDDEN'
+            code: The integer error code (an HTTP response code)
+            msg: The human-readable error message.
+            errcode: The matrix error code e.g 'M_FORBIDDEN'
         """
         super(SynapseError, self).__init__(code, msg)
         self.errcode = errcode
@@ -145,10 +149,16 @@ class ProxiedRequestError(SynapseError):
     """An error from a general matrix endpoint, eg. from a proxied Matrix API call.

     Attributes:
-        errcode (str): Matrix error code e.g 'M_FORBIDDEN'
+        errcode: Matrix error code e.g 'M_FORBIDDEN'
     """

-    def __init__(self, code, msg, errcode=Codes.UNKNOWN, additional_fields=None):
+    def __init__(
+        self,
+        code: int,
+        msg: str,
+        errcode: str = Codes.UNKNOWN,
+        additional_fields: Optional[Dict] = None,
+    ):
         super(ProxiedRequestError, self).__init__(code, msg, errcode)

         if additional_fields is None:
             self._additional_fields = {}  # type: Dict
@@ -164,12 +174,12 @@ class ConsentNotGivenError(SynapseError):
     privacy policy.
     """

-    def __init__(self, msg, consent_uri):
+    def __init__(self, msg: str, consent_uri: str):
         """Constructs a ConsentNotGivenError

         Args:
-            msg (str): The human-readable error message
-            consent_url (str): The URL where the user can give their consent
+            msg: The human-readable error message
+            consent_url: The URL where the user can give their consent
         """
         super(ConsentNotGivenError, self).__init__(
             code=HTTPStatus.FORBIDDEN, msg=msg, errcode=Codes.CONSENT_NOT_GIVEN
@@ -185,11 +195,11 @@ class UserDeactivatedError(SynapseError):
     authenticated endpoint, but the account has been deactivated.
     """

-    def __init__(self, msg):
+    def __init__(self, msg: str):
         """Constructs a UserDeactivatedError

         Args:
-            msg (str): The human-readable error message
+            msg: The human-readable error message
         """
         super(UserDeactivatedError, self).__init__(
             code=HTTPStatus.FORBIDDEN, msg=msg, errcode=Codes.USER_DEACTIVATED
@@ -201,16 +211,16 @@ class FederationDeniedError(SynapseError):
     is not on its federation whitelist.

     Attributes:
-        destination (str): The destination which has been denied
+        destination: The destination which has been denied
     """

-    def __init__(self, destination):
+    def __init__(self, destination: Optional[str]):
         """Raised by federation client or server to indicate that we are
         are deliberately not attempting to contact a given server because it is
         not on our federation whitelist.

         Args:
-            destination (str): the domain in question
+            destination: the domain in question
         """

         self.destination = destination
@@ -228,11 +238,11 @@ class InteractiveAuthIncompleteError(Exception):
     (This indicates we should return a 401 with 'result' as the body)

     Attributes:
-        result (dict): the server response to the request, which should be
+        result: the server response to the request, which should be
             passed back to the client
     """

-    def __init__(self, result):
+    def __init__(self, result: "JsonDict"):
         super(InteractiveAuthIncompleteError, self).__init__(
             "Interactive auth not yet complete"
         )
@@ -245,7 +255,6 @@ class UnrecognizedRequestError(SynapseError):
     def __init__(self, *args, **kwargs):
         if "errcode" not in kwargs:
             kwargs["errcode"] = Codes.UNRECOGNIZED
-        message = None
         if len(args) == 0:
             message = "Unrecognized request"
         else:
@@ -256,7 +265,7 @@ class UnrecognizedRequestError(SynapseError):
 class NotFoundError(SynapseError):
     """An error indicating we can't find the thing you asked for"""

-    def __init__(self, msg="Not found", errcode=Codes.NOT_FOUND):
+    def __init__(self, msg: str = "Not found", errcode: str = Codes.NOT_FOUND):
         super(NotFoundError, self).__init__(404, msg, errcode=errcode)
@@ -282,21 +291,23 @@ class InvalidClientCredentialsError(SynapseError):
     M_UNKNOWN_TOKEN respectively.
     """

-    def __init__(self, msg, errcode):
+    def __init__(self, msg: str, errcode: str):
         super().__init__(code=401, msg=msg, errcode=errcode)


 class MissingClientTokenError(InvalidClientCredentialsError):
     """Raised when we couldn't find the access token in a request"""

-    def __init__(self, msg="Missing access token"):
+    def __init__(self, msg: str = "Missing access token"):
         super().__init__(msg=msg, errcode="M_MISSING_TOKEN")


 class InvalidClientTokenError(InvalidClientCredentialsError):
     """Raised when we didn't understand the access token in a request"""

-    def __init__(self, msg="Unrecognised access token", soft_logout=False):
+    def __init__(
+        self, msg: str = "Unrecognised access token", soft_logout: bool = False
+    ):
         super().__init__(msg=msg, errcode="M_UNKNOWN_TOKEN")
         self._soft_logout = soft_logout
@@ -314,11 +325,11 @@ class ResourceLimitError(SynapseError):
     def __init__(
         self,
-        code,
-        msg,
-        errcode=Codes.RESOURCE_LIMIT_EXCEEDED,
-        admin_contact=None,
-        limit_type=None,
+        code: int,
+        msg: str,
+        errcode: str = Codes.RESOURCE_LIMIT_EXCEEDED,
+        admin_contact: Optional[str] = None,
+        limit_type: Optional[str] = None,
     ):
         self.admin_contact = admin_contact
         self.limit_type = limit_type
@@ -366,10 +377,10 @@ class StoreError(SynapseError):
 class InvalidCaptchaError(SynapseError):
     def __init__(
         self,
-        code=400,
-        msg="Invalid captcha.",
-        error_url=None,
-        errcode=Codes.CAPTCHA_INVALID,
+        code: int = 400,
+        msg: str = "Invalid captcha.",
+        error_url: Optional[str] = None,
+        errcode: str = Codes.CAPTCHA_INVALID,
     ):
         super(InvalidCaptchaError, self).__init__(code, msg, errcode)
         self.error_url = error_url
@@ -384,10 +395,10 @@ class LimitExceededError(SynapseError):
     def __init__(
         self,
-        code=429,
-        msg="Too Many Requests",
-        retry_after_ms=None,
-        errcode=Codes.LIMIT_EXCEEDED,
+        code: int = 429,
+        msg: str = "Too Many Requests",
+        retry_after_ms: Optional[int] = None,
+        errcode: str = Codes.LIMIT_EXCEEDED,
     ):
         super(LimitExceededError, self).__init__(code, msg, errcode)
         self.retry_after_ms = retry_after_ms
@@ -400,10 +411,10 @@ class RoomKeysVersionError(SynapseError):
     """A client has tried to upload to a non-current version of the room_keys store
     """

-    def __init__(self, current_version):
+    def __init__(self, current_version: str):
         """
         Args:
-            current_version (str): the current version of the store they should have used
+            current_version: the current version of the store they should have used
         """
         super(RoomKeysVersionError, self).__init__(
             403, "Wrong room_keys version", Codes.WRONG_ROOM_KEYS_VERSION
@@ -415,7 +426,7 @@ class UnsupportedRoomVersionError(SynapseError):
     """The client's request to create a room used a room version that the server does
     not support."""

-    def __init__(self, msg="Homeserver does not support this room version"):
+    def __init__(self, msg: str = "Homeserver does not support this room version"):
         super(UnsupportedRoomVersionError, self).__init__(
             code=400, msg=msg, errcode=Codes.UNSUPPORTED_ROOM_VERSION,
         )
@@ -437,7 +448,7 @@ class IncompatibleRoomVersionError(SynapseError):
     failing.
     """

-    def __init__(self, room_version):
+    def __init__(self, room_version: str):
         super(IncompatibleRoomVersionError, self).__init__(
             code=400,
             msg="Your homeserver does not support the features required to "
@@ -457,8 +468,8 @@ class PasswordRefusedError(SynapseError):
     def __init__(
         self,
-        msg="This password doesn't comply with the server's policy",
-        errcode=Codes.WEAK_PASSWORD,
+        msg: str = "This password doesn't comply with the server's policy",
+        errcode: str = Codes.WEAK_PASSWORD,
     ):
         super(PasswordRefusedError, self).__init__(
             code=400, msg=msg, errcode=errcode,
@@ -483,14 +494,14 @@ class RequestSendFailed(RuntimeError):
         self.can_retry = can_retry


-def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
+def cs_error(msg: str, code: str = Codes.UNKNOWN, **kwargs):
     """ Utility method for constructing an error response for client-server
     interactions.

     Args:
-        msg (str): The error message.
-        code (str): The error code.
-        kwargs : Additional keys to add to the response.
+        msg: The error message.
+        code: The error code.
+        kwargs: Additional keys to add to the response.
     Returns:
         A dict representing the error response JSON.
     """
@@ -512,7 +523,14 @@ class FederationError(RuntimeError):
         is wrong (e.g., it referred to an invalid event)
     """

-    def __init__(self, level, code, reason, affected, source=None):
+    def __init__(
+        self,
+        level: str,
+        code: int,
+        reason: str,
+        affected: str,
+        source: Optional[str] = None,
+    ):
         if level not in ["FATAL", "ERROR", "WARN"]:
             raise ValueError("Level is not valid: %s" % (level,))
         self.level = level
@@ -539,16 +557,16 @@ class HttpResponseException(CodeMessageException):
     Represents an HTTP-level failure of an outbound request

     Attributes:
-        response (bytes): body of response
+        response: body of response
     """

-    def __init__(self, code, msg, response):
+    def __init__(self, code: int, msg: str, response: bytes):
         """

         Args:
-            code (int): HTTP status code
-            msg (str): reason phrase from HTTP response status line
-            response (bytes): body of response
+            code: HTTP status code
+            msg: reason phrase from HTTP response status line
+            response: body of response
         """
         super(HttpResponseException, self).__init__(code, msg)
         self.response = response
@@ -573,7 +591,7 @@ class HttpResponseException(CodeMessageException):
         # try to parse the body as json, to get better errcode/msg, but
         # default to M_UNKNOWN with the HTTP status as the error text
         try:
-            j = json.loads(self.response)
+            j = json.loads(self.response.decode("utf-8"))
         except ValueError:
             j = {}
@@ -21,7 +21,7 @@ from typing import Dict, Iterable, Optional, Set

 from typing_extensions import ContextManager

-from twisted.internet import address, defer, reactor
+from twisted.internet import address, reactor

 import synapse
 import synapse.events
@@ -87,7 +87,6 @@ from synapse.replication.tcp.streams import (
     ReceiptsStream,
     TagAccountDataStream,
     ToDeviceStream,
-    TypingStream,
 )
 from synapse.rest.admin import register_servlets_for_media_repo
 from synapse.rest.client.v1 import events
@@ -111,6 +110,7 @@ from synapse.rest.client.v1.room import (
     RoomSendEventRestServlet,
     RoomStateEventRestServlet,
     RoomStateRestServlet,
+    RoomTypingRestServlet,
 )
 from synapse.rest.client.v1.voip import VoipRestServlet
 from synapse.rest.client.v2_alpha import groups, sync, user_directory
@@ -374,9 +374,8 @@ class GenericWorkerPresence(BasePresenceHandler):

         return _user_syncing()

-    @defer.inlineCallbacks
-    def notify_from_replication(self, states, stream_id):
-        parties = yield get_interested_parties(self.store, states)
+    async def notify_from_replication(self, states, stream_id):
+        parties = await get_interested_parties(self.store, states)
         room_ids_to_states, users_to_states = parties

         self.notifier.on_new_event(
@@ -386,8 +385,7 @@ class GenericWorkerPresence(BasePresenceHandler):
             users=users_to_states.keys(),
         )

-    @defer.inlineCallbacks
-    def process_replication_rows(self, token, rows):
+    async def process_replication_rows(self, token, rows):
         states = [
             UserPresenceState(
                 row.user_id,
@@ -405,7 +403,7 @@ class GenericWorkerPresence(BasePresenceHandler):
             self.user_to_current_state[state.user_id] = state

         stream_id = token
-        yield self.notify_from_replication(states, stream_id)
+        await self.notify_from_replication(states, stream_id)

     def get_currently_syncing_users_for_replication(self) -> Iterable[str]:
         return [
@@ -451,37 +449,6 @@ class GenericWorkerPresence(BasePresenceHandler):
         await self._bump_active_client(user_id=user_id)


-class GenericWorkerTyping(object):
-    def __init__(self, hs):
-        self._latest_room_serial = 0
-        self._reset()
-
-    def _reset(self):
-        """
-        Reset the typing handler's data caches.
-        """
-        # map room IDs to serial numbers
-        self._room_serials = {}
-        # map room IDs to sets of users currently typing
-        self._room_typing = {}
-
-    def process_replication_rows(self, token, rows):
-        if self._latest_room_serial > token:
-            # The master has gone backwards. To prevent inconsistent data, just
-            # clear everything.
-            self._reset()
-
-        # Set the latest serial token to whatever the server gave us.
-        self._latest_room_serial = token
-
-        for row in rows:
-            self._room_serials[row.room_id] = token
-            self._room_typing[row.room_id] = row.user_ids
-
-    def get_current_token(self) -> int:
-        return self._latest_room_serial
-
-
 class GenericWorkerSlavedStore(
     # FIXME(#3714): We need to add UserDirectoryStore as we write directly
     # rather than going via the correct worker.
@@ -511,25 +478,7 @@ class GenericWorkerSlavedStore(
     SearchWorkerStore,
     BaseSlavedStore,
 ):
-    def __init__(self, database, db_conn, hs):
-        super(GenericWorkerSlavedStore, self).__init__(database, db_conn, hs)
-
-        # We pull out the current federation stream position now so that we
-        # always have a known value for the federation position in memory so
-        # that we don't have to bounce via a deferred once when we start the
-        # replication streams.
-        self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
-
-    def _get_federation_out_pos(self, db_conn):
-        sql = "SELECT stream_id FROM federation_stream_position WHERE type = ?"
-        sql = self.database_engine.convert_param_style(sql)
-
-        txn = db_conn.cursor()
-        txn.execute(sql, ("federation",))
-        rows = txn.fetchall()
-        txn.close()
-
-        return rows[0][0] if rows else -1
+    pass


 class GenericWorkerServer(HomeServer):
@@ -576,6 +525,7 @@ class GenericWorkerServer(HomeServer):
                     KeyUploadServlet(self).register(resource)
                     AccountDataServlet(self).register(resource)
                     RoomAccountDataServlet(self).register(resource)
+                    RoomTypingRestServlet(self).register(resource)

                     sync.register_servlets(self, resource)
                     events.register_servlets(self, resource)
@@ -678,7 +628,7 @@ class GenericWorkerServer(HomeServer):
         self.get_tcp_replication().start_replication(self)

-    def remove_pusher(self, app_id, push_key, user_id):
+    async def remove_pusher(self, app_id, push_key, user_id):
         self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)

     def build_replication_data_handler(self):
@@ -687,16 +637,12 @@ class GenericWorkerServer(HomeServer):
     def build_presence_handler(self):
         return GenericWorkerPresence(self)

-    def build_typing_handler(self):
-        return GenericWorkerTyping(self)
-

 class GenericWorkerReplicationHandler(ReplicationDataHandler):
     def __init__(self, hs):
         super(GenericWorkerReplicationHandler, self).__init__(hs)

         self.store = hs.get_datastore()
-        self.typing_handler = hs.get_typing_handler()
         self.presence_handler = hs.get_presence_handler()  # type: GenericWorkerPresence
         self.notifier = hs.get_notifier()
@@ -733,11 +679,6 @@ class GenericWorkerReplicationHandler(ReplicationDataHandler):
             await self.pusher_pool.on_new_receipts(
                 token, token, {row.room_id for row in rows}
             )
-        elif stream_name == TypingStream.NAME:
-            self.typing_handler.process_replication_rows(token, rows)
-            self.notifier.on_new_event(
-                "typing_key", token, rooms=[row.room_id for row in rows]
-            )
         elif stream_name == ToDeviceStream.NAME:
             entities = [row.entity for row in rows if row.entity.startswith("@")]
             if entities:
@@ -812,19 +753,11 @@ class FederationSenderHandler(object):
         self.federation_sender = hs.get_federation_sender()
         self._hs = hs
# if the worker is restarted, we want to pick up where we left off in # Stores the latest position in the federation stream we've gotten up
# the replication stream, so load the position from the database. # to. This is always set before we use it.
# self.federation_position = None
# XXX is this actually worthwhile? Whenever the master is restarted, we'll
# drop some rows anyway (which is mostly fine because we're only dropping
# typing and presence notifications). If the replication stream is
# unreliable, why do we do all this hoop-jumping to store the position in the
# database? See also https://github.com/matrix-org/synapse/issues/7535.
#
self.federation_position = self.store.federation_out_pos_startup
self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer") self._fed_position_linearizer = Linearizer(name="_fed_position_linearizer")
self._last_ack = self.federation_position
def on_start(self): def on_start(self):
# There may be some events that are persisted but haven't been sent, # There may be some events that are persisted but haven't been sent,
@@ -932,7 +865,6 @@ class FederationSenderHandler(object):
# We ACK this token over replication so that the master can drop # We ACK this token over replication so that the master can drop
# its in memory queues # its in memory queues
self._hs.get_tcp_replication().send_federation_ack(current_position) self._hs.get_tcp_replication().send_federation_ack(current_position)
self._last_ack = current_position
except Exception: except Exception:
logger.exception("Error updating federation stream position") logger.exception("Error updating federation stream position")
@@ -960,7 +892,7 @@ def start(config_options):
) )
if config.worker_app == "synapse.app.appservice": if config.worker_app == "synapse.app.appservice":
if config.notify_appservices: if config.appservice.notify_appservices:
sys.stderr.write( sys.stderr.write(
"\nThe appservices must be disabled in the main synapse process" "\nThe appservices must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker." "\nbefore they can be run in a separate worker."
@@ -970,13 +902,13 @@ def start(config_options):
sys.exit(1) sys.exit(1)
# Force the appservice to start since they will be disabled in the main config # Force the appservice to start since they will be disabled in the main config
config.notify_appservices = True config.appservice.notify_appservices = True
else: else:
# For other worker types we force this to off. # For other worker types we force this to off.
config.notify_appservices = False config.appservice.notify_appservices = False
if config.worker_app == "synapse.app.pusher": if config.worker_app == "synapse.app.pusher":
if config.start_pushers: if config.server.start_pushers:
sys.stderr.write( sys.stderr.write(
"\nThe pushers must be disabled in the main synapse process" "\nThe pushers must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker." "\nbefore they can be run in a separate worker."
@@ -986,13 +918,13 @@ def start(config_options):
sys.exit(1) sys.exit(1)
# Force the pushers to start since they will be disabled in the main config # Force the pushers to start since they will be disabled in the main config
config.start_pushers = True config.server.start_pushers = True
else: else:
# For other worker types we force this to off. # For other worker types we force this to off.
config.start_pushers = False config.server.start_pushers = False
if config.worker_app == "synapse.app.user_dir": if config.worker_app == "synapse.app.user_dir":
if config.update_user_directory: if config.server.update_user_directory:
sys.stderr.write( sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process" "\nThe update_user_directory must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker." "\nbefore they can be run in a separate worker."
@@ -1002,13 +934,13 @@ def start(config_options):
sys.exit(1) sys.exit(1)
# Force the pushers to start since they will be disabled in the main config # Force the pushers to start since they will be disabled in the main config
config.update_user_directory = True config.server.update_user_directory = True
else: else:
# For other worker types we force this to off. # For other worker types we force this to off.
config.update_user_directory = False config.server.update_user_directory = False
if config.worker_app == "synapse.app.federation_sender": if config.worker_app == "synapse.app.federation_sender":
if config.send_federation: if config.worker.send_federation:
sys.stderr.write( sys.stderr.write(
"\nThe send_federation must be disabled in the main synapse process" "\nThe send_federation must be disabled in the main synapse process"
"\nbefore they can be run in a separate worker." "\nbefore they can be run in a separate worker."
@@ -1018,10 +950,10 @@ def start(config_options):
sys.exit(1) sys.exit(1)
# Force the pushers to start since they will be disabled in the main config # Force the pushers to start since they will be disabled in the main config
config.send_federation = True config.worker.send_federation = True
else: else:
# For other worker types we force this to off. # For other worker types we force this to off.
config.send_federation = False config.worker.send_federation = False
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
+18 -20
@@ -380,13 +380,12 @@ def setup(config_options):
         hs.setup_master()
 
-    @defer.inlineCallbacks
-    def do_acme():
+    async def do_acme() -> bool:
         """
         Reprovision an ACME certificate, if it's required.
 
         Returns:
-            Deferred[bool]: Whether the cert has been updated.
+            Whether the cert has been updated.
         """
         acme = hs.get_acme_handler()
@@ -405,7 +404,7 @@ def setup(config_options):
             provision = True
 
         if provision:
-            yield acme.provision_certificate()
+            await acme.provision_certificate()
 
         return provision
@@ -415,7 +414,7 @@ def setup(config_options):
         Provision a certificate from ACME, if required, and reload the TLS
         certificate if it's renewed.
         """
-        reprovisioned = yield do_acme()
+        reprovisioned = yield defer.ensureDeferred(do_acme())
         if reprovisioned:
             _base.refresh_certificate(hs)
@@ -427,8 +426,8 @@ def setup(config_options):
             acme = hs.get_acme_handler()
             # Start up the webservices which we will respond to ACME
             # challenges with, and then provision.
-            yield acme.start_listening()
-            yield do_acme()
+            yield defer.ensureDeferred(acme.start_listening())
+            yield defer.ensureDeferred(do_acme())
 
             # Check if it needs to be reprovisioned every day.
             hs.get_clock().looping_call(reprovision_acme, 24 * 60 * 60 * 1000)
@@ -483,8 +482,7 @@ class SynapseService(service.Service):
 _stats_process = []
 
-@defer.inlineCallbacks
-def phone_stats_home(hs, stats, stats_process=_stats_process):
+async def phone_stats_home(hs, stats, stats_process=_stats_process):
     logger.info("Gathering stats for reporting")
     now = int(hs.get_clock().time())
     uptime = int(now - hs.start_time)
@@ -522,28 +520,28 @@ def phone_stats_home(hs, stats, stats_process=_stats_process):
     stats["python_version"] = "{}.{}.{}".format(
         version.major, version.minor, version.micro
     )
-    stats["total_users"] = yield hs.get_datastore().count_all_users()
+    stats["total_users"] = await hs.get_datastore().count_all_users()
 
-    total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
+    total_nonbridged_users = await hs.get_datastore().count_nonbridged_users()
     stats["total_nonbridged_users"] = total_nonbridged_users
 
-    daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
+    daily_user_type_results = await hs.get_datastore().count_daily_user_type()
     for name, count in daily_user_type_results.items():
         stats["daily_user_type_" + name] = count
 
-    room_count = yield hs.get_datastore().get_room_count()
+    room_count = await hs.get_datastore().get_room_count()
     stats["total_room_count"] = room_count
 
-    stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
-    stats["monthly_active_users"] = yield hs.get_datastore().count_monthly_users()
-    stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
-    stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
+    stats["daily_active_users"] = await hs.get_datastore().count_daily_users()
+    stats["monthly_active_users"] = await hs.get_datastore().count_monthly_users()
+    stats["daily_active_rooms"] = await hs.get_datastore().count_daily_active_rooms()
+    stats["daily_messages"] = await hs.get_datastore().count_daily_messages()
 
-    r30_results = yield hs.get_datastore().count_r30_users()
+    r30_results = await hs.get_datastore().count_r30_users()
     for name, count in r30_results.items():
         stats["r30_users_" + name] = count
 
-    daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
+    daily_sent_messages = await hs.get_datastore().count_daily_sent_messages()
     stats["daily_sent_messages"] = daily_sent_messages
     stats["cache_factor"] = hs.config.caches.global_factor
     stats["event_cache_size"] = hs.config.caches.event_cache_size
@@ -558,7 +556,7 @@ def phone_stats_home(hs, stats, stats_process=_stats_process):
     logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
     try:
-        yield hs.get_proxied_http_client().put_json(
+        await hs.get_proxied_http_client().put_json(
             hs.config.report_stats_endpoint, stats
         )
     except Exception as e:
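The stats-reporting hunk above is a mechanical `@defer.inlineCallbacks`/`yield` to `async def`/`await` conversion: each datastore counter is awaited in sequence and dropped into the `stats` dict. A minimal stand-alone sketch of that shape, using hypothetical stand-in counters rather than Synapse's real datastore:

```python
import asyncio


# Hypothetical stand-ins for the awaited datastore counters in the diff.
async def count_all_users():
    return 42


async def count_daily_users():
    return 7


async def phone_stats_home_sketch():
    # Each counter is awaited to completion before the next, mirroring the
    # sequential `await hs.get_datastore().count_*()` calls above.
    stats = {}
    stats["total_users"] = await count_all_users()
    stats["daily_active_users"] = await count_daily_users()
    return stats


stats = asyncio.run(phone_stats_home_sketch())
```

The conversion changes the coroutine style, not the control flow: the calls were already sequential under `yield`, and remain so under `await`.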
+13 -18
@@ -15,11 +15,9 @@
 import logging
 import re
 
-from twisted.internet import defer
-
 from synapse.api.constants import EventTypes
 from synapse.types import GroupID, get_domain_from_id
-from synapse.util.caches.descriptors import cachedInlineCallbacks
+from synapse.util.caches.descriptors import cached
 
 logger = logging.getLogger(__name__)
@@ -43,7 +41,7 @@ class AppServiceTransaction(object):
         Args:
             as_api(ApplicationServiceApi): The API to use to send.
         Returns:
-            A Deferred which resolves to True if the transaction was sent.
+            An Awaitable which resolves to True if the transaction was sent.
         """
         return as_api.push_bulk(
             service=self.service, events=self.events, txn_id=self.id
@@ -172,8 +170,7 @@ class ApplicationService(object):
                 return regex_obj["exclusive"]
         return False
 
-    @defer.inlineCallbacks
-    def _matches_user(self, event, store):
+    async def _matches_user(self, event, store):
         if not event:
             return False
@@ -188,12 +185,12 @@ class ApplicationService(object):
         if not store:
             return False
 
-        does_match = yield self._matches_user_in_member_list(event.room_id, store)
+        does_match = await self._matches_user_in_member_list(event.room_id, store)
         return does_match
 
-    @cachedInlineCallbacks(num_args=1, cache_context=True)
-    def _matches_user_in_member_list(self, room_id, store, cache_context):
-        member_list = yield store.get_users_in_room(
+    @cached(num_args=1, cache_context=True)
+    async def _matches_user_in_member_list(self, room_id, store, cache_context):
+        member_list = await store.get_users_in_room(
             room_id, on_invalidate=cache_context.invalidate
         )
@@ -208,35 +205,33 @@ class ApplicationService(object):
             return self.is_interested_in_room(event.room_id)
         return False
 
-    @defer.inlineCallbacks
-    def _matches_aliases(self, event, store):
+    async def _matches_aliases(self, event, store):
         if not store or not event:
             return False
 
-        alias_list = yield store.get_aliases_for_room(event.room_id)
+        alias_list = await store.get_aliases_for_room(event.room_id)
         for alias in alias_list:
             if self.is_interested_in_alias(alias):
                 return True
         return False
 
-    @defer.inlineCallbacks
-    def is_interested(self, event, store=None):
+    async def is_interested(self, event, store=None) -> bool:
         """Check if this service is interested in this event.
 
         Args:
             event(Event): The event to check.
             store(DataStore)
 
         Returns:
-            bool: True if this service would like to know about this event.
+            True if this service would like to know about this event.
         """
         # Do cheap checks first
         if self._matches_room_id(event):
             return True
 
-        if (yield self._matches_aliases(event, store)):
+        if await self._matches_aliases(event, store):
             return True
 
-        if (yield self._matches_user(event, store)):
+        if await self._matches_user(event, store):
             return True
 
         return False
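The `is_interested` method above deliberately runs its cheap synchronous check (the room-id regex) before the async checks that may have to consult the datastore. A toy sketch of that ordering, with hypothetical names and an in-memory stand-in for the store (none of this is Synapse's real API):

```python
import asyncio
import re


class FakeStore:
    """Hypothetical in-memory stand-in for the datastore lookup."""

    def __init__(self, members):
        self._members = members

    async def get_users_in_room(self, room_id):
        return self._members.get(room_id, [])


class AppServiceSketch:
    """Toy model of ApplicationService.is_interested: cheap synchronous
    checks first, then async checks that hit the store."""

    def __init__(self, room_regex, user_regex):
        self._room_re = re.compile(room_regex)
        self._user_re = re.compile(user_regex)

    def _matches_room_id(self, event):
        return bool(self._room_re.match(event["room_id"]))

    async def _matches_user(self, event, store):
        members = await store.get_users_in_room(event["room_id"])
        return any(self._user_re.match(u) for u in members)

    async def is_interested(self, event, store):
        if self._matches_room_id(event):  # cheap check first, no I/O
            return True
        return await self._matches_user(event, store)


store = FakeStore({"!abc:example.org": ["@irc_alice:example.org"]})
service = AppServiceSketch(r"!bridge_.*", r"@irc_.*")
interested = asyncio.run(service.is_interested({"room_id": "!abc:example.org"}, store))
```

Here the room-id check fails, so the (cached, in the real code) membership lookup decides the answer.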
+24 -17
@@ -19,7 +19,7 @@ from prometheus_client import Counter
 
 from twisted.internet import defer
 
-from synapse.api.constants import ThirdPartyEntityKind
+from synapse.api.constants import EventTypes, ThirdPartyEntityKind
 from synapse.api.errors import CodeMessageException
 from synapse.events.utils import serialize_event
 from synapse.http.client import SimpleHttpClient
@@ -93,13 +93,12 @@ class ApplicationServiceApi(SimpleHttpClient):
             hs, "as_protocol_meta", timeout_ms=HOUR_IN_MS
         )
 
-    @defer.inlineCallbacks
-    def query_user(self, service, user_id):
+    async def query_user(self, service, user_id):
         if service.url is None:
             return False
         uri = service.url + ("/users/%s" % urllib.parse.quote(user_id))
         try:
-            response = yield self.get_json(uri, {"access_token": service.hs_token})
+            response = await self.get_json(uri, {"access_token": service.hs_token})
             if response is not None:  # just an empty json object
                 return True
         except CodeMessageException as e:
@@ -110,14 +109,12 @@ class ApplicationServiceApi(SimpleHttpClient):
             logger.warning("query_user to %s threw exception %s", uri, ex)
         return False
 
-    @defer.inlineCallbacks
-    def query_alias(self, service, alias):
+    async def query_alias(self, service, alias):
         if service.url is None:
             return False
         uri = service.url + ("/rooms/%s" % urllib.parse.quote(alias))
-        response = None
         try:
-            response = yield self.get_json(uri, {"access_token": service.hs_token})
+            response = await self.get_json(uri, {"access_token": service.hs_token})
             if response is not None:  # just an empty json object
                 return True
         except CodeMessageException as e:
@@ -128,8 +125,7 @@ class ApplicationServiceApi(SimpleHttpClient):
             logger.warning("query_alias to %s threw exception %s", uri, ex)
         return False
 
-    @defer.inlineCallbacks
-    def query_3pe(self, service, kind, protocol, fields):
+    async def query_3pe(self, service, kind, protocol, fields):
         if kind == ThirdPartyEntityKind.USER:
             required_field = "userid"
         elif kind == ThirdPartyEntityKind.LOCATION:
@@ -146,7 +142,7 @@ class ApplicationServiceApi(SimpleHttpClient):
             urllib.parse.quote(protocol),
         )
         try:
-            response = yield self.get_json(uri, fields)
+            response = await self.get_json(uri, fields)
             if not isinstance(response, list):
                 logger.warning(
                     "query_3pe to %s returned an invalid response %r", uri, response
@@ -202,12 +198,11 @@ class ApplicationServiceApi(SimpleHttpClient):
         key = (service.id, protocol)
         return self.protocol_meta_cache.wrap(key, _get)
 
-    @defer.inlineCallbacks
-    def push_bulk(self, service, events, txn_id=None):
+    async def push_bulk(self, service, events, txn_id=None):
         if service.url is None:
             return True
 
-        events = self._serialize(events)
+        events = self._serialize(service, events)
 
         if txn_id is None:
             logger.warning(
@@ -218,7 +213,7 @@ class ApplicationServiceApi(SimpleHttpClient):
         uri = service.url + ("/transactions/%s" % urllib.parse.quote(txn_id))
         try:
-            yield self.put_json(
+            await self.put_json(
                 uri=uri,
                 json_body={"events": events},
                 args={"access_token": service.hs_token},
@@ -233,6 +228,18 @@ class ApplicationServiceApi(SimpleHttpClient):
         failed_transactions_counter.labels(service.id).inc()
         return False
 
-    def _serialize(self, events):
+    def _serialize(self, service, events):
         time_now = self.clock.time_msec()
-        return [serialize_event(e, time_now, as_client_event=True) for e in events]
+        return [
+            serialize_event(
+                e,
+                time_now,
+                as_client_event=True,
+                is_invite=(
+                    e.type == EventTypes.Member
+                    and e.membership == "invite"
+                    and service.is_interested_in_user(e.state_key)
+                ),
+            )
+            for e in events
+        ]
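The `_serialize` change above now marks an event as an invite only when all three conditions hold: it is a membership event, its membership is `invite`, and its `state_key` names a user the service claims. A hedged, dict-based sketch of that predicate (the function name and the regex-based interest test are illustrative, not Synapse's API):

```python
import re


def is_invite_for_service(event, interested_user_regex):
    # Mirrors the three-way conjunction in the diff's `is_invite=` argument:
    # a membership event, with membership "invite", whose state_key (the
    # invited user) falls in the service's user namespace.
    return (
        event["type"] == "m.room.member"
        and event.get("membership") == "invite"
        and re.match(interested_user_regex, event["state_key"]) is not None
    )


invite = {"type": "m.room.member", "membership": "invite", "state_key": "@irc_bob:example.org"}
join = {"type": "m.room.member", "membership": "join", "state_key": "@irc_bob:example.org"}
```

Note the short-circuiting order: the cheap type/membership checks run before the namespace match, the same ordering the diff uses before calling `service.is_interested_in_user`.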
+20 -29
@@ -50,8 +50,6 @@ components.
 """
 import logging
 
-from twisted.internet import defer
-
 from synapse.appservice import ApplicationServiceState
 from synapse.logging.context import run_in_background
 from synapse.metrics.background_process_metrics import run_as_background_process
@@ -73,12 +71,11 @@ class ApplicationServiceScheduler(object):
         self.txn_ctrl = _TransactionController(self.clock, self.store, self.as_api)
         self.queuer = _ServiceQueuer(self.txn_ctrl, self.clock)
 
-    @defer.inlineCallbacks
-    def start(self):
+    async def start(self):
         logger.info("Starting appservice scheduler")
 
         # check for any DOWN ASes and start recoverers for them.
-        services = yield self.store.get_appservices_by_state(
+        services = await self.store.get_appservices_by_state(
             ApplicationServiceState.DOWN
         )
@@ -117,8 +114,7 @@ class _ServiceQueuer(object):
             "as-sender-%s" % (service.id,), self._send_request, service
         )
 
-    @defer.inlineCallbacks
-    def _send_request(self, service):
+    async def _send_request(self, service):
         # sanity-check: we shouldn't get here if this service already has a sender
         # running.
         assert service.id not in self.requests_in_flight
@@ -130,7 +126,7 @@ class _ServiceQueuer(object):
                 if not events:
                     return
                 try:
-                    yield self.txn_ctrl.send(service, events)
+                    await self.txn_ctrl.send(service, events)
                 except Exception:
                     logger.exception("AS request failed")
         finally:
@@ -162,36 +158,33 @@ class _TransactionController(object):
         # for UTs
         self.RECOVERER_CLASS = _Recoverer
 
-    @defer.inlineCallbacks
-    def send(self, service, events):
+    async def send(self, service, events):
         try:
-            txn = yield self.store.create_appservice_txn(service=service, events=events)
-            service_is_up = yield self._is_service_up(service)
+            txn = await self.store.create_appservice_txn(service=service, events=events)
+            service_is_up = await self._is_service_up(service)
             if service_is_up:
-                sent = yield txn.send(self.as_api)
+                sent = await txn.send(self.as_api)
                 if sent:
-                    yield txn.complete(self.store)
+                    await txn.complete(self.store)
                 else:
                     run_in_background(self._on_txn_fail, service)
         except Exception:
             logger.exception("Error creating appservice transaction")
             run_in_background(self._on_txn_fail, service)
 
-    @defer.inlineCallbacks
-    def on_recovered(self, recoverer):
+    async def on_recovered(self, recoverer):
         logger.info(
             "Successfully recovered application service AS ID %s", recoverer.service.id
         )
         self.recoverers.pop(recoverer.service.id)
         logger.info("Remaining active recoverers: %s", len(self.recoverers))
-        yield self.store.set_appservice_state(
+        await self.store.set_appservice_state(
             recoverer.service, ApplicationServiceState.UP
         )
 
-    @defer.inlineCallbacks
-    def _on_txn_fail(self, service):
+    async def _on_txn_fail(self, service):
         try:
-            yield self.store.set_appservice_state(service, ApplicationServiceState.DOWN)
+            await self.store.set_appservice_state(service, ApplicationServiceState.DOWN)
             self.start_recoverer(service)
         except Exception:
             logger.exception("Error starting AS recoverer")
@@ -211,9 +204,8 @@ class _TransactionController(object):
         recoverer.recover()
         logger.info("Now %i active recoverers", len(self.recoverers))
 
-    @defer.inlineCallbacks
-    def _is_service_up(self, service):
-        state = yield self.store.get_appservice_state(service)
+    async def _is_service_up(self, service):
+        state = await self.store.get_appservice_state(service)
         return state == ApplicationServiceState.UP or state is None
@@ -254,25 +246,24 @@ class _Recoverer(object):
             self.backoff_counter += 1
         self.recover()
 
-    @defer.inlineCallbacks
-    def retry(self):
+    async def retry(self):
         logger.info("Starting retries on %s", self.service.id)
         try:
             while True:
-                txn = yield self.store.get_oldest_unsent_txn(self.service)
+                txn = await self.store.get_oldest_unsent_txn(self.service)
                 if not txn:
                     # nothing left: we're done!
-                    self.callback(self)
+                    await self.callback(self)
                     return
 
                 logger.info(
                     "Retrying transaction %s for AS ID %s", txn.id, txn.service.id
                 )
-                sent = yield txn.send(self.as_api)
+                sent = await txn.send(self.as_api)
                 if not sent:
                     break
 
-                yield txn.complete(self.store)
+                await txn.complete(self.store)
 
                 # reset the backoff counter and then process the next transaction
                 self.backoff_counter = 1
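The `_Recoverer.retry` loop above drains unsent transactions in order: each success completes the transaction and resets the backoff counter to 1; the first failure stops the drain so the caller can reschedule with a larger backoff. A synchronous toy version of that control flow (hypothetical names; the real code is async and persists transactions in the store):

```python
def retry_sketch(queue, send):
    """Drain `queue` in order with `send`; stop on the first failure.

    Returns the backoff counter: 1 after a clean drain, incremented when
    a send failed (the caller would reschedule roughly 2**counter secs out).
    """
    backoff_counter = 1
    while queue:
        txn = queue[0]
        if not send(txn):
            backoff_counter += 1  # failure: back off further next time
            return backoff_counter
        queue.pop(0)              # success: txn.complete() removes it
        backoff_counter = 1       # reset after each success
    return backoff_counter


queue = ["txn1", "txn2", "txn3"]
backoff = retry_sketch(queue, send=lambda txn: txn != "txn3")
```

After this run the first two transactions are gone, the failing one stays queued for the next attempt, and the backoff has grown.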
+36 -2
@@ -19,9 +19,11 @@ import argparse
 import errno
 import os
 from collections import OrderedDict
+from hashlib import sha256
 from textwrap import dedent
-from typing import Any, MutableMapping, Optional
+from typing import Any, List, MutableMapping, Optional
 
+import attr
 import yaml
@@ -717,4 +719,36 @@ def find_config_files(search_paths):
     return config_files
 
-__all__ = ["Config", "RootConfig"]
+@attr.s
+class ShardedWorkerHandlingConfig:
+    """Algorithm for choosing which instance is responsible for handling some
+    sharded work.
+
+    For example, the federation senders use this to determine which instances
+    handles sending stuff to a given destination (which is used as the `key`
+    below).
+    """
+
+    instances = attr.ib(type=List[str])
+
+    def should_handle(self, instance_name: str, key: str) -> bool:
+        """Whether this instance is responsible for handling the given key.
+        """
+
+        # If multiple instances are not defined we always return true.
+        if not self.instances or len(self.instances) == 1:
+            return True
+
+        # We shard by taking the hash, modulo it by the number of instances and
+        # then checking whether this instance matches the instance at that
+        # index.
+        #
+        # (Technically this introduces some bias and is not entirely uniform,
+        # but since the hash is so large the bias is ridiculously small).
+        dest_hash = sha256(key.encode("utf8")).digest()
+        dest_int = int.from_bytes(dest_hash, byteorder="little")
+        remainder = dest_int % (len(self.instances))
+        return self.instances[remainder] == instance_name
+
+
+__all__ = ["Config", "RootConfig", "ShardedWorkerHandlingConfig"]
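The `should_handle` logic added above is self-contained enough to exercise in isolation: sha256 the key, reduce modulo the instance count, and compare the selected instance name. A stand-alone function with the same body (names are illustrative; in Synapse this lives on the attrs class):

```python
from hashlib import sha256


def should_handle(instances, instance_name, key):
    # With zero or one instance there is nothing to shard over.
    if not instances or len(instances) == 1:
        return True
    # Hash the key and use it as an index into the instance list.  The
    # modulo of a 256-bit hash is close enough to uniform for this purpose.
    dest_hash = sha256(key.encode("utf8")).digest()
    dest_int = int.from_bytes(dest_hash, byteorder="little")
    return instances[dest_int % len(instances)] == instance_name


instances = ["sender1", "sender2", "sender3"]
# Exactly one instance owns any given key, and ownership is deterministic.
owners = [i for i in instances if should_handle(instances, i, "matrix.org")]
```

Because the assignment depends only on the key and the (ordered) instance list, every worker computes the same owner without coordination; the trade-off is that resizing the list reshuffles most keys, unlike consistent hashing.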
+5
@@ -137,3 +137,8 @@ class Config:
 def read_config_files(config_files: List[str]): ...
 def find_config_files(search_paths: List[str]): ...
+
+class ShardedWorkerHandlingConfig:
+    instances: List[str]
+    def __init__(self, instances: List[str]) -> None: ...
+    def should_handle(self, instance_name: str, key: str) -> bool: ...
+1 -1
@@ -55,7 +55,7 @@ DEFAULT_CONFIG = """\
 #database:
 #  name: psycopg2
 #  args:
-#    user: synapse
+#    user: synapse_user
 #    password: secretpassword
 #    database: synapse
 #    host: localhost
+112 -6
@@ -22,6 +22,7 @@ import os
from enum import Enum from enum import Enum
from typing import Optional from typing import Optional
import attr
import pkg_resources import pkg_resources
from ._base import Config, ConfigError from ._base import Config, ConfigError
@@ -32,6 +33,33 @@ Password reset emails are enabled on this homeserver due to a partial
%s %s
""" """
DEFAULT_SUBJECTS = {
"message_from_person_in_room": "[%(app)s] You have a message on %(app)s from %(person)s in the %(room)s room...",
"message_from_person": "[%(app)s] You have a message on %(app)s from %(person)s...",
"messages_from_person": "[%(app)s] You have messages on %(app)s from %(person)s...",
"messages_in_room": "[%(app)s] You have messages on %(app)s in the %(room)s room...",
"messages_in_room_and_others": "[%(app)s] You have messages on %(app)s in the %(room)s room and others...",
"messages_from_person_and_others": "[%(app)s] You have messages on %(app)s from %(person)s and others...",
"invite_from_person": "[%(app)s] %(person)s has invited you to chat on %(app)s...",
"invite_from_person_to_room": "[%(app)s] %(person)s has invited you to join the %(room)s room on %(app)s...",
"password_reset": "[%(server_name)s] Password reset",
"email_validation": "[%(server_name)s] Validate your email",
}
@attr.s
class EmailSubjectConfig:
message_from_person_in_room = attr.ib(type=str)
message_from_person = attr.ib(type=str)
messages_from_person = attr.ib(type=str)
messages_in_room = attr.ib(type=str)
messages_in_room_and_others = attr.ib(type=str)
messages_from_person_and_others = attr.ib(type=str)
invite_from_person = attr.ib(type=str)
invite_from_person_to_room = attr.ib(type=str)
password_reset = attr.ib(type=str)
email_validation = attr.ib(type=str)
class EmailConfig(Config): class EmailConfig(Config):
section = "email" section = "email"
@@ -294,8 +322,17 @@ class EmailConfig(Config):
if not os.path.isfile(p): if not os.path.isfile(p):
raise ConfigError("Unable to find email template file %s" % (p,)) raise ConfigError("Unable to find email template file %s" % (p,))
subjects_config = email_config.get("subjects", {})
subjects = {}
for key, default in DEFAULT_SUBJECTS.items():
subjects[key] = subjects_config.get(key, default)
self.email_subjects = EmailSubjectConfig(**subjects)
    def generate_config_section(self, config_dir_path, server_name, **kwargs):
-        return """\
+        return (
+            """\
        # Configuration for sending emails from Synapse.
        #
        email:
@@ -323,17 +360,17 @@ class EmailConfig(Config):
        # notif_from defines the "From" address to use when sending emails.
        # It must be set if email sending is enabled.
        #
-        # The placeholder '%(app)s' will be replaced by the application name,
+        # The placeholder '%%(app)s' will be replaced by the application name,
        # which is normally 'app_name' (below), but may be overridden by the
        # Matrix client application.
        #
-        # Note that the placeholder must be written '%(app)s', including the
+        # Note that the placeholder must be written '%%(app)s', including the
        # trailing 's'.
        #
-        #notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
+        #notif_from: "Your Friendly %%(app)s homeserver <noreply@example.com>"

-        # app_name defines the default value for '%(app)s' in notif_from. It
-        # defaults to 'Matrix'.
+        # app_name defines the default value for '%%(app)s' in notif_from and email
+        # subjects. It defaults to 'Matrix'.
        #
        #app_name: my_branded_matrix_server
@@ -401,7 +438,76 @@ class EmailConfig(Config):
        # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
        #
        #template_dir: "res/templates"
# Subjects to use when sending emails from Synapse.
#
# The placeholder '%%(app)s' will be replaced with the value of the 'app_name'
# setting above, or by a value dictated by the Matrix client application.
#
# If a subject isn't overridden in this configuration file, the value used as
# its example will be used.
#
#subjects:
# Subjects for notification emails.
#
# On top of the '%%(app)s' placeholder, these can use the following
# placeholders:
#
# * '%%(person)s', which will be replaced by the display name of the user(s)
# that sent the message(s), e.g. "Alice and Bob".
# * '%%(room)s', which will be replaced by the name of the room the
# message(s) have been sent to, e.g. "My super room".
#
# See the example provided for each setting to see which placeholder can be
# used and how to use them.
#
# Subject to use to notify about one message from one or more user(s) in a
# room which has a name.
#message_from_person_in_room: "%(message_from_person_in_room)s"
#
# Subject to use to notify about one message from one or more user(s) in a
# room which doesn't have a name.
#message_from_person: "%(message_from_person)s"
#
# Subject to use to notify about multiple messages from one or more users in
# a room which doesn't have a name.
#messages_from_person: "%(messages_from_person)s"
#
# Subject to use to notify about multiple messages in a room which has a
# name.
#messages_in_room: "%(messages_in_room)s"
#
# Subject to use to notify about multiple messages in multiple rooms.
#messages_in_room_and_others: "%(messages_in_room_and_others)s"
#
# Subject to use to notify about multiple messages from multiple persons in
# multiple rooms. This is similar to the setting above except it's used when
# the room in which the notification was triggered has no name.
#messages_from_person_and_others: "%(messages_from_person_and_others)s"
#
# Subject to use to notify about an invite to a room which has a name.
#invite_from_person_to_room: "%(invite_from_person_to_room)s"
#
# Subject to use to notify about an invite to a room which doesn't have a
# name.
#invite_from_person: "%(invite_from_person)s"
# Subject for emails related to account administration.
#
# On top of the '%%(app)s' placeholder, these one can use the
# '%%(server_name)s' placeholder, which will be replaced by the value of the
# 'server_name' setting in your Synapse configuration.
#
# Subject to use when sending a password reset email.
#password_reset: "%(password_reset)s"
#
# Subject to use when sending a verification email to assert an address's
# ownership.
#email_validation: "%(email_validation)s"
""" """
% DEFAULT_SUBJECTS
)
class ThreepidBehaviour(Enum):
+88
@@ -0,0 +1,88 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional
from netaddr import IPSet
from ._base import Config, ConfigError
class FederationConfig(Config):
section = "federation"
def read_config(self, config, **kwargs):
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist = None # type: Optional[dict]
federation_domain_whitelist = config.get("federation_domain_whitelist", None)
if federation_domain_whitelist is not None:
# turn the whitelist into a hash for speed of lookup
self.federation_domain_whitelist = {}
for domain in federation_domain_whitelist:
self.federation_domain_whitelist[domain] = True
self.federation_ip_range_blacklist = config.get(
"federation_ip_range_blacklist", []
)
# Attempt to create an IPSet from the given ranges
try:
self.federation_ip_range_blacklist = IPSet(
self.federation_ip_range_blacklist
)
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
)
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.
#
#federation_domain_whitelist:
# - lon.example.com
# - nyc.example.com
# - syd.example.com
# Prevent federation requests from being sent to the following
# blacklist IP address CIDR ranges. If this option is not specified, or
# specified with an empty list, no ip range blacklist will be enforced.
#
# As of Synapse v1.4.0 this option also affects any outbound requests to identity
# servers provided by user input.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
federation_ip_range_blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
"""
+4 -1
@@ -23,6 +23,7 @@ from .cas import CasConfig
from .consent_config import ConsentConfig
from .database import DatabaseConfig
from .emailconfig import EmailConfig
from .federation import FederationConfig
from .groups import GroupsConfig
from .jwt_config import JWTConfig
from .key import KeyConfig
@@ -57,6 +58,7 @@ class HomeServerConfig(RootConfig):
    config_classes = [
        ServerConfig,
        TlsConfig,
        FederationConfig,
        CacheConfig,
        DatabaseConfig,
        LoggingConfig,
@@ -76,7 +78,6 @@ class HomeServerConfig(RootConfig):
        JWTConfig,
        PasswordConfig,
        EmailConfig,
        WorkerConfig,
        PasswordAuthProviderConfig,
        PushConfig,
        SpamCheckerConfig,
@@ -89,5 +90,7 @@ class HomeServerConfig(RootConfig):
        RoomDirectoryConfig,
        ThirdPartyRulesConfig,
        TracerConfig,
        WorkerConfig,
        RedisConfig,
        FederationConfig,
    ]
+28
@@ -32,6 +32,11 @@ class JWTConfig(Config):
self.jwt_secret = jwt_config["secret"] self.jwt_secret = jwt_config["secret"]
self.jwt_algorithm = jwt_config["algorithm"] self.jwt_algorithm = jwt_config["algorithm"]
# The issuer and audiences are optional, if provided, it is asserted
# that the claims exist on the JWT.
self.jwt_issuer = jwt_config.get("issuer")
self.jwt_audiences = jwt_config.get("audiences")
try: try:
import jwt import jwt
@@ -42,6 +47,8 @@ class JWTConfig(Config):
self.jwt_enabled = False self.jwt_enabled = False
self.jwt_secret = None self.jwt_secret = None
self.jwt_algorithm = None self.jwt_algorithm = None
self.jwt_issuer = None
self.jwt_audiences = None
def generate_config_section(self, **kwargs): def generate_config_section(self, **kwargs):
return """\ return """\
@@ -52,6 +59,9 @@ class JWTConfig(Config):
        # Each JSON Web Token needs to contain a "sub" (subject) claim, which is
        # used as the localpart of the mxid.
        #
        # Additionally, the expiration time ("exp"), not before time ("nbf"),
        # and issued at ("iat") claims are validated if present.
        #
        # Note that this is a non-standard login type and client support is
        # expected to be non-existent.
        #
@@ -78,4 +88,22 @@ class JWTConfig(Config):
        # Required if 'enabled' is true.
        #
        #algorithm: "provided-by-your-issuer"
# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and
# validated for all JSON web tokens.
#
#issuer: "provided-by-your-issuer"
# A list of audiences to validate the "aud" claim against.
#
# Optional, if provided the "aud" claim will be required and
# validated for all JSON web tokens.
#
# Note that if the "aud" claim is included in a JSON web token then
# validation will fail without configuring audiences.
#
#audiences:
# - "provided-by-your-issuer"
""" """
+1 -1
@@ -214,7 +214,7 @@ def setup_logging(
    Set up the logging subsystem.

    Args:
-        config (LoggingConfig | synapse.config.workers.WorkerConfig):
+        config (LoggingConfig | synapse.config.worker.WorkerConfig):
            configuration data

        use_worker_options (bool): True to use the 'worker_log_config' option
+4 -1
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.

-from ._base import Config
+from ._base import Config, ShardedWorkerHandlingConfig


class PushConfig(Config):
@@ -24,6 +24,9 @@ class PushConfig(Config):
        push_config = config.get("push", {})
        self.push_include_content = push_config.get("include_content", True)

        pusher_instances = config.get("pusher_instances") or []
        self.pusher_shard_config = ShardedWorkerHandlingConfig(pusher_instances)

        # There was a 'redact_content' setting, but it was mistakenly read from the
        # 'email' section. Check for the flag in the 'push' section, and log,
        # but do not honour it to avoid nasty surprises when people upgrade.
+21
@@ -93,6 +93,15 @@ class RatelimitConfig(Config):
        if rc_admin_redaction:
            self.rc_admin_redaction = RateLimitConfig(rc_admin_redaction)

        self.rc_joins_local = RateLimitConfig(
            config.get("rc_joins", {}).get("local", {}),
            defaults={"per_second": 0.1, "burst_count": 3},
        )
        self.rc_joins_remote = RateLimitConfig(
            config.get("rc_joins", {}).get("remote", {}),
            defaults={"per_second": 0.01, "burst_count": 3},
        )

    def generate_config_section(self, **kwargs):
        return """\
        ## Ratelimiting ##
@@ -118,6 +127,10 @@ class RatelimitConfig(Config):
        # - one for ratelimiting redactions by room admins. If this is not explicitly
        #   set then it uses the same ratelimiting as per rc_message. This is useful
        #   to allow room admins to deal with abuse quickly.
        # - two for ratelimiting the number of rooms a user can join, "local" for when
        #   users are joining rooms the server is already in (this is cheap) vs
        #   "remote" for when users are trying to join rooms not on the server (which
        #   can be more expensive)
        #
        # The defaults are as shown below.
        #
@@ -143,6 +156,14 @@ class RatelimitConfig(Config):
        #rc_admin_redaction:
        #  per_second: 1
        #  burst_count: 50
#
#rc_joins:
# local:
# per_second: 0.1
# burst_count: 3
# remote:
# per_second: 0.01
# burst_count: 3
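The `per_second`/`burst_count` pairs describe token-bucket style limiting: up to `burst_count` actions can happen at once, with capacity refilling at `per_second`. An illustrative model of that behaviour (this is a sketch, not Synapse's actual `Ratelimiter` class):

```python
class TokenBucket:
    """Illustrative token bucket: allows bursts of `burst_count` actions,
    refilling at `per_second` tokens per second."""

    def __init__(self, per_second: float, burst_count: int):
        self.rate = per_second
        self.capacity = burst_count
        self.tokens = float(burst_count)
        self.last = 0.0

    def allow(self, now: float) -> bool:
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(per_second=0.1, burst_count=3)  # the rc_joins "local" defaults
print([bucket.allow(t) for t in (0, 1, 2, 3)])  # [True, True, True, False]
```

With the "local" defaults, a user can join three rooms immediately but then gains only one further join every ten seconds.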
# Ratelimiting settings for incoming federation # Ratelimiting settings for incoming federation
+22 -1
@@ -21,7 +21,7 @@ class RedisConfig(Config):
section = "redis" section = "redis"
def read_config(self, config, **kwargs): def read_config(self, config, **kwargs):
redis_config = config.get("redis", {}) redis_config = config.get("redis") or {}
self.redis_enabled = redis_config.get("enabled", False) self.redis_enabled = redis_config.get("enabled", False)
if not self.redis_enabled: if not self.redis_enabled:
@@ -32,3 +32,24 @@ class RedisConfig(Config):
self.redis_host = redis_config.get("host", "localhost") self.redis_host = redis_config.get("host", "localhost")
self.redis_port = redis_config.get("port", 6379) self.redis_port = redis_config.get("port", 6379)
self.redis_password = redis_config.get("password") self.redis_password = redis_config.get("password")
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
# Configuration for Redis when using workers. This *must* be enabled when
# using workers (unless using old style direct TCP configuration).
#
redis:
# Uncomment the below to enable Redis support.
#
#enabled: true
# Optional host and port to use to connect to redis. Defaults to
# localhost and 6379
#
#host: localhost
#port: 6379
# Optional password if configured on the Redis instance
#
#password: <secret_password>
"""
-18
@@ -333,24 +333,6 @@ class RegistrationConfig(Config):
        #
        #default_identity_server: https://matrix.org
# The list of identity servers trusted to verify third party
# identifiers by this server.
#
# Also defines the ID server which will be called when an account is
# deactivated (one will be picked arbitrarily).
#
# Note: This option is deprecated. Since v0.99.4, Synapse has tracked which identity
# server a 3PID has been bound to. For 3PIDs bound before then, Synapse runs a
# background migration script, informing itself that the identity server all of its
# 3PIDs have been bound to is likely one of the below.
#
# As of Synapse v1.4.0, all other functionality of this option has been deprecated, and
# it is now solely used for the purposes of the background migration script, and can be
# removed once it has run.
#trusted_third_party_id_servers:
# - matrix.org
# - vector.im
        # Handle threepid (email/phone etc) registration and password resets through a set of
        # *trusted* identity servers. Note that this allows the configured identity server to
        # reset passwords for accounts!
+6 -1
@@ -50,7 +50,12 @@ class RoomConfig(Config):
                RoomCreationPreset.PRIVATE_CHAT,
                RoomCreationPreset.TRUSTED_PRIVATE_CHAT,
            ]
-        elif encryption_for_room_type == RoomDefaultEncryptionTypes.OFF:
+        elif (
+            encryption_for_room_type == RoomDefaultEncryptionTypes.OFF
+            or encryption_for_room_type is False
+        ):
            # PyYAML translates "off" into False if it's unquoted, so we also need to
            # check for encryption_for_room_type being False.
            self.encryption_enabled_by_default_for_room_presets = []
        else:
            raise ConfigError(
+12 -69
@@ -23,7 +23,6 @@ from typing import Any, Dict, Iterable, List, Optional
import attr
import yaml
from netaddr import IPSet

from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.http.endpoint import parse_and_validate_server_name
@@ -136,11 +135,6 @@ class ServerConfig(Config):
        self.use_frozen_dicts = config.get("use_frozen_dicts", False)
        self.public_baseurl = config.get("public_baseurl")

        # Whether to send federation traffic out in this process. This only
        # applies to some federation traffic, and so shouldn't be used to
        # "disable" federation
        self.send_federation = config.get("send_federation", True)

        # Whether to enable user presence.
        self.use_presence = config.get("use_presence", True)
@@ -213,7 +207,7 @@ class ServerConfig(Config):
        # errors when attempting to search for messages.
        self.enable_search = config.get("enable_search", True)

-        self.filter_timeline_limit = config.get("filter_timeline_limit", -1)
+        self.filter_timeline_limit = config.get("filter_timeline_limit", 100)

        # Whether we should block invites sent to users on this server
        # (other than those sent by local server admins)
@@ -263,34 +257,6 @@ class ServerConfig(Config):
        # due to resource constraints
        self.admin_contact = config.get("admin_contact", None)
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist = None # type: Optional[dict]
federation_domain_whitelist = config.get("federation_domain_whitelist", None)
if federation_domain_whitelist is not None:
# turn the whitelist into a hash for speed of lookup
self.federation_domain_whitelist = {}
for domain in federation_domain_whitelist:
self.federation_domain_whitelist[domain] = True
self.federation_ip_range_blacklist = config.get(
"federation_ip_range_blacklist", []
)
# Attempt to create an IPSet from the given ranges
try:
self.federation_ip_range_blacklist = IPSet(
self.federation_ip_range_blacklist
)
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
)
        if self.public_baseurl is not None:
            if self.public_baseurl[-1] != "/":
                self.public_baseurl += "/"
@@ -473,6 +439,9 @@ class ServerConfig(Config):
                validator=attr.validators.instance_of(str),
                default=ROOM_COMPLEXITY_TOO_GREAT,
            )
            admins_can_join = attr.ib(
                validator=attr.validators.instance_of(bool), default=False
            )

        self.limit_remote_rooms = LimitRemoteRoomsConfig(
            **(config.get("limit_remote_rooms") or {})
@@ -727,7 +696,9 @@ class ServerConfig(Config):
        #gc_thresholds: [700, 10, 10]

        # Set the limit on the returned events in the timeline in the get
-        # and sync operations. The default value is -1, means no upper limit.
+        # and sync operations. The default value is 100. -1 means no upper limit.
+        #
+        # Uncomment the following to increase the limit to 5000.
        #
        #filter_timeline_limit: 5000
@@ -743,38 +714,6 @@ class ServerConfig(Config):
        #
        #enable_search: false
# Restrict federation to the following whitelist of domains.
# N.B. we recommend also firewalling your federation listener to limit
# inbound federation traffic as early as possible, rather than relying
# purely on this application-layer restriction. If not specified, the
# default is to whitelist everything.
#
#federation_domain_whitelist:
# - lon.example.com
# - nyc.example.com
# - syd.example.com
# Prevent federation requests from being sent to the following
# blacklist IP address CIDR ranges. If this option is not specified, or
# specified with an empty list, no ip range blacklist will be enforced.
#
# As of Synapse v1.4.0 this option also affects any outbound requests to identity
# servers provided by user input.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
federation_ip_range_blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
        # List of ports that Synapse should listen on, their purpose and their
        # configuration.
        #
@@ -803,7 +742,7 @@ class ServerConfig(Config):
        #   names: a list of names of HTTP resources. See below for a list of
        #   valid resource names.
        #
-        #   compress: set to true to enable HTTP comression for this resource.
+        #   compress: set to true to enable HTTP compression for this resource.
        #
        #   additional_resources: Only valid for an 'http' listener. A map of
        #   additional endpoints which should be loaded via dynamic modules.
@@ -957,6 +896,10 @@ class ServerConfig(Config):
        #
        #complexity_error: "This room is too complex."
# allow server admins to join complex rooms. Default is false.
#
#admins_can_join: true
        # Whether to require a user to be in the room to add an alias to it.
        # Defaults to 'true'.
        #
+58 -10
@@ -15,7 +15,7 @@
import attr

-from ._base import Config, ConfigError
+from ._base import Config, ConfigError, ShardedWorkerHandlingConfig
from .server import ListenerConfig, parse_listener_def
@@ -34,9 +34,11 @@ class WriterLocations:
    Attributes:
        events: The instance that writes to the event and backfill streams.
        typing: The instance that writes to the typing stream.
    """

    events = attr.ib(default="master", type=str)
    typing = attr.ib(default="master", type=str)


class WorkerConfig(Config):
@@ -83,6 +85,16 @@ class WorkerConfig(Config):
            )
        )
# Whether to send federation traffic out in this process. This only
# applies to some federation traffic, and so shouldn't be used to
# "disable" federation
self.send_federation = config.get("send_federation", True)
federation_sender_instances = config.get("federation_sender_instances") or []
self.federation_shard_config = ShardedWorkerHandlingConfig(
federation_sender_instances
)
        # A map from instance name to host/port of their HTTP replication endpoint.
        instance_map = config.get("instance_map") or {}
        self.instance_map = {
@@ -93,16 +105,52 @@ class WorkerConfig(Config):
writers = config.get("stream_writers") or {} writers = config.get("stream_writers") or {}
self.writers = WriterLocations(**writers) self.writers = WriterLocations(**writers)
-        # Check that the configured writer for events also appears in
+        # Check that the configured writer for events and typing also appears in
        # `instance_map`.
-        if (
-            self.writers.events != "master"
-            and self.writers.events not in self.instance_map
-        ):
-            raise ConfigError(
-                "Instance %r is configured to write events but does not appear in `instance_map` config."
-                % (self.writers.events,)
-            )
+        for stream in ("events", "typing"):
+            instance = getattr(self.writers, stream)
+            if instance != "master" and instance not in self.instance_map:
+                raise ConfigError(
+                    "Instance %r is configured to write %s but does not appear in `instance_map` config."
+                    % (instance, stream)
+                )
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """\
## Workers ##
# Disables sending of outbound federation transactions on the main process.
# Uncomment if using a federation sender worker.
#
#send_federation: false
# It is possible to run multiple federation sender workers, in which case the
# work is balanced across them.
#
# This configuration must be shared between all federation sender workers, and if
# changed all federation sender workers must be stopped at the same time and then
# started, to ensure that all instances are running with the same config (otherwise
# events may be dropped).
#
#federation_sender_instances:
# - federation_sender1
# When using workers this should be a map from `worker_name` to the
# HTTP replication listener of the worker, if configured.
#
#instance_map:
# worker1:
# host: localhost
# port: 8034
# Experimental: When using workers you can define which workers should
# handle event persistence and typing notifications. Any worker
# specified here must also be in the `instance_map`.
#
#stream_writers:
# events: worker1
# typing: worker1
"""
    def read_arguments(self, args):
        # We support a bunch of command line arguments that override options in
+67 -80
@@ -223,8 +223,7 @@ class Keyring(object):
        return results

-    @defer.inlineCallbacks
-    def _start_key_lookups(self, verify_requests):
+    async def _start_key_lookups(self, verify_requests):
        """Sets off the key fetches for each verify request

        Once each fetch completes, verify_request.key_ready will be resolved.
@@ -245,7 +244,7 @@ class Keyring(object):
            server_to_request_ids.setdefault(server_name, set()).add(request_id)

        # Wait for any previous lookups to complete before proceeding.
-        yield self.wait_for_previous_lookups(server_to_request_ids.keys())
+        await self.wait_for_previous_lookups(server_to_request_ids.keys())

        # take out a lock on each of the servers by sticking a Deferred in
        # key_downloads
@@ -283,15 +282,14 @@ class Keyring(object):
        except Exception:
            logger.exception("Error starting key lookups")

-    @defer.inlineCallbacks
-    def wait_for_previous_lookups(self, server_names):
+    async def wait_for_previous_lookups(self, server_names) -> None:
        """Waits for any previous key lookups for the given servers to finish.

        Args:
            server_names (Iterable[str]): list of servers which we want to look up

        Returns:
-            Deferred[None]: resolves once all key lookups for the given servers have
+            Resolves once all key lookups for the given servers have
                completed. Follows the synapse rules of logcontext preservation.
        """
        loop_count = 1
@@ -309,7 +307,7 @@ class Keyring(object):
                loop_count,
            )
            with PreserveLoggingContext():
-                yield defer.DeferredList((w[1] for w in wait_on))
+                await defer.DeferredList((w[1] for w in wait_on))

            loop_count += 1
@@ -326,44 +324,44 @@ class Keyring(object):
        remaining_requests = {rq for rq in verify_requests if not rq.key_ready.called}
-        @defer.inlineCallbacks
-        def do_iterations():
-            with Measure(self.clock, "get_server_verify_keys"):
-                for f in self._key_fetchers:
-                    if not remaining_requests:
-                        return
-                    yield self._attempt_key_fetches_with_fetcher(f, remaining_requests)
-
-            # look for any requests which weren't satisfied
-            with PreserveLoggingContext():
-                for verify_request in remaining_requests:
-                    verify_request.key_ready.errback(
-                        SynapseError(
-                            401,
-                            "No key for %s with ids in %s (min_validity %i)"
-                            % (
-                                verify_request.server_name,
-                                verify_request.key_ids,
-                                verify_request.minimum_valid_until_ts,
-                            ),
-                            Codes.UNAUTHORIZED,
-                        )
-                    )
-
-        def on_err(err):
-            # we don't really expect to get here, because any errors should already
-            # have been caught and logged. But if we do, let's log the error and make
-            # sure that all of the deferreds are resolved.
-            logger.error("Unexpected error in _get_server_verify_keys: %s", err)
-            with PreserveLoggingContext():
-                for verify_request in remaining_requests:
-                    if not verify_request.key_ready.called:
-                        verify_request.key_ready.errback(err)
-
-        run_in_background(do_iterations).addErrback(on_err)
+        async def do_iterations():
+            try:
+                with Measure(self.clock, "get_server_verify_keys"):
+                    for f in self._key_fetchers:
+                        if not remaining_requests:
+                            return
+                        await self._attempt_key_fetches_with_fetcher(
+                            f, remaining_requests
+                        )
+
+                # look for any requests which weren't satisfied
+                with PreserveLoggingContext():
+                    for verify_request in remaining_requests:
+                        verify_request.key_ready.errback(
+                            SynapseError(
+                                401,
+                                "No key for %s with ids in %s (min_validity %i)"
+                                % (
+                                    verify_request.server_name,
+                                    verify_request.key_ids,
+                                    verify_request.minimum_valid_until_ts,
+                                ),
+                                Codes.UNAUTHORIZED,
+                            )
+                        )
+            except Exception as err:
+                # we don't really expect to get here, because any errors should already
+                # have been caught and logged. But if we do, let's log the error and make
+                # sure that all of the deferreds are resolved.
+                logger.error("Unexpected error in _get_server_verify_keys: %s", err)
+                with PreserveLoggingContext():
+                    for verify_request in remaining_requests:
+                        if not verify_request.key_ready.called:
+                            verify_request.key_ready.errback(err)
+
+        run_in_background(do_iterations)
-    @defer.inlineCallbacks
-    def _attempt_key_fetches_with_fetcher(self, fetcher, remaining_requests):
+    async def _attempt_key_fetches_with_fetcher(self, fetcher, remaining_requests):
        """Use a key fetcher to attempt to satisfy some key requests

        Args:
@@ -390,7 +388,7 @@ class Keyring(object):
                verify_request.minimum_valid_until_ts,
            )

-        results = yield fetcher.get_keys(missing_keys)
+        results = await fetcher.get_keys(missing_keys)

        completed = []
        for verify_request in remaining_requests:
@@ -423,7 +421,7 @@ class Keyring(object):
class KeyFetcher(object):
-    def get_keys(self, keys_to_fetch):
+    async def get_keys(self, keys_to_fetch):
        """
        Args:
            keys_to_fetch (dict[str, dict[str, int]]):
@@ -442,8 +440,7 @@ class StoreKeyFetcher(KeyFetcher):
    def __init__(self, hs):
        self.store = hs.get_datastore()

-    @defer.inlineCallbacks
-    def get_keys(self, keys_to_fetch):
+    async def get_keys(self, keys_to_fetch):
        """see KeyFetcher.get_keys"""

        keys_to_fetch = (
@@ -452,7 +449,7 @@ class StoreKeyFetcher(KeyFetcher):
            for key_id in keys_for_server.keys()
        )

-        res = yield self.store.get_server_verify_keys(keys_to_fetch)
+        res = await self.store.get_server_verify_keys(keys_to_fetch)

        keys = {}
        for (server_name, key_id), key in res.items():
            keys.setdefault(server_name, {})[key_id] = key
@@ -464,8 +461,7 @@ class BaseV2KeyFetcher(object):
        self.store = hs.get_datastore()
        self.config = hs.get_config()

-    @defer.inlineCallbacks
-    def process_v2_response(self, from_server, response_json, time_added_ms):
+    async def process_v2_response(self, from_server, response_json, time_added_ms):
        """Parse a 'Server Keys' structure from the result of a /key request

        This is used to parse either the entirety of the response from
@@ -537,7 +533,7 @@ class BaseV2KeyFetcher(object):
key_json_bytes = encode_canonical_json(response_json) key_json_bytes = encode_canonical_json(response_json)
yield make_deferred_yieldable( await make_deferred_yieldable(
defer.gatherResults( defer.gatherResults(
[ [
run_in_background( run_in_background(
@@ -567,14 +563,12 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
self.client = hs.get_http_client() self.client = hs.get_http_client()
self.key_servers = self.config.key_servers self.key_servers = self.config.key_servers
@defer.inlineCallbacks async def get_keys(self, keys_to_fetch):
def get_keys(self, keys_to_fetch):
"""see KeyFetcher.get_keys""" """see KeyFetcher.get_keys"""
@defer.inlineCallbacks async def get_key(key_server):
def get_key(key_server):
try: try:
result = yield self.get_server_verify_key_v2_indirect( result = await self.get_server_verify_key_v2_indirect(
keys_to_fetch, key_server keys_to_fetch, key_server
) )
return result return result
@@ -592,7 +586,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
return {} return {}
results = yield make_deferred_yieldable( results = await make_deferred_yieldable(
defer.gatherResults( defer.gatherResults(
[run_in_background(get_key, server) for server in self.key_servers], [run_in_background(get_key, server) for server in self.key_servers],
consumeErrors=True, consumeErrors=True,
@@ -606,8 +600,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
return union_of_keys return union_of_keys
@defer.inlineCallbacks async def get_server_verify_key_v2_indirect(self, keys_to_fetch, key_server):
def get_server_verify_key_v2_indirect(self, keys_to_fetch, key_server):
""" """
Args: Args:
keys_to_fetch (dict[str, dict[str, int]]): keys_to_fetch (dict[str, dict[str, int]]):
@@ -617,7 +610,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
the keys the keys
Returns: Returns:
Deferred[dict[str, dict[str, synapse.storage.keys.FetchKeyResult]]]: map dict[str, dict[str, synapse.storage.keys.FetchKeyResult]]: map
from server_name -> key_id -> FetchKeyResult from server_name -> key_id -> FetchKeyResult
Raises: Raises:
@@ -632,7 +625,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
) )
try: try:
query_response = yield self.client.post_json( query_response = await self.client.post_json(
destination=perspective_name, destination=perspective_name,
path="/_matrix/key/v2/query", path="/_matrix/key/v2/query",
data={ data={
@@ -668,7 +661,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
try: try:
self._validate_perspectives_response(key_server, response) self._validate_perspectives_response(key_server, response)
processed_response = yield self.process_v2_response( processed_response = await self.process_v2_response(
perspective_name, response, time_added_ms=time_now_ms perspective_name, response, time_added_ms=time_now_ms
) )
except KeyLookupError as e: except KeyLookupError as e:
@@ -687,7 +680,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
) )
keys.setdefault(server_name, {}).update(processed_response) keys.setdefault(server_name, {}).update(processed_response)
yield self.store.store_server_verify_keys( await self.store.store_server_verify_keys(
perspective_name, time_now_ms, added_keys perspective_name, time_now_ms, added_keys
) )
@@ -739,24 +732,23 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
self.clock = hs.get_clock() self.clock = hs.get_clock()
self.client = hs.get_http_client() self.client = hs.get_http_client()
def get_keys(self, keys_to_fetch): async def get_keys(self, keys_to_fetch):
""" """
Args: Args:
keys_to_fetch (dict[str, iterable[str]]): keys_to_fetch (dict[str, iterable[str]]):
the keys to be fetched. server_name -> key_ids the keys to be fetched. server_name -> key_ids
Returns: Returns:
Deferred[dict[str, dict[str, synapse.storage.keys.FetchKeyResult|None]]]: dict[str, dict[str, synapse.storage.keys.FetchKeyResult|None]]:
map from server_name -> key_id -> FetchKeyResult map from server_name -> key_id -> FetchKeyResult
""" """
results = {} results = {}
@defer.inlineCallbacks async def get_key(key_to_fetch_item):
def get_key(key_to_fetch_item):
server_name, key_ids = key_to_fetch_item server_name, key_ids = key_to_fetch_item
try: try:
keys = yield self.get_server_verify_key_v2_direct(server_name, key_ids) keys = await self.get_server_verify_key_v2_direct(server_name, key_ids)
results[server_name] = keys results[server_name] = keys
except KeyLookupError as e: except KeyLookupError as e:
logger.warning( logger.warning(
@@ -765,12 +757,11 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
except Exception: except Exception:
logger.exception("Error getting keys %s from %s", key_ids, server_name) logger.exception("Error getting keys %s from %s", key_ids, server_name)
return yieldable_gather_results(get_key, keys_to_fetch.items()).addCallback( return await yieldable_gather_results(
lambda _: results get_key, keys_to_fetch.items()
) ).addCallback(lambda _: results)
@defer.inlineCallbacks async def get_server_verify_key_v2_direct(self, server_name, key_ids):
def get_server_verify_key_v2_direct(self, server_name, key_ids):
""" """
Args: Args:
@@ -792,7 +783,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
time_now_ms = self.clock.time_msec() time_now_ms = self.clock.time_msec()
try: try:
response = yield self.client.get_json( response = await self.client.get_json(
destination=server_name, destination=server_name,
path="/_matrix/key/v2/server/" path="/_matrix/key/v2/server/"
+ urllib.parse.quote(requested_key_id), + urllib.parse.quote(requested_key_id),
@@ -823,12 +814,12 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
% (server_name, response["server_name"]) % (server_name, response["server_name"])
) )
response_keys = yield self.process_v2_response( response_keys = await self.process_v2_response(
from_server=server_name, from_server=server_name,
response_json=response, response_json=response,
time_added_ms=time_now_ms, time_added_ms=time_now_ms,
) )
yield self.store.store_server_verify_keys( await self.store.store_server_verify_keys(
server_name, server_name,
time_now_ms, time_now_ms,
((server_name, key_id, key) for key_id, key in response_keys.items()), ((server_name, key_id, key) for key_id, key in response_keys.items()),
@@ -838,22 +829,18 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
return keys return keys
@defer.inlineCallbacks async def _handle_key_deferred(verify_request) -> None:
def _handle_key_deferred(verify_request):
"""Waits for the key to become available, and then performs a verification """Waits for the key to become available, and then performs a verification
Args: Args:
verify_request (VerifyJsonRequest): verify_request (VerifyJsonRequest):
Returns:
Deferred[None]
Raises: Raises:
SynapseError if there was a problem performing the verification SynapseError if there was a problem performing the verification
""" """
server_name = verify_request.server_name server_name = verify_request.server_name
with PreserveLoggingContext(): with PreserveLoggingContext():
_, key_id, verify_key = yield verify_request.key_ready _, key_id, verify_key = await verify_request.key_ready
json_object = verify_request.json_object json_object = verify_request.json_object
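The mechanical shape of the conversion above — generator-based `@defer.inlineCallbacks`/`yield` code replaced with native coroutines that `await` each fetcher in turn and track what remains unsatisfied — can be sketched with plain `asyncio`. This is a hedged illustration only: `DemoKeyFetcher` and `attempt_fetches` are hypothetical stand-ins, not Synapse's Twisted-based `KeyFetcher` classes.

```python
import asyncio


class DemoKeyFetcher:
    """Stand-in for a KeyFetcher; returns canned results for known key IDs."""

    def __init__(self, available):
        self._available = available  # key_id -> key

    async def get_keys(self, key_ids):
        # A real fetcher would hit the database or the network here.
        await asyncio.sleep(0)
        return {k: self._available[k] for k in key_ids if k in self._available}


async def attempt_fetches(fetchers, key_ids):
    """Try each fetcher in turn, dropping requests it satisfied."""
    results = {}
    remaining = set(key_ids)
    for fetcher in fetchers:
        if not remaining:
            break
        found = await fetcher.get_keys(remaining)
        results.update(found)
        remaining -= set(found)
    return results, remaining


fetchers = [
    DemoKeyFetcher({"ed25519:a": "key_a"}),
    DemoKeyFetcher({"ed25519:b": "key_b"}),
]
results, missing = asyncio.run(
    attempt_fetches(fetchers, ["ed25519:a", "ed25519:b", "ed25519:c"])
)
print(sorted(results))  # ['ed25519:a', 'ed25519:b']
print(missing)          # {'ed25519:c'}
```

As in the real code, whatever is still in `remaining` after all fetchers have been tried corresponds to the requests that get errbacked with a 401.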
+6 -4
@@ -65,14 +65,16 @@ def check(
     room_id = event.room_id

-    # I'm not really expecting to get auth events in the wrong room, but let's
-    # sanity-check it
+    # We need to ensure that the auth events are actually for the same room, to
+    # stop people from using powers they've been granted in other rooms for
+    # example.
     for auth_event in auth_events.values():
         if auth_event.room_id != room_id:
-            raise Exception(
+            raise AuthError(
+                403,
                 "During auth for event %s in room %s, found event %s in the state "
                 "which is in room %s"
-                % (event.event_id, room_id, auth_event.event_id, auth_event.room_id)
+                % (event.event_id, room_id, auth_event.event_id, auth_event.room_id),
             )

     if do_sig_check:
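The hunk above replaces a generic `Exception` with a proper 403 `AuthError` when an auth event belongs to a different room. A minimal sketch of that check, with `check_auth_event_rooms`, the dict-shaped events, and the `AuthError` class all being hypothetical stand-ins for Synapse's real event objects and error types:

```python
class AuthError(Exception):
    """Minimal stand-in for synapse.api.errors.AuthError (HTTP code + message)."""

    def __init__(self, code, msg):
        super().__init__(msg)
        self.code = code


def check_auth_event_rooms(event_room_id, auth_events):
    # Reject any auth event that claims to come from a different room, so that
    # powers granted in one room cannot be replayed in another.
    for auth_event in auth_events:
        if auth_event["room_id"] != event_room_id:
            raise AuthError(
                403,
                "found auth event %s in room %s, expected %s"
                % (auth_event["event_id"], auth_event["room_id"], event_room_id),
            )


# An auth event from the wrong room is rejected with a 403:
try:
    check_auth_event_rooms(
        "!alpha:example.org",
        [{"event_id": "$1", "room_id": "!beta:example.org"}],
    )
    code = None
except AuthError as e:
    code = e.code
print(code)  # 403
```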
+7 -10
@@ -17,8 +17,6 @@ from typing import Optional
 import attr
 from nacl.signing import SigningKey

-from twisted.internet import defer
-
 from synapse.api.constants import MAX_DEPTH
 from synapse.api.errors import UnsupportedRoomVersionError
 from synapse.api.room_versions import (
@@ -95,31 +93,30 @@ class EventBuilder(object):
     def is_state(self):
         return self._state_key is not None

-    @defer.inlineCallbacks
-    def build(self, prev_event_ids):
+    async def build(self, prev_event_ids):
         """Transform into a fully signed and hashed event

         Args:
             prev_event_ids (list[str]): The event IDs to use as the prev events

         Returns:
-            Deferred[FrozenEvent]
+            FrozenEvent
         """

-        state_ids = yield self._state.get_current_state_ids(
+        state_ids = await self._state.get_current_state_ids(
             self.room_id, prev_event_ids
         )
-        auth_ids = yield self._auth.compute_auth_events(self, state_ids)
+        auth_ids = await self._auth.compute_auth_events(self, state_ids)

         format_version = self.room_version.event_format
         if format_version == EventFormatVersions.V1:
-            auth_events = yield self._store.add_event_hashes(auth_ids)
-            prev_events = yield self._store.add_event_hashes(prev_event_ids)
+            auth_events = await self._store.add_event_hashes(auth_ids)
+            prev_events = await self._store.add_event_hashes(prev_event_ids)
         else:
             auth_events = auth_ids
             prev_events = prev_event_ids

-        old_depth = yield self._store.get_max_depth_of(prev_event_ids)
+        old_depth = await self._store.get_max_depth_of(prev_event_ids)
         depth = old_depth + 1

         # we cap depth of generated events, to ensure that they are not
+22 -24
@@ -12,17 +12,19 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.
-from typing import Optional, Union
+from typing import TYPE_CHECKING, Optional, Union

 import attr
 from frozendict import frozendict

-from twisted.internet import defer
-
 from synapse.appservice import ApplicationService
+from synapse.events import EventBase
 from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.types import StateMap

+if TYPE_CHECKING:
+    from synapse.storage.data_stores.main import DataStore
+

 @attr.s(slots=True)
 class EventContext:
@@ -129,8 +131,7 @@ class EventContext:
             delta_ids=delta_ids,
         )

-    @defer.inlineCallbacks
-    def serialize(self, event, store):
+    async def serialize(self, event: EventBase, store: "DataStore") -> dict:
         """Converts self to a type that can be serialized as JSON, and then
         deserialized by `deserialize`
@@ -146,7 +147,7 @@ class EventContext:
         # the prev_state_ids, so if we're a state event we include the event
         # id that we replaced in the state.
         if event.is_state():
-            prev_state_ids = yield self.get_prev_state_ids()
+            prev_state_ids = await self.get_prev_state_ids()
             prev_state_id = prev_state_ids.get((event.type, event.state_key))
         else:
             prev_state_id = None
@@ -214,8 +215,7 @@ class EventContext:

         return self._state_group

-    @defer.inlineCallbacks
-    def get_current_state_ids(self):
+    async def get_current_state_ids(self) -> Optional[StateMap[str]]:
         """
         Gets the room state map, including this event - ie, the state in ``state_group``
@@ -224,32 +224,31 @@ class EventContext:
            ``rejected`` is set.

         Returns:
-            Deferred[dict[(str, str), str]|None]: Returns None if state_group
-                is None, which happens when the associated event is an outlier.
+            Returns None if state_group is None, which happens when the associated
+            event is an outlier.

             Maps a (type, state_key) to the event ID of the state event matching
             this tuple.
         """
         if self.rejected:
             raise RuntimeError("Attempt to access state_ids of rejected event")

-        yield self._ensure_fetched()
+        await self._ensure_fetched()
         return self._current_state_ids

-    @defer.inlineCallbacks
-    def get_prev_state_ids(self):
+    async def get_prev_state_ids(self):
         """
         Gets the room state map, excluding this event.

         For a non-state event, this will be the same as get_current_state_ids().

         Returns:
-            Deferred[dict[(str, str), str]|None]: Returns None if state_group
+            dict[(str, str), str]|None: Returns None if state_group
                 is None, which happens when the associated event is an outlier.
                 Maps a (type, state_key) to the event ID of the state event matching
                 this tuple.
         """
-        yield self._ensure_fetched()
+        await self._ensure_fetched()
         return self._prev_state_ids

     def get_cached_current_state_ids(self):
@@ -269,8 +268,8 @@ class EventContext:

         return self._current_state_ids

-    def _ensure_fetched(self):
-        return defer.succeed(None)
+    async def _ensure_fetched(self):
+        return None


 @attr.s(slots=True)
@@ -303,21 +302,20 @@ class _AsyncEventContextImpl(EventContext):
     _event_state_key = attr.ib(default=None)
     _fetching_state_deferred = attr.ib(default=None)

-    def _ensure_fetched(self):
+    async def _ensure_fetched(self):
         if not self._fetching_state_deferred:
             self._fetching_state_deferred = run_in_background(self._fill_out_state)

-        return make_deferred_yieldable(self._fetching_state_deferred)
+        return await make_deferred_yieldable(self._fetching_state_deferred)

-    @defer.inlineCallbacks
-    def _fill_out_state(self):
+    async def _fill_out_state(self):
         """Called to populate the _current_state_ids and _prev_state_ids
         attributes by loading from the database.
         """
         if self.state_group is None:
             return

-        self._current_state_ids = yield self._storage.state.get_state_ids_for_group(
+        self._current_state_ids = await self._storage.state.get_state_ids_for_group(
             self.state_group
         )
         if self._event_state_key is not None:
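The `_ensure_fetched` pattern in the hunk above starts the expensive state load at most once and lets every caller await the same in-flight operation. A minimal `asyncio` sketch of that idea (Synapse itself does this with a Twisted Deferred via `run_in_background`/`make_deferred_yieldable`; `LazyState` and its attributes here are hypothetical):

```python
import asyncio


class LazyState:
    """Sketch of the _ensure_fetched pattern: the load runs at most once,
    and concurrent callers all await the same in-flight task."""

    def __init__(self):
        self._fill_task = None
        self.state_ids = None
        self.fill_count = 0

    async def _fill_out_state(self):
        self.fill_count += 1
        await asyncio.sleep(0)  # a real implementation would query the state store
        self.state_ids = {("m.room.create", ""): "$create_event"}

    async def _ensure_fetched(self):
        if self._fill_task is None:
            # Create the task once; later callers just await it.
            self._fill_task = asyncio.ensure_future(self._fill_out_state())
        return await self._fill_task

    async def get_current_state_ids(self):
        await self._ensure_fetched()
        return self.state_ids


async def main():
    ctx = LazyState()
    # Two concurrent callers share a single fill.
    a, b = await asyncio.gather(
        ctx.get_current_state_ids(), ctx.get_current_state_ids()
    )
    return a, b, ctx.fill_count


a, b, count = asyncio.run(main())
print(count)  # 1
```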
+30 -25
@@ -13,7 +13,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-from twisted.internet import defer
+from synapse.events import EventBase
+from synapse.events.snapshot import EventContext
+from synapse.types import Requester


 class ThirdPartyEventRules(object):
@@ -39,76 +41,79 @@ class ThirdPartyEventRules(object):
                 config=config, http_client=hs.get_simple_http_client()
             )

-    @defer.inlineCallbacks
-    def check_event_allowed(self, event, context):
+    async def check_event_allowed(
+        self, event: EventBase, context: EventContext
+    ) -> bool:
         """Check if a provided event should be allowed in the given context.

         Args:
-            event (synapse.events.EventBase): The event to be checked.
-            context (synapse.events.snapshot.EventContext): The context of the event.
+            event: The event to be checked.
+            context: The context of the event.

         Returns:
-            defer.Deferred[bool]: True if the event should be allowed, False if not.
+            True if the event should be allowed, False if not.
         """
         if self.third_party_rules is None:
             return True

-        prev_state_ids = yield context.get_prev_state_ids()
+        prev_state_ids = await context.get_prev_state_ids()

         # Retrieve the state events from the database.
         state_events = {}
         for key, event_id in prev_state_ids.items():
-            state_events[key] = yield self.store.get_event(event_id, allow_none=True)
+            state_events[key] = await self.store.get_event(event_id, allow_none=True)

-        ret = yield self.third_party_rules.check_event_allowed(event, state_events)
+        ret = await self.third_party_rules.check_event_allowed(event, state_events)
         return ret

-    @defer.inlineCallbacks
-    def on_create_room(self, requester, config, is_requester_admin):
+    async def on_create_room(
+        self, requester: Requester, config: dict, is_requester_admin: bool
+    ) -> bool:
         """Intercept requests to create room to allow, deny or update the
         request config.

         Args:
-            requester (Requester)
-            config (dict): The creation config from the client.
-            is_requester_admin (bool): If the requester is an admin
+            requester
+            config: The creation config from the client.
+            is_requester_admin: If the requester is an admin

         Returns:
-            defer.Deferred[bool]: Whether room creation is allowed or denied.
+            Whether room creation is allowed or denied.
         """
         if self.third_party_rules is None:
             return True

-        ret = yield self.third_party_rules.on_create_room(
+        ret = await self.third_party_rules.on_create_room(
             requester, config, is_requester_admin
         )
         return ret

-    @defer.inlineCallbacks
-    def check_threepid_can_be_invited(self, medium, address, room_id):
+    async def check_threepid_can_be_invited(
+        self, medium: str, address: str, room_id: str
+    ) -> bool:
         """Check if a provided 3PID can be invited in the given room.

         Args:
-            medium (str): The 3PID's medium.
-            address (str): The 3PID's address.
-            room_id (str): The room we want to invite the threepid to.
+            medium: The 3PID's medium.
+            address: The 3PID's address.
+            room_id: The room we want to invite the threepid to.

         Returns:
-            defer.Deferred[bool], True if the 3PID can be invited, False if not.
+            True if the 3PID can be invited, False if not.
         """
         if self.third_party_rules is None:
             return True

-        state_ids = yield self.store.get_filtered_current_state_ids(room_id)
-        room_state_events = yield self.store.get_events(state_ids.values())
+        state_ids = await self.store.get_filtered_current_state_ids(room_id)
+        room_state_events = await self.store.get_events(state_ids.values())

         state_events = {}
         for key, event_id in state_ids.items():
             state_events[key] = room_state_events[event_id]

-        ret = yield self.third_party_rules.check_threepid_can_be_invited(
+        ret = await self.third_party_rules.check_threepid_can_be_invited(
             medium, address, state_events
         )
         return ret
Some files were not shown because too many files have changed in this diff.