Compare commits

...

70 Commits

Author SHA1 Message Date
Olivier Wilkinson (reivilibre)
3234d5c305 Changelog changes
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2020-08-17 14:21:20 +01:00
Olivier Wilkinson (reivilibre)
ea4e4d2f0b 1.19.0 2020-08-17 14:12:46 +01:00
Olivier Wilkinson (reivilibre)
93848f3c89 More changelog tweaks
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2020-08-13 17:57:46 +01:00
Olivier Wilkinson (reivilibre)
4550b77312 More changelog tweaks
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2020-08-13 17:46:22 +01:00
Olivier Wilkinson (reivilibre)
a69ba6f457 Remove unwanted changelog line
Signed-off-by: Olivier Wilkinson (reivilibre) <olivier@librepush.net>
2020-08-13 17:17:37 +01:00
Olivier Wilkinson (reivilibre)
091ca3910d 1.19.0rc1 2020-08-13 17:12:21 +01:00
Patrick Cloke
fbe930dad2 Convert the roommember database to async/await. (#8070) 2020-08-12 12:14:34 -04:00
Patrick Cloke
5ecc8b5825 Convert devices database to async/await. (#8069) 2020-08-12 10:51:42 -04:00
Erik Johnston
5dd73d029e Add type hints to handlers.message and events.builder (#8067) 2020-08-12 15:05:50 +01:00
Patrick Cloke
d68e10f308 Convert account data, device inbox, and censor events databases to async/await (#8063) 2020-08-12 09:29:06 -04:00
Patrick Cloke
a3a59bab7b Convert appservice, group server, profile and more databases to async (#8066) 2020-08-12 09:28:48 -04:00
Erik Johnston
9d1e4942ab Fix typing for notifier (#8064) 2020-08-12 14:03:08 +01:00
Erik Johnston
6ba621d786 Merge pull request #8060 from matrix-org/erikj/type_server
Change HomeServer definition to work with typing.
2020-08-11 22:32:14 +01:00
Patrick Cloke
04faa0bfa9 Convert tags and metrics databases to async/await (#8062) 2020-08-11 17:21:20 -04:00
Patrick Cloke
a0acdfa9e9 Converts event_federation and registration databases to async/await (#8061) 2020-08-11 17:21:13 -04:00
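The run of database-conversion commits above (and several more below) all make the same mechanical change: `@defer.inlineCallbacks` generators that `yield` Deferreds become native `async def` coroutines that `await` their calls; the diff to `synapse/api/auth.py` near the end of this page shows the pattern in context. A minimal, self-contained sketch of the idea, using a made-up `FakeDatabasePool` and `count_devices` rather than Synapse's real classes:

```python
import asyncio
from typing import Dict, List


class FakeDatabasePool:
    """Stand-in for a database pool; returns canned rows asynchronously."""

    async def simple_select_list(self, table: str, retcols: List[str]) -> List[Dict[str, str]]:
        await asyncio.sleep(0)  # pretend to hit the database
        return [{"device_id": "ABC"}, {"device_id": "DEF"}]


class DeviceStore:
    def __init__(self, db_pool: FakeDatabasePool) -> None:
        self.db_pool = db_pool

    # Before these PRs this would have been an @defer.inlineCallbacks
    # generator yielding Deferreds; now it is a plain coroutine.
    async def count_devices(self, user_id: str) -> int:
        rows = await self.db_pool.simple_select_list("devices", ["device_id"])
        return len(rows)


async def main() -> None:
    store = DeviceStore(FakeDatabasePool())
    print(await store.count_devices("@alice:example.org"))


if __name__ == "__main__":
    asyncio.run(main())
```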
Erik Johnston
fdb46b5442 Merge remote-tracking branch 'origin/develop' into erikj/type_server 2020-08-11 22:03:14 +01:00
Erik Johnston
c066928915 Add comment explaining cast 2020-08-11 22:01:12 +01:00
Erik Johnston
61d8ff0d44 Auto set logging filter (#8051)
We do this to prevent footguns. The default config uses a MemoryHandler,
but users are free to switch to logging to files directly. If they do,
they have to make sure `filters: [context]` is set on the right
handler, otherwise records get written with the wrong context.

Instead we move the logic to happen when we generate a record, which is
when we *log* rather than *handle*.

(It's possible to add filters to loggers in the config, however they
don't apply to descendant loggers and so they have to be manually set on
*every* logger used in the code base)
2020-08-11 21:58:56 +01:00
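Synapse's own fix wires its `LoggingContextFilter` in at record-creation time; the stdlib-only sketch below illustrates the same general idea with `logging.setLogRecordFactory`, stamping a `request` attribute on every record when it is generated so that no handler needs `filters: [context]`. The factory and the `"main"` placeholder are illustrative, not Synapse code:

```python
import logging

old_factory = logging.getLogRecordFactory()


def record_factory(*args, **kwargs):
    # Runs every time a record is *created* (i.e. at log time), so every
    # handler sees the extra attribute without per-handler filters.
    record = old_factory(*args, **kwargs)
    if not hasattr(record, "request"):
        record.request = "main"  # illustrative placeholder context
    return record


logging.setLogRecordFactory(record_factory)

handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(request)s - %(message)s")
)
logging.basicConfig(level=logging.INFO, handlers=[handler])

logging.getLogger("demo").info("record carries a request attribute")
```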
Erik Johnston
3c796e4159 Update changelog.d/8051.misc
Co-authored-by: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-08-11 21:08:43 +01:00
Erik Johnston
a1e9bb9eae Add typing info to Notifier (#8058) 2020-08-11 19:40:02 +01:00
Erik Johnston
8a3dac3c19 Handle optional dependencies for Oidc and Saml 2020-08-11 18:20:45 +01:00
Erik Johnston
e1af09dccb Newsfile 2020-08-11 18:10:46 +01:00
Erik Johnston
0304ad0c3d Move setting of Filter into code.
We do this to prevent footguns. The default config uses a MemoryHandler,
but users are free to switch to logging to files directly. If they do,
they have to make sure `filters: [context]` is set on the right
handler, otherwise records get written with the wrong context.

Instead we move the logic to happen when we generate a record, which is
when we *log* rather than *handle*.

(It's possible to add filters to loggers in the config, however they
don't apply to descendant loggers and so they have to be manually set on
*every* logger used in the code base)
2020-08-11 18:10:46 +01:00
Erik Johnston
a0f574f3c2 Reduce INFO logging (#8050)
c.f. #8021 

A lot of the code here is to change the `Completed 200 OK` logging to include the request URI so that we can drop the `Sending request...` log line.

Some notes:

1. We won't log retries, which may be confusing considering the time taken log line includes retries and sleeps.
2. The `_send_request_with_optional_trailing_slash` will always be logged *without* the forward slash, even if it succeeded only with the forward slash.
2020-08-11 18:10:07 +01:00
Erik Johnston
db131b6b22 Change the default log config to reduce disk I/O and storage (#8040)
* Change default log config to buffer by default.

This batches up writes to the filesystem, which is more efficient for
disk I/O. This means that it can take some time for logs to get written
to disk. Note that ERROR logs (and above) immediately flush the buffer.

This only affects new installs, as we only write the log config if
started with `--generate-config` (in the same way we do for generating
signing keys).

* Default to keeping last 4 days of logs.

This hopefully reduces the amount of logs kept for new servers. Keeping
the last 1GB of logs is likely overkill for new servers, but equally may
not be enough for busy ones.

Instead, we keep the last four days' worth of logs, enough so that admins
can investigate any problems that happened over e.g. a long weekend.
2020-08-11 18:09:46 +01:00
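The new default log config (reproduced in the diff further down this page) puts a `MemoryHandler` in front of a `TimedRotatingFileHandler`, so INFO/DEBUG lines are written in batches while WARNING and above flush immediately. A standalone sketch of that arrangement, with an illustrative file name and capacity:

```python
import logging
import logging.handlers

# The eventual destination: a daily-rotated log file keeping three backups.
file_handler = logging.handlers.TimedRotatingFileHandler(
    "homeserver.demo.log", when="midnight", backupCount=3, encoding="utf8"
)
file_handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))

# Buffer writes in memory; flush on WARNING or when the buffer fills up.
buffer_handler = logging.handlers.MemoryHandler(
    capacity=10,                  # lines buffered before being written out
    flushLevel=logging.WARNING,   # flush as soon as a WARNING (or above) arrives
    target=file_handler,
)

logger = logging.getLogger("demo")
logger.setLevel(logging.INFO)
logger.addHandler(buffer_handler)

for i in range(25):
    logger.info("buffered line %d", i)
logger.warning("this forces an immediate flush of anything still buffered")
```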
Erik Johnston
64e5bb0dc8 Newsfile 2020-08-11 18:03:26 +01:00
Erik Johnston
0f1afbe8dc Change HomeServer definition to work with typing.
Duplicating function signatures between server.py and server.pyi is
silly. This commit changes that by changing all `build_*` methods to
`get_*` methods and changing the `_make_dependency_method` to work
as a descriptor that caches the produced value.

There are some changes in other files that were made to fix the typing
in server.py.
2020-08-11 18:00:17 +01:00
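As a rough illustration of "a descriptor that caches the produced value", here is a small caching descriptor in the spirit of `functools.cached_property`. It is not the actual `_make_dependency_method` replacement (the names `CachedGetter` and `HomeServerSketch` are made up, and for brevity the value is exposed as an attribute rather than a method), but it shows why the approach lets a type checker infer each dependency's type from the builder's annotation while still constructing it lazily and only once:

```python
from typing import Any, Callable, Generic, TypeVar

T = TypeVar("T")


class CachedGetter(Generic[T]):
    """Build a value on first access and cache it on the instance."""

    def __init__(self, builder: Callable[[Any], T]) -> None:
        self._builder = builder
        self._name = builder.__name__

    def __get__(self, instance, owner=None):
        if instance is None:
            return self  # accessed on the class itself
        value = self._builder(instance)
        # Cache under the same attribute name: because this is a non-data
        # descriptor, later lookups find the cached value directly.
        instance.__dict__[self._name] = value
        return value


class HomeServerSketch:
    @CachedGetter
    def get_clock(self) -> str:
        print("building clock")
        return "clock-instance"


hs = HomeServerSketch()
print(hs.get_clock)  # builds once, printing "building clock" first
print(hs.get_clock)  # second access hits the cache; no rebuild
```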
Richard van der Hoff
0cb169900e Implement login blocking based on SAML attributes (#8052)
Hopefully this mostly speaks for itself. I also did a bit of cleaning up of the
error handling.

Fixes #8047
2020-08-11 16:08:10 +01:00
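The `attribute_requirements` block added to the sample config (visible later in this diff) expresses the policy; the check itself is just "every required attribute must carry the required value". A hedged sketch of that check, not Synapse's actual implementation (`check_attribute_requirements` is a made-up name):

```python
from typing import Dict, List, Mapping, Sequence


def check_attribute_requirements(
    attributes: Mapping[str, Sequence[str]],
    requirements: List[Dict[str, str]],
) -> bool:
    """Return True only if every required attribute/value pair is present.

    `attributes` is the attribute map from the SAML assertion (each attribute
    maps to a list of values); `requirements` mirrors the
    `attribute_requirements` config block shown later in this diff.
    """
    for req in requirements:
        values = attributes.get(req["attribute"], [])
        if req["value"] not in values:
            return False
    return True


# Example: only members of the "staff" group in the "sales" department may log in.
requirements = [
    {"attribute": "userGroup", "value": "staff"},
    {"attribute": "department", "value": "sales"},
]
assertion = {"userGroup": ["staff", "admin"], "department": ["sales"]}
print(check_attribute_requirements(assertion, requirements))  # True
```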
Richard van der Hoff
aa827b6ad7 Merge remote-tracking branch 'origin/master' into develop 2020-08-10 23:42:12 +01:00
Richard van der Hoff
39c3f68758 Stop uploading -py3 docker images (#8056) 2020-08-10 23:41:50 +01:00
Richard van der Hoff
fcbab08cbd Add an assertion on prev_events in create_new_client_event (#8041)
I think this would have caught all the cases in
https://github.com/matrix-org/synapse/issues/7642 - and I think a 500 makes
more sense here than a 403
2020-08-10 12:29:47 +01:00
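The assertion itself is small: refuse to build a non-genesis event whose `prev_events` list is empty, and fail with a server error rather than a 403. A minimal sketch under those assumptions (`EventCreationError` and `assert_has_prev_events` are illustrative names, not Synapse's):

```python
from typing import Sequence


class EventCreationError(RuntimeError):
    """Stand-in for the 500-level error the PR description argues for."""


def assert_has_prev_events(prev_event_ids: Sequence[str]) -> None:
    # An event with no prev_events would be disconnected from the room's
    # event graph, so fail loudly before building it.
    if not prev_event_ids:
        raise EventCreationError("Attempting to create an event with no prev_events")


assert_has_prev_events(["$abc123"])   # fine
# assert_has_prev_events([])          # would raise EventCreationError
```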
Brendan Abolivier
cdbb8e6d6e Implement new experimental push rules (#7997)
With an undocumented configuration setting to enable them for specific users.
2020-08-10 11:48:01 +01:00
Brendan Abolivier
5c43c43240 Typo 2020-08-10 11:23:24 +01:00
Brendan Abolivier
1a3aabcf3f Lint 2020-08-10 11:13:21 +01:00
Brendan Abolivier
cee6c6012e why mypy why 2020-08-10 11:10:34 +01:00
Patrick Cloke
7f837959ea Convert directory, e2e_room_keys, end_to_end_keys, monthly_active_users database to async (#8042) 2020-08-07 13:36:29 -04:00
Patrick Cloke
f3fe6961b2 Convert additional database stores to async/await (#8045) 2020-08-07 12:17:17 -04:00
Travis Ralston
1048ed2afa Clarify that undoing a shutdown might not be possible (#8010) 2020-08-07 17:16:24 +01:00
Richard van der Hoff
de6f892065 Add a comment about SSLv23_METHOD (#8043) 2020-08-07 15:14:29 +01:00
Erik Johnston
2f9fd5ab00 Don't log OPTIONS request at INFO (#8049) 2020-08-07 14:53:05 +01:00
Patrick Cloke
4e874ed593 Remove unnecessary maybeDeferred calls (#8044) 2020-08-07 09:44:48 -04:00
Erik Johnston
7620912d84 Add health check endpoint (#8048) 2020-08-07 14:21:24 +01:00
David Vo
4dd27e6d11 Reduce unnecessary whitespace in JSON. (#7372) 2020-08-07 08:02:55 -04:00
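The whitespace reduction in #7372 comes down to serialising JSON with compact separators. A standard-library illustration (not Synapse's exact call site):

```python
import json

payload = {"event_id": "$abc", "content": {"body": "hi", "msgtype": "m.text"}}

print(json.dumps(payload))                         # default: ", " and ": " separators
print(json.dumps(payload, separators=(",", ":")))  # compact form, no extra whitespace
```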
Brendan Abolivier
367e9e6e9e Lint 2020-08-06 17:57:58 +01:00
Brendan Abolivier
bf33d5c457 Incorporate review 2020-08-06 17:52:34 +01:00
Brendan Abolivier
2ffd6783c7 Revert #7736 (#8039) 2020-08-06 17:15:35 +01:00
Patrick Cloke
fe6cfc80ec Convert some util functions to async (#8035) 2020-08-06 08:39:35 -04:00
Patrick Cloke
d4a7829b12 Convert synapse.api to async/await (#8031) 2020-08-06 08:30:06 -04:00
Patrick Cloke
c36228c403 Convert run_as_background_process inner function to async. (#8032) 2020-08-06 08:20:42 -04:00
Patrick Cloke
66f24449dd Improve performance of the register endpoint (#8009) 2020-08-06 08:09:55 -04:00
Brendan Abolivier
118a9eafb3 Merge branch 'develop' of github.com:matrix-org/synapse into babolivier/new_push_rules 2020-08-06 10:52:50 +01:00
Brendan Abolivier
dd11f575a2 Incorporate review 2020-08-06 10:52:26 +01:00
Erik Johnston
079bc3c8e3 Fixup worker doc (again) (#8000) 2020-08-06 10:35:59 +01:00
Erik Johnston
a7bdf98d01 Rename database classes to make some sense (#8033) 2020-08-05 21:38:57 +01:00
Richard van der Hoff
0a86850ba3 Stop the parent process flushing the logs on exit (#8012)
This solves the problem that the first few lines are logged twice on matrix.org. Hopefully the comments explain it.
2020-08-05 09:35:17 +01:00
Richard van der Hoff
8b786db323 bug report template: move comments into comment (#8030) 2020-08-05 09:34:42 +01:00
Andrew Morgan
7cac9006d6 Spruce up the check-newsfragment CI output (#8024)
This PR:

* Reduces the amount of noise in the `check-newsfragment` CI output by hiding the dependency installation output by default.
* Prints a link to the changelog/debian changelog section of the contributing guide if an error is found.
2020-08-04 22:10:23 +01:00
Patrick Cloke
8ff2deda72 Fix async/await calls for broken media providers. (#8027) 2020-08-04 09:44:25 -04:00
Patrick Cloke
88a3ff12f0 Convert the SimpleHttpClient to async. (#8016) 2020-08-04 07:22:04 -04:00
Patrick Cloke
e19de43eb5 Convert streams to async. (#8014) 2020-08-04 07:21:47 -04:00
Richard van der Hoff
916cf2d439 re-implement daemonize (#8011)
This has long been something I've wanted to do. Basically the `Daemonize` code
is both too flexible and not flexible enough, in that it offers a bunch of
features that we don't use (changing UID, closing FDs in the child, logging to
syslog) and doesn't offer a bunch that we could do with (redirecting stdout/err
to a file instead of /dev/null; having the parent not exit until the child is
running).

As a first step, I've lifted the Daemonize code and removed the bits we don't
use. This should be a non-functional change. Fixing everything else will come
later.
2020-08-04 10:03:41 +01:00
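For reference, the classic double-fork recipe that daemonize helpers like this are built around looks roughly as follows. This is a generic POSIX sketch, not the code added in #8011, and it omits the stdout/stderr-to-file redirection and "parent waits for the child" behaviour discussed above:

```python
import os
import sys


def daemonize(pid_file: str) -> None:
    """Minimal double-fork daemonisation sketch (POSIX only)."""
    if os.fork() > 0:
        sys.exit(0)          # original parent exits
    os.setsid()              # become session leader, detach from the terminal
    if os.fork() > 0:
        sys.exit(0)          # first child exits; the grandchild is the daemon
    os.chdir("/")
    with open(pid_file, "w") as f:
        f.write(str(os.getpid()))
    devnull = os.open(os.devnull, os.O_RDWR)
    for fd in (0, 1, 2):     # redirect stdin/stdout/stderr to /dev/null
        os.dup2(devnull, fd)
```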
Brendan Abolivier
e2f1cccc8a Fix PUT /pushrules to use the right rule IDs 2020-08-03 11:52:52 +01:00
Brendan Abolivier
1678057b56 Back out the database hack and replace it with a temporary config setting 2020-08-03 11:22:22 +01:00
Brendan Abolivier
cf42d0a60c Fix cache name 2020-07-31 15:06:41 +01:00
Brendan Abolivier
79d991eff0 Fix cache invalidation calls 2020-07-31 13:58:42 +01:00
Brendan Abolivier
713d70d6c6 Merge branch 'develop' of github.com:matrix-org/synapse into babolivier/new_push_rules 2020-07-31 13:58:09 +01:00
Brendan Abolivier
60328ce9fb Lint 2020-07-30 19:02:28 +01:00
Brendan Abolivier
69158e554f Merge branch 'develop' of github.com:matrix-org/synapse into babolivier/new_push_rules 2020-07-30 19:00:29 +01:00
Brendan Abolivier
8b04c4cd70 Changelog 2020-07-30 17:43:17 +01:00
Brendan Abolivier
9725c59247 Implement new experimental push rules with a database hack to enable them 2020-07-28 19:20:55 +01:00
460 changed files with 4141 additions and 3749 deletions

View File

@@ -4,18 +4,16 @@ jobs:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
- run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
dockerhubuploadlatest:
machine: true
steps:
- checkout
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 .
- run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest .
- run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
- run: docker push matrixdotorg/synapse:latest
- run: docker push matrixdotorg/synapse:latest-py3
workflows:
version: 2

View File

@@ -4,12 +4,12 @@ about: Create a report to help us improve
---
<!--
**THIS IS NOT A SUPPORT CHANNEL!**
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**,
please ask in **#synapse:matrix.org** (using a matrix.org account if necessary)
<!--
If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/
This is a bug report template. By following the instructions below and

View File

@@ -1,3 +1,77 @@
Synapse 1.19.0 (2020-08-17)
===========================
No significant changes since 1.19.0rc1.
Removal warning
---------------
As outlined in the [previous release](https://github.com/matrix-org/synapse/releases/tag/v1.18.0), we are no longer publishing Docker images with the `-py3` tag suffix. On top of that, we have also removed the `latest-py3` tag. Please see [the announcement in the upgrade notes for 1.18.0](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180).
Synapse 1.19.0rc1 (2020-08-13)
==============================
Features
--------
- Add option to allow server admins to join rooms which fail complexity checks. Contributed by @lugino-emeritus. ([\#7902](https://github.com/matrix-org/synapse/issues/7902))
- Add an option to purge room or not with delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). Contributed by @dklimpel. ([\#7964](https://github.com/matrix-org/synapse/issues/7964))
- Add rate limiting to users joining rooms. ([\#8008](https://github.com/matrix-org/synapse/issues/8008))
- Add a `/health` endpoint to every configured HTTP listener that can be used as a health check endpoint by load balancers. ([\#8048](https://github.com/matrix-org/synapse/issues/8048))
- Allow login to be blocked based on the values of SAML attributes. ([\#8052](https://github.com/matrix-org/synapse/issues/8052))
- Allow guest access to the `GET /_matrix/client/r0/rooms/{room_id}/members` endpoint, according to MSC2689. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#7314](https://github.com/matrix-org/synapse/issues/7314))
Bugfixes
--------
- Fix a bug introduced in Synapse v1.7.2 which caused inaccurate membership counts in the room directory. ([\#7977](https://github.com/matrix-org/synapse/issues/7977))
- Fix a long standing bug: 'Duplicate key value violates unique constraint "event_relations_id"' when message retention is configured. ([\#7978](https://github.com/matrix-org/synapse/issues/7978))
- Fix "no create event in auth events" when trying to reject invitation after inviter leaves. Bug introduced in Synapse v1.10.0. ([\#7980](https://github.com/matrix-org/synapse/issues/7980))
- Fix various comments and minor discrepancies in server notices code. ([\#7996](https://github.com/matrix-org/synapse/issues/7996))
- Fix a long standing bug where HTTP HEAD requests resulted in a 400 error. ([\#7999](https://github.com/matrix-org/synapse/issues/7999))
- Fix a long-standing bug which caused two copies of some log lines to be written when synctl was used along with a MemoryHandler logger. ([\#8011](https://github.com/matrix-org/synapse/issues/8011), [\#8012](https://github.com/matrix-org/synapse/issues/8012))
Updates to the Docker image
---------------------------
- We no longer publish Docker images with the `-py3` tag suffix, as [announced in the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180). ([\#8056](https://github.com/matrix-org/synapse/issues/8056))
Improved Documentation
----------------------
- Document how to set up a client .well-known file and fix several pieces of outdated documentation. ([\#7899](https://github.com/matrix-org/synapse/issues/7899))
- Improve workers docs. ([\#7990](https://github.com/matrix-org/synapse/issues/7990), [\#8000](https://github.com/matrix-org/synapse/issues/8000))
- Fix typo in `docs/workers.md`. ([\#7992](https://github.com/matrix-org/synapse/issues/7992))
- Add documentation for how to undo a room shutdown. ([\#7998](https://github.com/matrix-org/synapse/issues/7998), [\#8010](https://github.com/matrix-org/synapse/issues/8010))
Internal Changes
----------------
- Reduce the amount of whitespace in JSON stored and sent in responses. Contributed by David Vo. ([\#7372](https://github.com/matrix-org/synapse/issues/7372))
- Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0. ([\#7936](https://github.com/matrix-org/synapse/issues/7936), [\#7979](https://github.com/matrix-org/synapse/issues/7979))
- Convert various parts of the codebase to async/await. ([\#7947](https://github.com/matrix-org/synapse/issues/7947), [\#7948](https://github.com/matrix-org/synapse/issues/7948), [\#7949](https://github.com/matrix-org/synapse/issues/7949), [\#7951](https://github.com/matrix-org/synapse/issues/7951), [\#7963](https://github.com/matrix-org/synapse/issues/7963), [\#7973](https://github.com/matrix-org/synapse/issues/7973), [\#7975](https://github.com/matrix-org/synapse/issues/7975), [\#7976](https://github.com/matrix-org/synapse/issues/7976), [\#7981](https://github.com/matrix-org/synapse/issues/7981), [\#7987](https://github.com/matrix-org/synapse/issues/7987), [\#7989](https://github.com/matrix-org/synapse/issues/7989), [\#8003](https://github.com/matrix-org/synapse/issues/8003), [\#8014](https://github.com/matrix-org/synapse/issues/8014), [\#8016](https://github.com/matrix-org/synapse/issues/8016), [\#8027](https://github.com/matrix-org/synapse/issues/8027), [\#8031](https://github.com/matrix-org/synapse/issues/8031), [\#8032](https://github.com/matrix-org/synapse/issues/8032), [\#8035](https://github.com/matrix-org/synapse/issues/8035), [\#8042](https://github.com/matrix-org/synapse/issues/8042), [\#8044](https://github.com/matrix-org/synapse/issues/8044), [\#8045](https://github.com/matrix-org/synapse/issues/8045), [\#8061](https://github.com/matrix-org/synapse/issues/8061), [\#8062](https://github.com/matrix-org/synapse/issues/8062), [\#8063](https://github.com/matrix-org/synapse/issues/8063), [\#8066](https://github.com/matrix-org/synapse/issues/8066), [\#8069](https://github.com/matrix-org/synapse/issues/8069), [\#8070](https://github.com/matrix-org/synapse/issues/8070))
- Move some database-related log lines from the default logger to the database/transaction loggers. ([\#7952](https://github.com/matrix-org/synapse/issues/7952))
- Add a script to detect source code files using non-unix line terminators. ([\#7965](https://github.com/matrix-org/synapse/issues/7965), [\#7970](https://github.com/matrix-org/synapse/issues/7970))
- Log the SAML session ID during creation. ([\#7971](https://github.com/matrix-org/synapse/issues/7971))
- Implement new experimental push rules for some users. ([\#7997](https://github.com/matrix-org/synapse/issues/7997))
- Remove redundant and unreliable signature check for v1 Identity Service lookup responses. ([\#8001](https://github.com/matrix-org/synapse/issues/8001))
- Improve the performance of the register endpoint. ([\#8009](https://github.com/matrix-org/synapse/issues/8009))
- Reduce less useful output in the newsfragment CI step. Add a link to the changelog section of the contributing guide on error. ([\#8024](https://github.com/matrix-org/synapse/issues/8024))
- Rename storage layer objects to be more sensible. ([\#8033](https://github.com/matrix-org/synapse/issues/8033))
- Change the default log config to reduce disk I/O and storage for new servers. ([\#8040](https://github.com/matrix-org/synapse/issues/8040))
- Add an assertion on `prev_events` in `create_new_client_event`. ([\#8041](https://github.com/matrix-org/synapse/issues/8041))
- Add a comment to `ServerContextFactory` about the use of `SSLv23_METHOD`. ([\#8043](https://github.com/matrix-org/synapse/issues/8043))
- Log `OPTIONS` requests at `DEBUG` rather than `INFO` level to reduce amount logged at `INFO`. ([\#8049](https://github.com/matrix-org/synapse/issues/8049))
- Reduce amount of outbound request logging at `INFO` level. ([\#8050](https://github.com/matrix-org/synapse/issues/8050))
- It is no longer necessary to explicitly define `filters` in the logging configuration. (Continuing to do so is redundant but harmless.) ([\#8051](https://github.com/matrix-org/synapse/issues/8051))
- Add and improve type hints. ([\#8058](https://github.com/matrix-org/synapse/issues/8058), [\#8064](https://github.com/matrix-org/synapse/issues/8064), [\#8060](https://github.com/matrix-org/synapse/issues/8060), [\#8067](https://github.com/matrix-org/synapse/issues/8067))
Synapse 1.18.0 (2020-07-30)
===========================

View File

@@ -1 +0,0 @@
Allow guest access to the `GET /_matrix/client/r0/rooms/{room_id}/members` endpoint, according to MSC2689. Contributed by Awesome Technologies Innovationslabor GmbH.

View File

@@ -1 +0,0 @@
Add unread messages count to sync responses, as specified in [MSC2654](https://github.com/matrix-org/matrix-doc/pull/2654).

View File

@@ -1 +0,0 @@
Document how to set up a Client Well-Known file and fix several pieces of outdated documentation.

View File

@@ -1 +0,0 @@
Add option to allow server admins to join rooms which fail complexity checks. Contributed by @lugino-emeritus.

View File

@@ -1 +0,0 @@
Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Move some database-related log lines from the default logger to the database/transaction loggers.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Add an option to purge room or not with delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). Contributed by @dklimpel.

View File

@@ -1 +0,0 @@
Add a script to detect source code files using non-unix line terminators.

View File

@@ -1 +0,0 @@
Add a script to detect source code files using non-unix line terminators.

View File

@@ -1 +0,0 @@
Log the SAML session ID during creation.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Fix a bug introduced in Synapse v1.7.2 which caused inaccurate membership counts in the room directory.

View File

@@ -1 +0,0 @@
Fix a long standing bug: 'Duplicate key value violates unique constraint "event_relations_id"' when message retention is configured.

View File

@@ -1 +0,0 @@
Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0.

View File

@@ -1 +0,0 @@
Fix "no create event in auth events" when trying to reject invitation after inviter leaves. Bug introduced in Synapse v1.10.0.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Improve workers docs.

View File

@@ -1 +0,0 @@
Fix typo in `docs/workers.md`.

View File

@@ -1 +0,0 @@
Fix various comments and minor discrepancies in server notices code.

View File

@@ -1 +0,0 @@
Add documentation for how to undo a room shutdown.

View File

@@ -1 +0,0 @@
Fix a long standing bug where HTTP HEAD requests resulted in a 400 error.

View File

@@ -1 +0,0 @@
Remove redundant and unreliable signature check for v1 Identity Service lookup responses.

View File

@@ -1 +0,0 @@
Convert various parts of the codebase to async/await.

View File

@@ -1 +0,0 @@
Add rate limiting to users joining rooms.

debian/changelog vendored
View File

@@ -1,12 +1,12 @@
matrix-synapse-py3 (1.xx.0) stable; urgency=medium
matrix-synapse-py3 (1.19.0) stable; urgency=medium
[ Synapse Packaging team ]
* New synapse release 1.xx.0.
* New synapse release 1.19.0.
[ Aaron Raimist ]
* Fix outdated documentation for SYNAPSE_CACHE_FACTOR
-- Synapse Packaging team <packages@matrix.org> XXXXX
-- Synapse Packaging team <packages@matrix.org> Mon, 17 Aug 2020 14:06:42 +0100
matrix-synapse-py3 (1.18.0) stable; urgency=medium

View File

@@ -4,16 +4,10 @@ formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
filters:
context:
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse.storage.SQL:

View File

@@ -79,13 +79,20 @@ Response:
the structure can and does change without notice.
First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
never happened - work has to be done to move forward instead of resetting the past.
never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible
to recover at all:
1. For safety reasons, it is recommended to shut down Synapse prior to continuing.
* If the room was invite-only, your users will need to be re-invited.
* If the room no longer has any members at all, it'll be impossible to rejoin.
* The first user to rejoin will have to do so via an alias on a different server.
With all that being said, if you still want to try and recover the room:
1. For safety reasons, shut down Synapse.
2. In the database, run `DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';`
* For caution: it's recommended to run this in a transaction: `BEGIN; DELETE ...;`, verify you got 1 result, then `COMMIT;`.
* The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
3. Restart Synapse (required).
3. Restart Synapse.
You will have to manually handle, if you so choose, the following:

View File

@@ -139,3 +139,10 @@ client IP addresses are recorded correctly.
Having done so, you can then use `https://matrix.example.com` (instead
of `https://matrix.example.com:8448`) as the "Custom server" when
connecting to Synapse from a client.
## Health check endpoint
Synapse exposes a health check endpoint for use by reverse proxies.
Each configured HTTP listener has a `/health` endpoint which always returns
200 OK (and doesn't get logged).
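A reverse proxy or monitoring script only needs to see a 200 from that endpoint. A small sketch using the third-party `requests` library against a hypothetical listener on `127.0.0.1:8008`:

```python
import requests

resp = requests.get("http://127.0.0.1:8008/health", timeout=5)
print(resp.status_code)  # expected: 200 when the listener is up
```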

View File

@@ -1577,6 +1577,17 @@ saml2_config:
#
#grandfathered_mxid_source_attribute: upn
# It is possible to configure Synapse to only allow logins if SAML attributes
# match particular values. The requirements can be listed under
# `attribute_requirements` as shown below. All of the listed attributes must
# match for the login to be permitted.
#
#attribute_requirements:
# - attribute: userGroup
# value: "staff"
# - attribute: department
# value: "sales"
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#

View File

@@ -11,24 +11,33 @@ formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
filters:
context:
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
file:
class: logging.handlers.RotatingFileHandler
class: logging.handlers.TimedRotatingFileHandler
formatter: precise
filename: /var/log/matrix-synapse/homeserver.log
maxBytes: 104857600
backupCount: 10
filters: [context]
when: midnight
backupCount: 3 # Does not include the current log file.
encoding: utf8
# Default to buffering writes to log file for efficiency. This means that there
# will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
# logs will still be flushed immediately.
buffer:
class: logging.handlers.MemoryHandler
target: file
# The capacity is the number of log lines that are buffered before
# being written to disk. Increasing this will lead to better
# performance, at the expense of it taking longer for log lines to
# be written to disk.
capacity: 10
flushLevel: 30 # Flush for WARNING logs as well
# A handler that writes logs to stderr. Unused by default, but can be used
# instead of "buffer" and "file" in the logger handlers.
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse.storage.SQL:
@@ -36,8 +45,23 @@ loggers:
# information such as access tokens.
level: INFO
twisted:
# We send the twisted logging directly to the file handler,
# to work around https://github.com/matrix-org/synapse/issues/3471
# when using "buffer" logger. Use "console" to log to stderr instead.
handlers: [file]
propagate: false
root:
level: INFO
handlers: [file, console]
# Write logs to the `buffer` handler, which will buffer them together in memory,
# then write them to a file.
#
# Replace "buffer" with "console" to log to stderr instead. (Note that you'll
# also need to update the configuration for the `twisted` logger above, in
# this case.)
#
handlers: [buffer]
disable_existing_loggers: false

View File

@@ -1,7 +1,7 @@
worker_app: synapse.app.federation_reader
worker_name: federation_reader1
worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093
worker_listeners:

View File

@@ -7,6 +7,6 @@ who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to execute the SQL [here](../synapse/storage/data_stores/main/schema/delta/53/user_dir_populate.sql)
solution to fix it is to execute the SQL [here](../synapse/storage/databases/main/schema/delta/53/user_dir_populate.sql)
and then restart synapse. This should then start a background task to
flush the current tables and regenerate the directory.

View File

@@ -23,7 +23,7 @@ The processes communicate with each other via a Synapse-specific protocol called
feeds streams of newly written data between processes so they can be kept in
sync with the database state.
When configured to do so, Synapse uses a
When configured to do so, Synapse uses a
[Redis pub/sub channel](https://redis.io/topics/pubsub) to send the replication
stream between all configured Synapse processes. Additionally, processes may
make HTTP requests to each other, primarily for operations which need to wait
@@ -66,23 +66,31 @@ https://hub.docker.com/r/matrixdotorg/synapse/.
To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. See
the correct worker, or to the main synapse instance. See
[reverse_proxy.md](reverse_proxy.md) for information on setting up a reverse
proxy.
To enable workers you should create a configuration file for each worker
process. Each worker configuration file inherits the configuration of the shared
homeserver configuration file. You can then override configuration specific to
that worker, e.g. the HTTP listener that it provides (if any); logging
configuration; etc. You should minimise the number of overrides though to
maintain a usable config.
When using workers, each worker process has its own configuration file which
contains settings specific to that worker, such as the HTTP listener that it
provides (if any), logging configuration, etc.
Normally, the worker processes are configured to read from a shared
configuration file as well as the worker-specific configuration files. This
makes it easier to keep common configuration settings synchronised across all
the processes.
The main process is somewhat special in this respect: it does not normally
need its own configuration file and can take all of its configuration from the
shared configuration file.
### Shared Configuration
### Shared configuration
Normally, only a couple of changes are needed to make an existing configuration
file suitable for use with workers. First, you need to enable an "HTTP replication
listener" for the main process; and secondly, you need to enable redis-based
replication. For example:
Next you need to add both a HTTP replication listener, used for HTTP requests
between processes, and redis config to the shared Synapse configuration file
(`homeserver.yaml`). For example:
```yaml
# extend the existing `listeners` section. This defines the ports that the
@@ -105,7 +113,7 @@ Under **no circumstances** should the replication listener be exposed to the
public internet; it has no authentication and is unencrypted.
### Worker Configuration
### Worker configuration
In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unique name for the worker
@@ -145,6 +153,9 @@ plain HTTP endpoint on port 8083 separately serving various endpoints, e.g.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (`localhost:8083` in the above example).
### Running Synapse with workers
Finally, you need to start your worker processes. This can be done with either
`synctl` or your distribution's preferred service manager such as `systemd`. We
recommend the use of `systemd` where available: for information on setting up
@@ -407,6 +418,23 @@ all these to be folded into the `generic_worker` app and to use config to define
which processes handle the various processing such as push notifications.
## Migration from old config
There are two main independent changes that have been made: introducing Redis
support and merging apps into `synapse.app.generic_worker`. Both these changes
are backwards compatible and so no changes to the config are required; however,
server admins are encouraged to plan to migrate to Redis as the old style direct
TCP replication config is deprecated.
To migrate to Redis add the `redis` config as above, and optionally remove the
TCP `replication` listener from master and `worker_replication_port` from worker
config.
To migrate apps to use `synapse.app.generic_worker` simply update the
`worker_app` option in the worker configs, and where workers are started (e.g.
in systemd service files, but not required for synctl).
## Architectural diagram
The following shows an example setup using Redis and a reverse proxy:

View File

@@ -81,3 +81,6 @@ ignore_missing_imports = True
[mypy-rust_python_jaeger_reporter.*]
ignore_missing_imports = True
[mypy-nacl.*]
ignore_missing_imports = True

View File

@@ -3,6 +3,8 @@
# A script which checks that an appropriate news file has been added on this
# branch.
echo -e "+++ \033[32mChecking newsfragment\033[m"
set -e
# make sure that origin/develop is up to date
@@ -16,6 +18,8 @@ pr="$BUILDKITE_PULL_REQUEST"
if ! git diff --quiet FETCH_HEAD... -- debian; then
if git diff --quiet FETCH_HEAD... -- debian/changelog; then
echo "Updates to debian directory, but no update to the changelog." >&2
echo "!! Please see the contributing guide for help writing your changelog entry:" >&2
echo "https://github.com/matrix-org/synapse/blob/develop/CONTRIBUTING.md#debian-changelog" >&2
exit 1
fi
fi
@@ -26,7 +30,12 @@ if ! git diff --name-only FETCH_HEAD... | grep -qv '^debian/'; then
exit 0
fi
tox -qe check-newsfragment
# Print a link to the contributing guide if the user makes a mistake
CONTRIBUTING_GUIDE_TEXT="!! Please see the contributing guide for help writing your changelog entry:
https://github.com/matrix-org/synapse/blob/develop/CONTRIBUTING.md#changelog"
# If check-newsfragment returns a non-zero exit code, print the contributing guide and exit
tox -qe check-newsfragment || (echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2 && exit 1)
echo
echo "--------------------------"
@@ -38,6 +47,7 @@ for f in `git diff --name-only FETCH_HEAD... -- changelog.d`; do
lastchar=`tr -d '\n' < $f | tail -c 1`
if [ $lastchar != '.' -a $lastchar != '!' ]; then
echo -e "\e[31mERROR: newsfragment $f does not end with a '.' or '!'\e[39m" >&2
echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2
exit 1
fi
@@ -47,5 +57,6 @@ done
if [[ -n "$pr" && "$matched" -eq 0 ]]; then
echo -e "\e[31mERROR: Did not find a news fragment with the right number: expected changelog.d/$pr.*.\e[39m" >&2
echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2
exit 1
fi

View File

@@ -40,7 +40,7 @@ class MockHomeserver(HomeServer):
config.server_name, reactor=reactor, config=config, **kwargs
)
self.version_string = "Synapse/"+get_version_string(synapse)
self.version_string = "Synapse/" + get_version_string(synapse)
if __name__ == "__main__":
@@ -86,7 +86,7 @@ if __name__ == "__main__":
store = hs.get_datastore()
async def run_background_updates():
await store.db.updates.run_background_updates(sleep=False)
await store.db_pool.updates.run_background_updates(sleep=False)
# Stop the reactor to exit the script once every background update is run.
reactor.stop()

View File

@@ -35,31 +35,29 @@ from synapse.logging.context import (
make_deferred_yieldable,
run_in_background,
)
from synapse.storage.data_stores.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.data_stores.main.deviceinbox import (
DeviceInboxBackgroundUpdateStore,
)
from synapse.storage.data_stores.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.data_stores.main.events_bg_updates import (
from synapse.storage.database import DatabasePool, make_conn
from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpdateStore
from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.databases.main.events_bg_updates import (
EventsBackgroundUpdatesStore,
)
from synapse.storage.data_stores.main.media_repository import (
from synapse.storage.databases.main.media_repository import (
MediaRepositoryBackgroundUpdateStore,
)
from synapse.storage.data_stores.main.registration import (
from synapse.storage.databases.main.registration import (
RegistrationBackgroundUpdateStore,
find_max_generated_user_id_localpart,
)
from synapse.storage.data_stores.main.room import RoomBackgroundUpdateStore
from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.data_stores.main.search import SearchBackgroundUpdateStore
from synapse.storage.data_stores.main.state import MainStateBackgroundUpdateStore
from synapse.storage.data_stores.main.stats import StatsStore
from synapse.storage.data_stores.main.user_directory import (
from synapse.storage.databases.main.room import RoomBackgroundUpdateStore
from synapse.storage.databases.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.databases.main.search import SearchBackgroundUpdateStore
from synapse.storage.databases.main.state import MainStateBackgroundUpdateStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.user_directory import (
UserDirectoryBackgroundUpdateStore,
)
from synapse.storage.data_stores.state.bg_updates import StateBackgroundUpdateStore
from synapse.storage.database import Database, make_conn
from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStore
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util import Clock
@@ -69,7 +67,7 @@ logger = logging.getLogger("synapse_port_db")
BOOLEAN_COLUMNS = {
"events": ["processed", "outlier", "contains_url", "count_as_unread"],
"events": ["processed", "outlier", "contains_url"],
"rooms": ["is_public"],
"event_edges": ["is_state"],
"presence_list": ["accepted"],
@@ -175,14 +173,14 @@ class Store(
StatsStore,
):
def execute(self, f, *args, **kwargs):
return self.db.runInteraction(f.__name__, f, *args, **kwargs)
return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)
def execute_sql(self, sql, *args):
def r(txn):
txn.execute(sql, args)
return txn.fetchall()
return self.db.runInteraction("execute_sql", r)
return self.db_pool.runInteraction("execute_sql", r)
def insert_many_txn(self, txn, table, headers, rows):
sql = "INSERT INTO %s (%s) VALUES (%s)" % (
@@ -227,7 +225,7 @@ class Porter(object):
async def setup_table(self, table):
if table in APPEND_ONLY_TABLES:
# It's safe to just carry on inserting.
row = await self.postgres_store.db.simple_select_one(
row = await self.postgres_store.db_pool.simple_select_one(
table="port_from_sqlite3",
keyvalues={"table_name": table},
retcols=("forward_rowid", "backward_rowid"),
@@ -244,7 +242,7 @@ class Porter(object):
) = await self._setup_sent_transactions()
backward_chunk = 0
else:
await self.postgres_store.db.simple_insert(
await self.postgres_store.db_pool.simple_insert(
table="port_from_sqlite3",
values={
"table_name": table,
@@ -274,7 +272,7 @@ class Porter(object):
await self.postgres_store.execute(delete_all)
await self.postgres_store.db.simple_insert(
await self.postgres_store.db_pool.simple_insert(
table="port_from_sqlite3",
values={"table_name": table, "forward_rowid": 1, "backward_rowid": 0},
)
@@ -318,7 +316,7 @@ class Porter(object):
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null), as that is
# what synapse expects to be there.
await self.postgres_store.db.simple_insert(
await self.postgres_store.db_pool.simple_insert(
table=table, values={"stream_id": None}
)
self.progress.update(table, table_size) # Mark table as done
@@ -359,7 +357,7 @@ class Porter(object):
return headers, forward_rows, backward_rows
headers, frows, brows = await self.sqlite_store.db.runInteraction(
headers, frows, brows = await self.sqlite_store.db_pool.runInteraction(
"select", r
)
@@ -375,7 +373,7 @@ class Porter(object):
def insert(txn):
self.postgres_store.insert_many_txn(txn, table, headers[1:], rows)
self.postgres_store.db.simple_update_one_txn(
self.postgres_store.db_pool.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": table},
@@ -413,7 +411,7 @@ class Porter(object):
return headers, rows
headers, rows = await self.sqlite_store.db.runInteraction("select", r)
headers, rows = await self.sqlite_store.db_pool.runInteraction("select", r)
if rows:
forward_chunk = rows[-1][0] + 1
@@ -451,7 +449,7 @@ class Porter(object):
],
)
self.postgres_store.db.simple_update_one_txn(
self.postgres_store.db_pool.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": "event_search"},
@@ -494,7 +492,7 @@ class Porter(object):
db_conn, allow_outdated_version=allow_outdated_version
)
prepare_database(db_conn, engine, config=self.hs_config)
store = Store(Database(hs, db_config, engine), db_conn, hs)
store = Store(DatabasePool(hs, db_config, engine), db_conn, hs)
db_conn.commit()
return store
@@ -502,7 +500,7 @@ class Porter(object):
async def run_background_updates_on_postgres(self):
# Manually apply all background updates on the PostgreSQL database.
postgres_ready = (
await self.postgres_store.db.updates.has_completed_background_updates()
await self.postgres_store.db_pool.updates.has_completed_background_updates()
)
if not postgres_ready:
@@ -511,9 +509,9 @@ class Porter(object):
self.progress.set_state("Running background updates on PostgreSQL")
while not postgres_ready:
await self.postgres_store.db.updates.do_next_background_update(100)
await self.postgres_store.db_pool.updates.do_next_background_update(100)
postgres_ready = await (
self.postgres_store.db.updates.has_completed_background_updates()
self.postgres_store.db_pool.updates.has_completed_background_updates()
)
async def run(self):
@@ -534,7 +532,7 @@ class Porter(object):
# Check if all background updates are done, abort if not.
updates_complete = (
await self.sqlite_store.db.updates.has_completed_background_updates()
await self.sqlite_store.db_pool.updates.has_completed_background_updates()
)
if not updates_complete:
end_error = (
@@ -576,22 +574,24 @@ class Porter(object):
)
try:
await self.postgres_store.db.runInteraction("alter_table", alter_table)
await self.postgres_store.db_pool.runInteraction(
"alter_table", alter_table
)
except Exception:
# On Error Resume Next
pass
await self.postgres_store.db.runInteraction(
await self.postgres_store.db_pool.runInteraction(
"create_port_table", create_port_table
)
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = await self.sqlite_store.db.simple_select_onecol(
sqlite_tables = await self.sqlite_store.db_pool.simple_select_onecol(
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
)
postgres_tables = await self.postgres_store.db.simple_select_onecol(
postgres_tables = await self.postgres_store.db_pool.simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
@@ -692,7 +692,7 @@ class Porter(object):
return headers, [r for r in rows if r[ts_ind] < yesterday]
headers, rows = await self.sqlite_store.db.runInteraction("select", r)
headers, rows = await self.sqlite_store.db_pool.runInteraction("select", r)
rows = self._convert_rows("sent_transactions", headers, rows)
@@ -725,7 +725,7 @@ class Porter(object):
next_chunk = await self.sqlite_store.execute(get_start_id)
next_chunk = max(max_inserted_rowid + 1, next_chunk)
await self.postgres_store.db.simple_insert(
await self.postgres_store.db_pool.simple_insert(
table="port_from_sqlite3",
values={
"table_name": "sent_transactions",
@@ -794,14 +794,14 @@ class Porter(object):
next_id = curr_id + 1
txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
return self.postgres_store.db.runInteraction("setup_state_group_id_seq", r)
return self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
def _setup_user_id_seq(self):
def r(txn):
next_id = find_max_generated_user_id_localpart(txn) + 1
txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))
return self.postgres_store.db.runInteraction("setup_user_id_seq", r)
return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
##############################################

View File

@@ -48,7 +48,7 @@ try:
except ImportError:
pass
__version__ = "1.18.0"
__version__ = "1.19.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -13,12 +13,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Optional
from typing import List, Optional, Tuple
import pymacaroons
from netaddr import IPAddress
from twisted.internet import defer
from twisted.web.server import Request
import synapse.types
@@ -80,13 +79,14 @@ class Auth(object):
self._track_appservice_user_ips = hs.config.track_appservice_user_ips
self._macaroon_secret_key = hs.config.macaroon_secret_key
@defer.inlineCallbacks
def check_from_context(self, room_version: str, event, context, do_sig_check=True):
prev_state_ids = yield defer.ensureDeferred(context.get_prev_state_ids())
auth_events_ids = yield self.compute_auth_events(
async def check_from_context(
self, room_version: str, event, context, do_sig_check=True
):
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = await self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events.values()}
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
@@ -94,14 +94,13 @@ class Auth(object):
room_version_obj, event, auth_events=auth_events, do_sig_check=do_sig_check
)
@defer.inlineCallbacks
def check_user_in_room(
async def check_user_in_room(
self,
room_id: str,
user_id: str,
current_state: Optional[StateMap[EventBase]] = None,
allow_departed_users: bool = False,
):
) -> EventBase:
"""Check if the user is in the room, or was at some point.
Args:
room_id: The room to check.
@@ -119,37 +118,35 @@ class Auth(object):
Raises:
AuthError if the user is/was not in the room.
Returns:
Deferred[Optional[EventBase]]:
Membership event for the user if the user was in the
room. This will be the join event if they are currently joined to
the room. This will be the leave event if they have left the room.
Membership event for the user if the user was in the
room. This will be the join event if they are currently joined to
the room. This will be the leave event if they have left the room.
"""
if current_state:
member = current_state.get((EventTypes.Member, user_id), None)
else:
member = yield defer.ensureDeferred(
self.state.get_current_state(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
)
member = await self.state.get_current_state(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
)
membership = member.membership if member else None
if membership == Membership.JOIN:
return member
if member:
membership = member.membership
# XXX this looks totally bogus. Why do we not allow users who have been banned,
# or those who were members previously and have been re-invited?
if allow_departed_users and membership == Membership.LEAVE:
forgot = yield self.store.did_forget(user_id, room_id)
if not forgot:
if membership == Membership.JOIN:
return member
# XXX this looks totally bogus. Why do we not allow users who have been banned,
# or those who were members previously and have been re-invited?
if allow_departed_users and membership == Membership.LEAVE:
forgot = await self.store.did_forget(user_id, room_id)
if not forgot:
return member
raise AuthError(403, "User %s not in room %s" % (user_id, room_id))
@defer.inlineCallbacks
def check_host_in_room(self, room_id, host):
async def check_host_in_room(self, room_id, host):
with Measure(self.clock, "check_host_in_room"):
latest_event_ids = yield self.store.is_host_joined(room_id, host)
latest_event_ids = await self.store.is_host_joined(room_id, host)
return latest_event_ids
def can_federate(self, event, auth_events):
@@ -160,14 +157,13 @@ class Auth(object):
def get_public_keys(self, invite_event):
return event_auth.get_public_keys(invite_event)
@defer.inlineCallbacks
def get_user_by_req(
async def get_user_by_req(
self,
request: Request,
allow_guest: bool = False,
rights: str = "access",
allow_expired: bool = False,
):
) -> synapse.types.Requester:
""" Get a registered user's ID.
Args:
@@ -180,7 +176,7 @@ class Auth(object):
/login will deliver access tokens regardless of expiration.
Returns:
defer.Deferred: resolves to a `synapse.types.Requester` object
Resolves to the requester
Raises:
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.
@@ -194,14 +190,14 @@ class Auth(object):
access_token = self.get_access_token_from_request(request)
user_id, app_service = yield self._get_appservice_user_id(request)
user_id, app_service = await self._get_appservice_user_id(request)
if user_id:
request.authenticated_entity = user_id
opentracing.set_tag("authenticated_entity", user_id)
opentracing.set_tag("appservice_id", app_service.id)
if ip_addr and self._track_appservice_user_ips:
yield self.store.insert_client_ip(
await self.store.insert_client_ip(
user_id=user_id,
access_token=access_token,
ip=ip_addr,
@@ -211,7 +207,7 @@ class Auth(object):
return synapse.types.create_requester(user_id, app_service=app_service)
user_info = yield self.get_user_by_access_token(
user_info = await self.get_user_by_access_token(
access_token, rights, allow_expired=allow_expired
)
user = user_info["user"]
@@ -221,7 +217,7 @@ class Auth(object):
# Deny the request if the user account has expired.
if self._account_validity.enabled and not allow_expired:
user_id = user.to_string()
expiration_ts = yield self.store.get_expiration_ts_for_user(user_id)
expiration_ts = await self.store.get_expiration_ts_for_user(user_id)
if (
expiration_ts is not None
and self.clock.time_msec() >= expiration_ts
@@ -235,7 +231,7 @@ class Auth(object):
device_id = user_info.get("device_id")
if user and access_token and ip_addr:
yield self.store.insert_client_ip(
await self.store.insert_client_ip(
user_id=user.to_string(),
access_token=access_token,
ip=ip_addr,
@@ -261,8 +257,7 @@ class Auth(object):
except KeyError:
raise MissingClientTokenError()
@defer.inlineCallbacks
def _get_appservice_user_id(self, request):
async def _get_appservice_user_id(self, request):
app_service = self.store.get_app_service_by_token(
self.get_access_token_from_request(request)
)
@@ -283,14 +278,13 @@ class Auth(object):
if not app_service.is_interested_in_user(user_id):
raise AuthError(403, "Application service cannot masquerade as this user.")
if not (yield self.store.get_user_by_id(user_id)):
if not (await self.store.get_user_by_id(user_id)):
raise AuthError(403, "Application service has not registered this user")
return user_id, app_service
@defer.inlineCallbacks
def get_user_by_access_token(
async def get_user_by_access_token(
self, token: str, rights: str = "access", allow_expired: bool = False,
):
) -> dict:
""" Validate access token and get user_id from it
Args:
@@ -300,7 +294,7 @@ class Auth(object):
allow_expired: If False, raises an InvalidClientTokenError
if the token is expired
Returns:
Deferred[dict]: dict that includes:
dict that includes:
`user` (UserID)
`is_guest` (bool)
`token_id` (int|None): access token id. May be None if guest
@@ -314,7 +308,7 @@ class Auth(object):
if rights == "access":
# first look in the database
r = yield self._look_up_user_by_access_token(token)
r = await self._look_up_user_by_access_token(token)
if r:
valid_until_ms = r["valid_until_ms"]
if (
@@ -352,7 +346,7 @@ class Auth(object):
# It would of course be much easier to store guest access
# tokens in the database as well, but that would break existing
# guest tokens.
stored_user = yield self.store.get_user_by_id(user_id)
stored_user = await self.store.get_user_by_id(user_id)
if not stored_user:
raise InvalidClientTokenError("Unknown user_id %s" % user_id)
if not stored_user["is_guest"]:
@@ -482,9 +476,8 @@ class Auth(object):
now = self.hs.get_clock().time_msec()
return now < expiry
@defer.inlineCallbacks
def _look_up_user_by_access_token(self, token):
ret = yield self.store.get_user_by_access_token(token)
async def _look_up_user_by_access_token(self, token):
ret = await self.store.get_user_by_access_token(token)
if not ret:
return None
@@ -507,7 +500,7 @@ class Auth(object):
logger.warning("Unrecognised appservice access token.")
raise InvalidClientTokenError()
request.authenticated_entity = service.sender
return defer.succeed(service)
return service
async def is_server_admin(self, user: UserID) -> bool:
""" Check if the given user is a local server admin.
@@ -522,7 +515,7 @@ class Auth(object):
def compute_auth_events(
self, event, current_state_ids: StateMap[str], for_verification: bool = False,
):
) -> List[str]:
"""Given an event and current state return the list of event IDs used
to auth an event.
@@ -530,11 +523,11 @@ class Auth(object):
should be added to the event's `auth_events`.
Returns:
defer.Deferred(list[str]): List of event IDs.
List of event IDs.
"""
if event.type == EventTypes.Create:
return defer.succeed([])
return []
# Currently we ignore the `for_verification` flag even though there are
# some situations where we can drop particular auth events when adding
@@ -553,7 +546,7 @@ class Auth(object):
if auth_ev_id:
auth_ids.append(auth_ev_id)
return defer.succeed(auth_ids)
return auth_ids
async def check_can_change_room_list(self, room_id: str, user: UserID):
"""Determine whether the user is allowed to edit the room's entry in the
@@ -636,10 +629,9 @@ class Auth(object):
return query_params[0].decode("ascii")
@defer.inlineCallbacks
def check_user_in_room_or_world_readable(
async def check_user_in_room_or_world_readable(
self, room_id: str, user_id: str, allow_departed_users: bool = False
):
) -> Tuple[str, Optional[str]]:
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.
@@ -650,10 +642,9 @@ class Auth(object):
members but have now departed
Returns:
Deferred[tuple[str, str|None]]: Resolves to the current membership of
the user in the room and the membership event ID of the user. If
the user is not in the room and never has been, then
`(Membership.JOIN, None)` is returned.
Resolves to the current membership of the user in the room and the
membership event ID of the user. If the user is not in the room and
never has been, then `(Membership.JOIN, None)` is returned.
"""
try:
@@ -662,15 +653,13 @@ class Auth(object):
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.check_user_in_room(
member_event = await self.check_user_in_room(
room_id, user_id, allow_departed_users=allow_departed_users
)
return member_event.membership, member_event.event_id
except AuthError:
visibility = yield defer.ensureDeferred(
self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
)
visibility = await self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
)
if (
visibility

View File
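A minimal caller-side sketch of the now-async Auth.get_user_by_access_token from the hunks above. The auth object and token value are assumptions; the dict keys (user, is_guest, token_id) come straight from the docstring in the diff.

    async def describe_token(auth, token: str) -> str:
        # `auth` is assumed to be the hs.get_auth() instance from the hunks above.
        user_info = await auth.get_user_by_access_token(token)
        user_id = user_info["user"].to_string()
        if user_info["is_guest"]:
            return "guest %s" % (user_id,)
        return "user %s (token id %s)" % (user_id, user_info["token_id"])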

@@ -15,8 +15,6 @@
import logging
from twisted.internet import defer
from synapse.api.constants import LimitBlockingTypes, UserTypes
from synapse.api.errors import Codes, ResourceLimitError
from synapse.config.server import is_threepid_reserved
@@ -36,8 +34,7 @@ class AuthBlocking(object):
self._limit_usage_by_mau = hs.config.limit_usage_by_mau
self._mau_limits_reserved_threepids = hs.config.mau_limits_reserved_threepids
@defer.inlineCallbacks
def check_auth_blocking(self, user_id=None, threepid=None, user_type=None):
async def check_auth_blocking(self, user_id=None, threepid=None, user_type=None):
"""Checks if the user should be rejected for some external reason,
such as monthly active user limiting or global disable flag
@@ -60,7 +57,7 @@ class AuthBlocking(object):
if user_id is not None:
if user_id == self._server_notices_mxid:
return
if (yield self.store.is_support_user(user_id)):
if await self.store.is_support_user(user_id):
return
if self._hs_disabled:
@@ -76,11 +73,11 @@ class AuthBlocking(object):
# If the user is already part of the MAU cohort or a trial user
if user_id:
timestamp = yield self.store.user_last_seen_monthly_active(user_id)
timestamp = await self.store.user_last_seen_monthly_active(user_id)
if timestamp:
return
is_trial = yield self.store.is_trial_user(user_id)
is_trial = await self.store.is_trial_user(user_id)
if is_trial:
return
elif threepid:
@@ -93,7 +90,7 @@ class AuthBlocking(object):
# allow registration. Support users are excluded from MAU checks.
return
# Else if there is no room in the MAU bucket, bail
current_mau = yield self.store.get_monthly_active_count()
current_mau = await self.store.get_monthly_active_count()
if current_mau >= self._max_mau_value:
raise ResourceLimitError(
403,

View File
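A hedged sketch of how a caller treats the async check_auth_blocking above: blocking is signalled by ResourceLimitError rather than a return value. The wrapper name below is illustrative.

    from synapse.api.errors import ResourceLimitError

    async def is_user_allowed(auth_blocking, user_id: str):
        try:
            # Raises if the server is disabled or the monthly-active-user cap is hit.
            await auth_blocking.check_auth_blocking(user_id=user_id)
        except ResourceLimitError as e:
            return False, e.msg
        return True, None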

@@ -238,14 +238,16 @@ class InteractiveAuthIncompleteError(Exception):
(This indicates we should return a 401 with 'result' as the body)
Attributes:
session_id: The ID of the ongoing interactive auth session.
result: the server response to the request, which should be
passed back to the client
"""
def __init__(self, result: "JsonDict"):
def __init__(self, session_id: str, result: "JsonDict"):
super(InteractiveAuthIncompleteError, self).__init__(
"Interactive auth not yet complete"
)
self.session_id = session_id
self.result = result
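A hedged sketch of the caller side of the updated exception: the 401 challenge body (result) goes back to the client, and session_id is kept so the client's next attempt can resume the same session. do_ui_auth_step is a hypothetical stand-in for a check_ui_auth(...) call.

    from synapse.api.errors import InteractiveAuthIncompleteError
    from synapse.http.server import respond_with_json

    async def run_ui_auth_step(request, do_ui_auth_step):
        try:
            params, session_id = await do_ui_auth_step(request)
        except InteractiveAuthIncompleteError as e:
            # e.session_id identifies the ongoing session; e.result is the 401 body.
            respond_with_json(request, 401, e.result)
            return None
        return params, session_id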

View File

@@ -21,8 +21,6 @@ import jsonschema
from canonicaljson import json
from jsonschema import FormatChecker
from twisted.internet import defer
from synapse.api.constants import EventContentFields
from synapse.api.errors import SynapseError
from synapse.storage.presence import UserPresenceState
@@ -137,9 +135,8 @@ class Filtering(object):
super(Filtering, self).__init__()
self.store = hs.get_datastore()
@defer.inlineCallbacks
def get_user_filter(self, user_localpart, filter_id):
result = yield self.store.get_user_filter(user_localpart, filter_id)
async def get_user_filter(self, user_localpart, filter_id):
result = await self.store.get_user_filter(user_localpart, filter_id)
return FilterCollection(result)
def add_user_filter(self, user_localpart, user_filter):

View File

@@ -12,7 +12,6 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import gc
import logging
import os
@@ -22,7 +21,6 @@ import sys
import traceback
from typing import Iterable
from daemonize import Daemonize
from typing_extensions import NoReturn
from twisted.internet import defer, error, reactor
@@ -34,6 +32,7 @@ from synapse.config.server import ListenerConfig
from synapse.crypto import context_factory
from synapse.logging.context import PreserveLoggingContext
from synapse.util.async_helpers import Linearizer
from synapse.util.daemonize import daemonize_process
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
@@ -129,17 +128,8 @@ def start_reactor(
if print_pidfile:
print(pid_file)
daemon = Daemonize(
app=appname,
pid=pid_file,
action=run,
auto_close_fds=False,
verbose=True,
logger=logger,
)
daemon.start()
else:
run()
daemonize_process(pid_file, logger)
run()
def quit_with_error(error_string: str) -> NoReturn:
@@ -278,7 +268,7 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
# It is now safe to start your Synapse.
hs.start_listening(listeners)
hs.get_datastore().db.start_profiling()
hs.get_datastore().db_pool.start_profiling()
hs.get_pusherpool().start()
setup_sentry(hs)

View File

@@ -123,17 +123,18 @@ from synapse.rest.client.v2_alpha.account_data import (
from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet
from synapse.rest.client.v2_alpha.register import RegisterRestServlet
from synapse.rest.client.versions import VersionsRestServlet
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.server import HomeServer
from synapse.storage.data_stores.main.censor_events import CensorEventsStore
from synapse.storage.data_stores.main.media_repository import MediaRepositoryStore
from synapse.storage.data_stores.main.monthly_active_users import (
from synapse.server import HomeServer, cache_in_self
from synapse.storage.databases.main.censor_events import CensorEventsStore
from synapse.storage.databases.main.media_repository import MediaRepositoryStore
from synapse.storage.databases.main.monthly_active_users import (
MonthlyActiveUsersWorkerStore,
)
from synapse.storage.data_stores.main.presence import UserPresenceState
from synapse.storage.data_stores.main.search import SearchWorkerStore
from synapse.storage.data_stores.main.ui_auth import UIAuthWorkerStore
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.storage.databases.main.presence import UserPresenceState
from synapse.storage.databases.main.search import SearchWorkerStore
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
@@ -493,7 +494,10 @@ class GenericWorkerServer(HomeServer):
site_tag = listener_config.http_options.tag
if site_tag is None:
site_tag = port
resources = {}
# We always include a health resource.
resources = {"/health": HealthResource()}
for res in listener_config.http_options.resources:
for name in res.names:
if name == "metrics":
@@ -631,10 +635,12 @@ class GenericWorkerServer(HomeServer):
async def remove_pusher(self, app_id, push_key, user_id):
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
def build_replication_data_handler(self):
@cache_in_self
def get_replication_data_handler(self):
return GenericWorkerReplicationHandler(self)
def build_presence_handler(self):
@cache_in_self
def get_presence_handler(self):
return GenericWorkerPresence(self)

View File

@@ -68,6 +68,7 @@ from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.admin import AdminRestResource
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.well_known import WellKnownResource
from synapse.server import HomeServer
@@ -98,7 +99,9 @@ class SynapseHomeServer(HomeServer):
if site_tag is None:
site_tag = port
resources = {}
# We always include a health resource.
resources = {"/health": HealthResource()}
for res in listener_config.http_options.resources:
for name in res.names:
if name == "openid" and "federation" in res.names:
@@ -441,7 +444,7 @@ def setup(config_options):
_base.start(hs, config.listeners)
hs.get_datastore().db.updates.start_doing_background_updates()
hs.get_datastore().db_pool.updates.start_doing_background_updates()
except Exception:
# Print the exception and bail out.
print("Error during startup:", file=sys.stderr)
@@ -551,8 +554,8 @@ async def phone_stats_home(hs, stats, stats_process=_stats_process):
#
# This only reports info about the *main* database.
stats["database_engine"] = hs.get_datastore().db.engine.module.__name__
stats["database_server_version"] = hs.get_datastore().db.engine.server_version
stats["database_engine"] = hs.get_datastore().db_pool.engine.module.__name__
stats["database_server_version"] = hs.get_datastore().db_pool.engine.server_version
logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
try:

View File

@@ -175,7 +175,7 @@ class ApplicationServiceApi(SimpleHttpClient):
urllib.parse.quote(protocol),
)
try:
info = yield self.get_json(uri, {})
info = yield defer.ensureDeferred(self.get_json(uri, {}))
if not _is_valid_3pe_metadata(info):
logger.warning(

synapse/config/_util.py (new file, 49 lines added)
View File

@@ -0,0 +1,49 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, List
import jsonschema
from synapse.config._base import ConfigError
from synapse.types import JsonDict
def validate_config(json_schema: JsonDict, config: Any, config_path: List[str]) -> None:
"""Validates a config setting against a JsonSchema definition
This can be used to validate a section of the config file against a schema
definition. If the validation fails, a ConfigError is raised with a textual
description of the problem.
Args:
json_schema: the schema to validate against
config: the configuration value to be validated
config_path: the path within the config file. This will be used as a basis
for the error message.
"""
try:
jsonschema.validate(config, json_schema)
except jsonschema.ValidationError as e:
# copy `config_path` before modifying it.
path = list(config_path)
for p in list(e.path):
if isinstance(p, int):
path.append("<item %i>" % p)
else:
path.append(str(p))
raise ConfigError(
"Unable to parse configuration: %s at %s" % (e.message, ".".join(path))
)
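Hedged usage sketch for the new helper; the schema, section name and values below are made up for illustration, and only validate_config and ConfigError come from this file.

    from synapse.config._base import ConfigError
    from synapse.config._util import validate_config

    RATELIMIT_SCHEMA = {
        "type": "object",
        "properties": {
            "per_second": {"type": "number"},
            "burst_count": {"type": "integer"},
        },
    }

    try:
        validate_config(
            RATELIMIT_SCHEMA,
            {"per_second": 0.17, "burst_count": "ten"},  # wrong type on purpose
            config_path=["ratelimiting"],
        )
    except ConfigError as e:
        # "Unable to parse configuration: 'ten' is not of type 'integer'
        #  at ratelimiting.burst_count"
        print(e)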

View File

@@ -100,7 +100,10 @@ class DatabaseConnectionConfig:
self.name = name
self.config = db_config
self.data_stores = data_stores
# The `data_stores` config is actually talking about `databases` (we
# changed the name).
self.databases = data_stores
class DatabaseConfig(Config):

View File

@@ -55,24 +55,33 @@ formatters:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - \
%(request)s - %(message)s'
filters:
context:
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
file:
class: logging.handlers.RotatingFileHandler
class: logging.handlers.TimedRotatingFileHandler
formatter: precise
filename: ${log_file}
maxBytes: 104857600
backupCount: 10
filters: [context]
when: midnight
backupCount: 3 # Does not include the current log file.
encoding: utf8
# Default to buffering writes to log file for efficiency. This means that
# there will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
# logs will still be flushed immediately.
buffer:
class: logging.handlers.MemoryHandler
target: file
# The capacity is the number of log lines that are buffered before
# being written to disk. Increasing this will lead to better
# performance, at the expense of it taking longer for log lines to
# be written to disk.
capacity: 10
flushLevel: 30 # Flush for WARNING logs as well
# A handler that writes logs to stderr. Unused by default, but can be used
# instead of "buffer" and "file" in the logger handlers.
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse.storage.SQL:
@@ -80,9 +89,24 @@ loggers:
# information such as access tokens.
level: INFO
twisted:
# We send the twisted logging directly to the file handler,
# to work around https://github.com/matrix-org/synapse/issues/3471
# when using "buffer" logger. Use "console" to log to stderr instead.
handlers: [file]
propagate: false
root:
level: INFO
handlers: [file, console]
# Write logs to the `buffer` handler, which will buffer them together in memory,
# then write them to a file.
#
# Replace "buffer" with "console" to log to stderr instead. (Note that you'll
# also need to update the configuration for the `twisted` logger above, in
# this case.)
#
handlers: [buffer]
disable_existing_loggers: false
"""
@@ -168,11 +192,26 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
handler = logging.StreamHandler()
handler.setFormatter(formatter)
handler.addFilter(LoggingContextFilter(request=""))
logger.addHandler(handler)
else:
logging.config.dictConfig(log_config)
# We add a log record factory that runs all messages through the
# LoggingContextFilter so that we get the context *at the time we log*
# rather than when we write to a handler. This can be done in config using
# filter options, but care must be taken when using e.g. MemoryHandler to buffer
# writes.
log_filter = LoggingContextFilter(request="")
old_factory = logging.getLogRecordFactory()
def factory(*args, **kwargs):
record = old_factory(*args, **kwargs)
log_filter.filter(record)
return record
logging.setLogRecordFactory(factory)
# Route Twisted's native logging through to the standard library logging
# system.
observer = STDLibLogObserver()
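The same record-factory idea in isolation: stamp the context onto the record when it is created, so a buffering handler that flushes later still writes the value that was current at log time. The request attribute name mirrors LoggingContextFilter; get_current_request is a placeholder.

    import logging

    def install_request_record_factory(get_current_request):
        old_factory = logging.getLogRecordFactory()

        def factory(*args, **kwargs):
            record = old_factory(*args, **kwargs)
            # Captured at log time, not at handler/flush time.
            record.request = get_current_request()
            return record

        logging.setLogRecordFactory(factory)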

View File

@@ -15,7 +15,9 @@
# limitations under the License.
import logging
from typing import Any, List
import attr
import jinja2
import pkg_resources
@@ -23,6 +25,7 @@ from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module, load_python_module
from ._base import Config, ConfigError
from ._util import validate_config
logger = logging.getLogger(__name__)
@@ -80,6 +83,11 @@ class SAML2Config(Config):
self.saml2_enabled = True
attribute_requirements = saml2_config.get("attribute_requirements") or []
self.attribute_requirements = _parse_attribute_requirements_def(
attribute_requirements
)
self.saml2_grandfathered_mxid_source_attribute = saml2_config.get(
"grandfathered_mxid_source_attribute", "uid"
)
@@ -341,6 +349,17 @@ class SAML2Config(Config):
#
#grandfathered_mxid_source_attribute: upn
# It is possible to configure Synapse to only allow logins if SAML attributes
# match particular values. The requirements can be listed under
# `attribute_requirements` as shown below. All of the listed attributes must
# match for the login to be permitted.
#
#attribute_requirements:
# - attribute: userGroup
# value: "staff"
# - attribute: department
# value: "sales"
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
@@ -368,3 +387,34 @@ class SAML2Config(Config):
""" % {
"config_dir_path": config_dir_path
}
@attr.s(frozen=True)
class SamlAttributeRequirement:
"""Object describing a single requirement for SAML attributes."""
attribute = attr.ib(type=str)
value = attr.ib(type=str)
JSON_SCHEMA = {
"type": "object",
"properties": {"attribute": {"type": "string"}, "value": {"type": "string"}},
"required": ["attribute", "value"],
}
ATTRIBUTE_REQUIREMENTS_SCHEMA = {
"type": "array",
"items": SamlAttributeRequirement.JSON_SCHEMA,
}
def _parse_attribute_requirements_def(
attribute_requirements: Any,
) -> List[SamlAttributeRequirement]:
validate_config(
ATTRIBUTE_REQUIREMENTS_SCHEMA,
attribute_requirements,
config_path=["saml2_config", "attribute_requirements"],
)
return [SamlAttributeRequirement(**x) for x in attribute_requirements]
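For illustration, feeding the sample values from the config comment above through the new parser would yield the following (a hedged sketch, not output copied from a test):

    requirements = _parse_attribute_requirements_def(
        [
            {"attribute": "userGroup", "value": "staff"},
            {"attribute": "department", "value": "sales"},
        ]
    )
    # -> [SamlAttributeRequirement(attribute='userGroup', value='staff'),
    #     SamlAttributeRequirement(attribute='department', value='sales')]
    #
    # An entry missing "value", e.g. {"attribute": "userGroup"}, fails the JSON
    # schema check and raises ConfigError pointing at
    # saml2_config.attribute_requirements.<item 0>.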

View File

@@ -530,6 +530,21 @@ class ServerConfig(Config):
"request_token_inhibit_3pid_errors", False,
)
# List of users trialing the new experimental default push rules. This setting is
# not included in the sample configuration file on purpose as it's a temporary
# hack, so that some users can trial the new defaults without impacting every
# user on the homeserver.
users_new_default_push_rules = (
config.get("users_new_default_push_rules") or []
) # type: list
if not isinstance(users_new_default_push_rules, list):
raise ConfigError("'users_new_default_push_rules' must be a list")
# Turn the list into a set to improve lookup speed.
self.users_new_default_push_rules = set(
users_new_default_push_rules
) # type: set
def has_tls_listener(self) -> bool:
return any(listener.tls for listener in self.listeners)

View File

@@ -48,6 +48,14 @@ class ServerContextFactory(ContextFactory):
connections."""
def __init__(self, config):
# TODO: once pyOpenSSL exposes TLS_METHOD and SSL_CTX_set_min_proto_version,
# switch to those (see https://github.com/pyca/cryptography/issues/5379).
#
# note that, despite the confusing name, SSLv23_METHOD does *not* enforce SSLv2
# or v3, but is a synonym for TLS_METHOD, which allows the client and server
# to negotiate an appropriate version of TLS constrained by the version options
# set with context.set_options.
#
self._context = SSL.Context(SSL.SSLv23_METHOD)
self.configure_context(self._context, config)
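An illustrative pyOpenSSL fragment for the comment above: SSLv23_METHOD negotiates the best mutually supported protocol, and it is set_options that actually rules out the legacy versions. The specific option set below is an example, not the exact set Synapse uses.

    from OpenSSL import SSL

    ctx = SSL.Context(SSL.SSLv23_METHOD)
    # Despite the name, SSLv23_METHOD is a synonym for TLS_METHOD; constrain the
    # negotiable range by switching off the protocols we refuse to speak.
    ctx.set_options(
        SSL.OP_NO_SSLv2 | SSL.OP_NO_SSLv3 | SSL.OP_NO_TLSv1 | SSL.OP_NO_TLSv1_1
    )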

View File

@@ -17,6 +17,7 @@ from typing import Optional
import attr
from nacl.signing import SigningKey
from synapse.api.auth import Auth
from synapse.api.constants import MAX_DEPTH
from synapse.api.errors import UnsupportedRoomVersionError
from synapse.api.room_versions import (
@@ -27,6 +28,8 @@ from synapse.api.room_versions import (
)
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.events import EventBase, _EventInternalMetadata, make_event_from_dict
from synapse.state import StateHandler
from synapse.storage.databases.main import DataStore
from synapse.types import EventID, JsonDict
from synapse.util import Clock
from synapse.util.stringutils import random_string
@@ -42,45 +45,46 @@ class EventBuilder(object):
Attributes:
room_version: Version of the target room
room_id (str)
type (str)
sender (str)
content (dict)
unsigned (dict)
internal_metadata (_EventInternalMetadata)
room_id
type
sender
content
unsigned
internal_metadata
_state (StateHandler)
_auth (synapse.api.Auth)
_store (DataStore)
_clock (Clock)
_hostname (str): The hostname of the server creating the event
_state
_auth
_store
_clock
_hostname: The hostname of the server creating the event
_signing_key: The signing key to use to sign the event as the server
"""
_state = attr.ib()
_auth = attr.ib()
_store = attr.ib()
_clock = attr.ib()
_hostname = attr.ib()
_signing_key = attr.ib()
_state = attr.ib(type=StateHandler)
_auth = attr.ib(type=Auth)
_store = attr.ib(type=DataStore)
_clock = attr.ib(type=Clock)
_hostname = attr.ib(type=str)
_signing_key = attr.ib(type=SigningKey)
room_version = attr.ib(type=RoomVersion)
room_id = attr.ib()
type = attr.ib()
sender = attr.ib()
room_id = attr.ib(type=str)
type = attr.ib(type=str)
sender = attr.ib(type=str)
content = attr.ib(default=attr.Factory(dict))
unsigned = attr.ib(default=attr.Factory(dict))
content = attr.ib(default=attr.Factory(dict), type=JsonDict)
unsigned = attr.ib(default=attr.Factory(dict), type=JsonDict)
# These only exist on a subset of events, so they raise AttributeError if
# someone tries to get them when they don't exist.
_state_key = attr.ib(default=None)
_redacts = attr.ib(default=None)
_origin_server_ts = attr.ib(default=None)
_state_key = attr.ib(default=None, type=Optional[str])
_redacts = attr.ib(default=None, type=Optional[str])
_origin_server_ts = attr.ib(default=None, type=Optional[int])
internal_metadata = attr.ib(
default=attr.Factory(lambda: _EventInternalMetadata({}))
default=attr.Factory(lambda: _EventInternalMetadata({})),
type=_EventInternalMetadata,
)
@property
@@ -106,7 +110,7 @@ class EventBuilder(object):
state_ids = await self._state.get_current_state_ids(
self.room_id, prev_event_ids
)
auth_ids = await self._auth.compute_auth_events(self, state_ids)
auth_ids = self._auth.compute_auth_events(self, state_ids)
format_version = self.room_version.event_format
if format_version == EventFormatVersions.V1:

View File

@@ -23,7 +23,7 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.types import StateMap
if TYPE_CHECKING:
from synapse.storage.data_stores.main import DataStore
from synapse.storage.databases.main import DataStore
@attr.s(slots=True)

View File

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, List
from typing import TYPE_CHECKING, List, Tuple
from canonicaljson import json
@@ -54,7 +54,10 @@ class TransactionManager(object):
@measure_func("_send_new_transaction")
async def send_new_transaction(
self, destination: str, pending_pdus: List[EventBase], pending_edus: List[Edu]
self,
destination: str,
pending_pdus: List[Tuple[EventBase, int]],
pending_edus: List[Edu],
):
# Make a transaction-sending opentracing span. This span follows on from

View File

@@ -101,7 +101,7 @@ class ApplicationServicesHandler(object):
async def start_scheduler():
try:
return self.scheduler.start()
return await self.scheduler.start()
except Exception:
logger.error("Application Services Failure")

View File

@@ -162,7 +162,7 @@ class AuthHandler(BaseHandler):
request_body: Dict[str, Any],
clientip: str,
description: str,
) -> dict:
) -> Tuple[dict, str]:
"""
Checks that the user is who they claim to be, via a UI auth.
@@ -183,9 +183,14 @@ class AuthHandler(BaseHandler):
describes the operation happening on their account.
Returns:
The parameters for this request (which may
A tuple of (params, session_id).
'params' contains the parameters for this request (which may
have been given only in a previous call).
'session_id' is the ID of this session, either passed in by the
client or assigned by this call
Raises:
InteractiveAuthIncompleteError if the client has not yet completed
any of the permitted login flows
@@ -207,7 +212,7 @@ class AuthHandler(BaseHandler):
flows = [[login_type] for login_type in self._supported_ui_auth_types]
try:
result, params, _ = await self.check_auth(
result, params, session_id = await self.check_ui_auth(
flows, request, request_body, clientip, description
)
except LoginError:
@@ -230,7 +235,7 @@ class AuthHandler(BaseHandler):
if user_id != requester.user.to_string():
raise AuthError(403, "Invalid auth")
return params
return params, session_id
def get_enabled_auth_types(self):
"""Return the enabled user-interactive authentication types
@@ -240,7 +245,7 @@ class AuthHandler(BaseHandler):
"""
return self.checkers.keys()
async def check_auth(
async def check_ui_auth(
self,
flows: List[List[str]],
request: SynapseRequest,
@@ -363,7 +368,7 @@ class AuthHandler(BaseHandler):
if not authdict:
raise InteractiveAuthIncompleteError(
self._auth_dict_for_flows(flows, session.session_id)
session.session_id, self._auth_dict_for_flows(flows, session.session_id)
)
# check auth type currently being presented
@@ -410,7 +415,7 @@ class AuthHandler(BaseHandler):
ret = self._auth_dict_for_flows(flows, session.session_id)
ret["completed"] = list(creds)
ret.update(errordict)
raise InteractiveAuthIncompleteError(ret)
raise InteractiveAuthIncompleteError(session.session_id, ret)
async def add_oob_auth(
self, stagetype: str, authdict: Dict[str, Any], clientip: str

View File

@@ -57,13 +57,10 @@ class EventStreamHandler(BaseHandler):
timeout=0,
as_client_event=True,
affect_presence=True,
only_keys=None,
room_id=None,
is_guest=False,
):
"""Fetches the events stream for a given user.
If `only_keys` is not None, events from keys will be sent down.
"""
if room_id:
@@ -93,7 +90,6 @@ class EventStreamHandler(BaseHandler):
auth_user,
pagin_config,
timeout,
only_keys=only_keys,
is_guest=is_guest,
explicit_room_id=room_id,
)

View File

@@ -71,7 +71,7 @@ from synapse.replication.http.federation import (
)
from synapse.replication.http.membership import ReplicationUserJoinedLeftRoomRestServlet
from synapse.state import StateResolutionStore, resolve_events_with_store
from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
from synapse.types import JsonDict, StateMap, UserID, get_domain_from_id
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.distributor import user_joined_room
@@ -2064,7 +2064,7 @@ class FederationHandler(BaseHandler):
if not auth_events:
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = await self.auth.compute_auth_events(
auth_events_ids = self.auth.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events_x = await self.store.get_events(auth_events_ids)

View File

@@ -109,7 +109,7 @@ class InitialSyncHandler(BaseHandler):
rooms_ret = []
now_token = await self.hs.get_event_sources().get_current_token()
now_token = self.hs.get_event_sources().get_current_token()
presence_stream = self.hs.get_event_sources().sources["presence"]
pagination_config = PaginationConfig(from_token=now_token)
@@ -360,7 +360,7 @@ class InitialSyncHandler(BaseHandler):
current_state.values(), time_now
)
now_token = await self.hs.get_event_sources().get_current_token()
now_token = self.hs.get_event_sources().get_current_token()
limit = pagin_config.limit if pagin_config else None
if limit is None:

View File

@@ -15,7 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import TYPE_CHECKING, List, Optional, Tuple
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
from canonicaljson import encode_canonical_json, json
@@ -45,7 +45,7 @@ from synapse.events.validator import EventValidator
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.replication.http.send_event import ReplicationSendEventRestServlet
from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
from synapse.storage.state import StateFilter
from synapse.types import (
Collection,
@@ -93,11 +93,11 @@ class MessageHandler(object):
async def get_room_data(
self,
user_id: str = None,
room_id: str = None,
event_type: Optional[str] = None,
state_key: str = "",
is_guest: bool = False,
user_id: str,
room_id: str,
event_type: str,
state_key: str,
is_guest: bool,
) -> dict:
""" Get data from a room.
@@ -407,7 +407,7 @@ class EventCreationHandler(object):
#
# map from room id to time-of-last-attempt.
#
self._rooms_to_exclude_from_dummy_event_insertion = {} # type: dict[str, int]
self._rooms_to_exclude_from_dummy_event_insertion = {} # type: Dict[str, int]
# we need to construct a ConsentURIBuilder here, as it checks that the necessary
# config options exist, but *only* if we have a configuration for which we are
@@ -707,7 +707,7 @@ class EventCreationHandler(object):
async def create_and_send_nonmember_event(
self,
requester: Requester,
event_dict: EventBase,
event_dict: dict,
ratelimit: bool = True,
txn_id: Optional[str] = None,
) -> Tuple[EventBase, int]:
@@ -768,6 +768,15 @@ class EventCreationHandler(object):
else:
prev_event_ids = await self.store.get_prev_events_for_room(builder.room_id)
# we now ought to have some prev_events (unless it's a create event).
#
# do a quick sanity check here, rather than waiting until we've created the
# event and then try to auth it (which fails with a somewhat confusing "No
# create event in auth events")
assert (
builder.type == EventTypes.Create or len(prev_event_ids) > 0
), "Attempting to create an event with no prev_events"
event = await builder.build(prev_event_ids=prev_event_ids)
context = await self.state.compute_event_context(event)
if requester:
@@ -962,7 +971,7 @@ class EventCreationHandler(object):
# Validate a newly added alias or newly added alt_aliases.
original_alias = None
original_alt_aliases = set()
original_alt_aliases = [] # type: List[str]
original_event_id = event.unsigned.get("replaces_state")
if original_event_id:
@@ -1010,6 +1019,10 @@ class EventCreationHandler(object):
current_state_ids = await context.get_current_state_ids()
# We know this event is not an outlier, so this must be
# non-None.
assert current_state_ids is not None
state_to_include_ids = [
e_id
for k, e_id in current_state_ids.items()
@@ -1061,7 +1074,7 @@ class EventCreationHandler(object):
raise SynapseError(400, "Cannot redact event from a different room")
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = await self.auth.compute_auth_events(
auth_events_ids = self.auth.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events = await self.store.get_events(auth_events_ids)

View File

@@ -14,7 +14,7 @@
# limitations under the License.
import json
import logging
from typing import Dict, Generic, List, Optional, Tuple, TypeVar
from typing import TYPE_CHECKING, Dict, Generic, List, Optional, Tuple, TypeVar
from urllib.parse import urlencode
import attr
@@ -39,9 +39,11 @@ from synapse.http.server import respond_with_html
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
from synapse.push.mailer import load_jinja2_templates
from synapse.server import HomeServer
from synapse.types import UserID, map_username_to_mxid_localpart
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
SESSION_COOKIE_NAME = b"oidc_session"
@@ -91,7 +93,7 @@ class OidcHandler:
"""Handles requests related to the OpenID Connect login flow.
"""
def __init__(self, hs: HomeServer):
def __init__(self, hs: "HomeServer"):
self._callback_url = hs.config.oidc_callback_url # type: str
self._scopes = hs.config.oidc_scopes # type: List[str]
self._client_auth = ClientAuth(

View File

@@ -309,7 +309,7 @@ class PaginationHandler(object):
room_token = pagin_config.from_token.room_key
else:
pagin_config.from_token = (
await self.hs.get_event_sources().get_current_token_for_pagination()
self.hs.get_event_sources().get_current_token_for_pagination()
)
room_token = pagin_config.from_token.room_key

View File

@@ -38,7 +38,7 @@ from synapse.logging.utils import log_function
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.state import StateHandler
from synapse.storage.data_stores.main import DataStore
from synapse.storage.databases.main import DataStore
from synapse.storage.presence import UserPresenceState
from synapse.types import JsonDict, UserID, get_domain_from_id
from synapse.util.async_helpers import Linearizer
@@ -319,7 +319,7 @@ class PresenceHandler(BasePresenceHandler):
is some spurious presence changes that will self-correct.
"""
# If the DB pool has already terminated, don't try updating
if not self.store.db.is_running():
if not self.store.db_pool.is_running():
return
logger.info(

View File

@@ -22,7 +22,7 @@ import logging
import math
import string
from collections import OrderedDict
from typing import Optional, Tuple
from typing import Awaitable, Optional, Tuple
from synapse.api.constants import (
EventTypes,
@@ -1041,7 +1041,7 @@ class RoomEventSource(object):
):
# We just ignore the key for now.
to_key = await self.get_current_key()
to_key = self.get_current_key()
from_token = RoomStreamToken.parse(from_key)
if from_token.topological:
@@ -1081,10 +1081,10 @@ class RoomEventSource(object):
return (events, end_key)
def get_current_key(self):
return self.store.get_room_events_max_id()
def get_current_key(self) -> str:
return "s%d" % (self.store.get_room_max_stream_ordering(),)
def get_current_key_for_room(self, room_id):
def get_current_key_for_room(self, room_id: str) -> Awaitable[str]:
return self.store.get_room_events_max_id(room_id)

View File

@@ -16,7 +16,7 @@
import abc
import logging
from http import HTTPStatus
from typing import Dict, Iterable, List, Optional, Tuple, Union
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple, Union
from unpaddedbase64 import encode_base64
@@ -37,6 +37,10 @@ from synapse.util.distributor import user_joined_room, user_left_room
from ._base import BaseHandler
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
@@ -48,7 +52,7 @@ class RoomMemberHandler(object):
__metaclass__ = abc.ABCMeta
def __init__(self, hs):
def __init__(self, hs: "HomeServer"):
self.hs = hs
self.store = hs.get_datastore()
self.auth = hs.get_auth()
@@ -207,7 +211,7 @@ class RoomMemberHandler(object):
return duplicate.event_id, stream_id
stream_id = await self.event_creation_handler.handle_new_client_event(
requester, event, context, extra_users=[target], ratelimit=ratelimit
requester, event, context, extra_users=[target], ratelimit=ratelimit,
)
prev_state_ids = await context.get_prev_state_ids()
@@ -1000,7 +1004,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
check_complexity = self.hs.config.limit_remote_rooms.enabled
if check_complexity and self.hs.config.limit_remote_rooms.admins_can_join:
check_complexity = not await self.hs.auth.is_server_admin(user)
check_complexity = not await self.auth.is_server_admin(user)
if check_complexity:
# Fetch the room complexity

View File

@@ -14,15 +14,16 @@
# limitations under the License.
import logging
import re
from typing import Callable, Dict, Optional, Set, Tuple
from typing import TYPE_CHECKING, Callable, Dict, Optional, Set, Tuple
import attr
import saml2
import saml2.response
from saml2.client import Saml2Client
from synapse.api.errors import SynapseError
from synapse.api.errors import AuthError, SynapseError
from synapse.config import ConfigError
from synapse.config.saml2_config import SamlAttributeRequirement
from synapse.http.servlet import parse_string
from synapse.http.site import SynapseRequest
from synapse.module_api import ModuleApi
@@ -34,6 +35,9 @@ from synapse.types import (
from synapse.util.async_helpers import Linearizer
from synapse.util.iterutils import chunk_seq
if TYPE_CHECKING:
import synapse.server
logger = logging.getLogger(__name__)
@@ -49,7 +53,7 @@ class Saml2SessionData:
class SamlHandler:
def __init__(self, hs):
def __init__(self, hs: "synapse.server.HomeServer"):
self._saml_client = Saml2Client(hs.config.saml2_sp_config)
self._auth = hs.get_auth()
self._auth_handler = hs.get_auth_handler()
@@ -62,6 +66,7 @@ class SamlHandler:
self._grandfathered_mxid_source_attribute = (
hs.config.saml2_grandfathered_mxid_source_attribute
)
self._saml2_attribute_requirements = hs.config.saml2.attribute_requirements
# plugin to do custom mapping from saml response to mxid
self._user_mapping_provider = hs.config.saml2_user_mapping_provider_class(
@@ -73,7 +78,7 @@ class SamlHandler:
self._auth_provider_id = "saml"
# a map from saml session id to Saml2SessionData object
self._outstanding_requests_dict = {}
self._outstanding_requests_dict = {} # type: Dict[str, Saml2SessionData]
# a lock on the mappings
self._mapping_lock = Linearizer(name="saml_mapping", clock=self._clock)
@@ -165,11 +170,18 @@ class SamlHandler:
saml2.BINDING_HTTP_POST,
outstanding=self._outstanding_requests_dict,
)
except saml2.response.UnsolicitedResponse as e:
# the pysaml2 library helpfully logs an ERROR here, but neglects to log
# the session ID. I don't really want to put the full text of the exception
# in the (user-visible) exception message, so let's log the exception here
# so we can track down the session IDs later.
logger.warning(str(e))
raise SynapseError(400, "Unexpected SAML2 login.")
except Exception as e:
raise SynapseError(400, "Unable to parse SAML2 response: %s" % (e,))
raise SynapseError(400, "Unable to parse SAML2 response: %s." % (e,))
if saml2_auth.not_signed:
raise SynapseError(400, "SAML2 response was not signed")
raise SynapseError(400, "SAML2 response was not signed.")
logger.debug("SAML2 response: %s", saml2_auth.origxml)
for assertion in saml2_auth.assertions:
@@ -188,6 +200,9 @@ class SamlHandler:
saml2_auth.in_response_to, None
)
for requirement in self._saml2_attribute_requirements:
_check_attribute_requirement(saml2_auth.ava, requirement)
remote_user_id = self._user_mapping_provider.get_remote_user_id(
saml2_auth, client_redirect_url
)
@@ -294,6 +309,21 @@ class SamlHandler:
del self._outstanding_requests_dict[reqid]
def _check_attribute_requirement(ava: dict, req: SamlAttributeRequirement):
values = ava.get(req.attribute, [])
for v in values:
if v == req.value:
return
logger.info(
"SAML2 attribute %s did not match required value '%s' (was '%s')",
req.attribute,
req.value,
values,
)
raise AuthError(403, "You are not authorized to log in here.")
DOT_REPLACE_PATTERN = re.compile(
("[^%s]" % (re.escape("".join(mxid_localpart_allowed_characters)),))
)

View File

@@ -340,7 +340,7 @@ class SearchHandler(BaseHandler):
# If client has asked for "context" for each event (i.e. some surrounding
# events and state), fetch that
if event_context is not None:
now_token = await self.hs.get_event_sources().get_current_token()
now_token = self.hs.get_event_sources().get_current_token()
contexts = {}
for event in allowed_events:

View File

@@ -103,7 +103,6 @@ class JoinedSyncResult:
account_data = attr.ib(type=List[JsonDict])
unread_notifications = attr.ib(type=JsonDict)
summary = attr.ib(type=Optional[JsonDict])
unread_count = attr.ib(type=int)
def __nonzero__(self) -> bool:
"""Make the result appear empty if there are no updates. This is used
@@ -961,7 +960,7 @@ class SyncHandler(object):
# this is due to some of the underlying streams not supporting the ability
# to query up to a given point.
# Always use the `now_token` in `SyncResultBuilder`
now_token = await self.event_sources.get_current_token()
now_token = self.event_sources.get_current_token()
logger.debug(
"Calculating sync response for %r between %s and %s",
@@ -1887,10 +1886,6 @@ class SyncHandler(object):
if room_builder.rtype == "joined":
unread_notifications = {} # type: Dict[str, str]
unread_count = await self.store.get_unread_message_count_for_user(
room_id, sync_config.user.to_string(),
)
room_sync = JoinedSyncResult(
room_id=room_id,
timeline=batch,
@@ -1899,7 +1894,6 @@ class SyncHandler(object):
account_data=account_data_events,
unread_notifications=unread_notifications,
summary=summary,
unread_count=unread_count,
)
if room_sync or always_include:

View File

@@ -284,8 +284,7 @@ class SimpleHttpClient(object):
ip_blacklist=self._ip_blacklist,
)
@defer.inlineCallbacks
def request(self, method, uri, data=None, headers=None):
async def request(self, method, uri, data=None, headers=None):
"""
Args:
method (str): HTTP method to use.
@@ -298,7 +297,7 @@ class SimpleHttpClient(object):
outgoing_requests_counter.labels(method).inc()
# log request but strip `access_token` (AS requests for example include this)
logger.info("Sending request %s %s", method, redact_uri(uri))
logger.debug("Sending request %s %s", method, redact_uri(uri))
with start_active_span(
"outgoing-client-request",
@@ -330,7 +329,7 @@ class SimpleHttpClient(object):
self.hs.get_reactor(),
cancelled_to_request_timed_out_error,
)
response = yield make_deferred_yieldable(request_deferred)
response = await make_deferred_yieldable(request_deferred)
incoming_responses_counter.labels(method, response.code).inc()
logger.info(
@@ -353,8 +352,7 @@ class SimpleHttpClient(object):
set_tag("error_reason", e.args[0])
raise
@defer.inlineCallbacks
def post_urlencoded_get_json(self, uri, args={}, headers=None):
async def post_urlencoded_get_json(self, uri, args={}, headers=None):
"""
Args:
uri (str):
@@ -363,7 +361,7 @@ class SimpleHttpClient(object):
header name to a list of values for that header
Returns:
Deferred[object]: parsed json
object: parsed json
Raises:
HttpResponseException: On a non-2xx HTTP response.
@@ -386,11 +384,11 @@ class SimpleHttpClient(object):
if headers:
actual_headers.update(headers)
response = yield self.request(
response = await self.request(
"POST", uri, headers=Headers(actual_headers), data=query_bytes
)
body = yield make_deferred_yieldable(readBody(response))
body = await make_deferred_yieldable(readBody(response))
if 200 <= response.code < 300:
return json.loads(body.decode("utf-8"))
@@ -399,8 +397,7 @@ class SimpleHttpClient(object):
response.code, response.phrase.decode("ascii", errors="replace"), body
)
@defer.inlineCallbacks
def post_json_get_json(self, uri, post_json, headers=None):
async def post_json_get_json(self, uri, post_json, headers=None):
"""
Args:
@@ -410,7 +407,7 @@ class SimpleHttpClient(object):
header name to a list of values for that header
Returns:
Deferred[object]: parsed json
object: parsed json
Raises:
HttpResponseException: On a non-2xx HTTP response.
@@ -429,11 +426,11 @@ class SimpleHttpClient(object):
if headers:
actual_headers.update(headers)
response = yield self.request(
response = await self.request(
"POST", uri, headers=Headers(actual_headers), data=json_str
)
body = yield make_deferred_yieldable(readBody(response))
body = await make_deferred_yieldable(readBody(response))
if 200 <= response.code < 300:
return json.loads(body.decode("utf-8"))
@@ -442,8 +439,7 @@ class SimpleHttpClient(object):
response.code, response.phrase.decode("ascii", errors="replace"), body
)
@defer.inlineCallbacks
def get_json(self, uri, args={}, headers=None):
async def get_json(self, uri, args={}, headers=None):
""" Gets some json from the given URI.
Args:
@@ -455,7 +451,7 @@ class SimpleHttpClient(object):
headers (dict[str|bytes, List[str|bytes]]|None): If not None, a map from
header name to a list of values for that header
Returns:
Deferred: Succeeds when we get *any* 2xx HTTP response, with the
Succeeds when we get *any* 2xx HTTP response, with the
HTTP body as JSON.
Raises:
HttpResponseException On a non-2xx HTTP response.
@@ -466,11 +462,10 @@ class SimpleHttpClient(object):
if headers:
actual_headers.update(headers)
body = yield self.get_raw(uri, args, headers=headers)
body = await self.get_raw(uri, args, headers=headers)
return json.loads(body.decode("utf-8"))
@defer.inlineCallbacks
def put_json(self, uri, json_body, args={}, headers=None):
async def put_json(self, uri, json_body, args={}, headers=None):
""" Puts some json to the given URI.
Args:
@@ -483,7 +478,7 @@ class SimpleHttpClient(object):
headers (dict[str|bytes, List[str|bytes]]|None): If not None, a map from
header name to a list of values for that header
Returns:
Deferred: Succeeds when we get *any* 2xx HTTP response, with the
Succeeds when we get *any* 2xx HTTP response, with the
HTTP body as JSON.
Raises:
HttpResponseException On a non-2xx HTTP response.
@@ -504,11 +499,11 @@ class SimpleHttpClient(object):
if headers:
actual_headers.update(headers)
response = yield self.request(
response = await self.request(
"PUT", uri, headers=Headers(actual_headers), data=json_str
)
body = yield make_deferred_yieldable(readBody(response))
body = await make_deferred_yieldable(readBody(response))
if 200 <= response.code < 300:
return json.loads(body.decode("utf-8"))
@@ -517,8 +512,7 @@ class SimpleHttpClient(object):
response.code, response.phrase.decode("ascii", errors="replace"), body
)
@defer.inlineCallbacks
def get_raw(self, uri, args={}, headers=None):
async def get_raw(self, uri, args={}, headers=None):
""" Gets raw text from the given URI.
Args:
@@ -530,7 +524,7 @@ class SimpleHttpClient(object):
headers (dict[str|bytes, List[str|bytes]]|None): If not None, a map from
header name to a list of values for that header
Returns:
Deferred: Succeeds when we get *any* 2xx HTTP response, with the
Succeeds when we get *any* 2xx HTTP response, with the
HTTP body as bytes.
Raises:
HttpResponseException on a non-2xx HTTP response.
@@ -543,9 +537,9 @@ class SimpleHttpClient(object):
if headers:
actual_headers.update(headers)
response = yield self.request("GET", uri, headers=Headers(actual_headers))
response = await self.request("GET", uri, headers=Headers(actual_headers))
body = yield make_deferred_yieldable(readBody(response))
body = await make_deferred_yieldable(readBody(response))
if 200 <= response.code < 300:
return body
@@ -557,8 +551,7 @@ class SimpleHttpClient(object):
# XXX: FIXME: This is horribly copy-pasted from matrixfederationclient.
# The two should be factored out.
@defer.inlineCallbacks
def get_file(self, url, output_stream, max_size=None, headers=None):
async def get_file(self, url, output_stream, max_size=None, headers=None):
"""GETs a file from a given URL
Args:
url (str): The URL to GET
@@ -574,7 +567,7 @@ class SimpleHttpClient(object):
if headers:
actual_headers.update(headers)
response = yield self.request("GET", url, headers=Headers(actual_headers))
response = await self.request("GET", url, headers=Headers(actual_headers))
resp_headers = dict(response.headers.getAllRawHeaders())
@@ -598,7 +591,7 @@ class SimpleHttpClient(object):
# straight back in again
try:
length = yield make_deferred_yieldable(
length = await make_deferred_yieldable(
_readBodyToFile(response, output_stream, max_size)
)
except SynapseError:

View File
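Relating to the SimpleHttpClient conversion above, a hedged sketch of a caller after the change; the URL is a placeholder and the helper name is illustrative.

    async def fetch_server_version(hs):
        client = hs.get_simple_http_client()
        # get_json and friends now return coroutines rather than Deferreds, so
        # callers simply await them.
        info = await client.get_json(
            "https://example.com/_synapse/admin/v1/server_version"
        )
        return info.get("server_version")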

@@ -247,7 +247,7 @@ class MatrixHostnameEndpoint(object):
port = server.port
try:
logger.info("Connecting to %s:%i", host.decode("ascii"), port)
logger.debug("Connecting to %s:%i", host.decode("ascii"), port)
endpoint = HostnameEndpoint(self._reactor, host, port)
if self._tls_options:
endpoint = wrapClientTLS(self._tls_options, endpoint)

View File

@@ -29,10 +29,11 @@ from zope.interface import implementer
from twisted.internet import defer, protocol
from twisted.internet.error import DNSLookupError
from twisted.internet.interfaces import IReactorPluggableNameResolver
from twisted.internet.interfaces import IReactorPluggableNameResolver, IReactorTime
from twisted.internet.task import _EPSILON, Cooperator
from twisted.web._newclient import ResponseDone
from twisted.web.http_headers import Headers
from twisted.web.iweb import IResponse
import synapse.metrics
import synapse.util.retryutils
@@ -74,7 +75,7 @@ MAXINT = sys.maxsize
_next_id = 1
@attr.s
@attr.s(frozen=True)
class MatrixFederationRequest(object):
method = attr.ib()
"""HTTP method
@@ -110,26 +111,52 @@ class MatrixFederationRequest(object):
:type: str|None
"""
uri = attr.ib(init=False, type=bytes)
"""The URI of this request
"""
def __attrs_post_init__(self):
global _next_id
self.txn_id = "%s-O-%s" % (self.method, _next_id)
txn_id = "%s-O-%s" % (self.method, _next_id)
_next_id = (_next_id + 1) % (MAXINT - 1)
object.__setattr__(self, "txn_id", txn_id)
destination_bytes = self.destination.encode("ascii")
path_bytes = self.path.encode("ascii")
if self.query:
query_bytes = encode_query_args(self.query)
else:
query_bytes = b""
# The object is frozen so we can pre-compute this.
uri = urllib.parse.urlunparse(
(b"matrix", destination_bytes, path_bytes, None, query_bytes, b"")
)
object.__setattr__(self, "uri", uri)
def get_json(self):
if self.json_callback:
return self.json_callback()
return self.json
async def _handle_json_response(reactor, timeout_sec, request, response):
async def _handle_json_response(
reactor: IReactorTime,
timeout_sec: float,
request: MatrixFederationRequest,
response: IResponse,
start_ms: int,
):
"""
Reads the JSON body of a response, with a timeout
Args:
reactor (IReactor): twisted reactor, for the timeout
timeout_sec (float): number of seconds to wait for response to complete
request (MatrixFederationRequest): the request that triggered the response
response (IResponse): response to the request
reactor: twisted reactor, for the timeout
timeout_sec: number of seconds to wait for response to complete
request: the request that triggered the response
response: response to the request
start_ms: Timestamp when request was made
Returns:
dict: parsed JSON response
@@ -143,23 +170,35 @@ async def _handle_json_response(reactor, timeout_sec, request, response):
body = await make_deferred_yieldable(d)
except TimeoutError as e:
logger.warning(
"{%s} [%s] Timed out reading response", request.txn_id, request.destination,
"{%s} [%s] Timed out reading response - %s %s",
request.txn_id,
request.destination,
request.method,
request.uri.decode("ascii"),
)
raise RequestSendFailed(e, can_retry=True) from e
except Exception as e:
logger.warning(
"{%s} [%s] Error reading response: %s",
"{%s} [%s] Error reading response %s %s: %s",
request.txn_id,
request.destination,
request.method,
request.uri.decode("ascii"),
e,
)
raise
time_taken_secs = reactor.seconds() - start_ms / 1000
logger.info(
"{%s} [%s] Completed: %d %s",
"{%s} [%s] Completed request: %d %s in %.2f secs - %s %s",
request.txn_id,
request.destination,
response.code,
response.phrase.decode("ascii", errors="replace"),
time_taken_secs,
request.method,
request.uri.decode("ascii"),
)
return body
@@ -261,7 +300,9 @@ class MatrixFederationHttpClient(object):
# 'M_UNRECOGNIZED' which some endpoints can return when omitting a
# trailing slash on Synapse <= v0.99.3.
logger.info("Retrying request with trailing slash")
request.path += "/"
# Request is frozen so we create a new instance
request = attr.evolve(request, path=request.path + "/")
response = await self._send_request(request, **send_request_args)
@@ -373,9 +414,7 @@ class MatrixFederationHttpClient(object):
else:
retries_left = MAX_SHORT_RETRIES
url_bytes = urllib.parse.urlunparse(
(b"matrix", destination_bytes, path_bytes, None, query_bytes, b"")
)
url_bytes = request.uri
url_str = url_bytes.decode("ascii")
url_to_sign_bytes = urllib.parse.urlunparse(
@@ -402,7 +441,7 @@ class MatrixFederationHttpClient(object):
headers_dict[b"Authorization"] = auth_headers
logger.info(
logger.debug(
"{%s} [%s] Sending request: %s %s; timeout %fs",
request.txn_id,
request.destination,
@@ -436,7 +475,6 @@ class MatrixFederationHttpClient(object):
except DNSLookupError as e:
raise RequestSendFailed(e, can_retry=retry_on_dns_fail) from e
except Exception as e:
logger.info("Failed to send request: %s", e)
raise RequestSendFailed(e, can_retry=True) from e
incoming_responses_counter.labels(
@@ -496,7 +534,7 @@ class MatrixFederationHttpClient(object):
break
except RequestSendFailed as e:
logger.warning(
logger.info(
"{%s} [%s] Request failed: %s %s: %s",
request.txn_id,
request.destination,
@@ -654,6 +692,8 @@ class MatrixFederationHttpClient(object):
json=data,
)
start_ms = self.clock.time_msec()
response = await self._send_request_with_optional_trailing_slash(
request,
try_trailing_slash_on_400,
@@ -664,7 +704,7 @@ class MatrixFederationHttpClient(object):
)
body = await _handle_json_response(
self.reactor, self.default_timeout, request, response
self.reactor, self.default_timeout, request, response, start_ms
)
return body
@@ -720,6 +760,8 @@ class MatrixFederationHttpClient(object):
method="POST", destination=destination, path=path, query=args, json=data
)
start_ms = self.clock.time_msec()
response = await self._send_request(
request,
long_retries=long_retries,
@@ -733,7 +775,7 @@ class MatrixFederationHttpClient(object):
_sec_timeout = self.default_timeout
body = await _handle_json_response(
self.reactor, _sec_timeout, request, response
self.reactor, _sec_timeout, request, response, start_ms,
)
return body
@@ -786,6 +828,8 @@ class MatrixFederationHttpClient(object):
method="GET", destination=destination, path=path, query=args
)
start_ms = self.clock.time_msec()
response = await self._send_request_with_optional_trailing_slash(
request,
try_trailing_slash_on_400,
@@ -796,7 +840,7 @@ class MatrixFederationHttpClient(object):
)
body = await _handle_json_response(
self.reactor, self.default_timeout, request, response
self.reactor, self.default_timeout, request, response, start_ms
)
return body
@@ -846,6 +890,8 @@ class MatrixFederationHttpClient(object):
method="DELETE", destination=destination, path=path, query=args
)
start_ms = self.clock.time_msec()
response = await self._send_request(
request,
long_retries=long_retries,
@@ -854,7 +900,7 @@ class MatrixFederationHttpClient(object):
)
body = await _handle_json_response(
self.reactor, self.default_timeout, request, response
self.reactor, self.default_timeout, request, response, start_ms
)
return body
@@ -914,12 +960,14 @@ class MatrixFederationHttpClient(object):
)
raise
logger.info(
"{%s} [%s] Completed: %d %s [%d bytes]",
"{%s} [%s] Completed: %d %s [%d bytes] %s %s",
request.txn_id,
request.destination,
response.code,
response.phrase.decode("ascii", errors="replace"),
length,
request.method,
request.uri.decode("ascii"),
)
return (length, headers)
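The frozen-request change above in isolation: because the attrs class is frozen, the "retry with a trailing slash" path builds a new instance with attr.evolve instead of assigning to request.path. The class below is a toy stand-in, not the real MatrixFederationRequest.

    import attr

    @attr.s(frozen=True)
    class ToyRequest:
        method = attr.ib(type=str)
        path = attr.ib(type=str)

    req = ToyRequest(method="GET", path="/_matrix/federation/v1/send/123")
    retry = attr.evolve(req, path=req.path + "/")  # new object; req is untouched
    assert retry.path == "/_matrix/federation/v1/send/123/"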

View File

@@ -25,7 +25,7 @@ from io import BytesIO
from typing import Any, Callable, Dict, Tuple, Union
import jinja2
from canonicaljson import encode_canonical_json, encode_pretty_printed_json, json
from canonicaljson import encode_canonical_json, encode_pretty_printed_json
from twisted.internet import defer
from twisted.python import failure
@@ -46,6 +46,7 @@ from synapse.api.errors import (
from synapse.http.site import SynapseRequest
from synapse.logging.context import preserve_fn
from synapse.logging.opentracing import trace_servlet
from synapse.util import json_encoder
from synapse.util.caches import intern_dict
logger = logging.getLogger(__name__)
@@ -538,7 +539,7 @@ def respond_with_json(
# canonicaljson already encodes to bytes
json_bytes = encode_canonical_json(json_object)
else:
json_bytes = json.dumps(json_object).encode("utf-8")
json_bytes = json_encoder.encode(json_object).encode("utf-8")
return respond_with_json_bytes(request, code, json_bytes, send_cors=send_cors)

View File

@@ -146,10 +146,9 @@ class SynapseRequest(Request):
Returns a context manager; the correct way to use this is:
@defer.inlineCallbacks
def handle_request(request):
async def handle_request(request):
with request.processing("FooServlet"):
yield really_handle_the_request()
await really_handle_the_request()
Once the context manager is closed, the completion of the request will be logged,
and the various metrics will be updated.
@@ -287,7 +286,9 @@ class SynapseRequest(Request):
# the connection dropped)
code += "!"
self.site.access_logger.info(
log_level = logging.INFO if self._should_log_request() else logging.DEBUG
self.site.access_logger.log(
log_level,
"%s - %s - {%s}"
" Processed request: %.3fsec/%.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d)"
' %sB %s "%s %s %s" "%s" [%d dbevts]',
@@ -315,6 +316,17 @@ class SynapseRequest(Request):
except Exception as e:
logger.warning("Failed to stop metrics: %r", e)
def _should_log_request(self) -> bool:
"""Whether we should log at INFO that we processed the request.
"""
if self.path == b"/health":
return False
if self.method == b"OPTIONS":
return False
return True
class XForwardedForRequest(SynapseRequest):
def __init__(self, *args, **kw):

View File

@@ -13,16 +13,15 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import logging
import threading
from asyncio import iscoroutine
from functools import wraps
from typing import TYPE_CHECKING, Dict, Optional, Set
from prometheus_client.core import REGISTRY, Counter, Gauge
from twisted.internet import defer
from twisted.python.failure import Failure
from synapse.logging.context import LoggingContext, PreserveLoggingContext
@@ -167,7 +166,7 @@ class _BackgroundProcess(object):
)
def run_as_background_process(desc, func, *args, **kwargs):
def run_as_background_process(desc: str, func, *args, **kwargs):
"""Run the given function in its own logcontext, with resource metrics
This should be used to wrap processes which are fired off to run in the
@@ -179,7 +178,7 @@ def run_as_background_process(desc, func, *args, **kwargs):
normal synapse inlineCallbacks function).
Args:
desc (str): a description for this background process type
desc: a description for this background process type
func: a function, which may return a Deferred or a coroutine
args: positional args for func
kwargs: keyword args for func
@@ -188,8 +187,7 @@ def run_as_background_process(desc, func, *args, **kwargs):
follow the synapse logcontext rules.
"""
@defer.inlineCallbacks
def run():
async def run():
with _bg_metrics_lock:
count = _background_process_counts.get(desc, 0)
_background_process_counts[desc] = count + 1
@@ -203,29 +201,21 @@ def run_as_background_process(desc, func, *args, **kwargs):
try:
result = func(*args, **kwargs)
# We probably don't have an ensureDeferred in our call stack to handle
# coroutine results, so we need to ensureDeferred here.
#
# But we need this check because ensureDeferred doesn't like being
# called on immediate values (as opposed to Deferreds or coroutines).
if iscoroutine(result):
result = defer.ensureDeferred(result)
if inspect.isawaitable(result):
result = await result
return (yield result)
return result
except Exception:
# failure.Failure() fishes the original Failure out of our stack, and
# thus gives us a sensible stack trace.
f = Failure()
logger.error(
"Background process '%s' threw an exception",
desc,
exc_info=(f.type, f.value, f.getTracebackObject()),
logger.exception(
"Background process '%s' threw an exception", desc,
)
finally:
_background_process_in_flight_count.labels(desc).dec()
with PreserveLoggingContext():
return run()
# Note that we return a Deferred here so that it can be used in a
# looping_call and other places that expect a Deferred.
return defer.ensureDeferred(run())
def wrap_as_background_process(desc):
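The conversion above keeps run_as_background_process returning a Deferred even though its body is now a coroutine. A minimal sketch of that pattern, assuming a Twisted reactor is driving the result:

from twisted.internet import defer

async def _do_work() -> int:
    # Awaitable body; inspect.isawaitable() in the hunk above covers both
    # coroutines and Deferreds returned by the wrapped function.
    return 42

def run_compat():
    # ensureDeferred turns the coroutine back into a Deferred, so existing
    # callers (looping_call, addCallback chains, ...) keep working unchanged.
    return defer.ensureDeferred(_do_work())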

View File

@@ -194,12 +194,16 @@ class ModuleApi(object):
synapse.api.errors.AuthError: the access token is invalid
"""
# see if the access token corresponds to a device
user_info = yield self._auth.get_user_by_access_token(access_token)
user_info = yield defer.ensureDeferred(
self._auth.get_user_by_access_token(access_token)
)
device_id = user_info.get("device_id")
user_id = user_info["user"].to_string()
if device_id:
# delete the device, which will also delete its access tokens
yield self._hs.get_device_handler().delete_device(user_id, device_id)
yield defer.ensureDeferred(
self._hs.get_device_handler().delete_device(user_id, device_id)
)
else:
# no associated device. Just delete the access token.
yield defer.ensureDeferred(
@@ -219,7 +223,7 @@ class ModuleApi(object):
Returns:
Deferred[object]: result of func
"""
return self._store.db.runInteraction(desc, func, *args, **kwargs)
return self._store.db_pool.runInteraction(desc, func, *args, **kwargs)
def complete_sso_login(
self, registered_user_id: str, request: SynapseRequest, client_redirect_url: str
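The ModuleApi changes above bridge newly-async storage and handler methods into an old-style inlineCallbacks generator. A minimal sketch of that bridging pattern, with a stand-in coroutine rather than the real auth call:

from twisted.internet import defer

async def get_user_by_access_token(access_token: str) -> dict:
    # Stand-in for the async auth call; the real method lives on self._auth.
    return {"user": "@alice:example.org", "device_id": None}

@defer.inlineCallbacks
def on_token(access_token):
    # A generator cannot `yield` a bare coroutine, so it is wrapped in
    # ensureDeferred first, exactly as the diff does for
    # get_user_by_access_token and delete_device.
    user_info = yield defer.ensureDeferred(get_user_by_access_token(access_token))
    return user_info["user"]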

View File

@@ -15,7 +15,18 @@
import logging
from collections import namedtuple
from typing import Callable, Iterable, List, TypeVar
from typing import (
Awaitable,
Callable,
Dict,
Iterable,
List,
Optional,
Set,
Tuple,
TypeVar,
Union,
)
from prometheus_client import Counter
@@ -24,12 +35,14 @@ from twisted.internet import defer
import synapse.server
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError
from synapse.events import EventBase
from synapse.handlers.presence import format_user_presence_state
from synapse.logging.context import PreserveLoggingContext
from synapse.logging.utils import log_function
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import StreamToken
from synapse.streams.config import PaginationConfig
from synapse.types import Collection, StreamToken, UserID
from synapse.util.async_helpers import ObservableDeferred, timeout_deferred
from synapse.util.metrics import Measure
from synapse.visibility import filter_events_for_client
@@ -77,7 +90,13 @@ class _NotifierUserStream(object):
so that it can remove itself from the indexes in the Notifier class.
"""
def __init__(self, user_id, rooms, current_token, time_now_ms):
def __init__(
self,
user_id: str,
rooms: Collection[str],
current_token: StreamToken,
time_now_ms: int,
):
self.user_id = user_id
self.rooms = set(rooms)
self.current_token = current_token
@@ -93,13 +112,13 @@ class _NotifierUserStream(object):
with PreserveLoggingContext():
self.notify_deferred = ObservableDeferred(defer.Deferred())
def notify(self, stream_key, stream_id, time_now_ms):
def notify(self, stream_key: str, stream_id: int, time_now_ms: int):
"""Notify any listeners for this user of a new event from an
event source.
Args:
stream_key(str): The stream the event came from.
stream_id(str): The new id for the stream the event came from.
time_now_ms(int): The current time in milliseconds.
stream_key: The stream the event came from.
stream_id: The new id for the stream the event came from.
time_now_ms: The current time in milliseconds.
"""
self.current_token = self.current_token.copy_and_advance(stream_key, stream_id)
self.last_notified_token = self.current_token
@@ -112,7 +131,7 @@ class _NotifierUserStream(object):
self.notify_deferred = ObservableDeferred(defer.Deferred())
noify_deferred.callback(self.current_token)
def remove(self, notifier):
def remove(self, notifier: "Notifier"):
""" Remove this listener from all the indexes in the Notifier
it knows about.
"""
@@ -123,10 +142,10 @@ class _NotifierUserStream(object):
notifier.user_to_user_stream.pop(self.user_id)
def count_listeners(self):
def count_listeners(self) -> int:
return len(self.notify_deferred.observers())
def new_listener(self, token):
def new_listener(self, token: StreamToken) -> _NotificationListener:
"""Returns a deferred that is resolved when there is a new token
greater than the given token.
@@ -159,14 +178,16 @@ class Notifier(object):
UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000
def __init__(self, hs: "synapse.server.HomeServer"):
self.user_to_user_stream = {}
self.room_to_user_streams = {}
self.user_to_user_stream = {} # type: Dict[str, _NotifierUserStream]
self.room_to_user_streams = {} # type: Dict[str, Set[_NotifierUserStream]]
self.hs = hs
self.storage = hs.get_storage()
self.event_sources = hs.get_event_sources()
self.store = hs.get_datastore()
self.pending_new_room_events = []
self.pending_new_room_events = (
[]
) # type: List[Tuple[int, EventBase, Collection[Union[str, UserID]]]]
# Called when there are new things to stream over replication
self.replication_callbacks = [] # type: List[Callable[[], None]]
@@ -178,10 +199,9 @@ class Notifier(object):
self.clock = hs.get_clock()
self.appservice_handler = hs.get_application_service_handler()
self.federation_sender = None
if hs.should_send_federation():
self.federation_sender = hs.get_federation_sender()
else:
self.federation_sender = None
self.state_handler = hs.get_state_handler()
@@ -193,12 +213,12 @@ class Notifier(object):
# when rendering the metrics page, which is likely once per minute at
# most when scraping it.
def count_listeners():
all_user_streams = set()
all_user_streams = set() # type: Set[_NotifierUserStream]
for x in list(self.room_to_user_streams.values()):
all_user_streams |= x
for x in list(self.user_to_user_stream.values()):
all_user_streams.add(x)
for streams in list(self.room_to_user_streams.values()):
all_user_streams |= streams
for stream in list(self.user_to_user_stream.values()):
all_user_streams.add(stream)
return sum(stream.count_listeners() for stream in all_user_streams)
@@ -223,7 +243,11 @@ class Notifier(object):
self.replication_callbacks.append(cb)
def on_new_room_event(
self, event, room_stream_id, max_room_stream_id, extra_users=[]
self,
event: EventBase,
room_stream_id: int,
max_room_stream_id: int,
extra_users: Collection[Union[str, UserID]] = [],
):
""" Used by handlers to inform the notifier something has happened
in the room, room event wise.
@@ -241,11 +265,11 @@ class Notifier(object):
self.notify_replication()
def _notify_pending_new_room_events(self, max_room_stream_id):
def _notify_pending_new_room_events(self, max_room_stream_id: int):
"""Notify for the room events that were queued waiting for a previous
event to be persisted.
Args:
max_room_stream_id(int): The highest stream_id below which all
max_room_stream_id: The highest stream_id below which all
events have been persisted.
"""
pending = self.pending_new_room_events
@@ -258,7 +282,12 @@ class Notifier(object):
else:
self._on_new_room_event(event, room_stream_id, extra_users)
def _on_new_room_event(self, event, room_stream_id, extra_users=[]):
def _on_new_room_event(
self,
event: EventBase,
room_stream_id: int,
extra_users: Collection[Union[str, UserID]] = [],
):
"""Notify any user streams that are interested in this room event"""
# poke any interested application service.
run_as_background_process(
@@ -275,13 +304,19 @@ class Notifier(object):
"room_key", room_stream_id, users=extra_users, rooms=[event.room_id]
)
async def _notify_app_services(self, room_stream_id):
async def _notify_app_services(self, room_stream_id: int):
try:
await self.appservice_handler.notify_interested_services(room_stream_id)
except Exception:
logger.exception("Error notifying application services of event")
def on_new_event(self, stream_key, new_token, users=[], rooms=[]):
def on_new_event(
self,
stream_key: str,
new_token: int,
users: Collection[Union[str, UserID]] = [],
rooms: Collection[str] = [],
):
""" Used to inform listeners that something has happened event wise.
Will wake up all listeners for the given users and rooms.
@@ -307,20 +342,25 @@ class Notifier(object):
self.notify_replication()
def on_new_replication_data(self):
def on_new_replication_data(self) -> None:
"""Used to inform replication listeners that something has happened
without waking up any of the normal user event streams"""
self.notify_replication()
async def wait_for_events(
self, user_id, timeout, callback, room_ids=None, from_token=StreamToken.START
):
self,
user_id: str,
timeout: int,
callback: Callable[[StreamToken, StreamToken], Awaitable[T]],
room_ids=None,
from_token=StreamToken.START,
) -> T:
"""Wait until the callback returns a non empty response or the
timeout fires.
"""
user_stream = self.user_to_user_stream.get(user_id)
if user_stream is None:
current_token = await self.event_sources.get_current_token()
current_token = self.event_sources.get_current_token()
if room_ids is None:
room_ids = await self.store.get_rooms_for_user(user_id)
user_stream = _NotifierUserStream(
@@ -377,19 +417,16 @@ class Notifier(object):
async def get_events_for(
self,
user,
pagination_config,
timeout,
only_keys=None,
is_guest=False,
explicit_room_id=None,
):
user: UserID,
pagination_config: PaginationConfig,
timeout: int,
is_guest: bool = False,
explicit_room_id: str = None,
) -> EventStreamResult:
""" For the given user and rooms, return any new events for them. If
there are no new events wait for up to `timeout` milliseconds for any
new events to happen before returning.
If `only_keys` is not None, events from keys will be sent down.
If explicit_room_id is not set, the user's joined rooms will be polled
for events.
If explicit_room_id is set, that room will be polled for events only if
@@ -397,18 +434,20 @@ class Notifier(object):
"""
from_token = pagination_config.from_token
if not from_token:
from_token = await self.event_sources.get_current_token()
from_token = self.event_sources.get_current_token()
limit = pagination_config.limit
room_ids, is_joined = await self._get_room_ids(user, explicit_room_id)
is_peeking = not is_joined
async def check_for_updates(before_token, after_token):
async def check_for_updates(
before_token: StreamToken, after_token: StreamToken
) -> EventStreamResult:
if not after_token.is_after(before_token):
return EventStreamResult([], (from_token, from_token))
events = []
events = [] # type: List[EventBase]
end_token = from_token
for name, source in self.event_sources.sources.items():
@@ -417,8 +456,6 @@ class Notifier(object):
after_id = getattr(after_token, keyname)
if before_id == after_id:
continue
if only_keys and name not in only_keys:
continue
new_events, new_key = await source.get_new_events(
user=user,
@@ -476,7 +513,9 @@ class Notifier(object):
return result
async def _get_room_ids(self, user, explicit_room_id):
async def _get_room_ids(
self, user: UserID, explicit_room_id: Optional[str]
) -> Tuple[Collection[str], bool]:
joined_room_ids = await self.store.get_rooms_for_user(user.to_string())
if explicit_room_id:
if explicit_room_id in joined_room_ids:
@@ -486,7 +525,7 @@ class Notifier(object):
raise AuthError(403, "Non-joined access not allowed")
return joined_room_ids, True
async def _is_world_readable(self, room_id):
async def _is_world_readable(self, room_id: str) -> bool:
state = await self.state_handler.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
)
@@ -496,7 +535,7 @@ class Notifier(object):
return False
@log_function
def remove_expired_streams(self):
def remove_expired_streams(self) -> None:
time_now_ms = self.clock.time_msec()
expired_streams = []
expire_before_ts = time_now_ms - self.UNUSED_STREAM_EXPIRY_MS
@@ -510,21 +549,21 @@ class Notifier(object):
expired_stream.remove(self)
@log_function
def _register_with_keys(self, user_stream):
def _register_with_keys(self, user_stream: _NotifierUserStream):
self.user_to_user_stream[user_stream.user_id] = user_stream
for room in user_stream.rooms:
s = self.room_to_user_streams.setdefault(room, set())
s.add(user_stream)
def _user_joined_room(self, user_id, room_id):
def _user_joined_room(self, user_id: str, room_id: str):
new_user_stream = self.user_to_user_stream.get(user_id)
if new_user_stream is not None:
room_streams = self.room_to_user_streams.setdefault(room_id, set())
room_streams.add(new_user_stream)
new_user_stream.rooms.add(room_id)
def notify_replication(self):
def notify_replication(self) -> None:
"""Notify any replication listeners that there's a new event"""
for cb in self.replication_callbacks:
cb()
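The Notifier changes above lean on comment-style annotations for attributes that start out as empty containers. A small self-contained sketch of why that form is used: it gives mypy the element types without changing runtime behaviour.

from typing import Dict, List, Set

class StreamIndex:
    def __init__(self) -> None:
        # Empty literals carry no element type of their own, so the type
        # comments supply it, mirroring user_to_user_stream and
        # room_to_user_streams above.
        self.user_to_stream = {}  # type: Dict[str, int]
        self.room_to_streams = {}  # type: Dict[str, Set[int]]
        self.pending = []  # type: List[str]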

View File

@@ -19,11 +19,13 @@ import copy
from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
def list_with_base_rules(rawrules):
def list_with_base_rules(rawrules, use_new_defaults=False):
"""Combine the list of rules set by the user with the default push rules
Args:
rawrules(list): The rules the user has modified or set.
use_new_defaults(bool): Whether to use the new experimental default rules when
appending or prepending default rules.
Returns:
A new list with the rules set by the user combined with the defaults.
@@ -43,7 +45,9 @@ def list_with_base_rules(rawrules):
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
@@ -54,6 +58,7 @@ def list_with_base_rules(rawrules):
make_base_append_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
current_prio_class -= 1
@@ -62,6 +67,7 @@ def list_with_base_rules(rawrules):
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
@@ -70,27 +76,39 @@ def list_with_base_rules(rawrules):
while current_prio_class > 0:
ruleslist.extend(
make_base_append_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
current_prio_class -= 1
if current_prio_class > 0:
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
return ruleslist
def make_base_append_rules(kind, modified_base_rules):
def make_base_append_rules(kind, modified_base_rules, use_new_defaults=False):
rules = []
if kind == "override":
rules = BASE_APPEND_OVERRIDE_RULES
rules = (
NEW_APPEND_OVERRIDE_RULES
if use_new_defaults
else BASE_APPEND_OVERRIDE_RULES
)
elif kind == "underride":
rules = BASE_APPEND_UNDERRIDE_RULES
rules = (
NEW_APPEND_UNDERRIDE_RULES
if use_new_defaults
else BASE_APPEND_UNDERRIDE_RULES
)
elif kind == "content":
rules = BASE_APPEND_CONTENT_RULES
@@ -105,7 +123,7 @@ def make_base_append_rules(kind, modified_base_rules):
return rules
def make_base_prepend_rules(kind, modified_base_rules):
def make_base_prepend_rules(kind, modified_base_rules, use_new_defaults=False):
rules = []
if kind == "override":
@@ -270,6 +288,135 @@ BASE_APPEND_OVERRIDE_RULES = [
]
NEW_APPEND_OVERRIDE_RULES = [
{
"rule_id": "global/override/.m.rule.encrypted",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.encrypted",
"_id": "_encrypted",
}
],
"actions": ["notify"],
},
{
"rule_id": "global/override/.m.rule.suppress_notices",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.message",
"_id": "_suppress_notices_type",
},
{
"kind": "event_match",
"key": "content.msgtype",
"pattern": "m.notice",
"_id": "_suppress_notices",
},
],
"actions": [],
},
{
"rule_id": "global/underride/.m.rule.suppress_edits",
"conditions": [
{
"kind": "event_match",
"key": "m.relates_to.m.rel_type",
"pattern": "m.replace",
"_id": "_suppress_edits",
}
],
"actions": [],
},
{
"rule_id": "global/override/.m.rule.invite_for_me",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.member",
"_id": "_member",
},
{
"kind": "event_match",
"key": "content.membership",
"pattern": "invite",
"_id": "_invite_member",
},
{"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
],
"actions": ["notify", {"set_tweak": "sound", "value": "default"}],
},
{
"rule_id": "global/override/.m.rule.contains_display_name",
"conditions": [{"kind": "contains_display_name"}],
"actions": [
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight"},
],
},
{
"rule_id": "global/override/.m.rule.tombstone",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.tombstone",
"_id": "_tombstone",
},
{
"kind": "event_match",
"key": "state_key",
"pattern": "",
"_id": "_tombstone_statekey",
},
],
"actions": [
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight"},
],
},
{
"rule_id": "global/override/.m.rule.roomnotif",
"conditions": [
{
"kind": "event_match",
"key": "content.body",
"pattern": "@room",
"_id": "_roomnotif_content",
},
{
"kind": "sender_notification_permission",
"key": "room",
"_id": "_roomnotif_pl",
},
],
"actions": [
"notify",
{"set_tweak": "highlight"},
{"set_tweak": "sound", "value": "default"},
],
},
{
"rule_id": "global/override/.m.rule.call",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.call.invite",
"_id": "_call",
}
],
"actions": ["notify", {"set_tweak": "sound", "value": "ring"}],
},
]
BASE_APPEND_UNDERRIDE_RULES = [
{
"rule_id": "global/underride/.m.rule.call",
@@ -354,6 +501,36 @@ BASE_APPEND_UNDERRIDE_RULES = [
]
NEW_APPEND_UNDERRIDE_RULES = [
{
"rule_id": "global/underride/.m.rule.room_one_to_one",
"conditions": [
{"kind": "room_member_count", "is": "2", "_id": "member_count"},
{
"kind": "event_match",
"key": "content.body",
"pattern": "*",
"_id": "body",
},
],
"actions": ["notify", {"set_tweak": "sound", "value": "default"}],
},
{
"rule_id": "global/underride/.m.rule.message",
"conditions": [
{
"kind": "event_match",
"key": "content.body",
"pattern": "*",
"_id": "body",
},
],
"actions": ["notify"],
"enabled": False,
},
]
BASE_RULE_IDS = set()
for r in BASE_APPEND_CONTENT_RULES:
@@ -375,3 +552,26 @@ for r in BASE_APPEND_UNDERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
NEW_RULE_IDS = set()
for r in BASE_APPEND_CONTENT_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["content"]
r["default"] = True
NEW_RULE_IDS.add(r["rule_id"])
for r in BASE_PREPEND_OVERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
r["default"] = True
NEW_RULE_IDS.add(r["rule_id"])
for r in NEW_APPEND_OVERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
r["default"] = True
NEW_RULE_IDS.add(r["rule_id"])
for r in NEW_APPEND_UNDERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
r["default"] = True
NEW_RULE_IDS.add(r["rule_id"])
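A minimal usage sketch of the new use_new_defaults switch above; the user rule is hypothetical but follows the shape the base rules use (rule_id, priority_class, conditions, actions), and the module path for list_with_base_rules is assumed.

from synapse.push.baserules import list_with_base_rules  # assumed module path
from synapse.push.rulekinds import PRIORITY_CLASS_MAP

user_rules = [
    {
        "rule_id": "global/content/.custom.keyword",  # hypothetical rule
        "priority_class": PRIORITY_CLASS_MAP["content"],
        "conditions": [
            {"kind": "event_match", "key": "content.body", "pattern": "synapse"}
        ],
        "actions": ["notify"],
        "enabled": True,
        "default": False,
    }
]

# Combine the user's rules with the stock defaults, or with the experimental
# new defaults when the flag is set.
combined = list_with_base_rules(user_rules)
experimental = list_with_base_rules(user_rules, use_new_defaults=True)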

View File

@@ -120,7 +120,7 @@ class BulkPushRuleEvaluator(object):
pl_event = await self.store.get_event(pl_event_id)
auth_events = {POWER_KEY: pl_event}
else:
auth_events_ids = await self.auth.compute_auth_events(
auth_events_ids = self.auth.compute_auth_events(
event, prev_state_ids, for_verification=False
)
auth_events = await self.store.get_events(auth_events_ids)

View File

@@ -21,13 +21,22 @@ async def get_badge_count(store, user_id):
invites = await store.get_invited_rooms_for_local_user(user_id)
joins = await store.get_rooms_for_user(user_id)
my_receipts_by_room = await store.get_receipts_for_user(user_id, "m.read")
badge = len(invites)
for room_id in joins:
unread_count = await store.get_unread_message_count_for_user(room_id, user_id)
# return one badge count per conversation, as count per
# message is so noisy as to be almost useless
badge += 1 if unread_count else 0
if room_id in my_receipts_by_room:
last_unread_event_id = my_receipts_by_room[room_id]
notifs = await (
store.get_unread_event_push_actions_by_room_for_user(
room_id, user_id, last_unread_event_id
)
)
# return one badge count per conversation, as count per
# message is so noisy as to be almost useless
badge += 1 if notifs["notify_count"] else 0
return badge
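A worked example of the per-conversation counting above, with hypothetical numbers: the badge grows by one per room that has any unread activity, not by one per unread message.

invites = 2            # pending invites for the user
rooms_with_unread = 3  # joined rooms whose notify_count is non-zero
badge = invites + rooms_with_unread
assert badge == 5      # ten unread messages in one room still add only 1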

View File

@@ -59,7 +59,6 @@ REQUIREMENTS = [
"pyyaml>=3.11",
"pyasn1>=0.1.9",
"pyasn1-modules>=0.0.7",
"daemonize>=2.3.1",
"bcrypt>=3.1.0",
"pillow>=4.3.0",
"sortedcontainers>=1.4.4",

View File

@@ -16,8 +16,8 @@
import logging
from typing import Optional
from synapse.storage.data_stores.main.cache import CacheInvalidationWorkerStore
from synapse.storage.database import Database
from synapse.storage.database import DatabasePool
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.storage.engines import PostgresEngine
from synapse.storage.util.id_generators import MultiWriterIdGenerator
@@ -25,7 +25,7 @@ logger = logging.getLogger(__name__)
class BaseSlavedStore(CacheInvalidationWorkerStore):
def __init__(self, database: Database, db_conn, hs):
def __init__(self, database: DatabasePool, db_conn, hs):
super(BaseSlavedStore, self).__init__(database, db_conn, hs)
if isinstance(self.database_engine, PostgresEngine):
self._cache_id_gen = MultiWriterIdGenerator(

View File

@@ -17,13 +17,13 @@
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.replication.tcp.streams import AccountDataStream, TagAccountDataStream
from synapse.storage.data_stores.main.account_data import AccountDataWorkerStore
from synapse.storage.data_stores.main.tags import TagsWorkerStore
from synapse.storage.database import Database
from synapse.storage.database import DatabasePool
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
from synapse.storage.databases.main.tags import TagsWorkerStore
class SlavedAccountDataStore(TagsWorkerStore, AccountDataWorkerStore, BaseSlavedStore):
def __init__(self, database: Database, db_conn, hs):
def __init__(self, database: DatabasePool, db_conn, hs):
self._account_data_id_gen = SlavedIdTracker(
db_conn,
"account_data",

View File

@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.storage.data_stores.main.appservice import (
from synapse.storage.databases.main.appservice import (
ApplicationServiceTransactionWorkerStore,
ApplicationServiceWorkerStore,
)

View File

@@ -13,22 +13,22 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.storage.data_stores.main.client_ips import LAST_SEEN_GRANULARITY
from synapse.storage.database import Database
from synapse.storage.database import DatabasePool
from synapse.storage.databases.main.client_ips import LAST_SEEN_GRANULARITY
from synapse.util.caches.descriptors import Cache
from ._base import BaseSlavedStore
class SlavedClientIpStore(BaseSlavedStore):
def __init__(self, database: Database, db_conn, hs):
def __init__(self, database: DatabasePool, db_conn, hs):
super(SlavedClientIpStore, self).__init__(database, db_conn, hs)
self.client_ip_last_seen = Cache(
name="client_ip_last_seen", keylen=4, max_entries=50000
)
def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id):
async def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id):
now = int(self._clock.time_msec())
key = (user_id, access_token, ip)

View File

@@ -16,14 +16,14 @@
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.replication.tcp.streams import ToDeviceStream
from synapse.storage.data_stores.main.deviceinbox import DeviceInboxWorkerStore
from synapse.storage.database import Database
from synapse.storage.database import DatabasePool
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.caches.stream_change_cache import StreamChangeCache
class SlavedDeviceInboxStore(DeviceInboxWorkerStore, BaseSlavedStore):
def __init__(self, database: Database, db_conn, hs):
def __init__(self, database: DatabasePool, db_conn, hs):
super(SlavedDeviceInboxStore, self).__init__(database, db_conn, hs)
self._device_inbox_id_gen = SlavedIdTracker(
db_conn, "device_inbox", "stream_id"

View File

@@ -16,14 +16,14 @@
from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
from synapse.replication.tcp.streams._base import DeviceListsStream, UserSignatureStream
from synapse.storage.data_stores.main.devices import DeviceWorkerStore
from synapse.storage.data_stores.main.end_to_end_keys import EndToEndKeyWorkerStore
from synapse.storage.database import Database
from synapse.storage.database import DatabasePool
from synapse.storage.databases.main.devices import DeviceWorkerStore
from synapse.storage.databases.main.end_to_end_keys import EndToEndKeyWorkerStore
from synapse.util.caches.stream_change_cache import StreamChangeCache
class SlavedDeviceStore(EndToEndKeyWorkerStore, DeviceWorkerStore, BaseSlavedStore):
def __init__(self, database: Database, db_conn, hs):
def __init__(self, database: DatabasePool, db_conn, hs):
super(SlavedDeviceStore, self).__init__(database, db_conn, hs)
self.hs = hs

View File

@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from synapse.storage.data_stores.main.directory import DirectoryWorkerStore
from synapse.storage.databases.main.directory import DirectoryWorkerStore
from ._base import BaseSlavedStore

Some files were not shown because too many files have changed in this diff