Compare commits
2 Commits
release-v1 ... erikj/remo

| Author | SHA1 | Date | |
|---|---|---|---|
|  | ac2aad2123 |  |  |
|  | 14ddce892f |  |  |
@@ -4,16 +4,18 @@ jobs:
    machine: true
    steps:
      - checkout
      - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} .
      - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:${CIRCLE_TAG} -t matrixdotorg/synapse:${CIRCLE_TAG}-py3 .
      - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
      - run: docker push matrixdotorg/synapse:${CIRCLE_TAG}
      - run: docker push matrixdotorg/synapse:${CIRCLE_TAG}-py3
  dockerhubuploadlatest:
    machine: true
    steps:
      - checkout
      - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest .
      - run: docker build -f docker/Dockerfile --label gitsha1=${CIRCLE_SHA1} -t matrixdotorg/synapse:latest -t matrixdotorg/synapse:latest-py3 .
      - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
      - run: docker push matrixdotorg/synapse:latest
      - run: docker push matrixdotorg/synapse:latest-py3

workflows:
  version: 2

4  .github/ISSUE_TEMPLATE/BUG_REPORT.md  vendored
@@ -4,12 +4,12 @@ about: Create a report to help us improve
---

<!--
**THIS IS NOT A SUPPORT CHANNEL!**
**IF YOU HAVE SUPPORT QUESTIONS ABOUT RUNNING OR CONFIGURING YOUR OWN HOME SERVER**,
please ask in **#synapse:matrix.org** (using a matrix.org account if necessary)

<!--
If you want to report a security issue, please see https://matrix.org/security-disclosure-policy/

This is a bug report template. By following the instructions below and

90  CHANGES.md
@@ -1,93 +1,3 @@
Synapse 1.19.1 (2020-08-27)
===========================

No significant changes.


Synapse 1.19.1rc1 (2020-08-25)
==============================

Bugfixes
--------

- Fix a bug introduced in v1.19.0 where appservices with ratelimiting disabled would still be ratelimited when joining rooms. ([\#8139](https://github.com/matrix-org/synapse/issues/8139))
- Fix a bug introduced in v1.19.0 that would cause e.g. profile updates to fail due to incorrect application of rate limits on join requests. ([\#8153](https://github.com/matrix-org/synapse/issues/8153))


Synapse 1.19.0 (2020-08-17)
===========================

No significant changes since 1.19.0rc1.

Removal warning
---------------

As outlined in the [previous release](https://github.com/matrix-org/synapse/releases/tag/v1.18.0), we are no longer publishing Docker images with the `-py3` tag suffix. On top of that, we have also removed the `latest-py3` tag. Please see [the announcement in the upgrade notes for 1.18.0](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180).


Synapse 1.19.0rc1 (2020-08-13)
==============================

Features
--------

- Add option to allow server admins to join rooms which fail complexity checks. Contributed by @lugino-emeritus. ([\#7902](https://github.com/matrix-org/synapse/issues/7902))
- Add an option to purge room or not with delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). Contributed by @dklimpel. ([\#7964](https://github.com/matrix-org/synapse/issues/7964))
- Add rate limiting to users joining rooms. ([\#8008](https://github.com/matrix-org/synapse/issues/8008))
- Add a `/health` endpoint to every configured HTTP listener that can be used as a health check endpoint by load balancers. ([\#8048](https://github.com/matrix-org/synapse/issues/8048))
- Allow login to be blocked based on the values of SAML attributes. ([\#8052](https://github.com/matrix-org/synapse/issues/8052))
- Allow guest access to the `GET /_matrix/client/r0/rooms/{room_id}/members` endpoint, according to MSC2689. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#7314](https://github.com/matrix-org/synapse/issues/7314))


Bugfixes
--------

- Fix a bug introduced in Synapse v1.7.2 which caused inaccurate membership counts in the room directory. ([\#7977](https://github.com/matrix-org/synapse/issues/7977))
- Fix a long standing bug: 'Duplicate key value violates unique constraint "event_relations_id"' when message retention is configured. ([\#7978](https://github.com/matrix-org/synapse/issues/7978))
- Fix "no create event in auth events" when trying to reject invitation after inviter leaves. Bug introduced in Synapse v1.10.0. ([\#7980](https://github.com/matrix-org/synapse/issues/7980))
- Fix various comments and minor discrepancies in server notices code. ([\#7996](https://github.com/matrix-org/synapse/issues/7996))
- Fix a long standing bug where HTTP HEAD requests resulted in a 400 error. ([\#7999](https://github.com/matrix-org/synapse/issues/7999))
- Fix a long-standing bug which caused two copies of some log lines to be written when synctl was used along with a MemoryHandler logger. ([\#8011](https://github.com/matrix-org/synapse/issues/8011), [\#8012](https://github.com/matrix-org/synapse/issues/8012))


Updates to the Docker image
---------------------------

- We no longer publish Docker images with the `-py3` tag suffix, as [announced in the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/UPGRADE.rst#upgrading-to-v1180). ([\#8056](https://github.com/matrix-org/synapse/issues/8056))


Improved Documentation
----------------------

- Document how to set up a client .well-known file and fix several pieces of outdated documentation. ([\#7899](https://github.com/matrix-org/synapse/issues/7899))
- Improve workers docs. ([\#7990](https://github.com/matrix-org/synapse/issues/7990), [\#8000](https://github.com/matrix-org/synapse/issues/8000))
- Fix typo in `docs/workers.md`. ([\#7992](https://github.com/matrix-org/synapse/issues/7992))
- Add documentation for how to undo a room shutdown. ([\#7998](https://github.com/matrix-org/synapse/issues/7998), [\#8010](https://github.com/matrix-org/synapse/issues/8010))


Internal Changes
----------------

- Reduce the amount of whitespace in JSON stored and sent in responses. Contributed by David Vo. ([\#7372](https://github.com/matrix-org/synapse/issues/7372))
- Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0. ([\#7936](https://github.com/matrix-org/synapse/issues/7936), [\#7979](https://github.com/matrix-org/synapse/issues/7979))
- Convert various parts of the codebase to async/await. ([\#7947](https://github.com/matrix-org/synapse/issues/7947), [\#7948](https://github.com/matrix-org/synapse/issues/7948), [\#7949](https://github.com/matrix-org/synapse/issues/7949), [\#7951](https://github.com/matrix-org/synapse/issues/7951), [\#7963](https://github.com/matrix-org/synapse/issues/7963), [\#7973](https://github.com/matrix-org/synapse/issues/7973), [\#7975](https://github.com/matrix-org/synapse/issues/7975), [\#7976](https://github.com/matrix-org/synapse/issues/7976), [\#7981](https://github.com/matrix-org/synapse/issues/7981), [\#7987](https://github.com/matrix-org/synapse/issues/7987), [\#7989](https://github.com/matrix-org/synapse/issues/7989), [\#8003](https://github.com/matrix-org/synapse/issues/8003), [\#8014](https://github.com/matrix-org/synapse/issues/8014), [\#8016](https://github.com/matrix-org/synapse/issues/8016), [\#8027](https://github.com/matrix-org/synapse/issues/8027), [\#8031](https://github.com/matrix-org/synapse/issues/8031), [\#8032](https://github.com/matrix-org/synapse/issues/8032), [\#8035](https://github.com/matrix-org/synapse/issues/8035), [\#8042](https://github.com/matrix-org/synapse/issues/8042), [\#8044](https://github.com/matrix-org/synapse/issues/8044), [\#8045](https://github.com/matrix-org/synapse/issues/8045), [\#8061](https://github.com/matrix-org/synapse/issues/8061), [\#8062](https://github.com/matrix-org/synapse/issues/8062), [\#8063](https://github.com/matrix-org/synapse/issues/8063), [\#8066](https://github.com/matrix-org/synapse/issues/8066), [\#8069](https://github.com/matrix-org/synapse/issues/8069), [\#8070](https://github.com/matrix-org/synapse/issues/8070))
- Move some database-related log lines from the default logger to the database/transaction loggers. ([\#7952](https://github.com/matrix-org/synapse/issues/7952))
- Add a script to detect source code files using non-unix line terminators. ([\#7965](https://github.com/matrix-org/synapse/issues/7965), [\#7970](https://github.com/matrix-org/synapse/issues/7970))
- Log the SAML session ID during creation. ([\#7971](https://github.com/matrix-org/synapse/issues/7971))
- Implement new experimental push rules for some users. ([\#7997](https://github.com/matrix-org/synapse/issues/7997))
- Remove redundant and unreliable signature check for v1 Identity Service lookup responses. ([\#8001](https://github.com/matrix-org/synapse/issues/8001))
- Improve the performance of the register endpoint. ([\#8009](https://github.com/matrix-org/synapse/issues/8009))
- Reduce less useful output in the newsfragment CI step. Add a link to the changelog section of the contributing guide on error. ([\#8024](https://github.com/matrix-org/synapse/issues/8024))
- Rename storage layer objects to be more sensible. ([\#8033](https://github.com/matrix-org/synapse/issues/8033))
- Change the default log config to reduce disk I/O and storage for new servers. ([\#8040](https://github.com/matrix-org/synapse/issues/8040))
- Add an assertion on `prev_events` in `create_new_client_event`. ([\#8041](https://github.com/matrix-org/synapse/issues/8041))
- Add a comment to `ServerContextFactory` about the use of `SSLv23_METHOD`. ([\#8043](https://github.com/matrix-org/synapse/issues/8043))
- Log `OPTIONS` requests at `DEBUG` rather than `INFO` level to reduce amount logged at `INFO`. ([\#8049](https://github.com/matrix-org/synapse/issues/8049))
- Reduce amount of outbound request logging at `INFO` level. ([\#8050](https://github.com/matrix-org/synapse/issues/8050))
- It is no longer necessary to explicitly define `filters` in the logging configuration. (Continuing to do so is redundant but harmless.) ([\#8051](https://github.com/matrix-org/synapse/issues/8051))
- Add and improve type hints. ([\#8058](https://github.com/matrix-org/synapse/issues/8058), [\#8064](https://github.com/matrix-org/synapse/issues/8064), [\#8060](https://github.com/matrix-org/synapse/issues/8060), [\#8067](https://github.com/matrix-org/synapse/issues/8067))


Synapse 1.18.0 (2020-07-30)
===========================

1  changelog.d/7314.misc  Normal file
@@ -0,0 +1 @@
Allow guest access to the `GET /_matrix/client/r0/rooms/{room_id}/members` endpoint, according to MSC2689. Contributed by Awesome Technologies Innovationslabor GmbH.

1  changelog.d/7736.feature  Normal file
@@ -0,0 +1 @@
Add unread messages count to sync responses, as specified in [MSC2654](https://github.com/matrix-org/matrix-doc/pull/2654).

1  changelog.d/7899.doc  Normal file
@@ -0,0 +1 @@
Document how to set up a Client Well-Known file and fix several pieces of outdated documentation.

1  changelog.d/7902.feature  Normal file
@@ -0,0 +1 @@
Add option to allow server admins to join rooms which fail complexity checks. Contributed by @lugino-emeritus.

1  changelog.d/7936.misc  Normal file
@@ -0,0 +1 @@
Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0.

1  changelog.d/7947.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7948.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7949.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7951.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7952.misc  Normal file
@@ -0,0 +1 @@
Move some database-related log lines from the default logger to the database/transaction loggers.

1  changelog.d/7963.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7964.feature  Normal file
@@ -0,0 +1 @@
Add an option to purge room or not with delete room admin endpoint (`POST /_synapse/admin/v1/rooms/<room_id>/delete`). Contributed by @dklimpel.

1  changelog.d/7965.misc  Normal file
@@ -0,0 +1 @@
Add a script to detect source code files using non-unix line terminators.

1  changelog.d/7970.misc  Normal file
@@ -0,0 +1 @@
Add a script to detect source code files using non-unix line terminators.

1  changelog.d/7971.misc  Normal file
@@ -0,0 +1 @@
Log the SAML session ID during creation.

1  changelog.d/7973.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7975.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7976.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7977.bugfix  Normal file
@@ -0,0 +1 @@
Fix a bug introduced in Synapse v1.7.2 which caused inaccurate membership counts in the room directory.

1  changelog.d/7978.bugfix  Normal file
@@ -0,0 +1 @@
Fix a long standing bug: 'Duplicate key value violates unique constraint "event_relations_id"' when message retention is configured.

1  changelog.d/7979.misc  Normal file
@@ -0,0 +1 @@
Switch to the JSON implementation from the standard library and bump the minimum version of the canonicaljson library to 1.2.0.

1  changelog.d/7980.bugfix  Normal file
@@ -0,0 +1 @@
Fix "no create event in auth events" when trying to reject invitation after inviter leaves. Bug introduced in Synapse v1.10.0.

1  changelog.d/7981.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7987.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7989.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/7990.doc  Normal file
@@ -0,0 +1 @@
Improve workers docs.

1  changelog.d/7992.doc  Normal file
@@ -0,0 +1 @@
Fix typo in `docs/workers.md`.

1  changelog.d/7996.bugfix  Normal file
@@ -0,0 +1 @@
Fix various comments and minor discrepancies in server notices code.

1  changelog.d/7998.doc  Normal file
@@ -0,0 +1 @@
Add documentation for how to undo a room shutdown.

1  changelog.d/7999.bugfix  Normal file
@@ -0,0 +1 @@
Fix a long standing bug where HTTP HEAD requests resulted in a 400 error.

1  changelog.d/8001.misc  Normal file
@@ -0,0 +1 @@
Remove redundant and unreliable signature check for v1 Identity Service lookup responses.

1  changelog.d/8003.misc  Normal file
@@ -0,0 +1 @@
Convert various parts of the codebase to async/await.

1  changelog.d/8008.feature  Normal file
@@ -0,0 +1 @@
Add rate limiting to users joining rooms.

1  changelog.d/8025.bugfix  Normal file
@@ -0,0 +1 @@
Fix bug where state (e.g. power levels) would reset incorrectly when receiving an event from a remote server.

12  debian/changelog  vendored
@@ -1,18 +1,12 @@
matrix-synapse-py3 (1.19.1) stable; urgency=medium

  * New synapse release 1.19.1.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 27 Aug 2020 10:50:19 +0100

matrix-synapse-py3 (1.19.0) stable; urgency=medium
matrix-synapse-py3 (1.xx.0) stable; urgency=medium

  [ Synapse Packaging team ]
  * New synapse release 1.19.0.
  * New synapse release 1.xx.0.

  [ Aaron Raimist ]
  * Fix outdated documentation for SYNAPSE_CACHE_FACTOR

 -- Synapse Packaging team <packages@matrix.org>  Mon, 17 Aug 2020 14:06:42 +0100
 -- Synapse Packaging team <packages@matrix.org>  XXXXX

matrix-synapse-py3 (1.18.0) stable; urgency=medium

@@ -4,10 +4,16 @@ formatters:
  precise:
    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'

filters:
  context:
    (): synapse.logging.context.LoggingContextFilter
    request: ""

handlers:
  console:
    class: logging.StreamHandler
    formatter: precise
    filters: [context]

loggers:
  synapse.storage.SQL:

@@ -79,20 +79,13 @@ Response:
the structure can and does change without notice.

First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible
to recover at all:
never happened - work has to be done to move forward instead of resetting the past.

* If the room was invite-only, your users will need to be re-invited.
* If the room no longer has any members at all, it'll be impossible to rejoin.
* The first user to rejoin will have to do so via an alias on a different server.

With all that being said, if you still want to try and recover the room:

1. For safety reasons, shut down Synapse.
1. For safety reasons, it is recommended to shut down Synapse prior to continuing.
2. In the database, run `DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';`
   * For caution: it's recommended to run this in a transaction: `BEGIN; DELETE ...;`, verify you got 1 result, then `COMMIT;`.
   * The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
3. Restart Synapse.
3. Restart Synapse (required).
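
As an aside on step 2, the transaction-wrapped delete can also be scripted. The following is a minimal sketch only, assuming a PostgreSQL database reachable with psycopg2 and a hypothetical room ID; it is an illustration of the documented caution, not part of the official procedure:

```python
# Illustrative sketch: wrap the documented DELETE in a transaction and only
# commit if exactly one row was removed, matching the "verify you got 1 result"
# advice above. The DSN and room ID below are assumptions for the example.
import psycopg2

ROOM_ID = "!example:example.org"  # hypothetical room ID, as in the docs above

conn = psycopg2.connect("dbname=synapse user=synapse_user")
try:
    with conn:  # opens a transaction; commits on success, rolls back on error
        with conn.cursor() as cur:
            cur.execute(
                "DELETE FROM blocked_rooms WHERE room_id = %s", (ROOM_ID,)
            )
            if cur.rowcount != 1:
                # Abort the transaction rather than committing an unexpected change.
                raise RuntimeError("expected 1 deleted row, got %d" % cur.rowcount)
finally:
    conn.close()
```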

You will have to manually handle, if you so choose, the following:

@@ -139,10 +139,3 @@ client IP addresses are recorded correctly.
Having done so, you can then use `https://matrix.example.com` (instead
of `https://matrix.example.com:8448`) as the "Custom server" when
connecting to Synapse from a client.

## Health check endpoint

Synapse exposes a health check endpoint for use by reverse proxies.
Each configured HTTP listener has a `/health` endpoint which always returns
200 OK (and doesn't get logged).
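
A load balancer or monitoring script can poll that endpoint directly. A minimal sketch, assuming a Synapse HTTP listener reachable at `localhost:8008`:

```python
# Minimal health probe sketch: assumes a Synapse listener on localhost:8008.
from urllib.request import urlopen
from urllib.error import URLError

def synapse_is_healthy(base_url: str = "http://localhost:8008") -> bool:
    """Return True if the /health endpoint answers with HTTP 200."""
    try:
        with urlopen(base_url + "/health", timeout=5) as resp:
            return resp.status == 200
    except URLError:
        return False

if __name__ == "__main__":
    print("healthy" if synapse_is_healthy() else "unhealthy")
```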

@@ -1577,17 +1577,6 @@ saml2_config:
#
#grandfathered_mxid_source_attribute: upn

# It is possible to configure Synapse to only allow logins if SAML attributes
# match particular values. The requirements can be listed under
# `attribute_requirements` as shown below. All of the listed attributes must
# match for the login to be permitted.
#
#attribute_requirements:
#  - attribute: userGroup
#    value: "staff"
#  - attribute: department
#    value: "sales"

# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#

@@ -11,33 +11,24 @@ formatters:
  precise:
    format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'

filters:
  context:
    (): synapse.logging.context.LoggingContextFilter
    request: ""

handlers:
  file:
    class: logging.handlers.TimedRotatingFileHandler
    class: logging.handlers.RotatingFileHandler
    formatter: precise
    filename: /var/log/matrix-synapse/homeserver.log
    when: midnight
    backupCount: 3  # Does not include the current log file.
    maxBytes: 104857600
    backupCount: 10
    filters: [context]
    encoding: utf8
  # Default to buffering writes to log file for efficiency. This means that
  # there will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
  # logs will still be flushed immediately.
  buffer:
    class: logging.handlers.MemoryHandler
    target: file
    # The capacity is the number of log lines that are buffered before
    # being written to disk. Increasing this will lead to better
    # performance, at the expense of it taking longer for log lines to
    # be written to disk.
    capacity: 10
    flushLevel: 30  # Flush for WARNING logs as well

  # A handler that writes logs to stderr. Unused by default, but can be used
  # instead of "buffer" and "file" in the logger handlers.
  console:
    class: logging.StreamHandler
    formatter: precise
    filters: [context]

loggers:
  synapse.storage.SQL:

@@ -45,23 +36,8 @@ loggers:
    # information such as access tokens.
    level: INFO

  twisted:
    # We send the twisted logging directly to the file handler,
    # to work around https://github.com/matrix-org/synapse/issues/3471
    # when using "buffer" logger. Use "console" to log to stderr instead.
    handlers: [file]
    propagate: false

root:
  level: INFO

  # Write logs to the `buffer` handler, which will buffer them together in memory,
  # then write them to a file.
  #
  # Replace "buffer" with "console" to log to stderr instead. (Note that you'll
  # also need to update the configuration for the `twisted` logger above, in
  # this case.)
  #
  handlers: [buffer]
  handlers: [file, console]

disable_existing_loggers: false

@@ -1,7 +1,7 @@
worker_app: synapse.app.federation_reader
worker_name: federation_reader1

worker_replication_host: 127.0.0.1
worker_replication_port: 9092
worker_replication_http_port: 9093

worker_listeners:

@@ -7,6 +7,6 @@ who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to execute the SQL [here](../synapse/storage/databases/main/schema/delta/53/user_dir_populate.sql)
solution to fix it is to execute the SQL [here](../synapse/storage/data_stores/main/schema/delta/53/user_dir_populate.sql)
and then restart synapse. This should then start a background task to
flush the current tables and regenerate the directory.

@@ -23,7 +23,7 @@ The processes communicate with each other via a Synapse-specific protocol called
feeds streams of newly written data between processes so they can be kept in
sync with the database state.

When configured to do so, Synapse uses a
When configured to do so, Synapse uses a
[Redis pub/sub channel](https://redis.io/topics/pubsub) to send the replication
stream between all configured Synapse processes. Additionally, processes may
make HTTP requests to each other, primarily for operations which need to wait

@@ -66,31 +66,23 @@ https://hub.docker.com/r/matrixdotorg/synapse/.
To make effective use of the workers, you will need to configure an HTTP
reverse-proxy such as nginx or haproxy, which will direct incoming requests to
the correct worker, or to the main synapse instance. See
the correct worker, or to the main synapse instance. See
[reverse_proxy.md](reverse_proxy.md) for information on setting up a reverse
proxy.

When using workers, each worker process has its own configuration file which
contains settings specific to that worker, such as the HTTP listener that it
provides (if any), logging configuration, etc.

Normally, the worker processes are configured to read from a shared
configuration file as well as the worker-specific configuration files. This
makes it easier to keep common configuration settings synchronised across all
the processes.

The main process is somewhat special in this respect: it does not normally
need its own configuration file and can take all of its configuration from the
shared configuration file.
To enable workers you should create a configuration file for each worker
process. Each worker configuration file inherits the configuration of the shared
homeserver configuration file. You can then override configuration specific to
that worker, e.g. the HTTP listener that it provides (if any); logging
configuration; etc. You should minimise the number of overrides though to
maintain a usable config.

### Shared configuration

Normally, only a couple of changes are needed to make an existing configuration
file suitable for use with workers. First, you need to enable an "HTTP replication
listener" for the main process; and secondly, you need to enable redis-based
replication. For example:
### Shared Configuration

Next you need to add both a HTTP replication listener, used for HTTP requests
between processes, and redis config to the shared Synapse configuration file
(`homeserver.yaml`). For example:

```yaml
# extend the existing `listeners` section. This defines the ports that the

@@ -113,7 +105,7 @@ Under **no circumstances** should the replication listener be exposed to the
public internet; it has no authentication and is unencrypted.

### Worker configuration
### Worker Configuration

In the config file for each worker, you must specify the type of worker
application (`worker_app`), and you should specify a unique name for the worker

@@ -153,9 +145,6 @@ plain HTTP endpoint on port 8083 separately serving various endpoints, e.g.
Obviously you should configure your reverse-proxy to route the relevant
endpoints to the worker (`localhost:8083` in the above example).

### Running Synapse with workers

Finally, you need to start your worker processes. This can be done with either
`synctl` or your distribution's preferred service manager such as `systemd`. We
recommend the use of `systemd` where available: for information on setting up

@@ -418,23 +407,6 @@ all these to be folded into the `generic_worker` app and to use config to define
which processes handle the various processing such as push notifications.

## Migration from old config

There are two main independent changes that have been made: introducing Redis
support and merging apps into `synapse.app.generic_worker`. Both these changes
are backwards compatible and so no changes to the config are required, however
server admins are encouraged to plan to migrate to Redis as the old style direct
TCP replication config is deprecated.

To migrate to Redis add the `redis` config as above, and optionally remove the
TCP `replication` listener from master and `worker_replication_port` from worker
config.

To migrate apps to use `synapse.app.generic_worker` simply update the
`worker_app` option in the worker configs, and where workers are started (e.g.
in systemd service files, but not required for synctl).

## Architectural diagram

The following shows an example setup using Redis and a reverse proxy:

3  mypy.ini
@@ -81,6 +81,3 @@ ignore_missing_imports = True
[mypy-rust_python_jaeger_reporter.*]
ignore_missing_imports = True

[mypy-nacl.*]
ignore_missing_imports = True

@@ -3,8 +3,6 @@
# A script which checks that an appropriate news file has been added on this
# branch.

echo -e "+++ \033[32mChecking newsfragment\033[m"

set -e

# make sure that origin/develop is up to date

@@ -18,8 +16,6 @@ pr="$BUILDKITE_PULL_REQUEST"
if ! git diff --quiet FETCH_HEAD... -- debian; then
  if git diff --quiet FETCH_HEAD... -- debian/changelog; then
    echo "Updates to debian directory, but no update to the changelog." >&2
    echo "!! Please see the contributing guide for help writing your changelog entry:" >&2
    echo "https://github.com/matrix-org/synapse/blob/develop/CONTRIBUTING.md#debian-changelog" >&2
    exit 1
  fi
fi

@@ -30,12 +26,7 @@ if ! git diff --name-only FETCH_HEAD... | grep -qv '^debian/'; then
  exit 0
fi

# Print a link to the contributing guide if the user makes a mistake
CONTRIBUTING_GUIDE_TEXT="!! Please see the contributing guide for help writing your changelog entry:
https://github.com/matrix-org/synapse/blob/develop/CONTRIBUTING.md#changelog"

# If check-newsfragment returns a non-zero exit code, print the contributing guide and exit
tox -qe check-newsfragment || (echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2 && exit 1)
tox -qe check-newsfragment

echo
echo "--------------------------"

@@ -47,7 +38,6 @@ for f in `git diff --name-only FETCH_HEAD... -- changelog.d`; do
  lastchar=`tr -d '\n' < $f | tail -c 1`
  if [ $lastchar != '.' -a $lastchar != '!' ]; then
    echo -e "\e[31mERROR: newsfragment $f does not end with a '.' or '!'\e[39m" >&2
    echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2
    exit 1
  fi

@@ -57,6 +47,5 @@ done
if [[ -n "$pr" && "$matched" -eq 0 ]]; then
  echo -e "\e[31mERROR: Did not find a news fragment with the right number: expected changelog.d/$pr.*.\e[39m" >&2
  echo -e "$CONTRIBUTING_GUIDE_TEXT" >&2
  exit 1
fi

@@ -40,7 +40,7 @@ class MockHomeserver(HomeServer):
config.server_name, reactor=reactor, config=config, **kwargs
)

self.version_string = "Synapse/" + get_version_string(synapse)
self.version_string = "Synapse/"+get_version_string(synapse)

if __name__ == "__main__":

@@ -86,7 +86,7 @@ if __name__ == "__main__":
store = hs.get_datastore()

async def run_background_updates():
await store.db_pool.updates.run_background_updates(sleep=False)
await store.db.updates.run_background_updates(sleep=False)
# Stop the reactor to exit the script once every background update is run.
reactor.stop()

@@ -35,29 +35,31 @@ from synapse.logging.context import (
make_deferred_yieldable,
run_in_background,
)
from synapse.storage.database import DatabasePool, make_conn
from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpdateStore
from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.databases.main.events_bg_updates import (
from synapse.storage.data_stores.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.data_stores.main.deviceinbox import (
DeviceInboxBackgroundUpdateStore,
)
from synapse.storage.data_stores.main.devices import DeviceBackgroundUpdateStore
from synapse.storage.data_stores.main.events_bg_updates import (
EventsBackgroundUpdatesStore,
)
from synapse.storage.databases.main.media_repository import (
from synapse.storage.data_stores.main.media_repository import (
MediaRepositoryBackgroundUpdateStore,
)
from synapse.storage.databases.main.registration import (
from synapse.storage.data_stores.main.registration import (
RegistrationBackgroundUpdateStore,
find_max_generated_user_id_localpart,
)
from synapse.storage.databases.main.room import RoomBackgroundUpdateStore
from synapse.storage.databases.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.databases.main.search import SearchBackgroundUpdateStore
from synapse.storage.databases.main.state import MainStateBackgroundUpdateStore
from synapse.storage.databases.main.stats import StatsStore
from synapse.storage.databases.main.user_directory import (
from synapse.storage.data_stores.main.room import RoomBackgroundUpdateStore
from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.data_stores.main.search import SearchBackgroundUpdateStore
from synapse.storage.data_stores.main.state import MainStateBackgroundUpdateStore
from synapse.storage.data_stores.main.stats import StatsStore
from synapse.storage.data_stores.main.user_directory import (
UserDirectoryBackgroundUpdateStore,
)
from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStore
from synapse.storage.data_stores.state.bg_updates import StateBackgroundUpdateStore
from synapse.storage.database import Database, make_conn
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util import Clock

@@ -67,7 +69,7 @@ logger = logging.getLogger("synapse_port_db")
BOOLEAN_COLUMNS = {
"events": ["processed", "outlier", "contains_url"],
"events": ["processed", "outlier", "contains_url", "count_as_unread"],
"rooms": ["is_public"],
"event_edges": ["is_state"],
"presence_list": ["accepted"],

@@ -173,14 +175,14 @@ class Store(
StatsStore,
):
def execute(self, f, *args, **kwargs):
return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)
return self.db.runInteraction(f.__name__, f, *args, **kwargs)

def execute_sql(self, sql, *args):
def r(txn):
txn.execute(sql, args)
return txn.fetchall()

return self.db_pool.runInteraction("execute_sql", r)
return self.db.runInteraction("execute_sql", r)

def insert_many_txn(self, txn, table, headers, rows):
sql = "INSERT INTO %s (%s) VALUES (%s)" % (

@@ -225,7 +227,7 @@ class Porter(object):
async def setup_table(self, table):
if table in APPEND_ONLY_TABLES:
# It's safe to just carry on inserting.
row = await self.postgres_store.db_pool.simple_select_one(
row = await self.postgres_store.db.simple_select_one(
table="port_from_sqlite3",
keyvalues={"table_name": table},
retcols=("forward_rowid", "backward_rowid"),

@@ -242,7 +244,7 @@ class Porter(object):
) = await self._setup_sent_transactions()
backward_chunk = 0
else:
await self.postgres_store.db_pool.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": table,

@@ -272,7 +274,7 @@ class Porter(object):
await self.postgres_store.execute(delete_all)

await self.postgres_store.db_pool.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={"table_name": table, "forward_rowid": 1, "backward_rowid": 0},
)

@@ -316,7 +318,7 @@ class Porter(object):
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null), as that is
# what synapse expects to be there.
await self.postgres_store.db_pool.simple_insert(
await self.postgres_store.db.simple_insert(
table=table, values={"stream_id": None}
)
self.progress.update(table, table_size)  # Mark table as done

@@ -357,7 +359,7 @@ class Porter(object):
return headers, forward_rows, backward_rows

headers, frows, brows = await self.sqlite_store.db_pool.runInteraction(
headers, frows, brows = await self.sqlite_store.db.runInteraction(
"select", r
)

@@ -373,7 +375,7 @@ class Porter(object):
def insert(txn):
self.postgres_store.insert_many_txn(txn, table, headers[1:], rows)

self.postgres_store.db_pool.simple_update_one_txn(
self.postgres_store.db.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": table},

@@ -411,7 +413,7 @@ class Porter(object):
return headers, rows

headers, rows = await self.sqlite_store.db_pool.runInteraction("select", r)
headers, rows = await self.sqlite_store.db.runInteraction("select", r)

if rows:
forward_chunk = rows[-1][0] + 1

@@ -449,7 +451,7 @@ class Porter(object):
],
)

self.postgres_store.db_pool.simple_update_one_txn(
self.postgres_store.db.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": "event_search"},

@@ -492,7 +494,7 @@ class Porter(object):
db_conn, allow_outdated_version=allow_outdated_version
)
prepare_database(db_conn, engine, config=self.hs_config)
store = Store(DatabasePool(hs, db_config, engine), db_conn, hs)
store = Store(Database(hs, db_config, engine), db_conn, hs)
db_conn.commit()

return store

@@ -500,7 +502,7 @@ class Porter(object):
async def run_background_updates_on_postgres(self):
# Manually apply all background updates on the PostgreSQL database.
postgres_ready = (
await self.postgres_store.db_pool.updates.has_completed_background_updates()
await self.postgres_store.db.updates.has_completed_background_updates()
)

if not postgres_ready:

@@ -509,9 +511,9 @@ class Porter(object):
self.progress.set_state("Running background updates on PostgreSQL")

while not postgres_ready:
await self.postgres_store.db_pool.updates.do_next_background_update(100)
await self.postgres_store.db.updates.do_next_background_update(100)
postgres_ready = await (
self.postgres_store.db_pool.updates.has_completed_background_updates()
self.postgres_store.db.updates.has_completed_background_updates()
)

async def run(self):

@@ -532,7 +534,7 @@ class Porter(object):
# Check if all background updates are done, abort if not.
updates_complete = (
await self.sqlite_store.db_pool.updates.has_completed_background_updates()
await self.sqlite_store.db.updates.has_completed_background_updates()
)
if not updates_complete:
end_error = (

@@ -574,24 +576,22 @@ class Porter(object):
)

try:
await self.postgres_store.db_pool.runInteraction(
"alter_table", alter_table
)
await self.postgres_store.db.runInteraction("alter_table", alter_table)
except Exception:
# On Error Resume Next
pass

await self.postgres_store.db_pool.runInteraction(
await self.postgres_store.db.runInteraction(
"create_port_table", create_port_table
)

# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = await self.sqlite_store.db_pool.simple_select_onecol(
sqlite_tables = await self.sqlite_store.db.simple_select_onecol(
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
)

postgres_tables = await self.postgres_store.db_pool.simple_select_onecol(
postgres_tables = await self.postgres_store.db.simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",

@@ -692,7 +692,7 @@ class Porter(object):
return headers, [r for r in rows if r[ts_ind] < yesterday]

headers, rows = await self.sqlite_store.db_pool.runInteraction("select", r)
headers, rows = await self.sqlite_store.db.runInteraction("select", r)

rows = self._convert_rows("sent_transactions", headers, rows)

@@ -725,7 +725,7 @@ class Porter(object):
next_chunk = await self.sqlite_store.execute(get_start_id)
next_chunk = max(max_inserted_rowid + 1, next_chunk)

await self.postgres_store.db_pool.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": "sent_transactions",

@@ -794,14 +794,14 @@ class Porter(object):
next_id = curr_id + 1
txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))

return self.postgres_store.db_pool.runInteraction("setup_state_group_id_seq", r)
return self.postgres_store.db.runInteraction("setup_state_group_id_seq", r)

def _setup_user_id_seq(self):
def r(txn):
next_id = find_max_generated_user_id_localpart(txn) + 1
txn.execute("ALTER SEQUENCE user_id_seq RESTART WITH %s", (next_id,))

return self.postgres_store.db_pool.runInteraction("setup_user_id_seq", r)
return self.postgres_store.db.runInteraction("setup_user_id_seq", r)

##############################################

@@ -48,7 +48,7 @@ try:
except ImportError:
pass

__version__ = "1.19.1"
__version__ = "1.18.0"

if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

@@ -13,11 +13,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import List, Optional, Tuple
from typing import Optional

import pymacaroons
from netaddr import IPAddress

from twisted.internet import defer
from twisted.web.server import Request

import synapse.types

@@ -79,14 +80,13 @@ class Auth(object):
self._track_appservice_user_ips = hs.config.track_appservice_user_ips
self._macaroon_secret_key = hs.config.macaroon_secret_key

async def check_from_context(
self, room_version: str, event, context, do_sig_check=True
):
prev_state_ids = await context.get_prev_state_ids()
auth_events_ids = self.compute_auth_events(
@defer.inlineCallbacks
def check_from_context(self, room_version: str, event, context, do_sig_check=True):
prev_state_ids = yield defer.ensureDeferred(context.get_prev_state_ids())
auth_events_ids = yield self.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events = await self.store.get_events(auth_events_ids)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in auth_events.values()}

room_version_obj = KNOWN_ROOM_VERSIONS[room_version]

@@ -94,13 +94,14 @@ class Auth(object):
room_version_obj, event, auth_events=auth_events, do_sig_check=do_sig_check
)

async def check_user_in_room(
@defer.inlineCallbacks
def check_user_in_room(
self,
room_id: str,
user_id: str,
current_state: Optional[StateMap[EventBase]] = None,
allow_departed_users: bool = False,
) -> EventBase:
):
"""Check if the user is in the room, or was at some point.
Args:
room_id: The room to check.

@@ -118,35 +119,37 @@ class Auth(object):
Raises:
AuthError if the user is/was not in the room.
Returns:
Membership event for the user if the user was in the
room. This will be the join event if they are currently joined to
the room. This will be the leave event if they have left the room.
Deferred[Optional[EventBase]]:
Membership event for the user if the user was in the
room. This will be the join event if they are currently joined to
the room. This will be the leave event if they have left the room.
"""
if current_state:
member = current_state.get((EventTypes.Member, user_id), None)
else:
member = await self.state.get_current_state(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
member = yield defer.ensureDeferred(
self.state.get_current_state(
room_id=room_id, event_type=EventTypes.Member, state_key=user_id
)
)
membership = member.membership if member else None

if member:
membership = member.membership
if membership == Membership.JOIN:
return member

if membership == Membership.JOIN:
# XXX this looks totally bogus. Why do we not allow users who have been banned,
# or those who were members previously and have been re-invited?
if allow_departed_users and membership == Membership.LEAVE:
forgot = yield self.store.did_forget(user_id, room_id)
if not forgot:
return member

# XXX this looks totally bogus. Why do we not allow users who have been banned,
# or those who were members previously and have been re-invited?
if allow_departed_users and membership == Membership.LEAVE:
forgot = await self.store.did_forget(user_id, room_id)
if not forgot:
return member

raise AuthError(403, "User %s not in room %s" % (user_id, room_id))

async def check_host_in_room(self, room_id, host):
@defer.inlineCallbacks
def check_host_in_room(self, room_id, host):
with Measure(self.clock, "check_host_in_room"):
latest_event_ids = await self.store.is_host_joined(room_id, host)
latest_event_ids = yield self.store.is_host_joined(room_id, host)
return latest_event_ids

def can_federate(self, event, auth_events):

@@ -157,13 +160,14 @@ class Auth(object):
def get_public_keys(self, invite_event):
return event_auth.get_public_keys(invite_event)

async def get_user_by_req(
@defer.inlineCallbacks
def get_user_by_req(
self,
request: Request,
allow_guest: bool = False,
rights: str = "access",
allow_expired: bool = False,
) -> synapse.types.Requester:
):
""" Get a registered user's ID.

Args:

@@ -176,7 +180,7 @@ class Auth(object):
/login will deliver access tokens regardless of expiration.

Returns:
Resolves to the requester
defer.Deferred: resolves to a `synapse.types.Requester` object
Raises:
InvalidClientCredentialsError if no user by that token exists or the token
is invalid.

@@ -190,14 +194,14 @@ class Auth(object):
access_token = self.get_access_token_from_request(request)

user_id, app_service = await self._get_appservice_user_id(request)
user_id, app_service = yield self._get_appservice_user_id(request)
if user_id:
request.authenticated_entity = user_id
opentracing.set_tag("authenticated_entity", user_id)
opentracing.set_tag("appservice_id", app_service.id)

if ip_addr and self._track_appservice_user_ips:
await self.store.insert_client_ip(
yield self.store.insert_client_ip(
user_id=user_id,
access_token=access_token,
ip=ip_addr,

@@ -207,7 +211,7 @@ class Auth(object):
return synapse.types.create_requester(user_id, app_service=app_service)

user_info = await self.get_user_by_access_token(
user_info = yield self.get_user_by_access_token(
access_token, rights, allow_expired=allow_expired
)
user = user_info["user"]

@@ -217,7 +221,7 @@ class Auth(object):
# Deny the request if the user account has expired.
if self._account_validity.enabled and not allow_expired:
user_id = user.to_string()
expiration_ts = await self.store.get_expiration_ts_for_user(user_id)
expiration_ts = yield self.store.get_expiration_ts_for_user(user_id)
if (
expiration_ts is not None
and self.clock.time_msec() >= expiration_ts

@@ -231,7 +235,7 @@ class Auth(object):
device_id = user_info.get("device_id")

if user and access_token and ip_addr:
await self.store.insert_client_ip(
yield self.store.insert_client_ip(
user_id=user.to_string(),
access_token=access_token,
ip=ip_addr,

@@ -257,7 +261,8 @@ class Auth(object):
except KeyError:
raise MissingClientTokenError()

async def _get_appservice_user_id(self, request):
@defer.inlineCallbacks
def _get_appservice_user_id(self, request):
app_service = self.store.get_app_service_by_token(
self.get_access_token_from_request(request)
)

@@ -278,13 +283,14 @@ class Auth(object):
if not app_service.is_interested_in_user(user_id):
raise AuthError(403, "Application service cannot masquerade as this user.")
if not (await self.store.get_user_by_id(user_id)):
if not (yield self.store.get_user_by_id(user_id)):
raise AuthError(403, "Application service has not registered this user")
return user_id, app_service

async def get_user_by_access_token(
@defer.inlineCallbacks
def get_user_by_access_token(
self, token: str, rights: str = "access", allow_expired: bool = False,
) -> dict:
):
""" Validate access token and get user_id from it

Args:

@@ -294,7 +300,7 @@ class Auth(object):
allow_expired: If False, raises an InvalidClientTokenError
if the token is expired
Returns:
dict that includes:
Deferred[dict]: dict that includes:
`user` (UserID)
`is_guest` (bool)
`token_id` (int|None): access token id. May be None if guest

@@ -308,7 +314,7 @@ class Auth(object):
if rights == "access":
# first look in the database
r = await self._look_up_user_by_access_token(token)
r = yield self._look_up_user_by_access_token(token)
if r:
valid_until_ms = r["valid_until_ms"]
if (

@@ -346,7 +352,7 @@ class Auth(object):
# It would of course be much easier to store guest access
# tokens in the database as well, but that would break existing
# guest tokens.
stored_user = await self.store.get_user_by_id(user_id)
stored_user = yield self.store.get_user_by_id(user_id)
if not stored_user:
raise InvalidClientTokenError("Unknown user_id %s" % user_id)
if not stored_user["is_guest"]:

@@ -476,8 +482,9 @@ class Auth(object):
now = self.hs.get_clock().time_msec()
return now < expiry

async def _look_up_user_by_access_token(self, token):
ret = await self.store.get_user_by_access_token(token)
@defer.inlineCallbacks
def _look_up_user_by_access_token(self, token):
ret = yield self.store.get_user_by_access_token(token)
if not ret:
return None

@@ -500,7 +507,7 @@ class Auth(object):
logger.warning("Unrecognised appservice access token.")
raise InvalidClientTokenError()
request.authenticated_entity = service.sender
return service
return defer.succeed(service)

async def is_server_admin(self, user: UserID) -> bool:
""" Check if the given user is a local server admin.

@@ -515,7 +522,7 @@ class Auth(object):
def compute_auth_events(
self, event, current_state_ids: StateMap[str], for_verification: bool = False,
) -> List[str]:
):
"""Given an event and current state return the list of event IDs used
to auth an event.

@@ -523,11 +530,11 @@ class Auth(object):
should be added to the event's `auth_events`.

Returns:
List of event IDs.
defer.Deferred(list[str]): List of event IDs.
"""

if event.type == EventTypes.Create:
return []
return defer.succeed([])

# Currently we ignore the `for_verification` flag even though there are
# some situations where we can drop particular auth events when adding

@@ -546,7 +553,7 @@ class Auth(object):
if auth_ev_id:
auth_ids.append(auth_ev_id)

return auth_ids
return defer.succeed(auth_ids)

async def check_can_change_room_list(self, room_id: str, user: UserID):
"""Determine whether the user is allowed to edit the room's entry in the

@@ -629,9 +636,10 @@ class Auth(object):
return query_params[0].decode("ascii")

async def check_user_in_room_or_world_readable(
@defer.inlineCallbacks
def check_user_in_room_or_world_readable(
self, room_id: str, user_id: str, allow_departed_users: bool = False
) -> Tuple[str, Optional[str]]:
):
"""Checks that the user is or was in the room or the room is world
readable. If it isn't then an exception is raised.

@@ -642,9 +650,10 @@ class Auth(object):
members but have now departed

Returns:
Resolves to the current membership of the user in the room and the
membership event ID of the user. If the user is not in the room and
never has been, then `(Membership.JOIN, None)` is returned.
Deferred[tuple[str, str|None]]: Resolves to the current membership of
the user in the room and the membership event ID of the user. If
the user is not in the room and never has been, then
`(Membership.JOIN, None)` is returned.
"""

try:

@@ -653,13 +662,15 @@ class Auth(object):
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = await self.check_user_in_room(
member_event = yield self.check_user_in_room(
room_id, user_id, allow_departed_users=allow_departed_users
)
return member_event.membership, member_event.event_id
except AuthError:
visibility = await self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
visibility = yield defer.ensureDeferred(
self.state.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
)
)
if (
visibility
@@ -15,6 +15,8 @@
|
||||
|
||||
import logging
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.constants import LimitBlockingTypes, UserTypes
|
||||
from synapse.api.errors import Codes, ResourceLimitError
|
||||
from synapse.config.server import is_threepid_reserved
|
||||
@@ -34,7 +36,8 @@ class AuthBlocking(object):
|
||||
self._limit_usage_by_mau = hs.config.limit_usage_by_mau
|
||||
self._mau_limits_reserved_threepids = hs.config.mau_limits_reserved_threepids
|
||||
|
||||
async def check_auth_blocking(self, user_id=None, threepid=None, user_type=None):
|
||||
@defer.inlineCallbacks
|
||||
def check_auth_blocking(self, user_id=None, threepid=None, user_type=None):
|
||||
"""Checks if the user should be rejected for some external reason,
|
||||
such as monthly active user limiting or global disable flag
|
||||
|
||||
@@ -57,7 +60,7 @@ class AuthBlocking(object):
|
||||
if user_id is not None:
|
||||
if user_id == self._server_notices_mxid:
|
||||
return
|
||||
if await self.store.is_support_user(user_id):
|
||||
if (yield self.store.is_support_user(user_id)):
|
||||
return
|
||||
|
||||
if self._hs_disabled:
|
||||
@@ -73,11 +76,11 @@ class AuthBlocking(object):
|
||||
|
||||
# If the user is already part of the MAU cohort or a trial user
|
||||
if user_id:
|
||||
timestamp = await self.store.user_last_seen_monthly_active(user_id)
|
||||
timestamp = yield self.store.user_last_seen_monthly_active(user_id)
|
||||
if timestamp:
|
||||
return
|
||||
|
||||
is_trial = await self.store.is_trial_user(user_id)
|
||||
is_trial = yield self.store.is_trial_user(user_id)
|
||||
if is_trial:
|
||||
return
|
||||
elif threepid:
|
||||
@@ -90,7 +93,7 @@ class AuthBlocking(object):
|
||||
# allow registration. Support users are excluded from MAU checks.
|
||||
return
|
||||
# Else if there is no room in the MAU bucket, bail
|
||||
current_mau = await self.store.get_monthly_active_count()
|
||||
current_mau = yield self.store.get_monthly_active_count()
|
||||
if current_mau >= self._max_mau_value:
|
||||
raise ResourceLimitError(
|
||||
403,
|
||||
|
||||
@@ -238,16 +238,14 @@ class InteractiveAuthIncompleteError(Exception):
(This indicates we should return a 401 with 'result' as the body)

Attributes:
session_id: The ID of the ongoing interactive auth session.
result: the server response to the request, which should be
passed back to the client
"""

def __init__(self, session_id: str, result: "JsonDict"):
def __init__(self, result: "JsonDict"):
super(InteractiveAuthIncompleteError, self).__init__(
"Interactive auth not yet complete"
)
self.session_id = session_id
self.result = result

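As the docstring above notes, this exception signals that the servlet should reply with a 401 whose body is `result`. A minimal, self-contained sketch of that pattern follows; `ui_auth_check` and `handle_request` are hypothetical names, not Synapse's actual REST layer, and the flow dict is made up:

# Illustrative sketch only: translate InteractiveAuthIncompleteError into a
# 401 response whose body is the partially-completed UI-auth result.


class InteractiveAuthIncompleteError(Exception):
    """Two-argument form, matching the newer side of the diff."""

    def __init__(self, session_id: str, result: dict):
        super().__init__("Interactive auth not yet complete")
        self.session_id = session_id
        self.result = result


def ui_auth_check(body: dict):
    # Pretend the client has not yet completed any login flow.
    raise InteractiveAuthIncompleteError("sess1", {"flows": [["m.login.password"]]})


def handle_request(body: dict):
    try:
        ui_auth_check(body)
    except InteractiveAuthIncompleteError as e:
        # 401 with 'result' as the body so the client can continue the
        # same interactive-auth session.
        return 401, e.result
    return 200, {"completed": True}


print(handle_request({}))  # -> (401, {'flows': [['m.login.password']]})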
@@ -21,6 +21,8 @@ import jsonschema
|
||||
from canonicaljson import json
|
||||
from jsonschema import FormatChecker
|
||||
|
||||
from twisted.internet import defer
|
||||
|
||||
from synapse.api.constants import EventContentFields
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.storage.presence import UserPresenceState
|
||||
@@ -135,8 +137,9 @@ class Filtering(object):
|
||||
super(Filtering, self).__init__()
|
||||
self.store = hs.get_datastore()
|
||||
|
||||
async def get_user_filter(self, user_localpart, filter_id):
|
||||
result = await self.store.get_user_filter(user_localpart, filter_id)
|
||||
@defer.inlineCallbacks
|
||||
def get_user_filter(self, user_localpart, filter_id):
|
||||
result = yield self.store.get_user_filter(user_localpart, filter_id)
|
||||
return FilterCollection(result)
|
||||
|
||||
def add_user_filter(self, user_localpart, user_filter):
|
||||
|
||||
@@ -17,7 +17,6 @@ from collections import OrderedDict
from typing import Any, Optional, Tuple

from synapse.api.errors import LimitExceededError
from synapse.types import Requester
from synapse.util import Clock

@@ -44,42 +43,6 @@ class Ratelimiter(object):
# * The rate_hz of this particular entry. This can vary per request
self.actions = OrderedDict() # type: OrderedDict[Any, Tuple[float, int, float]]

def can_requester_do_action(
self,
requester: Requester,
rate_hz: Optional[float] = None,
burst_count: Optional[int] = None,
update: bool = True,
_time_now_s: Optional[int] = None,
) -> Tuple[bool, float]:
"""Can the requester perform the action?

Args:
requester: The requester to key off when rate limiting. The user property
will be used.
rate_hz: The long term number of actions that can be performed in a second.
Overrides the value set during instantiation if set.
burst_count: How many actions that can be performed before being limited.
Overrides the value set during instantiation if set.
update: Whether to count this check as performing the action
_time_now_s: The current time. Optional, defaults to the current time according
to self.clock. Only used by tests.

Returns:
A tuple containing:
* A bool indicating if they can perform the action now
* The reactor timestamp for when the action can be performed next.
-1 if rate_hz is less than or equal to zero
"""
# Disable rate limiting of users belonging to any AS that is configured
# not to be rate limited in its registration file (rate_limited: true|false).
if requester.app_service and not requester.app_service.is_rate_limited():
return True, -1.0

return self.can_do_action(
requester.user.to_string(), rate_hz, burst_count, update, _time_now_s
)

def can_do_action(
self,
key: Any,

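The removed wrapper keys the generic bucket off `requester.user` and skips limiting for exempt application services; its docstring spells out the `rate_hz`/`burst_count` semantics. Below is a minimal stand-alone sketch of that leaky-bucket behaviour, written from the docstring alone rather than taken from Synapse's `Ratelimiter`:

# Self-contained leaky-bucket illustration: rate_hz is the long-term rate,
# burst_count is how many actions may happen before limiting kicks in.
import time
from typing import Tuple


class TinyRatelimiter:
    def __init__(self, rate_hz: float, burst_count: int):
        self.rate_hz = rate_hz
        self.burst_count = burst_count
        # key -> (time of last check, number of actions still "in the bucket")
        self.actions = {}

    def can_do_action(self, key, _now: float = None) -> Tuple[bool, float]:
        now = time.time() if _now is None else _now
        last_time, count = self.actions.get(key, (now, 0.0))

        # Leak actions out of the bucket at rate_hz since the last check.
        count = max(0.0, count - (now - last_time) * self.rate_hz)

        if count + 1 <= self.burst_count:
            allowed, count = True, count + 1
        else:
            allowed = False

        self.actions[key] = (now, count)
        # Earliest time at which the next action would be allowed.
        time_allowed = now + max(0.0, (count + 1 - self.burst_count) / self.rate_hz)
        return allowed, time_allowed


limiter = TinyRatelimiter(rate_hz=0.5, burst_count=3)
for _ in range(5):
    # First three checks pass, the rest are limited until time 102.0.
    print(limiter.can_do_action("@user:example.com", _now=100.0))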
@@ -12,6 +12,7 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
import gc
|
||||
import logging
|
||||
import os
|
||||
@@ -21,6 +22,7 @@ import sys
|
||||
import traceback
|
||||
from typing import Iterable
|
||||
|
||||
from daemonize import Daemonize
|
||||
from typing_extensions import NoReturn
|
||||
|
||||
from twisted.internet import defer, error, reactor
|
||||
@@ -32,7 +34,6 @@ from synapse.config.server import ListenerConfig
|
||||
from synapse.crypto import context_factory
|
||||
from synapse.logging.context import PreserveLoggingContext
|
||||
from synapse.util.async_helpers import Linearizer
|
||||
from synapse.util.daemonize import daemonize_process
|
||||
from synapse.util.rlimit import change_resource_limit
|
||||
from synapse.util.versionstring import get_version_string
|
||||
|
||||
@@ -128,8 +129,17 @@ def start_reactor(
|
||||
if print_pidfile:
|
||||
print(pid_file)
|
||||
|
||||
daemonize_process(pid_file, logger)
|
||||
run()
|
||||
daemon = Daemonize(
|
||||
app=appname,
|
||||
pid=pid_file,
|
||||
action=run,
|
||||
auto_close_fds=False,
|
||||
verbose=True,
|
||||
logger=logger,
|
||||
)
|
||||
daemon.start()
|
||||
else:
|
||||
run()
|
||||
|
||||
|
||||
def quit_with_error(error_string: str) -> NoReturn:
|
||||
@@ -268,7 +278,7 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
|
||||
|
||||
# It is now safe to start your Synapse.
|
||||
hs.start_listening(listeners)
|
||||
hs.get_datastore().db_pool.start_profiling()
|
||||
hs.get_datastore().db.start_profiling()
|
||||
hs.get_pusherpool().start()
|
||||
|
||||
setup_sentry(hs)
|
||||
|
||||
@@ -123,18 +123,17 @@ from synapse.rest.client.v2_alpha.account_data import (
|
||||
from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet
|
||||
from synapse.rest.client.v2_alpha.register import RegisterRestServlet
|
||||
from synapse.rest.client.versions import VersionsRestServlet
|
||||
from synapse.rest.health import HealthResource
|
||||
from synapse.rest.key.v2 import KeyApiV2Resource
|
||||
from synapse.server import HomeServer, cache_in_self
|
||||
from synapse.storage.databases.main.censor_events import CensorEventsStore
|
||||
from synapse.storage.databases.main.media_repository import MediaRepositoryStore
|
||||
from synapse.storage.databases.main.monthly_active_users import (
|
||||
from synapse.server import HomeServer
|
||||
from synapse.storage.data_stores.main.censor_events import CensorEventsStore
|
||||
from synapse.storage.data_stores.main.media_repository import MediaRepositoryStore
|
||||
from synapse.storage.data_stores.main.monthly_active_users import (
|
||||
MonthlyActiveUsersWorkerStore,
|
||||
)
|
||||
from synapse.storage.databases.main.presence import UserPresenceState
|
||||
from synapse.storage.databases.main.search import SearchWorkerStore
|
||||
from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
|
||||
from synapse.storage.databases.main.user_directory import UserDirectoryStore
|
||||
from synapse.storage.data_stores.main.presence import UserPresenceState
|
||||
from synapse.storage.data_stores.main.search import SearchWorkerStore
|
||||
from synapse.storage.data_stores.main.ui_auth import UIAuthWorkerStore
|
||||
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
|
||||
from synapse.types import ReadReceipt
|
||||
from synapse.util.async_helpers import Linearizer
|
||||
from synapse.util.httpresourcetree import create_resource_tree
|
||||
@@ -494,10 +493,7 @@ class GenericWorkerServer(HomeServer):
|
||||
site_tag = listener_config.http_options.tag
|
||||
if site_tag is None:
|
||||
site_tag = port
|
||||
|
||||
# We always include a health resource.
|
||||
resources = {"/health": HealthResource()}
|
||||
|
||||
resources = {}
|
||||
for res in listener_config.http_options.resources:
|
||||
for name in res.names:
|
||||
if name == "metrics":
|
||||
@@ -635,12 +631,10 @@ class GenericWorkerServer(HomeServer):
|
||||
async def remove_pusher(self, app_id, push_key, user_id):
|
||||
self.get_tcp_replication().send_remove_pusher(app_id, push_key, user_id)
|
||||
|
||||
@cache_in_self
|
||||
def get_replication_data_handler(self):
|
||||
def build_replication_data_handler(self):
|
||||
return GenericWorkerReplicationHandler(self)
|
||||
|
||||
@cache_in_self
|
||||
def get_presence_handler(self):
|
||||
def build_presence_handler(self):
|
||||
return GenericWorkerPresence(self)
|
||||
|
||||
|
||||
|
||||
@@ -68,7 +68,6 @@ from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
|
||||
from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
|
||||
from synapse.rest import ClientRestResource
|
||||
from synapse.rest.admin import AdminRestResource
|
||||
from synapse.rest.health import HealthResource
|
||||
from synapse.rest.key.v2 import KeyApiV2Resource
|
||||
from synapse.rest.well_known import WellKnownResource
|
||||
from synapse.server import HomeServer
|
||||
@@ -99,9 +98,7 @@ class SynapseHomeServer(HomeServer):
|
||||
if site_tag is None:
|
||||
site_tag = port
|
||||
|
||||
# We always include a health resource.
|
||||
resources = {"/health": HealthResource()}
|
||||
|
||||
resources = {}
|
||||
for res in listener_config.http_options.resources:
|
||||
for name in res.names:
|
||||
if name == "openid" and "federation" in res.names:
|
||||
@@ -444,7 +441,7 @@ def setup(config_options):
|
||||
|
||||
_base.start(hs, config.listeners)
|
||||
|
||||
hs.get_datastore().db_pool.updates.start_doing_background_updates()
|
||||
hs.get_datastore().db.updates.start_doing_background_updates()
|
||||
except Exception:
|
||||
# Print the exception and bail out.
|
||||
print("Error during startup:", file=sys.stderr)
|
||||
@@ -554,8 +551,8 @@ async def phone_stats_home(hs, stats, stats_process=_stats_process):
|
||||
#
|
||||
|
||||
# This only reports info about the *main* database.
|
||||
stats["database_engine"] = hs.get_datastore().db_pool.engine.module.__name__
|
||||
stats["database_server_version"] = hs.get_datastore().db_pool.engine.server_version
|
||||
stats["database_engine"] = hs.get_datastore().db.engine.module.__name__
|
||||
stats["database_server_version"] = hs.get_datastore().db.engine.server_version
|
||||
|
||||
logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
|
||||
try:
|
||||
|
||||
@@ -175,7 +175,7 @@ class ApplicationServiceApi(SimpleHttpClient):
|
||||
urllib.parse.quote(protocol),
|
||||
)
|
||||
try:
|
||||
info = yield defer.ensureDeferred(self.get_json(uri, {}))
|
||||
info = yield self.get_json(uri, {})
|
||||
|
||||
if not _is_valid_3pe_metadata(info):
|
||||
logger.warning(
|
||||
|
||||
@@ -1,49 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Any, List

import jsonschema

from synapse.config._base import ConfigError
from synapse.types import JsonDict


def validate_config(json_schema: JsonDict, config: Any, config_path: List[str]) -> None:
"""Validates a config setting against a JsonSchema definition

This can be used to validate a section of the config file against a schema
definition. If the validation fails, a ConfigError is raised with a textual
description of the problem.

Args:
json_schema: the schema to validate against
config: the configuration value to be validated
config_path: the path within the config file. This will be used as a basis
for the error message.
"""
try:
jsonschema.validate(config, json_schema)
except jsonschema.ValidationError as e:
# copy `config_path` before modifying it.
path = list(config_path)
for p in list(e.path):
if isinstance(p, int):
path.append("<item %i>" % p)
else:
path.append(str(p))

raise ConfigError(
"Unable to parse configuration: %s at %s" % (e.message, ".".join(path))
)
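The deleted helper above validates a config section with `jsonschema` and reports the failing path in dotted form. A small, self-contained sketch of the same idea follows, assuming only the `jsonschema` package; `ValueError` stands in for `ConfigError` so it runs on its own, and the sample schema mirrors the SAML attribute-requirements shape used elsewhere in this diff:

# Validate a config section against a JSON Schema and report the failing
# path as "section.key.<item N>".
import jsonschema

ATTRIBUTE_REQUIREMENTS_SCHEMA = {
    "type": "array",
    "items": {
        "type": "object",
        "properties": {"attribute": {"type": "string"}, "value": {"type": "string"}},
        "required": ["attribute", "value"],
    },
}


def validate_section(schema, config, config_path):
    try:
        jsonschema.validate(config, schema)
    except jsonschema.ValidationError as e:
        path = list(config_path)
        for p in e.path:
            path.append("<item %i>" % p if isinstance(p, int) else str(p))
        raise ValueError(
            "Unable to parse configuration: %s at %s" % (e.message, ".".join(path))
        )


try:
    # A 'value' of the wrong type triggers the ConfigError-style message.
    validate_section(
        ATTRIBUTE_REQUIREMENTS_SCHEMA,
        [{"attribute": "userGroup", "value": 42}],
        ["saml2_config", "attribute_requirements"],
    )
except ValueError as e:
    print(e)  # ... at saml2_config.attribute_requirements.<item 0>.value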
@@ -100,10 +100,7 @@ class DatabaseConnectionConfig:
|
||||
|
||||
self.name = name
|
||||
self.config = db_config
|
||||
|
||||
# The `data_stores` config is actually talking about `databases` (we
|
||||
# changed the name).
|
||||
self.databases = data_stores
|
||||
self.data_stores = data_stores
|
||||
|
||||
|
||||
class DatabaseConfig(Config):
|
||||
|
||||
@@ -55,33 +55,24 @@ formatters:
|
||||
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - \
|
||||
%(request)s - %(message)s'
|
||||
|
||||
filters:
|
||||
context:
|
||||
(): synapse.logging.context.LoggingContextFilter
|
||||
request: ""
|
||||
|
||||
handlers:
|
||||
file:
|
||||
class: logging.handlers.TimedRotatingFileHandler
|
||||
class: logging.handlers.RotatingFileHandler
|
||||
formatter: precise
|
||||
filename: ${log_file}
|
||||
when: midnight
|
||||
backupCount: 3 # Does not include the current log file.
|
||||
maxBytes: 104857600
|
||||
backupCount: 10
|
||||
filters: [context]
|
||||
encoding: utf8
|
||||
|
||||
# Default to buffering writes to log file for efficiency. This means that
# will be a delay for INFO/DEBUG logs to get written, but WARNING/ERROR
# logs will still be flushed immediately.
buffer:
class: logging.handlers.MemoryHandler
target: file
# The capacity is the number of log lines that are buffered before
# being written to disk. Increasing this will lead to better
# performance, at the expensive of it taking longer for log lines to
# be written to disk.
capacity: 10
flushLevel: 30 # Flush for WARNING logs as well

# A handler that writes logs to stderr. Unused by default, but can be used
# instead of "buffer" and "file" in the logger handlers.
console:
class: logging.StreamHandler
formatter: precise
filters: [context]

loggers:
synapse.storage.SQL:

@@ -89,24 +80,9 @@ loggers:
# information such as access tokens.
level: INFO

twisted:
# We send the twisted logging directly to the file handler,
# to work around https://github.com/matrix-org/synapse/issues/3471
# when using "buffer" logger. Use "console" to log to stderr instead.
handlers: [file]
propagate: false

root:
level: INFO

# Write logs to the `buffer` handler, which will buffer them together in memory,
# then write them to a file.
#
# Replace "buffer" with "console" to log to stderr instead. (Note that you'll
# also need to update the configuation for the `twisted` logger above, in
# this case.)
#
handlers: [buffer]
handlers: [file, console]

disable_existing_loggers: false
"""
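The `buffer` handler above is a stdlib `MemoryHandler` placed in front of the file handler: records are held in memory until either the capacity is reached or a WARNING-or-higher record arrives. A tiny plain-Python equivalent, shown only to illustrate what the YAML configures (stderr stands in for the rotating log file):

# Buffered logging with flush-on-warning, mirroring the YAML sample above.
import logging
import logging.handlers
import sys

target = logging.StreamHandler(sys.stderr)
target.setFormatter(
    logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s")
)

buffer = logging.handlers.MemoryHandler(
    capacity=10,                  # hold up to 10 records in memory...
    flushLevel=logging.WARNING,   # ...but flush immediately on WARNING or above
    target=target,
)

root = logging.getLogger()
root.setLevel(logging.INFO)
root.addHandler(buffer)

logging.info("buffered until capacity is reached or a warning arrives")
logging.warning("this flushes the buffer straight away")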
@@ -192,26 +168,11 @@ def _setup_stdlib_logging(config, log_config, logBeginner: LogBeginner):
|
||||
|
||||
handler = logging.StreamHandler()
|
||||
handler.setFormatter(formatter)
|
||||
handler.addFilter(LoggingContextFilter(request=""))
|
||||
logger.addHandler(handler)
|
||||
else:
|
||||
logging.config.dictConfig(log_config)
|
||||
|
||||
# We add a log record factory that runs all messages through the
|
||||
# LoggingContextFilter so that we get the context *at the time we log*
|
||||
# rather than when we write to a handler. This can be done in config using
|
||||
# filter options, but care must when using e.g. MemoryHandler to buffer
|
||||
# writes.
|
||||
|
||||
log_filter = LoggingContextFilter(request="")
|
||||
old_factory = logging.getLogRecordFactory()
|
||||
|
||||
def factory(*args, **kwargs):
|
||||
record = old_factory(*args, **kwargs)
|
||||
log_filter.filter(record)
|
||||
return record
|
||||
|
||||
logging.setLogRecordFactory(factory)
|
||||
|
||||
# Route Twisted's native logging through to the standard library logging
|
||||
# system.
|
||||
observer = STDLibLogObserver()
|
||||
|
||||
@@ -15,9 +15,7 @@
|
||||
# limitations under the License.
|
||||
|
||||
import logging
|
||||
from typing import Any, List
|
||||
|
||||
import attr
|
||||
import jinja2
|
||||
import pkg_resources
|
||||
|
||||
@@ -25,7 +23,6 @@ from synapse.python_dependencies import DependencyException, check_requirements
|
||||
from synapse.util.module_loader import load_module, load_python_module
|
||||
|
||||
from ._base import Config, ConfigError
|
||||
from ._util import validate_config
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
@@ -83,11 +80,6 @@ class SAML2Config(Config):
|
||||
|
||||
self.saml2_enabled = True
|
||||
|
||||
attribute_requirements = saml2_config.get("attribute_requirements") or []
|
||||
self.attribute_requirements = _parse_attribute_requirements_def(
|
||||
attribute_requirements
|
||||
)
|
||||
|
||||
self.saml2_grandfathered_mxid_source_attribute = saml2_config.get(
|
||||
"grandfathered_mxid_source_attribute", "uid"
|
||||
)
|
||||
@@ -349,17 +341,6 @@ class SAML2Config(Config):
|
||||
#
|
||||
#grandfathered_mxid_source_attribute: upn
|
||||
|
||||
# It is possible to configure Synapse to only allow logins if SAML attributes
|
||||
# match particular values. The requirements can be listed under
|
||||
# `attribute_requirements` as shown below. All of the listed attributes must
|
||||
# match for the login to be permitted.
|
||||
#
|
||||
#attribute_requirements:
|
||||
# - attribute: userGroup
|
||||
# value: "staff"
|
||||
# - attribute: department
|
||||
# value: "sales"
|
||||
|
||||
# Directory in which Synapse will try to find the template files below.
|
||||
# If not set, default templates from within the Synapse package will be used.
|
||||
#
|
||||
@@ -387,34 +368,3 @@ class SAML2Config(Config):
|
||||
""" % {
|
||||
"config_dir_path": config_dir_path
|
||||
}
|
||||
|
||||
|
||||
@attr.s(frozen=True)
|
||||
class SamlAttributeRequirement:
|
||||
"""Object describing a single requirement for SAML attributes."""
|
||||
|
||||
attribute = attr.ib(type=str)
|
||||
value = attr.ib(type=str)
|
||||
|
||||
JSON_SCHEMA = {
|
||||
"type": "object",
|
||||
"properties": {"attribute": {"type": "string"}, "value": {"type": "string"}},
|
||||
"required": ["attribute", "value"],
|
||||
}
|
||||
|
||||
|
||||
ATTRIBUTE_REQUIREMENTS_SCHEMA = {
|
||||
"type": "array",
|
||||
"items": SamlAttributeRequirement.JSON_SCHEMA,
|
||||
}
|
||||
|
||||
|
||||
def _parse_attribute_requirements_def(
|
||||
attribute_requirements: Any,
|
||||
) -> List[SamlAttributeRequirement]:
|
||||
validate_config(
|
||||
ATTRIBUTE_REQUIREMENTS_SCHEMA,
|
||||
attribute_requirements,
|
||||
config_path=["saml2_config", "attribute_requirements"],
|
||||
)
|
||||
return [SamlAttributeRequirement(**x) for x in attribute_requirements]
|
||||
|
||||
@@ -530,21 +530,6 @@ class ServerConfig(Config):
|
||||
"request_token_inhibit_3pid_errors", False,
|
||||
)
|
||||
|
||||
# List of users trialing the new experimental default push rules. This setting is
|
||||
# not included in the sample configuration file on purpose as it's a temporary
|
||||
# hack, so that some users can trial the new defaults without impacting every
|
||||
# user on the homeserver.
|
||||
users_new_default_push_rules = (
|
||||
config.get("users_new_default_push_rules") or []
|
||||
) # type: list
|
||||
if not isinstance(users_new_default_push_rules, list):
|
||||
raise ConfigError("'users_new_default_push_rules' must be a list")
|
||||
|
||||
# Turn the list into a set to improve lookup speed.
|
||||
self.users_new_default_push_rules = set(
|
||||
users_new_default_push_rules
|
||||
) # type: set
|
||||
|
||||
def has_tls_listener(self) -> bool:
|
||||
return any(listener.tls for listener in self.listeners)
|
||||
|
||||
|
||||
@@ -48,14 +48,6 @@ class ServerContextFactory(ContextFactory):
|
||||
connections."""
|
||||
|
||||
def __init__(self, config):
|
||||
# TODO: once pyOpenSSL exposes TLS_METHOD and SSL_CTX_set_min_proto_version,
|
||||
# switch to those (see https://github.com/pyca/cryptography/issues/5379).
|
||||
#
|
||||
# note that, despite the confusing name, SSLv23_METHOD does *not* enforce SSLv2
|
||||
# or v3, but is a synonym for TLS_METHOD, which allows the client and server
|
||||
# to negotiate an appropriate version of TLS constrained by the version options
|
||||
# set with context.set_options.
|
||||
#
|
||||
self._context = SSL.Context(SSL.SSLv23_METHOD)
|
||||
self.configure_context(self._context, config)
|
||||
|
||||
|
||||
@@ -17,7 +17,6 @@ from typing import Optional
|
||||
import attr
|
||||
from nacl.signing import SigningKey
|
||||
|
||||
from synapse.api.auth import Auth
|
||||
from synapse.api.constants import MAX_DEPTH
|
||||
from synapse.api.errors import UnsupportedRoomVersionError
|
||||
from synapse.api.room_versions import (
|
||||
@@ -28,8 +27,6 @@ from synapse.api.room_versions import (
|
||||
)
|
||||
from synapse.crypto.event_signing import add_hashes_and_signatures
|
||||
from synapse.events import EventBase, _EventInternalMetadata, make_event_from_dict
|
||||
from synapse.state import StateHandler
|
||||
from synapse.storage.databases.main import DataStore
|
||||
from synapse.types import EventID, JsonDict
|
||||
from synapse.util import Clock
|
||||
from synapse.util.stringutils import random_string
|
||||
@@ -45,46 +42,45 @@ class EventBuilder(object):
|
||||
|
||||
Attributes:
|
||||
room_version: Version of the target room
|
||||
room_id
|
||||
type
|
||||
sender
|
||||
content
|
||||
unsigned
|
||||
internal_metadata
|
||||
room_id (str)
|
||||
type (str)
|
||||
sender (str)
|
||||
content (dict)
|
||||
unsigned (dict)
|
||||
internal_metadata (_EventInternalMetadata)
|
||||
|
||||
_state
|
||||
_auth
|
||||
_store
|
||||
_clock
|
||||
_hostname: The hostname of the server creating the event
|
||||
_state (StateHandler)
|
||||
_auth (synapse.api.Auth)
|
||||
_store (DataStore)
|
||||
_clock (Clock)
|
||||
_hostname (str): The hostname of the server creating the event
|
||||
_signing_key: The signing key to use to sign the event as the server
|
||||
"""
|
||||
|
||||
_state = attr.ib(type=StateHandler)
|
||||
_auth = attr.ib(type=Auth)
|
||||
_store = attr.ib(type=DataStore)
|
||||
_clock = attr.ib(type=Clock)
|
||||
_hostname = attr.ib(type=str)
|
||||
_signing_key = attr.ib(type=SigningKey)
|
||||
_state = attr.ib()
|
||||
_auth = attr.ib()
|
||||
_store = attr.ib()
|
||||
_clock = attr.ib()
|
||||
_hostname = attr.ib()
|
||||
_signing_key = attr.ib()
|
||||
|
||||
room_version = attr.ib(type=RoomVersion)
|
||||
|
||||
room_id = attr.ib(type=str)
|
||||
type = attr.ib(type=str)
|
||||
sender = attr.ib(type=str)
|
||||
room_id = attr.ib()
|
||||
type = attr.ib()
|
||||
sender = attr.ib()
|
||||
|
||||
content = attr.ib(default=attr.Factory(dict), type=JsonDict)
|
||||
unsigned = attr.ib(default=attr.Factory(dict), type=JsonDict)
|
||||
content = attr.ib(default=attr.Factory(dict))
|
||||
unsigned = attr.ib(default=attr.Factory(dict))
|
||||
|
||||
# These only exist on a subset of events, so they raise AttributeError if
|
||||
# someone tries to get them when they don't exist.
|
||||
_state_key = attr.ib(default=None, type=Optional[str])
|
||||
_redacts = attr.ib(default=None, type=Optional[str])
|
||||
_origin_server_ts = attr.ib(default=None, type=Optional[int])
|
||||
_state_key = attr.ib(default=None)
|
||||
_redacts = attr.ib(default=None)
|
||||
_origin_server_ts = attr.ib(default=None)
|
||||
|
||||
internal_metadata = attr.ib(
|
||||
default=attr.Factory(lambda: _EventInternalMetadata({})),
|
||||
type=_EventInternalMetadata,
|
||||
default=attr.Factory(lambda: _EventInternalMetadata({}))
|
||||
)
|
||||
|
||||
@property
|
||||
@@ -110,7 +106,7 @@ class EventBuilder(object):
|
||||
state_ids = await self._state.get_current_state_ids(
|
||||
self.room_id, prev_event_ids
|
||||
)
|
||||
auth_ids = self._auth.compute_auth_events(self, state_ids)
|
||||
auth_ids = await self._auth.compute_auth_events(self, state_ids)
|
||||
|
||||
format_version = self.room_version.event_format
|
||||
if format_version == EventFormatVersions.V1:
|
||||
|
||||
@@ -23,7 +23,7 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
|
||||
from synapse.types import StateMap
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.storage.databases.main import DataStore
|
||||
from synapse.storage.data_stores.main import DataStore
|
||||
|
||||
|
||||
@attr.s(slots=True)
|
||||
|
||||
@@ -13,7 +13,7 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, List, Tuple
|
||||
from typing import TYPE_CHECKING, List
|
||||
|
||||
from canonicaljson import json
|
||||
|
||||
@@ -54,10 +54,7 @@ class TransactionManager(object):
|
||||
|
||||
@measure_func("_send_new_transaction")
|
||||
async def send_new_transaction(
|
||||
self,
|
||||
destination: str,
|
||||
pending_pdus: List[Tuple[EventBase, int]],
|
||||
pending_edus: List[Edu],
|
||||
self, destination: str, pending_pdus: List[EventBase], pending_edus: List[Edu]
|
||||
):
|
||||
|
||||
# Make a transaction-sending opentracing span. This span follows on from
|
||||
|
||||
@@ -101,7 +101,7 @@ class ApplicationServicesHandler(object):
|
||||
|
||||
async def start_scheduler():
|
||||
try:
|
||||
return await self.scheduler.start()
|
||||
return self.scheduler.start()
|
||||
except Exception:
|
||||
logger.error("Application Services Failure")
|
||||
|
||||
|
||||
@@ -162,7 +162,7 @@ class AuthHandler(BaseHandler):
|
||||
request_body: Dict[str, Any],
|
||||
clientip: str,
|
||||
description: str,
|
||||
) -> Tuple[dict, str]:
|
||||
) -> dict:
|
||||
"""
|
||||
Checks that the user is who they claim to be, via a UI auth.
|
||||
|
||||
@@ -183,14 +183,9 @@ class AuthHandler(BaseHandler):
|
||||
describes the operation happening on their account.
|
||||
|
||||
Returns:
|
||||
A tuple of (params, session_id).
|
||||
|
||||
'params' contains the parameters for this request (which may
|
||||
The parameters for this request (which may
|
||||
have been given only in a previous call).
|
||||
|
||||
'session_id' is the ID of this session, either passed in by the
|
||||
client or assigned by this call
|
||||
|
||||
Raises:
|
||||
InteractiveAuthIncompleteError if the client has not yet completed
|
||||
any of the permitted login flows
|
||||
@@ -212,7 +207,7 @@ class AuthHandler(BaseHandler):
|
||||
flows = [[login_type] for login_type in self._supported_ui_auth_types]
|
||||
|
||||
try:
|
||||
result, params, session_id = await self.check_ui_auth(
|
||||
result, params, _ = await self.check_auth(
|
||||
flows, request, request_body, clientip, description
|
||||
)
|
||||
except LoginError:
|
||||
@@ -235,7 +230,7 @@ class AuthHandler(BaseHandler):
|
||||
if user_id != requester.user.to_string():
|
||||
raise AuthError(403, "Invalid auth")
|
||||
|
||||
return params, session_id
|
||||
return params
|
||||
|
||||
def get_enabled_auth_types(self):
|
||||
"""Return the enabled user-interactive authentication types
|
||||
@@ -245,7 +240,7 @@ class AuthHandler(BaseHandler):
|
||||
"""
|
||||
return self.checkers.keys()
|
||||
|
||||
async def check_ui_auth(
|
||||
async def check_auth(
|
||||
self,
|
||||
flows: List[List[str]],
|
||||
request: SynapseRequest,
|
||||
@@ -368,7 +363,7 @@ class AuthHandler(BaseHandler):
|
||||
|
||||
if not authdict:
|
||||
raise InteractiveAuthIncompleteError(
|
||||
session.session_id, self._auth_dict_for_flows(flows, session.session_id)
|
||||
self._auth_dict_for_flows(flows, session.session_id)
|
||||
)
|
||||
|
||||
# check auth type currently being presented
|
||||
@@ -415,7 +410,7 @@ class AuthHandler(BaseHandler):
|
||||
ret = self._auth_dict_for_flows(flows, session.session_id)
|
||||
ret["completed"] = list(creds)
|
||||
ret.update(errordict)
|
||||
raise InteractiveAuthIncompleteError(session.session_id, ret)
|
||||
raise InteractiveAuthIncompleteError(ret)
|
||||
|
||||
async def add_oob_auth(
|
||||
self, stagetype: str, authdict: Dict[str, Any], clientip: str
|
||||
|
||||
@@ -57,10 +57,13 @@ class EventStreamHandler(BaseHandler):
|
||||
timeout=0,
|
||||
as_client_event=True,
|
||||
affect_presence=True,
|
||||
only_keys=None,
|
||||
room_id=None,
|
||||
is_guest=False,
|
||||
):
|
||||
"""Fetches the events stream for a given user.
|
||||
|
||||
If `only_keys` is not None, events from keys will be sent down.
|
||||
"""
|
||||
|
||||
if room_id:
|
||||
@@ -90,6 +93,7 @@ class EventStreamHandler(BaseHandler):
|
||||
auth_user,
|
||||
pagin_config,
|
||||
timeout,
|
||||
only_keys=only_keys,
|
||||
is_guest=is_guest,
|
||||
explicit_room_id=room_id,
|
||||
)
|
||||
|
||||
@@ -71,7 +71,7 @@ from synapse.replication.http.federation import (
|
||||
)
|
||||
from synapse.replication.http.membership import ReplicationUserJoinedLeftRoomRestServlet
|
||||
from synapse.state import StateResolutionStore, resolve_events_with_store
|
||||
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
|
||||
from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour
|
||||
from synapse.types import JsonDict, StateMap, UserID, get_domain_from_id
|
||||
from synapse.util.async_helpers import Linearizer, concurrently_execute
|
||||
from synapse.util.distributor import user_joined_room
|
||||
@@ -2064,7 +2064,7 @@ class FederationHandler(BaseHandler):
|
||||
|
||||
if not auth_events:
|
||||
prev_state_ids = await context.get_prev_state_ids()
|
||||
auth_events_ids = self.auth.compute_auth_events(
|
||||
auth_events_ids = await self.auth.compute_auth_events(
|
||||
event, prev_state_ids, for_verification=True
|
||||
)
|
||||
auth_events_x = await self.store.get_events(auth_events_ids)
|
||||
@@ -2242,8 +2242,6 @@ class FederationHandler(BaseHandler):
|
||||
at the event's position in the DAG, though occasionally (eg if the
|
||||
event is an outlier), may be the auth events claimed by the remote
|
||||
server.
|
||||
|
||||
Also NB that this function adds entries to it.
|
||||
Returns:
|
||||
updated context object
|
||||
"""
|
||||
@@ -2251,7 +2249,7 @@ class FederationHandler(BaseHandler):
|
||||
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
|
||||
|
||||
try:
|
||||
context = await self._update_auth_events_and_context_for_auth(
|
||||
context = await self._fetch_missing_auth_events(
|
||||
origin, event, context, auth_events
|
||||
)
|
||||
except Exception:
|
||||
@@ -2273,7 +2271,7 @@ class FederationHandler(BaseHandler):
|
||||
|
||||
return context
|
||||
|
||||
async def _update_auth_events_and_context_for_auth(
|
||||
async def _fetch_missing_auth_events(
|
||||
self,
|
||||
origin: str,
|
||||
event: EventBase,
|
||||
@@ -2282,14 +2280,8 @@ class FederationHandler(BaseHandler):
|
||||
) -> EventContext:
|
||||
"""Helper for do_auth. See there for docs.
|
||||
|
||||
Checks whether a given event has the expected auth events. If it
|
||||
doesn't then we talk to the remote server to compare state to see if
|
||||
we can come to a consensus (e.g. if one server missed some valid
|
||||
state).
|
||||
|
||||
This attempts to resolve any potential divergence of state between
|
||||
servers, but is not essential and so failures should not block further
|
||||
processing of the event.
|
||||
Checks and fetches if there are any auth events that we don't have,
|
||||
and if so fetch them.
|
||||
|
||||
Args:
|
||||
origin:
|
||||
@@ -2304,8 +2296,6 @@ class FederationHandler(BaseHandler):
|
||||
event is an outlier), may be the auth events claimed by the remote
|
||||
server.
|
||||
|
||||
Also NB that this function adds entries to it.
|
||||
|
||||
Returns:
|
||||
updated context
|
||||
"""
|
||||
@@ -2363,141 +2353,14 @@ class FederationHandler(BaseHandler):
|
||||
"do_auth %s missing_auth: %s", event.event_id, e.event_id
|
||||
)
|
||||
await self._handle_new_event(origin, e, auth_events=auth)
|
||||
|
||||
if e.event_id in event_auth_events:
|
||||
auth_events[(e.type, e.state_key)] = e
|
||||
except AuthError:
|
||||
pass
|
||||
|
||||
except Exception:
|
||||
logger.exception("Failed to get auth chain")
|
||||
|
||||
if event.internal_metadata.is_outlier():
|
||||
# XXX: given that, for an outlier, we'll be working with the
|
||||
# event's *claimed* auth events rather than those we calculated:
|
||||
# (a) is there any point in this test, since different_auth below will
|
||||
# obviously be empty
|
||||
# (b) alternatively, why don't we do it earlier?
|
||||
logger.info("Skipping auth_event fetch for outlier")
|
||||
return context
|
||||
|
||||
different_auth = event_auth_events.difference(
|
||||
e.event_id for e in auth_events.values()
|
||||
)
|
||||
|
||||
if not different_auth:
|
||||
return context
|
||||
|
||||
logger.info(
|
||||
"auth_events refers to events which are not in our calculated auth "
|
||||
"chain: %s",
|
||||
different_auth,
|
||||
)
|
||||
|
||||
# XXX: currently this checks for redactions but I'm not convinced that is
|
||||
# necessary?
|
||||
different_events = await self.store.get_events_as_list(different_auth)
|
||||
|
||||
for d in different_events:
|
||||
if d.room_id != event.room_id:
|
||||
logger.warning(
|
||||
"Event %s refers to auth_event %s which is in a different room",
|
||||
event.event_id,
|
||||
d.event_id,
|
||||
)
|
||||
|
||||
# don't attempt to resolve the claimed auth events against our own
|
||||
# in this case: just use our own auth events.
|
||||
#
|
||||
# XXX: should we reject the event in this case? It feels like we should,
|
||||
# but then shouldn't we also do so if we've failed to fetch any of the
|
||||
# auth events?
|
||||
return context
|
||||
|
||||
# now we state-resolve between our own idea of the auth events, and the remote's
|
||||
# idea of them.
|
||||
|
||||
local_state = auth_events.values()
|
||||
remote_auth_events = dict(auth_events)
|
||||
remote_auth_events.update({(d.type, d.state_key): d for d in different_events})
|
||||
remote_state = remote_auth_events.values()
|
||||
|
||||
room_version = await self.store.get_room_version_id(event.room_id)
|
||||
new_state = await self.state_handler.resolve_events(
|
||||
room_version, (local_state, remote_state), event
|
||||
)
|
||||
|
||||
logger.info(
|
||||
"After state res: updating auth_events with new state %s",
|
||||
{
|
||||
(d.type, d.state_key): d.event_id
|
||||
for d in new_state.values()
|
||||
if auth_events.get((d.type, d.state_key)) != d
|
||||
},
|
||||
)
|
||||
|
||||
auth_events.update(new_state)
|
||||
|
||||
context = await self._update_context_for_auth_events(
|
||||
event, context, auth_events
|
||||
)
|
||||
|
||||
return context
|
||||
|
||||
async def _update_context_for_auth_events(
|
||||
self, event: EventBase, context: EventContext, auth_events: StateMap[EventBase]
|
||||
) -> EventContext:
|
||||
"""Update the state_ids in an event context after auth event resolution,
|
||||
storing the changes as a new state group.
|
||||
|
||||
Args:
|
||||
event: The event we're handling the context for
|
||||
|
||||
context: initial event context
|
||||
|
||||
auth_events: Events to update in the event context.
|
||||
|
||||
Returns:
|
||||
new event context
|
||||
"""
|
||||
# exclude the state key of the new event from the current_state in the context.
|
||||
if event.is_state():
|
||||
event_key = (event.type, event.state_key) # type: Optional[Tuple[str, str]]
|
||||
else:
|
||||
event_key = None
|
||||
state_updates = {
|
||||
k: a.event_id for k, a in auth_events.items() if k != event_key
|
||||
}
|
||||
|
||||
current_state_ids = await context.get_current_state_ids()
|
||||
current_state_ids = dict(current_state_ids) # type: ignore
|
||||
|
||||
current_state_ids.update(state_updates)
|
||||
|
||||
prev_state_ids = await context.get_prev_state_ids()
|
||||
prev_state_ids = dict(prev_state_ids)
|
||||
|
||||
prev_state_ids.update({k: a.event_id for k, a in auth_events.items()})
|
||||
|
||||
# create a new state group as a delta from the existing one.
|
||||
prev_group = context.state_group
|
||||
state_group = await self.state_store.store_state_group(
|
||||
event.event_id,
|
||||
event.room_id,
|
||||
prev_group=prev_group,
|
||||
delta_ids=state_updates,
|
||||
current_state_ids=current_state_ids,
|
||||
)
|
||||
|
||||
return EventContext.with_state(
|
||||
state_group=state_group,
|
||||
state_group_before_event=context.state_group_before_event,
|
||||
current_state_ids=current_state_ids,
|
||||
prev_state_ids=prev_state_ids,
|
||||
prev_group=prev_group,
|
||||
delta_ids=state_updates,
|
||||
)
|
||||
|
||||
async def construct_auth_difference(
|
||||
self, local_auth: Iterable[EventBase], remote_auth: Iterable[EventBase]
|
||||
) -> Dict:
|
||||
|
||||
@@ -109,7 +109,7 @@ class InitialSyncHandler(BaseHandler):
|
||||
|
||||
rooms_ret = []
|
||||
|
||||
now_token = self.hs.get_event_sources().get_current_token()
|
||||
now_token = await self.hs.get_event_sources().get_current_token()
|
||||
|
||||
presence_stream = self.hs.get_event_sources().sources["presence"]
|
||||
pagination_config = PaginationConfig(from_token=now_token)
|
||||
@@ -360,7 +360,7 @@ class InitialSyncHandler(BaseHandler):
|
||||
current_state.values(), time_now
|
||||
)
|
||||
|
||||
now_token = self.hs.get_event_sources().get_current_token()
|
||||
now_token = await self.hs.get_event_sources().get_current_token()
|
||||
|
||||
limit = pagin_config.limit if pagin_config else None
|
||||
if limit is None:
|
||||
|
||||
@@ -15,7 +15,7 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
|
||||
from typing import TYPE_CHECKING, List, Optional, Tuple
|
||||
|
||||
from canonicaljson import encode_canonical_json, json
|
||||
|
||||
@@ -45,7 +45,7 @@ from synapse.events.validator import EventValidator
|
||||
from synapse.logging.context import run_in_background
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.replication.http.send_event import ReplicationSendEventRestServlet
|
||||
from synapse.storage.databases.main.events_worker import EventRedactBehaviour
|
||||
from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour
|
||||
from synapse.storage.state import StateFilter
|
||||
from synapse.types import (
|
||||
Collection,
|
||||
@@ -93,11 +93,11 @@ class MessageHandler(object):
|
||||
|
||||
async def get_room_data(
|
||||
self,
|
||||
user_id: str,
|
||||
room_id: str,
|
||||
event_type: str,
|
||||
state_key: str,
|
||||
is_guest: bool,
|
||||
user_id: str = None,
|
||||
room_id: str = None,
|
||||
event_type: Optional[str] = None,
|
||||
state_key: str = "",
|
||||
is_guest: bool = False,
|
||||
) -> dict:
|
||||
""" Get data from a room.
|
||||
|
||||
@@ -407,7 +407,7 @@ class EventCreationHandler(object):
|
||||
#
|
||||
# map from room id to time-of-last-attempt.
|
||||
#
|
||||
self._rooms_to_exclude_from_dummy_event_insertion = {} # type: Dict[str, int]
|
||||
self._rooms_to_exclude_from_dummy_event_insertion = {} # type: dict[str, int]
|
||||
|
||||
# we need to construct a ConsentURIBuilder here, as it checks that the necessary
|
||||
# config options, but *only* if we have a configuration for which we are
|
||||
@@ -707,7 +707,7 @@ class EventCreationHandler(object):
|
||||
async def create_and_send_nonmember_event(
|
||||
self,
|
||||
requester: Requester,
|
||||
event_dict: dict,
|
||||
event_dict: EventBase,
|
||||
ratelimit: bool = True,
|
||||
txn_id: Optional[str] = None,
|
||||
) -> Tuple[EventBase, int]:
|
||||
@@ -768,15 +768,6 @@ class EventCreationHandler(object):
|
||||
else:
|
||||
prev_event_ids = await self.store.get_prev_events_for_room(builder.room_id)
|
||||
|
||||
# we now ought to have some prev_events (unless it's a create event).
|
||||
#
|
||||
# do a quick sanity check here, rather than waiting until we've created the
|
||||
# event and then try to auth it (which fails with a somewhat confusing "No
|
||||
# create event in auth events")
|
||||
assert (
|
||||
builder.type == EventTypes.Create or len(prev_event_ids) > 0
|
||||
), "Attempting to create an event with no prev_events"
|
||||
|
||||
event = await builder.build(prev_event_ids=prev_event_ids)
|
||||
context = await self.state.compute_event_context(event)
|
||||
if requester:
|
||||
@@ -971,7 +962,7 @@ class EventCreationHandler(object):
|
||||
# Validate a newly added alias or newly added alt_aliases.
|
||||
|
||||
original_alias = None
|
||||
original_alt_aliases = [] # type: List[str]
|
||||
original_alt_aliases = set()
|
||||
|
||||
original_event_id = event.unsigned.get("replaces_state")
|
||||
if original_event_id:
|
||||
@@ -1019,10 +1010,6 @@ class EventCreationHandler(object):
|
||||
|
||||
current_state_ids = await context.get_current_state_ids()
|
||||
|
||||
# We know this event is not an outlier, so this must be
|
||||
# non-None.
|
||||
assert current_state_ids is not None
|
||||
|
||||
state_to_include_ids = [
|
||||
e_id
|
||||
for k, e_id in current_state_ids.items()
|
||||
@@ -1074,7 +1061,7 @@ class EventCreationHandler(object):
|
||||
raise SynapseError(400, "Cannot redact event from a different room")
|
||||
|
||||
prev_state_ids = await context.get_prev_state_ids()
|
||||
auth_events_ids = self.auth.compute_auth_events(
|
||||
auth_events_ids = await self.auth.compute_auth_events(
|
||||
event, prev_state_ids, for_verification=True
|
||||
)
|
||||
auth_events = await self.store.get_events(auth_events_ids)
|
||||
|
||||
@@ -14,7 +14,7 @@
|
||||
# limitations under the License.
|
||||
import json
|
||||
import logging
|
||||
from typing import TYPE_CHECKING, Dict, Generic, List, Optional, Tuple, TypeVar
|
||||
from typing import Dict, Generic, List, Optional, Tuple, TypeVar
|
||||
from urllib.parse import urlencode
|
||||
|
||||
import attr
|
||||
@@ -39,11 +39,9 @@ from synapse.http.server import respond_with_html
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.logging.context import make_deferred_yieldable
|
||||
from synapse.push.mailer import load_jinja2_templates
|
||||
from synapse.server import HomeServer
|
||||
from synapse.types import UserID, map_username_to_mxid_localpart
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
SESSION_COOKIE_NAME = b"oidc_session"
|
||||
@@ -93,7 +91,7 @@ class OidcHandler:
|
||||
"""Handles requests related to the OpenID Connect login flow.
|
||||
"""
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
def __init__(self, hs: HomeServer):
|
||||
self._callback_url = hs.config.oidc_callback_url # type: str
|
||||
self._scopes = hs.config.oidc_scopes # type: List[str]
|
||||
self._client_auth = ClientAuth(
|
||||
|
||||
@@ -309,7 +309,7 @@ class PaginationHandler(object):
|
||||
room_token = pagin_config.from_token.room_key
|
||||
else:
|
||||
pagin_config.from_token = (
|
||||
self.hs.get_event_sources().get_current_token_for_pagination()
|
||||
await self.hs.get_event_sources().get_current_token_for_pagination()
|
||||
)
|
||||
room_token = pagin_config.from_token.room_key
|
||||
|
||||
|
||||
@@ -38,7 +38,7 @@ from synapse.logging.utils import log_function
|
||||
from synapse.metrics import LaterGauge
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.state import StateHandler
|
||||
from synapse.storage.databases.main import DataStore
|
||||
from synapse.storage.data_stores.main import DataStore
|
||||
from synapse.storage.presence import UserPresenceState
|
||||
from synapse.types import JsonDict, UserID, get_domain_from_id
|
||||
from synapse.util.async_helpers import Linearizer
|
||||
@@ -319,7 +319,7 @@ class PresenceHandler(BasePresenceHandler):
|
||||
is some spurious presence changes that will self-correct.
|
||||
"""
|
||||
# If the DB pool has already terminated, don't try updating
|
||||
if not self.store.db_pool.is_running():
|
||||
if not self.store.db.is_running():
|
||||
return
|
||||
|
||||
logger.info(
|
||||
|
||||
@@ -22,7 +22,7 @@ import logging
|
||||
import math
|
||||
import string
|
||||
from collections import OrderedDict
|
||||
from typing import Awaitable, Optional, Tuple
|
||||
from typing import Optional, Tuple
|
||||
|
||||
from synapse.api.constants import (
|
||||
EventTypes,
|
||||
@@ -1041,7 +1041,7 @@ class RoomEventSource(object):
|
||||
):
|
||||
# We just ignore the key for now.
|
||||
|
||||
to_key = self.get_current_key()
|
||||
to_key = await self.get_current_key()
|
||||
|
||||
from_token = RoomStreamToken.parse(from_key)
|
||||
if from_token.topological:
|
||||
@@ -1081,10 +1081,10 @@ class RoomEventSource(object):
|
||||
|
||||
return (events, end_key)
|
||||
|
||||
def get_current_key(self) -> str:
|
||||
return "s%d" % (self.store.get_room_max_stream_ordering(),)
|
||||
def get_current_key(self):
|
||||
return self.store.get_room_events_max_id()
|
||||
|
||||
def get_current_key_for_room(self, room_id: str) -> Awaitable[str]:
|
||||
def get_current_key_for_room(self, room_id):
|
||||
return self.store.get_room_events_max_id(room_id)
|
||||
|
||||
|
||||
|
||||
@@ -16,7 +16,7 @@
|
||||
import abc
|
||||
import logging
|
||||
from http import HTTPStatus
|
||||
from typing import TYPE_CHECKING, Dict, Iterable, List, Optional, Tuple, Union
|
||||
from typing import Dict, Iterable, List, Optional, Tuple, Union
|
||||
|
||||
from unpaddedbase64 import encode_base64
|
||||
|
||||
@@ -37,10 +37,6 @@ from synapse.util.distributor import user_joined_room, user_left_room
|
||||
|
||||
from ._base import BaseHandler
|
||||
|
||||
if TYPE_CHECKING:
|
||||
from synapse.server import HomeServer
|
||||
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
@@ -52,7 +48,7 @@ class RoomMemberHandler(object):
|
||||
|
||||
__metaclass__ = abc.ABCMeta
|
||||
|
||||
def __init__(self, hs: "HomeServer"):
|
||||
def __init__(self, hs):
|
||||
self.hs = hs
|
||||
self.store = hs.get_datastore()
|
||||
self.auth = hs.get_auth()
|
||||
@@ -210,40 +206,24 @@ class RoomMemberHandler(object):
|
||||
_, stream_id = await self.store.get_event_ordering(duplicate.event_id)
|
||||
return duplicate.event_id, stream_id
|
||||
|
||||
stream_id = await self.event_creation_handler.handle_new_client_event(
|
||||
requester, event, context, extra_users=[target], ratelimit=ratelimit
|
||||
)
|
||||
|
||||
prev_state_ids = await context.get_prev_state_ids()
|
||||
|
||||
prev_member_event_id = prev_state_ids.get((EventTypes.Member, user_id), None)
|
||||
|
||||
newly_joined = False
|
||||
if event.membership == Membership.JOIN:
|
||||
# Only fire user_joined_room if the user has actually joined the
|
||||
# room. Don't bother if the user is just changing their profile
|
||||
# info.
|
||||
newly_joined = True
|
||||
if prev_member_event_id:
|
||||
prev_member_event = await self.store.get_event(prev_member_event_id)
|
||||
newly_joined = prev_member_event.membership != Membership.JOIN
|
||||
|
||||
# Only rate-limit if the user actually joined the room, otherwise we'll end
|
||||
# up blocking profile updates.
|
||||
if newly_joined:
|
||||
time_now_s = self.clock.time()
|
||||
(
|
||||
allowed,
|
||||
time_allowed,
|
||||
) = self._join_rate_limiter_local.can_requester_do_action(requester)
|
||||
|
||||
if not allowed:
|
||||
raise LimitExceededError(
|
||||
retry_after_ms=int(1000 * (time_allowed - time_now_s))
|
||||
)
|
||||
|
||||
stream_id = await self.event_creation_handler.handle_new_client_event(
|
||||
requester, event, context, extra_users=[target], ratelimit=ratelimit,
|
||||
)
|
||||
|
||||
if event.membership == Membership.JOIN and newly_joined:
|
||||
# Only fire user_joined_room if the user has actually joined the
|
||||
# room. Don't bother if the user is just changing their profile
|
||||
# info.
|
||||
await self._user_joined_room(target, room_id)
|
||||
await self._user_joined_room(target, room_id)
|
||||
elif event.membership == Membership.LEAVE:
|
||||
if prev_member_event_id:
|
||||
prev_member_event = await self.store.get_event(prev_member_event_id)
|
||||
@@ -473,12 +453,22 @@ class RoomMemberHandler(object):
|
||||
# so don't really fit into the general auth process.
|
||||
raise AuthError(403, "Guest access not allowed")
|
||||
|
||||
if not is_host_in_room:
|
||||
if is_host_in_room:
|
||||
time_now_s = self.clock.time()
|
||||
(
|
||||
allowed,
|
||||
time_allowed,
|
||||
) = self._join_rate_limiter_remote.can_requester_do_action(requester,)
|
||||
allowed, time_allowed = self._join_rate_limiter_local.can_do_action(
|
||||
requester.user.to_string(),
|
||||
)
|
||||
|
||||
if not allowed:
|
||||
raise LimitExceededError(
|
||||
retry_after_ms=int(1000 * (time_allowed - time_now_s))
|
||||
)
|
||||
|
||||
else:
|
||||
time_now_s = self.clock.time()
|
||||
allowed, time_allowed = self._join_rate_limiter_remote.can_do_action(
|
||||
requester.user.to_string(),
|
||||
)
|
||||
|
||||
if not allowed:
|
||||
raise LimitExceededError(
|
||||
@@ -1010,7 +1000,7 @@ class RoomMemberMasterHandler(RoomMemberHandler):
|
||||
|
||||
check_complexity = self.hs.config.limit_remote_rooms.enabled
|
||||
if check_complexity and self.hs.config.limit_remote_rooms.admins_can_join:
|
||||
check_complexity = not await self.auth.is_server_admin(user)
|
||||
check_complexity = not await self.hs.auth.is_server_admin(user)
|
||||
|
||||
if check_complexity:
|
||||
# Fetch the room complexity
|
||||
|
||||
@@ -14,16 +14,15 @@
|
||||
# limitations under the License.
|
||||
import logging
|
||||
import re
|
||||
from typing import TYPE_CHECKING, Callable, Dict, Optional, Set, Tuple
|
||||
from typing import Callable, Dict, Optional, Set, Tuple
|
||||
|
||||
import attr
|
||||
import saml2
|
||||
import saml2.response
|
||||
from saml2.client import Saml2Client
|
||||
|
||||
from synapse.api.errors import AuthError, SynapseError
|
||||
from synapse.api.errors import SynapseError
|
||||
from synapse.config import ConfigError
|
||||
from synapse.config.saml2_config import SamlAttributeRequirement
from synapse.http.servlet import parse_string
from synapse.http.site import SynapseRequest
from synapse.module_api import ModuleApi
@@ -35,9 +34,6 @@ from synapse.types import (
from synapse.util.async_helpers import Linearizer
from synapse.util.iterutils import chunk_seq

if TYPE_CHECKING:
import synapse.server

logger = logging.getLogger(__name__)


@@ -53,7 +49,7 @@ class Saml2SessionData:


class SamlHandler:
def __init__(self, hs: "synapse.server.HomeServer"):
def __init__(self, hs):
self._saml_client = Saml2Client(hs.config.saml2_sp_config)
self._auth = hs.get_auth()
self._auth_handler = hs.get_auth_handler()
@@ -66,7 +62,6 @@ class SamlHandler:
self._grandfathered_mxid_source_attribute = (
hs.config.saml2_grandfathered_mxid_source_attribute
)
self._saml2_attribute_requirements = hs.config.saml2.attribute_requirements

# plugin to do custom mapping from saml response to mxid
self._user_mapping_provider = hs.config.saml2_user_mapping_provider_class(
@@ -78,7 +73,7 @@ class SamlHandler:
self._auth_provider_id = "saml"

# a map from saml session id to Saml2SessionData object
self._outstanding_requests_dict = {} # type: Dict[str, Saml2SessionData]
self._outstanding_requests_dict = {}

# a lock on the mappings
self._mapping_lock = Linearizer(name="saml_mapping", clock=self._clock)
@@ -170,18 +165,11 @@ class SamlHandler:
saml2.BINDING_HTTP_POST,
outstanding=self._outstanding_requests_dict,
)
except saml2.response.UnsolicitedResponse as e:
# the pysaml2 library helpfully logs an ERROR here, but neglects to log
# the session ID. I don't really want to put the full text of the exception
# in the (user-visible) exception message, so let's log the exception here
# so we can track down the session IDs later.
logger.warning(str(e))
raise SynapseError(400, "Unexpected SAML2 login.")
except Exception as e:
raise SynapseError(400, "Unable to parse SAML2 response: %s." % (e,))
raise SynapseError(400, "Unable to parse SAML2 response: %s" % (e,))

if saml2_auth.not_signed:
raise SynapseError(400, "SAML2 response was not signed.")
raise SynapseError(400, "SAML2 response was not signed")

logger.debug("SAML2 response: %s", saml2_auth.origxml)
for assertion in saml2_auth.assertions:
@@ -200,9 +188,6 @@ class SamlHandler:
saml2_auth.in_response_to, None
)

for requirement in self._saml2_attribute_requirements:
_check_attribute_requirement(saml2_auth.ava, requirement)

remote_user_id = self._user_mapping_provider.get_remote_user_id(
saml2_auth, client_redirect_url
)
@@ -309,21 +294,6 @@ class SamlHandler:
del self._outstanding_requests_dict[reqid]


def _check_attribute_requirement(ava: dict, req: SamlAttributeRequirement):
values = ava.get(req.attribute, [])
for v in values:
if v == req.value:
return

logger.info(
"SAML2 attribute %s did not match required value '%s' (was '%s')",
req.attribute,
req.value,
values,
)
raise AuthError(403, "You are not authorized to log in here.")


DOT_REPLACE_PATTERN = re.compile(
("[^%s]" % (re.escape("".join(mxid_localpart_allowed_characters)),))
)

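The hunk above adds per-login SAML attribute requirements. As a self-contained sketch of how that check behaves (using a hypothetical AttributeRequirement stand-in for the real SamlAttributeRequirement config object), the attribute-value assertions from the SAML response map each attribute to a list of values, and login is rejected unless the required value appears:

# Standalone sketch of the attribute-requirement check shown above.
# `AttributeRequirement` is a made-up stand-in, not Synapse's config class.
import logging

logger = logging.getLogger(__name__)


class AuthError(Exception):
    def __init__(self, code: int, msg: str):
        super().__init__(msg)
        self.code = code


class AttributeRequirement:
    def __init__(self, attribute: str, value: str):
        self.attribute = attribute
        self.value = value


def check_attribute_requirement(ava: dict, req: AttributeRequirement) -> None:
    """Raise AuthError unless ava[req.attribute] contains req.value."""
    values = ava.get(req.attribute, [])
    for v in values:
        if v == req.value:
            return
    logger.info(
        "SAML2 attribute %s did not match required value '%s' (was '%s')",
        req.attribute, req.value, values,
    )
    raise AuthError(403, "You are not authorized to log in here.")


# {"userGroup": ["staff", "admin"]} satisfies a userGroup == "staff" requirement.
check_attribute_requirement({"userGroup": ["staff", "admin"]},
                            AttributeRequirement("userGroup", "staff"))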
@@ -340,7 +340,7 @@ class SearchHandler(BaseHandler):
# If client has asked for "context" for each event (i.e. some surrounding
# events and state), fetch that
if event_context is not None:
now_token = self.hs.get_event_sources().get_current_token()
now_token = await self.hs.get_event_sources().get_current_token()

contexts = {}
for event in allowed_events:

@@ -103,6 +103,7 @@ class JoinedSyncResult:
account_data = attr.ib(type=List[JsonDict])
unread_notifications = attr.ib(type=JsonDict)
summary = attr.ib(type=Optional[JsonDict])
unread_count = attr.ib(type=int)

def __nonzero__(self) -> bool:
"""Make the result appear empty if there are no updates. This is used
@@ -960,7 +961,7 @@ class SyncHandler(object):
# this is due to some of the underlying streams not supporting the ability
# to query up to a given point.
# Always use the `now_token` in `SyncResultBuilder`
now_token = self.event_sources.get_current_token()
now_token = await self.event_sources.get_current_token()

logger.debug(
"Calculating sync response for %r between %s and %s",
@@ -1886,6 +1887,10 @@ class SyncHandler(object):

if room_builder.rtype == "joined":
unread_notifications = {} # type: Dict[str, str]

unread_count = await self.store.get_unread_message_count_for_user(
room_id, sync_config.user.to_string(),
)
room_sync = JoinedSyncResult(
room_id=room_id,
timeline=batch,
@@ -1894,6 +1899,7 @@ class SyncHandler(object):
account_data=account_data_events,
unread_notifications=unread_notifications,
summary=summary,
unread_count=unread_count,
)

if room_sync or always_include:

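The JoinedSyncResult hunk above adds an unread_count field to an attrs result class whose truthiness decides whether the room appears in the sync response at all. A minimal sketch of that pattern, with illustrative field names rather than the exact JoinedSyncResult definition:

# Sketch of an attrs result object whose truthiness means "has updates".
from typing import List, Optional

import attr


@attr.s(slots=True)
class RoomSyncResult:
    timeline = attr.ib(type=List[dict])
    account_data = attr.ib(type=List[dict])
    unread_notifications = attr.ib(type=dict)
    summary = attr.ib(type=Optional[dict])
    unread_count = attr.ib(type=int)

    def __bool__(self) -> bool:
        # Appear "empty" when there is nothing new to send to the client;
        # in this sketch a bare count does not make the result truthy.
        return bool(self.timeline or self.account_data or self.unread_notifications)


empty = RoomSyncResult([], [], {}, None, unread_count=3)
assert not empty  # nothing to send, even though a count is present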
@@ -284,7 +284,8 @@ class SimpleHttpClient(object):
ip_blacklist=self._ip_blacklist,
)

async def request(self, method, uri, data=None, headers=None):
@defer.inlineCallbacks
def request(self, method, uri, data=None, headers=None):
"""
Args:
method (str): HTTP method to use.
@@ -297,7 +298,7 @@ class SimpleHttpClient(object):
outgoing_requests_counter.labels(method).inc()

# log request but strip `access_token` (AS requests for example include this)
logger.debug("Sending request %s %s", method, redact_uri(uri))
logger.info("Sending request %s %s", method, redact_uri(uri))

with start_active_span(
"outgoing-client-request",
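Most hunks in this comparison toggle between the two Twisted styles visible here: async def with await on one side, and @defer.inlineCallbacks generators with yield on the other. A rough sketch of the same request flow in both spellings, using a made-up fetch helper rather than Synapse's actual client:

# The same flow written in both Twisted coroutine styles; `fetch` is a
# hypothetical helper returning a Deferred, standing in for a real network call.
from twisted.internet import defer


def fetch(uri):
    # Stand-in for an HTTP call; returns an already-fired Deferred.
    return defer.succeed(b'{"ok": true}')


async def get_body_async(uri):
    # Newer style: a native coroutine awaiting the Deferred directly.
    body = await fetch(uri)
    return body.decode("utf-8")


@defer.inlineCallbacks
def get_body_inline_callbacks(uri):
    # Older style: a generator-based coroutine; `yield` plays the role of
    # await and the plain `return` hands the result back to the caller.
    body = yield fetch(uri)
    return body.decode("utf-8")

Calling either function gives back a Deferred, which is why the conversion in these hunks is mostly mechanical: await becomes yield and the decorator is added or removed.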
@@ -329,7 +330,7 @@ class SimpleHttpClient(object):
|
||||
self.hs.get_reactor(),
|
||||
cancelled_to_request_timed_out_error,
|
||||
)
|
||||
response = await make_deferred_yieldable(request_deferred)
|
||||
response = yield make_deferred_yieldable(request_deferred)
|
||||
|
||||
incoming_responses_counter.labels(method, response.code).inc()
|
||||
logger.info(
|
||||
@@ -352,7 +353,8 @@ class SimpleHttpClient(object):
|
||||
set_tag("error_reason", e.args[0])
|
||||
raise
|
||||
|
||||
async def post_urlencoded_get_json(self, uri, args={}, headers=None):
|
||||
@defer.inlineCallbacks
|
||||
def post_urlencoded_get_json(self, uri, args={}, headers=None):
|
||||
"""
|
||||
Args:
|
||||
uri (str):
|
||||
@@ -361,7 +363,7 @@ class SimpleHttpClient(object):
|
||||
header name to a list of values for that header
|
||||
|
||||
Returns:
|
||||
object: parsed json
|
||||
Deferred[object]: parsed json
|
||||
|
||||
Raises:
|
||||
HttpResponseException: On a non-2xx HTTP response.
|
||||
@@ -384,11 +386,11 @@ class SimpleHttpClient(object):
|
||||
if headers:
|
||||
actual_headers.update(headers)
|
||||
|
||||
response = await self.request(
|
||||
response = yield self.request(
|
||||
"POST", uri, headers=Headers(actual_headers), data=query_bytes
|
||||
)
|
||||
|
||||
body = await make_deferred_yieldable(readBody(response))
|
||||
body = yield make_deferred_yieldable(readBody(response))
|
||||
|
||||
if 200 <= response.code < 300:
|
||||
return json.loads(body.decode("utf-8"))
|
||||
@@ -397,7 +399,8 @@ class SimpleHttpClient(object):
|
||||
response.code, response.phrase.decode("ascii", errors="replace"), body
|
||||
)
|
||||
|
||||
async def post_json_get_json(self, uri, post_json, headers=None):
|
||||
@defer.inlineCallbacks
|
||||
def post_json_get_json(self, uri, post_json, headers=None):
|
||||
"""
|
||||
|
||||
Args:
|
||||
@@ -407,7 +410,7 @@ class SimpleHttpClient(object):
|
||||
header name to a list of values for that header
|
||||
|
||||
Returns:
|
||||
object: parsed json
|
||||
Deferred[object]: parsed json
|
||||
|
||||
Raises:
|
||||
HttpResponseException: On a non-2xx HTTP response.
|
||||
@@ -426,11 +429,11 @@ class SimpleHttpClient(object):
|
||||
if headers:
|
||||
actual_headers.update(headers)
|
||||
|
||||
response = await self.request(
|
||||
response = yield self.request(
|
||||
"POST", uri, headers=Headers(actual_headers), data=json_str
|
||||
)
|
||||
|
||||
body = await make_deferred_yieldable(readBody(response))
|
||||
body = yield make_deferred_yieldable(readBody(response))
|
||||
|
||||
if 200 <= response.code < 300:
|
||||
return json.loads(body.decode("utf-8"))
|
||||
@@ -439,7 +442,8 @@ class SimpleHttpClient(object):
|
||||
response.code, response.phrase.decode("ascii", errors="replace"), body
|
||||
)
|
||||
|
||||
async def get_json(self, uri, args={}, headers=None):
|
||||
@defer.inlineCallbacks
|
||||
def get_json(self, uri, args={}, headers=None):
|
||||
""" Gets some json from the given URI.
|
||||
|
||||
Args:
|
||||
@@ -451,7 +455,7 @@ class SimpleHttpClient(object):
|
||||
headers (dict[str|bytes, List[str|bytes]]|None): If not None, a map from
|
||||
header name to a list of values for that header
|
||||
Returns:
|
||||
Succeeds when we get *any* 2xx HTTP response, with the
|
||||
Deferred: Succeeds when we get *any* 2xx HTTP response, with the
|
||||
HTTP body as JSON.
|
||||
Raises:
|
||||
HttpResponseException On a non-2xx HTTP response.
|
||||
@@ -462,10 +466,11 @@ class SimpleHttpClient(object):
|
||||
if headers:
|
||||
actual_headers.update(headers)
|
||||
|
||||
body = await self.get_raw(uri, args, headers=headers)
|
||||
body = yield self.get_raw(uri, args, headers=headers)
|
||||
return json.loads(body.decode("utf-8"))
|
||||
|
||||
async def put_json(self, uri, json_body, args={}, headers=None):
|
||||
@defer.inlineCallbacks
|
||||
def put_json(self, uri, json_body, args={}, headers=None):
|
||||
""" Puts some json to the given URI.
|
||||
|
||||
Args:
|
||||
@@ -478,7 +483,7 @@ class SimpleHttpClient(object):
|
||||
headers (dict[str|bytes, List[str|bytes]]|None): If not None, a map from
|
||||
header name to a list of values for that header
|
||||
Returns:
|
||||
Succeeds when we get *any* 2xx HTTP response, with the
|
||||
Deferred: Succeeds when we get *any* 2xx HTTP response, with the
|
||||
HTTP body as JSON.
|
||||
Raises:
|
||||
HttpResponseException On a non-2xx HTTP response.
|
||||
@@ -499,11 +504,11 @@ class SimpleHttpClient(object):
|
||||
if headers:
|
||||
actual_headers.update(headers)
|
||||
|
||||
response = await self.request(
|
||||
response = yield self.request(
|
||||
"PUT", uri, headers=Headers(actual_headers), data=json_str
|
||||
)
|
||||
|
||||
body = await make_deferred_yieldable(readBody(response))
|
||||
body = yield make_deferred_yieldable(readBody(response))
|
||||
|
||||
if 200 <= response.code < 300:
|
||||
return json.loads(body.decode("utf-8"))
|
||||
@@ -512,7 +517,8 @@ class SimpleHttpClient(object):
|
||||
response.code, response.phrase.decode("ascii", errors="replace"), body
|
||||
)
|
||||
|
||||
async def get_raw(self, uri, args={}, headers=None):
|
||||
@defer.inlineCallbacks
|
||||
def get_raw(self, uri, args={}, headers=None):
|
||||
""" Gets raw text from the given URI.
|
||||
|
||||
Args:
|
||||
@@ -524,7 +530,7 @@ class SimpleHttpClient(object):
|
||||
headers (dict[str|bytes, List[str|bytes]]|None): If not None, a map from
|
||||
header name to a list of values for that header
|
||||
Returns:
|
||||
Succeeds when we get *any* 2xx HTTP response, with the
|
||||
Deferred: Succeeds when we get *any* 2xx HTTP response, with the
|
||||
HTTP body as bytes.
|
||||
Raises:
|
||||
HttpResponseException on a non-2xx HTTP response.
|
||||
@@ -537,9 +543,9 @@ class SimpleHttpClient(object):
|
||||
if headers:
|
||||
actual_headers.update(headers)
|
||||
|
||||
response = await self.request("GET", uri, headers=Headers(actual_headers))
|
||||
response = yield self.request("GET", uri, headers=Headers(actual_headers))
|
||||
|
||||
body = await make_deferred_yieldable(readBody(response))
|
||||
body = yield make_deferred_yieldable(readBody(response))
|
||||
|
||||
if 200 <= response.code < 300:
|
||||
return body
|
||||
@@ -551,7 +557,8 @@ class SimpleHttpClient(object):
|
||||
# XXX: FIXME: This is horribly copy-pasted from matrixfederationclient.
|
||||
# The two should be factored out.
|
||||
|
||||
async def get_file(self, url, output_stream, max_size=None, headers=None):
|
||||
@defer.inlineCallbacks
|
||||
def get_file(self, url, output_stream, max_size=None, headers=None):
|
||||
"""GETs a file from a given URL
|
||||
Args:
|
||||
url (str): The URL to GET
|
||||
@@ -567,7 +574,7 @@ class SimpleHttpClient(object):
|
||||
if headers:
|
||||
actual_headers.update(headers)
|
||||
|
||||
response = await self.request("GET", url, headers=Headers(actual_headers))
|
||||
response = yield self.request("GET", url, headers=Headers(actual_headers))
|
||||
|
||||
resp_headers = dict(response.headers.getAllRawHeaders())
|
||||
|
||||
@@ -591,7 +598,7 @@ class SimpleHttpClient(object):
|
||||
# straight back in again
|
||||
|
||||
try:
|
||||
length = await make_deferred_yieldable(
|
||||
length = yield make_deferred_yieldable(
|
||||
_readBodyToFile(response, output_stream, max_size)
|
||||
)
|
||||
except SynapseError:
|
||||
|
||||
@@ -247,7 +247,7 @@ class MatrixHostnameEndpoint(object):
|
||||
port = server.port
|
||||
|
||||
try:
|
||||
logger.debug("Connecting to %s:%i", host.decode("ascii"), port)
|
||||
logger.info("Connecting to %s:%i", host.decode("ascii"), port)
|
||||
endpoint = HostnameEndpoint(self._reactor, host, port)
|
||||
if self._tls_options:
|
||||
endpoint = wrapClientTLS(self._tls_options, endpoint)
|
||||
|
||||
@@ -29,11 +29,10 @@ from zope.interface import implementer
|
||||
|
||||
from twisted.internet import defer, protocol
|
||||
from twisted.internet.error import DNSLookupError
|
||||
from twisted.internet.interfaces import IReactorPluggableNameResolver, IReactorTime
|
||||
from twisted.internet.interfaces import IReactorPluggableNameResolver
|
||||
from twisted.internet.task import _EPSILON, Cooperator
|
||||
from twisted.web._newclient import ResponseDone
|
||||
from twisted.web.http_headers import Headers
|
||||
from twisted.web.iweb import IResponse
|
||||
|
||||
import synapse.metrics
|
||||
import synapse.util.retryutils
|
||||
@@ -75,7 +74,7 @@ MAXINT = sys.maxsize
_next_id = 1


@attr.s(frozen=True)
@attr.s
class MatrixFederationRequest(object):
method = attr.ib()
"""HTTP method
@@ -111,52 +110,26 @@ class MatrixFederationRequest(object):
:type: str|None
"""

uri = attr.ib(init=False, type=bytes)
"""The URI of this request
"""

def __attrs_post_init__(self):
global _next_id
txn_id = "%s-O-%s" % (self.method, _next_id)
self.txn_id = "%s-O-%s" % (self.method, _next_id)
_next_id = (_next_id + 1) % (MAXINT - 1)

object.__setattr__(self, "txn_id", txn_id)

destination_bytes = self.destination.encode("ascii")
path_bytes = self.path.encode("ascii")
if self.query:
query_bytes = encode_query_args(self.query)
else:
query_bytes = b""

# The object is frozen so we can pre-compute this.
uri = urllib.parse.urlunparse(
(b"matrix", destination_bytes, path_bytes, None, query_bytes, b"")
)
object.__setattr__(self, "uri", uri)

def get_json(self):
if self.json_callback:
return self.json_callback()
return self.json


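The MatrixFederationRequest hunk freezes the attrs class and pre-computes derived fields in __attrs_post_init__, which has to go through object.__setattr__ because normal assignment is blocked on frozen instances. A compact sketch of that idiom, with invented field names:

# Frozen attrs class that derives a field once, at construction time.
# `Request` and its fields are illustrative, not the real federation request.
import urllib.parse

import attr


@attr.s(frozen=True)
class Request:
    destination = attr.ib(type=str)
    path = attr.ib(type=str)

    uri = attr.ib(init=False, type=bytes)

    def __attrs_post_init__(self):
        # Normal assignment would raise FrozenInstanceError, so go through
        # object.__setattr__ exactly once while constructing the instance.
        uri = urllib.parse.urlunparse(
            (b"matrix", self.destination.encode("ascii"),
             self.path.encode("ascii"), None, b"", b"")
        )
        object.__setattr__(self, "uri", uri)


req = Request(destination="example.org", path="/_matrix/federation/v1/version")
assert req.uri == b"matrix://example.org/_matrix/federation/v1/version"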
async def _handle_json_response(
|
||||
reactor: IReactorTime,
|
||||
timeout_sec: float,
|
||||
request: MatrixFederationRequest,
|
||||
response: IResponse,
|
||||
start_ms: int,
|
||||
):
|
||||
async def _handle_json_response(reactor, timeout_sec, request, response):
|
||||
"""
|
||||
Reads the JSON body of a response, with a timeout
|
||||
|
||||
Args:
|
||||
reactor: twisted reactor, for the timeout
|
||||
timeout_sec: number of seconds to wait for response to complete
|
||||
request: the request that triggered the response
|
||||
response: response to the request
|
||||
start_ms: Timestamp when request was made
|
||||
reactor (IReactor): twisted reactor, for the timeout
|
||||
timeout_sec (float): number of seconds to wait for response to complete
|
||||
request (MatrixFederationRequest): the request that triggered the response
|
||||
response (IResponse): response to the request
|
||||
|
||||
Returns:
|
||||
dict: parsed JSON response
|
||||
@@ -170,35 +143,23 @@ async def _handle_json_response(
|
||||
body = await make_deferred_yieldable(d)
|
||||
except TimeoutError as e:
|
||||
logger.warning(
|
||||
"{%s} [%s] Timed out reading response - %s %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
request.method,
|
||||
request.uri.decode("ascii"),
|
||||
"{%s} [%s] Timed out reading response", request.txn_id, request.destination,
|
||||
)
|
||||
raise RequestSendFailed(e, can_retry=True) from e
|
||||
except Exception as e:
|
||||
logger.warning(
|
||||
"{%s} [%s] Error reading response %s %s: %s",
|
||||
"{%s} [%s] Error reading response: %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
request.method,
|
||||
request.uri.decode("ascii"),
|
||||
e,
|
||||
)
|
||||
raise
|
||||
|
||||
time_taken_secs = reactor.seconds() - start_ms / 1000
|
||||
|
||||
logger.info(
|
||||
"{%s} [%s] Completed request: %d %s in %.2f secs - %s %s",
|
||||
"{%s} [%s] Completed: %d %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
response.code,
|
||||
response.phrase.decode("ascii", errors="replace"),
|
||||
time_taken_secs,
|
||||
request.method,
|
||||
request.uri.decode("ascii"),
|
||||
)
|
||||
return body
|
||||
|
||||
@@ -300,9 +261,7 @@ class MatrixFederationHttpClient(object):
|
||||
# 'M_UNRECOGNIZED' which some endpoints can return when omitting a
|
||||
# trailing slash on Synapse <= v0.99.3.
|
||||
logger.info("Retrying request with trailing slash")
|
||||
|
||||
# Request is frozen so we create a new instance
|
||||
request = attr.evolve(request, path=request.path + "/")
|
||||
request.path += "/"
|
||||
|
||||
response = await self._send_request(request, **send_request_args)
|
||||
|
||||
@@ -414,7 +373,9 @@ class MatrixFederationHttpClient(object):
|
||||
else:
|
||||
retries_left = MAX_SHORT_RETRIES
|
||||
|
||||
url_bytes = request.uri
|
||||
url_bytes = urllib.parse.urlunparse(
|
||||
(b"matrix", destination_bytes, path_bytes, None, query_bytes, b"")
|
||||
)
|
||||
url_str = url_bytes.decode("ascii")
|
||||
|
||||
url_to_sign_bytes = urllib.parse.urlunparse(
|
||||
@@ -441,7 +402,7 @@ class MatrixFederationHttpClient(object):
|
||||
|
||||
headers_dict[b"Authorization"] = auth_headers
|
||||
|
||||
logger.debug(
|
||||
logger.info(
|
||||
"{%s} [%s] Sending request: %s %s; timeout %fs",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
@@ -475,6 +436,7 @@ class MatrixFederationHttpClient(object):
|
||||
except DNSLookupError as e:
|
||||
raise RequestSendFailed(e, can_retry=retry_on_dns_fail) from e
|
||||
except Exception as e:
|
||||
logger.info("Failed to send request: %s", e)
|
||||
raise RequestSendFailed(e, can_retry=True) from e
|
||||
|
||||
incoming_responses_counter.labels(
|
||||
@@ -534,7 +496,7 @@ class MatrixFederationHttpClient(object):
|
||||
|
||||
break
|
||||
except RequestSendFailed as e:
|
||||
logger.info(
|
||||
logger.warning(
|
||||
"{%s} [%s] Request failed: %s %s: %s",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
@@ -692,8 +654,6 @@ class MatrixFederationHttpClient(object):
|
||||
json=data,
|
||||
)
|
||||
|
||||
start_ms = self.clock.time_msec()
|
||||
|
||||
response = await self._send_request_with_optional_trailing_slash(
|
||||
request,
|
||||
try_trailing_slash_on_400,
|
||||
@@ -704,7 +664,7 @@ class MatrixFederationHttpClient(object):
|
||||
)
|
||||
|
||||
body = await _handle_json_response(
|
||||
self.reactor, self.default_timeout, request, response, start_ms
|
||||
self.reactor, self.default_timeout, request, response
|
||||
)
|
||||
|
||||
return body
|
||||
@@ -760,8 +720,6 @@ class MatrixFederationHttpClient(object):
|
||||
method="POST", destination=destination, path=path, query=args, json=data
|
||||
)
|
||||
|
||||
start_ms = self.clock.time_msec()
|
||||
|
||||
response = await self._send_request(
|
||||
request,
|
||||
long_retries=long_retries,
|
||||
@@ -775,7 +733,7 @@ class MatrixFederationHttpClient(object):
|
||||
_sec_timeout = self.default_timeout
|
||||
|
||||
body = await _handle_json_response(
|
||||
self.reactor, _sec_timeout, request, response, start_ms,
|
||||
self.reactor, _sec_timeout, request, response
|
||||
)
|
||||
return body
|
||||
|
||||
@@ -828,8 +786,6 @@ class MatrixFederationHttpClient(object):
|
||||
method="GET", destination=destination, path=path, query=args
|
||||
)
|
||||
|
||||
start_ms = self.clock.time_msec()
|
||||
|
||||
response = await self._send_request_with_optional_trailing_slash(
|
||||
request,
|
||||
try_trailing_slash_on_400,
|
||||
@@ -840,7 +796,7 @@ class MatrixFederationHttpClient(object):
|
||||
)
|
||||
|
||||
body = await _handle_json_response(
|
||||
self.reactor, self.default_timeout, request, response, start_ms
|
||||
self.reactor, self.default_timeout, request, response
|
||||
)
|
||||
|
||||
return body
|
||||
@@ -890,8 +846,6 @@ class MatrixFederationHttpClient(object):
|
||||
method="DELETE", destination=destination, path=path, query=args
|
||||
)
|
||||
|
||||
start_ms = self.clock.time_msec()
|
||||
|
||||
response = await self._send_request(
|
||||
request,
|
||||
long_retries=long_retries,
|
||||
@@ -900,7 +854,7 @@ class MatrixFederationHttpClient(object):
|
||||
)
|
||||
|
||||
body = await _handle_json_response(
|
||||
self.reactor, self.default_timeout, request, response, start_ms
|
||||
self.reactor, self.default_timeout, request, response
|
||||
)
|
||||
return body
|
||||
|
||||
@@ -960,14 +914,12 @@ class MatrixFederationHttpClient(object):
|
||||
)
|
||||
raise
|
||||
logger.info(
|
||||
"{%s} [%s] Completed: %d %s [%d bytes] %s %s",
|
||||
"{%s} [%s] Completed: %d %s [%d bytes]",
|
||||
request.txn_id,
|
||||
request.destination,
|
||||
response.code,
|
||||
response.phrase.decode("ascii", errors="replace"),
|
||||
length,
|
||||
request.method,
|
||||
request.uri.decode("ascii"),
|
||||
)
|
||||
return (length, headers)
|
||||
|
||||
|
||||
@@ -25,7 +25,7 @@ from io import BytesIO
|
||||
from typing import Any, Callable, Dict, Tuple, Union
|
||||
|
||||
import jinja2
|
||||
from canonicaljson import encode_canonical_json, encode_pretty_printed_json
|
||||
from canonicaljson import encode_canonical_json, encode_pretty_printed_json, json
|
||||
|
||||
from twisted.internet import defer
|
||||
from twisted.python import failure
|
||||
@@ -46,7 +46,6 @@ from synapse.api.errors import (
|
||||
from synapse.http.site import SynapseRequest
|
||||
from synapse.logging.context import preserve_fn
|
||||
from synapse.logging.opentracing import trace_servlet
|
||||
from synapse.util import json_encoder
|
||||
from synapse.util.caches import intern_dict
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -539,7 +538,7 @@ def respond_with_json(
|
||||
# canonicaljson already encodes to bytes
|
||||
json_bytes = encode_canonical_json(json_object)
|
||||
else:
|
||||
json_bytes = json_encoder.encode(json_object).encode("utf-8")
|
||||
json_bytes = json.dumps(json_object).encode("utf-8")
|
||||
|
||||
return respond_with_json_bytes(request, code, json_bytes, send_cors=send_cors)
|
||||
|
||||
|
||||
@@ -146,9 +146,10 @@ class SynapseRequest(Request):

Returns a context manager; the correct way to use this is:

async def handle_request(request):
@defer.inlineCallbacks
def handle_request(request):
with request.processing("FooServlet"):
await really_handle_the_request()
yield really_handle_the_request()

Once the context manager is closed, the completion of the request will be logged,
and the various metrics will be updated.
@@ -286,9 +287,7 @@ class SynapseRequest(Request):
# the connection dropped)
code += "!"

log_level = logging.INFO if self._should_log_request() else logging.DEBUG
self.site.access_logger.log(
log_level,
self.site.access_logger.info(
"%s - %s - {%s}"
" Processed request: %.3fsec/%.3fsec (%.3fsec, %.3fsec) (%.3fsec/%.3fsec/%d)"
' %sB %s "%s %s %s" "%s" [%d dbevts]',
@@ -316,17 +315,6 @@ class SynapseRequest(Request):
except Exception as e:
logger.warning("Failed to stop metrics: %r", e)

def _should_log_request(self) -> bool:
"""Whether we should log at INFO that we processed the request.
"""
if self.path == b"/health":
return False

if self.method == b"OPTIONS":
return False

return True


class XForwardedForRequest(SynapseRequest):
def __init__(self, *args, **kw):

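The access-log hunk above replaces a hard-coded access_logger.info(...) with access_logger.log(level, ...) so that noisy requests such as /health probes and OPTIONS pre-flights can be demoted to DEBUG. A small sketch of the same pattern with a generic logger:

# Demote uninteresting requests to DEBUG while keeping one format string.
import logging

logger = logging.getLogger("access")


def should_log_request(path: bytes, method: bytes) -> bool:
    # Health checks and CORS pre-flights are high-volume and rarely useful.
    if path == b"/health":
        return False
    if method == b"OPTIONS":
        return False
    return True


def log_request(path: bytes, method: bytes, code: int) -> None:
    level = logging.INFO if should_log_request(path, method) else logging.DEBUG
    logger.log(level, "Processed request: %s %s -> %d", method, path, code)


log_request(b"/health", b"GET", 200)                   # emitted at DEBUG
log_request(b"/_matrix/client/r0/sync", b"GET", 200)   # emitted at INFO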
@@ -13,15 +13,16 @@
# See the License for the specific language governing permissions and
# limitations under the License.

import inspect
import logging
import threading
from asyncio import iscoroutine
from functools import wraps
from typing import TYPE_CHECKING, Dict, Optional, Set

from prometheus_client.core import REGISTRY, Counter, Gauge

from twisted.internet import defer
from twisted.python.failure import Failure

from synapse.logging.context import LoggingContext, PreserveLoggingContext

@@ -166,7 +167,7 @@ class _BackgroundProcess(object):
)


def run_as_background_process(desc: str, func, *args, **kwargs):
def run_as_background_process(desc, func, *args, **kwargs):
"""Run the given function in its own logcontext, with resource metrics

This should be used to wrap processes which are fired off to run in the
@@ -178,7 +179,7 @@ def run_as_background_process(desc: str, func, *args, **kwargs):
normal synapse inlineCallbacks function).

Args:
desc: a description for this background process type
desc (str): a description for this background process type
func: a function, which may return a Deferred or a coroutine
args: positional args for func
kwargs: keyword args for func
@@ -187,7 +188,8 @@ def run_as_background_process(desc: str, func, *args, **kwargs):
follow the synapse logcontext rules.
"""

async def run():
@defer.inlineCallbacks
def run():
with _bg_metrics_lock:
count = _background_process_counts.get(desc, 0)
_background_process_counts[desc] = count + 1
@@ -201,21 +203,29 @@ def run_as_background_process(desc: str, func, *args, **kwargs):
try:
result = func(*args, **kwargs)

if inspect.isawaitable(result):
result = await result
# We probably don't have an ensureDeferred in our call stack to handle
# coroutine results, so we need to ensureDeferred here.
#
# But we need this check because ensureDeferred doesn't like being
# called on immediate values (as opposed to Deferreds or coroutines).
if iscoroutine(result):
result = defer.ensureDeferred(result)

return result
return (yield result)
except Exception:
logger.exception(
"Background process '%s' threw an exception", desc,
# failure.Failure() fishes the original Failure out of our stack, and
# thus gives us a sensible stack trace.
f = Failure()
logger.error(
"Background process '%s' threw an exception",
desc,
exc_info=(f.type, f.value, f.getTracebackObject()),
)
finally:
_background_process_in_flight_count.labels(desc).dec()

with PreserveLoggingContext():
# Note that we return a Deferred here so that it can be used in a
# looping_call and other places that expect a Deferred.
return defer.ensureDeferred(run())
return run()


def wrap_as_background_process(desc):

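The comments in the run_as_background_process hunk are about normalising whatever func returns (a plain value, a Deferred, or a coroutine) into something callers such as looping_call can treat as a Deferred. A hedged sketch of just that normalisation step, without the metrics and logcontext machinery the real helper carries:

# Normalise "call func and give me a Deferred" across plain values, Deferreds
# and coroutines. This is only the shape of the logic, not Synapse's helper.
from asyncio import iscoroutine

from twisted.internet import defer


def run_to_deferred(func, *args, **kwargs):
    result = func(*args, **kwargs)

    # ensureDeferred only accepts coroutines (or Deferreds), so guard the call
    # rather than passing immediate values through it.
    if iscoroutine(result):
        return defer.ensureDeferred(result)
    if isinstance(result, defer.Deferred):
        return result
    return defer.succeed(result)


async def work():
    return 42


d = run_to_deferred(work)        # coroutine -> Deferred
d2 = run_to_deferred(lambda: 7)  # immediate value -> already-fired Deferred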
@@ -194,16 +194,12 @@ class ModuleApi(object):
synapse.api.errors.AuthError: the access token is invalid
"""
# see if the access token corresponds to a device
user_info = yield defer.ensureDeferred(
self._auth.get_user_by_access_token(access_token)
)
user_info = yield self._auth.get_user_by_access_token(access_token)
device_id = user_info.get("device_id")
user_id = user_info["user"].to_string()
if device_id:
# delete the device, which will also delete its access tokens
yield defer.ensureDeferred(
self._hs.get_device_handler().delete_device(user_id, device_id)
)
yield self._hs.get_device_handler().delete_device(user_id, device_id)
else:
# no associated device. Just delete the access token.
yield defer.ensureDeferred(
@@ -223,7 +219,7 @@ class ModuleApi(object):
Returns:
Deferred[object]: result of func
"""
return self._store.db_pool.runInteraction(desc, func, *args, **kwargs)
return self._store.db.runInteraction(desc, func, *args, **kwargs)

def complete_sso_login(
self, registered_user_id: str, request: SynapseRequest, client_redirect_url: str

@@ -15,18 +15,7 @@
|
||||
|
||||
import logging
|
||||
from collections import namedtuple
|
||||
from typing import (
|
||||
Awaitable,
|
||||
Callable,
|
||||
Dict,
|
||||
Iterable,
|
||||
List,
|
||||
Optional,
|
||||
Set,
|
||||
Tuple,
|
||||
TypeVar,
|
||||
Union,
|
||||
)
|
||||
from typing import Callable, Iterable, List, TypeVar
|
||||
|
||||
from prometheus_client import Counter
|
||||
|
||||
@@ -35,14 +24,12 @@ from twisted.internet import defer
|
||||
import synapse.server
|
||||
from synapse.api.constants import EventTypes, Membership
|
||||
from synapse.api.errors import AuthError
|
||||
from synapse.events import EventBase
|
||||
from synapse.handlers.presence import format_user_presence_state
|
||||
from synapse.logging.context import PreserveLoggingContext
|
||||
from synapse.logging.utils import log_function
|
||||
from synapse.metrics import LaterGauge
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
from synapse.streams.config import PaginationConfig
|
||||
from synapse.types import Collection, StreamToken, UserID
|
||||
from synapse.types import StreamToken
|
||||
from synapse.util.async_helpers import ObservableDeferred, timeout_deferred
|
||||
from synapse.util.metrics import Measure
|
||||
from synapse.visibility import filter_events_for_client
|
||||
@@ -90,13 +77,7 @@ class _NotifierUserStream(object):
|
||||
so that it can remove itself from the indexes in the Notifier class.
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
user_id: str,
|
||||
rooms: Collection[str],
|
||||
current_token: StreamToken,
|
||||
time_now_ms: int,
|
||||
):
|
||||
def __init__(self, user_id, rooms, current_token, time_now_ms):
|
||||
self.user_id = user_id
|
||||
self.rooms = set(rooms)
|
||||
self.current_token = current_token
|
||||
@@ -112,13 +93,13 @@ class _NotifierUserStream(object):
|
||||
with PreserveLoggingContext():
|
||||
self.notify_deferred = ObservableDeferred(defer.Deferred())
|
||||
|
||||
def notify(self, stream_key: str, stream_id: int, time_now_ms: int):
|
||||
def notify(self, stream_key, stream_id, time_now_ms):
|
||||
"""Notify any listeners for this user of a new event from an
|
||||
event source.
|
||||
Args:
|
||||
stream_key: The stream the event came from.
|
||||
stream_id: The new id for the stream the event came from.
|
||||
time_now_ms: The current time in milliseconds.
|
||||
stream_key(str): The stream the event came from.
|
||||
stream_id(str): The new id for the stream the event came from.
|
||||
time_now_ms(int): The current time in milliseconds.
|
||||
"""
|
||||
self.current_token = self.current_token.copy_and_advance(stream_key, stream_id)
|
||||
self.last_notified_token = self.current_token
|
||||
@@ -131,7 +112,7 @@ class _NotifierUserStream(object):
|
||||
self.notify_deferred = ObservableDeferred(defer.Deferred())
|
||||
noify_deferred.callback(self.current_token)
|
||||
|
||||
def remove(self, notifier: "Notifier"):
|
||||
def remove(self, notifier):
|
||||
""" Remove this listener from all the indexes in the Notifier
|
||||
it knows about.
|
||||
"""
|
||||
@@ -142,10 +123,10 @@ class _NotifierUserStream(object):
|
||||
|
||||
notifier.user_to_user_stream.pop(self.user_id)
|
||||
|
||||
def count_listeners(self) -> int:
|
||||
def count_listeners(self):
|
||||
return len(self.notify_deferred.observers())
|
||||
|
||||
def new_listener(self, token: StreamToken) -> _NotificationListener:
|
||||
def new_listener(self, token):
|
||||
"""Returns a deferred that is resolved when there is a new token
|
||||
greater than the given token.
|
||||
|
||||
@@ -178,16 +159,14 @@ class Notifier(object):
|
||||
UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000
|
||||
|
||||
def __init__(self, hs: "synapse.server.HomeServer"):
|
||||
self.user_to_user_stream = {} # type: Dict[str, _NotifierUserStream]
|
||||
self.room_to_user_streams = {} # type: Dict[str, Set[_NotifierUserStream]]
|
||||
self.user_to_user_stream = {}
|
||||
self.room_to_user_streams = {}
|
||||
|
||||
self.hs = hs
|
||||
self.storage = hs.get_storage()
|
||||
self.event_sources = hs.get_event_sources()
|
||||
self.store = hs.get_datastore()
|
||||
self.pending_new_room_events = (
|
||||
[]
|
||||
) # type: List[Tuple[int, EventBase, Collection[Union[str, UserID]]]]
|
||||
self.pending_new_room_events = []
|
||||
|
||||
# Called when there are new things to stream over replication
|
||||
self.replication_callbacks = [] # type: List[Callable[[], None]]
|
||||
@@ -199,9 +178,10 @@ class Notifier(object):
|
||||
self.clock = hs.get_clock()
|
||||
self.appservice_handler = hs.get_application_service_handler()
|
||||
|
||||
self.federation_sender = None
|
||||
if hs.should_send_federation():
|
||||
self.federation_sender = hs.get_federation_sender()
|
||||
else:
|
||||
self.federation_sender = None
|
||||
|
||||
self.state_handler = hs.get_state_handler()
|
||||
|
||||
@@ -213,12 +193,12 @@ class Notifier(object):
|
||||
# when rendering the metrics page, which is likely once per minute at
|
||||
# most when scraping it.
|
||||
def count_listeners():
|
||||
all_user_streams = set() # type: Set[_NotifierUserStream]
|
||||
all_user_streams = set()
|
||||
|
||||
for streams in list(self.room_to_user_streams.values()):
|
||||
all_user_streams |= streams
|
||||
for stream in list(self.user_to_user_stream.values()):
|
||||
all_user_streams.add(stream)
|
||||
for x in list(self.room_to_user_streams.values()):
|
||||
all_user_streams |= x
|
||||
for x in list(self.user_to_user_stream.values()):
|
||||
all_user_streams.add(x)
|
||||
|
||||
return sum(stream.count_listeners() for stream in all_user_streams)
|
||||
|
||||
@@ -243,11 +223,7 @@ class Notifier(object):
|
||||
self.replication_callbacks.append(cb)
|
||||
|
||||
def on_new_room_event(
|
||||
self,
|
||||
event: EventBase,
|
||||
room_stream_id: int,
|
||||
max_room_stream_id: int,
|
||||
extra_users: Collection[Union[str, UserID]] = [],
|
||||
self, event, room_stream_id, max_room_stream_id, extra_users=[]
|
||||
):
|
||||
""" Used by handlers to inform the notifier something has happened
|
||||
in the room, room event wise.
|
||||
@@ -265,11 +241,11 @@ class Notifier(object):
|
||||
|
||||
self.notify_replication()
|
||||
|
||||
def _notify_pending_new_room_events(self, max_room_stream_id: int):
|
||||
def _notify_pending_new_room_events(self, max_room_stream_id):
|
||||
"""Notify for the room events that were queued waiting for a previous
|
||||
event to be persisted.
|
||||
Args:
|
||||
max_room_stream_id: The highest stream_id below which all
|
||||
max_room_stream_id(int): The highest stream_id below which all
|
||||
events have been persisted.
|
||||
"""
|
||||
pending = self.pending_new_room_events
|
||||
@@ -282,12 +258,7 @@ class Notifier(object):
|
||||
else:
|
||||
self._on_new_room_event(event, room_stream_id, extra_users)
|
||||
|
||||
def _on_new_room_event(
|
||||
self,
|
||||
event: EventBase,
|
||||
room_stream_id: int,
|
||||
extra_users: Collection[Union[str, UserID]] = [],
|
||||
):
|
||||
def _on_new_room_event(self, event, room_stream_id, extra_users=[]):
|
||||
"""Notify any user streams that are interested in this room event"""
|
||||
# poke any interested application service.
|
||||
run_as_background_process(
|
||||
@@ -304,19 +275,13 @@ class Notifier(object):
|
||||
"room_key", room_stream_id, users=extra_users, rooms=[event.room_id]
|
||||
)
|
||||
|
||||
async def _notify_app_services(self, room_stream_id: int):
|
||||
async def _notify_app_services(self, room_stream_id):
|
||||
try:
|
||||
await self.appservice_handler.notify_interested_services(room_stream_id)
|
||||
except Exception:
|
||||
logger.exception("Error notifying application services of event")
|
||||
|
||||
def on_new_event(
|
||||
self,
|
||||
stream_key: str,
|
||||
new_token: int,
|
||||
users: Collection[Union[str, UserID]] = [],
|
||||
rooms: Collection[str] = [],
|
||||
):
|
||||
def on_new_event(self, stream_key, new_token, users=[], rooms=[]):
|
||||
""" Used to inform listeners that something has happened event wise.
|
||||
|
||||
Will wake up all listeners for the given users and rooms.
|
||||
@@ -342,25 +307,20 @@ class Notifier(object):
|
||||
|
||||
self.notify_replication()
|
||||
|
||||
def on_new_replication_data(self) -> None:
|
||||
def on_new_replication_data(self):
|
||||
"""Used to inform replication listeners that something has happend
|
||||
without waking up any of the normal user event streams"""
|
||||
self.notify_replication()
|
||||
|
||||
async def wait_for_events(
|
||||
self,
|
||||
user_id: str,
|
||||
timeout: int,
|
||||
callback: Callable[[StreamToken, StreamToken], Awaitable[T]],
|
||||
room_ids=None,
|
||||
from_token=StreamToken.START,
|
||||
) -> T:
|
||||
self, user_id, timeout, callback, room_ids=None, from_token=StreamToken.START
|
||||
):
|
||||
"""Wait until the callback returns a non empty response or the
|
||||
timeout fires.
|
||||
"""
|
||||
user_stream = self.user_to_user_stream.get(user_id)
|
||||
if user_stream is None:
|
||||
current_token = self.event_sources.get_current_token()
|
||||
current_token = await self.event_sources.get_current_token()
|
||||
if room_ids is None:
|
||||
room_ids = await self.store.get_rooms_for_user(user_id)
|
||||
user_stream = _NotifierUserStream(
|
||||
@@ -417,16 +377,19 @@ class Notifier(object):
|
||||
|
||||
async def get_events_for(
|
||||
self,
|
||||
user: UserID,
|
||||
pagination_config: PaginationConfig,
|
||||
timeout: int,
|
||||
is_guest: bool = False,
|
||||
explicit_room_id: str = None,
|
||||
) -> EventStreamResult:
|
||||
user,
|
||||
pagination_config,
|
||||
timeout,
|
||||
only_keys=None,
|
||||
is_guest=False,
|
||||
explicit_room_id=None,
|
||||
):
|
||||
""" For the given user and rooms, return any new events for them. If
|
||||
there are no new events wait for up to `timeout` milliseconds for any
|
||||
new events to happen before returning.
|
||||
|
||||
If `only_keys` is not None, events from keys will be sent down.
|
||||
|
||||
If explicit_room_id is not set, the user's joined rooms will be polled
|
||||
for events.
|
||||
If explicit_room_id is set, that room will be polled for events only if
|
||||
@@ -434,20 +397,18 @@ class Notifier(object):
|
||||
"""
|
||||
from_token = pagination_config.from_token
|
||||
if not from_token:
|
||||
from_token = self.event_sources.get_current_token()
|
||||
from_token = await self.event_sources.get_current_token()
|
||||
|
||||
limit = pagination_config.limit
|
||||
|
||||
room_ids, is_joined = await self._get_room_ids(user, explicit_room_id)
|
||||
is_peeking = not is_joined
|
||||
|
||||
async def check_for_updates(
|
||||
before_token: StreamToken, after_token: StreamToken
|
||||
) -> EventStreamResult:
|
||||
async def check_for_updates(before_token, after_token):
|
||||
if not after_token.is_after(before_token):
|
||||
return EventStreamResult([], (from_token, from_token))
|
||||
|
||||
events = [] # type: List[EventBase]
|
||||
events = []
|
||||
end_token = from_token
|
||||
|
||||
for name, source in self.event_sources.sources.items():
|
||||
@@ -456,6 +417,8 @@ class Notifier(object):
|
||||
after_id = getattr(after_token, keyname)
|
||||
if before_id == after_id:
|
||||
continue
|
||||
if only_keys and name not in only_keys:
|
||||
continue
|
||||
|
||||
new_events, new_key = await source.get_new_events(
|
||||
user=user,
|
||||
@@ -513,9 +476,7 @@ class Notifier(object):
|
||||
|
||||
return result
|
||||
|
||||
async def _get_room_ids(
|
||||
self, user: UserID, explicit_room_id: Optional[str]
|
||||
) -> Tuple[Collection[str], bool]:
|
||||
async def _get_room_ids(self, user, explicit_room_id):
|
||||
joined_room_ids = await self.store.get_rooms_for_user(user.to_string())
|
||||
if explicit_room_id:
|
||||
if explicit_room_id in joined_room_ids:
|
||||
@@ -525,7 +486,7 @@ class Notifier(object):
|
||||
raise AuthError(403, "Non-joined access not allowed")
|
||||
return joined_room_ids, True
|
||||
|
||||
async def _is_world_readable(self, room_id: str) -> bool:
|
||||
async def _is_world_readable(self, room_id):
|
||||
state = await self.state_handler.get_current_state(
|
||||
room_id, EventTypes.RoomHistoryVisibility, ""
|
||||
)
|
||||
@@ -535,7 +496,7 @@ class Notifier(object):
|
||||
return False
|
||||
|
||||
@log_function
|
||||
def remove_expired_streams(self) -> None:
|
||||
def remove_expired_streams(self):
|
||||
time_now_ms = self.clock.time_msec()
|
||||
expired_streams = []
|
||||
expire_before_ts = time_now_ms - self.UNUSED_STREAM_EXPIRY_MS
|
||||
@@ -549,21 +510,21 @@ class Notifier(object):
|
||||
expired_stream.remove(self)
|
||||
|
||||
@log_function
|
||||
def _register_with_keys(self, user_stream: _NotifierUserStream):
|
||||
def _register_with_keys(self, user_stream):
|
||||
self.user_to_user_stream[user_stream.user_id] = user_stream
|
||||
|
||||
for room in user_stream.rooms:
|
||||
s = self.room_to_user_streams.setdefault(room, set())
|
||||
s.add(user_stream)
|
||||
|
||||
def _user_joined_room(self, user_id: str, room_id: str):
|
||||
def _user_joined_room(self, user_id, room_id):
|
||||
new_user_stream = self.user_to_user_stream.get(user_id)
|
||||
if new_user_stream is not None:
|
||||
room_streams = self.room_to_user_streams.setdefault(room_id, set())
|
||||
room_streams.add(new_user_stream)
|
||||
new_user_stream.rooms.add(room_id)
|
||||
|
||||
def notify_replication(self) -> None:
|
||||
def notify_replication(self):
|
||||
"""Notify the any replication listeners that there's a new event"""
|
||||
for cb in self.replication_callbacks:
|
||||
cb()
|
||||
|
||||
@@ -19,13 +19,11 @@ import copy
|
||||
from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
|
||||
|
||||
|
||||
def list_with_base_rules(rawrules, use_new_defaults=False):
|
||||
def list_with_base_rules(rawrules):
|
||||
"""Combine the list of rules set by the user with the default push rules
|
||||
|
||||
Args:
|
||||
rawrules(list): The rules the user has modified or set.
|
||||
use_new_defaults(bool): Whether to use the new experimental default rules when
|
||||
appending or prepending default rules.
|
||||
|
||||
Returns:
|
||||
A new list with the rules set by the user combined with the defaults.
|
||||
@@ -45,9 +43,7 @@ def list_with_base_rules(rawrules, use_new_defaults=False):
|
||||
|
||||
ruleslist.extend(
|
||||
make_base_prepend_rules(
|
||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
|
||||
modified_base_rules,
|
||||
use_new_defaults,
|
||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
|
||||
)
|
||||
)
|
||||
|
||||
@@ -58,7 +54,6 @@ def list_with_base_rules(rawrules, use_new_defaults=False):
|
||||
make_base_append_rules(
|
||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
|
||||
modified_base_rules,
|
||||
use_new_defaults,
|
||||
)
|
||||
)
|
||||
current_prio_class -= 1
|
||||
@@ -67,7 +62,6 @@ def list_with_base_rules(rawrules, use_new_defaults=False):
|
||||
make_base_prepend_rules(
|
||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
|
||||
modified_base_rules,
|
||||
use_new_defaults,
|
||||
)
|
||||
)
|
||||
|
||||
@@ -76,39 +70,27 @@ def list_with_base_rules(rawrules, use_new_defaults=False):
|
||||
while current_prio_class > 0:
|
||||
ruleslist.extend(
|
||||
make_base_append_rules(
|
||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
|
||||
modified_base_rules,
|
||||
use_new_defaults,
|
||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
|
||||
)
|
||||
)
|
||||
current_prio_class -= 1
|
||||
if current_prio_class > 0:
|
||||
ruleslist.extend(
|
||||
make_base_prepend_rules(
|
||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
|
||||
modified_base_rules,
|
||||
use_new_defaults,
|
||||
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
|
||||
)
|
||||
)
|
||||
|
||||
return ruleslist
|
||||
|
||||
|
||||
def make_base_append_rules(kind, modified_base_rules, use_new_defaults=False):
|
||||
def make_base_append_rules(kind, modified_base_rules):
|
||||
rules = []
|
||||
|
||||
if kind == "override":
|
||||
rules = (
|
||||
NEW_APPEND_OVERRIDE_RULES
|
||||
if use_new_defaults
|
||||
else BASE_APPEND_OVERRIDE_RULES
|
||||
)
|
||||
rules = BASE_APPEND_OVERRIDE_RULES
|
||||
elif kind == "underride":
|
||||
rules = (
|
||||
NEW_APPEND_UNDERRIDE_RULES
|
||||
if use_new_defaults
|
||||
else BASE_APPEND_UNDERRIDE_RULES
|
||||
)
|
||||
rules = BASE_APPEND_UNDERRIDE_RULES
|
||||
elif kind == "content":
|
||||
rules = BASE_APPEND_CONTENT_RULES
|
||||
|
||||
@@ -123,7 +105,7 @@ def make_base_append_rules(kind, modified_base_rules, use_new_defaults=False):
|
||||
return rules
|
||||
|
||||
|
||||
def make_base_prepend_rules(kind, modified_base_rules, use_new_defaults=False):
|
||||
def make_base_prepend_rules(kind, modified_base_rules):
|
||||
rules = []
|
||||
|
||||
if kind == "override":
|
||||
@@ -288,135 +270,6 @@ BASE_APPEND_OVERRIDE_RULES = [
|
||||
]
|
||||
|
||||
|
||||
NEW_APPEND_OVERRIDE_RULES = [
|
||||
{
|
||||
"rule_id": "global/override/.m.rule.encrypted",
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "type",
|
||||
"pattern": "m.room.encrypted",
|
||||
"_id": "_encrypted",
|
||||
}
|
||||
],
|
||||
"actions": ["notify"],
|
||||
},
|
||||
{
|
||||
"rule_id": "global/override/.m.rule.suppress_notices",
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "type",
|
||||
"pattern": "m.room.message",
|
||||
"_id": "_suppress_notices_type",
|
||||
},
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "content.msgtype",
|
||||
"pattern": "m.notice",
|
||||
"_id": "_suppress_notices",
|
||||
},
|
||||
],
|
||||
"actions": [],
|
||||
},
|
||||
{
|
||||
"rule_id": "global/underride/.m.rule.suppress_edits",
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "m.relates_to.m.rel_type",
|
||||
"pattern": "m.replace",
|
||||
"_id": "_suppress_edits",
|
||||
}
|
||||
],
|
||||
"actions": [],
|
||||
},
|
||||
{
|
||||
"rule_id": "global/override/.m.rule.invite_for_me",
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "type",
|
||||
"pattern": "m.room.member",
|
||||
"_id": "_member",
|
||||
},
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "content.membership",
|
||||
"pattern": "invite",
|
||||
"_id": "_invite_member",
|
||||
},
|
||||
{"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
|
||||
],
|
||||
"actions": ["notify", {"set_tweak": "sound", "value": "default"}],
|
||||
},
|
||||
{
|
||||
"rule_id": "global/override/.m.rule.contains_display_name",
|
||||
"conditions": [{"kind": "contains_display_name"}],
|
||||
"actions": [
|
||||
"notify",
|
||||
{"set_tweak": "sound", "value": "default"},
|
||||
{"set_tweak": "highlight"},
|
||||
],
|
||||
},
|
||||
{
|
||||
"rule_id": "global/override/.m.rule.tombstone",
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "type",
|
||||
"pattern": "m.room.tombstone",
|
||||
"_id": "_tombstone",
|
||||
},
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "state_key",
|
||||
"pattern": "",
|
||||
"_id": "_tombstone_statekey",
|
||||
},
|
||||
],
|
||||
"actions": [
|
||||
"notify",
|
||||
{"set_tweak": "sound", "value": "default"},
|
||||
{"set_tweak": "highlight"},
|
||||
],
|
||||
},
|
||||
{
|
||||
"rule_id": "global/override/.m.rule.roomnotif",
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "content.body",
|
||||
"pattern": "@room",
|
||||
"_id": "_roomnotif_content",
|
||||
},
|
||||
{
|
||||
"kind": "sender_notification_permission",
|
||||
"key": "room",
|
||||
"_id": "_roomnotif_pl",
|
||||
},
|
||||
],
|
||||
"actions": [
|
||||
"notify",
|
||||
{"set_tweak": "highlight"},
|
||||
{"set_tweak": "sound", "value": "default"},
|
||||
],
|
||||
},
|
||||
{
|
||||
"rule_id": "global/override/.m.rule.call",
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "type",
|
||||
"pattern": "m.call.invite",
|
||||
"_id": "_call",
|
||||
}
|
||||
],
|
||||
"actions": ["notify", {"set_tweak": "sound", "value": "ring"}],
|
||||
},
|
||||
]
|
||||
|
||||
|
||||
BASE_APPEND_UNDERRIDE_RULES = [
|
||||
{
|
||||
"rule_id": "global/underride/.m.rule.call",
|
||||
@@ -501,36 +354,6 @@ BASE_APPEND_UNDERRIDE_RULES = [
|
||||
]
|
||||
|
||||
|
||||
NEW_APPEND_UNDERRIDE_RULES = [
|
||||
{
|
||||
"rule_id": "global/underride/.m.rule.room_one_to_one",
|
||||
"conditions": [
|
||||
{"kind": "room_member_count", "is": "2", "_id": "member_count"},
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "content.body",
|
||||
"pattern": "*",
|
||||
"_id": "body",
|
||||
},
|
||||
],
|
||||
"actions": ["notify", {"set_tweak": "sound", "value": "default"}],
|
||||
},
|
||||
{
|
||||
"rule_id": "global/underride/.m.rule.message",
|
||||
"conditions": [
|
||||
{
|
||||
"kind": "event_match",
|
||||
"key": "content.body",
|
||||
"pattern": "*",
|
||||
"_id": "body",
|
||||
},
|
||||
],
|
||||
"actions": ["notify"],
|
||||
"enabled": False,
|
||||
},
|
||||
]
|
||||
|
||||
|
||||
BASE_RULE_IDS = set()
|
||||
|
||||
for r in BASE_APPEND_CONTENT_RULES:
|
||||
@@ -552,26 +375,3 @@ for r in BASE_APPEND_UNDERRIDE_RULES:
|
||||
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
|
||||
r["default"] = True
|
||||
BASE_RULE_IDS.add(r["rule_id"])
|
||||
|
||||
|
||||
NEW_RULE_IDS = set()
|
||||
|
||||
for r in BASE_APPEND_CONTENT_RULES:
|
||||
r["priority_class"] = PRIORITY_CLASS_MAP["content"]
|
||||
r["default"] = True
|
||||
NEW_RULE_IDS.add(r["rule_id"])
|
||||
|
||||
for r in BASE_PREPEND_OVERRIDE_RULES:
|
||||
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
|
||||
r["default"] = True
|
||||
NEW_RULE_IDS.add(r["rule_id"])
|
||||
|
||||
for r in NEW_APPEND_OVERRIDE_RULES:
|
||||
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
|
||||
r["default"] = True
|
||||
NEW_RULE_IDS.add(r["rule_id"])
|
||||
|
||||
for r in NEW_APPEND_UNDERRIDE_RULES:
|
||||
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
|
||||
r["default"] = True
|
||||
NEW_RULE_IDS.add(r["rule_id"])
|
||||
|
||||
@@ -120,7 +120,7 @@ class BulkPushRuleEvaluator(object):
pl_event = await self.store.get_event(pl_event_id)
auth_events = {POWER_KEY: pl_event}
else:
auth_events_ids = self.auth.compute_auth_events(
auth_events_ids = await self.auth.compute_auth_events(
event, prev_state_ids, for_verification=False
)
auth_events = await self.store.get_events(auth_events_ids)

@@ -21,22 +21,13 @@ async def get_badge_count(store, user_id):
invites = await store.get_invited_rooms_for_local_user(user_id)
joins = await store.get_rooms_for_user(user_id)

my_receipts_by_room = await store.get_receipts_for_user(user_id, "m.read")

badge = len(invites)

for room_id in joins:
if room_id in my_receipts_by_room:
last_unread_event_id = my_receipts_by_room[room_id]

notifs = await (
store.get_unread_event_push_actions_by_room_for_user(
room_id, user_id, last_unread_event_id
)
)
# return one badge count per conversation, as count per
# message is so noisy as to be almost useless
badge += 1 if notifs["notify_count"] else 0
unread_count = await store.get_unread_message_count_for_user(room_id, user_id)
# return one badge count per conversation, as count per
# message is so noisy as to be almost useless
badge += 1 if unread_count else 0
return badge

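The get_badge_count hunk keeps the long-standing rationale in its comment: badges are counted per conversation because per-message counts are too noisy. A toy illustration of that counting rule, with plain dictionaries standing in for the storage layer:

# Toy illustration of "one badge per conversation": invites always count,
# joined rooms count once if they have any unread messages at all.
from typing import Dict, List


def get_badge_count(invites: List[str], unread_by_room: Dict[str, int]) -> int:
    badge = len(invites)
    for room_id, unread in unread_by_room.items():
        # One per conversation, however many unread messages it holds.
        badge += 1 if unread else 0
    return badge


assert get_badge_count(["!invite:a"], {"!roomA:a": 12, "!roomB:a": 0}) == 2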
@@ -59,6 +59,7 @@ REQUIREMENTS = [
"pyyaml>=3.11",
"pyasn1>=0.1.9",
"pyasn1-modules>=0.0.7",
"daemonize>=2.3.1",
"bcrypt>=3.1.0",
"pillow>=4.3.0",
"sortedcontainers>=1.4.4",

@@ -16,8 +16,8 @@
|
||||
import logging
|
||||
from typing import Optional
|
||||
|
||||
from synapse.storage.database import DatabasePool
|
||||
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
|
||||
from synapse.storage.data_stores.main.cache import CacheInvalidationWorkerStore
|
||||
from synapse.storage.database import Database
|
||||
from synapse.storage.engines import PostgresEngine
|
||||
from synapse.storage.util.id_generators import MultiWriterIdGenerator
|
||||
|
||||
@@ -25,7 +25,7 @@ logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class BaseSlavedStore(CacheInvalidationWorkerStore):
|
||||
def __init__(self, database: DatabasePool, db_conn, hs):
|
||||
def __init__(self, database: Database, db_conn, hs):
|
||||
super(BaseSlavedStore, self).__init__(database, db_conn, hs)
|
||||
if isinstance(self.database_engine, PostgresEngine):
|
||||
self._cache_id_gen = MultiWriterIdGenerator(
|
||||
|
||||
@@ -17,13 +17,13 @@
|
||||
from synapse.replication.slave.storage._base import BaseSlavedStore
|
||||
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
|
||||
from synapse.replication.tcp.streams import AccountDataStream, TagAccountDataStream
|
||||
from synapse.storage.database import DatabasePool
|
||||
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
|
||||
from synapse.storage.databases.main.tags import TagsWorkerStore
|
||||
from synapse.storage.data_stores.main.account_data import AccountDataWorkerStore
|
||||
from synapse.storage.data_stores.main.tags import TagsWorkerStore
|
||||
from synapse.storage.database import Database
|
||||
|
||||
|
||||
class SlavedAccountDataStore(TagsWorkerStore, AccountDataWorkerStore, BaseSlavedStore):
|
||||
def __init__(self, database: DatabasePool, db_conn, hs):
|
||||
def __init__(self, database: Database, db_conn, hs):
|
||||
self._account_data_id_gen = SlavedIdTracker(
|
||||
db_conn,
|
||||
"account_data",
|
||||
|
||||
@@ -14,7 +14,7 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from synapse.storage.databases.main.appservice import (
|
||||
from synapse.storage.data_stores.main.appservice import (
|
||||
ApplicationServiceTransactionWorkerStore,
|
||||
ApplicationServiceWorkerStore,
|
||||
)
|
||||
|
||||
@@ -13,22 +13,22 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
from synapse.storage.database import DatabasePool
|
||||
from synapse.storage.databases.main.client_ips import LAST_SEEN_GRANULARITY
|
||||
from synapse.storage.data_stores.main.client_ips import LAST_SEEN_GRANULARITY
|
||||
from synapse.storage.database import Database
|
||||
from synapse.util.caches.descriptors import Cache
|
||||
|
||||
from ._base import BaseSlavedStore
|
||||
|
||||
|
||||
class SlavedClientIpStore(BaseSlavedStore):
|
||||
def __init__(self, database: DatabasePool, db_conn, hs):
|
||||
def __init__(self, database: Database, db_conn, hs):
|
||||
super(SlavedClientIpStore, self).__init__(database, db_conn, hs)
|
||||
|
||||
self.client_ip_last_seen = Cache(
|
||||
name="client_ip_last_seen", keylen=4, max_entries=50000
|
||||
)
|
||||
|
||||
async def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id):
|
||||
def insert_client_ip(self, user_id, access_token, ip, user_agent, device_id):
|
||||
now = int(self._clock.time_msec())
|
||||
key = (user_id, access_token, ip)
|
||||
|
||||
|
||||
@@ -16,14 +16,14 @@
|
||||
from synapse.replication.slave.storage._base import BaseSlavedStore
|
||||
from synapse.replication.slave.storage._slaved_id_tracker import SlavedIdTracker
|
||||
from synapse.replication.tcp.streams import ToDeviceStream
|
||||
from synapse.storage.database import DatabasePool
|
||||
from synapse.storage.databases.main.deviceinbox import DeviceInboxWorkerStore
|
||||
from synapse.storage.data_stores.main.deviceinbox import DeviceInboxWorkerStore
|
||||
from synapse.storage.database import Database
|
||||
from synapse.util.caches.expiringcache import ExpiringCache
|
||||
from synapse.util.caches.stream_change_cache import StreamChangeCache
|
||||
|
||||
|
||||
class SlavedDeviceInboxStore(DeviceInboxWorkerStore, BaseSlavedStore):
|
||||
def __init__(self, database: DatabasePool, db_conn, hs):
|
||||
def __init__(self, database: Database, db_conn, hs):
|
||||
super(SlavedDeviceInboxStore, self).__init__(database, db_conn, hs)
|
||||
self._device_inbox_id_gen = SlavedIdTracker(
|
||||
db_conn, "device_inbox", "stream_id"
|
||||
|
||||