
Compare commits


66 Commits

Author SHA1 Message Date
Mathieu Velten
f19f47b44b Add test image for complement with workers 2021-01-19 11:12:35 +01:00
Andrew Morgan
9115c47dc3 Temp: add script for building and running worker-mode synapse in complement 2021-01-07 00:22:27 -05:00
Andrew Morgan
ce591bf75b Remove MoveToComplement.Dockerfile
Also added some debug logging in order to try and figure out why the
main homeserver config file isn't getting generated by start.py
2021-01-07 00:08:29 -05:00
Mathieu Velten
1acb2d9ee1 Remove replication listener from the global template 2020-12-31 17:39:24 +01:00
Mathieu Velten
f73e9db981 Various fixes + force TLS disabled 2020-12-31 15:15:07 +01:00
Mathieu Velten
cbe335d2f0 Add more workers 2020-12-31 00:38:05 +01:00
Mathieu Velten
ee138d87db Move client_max_body_size to server and increase to 100M 2020-12-31 00:21:03 +01:00
Mathieu Velten
dfd5e8079b Add more workers config 2020-12-31 00:16:11 +01:00
Mathieu Velten
80db995e33 Change to more dynamic workers config 2020-12-30 20:07:01 +01:00
Andrew Morgan
fa8bc0ba39 Only expose nginx listening port (8008). Add more worker configs 2020-12-30 13:51:38 +00:00
Andrew Morgan
62ac8b9c0d Get Synapse main and worker process startup working! 2020-12-15 19:15:55 +00:00
Andrew Morgan
422d40e82f major wip 2020-12-14 18:34:58 +00:00
Patrick Cloke
3af0672350 Improve tests for structured logging. (#8916) 2020-12-11 07:25:01 -05:00
Dirk Klimpel
0a34cdfc66 Add number of local devices to Room Details Admin API (#8886) 2020-12-11 10:42:47 +00:00
Erik Johnston
1d55c7b567 Don't ratelimit autojoining of rooms (#8921)
Fixes #8866
2020-12-11 10:17:49 +00:00
Richard van der Hoff
dc016c66ae Don't publish latest docker image until all archs are built (#8909) 2020-12-10 17:00:29 +00:00
Erik Johnston
80a992d7b9 Fix deadlock on SIGHUP (#8918)
Fixes #8892
2020-12-10 16:56:05 +00:00
Richard van der Hoff
c64002e1c1 Refactor SsoHandler.get_mxid_from_sso (#8900)
* Factor out _call_attribute_mapper and _register_mapped_user

This is mostly an attempt to simplify `get_mxid_from_sso`.

* Move mapping_lock down into SsoHandler.
2020-12-10 12:43:58 +00:00
Richard van der Hoff
1821f7cc26 Fix buglet in DirectRenderJsonResource (#8897)
this was using `canonical_json` without setting it, so when you used it as a
standalone class, you would get exceptions.
2020-12-10 12:42:55 +00:00
Dirk Klimpel
a5f7aff5e5 Deprecate Shutdown Room and Purge Room Admin API (#8829)
Deprecate both APIs in favour of the Delete Room API.

Related: #8663 and #8810
2020-12-10 11:42:48 +00:00
Patrick Cloke
344ab0b53a Default to blacklisting reserved IP ranges and add a whitelist. (#8870)
This defaults `ip_range_blacklist` to reserved IP ranges and also adds an
`ip_range_whitelist` setting to override it.
2020-12-09 13:56:06 -05:00
Patrick Cloke
6ff34e00d9 Skip the SAML tests if xmlsec1 isn't available. (#8905) 2020-12-09 12:23:30 -05:00
Dirk Klimpel
43bf3c5178 Combine related media admin API docs (#8839)
Related: #8810
Also a few small improvements.

Signed-off-by: Dirk Klimpel dirk@klimpel.org
2020-12-09 16:19:57 +00:00
Richard van der Hoff
a4a5c7a35e Merge remote-tracking branch 'origin/master' into develop 2020-12-09 16:13:52 +00:00
Richard van der Hoff
3e8292d483 Merge pull request #8906 from matrix-org/rav/fix_multiarch_builds
Pin the docker version for multiarch builds
2020-12-09 16:03:12 +00:00
Erik Johnston
cf7d3c90d6 Merge branch 'release-v1.24.0' into develop 2020-12-09 16:01:12 +00:00
Richard van der Hoff
9bbbb11ac2 Pin the docker version for multiarch builds
It seems that letting CircleCI use its default docker version (17.09.0-ce,
apparently) did not interact well with multiarch builds: in particular, we saw
weird effects where running an amd64 build at the same time as an arm64 build
caused the arm64 builds to fail with:

   Error while loading /usr/sbin/dpkg-deb: No such file or directory
2020-12-09 15:51:11 +00:00
Erik Johnston
57068eae75 Add 'xmlsec1' to dependency list 2020-12-09 13:48:16 +00:00
Erik Johnston
fd83debcc0 Merge branch 'master' into develop 2020-12-09 11:30:08 +00:00
Erik Johnston
320e8c8064 Merge tag 'v1.23.1'
Synapse 1.23.1 (2020-12-09)
===========================

Due to the two security issues highlighted below, server administrators are
encouraged to update Synapse. We are not aware of these vulnerabilities being
exploited in the wild.

Security advisory
-----------------

The following issues are fixed in v1.23.1 and v1.24.0.

- There is a denial of service attack
  ([CVE-2020-26257](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26257))
  against the federation APIs in which future events will not be correctly sent
  to other servers over federation. This affects all servers that participate in
  open federation. (Fixed in [#8776](https://github.com/matrix-org/synapse/pull/8776)).

- Synapse may be affected by OpenSSL
  [CVE-2020-1971](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1971).
  Synapse administrators should ensure that they have the latest versions of
  the cryptography Python package installed.

To upgrade Synapse along with the cryptography package:

* Administrators using the [`matrix.org` Docker
  image](https://hub.docker.com/r/matrixdotorg/synapse/) or the [Debian/Ubuntu
  packages from
  `matrix.org`](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#matrixorg-packages)
  should ensure that they have version 1.24.0 or 1.23.1 installed: these images include
  the updated packages.
* Administrators who have [installed Synapse from
  source](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#installing-from-source)
  should upgrade the cryptography package within their virtualenv by running:
  ```sh
  <path_to_virtualenv>/bin/pip install 'cryptography>=3.3'
  ```
* Administrators who have installed Synapse from distribution packages should
  consult the information from their distributions.

Bugfixes
--------

- Fix a bug in some federation APIs which could lead to unexpected behaviour if different parameters were set in the URI and the request body. ([\#8776](https://github.com/matrix-org/synapse/issues/8776))

Internal Changes
----------------

- Add a maximum version for pysaml2 on Python 3.5. ([\#8898](https://github.com/matrix-org/synapse/issues/8898))
2020-12-09 11:29:56 +00:00
Erik Johnston
adfc9cb53d Merge branch 'master' into develop 2020-12-09 11:26:48 +00:00
Erik Johnston
1cec3d1457 1.23.1 2020-12-09 11:07:41 +00:00
Patrick Cloke
0eb9b2f866 Fix installing pysaml2 on Python 3.5. (#8898)
This pins pysaml2 to < 6.4.0 on Python 3.5, as the last known working version.
2020-12-09 10:38:46 +00:00
Richard van der Hoff
3ce2f303f1 Consistently use room_id from federation request body (#8776)
* Consistently use room_id from federation request body

Some federation APIs have a redundant `room_id` path param (see
https://github.com/matrix-org/matrix-doc/issues/2330). We should make sure we
consistently use either the path param or the body param, and the body param is
easier.

* Kill off some references to "context"

Once upon a time, "rooms" were known as "contexts". I think this kills off the
last references to "contexts".
2020-12-09 10:38:39 +00:00
Aaron Raimist
cd9e72b185 Add X-Robots-Tag header to stop crawlers from indexing media (#8887)
Fixes / related to: https://github.com/matrix-org/synapse/issues/6533

This should do essentially the same thing as a robots.txt file telling robots to not index the media repo. https://developers.google.com/search/reference/robots_meta_tag

Signed-off-by: Aaron Raimist <aaron@raim.ist>
2020-12-08 22:51:03 +00:00
Richard van der Hoff
ab7a24cc6b Better formatting for config errors from modules (#8874)
The idea is that the parse_config method of extension modules can raise either a ConfigError or a JsonValidationError,
and it will be magically turned into a legible error message. There are a few components to it:

* Separating the "path" and the "message" parts of a ConfigError, so that we can fiddle with the path bit to turn it
   into an absolute path.
* Generally improving the way ConfigErrors get printed.
* Passing in the config path to load_module so that it can wrap any exceptions that get caught appropriately.
2020-12-08 14:04:35 +00:00
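To make the mechanism concrete, here is a rough sketch of a module config parser using it. The module and option names are hypothetical, and the exact `ConfigError` signature is an assumption based on the description above:
```python
from synapse.config import ConfigError

class MyAuthProvider:
    @staticmethod
    def parse_config(config):
        # With the message and path separated, Synapse can rewrite the path
        # into an absolute one within the overall config when printing it.
        if "endpoint" not in config:
            raise ConfigError("'endpoint' is required", path=("endpoint",))
        return config
```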
Richard van der Hoff
36ba73f53d Simplify the flow for SSO UIA (#8881)
* SsoHandler: remove inheritance from BaseHandler

* Simplify the flow for SSO UIA

We don't need to do all the magic for mapping users when we are doing UIA, so
let's factor that out.
2020-12-08 14:03:38 +00:00
Richard van der Hoff
025fa06fc7 Clarify config template comments (#8891) 2020-12-08 14:03:08 +00:00
Will Hunt
ff1f0ee094 Call set_avatar_url with target_user, not user_id (#8872)
* Call set_avatar_url with target_user, not user_id

Fixes https://github.com/matrix-org/synapse/issues/8871

* Create 8872.bugfix

* Update synapse/rest/admin/users.py

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>

* Testing

* Update changelog.d/8872.bugfix

Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>

Co-authored-by: Patrick Cloke <clokep@users.noreply.github.com>
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2020-12-07 19:13:07 +00:00
Patrick Cloke
1f3748f033 Do not raise a 500 exception when previewing empty media. (#8883) 2020-12-07 10:00:08 -05:00
Patrick Cloke
92d87c6882 Add type hints for HTTP and email pushers. (#8880) 2020-12-07 09:59:38 -05:00
Patrick Cloke
02e588856a Add type hints to the push mailer module. (#8882) 2020-12-07 07:10:22 -05:00
Patrick Cloke
96358cb424 Add authentication to replication endpoints. (#8853)
Authentication is done by checking a shared secret provided
in the Synapse configuration file.
2020-12-04 10:56:28 -05:00
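In configuration terms this is the new `worker_replication_secret` option (added to the sample config later in this changeset); a minimal sketch:
```yaml
# Must match between the main process and all workers; value illustrative.
worker_replication_secret: "some-long-random-string"
```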
Erik Johnston
df4b1e9c74 Pass room_id to get_auth_chain_difference (#8879)
This is so that we can choose which algorithm to use based on the room ID.
2020-12-04 15:52:49 +00:00
Patrick Cloke
b774c555d8 Add additional validation to pusher URLs. (#8865)
Pusher URLs now must end in `/_matrix/push/v1/notify` per the
specification.
2020-12-04 10:51:56 -05:00
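A minimal illustration of the new constraint (not Synapse's actual validation code, just the shape of the check):
```python
from urllib.parse import urlparse

def is_valid_pusher_url(url: str) -> bool:
    # Per this change, HTTP pusher URLs must end in the path mandated
    # by the push gateway specification.
    return urlparse(url).path.endswith("/_matrix/push/v1/notify")

assert is_valid_pusher_url("https://push.example.com/_matrix/push/v1/notify")
assert not is_valid_pusher_url("https://push.example.com/other/path")
```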
Patrick Cloke
df3e6a23a7 Do not 500 if the content-length is not provided when uploading media. (#8862)
Instead return the proper 400 error.
2020-12-04 10:26:09 -05:00
Patrick Cloke
112f6bd49e Merge tag 'v1.24.0rc2' into develop
Synapse 1.24.0rc2 (2020-12-04)
==============================

Bugfixes
--------

- Fix a regression in v1.24.0rc1 which failed to allow SAML mapping providers which were unable to redirect users to an additional page. ([\#8878](https://github.com/matrix-org/synapse/issues/8878))

Internal Changes
----------------

- Add support for the `prometheus_client` newer than 0.9.0. Contributed by Jordan Bancino. ([\#8875](https://github.com/matrix-org/synapse/issues/8875))
2020-12-04 09:14:31 -05:00
Richard van der Hoff
6e4f71c057 Fix a buglet in the SAML username mapping provider doc (#8873)
the constructor is called with a `module_api`.
2020-12-04 10:14:15 +00:00
Richard van der Hoff
cf3b8156be Fix errorcode for disabled registration (#8867)
The spec says we should return `M_FORBIDDEN` when someone tries to register and
registration is disabled.
2020-12-03 15:41:19 +00:00
Richard van der Hoff
66f75c5b74 Merge pull request #8861 from matrix-org/rav/remove_unused_mocks
Remove some unnecessary mocking from the unit tests
2020-12-03 10:02:47 +00:00
Richard van der Hoff
269ba1bc84 Merge remote-tracking branch 'origin/develop' into rav/remove_unused_mocks 2020-12-02 20:08:46 +00:00
Richard van der Hoff
ed5172852a Merge pull request #8858 from matrix-org/rav/sso_uia
UIA: offer only available auth flows
2020-12-02 20:06:53 +00:00
Richard van der Hoff
f347f0cd58 remove unused FakeResponse (#8864) 2020-12-02 18:58:25 +00:00
Richard van der Hoff
935732768c newsfile 2020-12-02 18:54:15 +00:00
Richard van der Hoff
0bac276890 UIA: offer only available auth flows
During user-interactive auth, do not offer password auth to users with no
password, nor SSO auth to users with no SSO.

Fixes #7559.
2020-12-02 18:54:15 +00:00
Richard van der Hoff
92ce4a5258 changelog 2020-12-02 18:38:29 +00:00
Richard van der Hoff
b751624ff8 remove unused DeferredMockCallable 2020-12-02 18:38:29 +00:00
Richard van der Hoff
c834f1d67a remove unused resource_for_federation
This is now only used in `test_typing`, so move it there.
2020-12-02 18:38:29 +00:00
Richard van der Hoff
76469898ee Factor out FakeResponse from test_oidc 2020-12-02 18:30:29 +00:00
Richard van der Hoff
90cf1eec44 Remove redundant mocking 2020-12-02 17:53:38 +00:00
Richard van der Hoff
7ea85302f3 fix up various test cases
A few test cases were relying on being able to mount non-client servlets on the
test resource. It's better to give them their own Resources.
2020-12-02 16:30:01 +00:00
Patrick Cloke
30fba62108 Apply an IP range blacklist to push and key revocation requests. (#8821)
Replaces the `federation_ip_range_blacklist` configuration setting with an
`ip_range_blacklist` setting with wider scope. It now applies to:

* Federation
* Identity servers
* Push notifications
* Checking key validity for third-party invite events

The old `federation_ip_range_blacklist` setting is still honored if present, but
with reduced scope (it only applies to federation and identity servers).
2020-12-02 11:09:24 -05:00
Erik Johnston
c5b6abd53d Correctly handle unpersisted events when calculating auth chain difference. (#8827)
We do state res with unpersisted events when calculating the new current state of the room, so that should be the only thing impacted. I don't think this is tooooo big of a deal as:

1. the next time a state event happens in the room the current state should correct itself;
2. in the common case all the unpersisted events' auth events will be pulled in by other state, so will still return the correct result (or one which is sufficiently close to not affect the result); and
3. we mostly use the state at an event to do important operations, which isn't affected by this.
2020-12-02 15:22:37 +00:00
Richard van der Hoff
693516e756 Add create_resource_dict method to HomeserverTestCase
Rather than using a single JsonResource, construct a resource tree, as we do in
the prod code, and allow testcases to add extra resources by overriding
`create_resource_dict`.
2020-12-02 15:21:00 +00:00
Johanna Dorothea Reichmann
0fed46ebe5 Add missing prometheus rules for persisted events (#8802)
The official dashboard uses data from these rules, but they were never added to the synapse-v2.rules. They are mentioned in this issue: https://github.com/matrix-org/synapse/issues/7917#issuecomment-661330409, but never got added to the rules.

Adding them results in all graphs in the "Event persist rate" section to function as intended.

Signed-off-by: Johanna Dorothea Reichmann <transcaffeine@finallycoffee.eu>
2020-12-02 15:18:41 +00:00
David Florness
c4675e1b24 Add additional validation for the admin register endpoint. (#8837)
Raise a proper 400 error if the `mac` field is missing.
2020-12-02 10:01:15 -05:00
163 changed files with 2930 additions and 878 deletions


@@ -5,9 +5,10 @@ jobs:
       - image: docker:git
     steps:
       - checkout
-      - setup_remote_docker
+      - docker_prepare
       - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
+      # for release builds, we want to get the amd64 image out asap, so first
+      # we do an amd64-only build, before following up with a multiarch build.
       - docker_build:
           tag: -t matrixdotorg/synapse:${CIRCLE_TAG}
           platforms: linux/amd64
@@ -20,12 +21,10 @@ jobs:
       - image: docker:git
     steps:
       - checkout
-      - setup_remote_docker
+      - docker_prepare
       - run: docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
-      - docker_build:
-          tag: -t matrixdotorg/synapse:latest
-          platforms: linux/amd64
+      # for `latest`, we don't want the arm images to disappear, so don't update the tag
+      # until all of the platforms are built.
       - docker_build:
           tag: -t matrixdotorg/synapse:latest
           platforms: linux/amd64,linux/arm/v7,linux/arm64
@@ -46,12 +45,16 @@ workflows:
 commands:
   docker_prepare:
-    description: Downloads the buildx cli plugin and enables multiarch images
+    description: Sets up a remote docker server, downloads the buildx cli plugin, and enables multiarch images
     parameters:
       buildx_version:
         type: string
         default: "v0.4.1"
     steps:
+      - setup_remote_docker:
+          # 19.03.13 was the most recent available on circleci at the time of
+          # writing.
+          version: 19.03.13
       - run: apk add --no-cache curl
       - run: mkdir -vp ~/.docker/cli-plugins/ ~/dockercache
       - run: curl --silent -L "https://github.com/docker/buildx/releases/download/<< parameters.buildx_version >>/buildx-<< parameters.buildx_version >>.linux-amd64" > ~/.docker/cli-plugins/docker-buildx


@@ -1,3 +1,18 @@
Synapse 1.25.0 (2020-xx-xx)
===========================
Removal warning
---------------
The old [Purge Room API](https://github.com/matrix-org/synapse/tree/master/docs/admin_api/purge_room.md)
and [Shutdown Room API](https://github.com/matrix-org/synapse/tree/master/docs/admin_api/shutdown_room.md)
are deprecated and will be removed in a future release. They will be replaced by the
[Delete Room API](https://github.com/matrix-org/synapse/tree/master/docs/admin_api/rooms.md#delete-room-api).
`POST /_synapse/admin/v1/rooms/<room_id>/delete` replaces `POST /_synapse/admin/v1/purge_room` and
`POST /_synapse/admin/v1/shutdown_room/<room_id>`.
Synapse 1.24.0 (2020-12-09)
===========================
@@ -44,6 +59,58 @@ Internal Changes
- Add a maximum version for pysaml2 on Python 3.5. ([\#8898](https://github.com/matrix-org/synapse/issues/8898))
Synapse 1.23.1 (2020-12-09)
===========================
Due to the two security issues highlighted below, server administrators are
encouraged to update Synapse. We are not aware of these vulnerabilities being
exploited in the wild.
Security advisory
-----------------
The following issues are fixed in v1.23.1 and v1.24.0.
- There is a denial of service attack
([CVE-2020-26257](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-26257))
against the federation APIs in which future events will not be correctly sent
to other servers over federation. This affects all servers that participate in
open federation. (Fixed in [#8776](https://github.com/matrix-org/synapse/pull/8776)).
- Synapse may be affected by OpenSSL
[CVE-2020-1971](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2020-1971).
Synapse administrators should ensure that they have the latest versions of
the cryptography Python package installed.
To upgrade Synapse along with the cryptography package:
* Administrators using the [`matrix.org` Docker
image](https://hub.docker.com/r/matrixdotorg/synapse/) or the [Debian/Ubuntu
packages from
`matrix.org`](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#matrixorg-packages)
should ensure that they have version 1.24.0 or 1.23.1 installed: these images include
the updated packages.
* Administrators who have [installed Synapse from
source](https://github.com/matrix-org/synapse/blob/master/INSTALL.md#installing-from-source)
should upgrade the cryptography package within their virtualenv by running:
```sh
<path_to_virtualenv>/bin/pip install 'cryptography>=3.3'
```
* Administrators who have installed Synapse from distribution packages should
consult the information from their distributions.
Bugfixes
--------
- Fix a bug in some federation APIs which could lead to unexpected behaviour if different parameters were set in the URI and the request body. ([\#8776](https://github.com/matrix-org/synapse/issues/8776))
Internal Changes
----------------
- Add a maximum version for pysaml2 on Python 3.5. ([\#8898](https://github.com/matrix-org/synapse/issues/8898))
Synapse 1.24.0rc2 (2020-12-04)
==============================


@@ -557,10 +557,9 @@ This is critical from a security perspective to stop arbitrary Matrix users
 spidering 'internal' URLs on your network. At the very least we recommend that
 your loopback and RFC1918 IP addresses are blacklisted.

-This also requires the optional `lxml` and `netaddr` python dependencies to be
-installed. This in turn requires the `libxml2` library to be available - on
-Debian/Ubuntu this means `apt-get install libxml2-dev`, or equivalent for
-your OS.
+This also requires the optional `lxml` python dependency to be installed. This
+in turn requires the `libxml2` library to be available - on Debian/Ubuntu this
+means `apt-get install libxml2-dev`, or equivalent for your OS.
# Troubleshooting Installation


@@ -75,6 +75,27 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.25.0
====================
Blacklisting IP ranges
----------------------
Synapse v1.25.0 includes new settings, ``ip_range_blacklist`` and
``ip_range_whitelist``, for controlling outgoing requests from Synapse for federation,
identity servers, push, and for checking key validity for third-party invite events.
The previous setting, ``federation_ip_range_blacklist``, is deprecated. The new
``ip_range_blacklist`` defaults to private IP ranges if it is not defined.
If you have never customised ``federation_ip_range_blacklist`` it is recommended
that you remove that setting.
If you have customised ``federation_ip_range_blacklist`` you should update the
setting name to ``ip_range_blacklist``.
If you have a custom push server that is reached via private IP space you may
need to customise ``ip_range_blacklist`` or ``ip_range_whitelist``.
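Concretely, the migration amounts to a rename plus, where needed, an exception (values illustrative):
```yaml
# Before:
#federation_ip_range_blacklist:
#  - '10.0.0.0/8'

# After:
ip_range_blacklist:
  - '10.0.0.0/8'

# Optional: allow a push server that lives in private IP space.
ip_range_whitelist:
  - '10.1.2.3/32'
```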
Upgrading to v1.24.0
====================

changelog.d/8802.doc Normal file

@@ -0,0 +1 @@
Fix the "Event persist rate" section of the included grafana dashboard by adding missing prometheus rules.

changelog.d/8821.bugfix Normal file

@@ -0,0 +1 @@
Apply an IP range blacklist to push and key revocation requests.

changelog.d/8827.bugfix Normal file

@@ -0,0 +1 @@
Fix bug where we might not correctly calculate the current state for rooms with multiple extremities.

changelog.d/8829.removal Normal file

@@ -0,0 +1 @@
Deprecate Shutdown Room and Purge Room Admin APIs.

changelog.d/8837.bugfix Normal file

@@ -0,0 +1 @@
Fix a long standing bug in the register admin endpoint (`/_synapse/admin/v1/register`) when the `mac` field was not provided. The endpoint now properly returns a 400 error. Contributed by @edwargix.

changelog.d/8839.doc Normal file

@@ -0,0 +1 @@
Combine related media admin API docs.

changelog.d/8853.feature Normal file

@@ -0,0 +1 @@
Add optional HTTP authentication to replication endpoints.

changelog.d/8858.bugfix Normal file

@@ -0,0 +1 @@
Fix a long-standing bug on Synapse instances supporting Single-Sign-On, where users would be prompted to enter their password to confirm certain actions, even though they have not set a password.

changelog.d/8861.misc Normal file

@@ -0,0 +1 @@
Remove some unnecessary stubbing from unit tests.

changelog.d/8862.bugfix Normal file

@@ -0,0 +1 @@
Fix a longstanding bug where a 500 error would be returned if the `Content-Length` header was not provided to the upload media resource.

changelog.d/8864.misc Normal file

@@ -0,0 +1 @@
Remove unused `FakeResponse` class from unit tests.

changelog.d/8865.bugfix Normal file

@@ -0,0 +1 @@
Add additional validation to pusher URLs to be compliant with the specification.

changelog.d/8867.bugfix Normal file

@@ -0,0 +1 @@
Fix the error code that is returned when a user tries to register on a homeserver on which new-user registration has been disabled.

changelog.d/8870.bugfix Normal file

@@ -0,0 +1 @@
Apply an IP range blacklist to push and key revocation requests.

changelog.d/8872.bugfix Normal file

@@ -0,0 +1 @@
Fix a bug where `PUT /_synapse/admin/v2/users/<user_id>` failed to create a new user when `avatar_url` is specified. Bug introduced in Synapse v1.9.0.

changelog.d/8873.doc Normal file

@@ -0,0 +1 @@
Fix an error in the documentation for the SAML username mapping provider.

changelog.d/8874.feature Normal file

@@ -0,0 +1 @@
Improve the error messages printed as a result of configuration problems for extension modules.

changelog.d/8879.misc Normal file

@@ -0,0 +1 @@
Pass `room_id` to `get_auth_chain_difference`.

changelog.d/8880.misc Normal file

@@ -0,0 +1 @@
Add type hints to push module.

changelog.d/8881.misc Normal file

@@ -0,0 +1 @@
Simplify logic for handling user-interactive-auth via single-sign-on servers.

changelog.d/8882.misc Normal file

@@ -0,0 +1 @@
Add type hints to push module.

changelog.d/8883.bugfix Normal file

@@ -0,0 +1 @@
Fix a 500 error when attempting to preview an empty HTML file.

changelog.d/8886.feature Normal file

@@ -0,0 +1 @@
Add number of local devices to Room Details Admin API. Contributed by @dklimpel.

changelog.d/8887.feature Normal file

@@ -0,0 +1 @@
Add `X-Robots-Tag` header to stop web crawlers from indexing media.

changelog.d/8891.doc Normal file

@@ -0,0 +1 @@
Clarify comments around template directories in `sample_config.yaml`.

changelog.d/8897.feature Normal file

@@ -0,0 +1 @@
Add support for allowing users to pick their own user ID during a single-sign-on login.

changelog.d/8900.feature Normal file

@@ -0,0 +1 @@
Add support for allowing users to pick their own user ID during a single-sign-on login.

changelog.d/8905.misc Normal file

@@ -0,0 +1 @@
Skip the SAML tests if the requirements (`pysaml2` and `xmlsec1`) aren't available.

changelog.d/8906.misc Normal file

@@ -0,0 +1 @@
Fix multiarch docker image builds.

changelog.d/8909.misc Normal file

@@ -0,0 +1 @@
Don't publish `latest` docker image until all archs are built.

changelog.d/8916.misc Normal file

@@ -0,0 +1 @@
Improve structured logging tests.

changelog.d/8918.bugfix Normal file

@@ -0,0 +1 @@
Fix occasional deadlock when handling SIGHUP.

changelog.d/8921.bugfix Normal file

@@ -0,0 +1 @@
Fix bug where we ratelimited auto joining of rooms on registration (using `auto_join_rooms` config).


@@ -58,3 +58,21 @@ groups:
labels:
type: "PDU"
expr: 'synapse_federation_transaction_queue_pending_pdus + 0'
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_type="remote"})
labels:
type: remote
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity="*client*",origin_type="local"})
labels:
type: local
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity!="*client*",origin_type="local"})
labels:
type: bridges
- record: synapse_storage_events_persisted_by_event_type
expr: sum without(origin_entity, origin_type) (synapse_storage_events_persisted_events_sep)
- record: synapse_storage_events_persisted_by_origin
expr: sum without(type) (synapse_storage_events_persisted_events_sep)

debian/changelog vendored

@@ -4,6 +4,12 @@ matrix-synapse-py3 (1.24.0) stable; urgency=medium
-- Synapse Packaging team <packages@matrix.org> Wed, 09 Dec 2020 10:14:30 +0000
matrix-synapse-py3 (1.23.1) stable; urgency=medium
* New synapse release 1.23.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 09 Dec 2020 10:40:39 +0000
matrix-synapse-py3 (1.23.0) stable; urgency=medium
* New synapse release 1.23.0.


@@ -69,7 +69,8 @@ RUN apt-get update -qq -o Acquire::Languages=none \
python3-setuptools \
python3-venv \
sqlite3 \
-        libpq-dev
+        libpq-dev \
+        xmlsec1
COPY --from=builder /dh-virtualenv_1.2~dev-1_all.deb /

docker/Dockerfile-workers Normal file

@@ -0,0 +1,24 @@
# Inherit from the official Synapse docker image
FROM matrixdotorg/synapse
# Install deps
RUN apt-get update
RUN apt-get install -y supervisor redis nginx
RUN rm /etc/nginx/sites-enabled/default
# Copy the worker process and log configuration files
COPY ./docker/worker.yaml.j2 /conf/worker.yaml.j2
# Expose nginx listener port
EXPOSE 8080/tcp
# Volume for user-editable config files, logs etc.
VOLUME ["/data"]
# A script to read environment variables and create the necessary
# files to run the desired worker configuration. Will start supervisord.
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]
# TODO: Healthcheck? Which worker to ask? Can we ask supervisord?


@@ -0,0 +1,31 @@
# Inherit from the workers Synapse docker image
FROM matrixdotorg/synapse:workers
RUN apt-get update
RUN apt-get install -y postgresql
RUN pg_ctlcluster 11 main start && su postgres -c "echo \
\"ALTER USER postgres PASSWORD 'somesecret'; \
CREATE DATABASE synapse \
ENCODING 'UTF8' \
LC_COLLATE='C' \
LC_CTYPE='C' \
template=template0;\" | psql" && pg_ctlcluster 11 main stop
WORKDIR /root
RUN curl -OL "https://github.com/caddyserver/caddy/releases/download/v2.3.0/caddy_2.3.0_linux_amd64.tar.gz" && \
tar xzf caddy_2.3.0_linux_amd64.tar.gz && rm caddy_2.3.0_linux_amd64.tar.gz
COPY ./docker/caddy.complement.json /root/caddy.json
EXPOSE 8008 8448
ENTRYPOINT sed -i "s/{{ server_name }}/${SERVER_NAME}/g" /root/caddy.json && \
pg_ctlcluster 11 main start > /dev/null && \
/root/caddy start --config /root/caddy.json > /dev/null && \
SYNAPSE_SERVER_NAME=${SERVER_NAME} \
SYNAPSE_REPORT_STATS=no \
POSTGRES_PASSWORD=somesecret POSTGRES_USER=postgres POSTGRES_HOST=localhost \
SYNAPSE_WORKERS=synchrotron \
/configure_workers_and_start.py
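A plausible build sequence for these images, following the `FROM` lines above (the tag for the Complement image is an assumption; this changeset does not pin one):
```sh
# Base Synapse image (the same command appears in scripts-dev below).
docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
# Worker-enabled image, which inherits from the base image.
docker build -t matrixdotorg/synapse:workers -f docker/Dockerfile-workers .
# Complement test image, which inherits from the workers image.
docker build -t complement-synapse-workers -f docker/Dockerfile-complement .
```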


@@ -0,0 +1,76 @@
{
"apps": {
"http": {
"servers": {
"srv0": {
"listen": [
":8448"
],
"routes": [
{
"match": [
{
"host": [
"{{ server_name }}"
]
}
],
"handle": [
{
"handler": "subroute",
"routes": [
{
"handle": [
{
"handler": "reverse_proxy",
"upstreams": [
{
"dial": "localhost:80"
}
]
}
]
}
]
}
],
"terminal": true
}
]
}
}
},
"tls": {
"automation": {
"policies": [
{
"subjects": [
"{{ server_name }}"
],
"issuers": [
{
"module": "internal"
}
],
"on_demand": true
}
]
}
},
"pki": {
"certificate_authorities": {
"local": {
"name": "Complement CA",
"root": {
"certificate": "/ca/ca.crt",
"private_key": "/ca/ca.key"
},
"intermediate": {
"certificate": "/ca/ca.crt",
"private_key": "/ca/ca.key"
}
}
}
}
}
}


@@ -27,8 +27,7 @@ log_config: "{{ SYNAPSE_LOG_CONFIG }}"
 listeners:
 {% if not SYNAPSE_NO_TLS %}
-  -
-    port: 8448
+  - port: 8448
     bind_addresses: ['::']
     type: http
     tls: true
@@ -44,7 +43,7 @@ listeners:
     tls: false
     bind_addresses: ['::']
     type: http
-    x_forwarded: false
+    x_forwarded: true
     resources:
       - names: [client]


@@ -0,0 +1,366 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This script reads environment variables and generates a shared Synapse config,
# plus nginx and supervisord configs, depending on the workers requested.
import os
import sys
import subprocess
import jinja2
import yaml
DEFAULT_LISTENER_RESOURCES = ["client", "federation"]
WORKERS_CONFIG = {
"pusher": {
"app": "synapse.app.pusher",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": "start_pushers: false"
},
"user_dir": {
"app": "synapse.app.user_dir",
"listener_resources": DEFAULT_LISTENER_RESOURCES,
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$"
],
"shared_extra_conf": "update_user_directory: false"
},
"media_repository": {
"app": "synapse.app.media_repository",
"listener_resources": ["media"],
"endpoint_patterns": [
"^/_synapse/admin/v1/purge_media_cache$",
"^/_synapse/admin/v1/room/.*/media.*$",
"^/_synapse/admin/v1/user/.*/media.*$",
"^/_synapse/admin/v1/media/.*$",
"^/_synapse/admin/v1/quarantine_media/.*$",
],
"shared_extra_conf": "enable_media_repo: false"
},
"appservice": {
"app": "synapse.app.appservice",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": "notify_appservices: false"
},
"federation_sender": {
"app": "synapse.app.federation_sender",
"listener_resources": [],
"endpoint_patterns": [],
"shared_extra_conf": "send_federation: false"
},
"synchrotron": {
"app": "synapse.app.generic_worker",
"listener_resources": DEFAULT_LISTENER_RESOURCES,
"endpoint_patterns": [
"^/_matrix/client/(v2_alpha|r0)/sync$",
"^/_matrix/client/(api/v1|v2_alpha|r0)/events$",
"^/_matrix/client/(api/v1|r0)/initialSync$",
"^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$",
],
"shared_extra_conf": ""
},
"federation_reader": {
"app": "synapse.app.generic_worker",
"listener_resources": DEFAULT_LISTENER_RESOURCES,
"endpoint_patterns": [
"^/_matrix/federation/(v1|v2)/event/",
"^/_matrix/federation/(v1|v2)/state/",
"^/_matrix/federation/(v1|v2)/state_ids/",
"^/_matrix/federation/(v1|v2)/backfill/",
"^/_matrix/federation/(v1|v2)/get_missing_events/",
"^/_matrix/federation/(v1|v2)/publicRooms",
"^/_matrix/federation/(v1|v2)/query/",
"^/_matrix/federation/(v1|v2)/make_join/",
"^/_matrix/federation/(v1|v2)/make_leave/",
"^/_matrix/federation/(v1|v2)/send_join/",
"^/_matrix/federation/(v1|v2)/send_leave/",
"^/_matrix/federation/(v1|v2)/invite/",
"^/_matrix/federation/(v1|v2)/query_auth/",
"^/_matrix/federation/(v1|v2)/event_auth/",
"^/_matrix/federation/(v1|v2)/exchange_third_party_invite/",
"^/_matrix/federation/(v1|v2)/user/devices/",
"^/_matrix/federation/(v1|v2)/get_groups_publicised$",
"^/_matrix/key/v2/query",
],
"shared_extra_conf": ""
},
"federation_inbound": {
"app": "synapse.app.generic_worker",
"listener_resources": DEFAULT_LISTENER_RESOURCES,
"endpoint_patterns": [
"/_matrix/federation/(v1|v2)/send/",
],
"shared_extra_conf": ""
},
}
# Utility functions
def log(txt):
print(txt)
def error(txt):
log(txt)
sys.exit(2)
def convert(src, dst, environ):
"""Generate a file from a template
Args:
src (str): path to input file
dst (str): path to file to write
environ (dict): environment dictionary, for replacement mappings.
"""
with open(src) as infile:
template = infile.read()
rendered = jinja2.Template(template, autoescape=True).render(**environ)
print(rendered)
with open(dst, "w") as outfile:
outfile.write(rendered)
def generate_base_homeserver_config():
"""Starts Synapse and generates a basic homeserver config, which will later be
modified for worker support.
Raises: CalledProcessError if calling start.py returns a non-zero exit code.
"""
# start.py already does this for us, so just call that.
# Note that start.py is copied into the image by the official, monolithic Dockerfile.
subprocess.check_output(["/usr/local/bin/python", "/start.py", "migrate_config"])
def generate_worker_files(environ, config_path: str, data_dir: str):
"""Read the desired list of workers from environment variables and generate
shared homeserver, nginx and supervisord configs.
Args:
environ: _Environ[str]
config_path: Where to output the generated Synapse main worker config file.
data_dir: The location of the synapse data directory. Where log and
user-facing config files live.
"""
# Note that yaml cares about indentation, so care should be taken to insert lines
# into files at the correct indentation below.
# The contents of a Synapse config file that will be added alongside the generated
# config when running the main Synapse process.
# It is intended mainly for disabling functionality when certain workers are spun up,
# and for adding the replication listener.
# First, read the original config file to pick up any existing listeners, then add
# the replication listener.
listeners = [{
"port": 9093,
"bind_address": "127.0.0.1",
"type": "http",
"resources":[{
"names": ["replication"]
}]
}]
with open(config_path) as file_stream:
original_config = yaml.safe_load(file_stream)
original_listeners = original_config.get("listeners")
if original_listeners:
listeners += original_listeners
homeserver_config = yaml.dump({"listeners": listeners})
homeserver_config += """
redis:
enabled: true
# TODO: remove before prod
suppress_key_server_warning: true
"""
# The supervisord config
supervisord_config = """
[supervisord]
nodaemon=true
[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
priority=500
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
username=www-data
autorestart=true
[program:synapse_main]
command=/usr/local/bin/python -m synapse.app.homeserver \
--config-path="%s" \
--config-path=/conf/workers/shared.yaml
priority=1
# Log startup failures to supervisord's stdout/err
# Regular synapse logs will still go in the configured data directory
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0
autorestart=unexpected
exitcodes=0
""" % (config_path,)
# An nginx site config. Will live in /etc/nginx/conf.d
nginx_config_template_header = """
server {
# Listen on Synapse's default HTTP port number
listen 8080;
listen [::]:8080;
server_name localhost;
# Nginx by default only allows file uploads up to 1M in size
# Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
client_max_body_size 100M;
"""
nginx_config_body = "" # to modify below
nginx_config_template_end = """
# Send all other traffic to the main process
location ~* ^(\/_matrix|\/_synapse) {
proxy_pass http://localhost:8008;
proxy_set_header X-Forwarded-For $remote_addr;
}
}
"""
# Read desired worker configuration from environment
if "SYNAPSE_WORKERS" not in environ:
worker_types = []
else:
worker_types = environ.get("SYNAPSE_WORKERS")
worker_types = worker_types.split(",")
os.mkdir("/conf/workers")
worker_port = 18009
for worker_type in worker_types:
worker_type = worker_type.strip()
worker_config = WORKERS_CONFIG.get(worker_type)
if worker_config:
worker_config = worker_config.copy()
else:
log(worker_type + " is an unknown worker type! It will be ignored")
continue
# This is not hardcoded because we ultimately want to support running several
# workers of each type (not supported for now).
worker_name = worker_type
worker_config.update({"name": worker_name})
worker_config.update({"port": worker_port})
worker_config.update({"config_path": config_path})
homeserver_config += worker_config['shared_extra_conf'] + "\n"
# Add this worker to the supervisord config
supervisord_config += """
[program:synapse_{name}]
command=/usr/local/bin/python -m {app} \
--config-path="{config_path}" \
--config-path=/conf/workers/shared.yaml \
--config-path=/conf/workers/{name}.yaml
autorestart=unexpected
priority=500
exitcodes=0
stdout_logfile=/dev/stdout
stdout_logfile_maxbytes=0
stderr_logfile=/dev/stderr
stderr_logfile_maxbytes=0""".format_map(worker_config)
for pattern in worker_config['endpoint_patterns']:
nginx_config_body += """
location ~* %s {
proxy_pass http://localhost:%s;
proxy_set_header X-Forwarded-For $remote_addr;
}
""" % (pattern, worker_port)
convert("/conf/worker.yaml.j2", "/conf/workers/{name}.yaml".format(name=worker_name), worker_config)
worker_port += 1
# Write out the config files. We use append mode for each in case the
# files may have already been written to by others.
# Shared homeserver config
print(homeserver_config)
with open("/conf/workers/shared.yaml", "a") as f:
f.write(homeserver_config)
# Nginx config
print()
print(nginx_config_template_header)
print(nginx_config_body)
print(nginx_config_template_end)
with open("/etc/nginx/conf.d/matrix-synapse.conf", "a") as f:
f.write(nginx_config_template_header)
f.write(nginx_config_body)
f.write(nginx_config_template_end)
# Supervisord config
print()
print(supervisord_config)
with open("/etc/supervisor/conf.d/supervisord.conf", "a") as f:
f.write(supervisord_config)
# Ensure the logging directory exists
log_dir = data_dir + "/logs"
if not os.path.exists(log_dir):
os.mkdir(log_dir)
def start_supervisord():
"""Starts up supervisord which then starts and monitors all other necessary processes
Raises: CalledProcessError if calling supervisord returns a non-zero exit code.
"""
subprocess.check_output(["/usr/bin/supervisord"])
def main(args, environ):
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
# override SYNAPSE_NO_TLS, we don't support TLS in worker mode,
# this needs to be handled by a frontend proxy
environ["SYNAPSE_NO_TLS"] = "yes"
# Generate the base homeserver config if one does not yet exist
if not os.path.exists(config_path):
log("Generating base homeserver config")
generate_base_homeserver_config()
# Always regenerate all other config files
generate_worker_files(environ, config_path, data_dir)
# Start supervisord, which will start Synapse, all of the configured worker
# processes, redis, nginx etc. according to the config we created above.
start_supervisord()
if __name__ == "__main__":
main(sys.argv, os.environ)
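Combined with `docker/Dockerfile-workers` above, a hypothetical invocation of this entrypoint could look like the following (environment values illustrative; `SYNAPSE_WORKERS` selects entries from `WORKERS_CONFIG`):
```sh
docker run -d \
  -v /path/to/data:/data \
  -e SYNAPSE_SERVER_NAME=example.com \
  -e SYNAPSE_REPORT_STATS=no \
  -e SYNAPSE_WORKERS=synchrotron,federation_reader \
  -p 8080:8080 \
  matrixdotorg/synapse:workers
```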


@@ -134,6 +134,7 @@ def run_generate_config(environ, ownership):
Never returns.
"""
print("running generate config")
for v in ("SYNAPSE_SERVER_NAME", "SYNAPSE_REPORT_STATS"):
if v not in environ:
error("Environment variable '%s' is mandatory in `generate` mode." % (v,))
@@ -149,6 +150,8 @@ def run_generate_config(environ, ownership):
log("Creating log config %s" % (log_config_file,))
convert("/conf/log.config", log_config_file, environ)
print("Generating config at", config_path, "Config dir:", config_dir)
args = [
"python",
"-m",
@@ -177,8 +180,8 @@ def run_generate_config(environ, ownership):
else:
os.execv("/usr/local/bin/python", args)
def main(args, environ):
print("bla")
mode = args[1] if len(args) > 1 else "run"
desired_uid = int(environ.get("UID", "991"))
desired_gid = int(environ.get("GID", "991"))

docker/worker.yaml.j2 Normal file

@@ -0,0 +1,15 @@
worker_app: "{{ app }}"
worker_name: "{{ name }}"
# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093
worker_listeners:
- type: http
port: {{ port }}
resources:
- names:
{%- for resource in listener_resources %}
- {{ resource }}
{%- endfor %}
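For example, rendered with the `synchrotron` entry from `configure_workers_and_start.py` above (the first worker is assigned port 18009), this template would produce roughly:
```yaml
worker_app: "synapse.app.generic_worker"
worker_name: "synchrotron"

# The replication listener on the main synapse process.
worker_replication_host: 127.0.0.1
worker_replication_http_port: 9093

worker_listeners:
  - type: http
    port: 18009
    resources:
      - names:
        - client
        - federation
```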


@@ -1,3 +1,14 @@
# Contents
- [List all media in a room](#list-all-media-in-a-room)
- [Quarantine media](#quarantine-media)
* [Quarantining media by ID](#quarantining-media-by-id)
* [Quarantining media in a room](#quarantining-media-in-a-room)
* [Quarantining all media of a user](#quarantining-all-media-of-a-user)
- [Delete local media](#delete-local-media)
* [Delete a specific local media](#delete-a-specific-local-media)
* [Delete local media by date or size](#delete-local-media-by-date-or-size)
- [Purge Remote Media API](#purge-remote-media-api)
# List all media in a room
This API gets a list of known media in a room.
@@ -11,16 +22,16 @@ To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
The API returns a JSON body like the following:
-```
+```json
 {
-    "local": [
-        "mxc://localhost/xwvutsrqponmlkjihgfedcba",
-        "mxc://localhost/abcdefghijklmnopqrstuvwx"
-    ],
-    "remote": [
-        "mxc://matrix.org/xwvutsrqponmlkjihgfedcba",
-        "mxc://matrix.org/abcdefghijklmnopqrstuvwx"
-    ]
+  "local": [
+    "mxc://localhost/xwvutsrqponmlkjihgfedcba",
+    "mxc://localhost/abcdefghijklmnopqrstuvwx"
+  ],
+  "remote": [
+    "mxc://matrix.org/xwvutsrqponmlkjihgfedcba",
+    "mxc://matrix.org/abcdefghijklmnopqrstuvwx"
+  ]
 }
 ```
@@ -48,7 +59,7 @@ form of `abcdefg12345...`.
Response:
-```
+```json
{}
```
@@ -68,14 +79,18 @@ Where `room_id` is in the form of `!roomid12345:example.org`.
Response:
-```
+```json
 {
-    "num_quarantined": 10 # The number of media items successfully quarantined
+  "num_quarantined": 10
 }
 ```

+The following fields are returned in the JSON response body:
+
+* `num_quarantined`: integer - The number of media items successfully quarantined
+
 Note that there is a legacy endpoint, `POST
-/_synapse/admin/v1/quarantine_media/<room_id >`, that operates the same.
+/_synapse/admin/v1/quarantine_media/<room_id>`, that operates the same.
However, it is deprecated and may be removed in a future release.
## Quarantining all media of a user
@@ -92,23 +107,29 @@ POST /_synapse/admin/v1/user/<user_id>/media/quarantine
{}
```
 Where `user_id` is in the form of `@bob:example.org`.

+URL Parameters
+
+* `user_id`: string - User ID in the form of `@bob:example.org`
+
 Response:

-```
+```json
 {
-    "num_quarantined": 10 # The number of media items successfully quarantined
+  "num_quarantined": 10
 }
 ```

+The following fields are returned in the JSON response body:
+
+* `num_quarantined`: integer - The number of media items successfully quarantined
# Delete local media
This API deletes the *local* media from the disk of your own server.
This includes any local thumbnails and copies of media downloaded from
remote homeservers.
This API will not affect media that has been uploaded to external
media repositories (e.g https://github.com/turt2live/matrix-media-repo/).
-See also [purge_remote_media.rst](purge_remote_media.rst).
+See also [Purge Remote Media API](#purge-remote-media-api).
## Delete a specific local media
Delete a specific `media_id`.
@@ -129,12 +150,12 @@ URL Parameters
Response:
```json
-    {
-        "deleted_media": [
-            "abcdefghijklmnopqrstuvwx"
-        ],
-        "total": 1
-    }
+{
+  "deleted_media": [
+    "abcdefghijklmnopqrstuvwx"
+  ],
+  "total": 1
+}
```
The following fields are returned in the JSON response body:
@@ -167,16 +188,51 @@ If `false` these files will be deleted. Defaults to `true`.
Response:
```json
-    {
-        "deleted_media": [
-            "abcdefghijklmnopqrstuvwx",
-            "abcdefghijklmnopqrstuvwz"
-        ],
-        "total": 2
-    }
+{
+  "deleted_media": [
+    "abcdefghijklmnopqrstuvwx",
+    "abcdefghijklmnopqrstuvwz"
+  ],
+  "total": 2
+}
```
The following fields are returned in the JSON response body:
* `deleted_media`: an array of strings - List of deleted `media_id`
* `total`: integer - Total number of deleted `media_id`
# Purge Remote Media API
The purge remote media API allows server admins to purge old cached remote media.
The API is:
```
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
{}
```
URL Parameters
* `unix_timestamp_in_ms`: string representing a positive integer - Unix timestamp in ms.
All cached media that was last accessed before this timestamp will be removed.
Response:
```json
{
"deleted": 10
}
```
The following fields are returned in the JSON response body:
* `deleted`: integer - The number of media items successfully deleted
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [README.rst](README.rst).
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.


@@ -1,20 +0,0 @@
Purge Remote Media API
======================
The purge remote media API allows server admins to purge old cached remote
media.
The API is::
POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
{}
\... which will remove all cached media that was last accessed before
``<unix_timestamp_in_ms>``.
To use it, you will need to authenticate by providing an ``access_token`` for a
server admin: see `README.rst <README.rst>`_.
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.


@@ -1,12 +1,13 @@
-Purge room API
-==============
+Deprecated: Purge room API
+==========================
+
+**The old Purge room API is deprecated and will be removed in a future release.
+See the new [Delete Room API](rooms.md#delete-room-api) for more details.**

 This API will remove all trace of a room from your database.
 All local users must have left the room before it can be removed.

-See also: [Delete Room API](rooms.md#delete-room-api)
The API is:
```


@@ -1,3 +1,14 @@
# Contents
- [List Room API](#list-room-api)
* [Parameters](#parameters)
* [Usage](#usage)
- [Room Details API](#room-details-api)
- [Room Members API](#room-members-api)
- [Delete Room API](#delete-room-api)
* [Parameters](#parameters-1)
* [Response](#response)
* [Undoing room shutdowns](#undoing-room-shutdowns)
# List Room API
The List Room admin API allows server admins to get a list of rooms on their
@@ -76,7 +87,7 @@ GET /_synapse/admin/v1/rooms
Response:
-```
+```jsonc
{
"rooms": [
{
@@ -128,7 +139,7 @@ GET /_synapse/admin/v1/rooms?search_term=TWIM
Response:
-```
+```json
{
"rooms": [
{
@@ -163,7 +174,7 @@ GET /_synapse/admin/v1/rooms?order_by=size
Response:
-```
+```jsonc
{
"rooms": [
{
@@ -219,14 +230,14 @@ GET /_synapse/admin/v1/rooms?order_by=size&from=100
Response:
-```
+```jsonc
{
"rooms": [
{
"room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
"name": "Music Theory",
"canonical_alias": "#musictheory:matrix.org",
"joined_members": 127
"joined_members": 127,
"joined_local_members": 2,
"version": "1",
"creator": "@foo:matrix.org",
@@ -243,7 +254,7 @@ Response:
"room_id": "!twcBhHVdZlQWuuxBhN:termina.org.uk",
"name": "weechat-matrix",
"canonical_alias": "#weechat-matrix:termina.org.uk",
"joined_members": 137
"joined_members": 137,
"joined_local_members": 20,
"version": "4",
"creator": "@foo:termina.org.uk",
@@ -278,6 +289,7 @@ The following fields are possible in the JSON response body:
* `canonical_alias` - The canonical (main) alias address of the room.
* `joined_members` - How many users are currently in the room.
* `joined_local_members` - How many local users are currently in the room.
* `joined_local_devices` - How many local devices are currently in the room.
* `version` - The version of the room as a string.
* `creator` - The `user_id` of the room creator.
* `encryption` - Algorithm of end-to-end encryption of messages. Is `null` if encryption is not active.
@@ -300,15 +312,16 @@ GET /_synapse/admin/v1/rooms/<room_id>
Response:
-```
+```json
{
"room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
"name": "Music Theory",
"avatar": "mxc://matrix.org/AQDaVFlbkQoErdOgqWRgiGSV",
"topic": "Theory, Composition, Notation, Analysis",
"canonical_alias": "#musictheory:matrix.org",
"joined_members": 127
"joined_members": 127,
"joined_local_members": 2,
"joined_local_devices": 2,
"version": "1",
"creator": "@foo:matrix.org",
"encryption": null,
@@ -342,13 +355,13 @@ GET /_synapse/admin/v1/rooms/<room_id>/members
Response:
-```
+```json
{
"members": [
"@foo:matrix.org",
"@bar:matrix.org",
"@foobar:matrix.org
],
"@foobar:matrix.org"
],
"total": 3
}
```
@@ -357,8 +370,6 @@ Response:
The Delete Room admin API allows server admins to remove rooms from server
and block these rooms.
-It is a combination and improvement of "[Shutdown room](shutdown_room.md)"
-and "[Purge room](purge_room.md)" API.
Shuts down a room. Moves all local users and room aliases automatically to a
new room if `new_room_user_id` is set. Otherwise local users only
@@ -455,3 +466,30 @@ The following fields are returned in the JSON response body:
* `local_aliases` - An array of strings representing the local aliases that were migrated from
the old room to the new.
* `new_room_id` - A string representing the room ID of the new room.
## Undoing room shutdowns
*Note*: This guide may be outdated by the time you read it. By nature of room shutdowns being performed at the database level,
the structure can and does change without notice.
First, it's important to understand that a room shutdown is very destructive. Undoing a shutdown is not as simple as pretending it
never happened - work has to be done to move forward instead of resetting the past. In fact, in some cases it might not be possible
to recover at all:
* If the room was invite-only, your users will need to be re-invited.
* If the room no longer has any members at all, it'll be impossible to rejoin.
* The first user to rejoin will have to do so via an alias on a different server.
With all that being said, if you still want to try and recover the room:
1. For safety reasons, shut down Synapse.
2. In the database, run `DELETE FROM blocked_rooms WHERE room_id = '!example:example.org';`
* For caution: it's recommended to run this in a transaction: `BEGIN; DELETE ...;`, verify you got 1 result, then `COMMIT;`.
* The room ID is the same one supplied to the shutdown room API, not the Content Violation room.
3. Restart Synapse.
You will have to manually handle, if you so choose, the following:
* Aliases that would have been redirected to the Content Violation room.
* Users that would have been booted from the room (and will have been force-joined to the Content Violation room).
* Removal of the Content Violation room if desired.


@@ -1,4 +1,7 @@
-# Shutdown room API
+# Deprecated: Shutdown room API
+
+**The old Shutdown room API is deprecated and will be removed in a future release.
+See the new [Delete Room API](rooms.md#delete-room-api) for more details.**
Shuts down a room, preventing new joins and moves local users and room aliases automatically
to a new room. The new room will be created with the user specified by the
@@ -10,8 +13,6 @@ disallow any further invites or joins.
The local server will only have the power to move local user and room aliases to
the new room. Users on other servers will be unaffected.
-See also: [Delete Room API](rooms.md#delete-room-api)
## API
You will need to authenticate with an access token for an admin user.


@@ -144,6 +144,35 @@ pid_file: DATADIR/homeserver.pid
#
#enable_search: false
# Prevent outgoing requests from being sent to the following blacklisted IP address
# CIDR ranges. If this option is not specified then it defaults to private IP
# address ranges (see the example below).
#
# The blacklist applies to the outbound requests for federation, identity servers,
# push servers, and for checking key validity for third-party invite events.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
# This option replaces federation_ip_range_blacklist in Synapse v1.25.0.
#
#ip_range_blacklist:
# - '127.0.0.0/8'
# - '10.0.0.0/8'
# - '172.16.0.0/12'
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '192.0.0.0/24'
# - '169.254.0.0/16'
# - '198.18.0.0/15'
# - '192.0.2.0/24'
# - '198.51.100.0/24'
# - '203.0.113.0/24'
# - '224.0.0.0/4'
# - '::1/128'
# - 'fe80::/10'
# - 'fc00::/7'
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#
@@ -642,26 +671,17 @@ acme:
 #  - nyc.example.com
 #  - syd.example.com

-# Prevent federation requests from being sent to the following
-# blacklist IP address CIDR ranges. If this option is not specified, or
-# specified with an empty list, no ip range blacklist will be enforced.
+# List of IP address CIDR ranges that should be allowed for federation,
+# identity servers, push servers, and for checking key validity for
+# third-party invite events. This is useful for specifying exceptions to
+# wide-ranging blacklisted target IP ranges - e.g. for communication with
+# a push server only visible in your network.
 #
-# As of Synapse v1.4.0 this option also affects any outbound requests to identity
-# servers provided by user input.
+# This whitelist overrides ip_range_blacklist and defaults to an empty
+# list.
 #
-# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
-# listed here, since they correspond to unroutable addresses.)
-#
-federation_ip_range_blacklist:
-  - '127.0.0.0/8'
-  - '10.0.0.0/8'
-  - '172.16.0.0/12'
-  - '192.168.0.0/16'
-  - '100.64.0.0/10'
-  - '169.254.0.0/16'
-  - '::1/128'
-  - 'fe80::/64'
-  - 'fc00::/7'
+#ip_range_whitelist:
+#  - '192.168.1.1'
# Report prometheus metrics on the age of PDUs being sent to and received from
# the following domains. This can be used to give an idea of "delay" on inbound
@@ -953,9 +973,15 @@ media_store_path: "DATADIR/media_store"
 #  - '172.16.0.0/12'
 #  - '192.168.0.0/16'
 #  - '100.64.0.0/10'
+#  - '192.0.0.0/24'
 #  - '169.254.0.0/16'
+#  - '198.18.0.0/15'
+#  - '192.0.2.0/24'
+#  - '198.51.100.0/24'
+#  - '203.0.113.0/24'
+#  - '224.0.0.0/4'
 #  - '::1/128'
-#  - 'fe80::/64'
+#  - 'fe80::/10'
 #  - 'fc00::/7'
# List of IP address CIDR ranges that the URL preview spider is allowed
@@ -1877,11 +1903,8 @@ sso:
 #  - https://my.custom.client/

 # Directory in which Synapse will try to find the template files below.
-# If not set, default templates from within the Synapse package will be used.
-#
-# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
-# If you *do* uncomment it, you will need to make sure that all the templates
-# below are in the directory.
+# If not set, or the files named below are not found within the template
+# directory, default templates from within the Synapse package will be used.
 #
 # Synapse will look for the following templates in this directory:
 #
@@ -2111,9 +2134,8 @@ email:
#validation_token_lifetime: 15m
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# Do not uncomment this setting unless you want to customise the templates.
# If not set, or the files named below are not found within the template
# directory, default templates from within the Synapse package will be used.
#
# Synapse will look for the following templates in this directory:
#
@@ -2587,6 +2609,13 @@ opentracing:
#
#run_background_tasks_on: worker1
# A shared secret used by the replication APIs to authenticate HTTP requests
# from workers.
#
# By default this is unused and traffic is not authenticated.
#
#worker_replication_secret: ""
# Configuration for Redis when using workers. This *must* be enabled when
# using workers (unless using old style direct TCP configuration).

View File

@@ -116,11 +116,13 @@ comment these options out and use those specified by the module instead.
A custom mapping provider must specify the following methods:
* `__init__(self, parsed_config)`
* `__init__(self, parsed_config, module_api)`
- Arguments:
- `parsed_config` - A configuration object that is the return value of the
`parse_config` method. You should set any configuration options needed by
the module here.
- `module_api` - a `synapse.module_api.ModuleApi` object which provides the
stable API available for extension modules.
* `parse_config(config)`
- This method should have the `@staticmethod` decorator.
- Arguments:
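Putting the two signatures together, a skeletal provider matching the updated interface might look like this (class and attribute names are illustrative, not from the docs):

```python
class ExampleMappingProvider:
    def __init__(self, parsed_config, module_api):
        # parsed_config is whatever parse_config returned; module_api is a
        # synapse.module_api.ModuleApi instance, per the doc change above.
        self._config = parsed_config
        self._module_api = module_api

    @staticmethod
    def parse_config(config):
        # Validate/normalise the raw dict from the homeserver config here.
        return config
```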

View File

@@ -89,7 +89,8 @@ shared configuration file.
Normally, only a couple of changes are needed to make an existing configuration
file suitable for use with workers. First, you need to enable an "HTTP replication
listener" for the main process; and secondly, you need to enable redis-based
replication. For example:
replication. Optionally, a shared secret can be used to authenticate HTTP
traffic between workers. For example:
```yaml
@@ -103,6 +104,9 @@ listeners:
resources:
- names: [replication]
# Add a random shared secret to authenticate traffic.
worker_replication_secret: ""
redis:
enabled: true
```
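One way to produce a suitable random value for `worker_replication_secret` (an assumption, not from the docs; any sufficiently long random string should do):

```python
import secrets

# 64 hex characters of cryptographically secure randomness.
print(secrets.token_hex(32))
```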

View File

@@ -43,6 +43,7 @@ files =
synapse/handlers/room_member.py,
synapse/handlers/room_member_worker.py,
synapse/handlers/saml_handler.py,
synapse/handlers/sso.py,
synapse/handlers/sync.py,
synapse/handlers/ui_auth,
synapse/http/client.py,
@@ -55,6 +56,10 @@ files =
synapse/metrics,
synapse/module_api,
synapse/notifier.py,
synapse/push/emailpusher.py,
synapse/push/httppusher.py,
synapse/push/mailer.py,
synapse/push/pusher.py,
synapse/push/pusherpool.py,
synapse/push/push_rule_evaluator.py,
synapse/replication,

View File

@@ -0,0 +1,30 @@
#! /bin/bash -eu
# This script is designed for developers who want to test their code
# against Complement.
#
# It creates a Complement-ready worker-enabled Synapse docker image from
# the local checkout and runs Complement tests against it.
#
# This script assumes that it is located in the scripts-dev folder of a
# Synapse checkout, and that Complement exists at ../../complement.
# In my case, I have /home/user/code/complement and /home/user/code/synapse.
COMPLEMENT_DIR="/home/user/code/complement"
cd "$(dirname $0)/.."
# Build the Synapse image from the local checkout
docker build -t matrixdotorg/synapse:latest -f docker/Dockerfile .
# Build the base Synapse worker image
docker build -t matrixdotorg/synapse:workers -f docker/Dockerfile-workers .
cd "$COMPLEMENT_DIR"
# Build the Complement Synapse worker image
docker build -t matrixdotorg/complement-synapse:workers -f dockerfiles/SynapseWorkers.Dockerfile dockerfiles
# Run the tests on the resulting image!
COMPLEMENT_VERSION_CHECK_ITERATIONS=300 COMPLEMENT_DEBUG=1 COMPLEMENT_BASE_IMAGE=matrixdotorg/complement-synapse:workers go test -v -count=1 -tags="synapse_blacklist" -failfast ./tests
#COMPLEMENT_VERSION_CHECK_ITERATIONS=100 COMPLEMENT_DEBUG=1 COMPLEMENT_BASE_IMAGE=complement-synapse go test -v -count=1 -parallel=1 ./tests/
#COMPLEMENT_VERSION_CHECK_ITERATIONS=100 COMPLEMENT_BASE_IMAGE=complement-synapse go test ./tests
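Assuming the script is saved under `scripts-dev/` as its header describes (the exact filename isn't shown here), it can be run directly from the Synapse checkout once `COMPLEMENT_DIR` is adjusted to point at your local Complement clone.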

View File

@@ -245,6 +245,8 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
# Set up the SIGHUP machinery.
if hasattr(signal, "SIGHUP"):
reactor = hs.get_reactor()
@wrap_as_background_process("sighup")
def handle_sighup(*args, **kwargs):
# Tell systemd our state, if we're using it. This will silently fail if
@@ -260,7 +262,9 @@ def start(hs: "synapse.server.HomeServer", listeners: Iterable[ListenerConfig]):
# is so that we're in a sane state, e.g. flushing the logs may fail
# if the sighup happens in the middle of writing a log entry.
def run_sighup(*args, **kwargs):
hs.get_clock().call_later(0, handle_sighup, *args, **kwargs)
# `callFromThread` should be "signal safe" as well as thread
# safe.
reactor.callFromThread(handle_sighup, *args, **kwargs)
signal.signal(signal.SIGHUP, run_sighup)
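A stripped-down sketch of the pattern this change adopts: the signal handler does no real work itself, it only schedules the handler onto the reactor, since `callFromThread` is signal-safe as well as thread-safe:

```python
import signal

from twisted.internet import reactor

def handle_sighup(*args, **kwargs):
    # Real work (flushing logs, reloading certs, ...) happens here, safely
    # on the reactor thread rather than inside the signal handler.
    print("SIGHUP received")

def run_sighup(*args, **kwargs):
    reactor.callFromThread(handle_sighup, *args, **kwargs)

if hasattr(signal, "SIGHUP"):
    signal.signal(signal.SIGHUP, run_sighup)
```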

View File

@@ -266,7 +266,6 @@ class GenericWorkerPresence(BasePresenceHandler):
super().__init__(hs)
self.hs = hs
self.is_mine_id = hs.is_mine_id
self.http_client = hs.get_simple_http_client()
self._presence_enabled = hs.config.use_presence

View File

@@ -19,7 +19,7 @@ import gc
import logging
import os
import sys
from typing import Iterable
from typing import Iterable, Iterator
from twisted.application import service
from twisted.internet import defer, reactor
@@ -90,7 +90,7 @@ class SynapseHomeServer(HomeServer):
tls = listener_config.tls
site_tag = listener_config.http_options.tag
if site_tag is None:
site_tag = port
site_tag = str(port)
# We always include a health resource.
resources = {"/health": HealthResource()}
@@ -107,7 +107,10 @@ class SynapseHomeServer(HomeServer):
logger.debug("Configuring additional resources: %r", additional_resources)
module_api = self.get_module_api()
for path, resmodule in additional_resources.items():
handler_cls, config = load_module(resmodule)
handler_cls, config = load_module(
resmodule,
("listeners", site_tag, "additional_resources", "<%s>" % (path,)),
)
handler = handler_cls(config, module_api)
if IResource.providedBy(handler):
resource = handler
@@ -342,7 +345,10 @@ def setup(config_options):
"Synapse Homeserver", config_options
)
except ConfigError as e:
sys.stderr.write("\nERROR: %s\n" % (e,))
sys.stderr.write("\n")
for f in format_config_error(e):
sys.stderr.write(f)
sys.stderr.write("\n")
sys.exit(1)
if not config:
@@ -445,6 +451,38 @@ def setup(config_options):
return hs
def format_config_error(e: ConfigError) -> Iterator[str]:
"""
Formats a config error neatly
The idea is to format the immediate error, plus the "causes" of those errors,
hopefully in a way that makes sense to the user. For example:
Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
Failed to parse config for module 'JinjaOidcMappingProvider':
invalid jinja template:
unexpected end of template, expected 'end of print statement'.
Args:
e: the error to be formatted
Returns: An iterator which yields string fragments to be formatted
"""
yield "Error in configuration"
if e.path:
yield " at '%s'" % (".".join(e.path),)
yield ":\n %s" % (e.msg,)
e = e.__cause__
indent = 1
while e:
indent += 1
yield ":\n%s%s" % (" " * indent, str(e))
e = e.__cause__
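A hypothetical invocation, chaining causes the way the docstring's example describes:

```python
try:
    raise ConfigError(
        "Failed to parse config for module 'JinjaOidcMappingProvider'",
        path=["oidc_config", "user_mapping_provider", "config"],
    ) from ValueError("invalid jinja template")
except ConfigError as err:
    # Joins the yielded fragments into the nested, indented message shown
    # in the docstring above.
    print("".join(format_config_error(err)))
```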
class SynapseService(service.Service):
"""
A twisted Service class that will start synapse. Used to run synapse

View File

@@ -23,7 +23,7 @@ import urllib.parse
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import Any, Callable, List, MutableMapping, Optional
from typing import Any, Callable, Iterable, List, MutableMapping, Optional
import attr
import jinja2
@@ -32,7 +32,17 @@ import yaml
class ConfigError(Exception):
pass
"""Represents a problem parsing the configuration
Args:
msg: A textual description of the error.
path: Where appropriate, an indication of where in the configuration
the problem lies.
"""
def __init__(self, msg: str, path: Optional[Iterable[str]] = None):
self.msg = msg
self.path = path
# We split these messages out to allow packages to override with package

View File

@@ -1,4 +1,4 @@
from typing import Any, List, Optional
from typing import Any, Iterable, List, Optional
from synapse.config import (
api,
@@ -35,7 +35,10 @@ from synapse.config import (
workers,
)
class ConfigError(Exception): ...
class ConfigError(Exception):
def __init__(self, msg: str, path: Optional[Iterable[str]] = None):
self.msg = msg
self.path = path
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
MISSING_REPORT_STATS_SPIEL: str

View File

@@ -38,14 +38,27 @@ def validate_config(
try:
jsonschema.validate(config, json_schema)
except jsonschema.ValidationError as e:
# copy `config_path` before modifying it.
path = list(config_path)
for p in list(e.path):
if isinstance(p, int):
path.append("<item %i>" % p)
else:
path.append(str(p))
raise json_error_to_config_error(e, config_path)
raise ConfigError(
"Unable to parse configuration: %s at %s" % (e.message, ".".join(path))
)
def json_error_to_config_error(
e: jsonschema.ValidationError, config_path: Iterable[str]
) -> ConfigError:
"""Converts a json validation error to a user-readable ConfigError
Args:
e: the exception to be converted
config_path: the path within the config file. This will be used as a basis
for the error message.
Returns:
a ConfigError
"""
# copy `config_path` before modifying it.
path = list(config_path)
for p in list(e.path):
if isinstance(p, int):
path.append("<item %i>" % p)
else:
path.append(str(p))
return ConfigError(e.message, path)
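A hedged example of the conversion (the schema and config path are invented for illustration):

```python
import jsonschema

schema = {"type": "object", "properties": {"port": {"type": "integer"}}}
try:
    jsonschema.validate({"port": "eight"}, schema)
except jsonschema.ValidationError as e:
    err = json_error_to_config_error(e, ["listeners", "<item 0>"])
    # err.path is now ["listeners", "<item 0>", "port"]
    print(err.msg, ".".join(err.path))
```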

View File

@@ -390,9 +390,8 @@ class EmailConfig(Config):
#validation_token_lifetime: 15m
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# Do not uncomment this setting unless you want to customise the templates.
# If not set, or the files named below are not found within the template
# directory, default templates from within the Synapse package will be used.
#
# Synapse will look for the following templates in this directory:
#

View File

@@ -12,12 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional
from netaddr import IPSet
from synapse.config._base import Config, ConfigError
from synapse.config._base import Config
from synapse.config._util import validate_config
@@ -36,23 +33,6 @@ class FederationConfig(Config):
for domain in federation_domain_whitelist:
self.federation_domain_whitelist[domain] = True
self.federation_ip_range_blacklist = config.get(
"federation_ip_range_blacklist", []
)
# Attempt to create an IPSet from the given ranges
try:
self.federation_ip_range_blacklist = IPSet(
self.federation_ip_range_blacklist
)
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
)
federation_metrics_domains = config.get("federation_metrics_domains") or []
validate_config(
_METRICS_FOR_DOMAINS_SCHEMA,
@@ -76,26 +56,17 @@ class FederationConfig(Config):
# - nyc.example.com
# - syd.example.com
# Prevent federation requests from being sent to the following
# blacklist IP address CIDR ranges. If this option is not specified, or
# specified with an empty list, no ip range blacklist will be enforced.
# List of IP address CIDR ranges that should be allowed for federation,
# identity servers, push servers, and for checking key validity for
# third-party invite events. This is useful for specifying exceptions to
# wide-ranging blacklisted target IP ranges - e.g. for communication with
# a push server only visible in your network.
#
# As of Synapse v1.4.0 this option also affects any outbound requests to identity
# servers provided by user input.
# This whitelist overrides ip_range_blacklist and defaults to an empty
# list.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
federation_ip_range_blacklist:
- '127.0.0.0/8'
- '10.0.0.0/8'
- '172.16.0.0/12'
- '192.168.0.0/16'
- '100.64.0.0/10'
- '169.254.0.0/16'
- '::1/128'
- 'fe80::/64'
- 'fc00::/7'
#ip_range_whitelist:
# - '192.168.1.1'
# Report prometheus metrics on the age of PDUs being sent to and received from
# the following domains. This can be used to give an idea of "delay" on inbound

View File

@@ -66,7 +66,7 @@ class OIDCConfig(Config):
(
self.oidc_user_mapping_provider_class,
self.oidc_user_mapping_provider_config,
) = load_module(ump_config)
) = load_module(ump_config, ("oidc_config", "user_mapping_provider"))
# Ensure loaded user mapping module has defined all necessary methods
required_methods = [

View File

@@ -36,7 +36,7 @@ class PasswordAuthProviderConfig(Config):
providers.append({"module": LDAP_PROVIDER, "config": ldap_config})
providers.extend(config.get("password_providers") or [])
for provider in providers:
for i, provider in enumerate(providers):
mod_name = provider["module"]
# This is for backwards compat when the ldap auth provider resided
@@ -45,7 +45,8 @@ class PasswordAuthProviderConfig(Config):
mod_name = LDAP_PROVIDER
(provider_class, provider_config) = load_module(
{"module": mod_name, "config": provider["config"]}
{"module": mod_name, "config": provider["config"]},
("password_providers", "<item %i>" % i),
)
self.password_providers.append((provider_class, provider_config))
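Illustrative only, assuming `load_module` reports failures as a `ConfigError` carrying the path supplied here:

```python
try:
    load_module(
        {"module": "no.such.module.Provider", "config": {}},
        ("password_providers", "<item 0>"),
    )
except ConfigError as e:
    # The path pinpoints which provider entry was at fault.
    print(e.msg, e.path)
```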

View File

@@ -17,6 +17,9 @@ import os
from collections import namedtuple
from typing import Dict, List
from netaddr import IPSet
from synapse.config.server import DEFAULT_IP_RANGE_BLACKLIST
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.util.module_loader import load_module
@@ -142,7 +145,7 @@ class ContentRepositoryConfig(Config):
# them to be started.
self.media_storage_providers = [] # type: List[tuple]
for provider_config in storage_providers:
for i, provider_config in enumerate(storage_providers):
# We special case the module "file_system" so as not to need to
# expose FileStorageProviderBackend
if provider_config["module"] == "file_system":
@@ -151,7 +154,9 @@ class ContentRepositoryConfig(Config):
".FileStorageProviderBackend"
)
provider_class, parsed_config = load_module(provider_config)
provider_class, parsed_config = load_module(
provider_config, ("media_storage_providers", "<item %i>" % i)
)
wrapper_config = MediaStorageProviderConfig(
provider_config.get("store_local", False),
@@ -182,9 +187,6 @@ class ContentRepositoryConfig(Config):
"to work"
)
# netaddr is a dependency for url_preview
from netaddr import IPSet
self.url_preview_ip_range_blacklist = IPSet(
config["url_preview_ip_range_blacklist"]
)
@@ -213,6 +215,10 @@ class ContentRepositoryConfig(Config):
# strip final NL
formatted_thumbnail_sizes = formatted_thumbnail_sizes[:-1]
ip_range_blacklist = "\n".join(
" # - '%s'" % ip for ip in DEFAULT_IP_RANGE_BLACKLIST
)
return (
r"""
## Media Store ##
@@ -283,15 +289,7 @@ class ContentRepositoryConfig(Config):
# you uncomment the following list as a starting point.
#
#url_preview_ip_range_blacklist:
# - '127.0.0.0/8'
# - '10.0.0.0/8'
# - '172.16.0.0/12'
# - '192.168.0.0/16'
# - '100.64.0.0/10'
# - '169.254.0.0/16'
# - '::1/128'
# - 'fe80::/64'
# - 'fc00::/7'
%(ip_range_blacklist)s
# List of IP address CIDR ranges that the URL preview spider is allowed
# to access even if they are specified in url_preview_ip_range_blacklist.

View File

@@ -180,7 +180,7 @@ class _RoomDirectoryRule:
self._alias_regex = glob_to_regex(alias)
self._room_id_regex = glob_to_regex(room_id)
except Exception as e:
raise ConfigError("Failed to parse glob into regex: %s", e)
raise ConfigError("Failed to parse glob into regex") from e
def matches(self, user_id, room_id, aliases):
"""Tests if this rule matches the given user_id, room_id and aliases.

View File

@@ -125,7 +125,7 @@ class SAML2Config(Config):
(
self.saml2_user_mapping_provider_class,
self.saml2_user_mapping_provider_config,
) = load_module(ump_dict)
) = load_module(ump_dict, ("saml2_config", "user_mapping_provider"))
# Ensure loaded user mapping module has defined all necessary methods
# Note parse_config() is already checked during the call to load_module

View File

@@ -23,6 +23,7 @@ from typing import Any, Dict, Iterable, List, Optional, Set
import attr
import yaml
from netaddr import IPSet
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.http.endpoint import parse_and_validate_server_name
@@ -39,6 +40,34 @@ logger = logging.Logger(__name__)
# in the list.
DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
DEFAULT_IP_RANGE_BLACKLIST = [
# Localhost
"127.0.0.0/8",
# Private networks.
"10.0.0.0/8",
"172.16.0.0/12",
"192.168.0.0/16",
# Carrier grade NAT.
"100.64.0.0/10",
# Address registry.
"192.0.0.0/24",
# Link-local networks.
"169.254.0.0/16",
# Testing networks.
"198.18.0.0/15",
"192.0.2.0/24",
"198.51.100.0/24",
"203.0.113.0/24",
# Multicast.
"224.0.0.0/4",
# Localhost
"::1/128",
# Link-local addresses.
"fe80::/10",
# Unique local addresses.
"fc00::/7",
]
DEFAULT_ROOM_VERSION = "6"
ROOM_COMPLEXITY_TOO_GREAT = (
@@ -256,6 +285,38 @@ class ServerConfig(Config):
# due to resource constraints
self.admin_contact = config.get("admin_contact", None)
ip_range_blacklist = config.get(
"ip_range_blacklist", DEFAULT_IP_RANGE_BLACKLIST
)
# Attempt to create an IPSet from the given ranges
try:
self.ip_range_blacklist = IPSet(ip_range_blacklist)
except Exception as e:
raise ConfigError("Invalid range(s) provided in ip_range_blacklist.") from e
# Always blacklist 0.0.0.0, ::
self.ip_range_blacklist.update(["0.0.0.0", "::"])
try:
self.ip_range_whitelist = IPSet(config.get("ip_range_whitelist", ()))
except Exception as e:
raise ConfigError("Invalid range(s) provided in ip_range_whitelist.") from e
# The federation_ip_range_blacklist is used for backwards-compatibility
# and only applies to federation and identity servers. If it is not given,
# default to ip_range_blacklist.
federation_ip_range_blacklist = config.get(
"federation_ip_range_blacklist", ip_range_blacklist
)
try:
self.federation_ip_range_blacklist = IPSet(federation_ip_range_blacklist)
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in federation_ip_range_blacklist."
) from e
# Always blacklist 0.0.0.0, ::
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
if self.public_baseurl is not None:
if self.public_baseurl[-1] != "/":
self.public_baseurl += "/"
@@ -561,6 +622,10 @@ class ServerConfig(Config):
def generate_config_section(
self, server_name, data_dir_path, open_private_ports, listeners, **kwargs
):
ip_range_blacklist = "\n".join(
" # - '%s'" % ip for ip in DEFAULT_IP_RANGE_BLACKLIST
)
_, bind_port = parse_and_validate_server_name(server_name)
if bind_port is not None:
unsecure_port = bind_port - 400
@@ -752,6 +817,21 @@ class ServerConfig(Config):
#
#enable_search: false
# Prevent outgoing requests from being sent to the following blacklisted IP address
# CIDR ranges. If this option is not specified then it defaults to private IP
# address ranges (see the example below).
#
# The blacklist applies to the outbound requests for federation, identity servers,
# push servers, and for checking key validity for third-party invite events.
#
# (0.0.0.0 and :: are always blacklisted, whether or not they are explicitly
# listed here, since they correspond to unroutable addresses.)
#
# This option replaces federation_ip_range_blacklist in Synapse v1.25.0.
#
#ip_range_blacklist:
%(ip_range_blacklist)s
# List of ports that Synapse should listen on, their purpose and their
# configuration.
#

View File

@@ -33,13 +33,14 @@ class SpamCheckerConfig(Config):
# spam checker, and thus was simply a dictionary with module
# and config keys. Support this old behaviour by checking
# to see if the option resolves to a dictionary
self.spam_checkers.append(load_module(spam_checkers))
self.spam_checkers.append(load_module(spam_checkers, ("spam_checker",)))
elif isinstance(spam_checkers, list):
for spam_checker in spam_checkers:
for i, spam_checker in enumerate(spam_checkers):
config_path = ("spam_checker", "<item %i>" % i)
if not isinstance(spam_checker, dict):
raise ConfigError("spam_checker syntax is incorrect")
raise ConfigError("expected a mapping", config_path)
self.spam_checkers.append(load_module(spam_checker))
self.spam_checkers.append(load_module(spam_checker, config_path))
else:
raise ConfigError("spam_checker syntax is incorrect")

View File

@@ -93,11 +93,8 @@ class SSOConfig(Config):
# - https://my.custom.client/
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
# If not set, or the files named below are not found within the template
# directory, default templates from within the Synapse package will be used.
#
# Synapse will look for the following templates in this directory:
#

View File

@@ -26,7 +26,9 @@ class ThirdPartyRulesConfig(Config):
provider = config.get("third_party_event_rules", None)
if provider is not None:
self.third_party_event_rules = load_module(provider)
self.third_party_event_rules = load_module(
provider, ("third_party_event_rules",)
)
def generate_config_section(self, **kwargs):
return """\

View File

@@ -85,6 +85,9 @@ class WorkerConfig(Config):
# The port on the main synapse for HTTP replication endpoint
self.worker_replication_http_port = config.get("worker_replication_http_port")
# The shared secret used for authentication when connecting to the main synapse.
self.worker_replication_secret = config.get("worker_replication_secret", None)
self.worker_name = config.get("worker_name", self.worker_app)
self.worker_main_http_uri = config.get("worker_main_http_uri", None)
@@ -185,6 +188,13 @@ class WorkerConfig(Config):
# data). If not provided this defaults to the main process.
#
#run_background_tasks_on: worker1
# A shared secret used by the replication APIs to authenticate HTTP requests
# from workers.
#
# By default this is unused and traffic is not authenticated.
#
#worker_replication_secret: ""
"""
def read_arguments(self, args):

View File

@@ -578,7 +578,7 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
def __init__(self, hs):
super().__init__(hs)
self.clock = hs.get_clock()
self.client = hs.get_http_client()
self.client = hs.get_federation_http_client()
self.key_servers = self.config.key_servers
async def get_keys(self, keys_to_fetch):
@@ -748,7 +748,7 @@ class ServerKeyFetcher(BaseV2KeyFetcher):
def __init__(self, hs):
super().__init__(hs)
self.clock = hs.get_clock()
self.client = hs.get_http_client()
self.client = hs.get_federation_http_client()
async def get_keys(self, keys_to_fetch):
"""

View File

@@ -845,7 +845,6 @@ class FederationHandlerRegistry:
def __init__(self, hs: "HomeServer"):
self.config = hs.config
self.http_client = hs.get_simple_http_client()
self.clock = hs.get_clock()
self._instance_name = hs.get_instance_name()

View File

@@ -35,7 +35,7 @@ class TransportLayerClient:
def __init__(self, hs):
self.server_name = hs.hostname
self.client = hs.get_http_client()
self.client = hs.get_federation_http_client()
@log_function
def get_room_state_ids(self, destination, room_id, event_id):

View File

@@ -1462,7 +1462,7 @@ def register_servlets(hs, resource, authenticator, ratelimiter, servlet_groups=N
Args:
hs (synapse.server.HomeServer): homeserver
resource (TransportLayerServer): resource class to register to
resource (JsonResource): resource class to register to
authenticator (Authenticator): authenticator to use
ratelimiter (util.ratelimitutils.FederationRateLimiter): ratelimiter to use
servlet_groups (list[str], optional): List of servlet groups to register.

View File

@@ -32,6 +32,10 @@ logger = logging.getLogger(__name__)
class BaseHandler:
"""
Common base class for the event handlers.
Deprecated: new code should not use this. Instead, Handler classes should define the
fields they actually need. The utility methods should either be factored out to
standalone helper functions, or to different Handler classes.
"""
def __init__(self, hs: "HomeServer"):

View File

@@ -36,6 +36,8 @@ import attr
import bcrypt
import pymacaroons
from twisted.web.http import Request
from synapse.api.constants import LoginType
from synapse.api.errors import (
AuthError,
@@ -193,9 +195,7 @@ class AuthHandler(BaseHandler):
self.hs = hs # FIXME better possibility to access registrationHandler later?
self.macaroon_gen = hs.get_macaroon_generator()
self._password_enabled = hs.config.password_enabled
self._sso_enabled = (
hs.config.cas_enabled or hs.config.saml2_enabled or hs.config.oidc_enabled
)
self._password_localdb_enabled = hs.config.password_localdb_enabled
# we keep this as a list despite the O(N^2) implication so that we can
# keep PASSWORD first and avoid confusing clients which pick the first
@@ -205,7 +205,7 @@ class AuthHandler(BaseHandler):
# start out by assuming PASSWORD is enabled; we will remove it later if not.
login_types = []
if hs.config.password_localdb_enabled:
if self._password_localdb_enabled:
login_types.append(LoginType.PASSWORD)
for provider in self.password_providers:
@@ -219,14 +219,6 @@ class AuthHandler(BaseHandler):
self._supported_login_types = login_types
# Login types and UI Auth types have a heavy overlap, but are not
# necessarily identical. Login types have SSO (and other login types)
# added in the rest layer, see synapse.rest.client.v1.login.LoginRestServlet.on_GET.
ui_auth_types = login_types.copy()
if self._sso_enabled:
ui_auth_types.append(LoginType.SSO)
self._supported_ui_auth_types = ui_auth_types
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.
self._failed_uia_attempts_ratelimiter = Ratelimiter(
@@ -339,7 +331,10 @@ class AuthHandler(BaseHandler):
self._failed_uia_attempts_ratelimiter.ratelimit(user_id, update=False)
# build a list of supported flows
flows = [[login_type] for login_type in self._supported_ui_auth_types]
supported_ui_auth_types = await self._get_available_ui_auth_types(
requester.user
)
flows = [[login_type] for login_type in supported_ui_auth_types]
try:
result, params, session_id = await self.check_ui_auth(
@@ -351,7 +346,7 @@ class AuthHandler(BaseHandler):
raise
# find the completed login type
for login_type in self._supported_ui_auth_types:
for login_type in supported_ui_auth_types:
if login_type not in result:
continue
@@ -367,6 +362,41 @@ class AuthHandler(BaseHandler):
return params, session_id
async def _get_available_ui_auth_types(self, user: UserID) -> Iterable[str]:
"""Get a list of the authentication types this user can use
"""
ui_auth_types = set()
# if the HS supports password auth, and the user has a non-null password, we
# support password auth
if self._password_localdb_enabled and self._password_enabled:
lookupres = await self._find_user_id_and_pwd_hash(user.to_string())
if lookupres:
_, password_hash = lookupres
if password_hash:
ui_auth_types.add(LoginType.PASSWORD)
# also allow auth from password providers
for provider in self.password_providers:
for t in provider.get_supported_login_types().keys():
if t == LoginType.PASSWORD and not self._password_enabled:
continue
ui_auth_types.add(t)
# if sso is enabled, allow the user to log in via SSO iff they have a mapping
# from sso to mxid.
if self.hs.config.saml2.saml2_enabled or self.hs.config.oidc.oidc_enabled:
if await self.store.get_external_ids_by_user(user.to_string()):
ui_auth_types.add(LoginType.SSO)
# Our CAS impl does not (yet) correctly register users in user_external_ids,
# so always offer that if it's available.
if self.hs.config.cas.cas_enabled:
ui_auth_types.add(LoginType.SSO)
return ui_auth_types
def get_enabled_auth_types(self):
"""Return the enabled user-interactive authentication types
@@ -1029,7 +1059,7 @@ class AuthHandler(BaseHandler):
if result:
return result
if login_type == LoginType.PASSWORD and self.hs.config.password_localdb_enabled:
if login_type == LoginType.PASSWORD and self._password_localdb_enabled:
known_login_type = True
# we've already checked that there is a (valid) password field
@@ -1303,15 +1333,14 @@ class AuthHandler(BaseHandler):
)
async def complete_sso_ui_auth(
self, registered_user_id: str, session_id: str, request: SynapseRequest,
self, registered_user_id: str, session_id: str, request: Request,
):
"""Having figured out a mxid for this user, complete the HTTP request
Args:
registered_user_id: The registered user ID to complete SSO login for.
session_id: The ID of the user-interactive auth session.
request: The request to complete.
client_redirect_url: The URL to which to redirect the user at the end of the
process.
"""
# Mark the stage of the authentication as successful.
# Save the user who authenticated with SSO, this will be used to ensure
@@ -1327,7 +1356,7 @@ class AuthHandler(BaseHandler):
async def complete_sso_login(
self,
registered_user_id: str,
request: SynapseRequest,
request: Request,
client_redirect_url: str,
extra_attributes: Optional[JsonDict] = None,
):
@@ -1355,7 +1384,7 @@ class AuthHandler(BaseHandler):
def _complete_sso_login(
self,
registered_user_id: str,
request: SynapseRequest,
request: Request,
client_redirect_url: str,
extra_attributes: Optional[JsonDict] = None,
):

View File

@@ -140,7 +140,7 @@ class FederationHandler(BaseHandler):
self._message_handler = hs.get_message_handler()
self._server_notices_mxid = hs.config.server_notices_mxid
self.config = hs.config
self.http_client = hs.get_simple_http_client()
self.http_client = hs.get_proxied_blacklisted_http_client()
self._instance_name = hs.get_instance_name()
self._replication = hs.get_replication_data_handler()

View File

@@ -46,13 +46,13 @@ class IdentityHandler(BaseHandler):
def __init__(self, hs):
super().__init__(hs)
# An HTTP client for contacting trusted URLs.
self.http_client = SimpleHttpClient(hs)
# We create a blacklisting instance of SimpleHttpClient for contacting identity
# servers specified by clients
# An HTTP client for contacting identity servers specified by clients.
self.blacklisting_http_client = SimpleHttpClient(
hs, ip_blacklist=hs.config.federation_ip_range_blacklist
)
self.federation_http_client = hs.get_http_client()
self.federation_http_client = hs.get_federation_http_client()
self.hs = hs
async def threepid_from_creds(

View File

@@ -674,6 +674,21 @@ class OidcHandler(BaseHandler):
self._sso_handler.render_error(request, "invalid_token", str(e))
return
# first check if we're doing a UIA
if ui_auth_session_id:
try:
remote_user_id = self._remote_id_from_userinfo(userinfo)
except Exception as e:
logger.exception("Could not extract remote user id")
self._sso_handler.render_error(request, "mapping_error", str(e))
return
return await self._sso_handler.complete_sso_ui_auth_request(
self._auth_provider_id, remote_user_id, ui_auth_session_id, request
)
# otherwise, it's a login
# Pull out the user-agent and IP from the request.
user_agent = request.get_user_agent("")
ip_address = self.hs.get_ip_from_request(request)
@@ -698,14 +713,9 @@ class OidcHandler(BaseHandler):
extra_attributes = await get_extra_attributes(userinfo, token)
# and finally complete the login
if ui_auth_session_id:
await self._auth_handler.complete_sso_ui_auth(
user_id, ui_auth_session_id, request
)
else:
await self._auth_handler.complete_sso_login(
user_id, request, client_redirect_url, extra_attributes
)
await self._auth_handler.complete_sso_login(
user_id, request, client_redirect_url, extra_attributes
)
def _generate_oidc_session_token(
self,
@@ -856,14 +866,11 @@ class OidcHandler(BaseHandler):
The mxid of the user
"""
try:
remote_user_id = self._user_mapping_provider.get_remote_user_id(userinfo)
remote_user_id = self._remote_id_from_userinfo(userinfo)
except Exception as e:
raise MappingException(
"Failed to extract subject from OIDC response: %s" % (e,)
)
# Some OIDC providers use integer IDs, but Synapse expects external IDs
# to be strings.
remote_user_id = str(remote_user_id)
# Older mapping providers don't accept the `failures` argument, so we
# try and detect support.
@@ -933,6 +940,19 @@ class OidcHandler(BaseHandler):
grandfather_existing_users,
)
def _remote_id_from_userinfo(self, userinfo: UserInfo) -> str:
"""Extract the unique remote id from an OIDC UserInfo block
Args:
userinfo: An object representing the user given by the OIDC provider
Returns:
remote user id
"""
remote_user_id = self._user_mapping_provider.get_remote_user_id(userinfo)
# Some OIDC providers use integer IDs, but Synapse expects external IDs
# to be strings.
return str(remote_user_id)
UserAttributeDict = TypedDict(
"UserAttributeDict", {"localpart": str, "display_name": Optional[str]}

View File

@@ -440,6 +440,7 @@ class RoomCreationHandler(BaseHandler):
invite_list=[],
initial_state=initial_state,
creation_content=creation_content,
ratelimit=False,
)
# Transfer membership events
@@ -735,6 +736,7 @@ class RoomCreationHandler(BaseHandler):
room_alias=room_alias,
power_level_content_override=power_level_content_override,
creator_join_profile=creator_join_profile,
ratelimit=ratelimit,
)
if "name" in config:
@@ -838,6 +840,7 @@ class RoomCreationHandler(BaseHandler):
room_alias: Optional[RoomAlias] = None,
power_level_content_override: Optional[JsonDict] = None,
creator_join_profile: Optional[JsonDict] = None,
ratelimit: bool = True,
) -> int:
"""Sends the initial events into a new room.
@@ -884,7 +887,7 @@ class RoomCreationHandler(BaseHandler):
creator.user,
room_id,
"join",
ratelimit=False,
ratelimit=ratelimit,
content=creator_join_profile,
)

View File

@@ -203,7 +203,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# Only rate-limit if the user actually joined the room, otherwise we'll end
# up blocking profile updates.
if newly_joined:
if newly_joined and ratelimit:
time_now_s = self.clock.time()
(
allowed,
@@ -488,17 +488,20 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
raise AuthError(403, "Guest access not allowed")
if not is_host_in_room:
time_now_s = self.clock.time()
(
allowed,
time_allowed,
) = self._join_rate_limiter_remote.can_requester_do_action(requester,)
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now_s))
if ratelimit:
time_now_s = self.clock.time()
(
allowed,
time_allowed,
) = self._join_rate_limiter_remote.can_requester_do_action(
requester,
)
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now_s))
)
inviter = await self._get_inviter(target.to_string(), room_id)
if inviter and not self.hs.is_mine(inviter):
remote_room_hosts.append(inviter.domain)

View File

@@ -34,7 +34,6 @@ from synapse.types import (
map_username_to_mxid_localpart,
mxid_localpart_allowed_characters,
)
from synapse.util.async_helpers import Linearizer
from synapse.util.iterutils import chunk_seq
if TYPE_CHECKING:
@@ -81,9 +80,6 @@ class SamlHandler(BaseHandler):
# a map from saml session id to Saml2SessionData object
self._outstanding_requests_dict = {} # type: Dict[str, Saml2SessionData]
# a lock on the mappings
self._mapping_lock = Linearizer(name="saml_mapping", clock=self.clock)
self._sso_handler = hs.get_sso_handler()
def handle_redirect_request(
@@ -183,6 +179,24 @@ class SamlHandler(BaseHandler):
saml2_auth.in_response_to, None
)
# first check if we're doing a UIA
if current_session and current_session.ui_auth_session_id:
try:
remote_user_id = self._remote_id_from_saml_response(saml2_auth, None)
except MappingException as e:
logger.exception("Failed to extract remote user id from SAML response")
self._sso_handler.render_error(request, "mapping_error", str(e))
return
return await self._sso_handler.complete_sso_ui_auth_request(
self._auth_provider_id,
remote_user_id,
current_session.ui_auth_session_id,
request,
)
# otherwise, we're handling a login request.
# Ensure that the attributes of the logged in user meet the required
# attributes.
for requirement in self._saml2_attribute_requirements:
@@ -206,14 +220,7 @@ class SamlHandler(BaseHandler):
self._sso_handler.render_error(request, "mapping_error", str(e))
return
# Complete the interactive auth session or the login.
if current_session and current_session.ui_auth_session_id:
await self._auth_handler.complete_sso_ui_auth(
user_id, current_session.ui_auth_session_id, request
)
else:
await self._auth_handler.complete_sso_login(user_id, request, relay_state)
await self._auth_handler.complete_sso_login(user_id, request, relay_state)
async def _map_saml_response_to_user(
self,
@@ -239,16 +246,10 @@ class SamlHandler(BaseHandler):
RedirectException: some mapping providers may raise this if they need
to redirect to an interstitial page.
"""
remote_user_id = self._user_mapping_provider.get_remote_user_id(
remote_user_id = self._remote_id_from_saml_response(
saml2_auth, client_redirect_url
)
if not remote_user_id:
raise MappingException(
"Failed to extract remote user id from SAML response"
)
async def saml_response_to_remapped_user_attributes(
failures: int,
) -> UserAttributes:
@@ -294,16 +295,44 @@ class SamlHandler(BaseHandler):
return None
with (await self._mapping_lock.queue(self._auth_provider_id)):
return await self._sso_handler.get_mxid_from_sso(
self._auth_provider_id,
remote_user_id,
user_agent,
ip_address,
saml_response_to_remapped_user_attributes,
grandfather_existing_users,
return await self._sso_handler.get_mxid_from_sso(
self._auth_provider_id,
remote_user_id,
user_agent,
ip_address,
saml_response_to_remapped_user_attributes,
grandfather_existing_users,
)
def _remote_id_from_saml_response(
self,
saml2_auth: saml2.response.AuthnResponse,
client_redirect_url: Optional[str],
) -> str:
"""Extract the unique remote id from a SAML2 AuthnResponse
Args:
saml2_auth: The parsed SAML2 response.
client_redirect_url: The redirect URL passed in by the client.
Returns:
remote user id
Raises:
MappingException if there was an error extracting the user id
"""
# It's not obvious why we need to pass in the redirect URI to the mapping
# provider, but we do :/
remote_user_id = self._user_mapping_provider.get_remote_user_id(
saml2_auth, client_redirect_url
)
if not remote_user_id:
raise MappingException(
"Failed to extract remote user id from SAML response"
)
return remote_user_id
def expire_sessions(self):
expire_before = self.clock.time_msec() - self._saml2_session_lifetime
to_expire = set()

View File

@@ -17,10 +17,12 @@ from typing import TYPE_CHECKING, Awaitable, Callable, List, Optional
import attr
from twisted.web.http import Request
from synapse.api.errors import RedirectException
from synapse.handlers._base import BaseHandler
from synapse.http.server import respond_with_html
from synapse.types import UserID, contains_invalid_mxid_characters
from synapse.util.async_helpers import Linearizer
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -42,14 +44,19 @@ class UserAttributes:
emails = attr.ib(type=List[str], default=attr.Factory(list))
class SsoHandler(BaseHandler):
class SsoHandler:
# The number of attempts to ask the mapping provider for when generating an MXID.
_MAP_USERNAME_RETRIES = 1000
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self._store = hs.get_datastore()
self._server_name = hs.hostname
self._registration_handler = hs.get_registration_handler()
self._error_template = hs.config.sso_error_template
self._auth_handler = hs.get_auth_handler()
# a lock on the mappings
self._mapping_lock = Linearizer(name="sso_user_mapping", clock=hs.get_clock())
def render_error(
self, request, error: str, error_description: Optional[str] = None
@@ -95,7 +102,7 @@ class SsoHandler(BaseHandler):
)
# Check if we already have a mapping for this user.
previously_registered_user_id = await self.store.get_user_by_external_id(
previously_registered_user_id = await self._store.get_user_by_external_id(
auth_provider_id, remote_user_id,
)
@@ -169,24 +176,38 @@ class SsoHandler(BaseHandler):
to an additional page. (e.g. to prompt for more information)
"""
# first of all, check if we already have a mapping for this user
previously_registered_user_id = await self.get_sso_user_by_remote_user_id(
auth_provider_id, remote_user_id,
)
if previously_registered_user_id:
return previously_registered_user_id
# Check for grandfathering of users.
if grandfather_existing_users:
previously_registered_user_id = await grandfather_existing_users()
# grab a lock while we try to find a mapping for this user. This seems...
# optimistic, especially for implementations that end up redirecting to
# interstitial pages.
with await self._mapping_lock.queue(auth_provider_id):
# first of all, check if we already have a mapping for this user
previously_registered_user_id = await self.get_sso_user_by_remote_user_id(
auth_provider_id, remote_user_id,
)
if previously_registered_user_id:
# Future logins should also match this user ID.
await self.store.record_user_external_id(
auth_provider_id, remote_user_id, previously_registered_user_id
)
return previously_registered_user_id
# Otherwise, generate a new user.
# Check for grandfathering of users.
if grandfather_existing_users:
previously_registered_user_id = await grandfather_existing_users()
if previously_registered_user_id:
# Future logins should also match this user ID.
await self._store.record_user_external_id(
auth_provider_id, remote_user_id, previously_registered_user_id
)
return previously_registered_user_id
# Otherwise, generate a new user.
attributes = await self._call_attribute_mapper(sso_to_matrix_id_mapper)
user_id = await self._register_mapped_user(
attributes, auth_provider_id, remote_user_id, user_agent, ip_address,
)
return user_id
async def _call_attribute_mapper(
self, sso_to_matrix_id_mapper: Callable[[int], Awaitable[UserAttributes]],
) -> UserAttributes:
"""Call the attribute mapper function in a loop, until we get a unique userid"""
for i in range(self._MAP_USERNAME_RETRIES):
try:
attributes = await sso_to_matrix_id_mapper(i)
@@ -214,8 +235,8 @@ class SsoHandler(BaseHandler):
)
# Check if this mxid already exists
user_id = UserID(attributes.localpart, self.server_name).to_string()
if not await self.store.get_users_by_id_case_insensitive(user_id):
user_id = UserID(attributes.localpart, self._server_name).to_string()
if not await self._store.get_users_by_id_case_insensitive(user_id):
# This mxid is free
break
else:
@@ -224,7 +245,16 @@ class SsoHandler(BaseHandler):
raise MappingException(
"Unable to generate a Matrix ID from the SSO response"
)
return attributes
async def _register_mapped_user(
self,
attributes: UserAttributes,
auth_provider_id: str,
remote_user_id: str,
user_agent: str,
ip_address: str,
) -> str:
# Since the localpart is provided via a potentially untrusted module,
# ensure the MXID is valid before registering.
if contains_invalid_mxid_characters(attributes.localpart):
@@ -238,7 +268,47 @@ class SsoHandler(BaseHandler):
user_agent_ips=[(user_agent, ip_address)],
)
await self.store.record_user_external_id(
await self._store.record_user_external_id(
auth_provider_id, remote_user_id, registered_user_id
)
return registered_user_id
async def complete_sso_ui_auth_request(
self,
auth_provider_id: str,
remote_user_id: str,
ui_auth_session_id: str,
request: Request,
) -> None:
"""
Given an SSO ID, retrieve the user ID for it and complete UIA.
Note that this requires that the user is mapped in the "user_external_ids"
table. This will be the case if they have ever logged in via SAML or OIDC in
recentish synapse versions, but may not be for older users.
Args:
auth_provider_id: A unique identifier for this SSO provider, e.g.
"oidc" or "saml".
remote_user_id: The unique identifier from the SSO provider.
ui_auth_session_id: The ID of the user-interactive auth session.
request: The request to complete.
"""
user_id = await self.get_sso_user_by_remote_user_id(
auth_provider_id, remote_user_id,
)
if not user_id:
logger.warning(
"Remote user %s/%s has not previously logged in here: UIA will fail",
auth_provider_id,
remote_user_id,
)
# Let the UIA flow handle this the same as if they presented creds for a
# different user.
user_id = ""
await self._auth_handler.complete_sso_ui_auth(
user_id, ui_auth_session_id, request
)

View File

@@ -125,7 +125,7 @@ def _make_scheduler(reactor):
return _scheduler
class IPBlacklistingResolver:
class _IPBlacklistingResolver:
"""
A proxy for reactor.nameResolver which only produces non-blacklisted IP
addresses, preventing DNS rebinding attacks on URL preview.
@@ -199,6 +199,35 @@ class IPBlacklistingResolver:
return r
@implementer(IReactorPluggableNameResolver)
class BlacklistingReactorWrapper:
"""
A Reactor wrapper which will prevent DNS resolution to blacklisted IP
addresses, to prevent DNS rebinding.
"""
def __init__(
self,
reactor: IReactorPluggableNameResolver,
ip_whitelist: Optional[IPSet],
ip_blacklist: IPSet,
):
self._reactor = reactor
# We need to use a DNS resolver which filters out blacklisted IP
# addresses, to prevent DNS rebinding.
self._nameResolver = _IPBlacklistingResolver(
self._reactor, ip_whitelist, ip_blacklist
)
def __getattr__(self, attr: str) -> Any:
# Passthrough to the real reactor except for the DNS resolver.
if attr == "nameResolver":
return self._nameResolver
else:
return getattr(self._reactor, attr)
class BlacklistingAgentWrapper(Agent):
"""
An Agent wrapper which will prevent access to IP addresses being accessed
@@ -292,22 +321,11 @@ class SimpleHttpClient:
self.user_agent = self.user_agent.encode("ascii")
if self._ip_blacklist:
real_reactor = hs.get_reactor()
# If we have an IP blacklist, we need to use a DNS resolver which
# filters out blacklisted IP addresses, to prevent DNS rebinding.
nameResolver = IPBlacklistingResolver(
real_reactor, self._ip_whitelist, self._ip_blacklist
self.reactor = BlacklistingReactorWrapper(
hs.get_reactor(), self._ip_whitelist, self._ip_blacklist
)
@implementer(IReactorPluggableNameResolver)
class Reactor:
def __getattr__(_self, attr):
if attr == "nameResolver":
return nameResolver
else:
return getattr(real_reactor, attr)
self.reactor = Reactor()
else:
self.reactor = hs.get_reactor()
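This wrapper consolidates what were previously two copies of an ad-hoc inner `Reactor` class: one here in `SimpleHttpClient`, and one in `MatrixFederationHttpClient` (removed in a later hunk below). The DNS-filtering behaviour now lives in a single place.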

View File

@@ -16,7 +16,7 @@ import logging
import urllib.parse
from typing import List, Optional
from netaddr import AddrFormatError, IPAddress
from netaddr import AddrFormatError, IPAddress, IPSet
from zope.interface import implementer
from twisted.internet import defer
@@ -31,6 +31,7 @@ from twisted.web.http_headers import Headers
from twisted.web.iweb import IAgent, IAgentEndpointFactory, IBodyProducer
from synapse.crypto.context_factory import FederationPolicyForHTTPS
from synapse.http.client import BlacklistingAgentWrapper
from synapse.http.federation.srv_resolver import Server, SrvResolver
from synapse.http.federation.well_known_resolver import WellKnownResolver
from synapse.logging.context import make_deferred_yieldable, run_in_background
@@ -70,6 +71,7 @@ class MatrixFederationAgent:
reactor: IReactorCore,
tls_client_options_factory: Optional[FederationPolicyForHTTPS],
user_agent: bytes,
ip_blacklist: IPSet,
_srv_resolver: Optional[SrvResolver] = None,
_well_known_resolver: Optional[WellKnownResolver] = None,
):
@@ -90,12 +92,18 @@ class MatrixFederationAgent:
self.user_agent = user_agent
if _well_known_resolver is None:
# Note that the name resolver has already been wrapped in a
# IPBlacklistingResolver by MatrixFederationHttpClient.
_well_known_resolver = WellKnownResolver(
self._reactor,
agent=Agent(
agent=BlacklistingAgentWrapper(
Agent(
self._reactor,
pool=self._pool,
contextFactory=tls_client_options_factory,
),
self._reactor,
pool=self._pool,
contextFactory=tls_client_options_factory,
ip_blacklist=ip_blacklist,
),
user_agent=self.user_agent,
)

View File

@@ -26,11 +26,10 @@ import treq
from canonicaljson import encode_canonical_json
from prometheus_client import Counter
from signedjson.sign import sign_json
from zope.interface import implementer
from twisted.internet import defer
from twisted.internet.error import DNSLookupError
from twisted.internet.interfaces import IReactorPluggableNameResolver, IReactorTime
from twisted.internet.interfaces import IReactorTime
from twisted.internet.task import _EPSILON, Cooperator
from twisted.web.http_headers import Headers
from twisted.web.iweb import IBodyProducer, IResponse
@@ -45,7 +44,7 @@ from synapse.api.errors import (
from synapse.http import QuieterFileBodyProducer
from synapse.http.client import (
BlacklistingAgentWrapper,
IPBlacklistingResolver,
BlacklistingReactorWrapper,
encode_query_args,
readBodyToFile,
)
@@ -221,31 +220,22 @@ class MatrixFederationHttpClient:
self.signing_key = hs.signing_key
self.server_name = hs.hostname
real_reactor = hs.get_reactor()
# We need to use a DNS resolver which filters out blacklisted IP
# addresses, to prevent DNS rebinding.
nameResolver = IPBlacklistingResolver(
real_reactor, None, hs.config.federation_ip_range_blacklist
self.reactor = BlacklistingReactorWrapper(
hs.get_reactor(), None, hs.config.federation_ip_range_blacklist
)
@implementer(IReactorPluggableNameResolver)
class Reactor:
def __getattr__(_self, attr):
if attr == "nameResolver":
return nameResolver
else:
return getattr(real_reactor, attr)
self.reactor = Reactor()
user_agent = hs.version_string
if hs.config.user_agent_suffix:
user_agent = "%s %s" % (user_agent, hs.config.user_agent_suffix)
user_agent = user_agent.encode("ascii")
self.agent = MatrixFederationAgent(
self.reactor, tls_client_options_factory, user_agent
self.reactor,
tls_client_options_factory,
user_agent,
hs.config.federation_ip_range_blacklist,
)
# Use a BlacklistingAgentWrapper to prevent circumventing the IP

View File

@@ -275,6 +275,10 @@ class DirectServeJsonResource(_AsyncResource):
formatting responses and errors as JSON.
"""
def __init__(self, canonical_json=False, extract_context=False):
super().__init__(extract_context)
self.canonical_json = canonical_json
def _send_response(
self, request: Request, code: int, response_object: Any,
):
@@ -318,9 +322,7 @@ class JsonResource(DirectServeJsonResource):
)
def __init__(self, hs, canonical_json=True, extract_context=False):
super().__init__(extract_context)
self.canonical_json = canonical_json
super().__init__(canonical_json, extract_context)
self.clock = hs.get_clock()
self.path_regexs = {}
self.hs = hs

View File

@@ -13,7 +13,56 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import abc
from typing import TYPE_CHECKING, Any, Dict, Optional
from synapse.types import RoomStreamToken
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
class Pusher(metaclass=abc.ABCMeta):
def __init__(self, hs: "HomeServer", pusherdict: Dict[str, Any]):
self.hs = hs
self.store = self.hs.get_datastore()
self.clock = self.hs.get_clock()
self.pusher_id = pusherdict["id"]
self.user_id = pusherdict["user_name"]
self.app_id = pusherdict["app_id"]
self.pushkey = pusherdict["pushkey"]
# This is the highest stream ordering we know it's safe to process.
# When new events arrive, we'll be given a window of new events: we
# should honour this rather than just looking for anything higher
# because of potential out-of-order event serialisation. This starts
# off as None though as we don't know any better.
self.max_stream_ordering = None # type: Optional[int]
@abc.abstractmethod
def on_new_notifications(self, max_token: RoomStreamToken) -> None:
raise NotImplementedError()
@abc.abstractmethod
def on_new_receipts(self, min_stream_id: int, max_stream_id: int) -> None:
raise NotImplementedError()
@abc.abstractmethod
def on_started(self, have_notifs: bool) -> None:
"""Called when this pusher has been started.
Args:
have_notifs: Whether we should immediately
check for push to send. Set to False only if it's known there
is nothing to send
"""
raise NotImplementedError()
@abc.abstractmethod
def on_stop(self) -> None:
raise NotImplementedError()
class PusherConfigException(Exception):
def __init__(self, msg):
super().__init__(msg)
"""An error occurred when creating a pusher."""

View File

@@ -14,12 +14,19 @@
# limitations under the License.
import logging
from typing import TYPE_CHECKING, Any, Dict, List, Optional
from twisted.internet.base import DelayedCall
from twisted.internet.error import AlreadyCalled, AlreadyCancelled
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.push import Pusher
from synapse.push.mailer import Mailer
from synapse.types import RoomStreamToken
if TYPE_CHECKING:
from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
# The amount of time we always wait before ever emailing about a notification
@@ -46,7 +53,7 @@ THROTTLE_RESET_AFTER_MS = 12 * 60 * 60 * 1000
INCLUDE_ALL_UNREAD_NOTIFS = False
class EmailPusher:
class EmailPusher(Pusher):
"""
A pusher that sends email notifications about events (approximately)
when they happen.
@@ -54,37 +61,31 @@ class EmailPusher:
factor out the common parts
"""
def __init__(self, hs, pusherdict, mailer):
self.hs = hs
def __init__(self, hs: "HomeServer", pusherdict: Dict[str, Any], mailer: Mailer):
super().__init__(hs, pusherdict)
self.mailer = mailer
self.store = self.hs.get_datastore()
self.clock = self.hs.get_clock()
self.pusher_id = pusherdict["id"]
self.user_id = pusherdict["user_name"]
self.app_id = pusherdict["app_id"]
self.email = pusherdict["pushkey"]
self.last_stream_ordering = pusherdict["last_stream_ordering"]
self.timed_call = None
self.throttle_params = None
# See httppusher
self.max_stream_ordering = None
self.timed_call = None # type: Optional[DelayedCall]
self.throttle_params = {} # type: Dict[str, Dict[str, int]]
self._inited = False
self._is_processing = False
def on_started(self, should_check_for_notifs):
def on_started(self, should_check_for_notifs: bool) -> None:
"""Called when this pusher has been started.
Args:
should_check_for_notifs (bool): Whether we should immediately
should_check_for_notifs: Whether we should immediately
check for push to send. Set to False only if it's known there
is nothing to send
"""
if should_check_for_notifs and self.mailer is not None:
self._start_processing()
def on_stop(self):
def on_stop(self) -> None:
if self.timed_call:
try:
self.timed_call.cancel()
@@ -92,7 +93,7 @@ class EmailPusher:
pass
self.timed_call = None
def on_new_notifications(self, max_token: RoomStreamToken):
def on_new_notifications(self, max_token: RoomStreamToken) -> None:
# We just use the minimum stream ordering and ignore the vector clock
# component. This is safe to do as long as we *always* ignore the vector
# clock components.
@@ -106,23 +107,23 @@ class EmailPusher:
self.max_stream_ordering = max_stream_ordering
self._start_processing()
def on_new_receipts(self, min_stream_id, max_stream_id):
def on_new_receipts(self, min_stream_id: int, max_stream_id: int) -> None:
# We could wake up and cancel the timer but there tend to be quite a
# lot of read receipts so it's probably less work to just let the
# timer fire
pass
def on_timer(self):
def on_timer(self) -> None:
self.timed_call = None
self._start_processing()
def _start_processing(self):
def _start_processing(self) -> None:
if self._is_processing:
return
run_as_background_process("emailpush.process", self._process)
def _pause_processing(self):
def _pause_processing(self) -> None:
"""Used by tests to temporarily pause processing of events.
Asserts that it's not currently processing.
@@ -130,25 +131,26 @@ class EmailPusher:
assert not self._is_processing
self._is_processing = True
def _resume_processing(self):
def _resume_processing(self) -> None:
"""Used by tests to resume processing of events after pausing.
"""
assert self._is_processing
self._is_processing = False
self._start_processing()
async def _process(self):
async def _process(self) -> None:
# we should never get here if we are already processing
assert not self._is_processing
try:
self._is_processing = True
if self.throttle_params is None:
if not self._inited:
# this is our first loop: load up the throttle params
self.throttle_params = await self.store.get_throttle_params_by_room(
self.pusher_id
)
self._inited = True
# if the max ordering changes while we're running _unsafe_process,
# call it again, and so on until we've caught up.
@@ -163,17 +165,19 @@ class EmailPusher:
finally:
self._is_processing = False
async def _unsafe_process(self):
async def _unsafe_process(self) -> None:
"""
Main logic of the push loop without the wrapper function that sets
up logging, measures and guards against multiple instances of it
being run.
"""
start = 0 if INCLUDE_ALL_UNREAD_NOTIFS else self.last_stream_ordering
- fn = self.store.get_unread_push_actions_for_user_in_range_for_email
- unprocessed = await fn(self.user_id, start, self.max_stream_ordering)
+ assert self.max_stream_ordering is not None
+ unprocessed = await self.store.get_unread_push_actions_for_user_in_range_for_email(
+ self.user_id, start, self.max_stream_ordering
+ )
- soonest_due_at = None
+ soonest_due_at = None # type: Optional[int]
if not unprocessed:
await self.save_last_stream_ordering_and_success(self.max_stream_ordering)
@@ -230,7 +234,9 @@ class EmailPusher:
self.seconds_until(soonest_due_at), self.on_timer
)
- async def save_last_stream_ordering_and_success(self, last_stream_ordering):
+ async def save_last_stream_ordering_and_success(
+ self, last_stream_ordering: Optional[int]
+ ) -> None:
if last_stream_ordering is None:
# This happens if we haven't yet processed anything
return
@@ -248,28 +254,30 @@ class EmailPusher:
# let's just stop and return.
self.on_stop()
- def seconds_until(self, ts_msec):
+ def seconds_until(self, ts_msec: int) -> float:
secs = (ts_msec - self.clock.time_msec()) / 1000
return max(secs, 0)
- def get_room_throttle_ms(self, room_id):
+ def get_room_throttle_ms(self, room_id: str) -> int:
if room_id in self.throttle_params:
return self.throttle_params[room_id]["throttle_ms"]
else:
return 0
- def get_room_last_sent_ts(self, room_id):
+ def get_room_last_sent_ts(self, room_id: str) -> int:
if room_id in self.throttle_params:
return self.throttle_params[room_id]["last_sent_ts"]
else:
return 0
- def room_ready_to_notify_at(self, room_id):
+ def room_ready_to_notify_at(self, room_id: str) -> int:
"""
Determines whether throttling should prevent us from sending an email
for the given room
- Returns: The timestamp when we are next allowed to send an email notif
- for this room
+ Returns:
+ The timestamp when we are next allowed to send an email notif
+ for this room
"""
last_sent_ts = self.get_room_last_sent_ts(room_id)
throttle_ms = self.get_room_throttle_ms(room_id)
@@ -277,7 +285,9 @@ class EmailPusher:
may_send_at = last_sent_ts + throttle_ms
return may_send_at
- async def sent_notif_update_throttle(self, room_id, notified_push_action):
+ async def sent_notif_update_throttle(
+ self, room_id: str, notified_push_action: dict
+ ) -> None:
# We have sent a notification, so update the throttle accordingly.
# If the event that triggered the notif happened more than
# THROTTLE_RESET_AFTER_MS after the previous one that triggered a
@@ -315,7 +325,7 @@ class EmailPusher:
self.pusher_id, room_id, self.throttle_params[room_id]
)
- async def send_notification(self, push_actions, reason):
+ async def send_notification(self, push_actions: List[dict], reason: dict) -> None:
logger.info("Sending notif email for user %r", self.user_id)
await self.mailer.send_notification_mail(

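The throttle arithmetic above (`get_room_throttle_ms`, `get_room_last_sent_ts`, `room_ready_to_notify_at`, `seconds_until`) is self-contained enough to sketch standalone. A minimal illustration, with the DB-backed `throttle_params` replaced by a plain dict (names here are illustrative, not Synapse's classes):

import time
from typing import Dict


class RoomThrottleSketch:
    def __init__(self) -> None:
        # room_id -> {"last_sent_ts": ms, "throttle_ms": ms}
        self.throttle_params = {}  # type: Dict[str, Dict[str, int]]

    def room_ready_to_notify_at(self, room_id: str) -> int:
        params = self.throttle_params.get(room_id, {})
        # Unknown rooms have never been throttled: may send immediately.
        return params.get("last_sent_ts", 0) + params.get("throttle_ms", 0)

    def seconds_until(self, ts_msec: int) -> float:
        # Clamped at zero, so a timestamp in the past means "due now".
        return max((ts_msec - time.time() * 1000) / 1000, 0)

Initialising `throttle_params` to `{}` rather than `None` is also what lets the typed version above replace the `is None` sentinel with the explicit `_inited` flag.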

@@ -14,19 +14,25 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
+ import urllib.parse
+ from typing import TYPE_CHECKING, Any, Dict, Iterable, Union
from prometheus_client import Counter
from twisted.internet.error import AlreadyCalled, AlreadyCancelled
from synapse.api.constants import EventTypes
+ from synapse.events import EventBase
from synapse.logging import opentracing
from synapse.metrics.background_process_metrics import run_as_background_process
- from synapse.push import PusherConfigException
+ from synapse.push import Pusher, PusherConfigException
from synapse.types import RoomStreamToken
from . import push_rule_evaluator, push_tools
+ if TYPE_CHECKING:
+ from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
http_push_processed_counter = Counter(
@@ -50,24 +56,18 @@ http_badges_failed_counter = Counter(
)
- class HttpPusher:
+ class HttpPusher(Pusher):
INITIAL_BACKOFF_SEC = 1 # in seconds because that's what Twisted takes
MAX_BACKOFF_SEC = 60 * 60
# This one's in ms because we compare it against the clock
GIVE_UP_AFTER_MS = 24 * 60 * 60 * 1000
- def __init__(self, hs, pusherdict):
- self.hs = hs
- self.store = self.hs.get_datastore()
+ def __init__(self, hs: "HomeServer", pusherdict: Dict[str, Any]):
+ super().__init__(hs, pusherdict)
self.storage = self.hs.get_storage()
- self.clock = self.hs.get_clock()
- self.state_handler = self.hs.get_state_handler()
- self.user_id = pusherdict["user_name"]
- self.app_id = pusherdict["app_id"]
self.app_display_name = pusherdict["app_display_name"]
self.device_display_name = pusherdict["device_display_name"]
- self.pushkey = pusherdict["pushkey"]
self.pushkey_ts = pusherdict["ts"]
self.data = pusherdict["data"]
- self.last_stream_ordering = pusherdict["last_stream_ordering"]
@@ -77,13 +77,6 @@ class HttpPusher:
self._is_processing = False
self._group_unread_count_by_room = hs.config.push_group_unread_count_by_room
- # This is the highest stream ordering we know it's safe to process.
- # When new events arrive, we'll be given a window of new events: we
- # should honour this rather than just looking for anything higher
- # because of potential out-of-order event serialisation. This starts
- # off as None though as we don't know any better.
- self.max_stream_ordering = None
if "data" not in pusherdict:
raise PusherConfigException("No 'data' key for HTTP pusher")
self.data = pusherdict["data"]
@@ -97,26 +90,39 @@ class HttpPusher:
if self.data is None:
raise PusherConfigException("data can not be null for HTTP pusher")
+ # Validate that there's a URL and it is of the proper form.
if "url" not in self.data:
raise PusherConfigException("'url' required in data for HTTP pusher")
- self.url = self.data["url"]
- self.http_client = hs.get_proxied_http_client()
+ url = self.data["url"]
+ if not isinstance(url, str):
+ raise PusherConfigException("'url' must be a string")
+ url_parts = urllib.parse.urlparse(url)
+ # Note that the specification also says the scheme must be HTTPS, but
+ # it isn't up to the homeserver to verify that.
+ if url_parts.path != "/_matrix/push/v1/notify":
+ raise PusherConfigException(
+ "'url' must have a path of '/_matrix/push/v1/notify'"
+ )
+ self.url = url
+ self.http_client = hs.get_proxied_blacklisted_http_client()
self.data_minus_url = {}
self.data_minus_url.update(self.data)
del self.data_minus_url["url"]
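The added validation is worth exercising on its own. A sketch with a hypothetical `validate_push_url` helper (not a real Synapse function) carrying the same checks:

import urllib.parse


def validate_push_url(url: object) -> str:
    # Must be a string pointing at the push gateway API path; checking the
    # scheme is left to the gateway, as the comment above notes.
    if not isinstance(url, str):
        raise ValueError("'url' must be a string")
    if urllib.parse.urlparse(url).path != "/_matrix/push/v1/notify":
        raise ValueError("'url' must have a path of '/_matrix/push/v1/notify'")
    return url


validate_push_url("https://push.example.com/_matrix/push/v1/notify")  # passes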
- def on_started(self, should_check_for_notifs):
+ def on_started(self, should_check_for_notifs: bool) -> None:
"""Called when this pusher has been started.
Args:
- should_check_for_notifs (bool): Whether we should immediately
+ should_check_for_notifs: Whether we should immediately
check for push to send. Set to False only if it's known there
is nothing to send
"""
if should_check_for_notifs:
self._start_processing()
- def on_new_notifications(self, max_token: RoomStreamToken):
+ def on_new_notifications(self, max_token: RoomStreamToken) -> None:
# We just use the minimum stream ordering and ignore the vector clock
# component. This is safe to do as long as we *always* ignore the vector
# clock components.
@@ -127,14 +133,14 @@ class HttpPusher:
)
self._start_processing()
- def on_new_receipts(self, min_stream_id, max_stream_id):
+ def on_new_receipts(self, min_stream_id: int, max_stream_id: int) -> None:
# Note that the min here shouldn't be relied upon to be accurate.
# We could check the receipts are actually m.read receipts here,
# but currently that's the only type of receipt anyway...
run_as_background_process("http_pusher.on_new_receipts", self._update_badge)
- async def _update_badge(self):
+ async def _update_badge(self) -> None:
# XXX as per https://github.com/matrix-org/matrix-doc/issues/2627, this seems
# to be largely redundant. perhaps we can remove it.
badge = await push_tools.get_badge_count(
@@ -144,10 +150,10 @@ class HttpPusher:
)
await self._send_badge(badge)
- def on_timer(self):
+ def on_timer(self) -> None:
self._start_processing()
- def on_stop(self):
+ def on_stop(self) -> None:
if self.timed_call:
try:
self.timed_call.cancel()
@@ -155,13 +161,13 @@ class HttpPusher:
pass
self.timed_call = None
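Both pushers cancel their `DelayedCall` with the same swallow-the-races idiom; extracted as a standalone helper it would look like this (a sketch, not an existing Synapse utility):

from twisted.internet.error import AlreadyCalled, AlreadyCancelled


def cancel_quietly(timed_call) -> None:
    # A DelayedCall that has already fired or been cancelled raises on
    # cancel(); during shutdown both outcomes are fine to ignore.
    if timed_call:
        try:
            timed_call.cancel()
        except (AlreadyCalled, AlreadyCancelled):
            pass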
- def _start_processing(self):
+ def _start_processing(self) -> None:
if self._is_processing:
return
run_as_background_process("httppush.process", self._process)
- async def _process(self):
+ async def _process(self) -> None:
# we should never get here if we are already processing
assert not self._is_processing
@@ -180,7 +186,7 @@ class HttpPusher:
finally:
self._is_processing = False
- async def _unsafe_process(self):
+ async def _unsafe_process(self) -> None:
"""
Looks for unsent notifications and dispatches them, in order
Never call this directly: use _process which will only allow this to
@@ -188,6 +194,7 @@ class HttpPusher:
"""
fn = self.store.get_unread_push_actions_for_user_in_range_for_http
+ assert self.max_stream_ordering is not None
unprocessed = await fn(
self.user_id, self.last_stream_ordering, self.max_stream_ordering
)
@@ -257,17 +264,12 @@ class HttpPusher:
)
self.backoff_delay = HttpPusher.INITIAL_BACKOFF_SEC
self.last_stream_ordering = push_action["stream_ordering"]
- pusher_still_exists = await self.store.update_pusher_last_stream_ordering(
+ await self.store.update_pusher_last_stream_ordering(
self.app_id,
self.pushkey,
self.user_id,
self.last_stream_ordering,
)
- if not pusher_still_exists:
- # The pusher has been deleted while we were processing, so
- # lets just stop and return.
- self.on_stop()
- return
self.failing_since = None
await self.store.update_pusher_failing_since(
@@ -283,7 +285,7 @@ class HttpPusher:
)
break
- async def _process_one(self, push_action):
+ async def _process_one(self, push_action: dict) -> bool:
if "notify" not in push_action["actions"]:
return True
@@ -314,7 +316,9 @@ class HttpPusher:
await self.hs.remove_pusher(self.app_id, pk, self.user_id)
return True
- async def _build_notification_dict(self, event, tweaks, badge):
+ async def _build_notification_dict(
+ self, event: EventBase, tweaks: Dict[str, bool], badge: int
+ ) -> Dict[str, Any]:
priority = "low"
if (
event.type == EventTypes.Encrypted
@@ -344,9 +348,7 @@ class HttpPusher:
}
return d
- ctx = await push_tools.get_context_for_event(
- self.storage, self.state_handler, event, self.user_id
- )
+ ctx = await push_tools.get_context_for_event(self.storage, event, self.user_id)
d = {
"notification": {
@@ -386,7 +388,9 @@ class HttpPusher:
return d
- async def dispatch_push(self, event, tweaks, badge):
+ async def dispatch_push(
+ self, event: EventBase, tweaks: Dict[str, bool], badge: int
+ ) -> Union[bool, Iterable[str]]:
notification_dict = await self._build_notification_dict(event, tweaks, badge)
if not notification_dict:
return []
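The three class constants drive HttpPusher's retry behaviour. A sketch of the progression they imply (the doubling rule matches how the processing loop uses them, but treat this as an illustration rather than a copy of that loop):

INITIAL_BACKOFF_SEC = 1
MAX_BACKOFF_SEC = 60 * 60  # cap the delay at one hour
GIVE_UP_AFTER_MS = 24 * 60 * 60 * 1000  # stop retrying after a day


def next_backoff_sec(current_sec: int) -> int:
    # Exponential back-off: 1s, 2s, 4s, ... capped at MAX_BACKOFF_SEC.
    return min(current_sec * 2, MAX_BACKOFF_SEC)


def should_give_up(failing_since_ms: int, now_ms: int) -> bool:
    return now_ms - failing_since_ms > GIVE_UP_AFTER_MS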


@@ -19,7 +19,7 @@ import logging
import urllib.parse
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
- from typing import Iterable, List, TypeVar
+ from typing import TYPE_CHECKING, Any, Dict, Iterable, List, Optional, TypeVar
import bleach
import jinja2
@@ -27,16 +27,20 @@ import jinja2
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import StoreError
from synapse.config.emailconfig import EmailSubjectConfig
+ from synapse.events import EventBase
from synapse.logging.context import make_deferred_yieldable
from synapse.push.presentable_names import (
calculate_room_name,
descriptor_from_member_events,
name_from_member_event,
)
- from synapse.types import UserID
+ from synapse.types import StateMap, UserID
from synapse.util.async_helpers import concurrently_execute
from synapse.visibility import filter_events_for_client
+ if TYPE_CHECKING:
+ from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
T = TypeVar("T")
@@ -93,7 +97,13 @@ ALLOWED_ATTRS = {
class Mailer:
- def __init__(self, hs, app_name, template_html, template_text):
+ def __init__(
+ self,
+ hs: "HomeServer",
+ app_name: str,
+ template_html: jinja2.Template,
+ template_text: jinja2.Template,
+ ):
self.hs = hs
self.template_html = template_html
self.template_text = template_text
@@ -108,17 +118,19 @@ class Mailer:
logger.info("Created Mailer for app_name %s" % app_name)
- async def send_password_reset_mail(self, email_address, token, client_secret, sid):
+ async def send_password_reset_mail(
+ self, email_address: str, token: str, client_secret: str, sid: str
+ ) -> None:
"""Send an email with a password reset link to a user
Args:
- email_address (str): Email address we're sending the password
+ email_address: Email address we're sending the password
reset to
- token (str): Unique token generated by the server to verify
+ token: Unique token generated by the server to verify
the email was received
- client_secret (str): Unique token generated by the client to
+ client_secret: Unique token generated by the client to
group together multiple email sending attempts
- sid (str): The generated session ID
+ sid: The generated session ID
"""
params = {"token": token, "client_secret": client_secret, "sid": sid}
link = (
@@ -136,17 +148,19 @@ class Mailer:
template_vars,
)
- async def send_registration_mail(self, email_address, token, client_secret, sid):
+ async def send_registration_mail(
+ self, email_address: str, token: str, client_secret: str, sid: str
+ ) -> None:
"""Send an email with a registration confirmation link to a user
Args:
- email_address (str): Email address we're sending the registration
+ email_address: Email address we're sending the registration
link to
- token (str): Unique token generated by the server to verify
+ token: Unique token generated by the server to verify
the email was received
- client_secret (str): Unique token generated by the client to
+ client_secret: Unique token generated by the client to
group together multiple email sending attempts
- sid (str): The generated session ID
+ sid: The generated session ID
"""
params = {"token": token, "client_secret": client_secret, "sid": sid}
link = (
@@ -164,18 +178,20 @@ class Mailer:
template_vars,
)
- async def send_add_threepid_mail(self, email_address, token, client_secret, sid):
+ async def send_add_threepid_mail(
+ self, email_address: str, token: str, client_secret: str, sid: str
+ ) -> None:
"""Send an email with a validation link to a user for adding a 3pid to their account
Args:
- email_address (str): Email address we're sending the validation link to
+ email_address: Email address we're sending the validation link to
- token (str): Unique token generated by the server to verify the email was received
+ token: Unique token generated by the server to verify the email was received
- client_secret (str): Unique token generated by the client to group together
+ client_secret: Unique token generated by the client to group together
multiple email sending attempts
- sid (str): The generated session ID
+ sid: The generated session ID
"""
params = {"token": token, "client_secret": client_secret, "sid": sid}
link = (
@@ -194,8 +210,13 @@ class Mailer:
)
async def send_notification_mail(
- self, app_id, user_id, email_address, push_actions, reason
- ):
+ self,
+ app_id: str,
+ user_id: str,
+ email_address: str,
+ push_actions: Iterable[Dict[str, Any]],
+ reason: Dict[str, Any],
+ ) -> None:
"""Send email regarding a user's room notifications"""
rooms_in_order = deduped_ordered_list([pa["room_id"] for pa in push_actions])
@@ -203,7 +224,7 @@ class Mailer:
[pa["event_id"] for pa in push_actions]
)
- notifs_by_room = {}
+ notifs_by_room = {} # type: Dict[str, List[Dict[str, Any]]]
for pa in push_actions:
notifs_by_room.setdefault(pa["room_id"], []).append(pa)
@@ -262,7 +283,9 @@ class Mailer:
await self.send_email(email_address, summary_text, template_vars)
- async def send_email(self, email_address, subject, extra_template_vars):
+ async def send_email(
+ self, email_address: str, subject: str, extra_template_vars: Dict[str, Any]
+ ) -> None:
"""Send an email with the given information and template text"""
try:
from_string = self.hs.config.email_notif_from % {"app": self.app_name}
@@ -315,8 +338,13 @@ class Mailer:
)
async def get_room_vars(
- self, room_id, user_id, notifs, notif_events, room_state_ids
- ):
+ self,
+ room_id: str,
+ user_id: str,
+ notifs: Iterable[Dict[str, Any]],
+ notif_events: Dict[str, EventBase],
+ room_state_ids: StateMap[str],
+ ) -> Dict[str, Any]:
# Check if one of the notifs is an invite event for the user.
is_invite = False
for n in notifs:
@@ -334,7 +362,7 @@ class Mailer:
"notifs": [],
"invite": is_invite,
"link": self.make_room_link(room_id),
- }
+ } # type: Dict[str, Any]
if not is_invite:
for n in notifs:
@@ -365,7 +393,13 @@ class Mailer:
return room_vars
- async def get_notif_vars(self, notif, user_id, notif_event, room_state_ids):
+ async def get_notif_vars(
+ self,
+ notif: Dict[str, Any],
+ user_id: str,
+ notif_event: EventBase,
+ room_state_ids: StateMap[str],
+ ) -> Dict[str, Any]:
results = await self.store.get_events_around(
notif["room_id"],
notif["event_id"],
@@ -391,7 +425,9 @@ class Mailer:
return ret
- async def get_message_vars(self, notif, event, room_state_ids):
+ async def get_message_vars(
+ self, notif: Dict[str, Any], event: EventBase, room_state_ids: StateMap[str]
+ ) -> Optional[Dict[str, Any]]:
if event.type != EventTypes.Message and event.type != EventTypes.Encrypted:
return None
@@ -432,7 +468,9 @@ class Mailer:
return ret
- def add_text_message_vars(self, messagevars, event):
+ def add_text_message_vars(
+ self, messagevars: Dict[str, Any], event: EventBase
+ ) -> None:
msgformat = event.content.get("format")
messagevars["format"] = msgformat
@@ -445,15 +483,18 @@ class Mailer:
elif body:
messagevars["body_text_html"] = safe_text(body)
return messagevars
- def add_image_message_vars(self, messagevars, event):
+ def add_image_message_vars(
+ self, messagevars: Dict[str, Any], event: EventBase
+ ) -> None:
messagevars["image_url"] = event.content["url"]
return messagevars
async def make_summary_text(
- self, notifs_by_room, room_state_ids, notif_events, user_id, reason
+ self,
+ notifs_by_room: Dict[str, List[Dict[str, Any]]],
+ room_state_ids: Dict[str, StateMap[str]],
+ notif_events: Dict[str, EventBase],
+ user_id: str,
+ reason: Dict[str, Any],
):
if len(notifs_by_room) == 1:
# Only one room has new stuff
@@ -580,7 +621,7 @@ class Mailer:
"app": self.app_name,
}
- def make_room_link(self, room_id):
+ def make_room_link(self, room_id: str) -> str:
if self.hs.config.email_riot_base_url:
base_url = "%s/#/room" % (self.hs.config.email_riot_base_url)
elif self.app_name == "Vector":
@@ -590,7 +631,7 @@ class Mailer:
base_url = "https://matrix.to/#"
return "%s/%s" % (base_url, room_id)
- def make_notif_link(self, notif):
+ def make_notif_link(self, notif: Dict[str, str]) -> str:
if self.hs.config.email_riot_base_url:
return "%s/#/room/%s/%s" % (
self.hs.config.email_riot_base_url,
@@ -606,7 +647,9 @@ class Mailer:
else:
return "https://matrix.to/#/%s/%s" % (notif["room_id"], notif["event_id"])
- def make_unsubscribe_link(self, user_id, app_id, email_address):
+ def make_unsubscribe_link(
+ self, user_id: str, app_id: str, email_address: str
+ ) -> str:
params = {
"access_token": self.macaroon_gen.generate_delete_pusher_token(user_id),
"app_id": app_id,
@@ -620,7 +663,7 @@ class Mailer:
)
- def safe_markup(raw_html):
+ def safe_markup(raw_html: str) -> jinja2.Markup:
return jinja2.Markup(
bleach.linkify(
bleach.clean(
@@ -635,7 +678,7 @@ def safe_markup(raw_html):
)
- def safe_text(raw_text):
+ def safe_text(raw_text: str) -> jinja2.Markup:
"""
Process text: treat it as HTML but escape any tags (i.e. just escape the
HTML) then linkify it.
@@ -655,7 +698,7 @@ def deduped_ordered_list(it: Iterable[T]) -> List[T]:
return ret
- def string_ordinal_total(s):
+ def string_ordinal_total(s: str) -> int:
tot = 0
for c in s:
tot += ord(c)
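Assuming the truncated body ends by returning `tot`, `string_ordinal_total` is just a character-ordinal sum: cheap, and deterministic for a given string, which makes it a stable way to make an arbitrary-but-repeatable choice. An equivalent one-liner plus a check:

def string_ordinal_total(s: str) -> int:
    return sum(ord(c) for c in s)


# Same input always gives the same output, across processes and restarts.
assert string_ordinal_total("abc") == ord("a") + ord("b") + ord("c") == 294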


@@ -12,6 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
+ from typing import Dict
+ from synapse.events import EventBase
from synapse.push.presentable_names import calculate_room_name, name_from_member_event
from synapse.storage import Storage
from synapse.storage.databases.main import DataStore
@@ -46,7 +49,9 @@ async def get_badge_count(store: DataStore, user_id: str, group_by_room: bool) -
return badge
- async def get_context_for_event(storage: Storage, state_handler, ev, user_id):
+ async def get_context_for_event(
+ storage: Storage, ev: EventBase, user_id: str
+ ) -> Dict[str, str]:
ctx = {}
room_state_ids = await storage.state.get_state_ids_for_event(ev.event_id)
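With `state_handler` dropped from the signature, call sites shrink to three arguments. A stub caller for illustration only (the stub body stands in for the real state-id lookups):

import asyncio
from typing import Dict


async def get_context_for_event_stub(storage, ev, user_id: str) -> Dict[str, str]:
    # Stand-in: the real helper derives sender and room display names
    # from the event's room state.
    return {"name": "Alice", "room_name": "Lunch"}


async def main() -> None:
    ctx = await get_context_for_event_stub(None, None, "@bob:example.org")
    assert set(ctx) == {"name", "room_name"}


asyncio.run(main())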


@@ -14,25 +14,31 @@
# limitations under the License.
import logging
+ from typing import TYPE_CHECKING, Any, Callable, Dict, Optional
+ from synapse.push import Pusher
from synapse.push.emailpusher import EmailPusher
+ from synapse.push.httppusher import HttpPusher
from synapse.push.mailer import Mailer
- from .httppusher import HttpPusher
+ if TYPE_CHECKING:
+ from synapse.app.homeserver import HomeServer
logger = logging.getLogger(__name__)
class PusherFactory:
- def __init__(self, hs):
+ def __init__(self, hs: "HomeServer"):
self.hs = hs
self.config = hs.config
self.pusher_types = {"http": HttpPusher}
self.pusher_types = {
"http": HttpPusher
} # type: Dict[str, Callable[[HomeServer, dict], Pusher]]
logger.info("email enable notifs: %r", hs.config.email_enable_notifs)
if hs.config.email_enable_notifs:
- self.mailers = {} # app_name -> Mailer
+ self.mailers = {} # type: Dict[str, Mailer]
self._notif_template_html = hs.config.email_notif_template_html
self._notif_template_text = hs.config.email_notif_template_text
@@ -41,7 +47,7 @@ class PusherFactory:
logger.info("defined email pusher type")
- def create_pusher(self, pusherdict):
+ def create_pusher(self, pusherdict: Dict[str, Any]) -> Optional[Pusher]:
kind = pusherdict["kind"]
f = self.pusher_types.get(kind, None)
if not f:
@@ -49,7 +55,9 @@ class PusherFactory:
logger.debug("creating %s pusher for %r", kind, pusherdict)
return f(self.hs, pusherdict)
- def _create_email_pusher(self, _hs, pusherdict):
+ def _create_email_pusher(
+ self, _hs: "HomeServer", pusherdict: Dict[str, Any]
+ ) -> EmailPusher:
app_name = self._app_name_from_pusherdict(pusherdict)
mailer = self.mailers.get(app_name)
if not mailer:
@@ -62,7 +70,7 @@ class PusherFactory:
self.mailers[app_name] = mailer
return EmailPusher(self.hs, pusherdict, mailer)
- def _app_name_from_pusherdict(self, pusherdict):
+ def _app_name_from_pusherdict(self, pusherdict: Dict[str, Any]) -> str:
data = pusherdict["data"]
if isinstance(data, dict):
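The type comment on `pusher_types` spells out the factory contract: a mapping from pusher kind to a callable that builds one. Reduced to essentials, with a fake pusher class standing in for the real ones:

from typing import Any, Callable, Dict, Optional


class FakePusher:
    def __init__(self, hs: Any, pusherdict: Dict[str, Any]):
        self.user_id = pusherdict["user_name"]


pusher_types = {
    "http": FakePusher,
}  # type: Dict[str, Callable[[Any, Dict[str, Any]], FakePusher]]


def create_pusher(hs: Any, pusherdict: Dict[str, Any]) -> Optional[FakePusher]:
    # Unknown kinds produce None, mirroring the factory's get-or-skip lookup.
    f = pusher_types.get(pusherdict["kind"])
    return f(hs, pusherdict) if f else None


p = create_pusher(None, {"kind": "http", "user_name": "@a:example.org"})
assert p is not None and p.user_id == "@a:example.org"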


@@ -15,7 +15,7 @@
# limitations under the License.
import logging
- from typing import TYPE_CHECKING, Dict, Union
+ from typing import TYPE_CHECKING, Any, Dict, Optional
from prometheus_client import Gauge
@@ -23,9 +23,7 @@ from synapse.metrics.background_process_metrics import (
run_as_background_process,
wrap_as_background_process,
)
- from synapse.push import PusherConfigException
- from synapse.push.emailpusher import EmailPusher
- from synapse.push.httppusher import HttpPusher
+ from synapse.push import Pusher, PusherConfigException
from synapse.push.pusher import PusherFactory
from synapse.types import RoomStreamToken
from synapse.util.async_helpers import concurrently_execute
@@ -77,7 +75,7 @@ class PusherPool:
self._last_room_stream_id_seen = self.store.get_room_max_stream_ordering()
# map from user id to app_id:pushkey to pusher
- self.pushers = {} # type: Dict[str, Dict[str, Union[HttpPusher, EmailPusher]]]
+ self.pushers = {} # type: Dict[str, Dict[str, Pusher]]
def start(self):
"""Starts the pushers off in a background process.
@@ -99,11 +97,11 @@ class PusherPool:
lang,
data,
profile_tag="",
- ):
+ ) -> Optional[Pusher]:
"""Creates a new pusher and adds it to the pool
Returns:
- EmailPusher|HttpPusher
+ The newly created pusher.
"""
time_now_msec = self.clock.time_msec()
@@ -267,17 +265,19 @@ class PusherPool:
except Exception:
logger.exception("Exception in pusher on_new_receipts")
- async def start_pusher_by_id(self, app_id, pushkey, user_id):
+ async def start_pusher_by_id(
+ self, app_id: str, pushkey: str, user_id: str
+ ) -> Optional[Pusher]:
"""Look up the details for the given pusher, and start it
Returns:
- EmailPusher|HttpPusher|None: The pusher started, if any
+ The pusher started, if any
"""
if not self._should_start_pushers:
- return
+ return None
if not self._pusher_shard_config.should_handle(self._instance_name, user_id):
- return
+ return None
resultlist = await self.store.get_pushers_by_app_id_and_pushkey(app_id, pushkey)
@@ -303,19 +303,19 @@ class PusherPool:
logger.info("Started pushers")
- async def _start_pusher(self, pusherdict):
+ async def _start_pusher(self, pusherdict: Dict[str, Any]) -> Optional[Pusher]:
"""Start the given pusher
Args:
- pusherdict (dict): dict with the values pulled from the db table
+ pusherdict: dict with the values pulled from the db table
Returns:
- EmailPusher|HttpPusher
+ The newly created pusher or None.
"""
if not self._pusher_shard_config.should_handle(
self._instance_name, pusherdict["user_name"]
):
- return
+ return None
try:
p = self.pusher_factory.create_pusher(pusherdict)
@@ -328,15 +328,15 @@ class PusherPool:
pusherdict.get("pushkey"),
e,
)
- return
+ return None
except Exception:
logger.exception(
"Couldn't start pusher id %i: caught Exception", pusherdict["id"],
)
- return
+ return None
if not p:
- return
+ return None
appid_pushkey = "%s:%s" % (pusherdict["app_id"], pusherdict["pushkey"])


@@ -106,6 +106,25 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
assert self.METHOD in ("PUT", "POST", "GET")
+ self._replication_secret = None
+ if hs.config.worker.worker_replication_secret:
+ self._replication_secret = hs.config.worker.worker_replication_secret
+ def _check_auth(self, request) -> None:
+ # Get the authorization header.
+ auth_headers = request.requestHeaders.getRawHeaders(b"Authorization")
+ if len(auth_headers) > 1:
+ raise RuntimeError("Too many Authorization headers.")
+ parts = auth_headers[0].split(b" ")
+ if parts[0] == b"Bearer" and len(parts) == 2:
+ received_secret = parts[1].decode("ascii")
+ if self._replication_secret == received_secret:
+ # Success!
+ return
+ raise RuntimeError("Invalid Authorization header.")
@abc.abstractmethod
async def _serialize_payload(**kwargs):
"""Static method that is called when creating a request.
@@ -150,6 +169,12 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
outgoing_gauge = _pending_outgoing_requests.labels(cls.NAME)
+ replication_secret = None
+ if hs.config.worker.worker_replication_secret:
+ replication_secret = hs.config.worker.worker_replication_secret.encode(
+ "ascii"
+ )
@trace(opname="outgoing_replication_request")
@outgoing_gauge.track_inprogress()
async def send_request(instance_name="master", **kwargs):
@@ -202,6 +227,9 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
# the master, and so whether we should clean up or not.
while True:
headers = {} # type: Dict[bytes, List[bytes]]
+ # Add an authorization header, if configured.
+ if replication_secret:
+ headers[b"Authorization"] = [b"Bearer " + replication_secret]
inject_active_span_byte_dict(headers, None, check_destination=False)
try:
result = await request_func(uri, data, headers=headers)
@@ -236,21 +264,19 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
"""
url_args = list(self.PATH_ARGS)
- handler = self._handle_request
method = self.METHOD
if self.CACHE:
- handler = self._cached_handler # type: ignore
url_args.append("txn_id")
args = "/".join("(?P<%s>[^/]+)" % (arg,) for arg in url_args)
pattern = re.compile("^/_synapse/replication/%s/%s$" % (self.NAME, args))
http_server.register_paths(
- method, [pattern], handler, self.__class__.__name__,
+ method, [pattern], self._check_auth_and_handle, self.__class__.__name__,
)
- def _cached_handler(self, request, txn_id, **kwargs):
+ def _check_auth_and_handle(self, request, **kwargs):
"""Called on new incoming requests when caching is enabled. Checks
if there is a cached response for the request and returns that,
otherwise calls `_handle_request` and caches its response.
@@ -258,6 +284,15 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):
# We just use the txn_id here, but we probably also want to use the
# other PATH_ARGS as well.
- assert self.CACHE
+ # Check the authorization headers before handling the request.
+ if self._replication_secret:
+ self._check_auth(request)
- return self.response_cache.wrap(txn_id, self._handle_request, request, **kwargs)
+ if self.CACHE:
+ txn_id = kwargs.pop("txn_id")
+ return self.response_cache.wrap(
+ txn_id, self._handle_request, request, **kwargs
+ )
+ return self._handle_request(request, **kwargs)
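`_check_auth_and_handle` now funnels every incoming replication request through three steps: auth check, cache lookup (when `CACHE` is set), then the handler. The cache half, reduced to a dict keyed by `txn_id` (the real `ResponseCache` also deduplicates concurrent callers and expires entries):

import asyncio
from typing import Any, Awaitable, Callable, Dict


class TinyResponseCache:
    def __init__(self) -> None:
        self._results = {}  # type: Dict[str, Any]

    async def wrap(
        self, txn_id: str, fn: Callable[..., Awaitable[Any]], *args: Any
    ) -> Any:
        # A re-sent transaction returns the cached result instead of
        # re-running the handler, making retries idempotent.
        if txn_id not in self._results:
            self._results[txn_id] = await fn(*args)
        return self._results[txn_id]


async def handler(x: int) -> int:
    return x * 2


cache = TinyResponseCache()
assert asyncio.run(cache.wrap("txn1", handler, 21)) == 42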
