
Compare commits


1 Commit

Author: Matthew Hodgson
SHA1: 246279b545
Message: custom edu draft
Date: 2022-02-01 01:18:46 +00:00
199 changed files with 3856 additions and 5444 deletions


@@ -15,4 +15,4 @@ export LANG="C.UTF-8"
# Prevent virtualenv from auto-updating pip to an incompatible version
export VIRTUALENV_NO_DOWNLOAD=1
exec tox -e py3-old
exec tox -e py3-old,combine


@@ -345,7 +345,7 @@ jobs:
path: synapse
# Attempt to check out the same branch of Complement as the PR. If it
# doesn't exist, fallback to HEAD.
# doesn't exist, fallback to master.
- name: Checkout complement
shell: bash
run: |
@@ -358,8 +358,8 @@ jobs:
# for pull requests, otherwise GITHUB_REF).
# 2. Attempt to use the base branch, e.g. when merging into release-vX.Y
# (GITHUB_BASE_REF for pull requests).
# 3. Use the default complement branch ("HEAD").
for BRANCH_NAME in "$GITHUB_HEAD_REF" "$GITHUB_BASE_REF" "${GITHUB_REF#refs/heads/}" "HEAD"; do
# 3. Use the default complement branch ("master").
for BRANCH_NAME in "$GITHUB_HEAD_REF" "$GITHUB_BASE_REF" "${GITHUB_REF#refs/heads/}" "master"; do
# Skip empty branch names and merge commits.
if [[ -z "$BRANCH_NAME" || $BRANCH_NAME =~ ^refs/pull/.* ]]; then
continue
@@ -383,7 +383,7 @@ jobs:
# Run Complement
- run: |
set -o pipefail
go test -v -json -p 1 -tags synapse_blacklist,msc2403 ./tests/... 2>&1 | gotestfmt
go test -v -json -tags synapse_blacklist,msc2403 ./tests/... 2>&1 | gotestfmt
shell: bash
name: Run Complement Tests
env:
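For illustration, here is a rough Python sketch of the fallback order the workflow's bash loop implements. This is a paraphrase assuming the same `GITHUB_*` environment variables, not part of the workflow itself; the real loop also falls through to the next candidate if `git checkout` fails.

```python
# Hypothetical Python rendering of the workflow's branch-fallback loop:
# try the PR branch, then the base branch, then the pushed ref, then the
# default Complement branch ("HEAD" or "master", depending on which side
# of this diff you read).
import os


def pick_complement_branch(default: str = "master") -> str:
    candidates = [
        os.environ.get("GITHUB_HEAD_REF", ""),
        os.environ.get("GITHUB_BASE_REF", ""),
        os.environ.get("GITHUB_REF", "").removeprefix("refs/heads/"),
        default,
    ]
    for name in candidates:
        # Skip empty branch names and merge refs such as "refs/pull/123/merge".
        if name and not name.startswith("refs/pull/"):
            return name  # the workflow would attempt `git checkout` here
    return default
```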


@@ -1,162 +1,3 @@
Synapse 1.53.0 (2022-02-22)
===========================
No significant changes.
Synapse 1.53.0rc1 (2022-02-15)
==============================
Features
--------
- Add experimental support for sending to-device messages to application services, as specified by [MSC2409](https://github.com/matrix-org/matrix-doc/pull/2409). ([\#11215](https://github.com/matrix-org/synapse/issues/11215), [\#11966](https://github.com/matrix-org/synapse/issues/11966))
- Remove account data (including client config, push rules and ignored users) upon user deactivation. ([\#11655](https://github.com/matrix-org/synapse/issues/11655))
- Experimental support for [MSC3666](https://github.com/matrix-org/matrix-doc/pull/3666): including bundled aggregations in server side search results. ([\#11837](https://github.com/matrix-org/synapse/issues/11837))
- Enable cache time-based expiry by default. The `expiry_time` config flag has been superseded by `expire_caches` and `cache_entry_ttl`. ([\#11849](https://github.com/matrix-org/synapse/issues/11849))
- Add a callback to allow modules to allow or forbid a 3PID (email address, phone number) from being associated to a local account. ([\#11854](https://github.com/matrix-org/synapse/issues/11854))
- Stabilize support and remove unstable endpoints for [MSC3231](https://github.com/matrix-org/matrix-doc/pull/3231). Clients must switch to the stable identifier and endpoint. See the [upgrade notes](https://matrix-org.github.io/synapse/develop/upgrade#stablisation-of-msc3231) for more information. ([\#11867](https://github.com/matrix-org/synapse/issues/11867))
- Allow modules to retrieve the current instance's server name and worker name. ([\#11868](https://github.com/matrix-org/synapse/issues/11868))
- Use a dedicated configurable rate limiter for 3PID invites. ([\#11892](https://github.com/matrix-org/synapse/issues/11892))
- Support the stable API endpoint for [MSC3283](https://github.com/matrix-org/matrix-doc/pull/3283): new settings in `/capabilities` endpoint. ([\#11933](https://github.com/matrix-org/synapse/issues/11933), [\#11989](https://github.com/matrix-org/synapse/issues/11989))
- Support the `dir` parameter on the `/relations` endpoint, per [MSC3715](https://github.com/matrix-org/matrix-doc/pull/3715). ([\#11941](https://github.com/matrix-org/synapse/issues/11941))
- Experimental implementation of [MSC3706](https://github.com/matrix-org/matrix-doc/pull/3706): extensions to `/send_join` to support reduced response size. ([\#11967](https://github.com/matrix-org/synapse/issues/11967))
Bugfixes
--------
- Fix [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) historical messages backfilling in random order on remote homeservers. ([\#11114](https://github.com/matrix-org/synapse/issues/11114))
- Fix a bug introduced in Synapse 1.51.0 where incoming federation transactions containing at least one EDU would be dropped if debug logging was enabled for `synapse.8631_debug`. ([\#11890](https://github.com/matrix-org/synapse/issues/11890))
- Fix a long-standing bug where some unknown endpoints would return HTML error pages instead of JSON `M_UNRECOGNIZED` errors. ([\#11930](https://github.com/matrix-org/synapse/issues/11930))
- Implement an allow list of content types for which we will attempt to preview a URL. This prevents Synapse from making useless longer-lived connections to streaming media servers. ([\#11936](https://github.com/matrix-org/synapse/issues/11936))
- Fix a long-standing bug where pagination tokens from `/sync` and `/messages` could not be provided to the `/relations` API. ([\#11952](https://github.com/matrix-org/synapse/issues/11952))
- Require that modules register their callbacks using keyword arguments. ([\#11975](https://github.com/matrix-org/synapse/issues/11975))
- Fix a long-standing bug where `M_WRONG_ROOM_KEYS_VERSION` errors would not include the specced `current_version` field. ([\#11988](https://github.com/matrix-org/synapse/issues/11988))
Improved Documentation
----------------------
- Fix typo in User Admin API: unpind -> unbind. ([\#11859](https://github.com/matrix-org/synapse/issues/11859))
- Document images returned by the User List Media Admin API can include those generated by URL previews. ([\#11862](https://github.com/matrix-org/synapse/issues/11862))
- Remove outdated MSC1711 FAQ document. ([\#11907](https://github.com/matrix-org/synapse/issues/11907))
- Correct the structured logging configuration example. Contributed by Brad Jones. ([\#11946](https://github.com/matrix-org/synapse/issues/11946))
- Add information on the Synapse release cycle. ([\#11954](https://github.com/matrix-org/synapse/issues/11954))
- Fix broken link in the README to the admin API for password reset. ([\#11955](https://github.com/matrix-org/synapse/issues/11955))
Deprecations and Removals
-------------------------
- Drop support for `webclient` listeners and configuring `web_client_location` to a non-HTTP(S) URL. Deprecated configurations are a configuration error. ([\#11895](https://github.com/matrix-org/synapse/issues/11895))
- Remove deprecated `user_may_create_room_with_invites` spam checker callback. See the [upgrade notes](https://matrix-org.github.io/synapse/latest/upgrade.html#removal-of-user_may_create_room_with_invites) for more information. ([\#11950](https://github.com/matrix-org/synapse/issues/11950))
- No longer build `.deb` packages for Ubuntu 21.04 Hirsute Hippo, which has now EOLed. ([\#11961](https://github.com/matrix-org/synapse/issues/11961))
Internal Changes
----------------
- Enhance user registration test helpers to make them more useful for tests involving application services and devices. ([\#11615](https://github.com/matrix-org/synapse/issues/11615), [\#11616](https://github.com/matrix-org/synapse/issues/11616))
- Improve performance when fetching bundled aggregations for multiple events. ([\#11660](https://github.com/matrix-org/synapse/issues/11660), [\#11752](https://github.com/matrix-org/synapse/issues/11752))
- Fix type errors introduced by new annotations in the Prometheus Client library. ([\#11832](https://github.com/matrix-org/synapse/issues/11832))
- Add missing type hints to replication code. ([\#11856](https://github.com/matrix-org/synapse/issues/11856), [\#11938](https://github.com/matrix-org/synapse/issues/11938))
- Ensure that `opentracing` scopes are activated and closed at the right time. ([\#11869](https://github.com/matrix-org/synapse/issues/11869))
- Improve opentracing for incoming federation requests. ([\#11870](https://github.com/matrix-org/synapse/issues/11870))
- Improve internal docstrings in `synapse.util.caches`. ([\#11876](https://github.com/matrix-org/synapse/issues/11876))
- Do not needlessly clear the `get_users_in_room` and `get_users_in_room_with_profiles` caches when any room state changes. ([\#11878](https://github.com/matrix-org/synapse/issues/11878))
- Convert `ApplicationServiceTestCase` to use `simple_async_mock`. ([\#11880](https://github.com/matrix-org/synapse/issues/11880))
- Remove experimental changes to the default push rules which were introduced in Synapse 1.19.0 but never enabled. ([\#11884](https://github.com/matrix-org/synapse/issues/11884))
- Disable coverage calculation for olddeps build. ([\#11888](https://github.com/matrix-org/synapse/issues/11888))
- Preparation to support sending device list updates to application services. ([\#11905](https://github.com/matrix-org/synapse/issues/11905))
- Add a test that checks users receive their own device list updates down `/sync`. ([\#11909](https://github.com/matrix-org/synapse/issues/11909))
- Run Complement tests sequentially. ([\#11910](https://github.com/matrix-org/synapse/issues/11910))
- Various refactors to the application service notifier code. ([\#11911](https://github.com/matrix-org/synapse/issues/11911), [\#11912](https://github.com/matrix-org/synapse/issues/11912))
- Tests: replace mocked `Authenticator` with the real thing. ([\#11913](https://github.com/matrix-org/synapse/issues/11913))
- Various refactors to the typing notifications code. ([\#11914](https://github.com/matrix-org/synapse/issues/11914))
- Use the proper type for the `Content-Length` header in the `UploadResource`. ([\#11927](https://github.com/matrix-org/synapse/issues/11927))
- Remove an unnecessary ignoring of type hints due to fixes in upstream packages. ([\#11939](https://github.com/matrix-org/synapse/issues/11939))
- Add missing type hints. ([\#11953](https://github.com/matrix-org/synapse/issues/11953))
- Fix an import cycle in `synapse.event_auth`. ([\#11965](https://github.com/matrix-org/synapse/issues/11965))
- Unpin `frozendict` but exclude the known bad version 2.1.2. ([\#11969](https://github.com/matrix-org/synapse/issues/11969))
- Prepare for rename of default Complement branch. ([\#11971](https://github.com/matrix-org/synapse/issues/11971))
- Fetch Synapse's version using a helper from `matrix-common`. ([\#11979](https://github.com/matrix-org/synapse/issues/11979))
Synapse 1.52.0 (2022-02-08)
===========================
No significant changes since 1.52.0rc1.
Note that [Twisted 22.1.0](https://github.com/twisted/twisted/releases/tag/twisted-22.1.0)
has recently been released, which fixes a [security issue](https://github.com/twisted/twisted/security/advisories/GHSA-92x2-jw7w-xvvx)
within the Twisted library. We do not believe Synapse is affected by this vulnerability,
though we advise server administrators who installed Synapse via pip to upgrade Twisted
with `pip install --upgrade Twisted` as a matter of good practice. The Docker image
`matrixdotorg/synapse` and the Debian packages from `packages.matrix.org` are using the
updated library.
Synapse 1.52.0rc1 (2022-02-01)
==============================
Features
--------
- Remove account data (including client config, push rules and ignored users) upon user deactivation. ([\#11621](https://github.com/matrix-org/synapse/issues/11621), [\#11788](https://github.com/matrix-org/synapse/issues/11788), [\#11789](https://github.com/matrix-org/synapse/issues/11789))
- Add an admin API to reset connection timeouts for remote server. ([\#11639](https://github.com/matrix-org/synapse/issues/11639))
- Add an admin API to get a list of rooms that federate with a given remote homeserver. ([\#11658](https://github.com/matrix-org/synapse/issues/11658))
- Add a config flag to inhibit `M_USER_IN_USE` during registration. ([\#11743](https://github.com/matrix-org/synapse/issues/11743))
- Add a module callback to set username at registration. ([\#11790](https://github.com/matrix-org/synapse/issues/11790))
- Allow configuring a maximum file size as well as a list of allowed content types for avatars. ([\#11846](https://github.com/matrix-org/synapse/issues/11846))
Bugfixes
--------
- Include the bundled aggregations in the `/sync` response, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675). ([\#11612](https://github.com/matrix-org/synapse/issues/11612))
- Fix a long-standing bug when previewing Reddit URLs which do not contain an image. ([\#11767](https://github.com/matrix-org/synapse/issues/11767))
- Fix a long-standing bug that media streams could cause long-lived connections when generating URL previews. ([\#11784](https://github.com/matrix-org/synapse/issues/11784))
- Include a `prev_content` field in state events sent to Application Services. Contributed by @totallynotvaishnav. ([\#11798](https://github.com/matrix-org/synapse/issues/11798))
- Fix a bug introduced in Synapse 0.33.3 causing requests to sometimes log strings such as `HTTPStatus.OK` instead of integer status codes. ([\#11827](https://github.com/matrix-org/synapse/issues/11827))
Improved Documentation
----------------------
- Update pypi installation docs to indicate that we now support Python 3.10. ([\#11820](https://github.com/matrix-org/synapse/issues/11820))
- Add missing steps to the contribution submission process in the documentation. Contributed by @sequentialread. ([\#11821](https://github.com/matrix-org/synapse/issues/11821))
- Remove not needed old table of contents in documentation. ([\#11860](https://github.com/matrix-org/synapse/issues/11860))
- Consolidate the `access_token` information at the top of each relevant page in the Admin API documentation. ([\#11861](https://github.com/matrix-org/synapse/issues/11861))
Deprecations and Removals
-------------------------
- Drop support for Python 3.6, which is EOL. ([\#11683](https://github.com/matrix-org/synapse/issues/11683))
- Remove the `experimental_msc1849_support_enabled` flag as the features are now stable. ([\#11843](https://github.com/matrix-org/synapse/issues/11843))
Internal Changes
----------------
- Preparation for database schema simplifications: add `state_key` and `rejection_reason` columns to `events` table. ([\#11792](https://github.com/matrix-org/synapse/issues/11792))
- Add `FrozenEvent.get_state_key` and use it in a couple of places. ([\#11793](https://github.com/matrix-org/synapse/issues/11793))
- Preparation for database schema simplifications: stop reading from `event_reference_hashes`. ([\#11794](https://github.com/matrix-org/synapse/issues/11794))
- Drop unused table `public_room_list_stream`. ([\#11795](https://github.com/matrix-org/synapse/issues/11795))
- Preparation for reducing Postgres serialization errors: allow setting transaction isolation level. Contributed by Nick @ Beeper. ([\#11799](https://github.com/matrix-org/synapse/issues/11799), [\#11847](https://github.com/matrix-org/synapse/issues/11847))
- Docker: skip the initial amd64-only build and go straight to multiarch. ([\#11810](https://github.com/matrix-org/synapse/issues/11810))
- Run Complement on the Github Actions VM and not inside a Docker container. ([\#11811](https://github.com/matrix-org/synapse/issues/11811))
- Log module names at startup. ([\#11813](https://github.com/matrix-org/synapse/issues/11813))
- Improve type safety of bundled aggregations code. ([\#11815](https://github.com/matrix-org/synapse/issues/11815))
- Correct a type annotation in the event validation logic. ([\#11817](https://github.com/matrix-org/synapse/issues/11817), [\#11830](https://github.com/matrix-org/synapse/issues/11830))
- Minor updates and documentation for database schema delta files. ([\#11823](https://github.com/matrix-org/synapse/issues/11823))
- Workaround a type annotation problem in `prometheus_client` 0.13.0. ([\#11834](https://github.com/matrix-org/synapse/issues/11834))
- Minor performance improvement in room state lookup. ([\#11836](https://github.com/matrix-org/synapse/issues/11836))
- Fix some indentation inconsistencies in the sample config. ([\#11838](https://github.com/matrix-org/synapse/issues/11838))
- Add type hints to `tests/rest/admin`. ([\#11851](https://github.com/matrix-org/synapse/issues/11851))
Synapse 1.51.0 (2022-01-25)
===========================


@@ -246,7 +246,7 @@ Password reset
==============
Users can reset their password through their client. Alternatively, a server admin
can reset a user's password using the `admin API <docs/admin_api/user_admin_api.md#reset-password>`_
can reset a user's password using the `admin API <docs/admin_api/user_admin_api.rst#reset-password>`_
or by directly editing the database as shown below.
First calculate the hash of the new password::
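Synapse stores password hashes as bcrypt digests. A minimal sketch of computing one (assuming the `bcrypt` Python package, and that no `password_config.pepper` is set in the homeserver config — otherwise it must be appended to the password first):

```python
import bcrypt

password = "new-password"
pepper = ""  # must match password_config.pepper in homeserver.yaml, if set
hashed = bcrypt.hashpw((password + pepper).encode("utf8"), bcrypt.gensalt(rounds=12))
print(hashed.decode("ascii"))  # value to store in the users.password_hash column
```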

changelog.d/11612.bugfix (new file)

@@ -0,0 +1 @@
Include the bundled aggregations in the `/sync` response, per [MSC2675](https://github.com/matrix-org/matrix-doc/pull/2675).


@@ -0,0 +1 @@
Remove account data (including client config, push rules and ignored users) upon user deactivation.


@@ -0,0 +1 @@
Add admin API to reset connection timeouts for remote server.


@@ -0,0 +1 @@
Add an admin API to get a list of rooms that federate with a given remote homeserver.


@@ -0,0 +1 @@
Drop support for Python 3.6, which is EOL.

changelog.d/11767.bugfix (new file)

@@ -0,0 +1 @@
Fix a long-standing bug when previewing Reddit URLs which do not contain an image.

changelog.d/11784.bugfix (new file)

@@ -0,0 +1 @@
Fix a long-standing bug that media streams could cause long-lived connections when generating URL previews.


@@ -0,0 +1 @@
Remove account data (including client config, push rules and ignored users) upon user deactivation.


@@ -0,0 +1 @@
Remove account data (including client config, push rules and ignored users) upon user deactivation.

changelog.d/11792.misc (new file)

@@ -0,0 +1 @@
Preparation for database schema simplifications: add `state_key` and `rejection_reason` columns to `events` table.

changelog.d/11793.misc (new file)

@@ -0,0 +1 @@
Add `FrozenEvent.get_state_key` and use it in a couple of places.

changelog.d/11794.misc (new file)

@@ -0,0 +1 @@
Preparation for database schema simplifications: stop reading from `event_reference_hashes`.

changelog.d/11795.misc (new file)

@@ -0,0 +1 @@
Drop unused table `public_room_list_stream`.

changelog.d/11799.misc (new file)

@@ -0,0 +1 @@
Preparation for reducing Postgres serialization errors: allow setting transaction isolation level. Contributed by Nick @ Beeper.

changelog.d/11810.misc (new file)

@@ -0,0 +1 @@
Docker: skip the initial amd64-only build and go straight to multiarch.

changelog.d/11811.misc (new file)

@@ -0,0 +1 @@
Run Complement on the Github Actions VM and not inside a Docker container.

changelog.d/11813.misc (new file)

@@ -0,0 +1 @@
Log module names at startup.

changelog.d/11816.misc (new file)

@@ -0,0 +1 @@
Drop support for Python 3.6, which is EOL.

changelog.d/11817.misc (new file)

@@ -0,0 +1 @@
Correct a type annotation in the event validation logic.

changelog.d/11821.doc (new file)

@@ -0,0 +1 @@
Add missing steps to the contribution submission process in the documentation. Contributed by @sequentialread.

changelog.d/11823.misc (new file)

@@ -0,0 +1 @@
Minor updates and documentation for database schema delta files.

changelog.d/11830.misc (new file)

@@ -0,0 +1 @@
Correct a type annotation in the event validation logic.

debian/changelog (vendored)

@@ -1,27 +1,3 @@
matrix-synapse-py3 (1.53.0) stable; urgency=medium
* New synapse release 1.53.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Feb 2022 11:32:06 +0000
matrix-synapse-py3 (1.53.0~rc1) stable; urgency=medium
* New synapse release 1.53.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Feb 2022 10:40:50 +0000
matrix-synapse-py3 (1.52.0) stable; urgency=medium
* New synapse release 1.52.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 08 Feb 2022 11:34:54 +0000
matrix-synapse-py3 (1.52.0~rc1) stable; urgency=medium
* New synapse release 1.52.0~rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 01 Feb 2022 11:04:09 +0000
matrix-synapse-py3 (1.51.0) stable; urgency=medium
* New synapse release 1.51.0.


@@ -0,0 +1,335 @@
# MSC1711 Certificates FAQ
## Historical Note
This document was originally written to guide server admins through the upgrade
path towards Synapse 1.0. Specifically,
[MSC1711](https://github.com/matrix-org/matrix-doc/blob/main/proposals/1711-x509-for-federation.md)
required that all servers present valid TLS certificates on their federation
API. Admins were encouraged to achieve compliance from version 0.99.0 (released
in February 2019) ahead of version 1.0 (released June 2019) enforcing the
certificate checks.
Much of what follows is now outdated since most admins will have already
upgraded, however it may be of use to those with old installs returning to the
project.
If you are setting up a server from scratch you almost certainly should look at
the [installation guide](setup/installation.md) instead.
## Introduction
The goal of Synapse 0.99.0 is to act as a stepping stone to Synapse 1.0.0. It
supports the r0.1 release of the server to server specification, but is
compatible with both the legacy Matrix federation behaviour (pre-r0.1) as well
as post-r0.1 behaviour, in order to allow for a smooth upgrade across the
federation.
The most important thing to know is that Synapse 1.0.0 will require a valid TLS
certificate on federation endpoints. Self signed certificates will not be
sufficient.
Synapse 0.99.0 makes it easy to configure TLS certificates and will
interoperate with both >= 1.0.0 servers as well as existing servers yet to
upgrade.
**It is critical that all admins upgrade to 0.99.0 and configure a valid TLS
certificate.** Admins will have 1 month to do so, after which 1.0.0 will be
released and those servers without a valid certificate will no longer be able
to federate with >= 1.0.0 servers.
Full details on how to carry out this configuration change are given
[below](#configuring-certificates-for-compatibility-with-synapse-100). A
timeline and some frequently asked questions are also given below.
For more details and context on the release of the r0.1 Server/Server API and
imminent Matrix 1.0 release, you can also see our
[main talk from FOSDEM 2019](https://matrix.org/blog/2019/02/04/matrix-at-fosdem-2019/).
## Contents
* Timeline
* Configuring certificates for compatibility with Synapse 1.0
* FAQ
* Synapse 0.99.0 has just been released, what do I need to do right now?
* How do I upgrade?
* What will happen if I do not set up a valid federation certificate
immediately?
* What will happen if I do nothing at all?
* When do I need an SRV record or .well-known URI?
* Can I still use an SRV record?
* I have created a .well-known URI. Do I still need an SRV record?
* It used to work just fine, why are you breaking everything?
* Can I manage my own certificates rather than having Synapse renew
certificates itself?
* Do you still recommend against using a reverse proxy on the federation port?
* Do I still need to give my TLS certificates to Synapse if I am using a
reverse proxy?
* Do I need the same certificate for the client and federation port?
* How do I tell Synapse to reload my keys/certificates after I replace them?
## Timeline
**5th Feb 2019 - Synapse 0.99.0 is released.**
All server admins are encouraged to upgrade.
0.99.0:
- provides support for ACME to make setting up Let's Encrypt certs easy, as
well as .well-known support.
- does not enforce that a valid CA cert is present on the federation API, but
rather makes it easy to set one up.
- provides support for .well-known
Admins should upgrade and configure a valid CA cert. Homeservers that require a
.well-known entry (see below) should retain their SRV record and use it
alongside their .well-known record.
**10th June 2019 - Synapse 1.0.0 is released**
1.0.0 is scheduled for release on 10th June. In
accordance with the [S2S spec](https://matrix.org/docs/spec/server_server/r0.1.0.html)
1.0.0 will enforce certificate validity. This means that any homeserver without a
valid certificate after this point will no longer be able to federate with
1.0.0 servers.
## Configuring certificates for compatibility with Synapse 1.0.0
### If you do not currently have an SRV record
In this case, your `server_name` points to the host where your Synapse is
running. There is no need to create a `.well-known` URI or an SRV record, but
you will need to give Synapse a valid, signed, certificate.
### If you do have an SRV record currently
If you are using an SRV record, your matrix domain (`server_name`) may not
point to the same host that your Synapse is running on (the 'target
domain'). (If it does, you can follow the recommendation above; otherwise, read
on.)
Let's assume that your `server_name` is `example.com`, and your Synapse is
hosted at a target domain of `customer.example.net`. Currently you should have
an SRV record which looks like:
```
_matrix._tcp.example.com. IN SRV 10 5 8000 customer.example.net.
```
In this situation, you have three choices for how to proceed:
#### Option 1: give Synapse a certificate for your matrix domain
Synapse 1.0 will expect your server to present a TLS certificate for your
`server_name` (`example.com` in the above example). You can achieve this by acquiring a
certificate for the `server_name` yourself (for example, using `certbot`), and giving it
and the key to Synapse via `tls_certificate_path` and `tls_private_key_path`.
#### Option 2: run Synapse behind a reverse proxy
If you have an existing reverse proxy set up with correct TLS certificates for
your domain, you can simply route all traffic through the reverse proxy by
updating the SRV record appropriately (or removing it, if the proxy listens on
8448).
See [the reverse proxy documentation](reverse_proxy.md) for information on setting up a
reverse proxy.
#### Option 3: add a .well-known file to delegate your matrix traffic
This will allow you to keep Synapse on a separate domain, without having to
give it a certificate for the matrix domain.
You can do this with a `.well-known` file as follows:
1. Keep the SRV record in place - it is needed for backwards compatibility
with Synapse 0.34 and earlier.
2. Give Synapse a certificate corresponding to the target domain
(`customer.example.net` in the above example). You can do this by acquiring a
certificate for the target domain and giving it to Synapse via `tls_certificate_path`
and `tls_private_key_path`.
3. Restart Synapse to ensure the new certificate is loaded.
4. Arrange for a `.well-known` file at
`https://<server_name>/.well-known/matrix/server` with contents:
```json
{"m.server": "<target server name>"}
```
where the target server name is resolved as usual (i.e. SRV lookup, falling
back to talking to port 8448).
In the above example, where synapse is listening on port 8000,
`https://example.com/.well-known/matrix/server` should have `m.server` set to one of:
1. `customer.example.net` ─ with an SRV record on
`_matrix._tcp.customer.example.net` pointing to port 8000, or:
2. `customer.example.net` ─ updating synapse to listen on the default port
8448, or:
3. `customer.example.net:8000` ─ ensuring that if there is a reverse proxy
on `customer.example.net:8000` it correctly handles HTTP requests with
Host header set to `customer.example.net:8000`.
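For illustration, a simplified Python sketch of how a federation peer might follow this `.well-known` delegation (assuming the `requests` package; real server-name resolution also involves SRV lookups and certificate checks not shown here):

```python
import requests


def resolve_server_name(server_name: str) -> str:
    """Follow .well-known delegation, falling back to port 8448 (simplified)."""
    try:
        resp = requests.get(
            f"https://{server_name}/.well-known/matrix/server", timeout=10
        )
        resp.raise_for_status()
        delegated = resp.json().get("m.server")
        if delegated:
            return delegated  # e.g. "customer.example.net:8000"
    except (requests.RequestException, ValueError):
        pass  # no usable .well-known; fall back to SRV lookup / port 8448
    return f"{server_name}:8448"
```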
## FAQ
### Synapse 0.99.0 has just been released, what do I need to do right now?
Upgrade as soon as you can in preparation for Synapse 1.0.0, and update your
TLS certificates as [above](#configuring-certificates-for-compatibility-with-synapse-100).
### What will happen if I do not set up a valid federation certificate immediately?
Nothing initially, but once 1.0.0 is in the wild it will not be possible to
federate with 1.0.0 servers.
### What will happen if I do nothing at all?
If the admin takes no action at all, and remains on a Synapse < 0.99.0, then the
homeserver will be unable to federate with those who have implemented
.well-known. Then, as above, once the month-long upgrade window has expired the
homeserver will not be able to federate with any Synapse >= 1.0.0.
### When do I need an SRV record or .well-known URI?
If your homeserver listens on the default federation port (8448), and your
`server_name` points to the host that your homeserver runs on, you do not need an
SRV record or `.well-known/matrix/server` URI.
For instance, if you registered `example.com` and pointed its DNS A record at a
fresh Upcloud VPS or similar, you could install Synapse 0.99 on that host,
giving it a server_name of `example.com`, and it would automatically generate a
valid TLS certificate for you via Let's Encrypt and no SRV record or
`.well-known` URI would be needed.
This is the common case, although you can add an SRV record or
`.well-known/matrix/server` URI for completeness if you wish.
**However**, if your server does not listen on port 8448, or if your `server_name`
does not point to the host that your homeserver runs on, you will need to let
other servers know how to find it.
In this case, you should see ["If you do have an SRV record
currently"](#if-you-do-have-an-srv-record-currently) above.
### Can I still use an SRV record?
Firstly, if you didn't need an SRV record before (because your server is
listening on port 8448 of your server_name), you certainly don't need one now:
the defaults are still the same.
If you previously had an SRV record, you can keep using it provided you are
able to give Synapse a TLS certificate corresponding to your server name. For
example, suppose you had the following SRV record, which directs matrix traffic
for example.com to matrix.example.com:443:
```
_matrix._tcp.example.com. IN SRV 10 5 443 matrix.example.com
```
In this case, Synapse must be given a certificate for example.com - or be
configured to acquire one from Let's Encrypt.
If you are unable to give Synapse a certificate for your server_name, you will
also need to use a .well-known URI instead. However, see also "I have created a
.well-known URI. Do I still need an SRV record?".
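As a hedged illustration, the SRV lookup a peer performs for such a record could be sketched with the `dnspython` package (real resolution also honours record weights and falls back to port 8448):

```python
import dns.resolver


def lookup_matrix_srv(server_name: str) -> tuple[str, int]:
    answers = dns.resolver.resolve(f"_matrix._tcp.{server_name}", "SRV")
    # Simplified: pick the record with the lowest priority value.
    record = min(answers, key=lambda r: r.priority)
    return str(record.target).rstrip("."), record.port
```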
### I have created a .well-known URI. Do I still need an SRV record?
As of Synapse 0.99, Synapse will first check for the existence of a `.well-known`
URI and follow any delegation it suggests. It will only then check for the
existence of an SRV record.
That means that the SRV record will often be redundant. However, you should
remember that there may still be older versions of Synapse in the federation
which do not understand `.well-known` URIs, so if you removed your SRV record you
would no longer be able to federate with them.
It is therefore best to leave the SRV record in place for now. Synapse 0.34 and
earlier will follow the SRV record (and not care about the invalid
certificate). Synapse 0.99 and later will follow the .well-known URI, with the
correct certificate chain.
### It used to work just fine, why are you breaking everything?
We have always wanted Matrix servers to be as easy to set up as possible, and
so back when we started federation in 2014 we didn't want admins to have to go
through the cumbersome process of buying a valid TLS certificate to run a
server. This was before Let's Encrypt came along and made getting a free and
valid TLS certificate straightforward. So instead, we adopted a system based on
[Perspectives](https://en.wikipedia.org/wiki/Convergence_(SSL)): an approach
where you check a set of "notary servers" (in practice, homeservers) to vouch
for the validity of a certificate rather than having it signed by a CA. As long
as enough different notaries agree on the certificate's validity, then it is
trusted.
However, in practice this has never worked properly. Most people only use the
default notary server (matrix.org), leading to inadvertent centralisation which
we want to eliminate. Meanwhile, we never implemented the full consensus
algorithm to query the servers participating in a room to determine consensus
on whether a given certificate is valid. This is fiddly to get right
(especially in face of sybil attacks), and we found ourselves questioning
whether it was worth the effort to finish the work and commit to maintaining a
secure certificate validation system as opposed to focusing on core Matrix
development.
Meanwhile, Let's Encrypt came along in 2016, and put the final nail in the
coffin of the Perspectives project (which was already pretty dead). So, the
Spec Core Team decided that a better approach would be to mandate valid TLS
certificates for federation alongside the rest of the Web. More details can be
found in
[MSC1711](https://github.com/matrix-org/matrix-doc/blob/main/proposals/1711-x509-for-federation.md#background-the-failure-of-the-perspectives-approach).
This results in a breaking change, which is disruptive, but absolutely critical
for the security model. However, the existence of Let's Encrypt as a trivial
way to replace the old self-signed certificates with valid CA-signed ones helps
smooth things over massively, especially as Synapse can now automate Let's
Encrypt certificate generation if needed.
### Can I manage my own certificates rather than having Synapse renew certificates itself?
Yes, you are welcome to manage your certificates yourself. Synapse will only
attempt to obtain certificates from Let's Encrypt if you configure it to do
so. The only requirement is that there is a valid TLS cert present for
federation endpoints.
### Do you still recommend against using a reverse proxy on the federation port?
We no longer actively recommend against using a reverse proxy. Many admins will
find it easier to direct federation traffic to a reverse proxy and manage their
own TLS certificates, and this is a supported configuration.
See [the reverse proxy documentation](reverse_proxy.md) for information on setting up a
reverse proxy.
### Do I still need to give my TLS certificates to Synapse if I am using a reverse proxy?
Practically speaking, this is no longer necessary.
If you are using a reverse proxy for all of your TLS traffic, then you can set
`no_tls: True`. In that case, the only reason Synapse needs the certificate is
to populate a legacy 'tls_fingerprints' field in the federation API. This is
ignored by Synapse 0.99.0 and later, and the only time pre-0.99 Synapses will
check it is when attempting to fetch the server keys - and generally this is
delegated via `matrix.org`, which is on 0.99.0.
However, there is a bug in Synapse 0.99.0
[4554](<https://github.com/matrix-org/synapse/issues/4554>) which prevents
Synapse from starting if you do not give it a TLS certificate. To work around
this, you can give it any TLS certificate at all. This will be fixed soon.
### Do I need the same certificate for the client and federation port?
No. There is nothing stopping you from using different certificates,
particularly if you are using a reverse proxy. However, Synapse will use the
same certificate on any ports where TLS is configured.
### How do I tell Synapse to reload my keys/certificates after I replace them?
Synapse will reload the keys and certificates when it receives a SIGHUP - for
example `kill -HUP $(cat homeserver.pid)`. Alternatively, simply restart
Synapse, though this will result in downtime while it restarts.
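The same signal can of course be sent programmatically; a trivial sketch (the pid file path is a placeholder):

```python
import os
import signal

with open("homeserver.pid") as f:
    os.kill(int(f.read().strip()), signal.SIGHUP)  # triggers cert/key reload
```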


@@ -13,6 +13,7 @@
# Upgrading
- [Upgrading between Synapse Versions](upgrade.md)
- [Upgrading from pre-Synapse 1.0](MSC1711_certificates_FAQ.md)
# Usage
- [Federation](federate.md)
@@ -71,7 +72,7 @@
- [Understanding Synapse Through Grafana Graphs](usage/administration/understanding_synapse_through_grafana_graphs.md)
- [Useful SQL for Admins](usage/administration/useful_sql_for_admins.md)
- [Database Maintenance Tools](usage/administration/database_maintenance_tools.md)
- [State Groups](usage/administration/state_groups.md)
- [State Groups](usage/administration/state_groups.md)
- [Request log format](usage/administration/request_log.md)
- [Admin FAQ](usage/administration/admin_faq.md)
- [Scripts]()
@@ -79,7 +80,6 @@
# Development
- [Contributing Guide](development/contributing_guide.md)
- [Code Style](code_style.md)
- [Release Cycle](development/releases.md)
- [Git Usage](development/git.md)
- [Testing]()
- [OpenTracing](opentracing.md)


@@ -4,9 +4,6 @@ This API allows a server administrator to manage the validity of an account. To
use it, you must enable the account validity feature (under
`account_validity`) in Synapse's configuration.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
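As a hedged illustration, calling such an endpoint with `requests` might look like the following; the server URL and token are placeholders, and the renewal endpoint shown is the one documented below:

```python
import requests

SERVER = "https://synapse.example.com"
HEADERS = {"Authorization": "Bearer <admin access_token>"}

resp = requests.post(
    f"{SERVER}/_synapse/admin/v1/account_validity/validity",
    headers=HEADERS,
    json={"user_id": "@alice:example.com"},
)
resp.raise_for_status()
print(resp.json())  # contains the new expiration timestamp
```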
## Renew account
This API extends the validity of an account by as much time as configured in the


@@ -4,11 +4,11 @@ This API lets a server admin delete a local group. Doing so will kick all
users out of the group so that their clients will correctly handle the group
being deleted.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
The API is:
```
POST /_synapse/admin/v1/delete_group/<group_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../usage/administration/admin_api).


@@ -2,13 +2,12 @@
This API returns information about reported events.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
The API is:
```
GET /_synapse/admin/v1/event_reports?from=0&limit=10
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../usage/administration/admin_api).
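A hedged sketch of paging through the reports with `requests` (the URL and token are placeholders; the `event_reports` field follows the documented response shape):

```python
import requests

SERVER = "https://synapse.example.com"
HEADERS = {"Authorization": "Bearer <admin access_token>"}

resp = requests.get(
    f"{SERVER}/_synapse/admin/v1/event_reports",
    headers=HEADERS,
    params={"from": 0, "limit": 10},
)
for report in resp.json().get("event_reports", []):
    print(report["id"], report.get("reason"))
```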
It returns a JSON body like the following:
@@ -95,6 +94,8 @@ The api is:
```
GET /_synapse/admin/v1/event_reports/<report_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../usage/administration/admin_api).
It returns a JSON body like the following:


@@ -1,13 +1,24 @@
# Contents
- [Querying media](#querying-media)
* [List all media in a room](#list-all-media-in-a-room)
* [List all media uploaded by a user](#list-all-media-uploaded-by-a-user)
- [Quarantine media](#quarantine-media)
* [Quarantining media by ID](#quarantining-media-by-id)
* [Remove media from quarantine by ID](#remove-media-from-quarantine-by-id)
* [Quarantining media in a room](#quarantining-media-in-a-room)
* [Quarantining all media of a user](#quarantining-all-media-of-a-user)
* [Protecting media from being quarantined](#protecting-media-from-being-quarantined)
* [Unprotecting media from being quarantined](#unprotecting-media-from-being-quarantined)
- [Delete local media](#delete-local-media)
* [Delete a specific local media](#delete-a-specific-local-media)
* [Delete local media by date or size](#delete-local-media-by-date-or-size)
* [Delete media uploaded by a user](#delete-media-uploaded-by-a-user)
- [Purge Remote Media API](#purge-remote-media-api)
# Querying media
These APIs allow extracting media information from the homeserver.
Details about the format of the `media_id` and storage of the media in the file system
are documented under [media repository](../media_repository.md).
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
## List all media in a room
This API gets a list of known media in a room.
@@ -17,6 +28,8 @@ The API is:
```
GET /_synapse/admin/v1/room/<room_id>/media
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../usage/administration/admin_api).
The API returns a JSON body like the following:
```json
@@ -304,5 +317,8 @@ The following fields are returned in the JSON response body:
* `deleted`: integer - The number of media items successfully deleted
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../usage/administration/admin_api).
If the user re-requests purged remote media, synapse will re-request the media
from the originating server.
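For illustration, a minimal sketch of invoking the delete-local-media-by-date API listed in the contents above (URL, token, and timestamp are placeholders):

```python
import requests

resp = requests.post(
    "https://synapse.example.com/_synapse/admin/v1/media/example.com/delete",
    headers={"Authorization": "Bearer <admin access_token>"},
    params={"before_ts": 1640995200000},  # milliseconds since the epoch
)
resp.raise_for_status()
print(resp.json())  # e.g. {"deleted_media": [...], "total": ...}
```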


@@ -10,15 +10,15 @@ paginate further back in the room from the point being purged from.
Note that Synapse requires at least one message in each room, so it will never
delete the last message in a room.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
The API is:
```
POST /_synapse/admin/v1/purge_history/<room_id>[/<event_id>]
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
By default, events sent by local users are not deleted, as they may represent
the only copies of this content in existence. (Events sent by remote users are
deleted.)
@@ -57,6 +57,9 @@ It is possible to poll for updates on recent purges with a second API;
GET /_synapse/admin/v1/purge_history_status/<purge_id>
```
Again, you will need to authenticate by providing an `access_token` for a
server admin.
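Taken together, a rough Python sketch of starting a purge and polling its status (server URL, token, room ID, and timestamp are placeholders):

```python
import time

import requests

SERVER = "https://synapse.example.com"
HEADERS = {"Authorization": "Bearer <admin access_token>"}

resp = requests.post(
    f"{SERVER}/_synapse/admin/v1/purge_history/!room:example.com",
    headers=HEADERS,
    json={"delete_local_events": False, "purge_up_to_ts": 1645000000000},
)
purge_id = resp.json()["purge_id"]

status = "active"
while status == "active":
    time.sleep(5)
    status = requests.get(
        f"{SERVER}/_synapse/admin/v1/purge_history_status/{purge_id}",
        headers=HEADERS,
    ).json()["status"]
print("purge finished with status:", status)
```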
This API returns a JSON body like the following:
```json


@@ -5,9 +5,6 @@ to a room with a given `room_id_or_alias`. You can only modify the membership of
local users. The server administrator must be in the room and have permission to
invite users.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
## Parameters
The following parameters are available:
@@ -26,6 +23,9 @@ POST /_synapse/admin/v1/join/<room_id_or_alias>
}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../usage/administration/admin_api).
Response:
```json


@@ -1,12 +1,24 @@
# Contents
- [List Room API](#list-room-api)
- [Room Details API](#room-details-api)
- [Room Members API](#room-members-api)
- [Room State API](#room-state-api)
- [Block Room API](#block-room-api)
- [Delete Room API](#delete-room-api)
* [Version 1 (old version)](#version-1-old-version)
* [Version 2 (new version)](#version-2-new-version)
* [Status of deleting rooms](#status-of-deleting-rooms)
* [Undoing room shutdowns](#undoing-room-shutdowns)
- [Make Room Admin API](#make-room-admin-api)
- [Forward Extremities Admin API](#forward-extremities-admin-api)
- [Event Context API](#event-context-api)
# List Room API
The List Room admin API allows server admins to get a list of rooms on their
server. There are various parameters available that allow for filtering and
sorting the returned list. This API supports pagination.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
**Parameters**
The following query parameters are available:
@@ -481,6 +493,9 @@ several minutes or longer.
The local server will only have the power to move local user and room aliases to
the new room. Users on other servers will be unaffected.
To use it, you will need to authenticate by providing an `access_token` for a
server admin: see [Admin API](../usage/administration/admin_api).
## Version 1 (old version)
This version works synchronously. That means you only get the response once the server has


@@ -3,15 +3,15 @@
Returns information about all local media usage of users. The results can be
filtered by time and user.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
The API is:
```
GET /_synapse/admin/v1/statistics/users/media
```
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
A response body like the following is returned:
```json


@@ -1,8 +1,5 @@
# User Admin API
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
## Query User Account
This API returns information about a specific user account.
@@ -13,6 +10,9 @@ The api is:
GET /_synapse/admin/v2/users/<user_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
It returns a JSON body like the following:
```jsonc
@@ -104,6 +104,9 @@ with a body of:
}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
Returns HTTP status code:
- `201` - When a new user object was created.
- `200` - When a user was modified.
@@ -153,6 +156,9 @@ By default, the response is ordered by ascending user ID.
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -272,6 +278,9 @@ GET /_matrix/client/r0/admin/whois/<userId>
See also: [Client Server
API Whois](https://matrix.org/docs/spec/client_server/r0.6.1#get-matrix-client-r0-admin-whois-userid).
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
It returns a JSON body like the following:
```json
@@ -326,12 +335,15 @@ with a body of:
}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
The erase parameter is optional and defaults to `false`.
An empty body may be passed for backwards compatibility.
The following actions are performed when deactivating a user:
- Try to unbind 3PIDs from the identity server
- Try to unpind 3PIDs from the identity server
- Remove all 3PIDs from the homeserver
- Delete all devices and E2EE keys
- Delete all access tokens
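For illustration, a hedged sketch of calling the deactivation endpoint (URL, token, and user ID are placeholders):

```python
import requests

resp = requests.post(
    "https://synapse.example.com/_synapse/admin/v1/deactivate/@alice:example.com",
    headers={"Authorization": "Bearer <admin access_token>"},
    json={"erase": True},
)
resp.raise_for_status()
```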
@@ -382,6 +394,9 @@ with a body of:
}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
The parameter `new_password` is required.
The parameter `logout_devices` is optional and defaults to `true`.
@@ -394,6 +409,9 @@ The api is:
GET /_synapse/admin/v1/users/<user_id>/admin
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -421,6 +439,10 @@ with a body of:
}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
## List room memberships of a user
Gets a list of all `room_id`s that a specific `user_id` is a member of.
@@ -431,6 +453,9 @@ The API is:
GET /_synapse/admin/v1/users/<user_id>/joined_rooms
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -539,11 +564,6 @@ The following fields are returned in the JSON response body:
### List media uploaded by a user
Gets a list of all local media that a specific `user_id` has created.
These are media that the user has uploaded themselves
([local media](../media_repository.md#local-media)), as well as
[URL preview images](../media_repository.md#url-previews) requested by the user if the
[feature is enabled](../development/url_previews.md).
By default, the response is ordered by descending creation date and ascending media ID.
The newest media is on top. You can change the order with parameters
`order_by` and `dir`.
@@ -554,6 +574,9 @@ The API is:
GET /_synapse/admin/v1/users/<user_id>/media
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -640,9 +663,7 @@ The following fields are returned in the JSON response body:
Media objects contain the following fields:
- `created_ts` - integer - Timestamp when the content was uploaded in ms.
- `last_access_ts` - integer - Timestamp when the content was last accessed in ms.
- `media_id` - string - The id used to refer to the media. Details about the format
are documented under
[media repository](../media_repository.md).
- `media_id` - string - The id used to refer to the media.
- `media_length` - integer - Length of the media in bytes.
- `media_type` - string - The MIME-type of the media.
- `quarantined_by` - string - The user ID that initiated the quarantine request
@@ -670,6 +691,9 @@ The API is:
DELETE /_synapse/admin/v1/users/<user_id>/media
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -742,6 +766,9 @@ The API is:
GET /_synapse/admin/v2/users/<user_id>/devices
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -807,6 +834,9 @@ POST /_synapse/admin/v2/users/<user_id>/delete_devices
}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.
**Parameters**
@@ -828,6 +858,9 @@ The API is:
GET /_synapse/admin/v2/users/<user_id>/devices/<device_id>
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -873,6 +906,9 @@ PUT /_synapse/admin/v2/users/<user_id>/devices/<device_id>
}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.
**Parameters**
@@ -899,6 +935,9 @@ DELETE /_synapse/admin/v2/users/<user_id>/devices/<device_id>
{}
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.
**Parameters**
@@ -917,6 +956,9 @@ The API is:
GET /_synapse/admin/v1/users/<user_id>/pushers
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -1011,6 +1053,9 @@ To un-shadow-ban a user the API is:
DELETE /_synapse/admin/v1/users/<user_id>/shadow_ban
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned in both cases.
**Parameters**
@@ -1033,6 +1078,9 @@ The API is:
GET /_synapse/admin/v1/users/<user_id>/override_ratelimit
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -1072,6 +1120,9 @@ The API is:
POST /_synapse/admin/v1/users/<user_id>/override_ratelimit
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
A response body like the following is returned:
```json
@@ -1114,6 +1165,9 @@ The API is:
DELETE /_synapse/admin/v1/users/<user_id>/override_ratelimit
```
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
An empty JSON dict is returned.
```json
@@ -1142,5 +1196,7 @@ The API is:
GET /_synapse/admin/v1/username_available?username=$localpart
```
The request and response format is the same as the
[/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
The request and response format is the same as the [/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
To use it, you will need to authenticate by providing an `access_token` for a
server admin: [Admin API](../usage/administration/admin_api)
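A minimal sketch of the availability check (URL and token are placeholders):

```python
import requests

resp = requests.get(
    "https://synapse.example.com/_synapse/admin/v1/username_available",
    headers={"Authorization": "Bearer <admin access_token>"},
    params={"username": "alice"},
)
print(resp.json())  # {"available": true} when the localpart is free
```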


@@ -1,37 +0,0 @@
# Synapse Release Cycle
Releases of Synapse follow a two-week release cycle with new releases usually
occurring on Tuesdays:
* Day 0: Synapse `N - 1` is released.
* Day 7: Synapse `N` release candidate 1 is released.
* Days 7 - 13: Synapse `N` release candidates 2+ are released, if bugs are found.
* Day 14: Synapse `N` is released.
Note that this schedule might be modified depending on the availability of the
Synapse team, e.g. releases may be skipped to avoid holidays.
Release announcements can be found in the
[release category of the Matrix blog](https://matrix.org/blog/category/releases).
## Bugfix releases
If a bug is found after release that is deemed severe enough (by a combination
of the impacted users and the impact on those users) then a bugfix release may
be issued. This may be at any point in the release cycle.
## Security releases
Security fixes will sometimes be backported to the previous version and released
immediately before the next release candidate. An example of this might be:
* Day 0: Synapse N - 1 is released.
* Day 7: Synapse (N - 1).1 is released as Synapse N - 1 + the security fix.
* Day 7: Synapse N release candidate 1 is released (including the security fix).
Depending on the impact and complexity of security fixes, multiple fixes might
be held to be released together.
In some cases, a pre-disclosure of a security release will be issued as a notice
to Synapse operators that there is an upcoming security release. These can be
found in the [security category of the Matrix blog](https://matrix.org/blog/category/security).


@@ -105,87 +105,6 @@ device ID), and the (now deactivated) access token.
If multiple modules implement this callback, Synapse runs them all in order.
### `get_username_for_registration`
_First introduced in Synapse v1.52.0_
```python
async def get_username_for_registration(
uia_results: Dict[str, Any],
params: Dict[str, Any],
) -> Optional[str]
```
Called when registering a new user. The module can return a username to set for the user
being registered by returning it as a string, or `None` if it doesn't wish to force a
username for this user. If a username is returned, it will be used as the local part of a
user's full Matrix ID (e.g. it's `alice` in `@alice:example.com`).
This callback is called once [User-Interactive Authentication](https://spec.matrix.org/latest/client-server-api/#user-interactive-authentication-api)
has been completed by the user. It is not called when registering a user via SSO. It is
passed two dictionaries, which include the information that the user has provided during
the registration process.
The first dictionary contains the results of the [User-Interactive Authentication](https://spec.matrix.org/latest/client-server-api/#user-interactive-authentication-api)
flow followed by the user. Its keys are the identifiers of every step involved in the flow,
associated with either a boolean value indicating whether the step was correctly completed,
or additional information (e.g. email address, phone number...). A list of most existing
identifiers can be found in the [Matrix specification](https://spec.matrix.org/v1.1/client-server-api/#authentication-types).
Here's an example featuring all currently supported keys:
```python
{
"m.login.dummy": True, # Dummy authentication
"m.login.terms": True, # User has accepted the terms of service for the homeserver
"m.login.recaptcha": True, # User has completed the recaptcha challenge
"m.login.email.identity": { # User has provided and verified an email address
"medium": "email",
"address": "alice@example.com",
"validated_at": 1642701357084,
},
"m.login.msisdn": { # User has provided and verified a phone number
"medium": "msisdn",
"address": "33123456789",
"validated_at": 1642701357084,
},
"m.login.registration_token": "sometoken", # User has registered through a registration token
}
```
The second dictionary contains the parameters provided by the user's client in the request
to `/_matrix/client/v3/register`. See the [Matrix specification](https://spec.matrix.org/latest/client-server-api/#post_matrixclientv3register)
for a complete list of these parameters.
If the module cannot, or does not wish to, generate a username for this user, it must
return `None`.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `None`, Synapse falls through to the next one. The value of the first
callback that does not return `None` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback. If every callback returns `None`,
the username provided by the user is used, if any (otherwise one is automatically
generated).
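As a brief illustration, the sketch below derives the username from a verified email
address collected during User-Interactive Authentication. The `EmailUsernameModule`
class and its configuration are hypothetical; only the callback signature and the
`m.login.email.identity` result shape come from the documentation above, and the
registration assumes the standard `register_password_auth_provider_callbacks` module
API hook for these callbacks.
```python
from typing import Any, Dict, Optional


class EmailUsernameModule:
    """Hypothetical module deriving usernames from verified email addresses."""

    def __init__(self, config: dict, api):
        # Assumes the module API hook used for password auth provider callbacks.
        api.register_password_auth_provider_callbacks(
            get_username_for_registration=self.get_username_for_registration,
        )

    async def get_username_for_registration(
        self,
        uia_results: Dict[str, Any],
        params: Dict[str, Any],
    ) -> Optional[str]:
        # Only act when the user verified an email address during UIA.
        threepid = uia_results.get("m.login.email.identity")
        if not isinstance(threepid, dict):
            # Let Synapse fall through to the next module, or use the
            # username supplied by the user.
            return None
        # "alice@example.com" -> "alice"
        return threepid["address"].split("@", 1)[0].lower()
```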
### `is_3pid_allowed`
_First introduced in Synapse v1.53.0_
```python
async def is_3pid_allowed(self, medium: str, address: str, registration: bool) -> bool
```
Called when attempting to bind a third-party identifier (i.e. an email address or a phone
number). The module is given the medium of the third-party identifier (which is `email` if
the identifier is an email address, or `msisdn` if the identifier is a phone number) and
its address, as well as a boolean indicating whether the attempt to bind is happening as
part of registering a new user. The module must return a boolean indicating whether the
identifier can be allowed to be bound to an account on the local homeserver.
If multiple modules implement this callback, they will be considered in order. If a
callback returns `True`, Synapse falls through to the next one. The value of the first
callback that does not return `True` will be used. If this happens, Synapse will not call
any of the subsequent implementations of this callback.
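For instance, a module might allow email binding only under a trusted domain while
leaving phone numbers unrestricted. This is a minimal sketch: the module class and
the domain are invented, the callback signature is the one documented above, and the
registration again assumes the password auth provider callback hook.
```python
class DomainRestrictedThreepidModule:
    """Hypothetical module that only allows example.com email addresses."""

    def __init__(self, config: dict, api):
        api.register_password_auth_provider_callbacks(
            is_3pid_allowed=self.is_3pid_allowed,
        )

    async def is_3pid_allowed(
        self, medium: str, address: str, registration: bool
    ) -> bool:
        if medium != "email":
            # Fall through for phone numbers (msisdn).
            return True
        return address.endswith("@example.com")
```
Returning `True` defers to any subsequent modules, per the fall-through behavior
described above; the first callback to return something other than `True` decides
the outcome.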
## Example
The example module below implements authentication checkers for two different login types:

View File

@@ -41,11 +41,11 @@
# documentation on how to configure or create custom modules for Synapse.
#
modules:
#- module: my_super_module.MySuperClass
# config:
# do_thing: true
#- module: my_other_super_module.SomeClass
# config: {}
# - module: my_super_module.MySuperClass
# config:
# do_thing: true
# - module: my_other_super_module.SomeClass
# config: {}
## Server ##
@@ -471,20 +471,6 @@ limit_remote_rooms:
#
#allow_per_room_profiles: false
# The largest allowed file size for a user avatar. Defaults to no restriction.
#
# Note that user avatar changes will not work if this is set without
# using Synapse's media repository.
#
#max_avatar_size: 10M
# The MIME types allowed for user avatars. Defaults to no restriction.
#
# Note that user avatar changes will not work if this is set without
# using Synapse's media repository.
#
#allowed_avatar_mimetypes: ["image/png", "image/jpeg", "image/gif"]
# How long to keep redacted events in unredacted form in the database. After
# this period redacted events get replaced with their redacted form in the DB.
#
@@ -751,16 +737,11 @@ caches:
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
# Controls whether cache entries are evicted after a specified time
# period. Defaults to true. Uncomment to disable this feature.
# Controls how long an entry can be in a cache without having been
# accessed before being evicted. Defaults to None, which means
# entries are never evicted based on time.
#
#expire_caches: false
# If expire_caches is enabled, this flag controls how long an entry can
# be in a cache without having been accessed before being evicted.
# Defaults to 30m. Uncomment to set a different time to live for cache entries.
#
#cache_entry_ttl: 30m
#expiry_time: 30m
# Controls how long the results of a /sync request are cached for after
# a successful response is returned. A higher duration can help clients with
@@ -862,9 +843,6 @@ log_config: "CONFDIR/SERVERNAME.log.config"
# - one for ratelimiting how often a user or IP can attempt to validate a 3PID.
# - two for ratelimiting how often invites can be sent in a room or to a
# specific user.
# - one for ratelimiting 3PID invites (i.e. invites sent to a third-party ID
# such as an email address or a phone number) based on the account that's
# sending the invite.
#
# The defaults are as shown below.
#
@@ -914,10 +892,6 @@ log_config: "CONFDIR/SERVERNAME.log.config"
# per_user:
# per_second: 0.003
# burst_count: 5
#
#rc_third_party_invite:
# per_second: 0.2
# burst_count: 10
# Ratelimiting settings for incoming federation
#
@@ -1454,16 +1428,6 @@ account_threepid_delegates:
#
#auto_join_rooms_for_guests: false
# Whether to inhibit errors raised when registering a new account if the user ID
# already exists. If turned on, requests to /register/available will always
# show a user ID as available, and Synapse won't raise an error when starting
# a registration with a user ID that already exists. However, Synapse will still
# raise an error if the registration completes and the username conflicts.
#
# Defaults to false.
#
#inhibit_user_in_use_error: true
## Metrics ##

View File

@@ -194,7 +194,7 @@ When following this route please make sure that the [Platform-specific prerequis
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.7 or later, up to Python 3.10.
- Python 3.7 or later, up to Python 3.9.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
To install the Synapse homeserver run:

View File

@@ -141,7 +141,7 @@ formatters:
handlers:
console:
class: logging.StreamHandler
stream: ext://sys.stdout
location: ext://sys.stdout
file:
class: logging.FileHandler
formatter: json

View File

@@ -85,82 +85,6 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.53.0
## Dropping support for `webclient` listeners and non-HTTP(S) `web_client_location`
Per the deprecation notice in Synapse v1.51.0, listeners of type `webclient`
are no longer supported and configuring them is now a configuration error.
Configuring a non-HTTP(S) `web_client_location` is also now a configuration
error. Since the `webclient` listener is no longer supported, this setting now
applies only to the root path `/` of Synapse's web server, and no longer to
the `/_matrix/client/` path.
## Stabilisation of MSC3231
The unstable validity-check endpoint for the
[Registration Tokens](https://spec.matrix.org/v1.2/client-server-api/#get_matrixclientv1registermloginregistration_tokenvalidity)
feature has been stabilised and moved from:
`/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity`
to:
`/_matrix/client/v1/register/m.login.registration_token/validity`
Please update any relevant reverse proxy or firewall configurations appropriately.
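Once the proxy rules have been updated, the move can be sanity-checked by querying
the stable endpoint directly. A hedged example using the `requests` library (the
homeserver URL and token value are placeholders):
```python
import requests

resp = requests.get(
    "https://matrix.example.com/_matrix/client/v1"
    "/register/m.login.registration_token/validity",
    params={"token": "sometoken"},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"valid": True}
```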
## Time-based cache expiry is now enabled by default
Formerly, entries in the cache were never evicted based on time, regardless of whether they were accessed after being stored.
This behavior has now changed: by default, entries in the cache are evicted after 30m of not being accessed.
To change the default behavior, go to the `caches` section of the config and change the `expire_caches` and
`cache_entry_ttl` flags as necessary. Please note that these flags replace the `expiry_time` flag in the config.
The `expiry_time` flag will continue to work, but it has been deprecated and will be removed in the future.
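The semantics are access-based: reading an entry refreshes its timer, so only
entries untouched for the full TTL are dropped. The toy sketch below illustrates
this; it is not Synapse's actual cache implementation.
```python
import time
from collections import OrderedDict
from typing import Any, Tuple


class TtlCacheSketch:
    """Toy model of time-based eviction: entries unread for `ttl` seconds expire."""

    def __init__(self, ttl: float = 30 * 60):
        self._ttl = ttl
        self._entries: "OrderedDict[str, Tuple[float, Any]]" = OrderedDict()

    def set(self, key: str, value: Any) -> None:
        self._entries[key] = (time.monotonic(), value)

    def get(self, key: str) -> Any:
        ts, value = self._entries[key]
        if time.monotonic() - ts > self._ttl:
            # Entry has not been accessed within the TTL: evict it.
            del self._entries[key]
            raise KeyError(key)
        # A successful read refreshes the access time.
        self._entries[key] = (time.monotonic(), value)
        return value
```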
## Deprecation of `capability` `org.matrix.msc3283.*`
The `capabilities` of MSC3283, exposed via the REST API `/_matrix/client/r0/capabilities`,
are now stable.
The old `capabilities`
- `org.matrix.msc3283.set_displayname`,
- `org.matrix.msc3283.set_avatar_url` and
- `org.matrix.msc3283.3pid_changes`
are deprecated and scheduled to be removed in Synapse v1.54.0.
The new `capabilities`
- `m.set_displayname`,
- `m.set_avatar_url` and
- `m.3pid_changes`
are now active by default.
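A `/capabilities` response advertising the stable keys might therefore look like the
following (rendered as a Python dict for consistency with the other examples in this
document; the `enabled` values depend on the homeserver's configuration):
```python
{
    "capabilities": {
        "m.set_displayname": {"enabled": True},
        "m.set_avatar_url": {"enabled": True},
        "m.3pid_changes": {"enabled": True},
        # Deprecated aliases, scheduled for removal in Synapse v1.54.0:
        "org.matrix.msc3283.set_displayname": {"enabled": True},
        "org.matrix.msc3283.set_avatar_url": {"enabled": True},
        "org.matrix.msc3283.3pid_changes": {"enabled": True},
    }
}
```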
## Removal of `user_may_create_room_with_invites`
As announced with the release of [Synapse 1.47.0](#deprecation-of-the-user_may_create_room_with_invites-module-callback),
the deprecated `user_may_create_room_with_invites` module callback has been removed.
Modules relying on it can instead implement [`user_may_invite`](https://matrix-org.github.io/synapse/latest/modules/spam_checker_callbacks.html#user_may_invite)
and use the [`get_room_state`](https://github.com/matrix-org/synapse/blob/872f23b95fa980a61b0866c1475e84491991fa20/synapse/module_api/__init__.py#L869-L876)
module API to infer whether the invite is happening while creating a room (see [this function](https://github.com/matrix-org/synapse-domain-rule-checker/blob/e7d092dd9f2a7f844928771dbfd9fd24c2332e48/synapse_domain_rule_checker/__init__.py#L56-L89)
as an example). Alternately, modules can also implement [`on_create_room`](https://matrix-org.github.io/synapse/latest/modules/third_party_rules_callbacks.html#on_create_room).
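As a rough migration sketch (the module class, callback registration and blocking
rule are invented; `user_may_invite` receives the inviter's user ID, the invitee's
user ID and the room ID, and `get_room_state` is the module API method linked above):
```python
class InviteRuleModule:
    """Hypothetical replacement for user_may_create_room_with_invites."""

    def __init__(self, config: dict, api):
        self._api = api
        api.register_spam_checker_callbacks(user_may_invite=self.user_may_invite)

    async def user_may_invite(
        self, inviter: str, invitee: str, room_id: str
    ) -> bool:
        # Inspect the room's state to infer whether the invite is happening
        # while the room is being created: at that point only the creator
        # has an m.room.member event.
        state = await self._api.get_room_state(room_id)
        members = [key for key in state if key[0] == "m.room.member"]
        if len(members) <= 1 and invitee.endswith(":blocked.example.com"):
            return False
        return True
```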
# Upgrading to v1.52.0
## Twisted security release
Note that [Twisted 22.1.0](https://github.com/twisted/twisted/releases/tag/twisted-22.1.0)
has recently been released, which fixes a [security issue](https://github.com/twisted/twisted/security/advisories/GHSA-92x2-jw7w-xvvx)
within the Twisted library. We do not believe Synapse is affected by this vulnerability,
though we advise server administrators who installed Synapse via pip to upgrade Twisted
with `pip install --upgrade Twisted` as a matter of good practice. The Docker image
`matrixdotorg/synapse` and the Debian packages from `packages.matrix.org` are using the
updated library.
# Upgrading to v1.51.0
## Deprecation of `webclient` listeners and non-HTTP(S) `web_client_location`
@@ -1205,7 +1129,8 @@ more details on upgrading your database.
Synapse v1.0 is the first release to enforce validation of TLS
certificates for the federation API. It is therefore essential that your
certificates are correctly configured.
certificates are correctly configured. See the
[FAQ](MSC1711_certificates_FAQ.md) for more information.
Note, v1.0 installations will also no longer be able to federate with
servers that have not correctly configured their certificates.
@@ -1270,6 +1195,9 @@ you will need to replace any self-signed certificates with those
verified by a root CA. Information on how to do so can be found at the
ACME docs.
For more information on configuring TLS certificates see the
[FAQ](MSC1711_certificates_FAQ.md).
# Upgrading to v0.34.0
1. This release is the first to fully support Python 3. Synapse will

View File

@@ -241,7 +241,7 @@ expressions:
# Registration/login requests
^/_matrix/client/(api/v1|r0|v3|unstable)/login$
^/_matrix/client/(r0|v3|unstable)/register$
^/_matrix/client/v1/register/m.login.registration_token/validity$
^/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity$
# Event sending requests
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact

View File

@@ -77,6 +77,9 @@ exclude = (?x)
|tests/push/test_http.py
|tests/push/test_presentable_names.py
|tests/push/test_push_rule_evaluator.py
|tests/rest/admin/test_admin.py
|tests/rest/admin/test_user.py
|tests/rest/admin/test_username_available.py
|tests/rest/client/test_account.py
|tests/rest/client/test_events.py
|tests/rest/client/test_filter.py
@@ -142,9 +145,6 @@ disallow_untyped_defs = True
[mypy-synapse.crypto.*]
disallow_untyped_defs = True
[mypy-synapse.event_auth]
disallow_untyped_defs = True
[mypy-synapse.events.*]
disallow_untyped_defs = True
@@ -169,15 +169,9 @@ disallow_untyped_defs = True
[mypy-synapse.module_api.*]
disallow_untyped_defs = True
[mypy-synapse.notifier]
disallow_untyped_defs = True
[mypy-synapse.push.*]
disallow_untyped_defs = True
[mypy-synapse.replication.*]
disallow_untyped_defs = True
[mypy-synapse.rest.*]
disallow_untyped_defs = True

View File

@@ -25,6 +25,7 @@ DISTS = (
"debian:bookworm",
"debian:sid",
"ubuntu:focal", # 20.04 LTS (our EOL forced by Py38 on 2024-10-14)
"ubuntu:hirsute", # 21.04 (EOL 2022-01-05)
"ubuntu:impish", # 21.10 (EOL 2022-07)
)

View File

@@ -24,10 +24,10 @@ import traceback
from typing import Dict, Iterable, Optional, Set
import yaml
from matrix_common.versionstring import get_distribution_version_string
from twisted.internet import defer, reactor
import synapse
from synapse.config.database import DatabaseConnectionConfig
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import (
@@ -36,8 +36,6 @@ from synapse.logging.context import (
run_in_background,
)
from synapse.storage.database import DatabasePool, make_conn
from synapse.storage.databases.main import PushRuleStore
from synapse.storage.databases.main.account_data import AccountDataWorkerStore
from synapse.storage.databases.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.databases.main.deviceinbox import DeviceInboxBackgroundUpdateStore
from synapse.storage.databases.main.devices import DeviceBackgroundUpdateStore
@@ -67,6 +65,7 @@ from synapse.storage.databases.state.bg_updates import StateBackgroundUpdateStor
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util import Clock
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse_port_db")
@@ -181,8 +180,6 @@ class Store(
UserDirectoryBackgroundUpdateStore,
EndToEndKeyBackgroundStore,
StatsStore,
AccountDataWorkerStore,
PushRuleStore,
PusherWorkerStore,
PresenceBackgroundUpdateStore,
GroupServerWorkerStore,
@@ -221,9 +218,7 @@ class MockHomeserver:
self.clock = Clock(reactor)
self.config = config
self.hostname = config.server.server_name
self.version_string = "Synapse/" + get_distribution_version_string(
"matrix-synapse"
)
self.version_string = "Synapse/" + get_version_string(synapse)
def get_clock(self):
return self.clock

View File

@@ -18,14 +18,15 @@ import logging
import sys
import yaml
from matrix_common.versionstring import get_distribution_version_string
from twisted.internet import defer, reactor
import synapse
from synapse.config.homeserver import HomeServerConfig
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("update_database")
@@ -38,9 +39,7 @@ class MockHomeserver(HomeServer):
config.server.server_name, reactor=reactor, config=config, **kwargs
)
self.version_string = "Synapse/" + get_distribution_version_string(
"matrix-synapse"
)
self.version_string = "Synapse/" + get_version_string(synapse)
def run_background_updates(hs):

View File

@@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.53.0"
__version__ = "1.51.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -81,7 +81,7 @@ class LoginType:
TERMS: Final = "m.login.terms"
SSO: Final = "m.login.sso"
DUMMY: Final = "m.login.dummy"
REGISTRATION_TOKEN: Final = "m.login.registration_token"
REGISTRATION_TOKEN: Final = "org.matrix.msc3231.login.registration_token"
# This is used in the `type` parameter for /register when called by

View File

@@ -406,9 +406,6 @@ class RoomKeysVersionError(SynapseError):
super().__init__(403, "Wrong room_keys version", Codes.WRONG_ROOM_KEYS_VERSION)
self.current_version = current_version
def error_dict(self) -> "JsonDict":
return cs_error(self.msg, self.errcode, current_version=self.current_version)
class UnsupportedRoomVersionError(SynapseError):
"""The client's request to create a room used a room version that the server does

View File

@@ -28,6 +28,7 @@ FEDERATION_V1_PREFIX = FEDERATION_PREFIX + "/v1"
FEDERATION_V2_PREFIX = FEDERATION_PREFIX + "/v2"
FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
STATIC_PREFIX = "/_matrix/static"
WEB_CLIENT_PREFIX = "/_matrix/client"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_R0_PREFIX = "/_matrix/media/r0"
MEDIA_V3_PREFIX = "/_matrix/media/v3"

View File

@@ -37,7 +37,6 @@ from typing import (
)
from cryptography.utils import CryptographyDeprecationWarning
from matrix_common.versionstring import get_distribution_version_string
import twisted
from twisted.internet import defer, error, reactor as _reactor
@@ -68,6 +67,7 @@ from synapse.util.caches.lrucache import setup_expire_lru_cache_entries
from synapse.util.daemonize import daemonize_process
from synapse.util.gai_resolver import GAIResolver
from synapse.util.rlimit import change_resource_limit
from synapse.util.versionstring import get_version_string
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -487,8 +487,7 @@ def setup_sentry(hs: "HomeServer") -> None:
import sentry_sdk
sentry_sdk.init(
dsn=hs.config.metrics.sentry_dsn,
release=get_distribution_version_string("matrix-synapse"),
dsn=hs.config.metrics.sentry_dsn, release=get_version_string(synapse)
)
# We set some default tags that give some context to this instance

View File

@@ -19,8 +19,6 @@ import sys
import tempfile
from typing import List, Optional
from matrix_common.versionstring import get_distribution_version_string
from twisted.internet import defer, task
import synapse
@@ -46,6 +44,7 @@ from synapse.server import HomeServer
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.types import StateMap
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.admin_cmd")
@@ -224,7 +223,7 @@ def start(config_options: List[str]) -> None:
ss = AdminCmdServer(
config.server.server_name,
config=config,
version_string="Synapse/" + get_distribution_version_string("matrix-synapse"),
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -16,8 +16,6 @@ import logging
import sys
from typing import Dict, List, Optional, Tuple
from matrix_common.versionstring import get_distribution_version_string
from twisted.internet import address
from twisted.web.resource import Resource
@@ -124,6 +122,7 @@ from synapse.storage.databases.main.ui_auth import UIAuthWorkerStore
from synapse.storage.databases.main.user_directory import UserDirectoryStore
from synapse.types import JsonDict
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.generic_worker")
@@ -483,7 +482,7 @@ def start(config_options: List[str]) -> None:
hs = GenericWorkerServer(
config.server.server_name,
config=config,
version_string="Synapse/" + get_distribution_version_string("matrix-synapse"),
version_string="Synapse/" + get_version_string(synapse),
)
setup_logging(hs, config, use_worker_options=True)

View File

@@ -18,23 +18,22 @@ import os
import sys
from typing import Dict, Iterable, Iterator, List
from matrix_common.versionstring import get_distribution_version_string
from twisted.internet.tcp import Port
from twisted.web.resource import EncodingResourceWrapper, Resource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import (
CLIENT_API_PREFIX,
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_R0_PREFIX,
MEDIA_V3_PREFIX,
SERVER_KEY_V2_PREFIX,
STATIC_PREFIX,
WEB_CLIENT_PREFIX,
)
from synapse.app import _base
from synapse.app._base import (
@@ -54,6 +53,7 @@ from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import (
OptionsResource,
RootOptionsRedirectResource,
RootRedirect,
StaticResource,
)
from synapse.http.site import SynapseSite
@@ -72,6 +72,7 @@ from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.module_loader import load_module
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse.app.homeserver")
@@ -133,12 +134,15 @@ class SynapseHomeServer(HomeServer):
# Try to find something useful to serve at '/':
#
# 1. Redirect to the web client if it is an HTTP(S) URL.
# 2. Redirect to the static "Synapse is running" page.
# 3. Do not redirect and use a blank resource.
if self.config.server.web_client_location:
# 2. Redirect to the web client served via Synapse.
# 3. Redirect to the static "Synapse is running" page.
# 4. Do not redirect and use a blank resource.
if self.config.server.web_client_location_is_redirect:
root_resource: Resource = RootOptionsRedirectResource(
self.config.server.web_client_location
)
elif WEB_CLIENT_PREFIX in resources:
root_resource = RootOptionsRedirectResource(WEB_CLIENT_PREFIX)
elif STATIC_PREFIX in resources:
root_resource = RootOptionsRedirectResource(STATIC_PREFIX)
else:
@@ -197,7 +201,13 @@ class SynapseHomeServer(HomeServer):
resources.update(
{
CLIENT_API_PREFIX: client_resource,
"/_matrix/client/api/v1": client_resource,
"/_matrix/client/r0": client_resource,
"/_matrix/client/v1": client_resource,
"/_matrix/client/v3": client_resource,
"/_matrix/client/unstable": client_resource,
"/_matrix/client/v2_alpha": client_resource,
"/_matrix/client/versions": client_resource,
"/.well-known": well_known_resource(self),
"/_synapse/admin": AdminRestResource(self),
**build_synapse_client_resource_tree(self),
@@ -260,6 +270,28 @@ class SynapseHomeServer(HomeServer):
if name in ["keys", "federation"]:
resources[SERVER_KEY_V2_PREFIX] = KeyApiV2Resource(self)
if name == "webclient":
# webclient listeners are deprecated as of Synapse v1.51.0, remove it
# in > v1.53.0.
webclient_loc = self.config.server.web_client_location
if webclient_loc is None:
logger.warning(
"Not enabling webclient resource, as web_client_location is unset."
)
elif self.config.server.web_client_location_is_redirect:
resources[WEB_CLIENT_PREFIX] = RootRedirect(webclient_loc)
else:
logger.warning(
"Running webclient on the same domain is not recommended: "
"https://github.com/matrix-org/synapse#security-note - "
"after you move webclient to different host you can set "
"web_client_location to its full URL to enable redirection."
)
# GZip is disabled here due to
# https://twistedmatrix.com/trac/ticket/7678
resources[WEB_CLIENT_PREFIX] = File(webclient_loc)
if name == "metrics" and self.config.metrics.enable_metrics:
resources[METRICS_PREFIX] = MetricsResource(RegistryProxy)
@@ -351,7 +383,7 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
hs = SynapseHomeServer(
config.server.server_name,
config=config,
version_string="Synapse/" + get_distribution_version_string("matrix-synapse"),
version_string="Synapse/" + get_version_string(synapse),
)
synapse.config.logger.setup_logging(hs, config, use_worker_options=False)

View File

@@ -165,16 +165,23 @@ class ApplicationService:
return namespace.exclusive
return False
async def _matches_user(self, event: EventBase, store: "DataStore") -> bool:
async def _matches_user(
self, event: Optional[EventBase], store: Optional["DataStore"] = None
) -> bool:
if not event:
return False
if self.is_interested_in_user(event.sender):
return True
# also check m.room.member state key
if event.type == EventTypes.Member and self.is_interested_in_user(
event.state_key
):
return True
if not store:
return False
does_match = await self.matches_user_in_member_list(event.room_id, store)
return does_match
@@ -209,15 +216,21 @@ class ApplicationService:
return self.is_interested_in_room(event.room_id)
return False
async def _matches_aliases(self, event: EventBase, store: "DataStore") -> bool:
async def _matches_aliases(
self, event: EventBase, store: Optional["DataStore"] = None
) -> bool:
if not store or not event:
return False
alias_list = await store.get_aliases_for_room(event.room_id)
for alias in alias_list:
if self.is_interested_in_alias(alias):
return True
return False
async def is_interested(self, event: EventBase, store: "DataStore") -> bool:
async def is_interested(
self, event: EventBase, store: Optional["DataStore"] = None
) -> bool:
"""Check if this service is interested in this event.
Args:
@@ -338,13 +351,11 @@ class AppServiceTransaction:
id: int,
events: List[EventBase],
ephemeral: List[JsonDict],
to_device_messages: List[JsonDict],
):
self.service = service
self.id = id
self.events = events
self.ephemeral = ephemeral
self.to_device_messages = to_device_messages
async def send(self, as_api: "ApplicationServiceApi") -> bool:
"""Sends this transaction using the provided AS API interface.
@@ -358,7 +369,6 @@ class AppServiceTransaction:
service=self.service,
events=self.events,
ephemeral=self.ephemeral,
to_device_messages=self.to_device_messages,
txn_id=self.id,
)

View File

@@ -218,23 +218,8 @@ class ApplicationServiceApi(SimpleHttpClient):
service: "ApplicationService",
events: List[EventBase],
ephemeral: List[JsonDict],
to_device_messages: List[JsonDict],
txn_id: Optional[int] = None,
) -> bool:
"""
Push data to an application service.
Args:
service: The application service to send to.
events: The persistent events to send.
ephemeral: The ephemeral events to send.
to_device_messages: The to-device messages to send.
txn_id: A unique ID to assign to this transaction. Application services should
deduplicate transactions received with identical IDs.
Returns:
True if the task succeeded, False if it failed.
"""
if service.url is None:
return True
@@ -252,15 +237,13 @@ class ApplicationServiceApi(SimpleHttpClient):
uri = service.url + ("/transactions/%s" % urllib.parse.quote(str(txn_id)))
# Never send ephemeral events to appservices that do not support it
body: Dict[str, List[JsonDict]] = {"events": serialized_events}
if service.supports_ephemeral:
body.update(
{
# TODO: Update to stable prefixes once MSC2409 completes FCP merge.
"de.sorunome.msc2409.ephemeral": ephemeral,
"de.sorunome.msc2409.to_device": to_device_messages,
}
)
body = {
"events": serialized_events,
"de.sorunome.msc2409.ephemeral": ephemeral,
}
else:
body = {"events": serialized_events}
try:
await self.put_json(

View File

@@ -48,16 +48,7 @@ This is all tied together by the AppServiceScheduler which DIs the required
components.
"""
import logging
from typing import (
TYPE_CHECKING,
Awaitable,
Callable,
Collection,
Dict,
List,
Optional,
Set,
)
from typing import TYPE_CHECKING, Awaitable, Callable, Dict, List, Optional, Set
from synapse.appservice import ApplicationService, ApplicationServiceState
from synapse.appservice.api import ApplicationServiceApi
@@ -80,9 +71,6 @@ MAX_PERSISTENT_EVENTS_PER_TRANSACTION = 100
# Maximum number of ephemeral events to provide in an AS transaction.
MAX_EPHEMERAL_EVENTS_PER_TRANSACTION = 100
# Maximum number of to-device messages to provide in an AS transaction.
MAX_TO_DEVICE_MESSAGES_PER_TRANSACTION = 100
class ApplicationServiceScheduler:
"""Public facing API for this module. Does the required DI to tie the
@@ -109,40 +97,15 @@ class ApplicationServiceScheduler:
for service in services:
self.txn_ctrl.start_recoverer(service)
def enqueue_for_appservice(
self,
appservice: ApplicationService,
events: Optional[Collection[EventBase]] = None,
ephemeral: Optional[Collection[JsonDict]] = None,
to_device_messages: Optional[Collection[JsonDict]] = None,
def submit_event_for_as(
self, service: ApplicationService, event: EventBase
) -> None:
"""
Enqueue some data to be sent off to an application service.
self.queuer.enqueue_event(service, event)
Args:
appservice: The application service to create and send a transaction to.
events: The persistent room events to send.
ephemeral: The ephemeral events to send.
to_device_messages: The to-device messages to send. These differ from normal
to-device messages sent to clients, as they have 'to_device_id' and
'to_user_id' fields.
"""
# We purposefully allow this method to run with empty events/ephemeral
# collections, so that callers do not need to check iterable size themselves.
if not events and not ephemeral and not to_device_messages:
return
if events:
self.queuer.queued_events.setdefault(appservice.id, []).extend(events)
if ephemeral:
self.queuer.queued_ephemeral.setdefault(appservice.id, []).extend(ephemeral)
if to_device_messages:
self.queuer.queued_to_device_messages.setdefault(appservice.id, []).extend(
to_device_messages
)
# Kick off a new application service transaction
self.queuer.start_background_request(appservice)
def submit_ephemeral_events_for_as(
self, service: ApplicationService, events: List[JsonDict]
) -> None:
self.queuer.enqueue_ephemeral(service, events)
class _ServiceQueuer:
@@ -158,15 +121,13 @@ class _ServiceQueuer:
self.queued_events: Dict[str, List[EventBase]] = {}
# dict of {service_id: [events]}
self.queued_ephemeral: Dict[str, List[JsonDict]] = {}
# dict of {service_id: [to_device_message_json]}
self.queued_to_device_messages: Dict[str, List[JsonDict]] = {}
# the appservices which currently have a transaction in flight
self.requests_in_flight: Set[str] = set()
self.txn_ctrl = txn_ctrl
self.clock = clock
def start_background_request(self, service: ApplicationService) -> None:
def _start_background_request(self, service: ApplicationService) -> None:
# start a sender for this appservice if we don't already have one
if service.id in self.requests_in_flight:
return
@@ -175,6 +136,16 @@ class _ServiceQueuer:
"as-sender-%s" % (service.id,), self._send_request, service
)
def enqueue_event(self, service: ApplicationService, event: EventBase) -> None:
self.queued_events.setdefault(service.id, []).append(event)
self._start_background_request(service)
def enqueue_ephemeral(
self, service: ApplicationService, events: List[JsonDict]
) -> None:
self.queued_ephemeral.setdefault(service.id, []).extend(events)
self._start_background_request(service)
async def _send_request(self, service: ApplicationService) -> None:
# sanity-check: we shouldn't get here if this service already has a sender
# running.
@@ -191,21 +162,11 @@ class _ServiceQueuer:
ephemeral = all_events_ephemeral[:MAX_EPHEMERAL_EVENTS_PER_TRANSACTION]
del all_events_ephemeral[:MAX_EPHEMERAL_EVENTS_PER_TRANSACTION]
all_to_device_messages = self.queued_to_device_messages.get(
service.id, []
)
to_device_messages_to_send = all_to_device_messages[
:MAX_TO_DEVICE_MESSAGES_PER_TRANSACTION
]
del all_to_device_messages[:MAX_TO_DEVICE_MESSAGES_PER_TRANSACTION]
if not events and not ephemeral and not to_device_messages_to_send:
if not events and not ephemeral:
return
try:
await self.txn_ctrl.send(
service, events, ephemeral, to_device_messages_to_send
)
await self.txn_ctrl.send(service, events, ephemeral)
except Exception:
logger.exception("AS request failed")
finally:
@@ -237,24 +198,10 @@ class _TransactionController:
service: ApplicationService,
events: List[EventBase],
ephemeral: Optional[List[JsonDict]] = None,
to_device_messages: Optional[List[JsonDict]] = None,
) -> None:
"""
Create a transaction with the given data and send to the provided
application service.
Args:
service: The application service to send the transaction to.
events: The persistent events to include in the transaction.
ephemeral: The ephemeral events to include in the transaction.
to_device_messages: The to-device messages to include in the transaction.
"""
try:
txn = await self.store.create_appservice_txn(
service=service,
events=events,
ephemeral=ephemeral or [],
to_device_messages=to_device_messages or [],
service=service, events=events, ephemeral=ephemeral or []
)
service_is_up = await self._is_service_up(service)
if service_is_up:

View File

@@ -12,7 +12,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
import re
import threading
@@ -24,8 +23,6 @@ from synapse.python_dependencies import DependencyException, check_requirements
from ._base import Config, ConfigError
logger = logging.getLogger(__name__)
# The prefix for all cache factor-related environment variables
_CACHE_PREFIX = "SYNAPSE_CACHE_FACTOR"
@@ -151,16 +148,11 @@ class CacheConfig(Config):
per_cache_factors:
#get_users_who_share_room_with_user: 2.0
# Controls whether cache entries are evicted after a specified time
# period. Defaults to true. Uncomment to disable this feature.
# Controls how long an entry can be in a cache without having been
# accessed before being evicted. Defaults to None, which means
# entries are never evicted based on time.
#
#expire_caches: false
# If expire_caches is enabled, this flag controls how long an entry can
# be in a cache without having been accessed before being evicted.
# Defaults to 30m. Uncomment to set a different time to live for cache entries.
#
#cache_entry_ttl: 30m
#expiry_time: 30m
# Controls how long the results of a /sync request are cached for after
# a successful response is returned. A higher duration can help clients with
@@ -225,30 +217,12 @@ class CacheConfig(Config):
e.message # noqa: B306, DependencyException.message is a property
)
expire_caches = cache_config.get("expire_caches", True)
cache_entry_ttl = cache_config.get("cache_entry_ttl", "30m")
if expire_caches:
self.expiry_time_msec: Optional[int] = self.parse_duration(cache_entry_ttl)
expiry_time = cache_config.get("expiry_time")
if expiry_time:
self.expiry_time_msec: Optional[int] = self.parse_duration(expiry_time)
else:
self.expiry_time_msec = None
# Backwards compatibility support for the now-removed "expiry_time" config flag.
expiry_time = cache_config.get("expiry_time")
if expiry_time and expire_caches:
logger.warning(
"You have set two incompatible options, expiry_time and expire_caches. Please only use the "
"expire_caches and cache_entry_ttl options and delete the expiry_time option as it is "
"deprecated."
)
if expiry_time:
logger.warning(
"Expiry_time is a deprecated option, please use the expire_caches and cache_entry_ttl options "
"instead."
)
self.expiry_time_msec = self.parse_duration(expiry_time)
self.sync_response_cache_duration = self.parse_duration(
cache_config.get("sync_response_cache_duration", 0)
)

View File

@@ -24,10 +24,10 @@ class ExperimentalConfig(Config):
def read_config(self, config: JsonDict, **kwargs):
experimental = config.get("experimental_features") or {}
# Whether to enable experimental MSC1849 (aka relations) support
self.msc1849_enabled = config.get("experimental_msc1849_support_enabled", True)
# MSC3440 (thread relation)
self.msc3440_enabled: bool = experimental.get("msc3440_enabled", False)
# MSC3666: including bundled relations in /search.
self.msc3666_enabled: bool = experimental.get("msc3666_enabled", False)
# MSC3026 (busy presence state)
self.msc3026_enabled: bool = experimental.get("msc3026_enabled", False)
@@ -54,13 +54,3 @@ class ExperimentalConfig(Config):
self.msc3202_device_masquerading_enabled: bool = experimental.get(
"msc3202_device_masquerading", False
)
# MSC2409 (this setting only relates to optionally sending to-device messages).
# Presence, typing and read receipt EDUs are already sent to application services that
# have opted in to receive them. If enabled, this adds to-device messages to that list.
self.msc2409_to_device_messages_enabled: bool = experimental.get(
"msc2409_to_device_messages_enabled", False
)
# MSC3706 (server-side support for partial state in /send_join responses)
self.msc3706_enabled: bool = experimental.get("msc3706_enabled", False)

View File

@@ -22,7 +22,6 @@ from string import Template
from typing import TYPE_CHECKING, Any, Dict, Optional
import yaml
from matrix_common.versionstring import get_distribution_version_string
from zope.interface import implementer
from twisted.logger import (
@@ -33,9 +32,11 @@ from twisted.logger import (
globalLogBeginner,
)
import synapse
from synapse.logging._structured import setup_structured_logging
from synapse.logging.context import LoggingContextFilter
from synapse.logging.filter import MetadataFilter
from synapse.util.versionstring import get_version_string
from ._base import Config, ConfigError
@@ -346,10 +347,6 @@ def setup_logging(
# Log immediately so we can grep backwards.
logging.warning("***** STARTING SERVER *****")
logging.warning(
"Server %s version %s",
sys.argv[0],
get_distribution_version_string("matrix-synapse"),
)
logging.warning("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.info("Server hostname: %s", config.server.server_name)
logging.info("Instance name: %s", hs.get_instance_name())

View File

@@ -41,9 +41,9 @@ class ModulesConfig(Config):
# documentation on how to configure or create custom modules for Synapse.
#
modules:
#- module: my_super_module.MySuperClass
# config:
# do_thing: true
#- module: my_other_super_module.SomeClass
# config: {}
# - module: my_super_module.MySuperClass
# config:
# do_thing: true
# - module: my_other_super_module.SomeClass
# config: {}
"""

View File

@@ -134,14 +134,6 @@ class RatelimitConfig(Config):
defaults={"per_second": 0.003, "burst_count": 5},
)
self.rc_third_party_invite = RateLimitConfig(
config.get("rc_third_party_invite", {}),
defaults={
"per_second": self.rc_message.per_second,
"burst_count": self.rc_message.burst_count,
},
)
def generate_config_section(self, **kwargs):
return """\
## Ratelimiting ##
@@ -176,9 +168,6 @@ class RatelimitConfig(Config):
# - one for ratelimiting how often a user or IP can attempt to validate a 3PID.
# - two for ratelimiting how often invites can be sent in a room or to a
# specific user.
# - one for ratelimiting 3PID invites (i.e. invites sent to a third-party ID
# such as an email address or a phone number) based on the account that's
# sending the invite.
#
# The defaults are as shown below.
#
@@ -228,10 +217,6 @@ class RatelimitConfig(Config):
# per_user:
# per_second: 0.003
# burst_count: 5
#
#rc_third_party_invite:
# per_second: 0.2
# burst_count: 10
# Ratelimiting settings for incoming federation
#

View File

@@ -190,8 +190,6 @@ class RegistrationConfig(Config):
# The success template used during fallback auth.
self.fallback_success_template = self.read_template("auth_success.html")
self.inhibit_user_in_use_error = config.get("inhibit_user_in_use_error", False)
def generate_config_section(self, generate_secrets=False, **kwargs):
if generate_secrets:
registration_shared_secret = 'registration_shared_secret: "%s"' % (
@@ -448,16 +446,6 @@ class RegistrationConfig(Config):
# Defaults to true.
#
#auto_join_rooms_for_guests: false
# Whether to inhibit errors raised when registering a new account if the user ID
# already exists. If turned on, requests to /register/available will always
# show a user ID as available, and Synapse won't raise an error when starting
# a registration with a user ID that already exists. However, Synapse will still
# raise an error if the registration completes and the username conflicts.
#
# Defaults to false.
#
#inhibit_user_in_use_error: true
"""
% locals()
)

View File

@@ -179,6 +179,7 @@ KNOWN_RESOURCES = {
"openid",
"replication",
"static",
"webclient",
}
@@ -488,19 +489,6 @@ class ServerConfig(Config):
# events with profile information that differ from the target's global profile.
self.allow_per_room_profiles = config.get("allow_per_room_profiles", True)
# The maximum size an avatar can have, in bytes.
self.max_avatar_size = config.get("max_avatar_size")
if self.max_avatar_size is not None:
self.max_avatar_size = self.parse_size(self.max_avatar_size)
# The MIME types allowed for an avatar.
self.allowed_avatar_mimetypes = config.get("allowed_avatar_mimetypes")
if self.allowed_avatar_mimetypes and not isinstance(
self.allowed_avatar_mimetypes,
list,
):
raise ConfigError("allowed_avatar_mimetypes must be a list")
self.listeners = [parse_listener_def(x) for x in config.get("listeners", [])]
# no_tls is not really supported any more, but let's grandfather it in
@@ -518,12 +506,16 @@ class ServerConfig(Config):
self.listeners = l2
self.web_client_location = config.get("web_client_location", None)
# Non-HTTP(S) web client location is not supported.
if self.web_client_location and not (
self.web_client_location_is_redirect = self.web_client_location and (
self.web_client_location.startswith("http://")
or self.web_client_location.startswith("https://")
):
raise ConfigError("web_client_location must point to a HTTP(S) URL.")
)
# A non-HTTP(S) web client location is deprecated.
if self.web_client_location and not self.web_client_location_is_redirect:
logger.warning(NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING)
# Warn if webclient is configured for a worker.
_warn_if_webclient_configured(self.listeners)
self.gc_thresholds = read_gc_thresholds(config.get("gc_thresholds", None))
self.gc_seconds = self.read_gc_intervals(config.get("gc_min_interval", None))
@@ -651,6 +643,19 @@ class ServerConfig(Config):
False,
)
# List of users trialing the new experimental default push rules. This setting is
# not included in the sample configuration file on purpose as it's a temporary
# hack, so that some users can trial the new defaults without impacting every
# user on the homeserver.
users_new_default_push_rules: list = (
config.get("users_new_default_push_rules") or []
)
if not isinstance(users_new_default_push_rules, list):
raise ConfigError("'users_new_default_push_rules' must be a list")
# Turn the list into a set to improve lookup speed.
self.users_new_default_push_rules: set = set(users_new_default_push_rules)
# Whitelist of domain names that given next_link parameters must have
next_link_domain_whitelist: Optional[List[str]] = config.get(
"next_link_domain_whitelist"
@@ -1163,20 +1168,6 @@ class ServerConfig(Config):
#
#allow_per_room_profiles: false
# The largest allowed file size for a user avatar. Defaults to no restriction.
#
# Note that user avatar changes will not work if this is set without
# using Synapse's media repository.
#
#max_avatar_size: 10M
# The MIME types allowed for user avatars. Defaults to no restriction.
#
# Note that user avatar changes will not work if this is set without
# using Synapse's media repository.
#
#allowed_avatar_mimetypes: ["image/png", "image/jpeg", "image/gif"]
# How long to keep redacted events in unredacted form in the database. After
# this period redacted events get replaced with their redacted form in the DB.
#
@@ -1346,16 +1337,11 @@ def parse_listener_def(listener: Any) -> ListenerConfig:
http_config = None
if listener_type == "http":
try:
resources = [
HttpResourceConfig(**res) for res in listener.get("resources", [])
]
except ValueError as e:
raise ConfigError("Unknown listener resource") from e
http_config = HttpListenerConfig(
x_forwarded=listener.get("x_forwarded", False),
resources=resources,
resources=[
HttpResourceConfig(**res) for res in listener.get("resources", [])
],
additional_resources=listener.get("additional_resources", {}),
tag=listener.get("tag"),
)
@@ -1363,6 +1349,30 @@ def parse_listener_def(listener: Any) -> ListenerConfig:
return ListenerConfig(port, bind_addresses, listener_type, tls, http_config)
NO_MORE_NONE_HTTP_WEB_CLIENT_LOCATION_WARNING = """
Synapse no longer supports serving a web client. To remove this warning,
configure 'web_client_location' with an HTTP(S) URL.
"""
NO_MORE_WEB_CLIENT_WARNING = """
Synapse no longer includes a web client. To redirect the root resource to a web client, configure
'web_client_location'. To remove this warning, remove 'webclient' from the 'listeners'
configuration.
"""
def _warn_if_webclient_configured(listeners: Iterable[ListenerConfig]) -> None:
for listener in listeners:
if not listener.http_options:
continue
for res in listener.http_options.resources:
for name in res.names:
if name == "webclient":
logger.warning(NO_MORE_WEB_CLIENT_WARNING)
return
_MANHOLE_SETTINGS_SCHEMA = {
"type": "object",
"properties": {

View File

@@ -14,7 +14,6 @@
# limitations under the License.
import logging
import typing
from typing import Any, Dict, Iterable, List, Optional, Set, Tuple, Union
from canonicaljson import encode_canonical_json
@@ -35,18 +34,15 @@ from synapse.api.room_versions import (
EventFormatVersions,
RoomVersion,
)
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
from synapse.types import StateMap, UserID, get_domain_from_id
if typing.TYPE_CHECKING:
# conditional imports to avoid import cycle
from synapse.events import EventBase
from synapse.events.builder import EventBuilder
logger = logging.getLogger(__name__)
def validate_event_for_room_version(
room_version_obj: RoomVersion, event: "EventBase"
room_version_obj: RoomVersion, event: EventBase
) -> None:
"""Ensure that the event complies with the limits, and has the right signatures
@@ -117,9 +113,7 @@ def validate_event_for_room_version(
def check_auth_rules_for_event(
room_version_obj: RoomVersion,
event: "EventBase",
auth_events: Iterable["EventBase"],
room_version_obj: RoomVersion, event: EventBase, auth_events: Iterable[EventBase]
) -> None:
"""Check that an event complies with the auth rules
@@ -262,7 +256,7 @@ def check_auth_rules_for_event(
logger.debug("Allowing! %s", event)
def _check_size_limits(event: "EventBase") -> None:
def _check_size_limits(event: EventBase) -> None:
if len(event.user_id) > 255:
raise EventSizeError("'user_id' too large")
if len(event.room_id) > 255:
@@ -277,7 +271,7 @@ def _check_size_limits(event: "EventBase") -> None:
raise EventSizeError("event too large")
def _can_federate(event: "EventBase", auth_events: StateMap["EventBase"]) -> bool:
def _can_federate(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
creation_event = auth_events.get((EventTypes.Create, ""))
# There should always be a creation event, but if not don't federate.
if not creation_event:
@@ -287,7 +281,7 @@ def _can_federate(event: "EventBase", auth_events: StateMap["EventBase"]) -> boo
def _is_membership_change_allowed(
room_version: RoomVersion, event: "EventBase", auth_events: StateMap["EventBase"]
room_version: RoomVersion, event: EventBase, auth_events: StateMap[EventBase]
) -> None:
"""
Confirms that the event which changes membership is an allowed change.
@@ -477,7 +471,7 @@ def _is_membership_change_allowed(
def _check_event_sender_in_room(
event: "EventBase", auth_events: StateMap["EventBase"]
event: EventBase, auth_events: StateMap[EventBase]
) -> None:
key = (EventTypes.Member, event.user_id)
member_event = auth_events.get(key)
@@ -485,9 +479,7 @@ def _check_event_sender_in_room(
_check_joined_room(member_event, event.user_id, event.room_id)
def _check_joined_room(
member: Optional["EventBase"], user_id: str, room_id: str
) -> None:
def _check_joined_room(member: Optional[EventBase], user_id: str, room_id: str) -> None:
if not member or member.membership != Membership.JOIN:
raise AuthError(
403, "User %s not in room %s (%s)" % (user_id, room_id, repr(member))
@@ -495,7 +487,7 @@ def _check_joined_room(
def get_send_level(
etype: str, state_key: Optional[str], power_levels_event: Optional["EventBase"]
etype: str, state_key: Optional[str], power_levels_event: Optional[EventBase]
) -> int:
"""Get the power level required to send an event of a given type
@@ -531,7 +523,7 @@ def get_send_level(
return int(send_level)
def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> bool:
def _can_send_event(event: EventBase, auth_events: StateMap[EventBase]) -> bool:
power_levels_event = get_power_level_event(auth_events)
send_level = get_send_level(event.type, event.get("state_key"), power_levels_event)
@@ -555,8 +547,8 @@ def _can_send_event(event: "EventBase", auth_events: StateMap["EventBase"]) -> b
def check_redaction(
room_version_obj: RoomVersion,
event: "EventBase",
auth_events: StateMap["EventBase"],
event: EventBase,
auth_events: StateMap[EventBase],
) -> bool:
"""Check whether the event sender is allowed to redact the target event.
@@ -593,8 +585,8 @@ def check_redaction(
def check_historical(
room_version_obj: RoomVersion,
event: "EventBase",
auth_events: StateMap["EventBase"],
event: EventBase,
auth_events: StateMap[EventBase],
) -> None:
"""Check whether the event sender is allowed to send historical related
events like "insertion", "batch", and "marker".
@@ -624,8 +616,8 @@ def check_historical(
def _check_power_levels(
room_version_obj: RoomVersion,
event: "EventBase",
auth_events: StateMap["EventBase"],
event: EventBase,
auth_events: StateMap[EventBase],
) -> None:
user_list = event.content.get("users", {})
# Validate users
@@ -718,11 +710,11 @@ def _check_power_levels(
)
def get_power_level_event(auth_events: StateMap["EventBase"]) -> Optional["EventBase"]:
def get_power_level_event(auth_events: StateMap[EventBase]) -> Optional[EventBase]:
return auth_events.get((EventTypes.PowerLevels, ""))
def get_user_power_level(user_id: str, auth_events: StateMap["EventBase"]) -> int:
def get_user_power_level(user_id: str, auth_events: StateMap[EventBase]) -> int:
"""Get a user's power level
Args:
@@ -758,7 +750,7 @@ def get_user_power_level(user_id: str, auth_events: StateMap["EventBase"]) -> in
return 0
def get_named_level(auth_events: StateMap["EventBase"], name: str, default: int) -> int:
def get_named_level(auth_events: StateMap[EventBase], name: str, default: int) -> int:
power_level_event = get_power_level_event(auth_events)
if not power_level_event:
@@ -771,9 +763,7 @@ def get_named_level(auth_events: StateMap["EventBase"], name: str, default: int)
return default
def _verify_third_party_invite(
event: "EventBase", auth_events: StateMap["EventBase"]
) -> bool:
def _verify_third_party_invite(event: EventBase, auth_events: StateMap[EventBase]):
"""
Validates that the invite event is authorized by a previous third-party invite.
@@ -837,7 +827,7 @@ def _verify_third_party_invite(
return False
def get_public_keys(invite_event: "EventBase") -> List[Dict[str, Any]]:
def get_public_keys(invite_event: EventBase) -> List[Dict[str, Any]]:
public_keys = []
if "public_key" in invite_event.content:
o = {"public_key": invite_event.content["public_key"]}
@@ -849,7 +839,7 @@ def get_public_keys(invite_event: "EventBase") -> List[Dict[str, Any]]:
def auth_types_for_event(
room_version: RoomVersion, event: Union["EventBase", "EventBuilder"]
room_version: RoomVersion, event: Union[EventBase, EventBuilder]
) -> Set[Tuple[str, str]]:
"""Given an event, return a list of (EventType, StateKey) that may be
needed to auth the event. The returned list may be a superset of what

View File

@@ -48,6 +48,9 @@ USER_MAY_JOIN_ROOM_CALLBACK = Callable[[str, str, bool], Awaitable[bool]]
USER_MAY_INVITE_CALLBACK = Callable[[str, str, str], Awaitable[bool]]
USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[[str, str, str, str], Awaitable[bool]]
USER_MAY_CREATE_ROOM_CALLBACK = Callable[[str], Awaitable[bool]]
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK = Callable[
[str, List[str], List[Dict[str, str]]], Awaitable[bool]
]
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[[str, RoomAlias], Awaitable[bool]]
USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[[str, str], Awaitable[bool]]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[Dict[str, str]], Awaitable[bool]]
@@ -171,6 +174,9 @@ class SpamChecker:
USER_MAY_SEND_3PID_INVITE_CALLBACK
] = []
self._user_may_create_room_callbacks: List[USER_MAY_CREATE_ROOM_CALLBACK] = []
self._user_may_create_room_with_invites_callbacks: List[
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
] = []
self._user_may_create_room_alias_callbacks: List[
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
] = []
@@ -192,6 +198,9 @@ class SpamChecker:
user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,
user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
user_may_create_room_with_invites: Optional[
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
] = None,
user_may_create_room_alias: Optional[
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
] = None,
@@ -220,6 +229,11 @@ class SpamChecker:
if user_may_create_room is not None:
self._user_may_create_room_callbacks.append(user_may_create_room)
if user_may_create_room_with_invites is not None:
self._user_may_create_room_with_invites_callbacks.append(
user_may_create_room_with_invites,
)
if user_may_create_room_alias is not None:
self._user_may_create_room_alias_callbacks.append(
user_may_create_room_alias,
@@ -345,6 +359,34 @@ class SpamChecker:
return True
async def user_may_create_room_with_invites(
self,
userid: str,
invites: List[str],
threepid_invites: List[Dict[str, str]],
) -> bool:
"""Checks if a given user may create a room with invites
If this method returns false, the creation request will be rejected.
Args:
userid: The ID of the user attempting to create a room
invites: The IDs of the Matrix users to be invited if the room creation is
allowed.
threepid_invites: The threepids to be invited if the room creation is allowed,
as a dict including a "medium" key indicating the threepid's medium (e.g.
"email") and an "address" key indicating the threepid's address (e.g.
"alice@example.com")
Returns:
True if the user may create the room, otherwise False
"""
for callback in self._user_may_create_room_with_invites_callbacks:
if await callback(userid, invites, threepid_invites) is False:
return False
return True
async def user_may_create_room_alias(
self, userid: str, room_alias: RoomAlias
) -> bool:

View File

@@ -14,17 +14,7 @@
# limitations under the License.
import collections.abc
import re
from typing import (
TYPE_CHECKING,
Any,
Callable,
Dict,
Iterable,
List,
Mapping,
Optional,
Union,
)
from typing import Any, Callable, Dict, Iterable, List, Mapping, Optional, Union
from frozendict import frozendict
@@ -36,10 +26,6 @@ from synapse.util.frozenutils import unfreeze
from . import EventBase
if TYPE_CHECKING:
from synapse.storage.databases.main.relations import BundledAggregations
# Split strings on "." but not "\." This uses a negative lookbehind assertion for '\'
# (?<!stuff) matches if the current position in the string is not preceded
# by a match for 'stuff'.
@@ -390,7 +376,7 @@ class EventClientSerializer:
event: Union[JsonDict, EventBase],
time_now: int,
*,
bundle_aggregations: Optional[Dict[str, "BundledAggregations"]] = None,
bundle_aggregations: Optional[Dict[str, JsonDict]] = None,
**kwargs: Any,
) -> JsonDict:
"""Serializes a single event.
@@ -429,7 +415,7 @@ class EventClientSerializer:
self,
event: EventBase,
time_now: int,
aggregations: "BundledAggregations",
aggregations: JsonDict,
serialized_event: JsonDict,
) -> None:
"""Potentially injects bundled aggregations into the unsigned portion of the serialized event.
@@ -441,18 +427,13 @@ class EventClientSerializer:
serialized_event: The serialized event which may be modified.
"""
serialized_aggregations = {}
# Make a copy in-case the object is cached.
aggregations = aggregations.copy()
if aggregations.annotations:
serialized_aggregations[RelationTypes.ANNOTATION] = aggregations.annotations
if aggregations.references:
serialized_aggregations[RelationTypes.REFERENCE] = aggregations.references
if aggregations.replace:
if RelationTypes.REPLACE in aggregations:
# If there is an edit replace the content, preserving existing
# relations.
edit = aggregations.replace
edit = aggregations[RelationTypes.REPLACE]
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
@@ -470,28 +451,24 @@ class EventClientSerializer:
else:
serialized_event["content"].pop("m.relates_to", None)
serialized_aggregations[RelationTypes.REPLACE] = {
aggregations[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
"sender": edit.sender,
}
# If this event is the start of a thread, include a summary of the replies.
if aggregations.thread:
serialized_aggregations[RelationTypes.THREAD] = {
# Don't bundle aggregations as this could recurse forever.
"latest_event": self.serialize_event(
aggregations.thread.latest_event, time_now, bundle_aggregations=None
),
"count": aggregations.thread.count,
"current_user_participated": aggregations.thread.current_user_participated,
}
if RelationTypes.THREAD in aggregations:
# Serialize the latest thread event.
latest_thread_event = aggregations[RelationTypes.THREAD]["latest_event"]
# Don't bundle aggregations as this could recurse forever.
aggregations[RelationTypes.THREAD]["latest_event"] = self.serialize_event(
latest_thread_event, time_now, bundle_aggregations=None
)
# Include the bundled aggregations in the event.
if serialized_aggregations:
serialized_event["unsigned"].setdefault("m.relations", {}).update(
serialized_aggregations
)
serialized_event["unsigned"].setdefault("m.relations", {}).update(aggregations)
def serialize_events(
self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any

View File

@@ -20,7 +20,6 @@ from typing import (
Any,
Awaitable,
Callable,
Collection,
Dict,
Iterable,
List,
@@ -65,7 +64,7 @@ from synapse.replication.http.federation import (
ReplicationGetQueryRestServlet,
)
from synapse.storage.databases.main.lock import Lock
from synapse.types import JsonDict, StateMap, get_domain_from_id
from synapse.types import JsonDict, get_domain_from_id
from synapse.util import json_decoder, unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute, gather_results
from synapse.util.caches.response_cache import ResponseCache
@@ -572,7 +571,7 @@ class FederationServer(FederationBase):
) -> JsonDict:
state_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = await self.store.get_auth_chain_ids(room_id, state_ids)
return {"pdu_ids": state_ids, "auth_chain_ids": list(auth_chain_ids)}
return {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
async def _on_context_state_request_compute(
self, room_id: str, event_id: Optional[str]
@@ -646,61 +645,27 @@ class FederationServer(FederationBase):
return {"event": ret_pdu.get_pdu_json(time_now)}
async def on_send_join_request(
self,
origin: str,
content: JsonDict,
room_id: str,
caller_supports_partial_state: bool = False,
self, origin: str, content: JsonDict, room_id: str
) -> Dict[str, Any]:
event, context = await self._on_send_membership_event(
origin, content, Membership.JOIN, room_id
)
prev_state_ids = await context.get_prev_state_ids()
state_ids = list(prev_state_ids.values())
auth_chain = await self.store.get_auth_chain(room_id, state_ids)
state = await self.store.get_events(state_ids)
state_event_ids: Collection[str]
servers_in_room: Optional[Collection[str]]
if caller_supports_partial_state:
state_event_ids = _get_event_ids_for_partial_state_join(
event, prev_state_ids
)
servers_in_room = await self.state.get_hosts_in_room_at_events(
room_id, event_ids=event.prev_event_ids()
)
else:
state_event_ids = prev_state_ids.values()
servers_in_room = None
auth_chain_event_ids = await self.store.get_auth_chain_ids(
room_id, state_event_ids
)
# if the caller has opted in, we can omit any auth_chain events which are
# already in state_event_ids
if caller_supports_partial_state:
auth_chain_event_ids.difference_update(state_event_ids)
auth_chain_events = await self.store.get_events_as_list(auth_chain_event_ids)
state_events = await self.store.get_events_as_list(state_event_ids)
# we try to do all the async stuff before this point, so that time_now is as
# accurate as possible.
time_now = self._clock.time_msec()
event_json = event.get_pdu_json(time_now)
resp = {
event_json = event.get_pdu_json()
return {
# TODO Remove the unstable prefix when servers have updated.
"org.matrix.msc3083.v2.event": event_json,
"event": event_json,
"state": [p.get_pdu_json(time_now) for p in state_events],
"auth_chain": [p.get_pdu_json(time_now) for p in auth_chain_events],
"org.matrix.msc3706.partial_state": caller_supports_partial_state,
"state": [p.get_pdu_json(time_now) for p in state.values()],
"auth_chain": [p.get_pdu_json(time_now) for p in auth_chain],
}
if servers_in_room is not None:
resp["org.matrix.msc3706.servers_in_room"] = list(servers_in_room)
return resp
async def on_make_leave_request(
self, origin: str, room_id: str, user_id: str
) -> Dict[str, Any]:
@@ -1374,39 +1339,3 @@ class FederationHandlerRegistry:
# error.
logger.warning("No handler registered for query type %s", query_type)
raise NotFoundError("No handler for Query type '%s'" % (query_type,))
def _get_event_ids_for_partial_state_join(
join_event: EventBase,
prev_state_ids: StateMap[str],
) -> Collection[str]:
"""Calculate state to be retuned in a partial_state send_join
Args:
join_event: the join event being send_joined
prev_state_ids: the event ids of the state before the join
Returns:
the event ids to be returned
"""
# return all non-member events
state_event_ids = {
event_id
for (event_type, state_key), event_id in prev_state_ids.items()
if event_type != EventTypes.Member
}
# we also need the current state of the current user (it's going to
# be an auth event for the new join, so we may as well return it)
current_membership_event_id = prev_state_ids.get(
(EventTypes.Member, join_event.state_key)
)
if current_membership_event_id is not None:
state_event_ids.add(current_membership_event_id)
# TODO: return a few more members:
# - those with invites
# - those that are kicked? / banned
return state_event_ids
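A worked example of the helper above (event IDs are hypothetical): given the pre-join state below, a partial-state join for @alice:example.org returns all non-member state plus Alice's own membership.

prev_state_ids = {
    ("m.room.create", ""): "$create",
    ("m.room.power_levels", ""): "$power",
    ("m.room.member", "@alice:example.org"): "$alice_member",
    ("m.room.member", "@bob:example.org"): "$bob_member",
}
# _get_event_ids_for_partial_state_join(alice_join_event, prev_state_ids)
# -> {"$create", "$power", "$alice_member"}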

View File

@@ -15,7 +15,6 @@
import functools
import logging
import re
import time
from typing import TYPE_CHECKING, Any, Awaitable, Callable, Optional, Tuple, cast
from synapse.api.errors import Codes, FederationDeniedError, SynapseError
@@ -25,10 +24,8 @@ from synapse.http.servlet import parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.logging.context import run_in_background
from synapse.logging.opentracing import (
active_span,
set_tag,
span_context_from_request,
start_active_span,
start_active_span_follows_from,
whitelisted_homeserver,
)
@@ -268,10 +265,9 @@ class BaseFederationServlet:
content = parse_json_object_from_request(request)
try:
with start_active_span("authenticate_request"):
origin: Optional[str] = await authenticator.authenticate_request(
request, content
)
origin: Optional[str] = await authenticator.authenticate_request(
request, content
)
except NoAuthenticationError:
origin = None
if self.REQUIRE_AUTH:
@@ -286,57 +282,32 @@ class BaseFederationServlet:
# update the active opentracing span with the authenticated entity
set_tag("authenticated_entity", origin)
# if the origin is authenticated and whitelisted, use its span context
# as the parent.
# if the origin is authenticated and whitelisted, link to its span context
context = None
if origin and whitelisted_homeserver(origin):
context = span_context_from_request(request)
if context:
servlet_span = active_span()
# a scope which uses the origin's context as a parent
processing_start_time = time.time()
scope = start_active_span_follows_from(
"incoming-federation-request",
child_of=context,
contexts=(servlet_span,),
start_time=processing_start_time,
)
scope = start_active_span_follows_from(
"incoming-federation-request", contexts=(context,) if context else ()
)
else:
# just use our context as a parent
scope = start_active_span(
"incoming-federation-request",
)
try:
with scope:
if origin and self.RATELIMIT:
with ratelimiter.ratelimit(origin) as d:
await d
if request._disconnected:
logger.warning(
"client disconnected before we started processing "
"request"
)
return None
response = await func(
origin, content, request.args, *args, **kwargs
with scope:
if origin and self.RATELIMIT:
with ratelimiter.ratelimit(origin) as d:
await d
if request._disconnected:
logger.warning(
"client disconnected before we started processing "
"request"
)
else:
return None
response = await func(
origin, content, request.args, *args, **kwargs
)
finally:
# if we used the origin's context as the parent, add a new span using
# the servlet span as a parent, so that we have a link
if context:
scope2 = start_active_span_follows_from(
"process-federation_request",
contexts=(scope.span,),
start_time=processing_start_time,
else:
response = await func(
origin, content, request.args, *args, **kwargs
)
scope2.close()
return response

View File

@@ -24,9 +24,9 @@ from typing import (
Union,
)
from matrix_common.versionstring import get_distribution_version_string
from typing_extensions import Literal
import synapse
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersions
from synapse.api.urls import FEDERATION_UNSTABLE_PREFIX, FEDERATION_V2_PREFIX
@@ -42,6 +42,7 @@ from synapse.http.servlet import (
)
from synapse.types import JsonDict
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.versionstring import get_version_string
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -108,11 +109,11 @@ class FederationSendServlet(BaseFederationServerServlet):
)
if issue_8631_logger.isEnabledFor(logging.DEBUG):
DEVICE_UPDATE_EDUS = ["m.device_list_update", "m.signing_key_update"]
DEVICE_UPDATE_EDUS = {"m.device_list_update", "m.signing_key_update"}
device_list_updates = [
edu.content
for edu in transaction_data.get("edus", [])
if edu.get("edu_type") in DEVICE_UPDATE_EDUS
if edu.edu_type in DEVICE_UPDATE_EDUS
]
if device_list_updates:
issue_8631_logger.debug(
@@ -411,16 +412,6 @@ class FederationV2SendJoinServlet(BaseFederationServerServlet):
PREFIX = FEDERATION_V2_PREFIX
def __init__(
self,
hs: "HomeServer",
authenticator: Authenticator,
ratelimiter: FederationRateLimiter,
server_name: str,
):
super().__init__(hs, authenticator, ratelimiter, server_name)
self._msc3706_enabled = hs.config.experimental.msc3706_enabled
async def on_PUT(
self,
origin: str,
@@ -431,15 +422,7 @@ class FederationV2SendJoinServlet(BaseFederationServerServlet):
) -> Tuple[int, JsonDict]:
# TODO(paul): assert that event_id parsed from path actually
# match those given in content
partial_state = False
if self._msc3706_enabled:
partial_state = parse_boolean_from_args(
query, "org.matrix.msc3706.partial_state", default=False
)
result = await self.handler.on_send_join_request(
origin, content, room_id, caller_supports_partial_state=partial_state
)
result = await self.handler.on_send_join_request(origin, content, room_id)
return 200, result
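A hedged sketch of how the removed MSC3706 opt-in flag is read from the query string (the raw query dict is hypothetical; parse_boolean_from_args is the servlet helper imported in this file):

from synapse.http.servlet import parse_boolean_from_args

# Raw Twisted query args as the servlet sees them (hypothetical).
query = {b"org.matrix.msc3706.partial_state": [b"true"]}
partial_state = parse_boolean_from_args(
    query, "org.matrix.msc3706.partial_state", default=False
)
# partial_state == True; it is passed to on_send_join_request() as
# caller_supports_partial_state.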
@@ -615,12 +598,7 @@ class FederationVersionServlet(BaseFederationServlet):
) -> Tuple[int, JsonDict]:
return (
200,
{
"server": {
"name": "Synapse",
"version": get_distribution_version_string("matrix-synapse"),
}
},
{"server": {"name": "Synapse", "version": get_version_string(synapse)}},
)

View File

@@ -55,9 +55,6 @@ class ApplicationServicesHandler:
self.clock = hs.get_clock()
self.notify_appservices = hs.config.appservice.notify_appservices
self.event_sources = hs.get_event_sources()
self._msc2409_to_device_messages_enabled = (
hs.config.experimental.msc2409_to_device_messages_enabled
)
self.current_max = 0
self.is_processing = False
@@ -135,9 +132,7 @@ class ApplicationServicesHandler:
# Fork off pushes to these services
for service in services:
self.scheduler.enqueue_for_appservice(
service, events=[event]
)
self.scheduler.submit_event_for_as(service, event)
now = self.clock.time_msec()
ts = await self.store.get_received_ts(event.event_id)
@@ -204,9 +199,9 @@ class ApplicationServicesHandler:
Args:
stream_key: The stream the event came from.
`stream_key` can be "typing_key", "receipt_key", "presence_key" or
"to_device_key". Any other value for `stream_key` will cause this function
to return early.
`stream_key` can be "typing_key", "receipt_key", "edu_key"
or "presence_key". Any other value for `stream_key` will
cause this function to return early.
Ephemeral events will only be pushed to appservices that have opted into
receiving them by setting `push_ephemeral` to true in their registration
@@ -222,15 +217,8 @@ class ApplicationServicesHandler:
if not self.notify_appservices:
return
# Notify appservices of updates in ephemeral event streams.
# Only the following streams are currently supported.
# FIXME: We should use constants for these values.
if stream_key not in (
"typing_key",
"receipt_key",
"presence_key",
"to_device_key",
):
# Ignore any unsupported streams
if stream_key not in ("typing_key", "receipt_key", "presence_key", "edu_key"):
return
# Assert that new_token is an integer (and not a RoomStreamToken).
@@ -246,13 +234,6 @@ class ApplicationServicesHandler:
# Additional context: https://github.com/matrix-org/synapse/pull/11137
assert isinstance(new_token, int)
# Ignore to-device messages if the feature flag is not enabled
if (
stream_key == "to_device_key"
and not self._msc2409_to_device_messages_enabled
):
return
# Check whether there are any appservices which have registered to receive
# ephemeral events.
#
@@ -286,7 +267,7 @@ class ApplicationServicesHandler:
with Measure(self.clock, "notify_interested_services_ephemeral"):
for service in services:
if stream_key == "typing_key":
# Note that we don't persist the token (via set_appservice_stream_type_pos)
# Note that we don't persist the token (via set_type_stream_id_for_appservice)
# for typing_key due to performance reasons and due to their highly
# ephemeral nature.
#
@@ -294,7 +275,7 @@ class ApplicationServicesHandler:
# and, if they apply to this application service, send it off.
events = await self._handle_typing(service, new_token)
if events:
self.scheduler.enqueue_for_appservice(service, ephemeral=events)
self.scheduler.submit_ephemeral_events_for_as(service, events)
continue
# Since we read/update the stream position for this AS/stream
@@ -305,35 +286,38 @@ class ApplicationServicesHandler:
):
if stream_key == "receipt_key":
events = await self._handle_receipts(service, new_token)
self.scheduler.enqueue_for_appservice(service, ephemeral=events)
if events:
self.scheduler.submit_ephemeral_events_for_as(
service, events
)
# Persist the latest handled stream token for this appservice
await self.store.set_appservice_stream_type_pos(
await self.store.set_type_stream_id_for_appservice(
service, "read_receipt", new_token
)
elif stream_key == "presence_key":
events = await self._handle_presence(service, users, new_token)
self.scheduler.enqueue_for_appservice(service, ephemeral=events)
if events:
self.scheduler.submit_ephemeral_events_for_as(
service, events
)
# Persist the latest handled stream token for this appservice
await self.store.set_appservice_stream_type_pos(
await self.store.set_type_stream_id_for_appservice(
service, "presence", new_token
)
elif stream_key == "to_device_key":
# Retrieve a list of to-device message events, as well as the
# maximum stream token of the messages we were able to retrieve.
to_device_messages = await self._get_to_device_messages(
service, new_token, users
)
self.scheduler.enqueue_for_appservice(
service, to_device_messages=to_device_messages
)
elif stream_key == "edu_key":
events = await self._handle_edus(service, new_token)
if events:
self.scheduler.submit_ephemeral_events_for_as(
service, events
)
# Persist the latest handled stream token for this appservice
await self.store.set_appservice_stream_type_pos(
service, "to_device", new_token
await self.store.set_type_stream_id_for_appservice(
service, "edu", new_token
)
async def _handle_typing(
@@ -407,6 +391,42 @@ class ApplicationServicesHandler:
)
return receipts
async def _handle_edus(
self, service: ApplicationService, new_token: Optional[int]
) -> List[JsonDict]:
"""
Return the latest custom EDUs that the given application service should receive.
First fetch all custom EDUs between the last EDU stream token that this
application service should have previously received (non-inclusive) and the
latest EDU stream token (inclusive). Then from that set, return only
those custom EDUs that the given application service may be interested in.
Args:
service: The application service to check for which events it should receive.
new_token: An EDU stream token. Purely used to double-check that the
from_token we pull from the database isn't greater than or equal to this
token. Prevents accidentally duplicating work.
Returns:
A list of JSON dictionaries containing data derived from the custom EDUs that
should be sent to the given application service.
"""
from_key = await self.store.get_type_stream_id_for_appservice(
service, "edu"
)
if new_token is not None and new_token <= from_key:
logger.debug(
"Rejecting token lower than or equal to stored: %s" % (new_token,)
)
return []
edus_source = self.event_sources.sources.edus
edus, _ = await edus_source.get_new_events_as(
service=service, from_key=from_key
)
return edus
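_handle_edus follows the same read-then-advance bookkeeping as the receipt and presence branches above; a condensed sketch (method names as used in this file; to_key comes from the EDU source):

async def push_edus_to_appservice(store, scheduler, edus_source, service):
    # Read from the last stream position pushed to this appservice...
    from_key = await store.get_type_stream_id_for_appservice(service, "edu")
    edus, to_key = await edus_source.get_new_events_as(
        service=service, from_key=from_key
    )
    if edus:
        scheduler.submit_ephemeral_events_for_as(service, edus)
    # ...then persist the new position so the same EDUs are not re-sent.
    await store.set_type_stream_id_for_appservice(service, "edu", to_key)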
async def _handle_presence(
self,
service: ApplicationService,
@@ -469,79 +489,6 @@ class ApplicationServicesHandler:
return events
async def _get_to_device_messages(
self,
service: ApplicationService,
new_token: int,
users: Collection[Union[str, UserID]],
) -> List[JsonDict]:
"""
Given an application service, determine which events it should receive
from those between the last-recorded to-device message stream token for this
appservice and the given stream token.
Args:
service: The application service to check for which events it should receive.
new_token: The latest to-device event stream token.
users: The users to be notified for the new to-device messages
(ie, the recipients of the messages).
Returns:
A list of JSON dictionaries containing data derived from the to-device events
that should be sent to the given application service.
"""
# Get the stream token that this application service has processed up until
from_key = await self.store.get_type_stream_id_for_appservice(
service, "to_device"
)
# Filter out users that this appservice is not interested in
users_appservice_is_interested_in: List[str] = []
for user in users:
# FIXME: We should do this farther up the call stack. We currently repeat
# this operation in _handle_presence.
if isinstance(user, UserID):
user = user.to_string()
if service.is_interested_in_user(user):
users_appservice_is_interested_in.append(user)
if not users_appservice_is_interested_in:
# Return early if the AS was not interested in any of these users
return []
# Retrieve the to-device messages for each user
recipient_device_to_messages = await self.store.get_messages_for_user_devices(
users_appservice_is_interested_in,
from_key,
new_token,
)
# According to MSC2409, we'll need to add 'to_user_id' and 'to_device_id' fields
# to the event JSON so that the application service will know which user/device
# combination this messages was intended for.
#
# So we mangle this dict into a flat list of to-device messages with the relevant
# user ID and device ID embedded inside each message dict.
message_payload: List[JsonDict] = []
for (
user_id,
device_id,
), messages in recipient_device_to_messages.items():
for message_json in messages:
# Remove 'message_id' from the to-device message, as it's an internal ID
message_json.pop("message_id", None)
message_payload.append(
{
"to_user_id": user_id,
"to_device_id": device_id,
**message_json,
}
)
return message_payload
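A worked example of the flattening described above (data is hypothetical): the internal message_id is dropped and the recipient is embedded in each message.

recipient_device_to_messages = {
    ("@alice:example.org", "DEVICE1"): [
        {"type": "m.room_key_request", "content": {}, "message_id": "abc123"},
    ],
}
# After mangling, message_payload ==
# [
#     {
#         "to_user_id": "@alice:example.org",
#         "to_device_id": "DEVICE1",
#         "type": "m.room_key_request",
#         "content": {},
#     },
# ]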
async def query_user_exists(self, user_id: str) -> bool:
"""Check if any application service knows this user_id exists.
@@ -649,7 +596,7 @@ class ApplicationServicesHandler:
"""Retrieve a list of application services interested in this event.
Args:
event: The event to check.
event: The event to check. Can be None if alias_list is not.
Returns:
A list of services interested in this event based on the service regex.
"""

View File

@@ -2060,11 +2060,6 @@ CHECK_AUTH_CALLBACK = Callable[
Optional[Tuple[str, Optional[Callable[["LoginResponse"], Awaitable[None]]]]]
],
]
GET_USERNAME_FOR_REGISTRATION_CALLBACK = Callable[
[JsonDict, JsonDict],
Awaitable[Optional[str]],
]
IS_3PID_ALLOWED_CALLBACK = Callable[[str, str, bool], Awaitable[bool]]
class PasswordAuthProvider:
@@ -2077,10 +2072,6 @@ class PasswordAuthProvider:
# lists of callbacks
self.check_3pid_auth_callbacks: List[CHECK_3PID_AUTH_CALLBACK] = []
self.on_logged_out_callbacks: List[ON_LOGGED_OUT_CALLBACK] = []
self.get_username_for_registration_callbacks: List[
GET_USERNAME_FOR_REGISTRATION_CALLBACK
] = []
self.is_3pid_allowed_callbacks: List[IS_3PID_ALLOWED_CALLBACK] = []
# Mapping from login type to login parameters
self._supported_login_types: Dict[str, Iterable[str]] = {}
@@ -2092,13 +2083,9 @@ class PasswordAuthProvider:
self,
check_3pid_auth: Optional[CHECK_3PID_AUTH_CALLBACK] = None,
on_logged_out: Optional[ON_LOGGED_OUT_CALLBACK] = None,
is_3pid_allowed: Optional[IS_3PID_ALLOWED_CALLBACK] = None,
auth_checkers: Optional[
Dict[Tuple[str, Tuple[str, ...]], CHECK_AUTH_CALLBACK]
] = None,
get_username_for_registration: Optional[
GET_USERNAME_FOR_REGISTRATION_CALLBACK
] = None,
) -> None:
# Register check_3pid_auth callback
if check_3pid_auth is not None:
@@ -2143,14 +2130,6 @@ class PasswordAuthProvider:
# Add the new method to the list of auth_checker_callbacks for this login type
self.auth_checker_callbacks.setdefault(login_type, []).append(callback)
if get_username_for_registration is not None:
self.get_username_for_registration_callbacks.append(
get_username_for_registration,
)
if is_3pid_allowed is not None:
self.is_3pid_allowed_callbacks.append(is_3pid_allowed)
def get_supported_login_types(self) -> Mapping[str, Iterable[str]]:
"""Get the login types supported by this password provider
@@ -2306,84 +2285,3 @@ class PasswordAuthProvider:
except Exception as e:
logger.warning("Failed to run module API callback %s: %s", callback, e)
continue
async def get_username_for_registration(
self,
uia_results: JsonDict,
params: JsonDict,
) -> Optional[str]:
"""Defines the username to use when registering the user, using the credentials
and parameters provided during the UIA flow.
Stops at the first callback that returns a string.
Args:
uia_results: The credentials provided during the UIA flow.
params: The parameters provided by the registration request.
Returns:
The localpart to use when registering this user, or None if no module
returned a localpart.
"""
for callback in self.get_username_for_registration_callbacks:
try:
res = await callback(uia_results, params)
if isinstance(res, str):
return res
elif res is not None:
# mypy complains that this line is unreachable because it assumes the
# data returned by the module fits the expected type. We just want
# to make sure this is the case.
logger.warning( # type: ignore[unreachable]
"Ignoring non-string value returned by"
" get_username_for_registration callback %s: %s",
callback,
res,
)
except Exception as e:
logger.error(
"Module raised an exception in get_username_for_registration: %s",
e,
)
raise SynapseError(code=500, msg="Internal Server Error")
return None
async def is_3pid_allowed(
self,
medium: str,
address: str,
registration: bool,
) -> bool:
"""Check if the user can be allowed to bind a 3PID on this homeserver.
Args:
medium: The medium of the 3PID.
address: The address of the 3PID.
registration: Whether the 3PID is being bound when registering a new user.
Returns:
Whether the 3PID is allowed to be bound on this homeserver
"""
for callback in self.is_3pid_allowed_callbacks:
try:
res = await callback(medium, address, registration)
if res is False:
return res
elif not isinstance(res, bool):
# mypy complains that this line is unreachable because it assumes the
# data returned by the module fits the expected type. We just want
# to make sure this is the case.
logger.warning( # type: ignore[unreachable]
"Ignoring non-string value returned by"
" is_3pid_allowed callback %s: %s",
callback,
res,
)
except Exception as e:
logger.error("Module raised an exception in is_3pid_allowed: %s", e)
raise SynapseError(code=500, msg="Internal Server Error")
return True
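For context, a minimal module sketch using the is_3pid_allowed callback documented above (module and domain names are hypothetical):

class ExampleAuthModule:
    def __init__(self, config, api):
        api.register_password_auth_provider_callbacks(
            is_3pid_allowed=self.is_3pid_allowed,
        )

    async def is_3pid_allowed(
        self, medium: str, address: str, registration: bool
    ) -> bool:
        # Reject registrations that try to bind an email on a blocked domain.
        if (
            registration
            and medium == "email"
            and address.endswith("@blocked.example.com")
        ):
            return False
        return True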

View File

@@ -495,11 +495,13 @@ class DeviceHandler(DeviceWorkerHandler):
"Notifying about update %r/%r, ID: %r", user_id, device_id, position
)
room_ids = await self.store.get_rooms_for_user(user_id)
# specify the user ID too since the user should always get their own device list
# updates, even if they aren't in any rooms.
users_to_notify = users_who_share_room.union({user_id})
self.notifier.on_new_event("device_list_key", position, users=users_to_notify)
self.notifier.on_new_event(
"device_list_key", position, users=[user_id], rooms=room_ids
)
if hosts:
logger.info(

View File

@@ -166,14 +166,9 @@ class FederationHandler:
oldest_events_with_depth = (
await self.store.get_oldest_event_ids_with_depth_in_room(room_id)
)
insertion_events_to_be_backfilled: Dict[str, int] = {}
if self.hs.config.experimental.msc2716_enabled:
insertion_events_to_be_backfilled = (
await self.store.get_insertion_event_backward_extremities_in_room(
room_id
)
)
insertion_events_to_be_backfilled = (
await self.store.get_insertion_event_backwards_extremities_in_room(room_id)
)
logger.debug(
"_maybe_backfill_inner: extremities oldest_events_with_depth=%s insertion_events_to_be_backfilled=%s",
oldest_events_with_depth,
@@ -276,12 +271,11 @@ class FederationHandler:
]
logger.debug(
"room_id: %s, backfill: current_depth: %s, limit: %s, max_depth: %s, extrems (%d): %s filtered_sorted_extremeties_tuple: %s",
"room_id: %s, backfill: current_depth: %s, limit: %s, max_depth: %s, extrems: %s filtered_sorted_extremeties_tuple: %s",
room_id,
current_depth,
limit,
max_depth,
len(sorted_extremeties_tuple),
sorted_extremeties_tuple,
filtered_sorted_extremeties_tuple,
)
@@ -1053,19 +1047,6 @@ class FederationHandler:
limit = min(limit, 100)
events = await self.store.get_backfill_events(room_id, pdu_list, limit)
logger.debug(
"on_backfill_request: backfill events=%s",
[
"event_id=%s,depth=%d,body=%s,prevs=%s\n"
% (
event.event_id,
event.depth,
event.content.get("body", event.type),
event.prev_event_ids(),
)
for event in events
],
)
events = await filter_events_for_server(self.storage, origin, events)

View File

@@ -508,11 +508,7 @@ class FederationEventHandler:
f"room {ev.room_id}, when we were backfilling in {room_id}"
)
await self._process_pulled_events(
dest,
events,
backfilled=True,
)
await self._process_pulled_events(dest, events, backfilled=True)
async def _get_missing_events_for_pdu(
self, origin: str, pdu: EventBase, prevs: Set[str], min_depth: int
@@ -630,24 +626,11 @@ class FederationEventHandler:
backfilled: True if this is part of a historical batch of events (inhibits
notification to clients, and validation of device keys.)
"""
logger.debug(
"processing pulled backfilled=%s events=%s",
backfilled,
[
"event_id=%s,depth=%d,body=%s,prevs=%s\n"
% (
event.event_id,
event.depth,
event.content.get("body", event.type),
event.prev_event_ids(),
)
for event in events
],
)
# We want to sort these by depth so we process them and
# tell clients about them in order.
sorted_events = sorted(events, key=lambda x: x.depth)
for ev in sorted_events:
with nested_logging_context(ev.event_id):
await self._process_pulled_event(origin, ev, backfilled=backfilled)
@@ -1009,8 +992,6 @@ class FederationEventHandler:
await self._run_push_actions_and_persist_event(event, context, backfilled)
await self._handle_marker_event(origin, event)
if backfilled or context.rejected:
return
@@ -1090,6 +1071,8 @@ class FederationEventHandler:
event.sender,
)
await self._handle_marker_event(origin, event)
async def _resync_device(self, sender: str) -> None:
"""We have detected that the device list for the given user may be out
of sync, so we try and resync them.
@@ -1340,14 +1323,7 @@ class FederationEventHandler:
return event, context
events_to_persist = (x for x in (prep(event) for event in fetched_events) if x)
await self.persist_events_and_notify(
room_id,
tuple(events_to_persist),
# Mark these events backfilled as they're historic events that will
# eventually be backfilled. For example, missing events we fetch
# during backfill should be marked as backfilled as well.
backfilled=True,
)
await self.persist_events_and_notify(room_id, tuple(events_to_persist))
async def _check_event_auth(
self,

View File

@@ -1,7 +1,7 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017-2018 New Vector Ltd
# Copyright 2019-2020 The Matrix.org Foundation C.I.C.
# Copyright 2020 Sorunome
# Copyright 2019-2022 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -80,6 +80,10 @@ class MessageHandler:
self.state_store = self.storage.state
self._event_serializer = hs.get_event_client_serializer()
self._ephemeral_events_enabled = hs.config.server.enable_ephemeral_messages
self.federation = None
if hs.should_send_federation():
self.federation = hs.get_federation_sender()
self.notifier = hs.get_notifier()
# The scheduled call to self._expire_event. None if no call is currently
# scheduled.
@@ -490,12 +494,12 @@ class EventCreationHandler:
requester: Requester,
event_dict: dict,
txn_id: Optional[str] = None,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
allow_no_prev_events: bool = False,
depth: Optional[int] = None,
) -> Tuple[EventBase, EventContext]:
"""
@@ -510,10 +514,6 @@ class EventCreationHandler:
requester
event_dict: An entire event
txn_id
allow_no_prev_events: Whether to allow this event to be created with an
empty list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases like MSC2716.
prev_event_ids:
the forward extremities to use as the prev_events for the
new event.
@@ -608,10 +608,10 @@ class EventCreationHandler:
event, context = await self.create_new_client_event(
builder=builder,
requester=requester,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
depth=depth,
allow_no_prev_events=allow_no_prev_events,
)
# In an ideal world we wouldn't need the second part of this condition. However,
@@ -764,11 +764,90 @@ class EventCreationHandler:
return prev_event
return None
async def _push_remote_edu(
self,
event_dict: dict,
) -> None:
if not self.federation:
return
try:
users = await self.store.get_users_in_room(event_dict["room_id"])
for domain in {get_domain_from_id(u) for u in users}:
if domain != self.server_name:
logger.debug("sending custom EDU to %s", domain)
self.federation.build_and_send_edu(
destination=domain,
edu_type=event_dict["type"],
content=event_dict,
)
except Exception:
logger.exception("Error pushing custom EDU to remotes")
async def send_ephemeral_event(
self,
requester: Requester,
event_dict: dict,
ratelimit: bool = True,
txn_id: Optional[str] = None,
ignore_shadow_ban: bool = False,
) -> None:
"""
Creates an event, then sends it.
See self.create_event and self.handle_new_client_event.
Args:
requester: The requester sending the event.
event_dict: An entire ephemeral event dict.
ratelimit: Whether to rate limit this send.
txn_id: The transaction ID.
ignore_shadow_ban: True if shadow-banned users should be allowed to
send this event.
Raises:
ShadowBanError if the requester has been shadow-banned.
"""
if not self._ephemeral_events_enabled:
return
if not ignore_shadow_ban and requester.shadow_banned:
# We randomly sleep a bit just to annoy the requester.
await self.clock.sleep(random.randint(1, 10))
raise ShadowBanError()
assert self.hs.is_mine_id(event_dict["sender"]), "User must be our own: %s" % (
event_dict["sender"],
)
# TODO: spam check the EDU
# TODO: store it in the DB so it's persisted nicely
# send it remotely
run_as_background_process(
"message._push_remote_edu", self._push_remote_edu, event_dict
)
# send it locally
async def _notify() -> None:
try:
# Notifier.on_new_event is synchronous; use the EDU stream key and the
# current EDU stream position (cf. EduEventSource.get_current_key below).
self.notifier.on_new_event(
"edu_key",
self.store.get_max_edus_stream_id(),
rooms=[event_dict["room_id"]],
)
except Exception:
logger.exception("Error notifying about new custom EDU")
run_in_background(_notify)
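A hedged usage sketch of the draft API above; the event dict shape is an assumption based on the fields the method reads (type, room_id, sender, content):

await event_creation_handler.send_ephemeral_event(
    requester,
    {
        "type": "com.example.ping",
        "room_id": "!room:example.org",
        "sender": "@alice:example.org",
        "content": {"body": "ping"},
    },
)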
async def create_and_send_nonmember_event(
self,
requester: Requester,
event_dict: dict,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
ratelimit: bool = True,
@@ -786,10 +865,6 @@ class EventCreationHandler:
Args:
requester: The requester sending the event.
event_dict: An entire event.
allow_no_prev_events: Whether to allow this event to be created with an
empty list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases like MSC2716.
prev_event_ids:
The event IDs to use as the prev events.
Should normally be left as None to automatically request them
@@ -889,20 +964,16 @@ class EventCreationHandler:
self,
builder: EventBuilder,
requester: Optional[Requester] = None,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
depth: Optional[int] = None,
allow_no_prev_events: bool = False,
) -> Tuple[EventBase, EventContext]:
"""Create a new event for a local client
Args:
builder:
requester:
allow_no_prev_events: Whether to allow this event to be created with an
empty list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases like MSC2716.
prev_event_ids:
the forward extremities to use as the prev_events for the
new event.
@@ -921,6 +992,7 @@ class EventCreationHandler:
Returns:
Tuple of created event, context
"""
# Strip down the auth_event_ids to only what we need to auth the event.
# For example, we don't need extra m.room.member that don't match event.sender
full_state_ids_at_event = None
@@ -1801,3 +1873,70 @@ class EventCreationHandler:
# delta_ids might need an update.
context = await self.state.compute_event_context(event)
return event, context
class EduEventSource(EventSource[int, JsonDict]):
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
self.config = hs.config
async def get_new_events(
self,
user: UserID,
from_key: int,
limit: Optional[int],
room_ids: Iterable[str],
is_guest: bool,
explicit_room_id: Optional[str] = None,
) -> Tuple[List[JsonDict], int]:
from_key = int(from_key)
to_key = self.get_current_key()
if from_key == to_key:
return [], to_key
events = await self.store.get_edus_for_rooms(
room_ids, from_key=from_key, to_key=to_key
)
return events, to_key
async def get_new_events_as(
self, from_key: int, service: ApplicationService
) -> Tuple[List[JsonDict], int]:
"""Returns a set of new custom EDUs that an appservice
may be interested in.
Args:
from_key: the stream position at which events should be fetched from
service: The appservice which may be interested
Returns:
A two-tuple containing the following:
* A list of json dictionaries derived from the custom EDUs that the
appservice may be interested in.
* The current EDU stream token.
"""
from_key = int(from_key)
to_key = self.get_current_key()
if from_key == to_key:
return [], to_key
# Fetch all custom EDUs for all rooms, up to a limit of 100. This is ordered
# by most recent.
rooms_to_events = await self.store.get_edus_for_all_rooms(
from_key=from_key, to_key=to_key
)
# Then filter down to rooms that the AS can read
events = []
for room_id, event in rooms_to_events.items():
if not await service.matches_user_in_member_list(room_id, self.store):
continue
events.append(event)
return events, to_key
def get_current_key(self, direction: str = "f") -> int:
return self.store.get_max_edus_stream_id()
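Sketch (assumption, not shown in this diff): for the self.event_sources.sources.edus lookups used elsewhere in this draft to resolve, EduEventSource would need to be registered on the sources container in synapse/streams/events.py, and StreamToken would need an edu_key field for the copy_and_replace calls.

import attr

@attr.s(frozen=True, slots=True, auto_attribs=True)
class _EventSourcesInner:
    room: "RoomEventSource"
    presence: "PresenceEventSource"
    typing: "TypingNotificationEventSource"
    receipt: "ReceiptEventSource"
    account_data: "AccountDataEventSource"
    edus: "EduEventSource"  # the new custom-EDU stream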

View File

@@ -544,9 +544,9 @@ class OidcProvider:
"""
metadata = await self.load_metadata()
token_endpoint = metadata.get("token_endpoint")
raw_headers: Dict[str, str] = {
raw_headers = {
"Content-Type": "application/x-www-form-urlencoded",
"User-Agent": self._http_client.user_agent.decode("ascii"),
"User-Agent": self._http_client.user_agent,
"Accept": "application/json",
}

View File

@@ -31,8 +31,6 @@ from synapse.types import (
create_requester,
get_domain_from_id,
)
from synapse.util.caches.descriptors import cached
from synapse.util.stringutils import parse_and_validate_mxc_uri
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -66,11 +64,6 @@ class ProfileHandler:
self.user_directory_handler = hs.get_user_directory_handler()
self.request_ratelimiter = hs.get_request_ratelimiter()
self.max_avatar_size = hs.config.server.max_avatar_size
self.allowed_avatar_mimetypes = hs.config.server.allowed_avatar_mimetypes
self.server_name = hs.config.server.server_name
if hs.config.worker.run_background_tasks:
self.clock.looping_call(
self._update_remote_profile_cache, self.PROFILE_UPDATE_MS
@@ -293,9 +286,6 @@ class ProfileHandler:
400, "Avatar URL is too long (max %i)" % (MAX_AVATAR_URL_LEN,)
)
if not await self.check_avatar_size_and_mime_type(new_avatar_url):
raise SynapseError(403, "This avatar is not allowed", Codes.FORBIDDEN)
avatar_url_to_set: Optional[str] = new_avatar_url
if new_avatar_url == "":
avatar_url_to_set = None
@@ -317,63 +307,6 @@ class ProfileHandler:
await self._update_join_states(requester, target_user)
@cached()
async def check_avatar_size_and_mime_type(self, mxc: str) -> bool:
"""Check that the size and content type of the avatar at the given MXC URI are
within the configured limits.
Args:
mxc: The MXC URI at which the avatar can be found.
Returns:
A boolean indicating whether the file can be allowed to be set as an avatar.
"""
if not self.max_avatar_size and not self.allowed_avatar_mimetypes:
return True
server_name, _, media_id = parse_and_validate_mxc_uri(mxc)
if server_name == self.server_name:
media_info = await self.store.get_local_media(media_id)
else:
media_info = await self.store.get_cached_remote_media(server_name, media_id)
if media_info is None:
# Both configuration options need to access the file's metadata, and
# retrieving remote avatars just for this becomes a bit of a faff, especially
# if e.g. the file is too big. It's also generally safe to assume most files
# used as avatar are uploaded locally, or if the upload didn't happen as part
# of a PUT request on /avatar_url that the file was at least previewed by the
# user locally (and therefore downloaded to the remote media cache).
logger.warning("Forbidding avatar change to %s: avatar not on server", mxc)
return False
if self.max_avatar_size:
# Ensure avatar does not exceed max allowed avatar size
if media_info["media_length"] > self.max_avatar_size:
logger.warning(
"Forbidding avatar change to %s: %d bytes is above the allowed size "
"limit",
mxc,
media_info["media_length"],
)
return False
if self.allowed_avatar_mimetypes:
# Ensure the avatar's file type is allowed
if (
self.allowed_avatar_mimetypes
and media_info["media_type"] not in self.allowed_avatar_mimetypes
):
logger.warning(
"Forbidding avatar change to %s: mimetype %s not allowed",
mxc,
media_info["media_type"],
)
return False
return True
async def on_profile_query(self, args: JsonDict) -> JsonDict:
"""Handles federation profile query requests."""

View File

@@ -132,7 +132,6 @@ class RegistrationHandler:
localpart: str,
guest_access_token: Optional[str] = None,
assigned_user_id: Optional[str] = None,
inhibit_user_in_use_error: bool = False,
) -> None:
if types.contains_invalid_mxid_characters(localpart):
raise SynapseError(
@@ -172,22 +171,21 @@ class RegistrationHandler:
users = await self.store.get_users_by_id_case_insensitive(user_id)
if users:
if not inhibit_user_in_use_error and not guest_access_token:
if not guest_access_token:
raise SynapseError(
400, "User ID already taken.", errcode=Codes.USER_IN_USE
)
if guest_access_token:
user_data = await self.auth.get_user_by_access_token(guest_access_token)
if (
not user_data.is_guest
or UserID.from_string(user_data.user_id).localpart != localpart
):
raise AuthError(
403,
"Cannot register taken user ID without valid guest "
"credentials for that user.",
errcode=Codes.FORBIDDEN,
)
user_data = await self.auth.get_user_by_access_token(guest_access_token)
if (
not user_data.is_guest
or UserID.from_string(user_data.user_id).localpart != localpart
):
raise AuthError(
403,
"Cannot register taken user ID without valid guest "
"credentials for that user.",
errcode=Codes.FORBIDDEN,
)
if guest_access_token is None:
try:

View File

@@ -30,7 +30,6 @@ from typing import (
Tuple,
)
import attr
from typing_extensions import TypedDict
from synapse.api.constants import (
@@ -61,7 +60,6 @@ from synapse.events.utils import copy_power_levels_contents
from synapse.federation.federation_client import InvalidResponseError
from synapse.handlers.federation import get_domains_from_state
from synapse.rest.admin._base import assert_user_is_admin
from synapse.storage.databases.main.relations import BundledAggregations
from synapse.storage.state import StateFilter
from synapse.streams import EventSource
from synapse.types import (
@@ -92,17 +90,6 @@ id_server_scheme = "https://"
FIVE_MINUTES_IN_MS = 5 * 60 * 1000
@attr.s(slots=True, frozen=True, auto_attribs=True)
class EventContext:
events_before: List[EventBase]
event: EventBase
events_after: List[EventBase]
state: List[EventBase]
aggregations: Dict[str, BundledAggregations]
start: str
end: str
class RoomCreationHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastore()
@@ -694,6 +681,11 @@ class RoomCreationHandler:
if not is_requester_admin and not (
await self.spam_checker.user_may_create_room(user_id)
and await self.spam_checker.user_may_create_room_with_invites(
user_id,
invite_list,
invite_3pid_list,
)
):
raise SynapseError(
403, "You are not permitted to create rooms", Codes.FORBIDDEN
@@ -1127,7 +1119,7 @@ class RoomContextHandler:
limit: int,
event_filter: Optional[Filter],
use_admin_priviledge: bool = False,
) -> Optional[EventContext]:
) -> Optional[JsonDict]:
"""Retrieves events, pagination tokens and state around a given event
in a room.
@@ -1175,28 +1167,38 @@ class RoomContextHandler:
results = await self.store.get_events_around(
room_id, event_id, before_limit, after_limit, event_filter
)
events_before = results.events_before
events_after = results.events_after
if event_filter:
events_before = await event_filter.filter(events_before)
events_after = await event_filter.filter(events_after)
results["events_before"] = await event_filter.filter(
results["events_before"]
)
results["events_after"] = await event_filter.filter(results["events_after"])
events_before = await filter_evts(events_before)
events_after = await filter_evts(events_after)
results["events_before"] = await filter_evts(results["events_before"])
results["events_after"] = await filter_evts(results["events_after"])
# filter_evts can return a pruned event in case the user is allowed to see that
# there's something there but not see the content, so use the event that's in
# `filtered` rather than the event we retrieved from the datastore.
event = filtered[0]
results["event"] = filtered[0]
# Fetch the aggregations.
aggregations = await self.store.get_bundled_aggregations(
itertools.chain(events_before, (event,), events_after),
user.to_string(),
[results["event"]], user.to_string()
)
aggregations.update(
await self.store.get_bundled_aggregations(
results["events_before"], user.to_string()
)
)
aggregations.update(
await self.store.get_bundled_aggregations(
results["events_after"], user.to_string()
)
)
results["aggregations"] = aggregations
if events_after:
last_event_id = events_after[-1].event_id
if results["events_after"]:
last_event_id = results["events_after"][-1].event_id
else:
last_event_id = event_id
@@ -1204,9 +1206,9 @@ class RoomContextHandler:
state_filter = StateFilter.from_lazy_load_member_list(
ev.sender
for ev in itertools.chain(
events_before,
(event,),
events_after,
results["events_before"],
(results["event"],),
results["events_after"],
)
)
else:
@@ -1224,23 +1226,21 @@ class RoomContextHandler:
if event_filter:
state_events = await event_filter.filter(state_events)
results["state"] = await filter_evts(state_events)
# We use a dummy token here as we only care about the room portion of
# the token, which we replace.
token = StreamToken.START
return EventContext(
events_before=events_before,
event=event,
events_after=events_after,
state=await filter_evts(state_events),
aggregations=aggregations,
start=await token.copy_and_replace("room_key", results.start).to_string(
self.store
),
end=await token.copy_and_replace("room_key", results.end).to_string(
self.store
),
)
results["start"] = await token.copy_and_replace(
"room_key", results["start"]
).to_string(self.store)
results["end"] = await token.copy_and_replace(
"room_key", results["end"]
).to_string(self.store)
return results
class TimestampLookupHandler:

View File

@@ -13,6 +13,10 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
def generate_fake_event_id() -> str:
return "$fake_" + random_string(43)
class RoomBatchHandler:
def __init__(self, hs: "HomeServer"):
self.hs = hs
@@ -178,12 +182,11 @@ class RoomBatchHandler:
state_event_ids_at_start = []
auth_event_ids = initial_auth_event_ids.copy()
# Make the state events float off on their own by specifying no
# prev_events for the first one in the chain so we don't have a bunch of
# `@mxid joined the room` noise between each batch.
prev_event_ids_for_state_chain: List[str] = []
# Make the state events float off on their own so we don't have a
# bunch of `@mxid joined the room` noise between each batch
prev_event_id_for_state_chain = generate_fake_event_id()
for index, state_event in enumerate(state_events_at_start):
for state_event in state_events_at_start:
assert_params_in_dict(
state_event, ["type", "origin_server_ts", "content", "sender"]
)
@@ -219,10 +222,7 @@ class RoomBatchHandler:
content=event_dict["content"],
outlier=True,
historical=True,
# Only the first event in the chain should be floating.
# The rest should hang off each other in a chain.
allow_no_prev_events=index == 0,
prev_event_ids=prev_event_ids_for_state_chain,
prev_event_ids=[prev_event_id_for_state_chain],
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same
# reference and also update in the event when we append later.
@@ -242,10 +242,7 @@ class RoomBatchHandler:
event_dict,
outlier=True,
historical=True,
# Only the first event in the chain should be floating.
# The rest should hang off each other in a chain.
allow_no_prev_events=index == 0,
prev_event_ids=prev_event_ids_for_state_chain,
prev_event_ids=[prev_event_id_for_state_chain],
# Make sure to use a copy of this list because we modify it
# later in the loop here. Otherwise it will be the same
# reference and also update in the event when we append later.
@@ -256,7 +253,7 @@ class RoomBatchHandler:
state_event_ids_at_start.append(event_id)
auth_event_ids.append(event_id)
# Connect all the state in a floating chain
prev_event_ids_for_state_chain = [event_id]
prev_event_id_for_state_chain = event_id
return state_event_ids_at_start
@@ -264,6 +261,7 @@ class RoomBatchHandler:
self,
events_to_create: List[JsonDict],
room_id: str,
initial_prev_event_ids: List[str],
inherited_depth: int,
auth_event_ids: List[str],
app_service_requester: Requester,
@@ -279,6 +277,9 @@ class RoomBatchHandler:
events_to_create: List of historical events to create in JSON
dictionary format.
room_id: Room where you want the events persisted in.
initial_prev_event_ids: These will be the prev_events for the first
event created. Each event created afterwards will point to the
previous event created.
inherited_depth: The depth to create the events at (you will
probably get by calling inherit_depth_from_prev_ids(...)).
auth_event_ids: Define which events allow you to create the given
@@ -290,14 +291,11 @@ class RoomBatchHandler:
"""
assert app_service_requester.app_service
# Make the historical event chain float off on its own by specifying no
# prev_events for the first event in the chain which causes the HS to
# ask for the state at the start of the batch later.
prev_event_ids: List[str] = []
prev_event_ids = initial_prev_event_ids.copy()
event_ids = []
events_to_persist = []
for index, ev in enumerate(events_to_create):
for ev in events_to_create:
assert_params_in_dict(ev, ["type", "origin_server_ts", "content", "sender"])
assert self.hs.is_mine_id(ev["sender"]), "User must be our own: %s" % (
@@ -321,9 +319,6 @@ class RoomBatchHandler:
ev["sender"], app_service_requester.app_service
),
event_dict,
# Only the first event in the chain should be floating.
# The rest should hang off each other in a chain.
allow_no_prev_events=index == 0,
prev_event_ids=event_dict.get("prev_events"),
auth_event_ids=auth_event_ids,
historical=True,
@@ -375,6 +370,7 @@ class RoomBatchHandler:
events_to_create: List[JsonDict],
room_id: str,
batch_id_to_connect_to: str,
initial_prev_event_ids: List[str],
inherited_depth: int,
auth_event_ids: List[str],
app_service_requester: Requester,
@@ -389,6 +385,9 @@ class RoomBatchHandler:
room_id: Room where you want the events created in.
batch_id_to_connect_to: The batch_id from the insertion event you
want this batch to connect to.
initial_prev_event_ids: These will be the prev_events for the first
event created. Each event created afterwards will point to the
previous event created.
inherited_depth: The depth to create the events at (you will
probably get by calling inherit_depth_from_prev_ids(...)).
auth_event_ids: Define which events allow you to create the given
@@ -437,6 +436,7 @@ class RoomBatchHandler:
event_ids = await self.persist_historical_events(
events_to_create=events_to_create,
room_id=room_id,
initial_prev_event_ids=initial_prev_event_ids,
inherited_depth=inherited_depth,
auth_event_ids=auth_event_ids,
app_service_requester=app_service_requester,

View File

@@ -116,13 +116,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
burst_count=hs.config.ratelimiting.rc_invites_per_user.burst_count,
)
self._third_party_invite_limiter = Ratelimiter(
store=self.store,
clock=self.clock,
rate_hz=hs.config.ratelimiting.rc_third_party_invite.per_second,
burst_count=hs.config.ratelimiting.rc_third_party_invite.burst_count,
)
self.request_ratelimiter = hs.get_request_ratelimiter()
@abc.abstractmethod
@@ -268,8 +261,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
target: UserID,
room_id: str,
membership: str,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
prev_event_ids: List[str],
auth_event_ids: Optional[List[str]] = None,
txn_id: Optional[str] = None,
ratelimit: bool = True,
@@ -287,12 +279,8 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
target:
room_id:
membership:
allow_no_prev_events: Whether to allow this event to be created with an
empty list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases like MSC2716.
prev_event_ids: The event IDs to use as the prev events
auth_event_ids:
The event ids to use as the auth_events for the new event.
Should normally be left as None, which will cause them to be calculated
@@ -349,7 +337,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
"membership": membership,
},
txn_id=txn_id,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
require_consent=require_consent,
@@ -452,7 +439,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
) -> Tuple[str, int]:
@@ -477,10 +463,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
allow_no_prev_events: Whether to allow this event to be created with an
empty list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases like MSC2716.
prev_event_ids: The event IDs to use as the prev events
auth_event_ids:
The event ids to use as the auth_events for the new event.
@@ -515,7 +497,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
require_consent=require_consent,
outlier=outlier,
historical=historical,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
)
@@ -537,7 +518,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
require_consent: bool = True,
outlier: bool = False,
historical: bool = False,
allow_no_prev_events: bool = False,
prev_event_ids: Optional[List[str]] = None,
auth_event_ids: Optional[List[str]] = None,
) -> Tuple[str, int]:
@@ -564,10 +544,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
historical: Indicates whether the message is being inserted
back in time around some existing events. This is used to skip
a few checks and mark the event as backfilled.
allow_no_prev_events: Whether to allow this event to be created with an
empty list of prev_events. Normally this is prohibited just because most
events should have a prev_event and we should only use this in special
cases like MSC2716.
prev_event_ids: The event IDs to use as the prev events
auth_event_ids:
The event ids to use as the auth_events for the new event.
@@ -614,12 +590,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
errcode=Codes.BAD_JSON,
)
if "avatar_url" in content:
if not await self.profile_handler.check_avatar_size_and_mime_type(
content["avatar_url"],
):
raise SynapseError(403, "This avatar is not allowed", Codes.FORBIDDEN)
# The event content should *not* include the authorising user as
# it won't be properly signed. Strip it out since it might come
# back from a client updating a display name / avatar.
@@ -697,7 +667,6 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
membership=effective_membership_state,
txn_id=txn_id,
ratelimit=ratelimit,
allow_no_prev_events=allow_no_prev_events,
prev_event_ids=prev_event_ids,
auth_event_ids=auth_event_ids,
content=content,
@@ -1320,7 +1289,7 @@ class RoomMemberHandler(metaclass=abc.ABCMeta):
# We need to rate limit *before* we send out any 3PID invites, so we
# can't just rely on the standard ratelimiting of events.
await self._third_party_invite_limiter.ratelimit(requester)
await self.request_ratelimiter.ratelimit(requester)
can_invite = await self.third_party_event_rules.check_threepid_can_be_invited(
medium, address, room_id

View File

@@ -43,8 +43,6 @@ class SearchHandler:
self.state_store = self.storage.state
self.auth = hs.get_auth()
self._msc3666_enabled = hs.config.experimental.msc3666_enabled
async def get_old_rooms_from_upgraded_room(self, room_id: str) -> Iterable[str]:
"""Retrieves room IDs of old rooms in the history of an upgraded room.
@@ -240,6 +238,8 @@ class SearchHandler:
results = search_result["results"]
results_map = {r["event"].event_id: r for r in results}
rank_map.update({r["event"].event_id: r["rank"] for r in results})
filtered_events = await search_filter.filter([r["event"] for r in results])
@@ -361,37 +361,36 @@ class SearchHandler:
logger.info(
"Context for search returned %d and %d events",
len(res.events_before),
len(res.events_after),
len(res["events_before"]),
len(res["events_after"]),
)
events_before = await filter_events_for_client(
self.storage, user.to_string(), res.events_before
res["events_before"] = await filter_events_for_client(
self.storage, user.to_string(), res["events_before"]
)
events_after = await filter_events_for_client(
self.storage, user.to_string(), res.events_after
res["events_after"] = await filter_events_for_client(
self.storage, user.to_string(), res["events_after"]
)
context = {
"events_before": events_before,
"events_after": events_after,
"start": await now_token.copy_and_replace(
"room_key", res.start
).to_string(self.store),
"end": await now_token.copy_and_replace(
"room_key", res.end
).to_string(self.store),
}
res["start"] = await now_token.copy_and_replace(
"room_key", res["start"]
).to_string(self.store)
res["end"] = await now_token.copy_and_replace(
"room_key", res["end"]
).to_string(self.store)
if include_profile:
senders = {
ev.sender
for ev in itertools.chain(events_before, [event], events_after)
for ev in itertools.chain(
res["events_before"], [event], res["events_after"]
)
}
if events_after:
last_event_id = events_after[-1].event_id
if res["events_after"]:
last_event_id = res["events_after"][-1].event_id
else:
last_event_id = event.event_id
@@ -403,7 +402,7 @@ class SearchHandler:
last_event_id, state_filter
)
context["profile_info"] = {
res["profile_info"] = {
s.state_key: {
"displayname": s.content.get("displayname", None),
"avatar_url": s.content.get("avatar_url", None),
@@ -412,7 +411,7 @@ class SearchHandler:
if s.type == EventTypes.Member and s.state_key in senders
}
contexts[event.event_id] = context
contexts[event.event_id] = res
else:
contexts = {}
@@ -420,29 +419,12 @@ class SearchHandler:
time_now = self.clock.time_msec()
aggregations = None
if self._msc3666_enabled:
aggregations = await self.store.get_bundled_aggregations(
# Generate an iterable of EventBase for all the events that will be
# returned, including contextual events.
itertools.chain(
# The events_before and events_after for each context.
itertools.chain.from_iterable(
itertools.chain(context["events_before"], context["events_after"]) # type: ignore[arg-type]
for context in contexts.values()
),
# The returned events.
allowed_events,
),
user.to_string(),
)
for context in contexts.values():
context["events_before"] = self._event_serializer.serialize_events(
context["events_before"], time_now, bundle_aggregations=aggregations # type: ignore[arg-type]
context["events_before"], time_now
)
context["events_after"] = self._event_serializer.serialize_events(
context["events_after"], time_now, bundle_aggregations=aggregations # type: ignore[arg-type]
context["events_after"], time_now
)
state_results = {}
@@ -459,9 +441,7 @@ class SearchHandler:
results.append(
{
"rank": rank_map[e.event_id],
"result": self._event_serializer.serialize_event(
e, time_now, bundle_aggregations=aggregations
),
"result": self._event_serializer.serialize_event(e, time_now),
"context": contexts.get(e.event_id, {}),
}
)

View File

@@ -106,7 +106,7 @@ async def _sendmail(
factory = build_sender_factory(hostname=smtphost if enable_tls else None)
reactor.connectTCP(
smtphost,
smtphost, # type: ignore[arg-type]
smtpport,
factory,
timeout=30,

View File

@@ -37,7 +37,6 @@ from synapse.logging.context import current_context
from synapse.logging.opentracing import SynapseTags, log_kv, set_tag, start_active_span
from synapse.push.clientformat import format_push_rules_for_user
from synapse.storage.databases.main.event_push_actions import NotifCounts
from synapse.storage.databases.main.relations import BundledAggregations
from synapse.storage.roommember import MemberSummary
from synapse.storage.state import StateFilter
from synapse.types import (
@@ -101,7 +100,7 @@ class TimelineBatch:
limited: bool
# A mapping of event ID to the bundled aggregations for the above events.
# This is only calculated if limited is true.
bundled_aggregations: Optional[Dict[str, BundledAggregations]] = None
bundled_aggregations: Optional[Dict[str, Dict[str, Any]]] = None
def __bool__(self) -> bool:
"""Make the result appear empty if there are no updates. This is used
@@ -500,6 +499,22 @@ class SyncHandler:
event_copy = {k: v for (k, v) in event.items() if k != "room_id"}
ephemeral_by_room.setdefault(room_id, []).append(event_copy)
edus_source = self.event_sources.sources.edus
edus, edu_key = await edus_source.get_new_events(
user=sync_config.user,
from_key=receipt_key,
limit=sync_config.filter_collection.ephemeral_limit(),
room_ids=room_ids,
is_guest=sync_config.is_guest,
)
now_token = now_token.copy_and_replace("edu_key", edu_key)
for event in edus:
room_id = event["room_id"]
# exclude room id, as above
event_copy = {k: v for (k, v) in event.items() if k != "room_id"}
ephemeral_by_room.setdefault(room_id, []).append(event_copy)
return now_token, ephemeral_by_room
async def _load_filtered_recents(
@@ -1348,8 +1363,8 @@ class SyncHandler:
if sync_result_builder.since_token is not None:
since_stream_id = int(sync_result_builder.since_token.to_device_key)
if device_id is not None and since_stream_id != int(now_token.to_device_key):
messages, stream_id = await self.store.get_messages_for_device(
if since_stream_id != int(now_token.to_device_key):
messages, stream_id = await self.store.get_new_messages_for_device(
user_id, device_id, since_stream_id, now_token.to_device_key
)
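The to-device branch above only hits the store when the device's stream position has actually advanced. A hedged sketch of that gate, assuming integer stream keys and a get_messages_for_device coroutine as named in the newer line of the hunk:

    async def fetch_new_to_device(store, user_id, device_id, since_key, now_key):
        since_stream_id = int(since_key)
        if device_id is not None and since_stream_id != int(now_key):
            # Only query when the stream has moved for this device.
            return await store.get_messages_for_device(
                user_id, device_id, since_stream_id, int(now_key)
            )
        # Nothing new since the last sync.
        return [], since_stream_id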

View File

@@ -446,7 +446,7 @@ class TypingWriterHandler(FollowerTypingHandler):
class TypingNotificationEventSource(EventSource[int, JsonDict]):
def __init__(self, hs: "HomeServer"):
self._main_store = hs.get_datastore()
self.hs = hs
self.clock = hs.get_clock()
# We can't call get_typing_handler here because there's a cycle:
#
@@ -487,7 +487,7 @@ class TypingNotificationEventSource(EventSource[int, JsonDict]):
continue
if not await service.matches_user_in_member_list(
room_id, self._main_store
room_id, handler.store
):
continue

View File

@@ -38,4 +38,4 @@ class UIAuthSessionDataConstants:
# used during registration to store the registration token used (if required) so that:
# - we can prevent a token being used twice by one session
# - we can 'use up' the token after registration has successfully completed
REGISTRATION_TOKEN = "m.login.registration_token"
REGISTRATION_TOKEN = "org.matrix.msc3231.login.registration_token"

View File

@@ -20,7 +20,6 @@ from typing import (
TYPE_CHECKING,
Any,
BinaryIO,
Callable,
Dict,
Iterable,
List,
@@ -322,20 +321,21 @@ class SimpleHttpClient:
self._ip_whitelist = ip_whitelist
self._ip_blacklist = ip_blacklist
self._extra_treq_args = treq_args or {}
self.clock = hs.get_clock()
user_agent = hs.version_string
self.user_agent = hs.version_string
self.clock = hs.get_clock()
if hs.config.server.user_agent_suffix:
user_agent = "%s %s" % (
user_agent,
self.user_agent = "%s %s" % (
self.user_agent,
hs.config.server.user_agent_suffix,
)
self.user_agent = user_agent.encode("ascii")
# We use this for our body producers to ensure that they use the correct
# reactor.
self._cooperator = Cooperator(scheduler=_make_scheduler(hs.get_reactor()))
self.user_agent = self.user_agent.encode("ascii")
if self._ip_blacklist:
# If we have an IP blacklist, we need to use a DNS resolver which
# filters out blacklisted IP addresses, to prevent DNS rebinding.
@@ -693,18 +693,12 @@ class SimpleHttpClient:
output_stream: BinaryIO,
max_size: Optional[int] = None,
headers: Optional[RawHeaders] = None,
is_allowed_content_type: Optional[Callable[[str], bool]] = None,
) -> Tuple[int, Dict[bytes, List[bytes]], str, int]:
"""GETs a file from a given URL
Args:
url: The URL to GET
output_stream: File to write the response body to.
headers: A map from header name to a list of values for that header
is_allowed_content_type: A predicate to determine whether the
content type of the file we're downloading is allowed. If set and
it evaluates to False when called with the content type, the
request will be terminated before completing the download by
raising SynapseError.
Returns:
A tuple of the file length, dict of the response
headers, absolute URI of the response and HTTP response code.
@@ -732,17 +726,6 @@ class SimpleHttpClient:
HTTPStatus.BAD_GATEWAY, "Got error %d" % (response.code,), Codes.UNKNOWN
)
if is_allowed_content_type and b"Content-Type" in resp_headers:
content_type = resp_headers[b"Content-Type"][0].decode("ascii")
if not is_allowed_content_type(content_type):
raise SynapseError(
HTTPStatus.BAD_GATEWAY,
(
"Requested file's content type not allowed for this operation: %s"
% content_type
),
)
# TODO: if our Content-Type is HTML or something, just read the first
# N bytes into RAM rather than saving it all to disk only to read it
# straight back in again
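The is_allowed_content_type hook documented above takes a predicate over the Content-Type header and aborts the download when it returns False. An illustrative predicate under that contract (the function name is an assumption):

    def image_only(content_type: str) -> bool:
        # Strip any "; charset=..." parameters before matching.
        return content_type.lower().split(";")[0].strip().startswith("image/")

    # Passed as the is_allowed_content_type argument of the download helper above.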

View File

@@ -334,11 +334,12 @@ class MatrixFederationHttpClient:
user_agent = hs.version_string
if hs.config.server.user_agent_suffix:
user_agent = "%s %s" % (user_agent, hs.config.server.user_agent_suffix)
user_agent = user_agent.encode("ascii")
federation_agent = MatrixFederationAgent(
self.reactor,
tls_client_options_factory,
user_agent.encode("ascii"),
user_agent,
hs.config.server.federation_ip_range_whitelist,
hs.config.server.federation_ip_range_blacklist,
)
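Both versions of the hunk build the user agent the same way and differ only in where the ASCII encoding happens. A minimal sketch of the shared pattern, with placeholder values:

    version_string = "Synapse/x.y.z"  # hs.version_string in the hunk
    suffix = "my-deployment"          # user_agent_suffix; may be unset
    user_agent = "%s %s" % (version_string, suffix) if suffix else version_string
    user_agent_bytes = user_agent.encode("ascii")  # the agent wants bytes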

View File

@@ -407,10 +407,7 @@ class SynapseRequest(Request):
user_agent = get_request_user_agent(self, "-")
# int(self.code) looks redundant, because self.code is already an int.
# But self.code might be an HTTPStatus (which inherits from int)---which has
# a different string representation. So ensure we really have an integer.
code = str(int(self.code))
code = str(self.code)
if not self.finished:
# we didn't send the full response before we gave up (presumably because
# the connection dropped)
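The int(self.code) comment is easy to verify in isolation: HTTPStatus inherits from int but, on Python 3.10 and earlier, stringifies to its enum name rather than the numeric code.

    from http import HTTPStatus

    code = HTTPStatus.NOT_FOUND
    str(code)       # "HTTPStatus.NOT_FOUND" on Python <= 3.10, "404" on 3.11+
    str(int(code))  # "404" on every version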

View File

@@ -443,14 +443,10 @@ def start_active_span(
start_time=None,
ignore_active_span=False,
finish_on_close=True,
*,
tracer=None,
):
"""Starts an active opentracing span.
Records the start time for the span, and sets it as the "active span" in the
scope manager.
"""Starts an active opentracing span. Note, the scope doesn't become active
until it has been entered, however, the span starts from the time this
message is called.
Args:
See opentracing.tracer
Returns:
@@ -460,11 +456,7 @@ def start_active_span(
if opentracing is None:
return noop_context_manager() # type: ignore[unreachable]
if tracer is None:
# use the global tracer by default
tracer = opentracing.tracer
return tracer.start_active_span(
return opentracing.tracer.start_active_span(
operation_name,
child_of=child_of,
references=references,
@@ -476,42 +468,21 @@ def start_active_span(
def start_active_span_follows_from(
operation_name: str,
contexts: Collection,
child_of=None,
start_time: Optional[float] = None,
*,
inherit_force_tracing=False,
tracer=None,
operation_name: str, contexts: Collection, inherit_force_tracing=False
):
"""Starts an active opentracing span, with additional references to previous spans
Args:
operation_name: name of the operation represented by the new span
contexts: the previous spans to inherit from
child_of: optionally override the parent span. If unset, the currently active
span will be the parent. (If there is no currently active span, the first
span in `contexts` will be the parent.)
start_time: optional override for the start time of the created span. Seconds
since the epoch.
inherit_force_tracing: if set, and any of the previous contexts have had tracing
forced, the new span will also have tracing forced.
tracer: override the opentracing tracer. By default the global tracer is used.
"""
if opentracing is None:
return noop_context_manager() # type: ignore[unreachable]
references = [opentracing.follows_from(context) for context in contexts]
scope = start_active_span(
operation_name,
child_of=child_of,
references=references,
start_time=start_time,
tracer=tracer,
)
scope = start_active_span(operation_name, references=references)
if inherit_force_tracing and any(
is_context_forced_tracing(ctx) for ctx in contexts
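The keyword-only tracer argument being removed above defaults to the global opentracing tracer. A minimal sketch of that fallback, assuming the opentracing package:

    import opentracing

    def start_span(operation_name, *, tracer=None):
        if tracer is None:
            # Use the global tracer by default.
            tracer = opentracing.tracer
        return tracer.start_active_span(operation_name)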

View File

@@ -28,9 +28,8 @@ class LogContextScopeManager(ScopeManager):
The LogContextScopeManager tracks the active scope in opentracing
by using the log contexts which are native to synapse. This is so
that the basic opentracing api can be used across Twisted deferreds.
It would be nice just to use opentracing's ContextVarsScopeManager,
but currently that doesn't work due to https://twistedmatrix.com/trac/ticket/10301.
(I would love to break logcontexts and this into an OS package, but
let's wait for Twisted's contexts to be released.)
"""
def __init__(self, config):
@@ -66,45 +65,29 @@ class LogContextScopeManager(ScopeManager):
Scope.close() on the returned instance.
"""
enter_logcontext = False
ctx = current_context()
if not ctx:
# We don't want this scope to affect anything outside a logcontext.
logger.error("Tried to activate scope outside of loggingcontext")
return Scope(None, span) # type: ignore[arg-type]
if ctx.scope is not None:
# start a new logging context as a child of the existing one.
# Doing so -- rather than updating the existing logcontext -- means that
# creating several concurrent spans under the same logcontext works
# correctly.
elif ctx.scope is not None:
# We want the logging scope to look exactly the same so we give it
# a blank suffix
ctx = nested_logging_context("")
enter_logcontext = True
else:
# if there is no span currently associated with the current logcontext, we
# just store the scope in it.
#
# This feels a bit dubious, but it does hack around a problem where a
# span outlasts its parent logcontext (which would otherwise lead to
# "Re-starting finished log context" errors).
enter_logcontext = False
scope = _LogContextScope(self, span, ctx, enter_logcontext, finish_on_close)
ctx.scope = scope
if enter_logcontext:
ctx.__enter__()
return scope
class _LogContextScope(Scope):
"""
A custom opentracing scope, associated with a LogContext
* filters out _DefGen_Return exceptions which arise from calling
`defer.returnValue` in Twisted code
* When the scope is closed, the logcontext's active scope is reset to None.
and - if enter_logcontext was set - the logcontext is finished too.
A custom opentracing scope. The only significant difference is that it will
close the log context it's related to if the logcontext was created specifically
for this scope.
"""
def __init__(self, manager, span, logcontext, enter_logcontext, finish_on_close):
@@ -118,7 +101,8 @@ class _LogContextScope(Scope):
logcontext (LogContext):
the logcontext to which this scope is attached.
enter_logcontext (Boolean):
if True the logcontext will be exited when the scope is finished
if True the logcontext will be entered and exited when the scope
is entered and exited respectively
finish_on_close (Boolean):
if True finish the span when the scope is closed
"""
@@ -127,28 +111,26 @@ class _LogContextScope(Scope):
self._finish_on_close = finish_on_close
self._enter_logcontext = enter_logcontext
def __exit__(self, exc_type, value, traceback):
if exc_type == twisted.internet.defer._DefGen_Return:
# filter out defer.returnValue() calls
exc_type = value = traceback = None
super().__exit__(exc_type, value, traceback)
def __enter__(self):
if self._enter_logcontext:
self.logcontext.__enter__()
def __str__(self):
return f"Scope<{self.span}>"
return self
def __exit__(self, type, value, traceback):
if type == twisted.internet.defer._DefGen_Return:
super().__exit__(None, None, None)
else:
super().__exit__(type, value, traceback)
if self._enter_logcontext:
self.logcontext.__exit__(type, value, traceback)
else: # the logcontext existed before the creation of the scope
self.logcontext.scope = None
def close(self):
active_scope = self.manager.active
if active_scope is not self:
logger.error(
"Closing scope %s which is not the currently-active one %s",
self,
active_scope,
)
if self.manager.active is not self:
logger.error("Tried to close a non-active scope!")
return
if self._finish_on_close:
self.span.finish()
self.logcontext.scope = None
if self._enter_logcontext:
self.logcontext.__exit__(None, None, None)
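The scope lifecycle in this hunk reduces to: mirror the logcontext's enter/exit only when the scope created that logcontext. A simplified, self-contained sketch (not Synapse's actual classes):

    class SimpleScope:
        def __init__(self, logcontext, enter_logcontext: bool):
            self.logcontext = logcontext
            self._enter_logcontext = enter_logcontext

        def __enter__(self):
            if self._enter_logcontext:
                self.logcontext.__enter__()
            return self

        def __exit__(self, exc_type, value, traceback):
            if self._enter_logcontext:
                # We created this logcontext, so we finish it too.
                self.logcontext.__exit__(exc_type, value, traceback)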

View File

@@ -30,11 +30,9 @@ from typing import (
Type,
TypeVar,
Union,
cast,
)
import attr
from matrix_common.versionstring import get_distribution_version_string
from prometheus_client import CollectorRegistry, Counter, Gauge, Histogram, Metric
from prometheus_client.core import (
REGISTRY,
@@ -44,14 +42,14 @@ from prometheus_client.core import (
from twisted.python.threadpool import ThreadPool
# This module is imported for its side effects; flake8 needn't warn that it's unused.
import synapse.metrics._reactor_metrics # noqa: F401
import synapse.metrics._reactor_metrics
from synapse.metrics._exposition import (
MetricsResource,
generate_latest,
start_http_server,
)
from synapse.metrics._gc import MIN_TIME_BETWEEN_GCS, install_gc_manager
from synapse.util.versionstring import get_version_string
logger = logging.getLogger(__name__)
@@ -62,7 +60,7 @@ all_gauges: "Dict[str, Union[LaterGauge, InFlightGauge]]" = {}
HAVE_PROC_SELF_STAT = os.path.exists("/proc/self/stat")
class _RegistryProxy:
class RegistryProxy:
@staticmethod
def collect() -> Iterable[Metric]:
for metric in REGISTRY.collect():
@@ -70,13 +68,6 @@ class _RegistryProxy:
yield metric
# A little bit nasty, but collect() above is static so a Protocol doesn't work.
# _RegistryProxy matches the signature of a CollectorRegistry instance enough
# for it to be usable in the contexts in which we use it.
# TODO Do something nicer about this.
RegistryProxy = cast(CollectorRegistry, _RegistryProxy)
@attr.s(slots=True, hash=True, auto_attribs=True)
class LaterGauge:
@@ -418,7 +409,7 @@ build_info = Gauge(
)
build_info.labels(
" ".join([platform.python_implementation(), platform.python_version()]),
get_distribution_version_string("matrix-synapse"),
get_version_string(synapse),
" ".join([platform.system(), platform.release()]),
).set(1)
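The cast in the newer code is a typing shim: a static collect() method is enough to act like a CollectorRegistry at runtime, and typing.cast tells the checker to treat it as one. A condensed sketch of the pattern:

    from typing import cast
    from prometheus_client import CollectorRegistry

    class _Proxy:
        @staticmethod
        def collect():
            return iter(())  # filtered metrics would be yielded here

    # Runtime no-op; only the static type changes.
    RegistryProxyLike = cast(CollectorRegistry, _Proxy)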

View File

@@ -48,6 +48,7 @@ from synapse.events.spamcheck import (
CHECK_USERNAME_FOR_SPAM_CALLBACK,
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK,
USER_MAY_CREATE_ROOM_CALLBACK,
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK,
USER_MAY_INVITE_CALLBACK,
USER_MAY_JOIN_ROOM_CALLBACK,
USER_MAY_PUBLISH_ROOM_CALLBACK,
@@ -70,8 +71,6 @@ from synapse.handlers.account_validity import (
from synapse.handlers.auth import (
CHECK_3PID_AUTH_CALLBACK,
CHECK_AUTH_CALLBACK,
GET_USERNAME_FOR_REGISTRATION_CALLBACK,
IS_3PID_ALLOWED_CALLBACK,
ON_LOGGED_OUT_CALLBACK,
AuthHandler,
)
@@ -178,7 +177,6 @@ class ModuleApi:
self._presence_stream = hs.get_event_sources().sources.presence
self._state = hs.get_state_handler()
self._clock: Clock = hs.get_clock()
self._registration_handler = hs.get_registration_handler()
self._send_email_handler = hs.get_send_email_handler()
self.custom_template_dir = hs.config.server.custom_template_directory
@@ -211,12 +209,14 @@ class ModuleApi:
def register_spam_checker_callbacks(
self,
*,
check_event_for_spam: Optional[CHECK_EVENT_FOR_SPAM_CALLBACK] = None,
user_may_join_room: Optional[USER_MAY_JOIN_ROOM_CALLBACK] = None,
user_may_invite: Optional[USER_MAY_INVITE_CALLBACK] = None,
user_may_send_3pid_invite: Optional[USER_MAY_SEND_3PID_INVITE_CALLBACK] = None,
user_may_create_room: Optional[USER_MAY_CREATE_ROOM_CALLBACK] = None,
user_may_create_room_with_invites: Optional[
USER_MAY_CREATE_ROOM_WITH_INVITES_CALLBACK
] = None,
user_may_create_room_alias: Optional[
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK
] = None,
@@ -237,6 +237,7 @@ class ModuleApi:
user_may_invite=user_may_invite,
user_may_send_3pid_invite=user_may_send_3pid_invite,
user_may_create_room=user_may_create_room,
user_may_create_room_with_invites=user_may_create_room_with_invites,
user_may_create_room_alias=user_may_create_room_alias,
user_may_publish_room=user_may_publish_room,
check_username_for_spam=check_username_for_spam,
@@ -246,7 +247,6 @@ class ModuleApi:
def register_account_validity_callbacks(
self,
*,
is_user_expired: Optional[IS_USER_EXPIRED_CALLBACK] = None,
on_user_registration: Optional[ON_USER_REGISTRATION_CALLBACK] = None,
on_legacy_send_mail: Optional[ON_LEGACY_SEND_MAIL_CALLBACK] = None,
@@ -267,7 +267,6 @@ class ModuleApi:
def register_third_party_rules_callbacks(
self,
*,
check_event_allowed: Optional[CHECK_EVENT_ALLOWED_CALLBACK] = None,
on_create_room: Optional[ON_CREATE_ROOM_CALLBACK] = None,
check_threepid_can_be_invited: Optional[
@@ -292,7 +291,6 @@ class ModuleApi:
def register_presence_router_callbacks(
self,
*,
get_users_for_states: Optional[GET_USERS_FOR_STATES_CALLBACK] = None,
get_interested_users: Optional[GET_INTERESTED_USERS_CALLBACK] = None,
) -> None:
@@ -307,16 +305,11 @@ class ModuleApi:
def register_password_auth_provider_callbacks(
self,
*,
check_3pid_auth: Optional[CHECK_3PID_AUTH_CALLBACK] = None,
on_logged_out: Optional[ON_LOGGED_OUT_CALLBACK] = None,
auth_checkers: Optional[
Dict[Tuple[str, Tuple[str, ...]], CHECK_AUTH_CALLBACK]
] = None,
is_3pid_allowed: Optional[IS_3PID_ALLOWED_CALLBACK] = None,
get_username_for_registration: Optional[
GET_USERNAME_FOR_REGISTRATION_CALLBACK
] = None,
) -> None:
"""Registers callbacks for password auth provider capabilities.
@@ -325,14 +318,11 @@ class ModuleApi:
return self._password_auth_provider.register_password_auth_provider_callbacks(
check_3pid_auth=check_3pid_auth,
on_logged_out=on_logged_out,
is_3pid_allowed=is_3pid_allowed,
auth_checkers=auth_checkers,
get_username_for_registration=get_username_for_registration,
)
def register_background_update_controller_callbacks(
self,
*,
on_update: ON_UPDATE_CALLBACK,
default_batch_size: Optional[DEFAULT_BATCH_SIZE_CALLBACK] = None,
min_batch_size: Optional[MIN_BATCH_SIZE_CALLBACK] = None,
@@ -405,32 +395,6 @@ class ModuleApi:
"""
return self._hs.config.email.email_app_name
@property
def server_name(self) -> str:
"""The server name for the local homeserver.
Added in Synapse v1.53.0.
"""
return self._server_name
@property
def worker_name(self) -> Optional[str]:
"""The name of the worker this specific instance is running as per the
"worker_name" configuration setting, or None if it's the main process.
Added in Synapse v1.53.0.
"""
return self._hs.config.worker.worker_name
@property
def worker_app(self) -> Optional[str]:
"""The name of the worker app this specific instance is running as per the
"worker_app" configuration setting, or None if it's the main process.
Added in Synapse v1.53.0.
"""
return self._hs.config.worker.worker_app
async def get_userinfo_by_id(self, user_id: str) -> Optional[UserInfo]:
"""Get user info by user_id
@@ -1238,22 +1202,6 @@ class ModuleApi:
"""
return await defer_to_thread(self._hs.get_reactor(), f, *args, **kwargs)
async def check_username(self, username: str) -> None:
"""Checks if the provided username uses the grammar defined in the Matrix
specification, and is already being used by an existing user.
Added in Synapse v1.52.0.
Args:
username: The username to check. This is the local part of the user's full
Matrix user ID, i.e. it's "alice" if the full user ID is "@alice:foo.com".
Raises:
SynapseError with the errcode "M_USER_IN_USE" if the username is already in
use.
"""
await self._registration_handler.check_username(username)
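A hedged sketch of how a module might consume the surface being removed above, i.e. the worker properties and check_username, per their docstrings (the module class itself is hypothetical):

    class ExampleModule:
        def __init__(self, config, api):
            self.api = api

        def where_am_i(self):
            # worker_name is None on the main process, per the docstring.
            return self.api.server_name, self.api.worker_name

        async def ensure_username_free(self, localpart: str) -> None:
            # Raises SynapseError (M_USER_IN_USE) if the name is taken.
            await self.api.check_username(localpart)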
class PublicRoomListManager:
"""Contains methods for adding to, removing from and querying whether a room

View File

@@ -14,7 +14,6 @@
import logging
from typing import (
TYPE_CHECKING,
Awaitable,
Callable,
Collection,
@@ -33,6 +32,7 @@ from prometheus_client import Counter
from twisted.internet import defer
import synapse.server
from synapse.api.constants import EventTypes, HistoryVisibility, Membership
from synapse.api.errors import AuthError
from synapse.events import EventBase
@@ -53,9 +53,6 @@ from synapse.util.async_helpers import ObservableDeferred, timeout_deferred
from synapse.util.metrics import Measure
from synapse.visibility import filter_events_for_client
if TYPE_CHECKING:
from synapse.server import HomeServer
logger = logging.getLogger(__name__)
notified_events_counter = Counter("synapse_notifier_notified_events", "")
@@ -85,7 +82,7 @@ class _NotificationListener:
__slots__ = ["deferred"]
def __init__(self, deferred: "defer.Deferred"):
def __init__(self, deferred):
self.deferred = deferred
@@ -127,7 +124,7 @@ class _NotifierUserStream:
stream_key: str,
stream_id: Union[int, RoomStreamToken],
time_now_ms: int,
) -> None:
):
"""Notify any listeners for this user of a new event from an
event source.
Args:
@@ -155,7 +152,7 @@ class _NotifierUserStream:
self.notify_deferred = ObservableDeferred(defer.Deferred())
noify_deferred.callback(self.current_token)
def remove(self, notifier: "Notifier") -> None:
def remove(self, notifier: "Notifier"):
"""Remove this listener from all the indexes in the Notifier
it knows about.
"""
@@ -191,7 +188,7 @@ class EventStreamResult:
start_token: StreamToken
end_token: StreamToken
def __bool__(self) -> bool:
def __bool__(self):
return bool(self.events)
@@ -215,7 +212,7 @@ class Notifier:
UNUSED_STREAM_EXPIRY_MS = 10 * 60 * 1000
def __init__(self, hs: "HomeServer"):
def __init__(self, hs: "synapse.server.HomeServer"):
self.user_to_user_stream: Dict[str, _NotifierUserStream] = {}
self.room_to_user_streams: Dict[str, Set[_NotifierUserStream]] = {}
@@ -251,7 +248,7 @@ class Notifier:
# This is not a very cheap test to perform, but it's only executed
# when rendering the metrics page, which is likely once per minute at
# most when scraping it.
def count_listeners() -> int:
def count_listeners():
all_user_streams: Set[_NotifierUserStream] = set()
for streams in list(self.room_to_user_streams.values()):
@@ -273,7 +270,7 @@ class Notifier:
"synapse_notifier_users", "", [], lambda: len(self.user_to_user_stream)
)
def add_replication_callback(self, cb: Callable[[], None]) -> None:
def add_replication_callback(self, cb: Callable[[], None]):
"""Add a callback that will be called when some new data is available.
Callback is not given any arguments. It should *not* return a Deferred - if
it needs to do any asynchronous work, a background thread should be started and
@@ -287,7 +284,7 @@ class Notifier:
event_pos: PersistedEventPosition,
max_room_stream_token: RoomStreamToken,
extra_users: Optional[Collection[UserID]] = None,
) -> None:
):
"""Unwraps event and calls `on_new_room_event_args`."""
await self.on_new_room_event_args(
event_pos=event_pos,
@@ -310,7 +307,7 @@ class Notifier:
event_pos: PersistedEventPosition,
max_room_stream_token: RoomStreamToken,
extra_users: Optional[Collection[UserID]] = None,
) -> None:
):
"""Used by handlers to inform the notifier something has happened
in the room, room event wise.
@@ -341,9 +338,7 @@ class Notifier:
self.notify_replication()
def _notify_pending_new_room_events(
self, max_room_stream_token: RoomStreamToken
) -> None:
def _notify_pending_new_room_events(self, max_room_stream_token: RoomStreamToken):
"""Notify for the room events that were queued waiting for a previous
event to be persisted.
Args:
@@ -379,7 +374,7 @@ class Notifier:
)
self._on_updated_room_token(max_room_stream_token)
def _on_updated_room_token(self, max_room_stream_token: RoomStreamToken) -> None:
def _on_updated_room_token(self, max_room_stream_token: RoomStreamToken):
"""Poke services that might care that the room position has been
updated.
"""
@@ -391,13 +386,13 @@ class Notifier:
if self.federation_sender:
self.federation_sender.notify_new_events(max_room_stream_token)
def _notify_app_services(self, max_room_stream_token: RoomStreamToken) -> None:
def _notify_app_services(self, max_room_stream_token: RoomStreamToken):
try:
self.appservice_handler.notify_interested_services(max_room_stream_token)
except Exception:
logger.exception("Error notifying application services of event")
def _notify_pusher_pool(self, max_room_stream_token: RoomStreamToken) -> None:
def _notify_pusher_pool(self, max_room_stream_token: RoomStreamToken):
try:
self._pusher_pool.on_new_notifications(max_room_stream_token)
except Exception:
@@ -466,9 +461,7 @@ class Notifier:
users,
)
except Exception:
logger.exception(
"Error notifying application services of ephemeral events"
)
logger.exception("Error notifying application services of event")
def on_new_replication_data(self) -> None:
"""Used to inform replication listeners that something has happened
@@ -480,8 +473,8 @@ class Notifier:
user_id: str,
timeout: int,
callback: Callable[[StreamToken, StreamToken], Awaitable[T]],
room_ids: Optional[Collection[str]] = None,
from_token: StreamToken = StreamToken.START,
room_ids=None,
from_token=StreamToken.START,
) -> T:
"""Wait until the callback returns a non empty response or the
timeout fires.
@@ -705,14 +698,14 @@ class Notifier:
for expired_stream in expired_streams:
expired_stream.remove(self)
def _register_with_keys(self, user_stream: _NotifierUserStream) -> None:
def _register_with_keys(self, user_stream: _NotifierUserStream):
self.user_to_user_stream[user_stream.user_id] = user_stream
for room in user_stream.rooms:
s = self.room_to_user_streams.setdefault(room, set())
s.add(user_stream)
def _user_joined_room(self, user_id: str, room_id: str) -> None:
def _user_joined_room(self, user_id: str, room_id: str):
new_user_stream = self.user_to_user_stream.get(user_id)
if new_user_stream is not None:
room_streams = self.room_to_user_streams.setdefault(room_id, set())
@@ -724,7 +717,7 @@ class Notifier:
for cb in self.replication_callbacks:
cb()
def notify_remote_server_up(self, server: str) -> None:
def notify_remote_server_up(self, server: str):
"""Notify any replication that a remote server has come back up"""
# We call federation_sender directly rather than registering as a
# callback as a) we already have a reference to it and b) it introduces
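The count_listeners metric callback earlier in this file's hunks boils down to a set union over the per-room user streams, computed once per scrape. A standalone version:

    def count_listeners(room_to_user_streams) -> int:
        all_user_streams = set()
        for streams in list(room_to_user_streams.values()):
            all_user_streams |= streams
        return len(all_user_streams)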

View File

@@ -20,11 +20,15 @@ from typing import Any, Dict, List
from synapse.push.rulekinds import PRIORITY_CLASS_INVERSE_MAP, PRIORITY_CLASS_MAP
def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
def list_with_base_rules(
rawrules: List[Dict[str, Any]], use_new_defaults: bool = False
) -> List[Dict[str, Any]]:
"""Combine the list of rules set by the user with the default push rules
Args:
rawrules: The rules the user has modified or set.
use_new_defaults: Whether to use the new experimental default rules when
appending or prepending default rules.
Returns:
A new list with the rules set by the user combined with the defaults.
@@ -44,7 +48,9 @@ def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
@@ -55,6 +61,7 @@ def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]
make_base_append_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
current_prio_class -= 1
@@ -63,6 +70,7 @@ def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
@@ -71,14 +79,18 @@ def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]
while current_prio_class > 0:
ruleslist.extend(
make_base_append_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
current_prio_class -= 1
if current_prio_class > 0:
ruleslist.extend(
make_base_prepend_rules(
PRIORITY_CLASS_INVERSE_MAP[current_prio_class], modified_base_rules
PRIORITY_CLASS_INVERSE_MAP[current_prio_class],
modified_base_rules,
use_new_defaults,
)
)
@@ -86,14 +98,24 @@ def list_with_base_rules(rawrules: List[Dict[str, Any]]) -> List[Dict[str, Any]]
def make_base_append_rules(
kind: str, modified_base_rules: Dict[str, Dict[str, Any]]
kind: str,
modified_base_rules: Dict[str, Dict[str, Any]],
use_new_defaults: bool = False,
) -> List[Dict[str, Any]]:
rules = []
if kind == "override":
rules = BASE_APPEND_OVERRIDE_RULES
rules = (
NEW_APPEND_OVERRIDE_RULES
if use_new_defaults
else BASE_APPEND_OVERRIDE_RULES
)
elif kind == "underride":
rules = BASE_APPEND_UNDERRIDE_RULES
rules = (
NEW_APPEND_UNDERRIDE_RULES
if use_new_defaults
else BASE_APPEND_UNDERRIDE_RULES
)
elif kind == "content":
rules = BASE_APPEND_CONTENT_RULES
@@ -112,6 +134,7 @@ def make_base_append_rules(
def make_base_prepend_rules(
kind: str,
modified_base_rules: Dict[str, Dict[str, Any]],
use_new_defaults: bool = False,
) -> List[Dict[str, Any]]:
rules = []
@@ -278,6 +301,135 @@ BASE_APPEND_OVERRIDE_RULES = [
]
NEW_APPEND_OVERRIDE_RULES = [
{
"rule_id": "global/override/.m.rule.encrypted",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.encrypted",
"_id": "_encrypted",
}
],
"actions": ["notify"],
},
{
"rule_id": "global/override/.m.rule.suppress_notices",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.message",
"_id": "_suppress_notices_type",
},
{
"kind": "event_match",
"key": "content.msgtype",
"pattern": "m.notice",
"_id": "_suppress_notices",
},
],
"actions": [],
},
{
"rule_id": "global/underride/.m.rule.suppress_edits",
"conditions": [
{
"kind": "event_match",
"key": "m.relates_to.m.rel_type",
"pattern": "m.replace",
"_id": "_suppress_edits",
}
],
"actions": [],
},
{
"rule_id": "global/override/.m.rule.invite_for_me",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.member",
"_id": "_member",
},
{
"kind": "event_match",
"key": "content.membership",
"pattern": "invite",
"_id": "_invite_member",
},
{"kind": "event_match", "key": "state_key", "pattern_type": "user_id"},
],
"actions": ["notify", {"set_tweak": "sound", "value": "default"}],
},
{
"rule_id": "global/override/.m.rule.contains_display_name",
"conditions": [{"kind": "contains_display_name"}],
"actions": [
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight"},
],
},
{
"rule_id": "global/override/.m.rule.tombstone",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.room.tombstone",
"_id": "_tombstone",
},
{
"kind": "event_match",
"key": "state_key",
"pattern": "",
"_id": "_tombstone_statekey",
},
],
"actions": [
"notify",
{"set_tweak": "sound", "value": "default"},
{"set_tweak": "highlight"},
],
},
{
"rule_id": "global/override/.m.rule.roomnotif",
"conditions": [
{
"kind": "event_match",
"key": "content.body",
"pattern": "@room",
"_id": "_roomnotif_content",
},
{
"kind": "sender_notification_permission",
"key": "room",
"_id": "_roomnotif_pl",
},
],
"actions": [
"notify",
{"set_tweak": "highlight"},
{"set_tweak": "sound", "value": "default"},
],
},
{
"rule_id": "global/override/.m.rule.call",
"conditions": [
{
"kind": "event_match",
"key": "type",
"pattern": "m.call.invite",
"_id": "_call",
}
],
"actions": ["notify", {"set_tweak": "sound", "value": "ring"}],
},
]
BASE_APPEND_UNDERRIDE_RULES = [
{
"rule_id": "global/underride/.m.rule.call",
@@ -386,6 +538,36 @@ BASE_APPEND_UNDERRIDE_RULES = [
]
NEW_APPEND_UNDERRIDE_RULES = [
{
"rule_id": "global/underride/.m.rule.room_one_to_one",
"conditions": [
{"kind": "room_member_count", "is": "2", "_id": "member_count"},
{
"kind": "event_match",
"key": "content.body",
"pattern": "*",
"_id": "body",
},
],
"actions": ["notify", {"set_tweak": "sound", "value": "default"}],
},
{
"rule_id": "global/underride/.m.rule.message",
"conditions": [
{
"kind": "event_match",
"key": "content.body",
"pattern": "*",
"_id": "body",
},
],
"actions": ["notify"],
"enabled": False,
},
]
BASE_RULE_IDS = set()
for r in BASE_APPEND_CONTENT_RULES:
@@ -407,3 +589,26 @@ for r in BASE_APPEND_UNDERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
r["default"] = True
BASE_RULE_IDS.add(r["rule_id"])
NEW_RULE_IDS = set()
for r in BASE_APPEND_CONTENT_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["content"]
r["default"] = True
NEW_RULE_IDS.add(r["rule_id"])
for r in BASE_PREPEND_OVERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
r["default"] = True
NEW_RULE_IDS.add(r["rule_id"])
for r in NEW_APPEND_OVERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["override"]
r["default"] = True
NEW_RULE_IDS.add(r["rule_id"])
for r in NEW_APPEND_UNDERRIDE_RULES:
r["priority_class"] = PRIORITY_CLASS_MAP["underride"]
r["default"] = True
NEW_RULE_IDS.add(r["rule_id"])
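The trailing loops all follow one pattern: stamp each default rule with its priority class and record its ID. A compact sketch, with PRIORITY_CLASS_MAP values as defined in synapse.push.rulekinds at the time:

    PRIORITY_CLASS_MAP = {"underride": 1, "sender": 2, "room": 3, "content": 4, "override": 5}

    def register_defaults(rules, kind, rule_ids):
        for r in rules:
            r["priority_class"] = PRIORITY_CLASS_MAP[kind]
            r["default"] = True
            rule_ids.add(r["rule_id"])

    NEW_RULE_IDS = set()
    register_defaults([{"rule_id": "global/override/.m.rule.encrypted"}], "override", NEW_RULE_IDS)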

View File

@@ -455,7 +455,7 @@ class Mailer:
}
the_events = await filter_events_for_client(
self.storage, user_id, results.events_before
self.storage, user_id, results["events_before"]
)
the_events.append(notif_event)

Some files were not shown because too many files have changed in this diff.