
Merge branch 'develop' into squah/leave_space_admin_api

Sean Quah
2021-11-30 12:18:02 +00:00
183 changed files with 4878 additions and 1602 deletions


@@ -5,7 +5,7 @@ name: Build docker images
on:
push:
tags: ["v*"]
branches: [ master, main ]
branches: [ master, main, develop ]
workflow_dispatch:
permissions:
@@ -38,6 +38,9 @@ jobs:
id: set-tag
run: |
case "${GITHUB_REF}" in
refs/heads/develop)
tag=develop
;;
refs/heads/master|refs/heads/main)
tag=latest
;;


@@ -374,7 +374,7 @@ jobs:
working-directory: complement/dockerfiles
# Run Complement
- run: go test -v -tags synapse_blacklist,msc2403,msc2946,msc3083 ./tests/...
- run: go test -v -tags synapse_blacklist,msc2403 ./tests/...
env:
COMPLEMENT_BASE_IMAGE: complement-synapse:latest
working-directory: complement


@@ -1,3 +1,42 @@
Synapse 1.47.1 (2021-11-23)
===========================
This release fixes a security issue in the media store, affecting all prior releases of Synapse. Server administrators are encouraged to update Synapse as soon as possible. We are not aware of this vulnerability being exploited in the wild.
Server administrators who are unable to update Synapse may use the workarounds described in the linked GitHub Security Advisory below.
Security advisory
-----------------
The following issue is fixed in 1.47.1.
- **[GHSA-3hfw-x7gx-437c](https://github.com/matrix-org/synapse/security/advisories/GHSA-3hfw-x7gx-437c) / [CVE-2021-41281](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41281): Path traversal when downloading remote media.**
Synapse instances with the media repository enabled can be tricked into downloading a file from a remote server into an arbitrary directory, potentially outside the media store directory.
The last two directories and file name of the path are chosen randomly by Synapse and cannot be controlled by an attacker, which limits the impact.
Homeservers with the media repository disabled are unaffected. Homeservers configured with a federation whitelist are also unaffected.
Fixed by [91f2bd090](https://github.com/matrix-org/synapse/commit/91f2bd090).
Synapse 1.47.0 (2021-11-17)
===========================
No significant changes since 1.47.0rc3.
Synapse 1.47.0rc3 (2021-11-16)
==============================
Bugfixes
--------
- Fix a bug introduced in 1.47.0rc1 which caused worker processes to not halt startup in the presence of outstanding database migrations. ([\#11346](https://github.com/matrix-org/synapse/issues/11346))
- Fix a bug introduced in 1.47.0rc1 which prevented the 'remove deleted devices from `device_inbox` column' background process from running when updating from a recent Synapse version. ([\#11303](https://github.com/matrix-org/synapse/issues/11303), [\#11353](https://github.com/matrix-org/synapse/issues/11353))
Synapse 1.47.0rc2 (2021-11-10)
==============================

changelog.d/10847.misc

@@ -0,0 +1 @@
Add type annotations to `synapse.metrics`.

changelog.d/11029.misc

@@ -0,0 +1 @@
Improve type annotations in `synapse.module_api`.


@@ -0,0 +1 @@
Experimental support for the thread relation defined in [MSC3440](https://github.com/matrix-org/matrix-doc/pull/3440).

changelog.d/11220.bugfix

@@ -0,0 +1 @@
Fix using MSC2716 batch sending in combination with event persistence workers. Contributed by @tulir at Beeper.

changelog.d/11265.bugfix

@@ -0,0 +1 @@
Prevent [MSC2716](https://github.com/matrix-org/matrix-doc/pull/2716) historical state events from being pushed to an application service via `/transactions`.


@@ -0,0 +1 @@
Add plugin support for controlling database background updates.


@@ -0,0 +1 @@
Add support for the `/_matrix/client/v3` APIs from Matrix v1.1.


@@ -0,0 +1 @@
Add dedicated admin API for blocking a room.

changelog.d/11328.misc

@@ -0,0 +1 @@
Add type hints to `synapse.util`.


@@ -0,0 +1 @@
Support the stable API endpoints for [MSC2946](https://github.com/matrix-org/matrix-doc/pull/2946): the room `/hierarchy` endpoint.

changelog.d/11333.misc

@@ -0,0 +1 @@
Remove deprecated `trust_identity_server_for_password_resets` configuration flag.

changelog.d/11341.misc

@@ -0,0 +1 @@
Add type annotations for some methods and properties in the module API.


@@ -0,0 +1 @@
Add admin API to run background jobs.

changelog.d/11355.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in 1.41.0 where space hierarchy responses would be incorrectly reused if multiple users were to make the same request at the same time.

changelog.d/11359.misc

@@ -0,0 +1 @@
Require all files in synapse/ and tests/ to pass mypy unless specifically excluded.


@@ -0,0 +1 @@
Update the JWT login type to support a custom `sub` claim.

changelog.d/11368.misc

@@ -0,0 +1 @@
Fix running `scripts-dev/complement.sh`, which was broken in v1.47.0rc1.

changelog.d/11369.misc

@@ -0,0 +1 @@
Rename `get_access_token_for_user_id` to `create_access_token_for_user_id` to better reflect what it does.

changelog.d/11370.misc

@@ -0,0 +1 @@
Rename `get_refresh_token_for_user_id` to `create_refresh_token_for_user_id` to better describe what it does.


@@ -0,0 +1 @@
Add support for the `/_matrix/media/v3` APIs from Matrix v1.1.

changelog.d/11376.bugfix

@@ -0,0 +1 @@
Fix a long-standing bug where all requests that read events from the database could get stuck as a result of losing the database connection, for real this time. Also fix a race condition introduced in the previous insufficient fix in 1.47.0.

changelog.d/11377.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in v1.45.0 where the `read_templates` method of the module API would error.

changelog.d/11377.misc

@@ -0,0 +1 @@
Add type hints to configuration classes.

changelog.d/11379.bugfix

@@ -0,0 +1 @@
Fix an issue introduced in v1.47.0 which prevented servers re-joining rooms they had previously left, if their signing keys were replaced.

changelog.d/11380.misc

@@ -0,0 +1 @@
Publish a `develop` image to dockerhub.

changelog.d/11381.doc

@@ -0,0 +1 @@
Fix missing quotes for wildcard domains in `federation_certificate_verification_whitelist`.

changelog.d/11382.misc

@@ -0,0 +1 @@
Keep fallback key marked as used if it's re-uploaded.

changelog.d/11386.misc

@@ -0,0 +1 @@
Use `auto_attribs` on the `attrs` class `RefreshTokenLookupResult`.

changelog.d/11388.misc

@@ -0,0 +1 @@
Rename unstable `access_token_lifetime` configuration option to `refreshable_access_token_lifetime` to make it clear it only concerns refreshable access tokens.

changelog.d/11389.misc

@@ -0,0 +1 @@
Do not run the broken MSC2716 tests when running `scripts-dev/complement.sh`.


@@ -0,0 +1 @@
Store and allow querying of arbitrary event relations.

changelog.d/11392.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in v1.13.0 where creating and publishing a room could cause errors if `room_list_publication_rules` is configured.

changelog.d/11393.misc

@@ -0,0 +1 @@
Remove dead code from supporting ACME.


@@ -0,0 +1 @@
Remove deprecated `trust_identity_server_for_password_resets` configuration flag.

changelog.d/11408.misc

@@ -0,0 +1 @@
Refactor including the bundled relations when serializing an event.

changelog.d/11411.misc

@@ -0,0 +1 @@
Add type hints to storage classes.

changelog.d/11413.bugfix

@@ -0,0 +1 @@
The `/send_join` response now includes the stable `event` field instead of the unstable field from [MSC3083](https://github.com/matrix-org/matrix-doc/pull/3083).

changelog.d/11415.doc

@@ -0,0 +1 @@
Update the media repository documentation.

changelog.d/11421.bugfix

@@ -0,0 +1 @@
Improve performance of various background database schema updates.

changelog.d/11422.bugfix

@@ -0,0 +1 @@
Improve performance of various background database schema updates.


@@ -0,0 +1 @@
Support expiry of refresh tokens and expiry of the overall session when refresh tokens are in use.

changelog.d/11428.misc

@@ -0,0 +1 @@
Add type annotations to some of the configuration surrounding refresh tokens.

changelog.d/11429.docker

@@ -0,0 +1 @@
Update `Dockerfile-workers` to healthcheck all workers in container.

changelog.d/11430.misc

@@ -0,0 +1 @@
Update [MSC2918 refresh token](https://github.com/matrix-org/matrix-doc/blob/main/proposals/2918-refreshtokens.md#msc2918-refresh-tokens) support to conform with the latest revision: accept the `refresh_tokens` parameter in the request body rather than in the URL parameters.

changelog.d/11439.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in 1.47.0 where `send_join` could fail due to an outdated `ijson` version.

changelog.d/11440.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.36 which could cause problems fetching event-signing keys from trusted key servers.

changelog.d/11441.bugfix

@@ -0,0 +1 @@
Fix a bug introduced in 1.47.0 where `send_join` could fail due to an outdated `ijson` version.

debian/changelog

@@ -1,3 +1,21 @@
matrix-synapse-py3 (1.47.1) stable; urgency=medium
* New synapse release 1.47.1.
-- Synapse Packaging team <packages@matrix.org> Fri, 19 Nov 2021 13:44:32 +0000
matrix-synapse-py3 (1.47.0) stable; urgency=medium
* New synapse release 1.47.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 17 Nov 2021 13:09:43 +0000
matrix-synapse-py3 (1.47.0~rc3) stable; urgency=medium
* New synapse release 1.47.0~rc3.
-- Synapse Packaging team <packages@matrix.org> Tue, 16 Nov 2021 14:32:47 +0000
matrix-synapse-py3 (1.47.0~rc2) stable; urgency=medium
[ Dan Callahan ]


@@ -21,3 +21,6 @@ VOLUME ["/data"]
# files to run the desired worker configuration. Will start supervisord.
COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
ENTRYPOINT ["/configure_workers_and_start.py"]
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
CMD /bin/sh /healthcheck.sh


@@ -0,0 +1,6 @@
#!/bin/sh
# This healthcheck script is designed to return OK when every
# host involved returns OK
{%- for healthcheck_url in healthcheck_urls %}
curl -fSs {{ healthcheck_url }} || exit 1
{%- endfor %}
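For context, here is a minimal sketch of how this Jinja2 template expands. The URL list below is hypothetical; in the real generator it is assembled by `configure_workers_and_start.py` from the main process and each worker's port.
```python
import jinja2

# Render the healthcheck template above with an assumed URL list.
template = jinja2.Template(
    "#!/bin/sh\n"
    "# This healthcheck script is designed to return OK when every\n"
    "# host involved returns OK\n"
    "{%- for healthcheck_url in healthcheck_urls %}\n"
    "curl -fSs {{ healthcheck_url }} || exit 1\n"
    "{%- endfor %}"
)
print(template.render(healthcheck_urls=[
    "http://localhost:8080/health",   # main process
    "http://localhost:18009/health",  # an assumed worker port
]))
```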


@@ -148,14 +148,6 @@ bcrypt_rounds: 12
allow_guest_access: {{ "True" if SYNAPSE_ALLOW_GUEST else "False" }}
enable_group_creation: true
# The list of identity servers trusted to verify third party
# identifiers by this server.
#
# Also defines the ID server which will be called when an account is
# deactivated (one will be picked arbitrarily).
trusted_third_party_id_servers:
- matrix.org
- vector.im
## Metrics ###


@@ -48,7 +48,7 @@ WORKERS_CONFIG = {
"app": "synapse.app.user_dir",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$"
"^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$"
],
"shared_extra_conf": {"update_user_directory": False},
"worker_extra_conf": "",
@@ -85,10 +85,10 @@ WORKERS_CONFIG = {
"app": "synapse.app.generic_worker",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(v2_alpha|r0)/sync$",
"^/_matrix/client/(api/v1|v2_alpha|r0)/events$",
"^/_matrix/client/(api/v1|r0)/initialSync$",
"^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$",
"^/_matrix/client/(v2_alpha|r0|v3)/sync$",
"^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$",
"^/_matrix/client/(api/v1|r0|v3)/initialSync$",
"^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
@@ -146,11 +146,11 @@ WORKERS_CONFIG = {
"app": "synapse.app.generic_worker",
"listener_resources": ["client"],
"endpoint_patterns": [
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact",
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send",
"^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
"^/_matrix/client/(api/v1|r0|unstable)/join/",
"^/_matrix/client/(api/v1|r0|unstable)/profile/",
"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact",
"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send",
"^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/join/",
"^/_matrix/client/(api/v1|r0|v3|unstable)/profile/",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
@@ -158,7 +158,7 @@ WORKERS_CONFIG = {
"frontend_proxy": {
"app": "synapse.app.frontend_proxy",
"listener_resources": ["client", "replication"],
"endpoint_patterns": ["^/_matrix/client/(api/v1|r0|unstable)/keys/upload"],
"endpoint_patterns": ["^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload"],
"shared_extra_conf": {},
"worker_extra_conf": (
"worker_main_http_uri: http://127.0.0.1:%d"
@@ -474,10 +474,16 @@ def generate_worker_files(environ, config_path: str, data_dir: str):
# Determine the load-balancing upstreams to configure
nginx_upstream_config = ""
# At the same time, prepare a list of internal endpoints to healthcheck
# starting with the main process which exists even if no workers do.
healthcheck_urls = ["http://localhost:8080/health"]
for upstream_worker_type, upstream_worker_ports in nginx_upstreams.items():
body = ""
for port in upstream_worker_ports:
body += " server localhost:%d;\n" % (port,)
healthcheck_urls.append("http://localhost:%d/health" % (port,))
# Add to the list of configured upstreams
nginx_upstream_config += NGINX_UPSTREAM_CONFIG_BLOCK.format(
@@ -510,6 +516,13 @@ def generate_worker_files(environ, config_path: str, data_dir: str):
worker_config=supervisord_config,
)
# healthcheck config
convert(
"/conf/healthcheck.sh.j2",
"/healthcheck.sh",
healthcheck_urls=healthcheck_urls,
)
# Ensure the logging directory exists
log_dir = data_dir + "/logs"
if not os.path.exists(log_dir):


@@ -50,8 +50,10 @@ build the documentation with:
mdbook build
```
The rendered contents will be outputted to a new `book/` directory at the root of the repository. You can
browse the book by opening `book/index.html` in a web browser.
The rendered contents will be output to a new `book/` directory at the root of the repository. Please note that
`index.html` is not built by default; it is created by copying the file `welcome_and_overview.html` to `index.html`
during deployment. Thus, when running `mdbook serve` locally, the book will initially show a 404 in place of the index.
Do not be alarmed!
You can also have mdbook host the docs on a local webserver with hot-reload functionality via:


@@ -3,6 +3,7 @@
- [Room Details API](#room-details-api)
- [Room Members API](#room-members-api)
- [Room State API](#room-state-api)
- [Block Room API](#block-room-api)
- [Delete Room API](#delete-room-api)
* [Version 1 (old version)](#version-1-old-version)
* [Version 2 (new version)](#version-2-new-version)
@@ -386,6 +387,83 @@ A response body like the following is returned:
}
```
# Block Room API
The Block Room admin API allows server admins to block and unblock rooms,
and query to see if a given room is blocked.
This API can be used to pre-emptively block a room, even if it's unknown to this
homeserver. Users will be prevented from joining a blocked room.
## Block or unblock a room
The API is:
```
PUT /_synapse/admin/v1/rooms/<room_id>/block
```
with a body of:
```json
{
"block": true
}
```
A response body like the following is returned:
```json
{
"block": true
}
```
**Parameters**
The following parameters should be set in the URL:
- `room_id` - The ID of the room.
The following JSON body parameters are available:
- `block` - If `true`, the room will be blocked; if `false`, it will be unblocked.
**Response**
The following fields are possible in the JSON response body:
- `block` - A boolean. `true` if the room is blocked, otherwise `false`
## Get block status
The API is:
```
GET /_synapse/admin/v1/rooms/<room_id>/block
```
A response body like the following is returned:
```json
{
"block": true,
"user_id": "<user_id>"
}
```
**Parameters**
The following parameters should be set in the URL:
- `room_id` - The ID of the room.
**Response**
The following fields are possible in the JSON response body:
- `block` - A boolean. `true` if the room is blocked, otherwise `false`
- `user_id` - An optional string. If the room is blocked (`block` is `true`), this is
the user who added the room to the blocking list. Otherwise it is not displayed.
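For illustration, a minimal Python sketch of both calls; the homeserver URL, admin access token and room ID are hypothetical placeholders:
```python
import requests

BASE = "https://homeserver.example.com"              # hypothetical homeserver
HEADERS = {"Authorization": "Bearer <admin_token>"}  # hypothetical admin token
ROOM_ID = "!someroom:example.com"                    # hypothetical room ID

# Block the room (send {"block": False} to unblock).
resp = requests.put(
    f"{BASE}/_synapse/admin/v1/rooms/{ROOM_ID}/block",
    headers=HEADERS,
    json={"block": True},
)
print(resp.json())  # expected: {"block": true}

# Query the block status; "user_id" is only present while blocked.
status = requests.get(
    f"{BASE}/_synapse/admin/v1/rooms/{ROOM_ID}/block",
    headers=HEADERS,
).json()
print(status["block"], status.get("user_id"))
```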
# Delete Room API
The Delete Room admin API allows server admins to remove rooms from the server


@@ -22,8 +22,9 @@ will be removed in a future version of Synapse.
The `token` field should include the JSON web token with the following claims:
* The `sub` (subject) claim is required and should encode the local part of the
user ID.
* A claim that encodes the local part of the user ID is required. By default,
the `sub` (subject) claim is used, or a custom claim can be set in the
configuration file.
* The expiration time (`exp`), not before time (`nbf`), and issued at (`iat`)
claims are optional, but validated if present.
* The issuer (`iss`) claim is optional, but required and validated if configured.
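As a hedged sketch, the following mints a login token with PyJWT, assuming the homeserver has been configured with `subject_claim: "preferred_username"` and a shared HS256 secret (both illustrative, not defaults from this change):
```python
import jwt  # PyJWT

# Encode the local part of the user ID in a custom claim instead of `sub`.
token = jwt.encode(
    {"preferred_username": "alice"},  # hypothetical custom subject claim
    "shared-secret",                  # hypothetical shared secret
    algorithm="HS256",
)

# The token would then be submitted to the login endpoint, e.g.
# POST /_matrix/client/v3/login
# {"type": "org.matrix.login.jwt", "token": token}
```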


@@ -2,29 +2,80 @@
*Synapse implementation-specific details for the media repository*
The media repository is where attachments and avatar photos are stored.
It stores attachment content and thumbnails for media uploaded by local users.
It caches attachment content and thumbnails for media uploaded by remote users.
The media repository
* stores avatars, attachments and their thumbnails for media uploaded by local
users.
* caches avatars, attachments and their thumbnails for media uploaded by remote
users.
* caches resources and thumbnails used for
[URL previews](development/url_previews.md).
## Storage
All media in Matrix can be identified by a unique
[MXC URI](https://spec.matrix.org/latest/client-server-api/#matrix-content-mxc-uris),
consisting of a server name and media ID:
```
mxc://<server-name>/<media-id>
```
Each item of media is assigned a `media_id` when it is uploaded.
The `media_id` is a randomly chosen, URL safe 24 character string.
## Local Media
Synapse generates 24 character media IDs for content uploaded by local users.
These media IDs consist of upper and lowercase letters and are case-sensitive.
Other homeserver implementations may generate media IDs differently.
Metadata such as the MIME type, upload time and length are stored in the
sqlite3 database indexed by `media_id`.
Local media is recorded in the `local_media_repository` table, which includes
metadata such as MIME types, upload times and file sizes.
Note that this table is shared by the URL cache, which has a different media ID
scheme.
Content is stored on the filesystem under a `"local_content"` directory.
### Paths
A file with media ID `aabbcccccccccccccccccccc` and its `128x96` `image/jpeg`
thumbnail, created by scaling, would be stored at:
```
local_content/aa/bb/cccccccccccccccccccc
local_thumbnails/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale
```
Thumbnails are stored under a `"local_thumbnails"` directory.
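A sketch of the path scheme just described (not Synapse's actual helper functions): the first two pairs of characters of the media ID become nested directories.
```python
def local_media_path(media_id: str) -> str:
    # Split the 24 character media ID into aa/bb/<rest>.
    return f"local_content/{media_id[:2]}/{media_id[2:4]}/{media_id[4:]}"

def local_thumbnail_path(
    media_id: str, width: int, height: int, content_type: str, method: str
) -> str:
    # e.g. 128-96-image-jpeg-scale
    type_part = content_type.replace("/", "-")
    return (
        f"local_thumbnails/{media_id[:2]}/{media_id[2:4]}/{media_id[4:]}"
        f"/{width}-{height}-{type_part}-{method}"
    )

assert local_media_path("aabbcccccccccccccccccccc") == \
    "local_content/aa/bb/cccccccccccccccccccc"
assert local_thumbnail_path("aabbcccccccccccccccccccc", 128, 96,
                            "image/jpeg", "scale") == \
    "local_thumbnails/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale"
```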
## Remote Media
When media from a remote homeserver is requested from Synapse, it is assigned
a local `filesystem_id`, with the same format as locally-generated media IDs,
as described above.
The item with `media_id` `"aabbccccccccdddddddddddd"` is stored under
`"local_content/aa/bb/ccccccccdddddddddddd"`. Its thumbnail with width
`128` and height `96` and type `"image/jpeg"` is stored under
`"local_thumbnails/aa/bb/ccccccccdddddddddddd/128-96-image-jpeg"`
A record of remote media is stored in the `remote_media_cache` table, which
can be used to map remote MXC URIs (server names and media IDs) to local
`filesystem_id`s.
Remote content is cached under `"remote_content"` directory. Each item of
remote content is assigned a local `"filesystem_id"` to ensure that the
directory structure `"remote_content/server_name/aa/bb/ccccccccdddddddddddd"`
is appropriate. Thumbnails for remote content are stored under
`"remote_thumbnail/server_name/..."`
### Paths
A file from `matrix.org` with `filesystem_id` `aabbcccccccccccccccccccc` and its
`128x96` `image/jpeg` thumbnail, created by scaling, would be stored at:
```
remote_content/matrix.org/aa/bb/cccccccccccccccccccc
remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg-scale
```
Older thumbnails may omit the thumbnailing method:
```
remote_thumbnail/matrix.org/aa/bb/cccccccccccccccccccc/128-96-image-jpeg
```
Note that `remote_thumbnail/` does not have an `s`.
## URL Previews
See [URL Previews](development/url_previews.md) for documentation on the URL preview
process.
When generating previews for URLs, Synapse may download and cache various
resources, including images. These resources are assigned temporary media IDs
of the form `yyyy-mm-dd_aaaaaaaaaaaaaaaa`, where `yyyy-mm-dd` is the current
date and `aaaaaaaaaaaaaaaa` is a random sequence of 16 case-sensitive letters.
The metadata for these cached resources is stored in the
`local_media_repository` and `local_media_repository_url_cache` tables.
Resources for URL previews are deleted after a few days.
### Paths
The file with media ID `yyyy-mm-dd_aaaaaaaaaaaaaaaa` and its `128x96`
`image/jpeg` thumbnail, created by scaling, would be stored at:
```
url_cache/yyyy-mm-dd/aaaaaaaaaaaaaaaa
url_cache_thumbnails/yyyy-mm-dd/aaaaaaaaaaaaaaaa/128-96-image-jpeg-scale
```


@@ -0,0 +1,71 @@
# Background update controller callbacks
Background update controller callbacks allow module developers to control (e.g. rate-limit)
how database background updates are run. A database background update is an operation
Synapse runs on its database in the background after it starts. It's usually used to run
database operations that would take too long if they were run at the same time as schema
updates (which are run on startup) and would delay Synapse's startup too much: populating a
table with a large amount of data, adding an index on a big table, deleting superfluous data,
etc.
Background update controller callbacks can be registered using the module API's
`register_background_update_controller_callbacks` method. Only the first module (in order
of appearance in Synapse's configuration file) calling this method can register background
update controller callbacks; subsequent calls are ignored.
The available background update controller callbacks are:
### `on_update`
_First introduced in Synapse v1.49.0_
```python
def on_update(update_name: str, database_name: str, one_shot: bool) -> AsyncContextManager[int]
```
Called when about to do an iteration of a background update. The module is given the name
of the update, the name of the database, and a flag to indicate whether the background
update will happen in one go and may take a long time (e.g. creating indices). If this last
argument is set to `False`, the update will be run in batches.
The module must return an async context manager. It will be entered before Synapse runs a
background update; this should return the desired duration of the iteration, in
milliseconds.
The context manager will be exited when the iteration completes. Note that the duration
returned by the context manager is a target, and an iteration may take substantially longer
or shorter. If the `one_shot` flag is set to `True`, the duration returned is ignored.
__Note__: Unlike most module callbacks in Synapse, this one is _synchronous_. This is
because asynchronous operations are expected to be run by the async context manager.
This callback is required when registering any other background update controller callback.
### `default_batch_size`
_First introduced in Synapse v1.49.0_
```python
async def default_batch_size(update_name: str, database_name: str) -> int
```
Called before the first iteration of a background update, with the name of the update and
of the database. The module must return the number of elements to process in this first
iteration.
If this callback is not defined, Synapse will use a default value of 100.
### `min_batch_size`
_First introduced in Synapse v1.49.0_
```python
async def min_batch_size(update_name: str, database_name: str) -> int
```
Called before running a new batch for a background update, with the name of the update and
of the database. The module must return an integer representing the minimum number of
elements to process in this iteration. This number must be at least 1, and is used to
ensure that progress is always made.
If this callback is not defined, Synapse will use a default value of 100.
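Putting the three callbacks together, here is a minimal sketch of a module registering them; the class name and the fixed values chosen below are illustrative only:
```python
import contextlib
from typing import AsyncContextManager, AsyncIterator

class ExampleBackgroundUpdateController:
    def __init__(self, config: dict, api):
        # Only the first module to call this wins; later calls are ignored.
        api.register_background_update_controller_callbacks(
            on_update=self.on_update,
            default_batch_size=self.default_batch_size,
            min_batch_size=self.min_batch_size,
        )

    def on_update(
        self, update_name: str, database_name: str, one_shot: bool
    ) -> AsyncContextManager[int]:
        # Note: synchronous, but it returns an async context manager.
        @contextlib.asynccontextmanager
        async def ctx() -> AsyncIterator[int]:
            yield 500  # target 500 ms per iteration (ignored if one_shot)

        return ctx()

    async def default_batch_size(self, update_name: str, database_name: str) -> int:
        return 100

    async def min_batch_size(self, update_name: str, database_name: str) -> int:
        return 1  # must be at least 1, so progress is always made
```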


@@ -71,15 +71,15 @@ Modules **must** register their web resources in their `__init__` method.
## Registering a callback
Modules can use Synapse's module API to register callbacks. Callbacks are functions that
Synapse will call when performing specific actions. Callbacks must be asynchronous, and
are split in categories. A single module may implement callbacks from multiple categories,
and is under no obligation to implement all callbacks from the categories it registers
callbacks for.
Synapse will call when performing specific actions. Callbacks must be asynchronous (unless
specified otherwise), and are split into categories. A single module may implement callbacks
from multiple categories, and is under no obligation to implement all callbacks from the
categories it registers callbacks for.
Modules can register callbacks using one of the module API's `register_[...]_callbacks`
methods. The callback functions are passed to these methods as keyword arguments, with
the callback name as the argument name and the function as its value. This is demonstrated
in the example below. A `register_[...]_callbacks` method exists for each category.
the callback name as the argument name and the function as its value. A
`register_[...]_callbacks` method exists for each category.
Callbacks for each category can be found on their respective page of the
[Synapse documentation website](https://matrix-org.github.io/synapse).
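As a rough sketch of that registration pattern (the spam-checker category and the trivial policy below are illustrative, not part of this change):
```python
class ExampleModule:
    def __init__(self, config: dict, api):
        # Pass each callback as a keyword argument named after the callback.
        api.register_spam_checker_callbacks(
            user_may_invite=self.user_may_invite,
        )

    async def user_may_invite(
        self, inviter: str, invitee: str, room_id: str
    ) -> bool:
        # Allow every invite; a real module would apply its own policy.
        return True
```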


@@ -1,7 +1,7 @@
<h2 style="color:red">
This page of the Synapse documentation is now deprecated. For up to date
documentation on setting up or writing a password auth provider module, please see
<a href="modules.md">this page</a>.
<a href="modules/index.md">this page</a>.
</h2>
# Password auth provider modules


@@ -647,8 +647,8 @@ retention:
#
#federation_certificate_verification_whitelist:
# - lon.example.com
# - *.domain.com
# - *.onion
# - "*.domain.com"
# - "*.onion"
# List of custom certificate authorities for federation traffic.
#
@@ -2039,6 +2039,12 @@ sso:
#
#algorithm: "provided-by-your-issuer"
# Name of the claim containing a unique identifier for the user.
#
# Optional, defaults to `sub`.
#
#subject_claim: "sub"
# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and
@@ -2360,8 +2366,8 @@ user_directory:
# indexes were (re)built was before Synapse 1.44, you'll have to
# rebuild the indexes in order to search through all known users.
# These indexes are built the first time Synapse starts; admins can
# manually trigger a rebuild following the instructions at
# https://matrix-org.github.io/synapse/latest/user_directory.html
# manually trigger a rebuild via API following the instructions at
# https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
#
# Uncomment to return search results containing all known users, even if that
# user does not share a room with the requester.


@@ -220,7 +220,7 @@ Here are a few things to try:
anyone who has successfully set this up.
* Check that you have opened your firewall to allow TCP and UDP traffic to the
TURN ports (normally 3478 and 5479).
TURN ports (normally 3478 and 5349).
* Check that you have opened your firewall to allow UDP traffic to the UDP
relay ports (49152-65535 by default).


@@ -42,7 +42,6 @@ For each update:
`average_items_per_ms` how many items are processed per millisecond based on an exponential average.
## Enabled
This API allows pausing background updates.
@@ -82,3 +81,29 @@ The API returns the `enabled` param.
```
There is also a `GET` version which returns the `enabled` state.
## Run
This API schedules a specific background update to run. The job starts immediately after calling the API.
The API is:
```
POST /_synapse/admin/v1/background_updates/start_job
```
with the following body:
```json
{
"job_name": "populate_stats_process_rooms"
}
```
The following JSON body parameters are available:
- `job_name` - A string specifying which job to run. Valid values are:
- `populate_stats_process_rooms` - Recalculate the stats for all rooms.
- `regenerate_directory` - Recalculate the [user directory](../../../user_directory.md) if it is stale or out of sync.
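For illustration, scheduling one of these jobs from Python; the homeserver URL and admin access token are hypothetical placeholders:
```python
import requests

# Schedule the `regenerate_directory` job via the admin API.
resp = requests.post(
    "https://homeserver.example.com/_synapse/admin/v1/background_updates/start_job",
    headers={"Authorization": "Bearer <admin_token>"},  # hypothetical token
    json={"job_name": "regenerate_directory"},
)
resp.raise_for_status()  # the job starts immediately on success
```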


@@ -6,9 +6,9 @@ on this particular server - i.e. ones which your account shares a room with, or
who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to execute the SQL [here](https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/main/delta/53/user_dir_populate.sql)
and then restart synapse. This should then start a background task to
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to use the [admin API](usage/administration/admin_api/background_updates.md#run)
and execute the job `regenerate_directory`. This should then start a background task to
flush the current tables and regenerate the directory.
Data model


@@ -182,10 +182,10 @@ This worker can handle API requests matching the following regular
expressions:
# Sync requests
^/_matrix/client/(v2_alpha|r0)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0)/events$
^/_matrix/client/(api/v1|r0)/initialSync$
^/_matrix/client/(api/v1|r0)/rooms/[^/]+/initialSync$
^/_matrix/client/(v2_alpha|r0|v3)/sync$
^/_matrix/client/(api/v1|v2_alpha|r0|v3)/events$
^/_matrix/client/(api/v1|r0|v3)/initialSync$
^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$
# Federation requests
^/_matrix/federation/v1/event/
@@ -210,46 +210,46 @@ expressions:
^/_matrix/federation/v1/get_groups_publicised$
^/_matrix/key/v2/query
^/_matrix/federation/unstable/org.matrix.msc2946/spaces/
^/_matrix/federation/unstable/org.matrix.msc2946/hierarchy/
^/_matrix/federation/(v1|unstable/org.matrix.msc2946)/hierarchy/
# Inbound federation transaction request
^/_matrix/federation/v1/send/
# Client API requests
^/_matrix/client/(api/v1|r0|unstable)/createRoom$
^/_matrix/client/(api/v1|r0|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state$
^/_matrix/client/(api/v1|r0|v3|unstable)/createRoom$
^/_matrix/client/(api/v1|r0|v3|unstable)/publicRooms$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/joined_members$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/context/.*$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/members$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state$
^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/spaces$
^/_matrix/client/unstable/org.matrix.msc2946/rooms/.*/hierarchy$
^/_matrix/client/(v1|unstable/org.matrix.msc2946)/rooms/.*/hierarchy$
^/_matrix/client/unstable/im.nheko.summary/rooms/.*/summary$
^/_matrix/client/(api/v1|r0|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|unstable)/devices$
^/_matrix/client/(api/v1|r0|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|unstable)/keys/changes$
^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$
^/_matrix/client/(api/v1|r0|v3|unstable)/devices$
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/query$
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/changes$
^/_matrix/client/versions$
^/_matrix/client/(api/v1|r0|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|unstable)/publicised_groups/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/event/
^/_matrix/client/(api/v1|r0|unstable)/joined_rooms$
^/_matrix/client/(api/v1|r0|unstable)/search$
^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$
^/_matrix/client/(api/v1|r0|v3|unstable)/joined_groups$
^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups$
^/_matrix/client/(api/v1|r0|v3|unstable)/publicised_groups/
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/event/
^/_matrix/client/(api/v1|r0|v3|unstable)/joined_rooms$
^/_matrix/client/(api/v1|r0|v3|unstable)/search$
# Registration/login requests
^/_matrix/client/(api/v1|r0|unstable)/login$
^/_matrix/client/(r0|unstable)/register$
^/_matrix/client/(api/v1|r0|v3|unstable)/login$
^/_matrix/client/(r0|v3|unstable)/register$
^/_matrix/client/unstable/org.matrix.msc3231/register/org.matrix.msc3231.login.registration_token/validity$
# Event sending requests
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/redact
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/redact
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|v3|unstable)/join/
^/_matrix/client/(api/v1|r0|v3|unstable)/profile/
Additionally, the following REST endpoints can be handled for GET requests:
@@ -261,14 +261,14 @@ room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests
for the room are in flight:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/messages$
^/_matrix/client/(api/v1|r0|v3|unstable)/rooms/.*/messages$
Additionally, the following endpoints should be included if Synapse is configured
to use SSO (you only need to include the ones for whichever SSO provider you're
using):
# for all SSO providers
^/_matrix/client/(api/v1|r0|unstable)/login/sso/redirect
^/_matrix/client/(api/v1|r0|v3|unstable)/login/sso/redirect
^/_synapse/client/pick_idp$
^/_synapse/client/pick_username
^/_synapse/client/new_user_consent$
@@ -281,7 +281,7 @@ using):
^/_synapse/client/saml2/authn_response$
# CAS requests.
^/_matrix/client/(api/v1|r0|unstable)/login/cas/ticket$
^/_matrix/client/(api/v1|r0|v3|unstable)/login/cas/ticket$
Ensure that all SSO logins go to a single process.
For multiple workers not handling the SSO endpoints properly, see
@@ -465,7 +465,7 @@ Note that if a reverse proxy is used, then `/_matrix/media/` must be routed for
Handles searches in the user directory. It can handle REST endpoints matching
the following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
^/_matrix/client/(api/v1|r0|v3|unstable)/user_directory/search$
When using this worker you must also set `update_user_directory: False` in the
shared configuration file to stop the main synapse running background
@@ -477,12 +477,12 @@ Proxies some frequently-requested client endpoints to add caching and remove
load from the main synapse. It can handle REST endpoints matching the following
regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/keys/upload
^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload
If `use_presence` is False in the homeserver config, it can also handle REST
endpoints matching the following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/presence/[^/]+/status
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/[^/]+/status
This "stub" presence handler will pass through `GET` request but make the
`PUT` effectively a no-op.

mypy.ini

@@ -33,7 +33,6 @@ exclude = (?x)
|synapse/storage/databases/main/event_federation.py
|synapse/storage/databases/main/event_push_actions.py
|synapse/storage/databases/main/events_bg_updates.py
|synapse/storage/databases/main/events_worker.py
|synapse/storage/databases/main/group_server.py
|synapse/storage/databases/main/metrics.py
|synapse/storage/databases/main/monthly_active_users.py
@@ -151,6 +150,9 @@ disallow_untyped_defs = True
[mypy-synapse.app.*]
disallow_untyped_defs = True
[mypy-synapse.config._base]
disallow_untyped_defs = True
[mypy-synapse.crypto.*]
disallow_untyped_defs = True
@@ -160,6 +162,12 @@ disallow_untyped_defs = True
[mypy-synapse.handlers.*]
disallow_untyped_defs = True
[mypy-synapse.metrics.*]
disallow_untyped_defs = True
[mypy-synapse.module_api.*]
disallow_untyped_defs = True
[mypy-synapse.push.*]
disallow_untyped_defs = True
@@ -178,6 +186,9 @@ disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.directory]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.events_worker]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.room_batch]
disallow_untyped_defs = True
@@ -196,92 +207,11 @@ disallow_untyped_defs = True
[mypy-synapse.streams.*]
disallow_untyped_defs = True
[mypy-synapse.util.batching_queue]
[mypy-synapse.util.*]
disallow_untyped_defs = True
[mypy-synapse.util.caches.cached_call]
disallow_untyped_defs = True
[mypy-synapse.util.caches.dictionary_cache]
disallow_untyped_defs = True
[mypy-synapse.util.caches.lrucache]
disallow_untyped_defs = True
[mypy-synapse.util.caches.response_cache]
disallow_untyped_defs = True
[mypy-synapse.util.caches.stream_change_cache]
disallow_untyped_defs = True
[mypy-synapse.util.caches.ttl_cache]
disallow_untyped_defs = True
[mypy-synapse.util.daemonize]
disallow_untyped_defs = True
[mypy-synapse.util.file_consumer]
disallow_untyped_defs = True
[mypy-synapse.util.frozenutils]
disallow_untyped_defs = True
[mypy-synapse.util.hash]
disallow_untyped_defs = True
[mypy-synapse.util.httpresourcetree]
disallow_untyped_defs = True
[mypy-synapse.util.iterutils]
disallow_untyped_defs = True
[mypy-synapse.util.linked_list]
disallow_untyped_defs = True
[mypy-synapse.util.logcontext]
disallow_untyped_defs = True
[mypy-synapse.util.logformatter]
disallow_untyped_defs = True
[mypy-synapse.util.macaroons]
disallow_untyped_defs = True
[mypy-synapse.util.manhole]
disallow_untyped_defs = True
[mypy-synapse.util.module_loader]
disallow_untyped_defs = True
[mypy-synapse.util.msisdn]
disallow_untyped_defs = True
[mypy-synapse.util.patch_inline_callbacks]
disallow_untyped_defs = True
[mypy-synapse.util.ratelimitutils]
disallow_untyped_defs = True
[mypy-synapse.util.retryutils]
disallow_untyped_defs = True
[mypy-synapse.util.rlimit]
disallow_untyped_defs = True
[mypy-synapse.util.stringutils]
disallow_untyped_defs = True
[mypy-synapse.util.templates]
disallow_untyped_defs = True
[mypy-synapse.util.threepids]
disallow_untyped_defs = True
[mypy-synapse.util.wheel_timer]
disallow_untyped_defs = True
[mypy-synapse.util.versionstring]
disallow_untyped_defs = True
[mypy-synapse.util.caches.treecache]
disallow_untyped_defs = False
[mypy-tests.handlers.test_user_directory]
disallow_untyped_defs = True
@@ -295,6 +225,10 @@ disallow_untyped_defs = True
[mypy-tests.rest.client.test_directory]
disallow_untyped_defs = True
[mypy-tests.federation.transport.test_client]
disallow_untyped_defs = True
;; Dependencies without annotations
;; Before ignoring a module, check to see if type stubs are available.
;; The `typeshed` project maintains stubs here:


@@ -24,7 +24,7 @@
set -e
# Change to the repository root
cd "$(dirname "$0")/.."
cd "$(dirname $0)/.."
# Check for a user-specified Complement checkout
if [[ -z "$COMPLEMENT_DIR" ]]; then
@@ -61,8 +61,8 @@ cd "$COMPLEMENT_DIR"
EXTRA_COMPLEMENT_ARGS=""
if [[ -n "$1" ]]; then
# A test name regex has been set, supply it to Complement
EXTRA_COMPLEMENT_ARGS=(-run "$1")
EXTRA_COMPLEMENT_ARGS+="-run $1 "
fi
# Run the tests!
go test -v -tags synapse_blacklist,msc2946,msc3083,msc2403,msc2716 -count=1 "${EXTRA_COMPLEMENT_ARGS[@]}" ./tests/...
go test -v -tags synapse_blacklist,msc2403 -count=1 $EXTRA_COMPLEMENT_ARGS ./tests/...


@@ -119,7 +119,9 @@ CONDITIONAL_REQUIREMENTS["mypy"] = [
# Tests assume that all optional dependencies are installed.
#
# parameterized_class decorator was introduced in parameterized 0.7.0
CONDITIONAL_REQUIREMENTS["test"] = ["parameterized>=0.7.0"]
#
# We use the `mock` library as it backports `AsyncMock` to Python 3.6
CONDITIONAL_REQUIREMENTS["test"] = ["parameterized>=0.7.0", "mock>=4.0.0"]
CONDITIONAL_REQUIREMENTS["dev"] = (
CONDITIONAL_REQUIREMENTS["lint"]


@@ -47,7 +47,7 @@ try:
except ImportError:
pass
__version__ = "1.47.0rc2"
__version__ = "1.47.1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@@ -30,7 +30,8 @@ FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
STATIC_PREFIX = "/_matrix/static"
WEB_CLIENT_PREFIX = "/_matrix/client"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_PREFIX = "/_matrix/media/r0"
MEDIA_R0_PREFIX = "/_matrix/media/r0"
MEDIA_V3_PREFIX = "/_matrix/media/v3"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"


@@ -402,7 +402,7 @@ async def start(hs: "HomeServer") -> None:
if hasattr(signal, "SIGHUP"):
@wrap_as_background_process("sighup")
def handle_sighup(*args: Any, **kwargs: Any) -> None:
async def handle_sighup(*args: Any, **kwargs: Any) -> None:
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
sdnotify(b"RELOADING=1")


@@ -26,7 +26,8 @@ from synapse.api.urls import (
CLIENT_API_PREFIX,
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,
MEDIA_R0_PREFIX,
MEDIA_V3_PREFIX,
SERVER_KEY_V2_PREFIX,
)
from synapse.app import _base
@@ -112,6 +113,7 @@ from synapse.storage.databases.main.monthly_active_users import (
)
from synapse.storage.databases.main.presence import PresenceStore
from synapse.storage.databases.main.room import RoomWorkerStore
from synapse.storage.databases.main.room_batch import RoomBatchStore
from synapse.storage.databases.main.search import SearchStore
from synapse.storage.databases.main.session import SessionStore
from synapse.storage.databases.main.stats import StatsStore
@@ -239,6 +241,7 @@ class GenericWorkerSlavedStore(
SlavedEventStore,
SlavedKeyStore,
RoomWorkerStore,
RoomBatchStore,
DirectoryStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
@@ -338,7 +341,8 @@ class GenericWorkerServer(HomeServer):
resources.update(
{
MEDIA_PREFIX: media_repo,
MEDIA_R0_PREFIX: media_repo,
MEDIA_V3_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
"/_synapse/admin": admin_resource,
}


@@ -29,7 +29,8 @@ from synapse import events
from synapse.api.urls import (
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,
MEDIA_R0_PREFIX,
MEDIA_V3_PREFIX,
SERVER_KEY_V2_PREFIX,
STATIC_PREFIX,
WEB_CLIENT_PREFIX,
@@ -193,6 +194,8 @@ class SynapseHomeServer(HomeServer):
{
"/_matrix/client/api/v1": client_resource,
"/_matrix/client/r0": client_resource,
"/_matrix/client/v1": client_resource,
"/_matrix/client/v3": client_resource,
"/_matrix/client/unstable": client_resource,
"/_matrix/client/v2_alpha": client_resource,
"/_matrix/client/versions": client_resource,
@@ -244,7 +247,11 @@ class SynapseHomeServer(HomeServer):
if self.config.server.enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update(
{MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo}
{
MEDIA_R0_PREFIX: media_repo,
MEDIA_V3_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
}
)
elif name == "media":
raise ConfigError(


@@ -231,13 +231,32 @@ class ApplicationServiceApi(SimpleHttpClient):
json_body=body,
args={"access_token": service.hs_token},
)
if logger.isEnabledFor(logging.DEBUG):
logger.debug(
"push_bulk to %s succeeded! events=%s",
uri,
[event.get("event_id") for event in events],
)
sent_transactions_counter.labels(service.id).inc()
sent_events_counter.labels(service.id).inc(len(events))
return True
except CodeMessageException as e:
logger.warning("push_bulk to %s received %s", uri, e.code)
logger.warning(
"push_bulk to %s received code=%s msg=%s",
uri,
e.code,
e.msg,
exc_info=logger.isEnabledFor(logging.DEBUG),
)
except Exception as ex:
logger.warning("push_bulk to %s threw exception %s", uri, ex)
logger.warning(
"push_bulk to %s threw exception(%s) %s args=%s",
uri,
type(ex).__name__,
ex,
ex.args,
exc_info=logger.isEnabledFor(logging.DEBUG),
)
failed_transactions_counter.labels(service.id).inc()
return False


@@ -20,7 +20,18 @@ import os
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import Any, Iterable, List, MutableMapping, Optional, Union
from typing import (
Any,
Dict,
Iterable,
List,
MutableMapping,
Optional,
Tuple,
Type,
TypeVar,
Union,
)
import attr
import jinja2
@@ -78,7 +89,7 @@ CONFIG_FILE_HEADER = """\
"""
def path_exists(file_path):
def path_exists(file_path: str) -> bool:
"""Check if a file exists
Unlike os.path.exists, this throws an exception if there is an error
@@ -86,7 +97,7 @@ def path_exists(file_path):
the parent dir).
Returns:
bool: True if the file exists; False if not.
True if the file exists; False if not.
"""
try:
os.stat(file_path)
@@ -102,15 +113,15 @@ class Config:
A configuration section, containing configuration keys and values.
Attributes:
section (str): The section title of this config object, such as
section: The section title of this config object, such as
"tls" or "logger". This is used to refer to it on the root
config (for example, `config.tls.some_option`). Must be
defined in subclasses.
"""
section = None
section: str
def __init__(self, root_config=None):
def __init__(self, root_config: Optional["RootConfig"] = None):
self.root = root_config
# Get the path to the default Synapse template directory
@@ -119,7 +130,7 @@ class Config:
)
@staticmethod
def parse_size(value):
def parse_size(value: Union[str, int]) -> int:
if isinstance(value, int):
return value
sizes = {"K": 1024, "M": 1024 * 1024}
@@ -162,15 +173,15 @@ class Config:
return int(value) * size
@staticmethod
def abspath(file_path):
def abspath(file_path: str) -> str:
return os.path.abspath(file_path) if file_path else file_path
@classmethod
def path_exists(cls, file_path):
def path_exists(cls, file_path: str) -> bool:
return path_exists(file_path)
@classmethod
def check_file(cls, file_path, config_name):
def check_file(cls, file_path: Optional[str], config_name: str) -> str:
if file_path is None:
raise ConfigError("Missing config for %s." % (config_name,))
try:
@@ -183,7 +194,7 @@ class Config:
return cls.abspath(file_path)
@classmethod
def ensure_directory(cls, dir_path):
def ensure_directory(cls, dir_path: str) -> str:
dir_path = cls.abspath(dir_path)
os.makedirs(dir_path, exist_ok=True)
if not os.path.isdir(dir_path):
@@ -191,7 +202,7 @@ class Config:
return dir_path
@classmethod
def read_file(cls, file_path, config_name):
def read_file(cls, file_path: Any, config_name: str) -> str:
"""Deprecated: call read_file directly"""
return read_file(file_path, (config_name,))
@@ -284,6 +295,9 @@ class Config:
return [env.get_template(filename) for filename in filenames]
TRootConfig = TypeVar("TRootConfig", bound="RootConfig")
class RootConfig:
"""
Holder of an application's configuration.
@@ -308,7 +322,9 @@ class RootConfig:
raise Exception("Failed making %s: %r" % (config_class.section, e))
setattr(self, config_class.section, conf)
def invoke_all(self, func_name: str, *args, **kwargs) -> MutableMapping[str, Any]:
def invoke_all(
self, func_name: str, *args: Any, **kwargs: Any
) -> MutableMapping[str, Any]:
"""
Invoke a function on all instantiated config objects this RootConfig is
configured to use.
@@ -317,6 +333,7 @@ class RootConfig:
func_name: Name of function to invoke
*args
**kwargs
Returns:
ordered dictionary of config section name and the result of the
function from it.
@@ -332,7 +349,7 @@ class RootConfig:
return res
@classmethod
def invoke_all_static(cls, func_name: str, *args, **kwargs):
def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: Any) -> None:
"""
Invoke a static function on config objects this RootConfig is
configured to use.
@@ -341,6 +358,7 @@ class RootConfig:
func_name: Name of function to invoke
*args
**kwargs
Returns:
ordered dictionary of config section name and the result of the
function from it.
@@ -351,16 +369,16 @@ class RootConfig:
def generate_config(
self,
config_dir_path,
data_dir_path,
server_name,
generate_secrets=False,
report_stats=None,
open_private_ports=False,
listeners=None,
tls_certificate_path=None,
tls_private_key_path=None,
):
config_dir_path: str,
data_dir_path: str,
server_name: str,
generate_secrets: bool = False,
report_stats: Optional[bool] = None,
open_private_ports: bool = False,
listeners: Optional[List[dict]] = None,
tls_certificate_path: Optional[str] = None,
tls_private_key_path: Optional[str] = None,
) -> str:
"""
Build a default configuration file
@@ -368,27 +386,27 @@ class RootConfig:
(eg with --generate_config).
Args:
config_dir_path (str): The path where the config files are kept. Used to
config_dir_path: The path where the config files are kept. Used to
create filenames for things like the log config and the signing key.
data_dir_path (str): The path where the data files are kept. Used to create
data_dir_path: The path where the data files are kept. Used to create
filenames for things like the database and media store.
server_name (str): The server name. Used to initialise the server_name
server_name: The server name. Used to initialise the server_name
config param, but also used in the names of some of the config files.
generate_secrets (bool): True if we should generate new secrets for things
generate_secrets: True if we should generate new secrets for things
like the macaroon_secret_key. If False, these parameters will be left
unset.
report_stats (bool|None): Initial setting for the report_stats setting.
report_stats: Initial setting for the report_stats setting.
If None, report_stats will be left unset.
open_private_ports (bool): True to leave private ports (such as the non-TLS
open_private_ports: True to leave private ports (such as the non-TLS
HTTP listener) open to the internet.
listeners (list(dict)|None): A list of descriptions of the listeners
synapse should start with each of which specifies a port (str), a list of
listeners: A list of descriptions of the listeners synapse should
start with, each of which specifies a port (int), a list of
resources (list(str)), tls (bool) and type (str). For example:
[{
"port": 8448,
@@ -403,16 +421,12 @@ class RootConfig:
"type": "http",
}],
tls_certificate_path: The path to the tls certificate.
database (str|None): The database type to configure, either `psycog2`
or `sqlite3`.
tls_certificate_path (str|None): The path to the tls certificate.
tls_private_key_path (str|None): The path to the tls private key.
tls_private_key_path: The path to the tls private key.
Returns:
str: the yaml config file
The yaml config file
"""
return CONFIG_FILE_HEADER + "\n\n".join(
@@ -432,12 +446,15 @@ class RootConfig:
)
@classmethod
def load_config(cls, description, argv):
def load_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> TRootConfig:
"""Parse the commandline and config files
Doesn't support config-file-generation: used by the worker apps.
Returns: Config object.
Returns:
Config object.
"""
config_parser = argparse.ArgumentParser(description=description)
cls.add_arguments_to_parser(config_parser)
@@ -446,7 +463,7 @@ class RootConfig:
return obj
@classmethod
def add_arguments_to_parser(cls, config_parser):
def add_arguments_to_parser(cls, config_parser: argparse.ArgumentParser) -> None:
"""Adds all the config flags to an ArgumentParser.
Doesn't support config-file-generation: used by the worker apps.
@@ -454,7 +471,7 @@ class RootConfig:
Used for workers where we want to add extra flags/subcommands.
Args:
config_parser (ArgumentParser): App description
config_parser: App description
"""
config_parser.add_argument(
@@ -477,7 +494,9 @@ class RootConfig:
cls.invoke_all_static("add_arguments", config_parser)
@classmethod
def load_config_with_parser(cls, parser, argv):
def load_config_with_parser(
cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
) -> Tuple[TRootConfig, argparse.Namespace]:
"""Parse the commandline and config files with the given parser
Doesn't support config-file-generation: used by the worker apps.
@@ -485,13 +504,12 @@ class RootConfig:
Used for workers where we want to add extra flags/subcommands.
Args:
parser (ArgumentParser)
argv (list[str])
parser
argv
Returns:
tuple[HomeServerConfig, argparse.Namespace]: Returns the parsed
config object and the parsed argparse.Namespace object from
`parser.parse_args(..)`
Returns the parsed config object and the parsed argparse.Namespace
object from `parser.parse_args(..)`
"""
obj = cls()
@@ -520,12 +538,15 @@ class RootConfig:
return obj, config_args
@classmethod
def load_or_generate_config(cls, description, argv):
def load_or_generate_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> Optional[TRootConfig]:
"""Parse the commandline and config files
Supports generation of config files, so is used for the main homeserver app.
Returns: Config object, or None if --generate-config or --generate-keys was set
Returns:
Config object, or None if --generate-config or --generate-keys was set
"""
parser = argparse.ArgumentParser(description=description)
parser.add_argument(
@@ -680,16 +701,21 @@ class RootConfig:
return obj
def parse_config_dict(self, config_dict, config_dir_path=None, data_dir_path=None):
def parse_config_dict(
self,
config_dict: Dict[str, Any],
config_dir_path: Optional[str] = None,
data_dir_path: Optional[str] = None,
) -> None:
"""Read the information from the config dict into this Config object.
Args:
config_dict (dict): Configuration data, as read from the yaml
config_dict: Configuration data, as read from the yaml
config_dir_path (str): The path where the config files are kept. Used to
config_dir_path: The path where the config files are kept. Used to
create filenames for things like the log config and the signing key.
data_dir_path (str): The path where the data files are kept. Used to create
data_dir_path: The path where the data files are kept. Used to create
filenames for things like the database and media store.
"""
self.invoke_all(
@@ -699,17 +725,20 @@ class RootConfig:
data_dir_path=data_dir_path,
)
def generate_missing_files(self, config_dict, config_dir_path):
def generate_missing_files(
self, config_dict: Dict[str, Any], config_dir_path: str
) -> None:
self.invoke_all("generate_files", config_dict, config_dir_path)
def read_config_files(config_files):
def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
"""Read the config files into a dict
Args:
config_files (iterable[str]): A list of the config files to read
config_files: A list of the config files to read
Returns: dict
Returns:
The configuration dictionary.
"""
specified_config = {}
for config_file in config_files:
@@ -733,17 +762,17 @@ def read_config_files(config_files):
return specified_config
def find_config_files(search_paths):
def find_config_files(search_paths: List[str]) -> List[str]:
"""Finds config files using a list of search paths. If a path is a file
then that file path is added to the list. If a search path is a directory
then all the "*.yaml" files in that directory are added to the list in
sorted order.
Args:
search_paths(list(str)): A list of paths to search.
search_paths: A list of paths to search.
Returns:
list(str): A list of file paths.
A list of file paths.
"""
config_files = []
@@ -777,7 +806,7 @@ def find_config_files(search_paths):
return config_files
@attr.s
@attr.s(auto_attribs=True)
class ShardedWorkerHandlingConfig:
"""Algorithm for choosing which instance is responsible for handling some
sharded work.
@@ -787,7 +816,7 @@ class ShardedWorkerHandlingConfig:
below).
"""
instances = attr.ib(type=List[str])
instances: List[str]
def should_handle(self, instance_name: str, key: str) -> bool:
"""Whether this instance is responsible for handling the given key."""

View File

@@ -1,4 +1,18 @@
from typing import Any, Iterable, List, Optional
import argparse
from typing import (
Any,
Dict,
Iterable,
List,
MutableMapping,
Optional,
Tuple,
Type,
TypeVar,
Union,
)
import jinja2
from synapse.config import (
account_validity,
@@ -19,6 +33,7 @@ from synapse.config import (
logger,
metrics,
modules,
oembed,
oidc,
password_auth_providers,
push,
@@ -27,6 +42,7 @@ from synapse.config import (
registration,
repository,
retention,
room,
room_directory,
saml2,
server,
@@ -51,7 +67,9 @@ MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
MISSING_REPORT_STATS_SPIEL: str
MISSING_SERVER_NAME: str
def path_exists(file_path: str): ...
def path_exists(file_path: str) -> bool: ...
TRootConfig = TypeVar("TRootConfig", bound="RootConfig")
class RootConfig:
server: server.ServerConfig
@@ -61,6 +79,7 @@ class RootConfig:
logging: logger.LoggingConfig
ratelimiting: ratelimiting.RatelimitConfig
media: repository.ContentRepositoryConfig
oembed: oembed.OembedConfig
captcha: captcha.CaptchaConfig
voip: voip.VoipConfig
registration: registration.RegistrationConfig
@@ -80,6 +99,7 @@ class RootConfig:
authproviders: password_auth_providers.PasswordAuthProviderConfig
push: push.PushConfig
spamchecker: spam_checker.SpamCheckerConfig
room: room.RoomConfig
groups: groups.GroupsConfig
userdirectory: user_directory.UserDirectoryConfig
consent: consent.ConsentConfig
@@ -87,72 +107,85 @@ class RootConfig:
servernotices: server_notices.ServerNoticesConfig
roomdirectory: room_directory.RoomDirectoryConfig
thirdpartyrules: third_party_event_rules.ThirdPartyRulesConfig
tracer: tracer.TracerConfig
tracing: tracer.TracerConfig
redis: redis.RedisConfig
modules: modules.ModulesConfig
caches: cache.CacheConfig
federation: federation.FederationConfig
retention: retention.RetentionConfig
config_classes: List = ...
config_classes: List[Type["Config"]] = ...
def __init__(self) -> None: ...
def invoke_all(self, func_name: str, *args: Any, **kwargs: Any): ...
def invoke_all(
self, func_name: str, *args: Any, **kwargs: Any
) -> MutableMapping[str, Any]: ...
@classmethod
def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: Any) -> None: ...
def __getattr__(self, item: str): ...
def parse_config_dict(
self,
config_dict: Any,
config_dir_path: Optional[Any] = ...,
data_dir_path: Optional[Any] = ...,
config_dict: Dict[str, Any],
config_dir_path: Optional[str] = ...,
data_dir_path: Optional[str] = ...,
) -> None: ...
read_config: Any = ...
def generate_config(
self,
config_dir_path: str,
data_dir_path: str,
server_name: str,
generate_secrets: bool = ...,
report_stats: Optional[str] = ...,
report_stats: Optional[bool] = ...,
open_private_ports: bool = ...,
listeners: Optional[Any] = ...,
database_conf: Optional[Any] = ...,
tls_certificate_path: Optional[str] = ...,
tls_private_key_path: Optional[str] = ...,
): ...
) -> str: ...
@classmethod
def load_or_generate_config(cls, description: Any, argv: Any): ...
def load_or_generate_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> Optional[TRootConfig]: ...
@classmethod
def load_config(cls, description: Any, argv: Any): ...
def load_config(
cls: Type[TRootConfig], description: str, argv: List[str]
) -> TRootConfig: ...
@classmethod
def add_arguments_to_parser(cls, config_parser: Any) -> None: ...
def add_arguments_to_parser(
cls, config_parser: argparse.ArgumentParser
) -> None: ...
@classmethod
def load_config_with_parser(cls, parser: Any, argv: Any): ...
def load_config_with_parser(
cls: Type[TRootConfig], parser: argparse.ArgumentParser, argv: List[str]
) -> Tuple[TRootConfig, argparse.Namespace]: ...
def generate_missing_files(
self, config_dict: dict, config_dir_path: str
) -> None: ...
class Config:
root: RootConfig
default_template_dir: str
def __init__(self, root_config: Optional[RootConfig] = ...) -> None: ...
def __getattr__(self, item: str, from_root: bool = ...): ...
@staticmethod
def parse_size(value: Any): ...
def parse_size(value: Union[str, int]) -> int: ...
@staticmethod
def parse_duration(value: Any): ...
def parse_duration(value: Union[str, int]) -> int: ...
@staticmethod
def abspath(file_path: Optional[str]): ...
def abspath(file_path: Optional[str]) -> str: ...
@classmethod
def path_exists(cls, file_path: str): ...
def path_exists(cls, file_path: str) -> bool: ...
@classmethod
def check_file(cls, file_path: str, config_name: str): ...
def check_file(cls, file_path: str, config_name: str) -> str: ...
@classmethod
def ensure_directory(cls, dir_path: str): ...
def ensure_directory(cls, dir_path: str) -> str: ...
@classmethod
def read_file(cls, file_path: str, config_name: str): ...
def read_file(cls, file_path: str, config_name: str) -> str: ...
def read_template(self, filenames: str) -> jinja2.Template: ...
def read_templates(
self,
filenames: List[str],
custom_template_directories: Optional[Iterable[str]] = None,
) -> List[jinja2.Template]: ...
def read_config_files(config_files: List[str]): ...
def find_config_files(search_paths: List[str]): ...
def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]: ...
def find_config_files(search_paths: List[str]) -> List[str]: ...
class ShardedWorkerHandlingConfig:
instances: List[str]
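
The `TRootConfig` TypeVar bound to `RootConfig` lets these classmethods be typed so that calling them on a subclass yields that subclass. A minimal sketch, with hypothetical names:

from typing import List, Type, TypeVar

TConfig = TypeVar("TConfig", bound="BaseConfigSketch")

class BaseConfigSketch:
    @classmethod
    def load_config(cls: Type[TConfig], argv: List[str]) -> TConfig:
        # cls is the concrete subclass, so type checkers infer the
        # subclass type at each call site.
        return cls()

class HomeServerConfigSketch(BaseConfigSketch):
    pass

config = HomeServerConfigSketch.load_config([])  # inferred: HomeServerConfigSketch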

View File

@@ -15,7 +15,7 @@
import os
import re
import threading
from typing import Callable, Dict
from typing import Callable, Dict, Optional
from synapse.python_dependencies import DependencyException, check_requirements
@@ -217,7 +217,7 @@ class CacheConfig(Config):
expiry_time = cache_config.get("expiry_time")
if expiry_time:
self.expiry_time_msec = self.parse_duration(expiry_time)
self.expiry_time_msec: Optional[int] = self.parse_duration(expiry_time)
else:
self.expiry_time_msec = None

View File

@@ -137,33 +137,14 @@ class EmailConfig(Config):
if self.root.registration.account_threepid_delegate_email
else ThreepidBehaviour.LOCAL
)
# Prior to Synapse v1.4.0, there was another option that defined whether Synapse would
# use an identity server to send password reset tokens on its behalf. We now warn the user
# if they have this set and tell them to use the updated option, while using a default
# identity server in the process.
self.using_identity_server_from_trusted_list = False
if (
not self.root.registration.account_threepid_delegate_email
and config.get("trust_identity_server_for_password_resets", False) is True
):
# Use the first entry in self.trusted_third_party_id_servers instead
if self.trusted_third_party_id_servers:
# XXX: It's a little confusing that account_threepid_delegate_email is modified
# both in RegistrationConfig and here. We should factor this bit out
first_trusted_identity_server = self.trusted_third_party_id_servers[0]
# trusted_third_party_id_servers does not contain a scheme whereas
# account_threepid_delegate_email is expected to. Presume https
self.root.registration.account_threepid_delegate_email = (
"https://" + first_trusted_identity_server
)
self.using_identity_server_from_trusted_list = True
else:
raise ConfigError(
"Attempted to use an identity server from"
'"trusted_third_party_id_servers" but it is empty.'
)
if config.get("trust_identity_server_for_password_resets"):
raise ConfigError(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
"details and update your config file."
)
self.local_threepid_handling_disabled_due_to_email_config = False
if (

View File

@@ -31,6 +31,8 @@ class JWTConfig(Config):
self.jwt_secret = jwt_config["secret"]
self.jwt_algorithm = jwt_config["algorithm"]
self.jwt_subject_claim = jwt_config.get("subject_claim", "sub")
# The issuer and audiences are optional; if provided, it is asserted
# that the claims exist on the JWT.
self.jwt_issuer = jwt_config.get("issuer")
@@ -46,6 +48,7 @@ class JWTConfig(Config):
self.jwt_enabled = False
self.jwt_secret = None
self.jwt_algorithm = None
self.jwt_subject_claim = None
self.jwt_issuer = None
self.jwt_audiences = None
@@ -88,6 +91,12 @@ class JWTConfig(Config):
#
#algorithm: "provided-by-your-issuer"
# Name of the claim containing a unique identifier for the user.
#
# Optional, defaults to `sub`.
#
#subject_claim: "sub"
# The issuer to validate the "iss" claim against.
#
# Optional, if provided the "iss" claim will be required and
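
As a sketch of how a configurable subject claim might be consumed, using the PyJWT library (an assumption here; names are illustrative):

import jwt  # PyJWT

def localpart_from_token(token: str, secret: str, subject_claim: str = "sub") -> str:
    # Signature/algorithm checks (and issuer/audience, when configured)
    # happen inside jwt.decode(); the subject claim is then looked up by
    # the configured name rather than being hard-coded to "sub".
    claims = jwt.decode(token, secret, algorithms=["HS256"])
    subject = claims.get(subject_claim)
    if subject is None:
        raise ValueError("JWT is missing the %r claim" % (subject_claim,))
    return subject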

View File

@@ -16,6 +16,7 @@
import hashlib
import logging
import os
from typing import Any, Dict
import attr
import jsonschema
@@ -312,7 +313,7 @@ class KeyConfig(Config):
)
return keys
def generate_files(self, config, config_dir_path):
def generate_files(self, config: Dict[str, Any], config_dir_path: str) -> None:
if "signing_key" in config:
return

View File

@@ -18,7 +18,7 @@ import os
import sys
import threading
from string import Template
from typing import TYPE_CHECKING
from typing import TYPE_CHECKING, Any, Dict
import yaml
from zope.interface import implementer
@@ -185,7 +185,7 @@ class LoggingConfig(Config):
help=argparse.SUPPRESS,
)
def generate_files(self, config, config_dir_path):
def generate_files(self, config: Dict[str, Any], config_dir_path: str) -> None:
log_config = config.get("log_config")
if log_config and not os.path.exists(log_config):
log_file = self.abspath("homeserver.log")

View File

@@ -11,6 +11,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional
from synapse.api.constants import RoomCreationPreset
from synapse.config._base import Config, ConfigError
@@ -39,9 +40,7 @@ class RegistrationConfig(Config):
self.registration_shared_secret = config.get("registration_shared_secret")
self.bcrypt_rounds = config.get("bcrypt_rounds", 12)
self.trusted_third_party_id_servers = config.get(
"trusted_third_party_id_servers", ["matrix.org", "vector.im"]
)
account_threepid_delegates = config.get("account_threepid_delegates") or {}
self.account_threepid_delegate_email = account_threepid_delegates.get("email")
self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
@@ -114,26 +113,25 @@ class RegistrationConfig(Config):
session_lifetime = self.parse_duration(session_lifetime)
self.session_lifetime = session_lifetime
# The `access_token_lifetime` applies for tokens that can be renewed
# using a refresh token, as per MSC2918. If it is `None`, the refresh
# token mechanism is disabled.
#
# Since it is incompatible with the `session_lifetime` mechanism, it is set to
# `None` by default if a `session_lifetime` is set.
access_token_lifetime = config.get(
"access_token_lifetime", "5m" if session_lifetime is None else None
# The `refreshable_access_token_lifetime` applies for tokens that can be renewed
# using a refresh token, as per MSC2918.
# If it is `None`, the refresh token mechanism is disabled.
refreshable_access_token_lifetime = config.get(
"refreshable_access_token_lifetime",
"5m",
)
if access_token_lifetime is not None:
access_token_lifetime = self.parse_duration(access_token_lifetime)
self.access_token_lifetime = access_token_lifetime
if session_lifetime is not None and access_token_lifetime is not None:
raise ConfigError(
"The refresh token mechanism is incompatible with the "
"`session_lifetime` option. Consider disabling the "
"`session_lifetime` option or disabling the refresh token "
"mechanism by removing the `access_token_lifetime` option."
if refreshable_access_token_lifetime is not None:
refreshable_access_token_lifetime = self.parse_duration(
refreshable_access_token_lifetime
)
self.refreshable_access_token_lifetime: Optional[
int
] = refreshable_access_token_lifetime
refresh_token_lifetime = config.get("refresh_token_lifetime")
if refresh_token_lifetime is not None:
refresh_token_lifetime = self.parse_duration(refresh_token_lifetime)
self.refresh_token_lifetime: Optional[int] = refresh_token_lifetime
# The fallback template used for authenticating using a registration token
self.registration_token_template = self.read_template("registration_token.html")

View File

@@ -1,4 +1,5 @@
# Copyright 2018 New Vector Ltd
# Copyright 2021 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -12,6 +13,9 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List
from synapse.types import JsonDict
from synapse.util import glob_to_regex
from ._base import Config, ConfigError
@@ -20,7 +24,7 @@ from ._base import Config, ConfigError
class RoomDirectoryConfig(Config):
section = "roomdirectory"
def read_config(self, config, **kwargs):
def read_config(self, config, **kwargs) -> None:
self.enable_room_list_search = config.get("enable_room_list_search", True)
alias_creation_rules = config.get("alias_creation_rules")
@@ -47,7 +51,7 @@ class RoomDirectoryConfig(Config):
_RoomDirectoryRule("room_list_publication_rules", {"action": "allow"})
]
def generate_config_section(self, config_dir_path, server_name, **kwargs):
def generate_config_section(self, config_dir_path, server_name, **kwargs) -> str:
return """
# Uncomment to disable searching the public room list. When disabled
# blocks searching local and remote room lists for local and remote
@@ -113,16 +117,16 @@ class RoomDirectoryConfig(Config):
# action: allow
"""
def is_alias_creation_allowed(self, user_id, room_id, alias):
def is_alias_creation_allowed(self, user_id: str, room_id: str, alias: str) -> bool:
"""Checks if the given user is allowed to create the given alias
Args:
user_id (str)
room_id (str)
alias (str)
user_id: The user to check.
room_id: The room ID for the alias.
alias: The alias being created.
Returns:
boolean: True if user is allowed to create the alias
True if user is allowed to create the alias
"""
for rule in self._alias_creation_rules:
if rule.matches(user_id, room_id, [alias]):
@@ -130,16 +134,18 @@ class RoomDirectoryConfig(Config):
return False
def is_publishing_room_allowed(self, user_id, room_id, aliases):
def is_publishing_room_allowed(
self, user_id: str, room_id: str, aliases: List[str]
) -> bool:
"""Checks if the given user is allowed to publish the room
Args:
user_id (str)
room_id (str)
aliases (list[str]): any local aliases associated with the room
user_id: The user ID publishing the room.
room_id: The room being published.
aliases: any local aliases associated with the room
Returns:
boolean: True if user can publish room
True if user can publish room
"""
for rule in self._room_list_publication_rules:
if rule.matches(user_id, room_id, aliases):
@@ -153,11 +159,11 @@ class _RoomDirectoryRule:
creating an alias or publishing a room.
"""
def __init__(self, option_name, rule):
def __init__(self, option_name: str, rule: JsonDict):
"""
Args:
option_name (str): Name of the config option this rule belongs to
rule (dict): The rule as specified in the config
option_name: Name of the config option this rule belongs to
rule: The rule as specified in the config
"""
action = rule["action"]
@@ -181,18 +187,18 @@ class _RoomDirectoryRule:
except Exception as e:
raise ConfigError("Failed to parse glob into regex") from e
def matches(self, user_id, room_id, aliases):
def matches(self, user_id: str, room_id: str, aliases: List[str]) -> bool:
"""Tests if this rule matches the given user_id, room_id and aliases.
Args:
user_id (str)
room_id (str)
aliases (list[str]): The associated aliases to the room. Will be a
single element for testing alias creation, and can be empty for
testing room publishing.
user_id: The user ID to check.
room_id: The room ID to check.
aliases: The associated aliases to the room. Will be a single element
for testing alias creation, and can be empty for testing room
publishing.
Returns:
boolean
True if the rule matches.
"""
# Note: The regexes are anchored at both ends
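
A rough, self-contained approximation of the matching described above, using fnmatch in place of Synapse's glob_to_regex (empty-alias handling for room publication is elided):

from fnmatch import fnmatchcase
from typing import List

def rule_matches_sketch(rule: dict, user_id: str, room_id: str, aliases: List[str]) -> bool:
    # Every glob in the rule must match; any single alias matching the
    # alias glob is sufficient.
    if not fnmatchcase(user_id, rule.get("user_id", "*")):
        return False
    if not fnmatchcase(room_id, rule.get("room_id", "*")):
        return False
    return any(fnmatchcase(alias, rule.get("alias", "*")) for alias in aliases)

rule = {"user_id": "@admin:*", "action": "allow"}
assert rule_matches_sketch(rule, "@admin:example.com", "!r:example.com", ["#hq:example.com"])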

View File

@@ -421,7 +421,7 @@ class ServerConfig(Config):
# before redacting them.
redaction_retention_period = config.get("redaction_retention_period", "7d")
if redaction_retention_period is not None:
self.redaction_retention_period = self.parse_duration(
self.redaction_retention_period: Optional[int] = self.parse_duration(
redaction_retention_period
)
else:
@@ -430,7 +430,7 @@ class ServerConfig(Config):
# How long to keep entries in the `users_ips` table.
user_ips_max_age = config.get("user_ips_max_age", "28d")
if user_ips_max_age is not None:
self.user_ips_max_age = self.parse_duration(user_ips_max_age)
self.user_ips_max_age: Optional[int] = self.parse_duration(user_ips_max_age)
else:
self.user_ips_max_age = None

View File

@@ -14,7 +14,6 @@
import logging
import os
from datetime import datetime
from typing import List, Optional, Pattern
from OpenSSL import SSL, crypto
@@ -133,55 +132,6 @@ class TlsConfig(Config):
self.tls_certificate: Optional[crypto.X509] = None
self.tls_private_key: Optional[crypto.PKey] = None
def is_disk_cert_valid(self, allow_self_signed=True):
"""
Is the certificate we have on disk valid, and if so, for how long?
Args:
allow_self_signed (bool): Should we allow the certificate we
read to be self signed?
Returns:
int: Days remaining of certificate validity.
None: No certificate exists.
"""
if not os.path.exists(self.tls_certificate_file):
return None
try:
with open(self.tls_certificate_file, "rb") as f:
cert_pem = f.read()
except Exception as e:
raise ConfigError(
"Failed to read existing certificate file %s: %s"
% (self.tls_certificate_file, e)
)
try:
tls_certificate = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
except Exception as e:
raise ConfigError(
"Failed to parse existing certificate file %s: %s"
% (self.tls_certificate_file, e)
)
if not allow_self_signed:
if tls_certificate.get_subject() == tls_certificate.get_issuer():
raise ValueError(
"TLS Certificate is self signed, and this is not permitted"
)
# YYYYMMDDhhmmssZ -- in UTC
expiry_data = tls_certificate.get_notAfter()
if expiry_data is None:
raise ValueError(
"TLS Certificate has no expiry date, and this is not permitted"
)
expires_on = datetime.strptime(expiry_data.decode("ascii"), "%Y%m%d%H%M%SZ")
now = datetime.utcnow()
days_remaining = (expires_on - now).days
return days_remaining
def read_certificate_from_disk(self):
"""
Read the certificates and private key from disk.
@@ -263,8 +213,8 @@ class TlsConfig(Config):
#
#federation_certificate_verification_whitelist:
# - lon.example.com
# - *.domain.com
# - *.onion
# - "*.domain.com"
# - "*.onion"
# List of custom certificate authorities for federation traffic.
#
@@ -295,7 +245,7 @@ class TlsConfig(Config):
cert_path = self.tls_certificate_file
logger.info("Loading TLS certificate from %s", cert_path)
cert_pem = self.read_file(cert_path, "tls_certificate_path")
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem)
cert = crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem.encode())
return cert
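
The `.encode()` added in the last hunk matters because `read_file` returns a str while pyOpenSSL expects a bytes buffer; a minimal sketch:

from OpenSSL import crypto

def load_cert_sketch(cert_pem: str) -> crypto.X509:
    # crypto.load_certificate() requires bytes, so str content read from
    # disk must be encoded first.
    return crypto.load_certificate(crypto.FILETYPE_PEM, cert_pem.encode())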

View File

@@ -53,8 +53,8 @@ class UserDirectoryConfig(Config):
# indexes were (re)built was before Synapse 1.44, you'll have to
# rebuild the indexes in order to search through all known users.
# These indexes are built the first time Synapse starts; admins can
# manually trigger a rebuild following the instructions at
# https://matrix-org.github.io/synapse/latest/user_directory.html
# manually trigger a rebuild via API following the instructions at
# https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/background_updates.html#run
#
# Uncomment to return search results containing all known users, even if that
# user does not share a room with the requester.

View File

@@ -1,5 +1,4 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017, 2018 New Vector Ltd
# Copyright 2014-2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -120,16 +119,6 @@ class VerifyJsonRequest:
key_ids=key_ids,
)
def to_fetch_key_request(self) -> "_FetchKeyRequest":
"""Create a key fetch request for all keys needed to satisfy the
verification request.
"""
return _FetchKeyRequest(
server_name=self.server_name,
minimum_valid_until_ts=self.minimum_valid_until_ts,
key_ids=self.key_ids,
)
class KeyLookupError(ValueError):
pass
@@ -179,8 +168,22 @@ class Keyring:
clock=hs.get_clock(),
process_batch_callback=self._inner_fetch_key_requests,
)
self.verify_key = get_verify_key(hs.signing_key)
self.hostname = hs.hostname
self._hostname = hs.hostname
# build a FetchKeyResult for each of our own keys, to short-circuit the
# fetcher.
self._local_verify_keys: Dict[str, FetchKeyResult] = {}
for key_id, key in hs.config.key.old_signing_keys.items():
self._local_verify_keys[key_id] = FetchKeyResult(
verify_key=key, valid_until_ts=key.expired_ts
)
vk = get_verify_key(hs.signing_key)
self._local_verify_keys[f"{vk.alg}:{vk.version}"] = FetchKeyResult(
verify_key=vk,
valid_until_ts=2 ** 63, # fake future timestamp
)
async def verify_json_for_server(
self,
@@ -267,22 +270,32 @@ class Keyring:
Codes.UNAUTHORIZED,
)
# If we are the originating server don't fetch verify key for self over federation
if verify_request.server_name == self.hostname:
await self._process_json(self.verify_key, verify_request)
return
found_keys: Dict[str, FetchKeyResult] = {}
# Add the keys we need to verify to the queue for retrieval. We queue
# up requests for the same server so we don't end up with many in flight
# requests for the same keys.
key_request = verify_request.to_fetch_key_request()
found_keys_by_server = await self._server_queue.add_to_queue(
key_request, key=verify_request.server_name
)
# If we are the originating server, short-circuit the key-fetch for any keys
# we already have
if verify_request.server_name == self._hostname:
for key_id in verify_request.key_ids:
if key_id in self._local_verify_keys:
found_keys[key_id] = self._local_verify_keys[key_id]
# Since we batch up requests the returned set of keys may contain keys
# from other servers, so we pull out only the ones we care about.
found_keys = found_keys_by_server.get(verify_request.server_name, {})
key_ids_to_find = set(verify_request.key_ids) - found_keys.keys()
if key_ids_to_find:
# Add the keys we need to verify to the queue for retrieval. We queue
# up requests for the same server so we don't end up with many in flight
# requests for the same keys.
key_request = _FetchKeyRequest(
server_name=verify_request.server_name,
minimum_valid_until_ts=verify_request.minimum_valid_until_ts,
key_ids=list(key_ids_to_find),
)
found_keys_by_server = await self._server_queue.add_to_queue(
key_request, key=verify_request.server_name
)
# Since we batch up requests the returned set of keys may contain keys
# from other servers, so we pull out only the ones we care about.
found_keys.update(found_keys_by_server.get(verify_request.server_name, {}))
# Verify each signature we got valid keys for, raising if we can't
# verify any of them.
@@ -654,21 +667,25 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
perspective_name,
)
request: JsonDict = {}
for queue_value in keys_to_fetch:
# there may be multiple requests for each server, so we have to merge
# them intelligently.
request_for_server = {
key_id: {
"minimum_valid_until_ts": queue_value.minimum_valid_until_ts,
}
for key_id in queue_value.key_ids
}
request.setdefault(queue_value.server_name, {}).update(request_for_server)
logger.debug("Request to notary server %s: %s", perspective_name, request)
try:
query_response = await self.client.post_json(
destination=perspective_name,
path="/_matrix/key/v2/query",
data={
"server_keys": {
queue_value.server_name: {
key_id: {
"minimum_valid_until_ts": queue_value.minimum_valid_until_ts,
}
for key_id in queue_value.key_ids
}
for queue_value in keys_to_fetch
}
},
data={"server_keys": request},
)
except (NotRetryingDestination, RequestSendFailed) as e:
# these both have str() representations which we can't really improve upon
@@ -676,6 +693,10 @@ class PerspectivesKeyFetcher(BaseV2KeyFetcher):
except HttpResponseException as e:
raise KeyLookupError("Remote server returned an error: %s" % (e,))
logger.debug(
"Response from notary server %s: %s", perspective_name, query_response
)
keys: Dict[str, Dict[str, FetchKeyResult]] = {}
added_keys: List[Tuple[str, str, FetchKeyResult]] = []
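
A condensed sketch of the per-server merge performed above, with a hypothetical stand-in for `_FetchKeyRequest`:

from typing import Dict, List, NamedTuple

class KeyRequestSketch(NamedTuple):
    server_name: str
    minimum_valid_until_ts: int
    key_ids: List[str]

def merge_requests(requests: List[KeyRequestSketch]) -> Dict[str, dict]:
    # Multiple queued requests for the same server are folded into a
    # single entry, so each server appears only once in the notary body.
    body: Dict[str, dict] = {}
    for req in requests:
        entry = body.setdefault(req.server_name, {})
        for key_id in req.key_ids:
            entry[key_id] = {"minimum_valid_until_ts": req.minimum_valid_until_ts}
    return body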

View File

@@ -322,6 +322,11 @@ class _AsyncEventContextImpl(EventContext):
attributes by loading from the database.
"""
if self.state_group is None:
# No state group means the event is an outlier. Usually the state_ids dicts are also
# pre-set to empty dicts, but they get reset when the context is serialized, so set
# them to empty dicts again here.
self._current_state_ids = {}
self._prev_state_ids = {}
return
current_state_ids = await self._storage.state.get_state_ids_for_group(

View File

@@ -1,4 +1,5 @@
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2021 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -392,15 +393,16 @@ class EventClientSerializer:
self,
event: Union[JsonDict, EventBase],
time_now: int,
bundle_aggregations: bool = True,
bundle_relations: bool = True,
**kwargs: Any,
) -> JsonDict:
"""Serializes a single event.
Args:
event
event: The event being serialized.
time_now: The current time in milliseconds
bundle_aggregations: Whether to bundle in related events
bundle_relations: Whether to include the bundled relations for this
event.
**kwargs: Arguments to pass to `serialize_event`
Returns:
@@ -410,77 +412,93 @@ class EventClientSerializer:
if not isinstance(event, EventBase):
return event
event_id = event.event_id
serialized_event = serialize_event(event, time_now, **kwargs)
# If MSC1849 is enabled then we need to look if there are any relations
# we need to bundle in with the event.
# Do not bundle relations if the event has been redacted
if not event.internal_metadata.is_redacted() and (
self._msc1849_enabled and bundle_aggregations
self._msc1849_enabled and bundle_relations
):
annotations = await self.store.get_aggregation_groups_for_event(event_id)
references = await self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f"
)
if annotations.chunk:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.ANNOTATION] = annotations.to_dict()
if references.chunk:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.REFERENCE] = references.to_dict()
edit = None
if event.type == EventTypes.Message:
edit = await self.store.get_applicable_edit(event_id)
if edit:
# If there is an edit replace the content, preserving existing
# relations.
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
edit_content = edit.content.copy()
# Unfreeze the event content if necessary, so that we may modify it below
edit_content = unfreeze(edit_content)
serialized_event["content"] = edit_content.get("m.new_content", {})
# Check for existing relations
relations = event.content.get("m.relates_to")
if relations:
# Keep the relations, ensuring we use a dict copy of the original
serialized_event["content"]["m.relates_to"] = relations.copy()
else:
serialized_event["content"].pop("m.relates_to", None)
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
"sender": edit.sender,
}
# If this event is the start of a thread, include a summary of the replies.
if self._msc3440_enabled:
(
thread_count,
latest_thread_event,
) = await self.store.get_thread_summary(event_id)
if latest_thread_event:
r = serialized_event["unsigned"].setdefault("m.relations", {})
r[RelationTypes.THREAD] = {
# Don't bundle aggregations as this could recurse forever.
"latest_event": await self.serialize_event(
latest_thread_event, time_now, bundle_aggregations=False
),
"count": thread_count,
}
await self._injected_bundled_relations(event, time_now, serialized_event)
return serialized_event
async def _injected_bundled_relations(
self, event: EventBase, time_now: int, serialized_event: JsonDict
) -> None:
"""Potentially injects bundled relations into the unsigned portion of the serialized event.
Args:
event: The event being serialized.
time_now: The current time in milliseconds
serialized_event: The serialized event which may be modified.
"""
event_id = event.event_id
# The bundled relations to include.
relations = {}
annotations = await self.store.get_aggregation_groups_for_event(event_id)
if annotations.chunk:
relations[RelationTypes.ANNOTATION] = annotations.to_dict()
references = await self.store.get_relations_for_event(
event_id, RelationTypes.REFERENCE, direction="f"
)
if references.chunk:
relations[RelationTypes.REFERENCE] = references.to_dict()
edit = None
if event.type == EventTypes.Message:
edit = await self.store.get_applicable_edit(event_id)
if edit:
# If there is an edit replace the content, preserving existing
# relations.
# Ensure we take copies of the edit content, otherwise we risk modifying
# the original event.
edit_content = edit.content.copy()
# Unfreeze the event content if necessary, so that we may modify it below
edit_content = unfreeze(edit_content)
serialized_event["content"] = edit_content.get("m.new_content", {})
# Check for existing relations
relates_to = event.content.get("m.relates_to")
if relates_to:
# Keep the relations, ensuring we use a dict copy of the original
serialized_event["content"]["m.relates_to"] = relates_to.copy()
else:
serialized_event["content"].pop("m.relates_to", None)
relations[RelationTypes.REPLACE] = {
"event_id": edit.event_id,
"origin_server_ts": edit.origin_server_ts,
"sender": edit.sender,
}
# If this event is the start of a thread, include a summary of the replies.
if self._msc3440_enabled:
(
thread_count,
latest_thread_event,
) = await self.store.get_thread_summary(event_id)
if latest_thread_event:
relations[RelationTypes.THREAD] = {
# Don't bundle relations as this could recurse forever.
"latest_event": await self.serialize_event(
latest_thread_event, time_now, bundle_relations=False
),
"count": thread_count,
}
# If any bundled relations were found, include them.
if relations:
serialized_event["unsigned"].setdefault("m.relations", {}).update(relations)
async def serialize_events(
self, events: Iterable[Union[JsonDict, EventBase]], time_now: int, **kwargs: Any
) -> List[JsonDict]:
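
For orientation, an illustrative (not authoritative) shape of the `unsigned` section this produces, with made-up identifiers and the unstable MSC3440 thread relation name of this era:

serialized_event = {
    "event_id": "$root",
    "content": {"body": "start of a thread"},
    "unsigned": {
        "m.relations": {
            # Aggregated annotations (e.g. reactions), grouped by key.
            "m.annotation": {"chunk": [{"type": "m.reaction", "key": "👍", "count": 3}]},
            # Pointer to the most recent edit, if any.
            "m.replace": {
                "event_id": "$edit",
                "origin_server_ts": 1637000000000,
                "sender": "@alice:example.com",
            },
            # Thread summary: reply count plus the latest reply, itself
            # serialized with bundle_relations=False to avoid recursion.
            "io.element.thread": {"count": 2, "latest_event": {"event_id": "$reply"}},
        }
    },
}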

View File

@@ -1395,11 +1395,28 @@ class FederationClient(FederationBase):
async def send_request(
destination: str,
) -> Tuple[JsonDict, Sequence[JsonDict], Sequence[str]]:
res = await self.transport_layer.get_room_hierarchy(
destination=destination,
room_id=room_id,
suggested_only=suggested_only,
)
try:
res = await self.transport_layer.get_room_hierarchy(
destination=destination,
room_id=room_id,
suggested_only=suggested_only,
)
except HttpResponseException as e:
# If an error is received that is due to an unrecognised endpoint,
# fall back to the unstable endpoint. Otherwise consider it a
# legitimate error and raise.
if not self._is_unknown_endpoint(e):
raise
logger.debug(
"Couldn't fetch room hierarchy with the v1 API, falling back to the unstable API"
)
res = await self.transport_layer.get_room_hierarchy_unstable(
destination=destination,
room_id=room_id,
suggested_only=suggested_only,
)
room = res.get("room")
if not isinstance(room, dict):
@@ -1449,6 +1466,10 @@ class FederationClient(FederationBase):
if e.code != 502:
raise
logger.debug(
"Couldn't fetch room hierarchy, falling back to the spaces API"
)
# Fallback to the old federation API and translate the results if
# no servers implement the new API.
#

View File

@@ -613,8 +613,11 @@ class FederationServer(FederationBase):
state = await self.store.get_events(state_ids)
time_now = self._clock.time_msec()
event_json = event.get_pdu_json()
return {
"org.matrix.msc3083.v2.event": event.get_pdu_json(),
# TODO Remove the unstable prefix when servers have updated.
"org.matrix.msc3083.v2.event": event_json,
"event": event_json,
"state": [p.get_pdu_json(time_now) for p in state.values()],
"auth_chain": [p.get_pdu_json(time_now) for p in auth_chain],
}

View File

@@ -1192,10 +1192,24 @@ class TransportLayerClient:
)
async def get_room_hierarchy(
self,
destination: str,
room_id: str,
suggested_only: bool,
self, destination: str, room_id: str, suggested_only: bool
) -> JsonDict:
"""
Args:
destination: The remote server
room_id: The room ID to ask about.
suggested_only: if True, only suggested rooms will be returned
"""
path = _create_v1_path("/hierarchy/%s", room_id)
return await self.client.get_json(
destination=destination,
path=path,
args={"suggested_only": "true" if suggested_only else "false"},
)
async def get_room_hierarchy_unstable(
self, destination: str, room_id: str, suggested_only: bool
) -> JsonDict:
"""
Args:
@@ -1317,15 +1331,26 @@ class SendJoinParser(ByteParser[SendJoinResponse]):
prefix + "auth_chain.item",
use_float=True,
)
self._coro_event = ijson.kvitems_coro(
# TODO Remove the unstable prefix when servers have updated.
#
# By re-using the same event dictionary this will cause the parsing of
# org.matrix.msc3083.v2.event and event to stomp over each other.
# Generally this should be fine.
self._coro_unstable_event = ijson.kvitems_coro(
_event_parser(self._response.event_dict),
prefix + "org.matrix.msc3083.v2.event",
use_float=True,
)
self._coro_event = ijson.kvitems_coro(
_event_parser(self._response.event_dict),
prefix + "event",
use_float=True,
)
def write(self, data: bytes) -> int:
self._coro_state.send(data)
self._coro_auth.send(data)
self._coro_unstable_event.send(data)
self._coro_event.send(data)
return len(data)
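
A simpler, generator-based illustration of what the incremental parser pulls out of a /send_join response, using ijson's kvitems (the production code uses the coroutine variants so chunks can be fed as they arrive):

import io

import ijson

payload = b'{"event": {"type": "m.room.member"}, "org.matrix.msc3083.v2.event": {"type": "m.room.member"}}'

event_dict = {}
# Both prefixes write into the same dict, mirroring the coroutines above;
# duplicate keys simply stomp over each other, which is fine here.
for prefix in ("event", "org.matrix.msc3083.v2.event"):
    for key, value in ijson.kvitems(io.BytesIO(payload), prefix):
        event_dict[key] = value

print(event_dict)  # {'type': 'm.room.member'}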

View File

@@ -611,7 +611,6 @@ class FederationSpaceSummaryServlet(BaseFederationServlet):
class FederationRoomHierarchyServlet(BaseFederationServlet):
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946"
PATH = "/hierarchy/(?P<room_id>[^/]*)"
def __init__(
@@ -637,6 +636,10 @@ class FederationRoomHierarchyServlet(BaseFederationServlet):
)
class FederationRoomHierarchyUnstableServlet(FederationRoomHierarchyServlet):
PREFIX = FEDERATION_UNSTABLE_PREFIX + "/org.matrix.msc2946"
class RoomComplexityServlet(BaseFederationServlet):
"""
Indicates to other servers how complex (and therefore likely
@@ -701,6 +704,7 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
RoomComplexityServlet,
FederationSpaceSummaryServlet,
FederationRoomHierarchyServlet,
FederationRoomHierarchyUnstableServlet,
FederationV1SendKnockServlet,
FederationMakeKnockServlet,
)
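
The subclass-with-overridden-PREFIX trick keeps both endpoints alive from one handler; a minimal sketch with hypothetical names:

class RoomHierarchyServletSketch:
    PREFIX = "/_matrix/federation/v1"
    PATH = "/hierarchy/(?P<room_id>[^/]*)"

class RoomHierarchyUnstableServletSketch(RoomHierarchyServletSketch):
    # Same handler logic, re-registered under the unstable prefix so
    # not-yet-updated servers can keep using the old path.
    PREFIX = "/_matrix/federation/unstable/org.matrix.msc2946"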

View File

@@ -40,6 +40,8 @@ from typing import TYPE_CHECKING, Optional, Tuple
from signedjson.sign import sign_json
from twisted.internet.defer import Deferred
from synapse.api.errors import HttpResponseException, RequestSendFailed, SynapseError
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import JsonDict, get_domain_from_id
@@ -166,7 +168,7 @@ class GroupAttestionRenewer:
return {}
def _start_renew_attestations(self) -> None:
def _start_renew_attestations(self) -> "Deferred[None]":
return run_as_background_process("renew_attestations", self._renew_attestations)
async def _renew_attestations(self) -> None:

View File

@@ -18,6 +18,7 @@ import time
import unicodedata
import urllib.parse
from binascii import crc32
from http import HTTPStatus
from typing import (
TYPE_CHECKING,
Any,
@@ -756,53 +757,109 @@ class AuthHandler:
async def refresh_token(
self,
refresh_token: str,
valid_until_ms: Optional[int],
) -> Tuple[str, str]:
access_token_valid_until_ms: Optional[int],
refresh_token_valid_until_ms: Optional[int],
) -> Tuple[str, str, Optional[int]]:
"""
Consumes a refresh token and generates both a new access token and a new refresh token from it.
The consumed refresh token is considered invalid after the first use of the new access token or the new refresh token.
The lifetime of both the access token and refresh token will be capped so that they
do not exceed the session's ultimate expiry time, if applicable.
Args:
refresh_token: The token to consume.
valid_until_ms: The expiration timestamp of the new access token.
access_token_valid_until_ms: The expiration timestamp of the new access token.
None if the access token does not expire.
refresh_token_valid_until_ms: The expiration timestamp of the new refresh token.
None if the refresh token does not expire.
Returns:
A tuple containing the new access token and refresh token
A tuple containing:
- the new access token
- the new refresh token
- the actual expiry time of the access token, which may be earlier than
`access_token_valid_until_ms`.
"""
# Verify the token signature first before looking up the token
if not self._verify_refresh_token(refresh_token):
raise SynapseError(401, "invalid refresh token", Codes.UNKNOWN_TOKEN)
raise SynapseError(
HTTPStatus.UNAUTHORIZED, "invalid refresh token", Codes.UNKNOWN_TOKEN
)
existing_token = await self.store.lookup_refresh_token(refresh_token)
if existing_token is None:
raise SynapseError(401, "refresh token does not exist", Codes.UNKNOWN_TOKEN)
raise SynapseError(
HTTPStatus.UNAUTHORIZED,
"refresh token does not exist",
Codes.UNKNOWN_TOKEN,
)
if (
existing_token.has_next_access_token_been_used
or existing_token.has_next_refresh_token_been_refreshed
):
raise SynapseError(
403, "refresh token isn't valid anymore", Codes.FORBIDDEN
HTTPStatus.FORBIDDEN,
"refresh token isn't valid anymore",
Codes.FORBIDDEN,
)
now_ms = self._clock.time_msec()
if existing_token.expiry_ts is not None and existing_token.expiry_ts < now_ms:
raise SynapseError(
HTTPStatus.FORBIDDEN,
"The supplied refresh token has expired",
Codes.FORBIDDEN,
)
if existing_token.ultimate_session_expiry_ts is not None:
# This session has a bounded lifetime, even across refreshes.
if access_token_valid_until_ms is not None:
access_token_valid_until_ms = min(
access_token_valid_until_ms,
existing_token.ultimate_session_expiry_ts,
)
else:
access_token_valid_until_ms = existing_token.ultimate_session_expiry_ts
if refresh_token_valid_until_ms is not None:
refresh_token_valid_until_ms = min(
refresh_token_valid_until_ms,
existing_token.ultimate_session_expiry_ts,
)
else:
refresh_token_valid_until_ms = existing_token.ultimate_session_expiry_ts
if existing_token.ultimate_session_expiry_ts < now_ms:
raise SynapseError(
HTTPStatus.FORBIDDEN,
"The session has expired and can no longer be refreshed",
Codes.FORBIDDEN,
)
(
new_refresh_token,
new_refresh_token_id,
) = await self.get_refresh_token_for_user_id(
user_id=existing_token.user_id, device_id=existing_token.device_id
)
access_token = await self.get_access_token_for_user_id(
) = await self.create_refresh_token_for_user_id(
user_id=existing_token.user_id,
device_id=existing_token.device_id,
valid_until_ms=valid_until_ms,
expiry_ts=refresh_token_valid_until_ms,
ultimate_session_expiry_ts=existing_token.ultimate_session_expiry_ts,
)
access_token = await self.create_access_token_for_user_id(
user_id=existing_token.user_id,
device_id=existing_token.device_id,
valid_until_ms=access_token_valid_until_ms,
refresh_token_id=new_refresh_token_id,
)
await self.store.replace_refresh_token(
existing_token.token_id, new_refresh_token_id
)
return access_token, new_refresh_token
return access_token, new_refresh_token, access_token_valid_until_ms
def _verify_refresh_token(self, token: str) -> bool:
"""
@@ -832,10 +889,12 @@ class AuthHandler:
return True
async def get_refresh_token_for_user_id(
async def create_refresh_token_for_user_id(
self,
user_id: str,
device_id: str,
expiry_ts: Optional[int],
ultimate_session_expiry_ts: Optional[int],
) -> Tuple[str, int]:
"""
Creates a new refresh token for the user with the given user ID.
@@ -843,6 +902,13 @@ class AuthHandler:
Args:
user_id: canonical user ID
device_id: the device ID to associate with the token.
expiry_ts (milliseconds since the epoch): Time after which the
refresh token cannot be used.
If None, the refresh token never expires until it has been used.
ultimate_session_expiry_ts (milliseconds since the epoch):
Time at which the session will end and can not be extended any
further.
If None, the session can be refreshed indefinitely.
Returns:
The newly created refresh token and its ID in the database
@@ -852,10 +918,12 @@ class AuthHandler:
user_id=user_id,
token=refresh_token,
device_id=device_id,
expiry_ts=expiry_ts,
ultimate_session_expiry_ts=ultimate_session_expiry_ts,
)
return refresh_token, refresh_token_id
async def get_access_token_for_user_id(
async def create_access_token_for_user_id(
self,
user_id: str,
device_id: Optional[str],
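
The capping above reduces to one rule: a None lifetime means "no expiry", and the session's ultimate expiry, when set, always wins. A condensed sketch:

from typing import Optional

def cap_to_session_expiry(
    valid_until_ms: Optional[int], ultimate_session_expiry_ts: Optional[int]
) -> Optional[int]:
    # Mirrors the min()/fallback logic in refresh_token() for both the
    # access token and the refresh token lifetimes.
    if ultimate_session_expiry_ts is None:
        return valid_until_ms
    if valid_until_ms is None:
        return ultimate_session_expiry_ts
    return min(valid_until_ms, ultimate_session_expiry_ts)

assert cap_to_session_expiry(2_000, 1_500) == 1_500
assert cap_to_session_expiry(None, 1_500) == 1_500
assert cap_to_session_expiry(2_000, None) == 2_000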

View File

@@ -124,7 +124,7 @@ class EventStreamHandler:
as_client_event=as_client_event,
# We don't bundle "live" events, as otherwise clients
# will end up double counting annotations.
bundle_aggregations=False,
bundle_relations=False,
)
chunk = {

View File

@@ -464,15 +464,6 @@ class IdentityHandler:
if next_link:
params["next_link"] = next_link
if self.hs.config.email.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
"details and update your config file."
)
try:
data = await self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/email/requestToken",
@@ -517,15 +508,6 @@ class IdentityHandler:
if next_link:
params["next_link"] = next_link
if self.hs.config.email.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
"details and update your config file."
)
try:
data = await self.http_client.post_json_get_json(
id_server + "/_matrix/identity/api/v1/validate/msisdn/requestToken",

View File

@@ -252,7 +252,7 @@ class MessageHandler:
now,
# We don't bother bundling aggregations in when asked for state
# events, as clients won't use them.
bundle_aggregations=False,
bundle_relations=False,
)
return events
@@ -1001,13 +1001,52 @@ class EventCreationHandler:
)
self.validator.validate_new(event, self.config)
await self._validate_event_relation(event)
logger.debug("Created event %s", event.event_id)
return event, context
async def _validate_event_relation(self, event: EventBase) -> None:
"""
Ensure the relation data on a new event is not bogus.
Args:
event: The event being created.
Raises:
SynapseError if the event is invalid.
"""
relation = event.content.get("m.relates_to")
if not relation:
return
relation_type = relation.get("rel_type")
if not relation_type:
return
# Ensure the parent is real.
relates_to = relation.get("event_id")
if not relates_to:
return
parent_event = await self.store.get_event(relates_to, allow_none=True)
if parent_event:
# And in the same room.
if parent_event.room_id != event.room_id:
raise SynapseError(400, "Relations must be in the same room")
else:
# There must be some reason that the client knows the event exists;
# see if there are existing relations. If so, assume everything is fine.
if not await self.store.event_is_target_of_relation(relates_to):
# Otherwise, the client can't know about the parent event!
raise SynapseError(400, "Can't send relation to unknown event")
# If this event is an annotation then we check that the sender
# can't annotate the same way twice (e.g. stops users from liking an
# event multiple times).
relation = event.content.get("m.relates_to", {})
if relation.get("rel_type") == RelationTypes.ANNOTATION:
relates_to = relation["event_id"]
if relation_type == RelationTypes.ANNOTATION:
aggregation_key = relation["key"]
already_exists = await self.store.has_user_annotated_event(
@@ -1016,9 +1055,12 @@ class EventCreationHandler:
if already_exists:
raise SynapseError(400, "Can't send same reaction twice")
logger.debug("Created event %s", event.event_id)
return event, context
# Don't attempt to start a thread if the parent event is a relation.
elif relation_type == RelationTypes.THREAD:
if await self.store.event_includes_relation(relates_to):
raise SynapseError(
400, "Cannot start threads from an event with a relation"
)
@measure_func("handle_new_client_event")
async def handle_new_client_event(
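
Illustrative event contents (made-up IDs, unstable MSC3440 rel_type) that exercise the checks in _validate_event_relation:

# A thread reply; rejected with "Cannot start threads from an event with
# a relation" if "$root" itself carries an m.relates_to.
thread_reply_content = {
    "body": "a reply in a thread",
    "m.relates_to": {"rel_type": "io.element.thread", "event_id": "$root"},
}

# An annotation; sending the same (sender, key) pair twice is rejected
# with "Can't send same reaction twice".
reaction_content = {
    "m.relates_to": {"rel_type": "m.annotation", "event_id": "$root", "key": "👍"},
}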

Some files were not shown because too many files have changed in this diff.