
Compare commits


1 commit

Author SHA1 Message Date
David Teller
ae66c672fe Uniformize spam-checker API:
- Some callbacks should return `True` to allow, `False` to deny, while others should return `True` to deny and `False` to allow. With this PR, all callbacks return `ALLOW` to allow or a `Codes` (typically `Codes.FORBIDDEN`) to deny.
- Similarly, some methods returned `True` to allow, `False` to deny, while others returned `True` to deny and `False` to allow. They now all return `ALLOW` to allow or a `Codes` to deny.
- Spam-checker implementations may now return an explicit code, e.g. to differentiate between "User account has been suspended" (which is in practice required by law in some countries, including UK) and "This message looks like spam".
2022-05-11 10:32:30 +02:00
222 changed files with 1763 additions and 4514 deletions
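The inconsistency this commit removes can be sketched as follows. The callback names are real spam-checker callbacks; the bodies, the `Decision` enum, and the import paths are illustrative assumptions based on the commit message and the documentation changes further down.

```python
from synapse.api.errors import Codes          # assumed import paths
from synapse.spam_checker_api import Decision

# Before this commit, boolean conventions differed between callbacks:
#   user_may_invite       -> True meant "allow"
#   check_event_for_spam  -> True meant "this is spam", i.e. deny
# After it, every callback uses the same convention:

async def user_may_invite(inviter: str, invitee: str, room_id: str):
    return Decision.ALLOW                  # permit the action

async def check_event_for_spam(event):
    return Codes.FORBIDDEN                 # deny, with an explicit error code
```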

View File

@@ -1,18 +1,7 @@
Synapse 1.59.1 (2022-05-18)
===========================
Synapse 1.59.0rc1 (2022-05-10)
==============================
This release fixes a long-standing issue which could prevent Synapse's user directory from updating properly.
Bugfixes
----------------
- Fix a long-standing bug where the user directory background process would fail to make forward progress if a user included a null codepoint in their display name or avatar. Contributed by Nick @ Beeper. ([\#12762](https://github.com/matrix-org/synapse/issues/12762))
Synapse 1.59.0 (2022-05-17)
===========================
Synapse 1.59 makes several changes that server administrators should be aware of:
This release makes several changes that server administrators should be aware of:
- Device name lookup over federation is now disabled by default. ([\#12616](https://github.com/matrix-org/synapse/issues/12616))
- The `synapse.app.appservice` and `synapse.app.user_dir` worker application types are now deprecated. ([\#12452](https://github.com/matrix-org/synapse/issues/12452), [\#12654](https://github.com/matrix-org/synapse/issues/12654))
@@ -21,27 +10,6 @@ See [the upgrade notes](https://github.com/matrix-org/synapse/blob/develop/docs/
Additionally, this release removes the non-standard `m.login.jwt` login type from Synapse. It can be replaced with `org.matrix.login.jwt` for identical behaviour. This is only used if `jwt_config.enabled` is set to `true` in the configuration. ([\#12597](https://github.com/matrix-org/synapse/issues/12597))
Bugfixes
--------
- Fix DB performance regression introduced in Synapse 1.59.0rc2. ([\#12745](https://github.com/matrix-org/synapse/issues/12745))
Synapse 1.59.0rc2 (2022-05-16)
==============================
Note: this release candidate includes a performance regression which can cause database disruption. Other release candidates in the v1.59.0 series are not affected, and a fix will be included in the v1.59.0 final release.
Bugfixes
--------
- Fix a bug introduced in Synapse 1.58.0 where `/sync` would fail if the most recent event in a room was rejected. ([\#12729](https://github.com/matrix-org/synapse/issues/12729))
Synapse 1.59.0rc1 (2022-05-10)
==============================
Features
--------

View File

@@ -1 +0,0 @@
Improve event caching mechanism to avoid having multiple copies of an event in memory at a time.

View File

@@ -1 +0,0 @@
Preparation for faster-room-join work: return subsets of room state which we already have, immediately.

View File

@@ -1 +0,0 @@
Measure the time taken in spam-checking callbacks and expose those measurements as metrics.

View File

@@ -1 +0,0 @@
Replace string literal instances of stream key types with typed constants.

View File

@@ -1 +0,0 @@
Add a `default_power_level_content_override` config option to set default room power levels per room preset.

View File

@@ -1 +0,0 @@
Add support for [MSC3787: Allowing knocks to restricted rooms](https://github.com/matrix-org/matrix-spec-proposals/pull/3787).

View File

@@ -1 +0,0 @@
Send `USER_IP` commands on a different Redis channel, in order to reduce traffic to workers that do not process these commands.

View File

@@ -1 +0,0 @@
Synapse will now reload [cache config](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#caching) when it receives a [SIGHUP](https://en.wikipedia.org/wiki/SIGHUP) signal.

View File

@@ -1 +0,0 @@
Remove code which updates unused database column `application_services_state.last_txn`.

View File

@@ -1 +0,0 @@
Add a unique index to `state_group_edges` to prevent duplicates being accidentally introduced and the consequential impact to performance.

View File

@@ -1 +0,0 @@
Remove an unneeded class in the push code.

View File

@@ -1 +0,0 @@
Consolidate parsing of relation information from events.

View File

@@ -1 +0,0 @@
Fix a long-standing bug where an empty room would be created when a user with an insufficient power level tried to upgrade a room.

View File

@@ -1 +0,0 @@
Respect the `@cancellable` flag for `DirectServe{Html,Json}Resource`s.

View File

@@ -1 +0,0 @@
Respect the `@cancellable` flag for `RestServlet`s and `BaseFederationServlet`s.

View File

@@ -1 +0,0 @@
Respect the `@cancellable` flag for `ReplicationEndpoint`s.

View File

@@ -1 +0,0 @@
Add a config options to allow for auto-tuning of caches.

View File

@@ -1 +0,0 @@
Convert namespace class `Codes` into a string enum.

View File

@@ -1 +0,0 @@
Complain if a federation endpoint has the `@cancellable` flag, since some of the wrapper code may not handle cancellation correctly yet.

View File

@@ -1 +0,0 @@
Enable cancellation of `GET /rooms/$room_id/members`, `GET /rooms/$room_id/state` and `GET /rooms/$room_id/state/$event_type/*` requests.

View File

@@ -1 +0,0 @@
Require a body in POST requests to `/rooms/{roomId}/receipt/{receiptType}/{eventId}`, as required by the [Matrix specification](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidreceiptreceipttypeeventid). This breaks compatibility with Element Android 1.2.0 and earlier: users of those clients will be unable to send read receipts.

View File

@@ -1 +0,0 @@
Optimize private read receipt filtering.

View File

@@ -1 +0,0 @@
Fix a bug introduced in Synapse 1.30.0 where empty rooms could be automatically created if a monthly active users limit is set.

View File

@@ -1 +0,0 @@
Fix a typo in the Media Admin API documentation.

View File

@@ -1 +0,0 @@
Add type annotations to increase the number of modules passing `disallow-untyped-defs`.

View File

@@ -1 +0,0 @@
Add some type hints to datastore.

View File

@@ -1 +0,0 @@
Drop the logging level of status messages for the URL preview cache expiry job from INFO to DEBUG.

View File

@@ -1 +0,0 @@
Fix push to dismiss notifications when read on another client. Contributed by @SpiritCroc @ Beeper.

View File

@@ -1 +0,0 @@
Downgrade some OIDC errors to warnings in the logs, to reduce the noise of Sentry reports.

View File

@@ -1 +0,0 @@
Add type annotations to increase the number of modules passing `disallow-untyped-defs`.

View File

@@ -1 +0,0 @@
Update the OpenID Connect example for Keycloak to be compatible with newer versions of Keycloak. Contributed by @nhh.

View File

@@ -1 +0,0 @@
Update configs used by Complement to allow more invites/3PID validations during tests.

View File

@@ -1 +0,0 @@
Tidy up and type-hint the database engine modules.

View File

@@ -1 +0,0 @@
Fix typo in server listener documentation.

View File

@@ -1 +0,0 @@
Fix poor database performance when reading the cache invalidation stream for large servers with lots of workers.

View File

@@ -1 +0,0 @@
Link to the configuration manual from the welcome page of the documentation.

View File

@@ -1 +0,0 @@
Fix typo in 'run_background_tasks_on' option name in configuration manual documentation.

View File

@@ -1 +0,0 @@
Add some type hints to datastore.

View File

@@ -1 +0,0 @@
Add information regarding the `rc_invites` ratelimiting option to the configuration docs.

View File

@@ -1 +0,0 @@
Add documentation for cancellation of request processing.

View File

@@ -1 +0,0 @@
Fix a long-standing bug where the user directory background process would fail to make forward progress if a user included a null codepoint in their display name or avatar.

View File

@@ -1 +0,0 @@
Recommend using docker to run tests against postgres.

View File

@@ -1 +0,0 @@
Tweak the mypy plugin so that `@cached` can accept `on_invalidate=None`.

View File

@@ -1 +0,0 @@
Delete events from the `federation_inbound_events_staging` table when a room is purged through the admin API.

View File

@@ -1 +0,0 @@
Move methods that call `add_push_rule` to the `PushRuleStore` class.

View File

@@ -1 +0,0 @@
Add missing user directory endpoint from the generic worker documentation. Contributed by @olmari.

View File

@@ -1 +0,0 @@
Make handling of federation Authorization header (more) compliant with RFC7230.

View File

@@ -1 +0,0 @@
Refactor `resolve_state_groups_for_events` to not pull out full state when no state resolution happens.

View File

@@ -1,2 +0,0 @@
Add additional info to documentation of config option `cache_autotuning`.

View File

@@ -1,2 +0,0 @@
Update configuration manual documentation to document size-related suffixes.

View File

@@ -1 +0,0 @@
Give a meaningful error message when a client tries to create a room with an invalid alias localpart.

View File

@@ -1 +0,0 @@
Do not keep going if there are 5 back-to-back background update failures.

View File

@@ -1 +0,0 @@
Fix federation when using the demo scripts.

View File

@@ -1 +0,0 @@
Fix invalid YAML syntax in the example documentation for the `url_preview_accept_language` config option.

View File

@@ -1 +0,0 @@
Implement [MSC3818: Copy room type on upgrade](https://github.com/matrix-org/matrix-spec-proposals/pull/3818).

View File

@@ -1 +0,0 @@
The `hash_password` script now fails when it is called without specifying a config file.

View File

@@ -1 +0,0 @@
Simplify `disallow_untyped_defs` config in `mypy.ini`.

View File

@@ -1 +0,0 @@
Update EventContext `get_current_event_ids` and `get_prev_event_ids` to accept state filters and update calls where possible.

View File

@@ -1 +0,0 @@
Implement [MSC3818: Copy room type on upgrade](https://github.com/matrix-org/matrix-spec-proposals/pull/3818).

View File

@@ -1 +0,0 @@
Fix a bug introduced in 1.43.0 where a file (`providers.json`) was never closed. Contributed by @arkamar.

View File

@@ -1 +0,0 @@
Fix a long-standing bug where finished log contexts would be re-started when failing to contact remote homeservers.

View File

@@ -1 +0,0 @@
Send `USER_IP` commands on a different Redis channel, in order to reduce traffic to workers that do not process these commands.

View File

@@ -1 +0,0 @@
Pull out less state when handling gaps in room DAG.

debian/changelog (vendored): 18 lines changed
View File

@@ -1,21 +1,3 @@
matrix-synapse-py3 (1.59.1) stable; urgency=medium
* New Synapse release 1.59.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 18 May 2022 11:41:46 +0100
matrix-synapse-py3 (1.59.0) stable; urgency=medium
* New Synapse release 1.59.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 17 May 2022 10:26:50 +0100
matrix-synapse-py3 (1.59.0~rc2) stable; urgency=medium
* New Synapse release 1.59.0rc2.
-- Synapse Packaging team <packages@matrix.org> Mon, 16 May 2022 12:52:15 +0100
matrix-synapse-py3 (1.59.0~rc1) stable; urgency=medium
* Adjust how the `exported-requirements.txt` file is generated as part of

View File

@@ -12,7 +12,6 @@ export PYTHONPATH
echo "$PYTHONPATH"
# Create servers which listen on HTTP at 808x and HTTPS at 848x.
for port in 8080 8081 8082; do
echo "Starting server on port $port... "
@@ -20,12 +19,10 @@ for port in 8080 8081 8082; do
mkdir -p demo/$port
pushd demo/$port || exit
# Generate the configuration for the homeserver at localhost:848x, note that
# the homeserver name needs to match the HTTPS listening port for federation
# to properly work..
# Generate the configuration for the homeserver at localhost:848x.
python3 -m synapse.app.homeserver \
--generate-config \
--server-name "localhost:$https_port" \
--server-name "localhost:$port" \
--config-path "$port.config" \
--report-stats no

View File

@@ -53,18 +53,6 @@ rc_joins:
per_second: 9999
burst_count: 9999
rc_3pid_validation:
per_second: 1000
burst_count: 1000
rc_invites:
per_room:
per_second: 1000
burst_count: 1000
per_user:
per_second: 1000
burst_count: 1000
federation_rr_transactions_per_room_per_second: 9999
## Experimental Features ##

View File

@@ -87,18 +87,6 @@ rc_joins:
per_second: 9999
burst_count: 9999
rc_3pid_validation:
per_second: 1000
burst_count: 1000
rc_invites:
per_room:
per_second: 1000
burst_count: 1000
per_user:
per_second: 1000
burst_count: 1000
federation_rr_transactions_per_room_per_second: 9999
## API Configuration ##

View File

@@ -89,7 +89,6 @@
- [Database Schemas](development/database_schema.md)
- [Experimental features](development/experimental_features.md)
- [Synapse Architecture]()
- [Cancellation](development/synapse_architecture/cancellation.md)
- [Log Contexts](log_contexts.md)
- [Replication](replication.md)
- [TCP Replication](tcp_replication.md)

View File

@@ -289,7 +289,7 @@ POST /_synapse/admin/v1/purge_media_cache?before_ts=<unix_timestamp_in_ms>
URL Parameters
* `before_ts`: string representing a positive integer - Unix timestamp in milliseconds.
* `unix_timestamp_in_ms`: string representing a positive integer - Unix timestamp in milliseconds.
All cached media that was last accessed before this timestamp will be removed.
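As an illustration, a call to this endpoint might look like the following sketch (server name, token and timestamp are placeholders):

```python
import requests  # any HTTP client works; requests is used for brevity

resp = requests.post(
    "https://matrix.example.com/_synapse/admin/v1/purge_media_cache",
    params={"before_ts": "1652000000000"},  # Unix timestamp in milliseconds
    headers={"Authorization": "Bearer <admin_access_token>"},
)
print(resp.status_code, resp.json())
```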
Response:

View File

@@ -206,32 +206,7 @@ This means that we need to run our unit tests against PostgreSQL too. Our CI doe
this automatically for pull requests and release candidates, but it's sometimes
useful to reproduce this locally.
#### Using Docker
The easiest way to do so is to run Postgres via a docker container. In one
terminal:
```shell
docker run --rm -e POSTGRES_PASSWORD=mysecretpassword -e POSTGRES_USER=postgres -e POSTGRES_DB=postgres -p 5432:5432 postgres:14
```
If you see an error like
```
docker: Error response from daemon: driver failed programming external connectivity on endpoint nice_ride (b57bbe2e251b70015518d00c9981e8cb8346b5c785250341a6c53e3c899875f1): Error starting userland proxy: listen tcp4 0.0.0.0:5432: bind: address already in use.
```
then something is already bound to port 5432. You're probably already running postgres locally.
Once you have a postgres server running, invoke `trial` in a second terminal:
```shell
SYNAPSE_POSTGRES=1 SYNAPSE_POSTGRES_HOST=127.0.0.1 SYNAPSE_POSTGRES_USER=postgres SYNAPSE_POSTGRES_PASSWORD=mysecretpassword poetry run trial tests
```
#### Using an existing Postgres installation
If you have postgres already installed on your system, you can run `trial` with the
To do so, [configure Postgres](../postgres.md) and run `trial` with the
following environment variables matching your configuration:
- `SYNAPSE_POSTGRES` to anything nonempty
@@ -254,8 +229,8 @@ You don't need to specify the host, user, port or password if your Postgres
server is set to authenticate you over the UNIX socket (i.e. if the `psql` command
works without further arguments).
Your Postgres account needs to be able to create databases; see the postgres
docs for [`ALTER ROLE`](https://www.postgresql.org/docs/current/sql-alterrole.html).
Your Postgres account needs to be able to create databases.
## Run the integration tests ([Sytest](https://github.com/matrix-org/sytest)).

View File

@@ -5,7 +5,7 @@
Requires you to have a [Synapse development environment setup](https://matrix-org.github.io/synapse/develop/development/contributing_guide.html#4-install-the-dependencies).
The demo setup allows running three federation Synapse servers, with server
names `localhost:8480`, `localhost:8481`, and `localhost:8482`.
names `localhost:8080`, `localhost:8081`, and `localhost:8082`.
You can access them via any Matrix client over HTTP at `localhost:8080`,
`localhost:8081`, and `localhost:8082` or over HTTPS at `localhost:8480`,
@@ -20,10 +20,9 @@ and the servers are configured in a highly insecure way, including:
The servers are configured to store their data under `demo/8080`, `demo/8081`, and
`demo/8082`. This includes configuration, logs, SQLite databases, and media.
Note that when joining a public room on a different homeserver via "#foo:bar.net",
then you are (in the current implementation) joining a room with room_id "foo".
This means that it won't work if your homeserver already has a room with that
name.
Note that when joining a public room on a different HS via "#foo:bar.net", then
you are (in the current impl) joining a room with room_id "foo". This means that
it won't work if your HS already has a room with that name.
## Using the demo scripts

View File

@@ -1,392 +0,0 @@
# Cancellation
Sometimes, requests take a long time to service and clients disconnect
before Synapse produces a response. To avoid wasting resources, Synapse
can cancel request processing for select endpoints marked with the
`@cancellable` decorator.
Synapse makes use of Twisted's `Deferred.cancel()` feature to make
cancellation work. The `@cancellable` decorator does nothing by itself
and merely acts as a flag, signalling to developers and other code alike
that a method can be cancelled.
## Enabling cancellation for an endpoint
1. Check that the endpoint method, and any `async` functions in its call
tree handle cancellation correctly. See
[Handling cancellation correctly](#handling-cancellation-correctly)
for a list of things to look out for.
2. Add the `@cancellable` decorator to the `on_GET/POST/PUT/DELETE`
method. It's not recommended to make non-`GET` methods cancellable,
since cancellation midway through some database updates is less
likely to be handled correctly. A minimal sketch of step 2 follows.
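Assuming the decorator's usual import location and a hypothetical read-only servlet, step 2 might look like this:

```python
from synapse.http.server import cancellable   # assumed import paths
from synapse.http.servlet import RestServlet

class RoomStateRestServlet(RestServlet):
    @cancellable
    async def on_GET(self, request, room_id):
        # Read-only work: cancelling midway leaves no half-finished
        # database writes behind.
        state = await self.store.get_current_state(room_id)  # hypothetical helper
        return 200, state
```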
## Mechanics
There are two stages to cancellation: downward propagation of a
`cancel()` call, followed by upwards propagation of a `CancelledError`
out of a blocked `await`.
Both Twisted and asyncio have a cancellation mechanism.
| | Method | Exception | Exception inherits from |
|---------------|---------------------|-----------------------------------------|-------------------------|
| Twisted | `Deferred.cancel()` | `twisted.internet.defer.CancelledError` | `Exception` (!) |
| asyncio | `Task.cancel()` | `asyncio.CancelledError` | `BaseException` |
### Deferred.cancel()
When Synapse starts handling a request, it runs the async method
responsible for handling it using `defer.ensureDeferred`, which returns
a `Deferred`. For example:
```python
def do_something() -> Deferred[None]:
...
@cancellable
async def on_GET() -> Tuple[int, JsonDict]:
d = make_deferred_yieldable(do_something())
await d
return 200, {}
request = defer.ensureDeferred(on_GET())
```
When a client disconnects early, Synapse checks for the presence of the
`@cancellable` decorator on `on_GET`. Since `on_GET` is cancellable,
`Deferred.cancel()` is called on the `Deferred` from
`defer.ensureDeferred`, i.e. `request`. Twisted knows which `Deferred`
`request` is waiting on and passes the `cancel()` call on to `d`.
The `Deferred` being waited on, `d`, may have its own handling for
`cancel()` and pass the call on to other `Deferred`s.
Eventually, a `Deferred` handles the `cancel()` call by resolving itself
with a `CancelledError`.
### CancelledError
The `CancelledError` gets raised out of the `await` and bubbles up, as
per normal Python exception handling.
## Handling cancellation correctly
In general, when writing code that might be subject to cancellation, two
things must be considered:
* The effect of `CancelledError`s raised out of `await`s.
* The effect of `Deferred`s being `cancel()`ed.
Examples of code that handles cancellation incorrectly include:
* `try-except` blocks which swallow `CancelledError`s.
* Code that shares the same `Deferred`, which may be cancelled, between
multiple requests.
* Code that starts some processing that's exempt from cancellation, but
uses a logging context from cancellable code. The logging context
will be finished upon cancellation, while the uncancelled processing
is still using it.
Some common patterns are listed below in more detail.
### `async` function calls
Most functions in Synapse are relatively straightforward from a
cancellation standpoint: they don't do anything with `Deferred`s and
purely call and `await` other `async` functions.
An `async` function handles cancellation correctly if its own code
handles cancellation correctly and all the `async` functions it calls
handle cancellation correctly. For example:
```python
async def do_two_things() -> None:
check_something()
await do_something()
await do_something_else()
```
`do_two_things` handles cancellation correctly if `do_something` and
`do_something_else` handle cancellation correctly.
That is, when checking whether a function handles cancellation
correctly, its implementation and all its `async` function calls need to
be checked, recursively.
As `check_something` is not `async`, it does not need to be checked.
### CancelledErrors
Because Twisted's `CancelledError`s are `Exception`s, it's easy to
accidentally catch and suppress them. Care must be taken to ensure that
`CancelledError`s are allowed to propagate upwards.
<table width="100%">
<tr>
<td width="50%" valign="top">
**Bad**:
```python
try:
await do_something()
except Exception:
# `CancelledError` gets swallowed here.
logger.info(...)
```
</td>
<td width="50%" valign="top">
**Good**:
```python
try:
await do_something()
except CancelledError:
raise
except Exception:
logger.info(...)
```
</td>
</tr>
<tr>
<td width="50%" valign="top">
**OK**:
```python
try:
check_something()
# A `CancelledError` won't ever be raised here.
except Exception:
logger.info(...)
```
</td>
<td width="50%" valign="top">
**Good**:
```python
try:
await do_something()
except ValueError:
logger.info(...)
```
</td>
</tr>
</table>
#### defer.gatherResults
`defer.gatherResults` produces a `Deferred` which:
* broadcasts `cancel()` calls to every `Deferred` being waited on.
* wraps the first exception it sees in a `FirstError`.
Together, this means that `CancelledError`s will be wrapped in
a `FirstError` unless unwrapped. Such `FirstError`s are liable to be
swallowed, so they must be unwrapped.
<table width="100%">
<tr>
<td width="50%" valign="top">
**Bad**:
```python
async def do_something() -> None:
await make_deferred_yieldable(
defer.gatherResults([...], consumeErrors=True)
)
try:
await do_something()
except CancelledError:
raise
except Exception:
# `FirstError(CancelledError)` gets swallowed here.
logger.info(...)
```
</td>
<td width="50%" valign="top">
**Good**:
```python
async def do_something() -> None:
await make_deferred_yieldable(
defer.gatherResults([...], consumeErrors=True)
).addErrback(unwrapFirstError)
try:
await do_something()
except CancelledError:
raise
except Exception:
logger.info(...)
```
</td>
</tr>
</table>
### Creation of `Deferred`s
If a function creates a `Deferred`, the effect of cancelling it must be considered. `Deferred`s that get shared are likely to have unintended behaviour when cancelled.
<table width="100%">
<tr>
<td width="50%" valign="top">
**Bad**:
```python
cache: Dict[str, Deferred[None]] = {}
def wait_for_room(room_id: str) -> Deferred[None]:
deferred = cache.get(room_id)
if deferred is None:
deferred = Deferred()
cache[room_id] = deferred
# `deferred` can have multiple waiters.
# All of them will observe a `CancelledError`
# if any one of them is cancelled.
return make_deferred_yieldable(deferred)
# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
<td width="50%" valign="top">
**Good**:
```python
cache: Dict[str, Deferred[None]] = {}
def wait_for_room(room_id: str) -> Deferred[None]:
deferred = cache.get(room_id)
if deferred is None:
deferred = Deferred()
cache[room_id] = deferred
# `deferred` will never be cancelled now.
# A `CancelledError` will still come out of
# the `await`.
# `delay_cancellation` may also be used.
return make_deferred_yieldable(stop_cancellation(deferred))
# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
</tr>
<tr>
<td width="50%" valign="top">
</td>
<td width="50%" valign="top">
**Good**:
```python
cache: Dict[str, List[Deferred[None]]] = {}
def wait_for_room(room_id: str) -> Deferred[None]:
if room_id not in cache:
cache[room_id] = []
# Each request gets its own `Deferred` to wait on.
deferred = Deferred()
cache[room_id].append(deferred)
return make_deferred_yieldable(deferred)
# Request 1
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
# Request 2
await wait_for_room("!aAAaaAaaaAAAaAaAA:matrix.org")
```
</td>
</table>
### Uncancelled processing
Some `async` functions may kick off some `async` processing which is
intentionally protected from cancellation, by `stop_cancellation` or
other means. If the `async` processing inherits the logcontext of the
request which initiated it, care must be taken to ensure that the
logcontext is not finished before the `async` processing completes.
<table width="100%">
<tr>
<td width="50%" valign="top">
**Bad**:
```python
cache: Optional[ObservableDeferred[None]] = None
async def do_something_else(
to_resolve: Deferred[None]
) -> None:
await ...
logger.info("done!")
to_resolve.callback(None)
async def do_something() -> None:
if not cache:
to_resolve = Deferred()
cache = ObservableDeferred(to_resolve)
# `do_something_else` will never be cancelled and
# can outlive the `request-1` logging context.
run_in_background(do_something_else, to_resolve)
await make_deferred_yieldable(cache.observe())
with LoggingContext("request-1"):
await do_something()
```
</td>
<td width="50%" valign="top">
**Good**:
```python
cache: Optional[ObservableDeferred[None]] = None
async def do_something_else(
to_resolve: Deferred[None]
) -> None:
await ...
logger.info("done!")
to_resolve.callback(None)
async def do_something() -> None:
if not cache:
to_resolve = Deferred()
cache = ObservableDeferred(to_resolve)
run_in_background(do_something_else, to_resolve)
# We'll wait until `do_something_else` is
# done before raising a `CancelledError`.
await make_deferred_yieldable(
delay_cancellation(cache.observe())
)
else:
await make_deferred_yieldable(cache.observe())
with LoggingContext("request-1"):
await do_something()
```
</td>
</tr>
<tr>
<td width="50%">
**OK**:
```python
cache: Optional[ObservableDeferred[None]] = None
async def do_something_else(
to_resolve: Deferred[None]
) -> None:
await ...
logger.info("done!")
to_resolve.callback(None)
async def do_something() -> None:
if not cache:
to_resolve = Deferred()
cache = ObservableDeferred(to_resolve)
# `do_something_else` will get its own independent
# logging context. `request-1` will not count any
# metrics from `do_something_else`.
run_as_background_process(
"do_something_else",
do_something_else,
to_resolve,
)
await make_deferred_yieldable(cache.observe())
with LoggingContext("request-1"):
await do_something()
```
</td>
<td width="50%">
</td>
</tr>
</table>

View File

@@ -18,6 +18,17 @@ async def check_event_for_spam(event: "synapse.events.EventBase") -> Union[bool,
Called when receiving an event from a client or via federation. The callback must return
either:
- on `Decision.ALLOW`, the action is permitted.
- on `Decision.DENY`, the action is rejected with a default error message/code.
- on `Codes`, the action is rejected with a specific error message/code. In case
of doubt, use `Codes.FORBIDDEN`.
- (deprecated) on `False`, behave as `Decision.ALLOW`. Deprecated as methods in
this API are inconsistent, some expect `True` for `ALLOW` and others `True`
for `DENY`.
- (deprecated) on `True`, behave as `Decision.DENY`. Deprecated as methods in
this API are inconsistent, some expect `True` for `ALLOW` and others `True`
for `DENY`.
- an error message string, to indicate the event must be rejected because of spam and
give a rejection reason to forward to clients;
- the boolean `True`, to indicate that the event is spammy, but not provide further details; or
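Putting the new return values together, a callback that distinguishes rejection reasons might look like the sketch below; the suspension list, the event-body check and the import paths are illustrative assumptions.

```python
from synapse.api.errors import Codes           # assumed import paths
from synapse.spam_checker_api import Decision

SUSPENDED_USERS = {"@suspended:example.com"}   # hypothetical suspension list

async def check_event_for_spam(event):
    if getattr(event, "sender", None) in SUSPENDED_USERS:
        # An explicit code lets clients tell "account suspended"
        # apart from an ordinary spam rejection.
        return Codes.USER_DEACTIVATED
    body = getattr(event, "content", {}).get("body", "")
    if "unsolicited offer" in body.lower():
        return Codes.FORBIDDEN                 # generic denial
    return Decision.ALLOW
```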

View File

@@ -159,7 +159,7 @@ Follow the [Getting Started Guide](https://www.keycloak.org/getting-started) to
oidc_providers:
- idp_id: keycloak
idp_name: "My KeyCloak server"
issuer: "https://127.0.0.1:8443/realms/{realm_name}"
issuer: "https://127.0.0.1:8443/auth/realms/{realm_name}"
client_id: "synapse"
client_secret: "copy secret generated from above"
scopes: ["openid", "profile"]
@@ -293,7 +293,7 @@ can be used to retrieve information on the authenticated user. As the Synapse
login mechanism needs an attribute to uniquely identify users, and that endpoint
does not return a `sub` property, an alternative `subject_claim` has to be set.
1. Create a new OAuth application: [https://github.com/settings/applications/new](https://github.com/settings/applications/new).
1. Create a new OAuth application: https://github.com/settings/applications/new.
2. Set the callback URL to `[synapse public baseurl]/_synapse/client/oidc/callback`.
Synapse config:
@@ -322,10 +322,10 @@ oidc_providers:
[Google][google-idp] is an OpenID certified authentication and authorisation provider.
1. Set up a project in the Google API Console (see
[documentation](https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup)).
3. Add an "OAuth Client ID" for a Web Application under "Credentials".
4. Copy the Client ID and Client Secret, and add the following to your synapse config:
1. Set up a project in the Google API Console (see
https://developers.google.com/identity/protocols/oauth2/openid-connect#appsetup).
2. Add an "OAuth Client ID" for a Web Application under "Credentials".
3. Copy the Client ID and Client Secret, and add the following to your synapse config:
```yaml
oidc_providers:
- idp_id: google
@@ -501,8 +501,8 @@ As well as the private key file, you will need:
* Team ID: a 10-character ID associated with your developer account.
* Key ID: the 10-character identifier for the key.
[Apple's developer documentation](https://help.apple.com/developer-account/?lang=en#/dev77c875b7e)
has more information on setting up SiWA.
https://help.apple.com/developer-account/?lang=en#/dev77c875b7e has more
documentation on setting up SiWA.
The synapse config will look like this:
@@ -535,8 +535,8 @@ needed to add OAuth2 capabilities to your Django projects. It supports
Configuration on Django's side:
1. Add an application: `https://example.com/admin/oauth2_provider/application/add/` and choose parameters like this:
* `Redirect uris`: `https://synapse.example.com/_synapse/client/oidc/callback`
1. Add an application: https://example.com/admin/oauth2_provider/application/add/ and choose parameters like this:
* `Redirect uris`: https://synapse.example.com/_synapse/client/oidc/callback
* `Client type`: `Confidential`
* `Authorization grant type`: `Authorization code`
* `Algorithm`: `HMAC with SHA-2 256`

View File

@@ -289,7 +289,7 @@ presence:
# federation: the server-server API (/_matrix/federation). Also implies
# 'media', 'keys', 'openid'
#
# keys: the key discovery API (/_matrix/key).
# keys: the key discovery API (/_matrix/keys).
#
# media: the media API (/_matrix/media).
#
@@ -730,12 +730,6 @@ retention:
# A cache 'factor' is a multiplier that can be applied to each of
# Synapse's caches in order to increase or decrease the maximum
# number of entries that can be stored.
#
# The configuration for cache factors (caches.global_factor and
# caches.per_cache_factors) can be reloaded while the application is running,
# by sending a SIGHUP signal to the Synapse process. Changes to other parts of
# the caching config will NOT be applied after a SIGHUP is received; a restart
# is necessary.
# The number of events to cache in memory. Not affected by
# caches.global_factor.
@@ -784,24 +778,6 @@ caches:
#
#cache_entry_ttl: 30m
# This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
# `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
# a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
# this option, and all three of the options must be specified for this feature to work.
#cache_autotuning:
# This flag sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
# They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
# the flag below, or until the `min_cache_ttl` is hit.
#max_cache_memory_usage: 1024M
# This flag sets a rough target for the desired memory usage of the caches.
#target_cache_memory_usage: 758M
# 'min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
# caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
# from being emptied while Synapse is evicting due to memory.
#min_cache_ttl: 5m
# Controls how long the results of a /sync request are cached for after
# a successful response is returned. A higher duration can help clients with
# intermittent connections, at the cost of higher memory usage.
@@ -2486,40 +2462,6 @@ push:
#
#encryption_enabled_by_default_for_room_type: invite
# Override the default power levels for rooms created on this server, per
# room creation preset.
#
# The appropriate dictionary for the room preset will be applied on top
# of the existing power levels content.
#
# Useful if you know that your users need special permissions in rooms
# that they create (e.g. to send particular types of state events without
# needing an elevated power level). This takes the same shape as the
# `power_level_content_override` parameter in the /createRoom API, but
# is applied before that parameter.
#
# Valid keys are some or all of `private_chat`, `trusted_private_chat`
# and `public_chat`. Inside each of those should be any of the
# properties allowed in `power_level_content_override` in the
# /createRoom API. If any property is missing, its default value will
# continue to be used. If any property is present, it will overwrite
# the existing default completely (so if the `events` property exists,
# the default event power levels will be ignored).
#
#default_power_level_content_override:
# private_chat:
# "events":
# "com.example.myeventtype" : 0
# "m.room.avatar": 50
# "m.room.canonical_alias": 50
# "m.room.encryption": 100
# "m.room.history_visibility": 100
# "m.room.name": 50
# "m.room.power_levels": 100
# "m.room.server_acl": 100
# "m.room.tombstone": 100
# "events_default": 1
# Uncomment to allow non-server-admin users to create groups on this server
#

View File

@@ -89,96 +89,6 @@ process, for example:
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.60.0
## Adding a new unique index to `state_group_edges` could fail if your database is corrupted
This release of Synapse will add a unique index to the `state_group_edges` table, in order
to prevent accidentally introducing duplicate information (for example, because a database
backup was restored multiple times).
Duplicate rows being present in this table could cause drastic performance problems; see
[issue 11779](https://github.com/matrix-org/synapse/issues/11779) for more details.
If your Synapse database already has had duplicate rows introduced into this table,
this could fail, with either of these errors:
**On Postgres:**
```
synapse.storage.background_updates - 623 - INFO - background_updates-0 - Adding index state_group_edges_unique_idx to state_group_edges
synapse.storage.background_updates - 282 - ERROR - background_updates-0 - Error doing update
...
psycopg2.errors.UniqueViolation: could not create unique index "state_group_edges_unique_idx"
DETAIL: Key (state_group, prev_state_group)=(2, 1) is duplicated.
```
(The numbers may be different.)
**On SQLite:**
```
synapse.storage.background_updates - 623 - INFO - background_updates-0 - Adding index state_group_edges_unique_idx to state_group_edges
synapse.storage.background_updates - 282 - ERROR - background_updates-0 - Error doing update
...
sqlite3.IntegrityError: UNIQUE constraint failed: state_group_edges.state_group, state_group_edges.prev_state_group
```
<details>
<summary><b>Expand this section for steps to resolve this problem</b></summary>
### On Postgres
Connect to your database with `psql`.
```sql
BEGIN;
DELETE FROM state_group_edges WHERE (ctid, state_group, prev_state_group) IN (
SELECT row_id, state_group, prev_state_group
FROM (
SELECT
ctid AS row_id,
MIN(ctid) OVER (PARTITION BY state_group, prev_state_group) AS min_row_id,
state_group,
prev_state_group
FROM state_group_edges
) AS t1
WHERE row_id <> min_row_id
);
COMMIT;
```
### On SQLite
At the command-line, use `sqlite3 path/to/your-homeserver-database.db`:
```sql
BEGIN;
DELETE FROM state_group_edges WHERE (rowid, state_group, prev_state_group) IN (
SELECT row_id, state_group, prev_state_group
FROM (
SELECT
rowid AS row_id,
MIN(rowid) OVER (PARTITION BY state_group, prev_state_group) AS min_row_id,
state_group,
prev_state_group
FROM state_group_edges
)
WHERE row_id <> min_row_id
);
COMMIT;
```
### For more details
[This comment on issue 11779](https://github.com/matrix-org/synapse/issues/11779#issuecomment-1131545970)
has queries that can be used to check a database for this problem in advance.
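For instance, a query along the following lines (a sketch, not the exact queries from that comment) lists any duplicated edges:

```sql
SELECT state_group, prev_state_group, COUNT(*) AS copies
FROM state_group_edges
GROUP BY state_group, prev_state_group
HAVING COUNT(*) > 1;
```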
</details>
# Upgrading to v1.59.0
## Device name lookup over federation has been disabled by default

View File

@@ -23,14 +23,6 @@ followed by a letter. Letters have the following meanings:
For example, setting `redaction_retention_period: 5m` would remove redacted
messages from the database after 5 minutes, rather than 5 months.
In addition, configuration options referring to size use the following suffixes:
* `M` = MiB, or 1,048,576 bytes
* `K` = KiB, or 1024 bytes
For example, setting `max_avatar_size: 10M` means that Synapse will not accept files larger than 10,485,760 bytes
for a user avatar.
### YAML
The configuration file is a [YAML](https://yaml.org/) file, which means that certain syntax rules
apply if you want your config file to be read properly. A few helpful things to know:
@@ -475,13 +467,13 @@ Sub-options for each listener include:
Valid resource names are:
* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies `media` and `static`.
* `client`: the client-server API (/_matrix/client), and the synapse admin API (/_synapse/admin). Also implies 'media' and 'static'.
* `consent`: user consent forms (/_matrix/consent). See [here](../../consent_tracking.md) for more.
* `federation`: the server-server API (/_matrix/federation). Also implies `media`, `keys`, `openid`
* `keys`: the key discovery API (/_matrix/key).
* `keys`: the key discovery API (/_matrix/keys).
* `media`: the media API (/_matrix/media).
@@ -1127,22 +1119,7 @@ Caching can be configured through the following sub-options:
with intermittent connections, at the cost of higher memory usage.
By default, this is zero, which means that sync responses are not cached
at all.
* `cache_autotuning` and its sub-options `max_cache_memory_usage`, `target_cache_memory_usage`, and
`min_cache_ttl` work in conjunction with each other to maintain a balance between cache memory
usage and cache entry availability. You must be using [jemalloc](https://github.com/matrix-org/synapse#help-synapse-is-slow-and-eats-all-my-ramcpu)
to utilize this option, and all three of the options must be specified for this feature to work. This option
defaults to off, enable it by providing values for the sub-options listed below. Please note that the feature will not work
and may cause unstable behavior (such as excessive emptying of caches or exceptions) if all of the values are not provided.
Please see the [Config Conventions](#config-conventions) for information on how to specify memory size and cache expiry
durations.
* `max_cache_memory_usage` sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
They will continue to be evicted until the memory usage drops below the `target_memory_usage`, set in
the setting below, or until the `min_cache_ttl` is hit. There is no default value for this option.
* `target_memory_usage` sets a rough target for the desired memory usage of the caches. There is no default value
for this option.
* `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
from being emptied while Synapse is evicting due to memory. There is no default value for this option.
Example configuration:
```yaml
@@ -1150,29 +1127,9 @@ caches:
global_factor: 1.0
per_cache_factors:
get_users_who_share_room_with_user: 2.0
expire_caches: false
sync_response_cache_duration: 2m
cache_autotuning:
max_cache_memory_usage: 1024M
target_cache_memory_usage: 758M
min_cache_ttl: 5m
```
### Reloading cache factors
The cache factors (i.e. `caches.global_factor` and `caches.per_cache_factors`) may be reloaded at any time by sending a
[`SIGHUP`](https://en.wikipedia.org/wiki/SIGHUP) signal to Synapse using e.g.
```commandline
kill -HUP [PID_OF_SYNAPSE_PROCESS]
```
If you are running multiple workers, you must individually update the worker
config file and send this signal to each worker process.
If you're using the [example systemd service](https://github.com/matrix-org/synapse/blob/develop/contrib/systemd/matrix-synapse.service)
file in Synapse's `contrib` directory, you can send a `SIGHUP` signal by using
`systemctl reload matrix-synapse`.
---
## Database ##
Config options related to database settings.
@@ -1207,7 +1164,7 @@ For more information on using Synapse with Postgres,
see [here](../../postgres.md).
Example SQLite configuration:
```yaml
```
database:
name: sqlite3
args:
@@ -1215,7 +1172,7 @@ database:
```
Example Postgres configuration:
```yaml
```
database:
name: psycopg2
txn_limit: 10000
@@ -1370,20 +1327,6 @@ This option sets ratelimiting how often invites can be sent in a room or to a
specific user. `per_room` defaults to `per_second: 0.3`, `burst_count: 10` and
`per_user` defaults to `per_second: 0.003`, `burst_count: 5`.
Client requests that invite user(s) when [creating a
room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3createroom)
will count against the `rc_invites.per_room` limit, whereas
client requests to [invite a single user to a
room](https://spec.matrix.org/v1.2/client-server-api/#post_matrixclientv3roomsroomidinvite)
will count against both the `rc_invites.per_user` and `rc_invites.per_room` limits.
Federation requests to invite a user will count against the `rc_invites.per_user`
limit only, as Synapse presumes ratelimiting by room will be done by the sending server.
The `rc_invites.per_user` limit applies to the *receiver* of the invite, rather than the
sender, meaning that a `rc_invite.per_user.burst_count` of 5 mandates that a single user
cannot *receive* more than a burst of 5 invites at a time.
Example configuration:
```yaml
rc_invites:
@@ -1692,10 +1635,10 @@ Defaults to "en".
Example configuration:
```yaml
url_preview_accept_language:
- 'en-UK'
- 'en-US;q=0.9'
- 'fr;q=0.8'
- '*;q=0.7'
- en-UK
- en-US;q=0.9
- fr;q=0.8
- *;q=0.7
```
----
Config option: `oembed`
@@ -3355,32 +3298,6 @@ room_list_publication_rules:
room_id: "*"
action: allow
```
---
Config option: `default_power_level_content_override`
The `default_power_level_content_override` option controls the default power
levels for rooms.
Useful if you know that your users need special permissions in rooms
that they create (e.g. to send particular types of state events without
needing an elevated power level). This takes the same shape as the
`power_level_content_override` parameter in the /createRoom API, but
is applied before that parameter.
Note that each key provided inside a preset (for example `events` in the example
below) will overwrite all existing defaults inside that key. So in the example
below, newly-created private_chat rooms will have no rules for any event types
except `com.example.foo`.
Example configuration:
```yaml
default_power_level_content_override:
private_chat: { "events": { "com.example.foo" : 0 } }
trusted_private_chat: null
public_chat: null
```
---
## Opentracing ##
Configuration options related to Opentracing support.
@@ -3481,7 +3398,7 @@ stream_writers:
typing: worker1
```
---
Config option: `run_background_tasks_on`
Config option: `run_background_task_on`
The worker that is used to run background tasks (e.g. cleaning up expired
data). If not provided this defaults to the main process.

View File

@@ -7,10 +7,10 @@ team.
## Installing and using Synapse
This documentation covers topics for **installation**, **configuration** and
**maintenance** of your Synapse process:
**maintainence** of your Synapse process:
* Learn how to [install](setup/installation.md) and
[configure](usage/configuration/config_documentation.md) your own instance, perhaps with [Single
[configure](usage/configuration/index.html) your own instance, perhaps with [Single
Sign-On](usage/configuration/user_authentication/index.html).
* See how to [upgrade](upgrade.md) between Synapse versions.
@@ -65,7 +65,7 @@ following documentation:
Want to help keep Synapse going but don't know how to code? Synapse is a
[Matrix.org Foundation](https://matrix.org) project. Consider becoming a
supporter on [Liberapay](https://liberapay.com/matrixdotorg),
supportor on [Liberapay](https://liberapay.com/matrixdotorg),
[Patreon](https://patreon.com/matrixdotorg) or through
[PayPal](https://paypal.me/matrixdotorg) via a one-time donation.

View File

@@ -251,8 +251,6 @@ information.
# Presence requests
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
# User directory search requests
^/_matrix/client/(r0|v3|unstable)/user_directory/search$
Additionally, the following REST endpoints can be handled for GET requests:
@@ -450,14 +448,6 @@ update_user_directory_from_worker: worker_name
This work cannot be load-balanced; please ensure the main process is restarted
after setting this option in the shared configuration!
User directory updates allow REST endpoints matching the following regular
expressions to work:
^/_matrix/client/(r0|v3|unstable)/user_directory/search$
The above endpoints can be routed to any worker, though you may choose to route
them to the chosen user directory worker.
This style of configuration supersedes the legacy `synapse.app.user_dir`
worker application type.

mypy.ini: 132 lines changed
View File

@@ -10,7 +10,6 @@ warn_unreachable = True
warn_unused_ignores = True
local_partial_types = True
no_implicit_optional = True
disallow_untyped_defs = True
files =
docker/,
@@ -28,6 +27,9 @@ exclude = (?x)
|synapse/storage/databases/__init__.py
|synapse/storage/databases/main/cache.py
|synapse/storage/databases/main/devices.py
|synapse/storage/databases/main/event_federation.py
|synapse/storage/databases/main/push_rule.py
|synapse/storage/databases/main/roommember.py
|synapse/storage/schema/
|tests/api/test_auth.py
@@ -87,39 +89,131 @@ exclude = (?x)
|tests/utils.py
)$
[mypy-synapse._scripts.*]
disallow_untyped_defs = True
[mypy-synapse.api.*]
disallow_untyped_defs = True
[mypy-synapse.app.*]
disallow_untyped_defs = True
[mypy-synapse.appservice.*]
disallow_untyped_defs = True
[mypy-synapse.config.*]
disallow_untyped_defs = True
[mypy-synapse.crypto.*]
disallow_untyped_defs = True
[mypy-synapse.event_auth]
disallow_untyped_defs = True
[mypy-synapse.events.*]
disallow_untyped_defs = True
[mypy-synapse.federation.*]
disallow_untyped_defs = True
[mypy-synapse.federation.transport.client]
disallow_untyped_defs = False
[mypy-synapse.http.client]
disallow_untyped_defs = False
[mypy-synapse.handlers.*]
disallow_untyped_defs = True
[mypy-synapse.http.matrixfederationclient]
disallow_untyped_defs = False
[mypy-synapse.http.server]
disallow_untyped_defs = True
[mypy-synapse.logging.opentracing]
disallow_untyped_defs = False
[mypy-synapse.logging.context]
disallow_untyped_defs = True
[mypy-synapse.logging.scopecontextmanager]
disallow_untyped_defs = False
[mypy-synapse.metrics.*]
disallow_untyped_defs = True
[mypy-synapse.metrics._reactor_metrics]
disallow_untyped_defs = False
# This module imports select.epoll. That exists on Linux, but doesn't on macOS.
# See https://github.com/matrix-org/synapse/pull/11771.
warn_unused_ignores = False
[mypy-synapse.module_api.*]
disallow_untyped_defs = True
[mypy-synapse.notifier]
disallow_untyped_defs = True
[mypy-synapse.push.*]
disallow_untyped_defs = True
[mypy-synapse.replication.*]
disallow_untyped_defs = True
[mypy-synapse.rest.*]
disallow_untyped_defs = True
[mypy-synapse.server_notices.*]
disallow_untyped_defs = True
[mypy-synapse.state.*]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.account_data]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.client_ips]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.directory]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.e2e_room_keys]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.end_to_end_keys]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.event_push_actions]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.events_bg_updates]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.events_worker]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.room]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.room_batch]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.profile]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.stats]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.state_deltas]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.transactions]
disallow_untyped_defs = True
[mypy-synapse.storage.databases.main.user_erasure_store]
disallow_untyped_defs = True
[mypy-synapse.storage.util.*]
disallow_untyped_defs = True
[mypy-synapse.streams.*]
disallow_untyped_defs = True
[mypy-synapse.util.*]
disallow_untyped_defs = True
[mypy-synapse.util.caches.treecache]
disallow_untyped_defs = False
[mypy-synapse.server]
disallow_untyped_defs = False
[mypy-synapse.storage.database]
disallow_untyped_defs = False
[mypy-tests.*]
disallow_untyped_defs = False
[mypy-tests.handlers.test_user_directory]
disallow_untyped_defs = True

View File

@@ -54,7 +54,7 @@ skip_gitignore = true
[tool.poetry]
name = "matrix-synapse"
version = "1.59.1"
version = "1.59.0rc1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"

View File

@@ -21,7 +21,7 @@ from typing import Callable, Optional, Type
from mypy.nodes import ARG_NAMED_OPT
from mypy.plugin import MethodSigContext, Plugin
from mypy.typeops import bind_self
from mypy.types import CallableType, NoneType, UnionType
from mypy.types import CallableType, NoneType
class SynapsePlugin(Plugin):
@@ -72,20 +72,13 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
# Third, we add an optional "on_invalidate" argument.
#
# This is a either
# - a callable which accepts no input and returns nothing, or
# - None.
calltyp = UnionType(
[
NoneType(),
CallableType(
arg_types=[],
arg_kinds=[],
arg_names=[],
ret_type=NoneType(),
fallback=ctx.api.named_generic_type("builtins.function", []),
),
]
# This is a callable which accepts no input and returns nothing.
calltyp = CallableType(
arg_types=[],
arg_kinds=[],
arg_names=[],
ret_type=NoneType(),
fallback=ctx.api.named_generic_type("builtins.function", []),
)
arg_types.append(calltyp)
@@ -102,7 +95,7 @@ def cached_function_method_signature(ctx: MethodSigContext) -> CallableType:
def plugin(version: str) -> Type[SynapsePlugin]:
# This is the entry point of the plugin, and lets us deal with the fact
# This is the entry point of the plugin, and let's us deal with the fact
# that the mypy plugin interface is *not* stable by looking at the version
# string.
#
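The practical effect of the `UnionType` change above is that call sites of `@cached` methods may now pass `on_invalidate=None` explicitly without a type error. A hedged sketch, where `store.get_profile` stands in for any method decorated with `@cached`:

```python
# Inside some async function; `store.get_profile` is a stand-in for any
# `@cached`-decorated method. With the plugin change, passing None for
# the invalidation callback no longer trips mypy.
profile = await store.get_profile(user_id, on_invalidate=None)
```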

View File

@@ -46,14 +46,14 @@ def main() -> None:
"Path to server config file. "
"Used to read in bcrypt_rounds and password_pepper."
),
required=True,
)
args = parser.parse_args()
config = yaml.safe_load(args.config)
bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
password_config = config.get("password_config", None) or {}
password_pepper = password_config.get("pepper", password_pepper)
if "config" in args and args.config:
config = yaml.safe_load(args.config)
bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
password_config = config.get("password_config", None) or {}
password_pepper = password_config.get("pepper", password_pepper)
password = args.password
if not password:
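For reference, the script is typically invoked along these lines (a usage sketch; the `-c`/`-p` flags come from its argparse setup, and the file name is a placeholder):

```shell
hash_password -c homeserver.yaml -p "my-secret-password"
```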

View File

@@ -65,8 +65,6 @@ class JoinRules:
PRIVATE: Final = "private"
# As defined for MSC3083.
RESTRICTED: Final = "restricted"
# As defined for MSC3787.
KNOCK_RESTRICTED: Final = "knock_restricted"
class RestrictedJoinRuleTypes:

View File

@@ -17,7 +17,6 @@
import logging
import typing
from enum import Enum
from http import HTTPStatus
from typing import Any, Dict, List, Optional, Union
@@ -31,11 +30,7 @@ if typing.TYPE_CHECKING:
logger = logging.getLogger(__name__)
class Codes(str, Enum):
"""
All known error codes, as an enum of strings.
"""
class Codes:
UNRECOGNIZED = "M_UNRECOGNIZED"
UNAUTHORIZED = "M_UNAUTHORIZED"
FORBIDDEN = "M_FORBIDDEN"
@@ -270,9 +265,7 @@ class UnrecognizedRequestError(SynapseError):
"""An error indicating we don't understand the request you're trying to make"""
def __init__(
self,
msg: str = "Unrecognized request",
errcode: str = Codes.UNRECOGNIZED,
self, msg: str = "Unrecognized request", errcode: str = Codes.UNRECOGNIZED
):
super().__init__(400, msg, errcode)

View File

@@ -81,9 +81,6 @@ class RoomVersion:
msc2716_historical: bool
# MSC2716: Adds support for redacting "insertion", "chunk", and "marker" events
msc2716_redactions: bool
# MSC3787: Adds support for a `knock_restricted` join rule, mixing concepts of
# knocks and restricted join rules into the same join condition.
msc3787_knock_restricted_join_rule: bool
class RoomVersions:
@@ -102,7 +99,6 @@ class RoomVersions:
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
V2 = RoomVersion(
"2",
@@ -119,7 +115,6 @@ class RoomVersions:
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
V3 = RoomVersion(
"3",
@@ -136,7 +131,6 @@ class RoomVersions:
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
V4 = RoomVersion(
"4",
@@ -153,7 +147,6 @@ class RoomVersions:
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
V5 = RoomVersion(
"5",
@@ -170,7 +163,6 @@ class RoomVersions:
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
V6 = RoomVersion(
"6",
@@ -187,7 +179,6 @@ class RoomVersions:
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
MSC2176 = RoomVersion(
"org.matrix.msc2176",
@@ -204,7 +195,6 @@ class RoomVersions:
msc2403_knocking=False,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
V7 = RoomVersion(
"7",
@@ -221,7 +211,6 @@ class RoomVersions:
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
V8 = RoomVersion(
"8",
@@ -238,7 +227,6 @@ class RoomVersions:
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
V9 = RoomVersion(
"9",
@@ -255,7 +243,6 @@ class RoomVersions:
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=False,
)
MSC2716v3 = RoomVersion(
"org.matrix.msc2716v3",
@@ -272,24 +259,6 @@ class RoomVersions:
msc2403_knocking=True,
msc2716_historical=True,
msc2716_redactions=True,
msc3787_knock_restricted_join_rule=False,
)
MSC3787 = RoomVersion(
"org.matrix.msc3787",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=False,
strict_canonicaljson=True,
limit_notifications_power_levels=True,
msc2176_redaction_rules=False,
msc3083_join_rules=True,
msc3375_redaction_rules=True,
msc2403_knocking=True,
msc2716_historical=False,
msc2716_redactions=False,
msc3787_knock_restricted_join_rule=True,
)
@@ -307,7 +276,6 @@ KNOWN_ROOM_VERSIONS: Dict[str, RoomVersion] = {
RoomVersions.V8,
RoomVersions.V9,
RoomVersions.MSC2716v3,
RoomVersions.MSC3787,
)
}

View File

@@ -49,12 +49,9 @@ from twisted.logger import LoggingFile, LogLevel
from twisted.protocols.tls import TLSMemoryBIOFactory
from twisted.python.threadpool import ThreadPool
import synapse.util.caches
from synapse.api.constants import MAX_PDU_SIZE
from synapse.app import check_bind_error
from synapse.app.phone_stats_home import start_phone_stats_home
from synapse.config import ConfigError
from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ManholeConfig
from synapse.crypto import context_factory
@@ -435,10 +432,6 @@ async def start(hs: "HomeServer") -> None:
signal.signal(signal.SIGHUP, run_sighup)
register_sighup(refresh_certificate, hs)
register_sighup(reload_cache_config, hs.config)
# Apply the cache config.
hs.config.caches.resize_all_caches()
# Load the certificate from disk.
refresh_certificate(hs)
@@ -493,43 +486,6 @@ async def start(hs: "HomeServer") -> None:
atexit.register(gc.freeze)
def reload_cache_config(config: HomeServerConfig) -> None:
"""Reload cache config from disk and immediately apply it.resize caches accordingly.
If the config is invalid, a `ConfigError` is logged and no changes are made.
Otherwise, this:
- replaces the `caches` section on the given `config` object, and
- resizes all caches according to the new cache factors.
Note that the following cache config keys are read, but not applied:
- event_cache_size: used to set a max_size and _original_max_size on
EventsWorkerStore._get_event_cache when it is created. We'd have to update
the _original_max_size (and maybe the max_size) of that cache.
- sync_response_cache_duration: would have to update the timeout_sec attribute on
HomeServer -> SyncHandler -> ResponseCache.
- track_memory_usage. This affects synapse.util.caches.TRACK_MEMORY_USAGE which
influences Synapse's self-reported metrics.
Also, the HTTPConnectionPool in SimpleHTTPClient sets its maxPersistentPerHost
parameter based on the global_factor. This won't be applied on a config reload.
"""
try:
previous_cache_config = config.reload_config_section("caches")
except ConfigError as e:
logger.warning("Failed to reload cache config")
for f in format_config_error(e):
logger.warning(f)
else:
logger.debug(
"New cache config. Was:\n %s\nNow:\n",
previous_cache_config.__dict__,
config.caches.__dict__,
)
synapse.util.caches.TRACK_MEMORY_USAGE = config.caches.track_memory_usage
config.caches.resize_all_caches()
def setup_sentry(hs: "HomeServer") -> None:
"""Enable sentry integration, if enabled in configuration"""

View File

@@ -16,7 +16,7 @@
import logging
import os
import sys
from typing import Dict, Iterable, List
from typing import Dict, Iterable, Iterator, List
from matrix_common.versionstring import get_distribution_version_string
@@ -45,7 +45,7 @@ from synapse.app._base import (
redirect_stdio_to_logs,
register_start,
)
from synapse.config._base import ConfigError, format_config_error
from synapse.config._base import ConfigError
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import ListenerConfig
@@ -399,6 +399,38 @@ def setup(config_options: List[str]) -> SynapseHomeServer:
return hs
def format_config_error(e: ConfigError) -> Iterator[str]:
"""
Formats a config error neatly
The idea is to format the immediate error, plus the "causes" of those errors,
hopefully in a way that makes sense to the user. For example:
Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
Failed to parse config for module 'JinjaOidcMappingProvider':
invalid jinja template:
unexpected end of template, expected 'end of print statement'.
Args:
e: the error to be formatted
Returns: An iterator which yields string fragments to be formatted
"""
yield "Error in configuration"
if e.path:
yield " at '%s'" % (".".join(e.path),)
yield ":\n %s" % (e.msg,)
parent_e = e.__cause__
indent = 1
while parent_e:
indent += 1
yield ":\n%s%s" % (" " * indent, str(parent_e))
parent_e = parent_e.__cause__
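A usage sketch for `format_config_error` as documented above: the caller joins the yielded fragments into one message. The load call shown is the usual `RootConfig.load_config` classmethod, hedged here as an assumption:

import sys

from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig

try:
    config = HomeServerConfig.load_config("synapse", sys.argv[1:])
except ConfigError as e:
    # Fragments look like: "Error in configuration at 'path': message", plus
    # each chained cause on its own, increasingly indented line.
    print("".join(format_config_error(e)), file=sys.stderr)
    sys.exit(1)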
def run(hs: HomeServer) -> None:
_base.start_reactor(
"synapse-homeserver",

View File

@@ -16,18 +16,14 @@
import argparse
import errno
import logging
import os
from collections import OrderedDict
from hashlib import sha256
from textwrap import dedent
from typing import (
Any,
ClassVar,
Collection,
Dict,
Iterable,
Iterator,
List,
MutableMapping,
Optional,
@@ -44,8 +40,6 @@ import yaml
from synapse.util.templates import _create_mxc_to_http_filter, _format_ts_filter
logger = logging.getLogger(__name__)
class ConfigError(Exception):
"""Represents a problem parsing the configuration
@@ -61,38 +55,6 @@ class ConfigError(Exception):
self.path = path
def format_config_error(e: ConfigError) -> Iterator[str]:
"""
Formats a config error neatly
The idea is to format the immediate error, plus the "causes" of those errors,
hopefully in a way that makes sense to the user. For example:
Error in configuration at 'oidc_config.user_mapping_provider.config.display_name_template':
Failed to parse config for module 'JinjaOidcMappingProvider':
invalid jinja template:
unexpected end of template, expected 'end of print statement'.
Args:
e: the error to be formatted
Returns: An iterator which yields string fragments to be formatted
"""
yield "Error in configuration"
if e.path:
yield " at '%s'" % (".".join(e.path),)
yield ":\n %s" % (e.msg,)
parent_e = e.__cause__
indent = 1
while parent_e:
indent += 1
yield ":\n%s%s" % (" " * indent, str(parent_e))
parent_e = parent_e.__cause__
# We split these messages out to allow packages to override with package
# specific instructions.
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS = """\
@@ -157,7 +119,7 @@ class Config:
defined in subclasses.
"""
section: ClassVar[str]
section: str
def __init__(self, root_config: "RootConfig" = None):
self.root = root_config
@@ -347,12 +309,9 @@ class RootConfig:
class, lower-cased and with "Config" removed.
"""
config_classes: List[Type[Config]] = []
def __init__(self, config_files: Collection[str] = ()):
# Capture absolute paths here, so we can reload config after we daemonize.
self.config_files = [os.path.abspath(path) for path in config_files]
config_classes = []
def __init__(self):
for config_class in self.config_classes:
if config_class.section is None:
raise ValueError("%r requires a section name" % (config_class,))
@@ -553,10 +512,12 @@ class RootConfig:
object from `parser.parse_args(..)`
"""
obj = cls()
config_args = parser.parse_args(argv)
config_files = find_config_files(search_paths=config_args.config_path)
obj = cls(config_files)
if not config_files:
parser.error("Must supply a config file.")
@@ -666,7 +627,7 @@ class RootConfig:
generate_missing_configs = config_args.generate_missing_configs
obj = cls(config_files)
obj = cls()
if config_args.generate_config:
if config_args.report_stats is None:
@@ -766,34 +727,6 @@ class RootConfig:
) -> None:
self.invoke_all("generate_files", config_dict, config_dir_path)
def reload_config_section(self, section_name: str) -> Config:
"""Reconstruct the given config section, leaving all others unchanged.
This works in three steps:
1. Create a new instance of the relevant `Config` subclass.
2. Call `read_config` on that instance to parse the new config.
3. Replace the existing config instance with the new one.
:raises ValueError: if the given `section_name` does not exist.
:raises ConfigError: for any other problems reloading config.
:returns: the previous config object, which no longer has a reference to this
RootConfig.
"""
existing_config: Optional[Config] = getattr(self, section_name, None)
if existing_config is None:
raise ValueError(f"Unknown config section '{section_name}'")
logger.info("Reloading config section '%s'", section_name)
new_config_data = read_config_files(self.config_files)
new_config = type(existing_config)(self)
new_config.read_config(new_config_data)
setattr(self, section_name, new_config)
existing_config.root = None
return existing_config
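Sketch of the three-step flow in the `reload_config_section` docstring, using names from this diff (`config` is assumed to be a loaded `RootConfig`):

# Re-parse the `caches` section from the recorded config files and swap it in;
# the previous CacheConfig is handed back, detached from the RootConfig.
previous = config.reload_config_section("caches")
# The new section is live; now apply the freshly loaded cache factors.
config.caches.resize_all_caches()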
def read_config_files(config_files: Iterable[str]) -> Dict[str, Any]:
"""Read the config files into a dict

View File

@@ -1,19 +1,15 @@
import argparse
from typing import (
Any,
Collection,
Dict,
Iterable,
Iterator,
List,
Literal,
MutableMapping,
Optional,
Tuple,
Type,
TypeVar,
Union,
overload,
)
import jinja2
@@ -68,8 +64,6 @@ class ConfigError(Exception):
self.msg = msg
self.path = path
def format_config_error(e: ConfigError) -> Iterator[str]: ...
MISSING_REPORT_STATS_CONFIG_INSTRUCTIONS: str
MISSING_REPORT_STATS_SPIEL: str
MISSING_SERVER_NAME: str
@@ -123,8 +117,7 @@ class RootConfig:
background_updates: background_updates.BackgroundUpdateConfig
config_classes: List[Type["Config"]] = ...
config_files: List[str]
def __init__(self, config_files: Collection[str] = ...) -> None: ...
def __init__(self) -> None: ...
def invoke_all(
self, func_name: str, *args: Any, **kwargs: Any
) -> MutableMapping[str, Any]: ...
@@ -164,12 +157,6 @@ class RootConfig:
def generate_missing_files(
self, config_dict: dict, config_dir_path: str
) -> None: ...
@overload
def reload_config_section(
self, section_name: Literal["caches"]
) -> cache.CacheConfig: ...
@overload
def reload_config_section(self, section_name: str) -> Config: ...
class Config:
root: RootConfig
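For reference, what the removed `@overload` pair bought at type-check time: a hypothetical snippet in which the checker narrows on the literal section name:

from synapse.config._base import RootConfig

def sketch(config: RootConfig, section_name: str) -> None:
    caches = config.reload_config_section("caches")     # narrowed to cache.CacheConfig
    other = config.reload_config_section(section_name)  # plain Config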

View File

@@ -69,11 +69,11 @@ def _canonicalise_cache_name(cache_name: str) -> str:
def add_resizable_cache(
cache_name: str, cache_resize_callback: Callable[[float], None]
) -> None:
"""Register a cache whose size can dynamically change
"""Register a cache that's size can dynamically change
Args:
cache_name: A reference to the cache
cache_resize_callback: A callback function that will run whenever
cache_resize_callback: A callback function that will be ran whenever
the cache needs to be resized
"""
# Some caches have '*' in them which we strip out.
@@ -96,13 +96,6 @@ class CacheConfig(Config):
section = "caches"
_environ = os.environ
event_cache_size: int
cache_factors: Dict[str, float]
global_factor: float
track_memory_usage: bool
expiry_time_msec: Optional[int]
sync_response_cache_duration: int
@staticmethod
def reset() -> None:
"""Resets the caches to their defaults. Used for tests."""
@@ -122,12 +115,6 @@ class CacheConfig(Config):
# A cache 'factor' is a multiplier that can be applied to each of
# Synapse's caches in order to increase or decrease the maximum
# number of entries that can be stored.
#
# The configuration for cache factors (caches.global_factor and
# caches.per_cache_factors) can be reloaded while the application is running,
# by sending a SIGHUP signal to the Synapse process. Changes to other parts of
# the caching config will NOT be applied after a SIGHUP is received; a restart
# is necessary.
# The number of events to cache in memory. Not affected by
# caches.global_factor.
@@ -176,24 +163,6 @@ class CacheConfig(Config):
#
#cache_entry_ttl: 30m
# This flag enables cache autotuning, and is further specified by the sub-options `max_cache_memory_usage`,
# `target_cache_memory_usage`, `min_cache_ttl`. These flags work in conjunction with each other to maintain
# a balance between cache memory usage and cache entry availability. You must be using jemalloc to utilize
# this option, and all three of the options must be specified for this feature to work.
#cache_autotuning:
# This flag sets a ceiling on how much memory the cache can use before caches begin to be continuously evicted.
# They will continue to be evicted until the memory usage drops below the `target_cache_memory_usage`, set in
# the flag below, or until the `min_cache_ttl` is hit.
#max_cache_memory_usage: 1024M
# This flag sets a rough target for the desired memory usage of the caches.
#target_cache_memory_usage: 758M
# `min_cache_ttl` sets a limit under which newer cache entries are not evicted and is only applied when
# caches are actively being evicted/`max_cache_memory_usage` has been exceeded. This is to protect hot caches
# from being emptied while Synapse is evicting due to memory.
#min_cache_ttl: 5m
# Controls how long the results of a /sync request are cached for after
# a successful response is returned. A higher duration can help clients with
# intermittent connections, at the cost of higher memory usage.
@@ -205,21 +174,21 @@ class CacheConfig(Config):
"""
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
"""Populate this config object with values from `config`.
This method does NOT resize existing or future caches: use `resize_all_caches`.
We use two separate methods so that we can reject bad config before applying it.
"""
self.event_cache_size = self.parse_size(
config.get("event_cache_size", _DEFAULT_EVENT_CACHE_SIZE)
)
self.cache_factors = {}
self.cache_factors: Dict[str, float] = {}
cache_config = config.get("caches") or {}
self.global_factor = cache_config.get("global_factor", _DEFAULT_FACTOR_SIZE)
self.global_factor = cache_config.get(
"global_factor", properties.default_factor_size
)
if not isinstance(self.global_factor, (int, float)):
raise ConfigError("caches.global_factor must be a number.")
# Set the global one so that it's reflected in new caches
properties.default_factor_size = self.global_factor
# Load cache factors from the config
individual_factors = cache_config.get("per_cache_factors") or {}
if not isinstance(individual_factors, dict):
@@ -261,7 +230,7 @@ class CacheConfig(Config):
cache_entry_ttl = cache_config.get("cache_entry_ttl", "30m")
if expire_caches:
self.expiry_time_msec = self.parse_duration(cache_entry_ttl)
self.expiry_time_msec: Optional[int] = self.parse_duration(cache_entry_ttl)
else:
self.expiry_time_msec = None
@@ -281,38 +250,23 @@ class CacheConfig(Config):
)
self.expiry_time_msec = self.parse_duration(expiry_time)
self.cache_autotuning = cache_config.get("cache_autotuning")
if self.cache_autotuning:
max_memory_usage = self.cache_autotuning.get("max_cache_memory_usage")
self.cache_autotuning["max_cache_memory_usage"] = self.parse_size(
max_memory_usage
)
target_mem_size = self.cache_autotuning.get("target_cache_memory_usage")
self.cache_autotuning["target_cache_memory_usage"] = self.parse_size(
target_mem_size
)
min_cache_ttl = self.cache_autotuning.get("min_cache_ttl")
self.cache_autotuning["min_cache_ttl"] = self.parse_duration(min_cache_ttl)
self.sync_response_cache_duration = self.parse_duration(
cache_config.get("sync_response_cache_duration", 0)
)
# Resize all caches (if necessary) with the new factors we've loaded
self.resize_all_caches()
# Store this function so that it can be called from other classes without
# needing an instance of Config
properties.resize_all_caches_func = self.resize_all_caches
def resize_all_caches(self) -> None:
"""Ensure all cache sizes are up-to-date.
"""Ensure all cache sizes are up to date
For each cache, run the mapped callback function with either
a specific cache factor or the default, global one.
"""
# Set the global factor size, so that new caches are appropriately sized.
properties.default_factor_size = self.global_factor
# Store this function so that it can be called from other classes without
# needing an instance of CacheConfig
properties.resize_all_caches_func = self.resize_all_caches
# block other threads from modifying _CACHES while we iterate it.
with _CACHES_LOCK:
for cache_name, callback in _CACHES.items():
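Usage sketch for the registration contract above: a cache hands `add_resizable_cache` a callback taking the new cache factor (a plain `LruCache` is used here for illustration):

from synapse.config.cache import add_resizable_cache
from synapse.util.caches.lrucache import LruCache

my_cache: LruCache = LruCache(max_size=1000)

def _resize_my_cache(factor: float) -> None:
    # Invoked from resize_all_caches with the per-cache or global factor.
    my_cache.set_cache_factor(factor)

add_resizable_cache("my_module_cache", _resize_my_cache)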

View File

@@ -57,9 +57,9 @@ class OembedConfig(Config):
"""
# Whether to use the packaged providers.json file.
if not oembed_config.get("disable_default_providers") or False:
with pkg_resources.resource_stream("synapse", "res/providers.json") as s:
providers = json.load(s)
providers = json.load(
pkg_resources.resource_stream("synapse", "res/providers.json")
)
yield from self._parse_and_validate_provider(
providers, config_path=("oembed",)
)

View File

@@ -63,19 +63,6 @@ class RoomConfig(Config):
"Invalid value for encryption_enabled_by_default_for_room_type"
)
self.default_power_level_content_override = config.get(
"default_power_level_content_override",
None,
)
if self.default_power_level_content_override is not None:
for preset in self.default_power_level_content_override:
if preset not in vars(RoomCreationPreset).values():
raise ConfigError(
"Unrecognised room preset %s in default_power_level_content_override"
% preset
)
# We validate the actual overrides when we try to apply them.
def generate_config_section(self, **kwargs: Any) -> str:
return """\
## Rooms ##
@@ -96,38 +83,4 @@ class RoomConfig(Config):
# will also not affect rooms created by other servers.
#
#encryption_enabled_by_default_for_room_type: invite
# Override the default power levels for rooms created on this server, per
# room creation preset.
#
# The appropriate dictionary for the room preset will be applied on top
# of the existing power levels content.
#
# Useful if you know that your users need special permissions in rooms
# that they create (e.g. to send particular types of state events without
# needing an elevated power level). This takes the same shape as the
# `power_level_content_override` parameter in the /createRoom API, but
# is applied before that parameter.
#
# Valid keys are some or all of `private_chat`, `trusted_private_chat`
# and `public_chat`. Inside each of those should be any of the
# properties allowed in `power_level_content_override` in the
# /createRoom API. If any property is missing, its default value will
# continue to be used. If any property is present, it will overwrite
# the existing default completely (so if the `events` property exists,
# the default event power levels will be ignored).
#
#default_power_level_content_override:
# private_chat:
# "events":
# "com.example.myeventtype" : 0
# "m.room.avatar": 50
# "m.room.canonical_alias": 50
# "m.room.encryption": 100
# "m.room.history_visibility": 100
# "m.room.name": 50
# "m.room.power_levels": 100
# "m.room.server_acl": 100
# "m.room.tombstone": 100
# "events_default": 1
"""

View File

@@ -996,7 +996,7 @@ class ServerConfig(Config):
# federation: the server-server API (/_matrix/federation). Also implies
# 'media', 'keys', 'openid'
#
# keys: the key discovery API (/_matrix/key).
# keys: the key discovery API (/_matrix/keys).
#
# media: the media API (/_matrix/media).
#

View File

@@ -414,12 +414,7 @@ def _is_membership_change_allowed(
raise AuthError(403, "You are banned from this room")
elif join_rule == JoinRules.PUBLIC:
pass
elif (
room_version.msc3083_join_rules and join_rule == JoinRules.RESTRICTED
) or (
room_version.msc3787_knock_restricted_join_rule
and join_rule == JoinRules.KNOCK_RESTRICTED
):
elif room_version.msc3083_join_rules and join_rule == JoinRules.RESTRICTED:
# This is the same as public, but the event must contain a reference
# to the server who authorised the join. If the event does not contain
# the proper content it is rejected.
@@ -445,13 +440,8 @@ def _is_membership_change_allowed(
if authorising_user_level < invite_level:
raise AuthError(403, "Join event authorised by invalid server.")
elif (
join_rule == JoinRules.INVITE
or (room_version.msc2403_knocking and join_rule == JoinRules.KNOCK)
or (
room_version.msc3787_knock_restricted_join_rule
and join_rule == JoinRules.KNOCK_RESTRICTED
)
elif join_rule == JoinRules.INVITE or (
room_version.msc2403_knocking and join_rule == JoinRules.KNOCK
):
if not caller_in_room and not caller_invited:
raise AuthError(403, "You are not invited to this room.")
@@ -472,10 +462,7 @@ def _is_membership_change_allowed(
if user_level < ban_level or user_level <= target_level:
raise AuthError(403, "You don't have permission to ban")
elif room_version.msc2403_knocking and Membership.KNOCK == membership:
if join_rule != JoinRules.KNOCK and (
not room_version.msc3787_knock_restricted_join_rule
or join_rule != JoinRules.KNOCK_RESTRICTED
):
if join_rule != JoinRules.KNOCK:
raise AuthError(403, "You don't have permission to knock")
elif target_user_id != event.user_id:
raise AuthError(403, "You cannot knock for other users")

View File

@@ -15,7 +15,6 @@
# limitations under the License.
import abc
import collections.abc
import os
from typing import (
TYPE_CHECKING,
@@ -33,11 +32,9 @@ from typing import (
overload,
)
import attr
from typing_extensions import Literal
from unpaddedbase64 import encode_base64
from synapse.api.constants import RelationTypes
from synapse.api.room_versions import EventFormatVersions, RoomVersion, RoomVersions
from synapse.types import JsonDict, RoomStreamToken
from synapse.util.caches import intern_dict
@@ -618,45 +615,3 @@ def make_event_from_dict(
return event_type(
event_dict, room_version, internal_metadata_dict or {}, rejected_reason
)
@attr.s(slots=True, frozen=True, auto_attribs=True)
class _EventRelation:
# The target event of the relation.
parent_id: str
# The relation type.
rel_type: str
# The aggregation key. Will be None if the rel_type is not m.annotation or is
# not a string.
aggregation_key: Optional[str]
def relation_from_event(event: EventBase) -> Optional[_EventRelation]:
"""
Attempt to parse relation information from an event.
Returns:
The event relation information, if it is valid. None, otherwise.
"""
relation = event.content.get("m.relates_to")
if not relation or not isinstance(relation, collections.abc.Mapping):
# No relation information.
return None
# Relations must have a type and parent event ID.
rel_type = relation.get("rel_type")
if not isinstance(rel_type, str):
return None
parent_id = relation.get("event_id")
if not isinstance(parent_id, str):
return None
# Annotations have a key field.
aggregation_key = None
if rel_type == RelationTypes.ANNOTATION:
aggregation_key = relation.get("key")
if not isinstance(aggregation_key, str):
aggregation_key = None
return _EventRelation(parent_id, rel_type, aggregation_key)
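Usage sketch for the removed `relation_from_event` helper: for an annotation (e.g. a reaction) it yields the parent event ID, relation type, and aggregation key (`event` is assumed to be an `EventBase`):

from synapse.api.constants import RelationTypes
from synapse.events import EventBase, relation_from_event

def describe_annotation(event: EventBase) -> None:
    relation = relation_from_event(event)
    if relation is not None and relation.rel_type == RelationTypes.ANNOTATION:
        # For a reaction, the key is the emoji and parent_id the target event.
        print(relation.parent_id, relation.aggregation_key)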

View File

@@ -24,7 +24,6 @@ from synapse.types import JsonDict, StateMap
if TYPE_CHECKING:
from synapse.storage import Storage
from synapse.storage.databases.main import DataStore
from synapse.storage.state import StateFilter
@attr.s(slots=True, auto_attribs=True)
@@ -197,9 +196,7 @@ class EventContext:
return self._state_group
async def get_current_state_ids(
self, state_filter: Optional["StateFilter"] = None
) -> Optional[StateMap[str]]:
async def get_current_state_ids(self) -> Optional[StateMap[str]]:
"""
Gets the room state map, including this event - ie, the state in ``state_group``
@@ -207,9 +204,6 @@ class EventContext:
not make it into the room state. This method will raise an exception if
``rejected`` is set.
Args:
state_filter: specifies the type of state event to fetch from DB, example: EventTypes.JoinRules
Returns:
Returns None if state_group is None, which happens when the associated
event is an outlier.
@@ -222,7 +216,7 @@ class EventContext:
assert self._state_delta_due_to_event is not None
prev_state_ids = await self.get_prev_state_ids(state_filter)
prev_state_ids = await self.get_prev_state_ids()
if self._state_delta_due_to_event:
prev_state_ids = dict(prev_state_ids)
@@ -230,17 +224,12 @@ class EventContext:
return prev_state_ids
async def get_prev_state_ids(
self, state_filter: Optional["StateFilter"] = None
) -> StateMap[str]:
async def get_prev_state_ids(self) -> StateMap[str]:
"""
Gets the room state map, excluding this event.
For a non-state event, this will be the same as get_current_state_ids().
Args:
state_filter: specifies the type of state event to fetch from DB, example: EventTypes.JoinRules
Returns:
Returns {} if state_group is None, which happens when the associated
event is an outlier.
@@ -250,7 +239,7 @@ class EventContext:
"""
assert self.state_group_before_event is not None
return await self._storage.state.get_state_ids_for_group(
self.state_group_before_event, state_filter
self.state_group_before_event
)
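Sketch of the `state_filter` parameter being removed here: it lets callers fetch only the state rows they need, via `StateFilter.from_types` (as in Synapse's state API):

from synapse.api.constants import EventTypes
from synapse.storage.state import StateFilter

async def get_join_rules_entry(context):
    # Fetch only the join-rules state entry instead of the full state map.
    return await context.get_prev_state_ids(
        StateFilter.from_types([(EventTypes.JoinRules, "")])
    )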

View File

@@ -27,12 +27,12 @@ from typing import (
Union,
)
from synapse.api.errors import Codes
from synapse.rest.media.v1._base import FileInfo
from synapse.rest.media.v1.media_storage import ReadableFileWrapper
from synapse.spam_checker_api import RegistrationBehaviour
from synapse.spam_checker_api import ALLOW, Decision, RegistrationBehaviour
from synapse.types import RoomAlias, UserProfile
from synapse.util.async_helpers import delay_cancellation, maybe_awaitable
from synapse.util.metrics import Measure
if TYPE_CHECKING:
import synapse.events
@@ -40,17 +40,34 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
DEPRECATED_BOOL = bool
CHECK_EVENT_FOR_SPAM_CALLBACK = Callable[
["synapse.events.EventBase"],
Awaitable[Union[bool, str]],
Awaitable[Union[ALLOW, Codes, str, DEPRECATED_BOOL]],
]
USER_MAY_JOIN_ROOM_CALLBACK = Callable[
[str, str, bool], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_INVITE_CALLBACK = Callable[
[str, str, str], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[
[str, str, str, str], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_CREATE_ROOM_CALLBACK = Callable[
[str], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[
[str, RoomAlias], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[
[str, str], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[
[UserProfile], Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]]
]
USER_MAY_JOIN_ROOM_CALLBACK = Callable[[str, str, bool], Awaitable[bool]]
USER_MAY_INVITE_CALLBACK = Callable[[str, str, str], Awaitable[bool]]
USER_MAY_SEND_3PID_INVITE_CALLBACK = Callable[[str, str, str, str], Awaitable[bool]]
USER_MAY_CREATE_ROOM_CALLBACK = Callable[[str], Awaitable[bool]]
USER_MAY_CREATE_ROOM_ALIAS_CALLBACK = Callable[[str, RoomAlias], Awaitable[bool]]
USER_MAY_PUBLISH_ROOM_CALLBACK = Callable[[str, str], Awaitable[bool]]
CHECK_USERNAME_FOR_SPAM_CALLBACK = Callable[[UserProfile], Awaitable[bool]]
LEGACY_CHECK_REGISTRATION_FOR_SPAM_CALLBACK = Callable[
[
Optional[dict],
@@ -66,11 +83,11 @@ CHECK_REGISTRATION_FOR_SPAM_CALLBACK = Callable[
Collection[Tuple[str, str]],
Optional[str],
],
Awaitable[RegistrationBehaviour],
Awaitable[Union[RegistrationBehaviour, Codes]],
]
CHECK_MEDIA_FILE_FOR_SPAM_CALLBACK = Callable[
[ReadableFileWrapper, FileInfo],
Awaitable[bool],
Awaitable[Union[ALLOW, Codes, DEPRECATED_BOOL]],
]
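To make the new contract concrete, a sketch of a module callback written against the uniformized types above (the ban list is hypothetical; module scaffolding omitted):

from synapse.api.errors import Codes
from synapse.spam_checker_api import ALLOW

HYPOTHETICAL_BANNED_USERS = {"@spammer:example.org"}  # illustrative only

async def user_may_create_room(user_id: str):
    # Deny with an explicit code; returning ALLOW still lets other checkers veto.
    if user_id in HYPOTHETICAL_BANNED_USERS:
        return Codes.FORBIDDEN
    return ALLOW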
@@ -163,10 +180,7 @@ def load_legacy_spam_checkers(hs: "synapse.server.HomeServer") -> None:
class SpamChecker:
def __init__(self, hs: "synapse.server.HomeServer") -> None:
self.hs = hs
self.clock = hs.get_clock()
def __init__(self) -> None:
self._check_event_for_spam_callbacks: List[CHECK_EVENT_FOR_SPAM_CALLBACK] = []
self._user_may_join_room_callbacks: List[USER_MAY_JOIN_ROOM_CALLBACK] = []
self._user_may_invite_callbacks: List[USER_MAY_INVITE_CALLBACK] = []
@@ -244,7 +258,7 @@ class SpamChecker:
async def check_event_for_spam(
self, event: "synapse.events.EventBase"
) -> Union[bool, str]:
) -> Union[ALLOW, Codes, str]:
"""Checks if a given event is considered "spammy" by this server.
If the server considers an event spammy, then it will be rejected if
@@ -255,22 +269,29 @@ class SpamChecker:
event: the event to be checked
Returns:
True or a string if the event is spammy. If a string is returned it
will be used as the error message returned to the user.
- on `ALLOW`, the event is considered good (non-spammy) and should
be let through. Other spamcheck filters may still reject it.
- on `Codes`, the event is considered spammy and is rejected with a specific
error message/code.
- on `str`, the event is considered spammy and the string is used as error
message.
"""
for callback in self._check_event_for_spam_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
res: Union[bool, str] = await delay_cancellation(callback(event))
if res:
res: Union[ALLOW, Codes, str, DEPRECATED_BOOL] = await delay_cancellation(
callback(event)
)
if res is False or res is ALLOW:
continue
elif res is True:
return Codes.FORBIDDEN
else:
return res
return False
return ALLOW
async def user_may_join_room(
self, user_id: str, room_id: str, is_invited: bool
) -> bool:
) -> Decision:
"""Checks if a given users is allowed to join a room.
Not called when a user creates a room.
@@ -280,54 +301,54 @@ class SpamChecker:
is_invited: Whether the user is invited into the room
Returns:
Whether the user may join the room
- on `ALLOW`, the action is permitted.
- on `Codes`, the action is rejected with a specific error message/code.
"""
for callback in self._user_may_join_room_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
may_join_room = await delay_cancellation(
callback(user_id, room_id, is_invited)
)
if may_join_room is False:
return False
may_join_room = await delay_cancellation(
callback(user_id, room_id, is_invited)
)
if may_join_room is True or may_join_room is ALLOW:
continue
elif may_join_room is False:
return Codes.FORBIDDEN
else:
return may_join_room
return True
return ALLOW
async def user_may_invite(
self, inviter_userid: str, invitee_userid: str, room_id: str
) -> bool:
) -> Decision:
"""Checks if a given user may send an invite
If this method returns false, the invite will be rejected.
Args:
inviter_userid: The user ID of the sender of the invitation
invitee_userid: The user ID targeted in the invitation
room_id: The room ID
Returns:
True if the user may send an invite, otherwise False
- on `ALLOW`, the action is permitted.
- on `Codes`, the action is rejected with a specific error message/code.
"""
for callback in self._user_may_invite_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
may_invite = await delay_cancellation(
callback(inviter_userid, invitee_userid, room_id)
)
if may_invite is False:
return False
may_invite = await delay_cancellation(
callback(inviter_userid, invitee_userid, room_id)
)
if may_invite is True or may_invite is ALLOW:
continue
elif may_invite is False:
return Codes.FORBIDDEN
else:
return may_invite
return True
return ALLOW
async def user_may_send_3pid_invite(
self, inviter_userid: str, medium: str, address: str, room_id: str
) -> bool:
) -> Decision:
"""Checks if a given user may invite a given threepid into the room
If this method returns false, the threepid invite will be rejected.
Note that if the threepid is already associated with a Matrix user ID, Synapse
will call user_may_invite with said user ID instead.
@@ -338,90 +359,94 @@ class SpamChecker:
room_id: The room ID
Returns:
True if the user may send the invite, otherwise False
- on `ALLOW`, the action is permitted.
- on `Codes`, the action is rejected with a specific error message/code.
"""
for callback in self._user_may_send_3pid_invite_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
may_send_3pid_invite = await delay_cancellation(
callback(inviter_userid, medium, address, room_id)
)
if may_send_3pid_invite is False:
return False
may_send_3pid_invite = await delay_cancellation(
callback(inviter_userid, medium, address, room_id)
)
if may_send_3pid_invite is True or may_send_3pid_invite is ALLOW:
continue
elif may_send_3pid_invite is False:
return Codes.FORBIDDEN
else:
return may_send_3pid_invite
return True
return ALLOW
async def user_may_create_room(self, userid: str) -> bool:
async def user_may_create_room(self, userid: str) -> Decision:
"""Checks if a given user may create a room
If this method returns false, the creation request will be rejected.
Args:
userid: The ID of the user attempting to create a room
Returns:
True if the user may create a room, otherwise False
- on `ALLOW`, the action is permitted.
- on `Codes`, the action is rejected with a specific error message/code.
"""
for callback in self._user_may_create_room_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
may_create_room = await delay_cancellation(callback(userid))
if may_create_room is False:
return False
may_create_room = await delay_cancellation(callback(userid))
if may_create_room is True or may_create_room is ALLOW:
continue
elif may_create_room is False:
return Codes.FORBIDDEN
else:
return may_create_room
return True
return ALLOW
async def user_may_create_room_alias(
self, userid: str, room_alias: RoomAlias
) -> bool:
) -> Decision:
"""Checks if a given user may create a room alias
If this method returns false, the association request will be rejected.
Args:
userid: The ID of the user attempting to create a room alias
room_alias: The alias to be created
Returns:
True if the user may create a room alias, otherwise False
- on `ALLOW`, the action is permitted.
- on `Codes`, the action is rejected with a specific error message/code.
"""
for callback in self._user_may_create_room_alias_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
may_create_room_alias = await delay_cancellation(
callback(userid, room_alias)
)
if may_create_room_alias is False:
return False
may_create_room_alias = await delay_cancellation(
callback(userid, room_alias)
)
if may_create_room_alias is True or may_create_room_alias is ALLOW:
continue
elif may_create_room_alias is False:
return Codes.FORBIDDEN
else:
return may_create_room_alias
return True
return ALLOW
async def user_may_publish_room(self, userid: str, room_id: str) -> bool:
async def user_may_publish_room(
self, userid: str, room_id: str
) -> Union[ALLOW, Codes, DEPRECATED_BOOL]:
"""Checks if a given user may publish a room to the directory
If this method returns false, the publish request will be rejected.
Args:
userid: The user ID attempting to publish the room
room_id: The ID of the room that would be published
Returns:
True if the user may publish the room, otherwise False
- on `ALLOW`, the action is permitted.
- on `Codes`, the action is rejected with a specific error message/code.
"""
for callback in self._user_may_publish_room_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
may_publish_room = await delay_cancellation(callback(userid, room_id))
if may_publish_room is False:
return False
may_publish_room = await delay_cancellation(callback(userid, room_id))
if may_publish_room is True or may_publish_room is ALLOW:
continue
elif may_publish_room is False:
return Codes.FORBIDDEN
else:
return may_publish_room
return True
return ALLOW
async def check_username_for_spam(self, user_profile: UserProfile) -> bool:
async def check_username_for_spam(self, user_profile: UserProfile) -> Decision:
"""Checks if a user ID or display name are considered "spammy" by this server.
If the server considers a username spammy, then it will not be included in
@@ -434,19 +459,21 @@ class SpamChecker:
* avatar_url
Returns:
True if the user is spammy.
- on `ALLOW`, the action is permitted.
- on `Codes`, the action is rejected with a specific error message/code.
"""
for callback in self._check_username_for_spam_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
# Make a copy of the user profile object to ensure the spam checker cannot
# modify it.
res = await delay_cancellation(callback(user_profile.copy()))
if res:
return True
# Make a copy of the user profile object to ensure the spam checker cannot
# modify it.
is_spam = await delay_cancellation(callback(user_profile.copy()))
if is_spam is False or is_spam is ALLOW:
continue
elif is_spam is True:
return Codes.FORBIDDEN
else:
return is_spam
return False
return ALLOW
async def check_registration_for_spam(
self,
@@ -471,12 +498,11 @@ class SpamChecker:
"""
for callback in self._check_registration_for_spam_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
behaviour = await delay_cancellation(
callback(email_threepid, username, request_info, auth_provider_id)
)
behaviour = await delay_cancellation(
callback(email_threepid, username, request_info, auth_provider_id)
)
if isinstance(behaviour, Codes):
return behaviour
assert isinstance(behaviour, RegistrationBehaviour)
if behaviour != RegistrationBehaviour.ALLOW:
return behaviour
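Per the hunk above, registration callbacks may now return either a `RegistrationBehaviour` or a `Codes` value; a minimal sketch (the username test is illustrative):

from typing import Collection, Optional, Tuple

from synapse.api.errors import Codes
from synapse.spam_checker_api import RegistrationBehaviour

async def check_registration_for_spam(
    email_threepid: Optional[dict],
    username: Optional[str],
    request_info: Collection[Tuple[str, str]],
    auth_provider_id: Optional[str] = None,
):
    if username is not None and "spam" in username:
        return Codes.FORBIDDEN  # deny registration with an explicit error code
    return RegistrationBehaviour.ALLOW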
@@ -485,7 +511,7 @@ class SpamChecker:
async def check_media_file_for_spam(
self, file_wrapper: ReadableFileWrapper, file_info: FileInfo
) -> bool:
) -> Decision:
"""Checks if a piece of newly uploaded media should be blocked.
This will be called for local uploads, downloads of remote media, each
@@ -507,22 +533,22 @@ class SpamChecker:
return False
Args:
file: An object that allows reading the contents of the media.
file_info: Metadata about the file.
Returns:
True if the media should be blocked or False if it should be
allowed.
- on `ALLOW`, the action is permitted.
- on `Codes`, the action is rejected with a specific error message/code.
"""
for callback in self._check_media_file_for_spam_callbacks:
with Measure(
self.clock, "{}.{}".format(callback.__module__, callback.__qualname__)
):
spam = await delay_cancellation(callback(file_wrapper, file_info))
if spam:
return True
is_spam = await delay_cancellation(callback(file_wrapper, file_info))
if is_spam is False or is_spam is ALLOW:
continue
elif is_spam is True:
return Codes.FORBIDDEN
else:
return is_spam
return False
return ALLOW

View File

@@ -15,6 +15,7 @@
import logging
from typing import TYPE_CHECKING
import synapse
from synapse.api.constants import MAX_DEPTH, EventContentFields, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import EventFormatVersions, RoomVersion
@@ -98,9 +99,9 @@ class FederationBase:
)
return redacted_event
result = await self.spam_checker.check_event_for_spam(pdu)
spam_check = await self.spam_checker.check_event_for_spam(pdu)
if result:
if spam_check is not synapse.spam_checker_api.ALLOW:
logger.warning("Event contains spam, soft-failing %s", pdu.event_id)
# we redact (to save disk space) as well as soft-failing (to stop
# using the event in prev_events).
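Callers elsewhere follow the same shape, replacing truthiness tests with an identity comparison against `ALLOW`; a hedged sketch of how a handler might consume a `Decision`:

from synapse.api.errors import SynapseError
from synapse.spam_checker_api import ALLOW

async def try_join(spam_checker, user_id: str, room_id: str) -> None:
    decision = await spam_checker.user_may_join_room(user_id, room_id, False)
    if decision is not ALLOW:
        # `decision` is a Codes value; surface it directly to the client.
        raise SynapseError(403, "Not allowed to join this room", errcode=decision)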

Some files were not shown because too many files have changed in this diff.