
Compare commits


65 Commits

Author SHA1 Message Date
Olivier 'reivilibre'
bbb06bc01e XXX 2025-12-19 18:17:30 +00:00
Olivier 'reivilibre'
683ef01764 Batch generation of stream_ids 2025-12-19 17:45:07 +00:00
Olivier 'reivilibre'
4fb1c5a66a Don't duplicate clamping logic 2025-12-19 17:40:40 +00:00
Olivier 'reivilibre'
36ccb9063c Move is_mine_hs into the database 2025-12-19 16:44:41 +00:00
Olivier 'reivilibre'
f056f2d0b4 Docstring on MAX_DURATION_MS 2025-12-19 16:28:52 +00:00
Olivier 'reivilibre'
59b9f9793e Docstring on sticky 2025-12-19 15:42:22 +00:00
Olivier 'reivilibre'
99cfd755d3 Use labelled duration_ms in test helper 2025-12-19 15:06:29 +00:00
Olivier 'reivilibre'
11472bb9d1 but -> except for 2025-12-18 15:04:51 +00:00
Olivier 'reivilibre'
ad3d66f23a Fix MSC divergence: duration should be clamped, not ignored if high 2025-12-18 15:04:51 +00:00
Olivier 'reivilibre'
375ebb5ffc Forced kwargs, describe to unfootgun 2025-12-18 15:04:51 +00:00
Olivier 'reivilibre'
d7092a3a76 Remove redundant override 2025-12-18 15:04:51 +00:00
Olivier 'reivilibre'
f86d9f8919 Tidy, don't concat SQL, simplify accumulation 2025-12-18 15:04:51 +00:00
Olivier 'reivilibre'
6e874a9e65 Fix docstring, tidy 2025-12-18 12:50:37 +00:00
Olivier 'reivilibre'
6522c3eaed Tidy, inline conditions, reformat 2025-12-18 12:49:19 +00:00
Olivier 'reivilibre'
0ea82c597c Build set directly 2025-12-18 12:44:25 +00:00
Olivier 'reivilibre'
e1b6eab8a9 Comment 2025-12-18 12:42:43 +00:00
Olivier 'reivilibre'
8b08b1f606 Build set directly 2025-12-18 12:28:37 +00:00
Olivier 'reivilibre'
c90dd644bb Select onecol to avoid manual unwrap 2025-12-18 12:25:10 +00:00
Olivier 'reivilibre'
38f5dadd1b Generate multiple IDs in single call 2025-12-18 12:22:51 +00:00
Olivier 'reivilibre'
df3d2aace6 Tidy, annotate, comment 2025-12-18 12:16:40 +00:00
Olivier 'reivilibre'
55727c7212 Clean up manual UPDATE many 2025-12-18 12:00:20 +00:00
Olivier 'reivilibre'
155ea241d3 redundant condition 2025-12-18 11:47:51 +00:00
Olivier 'reivilibre'
fd8d0ddaf7 Use clock as a time source 2025-12-18 11:45:34 +00:00
Olivier 'reivilibre'
67a1af4191 test utils: use f-strings 2025-12-18 11:35:45 +00:00
Olivier 'reivilibre'
e9773ffd99 Docstring tweaks 2025-12-12 18:23:07 +00:00
Olivier 'reivilibre'
aea25954fc msc4354_enabled fields: make private 2025-12-12 17:36:34 +00:00
Olivier 'reivilibre'
1f077a566c Various tidying (no semantic change) 2025-12-12 17:32:49 +00:00
Olivier 'reivilibre'
dcb7678281 fixup! Merge tag 'v1.144.0' into kegan/sticky-events 2025-12-11 17:25:13 +00:00
Olivier 'reivilibre'
9016d68926 Merge tag 'v1.144.0' into kegan/sticky-events
# Synapse 1.144.0

The team has decided to deprecate and stop publishing python wheels for MacOS.
Synapse docker images will continue to work on MacOS, as will building Synapse
from source (though note this requires a Rust compiler).

Admins using the unstable [MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666) endpoint (`/_matrix/client/unstable/uk.half-shot.msc2666/user/mutual_rooms`),
please check [the relevant section in the upgrade notes](https://github.com/element-hq/synapse/blob/develop/docs/upgrade.md#upgrading-to-v11440) as this release contains changes
that disable that endpoint by default.

No significant changes since 1.144.0rc1.

# Synapse 1.144.0rc1

Admins using the unstable [MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666) endpoint (`/_matrix/client/unstable/uk.half-shot.msc2666/user/mutual_rooms`), please check [the relevant section in the upgrade notes](https://github.com/element-hq/synapse/blob/develop/docs/upgrade.md#upgrading-to-v11440) as this release contains changes that disable that endpoint by default.

### Features

- Add experimental implementation of [MSC4380](https://github.com/matrix-org/matrix-spec-proposals/pull/4380) (invite blocking). ([\#19203](https://github.com/element-hq/synapse/issues/19203))
- Allow restarting delayed event timeouts on workers. ([\#19207](https://github.com/element-hq/synapse/issues/19207))

### Bugfixes

- Fix a bug in the database function for fetching state deltas that could result in unnecessarily long query times. ([\#18960](https://github.com/element-hq/synapse/issues/18960))
- Fix v12 rooms when running with `use_frozen_dicts: True`. ([\#19235](https://github.com/element-hq/synapse/issues/19235))
- Fix bug where invalid `canonical_alias` content would return 500 instead of 400. ([\#19240](https://github.com/element-hq/synapse/issues/19240))
- Fix bug where `Duration` was logged incorrectly. ([\#19267](https://github.com/element-hq/synapse/issues/19267))

### Improved Documentation

- Document in the `--config-path` help how multiple files are merged - by merging them shallowly. ([\#19243](https://github.com/element-hq/synapse/issues/19243))

### Deprecations and Removals

- Stop building release wheels for MacOS. ([\#19225](https://github.com/element-hq/synapse/issues/19225))

### Internal Changes

- Improve event filtering for Simplified Sliding Sync. ([\#17782](https://github.com/element-hq/synapse/issues/17782))
- Export `SYNAPSE_SUPPORTED_COMPLEMENT_TEST_PACKAGES` environment variable from `scripts-dev/complement.sh`. ([\#19208](https://github.com/element-hq/synapse/issues/19208))
- Refactor `scripts-dev/complement.sh` logic to avoid `exit` to facilitate being able to source it from other scripts (composable). ([\#19209](https://github.com/element-hq/synapse/issues/19209))
- Expire sliding sync connections that are too old or have too much pending data. ([\#19211](https://github.com/element-hq/synapse/issues/19211))
- Require an experimental feature flag to be enabled in order for the unstable [MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666) endpoint (`/_matrix/client/unstable/uk.half-shot.msc2666/user/mutual_rooms`) to be available. ([\#19219](https://github.com/element-hq/synapse/issues/19219))
- Prevent changelog check CI running on @dependabot's PRs even when a human has modified the branch. ([\#19220](https://github.com/element-hq/synapse/issues/19220))
- Auto-fix trailing spaces in multi-line strings and comments when running the lint script. ([\#19221](https://github.com/element-hq/synapse/issues/19221))
- Move towards using a dedicated `Duration` type. ([\#19223](https://github.com/element-hq/synapse/issues/19223), [\#19229](https://github.com/element-hq/synapse/issues/19229))
- Improve robustness of the SQL schema linting in CI. ([\#19224](https://github.com/element-hq/synapse/issues/19224))
- Add log to determine whether clients are using `/messages` as expected. ([\#19226](https://github.com/element-hq/synapse/issues/19226))
- Simplify README and add ESS Getting started section. ([\#19228](https://github.com/element-hq/synapse/issues/19228), [\#19259](https://github.com/element-hq/synapse/issues/19259))
- Add a unit test for ensuring associated refresh tokens are erased when a device is deleted. ([\#19230](https://github.com/element-hq/synapse/issues/19230))
- Prompt user to consider adding future deprecations to the changelog in release script. ([\#19239](https://github.com/element-hq/synapse/issues/19239))
- Fix check of the Rust compiled code being outdated when using source checkout and `.egg-info`. ([\#19251](https://github.com/element-hq/synapse/issues/19251))
- Stop building macos wheels in CI pipeline. ([\#19263](https://github.com/element-hq/synapse/issues/19263))

### Updates to locked dependencies

* Bump Swatinem/rust-cache from 2.8.1 to 2.8.2. ([\#19244](https://github.com/element-hq/synapse/issues/19244))
* Bump actions/checkout from 5.0.0 to 6.0.0. ([\#19213](https://github.com/element-hq/synapse/issues/19213))
* Bump actions/setup-go from 6.0.0 to 6.1.0. ([\#19214](https://github.com/element-hq/synapse/issues/19214))
* Bump actions/setup-python from 6.0.0 to 6.1.0. ([\#19245](https://github.com/element-hq/synapse/issues/19245))
* Bump attrs from 25.3.0 to 25.4.0. ([\#19215](https://github.com/element-hq/synapse/issues/19215))
* Bump docker/metadata-action from 5.9.0 to 5.10.0. ([\#19246](https://github.com/element-hq/synapse/issues/19246))
* Bump http from 1.3.1 to 1.4.0. ([\#19249](https://github.com/element-hq/synapse/issues/19249))
* Bump pydantic from 2.12.4 to 2.12.5. ([\#19250](https://github.com/element-hq/synapse/issues/19250))
* Bump pyopenssl from 25.1.0 to 25.3.0. ([\#19248](https://github.com/element-hq/synapse/issues/19248))
* Bump rpds-py from 0.28.0 to 0.29.0. ([\#19216](https://github.com/element-hq/synapse/issues/19216))
* Bump rpds-py from 0.29.0 to 0.30.0. ([\#19247](https://github.com/element-hq/synapse/issues/19247))
* Bump sentry-sdk from 2.44.0 to 2.46.0. ([\#19218](https://github.com/element-hq/synapse/issues/19218))
* Bump types-bleach from 6.2.0.20250809 to 6.3.0.20251115. ([\#19217](https://github.com/element-hq/synapse/issues/19217))
* Bump types-jsonschema from 4.25.1.20250822 to 4.25.1.20251009. ([\#19252](https://github.com/element-hq/synapse/issues/19252))
2025-12-11 16:42:27 +00:00
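
As an aside on the `--config-path` documentation item above ([\#19243]): a minimal sketch of what a "shallow" merge of multiple config files means, assuming YAML config files and a hypothetical `shallow_merge_configs` helper. This is an illustration only, not Synapse's actual implementation:

```python
# Illustration only: top-level keys from later files replace earlier ones
# wholesale; nested keys are NOT merged recursively. Helper name and the use
# of PyYAML here are assumptions, not Synapse's actual code.
from typing import Any

import yaml


def shallow_merge_configs(paths: list[str]) -> dict[str, Any]:
    merged: dict[str, Any] = {}
    for path in paths:
        with open(path) as f:
            config = yaml.safe_load(f) or {}
        # Shallow merge: later files overwrite earlier top-level keys.
        merged.update(config)
    return merged
```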
Kegan Dougal
a4cac852a9 Run MSC4354 tests 2025-10-08 12:50:21 +01:00
Kegan Dougal
4d02a4cca0 Linting 2025-10-08 11:59:17 +01:00
Kegan Dougal
b2c967fd1c Fix query param name 2025-10-08 11:37:27 +01:00
Kegan Dougal
f0689cee5e Linting 2025-10-08 10:01:47 +01:00
Kegan Dougal
b1af5fece6 Merge branch 'develop' into kegan/sticky-events 2025-10-08 09:43:15 +01:00
Kegan Dougal
adb601b2d1 Add chunking for /sync, not SSS 2025-10-08 09:11:32 +01:00
Kegan Dougal
4def40414e Appease the regexp on delta lint checks 2025-10-06 14:45:14 +01:00
Kegan Dougal
686ce52723 Changelog 2025-10-06 14:28:21 +01:00
Kegan Dougal
58bf128581 Send NEW_SERVER_JOINED at the right time 2025-10-06 14:18:39 +01:00
Kegan Dougal
7f1e057cca Always cache invalidate when receiving sticky event updates, so removing the need for a tri-state soft failure flag 2025-10-03 15:24:34 +01:00
Kegan Dougal
075312cf2d Send sticky events to newly joined servers.
Process those sticky events as well. In so doing, fixes
https://github.com/element-hq/synapse/issues/18563 because we now
have the room_creator as an update value not insert value.
2025-10-03 11:35:12 +01:00
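
The fix referenced above hinges on upsert semantics: a column named in the `DO UPDATE` clause can be filled in by a later write, whereas an insert-only value is fixed at row creation. A minimal sketch with a simplified stand-in table (not Synapse's actual schema):

```python
# Illustration only: a two-column stand-in for the "room_creator as an update
# value, not an insert value" idea from the commit message above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rooms (room_id TEXT PRIMARY KEY, room_creator TEXT)")


def upsert_room(room_id: str, room_creator: str) -> None:
    # Because room_creator appears in the DO UPDATE clause, a later write can
    # fill it in even if the row was first inserted without it.
    conn.execute(
        """
        INSERT INTO rooms (room_id, room_creator) VALUES (?, ?)
        ON CONFLICT (room_id) DO UPDATE SET room_creator = excluded.room_creator
        """,
        (room_id, room_creator),
    )


upsert_room("!room:example.org", "@creator:example.org")
```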
Kegan Dougal
aac3c846a8 Use a tri-state for soft failed to communicate when we need to cache invalidate 2025-10-02 16:47:45 +01:00
Kegan Dougal
888ab79b3b Add NewServerJoined replication command
Currently emits on joins and logs on receive, WIP.
2025-10-02 15:25:47 +01:00
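
For context, replication commands of this kind are typically named, line-based messages. A hypothetical sketch of the shape a `NEW_SERVER_JOINED` command could take; the class, fields, and wire format are assumptions for illustration, not Synapse's actual replication API:

```python
# Illustration only: a hypothetical "NAME arg1 arg2" style replication command.
class NewServerJoinedCommand:
    NAME = "NEW_SERVER_JOINED"

    def __init__(self, server_name: str, room_id: str) -> None:
        self.server_name = server_name
        self.room_id = room_id

    @classmethod
    def from_line(cls, line: str) -> "NewServerJoinedCommand":
        server_name, room_id = line.split(" ", 1)
        return cls(server_name, room_id)

    def to_line(self) -> str:
        return f"{self.server_name} {self.room_id}"


cmd = NewServerJoinedCommand.from_line("other.server !room:example.org")
assert cmd.to_line() == "other.server !room:example.org"
```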
Kegan Dougal
aa45bf7c3a Add msc4354 to /versions response 2025-10-02 10:46:44 +01:00
Kegan Dougal
15453d4e6e JSON false not str 2025-10-02 09:30:21 +01:00
Kegan Dougal
78c40973f4 SQLite specific soft-failure update code 2025-10-02 09:26:33 +01:00
Kegan Dougal
4acc98d23e Don't persist sticky outliers 2025-10-01 17:01:45 +01:00
Kegan Dougal
651e829632 Use standard unstable identifiers 2025-10-01 11:55:39 +01:00
Kegan Dougal
105d2cd05b Merge branch 'develop' into kegan/sticky-events 2025-09-30 12:56:30 +01:00
Kegan Dougal
de3e9b49ec Add msc4354_sticky_duration_ttl_ms support 2025-09-29 16:26:32 +01:00
Kegan Dougal
148caefcba Fix trial tests 2025-09-26 09:51:59 +01:00
Kegan Dougal
33d80be69f Send sticky events when catching up over federation 2025-09-26 09:29:52 +01:00
Kegan Dougal
ad6a2b9e0c Update docs to not lie 2025-09-24 12:45:22 +01:00
Kegan Dougal
771692addd Rejig when we persist sticky events
Persist inside persist_events to guarantee it is done. After that txn,
recheck soft failure.
2025-09-24 11:45:06 +01:00
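
A sketch of the ordering described in this commit message, using in-memory stand-ins (all names hypothetical, not Synapse's internals): the sticky record is written inside the same persistence step as the event, and soft failure is rechecked only after that step completes:

```python
# Illustration only: pretend persist_events() is one database transaction.
persisted: list[dict] = []
sticky_records: list[str] = []


def recheck_soft_failure(event: dict) -> None:
    # Placeholder for re-running soft-failure checks against persisted state.
    print("rechecking soft failure for", event["event_id"])


def persist_events(events: list[dict]) -> None:
    # Inside the "transaction": the sticky record cannot be lost if the
    # event write succeeds, because both happen together.
    for event in events:
        persisted.append(event)
        if event.get("sticky"):
            sticky_records.append(event["event_id"])
    # After the "transaction" has committed, recheck soft failure.
    for event in events:
        if event.get("sticky"):
            recheck_soft_failure(event)


persist_events([{"event_id": "$e1", "sticky": True}])
```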
Kegan Dougal
666e94b75a Don't re-evaluate spam 2025-09-24 11:25:35 +01:00
Kegan Dougal
2728b21f3d Re-evaluate soft-failure on sticky events 2025-09-24 10:11:18 +01:00
Kegan Dougal
1e812e4df0 Fix sqlite 2025-09-23 14:48:28 +01:00
Kegan Dougal
ac0f8c20e8 Support MSC4140 Delayed Events with sticky events 2025-09-23 13:54:32 +01:00
Kegan Dougal
7c8daf4ed9 Get sticky events working with Simplified Sliding Sync 2025-09-23 11:02:20 +01:00
Kegan Dougal
0cfdd0d6b5 Delete from sticky_events periodically 2025-09-22 14:27:54 +01:00
Kegan Dougal
7af74298b3 Don't include expired sticky events in /sync responses 2025-09-22 10:16:00 +01:00
Kegan Dougal
3e7a5a6bd6 Insert sticky events into /sync responses 2025-09-19 16:28:04 +01:00
Kegan Dougal
e01a22b2de Hook up replication receiver, add sticky events to sync tokens 2025-09-19 14:30:26 +01:00
Kegan Dougal
7801e68a33 Use multi-writer streams for sticky events 2025-09-19 10:09:18 +01:00
Kegan Dougal
869953456a Persist sticky events in sticky_events table
This only works on postgres for now
2025-09-18 16:45:10 +01:00
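
A hypothetical sketch of the kind of engine gate implied by "only works on postgres for now" (function shape and table columns invented for illustration; a later commit above adds the SQLite-specific path):

```python
# Illustration only: gate the sticky_events write on the database engine.
def insert_sticky_event(txn, database_engine: str, event_id: str, expiry_ts: int) -> None:
    if database_engine != "postgres":
        raise NotImplementedError("sticky_events persistence is Postgres-only for now")
    txn.execute(
        "INSERT INTO sticky_events (event_id, expiry_ts) VALUES (%s, %s)",
        (event_id, expiry_ts),
    )
```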
Kegan Dougal
abf658c712 WIP MSC4354; stub out soft-failure code for now 2025-09-18 14:33:11 +01:00
271 changed files with 13887 additions and 12673 deletions


@@ -7,4 +7,4 @@ if command -v yum &> /dev/null; then
fi
# Install a Rust toolchain
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain stable -y --profile minimal
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain 1.82.0 -y --profile minimal

.ci/scripts/auditwheel_wrapper.py Executable file

@@ -0,0 +1,146 @@
#!/usr/bin/env python
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#

# Wraps `auditwheel repair` to first check if we're repairing a potentially abi3
# compatible wheel, if so rename the wheel before repairing it.

import argparse
import os
import subprocess
from zipfile import ZipFile

from packaging.tags import Tag
from packaging.utils import parse_wheel_filename
from packaging.version import Version


def check_is_abi3_compatible(wheel_file: str) -> None:
    """Check the contents of the built wheel for any `.so` files that are *not*
    abi3 compatible.
    """
    with ZipFile(wheel_file, "r") as wheel:
        for file in wheel.namelist():
            if not file.endswith(".so"):
                continue

            if not file.endswith(".abi3.so"):
                raise Exception(f"Found non-abi3 lib: {file}")


def cpython(wheel_file: str, name: str, version: Version, tag: Tag) -> str:
    """Replaces the cpython wheel file with a ABI3 compatible wheel"""
    if tag.abi == "abi3":
        # Nothing to do.
        return wheel_file

    check_is_abi3_compatible(wheel_file)

    # HACK: it seems that some older versions of pip will consider a wheel marked
    # as macosx_11_0 as incompatible with Big Sur. I haven't done the full archaeology
    # here; there are some clues in
    #    https://github.com/pantsbuild/pants/pull/12857
    #    https://github.com/pypa/pip/issues/9138
    #    https://github.com/pypa/packaging/pull/319
    # Empirically this seems to work, note that macOS 11 and 10.16 are the same,
    # both versions are valid for backwards compatibility.
    platform = tag.platform.replace("macosx_11_0", "macosx_10_16")
    abi3_tag = Tag(tag.interpreter, "abi3", platform)

    dirname = os.path.dirname(wheel_file)
    new_wheel_file = os.path.join(
        dirname,
        f"{name}-{version}-{abi3_tag}.whl",
    )

    os.rename(wheel_file, new_wheel_file)

    print("Renamed wheel to", new_wheel_file)

    return new_wheel_file


def main(wheel_file: str, dest_dir: str, archs: str | None) -> None:
    """Entry point"""

    # Parse the wheel file name into its parts. Note that `parse_wheel_filename`
    # normalizes the package name (i.e. it converts matrix_synapse ->
    # matrix-synapse), which is not what we want.
    _, version, build, tags = parse_wheel_filename(os.path.basename(wheel_file))
    name = os.path.basename(wheel_file).split("-")[0]

    if len(tags) != 1:
        # We expect only a wheel file with only a single tag
        raise Exception(f"Unexpectedly found multiple tags: {tags}")

    tag = next(iter(tags))

    if build:
        # We don't use build tags in Synapse
        raise Exception(f"Unexpected build tag: {build}")

    # If the wheel is for cpython then convert it into an abi3 wheel.
    if tag.interpreter.startswith("cp"):
        wheel_file = cpython(wheel_file, name, version, tag)

    # Finally, repair the wheel.
    if archs is not None:
        # If we are given archs then we are on macos and need to use
        # `delocate-listdeps`.
        subprocess.run(["delocate-listdeps", wheel_file], check=True)
        subprocess.run(
            ["delocate-wheel", "--require-archs", archs, "-w", dest_dir, wheel_file],
            check=True,
        )
    else:
        subprocess.run(["auditwheel", "repair", "-w", dest_dir, wheel_file], check=True)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Tag wheel as abi3 and repair it.")

    parser.add_argument(
        "--wheel-dir",
        "-w",
        metavar="WHEEL_DIR",
        help="Directory to store delocated wheels",
        required=True,
    )

    parser.add_argument(
        "--require-archs",
        metavar="archs",
        default=None,
    )

    parser.add_argument(
        "wheel_file",
        metavar="WHEEL_FILE",
    )

    args = parser.parse_args()

    wheel_file = args.wheel_file
    wheel_dir = args.wheel_dir
    archs = args.require_archs

    main(wheel_file, wheel_dir, archs)
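
To illustrate the renaming logic above, here is a standalone snippet (not part of the script) that computes the abi3 filename `cpython()` would produce for a hypothetical macOS wheel:

```python
# Illustration only: the sample filename is made up.
from packaging.tags import Tag
from packaging.utils import parse_wheel_filename

filename = "matrix_synapse-1.144.0-cp310-cp310-macosx_11_0_x86_64.whl"  # hypothetical
_, version, _, tags = parse_wheel_filename(filename)
name = filename.split("-")[0]
tag = next(iter(tags))

# Mirror the macosx_11_0 -> macosx_10_16 workaround and the abi3 retag.
platform = tag.platform.replace("macosx_11_0", "macosx_10_16")
abi3_tag = Tag(tag.interpreter, "abi3", platform)
print(f"{name}-{version}-{abi3_tag}.whl")
# matrix_synapse-1.144.0-cp310-abi3-macosx_10_16_x86_64.whl
```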

.ci/scripts/prepare_old_deps.sh Executable file

@@ -0,0 +1,39 @@
#!/usr/bin/env bash
# this script is run by GitHub Actions in a plain `jammy` container; it
# - installs the minimal system requirements, and poetry;
# - patches the project definition file to refer to old versions only;
# - creates a venv with these old versions using poetry; and finally
# - invokes `trial` to run the tests with old deps.
set -ex
# Prevent virtualenv from auto-updating pip to an incompatible version
export VIRTUALENV_NO_DOWNLOAD=1
# TODO: in the future, we could use an implementation of
# https://github.com/python-poetry/poetry/issues/3527
# https://github.com/pypa/pip/issues/8085
# to select the lowest possible versions, rather than resorting to this sed script.
# Patch the project definitions in-place:
# - `-E` use extended regex syntax.
# - Don't modify the line that defines required Python versions.
# - Replace all lower and tilde bounds with exact bounds.
# - Replace all caret bounds with exact bounds.
# - Delete all lines referring to psycopg2 - so no testing of postgres support.
# - Use pyopenssl 17.0, which is the oldest version that works with
# a `cryptography` compiled against OpenSSL 1.1.
# - Omit systemd: we're not logging to journal here.
sed -i -E '
/^\s*requires-python\s*=/b
s/[~>]=/==/g
s/\^/==/g
/psycopg2/d
s/pyOpenSSL\s*==\s*16\.0\.0"/pyOpenSSL==17.0.0"/
/systemd/d
' pyproject.toml
echo "::group::Patched pyproject.toml"
cat pyproject.toml
echo "::endgroup::"

.gitattributes vendored

@@ -1 +0,0 @@
.github/workflows/* merge=ours


@@ -1,92 +1,23 @@
version: 2

# As dependabot is currently only run on a weekly basis, we raise the
# open-pull-requests-limit to 10 (from the default of 5) to better ensure we
# don't continuously grow a backlog of updates.
updates:
  # "pip" is the correct setting for poetry, per https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#package-ecosystem
  - package-ecosystem: "pip"
    directory: "/"
    open-pull-requests-limit: 10
    schedule:
      interval: "weekly"
    # Group patch updates to packages together into a single PR, as they rarely
    # if ever contain breaking changes that need to be reviewed separately.
    #
    # Less PRs means a streamlined review process.
    #
    # Python packages follow semantic versioning, and tend to only introduce
    # breaking changes in major version bumps. Thus, we'll group minor and patch
    # versions together.
    groups:
      minor-and-patches:
        applies-to: version-updates
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"
    # Prevent pulling packages that were recently updated to help mitigate
    # supply chain attacks. 14 days was taken from the recommendation at
    # https://blog.yossarian.net/2025/11/21/We-should-all-be-using-dependency-cooldowns
    # where the author noted that 9/10 attacks would have been mitigated by a
    # two week cooldown.
    #
    # The cooldown only applies to general updates; security updates will still
    # be pulled in as soon as possible.
    cooldown:
      default-days: 14

  - package-ecosystem: "docker"
    directory: "/docker"
    open-pull-requests-limit: 10
    schedule:
      interval: "weekly"
    # For container versions, breaking changes are also typically only introduced in major
    # package bumps.
    groups:
      minor-and-patches:
        applies-to: version-updates
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"
    cooldown:
      default-days: 14

  - package-ecosystem: "github-actions"
    directory: "/"
    open-pull-requests-limit: 10
    schedule:
      interval: "weekly"
    # Similarly for GitHub Actions, breaking changes are typically only introduced in major
    # package bumps.
    groups:
      minor-and-patches:
        applies-to: version-updates
        patterns:
          - "*"
        update-types:
          - "minor"
          - "patch"
    cooldown:
      default-days: 14

  - package-ecosystem: "cargo"
    directory: "/"
    open-pull-requests-limit: 10
    versioning-strategy: "lockfile-only"
    schedule:
      interval: "weekly"
    # The Rust ecosystem is special in that breaking changes are often introduced
    # in minor version bumps, as packages typically stay pre-1.0 for a long time.
    # Thus we specifically keep minor version bumps separate in their own PRs.
    groups:
      patches:
        applies-to: version-updates
        patterns:
          - "*"
        update-types:
          - "patch"
    cooldown:
      default-days: 14

.github/workflows/docker.yml vendored Normal file

@@ -0,0 +1,155 @@
# GitHub actions workflow which builds and publishes the docker images.

name: Build docker images

on:
  push:
    tags: ["v*"]
    branches: [master, main, develop]
  workflow_dispatch:

permissions:
  contents: read
  packages: write
  id-token: write # needed for signing the images with GitHub OIDC Token

jobs:
  build:
    name: Build and push image for ${{ matrix.platform }}
    runs-on: ${{ matrix.runs_on }}
    strategy:
      matrix:
        include:
          - platform: linux/amd64
            runs_on: ubuntu-24.04
            suffix: linux-amd64
          - platform: linux/arm64
            runs_on: ubuntu-24.04-arm
            suffix: linux-arm64
    steps:
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1

      - name: Checkout repository
        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0

      - name: Extract version from pyproject.toml
        # Note: explicitly requesting bash will mean bash is invoked with `-eo pipefail`, see
        # https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsshell
        shell: bash
        run: |
          echo "SYNAPSE_VERSION=$(grep "^version" pyproject.toml | sed -E 's/version\s*=\s*["]([^"]*)["]/\1/')" >> $GITHUB_ENV

      - name: Log in to DockerHub
        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Log in to GHCR
        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push by digest
        id: build
        uses: docker/build-push-action@263435318d21b8e681c14492fe198d362a7d2c83 # v6.18.0
        with:
          push: true
          labels: |
            gitsha1=${{ github.sha }}
            org.opencontainers.image.version=${{ env.SYNAPSE_VERSION }}
          tags: |
            docker.io/matrixdotorg/synapse
            ghcr.io/element-hq/synapse
          file: "docker/Dockerfile"
          platforms: ${{ matrix.platform }}
          outputs: type=image,push-by-digest=true,name-canonical=true,push=true

      - name: Export digest
        run: |
          mkdir -p ${{ runner.temp }}/digests
          digest="${{ steps.build.outputs.digest }}"
          touch "${{ runner.temp }}/digests/${digest#sha256:}"

      - name: Upload digest
        uses: actions/upload-artifact@v5
        with:
          name: digests-${{ matrix.suffix }}
          path: ${{ runner.temp }}/digests/*
          if-no-files-found: error
          retention-days: 1

  merge:
    name: Push merged images to ${{ matrix.repository }}
    runs-on: ubuntu-latest
    strategy:
      matrix:
        repository:
          - docker.io/matrixdotorg/synapse
          - ghcr.io/element-hq/synapse
    needs:
      - build
    steps:
      - name: Download digests
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
        with:
          path: ${{ runner.temp }}/digests
          pattern: digests-*
          merge-multiple: true

      - name: Log in to DockerHub
        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
        if: ${{ startsWith(matrix.repository, 'docker.io') }}
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Log in to GHCR
        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
        if: ${{ startsWith(matrix.repository, 'ghcr.io') }}
        with:
          registry: ghcr.io
          username: ${{ github.repository_owner }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1

      - name: Install Cosign
        uses: sigstore/cosign-installer@faadad0cce49287aee09b3a48701e75088a2c6ad # v4.0.0

      - name: Calculate docker image tag
        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
        with:
          images: ${{ matrix.repository }}
          flavor: |
            latest=false
          tags: |
            type=raw,value=develop,enable=${{ github.ref == 'refs/heads/develop' }}
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/master' }}
            type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
            type=pep440,pattern={{raw}}
            type=sha

      - name: Create manifest list and push
        working-directory: ${{ runner.temp }}/digests
        env:
          REPOSITORY: ${{ matrix.repository }}
        run: |
          docker buildx imagetools create $(jq -cr '.tags | map("-t " + .) | join(" ")' <<< "$DOCKER_METADATA_OUTPUT_JSON") \
            $(printf "$REPOSITORY@sha256:%s " *)

      - name: Sign each manifest
        env:
          REPOSITORY: ${{ matrix.repository }}
        run: |
          DIGESTS=""
          for TAG in $(echo "$DOCKER_METADATA_OUTPUT_JSON" | jq -r '.tags[]'); do
            DIGEST="$(docker buildx imagetools inspect $TAG --format '{{json .Manifest}}' | jq -r '.digest')"
            DIGESTS="$DIGESTS $REPOSITORY@$DIGEST"
          done
          cosign sign --yes $DIGESTS
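
The jq expression in "Create manifest list and push" above builds a list of `-t` tag arguments from the metadata-action's JSON output; the equivalent in Python, with made-up sample values:

```python
# Illustration only: the metadata JSON shape is the documented
# docker/metadata-action output; the tag values here are samples.
import json

metadata = json.loads(
    '{"tags": ["docker.io/matrixdotorg/synapse:develop",'
    ' "docker.io/matrixdotorg/synapse:sha-abc123"]}'
)
args = " ".join("-t " + tag for tag in metadata["tags"])
print(args)
# -t docker.io/matrixdotorg/synapse:develop -t docker.io/matrixdotorg/synapse:sha-abc123
```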

.github/workflows/docs-pr-netlify.yaml vendored Normal file

@@ -0,0 +1,34 @@
name: Deploy documentation PR preview

on:
  workflow_run:
    workflows: [ "Prepare documentation PR preview" ]
    types:
      - completed

jobs:
  netlify:
    if: github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.event == 'pull_request'
    runs-on: ubuntu-latest
    steps:
      # There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
      # (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
      - name: 📥 Download artifact
        uses: dawidd6/action-download-artifact@ac66b43f0e6a346234dd65d4d0c8fbb31cb316e5 # v11
        with:
          workflow: docs-pr.yaml
          run_id: ${{ github.event.workflow_run.id }}
          name: book
          path: book

      - name: 📤 Deploy to Netlify
        uses: matrix-org/netlify-pr-preview@9805cd123fc9a7e421e35340a05e1ebc5dee46b5 # v3
        with:
          path: book
          owner: ${{ github.event.workflow_run.head_repository.owner.login }}
          branch: ${{ github.event.workflow_run.head_branch }}
          revision: ${{ github.event.workflow_run.head_sha }}
          token: ${{ secrets.NETLIFY_AUTH_TOKEN }}
          site_id: ${{ secrets.NETLIFY_SITE_ID }}
          desc: Documentation preview
          deployment_env: PR Documentation Preview

.github/workflows/docs-pr.yaml vendored Normal file

@@ -0,0 +1,71 @@
name: Prepare documentation PR preview

on:
  pull_request:
    paths:
      - docs/**
      - book.toml
      - .github/workflows/docs-pr.yaml
      - scripts-dev/schema_versions.py

jobs:
  pages:
    name: GitHub Pages
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
        with:
          # Fetch all history so that the schema_versions script works.
          fetch-depth: 0

      - name: Setup mdbook
        uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0
        with:
          mdbook-version: '0.4.17'

      - name: Setup python
        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"

      - run: "pip install 'packaging>=20.0' 'GitPython>=3.1.20'"

      - name: Build the documentation
        # mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
        # However, we're using docs/README.md for other purposes and need to pick a new page
        # as the default. Let's opt for the welcome page instead.
        run: |
          mdbook build
          cp book/welcome_and_overview.html book/index.html

      - name: Upload Artifact
        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
        with:
          name: book
          path: book
          # We'll only use this in a workflow_run, then we're done with it
          retention-days: 1

  link-check:
    name: Check links in documentation
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0

      - name: Setup mdbook
        uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0
        with:
          mdbook-version: '0.4.17'

      - name: Setup htmltest
        run: |
          wget https://github.com/wjdp/htmltest/releases/download/v0.17.0/htmltest_0.17.0_linux_amd64.tar.gz
          echo '775c597ee74899d6002cd2d93076f897f4ba68686bceabe2e5d72e84c57bc0fb  htmltest_0.17.0_linux_amd64.tar.gz' | sha256sum -c
          tar zxf htmltest_0.17.0_linux_amd64.tar.gz

      - name: Test links with htmltest
        # Build the book with `./` as the site URL (to make checks on 404.html possible)
        # Then run htmltest (without checking external links since that involves the network and is slow).
        run: |
          MDBOOK_OUTPUT__HTML__SITE_URL="./" mdbook build
          ./htmltest book --skip-external

.github/workflows/docs.yaml vendored Normal file

@@ -0,0 +1,99 @@
name: Deploy the documentation

on:
  push:
    branches:
      # For bleeding-edge documentation
      - develop
      # For documentation specific to a release
      - 'release-v*'
      # stable docs
      - master

  workflow_dispatch:

jobs:
  pre:
    name: Calculate variables for GitHub Pages deployment
    runs-on: ubuntu-latest
    steps:
      # Figure out the target directory.
      #
      # The target directory depends on the name of the branch
      #
      - name: Get the target directory name
        id: vars
        run: |
          # first strip the 'refs/heads/' prefix with some shell foo
          branch="${GITHUB_REF#refs/heads/}"

          case $branch in
              release-*)
                  # strip 'release-' from the name for release branches.
                  branch="${branch#release-}"
                  ;;
              master)
                  # deploy to "latest" for the master branch.
                  branch="latest"
                  ;;
          esac

          # finally, set the 'branch-version' var.
          echo "branch-version=$branch" >> "$GITHUB_OUTPUT"
    outputs:
      branch-version: ${{ steps.vars.outputs.branch-version }}

  ################################################################################
  pages-docs:
    name: GitHub Pages
    runs-on: ubuntu-latest
    needs:
      - pre
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
        with:
          # Fetch all history so that the schema_versions script works.
          fetch-depth: 0

      - name: Setup mdbook
        uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0
        with:
          mdbook-version: '0.4.17'

      - name: Set version of docs
        run: echo 'window.SYNAPSE_VERSION = "${{ needs.pre.outputs.branch-version }}";' > ./docs/website_files/version.js

      - name: Setup python
        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"

      - run: "pip install 'packaging>=20.0' 'GitPython>=3.1.20'"

      - name: Build the documentation
        # mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
        # However, we're using docs/README.md for other purposes and need to pick a new page
        # as the default. Let's opt for the welcome page instead.
        run: |
          mdbook build
          cp book/welcome_and_overview.html book/index.html

      - name: Prepare and publish schema files
        run: |
          sudo apt-get update && sudo apt-get install -y yq
          mkdir -p book/schema
          # Remove developer notice before publishing.
          rm schema/v*/Do\ not\ edit\ files\ in\ this\ folder
          # Copy schema files that are independent from current Synapse version.
          cp -r -t book/schema schema/v*/
          # Convert config schema from YAML source file to JSON.
          yq < schema/synapse-config.schema.yaml \
             > book/schema/synapse-config.schema.json

      # Deploy to the target directory.
      - name: Deploy to gh pages
        uses: peaceiris/actions-gh-pages@4f9cc6602d3f66b9c108549d475ec49e8ef4d45e # v4.0.0
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./book
          destination_dir: ./${{ needs.pre.outputs.branch-version }}
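
The branch-to-directory mapping performed by the shell `case` statement in the `pre` job above, restated as a small Python function for clarity:

```python
# Illustration only: a Python restatement of the workflow's shell logic.
def target_directory(github_ref: str) -> str:
    branch = github_ref.removeprefix("refs/heads/")
    if branch.startswith("release-"):
        # strip 'release-' from the name for release branches.
        return branch.removeprefix("release-")
    if branch == "master":
        # deploy to "latest" for the master branch.
        return "latest"
    return branch  # e.g. develop stays develop


assert target_directory("refs/heads/master") == "latest"
assert target_directory("refs/heads/release-v1.144") == "v1.144"
```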

.github/workflows/fix_lint.yaml vendored Normal file

@@ -0,0 +1,52 @@
# A helper workflow to automatically fixup any linting errors on a PR. Must be
# triggered manually.

name: Attempt to automatically fix linting errors

on:
  workflow_dispatch:

env:
  # We use nightly so that `fmt` correctly groups together imports, and
  # clippy correctly fixes up the benchmarks.
  RUST_VERSION: nightly-2025-06-24

jobs:
  fixup:
    name: Fix up
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0

      - name: Install Rust
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
          components: clippy, rustfmt

      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2

      - name: Setup Poetry
        uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
        with:
          install-project: "false"
          poetry-version: "2.1.1"

      - name: Run ruff check
        continue-on-error: true
        run: poetry run ruff check --fix .

      - name: Run ruff format
        continue-on-error: true
        run: poetry run ruff format --quiet .

      - run: cargo clippy --all-features --fix -- -D warnings
        continue-on-error: true

      - run: cargo fmt
        continue-on-error: true

      - uses: stefanzweifel/git-auto-commit-action@28e16e81777b558cc906c8750092100bbb34c5e3 # v7.0.0
        with:
          commit_message: "Attempt to fix linting"

.github/workflows/latest_deps.yml vendored Normal file

@@ -0,0 +1,243 @@
# People who are freshly `pip install`ing from PyPI will pull in the latest versions of
# dependencies which match the broad requirements. Since most CI runs are against
# the locked poetry environment, run specifically against the latest dependencies to
# know if there's an upcoming breaking change.
#
# As an overview this workflow:
# - checks out develop,
# - installs from source, pulling in the dependencies like a fresh `pip install` would, and
# - runs mypy and test suites in that checkout.
#
# Based on the twisted trunk CI job.

name: Latest dependencies

on:
  schedule:
    - cron: 0 7 * * *
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

env:
  RUST_VERSION: 1.87.0

jobs:
  check_repo:
    # Prevent this workflow from running on any fork of Synapse other than element-hq/synapse, as it is
    # only useful to the Synapse core team.
    # All other workflow steps depend on this one, thus if 'should_run_workflow' is not 'true', the rest
    # of the workflow will be skipped as well.
    runs-on: ubuntu-latest
    outputs:
      should_run_workflow: ${{ steps.check_condition.outputs.should_run_workflow }}
    steps:
      - id: check_condition
        run: echo "should_run_workflow=${{ github.repository == 'element-hq/synapse' }}" >> "$GITHUB_OUTPUT"

  mypy:
    needs: check_repo
    if: needs.check_repo.outputs.should_run_workflow == 'true'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - name: Install Rust
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2

      # The dev dependencies aren't exposed in the wheel metadata (at least with current
      # poetry-core versions), so we install with poetry.
      - uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
        with:
          python-version: "3.x"
          poetry-version: "2.1.1"
          extras: "all"
      # Dump installed versions for debugging.
      - run: poetry run pip list > before.txt
      # Upgrade all runtime dependencies only. This is intended to mimic a fresh
      # `pip install matrix-synapse[all]` as closely as possible.
      - run: poetry update --without dev
      - run: poetry run pip list > after.txt && (diff -u before.txt after.txt || true)
      - name: Remove unhelpful options from mypy config
        run: sed -e '/warn_unused_ignores = True/d' -e '/warn_redundant_casts = True/d' -i mypy.ini
      - run: poetry run mypy

  trial:
    needs: check_repo
    if: needs.check_repo.outputs.should_run_workflow == 'true'
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - database: "sqlite"
          - database: "postgres"
            postgres-version: "14"
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - name: Install Rust
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2

      - run: sudo apt-get -qq install xmlsec1
      - name: Set up PostgreSQL ${{ matrix.postgres-version }}
        if: ${{ matrix.postgres-version }}
        run: |
          docker run -d -p 5432:5432 \
            -e POSTGRES_PASSWORD=postgres \
            -e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
            postgres:${{ matrix.postgres-version }}
      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - run: pip install .[all,test]
      - name: Await PostgreSQL
        if: ${{ matrix.postgres-version }}
        timeout-minutes: 2
        run: until pg_isready -h localhost; do sleep 1; done

      # We nuke the local copy, as we've installed synapse into the virtualenv
      # (rather than use an editable install, which we no longer support). If we
      # don't do this then python can't find the native lib.
      - run: rm -rf synapse/

      - run: python -m twisted.trial --jobs=2 tests
        env:
          SYNAPSE_POSTGRES: ${{ matrix.database == 'postgres' || '' }}
          SYNAPSE_POSTGRES_HOST: localhost
          SYNAPSE_POSTGRES_USER: postgres
          SYNAPSE_POSTGRES_PASSWORD: postgres

      - name: Dump logs
        # Logs are most useful when the command fails, always include them.
        if: ${{ always() }}
        # Note: Dumps to workflow logs instead of using actions/upload-artifact
        #       This keeps logs colocated with failing jobs
        #       It also ignores find's exit code; this is a best effort affair
        run: >-
          find _trial_temp -name '*.log'
          -exec echo "::group::{}" \;
          -exec cat {} \;
          -exec echo "::endgroup::" \;
          || true

  sytest:
    needs: check_repo
    if: needs.check_repo.outputs.should_run_workflow == 'true'
    runs-on: ubuntu-latest
    container:
      image: matrixdotorg/sytest-synapse:testing
      volumes:
        - ${{ github.workspace }}:/src
    strategy:
      fail-fast: false
      matrix:
        include:
          - sytest-tag: bookworm

          - sytest-tag: bookworm
            postgres: postgres
            workers: workers
            redis: redis
    env:
      POSTGRES: ${{ matrix.postgres && 1 }}
      WORKERS: ${{ matrix.workers && 1 }}
      REDIS: ${{ matrix.redis && 1 }}
      BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - name: Install Rust
        uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
        with:
          toolchain: ${{ env.RUST_VERSION }}
      - uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2

      - name: Ensure sytest runs `pip install`
        # Delete the lockfile so sytest will `pip install` rather than `poetry install`
        run: rm /src/poetry.lock
        working-directory: /src
      - name: Prepare test blacklist
        run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
      - name: Run SyTest
        run: /bootstrap.sh synapse
        working-directory: /src
      - name: Summarise results.tap
        if: ${{ always() }}
        run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
      - name: Upload SyTest logs
        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
        if: ${{ always() }}
        with:
          name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
          path: |
            /logs/results.tap
            /logs/**/*.log*

  complement:
    needs: check_repo
    if: "!failure() && !cancelled() && needs.check_repo.outputs.should_run_workflow == 'true'"
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        include:
          - arrangement: monolith
            database: SQLite

          - arrangement: monolith
            database: Postgres

          - arrangement: workers
            database: Postgres
    steps:
      - name: Check out synapse codebase
        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
        with:
          path: synapse

      - name: Prepare Complement's Prerequisites
        run: synapse/.ci/scripts/setup_complement_prerequisites.sh

      - uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
        with:
          cache-dependency-path: complement/go.sum
          go-version-file: complement/go.mod

      - run: |
          set -o pipefail
          TEST_ONLY_IGNORE_POETRY_LOCKFILE=1 POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | synapse/.ci/scripts/gotestfmt
        shell: bash
        name: Run Complement Tests

  # Open an issue if the build fails, so we know about it.
  # Only do this if we're not experimenting with this action in a PR.
  open-issue:
    if: "failure() && github.event_name != 'push' && github.event_name != 'pull_request' && needs.check_repo.outputs.should_run_workflow == 'true'"
    needs:
      # TODO: should mypy be included here? It feels more brittle than the others.
      - mypy
      - trial
      - sytest
      - complement
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        with:
          update_existing: true
          filename: .ci/latest_deps_build_failed_issue_template.md

.github/workflows/poetry_lockfile.yaml vendored Normal file

@@ -0,0 +1,24 @@
on:
  push:
    branches: ["develop", "release-*"]
    paths:
      - poetry.lock
  pull_request:
    paths:
      - poetry.lock

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  check-sdists:
    name: "Check locked dependencies have sdists"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: '3.x'
      - run: pip install tomli
      - run: ./scripts-dev/check_locked_deps_have_sdists.py


@@ -0,0 +1,74 @@
# This task does not run complement tests, see tests.yaml instead.
# This task does not build docker images for synapse for use on docker hub, see docker.yaml instead

name: Store complement-synapse image in ghcr.io

on:
  push:
    branches: [ "master" ]
  schedule:
    - cron: '0 5 * * *'
  workflow_dispatch:
    inputs:
      branch:
        required: true
        default: 'develop'
        type: choice
        options:
          - develop
          - master

# Only run this action once per pull request/branch; restart if a new commit arrives.
# C.f. https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#concurrency
# and https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#github-context
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  build:
    name: Build and push complement image
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - name: Checkout specific branch (debug build)
        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
        if: github.event_name == 'workflow_dispatch'
        with:
          ref: ${{ inputs.branch }}
      - name: Checkout clean copy of develop (scheduled build)
        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
        if: github.event_name == 'schedule'
        with:
          ref: develop
      - name: Checkout clean copy of master (on-push)
        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
        if: github.event_name == 'push'
        with:
          ref: master
      - name: Login to registry
        uses: docker/login-action@5e57cd118135c172c3672efd75eb46360885c0ef # v3.6.0
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Work out labels for complement image
        id: meta
        uses: docker/metadata-action@c299e40c65443455700f0fdfc63efafe5b349051 # v5.10.0
        with:
          images: ghcr.io/${{ github.repository }}/complement-synapse
          tags: |
            type=schedule,pattern=nightly,enable=${{ github.event_name == 'schedule'}}
            type=raw,value=develop,enable=${{ github.event_name == 'schedule' || inputs.branch == 'develop' }}
            type=raw,value=latest,enable=${{ github.event_name == 'push' || inputs.branch == 'master' }}
            type=sha,format=long
      - name: Run scripts-dev/complement.sh to generate complement-synapse:latest image.
        run: scripts-dev/complement.sh --build-only
      - name: Tag and push generated image
        run: |
          for TAG in ${{ join(fromJson(steps.meta.outputs.json).tags, ' ') }}; do
            echo "tag and push $TAG"
            docker tag complement-synapse $TAG
            docker push $TAG
          done

.github/workflows/release-artifacts.yml vendored Normal file

@@ -0,0 +1,208 @@
# GitHub actions workflow which builds the release artifacts.

name: Build release artifacts

on:
  # we build on PRs and develop to (hopefully) get early warning
  # of things breaking (but only build one set of debs). PRs skip
  # building wheels on macOS & ARM.
  pull_request:
  push:
    branches: ["develop", "release-*"]

    # we do the full build on tags.
    tags: ["v*"]
  merge_group:
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: write

jobs:
  get-distros:
    name: "Calculate list of debian distros"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - id: set-distros
        run: |
          # if we're running from a tag, get the full list of distros; otherwise just use debian:sid
          # NOTE: inside the actual Dockerfile-dhvirtualenv, the image name is expanded into its full image path
          dists='["debian:sid"]'
          if [[ $GITHUB_REF == refs/tags/* ]]; then
              dists=$(scripts-dev/build_debian_packages.py --show-dists-json)
          fi
          echo "distros=$dists" >> "$GITHUB_OUTPUT"
    # map the step outputs to job outputs
    outputs:
      distros: ${{ steps.set-distros.outputs.distros }}

  # now build the packages with a matrix build.
  build-debs:
    needs: get-distros
    name: "Build .deb packages"
    runs-on: ubuntu-latest
    strategy:
      matrix:
        distro: ${{ fromJson(needs.get-distros.outputs.distros) }}
    steps:
      - name: Checkout
        uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
        with:
          path: src
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
        with:
          install: true
      - name: Set up docker layer caching
        uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-
      - name: Set up python
        uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - name: Build the packages
        # see https://github.com/docker/build-push-action/issues/252
        # for the cache magic here
        run: |
          ./src/scripts-dev/build_debian_packages.py \
            --docker-build-arg=--cache-from=type=local,src=/tmp/.buildx-cache \
            --docker-build-arg=--cache-to=type=local,mode=max,dest=/tmp/.buildx-cache-new \
            --docker-build-arg=--progress=plain \
            --docker-build-arg=--load \
            "${{ matrix.distro }}"
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
      - name: Artifact name
        id: artifact-name
        # We can't have colons in the upload name of the artifact, so we convert
        # e.g. `debian:sid` to `sid`.
        env:
          DISTRO: ${{ matrix.distro }}
        run: |
          echo "ARTIFACT_NAME=${DISTRO#*:}" >> "$GITHUB_OUTPUT"
      - name: Upload debs as artifacts
        uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
        with:
          name: debs-${{ steps.artifact-name.outputs.ARTIFACT_NAME }}
          path: debs/*

  build-wheels:
    name: Build wheels on ${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os:
          - ubuntu-24.04
          - ubuntu-24.04-arm
        # is_pr is a flag used to exclude certain jobs from the matrix on PRs.
        # It is not read by the rest of the workflow.
        is_pr:
          - ${{ startsWith(github.ref, 'refs/pull/') }}
        exclude:
          # Don't build aarch64 wheels on PR CI.
          - is_pr: true
            os: "ubuntu-24.04-arm"
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          # setup-python@v4 doesn't impose a default python version. Need to use 3.x
          # here, because `python` on osx points to Python 2.7.
          python-version: "3.x"
      - name: Install cibuildwheel
        run: python -m pip install cibuildwheel==3.2.1
      - name: Only build a single wheel on PR
        if: startsWith(github.ref, 'refs/pull/')
        run: echo "CIBW_BUILD="cp310-manylinux_*"" >> $GITHUB_ENV
      - name: Build wheels
        run: python -m cibuildwheel --output-dir wheelhouse
        env:
          # The platforms that we build for are determined by the
          # `tool.cibuildwheel.skip` option in `pyproject.toml`.
          # We skip testing wheels for the following platforms in CI:
          #
          # pp3*-* (PyPy wheels) broke in CI (TODO: investigate).
          # musl: (TODO: investigate).
          CIBW_TEST_SKIP: pp3*-* *musl*
      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
        with:
          name: Wheel-${{ matrix.os }}
          path: ./wheelhouse/*.whl

  build-sdist:
    name: Build sdist
    runs-on: ubuntu-latest
    if: ${{ !startsWith(github.ref, 'refs/pull/') }}
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.10"
      - run: pip install build
      - name: Build sdist
        run: python -m build --sdist
      - uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
        with:
          name: Sdist
          path: dist/*.tar.gz

  # if it's a tag, create a release and attach the artifacts to it
  attach-assets:
    name: "Attach assets to release"
    if: ${{ !failure() && !cancelled() && startsWith(github.ref, 'refs/tags/') }}
    needs:
      - build-debs
      - build-wheels
      - build-sdist
    runs-on: ubuntu-latest
    steps:
      - name: Download all workflow run artifacts
        uses: actions/download-artifact@018cc2cf5baa6db3ef3c5f8a56943fffe632ef53 # v6.0.0
      - name: Build a tarball for the debs
        # We need to merge all the debs uploads into one folder, then compress
        # that.
        run: |
          mkdir debs
          mv debs*/* debs/
          tar -cvJf debs.tar.xz debs
      - name: Attach to release
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: |
          gh release upload "${{ github.ref_name }}" \
            Sdist/* \
            Wheel*/* \
            debs.tar.xz \
            --repo ${{ github.repository }}

.github/workflows/schema.yaml vendored Normal file

@@ -0,0 +1,57 @@
name: Schema

on:
  pull_request:
    paths:
      - schema/**
      - docs/usage/configuration/config_documentation.md
  push:
    branches: ["develop", "release-*"]
  workflow_dispatch:

jobs:
  validate-schema:
    name: Ensure Synapse config schema is valid
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - name: Install check-jsonschema
        run: pip install check-jsonschema==0.33.0
      - name: Validate meta schema
        run: check-jsonschema --check-metaschema schema/v*/meta.schema.json
      - name: Validate schema
        run: |-
          # Please bump on introduction of a new meta schema.
          LATEST_META_SCHEMA_VERSION=v1
          check-jsonschema \
            --schemafile="schema/$LATEST_META_SCHEMA_VERSION/meta.schema.json" \
            schema/synapse-config.schema.yaml
      - name: Validate default config
        # Populates the empty instance with default values and checks against the schema.
        run: |-
          echo "{}" | check-jsonschema \
            --fill-defaults --schemafile=schema/synapse-config.schema.yaml -

  check-doc-generation:
    name: Ensure generated documentation is up-to-date
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
      - uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
        with:
          python-version: "3.x"
      - name: Install PyYAML
        run: pip install PyYAML==6.0.2
      - name: Regenerate config documentation
        run: |
          scripts-dev/gen_config_documentation.py \
            schema/synapse-config.schema.yaml \
            > docs/usage/configuration/config_documentation.md
      - name: Error in case of any differences
        # Errors if there are now any modified files (untracked files are ignored).
        run: 'git diff --exit-code'
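
A toy illustration of what `check-jsonschema --fill-defaults` does in the "Validate default config" step above: defaults declared in the schema are copied into the empty instance before validation. The schema here is a made-up two-key example, not Synapse's real config schema:

```python
# Illustration only: manual default-filling followed by validation.
from jsonschema import validate

schema = {
    "type": "object",
    "properties": {
        "server_name": {"type": "string"},
        "report_stats": {"type": "boolean", "default": True},
    },
}

instance: dict = {}
for key, subschema in schema["properties"].items():
    if "default" in subschema:
        instance.setdefault(key, subschema["default"])

validate(instance=instance, schema=schema)
print(instance)  # {'report_stats': True}
```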

.github/workflows/tests.yml vendored Normal file

@@ -0,0 +1,792 @@
name: Tests
on:
push:
branches: ["develop", "release-*"]
pull_request:
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
RUST_VERSION: 1.87.0
jobs:
# Job to detect what has changed so we don't run e.g. Rust checks on PRs that
# don't modify Rust code.
changes:
runs-on: ubuntu-latest
outputs:
rust: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.rust }}
trial: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.trial }}
integration: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.integration }}
linting: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.linting }}
linting_readme: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.linting_readme }}
steps:
- uses: dorny/paths-filter@de90cc6fb38fc0963ad72b210f1f284cd68cea36 # v3.0.2
id: filter
# We only check on PRs
if: startsWith(github.ref, 'refs/pull/')
with:
filters: |
rust:
- 'rust/**'
- 'Cargo.toml'
- 'Cargo.lock'
- '.rustfmt.toml'
- '.github/workflows/tests.yml'
trial:
- 'synapse/**'
- 'tests/**'
- 'rust/**'
- '.ci/scripts/calculate_jobs.py'
- 'Cargo.toml'
- 'Cargo.lock'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/tests.yml'
integration:
- 'synapse/**'
- 'rust/**'
- 'docker/**'
- 'Cargo.toml'
- 'Cargo.lock'
- 'pyproject.toml'
- 'poetry.lock'
- 'docker/**'
- '.ci/**'
- 'scripts-dev/complement.sh'
- '.github/workflows/tests.yml'
linting:
- 'synapse/**'
- 'docker/**'
- 'tests/**'
- 'scripts-dev/**'
- 'contrib/**'
- 'synmark/**'
- 'stubs/**'
- '.ci/**'
- 'mypy.ini'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/tests.yml'
linting_readme:
- 'README.rst'
check-sampleconfig:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: "3.x"
poetry-version: "2.1.1"
extras: "all"
- run: poetry run scripts-dev/generate_sample_config.sh --check
- run: poetry run scripts-dev/config-lint.sh
check-schema-delta:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: "3.x"
- run: "pip install 'click==8.1.1' 'GitPython>=3.1.20' 'sqlglot>=28.0.0'"
- run: scripts-dev/check_schema_delta.py --force-colors
check-lockfile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: "3.x"
- run: .ci/scripts/check_lockfile.py
lint:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- name: Checkout repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
poetry-version: "2.1.1"
install-project: "false"
- name: Run ruff check
run: poetry run ruff check --output-format=github .
- name: Run ruff format
run: poetry run ruff format --check .
lint-mypy:
runs-on: ubuntu-latest
name: Typechecking
needs: changes
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- name: Checkout repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
# We want to make use of type hints in optional dependencies too.
extras: all
# We have seen odd mypy failures that were resolved when we started
# installing the project again:
# https://github.com/matrix-org/synapse/pull/15376#issuecomment-1498983775
# To make CI green, err towards caution and install the project.
install-project: "true"
poetry-version: "2.1.1"
# Cribbed from
# https://github.com/AustinScola/mypy-cache-github-action/blob/85ea4f2972abed39b33bd02c36e341b28ca59213/src/restore.ts#L10-L17
- name: Restore/persist mypy's cache
uses: actions/cache@0057852bfaa89a56745cba8c7296529d2fc39830 # v4.3.0
with:
path: |
.mypy_cache
key: mypy-cache-${{ github.context.sha }}
restore-keys: mypy-cache-
- name: Run mypy
run: poetry run mypy
lint-crlf:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Check line endings
run: scripts-dev/check_line_terminators.sh
lint-newsfile:
# Only run on pull_request events, targeting develop/release branches, and skip when the PR author is dependabot[bot].
if: ${{ github.event_name == 'pull_request' && (github.base_ref == 'develop' || contains(github.base_ref, 'release-')) && github.event.pull_request.user.login != 'dependabot[bot]' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: "3.x"
- run: "pip install 'towncrier>=18.6.0rc1'"
- run: scripts-dev/check-newsfragment.sh
env:
PULL_REQUEST_NUMBER: ${{ github.event.number }}
lint-clippy:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
components: clippy
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- run: cargo clippy -- -D warnings
# We also lint against a nightly rustc so that we can lint the benchmark
# suite, which requires a nightly compiler.
lint-clippy-nightly:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: nightly-2025-04-23
components: clippy
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- run: cargo clippy --all-features -- -D warnings
lint-rust:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- name: Checkout repository
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
# Install like a normal project from source with all optional dependencies
extras: all
install-project: "true"
poetry-version: "2.1.1"
- name: Ensure `Cargo.lock` is up to date (no stray changes after install)
# The `::error::` syntax uses GitHub Actions' error annotations; see
# https://docs.github.com/en/actions/reference/workflow-commands-for-github-actions
run: |
if git diff --quiet Cargo.lock; then
echo "Cargo.lock is up to date"
else
echo "::error::Cargo.lock has uncommitted changes after install. Please run 'poetry install --extras all' and commit the Cargo.lock changes."
git diff --exit-code Cargo.lock
exit 1
fi
# This job is split from `lint-rust` because it requires a nightly Rust toolchain
# for some of the unstable options we use in `.rustfmt.toml`.
lint-rustfmt:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
# We use nightly so that we can enable the unstable options set in
# `.rustfmt.toml`.
toolchain: nightly-2025-04-23
components: rustfmt
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- run: cargo fmt --check
# This is to detect issues with the rst file, which can otherwise cause failures
# when uploading packages to PyPI.
lint-readme:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.linting_readme == 'true' }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: "3.x"
- run: "pip install rstcheck"
- run: "rstcheck --report-level=WARNING README.rst"
# Dummy job to gate other tests on without repeating the whole list
linting-done:
if: ${{ !cancelled() }} # Run this even if prior jobs were skipped
needs:
- lint
- lint-mypy
- lint-crlf
- lint-newsfile
- check-sampleconfig
- check-schema-delta
- check-lockfile
- lint-clippy
- lint-clippy-nightly
- lint-rust
- lint-rustfmt
- lint-readme
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@3409aa904e8a2aaf2220f09bc954d3d0b0a2ee67 # v3
with:
needs: ${{ toJSON(needs) }}
# Various bits are skipped if there were no applicable changes.
skippable: |
check-sampleconfig
check-schema-delta
lint
lint-mypy
lint-newsfile
lint-clippy
lint-clippy-nightly
lint-rust
lint-rustfmt
lint-readme
calculate-test-jobs:
if: ${{ !cancelled() && !failure() }} # Allow previous steps to be skipped, but not fail
needs: linting-done
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: "3.x"
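# The script is expected to write `trial_test_matrix` and `sytest_test_matrix`
# to $GITHUB_OUTPUT; they are re-exported as job outputs below so downstream
# jobs can build their strategy matrices.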
- id: get-matrix
run: .ci/scripts/calculate_jobs.py
outputs:
trial_test_matrix: ${{ steps.get-matrix.outputs.trial_test_matrix }}
sytest_test_matrix: ${{ steps.get-matrix.outputs.sytest_test_matrix }}
trial:
if: ${{ !cancelled() && !failure() && needs.changes.outputs.trial == 'true' }} # Allow previous steps to be skipped, but not fail
needs:
- calculate-test-jobs
- changes
runs-on: ubuntu-latest
strategy:
matrix:
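# The job matrix is computed dynamically by calculate-test-jobs and decoded from JSON here.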
job: ${{ fromJson(needs.calculate-test-jobs.outputs.trial_test_matrix) }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.job.postgres-version }}
if: ${{ matrix.job.postgres-version }}
# 1. Mount postgres data files onto a tmpfs in-memory filesystem to reduce overhead of docker's overlayfs layer.
# 2. Expose the unix socket for postgres. This removes latency of using docker-proxy for connections.
run: |
docker run -d -p 5432:5432 \
--tmpfs /var/lib/postgres:rw,size=6144m \
--mount 'type=bind,src=/var/run/postgresql,dst=/var/run/postgresql' \
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
postgres:${{ matrix.job.postgres-version }}
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: ${{ matrix.job.python-version }}
poetry-version: "2.1.1"
extras: ${{ matrix.job.extras }}
- name: Await PostgreSQL
if: ${{ matrix.job.postgres-version }}
timeout-minutes: 2
run: until pg_isready -h localhost; do sleep 1; done
- run: poetry run trial --jobs=6 tests
env:
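# `|| ''` yields an empty string (rather than the string 'false') when the
# condition does not hold, so shell-style truthiness checks behave as expected.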
SYNAPSE_POSTGRES: ${{ matrix.job.database == 'postgres' || '' }}
SYNAPSE_POSTGRES_HOST: /var/run/postgresql
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
- name: Dump logs
# Logs are most useful when the command fails, so always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
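# `::group::` / `::endgroup::` fold each log file into a collapsible section in the Actions UI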
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
trial-olddeps:
# Note: sqlite only; no postgres
if: ${{ !cancelled() && !failure() && needs.changes.outputs.trial == 'true' }} # Allow previous steps to be skipped, but not fail
needs:
- linting-done
- changes
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
# There aren't wheels for some of the older deps, so we need to install
# their build dependencies
- run: |
sudo apt-get -qq update
sudo apt-get -qq install build-essential libffi-dev python3-dev \
libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev
- uses: actions/setup-python@83679a892e2d95755f2dac6acb0bfd1e9ac5d548 # v6.1.0
with:
python-version: '3.10'
- name: Prepare old deps
if: steps.cache-poetry-old-deps.outputs.cache-hit != 'true'
run: .ci/scripts/prepare_old_deps.sh
# Note: we install using `pip` here, not poetry. `poetry install` ignores the
# build-system section (https://github.com/python-poetry/poetry/issues/6154), but
# we explicitly want to test that you can `pip install` using the oldest version
# of poetry-core and setuptools-rust.
- run: pip install .[all,test]
# We nuke the local copy, as we've installed synapse into the virtualenv
# (rather than use an editable install, which we no longer support). If we
# don't do this then python can't find the native lib.
- run: rm -rf synapse/
# Sanity check we can import/run Synapse
- run: python -m synapse.app.homeserver --help
- run: python -m twisted.trial -j6 tests
- name: Dump logs
# Logs are most useful when the command fails, so always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
trial-pypy:
# Very slow; only run if the branch name includes 'pypy'
# Note: sqlite only; no postgres. Completely untested since poetry move.
if: ${{ contains(github.ref, 'pypy') && !failure() && !cancelled() && needs.changes.outputs.trial == 'true' }}
needs:
- linting-done
- changes
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["pypy-3.10"]
extras: ["all"]
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
# Install libs necessary for PyPy to build binary wheels for dependencies
- run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: ${{ matrix.python-version }}
poetry-version: "2.1.1"
extras: ${{ matrix.extras }}
- run: poetry run trial --jobs=2 tests
- name: Dump logs
# Logs are most useful when the command fails, so always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
sytest:
if: ${{ !failure() && !cancelled() && needs.changes.outputs.integration == 'true' }}
needs:
- calculate-test-jobs
- changes
runs-on: ubuntu-latest
container:
image: matrixdotorg/sytest-synapse:${{ matrix.job.sytest-tag }}
volumes:
- ${{ github.workspace }}:/src
env:
# If this is a pull request to a release branch, use that branch as the default branch for sytest, else use develop
# This works because the release script always creates a branch on the sytest repo with the same name as the release branch
SYTEST_DEFAULT_BRANCH: ${{ startsWith(github.base_ref, 'release-') && github.base_ref || 'develop' }}
SYTEST_BRANCH: ${{ github.head_ref }}
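# `matrix.job.<flag> && 1` evaluates to 1 when the flag is set and to an empty
# string otherwise, so the sytest scripts see simple on/off environment variables.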
POSTGRES: ${{ matrix.job.postgres && 1 }}
MULTI_POSTGRES: ${{ (matrix.job.postgres == 'multi-postgres') || '' }}
ASYNCIO_REACTOR: ${{ (matrix.job.reactor == 'asyncio') || '' }}
WORKERS: ${{ matrix.job.workers && 1 }}
BLACKLIST: ${{ matrix.job.workers && 'synapse-blacklist-with-workers' }}
TOP: ${{ github.workspace }}
strategy:
fail-fast: false
matrix:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.sytest_test_matrix) }}
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: Run SyTest
run: /bootstrap.sh synapse
working-directory: /src
- name: Summarise results.tap
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.job.*, ', ') }})
path: |
/logs/results.tap
/logs/**/*.log*
export-data:
if: ${{ !failure() && !cancelled() && needs.changes.outputs.integration == 'true'}} # Allow previous steps to be skipped, but not fail
needs: [linting-done, portdb, changes]
runs-on: ubuntu-latest
env:
TOP: ${{ github.workspace }}
services:
postgres:
image: postgres
ports:
- 5432:5432
env:
POSTGRES_PASSWORD: "postgres"
POSTGRES_INITDB_ARGS: "--lc-collate C --lc-ctype C --encoding UTF8"
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
poetry-version: "2.1.1"
extras: "postgres"
- run: .ci/scripts/test_export_data_command.sh
env:
PGHOST: localhost
PGUSER: postgres
PGPASSWORD: postgres
PGDATABASE: postgres
portdb:
if: ${{ !failure() && !cancelled() && needs.changes.outputs.integration == 'true'}} # Allow previous steps to be skipped, but not fail
needs:
- linting-done
- changes
runs-on: ubuntu-latest
strategy:
matrix:
include:
- python-version: "3.10"
postgres-version: "14"
- python-version: "3.14"
postgres-version: "17"
services:
postgres:
image: postgres:${{ matrix.postgres-version }}
ports:
- 5432:5432
env:
POSTGRES_PASSWORD: "postgres"
POSTGRES_INITDB_ARGS: "--lc-collate C --lc-ctype C --encoding UTF8"
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Add PostgreSQL apt repository
# We need a version of pg_dump that can handle the version of
# PostgreSQL being tested against. The Ubuntu package repository lags
# behind new releases, so we have to use the PostgreSQL apt repository.
# Steps taken from https://www.postgresql.org/download/linux/ubuntu/
run: |
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: ${{ matrix.python-version }}
poetry-version: "2.1.1"
extras: "postgres"
- run: .ci/scripts/test_synapse_port_db.sh
id: run_tester_script
env:
PGHOST: localhost
PGUSER: postgres
PGPASSWORD: postgres
PGDATABASE: postgres
- name: "Upload schema differences"
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: ${{ failure() && !cancelled() && steps.run_tester_script.outcome == 'failure' }}
with:
name: Schema dumps
path: |
unported.sql
ported.sql
schema_diff
complement:
if: "${{ !failure() && !cancelled() && needs.changes.outputs.integration == 'true' }}"
needs:
- linting-done
- changes
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- arrangement: monolith
database: SQLite
- arrangement: monolith
database: Postgres
- arrangement: workers
database: Postgres
steps:
- name: Checkout synapse codebase
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
with:
path: synapse
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
# Use `-p 1` concurrency as GHA boxes are underpowered and don't like running tons of synapses at once.
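# `set -o pipefail` below makes the step fail if complement.sh fails, even
# though its output is piped through gotestfmt.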
- run: |
set -o pipefail
COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -p 1 -json 2>&1 | synapse/.ci/scripts/gotestfmt
shell: bash
env:
POSTGRES: ${{ (matrix.database == 'Postgres') && 1 || '' }}
WORKERS: ${{ (matrix.arrangement == 'workers') && 1 || '' }}
name: Run Complement Tests
cargo-test:
if: ${{ needs.changes.outputs.rust == 'true' }}
runs-on: ubuntu-latest
needs:
- linting-done
- changes
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- run: cargo test
# We want to ensure that the cargo benchmarks still compile, which requires a
# nightly compiler.
cargo-bench:
if: ${{ needs.changes.outputs.rust == 'true' }}
runs-on: ubuntu-latest
needs:
- linting-done
- changes
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: nightly-2022-12-01
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- run: cargo bench --no-run
# a job which marks all the other jobs as complete, thus allowing PRs to be merged.
tests-done:
if: ${{ always() }}
needs:
- trial
- trial-olddeps
- sytest
- export-data
- portdb
- complement
- cargo-test
- cargo-bench
- linting-done
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@3409aa904e8a2aaf2220f09bc954d3d0b0a2ee67 # v3
with:
needs: ${{ toJSON(needs) }}
# Various bits are skipped if there were no applicable changes.
# The newsfile lint may be skipped on non-PR builds.
skippable: |
trial
trial-olddeps
sytest
portdb
export-data
complement
lint-newsfile
cargo-test
cargo-bench
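Taken together, tests.yml uses a single gating pattern: a `changes` job computes path filters, each area job keys off its outputs, and a final done-action job tolerates the skips. Below is a minimal sketch of that pattern, with hypothetical job names and assuming a filter action such as `dorny/paths-filter` (the actual `changes` job is defined earlier in the workflow and not shown here):

```yaml
jobs:
  changes:
    runs-on: ubuntu-latest
    outputs:
      docs: ${{ steps.filter.outputs.docs }}
    steps:
      - uses: dorny/paths-filter@v3 # assumed filter action
        id: filter
        with:
          filters: |
            docs:
              - 'docs/**'
  build-docs:
    needs: changes
    if: ${{ needs.changes.outputs.docs == 'true' }}
    runs-on: ubuntu-latest
    steps:
      - run: echo "docs changed; run the real build here"
  # Branch protection points at this single job instead of every individual one.
  done:
    if: ${{ !cancelled() }} # run even if prior jobs were skipped
    needs:
      - build-docs
    runs-on: ubuntu-latest
    steps:
      - uses: matrix-org/done-action@v3
        with:
          needs: ${{ toJSON(needs) }}
          skippable: |
            build-docs
```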

.github/workflows/triage-incoming.yml

@@ -0,0 +1,14 @@
name: Move new issues into the issue triage board
on:
issues:
types: [ opened ]
jobs:
triage:
uses: matrix-org/backend-meta/.github/workflows/triage-incoming.yml@18beaf3c8e536108bd04d18e6c3dc40ba3931e28 # v2.0.3
with:
project_id: 'PVT_kwDOAIB0Bs4AFDdZ'
content_id: ${{ github.event.issue.node_id }}
secrets:
github_access_token: ${{ secrets.ELEMENT_BOT_TOKEN }}

.github/workflows/triage_labelled.yml

@@ -0,0 +1,31 @@
name: Move labelled issues to correct projects
on:
issues:
types: [ labeled ]
jobs:
move_needs_info:
runs-on: ubuntu-latest
if: >
contains(github.event.issue.labels.*.name, 'X-Needs-Info')
permissions:
contents: read
env:
# This token must have the following scopes: ["repo:public_repo", "admin:org->read:org", "user->read:user", "project"]
GITHUB_TOKEN: ${{ secrets.ELEMENT_BOT_TOKEN }}
PROJECT_OWNER: matrix-org
# Backend issue triage board.
# https://github.com/orgs/matrix-org/projects/67/views/1
PROJECT_NUMBER: 67
ISSUE_URL: ${{ github.event.issue.html_url }}
# This field is case-sensitive.
TARGET_STATUS: Needs info
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
with:
# Only clone the script file we care about, instead of the whole repo.
sparse-checkout: .ci/scripts/triage_labelled_issue.sh
- name: Ensure issue exists on the board, then set Status
run: .ci/scripts/triage_labelled_issue.sh

.github/workflows/twisted_trunk.yml

@@ -0,0 +1,226 @@
name: Twisted Trunk
on:
schedule:
- cron: 0 8 * * *
workflow_dispatch:
# NB: inputs are only present when this workflow is dispatched manually.
# (The default below is the default field value in the form used to trigger
# a manual dispatch.) Otherwise the inputs will evaluate to null.
inputs:
twisted_ref:
description: Commit, branch or tag to checkout from upstream Twisted.
required: false
default: 'trunk'
type: string
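# Keep at most one run per ref in flight; a newer run cancels the older one.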
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
RUST_VERSION: 1.87.0
jobs:
check_repo:
# Prevent this workflow from running on any fork of Synapse other than element-hq/synapse, as it is
# only useful to the Synapse core team.
# All other workflow steps depend on this one, thus if 'should_run_workflow' is not 'true', the rest
# of the workflow will be skipped as well.
if: github.repository == 'element-hq/synapse'
runs-on: ubuntu-latest
outputs:
should_run_workflow: ${{ steps.check_condition.outputs.should_run_workflow }}
steps:
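# Writing to $GITHUB_OUTPUT re-exports the repository check as a job output
# for the `if:` conditions on the jobs below.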
- id: check_condition
run: echo "should_run_workflow=${{ github.repository == 'element-hq/synapse' }}" >> "$GITHUB_OUTPUT"
mypy:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: "3.x"
extras: "all"
poetry-version: "2.1.1"
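# Swap the locked Twisted release for upstream trunk (or the manually
# requested ref), then reinstall so the rest of the job runs against it.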
- run: |
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#${{ inputs.twisted_ref || 'trunk' }}
poetry install --no-interaction --extras "all test"
- name: Remove unhelpful options from mypy config
run: sed -e '/warn_unused_ignores = True/d' -e '/warn_redundant_casts = True/d' -i mypy.ini
- run: poetry run mypy
trial:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
python-version: "3.x"
extras: "all test"
poetry-version: "2.1.1"
- run: |
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry install --no-interaction --extras "all test"
- run: poetry run trial --jobs 2 tests
- name: Dump logs
# Logs are most useful when the command fails, so always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
sytest:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
container:
# We're using bookworm because that's what Debian oldstable is at the time of writing.
# This job is a canary to warn us about unreleased twisted changes that would cause problems for us if
# they were to be released immediately. For simplicity's sake (and to save CI runners) we use the oldest
# version, assuming that any incompatibilities on newer versions would also be present on the oldest.
image: matrixdotorg/sytest-synapse:bookworm
volumes:
- ${{ github.workspace }}:/src
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@779680da715d629ac1d338a641029a2f4372abb5 # v2.8.2
- name: Patch dependencies
# Note: The poetry commands want to create a virtualenv in /src/.venv/,
# but the sytest-synapse container expects it to be in /venv/.
# We symlink it before running poetry so that poetry actually
# ends up installing to `/venv`.
run: |
ln -s -T /venv /src/.venv
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry install --no-interaction --extras "all test"
working-directory: /src
- name: Run SyTest
run: /bootstrap.sh synapse
working-directory: /src
env:
# Use offline mode to avoid reinstalling the pinned version of
# twisted.
OFFLINE: 1
- name: Summarise results.tap
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@330a01c490aca151604b8cf639adc76d48f6c5d4 # v5.0.0
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
path: |
/logs/results.tap
/logs/**/*.log*
complement:
needs: check_repo
if: "!failure() && !cancelled() && needs.check_repo.outputs.should_run_workflow == 'true'"
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- arrangement: monolith
database: SQLite
- arrangement: monolith
database: Postgres
- arrangement: workers
database: Postgres
steps:
- name: Checkout synapse codebase
uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@4dc6199c7b1a012772edbd06daecab0f50c9053c # v6.1.0
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
# This step is specific to the 'Twisted trunk' test run:
- name: Patch dependencies
run: |
set -x
DEBIAN_FRONTEND=noninteractive sudo apt-get install -yqq python3 pipx
pipx install poetry==2.1.1
poetry remove -n twisted
poetry add -n --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry lock
working-directory: synapse
- run: |
set -o pipefail
TEST_ONLY_SKIP_DEP_HASH_VERIFICATION=1 POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | synapse/.ci/scripts/gotestfmt
shell: bash
name: Run Complement Tests
# open an issue if the build fails, so we know about it.
open-issue:
if: failure() && needs.check_repo.outputs.should_run_workflow == 'true'
needs:
- mypy
- trial
- sytest
- complement
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
update_existing: true
filename: .ci/twisted_trunk_build_failed_issue_template.md


@@ -1,244 +1,3 @@
# Synapse 1.149.0rc1 (2026-03-03)
## Features
- Add experimental support for [MSC4388: Secure out-of-band channel for sign in with QR](https://github.com/matrix-org/matrix-spec-proposals/pull/4388). ([\#19127](https://github.com/element-hq/synapse/issues/19127))
- Add stable support for [MSC4380](https://github.com/matrix-org/matrix-spec-proposals/pull/4380) invite blocking. ([\#19431](https://github.com/element-hq/synapse/issues/19431))
## Bugfixes
- Fix the 'Login as a user' Admin API not checking if the user exists before issuing an access token. ([\#18518](https://github.com/element-hq/synapse/issues/18518))
- Fix `/sync` missing membership event in `state_after` (experimental [MSC4222](https://github.com/matrix-org/matrix-spec-proposals/pull/4222) implementation) in some scenarios. ([\#19460](https://github.com/element-hq/synapse/issues/19460))
## Internal Changes
- Add log to explain when and why we freeze objects in the garbage collector. ([\#19440](https://github.com/element-hq/synapse/issues/19440))
- Better instrument `JoinRoomAliasServlet` with tracing. ([\#19461](https://github.com/element-hq/synapse/issues/19461))
- Fix Complement CI not running against the code from our PRs. ([\#19475](https://github.com/element-hq/synapse/issues/19475))
- Log `docker system info` in CI so we have a plain record of how GitHub runners evolve over time. ([\#19480](https://github.com/element-hq/synapse/issues/19480))
- Rename the `test_disconnect` test helper so that pytest doesn't see it as a test. ([\#19486](https://github.com/element-hq/synapse/issues/19486))
- Add a log line when we delete devices. Contributed by @bradtgmurray @ Beeper. ([\#19496](https://github.com/element-hq/synapse/issues/19496))
- Pre-allocate the buffer based on the expected `Content-Length` with the Rust HTTP client. ([\#19498](https://github.com/element-hq/synapse/issues/19498))
- Cancel long-running sync requests if the client has gone away. ([\#19499](https://github.com/element-hq/synapse/issues/19499))
- Try and reduce reactor tick times when under heavy load. ([\#19507](https://github.com/element-hq/synapse/issues/19507))
- Simplify Rust HTTP client response streaming and limiting. ([\#19510](https://github.com/element-hq/synapse/issues/19510))
- Replace deprecated collection import locations with current locations. ([\#19515](https://github.com/element-hq/synapse/issues/19515))
- Bump most locked Python dependencies to their latest versions. ([\#19519](https://github.com/element-hq/synapse/issues/19519))
# Synapse 1.148.0 (2026-02-24)
No significant changes since 1.148.0rc1.
# Synapse 1.148.0rc1 (2026-02-17)
## Features
- Support sending and receiving [MSC4354 Sticky Event](https://github.com/matrix-org/matrix-spec-proposals/pull/4354) metadata. ([\#19365](https://github.com/element-hq/synapse/issues/19365))
## Improved Documentation
- Fix reference to the `experimental_features` section of the configuration manual documentation. ([\#19435](https://github.com/element-hq/synapse/issues/19435))
## Deprecations and Removals
- Remove support for [MSC3244: Room version capabilities](https://github.com/matrix-org/matrix-spec-proposals/pull/3244) as the MSC was rejected. ([\#19429](https://github.com/element-hq/synapse/issues/19429))
## Internal Changes
- Add in-repo Complement tests so we can test Synapse specific behavior at an end-to-end level. ([\#19406](https://github.com/element-hq/synapse/issues/19406))
- Push Synapse docker images to Element OCI Registry. ([\#19420](https://github.com/element-hq/synapse/issues/19420))
- Allow configuring the Rust HTTP client to use HTTP/2 only. ([\#19457](https://github.com/element-hq/synapse/issues/19457))
- Correctly refuse to start if the Rust workspace config has changed and the Rust library has not been rebuilt. ([\#19470](https://github.com/element-hq/synapse/issues/19470))
# Synapse 1.147.1 (2026-02-12)
## Internal Changes
- Block federation requests and events authenticated using a known insecure signing key. See [CVE-2026-24044](https://www.cve.org/CVERecord?id=CVE-2026-24044) / [ELEMENTSEC-2025-1670](https://github.com/element-hq/ess-helm/security/advisories/GHSA-qwcj-h6m8-vp6q). ([\#19459](https://github.com/element-hq/synapse/issues/19459))
# Synapse 1.147.0 (2026-02-10)
No significant changes since 1.147.0rc1.
# Synapse 1.147.0rc1 (2026-02-03)
## Bugfixes
- Fix memory leak caused by not cleaning up stopped looping calls. Introduced in v1.140.0. ([\#19416](https://github.com/element-hq/synapse/issues/19416))
- Fix a typo that incorrectly made `setuptools_rust` a runtime dependency. ([\#19417](https://github.com/element-hq/synapse/issues/19417))
## Internal Changes
- Prune stale entries from `sliding_sync_connection_required_state` table. ([\#19306](https://github.com/element-hq/synapse/issues/19306))
- Update "Event Send Time Quantiles" graph to only use dots for the event persistence rate (Grafana dashboard). ([\#19399](https://github.com/element-hq/synapse/issues/19399))
- Update and align Grafana dashboard to use regex matching for `job` selectors (`job=~"$job"`) so the "all" value works correctly across all panels. ([\#19400](https://github.com/element-hq/synapse/issues/19400))
- Don't retry joining partial state rooms all at once on startup. ([\#19402](https://github.com/element-hq/synapse/issues/19402))
- Disallow requests to the health endpoint from containing trailing path characters. ([\#19405](https://github.com/element-hq/synapse/issues/19405))
- Add notes that new experimental features should have associated tracking issues. ([\#19410](https://github.com/element-hq/synapse/issues/19410))
- Bump `pyo3` from 0.26.0 to 0.27.2 and `pythonize` from 0.26.0 to 0.27.0. Contributed by @razvp @ ERCOM. ([\#19412](https://github.com/element-hq/synapse/issues/19412))
# Synapse 1.146.0 (2026-01-27)
No significant changes since 1.146.0rc1.
## Deprecations and Removals
- [MSC2697](https://github.com/matrix-org/matrix-spec-proposals/pull/2697) (Dehydrated devices) has been removed, as the MSC is closed. Developers should migrate to [MSC3814](https://github.com/matrix-org/matrix-spec-proposals/pull/3814). ([\#19346](https://github.com/element-hq/synapse/issues/19346))
- Support for Ubuntu 25.04 (Plucky Puffin) has been dropped. Synapse no longer builds debian packages for Ubuntu 25.04.
# Synapse 1.146.0rc1 (2026-01-20)
## Features
- Add a new config option [`enable_local_media_storage`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#enable_local_media_storage) which controls whether media is additionally stored locally when using configured `media_storage_providers`. Setting this to `false` allows off-site media storage without a local cache. Contributed by Patrice Brend'amour @dr.allgood. ([\#19204](https://github.com/element-hq/synapse/issues/19204))
- Stabilise support for [MSC4312](https://github.com/matrix-org/matrix-spec-proposals/pull/4312)'s `m.oauth` User-Interactive Auth stage for resetting cross-signing identity with the OAuth 2.0 API. The old, unstable name (`org.matrix.cross_signing_reset`) is now deprecated and will be removed in a future release. ([\#19273](https://github.com/element-hq/synapse/issues/19273))
- Refactor Grafana dashboard to use `server_name` label (instead of `instance`). ([\#19337](https://github.com/element-hq/synapse/issues/19337))
## Bugfixes
- Fix joining a restricted v12 room locally when no local room creator is present but local users with sufficient power levels are. Contributed by @nexy7574. ([\#19321](https://github.com/element-hq/synapse/issues/19321))
- Fixed parallel calls to `/_matrix/media/v1/create` being ratelimited for appservices even if `rate_limited: false` was set in the registration. Contributed by @tulir @ Beeper. ([\#19335](https://github.com/element-hq/synapse/issues/19335))
- Fix a bug introduced in 1.61.0 where a user's membership in a room was accidentally ignored when considering access to historical state events in rooms with the "shared" history visibility. Contributed by Lukas Tautz. ([\#19353](https://github.com/element-hq/synapse/issues/19353))
- [MSC4140](https://github.com/matrix-org/matrix-spec-proposals/pull/4140): Store the JSON content of scheduled delayed events as text instead of a byte array. This fixes the inability to schedule a delayed event with non-ASCII characters in its content. ([\#19360](https://github.com/element-hq/synapse/issues/19360))
- Always rollback database transactions when retrying (avoid orphaned connections). ([\#19372](https://github.com/element-hq/synapse/issues/19372))
- Fix `InFlightGauge` typing to allow upgrading to `prometheus_client` 0.24. ([\#19379](https://github.com/element-hq/synapse/issues/19379))
## Updates to the Docker image
- Add [Prometheus HTTP service discovery](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config) endpoint for easy discovery of all workers when using the `docker/Dockerfile-workers` image (see the [*Metrics* section of our Docker testing docs](docker/README-testing.md#metrics)). ([\#19336](https://github.com/element-hq/synapse/issues/19336))
## Improved Documentation
- Remove docs on legacy metric names (no longer in the codebase since 2022-12-06). ([\#19341](https://github.com/element-hq/synapse/issues/19341))
- Clarify how the estimated value of room complexity is calculated internally. ([\#19384](https://github.com/element-hq/synapse/issues/19384))
## Internal Changes
- Add an internal `cancel_task` API to the task scheduler. ([\#19310](https://github.com/element-hq/synapse/issues/19310))
- Tweak docstrings and signatures of `auth_types_for_event` and `get_catchup_room_event_ids`. ([\#19320](https://github.com/element-hq/synapse/issues/19320))
- Replace usage of deprecated `assertEquals` with `assertEqual` in unit test code. ([\#19345](https://github.com/element-hq/synapse/issues/19345))
- Drop support for Ubuntu 25.04 'Plucky Puffin', add support for Ubuntu 25.10 'Questing Quokka'. ([\#19348](https://github.com/element-hq/synapse/issues/19348))
- Revert "Add an Admin API endpoint for listing quarantined media (#19268)". ([\#19351](https://github.com/element-hq/synapse/issues/19351))
- Bump `mdbook` from 0.4.17 to 0.5.2 and remove our custom table-of-contents plugin in favour of the new default functionality. ([\#19356](https://github.com/element-hq/synapse/issues/19356))
- Replace deprecated usage of PyGitHub's `GitRelease.title` with `.name` in release script. ([\#19358](https://github.com/element-hq/synapse/issues/19358))
- Update the Element logo in Synapse's README to be an absolute URL, allowing it to render on other sites (such as PyPI). ([\#19368](https://github.com/element-hq/synapse/issues/19368))
- Apply minor tweaks to v1.145.0 changelog. ([\#19376](https://github.com/element-hq/synapse/issues/19376))
- Update Grafana dashboard syntax to use the latest from importing/exporting with Grafana 12.3.1. ([\#19381](https://github.com/element-hq/synapse/issues/19381))
- Warn about skipping reactor metrics when using unknown reactor type. ([\#19383](https://github.com/element-hq/synapse/issues/19383))
- Add support for reactor metrics with the `ProxiedReactor` used in worker Complement tests. ([\#19385](https://github.com/element-hq/synapse/issues/19385))
# Synapse 1.145.0 (2026-01-13)
No significant changes since 1.145.0rc4.
## End of Life of Ubuntu 25.04 Plucky Puffin
Ubuntu 25.04 (Plucky Puffin) will be end of life on Jan 17, 2026. Synapse will stop building packages for Ubuntu 25.04 shortly thereafter.
## Updates to Locked Dependencies No Longer Included in Changelog
The "Updates to locked dependencies" section has been removed from the changelog due to lack of use and the maintenance burden. ([\#19254](https://github.com/element-hq/synapse/issues/19254))
# Synapse 1.145.0rc4 (2026-01-08)
No significant changes since 1.145.0rc3.
This RC contains a fix specifically for openSUSE packaging and no other changes.
# Synapse 1.145.0rc3 (2026-01-07)
No significant changes since 1.145.0rc2.
This RC strips out unnecessary files from the wheels that were added when fixing the source distribution packaging in the previous RC.
# Synapse 1.145.0rc2 (2026-01-07)
No significant changes since 1.145.0rc1.
This RC fixes the source distribution packaging for uploading to PyPI.
# Synapse 1.145.0rc1 (2026-01-06)
## Features
- Add `memberships` endpoint to the admin API. This is useful for forensics and T&S purposes. ([\#19260](https://github.com/element-hq/synapse/issues/19260))
- Server admins can bypass the quarantine media check when downloading media by setting the `admin_unsafely_bypass_quarantine` query parameter to `true` on Client-Server API media download requests. ([\#19275](https://github.com/element-hq/synapse/issues/19275))
- Implemented pagination for the [MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666) mutual rooms endpoint. Contributed by @tulir @ Beeper. ([\#19279](https://github.com/element-hq/synapse/issues/19279))
- Admin API: add worker support to `GET /_synapse/admin/v2/users/<user_id>`. ([\#19281](https://github.com/element-hq/synapse/issues/19281))
- Improve proxy support for the `federation_client.py` dev script. Contributed by Denis Kasak (@dkasak). ([\#19300](https://github.com/element-hq/synapse/issues/19300))
## Bugfixes
- Fix sliding sync performance slow down for long lived connections. ([\#19206](https://github.com/element-hq/synapse/issues/19206))
- Fix a bug where Mastodon posts (and possibly other embeds) have the wrong description for URL previews. ([\#19231](https://github.com/element-hq/synapse/issues/19231))
- Fix bug where `Duration` was logged incorrectly. ([\#19267](https://github.com/element-hq/synapse/issues/19267))
- Fix bug introduced in 1.143.0 that broke support for versions of `zope-interface` older than 6.2. ([\#19274](https://github.com/element-hq/synapse/issues/19274))
- Transform events with client metadata before serialising in /event response. ([\#19340](https://github.com/element-hq/synapse/issues/19340))
## Updates to the Docker image
- Add a way to expose metrics from the Docker image (`SYNAPSE_ENABLE_METRICS`). ([\#19324](https://github.com/element-hq/synapse/issues/19324))
## Improved Documentation
- Document the importance of `public_baseurl` when configuring OpenID Connect authentication. ([\#19270](https://github.com/element-hq/synapse/issues/19270))
## Deprecations and Removals
- Ubuntu 25.04 (Plucky Puffin) will be end of life on Jan 17, 2026. Synapse will stop building packages for Ubuntu 25.04 shortly thereafter.
- Remove the "Updates to locked dependencies" section from the changelog due to lack of use and the maintenance burden. ([\#19254](https://github.com/element-hq/synapse/issues/19254))
## Internal Changes
- Group together dependabot update PRs to reduce the review load. ([\#18402](https://github.com/element-hq/synapse/issues/18402))
- Fix `HomeServer.shutdown()` failing if the homeserver hasn't been setup yet. ([\#19187](https://github.com/element-hq/synapse/issues/19187))
- Respond with useful error codes when `Content-Length` header(s) are invalid. ([\#19212](https://github.com/element-hq/synapse/issues/19212))
- Fix `HomeServer.shutdown()` failing if the homeserver failed to `start`. ([\#19232](https://github.com/element-hq/synapse/issues/19232))
- Switch the build backend from `poetry-core` to `maturin`. ([\#19234](https://github.com/element-hq/synapse/issues/19234))
- Raise the limit for concurrently-open non-security @dependabot PRs from 5 to 10. ([\#19253](https://github.com/element-hq/synapse/issues/19253))
- Require 14 days to pass before pulling in general dependency updates to help mitigate upstream supply chain attacks. ([\#19258](https://github.com/element-hq/synapse/issues/19258))
- Drop the broken netlify documentation workflow until a new one is implemented. ([\#19262](https://github.com/element-hq/synapse/issues/19262))
- Don't include debug logs in `Clock` unless explicitly enabled. ([\#19278](https://github.com/element-hq/synapse/issues/19278))
- Use `uv` to test olddeps to ensure all transitive dependencies use minimum versions. ([\#19289](https://github.com/element-hq/synapse/issues/19289))
- Add a config to be able to rate limit search in the user directory. ([\#19291](https://github.com/element-hq/synapse/issues/19291))
- Log the original bind exception when encountering `Failed to listen on 0.0.0.0, continuing because listening on [::]`. ([\#19297](https://github.com/element-hq/synapse/issues/19297))
- Unpin the version of Rust we use to build Synapse wheels (was 1.82.0) now that MacOS support has been dropped. ([\#19302](https://github.com/element-hq/synapse/issues/19302))
- Make it more clear how `shared_extra_conf` is combined in our Docker configuration scripts. ([\#19323](https://github.com/element-hq/synapse/issues/19323))
- Update CI to stream Complement progress and format logs in a separate step after all tests are done. ([\#19326](https://github.com/element-hq/synapse/issues/19326))
- Format `.github/workflows/tests.yml`. ([\#19327](https://github.com/element-hq/synapse/issues/19327))
# Synapse 1.144.0 (2025-12-09)
## Deprecation of MacOS Python wheels

Cargo.lock

@@ -13,9 +13,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.101"
version = "1.0.100"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5f0e0fee31ef5ed1ba1316088939cea399010ed7731dba877ed44aeb407a75ea"
checksum = "a23eb6b1614318a8071c9b2521f36b424b2c83db5eb3a0fead4a6c0809af6e61"
[[package]]
name = "arc-swap"
@@ -73,9 +73,9 @@ checksum = "46c5e41b57b8bba42a04676d81cb89e9ee8e859a1a66f80a5a72e1cb76b34d43"
[[package]]
name = "bytes"
version = "1.11.1"
version = "1.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1e748733b7cbc798e1434b6ac524f0c1ff2ab456fe201501e6497c8417a4fc33"
checksum = "b35204fbdc0b3f4446b89fc1ac2cf84a8a68971995d0bf2e925ec7cd960f9cb3"
[[package]]
name = "cc"
@@ -187,9 +187,9 @@ dependencies = [
[[package]]
name = "futures"
version = "0.3.32"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b147ee9d1f6d097cef9ce628cd2ee62288d963e16fb287bd9286455b241382d"
checksum = "65bc07b1a8bc7c85c5f2e110c476c7389b4554ba72af57d8445ea63a576b0876"
dependencies = [
"futures-channel",
"futures-core",
@@ -202,9 +202,9 @@ dependencies = [
[[package]]
name = "futures-channel"
version = "0.3.32"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "07bbe89c50d7a535e539b8c17bc0b49bdb77747034daa8087407d655f3f7cc1d"
checksum = "2dff15bf788c671c1934e366d07e30c1814a8ef514e1af724a602e8a2fbe1b10"
dependencies = [
"futures-core",
"futures-sink",
@@ -212,15 +212,15 @@ dependencies = [
[[package]]
name = "futures-core"
version = "0.3.32"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7e3450815272ef58cec6d564423f6e755e25379b217b0bc688e295ba24df6b1d"
checksum = "05f29059c0c2090612e8d742178b0580d2dc940c837851ad723096f87af6663e"
[[package]]
name = "futures-executor"
version = "0.3.32"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "baf29c38818342a3b26b5b923639e7b1f4a61fc5e76102d4b1981c6dc7a7579d"
checksum = "1e28d1d997f585e54aebc3f97d39e72338912123a67330d723fdbb564d646c9f"
dependencies = [
"futures-core",
"futures-task",
@@ -229,15 +229,15 @@ dependencies = [
[[package]]
name = "futures-io"
version = "0.3.32"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cecba35d7ad927e23624b22ad55235f2239cfa44fd10428eecbeba6d6a717718"
checksum = "9e5c1b78ca4aae1ac06c48a526a655760685149f0d465d21f37abfe57ce075c6"
[[package]]
name = "futures-macro"
version = "0.3.32"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e835b70203e41293343137df5c0664546da5745f82ec9b84d40be8336958447b"
checksum = "162ee34ebcb7c64a8abebc059ce0fee27c2262618d7b60ed8faf72fef13c3650"
dependencies = [
"proc-macro2",
"quote",
@@ -246,21 +246,21 @@ dependencies = [
[[package]]
name = "futures-sink"
version = "0.3.32"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c39754e157331b013978ec91992bde1ac089843443c49cbc7f46150b0fad0893"
checksum = "e575fab7d1e0dcb8d0c7bcf9a63ee213816ab51902e6d244a95819acacf1d4f7"
[[package]]
name = "futures-task"
version = "0.3.32"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "037711b3d59c33004d3856fbdc83b99d4ff37a24768fa1be9ce3538a1cde4393"
checksum = "f90f7dce0722e95104fcb095585910c0977252f286e354b5e3bd38902cd99988"
[[package]]
name = "futures-util"
version = "0.3.32"
version = "0.3.31"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "389ca41296e6190b48053de0321d02a77f32f8a5d2461dd38762c0593805c6d6"
checksum = "9fa08315bb612088cc391249efdc3bc77536f16c91f6cf495e6fbe85b20a4a81"
dependencies = [
"futures-channel",
"futures-core",
@@ -270,6 +270,7 @@ dependencies = [
"futures-task",
"memchr",
"pin-project-lite",
"pin-utils",
"slab",
]
@@ -704,9 +705,9 @@ checksum = "241eaef5fd12c88705a01fc1066c48c4b36e0dd4377dcdc7ec3942cea7a69956"
[[package]]
name = "log"
version = "0.4.29"
version = "0.4.28"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897"
checksum = "34080505efa8e45a4b816c349525ebe327ceaa8559756f0356cba97ef3bf7432"
[[package]]
name = "lru-slab"
@@ -770,6 +771,12 @@ version = "0.2.16"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3b3cff922bd51709b605d9ead9aa71031d81447142d828eb4a6eba76fe619f9b"
[[package]]
name = "pin-utils"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]]
name = "portable-atomic"
version = "1.11.1"
@@ -806,12 +813,11 @@ dependencies = [
[[package]]
name = "pyo3"
version = "0.27.2"
version = "0.26.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ab53c047fcd1a1d2a8820fe84f05d6be69e9526be40cb03b73f86b6b03e6d87d"
checksum = "7ba0117f4212101ee6544044dae45abe1083d30ce7b29c4b5cbdfa2354e07383"
dependencies = [
"anyhow",
"bytes",
"indoc",
"libc",
"memoffset",
@@ -825,18 +831,18 @@ dependencies = [
[[package]]
name = "pyo3-build-config"
version = "0.27.2"
version = "0.26.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b455933107de8642b4487ed26d912c2d899dec6114884214a0b3bb3be9261ea6"
checksum = "4fc6ddaf24947d12a9aa31ac65431fb1b851b8f4365426e182901eabfb87df5f"
dependencies = [
"target-lexicon",
]
[[package]]
name = "pyo3-ffi"
version = "0.27.2"
version = "0.26.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1c85c9cbfaddf651b1221594209aed57e9e5cff63c4d11d1feead529b872a089"
checksum = "025474d3928738efb38ac36d4744a74a400c901c7596199e20e45d98eb194105"
dependencies = [
"libc",
"pyo3-build-config",
@@ -855,9 +861,9 @@ dependencies = [
[[package]]
name = "pyo3-macros"
version = "0.27.2"
version = "0.26.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0a5b10c9bf9888125d917fb4d2ca2d25c8df94c7ab5a52e13313a07e050a3b02"
checksum = "2e64eb489f22fe1c95911b77c44cc41e7c19f3082fc81cce90f657cdc42ffded"
dependencies = [
"proc-macro2",
"pyo3-macros-backend",
@@ -867,9 +873,9 @@ dependencies = [
[[package]]
name = "pyo3-macros-backend"
version = "0.27.2"
version = "0.26.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "03b51720d314836e53327f5871d4c0cfb4fb37cc2c4a11cc71907a86342c40f9"
checksum = "100246c0ecf400b475341b8455a9213344569af29a3c841d29270e53102e0fcf"
dependencies = [
"heck",
"proc-macro2",
@@ -880,9 +886,9 @@ dependencies = [
[[package]]
name = "pythonize"
version = "0.27.0"
version = "0.26.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3a8f29db331e28c332c63496cfcbb822aca3d7320bc08b655d7fd0c29c50ede"
checksum = "11e06e4cff9be2bbf2bddf28a486ae619172ea57e79787f856572878c62dcfe2"
dependencies = [
"pyo3",
"serde",
@@ -989,9 +995,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.12.3"
version = "1.12.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e10754a14b9137dd7b1e3e5b0493cc9171fdd105e0ab477f51b72e7f3ac0e276"
checksum = "843bc0191f75f3e22651ae5f1e72939ab2f72a4bc30fa80a066bd66edefc24d4"
dependencies = [
"aho-corasick",
"memchr",
@@ -1018,9 +1024,9 @@ checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
[[package]]
name = "reqwest"
version = "0.12.28"
version = "0.12.24"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eddd3ca559203180a307f12d114c268abf583f59b03cb906fd0b3ff8646c1147"
checksum = "9d0946410b9f7b082a427e4ef5c8ff541a88b357bc6c637c40db3a68ac70a36f"
dependencies = [
"base64",
"bytes",
@@ -1201,15 +1207,15 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.149"
version = "1.0.145"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "83fc039473c5595ace860d8c4fafa220ff474b3fc6bfdb4293327f1a37e94d86"
checksum = "402a6f66d8c709116cf22f558eab210f5a50187f702eb4d7e5ef38d9a7f1c79c"
dependencies = [
"itoa",
"memchr",
"ryu",
"serde",
"serde_core",
"zmij",
]
[[package]]
@@ -1410,9 +1416,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
[[package]]
name = "tokio"
version = "1.49.0"
version = "1.48.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "72a2903cd7736441aac9df9d7688bd0ce48edccaadf181c3b90be801e81d3d86"
checksum = "ff360e02eab121e0bc37a2d3b4d4dc622e6eda3a8e5253d5435ecf5bd4c68408"
dependencies = [
"bytes",
"libc",
@@ -1462,9 +1468,9 @@ dependencies = [
[[package]]
name = "tower-http"
version = "0.6.8"
version = "0.6.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d4e6559d53cc268e5031cd8429d05415bc4cb4aefc4aa5d6cc35fbf5b924a1f8"
checksum = "adc82fd73de2a9722ac5da747f12383d2bfdb93591ee6c58486e0097890f05f2"
dependencies = [
"bitflags",
"bytes",
@@ -1915,9 +1921,3 @@ dependencies = [
"quote",
"syn",
]
[[package]]
name = "zmij"
version = "1.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3ff05f8caa9038894637571ae6b9e29466c1f4f829d26c9b28f869a29cbe3445"


@@ -1,4 +1,4 @@
.. image:: https://github.com/element-hq/synapse/raw/develop/docs/element_logo_white_bg.svg
.. image:: ./docs/element_logo_white_bg.svg
:height: 60px
**Element Synapse - Matrix homeserver implementation**


@@ -4,6 +4,7 @@
title = "Synapse"
authors = ["The Matrix.org Foundation C.I.C."]
language = "en"
multilingual = false
# The directory that documentation files are stored in
src = "docs"
@@ -30,10 +31,13 @@ site-url = "/synapse/"
# Additional HTML, JS, CSS that's injected into each page of the book.
# More information available in docs/website_files/README.md
additional-css = [
"docs/website_files/table-of-contents.css",
"docs/website_files/remove-nav-buttons.css",
"docs/website_files/indent-section-headers.css",
"docs/website_files/version-picker.css",
]
additional-js = [
"docs/website_files/table-of-contents.js",
"docs/website_files/version-picker.js",
"docs/website_files/version.js",
]


@@ -0,0 +1 @@
Implement support for MSC4354: Sticky Events.


@@ -1 +0,0 @@
Update docs to clarify `outbound_federation_restricted_to` can also be used with the [Secure Border Gateway (SBG)](https://element.io/en/server-suite/secure-border-gateways).


@@ -1 +0,0 @@
Unify Complement developer docs.


@@ -1,38 +0,0 @@
# Docs: https://golangci-lint.run/docs/configuration/file/
#
# Go formatting/linting rules
#
# This is run as part of the normal linting utility script,
# `poetry run ./scripts-dev/lint.sh`
version: "2"
linters:
# Default set of linters.
# The value can be:
# - `standard`: https://golangci-lint.run/docs/linters/#enabled-by-default
# - `all`: enables all linters by default.
# - `none`: disables all linters by default.
# - `fast`: enables only linters considered as "fast" (`golangci-lint help linters --json | jq '[ .[] | select(.fast==true) ] | map(.name)'`).
# Default: standard
default: standard
# Enable specific linter.
# enable:
# - example
# Disable specific linters.
disable:
# FIXME: Ideally, we'd enable the `bodyclose` lint but there are many
# false-positives (like https://github.com/timakin/bodyclose/issues/39) and it is just
# not well-suited for our use case (https://github.com/timakin/bodyclose/issues/11 and
# https://github.com/timakin/bodyclose/issues/76).
- bodyclose
formatters:
# Enable specific formatter.
# Default: [] (uses standard Go formatting)
enable:
- gofmt
- goimports
- golines


@@ -1,95 +0,0 @@
# Complement testing
Complement is a black box integration testing framework for Matrix homeservers. It
allows us to write end-to-end tests that interact with real Synapse homeservers to
ensure everything works at a holistic level.
## Setup
Nothing beyond a [normal Complement
setup](https://github.com/matrix-org/complement#running) (just Go and Docker).
## Running tests
Run tests from the [Complement](https://github.com/matrix-org/complement) repo:
```shell
# Run the tests
./scripts-dev/complement.sh
# To run a whole group of tests, you can specify part of the test path:
scripts-dev/complement.sh ./tests/csapi/... -run TestRoomCreate
# To run a specific test, you can specify the whole name structure:
scripts-dev/complement.sh ./tests/csapi/... -run TestRoomCreate/Parallel/POST_/createRoom_makes_a_public_room
# Generally though, the `-run` parameter accepts regex patterns, so you can match however you like:
scripts-dev/complement.sh ./tests/... -run 'TestRoomCreate/Parallel/POST_/createRoom_makes_a_(.*)'
```
It's often nice to develop on Synapse and write Complement tests at the same time.
Here is how to run your local Synapse checkout against your local Complement checkout.
```shell
# To run a specific test
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh ./tests/csapi/... -run TestRoomCreate
```
The above will run a monolithic (single-process) Synapse with SQLite as the database.
For other configurations, try:
- Passing `POSTGRES=1` as an environment variable to use the Postgres database instead.
- Passing `WORKERS=1` as an environment variable to use a workerised setup instead. This
option implies the use of Postgres.
- If setting `WORKERS=1`, optionally set `WORKER_TYPES=` to declare which worker types
you wish to test: a simple comma-delimited string containing the worker types
defined in the `WORKERS_CONFIG` template
[here](https://github.com/element-hq/synapse/blob/develop/docker/configure_workers_and_start.py#L54).
A safe example would be `WORKER_TYPES="federation_inbound, federation_sender,
synchrotron"`. See the [worker documentation](../workers.md) for additional
information on workers.
- Passing `ASYNCIO_REACTOR=1` as an environment variable to use the asyncio-backed
reactor with Twisted instead of the default one.
- Passing `PODMAN=1` will use the [podman](https://podman.io/) container runtime,
instead of docker.
- Passing `UNIX_SOCKETS=1` will utilise Unix socket functionality for Synapse, Redis,
and Postgres (when applicable).
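Several of these options can be combined. For example, a hypothetical invocation that runs a workerised Postgres deployment against a local Complement checkout:
```shell
# WORKERS=1 implies POSTGRES=1; WORKER_TYPES is optional
COMPLEMENT_DIR=../complement WORKERS=1 \
  WORKER_TYPES="federation_inbound, federation_sender, synchrotron" \
  ./scripts-dev/complement.sh ./tests/csapi/... -run TestRoomCreate
```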
To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`, e.g.:
```sh
SYNAPSE_TEST_LOG_LEVEL=DEBUG COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestRoomCreate
```
### Running in-repo tests
In-repo Complement tests are tests that are vendored into this project. We use the
in-repo test suite to test Synapse-specific behaviors like the admin API.
To run the in-repo Complement tests, use the `--in-repo` command line argument.
```shell
# Run only a specific test package.
# Note: test packages are relative to the `./complement` directory in this project
./scripts-dev/complement.sh --in-repo ./tests/...
# Similarly, you can also use `-run` to specify all or part of a specific test path to run
scripts-dev/complement.sh --in-repo ./tests/... -run TestIntraShardFederation
```
### Accessing the homeserver database after Complement test runs
If you're curious what the database looks like after you run some tests, here are some
steps to get you going in Synapse:
1. In your Complement test, comment out `defer deployment.Destroy(t)` and replace with
`defer time.Sleep(2 * time.Hour)` to keep the homeserver running after the tests
complete
1. Start the Complement tests
1. Find the name of the container, `docker ps -f name=complement_` (this will filter for
just the Complement related Docker containers)
1. Access the container replacing the name with what you found in the previous step:
`docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash`
1. Install sqlite (database driver), `apt-get update && apt-get install -y sqlite3`
1. Then run `sqlite3` and open the database `.open /conf/homeserver.db` (this db path
comes from the Synapse homeserver.yaml)
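From there, a couple of starter queries (the `events` table and its columns are assumed from Synapse's schema):
```shell
# List all tables in the homeserver database
sqlite3 /conf/homeserver.db '.tables'
# Peek at the most recently persisted events
sqlite3 /conf/homeserver.db \
  'SELECT event_id, type, room_id FROM events ORDER BY stream_ordering DESC LIMIT 10;'
```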

View File

@@ -1,57 +0,0 @@
module github.com/element-hq/synapse
go 1.24.1
toolchain go1.24.4
require (
github.com/matrix-org/complement v0.0.0-20251120181401-44111a2a8a9d
github.com/matrix-org/gomatrixserverlib v0.0.0-20250813150445-9f5070a65744
)
require (
github.com/docker/docker v28.3.3+incompatible // indirect
github.com/docker/go-connections v0.5.0 // indirect
github.com/hashicorp/go-set/v3 v3.0.0 // indirect
github.com/moby/sys/atomicwriter v0.1.0 // indirect
github.com/oleiade/lane/v2 v2.0.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.34.0 // indirect
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 // indirect
gotest.tools/v3 v3.4.0 // indirect
)
require (
github.com/Microsoft/go-winio v0.6.2 // indirect
github.com/containerd/errdefs v1.0.0 // indirect
github.com/containerd/errdefs/pkg v0.3.0 // indirect
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
github.com/distribution/reference v0.6.0 // indirect
github.com/docker/go-units v0.5.0 // indirect
github.com/felixge/httpsnoop v1.0.4 // indirect
github.com/go-logr/logr v1.4.3 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/matrix-org/util v0.0.0-20221111132719-399730281e66 // indirect
github.com/moby/docker-image-spec v1.3.1 // indirect
github.com/moby/term v0.5.2 // indirect
github.com/opencontainers/go-digest v1.0.0 // indirect
github.com/opencontainers/image-spec v1.1.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
github.com/sirupsen/logrus v1.9.3 // indirect
github.com/tidwall/gjson v1.18.0 // indirect
github.com/tidwall/match v1.1.1 // indirect
github.com/tidwall/pretty v1.2.1 // indirect
github.com/tidwall/sjson v1.2.5 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 // indirect
go.opentelemetry.io/otel v1.36.0 // indirect
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 // indirect
go.opentelemetry.io/otel/metric v1.36.0 // indirect
go.opentelemetry.io/otel/trace v1.36.0 // indirect
go.opentelemetry.io/proto/otlp v1.7.0 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/time v0.11.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
)

View File

@@ -1,169 +0,0 @@
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
github.com/cenkalti/backoff/v4 v4.3.0 h1:MyRJ/UdXutAwSAT+s3wNd7MfTIcy71VQueUuFK343L8=
github.com/cenkalti/backoff/v4 v4.3.0/go.mod h1:Y3VNntkOUPxTVeUxJ/G5vcM//AlwfmyYozVcomhLiZE=
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
github.com/docker/docker v28.3.3+incompatible h1:Dypm25kh4rmk49v1eiVbsAtpAsYURjYkaKubwuBdxEI=
github.com/docker/docker v28.3.3+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.5.0 h1:USnMq7hx7gwdVZq1L49hLXaFtUdTADjXGp+uj1Br63c=
github.com/docker/go-connections v0.5.0/go.mod h1:ov60Kzw0kKElRwhNs9UlUHAE/F9Fe6GLaXnqyDdmEXc=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 h1:5ZPtiqj0JL5oKWmcsq4VMaAW5ukBEgSGXEN89zeH1Jo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
github.com/hashicorp/go-set/v3 v3.0.0 h1:CaJBQvQCOWoftrBcDt7Nwgo0kdpmrKxar/x2o6pV9JA=
github.com/hashicorp/go-set/v3 v3.0.0/go.mod h1:IEghM2MpE5IaNvL+D7X480dfNtxjRXZ6VMpK3C8s2ok=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/matrix-org/complement v0.0.0-20251120181401-44111a2a8a9d h1:s2Xc9GB2E/pXdElP18h8+04Y3SmhaII7xh2YmCM7oZc=
github.com/matrix-org/complement v0.0.0-20251120181401-44111a2a8a9d/go.mod h1:HioTV089DHLBfljH9QLGifJRE4Avnyk08BXXhCwd4gs=
github.com/matrix-org/gomatrix v0.0.0-20220926102614-ceba4d9f7530 h1:kHKxCOLcHH8r4Fzarl4+Y3K5hjothkVW5z7T1dUM11U=
github.com/matrix-org/gomatrix v0.0.0-20220926102614-ceba4d9f7530/go.mod h1:/gBX06Kw0exX1HrwmoBibFA98yBk/jxKpGVeyQbff+s=
github.com/matrix-org/gomatrixserverlib v0.0.0-20250813150445-9f5070a65744 h1:5GvC2FD9O/PhuyY95iJQdNYHbDioEhMWdeMP9maDUL8=
github.com/matrix-org/gomatrixserverlib v0.0.0-20250813150445-9f5070a65744/go.mod h1:b6KVfDjXjA5Q7vhpOaMqIhFYvu5BuFVZixlNeTV/CLc=
github.com/matrix-org/util v0.0.0-20221111132719-399730281e66 h1:6z4KxomXSIGWqhHcfzExgkH3Z3UkIXry4ibJS4Aqz2Y=
github.com/matrix-org/util v0.0.0-20221111132719-399730281e66/go.mod h1:iBI1foelCqA09JJgPV0FYz4qA5dUXYOxMi57FxKBdd4=
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
github.com/moby/sys/atomicwriter v0.1.0/go.mod h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs=
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=
github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/oleiade/lane/v2 v2.0.0 h1:XW/ex/Inr+bPkLd3O240xrFOhUkTd4Wy176+Gv0E3Qw=
github.com/oleiade/lane/v2 v2.0.0/go.mod h1:i5FBPFAYSWCgLh58UkUGCChjcCzef/MI7PlQm2TKCeg=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/shoenig/test v1.11.0 h1:NoPa5GIoBwuqzIviCrnUJa+t5Xb4xi5Z+zODJnIDsEQ=
github.com/shoenig/test v1.11.0/go.mod h1:UxJ6u/x2v/TNs/LoLxBNJRV9DiwBBKYxXSyczsBHFoI=
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/tidwall/gjson v1.14.2/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/gjson v1.18.0 h1:FIDeeyB800efLX89e5a8Y0BNH+LOngJyGrIWxG2FKQY=
github.com/tidwall/gjson v1.18.0/go.mod h1:/wbyibRr2FHMks5tjHJ5F8dMZh3AcwJEMf5vlfC0lxk=
github.com/tidwall/match v1.1.1 h1:+Ho715JplO36QYgwN9PGYNhgZvoUSc9X2c80KVTi+GA=
github.com/tidwall/match v1.1.1/go.mod h1:eRSPERbgtNPcGhD8UCthc6PmLEQXEWd3PRB5JTxsfmM=
github.com/tidwall/pretty v1.2.0/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/pretty v1.2.1 h1:qjsOFOWWQl+N3RsoF5/ssm1pHmJJwhjlSbZ51I6wMl4=
github.com/tidwall/pretty v1.2.1/go.mod h1:ITEVvHYasfjBbM0u2Pg8T2nJnzm8xPwvNhhsoaGGjNU=
github.com/tidwall/sjson v1.2.5 h1:kLy8mja+1c9jlljvWTlSazM7cKDRfJuR/bOJhcY5NcY=
github.com/tidwall/sjson v1.2.5/go.mod h1:Fvgq9kS/6ociJEDnK0Fk1cpYF4FIW6ZF7LAe+6jwd28=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0 h1:F7Jx+6hwnZ41NSFTO5q4LYDtJRXBf2PD0rNBkeB/lus=
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.61.0/go.mod h1:UHB22Z8QsdRDrnAtX4PntOl36ajSxcdUMt1sF7Y6E7Q=
go.opentelemetry.io/otel v1.36.0 h1:UumtzIklRBY6cI/lllNZlALOF5nNIzJVb16APdvgTXg=
go.opentelemetry.io/otel v1.36.0/go.mod h1:/TcFMXYjyRNh8khOAO9ybYkqaDBb/70aVwkNML4pP8E=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0 h1:dNzwXjZKpMpE2JhmO+9HsPl42NIXFIFSUSSs0fiqra0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.36.0/go.mod h1:90PoxvaEB5n6AOdZvi+yWJQoE95U8Dhhw2bSyRqnTD0=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.34.0 h1:BEj3SPM81McUZHYjRS5pEgNgnmzGJ5tRpU5krWnV8Bs=
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.34.0/go.mod h1:9cKLGBDzI/F3NoHLQGm4ZrYdIHsvGt6ej6hUowxY0J4=
go.opentelemetry.io/otel/metric v1.36.0 h1:MoWPKVhQvJ+eeXWHFBOPoBOi20jh6Iq2CcCREuTYufE=
go.opentelemetry.io/otel/metric v1.36.0/go.mod h1:zC7Ks+yeyJt4xig9DEw9kuUFe5C3zLbVjV2PzT6qzbs=
go.opentelemetry.io/otel/sdk v1.36.0 h1:b6SYIuLRs88ztox4EyrvRti80uXIFy+Sqzoh9kFULbs=
go.opentelemetry.io/otel/sdk v1.36.0/go.mod h1:+lC+mTgD+MUWfjJubi2vvXWcVxyr9rmlshZni72pXeY=
go.opentelemetry.io/otel/sdk/metric v1.36.0 h1:r0ntwwGosWGaa0CrSt8cuNuTcccMXERFwHX4dThiPis=
go.opentelemetry.io/otel/sdk/metric v1.36.0/go.mod h1:qTNOhFDfKRwX0yXOqJYegL5WRaW376QbB7P4Pb0qva4=
go.opentelemetry.io/otel/trace v1.36.0 h1:ahxWNuqZjpdiFAyrIoQ4GIiAIhxAunQR6MUoKrsNd4w=
go.opentelemetry.io/otel/trace v1.36.0/go.mod h1:gQ+OnDZzrybY4k4seLzPAWNwVBBVlF2szhehOBB/tGA=
go.opentelemetry.io/proto/otlp v1.7.0 h1:jX1VolD6nHuFzOYso2E73H85i92Mv8JQYk0K9vz09os=
go.opentelemetry.io/proto/otlp v1.7.0/go.mod h1:fSKjH6YJ7HDlwzltzyMj036AJ3ejJLCgCSHGj4efDDo=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210119212857-b64e53b001e4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/time v0.11.0 h1:/bpjEDfN9tkoN/ryeYHnv5hcMlc8ncjMcM4XBk5NWV0=
golang.org/x/time v0.11.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.0/go.mod h1:xkSsbof2nBLbhDlRMhhhyNLN/zl3eTqcnHD5viDpcZ0=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/genproto/googleapis/api v0.0.0-20250528174236-200df99c418a h1:SGktgSolFCo75dnHJF2yMvnns6jCmHFJ0vE4Vn2JKvQ=
google.golang.org/genproto/googleapis/api v0.0.0-20250528174236-200df99c418a/go.mod h1:a77HrdMjoeKbnd2jmgcWdaS++ZLZAEq3orIOAEIKiVw=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a h1:v2PbRU4K3llS09c7zodFpNePeamkAwG3mPrAery9VeE=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250528174236-200df99c418a/go.mod h1:qQ0YXyHHx3XkvlzUtpXDkS29lDSafHMZBAZDc03LQ3A=
google.golang.org/grpc v1.72.2 h1:TdbGzwb82ty4OusHWepvFWGLgIbNo1/SUynEN0ssqv8=
google.golang.org/grpc v1.72.2/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gotest.tools/v3 v3.4.0 h1:ZazjZUfuVeZGLAmlKKuyv3IKP5orXcwtOwDQH6YVr6o=
gotest.tools/v3 v3.4.0/go.mod h1:CtbdzLSsqVhDgMtKsx03ird5YTGB3ar27v0u/yKBW5g=

View File

@@ -1,65 +0,0 @@
// This file is licensed under the Affero General Public License (AGPL) version 3.
//
// Copyright (C) 2026 Element Creations Ltd
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as
// published by the Free Software Foundation, either version 3 of the
// License, or (at your option) any later version.
//
// See the GNU Affero General Public License for more details:
// <https://www.gnu.org/licenses/agpl-3.0.html>.
package synapse_tests
import (
"testing"
"github.com/matrix-org/complement"
"github.com/matrix-org/complement/client"
"github.com/matrix-org/complement/helpers"
"github.com/matrix-org/gomatrixserverlib/spec"
)
// Stub test to ensure that homeservers can communicate with each other (federation works correctly).
//
// TODO: This test will disappear once we have other real Synapse specific tests in
// place. This is simply here as an example without bloating the PR with some specific
// new tests.
func TestFederation(t *testing.T) {
// Create two homeservers
deployment := complement.Deploy(t, 2)
defer deployment.Destroy(t)
alice := deployment.Register(t, "hs1", helpers.RegistrationOpts{})
bob := deployment.Register(t, "hs2", helpers.RegistrationOpts{})
aliceRoomID := alice.MustCreateRoom(t, map[string]any{
"preset": "public_chat",
})
bobRoomID := bob.MustCreateRoom(t, map[string]any{
"preset": "public_chat",
})
t.Run("parallel", func(t *testing.T) {
t.Run("HS1 -> HS2", func(t *testing.T) {
t.Parallel()
alice.MustJoinRoom(t, bobRoomID, []spec.ServerName{
deployment.GetFullyQualifiedHomeserverName(t, "hs2"),
})
bob.MustSyncUntil(t, client.SyncReq{}, client.SyncJoinedTo(alice.UserID, bobRoomID))
})
t.Run("HS2 -> HS1", func(t *testing.T) {
t.Parallel()
bob.MustJoinRoom(t, aliceRoomID, []spec.ServerName{
deployment.GetFullyQualifiedHomeserverName(t, "hs1"),
})
alice.MustSyncUntil(t, client.SyncReq{}, client.SyncJoinedTo(bob.UserID, aliceRoomID))
})
})
}

View File

@@ -1,23 +0,0 @@
// This file is licensed under the Affero General Public License (AGPL) version 3.
//
// Copyright (C) 2026 Element Creations Ltd
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU Affero General Public License as
// published by the Free Software Foundation, either version 3 of the
// License, or (at your option) any later version.
//
// See the GNU Affero General Public License for more details:
// <https://www.gnu.org/licenses/agpl-3.0.html>.
package synapse_tests
import (
"testing"
"github.com/matrix-org/complement"
)
func TestMain(m *testing.M) {
complement.TestMain(m, "synapse")
}

File diff suppressed because it is too large.

debian/changelog vendored
View File

@@ -1,81 +1,3 @@
matrix-synapse-py3 (1.149.0~rc1) stable; urgency=medium
* New synapse release 1.149.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Mar 2026 14:37:57 +0000
matrix-synapse-py3 (1.148.0) stable; urgency=medium
* New synapse release 1.148.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 24 Feb 2026 11:17:49 +0000
matrix-synapse-py3 (1.148.0~rc1) stable; urgency=medium
* New synapse release 1.148.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 17 Feb 2026 16:44:08 +0000
matrix-synapse-py3 (1.147.1) stable; urgency=medium
* New synapse release 1.147.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 12 Feb 2026 15:45:15 +0000
matrix-synapse-py3 (1.147.0) stable; urgency=medium
* New synapse release 1.147.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 10 Feb 2026 12:39:58 +0000
matrix-synapse-py3 (1.147.0~rc1) stable; urgency=medium
* New Synapse release 1.147.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Feb 2026 08:53:17 -0700
matrix-synapse-py3 (1.146.0) stable; urgency=medium
* New Synapse release 1.146.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 27 Jan 2026 08:43:59 -0700
matrix-synapse-py3 (1.146.0~rc1) stable; urgency=medium
* New Synapse release 1.146.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 Jan 2026 08:42:10 -0700
matrix-synapse-py3 (1.145.0) stable; urgency=medium
* New Synapse release 1.145.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 13 Jan 2026 08:37:42 -0700
matrix-synapse-py3 (1.145.0~rc4) stable; urgency=medium
* New Synapse release 1.145.0rc4.
-- Synapse Packaging team <packages@matrix.org> Thu, 08 Jan 2026 12:06:35 -0700
matrix-synapse-py3 (1.145.0~rc3) stable; urgency=medium
* New Synapse release 1.145.0rc3.
-- Synapse Packaging team <packages@matrix.org> Wed, 07 Jan 2026 15:32:07 -0700
matrix-synapse-py3 (1.145.0~rc2) stable; urgency=medium
* New Synapse release 1.145.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 07 Jan 2026 10:10:07 -0700
matrix-synapse-py3 (1.145.0~rc1) stable; urgency=medium
* New Synapse release 1.145.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 06 Jan 2026 09:29:39 -0700
matrix-synapse-py3 (1.144.0) stable; urgency=medium
* New Synapse release 1.144.0.

View File

@@ -145,12 +145,6 @@ for port in 8080 8081 8082; do
rc_delayed_event_mgmt:
per_second: 1000
burst_count: 1000
rc_room_creation:
per_second: 1000
burst_count: 1000
rc_user_directory:
per_second: 1000
burst_count: 1000
RC
)
echo "${ratelimiting}" >> "$port.config"

View File

@@ -188,12 +188,7 @@ COPY --from=builder --exclude=.lock /install /usr/local
COPY ./docker/start.py /start.py
COPY ./docker/conf /conf
# 8008: CS Matrix API port from Synapse
# 8448: SS Matrix API port from Synapse
EXPOSE 8008/tcp 8448/tcp
# 19090: Metrics listener port for the main process (metrics must be enabled with
# SYNAPSE_ENABLE_METRICS=1).
EXPOSE 19090/tcp
EXPOSE 8008/tcp 8009/tcp 8448/tcp
ENTRYPOINT ["/start.py"]

View File

@@ -71,15 +71,6 @@ FROM $FROM
# Expose nginx listener port
EXPOSE 8080/tcp
# Metrics for workers are on ports starting from 19091 but since these are dynamic
# we don't expose them by default (metrics must be enabled with
# SYNAPSE_ENABLE_METRICS=1)
#
# Instead, we expose a single port used for Prometheus HTTP service discovery
# (`http://<synapse_container>:9469/metrics/service_discovery`) and proxy all of the
# workers' metrics endpoints through that
# (`http://<synapse_container>:9469/metrics/worker/<worker_name>`).
EXPOSE 9469/tcp
# A script to read environment variables and create the necessary
# files to run the desired worker configuration. Will start supervisord.

View File

@@ -12,9 +12,11 @@ Note that running Synapse's unit tests from within the docker image is not suppo
`scripts-dev/complement.sh` is a script that will automatically build
and run Synapse against Complement.
Consult our [Complement docs](https://github.com/element-hq/synapse/tree/develop/complement) for instructions on how to use it.
Consult the [contributing guide][guideComplementSh] for instructions on how to use it.
[guideComplementSh]: https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-integration-tests-complement
## Building and running the images manually
Under some circumstances, you may wish to build the images manually.
@@ -29,23 +31,23 @@ release of Synapse, instead of your current checkout, you can skip this step. From
the root of the repository:
```sh
docker build -t localhost/synapse -f docker/Dockerfile .
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
Next, build the workerised Synapse docker image, which is a layer over the base
image.
```sh
docker build -t localhost/synapse-workers --build-arg FROM=localhost/synapse -f docker/Dockerfile-workers .
docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers .
```
Finally, build the multi-purpose image for Complement, which is a layer over the workers image.
```sh
docker build -t localhost/complement-synapse -f docker/complement/Dockerfile --build-arg FROM=localhost/synapse-workers docker/complement
docker build -t complement-synapse -f docker/complement/Dockerfile docker/complement
```
This will build an image with the tag `localhost/complement-synapse`, which can be handed to
This will build an image with the tag `complement-synapse`, which can be handed to
Complement for testing via the `COMPLEMENT_BASE_IMAGE` environment variable. Refer to
[Complement's documentation](https://github.com/matrix-org/complement/#running) for
how to run the tests, as well as the various available command line flags.
@@ -133,49 +135,3 @@ but it does not serve TLS by default.
You can configure `SYNAPSE_TLS_CERT` and `SYNAPSE_TLS_KEY` to point to a
TLS certificate and key (respectively), both in PEM (textual) format.
In this case, Nginx will additionally serve using HTTPS on port 8448.
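As a minimal sketch (paths and container name are illustrative), assuming a PEM certificate/key pair mounted into the container:
```shell
docker run -d --name synapse-workers \
  -v /path/to/tls:/tls:ro \
  -e SYNAPSE_TLS_CERT=/tls/fullchain.pem \
  -e SYNAPSE_TLS_KEY=/tls/privkey.pem \
  -p 8448:8448 \
  localhost/synapse-workers
```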
### Metrics
Set `SYNAPSE_ENABLE_METRICS=1` to configure `enable_metrics: true` and setup the
`metrics` listener on the main and worker processes. Defaults to `0` (disabled). The
main process will listen on port `19090` and workers on port `19091 + <worker index>`.
When using `docker/Dockerfile-workers`, to ease the complexity of the metrics setup,
we also have a [Prometheus HTTP service
discovery](https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config)
endpoint available at `http://<synapse_container>:9469/metrics/service_discovery`.
The metrics from each worker can also be accessed via
`http://<synapse_container>:9469/metrics/worker/<worker_name>` which is what the service
discovery response points to behind the scenes. This way, you only need to expose a
single port (9469) to access all metrics.
```yaml
global:
scrape_interval: 15s
scrape_timeout: 15s
evaluation_interval: 15s
scrape_configs:
- job_name: synapse
scrape_interval: 15s
metrics_path: /_synapse/metrics
scheme: http
# We set `honor_labels` so that each service can set their own `job`/`instance` label
#
# > honor_labels controls how Prometheus handles conflicts between labels that are
# > already present in scraped data and labels that Prometheus would attach
# > server-side ("job" and "instance" labels, manually configured target
# > labels, and labels generated by service discovery implementations).
# >
# > *-- https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config*
honor_labels: true
# Use HTTP service discovery
#
# Reference:
# - https://prometheus.io/docs/prometheus/latest/http_sd/
# - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config
http_sd_configs:
- url: 'http://localhost:9469/metrics/service_discovery'
```
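To sanity-check the discovery endpoint before pointing Prometheus at it, you can fetch it directly (hostname and port depend on how the container is exposed):
```shell
# Should return a JSON target list with Content-Type: application/json
curl -i http://localhost:9469/metrics/service_discovery
# Fetch an individual worker's metrics through the same proxy
curl http://localhost:9469/metrics/worker/main
```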

View File

@@ -75,9 +75,6 @@ The following environment variables are supported in `generate` mode:
particularly tricky.
* `SYNAPSE_LOG_TESTING`: if set, Synapse will log additional information useful
for testing.
* `SYNAPSE_ENABLE_METRICS`: if set to `1`, the metrics listener will be enabled on the
main and worker processes. Defaults to `0` (disabled). The main process will listen on
port `19090` and workers on port `19091 + <worker index>`.
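For instance, a hypothetical `generate`-mode invocation with metrics enabled (the other variables shown, such as `SYNAPSE_SERVER_NAME`, are placeholders):
```shell
docker run -it --rm \
  -v /path/to/data:/data \
  -e SYNAPSE_SERVER_NAME=example.com \
  -e SYNAPSE_REPORT_STATS=no \
  -e SYNAPSE_ENABLE_METRICS=1 \
  matrixdotorg/synapse:latest generate
```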
## Postgres

View File

@@ -102,10 +102,6 @@ rc_room_creation:
per_second: 9999
burst_count: 9999
rc_user_directory:
per_second: 9999
burst_count: 9999
federation_rr_transactions_per_room_per_second: 9999
allow_device_name_lookup_over_federation: true
@@ -141,8 +137,6 @@ experimental_features:
msc4306_enabled: true
# Sticky Events
msc4354_enabled: true
# `/sync` `state_after`
msc4222_enabled: true
server_notices:
system_mxid_localpart: _server

View File

@@ -48,5 +48,3 @@ server {
proxy_set_header Host $host:$server_port;
}
}
{{ nginx_prometheus_metrics_service_discovery }}

View File

@@ -20,9 +20,4 @@ app_service_config_files:
{%- endfor %}
{%- endif %}
{# Controlled by SYNAPSE_ENABLE_METRICS #}
{% if enable_metrics %}
enable_metrics: true
{% endif %}
{{ shared_worker_config }}

View File

@@ -21,14 +21,6 @@ worker_listeners:
{%- endfor %}
{% endif %}
{# Controlled by SYNAPSE_ENABLE_METRICS #}
{% if metrics_port %}
- type: metrics
# Prometheus does not support Unix sockets so we don't bother with
# `SYNAPSE_USE_UNIX_SOCKET`, https://github.com/prometheus/prometheus/issues/12024
port: {{ metrics_port }}
{% endif %}
worker_log_config: {{ worker_log_config_filepath }}
{{ worker_extra_conf }}

View File

@@ -53,15 +53,6 @@ listeners:
- names: [federation]
compress: false
{% if SYNAPSE_ENABLE_METRICS %}
- type: metrics
# The main process always uses the same port 19090
#
# Prometheus does not support Unix sockets so we don't bother with
# `SYNAPSE_USE_UNIX_SOCKET`, https://github.com/prometheus/prometheus/issues/12024
port: 19090
{% endif %}
## Database ##
{% if POSTGRES_PASSWORD %}

View File

@@ -49,16 +49,11 @@
# regardless of the SYNAPSE_LOG_LEVEL setting.
# * SYNAPSE_LOG_TESTING: if set, Synapse will log additional information useful
# for testing.
# * SYNAPSE_USE_UNIX_SOCKET: TODO
# * `SYNAPSE_ENABLE_METRICS`: if set to `1`, the metrics listener will be enabled on the
# main and worker processes. Defaults to `0` (disabled). The main process will listen on
# port `19090` and workers on port `19091 + <worker index>`.
#
# NOTE: According to Complement's ENTRYPOINT expectations for a homeserver image (as defined
# in the project's README), this script may be run multiple times, and functionality should
# continue to work if so.
import json
import os
import platform
import re
@@ -76,7 +71,6 @@ from typing import (
SupportsIndex,
)
import attr
import yaml
from jinja2 import Environment, FileSystemLoader
@@ -343,7 +337,7 @@ WORKERS_CONFIG: dict[str, dict[str, Any]] = {
}
# Templates for sections that may be inserted multiple times in config files
NGINX_LOCATION_REGEX_CONFIG_BLOCK = """
NGINX_LOCATION_CONFIG_BLOCK = """
location ~* {endpoint} {{
proxy_pass {upstream};
proxy_set_header X-Forwarded-For $remote_addr;
@@ -352,25 +346,6 @@ NGINX_LOCATION_REGEX_CONFIG_BLOCK = """
}}
"""
# Having both **regex** (`NGINX_LOCATION_REGEX_CONFIG_BLOCK`) match vs **exact**
# (`NGINX_LOCATION_EXACT_CONFIG_BLOCK`) match is necessary because we can't use a URI
# path in `proxy_pass http://localhost:19090/_synapse/metrics` with the regex version.
#
# Example of what happens if you try to use `proxy_pass http://localhost:19090/_synapse/metrics`
# with `NGINX_LOCATION_REGEX_CONFIG_BLOCK`:
# ```
# nginx | 2025/12/31 22:58:34 [emerg] 21#21: "proxy_pass" cannot have URI part in location given by regular expression, or inside named location, or inside "if" statement, or inside "limit_except" block in /etc/nginx/conf.d/matrix-synapse.conf:732
# ```
NGINX_LOCATION_EXACT_CONFIG_BLOCK = """
location = {endpoint} {{
proxy_pass {upstream};
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;
}}
"""
NGINX_UPSTREAM_CONFIG_BLOCK = """
upstream {upstream_worker_base_name} {{
{body}
@@ -378,63 +353,6 @@ upstream {upstream_worker_base_name} {{
"""
PROMETHEUS_METRICS_SERVICE_DISCOVERY_FILE_PATH = (
"/data/prometheus_service_discovery.json"
)
"""
We serve this file with nginx so people can use it with `http_sd_config` in their
Prometheus config.
"""
NGINX_HOST_PLACEHOLDER = "<HOST_PLACEHOLDER>"
"""Will be replaced with the whatever hostname:port used to access the nginx metrics endpoint."""
NGINX_PROMETHEUS_METRICS_SERVICE_DISCOVERY = """
server {{
listen 9469;
location = /metrics/service_discovery {{
alias {service_discovery_file_path};
default_type application/json;
# Find/replace the host placeholder in the response body with the actual
# host used to access this endpoint.
#
# We want to reflect back whatever host the client used to access this file.
# For example, if they accessed it via `localhost:9469`, then they
# can also reach all of the proxied metrics endpoints at the same address.
# Or if it's Prometheus in another container, it will access this via
# `host.docker.internal:9469`, etc. Or perhaps it's even some randomly assigned
# port mapping.
sub_filter '{host_placeholder}' '$http_host';
# By default, `ngx_http_sub_module` only works on `text/html` responses. We want
# to find/replace in `application/JSON`.
sub_filter_types application/json;
# Replace all occurrences
sub_filter_once off;
}}
# Make the service discovery endpoint easy to find; redirect to the correct spot.
location = / {{
return 302 /metrics/service_discovery;
}}
{metrics_proxy_locations}
}}
"""
"""
Set up the nginx config necessary to serve the JSON file for Prometheus HTTP service discovery
(`http_sd_config`). Served at `/metrics/service_discovery`.
Reference:
- https://prometheus.io/docs/prometheus/latest/http_sd/
- https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config
We also proxy all of the Synapse metrics endpoints through a central place so that
people only need to expose the single 9469 port and service discovery can take care of
the rest: `/metrics/worker/<worker_name>` -> http://localhost:19090/_synapse/metrics
"""
# Utility functions
def log(txt: str) -> None:
print(txt)
@@ -694,42 +612,9 @@ def generate_base_homeserver_config() -> None:
subprocess.run([sys.executable, "/start.py", "migrate_config"], check=True)
@attr.s(auto_attribs=True)
class Worker:
worker_name: str
"""
ex.
`event_persister:2` -> `event_persister1` and `event_persister2`
`stream_writers=account_data+presence+receipts+to_device+typing"` -> `stream_writers`
"""
worker_base_name: str
"""
ex.
`event_persister:2` -> `event_persister`
`stream_writers=account_data+presence+receipts+to_device+typing"` -> `stream_writers`
"""
worker_index: int
"""
The index of the worker starting from 1 for each worker type requested.
ex.
`event_persister:2` -> `1` and `2`
`stream_writers=account_data+presence+receipts+to_device+typing"` -> `1`
"""
worker_types: set[str]
"""
ex.
`event_persister:2` -> `{"event_persister"}`
`stream_writers=account_data+presence+receipts+to_device+typing"` -> `{"account_data", "presence", "receipts", "to_device", "typing"}`
"""
"""
def parse_worker_types(
requested_worker_types: list[str],
) -> list[Worker]:
) -> dict[str, set[str]]:
"""Read the desired list of requested workers and prepare the data for use in
generating worker config files while also checking for potential gotchas.
@@ -737,7 +622,10 @@ def parse_worker_types(
requested_worker_types: The list formed from the split environment variable
containing the unprocessed requests for workers.
Returns: A list of requested workers
Returns: A dict of worker names to sets of worker types. Format:
{'worker_name':
{'worker_type', 'worker_type2'}
}
"""
# A counter of worker_base_name -> int. Used for determining the name for a given
# worker when generating its config file, as each worker's name is just
@@ -748,8 +636,8 @@ def parse_worker_types(
# more than a single worker for cases where multiples would be bad(e.g. presence).
worker_type_shard_counter: dict[str, int] = defaultdict(int)
# Map from worker name to `Worker`
worker_dict: dict[str, Worker] = {}
# The final result of all this processing
dict_to_return: dict[str, set[str]] = {}
# Handle any multipliers requested for given workers.
multiple_processed_worker_types = apply_requested_multiplier_for_worker(
@@ -835,29 +723,24 @@ def parse_worker_types(
if worker_number > 1:
# If this isn't the first worker, check that we don't have a confusing
# mixture of worker types with the same base name.
first_worker_with_base_name = worker_dict[f"{worker_base_name}1"]
if first_worker_with_base_name.worker_types != worker_types_set:
first_worker_with_base_name = dict_to_return[f"{worker_base_name}1"]
if first_worker_with_base_name != worker_types_set:
error(
f"Can not use worker_name: '{worker_name}' for worker_type(s): "
f"{worker_types_set!r}. It is already in use by "
f"worker_type(s): {first_worker_with_base_name.worker_types!r}"
f"worker_type(s): {first_worker_with_base_name!r}"
)
worker_dict[worker_name] = Worker(
worker_name=worker_name,
worker_base_name=worker_base_name,
worker_index=worker_number,
worker_types=worker_types_set,
)
dict_to_return[worker_name] = worker_types_set
return list(worker_dict.values())
return dict_to_return
def generate_worker_files(
environ: Mapping[str, str],
config_path: str,
data_dir: str,
requested_workers: list[Worker],
requested_worker_types: dict[str, set[str]],
) -> None:
"""Read the desired workers(if any) that is passed in and generate shared
homeserver, nginx and supervisord configs.
@@ -867,16 +750,14 @@ def generate_worker_files(
config_path: The location of the generated Synapse main worker config file.
data_dir: The location of the synapse data directory. Where log and
user-facing config files live.
requested_workers: A list of requested workers
requested_worker_types: A Dict containing requested workers in the format of
{'worker_name1': {'worker_type', ...}}
"""
# Note that yaml cares about indentation, so care should be taken to insert lines
# into files at the correct indentation below.
# Convenience helper for if using unix sockets instead of host:port
using_unix_sockets = environ.get("SYNAPSE_USE_UNIX_SOCKET", False)
enable_metrics = environ.get("SYNAPSE_ENABLE_METRICS", "0") == "1"
# First read the original config file and extract the listeners block. Then we'll
# add another listener for replication. Later we'll write out the result to the
# shared config file.
@@ -908,11 +789,7 @@ def generate_worker_files(
# base shared worker jinja2 template. This config file will be passed to all
# workers, included Synapse's main process. It is intended mainly for disabling
# functionality when certain workers are spun up, and adding a replication listener.
shared_config: dict[str, Any] = {
"listeners": listeners,
# Controls `enable_metrics: true`
"enable_metrics": enable_metrics,
}
shared_config: dict[str, Any] = {"listeners": listeners}
# List of dicts that describe workers.
# We pass this to the Supervisor template later to generate the appropriate
@@ -939,8 +816,6 @@ def generate_worker_files(
# Start worker ports from this arbitrary port
worker_port = 18009
# The main process metrics port is 19090, so start workers from 19091
worker_metrics_port = 19091
# A list of internal endpoints to healthcheck, starting with the main process
# which exists even if no workers do.
@@ -957,9 +832,7 @@ def generate_worker_files(
healthcheck_urls = ["http://localhost:8080/health"]
# Get the set of all worker types that we have configured
all_worker_types_in_use = set(
chain(*[worker.worker_types for worker in requested_workers])
)
all_worker_types_in_use = set(chain(*requested_worker_types.values()))
# Map locations to upstreams (corresponding to worker types) in Nginx
# but only if we use the appropriate worker type
for worker_type in all_worker_types_in_use:
@@ -968,13 +841,12 @@ def generate_worker_files(
# For each worker type specified by the user, create config values and write its
# YAML config file
worker_name_to_metrics_port_map: dict[str, int] = {}
for worker in requested_workers:
for worker_name, worker_types_set in requested_worker_types.items():
# The collected and processed data will live here.
worker_config: dict[str, Any] = {}
# Merge all worker config templates for this worker into a single config
for worker_type in worker.worker_types:
for worker_type in worker_types_set:
copy_of_template_config = WORKERS_CONFIG[worker_type].copy()
# Merge worker type template configuration data. It's a combination of lists
@@ -984,27 +856,16 @@ def generate_worker_files(
)
# Replace placeholder names in the config template with the actual worker name.
worker_config = insert_worker_name_for_worker_config(
worker_config, worker.worker_name
)
worker_config = insert_worker_name_for_worker_config(worker_config, worker_name)
worker_config.update(
{
"name": worker.worker_name,
"port": str(worker_port),
"config_path": config_path,
}
{"name": worker_name, "port": str(worker_port), "config_path": config_path}
)
# Keep the `shared_config` up to date with the `shared_extra_conf` from each
# worker.
shared_config = {
**worker_config["shared_extra_conf"],
# We combine `shared_config` second to avoid overwriting existing keys just
# for sanity's sake (always use the first worker).
**shared_config,
}
# Update the shared config with any worker_type specific options. The first of a
# given worker_type needs to stay assigned and not be replaced.
worker_config["shared_extra_conf"].update(shared_config)
shared_config = worker_config["shared_extra_conf"]
if using_unix_sockets:
healthcheck_urls.append(
f"--unix-socket /run/worker.{worker_port} http://localhost/health"
@@ -1016,51 +877,39 @@ def generate_worker_files(
# the `events` stream. For other workers, the worker name is the same
# name of the stream they write to, but for some reason it is not the
# case for event_persister.
if "event_persister" in worker.worker_types:
worker.worker_types.add("events")
if "event_persister" in worker_types_set:
worker_types_set.add("events")
# Update the shared config with sharding-related options if necessary
add_worker_roles_to_shared_config(
shared_config, worker.worker_types, worker.worker_name, worker_port
shared_config, worker_types_set, worker_name, worker_port
)
# Enable the worker in supervisord
worker_descriptors.append(worker_config)
# Write out the worker's logging config file
log_config_filepath = generate_worker_log_config(
environ, worker.worker_name, data_dir
)
worker_name_to_metrics_port_map[worker.worker_name] = worker_metrics_port
if enable_metrics:
# Enable prometheus metrics endpoint on this worker
worker_config["metrics_port"] = worker_metrics_port
if enable_metrics:
# Enable prometheus metrics endpoint on this worker
worker_config["metrics_port"] = worker_metrics_port
log_config_filepath = generate_worker_log_config(environ, worker_name, data_dir)
# Then a worker config file
convert(
"/conf/worker.yaml.j2",
f"/conf/workers/{worker.worker_name}.yaml",
f"/conf/workers/{worker_name}.yaml",
**worker_config,
worker_log_config_filepath=log_config_filepath,
using_unix_sockets=using_unix_sockets,
)
# Save this worker's port number to the correct nginx upstreams
for worker_type in worker.worker_types:
for worker_type in worker_types_set:
nginx_upstreams.setdefault(worker_type, set()).add(worker_port)
worker_port += 1
worker_metrics_port += 1
# Build the nginx location config blocks
nginx_location_config = ""
for endpoint, upstream in nginx_locations.items():
nginx_location_config += NGINX_LOCATION_REGEX_CONFIG_BLOCK.format(
nginx_location_config += NGINX_LOCATION_CONFIG_BLOCK.format(
endpoint=endpoint,
upstream=upstream,
)
@@ -1083,111 +932,6 @@ def generate_worker_files(
body=body,
)
# Provide a Prometheus metrics service discovery endpoint so that all of the
# workers can easily be picked up
nginx_prometheus_metrics_service_discovery = ""
if enable_metrics:
# Write JSON file for Prometheus service discovery pointing to all of the
# workers. We serve this file with nginx so people can use it with
# `http_sd_config` in their Prometheus config.
#
# > It fetches targets from an HTTP endpoint containing a list of zero or more
# > `<static_config>`s. The target must reply with an HTTP 200 response. The HTTP
# > header `Content-Type` must be `application/json`, and the body must be valid
# > JSON.
# >
# > *-- https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config*
#
# Another reference: https://prometheus.io/docs/prometheus/latest/http_sd/
prometheus_http_service_discovery_content = [
{
"targets": [NGINX_HOST_PLACEHOLDER],
"labels": {
# The downstream user should also configure `honor_labels: true` in
# their Prometheus config to prevent Prometheus from overwriting the
# `job` labels.
#
# > honor_labels controls how Prometheus handles conflicts between labels that are
# > already present in scraped data and labels that Prometheus would attach
# > server-side ("job" and "instance" labels, manually configured target
# > labels, and labels generated by service discovery implementations).
# >
# > *-- https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config*
#
# Reference:
# - https://prometheus.io/docs/concepts/jobs_instances/
# - https://prometheus.io/docs/prometheus/latest/configuration/configuration/#scrape_config
"job": worker.worker_base_name,
"index": f"{worker.worker_index}",
# This allows us to change the `metrics_path` on a per-target basis.
# We want to grab the metrics from our nginx proxied location (setup
# below).
#
# While there doesn't seem to be official docs on these special
# labels (`__metrics_path__`, `__scheme__`, `__scrape_interval__`,
# `__scrape_timeout__`), this discussion best summarizes how this
# works: https://github.com/prometheus/prometheus/discussions/13217
"__metrics_path__": f"/metrics/worker/{worker.worker_name}",
},
}
for worker in requested_workers
]
# Add the main Synapse process as well
prometheus_http_service_discovery_content.append(
{
"targets": [NGINX_HOST_PLACEHOLDER],
"labels": {
# We use `"synapse"` as the job name for the main process because it
# matches what we expect people to use from a monolith setup with
# their static scrape config. It's the `job` name used in our Grafana
# dashboard for the main process.
"job": "synapse",
"index": "1",
"__metrics_path__": "/metrics/worker/main",
},
}
)
# Check to make sure the file doesn't already exist
if os.path.isfile(PROMETHEUS_METRICS_SERVICE_DISCOVERY_FILE_PATH):
error(
f"Prometheus service discovery file "
f"'{PROMETHEUS_METRICS_SERVICE_DISCOVERY_FILE_PATH}' already exists (unexpected)! "
f"This is a problem because the existing file probably doesn't match the "
"Synapse workers we're setting up now."
)
# Write the file
with open(PROMETHEUS_METRICS_SERVICE_DISCOVERY_FILE_PATH, "w") as outfile:
outfile.write(
json.dumps(prometheus_http_service_discovery_content, indent=4)
)
# Proxy all of the Synapse metrics endpoints through a central place so that
# people only need to expose the single 9469 port and service discovery can take
# care of the rest: `/metrics/worker/<worker_name>` ->
# http://localhost:19090/_synapse/metrics
#
# Build the nginx location config blocks
metrics_proxy_locations = ""
for worker in requested_workers:
metrics_proxy_locations += NGINX_LOCATION_EXACT_CONFIG_BLOCK.format(
endpoint=f"/metrics/worker/{worker.worker_name}",
upstream=f"http://localhost:{worker_name_to_metrics_port_map[worker.worker_name]}/_synapse/metrics",
)
# Add the main Synapse process as well
metrics_proxy_locations += NGINX_LOCATION_EXACT_CONFIG_BLOCK.format(
endpoint="/metrics/worker/main",
upstream="http://localhost:19090/_synapse/metrics",
)
# Add a nginx server/location to serve the JSON file
nginx_prometheus_metrics_service_discovery = NGINX_PROMETHEUS_METRICS_SERVICE_DISCOVERY.format(
service_discovery_file_path=PROMETHEUS_METRICS_SERVICE_DISCOVERY_FILE_PATH,
host_placeholder=NGINX_HOST_PLACEHOLDER,
metrics_proxy_locations=metrics_proxy_locations,
)
# Finally, we'll write out the config files.
# log config for the master process
@@ -1205,7 +949,7 @@ def generate_worker_files(
if reg_path.suffix.lower() in (".yaml", ".yml")
]
workers_in_use = len(requested_workers) > 0
workers_in_use = len(requested_worker_types) > 0
# If there are workers, add the main process to the instance_map too.
if workers_in_use:
@@ -1240,7 +984,6 @@ def generate_worker_files(
tls_cert_path=os.environ.get("SYNAPSE_TLS_CERT"),
tls_key_path=os.environ.get("SYNAPSE_TLS_KEY"),
using_unix_sockets=using_unix_sockets,
nginx_prometheus_metrics_service_discovery=nginx_prometheus_metrics_service_discovery,
)
# Supervisord config
@@ -1341,20 +1084,15 @@ def main(args: list[str], environ: MutableMapping[str, str]) -> None:
if not worker_types_env:
# No workers, just the main process
worker_types = []
requested_workers: list[Worker] = []
requested_worker_types: dict[str, Any] = {}
else:
# Split type names by comma, ignoring whitespace.
worker_types = split_and_strip_string(worker_types_env, ",")
requested_workers = parse_worker_types(worker_types)
requested_worker_types = parse_worker_types(worker_types)
# Always regenerate all other config files
log("Generating worker config files")
generate_worker_files(
environ=environ,
config_path=config_path,
data_dir=data_dir,
requested_workers=requested_workers,
)
generate_worker_files(environ, config_path, data_dir, requested_worker_types)
# Mark workers as being configured
with open(mark_filepath, "w") as f:

View File

@@ -31,25 +31,6 @@ def flush_buffers() -> None:
sys.stderr.flush()
def strtobool(val: str) -> bool:
"""Convert a string representation of truth to True or False
True values are 'y', 'yes', 't', 'true', 'on', and '1'; false values
are 'n', 'no', 'f', 'false', 'off', and '0'. Raises ValueError if
'val' is anything else.
This is lifted from distutils.util.strtobool, with the exception that it actually
returns a bool, rather than an int.
"""
val = val.lower()
if val in ("y", "yes", "t", "true", "on", "1"):
return True
elif val in ("n", "no", "f", "false", "off", "0"):
return False
else:
raise ValueError("invalid truth value %r" % (val,))
def convert(src: str, dst: str, environ: Mapping[str, object]) -> None:
"""Generate a file from a template
@@ -117,16 +98,19 @@ def generate_config_from_template(
os.mkdir(config_dir)
# Convert SYNAPSE_NO_TLS to boolean if exists
tlsanswerstring = environ.get("SYNAPSE_NO_TLS")
if tlsanswerstring is not None:
try:
environ["SYNAPSE_NO_TLS"] = strtobool(tlsanswerstring)
except ValueError:
error(
'Environment variable "SYNAPSE_NO_TLS" found but value "'
+ tlsanswerstring
+ '" unrecognized; exiting.'
)
if "SYNAPSE_NO_TLS" in environ:
tlsanswerstring = str.lower(environ["SYNAPSE_NO_TLS"])
if tlsanswerstring in ("true", "on", "1", "yes"):
environ["SYNAPSE_NO_TLS"] = True
else:
if tlsanswerstring in ("false", "off", "0", "no"):
environ["SYNAPSE_NO_TLS"] = False
else:
error(
'Environment variable "SYNAPSE_NO_TLS" found but value "'
+ tlsanswerstring
+ '" unrecognized; exiting.'
)
if "SYNAPSE_LOG_CONFIG" not in environ:
environ["SYNAPSE_LOG_CONFIG"] = config_dir + "/log.config"
@@ -180,18 +164,6 @@ def run_generate_config(environ: Mapping[str, str], ownership: str | None) -> No
config_dir = environ.get("SYNAPSE_CONFIG_DIR", "/data")
config_path = environ.get("SYNAPSE_CONFIG_PATH", config_dir + "/homeserver.yaml")
data_dir = environ.get("SYNAPSE_DATA_DIR", "/data")
enable_metrics_raw = environ.get("SYNAPSE_ENABLE_METRICS", "0")
enable_metrics = False
if enable_metrics_raw is not None:
try:
enable_metrics = strtobool(enable_metrics_raw)
except ValueError:
error(
'Environment variable "SYNAPSE_ENABLE_METRICS" found but value "'
+ enable_metrics_raw
+ '" unrecognized; exiting.'
)
# create a suitable log config from our template
log_config_file = "%s/%s.log.config" % (config_dir, server_name)
@@ -218,9 +190,6 @@ def run_generate_config(environ: Mapping[str, str], ownership: str | None) -> No
"--open-private-ports",
]
if enable_metrics:
args.append("--enable-metrics")
if ownership is not None:
# make sure that synapse has perms to write to the data dir.
log(f"Setting ownership on {data_dir} to {ownership}")

View File

@@ -1,5 +0,0 @@
# Configuration for htmltest, which we run in CI to check that links aren't broken in the built documentation.
# See all config options: https://github.com/wjdp/htmltest#wrench-configuration
# Don't check external links, as that requires network access and is slow.
CheckExternal: false

View File

@@ -88,20 +88,6 @@ is quarantined, Synapse will:
- Quarantine any existing cached remote media.
- Quarantine any future remote media.
## Downloading quarantined media
Normally, when media is quarantined, it will return a 404 error when downloaded.
Admins can bypass this by adding `?admin_unsafely_bypass_quarantine=true`
to the [normal download URL](https://spec.matrix.org/v1.16/client-server-api/#get_matrixclientv1mediadownloadservernamemediaid).
Bypassing the quarantine check is not recommended. Media is typically quarantined
to prevent harmful content from being served to users, which includes admins. Only
set the bypass parameter if you intentionally want to access potentially harmful
content.
Non-admin users cannot bypass quarantine checks, even when specifying the above
query parameter.
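For illustration, a minimal sketch of an admin performing such a download with Python's `requests`; the homeserver URL, media ID, and access token below are hypothetical:

```python
import requests

HOMESERVER = "https://matrix.example.com"  # hypothetical
SERVER_NAME = "matrix.example.com"         # hypothetical origin of the media
MEDIA_ID = "abcdefghijkl"                  # hypothetical media ID
ADMIN_TOKEN = "syt_example_admin_token"    # hypothetical admin access token

resp = requests.get(
    f"{HOMESERVER}/_matrix/client/v1/media/download/{SERVER_NAME}/{MEDIA_ID}",
    params={"admin_unsafely_bypass_quarantine": "true"},
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
)
# Without the bypass parameter (or when requested by a non-admin), a
# quarantined file would instead yield a 404.
resp.raise_for_status()
```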
## Quarantining media by ID
This API quarantines a single piece of local or remote media.

View File

@@ -36,10 +36,9 @@ It returns a JSON body like the following:
- "scheduled" - Task is scheduled but not active
- "active" - Task is active and probably running, and if not will be run on next scheduler loop run
- "complete" - Task has completed successfully
- "cancelled" - Task has been cancelled
- "failed" - Task is over and either returned a failed status, or had an exception
* `max_timestamp`: int - Optional. Returns only the scheduled tasks with a timestamp (in milliseconds since the Unix epoch) earlier than the specified one.
* `max_timestamp`: int - Optional. Returns only the scheduled tasks with a timestamp earlier than the specified one.
**Response**

View File

@@ -505,55 +505,6 @@ with a body of:
}
```
## List room memberships of a user
Gets a list of room memberships for a specific `user_id`. This
endpoint differs from
[`GET /_synapse/admin/v1/users/<user_id>/joined_rooms`](#list-joined-rooms-of-a-user)
in that it returns rooms with memberships other than "join".
The API is:
```
GET /_synapse/admin/v1/users/<user_id>/memberships
```
A response body like the following is returned:
```json
{
"memberships": {
"!DuGcnbhHGaSZQoNQR:matrix.org": "join",
"!ZtSaPCawyWtxfWiIy:matrix.org": "leave",
}
}
```
which is a map of room membership states for the given user. This endpoint can
be used with both local and remote users, with the caveat that the homeserver will
only be aware of the memberships for rooms that one of its local users has joined.
Remote user memberships may also be out of date if all local users have since left
a room, as the homeserver will then no longer receive membership updates for it.
The list includes rooms that the user has since left; other membership states (knock,
invite, etc.) are also possible.
Note that rooms will only disappear from this list if they are
[purged](./rooms.md#delete-room-api) from the homeserver.
**Parameters**
The following parameters should be set in the URL:
- `user_id` - fully qualified: for example, `@user:server.com`.
**Response**
The following fields are returned in the JSON response body:
- `memberships` - A map of `room_id` (string) to `membership` state (string).
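As a hedged sketch of calling this endpoint with Python's `requests` (the homeserver URL and admin token are placeholders):

```python
import requests

resp = requests.get(
    "https://matrix.example.com/_synapse/admin/v1/users/@user:server.com/memberships",
    headers={"Authorization": "Bearer syt_example_admin_token"},  # hypothetical token
)
# The response body is {"memberships": {room_id: membership, ...}} as shown above.
for room_id, membership in resp.json()["memberships"].items():
    print(room_id, membership)
```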
## List joined rooms of a user
Gets a list of all `room_id`s that a specific `user_id` is joined to (i.e. is a participating member of).

View File

@@ -334,9 +334,46 @@ For more details about other configurations, see the [Docker-specific documentat
## Run the integration tests ([Complement](https://github.com/matrix-org/complement)).
See our [Complement docs](https://github.com/element-hq/synapse/tree/develop/complement)
for how to use the `./scripts-dev/complement.sh` test runner script.
[Complement](https://github.com/matrix-org/complement) is a suite of black box tests that can be run on any homeserver implementation; it can also be thought of as a suite of end-to-end (e2e) tests.
It's often nice to develop on Synapse and write Complement tests at the same time.
Here is how to run your local Synapse checkout against your local Complement checkout.
(Check out [`complement`](https://github.com/matrix-org/complement) alongside your `synapse` checkout.)
```sh
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh
```
To run a specific test file, you can pass the test name at the end of the command. The name passed comes from the naming structure in your Complement tests. If you're unsure of the name, you can do a full run and copy it from the test output:
```sh
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages
```
To run a specific test, you can specify the whole name structure:
```sh
COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages/parallel/Historical_events_resolve_in_the_correct_order
```
The above will run a monolithic (single-process) Synapse with SQLite as the database. For other configurations, try:
- Passing `POSTGRES=1` as an environment variable to use the Postgres database instead.
- Passing `WORKERS=1` as an environment variable to use a workerised setup instead. This option implies the use of Postgres.
- If setting `WORKERS=1`, optionally set `WORKER_TYPES=` to declare which worker
types you wish to test: a simple comma-delimited string containing the worker types
defined in the `WORKERS_CONFIG` template
[here](https://github.com/element-hq/synapse/blob/develop/docker/configure_workers_and_start.py#L54).
A safe example would be `WORKER_TYPES="federation_inbound, federation_sender, synchrotron"`.
See the [worker documentation](../workers.md) for additional information on workers.
- Passing `ASYNCIO_REACTOR=1` as an environment variable to use the Twisted asyncio reactor instead of the default one.
- Passing `PODMAN=1` will use the [podman](https://podman.io/) container runtime instead of docker.
- Passing `UNIX_SOCKETS=1` will utilise Unix socket functionality for Synapse, Redis, and Postgres (when applicable).
To increase the log level for the tests, set `SYNAPSE_TEST_LOG_LEVEL`, e.g.:
```sh
SYNAPSE_TEST_LOG_LEVEL=DEBUG COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -run TestImportHistoricalMessages
```
### Prettier formatting with `gotestfmt`
@@ -352,6 +389,18 @@ COMPLEMENT_DIR=../complement ./scripts-dev/complement.sh -json | gotestfmt -hide
(Remove `-hide successful-tests` if you don't want to hide successful tests.)
### Access database for homeserver after Complement test runs.
If you're curious what the database looks like after you run some tests, here are some steps to get you going in Synapse:
1. In your Complement test comment out `defer deployment.Destroy(t)` and replace with `defer time.Sleep(2 * time.Hour)` to keep the homeserver running after the tests complete
1. Start the Complement tests
1. Find the name of the container, `docker ps -f name=complement_` (this will filter for just the Complement-related Docker containers)
1. Access the container replacing the name with what you found in the previous step: `docker exec -it complement_1_hs_with_application_service.hs1_2 /bin/bash`
1. Install sqlite (database driver), `apt-get update && apt-get install -y sqlite3`
1. Then run `sqlite3` and open the database with `.open /conf/homeserver.db` (this database path comes from the Synapse `homeserver.yaml`); alternatively, use Python's `sqlite3` module as sketched below
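As an alternative to the `sqlite3` CLI, a minimal Python sketch run inside the container (the database path comes from `homeserver.yaml`):

```python
import sqlite3

conn = sqlite3.connect("/conf/homeserver.db")
# List a few tables to confirm the database opened correctly.
for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name LIMIT 10"
):
    print(name)
```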
# 9. Submit your patch.
Once you're happy with your patch, it's time to prepare a Pull Request.

View File

@@ -32,33 +32,6 @@ expected and not an issue.
It is not a requirement for experimental features to be behind a configuration flag,
but one should be used if unsure.
New experimental configuration flags should be added under the `experimental_features`
New experimental configuration flags should be added under the `experimental`
configuration key (see the `synapse.config.experimental` file) and either explain
(briefly) what is being enabled, or include the MSC number.
The configuration flag should link to the tracking issue for the experimental feature (see below).
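For illustration only, a hypothetical flag following this pattern; the MSC number and issue link below are placeholders, not real:

```python
from synapse.config._base import Config


class ExperimentalConfig(Config):
    section = "experimental"

    def read_config(self, config, **kwargs):
        experimental = config.get("experimental_features") or {}

        # MSC0000: hypothetical example feature.
        # Tracking issue: https://github.com/element-hq/synapse/issues/00000
        self.msc0000_enabled: bool = experimental.get("msc0000_enabled", False)
```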
## Tracking issues for experimental features
In the interest of having some documentation around experimental features, without
polluting the stable documentation, all new experimental features should have a tracking issue with
[the `T-ExperimentalFeature` label](https://github.com/element-hq/synapse/issues?q=sort%3Aupdated-desc+state%3Aopen+label%3A%22T-ExperimentalFeature%22),
kept open as long as the experimental feature is present in Synapse.
The configuration option for the feature should have a comment linking to the tracking issue,
for ease of discoverability.
As a guideline, the issue should contain:
- Context for why this experimental feature is in Synapse
- This could well be a link to somewhere else, where this context is already available.
- If applicable, why the feature is enabled by default. (Why do we need to enable it by default and why is it safe?)
- If applicable, setup instructions for any non-standard components or configuration needed by the feature.
(Ideally this will be moved to the configuration manual after stabilisation.)
- Design decisions behind the Synapse implementation.
(Ideally this will be moved to the developers' documentation after stabilisation.)
- Any caveats around the current implementation of the feature, such as:
- missing aspects
- breakage or incompatibility that is expected if/when the feature is stabilised,
or when the feature is turned on/off
- Criteria for how we know whether we can remove the feature in the future.

View File

@@ -123,21 +123,193 @@ Example Prometheus target for Synapse with workers:
static_configs:
- targets: ["my.server.here:port"]
labels:
instance: "my.server"
job: "master"
index: 1
- targets: ["my.workerserver.here:port"]
labels:
instance: "my.server"
job: "generic_worker"
index: 1
- targets: ["my.workerserver.here:port"]
labels:
instance: "my.server"
job: "generic_worker"
index: 2
- targets: ["my.workerserver.here:port"]
labels:
instance: "my.server"
job: "media_repository"
index: 1
```
Labels (`job`, `index`) can be defined as anything.
Labels (`instance`, `job`, `index`) can be defined as anything.
The labels are used to group graphs in grafana.
## Renaming of metrics & deprecation of old names in 1.2
Synapse 1.2 updates the Prometheus metrics to match the naming
convention of the upstream `prometheus_client`. The old names are
considered deprecated and will be removed in a future version of
Synapse.
**The old names will be disabled by default in Synapse v1.71.0 and removed
altogether in Synapse v1.73.0.**
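As background, a minimal sketch (not Synapse's actual code) of why counter names gained a `_total` suffix: the upstream `prometheus_client` appends it when exposing counters.

```python
from prometheus_client import Counter

# Exposed to Prometheus as `example_sent_transactions_total`: the client
# library appends `_total` to counter names, which is why e.g.
# `synapse_federation_client_sent_transactions` gained that suffix.
sent_transactions = Counter(
    "example_sent_transactions",  # hypothetical metric name
    "Number of transactions sent",
)
sent_transactions.inc()
```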
| New Name | Old Name |
| ---------------------------------------------------------------------------- | ---------------------------------------------------------------------- |
| python_gc_objects_collected_total | python_gc_objects_collected |
| python_gc_objects_uncollectable_total | python_gc_objects_uncollectable |
| python_gc_collections_total | python_gc_collections |
| process_cpu_seconds_total | process_cpu_seconds |
| synapse_federation_client_sent_transactions_total | synapse_federation_client_sent_transactions |
| synapse_federation_client_events_processed_total | synapse_federation_client_events_processed |
| synapse_event_processing_loop_count_total | synapse_event_processing_loop_count |
| synapse_event_processing_loop_room_count_total | synapse_event_processing_loop_room_count |
| synapse_util_caches_cache_hits | synapse_util_caches_cache:hits |
| synapse_util_caches_cache_size | synapse_util_caches_cache:size |
| synapse_util_caches_cache_evicted_size | synapse_util_caches_cache:evicted_size |
| synapse_util_caches_cache | synapse_util_caches_cache:total |
| synapse_util_caches_response_cache_size | synapse_util_caches_response_cache:size |
| synapse_util_caches_response_cache_hits | synapse_util_caches_response_cache:hits |
| synapse_util_caches_response_cache_evicted_size | synapse_util_caches_response_cache:evicted_size |
| synapse_util_metrics_block_count_total | synapse_util_metrics_block_count |
| synapse_util_metrics_block_time_seconds_total | synapse_util_metrics_block_time_seconds |
| synapse_util_metrics_block_ru_utime_seconds_total | synapse_util_metrics_block_ru_utime_seconds |
| synapse_util_metrics_block_ru_stime_seconds_total | synapse_util_metrics_block_ru_stime_seconds |
| synapse_util_metrics_block_db_txn_count_total | synapse_util_metrics_block_db_txn_count |
| synapse_util_metrics_block_db_txn_duration_seconds_total | synapse_util_metrics_block_db_txn_duration_seconds |
| synapse_util_metrics_block_db_sched_duration_seconds_total | synapse_util_metrics_block_db_sched_duration_seconds |
| synapse_background_process_start_count_total | synapse_background_process_start_count |
| synapse_background_process_ru_utime_seconds_total | synapse_background_process_ru_utime_seconds |
| synapse_background_process_ru_stime_seconds_total | synapse_background_process_ru_stime_seconds |
| synapse_background_process_db_txn_count_total | synapse_background_process_db_txn_count |
| synapse_background_process_db_txn_duration_seconds_total | synapse_background_process_db_txn_duration_seconds |
| synapse_background_process_db_sched_duration_seconds_total | synapse_background_process_db_sched_duration_seconds |
| synapse_storage_events_persisted_events_total | synapse_storage_events_persisted_events |
| synapse_storage_events_persisted_events_sep_total | synapse_storage_events_persisted_events_sep |
| synapse_storage_events_state_delta_total | synapse_storage_events_state_delta |
| synapse_storage_events_state_delta_single_event_total | synapse_storage_events_state_delta_single_event |
| synapse_storage_events_state_delta_reuse_delta_total | synapse_storage_events_state_delta_reuse_delta |
| synapse_federation_server_received_pdus_total | synapse_federation_server_received_pdus |
| synapse_federation_server_received_edus_total | synapse_federation_server_received_edus |
| synapse_handler_presence_notified_presence_total | synapse_handler_presence_notified_presence |
| synapse_handler_presence_federation_presence_out_total | synapse_handler_presence_federation_presence_out |
| synapse_handler_presence_presence_updates_total | synapse_handler_presence_presence_updates |
| synapse_handler_presence_timers_fired_total | synapse_handler_presence_timers_fired |
| synapse_handler_presence_federation_presence_total | synapse_handler_presence_federation_presence |
| synapse_handler_presence_bump_active_time_total | synapse_handler_presence_bump_active_time |
| synapse_federation_client_sent_edus_total | synapse_federation_client_sent_edus |
| synapse_federation_client_sent_pdu_destinations_count_total | synapse_federation_client_sent_pdu_destinations:count |
| synapse_federation_client_sent_pdu_destinations_total | synapse_federation_client_sent_pdu_destinations:total |
| synapse_handlers_appservice_events_processed_total | synapse_handlers_appservice_events_processed |
| synapse_notifier_notified_events_total | synapse_notifier_notified_events |
| synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter |
| synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total | synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter |
| synapse_http_httppusher_http_pushes_processed_total | synapse_http_httppusher_http_pushes_processed |
| synapse_http_httppusher_http_pushes_failed_total | synapse_http_httppusher_http_pushes_failed |
| synapse_http_httppusher_badge_updates_processed_total | synapse_http_httppusher_badge_updates_processed |
| synapse_http_httppusher_badge_updates_failed_total | synapse_http_httppusher_badge_updates_failed |
| synapse_admin_mau_current | synapse_admin_mau:current |
| synapse_admin_mau_max | synapse_admin_mau:max |
| synapse_admin_mau_registered_reserved_users | synapse_admin_mau:registered_reserved_users |
Removal of deprecated metrics & time based counters becoming histograms in 0.31.0
---------------------------------------------------------------------------------
The duplicated metrics deprecated in Synapse 0.27.0 have been removed.
All time duration-based metrics have been changed to be reported in seconds.
This affects:
| msec -> sec metrics |
| -------------------------------------- |
| python_gc_time |
| python_twisted_reactor_tick_time |
| synapse_storage_query_time |
| synapse_storage_schedule_time |
| synapse_storage_transaction_time |
Several metrics have been changed to be histograms, which sort entries
into buckets and allow better analysis. The following metrics are now
histograms:
| Altered metrics |
| ------------------------------------------------ |
| python_gc_time |
| python_twisted_reactor_pending_calls |
| python_twisted_reactor_tick_time |
| synapse_http_server_response_time_seconds |
| synapse_storage_query_time |
| synapse_storage_schedule_time |
| synapse_storage_transaction_time |
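To illustrate the histogram change above, a minimal sketch (not Synapse's actual code) of a duration metric recorded in seconds with explicit buckets:

```python
import time

from prometheus_client import Histogram

query_time = Histogram(
    "example_storage_query_time",  # hypothetical metric name
    "Time taken to run a database query, in seconds",
    buckets=(0.001, 0.01, 0.1, 1.0, 10.0),
)

start = time.monotonic()
# ... run the query here ...
query_time.observe(time.monotonic() - start)  # each observation lands in a bucket
```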
Block and response metrics renamed for 0.27.0
---------------------------------------------
Synapse 0.27.0 begins the process of rationalising the duplicate
`*:count` metrics reported for the resource tracking of code blocks and
HTTP requests.
At the same time, the corresponding `*:total` metrics are being renamed,
as the `:total` suffix no longer makes sense in the absence of a
corresponding `:count` metric.
To enable a graceful migration path, this release just adds new names
for the metrics being renamed. A future release will remove the old
ones.
The following table shows the new metrics, and the old metrics which
they are replacing.
| New name | Old name |
| ------------------------------------------------------------- | ---------------------------------------------------------- |
| synapse_util_metrics_block_count | synapse_util_metrics_block_timer:count |
| synapse_util_metrics_block_count | synapse_util_metrics_block_ru_utime:count |
| synapse_util_metrics_block_count | synapse_util_metrics_block_ru_stime:count |
| synapse_util_metrics_block_count | synapse_util_metrics_block_db_txn_count:count |
| synapse_util_metrics_block_count | synapse_util_metrics_block_db_txn_duration:count |
| synapse_util_metrics_block_time_seconds | synapse_util_metrics_block_timer:total |
| synapse_util_metrics_block_ru_utime_seconds | synapse_util_metrics_block_ru_utime:total |
| synapse_util_metrics_block_ru_stime_seconds | synapse_util_metrics_block_ru_stime:total |
| synapse_util_metrics_block_db_txn_count | synapse_util_metrics_block_db_txn_count:total |
| synapse_util_metrics_block_db_txn_duration_seconds | synapse_util_metrics_block_db_txn_duration:total |
| synapse_http_server_response_count | synapse_http_server_requests |
| synapse_http_server_response_count | synapse_http_server_response_time:count |
| synapse_http_server_response_count | synapse_http_server_response_ru_utime:count |
| synapse_http_server_response_count | synapse_http_server_response_ru_stime:count |
| synapse_http_server_response_count | synapse_http_server_response_db_txn_count:count |
| synapse_http_server_response_count | synapse_http_server_response_db_txn_duration:count |
| synapse_http_server_response_time_seconds | synapse_http_server_response_time:total |
| synapse_http_server_response_ru_utime_seconds | synapse_http_server_response_ru_utime:total |
| synapse_http_server_response_ru_stime_seconds | synapse_http_server_response_ru_stime:total |
| synapse_http_server_response_db_txn_count | synapse_http_server_response_db_txn_count:total |
| synapse_http_server_response_db_txn_duration_seconds | synapse_http_server_response_db_txn_duration:total |
Standard Metric Names
---------------------
As of Synapse version 0.18.2, the format of the process-wide metrics has
been changed to fit Prometheus standard naming conventions. Additionally,
the units have been changed from milliseconds to seconds.
| New name | Old name |
| ---------------------------------------- | --------------------------------- |
| process_cpu_user_seconds_total | process_resource_utime / 1000 |
| process_cpu_system_seconds_total | process_resource_stime / 1000 |
| process_open_fds (no 'type' label) | process_fds |
The python-specific counts of garbage collector performance have been
renamed.
| New name | Old name |
| -------------------------------- | -------------------------- |
| python_gc_time | reactor_gc_time |
| python_gc_unreachable_total | reactor_gc_unreachable |
| python_gc_counts | reactor_gc_counts |
The twisted-specific reactor metrics have been renamed.
| New name | Old name |
| -------------------------------------- | ----------------------- |
| python_twisted_reactor_pending_calls | reactor_pending_calls |
| python_twisted_reactor_tick_time | reactor_tick_time |

View File

@@ -50,11 +50,6 @@ setting in your configuration file.
See the [configuration manual](usage/configuration/config_documentation.md#oidc_providers) for some sample settings, as well as
the text below for example configurations for specific providers.
For setups using [`.well-known` delegation](delegate.md), make sure
[`public_baseurl`](usage/configuration/config_documentation.md#public_baseurl) is set
appropriately. If unset, Synapse defaults to `https://<server_name>/` which is used in
the OIDC callback URL.
## OIDC Back-Channel Logout
Synapse supports receiving [OpenID Connect Back-Channel Logout](https://openid.net/specs/openid-connect-backchannel-1_0.html) notifications.

View File

@@ -24,18 +24,14 @@
server_name: "SERVERNAME"
pid_file: DATADIR/homeserver.pid
listeners:
- bind_addresses:
- ::1
- 127.0.0.1
port: 8008
resources:
- compress: false
names:
- client
- federation
- port: 8008
tls: false
type: http
x_forwarded: true
bind_addresses: ['::1', '127.0.0.1']
resources:
- names: [client, federation]
compress: false
database:
name: sqlite3
args:

View File

@@ -117,22 +117,6 @@ each upgrade are complete before moving on to the next upgrade, to avoid
stacking them up. You can monitor the currently running background updates with
[the Admin API](usage/administration/admin_api/background_updates.html#status).
# Upgrading to v1.146.0
## Drop support for Ubuntu 25.04 Plucky Puffin, and add support for 25.10 Questing Quokka
Ubuntu 25.04 Plucky Puffin [is end-of-life as of 17 Jan
2026](https://endoflife.date/ubuntu). This release drops support for Ubuntu
25.04, and in its place adds support for Ubuntu 25.10 Questing Quokka.
## Removal of MSC2697 (Legacy) Dehydrated devices
The endpoints for
[MSC2697](https://github.com/matrix-org/matrix-spec-proposals/pull/2697) have now
been removed, since the MSC is closed. Developers who rely on this feature should
migrate to [MSC3814](https://github.com/matrix-org/matrix-spec-proposals/pull/3814)
which introduces support for a newer version of dehydrated devices.
# Upgrading to v1.144.0
## Worker support for unstable MSC4140 `/restart` endpoint
@@ -844,7 +828,7 @@ the names of Prometheus metrics.
If you want to test your changes before legacy names are disabled by default,
you may specify `enable_legacy_metrics: false` in your homeserver configuration.
A list of affected metrics is available on the [Metrics How-to page](https://element-hq.github.io/synapse/v1.69/metrics-howto.html#renaming-of-metrics--deprecation-of-old-names-in-12).
A list of affected metrics is available on the [Metrics How-to page](https://element-hq.github.io/synapse/v1.69/metrics-howto.html?highlight=metrics%20deprecated#renaming-of-metrics--deprecation-of-old-names-in-12).
## Deprecation of the `generate_short_term_login_token` module API method
@@ -2439,7 +2423,7 @@ back to v1.3.1, subject to the following:
Some counter metrics have been renamed, with the old names deprecated.
See [the metrics
documentation](https://element-hq.github.io/synapse/v1.69/metrics-howto.html#renaming-of-metrics--deprecation-of-old-names-in-12)
documentation](metrics-howto.md#renaming-of-metrics--deprecation-of-old-names-in-12)
for details.
# Upgrading to v1.1.0

View File

@@ -956,7 +956,7 @@ server_context: context
---
### `limit_remote_rooms`
*(object)* When this option is enabled, the room "complexity" will be checked before a user joins a new remote room. If it is above the complexity limit, the server will disallow joining, or will instantly leave. This is useful for homeservers that are resource-constrained. In Synapse, the complexity of a room is measured by the number of current state events in a room, divided by 500. "Current" here means the latest state, i.e. if a user joins, then leaves, then joins, that will count as 1 current `m.room.member` state event.
*(object)* When this option is enabled, the room "complexity" will be checked before a user joins a new remote room. If it is above the complexity limit, the server will disallow joining, or will instantly leave. This is useful for homeservers that are resource-constrained. Room complexity is an arbitrary measure based on factors such as the number of users in the room.
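A tiny sketch of the state-event-based measure described in the first description above (current state events divided by 500; purely illustrative):

```python
def room_complexity(num_current_state_events: int) -> float:
    """Complexity as described above: current state events divided by 500."""
    return num_current_state_events / 500

# A room with 25,000 current state events has a complexity of 50.0.
assert room_complexity(25_000) == 50.0
```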
This setting has the following sub-options:
@@ -2041,25 +2041,6 @@ rc_room_creation:
burst_count: 5.0
```
---
### `rc_user_directory`
*(object)* This option allows admins to ratelimit searches in the user directory.
_Added in Synapse 1.145.0._
This setting has the following sub-options:
* `per_second` (number): Maximum number of requests a client can send per second.
* `burst_count` (number): Maximum number of requests a client can send before being throttled.
Default configuration:
```yaml
rc_user_directory:
per_second: 0.016
burst_count: 200.0
```
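To illustrate how `per_second` and `burst_count` interact, a token-bucket sketch (illustrative only, not Synapse's actual ratelimiter implementation):

```python
import time


class TokenBucket:
    def __init__(self, per_second: float, burst_count: float) -> None:
        self.rate = per_second
        self.capacity = burst_count
        self.tokens = burst_count
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False


# With per_second=0.016 and burst_count=200, a client can make up to 200
# requests in quick succession, then roughly one request per minute.
limiter = TokenBucket(per_second=0.016, burst_count=200.0)
```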
---
### `federation_rr_transactions_per_room_per_second`
*(integer)* Sets outgoing federation transaction frequency for sending read-receipts, per-room.
@@ -2111,16 +2092,6 @@ Example configuration:
enable_media_repo: false
```
---
### `enable_local_media_storage`
*(boolean)* Enable the local on-disk media storage provider. When disabled, media is stored only in configured `media_storage_providers` and temporary files are used for processing.
**Warning:** If this option is set to `false` and no `media_storage_providers` are configured, all media requests will return 404 errors as there will be no storage backend available. Defaults to `true`.
Example configuration:
```yaml
enable_local_media_storage: false
```
---
### `media_store_path`
*(string)* Directory where uploaded images and attachments are stored. Defaults to `"media_store"`.
@@ -4484,7 +4455,7 @@ stream_writers:
---
### `outbound_federation_restricted_to`
*(array)* You can restrict outbound federation traffic to only go through a specific subset of workers including the [Secure Border Gateway (SBG)](https://element.io/en/server-suite/secure-border-gateways). Any worker specified here (including the SBG) must also be in the [`instance_map`](#instance_map). [`worker_replication_secret`](#worker_replication_secret) must also be configured to authorize inter-worker communication.
*(array)* When using workers, you can restrict outbound federation traffic to only go through a specific subset of workers. Any worker specified here must also be in the [`instance_map`](#instance_map). [`worker_replication_secret`](#worker_replication_secret) must also be configured to authorize inter-worker communication.
Also see the [worker documentation](../../workers.md#restrict-outbound-federation-traffic-to-a-specific-set-of-workers) for more info.

View File

@@ -9,18 +9,27 @@ point to additional JS/CSS in this directory that are added on each page load. I
addition, the `theme` directory contains files that overwrite their counterparts in
each of the default themes included with mdbook.
Currently we use these files to make a few modifications:
Currently we use these files to generate a floating Table of Contents panel. The code for
this was partially taken from
[JorelAli/mdBook-pagetoc](https://github.com/JorelAli/mdBook-pagetoc/)
and then modified so that it scrolls with the content of the page. This is handled
by the `table-of-contents.js/css` files. The table of contents panel only appears on pages
that have more than one header, and only on desktop-sized monitors.
* We stylise the chapter titles in the left sidebar by indenting them
slightly so that they are more visually distinguishable from the section headers
(the bold titles). This is done through the `indent-section-headers.css` file.
We remove the navigation arrows which typically appear on the left and right side of the
screen on desktop as they interfere with the table of contents. This is handled by
the `remove-nav-buttons.css` file.
* We add a version picker pertaining to the different documentation versions
shipped with each version of Synapse. This functionality was implemented through
the `version-picker.js` and `version-picker.css` files, and is currently the only
requirement for the custom `theme/`.
Finally, we also stylise the chapter titles in the left sidebar by indenting them
slightly so that they are more visually distinguishable from the section headers
(the bold titles). This is done through the `indent-section-headers.css` file.
In addition to these modifications, we have added a version picker to the documentation,
letting users switch between the documentation for different versions of Synapse.
This functionality was implemented through the `version-picker.js` and
`version-picker.css` files.
More information can be found in mdbook's official documentation for
[injecting page JS/CSS](https://rust-lang.github.io/mdBook/format/config.html)
and
[customising the default themes](https://rust-lang.github.io/mdBook/format/theme/index.html).
[customising the default themes](https://rust-lang.github.io/mdBook/format/theme/index.html).

View File

@@ -0,0 +1,8 @@
/* Remove the prev, next chapter buttons as they interfere with the
* table of contents.
* Note that the table of contents only appears on desktop, thus we
* only remove the desktop (wide) chapter buttons.
*/
.nav-wide-wrapper {
display: none
}

View File

@@ -0,0 +1,47 @@
:root {
--pagetoc-width: 250px;
}
@media only screen and (max-width:1439px) {
.sidetoc {
display: none;
}
}
@media only screen and (min-width:1440px) {
main {
position: relative;
margin-left: 100px !important;
margin-right: var(--pagetoc-width) !important;
}
.sidetoc {
margin-left: auto;
margin-right: auto;
left: calc(100% + (var(--content-max-width))/4 - 140px);
position: absolute;
text-align: right;
}
.pagetoc {
position: fixed;
width: var(--pagetoc-width);
overflow: auto;
right: 20px;
height: calc(100% - var(--menu-bar-height));
}
.pagetoc a {
color: var(--fg) !important;
display: block;
padding: 5px 15px 5px 10px;
text-align: left;
text-decoration: none;
}
.pagetoc a:hover,
.pagetoc a.active {
background: var(--sidebar-bg) !important;
color: var(--sidebar-fg) !important;
}
.pagetoc .active {
background: var(--sidebar-bg);
color: var(--sidebar-fg);
}
}

View File

@@ -0,0 +1,148 @@
const getPageToc = () => document.getElementsByClassName('pagetoc')[0];
const pageToc = getPageToc();
const pageTocChildren = [...pageToc.children];
const headers = [...document.getElementsByClassName('header')];
// Select highlighted item in ToC when clicking an item
pageTocChildren.forEach(child => {
child.addEventListener('click', () => {
pageTocChildren.forEach(child => {
child.classList.remove('active');
});
child.classList.add('active');
});
});
/**
* Test whether a node is in the viewport
*/
function isInViewport(node) {
const rect = node.getBoundingClientRect();
return (
rect.top >= 0 &&
rect.left >= 0 &&
rect.bottom <= (window.innerHeight || document.documentElement.clientHeight) &&
rect.right <= (window.innerWidth || document.documentElement.clientWidth)
);
}
/**
* Set a new ToC entry.
* Clear any previously highlighted ToC items, set the new one,
* and adjust the ToC scroll position.
*/
function setTocEntry() {
let activeEntry;
const pageTocChildren = [...getPageToc().children];
// Calculate which header is the current one at the top of screen
headers.forEach(header => {
if (window.pageYOffset >= header.offsetTop) {
activeEntry = header;
}
});
// If we are above the first header, there is nothing to highlight
if (!activeEntry) {
return;
}
// Update selected item in ToC when scrolling
pageTocChildren.forEach(child => {
if (activeEntry.href.localeCompare(child.href) === 0) {
child.classList.add('active');
} else {
child.classList.remove('active');
}
});
let tocEntryForLocation = document.querySelector(`nav a[href="${activeEntry.href}"]`);
if (tocEntryForLocation) {
const headingForLocation = document.querySelector(activeEntry.hash);
if (headingForLocation && isInViewport(headingForLocation)) {
// Update ToC scroll
const nav = getPageToc();
const content = document.querySelector('html');
if (content.scrollTop !== 0) {
nav.scrollTo({
top: tocEntryForLocation.offsetTop - 100,
left: 0,
behavior: 'smooth',
});
} else {
nav.scrollTop = 0;
}
}
}
}
/**
* Populate sidebar on load
*/
window.addEventListener('load', () => {
// Prevent rendering the table of contents of the "print book" page, as it
// will end up being rendered into the output (in a broken-looking way)
// Get the name of the current page (i.e. 'print.html')
const pageNameExtension = window.location.pathname.split('/').pop();
// Split off the extension (as '.../print' is also a valid page name), which
// should result in 'print'
const pageName = pageNameExtension.split('.')[0];
if (pageName === "print") {
// Don't render the table of contents on this page
return;
}
// Only create table of contents if there is more than one header on the page
if (headers.length <= 1) {
return;
}
// Create an entry in the page table of contents for each header in the document
headers.forEach((header, index) => {
const link = document.createElement('a');
// Indent shows hierarchy
let indent = '0px';
switch (header.parentElement.tagName) {
case 'H1':
indent = '5px';
break;
case 'H2':
indent = '20px';
break;
case 'H3':
indent = '30px';
break;
case 'H4':
indent = '40px';
break;
case 'H5':
indent = '50px';
break;
case 'H6':
indent = '60px';
break;
default:
break;
}
let tocEntry;
if (index == 0) {
// Create a bolded title for the first element
tocEntry = document.createElement("strong");
tocEntry.innerHTML = header.text;
} else {
// All other elements are non-bold
tocEntry = document.createTextNode(header.text);
}
link.appendChild(tocEntry);
link.style.paddingLeft = indent;
link.href = header.href;
pageToc.appendChild(link);
});
setTocEntry.call();
});
// Handle active headers on scroll, if there is more than one header on the page
if (headers.length > 1) {
window.addEventListener('scroll', setTocEntry);
}

View File

@@ -1,11 +1,11 @@
<!DOCTYPE HTML>
<html lang="{{ language }}" class="{{ default_theme }} sidebar-visible" dir="{{ text_direction }}">
<html lang="{{ language }}" class="sidebar-visible no-js {{ default_theme }}">
<head>
<!-- Book generated using mdBook -->
<meta charset="UTF-8">
<title>{{ title }}</title>
{{#if is_print }}
<meta name="robots" content="noindex">
<meta name="robots" content="noindex" />
{{/if}}
{{#if base_url}}
<base href="{{ base_url }}">
@@ -15,78 +15,60 @@
<!-- Custom HTML head -->
{{> head}}
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
<meta name="description" content="{{ description }}">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="theme-color" content="#ffffff">
<meta name="theme-color" content="#ffffff" />
{{#if favicon_svg}}
<link rel="icon" href="{{ resource "favicon.svg" }}">
<link rel="icon" href="{{ path_to_root }}favicon.svg">
{{/if}}
{{#if favicon_png}}
<link rel="shortcut icon" href="{{ resource "favicon.png" }}">
<link rel="shortcut icon" href="{{ path_to_root }}favicon.png">
{{/if}}
<link rel="stylesheet" href="{{ resource "css/variables.css" }}">
<link rel="stylesheet" href="{{ resource "css/general.css" }}">
<link rel="stylesheet" href="{{ resource "css/chrome.css" }}">
<link rel="stylesheet" href="{{ path_to_root }}css/variables.css">
<link rel="stylesheet" href="{{ path_to_root }}css/general.css">
<link rel="stylesheet" href="{{ path_to_root }}css/chrome.css">
{{#if print_enable}}
<link rel="stylesheet" href="{{ resource "css/print.css" }}" media="print">
<link rel="stylesheet" href="{{ path_to_root }}css/print.css" media="print">
{{/if}}
<!-- Fonts -->
<link rel="stylesheet" href="{{ resource "fonts/fonts.css" }}">
<link rel="stylesheet" href="{{ path_to_root }}FontAwesome/css/font-awesome.css">
{{#if copy_fonts}}
<link rel="stylesheet" href="{{ path_to_root }}fonts/fonts.css">
{{/if}}
<!-- Highlight.js Stylesheets -->
<link rel="stylesheet" id="mdbook-highlight-css" href="{{ resource "highlight.css" }}">
<link rel="stylesheet" id="mdbook-tomorrow-night-css" href="{{ resource "tomorrow-night.css" }}">
<link rel="stylesheet" id="mdbook-ayu-highlight-css" href="{{ resource "ayu-highlight.css" }}">
<link rel="stylesheet" href="{{ path_to_root }}highlight.css">
<link rel="stylesheet" href="{{ path_to_root }}tomorrow-night.css">
<link rel="stylesheet" href="{{ path_to_root }}ayu-highlight.css">
<!-- Custom theme stylesheets -->
{{#each additional_css}}
<link rel="stylesheet" href="{{ resource this }}">
<link rel="stylesheet" href="{{ ../path_to_root }}{{ this }}">
{{/each}}
{{#if mathjax_support}}
<!-- MathJax -->
<script async src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
<script async type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.1/MathJax.js?config=TeX-AMS-MML_HTMLorMML"></script>
{{/if}}
<!-- Provide site root and default themes to javascript -->
<script>
const path_to_root = "{{ path_to_root }}";
const default_light_theme = "{{ default_theme }}";
const default_dark_theme = "{{ preferred_dark_theme }}";
{{#if search_js}}
window.path_to_searchindex_js = "{{ resource "searchindex.js" }}";
{{/if}}
</script>
<!-- Start loading toc.js asap -->
<script src="{{ resource "toc.js" }}"></script>
</head>
<body>
<div id="mdbook-help-container">
<div id="mdbook-help-popup">
<h2 class="mdbook-help-title">Keyboard shortcuts</h2>
<div>
<p>Press <kbd>←</kbd> or <kbd>→</kbd> to navigate between chapters</p>
{{#if search_enabled}}
<p>Press <kbd>S</kbd> or <kbd>/</kbd> to search in the book</p>
{{/if}}
<p>Press <kbd>?</kbd> to show this help</p>
<p>Press <kbd>Esc</kbd> to hide this help</p>
</div>
</div>
</div>
<div id="mdbook-body-container">
<!-- Work around some values being stored in localStorage wrapped in quotes -->
<script>
try {
let theme = localStorage.getItem('mdbook-theme');
let sidebar = localStorage.getItem('mdbook-sidebar');
<!-- Provide site root to javascript -->
<script type="text/javascript">
var path_to_root = "{{ path_to_root }}";
var default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? "{{ preferred_dark_theme }}" : "{{ default_theme }}";
</script>
<!-- Work around some values being stored in localStorage wrapped in quotes -->
<script type="text/javascript">
try {
var theme = localStorage.getItem('mdbook-theme');
var sidebar = localStorage.getItem('mdbook-sidebar');
if (theme.startsWith('"') && theme.endsWith('"')) {
localStorage.setItem('mdbook-theme', theme.slice(1, theme.length - 1));
}
if (sidebar.startsWith('"') && sidebar.endsWith('"')) {
localStorage.setItem('mdbook-sidebar', sidebar.slice(1, sidebar.length - 1));
}
@@ -94,107 +76,91 @@
</script>
<!-- Set the theme before any content is loaded, prevents flash -->
<script>
const default_theme = window.matchMedia("(prefers-color-scheme: dark)").matches ? default_dark_theme : default_light_theme;
let theme;
<script type="text/javascript">
var theme;
try { theme = localStorage.getItem('mdbook-theme'); } catch(e) { }
if (theme === null || theme === undefined) { theme = default_theme; }
const html = document.documentElement;
var html = document.querySelector('html');
html.classList.remove('no-js')
html.classList.remove('{{ default_theme }}')
html.classList.add(theme);
html.classList.add("js");
html.classList.add('js');
</script>
<input type="checkbox" id="mdbook-sidebar-toggle-anchor" class="hidden">
<!-- Hide / unhide sidebar before it is displayed -->
<script>
let sidebar = null;
const sidebar_toggle = document.getElementById("mdbook-sidebar-toggle-anchor");
<script type="text/javascript">
var html = document.querySelector('html');
var sidebar = 'hidden';
if (document.body.clientWidth >= 1080) {
try { sidebar = localStorage.getItem('mdbook-sidebar'); } catch(e) { }
sidebar = sidebar || 'visible';
} else {
sidebar = 'hidden';
sidebar_toggle.checked = false;
}
if (sidebar === 'visible') {
sidebar_toggle.checked = true;
} else {
html.classList.remove('sidebar-visible');
}
html.classList.remove('sidebar-visible');
html.classList.add("sidebar-" + sidebar);
</script>
<nav id="mdbook-sidebar" class="sidebar" aria-label="Table of contents">
<!-- populated by js -->
<mdbook-sidebar-scrollbox class="sidebar-scrollbox"></mdbook-sidebar-scrollbox>
<noscript>
<iframe class="sidebar-iframe-outer" src="{{ path_to_root }}toc.html"></iframe>
</noscript>
<div id="mdbook-sidebar-resize-handle" class="sidebar-resize-handle">
<div class="sidebar-resize-indicator"></div>
<nav id="sidebar" class="sidebar" aria-label="Table of contents">
<div class="sidebar-scrollbox">
{{#toc}}{{/toc}}
</div>
<div id="sidebar-resize-handle" class="sidebar-resize-handle"></div>
</nav>
<div id="mdbook-page-wrapper" class="page-wrapper">
<div id="page-wrapper" class="page-wrapper">
<div class="page">
{{> header}}
<div id="mdbook-menu-bar-hover-placeholder"></div>
<div id="mdbook-menu-bar" class="menu-bar sticky">
<div id="menu-bar-hover-placeholder"></div>
<div id="menu-bar" class="menu-bar sticky bordered">
<div class="left-buttons">
<label id="mdbook-sidebar-toggle" class="icon-button" for="mdbook-sidebar-toggle-anchor" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="mdbook-sidebar">
{{fa "solid" "bars"}}
</label>
<button id="mdbook-theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="mdbook-theme-list">
{{fa "solid" "paintbrush"}}
<button id="sidebar-toggle" class="icon-button" type="button" title="Toggle Table of Contents" aria-label="Toggle Table of Contents" aria-controls="sidebar">
<i class="fa fa-bars"></i>
</button>
<ul id="mdbook-theme-list" class="theme-popup" aria-label="Themes" role="menu">
<li role="none"><button role="menuitem" class="theme" id="mdbook-theme-default_theme">Auto</button></li>
<li role="none"><button role="menuitem" class="theme" id="mdbook-theme-light">Light</button></li>
<li role="none"><button role="menuitem" class="theme" id="mdbook-theme-rust">Rust</button></li>
<li role="none"><button role="menuitem" class="theme" id="mdbook-theme-coal">Coal</button></li>
<li role="none"><button role="menuitem" class="theme" id="mdbook-theme-navy">Navy</button></li>
<li role="none"><button role="menuitem" class="theme" id="mdbook-theme-ayu">Ayu</button></li>
<button id="theme-toggle" class="icon-button" type="button" title="Change theme" aria-label="Change theme" aria-haspopup="true" aria-expanded="false" aria-controls="theme-list">
<i class="fa fa-paint-brush"></i>
</button>
<ul id="theme-list" class="theme-popup" aria-label="Themes" role="menu">
<li role="none"><button role="menuitem" class="theme" id="light">{{ theme_option "Light" }}</button></li>
<li role="none"><button role="menuitem" class="theme" id="rust">{{ theme_option "Rust" }}</button></li>
<li role="none"><button role="menuitem" class="theme" id="coal">{{ theme_option "Coal" }}</button></li>
<li role="none"><button role="menuitem" class="theme" id="navy">{{ theme_option "Navy" }}</button></li>
<li role="none"><button role="menuitem" class="theme" id="ayu">{{ theme_option "Ayu" }}</button></li>
</ul>
{{#if search_enabled}}
<button id="mdbook-search-toggle" class="icon-button" type="button" title="Search (`/`)" aria-label="Toggle Searchbar" aria-expanded="false" aria-keyshortcuts="/ s" aria-controls="mdbook-searchbar">
{{fa "solid" "magnifying-glass"}}
<button id="search-toggle" class="icon-button" type="button" title="Search. (Shortkey: s)" aria-label="Toggle Searchbar" aria-expanded="false" aria-keyshortcuts="S" aria-controls="searchbar">
<i class="fa fa-search"></i>
</button>
{{/if}}
</div>
<!-- BEGIN CUSTOM SYNAPSE MODIFICATIONS -->
<div class="version-picker">
<div class="dropdown">
<div class="select">
<span></span>
<i class="fa fa-chevron-down"></i>
<div class="version-picker">
<div class="dropdown">
<div class="select">
<span></span>
<i class="fa fa-chevron-down"></i>
</div>
<input type="hidden" name="version">
<ul class="dropdown-menu">
<!-- Versions will be added dynamically in version-picker.js -->
</ul>
</div>
<input type="hidden" name="version">
<ul class="dropdown-menu">
<!-- Versions will be added dynamically in version-picker.js -->
</ul>
</div>
</div>
<!-- END CUSTOM SYNAPSE MODIFICATIONS -->
<h1 class="menu-title">{{ book_title }}</h1>
<div class="right-buttons">
{{#if print_enable}}
<a href="{{ path_to_root }}print.html" title="Print this book" aria-label="Print this book">
{{fa "solid" "print" "print-button"}}
<i id="print-button" class="fa fa-print"></i>
</a>
{{/if}}
{{#if git_repository_url}}
<a href="{{git_repository_url}}" title="Git repository" aria-label="Git repository">
{{fa git_repository_icon_class git_repository_icon}}
<i id="git-repository-button" class="fa {{git_repository_icon}}"></i>
</a>
{{/if}}
{{#if git_repository_edit_url}}
<a href="{{git_repository_edit_url}}" title="Suggest an edit" aria-label="Suggest an edit" rel="edit">
{{fa "solid" "pencil" "git-edit-button"}}
<a href="{{git_repository_edit_url}}" title="Suggest an edit" aria-label="Suggest an edit">
<i id="git-edit-button" class="fa fa-edit"></i>
</a>
{{/if}}
@@ -202,58 +168,50 @@
</div>
{{#if search_enabled}}
<div id="mdbook-search-wrapper" class="hidden">
<form id="mdbook-searchbar-outer" class="searchbar-outer">
<div class="search-wrapper">
<input type="search" id="mdbook-searchbar" name="searchbar" placeholder="Search this book ..." aria-controls="mdbook-searchresults-outer" aria-describedby="searchresults-header">
<div class="spinner-wrapper">
{{fa "solid" "spinner" "fa-spin"}}
</div>
</div>
<div id="search-wrapper" class="hidden">
<form id="searchbar-outer" class="searchbar-outer">
<input type="search" id="searchbar" name="searchbar" placeholder="Search this book ..." aria-controls="searchresults-outer" aria-describedby="searchresults-header">
</form>
<div id="mdbook-searchresults-outer" class="searchresults-outer hidden">
<div id="mdbook-searchresults-header" class="searchresults-header"></div>
<ul id="mdbook-searchresults">
<div id="searchresults-outer" class="searchresults-outer hidden">
<div id="searchresults-header" class="searchresults-header"></div>
<ul id="searchresults">
</ul>
</div>
</div>
{{/if}}
<!-- Apply ARIA attributes after the sidebar and the sidebar toggle button are added to the DOM -->
<script>
document.getElementById('mdbook-sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible');
document.getElementById('mdbook-sidebar').setAttribute('aria-hidden', sidebar !== 'visible');
Array.from(document.querySelectorAll('#mdbook-sidebar a')).forEach(function(link) {
<script type="text/javascript">
document.getElementById('sidebar-toggle').setAttribute('aria-expanded', sidebar === 'visible');
document.getElementById('sidebar').setAttribute('aria-hidden', sidebar !== 'visible');
Array.from(document.querySelectorAll('#sidebar a')).forEach(function(link) {
link.setAttribute('tabIndex', sidebar === 'visible' ? 0 : -1);
});
</script>
<div id="mdbook-content" class="content">
<div id="content" class="content">
<main>
<!-- Page table of contents -->
<div class="sidetoc">
<nav class="pagetoc"></nav>
</div>
{{{ content }}}
</main>
<nav class="nav-wrapper" aria-label="Page navigation">
<!-- Mobile navigation buttons -->
{{#if previous}}
<a rel="prev" href="{{ path_to_root }}{{previous.link}}" class="mobile-nav-chapters previous" title="Previous chapter" aria-label="Previous chapter" aria-keyshortcuts="Left">
{{#if (eq ../text_direction "rtl")}}
{{fa "solid" "angle-right"}}
{{else}}
{{fa "solid" "angle-left"}}
{{/if}}
{{#previous}}
<a rel="prev" href="{{ path_to_root }}{{link}}" class="mobile-nav-chapters previous" title="Previous chapter" aria-label="Previous chapter" aria-keyshortcuts="Left">
<i class="fa fa-angle-left"></i>
</a>
{{/if}}
{{/previous}}
{{#if next}}
<a rel="next prefetch" href="{{ path_to_root }}{{next.link}}" class="mobile-nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
{{#if (eq ../text_direction "rtl")}}
{{fa "solid" "angle-left"}}
{{else}}
{{fa "solid" "angle-right"}}
{{/if}}
{{#next}}
<a rel="next" href="{{ path_to_root }}{{link}}" class="mobile-nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
<i class="fa fa-angle-right"></i>
</a>
{{/if}}
{{/next}}
<div style="clear: both"></div>
</nav>
@@ -261,92 +219,92 @@
</div>
<nav class="nav-wide-wrapper" aria-label="Page navigation">
{{#if previous}}
<a rel="prev" href="{{ path_to_root }}{{previous.link}}" class="nav-chapters previous" title="Previous chapter" aria-label="Previous chapter" aria-keyshortcuts="Left">
{{#if (eq ../text_direction "rtl")}}
{{fa "solid" "angle-right"}}
{{else}}
{{fa "solid" "angle-left"}}
{{/if}}
{{#previous}}
<a rel="prev" href="{{ path_to_root }}{{link}}" class="nav-chapters previous" title="Previous chapter" aria-label="Previous chapter" aria-keyshortcuts="Left">
<i class="fa fa-angle-left"></i>
</a>
{{/if}}
{{/previous}}
{{#if next}}
<a rel="next prefetch" href="{{ path_to_root }}{{next.link}}" class="nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
{{#if (eq text_direction "rtl")}}
{{fa "solid" "angle-left"}}
{{else}}
{{fa "solid" "angle-right"}}
{{/if}}
{{#next}}
<a rel="next" href="{{ path_to_root }}{{link}}" class="nav-chapters next" title="Next chapter" aria-label="Next chapter" aria-keyshortcuts="Right">
<i class="fa fa-angle-right"></i>
</a>
{{/if}}
{{/next}}
</nav>
</div>
<template id=fa-eye>{{fa "solid" "eye"}}</template>
<template id=fa-eye-slash>{{fa "solid" "eye-slash"}}</template>
<template id=fa-copy>{{fa "regular" "copy"}}</template>
<template id=fa-play>{{fa "solid" "play"}}</template>
<template id=fa-clock-rotate-left>{{fa "solid" "clock-rotate-left"}}</template>
{{#if live_reload_endpoint}}
{{#if livereload}}
<!-- Livereload script (if served using the cli tool) -->
<script>
const wsProtocol = location.protocol === 'https:' ? 'wss:' : 'ws:';
const wsAddress = wsProtocol + "//" + location.host + "/" + "{{{live_reload_endpoint}}}";
const socket = new WebSocket(wsAddress);
<script type="text/javascript">
var socket = new WebSocket("{{{livereload}}}");
socket.onmessage = function (event) {
if (event.data === "reload") {
socket.close();
location.reload();
}
};
window.onbeforeunload = function() {
socket.close();
}
</script>
{{/if}}
{{#if google_analytics}}
<!-- Google Analytics Tag -->
<script type="text/javascript">
var localAddrs = ["localhost", "127.0.0.1", ""];
// make sure we don't activate google analytics if the developer is
// inspecting the book locally...
if (localAddrs.indexOf(document.location.hostname) === -1) {
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', '{{google_analytics}}', 'auto');
ga('send', 'pageview');
}
</script>
{{/if}}
{{#if playground_line_numbers}}
<script>
<script type="text/javascript">
window.playground_line_numbers = true;
</script>
{{/if}}
{{#if playground_copyable}}
<script>
<script type="text/javascript">
window.playground_copyable = true;
</script>
{{/if}}
{{#if playground_js}}
<script src="{{ resource "ace.js" }}"></script>
<script src="{{ resource "mode-rust.js" }}"></script>
<script src="{{ resource "editor.js" }}"></script>
<script src="{{ resource "theme-dawn.js" }}"></script>
<script src="{{ resource "theme-tomorrow_night.js" }}"></script>
<script src="{{ path_to_root }}ace.js" type="text/javascript" charset="utf-8"></script>
<script src="{{ path_to_root }}editor.js" type="text/javascript" charset="utf-8"></script>
<script src="{{ path_to_root }}mode-rust.js" type="text/javascript" charset="utf-8"></script>
<script src="{{ path_to_root }}theme-dawn.js" type="text/javascript" charset="utf-8"></script>
<script src="{{ path_to_root }}theme-tomorrow_night.js" type="text/javascript" charset="utf-8"></script>
{{/if}}
{{#if search_js}}
<script src="{{ resource "elasticlunr.min.js" }}"></script>
<script src="{{ resource "mark.min.js" }}"></script>
<script src="{{ resource "searcher.js" }}"></script>
<script src="{{ path_to_root }}elasticlunr.min.js" type="text/javascript" charset="utf-8"></script>
<script src="{{ path_to_root }}mark.min.js" type="text/javascript" charset="utf-8"></script>
<script src="{{ path_to_root }}searcher.js" type="text/javascript" charset="utf-8"></script>
{{/if}}
<script src="{{ resource "clipboard.min.js" }}"></script>
<script src="{{ resource "highlight.js" }}"></script>
<script src="{{ resource "book.js" }}"></script>
<script src="{{ path_to_root }}clipboard.min.js" type="text/javascript" charset="utf-8"></script>
<script src="{{ path_to_root }}highlight.js" type="text/javascript" charset="utf-8"></script>
<script src="{{ path_to_root }}book.js" type="text/javascript" charset="utf-8"></script>
<!-- Custom JS scripts -->
{{#each additional_js}}
<script src="{{ resource this}}"></script>
<script type="text/javascript" src="{{ ../path_to_root }}{{this}}"></script>
{{/each}}
{{#if is_print}}
{{#if mathjax_support}}
<script>
<script type="text/javascript">
window.addEventListener('load', function() {
MathJax.Hub.Register.StartupHook('End', function() {
window.setTimeout(window.print, 100);
@@ -354,7 +312,7 @@
});
</script>
{{else}}
<script>
<script type="text/javascript">
window.addEventListener('load', function() {
window.setTimeout(window.print, 100);
});
@@ -362,21 +320,5 @@
{{/if}}
{{/if}}
{{#if fragment_map}}
<script>
document.addEventListener('DOMContentLoaded', function() {
const fragmentMap =
{{{fragment_map}}}
;
const target = fragmentMap[window.location.hash];
if (target) {
let url = new URL(target, window.location.href);
window.location.replace(url.href);
}
});
</script>
{{/if}}
</div>
</body>
</html>

View File

@@ -255,8 +255,6 @@ information.
^/_matrix/client/(api/v1|r0|v3|unstable)/directory/room/.*$
^/_matrix/client/(r0|v3|unstable)/capabilities$
^/_matrix/client/(r0|v3|unstable)/notifications$
# Admin API requests
^/_synapse/admin/v1/rooms/[^/]+$
# Encryption requests
@@ -302,9 +300,6 @@ Additionally, the following REST endpoints can be handled for GET requests:
# Presence requests
^/_matrix/client/(api/v1|r0|v3|unstable)/presence/
# Admin API requests
^/_synapse/admin/v2/users/[^/]+$
Pagination requests can also be handled, but all requests for a given
room must be routed to the same instance. Additionally, care must be taken to
ensure that the purge history admin API is not used while pagination requests

poetry.lock generated (2166 lines changed)

File diff suppressed because it is too large.

View File

@@ -1,6 +1,6 @@
[project]
name = "matrix-synapse"
version = "1.149.0rc1"
version = "1.144.0"
description = "Homeserver for the Matrix decentralised comms protocol"
readme = "README.rst"
authors = [
@@ -42,8 +42,7 @@ dependencies = [
"Twisted[tls]>=21.2.0",
"treq>=21.5.0",
# Twisted has required pyopenssl 16.0 since about Twisted 16.6.
# pyOpenSSL 16.2.0 fixes compatibility with OpenSSL 1.1.0.
"pyOpenSSL>=16.2.0",
"pyOpenSSL>=16.0.0",
"PyYAML>=5.3",
"pyasn1>=0.1.9",
"pyasn1-modules>=0.0.7",
@@ -96,25 +95,6 @@ dependencies = [
# This is used for parsing multipart responses
"python-multipart>=0.0.9",
# Transitive dependency constraints
# These dependencies aren't directly required by Synapse.
# However, in order for Synapse to build, Synapse requires a higher minimum version
# for these dependencies than the minimum specified by the direct dependency.
# We should periodically check to see if these dependencies are still necessary and
# remove any that are no longer required.
"cffi>=1.15", # via cryptography
"pynacl>=1.3", # via signedjson
"pyparsing>=2.4", # via packaging
"pyrsistent>=0.18.0", # via jsonschema
"requests>=2.16.0", # 2.16.0+ no longer vendors urllib3, avoiding Python 3.10+ incompatibility
"urllib3>=1.26.5", # via treq; 1.26.5 fixes Python 3.10+ collections.abc compatibility
# 5.2 is the current version in Debian oldstable. If we don't care to support that, then 5.4 is
# the minimum version from Ubuntu 22.04 and RHEL 9. (as of 2025-12)
# When bumping this version to 6.2 or above, refer to https://github.com/element-hq/synapse/pull/19274
# for details of Synapse improvements that may be unlocked. Particularly around the use of `|`
# syntax with zope interface types.
"zope-interface>=5.2", # via twisted
]
[project.optional-dependencies]
@@ -124,16 +104,7 @@ postgres = [
"psycopg2cffi>=2.8;platform_python_implementation == 'PyPy'",
"psycopg2cffi-compat==1.1;platform_python_implementation == 'PyPy'",
]
saml2 = [
"pysaml2>=4.5.0",
# Transitive dependencies from pysaml2
# These dependencies aren't directly required by Synapse.
# However, in order for Synapse to build, Synapse requires a higher minimum version
# for these dependencies than the minimum specified by the direct dependency.
"defusedxml>=0.7.1", # via pysaml2
"pytz>=2018.3", # via pysaml2
]
saml2 = ["pysaml2>=4.5.0"]
oidc = ["authlib>=0.15.1"]
# systemd-python is necessary for logging to the systemd journal via
# `systemd.journal.JournalHandler`, as is documented in
@@ -141,25 +112,15 @@ oidc = ["authlib>=0.15.1"]
systemd = ["systemd-python>=231"]
url-preview = ["lxml>=4.6.3"]
sentry = ["sentry-sdk>=0.7.2"]
opentracing = [
"jaeger-client>=4.2.0",
"opentracing>=2.2.0",
# Transitive dependencies from jaeger-client
# These dependencies aren't directly required by Synapse.
# However, in order for Synapse to build, Synapse requires a higher minimum version
# for these dependencies than the minimum specified by the direct dependency.
"thrift>=0.10", # via jaeger-client
"tornado>=6.0", # via jaeger-client
]
opentracing = ["jaeger-client>=4.2.0", "opentracing>=2.2.0"]
jwt = ["authlib"]
# hiredis is not a *strict* dependency, but it makes things much faster.
# (if it is not installed, we fall back to slow code.)
redis = ["txredisapi>=1.4.7", "hiredis>=0.3"]
redis = ["txredisapi>=1.4.7", "hiredis"]
# Required to use experimental `caches.track_memory_usage` config option.
cache-memory = ["pympler>=1.0"]
cache-memory = ["pympler"]
# If this is updated, don't forget to update the equivalent lines in
# `dependency-groups.dev` below.
# tool.poetry.group.dev.dependencies.
test = ["parameterized>=0.9.0", "idna>=3.3"]
# The duplication here is awful.
@@ -188,22 +149,12 @@ all = [
# opentracing
"jaeger-client>=4.2.0", "opentracing>=2.2.0",
# redis
"txredisapi>=1.4.7", "hiredis>=0.3",
"txredisapi>=1.4.7", "hiredis",
# cache-memory
# 1.0 added support for python 3.10, our current minimum supported python version
"pympler>=1.0",
"pympler",
# omitted:
# - test: it's useful to have this separate from dev deps in the olddeps job
# - systemd: this is a system-based requirement
# Transitive dependencies
# These dependencies aren't directly required by Synapse.
# However, in order for Synapse to build, Synapse requires a higher minimum version
# for these dependencies than the minimum specified by the direct dependency.
"defusedxml>=0.7.1", # via pysaml2
"pytz>=2018.3", # via pysaml2
"thrift>=0.10", # via jaeger-client
"tornado>=6.0", # via jaeger-client
]
[project.urls]
@@ -226,85 +177,6 @@ synapse_port_db = "synapse._scripts.synapse_port_db:main"
synapse_review_recent_signups = "synapse._scripts.review_recent_signups:main"
update_synapse_database = "synapse._scripts.update_synapse_database:main"
[tool.poetry]
packages = [{ include = "synapse" }]
[tool.poetry.build]
# Compile our rust module when using `poetry install`. This is still required
# while using `poetry` as the build frontend. Saves the developer from needing
# to run both:
#
# $ poetry install
# $ maturin develop
script = "build_rust.py"
# Create a `setup.py` file which will call the `build` method in our build
# script.
#
# Our build script currently uses the "old" build method, where we define a
# `build` method and `setup.py` calls it. Poetry developers have mentioned that
# this will eventually be removed:
# https://github.com/matrix-org/synapse/pull/14949#issuecomment-1418001859
#
# The new build method is defined here:
# https://python-poetry.org/docs/building-extension-modules/#maturin-build-script
# but is still marked as "unstable" at the time of writing. This would also
# bump our minimum `poetry-core` version to 1.5.0.
#
# We can drop this workaround entirely if we migrate away from
# Poetry, so there's little motivation to update the build script.
generate-setup-file = true
# Dependencies used for developing Synapse itself.
#
# Hold off on migrating these to `dev-dependencies` (PEP 735) for now until
# Poetry 2.2.0+, pip 25.1+ are more widely available.
[tool.poetry.group.dev.dependencies]
# We pin development dependencies in poetry.lock so that our tests don't start
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevents merge conflicts when running a batch of dependabot updates.
ruff = "0.14.6"
# Typechecking
lxml-stubs = ">=0.4.0"
mypy = "*"
mypy-zope = "*"
types-bleach = ">=4.1.0"
types-jsonschema = ">=3.2.0"
types-netaddr = ">=0.8.0.6"
types-opentracing = ">=2.4.2"
types-Pillow = ">=8.3.4"
types-psycopg2 = ">=2.9.9"
types-pyOpenSSL = ">=20.0.7"
types-PyYAML = ">=5.4.10"
types-requests = ">=2.26.0"
types-setuptools = ">=57.4.0"
# Dependencies which are exclusively required by unit test code. This is
# NOT a list of all modules that are necessary to run the unit tests.
# Tests assume that all optional dependencies are installed.
#
# If this is updated, don't forget to update the equivalent lines in
# project.optional-dependencies.test.
parameterized = ">=0.9.0"
idna = ">=3.3"
# The following are used by the release script
click = ">=8.1.3"
# GitPython was == 3.1.14; bumped to 3.1.20, the first release with type hints.
GitPython = ">=3.1.20"
markdown-it-py = ">=3.0.0"
pygithub = ">=1.59"
# The following are executed as commands by the release script.
twine = "*"
# Towncrier min version comes from https://github.com/matrix-org/synapse/pull/3425. Rationale unclear.
towncrier = ">=18.6.0rc1"
# Used for checking the Poetry lockfile
tomli = ">=1.2.3"
# Used for checking the schema delta files
sqlglot = ">=28.0.0"
[tool.towncrier]
package = "synapse"
@@ -388,10 +260,15 @@ select = [
"G",
# pyupgrade
"UP006",
"UP007",
"UP045",
]
extend-safe-fixes = [
# pyupgrade rules compatible with Python >= 3.9
"UP006",
"UP007",
# pyupgrade rules compatible with Python >= 3.10
"UP045",
# Allow ruff to automatically fix trailing spaces within a multi-line string/comment.
"W293"
]
@@ -411,32 +288,28 @@ indent-style = "space"
skip-magic-trailing-comma = false
line-ending = "auto"
[tool.ruff.lint.flake8-bugbear]
extend-immutable-calls = [
# Durations are immutable
"synapse.util.duration.Duration",
]
[tool.maturin]
manifest-path = "rust/Cargo.toml"
module-name = "synapse.synapse_rust"
python-source = "."
[tool.poetry]
packages = [
{ include = "synapse" },
]
include = [
{ path = "AUTHORS.rst", format = "sdist" },
{ path = "book.toml", format = "sdist" },
{ path = "changelog.d/**/*", format = "sdist" },
{ path = "changelog.d", format = "sdist" },
{ path = "CHANGES.md", format = "sdist" },
{ path = "CONTRIBUTING.md", format = "sdist" },
{ path = "demo/**/*", format = "sdist" },
{ path = "docs/**/*", format = "sdist" },
{ path = "demo", format = "sdist" },
{ path = "docs", format = "sdist" },
{ path = "INSTALL.md", format = "sdist" },
{ path = "LICENSE-AGPL-3.0", format = "sdist" },
{ path = "LICENSE-COMMERCIAL", format = "sdist" },
{ path = "mypy.ini", format = "sdist" },
{ path = "scripts-dev/**/*", format = "sdist" },
{ path = "synmark/**/*", format = "sdist" },
{ path = "scripts-dev", format = "sdist" },
{ path = "synmark", format="sdist" },
{ path = "sytest-blacklist", format = "sdist" },
{ path = "tests/**/*", format = "sdist" },
{ path = "tests", format = "sdist" },
{ path = "UPGRADE.rst", format = "sdist" },
{ path = "Cargo.toml", format = "sdist" },
{ path = "Cargo.lock", format = "sdist" },
@@ -445,9 +318,62 @@ include = [
{ path = "rust/src/**", format = "sdist" },
]
exclude = [
{ path = "synapse/*.so", format = "sdist" },
{ path = "synapse/*.so", format = "sdist"}
]
[tool.poetry.build]
script = "build_rust.py"
generate-setup-file = true
[tool.poetry.group.dev.dependencies]
# We pin development dependencies in poetry.lock so that our tests don't start
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevent merge conflicts when running a batch of dependabot updates.
ruff = "0.14.5"
# Typechecking
lxml-stubs = ">=0.4.0"
mypy = "*"
mypy-zope = "*"
types-bleach = ">=4.1.0"
types-jsonschema = ">=3.2.0"
types-netaddr = ">=0.8.0.6"
types-opentracing = ">=2.4.2"
types-Pillow = ">=8.3.4"
types-psycopg2 = ">=2.9.9"
types-pyOpenSSL = ">=20.0.7"
types-PyYAML = ">=5.4.10"
types-requests = ">=2.26.0"
types-setuptools = ">=57.4.0"
# Dependencies which are exclusively required by unit test code. This is
# NOT a list of all modules that are necessary to run the unit tests.
# Tests assume that all optional dependencies are installed.
#
# If this is updated, don't forget to update the equivalent lines in
# project.optional-dependencies.test.
parameterized = ">=0.9.0"
idna = ">=3.3"
# The following are used by the release script
click = ">=8.1.3"
# GitPython was == 3.1.14; bumped to 3.1.20, the first release with type hints.
GitPython = ">=3.1.20"
markdown-it-py = ">=3.0.0"
pygithub = ">=1.59"
# The following are executed as commands by the release script.
twine = "*"
# Towncrier min version comes from https://github.com/matrix-org/synapse/pull/3425. Rationale unclear.
towncrier = ">=18.6.0rc1"
# Used for checking the Poetry lockfile
tomli = ">=1.2.3"
# Used for checking the schema delta files
sqlglot = ">=28.0.0"
[build-system]
# The upper bounds here are defensive, intended to prevent situations like
# https://github.com/matrix-org/synapse/issues/13849 and
@@ -455,8 +381,8 @@ exclude = [
# runtime errors caused by build system changes.
# We are happy to raise these upper bounds upon request,
# provided we check that it's safe to do so (i.e. that CI passes).
requires = ["maturin>=1.0,<2.0"]
build-backend = "maturin"
requires = ["poetry-core>=2.0.0,<=2.1.3", "setuptools_rust>=1.3,<=1.11.1"]
build-backend = "poetry.core.masonry.api"
[tool.cibuildwheel]
@@ -481,6 +407,9 @@ skip = "cp3??t-* *i686* *macosx*"
enable = "pypy"
# We need a rust compiler.
#
# We temporarily pin Rust to 1.82.0 to work around
# https://github.com/element-hq/synapse/issues/17988
before-all = "sh .ci/before_build_wheel.sh"
environment= { PATH = "$PATH:$HOME/.cargo/bin" }
@@ -490,3 +419,8 @@ environment= { PATH = "$PATH:$HOME/.cargo/bin" }
before-build = "rm -rf {project}/build"
build-frontend = "build"
test-command = "python -c 'from synapse.synapse_rust import sum_as_string; print(sum_as_string(1, 2))'"
[tool.cibuildwheel.linux]
# Wrap the repair command to correctly rename the built cpython wheels as ABI3.
repair-wheel-command = "./.ci/scripts/auditwheel_wrapper.py -w {dest_dir} {wheel}"


@@ -30,17 +30,14 @@ http = "1.1.0"
lazy_static = "1.4.0"
log = "0.4.17"
mime = "0.3.17"
pyo3 = { version = "0.27.2", features = [
pyo3 = { version = "0.26.0", features = [
"macros",
"anyhow",
"abi3",
"abi3-py310",
# So we can pass `bytes::Bytes` directly back to Python efficiently,
# https://docs.rs/pyo3/latest/pyo3/bytes/index.html
"bytes",
] }
pyo3-log = "0.13.1"
pythonize = "0.27.0"
pythonize = "0.26.0"
regex = "1.6.0"
sha2 = "0.10.8"
serde = { version = "1.0.144", features = ["derive"] }


@@ -29,13 +29,6 @@ fn main() -> Result<(), std::io::Error> {
}
}
// Manually add Cargo.toml's, Cargo.lock and build.rs to the hash, since changes to
// these files should also invalidate the built module.
paths.push("Cargo.toml".to_string());
paths.push("../Cargo.lock".to_string());
paths.push("../Cargo.toml".to_string());
paths.push("build.rs".to_string());
paths.sort();
let mut hasher = Blake2b512::new();
@@ -48,12 +41,5 @@ fn main() -> Result<(), std::io::Error> {
let hex_digest = hex::encode(hasher.finalize());
println!("cargo:rustc-env=SYNAPSE_RUST_DIGEST={hex_digest}");
// The default rules don't pick up trivial changes to the workspace config
// files, but we need to rebuild if those change to pick up the changed
// hashes.
println!("cargo::rerun-if-changed=.");
println!("cargo::rerun-if-changed=../Cargo.lock");
println!("cargo::rerun-if-changed=../Cargo.toml");
Ok(())
}


@@ -62,7 +62,7 @@ impl NotFoundError {
import_exception!(synapse.api.errors, HttpResponseException);
impl HttpResponseException {
pub fn new(status: StatusCode, bytes: bytes::Bytes) -> pyo3::PyErr {
pub fn new(status: StatusCode, bytes: Vec<u8>) -> pyo3::PyErr {
HttpResponseException::new_err((
status.as_u16(),
status.canonical_reason().unwrap_or_default(),


@@ -32,7 +32,7 @@ fn read_io_body(body: &Bound<'_, PyAny>, chunk_size: usize) -> PyResult<Bytes> {
let mut buf = BytesMut::new();
loop {
let bound = &body.call_method1("read", (chunk_size,))?;
let bytes: &Bound<'_, PyBytes> = bound.cast()?;
let bytes: &Bound<'_, PyBytes> = bound.downcast()?;
if bytes.as_bytes().is_empty() {
return Ok(buf.into());
}
@@ -58,12 +58,12 @@ pub fn http_request_from_twisted(request: &Bound<'_, PyAny>) -> PyResult<Request
let mut req = Request::new(body);
let bound = &request.getattr("uri")?;
let uri: &Bound<'_, PyBytes> = bound.cast()?;
let uri: &Bound<'_, PyBytes> = bound.downcast()?;
*req.uri_mut() =
Uri::try_from(uri.as_bytes()).map_err(|_| PyValueError::new_err("invalid uri"))?;
let bound = &request.getattr("method")?;
let method: &Bound<'_, PyBytes> = bound.cast()?;
let method: &Bound<'_, PyBytes> = bound.downcast()?;
*req.method_mut() = Method::from_bytes(method.as_bytes())
.map_err(|_| PyValueError::new_err("invalid method"))?;
@@ -74,17 +74,17 @@ pub fn http_request_from_twisted(request: &Bound<'_, PyAny>) -> PyResult<Request
for header in headers_iter {
let header = header?;
let header: &Bound<'_, PyTuple> = header.cast()?;
let header: &Bound<'_, PyTuple> = header.downcast()?;
let bound = &header.get_item(0)?;
let name: &Bound<'_, PyBytes> = bound.cast()?;
let name: &Bound<'_, PyBytes> = bound.downcast()?;
let name = HeaderName::from_bytes(name.as_bytes())
.map_err(|_| PyValueError::new_err("invalid header name"))?;
let bound = &header.get_item(1)?;
let values: &Bound<'_, PySequence> = bound.cast()?;
let values: &Bound<'_, PySequence> = bound.downcast()?;
for index in 0..values.len()? {
let bound = &values.get_item(index)?;
let value: &Bound<'_, PyBytes> = bound.cast()?;
let value: &Bound<'_, PyBytes> = bound.downcast()?;
let value = HeaderValue::from_bytes(value.as_bytes())
.map_err(|_| PyValueError::new_err("invalid header value"))?;
req.headers_mut().append(name.clone(), value);


@@ -15,7 +15,7 @@
use std::{collections::HashMap, future::Future, sync::OnceLock};
use anyhow::Context;
use http_body_util::BodyExt;
use futures::TryStreamExt;
use once_cell::sync::OnceCell;
use pyo3::{create_exception, exceptions::PyException, prelude::*};
use reqwest::RequestBuilder;
@@ -171,27 +171,15 @@ struct HttpClient {
#[pymethods]
impl HttpClient {
#[new]
#[pyo3(signature = (reactor, user_agent, http2_only = false))]
pub fn py_new(
reactor: Bound<PyAny>,
user_agent: &str,
http2_only: bool,
) -> PyResult<HttpClient> {
pub fn py_new(reactor: Bound<PyAny>, user_agent: &str) -> PyResult<HttpClient> {
// Make sure the runtime gets installed
let _ = runtime(&reactor)?;
let mut builder = reqwest::Client::builder().user_agent(user_agent);
if http2_only {
// Create the client with 'HTTP/2 prior knowledge' enabled, which
// means it will always use HTTP/2 for unencrypted connections
builder = builder.http2_prior_knowledge();
}
let client = builder.build().context("building reqwest client")?;
Ok(HttpClient {
client,
client: reqwest::Client::builder()
.user_agent(user_agent)
.build()
.context("building reqwest client")?,
reactor: reactor.unbind(),
})
}
@@ -235,30 +223,23 @@ impl HttpClient {
let status = response.status();
// A light-weight way to read the response up until the `response_limit`. We
// want to avoid allocating a giant response object on the server above our
// expected `response_limit`, preventing out-of-memory DoS problems.
let body = reqwest::Body::from(response);
let limited_body = http_body_util::Limited::new(body, response_limit);
let collected = limited_body
.collect()
.await
.map_err(anyhow::Error::from_boxed)
.with_context(|| {
format!(
"Response body exceeded response limit ({} bytes)",
response_limit
)
})?;
let bytes: bytes::Bytes = collected.to_bytes();
let mut stream = response.bytes_stream();
let mut buffer = Vec::new();
while let Some(chunk) = stream.try_next().await.context("reading body")? {
if buffer.len() + chunk.len() > response_limit {
Err(anyhow::anyhow!("Response size too large"))?;
}
if !status.is_success() {
return Err(HttpResponseException::new(status, bytes));
buffer.extend_from_slice(&chunk);
}
// Because of the `pyo3` `bytes` feature, we can pass this back to Python
// land efficiently
Ok(bytes)
if !status.is_success() {
return Err(HttpResponseException::new(status, buffer));
}
let r = Python::attach(|py| buffer.into_pyobject(py).map(|o| o.unbind()))?;
Ok(r)
})
}
}
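Both sides of this hunk enforce the same invariant: stop reading once the accumulated body would exceed `response_limit`, rather than buffering an attacker-controlled response in full. A rough Python analogue, assuming the `requests` library (names are illustrative, not Synapse's API):

import requests

def get_with_limit(url: str, response_limit: int) -> bytes:
    """Read a response body, failing as soon as it exceeds response_limit bytes."""
    buffer = bytearray()
    with requests.get(url, stream=True, timeout=30) as resp:
        for chunk in resp.iter_content(chunk_size=8192):
            # Check before appending so we never hold more than the limit.
            if len(buffer) + len(chunk) > response_limit:
                raise ValueError(
                    f"Response body exceeded response limit ({response_limit} bytes)"
                )
            buffer.extend(chunk)
    return bytes(buffer)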
@@ -335,8 +316,5 @@ fn make_deferred_yieldable<'py>(
func
});
make_deferred_yieldable
.call1(py, (deferred,))?
.extract(py)
.map_err(Into::into)
make_deferred_yieldable.call1(py, (deferred,))?.extract(py)
}


@@ -12,7 +12,6 @@ pub mod http;
pub mod http_client;
pub mod identifier;
pub mod matrix_const;
pub mod msc4388_rendezvous;
pub mod push;
pub mod rendezvous;
pub mod segmenter;
@@ -56,7 +55,6 @@ fn synapse_rust(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
events::register_module(py, m)?;
http_client::register_module(py, m)?;
rendezvous::register_module(py, m)?;
msc4388_rendezvous::register_module(py, m)?;
segmenter::register_module(py, m)?;
Ok(())


@@ -1,370 +0,0 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2026 Element Creations Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
use std::{
collections::BTreeMap,
time::{Duration, SystemTime},
};
use http::StatusCode;
use pyo3::{
pyclass, pymethods,
types::{PyAnyMethods, PyModule, PyModuleMethods},
Bound, IntoPyObject, Py, PyAny, PyResult, Python,
};
use serde::Deserialize;
use ulid::Ulid;
use self::session::Session;
use crate::{
duration::SynapseDuration,
errors::{NotFoundError, SynapseError},
http::http_request_from_twisted,
msc4388_rendezvous::session::{GetResponse, PostResponse, PutResponse},
UnwrapInfallible,
};
mod session;
#[pyclass]
struct MSC4388RendezvousHandler {
clock: Py<PyAny>,
sessions: BTreeMap<Ulid, Session>,
soft_limit: usize,
hard_limit: usize,
max_content_length: u64,
ttl: Duration,
}
impl MSC4388RendezvousHandler {
/// Check the length of the data parameter, returning an error if it is too large.
fn check_data_length(&self, data: &str) -> PyResult<()> {
let data_length = data.len() as u64;
if data_length > self.max_content_length {
return Err(SynapseError::new(
StatusCode::PAYLOAD_TOO_LARGE,
"Payload too large".to_owned(),
"M_TOO_LARGE",
None,
None,
));
}
Ok(())
}
/// Evict expired sessions and remove the oldest sessions until we're under the capacity.
fn evict(&mut self, now: SystemTime) {
// First remove all the entries which expired
self.sessions.retain(|_, session| !session.expired(now));
// Then we remove the oldest entries until we're under the soft limit
while self.sessions.len() > self.soft_limit {
self.sessions.pop_first();
}
}
}
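The eviction strategy above is two-tiered: expired sessions always go, and the oldest survivors are dropped only while the store is over its soft limit (the hard limit merely triggers an eviction pass on insert, as seen in handle_post below). A Python sketch of the same policy, assuming insertion-ordered dict keys stand in for the BTreeMap's time-ordered ULID keys:

class SessionStore:
    def __init__(self, soft_limit: int = 100) -> None:
        self.soft_limit = soft_limit
        # id -> expiry timestamp; dicts preserve insertion order, which
        # approximates the BTreeMap's ULID (creation-time) ordering.
        self.sessions: dict[str, float] = {}

    def evict(self, now: float) -> None:
        # First remove all the entries which have expired.
        self.sessions = {k: exp for k, exp in self.sessions.items() if exp > now}
        # Then remove the oldest entries until we're under the soft limit.
        while len(self.sessions) > self.soft_limit:
            del self.sessions[next(iter(self.sessions))]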
#[derive(Deserialize)]
pub struct PostRequest {
data: String,
}
#[derive(Deserialize)]
pub struct PutRequest {
sequence_token: String,
data: String,
}
#[pymethods]
impl MSC4388RendezvousHandler {
#[new]
#[pyo3(signature = (homeserver, /, soft_limit=100, hard_limit=200, max_content_length=4*1024, eviction_interval=60*1000, ttl=2*60*1000))]
fn new(
py: Python<'_>,
homeserver: &Bound<'_, PyAny>,
soft_limit: usize,
hard_limit: usize,
max_content_length: u64,
eviction_interval: u64,
ttl: u64,
) -> PyResult<Py<Self>> {
let clock = homeserver
.call_method0("get_clock")?
.into_pyobject(py)
.unwrap_infallible()
.unbind();
// Construct a Python object so that we can get a reference to the
// evict method and schedule it to run.
let self_ = Py::new(
py,
Self {
clock,
sessions: BTreeMap::new(),
soft_limit,
hard_limit,
max_content_length,
ttl: Duration::from_millis(ttl),
},
)?;
let eviction_duration = SynapseDuration::from_milliseconds(eviction_interval);
let evict = self_.getattr(py, "_evict")?;
homeserver.call_method0("get_clock")?.call_method(
"looping_call",
(evict, &eviction_duration),
None,
)?;
Ok(self_)
}
fn _evict(&mut self, py: Python<'_>) -> PyResult<()> {
let clock = self.clock.bind(py);
let now: u64 = clock.call_method0("time_msec")?.extract()?;
let now = SystemTime::UNIX_EPOCH + Duration::from_millis(now);
self.evict(now);
Ok(())
}
fn handle_post(
&mut self,
py: Python<'_>,
twisted_request: &Bound<'_, PyAny>,
) -> PyResult<(u8, PostResponse)> {
let clock = self.clock.bind(py);
let now: u64 = clock.call_method0("time_msec")?.extract()?;
let now = SystemTime::UNIX_EPOCH + Duration::from_millis(now);
// We trigger an immediate eviction if we're at the hard limit
if self.sessions.len() >= self.hard_limit {
self.evict(now);
}
// Generate a new ULID for the session from the current time.
let id = Ulid::from_datetime(now);
let request = http_request_from_twisted(twisted_request)?;
// parse JSON body
let post_request: PostRequest =
serde_json::from_slice(&request.into_body()).map_err(|_| {
SynapseError::new(
StatusCode::BAD_REQUEST,
"Invalid JSON in request body".to_owned(),
"M_INVALID_PARAM",
None,
None,
)
})?;
let data: String = post_request.data;
self.check_data_length(&data)?;
let session = Session::new(id, data, now, self.ttl);
let response = session.post_response(now);
self.sessions.insert(id, session);
Ok((200, response))
}
fn handle_get(
&mut self,
py: Python<'_>,
id: &str,
twisted_request: &Bound<'_, PyAny>,
) -> PyResult<(u8, GetResponse)> {
let request = http_request_from_twisted(twisted_request)?;
// As per the MSC, we check the Sec-Fetch-* headers to ensure this request did not come from somewhere that will
// be rendered directly to the user, as the response may contain sensitive data. These headers are added by
well-behaved browsers, so they are helpful for protecting regular users.
// Sec-Fetch-Dest: https://www.w3.org/TR/fetch-metadata/#sec-fetch-dest-header
//
// If the header is present then this must be "empty". All other values such as document, image etc.
// are considered potentially dangerous as they might be rendered to the user.
//
// Note that because we only ever return JSON, it is unlikely that it could somehow be rendered as an image,
// video or other media.
let sec_fetch_dest: Option<String> = request
.headers()
.get("sec-fetch-dest")
.and_then(|v| v.to_str().ok())
.map(|s| s.to_owned());
if sec_fetch_dest.is_some() && sec_fetch_dest.as_deref() != Some("empty") {
return Err(SynapseError::new(
StatusCode::FORBIDDEN,
"Rendezvous content is not accessible from the request destination".to_owned(),
"M_FORBIDDEN",
None,
None,
));
}
// Sec-Fetch-Mode: https://www.w3.org/TR/fetch-metadata/#sec-fetch-mode-header
//
// A request mode of "navigate" is not allowed as this indicates the request is being made by the
// browser to navigate to a URL, which could lead to the response being rendered directly to the user.
//
// Note that usually Sec-Fetch-Dest would be "document" in this case and so the request would be rejected earlier,
// but we check the mode just in case the destination is not set correctly.
let sec_fetch_mode: Option<String> = request
.headers()
.get("sec-fetch-mode")
.and_then(|v| v.to_str().ok())
.map(|s| s.to_owned());
if sec_fetch_mode.as_deref() == Some("navigate") {
return Err(SynapseError::new(
StatusCode::FORBIDDEN,
"Rendezvous content is not accessible via top-level navigation".to_owned(),
"M_FORBIDDEN",
None,
None,
));
}
// Sec-Fetch-User: https://www.w3.org/TR/fetch-metadata/#sec-fetch-user-header
//
// If the request has a Sec-Fetch-User header with a value of "?1", this indicates that the
// request was triggered by user activation, such as a click.
//
// Note that usually Sec-Fetch-Mode would be "navigate" or the Sec-Fetch-Dest would be "document" in this case
// and so the request would be rejected earlier, but we check the user activation just in case those headers are
// not set correctly.
let sec_fetch_user: Option<String> = request
.headers()
.get("sec-fetch-user")
.and_then(|v| v.to_str().ok())
.map(|s| s.to_owned());
if sec_fetch_user.as_deref() == Some("?1") {
return Err(SynapseError::new(
StatusCode::FORBIDDEN,
"Rendezvous content is not accessible from requests with user activation"
.to_owned(),
"M_FORBIDDEN",
None,
None,
));
}
// Sec-Fetch-Site: https://www.w3.org/TR/fetch-metadata/#sec-fetch-site-header
//
// "none" indicates the request did not originate from a web page
// (e.g. typed URL, bookmark, or browser extension), so we disallow it.
let sec_fetch_site: Option<String> = request
.headers()
.get("sec-fetch-site")
.and_then(|v| v.to_str().ok())
.map(|s| s.to_owned());
if sec_fetch_site.as_deref() == Some("none") {
return Err(SynapseError::new(
StatusCode::FORBIDDEN,
"Rendezvous content is not accessible from requests from user interaction"
.to_owned(),
"M_FORBIDDEN",
None,
None,
));
}
let clock = self.clock.bind(py);
let now: u64 = clock.call_method0("time_msec")?.extract()?;
let now = SystemTime::UNIX_EPOCH + Duration::from_millis(now);
let id: Ulid = id.parse().map_err(|_| NotFoundError::new())?;
let session = self
.sessions
.get(&id)
.filter(|s| !s.expired(now))
.ok_or_else(NotFoundError::new)?;
Ok((200, session.get_response(now)))
}
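Condensed, the four checks in handle_get amount to the following gate. A hedged Python sketch (header names per the Fetch Metadata spec; error handling simplified, not Synapse's error types):

def check_fetch_metadata(headers: dict[str, str]) -> None:
    """Reject requests whose Fetch Metadata suggests the response could be
    rendered to, or was initiated directly by, a user."""
    dest = headers.get("sec-fetch-dest")
    if dest is not None and dest != "empty":
        raise PermissionError("not accessible from this request destination")
    if headers.get("sec-fetch-mode") == "navigate":
        raise PermissionError("not accessible via top-level navigation")
    if headers.get("sec-fetch-user") == "?1":
        raise PermissionError("not accessible with user activation")
    if headers.get("sec-fetch-site") == "none":
        raise PermissionError("not accessible from direct user entry")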
fn handle_put(
&mut self,
py: Python<'_>,
id: &str,
twisted_request: &Bound<'_, PyAny>,
) -> PyResult<(u8, PutResponse)> {
let request = http_request_from_twisted(twisted_request)?;
// parse JSON body
let put_request: PutRequest =
serde_json::from_slice(&request.into_body()).map_err(|_| {
SynapseError::new(
StatusCode::BAD_REQUEST,
"Invalid JSON in request body".to_owned(),
"M_INVALID_PARAM",
None,
None,
)
})?;
let sequence_token: String = put_request.sequence_token;
let data: String = put_request.data;
self.check_data_length(&data)?;
let clock = self.clock.bind(py);
let now: u64 = clock.call_method0("time_msec")?.extract()?;
let now = SystemTime::UNIX_EPOCH + Duration::from_millis(now);
let id: Ulid = id.parse().map_err(|_| NotFoundError::new())?;
let session = self
.sessions
.get_mut(&id)
.filter(|s| !s.expired(now))
.ok_or_else(NotFoundError::new)?;
if !session.sequence_token().eq(&sequence_token) {
return Err(SynapseError::new(
StatusCode::CONFLICT,
"sequence_token does not match".to_owned(),
"IO_ELEMENT_MSC4388_CONCURRENT_WRITE",
None,
None,
));
}
session.update(data, now);
Ok((200, session.put_response()))
}
fn handle_delete(&mut self, id: &str) -> PyResult<(u8, ())> {
let id: Ulid = id.parse().map_err(|_| NotFoundError::new())?;
let _session = self.sessions.remove(&id).ok_or_else(NotFoundError::new)?;
Ok((200, ()))
}
}
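handle_put implements optimistic concurrency control: the client must echo the sequence_token from its last read or write, and a mismatch means another writer got there first (the 409 IO_ELEMENT_MSC4388_CONCURRENT_WRITE case above). A minimal Python sketch of the scheme (token generation simplified here; the real token is derived from a content hash, see the session module below):

import secrets

class RendezvousSession:
    def __init__(self, data: str) -> None:
        self.data = data
        self.token = secrets.token_urlsafe(8)

    def put(self, data: str, sequence_token: str) -> str:
        # Compare-and-swap: only apply the write if the caller saw the
        # latest state; otherwise signal a concurrent write (HTTP 409).
        if sequence_token != self.token:
            raise RuntimeError("IO_ELEMENT_MSC4388_CONCURRENT_WRITE")
        self.data = data
        self.token = secrets.token_urlsafe(8)
        return self.token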
pub fn register_module(py: Python<'_>, m: &Bound<'_, PyModule>) -> PyResult<()> {
let child_module = PyModule::new(py, "msc4388_rendezvous")?;
child_module.add_class::<MSC4388RendezvousHandler>()?;
m.add_submodule(&child_module)?;
// We need to manually add the module to sys.modules to make `from
// synapse.synapse_rust import msc4388_rendezvous` work.
py.import("sys")?
.getattr("modules")?
.set_item("synapse.synapse_rust.msc4388_rendezvous", child_module)?;
Ok(())
}


@@ -1,153 +0,0 @@
/*
* This file is licensed under the Affero General Public License (AGPL) version 3.
*
* Copyright (C) 2026 Element Creations Ltd
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* See the GNU Affero General Public License for more details:
* <https://www.gnu.org/licenses/agpl-3.0.html>.
*/
use std::time::{Duration, SystemTime};
use base64::{engine::general_purpose::URL_SAFE_NO_PAD, Engine as _};
use pyo3::{Bound, IntoPyObject, PyAny, Python};
use pythonize::{pythonize, PythonizeError};
use serde::Serialize;
use sha2::{Digest, Sha256};
use ulid::Ulid;
/// A single session, containing data, metadata, and expiry information.
pub struct Session {
id: Ulid,
hash: [u8; 32],
data: String,
last_modified: SystemTime,
expires: SystemTime,
}
#[derive(Serialize)]
pub struct PostResponse {
id: String,
sequence_token: String,
expires_in_ms: u64,
}
impl<'source> IntoPyObject<'source> for PostResponse {
type Target = PyAny;
type Output = Bound<'source, Self::Target>;
type Error = PythonizeError;
fn into_pyobject(self, py: Python<'source>) -> Result<Self::Output, Self::Error> {
pythonize(py, &self)
}
}
#[derive(Serialize)]
pub struct GetResponse {
data: String,
sequence_token: String,
expires_in_ms: u64,
}
impl<'source> IntoPyObject<'source> for GetResponse {
type Target = PyAny;
type Output = Bound<'source, Self::Target>;
type Error = PythonizeError;
fn into_pyobject(self, py: Python<'source>) -> Result<Self::Output, Self::Error> {
pythonize(py, &self)
}
}
#[derive(Serialize)]
pub struct PutResponse {
sequence_token: String,
}
impl<'source> IntoPyObject<'source> for PutResponse {
type Target = PyAny;
type Output = Bound<'source, Self::Target>;
type Error = PythonizeError;
fn into_pyobject(self, py: Python<'source>) -> Result<Self::Output, Self::Error> {
pythonize(py, &self)
}
}
impl Session {
/// Create a new session with the given data and time-to-live.
pub fn new(id: Ulid, data: String, now: SystemTime, ttl: Duration) -> Self {
let hash = Self::compute_hash(&data, now);
Self {
id,
hash,
data,
expires: now + ttl,
last_modified: now,
}
}
/// Returns true if the session has expired at the given time.
pub fn expired(&self, now: SystemTime) -> bool {
self.expires <= now
}
/// Update the session with new data and last modified time.
pub fn update(&mut self, data: String, now: SystemTime) {
self.hash = Self::compute_hash(&data, now);
self.data = data;
self.last_modified = now;
}
/// Compute the hash of the data and timestamp.
fn compute_hash(data: &str, now: SystemTime) -> [u8; 32] {
let mut hasher = Sha256::new();
hasher.update(data);
let now_millis = now
.duration_since(SystemTime::UNIX_EPOCH)
.unwrap_or_default()
.as_millis();
hasher.update(now_millis.to_be_bytes());
hasher.finalize().into()
}
/// The sequence token for the session.
pub fn sequence_token(&self) -> String {
URL_SAFE_NO_PAD.encode(self.hash)
}
pub fn get_response(&self, now: SystemTime) -> GetResponse {
GetResponse {
data: self.data.clone(),
sequence_token: self.sequence_token(),
expires_in_ms: self
.expires
.duration_since(now)
.unwrap_or_default()
.as_millis() as u64,
}
}
pub fn post_response(&self, now: SystemTime) -> PostResponse {
PostResponse {
id: self.id.to_string(),
sequence_token: self.sequence_token(),
expires_in_ms: self
.expires
.duration_since(now)
.unwrap_or_default()
.as_millis() as u64,
}
}
pub fn put_response(&self) -> PutResponse {
PutResponse {
sequence_token: self.sequence_token(),
}
}
}
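For reference, compute_hash and sequence_token above translate to roughly the following Python (as_millis() yields a u128, hence the 16 big-endian bytes):

import base64
import hashlib

def sequence_token(data: str, now_ms: int) -> str:
    h = hashlib.sha256()
    h.update(data.encode("utf-8"))
    # Matches Rust's u128::to_be_bytes() of the millisecond timestamp.
    h.update(now_ms.to_bytes(16, "big"))
    # URL_SAFE_NO_PAD base64 of the 32-byte digest.
    return base64.urlsafe_b64encode(h.digest()).rstrip(b"=").decode("ascii")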


@@ -273,16 +273,14 @@ pub enum SimpleJsonValue {
Null,
}
impl<'source> FromPyObject<'_, 'source> for SimpleJsonValue {
type Error = PyErr;
fn extract(ob: Borrowed<'_, 'source, PyAny>) -> Result<Self, Self::Error> {
if let Ok(s) = ob.cast::<PyString>() {
impl<'source> FromPyObject<'source> for SimpleJsonValue {
fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
if let Ok(s) = ob.downcast::<PyString>() {
Ok(SimpleJsonValue::Str(Cow::Owned(s.to_string())))
// A bool *is* an int, ensure we try bool first.
} else if let Ok(b) = ob.cast::<PyBool>() {
} else if let Ok(b) = ob.downcast::<PyBool>() {
Ok(SimpleJsonValue::Bool(b.extract()?))
} else if let Ok(i) = ob.cast::<PyInt>() {
} else if let Ok(i) = ob.downcast::<PyInt>() {
Ok(SimpleJsonValue::Int(i.extract()?))
} else if ob.is_none() {
Ok(SimpleJsonValue::Null)
@@ -303,14 +301,12 @@ pub enum JsonValue {
Value(SimpleJsonValue),
}
impl<'source> FromPyObject<'_, 'source> for JsonValue {
type Error = PyErr;
fn extract(ob: Borrowed<'_, 'source, PyAny>) -> Result<Self, Self::Error> {
if let Ok(l) = ob.cast::<PyList>() {
impl<'source> FromPyObject<'source> for JsonValue {
fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
if let Ok(l) = ob.downcast::<PyList>() {
match l
.iter()
.map(|it| SimpleJsonValue::extract(it.as_borrowed()))
.map(|it| SimpleJsonValue::extract_bound(&it))
.collect()
{
Ok(a) => Ok(JsonValue::Array(a)),
@@ -318,7 +314,7 @@ impl<'source> FromPyObject<'_, 'source> for JsonValue {
"Can't convert to JsonValue::Array: {e}"
))),
}
} else if let Ok(v) = SimpleJsonValue::extract(ob) {
} else if let Ok(v) = SimpleJsonValue::extract_bound(ob) {
Ok(JsonValue::Value(v))
} else {
Err(PyTypeError::new_err(format!(
@@ -389,11 +385,9 @@ impl<'source> IntoPyObject<'source> for Condition {
}
}
impl<'source> FromPyObject<'_, 'source> for Condition {
type Error = PyErr;
fn extract(ob: Borrowed<'_, 'source, PyAny>) -> Result<Self, Self::Error> {
Ok(depythonize(&ob)?)
impl<'source> FromPyObject<'source> for Condition {
fn extract_bound(ob: &Bound<'source, PyAny>) -> PyResult<Self> {
Ok(depythonize(ob)?)
}
}
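The "try bool first" ordering matters for the same reason in Python as in pyo3: bool is a subclass of int, so an int check would happily accept True and False.

# True would otherwise be extracted as the integer 1.
assert isinstance(True, int)
assert type(True) is bool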


@@ -1,5 +1,5 @@
$schema: https://element-hq.github.io/synapse/latest/schema/v1/meta.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.149/synapse-config.schema.json
$id: https://element-hq.github.io/synapse/schema/synapse/v1.144/synapse-config.schema.json
type: object
properties:
modules:
@@ -1113,11 +1113,8 @@ properties:
When this option is enabled, the room "complexity" will be checked before
a user joins a new remote room. If it is above the complexity limit, the
server will disallow joining, or will instantly leave. This is useful for
homeservers that are resource-constrained. In Synapse, the complexity of a
room is measured by the number of current state events in a room, divided
by 500. "Current" here means the latest state, i.e. if a user joins, then
leaves, then joins, that will count as 1 current `m.room.member` state
event.
homeservers that are resource-constrained. Room complexity is an arbitrary
measure based on factors such as the number of users in the room.
properties:
enabled:
type: boolean
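The removed wording pins the metric down exactly; in Python terms (a sketch of the described formula, not Synapse's implementation):

def room_complexity(current_state_events: int) -> float:
    # E.g. a room with 10,000 current state events has complexity 20.0.
    return current_state_events / 500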
@@ -2277,16 +2274,6 @@ properties:
examples:
- per_second: 1.0
burst_count: 5.0
rc_user_directory:
$ref: "#/$defs/rc"
description: >-
This option allows admins to ratelimit searches in the user directory.
_Added in Synapse 1.145.0._
default:
per_second: 0.016
burst_count: 200.0
federation_rr_transactions_per_room_per_second:
type: integer
description: >-
@@ -2351,19 +2338,6 @@ properties:
default: true
examples:
- false
enable_local_media_storage:
type: boolean
description: >-
Enable the local on-disk media storage provider. When disabled, media is
stored only in configured `media_storage_providers` and temporary files are
used for processing.
**Warning:** If this option is set to `false` and no `media_storage_providers`
are configured, all media requests will return 404 errors as there will be
no storage backend available.
default: true
examples:
- false
media_store_path:
type: string
description: Directory where uploaded images and attachments are stored.
@@ -5529,13 +5503,11 @@ properties:
outbound_federation_restricted_to:
type: array
description: >-
You can restrict outbound federation traffic to only go through a specific subset
of workers including the [Secure Border Gateway
(SBG)](https://element.io/en/server-suite/secure-border-gateways). Any worker
specified here (including the SBG) must also be in the
[`instance_map`](#instance_map).
[`worker_replication_secret`](#worker_replication_secret) must also be configured
to authorize inter-worker communication.
When using workers, you can restrict outbound federation traffic to only
go through a specific subset of workers. Any worker specified here must
also be in the [`instance_map`](#instance_map).
[`worker_replication_secret`](#worker_replication_secret) must also be
configured to authorize inter-worker communication.
Also see the [worker


@@ -31,7 +31,7 @@ DISTS = (
"debian:sid", # (rolling distro, no EOL)
"ubuntu:jammy", # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
"ubuntu:noble", # 24.04 LTS (EOL 2029-06)
"ubuntu:questing", # 25.10 (EOL 2026-07)
"ubuntu:plucky", # 25.04 (EOL 2026-01)
"debian:trixie", # (EOL not specified yet)
)
@@ -94,7 +94,6 @@ class Builder:
build_args = (
(
"docker",
"buildx",
"build",
"--tag",
"dh-venv-builder:" + tag,


@@ -14,6 +14,7 @@ import sqlglot.expressions
SCHEMA_FILE_REGEX = re.compile(r"^synapse/storage/schema/(.*)/delta/(.*)/(.*)$")
# The base branch we want to check against. We use the main development branch
# on the assumption that it is what we are developing against.
DEVELOP_BRANCH = "develop"
@@ -187,14 +188,6 @@ def check_schema_delta(delta_files: list[str], force_colors: bool) -> bool:
sql_lang = "postgres"
if delta_file.endswith(".sqlite"):
sql_lang = "sqlite"
elif delta_file.endswith(".py"):
click.secho(
f"Skipping Python delta file: '{delta_file}'",
fg="yellow",
bold=True,
color=force_colors,
)
return True
statements = sqlglot.parse(delta_contents, read=sql_lang)


@@ -35,26 +35,6 @@
# Exit if a line returns a non-zero exit code
set -e
# Tag local builds with a dummy registry namespace so that later builds may reference
# them exactly instead of accidentally pulling from a remote registry.
#
# This is important as some storage drivers/types prefer remote images over local
# (`containerd`) which causes problems as we're testing against some remote image that
# doesn't include all of the changes that we're trying to test (be it locally or in a PR
# in CI). This stems from a real-world problem where the GitHub runners were
# updated to use Docker Engine 29.0.0+ which uses `containerd` by default for new
# installations.
LOCAL_IMAGE_NAMESPACE=localhost
# The image tags for how these images will be stored in the registry
SYNAPSE_IMAGE_PATH="$LOCAL_IMAGE_NAMESPACE/synapse"
SYNAPSE_WORKERS_IMAGE_PATH="$LOCAL_IMAGE_NAMESPACE/synapse-workers"
COMPLEMENT_SYNAPSE_IMAGE_PATH="$LOCAL_IMAGE_NAMESPACE/complement-synapse"
SYNAPSE_EDITABLE_IMAGE_PATH="$LOCAL_IMAGE_NAMESPACE/synapse-editable"
SYNAPSE_WORKERS_EDITABLE_IMAGE_PATH="$LOCAL_IMAGE_NAMESPACE/synapse-workers-editable"
COMPLEMENT_SYNAPSE_EDITABLE_IMAGE_PATH="$LOCAL_IMAGE_NAMESPACE/complement-synapse-editable"
# Helper to emit annotations that collapse portions of the log in GitHub Actions
echo_if_github() {
if [[ -n "$GITHUB_WORKFLOW" ]]; then
@@ -67,13 +47,10 @@ usage() {
cat >&2 <<EOF
Usage: $0 [-f] <go test arguments>...
Run the complement test suite on Synapse.
--in-repo
Run the in-repo suite of Complement tests (see `./complement` in this project)
instead of the Complement tests from the Complement repo.
-f, --fast
Skip rebuilding the docker images, and just use the most recent
'localhost/complement-synapse:latest' image.
'complement-synapse:latest' image.
Conflicts with --build-only.
--build-only
@@ -105,7 +82,6 @@ main() {
# parse our arguments
skip_docker_build=""
skip_complement_run=""
use_in_repo_tests=""
while [ $# -ge 1 ]; do
arg=$1
case "$arg" in
@@ -113,9 +89,6 @@ main() {
usage
return 1
;;
"--in-repo")
use_in_repo_tests=1
;;
"-f"|"--fast")
skip_docker_build=1
;;
@@ -174,16 +147,16 @@ main() {
editable_mount="$(realpath .):/editable-src:z"
if [ -n "$rebuild_editable_synapse" ]; then
unset skip_docker_build
elif $CONTAINER_RUNTIME inspect "$COMPLEMENT_SYNAPSE_EDITABLE_IMAGE_PATH" &>/dev/null; then
elif $CONTAINER_RUNTIME inspect complement-synapse-editable &>/dev/null; then
# complement-synapse-editable already exists: see if we can still use it:
# - The Rust module must still be importable; it will fail to import if the Rust source has changed.
# - The Poetry lock file must be the same (otherwise we assume dependencies have changed)
# First set up the module in the right place for an editable installation.
$CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'cp' "$COMPLEMENT_SYNAPSE_EDITABLE_IMAGE_PATH" -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
$CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'cp' complement-synapse-editable -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
if ($CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'python' "$COMPLEMENT_SYNAPSE_EDITABLE_IMAGE_PATH" -c 'import synapse.synapse_rust' \
&& $CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'diff' "$COMPLEMENT_SYNAPSE_EDITABLE_IMAGE_PATH" --brief /editable-src/poetry.lock /poetry.lock.bak); then
if ($CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'python' complement-synapse-editable -c 'import synapse.synapse_rust' \
&& $CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'diff' complement-synapse-editable --brief /editable-src/poetry.lock /poetry.lock.bak); then
skip_docker_build=1
else
echo "Editable Synapse image is stale. Will rebuild."
@@ -197,47 +170,42 @@ main() {
# Build a special image designed for use in development with editable
# installs.
$CONTAINER_RUNTIME build \
-t "$SYNAPSE_EDITABLE_IMAGE_PATH" \
$CONTAINER_RUNTIME build -t synapse-editable \
-f "docker/editable.Dockerfile" .
$CONTAINER_RUNTIME build \
-t "$SYNAPSE_WORKERS_EDITABLE_IMAGE_PATH" \
--build-arg FROM="$SYNAPSE_EDITABLE_IMAGE_PATH" \
$CONTAINER_RUNTIME build -t synapse-workers-editable \
--build-arg FROM=synapse-editable \
-f "docker/Dockerfile-workers" .
$CONTAINER_RUNTIME build \
-t "$COMPLEMENT_SYNAPSE_EDITABLE_IMAGE_PATH" \
--build-arg FROM="$SYNAPSE_WORKERS_EDITABLE_IMAGE_PATH" \
$CONTAINER_RUNTIME build -t complement-synapse-editable \
--build-arg FROM=synapse-workers-editable \
-f "docker/complement/Dockerfile" "docker/complement"
# Prepare the Rust module
$CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'cp' "$COMPLEMENT_SYNAPSE_EDITABLE_IMAGE_PATH" -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
$CONTAINER_RUNTIME run --rm -v $editable_mount --entrypoint 'cp' complement-synapse-editable -- /synapse_rust.abi3.so.bak /editable-src/synapse/synapse_rust.abi3.so
else
# Build the base Synapse image from the local checkout
echo_if_github "::group::Build Docker image: matrixdotorg/synapse"
$CONTAINER_RUNTIME build \
-t "$SYNAPSE_IMAGE_PATH" \
--build-arg TEST_ONLY_SKIP_DEP_HASH_VERIFICATION \
--build-arg TEST_ONLY_IGNORE_POETRY_LOCKFILE \
-f "docker/Dockerfile" .
$CONTAINER_RUNTIME build -t matrixdotorg/synapse \
--build-arg TEST_ONLY_SKIP_DEP_HASH_VERIFICATION \
--build-arg TEST_ONLY_IGNORE_POETRY_LOCKFILE \
-f "docker/Dockerfile" .
echo_if_github "::endgroup::"
# Build the workers docker image (from the base Synapse image we just built).
echo_if_github "::group::Build Docker image: matrixdotorg/synapse-workers"
$CONTAINER_RUNTIME build \
-t "$SYNAPSE_WORKERS_IMAGE_PATH" \
--build-arg FROM="$SYNAPSE_IMAGE_PATH" \
-f "docker/Dockerfile-workers" .
$CONTAINER_RUNTIME build -t matrixdotorg/synapse-workers -f "docker/Dockerfile-workers" .
echo_if_github "::endgroup::"
# Build the unified Complement image (from the worker Synapse image we just built).
echo_if_github "::group::Build Docker image: complement/Dockerfile"
$CONTAINER_RUNTIME build \
-t "$COMPLEMENT_SYNAPSE_IMAGE_PATH" \
--build-arg FROM="$SYNAPSE_WORKERS_IMAGE_PATH" \
$CONTAINER_RUNTIME build -t complement-synapse \
`# This is the tag we end up pushing to the registry (see` \
`# .github/workflows/push_complement_image.yml) so let's just label it now` \
`# so people can reference it by the same name locally.` \
-t ghcr.io/element-hq/synapse/complement-synapse \
-f "docker/complement/Dockerfile" "docker/complement"
echo_if_github "::endgroup::"
@@ -248,10 +216,7 @@ main() {
echo "Skipping Docker image build as requested."
fi
# Default set of Complement tests to run from the Complement repo
#
# We pick and choose the specific MSCs that Synapse supports.
default_complement_test_packages=(
test_packages=(
./tests/csapi
./tests
./tests/msc3874
@@ -264,24 +229,16 @@ main() {
./tests/msc4140
./tests/msc4155
./tests/msc4306
./tests/msc4222
./tests/msc4354
)
# Export the list of test packages as a space-separated environment variable, so other
# scripts can use it.
export SYNAPSE_SUPPORTED_COMPLEMENT_TEST_PACKAGES="${default_complement_test_packages[@]}"
export SYNAPSE_SUPPORTED_COMPLEMENT_TEST_PACKAGES="${test_packages[@]}"
# Default set of Complement tests to run when using the in-repo test suite. Most
# likely, this should be all tests.
#
# Relative to the `./complement` repo in this project
default_in_repo_complement_test_packages=(
./tests/...
)
export COMPLEMENT_BASE_IMAGE="$COMPLEMENT_SYNAPSE_IMAGE_PATH"
export COMPLEMENT_BASE_IMAGE=complement-synapse
if [ -n "$use_editable_synapse" ]; then
export COMPLEMENT_BASE_IMAGE="$COMPLEMENT_SYNAPSE_EDITABLE_IMAGE_PATH"
export COMPLEMENT_BASE_IMAGE=complement-synapse-editable
export COMPLEMENT_HOST_MOUNTS="$editable_mount"
fi
@@ -360,26 +317,11 @@ main() {
echo "Skipping Complement run as requested."
return 0
fi
# Print out the executed commands so it's more obvious what's happening at the end here.
# Things are slightly ambiguous with the in-repo vs Complement tests.
set -x
if [ -n "$use_in_repo_tests" ]; then
# Run the suite of Complement tests in the `./complement` directory in this repo
cd "./complement"
go test "${test_args[@]}" "$@" "${default_in_repo_complement_test_packages[@]}"
else
# Run the tests (from the Complement repo)!
cd "$COMPLEMENT_DIR"
go test "${test_args[@]}" "$@" "${default_complement_test_packages[@]}"
fi
# We don't need to print out executed commands anymore
#
# This is just `set +x` without printing `+ set +x` to the console (via
# https://stackoverflow.com/questions/13195655/bash-set-x-without-it-being-printed/19226038#19226038)
{ set +x; } 2>/dev/null
# Run the tests!
echo "Running Complement with ${test_args[@]} $@ ${test_packages[@]}"
cd "$COMPLEMENT_DIR"
go test "${test_args[@]}" "$@" "${test_packages[@]}"
}
main "$@"


@@ -145,7 +145,7 @@ def request(
print("Requesting %s" % dest, file=sys.stderr)
s = requests.Session()
s.mount("matrix-federation://", MatrixConnectionAdapter(verify_tls=verify_tls))
s.mount("matrix-federation://", MatrixConnectionAdapter())
headers: dict[str, str] = {
"Authorization": authorization_headers[0],
@@ -267,17 +267,6 @@ def read_args_from_config(args: argparse.Namespace) -> None:
class MatrixConnectionAdapter(HTTPAdapter):
"""
A Matrix federation-aware HTTP Adapter.
"""
verify_tls: bool
"""whether to verify the remote server's TLS certificate."""
def __init__(self, verify_tls: bool = True) -> None:
self.verify_tls = verify_tls
super().__init__()
def send(
self,
request: PreparedRequest,
@@ -291,7 +280,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
assert isinstance(request.url, str)
parsed = urlparse.urlsplit(request.url)
server_name = parsed.netloc
well_known = self._get_well_known(parsed.netloc, verify_tls=self.verify_tls)
well_known = self._get_well_known(parsed.netloc)
if well_known:
server_name = well_known
@@ -329,21 +318,6 @@ class MatrixConnectionAdapter(HTTPAdapter):
print(
f"Connecting to {host}:{port} with SNI {ssl_server_name}", file=sys.stderr
)
if proxies:
scheme = parsed.scheme
if isinstance(scheme, bytes):
scheme = scheme.decode("utf-8")
proxy_for_scheme = proxies.get(scheme)
if proxy_for_scheme:
return self.proxy_manager_for(proxy_for_scheme).connection_from_host(
host,
port=port,
scheme="https",
pool_kwargs={"server_hostname": ssl_server_name},
)
return self.poolmanager.connection_from_host(
host,
port=port,
@@ -394,7 +368,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
return server_name, 8448, server_name
@staticmethod
def _get_well_known(server_name: str, verify_tls: bool = True) -> str | None:
def _get_well_known(server_name: str) -> str | None:
if ":" in server_name:
# explicit port, or ipv6 literal. Either way, no .well-known
return None
@@ -405,7 +379,7 @@ class MatrixConnectionAdapter(HTTPAdapter):
print(f"fetching {uri}", file=sys.stderr)
try:
resp = requests.get(uri, verify=verify_tls)
resp = requests.get(uri)
if resp.status_code != 200:
print("%s gave %i" % (uri, resp.status_code), file=sys.stderr)
return None
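The surrounding hunks thread a verify_tls flag through the server-side well-known lookup. A compact sketch of that lookup (per Matrix federation server discovery; requests-based and simplified relative to the script above):

import requests

def get_well_known(server_name: str, verify_tls: bool = True) -> str | None:
    if ":" in server_name:
        # Explicit port or IPv6 literal: no .well-known lookup.
        return None
    uri = f"https://{server_name}/.well-known/matrix/server"
    resp = requests.get(uri, verify=verify_tls)
    if resp.status_code != 200:
        return None
    # The delegated server name, e.g. {"m.server": "synapse.example.com:443"}.
    return resp.json().get("m.server")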


@@ -139,14 +139,3 @@ mypy
# Generate configuration documentation from the JSON Schema
./scripts-dev/gen_config_documentation.py schema/synapse-config.schema.yaml > docs/usage/configuration/config_documentation.md
# Lint/format the in-repo Complement test code (Go)
pushd ./complement
# Run golangci-lint with:
# - The `run` command will lint the code
# - `--fix`: Will apply any suggested fixes like autofixes from `linters` *and*
# `formatters` which always produce suggested fixes that are equivalent to running
# `golangci-lint fmt`.
# - `--max-issues-per-linter=0`: Show all issues, don't limit the number reported
go run github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.6.1 run ./... --fix --max-issues-per-linter=0
popd


@@ -133,7 +133,6 @@ prometheus_metric_fullname_to_label_arg_map: Mapping[str, ArgLocation | None] =
"prometheus_client.metrics.Info": ArgLocation("labelnames", 2),
"prometheus_client.metrics.Enum": ArgLocation("labelnames", 2),
"synapse.metrics.LaterGauge": ArgLocation("labelnames", 2),
"synapse.metrics._InFlightGaugeRuntime": ArgLocation("labels", 2),
"synapse.metrics.InFlightGauge": ArgLocation("labels", 2),
"synapse.metrics.GaugeBucketCollector": ArgLocation("labelnames", 2),
"prometheus_client.registry.Collector": None,


@@ -32,7 +32,7 @@ import time
import urllib.request
from os import path
from tempfile import TemporaryDirectory
from typing import Any
from typing import Any, Match
import attr
import click
@@ -455,19 +455,19 @@ def _publish(gh_token: str) -> None:
gh = Github(auth=github.Auth.Token(token=gh_token))
gh_repo = gh.get_repo("element-hq/synapse")
for release in gh_repo.get_releases():
if release.name == tag_name:
if release.title == tag_name:
break
else:
raise ClickException(f"Failed to find GitHub release for {tag_name}")
assert release.name == tag_name
assert release.title == tag_name
if not release.draft:
click.echo("Release already published.")
return
release = release.update_release(
name=release.name,
name=release.title,
message=release.body,
tag_name=release.tag_name,
prerelease=release.prerelease,
@@ -968,6 +968,10 @@ def generate_and_write_changelog(
new_changes = new_changes.replace(
"No significant changes.", f"No significant changes since {current_version}."
)
new_changes += build_dependabot_changelog(
repo,
current_version,
)
# Prepend changes to changelog
with open("CHANGES.md", "r+") as f:
@@ -982,5 +986,49 @@ def generate_and_write_changelog(
os.remove(filename)
def build_dependabot_changelog(repo: Repo, current_version: version.Version) -> str:
"""Summarise dependabot commits between `current_version` and `release_branch`.
Returns an empty string if there have been no such commits; otherwise outputs a
third-level markdown header followed by an unordered list."""
last_release_commit = repo.tag("v" + str(current_version)).commit
rev_spec = f"{last_release_commit.hexsha}.."
commits = list(git.objects.Commit.iter_items(repo, rev_spec))
messages = []
for commit in reversed(commits):
if commit.author.name == "dependabot[bot]":
message: str | bytes = commit.message
if isinstance(message, bytes):
message = message.decode("utf-8")
messages.append(message.split("\n", maxsplit=1)[0])
if not messages:
print(f"No dependabot commits in range {rev_spec}", file=sys.stderr)
return ""
messages.sort()
def replacer(match: Match[str]) -> str:
desc = match.group(1)
number = match.group(2)
return f"* {desc}. ([\\#{number}](https://github.com/element-hq/synapse/issues/{number}))"
for i, message in enumerate(messages):
messages[i] = re.sub(r"(.*) \(#(\d+)\)$", replacer, message)
messages.insert(0, "### Updates to locked dependencies\n")
# Add an extra blank line to the bottom of the section
messages.append("")
return "\n".join(messages)
@cli.command()
@click.argument("since")
def test_dependabot_changelog(since: str) -> None:
"""Test building the dependabot changelog.
Summarises all dependabot commits between the SINCE tag and the current git HEAD."""
print(build_dependabot_changelog(git.Repo("."), version.Version(since)))
if __name__ == "__main__":
cli()


@@ -172,7 +172,7 @@ if __name__ == "__main__":
# Expect JSON data on stdin.
context, book = json.load(sys.stdin)
for section in book["items"]:
for section in book["sections"]:
if "Chapter" in section and section["Chapter"]["path"] == "upgrade.md":
section["Chapter"]["content"] = section["Chapter"]["content"].replace(
"<!-- REPLACE_WITH_SCHEMA_VERSIONS -->", calculate_version_chart()


@@ -132,6 +132,7 @@ BOOLEAN_COLUMNS = {
"has_known_state",
"is_encrypted",
],
"sticky_events": ["soft_failed"],
"thread_subscriptions": ["subscribed", "automatic"],
"users": ["shadow_banned", "approved", "locked", "suspended"],
"un_partial_stated_event_stream": ["rejection_status_changed"],


@@ -45,7 +45,6 @@ from synapse.synapse_rust.http_client import HttpClient
from synapse.types import JsonDict, Requester, UserID, create_requester
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.duration import Duration
from synapse.util.json import json_decoder
from . import introspection_response_timer
@@ -140,7 +139,7 @@ class MasDelegatedAuth(BaseAuth):
clock=self._clock,
name="mas_token_introspection",
server_name=self.server_name,
timeout=Duration(minutes=2),
timeout_ms=120_000,
# don't log because the keys are access tokens
enable_logging=False,
)


@@ -49,7 +49,6 @@ from synapse.synapse_rust.http_client import HttpClient
from synapse.types import Requester, UserID, create_requester
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall
from synapse.util.caches.response_cache import ResponseCache, ResponseCacheContext
from synapse.util.duration import Duration
from synapse.util.json import json_decoder
from . import introspection_response_timer
@@ -206,7 +205,7 @@ class MSC3861DelegatedAuth(BaseAuth):
clock=self._clock,
name="token_introspection",
server_name=self.server_name,
timeout=Duration(minutes=2),
timeout_ms=120_000,
# don't log because the keys are access tokens
enable_logging=False,
)


@@ -26,24 +26,9 @@
import enum
from typing import Final, TypedDict
from synapse.util.duration import Duration
# the max size of a (canonical-json-encoded) event
MAX_PDU_SIZE = 65536
# The maximum allowed size of an HTTP request.
# Other than media uploads, the biggest request we expect to see is a fully-loaded
# /federation/v1/send request.
#
# The main thing in such a request is up to 50 PDUs, and up to 100 EDUs. PDUs are
# limited to 65536 bytes (possibly slightly more if the sender didn't use canonical
# json encoding); there is no specced limit to EDUs (see
# https://github.com/matrix-org/matrix-doc/issues/3121).
#
# in short, we somewhat arbitrarily limit requests to 200 * 64K (about 12.5M)
#
MAX_REQUEST_SIZE = 200 * MAX_PDU_SIZE
# Max/min size of ints in canonical JSON
CANONICALJSON_MAX_INT = (2**53) - 1
CANONICALJSON_MIN_INT = -CANONICALJSON_MAX_INT
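Checking the arithmetic in the removed comment above:

MAX_PDU_SIZE = 65536
MAX_REQUEST_SIZE = 200 * MAX_PDU_SIZE  # 13,107,200 bytes
assert MAX_REQUEST_SIZE / (1024 * 1024) == 12.5  # i.e. "about 12.5M"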
@@ -325,7 +310,9 @@ class AccountDataTypes:
"org.matrix.msc4155.invite_permission_config"
)
# MSC4380: Invite blocking
INVITE_PERMISSION_CONFIG: Final = "m.invite_permission_config"
MSC4380_INVITE_PERMISSION_CONFIG: Final = (
"org.matrix.msc4380.invite_permission_config"
)
# Synapse-specific behaviour. See "Client-Server API Extensions" documentation
# in Admin API for more information.
SYNAPSE_ADMIN_CLIENT_CONFIG: Final = "io.element.synapse.admin_client_config"
@@ -382,37 +369,15 @@ class ProfileFields:
class StickyEventField(TypedDict):
"""
Dict content of the `sticky` part of an event.
"""
duration_ms: int
class StickyEvent:
QUERY_PARAM_NAME: Final = "org.matrix.msc4354.sticky_duration_ms"
"""
Query parameter used by clients for setting the sticky duration of an event they are sending.
Applies to:
- /rooms/.../send/...
- /rooms/.../state/...
"""
EVENT_FIELD_NAME: Final = "msc4354_sticky"
"""
Name of the field in the top-level event dict that contains the sticky event dict.
"""
MAX_DURATION: Duration = Duration(hours=1)
FIELD_NAME: Final = "msc4354_sticky"
MAX_DURATION_MS: Final = 3600000 # 1 hour
"""
Maximum stickiness duration as specified in MSC4354.
Ensures that the data sent down the /sync response does not grow unbounded.
"""
MAX_EVENTS_IN_SYNC: Final = 100
"""
Maximum number of sticky events to include in /sync.
This is the default specified in the MSC. Chosen arbitrarily.
"""

View File

@@ -138,7 +138,7 @@ class Codes(str, Enum):
KEY_TOO_LARGE = "M_KEY_TOO_LARGE"
# Part of MSC4155/MSC4380
INVITE_BLOCKED = "M_INVITE_BLOCKED"
INVITE_BLOCKED = "ORG.MATRIX.MSC4155.M_INVITE_BLOCKED"
# Part of MSC4190
APPSERVICE_LOGIN_UNSUPPORTED = "IO.ELEMENT.MSC4190.M_APPSERVICE_LOGIN_UNSUPPORTED"
@@ -856,12 +856,6 @@ class HttpResponseException(CodeMessageException):
return ProxiedRequestError(self.code, errmsg, errcode, j)
class HomeServerNotSetupException(Exception):
"""
Raised when an operation is attempted on the HomeServer before setup() has been called.
"""
class ShadowBanError(Exception):
"""
Raised when a shadow-banned user attempts to perform an action.
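The `INVITE_BLOCKED` change follows the usual convention: while an MSC is unstable its error code carries an `ORG.MATRIX.MSCxxxx.` prefix (compare `IO.ELEMENT.MSC4190.` below), moving to the bare `M_` form once the proposal is accepted. A toy illustration of how a client might tolerate both spellings (illustrative only, not Synapse code):

STABLE = "M_INVITE_BLOCKED"
UNSTABLE = "ORG.MATRIX.MSC4155.M_INVITE_BLOCKED"

def is_invite_blocked(errcode: str) -> bool:
    # Accept both spellings so clients keep working across the transition.
    return errcode in (STABLE, UNSTABLE)

assert is_invite_blocked(UNSTABLE)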

View File

@@ -18,6 +18,7 @@
#
#
from typing import Callable
import attr
@@ -495,3 +496,40 @@ KNOWN_ROOM_VERSIONS: dict[str, RoomVersion] = {
RoomVersions.HydraV11,
)
}
@attr.s(slots=True, frozen=True, auto_attribs=True)
class RoomVersionCapability:
"""An object which describes the unique attributes of a room version."""
identifier: str # the identifier for this capability
preferred_version: RoomVersion | None
support_check_lambda: Callable[[RoomVersion], bool]
MSC3244_CAPABILITIES = {
cap.identifier: {
"preferred": (
cap.preferred_version.identifier
if cap.preferred_version is not None
else None
),
"support": [
v.identifier
for v in KNOWN_ROOM_VERSIONS.values()
if cap.support_check_lambda(v)
],
}
for cap in (
RoomVersionCapability(
"knock",
RoomVersions.V7,
lambda room_version: room_version.knock_join_rule,
),
RoomVersionCapability(
"restricted",
RoomVersions.V9,
lambda room_version: room_version.restricted_join_rule,
),
)
}
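The comprehension above emits, for each capability, its preferred room version plus every known version passing the capability's check. The same shape with toy data, as a self-contained sketch (all names illustrative):

from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Version:
    identifier: str
    knock_join_rule: bool

KNOWN = {
    "7": Version("7", knock_join_rule=True),
    "6": Version("6", knock_join_rule=False),
}

@dataclass(frozen=True)
class Capability:
    identifier: str
    preferred: Version | None
    check: Callable[[Version], bool]

caps = [Capability("knock", KNOWN["7"], lambda v: v.knock_join_rule)]

capabilities = {
    c.identifier: {
        "preferred": c.preferred.identifier if c.preferred else None,
        "support": [v.identifier for v in KNOWN.values() if c.check(v)],
    }
    for c in caps
}
assert capabilities == {"knock": {"preferred": "7", "support": ["7"]}}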

View File

@@ -54,9 +54,7 @@ def check_bind_error(
"""
if address == "0.0.0.0" and "::" in bind_addresses:
logger.warning(
"Failed to listen on 0.0.0.0, continuing because listening on [::]. Original exception: %s: %s",
type(e).__name__,
str(e),
"Failed to listen on 0.0.0.0, continuing because listening on [::]"
)
else:
raise e

View File

@@ -36,13 +36,12 @@ from typing import (
Awaitable,
Callable,
NoReturn,
Optional,
cast,
)
from wsgiref.simple_server import WSGIServer
from cryptography.utils import CryptographyDeprecationWarning
from typing_extensions import ParamSpec, assert_never
from typing_extensions import ParamSpec
import twisted
from twisted.internet import defer, error, reactor as _reactor
@@ -60,17 +59,12 @@ from twisted.python.threadpool import ThreadPool
from twisted.web.resource import Resource
import synapse.util.caches
from synapse.api.constants import MAX_REQUEST_SIZE
from synapse.api.constants import MAX_PDU_SIZE
from synapse.app import check_bind_error
from synapse.config import ConfigError
from synapse.config._base import format_config_error
from synapse.config.homeserver import HomeServerConfig
from synapse.config.server import (
ListenerConfig,
ManholeConfig,
TCPListenerConfig,
UnixListenerConfig,
)
from synapse.config.server import ListenerConfig, ManholeConfig, TCPListenerConfig
from synapse.crypto import context_factory
from synapse.events.auto_accept_invites import InviteAutoAccepter
from synapse.events.presence_router import load_legacy_presence_router
@@ -419,44 +413,13 @@ def listen_unix(
]
class ListenerException(RuntimeError):
"""
An exception raised when we fail to listen with the given `ListenerConfig`.
Attributes:
listener_config: The listener config that caused the exception.
"""
def __init__(
self,
listener_config: ListenerConfig,
):
listener_human_name = ""
port = ""
if isinstance(listener_config, TCPListenerConfig):
listener_human_name = "TCP port"
port = str(listener_config.port)
elif isinstance(listener_config, UnixListenerConfig):
listener_human_name = "unix socket"
port = listener_config.path
else:
assert_never(listener_config)
super().__init__(
"Failed to listen on %s (%s) with the given listener config: %s"
% (listener_human_name, port, listener_config)
)
self.listener_config = listener_config
def listen_http(
hs: "HomeServer",
listener_config: ListenerConfig,
root_resource: Resource,
version_string: str,
max_request_body_size: int,
context_factory: Optional[IOpenSSLContextFactory],
context_factory: IOpenSSLContextFactory | None,
reactor: ISynapseReactor = reactor,
) -> list[Port]:
"""
@@ -484,55 +447,39 @@ def listen_http(
hs=hs,
)
try:
if isinstance(listener_config, TCPListenerConfig):
if listener_config.is_tls():
# refresh_certificate should have been called before this.
assert context_factory is not None
ports = listen_ssl(
listener_config.bind_addresses,
listener_config.port,
site,
context_factory,
reactor=reactor,
)
logger.info(
"Synapse now listening on TCP port %d (TLS)", listener_config.port
)
else:
ports = listen_tcp(
listener_config.bind_addresses,
listener_config.port,
site,
reactor=reactor,
)
logger.info(
"Synapse now listening on TCP port %d", listener_config.port
)
elif isinstance(listener_config, UnixListenerConfig):
ports = listen_unix(
listener_config.path, listener_config.mode, site, reactor=reactor
if isinstance(listener_config, TCPListenerConfig):
if listener_config.is_tls():
# refresh_certificate should have been called before this.
assert context_factory is not None
ports = listen_ssl(
listener_config.bind_addresses,
listener_config.port,
site,
context_factory,
reactor=reactor,
)
# getHost() returns a UNIXAddress which contains an instance variable of 'name'
# encoded as a byte string. Decode it as UTF-8 so the log line is readable.
logger.info(
"Synapse now listening on Unix Socket at: %s",
ports[0].getHost().name.decode("utf-8"),
"Synapse now listening on TCP port %d (TLS)", listener_config.port
)
else:
assert_never(listener_config)
except Exception as exc:
# The Twisted interface says that "Users should not call this function
# themselves!" but this appears to be the correct/only way handle proper cleanup
# of the site when things go wrong. In the normal case, a `Port` is created
# which we can call `Port.stopListening()` on to do the same thing (but no
# `Port` is created when an error occurs).
#
# We use `site.stopFactory()` instead of `site.doStop()` as the latter assumes
# that `site.doStart()` was called (which won't be the case if an error occurs).
site.stopFactory()
raise ListenerException(listener_config) from exc
ports = listen_tcp(
listener_config.bind_addresses,
listener_config.port,
site,
reactor=reactor,
)
logger.info("Synapse now listening on TCP port %d", listener_config.port)
else:
ports = listen_unix(
listener_config.path, listener_config.mode, site, reactor=reactor
)
# getHost() returns a UNIXAddress which contains an instance variable of 'name'
# encoded as a byte string. Decode it as UTF-8 so the log line is readable.
logger.info(
"Synapse now listening on Unix Socket at: %s",
ports[0].getHost().name.decode("utf-8"),
)
return ports
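For context on the branch being removed: when binding fails, no `Port` object exists to call `stopListening()` on, so the half-started site had to be torn down via `stopFactory()` before re-raising with the offending config attached. A compressed, self-contained sketch of that pattern (toy types; only `stopFactory` comes from the Twisted API named in the removed comment):

from typing import Any, Callable

class ListenerError(RuntimeError):
    def __init__(self, config: Any) -> None:
        super().__init__(f"Failed to listen with listener config: {config!r}")
        self.config = config

class FakeSite:
    stopped = False
    def stopFactory(self) -> None:  # same name as Twisted's factory teardown hook
        self.stopped = True

def bind_with_cleanup(config: Any, site: FakeSite, do_bind: Callable[[Any], None]) -> None:
    try:
        do_bind(config)
    except Exception as exc:
        site.stopFactory()  # no Port was created, so stop the factory directly
        raise ListenerError(config) from exc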
@@ -776,11 +723,6 @@ async def start(hs: "HomeServer", *, freeze: bool = True) -> None:
#
# PyPy does not (yet?) implement gc.freeze()
if hasattr(gc, "freeze"):
logger.info(
"garbage collector: Freezing all allocated objects in the hopes that (almost) "
"everything currently allocated are things that will be used by the homeserver "
"for the rest of time. Doing so means less work each GC (hopefully)."
)
gc.collect()
gc.freeze()
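Background on the (now unlogged) freeze: `gc.collect()` first clears out actual garbage, then `gc.freeze()` moves every surviving tracked object into a permanent generation that later collections skip. A minimal CPython-only demonstration (PyPy has no `gc.freeze`, hence the `hasattr` guard above):

import gc

if hasattr(gc, "freeze"):
    gc.collect()   # dispose of real garbage first so it isn't frozen too
    gc.freeze()    # survivors become permanent and are skipped by future GCs
    print(gc.get_freeze_count())  # how many objects are now exempt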
@@ -901,8 +843,17 @@ def sdnotify(state: bytes) -> None:
def max_request_body_size(config: HomeServerConfig) -> int:
"""Get a suitable maximum size for incoming HTTP requests"""
# Baseline default for any request that isn't configured in the homeserver config
max_request_size = MAX_REQUEST_SIZE
# Other than media uploads, the biggest request we expect to see is a fully-loaded
# /federation/v1/send request.
#
# The main thing in such a request is up to 50 PDUs, and up to 100 EDUs. PDUs are
# limited to 65536 bytes (possibly slightly more if the sender didn't use canonical
# json encoding); there is no specced limit to EDUs (see
# https://github.com/matrix-org/matrix-doc/issues/3121).
#
# in short, we somewhat arbitrarily limit requests to 200 * 64K (about 12.5M)
#
max_request_size = 200 * MAX_PDU_SIZE
# if we have a media repo enabled, we may need to allow larger uploads than that
if config.media.can_load_media_repo:

View File

@@ -24,7 +24,7 @@ import logging
import os
import sys
import tempfile
from typing import Mapping, Optional, Sequence
from typing import Mapping, Sequence
from twisted.internet import defer, task
@@ -291,7 +291,7 @@ def load_config(argv_options: list[str]) -> tuple[HomeServerConfig, argparse.Nam
def create_homeserver(
config: HomeServerConfig,
reactor: Optional[ISynapseReactor] = None,
reactor: ISynapseReactor | None = None,
) -> AdminCmdServer:
"""
Create a homeserver instance for the Synapse admin command process.

View File

@@ -1,60 +0,0 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2026 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# [This file includes modifications made by New Vector Limited]
#
#
from typing import Any
class ProxiedReactor:
"""
Twisted tracks the 'installed' reactor as a global variable.
(Actually, it does some module trickery, but the effect is similar.)
The default EpollReactor is buggy if it's created before a process is
forked, then used in the child.
See https://twistedmatrix.com/trac/ticket/4759#comment:17.
However, importing certain Twisted modules will automatically create and
install a reactor if one hasn't already been installed.
It's not normally possible to re-install a reactor.
Given the goal of launching workers with fork() to only import the code once,
this presents a conflict.
Our workaround is to 'install' this ProxiedReactor, which prevents Twisted
from creating and installing one, but which lets us replace the actual reactor
in use later on.
"""
def __init__(self) -> None:
self.___reactor_target: Any = None
def _install_real_reactor(self, new_reactor: Any) -> None:
"""
Install a real reactor for this ProxiedReactor to forward lookups onto.
This method is specific to our ProxiedReactor and should not clash with
any names used on an actual Twisted reactor.
"""
self.___reactor_target = new_reactor
# Import here to avoid circular imports.
from synapse.metrics._reactor_metrics import install_reactor_metrics
# Install reactor metrics now we've got a real reactor.
install_reactor_metrics(new_reactor)
def __getattr__(self, attr_name: str) -> Any:
return getattr(self.___reactor_target, attr_name)

View File

@@ -30,13 +30,47 @@ from typing import Any, Callable
from twisted.internet.main import installReactor
from synapse.app.complement_fork_proxied_reactor import ProxiedReactor
# a list of the original signal handlers, before we installed our custom ones.
# We restore these in our child processes.
_original_signal_handlers: dict[int, Any] = {}
class ProxiedReactor:
"""
Twisted tracks the 'installed' reactor as a global variable.
(Actually, it does some module trickery, but the effect is similar.)
The default EpollReactor is buggy if it's created before a process is
forked, then used in the child.
See https://twistedmatrix.com/trac/ticket/4759#comment:17.
However, importing certain Twisted modules will automatically create and
install a reactor if one hasn't already been installed.
It's not normally possible to re-install a reactor.
Given the goal of launching workers with fork() to only import the code once,
this presents a conflict.
Our workaround is to 'install' this ProxiedReactor, which prevents Twisted
from creating and installing one, but which lets us replace the actual reactor
in use later on.
"""
def __init__(self) -> None:
self.___reactor_target: Any = None
def _install_real_reactor(self, new_reactor: Any) -> None:
"""
Install a real reactor for this ProxiedReactor to forward lookups onto.
This method is specific to our ProxiedReactor and should not clash with
any names used on an actual Twisted reactor.
"""
self.___reactor_target = new_reactor
def __getattr__(self, attr_name: str) -> Any:
return getattr(self.___reactor_target, attr_name)
def _worker_entrypoint(
func: Callable[[], None], proxy_reactor: ProxiedReactor, args: list[str]
) -> None:
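The `__getattr__` forwarding above is the whole trick: the proxy can be imported (and "installed") before any fork, and once `_install_real_reactor` runs, every attribute lookup transparently lands on the real reactor. A tiny standalone demonstration of the same pattern (toy classes, not Twisted):

from typing import Any

class Proxy:
    def __init__(self) -> None:
        self.___target: Any = None  # name-mangled, so it can't shadow target attributes

    def _install(self, target: Any) -> None:
        self.___target = target

    def __getattr__(self, name: str) -> Any:
        # Only called when normal lookup fails, i.e. for forwarded attributes.
        return getattr(self.___target, name)

class RealReactor:
    def run(self) -> str:
        return "running"

p = Proxy()
p._install(RealReactor())
assert p.run() == "running"  # forwarded to the real reactor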

View File

@@ -21,7 +21,6 @@
#
import logging
import sys
from typing import Optional
from twisted.web.resource import Resource
@@ -338,7 +337,7 @@ def load_config(argv_options: list[str]) -> HomeServerConfig:
def create_homeserver(
config: HomeServerConfig,
reactor: Optional[ISynapseReactor] = None,
reactor: ISynapseReactor | None = None,
) -> GenericWorkerServer:
"""
Create a homeserver instance for the Synapse worker process.

View File

@@ -22,7 +22,7 @@
import logging
import os
import sys
from typing import Iterable, Optional
from typing import Iterable
from twisted.internet.tcp import Port
from twisted.web.resource import EncodingResourceWrapper, Resource
@@ -350,7 +350,7 @@ def load_or_generate_config(argv_options: list[str]) -> HomeServerConfig:
def create_homeserver(
config: HomeServerConfig,
reactor: Optional[ISynapseReactor] = None,
reactor: ISynapseReactor | None = None,
) -> SynapseHomeServer:
"""
Create a homeserver instance for the Synapse main process.
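Three entrypoints (`admin_cmd`, `generic_worker`, `homeserver`) get the same mechanical typing change: `Optional[X]` becomes the PEP 604 union `X | None`, which drops the `typing.Optional` import. The two spellings are interchangeable at runtime on Python 3.10+:

from typing import Optional

def f(x: int | None = None) -> int | None:  # PEP 604 spelling
    return x

assert f(None) is None
assert Optional[int] == (int | None)  # the spellings compare equal on Python 3.10+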

View File

@@ -46,7 +46,6 @@ from synapse.logging import opentracing
from synapse.metrics import SERVER_NAME_LABEL
from synapse.types import DeviceListUpdates, JsonDict, JsonMapping, ThirdPartyInstanceID
from synapse.util.caches.response_cache import ResponseCache
from synapse.util.duration import Duration
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -133,7 +132,7 @@ class ApplicationServiceApi(SimpleHttpClient):
clock=hs.get_clock(),
name="as_protocol_meta",
server_name=self.server_name,
timeout=Duration(hours=1),
timeout_ms=HOUR_IN_MS,
)
def _get_headers(self, service: "ApplicationService") -> dict[bytes, list[bytes]]:

View File

@@ -44,7 +44,6 @@ import jinja2
import yaml
from synapse.types import StrSequence
from synapse.util.stringutils import parse_and_validate_server_name
from synapse.util.templates import _create_mxc_to_http_filter, _format_ts_filter
logger = logging.getLogger(__name__)
@@ -466,7 +465,6 @@ class RootConfig:
generate_secrets: bool = False,
report_stats: bool | None = None,
open_private_ports: bool = False,
enable_metrics: bool = False,
listeners: list[dict] | None = None,
tls_certificate_path: str | None = None,
tls_private_key_path: str | None = None,
@@ -497,15 +495,9 @@ class RootConfig:
open_private_ports: True to leave private ports (such as the non-TLS
HTTP listener) open to the internet.
enable_metrics: True to set `enable_metrics: true`; when using the default
set of listeners, this also adds the metrics listener on port 19090.
listeners: A list of descriptions of the listeners synapse should
start with, each of which specifies a port (int), a list of resources
(list(str)), tls (bool) and type (str). There is a default set of
listeners when `None`.
Example usage:
start with, each of which specifies a port (int), a list of
resources (list(str)), tls (bool) and type (str). For example:
[{
"port": 8448,
"resources": [{"names": ["federation"]}],
@@ -526,35 +518,6 @@ class RootConfig:
Returns:
The yaml config file
"""
_, bind_port = parse_and_validate_server_name(server_name)
if bind_port is not None:
unsecure_port = bind_port - 400
else:
bind_port = 8448
unsecure_port = 8008
# The default listeners
if listeners is None:
listeners = [
{
"port": unsecure_port,
"tls": False,
"type": "http",
"x_forwarded": True,
"resources": [
{"names": ["client", "federation"], "compress": False}
],
}
]
if enable_metrics:
listeners.append(
{
"port": 19090,
"tls": False,
"type": "metrics",
}
)
conf = CONFIG_FILE_HEADER + "\n".join(
dedent(conf)
@@ -566,7 +529,6 @@ class RootConfig:
generate_secrets=generate_secrets,
report_stats=report_stats,
open_private_ports=open_private_ports,
enable_metrics=enable_metrics,
listeners=listeners,
tls_certificate_path=tls_certificate_path,
tls_private_key_path=tls_private_key_path,
@@ -794,14 +756,6 @@ class RootConfig:
" internet. Do not use this unless you know what you are doing."
),
)
generate_group.add_argument(
"--enable-metrics",
action="store_true",
help=(
"Sets `enable_metrics: true` and when using the default set of listeners, "
"will also add the metrics listener on port 19090."
),
)
cls.invoke_all_static("add_arguments", parser)
config_args = parser.parse_args(argv_options)
@@ -858,7 +812,6 @@ class RootConfig:
report_stats=(config_args.report_stats == "yes"),
generate_secrets=True,
open_private_ports=config_args.open_private_ports,
enable_metrics=config_args.enable_metrics,
)
os.makedirs(config_dir_path, exist_ok=True)
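The deleted block encoded the generate-config port rule: an explicit port in the server name puts the plain-HTTP listener 400 below it; otherwise the defaults are 8448 (federation) and 8008 (client HTTP), with an optional metrics listener on 19090. The rule in isolation, as a sketch:

def default_ports(explicit_port: int | None) -> tuple[int, int]:
    # Returns (bind_port, unsecure_port) per the removed logic.
    if explicit_port is not None:
        return explicit_port, explicit_port - 400
    return 8448, 8008

assert default_ports(None) == (8448, 8008)
assert default_ports(8448) == (8448, 8048)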

View File

@@ -146,7 +146,6 @@ class RootConfig:
generate_secrets: bool = ...,
report_stats: bool | None = ...,
open_private_ports: bool = ...,
enable_metrics: bool = ...,
listeners: Any | None = ...,
tls_certificate_path: str | None = ...,
tls_private_key_path: str | None = ...,

View File

@@ -29,7 +29,6 @@ import attr
from synapse.types import JsonDict
from synapse.util.check_dependencies import check_requirements
from synapse.util.duration import Duration
from ._base import Config, ConfigError
@@ -109,7 +108,7 @@ class CacheConfig(Config):
global_factor: float
track_memory_usage: bool
expiry_time_msec: int | None
sync_response_cache_duration: Duration
sync_response_cache_duration: int
@staticmethod
def reset() -> None:
@@ -208,14 +207,10 @@ class CacheConfig(Config):
min_cache_ttl = self.cache_autotuning.get("min_cache_ttl")
self.cache_autotuning["min_cache_ttl"] = self.parse_duration(min_cache_ttl)
sync_response_cache_duration_ms = self.parse_duration(
self.sync_response_cache_duration = self.parse_duration(
cache_config.get("sync_response_cache_duration", "2m")
)
self.sync_response_cache_duration = Duration(
milliseconds=sync_response_cache_duration_ms
)
def resize_all_caches(self) -> None:
"""Ensure all cache sizes are up-to-date.

Some files were not shown because too many files have changed in this diff.