diff --git a/.github/workflows/docker.yml b/.github/workflows/docker.yml
index b439cb0915..e8319364dd 100644
--- a/.github/workflows/docker.yml
+++ b/.github/workflows/docker.yml
@@ -30,7 +30,7 @@ jobs:
        run: docker buildx inspect

      - name: Install Cosign
-       uses: sigstore/cosign-installer@v3.8.0
+       uses: sigstore/cosign-installer@v3.8.1

      - name: Checkout repository
        uses: actions/checkout@v4
diff --git a/CHANGES.md b/CHANGES.md
index d83430088c..992c099d6e 100644
--- a/CHANGES.md
+++ b/CHANGES.md
@@ -1,16 +1,78 @@
+# Synapse 1.126.0rc1 (2025-03-04)
+
+Installations using the Debian/Ubuntu packages from `packages.matrix.org`:
+Please be aware that we have recently updated the expiry date on the repository's GPG signing key,
+but the updated key must be imported into your keyring.
+If you have the `matrix-org-archive-keyring` package installed and update before the current key expires, this should
+happen automatically.
+Otherwise, if you see an error similar to `The following signatures were invalid: EXPKEYSIG F473DD4473365DE1`, you
+will need to get a fresh copy of the keys. You can do so with:
+
+```sh
+sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
+```
+
+### Features
+
+- Define ratelimit configuration for delayed event management. ([\#18019](https://github.com/element-hq/synapse/issues/18019))
+- Add `form_secret_path` config option. ([\#18090](https://github.com/element-hq/synapse/issues/18090))
+- Add the `--no-secrets-in-config` command line option. ([\#18092](https://github.com/element-hq/synapse/issues/18092))
+- Add background job to clear unreferenced state groups. ([\#18154](https://github.com/element-hq/synapse/issues/18154))
+- Add support for specifying/overriding `id_token_signing_alg_values_supported` for an OpenID identity provider. ([\#18177](https://github.com/element-hq/synapse/issues/18177))
+- Add `worker_replication_secret_path` config option. ([\#18191](https://github.com/element-hq/synapse/issues/18191))
+- Add support for specifying/overriding `redirect_uri` in the authorization and token requests against an OpenID identity provider. ([\#18197](https://github.com/element-hq/synapse/issues/18197))
+
+### Bugfixes
+
+- Make sure we advertise registration as disabled when MSC3861 is enabled. ([\#17661](https://github.com/element-hq/synapse/issues/17661))
+- Prevent suspended users from sending encrypted messages. ([\#18157](https://github.com/element-hq/synapse/issues/18157))
+- Clean up deleted state group references. ([\#18165](https://github.com/element-hq/synapse/issues/18165))
+- Fix MSC4108 QR-code login not working with some reverse-proxy setups. ([\#18178](https://github.com/element-hq/synapse/issues/18178))
+- Support device IDs that can't be represented in a scope when delegating auth to Matrix Authentication Service 0.15.0+. ([\#18174](https://github.com/element-hq/synapse/issues/18174))
+
+### Updates to the Docker image
+
+- Speed up the building of the Docker image. ([\#18038](https://github.com/element-hq/synapse/issues/18038))
+
+### Improved Documentation
+
+- Move incorrectly placed version indicator in User Event Redaction Admin API docs. ([\#18152](https://github.com/element-hq/synapse/issues/18152))
+- Document suspension Admin API. ([\#18162](https://github.com/element-hq/synapse/issues/18162))
+
+### Deprecations and Removals
+
+- Disable room list publication by default. 
([\#18175](https://github.com/element-hq/synapse/issues/18175)) + +### Updates to locked dependencies + +* Bump anyhow from 1.0.95 to 1.0.96. ([\#18187](https://github.com/element-hq/synapse/issues/18187)) +* Bump authlib from 1.4.0 to 1.4.1. ([\#18190](https://github.com/element-hq/synapse/issues/18190)) +* Bump click from 8.1.7 to 8.1.8. ([\#18189](https://github.com/element-hq/synapse/issues/18189)) +* Bump log from 0.4.25 to 0.4.26. ([\#18184](https://github.com/element-hq/synapse/issues/18184)) +* Bump pyo3-log from 0.12.0 to 0.12.1. ([\#18046](https://github.com/element-hq/synapse/issues/18046)) +* Bump serde from 1.0.217 to 1.0.218. ([\#18183](https://github.com/element-hq/synapse/issues/18183)) +* Bump serde_json from 1.0.138 to 1.0.139. ([\#18186](https://github.com/element-hq/synapse/issues/18186)) +* Bump sigstore/cosign-installer from 3.8.0 to 3.8.1. ([\#18185](https://github.com/element-hq/synapse/issues/18185)) +* Bump types-psycopg2 from 2.9.21.20241019 to 2.9.21.20250121. ([\#18188](https://github.com/element-hq/synapse/issues/18188)) + + +# Synapse 1.125.0 (2025-02-25) + +No significant changes since 1.125.0rc1. + + # Synapse 1.125.0rc1 (2025-02-18) ### Features - Add functionality to be able to use multiple values in SSO feature `attribute_requirements`. ([\#17949](https://github.com/element-hq/synapse/issues/17949)) -- Add experimental config options `admin_token_path` and `client_secret_path` for MSC 3861. ([\#18004](https://github.com/element-hq/synapse/issues/18004)) +- Add experimental config options `admin_token_path` and `client_secret_path` for [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861). ([\#18004](https://github.com/element-hq/synapse/issues/18004)) - Add `get_current_time_msec()` method to the [module API](https://matrix-org.github.io/synapse/latest/modules/writing_a_module.html) for sound time comparisons with Synapse. ([\#18144](https://github.com/element-hq/synapse/issues/18144)) ### Bugfixes -- Fix a bug when updating a user 3pid with invalid returns 500 server error change to 400 with a message. ([\#18125](https://github.com/element-hq/synapse/issues/18125)) +- Update the response when a client attempts to add an invalid email address to the user's account from a 500, to a 400 with error text. ([\#18125](https://github.com/element-hq/synapse/issues/18125)) - Fix user directory search when using a legacy module with a `check_username_for_spam` callback. Broke in v1.122.0. ([\#18135](https://github.com/element-hq/synapse/issues/18135)) -- Add rate limit `rc_presence.per_user`. This prevents load from excessive presence updates sent by clients via sync api. Also rate limit `/_matrix/client/v3/presence` as per the spec. Contributed by @rda0. ([\#18145](https://github.com/element-hq/synapse/issues/18145)) ### Updates to the Docker image @@ -25,7 +87,7 @@ ### Internal Changes -- Overload DatabasePool.simple_select_one_txn to return non-None when the allow_none parameter is False. ([\#17616](https://github.com/element-hq/synapse/issues/17616)) +- Overload `DatabasePool.simple_select_one_txn` to return non-`None` when the `allow_none` parameter is `False`. ([\#17616](https://github.com/element-hq/synapse/issues/17616)) - Python 3.8 EOL: compile native extensions with the 3.9 ABI and use typing hints from the standard library. ([\#17967](https://github.com/element-hq/synapse/issues/17967)) - Add log message when worker lock timeouts get large. 
([\#18124](https://github.com/element-hq/synapse/issues/18124))
- Make it explicit that you can buy an AGPL-alternative commercial license from Element. ([\#18134](https://github.com/element-hq/synapse/issues/18134))
diff --git a/Cargo.lock b/Cargo.lock
index 8831f7e6fd..b9aa1c8a6b 100644
--- a/Cargo.lock
+++ b/Cargo.lock
@@ -13,9 +13,9 @@ dependencies = [

 [[package]]
 name = "anyhow"
-version = "1.0.95"
+version = "1.0.96"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "34ac096ce696dc2fcabef30516bb13c0a68a11d30131d3df6f04711467681b04"
+checksum = "6b964d184e89d9b6b67dd2715bc8e74cf3107fb2b529990c90cf517326150bf4"

 [[package]]
 name = "arc-swap"
@@ -223,9 +223,9 @@ checksum = "ae743338b92ff9146ce83992f766a31066a91a8c84a45e0e9f21e7cf6de6d346"

 [[package]]
 name = "log"
-version = "0.4.25"
+version = "0.4.26"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "04cbf5b083de1c7e0222a7a51dbfdba1cbe1c6ab0b15e29fff3f6c077fd9cd9f"
+checksum = "30bde2b3dc3671ae49d8e2e9f044c7c005836e7a023ee57cffa25ab82764bb9e"

 [[package]]
 name = "memchr"
@@ -316,9 +316,9 @@ dependencies = [

 [[package]]
 name = "pyo3-log"
-version = "0.12.0"
+version = "0.12.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "3eb421dc86d38d08e04b927b02424db480be71b777fa3a56f32e2f2a3a1a3b08"
+checksum = "be5bb22b77965a7b5394e9aae9897a0607b51df5167561ffc3b02643b4200bc7"
 dependencies = [
  "arc-swap",
  "log",
@@ -437,18 +437,18 @@ checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f"

 [[package]]
 name = "serde"
-version = "1.0.217"
+version = "1.0.218"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "02fc4265df13d6fa1d00ecff087228cc0a2b5f3c0e87e258d8b94a156e984c70"
+checksum = "e8dfc9d19bdbf6d17e22319da49161d5d0108e4188e8b680aef6299eed22df60"
 dependencies = [
  "serde_derive",
 ]

 [[package]]
 name = "serde_derive"
-version = "1.0.217"
+version = "1.0.218"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5a9bf7cf98d04a2b28aead066b7496853d4779c9cc183c440dbac457641e19a0"
+checksum = "f09503e191f4e797cb8aac08e9a4a4695c5edf6a2e70e376d961ddd5c969f82b"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -457,9 +457,9 @@ dependencies = [

 [[package]]
 name = "serde_json"
-version = "1.0.138"
+version = "1.0.139"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "d434192e7da787e94a6ea7e9670b26a036d0ca41e0b7efb2676dd32bae872949"
+checksum = "44f86c3acccc9c65b153fe1b85a3be07fe5515274ec9f0653b4a0875731c72a6"
 dependencies = [
  "itoa",
  "memchr",
diff --git a/debian/changelog b/debian/changelog
index a0d5a7614f..d7a3909224 100644
--- a/debian/changelog
+++ b/debian/changelog
@@ -1,3 +1,15 @@
+matrix-synapse-py3 (1.126.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.126.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 04 Mar 2025 13:11:51 +0000
+
+matrix-synapse-py3 (1.125.0) stable; urgency=medium
+
+  * New Synapse release 1.125.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 25 Feb 2025 08:10:07 -0700
+
 matrix-synapse-py3 (1.125.0~rc1) stable; urgency=medium

   * New synapse release 1.125.0rc1. 
diff --git a/demo/start.sh b/demo/start.sh
index 7636c41f1f..e010302bf4 100755
--- a/demo/start.sh
+++ b/demo/start.sh
@@ -142,6 +142,9 @@ for port in 8080 8081 8082; do
             per_user:
               per_second: 1000
               burst_count: 1000
+          rc_delayed_event_mgmt:
+            per_second: 1000
+            burst_count: 1000
         RC
         )
         echo "${ratelimiting}" >> "$port.config"
diff --git a/docker/Dockerfile b/docker/Dockerfile
index a4931011a7..1dd65f2413 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -20,45 +20,16 @@ # `poetry export | pip install -r /dev/stdin`, but beware: we have experienced bugs
 # in `poetry export` in the past.

+ARG DEBIAN_VERSION=bookworm
 ARG PYTHON_VERSION=3.12
+ARG POETRY_VERSION=1.8.3

 ###
 ### Stage 0: generate requirements.txt
 ###
-# We hardcode the use of Debian bookworm here because this could change upstream
-# and other Dockerfiles used for testing are expecting bookworm.
-FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS requirements
-
-# RUN --mount is specific to buildkit and is documented at
-# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
-# Here we use it to set up a cache for apt (and below for pip), to improve
-# rebuild speeds on slow connections.
-RUN \
-   --mount=type=cache,target=/var/cache/apt,sharing=locked \
-   --mount=type=cache,target=/var/lib/apt,sharing=locked \
-   apt-get update -qq && apt-get install -yqq \
-   build-essential curl git libffi-dev libssl-dev pkg-config \
-   && rm -rf /var/lib/apt/lists/*
-
-# Install rust and ensure its in the PATH.
-# (Rust may be needed to compile `cryptography`---which is one of poetry's
-# dependencies---on platforms that don't have a `cryptography` wheel.
-ENV RUSTUP_HOME=/rust
-ENV CARGO_HOME=/cargo
-ENV PATH=/cargo/bin:/rust/bin:$PATH
-RUN mkdir /rust /cargo
-
-RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable --profile minimal
-
-# arm64 builds consume a lot of memory if `CARGO_NET_GIT_FETCH_WITH_CLI` is not
-# set to true, so we expose it as a build-arg.
-ARG CARGO_NET_GIT_FETCH_WITH_CLI=false
-ENV CARGO_NET_GIT_FETCH_WITH_CLI=$CARGO_NET_GIT_FETCH_WITH_CLI
-
-# We install poetry in its own build stage to avoid its dependencies conflicting with
-# synapse's dependencies.
-RUN --mount=type=cache,target=/root/.cache/pip \
-   pip install --user "poetry==1.3.2"
+### This stage is platform-agnostic, so we can use the build platform in case of cross-compilation.
+###
+FROM --platform=$BUILDPLATFORM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS requirements

 WORKDIR /synapse

@@ -75,41 +46,30 @@ ARG TEST_ONLY_SKIP_DEP_HASH_VERIFICATION
 # Instead, we'll just install what a regular `pip install` would from PyPI.
 ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE

+# This silences a warning as uv isn't able to do hardlinks between its cache
+# (mounted as --mount=type=cache) and the target directory.
+ENV UV_LINK_MODE=copy
+
 # Export the dependencies, but only if we're actually going to use the Poetry lockfile.
 # Otherwise, just create an empty requirements file so that the Dockerfile can
 # proceed. 
-RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
-  /root/.local/bin/poetry export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}; \
+ARG POETRY_VERSION
+RUN --mount=type=cache,target=/root/.cache/uv \
+  if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
+    uvx --with poetry-plugin-export==1.8.0 \
+      poetry@${POETRY_VERSION} export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}; \
   else \
-  touch /synapse/requirements.txt; \
+    touch /synapse/requirements.txt; \
   fi

 ###
 ### Stage 1: builder
 ###
-FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS builder
-
-# install the OS build deps
-RUN \
-   --mount=type=cache,target=/var/cache/apt,sharing=locked \
-   --mount=type=cache,target=/var/lib/apt,sharing=locked \
-   apt-get update -qq && apt-get install -yqq \
-   build-essential \
-   libffi-dev \
-   libjpeg-dev \
-   libpq-dev \
-   libssl-dev \
-   libwebp-dev \
-   libxml++2.6-dev \
-   libxslt1-dev \
-   openssl \
-   zlib1g-dev \
-   git \
-   curl \
-   libicu-dev \
-   pkg-config \
-   && rm -rf /var/lib/apt/lists/*
+FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS builder

+# This silences a warning as uv isn't able to do hardlinks between its cache
+# (mounted as --mount=type=cache) and the target directory.
+ENV UV_LINK_MODE=copy

 # Install rust and ensure it's in the PATH
 ENV RUSTUP_HOME=/rust
@@ -119,7 +79,6 @@ RUN mkdir /rust /cargo

 RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable --profile minimal

-
 # arm64 builds consume a lot of memory if `CARGO_NET_GIT_FETCH_WITH_CLI` is not
 # set to true, so we expose it as a build-arg.
 ARG CARGO_NET_GIT_FETCH_WITH_CLI=false
@@ -131,8 +90,8 @@ ENV CARGO_NET_GIT_FETCH_WITH_CLI=$CARGO_NET_GIT_FETCH_WITH_CLI
 #
 # This is aiming at installing the `[tool.poetry.dependencies]` from pyproject.toml.
 COPY --from=requirements /synapse/requirements.txt /synapse/
-RUN --mount=type=cache,target=/root/.cache/pip \
-  pip install --prefix="/install" --no-deps --no-warn-script-location -r /synapse/requirements.txt
+RUN --mount=type=cache,target=/root/.cache/uv \
+  uv pip install --prefix="/install" --no-deps -r /synapse/requirements.txt

 # Copy over the rest of the synapse source code.
 COPY synapse /synapse/synapse/
@@ -146,41 +105,85 @@ ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
 # Install the synapse package itself.
 # If we have populated requirements.txt, we don't install any dependencies
 # as we should already have those from the previous `pip install` step.
-RUN --mount=type=cache,target=/synapse/target,sharing=locked \
+RUN \
+  --mount=type=cache,target=/root/.cache/uv \
+  --mount=type=cache,target=/synapse/target,sharing=locked \
   --mount=type=cache,target=${CARGO_HOME}/registry,sharing=locked \
   if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
-    pip install --prefix="/install" --no-deps --no-warn-script-location /synapse[all]; \
+    uv pip install --prefix="/install" --no-deps /synapse[all]; \
   else \
-    pip install --prefix="/install" --no-warn-script-location /synapse[all]; \
+    uv pip install --prefix="/install" /synapse[all]; \
   fi

 ###
-### Stage 2: runtime
+### Stage 2: runtime dependencies download for ARM64 and AMD64
+###
+FROM --platform=$BUILDPLATFORM docker.io/library/debian:${DEBIAN_VERSION} AS runtime-deps
+
+# Tell apt to keep downloaded package files, as we're using cache mounts. 
+RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache + +# Add both target architectures +RUN dpkg --add-architecture arm64 +RUN dpkg --add-architecture amd64 + +# Fetch the runtime dependencies debs for both architectures +# We do that by building a recursive list of packages we need to download with `apt-cache depends` +# and then downloading them with `apt-get download`. +RUN \ + --mount=type=cache,target=/var/cache/apt,sharing=locked \ + --mount=type=cache,target=/var/lib/apt,sharing=locked \ + apt-get update -qq && \ + apt-get install -y --no-install-recommends rsync && \ + apt-cache depends --recurse --no-recommends --no-suggests --no-conflicts --no-breaks --no-replaces --no-enhances --no-pre-depends \ + curl \ + gosu \ + libjpeg62-turbo \ + libpq5 \ + libwebp7 \ + xmlsec1 \ + libjemalloc2 \ + libicu \ + | grep '^\w' > /tmp/pkg-list && \ + for arch in arm64 amd64; do \ + mkdir -p /tmp/debs-${arch} && \ + cd /tmp/debs-${arch} && \ + apt-get download $(sed "s/$/:${arch}/" /tmp/pkg-list); \ + done + +# Extract the debs for each architecture +# On the runtime image, /lib is a symlink to /usr/lib, so we need to copy the +# libraries to the right place, else the `COPY` won't work. +# On amd64, we'll also have a /lib64 folder with ld-linux-x86-64.so.2, which is +# already present in the runtime image. +RUN \ + for arch in arm64 amd64; do \ + mkdir -p /install-${arch}/var/lib/dpkg/status.d/ && \ + for deb in /tmp/debs-${arch}/*.deb; do \ + package_name=$(dpkg-deb -I ${deb} | awk '/^ Package: .*$/ {print $2}'); \ + echo "Extracting: ${package_name}"; \ + dpkg --ctrl-tarfile $deb | tar -Ox ./control > /install-${arch}/var/lib/dpkg/status.d/${package_name}; \ + dpkg --extract $deb /install-${arch}; \ + done; \ + rsync -avr /install-${arch}/lib/ /install-${arch}/usr/lib; \ + rm -rf /install-${arch}/lib /install-${arch}/lib64; \ + done + + +### +### Stage 3: runtime ### -FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm +FROM docker.io/library/python:${PYTHON_VERSION}-slim-${DEBIAN_VERSION} + +ARG TARGETARCH LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse' LABEL org.opencontainers.image.documentation='https://github.com/element-hq/synapse/blob/master/docker/README.md' LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git' LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later' -RUN \ - --mount=type=cache,target=/var/cache/apt,sharing=locked \ - --mount=type=cache,target=/var/lib/apt,sharing=locked \ - apt-get update -qq && apt-get install -yqq \ - curl \ - gosu \ - libjpeg62-turbo \ - libpq5 \ - libwebp7 \ - xmlsec1 \ - libjemalloc2 \ - libicu72 \ - libssl-dev \ - openssl \ - && rm -rf /var/lib/apt/lists/* - +COPY --from=runtime-deps /install-${TARGETARCH} / COPY --from=builder /install /usr/local COPY ./docker/start.py /start.py COPY ./docker/conf /conf diff --git a/docker/complement/conf/workers-shared-extra.yaml.j2 b/docker/complement/conf/workers-shared-extra.yaml.j2 index 797d58e9b3..9ab8fedcae 100644 --- a/docker/complement/conf/workers-shared-extra.yaml.j2 +++ b/docker/complement/conf/workers-shared-extra.yaml.j2 @@ -94,6 +94,10 @@ rc_presence: per_second: 9999 burst_count: 9999 +rc_delayed_event_mgmt: + per_second: 9999 + burst_count: 9999 + federation_rr_transactions_per_room_per_second: 9999 allow_device_name_lookup_over_federation: true @@ -139,4 +143,9 @@ caches: sync_response_cache_duration: 0 +# Complement assumes that it 
can publish to the room list by default.
+room_list_publication_rules:
+  - action: allow
+
+
 {% include "shared-orig.yaml.j2" %}
diff --git a/docs/admin_api/user_admin_api.md b/docs/admin_api/user_admin_api.md
index 2742d2d2cd..875876081f 100644
--- a/docs/admin_api/user_admin_api.md
+++ b/docs/admin_api/user_admin_api.md
@@ -414,6 +414,32 @@ The following actions are **NOT** performed. The list may be incomplete.
 - Remove from monthly active users
 - Remove user's consent information (consent version and timestamp)

+## Suspend/Unsuspend Account
+
+This API allows an admin to suspend or unsuspend an account. While an account is suspended, the user is
+prohibited from sending invites, joining or knocking on rooms, sending messages, changing profile data, and redacting messages other than their own.
+
+The API is:
+
+```
+PUT /_synapse/admin/v1/suspend/<user_id>
+```
+
+with a body of:
+
+```json
+{
+    "suspend": true
+}
+```
+
+To unsuspend a user, use the same endpoint with a body of:
+```json
+{
+    "suspend": false
+}
+```
+
 ## Reset password

 **Note:** This API is disabled when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
@@ -1468,13 +1494,13 @@ The following JSON body parameter must be provided:
 - `rooms` - A list of rooms to redact the user's events in. If an empty list is provided, all events in all rooms the user is a member of will be redacted

-_Added in Synapse 1.116.0._
-
 The following JSON body parameters are optional:

 - `reason` - Reason the redaction is being requested, e.g. "spam", "abuse", etc. This will be included in each redaction event and will be visible to users.
 - `limit` - a limit on the number of the user's events to search for ones that can be redacted (events are redacted newest to oldest) in each room, defaults to 1000 if not provided

+_Added in Synapse 1.116.0._
+
 ## Check the status of a redaction process

diff --git a/docs/development/database_schema.md b/docs/development/database_schema.md
index 37a06acc12..620d1c16b0 100644
--- a/docs/development/database_schema.md
+++ b/docs/development/database_schema.md
@@ -162,7 +162,7 @@ by a unique name, the current status (stored in JSON), and some dependency infor
 * Whether the update requires a previous update to be complete.
 * A rough ordering for which to complete updates.

-A new background updates needs to be added to the `background_updates` table:
+A new background update needs to be added to the `background_updates` table:

 ```sql
 INSERT INTO background_updates (ordering, update_name, depends_on, progress_json) VALUES
diff --git a/docs/upgrade.md b/docs/upgrade.md
index 6c96cb91a3..7e4cd52e1d 100644
--- a/docs/upgrade.md
+++ b/docs/upgrade.md
@@ -117,6 +117,26 @@ each upgrade are complete before moving on to the next upgrade, to avoid
 stacking them up. You can monitor the currently running background updates with
 [the Admin API](usage/administration/admin_api/background_updates.html#status).

+# Upgrading to v1.126.0
+
+## Room list publication rules change
+
+The default [`room_list_publication_rules`] setting was changed to disallow
+anyone (except server admins) from publishing to the room list.
+
+This is in line with Synapse's policy of locking down, by default, features
+that can be abused without moderation. 
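As an illustration of the new default's effect, a non-admin's request to publish a room to the directory is now rejected. The example below is hypothetical (the server name, room ID, and token are placeholders, and the exact error text may vary); it uses the standard client-server directory visibility endpoint:

```sh
# Hypothetical request: ask the server to list a room in the public room directory.
curl -X PUT \
    -H "Authorization: Bearer <access_token>" \
    -d '{"visibility": "public"}' \
    'https://matrix.example.com/_matrix/client/v3/directory/list/room/%21abcedgghijk%3Aexample.com'
# Under the new default this fails with HTTP 403 and errcode "M_FORBIDDEN".
```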
+
+To keep the previous behavior of allowing publication by default, add the
+following to the config:
+
+```yaml
+room_list_publication_rules:
+  - "action": "allow"
+```
+
+[`room_list_publication_rules`]: usage/configuration/config_documentation.md#room_list_publication_rules
+
 # Upgrading to v1.122.0

 ## Dropping support for PostgreSQL 11 and 12
diff --git a/docs/usage/configuration/config_documentation.md b/docs/usage/configuration/config_documentation.md
index e3c06d5371..d2d282f203 100644
--- a/docs/usage/configuration/config_documentation.md
+++ b/docs/usage/configuration/config_documentation.md
@@ -1947,6 +1947,29 @@ rc_presence:
   burst_count: 1
 ```
 ---
+### `rc_delayed_event_mgmt`
+
+Ratelimiting settings for delayed event management.
+
+This option ratelimits attempts to restart, cancel, or view delayed events
+based on the sending client's account and device ID.
+It defaults to: `per_second: 1`, `burst_count: 5`.
+
+Attempts to create or send delayed events are ratelimited not by this setting, but by `rc_message`.
+
+Setting this to a high value allows clients to make delayed event management requests often
+(such as repeatedly restarting a delayed event with a short timeout,
+or restarting several different delayed events all at once)
+without the risk of being ratelimited.
+
+Example configuration:
+```yaml
+rc_delayed_event_mgmt:
+  per_second: 2
+  burst_count: 20
+```
+---
 ### `federation_rr_transactions_per_room_per_second`

 Sets outgoing federation transaction frequency for sending read-receipts,
@@ -3215,6 +3238,22 @@ Example configuration:
 ```yaml
 form_secret: <PRIVATE STRING>
 ```
+---
+### `form_secret_path`
+
+An alternative to [`form_secret`](#form_secret):
+allows the secret to be specified in an external file.
+
+The file should be a plain text file, containing only the secret.
+Synapse reads the secret from the given file once at startup.
+
+Example configuration:
+```yaml
+form_secret_path: /path/to/secrets/file
+```
+
+_Added in Synapse 1.126.0._
+
 ---
 ## Signing Keys
 Config options relating to signing keys
@@ -3579,6 +3618,24 @@ Options for each entry include:
   to `auto`, which uses PKCE if supported during metadata discovery. Set to
   `always` to force enable PKCE or `never` to force disable PKCE.

+* `id_token_signing_alg_values_supported`: List of the JWS signing algorithms (`alg`
+  values) that are supported for signing the `id_token`.
+
+  This is *not* required if `discovery` is disabled. We default to supporting `RS256` in
+  the downstream usage if no algorithms are configured here or in the discovery
+  document.
+
+  According to the spec, the algorithm `"RS256"` MUST be included. A strictly rigid
+  approach would be to reject the provider as non-compliant if it is not included, but
+  we simply allow whatever is configured and see what happens (you are the one who
+  configured the value and who is cooperating with the identity provider).
+
+  The `alg` value `"none"` MAY be supported, but it can only be used if the Authorization
+  Endpoint does not include `id_token` in the `response_type`, such as when using the
+  Authorization Code Flow (e.g. `/authorize?response_type=code`, where `none` can apply,
+  but not `/authorize?response_type=code%20id_token`, where it can't).
+
 * `scopes`: list of scopes to request. This should normally include the "openid"
   scope. Defaults to `["openid"]`.

@@ -3605,6 +3662,13 @@ Options for each entry include:
   not included in `scopes`. Set to `userinfo_endpoint` to always use the
   userinfo endpoint. 
+* `redirect_uri`: An optional string that, if set, will override the `redirect_uri`
+  parameter sent in the requests to the authorization and token endpoints.
+  Useful if you want to redirect the client to another endpoint as part of the
+  OIDC login. Be aware that the client must then call Synapse's OIDC callback
+  URL (`/_synapse/client/oidc/callback`) manually afterwards.
+  Must be a valid URL including scheme and path.
+
 * `additional_authorization_parameters`: String to string dictionary that will be
   passed as additional parameters to the authorization grant URL.

@@ -4227,8 +4291,8 @@ unwanted entries from being published in the public room list.

 The format of this option is the same as that for
 [`alias_creation_rules`](#alias_creation_rules): an optional list of 0 or more
-rules. By default, no list is provided, meaning that all rooms may be
-published to the room list.
+rules. By default, no list is provided, meaning that no one may publish to the
+room list (except server admins).

 Otherwise, requests to publish a room are matched against each rule in order.
 The first rule that matches decides if the request is allowed or denied. If no
@@ -4254,6 +4318,10 @@ Note that the patterns match against fully qualified IDs, e.g. against
 of `alice`, `room` and `abcedgghijk`.


+_Changed in Synapse 1.126.0: The default was changed to deny publishing to the
+room list._
+
+
 Example configuration:

 ```yaml
@@ -4466,6 +4534,22 @@ Example configuration:
 ```yaml
 worker_replication_secret: "secret_secret"
 ```
+---
+### `worker_replication_secret_path`
+
+An alternative to [`worker_replication_secret`](#worker_replication_secret):
+allows the secret to be specified in an external file.
+
+The file should be a plain text file, containing only the secret.
+Synapse reads the secret from the given file once at startup.
+
+Example configuration:
+```yaml
+worker_replication_secret_path: /path/to/secrets/file
+```
+
+_Added in Synapse 1.126.0._
+
 ---
 ### `start_pushers`
diff --git a/poetry.lock b/poetry.lock
index 0d388eff6a..1ad631199a 100644
--- a/poetry.lock
+++ b/poetry.lock
@@ -32,13 +32,13 @@ tests-mypy = ["mypy (>=1.11.1)", "pytest-mypy-plugins"]

 [[package]]
 name = "authlib"
-version = "1.4.0"
+version = "1.4.1"
 description = "The ultimate Python library in building OAuth and OpenID Connect servers and clients." 
optional = true python-versions = ">=3.9" files = [ - {file = "Authlib-1.4.0-py2.py3-none-any.whl", hash = "sha256:4bb20b978c8b636222b549317c1815e1fe62234fc1c5efe8855d84aebf3a74e3"}, - {file = "authlib-1.4.0.tar.gz", hash = "sha256:1c1e6608b5ed3624aeeee136ca7f8c120d6f51f731aa152b153d54741840e1f2"}, + {file = "Authlib-1.4.1-py2.py3-none-any.whl", hash = "sha256:edc29c3f6a3e72cd9e9f45fff67fc663a2c364022eb0371c003f22d5405915c1"}, + {file = "authlib-1.4.1.tar.gz", hash = "sha256:30ead9ea4993cdbab821dc6e01e818362f92da290c04c7f6a1940f86507a790d"}, ] [package.dependencies] @@ -304,13 +304,13 @@ files = [ [[package]] name = "click" -version = "8.1.7" +version = "8.1.8" description = "Composable command line interface toolkit" optional = false python-versions = ">=3.7" files = [ - {file = "click-8.1.7-py3-none-any.whl", hash = "sha256:ae74fb96c20a0277a1d615f1e4d73c8414f5a98db8b799a7931d1582f3390c28"}, - {file = "click-8.1.7.tar.gz", hash = "sha256:ca9853ad459e787e2192211578cc907e7594e294c7ccc834310722b41b9ca6de"}, + {file = "click-8.1.8-py3-none-any.whl", hash = "sha256:63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2"}, + {file = "click-8.1.8.tar.gz", hash = "sha256:ed53c9d8990d83c2a27deae68e4ee337473f6330c040a31d4225c9574d16096a"}, ] [package.dependencies] @@ -1155,61 +1155,72 @@ testing = ["coverage", "pytest", "pytest-cov", "pytest-regressions"] [[package]] name = "markupsafe" -version = "2.1.2" +version = "3.0.2" description = "Safely add untrusted strings to HTML/XML markup." optional = false -python-versions = ">=3.7" +python-versions = ">=3.9" files = [ - {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:665a36ae6f8f20a4676b53224e33d456a6f5a72657d9c83c2aa00765072f31f7"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:340bea174e9761308703ae988e982005aedf427de816d1afe98147668cc03036"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:22152d00bf4a9c7c83960521fc558f55a1adbc0631fbb00a9471e097b19d72e1"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:28057e985dace2f478e042eaa15606c7efccb700797660629da387eb289b9323"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ca244fa73f50a800cf8c3ebf7fd93149ec37f5cb9596aa8873ae2c1d23498601"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d9d971ec1e79906046aa3ca266de79eac42f1dbf3612a05dc9368125952bd1a1"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_i686.whl", hash = "sha256:7e007132af78ea9df29495dbf7b5824cb71648d7133cf7848a2a5dd00d36f9ff"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:7313ce6a199651c4ed9d7e4cfb4aa56fe923b1adf9af3b420ee14e6d9a73df65"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-win32.whl", hash = "sha256:c4a549890a45f57f1ebf99c067a4ad0cb423a05544accaf2b065246827ed9603"}, - {file = "MarkupSafe-2.1.2-cp310-cp310-win_amd64.whl", hash = "sha256:835fb5e38fd89328e9c81067fd642b3593c33e1e17e2fdbf77f5676abb14a156"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:2ec4f2d48ae59bbb9d1f9d7efb9236ab81429a764dedca114f5fdabbc3788013"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:608e7073dfa9e38a85d38474c082d4281f4ce276ac0010224eaba11e929dd53a"}, - {file = 
"MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:65608c35bfb8a76763f37036547f7adfd09270fbdbf96608be2bead319728fcd"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f2bfb563d0211ce16b63c7cb9395d2c682a23187f54c3d79bfec33e6705473c6"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:da25303d91526aac3672ee6d49a2f3db2d9502a4a60b55519feb1a4c7714e07d"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:9cad97ab29dfc3f0249b483412c85c8ef4766d96cdf9dcf5a1e3caa3f3661cf1"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_i686.whl", hash = "sha256:085fd3201e7b12809f9e6e9bc1e5c96a368c8523fad5afb02afe3c051ae4afcc"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:1bea30e9bf331f3fef67e0a3877b2288593c98a21ccb2cf29b74c581a4eb3af0"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-win32.whl", hash = "sha256:7df70907e00c970c60b9ef2938d894a9381f38e6b9db73c5be35e59d92e06625"}, - {file = "MarkupSafe-2.1.2-cp311-cp311-win_amd64.whl", hash = "sha256:e55e40ff0cc8cc5c07996915ad367fa47da6b3fc091fdadca7f5403239c5fec3"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-macosx_10_9_x86_64.whl", hash = "sha256:a6e40afa7f45939ca356f348c8e23048e02cb109ced1eb8420961b2f40fb373a"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cf877ab4ed6e302ec1d04952ca358b381a882fbd9d1b07cccbfd61783561f98a"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63ba06c9941e46fa389d389644e2d8225e0e3e5ebcc4ff1ea8506dce646f8c8a"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f1cd098434e83e656abf198f103a8207a8187c0fc110306691a2e94a78d0abb2"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_aarch64.whl", hash = "sha256:55f44b440d491028addb3b88f72207d71eeebfb7b5dbf0643f7c023ae1fba619"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_i686.whl", hash = "sha256:a6f2fcca746e8d5910e18782f976489939d54a91f9411c32051b4aab2bd7c513"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-musllinux_1_1_x86_64.whl", hash = "sha256:0b462104ba25f1ac006fdab8b6a01ebbfbce9ed37fd37fd4acd70c67c973e460"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-win32.whl", hash = "sha256:7668b52e102d0ed87cb082380a7e2e1e78737ddecdde129acadb0eccc5423859"}, - {file = "MarkupSafe-2.1.2-cp37-cp37m-win_amd64.whl", hash = "sha256:6d6607f98fcf17e534162f0709aaad3ab7a96032723d8ac8750ffe17ae5a0666"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_universal2.whl", hash = "sha256:a806db027852538d2ad7555b203300173dd1b77ba116de92da9afbc3a3be3eed"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-macosx_10_9_x86_64.whl", hash = "sha256:a4abaec6ca3ad8660690236d11bfe28dfd707778e2442b45addd2f086d6ef094"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f03a532d7dee1bed20bc4884194a16160a2de9ffc6354b3878ec9682bb623c54"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4cf06cdc1dda95223e9d2d3c58d3b178aa5dacb35ee7e3bbac10e4e1faacb419"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:22731d79ed2eb25059ae3df1dfc9cb1546691cc41f4e3130fe6bfbc3ecbbecfa"}, - {file = 
"MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:f8ffb705ffcf5ddd0e80b65ddf7bed7ee4f5a441ea7d3419e861a12eaf41af58"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_i686.whl", hash = "sha256:8db032bf0ce9022a8e41a22598eefc802314e81b879ae093f36ce9ddf39ab1ba"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:2298c859cfc5463f1b64bd55cb3e602528db6fa0f3cfd568d3605c50678f8f03"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-win32.whl", hash = "sha256:50c42830a633fa0cf9e7d27664637532791bfc31c731a87b202d2d8ac40c3ea2"}, - {file = "MarkupSafe-2.1.2-cp38-cp38-win_amd64.whl", hash = "sha256:bb06feb762bade6bf3c8b844462274db0c76acc95c52abe8dbed28ae3d44a147"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:99625a92da8229df6d44335e6fcc558a5037dd0a760e11d84be2260e6f37002f"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-macosx_10_9_x86_64.whl", hash = "sha256:8bca7e26c1dd751236cfb0c6c72d4ad61d986e9a41bbf76cb445f69488b2a2bd"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:40627dcf047dadb22cd25ea7ecfe9cbf3bbbad0482ee5920b582f3809c97654f"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:40dfd3fefbef579ee058f139733ac336312663c6706d1163b82b3003fb1925c4"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:090376d812fb6ac5f171e5938e82e7f2d7adc2b629101cec0db8b267815c85e2"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:2e7821bffe00aa6bd07a23913b7f4e01328c3d5cc0b40b36c0bd81d362faeb65"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_i686.whl", hash = "sha256:c0a33bc9f02c2b17c3ea382f91b4db0e6cde90b63b296422a939886a7a80de1c"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:b8526c6d437855442cdd3d87eede9c425c4445ea011ca38d937db299382e6fa3"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-win32.whl", hash = "sha256:137678c63c977754abe9086a3ec011e8fd985ab90631145dfb9294ad09c102a7"}, - {file = "MarkupSafe-2.1.2-cp39-cp39-win_amd64.whl", hash = "sha256:0576fe974b40a400449768941d5d0858cc624e3249dfd1e0c33674e5c7ca7aed"}, - {file = "MarkupSafe-2.1.2.tar.gz", hash = "sha256:abcabc8c2b26036d62d4c746381a6f7cf60aafcc653198ad678306986b09450d"}, + {file = "MarkupSafe-3.0.2-cp310-cp310-macosx_10_9_universal2.whl", hash = "sha256:7e94c425039cde14257288fd61dcfb01963e658efbc0ff54f5306b06054700f8"}, + {file = "MarkupSafe-3.0.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:9e2d922824181480953426608b81967de705c3cef4d1af983af849d7bd619158"}, + {file = "MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:38a9ef736c01fccdd6600705b09dc574584b89bea478200c5fbf112a6b0d5579"}, + {file = "MarkupSafe-3.0.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:bbcb445fa71794da8f178f0f6d66789a28d7319071af7a496d4d507ed566270d"}, + {file = "MarkupSafe-3.0.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:57cb5a3cf367aeb1d316576250f65edec5bb3be939e9247ae594b4bcbc317dfb"}, + {file = "MarkupSafe-3.0.2-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:3809ede931876f5b2ec92eef964286840ed3540dadf803dd570c3b7e13141a3b"}, + {file = "MarkupSafe-3.0.2-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:e07c3764494e3776c602c1e78e298937c3315ccc9043ead7e685b7f2b8d47b3c"}, + {file = 
"MarkupSafe-3.0.2-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:b424c77b206d63d500bcb69fa55ed8d0e6a3774056bdc4839fc9298a7edca171"}, + {file = "MarkupSafe-3.0.2-cp310-cp310-win32.whl", hash = "sha256:fcabf5ff6eea076f859677f5f0b6b5c1a51e70a376b0579e0eadef8db48c6b50"}, + {file = "MarkupSafe-3.0.2-cp310-cp310-win_amd64.whl", hash = "sha256:6af100e168aa82a50e186c82875a5893c5597a0c1ccdb0d8b40240b1f28b969a"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-macosx_10_9_universal2.whl", hash = "sha256:9025b4018f3a1314059769c7bf15441064b2207cb3f065e6ea1e7359cb46db9d"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:93335ca3812df2f366e80509ae119189886b0f3c2b81325d39efdb84a1e2ae93"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2cb8438c3cbb25e220c2ab33bb226559e7afb3baec11c4f218ffa7308603c832"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a123e330ef0853c6e822384873bef7507557d8e4a082961e1defa947aa59ba84"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:1e084f686b92e5b83186b07e8a17fc09e38fff551f3602b249881fec658d3eca"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:d8213e09c917a951de9d09ecee036d5c7d36cb6cb7dbaece4c71a60d79fb9798"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:5b02fb34468b6aaa40dfc198d813a641e3a63b98c2b05a16b9f80b7ec314185e"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:0bff5e0ae4ef2e1ae4fdf2dfd5b76c75e5c2fa4132d05fc1b0dabcd20c7e28c4"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-win32.whl", hash = "sha256:6c89876f41da747c8d3677a2b540fb32ef5715f97b66eeb0c6b66f5e3ef6f59d"}, + {file = "MarkupSafe-3.0.2-cp311-cp311-win_amd64.whl", hash = "sha256:70a87b411535ccad5ef2f1df5136506a10775d267e197e4cf531ced10537bd6b"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-macosx_10_13_universal2.whl", hash = "sha256:9778bd8ab0a994ebf6f84c2b949e65736d5575320a17ae8984a77fab08db94cf"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:846ade7b71e3536c4e56b386c2a47adf5741d2d8b94ec9dc3e92e5e1ee1e2225"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1c99d261bd2d5f6b59325c92c73df481e05e57f19837bdca8413b9eac4bd8028"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e17c96c14e19278594aa4841ec148115f9c7615a47382ecb6b82bd8fea3ab0c8"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:88416bd1e65dcea10bc7569faacb2c20ce071dd1f87539ca2ab364bf6231393c"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2181e67807fc2fa785d0592dc2d6206c019b9502410671cc905d132a92866557"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:52305740fe773d09cffb16f8ed0427942901f00adedac82ec8b67752f58a1b22"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:ad10d3ded218f1039f11a75f8091880239651b52e9bb592ca27de44eed242a48"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-win32.whl", hash = "sha256:0f4ca02bea9a23221c0182836703cbf8930c5e9454bacce27e767509fa286a30"}, + {file = "MarkupSafe-3.0.2-cp312-cp312-win_amd64.whl", hash = 
"sha256:8e06879fc22a25ca47312fbe7c8264eb0b662f6db27cb2d3bbbc74b1df4b9b87"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-macosx_10_13_universal2.whl", hash = "sha256:ba9527cdd4c926ed0760bc301f6728ef34d841f405abf9d4f959c478421e4efd"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f8b3d067f2e40fe93e1ccdd6b2e1d16c43140e76f02fb1319a05cf2b79d99430"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:569511d3b58c8791ab4c2e1285575265991e6d8f8700c7be0e88f86cb0672094"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:15ab75ef81add55874e7ab7055e9c397312385bd9ced94920f2802310c930396"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:f3818cb119498c0678015754eba762e0d61e5b52d34c8b13d770f0719f7b1d79"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:cdb82a876c47801bb54a690c5ae105a46b392ac6099881cdfb9f6e95e4014c6a"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:cabc348d87e913db6ab4aa100f01b08f481097838bdddf7c7a84b7575b7309ca"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:444dcda765c8a838eaae23112db52f1efaf750daddb2d9ca300bcae1039adc5c"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-win32.whl", hash = "sha256:bcf3e58998965654fdaff38e58584d8937aa3096ab5354d493c77d1fdd66d7a1"}, + {file = "MarkupSafe-3.0.2-cp313-cp313-win_amd64.whl", hash = "sha256:e6a2a455bd412959b57a172ce6328d2dd1f01cb2135efda2e4576e8a23fa3b0f"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-macosx_10_13_universal2.whl", hash = "sha256:b5a6b3ada725cea8a5e634536b1b01c30bcdcd7f9c6fff4151548d5bf6b3a36c"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:a904af0a6162c73e3edcb969eeeb53a63ceeb5d8cf642fade7d39e7963a22ddb"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4aa4e5faecf353ed117801a068ebab7b7e09ffb6e1d5e412dc852e0da018126c"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c0ef13eaeee5b615fb07c9a7dadb38eac06a0608b41570d8ade51c56539e509d"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d16a81a06776313e817c951135cf7340a3e91e8c1ff2fac444cfd75fffa04afe"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:6381026f158fdb7c72a168278597a5e3a5222e83ea18f543112b2662a9b699c5"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:3d79d162e7be8f996986c064d1c7c817f6df3a77fe3d6859f6f9e7be4b8c213a"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:131a3c7689c85f5ad20f9f6fb1b866f402c445b220c19fe4308c0b147ccd2ad9"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-win32.whl", hash = "sha256:ba8062ed2cf21c07a9e295d5b8a2a5ce678b913b45fdf68c32d95d6c1291e0b6"}, + {file = "MarkupSafe-3.0.2-cp313-cp313t-win_amd64.whl", hash = "sha256:e444a31f8db13eb18ada366ab3cf45fd4b31e4db1236a4448f68778c1d1a5a2f"}, + {file = "MarkupSafe-3.0.2-cp39-cp39-macosx_10_9_universal2.whl", hash = "sha256:eaa0a10b7f72326f1372a713e73c3f739b524b3af41feb43e4921cb529f5929a"}, + {file = "MarkupSafe-3.0.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:48032821bbdf20f5799ff537c7ac3d1fba0ba032cfc06194faffa8cda8b560ff"}, + {file = 
"MarkupSafe-3.0.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:1a9d3f5f0901fdec14d8d2f66ef7d035f2157240a433441719ac9a3fba440b13"}, + {file = "MarkupSafe-3.0.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:88b49a3b9ff31e19998750c38e030fc7bb937398b1f78cfa599aaef92d693144"}, + {file = "MarkupSafe-3.0.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:cfad01eed2c2e0c01fd0ecd2ef42c492f7f93902e39a42fc9ee1692961443a29"}, + {file = "MarkupSafe-3.0.2-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:1225beacc926f536dc82e45f8a4d68502949dc67eea90eab715dea3a21c1b5f0"}, + {file = "MarkupSafe-3.0.2-cp39-cp39-musllinux_1_2_i686.whl", hash = "sha256:3169b1eefae027567d1ce6ee7cae382c57fe26e82775f460f0b2778beaad66c0"}, + {file = "MarkupSafe-3.0.2-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:eb7972a85c54febfb25b5c4b4f3af4dcc731994c7da0d8a0b4a6eb0640e1d178"}, + {file = "MarkupSafe-3.0.2-cp39-cp39-win32.whl", hash = "sha256:8c4e8c3ce11e1f92f6536ff07154f9d49677ebaaafc32db9db4620bc11ed480f"}, + {file = "MarkupSafe-3.0.2-cp39-cp39-win_amd64.whl", hash = "sha256:6e296a513ca3d94054c2c881cc913116e90fd030ad1c656b3869762b754f5f8a"}, + {file = "markupsafe-3.0.2.tar.gz", hash = "sha256:ee55d3edf80167e48ea11a923c7386f4669df67d7994554387f84e7d8b0a2bf0"}, ] [[package]] @@ -2821,13 +2832,13 @@ files = [ [[package]] name = "types-psycopg2" -version = "2.9.21.20241019" +version = "2.9.21.20250121" description = "Typing stubs for psycopg2" optional = false -python-versions = ">=3.8" +python-versions = ">=3.9" files = [ - {file = "types-psycopg2-2.9.21.20241019.tar.gz", hash = "sha256:bca89b988d2ebd19bcd08b177d22a877ea8b841decb10ed130afcf39404612fa"}, - {file = "types_psycopg2-2.9.21.20241019-py3-none-any.whl", hash = "sha256:44d091e67732d16a941baae48cd7b53bf91911bc36888652447cf1ef0c1fb3f6"}, + {file = "types_psycopg2-2.9.21.20250121-py3-none-any.whl", hash = "sha256:b890dc6f5a08b6433f0ff73a4ec9a834deedad3e914f2a4a6fd43df021f745f1"}, + {file = "types_psycopg2-2.9.21.20250121.tar.gz", hash = "sha256:2b0e2cd0f3747af1ae25a7027898716d80209604770ef3cbf350fe055b9c349b"}, ] [[package]] diff --git a/pyproject.toml b/pyproject.toml index a40be93d7b..5f18bd0768 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust" [tool.poetry] name = "matrix-synapse" -version = "1.125.0rc1" +version = "1.126.0rc1" description = "Homeserver for the Matrix decentralised comms protocol" authors = ["Matrix.org Team and Contributors "] license = "AGPL-3.0-or-later" diff --git a/rust/src/rendezvous/mod.rs b/rust/src/rendezvous/mod.rs index 23de668102..3148e0f67a 100644 --- a/rust/src/rendezvous/mod.rs +++ b/rust/src/rendezvous/mod.rs @@ -47,7 +47,7 @@ fn prepare_headers(headers: &mut HeaderMap, session: &Session) { headers.typed_insert(AccessControlAllowOrigin::ANY); headers.typed_insert(AccessControlExposeHeaders::from_iter([ETAG])); headers.typed_insert(Pragma::no_cache()); - headers.typed_insert(CacheControl::new().with_no_store()); + headers.typed_insert(CacheControl::new().with_no_store().with_no_transform()); headers.typed_insert(session.etag()); headers.typed_insert(session.expires()); headers.typed_insert(session.last_modified()); @@ -192,10 +192,12 @@ impl RendezvousHandler { "url": uri, }) .to_string(); + let length = response.len() as _; let mut response = Response::new(response.as_bytes()); *response.status_mut() = StatusCode::CREATED; 
response.headers_mut().typed_insert(ContentType::json()); + response.headers_mut().typed_insert(ContentLength(length)); prepare_headers(response.headers_mut(), &session); http_response_to_twisted(twisted_request, response)?; @@ -299,6 +301,7 @@ impl RendezvousHandler { // proxy/cache setup which strips the ETag header if there is no Content-Type set. // Specifically, we noticed this behaviour when placing Synapse behind Cloudflare. response.headers_mut().typed_insert(ContentType::text()); + response.headers_mut().typed_insert(ContentLength(0)); http_response_to_twisted(twisted_request, response)?; @@ -316,6 +319,7 @@ impl RendezvousHandler { response .headers_mut() .typed_insert(AccessControlAllowOrigin::ANY); + response.headers_mut().typed_insert(ContentLength(0)); http_response_to_twisted(twisted_request, response)?; Ok(()) diff --git a/synapse/_scripts/synapse_port_db.py b/synapse/_scripts/synapse_port_db.py index 3f67a739a0..59065a0504 100755 --- a/synapse/_scripts/synapse_port_db.py +++ b/synapse/_scripts/synapse_port_db.py @@ -191,6 +191,11 @@ APPEND_ONLY_TABLES = [ IGNORED_TABLES = { + # Porting the auto generated sequence in this table is non-trivial. + # None of the entries in this list are mandatory for Synapse to keep working. + # If state group disk space is an issue after the port, the + # `delete_unreferenced_state_groups_bg_update` background task can be run again. + "state_groups_pending_deletion", # We don't port these tables, as they're a faff and we can regenerate # them anyway. "user_directory", @@ -216,6 +221,15 @@ IGNORED_TABLES = { } +# These background updates will not be applied upon creation of the postgres database. +IGNORED_BACKGROUND_UPDATES = { + # Reapplying this background update to the postgres database is unnecessary after + # already having waited for the SQLite database to complete all running background + # updates. + "delete_unreferenced_state_groups_bg_update", +} + + # Error returned by the run function. Used at the top-level part of the script to # handle errors and return codes. end_error: Optional[str] = None @@ -687,6 +701,20 @@ class Porter: # 0 means off. 1 means full. 2 means incremental. return autovacuum_setting != 0 + async def remove_ignored_background_updates_from_database(self) -> None: + def _remove_delete_unreferenced_state_groups_bg_updates( + txn: LoggingTransaction, + ) -> None: + txn.execute( + "DELETE FROM background_updates WHERE update_name = ANY(?)", + (list(IGNORED_BACKGROUND_UPDATES),), + ) + + await self.postgres_store.db_pool.runInteraction( + "remove_delete_unreferenced_state_groups_bg_updates", + _remove_delete_unreferenced_state_groups_bg_updates, + ) + async def run(self) -> None: """Ports the SQLite database to a PostgreSQL database. @@ -732,6 +760,8 @@ class Porter: self.hs_config.database.get_single_database() ) + await self.remove_ignored_background_updates_from_database() + await self.run_background_updates_on_postgres() self.progress.set_state("Creating port tables") diff --git a/synapse/api/auth/msc3861_delegated.py b/synapse/api/auth/msc3861_delegated.py index f825b5c95e..e6bf271a1f 100644 --- a/synapse/api/auth/msc3861_delegated.py +++ b/synapse/api/auth/msc3861_delegated.py @@ -214,6 +214,9 @@ class MSC3861DelegatedAuth(BaseAuth): "Content-Type": "application/x-www-form-urlencoded", "User-Agent": str(self._http_client.user_agent, "utf-8"), "Accept": "application/json", + # Tell MAS that we support reading the device ID as an explicit + # value, not encoded in the scope. 
This is supported by MAS 0.15+
+            "X-MAS-Supports-Device-Id": "1",
         }

         args = {"token": token, "token_type_hint": "access_token"}
@@ -409,29 +412,41 @@
         else:
             user_id = UserID.from_string(user_id_str)

-        # Find device_ids in scope
-        # We only allow a single device_id in the scope, so we find them all in the
-        # scope list, and raise if there are more than one. The OIDC server should be
-        # the one enforcing valid scopes, so we raise a 500 if we find an invalid scope.
-        device_ids = [
-            tok[len(SCOPE_MATRIX_DEVICE_PREFIX) :]
-            for tok in scope
-            if tok.startswith(SCOPE_MATRIX_DEVICE_PREFIX)
-        ]
+        # MAS 0.15+ will give us the device ID as an explicit value for compatibility sessions.
+        # If present, we get it from here; if not, we get it from the scope.
+        device_id = introspection_result.get("device_id")
+        if device_id is not None:
+            # We got the device ID explicitly, just sanity check that it's a string
+            if not isinstance(device_id, str):
+                raise AuthError(
+                    500,
+                    "Invalid device ID in introspection result",
+                )
+        else:
+            # Find device_ids in scope
+            # We only allow a single device_id in the scope, so we find them all in the
+            # scope list, and raise if there are more than one. The OIDC server should be
+            # the one enforcing valid scopes, so we raise a 500 if we find an invalid scope.
+            device_ids = [
+                tok[len(SCOPE_MATRIX_DEVICE_PREFIX) :]
+                for tok in scope
+                if tok.startswith(SCOPE_MATRIX_DEVICE_PREFIX)
+            ]

-        if len(device_ids) > 1:
-            raise AuthError(
-                500,
-                "Multiple device IDs in scope",
-            )
+            if len(device_ids) > 1:
+                raise AuthError(
+                    500,
+                    "Multiple device IDs in scope",
+                )
+
+            device_id = device_ids[0] if device_ids else None

-        device_id = device_ids[0] if device_ids else None
         if device_id is not None:
             # Sanity check the device_id
             if len(device_id) > 255 or len(device_id) < 1:
                 raise AuthError(
                     500,
-                    "Invalid device ID in scope",
+                    "Invalid device ID in introspection result",
                 )

             # Create the device on the fly if it does not exist
diff --git a/synapse/config/_base.py b/synapse/config/_base.py
index 912346a423..132ba26af9 100644
--- a/synapse/config/_base.py
+++ b/synapse/config/_base.py
@@ -589,6 +589,14 @@ class RootConfig:
             " Defaults to the directory containing the last config file",
         )

+        config_parser.add_argument(
+            "--no-secrets-in-config",
+            dest="secrets_in_config",
+            action="store_false",
+            default=True,
+            help="Reject config options that expect an in-line secret as value.",
+        )
+
         cls.invoke_all_static("add_arguments", config_parser)

     @classmethod
@@ -626,7 +634,10 @@ class RootConfig:
         config_dict = read_config_files(config_files)

         obj.parse_config_dict(
-            config_dict, config_dir_path=config_dir_path, data_dir_path=data_dir_path
+            config_dict,
+            config_dir_path=config_dir_path,
+            data_dir_path=data_dir_path,
+            allow_secrets_in_config=config_args.secrets_in_config,
         )

         obj.invoke_all("read_arguments", config_args)

@@ -653,6 +664,13 @@
             help="Specify config file. Can be given multiple times and"
             " may specify directories containing *.yaml files.",
         )
+        parser.add_argument(
+            "--no-secrets-in-config",
+            dest="secrets_in_config",
+            action="store_false",
+            default=True,
+            help="Reject config options that expect an in-line secret as value.",
+        )

         # we nest the mutually-exclusive group inside another group so that the help
         # text shows them in their own group. 
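For illustration, here is a minimal sketch of how the new flag is meant to be invoked (the config path is hypothetical): a deployment that supplies its secrets via the new `*_path` options can make startup reject any in-line secret left in the config:

```sh
# Hypothetical invocation: secrets are provided through *_path options such as
# form_secret_path and worker_replication_secret_path, so any in-line secret
# remaining in homeserver.yaml makes startup fail with a ConfigError.
python -m synapse.app.homeserver \
    --config-path /etc/synapse/homeserver.yaml \
    --no-secrets-in-config
```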
@@ -821,14 +839,21 @@ class RootConfig: return None obj.parse_config_dict( - config_dict, config_dir_path=config_dir_path, data_dir_path=data_dir_path + config_dict, + config_dir_path=config_dir_path, + data_dir_path=data_dir_path, + allow_secrets_in_config=config_args.secrets_in_config, ) obj.invoke_all("read_arguments", config_args) return obj def parse_config_dict( - self, config_dict: Dict[str, Any], config_dir_path: str, data_dir_path: str + self, + config_dict: Dict[str, Any], + config_dir_path: str, + data_dir_path: str, + allow_secrets_in_config: bool = True, ) -> None: """Read the information from the config dict into this Config object. @@ -846,6 +871,7 @@ class RootConfig: config_dict, config_dir_path=config_dir_path, data_dir_path=data_dir_path, + allow_secrets_in_config=allow_secrets_in_config, ) def generate_missing_files( diff --git a/synapse/config/_base.pyi b/synapse/config/_base.pyi index d9cb0da38b..55b0e2cbf4 100644 --- a/synapse/config/_base.pyi +++ b/synapse/config/_base.pyi @@ -132,7 +132,11 @@ class RootConfig: @classmethod def invoke_all_static(cls, func_name: str, *args: Any, **kwargs: Any) -> None: ... def parse_config_dict( - self, config_dict: Dict[str, Any], config_dir_path: str, data_dir_path: str + self, + config_dict: Dict[str, Any], + config_dir_path: str, + data_dir_path: str, + allow_secrets_in_config: bool = ..., ) -> None: ... def generate_config( self, diff --git a/synapse/config/captcha.py b/synapse/config/captcha.py index 84897c09c5..57d67abbc3 100644 --- a/synapse/config/captcha.py +++ b/synapse/config/captcha.py @@ -29,8 +29,15 @@ from ._base import Config, ConfigError class CaptchaConfig(Config): section = "captcha" - def read_config(self, config: JsonDict, **kwargs: Any) -> None: + def read_config( + self, config: JsonDict, allow_secrets_in_config: bool, **kwargs: Any + ) -> None: recaptcha_private_key = config.get("recaptcha_private_key") + if recaptcha_private_key and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("recaptcha_private_key",), + ) if recaptcha_private_key is not None and not isinstance( recaptcha_private_key, str ): @@ -38,6 +45,11 @@ class CaptchaConfig(Config): self.recaptcha_private_key = recaptcha_private_key recaptcha_public_key = config.get("recaptcha_public_key") + if recaptcha_public_key and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("recaptcha_public_key",), + ) if recaptcha_public_key is not None and not isinstance( recaptcha_public_key, str ): diff --git a/synapse/config/experimental.py b/synapse/config/experimental.py index 3beaeb8869..0a963b121a 100644 --- a/synapse/config/experimental.py +++ b/synapse/config/experimental.py @@ -250,7 +250,9 @@ class MSC3861: ) return self._admin_token - def check_config_conflicts(self, root: RootConfig) -> None: + def check_config_conflicts( + self, root: RootConfig, allow_secrets_in_config: bool + ) -> None: """Checks for any configuration conflicts with other parts of Synapse. 
Raises: @@ -260,6 +262,24 @@ class MSC3861: if not self.enabled: return + if self._client_secret and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("experimental", "msc3861", "client_secret"), + ) + + if self.jwk and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("experimental", "msc3861", "jwk"), + ) + + if self._admin_token and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("experimental", "msc3861", "admin_token"), + ) + if ( root.auth.password_enabled_for_reauth or root.auth.password_enabled_for_login @@ -350,7 +370,9 @@ class ExperimentalConfig(Config): section = "experimental" - def read_config(self, config: JsonDict, **kwargs: Any) -> None: + def read_config( + self, config: JsonDict, allow_secrets_in_config: bool, **kwargs: Any + ) -> None: experimental = config.get("experimental_features") or {} # MSC3026 (busy presence state) @@ -494,7 +516,9 @@ class ExperimentalConfig(Config): ) from exc # Check that none of the other config options conflict with MSC3861 when enabled - self.msc3861.check_config_conflicts(self.root) + self.msc3861.check_config_conflicts( + self.root, allow_secrets_in_config=allow_secrets_in_config + ) self.msc4028_push_encrypted_events = experimental.get( "msc4028_push_encrypted_events", False diff --git a/synapse/config/key.py b/synapse/config/key.py index 01aae09c13..337f98dbc1 100644 --- a/synapse/config/key.py +++ b/synapse/config/key.py @@ -96,6 +96,11 @@ Conflicting options 'macaroon_secret_key' and 'macaroon_secret_key_path' are both defined in config file. """ +CONFLICTING_FORM_SECRET_OPTS_ERROR = """\ +Conflicting options 'form_secret' and 'form_secret_path' are both defined in +config file. 
+""" + logger = logging.getLogger(__name__) @@ -112,7 +117,11 @@ class KeyConfig(Config): section = "key" def read_config( - self, config: JsonDict, config_dir_path: str, **kwargs: Any + self, + config: JsonDict, + config_dir_path: str, + allow_secrets_in_config: bool, + **kwargs: Any, ) -> None: # the signing key can be specified inline or in a separate file if "signing_key" in config: @@ -172,6 +181,11 @@ class KeyConfig(Config): ) macaroon_secret_key = config.get("macaroon_secret_key") + if macaroon_secret_key and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("macaroon_secret_key",), + ) macaroon_secret_key_path = config.get("macaroon_secret_key_path") if macaroon_secret_key_path: if macaroon_secret_key: @@ -192,7 +206,19 @@ class KeyConfig(Config): # a secret which is used to calculate HMACs for form values, to stop # falsification of values - self.form_secret = config.get("form_secret", None) + form_secret = config.get("form_secret", None) + if form_secret and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("form_secret",), + ) + form_secret_path = config.get("form_secret_path", None) + if form_secret_path: + if form_secret: + raise ConfigError(CONFLICTING_FORM_SECRET_OPTS_ERROR) + self.form_secret = read_file(form_secret_path, "form_secret_path").strip() + else: + self.form_secret = form_secret def generate_config_section( self, diff --git a/synapse/config/oidc.py b/synapse/config/oidc.py index d0a03baf55..8ba0ba2c36 100644 --- a/synapse/config/oidc.py +++ b/synapse/config/oidc.py @@ -125,6 +125,10 @@ OIDC_PROVIDER_CONFIG_SCHEMA = { "enum": ["client_secret_basic", "client_secret_post", "none"], }, "pkce_method": {"type": "string", "enum": ["auto", "always", "never"]}, + "id_token_signing_alg_values_supported": { + "type": "array", + "items": {"type": "string"}, + }, "scopes": {"type": "array", "items": {"type": "string"}}, "authorization_endpoint": {"type": "string"}, "token_endpoint": {"type": "string"}, @@ -137,6 +141,9 @@ OIDC_PROVIDER_CONFIG_SCHEMA = { "type": "string", "enum": ["auto", "userinfo_endpoint"], }, + "redirect_uri": { + "type": ["string", "null"], + }, "allow_existing_users": {"type": "boolean"}, "user_mapping_provider": {"type": ["object", "null"]}, "attribute_requirements": { @@ -326,6 +333,9 @@ def _parse_oidc_config_dict( client_secret_jwt_key=client_secret_jwt_key, client_auth_method=client_auth_method, pkce_method=oidc_config.get("pkce_method", "auto"), + id_token_signing_alg_values_supported=oidc_config.get( + "id_token_signing_alg_values_supported" + ), scopes=oidc_config.get("scopes", ["openid"]), authorization_endpoint=oidc_config.get("authorization_endpoint"), token_endpoint=oidc_config.get("token_endpoint"), @@ -337,6 +347,7 @@ def _parse_oidc_config_dict( ), skip_verification=oidc_config.get("skip_verification", False), user_profile_method=oidc_config.get("user_profile_method", "auto"), + redirect_uri=oidc_config.get("redirect_uri"), allow_existing_users=oidc_config.get("allow_existing_users", False), user_mapping_provider_class=user_mapping_provider_class, user_mapping_provider_config=user_mapping_provider_config, @@ -402,6 +413,34 @@ class OidcProviderConfig: # Valid values are 'auto', 'always', and 'never'. pkce_method: str + id_token_signing_alg_values_supported: Optional[List[str]] + """ + List of the JWS signing algorithms (`alg` values) that are supported for signing the + `id_token`. 
+
+    This is *not* required if `discovery` is disabled. We default to supporting `RS256`
+    in the downstream usage if no algorithms are configured here or in the discovery
+    document.
+
+    According to the spec, the algorithm `"RS256"` MUST be included. The strictly rigid
+    approach would be to reject this provider as non-compliant if it's not included,
+    but we instead allow whatever is configured and see what happens (the operator
+    chose the value and is cooperating with the identity provider). It wouldn't be
+    wise to add `RS256` ourselves, because its absence might indicate that the
+    provider actually doesn't support it, despite the spec requirement. Adding it
+    silently could lead to failed authentication attempts or strange mismatch attacks.
+
+    The `alg` value `"none"` MAY be supported, but it can only be used when the
+    Authorization Endpoint does not include `id_token` in the `response_type`, such
+    as in the Authorization Code Flow (e.g. `none` can apply to
+    `/authorize?response_type=code` but not to
+    `/authorize?response_type=code%20id_token`).
+
+    Spec:
+    - https://openid.net/specs/openid-connect-discovery-1_0.html#ProviderMetadata
+    - https://openid.net/specs/openid-connect-core-1_0.html#AuthorizationExamples
+    """
+
     # list of scopes to request
     scopes: Collection[str]
 
@@ -432,6 +471,18 @@ class OidcProviderConfig:
     # values are: "auto" or "userinfo_endpoint".
     user_profile_method: str
 
+    redirect_uri: Optional[str]
+    """
+    An optional replacement for Synapse's hardcoded `redirect_uri` URL
+    (`/_synapse/client/oidc/callback`). This can be used to send
+    the client to a different URL after it receives a response from the
+    `authorization_endpoint`.
+
+    If this is set, the client is then expected to call Synapse's OIDC callback URL
+    (reproduced above) itself with the necessary parameters and session cookie, in
+    order to complete the OIDC login.
+    """
+
     # whether to allow a user logging in via OIDC to match a pre-existing account
     # instead of failing
     allow_existing_users: bool
diff --git a/synapse/config/ratelimiting.py b/synapse/config/ratelimiting.py
index 06af4da3c5..eb1dc2dacb 100644
--- a/synapse/config/ratelimiting.py
+++ b/synapse/config/ratelimiting.py
@@ -234,3 +234,9 @@ class RatelimitConfig(Config):
             "rc_presence.per_user",
             defaults={"per_second": 0.1, "burst_count": 1},
         )
+
+        self.rc_delayed_event_mgmt = RatelimitSettings.parse(
+            config,
+            "rc_delayed_event_mgmt",
+            defaults={"per_second": 1, "burst_count": 5},
+        )
diff --git a/synapse/config/redis.py b/synapse/config/redis.py
index 3f38fa11b0..948c95eef7 100644
--- a/synapse/config/redis.py
+++ b/synapse/config/redis.py
@@ -34,7 +34,9 @@
 These are mutually incompatible.
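The new `rc_delayed_event_mgmt` defaults (`per_second: 1`, `burst_count: 5`) follow Synapse's usual token-bucket semantics. A rough, self-contained model of what those two numbers mean (not Synapse's actual `Ratelimiter`, which is store-backed and honours per-user overrides):

```python
import time
from typing import Dict, Tuple


class SimpleRatelimiter:
    """Toy token bucket: `burst_count` tokens, refilled at `per_second`."""

    def __init__(self, per_second: float, burst_count: int) -> None:
        self.rate = per_second
        self.burst = burst_count
        # (user_id, device_id) -> (remaining tokens, last update time)
        self.buckets: Dict[Tuple[str, str], Tuple[float, float]] = {}

    def allow(self, key: Tuple[str, str]) -> bool:
        tokens, last = self.buckets.get(key, (float(self.burst), time.monotonic()))
        now = time.monotonic()
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens < 1:
            self.buckets[key] = (tokens, now)
            return False
        self.buckets[key] = (tokens - 1, now)
        return True


limiter = SimpleRatelimiter(per_second=1, burst_count=5)
assert all(limiter.allow(("@u:hs", "DEV")) for _ in range(5))  # burst allowed
assert not limiter.allow(("@u:hs", "DEV"))  # sixth immediate call is rejected
```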
class RedisConfig(Config): section = "redis" - def read_config(self, config: JsonDict, **kwargs: Any) -> None: + def read_config( + self, config: JsonDict, allow_secrets_in_config: bool, **kwargs: Any + ) -> None: redis_config = config.get("redis") or {} self.redis_enabled = redis_config.get("enabled", False) @@ -48,6 +50,11 @@ class RedisConfig(Config): self.redis_path = redis_config.get("path", None) self.redis_dbid = redis_config.get("dbid", None) self.redis_password = redis_config.get("password") + if self.redis_password and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("redis", "password"), + ) redis_password_path = redis_config.get("password_path") if redis_password_path: if self.redis_password: diff --git a/synapse/config/registration.py b/synapse/config/registration.py index c7f3e6d35e..3cf7031656 100644 --- a/synapse/config/registration.py +++ b/synapse/config/registration.py @@ -43,7 +43,9 @@ You have configured both `registration_shared_secret` and class RegistrationConfig(Config): section = "registration" - def read_config(self, config: JsonDict, **kwargs: Any) -> None: + def read_config( + self, config: JsonDict, allow_secrets_in_config: bool, **kwargs: Any + ) -> None: self.enable_registration = strtobool( str(config.get("enable_registration", False)) ) @@ -68,6 +70,11 @@ class RegistrationConfig(Config): # read the shared secret, either inline or from an external file self.registration_shared_secret = config.get("registration_shared_secret") + if self.registration_shared_secret and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("registration_shared_secret",), + ) registration_shared_secret_path = config.get("registration_shared_secret_path") if registration_shared_secret_path: if self.registration_shared_secret: diff --git a/synapse/config/room_directory.py b/synapse/config/room_directory.py index 704895cf9a..f0349b68f2 100644 --- a/synapse/config/room_directory.py +++ b/synapse/config/room_directory.py @@ -54,9 +54,7 @@ class RoomDirectoryConfig(Config): for rule in room_list_publication_rules ] else: - self._room_list_publication_rules = [ - _RoomDirectoryRule("room_list_publication_rules", {"action": "allow"}) - ] + self._room_list_publication_rules = [] def is_alias_creation_allowed(self, user_id: str, room_id: str, alias: str) -> bool: """Checks if the given user is allowed to create the given alias diff --git a/synapse/config/voip.py b/synapse/config/voip.py index 8614a41dd4..f33602d975 100644 --- a/synapse/config/voip.py +++ b/synapse/config/voip.py @@ -34,9 +34,16 @@ These are mutually incompatible. 
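The `room_directory.py` hunk above replaces the previous allow-all default for `room_list_publication_rules` with an empty rule list, which disables room list publication by default. A small sketch of the assumed rule semantics (real rules can also match on user ID, room ID, and alias globs; an operator restores the old behaviour by configuring a single `{"action": "allow"}` rule):

```python
from typing import Dict, List


def is_publication_allowed(rules: List[Dict[str, str]]) -> bool:
    """First matching rule wins; with no match, publication is denied."""
    for rule in rules:
        # Glob matching on user_id/room_id/alias is omitted here: every
        # rule in this sketch matches unconditionally.
        if rule.get("action") == "allow":
            return True
        if rule.get("action") == "deny":
            return False
    return False  # nothing matched: deny by default


assert is_publication_allowed([]) is False             # new default
assert is_publication_allowed([{"action": "allow"}])   # old behaviour
```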
class VoipConfig(Config): section = "voip" - def read_config(self, config: JsonDict, **kwargs: Any) -> None: + def read_config( + self, config: JsonDict, allow_secrets_in_config: bool, **kwargs: Any + ) -> None: self.turn_uris = config.get("turn_uris", []) self.turn_shared_secret = config.get("turn_shared_secret") + if self.turn_shared_secret and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("turn_shared_secret",), + ) turn_shared_secret_path = config.get("turn_shared_secret_path") if turn_shared_secret_path: if self.turn_shared_secret: diff --git a/synapse/config/workers.py b/synapse/config/workers.py index ab896be307..5af50ee952 100644 --- a/synapse/config/workers.py +++ b/synapse/config/workers.py @@ -38,6 +38,7 @@ from synapse.config._base import ( ConfigError, RoutableShardedWorkerHandlingConfig, ShardedWorkerHandlingConfig, + read_file, ) from synapse.config._util import parse_and_validate_mapping from synapse.config.server import ( @@ -65,6 +66,11 @@ configuration under `main` inside the `instance_map`. See workers documentation `https://element-hq.github.io/synapse/latest/workers.html#worker-configuration` """ +CONFLICTING_WORKER_REPLICATION_SECRET_OPTS_ERROR = """\ +Conflicting options 'worker_replication_secret' and +'worker_replication_secret_path' are both defined in config file. +""" + # This allows for a handy knob when it's time to change from 'master' to # something with less 'history' MAIN_PROCESS_INSTANCE_NAME = "master" @@ -218,7 +224,9 @@ class WorkerConfig(Config): section = "worker" - def read_config(self, config: JsonDict, **kwargs: Any) -> None: + def read_config( + self, config: JsonDict, allow_secrets_in_config: bool, **kwargs: Any + ) -> None: self.worker_app = config.get("worker_app") # Canonicalise worker_app so that master always has None @@ -242,7 +250,23 @@ class WorkerConfig(Config): raise ConfigError(DIRECT_TCP_ERROR, ("worker_replication_port",)) # The shared secret used for authentication when connecting to the main synapse. 
- self.worker_replication_secret = config.get("worker_replication_secret", None) + worker_replication_secret = config.get("worker_replication_secret", None) + if worker_replication_secret and not allow_secrets_in_config: + raise ConfigError( + "Config options that expect an in-line secret as value are disabled", + ("worker_replication_secret",), + ) + worker_replication_secret_path = config.get( + "worker_replication_secret_path", None + ) + if worker_replication_secret_path: + if worker_replication_secret: + raise ConfigError(CONFLICTING_WORKER_REPLICATION_SECRET_OPTS_ERROR) + self.worker_replication_secret = read_file( + worker_replication_secret_path, "worker_replication_secret_path" + ).strip() + else: + self.worker_replication_secret = worker_replication_secret self.worker_name = config.get("worker_name", self.worker_app) self.instance_name = self.worker_name or MAIN_PROCESS_INSTANCE_NAME diff --git a/synapse/handlers/delayed_events.py b/synapse/handlers/delayed_events.py index 3c88a96fd3..b3f40809a1 100644 --- a/synapse/handlers/delayed_events.py +++ b/synapse/handlers/delayed_events.py @@ -19,6 +19,7 @@ from twisted.internet.interfaces import IDelayedCall from synapse.api.constants import EventTypes from synapse.api.errors import ShadowBanError +from synapse.api.ratelimiting import Ratelimiter from synapse.config.workers import MAIN_PROCESS_INSTANCE_NAME from synapse.logging.opentracing import set_tag from synapse.metrics import event_processing_positions @@ -57,10 +58,19 @@ class DelayedEventsHandler: self._storage_controllers = hs.get_storage_controllers() self._config = hs.config self._clock = hs.get_clock() - self._request_ratelimiter = hs.get_request_ratelimiter() self._event_creation_handler = hs.get_event_creation_handler() self._room_member_handler = hs.get_room_member_handler() + self._request_ratelimiter = hs.get_request_ratelimiter() + + # Ratelimiter for management of existing delayed events, + # keyed by the sending user ID & device ID. + self._delayed_event_mgmt_ratelimiter = Ratelimiter( + store=self._store, + clock=self._clock, + cfg=self._config.ratelimiting.rc_delayed_event_mgmt, + ) + self._next_delayed_event_call: Optional[IDelayedCall] = None # The current position in the current_state_delta stream @@ -227,6 +237,9 @@ class DelayedEventsHandler: Raises: SynapseError: if the delayed event fails validation checks. """ + # Use standard request limiter for scheduling new delayed events. + # TODO: Instead apply ratelimiting based on the scheduled send time. + # See https://github.com/element-hq/synapse/issues/18021 await self._request_ratelimiter.ratelimit(requester) self._event_creation_handler.validator.validate_builder( @@ -285,7 +298,10 @@ class DelayedEventsHandler: NotFoundError: if no matching delayed event could be found. """ assert self._is_master - await self._request_ratelimiter.ratelimit(requester) + await self._delayed_event_mgmt_ratelimiter.ratelimit( + requester, + (requester.user.to_string(), requester.device_id), + ) await self._initialized_from_db next_send_ts = await self._store.cancel_delayed_event( @@ -308,7 +324,10 @@ class DelayedEventsHandler: NotFoundError: if no matching delayed event could be found. 
""" assert self._is_master - await self._request_ratelimiter.ratelimit(requester) + await self._delayed_event_mgmt_ratelimiter.ratelimit( + requester, + (requester.user.to_string(), requester.device_id), + ) await self._initialized_from_db next_send_ts = await self._store.restart_delayed_event( @@ -332,6 +351,8 @@ class DelayedEventsHandler: NotFoundError: if no matching delayed event could be found. """ assert self._is_master + # Use standard request limiter for sending delayed events on-demand, + # as an on-demand send is similar to sending a regular event. await self._request_ratelimiter.ratelimit(requester) await self._initialized_from_db @@ -415,7 +436,10 @@ class DelayedEventsHandler: async def get_all_for_user(self, requester: Requester) -> List[JsonDict]: """Return all pending delayed events requested by the given user.""" - await self._request_ratelimiter.ratelimit(requester) + await self._delayed_event_mgmt_ratelimiter.ratelimit( + requester, + (requester.user.to_string(), requester.device_id), + ) return await self._store.get_all_delayed_events_for_user( requester.user.localpart ) diff --git a/synapse/handlers/message.py b/synapse/handlers/message.py index df3010ecf6..4642b8b578 100644 --- a/synapse/handlers/message.py +++ b/synapse/handlers/message.py @@ -644,11 +644,33 @@ class EventCreationHandler: """ await self.auth_blocking.check_auth_blocking(requester=requester) - if event_dict["type"] == EventTypes.Message: - requester_suspended = await self.store.get_user_suspended_status( - requester.user.to_string() - ) - if requester_suspended: + requester_suspended = await self.store.get_user_suspended_status( + requester.user.to_string() + ) + if requester_suspended: + # We want to allow suspended users to perform "corrective" actions + # asked of them by server admins, such as redact their messages and + # leave rooms. + if event_dict["type"] in ["m.room.redaction", "m.room.member"]: + if event_dict["type"] == "m.room.redaction": + event = await self.store.get_event( + event_dict["content"]["redacts"], allow_none=True + ) + if event: + if event.sender != requester.user.to_string(): + raise SynapseError( + 403, + "You can only redact your own events while account is suspended.", + Codes.USER_ACCOUNT_SUSPENDED, + ) + if event_dict["type"] == "m.room.member": + if event_dict["content"]["membership"] != "leave": + raise SynapseError( + 403, + "Changing membership while account is suspended is not allowed.", + Codes.USER_ACCOUNT_SUSPENDED, + ) + else: raise SynapseError( 403, "Sending messages while account is suspended is not allowed.", diff --git a/synapse/handlers/oidc.py b/synapse/handlers/oidc.py index c9109c9e79..18efdd9f6e 100644 --- a/synapse/handlers/oidc.py +++ b/synapse/handlers/oidc.py @@ -382,7 +382,12 @@ class OidcProvider: self._macaroon_generaton = macaroon_generator self._config = provider - self._callback_url: str = hs.config.oidc.oidc_callback_url + + self._callback_url: str + if provider.redirect_uri is not None: + self._callback_url = provider.redirect_uri + else: + self._callback_url = hs.config.oidc.oidc_callback_url # Calculate the prefix for OIDC callback paths based on the public_baseurl. # We'll insert this into the Path= parameter of any session cookies we set. 
@@ -640,6 +645,11 @@ class OidcProvider: elif self._config.pkce_method == "never": metadata.pop("code_challenge_methods_supported", None) + if self._config.id_token_signing_alg_values_supported: + metadata["id_token_signing_alg_values_supported"] = ( + self._config.id_token_signing_alg_values_supported + ) + self._validate_metadata(metadata) return metadata diff --git a/synapse/rest/client/register.py b/synapse/rest/client/register.py index ad76f188ab..58231d2b04 100644 --- a/synapse/rest/client/register.py +++ b/synapse/rest/client/register.py @@ -908,6 +908,14 @@ class RegisterAppServiceOnlyRestServlet(RestServlet): await self.ratelimiter.ratelimit(None, client_addr, update=False) + # Allow only ASes to use this API. + if body.get("type") != APP_SERVICE_REGISTRATION_TYPE: + raise SynapseError( + 403, + "Registration has been disabled. Only m.login.application_service registrations are allowed.", + errcode=Codes.FORBIDDEN, + ) + kind = parse_string(request, "kind", default="user") if kind == "guest": @@ -923,10 +931,6 @@ class RegisterAppServiceOnlyRestServlet(RestServlet): if not isinstance(desired_username, str) or len(desired_username) > 512: raise SynapseError(400, "Invalid username") - # Allow only ASes to use this API. - if body.get("type") != APP_SERVICE_REGISTRATION_TYPE: - raise SynapseError(403, "Non-application service registration type") - if not self.auth.has_access_token(request): raise SynapseError( 400, diff --git a/synapse/storage/controllers/purge_events.py b/synapse/storage/controllers/purge_events.py index 47cec8c469..ef30bf2895 100644 --- a/synapse/storage/controllers/purge_events.py +++ b/synapse/storage/controllers/purge_events.py @@ -21,11 +21,18 @@ import itertools import logging -from typing import TYPE_CHECKING, Collection, Mapping, Set +from typing import ( + TYPE_CHECKING, + Collection, + Mapping, + Set, +) from synapse.logging.context import nested_logging_context from synapse.metrics.background_process_metrics import wrap_as_background_process +from synapse.storage.database import LoggingTransaction from synapse.storage.databases import Databases +from synapse.types.storage import _BackgroundUpdates if TYPE_CHECKING: from synapse.server import HomeServer @@ -44,6 +51,11 @@ class PurgeEventsStorageController: self._delete_state_groups_loop, 60 * 1000 ) + self.stores.state.db_pool.updates.register_background_update_handler( + _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE, + self._background_delete_unrefereneced_state_groups, + ) + async def purge_room(self, room_id: str) -> None: """Deletes all record of a room""" @@ -80,68 +92,6 @@ class PurgeEventsStorageController: sg_to_delete ) - async def _find_unreferenced_groups( - self, state_groups: Collection[int] - ) -> Set[int]: - """Used when purging history to figure out which state groups can be - deleted. - - Args: - state_groups: Set of state groups referenced by events - that are going to be deleted. - - Returns: - The set of state groups that can be deleted. - """ - # Set of events that we have found to be referenced by events - referenced_groups = set() - - # Set of state groups we've already seen - state_groups_seen = set(state_groups) - - # Set of state groups to handle next. 
- next_to_search = set(state_groups) - while next_to_search: - # We bound size of groups we're looking up at once, to stop the - # SQL query getting too big - if len(next_to_search) < 100: - current_search = next_to_search - next_to_search = set() - else: - current_search = set(itertools.islice(next_to_search, 100)) - next_to_search -= current_search - - referenced = await self.stores.main.get_referenced_state_groups( - current_search - ) - referenced_groups |= referenced - - # We don't continue iterating up the state group graphs for state - # groups that are referenced. - current_search -= referenced - - edges = await self.stores.state.get_previous_state_groups(current_search) - - prevs = set(edges.values()) - # We don't bother re-handling groups we've already seen - prevs -= state_groups_seen - next_to_search |= prevs - state_groups_seen |= prevs - - # We also check to see if anything referencing the state groups are - # also unreferenced. This helps ensure that we delete unreferenced - # state groups, if we don't then we will de-delta them when we - # delete the other state groups leading to increased DB usage. - next_edges = await self.stores.state.get_next_state_groups(current_search) - nexts = set(next_edges.keys()) - nexts -= state_groups_seen - next_to_search |= nexts - state_groups_seen |= nexts - - to_delete = state_groups_seen - referenced_groups - - return to_delete - @wrap_as_background_process("_delete_state_groups_loop") async def _delete_state_groups_loop(self) -> None: """Background task that deletes any state groups that may be pending @@ -203,3 +153,173 @@ class PurgeEventsStorageController: room_id, groups_to_sequences, ) + + async def _background_delete_unrefereneced_state_groups( + self, progress: dict, batch_size: int + ) -> int: + """This background update will slowly delete any unreferenced state groups""" + + last_checked_state_group = progress.get("last_checked_state_group") + max_state_group = progress.get("max_state_group") + + if last_checked_state_group is None or max_state_group is None: + # This is the first run. + last_checked_state_group = 0 + + max_state_group = await self.stores.state.db_pool.simple_select_one_onecol( + table="state_groups", + keyvalues={}, + retcol="MAX(id)", + allow_none=True, + desc="get_max_state_group", + ) + if max_state_group is None: + # There are no state groups so the background process is finished. + await self.stores.state.db_pool.updates._end_background_update( + _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE + ) + return batch_size + + ( + last_checked_state_group, + final_batch, + ) = await self._delete_unreferenced_state_groups_batch( + last_checked_state_group, batch_size, max_state_group + ) + + if not final_batch: + # There are more state groups to check. + progress = { + "last_checked_state_group": last_checked_state_group, + "max_state_group": max_state_group, + } + await self.stores.state.db_pool.updates._background_update_progress( + _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE, + progress, + ) + else: + # This background process is finished. 
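The loop above relies on Synapse's standard background-update contract: persist a progress dict after each non-final batch so the update can resume after a restart, and end the update once the final batch is reached. A minimal in-memory sketch of that contract (hypothetical helper, not the real `_background_update_progress` API):

```python
from typing import Dict, List, Tuple


def run_once(
    progress: Dict[str, int], batch_size: int, groups: List[int]
) -> Tuple[Dict[str, int], bool]:
    """One iteration: returns (new progress, whether this was the final batch)."""
    last = progress.get("last_checked_state_group", 0)
    maximum = progress.get("max_state_group", max(groups) if groups else 0)
    batch = [g for g in groups if last < g <= maximum][:batch_size]
    if len(batch) < batch_size:
        return {}, True  # final batch: the caller ends the background update
    # Persisted between runs, so a restart resumes from here.
    return {"last_checked_state_group": max(batch), "max_state_group": maximum}, False


progress: Dict[str, int] = {}
done = False
while not done:
    progress, done = run_once(progress, 2, [1, 2, 3, 4, 5])
```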
+ await self.stores.state.db_pool.updates._end_background_update( + _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE + ) + + return batch_size + + async def _delete_unreferenced_state_groups_batch( + self, + last_checked_state_group: int, + batch_size: int, + max_state_group: int, + ) -> tuple[int, bool]: + """Looks for unreferenced state groups starting from the last state group + checked, and any state groups which would become unreferenced if a state group + was deleted, and marks them for deletion. + + Args: + last_checked_state_group: The last state group that was checked. + batch_size: How many state groups to process in this iteration. + + Returns: + (last_checked_state_group, final_batch) + """ + + # Look for state groups that can be cleaned up. + def get_next_state_groups_txn(txn: LoggingTransaction) -> Set[int]: + state_group_sql = "SELECT id FROM state_groups WHERE ? < id AND id <= ? ORDER BY id LIMIT ?" + txn.execute( + state_group_sql, (last_checked_state_group, max_state_group, batch_size) + ) + + next_set = {row[0] for row in txn} + + return next_set + + next_set = await self.stores.state.db_pool.runInteraction( + "get_next_state_groups", get_next_state_groups_txn + ) + + final_batch = False + if len(next_set) < batch_size: + final_batch = True + else: + last_checked_state_group = max(next_set) + + if len(next_set) == 0: + return last_checked_state_group, final_batch + + # Find all state groups that can be deleted if the original set is deleted. + # This set includes the original set, as well as any state groups that would + # become unreferenced upon deleting the original set. + to_delete = await self._find_unreferenced_groups(next_set) + + if len(to_delete) == 0: + return last_checked_state_group, final_batch + + await self.stores.state_deletion.mark_state_groups_as_pending_deletion( + to_delete + ) + + return last_checked_state_group, final_batch + + async def _find_unreferenced_groups( + self, + state_groups: Collection[int], + ) -> Set[int]: + """Used when purging history to figure out which state groups can be + deleted. + + Args: + state_groups: Set of state groups referenced by events + that are going to be deleted. + + Returns: + The set of state groups that can be deleted. + """ + # Set of events that we have found to be referenced by events + referenced_groups = set() + + # Set of state groups we've already seen + state_groups_seen = set(state_groups) + + # Set of state groups to handle next. + next_to_search = set(state_groups) + while next_to_search: + # We bound size of groups we're looking up at once, to stop the + # SQL query getting too big + if len(next_to_search) < 100: + current_search = next_to_search + next_to_search = set() + else: + current_search = set(itertools.islice(next_to_search, 100)) + next_to_search -= current_search + + referenced = await self.stores.main.get_referenced_state_groups( + current_search + ) + referenced_groups |= referenced + + # We don't continue iterating up the state group graphs for state + # groups that are referenced. + current_search -= referenced + + edges = await self.stores.state.get_previous_state_groups(current_search) + + prevs = set(edges.values()) + # We don't bother re-handling groups we've already seen + prevs -= state_groups_seen + next_to_search |= prevs + state_groups_seen |= prevs + + # We also check to see if anything referencing the state groups are + # also unreferenced. 
This helps ensure that we delete unreferenced + # state groups, if we don't then we will de-delta them when we + # delete the other state groups leading to increased DB usage. + next_edges = await self.stores.state.get_next_state_groups(current_search) + nexts = set(next_edges.keys()) + nexts -= state_groups_seen + next_to_search |= nexts + state_groups_seen |= nexts + + to_delete = state_groups_seen - referenced_groups + + return to_delete diff --git a/synapse/storage/databases/state/bg_updates.py b/synapse/storage/databases/state/bg_updates.py index f7824cba0f..95fd0ae73a 100644 --- a/synapse/storage/databases/state/bg_updates.py +++ b/synapse/storage/databases/state/bg_updates.py @@ -20,7 +20,15 @@ # import logging -from typing import TYPE_CHECKING, Dict, List, Mapping, Optional, Tuple, Union +from typing import ( + TYPE_CHECKING, + Dict, + List, + Mapping, + Optional, + Tuple, + Union, +) from synapse.logging.opentracing import tag_args, trace from synapse.storage._base import SQLBaseStore diff --git a/synapse/storage/databases/state/deletion.py b/synapse/storage/databases/state/deletion.py index d4b1c20a45..f77c46f6ae 100644 --- a/synapse/storage/databases/state/deletion.py +++ b/synapse/storage/databases/state/deletion.py @@ -321,18 +321,42 @@ class StateDeletionDataStore: async def mark_state_groups_as_pending_deletion( self, state_groups: Collection[int] ) -> None: - """Mark the given state groups as pending deletion""" + """Mark the given state groups as pending deletion. + + If any of the state groups are already pending deletion, then those records are + left as is. + """ + + await self.db_pool.runInteraction( + "mark_state_groups_as_pending_deletion", + self._mark_state_groups_as_pending_deletion_txn, + state_groups, + ) + + def _mark_state_groups_as_pending_deletion_txn( + self, + txn: LoggingTransaction, + state_groups: Collection[int], + ) -> None: + sql = """ + INSERT INTO state_groups_pending_deletion (state_group, insertion_ts) + VALUES %s + ON CONFLICT (state_group) + DO NOTHING + """ now = self._clock.time_msec() - - await self.db_pool.simple_upsert_many( - table="state_groups_pending_deletion", - key_names=("state_group",), - key_values=[(state_group,) for state_group in state_groups], - value_names=("insertion_ts",), - value_values=[(now,) for _ in state_groups], - desc="mark_state_groups_as_pending_deletion", - ) + rows = [ + ( + state_group, + now, + ) + for state_group in state_groups + ] + if isinstance(txn.database_engine, PostgresEngine): + txn.execute_values(sql % ("?",), rows, fetch=False) + else: + txn.execute_batch(sql % ("(?, ?)",), rows) async def mark_state_groups_as_used(self, state_groups: Collection[int]) -> None: """Mark the given state groups as now being referenced""" diff --git a/synapse/storage/databases/state/store.py b/synapse/storage/databases/state/store.py index 8c7980e719..90d7beb92f 100644 --- a/synapse/storage/databases/state/store.py +++ b/synapse/storage/databases/state/store.py @@ -828,10 +828,18 @@ class StateGroupDataStore(StateBackgroundUpdateStore, SQLBaseStore): "DELETE FROM state_groups_state WHERE state_group = ?", [(sg,) for sg in state_groups_to_delete], ) + txn.execute_batch( + "DELETE FROM state_group_edges WHERE state_group = ?", + [(sg,) for sg in state_groups_to_delete], + ) txn.execute_batch( "DELETE FROM state_groups WHERE id = ?", [(sg,) for sg in state_groups_to_delete], ) + txn.execute_batch( + "DELETE FROM state_groups_pending_deletion WHERE state_group = ?", + [(sg,) for sg in state_groups_to_delete], + ) return 
True
diff --git a/synapse/storage/schema/__init__.py b/synapse/storage/schema/__init__.py
index 49e648a92f..c90c2c6051 100644
--- a/synapse/storage/schema/__init__.py
+++ b/synapse/storage/schema/__init__.py
@@ -158,6 +158,7 @@ Changes in SCHEMA_VERSION = 88
 
 Changes in SCHEMA_VERSION = 89
     - Add `state_groups_pending_deletion` and `state_groups_persisting` tables.
+    - Add background update to delete unreferenced state groups.
 """
diff --git a/synapse/storage/schema/state/delta/89/02_delete_unreferenced_state_groups.sql b/synapse/storage/schema/state/delta/89/02_delete_unreferenced_state_groups.sql
new file mode 100644
index 0000000000..184dc8564c
--- /dev/null
+++ b/synapse/storage/schema/state/delta/89/02_delete_unreferenced_state_groups.sql
@@ -0,0 +1,16 @@
+--
+-- This file is licensed under the Affero General Public License (AGPL) version 3.
+--
+-- Copyright (C) 2025 New Vector, Ltd
+--
+-- This program is free software: you can redistribute it and/or modify
+-- it under the terms of the GNU Affero General Public License as
+-- published by the Free Software Foundation, either version 3 of the
+-- License, or (at your option) any later version.
+--
+-- See the GNU Affero General Public License for more details:
+-- <https://www.gnu.org/licenses/agpl-3.0.html>.
+
+-- Add a background update to delete any unreferenced state groups
+INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
+  (8902, 'delete_unreferenced_state_groups_bg_update', '{}');
diff --git a/synapse/types/storage/__init__.py b/synapse/types/storage/__init__.py
index b5fa20a41a..d0a85ef208 100644
--- a/synapse/types/storage/__init__.py
+++ b/synapse/types/storage/__init__.py
@@ -48,3 +48,7 @@ class _BackgroundUpdates:
     SLIDING_SYNC_MEMBERSHIP_SNAPSHOTS_FIX_FORGOTTEN_COLUMN_BG_UPDATE = (
         "sliding_sync_membership_snapshots_fix_forgotten_column_bg_update"
     )
+
+    DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE = (
+        "delete_unreferenced_state_groups_bg_update"
+    )
diff --git a/tests/config/test_load.py b/tests/config/test_load.py
index 220ca23aa7..a5456ac6f8 100644
--- a/tests/config/test_load.py
+++ b/tests/config/test_load.py
@@ -21,6 +21,7 @@
 #
 import tempfile
 from typing import Callable
+from unittest import mock
 
 import yaml
 from parameterized import parameterized
@@ -31,6 +32,11 @@ from synapse.config.homeserver import HomeServerConfig
 
 from tests.config.utils import ConfigFileTestCase
 
+try:
+    import authlib
+except ImportError:
+    authlib = None
+
 try:
     import hiredis
 except ImportError:
     hiredis = None
@@ -132,6 +138,8 @@ class ConfigLoadingFileTestCase(ConfigFileTestCase):
             "turn_shared_secret_path: /does/not/exist",
             "registration_shared_secret_path: /does/not/exist",
             "macaroon_secret_key_path: /does/not/exist",
+            "form_secret_path: /does/not/exist",
+            "worker_replication_secret_path: /does/not/exist",
             "experimental_features:\n  msc3861:\n    client_secret_path: /does/not/exist",
             "experimental_features:\n  msc3861:\n    admin_token_path: /does/not/exist",
             *["redis:\n  enabled: true\n  password_path: /does/not/exist"]
@@ -159,6 +167,14 @@
                 "macaroon_secret_key_path: {}",
                 lambda c: c.key.macaroon_secret_key,
             ),
+            (
+                "form_secret_path: {}",
+                lambda c: c.key.form_secret.encode("utf-8"),
+            ),
+            (
+                "worker_replication_secret_path: {}",
+                lambda c: c.worker.worker_replication_secret.encode("utf-8"),
+            ),
             (
                 "experimental_features:\n  msc3861:\n    client_secret_path: {}",
                 lambda c: c.experimental.msc3861.client_secret().encode("utf-8"),
@@ -180,7 +196,7 @@ class ConfigLoadingFileTestCase(ConfigFileTestCase):
         self,
         config_line: str,
        get_secret: Callable[[RootConfig], str]
     ) -> None:
         self.generate_config_and_remove_lines_containing(
-            ["registration_shared_secret", "macaroon_secret_key"]
+            ["form_secret", "macaroon_secret_key", "registration_shared_secret"]
         )
         with tempfile.NamedTemporaryFile(buffering=0) as secret_file:
             secret_file.write(b"53C237")
@@ -189,3 +205,101 @@
 
         config = HomeServerConfig.load_config("", ["-c", self.config_file])
         self.assertEqual(get_secret(config), b"53C237")
+
+    @parameterized.expand(
+        [
+            "turn_shared_secret: 53C237",
+            "registration_shared_secret: 53C237",
+            "macaroon_secret_key: 53C237",
+            "recaptcha_private_key: 53C237",
+            "recaptcha_public_key: 53C237",
+            "form_secret: 53C237",
+            "worker_replication_secret: 53C237",
+            *[
+                "experimental_features:\n"
+                "  msc3861:\n"
+                "    enabled: true\n"
+                "    client_secret: 53C237"
+            ]
+            * (authlib is not None),
+            *[
+                "experimental_features:\n"
+                "  msc3861:\n"
+                "    enabled: true\n"
+                "    client_auth_method: private_key_jwt\n"
+                '    jwk: {{"mock": "mock"}}'
+            ]
+            * (authlib is not None),
+            *[
+                "experimental_features:\n"
+                "  msc3861:\n"
+                "    enabled: true\n"
+                "    admin_token: 53C237\n"
+                "    client_secret_path: {secret_file}"
+            ]
+            * (authlib is not None),
+            *["redis:\n  enabled: true\n  password: 53C237"] * (hiredis is not None),
+        ]
+    )
+    def test_no_secrets_in_config(self, config_line: str) -> None:
+        if authlib is not None:
+            patcher = mock.patch("authlib.jose.rfc7517.JsonWebKey.import_key")
+            self.addCleanup(patcher.stop)
+            patcher.start()
+
+        with tempfile.NamedTemporaryFile(buffering=0) as secret_file:
+            # Only used for less mocking with admin_token
+            secret_file.write(b"53C237")
+
+            self.generate_config_and_remove_lines_containing(
+                ["form_secret", "macaroon_secret_key", "registration_shared_secret"]
+            )
+            # Check strict mode with no offenders.
+            HomeServerConfig.load_config(
+                "", ["-c", self.config_file, "--no-secrets-in-config"]
+            )
+            self.add_lines_to_config(
+                ["", config_line.format(secret_file=secret_file.name)]
+            )
+            # Check strict mode with a single offender.
+            with self.assertRaises(ConfigError):
+                HomeServerConfig.load_config(
+                    "", ["-c", self.config_file, "--no-secrets-in-config"]
+                )
+
+            # Check lenient mode with a single offender.
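The `*[...] * (authlib is not None)` entries in the parameterized list above rely on list multiplication by a bool, so optional-dependency cases are only generated when the dependency is installed:

```python
# True behaves as 1 and False as 0, so multiplying a one-element list by a
# bool yields either that list or an empty one.
authlib_installed = False  # pretend authlib is missing
hiredis_installed = True   # pretend hiredis is present

cases = [
    "always-run",
    *["needs-authlib"] * authlib_installed,
    *["needs-hiredis"] * hiredis_installed,
]
assert cases == ["always-run", "needs-hiredis"]
```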
+ HomeServerConfig.load_config("", ["-c", self.config_file]) + + def test_no_secrets_in_config_but_in_files(self) -> None: + with tempfile.NamedTemporaryFile(buffering=0) as secret_file: + secret_file.write(b"53C237") + + self.generate_config_and_remove_lines_containing( + ["form_secret", "macaroon_secret_key", "registration_shared_secret"] + ) + self.add_lines_to_config( + [ + "", + f"turn_shared_secret_path: {secret_file.name}", + f"registration_shared_secret_path: {secret_file.name}", + f"macaroon_secret_key_path: {secret_file.name}", + f"recaptcha_private_key_path: {secret_file.name}", + f"recaptcha_public_key_path: {secret_file.name}", + f"form_secret_path: {secret_file.name}", + f"worker_replication_secret_path: {secret_file.name}", + *[ + "experimental_features:\n" + " msc3861:\n" + " enabled: true\n" + f" admin_token_path: {secret_file.name}\n" + f" client_secret_path: {secret_file.name}\n" + # f" jwk_path: {secret_file.name}" + ] + * (authlib is not None), + *[f"redis:\n enabled: true\n password_path: {secret_file.name}"] + * (hiredis is not None), + ] + ) + HomeServerConfig.load_config( + "", ["-c", self.config_file, "--no-secrets-in-config"] + ) diff --git a/tests/config/test_workers.py b/tests/config/test_workers.py index 64c0285d01..3a21975b89 100644 --- a/tests/config/test_workers.py +++ b/tests/config/test_workers.py @@ -47,7 +47,7 @@ class WorkerDutyConfigTestCase(TestCase): "worker_app": worker_app, **extras, } - worker_config.read_config(worker_config_dict) + worker_config.read_config(worker_config_dict, allow_secrets_in_config=True) return worker_config def test_old_configs_master(self) -> None: diff --git a/tests/handlers/test_directory.py b/tests/handlers/test_directory.py index 4a3e36ffde..b7058d8002 100644 --- a/tests/handlers/test_directory.py +++ b/tests/handlers/test_directory.py @@ -587,6 +587,7 @@ class TestRoomListSearchDisabled(unittest.HomeserverTestCase): self.room_list_handler = hs.get_room_list_handler() self.directory_handler = hs.get_directory_handler() + @unittest.override_config({"room_list_publication_rules": [{"action": "allow"}]}) def test_disabling_room_list(self) -> None: self.room_list_handler.enable_room_list_search = True self.directory_handler.enable_room_list_search = True diff --git a/tests/handlers/test_oauth_delegation.py b/tests/handlers/test_oauth_delegation.py index ba2f8ff510..5f8c25557a 100644 --- a/tests/handlers/test_oauth_delegation.py +++ b/tests/handlers/test_oauth_delegation.py @@ -43,6 +43,7 @@ from synapse.api.errors import ( OAuthInsufficientScopeError, SynapseError, ) +from synapse.appservice import ApplicationService from synapse.http.site import SynapseRequest from synapse.rest import admin from synapse.rest.client import account, devices, keys, login, logout, register @@ -379,6 +380,44 @@ class MSC3861OAuthDelegation(HomeserverTestCase): ) self.assertEqual(requester.device_id, DEVICE) + def test_active_user_with_device_explicit_device_id(self) -> None: + """The handler should return a requester with normal user rights and a device ID, given explicitly, as supported by MAS 0.15+""" + + self.http_client.request = AsyncMock( + return_value=FakeResponse.json( + code=200, + payload={ + "active": True, + "sub": SUBJECT, + "scope": " ".join([MATRIX_USER_SCOPE]), + "device_id": DEVICE, + "username": USERNAME, + }, + ) + ) + request = Mock(args={}) + request.args[b"access_token"] = [b"mockAccessToken"] + request.requestHeaders.getRawHeaders = mock_getRawHeaders() + requester = self.get_success(self.auth.get_user_by_req(request)) + 
self.http_client.get_json.assert_called_once_with(WELL_KNOWN) + self.http_client.request.assert_called_once_with( + method="POST", uri=INTROSPECTION_ENDPOINT, data=ANY, headers=ANY + ) + # It should have called with the 'X-MAS-Supports-Device-Id: 1' header + self.assertEqual( + self.http_client.request.call_args[1]["headers"].getRawHeaders( + b"X-MAS-Supports-Device-Id", + ), + [b"1"], + ) + self._assertParams() + self.assertEqual(requester.user.to_string(), "@%s:%s" % (USERNAME, SERVER_NAME)) + self.assertEqual(requester.is_guest, False) + self.assertEqual( + get_awaitable_result(self.auth.is_server_admin(requester)), False + ) + self.assertEqual(requester.device_id, DEVICE) + def test_multiple_devices(self) -> None: """The handler should raise an error if multiple devices are found in the scope.""" @@ -575,6 +614,16 @@ class MSC3861OAuthDelegation(HomeserverTestCase): channel.json_body["errcode"], Codes.UNRECOGNIZED, channel.json_body ) + def expect_forbidden( + self, method: str, path: str, content: Union[bytes, str, JsonDict] = "" + ) -> None: + channel = self.make_request(method, path, content) + + self.assertEqual(channel.code, 403, channel.json_body) + self.assertEqual( + channel.json_body["errcode"], Codes.FORBIDDEN, channel.json_body + ) + def test_uia_endpoints(self) -> None: """Test that endpoints that were removed in MSC2964 are no longer available.""" @@ -629,11 +678,35 @@ class MSC3861OAuthDelegation(HomeserverTestCase): def test_registration_endpoints_removed(self) -> None: """Test that registration endpoints that were removed in MSC2964 are no longer available.""" + appservice = ApplicationService( + token="i_am_an_app_service", + id="1234", + namespaces={"users": [{"regex": r"@alice:.+", "exclusive": True}]}, + sender="@as_main:test", + ) + + self.hs.get_datastores().main.services_cache = [appservice] self.expect_unrecognized( "GET", "/_matrix/client/v1/register/m.login.registration_token/validity" ) + + # Registration is disabled + self.expect_forbidden( + "POST", + "/_matrix/client/v3/register", + {"username": "alice", "password": "hunter2"}, + ) + # This is still available for AS registrations - # self.expect_unrecognized("POST", "/_matrix/client/v3/register") + channel = self.make_request( + "POST", + "/_matrix/client/v3/register", + {"username": "alice", "type": "m.login.application_service"}, + shorthand=False, + access_token="i_am_an_app_service", + ) + self.assertEqual(channel.code, 200, channel.json_body) + self.expect_unrecognized("GET", "/_matrix/client/v3/register/available") self.expect_unrecognized( "POST", "/_matrix/client/v3/register/email/requestToken" diff --git a/tests/handlers/test_oidc.py b/tests/handlers/test_oidc.py index 1b43ee43c6..cfd9969563 100644 --- a/tests/handlers/test_oidc.py +++ b/tests/handlers/test_oidc.py @@ -57,6 +57,7 @@ CLIENT_ID = "test-client-id" CLIENT_SECRET = "test-client-secret" BASE_URL = "https://synapse/" CALLBACK_URL = BASE_URL + "_synapse/client/oidc/callback" +TEST_REDIRECT_URI = "https://test/oidc/callback" SCOPES = ["openid"] # config for common cases @@ -70,12 +71,16 @@ DEFAULT_CONFIG = { } # extends the default config with explicit OAuth2 endpoints instead of using discovery +# +# We add "explicit" to things to make them different from the discovered values to make +# sure that the explicit values override the discovered ones. 
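The tests below assert that explicitly configured endpoints win over anything in the discovery document. A simplified model of that precedence (not authlib's actual metadata handling):

```python
from typing import Any, Dict


def merge_metadata(discovered: Dict[str, Any], explicit: Dict[str, Any]) -> Dict[str, Any]:
    """Start from the discovery document, then let explicit values win."""
    merged = dict(discovered)
    merged.update({k: v for k, v in explicit.items() if v is not None})
    return merged


discovered = {"token_endpoint": "https://issuer/token", "issuer": "https://issuer/"}
explicit = {"token_endpoint": "https://issuer/token-explicit"}
merged = merge_metadata(discovered, explicit)
assert merged["token_endpoint"] == "https://issuer/token-explicit"  # overridden
assert merged["issuer"] == "https://issuer/"  # still discovered
```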
EXPLICIT_ENDPOINT_CONFIG = {
     **DEFAULT_CONFIG,
     "discover": False,
-    "authorization_endpoint": ISSUER + "authorize",
-    "token_endpoint": ISSUER + "token",
-    "jwks_uri": ISSUER + "jwks",
+    "authorization_endpoint": ISSUER + "authorize-explicit",
+    "token_endpoint": ISSUER + "token-explicit",
+    "jwks_uri": ISSUER + "jwks-explicit",
+    "id_token_signing_alg_values_supported": ["RS256", "<explicit>"],
 }
@@ -259,12 +264,64 @@ class OidcHandlerTestCase(HomeserverTestCase):
         self.get_success(self.provider.load_metadata())
         self.fake_server.get_metadata_handler.assert_not_called()
 
+    @override_config({"oidc_config": {**EXPLICIT_ENDPOINT_CONFIG, "discover": True}})
+    def test_discovery_with_explicit_config(self) -> None:
+        """
+        The handler should discover the endpoints from the OIDC discovery document,
+        but the values are overridden by the explicit config.
+        """
+        # This would throw if some metadata were invalid
+        metadata = self.get_success(self.provider.load_metadata())
+        self.fake_server.get_metadata_handler.assert_called_once()
+
+        self.assertEqual(metadata.issuer, self.fake_server.issuer)
+        # It seems like authlib does not have that defined in its metadata models
+        self.assertEqual(
+            metadata.get("userinfo_endpoint"),
+            self.fake_server.userinfo_endpoint,
+        )
+
+        # Ensure the values are overridden correctly since these were configured
+        # explicitly
+        self.assertEqual(
+            metadata.authorization_endpoint,
+            EXPLICIT_ENDPOINT_CONFIG["authorization_endpoint"],
+        )
+        self.assertEqual(
+            metadata.token_endpoint, EXPLICIT_ENDPOINT_CONFIG["token_endpoint"]
+        )
+        self.assertEqual(metadata.jwks_uri, EXPLICIT_ENDPOINT_CONFIG["jwks_uri"])
+        self.assertEqual(
+            metadata.id_token_signing_alg_values_supported,
+            EXPLICIT_ENDPOINT_CONFIG["id_token_signing_alg_values_supported"],
+        )
+
+        # subsequent calls should be cached
+        self.reset_mocks()
+        self.get_success(self.provider.load_metadata())
+        self.fake_server.get_metadata_handler.assert_not_called()
+
     @override_config({"oidc_config": EXPLICIT_ENDPOINT_CONFIG})
     def test_no_discovery(self) -> None:
         """When discovery is disabled, it should not try to load from discovery document."""
-        self.get_success(self.provider.load_metadata())
+        metadata = self.get_success(self.provider.load_metadata())
         self.fake_server.get_metadata_handler.assert_not_called()
 
+        # Ensure the values are overridden correctly since these were configured
+        # explicitly
+        self.assertEqual(
+            metadata.authorization_endpoint,
+            EXPLICIT_ENDPOINT_CONFIG["authorization_endpoint"],
+        )
+        self.assertEqual(
+            metadata.token_endpoint, EXPLICIT_ENDPOINT_CONFIG["token_endpoint"]
+        )
+        self.assertEqual(metadata.jwks_uri, EXPLICIT_ENDPOINT_CONFIG["jwks_uri"])
+        self.assertEqual(
+            metadata.id_token_signing_alg_values_supported,
+            EXPLICIT_ENDPOINT_CONFIG["id_token_signing_alg_values_supported"],
+        )
+
     @override_config({"oidc_config": DEFAULT_CONFIG})
     def test_load_jwks(self) -> None:
         """JWKS loading is done once (then cached) if used."""
@@ -530,6 +587,24 @@ class OidcHandlerTestCase(HomeserverTestCase):
 
         code_verifier = get_value_from_macaroon(macaroon, "code_verifier")
         self.assertEqual(code_verifier, "")
 
+    @override_config(
+        {"oidc_config": {**DEFAULT_CONFIG, "redirect_uri": TEST_REDIRECT_URI}}
+    )
+    def test_redirect_request_with_overridden_redirect_uri(self) -> None:
+        """The authorization endpoint redirect has the overridden `redirect_uri` value."""
+        req = Mock(spec=["cookies"])
+        req.cookies = []
+
+        url = urlparse(
+            self.get_success(
+                self.provider.handle_redirect_request(req, b"http://client/redirect")
+            )
+        )
+
+        # 
Ensure that the redirect_uri in the returned url has been overridden. + params = parse_qs(url.query) + self.assertEqual(params["redirect_uri"], [TEST_REDIRECT_URI]) + @override_config({"oidc_config": DEFAULT_CONFIG}) def test_callback_error(self) -> None: """Errors from the provider returned in the callback are displayed.""" @@ -897,6 +972,37 @@ class OidcHandlerTestCase(HomeserverTestCase): self.assertEqual(args["client_id"], [CLIENT_ID]) self.assertEqual(args["redirect_uri"], [CALLBACK_URL]) + @override_config( + { + "oidc_config": { + **DEFAULT_CONFIG, + "redirect_uri": TEST_REDIRECT_URI, + } + } + ) + def test_code_exchange_with_overridden_redirect_uri(self) -> None: + """Code exchange behaves correctly and handles various error scenarios.""" + # Set up a fake IdP with a token endpoint handler. + token = { + "type": "Bearer", + "access_token": "aabbcc", + } + + self.fake_server.post_token_handler.side_effect = None + self.fake_server.post_token_handler.return_value = FakeResponse.json( + payload=token + ) + code = "code" + + # Exchange the code against the fake IdP. + self.get_success(self.provider._exchange_code(code, code_verifier="")) + + # Check that the `redirect_uri` parameter provided matches our + # overridden config value. + kwargs = self.fake_server.request.call_args[1] + args = parse_qs(kwargs["data"].decode("utf-8")) + self.assertEqual(args["redirect_uri"], [TEST_REDIRECT_URI]) + @override_config( { "oidc_config": { diff --git a/tests/handlers/test_room_list.py b/tests/handlers/test_room_list.py index 4d22ef98c2..45cef09b22 100644 --- a/tests/handlers/test_room_list.py +++ b/tests/handlers/test_room_list.py @@ -6,6 +6,7 @@ from synapse.rest.client import directory, login, room from synapse.types import JsonDict from tests import unittest +from tests.utils import default_config class RoomListHandlerTestCase(unittest.HomeserverTestCase): @@ -30,6 +31,11 @@ class RoomListHandlerTestCase(unittest.HomeserverTestCase): assert channel.code == HTTPStatus.OK, f"couldn't publish room: {channel.result}" return room_id + def default_config(self) -> JsonDict: + config = default_config("test") + config["room_list_publication_rules"] = [{"action": "allow"}] + return config + def test_acls_applied_to_room_directory_results(self) -> None: """ Creates 3 rooms. 
Room 2 has an ACL that only permits the homeservers diff --git a/tests/rest/admin/test_room.py b/tests/rest/admin/test_room.py index 1817d67a00..1d44106bd7 100644 --- a/tests/rest/admin/test_room.py +++ b/tests/rest/admin/test_room.py @@ -1282,6 +1282,7 @@ class RoomTestCase(unittest.HomeserverTestCase): self.admin_user = self.register_user("admin", "pass", admin=True) self.admin_user_tok = self.login("admin", "pass") + @unittest.override_config({"room_list_publication_rules": [{"action": "allow"}]}) def test_list_rooms(self) -> None: """Test that we can list rooms""" # Create 3 test rooms @@ -1795,6 +1796,7 @@ class RoomTestCase(unittest.HomeserverTestCase): self.assertEqual(room_id, channel.json_body["rooms"][0].get("room_id")) self.assertEqual("ж", channel.json_body["rooms"][0].get("name")) + @unittest.override_config({"room_list_publication_rules": [{"action": "allow"}]}) def test_filter_public_rooms(self) -> None: self.helper.create_room_as( self.admin_user, tok=self.admin_user_tok, is_public=True @@ -1872,6 +1874,7 @@ class RoomTestCase(unittest.HomeserverTestCase): self.assertEqual(1, response.json_body["total_rooms"]) self.assertEqual(1, len(response.json_body["rooms"])) + @unittest.override_config({"room_list_publication_rules": [{"action": "allow"}]}) def test_single_room(self) -> None: """Test that a single room can be requested correctly""" # Create two test rooms diff --git a/tests/rest/client/test_delayed_events.py b/tests/rest/client/test_delayed_events.py index 1793b38c4a..2c938390c8 100644 --- a/tests/rest/client/test_delayed_events.py +++ b/tests/rest/client/test_delayed_events.py @@ -109,6 +109,27 @@ class DelayedEventsTestCase(HomeserverTestCase): ) self.assertEqual(setter_expected, content.get(setter_key), content) + @unittest.override_config( + {"rc_delayed_event_mgmt": {"per_second": 0.5, "burst_count": 1}} + ) + def test_get_delayed_events_ratelimit(self) -> None: + args = ("GET", PATH_PREFIX) + + channel = self.make_request(*args) + self.assertEqual(HTTPStatus.OK, channel.code, channel.result) + + channel = self.make_request(*args) + self.assertEqual(HTTPStatus.TOO_MANY_REQUESTS, channel.code, channel.result) + + # Add the current user to the ratelimit overrides, allowing them no ratelimiting. + self.get_success( + self.hs.get_datastores().main.set_ratelimit_for_user(self.user_id, 0, 0) + ) + + # Test that the request isn't ratelimited anymore. 
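These delayed-event tests lift the limit by writing a per-user override of zeroes; the assumed semantics (matching how `set_ratelimit_for_user(user_id, 0, 0)` is used across Synapse's test suite) are that a zero override exempts the user entirely:

```python
from typing import Optional, Tuple

Limits = Tuple[float, int]  # (per_second, burst_count)


def effective_limits(override: Optional[Limits], default: Limits) -> Optional[Limits]:
    """Return the limits to apply, or None when the user is exempt."""
    if override == (0, 0):
        return None  # a zero override disables ratelimiting for this user
    return override or default


assert effective_limits((0, 0), (0.5, 1)) is None    # exempt user
assert effective_limits(None, (0.5, 1)) == (0.5, 1)  # default applies
```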
+ channel = self.make_request(*args) + self.assertEqual(HTTPStatus.OK, channel.code, channel.result) + def test_update_delayed_event_without_id(self) -> None: channel = self.make_request( "POST", @@ -206,6 +227,46 @@ class DelayedEventsTestCase(HomeserverTestCase): expect_code=HTTPStatus.NOT_FOUND, ) + @unittest.override_config( + {"rc_delayed_event_mgmt": {"per_second": 0.5, "burst_count": 1}} + ) + def test_cancel_delayed_event_ratelimit(self) -> None: + delay_ids = [] + for _ in range(2): + channel = self.make_request( + "POST", + _get_path_for_delayed_send(self.room_id, _EVENT_TYPE, 100000), + {}, + ) + self.assertEqual(HTTPStatus.OK, channel.code, channel.result) + delay_id = channel.json_body.get("delay_id") + self.assertIsNotNone(delay_id) + delay_ids.append(delay_id) + + channel = self.make_request( + "POST", + f"{PATH_PREFIX}/{delay_ids.pop(0)}", + {"action": "cancel"}, + ) + self.assertEqual(HTTPStatus.OK, channel.code, channel.result) + + args = ( + "POST", + f"{PATH_PREFIX}/{delay_ids.pop(0)}", + {"action": "cancel"}, + ) + channel = self.make_request(*args) + self.assertEqual(HTTPStatus.TOO_MANY_REQUESTS, channel.code, channel.result) + + # Add the current user to the ratelimit overrides, allowing them no ratelimiting. + self.get_success( + self.hs.get_datastores().main.set_ratelimit_for_user(self.user_id, 0, 0) + ) + + # Test that the request isn't ratelimited anymore. + channel = self.make_request(*args) + self.assertEqual(HTTPStatus.OK, channel.code, channel.result) + def test_send_delayed_state_event(self) -> None: state_key = "to_send_on_request" @@ -250,6 +311,44 @@ class DelayedEventsTestCase(HomeserverTestCase): ) self.assertEqual(setter_expected, content.get(setter_key), content) + @unittest.override_config({"rc_message": {"per_second": 3.5, "burst_count": 4}}) + def test_send_delayed_event_ratelimit(self) -> None: + delay_ids = [] + for _ in range(2): + channel = self.make_request( + "POST", + _get_path_for_delayed_send(self.room_id, _EVENT_TYPE, 100000), + {}, + ) + self.assertEqual(HTTPStatus.OK, channel.code, channel.result) + delay_id = channel.json_body.get("delay_id") + self.assertIsNotNone(delay_id) + delay_ids.append(delay_id) + + channel = self.make_request( + "POST", + f"{PATH_PREFIX}/{delay_ids.pop(0)}", + {"action": "send"}, + ) + self.assertEqual(HTTPStatus.OK, channel.code, channel.result) + + args = ( + "POST", + f"{PATH_PREFIX}/{delay_ids.pop(0)}", + {"action": "send"}, + ) + channel = self.make_request(*args) + self.assertEqual(HTTPStatus.TOO_MANY_REQUESTS, channel.code, channel.result) + + # Add the current user to the ratelimit overrides, allowing them no ratelimiting. + self.get_success( + self.hs.get_datastores().main.set_ratelimit_for_user(self.user_id, 0, 0) + ) + + # Test that the request isn't ratelimited anymore. 
+        channel = self.make_request(*args)
+        self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
+
     def test_restart_delayed_state_event(self) -> None:
         state_key = "to_send_on_restarted_timeout"
 
@@ -309,6 +408,46 @@ class DelayedEventsTestCase(HomeserverTestCase):
         )
         self.assertEqual(setter_expected, content.get(setter_key), content)
 
+    @unittest.override_config(
+        {"rc_delayed_event_mgmt": {"per_second": 0.5, "burst_count": 1}}
+    )
+    def test_restart_delayed_event_ratelimit(self) -> None:
+        delay_ids = []
+        for _ in range(2):
+            channel = self.make_request(
+                "POST",
+                _get_path_for_delayed_send(self.room_id, _EVENT_TYPE, 100000),
+                {},
+            )
+            self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
+            delay_id = channel.json_body.get("delay_id")
+            self.assertIsNotNone(delay_id)
+            delay_ids.append(delay_id)
+
+        channel = self.make_request(
+            "POST",
+            f"{PATH_PREFIX}/{delay_ids.pop(0)}",
+            {"action": "restart"},
+        )
+        self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
+
+        args = (
+            "POST",
+            f"{PATH_PREFIX}/{delay_ids.pop(0)}",
+            {"action": "restart"},
+        )
+        channel = self.make_request(*args)
+        self.assertEqual(HTTPStatus.TOO_MANY_REQUESTS, channel.code, channel.result)
+
+        # Add the current user to the ratelimit overrides, exempting them from ratelimiting.
+        self.get_success(
+            self.hs.get_datastores().main.set_ratelimit_for_user(self.user_id, 0, 0)
+        )
+
+        # Test that the request isn't ratelimited anymore.
+        channel = self.make_request(*args)
+        self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
+
     def test_delayed_state_events_are_cancelled_by_more_recent_state(self) -> None:
         state_key = "to_be_cancelled"
 
@@ -374,3 +513,7 @@ def _get_path_for_delayed_state(
     room_id: str, event_type: str, state_key: str, delay_ms: int
 ) -> str:
     return f"rooms/{room_id}/state/{event_type}/{state_key}?org.matrix.msc4140.delay={delay_ms}"
+
+
+def _get_path_for_delayed_send(room_id: str, event_type: str, delay_ms: int) -> str:
+    return f"rooms/{room_id}/send/{event_type}?org.matrix.msc4140.delay={delay_ms}"
diff --git a/tests/rest/client/test_rendezvous.py b/tests/rest/client/test_rendezvous.py
index ab701680a6..83a5cbdc15 100644
--- a/tests/rest/client/test_rendezvous.py
+++ b/tests/rest/client/test_rendezvous.py
@@ -117,10 +117,11 @@ class RendezvousServletTestCase(unittest.HomeserverTestCase):
         headers = dict(channel.headers.getAllRawHeaders())
         self.assertIn(b"ETag", headers)
         self.assertIn(b"Expires", headers)
+        self.assertIn(b"Content-Length", headers)
         self.assertEqual(headers[b"Content-Type"], [b"application/json"])
         self.assertEqual(headers[b"Access-Control-Allow-Origin"], [b"*"])
         self.assertEqual(headers[b"Access-Control-Expose-Headers"], [b"etag"])
-        self.assertEqual(headers[b"Cache-Control"], [b"no-store"])
+        self.assertEqual(headers[b"Cache-Control"], [b"no-store, no-transform"])
         self.assertEqual(headers[b"Pragma"], [b"no-cache"])
         self.assertIn("url", channel.json_body)
         self.assertTrue(channel.json_body["url"].startswith("https://"))
@@ -141,9 +142,10 @@ class RendezvousServletTestCase(unittest.HomeserverTestCase):
         self.assertEqual(headers[b"ETag"], [etag])
         self.assertIn(b"Expires", headers)
         self.assertEqual(headers[b"Content-Type"], [b"text/plain"])
+        self.assertEqual(headers[b"Content-Length"], [b"7"])
         self.assertEqual(headers[b"Access-Control-Allow-Origin"], [b"*"])
         self.assertEqual(headers[b"Access-Control-Expose-Headers"], [b"etag"])
-        self.assertEqual(headers[b"Cache-Control"], [b"no-store"])
+        self.assertEqual(headers[b"Cache-Control"], [b"no-store, no-transform"])
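+        # "no-transform" additionally tells intermediaries (e.g. compressing
+        # proxies) not to modify the response body in transit.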
self.assertEqual(headers[b"Pragma"], [b"no-cache"]) self.assertEqual(channel.text_body, "foo=bar") diff --git a/tests/rest/client/test_rooms.py b/tests/rest/client/test_rooms.py index 604b585150..dd8350ddd1 100644 --- a/tests/rest/client/test_rooms.py +++ b/tests/rest/client/test_rooms.py @@ -67,6 +67,7 @@ from tests.http.server._base import make_request_with_cancellation_test from tests.storage.test_stream import PaginationTestCase from tests.test_utils.event_injection import create_event from tests.unittest import override_config +from tests.utils import default_config PATH_PREFIX = b"/_matrix/client/api/v1" @@ -1371,6 +1372,23 @@ class RoomJoinTestCase(RoomBase): ) self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED") + def test_suspended_user_can_leave_room(self) -> None: + channel = self.make_request( + "POST", f"/join/{self.room1}", access_token=self.tok1 + ) + self.assertEqual(channel.code, 200) + + # set the user as suspended + self.get_success(self.store.set_user_suspended_status(self.user1, True)) + + # leave room + channel = self.make_request( + "POST", + f"/rooms/{self.room1}/leave", + access_token=self.tok1, + ) + self.assertEqual(channel.code, 200) + class RoomAppserviceTsParamTestCase(unittest.HomeserverTestCase): servlets = [ @@ -2381,6 +2399,41 @@ class RoomDelayedEventTestCase(RoomBase): ) self.assertEqual(HTTPStatus.OK, channel.code, channel.result) + @unittest.override_config( + { + "max_event_delay_duration": "24h", + "rc_message": {"per_second": 1, "burst_count": 2}, + } + ) + def test_add_delayed_event_ratelimit(self) -> None: + """Test that requests to schedule new delayed events are ratelimited by a RateLimiter, + which ratelimits them correctly, including by not limiting when the requester is + exempt from ratelimiting. + """ + + # Test that new delayed events are correctly ratelimited. + args = ( + "POST", + ( + "rooms/%s/send/m.room.message?org.matrix.msc4140.delay=2000" + % self.room_id + ).encode("ascii"), + {"body": "test", "msgtype": "m.text"}, + ) + channel = self.make_request(*args) + self.assertEqual(HTTPStatus.OK, channel.code, channel.result) + channel = self.make_request(*args) + self.assertEqual(HTTPStatus.TOO_MANY_REQUESTS, channel.code, channel.result) + + # Add the current user to the ratelimit overrides, allowing them no ratelimiting. + self.get_success( + self.hs.get_datastores().main.set_ratelimit_for_user(self.user_id, 0, 0) + ) + + # Test that the new delayed events aren't ratelimited anymore. 
+        channel = self.make_request(*args)
+        self.assertEqual(HTTPStatus.OK, channel.code, channel.result)
+
 
 class RoomSearchTestCase(unittest.HomeserverTestCase):
     servlets = [
@@ -2548,6 +2601,11 @@ class PublicRoomsRoomTypeFilterTestCase(unittest.HomeserverTestCase):
             tok=self.token,
         )
 
+    def default_config(self) -> JsonDict:
+        config = default_config("test")
+        config["room_list_publication_rules"] = [{"action": "allow"}]
+        return config
+
     def make_public_rooms_request(
         self,
         room_types: Optional[List[Union[str, None]]],
@@ -3989,10 +4047,25 @@ class UserSuspensionTests(unittest.HomeserverTestCase):
         self.user2 = self.register_user("teresa", "hackme")
         self.tok2 = self.login("teresa", "hackme")
 
-        self.room1 = self.helper.create_room_as(room_creator=self.user1, tok=self.tok1)
+        self.admin = self.register_user("admin", "pass", True)
+        self.admin_tok = self.login("admin", "pass")
+
+        self.room1 = self.helper.create_room_as(
+            room_creator=self.user1, tok=self.tok1, room_version="11"
+        )
         self.store = hs.get_datastores().main
 
-    def test_suspended_user_cannot_send_message_to_room(self) -> None:
+        self.room2 = self.helper.create_room_as(
+            room_creator=self.user1, is_public=False, tok=self.tok1
+        )
+        self.helper.send_state(
+            self.room2,
+            EventTypes.RoomEncryption,
+            {EventContentFields.ENCRYPTION_ALGORITHM: "m.megolm.v1.aes-sha2"},
+            tok=self.tok1,
+        )
+
+    def test_suspended_user_cannot_send_message_to_public_room(self) -> None:
         # set the user as suspended
         self.get_success(self.store.set_user_suspended_status(self.user1, True))
 
@@ -4004,6 +4077,24 @@ class UserSuspensionTests(unittest.HomeserverTestCase):
         )
         self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
 
+    def test_suspended_user_cannot_send_message_to_encrypted_room(self) -> None:
+        channel = self.make_request(
+            "PUT",
+            f"/_synapse/admin/v1/suspend/{self.user1}",
+            {"suspend": True},
+            access_token=self.admin_tok,
+        )
+        self.assertEqual(channel.code, 200)
+        self.assertEqual(channel.json_body, {f"user_{self.user1}_suspended": True})
+
+        channel = self.make_request(
+            "PUT",
+            f"/rooms/{self.room2}/send/m.room.encrypted/1",
+            access_token=self.tok1,
+            content={},
+        )
+        self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
+
     def test_suspended_user_cannot_change_profile_data(self) -> None:
         # set the user as suspended
         self.get_success(self.store.set_user_suspended_status(self.user1, True))
@@ -4069,3 +4160,51 @@ class UserSuspensionTests(unittest.HomeserverTestCase):
             shorthand=False,
         )
         self.assertEqual(channel.code, 200)
+
+        channel = self.make_request(
+            "PUT",
+            f"/_matrix/client/v3/rooms/{self.room1}/send/m.room.redaction/3456346",
+            access_token=self.tok1,
+            content={"reason": "bogus", "redacts": event_id},
+            shorthand=False,
+        )
+        self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED")
+
+        channel = self.make_request(
+            "PUT",
+            f"/_matrix/client/v3/rooms/{self.room1}/send/m.room.redaction/3456346",
+            access_token=self.tok1,
+            content={"reason": "bogus", "redacts": event_id2},
+            shorthand=False,
+        )
+        self.assertEqual(channel.code, 200)
+
+    def test_suspended_user_cannot_ban_others(self) -> None:
+        # user to ban joins room user1 created
+        self.make_request("POST", f"/rooms/{self.room1}/join", access_token=self.tok2)
+
+        # suspend user1
+        self.get_success(self.store.set_user_suspended_status(self.user1, True))
+
+        # user1 tries to ban other user while suspended
+        channel = self.make_request(
+            "POST",
+            f"/_matrix/client/v3/rooms/{self.room1}/ban",
+            access_token=self.tok1,
+            content={"reason": "spite", "user_id": self.user2},
"user_id": self.user2}, + shorthand=False, + ) + self.assertEqual(channel.json_body["errcode"], "M_USER_SUSPENDED") + + # un-suspend user1 + self.get_success(self.store.set_user_suspended_status(self.user1, False)) + + # ban now goes through + channel = self.make_request( + "POST", + f"/_matrix/client/v3/rooms/{self.room1}/ban", + access_token=self.tok1, + content={"reason": "spite", "user_id": self.user2}, + shorthand=False, + ) + self.assertEqual(channel.code, 200) diff --git a/tests/storage/test_purge.py b/tests/storage/test_purge.py index 5d6a8518c0..ecdc893405 100644 --- a/tests/storage/test_purge.py +++ b/tests/storage/test_purge.py @@ -24,6 +24,7 @@ from synapse.api.errors import NotFoundError, SynapseError from synapse.rest.client import room from synapse.server import HomeServer from synapse.types.state import StateFilter +from synapse.types.storage import _BackgroundUpdates from synapse.util import Clock from tests.unittest import HomeserverTestCase @@ -247,7 +248,7 @@ class PurgeTests(HomeserverTestCase): 1 + self.state_deletion_store.DELAY_BEFORE_DELETION_MS / 1000 ) - # We expect that the unreferenced state group has been deleted. + # We expect that the unreferenced state group has been deleted from all tables. row = self.get_success( self.state_store.db_pool.simple_select_one_onecol( table="state_groups", @@ -259,6 +260,39 @@ class PurgeTests(HomeserverTestCase): ) self.assertIsNone(row) + row = self.get_success( + self.state_store.db_pool.simple_select_one_onecol( + table="state_groups_state", + keyvalues={"state_group": unreferenced_state_group}, + retcol="state_group", + allow_none=True, + desc="test_purge_unreferenced_state_group", + ) + ) + self.assertIsNone(row) + + row = self.get_success( + self.state_store.db_pool.simple_select_one_onecol( + table="state_group_edges", + keyvalues={"state_group": unreferenced_state_group}, + retcol="state_group", + allow_none=True, + desc="test_purge_unreferenced_state_group", + ) + ) + self.assertIsNone(row) + + row = self.get_success( + self.state_store.db_pool.simple_select_one_onecol( + table="state_groups_pending_deletion", + keyvalues={"state_group": unreferenced_state_group}, + retcol="state_group", + allow_none=True, + desc="test_purge_unreferenced_state_group", + ) + ) + self.assertIsNone(row) + # We expect there to now only be one state group for the room, which is # the state group of the last event (as the only outlier). state_groups = self.get_success( @@ -270,3 +304,99 @@ class PurgeTests(HomeserverTestCase): ) ) self.assertEqual(len(state_groups), 1) + + def test_clear_unreferenced_state_groups(self) -> None: + """Test that any unreferenced state groups are automatically cleaned up.""" + + self.helper.send(self.room_id, body="test1") + state1 = self.helper.send_state( + self.room_id, "org.matrix.test", body={"number": 2} + ) + # Create enough state events to require multiple batches of + # delete_unreferenced_state_groups_bg_update to be run. + for i in range(200): + self.helper.send_state(self.room_id, "org.matrix.test", body={"number": i}) + state2 = self.helper.send_state( + self.room_id, "org.matrix.test", body={"number": 3} + ) + self.helper.send(self.room_id, body="test4") + last = self.helper.send(self.room_id, body="test5") + + # Create an unreferenced state group that has a prev group of one of the + # to-be-purged events. 
+        prev_group = self.get_success(
+            self.store._get_state_group_for_event(state1["event_id"])
+        )
+        unreferenced_state_group = self.get_success(
+            self.state_store.store_state_group(
+                event_id=last["event_id"],
+                room_id=self.room_id,
+                prev_group=prev_group,
+                delta_ids={("org.matrix.test", ""): state2["event_id"]},
+                current_state_ids=None,
+            )
+        )
+
+        another_unreferenced_state_group = self.get_success(
+            self.state_store.store_state_group(
+                event_id=last["event_id"],
+                room_id=self.room_id,
+                prev_group=unreferenced_state_group,
+                delta_ids={("org.matrix.test", ""): state2["event_id"]},
+                current_state_ids=None,
+            )
+        )
+
+        # Insert and run the background update.
+        self.get_success(
+            self.store.db_pool.simple_insert(
+                "background_updates",
+                {
+                    "update_name": _BackgroundUpdates.DELETE_UNREFERENCED_STATE_GROUPS_BG_UPDATE,
+                    "progress_json": "{}",
+                },
+            )
+        )
+        self.store.db_pool.updates._all_done = False
+        self.wait_for_background_updates()
+
+        # Advance so that the background job to delete the state groups runs
+        self.reactor.advance(
+            1 + self.state_deletion_store.DELAY_BEFORE_DELETION_MS / 1000
+        )
+
+        # We expect that the unreferenced state group has been deleted.
+        row = self.get_success(
+            self.state_store.db_pool.simple_select_one_onecol(
+                table="state_groups",
+                keyvalues={"id": unreferenced_state_group},
+                retcol="id",
+                allow_none=True,
+                desc="test_purge_unreferenced_state_group",
+            )
+        )
+        self.assertIsNone(row)
+
+        # We expect that the other unreferenced state group has also been deleted.
+        row = self.get_success(
+            self.state_store.db_pool.simple_select_one_onecol(
+                table="state_groups",
+                keyvalues={"id": another_unreferenced_state_group},
+                retcol="id",
+                allow_none=True,
+                desc="test_purge_unreferenced_state_group",
+            )
+        )
+        self.assertIsNone(row)
+
+        # We expect the 207 referenced state groups for the room to remain, now
+        # that the unreferenced ones have been deleted.
+        state_groups = self.get_success(
+            self.state_store.db_pool.simple_select_onecol(
+                table="state_groups",
+                keyvalues={"room_id": self.room_id},
+                retcol="id",
+                desc="test_purge_unreferenced_state_group",
+            )
+        )
+        self.assertEqual(len(state_groups), 207)