
Compare commits


28 Commits

Author SHA1 Message Date
Eric Eastwood e83513410d Picking backfill points: current_depth should be padded for approximate comparisons 2025-09-02 01:04:06 -05:00
Eric Eastwood b0b9b4ebcb Temp: Implement /messages?backfill=true/false
To try out the flow:

 - **Default to fast responses with gaps**: As a default, we can always
     respond quickly and indicate gaps
     ([MSC3871](https://github.com/matrix-org/matrix-spec-proposals/pull/3871))
     for clients to paginate at their leisure.
 - **Fast back-pagination**: Clients back-paginate with
     `/messages?dir=b&backfill=false`, and Synapse skips backfilling
     entirely, returning only local history with gaps as necessary.
 - **Explicit gap filling**: To fill in gaps, clients use
     `/messages?dir=b&backfill=true`, which works just like today to do a
     best-effort backfill.

This allows the client to back-paginate the history we already have without
delay, and to fill in the gaps as it sees fit.

This is basically a simplified version of
[MSC4282](https://github.com/matrix-org/matrix-spec-proposals/pull/4282).
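
For illustration, a hedged sketch of what a gappy back-pagination response
could look like under this flow. The `gaps` entry shape mirrors the
pagination handler changes further down in this diff; the event ID and
pagination tokens are made-up placeholders:

```
GET /_matrix/client/v3/rooms/{roomId}/messages?dir=b&backfill=false

{
  "chunk": [ ... ],
  "gaps": [
    {
      "prev_pagination_token": "t101-51_0_0_0_0_0_0_0_0",
      "event_id": "$event_next_to_gap",
      "next_pagination_token": "t99-49_0_0_0_0_0_0_0_0"
    }
  ],
  "start": "t102-52_0_0_0_0_0_0_0_0",
  "end": "t90-40_0_0_0_0_0_0_0_0"
}
```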
2025-08-28 19:44:57 -05:00
Eric Eastwood d353cfcbb5 Add changelog 2025-08-28 19:38:21 -05:00
Eric Eastwood 50d8337b72 Add test_gaps_going_backwards 2025-08-28 19:31:36 -05:00
Eric Eastwood a14808e151 First stab at gaps in /messages
See [MSC: Gappy timelines](https://github.com/matrix-org/matrix-spec-proposals/pull/3871)
2025-08-28 16:58:13 -05:00
Eric Eastwood 68068de3a4 Trace how much work is being done while "recursively fetching redactions" (#18854)
Spawning from observing this trace for a `/messages` request
(`RoomMessageListRestServlet`). We don't know if it took a while for the
database to fetch a single redaction or a whole chain of redactions.
2025-08-27 12:27:33 -05:00
Eric Eastwood 356cc4a0a1 Instrument _ByteProducer with tracing to measure potential dead time while writing bytes to the request (#18804)
This allows us to easily see how much time is taken up writing bytes to the
request, by filtering on the `write_bytes_to_request` operation in Jaeger.

Spawning from https://github.com/element-hq/synapse/issues/17722

The `write_bytes_to_request` span won't show up in the trace until
https://github.com/element-hq/synapse/pull/18849 is merged.

Note: It's totally fine for a child span to finish after the parent. See
https://opentracing.io/specification/#references-between-spans which
shows "Child Span D" outliving the "Parent Span"
2025-08-27 12:26:42 -05:00
Eric Eastwood 27fc3389f3 Switch to OpenTracing's ContextVarsScopeManager (#18849)
Switch to OpenTracing's `ContextVarsScopeManager` instead of our own
custom `LogContextScopeManager`.

This is now possible because the linked Twisted issue from the comment
in our custom `LogContextScopeManager` is resolved:
https://twistedmatrix.com/trac/ticket/10301

This PR is spawning from exploring different possibilities to solve the
`scope` loss problem I was encountering in
https://github.com/element-hq/synapse/pull/18804#discussion_r2268254424.
This appears to solve the problem, and I've added the additional test
from there to this PR.
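
A rough sketch of the resulting tracer setup, assuming the `jaeger_client`
and `opentracing` packages (the sampler config and service name below are
illustrative placeholders, not Synapse's actual values):

```python
from jaeger_client import Config as JaegerConfig
from opentracing.scope_managers.contextvars import ContextVarsScopeManager

config = JaegerConfig(
    config={"sampler": {"type": "const", "param": 1}},  # illustrative sampler
    service_name="example-homeserver",  # hypothetical service name
    # OpenTracing's stock scope manager, which tracks the active scope via
    # contextvars, replaces the custom LogContextScopeManager.
    scope_manager=ContextVarsScopeManager(),
)
tracer = config.initialize_tracer()
```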
2025-08-27 11:41:00 -05:00
Eric Eastwood df2cfb3932 Link upstream Twisted bug: Idle connection timeout incorrectly enforced while sending large response with Request.write(...) (#18855)
Link upstream Twisted bug ->
https://github.com/twisted/twisted/issues/12498

Spawning from https://github.com/element-hq/synapse/pull/18852
2025-08-27 11:25:57 -05:00
Andrew Ferrazzutti c339021ce8 Reduce strictness of delayed event delta fetching (#18858) 2025-08-27 13:26:10 +01:00
dependabot[bot] 499f947c67 Bump actions/checkout from 4.3.0 to 5.0.0 (#18834)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:47:19 +01:00
dependabot[bot] e76a9af4d7 Bump types-jsonschema from 4.25.0.20250720 to 4.25.1.20250822 (#18867)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:28:01 +01:00
dependabot[bot] eec1ca6e93 Bump serde_json from 1.0.142 to 1.0.143 (#18866)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:04:50 +01:00
dependabot[bot] 56b5759c0f Bump ruff from 0.12.7 to 0.12.10 (#18865)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:03:41 +01:00
dependabot[bot] 767177ca5a Bump regex from 1.11.1 to 1.11.2 (#18864)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:01:59 +01:00
dependabot[bot] 5b8e6e7911 Bump actions/add-to-project from c0c5949b017d0d4a39f7ba888255881bdac2a823 to 4515659e2b458b27365e167605ac44f219494b66 (#18863)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:01:12 +01:00
dependabot[bot] 6a6be6fbe2 Bump dtolnay/rust-toolchain from b3b07ba8b418998c39fb20f53e8b695cdcc8de1b to e97e2d8cc328f1b50210efc529dca0028893a2d9 (#18862)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 07:00:07 +01:00
dependabot[bot] 21c7841228 Bump reqwest from 0.12.22 to 0.12.23 (#18842)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:57:17 +01:00
dependabot[bot] 5b55e3f15d Bump anyhow from 1.0.98 to 1.0.99 (#18841)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:55:31 +01:00
dependabot[bot] 0e2b92bcbc Bump types-bleach from 6.2.0.20250514 to 6.2.0.20250809 (#18838)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:54:32 +01:00
dependabot[bot] 481987eb83 Bump phonenumbers from 9.0.11 to 9.0.12 (#18837)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:53:52 +01:00
dependabot[bot] 5fd30c7ea7 Bump types-psycopg2 from 2.9.21.20250718 to 2.9.21.20250809 (#18836)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:53:28 +01:00
dependabot[bot] d527c794fb Bump docker/login-action from 3.4.0 to 3.5.0 (#18835)
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-27 06:52:28 +01:00
Erik Johnston 19fe3f001e Merge branch 'master' into develop 2025-08-26 10:54:46 +01:00
Erik Johnston f8a44638eb 1.137.0 2025-08-26 10:23:44 +01:00
Richard van der Hoff 7ec5e60671 Introduce EventPersistencePair type (#18857)
`Tuple[EventBase, EventContext]` is everywhere and I keep misspelling
it. Let's just define a type for it.
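
A minimal sketch of the alias and how a signature reads with it (the stub
classes below stand in for the real `EventBase` and `EventContext`):

```python
from typing import List, Tuple


class EventBase: ...


class EventContext: ...


# The alias introduced by this commit.
EventPersistencePair = Tuple[EventBase, EventContext]


def persist_events(pairs: List[EventPersistencePair]) -> None:
    for event, context in pairs:
        ...  # persist each event alongside its context
```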
2025-08-26 10:15:03 +01:00
Ben Banfield-Zanin 48184eefa3 Fix worker documentation around room Admin APIs (#18853)
Discovered via https://github.com/element-hq/ess-helm/issues/677.
Looking at
https://github.com/element-hq/synapse/blob/v1.136.0/synapse/rest/admin/__init__.py#L266
only `RoomRestServlet` is generally worker-capable. That covers just the
Room Details API and the v1 Room Delete API, not all of the APIs
documented on
https://element-hq.github.io/synapse/latest/admin_api/rooms.html
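
To make the scope of the fix concrete, a quick sketch checking the tightened
worker route pattern from this diff (the room ID is a made-up example):

```python
import re

pattern = re.compile(r"^/_synapse/admin/v1/rooms/[^/]+$")

# Matches: the Room Details / v1 Room Delete endpoint for a single room.
assert pattern.match("/_synapse/admin/v1/rooms/!abc:example.com")

# No longer matched: deeper room Admin APIs stay off the worker list.
assert not pattern.match("/_synapse/admin/v1/rooms/!abc:example.com/members")
```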
2025-08-26 10:04:47 +02:00
Shay 205d9e4fc4 Improve redact_on_ban performance (#18851)
Co-authored-by: Erik Johnston <erikj@jki.re>
2025-08-23 11:43:50 +01:00
56 changed files with 852 additions and 465 deletions
+5 -5
@@ -31,7 +31,7 @@ jobs:
uses: docker/setup-buildx-action@e468171a9de216ec08956ac3ada2f0791b6bd435 # v3.11.1
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Extract version from pyproject.toml
# Note: explicitly requesting bash will mean bash is invoked with `-eo pipefail`, see
@@ -41,13 +41,13 @@ jobs:
echo "SYNAPSE_VERSION=$(grep "^version" pyproject.toml | sed -E 's/version\s*=\s*["]([^"]*)["]/\1/')" >> $GITHUB_ENV
- name: Log in to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -102,14 +102,14 @@ jobs:
merge-multiple: true
- name: Log in to DockerHub
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
if: ${{ startsWith(matrix.repository, 'docker.io') }}
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
if: ${{ startsWith(matrix.repository, 'ghcr.io') }}
with:
registry: ghcr.io
+2 -2
@@ -13,7 +13,7 @@ jobs:
name: GitHub Pages
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
@@ -50,7 +50,7 @@ jobs:
name: Check links in documentation
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0
+1 -1
@@ -50,7 +50,7 @@ jobs:
needs:
- pre
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
+2 -2
@@ -18,10 +18,10 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
components: clippy, rustfmt
+8 -8
@@ -42,9 +42,9 @@ jobs:
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -77,10 +77,10 @@ jobs:
postgres-version: "14"
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -152,10 +152,10 @@ jobs:
BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -202,7 +202,7 @@ jobs:
steps:
- name: Check out synapse codebase
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: synapse
@@ -234,7 +234,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+1 -1
@@ -16,7 +16,7 @@ jobs:
name: "Check locked dependencies have sdists"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.x'
+4 -4
@@ -33,22 +33,22 @@ jobs:
packages: write
steps:
- name: Checkout specific branch (debug build)
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
if: github.event_name == 'workflow_dispatch'
with:
ref: ${{ inputs.branch }}
- name: Checkout clean copy of develop (scheduled build)
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
if: github.event_name == 'schedule'
with:
ref: develop
- name: Checkout clean copy of master (on-push)
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
if: github.event_name == 'push'
with:
ref: master
- name: Login to registry
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
uses: docker/login-action@184bdaa0721073962dff0199f1fb9940f07167d1 # v3.5.0
with:
registry: ghcr.io
username: ${{ github.actor }}
+4 -4
@@ -27,7 +27,7 @@ jobs:
name: "Calculate list of debian distros"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -55,7 +55,7 @@ jobs:
steps:
- name: Checkout
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: src
@@ -132,7 +132,7 @@ jobs:
os: "ubuntu-24.04-arm"
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
@@ -165,7 +165,7 @@ jobs:
if: ${{ !startsWith(github.ref, 'refs/pull/') }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.10"
+2 -2
@@ -14,7 +14,7 @@ jobs:
name: Ensure Synapse config schema is valid
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -40,7 +40,7 @@ jobs:
name: Ensure generated documentation is up-to-date
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
+36 -36
@@ -86,9 +86,9 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -106,7 +106,7 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -116,7 +116,7 @@ jobs:
check-lockfile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -129,7 +129,7 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
@@ -151,10 +151,10 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -187,7 +187,7 @@ jobs:
lint-crlf:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Check line endings
run: scripts-dev/check_line_terminators.sh
@@ -195,7 +195,7 @@ jobs:
if: ${{ (github.base_ref == 'develop' || contains(github.base_ref, 'release-')) && github.actor != 'dependabot[bot]' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
@@ -213,11 +213,11 @@ jobs:
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -233,10 +233,10 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
components: clippy
toolchain: ${{ env.RUST_VERSION }}
@@ -252,10 +252,10 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: nightly-2025-04-23
components: clippy
@@ -270,10 +270,10 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -306,10 +306,10 @@ jobs:
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
# We use nightly so that we can use some unstable options that we use in
# `.rustfmt.toml`.
@@ -326,7 +326,7 @@ jobs:
needs: changes
if: ${{ needs.changes.outputs.linting_readme == 'true' }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -376,7 +376,7 @@ jobs:
needs: linting-done
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: "3.x"
@@ -397,7 +397,7 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.trial_test_matrix) }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.job.postgres-version }}
if: ${{ matrix.job.postgres-version }}
@@ -412,7 +412,7 @@ jobs:
postgres:${{ matrix.job.postgres-version }}
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -453,10 +453,10 @@ jobs:
- changes
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -518,7 +518,7 @@ jobs:
extras: ["all"]
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
# Install libs necessary for PyPy to build binary wheels for dependencies
- run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
@@ -568,12 +568,12 @@ jobs:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.sytest_test_matrix) }}
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -615,7 +615,7 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@5bbf6603c5c930615ec8a29f1b5d7d258d905aa4 # v2.0.0
with:
@@ -659,7 +659,7 @@ jobs:
--health-retries 5
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Add PostgreSQL apt repository
# We need a version of pg_dump that can handle the version of
# PostgreSQL being tested against. The Ubuntu package repository lags
@@ -714,12 +714,12 @@ jobs:
steps:
- name: Checkout synapse codebase
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: synapse
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -750,10 +750,10 @@ jobs:
- changes
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -770,10 +770,10 @@ jobs:
- changes
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: nightly-2022-12-01
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
+1 -1
@@ -11,7 +11,7 @@ jobs:
if: >
contains(github.event.issue.labels.*.name, 'X-Needs-Info')
steps:
- uses: actions/add-to-project@c0c5949b017d0d4a39f7ba888255881bdac2a823 # v1.0.2
- uses: actions/add-to-project@4515659e2b458b27365e167605ac44f219494b66 # v1.0.2
id: add_project
with:
project-url: "https://github.com/orgs/matrix-org/projects/67"
+8 -8
@@ -43,10 +43,10 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -70,11 +70,11 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -117,10 +117,10 @@ jobs:
- ${{ github.workspace }}:/src
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- name: Install Rust
uses: dtolnay/rust-toolchain@b3b07ba8b418998c39fb20f53e8b695cdcc8de1b # master
uses: dtolnay/rust-toolchain@e97e2d8cc328f1b50210efc529dca0028893a2d9 # master
with:
toolchain: ${{ env.RUST_VERSION }}
- uses: Swatinem/rust-cache@98c8021b550208e191a6a3145459bfc9fb29c4c0 # v2.8.0
@@ -175,7 +175,7 @@ jobs:
steps:
- name: Run actions/checkout@v4 for synapse
uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
with:
path: synapse
@@ -217,7 +217,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@08eba0b27e820071cde6df949e0beb9ba4906955 # v4.3.0
- uses: actions/checkout@08c6903cd8c0fde910a37f88322edcfb5dd907a8 # v5.0.0
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+7
@@ -1,3 +1,10 @@
# Synapse 1.137.0 (2025-08-26)
No significant changes since 1.137.0rc1.
# Synapse 1.137.0rc1 (2025-08-19)
### Bugfixes
+8 -8
@@ -28,9 +28,9 @@ dependencies = [
[[package]]
name = "anyhow"
version = "1.0.98"
version = "1.0.99"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e16d2d3311acee920a9eb8d33b8cbc1787ce4a264e85f964c2404b969bdcd487"
checksum = "b0674a1ddeecb70197781e945de4b3b8ffb61fa939a5597bcf48503737663100"
[[package]]
name = "arc-swap"
@@ -1062,9 +1062,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.11.1"
version = "1.11.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b544ef1b4eac5dc2db33ea63606ae9ffcfac26c1416a2806ae0bf5f56b201191"
checksum = "23d7fd106d8c02486a8d64e778353d1cffe08ce79ac2e82f540c86d0facf6912"
dependencies = [
"aho-corasick",
"memchr",
@@ -1091,9 +1091,9 @@ checksum = "2b15c43186be67a4fd63bee50d0303afffcef381492ebe2c5d87f324e1b8815c"
[[package]]
name = "reqwest"
version = "0.12.22"
version = "0.12.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cbc931937e6ca3a06e3b6c0aa7841849b160a90351d6ab467a8b9b9959767531"
checksum = "d429f34c8092b2d42c7c93cec323bb4adeb7c67698f70839adec842ec10c7ceb"
dependencies = [
"base64",
"bytes",
@@ -1270,9 +1270,9 @@ dependencies = [
[[package]]
name = "serde_json"
version = "1.0.142"
version = "1.0.143"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "030fedb782600dcbd6f02d479bf0d817ac3bb40d644745b769d6a96bc3afc5a7"
checksum = "d401abef1d108fbd9cbaebc3e46611f4b1021f714a0597a71f41ee463f5f4a5a"
dependencies = [
"itoa",
"memchr",
+1
@@ -0,0 +1 @@
Instrument `_ByteProducer` with tracing to measure potential dead time while writing bytes to the request.
+1
@@ -0,0 +1 @@
Switch to OpenTracing's `ContextVarsScopeManager` instead of our own custom `LogContextScopeManager`.
+1
@@ -0,0 +1 @@
Improve database performance of [MSC4293](https://github.com/matrix-org/matrix-spec-proposals/pull/4293) - Redact on Kick/Ban.
+1
@@ -0,0 +1 @@
Fix worker documentation incorrectly indicating all room Admin API requests were capable of being handled by workers.
+1
@@ -0,0 +1 @@
Trace how much work is being done while "recursively fetching redactions".
+1
@@ -0,0 +1 @@
Link [upstream Twisted bug](https://github.com/twisted/twisted/issues/12498) tracking the problem that explains why we have to use a `Producer` to write bytes to the request.
+1
@@ -0,0 +1 @@
Introduce `EventPersistencePair` type.
+1
@@ -0,0 +1 @@
Do not throw an error when fetching a rejected delayed state event on startup.
+1
@@ -0,0 +1 @@
Implement experimental [MSC3871](https://github.com/matrix-org/matrix-spec-proposals/pull/3871) to indicate `gaps` in the `/messages` timeline.
+6
@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.137.0) stable; urgency=medium
* New Synapse release 1.137.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 26 Aug 2025 10:23:41 +0100
matrix-synapse-py3 (1.137.0~rc1) stable; urgency=medium
* New Synapse release 1.137.0rc1.
+1 -1
@@ -252,7 +252,7 @@ information.
^/_matrix/client/(api/v1|r0|v3|unstable)/directory/room/.*$
^/_matrix/client/(r0|v3|unstable)/capabilities$
^/_matrix/client/(r0|v3|unstable)/notifications$
^/_synapse/admin/v1/rooms/
^/_synapse/admin/v1/rooms/[^/]+$
# Encryption requests
^/_matrix/client/(r0|v3|unstable)/keys/query$
+33 -32
@@ -1531,14 +1531,14 @@ files = [
[[package]]
name = "phonenumbers"
version = "9.0.11"
version = "9.0.12"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
groups = ["main"]
files = [
{file = "phonenumbers-9.0.11-py2.py3-none-any.whl", hash = "sha256:a8ebb2136f1f14dfdbadb98be01cb71b96f880dea011eb5e0921967fe3a23abf"},
{file = "phonenumbers-9.0.11.tar.gz", hash = "sha256:6573858dcf0a7a2753a071375e154d9fc11791546c699b575af95d2ba7d84a1d"},
{file = "phonenumbers-9.0.12-py2.py3-none-any.whl", hash = "sha256:900633afc3e12191458d710262df5efc117838bd1e2e613b64fa254a86bb20a1"},
{file = "phonenumbers-9.0.12.tar.gz", hash = "sha256:ccadff6b949494bd606836d8c9678bee5b55cb1cbad1e98bf7adae108e6fd0be"},
]
[[package]]
@@ -2396,30 +2396,31 @@ files = [
[[package]]
name = "ruff"
version = "0.12.7"
version = "0.12.10"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
groups = ["dev"]
files = [
{file = "ruff-0.12.7-py3-none-linux_armv6l.whl", hash = "sha256:76e4f31529899b8c434c3c1dede98c4483b89590e15fb49f2d46183801565303"},
{file = "ruff-0.12.7-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:789b7a03e72507c54fb3ba6209e4bb36517b90f1a3569ea17084e3fd295500fb"},
{file = "ruff-0.12.7-py3-none-macosx_11_0_arm64.whl", hash = "sha256:2e1c2a3b8626339bb6369116e7030a4cf194ea48f49b64bb505732a7fce4f4e3"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:32dec41817623d388e645612ec70d5757a6d9c035f3744a52c7b195a57e03860"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:47ef751f722053a5df5fa48d412dbb54d41ab9b17875c6840a58ec63ff0c247c"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:a828a5fc25a3efd3e1ff7b241fd392686c9386f20e5ac90aa9234a5faa12c423"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:5726f59b171111fa6a69d82aef48f00b56598b03a22f0f4170664ff4d8298efb"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:74e6f5c04c4dd4aba223f4fe6e7104f79e0eebf7d307e4f9b18c18362124bccd"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5d0bfe4e77fba61bf2ccadf8cf005d6133e3ce08793bbe870dd1c734f2699a3e"},
{file = "ruff-0.12.7-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:06bfb01e1623bf7f59ea749a841da56f8f653d641bfd046edee32ede7ff6c606"},
{file = "ruff-0.12.7-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:e41df94a957d50083fd09b916d6e89e497246698c3f3d5c681c8b3e7b9bb4ac8"},
{file = "ruff-0.12.7-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:4000623300563c709458d0ce170c3d0d788c23a058912f28bbadc6f905d67afa"},
{file = "ruff-0.12.7-py3-none-musllinux_1_2_i686.whl", hash = "sha256:69ffe0e5f9b2cf2b8e289a3f8945b402a1b19eff24ec389f45f23c42a3dd6fb5"},
{file = "ruff-0.12.7-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a07a5c8ffa2611a52732bdc67bf88e243abd84fe2d7f6daef3826b59abbfeda4"},
{file = "ruff-0.12.7-py3-none-win32.whl", hash = "sha256:c928f1b2ec59fb77dfdf70e0419408898b63998789cc98197e15f560b9e77f77"},
{file = "ruff-0.12.7-py3-none-win_amd64.whl", hash = "sha256:9c18f3d707ee9edf89da76131956aba1270c6348bfee8f6c647de841eac7194f"},
{file = "ruff-0.12.7-py3-none-win_arm64.whl", hash = "sha256:dfce05101dbd11833a0776716d5d1578641b7fddb537fe7fa956ab85d1769b69"},
{file = "ruff-0.12.7.tar.gz", hash = "sha256:1fc3193f238bc2d7968772c82831a4ff69252f673be371fb49663f0068b7ec71"},
{file = "ruff-0.12.10-py3-none-linux_armv6l.whl", hash = "sha256:8b593cb0fb55cc8692dac7b06deb29afda78c721c7ccfed22db941201b7b8f7b"},
{file = "ruff-0.12.10-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:ebb7333a45d56efc7c110a46a69a1b32365d5c5161e7244aaf3aa20ce62399c1"},
{file = "ruff-0.12.10-py3-none-macosx_11_0_arm64.whl", hash = "sha256:d59e58586829f8e4a9920788f6efba97a13d1fa320b047814e8afede381c6839"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:822d9677b560f1fdeab69b89d1f444bf5459da4aa04e06e766cf0121771ab844"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:37b4a64f4062a50c75019c61c7017ff598cb444984b638511f48539d3a1c98db"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:2c6f4064c69d2542029b2a61d39920c85240c39837599d7f2e32e80d36401d6e"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:059e863ea3a9ade41407ad71c1de2badfbe01539117f38f763ba42a1206f7559"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1bef6161e297c68908b7218fa6e0e93e99a286e5ed9653d4be71e687dff101cf"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4f1345fbf8fb0531cd722285b5f15af49b2932742fc96b633e883da8d841896b"},
{file = "ruff-0.12.10-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:1f68433c4fbc63efbfa3ba5db31727db229fa4e61000f452c540474b03de52a9"},
{file = "ruff-0.12.10-py3-none-manylinux_2_31_riscv64.whl", hash = "sha256:141ce3d88803c625257b8a6debf4a0473eb6eed9643a6189b68838b43e78165a"},
{file = "ruff-0.12.10-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:f3fc21178cd44c98142ae7590f42ddcb587b8e09a3b849cbc84edb62ee95de60"},
{file = "ruff-0.12.10-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:7d1a4e0bdfafcd2e3e235ecf50bf0176f74dd37902f241588ae1f6c827a36c56"},
{file = "ruff-0.12.10-py3-none-musllinux_1_2_i686.whl", hash = "sha256:e67d96827854f50b9e3e8327b031647e7bcc090dbe7bb11101a81a3a2cbf1cc9"},
{file = "ruff-0.12.10-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:ae479e1a18b439c59138f066ae79cc0f3ee250712a873d00dbafadaad9481e5b"},
{file = "ruff-0.12.10-py3-none-win32.whl", hash = "sha256:9de785e95dc2f09846c5e6e1d3a3d32ecd0b283a979898ad427a9be7be22b266"},
{file = "ruff-0.12.10-py3-none-win_amd64.whl", hash = "sha256:7837eca8787f076f67aba2ca559cefd9c5cbc3a9852fd66186f4201b87c1563e"},
{file = "ruff-0.12.10-py3-none-win_arm64.whl", hash = "sha256:cc138cc06ed9d4bfa9d667a65af7172b47840e1a98b02ce7011c391e54635ffc"},
{file = "ruff-0.12.10.tar.gz", hash = "sha256:189ab65149d11ea69a2d775343adf5f49bb2426fc4780f65ee33b423ad2e47f9"},
]
[[package]]
@@ -2877,14 +2878,14 @@ twisted = "*"
[[package]]
name = "types-bleach"
version = "6.2.0.20250514"
version = "6.2.0.20250809"
description = "Typing stubs for bleach"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_bleach-6.2.0.20250514-py3-none-any.whl", hash = "sha256:380cb74f0db1e3c3b2e0cde217221108e975e07e95ef0970c9d41f7cd4e8ea3c"},
{file = "types_bleach-6.2.0.20250514.tar.gz", hash = "sha256:38c2e51d9cac51dc70c1b66121a11f4dad8bbf47fbad494bb7a77d8b8f3c4323"},
{file = "types_bleach-6.2.0.20250809-py3-none-any.whl", hash = "sha256:0b372a75117947d9ac8a31ae733fd0f8d92ec75c4772e7b37093ba3fa5b48fb9"},
{file = "types_bleach-6.2.0.20250809.tar.gz", hash = "sha256:188d7a1119f6c953140b513ed57ba4213755695815472c19d0c22ac09c79b90b"},
]
[package.dependencies]
@@ -2919,14 +2920,14 @@ files = [
[[package]]
name = "types-jsonschema"
version = "4.25.0.20250720"
version = "4.25.1.20250822"
description = "Typing stubs for jsonschema"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_jsonschema-4.25.0.20250720-py3-none-any.whl", hash = "sha256:7d7897c715310d8bf9ae27a2cedba78bbb09e4cad83ce06d2aa79b73a88941df"},
{file = "types_jsonschema-4.25.0.20250720.tar.gz", hash = "sha256:765a3b6144798fe3161fd8cbe570a756ed3e8c0e5adb7c09693eb49faad39dbd"},
{file = "types_jsonschema-4.25.1.20250822-py3-none-any.whl", hash = "sha256:f82c2d7fa1ce1c0b84ba1de4ed6798469768188884db04e66421913a4e181294"},
{file = "types_jsonschema-4.25.1.20250822.tar.gz", hash = "sha256:aac69ed4b23f49aaceb7fcb834141d61b9e4e6a7f6008cb2f0d3b831dfa8464a"},
]
[package.dependencies]
@@ -2970,14 +2971,14 @@ files = [
[[package]]
name = "types-psycopg2"
version = "2.9.21.20250718"
version = "2.9.21.20250809"
description = "Typing stubs for psycopg2"
optional = false
python-versions = ">=3.9"
groups = ["dev"]
files = [
{file = "types_psycopg2-2.9.21.20250718-py3-none-any.whl", hash = "sha256:bcf085d4293bda48f5943a46dadf0389b2f98f7e8007722f7e1c12ee0f541858"},
{file = "types_psycopg2-2.9.21.20250718.tar.gz", hash = "sha256:dc09a97272ef67e739e57b9f4740b761208f4514257e311c0b05c8c7a37d04b4"},
{file = "types_psycopg2-2.9.21.20250809-py3-none-any.whl", hash = "sha256:59b7b0ed56dcae9efae62b8373497274fc1a0484bdc5135cdacbe5a8f44e1d7b"},
{file = "types_psycopg2-2.9.21.20250809.tar.gz", hash = "sha256:b7c2cbdcf7c0bd16240f59ba694347329b0463e43398de69784ea4dee45f3c6d"},
]
[[package]]
@@ -3255,4 +3256,4 @@ url-preview = ["lxml"]
[metadata]
lock-version = "2.1"
python-versions = "^3.9.0"
content-hash = "600a349d08dde732df251583094a121b5385eb43ae0c6ceff10dcf9749359446"
content-hash = "2e8ea085e1a0c6f0ac051d4bc457a96827d01f621b1827086de01a5ffa98cf79"
+2 -2
@@ -101,7 +101,7 @@ module-name = "synapse.synapse_rust"
[tool.poetry]
name = "matrix-synapse"
version = "1.137.0rc1"
version = "1.137.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -324,7 +324,7 @@ all = [
# failing on new releases. Keeping lower bounds loose here means that dependabot
# can bump versions without having to update the content-hash in the lockfile.
# This helps prevents merge conflicts when running a batch of dependabot updates.
ruff = "0.12.7"
ruff = "0.12.10"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"
-5
@@ -62,7 +62,6 @@ fn bench_match_exact(b: &mut Bencher) {
false,
false,
false,
false,
)
.unwrap();
@@ -110,7 +109,6 @@ fn bench_match_word(b: &mut Bencher) {
false,
false,
false,
false,
)
.unwrap();
@@ -158,7 +156,6 @@ fn bench_match_word_miss(b: &mut Bencher) {
false,
false,
false,
false,
)
.unwrap();
@@ -206,7 +203,6 @@ fn bench_eval_message(b: &mut Bencher) {
false,
false,
false,
false,
)
.unwrap();
@@ -219,7 +215,6 @@ fn bench_eval_message(b: &mut Bencher) {
false,
false,
false,
false,
);
b.iter(|| eval.run(&rules, Some("bob"), Some("person"), None));
-8
@@ -111,8 +111,6 @@ pub struct PushRuleEvaluator {
/// If MSC4306 (thread subscriptions) is enabled.
msc4306_enabled: bool,
xxx_enabled: bool,
}
#[pymethods]
@@ -132,7 +130,6 @@ impl PushRuleEvaluator {
msc3931_enabled,
msc4210_enabled,
msc4306_enabled,
xxx_enabled,
))]
pub fn py_new(
flattened_keys: BTreeMap<String, JsonValue>,
@@ -146,7 +143,6 @@ impl PushRuleEvaluator {
msc3931_enabled: bool,
msc4210_enabled: bool,
msc4306_enabled: bool,
xxx_enabled: bool,
) -> Result<Self, Error> {
let body = match flattened_keys.get("content.body") {
Some(JsonValue::Value(SimpleJsonValue::Str(s))) => s.clone().into_owned(),
@@ -166,7 +162,6 @@ impl PushRuleEvaluator {
msc3931_enabled,
msc4210_enabled,
msc4306_enabled,
xxx_enabled,
})
}
@@ -574,7 +569,6 @@ fn push_rule_evaluator() {
true,
false,
false,
false,
)
.unwrap();
@@ -606,7 +600,6 @@ fn test_requires_room_version_supports_condition() {
true,
false,
false,
false,
)
.unwrap();
@@ -644,7 +637,6 @@ fn test_requires_room_version_supports_condition() {
false,
false,
false,
false,
),
None,
None,
-7
@@ -552,7 +552,6 @@ pub struct FilteredPushRules {
msc4028_push_encrypted_events: bool,
msc4210_enabled: bool,
msc4306_enabled: bool,
xxx_enabled: bool,
}
#[pymethods]
@@ -568,7 +567,6 @@ impl FilteredPushRules {
msc4028_push_encrypted_events: bool,
msc4210_enabled: bool,
msc4306_enabled: bool,
xxx_enabled: bool,
) -> Self {
Self {
push_rules,
@@ -579,7 +577,6 @@ impl FilteredPushRules {
msc4028_push_encrypted_events,
msc4210_enabled,
msc4306_enabled,
xxx_enabled,
}
}
@@ -634,10 +631,6 @@ impl FilteredPushRules {
return false;
}
if self.xxx_enabled {
return true;
}
true
})
.map(|r| {
+7 -1
@@ -306,6 +306,12 @@ class EventContext(UnpersistedEventContextBase):
)
EventPersistencePair = Tuple[EventBase, EventContext]
"""
The combination of an event to be persisted and its context.
"""
@attr.s(slots=True, auto_attribs=True)
class UnpersistedEventContext(UnpersistedEventContextBase):
"""
@@ -363,7 +369,7 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
room_id: str,
last_known_state_group: int,
datastore: "StateGroupDataStore",
) -> List[Tuple[EventBase, EventContext]]:
) -> List[EventPersistencePair]:
"""
Takes a list of events and their associated unpersisted contexts and persists
the unpersisted contexts, returning a list of events and persisted contexts.
+2 -2
@@ -59,7 +59,7 @@ from synapse.api.errors import (
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.crypto.event_signing import compute_event_signature
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventPersistencePair
from synapse.federation.federation_base import (
FederationBase,
InvalidEventSignatureError,
@@ -914,7 +914,7 @@ class FederationServer(FederationBase):
async def _on_send_membership_event(
self, origin: str, content: JsonDict, membership_type: str, room_id: str
) -> Tuple[EventBase, EventContext]:
) -> EventPersistencePair:
"""Handle an on_send_{join,leave,knock} request
Does some preliminary validation before passing the request on to the
+3 -3
@@ -215,9 +215,9 @@ class DelayedEventsHandler:
"Handling: %r %r, %s", delta.event_type, delta.state_key, delta.event_id
)
event = await self._store.get_event(
delta.event_id, check_room_id=delta.room_id
)
event = await self._store.get_event(delta.event_id, allow_none=True)
if not event:
continue
sender = UserID.from_string(event.sender)
next_send_ts = await self._store.cancel_delayed_state_events(
+3 -1
@@ -259,7 +259,9 @@ class FederationHandler:
_BackfillPoint(event_id, depth, _BackfillPointType.BACKWARDS_EXTREMITY)
for event_id, depth in await self.store.get_backfill_points_in_room(
room_id=room_id,
current_depth=current_depth,
# Per the docstring, it's best to pad the `current_depth` by the
# number of messages you plan to backfill from these points.
current_depth=current_depth + limit,
# We only need to end up with 5 extremities combined with the
# insertion event extremities to make the `/backfill` request
# but fetch an order of magnitude more to make sure there is
+8 -4
@@ -66,7 +66,11 @@ from synapse.event_auth import (
validate_event_for_room_version,
)
from synapse.events import EventBase
from synapse.events.snapshot import EventContext, UnpersistedEventContextBase
from synapse.events.snapshot import (
EventContext,
EventPersistencePair,
UnpersistedEventContextBase,
)
from synapse.federation.federation_client import InvalidResponseError, PulledPduInfo
from synapse.logging.context import nested_logging_context
from synapse.logging.opentracing import (
@@ -341,7 +345,7 @@ class FederationEventHandler:
async def on_send_membership_event(
self, origin: str, event: EventBase
) -> Tuple[EventBase, EventContext]:
) -> EventPersistencePair:
"""
We have received a join/leave/knock event for a room via send_join/leave/knock.
@@ -1712,7 +1716,7 @@ class FederationEventHandler:
)
auth_map.update(persisted_events)
events_and_contexts_to_persist: List[Tuple[EventBase, EventContext]] = []
events_and_contexts_to_persist: List[EventPersistencePair] = []
async def prep(event: EventBase) -> None:
with nested_logging_context(suffix=event.event_id):
@@ -2225,7 +2229,7 @@ class FederationEventHandler:
async def persist_events_and_notify(
self,
room_id: str,
event_and_contexts: Sequence[Tuple[EventBase, EventContext]],
event_and_contexts: Sequence[EventPersistencePair],
backfilled: bool = False,
) -> int:
"""Persists events and tells the notifier/pushers about them, if
+5 -4
@@ -57,6 +57,7 @@ from synapse.events import EventBase, relation_from_event
from synapse.events.builder import EventBuilder
from synapse.events.snapshot import (
EventContext,
EventPersistencePair,
UnpersistedEventContext,
UnpersistedEventContextBase,
)
@@ -1439,7 +1440,7 @@ class EventCreationHandler:
async def handle_new_client_event(
self,
requester: Requester,
events_and_context: List[Tuple[EventBase, EventContext]],
events_and_context: List[EventPersistencePair],
ratelimit: bool = True,
extra_users: Optional[List[UserID]] = None,
ignore_shadow_ban: bool = False,
@@ -1651,7 +1652,7 @@ class EventCreationHandler:
async def _persist_events(
self,
requester: Requester,
events_and_context: List[Tuple[EventBase, EventContext]],
events_and_context: List[EventPersistencePair],
ratelimit: bool = True,
extra_users: Optional[List[UserID]] = None,
) -> EventBase:
@@ -1737,7 +1738,7 @@ class EventCreationHandler:
raise
async def cache_joined_hosts_for_events(
self, events_and_context: List[Tuple[EventBase, EventContext]]
self, events_and_context: List[EventPersistencePair]
) -> None:
"""Precalculate the joined hosts at each of the given events, when using Redis, so that
external federation senders don't have to recalculate it themselves.
@@ -1843,7 +1844,7 @@ class EventCreationHandler:
async def persist_and_notify_client_events(
self,
requester: Requester,
events_and_context: List[Tuple[EventBase, EventContext]],
events_and_context: List[EventPersistencePair],
ratelimit: bool = True,
extra_users: Optional[List[UserID]] = None,
) -> EventBase:
+23 -1
@@ -414,12 +414,14 @@ class PaginationHandler:
@trace
async def get_messages(
self,
*,
requester: Requester,
room_id: str,
pagin_config: PaginationConfig,
as_client_event: bool = True,
event_filter: Optional[Filter] = None,
use_admin_priviledge: bool = False,
backfill: bool = True,
) -> JsonDict:
"""Get messages in a room.
@@ -432,6 +434,8 @@ class PaginationHandler:
use_admin_priviledge: if `True`, return all events, regardless
of whether `user` has access to them. To be used **ONLY**
from the admin API.
backfill: If false, we skip backfill altogether. When true, we backfill as a
best effort.
Returns:
Pagination API results
@@ -522,7 +526,7 @@ class PaginationHandler:
event_filter=event_filter,
)
if pagin_config.direction == Direction.BACKWARDS:
if backfill and pagin_config.direction == Direction.BACKWARDS:
# We use a `Set` because there can be multiple events at a given depth
# and we only care about looking at the unique continuum of depths to
# find gaps.
@@ -622,6 +626,7 @@ class PaginationHandler:
if not events:
return {
"chunk": [],
"gaps": [],
"start": await from_token.to_string(self.store),
}
@@ -641,6 +646,7 @@ class PaginationHandler:
if not events:
return {
"chunk": [],
"gaps": [],
"start": await from_token.to_string(self.store),
"end": await next_token.to_string(self.store),
}
@@ -666,6 +672,10 @@ class PaginationHandler:
events, user_id
)
gaps = await self.store.get_events_next_to_gaps(
events=events, direction=pagin_config.direction
)
time_now = self.clock.time_msec()
serialize_options = SerializeEventConfig(
@@ -681,6 +691,18 @@ class PaginationHandler:
bundle_aggregations=aggregations,
)
),
"gaps": [
{
"prev_pagination_token": await from_token.copy_and_replace(
StreamKeyType.ROOM, gap.prev_token
).to_string(self.store),
"event_id": gap.event_id,
"next_pagination_token": await from_token.copy_and_replace(
StreamKeyType.ROOM, gap.next_token
).to_string(self.store),
}
for gap in gaps
],
"start": await from_token.to_string(self.store),
"end": await next_token.to_string(self.store),
}
+15 -2
@@ -702,6 +702,10 @@ class _ByteProducer:
self._request: Optional[Request] = request
self._iterator = iterator
self._paused = False
self.tracing_scope = start_active_span(
"write_bytes_to_request",
)
self.tracing_scope.__enter__()
try:
self._request.registerProducer(self, True)
@@ -712,8 +716,8 @@ class _ByteProducer:
logger.info("Connection disconnected before response was written: %r", e)
# We drop our references to data we'll not use.
self._request = None
self._iterator = iter(())
self.tracing_scope.__exit__(type(e), None, e.__traceback__)
else:
# Start producing if `registerProducer` was successful
self.resumeProducing()
@@ -727,6 +731,9 @@ class _ByteProducer:
self._request.write(b"".join(data))
def pauseProducing(self) -> None:
opentracing_span = active_span()
if opentracing_span is not None:
opentracing_span.log_kv({"event": "producer_paused"})
self._paused = True
def resumeProducing(self) -> None:
@@ -737,6 +744,10 @@ class _ByteProducer:
self._paused = False
opentracing_span = active_span()
if opentracing_span is not None:
opentracing_span.log_kv({"event": "producer_resumed"})
# Write until there's backpressure telling us to stop.
while not self._paused:
# Get the next chunk and write it to the request.
@@ -771,6 +782,7 @@ class _ByteProducer:
def stopProducing(self) -> None:
# Clear a circular reference.
self._request = None
self.tracing_scope.__exit__(None, None, None)
def _encode_json_bytes(json_object: object) -> bytes:
@@ -913,8 +925,9 @@ def _write_bytes_to_request(request: Request, bytes_to_write: bytes) -> None:
# once (via `Request.write`) is that doing so starts the timeout for the
# next request to be received: so if it takes longer than 60s to stream back
# the response to the client, the client never gets it.
# c.f https://github.com/twisted/twisted/issues/12498
#
# The correct solution is to use a Producer; then the timeout is only
# One workaround is to use a `Producer`; then the timeout is only
# started once all of the content is sent over the TCP connection.
# To make sure we don't write all of the bytes at once we split it up into
+1 -8
@@ -56,7 +56,6 @@ from twisted.internet import defer, threads
from twisted.python.threadpool import ThreadPool
if TYPE_CHECKING:
from synapse.logging.scopecontextmanager import _LogContextScope
from synapse.types import ISynapseReactor
logger = logging.getLogger(__name__)
@@ -230,14 +229,13 @@ LoggingContextOrSentinel = Union["LoggingContext", "_Sentinel"]
class _Sentinel:
"""Sentinel to represent the root context"""
__slots__ = ["previous_context", "finished", "request", "scope", "tag"]
__slots__ = ["previous_context", "finished", "request", "tag"]
def __init__(self) -> None:
# Minimal set for compatibility with LoggingContext
self.previous_context = None
self.finished = False
self.request = None
self.scope = None
self.tag = None
def __str__(self) -> str:
@@ -290,7 +288,6 @@ class LoggingContext:
"finished",
"request",
"tag",
"scope",
]
def __init__(
@@ -311,7 +308,6 @@ class LoggingContext:
self.main_thread = get_thread_id()
self.request = None
self.tag = ""
self.scope: Optional["_LogContextScope"] = None
# keep track of whether we have hit the __exit__ block for this context
# (suggesting that the the thing that created the context thinks it should
@@ -324,9 +320,6 @@ class LoggingContext:
# we track the current request_id
self.request = self.parent_context.request
# we also track the current scope:
self.scope = self.parent_context.scope
if request is not None:
# the request param overrides the request from the parent context
self.request = request
+3 -4
@@ -251,18 +251,17 @@ class _DummyTagNames:
try:
import opentracing
import opentracing.tags
from opentracing.scope_managers.contextvars import ContextVarsScopeManager
tags = opentracing.tags
except ImportError:
opentracing = None # type: ignore[assignment]
tags = _DummyTagNames # type: ignore[assignment]
ContextVarsScopeManager = None # type: ignore
try:
from jaeger_client import Config as JaegerConfig
from synapse.logging.scopecontextmanager import LogContextScopeManager
except ImportError:
JaegerConfig = None # type: ignore
LogContextScopeManager = None # type: ignore
try:
@@ -484,7 +483,7 @@ def init_tracer(hs: "HomeServer") -> None:
config = JaegerConfig(
config=jaeger_config,
service_name=f"{hs.config.server.server_name} {instance_name_by_type}",
scope_manager=LogContextScopeManager(),
scope_manager=ContextVarsScopeManager(),
metrics_factory=PrometheusMetricsFactory(),
)
-161
@@ -1,161 +0,0 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright 2019 The Matrix.org Foundation C.I.C.
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
import logging
from typing import Optional
from opentracing import Scope, ScopeManager, Span
from synapse.logging.context import (
LoggingContext,
current_context,
nested_logging_context,
)
logger = logging.getLogger(__name__)
class LogContextScopeManager(ScopeManager):
"""
The LogContextScopeManager tracks the active scope in opentracing
by using the log contexts which are native to synapse. This is so
that the basic opentracing api can be used across twisted deferreds.
It would be nice just to use opentracing's ContextVarsScopeManager,
but currently that doesn't work due to https://twistedmatrix.com/trac/ticket/10301.
"""
def __init__(self) -> None:
pass
@property
def active(self) -> Optional[Scope]:
"""
Returns the currently active Scope which can be used to access the
currently active Scope.span.
If there is a non-null Scope, its wrapped Span
becomes an implicit parent of any newly-created Span at
Tracer.start_active_span() time.
Returns:
The Scope that is active, or None if not available.
"""
ctx = current_context()
return ctx.scope
def activate(self, span: Span, finish_on_close: bool) -> Scope:
"""
Makes a Span active.
Args:
span: the span that should become active.
finish_on_close: whether Span should be automatically finished when
Scope.close() is called.
Returns:
Scope to control the end of the active period for
*span*. It is a programming error to neglect to call
Scope.close() on the returned instance.
"""
ctx = current_context()
if not ctx:
logger.error("Tried to activate scope outside of loggingcontext")
return Scope(None, span) # type: ignore[arg-type]
if ctx.scope is not None:
# start a new logging context as a child of the existing one.
# Doing so -- rather than updating the existing logcontext -- means that
# creating several concurrent spans under the same logcontext works
# correctly.
ctx = nested_logging_context("")
enter_logcontext = True
else:
# if there is no span currently associated with the current logcontext, we
# just store the scope in it.
#
# This feels a bit dubious, but it does hack around a problem where a
# span outlasts its parent logcontext (which would otherwise lead to
# "Re-starting finished log context" errors).
enter_logcontext = False
scope = _LogContextScope(self, span, ctx, enter_logcontext, finish_on_close)
ctx.scope = scope
if enter_logcontext:
ctx.__enter__()
return scope
class _LogContextScope(Scope):
"""
A custom opentracing scope, associated with a LogContext
* When the scope is closed, the logcontext's active scope is reset to None,
and - if enter_logcontext was set - the logcontext is finished too.
"""
def __init__(
self,
manager: LogContextScopeManager,
span: Span,
logcontext: LoggingContext,
enter_logcontext: bool,
finish_on_close: bool,
):
"""
Args:
manager:
the manager that is responsible for this scope.
span:
the opentracing span which this scope represents the local
lifetime for.
logcontext:
the log context to which this scope is attached.
enter_logcontext:
if True the log context will be exited when the scope is finished
finish_on_close:
if True finish the span when the scope is closed
"""
super().__init__(manager, span)
self.logcontext = logcontext
self._finish_on_close = finish_on_close
self._enter_logcontext = enter_logcontext
def __str__(self) -> str:
return f"Scope<{self.span}>"
def close(self) -> None:
active_scope = self.manager.active
if active_scope is not self:
logger.error(
"Closing scope %s which is not the currently-active one %s",
self,
active_scope,
)
if self._finish_on_close:
self.span.finish()
self.logcontext.scope = None
if self._enter_logcontext:
self.logcontext.__exit__(None, None, None)
+2 -3
View File
@@ -49,7 +49,7 @@ from synapse.api.constants import (
from synapse.api.room_versions import PushRuleRoomFlag
from synapse.event_auth import auth_types_for_event, get_user_power_level
from synapse.events import EventBase, relation_from_event
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventContext, EventPersistencePair
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.metrics import SERVER_NAME_LABEL
from synapse.state import CREATE_KEY, POWER_KEY
@@ -352,7 +352,7 @@ class BulkPushRuleEvaluator:
return related_events
async def action_for_events_by_user(
self, events_and_context: List[Tuple[EventBase, EventContext]]
self, events_and_context: List[EventPersistencePair]
) -> None:
"""Given a list of events and their associated contexts, evaluate the push rules
for each event, check if the message should increment the unread count, and
@@ -479,7 +479,6 @@ class BulkPushRuleEvaluator:
self.hs.config.experimental.msc1767_enabled, # MSC3931 flag
self.hs.config.experimental.msc4210_enabled,
self.hs.config.experimental.msc4306_enabled,
False,
)
msc4306_thread_subscribers: Optional[FrozenSet[str]] = None
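The diffs above and below repeatedly swap `List[Tuple[EventBase, EventContext]]` for `List[EventPersistencePair]`. The alias's definition isn't shown in this compare view; given how call sites still unpack `(event, context)` pairs, it is presumably something along these lines (an assumption, not the actual definition):
```
from typing import Tuple

from synapse.events import EventBase
from synapse.events.snapshot import EventContext

# Assumed definition, inferred from how the alias is used in these diffs:
EventPersistencePair = Tuple[EventBase, EventContext]
```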
+3 -3
View File
@@ -24,8 +24,8 @@ from typing import TYPE_CHECKING, List, Tuple
from twisted.web.server import Request
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion
from synapse.events import EventBase, make_event_from_dict
from synapse.events.snapshot import EventContext
from synapse.events import make_event_from_dict
from synapse.events.snapshot import EventContext, EventPersistencePair
from synapse.http.server import HttpServer
from synapse.replication.http._base import ReplicationEndpoint
from synapse.types import JsonDict
@@ -86,7 +86,7 @@ class ReplicationFederationSendEventsRestServlet(ReplicationEndpoint):
async def _serialize_payload( # type: ignore[override]
store: "DataStore",
room_id: str,
event_and_contexts: List[Tuple[EventBase, EventContext]],
event_and_contexts: List[EventPersistencePair],
backfilled: bool,
) -> JsonDict:
"""
+3 -3
View File
@@ -25,8 +25,8 @@ from typing import TYPE_CHECKING, List, Tuple
from twisted.web.server import Request
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase, make_event_from_dict
from synapse.events.snapshot import EventContext
from synapse.events import make_event_from_dict
from synapse.events.snapshot import EventContext, EventPersistencePair
from synapse.http.server import HttpServer
from synapse.replication.http._base import ReplicationEndpoint
from synapse.types import JsonDict, Requester, UserID
@@ -85,7 +85,7 @@ class ReplicationSendEventsRestServlet(ReplicationEndpoint):
@staticmethod
async def _serialize_payload( # type: ignore[override]
events_and_context: List[Tuple[EventBase, EventContext]],
events_and_context: List[EventPersistencePair],
store: "DataStore",
requester: Requester,
ratelimit: bool,
+14
View File
@@ -811,6 +811,17 @@ class RoomMessageListRestServlet(RestServlet):
async def on_GET(
self, request: SynapseRequest, room_id: str
) -> Tuple[int, JsonDict]:
"""
Query parameters:
dir
from
to
limit
filter
backfill: If false, we skip backfilling altogether and return only local
history. When true (the default), we backfill as a best effort (see the
example request after this hunk).
"""
processing_start_time = self.clock.time_msec()
# Fire off and hope that we get a result by the end.
#
@@ -840,12 +851,15 @@ class RoomMessageListRestServlet(RestServlet):
):
as_client_event = False
backfill = parse_boolean(request, "backfill", default=True)
msgs = await self.pagination_handler.get_messages(
room_id=room_id,
requester=requester,
pagin_config=pagination_config,
as_client_event=as_client_event,
event_filter=event_filter,
backfill=backfill,
)
processing_end_time = self.clock.time_msec()
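For illustration, a client could exercise the new query parameter like this (the homeserver URL, room ID, and token are placeholders; this is not part of the change itself):
```
import requests  # hypothetical client-side example, not part of Synapse

# Fast back-pagination: skip backfill and return local history with gaps.
resp = requests.get(
    "https://example.com/_matrix/client/v3/rooms/!room:example.com/messages",
    params={"dir": "b", "backfill": "false", "limit": "50"},
    headers={"Authorization": "Bearer <access_token>"},
)
body = resp.json()
# Any gaps (per MSC3871) can be filled later with `backfill=true` requests.
print(body.get("chunk", []), body.get("gaps", []))
```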
+10 -10
View File
@@ -51,7 +51,7 @@ from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventContext, EventPersistencePair
from synapse.handlers.worker_lock import NEW_EVENT_DURING_PURGE_LOCK_NAME
from synapse.logging.context import PreserveLoggingContext, make_deferred_yieldable
from synapse.logging.opentracing import (
@@ -144,7 +144,7 @@ class _PersistEventsTask:
name: ClassVar[str] = "persist_event_batch" # used for opentracing
events_and_contexts: List[Tuple[EventBase, EventContext]]
events_and_contexts: List[EventPersistencePair]
backfilled: bool
def try_merge(self, task: "_EventPersistQueueTask") -> bool:
@@ -391,7 +391,7 @@ class EventsPersistenceStorageController:
@trace
async def persist_events(
self,
events_and_contexts: Iterable[Tuple[EventBase, EventContext]],
events_and_contexts: Iterable[EventPersistencePair],
backfilled: bool = False,
) -> Tuple[List[EventBase], RoomStreamToken]:
"""
@@ -414,7 +414,7 @@ class EventsPersistenceStorageController:
a room that has been un-partial stated.
"""
event_ids: List[str] = []
partitioned: Dict[str, List[Tuple[EventBase, EventContext]]] = {}
partitioned: Dict[str, List[EventPersistencePair]] = {}
for event, ctx in events_and_contexts:
partitioned.setdefault(event.room_id, []).append((event, ctx))
event_ids.append(event.event_id)
@@ -430,7 +430,7 @@ class EventsPersistenceStorageController:
set_tag(SynapseTags.FUNC_ARG_PREFIX + "backfilled", str(backfilled))
async def enqueue(
item: Tuple[str, List[Tuple[EventBase, EventContext]]],
item: Tuple[str, List[EventPersistencePair]],
) -> Dict[str, str]:
room_id, evs_ctxs = item
return await self._event_persist_queue.add_to_queue(
@@ -677,7 +677,7 @@ class EventsPersistenceStorageController:
return replaced_events
async def _calculate_new_forward_extremities_and_state_delta(
self, room_id: str, ev_ctx_rm: List[Tuple[EventBase, EventContext]]
self, room_id: str, ev_ctx_rm: List[EventPersistencePair]
) -> Tuple[Optional[Set[str]], Optional[DeltaState]]:
"""Calculates the new forward extremities and state delta for a room
given events to persist.
@@ -802,7 +802,7 @@ class EventsPersistenceStorageController:
async def _calculate_new_extremities(
self,
room_id: str,
event_contexts: List[Tuple[EventBase, EventContext]],
event_contexts: List[EventPersistencePair],
latest_event_ids: AbstractSet[str],
) -> Set[str]:
"""Calculates the new forward extremities for a room given events to
@@ -862,7 +862,7 @@ class EventsPersistenceStorageController:
async def _get_new_state_after_events(
self,
room_id: str,
events_context: List[Tuple[EventBase, EventContext]],
events_context: List[EventPersistencePair],
old_latest_event_ids: AbstractSet[str],
new_latest_event_ids: Set[str],
) -> Tuple[Optional[StateMap[str]], Optional[StateMap[str]], Set[str]]:
@@ -1039,7 +1039,7 @@ class EventsPersistenceStorageController:
new_latest_event_ids: Set[str],
resolved_state_group: int,
event_id_to_state_group: Dict[str, int],
events_context: List[Tuple[EventBase, EventContext]],
events_context: List[EventPersistencePair],
) -> Set[str]:
"""See if we can prune any of the extremities after calculating the
resolved state.
@@ -1176,7 +1176,7 @@ class EventsPersistenceStorageController:
async def _is_server_still_joined(
self,
room_id: str,
ev_ctx_rm: List[Tuple[EventBase, EventContext]],
ev_ctx_rm: List[EventPersistencePair],
delta: DeltaState,
) -> bool:
"""Check if the server will still be joined after the given events have
@@ -1212,6 +1212,30 @@ class EventFederationWorkerStore(
equal to the `current_depth`. Sorted by depth, highest to lowest (descending)
so the closest events to the `current_depth` are first in the list.
Note: We can only do approximate depth comparisons. Backwards extremities are
the oldest events we know of in the room, but we only know of them because
other events reference them via `prev_events`; the extremities themselves
aren't persisted in our database yet (meaning we don't know their depth
specifically). So we need to take the approximate depth from the events
connected to the current backwards extremities.
It's best to pad the `current_depth` by the number of messages you plan to
backfill from these points.
Example:
 - Your pagination token represents a scroll position at a `depth` of `100`.
 - We have a backfill point at an approximate depth of `125`.
 - You plan to backfill `50` events from that backfill point.
When we pad our `current_depth` (`100 + 50 = 150`), we pick up the backfill
point at `125` (because `125 <= 150`, our padded `current_depth`) and backfill
`50` events to a depth of `75` in the timeline (exposing new events that we can
return from `100` down to `75`).
When we don't pad our `current_depth`, `100` is lower than all of the backfill
points, so we pick none and miss out on backfilling any events (see the sketch
after this hunk).
We ignore extremities that are newer than the user's current scroll position
(ie, those with depth greater than `current_depth`) as:
1. we don't really care about getting events that have happened
@@ -1223,7 +1247,7 @@ class EventFederationWorkerStore(
Args:
room_id: Room where we want to find the oldest events
current_depth: The depth at the user's current scrollback position
current_depth: The depth at the user's current scrollback position (see notes above).
limit: The max number of backfill points to return
Returns:
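A short sketch of the padding arithmetic from the docstring above, using the example's numbers (variable names are illustrative):
```
current_depth = 100  # depth at the user's current scroll position
limit = 50           # events we plan to backfill from each point

# Pad by the number of events we plan to backfill:
padded_depth = current_depth + limit  # 150

# Backfill points are compared approximately against the padded depth:
backfill_point_depths = [125, 200]
picked = [depth for depth in backfill_point_depths if depth <= padded_depth]
assert picked == [125]  # without padding (<= 100), nothing would be picked
```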
+21 -23
View File
@@ -57,7 +57,7 @@ from synapse.events import (
is_creator,
relation_from_event,
)
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventPersistencePair
from synapse.events.utils import parse_stripped_state_event
from synapse.logging.opentracing import trace
from synapse.metrics import SERVER_NAME_LABEL
@@ -274,7 +274,7 @@ class PersistEventsStore:
async def _persist_events_and_state_updates(
self,
room_id: str,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
*,
state_delta_for_room: Optional[DeltaState],
new_forward_extremities: Optional[Set[str]],
@@ -532,7 +532,7 @@ class PersistEventsStore:
async def _calculate_sliding_sync_table_changes(
self,
room_id: str,
events_and_contexts: Sequence[Tuple[EventBase, EventContext]],
events_and_contexts: Sequence[EventPersistencePair],
delta_state: DeltaState,
) -> SlidingSyncTableChanges:
"""
@@ -1016,7 +1016,7 @@ class PersistEventsStore:
txn: LoggingTransaction,
*,
room_id: str,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
inhibit_local_membership_updates: bool,
state_delta_for_room: Optional[DeltaState],
new_forward_extremities: Optional[Set[str]],
@@ -1666,7 +1666,7 @@ class PersistEventsStore:
def _persist_transaction_ids_txn(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
) -> None:
"""Persist the mapping from transaction IDs to event IDs (if defined)."""
@@ -2316,7 +2316,7 @@ class PersistEventsStore:
self,
txn: LoggingTransaction,
room_id: str,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
) -> None:
"""
Update the latest `event_stream_ordering`/`bump_stamp` columns in the
@@ -2456,8 +2456,8 @@ class PersistEventsStore:
@classmethod
def _filter_events_and_contexts_for_duplicates(
cls, events_and_contexts: List[Tuple[EventBase, EventContext]]
) -> List[Tuple[EventBase, EventContext]]:
cls, events_and_contexts: List[EventPersistencePair]
) -> List[EventPersistencePair]:
"""Ensure that we don't have the same event twice.
Pick the earliest non-outlier if there is one, else the earliest one.
@@ -2468,9 +2468,7 @@ class PersistEventsStore:
Returns:
filtered list
"""
new_events_and_contexts: OrderedDict[str, Tuple[EventBase, EventContext]] = (
OrderedDict()
)
new_events_and_contexts: OrderedDict[str, EventPersistencePair] = OrderedDict()
for event, context in events_and_contexts:
prev_event_context = new_events_and_contexts.get(event.event_id)
if prev_event_context:
@@ -2488,7 +2486,7 @@ class PersistEventsStore:
self,
txn: LoggingTransaction,
room_id: str,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
) -> None:
"""Update min_depth for each room
@@ -2530,8 +2528,8 @@ class PersistEventsStore:
def _update_outliers_txn(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
) -> List[Tuple[EventBase, EventContext]]:
events_and_contexts: List[EventPersistencePair],
) -> List[EventPersistencePair]:
"""Update any outliers with new event info.
This turns outliers into ex-outliers (unless the new event was rejected), and
@@ -2638,7 +2636,7 @@ class PersistEventsStore:
def _store_event_txn(
self,
txn: LoggingTransaction,
events_and_contexts: Collection[Tuple[EventBase, EventContext]],
events_and_contexts: Collection[EventPersistencePair],
) -> None:
"""Insert new events into the event, event_json, redaction and
state_events tables.
@@ -2742,8 +2740,8 @@ class PersistEventsStore:
def _store_rejected_events_txn(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
) -> List[Tuple[EventBase, EventContext]]:
events_and_contexts: List[EventPersistencePair],
) -> List[EventPersistencePair]:
"""Add rows to the 'rejections' table for received events which were
rejected
@@ -2770,8 +2768,8 @@ class PersistEventsStore:
self,
txn: LoggingTransaction,
*,
events_and_contexts: List[Tuple[EventBase, EventContext]],
all_events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
all_events_and_contexts: List[EventPersistencePair],
inhibit_local_membership_updates: bool = False,
) -> None:
"""Update all the miscellaneous tables for new events
@@ -2865,7 +2863,7 @@ class PersistEventsStore:
def _add_to_cache(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
) -> None:
to_prefill: List[EventCacheEntry] = []
@@ -3338,8 +3336,8 @@ class PersistEventsStore:
def _set_push_actions_for_event_and_users_txn(
self,
txn: LoggingTransaction,
events_and_contexts: List[Tuple[EventBase, EventContext]],
all_events_and_contexts: List[Tuple[EventBase, EventContext]],
events_and_contexts: List[EventPersistencePair],
all_events_and_contexts: List[EventPersistencePair],
) -> None:
"""Handles moving push actions from staging table to main
event_push_actions table for all events in `events_and_contexts`.
@@ -3422,7 +3420,7 @@ class PersistEventsStore:
def _store_event_state_mappings_txn(
self,
txn: LoggingTransaction,
events_and_contexts: Collection[Tuple[EventBase, EventContext]],
events_and_contexts: Collection[EventPersistencePair],
) -> None:
"""
Raises:
+158 -45
View File
@@ -34,6 +34,7 @@ from typing import (
Mapping,
MutableMapping,
Optional,
Sequence,
Set,
Tuple,
cast,
@@ -42,6 +43,7 @@ from typing import (
import attr
from prometheus_client import Gauge
from typing_extensions import assert_never
from twisted.internet import defer
@@ -81,14 +83,19 @@ from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
LoggingTransaction,
make_tuple_in_list_sql_clause,
)
from synapse.storage.types import Cursor
from synapse.storage.util.id_generators import (
AbstractStreamIdGenerator,
MultiWriterIdGenerator,
)
from synapse.storage.util.sequence import build_sequence_generator
from synapse.types import JsonDict, get_domain_from_id
from synapse.types import JsonDict, RoomStreamToken, get_domain_from_id
from synapse.types.state import StateFilter
from synapse.types.storage import _BackgroundUpdates
from synapse.util import unwrapFirstError
@@ -99,6 +106,7 @@ from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.cancellation import cancellable
from synapse.util.iterutils import batch_iter
from synapse.util.metrics import Measure
from synapse.util.tokens import generate_next_token
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -213,6 +221,34 @@ class EventRedactBehaviour(Enum):
block = auto()
@attr.s(slots=True, frozen=True, auto_attribs=True)
class EventGapEntry:
"""
Represents a gap in the timeline.
From MSC3871: Gappy timeline
"""
event_id: str
"""
The target event ID which we see a gap before or after.
"""
prev_token: RoomStreamToken
"""
The token position before the target `event_id`
Remember: tokens are positions between events
"""
next_token: RoomStreamToken
"""
The token position after the target `event_id`
Remember: tokens are positions between events
"""
class EventsWorkerStore(SQLBaseStore):
# Whether to use dedicated DB threads for event fetching. This is only used
# if there are multiple DB threads available. When used will lock the DB
@@ -1337,6 +1373,7 @@ class EventsWorkerStore(SQLBaseStore):
fetched_event_ids: Set[str] = set()
fetched_events: Dict[str, _EventRow] = {}
@trace
async def _fetch_event_ids_and_get_outstanding_redactions(
event_ids_to_fetch: Collection[str],
) -> Collection[str]:
@@ -1344,6 +1381,10 @@ class EventsWorkerStore(SQLBaseStore):
Fetch all of the given event_ids and return any associated redaction event_ids
that we still need to fetch in the next iteration.
"""
set_tag(
SynapseTags.FUNC_ARG_PREFIX + "event_ids_to_fetch.length",
str(len(event_ids_to_fetch)),
)
row_map = await self._enqueue_events(event_ids_to_fetch)
# we need to recursively fetch any redactions of those events
@@ -1617,21 +1658,28 @@ class EventsWorkerStore(SQLBaseStore):
# likely that some of these events may be for the same room/user combo, in
# which case we don't need to do redundant queries
to_check_set = set(to_check)
for room_and_user in to_check_set:
room_redactions_sql = "SELECT redacting_event_id, redact_end_ordering FROM room_ban_redactions WHERE room_id = ? and user_id = ?"
txn.execute(room_redactions_sql, room_and_user)
res = txn.fetchone()
# we have a redaction for a room, user_id combo - apply it to matching events
if not res:
continue
room_redaction_sql = "SELECT room_id, user_id, redacting_event_id, redact_end_ordering FROM room_ban_redactions WHERE "
(
in_list_clause,
room_redaction_args,
) = make_tuple_in_list_sql_clause(
self.database_engine, ("room_id", "user_id"), to_check_set
)
txn.execute(room_redaction_sql + in_list_clause, room_redaction_args)
for (
returned_room_id,
returned_user_id,
redacting_event_id,
redact_end_ordering,
) in txn:
for e_row in events:
e_json = json.loads(e_row.json)
room_id = e_json.get("room_id")
user_id = e_json.get("sender")
room_and_user = (returned_room_id, returned_user_id)
# check if we have a redaction match for this room, user combination
if room_and_user != (room_id, user_id):
continue
redacting_event_id, redact_end_ordering = res
if redact_end_ordering:
# Avoid redacting any events arriving *after* the membership event which
# ends an active redaction - note that this will always redact
@@ -2302,15 +2350,24 @@ class EventsWorkerStore(SQLBaseStore):
is_event_next_to_backward_gap_txn,
)
async def is_event_next_to_forward_gap(self, event: EventBase) -> bool:
"""Check if the given event is next to a forward gap of missing events.
The gap in front of the latest events is not considered a gap.
async def is_event_next_to_forward_gap(
self, event: EventBase, *, ignore_gap_after_latest: bool = True
) -> bool:
"""
Check if the given event is next to a forward gap of missing events.
By default when `ignore_gap_after_latest = True`, the gap in front of the
latest events is not considered a gap.
<latest messages> A(False)--->B(False)--->C(False)---> <gap, unknown events> <oldest messages>
<latest messages> A(False)--->B(False)---> <gap, unknown events> --->D(True)--->E(False) <oldest messages>
When `ignore_gap_after_latest = False`, `A` would be considered next to a gap.
Args:
room_id: room where the event lives
event: event to check (can't be an `outlier`)
ignore_gap_after_latest: Whether the gap after the latest events (forward
extremities) in the room should be considered as an actual gap.
Returns:
Boolean indicating whether it's an extremity
@@ -2322,38 +2379,39 @@ class EventsWorkerStore(SQLBaseStore):
)
def is_event_next_to_gap_txn(txn: LoggingTransaction) -> bool:
# If the event in question is a forward extremity, we will just
# consider any potential forward gap as not a gap since it's one of
# the latest events in the room.
#
# `event_forward_extremities` does not include backfilled or outlier
# events so we can't rely on it to find forward gaps. We can only
# use it to determine whether a message is the latest in the room.
#
# We can't combine this query with the `forward_edge_query` below
# because if the event in question has no forward edges (isn't
# referenced by any other event's prev_events) but is in
# `event_forward_extremities`, we don't want to return 0 rows and
# say it's next to a gap.
forward_extremity_query = """
SELECT 1 FROM event_forward_extremities
WHERE
room_id = ?
AND event_id = ?
LIMIT 1
"""
if ignore_gap_after_latest:
# If the event in question is a forward extremity, we will just
# consider any potential forward gap as not a gap since it's one of
# the latest events in the room.
#
# `event_forward_extremities` does not include backfilled or outlier
# events so we can't rely on it to find forward gaps. We can only
# use it to determine whether a message is the latest in the room.
#
# We can't combine this query with the `forward_edge_query` below
# because if the event in question has no forward edges (isn't
# referenced by any other event's prev_events) but is in
# `event_forward_extremities`, we don't want to return 0 rows and
# say it's next to a gap.
forward_extremity_query = """
SELECT 1 FROM event_forward_extremities
WHERE
room_id = ?
AND event_id = ?
LIMIT 1
"""
# We consider any forward extremity as the latest in the room and
# not a forward gap.
#
# To expand, even though there is technically a gap at the front of
# the room where the forward extremities are, we consider those the
# latest messages in the room so asking other homeservers for more
# is useless. The new latest messages will just be federated as
# usual.
txn.execute(forward_extremity_query, (event.room_id, event.event_id))
if txn.fetchone():
return False
# We consider any forward extremity as the latest in the room and
# not a forward gap.
#
# To expand, even though there is technically a gap at the front of
# the room where the forward extremities are, we consider those the
# latest messages in the room so asking other homeservers for more
# is useless. The new latest messages will just be federated as
# usual.
txn.execute(forward_extremity_query, (event.room_id, event.event_id))
if txn.fetchone():
return False
# Check to see whether the event in question is already referenced
# by another event. If we don't see any edges, we're next to a
@@ -2385,6 +2443,61 @@ class EventsWorkerStore(SQLBaseStore):
is_event_next_to_gap_txn,
)
async def get_events_next_to_gaps(
self, events: Sequence[EventBase], direction: Direction
) -> Sequence[EventGapEntry]:
"""
Find all of the events that have gaps next to them.
When going backwards, we look for backward gaps (i.e. missing prev_events).
When going forwards, we look for forward gaps (i.e. events that aren't
referenced by any other events).
Args:
events: topological ordered list of events
direction: which side of the events to check for gaps. This should match the
direction we're paginating in.
"""
gaps = []
for event in events:
# FIXME: We should use a bulk look-up instead of N+1 queries.
if direction == Direction.BACKWARDS:
is_next_to_gap = await self.is_event_next_to_backward_gap(event)
elif direction == Direction.FORWARDS:
is_next_to_gap = await self.is_event_next_to_forward_gap(
event, ignore_gap_after_latest=False
)
else:
assert_never(direction)
if not is_next_to_gap:
continue
stream_ordering = event.internal_metadata.stream_ordering
assert stream_ordering is not None, (
"persisted events should have stream_ordering"
)
gaps.append(
EventGapEntry(
prev_token=generate_next_token(
direction=Direction.BACKWARDS,
last_topo_ordering=event.depth,
last_stream_ordering=stream_ordering,
),
event_id=event.event_id,
next_token=generate_next_token(
direction=Direction.FORWARDS,
last_topo_ordering=event.depth,
last_stream_ordering=stream_ordering,
),
)
)
return gaps
async def get_event_id_for_timestamp(
self, room_id: str, timestamp: int, direction: Direction
) -> Optional[str]:
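An illustrative call site for `get_events_next_to_gaps` (an assumption, not code from this branch), where `store` is an `EventsWorkerStore` and `events` is a topologically ordered list of persisted events:
```
from synapse.api.constants import Direction

async def collect_backward_gaps(store, events):
    # Sketch: gather the gap entries while back-paginating so the response
    # can advertise where the timeline is missing events.
    gaps = await store.get_events_next_to_gaps(events, Direction.BACKWARDS)
    return {gap.event_id: (gap.prev_token, gap.next_token) for gap in gaps}
```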
+1 -1
View File
@@ -48,13 +48,13 @@ from synapse.storage.database import (
make_in_list_sql_clause,
)
from synapse.storage.databases.main.stream import (
generate_next_token,
generate_pagination_bounds,
generate_pagination_where_clause,
)
from synapse.storage.engines import PostgresEngine
from synapse.types import JsonDict, MultiWriterStreamToken, StreamKeyType, StreamToken
from synapse.util.caches.descriptors import cached, cachedList
from synapse.util.tokens import generate_next_token
if TYPE_CHECKING:
from synapse.server import HomeServer
+1 -24
View File
@@ -86,6 +86,7 @@ from synapse.util.caches.descriptors import cached, cachedList
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.cancellation import cancellable
from synapse.util.iterutils import batch_iter
from synapse.util.tokens import generate_next_token
if TYPE_CHECKING:
from synapse.server import HomeServer
@@ -294,30 +295,6 @@ def generate_pagination_bounds(
return order, from_bound, to_bound
def generate_next_token(
direction: Direction, last_topo_ordering: Optional[int], last_stream_ordering: int
) -> RoomStreamToken:
"""
Generate the next room stream token based on the currently returned data.
Args:
direction: Whether pagination is going forwards or backwards.
last_topo_ordering: The last topological ordering being returned.
last_stream_ordering: The last stream ordering being returned.
Returns:
A new RoomStreamToken to return to the client.
"""
if direction == Direction.BACKWARDS:
# Tokens are positions between events.
# This token points *after* the last event in the chunk.
# We need it to point to the event before it in the chunk
# when we are going backwards so we subtract one from the
# stream part.
last_stream_ordering -= 1
return RoomStreamToken(topological=last_topo_ordering, stream=last_stream_ordering)
def _make_generic_sql_bound(
bound: str,
column_names: Tuple[str, str],
+2 -3
View File
@@ -25,8 +25,7 @@ from typing import (
Tuple,
)
from synapse.events import EventBase
from synapse.events.snapshot import EventContext
from synapse.events.snapshot import EventPersistencePair
from synapse.storage.database import (
DatabasePool,
LoggingDatabaseConnection,
@@ -228,7 +227,7 @@ class StateDeletionDataStore:
@contextlib.asynccontextmanager
async def persisting_state_group_references(
self, event_and_contexts: Collection[Tuple[EventBase, EventContext]]
self, event_and_contexts: Collection[EventPersistencePair]
) -> AsyncIterator[None]:
"""Wraps the persistence of the given events and contexts, ensuring that
any state groups referenced still exist and that they don't get deleted
+9 -11
View File
@@ -604,7 +604,7 @@ class AbstractMultiWriterStreamToken(metaclass=abc.ABCMeta):
return self.instance_map.get(instance_name, self.stream)
def is_before_or_eq(self, other_token: Self) -> bool:
"""Wether this token is before the other token, i.e. every constituent
"""Whether this token is before the other token, i.e. every constituent
part is before the other.
Essentially it is `self <= other`.
@@ -694,7 +694,7 @@ class RoomStreamToken(AbstractMultiWriterStreamToken):
---
Historic tokens start with a "t" followed by the `depth`
Historical tokens start with a "t" followed by the `depth`
(`topological_ordering` in the event graph) of the event that comes before
the position of the token, followed by "-", followed by the
`stream_ordering` of the event that comes before the position of the token.
@@ -827,17 +827,15 @@ class RoomStreamToken(AbstractMultiWriterStreamToken):
return self.topological, self.stream
def get_stream_pos_for_instance(self, instance_name: str) -> int:
"""Get the stream position that the given writer was at at this token.
This only makes sense for "live" tokens that may have a vector clock
component, and so asserts that this is a "live" token.
"""
assert self.topological is None
# If we don't have an entry for the instance we can assume that it was
# at `self.stream`.
return self.instance_map.get(instance_name, self.stream)
def is_before_or_eq(self, other_token: Self) -> bool:
is_before_or_eq_stream_ordering = super().is_before_or_eq(other_token)
if not is_before_or_eq_stream_ordering:
return False
if self.topological is not None and other_token.topological is not None:
return self.topological <= other_token.topological
return True
async def to_string(self, store: "DataStore") -> str:
"""See class level docstring for information about the format."""
+47
View File
@@ -0,0 +1,47 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
from typing import Optional
from synapse.api.constants import Direction
from synapse.types import RoomStreamToken
def generate_next_token(
direction: Direction, last_topo_ordering: Optional[int], last_stream_ordering: int
) -> RoomStreamToken:
"""
Generate the next room stream token based on the currently returned data.
Args:
direction: Whether pagination is going forwards or backwards.
last_topo_ordering: The last topological ordering being returned.
last_stream_ordering: The last stream ordering being returned.
Returns:
A new RoomStreamToken to return to the client.
"""
if direction == Direction.BACKWARDS:
# Tokens are positions between events.
# This token points *after* the last event in the chunk.
# We need it to point to the event before it in the chunk
# when we are going backwards so we subtract one from the
# stream part.
last_stream_ordering -= 1
# TODO: Is this okay to do? Kinda seems more correct
if last_topo_ordering is not None:
last_topo_ordering -= 1
return RoomStreamToken(topological=last_topo_ordering, stream=last_stream_ordering)
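A worked example of the backwards case (arbitrary values):
```
from synapse.api.constants import Direction

# Paginating backwards after returning an event at depth 100 / stream 50:
token = generate_next_token(
    direction=Direction.BACKWARDS,
    last_topo_ordering=100,
    last_stream_ordering=50,
)
# Both parts are decremented so the token points *before* that event:
assert token.topological == 99 and token.stream == 49
```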
+139 -7
View File
@@ -19,7 +19,7 @@
#
#
from typing import Awaitable, cast
from typing import Awaitable, Dict, cast
from twisted.internet import defer
from twisted.internet.testing import MemoryReactorClock
@@ -38,9 +38,11 @@ from synapse.logging.opentracing import (
from synapse.util import Clock
try:
from synapse.logging.scopecontextmanager import LogContextScopeManager
import opentracing
from opentracing.scope_managers.contextvars import ContextVarsScopeManager
except ImportError:
LogContextScopeManager = None # type: ignore
opentracing = None # type: ignore
ContextVarsScopeManager = None # type: ignore
try:
import jaeger_client
@@ -54,9 +56,10 @@ from tests.unittest import TestCase
logger = logging.getLogger(__name__)
class LogContextScopeManagerTestCase(TestCase):
class TracingScopeTestCase(TestCase):
"""
Test logging contexts and active opentracing spans.
Test that our tracing machinery works well in a variety of situations (especially
with Twisted's runtime and deferreds).
There are casts throughout this file from generic opentracing objects (e.g.
opentracing.Span) to the ones specific to Jaeger since they have additional
@@ -64,7 +67,7 @@ class LogContextScopeManagerTestCase(TestCase):
opentracing backend is Jaeger.
"""
if LogContextScopeManager is None:
if opentracing is None:
skip = "Requires opentracing" # type: ignore[unreachable]
if jaeger_client is None:
skip = "Requires jaeger_client" # type: ignore[unreachable]
@@ -74,7 +77,7 @@ class LogContextScopeManagerTestCase(TestCase):
# global variables that power opentracing. We create our own tracer instance
# and test with it.
scope_manager = LogContextScopeManager()
scope_manager = ContextVarsScopeManager()
config = jaeger_client.config.Config(
config={}, service_name="test", scope_manager=scope_manager
)
@@ -208,6 +211,135 @@ class LogContextScopeManagerTestCase(TestCase):
[scopes[1].span, scopes[2].span, scopes[0].span],
)
def test_run_in_background_active_scope_still_available(self) -> None:
"""
Test that tasks running via `run_in_background` still have access to the
active tracing scope.
This is a regression test for a previous Synapse issue where the tracing scope
would `__exit__` and close before the `run_in_background` task completed; our
previous custom `_LogContextScope.close(...)` would then clear
`LoggingContext.scope`, preventing further tracing spans from having the
correct parent.
"""
reactor = MemoryReactorClock()
clock = Clock(reactor)
scope_map: Dict[str, opentracing.Scope] = {}
async def async_task() -> None:
root_scope = scope_map["root"]
root_context = cast(jaeger_client.SpanContext, root_scope.span.context)
self.assertEqual(
self._tracer.active_span,
root_scope.span,
"expected to inherit the root tracing scope from where this was run",
)
# Return control back to the reactor thread and wait an arbitrary amount
await clock.sleep(4)
# This is a key part of what we're testing! In a previous version of
# Synapse, we would lose the active span at this point.
self.assertEqual(
self._tracer.active_span,
root_scope.span,
"expected to still have a root tracing scope/span active",
)
# For completeness' sake, let's also trace more sub-tasks here and assert
# that they have the correct span parents as well (the root).
# Start tracing some other sub-task.
#
# This is a key part of what we're testing! In a previous version of
# Synapse, it would have the incorrect span parents.
scope = start_active_span(
"task1",
tracer=self._tracer,
)
scope_map["task1"] = scope
# Ensure the span parent is pointing to the root scope
context = cast(jaeger_client.SpanContext, scope.span.context)
self.assertEqual(
context.parent_id,
root_context.span_id,
"expected task1 parent to be the root span",
)
# Ensure that the active span is our new sub-task now
self.assertEqual(self._tracer.active_span, scope.span)
# Return control back to the reactor thread and wait an arbitrary amount
await clock.sleep(4)
# We should still see the active span as the scope wasn't closed yet
self.assertEqual(self._tracer.active_span, scope.span)
scope.close()
async def root() -> None:
with start_active_span(
"root span",
tracer=self._tracer,
# We will close this off later. We're basically just mimicking the same
# pattern for how we handle requests. We pass the span off to the
# request for it to finish.
finish_on_close=False,
) as root_scope:
scope_map["root"] = root_scope
self.assertEqual(self._tracer.active_span, root_scope.span)
# Fire-and-forget a task
#
# XXX: The root scope context manager will `__exit__` before this task
# completes.
run_in_background(async_task)
# Because we used `run_in_background`, the active span should still be
# the root.
self.assertEqual(self._tracer.active_span, root_scope.span)
# We shouldn't see any active spans outside of the scope
self.assertIsNone(self._tracer.active_span)
with LoggingContext("root context"):
# Start the test off
d_root = defer.ensureDeferred(root())
# Let the tasks complete
reactor.pump((2,) * 8)
self.successResultOf(d_root)
# After we see all of the tasks are done (like a request when it
# `_finished_processing`), let's finish our root span
scope_map["root"].span.finish()
# Sanity check again: we shouldn't see any active spans left over in
# this context.
self.assertIsNone(self._tracer.active_span)
# The spans should be reported in the order they finished: task1, then
# root.
#
# We use `assertIncludes` just as an easier way to see if items are missing or
# added. We assert the order just below
self.assertIncludes(
set(self._reporter.get_spans()),
{
scope_map["task1"].span,
scope_map["root"].span,
},
exact=True,
)
# This is where we actually assert the correct order
self.assertEqual(
self._reporter.get_spans(),
[
scope_map["task1"].span,
scope_map["root"].span,
],
)
def test_trace_decorator_sync(self) -> None:
"""
Test whether we can use `@trace_with_opname` (`@trace`) and `@tag_args`
+207 -3
View File
@@ -24,6 +24,7 @@
"""Tests REST events for /rooms paths."""
import json
import logging
from http import HTTPStatus
from typing import Any, Dict, Iterable, List, Literal, Optional, Tuple, Union
from unittest.mock import AsyncMock, Mock, call, patch
@@ -59,7 +60,14 @@ from synapse.rest.client import (
sync,
)
from synapse.server import HomeServer
from synapse.types import JsonDict, RoomAlias, UserID, create_requester
from synapse.types import (
JsonDict,
RoomAlias,
StreamKeyType,
StreamToken,
UserID,
create_requester,
)
from synapse.util import Clock
from synapse.util.stringutils import random_string
@@ -70,6 +78,8 @@ from tests.test_utils.event_injection import create_event
from tests.unittest import override_config
from tests.utils import default_config
logger = logging.getLogger(__name__)
PATH_PREFIX = b"/_matrix/client/api/v1"
@@ -1739,8 +1749,8 @@ class RoomJoinRatelimitTestCase(RoomBase):
self.assertEqual(len(rooms), 4)
class RoomMessagesTestCase(RoomBase):
"""Tests /rooms/$room_id/messages/$user_id/$msg_id REST events."""
class RoomSendMessagesTestCase(RoomBase):
"""Tests /rooms/{roomId}/send/{eventType}/{txnId} REST events."""
user_id = "@sid1:red"
@@ -2242,6 +2252,11 @@ class RoomMessageListTestCase(RoomBase):
user_id = "@sid1:red"
def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
self.store = self.hs.get_datastores().main
persistence = self.hs.get_storage_controllers().persistence
assert persistence is not None
self.persistence = persistence
self.room_id = self.helper.create_room_as(self.user_id)
def test_topo_token_is_accepted(self) -> None:
@@ -2371,6 +2386,195 @@ class RoomMessageListTestCase(RoomBase):
channel.json_body["errcode"], Codes.NOT_JSON, channel.json_body
)
def _setup_gappy_timeline(self) -> Tuple[Dict[str, str], Dict[str, str]]:
"""
Set up a gappy timeline for testing.
We create a chain of events but only persist every other event so we have gaps
everywhere.
(`p` means the event was persisted and known to this local server)
```
               p             p             p                  p                  p
old history <- foo <- bar <- baz <- qux <- corge <- grault <- garply <- waldo <- fred
```
We also have some <primordial events from room creation> persisted at the
beginning of the room, but that's just a quirk of how we set this test fixture
up. The "old history" event represents the point we've actually back-paginated
to so far from our server.
Returns:
Tuple of:
1. Mapping from message to event IDs.
2. Mapping from event IDs to messages.
"""
message_list = [
"old history",
"foo",
"bar",
"baz",
"qux",
"corge",
"grault",
"garply",
"waldo",
"fred",
]
message_to_event_id_map = {}
event_id_to_message_map = {}
# Make a straight line of events where only every other is persisted
forward_extremity_event_ids = list(
self.get_success(
self.hs.get_datastores().main.get_latest_event_ids_in_room(self.room_id)
)
)
previous_depth = 0
for message_index, message_text in enumerate(message_list):
event, event_context = self.get_success(
create_event(
self.hs,
prev_event_ids=forward_extremity_event_ids,
type=EventTypes.Message,
content={"body": message_text, "msgtype": "m.text"},
sender=self.user_id,
room_id=self.room_id,
room_version=self.get_success(
self.store.get_room_version_id(self.room_id)
),
)
)
message_to_event_id_map[message_text] = event.event_id
event_id_to_message_map[event.event_id] = message_text
# Update the forward extremity to the new event
forward_extremity_event_ids = [
event.event_id,
# Because we only persist every other event, if we just give Synapse an
# unknown event ID as a `prev_event_id`, it won't be able to calculate
# `depth` in the DAG and will just default it to a `depth` of 1.
#
# Let's just connect it to one of the previous-previous events so that
# Synapse has some known `prev_event_id` to calculate the `depth` from.
forward_extremity_event_ids[0],
]
# Persist every other event (do the odds, so we start with *not* persisting
# the event representing the "old history")
if message_index % 2 == 1:
event, _, _ = self.get_success(
self.persistence.persist_event(event, event_context)
)
# For sanity sake because `/messages` uses topological ordering, let's
# assert that the `depth` is increasing.
self.assertGreater(
event.depth,
previous_depth,
"Expected event depth to increase as we persist events",
)
previous_depth = event.depth
return message_to_event_id_map, event_id_to_message_map
def test_gaps_going_backwards(self) -> None:
message_to_event_id_map, event_id_to_message_map = self._setup_gappy_timeline()
# Craft a token that represents the position just after the "corge" event.
# When looking backwards, we should see the "corge" event.
corge_room_stream_token = self.get_success(
self.store.get_topological_token_for_event(message_to_event_id_map["corge"])
)
current_token = self.hs.get_event_sources().get_current_token()
corge_token = self.get_success(
current_token.copy_and_replace(
StreamKeyType.ROOM,
corge_room_stream_token,
).to_string(self.store)
)
messages_type_filter = '{"types": ["m.room.message"]}'
channel = self.make_request(
"GET",
"/rooms/%s/messages?dir=b&from=%s&filter=%s"
% (self.room_id, corge_token, messages_type_filter),
)
self.assertEqual(HTTPStatus.OK, channel.code)
logger.info("asdf %s", channel.json_body)
# Make sure the timeline includes everything from "corge" backwards (inclusive)
#
actual_messages = [
event_id_to_message_map.get(event["event_id"], event["event_id"])
for event in channel.json_body["chunk"]
]
expected_messages = [
"corge",
# "qux",
"baz",
# "bar",
"foo",
# "old history",
]
# Because asserting the exact order with `assertEqual` gives horrible diff
# output when it fails, let's use `assertIncludes` as a first step to sanity
# check that everything is there before we assert the exact order.
self.assertIncludes(
set(actual_messages),
set(expected_messages),
exact=True,
)
# Assert the actual order
self.assertEqual(actual_messages, expected_messages)
# Make sure the gaps are correct
actual_gaps = [
event_id_to_message_map.get(gap["event_id"], gap["event_id"])
for gap in channel.json_body["gaps"]
]
expected_gaps = expected_messages
# We only need to assert gaps are in the list (the order doesn't matter)
self.assertIncludes(
set(actual_gaps),
set(expected_gaps),
exact=True,
)
# Ensure that the tokens point to the correct positions
for gap in channel.json_body["gaps"]:
event_room_stream_token = self.get_success(
self.store.get_topological_token_for_event(gap["event_id"])
)
# Make sure that the `prev_pagination_token` points to the position before
# the event
prev_pagination_token = self.get_success(
StreamToken.from_string(self.store, gap["prev_pagination_token"])
)
assert prev_pagination_token.room_key.topological is not None, (
"expected `gap.prev_pagination_token` to be a topological token since it was returned from `/messages`"
)
assert prev_pagination_token.room_key.is_before_or_eq(
event_room_stream_token
), (
"expected the `gap.prev_pagination_token` to point to the position before the event"
)
# Make sure that the `next_pagination_token` points to the position after
# the event
next_pagination_token = self.get_success(
StreamToken.from_string(self.store, gap["next_pagination_token"])
)
assert next_pagination_token.room_key.topological is not None, (
"expected `gap.next_pagination_token` to be a topological token since it was returned from `/messages`"
)
assert event_room_stream_token.is_before_or_eq(
next_pagination_token.room_key
), (
"expected the `gap.next_pagination_token` to point to the position after the event"
)
# TODO: `test_gaps_going_forwards`
class RoomMessageFilterTestCase(RoomBase):
"""Tests /rooms/$room_id/messages REST events."""