
Compare commits


56 Commits

Author SHA1 Message Date
Mathieu Velten
cec52ea9b0 Add changelog 2023-06-14 00:50:53 +02:00
Mathieu Velten
0cb8502bbb Use parse_duration for newly introduced options 2023-06-13 23:55:15 +02:00
Shay
553f2f53e7 Replace EventContext fields prev_group and delta_ids with field state_group_deltas (#15233) 2023-06-13 13:22:06 -07:00
Mathieu Velten
59ec4a0dc1 Fix MSC3983 support: only one OTK per device was returned through federation (#15770) 2023-06-13 19:51:47 +02:00
Eric Eastwood
0757d59ec4 Avoid backfill when we already have messages to return (#15737)
We now only block the client on backfill when we see a large gap in the events (more than 2 events missing in a row according to `depth`), more than 3 single-event holes, or not enough messages to fill the response. Otherwise, we return the messages directly to the client and backfill in the background for eventual consistency's sake (see the sketch below).

Fix https://github.com/matrix-org/synapse/issues/15696
2023-06-13 12:31:08 -05:00
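
A minimal sketch of the heuristic described above (illustrative names and a standalone function; the real logic lives in Synapse's pagination handler and appears in the diff further down):

```python
def should_block_on_backfill(depths: list[int], limit: int) -> bool:
    """Decide whether to backfill in the foreground, per the heuristic above.

    `depths` are the depths of the events we already have; `limit` is the
    number of events the client asked for.
    """
    sorted_depths = sorted(set(depths))
    single_event_holes = 0
    for prev, cur in zip(sorted_depths, sorted_depths[1:]):
        gap = cur - prev
        if gap > 2:  # two or more consecutive events missing: a "big" gap
            return True
        if gap == 2:  # exactly one event missing
            single_event_holes += 1
    # Too many single-event holes, or not enough events to fill the response.
    return single_event_holes > 3 or len(depths) < limit
```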
Patrick Cloke
df945e0d7c Fix MSC3983 support: Use the unstable /keys/claim federation endpoint if multiple keys are requested (#15755) 2023-06-13 18:07:55 +02:00
dependabot[bot]
99c850f798 Bump regex from 1.7.3 to 1.8.4 (#15769)
Bumps [regex](https://github.com/rust-lang/regex) from 1.7.3 to 1.8.4.
- [Release notes](https://github.com/rust-lang/regex/releases)
- [Changelog](https://github.com/rust-lang/regex/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/regex/compare/1.7.3...1.8.4)

---
updated-dependencies:
- dependency-name: regex
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-13 10:05:29 +01:00
dependabot[bot]
8afc9a4cda Bump log from 0.4.18 to 0.4.19 (#15761)
Bumps [log](https://github.com/rust-lang/log) from 0.4.18 to 0.4.19.
- [Release notes](https://github.com/rust-lang/log/releases)
- [Changelog](https://github.com/rust-lang/log/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/log/compare/0.4.18...0.4.19)

---
updated-dependencies:
- dependency-name: log
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-13 10:05:13 +01:00
Erik Johnston
ba97b39881 Bump minimum supported Rust version (#15768)
Important crates such as `log` and `regex` have bumped their minimum supported
Rust version to 1.60.0 as well.
2023-06-12 13:27:11 +00:00
dependabot[bot]
0b104364f9 Bump pyo3-log from 0.8.1 to 0.8.2 (#15759)
Bumps [pyo3-log](https://github.com/vorner/pyo3-log) from 0.8.1 to 0.8.2.
- [Changelog](https://github.com/vorner/pyo3-log/blob/main/CHANGELOG.md)
- [Commits](https://github.com/vorner/pyo3-log/compare/v0.8.1...v0.8.2)

---
updated-dependencies:
- dependency-name: pyo3-log
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:22:21 +01:00
dependabot[bot]
42eb4fea1c Bump serde from 1.0.163 to 1.0.164 (#15760)
Bumps [serde](https://github.com/serde-rs/serde) from 1.0.163 to 1.0.164.
- [Release notes](https://github.com/serde-rs/serde/releases)
- [Commits](https://github.com/serde-rs/serde/compare/v1.0.163...v1.0.164)

---
updated-dependencies:
- dependency-name: serde
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:21:20 +01:00
dependabot[bot]
9e321e0098 Bump pyopenssl from 23.1.1 to 23.2.0 (#15765)
Bumps [pyopenssl](https://github.com/pyca/pyopenssl) from 23.1.1 to 23.2.0.
- [Changelog](https://github.com/pyca/pyopenssl/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pyca/pyopenssl/compare/23.1.1...23.2.0)

---
updated-dependencies:
- dependency-name: pyopenssl
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:20:55 +01:00
dependabot[bot]
0aa731cb6f Bump pydantic from 1.10.8 to 1.10.9 (#15762)
Bumps [pydantic](https://github.com/pydantic/pydantic) from 1.10.8 to 1.10.9.
- [Release notes](https://github.com/pydantic/pydantic/releases)
- [Changelog](https://github.com/pydantic/pydantic/blob/main/HISTORY.md)
- [Commits](https://github.com/pydantic/pydantic/compare/v1.10.8...v1.10.9)

---
updated-dependencies:
- dependency-name: pydantic
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:19:43 +01:00
dependabot[bot]
aad7e2d0c1 Bump sentry-sdk from 1.25.0 to 1.25.1 (#15764)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.25.0 to 1.25.1.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.25.0...1.25.1)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:19:01 +01:00
dependabot[bot]
046e7e494a Bump phonenumbers from 8.13.11 to 8.13.13 (#15763)
Bumps [phonenumbers](https://github.com/daviddrysdale/python-phonenumbers) from 8.13.11 to 8.13.13.
- [Commits](https://github.com/daviddrysdale/python-phonenumbers/compare/v8.13.11...v8.13.13)

---
updated-dependencies:
- dependency-name: phonenumbers
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:17:40 +01:00
dependabot[bot]
4f2bd6be69 Bump types-pyopenssl from 23.1.0.2 to 23.2.0.0 (#15766)
Bumps [types-pyopenssl](https://github.com/python/typeshed) from 23.1.0.2 to 23.2.0.0.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyopenssl
  dependency-type: direct:development
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-12 09:17:04 +01:00
Eric Eastwood
fcc3ca37e1 Backfill in the background if we're doing it "just because" (#15710)
Fix https://github.com/matrix-org/synapse/issues/15702
2023-06-09 15:39:49 -05:00
Erik Johnston
373c0c7ff7 Speed up typechecking CI (#15752)
By restoring the rust cache before installing the project.
2023-06-09 15:00:30 +01:00
Shay
d84e66144d Allow for the configuration of max request retries and min/max retry delays in the matrix federation client (#12504)
Co-authored-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Erik Johnston <erik@matrix.org>
2023-06-09 09:00:46 +02:00
Erik Johnston
f6321e386c Merge branch 'master' into develop 2023-06-08 13:16:46 +01:00
Erik Johnston
b5b7bb7c0f Merge branch 'release-v1.85' 2023-06-08 13:16:39 +01:00
Erik Johnston
ac3a70a7dd Fix up changelog 2023-06-08 13:15:56 +01:00
Erik Johnston
c485ed1c5a Clear event caches when we purge history (#15609)
This should help a little with #13476

---------

Co-authored-by: Patrick Cloke <patrickc@matrix.org>
2023-06-08 13:14:40 +01:00
Erik Johnston
a4921b2370 1.85.2 2023-06-08 13:04:26 +01:00
Erik Johnston
733342ad3e Fix using TLS for replication (#15746)
Fixes #15744.
2023-06-08 13:03:48 +01:00
David Robertson
d162aecaac Quick & dirty metric for background update status (#15740)
* Quick & dirty metric for background update status

* Changelog

* Remove debug

Co-authored-by: Mathieu Velten <mathieuv@matrix.org>

* Actually write to _aborted

---------

Co-authored-by: Mathieu Velten <mathieuv@matrix.org>
2023-06-07 17:12:23 +00:00
Eric Eastwood
e536f02f68 Remove superfluous room_memberships join from background update (#15733)
Spawning from https://github.com/matrix-org/synapse/pull/15731
2023-06-07 11:47:01 -05:00
Eric Eastwood
195b6a298d Remove redundant room_memberships join to find participating servers in a room (#15732)
Spawning from https://github.com/matrix-org/synapse/pull/15731
2023-06-07 11:45:16 -05:00
Grant McLean
5c24d7b9eb Check required power levels earlier in createRoom handler. (#15695)
* Check required power levels earlier in createRoom handler.

- If a server was configured to reject the creation of rooms with E2EE
  enabled (by specifying an unattainably high power level for
  "m.room.encryption" in default_power_level_content_override), the 403
  error was not being triggered until after the room was created and
  before the "m.room.power_levels" was sent.  This allowed a user to
  access the partially-configured room and complete the setup of E2EE
  and power levels manually.

- This change causes the power level overrides to be checked earlier and
  the request to be rejected before the user gains access to the room (an
  example of such a configuration follows this entry).

- A new `_validate_room_config` method is added to contain checks that
  should be run before a room is created.

- The new test case confirms that a user request is rejected by the new
  validation method.

Signed-off-by: Grant McLean <grant@catalyst.net.nz>

* Add a changelog file.

* Formatting fix for black.

* Remove unneeded line from test.

---------

Signed-off-by: Grant McLean <grant@catalyst.net.nz>
2023-06-07 16:21:25 +01:00
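
For context, a hypothetical homeserver configuration of the kind this commit guards against: an unattainably high power level required for `m.room.encryption` via `default_power_level_content_override` (the preset name and level shown are illustrative):

```yaml
default_power_level_content_override:
  private_chat:
    events:
      "m.room.encryption": 999  # no user can reach this, so enabling E2EE is rejected
```

With this change, a `createRoom` request that would enable encryption under such a config is rejected with a 403 before the room is created, rather than after.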
Erik Johnston
8934c11935 Merge branch 'master' into develop 2023-06-07 14:45:19 +01:00
Erik Johnston
140a76c00f Merge branch 'release-v1.85' 2023-06-07 14:45:09 +01:00
Erik Johnston
6cd6a2ae59 Update changelog 2023-06-07 13:07:40 +01:00
Erik Johnston
28423977be Update changelog 2023-06-07 13:04:20 +01:00
Erik Johnston
f7c6553ebc Fix schema delta error in 1.85 (#15739)
Some users seem to have multiple rows per user / room with a null thread
ID, which we need to handle.
2023-06-07 13:02:42 +01:00
Erik Johnston
7acf7f2f8d 1.85.1 2023-06-07 10:51:17 +01:00
Erik Johnston
a701c089fa Fix schema delta error in 1.85 (#15738)
There appears to be a race where you can end up with entries in
`event_push_summary` with both a `NULL` and `main` thread ID.

Fixes #15736

Introduced in #15597
2023-06-07 10:50:32 +01:00
Eric Eastwood
9d911b0da6 No need for the extra join since membership is built-in to current_state_events (#15731)
This helps with the upstream `is_host_joined()` and `is_host_invited()` functions.

`membership` was added to `current_state_events` in https://github.com/matrix-org/synapse/pull/5706 and forced in https://github.com/matrix-org/synapse/pull/13745
2023-06-06 22:19:57 -05:00
Eric Eastwood
8bfded81f3 Trace functions which return Awaitable (#15650) 2023-06-06 17:39:22 -05:00
Eric Eastwood
4e6390cb10 Update error to more plainly explain we can only authorize our own events (#15725) 2023-06-06 16:26:12 -05:00
Eric Eastwood
33c3550887 Add context for when/why to use the long_retries option when sending Federation requests (#15721) 2023-06-06 16:25:03 -05:00
Shay
6ee96e9366 Improve performance of user directory search (#15729) 2023-06-06 21:16:03 +01:00
Andrew Morgan
d43c72a6c8 Prevent "twisted trunk" and "latest deps" workflows from running on forks (#15726) 2023-06-06 18:29:54 +00:00
Sean Quah
dfd77f426e Remove some unused server_name fields (#15723)
Signed-off-by: Sean Quah <seanq@matrix.org>
2023-06-06 12:32:29 +01:00
Erik Johnston
1a54953473 Merge remote-tracking branch 'origin/master' into develop 2023-06-06 10:59:20 +01:00
Erik Johnston
ad690037de Fix link in changelog 2023-06-06 10:58:32 +01:00
Erik Johnston
07fd6d82d7 Merge branch 'master' into develop 2023-06-06 10:49:04 +01:00
Erik Johnston
ec71214243 Fixup changelog 2023-06-06 10:06:21 +01:00
Erik Johnston
564f37aca6 1.85.0 2023-06-06 09:55:42 +01:00
Patrick Cloke
f880e64b11 Stabilize support for MSC3952: Intentional mentions. (#15520) 2023-06-06 09:11:07 +01:00
Eric Eastwood
f9561b9e37 Some house keeping on maybe_backfill() functions (#15709) 2023-06-05 23:38:52 -05:00
dependabot[bot]
ca8906be2c Bump types-requests from 2.31.0.0 to 2.31.0.1 (#15715)
Bumps [types-requests](https://github.com/python/typeshed) from 2.31.0.0 to 2.31.0.1.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-requests
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:39:34 +01:00
dependabot[bot]
2d97d5b1c3 Bump types-jsonschema from 4.17.0.7 to 4.17.0.8 (#15716)
Bumps [types-jsonschema](https://github.com/python/typeshed) from 4.17.0.7 to 4.17.0.8.
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-jsonschema
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:32:25 +01:00
dependabot[bot]
1a7aa81715 Bump sentry-sdk from 1.22.1 to 1.25.0 (#15714)
Bumps [sentry-sdk](https://github.com/getsentry/sentry-python) from 1.22.1 to 1.25.0.
- [Release notes](https://github.com/getsentry/sentry-python/releases)
- [Changelog](https://github.com/getsentry/sentry-python/blob/master/CHANGELOG.md)
- [Commits](https://github.com/getsentry/sentry-python/compare/1.22.1...1.25.0)

---
updated-dependencies:
- dependency-name: sentry-sdk
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:32:16 +01:00
dependabot[bot]
5feabbdf06 Bump pyasn1 from 0.4.8 to 0.5.0 (#15713)
Bumps [pyasn1](https://github.com/pyasn1/pyasn1) from 0.4.8 to 0.5.0.
- [Release notes](https://github.com/pyasn1/pyasn1/releases)
- [Changelog](https://github.com/pyasn1/pyasn1/blob/main/CHANGES.rst)
- [Commits](https://github.com/pyasn1/pyasn1/compare/v0.4.8...v0.5.0)

---
updated-dependencies:
- dependency-name: pyasn1
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:32:07 +01:00
dependabot[bot]
36a5bcae2c Bump library/redis from 6-bullseye to 7-bullseye in /docker (#15712)
Bumps library/redis from 6-bullseye to 7-bullseye.

---
updated-dependencies:
- dependency-name: library/redis
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:31:54 +01:00
dependabot[bot]
8ba530c0e3 Bump importlib-metadata from 6.1.0 to 6.6.0 (#15711)
Bumps [importlib-metadata](https://github.com/python/importlib_metadata) from 6.1.0 to 6.6.0.
- [Release notes](https://github.com/python/importlib_metadata/releases)
- [Changelog](https://github.com/python/importlib_metadata/blob/main/CHANGES.rst)
- [Commits](https://github.com/python/importlib_metadata/compare/v6.1.0...v6.6.0)

---
updated-dependencies:
- dependency-name: importlib-metadata
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-06-05 10:31:41 +01:00
83 changed files with 1044 additions and 501 deletions


@@ -22,7 +22,21 @@ concurrency:
cancel-in-progress: true
jobs:
check_repo:
# Prevent this workflow from running on any fork of Synapse other than matrix-org/synapse, as it is
# only useful to the Synapse core team.
# All other workflow steps depend on this one, thus if 'should_run_workflow' is not 'true', the rest
# of the workflow will be skipped as well.
runs-on: ubuntu-latest
outputs:
should_run_workflow: ${{ steps.check_condition.outputs.should_run_workflow }}
steps:
- id: check_condition
run: echo "should_run_workflow=${{ github.repository == 'matrix-org/synapse' }}" >> "$GITHUB_OUTPUT"
mypy:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
@@ -47,6 +61,8 @@ jobs:
run: sed '/warn_unused_ignores = True/d' -i mypy.ini
- run: poetry run mypy
trial:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
strategy:
matrix:
@@ -105,6 +121,8 @@ jobs:
sytest:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
container:
image: matrixdotorg/sytest-synapse:testing
@@ -156,7 +174,8 @@ jobs:
complement:
if: "${{ !failure() && !cancelled() }}"
needs: check_repo
if: "!failure() && !cancelled() && needs.check_repo.outputs.should_run_workflow == 'true'"
runs-on: ubuntu-latest
strategy:
@@ -192,7 +211,7 @@ jobs:
# Open an issue if the build fails, so we know about it.
# Only do this if we're not experimenting with this action in a PR.
open-issue:
if: "failure() && github.event_name != 'push' && github.event_name != 'pull_request'"
if: "failure() && github.event_name != 'push' && github.event_name != 'pull_request' && needs.check_repo.outputs.should_run_workflow == 'true'"
needs:
# TODO: should mypy be included here? It feels more brittle than the others.
- mypy


@@ -35,7 +35,7 @@ jobs:
steps:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
@@ -92,6 +92,10 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
with:
@@ -103,10 +107,6 @@ jobs:
# To make CI green, err towards caution and install the project.
install-project: "true"
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
- uses: Swatinem/rust-cache@v2
# Cribbed from
# https://github.com/AustinScola/mypy-cache-github-action/blob/85ea4f2972abed39b33bd02c36e341b28ca59213/src/restore.ts#L10-L17
- name: Restore/persist mypy's cache
@@ -150,7 +150,7 @@ jobs:
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
@@ -167,7 +167,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
with:
components: clippy
- uses: Swatinem/rust-cache@v2
@@ -268,7 +268,7 @@ jobs:
postgres:${{ matrix.job.postgres-version }}
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
@@ -308,7 +308,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
# There aren't wheels for some of the older deps, so we need to install
@@ -416,7 +416,7 @@ jobs:
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- name: Run SyTest
@@ -556,7 +556,7 @@ jobs:
path: synapse
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- uses: actions/setup-go@v4
@@ -584,7 +584,7 @@ jobs:
- uses: actions/checkout@v3
- name: Install Rust
uses: dtolnay/rust-toolchain@1.58.1
uses: dtolnay/rust-toolchain@1.60.0
- uses: Swatinem/rust-cache@v2
- run: cargo test


@@ -18,7 +18,22 @@ concurrency:
cancel-in-progress: true
jobs:
check_repo:
# Prevent this workflow from running on any fork of Synapse other than matrix-org/synapse, as it is
# only useful to the Synapse core team.
# All other workflow steps depend on this one, thus if 'should_run_workflow' is not 'true', the rest
# of the workflow will be skipped as well.
if: github.repository == 'matrix-org/synapse'
runs-on: ubuntu-latest
outputs:
should_run_workflow: ${{ steps.check_condition.outputs.should_run_workflow }}
steps:
- id: check_condition
run: echo "should_run_workflow=${{ github.repository == 'matrix-org/synapse' }}" >> "$GITHUB_OUTPUT"
mypy:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
@@ -41,6 +56,8 @@ jobs:
- run: poetry run mypy
trial:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
@@ -75,6 +92,8 @@ jobs:
|| true
sytest:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
container:
image: matrixdotorg/sytest-synapse:buster
@@ -119,7 +138,8 @@ jobs:
/logs/**/*.log*
complement:
if: "${{ !failure() && !cancelled() }}"
needs: check_repo
if: "!failure() && !cancelled() && needs.check_repo.outputs.should_run_workflow == 'true'"
runs-on: ubuntu-latest
strategy:
@@ -166,7 +186,7 @@ jobs:
# open an issue if the build fails, so we know about it.
open-issue:
if: failure()
if: failure() && needs.check_repo.outputs.should_run_workflow == 'true'
needs:
- mypy
- trial


@@ -1,3 +1,44 @@
Synapse 1.85.2 (2023-06-08)
===========================
Bugfixes
--------
- Fix regression where using TLS for HTTP replication between workers did not work. Introduced in v1.85.0. ([\#15746](https://github.com/matrix-org/synapse/issues/15746))
Synapse 1.85.1 (2023-06-07)
===========================
Note: this release only fixes a bug that stopped some deployments from upgrading to v1.85.0. There is no need to upgrade to v1.85.1 if successfully running v1.85.0.
Bugfixes
--------
- Fix bug in schema delta that broke upgrades for some deployments. Introduced in v1.85.0. ([\#15738](https://github.com/matrix-org/synapse/issues/15738), [\#15739](https://github.com/matrix-org/synapse/issues/15739))
Synapse 1.85.0 (2023-06-06)
===========================
No significant changes since 1.85.0rc2.
## Security advisory
The following issues are fixed in 1.85.0 (and RCs).
- [GHSA-26c5-ppr8-f33p](https://github.com/matrix-org/synapse/security/advisories/GHSA-26c5-ppr8-f33p) / [CVE-2023-32682](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-32682) — Low Severity
It may be possible for a deactivated user to login when using uncommon configurations.
- [GHSA-98px-6486-j7qc](https://github.com/matrix-org/synapse/security/advisories/GHSA-98px-6486-j7qc) / [CVE-2023-32683](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-32683) — Low Severity
A discovered oEmbed or image URL can bypass the `url_preview_url_blacklist` setting potentially allowing server side request forgery or bypassing network policies. Impact is limited to IP addresses allowed by the `url_preview_ip_range_blacklist` setting (by default this only allows public IPs).
See the advisories for more details. If you have any questions, email security@matrix.org.
Synapse 1.85.0rc2 (2023-06-01)
==============================

Cargo.lock

@@ -4,9 +4,9 @@ version = 3
[[package]]
name = "aho-corasick"
version = "0.7.19"
version = "1.0.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b4f55bd91a0978cbfd91c457a164bab8b4001c833b7f323132c0a4e1922dd44e"
checksum = "43f6cb1bf222025340178f382c426f13757b2960e89779dfcb319c32542a5a41"
dependencies = [
"memchr",
]
@@ -132,9 +132,9 @@ dependencies = [
[[package]]
name = "log"
version = "0.4.18"
version = "0.4.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "518ef76f2f87365916b142844c16d8fefd85039bc5699050210a7778ee1cd1de"
checksum = "b06a4cde4c0f271a446782e3eff8de789548ce57dbc8eca9292c27f4a42004b4"
[[package]]
name = "memchr"
@@ -229,9 +229,9 @@ dependencies = [
[[package]]
name = "pyo3-log"
version = "0.8.1"
version = "0.8.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f9c8b57fe71fb5dcf38970ebedc2b1531cf1c14b1b9b4c560a182a57e115575c"
checksum = "c94ff6535a6bae58d7d0b85e60d4c53f7f84d0d0aa35d6a28c3f3e70bfe51444"
dependencies = [
"arc-swap",
"log",
@@ -291,9 +291,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.7.3"
version = "1.8.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8b1f693b24f6ac912f4893ef08244d70b6067480d2f1a46e950c9691e6749d1d"
checksum = "d0ab3ca65655bb1e41f2a8c8cd662eb4fb035e67c3f78da1d61dffe89d07300f"
dependencies = [
"aho-corasick",
"memchr",
@@ -302,9 +302,9 @@ dependencies = [
[[package]]
name = "regex-syntax"
version = "0.6.29"
version = "0.7.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f162c6dd7b008981e4d40210aca20b4bd0f9b60ca9271061b07f78537722f2e1"
checksum = "436b050e76ed2903236f032a59761c1eb99e1b0aead2c257922771dab1fc8c78"
[[package]]
name = "ryu"
@@ -320,18 +320,18 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "serde"
version = "1.0.163"
version = "1.0.164"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2113ab51b87a539ae008b5c6c02dc020ffa39afd2d83cffcb3f4eb2722cebec2"
checksum = "9e8c8cf938e98f769bc164923b06dce91cea1751522f46f8466461af04c9027d"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.163"
version = "1.0.164"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8c805777e3930c8883389c602315a24224bcc738b63905ef87cd1420353ea93e"
checksum = "d9735b638ccc51c28bf6914d90a2e9725b377144fc612c49a611fddd1b631d68"
dependencies = [
"proc-macro2",
"quote",

changelog.d/12504.misc

@@ -0,0 +1 @@
Allow for the configuration of max request retries and min/max retry delays in the matrix federation client.

changelog.d/15233.misc

@@ -0,0 +1 @@
Replace `EventContext` fields `prev_group` and `delta_ids` with field `state_group_deltas`.


@@ -0,0 +1 @@
Enable support for [MSC3952](https://github.com/matrix-org/matrix-spec-proposals/pull/3952): intentional mentions.

changelog.d/15609.bugfix

@@ -0,0 +1 @@
Correctly clear caches when we delete a room.

changelog.d/15650.misc

@@ -0,0 +1 @@
Add support for tracing functions which return `Awaitable`s.

changelog.d/15695.bugfix

@@ -0,0 +1 @@
Check permissions for enabling encryption earlier during room creation to avoid creating broken rooms.

changelog.d/15709.misc

@@ -0,0 +1 @@
Update docstring and traces on `maybe_backfill()` functions.


@@ -0,0 +1 @@
Speed up `/messages` by backfilling in the background when there are no backward extremities where we are directly paginating.

changelog.d/15721.misc

@@ -0,0 +1 @@
Add context for when/why to use the `long_retries` option when sending Federation requests.

changelog.d/15723.misc

@@ -0,0 +1 @@
Removed some unused fields.

changelog.d/15725.misc

@@ -0,0 +1 @@
Update federation error to more plainly explain we can only authorize our own membership events.

changelog.d/15726.misc

@@ -0,0 +1 @@
Prevent the `latest_deps` and `twisted_trunk` daily GitHub Actions workflows from running on forks of the codebase.

changelog.d/15729.misc

@@ -0,0 +1 @@
Improve performance of user directory search.

changelog.d/15731.misc

@@ -0,0 +1 @@
Remove redundant table join with `room_memberships` when doing a `is_host_joined()`/`is_host_invited()` call (`membership` is already part of the `current_state_events`).

changelog.d/15732.doc

@@ -0,0 +1 @@
Simplify query to find participating servers in a room.

changelog.d/15733.misc

@@ -0,0 +1 @@
Remove superfluous `room_memberships` join from background update.


@@ -0,0 +1 @@
Improve `/messages` response time by avoiding backfill when we already have messages to return.


@@ -0,0 +1 @@
Expose a metric reporting the database background update status.

changelog.d/15752.misc

@@ -0,0 +1 @@
Speed up typechecking CI.

changelog.d/15755.misc

@@ -0,0 +1 @@
Fix requesting multiple keys at once over federation, related to [MSC3983](https://github.com/matrix-org/matrix-spec-proposals/pull/3983).

changelog.d/15768.misc

@@ -0,0 +1 @@
Bump minimum supported Rust version to 1.60.0.

changelog.d/15770.bugfix

@@ -0,0 +1 @@
Fix requesting multiple keys at once over federation, related to [MSC3983](https://github.com/matrix-org/matrix-spec-proposals/pull/3983).

changelog.d/15778.misc

@@ -0,0 +1 @@
Use parse_duration for federation client timeout and retry options.

debian/changelog

@@ -1,3 +1,21 @@
matrix-synapse-py3 (1.85.2) stable; urgency=medium
* New Synapse release 1.85.2.
-- Synapse Packaging team <packages@matrix.org> Thu, 08 Jun 2023 13:04:18 +0100
matrix-synapse-py3 (1.85.1) stable; urgency=medium
* New Synapse release 1.85.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 07 Jun 2023 10:51:12 +0100
matrix-synapse-py3 (1.85.0) stable; urgency=medium
* New Synapse release 1.85.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 06 Jun 2023 09:39:29 +0100
matrix-synapse-py3 (1.85.0~rc2) stable; urgency=medium
* New Synapse release 1.85.0rc2.


@@ -21,7 +21,7 @@ FROM docker.io/library/debian:bullseye-slim AS deps_base
# which makes it much easier to copy (but we need to make sure we use an image
# based on the same debian version as the synapse image, to make sure we get
# the expected version of libc.
FROM docker.io/library/redis:6-bullseye AS redis_base
FROM docker.io/library/redis:7-bullseye AS redis_base
# now build the final image, based on the the regular Synapse docker image
FROM $FROM


@@ -87,6 +87,14 @@ process, for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
```
# Upgrading to v1.86.0
## Minimum supported Rust version
The minimum supported Rust version has been increased from v1.58.1 to v1.60.0.
Users building from source will need to ensure their `rustc` version is up to
date.
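
If the toolchain is managed with rustup, checking and updating it might look like this (a sketch; adjust for your setup):

```sh
rustc --version       # should report 1.60.0 or newer
rustup update stable  # otherwise, update the toolchain
```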
# Upgrading to v1.85.0


@@ -27,9 +27,8 @@ What servers are currently participating in this room?
Run this sql query on your db:
```sql
SELECT DISTINCT split_part(state_key, ':', 2)
FROM current_state_events AS c
INNER JOIN room_memberships AS m USING (room_id, event_id)
WHERE room_id = '!cURbafjkfsMDVwdRDQ:matrix.org' AND membership = 'join';
FROM current_state_events
WHERE room_id = '!cURbafjkfsMDVwdRDQ:matrix.org' AND membership = 'join';
```
What users are registered on my server?


@@ -1196,6 +1196,32 @@ Example configuration:
allow_device_name_lookup_over_federation: true
```
---
### `federation`
The federation section defines some sub-options related to federation.
The following options configure timeout and retry logic for a single request,
independently of the others.
The short retry algorithm is used when another party is waiting for the request to be
answered, while the long retry algorithm is used for requests that happen in the background,
like sending a federation transaction.
* `client_timeout`: timeout for federation requests, in seconds. Defaults to 60s.
* `max_short_retry_delay`: maximum delay used by the short retry algorithm, in seconds. Defaults to 2s.
* `max_long_retry_delay`: maximum delay used by the long retry algorithm, in seconds. Defaults to 60s.
* `max_short_retries`: maximum number of retries for the short retry algorithm. Defaults to 3 attempts.
* `max_long_retries`: maximum number of retries for the long retry algorithm. Defaults to 10 attempts.
Example configuration:
```yaml
federation:
client_timeout: 180
max_short_retry_delay: 7
max_long_retry_delay: 100
max_short_retries: 5
max_long_retries: 20
```
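
Since these options are read via `parse_duration` (see the "Use parse_duration for newly introduced options" commit above), duration strings with unit suffixes should also be accepted; an assumed equivalent of the example above:

```yaml
federation:
  client_timeout: 3m
  max_short_retry_delay: 7s
  max_long_retry_delay: 100s
  max_short_retries: 5
  max_long_retries: 20
```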
---
## Caching
Options related to caching.

poetry.lock: diff suppressed because it is too large.


@@ -89,7 +89,7 @@ manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.85.0rc2"
version = "1.85.2"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"


@@ -7,7 +7,7 @@ name = "synapse"
version = "0.1.0"
edition = "2021"
rust-version = "1.58.1"
rust-version = "1.60.0"
[lib]
name = "synapse"


@@ -13,8 +13,6 @@
// limitations under the License.
#![feature(test)]
use std::collections::BTreeSet;
use synapse::push::{
evaluator::PushRuleEvaluator, Condition, EventMatchCondition, FilteredPushRules, JsonValue,
PushRules, SimpleJsonValue,
@@ -197,7 +195,6 @@ fn bench_eval_message(b: &mut Bencher) {
false,
false,
false,
false,
);
b.iter(|| eval.run(&rules, Some("bob"), Some("person")));


@@ -142,11 +142,11 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(".org.matrix.msc3952.is_user_mention"),
rule_id: Cow::Borrowed("global/override/.m.is_user_mention"),
priority_class: 5,
conditions: Cow::Borrowed(&[Condition::Known(
KnownCondition::ExactEventPropertyContainsType(EventPropertyIsTypeCondition {
key: Cow::Borrowed("content.org\\.matrix\\.msc3952\\.mentions.user_ids"),
key: Cow::Borrowed("content.m\\.mentions.user_ids"),
value_type: Cow::Borrowed(&EventMatchPatternType::UserId),
}),
)]),
@@ -163,11 +163,11 @@ pub const BASE_APPEND_OVERRIDE_RULES: &[PushRule] = &[
default_enabled: true,
},
PushRule {
rule_id: Cow::Borrowed(".org.matrix.msc3952.is_room_mention"),
rule_id: Cow::Borrowed("global/override/.m.is_room_mention"),
priority_class: 5,
conditions: Cow::Borrowed(&[
Condition::Known(KnownCondition::EventPropertyIs(EventPropertyIsCondition {
key: Cow::Borrowed("content.org\\.matrix\\.msc3952\\.mentions.room"),
key: Cow::Borrowed("content.m\\.mentions.room"),
value: Cow::Borrowed(&SimpleJsonValue::Bool(true)),
})),
Condition::Known(KnownCondition::SenderNotificationPermission {

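For reference, a hypothetical event payload using the stabilised intentional-mentions field that these rules now match on (`content.m.mentions` instead of the `org.matrix.msc3952.*`-prefixed form):

```python
event_content = {
    "msgtype": "m.text",
    "body": "hello @alice",
    "m.mentions": {
        "user_ids": ["@alice:example.org"],  # matched by .m.is_user_mention
        "room": True,  # matched by .m.is_room_mention (sender needs permission)
    },
}
```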

@@ -70,7 +70,9 @@ pub struct PushRuleEvaluator {
/// The "content.body", if any.
body: String,
/// True if the event has a mentions property and MSC3952 support is enabled.
/// True if the event has an m.mentions property. (Note that this is a separate
/// flag instead of checking flattened_keys since the m.mentions property
/// might be an empty map and not appear in flattened_keys.)
has_mentions: bool,
/// The number of users in the room.
@@ -155,9 +157,7 @@ impl PushRuleEvaluator {
let rule_id = &push_rule.rule_id().to_string();
// For backwards-compatibility the legacy mention rules are disabled
// if the event contains the 'm.mentions' property (and if the
// experimental feature is enabled, both of these are represented
// by the has_mentions flag).
// if the event contains the 'm.mentions' property.
if self.has_mentions
&& (rule_id == "global/override/.m.rule.contains_display_name"
|| rule_id == "global/content/.m.rule.contains_user_name"
@@ -562,7 +562,7 @@ fn test_requires_room_version_supports_condition() {
};
let rules = PushRules::new(vec![custom_rule]);
result = evaluator.run(
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false, false),
&FilteredPushRules::py_new(rules, BTreeMap::new(), true, false, true, false),
None,
None,
);


@@ -527,7 +527,6 @@ pub struct FilteredPushRules {
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3952_intentional_mentions: bool,
msc3958_suppress_edits_enabled: bool,
}
@@ -540,7 +539,6 @@ impl FilteredPushRules {
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3952_intentional_mentions: bool,
msc3958_suppress_edits_enabled: bool,
) -> Self {
Self {
@@ -549,7 +547,6 @@ impl FilteredPushRules {
msc1767_enabled,
msc3381_polls_enabled,
msc3664_enabled,
msc3952_intentional_mentions,
msc3958_suppress_edits_enabled,
}
}
@@ -587,10 +584,6 @@ impl FilteredPushRules {
return false;
}
if !self.msc3952_intentional_mentions && rule.rule_id.contains("org.matrix.msc3952")
{
return false;
}
if !self.msc3958_suppress_edits_enabled
&& rule.rule_id == "global/override/.com.beeper.suppress_edits"
{


@@ -46,7 +46,6 @@ class FilteredPushRules:
msc1767_enabled: bool,
msc3381_polls_enabled: bool,
msc3664_enabled: bool,
msc3952_intentional_mentions: bool,
msc3958_suppress_edits_enabled: bool,
): ...
def rules(self) -> Collection[Tuple[PushRule, bool]]: ...


@@ -236,7 +236,7 @@ class EventContentFields:
AUTHORISING_USER: Final = "join_authorised_via_users_server"
# Use for mentioning users.
MSC3952_MENTIONS: Final = "org.matrix.msc3952.mentions"
MENTIONS: Final = "m.mentions"
# an unspecced field added to to-device messages to identify them uniquely-ish
TO_DEVICE_MSGID: Final = "org.matrix.msgid"


@@ -358,11 +358,6 @@ class ExperimentalConfig(Config):
# MSC3391: Removing account data.
self.msc3391_enabled = experimental.get("msc3391_enabled", False)
# MSC3952: Intentional mentions, this depends on MSC3966.
self.msc3952_intentional_mentions = experimental.get(
"msc3952_intentional_mentions", False
)
# MSC3959: Do not generate notifications for edits.
self.msc3958_supress_edit_notifs = experimental.get(
"msc3958_supress_edit_notifs", False


@@ -22,6 +22,8 @@ class FederationConfig(Config):
section = "federation"
def read_config(self, config: JsonDict, **kwargs: Any) -> None:
federation_config = config.setdefault("federation", {})
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist: Optional[dict] = None
federation_domain_whitelist = config.get("federation_domain_whitelist", None)
@@ -49,5 +51,19 @@ class FederationConfig(Config):
"allow_device_name_lookup_over_federation", False
)
# Allow for the configuration of timeout, max request retries
# and min/max retry delays in the matrix federation client.
self.client_timeout = Config.parse_duration(
federation_config.get("client_timeout", "60s")
)
self.max_long_retry_delay = Config.parse_duration(
federation_config.get("max_long_retry_delay", "60s")
)
self.max_short_retry_delay = Config.parse_duration(
federation_config.get("max_short_retry_delay", "2s")
)
self.max_long_retries = federation_config.get("max_long_retries", 10)
self.max_short_retries = federation_config.get("max_short_retries", 3)
_METRICS_FOR_DOMAINS_SCHEMA = {"type": "array", "items": {"type": "string"}}


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from abc import ABC, abstractmethod
from typing import TYPE_CHECKING, List, Optional, Tuple
from typing import TYPE_CHECKING, Dict, List, Optional, Tuple
import attr
from immutabledict import immutabledict
@@ -107,33 +107,32 @@ class EventContext(UnpersistedEventContextBase):
state_delta_due_to_event: If `state_group` and `state_group_before_event` are not None
then this is the delta of the state between the two groups.
prev_group: If it is known, ``state_group``'s prev_group. Note that this being
None does not necessarily mean that ``state_group`` does not have
a prev_group!
state_group_deltas: If not empty, this is a dict collecting a mapping of the state
difference between state groups.
If the event is a state event, this is normally the same as
``state_group_before_event``.
The keys are a tuple of two integers: the initial group and final state group.
The corresponding value is a state map representing the state delta between
these state groups.
If ``state_group`` is None (ie, the event is an outlier), ``prev_group``
will always also be ``None``.
The dictionary is expected to have at most two entries with state groups of:
Note that this *not* (necessarily) the state group associated with
``_prev_state_ids``.
1. The state group before the event and after the event.
2. The state group preceding the state group before the event and the
state group before the event.
delta_ids: If ``prev_group`` is not None, the state delta between ``prev_group``
and ``state_group``.
This information is collected and stored as part of an optimization for persisting
events.
partial_state: if True, we may be storing this event with a temporary,
incomplete state.
"""
_storage: "StorageControllers"
state_group_deltas: Dict[Tuple[int, int], StateMap[str]]
rejected: Optional[str] = None
_state_group: Optional[int] = None
state_group_before_event: Optional[int] = None
_state_delta_due_to_event: Optional[StateMap[str]] = None
prev_group: Optional[int] = None
delta_ids: Optional[StateMap[str]] = None
app_service: Optional[ApplicationService] = None
partial_state: bool = False
@@ -145,16 +144,14 @@ class EventContext(UnpersistedEventContextBase):
state_group_before_event: Optional[int],
state_delta_due_to_event: Optional[StateMap[str]],
partial_state: bool,
prev_group: Optional[int] = None,
delta_ids: Optional[StateMap[str]] = None,
state_group_deltas: Dict[Tuple[int, int], StateMap[str]],
) -> "EventContext":
return EventContext(
storage=storage,
state_group=state_group,
state_group_before_event=state_group_before_event,
state_delta_due_to_event=state_delta_due_to_event,
prev_group=prev_group,
delta_ids=delta_ids,
state_group_deltas=state_group_deltas,
partial_state=partial_state,
)
@@ -163,7 +160,7 @@ class EventContext(UnpersistedEventContextBase):
storage: "StorageControllers",
) -> "EventContext":
"""Return an EventContext instance suitable for persisting an outlier event"""
return EventContext(storage=storage)
return EventContext(storage=storage, state_group_deltas={})
async def persist(self, event: EventBase) -> "EventContext":
return self
@@ -183,13 +180,15 @@ class EventContext(UnpersistedEventContextBase):
"state_group": self._state_group,
"state_group_before_event": self.state_group_before_event,
"rejected": self.rejected,
"prev_group": self.prev_group,
"state_group_deltas": _encode_state_group_delta(self.state_group_deltas),
"state_delta_due_to_event": _encode_state_dict(
self._state_delta_due_to_event
),
"delta_ids": _encode_state_dict(self.delta_ids),
"app_service_id": self.app_service.id if self.app_service else None,
"partial_state": self.partial_state,
# add dummy delta_ids and prev_group for backwards compatibility
"delta_ids": None,
"prev_group": None,
}
@staticmethod
@@ -204,17 +203,24 @@ class EventContext(UnpersistedEventContextBase):
Returns:
The event context.
"""
# workaround for backwards/forwards compatibility: if the input doesn't have a value
# for "state_group_deltas" just assign an empty dict
state_group_deltas = input.get("state_group_deltas", None)
if state_group_deltas:
state_group_deltas = _decode_state_group_delta(state_group_deltas)
else:
state_group_deltas = {}
context = EventContext(
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
storage=storage,
state_group=input["state_group"],
state_group_before_event=input["state_group_before_event"],
prev_group=input["prev_group"],
state_group_deltas=state_group_deltas,
state_delta_due_to_event=_decode_state_dict(
input["state_delta_due_to_event"]
),
delta_ids=_decode_state_dict(input["delta_ids"]),
rejected=input["rejected"],
partial_state=input.get("partial_state", False),
)
@@ -349,7 +355,7 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
_storage: "StorageControllers"
state_group_before_event: Optional[int]
state_group_after_event: Optional[int]
state_delta_due_to_event: Optional[dict]
state_delta_due_to_event: Optional[StateMap[str]]
prev_group_for_state_group_before_event: Optional[int]
delta_ids_to_state_group_before_event: Optional[StateMap[str]]
partial_state: bool
@@ -380,26 +386,16 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
events_and_persisted_context = []
for event, unpersisted_context in amended_events_and_context:
if event.is_state():
context = EventContext(
storage=unpersisted_context._storage,
state_group=unpersisted_context.state_group_after_event,
state_group_before_event=unpersisted_context.state_group_before_event,
state_delta_due_to_event=unpersisted_context.state_delta_due_to_event,
partial_state=unpersisted_context.partial_state,
prev_group=unpersisted_context.state_group_before_event,
delta_ids=unpersisted_context.state_delta_due_to_event,
)
else:
context = EventContext(
storage=unpersisted_context._storage,
state_group=unpersisted_context.state_group_after_event,
state_group_before_event=unpersisted_context.state_group_before_event,
state_delta_due_to_event=unpersisted_context.state_delta_due_to_event,
partial_state=unpersisted_context.partial_state,
prev_group=unpersisted_context.prev_group_for_state_group_before_event,
delta_ids=unpersisted_context.delta_ids_to_state_group_before_event,
)
state_group_deltas = unpersisted_context._build_state_group_deltas()
context = EventContext(
storage=unpersisted_context._storage,
state_group=unpersisted_context.state_group_after_event,
state_group_before_event=unpersisted_context.state_group_before_event,
state_delta_due_to_event=unpersisted_context.state_delta_due_to_event,
partial_state=unpersisted_context.partial_state,
state_group_deltas=state_group_deltas,
)
events_and_persisted_context.append((event, context))
return events_and_persisted_context
@@ -452,11 +448,11 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
# if the event isn't a state event the state group doesn't change
if not self.state_delta_due_to_event:
state_group_after_event = self.state_group_before_event
self.state_group_after_event = self.state_group_before_event
# otherwise if it is a state event we need to get a state group for it
else:
state_group_after_event = await self._storage.state.store_state_group(
self.state_group_after_event = await self._storage.state.store_state_group(
event.event_id,
event.room_id,
prev_group=self.state_group_before_event,
@@ -464,16 +460,81 @@ class UnpersistedEventContext(UnpersistedEventContextBase):
current_state_ids=None,
)
state_group_deltas = self._build_state_group_deltas()
return EventContext.with_state(
storage=self._storage,
state_group=state_group_after_event,
state_group=self.state_group_after_event,
state_group_before_event=self.state_group_before_event,
state_delta_due_to_event=self.state_delta_due_to_event,
state_group_deltas=state_group_deltas,
partial_state=self.partial_state,
prev_group=self.state_group_before_event,
delta_ids=self.state_delta_due_to_event,
)
def _build_state_group_deltas(self) -> Dict[Tuple[int, int], StateMap]:
"""
Collect deltas between the state groups associated with this context
"""
state_group_deltas = {}
# if we know the state group before the event and after the event, add them and the
# state delta between them to state_group_deltas
if self.state_group_before_event and self.state_group_after_event:
# if we have the state groups we should have the delta
assert self.state_delta_due_to_event is not None
state_group_deltas[
(
self.state_group_before_event,
self.state_group_after_event,
)
] = self.state_delta_due_to_event
# the state group before the event may also have a state group which precedes it, if
# we have that and the state group before the event, add them and the state
# delta between them to state_group_deltas
if (
self.prev_group_for_state_group_before_event
and self.state_group_before_event
):
# if we have both state groups we should have the delta between them
assert self.delta_ids_to_state_group_before_event is not None
state_group_deltas[
(
self.prev_group_for_state_group_before_event,
self.state_group_before_event,
)
] = self.delta_ids_to_state_group_before_event
return state_group_deltas
def _encode_state_group_delta(
state_group_delta: Dict[Tuple[int, int], StateMap[str]]
) -> List[Tuple[int, int, Optional[List[Tuple[str, str, str]]]]]:
if not state_group_delta:
return []
state_group_delta_encoded = []
for key, value in state_group_delta.items():
state_group_delta_encoded.append((key[0], key[1], _encode_state_dict(value)))
return state_group_delta_encoded
def _decode_state_group_delta(
input: List[Tuple[int, int, List[Tuple[str, str, str]]]]
) -> Dict[Tuple[int, int], StateMap[str]]:
if not input:
return {}
state_group_deltas = {}
for state_group_1, state_group_2, state_dict in input:
state_map = _decode_state_dict(state_dict)
assert state_map is not None
state_group_deltas[(state_group_1, state_group_2)] = state_map
return state_group_deltas
def _encode_state_dict(
state_dict: Optional[StateMap[str]],

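A hypothetical illustration of the `state_group_deltas` shape described in the docstring above: keys are `(initial state group, final state group)` pairs, and values are state maps from `(event type, state key)` to event ID:

```python
state_group_deltas = {
    # state group before the event -> state group after the event
    (101, 102): {("m.room.topic", ""): "$topic_event_id"},
    # prev group of the before-group -> state group before the event
    (100, 101): {("m.room.member", "@alice:example.org"): "$join_event_id"},
}
```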

@@ -134,13 +134,8 @@ class EventValidator:
)
# If the event contains a mentions key, validate it.
if (
EventContentFields.MSC3952_MENTIONS in event.content
and config.experimental.msc3952_intentional_mentions
):
validate_json_object(
event.content[EventContentFields.MSC3952_MENTIONS], Mentions
)
if EventContentFields.MENTIONS in event.content:
validate_json_object(event.content[EventContentFields.MENTIONS], Mentions)
def _validate_retention(self, event: EventBase) -> None:
"""Checks that an event that defines the retention policy for a room respects the


@@ -260,7 +260,9 @@ class FederationClient(FederationBase):
use_unstable = False
for user_id, one_time_keys in query.items():
for device_id, algorithms in one_time_keys.items():
if any(count > 1 for count in algorithms.values()):
# If more than one algorithm is requested, attempt to use the unstable
# endpoint.
if sum(algorithms.values()) > 1:
use_unstable = True
if algorithms:
# For the stable query, choose only the first algorithm.
@@ -296,6 +298,7 @@ class FederationClient(FederationBase):
else:
logger.debug("Skipping unstable claim client keys API")
# TODO Potentially attempt multiple queries and combine the results?
return await self.transport_layer.claim_client_keys(
user, destination, content, timeout
)


@@ -944,7 +944,7 @@ class FederationServer(FederationBase):
if not self._is_mine_server_name(authorising_server):
raise SynapseError(
400,
f"Cannot authorise request from resident server: {authorising_server}",
f"Cannot authorise membership event for {authorising_server}. We can only authorise requests from our own homeserver",
)
event.signatures.update(
@@ -1016,7 +1016,9 @@ class FederationServer(FederationBase):
for user_id, device_keys in result.items():
for device_id, keys in device_keys.items():
for key_id, key in keys.items():
json_result.setdefault(user_id, {})[device_id] = {key_id: key}
json_result.setdefault(user_id, {}).setdefault(device_id, {})[
key_id
] = key
logger.info(
"Claimed one-time-keys: %s",

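A minimal illustration (with made-up data) of the bug this hunk fixes: the old assignment replaced the whole per-device dict on every iteration, so only the last one-time key per device survived:

```python
claimed = {"@bob:remote.example": {"DEV1": {"alg:key1": "k1", "alg:key2": "k2"}}}

old_result: dict = {}
new_result: dict = {}
for user_id, device_keys in claimed.items():
    for device_id, keys in device_keys.items():
        for key_id, key in keys.items():
            # old behaviour: clobbers previously-copied keys for this device
            old_result.setdefault(user_id, {})[device_id] = {key_id: key}
            # fixed behaviour: merges into the existing per-device dict
            new_result.setdefault(user_id, {}).setdefault(device_id, {})[key_id] = key

assert len(old_result["@bob:remote.example"]["DEV1"]) == 1  # one key lost
assert len(new_result["@bob:remote.example"]["DEV1"]) == 2  # both keys kept
```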

@@ -200,6 +200,7 @@ class FederationHandler:
)
@trace
@tag_args
async def maybe_backfill(
self, room_id: str, current_depth: int, limit: int
) -> bool:
@@ -214,6 +215,9 @@ class FederationHandler:
limit: The number of events that the pagination request will
return. This is used as part of the heuristic to decide if we
should back paginate.
Returns:
True if we actually tried to backfill something, otherwise False.
"""
# Starting the processing time here so we can include the room backfill
# linearizer lock queue in the timing
@@ -227,6 +231,8 @@ class FederationHandler:
processing_start_time=processing_start_time,
)
@trace
@tag_args
async def _maybe_backfill_inner(
self,
room_id: str,
@@ -247,6 +253,9 @@ class FederationHandler:
limit: The max number of events to request from the remote federated server.
processing_start_time: The time when `maybe_backfill` started processing.
Only used for timing. If `None`, no timing observation will be made.
Returns:
True if we actually tried to backfill something, otherwise False.
"""
backwards_extremities = [
_BackfillPoint(event_id, depth, _BackfillPointType.BACKWARDS_EXTREMITY)
@@ -302,15 +311,30 @@ class FederationHandler:
len(sorted_backfill_points),
sorted_backfill_points,
)
set_tag(
SynapseTags.RESULT_PREFIX + "sorted_backfill_points",
str(sorted_backfill_points),
)
set_tag(
SynapseTags.RESULT_PREFIX + "sorted_backfill_points.length",
str(len(sorted_backfill_points)),
)
# If we have no backfill points lower than the `current_depth` then
# either we can a) bail or b) still attempt to backfill. We opt to try
# backfilling anyway just in case we do get relevant events.
# If we have no backfill points lower than the `current_depth` then either we
# can a) bail or b) still attempt to backfill. We opt to try backfilling anyway
# just in case we do get relevant events. This is good for eventual consistency
# sake but we don't need to block the client for something that is just as
# likely not to return anything relevant so we backfill in the background. The
# only way, this could return something relevant is if we discover a new branch
# of history that extends all the way back to where we are currently paginating
# and it's within the 100 events that are returned from `/backfill`.
if not sorted_backfill_points and current_depth != MAX_DEPTH:
logger.debug(
"_maybe_backfill_inner: all backfill points are *after* current depth. Trying again with later backfill points."
)
return await self._maybe_backfill_inner(
run_as_background_process(
"_maybe_backfill_inner_anyway_with_max_depth",
self._maybe_backfill_inner,
room_id=room_id,
# We use `MAX_DEPTH` so that we find all backfill points next
# time (all events are below the `MAX_DEPTH`)
@@ -321,6 +345,9 @@ class FederationHandler:
# overall otherwise the smaller one will throw off the results.
processing_start_time=None,
)
# We return `False` because we're backfilling in the background and there is
# no new events immediately for the caller to know about yet.
return False
# Even after recursing with `MAX_DEPTH`, we didn't find any
# backward extremities to backfill from.


@@ -40,6 +40,11 @@ if TYPE_CHECKING:
logger = logging.getLogger(__name__)
# How many single event gaps we tolerate returning in a `/messages` response before we
# backfill and try to fill in the history. This is an arbitrarily picked number so feel
# free to tune it in the future.
BACKFILL_BECAUSE_TOO_MANY_GAPS_THRESHOLD = 3
@attr.s(slots=True, auto_attribs=True)
class PurgeStatus:
@@ -486,35 +491,35 @@ class PaginationHandler:
room_id, room_token.stream
)
if not use_admin_priviledge and membership == Membership.LEAVE:
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
# If they have left the room then clamp the token to be before
# they left the room, to save the effort of loading from the
# database.
if (
pagin_config.direction == Direction.BACKWARDS
and not use_admin_priviledge
and membership == Membership.LEAVE
):
# This is only None if the room is world_readable, in which case
# "Membership.JOIN" would have been returned and we should never hit
# this branch.
assert member_event_id
# This is only None if the room is world_readable, in which
# case "JOIN" would have been returned.
assert member_event_id
leave_token = await self.store.get_topological_token_for_event(
member_event_id
)
assert leave_token.topological is not None
if leave_token.topological < curr_topo:
from_token = from_token.copy_and_replace(
StreamKeyType.ROOM, leave_token
)
await self.hs.get_federation_handler().maybe_backfill(
room_id,
curr_topo,
limit=pagin_config.limit,
leave_token = await self.store.get_topological_token_for_event(
member_event_id
)
assert leave_token.topological is not None
if leave_token.topological < curr_topo:
from_token = from_token.copy_and_replace(
StreamKeyType.ROOM, leave_token
)
to_room_key = None
if pagin_config.to_token:
to_room_key = pagin_config.to_token.room_key
# Initially fetch the events from the database. With any luck, we can return
# these without blocking on backfill (handled below).
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
@@ -524,6 +529,94 @@ class PaginationHandler:
event_filter=event_filter,
)
if pagin_config.direction == Direction.BACKWARDS:
# We use a `Set` because there can be multiple events at a given depth
# and we only care about looking at the unique continuum of depths to
# find gaps.
event_depths: Set[int] = {event.depth for event in events}
sorted_event_depths = sorted(event_depths)
# Inspect the depths of the returned events to see if there are any gaps
found_big_gap = False
number_of_gaps = 0
previous_event_depth = (
sorted_event_depths[0] if len(sorted_event_depths) > 0 else 0
)
for event_depth in sorted_event_depths:
# We don't expect a negative depth but we'll just deal with it in
# any case by taking the absolute value to get the true gap between
# any two integers.
depth_gap = abs(event_depth - previous_event_depth)
# A `depth_gap` of 1 is a normal continuous chain to the next event
# (1 <-- 2 <-- 3) so anything larger indicates a missing event (it's
# also possible there is no event at a given depth but we can't ever
# know that for sure)
if depth_gap > 1:
number_of_gaps += 1
# We only tolerate a small number of single-event gaps in the
# returned events because those are most likely just events we've
# failed to pull in the past. Anything longer than that is probably
# a sign that we're missing a decent chunk of history and we should
# try to backfill it.
#
# XXX: It's possible we could tolerate longer gaps if we checked
# that a given event's `prev_events` is one that has failed pull
# attempts, and we could just treat it like a dead branch of history
# for now, or at least something that we don't need to block the
# client on to try pulling.
#
# XXX: If we had something like MSC3871 to indicate gaps in the
# timeline to the client, we could also get away with any sized gap
# and just have the client refetch the holes as they see fit.
if depth_gap > 2:
found_big_gap = True
break
previous_event_depth = event_depth
# Backfill in the foreground if we found a big gap, have too many holes,
# or we don't have enough events to fill the limit that the client asked
# for.
missing_too_many_events = (
number_of_gaps > BACKFILL_BECAUSE_TOO_MANY_GAPS_THRESHOLD
)
not_enough_events_to_fill_response = len(events) < pagin_config.limit
if (
found_big_gap
or missing_too_many_events
or not_enough_events_to_fill_response
):
did_backfill = (
await self.hs.get_federation_handler().maybe_backfill(
room_id,
curr_topo,
limit=pagin_config.limit,
)
)
# If we did backfill something, refetch the events from the database to
# catch anything new that might have been added since we last fetched.
if did_backfill:
events, next_key = await self.store.paginate_room_events(
room_id=room_id,
from_key=from_token.room_key,
to_key=to_room_key,
direction=pagin_config.direction,
limit=pagin_config.limit,
event_filter=event_filter,
)
else:
# Otherwise, we can backfill in the background for eventual
# consistency's sake but we don't need to block the client waiting
# for a costly federation call and processing.
run_as_background_process(
"maybe_backfill_in_the_background",
self.hs.get_federation_handler().maybe_backfill,
room_id,
curr_topo,
limit=pagin_config.limit,
)
next_token = from_token.copy_and_replace(StreamKeyType.ROOM, next_key)
# if no events are returned from pagination, that implies
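
As a concrete illustration of the gap heuristic above, here is a minimal,
self-contained sketch (made-up depths and a standalone copy of the threshold;
this is not the handler itself, which also refetches events after a foreground
backfill):

# Minimal sketch of the depth-gap heuristic above, with made-up depths.
# A gap of 1 is a continuous chain (1 <-- 2 <-- 3), a gap of 2 is a
# single-event hole, and anything bigger counts as a "big gap".
BACKFILL_BECAUSE_TOO_MANY_GAPS_THRESHOLD = 3

def should_backfill_in_foreground(depths: set, limit: int) -> bool:
    sorted_depths = sorted(depths)
    number_of_gaps = 0
    found_big_gap = False
    previous = sorted_depths[0] if sorted_depths else 0
    for depth in sorted_depths:
        gap = abs(depth - previous)
        if gap > 1:
            number_of_gaps += 1
        if gap > 2:
            found_big_gap = True
            break
        previous = depth
    return (
        found_big_gap
        or number_of_gaps > BACKFILL_BECAUSE_TOO_MANY_GAPS_THRESHOLD
        or len(depths) < limit
    )

# {1, 2, 4, 5} has one single-event hole (depth 3): backfill in the background.
assert should_backfill_in_foreground({1, 2, 4, 5}, limit=4) is False
# {1, 2, 6, 7} is missing depths 3-5, a big gap: block the client and backfill.
assert should_backfill_in_foreground({1, 2, 6, 7}, limit=4) is True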


@@ -648,7 +648,6 @@ class PresenceHandler(BasePresenceHandler):
def __init__(self, hs: "HomeServer"):
super().__init__(hs)
self.hs = hs
self.server_name = hs.hostname
self.wheel_timer: WheelTimer[str] = WheelTimer()
self.notifier = hs.get_notifier()
self._presence_enabled = hs.config.server.use_presence


@@ -27,7 +27,6 @@ logger = logging.getLogger(__name__)
class ReadMarkerHandler:
def __init__(self, hs: "HomeServer"):
self.server_name = hs.config.server.server_name
self.store = hs.get_datastores().main
self.account_data_handler = hs.get_account_data_handler()
self.read_marker_linearizer = Linearizer(name="read_marker")


@@ -872,6 +872,8 @@ class RoomCreationHandler:
visibility = config.get("visibility", "private")
is_public = visibility == "public"
self._validate_room_config(config, visibility)
room_id = await self._generate_and_create_room_id(
creator_id=user_id,
is_public=is_public,
@@ -1111,20 +1113,7 @@ class RoomCreationHandler:
return new_event, new_unpersisted_context
visibility = room_config.get("visibility", "private")
preset_config = room_config.get(
"preset",
RoomCreationPreset.PRIVATE_CHAT
if visibility == "private"
else RoomCreationPreset.PUBLIC_CHAT,
)
try:
config = self._presets_dict[preset_config]
except KeyError:
raise SynapseError(
400, f"'{preset_config}' is not a valid preset", errcode=Codes.BAD_JSON
)
preset_config, config = self._room_preset_config(room_config)
# MSC2175 removes the creator field from the create event.
if not room_version.msc2175_implicit_room_creator:
@@ -1306,6 +1295,65 @@ class RoomCreationHandler:
assert last_event.internal_metadata.stream_ordering is not None
return last_event.internal_metadata.stream_ordering, last_event.event_id, depth
def _validate_room_config(
self,
config: JsonDict,
visibility: str,
) -> None:
"""Checks configuration parameters for a /createRoom request.
If validation detects invalid parameters an exception may be raised to
cause room creation to be aborted and an error response to be returned
to the client.
Args:
config: A dict of configuration options. Originally from the body of
the /createRoom request
visibility: One of "public" or "private"
"""
# Validate the requested preset, raise a 400 error if not valid
preset_name, preset_config = self._room_preset_config(config)
# If the user is trying to create an encrypted room and this is forbidden
# by the configured default_power_level_content_override, then reject the
# request before the room is created.
raw_initial_state = config.get("initial_state", [])
room_encryption_event = any(
s.get("type", "") == EventTypes.RoomEncryption for s in raw_initial_state
)
if preset_config["encrypted"] or room_encryption_event:
if self._default_power_level_content_override:
override = self._default_power_level_content_override.get(preset_name)
if override is not None:
event_levels = override.get("events", {})
room_admin_level = event_levels.get(EventTypes.PowerLevels, 100)
encryption_level = event_levels.get(EventTypes.RoomEncryption, 100)
if encryption_level > room_admin_level:
raise SynapseError(
403,
f"You cannot create an encrypted room. user_level ({room_admin_level}) < send_level ({encryption_level})",
)
def _room_preset_config(self, room_config: JsonDict) -> Tuple[str, dict]:
# The spec says rooms should default to private visibility if
# `visibility` is not specified.
visibility = room_config.get("visibility", "private")
preset_name = room_config.get(
"preset",
RoomCreationPreset.PRIVATE_CHAT
if visibility == "private"
else RoomCreationPreset.PUBLIC_CHAT,
)
try:
preset_config = self._presets_dict[preset_name]
except KeyError:
raise SynapseError(
400, f"'{preset_name}' is not a valid preset", errcode=Codes.BAD_JSON
)
return preset_name, preset_config
def _generate_room_id(self) -> str:
"""Generates a random room ID.
@@ -1490,7 +1538,6 @@ class RoomContextHandler:
class TimestampLookupHandler:
def __init__(self, hs: "HomeServer"):
self.server_name = hs.hostname
self.store = hs.get_datastores().main
self.state_handler = hs.get_state_handler()
self.federation_client = hs.get_federation_client()
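
For intuition about the encryption check in `_validate_room_config` above, here
is a minimal sketch of just the power-level comparison, using a hypothetical
override dict (not the full handler):

# Minimal sketch of the encryption power-level check above. If the level
# required to send m.room.encryption exceeds the room admin level from the
# (hypothetical) default_power_level_content_override, creation is rejected.
from typing import Optional

def encrypted_room_allowed(override: Optional[dict]) -> bool:
    if override is None:
        return True
    event_levels = override.get("events", {})
    room_admin_level = event_levels.get("m.room.power_levels", 100)
    encryption_level = event_levels.get("m.room.encryption", 100)
    return encryption_level <= room_admin_level

# Mirrors the test case later in this diff: 999 > 100, so creation fails.
assert encrypted_room_allowed({"events": {"m.room.encryption": 999}}) is False
assert encrypted_room_allowed({"events": {"m.room.encryption": 50}}) is True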


@@ -42,7 +42,6 @@ class StatsHandler:
self.store = hs.get_datastores().main
self._storage_controllers = hs.get_storage_controllers()
self.state = hs.get_state_handler()
self.server_name = hs.hostname
self.clock = hs.get_clock()
self.notifier = hs.get_notifier()
self.is_mine_id = hs.is_mine_id


@@ -95,8 +95,6 @@ incoming_responses_counter = Counter(
)
MAX_LONG_RETRIES = 10
MAX_SHORT_RETRIES = 3
MAXINT = sys.maxsize
@@ -406,7 +404,12 @@ class MatrixFederationHttpClient:
self.clock = hs.get_clock()
self._store = hs.get_datastores().main
self.version_string_bytes = hs.version_string.encode("ascii")
self.default_timeout = 60
self.default_timeout = hs.config.federation.client_timeout / 1000
self.max_long_retry_delay = hs.config.federation.max_long_retry_delay / 1000
self.max_short_retry_delay = hs.config.federation.max_short_retry_delay / 1000
self.max_long_retries = hs.config.federation.max_long_retries
self.max_short_retries = hs.config.federation.max_short_retries
self._cooperator = Cooperator(scheduler=_make_scheduler(self.reactor))
@@ -499,8 +502,15 @@ class MatrixFederationHttpClient:
Note that the above intervals are *in addition* to the time spent
waiting for the request to complete (up to `timeout` ms).
NB: the long retry algorithm takes over 20 minutes to complete, with
a default timeout of 60s!
NB: the long retry algorithm takes over 20 minutes to complete, with a
default timeout of 60s! It's best not to use the `long_retries` option
for anything that blocks a client, so we don't make them wait for ages;
for some things, like sending transactions (server to server), we can
be a lot more lenient, but it's very fuzzy / hand-wavy. In the future,
we could be more intelligent about this sort of thing by looking at
the bigger picture,
https://github.com/matrix-org/synapse/issues/8917
ignore_backoff: true to ignore the historical backoff data
and try the request anyway.
@@ -528,10 +538,10 @@ class MatrixFederationHttpClient:
logger.exception(f"Invalid destination: {request.destination}.")
raise FederationDeniedError(request.destination)
if timeout:
_sec_timeout = timeout / 1000
else:
_sec_timeout = self.default_timeout
if timeout is None:
timeout = int(self.default_timeout)
_sec_timeout = timeout / 1000
if (
self.hs.config.federation.federation_domain_whitelist is not None
@@ -576,9 +586,9 @@ class MatrixFederationHttpClient:
# XXX: Would be much nicer to retry only at the transaction-layer
# (once we have reliable transactions in place)
if long_retries:
retries_left = MAX_LONG_RETRIES
retries_left = self.max_long_retries
else:
retries_left = MAX_SHORT_RETRIES
retries_left = self.max_short_retries
url_bytes = request.uri
url_str = url_bytes.decode("ascii")
@@ -723,12 +733,12 @@ class MatrixFederationHttpClient:
if retries_left and not timeout:
if long_retries:
delay = 4 ** (MAX_LONG_RETRIES + 1 - retries_left)
delay = min(delay, 60)
delay = 4 ** (self.max_long_retries + 1 - retries_left)
delay = min(delay, self.max_long_retry_delay)
delay *= random.uniform(0.8, 1.4)
else:
delay = 0.5 * 2 ** (MAX_SHORT_RETRIES - retries_left)
delay = min(delay, 2)
delay = 0.5 * 2 ** (self.max_short_retries - retries_left)
delay = min(delay, self.max_short_retry_delay)
delay *= random.uniform(0.8, 1.4)
logger.debug(
@@ -936,10 +946,9 @@ class MatrixFederationHttpClient:
timeout=timeout,
)
if timeout is not None:
_sec_timeout = timeout / 1000
else:
_sec_timeout = self.default_timeout
if timeout is None:
timeout = int(self.default_timeout)
_sec_timeout = timeout / 1000
if parser is None:
parser = cast(ByteParser[T], JsonParser())
@@ -1125,10 +1134,9 @@ class MatrixFederationHttpClient:
timeout=timeout,
)
if timeout is not None:
_sec_timeout = timeout / 1000
else:
_sec_timeout = self.default_timeout
if timeout is None:
timeout = int(self.default_timeout)
_sec_timeout = timeout / 1000
if parser is None:
parser = cast(ByteParser[T], JsonParser())
@@ -1201,10 +1209,9 @@ class MatrixFederationHttpClient:
ignore_backoff=ignore_backoff,
)
if timeout is not None:
_sec_timeout = timeout / 1000
else:
_sec_timeout = self.default_timeout
if timeout is None:
timeout = int(self.default_timeout)
_sec_timeout = timeout / 1000
body = await _handle_response(
self.reactor, _sec_timeout, request, response, start_ms, parser=JsonParser()
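
To see where the "over 20 minutes" warning in the docstring above comes from,
here is a back-of-the-envelope sketch of the long-retry schedule using the old
hard-coded values (MAX_LONG_RETRIES = 10, delay capped at 60s, default timeout
60s), ignoring the 0.8-1.4 random jitter:

# Rough timing of the long-retry schedule with the old constants.
MAX_LONG_RETRIES = 10
DEFAULT_TIMEOUT_SECS = 60

total_delay = sum(
    min(4 ** (MAX_LONG_RETRIES + 1 - retries_left), 60)
    for retries_left in range(MAX_LONG_RETRIES, 0, -1)
)
# Each attempt can also block for the full 60s timeout before failing.
worst_case = total_delay + MAX_LONG_RETRIES * DEFAULT_TIMEOUT_SECS
print(total_delay, worst_case)  # 500 1100: ~18 minutes, over 20 with jitter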


@@ -76,7 +76,7 @@ class ReplicationEndpointFactory:
endpoint = wrapClientTLS(
# The 'port' argument below isn't actually used by the function
self.context_factory.creatorForNetloc(
self.instance_map[worker_name].host,
self.instance_map[worker_name].host.encode("utf-8"),
self.instance_map[worker_name].port,
),
endpoint,


@@ -171,6 +171,7 @@ from functools import wraps
from typing import (
TYPE_CHECKING,
Any,
Awaitable,
Callable,
Collection,
ContextManager,
@@ -903,6 +904,7 @@ def _custom_sync_async_decorator(
"""
if inspect.iscoroutinefunction(func):
# For this branch, we handle async functions like `async def func() -> RInner`.
# In this branch, R = Awaitable[RInner], for some other type RInner
@wraps(func)
async def _wrapper(
@@ -914,15 +916,16 @@ def _custom_sync_async_decorator(
return await func(*args, **kwargs) # type: ignore[misc]
else:
# The other case here handles both sync functions and those
# decorated with inlineDeferred.
# The other case here handles sync functions including those decorated with
# `@defer.inlineCallbacks` or that return a `Deferred` or other `Awaitable`.
@wraps(func)
def _wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
def _wrapper(*args: P.args, **kwargs: P.kwargs) -> Any:
scope = wrapping_logic(func, *args, **kwargs)
scope.__enter__()
try:
result = func(*args, **kwargs)
if isinstance(result, defer.Deferred):
def call_back(result: R) -> R:
@@ -930,20 +933,32 @@ def _custom_sync_async_decorator(
return result
def err_back(result: R) -> R:
# TODO: Pass the error details into `scope.__exit__(...)` for
# consistency with the other paths.
scope.__exit__(None, None, None)
return result
result.addCallbacks(call_back, err_back)
else:
if inspect.isawaitable(result):
logger.error(
"@trace may not have wrapped %s correctly! "
"The function is not async but returned a %s.",
func.__qualname__,
type(result).__name__,
)
elif inspect.isawaitable(result):
async def wrap_awaitable() -> Any:
try:
assert isinstance(result, Awaitable)
awaited_result = await result
scope.__exit__(None, None, None)
return awaited_result
except Exception as e:
scope.__exit__(type(e), None, e.__traceback__)
raise
# The original method returned an awaitable, e.g. a coroutine, so we
# create another awaitable wrapping it that calls
# `scope.__exit__(...)`.
return wrap_awaitable()
else:
# Just a simple sync function so we can just exit the scope and
# return the result without any fuss.
scope.__exit__(None, None, None)
return result
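
The awaitable branch above is the interesting one: a plain (non-`async`)
function can still return a coroutine, and the scope must stay open until that
coroutine actually finishes. Here is a stripped-down sketch of the pattern (a
stand-in `Scope`, no tracing or Deferred handling):

# Stripped-down sketch of the "wrap the returned awaitable" pattern above.
import asyncio
import inspect
from functools import wraps
from typing import Any, Awaitable, Callable

class Scope:
    """Stand-in for the tracing scope: just records enter/exit."""
    def __enter__(self) -> "Scope":
        print("scope opened")
        return self
    def __exit__(self, *exc_info: Any) -> None:
        print("scope closed")

def exit_scope_when_done(func: Callable[..., Any]) -> Callable[..., Any]:
    @wraps(func)
    def _wrapper(*args: Any, **kwargs: Any) -> Any:
        scope = Scope()
        scope.__enter__()
        result = func(*args, **kwargs)
        if inspect.isawaitable(result):
            # The scope must stay open until the awaitable resolves, so wrap
            # it in another awaitable that exits the scope afterwards.
            async def wrap_awaitable() -> Any:
                try:
                    return await result
                finally:
                    scope.__exit__(None, None, None)
            return wrap_awaitable()
        # Plain sync result: exit immediately.
        scope.__exit__(None, None, None)
        return result
    return _wrapper

@exit_scope_when_done
def returns_coroutine() -> Awaitable[str]:
    async def inner() -> str:
        return "foo"
    return inner()  # a coroutine, returned without `await`

assert asyncio.run(returns_coroutine()) == "foo"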


@@ -77,6 +77,8 @@ RegistryProxy = cast(CollectorRegistry, _RegistryProxy)
@attr.s(slots=True, hash=True, auto_attribs=True)
class LaterGauge(Collector):
"""A Gauge which periodically calls a user-provided callback to produce metrics."""
name: str
desc: str
labels: Optional[Sequence[str]] = attr.ib(hash=False)


@@ -120,9 +120,6 @@ class BulkPushRuleEvaluator:
self.should_calculate_push_rules = self.hs.config.push.enable_push
self._related_event_match_enabled = self.hs.config.experimental.msc3664_enabled
self._intentional_mentions_enabled = (
self.hs.config.experimental.msc3952_intentional_mentions
)
self.room_push_rule_cache_metrics = register_cache(
"cache",
@@ -390,10 +387,7 @@ class BulkPushRuleEvaluator:
del notification_levels[key]
# Pull out any user and room mentions.
has_mentions = (
self._intentional_mentions_enabled
and EventContentFields.MSC3952_MENTIONS in event.content
)
has_mentions = EventContentFields.MENTIONS in event.content
evaluator = PushRuleEvaluator(
_flatten_dict(event),


@@ -124,8 +124,6 @@ class VersionsRestServlet(RestServlet):
is not None,
# Adds support for relation-based redactions as per MSC3912.
"org.matrix.msc3912": self.config.experimental.msc3912_enabled,
# Adds support for unstable "intentional mentions" behaviour.
"org.matrix.msc3952_intentional_mentions": self.config.experimental.msc3952_intentional_mentions,
# Whether recursively provide relations is supported.
"org.matrix.msc3981": self.config.experimental.msc3981_recurse_relations,
# Adds support for deleting account data.


@@ -39,7 +39,6 @@ class UploadResource(DirectServeJsonResource):
self.filepaths = media_repo.filepaths
self.store = hs.get_datastores().main
self.clock = hs.get_clock()
self.server_name = hs.hostname
self.auth = hs.get_auth()
self.max_upload_size = hs.config.media.max_upload_size
self.clock = hs.get_clock()


@@ -86,9 +86,14 @@ class SQLBaseStore(metaclass=ABCMeta):
room_id: Room where state changed
members_changed: The user_ids of members that have changed
"""
# XXX: If you add something to this function make sure you add it to
# `_invalidate_state_caches_all` as well.
# If there were any membership changes, purge the appropriate caches.
for host in {get_domain_from_id(u) for u in members_changed}:
self._attempt_to_invalidate_cache("is_host_joined", (room_id, host))
self._attempt_to_invalidate_cache("is_host_invited", (room_id, host))
if members_changed:
self._attempt_to_invalidate_cache("get_users_in_room", (room_id,))
self._attempt_to_invalidate_cache("get_current_hosts_in_room", (room_id,))
@@ -117,6 +122,32 @@ class SQLBaseStore(metaclass=ABCMeta):
self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
self._attempt_to_invalidate_cache("get_partial_current_state_ids", (room_id,))
def _invalidate_state_caches_all(self, room_id: str) -> None:
"""Invalidates caches that are based on the current state, but does
not stream invalidations down replication.
Same as `_invalidate_state_caches`, except that works when we don't know
which memberships have changed.
Args:
room_id: Room where state changed
"""
self._attempt_to_invalidate_cache("get_partial_current_state_ids", (room_id,))
self._attempt_to_invalidate_cache("get_users_in_room", (room_id,))
self._attempt_to_invalidate_cache("is_host_invited", None)
self._attempt_to_invalidate_cache("is_host_joined", None)
self._attempt_to_invalidate_cache("get_current_hosts_in_room", (room_id,))
self._attempt_to_invalidate_cache("get_users_in_room_with_profiles", (room_id,))
self._attempt_to_invalidate_cache("get_number_joined_users_in_room", (room_id,))
self._attempt_to_invalidate_cache("get_local_users_in_room", (room_id,))
self._attempt_to_invalidate_cache("does_pair_of_users_share_a_room", None)
self._attempt_to_invalidate_cache("get_user_in_room_with_profile", None)
self._attempt_to_invalidate_cache(
"get_rooms_for_user_with_stream_ordering", None
)
self._attempt_to_invalidate_cache("get_rooms_for_user", None)
self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
def _attempt_to_invalidate_cache(
self, cache_name: str, key: Optional[Collection[Any]]
) -> bool:


@@ -12,6 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from enum import IntEnum
from types import TracebackType
from typing import (
TYPE_CHECKING,
@@ -136,6 +137,15 @@ class BackgroundUpdatePerformance:
return float(self.total_item_count) / float(self.total_duration_ms)
class UpdaterStatus(IntEnum):
# Use negative values for error conditions.
ABORTED = -1
DISABLED = 0
NOT_STARTED = 1
RUNNING_UPDATE = 2
COMPLETE = 3
class BackgroundUpdater:
"""Background updates are updates to the database that run in the
background. Each update processes a batch of data at once. We attempt to
@@ -158,11 +168,16 @@ class BackgroundUpdater:
self._background_update_performance: Dict[str, BackgroundUpdatePerformance] = {}
self._background_update_handlers: Dict[str, _BackgroundUpdateHandler] = {}
# TODO: all these bool flags make me feel icky---can we combine into a status
# enum?
self._all_done = False
# Whether we're currently running updates
self._running = False
# Marker to be set if we abort and halt all background updates.
self._aborted = False
# Whether background updates are enabled. This allows us to
# enable/disable background updates via the admin API.
self.enabled = True
@@ -175,6 +190,20 @@ class BackgroundUpdater:
self.sleep_duration_ms = hs.config.background_updates.sleep_duration_ms
self.sleep_enabled = hs.config.background_updates.sleep_enabled
def get_status(self) -> UpdaterStatus:
"""An integer summarising the updater status. Used as a metric."""
if self._aborted:
return UpdaterStatus.ABORTED
# TODO: a status for "have seen at least one failure, but haven't aborted yet".
if not self.enabled:
return UpdaterStatus.DISABLED
if self._all_done:
return UpdaterStatus.COMPLETE
if self._running:
return UpdaterStatus.RUNNING_UPDATE
return UpdaterStatus.NOT_STARTED
def register_update_controller_callbacks(
self,
on_update: ON_UPDATE_CALLBACK,
@@ -296,6 +325,7 @@ class BackgroundUpdater:
except Exception:
back_to_back_failures += 1
if back_to_back_failures >= 5:
self._aborted = True
raise RuntimeError(
"5 back-to-back background update failures; aborting."
)


@@ -839,9 +839,8 @@ class EventsPersistenceStorageController:
"group" % (ev.event_id,)
)
continue
if ctx.prev_group:
state_group_deltas[(ctx.prev_group, ctx.state_group)] = ctx.delta_ids
if ctx.state_group_deltas:
state_group_deltas.update(ctx.state_group_deltas)
# We need to map the event_ids to their state groups. First, let's
# check if the event is one we're persisting, in which case we can


@@ -54,7 +54,7 @@ from synapse.logging.context import (
current_context,
make_deferred_yieldable,
)
from synapse.metrics import register_threadpool
from synapse.metrics import LaterGauge, register_threadpool
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.background_updates import BackgroundUpdater
from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine
@@ -547,6 +547,12 @@ class DatabasePool:
self._db_pool = make_pool(hs.get_reactor(), database_config, engine)
self.updates = BackgroundUpdater(hs, self)
LaterGauge(
"synapse_background_update_status",
"Background update status",
[],
self.updates.get_status,
)
self._previous_txn_total_time = 0.0
self._current_txn_total_time = 0.0
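
For reference, the same status-as-metric pattern can be sketched outside
Synapse with stock `prometheus_client` (an assumption for illustration;
`LaterGauge` above is essentially a callback-driven gauge):

# Minimal sketch of the status-as-gauge pattern using stock prometheus_client
# rather than Synapse's LaterGauge; the callback runs on every scrape, so the
# metric always reflects the latest status without explicit .set() calls.
from enum import IntEnum
from prometheus_client import Gauge

class UpdaterStatus(IntEnum):
    ABORTED = -1
    DISABLED = 0
    NOT_STARTED = 1
    RUNNING_UPDATE = 2
    COMPLETE = 3

current_status = UpdaterStatus.NOT_STARTED

gauge = Gauge("background_update_status", "Background update status")
gauge.set_function(lambda: int(current_status))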


@@ -46,6 +46,12 @@ logger = logging.getLogger(__name__)
# based on the current state when notifying workers over replication.
CURRENT_STATE_CACHE_NAME = "cs_cache_fake"
# As above, but for invalidating event caches on history deletion
PURGE_HISTORY_CACHE_NAME = "ph_cache_fake"
# As above, but for invalidating room caches on room deletion
DELETE_ROOM_CACHE_NAME = "dr_cache_fake"
class CacheInvalidationWorkerStore(SQLBaseStore):
def __init__(
@@ -175,6 +181,23 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
room_id = row.keys[0]
members_changed = set(row.keys[1:])
self._invalidate_state_caches(room_id, members_changed)
elif row.cache_func == PURGE_HISTORY_CACHE_NAME:
if row.keys is None:
raise Exception(
"Can't send an 'invalidate all' for 'purge history' cache"
)
room_id = row.keys[0]
self._invalidate_caches_for_room_events(room_id)
elif row.cache_func == DELETE_ROOM_CACHE_NAME:
if row.keys is None:
raise Exception(
"Can't send an 'invalidate all' for 'delete room' cache"
)
room_id = row.keys[0]
self._invalidate_caches_for_room_events(room_id)
self._invalidate_caches_for_room(room_id)
else:
self._attempt_to_invalidate_cache(row.cache_func, row.keys)
@@ -226,6 +249,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
relates_to: Optional[str],
backfilled: bool,
) -> None:
# XXX: If you add something to this function make sure you add it to
# `_invalidate_caches_for_room_events` as well.
# This invalidates any local in-memory cached event objects, the original
# process triggering the invalidation is responsible for clearing any external
# cached objects.
@@ -271,6 +297,106 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache("get_thread_participated", (relates_to,))
self._attempt_to_invalidate_cache("get_threads", (room_id,))
def _invalidate_caches_for_room_events_and_stream(
self, txn: LoggingTransaction, room_id: str
) -> None:
"""Invalidate caches associated with events in a room, and stream to
replication.
Used when we delete events in a room, but don't know which events we've
deleted.
"""
self._send_invalidation_to_replication(txn, PURGE_HISTORY_CACHE_NAME, [room_id])
txn.call_after(self._invalidate_caches_for_room_events, room_id)
def _invalidate_caches_for_room_events(self, room_id: str) -> None:
"""Invalidate caches associated with events in a room, and stream to
replication.
Used when we delete events in a room, but don't know which events we've
deleted.
"""
self._invalidate_local_get_event_cache_all() # type: ignore[attr-defined]
self._attempt_to_invalidate_cache("have_seen_event", (room_id,))
self._attempt_to_invalidate_cache("get_latest_event_ids_in_room", (room_id,))
self._attempt_to_invalidate_cache(
"get_unread_event_push_actions_by_room_for_user", (room_id,)
)
self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)
self._attempt_to_invalidate_cache("get_relations_for_event", None)
self._attempt_to_invalidate_cache("get_applicable_edit", None)
self._attempt_to_invalidate_cache("get_thread_id", None)
self._attempt_to_invalidate_cache("get_thread_id_for_receipts", None)
self._attempt_to_invalidate_cache("get_invited_rooms_for_local_user", None)
self._attempt_to_invalidate_cache(
"get_rooms_for_user_with_stream_ordering", None
)
self._attempt_to_invalidate_cache("get_rooms_for_user", None)
self._attempt_to_invalidate_cache("get_references_for_event", None)
self._attempt_to_invalidate_cache("get_thread_summary", None)
self._attempt_to_invalidate_cache("get_thread_participated", None)
self._attempt_to_invalidate_cache("get_threads", (room_id,))
self._attempt_to_invalidate_cache("_get_state_group_for_event", None)
self._attempt_to_invalidate_cache("get_event_ordering", None)
self._attempt_to_invalidate_cache("is_partial_state_event", None)
self._attempt_to_invalidate_cache("_get_joined_profile_from_event_id", None)
def _invalidate_caches_for_room_and_stream(
self, txn: LoggingTransaction, room_id: str
) -> None:
"""Invalidate caches associated with rooms, and stream to replication.
Used when we delete rooms.
"""
self._send_invalidation_to_replication(txn, DELETE_ROOM_CACHE_NAME, [room_id])
txn.call_after(self._invalidate_caches_for_room, room_id)
def _invalidate_caches_for_room(self, room_id: str) -> None:
"""Invalidate caches associated with rooms.
Used when we delete rooms.
"""
# If we've deleted the room then we also need to purge all event caches.
self._invalidate_caches_for_room_events(room_id)
self._attempt_to_invalidate_cache("get_account_data_for_room", None)
self._attempt_to_invalidate_cache("get_account_data_for_room_and_type", None)
self._attempt_to_invalidate_cache("get_aliases_for_room", (room_id,))
self._attempt_to_invalidate_cache("get_latest_event_ids_in_room", (room_id,))
self._attempt_to_invalidate_cache("_get_forward_extremeties_for_room", None)
self._attempt_to_invalidate_cache(
"get_unread_event_push_actions_by_room_for_user", (room_id,)
)
self._attempt_to_invalidate_cache(
"_get_linearized_receipts_for_room", (room_id,)
)
self._attempt_to_invalidate_cache("is_room_blocked", (room_id,))
self._attempt_to_invalidate_cache("get_retention_policy_for_room", (room_id,))
self._attempt_to_invalidate_cache(
"_get_partial_state_servers_at_join", (room_id,)
)
self._attempt_to_invalidate_cache("is_partial_state_room", (room_id,))
self._attempt_to_invalidate_cache("get_invited_rooms_for_local_user", None)
self._attempt_to_invalidate_cache(
"get_current_hosts_in_room_ordered", (room_id,)
)
self._attempt_to_invalidate_cache("did_forget", None)
self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)
self._attempt_to_invalidate_cache("get_room_version_id", (room_id,))
# And delete state caches.
self._invalidate_state_caches_all(room_id)
async def invalidate_cache_and_stream(
self, cache_name: str, keys: Tuple[Any, ...]
) -> None:
@@ -377,6 +503,14 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
"Can't stream invalidate all with magic current state cache"
)
if cache_name == PURGE_HISTORY_CACHE_NAME and keys is None:
raise Exception(
"Can't stream invalidate all with magic purge history cache"
)
if cache_name == DELETE_ROOM_CACHE_NAME and keys is None:
raise Exception("Can't stream invalidate all with magic delete room cache")
if isinstance(self.database_engine, PostgresEngine):
assert self._cache_id_gen is not None


@@ -903,6 +903,15 @@ class EventsWorkerStore(SQLBaseStore):
self._event_ref.pop(event_id, None)
self._current_event_fetches.pop(event_id, None)
def _invalidate_local_get_event_cache_all(self) -> None:
"""Clears the in-memory get event caches.
Used when we purge room history.
"""
self._get_event_cache.clear()
self._event_ref.clear()
self._current_event_fetches.clear()
async def _get_events_from_cache(
self, events: Iterable[str], update_metrics: bool = True
) -> Dict[str, EventCacheEntry]:


@@ -308,6 +308,8 @@ class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore):
logger.info("[purge] done")
self._invalidate_caches_for_room_events_and_stream(txn, room_id)
return referenced_state_groups
async def purge_room(self, room_id: str) -> List[int]:
@@ -485,10 +487,6 @@ class PurgeEventsStore(StateGroupWorkerStore, CacheInvalidationWorkerStore):
# index on them. In any case we should be clearing out 'stream' tables
# periodically anyway (#5888)
# TODO: we could probably usefully do a bunch more cache invalidation here
# XXX: as with purge_history, this is racy, but no worse than other races
# that already exist.
self._invalidate_cache_and_stream(txn, self.have_seen_event, (room_id,))
self._invalidate_caches_for_room_and_stream(txn, room_id)
return state_groups


@@ -88,7 +88,6 @@ def _load_rules(
msc1767_enabled=experimental_config.msc1767_enabled,
msc3664_enabled=experimental_config.msc3664_enabled,
msc3381_polls_enabled=experimental_config.msc3381_polls_enabled,
msc3952_intentional_mentions=experimental_config.msc3952_intentional_mentions,
msc3958_suppress_edits_enabled=experimental_config.msc3958_supress_edit_notifs,
)


@@ -927,11 +927,10 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
raise Exception("Invalid host name")
sql = """
SELECT state_key FROM current_state_events AS c
INNER JOIN room_memberships AS m USING (event_id)
WHERE m.membership = ?
SELECT state_key FROM current_state_events
WHERE membership = ?
AND type = 'm.room.member'
AND c.room_id = ?
AND room_id = ?
AND state_key LIKE ?
LIMIT 1
"""
@@ -1461,7 +1460,6 @@ class RoomMemberBackgroundUpdateStore(SQLBaseStore):
SELECT stream_ordering, event_id, events.room_id, event_json.json
FROM events
INNER JOIN event_json USING (event_id)
INNER JOIN room_memberships USING (event_id)
WHERE ? <= stream_ordering AND stream_ordering < ?
AND type = 'm.room.member'
ORDER BY stream_ordering DESC


@@ -1061,12 +1061,15 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
# The array of numbers gives the weights for the various parts of the
# search: (domain, _, display name, localpart)
sql = """
WITH matching_users AS (
SELECT user_id, vector FROM user_directory_search WHERE vector @@ to_tsquery('simple', ?)
LIMIT 10000
)
SELECT d.user_id AS user_id, display_name, avatar_url
FROM user_directory_search as t
FROM matching_users as t
INNER JOIN user_directory AS d USING (user_id)
WHERE
%(where_clause)s
AND vector @@ to_tsquery('simple', ?)
ORDER BY
(CASE WHEN d.user_id IS NOT NULL THEN 4.0 ELSE 1.0 END)
* (CASE WHEN display_name IS NOT NULL THEN 1.2 ELSE 1.0 END)
@@ -1095,8 +1098,9 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
"order_case_statements": " ".join(additional_ordering_statements),
}
args = (
join_args
+ (full_query, exact_query, prefix_query)
(full_query,)
+ join_args
+ (exact_query, prefix_query)
+ ordering_arguments
+ (limit + 1,)
)


@@ -21,7 +21,27 @@ DELETE FROM background_updates WHERE update_name = 'event_push_backfill_thread_i
-- Overwrite any null thread_id values.
UPDATE event_push_actions_staging SET thread_id = 'main' WHERE thread_id IS NULL;
UPDATE event_push_actions SET thread_id = 'main' WHERE thread_id IS NULL;
UPDATE event_push_summary SET thread_id = 'main' WHERE thread_id IS NULL;
-- Empirically we can end up with entries in the push summary table with both a
-- `NULL` and `main` thread ID, which causes the insert below to fail. We fudge
-- this by deleting any `NULL` rows that have a corresponding `main`.
DELETE FROM event_push_summary AS a WHERE thread_id IS NULL AND EXISTS (
SELECT 1 FROM event_push_summary AS b
WHERE b.thread_id = 'main' AND a.user_id = b.user_id AND a.room_id = b.room_id
);
-- Copy the NULL threads to have a 'main' thread ID.
--
-- Note: Some people seem to have duplicate rows with a `NULL` thread ID, in
-- which case we just fudge it by using MAX of the values. The counts *may* be
-- wrong for such rooms, but a) it's an edge case, and b) they'll be fixed when
-- the user reads the room.
INSERT INTO event_push_summary (user_id, room_id, notif_count, stream_ordering, unread_count, last_receipt_stream_ordering, thread_id)
SELECT user_id, room_id, MAX(notif_count), MAX(stream_ordering), MAX(unread_count), MAX(last_receipt_stream_ordering), 'main'
FROM event_push_summary
WHERE thread_id IS NULL
GROUP BY user_id, room_id, thread_id;
DELETE FROM event_push_summary AS a WHERE thread_id IS NULL;
-- Drop the background updates to calculate the indexes used to find null thread_ids.
DELETE FROM background_updates WHERE update_name = 'event_push_actions_thread_id_null';
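
For intuition, here is a tiny in-memory rendition (in Python, not part of the
migration) of what the MAX() fudge above does to duplicate NULL-thread rows:

# Tiny in-memory rendition of the MAX() dedup in the migration above: for each
# (user_id, room_id) with a NULL thread_id, keep the max of the count and
# relabel the thread as 'main'. As the comment says, counts *may* be wrong.
from collections import defaultdict

rows = [
    # (user_id, room_id, thread_id, notif_count)
    ("@a:x", "!r:x", None, 2),
    ("@a:x", "!r:x", None, 5),  # duplicate NULL row for the same room
    ("@b:x", "!r:x", None, 1),
]

merged: dict = defaultdict(int)
for user_id, room_id, thread_id, notif_count in rows:
    if thread_id is None:
        key = (user_id, room_id, "main")
        merged[key] = max(merged[key], notif_count)

assert merged[("@a:x", "!r:x", "main")] == 5
assert merged[("@b:x", "!r:x", "main")] == 1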


@@ -862,5 +862,5 @@ class AsyncLruCache(Generic[KT, VT]):
async def contains(self, key: KT) -> bool:
return self._lru_cache.contains(key)
async def clear(self) -> None:
def clear(self) -> None:
self._lru_cache.clear()


@@ -101,8 +101,7 @@ class TestEventContext(unittest.HomeserverTestCase):
self.assertEqual(
context.state_group_before_event, d_context.state_group_before_event
)
self.assertEqual(context.prev_group, d_context.prev_group)
self.assertEqual(context.delta_ids, d_context.delta_ids)
self.assertEqual(context.state_group_deltas, d_context.state_group_deltas)
self.assertEqual(context.app_service, d_context.app_service)
self.assertEqual(


@@ -163,7 +163,7 @@ class SyncTestCase(tests.unittest.HomeserverTestCase):
# Blow away caches (supported room versions can only change due to a restart).
self.store.get_rooms_for_user_with_stream_ordering.invalidate_all()
self.store.get_rooms_for_user.invalidate_all()
self.get_success(self.store._get_event_cache.clear())
self.store._get_event_cache.clear()
self.store._event_ref.clear()
# The rooms should be excluded from the sync response.


@@ -40,7 +40,7 @@ from synapse.server import HomeServer
from synapse.util import Clock
from tests.server import FakeTransport
from tests.unittest import HomeserverTestCase
from tests.unittest import HomeserverTestCase, override_config
def check_logcontext(context: LoggingContextOrSentinel) -> None:
@@ -640,3 +640,21 @@ class FederationClientTests(HomeserverTestCase):
self.cl.build_auth_headers(
b"", b"GET", b"https://example.com", destination_is=b""
)
@override_config(
{
"federation": {
"client_timeout": "180s",
"max_long_retry_delay": "100s",
"max_short_retry_delay": "7s",
"max_long_retries": 20,
"max_short_retries": 5,
}
}
)
def test_configurable_retry_and_delay_values(self) -> None:
self.assertEqual(self.cl.default_timeout, 180)
self.assertEqual(self.cl.max_long_retry_delay, 100)
self.assertEqual(self.cl.max_short_retry_delay, 7)
self.assertEqual(self.cl.max_long_retries, 20)
self.assertEqual(self.cl.max_short_retries, 5)
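
The string values above ("180s", "100s", ...) go through duration parsing into
milliseconds, which the client then divides by 1000, hence the assertions in
seconds. A simplified stand-in for that parsing (the real helper is Synapse's
`parse_duration`, mentioned in the commit log):

# Simplified stand-in for the duration parsing behind the config above:
# "180s" -> 180000 ms, which the HTTP client divides by 1000 -> 180 s.
import re

_UNITS_MS = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000}

def parse_duration_ms(value: str) -> int:
    match = re.fullmatch(r"(\d+)(ms|s|m|h)", value)
    if match is None:
        raise ValueError(f"bad duration: {value!r}")
    return int(match.group(1)) * _UNITS_MS[match.group(2)]

assert parse_duration_ms("180s") / 1000 == 180  # default_timeout
assert parse_duration_ms("7s") / 1000 == 7      # max_short_retry_delay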


@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import cast
from typing import Awaitable, cast
from twisted.internet import defer
from twisted.test.proto_helpers import MemoryReactorClock
@@ -227,8 +227,6 @@ class LogContextScopeManagerTestCase(TestCase):
Test whether we can use `@trace_with_opname` (`@trace`) and `@tag_args`
with functions that return deferreds
"""
reactor = MemoryReactorClock()
with LoggingContext("root context"):
@trace_with_opname("fixture_deferred_func", tracer=self._tracer)
@@ -240,9 +238,6 @@ class LogContextScopeManagerTestCase(TestCase):
result_d1 = fixture_deferred_func()
# let the tasks complete
reactor.pump((2,) * 8)
self.assertEqual(self.successResultOf(result_d1), "foo")
# the span should have been reported
@@ -256,8 +251,6 @@ class LogContextScopeManagerTestCase(TestCase):
Test whether we can use `@trace_with_opname` (`@trace`) and `@tag_args`
with async functions
"""
reactor = MemoryReactorClock()
with LoggingContext("root context"):
@trace_with_opname("fixture_async_func", tracer=self._tracer)
@@ -267,9 +260,6 @@ class LogContextScopeManagerTestCase(TestCase):
d1 = defer.ensureDeferred(fixture_async_func())
# let the tasks complete
reactor.pump((2,) * 8)
self.assertEqual(self.successResultOf(d1), "foo")
# the span should have been reported
@@ -277,3 +267,34 @@ class LogContextScopeManagerTestCase(TestCase):
[span.operation_name for span in self._reporter.get_spans()],
["fixture_async_func"],
)
def test_trace_decorator_awaitable_return(self) -> None:
"""
Test whether we can use `@trace_with_opname` (`@trace`) and `@tag_args`
with functions that return an awaitable (e.g. a coroutine)
"""
with LoggingContext("root context"):
# Something we can return without `await` to get a coroutine
async def fixture_async_func() -> str:
return "foo"
# The actual kind of function we want to test: one that returns an awaitable
@trace_with_opname("fixture_awaitable_return_func", tracer=self._tracer)
@tag_args
def fixture_awaitable_return_func() -> Awaitable[str]:
return fixture_async_func()
# Something we can run with `defer.ensureDeferred(runner())` to pump the
# whole chain of async tasks through to completion.
async def runner() -> str:
return await fixture_awaitable_return_func()
d1 = defer.ensureDeferred(runner())
self.assertEqual(self.successResultOf(d1), "foo")
# the span should have been reported
self.assertEqual(
[span.operation_name for span in self._reporter.get_spans()],
["fixture_awaitable_return_func"],
)


@@ -228,7 +228,6 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
)
return len(result) > 0
@override_config({"experimental_features": {"msc3952_intentional_mentions": True}})
def test_user_mentions(self) -> None:
"""Test the behavior of an event which includes invalid user mentions."""
bulk_evaluator = BulkPushRuleEvaluator(self.hs)
@@ -237,9 +236,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
self.assertFalse(self._create_and_process(bulk_evaluator))
# An empty mentions field should not notify.
self.assertFalse(
self._create_and_process(
bulk_evaluator, {EventContentFields.MSC3952_MENTIONS: {}}
)
self._create_and_process(bulk_evaluator, {EventContentFields.MENTIONS: {}})
)
# Non-dict mentions should be ignored.
@@ -253,7 +250,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
for mentions in (None, True, False, 1, "foo", []):
self.assertFalse(
self._create_and_process(
bulk_evaluator, {EventContentFields.MSC3952_MENTIONS: mentions}
bulk_evaluator, {EventContentFields.MENTIONS: mentions}
)
)
@@ -262,7 +259,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
self.assertFalse(
self._create_and_process(
bulk_evaluator,
{EventContentFields.MSC3952_MENTIONS: {"user_ids": mentions}},
{EventContentFields.MENTIONS: {"user_ids": mentions}},
)
)
@@ -270,14 +267,14 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
self.assertTrue(
self._create_and_process(
bulk_evaluator,
{EventContentFields.MSC3952_MENTIONS: {"user_ids": [self.alice]}},
{EventContentFields.MENTIONS: {"user_ids": [self.alice]}},
)
)
self.assertTrue(
self._create_and_process(
bulk_evaluator,
{
EventContentFields.MSC3952_MENTIONS: {
EventContentFields.MENTIONS: {
"user_ids": ["@another:test", self.alice]
}
},
@@ -288,11 +285,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
self.assertTrue(
self._create_and_process(
bulk_evaluator,
{
EventContentFields.MSC3952_MENTIONS: {
"user_ids": [self.alice, self.alice]
}
},
{EventContentFields.MENTIONS: {"user_ids": [self.alice, self.alice]}},
)
)
@@ -307,7 +300,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
self._create_and_process(
bulk_evaluator,
{
EventContentFields.MSC3952_MENTIONS: {
EventContentFields.MENTIONS: {
"user_ids": [None, True, False, {}, []]
}
},
@@ -317,7 +310,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
self._create_and_process(
bulk_evaluator,
{
EventContentFields.MSC3952_MENTIONS: {
EventContentFields.MENTIONS: {
"user_ids": [None, True, False, {}, [], self.alice]
}
},
@@ -331,12 +324,11 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
{
"body": self.alice,
"msgtype": "m.text",
EventContentFields.MSC3952_MENTIONS: {},
EventContentFields.MENTIONS: {},
},
)
)
@override_config({"experimental_features": {"msc3952_intentional_mentions": True}})
def test_room_mentions(self) -> None:
"""Test the behavior of an event which includes invalid room mentions."""
bulk_evaluator = BulkPushRuleEvaluator(self.hs)
@@ -344,7 +336,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
# Room mentions from those without power should not notify.
self.assertFalse(
self._create_and_process(
bulk_evaluator, {EventContentFields.MSC3952_MENTIONS: {"room": True}}
bulk_evaluator, {EventContentFields.MENTIONS: {"room": True}}
)
)
@@ -358,7 +350,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
)
self.assertTrue(
self._create_and_process(
bulk_evaluator, {EventContentFields.MSC3952_MENTIONS: {"room": True}}
bulk_evaluator, {EventContentFields.MENTIONS: {"room": True}}
)
)
@@ -374,7 +366,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
self.assertFalse(
self._create_and_process(
bulk_evaluator,
{EventContentFields.MSC3952_MENTIONS: {"room": mentions}},
{EventContentFields.MENTIONS: {"room": mentions}},
)
)
@@ -385,7 +377,7 @@ class TestBulkPushRuleEvaluator(HomeserverTestCase):
{
"body": "@room",
"msgtype": "m.text",
EventContentFields.MSC3952_MENTIONS: {},
EventContentFields.MENTIONS: {},
},
)
)


@@ -131,9 +131,6 @@ class ReadMarkerTestCase(unittest.HomeserverTestCase):
event = self.get_success(self.store.get_event(event_id_1, allow_none=True))
assert event is None
# TODO See https://github.com/matrix-org/synapse/issues/13476
self.store.get_event_ordering.invalidate_all()
# Test moving the read marker to a newer event
event_id_2 = send_message()
channel = self.make_request(


@@ -1941,6 +1941,43 @@ class RoomPowerLevelOverridesInPracticeTestCase(RoomBase):
channel.json_body["error"],
)
@unittest.override_config(
{
"default_power_level_content_override": {
"private_chat": {
"events": {
"m.room.avatar": 50,
"m.room.canonical_alias": 50,
"m.room.encryption": 999,
"m.room.history_visibility": 100,
"m.room.name": 50,
"m.room.power_levels": 100,
"m.room.server_acl": 100,
"m.room.tombstone": 100,
},
"events_default": 0,
},
}
},
)
def test_config_override_blocks_encrypted_room(self) -> None:
# Given the server has config for private_chats,
# When I attempt to create an encrypted private_chat room
channel = self.make_request(
"POST",
"/createRoom",
'{"creation_content": {"m.federate": false},"name": "Secret Private Room","preset": "private_chat","initial_state": [{"type": "m.room.encryption","state_key": "","content": {"algorithm": "m.megolm.v1.aes-sha2"}}]}',
)
# Then I am not allowed because the required power level is unattainable
self.assertEqual(HTTPStatus.FORBIDDEN, channel.code, msg=channel.result["body"])
self.assertEqual(
"You cannot create an encrypted room. "
+ "user_level (100) < send_level (999)",
channel.json_body["error"],
)
class RoomInitialSyncTestCase(RoomBase):
"""Tests /rooms/$room_id/initialSync."""


@@ -188,7 +188,7 @@ class EventCacheTestCase(unittest.HomeserverTestCase):
self.event_id = res["event_id"]
# Reset the event cache so the tests start with it empty
self.get_success(self.store._get_event_cache.clear())
self.store._get_event_cache.clear()
def test_simple(self) -> None:
"""Test that we cache events that we pull from the DB."""
@@ -205,7 +205,7 @@ class EventCacheTestCase(unittest.HomeserverTestCase):
"""
# Reset the event cache
self.get_success(self.store._get_event_cache.clear())
self.store._get_event_cache.clear()
with LoggingContext("test") as ctx:
# We keep hold of the event even though we never use it.
@@ -215,7 +215,7 @@ class EventCacheTestCase(unittest.HomeserverTestCase):
self.assertEqual(ctx.get_resource_usage().evt_db_fetch_count, 1)
# Reset the event cache
self.get_success(self.store._get_event_cache.clear())
self.store._get_event_cache.clear()
with LoggingContext("test") as ctx:
self.get_success(self.store.get_event(self.event_id))
@@ -390,7 +390,7 @@ class GetEventCancellationTestCase(unittest.HomeserverTestCase):
self.event_id = res["event_id"]
# Reset the event cache so the tests start with it empty
self.get_success(self.store._get_event_cache.clear())
self.store._get_event_cache.clear()
@contextmanager
def blocking_get_event_calls(


@@ -401,7 +401,10 @@ class EventChainStoreTestCase(HomeserverTestCase):
assert persist_events_store is not None
persist_events_store._store_event_txn(
txn,
[(e, EventContext(self.hs.get_storage_controllers())) for e in events],
[
(e, EventContext(self.hs.get_storage_controllers(), {}))
for e in events
],
)
# Actually call the function that calculates the auth chain stuff.


@@ -555,10 +555,15 @@ class StateTestCase(unittest.TestCase):
(e.event_id for e in old_state + [event]), current_state_ids.values()
)
self.assertIsNotNone(context.state_group_before_event)
assert context.state_group_before_event is not None
assert context.state_group is not None
self.assertEqual(
context.state_group_deltas.get(
(context.state_group_before_event, context.state_group)
),
{(event.type, event.state_key): event.event_id},
)
self.assertNotEqual(context.state_group_before_event, context.state_group)
self.assertEqual(context.state_group_before_event, context.prev_group)
self.assertEqual({("state", ""): event.event_id}, context.delta_ids)
@defer.inlineCallbacks
def test_trivial_annotate_message(