Compare commits: patch-1...release-v1 (19 commits)
| SHA1 |
|---|
| 20c9e19519 |
| 55b0aa847a |
| fbb2573525 |
| db4e321219 |
| 657b8cc75c |
| a2a543fd12 |
| 89f1092284 |
| 4ffed6330f |
| e363881592 |
| d40878451c |
| 892cbd0624 |
| 106cfd4b39 |
| 0a6ae6fe4c |
| 13a3987929 |
| 680f60102b |
| 3e51b370c5 |
| 9b8597e431 |
| 4d10a8fb18 |
| 1f8f991d51 |
CHANGES.md (53 changes)
@@ -1,3 +1,56 @@

# Synapse 1.105.1 (2024-04-23)

## Security advisory

The following issues are fixed in 1.105.1.

- [GHSA-3h7q-rfh9-xm4v](https://github.com/element-hq/synapse/security/advisories/GHSA-3h7q-rfh9-xm4v) / [CVE-2024-31208](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-31208) — High Severity

  Weakness in auth chain indexing allows DoS from remote room members through disk fill and high CPU usage.

See the advisories for more details. If you have any questions, email security@element.io.

# Synapse 1.105.0 (2024-04-16)

No significant changes since 1.105.0rc1.

# Synapse 1.105.0rc1 (2024-04-11)

### Features

- Stabilize support for [MSC4010](https://github.com/matrix-org/matrix-spec-proposals/pull/4010), which clarifies the interaction of push rules and account data. Contributed by @clokep. ([\#17022](https://github.com/element-hq/synapse/issues/17022))
- Stabilize support for [MSC3981](https://github.com/matrix-org/matrix-spec-proposals/pull/3981): `/relations` recursion. Contributed by @clokep. ([\#17023](https://github.com/element-hq/synapse/issues/17023))
- Add support for moving `/pushrules` off of the main process. ([\#17037](https://github.com/element-hq/synapse/issues/17037), [\#17038](https://github.com/element-hq/synapse/issues/17038))

### Bugfixes

- Fix various long-standing bugs which could cause incorrect state to be returned from `/sync` in certain situations. ([\#16930](https://github.com/element-hq/synapse/issues/16930), [\#16932](https://github.com/element-hq/synapse/issues/16932), [\#16942](https://github.com/element-hq/synapse/issues/16942), [\#17064](https://github.com/element-hq/synapse/issues/17064), [\#17065](https://github.com/element-hq/synapse/issues/17065), [\#17066](https://github.com/element-hq/synapse/issues/17066))
- Fix server notice rooms not always being created as unencrypted rooms, even when `encryption_enabled_by_default_for_room_type` is in use (server notices are always unencrypted). ([\#17033](https://github.com/element-hq/synapse/issues/17033))
- Fix the `.m.rule.encrypted_room_one_to_one` and `.m.rule.room_one_to_one` default underride push rules being in the wrong order. Contributed by @Sumpy1. ([\#17043](https://github.com/element-hq/synapse/issues/17043))

### Internal Changes

- Refactor auth chain fetching to reduce duplication. ([\#17044](https://github.com/element-hq/synapse/issues/17044))
- Improve database performance by adding a missing index to `access_tokens.refresh_token_id`. ([\#17045](https://github.com/element-hq/synapse/issues/17045), [\#17054](https://github.com/element-hq/synapse/issues/17054))
- Improve database performance by reducing the number of receipts fetched when sending push notifications. ([\#17049](https://github.com/element-hq/synapse/issues/17049))

### Updates to locked dependencies

* Bump packaging from 23.2 to 24.0. ([\#17027](https://github.com/element-hq/synapse/issues/17027))
* Bump regex from 1.10.3 to 1.10.4. ([\#17028](https://github.com/element-hq/synapse/issues/17028))
* Bump ruff from 0.3.2 to 0.3.5. ([\#17060](https://github.com/element-hq/synapse/issues/17060))
* Bump serde_json from 1.0.114 to 1.0.115. ([\#17041](https://github.com/element-hq/synapse/issues/17041))
* Bump types-pillow from 10.2.0.20240125 to 10.2.0.20240406. ([\#17061](https://github.com/element-hq/synapse/issues/17061))
* Bump types-requests from 2.31.0.20240125 to 2.31.0.20240406. ([\#17063](https://github.com/element-hq/synapse/issues/17063))
* Bump typing-extensions from 4.9.0 to 4.11.0. ([\#17062](https://github.com/element-hq/synapse/issues/17062))

# Synapse 1.104.0 (2024-04-02)

### Bugfixes
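The advisory's "disk fill" mechanism becomes concrete later in this diff, where schema version 84 stops writing transitive rows to `event_auth_chain_links`. As a rough, illustrative calculation only (not Synapse code): storing the transitive closure of a linear auth chain of n events needs one row per ordered pair, i.e. quadratically many, while storing only direct links needs n - 1 rows.

```python
# Illustrative back-of-the-envelope only; assumes a worst-case linear chain.
def transitive_rows(n: int) -> int:
    # One row for every (earlier, later) pair of events in the chain.
    return n * (n - 1) // 2

def direct_rows(n: int) -> int:
    # One row per consecutive pair when only direct links are stored.
    return max(n - 1, 0)

for n in (10, 1_000, 100_000):
    print(f"{n:>7} events: {transitive_rows(n):>14,} transitive vs {direct_rows(n):>7,} direct")
# 100,000 events yields ~5 billion transitive rows but only 99,999 direct ones,
# which is the kind of blow-up a malicious remote room can force.
```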
Cargo.lock (generated, 8 changes)
@@ -306,9 +306,9 @@ dependencies = [

[[package]]
name = "regex"
-version = "1.10.3"
+version = "1.10.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "b62dbe01f0b06f9d8dc7d49e05a0785f153b00b2c227856282f671e0318c9b15"
+checksum = "c117dbdfde9c8308975b6a18d71f3f385c89461f7b3fb054288ecf2a2058ba4c"
dependencies = [
 "aho-corasick",
 "memchr",

@@ -367,9 +367,9 @@ dependencies = [

[[package]]
name = "serde_json"
-version = "1.0.114"
+version = "1.0.115"
source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "c5f09b1bd632ef549eaa9f60a1f8de742bdbc698e6cee2095fc84dde5f549ae0"
+checksum = "12dc5c46daa8e9fdf4f5e71b6cf9a53f2487da0e86e55808e2d35539666497dd"
dependencies = [
 "itoa",
 "ryu",
changelog.d (deleted entries):

@@ -1 +0,0 @@
-Fix various long-standing bugs which could cause incorrect state to be returned from `/sync` in certain situations.
@@ -1 +0,0 @@
-Fix various long-standing bugs which could cause incorrect state to be returned from `/sync` in certain situations.
@@ -1 +0,0 @@
-Fix various long-standing bugs which could cause incorrect state to be returned from `/sync` in certain situations.
@@ -1 +0,0 @@
-Add support for moving `/pushrules` off of the main process.
@@ -1 +0,0 @@
-Add support for moving `/pushrules` off of the main process.
@@ -1 +0,0 @@
-Refactor auth chain fetching to reduce duplication.
@@ -1 +0,0 @@
-Improve database performance by adding a missing index to `access_tokens.refresh_token_id`.
@@ -1 +0,0 @@
-Improve database performance by reducing the number of receipts fetched when sending push notifications.
@@ -1 +0,0 @@
-Document [`/v1/make_knock`](https://spec.matrix.org/v1.10/server-server-api/#get_matrixfederationv1make_knockroomiduserid) and [`/v1/send_knock/`](https://spec.matrix.org/v1.10/server-server-api/#put_matrixfederationv1send_knockroomideventid) federation endpoints as worker-compatible.
debian/changelog (vendored, 18 changes)
@@ -1,3 +1,21 @@
matrix-synapse-py3 (1.105.1) stable; urgency=medium

  * New Synapse release 1.105.1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 23 Apr 2024 15:56:18 +0100

matrix-synapse-py3 (1.105.0) stable; urgency=medium

  * New Synapse release 1.105.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 16 Apr 2024 15:53:23 +0100

matrix-synapse-py3 (1.105.0~rc1) stable; urgency=medium

  * New Synapse release 1.105.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Thu, 11 Apr 2024 12:15:49 +0100

matrix-synapse-py3 (1.104.0) stable; urgency=medium

  * New Synapse release 1.104.0.
docs/workers.md:

@@ -211,8 +211,6 @@ information.
^/_matrix/federation/v1/make_leave/
^/_matrix/federation/(v1|v2)/send_join/
^/_matrix/federation/(v1|v2)/send_leave/
-^/_matrix/federation/v1/make_knock/
-^/_matrix/federation/v1/send_knock/
^/_matrix/federation/(v1|v2)/invite/
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/timestamp_to_event/
poetry.lock (generated, 64 changes)
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.7.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.8.2 and should not be changed by hand.

[[package]]
name = "alabaster"

@@ -1602,13 +1602,13 @@ tests = ["Sphinx", "doubles", "flake8", "flake8-quotes", "gevent", "mock", "pyte

[[package]]
name = "packaging"
-version = "23.2"
+version = "24.0"
description = "Core utilities for Python packages"
optional = false
python-versions = ">=3.7"
files = [
-    {file = "packaging-23.2-py3-none-any.whl", hash = "sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7"},
-    {file = "packaging-23.2.tar.gz", hash = "sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5"},
+    {file = "packaging-24.0-py3-none-any.whl", hash = "sha256:2ddfb553fdf02fb784c234c7ba6ccc288296ceabec964ad2eae3777778130bc5"},
+    {file = "packaging-24.0.tar.gz", hash = "sha256:eb82c5e3e56209074766e6885bb04b8c38a0c015d0a30036ebe7ece34c9989e9"},
]

[[package]]

@@ -2444,28 +2444,28 @@ files = [

[[package]]
name = "ruff"
-version = "0.3.2"
+version = "0.3.5"
description = "An extremely fast Python linter and code formatter, written in Rust."
optional = false
python-versions = ">=3.7"
files = [
-    {file = "ruff-0.3.2-py3-none-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:77f2612752e25f730da7421ca5e3147b213dca4f9a0f7e0b534e9562c5441f01"},
-    {file = "ruff-0.3.2-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:9966b964b2dd1107797be9ca7195002b874424d1d5472097701ae8f43eadef5d"},
-    {file = "ruff-0.3.2-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b83d17ff166aa0659d1e1deaf9f2f14cbe387293a906de09bc4860717eb2e2da"},
-    {file = "ruff-0.3.2-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bb875c6cc87b3703aeda85f01c9aebdce3d217aeaca3c2e52e38077383f7268a"},
-    {file = "ruff-0.3.2-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:be75e468a6a86426430373d81c041b7605137a28f7014a72d2fc749e47f572aa"},
-    {file = "ruff-0.3.2-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:967978ac2d4506255e2f52afe70dda023fc602b283e97685c8447d036863a302"},
-    {file = "ruff-0.3.2-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1231eacd4510f73222940727ac927bc5d07667a86b0cbe822024dd00343e77e9"},
-    {file = "ruff-0.3.2-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:2c6d613b19e9a8021be2ee1d0e27710208d1603b56f47203d0abbde906929a9b"},
-    {file = "ruff-0.3.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c8439338a6303585d27b66b4626cbde89bb3e50fa3cae86ce52c1db7449330a7"},
-    {file = "ruff-0.3.2-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:de8b480d8379620cbb5ea466a9e53bb467d2fb07c7eca54a4aa8576483c35d36"},
-    {file = "ruff-0.3.2-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:b74c3de9103bd35df2bb05d8b2899bf2dbe4efda6474ea9681280648ec4d237d"},
-    {file = "ruff-0.3.2-py3-none-musllinux_1_2_i686.whl", hash = "sha256:f380be9fc15a99765c9cf316b40b9da1f6ad2ab9639e551703e581a5e6da6745"},
-    {file = "ruff-0.3.2-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:0ac06a3759c3ab9ef86bbeca665d31ad3aa9a4b1c17684aadb7e61c10baa0df4"},
-    {file = "ruff-0.3.2-py3-none-win32.whl", hash = "sha256:9bd640a8f7dd07a0b6901fcebccedadeb1a705a50350fb86b4003b805c81385a"},
-    {file = "ruff-0.3.2-py3-none-win_amd64.whl", hash = "sha256:0c1bdd9920cab5707c26c8b3bf33a064a4ca7842d91a99ec0634fec68f9f4037"},
-    {file = "ruff-0.3.2-py3-none-win_arm64.whl", hash = "sha256:5f65103b1d76e0d600cabd577b04179ff592064eaa451a70a81085930e907d0b"},
-    {file = "ruff-0.3.2.tar.gz", hash = "sha256:fa78ec9418eb1ca3db392811df3376b46471ae93792a81af2d1cbb0e5dcb5142"},
+    {file = "ruff-0.3.5-py3-none-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl", hash = "sha256:aef5bd3b89e657007e1be6b16553c8813b221ff6d92c7526b7e0227450981eac"},
+    {file = "ruff-0.3.5-py3-none-macosx_10_12_x86_64.whl", hash = "sha256:89b1e92b3bd9fca249153a97d23f29bed3992cff414b222fcd361d763fc53f12"},
+    {file = "ruff-0.3.5-py3-none-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5e55771559c89272c3ebab23326dc23e7f813e492052391fe7950c1a5a139d89"},
+    {file = "ruff-0.3.5-py3-none-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:dabc62195bf54b8a7876add6e789caae0268f34582333cda340497c886111c39"},
+    {file = "ruff-0.3.5-py3-none-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:3a05f3793ba25f194f395578579c546ca5d83e0195f992edc32e5907d142bfa3"},
+    {file = "ruff-0.3.5-py3-none-manylinux_2_17_ppc64.manylinux2014_ppc64.whl", hash = "sha256:dfd3504e881082959b4160ab02f7a205f0fadc0a9619cc481982b6837b2fd4c0"},
+    {file = "ruff-0.3.5-py3-none-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:87258e0d4b04046cf1d6cc1c56fadbf7a880cc3de1f7294938e923234cf9e498"},
+    {file = "ruff-0.3.5-py3-none-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:712e71283fc7d9f95047ed5f793bc019b0b0a29849b14664a60fd66c23b96da1"},
+    {file = "ruff-0.3.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:a532a90b4a18d3f722c124c513ffb5e5eaff0cc4f6d3aa4bda38e691b8600c9f"},
+    {file = "ruff-0.3.5-py3-none-musllinux_1_2_aarch64.whl", hash = "sha256:122de171a147c76ada00f76df533b54676f6e321e61bd8656ae54be326c10296"},
+    {file = "ruff-0.3.5-py3-none-musllinux_1_2_armv7l.whl", hash = "sha256:d80a6b18a6c3b6ed25b71b05eba183f37d9bc8b16ace9e3d700997f00b74660b"},
+    {file = "ruff-0.3.5-py3-none-musllinux_1_2_i686.whl", hash = "sha256:a7b6e63194c68bca8e71f81de30cfa6f58ff70393cf45aab4c20f158227d5936"},
+    {file = "ruff-0.3.5-py3-none-musllinux_1_2_x86_64.whl", hash = "sha256:a759d33a20c72f2dfa54dae6e85e1225b8e302e8ac655773aff22e542a300985"},
+    {file = "ruff-0.3.5-py3-none-win32.whl", hash = "sha256:9d8605aa990045517c911726d21293ef4baa64f87265896e491a05461cae078d"},
+    {file = "ruff-0.3.5-py3-none-win_amd64.whl", hash = "sha256:dc56bb16a63c1303bd47563c60482a1512721053d93231cf7e9e1c6954395a0e"},
+    {file = "ruff-0.3.5-py3-none-win_arm64.whl", hash = "sha256:faeeae9905446b975dcf6d4499dc93439b131f1443ee264055c5716dd947af55"},
+    {file = "ruff-0.3.5.tar.gz", hash = "sha256:a067daaeb1dc2baf9b82a32dae67d154d95212080c80435eb052d95da647763d"},
]

[[package]]

@@ -3109,13 +3109,13 @@ files = [

[[package]]
name = "types-pillow"
-version = "10.2.0.20240125"
+version = "10.2.0.20240406"
description = "Typing stubs for Pillow"
optional = false
python-versions = ">=3.8"
files = [
-    {file = "types-Pillow-10.2.0.20240125.tar.gz", hash = "sha256:c449b2c43b9fdbe0494a7b950e6b39a4e50516091213fec24ef3f33c1d017717"},
-    {file = "types_Pillow-10.2.0.20240125-py3-none-any.whl", hash = "sha256:322dbae32b4b7918da5e8a47c50ac0f24b0aa72a804a23857620f2722b03c858"},
+    {file = "types-Pillow-10.2.0.20240406.tar.gz", hash = "sha256:62e0cc1f17caba40e72e7154a483f4c7f3bea0e1c34c0ebba9de3c7745bc306d"},
+    {file = "types_Pillow-10.2.0.20240406-py3-none-any.whl", hash = "sha256:5ac182e8afce53de30abca2fdf9cbec7b2500e549d0be84da035a729a84c7c47"},
]

[[package]]

@@ -3156,13 +3156,13 @@ files = [

[[package]]
name = "types-requests"
-version = "2.31.0.20240125"
+version = "2.31.0.20240406"
description = "Typing stubs for requests"
optional = false
python-versions = ">=3.8"
files = [
-    {file = "types-requests-2.31.0.20240125.tar.gz", hash = "sha256:03a28ce1d7cd54199148e043b2079cdded22d6795d19a2c2a6791a4b2b5e2eb5"},
-    {file = "types_requests-2.31.0.20240125-py3-none-any.whl", hash = "sha256:9592a9a4cb92d6d75d9b491a41477272b710e021011a2a3061157e2fb1f1a5d1"},
+    {file = "types-requests-2.31.0.20240406.tar.gz", hash = "sha256:4428df33c5503945c74b3f42e82b181e86ec7b724620419a2966e2de604ce1a1"},
+    {file = "types_requests-2.31.0.20240406-py3-none-any.whl", hash = "sha256:6216cdac377c6b9a040ac1c0404f7284bd13199c0e1bb235f4324627e8898cf5"},
]

[package.dependencies]

@@ -3181,13 +3181,13 @@ files = [

[[package]]
name = "typing-extensions"
-version = "4.9.0"
+version = "4.11.0"
description = "Backported and Experimental Type Hints for Python 3.8+"
optional = false
python-versions = ">=3.8"
files = [
-    {file = "typing_extensions-4.9.0-py3-none-any.whl", hash = "sha256:af72aea155e91adfc61c3ae9e0e342dbc0cba726d6cba4b6c72c1f34e47291cd"},
-    {file = "typing_extensions-4.9.0.tar.gz", hash = "sha256:23478f88c37f27d76ac8aee6c905017a143b0b1b886c3c9f66bc2fd94f9f5783"},
+    {file = "typing_extensions-4.11.0-py3-none-any.whl", hash = "sha256:c1f94d72897edaf4ce775bb7558d5b79d8126906a14ea5ed1635921406c0387a"},
+    {file = "typing_extensions-4.11.0.tar.gz", hash = "sha256:83f085bd5ca59c80295fc2a82ab5dac679cbe02b9f33f7d83af68e241bea51b0"},
]

[[package]]

@@ -3451,4 +3451,4 @@ user-search = ["pyicu"]

[metadata]
lock-version = "2.0"
python-versions = "^3.8.0"
-content-hash = "b510fa05f4ea33194bec079f5d04ebb3f9ffbb5c1ea96a0341d57ba770ef81e6"
+content-hash = "4abda113a01f162bb3978b0372956d569364533aa39f57863c234363f8449a4f"
pyproject.toml:

@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
-version = "1.104.0"
+version = "1.105.1"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

@@ -321,7 +321,7 @@ all = [
# This helps prevent merge conflicts when running a batch of dependabot updates.
isort = ">=5.10.1"
black = ">=22.7.0"
-ruff = "0.3.2"
+ruff = "0.3.5"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"
rust/src/push/base_rules.rs:

@@ -304,12 +304,12 @@ pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
        default_enabled: true,
    },
    PushRule {
-        rule_id: Cow::Borrowed("global/underride/.m.rule.room_one_to_one"),
+        rule_id: Cow::Borrowed("global/underride/.m.rule.encrypted_room_one_to_one"),
        priority_class: 1,
        conditions: Cow::Borrowed(&[
            Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
                key: Cow::Borrowed("type"),
-                pattern: Cow::Borrowed("m.room.message"),
+                pattern: Cow::Borrowed("m.room.encrypted"),
            })),
            Condition::Known(KnownCondition::RoomMemberCount {
                is: Some(Cow::Borrowed("2")),

@@ -320,12 +320,12 @@ pub const BASE_APPEND_UNDERRIDE_RULES: &[PushRule] = &[
        default_enabled: true,
    },
    PushRule {
-        rule_id: Cow::Borrowed("global/underride/.m.rule.encrypted_room_one_to_one"),
+        rule_id: Cow::Borrowed("global/underride/.m.rule.room_one_to_one"),
        priority_class: 1,
        conditions: Cow::Borrowed(&[
            Condition::Known(KnownCondition::EventMatch(EventMatchCondition {
                key: Cow::Borrowed("type"),
-                pattern: Cow::Borrowed("m.room.encrypted"),
+                pattern: Cow::Borrowed("m.room.message"),
            })),
            Condition::Known(KnownCondition::RoomMemberCount {
                is: Some(Cow::Borrowed("2")),
synapse/config/experimental.py:

@@ -393,11 +393,6 @@ class ExperimentalConfig(Config):
        # MSC3967: Do not require UIA when first uploading cross signing keys
        self.msc3967_enabled = experimental.get("msc3967_enabled", False)

-        # MSC3981: Recurse relations
-        self.msc3981_recurse_relations = experimental.get(
-            "msc3981_recurse_relations", False
-        )
-
        # MSC3861: Matrix architecture change to delegate authentication via OIDC
        try:
            self.msc3861 = MSC3861(**experimental.get("msc3861", {}))

@@ -409,11 +404,6 @@ class ExperimentalConfig(Config):
        # Check that none of the other config options conflict with MSC3861 when enabled
        self.msc3861.check_config_conflicts(self.root)

-        # MSC4010: Do not allow setting m.push_rules account data.
-        self.msc4010_push_rules_account_data = experimental.get(
-            "msc4010_push_rules_account_data", False
-        )
-
        self.msc4028_push_encrypted_events = experimental.get(
            "msc4028_push_encrypted_events", False
        )
synapse/handlers/room.py:

@@ -956,6 +956,7 @@ class RoomCreationHandler:
            room_alias=room_alias,
            power_level_content_override=power_level_content_override,
            creator_join_profile=creator_join_profile,
+            ignore_forced_encryption=ignore_forced_encryption,
        )

        # we avoid dropping the lock between invites, as otherwise joins can
synapse/handlers/sync.py:

@@ -1259,6 +1259,51 @@ class SyncHandler:
        await_full_state = True
        lazy_load_members = False

+        # For a non-gappy sync if the events in the timeline are simply a linear
+        # chain (i.e. no merging/branching of the graph), then we know the state
+        # delta between the end of the previous sync and start of the new one is
+        # empty.
+        #
+        # c.f. #16941 for an example of why we can't do this for all non-gappy
+        # syncs.
+        is_linear_timeline = True
+        if batch.events:
+            # We need to make sure the first event in our batch points to the
+            # last event in the previous batch.
+            last_event_id_prev_batch = (
+                await self.store.get_last_event_in_room_before_stream_ordering(
+                    room_id,
+                    end_token=since_token.room_key,
+                )
+            )
+
+            prev_event_id = last_event_id_prev_batch
+            for e in batch.events:
+                if e.prev_event_ids() != [prev_event_id]:
+                    is_linear_timeline = False
+                    break
+                prev_event_id = e.event_id
+
+        if is_linear_timeline and not batch.limited:
+            state_ids: StateMap[str] = {}
+            if lazy_load_members:
+                if members_to_fetch and batch.events:
+                    # We're lazy-loading, so the client might need some more
+                    # member events to understand the events in this timeline.
+                    # So we fish out all the member events corresponding to the
+                    # timeline here. The caller will then dedupe any redundant
+                    # ones.
+
+                    state_ids = await self._state_storage_controller.get_state_ids_for_event(
+                        batch.events[0].event_id,
+                        # we only want members!
+                        state_filter=StateFilter.from_types(
+                            (EventTypes.Member, member) for member in members_to_fetch
+                        ),
+                        await_full_state=False,
+                    )
+            return state_ids
+
        if batch:
            state_at_timeline_start = (
                await self._state_storage_controller.get_state_ids_for_event(
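A standalone toy version of the linearity check added above, using a hypothetical `Event` record rather than Synapse's event type: a timeline is linear exactly when each event's sole previous event is the event before it.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Event:  # hypothetical stand-in for Synapse's event objects
    event_id: str
    prev_event_ids: List[str]

def is_linear(last_event_id_prev_batch: str, timeline: List[Event]) -> bool:
    prev = last_event_id_prev_batch
    for e in timeline:
        if e.prev_event_ids != [prev]:
            return False  # the DAG branched or merged here
        prev = e.event_id
    return True

# A straight chain Z -> A -> B -> C is linear, so the state delta is empty:
assert is_linear("Z", [Event("A", ["Z"]), Event("B", ["A"]), Event("C", ["B"])])
# Two events sharing the same parent form a fork, so state must be computed:
assert not is_linear("Z", [Event("A", ["Z"]), Event("B", ["Z"])])
```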
synapse/rest/client/account_data.py:

@@ -81,8 +81,7 @@ class AccountDataServlet(RestServlet):
        raise AuthError(403, "Cannot add account data for other users.")

        # Raise an error if the account data type cannot be set directly.
-        if self._hs.config.experimental.msc4010_push_rules_account_data:
-            _check_can_set_account_data_type(account_data_type)
+        _check_can_set_account_data_type(account_data_type)

        body = parse_json_object_from_request(request)

@@ -108,10 +107,7 @@ class AccountDataServlet(RestServlet):
        raise AuthError(403, "Cannot get account data for other users.")

        # Push rules are stored in a separate table and must be queried separately.
-        if (
-            self._hs.config.experimental.msc4010_push_rules_account_data
-            and account_data_type == AccountDataTypes.PUSH_RULES
-        ):
+        if account_data_type == AccountDataTypes.PUSH_RULES:
            account_data: Optional[JsonMapping] = (
                await self._push_rules_handler.push_rules_for_user(requester.user)
            )

@@ -162,8 +158,7 @@ class UnstableAccountDataServlet(RestServlet):
        raise AuthError(403, "Cannot delete account data for other users.")

        # Raise an error if the account data type cannot be set directly.
-        if self._hs.config.experimental.msc4010_push_rules_account_data:
-            _check_can_set_account_data_type(account_data_type)
+        _check_can_set_account_data_type(account_data_type)

        await self.handler.remove_account_data_for_user(user_id, account_data_type)

@@ -209,15 +204,7 @@ class RoomAccountDataServlet(RestServlet):
        )

        # Raise an error if the account data type cannot be set directly.
-        if self._hs.config.experimental.msc4010_push_rules_account_data:
-            _check_can_set_account_data_type(account_data_type)
-        elif account_data_type == ReceiptTypes.FULLY_READ:
-            raise SynapseError(
-                405,
-                "Cannot set m.fully_read through this API."
-                " Use /rooms/!roomId:server.name/read_markers",
-                Codes.BAD_JSON,
-            )
+        _check_can_set_account_data_type(account_data_type)

        body = parse_json_object_from_request(request)

@@ -256,10 +243,7 @@ class RoomAccountDataServlet(RestServlet):
        )

        # Room-specific push rules are not currently supported.
-        if (
-            self._hs.config.experimental.msc4010_push_rules_account_data
-            and account_data_type == AccountDataTypes.PUSH_RULES
-        ):
+        if account_data_type == AccountDataTypes.PUSH_RULES:
            account_data: Optional[JsonMapping] = {}
        else:
            account_data = await self.store.get_account_data_for_room_and_type(

@@ -317,8 +301,7 @@ class UnstableRoomAccountDataServlet(RestServlet):
        )

        # Raise an error if the account data type cannot be set directly.
-        if self._hs.config.experimental.msc4010_push_rules_account_data:
-            _check_can_set_account_data_type(account_data_type)
+        _check_can_set_account_data_type(account_data_type)

        await self.handler.remove_account_data_for_room(
            user_id, room_id, account_data_type
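With MSC4010 stabilized, the `_check_can_set_account_data_type` guard now runs unconditionally, so writing `m.push_rules` through the generic account-data endpoint is always rejected. A hedged client-side sketch of what that looks like (placeholder homeserver, user, and token; the exact error code is an assumption modelled on the 405 used for `m.fully_read` above):

```python
import requests  # plain HTTP client used purely for illustration

HOMESERVER = "https://homeserver.example"  # placeholder
USER_ID = "@alice:example.org"             # placeholder
TOKEN = "placeholder_access_token"         # placeholder

# Attempt to set m.push_rules via the generic account-data endpoint.
resp = requests.put(
    f"{HOMESERVER}/_matrix/client/v3/user/{USER_ID}/account_data/m.push_rules",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={},
)
# Expected: an error response; push rules must be managed through the
# dedicated /pushrules API instead of account data.
print(resp.status_code, resp.text)
```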
synapse/rest/client/relations.py:

@@ -55,7 +55,6 @@ class RelationPaginationServlet(RestServlet):
        self.auth = hs.get_auth()
        self._store = hs.get_datastores().main
        self._relations_handler = hs.get_relations_handler()
-        self._support_recurse = hs.config.experimental.msc3981_recurse_relations

    async def on_GET(
        self,

@@ -70,12 +69,9 @@ class RelationPaginationServlet(RestServlet):
        pagination_config = await PaginationConfig.from_request(
            self._store, request, default_limit=5, default_dir=Direction.BACKWARDS
        )
-        if self._support_recurse:
-            recurse = parse_boolean(request, "recurse", default=False) or parse_boolean(
-                request, "org.matrix.msc3981.recurse", default=False
-            )
-        else:
-            recurse = False
+        recurse = parse_boolean(request, "recurse", default=False) or parse_boolean(
+            request, "org.matrix.msc3981.recurse", default=False
+        )

        # The unstable version of this API returns an extra field for client
        # compatibility, see https://github.com/matrix-org/synapse/issues/12930.
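Since the recursion flag is now parsed unconditionally, clients can rely on the stable `recurse` query parameter (the unstable `org.matrix.msc3981.recurse` spelling is still accepted, as the code above shows). A usage sketch with placeholder identifiers:

```python
import requests  # plain HTTP client used purely for illustration

HOMESERVER = "https://homeserver.example"  # placeholder
ROOM_ID = "!room:example.org"              # placeholder
EVENT_ID = "$thread_root_event"            # placeholder
TOKEN = "placeholder_access_token"         # placeholder

# Fetch relations of EVENT_ID recursively, e.g. reactions on thread replies.
resp = requests.get(
    f"{HOMESERVER}/_matrix/client/v1/rooms/{ROOM_ID}/relations/{EVENT_ID}",
    params={"dir": "f", "limit": 20, "recurse": "true"},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
for event in resp.json()["chunk"]:
    print(event["event_id"])
```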
synapse/rest/client/versions.py:

@@ -132,7 +132,8 @@ class VersionsRestServlet(RestServlet):
            # Adds support for relation-based redactions as per MSC3912.
            "org.matrix.msc3912": self.config.experimental.msc3912_enabled,
            # Whether recursively providing relations is supported.
-            "org.matrix.msc3981": self.config.experimental.msc3981_recurse_relations,
+            # TODO This is no longer needed once unstable MSC3981 does not need to be supported.
+            "org.matrix.msc3981": True,
            # Adds support for deleting account data.
            "org.matrix.msc3391": self.config.experimental.msc3391_enabled,
            # Allows clients to inhibit profile update propagation.
synapse/storage/databases/main/events.py:

@@ -19,6 +19,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
+import collections
import itertools
import logging
from collections import OrderedDict

@@ -53,6 +54,7 @@ from synapse.storage.database import (
    LoggingDatabaseConnection,
    LoggingTransaction,
)
+from synapse.storage.databases.main.event_federation import EventFederationStore
from synapse.storage.databases.main.events_worker import EventCacheEntry
from synapse.storage.databases.main.search import SearchEntry
from synapse.storage.engines import PostgresEngine

@@ -768,40 +770,26 @@ class PersistEventsStore:
        #    that have the same chain ID as the event.
        # 2. For each retained auth event we:
        #    a. Add a link from the event's to the auth event's chain
-        #       ID/sequence number; and
-        #    b. Add a link from the event to every chain reachable by the
-        #       auth event.
+        #       ID/sequence number

        # Step 1, fetch all existing links from all the chains we've seen
        # referenced.
        chain_links = _LinkMap()
-        auth_chain_rows = cast(
-            List[Tuple[int, int, int, int]],
-            db_pool.simple_select_many_txn(
-                txn,
-                table="event_auth_chain_links",
-                column="origin_chain_id",
-                iterable={chain_id for chain_id, _ in chain_map.values()},
-                keyvalues={},
-                retcols=(
-                    "origin_chain_id",
-                    "origin_sequence_number",
-                    "target_chain_id",
-                    "target_sequence_number",
-                ),
-            ),
-        )
-        for (
-            origin_chain_id,
-            origin_sequence_number,
-            target_chain_id,
-            target_sequence_number,
-        ) in auth_chain_rows:
-            chain_links.add_link(
-                (origin_chain_id, origin_sequence_number),
-                (target_chain_id, target_sequence_number),
-                new=False,
-            )
+        for links in EventFederationStore._get_chain_links(
+            txn, {chain_id for chain_id, _ in chain_map.values()}
+        ):
+            for origin_chain_id, inner_links in links.items():
+                for (
+                    origin_sequence_number,
+                    target_chain_id,
+                    target_sequence_number,
+                ) in inner_links:
+                    chain_links.add_link(
+                        (origin_chain_id, origin_sequence_number),
+                        (target_chain_id, target_sequence_number),
+                        new=False,
+                    )

        # We do this in topological order to avoid adding redundant links.
        for event_id in sorted_topologically(

@@ -836,18 +824,6 @@ class PersistEventsStore:
                (chain_id, sequence_number), (auth_chain_id, auth_sequence_number)
            )

-            # Step 2b, add a link to chains reachable from the auth
-            # event.
-            for target_id, target_seq in chain_links.get_links_from(
-                (auth_chain_id, auth_sequence_number)
-            ):
-                if target_id == chain_id:
-                    continue
-
-                chain_links.add_link(
-                    (chain_id, sequence_number), (target_id, target_seq)
-                )
-
        db_pool.simple_insert_many_txn(
            txn,
            table="event_auth_chain_links",
@@ -2451,31 +2427,6 @@ class _LinkMap:
            current_links[src_seq] = target_seq
        return True

-    def get_links_from(
-        self, src_tuple: Tuple[int, int]
-    ) -> Generator[Tuple[int, int], None, None]:
-        """Gets the chains reachable from the given chain/sequence number.
-
-        Yields:
-            The chain ID and sequence number the link points to.
-        """
-        src_chain, src_seq = src_tuple
-        for target_id, sequence_numbers in self.maps.get(src_chain, {}).items():
-            for link_src_seq, target_seq in sequence_numbers.items():
-                if link_src_seq <= src_seq:
-                    yield target_id, target_seq
-
    def get_links_between(
        self, source_chain: int, target_chain: int
    ) -> Generator[Tuple[int, int], None, None]:
        """Gets the links between two chains.

        Yields:
            The source and target sequence numbers.
        """

        yield from self.maps.get(source_chain, {}).get(target_chain, {}).items()

    def get_additions(self) -> Generator[Tuple[int, int, int, int], None, None]:
        """Gets any newly added links.

@@ -2502,9 +2453,24 @@ class _LinkMap:
        if src_chain == target_chain:
            return target_seq <= src_seq

-        links = self.get_links_between(src_chain, target_chain)
-        for link_start_seq, link_end_seq in links:
-            if link_start_seq <= src_seq and target_seq <= link_end_seq:
-                return True
+        # We have to graph traverse the links to check for indirect paths.
+        visited_chains = collections.Counter()
+        search = [(src_chain, src_seq)]
+        while search:
+            chain, seq = search.pop()
+            visited_chains[chain] = max(seq, visited_chains[chain])
+            for tc, links in self.maps.get(chain, {}).items():
+                for ss, ts in links.items():
+                    # Don't revisit chains we've already seen, unless the target
+                    # sequence number is higher than last time.
+                    if ts <= visited_chains.get(tc, 0):
+                        continue
+
+                    if ss <= seq:
+                        if tc == target_chain:
+                            if target_seq <= ts:
+                                return True
+                        else:
+                            search.append((tc, ts))

        return False
synapse/storage/databases/main/registration.py:

@@ -2108,6 +2108,13 @@ class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
            unique=False,
        )

+        self.db_pool.updates.register_background_index_update(
+            update_name="access_tokens_refresh_token_id_idx",
+            index_name="access_tokens_refresh_token_id_idx",
+            table="access_tokens",
+            columns=("refresh_token_id",),
+        )
+
    async def _background_update_set_deactivated_flag(
        self, progress: JsonDict, batch_size: int
    ) -> int:

@@ -2266,13 +2273,6 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
    ):
        super().__init__(database, db_conn, hs)

-        self.db_pool.updates.register_background_index_update(
-            update_name="access_tokens_refresh_token_id_idx",
-            index_name="access_tokens_refresh_token_id_idx",
-            table="access_tokens",
-            columns=("refresh_token_id",),
-        )
-
        self._ignore_unknown_session_error = (
            hs.config.server.request_token_inhibit_3pid_errors
        )
synapse/storage/schema/__init__.py:

@@ -132,12 +132,16 @@ Changes in SCHEMA_VERSION = 82

Changes in SCHEMA_VERSION = 83
    - The event_txn_id is no longer used.

+Changes in SCHEMA_VERSION = 84
+    - No longer assumes that `event_auth_chain_links` holds transitive links, and
+      so read operations must do graph traversal.
"""


SCHEMA_COMPAT_VERSION = (
-    # The event_txn_id table and tables from MSC2716 no longer exist.
-    83
+    # Transitive links are no longer written to `event_auth_chain_links`
+    84
)
"""Limit on how far the synapse codebase can be rolled back without breaking db compat
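Because schema 84 stores only direct links, reachability must be answered by walking the graph, as the rewritten `exists_path_from` above does. A minimal standalone model of that traversal (a toy dict-of-dicts structure, not Synapse's `_LinkMap`):

```python
from typing import Dict, List, Tuple

# links[src_chain][target_chain] = {src_seq: target_seq}; direct links only.
Links = Dict[int, Dict[int, Dict[int, int]]]

def exists_path(links: Links, src: Tuple[int, int], dst: Tuple[int, int]) -> bool:
    src_chain, src_seq = src
    dst_chain, dst_seq = dst
    # Within a single chain, earlier sequence numbers are always reachable.
    if src_chain == dst_chain:
        return dst_seq <= src_seq
    visited: Dict[int, int] = {}  # chain -> highest sequence number reached
    search = [(src_chain, src_seq)]
    while search:
        chain, seq = search.pop()
        visited[chain] = max(seq, visited.get(chain, 0))
        for tc, seqs in links.get(chain, {}).items():
            for ss, ts in seqs.items():
                # Skip chains already visited at this sequence number or higher.
                if ts <= visited.get(tc, 0):
                    continue
                # A link out of (chain, ss) is usable from (chain, seq) if ss <= seq.
                if ss <= seq:
                    if tc == dst_chain:
                        if dst_seq <= ts:
                            return True
                    else:
                        search.append((tc, ts))
    return False

# Chain 1 links into 2, and 2 into 3, so (1, 4) reaches (3, 1) indirectly:
toy: Links = {1: {2: {1: 1}}, 2: {3: {1: 1}}}
assert exists_path(toy, (1, 4), (3, 1))
assert not exists_path(toy, (1, 4), (3, 2))
```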
synapse/storage/schema/main/delta/84 (new file):

@@ -0,0 +1,15 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2023 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.

INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
  (8404, 'access_tokens_refresh_token_id_idx', '{}');
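The delta above only seeds a row in `background_updates`; the actual index is built when a running process matches that row's `update_name` against the `register_background_index_update` call added to `RegistrationBackgroundUpdateStore` earlier in this diff. A toy model of that pairing, with an assumption-laden simplification of the SQL Synapse would generate (real Synapse batches the work and uses `CREATE INDEX CONCURRENTLY` on PostgreSQL):

```python
# Toy model only; none of this is Synapse's actual implementation.
registered: dict = {}

def register_background_index_update(update_name, index_name, table, columns):
    # Simplified: real Synapse defers this and runs it incrementally.
    registered[update_name] = (
        f"CREATE INDEX {index_name} ON {table} ({', '.join(columns)})"
    )

register_background_index_update(
    update_name="access_tokens_refresh_token_id_idx",
    index_name="access_tokens_refresh_token_id_idx",
    table="access_tokens",
    columns=("refresh_token_id",),
)

# The SQL delta seeds this pending row; the updater looks up the handler by name.
pending = [("access_tokens_refresh_token_id_idx", "{}")]
for update_name, _progress in pending:
    print("would run:", registered[update_name])
```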
tests/handlers/test_sync.py:

@@ -435,6 +435,101 @@ class SyncTestCase(tests.unittest.HomeserverTestCase):
            [s2_event],
        )

+    def test_state_includes_changes_on_long_lived_forks(self) -> None:
+        """State changes that happen on a fork of the DAG must be included in `state`
+
+        Given the following DAG:
+
+             E1
+           ↗    ↖
+          |      S2
+          |      ↑
+        --|------|----
+          E3     |
+        --|------|----
+          |      E4
+          |      |
+
+        ... and a filter that means we only return 1 event, represented by the dashed
+        horizontal lines: `S2` must be included in the `state` section on the second sync.
+        """
+        alice = self.register_user("alice", "password")
+        alice_tok = self.login(alice, "password")
+        alice_requester = create_requester(alice)
+        room_id = self.helper.create_room_as(alice, is_public=True, tok=alice_tok)
+
+        # Do an initial sync as Alice to get a known starting point.
+        initial_sync_result = self.get_success(
+            self.sync_handler.wait_for_sync_for_user(
+                alice_requester, generate_sync_config(alice)
+            )
+        )
+        last_room_creation_event_id = (
+            initial_sync_result.joined[0].timeline.events[-1].event_id
+        )
+
+        # Send a state event, and a regular event, both using the same prev ID
+        with self._patch_get_latest_events([last_room_creation_event_id]):
+            s2_event = self.helper.send_state(room_id, "s2", {}, tok=alice_tok)[
+                "event_id"
+            ]
+            e3_event = self.helper.send(room_id, "e3", tok=alice_tok)["event_id"]
+
+        # Do an incremental sync, this will return E3 but *not* S2 at this
+        # point.
+        incremental_sync = self.get_success(
+            self.sync_handler.wait_for_sync_for_user(
+                alice_requester,
+                generate_sync_config(
+                    alice,
+                    filter_collection=FilterCollection(
+                        self.hs, {"room": {"timeline": {"limit": 1}}}
+                    ),
+                ),
+                since_token=initial_sync_result.next_batch,
+            )
+        )
+        room_sync = incremental_sync.joined[0]
+        self.assertEqual(room_sync.room_id, room_id)
+        self.assertTrue(room_sync.timeline.limited)
+        self.assertEqual(
+            [e.event_id for e in room_sync.timeline.events],
+            [e3_event],
+        )
+        self.assertEqual(
+            [e.event_id for e in room_sync.state.values()],
+            [],
+        )
+
+        # Now send another event that points to S2, but not E3.
+        with self._patch_get_latest_events([s2_event]):
+            e4_event = self.helper.send(room_id, "e4", tok=alice_tok)["event_id"]
+
+        # Doing an incremental sync should return S2 in state.
+        incremental_sync = self.get_success(
+            self.sync_handler.wait_for_sync_for_user(
+                alice_requester,
+                generate_sync_config(
+                    alice,
+                    filter_collection=FilterCollection(
+                        self.hs, {"room": {"timeline": {"limit": 1}}}
+                    ),
+                ),
+                since_token=incremental_sync.next_batch,
+            )
+        )
+        room_sync = incremental_sync.joined[0]
+        self.assertEqual(room_sync.room_id, room_id)
+        self.assertFalse(room_sync.timeline.limited)
+        self.assertEqual(
+            [e.event_id for e in room_sync.timeline.events],
+            [e4_event],
+        )
+        self.assertEqual(
+            [e.event_id for e in room_sync.state.values()],
+            [s2_event],
+        )
+
    def test_state_includes_changes_on_ungappy_syncs(self) -> None:
        """Test `state` where the sync is not gappy.
tests/rest/client/test_relations.py:

@@ -35,7 +35,6 @@ from synapse.util import Clock
from tests import unittest
from tests.server import FakeChannel
from tests.test_utils.event_injection import inject_event
-from tests.unittest import override_config


class BaseRelationsTestCase(unittest.HomeserverTestCase):

@@ -957,7 +956,6 @@ class RelationPaginationTestCase(BaseRelationsTestCase):


class RecursiveRelationTestCase(BaseRelationsTestCase):
-    @override_config({"experimental_features": {"msc3981_recurse_relations": True}})
    def test_recursive_relations(self) -> None:
        """Generate a complex, multi-level relationship tree and query it."""
        # Create a thread with a few messages in it.

@@ -1003,7 +1001,7 @@ class RecursiveRelationTestCase(BaseRelationsTestCase):
        channel = self.make_request(
            "GET",
            f"/_matrix/client/v1/rooms/{self.room}/relations/{self.parent_id}"
-            "?dir=f&limit=20&org.matrix.msc3981.recurse=true",
+            "?dir=f&limit=20&recurse=true",
            access_token=self.user_token,
        )
        self.assertEqual(200, channel.code, channel.json_body)

@@ -1024,7 +1022,6 @@ class RecursiveRelationTestCase(BaseRelationsTestCase):
            ],
        )

-    @override_config({"experimental_features": {"msc3981_recurse_relations": True}})
    def test_recursive_relations_with_filter(self) -> None:
        """The event_type and rel_type still apply."""
        # Create a thread with a few messages in it.

@@ -1052,7 +1049,7 @@ class RecursiveRelationTestCase(BaseRelationsTestCase):
        channel = self.make_request(
            "GET",
            f"/_matrix/client/v1/rooms/{self.room}/relations/{self.parent_id}/{RelationTypes.ANNOTATION}"
-            "?dir=f&limit=20&org.matrix.msc3981.recurse=true",
+            "?dir=f&limit=20&recurse=true",
            access_token=self.user_token,
        )
        self.assertEqual(200, channel.code, channel.json_body)

@@ -1065,7 +1062,7 @@ class RecursiveRelationTestCase(BaseRelationsTestCase):
        channel = self.make_request(
            "GET",
            f"/_matrix/client/v1/rooms/{self.room}/relations/{self.parent_id}/{RelationTypes.ANNOTATION}/m.reaction"
-            "?dir=f&limit=20&org.matrix.msc3981.recurse=true",
+            "?dir=f&limit=20&recurse=true",
            access_token=self.user_token,
        )
        self.assertEqual(200, channel.code, channel.json_body)
tests/storage/test_event_chain.py:

@@ -21,6 +21,8 @@

from typing import Dict, List, Set, Tuple, cast

+from parameterized import parameterized
+
from twisted.test.proto_helpers import MemoryReactor
from twisted.trial import unittest

@@ -45,7 +47,8 @@ class EventChainStoreTestCase(HomeserverTestCase):
        self.store = hs.get_datastores().main
        self._next_stream_ordering = 1

-    def test_simple(self) -> None:
+    @parameterized.expand([(False,), (True,)])
+    def test_simple(self, batched: bool) -> None:
        """Test that the example in `docs/auth_chain_difference_algorithm.md`
        works.
        """

@@ -53,6 +56,7 @@ class EventChainStoreTestCase(HomeserverTestCase):
        event_factory = self.hs.get_event_builder_factory()
        bob = "@creator:test"
        alice = "@alice:test"
+        charlie = "@charlie:test"
        room_id = "!room:test"

        # Ensure that we have a rooms entry so that we generate the chain index.

@@ -191,6 +195,26 @@ class EventChainStoreTestCase(HomeserverTestCase):
            )
        )

+        charlie_invite = self.get_success(
+            event_factory.for_room_version(
+                RoomVersions.V6,
+                {
+                    "type": EventTypes.Member,
+                    "state_key": charlie,
+                    "sender": alice,
+                    "room_id": room_id,
+                    "content": {"tag": "charlie_invite"},
+                },
+            ).build(
+                prev_event_ids=[],
+                auth_event_ids=[
+                    create.event_id,
+                    alice_join2.event_id,
+                    power_2.event_id,
+                ],
+            )
+        )
+
        events = [
            create,
            bob_join,

@@ -200,33 +224,41 @@ class EventChainStoreTestCase(HomeserverTestCase):
            bob_join_2,
            power_2,
            alice_join2,
+            charlie_invite,
        ]

        expected_links = [
            (bob_join, create),
            (power, create),
            (power, bob_join),
            (alice_invite, create),
            (alice_invite, power),
            (alice_invite, bob_join),
            (bob_join_2, power),
            (alice_join2, power_2),
+            (charlie_invite, alice_join2),
        ]

-        self.persist(events)
+        # We either persist as a batch or one-by-one depending on test
+        # parameter.
+        if batched:
+            self.persist(events)
+        else:
+            for event in events:
+                self.persist([event])

        chain_map, link_map = self.fetch_chains(events)

        # Check that the expected links and only the expected links have been
        # added.
-        self.assertEqual(len(expected_links), len(list(link_map.get_additions())))
-
-        for start, end in expected_links:
-            start_id, start_seq = chain_map[start.event_id]
-            end_id, end_seq = chain_map[end.event_id]
-
-            self.assertIn(
-                (start_seq, end_seq), list(link_map.get_links_between(start_id, end_id))
-            )
+        event_map = {e.event_id: e for e in events}
+        reverse_chain_map = {v: event_map[k] for k, v in chain_map.items()}
+
+        self.maxDiff = None
+        self.assertCountEqual(
+            expected_links,
+            [
+                (reverse_chain_map[(s1, s2)], reverse_chain_map[(t1, t2)])
+                for s1, s2, t1, t2 in link_map.get_additions()
+            ],
+        )

        # Test that everything can reach the create event, but the create event
        # can't reach anything.

@@ -368,24 +400,23 @@ class EventChainStoreTestCase(HomeserverTestCase):

        expected_links = [
            (bob_join, create),
            (power, create),
            (power, bob_join),
            (alice_invite, create),
            (alice_invite, power),
            (alice_invite, bob_join),
        ]

        # Check that the expected links and only the expected links have been
        # added.
-        self.assertEqual(len(expected_links), len(list(link_map.get_additions())))
-
-        for start, end in expected_links:
-            start_id, start_seq = chain_map[start.event_id]
-            end_id, end_seq = chain_map[end.event_id]
-
-            self.assertIn(
-                (start_seq, end_seq), list(link_map.get_links_between(start_id, end_id))
-            )
+        event_map = {e.event_id: e for e in events}
+        reverse_chain_map = {v: event_map[k] for k, v in chain_map.items()}
+
+        self.maxDiff = None
+        self.assertCountEqual(
+            expected_links,
+            [
+                (reverse_chain_map[(s1, s2)], reverse_chain_map[(t1, t2)])
+                for s1, s2, t1, t2 in link_map.get_additions()
+            ],
+        )

    def persist(
        self,

@@ -489,8 +520,6 @@ class LinkMapTestCase(unittest.TestCase):
        link_map = _LinkMap()

        link_map.add_link((1, 1), (2, 1), new=False)
        self.assertCountEqual(link_map.get_links_between(1, 2), [(1, 1)])
-        self.assertCountEqual(link_map.get_links_from((1, 1)), [(2, 1)])
        self.assertCountEqual(link_map.get_additions(), [])
        self.assertTrue(link_map.exists_path_from((1, 5), (2, 1)))
        self.assertFalse(link_map.exists_path_from((1, 5), (2, 2)))

@@ -499,18 +528,31 @@ class LinkMapTestCase(unittest.TestCase):

        # Attempting to add a redundant link is ignored.
        self.assertFalse(link_map.add_link((1, 4), (2, 1)))
        self.assertCountEqual(link_map.get_links_between(1, 2), [(1, 1)])
        self.assertCountEqual(link_map.get_additions(), [])

        # Adding new non-redundant links works
        self.assertTrue(link_map.add_link((1, 3), (2, 3)))
        self.assertCountEqual(link_map.get_links_between(1, 2), [(1, 1), (3, 3)])
        self.assertCountEqual(link_map.get_additions(), [(1, 3, 2, 3)])

        self.assertTrue(link_map.add_link((2, 5), (1, 3)))
        self.assertCountEqual(link_map.get_links_between(2, 1), [(5, 3)])
        self.assertCountEqual(link_map.get_links_between(1, 2), [(1, 1), (3, 3)])

        self.assertCountEqual(link_map.get_additions(), [(1, 3, 2, 3), (2, 5, 1, 3)])

    def test_exists_path_from(self) -> None:
        "Check that `exists_path_from` can handle non-direct links"
        link_map = _LinkMap()

        link_map.add_link((1, 1), (2, 1), new=False)
        link_map.add_link((2, 1), (3, 1), new=False)

        self.assertTrue(link_map.exists_path_from((1, 4), (3, 1)))
        self.assertFalse(link_map.exists_path_from((1, 4), (3, 2)))

        link_map.add_link((1, 5), (2, 3), new=False)
        link_map.add_link((2, 2), (3, 3), new=False)

        self.assertTrue(link_map.exists_path_from((1, 6), (3, 2)))
        self.assertFalse(link_map.exists_path_from((1, 4), (3, 2)))


class EventChainBackgroundUpdateTestCase(HomeserverTestCase):
    servlets = [