Compare commits: devon/medi...release-v1

38 commits
| SHA1 |
|---|
| f92c6455ef |
| a36f3a6d87 |
| 67920c0aca |
| f5ed52c1e2 |
| 99c15f4630 |
| 09b4109c2e |
| 40ce11ded0 |
| 3dade08e7c |
| 1920dfff40 |
| b7728a2df1 |
| c6dfe70014 |
| b5d94f654c |
| 7c633f1a58 |
| ae877aa101 |
| 740fc885cd |
| 9a62b2d47a |
| d0873d549a |
| c9adbc6a1c |
| 9f9eb56333 |
| fe8bb620de |
| b8146d4b03 |
| 411d239db4 |
| d18edf67d6 |
| fd5d3d852d |
| ea376126a0 |
| 74be5cfdbc |
| f2ca2e31f7 |
| 6dc1ecd359 |
| 2965c9970c |
| 5f587dfd38 |
| a4ec96ca34 |
| 02dca7c67a |
| dbf5b0be67 |
| b2f12d22e4 |
| d67e9c5367 |
| 2b5c6239de |
| 9b8eebbe4e |
| 5ced4efe1d |
.github/workflows/latest_deps.yml (vendored)

@@ -200,7 +200,7 @@ jobs:
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh

-      - uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
+      - uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
         with:
           cache-dependency-path: complement/go.sum
           go-version-file: complement/go.mod
.github/workflows/tests.yml (vendored)

@@ -669,7 +669,7 @@ jobs:
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh

-      - uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
+      - uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
         with:
           cache-dependency-path: complement/go.sum
           go-version-file: complement/go.mod
.github/workflows/twisted_trunk.yml (vendored)

@@ -173,7 +173,7 @@ jobs:
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh

-      - uses: actions/setup-go@0aaccfd150d50ccaeb58ebd88d36e91967a5f35b # v5.4.0
+      - uses: actions/setup-go@d35c59abb061a4a6fb18e82ac0862c26744d6ab5 # v5.5.0
         with:
           cache-dependency-path: complement/go.sum
           go-version-file: complement/go.mod
CHANGES.md

@@ -1,3 +1,122 @@
# Synapse 1.130.0 (2025-05-20)

### Bugfixes

- Fix startup being blocked on creating a new index that was introduced in v1.130.0rc1. ([\#18439](https://github.com/element-hq/synapse/issues/18439))
- Fix the ordering of local messages in rooms that were affected by [GHSA-v56r-hwv5-mxg6](https://github.com/advisories/GHSA-v56r-hwv5-mxg6). ([\#18447](https://github.com/element-hq/synapse/issues/18447))


# Synapse 1.130.0rc1 (2025-05-13)

### Features

- Add an Admin API endpoint `GET /_synapse/admin/v1/scheduled_tasks` to fetch scheduled tasks. ([\#18214](https://github.com/element-hq/synapse/issues/18214))
- Add config option `user_directory.exclude_remote_users` which, when enabled, excludes remote users from user directory search results. ([\#18300](https://github.com/element-hq/synapse/issues/18300))
- Add support for handling `GET /devices/` on workers. ([\#18355](https://github.com/element-hq/synapse/issues/18355))

### Bugfixes

- Fix a longstanding bug where Synapse would immediately retry a failing push endpoint when a new event is received, ignoring any backoff timers. ([\#18363](https://github.com/element-hq/synapse/issues/18363))
- Pass leave from remote invite rejection down Sliding Sync. ([\#18375](https://github.com/element-hq/synapse/issues/18375))

### Updates to the Docker image

- In `configure_workers_and_start.py`, use the same absolute path of Python in the interpreter shebang, and invoke child Python processes with `sys.executable`. ([\#18291](https://github.com/element-hq/synapse/issues/18291))
- Optimize the build of the workers image. ([\#18292](https://github.com/element-hq/synapse/issues/18292))
- In `start_for_complement.sh`, replace some external program calls with shell builtins. ([\#18293](https://github.com/element-hq/synapse/issues/18293))
- When generating container scripts from templates, don't add a leading newline so that their shebangs may be handled correctly. ([\#18295](https://github.com/element-hq/synapse/issues/18295))

### Improved Documentation

- Improve formatting of the README file. ([\#18218](https://github.com/element-hq/synapse/issues/18218))
- Add documentation for configuring [Pocket ID](https://github.com/pocket-id/pocket-id) as an OIDC provider. ([\#18237](https://github.com/element-hq/synapse/issues/18237))
- Fix typo in docs about the `push` config option. Contributed by @HarHarLinks. ([\#18320](https://github.com/element-hq/synapse/issues/18320))
- Add `/_matrix/federation/v1/version` to the list of federation endpoints that can be handled by workers. ([\#18377](https://github.com/element-hq/synapse/issues/18377))
- Add an Admin API endpoint `GET /_synapse/admin/v1/scheduled_tasks` to fetch scheduled tasks. ([\#18384](https://github.com/element-hq/synapse/issues/18384))

### Internal Changes

- Return a specific error code when adding an email address / phone number to an account is not supported ([MSC4178](https://github.com/matrix-org/matrix-spec-proposals/pull/4178)). ([\#17578](https://github.com/element-hq/synapse/issues/17578))
- Stop auto-provisioning missing users & devices when delegating auth to Matrix Authentication Service. Requires MAS 0.13.0 or later. ([\#18181](https://github.com/element-hq/synapse/issues/18181))
- Apply file hashing and existing quarantines to media downloaded for URL previews. ([\#18297](https://github.com/element-hq/synapse/issues/18297))
- Allow a few admin APIs used by matrix-authentication-service to run on workers. ([\#18313](https://github.com/element-hq/synapse/issues/18313))
- Apply `should_drop_federated_event` to federation invites. ([\#18330](https://github.com/element-hq/synapse/issues/18330))
- Allow the `/rooms/` admin API to be run on workers. ([\#18360](https://github.com/element-hq/synapse/issues/18360))
- Minor performance improvements to the notifier. ([\#18367](https://github.com/element-hq/synapse/issues/18367))
- Slight performance increase when using the ratelimiter. ([\#18369](https://github.com/element-hq/synapse/issues/18369))
- Don't validate the `at_hash` (access token hash) field in OIDC ID Tokens if we don't end up actually using the OIDC Access Token. ([\#18374](https://github.com/element-hq/synapse/issues/18374), [\#18385](https://github.com/element-hq/synapse/issues/18385))
- Fix test failures when using authlib 1.5.2. ([\#18390](https://github.com/element-hq/synapse/issues/18390))
- Refactor [MSC4186](https://github.com/matrix-org/matrix-spec-proposals/pull/4186) Simplified Sliding Sync room list tests to cover both new and fallback logic paths. ([\#18399](https://github.com/element-hq/synapse/issues/18399))

### Updates to locked dependencies

* Bump actions/add-to-project from 280af8ae1f83a494cfad2cb10f02f6d13529caa9 to 5b1a254a3546aef88e0a7724a77a623fa2e47c36. ([\#18365](https://github.com/element-hq/synapse/issues/18365))
* Bump actions/download-artifact from 4.2.1 to 4.3.0. ([\#18364](https://github.com/element-hq/synapse/issues/18364))
* Bump actions/setup-go from 5.4.0 to 5.5.0. ([\#18426](https://github.com/element-hq/synapse/issues/18426))
* Bump anyhow from 1.0.97 to 1.0.98. ([\#18336](https://github.com/element-hq/synapse/issues/18336))
* Bump packaging from 24.2 to 25.0. ([\#18393](https://github.com/element-hq/synapse/issues/18393))
* Bump pillow from 11.1.0 to 11.2.1. ([\#18429](https://github.com/element-hq/synapse/issues/18429))
* Bump pydantic from 2.10.3 to 2.11.4. ([\#18394](https://github.com/element-hq/synapse/issues/18394))
* Bump pyo3-log from 0.12.2 to 0.12.3. ([\#18317](https://github.com/element-hq/synapse/issues/18317))
* Bump pyopenssl from 24.3.0 to 25.0.0. ([\#18315](https://github.com/element-hq/synapse/issues/18315))
* Bump sha2 from 0.10.8 to 0.10.9. ([\#18395](https://github.com/element-hq/synapse/issues/18395))
* Bump sigstore/cosign-installer from 3.8.1 to 3.8.2. ([\#18366](https://github.com/element-hq/synapse/issues/18366))
* Bump softprops/action-gh-release from 1 to 2. ([\#18264](https://github.com/element-hq/synapse/issues/18264))
* Bump stefanzweifel/git-auto-commit-action from 5.1.0 to 5.2.0. ([\#18354](https://github.com/element-hq/synapse/issues/18354))
* Bump txredisapi from 1.4.10 to 1.4.11. ([\#18392](https://github.com/element-hq/synapse/issues/18392))
* Bump types-jsonschema from 4.23.0.20240813 to 4.23.0.20241208. ([\#18305](https://github.com/element-hq/synapse/issues/18305))
* Bump types-psycopg2 from 2.9.21.20250121 to 2.9.21.20250318. ([\#18316](https://github.com/element-hq/synapse/issues/18316))


# Synapse 1.129.0 (2025-05-06)

No significant changes since 1.129.0rc2.


# Synapse 1.129.0rc2 (2025-04-30)

Synapse 1.129.0rc1 was never formally released due to regressions discovered during the release process. 1.129.0rc2 fixes those regressions by reverting the affected PRs.

### Internal Changes

- Revert the slow background update introduced by [\#18068](https://github.com/element-hq/synapse/issues/18068) in v1.128.0. ([\#18372](https://github.com/element-hq/synapse/issues/18372))
- Revert "Add total event, unencrypted message, and e2ee event counts to stats reporting", added in v1.129.0rc1. ([\#18373](https://github.com/element-hq/synapse/issues/18373))


# Synapse 1.129.0rc1 (2025-04-15)

### Features

- Add `passthrough_authorization_parameters` in OIDC configuration to allow passing parameters to the authorization grant URL. ([\#18232](https://github.com/element-hq/synapse/issues/18232))
- Add `total_event_count`, `total_message_count`, and `total_e2ee_event_count` fields to the homeserver usage statistics. ([\#18260](https://github.com/element-hq/synapse/issues/18260))

### Bugfixes

- Fix `force_tracing_for_users` config when using delegated auth. ([\#18334](https://github.com/element-hq/synapse/issues/18334))
- Fix the token introspection cache logging access tokens when MAS integration is in use. ([\#18335](https://github.com/element-hq/synapse/issues/18335))
- Stop caching introspection failures when delegating auth to MAS. ([\#18339](https://github.com/element-hq/synapse/issues/18339))
- Fix an `ExternalIDReuse` exception after migrating to MAS on workers with high traffic. ([\#18342](https://github.com/element-hq/synapse/issues/18342))
- Fix a minor performance regression caused by tracking of room participation. Regressed in v1.128.0. ([\#18345](https://github.com/element-hq/synapse/issues/18345))

### Updates to the Docker image

- Optimize the build of the complement-synapse image. ([\#18294](https://github.com/element-hq/synapse/issues/18294))

### Internal Changes

- Disable statement timeout during room purge. ([\#18133](https://github.com/element-hq/synapse/issues/18133))
- Add a cache to storage functions used to auth requests when using delegated auth. ([\#18337](https://github.com/element-hq/synapse/issues/18337))


# Synapse 1.128.0 (2025-04-08)

No significant changes since 1.128.0rc1.
Cargo.lock (generated)

@@ -480,9 +480,9 @@ dependencies = [

 [[package]]
 name = "sha2"
-version = "0.10.8"
+version = "0.10.9"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "793db75ad2bcafc3ffa7c68b215fee268f537982cd901d132f89c6343f3a3dc8"
+checksum = "a7507d819769d01a365ab707794a4084392c824f54a7a6a7862f8c3d0892b283"
 dependencies = [
  "cfg-if",
  "cpufeatures",
README.rst

@@ -253,15 +253,17 @@ Alongside all that, join our developer community on Matrix:
 Copyright and Licensing
 =======================

-Copyright 2014-2017 OpenMarket Ltd
-Copyright 2017 Vector Creations Ltd
-Copyright 2017-2025 New Vector Ltd
+| Copyright 2014-2017 OpenMarket Ltd
+| Copyright 2017 Vector Creations Ltd
+| Copyright 2017-2025 New Vector Ltd

 This software is dual-licensed by New Vector Ltd (Element). It can be used either:

 (1) for free under the terms of the GNU Affero General Public License (as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version); OR

 (2) under the terms of a paid-for Element Commercial License agreement between you and Element (the terms of which may vary depending on what you and Element have agreed to).

 Unless required by applicable law or agreed to in writing, software distributed under the Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the Licenses for the specific language governing permissions and limitations under the Licenses.
changelog.d (deleted entry files)

@@ -1 +0,0 @@
-Disable statement timeout during room purge.
@@ -1 +0,0 @@
-Add `passthrough_authorization_parameters` in OIDC configuration to allow to pass parameters to the authorization grant URL.
@@ -1 +0,0 @@
-Add documentation for configuring [Pocket ID](https://github.com/pocket-id/pocket-id) as an OIDC provider.
@@ -1 +0,0 @@
-In configure_workers_and_start.py, use the same absolute path of Python in the interpreter shebang, and invoke child Python processes with `sys.executable`.
@@ -1 +0,0 @@
-Optimize the build of the workers image.
@@ -1 +0,0 @@
-In start_for_complement.sh, replace some external program calls with shell builtins.
@@ -1 +0,0 @@
-Optimize the build of the complement-synapse image.
@@ -1 +0,0 @@
-When generating container scripts from templates, don't add a leading newline so that their shebangs may be handled correctly.
@@ -1 +0,0 @@
-Fix typo in docs about the `push` config option. Contributed by @HarHarLinks.
@@ -1 +0,0 @@
-Fix `force_tracing_for_users` config when using delegated auth.
@@ -1 +0,0 @@
-Fix the token introspection cache logging access tokens when MAS integration is in use.
@@ -1 +0,0 @@
-Add cache to storage functions used to auth requests when using delegated auth.
@@ -1 +0,0 @@
-Stop caching introspection failures when delegating auth to MAS.
@@ -1 +0,0 @@
-Fix `ExternalIDReuse` exception after migrating to MAS on workers with a high traffic.
@@ -1 +0,0 @@
-Fix minor performance regression caused by tracking of room participation. Regressed in v1.128.0.
@@ -1 +0,0 @@
-Add support for handling `GET /devices/` on workers.
@@ -1 +0,0 @@
-Allow `/rooms/` admin API to be run on workers.
@@ -1 +0,0 @@
-Fix longstanding bug where Synapse would immediately retry a failing push endpoint when a new event is received, ignoring any backoff timers.
@@ -1 +0,0 @@
-Minor performance improvements to the notifier.
@@ -1 +0,0 @@
-Slight performance increase when using the ratelimiter.
@@ -1 +0,0 @@
-Allow client & media admin apis to coexist.
debian/changelog (vendored)

@@ -1,3 +1,33 @@
matrix-synapse-py3 (1.130.0) stable; urgency=medium

  * New Synapse release 1.130.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 20 May 2025 08:34:13 -0600

matrix-synapse-py3 (1.130.0~rc1) stable; urgency=medium

  * New Synapse release 1.130.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 13 May 2025 10:44:04 +0100

matrix-synapse-py3 (1.129.0) stable; urgency=medium

  * New Synapse release 1.129.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 06 May 2025 12:22:11 +0100

matrix-synapse-py3 (1.129.0~rc2) stable; urgency=medium

  * New synapse release 1.129.0rc2.

 -- Synapse Packaging team <packages@matrix.org>  Wed, 30 Apr 2025 13:13:16 +0000

matrix-synapse-py3 (1.129.0~rc1) stable; urgency=medium

  * New Synapse release 1.129.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 15 Apr 2025 10:47:43 -0600

matrix-synapse-py3 (1.128.0) stable; urgency=medium

  * New Synapse release 1.128.0.
@@ -202,6 +202,7 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
         "app": "synapse.app.generic_worker",
         "listener_resources": ["federation"],
         "endpoint_patterns": [
+            "^/_matrix/federation/v1/version$",
             "^/_matrix/federation/(v1|v2)/event/",
             "^/_matrix/federation/(v1|v2)/state/",
             "^/_matrix/federation/(v1|v2)/state_ids/",
docs/admin_api/scheduled_tasks.md (new file)

@@ -0,0 +1,54 @@
# Show scheduled tasks

This API returns information about scheduled tasks.

To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).

The API is:
```
GET /_synapse/admin/v1/scheduled_tasks
```

It returns a JSON body like the following:

```json
{
    "scheduled_tasks": [
        {
            "id": "GSA124oegf1",
            "action": "shutdown_room",
            "status": "complete",
            "timestamp_ms": 23423523,
            "resource_id": "!roomid",
            "result": "some result",
            "error": null
        }
    ]
}
```

**Query parameters:**

* `action_name`: string - Optional. Returns only the scheduled tasks with the given action name.
* `resource_id`: string - Optional. Returns only the scheduled tasks with the given resource id.
* `status`: string - Optional. Returns only the scheduled tasks matching the given status, one of
  - "scheduled" - Task is scheduled but not active
  - "active" - Task is active and probably running, and if not will be run on the next scheduler loop run
  - "complete" - Task has completed successfully
  - "failed" - Task is over and either returned a failed status, or had an exception
* `max_timestamp`: int - Optional. Returns only the scheduled tasks with a timestamp earlier than the specified one.

**Response**

The following fields are returned in the JSON response body along with a `200` HTTP status code:

* `id`: string - ID of the scheduled task.
* `action`: string - The name of the scheduled task's action.
* `status`: string - The status of the scheduled task.
* `timestamp_ms`: integer - The timestamp (in milliseconds since the Unix epoch) of the given task. If the status is "scheduled" then this represents when it should be launched; otherwise it represents the last time this task had a change of state.
* `resource_id`: Optional string - The resource ID of the scheduled task, if it possesses one.
* `result`: Optional JSON - Any result of the scheduled task, if given.
* `error`: Optional string - If the task has the status "failed", the error associated with this failure.
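As a usage illustration, here is a minimal Python sketch of querying this endpoint; the homeserver URL and admin token are placeholders, and the `requests` dependency is an assumption (any HTTP client would do):

```python
import requests

HOMESERVER = "https://synapse.example.com"   # placeholder base URL
ADMIN_TOKEN = "<server-admin access token>"  # placeholder token

# List failed "shutdown_room" tasks, using the query parameters documented above.
resp = requests.get(
    f"{HOMESERVER}/_synapse/admin/v1/scheduled_tasks",
    headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    params={"action_name": "shutdown_room", "status": "failed"},
)
resp.raise_for_status()
for task in resp.json()["scheduled_tasks"]:
    print(task["id"], task["status"], task.get("error"))
```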
@@ -353,6 +353,8 @@ callback returns `False`, Synapse falls through to the next one. The value of the first
 callback that does not return `False` will be used. If this happens, Synapse will not call
 any of the subsequent implementations of this callback.

+Note that this check is applied to federation invites as of Synapse v1.130.0.
+
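Judging by the changelog entry above ("Apply `should_drop_federated_event` to federation invites", [\#18330](https://github.com/element-hq/synapse/issues/18330)), this hunk appears to belong to the `should_drop_federated_event` section of the spam-checker callback docs. A minimal module sketch of such a callback, assuming the standard `register_spam_checker_callbacks` module API; the blocking rule itself is a made-up placeholder:

```python
from synapse.module_api import ModuleApi


class DropBlockedServerEvents:
    def __init__(self, config: dict, api: ModuleApi):
        # Register the callback. Returning True drops the federated event;
        # returning False falls through to the next registered callback,
        # as described in the docs above.
        api.register_spam_checker_callbacks(
            should_drop_federated_event=self.should_drop_federated_event,
        )

    async def should_drop_federated_event(self, event) -> bool:
        # Placeholder rule: drop anything from a hypothetical blocked server.
        # As of Synapse v1.130.0 this check also covers federation invites.
        return event.sender.endswith(":blocked.example.com")
```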
### `check_login_for_spam`
@@ -117,6 +117,16 @@ each upgrade are complete before moving on to the next upgrade, to avoid
 stacking them up. You can monitor the currently running background updates with
 [the Admin API](usage/administration/admin_api/background_updates.html#status).

+# Upgrading to v1.130.0
+
+## Documented endpoint which can be delegated to a federation worker
+
+The endpoint `^/_matrix/federation/v1/version$` can be delegated to a federation
+worker. This is not new behaviour, but had not been documented yet. The
+[list of delegatable endpoints](workers.md#synapseappgeneric_worker) has
+been updated to include it. Make sure to check your reverse proxy rules if you
+are using workers.
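As a quick sanity check of such reverse proxy rules, candidate request paths can be tested against the documented patterns; a small Python sketch (the pattern list is a subset copied from workers.md, not a full routing table):

```python
import re

# Subset of the delegatable federation endpoint patterns from workers.md.
FEDERATION_WORKER_PATTERNS = [
    r"^/_matrix/federation/v1/version$",
    r"^/_matrix/federation/v1/event/",
    r"^/_matrix/federation/v1/state/",
    r"^/_matrix/federation/v1/state_ids/",
]

def handled_by_federation_worker(path: str) -> bool:
    """Return True if the path matches a delegatable federation pattern."""
    return any(re.match(pattern, path) for pattern in FEDERATION_WORKER_PATTERNS)

assert handled_by_federation_worker("/_matrix/federation/v1/version")
assert not handled_by_federation_worker("/_matrix/federation/v1/versions")  # the '$' anchor matters
```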
# Upgrading to v1.126.0

## Room list publication rules change
@@ -4095,6 +4095,7 @@ This option has the following sub-options:
 * `prefer_local_users`: Defines whether to prefer local users in search query results.
   If set to true, local users are more likely to appear above remote users when searching the
   user directory. Defaults to false.
+* `exclude_remote_users`: If set to true, the search will only return local users. Defaults to false.
 * `show_locked_users`: Defines whether to show locked users in search query results. Defaults to false.

 Example configuration:

@@ -4103,6 +4104,7 @@ user_directory:
   enabled: false
   search_all_users: true
   prefer_local_users: true
+  exclude_remote_users: false
   show_locked_users: true
 ```
 ---
@@ -200,6 +200,7 @@ information.
     ^/_matrix/client/(api/v1|r0|v3)/rooms/[^/]+/initialSync$

     # Federation requests
+    ^/_matrix/federation/v1/version$
     ^/_matrix/federation/v1/event/
     ^/_matrix/federation/v1/state/
     ^/_matrix/federation/v1/state_ids/

@@ -322,6 +323,15 @@ For multiple workers not handling the SSO endpoints properly, see
 [#7530](https://github.com/matrix-org/synapse/issues/7530) and
 [#9427](https://github.com/matrix-org/synapse/issues/9427).

+Additionally, when MSC3861 is enabled (`experimental_features.msc3861.enabled`
+set to `true`), the following endpoints can be handled by the worker:
+
+    ^/_synapse/admin/v2/users/[^/]+$
+    ^/_synapse/admin/v1/username_available$
+    ^/_synapse/admin/v1/users/[^/]+/_allow_cross_signing_replacement_without_uia$
+    # Only the GET method:
+    ^/_synapse/admin/v1/users/[^/]+/devices$
+
 Note that a [HTTP listener](usage/configuration/config_documentation.md#listeners)
 with `client` and `federation` `resources` must be configured in the
 [`worker_listeners`](usage/configuration/config_documentation.md#worker_listeners)
poetry.lock (generated)
@@ -1561,14 +1561,14 @@ tests = ["Sphinx", "doubles", "flake8", "flake8-quotes", "gevent", "mock", "pyte

 [[package]]
 name = "packaging"
-version = "24.2"
+version = "25.0"
 description = "Core utilities for Python packages"
 optional = false
 python-versions = ">=3.8"
 groups = ["main", "dev"]
 files = [
-    {file = "packaging-24.2-py3-none-any.whl", hash = "sha256:09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759"},
-    {file = "packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f"},
+    {file = "packaging-25.0-py3-none-any.whl", hash = "sha256:29572ef2b1f17581046b3a2227d5c611fb25ec70ca1ba8554b24b0e69331a484"},
+    {file = "packaging-25.0.tar.gz", hash = "sha256:d443872c98d677bf60f6a1f2f8c1cb748e8fe762d2bf9d3148b5599295b0fc4f"},
 ]

 [[package]]
@@ -1600,89 +1600,100 @@ files = [

 [[package]]
 name = "pillow"
-version = "11.1.0"
+version = "11.2.1"
 description = "Python Imaging Library (Fork)"
 optional = false
 python-versions = ">=3.9"
 groups = ["main"]
 files = [
-    {file = "pillow-11.1.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:e1abe69aca89514737465752b4bcaf8016de61b3be1397a8fc260ba33321b3a8"},
-    {file = "pillow-11.1.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c640e5a06869c75994624551f45e5506e4256562ead981cce820d5ab39ae2192"},
-    {file = "pillow-11.1.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a07dba04c5e22824816b2615ad7a7484432d7f540e6fa86af60d2de57b0fcee2"},
-    {file = "pillow-11.1.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e267b0ed063341f3e60acd25c05200df4193e15a4a5807075cd71225a2386e26"},
-    {file = "pillow-11.1.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:bd165131fd51697e22421d0e467997ad31621b74bfc0b75956608cb2906dda07"},
-    {file = "pillow-11.1.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:abc56501c3fd148d60659aae0af6ddc149660469082859fa7b066a298bde9482"},
-    {file = "pillow-11.1.0-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:54ce1c9a16a9561b6d6d8cb30089ab1e5eb66918cb47d457bd996ef34182922e"},
-    {file = "pillow-11.1.0-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:73ddde795ee9b06257dac5ad42fcb07f3b9b813f8c1f7f870f402f4dc54b5269"},
-    {file = "pillow-11.1.0-cp310-cp310-win32.whl", hash = "sha256:3a5fe20a7b66e8135d7fd617b13272626a28278d0e578c98720d9ba4b2439d49"},
-    {file = "pillow-11.1.0-cp310-cp310-win_amd64.whl", hash = "sha256:b6123aa4a59d75f06e9dd3dac5bf8bc9aa383121bb3dd9a7a612e05eabc9961a"},
-    {file = "pillow-11.1.0-cp310-cp310-win_arm64.whl", hash = "sha256:a76da0a31da6fcae4210aa94fd779c65c75786bc9af06289cd1c184451ef7a65"},
-    {file = "pillow-11.1.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:e06695e0326d05b06833b40b7ef477e475d0b1ba3a6d27da1bb48c23209bf457"},
-    {file = "pillow-11.1.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:96f82000e12f23e4f29346e42702b6ed9a2f2fea34a740dd5ffffcc8c539eb35"},
-    {file = "pillow-11.1.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a3cd561ded2cf2bbae44d4605837221b987c216cff94f49dfeed63488bb228d2"},
-    {file = "pillow-11.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f189805c8be5ca5add39e6f899e6ce2ed824e65fb45f3c28cb2841911da19070"},
-    {file = "pillow-11.1.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:dd0052e9db3474df30433f83a71b9b23bd9e4ef1de13d92df21a52c0303b8ab6"},
-    {file = "pillow-11.1.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:837060a8599b8f5d402e97197d4924f05a2e0d68756998345c829c33186217b1"},
-    {file = "pillow-11.1.0-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:aa8dd43daa836b9a8128dbe7d923423e5ad86f50a7a14dc688194b7be5c0dea2"},
-    {file = "pillow-11.1.0-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:0a2f91f8a8b367e7a57c6e91cd25af510168091fb89ec5146003e424e1558a96"},
-    {file = "pillow-11.1.0-cp311-cp311-win32.whl", hash = "sha256:c12fc111ef090845de2bb15009372175d76ac99969bdf31e2ce9b42e4b8cd88f"},
-    {file = "pillow-11.1.0-cp311-cp311-win_amd64.whl", hash = "sha256:fbd43429d0d7ed6533b25fc993861b8fd512c42d04514a0dd6337fb3ccf22761"},
-    {file = "pillow-11.1.0-cp311-cp311-win_arm64.whl", hash = "sha256:f7955ecf5609dee9442cbface754f2c6e541d9e6eda87fad7f7a989b0bdb9d71"},
-    {file = "pillow-11.1.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:2062ffb1d36544d42fcaa277b069c88b01bb7298f4efa06731a7fd6cc290b81a"},
-    {file = "pillow-11.1.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a85b653980faad27e88b141348707ceeef8a1186f75ecc600c395dcac19f385b"},
-    {file = "pillow-11.1.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9409c080586d1f683df3f184f20e36fb647f2e0bc3988094d4fd8c9f4eb1b3b3"},
-    {file = "pillow-11.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7fdadc077553621911f27ce206ffcbec7d3f8d7b50e0da39f10997e8e2bb7f6a"},
-    {file = "pillow-11.1.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:93a18841d09bcdd774dcdc308e4537e1f867b3dec059c131fde0327899734aa1"},
-    {file = "pillow-11.1.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:9aa9aeddeed452b2f616ff5507459e7bab436916ccb10961c4a382cd3e03f47f"},
-    {file = "pillow-11.1.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3cdcdb0b896e981678eee140d882b70092dac83ac1cdf6b3a60e2216a73f2b91"},
-    {file = "pillow-11.1.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:36ba10b9cb413e7c7dfa3e189aba252deee0602c86c309799da5a74009ac7a1c"},
-    {file = "pillow-11.1.0-cp312-cp312-win32.whl", hash = "sha256:cfd5cd998c2e36a862d0e27b2df63237e67273f2fc78f47445b14e73a810e7e6"},
-    {file = "pillow-11.1.0-cp312-cp312-win_amd64.whl", hash = "sha256:a697cd8ba0383bba3d2d3ada02b34ed268cb548b369943cd349007730c92bddf"},
-    {file = "pillow-11.1.0-cp312-cp312-win_arm64.whl", hash = "sha256:4dd43a78897793f60766563969442020e90eb7847463eca901e41ba186a7d4a5"},
-    {file = "pillow-11.1.0-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:ae98e14432d458fc3de11a77ccb3ae65ddce70f730e7c76140653048c71bfcbc"},
-    {file = "pillow-11.1.0-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:cc1331b6d5a6e144aeb5e626f4375f5b7ae9934ba620c0ac6b3e43d5e683a0f0"},
-    {file = "pillow-11.1.0-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:758e9d4ef15d3560214cddbc97b8ef3ef86ce04d62ddac17ad39ba87e89bd3b1"},
-    {file = "pillow-11.1.0-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b523466b1a31d0dcef7c5be1f20b942919b62fd6e9a9be199d035509cbefc0ec"},
-    {file = "pillow-11.1.0-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:9044b5e4f7083f209c4e35aa5dd54b1dd5b112b108648f5c902ad586d4f945c5"},
-    {file = "pillow-11.1.0-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:3764d53e09cdedd91bee65c2527815d315c6b90d7b8b79759cc48d7bf5d4f114"},
-    {file = "pillow-11.1.0-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:31eba6bbdd27dde97b0174ddf0297d7a9c3a507a8a1480e1e60ef914fe23d352"},
-    {file = "pillow-11.1.0-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:b5d658fbd9f0d6eea113aea286b21d3cd4d3fd978157cbf2447a6035916506d3"},
-    {file = "pillow-11.1.0-cp313-cp313-win32.whl", hash = "sha256:f86d3a7a9af5d826744fabf4afd15b9dfef44fe69a98541f666f66fbb8d3fef9"},
-    {file = "pillow-11.1.0-cp313-cp313-win_amd64.whl", hash = "sha256:593c5fd6be85da83656b93ffcccc2312d2d149d251e98588b14fbc288fd8909c"},
-    {file = "pillow-11.1.0-cp313-cp313-win_arm64.whl", hash = "sha256:11633d58b6ee5733bde153a8dafd25e505ea3d32e261accd388827ee987baf65"},
-    {file = "pillow-11.1.0-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:70ca5ef3b3b1c4a0812b5c63c57c23b63e53bc38e758b37a951e5bc466449861"},
-    {file = "pillow-11.1.0-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:8000376f139d4d38d6851eb149b321a52bb8893a88dae8ee7d95840431977081"},
-    {file = "pillow-11.1.0-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9ee85f0696a17dd28fbcfceb59f9510aa71934b483d1f5601d1030c3c8304f3c"},
-    {file = "pillow-11.1.0-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:dd0e081319328928531df7a0e63621caf67652c8464303fd102141b785ef9547"},
-    {file = "pillow-11.1.0-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:e63e4e5081de46517099dc30abe418122f54531a6ae2ebc8680bcd7096860eab"},
-    {file = "pillow-11.1.0-cp313-cp313t-win32.whl", hash = "sha256:dda60aa465b861324e65a78c9f5cf0f4bc713e4309f83bc387be158b077963d9"},
-    {file = "pillow-11.1.0-cp313-cp313t-win_amd64.whl", hash = "sha256:ad5db5781c774ab9a9b2c4302bbf0c1014960a0a7be63278d13ae6fdf88126fe"},
-    {file = "pillow-11.1.0-cp313-cp313t-win_arm64.whl", hash = "sha256:67cd427c68926108778a9005f2a04adbd5e67c442ed21d95389fe1d595458756"},
-    {file = "pillow-11.1.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:bf902d7413c82a1bfa08b06a070876132a5ae6b2388e2712aab3a7cbc02205c6"},
-    {file = "pillow-11.1.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:c1eec9d950b6fe688edee07138993e54ee4ae634c51443cfb7c1e7613322718e"},
-    {file = "pillow-11.1.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:8e275ee4cb11c262bd108ab2081f750db2a1c0b8c12c1897f27b160c8bd57bbc"},
-    {file = "pillow-11.1.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:4db853948ce4e718f2fc775b75c37ba2efb6aaea41a1a5fc57f0af59eee774b2"},
-    {file = "pillow-11.1.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:ab8a209b8485d3db694fa97a896d96dd6533d63c22829043fd9de627060beade"},
-    {file = "pillow-11.1.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:54251ef02a2309b5eec99d151ebf5c9904b77976c8abdcbce7891ed22df53884"},
-    {file = "pillow-11.1.0-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:5bb94705aea800051a743aa4874bb1397d4695fb0583ba5e425ee0328757f196"},
-    {file = "pillow-11.1.0-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:89dbdb3e6e9594d512780a5a1c42801879628b38e3efc7038094430844e271d8"},
-    {file = "pillow-11.1.0-cp39-cp39-win32.whl", hash = "sha256:e5449ca63da169a2e6068dd0e2fcc8d91f9558aba89ff6d02121ca8ab11e79e5"},
-    {file = "pillow-11.1.0-cp39-cp39-win_amd64.whl", hash = "sha256:3362c6ca227e65c54bf71a5f88b3d4565ff1bcbc63ae72c34b07bbb1cc59a43f"},
-    {file = "pillow-11.1.0-cp39-cp39-win_arm64.whl", hash = "sha256:b20be51b37a75cc54c2c55def3fa2c65bb94ba859dde241cd0a4fd302de5ae0a"},
-    {file = "pillow-11.1.0-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:8c730dc3a83e5ac137fbc92dfcfe1511ce3b2b5d7578315b63dbbb76f7f51d90"},
-    {file = "pillow-11.1.0-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:7d33d2fae0e8b170b6a6c57400e077412240f6f5bb2a342cf1ee512a787942bb"},
-    {file = "pillow-11.1.0-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a8d65b38173085f24bc07f8b6c505cbb7418009fa1a1fcb111b1f4961814a442"},
-    {file = "pillow-11.1.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:015c6e863faa4779251436db398ae75051469f7c903b043a48f078e437656f83"},
-    {file = "pillow-11.1.0-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:d44ff19eea13ae4acdaaab0179fa68c0c6f2f45d66a4d8ec1eda7d6cecbcc15f"},
-    {file = "pillow-11.1.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:d3d8da4a631471dfaf94c10c85f5277b1f8e42ac42bade1ac67da4b4a7359b73"},
-    {file = "pillow-11.1.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:4637b88343166249fe8aa94e7c4a62a180c4b3898283bb5d3d2fd5fe10d8e4e0"},
-    {file = "pillow-11.1.0.tar.gz", hash = "sha256:368da70808b36d73b4b390a8ffac11069f8a5c85f29eff1f1b01bcf3ef5b2a20"},
+    {file = "pillow-11.2.1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:d57a75d53922fc20c165016a20d9c44f73305e67c351bbc60d1adaf662e74047"},
+    {file = "pillow-11.2.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:127bf6ac4a5b58b3d32fc8289656f77f80567d65660bc46f72c0d77e6600cc95"},
+    {file = "pillow-11.2.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b4ba4be812c7a40280629e55ae0b14a0aafa150dd6451297562e1764808bbe61"},
+    {file = "pillow-11.2.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c8bd62331e5032bc396a93609982a9ab6b411c05078a52f5fe3cc59234a3abd1"},
+    {file = "pillow-11.2.1-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:562d11134c97a62fe3af29581f083033179f7ff435f78392565a1ad2d1c2c45c"},
+    {file = "pillow-11.2.1-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:c97209e85b5be259994eb5b69ff50c5d20cca0f458ef9abd835e262d9d88b39d"},
+    {file = "pillow-11.2.1-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:0c3e6d0f59171dfa2e25d7116217543310908dfa2770aa64b8f87605f8cacc97"},
+    {file = "pillow-11.2.1-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:cc1c3bc53befb6096b84165956e886b1729634a799e9d6329a0c512ab651e579"},
+    {file = "pillow-11.2.1-cp310-cp310-win32.whl", hash = "sha256:312c77b7f07ab2139924d2639860e084ec2a13e72af54d4f08ac843a5fc9c79d"},
+    {file = "pillow-11.2.1-cp310-cp310-win_amd64.whl", hash = "sha256:9bc7ae48b8057a611e5fe9f853baa88093b9a76303937449397899385da06fad"},
+    {file = "pillow-11.2.1-cp310-cp310-win_arm64.whl", hash = "sha256:2728567e249cdd939f6cc3d1f049595c66e4187f3c34078cbc0a7d21c47482d2"},
+    {file = "pillow-11.2.1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:35ca289f712ccfc699508c4658a1d14652e8033e9b69839edf83cbdd0ba39e70"},
+    {file = "pillow-11.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e0409af9f829f87a2dfb7e259f78f317a5351f2045158be321fd135973fff7bf"},
+    {file = "pillow-11.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d4e5c5edee874dce4f653dbe59db7c73a600119fbea8d31f53423586ee2aafd7"},
+    {file = "pillow-11.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:b93a07e76d13bff9444f1a029e0af2964e654bfc2e2c2d46bfd080df5ad5f3d8"},
+    {file = "pillow-11.2.1-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:e6def7eed9e7fa90fde255afaf08060dc4b343bbe524a8f69bdd2a2f0018f600"},
+    {file = "pillow-11.2.1-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:8f4f3724c068be008c08257207210c138d5f3731af6c155a81c2b09a9eb3a788"},
+    {file = "pillow-11.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:a0a6709b47019dff32e678bc12c63008311b82b9327613f534e496dacaefb71e"},
+    {file = "pillow-11.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:f6b0c664ccb879109ee3ca702a9272d877f4fcd21e5eb63c26422fd6e415365e"},
+    {file = "pillow-11.2.1-cp311-cp311-win32.whl", hash = "sha256:cc5d875d56e49f112b6def6813c4e3d3036d269c008bf8aef72cd08d20ca6df6"},
+    {file = "pillow-11.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:0f5c7eda47bf8e3c8a283762cab94e496ba977a420868cb819159980b6709193"},
+    {file = "pillow-11.2.1-cp311-cp311-win_arm64.whl", hash = "sha256:4d375eb838755f2528ac8cbc926c3e31cc49ca4ad0cf79cff48b20e30634a4a7"},
+    {file = "pillow-11.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:78afba22027b4accef10dbd5eed84425930ba41b3ea0a86fa8d20baaf19d807f"},
+    {file = "pillow-11.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:78092232a4ab376a35d68c4e6d5e00dfd73454bd12b230420025fbe178ee3b0b"},
+    {file = "pillow-11.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:25a5f306095c6780c52e6bbb6109624b95c5b18e40aab1c3041da3e9e0cd3e2d"},
+    {file = "pillow-11.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0c7b29dbd4281923a2bfe562acb734cee96bbb129e96e6972d315ed9f232bef4"},
+    {file = "pillow-11.2.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:3e645b020f3209a0181a418bffe7b4a93171eef6c4ef6cc20980b30bebf17b7d"},
+    {file = "pillow-11.2.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:b2dbea1012ccb784a65349f57bbc93730b96e85b42e9bf7b01ef40443db720b4"},
+    {file = "pillow-11.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:da3104c57bbd72948d75f6a9389e6727d2ab6333c3617f0a89d72d4940aa0443"},
+    {file = "pillow-11.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:598174aef4589af795f66f9caab87ba4ff860ce08cd5bb447c6fc553ffee603c"},
+    {file = "pillow-11.2.1-cp312-cp312-win32.whl", hash = "sha256:1d535df14716e7f8776b9e7fee118576d65572b4aad3ed639be9e4fa88a1cad3"},
+    {file = "pillow-11.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:14e33b28bf17c7a38eede290f77db7c664e4eb01f7869e37fa98a5aa95978941"},
+    {file = "pillow-11.2.1-cp312-cp312-win_arm64.whl", hash = "sha256:21e1470ac9e5739ff880c211fc3af01e3ae505859392bf65458c224d0bf283eb"},
+    {file = "pillow-11.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:fdec757fea0b793056419bca3e9932eb2b0ceec90ef4813ea4c1e072c389eb28"},
+    {file = "pillow-11.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:b0e130705d568e2f43a17bcbe74d90958e8a16263868a12c3e0d9c8162690830"},
+    {file = "pillow-11.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7bdb5e09068332578214cadd9c05e3d64d99e0e87591be22a324bdbc18925be0"},
+    {file = "pillow-11.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d189ba1bebfbc0c0e529159631ec72bb9e9bc041f01ec6d3233d6d82eb823bc1"},
+    {file = "pillow-11.2.1-cp313-cp313-manylinux_2_28_aarch64.whl", hash = "sha256:191955c55d8a712fab8934a42bfefbf99dd0b5875078240943f913bb66d46d9f"},
+    {file = "pillow-11.2.1-cp313-cp313-manylinux_2_28_x86_64.whl", hash = "sha256:ad275964d52e2243430472fc5d2c2334b4fc3ff9c16cb0a19254e25efa03a155"},
+    {file = "pillow-11.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:750f96efe0597382660d8b53e90dd1dd44568a8edb51cb7f9d5d918b80d4de14"},
+    {file = "pillow-11.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:fe15238d3798788d00716637b3d4e7bb6bde18b26e5d08335a96e88564a36b6b"},
+    {file = "pillow-11.2.1-cp313-cp313-win32.whl", hash = "sha256:3fe735ced9a607fee4f481423a9c36701a39719252a9bb251679635f99d0f7d2"},
+    {file = "pillow-11.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:74ee3d7ecb3f3c05459ba95eed5efa28d6092d751ce9bf20e3e253a4e497e691"},
+    {file = "pillow-11.2.1-cp313-cp313-win_arm64.whl", hash = "sha256:5119225c622403afb4b44bad4c1ca6c1f98eed79db8d3bc6e4e160fc6339d66c"},
+    {file = "pillow-11.2.1-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:8ce2e8411c7aaef53e6bb29fe98f28cd4fbd9a1d9be2eeea434331aac0536b22"},
+    {file = "pillow-11.2.1-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:9ee66787e095127116d91dea2143db65c7bb1e232f617aa5957c0d9d2a3f23a7"},
+    {file = "pillow-11.2.1-cp313-cp313t-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9622e3b6c1d8b551b6e6f21873bdcc55762b4b2126633014cea1803368a9aa16"},
+    {file = "pillow-11.2.1-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:63b5dff3a68f371ea06025a1a6966c9a1e1ee452fc8020c2cd0ea41b83e9037b"},
+    {file = "pillow-11.2.1-cp313-cp313t-manylinux_2_28_aarch64.whl", hash = "sha256:31df6e2d3d8fc99f993fd253e97fae451a8db2e7207acf97859732273e108406"},
+    {file = "pillow-11.2.1-cp313-cp313t-manylinux_2_28_x86_64.whl", hash = "sha256:062b7a42d672c45a70fa1f8b43d1d38ff76b63421cbbe7f88146b39e8a558d91"},
+    {file = "pillow-11.2.1-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:4eb92eca2711ef8be42fd3f67533765d9fd043b8c80db204f16c8ea62ee1a751"},
+    {file = "pillow-11.2.1-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:f91ebf30830a48c825590aede79376cb40f110b387c17ee9bd59932c961044f9"},
+    {file = "pillow-11.2.1-cp313-cp313t-win32.whl", hash = "sha256:e0b55f27f584ed623221cfe995c912c61606be8513bfa0e07d2c674b4516d9dd"},
+    {file = "pillow-11.2.1-cp313-cp313t-win_amd64.whl", hash = "sha256:36d6b82164c39ce5482f649b437382c0fb2395eabc1e2b1702a6deb8ad647d6e"},
+    {file = "pillow-11.2.1-cp313-cp313t-win_arm64.whl", hash = "sha256:225c832a13326e34f212d2072982bb1adb210e0cc0b153e688743018c94a2681"},
+    {file = "pillow-11.2.1-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:7491cf8a79b8eb867d419648fff2f83cb0b3891c8b36da92cc7f1931d46108c8"},
+    {file = "pillow-11.2.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:8b02d8f9cb83c52578a0b4beadba92e37d83a4ef11570a8688bbf43f4ca50909"},
+    {file = "pillow-11.2.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:014ca0050c85003620526b0ac1ac53f56fc93af128f7546623cc8e31875ab928"},
+    {file = "pillow-11.2.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3692b68c87096ac6308296d96354eddd25f98740c9d2ab54e1549d6c8aea9d79"},
+    {file = "pillow-11.2.1-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:f781dcb0bc9929adc77bad571b8621ecb1e4cdef86e940fe2e5b5ee24fd33b35"},
+    {file = "pillow-11.2.1-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:2b490402c96f907a166615e9a5afacf2519e28295f157ec3a2bb9bd57de638cb"},
+    {file = "pillow-11.2.1-cp39-cp39-musllinux_1_2_aarch64.whl", hash = "sha256:dd6b20b93b3ccc9c1b597999209e4bc5cf2853f9ee66e3fc9a400a78733ffc9a"},
+    {file = "pillow-11.2.1-cp39-cp39-musllinux_1_2_x86_64.whl", hash = "sha256:4b835d89c08a6c2ee7781b8dd0a30209a8012b5f09c0a665b65b0eb3560b6f36"},
+    {file = "pillow-11.2.1-cp39-cp39-win32.whl", hash = "sha256:b10428b3416d4f9c61f94b494681280be7686bda15898a3a9e08eb66a6d92d67"},
+    {file = "pillow-11.2.1-cp39-cp39-win_amd64.whl", hash = "sha256:6ebce70c3f486acf7591a3d73431fa504a4e18a9b97ff27f5f47b7368e4b9dd1"},
+    {file = "pillow-11.2.1-cp39-cp39-win_arm64.whl", hash = "sha256:c27476257b2fdcd7872d54cfd119b3a9ce4610fb85c8e32b70b42e3680a29a1e"},
+    {file = "pillow-11.2.1-pp310-pypy310_pp73-macosx_10_15_x86_64.whl", hash = "sha256:9b7b0d4fd2635f54ad82785d56bc0d94f147096493a79985d0ab57aedd563156"},
+    {file = "pillow-11.2.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:aa442755e31c64037aa7c1cb186e0b369f8416c567381852c63444dd666fb772"},
+    {file = "pillow-11.2.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:f0d3348c95b766f54b76116d53d4cb171b52992a1027e7ca50c81b43b9d9e363"},
+    {file = "pillow-11.2.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:85d27ea4c889342f7e35f6d56e7e1cb345632ad592e8c51b693d7b7556043ce0"},
+    {file = "pillow-11.2.1-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:bf2c33d6791c598142f00c9c4c7d47f6476731c31081331664eb26d6ab583e01"},
+    {file = "pillow-11.2.1-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e616e7154c37669fc1dfc14584f11e284e05d1c650e1c0f972f281c4ccc53193"},
+    {file = "pillow-11.2.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:39ad2e0f424394e3aebc40168845fee52df1394a4673a6ee512d840d14ab3013"},
+    {file = "pillow-11.2.1-pp311-pypy311_pp73-macosx_10_15_x86_64.whl", hash = "sha256:80f1df8dbe9572b4b7abdfa17eb5d78dd620b1d55d9e25f834efdbee872d3aed"},
+    {file = "pillow-11.2.1-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:ea926cfbc3957090becbcbbb65ad177161a2ff2ad578b5a6ec9bb1e1cd78753c"},
+    {file = "pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:738db0e0941ca0376804d4de6a782c005245264edaa253ffce24e5a15cbdc7bd"},
+    {file = "pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9db98ab6565c69082ec9b0d4e40dd9f6181dab0dd236d26f7a50b8b9bfbd5076"},
+    {file = "pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:036e53f4170e270ddb8797d4c590e6dd14d28e15c7da375c18978045f7e6c37b"},
+    {file = "pillow-11.2.1-pp311-pypy311_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:14f73f7c291279bd65fda51ee87affd7c1e097709f7fdd0188957a16c264601f"},
+    {file = "pillow-11.2.1-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:208653868d5c9ecc2b327f9b9ef34e0e42a4cdd172c2988fd81d62d2bc9bc044"},
+    {file = "pillow-11.2.1.tar.gz", hash = "sha256:a64dd61998416367b7ef979b73d3a85853ba9bec4c2925f74e588879a58716b6"},
 ]

 [package.extras]
-docs = ["furo", "olefile", "sphinx (>=8.1)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinxext-opengraph"]
+docs = ["furo", "olefile", "sphinx (>=8.2)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinxext-opengraph"]
 fpx = ["olefile"]
 mic = ["olefile"]
 test-arrow = ["pyarrow"]
 tests = ["check-manifest", "coverage (>=7.4.2)", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout", "trove-classifiers (>=2024.10.12)"]
 typing = ["typing-extensions ; python_version < \"3.10\""]
 xmp = ["defusedxml"]
@@ -1795,20 +1806,21 @@ files = [

 [[package]]
 name = "pydantic"
-version = "2.10.3"
+version = "2.11.4"
 description = "Data validation using Python type hints"
 optional = false
-python-versions = ">=3.8"
+python-versions = ">=3.9"
 groups = ["main", "dev"]
 files = [
-    {file = "pydantic-2.10.3-py3-none-any.whl", hash = "sha256:be04d85bbc7b65651c5f8e6b9976ed9c6f41782a55524cef079a34a0bb82144d"},
-    {file = "pydantic-2.10.3.tar.gz", hash = "sha256:cb5ac360ce894ceacd69c403187900a02c4b20b693a9dd1d643e1effab9eadf9"},
+    {file = "pydantic-2.11.4-py3-none-any.whl", hash = "sha256:d9615eaa9ac5a063471da949c8fc16376a84afb5024688b3ff885693506764eb"},
+    {file = "pydantic-2.11.4.tar.gz", hash = "sha256:32738d19d63a226a52eed76645a98ee07c1f410ee41d93b4afbfa85ed8111c2d"},
 ]

 [package.dependencies]
 annotated-types = ">=0.6.0"
-pydantic-core = "2.27.1"
+pydantic-core = "2.33.2"
 typing-extensions = ">=4.12.2"
+typing-inspection = ">=0.4.0"

 [package.extras]
 email = ["email-validator (>=2.0.0)"]
@@ -1816,112 +1828,111 @@ timezone = ["tzdata ; python_version >= \"3.9\" and platform_system == \"Windows

 [[package]]
 name = "pydantic-core"
-version = "2.27.1"
+version = "2.33.2"
 description = "Core functionality for Pydantic validation and serialization"
 optional = false
-python-versions = ">=3.8"
+python-versions = ">=3.9"
 groups = ["main", "dev"]
 files = [
-    {file = "pydantic_core-2.27.1-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:71a5e35c75c021aaf400ac048dacc855f000bdfed91614b4a726f7432f1f3d6a"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:f82d068a2d6ecfc6e054726080af69a6764a10015467d7d7b9f66d6ed5afa23b"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:121ceb0e822f79163dd4699e4c54f5ad38b157084d97b34de8b232bcaad70278"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:4603137322c18eaf2e06a4495f426aa8d8388940f3c457e7548145011bb68e05"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a33cd6ad9017bbeaa9ed78a2e0752c5e250eafb9534f308e7a5f7849b0b1bfb4"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:15cc53a3179ba0fcefe1e3ae50beb2784dede4003ad2dfd24f81bba4b23a454f"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:45d9c5eb9273aa50999ad6adc6be5e0ecea7e09dbd0d31bd0c65a55a2592ca08"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:8bf7b66ce12a2ac52d16f776b31d16d91033150266eb796967a7e4621707e4f6"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:655d7dd86f26cb15ce8a431036f66ce0318648f8853d709b4167786ec2fa4807"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:5556470f1a2157031e676f776c2bc20acd34c1990ca5f7e56f1ebf938b9ab57c"},
-    {file = "pydantic_core-2.27.1-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:f69ed81ab24d5a3bd93861c8c4436f54afdf8e8cc421562b0c7504cf3be58206"},
-    {file = "pydantic_core-2.27.1-cp310-none-win32.whl", hash = "sha256:f5a823165e6d04ccea61a9f0576f345f8ce40ed533013580e087bd4d7442b52c"},
-    {file = "pydantic_core-2.27.1-cp310-none-win_amd64.whl", hash = "sha256:57866a76e0b3823e0b56692d1a0bf722bffb324839bb5b7226a7dbd6c9a40b17"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:ac3b20653bdbe160febbea8aa6c079d3df19310d50ac314911ed8cc4eb7f8cb8"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:a5a8e19d7c707c4cadb8c18f5f60c843052ae83c20fa7d44f41594c644a1d330"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:7f7059ca8d64fea7f238994c97d91f75965216bcbe5f695bb44f354893f11d52"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:bed0f8a0eeea9fb72937ba118f9db0cb7e90773462af7962d382445f3005e5a4"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:a3cb37038123447cf0f3ea4c74751f6a9d7afef0eb71aa07bf5f652b5e6a132c"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:84286494f6c5d05243456e04223d5a9417d7f443c3b76065e75001beb26f88de"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:acc07b2cfc5b835444b44a9956846b578d27beeacd4b52e45489e93276241025"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:4fefee876e07a6e9aad7a8c8c9f85b0cdbe7df52b8a9552307b09050f7512c7e"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:258c57abf1188926c774a4c94dd29237e77eda19462e5bb901d88adcab6af919"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:35c14ac45fcfdf7167ca76cc80b2001205a8d5d16d80524e13508371fb8cdd9c"},
-    {file = "pydantic_core-2.27.1-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d1b26e1dff225c31897696cab7d4f0a315d4c0d9e8666dbffdb28216f3b17fdc"},
-    {file = "pydantic_core-2.27.1-cp311-none-win32.whl", hash = "sha256:2cdf7d86886bc6982354862204ae3b2f7f96f21a3eb0ba5ca0ac42c7b38598b9"},
-    {file = "pydantic_core-2.27.1-cp311-none-win_amd64.whl", hash = "sha256:3af385b0cee8df3746c3f406f38bcbfdc9041b5c2d5ce3e5fc6637256e60bbc5"},
-    {file = "pydantic_core-2.27.1-cp311-none-win_arm64.whl", hash = "sha256:81f2ec23ddc1b476ff96563f2e8d723830b06dceae348ce02914a37cb4e74b89"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:9cbd94fc661d2bab2bc702cddd2d3370bbdcc4cd0f8f57488a81bcce90c7a54f"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:5f8c4718cd44ec1580e180cb739713ecda2bdee1341084c1467802a417fe0f02"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:15aae984e46de8d376df515f00450d1522077254ef6b7ce189b38ecee7c9677c"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:1ba5e3963344ff25fc8c40da90f44b0afca8cfd89d12964feb79ac1411a260ac"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:992cea5f4f3b29d6b4f7f1726ed8ee46c8331c6b4eed6db5b40134c6fe1768bb"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0325336f348dbee6550d129b1627cb8f5351a9dc91aad141ffb96d4937bd9529"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7597c07fbd11515f654d6ece3d0e4e5093edc30a436c63142d9a4b8e22f19c35"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:3bbd5d8cc692616d5ef6fbbbd50dbec142c7e6ad9beb66b78a96e9c16729b089"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:dc61505e73298a84a2f317255fcc72b710b72980f3a1f670447a21efc88f8381"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:e1f735dc43da318cad19b4173dd1ffce1d84aafd6c9b782b3abc04a0d5a6f5bb"},
-    {file = "pydantic_core-2.27.1-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:f4e5658dbffe8843a0f12366a4c2d1c316dbe09bb4dfbdc9d2d9cd6031de8aae"},
-    {file = "pydantic_core-2.27.1-cp312-none-win32.whl", hash = "sha256:672ebbe820bb37988c4d136eca2652ee114992d5d41c7e4858cdd90ea94ffe5c"},
-    {file = "pydantic_core-2.27.1-cp312-none-win_amd64.whl", hash = "sha256:66ff044fd0bb1768688aecbe28b6190f6e799349221fb0de0e6f4048eca14c16"},
-    {file = "pydantic_core-2.27.1-cp312-none-win_arm64.whl", hash = "sha256:9a3b0793b1bbfd4146304e23d90045f2a9b5fd5823aa682665fbdaf2a6c28f3e"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:f216dbce0e60e4d03e0c4353c7023b202d95cbaeff12e5fd2e82ea0a66905073"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:a2e02889071850bbfd36b56fd6bc98945e23670773bc7a76657e90e6b6603c08"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:42b0e23f119b2b456d07ca91b307ae167cc3f6c846a7b169fca5326e32fdc6cf"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:764be71193f87d460a03f1f7385a82e226639732214b402f9aa61f0d025f0737"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1c00666a3bd2f84920a4e94434f5974d7bbc57e461318d6bb34ce9cdbbc1f6b2"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3ccaa88b24eebc0f849ce0a4d09e8a408ec5a94afff395eb69baf868f5183107"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c65af9088ac534313e1963443d0ec360bb2b9cba6c2909478d22c2e363d98a51"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:206b5cf6f0c513baffaeae7bd817717140770c74528f3e4c3e1cec7871ddd61a"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:062f60e512fc7fff8b8a9d680ff0ddaaef0193dba9fa83e679c0c5f5fbd018bc"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:a0697803ed7d4af5e4c1adf1670af078f8fcab7a86350e969f454daf598c4960"},
-    {file = "pydantic_core-2.27.1-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:58ca98a950171f3151c603aeea9303ef6c235f692fe555e883591103da709b23"},
-    {file = "pydantic_core-2.27.1-cp313-none-win32.whl", hash = "sha256:8065914ff79f7eab1599bd80406681f0ad08f8e47c880f17b416c9f8f7a26d05"},
-    {file = "pydantic_core-2.27.1-cp313-none-win_amd64.whl", hash = "sha256:ba630d5e3db74c79300d9a5bdaaf6200172b107f263c98a0539eeecb857b2337"},
-    {file = "pydantic_core-2.27.1-cp313-none-win_arm64.whl", hash = "sha256:45cf8588c066860b623cd11c4ba687f8d7175d5f7ef65f7129df8a394c502de5"},
-    {file = "pydantic_core-2.27.1-cp38-cp38-macosx_10_12_x86_64.whl", hash = "sha256:5897bec80a09b4084aee23f9b73a9477a46c3304ad1d2d07acca19723fb1de62"},
-    {file = "pydantic_core-2.27.1-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:d0165ab2914379bd56908c02294ed8405c252250668ebcb438a55494c69f44ab"},
-    {file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:6b9af86e1d8e4cfc82c2022bfaa6f459381a50b94a29e95dcdda8442d6d83864"},
-    {file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:5f6c8a66741c5f5447e047ab0ba7a1c61d1e95580d64bce852e3df1f895c4067"},
-    {file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:9a42d6a8156ff78981f8aa56eb6394114e0dedb217cf8b729f438f643608cbcd"},
-    {file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:64c65f40b4cd8b0e049a8edde07e38b476da7e3aaebe63287c899d2cff253fa5"},
|
||||
{file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdcf339322a3fae5cbd504edcefddd5a50d9ee00d968696846f089b4432cf78"},
|
||||
{file = "pydantic_core-2.27.1-cp38-cp38-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bf99c8404f008750c846cb4ac4667b798a9f7de673ff719d705d9b2d6de49c5f"},
|
||||
{file = "pydantic_core-2.27.1-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:8f1edcea27918d748c7e5e4d917297b2a0ab80cad10f86631e488b7cddf76a36"},
|
||||
{file = "pydantic_core-2.27.1-cp38-cp38-musllinux_1_1_armv7l.whl", hash = "sha256:159cac0a3d096f79ab6a44d77a961917219707e2a130739c64d4dd46281f5c2a"},
|
||||
{file = "pydantic_core-2.27.1-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:029d9757eb621cc6e1848fa0b0310310de7301057f623985698ed7ebb014391b"},
|
||||
{file = "pydantic_core-2.27.1-cp38-none-win32.whl", hash = "sha256:a28af0695a45f7060e6f9b7092558a928a28553366519f64083c63a44f70e618"},
|
||||
{file = "pydantic_core-2.27.1-cp38-none-win_amd64.whl", hash = "sha256:2d4567c850905d5eaaed2f7a404e61012a51caf288292e016360aa2b96ff38d4"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:e9386266798d64eeb19dd3677051f5705bf873e98e15897ddb7d76f477131967"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:4228b5b646caa73f119b1ae756216b59cc6e2267201c27d3912b592c5e323b60"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0b3dfe500de26c52abe0477dde16192ac39c98f05bf2d80e76102d394bd13854"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:aee66be87825cdf72ac64cb03ad4c15ffef4143dbf5c113f64a5ff4f81477bf9"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:3b748c44bb9f53031c8cbc99a8a061bc181c1000c60a30f55393b6e9c45cc5bd"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:5ca038c7f6a0afd0b2448941b6ef9d5e1949e999f9e5517692eb6da58e9d44be"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6e0bd57539da59a3e4671b90a502da9a28c72322a4f17866ba3ac63a82c4498e"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:ac6c2c45c847bbf8f91930d88716a0fb924b51e0c6dad329b793d670ec5db792"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:b94d4ba43739bbe8b0ce4262bcc3b7b9f31459ad120fb595627eaeb7f9b9ca01"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:00e6424f4b26fe82d44577b4c842d7df97c20be6439e8e685d0d715feceb9fb9"},
|
||||
{file = "pydantic_core-2.27.1-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:38de0a70160dd97540335b7ad3a74571b24f1dc3ed33f815f0880682e6880131"},
|
||||
{file = "pydantic_core-2.27.1-cp39-none-win32.whl", hash = "sha256:7ccebf51efc61634f6c2344da73e366c75e735960b5654b63d7e6f69a5885fa3"},
|
||||
{file = "pydantic_core-2.27.1-cp39-none-win_amd64.whl", hash = "sha256:a57847b090d7892f123726202b7daa20df6694cbd583b67a592e856bff603d6c"},
|
||||
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3fa80ac2bd5856580e242dbc202db873c60a01b20309c8319b5c5986fbe53ce6"},
|
||||
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d950caa237bb1954f1b8c9227b5065ba6875ac9771bb8ec790d956a699b78676"},
|
||||
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0e4216e64d203e39c62df627aa882f02a2438d18a5f21d7f721621f7a5d3611d"},
|
||||
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:02a3d637bd387c41d46b002f0e49c52642281edacd2740e5a42f7017feea3f2c"},
|
||||
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:161c27ccce13b6b0c8689418da3885d3220ed2eae2ea5e9b2f7f3d48f1d52c27"},
|
||||
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:19910754e4cc9c63bc1c7f6d73aa1cfee82f42007e407c0f413695c2f7ed777f"},
|
||||
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:e173486019cc283dc9778315fa29a363579372fe67045e971e89b6365cc035ed"},
|
||||
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:af52d26579b308921b73b956153066481f064875140ccd1dfd4e77db89dbb12f"},
|
||||
{file = "pydantic_core-2.27.1-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:981fb88516bd1ae8b0cbbd2034678a39dedc98752f264ac9bc5839d3923fa04c"},
|
||||
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5fde892e6c697ce3e30c61b239330fc5d569a71fefd4eb6512fc6caec9dd9e2f"},
|
||||
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:816f5aa087094099fff7edabb5e01cc370eb21aa1a1d44fe2d2aefdfb5599b31"},
|
||||
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:9c10c309e18e443ddb108f0ef64e8729363adbfd92d6d57beec680f6261556f3"},
|
||||
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:98476c98b02c8e9b2eec76ac4156fd006628b1b2d0ef27e548ffa978393fd154"},
|
||||
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c3027001c28434e7ca5a6e1e527487051136aa81803ac812be51802150d880dd"},
|
||||
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:7699b1df36a48169cdebda7ab5a2bac265204003f153b4bd17276153d997670a"},
|
||||
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1c39b07d90be6b48968ddc8c19e7585052088fd7ec8d568bb31ff64c70ae3c97"},
|
||||
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:46ccfe3032b3915586e469d4972973f893c0a2bb65669194a5bdea9bacc088c2"},
|
||||
{file = "pydantic_core-2.27.1-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:62ba45e21cf6571d7f716d903b5b7b6d2617e2d5d67c0923dc47b9d41369f840"},
|
||||
{file = "pydantic_core-2.27.1.tar.gz", hash = "sha256:62a763352879b84aa31058fc931884055fd75089cccbd9d58bb6afd01141b235"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-macosx_10_12_x86_64.whl", hash = "sha256:2b3d326aaef0c0399d9afffeb6367d5e26ddc24d351dbc9c636840ac355dc5d8"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:0e5b2671f05ba48b94cb90ce55d8bdcaaedb8ba00cc5359f6810fc918713983d"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0069c9acc3f3981b9ff4cdfaf088e98d83440a4c7ea1bc07460af3d4dc22e72d"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:d53b22f2032c42eaaf025f7c40c2e3b94568ae077a606f006d206a463bc69572"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:0405262705a123b7ce9f0b92f123334d67b70fd1f20a9372b907ce1080c7ba02"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:4b25d91e288e2c4e0662b8038a28c6a07eaac3e196cfc4ff69de4ea3db992a1b"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6bdfe4b3789761f3bcb4b1ddf33355a71079858958e3a552f16d5af19768fef2"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:efec8db3266b76ef9607c2c4c419bdb06bf335ae433b80816089ea7585816f6a"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:031c57d67ca86902726e0fae2214ce6770bbe2f710dc33063187a68744a5ecac"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_armv7l.whl", hash = "sha256:f8de619080e944347f5f20de29a975c2d815d9ddd8be9b9b7268e2e3ef68605a"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:73662edf539e72a9440129f231ed3757faab89630d291b784ca99237fb94db2b"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-win32.whl", hash = "sha256:0a39979dcbb70998b0e505fb1556a1d550a0781463ce84ebf915ba293ccb7e22"},
|
||||
{file = "pydantic_core-2.33.2-cp310-cp310-win_amd64.whl", hash = "sha256:b0379a2b24882fef529ec3b4987cb5d003b9cda32256024e6fe1586ac45fc640"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-macosx_10_12_x86_64.whl", hash = "sha256:4c5b0a576fb381edd6d27f0a85915c6daf2f8138dc5c267a57c08a62900758c7"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:e799c050df38a639db758c617ec771fd8fb7a5f8eaaa4b27b101f266b216a246"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:dc46a01bf8d62f227d5ecee74178ffc448ff4e5197c756331f71efcc66dc980f"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:a144d4f717285c6d9234a66778059f33a89096dfb9b39117663fd8413d582dcc"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:73cf6373c21bc80b2e0dc88444f41ae60b2f070ed02095754eb5a01df12256de"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:3dc625f4aa79713512d1976fe9f0bc99f706a9dee21dfd1810b4bbbf228d0e8a"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:881b21b5549499972441da4758d662aeea93f1923f953e9cbaff14b8b9565aef"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:bdc25f3681f7b78572699569514036afe3c243bc3059d3942624e936ec93450e"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:fe5b32187cbc0c862ee201ad66c30cf218e5ed468ec8dc1cf49dec66e160cc4d"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_armv7l.whl", hash = "sha256:bc7aee6f634a6f4a95676fcb5d6559a2c2a390330098dba5e5a5f28a2e4ada30"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:235f45e5dbcccf6bd99f9f472858849f73d11120d76ea8707115415f8e5ebebf"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-win32.whl", hash = "sha256:6368900c2d3ef09b69cb0b913f9f8263b03786e5b2a387706c5afb66800efd51"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-win_amd64.whl", hash = "sha256:1e063337ef9e9820c77acc768546325ebe04ee38b08703244c1309cccc4f1bab"},
|
||||
{file = "pydantic_core-2.33.2-cp311-cp311-win_arm64.whl", hash = "sha256:6b99022f1d19bc32a4c2a0d544fc9a76e3be90f0b3f4af413f87d38749300e65"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-macosx_10_12_x86_64.whl", hash = "sha256:a7ec89dc587667f22b6a0b6579c249fca9026ce7c333fc142ba42411fa243cdc"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3c6db6e52c6d70aa0d00d45cdb9b40f0433b96380071ea80b09277dba021ddf7"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e61206137cbc65e6d5256e1166f88331d3b6238e082d9f74613b9b765fb9025"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:eb8c529b2819c37140eb51b914153063d27ed88e3bdc31b71198a198e921e011"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:c52b02ad8b4e2cf14ca7b3d918f3eb0ee91e63b3167c32591e57c4317e134f8f"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:96081f1605125ba0855dfda83f6f3df5ec90c61195421ba72223de35ccfb2f88"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f57a69461af2a5fa6e6bbd7a5f60d3b7e6cebb687f55106933188e79ad155c1"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:572c7e6c8bb4774d2ac88929e3d1f12bc45714ae5ee6d9a788a9fb35e60bb04b"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:db4b41f9bd95fbe5acd76d89920336ba96f03e149097365afe1cb092fceb89a1"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_armv7l.whl", hash = "sha256:fa854f5cf7e33842a892e5c73f45327760bc7bc516339fda888c75ae60edaeb6"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:5f483cfb75ff703095c59e365360cb73e00185e01aaea067cd19acffd2ab20ea"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-win32.whl", hash = "sha256:9cb1da0f5a471435a7bc7e439b8a728e8b61e59784b2af70d7c169f8dd8ae290"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-win_amd64.whl", hash = "sha256:f941635f2a3d96b2973e867144fde513665c87f13fe0e193c158ac51bfaaa7b2"},
|
||||
{file = "pydantic_core-2.33.2-cp312-cp312-win_arm64.whl", hash = "sha256:cca3868ddfaccfbc4bfb1d608e2ccaaebe0ae628e1416aeb9c4d88c001bb45ab"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-macosx_10_12_x86_64.whl", hash = "sha256:1082dd3e2d7109ad8b7da48e1d4710c8d06c253cbc4a27c1cff4fbcaa97a9e3f"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:f517ca031dfc037a9c07e748cefd8d96235088b83b4f4ba8939105d20fa1dcd6"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:0a9f2c9dd19656823cb8250b0724ee9c60a82f3cdf68a080979d13092a3b0fef"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:2b0a451c263b01acebe51895bfb0e1cc842a5c666efe06cdf13846c7418caa9a"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:1ea40a64d23faa25e62a70ad163571c0b342b8bf66d5fa612ac0dec4f069d916"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0fb2d542b4d66f9470e8065c5469ec676978d625a8b7a363f07d9a501a9cb36a"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9fdac5d6ffa1b5a83bca06ffe7583f5576555e6c8b3a91fbd25ea7780f825f7d"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:04a1a413977ab517154eebb2d326da71638271477d6ad87a769102f7c2488c56"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_aarch64.whl", hash = "sha256:c8e7af2f4e0194c22b5b37205bfb293d166a7344a5b0d0eaccebc376546d77d5"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_armv7l.whl", hash = "sha256:5c92edd15cd58b3c2d34873597a1e20f13094f59cf88068adb18947df5455b4e"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-musllinux_1_1_x86_64.whl", hash = "sha256:65132b7b4a1c0beded5e057324b7e16e10910c106d43675d9bd87d4f38dde162"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-win32.whl", hash = "sha256:52fb90784e0a242bb96ec53f42196a17278855b0f31ac7c3cc6f5c1ec4811849"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-win_amd64.whl", hash = "sha256:c083a3bdd5a93dfe480f1125926afcdbf2917ae714bdb80b36d34318b2bec5d9"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313-win_arm64.whl", hash = "sha256:e80b087132752f6b3d714f041ccf74403799d3b23a72722ea2e6ba2e892555b9"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:61c18fba8e5e9db3ab908620af374db0ac1baa69f0f32df4f61ae23f15e586ac"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313t-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:95237e53bb015f67b63c91af7518a62a8660376a6a0db19b89acc77a4d6199f5"},
|
||||
{file = "pydantic_core-2.33.2-cp313-cp313t-win_amd64.whl", hash = "sha256:c2fc0a768ef76c15ab9238afa6da7f69895bb5d1ee83aeea2e3509af4472d0b9"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-macosx_10_12_x86_64.whl", hash = "sha256:a2b911a5b90e0374d03813674bf0a5fbbb7741570dcd4b4e85a2e48d17def29d"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:6fa6dfc3e4d1f734a34710f391ae822e0a8eb8559a85c6979e14e65ee6ba2954"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c54c939ee22dc8e2d545da79fc5381f1c020d6d3141d3bd747eab59164dc89fb"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_armv7l.manylinux2014_armv7l.whl", hash = "sha256:53a57d2ed685940a504248187d5685e49eb5eef0f696853647bf37c418c538f7"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_ppc64le.manylinux2014_ppc64le.whl", hash = "sha256:09fb9dd6571aacd023fe6aaca316bd01cf60ab27240d7eb39ebd66a3a15293b4"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_s390x.manylinux2014_s390x.whl", hash = "sha256:0e6116757f7959a712db11f3e9c0a99ade00a5bbedae83cb801985aa154f071b"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8d55ab81c57b8ff8548c3e4947f119551253f4e3787a7bbc0b6b3ca47498a9d3"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:c20c462aa4434b33a2661701b861604913f912254e441ab8d78d30485736115a"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:44857c3227d3fb5e753d5fe4a3420d6376fa594b07b621e220cd93703fe21782"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_armv7l.whl", hash = "sha256:eb9b459ca4df0e5c87deb59d37377461a538852765293f9e6ee834f0435a93b9"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:9fcd347d2cc5c23b06de6d3b7b8275be558a0c90549495c699e379a80bf8379e"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-win32.whl", hash = "sha256:83aa99b1285bc8f038941ddf598501a86f1536789740991d7d8756e34f1e74d9"},
|
||||
{file = "pydantic_core-2.33.2-cp39-cp39-win_amd64.whl", hash = "sha256:f481959862f57f29601ccced557cc2e817bce7533ab8e01a797a48b49c9692b3"},
|
||||
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:5c4aa4e82353f65e548c476b37e64189783aa5384903bfea4f41580f255fddfa"},
|
||||
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-macosx_11_0_arm64.whl", hash = "sha256:d946c8bf0d5c24bf4fe333af284c59a19358aa3ec18cb3dc4370080da1e8ad29"},
|
||||
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:87b31b6846e361ef83fedb187bb5b4372d0da3f7e28d85415efa92d6125d6e6d"},
|
||||
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aa9d91b338f2df0508606f7009fde642391425189bba6d8c653afd80fd6bb64e"},
|
||||
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:2058a32994f1fde4ca0480ab9d1e75a0e8c87c22b53a3ae66554f9af78f2fe8c"},
|
||||
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:0e03262ab796d986f978f79c943fc5f620381be7287148b8010b4097f79a39ec"},
|
||||
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:1a8695a8d00c73e50bff9dfda4d540b7dee29ff9b8053e38380426a85ef10052"},
|
||||
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:fa754d1850735a0b0e03bcffd9d4b4343eb417e47196e4485d9cca326073a42c"},
|
||||
{file = "pydantic_core-2.33.2-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:a11c8d26a50bfab49002947d3d237abe4d9e4b5bdc8846a63537b6488e197808"},
|
||||
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_10_12_x86_64.whl", hash = "sha256:dd14041875d09cc0f9308e37a6f8b65f5585cf2598a53aa0123df8b129d481f8"},
|
||||
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-macosx_11_0_arm64.whl", hash = "sha256:d87c561733f66531dced0da6e864f44ebf89a8fba55f31407b00c2f7f9449593"},
|
||||
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:2f82865531efd18d6e07a04a17331af02cb7a651583c418df8266f17a63c6612"},
|
||||
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2bfb5112df54209d820d7bf9317c7a6c9025ea52e49f46b6a2060104bba37de7"},
|
||||
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:64632ff9d614e5eecfb495796ad51b0ed98c453e447a76bcbeeb69615079fc7e"},
|
||||
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:f889f7a40498cc077332c7ab6b4608d296d852182211787d4f3ee377aaae66e8"},
|
||||
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:de4b83bb311557e439b9e186f733f6c645b9417c84e2eb8203f3f820a4b988bf"},
|
||||
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:82f68293f055f51b51ea42fafc74b6aad03e70e191799430b90c13d643059ebb"},
|
||||
{file = "pydantic_core-2.33.2-pp311-pypy311_pp73-win_amd64.whl", hash = "sha256:329467cecfb529c925cf2bbd4d60d2c509bc2fb52a20c1045bf09bb70971a9c1"},
|
||||
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:87acbfcf8e90ca885206e98359d7dca4bcbb35abdc0ff66672a293e1d7a19101"},
|
||||
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-macosx_11_0_arm64.whl", hash = "sha256:7f92c15cd1e97d4b12acd1cc9004fa092578acfa57b67ad5e43a197175d01a64"},
|
||||
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d3f26877a748dc4251cfcfda9dfb5f13fcb034f5308388066bcfe9031b63ae7d"},
|
||||
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:dac89aea9af8cd672fa7b510e7b8c33b0bba9a43186680550ccf23020f32d535"},
|
||||
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-manylinux_2_5_i686.manylinux1_i686.whl", hash = "sha256:970919794d126ba8645f3837ab6046fb4e72bbc057b3709144066204c19a455d"},
|
||||
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_aarch64.whl", hash = "sha256:3eb3fe62804e8f859c49ed20a8451342de53ed764150cb14ca71357c765dc2a6"},
|
||||
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_armv7l.whl", hash = "sha256:3abcd9392a36025e3bd55f9bd38d908bd17962cc49bc6da8e7e96285336e2bca"},
|
||||
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-musllinux_1_1_x86_64.whl", hash = "sha256:3a1c81334778f9e3af2f8aeb7a960736e5cab1dfebfb26aabca09afd2906c039"},
|
||||
{file = "pydantic_core-2.33.2-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:2807668ba86cb38c6817ad9bc66215ab8584d1d304030ce4f0887336f28a5e27"},
|
||||
{file = "pydantic_core-2.33.2.tar.gz", hash = "sha256:7cb8bc3605c29176e1b105350d2e6474142d7c1bd1d9327c4a9bdb46bf827acc"},
|
||||
]

[package.dependencies]

@@ -2886,15 +2897,15 @@ windows-platform = ["appdirs (>=1.4.0)", "appdirs (>=1.4.0)", "bcrypt (>=3.1.3)"

[[package]]
name = "txredisapi"
version = "1.4.10"
version = "1.4.11"
description = "non-blocking redis client for python"
optional = true
python-versions = "*"
groups = ["main"]
markers = "extra == \"all\" or extra == \"redis\""
files = [
{file = "txredisapi-1.4.10-py3-none-any.whl", hash = "sha256:0a6ea77f27f8cf092f907654f08302a97b48fa35f24e0ad99dfb74115f018161"},
{file = "txredisapi-1.4.10.tar.gz", hash = "sha256:7609a6af6ff4619a3189c0adfb86aeda789afba69eb59fc1e19ac0199e725395"},
{file = "txredisapi-1.4.11-py3-none-any.whl", hash = "sha256:ac64d7a9342b58edca13ef267d4fa7637c1aa63f8595e066801c1e8b56b22d0b"},
{file = "txredisapi-1.4.11.tar.gz", hash = "sha256:3eb1af99aefdefb59eb877b1dd08861efad60915e30ad5bf3d5bf6c5cedcdbc6"},
]

[package.dependencies]

@@ -3085,6 +3096,21 @@ files = [
{file = "typing_extensions-4.12.2.tar.gz", hash = "sha256:1a7ead55c7e559dd4dee8856e3a88b41225abfe1ce8df57b7c13915fe121ffb8"},
]

[[package]]
name = "typing-inspection"
version = "0.4.0"
description = "Runtime typing introspection tools"
optional = false
python-versions = ">=3.9"
groups = ["main", "dev"]
files = [
{file = "typing_inspection-0.4.0-py3-none-any.whl", hash = "sha256:50e72559fcd2a6367a19f7a7e610e6afcb9fac940c650290eed893d61386832f"},
{file = "typing_inspection-0.4.0.tar.gz", hash = "sha256:9765c87de36671694a67904bf2c96e395be9c6439bb6c87b5142569dcdd65122"},
]

[package.dependencies]
typing-extensions = ">=4.12.0"

[[package]]
name = "unpaddedbase64"
version = "2.1.0"

@@ -97,7 +97,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
version = "1.128.0"
version = "1.130.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"

@@ -39,7 +39,6 @@ from synapse.api.errors import (
HttpResponseException,
InvalidClientTokenError,
OAuthInsufficientScopeError,
StoreError,
SynapseError,
UnrecognizedRequestError,
)
@@ -512,7 +511,7 @@ class MSC3861DelegatedAuth(BaseAuth):
raise InvalidClientTokenError("No scope in token granting user rights")

# Match via the sub claim
sub: Optional[str] = introspection_result.get_sub()
sub = introspection_result.get_sub()
if sub is None:
raise InvalidClientTokenError(
"Invalid sub claim in the introspection result"
@@ -525,29 +524,20 @@
# If we could not find a user via the external_id, it either does not exist,
# or the external_id was never recorded

# TODO: claim mapping should be configurable
username: Optional[str] = introspection_result.get_username()
if username is None or not isinstance(username, str):
username = introspection_result.get_username()
if username is None:
raise AuthError(
500,
"Invalid username claim in the introspection result",
)
user_id = UserID(username, self._hostname)

# First try to find a user from the username claim
# Try to find a user from the username claim
user_info = await self.store.get_user_by_id(user_id=user_id.to_string())
if user_info is None:
# If the user does not exist, we should create it on the fly
# TODO: we could use SCIM to provision users ahead of time and listen
# for SCIM SET events if those ever become standard:
# https://datatracker.ietf.org/doc/html/draft-hunt-scim-notify-00

# TODO: claim mapping should be configurable
# If present, use the name claim as the displayname
name: Optional[str] = introspection_result.get_name()

await self.store.register_user(
user_id=user_id.to_string(), create_profile_with_displayname=name
raise AuthError(
500,
"User not found",
)

# And record the sub as external_id
@@ -587,17 +577,10 @@
"Invalid device ID in introspection result",
)

# Create the device on the fly if it does not exist
try:
await self.store.get_device(
user_id=user_id.to_string(), device_id=device_id
)
except StoreError:
await self.store.store_device(
user_id=user_id.to_string(),
device_id=device_id,
initial_device_display_name="OIDC-native client",
)
# Make sure the device exists
await self.store.get_device(
user_id=user_id.to_string(), device_id=device_id
)

# TODO: there are a few things missing in the requester here, which still need
# to be figured out, like:

@@ -70,6 +70,7 @@ class Codes(str, Enum):
THREEPID_NOT_FOUND = "M_THREEPID_NOT_FOUND"
THREEPID_DENIED = "M_THREEPID_DENIED"
INVALID_USERNAME = "M_INVALID_USERNAME"
THREEPID_MEDIUM_NOT_SUPPORTED = "M_THREEPID_MEDIUM_NOT_SUPPORTED"
SERVER_NOT_TRUSTED = "M_SERVER_NOT_TRUSTED"
CONSENT_NOT_GIVEN = "M_CONSENT_NOT_GIVEN"
CANNOT_LEAVE_SERVER_NOTICE_ROOM = "M_CANNOT_LEAVE_SERVER_NOTICE_ROOM"

@@ -21,7 +21,7 @@
#
import logging
import sys
from typing import Dict, List, cast
from typing import Dict, List

from twisted.web.resource import Resource

@@ -52,7 +52,6 @@ from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.rest import ClientRestResource, admin
from synapse.rest.admin import AdminRestResource, register_servlets_for_media_repo
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyResource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
@@ -176,8 +175,13 @@ class GenericWorkerServer(HomeServer):
def _listen_http(self, listener_config: ListenerConfig) -> None:
assert listener_config.http_options is not None

# We always include a health resource.
resources: Dict[str, Resource] = {"/health": HealthResource()}
# We always include an admin resource that we populate with servlets as needed
admin_resource = JsonResource(self, canonical_json=False)
resources: Dict[str, Resource] = {
# We always include a health resource.
"/health": HealthResource(),
"/_synapse/admin": admin_resource,
}

for res in listener_config.http_options.resources:
for name in res.names:
@@ -190,11 +194,8 @@

resources.update(build_synapse_client_resource_tree(self))
resources["/.well-known"] = well_known_resource(self)
admin_res = resources.get("/_synapse/admin")
if admin_res is not None:
admin.register_servlets(self, cast(JsonResource, admin_res))
else:
resources["/_synapse/admin"] = AdminRestResource(self)
admin.register_servlets(self, admin_resource)

elif name == "federation":
resources[FEDERATION_PREFIX] = TransportLayerServer(self)
elif name == "media":
@@ -203,15 +204,7 @@

# We need to serve the admin servlets for media on the
# worker.
admin_res = resources.get("/_synapse/admin")
if admin_res is not None:
register_servlets_for_media_repo(
self, cast(JsonResource, admin_res)
)
else:
admin_resource = JsonResource(self, canonical_json=False)
register_servlets_for_media_repo(self, admin_resource)
resources["/_synapse/admin"] = admin_resource
admin.register_servlets_for_media_repo(self, admin_resource)

resources.update(
{

@@ -54,6 +54,7 @@ from synapse.config.server import ListenerConfig, TCPListenerConfig
from synapse.federation.transport.server import TransportLayerServer
from synapse.http.additional_resource import AdditionalResource
from synapse.http.server import (
JsonResource,
OptionsResource,
RootOptionsRedirectResource,
StaticResource,
@@ -61,8 +62,7 @@ from synapse.http.server import (
from synapse.logging.context import LoggingContext
from synapse.metrics import METRICS_PREFIX, MetricsResource, RegistryProxy
from synapse.replication.http import REPLICATION_PREFIX, ReplicationRestResource
from synapse.rest import ClientRestResource
from synapse.rest.admin import AdminRestResource
from synapse.rest import ClientRestResource, admin
from synapse.rest.health import HealthResource
from synapse.rest.key.v2 import KeyResource
from synapse.rest.synapse.client import build_synapse_client_resource_tree
@@ -180,11 +180,14 @@ class SynapseHomeServer(HomeServer):
if compress:
client_resource = gz_wrap(client_resource)

admin_resource = JsonResource(self, canonical_json=False)
admin.register_servlets(self, admin_resource)

resources.update(
{
CLIENT_API_PREFIX: client_resource,
"/.well-known": well_known_resource(self),
"/_synapse/admin": AdminRestResource(self),
"/_synapse/admin": admin_resource,
**build_synapse_client_resource_tree(self),
}
)

@@ -38,6 +38,9 @@ class UserDirectoryConfig(Config):
self.user_directory_search_all_users = user_directory_config.get(
"search_all_users", False
)
self.user_directory_exclude_remote_users = user_directory_config.get(
"exclude_remote_users", False
)
self.user_directory_search_prefer_local_users = user_directory_config.get(
"prefer_local_users", False
)

@@ -701,6 +701,12 @@ class FederationServer(FederationBase):
pdu = event_from_pdu_json(content, room_version)
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
if await self._spam_checker_module_callbacks.should_drop_federated_event(pdu):
logger.info(
"Federated event contains spam, dropping %s",
pdu.event_id,
)
raise SynapseError(403, Codes.FORBIDDEN)
try:
pdu = await self._check_sigs_and_hash(room_version, pdu)
except InvalidEventSignatureError as e:
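The hunk above consults the spam-checker module callbacks before signature checking, so third-party modules can drop a PDU outright. As a rough, hypothetical sketch of a module wired into that hook (the module class and blocklist config are invented; only the `should_drop_federated_event` callback name comes from the diff):

import synapse.module_api

class ExampleFederationFilter:
    # Hypothetical module: drops federated events sent from blocked servers.
    def __init__(self, config: dict, api: "synapse.module_api.ModuleApi"):
        self._blocked_servers = set(config.get("blocked_servers", []))
        # Register the callback that FederationServer awaits in the hunk above.
        api.register_spam_checker_callbacks(
            should_drop_federated_event=self.should_drop_federated_event
        )

    async def should_drop_federated_event(self, event) -> bool:
        # Returning True causes Synapse to silently drop the event.
        sender_server = event.sender.split(":", 1)[1]
        return sender_server in self._blocked_servers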

@@ -586,6 +586,24 @@ class OidcProvider:
or self._user_profile_method == "userinfo_endpoint"
)

@property
def _uses_access_token(self) -> bool:
"""Return True if the `access_token` will be used during the login process.

This is useful to determine whether the access token returned by the
identity provider, and any related metadata (such as the `at_hash`
field in the ID token), should be validated.
"""
# Currently, Synapse only uses the access_token to fetch user metadata
# from the userinfo endpoint. Therefore we only have a single criterion
# to check right now, but this may change in the future and this function
# should be updated if more usages are introduced.
#
# For example, if we start to use the access_token given to us by the
# IdP for more things, such as accessing Resource Server APIs.
return self._uses_userinfo

@property
def issuer(self) -> str:
"""The issuer identifying this provider."""
@@ -957,9 +975,16 @@
"nonce": nonce,
"client_id": self._client_auth.client_id,
}
if "access_token" in token:
if self._uses_access_token and "access_token" in token:
# If we got an `access_token`, there should be an `at_hash` claim
# in the `id_token` that we can check against.
# in the `id_token` that we can check against. Setting this
# instructs authlib to check the value of `at_hash` in the
# ID token.
#
# We only need to verify the access token if we actually make
# use of it, which currently only happens when we need to fetch
# the user's information from the userinfo_endpoint. Thus, this
# check is also gated on self._uses_userinfo.
claims_params["access_token"] = token["access_token"]

claims_options = {"iss": {"values": [metadata["issuer"]]}}
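For context, the `at_hash` check that authlib performs here is specified in OpenID Connect Core section 3.1.3.6: hash the access token with the hash function matching the ID token's signing algorithm, then base64url-encode the left-most half of the digest without padding. A self-contained sketch of that computation (not Synapse or authlib code):

import base64
import hashlib

def compute_at_hash(access_token: str, id_token_alg: str = "RS256") -> str:
    # Pick SHA-256/384/512 based on the ID token's signing algorithm suffix.
    hash_fns = {"256": hashlib.sha256, "384": hashlib.sha384, "512": hashlib.sha512}
    digest = hash_fns[id_token_alg[-3:]](access_token.encode("ascii")).digest()
    # Left-most half of the digest, base64url-encoded with padding stripped.
    half = digest[: len(digest) // 2]
    return base64.urlsafe_b64encode(half).rstrip(b"=").decode("ascii")

# A verifier would compare compute_at_hash(token["access_token"]) against
# the `at_hash` claim carried in the ID token.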

@@ -36,10 +36,17 @@ class SetPasswordHandler:
def __init__(self, hs: "HomeServer"):
self.store = hs.get_datastores().main
self._auth_handler = hs.get_auth_handler()
# This can only be instantiated on the main process.
device_handler = hs.get_device_handler()
assert isinstance(device_handler, DeviceHandler)
self._device_handler = device_handler

# We don't need the device handler if password changing is disabled.
# This allows us to instantiate the SetPasswordHandler on the workers
# that have admin APIs for MAS
if self._auth_handler.can_change_password():
# This can only be instantiated on the main process.
device_handler = hs.get_device_handler()
assert isinstance(device_handler, DeviceHandler)
self._device_handler: Optional[DeviceHandler] = device_handler
else:
self._device_handler = None

async def set_password(
self,
@@ -51,6 +58,9 @@ class SetPasswordHandler:
if not self._auth_handler.can_change_password():
raise SynapseError(403, "Password change disabled", errcode=Codes.FORBIDDEN)

# We should have this available only if password changing is enabled.
assert self._device_handler is not None

try:
await self.store.user_set_password_hash(user_id, password_hash)
except StoreError as e:

@@ -271,6 +271,7 @@ class SlidingSyncHandler:
from_token=from_token,
to_token=to_token,
newly_joined=room_id in interested_rooms.newly_joined_rooms,
newly_left=room_id in interested_rooms.newly_left_rooms,
is_dm=room_id in interested_rooms.dm_room_ids,
)

@@ -542,6 +543,7 @@
from_token: Optional[SlidingSyncStreamToken],
to_token: StreamToken,
newly_joined: bool,
newly_left: bool,
is_dm: bool,
) -> SlidingSyncResult.RoomResult:
"""
@@ -559,6 +561,7 @@
from_token: The point in the stream to sync from.
to_token: The point in the stream to sync up to.
newly_joined: If the user has newly joined the room
newly_left: If the user has newly left the room
is_dm: Whether the room is a DM room
"""
user = sync_config.user
@@ -856,6 +859,26 @@
# TODO: Limit the number of state events we're about to send down
# the room, if it's too many we should change this to an
# `initial=True`?

# For the case of rejecting remote invites, the leave event won't be
# returned by `get_current_state_deltas_for_room`. This is due to the current
# state only being filled out for rooms the server is in, and so doesn't pick
# up out-of-band leaves (including locally rejected invites) as these events
# are outliers and not added to the `current_state_delta_stream`.
#
# We rely on being explicitly told that the room has been `newly_left` to
# ensure we extract the out-of-band leave.
if newly_left and room_membership_for_user_at_to_token.event_id is not None:
membership_changed = True
leave_event = await self.store.get_event(
room_membership_for_user_at_to_token.event_id
)
state_key = leave_event.get_state_key()
if state_key is not None:
room_state_delta_id_map[(leave_event.type, state_key)] = (
room_membership_for_user_at_to_token.event_id
)

deltas = await self.get_current_state_deltas_for_room(
room_id=room_id,
room_membership_for_user_at_to_token=room_membership_for_user_at_to_token,

@@ -244,14 +244,47 @@ class SlidingSyncRoomLists:
# Note: this won't include rooms the user has left themselves. We add back
# `newly_left` rooms below. This is more efficient than fetching all rooms and
# then filtering out the old left rooms.
room_membership_for_user_map = await self.store.get_sliding_sync_rooms_for_user(
user_id
room_membership_for_user_map = (
await self.store.get_sliding_sync_rooms_for_user_from_membership_snapshots(
user_id
)
)
# To play nice with the rewind logic below, we need to go fetch the rooms the
# user has left themselves but only if it changed after the `to_token`.
#
# If a leave happens *after* the token range, we may have still been joined (or
# any non-self-leave which is relevant to sync) to the room before so we need to
# include it in the list of potentially relevant rooms and apply our rewind
# logic (outside of this function) to see if it's actually relevant.
#
# We do this separately from
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` as those results
# are cached and the `to_token` isn't very cache friendly (people are constantly
# requesting with new tokens) so we separate it out here.
self_leave_room_membership_for_user_map = (
await self.store.get_sliding_sync_self_leave_rooms_after_to_token(
user_id, to_token
)
)
if self_leave_room_membership_for_user_map:
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
room_membership_for_user_map.update(self_leave_room_membership_for_user_map)

# Remove invites from ignored users
ignored_users = await self.store.ignored_users(user_id)
if ignored_users:
# TODO: It would be nice to avoid these copies
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
# Make a copy so we don't run into an error: `dictionary changed size during
# iteration`, when we remove items
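The FIXME blocks above all describe the same copy-on-write discipline: the membership map comes from a cache and must not be mutated in place, so a plain `dict` copy is made only once something actually needs to change. Reduced to its essentials, that pattern looks like the following sketch (names invented for illustration):

from typing import Any, Mapping

def drop_ignored_invites(
    cached_map: Mapping[str, Any], ignored_users: frozenset
) -> Mapping[str, Any]:
    # The cached mapping is shared with other requests, so never mutate it.
    mutable_map = None
    for room_id, membership in cached_map.items():
        if getattr(membership, "sender", None) in ignored_users:
            if mutable_map is None:
                # First change: pay for the copy exactly once.
                mutable_map = dict(cached_map)
            mutable_map.pop(room_id, None)
    # In the common case nothing changed and the cached value flows through.
    return mutable_map if mutable_map is not None else cached_map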
@@ -263,11 +296,23 @@
):
room_membership_for_user_map.pop(room_id, None)

(
newly_joined_room_ids,
newly_left_room_map,
) = await self._get_newly_joined_and_left_rooms(
user_id, from_token=from_token, to_token=to_token
)

changes = await self._get_rewind_changes_to_current_membership_to_token(
sync_config.user, room_membership_for_user_map, to_token=to_token
)
if changes:
# TODO: It would be nice to avoid these copies
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id, change in changes.items():
if change is None:
@@ -278,7 +323,7 @@
existing_room = room_membership_for_user_map.get(room_id)
if existing_room is not None:
# Update room membership events to the point in time of the `to_token`
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_for_user = RoomsForUserSlidingSync(
room_id=room_id,
sender=change.sender,
membership=change.membership,
@@ -290,18 +335,18 @@
room_type=existing_room.room_type,
is_encrypted=existing_room.is_encrypted,
)

(
newly_joined_room_ids,
newly_left_room_map,
) = await self._get_newly_joined_and_left_rooms(
user_id, from_token=from_token, to_token=to_token
)
dm_room_ids = await self._get_dm_rooms_for_user(user_id)
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=room_for_user,
newly_left=room_id in newly_left_room_map,
):
room_membership_for_user_map[room_id] = room_for_user
else:
room_membership_for_user_map.pop(room_id, None)

# Add back `newly_left` rooms (rooms left in the from -> to token range).
#
# We do this because `get_sliding_sync_rooms_for_user(...)` doesn't include
# We do this because `get_sliding_sync_rooms_for_user_from_membership_snapshots(...)` doesn't include
# rooms that the user left themselves as it's more efficient to add them back
# here than to fetch all rooms and then filter out the old left rooms. The user
# only leaves a room once in a blue moon so this barely needs to run.
@@ -310,7 +355,12 @@
newly_left_room_map.keys() - room_membership_for_user_map.keys()
)
if missing_newly_left_rooms:
# TODO: It would be nice to avoid these copies
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)
for room_id in missing_newly_left_rooms:
newly_left_room_for_user = newly_left_room_map[room_id]
@@ -327,14 +377,21 @@
# If the membership exists, it's just a normal user left the room on
# their own
if newly_left_room_for_user_sliding_sync is not None:
room_membership_for_user_map[room_id] = (
newly_left_room_for_user_sliding_sync
)
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=newly_left_room_for_user_sliding_sync,
newly_left=room_id in newly_left_room_map,
):
room_membership_for_user_map[room_id] = (
newly_left_room_for_user_sliding_sync
)
else:
room_membership_for_user_map.pop(room_id, None)

change = changes.get(room_id)
if change is not None:
# Update room membership events to the point in time of the `to_token`
room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_for_user = RoomsForUserSlidingSync(
room_id=room_id,
sender=change.sender,
membership=change.membership,
@@ -346,6 +403,14 @@
room_type=newly_left_room_for_user_sliding_sync.room_type,
is_encrypted=newly_left_room_for_user_sliding_sync.is_encrypted,
)
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=room_for_user,
newly_left=room_id in newly_left_room_map,
):
room_membership_for_user_map[room_id] = room_for_user
else:
room_membership_for_user_map.pop(room_id, None)

# If we are `newly_left` from the room but can't find any membership,
# then we have been "state reset" out of the room
@@ -367,7 +432,7 @@
newly_left_room_for_user.event_pos.to_room_stream_token(),
)

room_membership_for_user_map[room_id] = RoomsForUserSlidingSync(
room_for_user = RoomsForUserSlidingSync(
room_id=room_id,
sender=newly_left_room_for_user.sender,
membership=newly_left_room_for_user.membership,
@@ -378,6 +443,16 @@
room_type=room_type,
is_encrypted=is_encrypted,
)
if filter_membership_for_sync(
user_id=user_id,
room_membership_for_user=room_for_user,
newly_left=room_id in newly_left_room_map,
):
room_membership_for_user_map[room_id] = room_for_user
else:
room_membership_for_user_map.pop(room_id, None)

dm_room_ids = await self._get_dm_rooms_for_user(user_id)

if sync_config.lists:
sync_room_map = room_membership_for_user_map
@@ -493,7 +568,12 @@

if sync_config.room_subscriptions:
with start_active_span("assemble_room_subscriptions"):
# TODO: It would be nice to avoid these copies
# FIXME: It would be nice to avoid this copy but since
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` is cached, it
# can't return a mutable value like a `dict`. We make the copy to get a
# mutable dict that we can change. We try to only make a copy when necessary
# (if we actually need to change something) as in most cases, the logic
# doesn't need to run.
room_membership_for_user_map = dict(room_membership_for_user_map)

# Find which rooms are partially stated and may need to be filtered out
@@ -1040,7 +1120,7 @@
(
newly_joined_room_ids,
newly_left_room_map,
) = await self._get_newly_joined_and_left_rooms(
) = await self._get_newly_joined_and_left_rooms_fallback(
user_id, to_token=to_token, from_token=from_token
)

@@ -1096,6 +1176,53 @@
"state reset" out of the room, and so that room would not be part of the
"current memberships" of the user.

Returns:
A 2-tuple of newly joined room IDs and a map of newly_left room
IDs to the `RoomsForUserStateReset` entry.

We're using `RoomsForUserStateReset` but that doesn't necessarily mean the
user was state reset out of the rooms. It's just that the `event_id`/`sender`
are optional and we can't tell the difference between the server leaving the
room when the user was the last person participating in the room and left or
was state reset out of the room. To actually check for a state reset, you
need to check if a membership still exists in the room.
"""

newly_joined_room_ids: Set[str] = set()
newly_left_room_map: Dict[str, RoomsForUserStateReset] = {}

if not from_token:
return newly_joined_room_ids, newly_left_room_map

changes = await self.store.get_sliding_sync_membership_changes(
user_id,
from_key=from_token.room_key,
to_key=to_token.room_key,
excluded_room_ids=set(self.rooms_to_exclude_globally),
)

for room_id, entry in changes.items():
if entry.membership == Membership.JOIN:
newly_joined_room_ids.add(room_id)
elif entry.membership == Membership.LEAVE:
newly_left_room_map[room_id] = entry

return newly_joined_room_ids, newly_left_room_map

@trace
async def _get_newly_joined_and_left_rooms_fallback(
self,
user_id: str,
to_token: StreamToken,
from_token: Optional[StreamToken],
) -> Tuple[AbstractSet[str], Mapping[str, RoomsForUserStateReset]]:
"""Fetch the sets of rooms that the user newly joined or left in the
given token range.

Note: there may be rooms in the newly left rooms where the user was
"state reset" out of the room, and so that room would not be part of the
"current memberships" of the user.

Returns:
A 2-tuple of newly joined room IDs and a map of newly_left room
IDs to the `RoomsForUserStateReset` entry.

@@ -108,6 +108,9 @@ class UserDirectoryHandler(StateDeltasHandler):
self.is_mine_id = hs.is_mine_id
self.update_user_directory = hs.config.worker.should_update_user_directory
self.search_all_users = hs.config.userdirectory.user_directory_search_all_users
self.exclude_remote_users = (
hs.config.userdirectory.user_directory_exclude_remote_users
)
self.show_locked_users = hs.config.userdirectory.show_locked_users
self._spam_checker_module_callbacks = hs.get_module_api_callbacks().spam_checker
self._hs = hs
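With `exclude_remote_users` plumbed through to the handler, the effect is a guard on whether a remote user ever gets written into the directory. A toy sketch of that gate (the class and helper below are invented for illustration, not the handler's actual structure):

class DirectoryGate:
    # Hypothetical illustration of the exclude_remote_users check.
    def __init__(self, server_name: str, exclude_remote_users: bool) -> None:
        self._server_name = server_name
        self._exclude_remote_users = exclude_remote_users

    def _is_remote(self, user_id: str) -> bool:
        # Matrix user IDs have the form @localpart:server.name
        return user_id.split(":", 1)[1] != self._server_name

    def should_index(self, user_id: str) -> bool:
        if self._exclude_remote_users and self._is_remote(user_id):
            return False
        return True

# e.g. DirectoryGate("example.org", True).should_index("@bob:other.org") -> False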
|
||||
|
||||
@@ -378,7 +378,6 @@ class MediaRepository:
|
||||
media_length=content_length,
|
||||
user_id=auth_user,
|
||||
sha256=sha256,
|
||||
# TODO: Better name?
|
||||
quarantined_by="system" if should_quarantine else None,
|
||||
)
|
||||
|
||||
|
||||
@@ -41,7 +41,7 @@ from synapse.api.errors import Codes, SynapseError
|
||||
from synapse.http.client import SimpleHttpClient
|
||||
from synapse.logging.context import make_deferred_yieldable, run_in_background
|
||||
from synapse.media._base import FileInfo, get_filename_from_headers
|
||||
from synapse.media.media_storage import MediaStorage
|
||||
from synapse.media.media_storage import MediaStorage, SHA256TransparentIOWriter
|
||||
from synapse.media.oembed import OEmbedProvider
|
||||
from synapse.media.preview_html import decode_body, parse_html_to_open_graph
|
||||
from synapse.metrics.background_process_metrics import run_as_background_process
|
||||
@@ -593,17 +593,26 @@ class UrlPreviewer:
|
||||
file_info = FileInfo(server_name=None, file_id=file_id, url_cache=True)
|
||||
|
||||
async with self.media_storage.store_into_file(file_info) as (f, fname):
|
||||
sha256writer = SHA256TransparentIOWriter(f)
|
||||
if url.startswith("data:"):
|
||||
if not allow_data_urls:
|
||||
raise SynapseError(
|
||||
500, "Previewing of data: URLs is forbidden", Codes.UNKNOWN
|
||||
)
|
||||
|
||||
download_result = await self._parse_data_url(url, f)
|
||||
download_result = await self._parse_data_url(url, sha256writer.wrap())
|
||||
else:
|
||||
download_result = await self._download_url(url, f)
|
||||
download_result = await self._download_url(url, sha256writer.wrap())
|
||||
|
||||
try:
|
||||
sha256 = sha256writer.hexdigest()
|
||||
should_quarantine = await self.store.get_is_hash_quarantined(sha256)
|
||||
|
||||
if should_quarantine:
|
||||
logger.warning(
"Media has been automatically quarantined as it matched existing quarantined media"
)

time_now_ms = self.clock.time_msec()

await self.store.store_local_media(
@@ -614,6 +623,8 @@ class UrlPreviewer:
media_length=download_result.length,
user_id=user,
url_cache=url,
sha256=sha256,
quarantined_by="system" if should_quarantine else None,
)

except Exception as e:
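Conceptually, the `SHA256TransparentIOWriter` used above is a writer wrapper
that hashes every byte as it is written through to the underlying file. The
real implementation lives in `synapse.media.media_storage`; the sketch below is
only a hedged illustration of the idea, inferred from the calls made above
(`wrap()` and `hexdigest()`):

    import hashlib
    from typing import IO

    class Sha256WriterSketch:
        """Wrap a binary writer and hash chunks transparently (illustrative)."""

        def __init__(self, inner: IO[bytes]) -> None:
            self._inner = inner
            self._hash = hashlib.sha256()

        def write(self, data: bytes) -> int:
            self._hash.update(data)  # hash first ...
            return self._inner.write(data)  # ... then delegate the write

        def wrap(self) -> "Sha256WriterSketch":
            # The real class hands back a file-like proxy; for this sketch,
            # returning self is enough since only write() is needed.
            return self

        def hexdigest(self) -> str:
            return self._hash.hexdigest()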

@@ -187,7 +187,6 @@ class ClientRestResource(JsonResource):
mutual_rooms.register_servlets,
login_token_request.register_servlets,
rendezvous.register_servlets,
auth_metadata.register_servlets,
]:
continue


@@ -39,7 +39,7 @@ from typing import TYPE_CHECKING, Optional, Tuple

from synapse.api.errors import Codes, NotFoundError, SynapseError
from synapse.handlers.pagination import PURGE_HISTORY_ACTION_NAME
from synapse.http.server import HttpServer, JsonResource
from synapse.http.server import HttpServer
from synapse.http.servlet import RestServlet, parse_json_object_from_request
from synapse.http.site import SynapseRequest
from synapse.rest.admin._base import admin_patterns, assert_requester_is_admin
@@ -51,6 +51,7 @@ from synapse.rest.admin.background_updates import (
from synapse.rest.admin.devices import (
DeleteDevicesRestServlet,
DeviceRestServlet,
DevicesGetRestServlet,
DevicesRestServlet,
)
from synapse.rest.admin.event_reports import (
@@ -86,6 +87,7 @@ from synapse.rest.admin.rooms import (
RoomStateRestServlet,
RoomTimestampToEventRestServlet,
)
from synapse.rest.admin.scheduled_tasks import ScheduledTasksRestServlet
from synapse.rest.admin.server_notice_servlet import SendServerNoticeServlet
from synapse.rest.admin.statistics import (
LargestRoomsStatistics,
@@ -263,14 +265,6 @@ class PurgeHistoryStatusRestServlet(RestServlet):
########################################################################################


class AdminRestResource(JsonResource):
"""The REST resource which gets mounted at /_synapse/admin"""

def __init__(self, hs: "HomeServer"):
JsonResource.__init__(self, hs, canonical_json=False)
register_servlets(hs, self)


def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
"""
Register all the admin servlets.
@@ -279,6 +273,10 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:

# Admin servlets below may not work on workers.
if hs.config.worker.worker_app is not None:
# Some admin servlets can be mounted on workers when MSC3861 is enabled.
if hs.config.experimental.msc3861.enabled:
register_servlets_for_msc3861_delegation(hs, http_server)

return

register_servlets_for_client_rest_resource(hs, http_server)
@@ -338,6 +336,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
BackgroundUpdateStartJobRestServlet(hs).register(http_server)
ExperimentalFeaturesRestServlet(hs).register(http_server)
SuspendAccountRestServlet(hs).register(http_server)
ScheduledTasksRestServlet(hs).register(http_server)


def register_servlets_for_client_rest_resource(
@@ -365,4 +364,16 @@ def register_servlets_for_client_rest_resource(
ListMediaInRoom(hs).register(http_server)

# don't add more things here: new servlets should only be exposed on
# /_synapse/admin so should not go here. Instead register them in AdminRestResource.
# /_synapse/admin so should not go here. Instead register them in register_servlets.


def register_servlets_for_msc3861_delegation(
hs: "HomeServer", http_server: HttpServer
) -> None:
"""Register servlets needed by MAS when MSC3861 is enabled"""
assert hs.config.experimental.msc3861.enabled

UserRestServletV2(hs).register(http_server)
UsernameAvailableRestServlet(hs).register(http_server)
UserReplaceMasterCrossSigningKeyRestServlet(hs).register(http_server)
DevicesGetRestServlet(hs).register(http_server)

@@ -113,18 +113,19 @@ class DeviceRestServlet(RestServlet):
return HTTPStatus.OK, {}


class DevicesRestServlet(RestServlet):
class DevicesGetRestServlet(RestServlet):
"""
Retrieve the given user's devices

This can be mounted on workers as it is read-only, as opposed
to `DevicesRestServlet`.
"""

PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/devices$", "v2")

def __init__(self, hs: "HomeServer"):
self.auth = hs.get_auth()
handler = hs.get_device_handler()
assert isinstance(handler, DeviceHandler)
self.device_handler = handler
self.device_worker_handler = hs.get_device_handler()
self.store = hs.get_datastores().main
self.is_mine = hs.is_mine

@@ -141,9 +142,24 @@ class DevicesRestServlet(RestServlet):
if u is None:
raise NotFoundError("Unknown user")

devices = await self.device_handler.get_devices_by_user(target_user.to_string())
devices = await self.device_worker_handler.get_devices_by_user(
target_user.to_string()
)
return HTTPStatus.OK, {"devices": devices, "total": len(devices)}


class DevicesRestServlet(DevicesGetRestServlet):
"""
Retrieve the given user's devices
"""

PATTERNS = admin_patterns("/users/(?P<user_id>[^/]*)/devices$", "v2")

def __init__(self, hs: "HomeServer"):
super().__init__(hs)
assert isinstance(self.device_worker_handler, DeviceHandler)
self.device_handler = self.device_worker_handler

async def on_POST(
self, request: SynapseRequest, user_id: str
) -> Tuple[int, JsonDict]:
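The refactor above follows a small but useful pattern: the read-only `on_GET`
moves into a base servlet that only needs the worker-safe device handler, and
the mutating subclass pins the full `DeviceHandler`. A stripped-down, hedged
sketch of the shape (handler and method names are taken from the diff; the
classes here are illustrative, not the real servlets):

    from typing import Any, Dict, Tuple

    class DevicesGetSketch:
        """Read-only servlet shape: safe to mount on workers."""

        def __init__(self, device_worker_handler: Any) -> None:
            self.device_worker_handler = device_worker_handler

        async def on_GET(self, request: Any, user_id: str) -> Tuple[int, Dict]:
            devices = await self.device_worker_handler.get_devices_by_user(user_id)
            return 200, {"devices": devices, "total": len(devices)}

    class DevicesSketch(DevicesGetSketch):
        """Adds the mutating verb; only meaningful on the main process."""

        async def on_POST(self, request: Any, user_id: str) -> Tuple[int, Dict]:
            # Mutations need the full handler, hence the isinstance assert
            # in the real __init__ above.
            ...
            return 200, {}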

70
synapse/rest/admin/scheduled_tasks.py
Normal file
@@ -0,0 +1,70 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#
#
from typing import TYPE_CHECKING, Tuple

from synapse.http.servlet import RestServlet, parse_integer, parse_string
from synapse.http.site import SynapseRequest
from synapse.rest.admin import admin_patterns, assert_requester_is_admin
from synapse.types import JsonDict, TaskStatus

if TYPE_CHECKING:
from synapse.server import HomeServer


class ScheduledTasksRestServlet(RestServlet):
"""Get a list of scheduled tasks and their statuses
optionally filtered by action name, resource id, status, and max timestamp
"""

PATTERNS = admin_patterns("/scheduled_tasks$")

def __init__(self, hs: "HomeServer"):
self._auth = hs.get_auth()
self._store = hs.get_datastores().main

async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
await assert_requester_is_admin(self._auth, request)

# extract query params
action_name = parse_string(request, "action_name")
resource_id = parse_string(request, "resource_id")
status = parse_string(request, "job_status")
max_timestamp = parse_integer(request, "max_timestamp")

actions = [action_name] if action_name else None
statuses = [TaskStatus(status)] if status else None

tasks = await self._store.get_scheduled_tasks(
actions=actions,
resource_id=resource_id,
statuses=statuses,
max_timestamp=max_timestamp,
)

json_tasks = []
for task in tasks:
result_task = {
"id": task.id,
"action": task.action,
"status": task.status,
"timestamp_ms": task.timestamp,
"resource_id": task.resource_id,
"result": task.result,
"error": task.error,
}
json_tasks.append(result_task)

return 200, {"scheduled_tasks": json_tasks}
@@ -350,6 +350,7 @@ class EmailThreepidRequestTokenRestServlet(RestServlet):
raise SynapseError(
400,
"Adding an email to your account is disabled on this server",
Codes.THREEPID_MEDIUM_NOT_SUPPORTED,
)

body = parse_and_validate_json_object_from_request(
@@ -456,6 +457,7 @@ class MsisdnThreepidRequestTokenRestServlet(RestServlet):
raise SynapseError(
400,
"Adding phone numbers to user account is not supported by this homeserver",
Codes.THREEPID_MEDIUM_NOT_SUPPORTED,
)

ret = await self.identity_handler.requestMsisdnToken(
@@ -498,7 +500,9 @@ class AddThreepidEmailSubmitTokenServlet(RestServlet):
"Adding emails have been disabled due to lack of an email config"
)
raise SynapseError(
400, "Adding an email to your account is disabled on this server"
400,
"Adding an email to your account is disabled on this server",
Codes.THREEPID_MEDIUM_NOT_SUPPORTED,
)

sid = parse_string(request, "sid", required=True)

@@ -130,7 +130,7 @@ class SQLBaseStore(metaclass=ABCMeta):
"_get_rooms_for_local_user_where_membership_is_inner", (user_id,)
)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", (user_id,)
"get_sliding_sync_rooms_for_user_from_membership_snapshots", (user_id,)
)

# Purge other caches based on room state.
@@ -138,7 +138,9 @@ class SQLBaseStore(metaclass=ABCMeta):
self._attempt_to_invalidate_cache("get_partial_current_state_ids", (room_id,))
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)

def _invalidate_state_caches_all(self, room_id: str) -> None:
"""Invalidates caches that are based on the current state, but does
@@ -168,7 +170,9 @@ class SQLBaseStore(metaclass=ABCMeta):
self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
self._attempt_to_invalidate_cache("get_room_type", (room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)

def _attempt_to_invalidate_cache(
self, cache_name: str, key: Optional[Collection[Any]]


@@ -307,7 +307,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
"get_rooms_for_user", (data.state_key,)
)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", None
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
self._membership_stream_cache.entity_has_changed(data.state_key, token) # type: ignore[attr-defined]
elif data.type == EventTypes.RoomEncryption:
@@ -319,7 +319,7 @@ class CacheInvalidationWorkerStore(SQLBaseStore):

if (data.type, data.state_key) in SLIDING_SYNC_RELEVANT_STATE_SET:
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", None
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
elif row.type == EventsStreamAllStateRow.TypeId:
assert isinstance(data, EventsStreamAllStateRow)
@@ -330,7 +330,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache("get_rooms_for_user", None)
self._attempt_to_invalidate_cache("get_room_type", (data.room_id,))
self._attempt_to_invalidate_cache("get_room_encryption", (data.room_id,))
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
else:
raise Exception("Unknown events stream row type %s" % (row.type,))

@@ -394,7 +396,8 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
"_get_rooms_for_local_user_where_membership_is_inner", (state_key,)
)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user", (state_key,)
"get_sliding_sync_rooms_for_user_from_membership_snapshots",
(state_key,),
)

self._attempt_to_invalidate_cache(
@@ -413,7 +416,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))

if (etype, state_key) in SLIDING_SYNC_RELEVANT_STATE_SET:
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)

if relates_to:
self._attempt_to_invalidate_cache(
@@ -470,7 +475,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache(
"_get_rooms_for_local_user_where_membership_is_inner", None
)
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
self._attempt_to_invalidate_cache("did_forget", None)
self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
self._attempt_to_invalidate_cache("get_references_for_event", None)
@@ -529,7 +536,9 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
self._attempt_to_invalidate_cache(
"get_current_hosts_in_room_ordered", (room_id,)
)
self._attempt_to_invalidate_cache("get_sliding_sync_rooms_for_user", None)
self._attempt_to_invalidate_cache(
"get_sliding_sync_rooms_for_user_from_membership_snapshots", None
)
self._attempt_to_invalidate_cache("did_forget", None)
self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)

@@ -1501,6 +1501,45 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
"delete_old_otks_for_next_user_batch", impl
)

async def allow_master_cross_signing_key_replacement_without_uia(
self, user_id: str, duration_ms: int
) -> Optional[int]:
"""Mark this user's latest master key as being replaceable without UIA.

Said replacement will only be permitted for a short time after calling this
function. That time period is controlled by the duration argument.

Returns:
None, if there is no such key.
Otherwise, the timestamp before which replacement is allowed without UIA.
"""
timestamp = self._clock.time_msec() + duration_ms

def impl(txn: LoggingTransaction) -> Optional[int]:
txn.execute(
"""
UPDATE e2e_cross_signing_keys
SET updatable_without_uia_before_ms = ?
WHERE stream_id = (
SELECT stream_id
FROM e2e_cross_signing_keys
WHERE user_id = ? AND keytype = 'master'
ORDER BY stream_id DESC
LIMIT 1
)
""",
(timestamp, user_id),
)
if txn.rowcount == 0:
return None

return timestamp

return await self.db_pool.runInteraction(
"allow_master_cross_signing_key_replacement_without_uia",
impl,
)
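A hedged usage sketch of the call above (identifiers illustrative): mark
@alice's master key as replaceable without UIA for ten minutes, then inspect
the returned deadline:

    # Inside an async Synapse context with a datastore `store` in scope:
    ts = await store.allow_master_cross_signing_key_replacement_without_uia(
        user_id="@alice:example.com",
        duration_ms=10 * 60 * 1000,  # ten minutes
    )
    if ts is None:
        print("user has no master cross-signing key")
    else:
        print(f"replacement allowed without UIA until {ts} ms")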


class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
def __init__(
@@ -1755,42 +1794,3 @@ class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
],
desc="add_e2e_signing_key",
)

async def allow_master_cross_signing_key_replacement_without_uia(
self, user_id: str, duration_ms: int
) -> Optional[int]:
"""Mark this user's latest master key as being replaceable without UIA.

Said replacement will only be permitted for a short time after calling this
function. That time period is controlled by the duration argument.

Returns:
None, if there is no such key.
Otherwise, the timestamp before which replacement is allowed without UIA.
"""
timestamp = self._clock.time_msec() + duration_ms

def impl(txn: LoggingTransaction) -> Optional[int]:
txn.execute(
"""
UPDATE e2e_cross_signing_keys
SET updatable_without_uia_before_ms = ?
WHERE stream_id = (
SELECT stream_id
FROM e2e_cross_signing_keys
WHERE user_id = ? AND keytype = 'master'
ORDER BY stream_id DESC
LIMIT 1
)
""",
(timestamp, user_id),
)
if txn.rowcount == 0:
return None

return timestamp

return await self.db_pool.runInteraction(
"allow_master_cross_signing_key_replacement_without_uia",
impl,
)

@@ -24,7 +24,12 @@ from typing import TYPE_CHECKING, Dict, List, Optional, Set, Tuple, cast

import attr

from synapse.api.constants import EventContentFields, Membership, RelationTypes
from synapse.api.constants import (
MAX_DEPTH,
EventContentFields,
Membership,
RelationTypes,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.events import EventBase, make_event_from_dict
from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
@@ -311,6 +316,10 @@ class EventsBackgroundUpdatesStore(StreamWorkerStore, StateDeltasStore, SQLBaseS
self._sliding_sync_membership_snapshots_fix_forgotten_column_bg_update,
)

self.db_pool.updates.register_background_update_handler(
_BackgroundUpdates.FIXUP_MAX_DEPTH_CAP, self.fixup_max_depth_cap_bg_update
)

# We want this to run on the main database at startup before we start processing
# events.
#
@@ -2547,6 +2556,77 @@ class EventsBackgroundUpdatesStore(StreamWorkerStore, StateDeltasStore, SQLBaseS

return num_rows

async def fixup_max_depth_cap_bg_update(
self, progress: JsonDict, batch_size: int
) -> int:
"""Fixes the topological ordering for events that have a depth greater
than MAX_DEPTH. This should fix /messages ordering oddities."""

room_id_bound = progress.get("room_id", "")

def redo_max_depth_bg_update_txn(txn: LoggingTransaction) -> Tuple[bool, int]:
txn.execute(
"""
SELECT room_id, room_version FROM rooms
WHERE room_id > ?
ORDER BY room_id
LIMIT ?
""",
(room_id_bound, batch_size),
)

# Find the next room ID to process, with a relevant room version.
room_ids: List[str] = []
max_room_id: Optional[str] = None
for room_id, room_version_str in txn:
max_room_id = room_id

# We only want to process rooms with a known room version that
# has strict canonical json validation enabled.
room_version = KNOWN_ROOM_VERSIONS.get(room_version_str)
if room_version and room_version.strict_canonicaljson:
room_ids.append(room_id)

if max_room_id is None:
# The query did not return any rooms, so we are done.
return True, 0

# Update the progress to the last room ID we pulled from the DB,
# this ensures we always make progress.
self.db_pool.updates._background_update_progress_txn(
txn,
_BackgroundUpdates.FIXUP_MAX_DEPTH_CAP,
progress={"room_id": max_room_id},
)

if not room_ids:
# There were no rooms in this batch that required the fix.
return False, 0

clause, list_args = make_in_list_sql_clause(
self.database_engine, "room_id", room_ids
)
sql = f"""
UPDATE events SET topological_ordering = ?
WHERE topological_ordering > ? AND {clause}
"""
args = [MAX_DEPTH, MAX_DEPTH]
args.extend(list_args)
txn.execute(sql, args)

return False, len(room_ids)

done, num_rooms = await self.db_pool.runInteraction(
"redo_max_depth_bg_update", redo_max_depth_bg_update_txn
)

if done:
await self.db_pool.updates._end_background_update(
_BackgroundUpdates.FIXUP_MAX_DEPTH_CAP
)

return num_rooms
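One helper worth unpacking: `make_in_list_sql_clause` turns a column name and a
collection into an engine-appropriate membership test plus its bound arguments,
so for this batch the rendered statement is roughly as below (a sketch; on
Postgres the list is instead bound as a single array parameter via `= ANY(?)`):

    # clause    == "room_id IN (?, ?, ?)"    (SQLite-style placeholder form)
    # list_args == ["!a:x", "!b:x", "!c:x"]  (hypothetical room IDs)
    # The UPDATE executed above then becomes:
    #   UPDATE events SET topological_ordering = ?
    #   WHERE topological_ordering > ? AND room_id IN (?, ?, ?)
    # with args [MAX_DEPTH, MAX_DEPTH, "!a:x", "!b:x", "!c:x"], i.e. any event
    # whose ordering exceeds MAX_DEPTH in those rooms is capped to MAX_DEPTH.
    clause, list_args = make_in_list_sql_clause(
        self.database_engine, "room_id", ["!a:x", "!b:x", "!c:x"]
    )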


def _resolve_stale_data_in_sliding_sync_tables(
txn: LoggingTransaction,

@@ -2105,6 +2105,136 @@ class RegistrationWorkerStore(CacheInvalidationWorkerStore):
func=is_user_approved_txn,
)

async def set_user_deactivated_status(
self, user_id: str, deactivated: bool
) -> None:
"""Set the `deactivated` property for the provided user to the provided value.

Args:
user_id: The ID of the user to set the status for.
deactivated: The value to set for `deactivated`.
"""

await self.db_pool.runInteraction(
"set_user_deactivated_status",
self.set_user_deactivated_status_txn,
user_id,
deactivated,
)

def set_user_deactivated_status_txn(
self, txn: LoggingTransaction, user_id: str, deactivated: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"deactivated": 1 if deactivated else 0},
)
self._invalidate_cache_and_stream(
txn, self.get_user_deactivated_status, (user_id,)
)
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
self._invalidate_cache_and_stream(txn, self.is_guest, (user_id,))

async def set_user_suspended_status(self, user_id: str, suspended: bool) -> None:
"""
Set whether the user's account is suspended in the `users` table.

Args:
user_id: The user ID of the user in question
suspended: True if the user is suspended, false if not
"""
await self.db_pool.runInteraction(
"set_user_suspended_status",
self.set_user_suspended_status_txn,
user_id,
suspended,
)

def set_user_suspended_status_txn(
self, txn: LoggingTransaction, user_id: str, suspended: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"suspended": suspended},
)
self._invalidate_cache_and_stream(
txn, self.get_user_suspended_status, (user_id,)
)
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))

async def set_user_locked_status(self, user_id: str, locked: bool) -> None:
"""Set the `locked` property for the provided user to the provided value.

Args:
user_id: The ID of the user to set the status for.
locked: The value to set for `locked`.
"""

await self.db_pool.runInteraction(
"set_user_locked_status",
self.set_user_locked_status_txn,
user_id,
locked,
)

def set_user_locked_status_txn(
self, txn: LoggingTransaction, user_id: str, locked: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"locked": locked},
)
self._invalidate_cache_and_stream(txn, self.get_user_locked_status, (user_id,))
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))

async def update_user_approval_status(
self, user_id: UserID, approved: bool
) -> None:
"""Set the user's 'approved' flag to the given value.

The boolean will be turned into an int (in update_user_approval_status_txn)
because the column is a smallint.

Args:
user_id: the user to update the flag for.
approved: the value to set the flag to.
"""
await self.db_pool.runInteraction(
"update_user_approval_status",
self.update_user_approval_status_txn,
user_id.to_string(),
approved,
)

def update_user_approval_status_txn(
self, txn: LoggingTransaction, user_id: str, approved: bool
) -> None:
"""Set the user's 'approved' flag to the given value.

The boolean is turned into an int because the column is a smallint.

Args:
txn: the current database transaction.
user_id: the user to update the flag for.
approved: the value to set the flag to.
"""
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"approved": approved},
)

# Invalidate the caches of methods that read the value of the 'approved' flag.
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
self._invalidate_cache_and_stream(txn, self.is_user_approved, (user_id,))


class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):
def __init__(
@@ -2217,117 +2347,6 @@ class RegistrationBackgroundUpdateStore(RegistrationWorkerStore):

return nb_processed

async def set_user_deactivated_status(
self, user_id: str, deactivated: bool
) -> None:
"""Set the `deactivated` property for the provided user to the provided value.

Args:
user_id: The ID of the user to set the status for.
deactivated: The value to set for `deactivated`.
"""

await self.db_pool.runInteraction(
"set_user_deactivated_status",
self.set_user_deactivated_status_txn,
user_id,
deactivated,
)

def set_user_deactivated_status_txn(
self, txn: LoggingTransaction, user_id: str, deactivated: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"deactivated": 1 if deactivated else 0},
)
self._invalidate_cache_and_stream(
txn, self.get_user_deactivated_status, (user_id,)
)
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
txn.call_after(self.is_guest.invalidate, (user_id,))

async def set_user_suspended_status(self, user_id: str, suspended: bool) -> None:
"""
Set whether the user's account is suspended in the `users` table.

Args:
user_id: The user ID of the user in question
suspended: True if the user is suspended, false if not
"""
await self.db_pool.runInteraction(
"set_user_suspended_status",
self.set_user_suspended_status_txn,
user_id,
suspended,
)

def set_user_suspended_status_txn(
self, txn: LoggingTransaction, user_id: str, suspended: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"suspended": suspended},
)
self._invalidate_cache_and_stream(
txn, self.get_user_suspended_status, (user_id,)
)
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))

async def set_user_locked_status(self, user_id: str, locked: bool) -> None:
"""Set the `locked` property for the provided user to the provided value.

Args:
user_id: The ID of the user to set the status for.
locked: The value to set for `locked`.
"""

await self.db_pool.runInteraction(
"set_user_locked_status",
self.set_user_locked_status_txn,
user_id,
locked,
)

def set_user_locked_status_txn(
self, txn: LoggingTransaction, user_id: str, locked: bool
) -> None:
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"locked": locked},
)
self._invalidate_cache_and_stream(txn, self.get_user_locked_status, (user_id,))
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))

def update_user_approval_status_txn(
self, txn: LoggingTransaction, user_id: str, approved: bool
) -> None:
"""Set the user's 'approved' flag to the given value.

The boolean is turned into an int because the column is a smallint.

Args:
txn: the current database transaction.
user_id: the user to update the flag for.
approved: the value to set the flag to.
"""
self.db_pool.simple_update_one_txn(
txn=txn,
table="users",
keyvalues={"name": user_id},
updatevalues={"approved": approved},
)

# Invalidate the caches of methods that read the value of the 'approved' flag.
self._invalidate_cache_and_stream(txn, self.get_user_by_id, (user_id,))
self._invalidate_cache_and_stream(txn, self.is_user_approved, (user_id,))


class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
def __init__(
@@ -2956,25 +2975,6 @@ class RegistrationStore(StatsStore, RegistrationBackgroundUpdateStore):
start_or_continue_validation_session_txn,
)

async def update_user_approval_status(
self, user_id: UserID, approved: bool
) -> None:
"""Set the user's 'approved' flag to the given value.

The boolean will be turned into an int (in update_user_approval_status_txn)
because the column is a smallint.

Args:
user_id: the user to update the flag for.
approved: the value to set the flag to.
"""
await self.db_pool.runInteraction(
"update_user_approval_status",
self.update_user_approval_status_txn,
user_id.to_string(),
approved,
)

@wrap_as_background_process("delete_expired_login_tokens")
async def _delete_expired_login_tokens(self) -> None:
"""Remove login tokens with expiry dates that have passed."""

@@ -53,6 +53,7 @@ from synapse.storage.database import (
)
from synapse.storage.databases.main.cache import CacheInvalidationWorkerStore
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.databases.main.stream import _filter_results_by_stream
from synapse.storage.engines import Sqlite3Engine
from synapse.storage.roommember import (
MemberSummary,
@@ -65,6 +66,7 @@ from synapse.types import (
PersistedEventPosition,
StateMap,
StrCollection,
StreamToken,
get_domain_from_id,
)
from synapse.util.caches.descriptors import _CacheContext, cached, cachedList
@@ -1389,7 +1391,9 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
txn, self.get_forgotten_rooms_for_user, (user_id,)
)
self._invalidate_cache_and_stream(
txn, self.get_sliding_sync_rooms_for_user, (user_id,)
txn,
self.get_sliding_sync_rooms_for_user_from_membership_snapshots,
(user_id,),
)

await self.db_pool.runInteraction("forget_membership", f)
@@ -1421,25 +1425,30 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
)

@cached(iterable=True, max_entries=10000)
async def get_sliding_sync_rooms_for_user(
self,
user_id: str,
async def get_sliding_sync_rooms_for_user_from_membership_snapshots(
self, user_id: str
) -> Mapping[str, RoomsForUserSlidingSync]:
"""Get all the rooms for a user to handle a sliding sync request.
"""
Get all the rooms for a user to handle a sliding sync request from the
`sliding_sync_membership_snapshots` table. These will be current memberships and
need to be rewound to the token range.

Ignores forgotten rooms and rooms that the user has left themselves.

Args:
user_id: The user ID to get the rooms for.

Returns:
Map from room ID to membership info
"""

def get_sliding_sync_rooms_for_user_txn(
def _txn(
txn: LoggingTransaction,
) -> Dict[str, RoomsForUserSlidingSync]:
# XXX: If you use any new columns that can change (like from
# `sliding_sync_joined_rooms` or `forgotten`), make sure to bust the
# `get_sliding_sync_rooms_for_user` cache in the appropriate places (and add
# tests).
# `get_sliding_sync_rooms_for_user_from_membership_snapshots` cache in the
# appropriate places (and add tests).
sql = """
SELECT m.room_id, m.sender, m.membership, m.membership_event_id,
r.room_version,
@@ -1455,6 +1464,7 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
AND (m.membership != 'leave' OR m.user_id != m.sender)
"""
txn.execute(sql, (user_id,))

return {
row[0]: RoomsForUserSlidingSync(
room_id=row[0],
@@ -1475,8 +1485,113 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
}

return await self.db_pool.runInteraction(
"get_sliding_sync_rooms_for_user",
get_sliding_sync_rooms_for_user_txn,
"get_sliding_sync_rooms_for_user_from_membership_snapshots",
_txn,
)

async def get_sliding_sync_self_leave_rooms_after_to_token(
self,
user_id: str,
to_token: StreamToken,
) -> Dict[str, RoomsForUserSlidingSync]:
"""
Get all the self-leave rooms for a user after the `to_token` (outside the token
range) that are potentially relevant[1] and needed to handle a sliding sync
request. The results are from the `sliding_sync_membership_snapshots` table and
will be current memberships and need to be rewound to the token range.

[1] If a leave happens after the token range, we may still have been joined (or
had another non-self-leave membership relevant to sync) to the room before, so
we need to include it in the list of potentially relevant rooms and apply
our rewind logic (outside of this function) to see if it's actually relevant.

This is basically a sister-function to
`get_sliding_sync_rooms_for_user_from_membership_snapshots`. We could
alternatively incorporate this logic into
`get_sliding_sync_rooms_for_user_from_membership_snapshots` but those results
are cached and the `to_token` isn't very cache friendly (people are constantly
requesting with new tokens) so we separate it out here.

Args:
user_id: The user ID to get the rooms for.
to_token: Any self-leave memberships after this position will be returned.

Returns:
Map from room ID to membership info
"""
# TODO: Potential to check
# `self._membership_stream_cache.has_entity_changed(...)` as an early-return
# shortcut.

def _txn(
txn: LoggingTransaction,
) -> Dict[str, RoomsForUserSlidingSync]:
sql = """
SELECT m.room_id, m.sender, m.membership, m.membership_event_id,
r.room_version,
m.event_instance_name, m.event_stream_ordering,
m.has_known_state,
m.room_type,
m.is_encrypted
FROM sliding_sync_membership_snapshots AS m
INNER JOIN rooms AS r USING (room_id)
WHERE user_id = ?
AND m.forgotten = 0
AND m.membership = 'leave'
AND m.user_id = m.sender
AND (m.event_stream_ordering > ?)
"""
# If a leave happens after the token range, we may still have been joined
# (or had another non-self-leave membership relevant to sync) to the room
# before, so we need to include it in the list of potentially relevant rooms
# and apply our rewind logic (outside of this function).
#
# To handle tokens with a non-empty instance_map we fetch more
# results than necessary and then filter down
min_to_token_position = to_token.room_key.stream
txn.execute(sql, (user_id, min_to_token_position))

# Map from room_id to membership info
room_membership_for_user_map: Dict[str, RoomsForUserSlidingSync] = {}
for row in txn:
room_for_user = RoomsForUserSlidingSync(
room_id=row[0],
sender=row[1],
membership=row[2],
event_id=row[3],
room_version_id=row[4],
event_pos=PersistedEventPosition(row[5], row[6]),
has_known_state=bool(row[7]),
room_type=row[8],
is_encrypted=bool(row[9]),
)

# We filter out unknown room versions proactively. They shouldn't go
# down sync and their metadata may be in a broken state (causing
# errors).
if row[4] not in KNOWN_ROOM_VERSIONS:
continue

# We only want to include the self-leave membership if it happened after
# the token range.
#
# Since the database pulls out more than necessary, we need to filter it
# down here.
if _filter_results_by_stream(
lower_token=None,
upper_token=to_token.room_key,
instance_name=room_for_user.event_pos.instance_name,
stream_ordering=room_for_user.event_pos.stream,
):
continue

room_membership_for_user_map[room_for_user.room_id] = room_for_user

return room_membership_for_user_map

return await self.db_pool.runInteraction(
"get_sliding_sync_self_leave_rooms_after_to_token",
_txn,
)
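Note the double negative in the loop above: `_filter_results_by_stream` returns
True when the event falls inside the (`lower_token`, `upper_token`] range, so
with `lower_token=None` (unbounded below) a True result means "at or before
`to_token`", and the `continue` keeps only self-leaves strictly after the
token. A restated sketch of that check:

    # True  -> event is at or before to_token -> skip (not after the range)
    # False -> event is after to_token        -> keep as potentially relevant
    inside_range = _filter_results_by_stream(
        lower_token=None,
        upper_token=to_token.room_key,
        instance_name=room_for_user.event_pos.instance_name,
        stream_ordering=room_for_user.event_pos.stream,
    )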

async def get_sliding_sync_room_for_user(
@@ -1693,93 +1808,6 @@ class RoomMemberBackgroundUpdateStore(SQLBaseStore):
columns=["user_id", "room_id"],
)

self.db_pool.updates.register_background_update_handler(
"populate_participant_bg_update", self._populate_participant
)

async def _populate_participant(self, progress: JsonDict, batch_size: int) -> int:
"""
Background update to populate column `participant` on `room_memberships` table

A 'participant' is someone who is currently joined to a room and has sent at least
one `m.room.message` or `m.room.encrypted` event.

This background update will set the `participant` column across all rows in
`room_memberships` based on the user's *current* join status, and if
they've *ever* sent a message or encrypted event. Therefore one should
never assume the `participant` column's value is based solely on whether
the user participated in a previous "session" (where a "session" is defined
as a period between the user joining and leaving). See
https://github.com/element-hq/synapse/pull/18068#discussion_r1931070291
for further detail.
"""
stream_token = progress.get("last_stream_token", None)

def _get_max_stream_token_txn(txn: LoggingTransaction) -> int:
sql = """
SELECT event_stream_ordering from room_memberships
ORDER BY event_stream_ordering DESC
LIMIT 1;
"""
txn.execute(sql)
res = txn.fetchone()
if not res or not res[0]:
return 0
return res[0]

def _background_populate_participant_txn(
txn: LoggingTransaction, stream_token: str
) -> None:
sql = """
UPDATE room_memberships
SET participant = True
FROM (
SELECT DISTINCT c.state_key, e.room_id
FROM current_state_events AS c
INNER JOIN events AS e ON c.room_id = e.room_id
WHERE c.membership = 'join'
AND c.state_key = e.sender
AND (
e.type = 'm.room.message'
OR e.type = 'm.room.encrypted'
)
) AS subquery
WHERE room_memberships.user_id = subquery.state_key
AND room_memberships.room_id = subquery.room_id
AND room_memberships.event_stream_ordering <= ?
AND room_memberships.event_stream_ordering > ?;
"""
batch = int(stream_token) - _POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE
txn.execute(sql, (stream_token, batch))

if stream_token is None:
stream_token = await self.db_pool.runInteraction(
"_get_max_stream_token", _get_max_stream_token_txn
)

if stream_token < 0:
await self.db_pool.updates._end_background_update(
"populate_participant_bg_update"
)
return _POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE

await self.db_pool.runInteraction(
"_background_populate_participant_txn",
_background_populate_participant_txn,
stream_token,
)

progress["last_stream_token"] = (
stream_token - _POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE
)
await self.db_pool.runInteraction(
"populate_participant_bg_update",
self.db_pool.updates._background_update_progress_txn,
"populate_participant_bg_update",
progress,
)
return _POPULATE_PARTICIPANT_BG_UPDATE_BATCH_SIZE

async def _background_add_membership_profile(
self, progress: JsonDict, batch_size: int
) -> int:

@@ -68,6 +68,14 @@ class SlidingSyncStore(SQLBaseStore):
columns=("membership_event_id",),
)

self.db_pool.updates.register_background_index_update(
update_name="sliding_sync_membership_snapshots_user_id_stream_ordering",
index_name="sliding_sync_membership_snapshots_user_id_stream_ordering",
table="sliding_sync_membership_snapshots",
columns=("user_id", "event_stream_ordering"),
replaces_index="sliding_sync_membership_snapshots_user_id",
)

async def get_latest_bump_stamp_for_room(
self,
room_id: str,

@@ -80,6 +80,7 @@ from synapse.storage.database import (
)
from synapse.storage.databases.main.events_worker import EventsWorkerStore
from synapse.storage.engines import BaseDatabaseEngine, PostgresEngine, Sqlite3Engine
from synapse.storage.roommember import RoomsForUserStateReset
from synapse.storage.util.id_generators import MultiWriterIdGenerator
from synapse.types import PersistedEventPosition, RoomStreamToken, StrCollection
from synapse.util.caches.descriptors import cached, cachedList
@@ -453,6 +454,8 @@ def _filter_results_by_stream(
stream_ordering falls between the two tokens (taking a None
token to mean unbounded).

The token range is defined by > `lower_token` and <= `upper_token`.

Used to filter results from fetching events in the DB against the given
tokens. This is necessary to handle the case where the tokens include
position maps, which we handle by fetching more than necessary from the DB
@@ -991,6 +994,10 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
available in the `current_state_delta_stream` table. To actually check for a
state reset, you need to check if a membership still exists in the room.
"""

assert from_key.topological is None
assert to_key.topological is None

# Start by ruling out cases where a DB query is not necessary.
if from_key == to_key:
return []
@@ -1136,6 +1143,203 @@ class StreamWorkerStore(EventsWorkerStore, SQLBaseStore):
if membership_change.room_id not in room_ids_to_exclude
]

@trace
async def get_sliding_sync_membership_changes(
self,
user_id: str,
from_key: RoomStreamToken,
to_key: RoomStreamToken,
excluded_room_ids: Optional[AbstractSet[str]] = None,
) -> Dict[str, RoomsForUserStateReset]:
"""
Fetch membership events that result in a meaningful membership change for a
given user.

A meaningful membership change is one where the `membership` value actually
changes. This means membership changes from `join` to `join` (like a display
name change) will be filtered out since they result in no meaningful change.

Note: This function only works with "live" tokens with `stream_ordering` only.

We're looking for membership changes in the token range (> `from_key` and <=
`to_key`).

Args:
user_id: The user ID to fetch membership events for.
from_key: The point in the stream to sync from (fetching events > this point).
to_key: The token to fetch rooms up to (fetching events <= this point).
excluded_room_ids: Optional list of room IDs to exclude from the results.

Returns:
All meaningful membership changes to the current state in the token range.
Events are sorted by `stream_ordering` ascending.

`event_id`/`sender` can be `None` when the server leaves a room (meaning
everyone locally left) or a state reset which removed the person from the
room. We can't tell the difference between the two cases with what's
available in the `current_state_delta_stream` table. To actually check for a
state reset, you need to check if a membership still exists in the room.
"""

assert from_key.topological is None
assert to_key.topological is None

# Start by ruling out cases where a DB query is not necessary.
if from_key == to_key:
return {}

if from_key:
has_changed = self._membership_stream_cache.has_entity_changed(
user_id, int(from_key.stream)
)
if not has_changed:
return {}

room_ids_to_exclude: AbstractSet[str] = set()
if excluded_room_ids is not None:
room_ids_to_exclude = excluded_room_ids

def f(txn: LoggingTransaction) -> Dict[str, RoomsForUserStateReset]:
# To handle tokens with a non-empty instance_map we fetch more
# results than necessary and then filter down
min_from_id = from_key.stream
max_to_id = to_key.get_max_stream_pos()

# This query looks at membership changes in
# `sliding_sync_membership_snapshots` which will not include users
# that were state reset out of rooms; so we need to look for that
# case in `current_state_delta_stream`.
sql = """
SELECT
room_id,
membership_event_id,
event_instance_name,
event_stream_ordering,
membership,
sender,
prev_membership,
room_version
FROM
(
SELECT
s.room_id,
s.membership_event_id,
s.event_instance_name,
s.event_stream_ordering,
s.membership,
s.sender,
m_prev.membership AS prev_membership
FROM sliding_sync_membership_snapshots as s
LEFT JOIN event_edges AS e ON e.event_id = s.membership_event_id
LEFT JOIN room_memberships AS m_prev ON m_prev.event_id = e.prev_event_id
WHERE s.user_id = ?

UNION ALL

SELECT
s.room_id,
e.event_id,
s.instance_name,
s.stream_id,
m.membership,
e.sender,
m_prev.membership AS prev_membership
FROM current_state_delta_stream AS s
LEFT JOIN events AS e ON e.event_id = s.event_id
LEFT JOIN room_memberships AS m ON m.event_id = s.event_id
LEFT JOIN room_memberships AS m_prev ON m_prev.event_id = s.prev_event_id
WHERE
s.type = ?
AND s.state_key = ?
) AS c
INNER JOIN rooms USING (room_id)
WHERE event_stream_ordering > ? AND event_stream_ordering <= ?
ORDER BY event_stream_ordering ASC
"""

txn.execute(
sql,
(user_id, EventTypes.Member, user_id, min_from_id, max_to_id),
)

membership_changes: Dict[str, RoomsForUserStateReset] = {}
for (
room_id,
membership_event_id,
event_instance_name,
event_stream_ordering,
membership,
sender,
prev_membership,
room_version_id,
) in txn:
assert room_id is not None
assert event_stream_ordering is not None

if room_id in room_ids_to_exclude:
continue

if _filter_results_by_stream(
from_key,
to_key,
event_instance_name,
event_stream_ordering,
):
# When the server leaves a room, it will insert new rows into the
# `current_state_delta_stream` table with `event_id = null` for all
# current state. This means we might already have a row for the
# leave event and then another for the same leave where the
# `event_id=null` but the `prev_event_id` is pointing back at the
# earlier leave event. We don't want to report the leave, if we
# already have a leave event.
if (
membership_event_id is None
and prev_membership == Membership.LEAVE
):
continue

if membership_event_id is None and room_id in membership_changes:
# SUSPICIOUS: if we join a room and get state reset out of it
# in the same queried window,
# won't this ignore the 'state reset out of it' part?
continue

# When `s.event_id = null`, we won't be able to get respective
# `room_membership` but can assume the user has left the room
# because this only happens when the server leaves a room
# (meaning everyone locally left) or a state reset which removed
# the person from the room.
membership = (
membership if membership is not None else Membership.LEAVE
)

if membership == prev_membership:
# If `membership` and `prev_membership` are the same then this
# is not a meaningful change so we can skip it.
# An example of this happening is when the user changes their display name.
continue

membership_change = RoomsForUserStateReset(
room_id=room_id,
sender=sender,
membership=membership,
event_id=membership_event_id,
event_pos=PersistedEventPosition(
event_instance_name, event_stream_ordering
),
room_version_id=room_version_id,
)

membership_changes[room_id] = membership_change

return membership_changes

membership_changes = await self.db_pool.runInteraction(
"get_sliding_sync_membership_changes", f
)

return membership_changes

@cancellable
async def get_membership_changes_for_user(
self,

@@ -1037,11 +1037,11 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
}
"""

join_args: Tuple[str, ...] = (user_id,)

if self.hs.config.userdirectory.user_directory_search_all_users:
join_args = (user_id,)
where_clause = "user_id != ?"
else:
join_args = (user_id,)
where_clause = """
(
EXISTS (select 1 from users_in_public_rooms WHERE user_id = t.user_id)
@@ -1055,6 +1055,14 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
if not show_locked_users:
where_clause += " AND (u.locked IS NULL OR u.locked = FALSE)"

# Adjust the JOIN type based on the exclude_remote_users flag (the users
# table only contains local users, so an inner join is a good way to
# exclude remote users)
if self.hs.config.userdirectory.user_directory_exclude_remote_users:
join_type = "JOIN"
else:
join_type = "LEFT JOIN"

# We allow manipulating the ranking algorithm by injecting statements
# based on config options.
additional_ordering_statements = []
@@ -1086,7 +1094,7 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
SELECT d.user_id AS user_id, display_name, avatar_url
FROM matching_users as t
INNER JOIN user_directory AS d USING (user_id)
LEFT JOIN users AS u ON t.user_id = u.name
%(join_type)s users AS u ON t.user_id = u.name
WHERE
%(where_clause)s
ORDER BY
@@ -1115,6 +1123,7 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
""" % {
"where_clause": where_clause,
"order_case_statements": " ".join(additional_ordering_statements),
"join_type": join_type,
}
args = (
(full_query,)
@@ -1142,7 +1151,7 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
SELECT d.user_id AS user_id, display_name, avatar_url
FROM user_directory_search as t
INNER JOIN user_directory AS d USING (user_id)
LEFT JOIN users AS u ON t.user_id = u.name
%(join_type)s users AS u ON t.user_id = u.name
WHERE
%(where_clause)s
AND value MATCH ?
@@ -1155,6 +1164,7 @@ class UserDirectoryStore(UserDirectoryBackgroundUpdateStore):
""" % {
"where_clause": where_clause,
"order_statements": " ".join(additional_ordering_statements),
"join_type": join_type,
}
args = join_args + (search_query,) + ordering_arguments + (limit + 1,)
else:
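To make the join switch concrete, here is a hedged sketch of the interpolation
(the surrounding SELECT is abbreviated; column and table names are from the
query above):

    # exclude_remote_users = True  -> INNER JOIN drops rows with no matching
    #                                 local row in `users` (remote users).
    # exclude_remote_users = False -> LEFT JOIN keeps remote users, with the
    #                                 `u.*` columns NULL for them.
    join_type = "JOIN" if exclude_remote_users else "LEFT JOIN"
    sql = (
        "SELECT d.user_id AS user_id, display_name, avatar_url "
        "FROM user_directory_search AS t "
        "INNER JOIN user_directory AS d USING (user_id) "
        "%(join_type)s users AS u ON t.user_id = u.name"
    ) % {"join_type": join_type}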
|
||||
|
||||
@@ -19,7 +19,7 @@
#
#

SCHEMA_VERSION = 91  # remember to update the list below when updating
SCHEMA_VERSION = 92  # remember to update the list below when updating
"""Represents the expectations made by the codebase about the database schema

This should be incremented whenever the codebase changes its requirements on the
@@ -162,6 +162,12 @@ Changes in SCHEMA_VERSION = 89
Changes in SCHEMA_VERSION = 90
    - Add a column `participant` to `room_memberships` table
    - Add background update to delete unreferenced state groups.

Changes in SCHEMA_VERSION = 91
    - Add a `sha256` column to the `local_media_repository` and `remote_media_cache` tables.

Changes in SCHEMA_VERSION = 92
    - Cleaned up a trigger that was added in #18260 and then reverted.
"""
@@ -13,8 +13,4 @@

-- Add a column `participant` to `room_memberships` table to track whether a room member has sent
-- a `m.room.message` or `m.room.encrypted` event into a room they are a member of
ALTER TABLE room_memberships ADD COLUMN participant BOOLEAN DEFAULT FALSE;

-- Add a background update to populate `participant` column
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
  (9001, 'populate_participant_bg_update', '{}');
ALTER TABLE room_memberships ADD COLUMN participant BOOLEAN DEFAULT FALSE;
@@ -0,0 +1,16 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.

-- Removes the trigger that was added in #18260 and then reverted
DROP TRIGGER IF EXISTS event_stats_increment_counts_trigger ON events;
DROP FUNCTION IF EXISTS event_stats_increment_counts();
@@ -0,0 +1,16 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.

-- Removes the trigger that was added in #18260 and then reverted
DROP TRIGGER IF EXISTS event_stats_events_insert_trigger;
DROP TRIGGER IF EXISTS event_stats_events_delete_trigger;
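Note the dialect difference between the two deltas above: PostgreSQL's `DROP TRIGGER` requires an `ON <table>` clause, while SQLite's does not, and `IF EXISTS` makes both idempotent — safe to run even on installs where the reverted trigger was never created. A quick self-contained check with the standard library (trigger names taken from the SQLite delta):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Dropping triggers that were never created is a no-op thanks to IF EXISTS,
# which is what makes this delta safe to apply unconditionally.
conn.execute("DROP TRIGGER IF EXISTS event_stats_events_insert_trigger")
conn.execute("DROP TRIGGER IF EXISTS event_stats_events_delete_trigger")
conn.close()
```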
@@ -0,0 +1,17 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.

-- Remove the background update if it was scheduled, as it is not rollback-safe
-- See https://github.com/element-hq/synapse/issues/18356 for context
DELETE FROM background_updates
    WHERE update_name = 'populate_participant_bg_update';
@@ -0,0 +1,16 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.

-- So we can fetch all rooms for a given user sorted by stream order
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
  (9204, 'sliding_sync_membership_snapshots_user_id_stream_ordering', '{}');
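The actual index DDL lives in the Python background-update handler, not in this delta; based on the update name, it presumably creates something of roughly this shape (column and index names here are an assumption, not the verified definition):

```python
# Guessed shape of the index the registered background update builds, so a
# user's room memberships can be paged in stream order. Illustrative only.
CREATE_INDEX_SQL = """
CREATE INDEX IF NOT EXISTS sliding_sync_membership_snapshots_user_id_stream_ordering
ON sliding_sync_membership_snapshots (user_id, event_stream_ordering)
"""
```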
@@ -0,0 +1,17 @@
--
-- This file is licensed under the Affero General Public License (AGPL) version 3.
--
-- Copyright (C) 2025 New Vector, Ltd
--
-- This program is free software: you can redistribute it and/or modify
-- it under the terms of the GNU Affero General Public License as
-- published by the Free Software Foundation, either version 3 of the
-- License, or (at your option) any later version.
--
-- See the GNU Affero General Public License for more details:
-- <https://www.gnu.org/licenses/agpl-3.0.html>.

-- Background update that fixes any events with a topological ordering above the
-- MAX_DEPTH value.
INSERT INTO background_updates (ordering, update_name, progress_json) VALUES
  (9205, 'fixup_max_depth_cap', '{}');
@@ -52,3 +52,5 @@ class _BackgroundUpdates:
    MARK_UNREFERENCED_STATE_GROUPS_FOR_DELETION_BG_UPDATE = (
        "mark_unreferenced_state_groups_for_deletion_bg_update"
    )

    FIXUP_MAX_DEPTH_CAP = "fixup_max_depth_cap"
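The handler wired to this constant isn't shown in this diff; judging from the tests further down, its core step clamps any oversized `topological_ordering` back to `MAX_DEPTH`, room by room. A self-contained sketch of just that clamping step, using an illustrative `MAX_DEPTH` value (Synapse defines its own constant in `synapse.api.constants`):

```python
import sqlite3

MAX_DEPTH = 2**31 - 1  # illustrative; not Synapse's actual constant

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (event_id TEXT, room_id TEXT, topological_ordering INTEGER)"
)
conn.execute("INSERT INTO events VALUES ('$a', '!r', ?)", (MAX_DEPTH + 5,))
conn.execute("INSERT INTO events VALUES ('$b', '!r', 7)")

# Clamp any event whose ordering exceeds the cap back down to MAX_DEPTH.
conn.execute(
    "UPDATE events SET topological_ordering = ? "
    "WHERE room_id = ? AND topological_ordering > ?",
    (MAX_DEPTH, "!r", MAX_DEPTH),
)
print(conn.execute("SELECT event_id, topological_ordering FROM events").fetchall())
# [('$a', 2147483647), ('$b', 7)]
```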
@@ -21,7 +21,7 @@

import phonenumbers

from synapse.api.errors import SynapseError
from synapse.api.errors import Codes, SynapseError


def phone_number_to_msisdn(country: str, number: str) -> str:
@@ -45,7 +45,7 @@ def phone_number_to_msisdn(country: str, number: str) -> str:
    try:
        phoneNumber = phonenumbers.parse(number, country)
    except phonenumbers.NumberParseException:
        raise SynapseError(400, "Unable to parse phone number")
        raise SynapseError(400, "Unable to parse phone number", Codes.INVALID_PARAM)
    return phonenumbers.format_number(phoneNumber, phonenumbers.PhoneNumberFormat.E164)[
        1:
    ]
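For illustration, the conversion above parses a national number and emits the E.164 digits without the leading `+`; a short usage sketch with a made-up US number:

```python
import phonenumbers

# Parse a national number for the given country, then format as E.164 and
# strip the leading "+" to get the MSISDN digits.
parsed = phonenumbers.parse("202-555-0123", "US")
msisdn = phonenumbers.format_number(parsed, phonenumbers.PhoneNumberFormat.E164)[1:]
print(msisdn)  # 12025550123
```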
@@ -147,6 +147,16 @@ class MSC3861OAuthDelegation(HomeserverTestCase):

        return hs

    def prepare(
        self, reactor: MemoryReactor, clock: Clock, homeserver: HomeServer
    ) -> None:
        # Provision the user and the device we use in the tests.
        store = homeserver.get_datastores().main
        self.get_success(store.register_user(USER_ID))
        self.get_success(
            store.store_device(USER_ID, DEVICE, initial_device_display_name=None)
        )

    def _assertParams(self) -> None:
        """Assert that the request parameters are correct."""
        params = parse_qs(self.http_client.request.call_args[1]["data"].decode("utf-8"))

@@ -1029,6 +1029,50 @@ class OidcHandlerTestCase(HomeserverTestCase):
        args = parse_qs(kwargs["data"].decode("utf-8"))
        self.assertEqual(args["redirect_uri"], [TEST_REDIRECT_URI])

    @override_config(
        {
            "oidc_config": {
                **DEFAULT_CONFIG,
                "redirect_uri": TEST_REDIRECT_URI,
            }
        }
    )
    def test_code_exchange_ignores_access_token(self) -> None:
        """
        Code exchange completes successfully and doesn't validate the `at_hash`
        (access token hash) field of an ID token when the access token isn't
        going to be used.

        The access token won't be used in this test because Synapse (currently)
        only needs it to fetch a user's metadata if it isn't included in the ID
        token itself.

        Because we have included "openid" in the requested scopes for this IdP
        (see `SCOPES`), user metadata is included in the ID token. Thus the
        access token isn't needed, and it's unnecessary for Synapse to validate
        the access token.

        This is a regression test for a situation where an upstream identity
        provider was providing an invalid `at_hash` value, which Synapse errored
        on, yet Synapse wasn't using the access token for anything.
        """
        # Exchange the code against the fake IdP.
        userinfo = {
            "sub": "foo",
            "username": "foo",
            "phone": "1234567",
        }
        with self.fake_server.id_token_override(
            {
                "at_hash": "invalid-hash",
            }
        ):
            request, _ = self.start_authorization(userinfo)
            self.get_success(self.handler.handle_oidc_callback(request))

        # If no error was rendered, then we have success.
        self.render_error.assert_not_called()

    @override_config(
        {
            "oidc_config": {
File diff suppressed because it is too large
@@ -992,6 +992,67 @@ class UserDirectoryTestCase(unittest.HomeserverTestCase):
        [self.assertIn(user, local_users) for user in received_user_id_ordering[:3]]
        [self.assertIn(user, remote_users) for user in received_user_id_ordering[3:]]

    @override_config(
        {
            "user_directory": {
                "enabled": True,
                "search_all_users": True,
                "exclude_remote_users": True,
            }
        }
    )
    def test_exclude_remote_users(self) -> None:
        """Tests that only local users are returned when
        user_directory.exclude_remote_users is True.
        """

        # Create a room and a few users to test the directory with
        searching_user = self.register_user("searcher", "password")
        searching_user_tok = self.login("searcher", "password")

        room_id = self.helper.create_room_as(
            searching_user,
            room_version=RoomVersions.V1.identifier,
            tok=searching_user_tok,
        )

        # Create a few local users and join them to the room
        local_user_1 = self.register_user("user_xxxxx", "password")
        local_user_2 = self.register_user("user_bbbbb", "password")
        local_user_3 = self.register_user("user_zzzzz", "password")

        self._add_user_to_room(room_id, RoomVersions.V1, local_user_1)
        self._add_user_to_room(room_id, RoomVersions.V1, local_user_2)
        self._add_user_to_room(room_id, RoomVersions.V1, local_user_3)

        # Create a few "remote" users and join them to the room
        remote_user_1 = "@user_aaaaa:remote_server"
        remote_user_2 = "@user_yyyyy:remote_server"
        remote_user_3 = "@user_ccccc:remote_server"
        self._add_user_to_room(room_id, RoomVersions.V1, remote_user_1)
        self._add_user_to_room(room_id, RoomVersions.V1, remote_user_2)
        self._add_user_to_room(room_id, RoomVersions.V1, remote_user_3)

        local_users = [local_user_1, local_user_2, local_user_3]
        remote_users = [remote_user_1, remote_user_2, remote_user_3]

        # The local searching user searches for the term "user", which other users have
        # in their user id
        results = self.get_success(
            self.handler.search_users(searching_user, "user", 20)
        )["results"]
        received_user_ids = [result["user_id"] for result in results]

        for user in local_users:
            self.assertIn(
                user, received_user_ids, f"Local user {user} not found in results"
            )

        for user in remote_users:
            self.assertNotIn(
                user, received_user_ids, f"Remote user {user} should not be in results"
            )

    def _add_user_to_room(
        self,
        room_id: str,
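The test above exercises the handler directly; over the wire, the same search goes through the Client-Server API's user directory endpoint. A hedged client-side sketch (homeserver URL and access token are placeholders):

```python
import requests

# POST /_matrix/client/v3/user_directory/search with the same term and limit
# the test uses; with exclude_remote_users enabled, only local users return.
resp = requests.post(
    "https://homeserver.example.com/_matrix/client/v3/user_directory/search",
    headers={"Authorization": "Bearer <access_token>"},
    json={"search_term": "user", "limit": 20},
)
resp.raise_for_status()
for result in resp.json()["results"]:
    print(result["user_id"])
```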
192 tests/rest/admin/test_scheduled_tasks.py Normal file
@@ -0,0 +1,192 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#
#
from typing import Mapping, Optional, Tuple

from twisted.test.proto_helpers import MemoryReactor

import synapse.rest.admin
from synapse.api.errors import Codes
from synapse.rest.client import login
from synapse.server import HomeServer
from synapse.types import JsonMapping, ScheduledTask, TaskStatus
from synapse.util import Clock

from tests import unittest


class ScheduledTasksAdminApiTestCase(unittest.HomeserverTestCase):
    servlets = [
        synapse.rest.admin.register_servlets,
        login.register_servlets,
    ]

    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
        self.store = hs.get_datastores().main
        self.admin_user = self.register_user("admin", "pass", admin=True)
        self.admin_user_tok = self.login("admin", "pass")
        self._task_scheduler = hs.get_task_scheduler()

        # create and schedule a few tasks
        async def _test_task(
            task: ScheduledTask,
        ) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
            return TaskStatus.ACTIVE, None, None

        async def _finished_test_task(
            task: ScheduledTask,
        ) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
            return TaskStatus.COMPLETE, None, None

        async def _failed_test_task(
            task: ScheduledTask,
        ) -> Tuple[TaskStatus, Optional[JsonMapping], Optional[str]]:
            return TaskStatus.FAILED, None, "Everything failed"

        self._task_scheduler.register_action(_test_task, "test_task")
        self.get_success(
            self._task_scheduler.schedule_task("test_task", resource_id="test")
        )

        self._task_scheduler.register_action(_finished_test_task, "finished_test_task")
        self.get_success(
            self._task_scheduler.schedule_task(
                "finished_test_task", resource_id="finished_task"
            )
        )

        self._task_scheduler.register_action(_failed_test_task, "failed_test_task")
        self.get_success(
            self._task_scheduler.schedule_task(
                "failed_test_task", resource_id="failed_task"
            )
        )

    def check_scheduled_tasks_response(self, scheduled_tasks: Mapping) -> list:
        result = []
        for task in scheduled_tasks:
            if task["resource_id"] == "test":
                self.assertEqual(task["status"], TaskStatus.ACTIVE)
                self.assertEqual(task["action"], "test_task")
                result.append(task)
            if task["resource_id"] == "finished_task":
                self.assertEqual(task["status"], TaskStatus.COMPLETE)
                self.assertEqual(task["action"], "finished_test_task")
                result.append(task)
            if task["resource_id"] == "failed_task":
                self.assertEqual(task["status"], TaskStatus.FAILED)
                self.assertEqual(task["action"], "failed_test_task")
                result.append(task)

        return result

    def test_requester_is_not_admin(self) -> None:
        """
        If the user is not a server admin, an error 403 is returned.
        """

        self.register_user("user", "pass", admin=False)
        other_user_tok = self.login("user", "pass")

        channel = self.make_request(
            "GET",
            "/_synapse/admin/v1/scheduled_tasks",
            content={},
            access_token=other_user_tok,
        )

        self.assertEqual(403, channel.code, msg=channel.json_body)
        self.assertEqual(Codes.FORBIDDEN, channel.json_body["errcode"])

    def test_scheduled_tasks(self) -> None:
        """
        Test that endpoint returns scheduled tasks.
        """

        channel = self.make_request(
            "GET",
            "/_synapse/admin/v1/scheduled_tasks",
            content={},
            access_token=self.admin_user_tok,
        )
        self.assertEqual(200, channel.code, msg=channel.json_body)
        scheduled_tasks = channel.json_body["scheduled_tasks"]

        # make sure we got back all the scheduled tasks
        found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)
        self.assertEqual(len(found_tasks), 3)

    def test_filtering_scheduled_tasks(self) -> None:
        """
        Test that filtering the scheduled tasks response via query params works as expected.
        """
        # filter via job_status
        channel = self.make_request(
            "GET",
            "/_synapse/admin/v1/scheduled_tasks?job_status=active",
            content={},
            access_token=self.admin_user_tok,
        )
        self.assertEqual(200, channel.code, msg=channel.json_body)
        scheduled_tasks = channel.json_body["scheduled_tasks"]
        found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)

        # only the active task should have been returned
        self.assertEqual(len(found_tasks), 1)
        self.assertEqual(found_tasks[0]["status"], "active")

        # filter via action_name
        channel = self.make_request(
            "GET",
            "/_synapse/admin/v1/scheduled_tasks?action_name=test_task",
            content={},
            access_token=self.admin_user_tok,
        )
        self.assertEqual(200, channel.code, msg=channel.json_body)
        scheduled_tasks = channel.json_body["scheduled_tasks"]

        # only test_task should have been returned
        found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)
        self.assertEqual(len(found_tasks), 1)
        self.assertEqual(found_tasks[0]["action"], "test_task")

        # filter via max_timestamp
        channel = self.make_request(
            "GET",
            "/_synapse/admin/v1/scheduled_tasks?max_timestamp=0",
            content={},
            access_token=self.admin_user_tok,
        )
        self.assertEqual(200, channel.code, msg=channel.json_body)
        scheduled_tasks = channel.json_body["scheduled_tasks"]
        found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)

        # none should have been returned
        self.assertEqual(len(found_tasks), 0)

        # filter via resource id
        channel = self.make_request(
            "GET",
            "/_synapse/admin/v1/scheduled_tasks?resource_id=failed_task",
            content={},
            access_token=self.admin_user_tok,
        )
        self.assertEqual(200, channel.code, msg=channel.json_body)
        scheduled_tasks = channel.json_body["scheduled_tasks"]
        found_tasks = self.check_scheduled_tasks_response(scheduled_tasks)

        # only the task with the matching resource id should have been returned
        self.assertEqual(len(found_tasks), 1)
        self.assertEqual(found_tasks[0]["resource_id"], "failed_task")
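Outside the test harness, the same endpoint can be queried directly; the tests above show the supported filters (`job_status`, `action_name`, `resource_id`, `max_timestamp`). A hedged usage sketch — base URL and admin token are placeholders:

```python
import requests

BASE = "https://homeserver.example.com"
HEADERS = {"Authorization": "Bearer <admin_access_token>"}

# Fetch only the currently-active tasks for a given action.
resp = requests.get(
    f"{BASE}/_synapse/admin/v1/scheduled_tasks",
    params={"job_status": "active", "action_name": "test_task"},
    headers=HEADERS,
)
resp.raise_for_status()
for task in resp.json()["scheduled_tasks"]:
    print(task["action"], task["resource_id"], task["status"])
```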
@@ -790,6 +790,64 @@ class SlidingSyncTestCase(SlidingSyncBase):
            exact=True,
        )

    def test_reject_remote_invite(self) -> None:
        """Test that rejecting a remote invite comes down incremental sync"""

        user_id = self.register_user("user1", "pass")
        user_tok = self.login(user_id, "pass")

        # Create a remote room invite (out-of-band membership)
        room_id = "!room:remote.server"
        self._create_remote_invite_room_for_user(user_id, None, room_id)

        # Make the Sliding Sync request
        sync_body = {
            "lists": {
                "foo-list": {
                    "ranges": [[0, 1]],
                    "required_state": [(EventTypes.Member, StateValues.ME)],
                    "timeline_limit": 3,
                }
            }
        }
        response_body, from_token = self.do_sync(sync_body, tok=user_tok)
        # We should see the room (like normal)
        self.assertIncludes(
            set(response_body["lists"]["foo-list"]["ops"][0]["room_ids"]),
            {room_id},
            exact=True,
        )

        # Reject the remote room invite
        self.helper.leave(room_id, user_id, tok=user_tok)

        # Sync again after rejecting the invite
        response_body, _ = self.do_sync(sync_body, since=from_token, tok=user_tok)

        # The fix to add the leave event to incremental sync when rejecting a remote
        # invite relies on the new tables to work.
        if self.use_new_tables:
            # We should see the newly_left room
            self.assertIncludes(
                set(response_body["lists"]["foo-list"]["ops"][0]["room_ids"]),
                {room_id},
                exact=True,
            )
            # We should see the leave state for the room so clients don't end up with
            # stuck invites
            self.assertIncludes(
                {
                    (
                        state["type"],
                        state["state_key"],
                        state["content"].get("membership"),
                    )
                    for state in response_body["rooms"][room_id]["required_state"]
                },
                {(EventTypes.Member, user_id, Membership.LEAVE)},
                exact=True,
            )

    def test_ignored_user_invites_initial_sync(self) -> None:
        """
        Make sure we ignore invites if they are from one of the `m.ignored_user_list` on
@@ -1262,18 +1262,18 @@ class JWTTestCase(unittest.HomeserverTestCase):
        channel = self.jwt_login({"sub": "kermit", "iss": "invalid"})
        self.assertEqual(channel.code, 403, msg=channel.result)
        self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
        self.assertEqual(
        self.assertRegex(
            channel.json_body["error"],
            'JWT validation failed: invalid_claim: Invalid claim "iss"',
            r"^JWT validation failed: invalid_claim: Invalid claim [\"']iss[\"']$",
        )

        # Not providing an issuer.
        channel = self.jwt_login({"sub": "kermit"})
        self.assertEqual(channel.code, 403, msg=channel.result)
        self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
        self.assertEqual(
        self.assertRegex(
            channel.json_body["error"],
            'JWT validation failed: missing_claim: Missing "iss" claim',
            r"^JWT validation failed: missing_claim: Missing [\"']iss[\"'] claim$",
        )

    def test_login_iss_no_config(self) -> None:
@@ -1294,18 +1294,18 @@ class JWTTestCase(unittest.HomeserverTestCase):
        channel = self.jwt_login({"sub": "kermit", "aud": "invalid"})
        self.assertEqual(channel.code, 403, msg=channel.result)
        self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
        self.assertEqual(
        self.assertRegex(
            channel.json_body["error"],
            'JWT validation failed: invalid_claim: Invalid claim "aud"',
            r"^JWT validation failed: invalid_claim: Invalid claim [\"']aud[\"']$",
        )

        # Not providing an audience.
        channel = self.jwt_login({"sub": "kermit"})
        self.assertEqual(channel.code, 403, msg=channel.result)
        self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
        self.assertEqual(
        self.assertRegex(
            channel.json_body["error"],
            'JWT validation failed: missing_claim: Missing "aud" claim',
            r"^JWT validation failed: missing_claim: Missing [\"']aud[\"'] claim$",
        )

    def test_login_aud_no_config(self) -> None:
@@ -1313,9 +1313,9 @@ class JWTTestCase(unittest.HomeserverTestCase):
        channel = self.jwt_login({"sub": "kermit", "aud": "invalid"})
        self.assertEqual(channel.code, 403, msg=channel.result)
        self.assertEqual(channel.json_body["errcode"], "M_FORBIDDEN")
        self.assertEqual(
        self.assertRegex(
            channel.json_body["error"],
            'JWT validation failed: invalid_claim: Invalid claim "aud"',
            r"^JWT validation failed: invalid_claim: Invalid claim [\"']aud[\"']$",
        )

    def test_login_default_sub(self) -> None:
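The switch from `assertEqual` to `assertRegex` above loosens the assertion so it accepts either single or double quotes around the claim name, presumably because different versions of the underlying JWT library quote claims differently. A quick self-contained check that the character class covers both variants:

```python
import re

# assertRegex uses re.search under the hood; the [\"'] class matches
# whichever quoting style the JWT library produces.
pattern = r"^JWT validation failed: missing_claim: Missing [\"']iss[\"'] claim$"
for msg in (
    'JWT validation failed: missing_claim: Missing "iss" claim',
    "JWT validation failed: missing_claim: Missing 'iss' claim",
):
    assert re.search(pattern, msg), msg
print("both quoting styles match")
```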
157 tests/storage/test_events_bg_updates.py Normal file
@@ -0,0 +1,157 @@
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2025 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
#

from typing import Dict

from twisted.test.proto_helpers import MemoryReactor

from synapse.api.constants import MAX_DEPTH
from synapse.api.room_versions import RoomVersion, RoomVersions
from synapse.server import HomeServer
from synapse.util import Clock

from tests.unittest import HomeserverTestCase


class TestFixupMaxDepthCapBgUpdate(HomeserverTestCase):
    """Test the background update that caps topological_ordering at MAX_DEPTH."""

    def prepare(
        self, reactor: MemoryReactor, clock: Clock, homeserver: HomeServer
    ) -> None:
        self.store = self.hs.get_datastores().main
        self.db_pool = self.store.db_pool

        self.room_id = "!testroom:example.com"

        # Reinsert the background update as it was already run at the start of
        # the test.
        self.get_success(
            self.db_pool.simple_insert(
                table="background_updates",
                values={
                    "update_name": "fixup_max_depth_cap",
                    "progress_json": "{}",
                },
            )
        )

    def create_room(self, room_version: RoomVersion) -> Dict[str, int]:
        """Create a room with a known room version and insert events.

        Returns the set of event IDs that exceed MAX_DEPTH and
        their depth.
        """

        # Create a room with a specific room version
        self.get_success(
            self.db_pool.simple_insert(
                table="rooms",
                values={
                    "room_id": self.room_id,
                    "room_version": room_version.identifier,
                },
            )
        )

        # Insert events with some depths exceeding MAX_DEPTH
        event_id_to_depth: Dict[str, int] = {}
        for depth in range(MAX_DEPTH - 5, MAX_DEPTH + 5):
            event_id = f"$event{depth}:example.com"
            event_id_to_depth[event_id] = depth

            self.get_success(
                self.db_pool.simple_insert(
                    table="events",
                    values={
                        "event_id": event_id,
                        "room_id": self.room_id,
                        "topological_ordering": depth,
                        "depth": depth,
                        "type": "m.test",
                        "sender": "@user:test",
                        "processed": True,
                        "outlier": False,
                    },
                )
            )

        return event_id_to_depth

    def test_fixup_max_depth_cap_bg_update(self) -> None:
        """Test that the background update correctly caps topological_ordering
        at MAX_DEPTH."""

        event_id_to_depth = self.create_room(RoomVersions.V6)

        # Run the background update
        progress = {"room_id": ""}
        batch_size = 10
        num_rooms = self.get_success(
            self.store.fixup_max_depth_cap_bg_update(progress, batch_size)
        )

        # Verify the number of rooms processed
        self.assertEqual(num_rooms, 1)

        # Verify that the topological_ordering of events has been capped at
        # MAX_DEPTH
        rows = self.get_success(
            self.db_pool.simple_select_list(
                table="events",
                keyvalues={"room_id": self.room_id},
                retcols=["event_id", "topological_ordering"],
            )
        )

        for event_id, topological_ordering in rows:
            if event_id_to_depth[event_id] >= MAX_DEPTH:
                # Events with a depth greater than or equal to MAX_DEPTH should
                # be capped at MAX_DEPTH.
                self.assertEqual(topological_ordering, MAX_DEPTH)
            else:
                # Events with a depth less than MAX_DEPTH should remain
                # unchanged.
                self.assertEqual(topological_ordering, event_id_to_depth[event_id])

    def test_fixup_max_depth_cap_bg_update_old_room_version(self) -> None:
        """Test that the background update does not cap topological_ordering for
        rooms with old room versions."""

        event_id_to_depth = self.create_room(RoomVersions.V5)

        # Run the background update
        progress = {"room_id": ""}
        batch_size = 10
        num_rooms = self.get_success(
            self.store.fixup_max_depth_cap_bg_update(progress, batch_size)
        )

        # Verify the number of rooms processed
        self.assertEqual(num_rooms, 0)

        # Fetch the topological_ordering of all events in the room
        rows = self.get_success(
            self.db_pool.simple_select_list(
                table="events",
                keyvalues={"room_id": self.room_id},
                retcols=["event_id", "topological_ordering"],
            )
        )

        # Assert that the topological_ordering of events has not been changed
        # from their depth.
        self.assertDictEqual(event_id_to_depth, dict(rows))
@@ -20,7 +20,9 @@
#


import base64
import json
from hashlib import sha256
from typing import Any, ContextManager, Dict, List, Optional, Tuple
from unittest.mock import Mock, patch
from urllib.parse import parse_qs
@@ -154,10 +156,23 @@ class FakeOidcServer:
        json_payload = json.dumps(payload)
        return jws.serialize_compact(protected, json_payload, self._key).decode("utf-8")

    def generate_id_token(self, grant: FakeAuthorizationGrant) -> str:
    def generate_id_token(
        self, grant: FakeAuthorizationGrant, access_token: str
    ) -> str:
        # Generate a hash of the access token for the optional
        # `at_hash` field in an ID Token.
        #
        # 3.1.3.6. ID Token, https://openid.net/specs/openid-connect-core-1_0.html#CodeIDToken
        at_hash = (
            base64.urlsafe_b64encode(sha256(access_token.encode("ascii")).digest()[:16])
            .rstrip(b"=")
            .decode("ascii")
        )

        now = int(self._clock.time())
        id_token = {
            **grant.userinfo,
            "at_hash": at_hash,
            "iss": self.issuer,
            "aud": grant.client_id,
            "iat": now,
@@ -243,7 +258,7 @@ class FakeOidcServer:
        }

        if "openid" in grant.scope:
            token["id_token"] = self.generate_id_token(grant)
            token["id_token"] = self.generate_id_token(grant, access_token)

        return dict(token)
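For reference, the `at_hash` rule the fake server now implements comes from OpenID Connect Core §3.1.3.6: base64url-encode the left-most half of the SHA-256 digest of the access token, with padding stripped (for SHA-256 the half-digest is 16 bytes, hence the `[:16]` above). A self-contained sketch of the same computation:

```python
import base64
from hashlib import sha256

def compute_at_hash(access_token: str) -> str:
    # Left-most half of the SHA-256 digest, base64url-encoded, no padding.
    digest = sha256(access_token.encode("ascii")).digest()
    return (
        base64.urlsafe_b64encode(digest[: len(digest) // 2])
        .rstrip(b"=")
        .decode("ascii")
    )

print(compute_at_hash("example-access-token"))
```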