Compare commits: `readme-cli`...`release-v1` (29 commits)
| Author | SHA1 | Date |
|---|---|---|
|  | 5009ad212e |  |
|  | c4f5522189 |  |
|  | 6a1352e564 |  |
|  | 76f990c244 |  |
|  | 47c682101f |  |
|  | 2de2258bd2 |  |
|  | ff0148a165 |  |
|  | bf69b57422 |  |
|  | c5b543938b |  |
|  | 4bb2b4aa9a |  |
|  | 1f033ad5e4 |  |
|  | 60c5e8d79b |  |
|  | 0cc6509cfe |  |
|  | a11511954a |  |
|  | daec55e1fe |  |
|  | 7a3a55ac98 |  |
|  | 4724a6ded1 |  |
|  | de981ae567 |  |
|  | 91aa52c911 |  |
|  | 408c13801a |  |
|  | fdce83ee60 |  |
|  | a3f6200d65 |  |
|  | 4e1a19d375 |  |
|  | 8c63e93286 |  |
|  | ba55835000 |  |
|  | 8f7cd4cd03 |  |
|  | 425cb9c23c |  |
|  | 13f6467785 |  |
|  | bcaaeab410 |  |
`.github/workflows/latest_deps.yml` (vendored): 7 changed lines

```diff
@@ -197,11 +197,14 @@ jobs:
       with:
         path: synapse

-      - uses: actions/setup-go@v4
-
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh

+      - uses: actions/setup-go@v4
+        with:
+          cache-dependency-path: complement/go.sum
+          go-version-file: complement/go.mod
+
       - run: |
           set -o pipefail
           TEST_ONLY_IGNORE_POETRY_LOCKFILE=1 POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | synapse/.ci/scripts/gotestfmt
```
`.github/workflows/release-artifacts.yml` (vendored): 2 changed lines

```diff
@@ -130,7 +130,7 @@ jobs:
           python-version: "3.x"

       - name: Install cibuildwheel
-        run: python -m pip install cibuildwheel==2.9.0
+        run: python -m pip install cibuildwheel==2.16.2

       - name: Set up QEMU to emulate aarch64
         if: matrix.arch == 'aarch64'
```
`.github/workflows/tests.yml` (vendored): 7 changed lines

```diff
@@ -633,11 +633,14 @@ jobs:
         uses: dtolnay/rust-toolchain@1.61.0
       - uses: Swatinem/rust-cache@v2

-      - uses: actions/setup-go@v4
-
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh

+      - uses: actions/setup-go@v4
+        with:
+          cache-dependency-path: complement/go.sum
+          go-version-file: complement/go.mod
+
       # use p=1 concurrency as GHA boxes are underpowered and don't like running tons of synapses at once.
       - run: |
           set -o pipefail
```
`.github/workflows/twisted_trunk.yml` (vendored): 7 changed lines

```diff
@@ -168,11 +168,14 @@ jobs:
       with:
         path: synapse

-      - uses: actions/setup-go@v4
-
       - name: Prepare Complement's Prerequisites
         run: synapse/.ci/scripts/setup_complement_prerequisites.sh

+      - uses: actions/setup-go@v4
+        with:
+          cache-dependency-path: complement/go.sum
+          go-version-file: complement/go.mod
+
       # This step is specific to the 'Twisted trunk' test run:
       - name: Patch dependencies
         run: |
```
`CHANGES.md`: 100 changed lines (all additions)

`@@ -1,3 +1,103 @@`

```markdown
# Synapse 1.96.1 (2023-11-17)

Synapse will soon be forked by Element under an AGPLv3.0 licence (with CLA, for
proprietary dual licensing). You can read more about this here:

* https://matrix.org/blog/2023/11/06/future-of-synapse-dendrite/
* https://element.io/blog/element-to-adopt-agplv3/

The Matrix.org Foundation copy of the project will be archived. Any changes needed
by server administrators will be communicated via our usual
[announcements channels](https://matrix.to/#/#homeowners:matrix.org), but we are
striving to make this as seamless as possible.

This minor release was needed only because of CI-related trouble on [v1.96.0](https://github.com/matrix-org/synapse/releases/tag/v1.96.0), which was never released.

### Internal Changes

- Fix building of wheels in CI. ([\#16653](https://github.com/matrix-org/synapse/issues/16653))

# Synapse 1.96.0 (2023-11-16)

### Bugfixes

- Fix "'int' object is not iterable" error in `set_device_id_for_pushers` background update introduced in Synapse 1.95.0. ([\#16594](https://github.com/matrix-org/synapse/issues/16594))

# Synapse 1.96.0rc1 (2023-10-31)

### Features

- Add experimental support to allow multiple workers to write to receipts stream. ([\#16432](https://github.com/matrix-org/synapse/issues/16432))
- Add a new module API for controller presence. ([\#16544](https://github.com/matrix-org/synapse/issues/16544))
- Add a new module API callback that allows adding extra fields to events' unsigned section when sent down to clients. ([\#16549](https://github.com/matrix-org/synapse/issues/16549))
- Improve the performance of claiming encryption keys. ([\#16565](https://github.com/matrix-org/synapse/issues/16565), [\#16570](https://github.com/matrix-org/synapse/issues/16570))

### Bugfixes

- Fixed a bug in the example Grafana dashboard that prevents it from finding the correct datasource. Contributed by @MichaelSasser. ([\#16471](https://github.com/matrix-org/synapse/issues/16471))
- Fix a long-standing, exceedingly rare edge case where the first event persisted by a new event persister worker might not be sent down `/sync`. ([\#16473](https://github.com/matrix-org/synapse/issues/16473), [\#16557](https://github.com/matrix-org/synapse/issues/16557), [\#16561](https://github.com/matrix-org/synapse/issues/16561), [\#16578](https://github.com/matrix-org/synapse/issues/16578), [\#16580](https://github.com/matrix-org/synapse/issues/16580))
- Fix long-standing bug where `/sync` incorrectly did not mark a room as `limited` in a sync requests when there were missing remote events. ([\#16485](https://github.com/matrix-org/synapse/issues/16485))
- Fix a bug introduced in Synapse 1.41 where HTTP(S) forward proxy authorization would fail when using basic HTTP authentication with a long `username:password` string. ([\#16504](https://github.com/matrix-org/synapse/issues/16504))
- Force TLS certificate verification in user registration script. ([\#16530](https://github.com/matrix-org/synapse/issues/16530))
- Fix long-standing bug where `/sync` could tightloop after restart when using SQLite. ([\#16540](https://github.com/matrix-org/synapse/issues/16540))
- Fix ratelimiting of message sending when using workers, where the ratelimit would only be applied after most of the work has been done. ([\#16558](https://github.com/matrix-org/synapse/issues/16558))
- Fix a long-standing bug where invited/knocking users would not leave during a room purge. ([\#16559](https://github.com/matrix-org/synapse/issues/16559))

### Improved Documentation

- Improve documentation of presence router. ([\#16529](https://github.com/matrix-org/synapse/issues/16529))
- Add a sentence to the [opentracing docs](https://matrix-org.github.io/synapse/latest/opentracing.html) on how you can have jaeger in a different place than synapse. ([\#16531](https://github.com/matrix-org/synapse/issues/16531))
- Correctly describe the meaning of unspecified rule lists in the [`alias_creation_rules`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#alias_creation_rules) and [`room_list_publication_rules`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#room_list_publication_rules) config options and improve their descriptions more generally. ([\#16541](https://github.com/matrix-org/synapse/issues/16541))
- Pin the recommended poetry version in [contributors' guide](https://matrix-org.github.io/synapse/latest/development/contributing_guide.html). ([\#16550](https://github.com/matrix-org/synapse/issues/16550))
- Fix a broken link to the [client breakdown](https://matrix.org/ecosystem/clients/) in the README. ([\#16569](https://github.com/matrix-org/synapse/issues/16569))

### Internal Changes

- Improve performance of delete device messages query, cf issue [16479](https://github.com/matrix-org/synapse/issues/16479). ([\#16492](https://github.com/matrix-org/synapse/issues/16492))
- Reduce memory allocations. ([\#16505](https://github.com/matrix-org/synapse/issues/16505))
- Improve replication performance when purging rooms. ([\#16510](https://github.com/matrix-org/synapse/issues/16510))
- Run tests against Python 3.12. ([\#16511](https://github.com/matrix-org/synapse/issues/16511))
- Run trial & integration tests in continuous integration when `.ci` directory is modified. ([\#16512](https://github.com/matrix-org/synapse/issues/16512))
- Remove duplicate call to mark remote server 'awake' when using a federation sending worker. ([\#16515](https://github.com/matrix-org/synapse/issues/16515))
- Enable dirty runs on Complement CI, which is significantly faster. ([\#16520](https://github.com/matrix-org/synapse/issues/16520))
- Stop deleting from an unused table. ([\#16521](https://github.com/matrix-org/synapse/issues/16521))
- Improve type hints. ([\#16526](https://github.com/matrix-org/synapse/issues/16526), [\#16551](https://github.com/matrix-org/synapse/issues/16551))
- Fix running unit tests on Twisted trunk. ([\#16528](https://github.com/matrix-org/synapse/issues/16528))
- Reduce some spurious logging in worker mode. ([\#16555](https://github.com/matrix-org/synapse/issues/16555))
- Stop porting a table in port db that we're going to nuke and rebuild anyway. ([\#16563](https://github.com/matrix-org/synapse/issues/16563))
- Deal with warnings from running complement in CI. ([\#16567](https://github.com/matrix-org/synapse/issues/16567))
- Allow building with `setuptools_rust` 1.8.0. ([\#16574](https://github.com/matrix-org/synapse/issues/16574))

### Updates to locked dependencies

* Bump black from 23.10.0 to 23.10.1. ([\#16575](https://github.com/matrix-org/synapse/issues/16575))
* Bump black from 23.9.1 to 23.10.0. ([\#16538](https://github.com/matrix-org/synapse/issues/16538))
* Bump cryptography from 41.0.4 to 41.0.5. ([\#16572](https://github.com/matrix-org/synapse/issues/16572))
* Bump gitpython from 3.1.37 to 3.1.40. ([\#16534](https://github.com/matrix-org/synapse/issues/16534))
* Bump phonenumbers from 8.13.22 to 8.13.23. ([\#16576](https://github.com/matrix-org/synapse/issues/16576))
* Bump pygithub from 1.59.1 to 2.1.1. ([\#16535](https://github.com/matrix-org/synapse/issues/16535))
- Bump matrix-synapse-ldap3 from 0.2.2 to 0.3.0. ([\#16539](https://github.com/matrix-org/synapse/issues/16539))
* Bump serde from 1.0.189 to 1.0.190. ([\#16577](https://github.com/matrix-org/synapse/issues/16577))
* Bump setuptools-rust from 1.7.0 to 1.8.0. ([\#16574](https://github.com/matrix-org/synapse/issues/16574))
* Bump types-pillow from 10.0.0.3 to 10.1.0.0. ([\#16536](https://github.com/matrix-org/synapse/issues/16536))
* Bump types-psycopg2 from 2.9.21.14 to 2.9.21.15. ([\#16573](https://github.com/matrix-org/synapse/issues/16573))
* Bump types-requests from 2.31.0.2 to 2.31.0.10. ([\#16537](https://github.com/matrix-org/synapse/issues/16537))
* Bump urllib3 from 1.26.17 to 1.26.18. ([\#16516](https://github.com/matrix-org/synapse/issues/16516))

# Synapse 1.95.1 (2023-10-31)

## Security advisory

The following issue is fixed in 1.95.1.

- [GHSA-mp92-3jfm-3575](https://github.com/matrix-org/synapse/security/advisories/GHSA-mp92-3jfm-3575) / [CVE-2023-43796](https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2023-43796) — Moderate Severity

Cached device information of remote users can be queried from Synapse. This can be used to enumerate the remote users known to a homeserver.

See the advisory for more details. If you have any questions, email security@matrix.org.

# Synapse 1.95.0 (2023-10-24)

### Internal Changes
```
`Cargo.lock` (generated): 8 changed lines

```diff
@@ -332,18 +332,18 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"

 [[package]]
 name = "serde"
-version = "1.0.189"
+version = "1.0.190"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "8e422a44e74ad4001bdc8eede9a4570ab52f71190e9c076d14369f38b9200537"
+checksum = "91d3c334ca1ee894a2c6f6ad698fe8c435b76d504b13d436f0685d648d6d96f7"
 dependencies = [
  "serde_derive",
 ]

 [[package]]
 name = "serde_derive"
-version = "1.0.189"
+version = "1.0.190"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1e48d1f918009ce3145511378cf68d613e3b3d9137d67272562080d68a2b32d5"
+checksum = "67c5609f394e5c2bd7fc51efda478004ea80ef42fee983d5c67a65e34f32c0e3"
 dependencies = [
  "proc-macro2",
  "quote",
```
`book.toml`: 12 changed lines

```diff
@@ -34,6 +34,14 @@ additional-css = [
     "docs/website_files/table-of-contents.css",
     "docs/website_files/remove-nav-buttons.css",
     "docs/website_files/indent-section-headers.css",
+    "docs/website_files/version-picker.css",
 ]
-additional-js = ["docs/website_files/table-of-contents.js"]
-theme = "docs/website_files/theme"
+additional-js = [
+    "docs/website_files/table-of-contents.js",
+    "docs/website_files/version-picker.js",
+    "docs/website_files/version.js",
+]
+theme = "docs/website_files/theme"

 [preprocessor.schema_versions]
 command = "./scripts-dev/schema_versions.py"
```
Deleted one-line changelog entries (one file per hunk):

```diff
@@ -1 +0,0 @@
-Allow multiple workers to write to receipts stream.
@@ -1 +0,0 @@
-Fixed a bug that prevents Grafana from finding the correct datasource. Contributed by @MichaelSasser.
@@ -1 +0,0 @@
-Fix a long-standing, exceedingly rare edge case where the first event persisted by a new event persister worker might not be sent down `/sync`.
@@ -1 +0,0 @@
-Fix long-standing bug where `/sync` incorrectly did not mark a room as `limited` in a sync requests when there were missing remote events.
@@ -1 +0,0 @@
-Improve performance of delete device messages query, cf issue [16479](https://github.com/matrix-org/synapse/issues/16479).
@@ -1 +0,0 @@
-Fix a bug introduced in Synapse 1.41 where HTTP(S) forward proxy authorization would fail when using basic HTTP authentication with a long `username:password` string.
@@ -1 +0,0 @@
-Reduce memory allocations.
@@ -1 +0,0 @@
-Improve replication performance when purging rooms.
@@ -1 +0,0 @@
-Run tests against Python 3.12.
@@ -1 +0,0 @@
-Run trial & integration tests in continuous integration when `.ci` directory is modified.
@@ -1 +0,0 @@
-Remove duplicate call to mark remote server 'awake' when using a federation sending worker.
@@ -1 +0,0 @@
-Enable dirty runs on Complement CI, which is significantly faster.
@@ -1 +0,0 @@
-Stop deleting from an unused table.
@@ -1 +0,0 @@
-Improve type hints.
@@ -1 +0,0 @@
-Fix running unit tests on Twisted trunk.
@@ -1 +0,0 @@
-Improve documentation of presence router.
@@ -1 +0,0 @@
-Force TLS certificate verification in user registration script.
@@ -1 +0,0 @@
-Add a sentence to the opentracing docs on how you can have jaeger in a different place than synapse.
@@ -1 +0,0 @@
-Bump matrix-synapse-ldap3 from 0.2.2 to 0.3.0.
@@ -1 +0,0 @@
-Fix long-standing bug where `/sync` could tightloop after restart when using SQLite.
@@ -1 +0,0 @@
-Correctly describe the meaning of unspecified rule lists in the [`alias_creation_rules`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#alias_creation_rules) and [`room_list_publication_rules`](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html#room_list_publication_rules) config options and improve their descriptions more generally.
@@ -1 +0,0 @@
-Add a new module API for controller presence.
@@ -1 +0,0 @@
-Add a new module API callback that allows adding extra fields to events' unsigned section when sent down to clients.
@@ -1 +0,0 @@
-Pin the recommended poetry version in contributors' guide.
@@ -1 +0,0 @@
-Improve type hints.
@@ -1 +0,0 @@
-Reduce some spurious logging in worker mode.
@@ -1 +0,0 @@
-Fix a long-standing, exceedingly rare edge case where the first event persisted by a new event persister worker might not be sent down `/sync`.
@@ -1 +0,0 @@
-Fix ratelimiting of message sending when using workers, where the ratelimit would only be applied after most of the work has been done.
@@ -1 +0,0 @@
-Fix a long-standing bug where invited/knocking users would not leave during a room purge.
@@ -1 +0,0 @@
-Fix a long-standing, exceedingly rare edge case where the first event persisted by a new event persister worker might not be sent down `/sync`.
@@ -1 +0,0 @@
-Stop porting a table in port db that we're going to nuke and rebuild anyway.
@@ -1 +0,0 @@
-Fix a broken link to the [client breakdown](https://matrix.org/ecosystem/clients/) in the README.
```
`debian/changelog` (vendored): 24 changed lines

```diff
@@ -1,3 +1,27 @@
+matrix-synapse-py3 (1.96.1) stable; urgency=medium
+
+  * New synapse release 1.96.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Fri, 17 Nov 2023 12:48:45 +0000
+
+matrix-synapse-py3 (1.96.0) stable; urgency=medium
+
+  * New synapse release 1.96.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Thu, 16 Nov 2023 17:54:26 +0000
+
+matrix-synapse-py3 (1.96.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.96.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 31 Oct 2023 14:09:09 +0000
+
+matrix-synapse-py3 (1.95.1) stable; urgency=medium
+
+  * New Synapse release 1.95.1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 31 Oct 2023 14:00:00 +0000
+
 matrix-synapse-py3 (1.95.0) stable; urgency=medium

   * New Synapse release 1.95.0.
```
```diff
@@ -24,6 +24,11 @@ Finally, we also stylise the chapter titles in the left sidebar by indenting the
 slightly so that they are more visually distinguishable from the section headers
 (the bold titles). This is done through the `indent-section-headers.css` file.

+In addition to these modifications, we have added a version picker to the documentation.
+Users can switch between documentations for different versions of Synapse.
+This functionality was implemented through the `version-picker.js` and
+`version-picker.css` files.
+
 More information can be found in mdbook's official documentation for
 [injecting page JS/CSS](https://rust-lang.github.io/mdBook/format/config.html)
 and
```
```diff
@@ -131,6 +131,18 @@
         <i class="fa fa-search"></i>
       </button>
       {{/if}}
+      <div class="version-picker">
+        <div class="dropdown">
+          <div class="select">
+            <span></span>
+            <i class="fa fa-chevron-down"></i>
+          </div>
+          <input type="hidden" name="version">
+          <ul class="dropdown-menu">
+            <!-- Versions will be added dynamically in version-picker.js -->
+          </ul>
+        </div>
+      </div>
     </div>

     <h1 class="menu-title">{{ book_title }}</h1>

@@ -309,4 +321,4 @@
 {{/if}}

 </body>
-</html>
+</html>
```
`docs/website_files/version-picker.css` (new file): 78 lines

```css
.version-picker {
    display: flex;
    align-items: center;
}

.version-picker .dropdown {
    width: 130px;
    max-height: 29px;
    margin-left: 10px;
    display: inline-block;
    border-radius: 4px;
    border: 1px solid var(--theme-popup-border);
    position: relative;
    font-size: 13px;
    color: var(--fg);
    height: 100%;
    text-align: left;
}
.version-picker .dropdown .select {
    cursor: pointer;
    display: block;
    padding: 5px 2px 5px 15px;
}
.version-picker .dropdown .select > i {
    font-size: 10px;
    color: var(--fg);
    cursor: pointer;
    float: right;
    line-height: 20px !important;
}
.version-picker .dropdown:hover {
    border: 1px solid var(--theme-popup-border);
}
.version-picker .dropdown:active {
    background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active:hover,
.version-picker .dropdown.active {
    border: 1px solid var(--theme-popup-border);
    border-radius: 2px 2px 0 0;
    background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active .select > i {
    transform: rotate(-180deg);
}
.version-picker .dropdown .dropdown-menu {
    position: absolute;
    background-color: var(--theme-popup-bg);
    width: 100%;
    left: -1px;
    right: 1px;
    margin-top: 1px;
    border: 1px solid var(--theme-popup-border);
    border-radius: 0 0 4px 4px;
    overflow: hidden;
    display: none;
    max-height: 300px;
    overflow-y: auto;
    z-index: 9;
}
.version-picker .dropdown .dropdown-menu li {
    font-size: 12px;
    padding: 6px 20px;
    cursor: pointer;
}
.version-picker .dropdown .dropdown-menu {
    padding: 0;
    list-style: none;
}
.version-picker .dropdown .dropdown-menu li:hover {
    background-color: var(--theme-hover);
}
.version-picker .dropdown .dropdown-menu li.active::before {
    display: inline-block;
    content: "✓";
    margin-inline-start: -14px;
    width: 14px;
}
```
`docs/website_files/version-picker.js` (new file): 127 lines

```javascript
const dropdown = document.querySelector('.version-picker .dropdown');
const dropdownMenu = dropdown.querySelector('.dropdown-menu');

fetchVersions(dropdown, dropdownMenu).then(() => {
    initializeVersionDropdown(dropdown, dropdownMenu);
});

/**
 * Initialize the dropdown functionality for version selection.
 *
 * @param {Element} dropdown - The dropdown element.
 * @param {Element} dropdownMenu - The dropdown menu element.
 */
function initializeVersionDropdown(dropdown, dropdownMenu) {
    // Toggle the dropdown menu on click
    dropdown.addEventListener('click', function () {
        this.setAttribute('tabindex', 1);
        this.classList.toggle('active');
        dropdownMenu.style.display = (dropdownMenu.style.display === 'block') ? 'none' : 'block';
    });

    // Remove the 'active' class and hide the dropdown menu on focusout
    dropdown.addEventListener('focusout', function () {
        this.classList.remove('active');
        dropdownMenu.style.display = 'none';
    });

    // Handle item selection within the dropdown menu
    const dropdownMenuItems = dropdownMenu.querySelectorAll('li');
    dropdownMenuItems.forEach(function (item) {
        item.addEventListener('click', function () {
            dropdownMenuItems.forEach(function (item) {
                item.classList.remove('active');
            });
            this.classList.add('active');
            dropdown.querySelector('span').textContent = this.textContent;
            dropdown.querySelector('input').value = this.getAttribute('id');

            window.location.href = changeVersion(window.location.href, this.textContent);
        });
    });
};

/**
 * This function fetches the available versions from a GitHub repository
 * and inserts them into the version picker.
 *
 * @param {Element} dropdown - The dropdown element.
 * @param {Element} dropdownMenu - The dropdown menu element.
 * @returns {Promise<Array<string>>} A promise that resolves with an array of available versions.
 */
function fetchVersions(dropdown, dropdownMenu) {
    return new Promise((resolve, reject) => {
        window.addEventListener("load", () => {

            fetch("https://api.github.com/repos/matrix-org/synapse/git/trees/gh-pages", {
                cache: "force-cache",
            }).then(res =>
                res.json()
            ).then(resObject => {
                const excluded = ['dev-docs', 'v1.91.0', 'v1.80.0', 'v1.69.0'];
                const tree = resObject.tree.filter(item => item.type === "tree" && !excluded.includes(item.path));
                const versions = tree.map(item => item.path).sort(sortVersions);

                // Create a list of <li> items for versions
                versions.forEach((version) => {
                    const li = document.createElement("li");
                    li.textContent = version;
                    li.id = version;

                    if (window.SYNAPSE_VERSION === version) {
                        li.classList.add('active');
                        dropdown.querySelector('span').textContent = version;
                        dropdown.querySelector('input').value = version;
                    }

                    dropdownMenu.appendChild(li);
                });

                resolve(versions);

            }).catch(ex => {
                console.error("Failed to fetch version data", ex);
                reject(ex);
            })
        });
    });
}

/**
 * Custom sorting function to sort an array of version strings.
 *
 * @param {string} a - The first version string to compare.
 * @param {string} b - The second version string to compare.
 * @returns {number} - A negative number if a should come before b, a positive number if b should come before a, or 0 if they are equal.
 */
function sortVersions(a, b) {
    // Put 'develop' and 'latest' at the top
    if (a === 'develop' || a === 'latest') return -1;
    if (b === 'develop' || b === 'latest') return 1;

    const versionA = (a.match(/v\d+(\.\d+)+/) || [])[0];
    const versionB = (b.match(/v\d+(\.\d+)+/) || [])[0];

    return versionB.localeCompare(versionA);
}

/**
 * Change the version in a URL path.
 *
 * @param {string} url - The original URL to be modified.
 * @param {string} newVersion - The new version to replace the existing version in the URL.
 * @returns {string} The updated URL with the new version.
 */
function changeVersion(url, newVersion) {
    const parsedURL = new URL(url);
    const pathSegments = parsedURL.pathname.split('/');

    // Modify the version
    pathSegments[2] = newVersion;

    // Reconstruct the URL
    parsedURL.pathname = pathSegments.join('/');

    return parsedURL.href;
}
```
`docs/website_files/version.js` (new file): 1 line

```javascript
window.SYNAPSE_VERSION = 'v1.96';
```
`poetry.lock` (generated): 113 changed lines

```diff
@@ -1,4 +1,4 @@
-# This file is automatically @generated by Poetry 1.5.1 and should not be changed by hand.
+# This file is automatically @generated by Poetry 1.6.1 and should not be changed by hand.

 [[package]]
 name = "alabaster"
@@ -162,29 +162,29 @@ lxml = ["lxml"]

 [[package]]
 name = "black"
-version = "23.10.0"
+version = "23.10.1"
 description = "The uncompromising code formatter."
 optional = false
 python-versions = ">=3.8"
 files = [
-    {file = "black-23.10.0-cp310-cp310-macosx_10_16_arm64.whl", hash = "sha256:f8dc7d50d94063cdfd13c82368afd8588bac4ce360e4224ac399e769d6704e98"},
-    {file = "black-23.10.0-cp310-cp310-macosx_10_16_x86_64.whl", hash = "sha256:f20ff03f3fdd2fd4460b4f631663813e57dc277e37fb216463f3b907aa5a9bdd"},
-    {file = "black-23.10.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:d3d9129ce05b0829730323bdcb00f928a448a124af5acf90aa94d9aba6969604"},
-    {file = "black-23.10.0-cp310-cp310-win_amd64.whl", hash = "sha256:960c21555be135c4b37b7018d63d6248bdae8514e5c55b71e994ad37407f45b8"},
-    {file = "black-23.10.0-cp311-cp311-macosx_10_16_arm64.whl", hash = "sha256:30b78ac9b54cf87bcb9910ee3d499d2bc893afd52495066c49d9ee6b21eee06e"},
-    {file = "black-23.10.0-cp311-cp311-macosx_10_16_x86_64.whl", hash = "sha256:0e232f24a337fed7a82c1185ae46c56c4a6167fb0fe37411b43e876892c76699"},
-    {file = "black-23.10.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:31946ec6f9c54ed7ba431c38bc81d758970dd734b96b8e8c2b17a367d7908171"},
-    {file = "black-23.10.0-cp311-cp311-win_amd64.whl", hash = "sha256:c870bee76ad5f7a5ea7bd01dc646028d05568d33b0b09b7ecfc8ec0da3f3f39c"},
-    {file = "black-23.10.0-cp38-cp38-macosx_10_16_arm64.whl", hash = "sha256:6901631b937acbee93c75537e74f69463adaf34379a04eef32425b88aca88a23"},
-    {file = "black-23.10.0-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:481167c60cd3e6b1cb8ef2aac0f76165843a374346aeeaa9d86765fe0dd0318b"},
-    {file = "black-23.10.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:f74892b4b836e5162aa0452393112a574dac85e13902c57dfbaaf388e4eda37c"},
-    {file = "black-23.10.0-cp38-cp38-win_amd64.whl", hash = "sha256:47c4510f70ec2e8f9135ba490811c071419c115e46f143e4dce2ac45afdcf4c9"},
-    {file = "black-23.10.0-cp39-cp39-macosx_10_16_arm64.whl", hash = "sha256:76baba9281e5e5b230c9b7f83a96daf67a95e919c2dfc240d9e6295eab7b9204"},
-    {file = "black-23.10.0-cp39-cp39-macosx_10_16_x86_64.whl", hash = "sha256:a3c2ddb35f71976a4cfeca558848c2f2f89abc86b06e8dd89b5a65c1e6c0f22a"},
-    {file = "black-23.10.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db451a3363b1e765c172c3fd86213a4ce63fb8524c938ebd82919bf2a6e28c6a"},
-    {file = "black-23.10.0-cp39-cp39-win_amd64.whl", hash = "sha256:7fb5fc36bb65160df21498d5a3dd330af8b6401be3f25af60c6ebfe23753f747"},
-    {file = "black-23.10.0-py3-none-any.whl", hash = "sha256:e223b731a0e025f8ef427dd79d8cd69c167da807f5710add30cdf131f13dd62e"},
-    {file = "black-23.10.0.tar.gz", hash = "sha256:31b9f87b277a68d0e99d2905edae08807c007973eaa609da5f0c62def6b7c0bd"},
+    {file = "black-23.10.1-cp310-cp310-macosx_10_16_arm64.whl", hash = "sha256:ec3f8e6234c4e46ff9e16d9ae96f4ef69fa328bb4ad08198c8cee45bb1f08c69"},
+    {file = "black-23.10.1-cp310-cp310-macosx_10_16_x86_64.whl", hash = "sha256:1b917a2aa020ca600483a7b340c165970b26e9029067f019e3755b56e8dd5916"},
+    {file = "black-23.10.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9c74de4c77b849e6359c6f01987e94873c707098322b91490d24296f66d067dc"},
+    {file = "black-23.10.1-cp310-cp310-win_amd64.whl", hash = "sha256:7b4d10b0f016616a0d93d24a448100adf1699712fb7a4efd0e2c32bbb219b173"},
+    {file = "black-23.10.1-cp311-cp311-macosx_10_16_arm64.whl", hash = "sha256:b15b75fc53a2fbcac8a87d3e20f69874d161beef13954747e053bca7a1ce53a0"},
+    {file = "black-23.10.1-cp311-cp311-macosx_10_16_x86_64.whl", hash = "sha256:e293e4c2f4a992b980032bbd62df07c1bcff82d6964d6c9496f2cd726e246ace"},
+    {file = "black-23.10.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:7d56124b7a61d092cb52cce34182a5280e160e6aff3137172a68c2c2c4b76bcb"},
+    {file = "black-23.10.1-cp311-cp311-win_amd64.whl", hash = "sha256:3f157a8945a7b2d424da3335f7ace89c14a3b0625e6593d21139c2d8214d55ce"},
+    {file = "black-23.10.1-cp38-cp38-macosx_10_16_arm64.whl", hash = "sha256:cfcce6f0a384d0da692119f2d72d79ed07c7159879d0bb1bb32d2e443382bf3a"},
+    {file = "black-23.10.1-cp38-cp38-macosx_10_16_x86_64.whl", hash = "sha256:33d40f5b06be80c1bbce17b173cda17994fbad096ce60eb22054da021bf933d1"},
+    {file = "black-23.10.1-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:840015166dbdfbc47992871325799fd2dc0dcf9395e401ada6d88fe11498abad"},
+    {file = "black-23.10.1-cp38-cp38-win_amd64.whl", hash = "sha256:037e9b4664cafda5f025a1728c50a9e9aedb99a759c89f760bd83730e76ba884"},
+    {file = "black-23.10.1-cp39-cp39-macosx_10_16_arm64.whl", hash = "sha256:7cb5936e686e782fddb1c73f8aa6f459e1ad38a6a7b0e54b403f1f05a1507ee9"},
+    {file = "black-23.10.1-cp39-cp39-macosx_10_16_x86_64.whl", hash = "sha256:7670242e90dc129c539e9ca17665e39a146a761e681805c54fbd86015c7c84f7"},
+    {file = "black-23.10.1-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:5ed45ac9a613fb52dad3b61c8dea2ec9510bf3108d4db88422bacc7d1ba1243d"},
+    {file = "black-23.10.1-cp39-cp39-win_amd64.whl", hash = "sha256:6d23d7822140e3fef190734216cefb262521789367fbdc0b3f22af6744058982"},
+    {file = "black-23.10.1-py3-none-any.whl", hash = "sha256:d431e6739f727bb2e0495df64a6c7a5310758e87505f5f8cde9ff6c0f2d7e4fe"},
+    {file = "black-23.10.1.tar.gz", hash = "sha256:1f8ce316753428ff68749c65a5f7844631aa18c8679dfd3ca9dc1a289979c258"},
 ]

 [package.dependencies]
@@ -467,34 +467,34 @@ files = [

 [[package]]
 name = "cryptography"
-version = "41.0.4"
+version = "41.0.5"
 description = "cryptography is a package which provides cryptographic recipes and primitives to Python developers."
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "cryptography-41.0.4-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:80907d3faa55dc5434a16579952ac6da800935cd98d14dbd62f6f042c7f5e839"},
-    {file = "cryptography-41.0.4-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:35c00f637cd0b9d5b6c6bd11b6c3359194a8eba9c46d4e875a3660e3b400005f"},
-    {file = "cryptography-41.0.4-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:cecfefa17042941f94ab54f769c8ce0fe14beff2694e9ac684176a2535bf9714"},
-    {file = "cryptography-41.0.4-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e40211b4923ba5a6dc9769eab704bdb3fbb58d56c5b336d30996c24fcf12aadb"},
-    {file = "cryptography-41.0.4-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:23a25c09dfd0d9f28da2352503b23e086f8e78096b9fd585d1d14eca01613e13"},
-    {file = "cryptography-41.0.4-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:2ed09183922d66c4ec5fdaa59b4d14e105c084dd0febd27452de8f6f74704143"},
-    {file = "cryptography-41.0.4-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:5a0f09cefded00e648a127048119f77bc2b2ec61e736660b5789e638f43cc397"},
-    {file = "cryptography-41.0.4-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:9eeb77214afae972a00dee47382d2591abe77bdae166bda672fb1e24702a3860"},
-    {file = "cryptography-41.0.4-cp37-abi3-win32.whl", hash = "sha256:3b224890962a2d7b57cf5eeb16ccaafba6083f7b811829f00476309bce2fe0fd"},
-    {file = "cryptography-41.0.4-cp37-abi3-win_amd64.whl", hash = "sha256:c880eba5175f4307129784eca96f4e70b88e57aa3f680aeba3bab0e980b0f37d"},
-    {file = "cryptography-41.0.4-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:004b6ccc95943f6a9ad3142cfabcc769d7ee38a3f60fb0dddbfb431f818c3a67"},
-    {file = "cryptography-41.0.4-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:86defa8d248c3fa029da68ce61fe735432b047e32179883bdb1e79ed9bb8195e"},
-    {file = "cryptography-41.0.4-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:37480760ae08065437e6573d14be973112c9e6dcaf5f11d00147ee74f37a3829"},
-    {file = "cryptography-41.0.4-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:b5f4dfe950ff0479f1f00eda09c18798d4f49b98f4e2006d644b3301682ebdca"},
-    {file = "cryptography-41.0.4-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:7e53db173370dea832190870e975a1e09c86a879b613948f09eb49324218c14d"},
-    {file = "cryptography-41.0.4-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:5b72205a360f3b6176485a333256b9bcd48700fc755fef51c8e7e67c4b63e3ac"},
-    {file = "cryptography-41.0.4-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:93530900d14c37a46ce3d6c9e6fd35dbe5f5601bf6b3a5c325c7bffc030344d9"},
-    {file = "cryptography-41.0.4-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:efc8ad4e6fc4f1752ebfb58aefece8b4e3c4cae940b0994d43649bdfce8d0d4f"},
-    {file = "cryptography-41.0.4-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c3391bd8e6de35f6f1140e50aaeb3e2b3d6a9012536ca23ab0d9c35ec18c8a91"},
-    {file = "cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:0d9409894f495d465fe6fda92cb70e8323e9648af912d5b9141d616df40a87b8"},
-    {file = "cryptography-41.0.4-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:8ac4f9ead4bbd0bc8ab2d318f97d85147167a488be0e08814a37eb2f439d5cf6"},
-    {file = "cryptography-41.0.4-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:047c4603aeb4bbd8db2756e38f5b8bd7e94318c047cfe4efeb5d715e08b49311"},
-    {file = "cryptography-41.0.4.tar.gz", hash = "sha256:7febc3094125fc126a7f6fb1f420d0da639f3f32cb15c8ff0dc3997c4549f51a"},
+    {file = "cryptography-41.0.5-cp37-abi3-macosx_10_12_universal2.whl", hash = "sha256:da6a0ff8f1016ccc7477e6339e1d50ce5f59b88905585f77193ebd5068f1e797"},
+    {file = "cryptography-41.0.5-cp37-abi3-macosx_10_12_x86_64.whl", hash = "sha256:b948e09fe5fb18517d99994184854ebd50b57248736fd4c720ad540560174ec5"},
+    {file = "cryptography-41.0.5-cp37-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:d38e6031e113b7421db1de0c1b1f7739564a88f1684c6b89234fbf6c11b75147"},
+    {file = "cryptography-41.0.5-cp37-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:e270c04f4d9b5671ebcc792b3ba5d4488bf7c42c3c241a3748e2599776f29696"},
+    {file = "cryptography-41.0.5-cp37-abi3-manylinux_2_28_aarch64.whl", hash = "sha256:ec3b055ff8f1dce8e6ef28f626e0972981475173d7973d63f271b29c8a2897da"},
+    {file = "cryptography-41.0.5-cp37-abi3-manylinux_2_28_x86_64.whl", hash = "sha256:7d208c21e47940369accfc9e85f0de7693d9a5d843c2509b3846b2db170dfd20"},
+    {file = "cryptography-41.0.5-cp37-abi3-musllinux_1_1_aarch64.whl", hash = "sha256:8254962e6ba1f4d2090c44daf50a547cd5f0bf446dc658a8e5f8156cae0d8548"},
+    {file = "cryptography-41.0.5-cp37-abi3-musllinux_1_1_x86_64.whl", hash = "sha256:a48e74dad1fb349f3dc1d449ed88e0017d792997a7ad2ec9587ed17405667e6d"},
+    {file = "cryptography-41.0.5-cp37-abi3-win32.whl", hash = "sha256:d3977f0e276f6f5bf245c403156673db103283266601405376f075c849a0b936"},
+    {file = "cryptography-41.0.5-cp37-abi3-win_amd64.whl", hash = "sha256:73801ac9736741f220e20435f84ecec75ed70eda90f781a148f1bad546963d81"},
+    {file = "cryptography-41.0.5-pp310-pypy310_pp73-macosx_10_12_x86_64.whl", hash = "sha256:3be3ca726e1572517d2bef99a818378bbcf7d7799d5372a46c79c29eb8d166c1"},
+    {file = "cryptography-41.0.5-pp310-pypy310_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:e886098619d3815e0ad5790c973afeee2c0e6e04b4da90b88e6bd06e2a0b1b72"},
+    {file = "cryptography-41.0.5-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:573eb7128cbca75f9157dcde974781209463ce56b5804983e11a1c462f0f4e88"},
+    {file = "cryptography-41.0.5-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:0c327cac00f082013c7c9fb6c46b7cc9fa3c288ca702c74773968173bda421bf"},
+    {file = "cryptography-41.0.5-pp38-pypy38_pp73-macosx_10_12_x86_64.whl", hash = "sha256:227ec057cd32a41c6651701abc0328135e472ed450f47c2766f23267b792a88e"},
+    {file = "cryptography-41.0.5-pp38-pypy38_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:22892cc830d8b2c89ea60148227631bb96a7da0c1b722f2aac8824b1b7c0b6b8"},
+    {file = "cryptography-41.0.5-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:5a70187954ba7292c7876734183e810b728b4f3965fbe571421cb2434d279179"},
+    {file = "cryptography-41.0.5-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:88417bff20162f635f24f849ab182b092697922088b477a7abd6664ddd82291d"},
+    {file = "cryptography-41.0.5-pp39-pypy39_pp73-macosx_10_12_x86_64.whl", hash = "sha256:c707f7afd813478e2019ae32a7c49cd932dd60ab2d2a93e796f68236b7e1fbf1"},
+    {file = "cryptography-41.0.5-pp39-pypy39_pp73-manylinux_2_28_aarch64.whl", hash = "sha256:580afc7b7216deeb87a098ef0674d6ee34ab55993140838b14c9b83312b37b86"},
+    {file = "cryptography-41.0.5-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:fba1e91467c65fe64a82c689dc6cf58151158993b13eb7a7f3f4b7f395636723"},
+    {file = "cryptography-41.0.5-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:0d2a6a598847c46e3e321a7aef8af1436f11c27f1254933746304ff014664d84"},
+    {file = "cryptography-41.0.5.tar.gz", hash = "sha256:392cb88b597247177172e02da6b7a63deeff1937fa6fec3bbf902ebd75d97ec7"},
 ]

 [package.dependencies]
@@ -1624,13 +1624,13 @@ files = [

 [[package]]
 name = "phonenumbers"
-version = "8.13.22"
+version = "8.13.23"
 description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
 optional = false
 python-versions = "*"
 files = [
-    {file = "phonenumbers-8.13.22-py2.py3-none-any.whl", hash = "sha256:85ceeba9e67984ba98182c77e8e4c70093d38c0c6a0cb2bd392e0694ddaeb1f6"},
-    {file = "phonenumbers-8.13.22.tar.gz", hash = "sha256:001664c90f59b8954766c2db85adafc8dbc96177efeb49607ca4e64a7acaf569"},
+    {file = "phonenumbers-8.13.23-py2.py3-none-any.whl", hash = "sha256:34d6cb279dd4a64714e324c71350f96e5bda3237be28d11b4c555c44701544cd"},
+    {file = "phonenumbers-8.13.23.tar.gz", hash = "sha256:869e44fcaaf276eca6b953a401e2b27d57461f3a18a66cf5f13377e7bb0e228c"},
 ]

 [[package]]
@@ -1765,6 +1765,8 @@ files = [
     {file = "psycopg2-2.9.9-cp310-cp310-win_amd64.whl", hash = "sha256:426f9f29bde126913a20a96ff8ce7d73fd8a216cfb323b1f04da402d452853c3"},
     {file = "psycopg2-2.9.9-cp311-cp311-win32.whl", hash = "sha256:ade01303ccf7ae12c356a5e10911c9e1c51136003a9a1d92f7aa9d010fb98372"},
     {file = "psycopg2-2.9.9-cp311-cp311-win_amd64.whl", hash = "sha256:121081ea2e76729acfb0673ff33755e8703d45e926e416cb59bae3a86c6a4981"},
+    {file = "psycopg2-2.9.9-cp312-cp312-win32.whl", hash = "sha256:d735786acc7dd25815e89cc4ad529a43af779db2e25aa7c626de864127e5a024"},
+    {file = "psycopg2-2.9.9-cp312-cp312-win_amd64.whl", hash = "sha256:a7653d00b732afb6fc597e29c50ad28087dcb4fbfb28e86092277a559ae4e693"},
     {file = "psycopg2-2.9.9-cp37-cp37m-win32.whl", hash = "sha256:5e0d98cade4f0e0304d7d6f25bbfbc5bd186e07b38eac65379309c4ca3193efa"},
     {file = "psycopg2-2.9.9-cp37-cp37m-win_amd64.whl", hash = "sha256:7e2dacf8b009a1c1e843b5213a87f7c544b2b042476ed7755be813eaf4e8347a"},
     {file = "psycopg2-2.9.9-cp38-cp38-win32.whl", hash = "sha256:ff432630e510709564c01dafdbe996cb552e0b9f3f065eb89bdce5bd31fabf4c"},
@@ -2578,20 +2580,19 @@ testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (

 [[package]]
 name = "setuptools-rust"
-version = "1.7.0"
+version = "1.8.0"
 description = "Setuptools Rust extension plugin"
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 files = [
-    {file = "setuptools-rust-1.7.0.tar.gz", hash = "sha256:c7100999948235a38ae7e555fe199aa66c253dc384b125f5d85473bf81eae3a3"},
-    {file = "setuptools_rust-1.7.0-py3-none-any.whl", hash = "sha256:071099885949132a2180d16abf907b60837e74b4085047ba7e9c0f5b365310c1"},
+    {file = "setuptools-rust-1.8.0.tar.gz", hash = "sha256:5e02b7a80058853bf64127314f6b97d0efed11e08b94c88ca639a20976f6adc4"},
+    {file = "setuptools_rust-1.8.0-py3-none-any.whl", hash = "sha256:95ec67edee2ca73233c9e75250e9d23a302aa23b4c8413dfd19c14c30d08f703"},
 ]

 [package.dependencies]
 semantic-version = ">=2.8.2,<3"
 setuptools = ">=62.4"
 tomli = {version = ">=1.2.1", markers = "python_version < \"3.11\""}
-typing-extensions = ">=3.7.4.3"

 [[package]]
 name = "signedjson"
@@ -3116,13 +3117,13 @@ files = [

 [[package]]
 name = "types-psycopg2"
-version = "2.9.21.14"
+version = "2.9.21.15"
 description = "Typing stubs for psycopg2"
 optional = false
-python-versions = "*"
+python-versions = ">=3.7"
 files = [
-    {file = "types-psycopg2-2.9.21.14.tar.gz", hash = "sha256:bf73a0ac4da4e278c89bf1b01fc596d5a5ac7a356cfe6ac0249f47b9e259f868"},
-    {file = "types_psycopg2-2.9.21.14-py3-none-any.whl", hash = "sha256:cd9c5350631f3bc6184ec8d48f2ed31d4ea660f89d0fffe78239450782f383c5"},
+    {file = "types-psycopg2-2.9.21.15.tar.gz", hash = "sha256:cf99b62ab32cd4ef412fc3c4da1c29ca5a130847dff06d709b84a523802406f0"},
+    {file = "types_psycopg2-2.9.21.15-py3-none-any.whl", hash = "sha256:cc80479def02e4dd1ef21649d82f04426c73bc0693bcc0a8b5223c7c168472af"},
 ]

 [[package]]
```
```diff
@@ -96,7 +96,7 @@ module-name = "synapse.synapse_rust"

 [tool.poetry]
 name = "matrix-synapse"
-version = "1.95.0"
+version = "1.96.1"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "Apache-2.0"

@@ -381,7 +381,7 @@ furo = ">=2022.12.7,<2024.0.0"
 # system changes.
 # We are happy to raise these upper bounds upon request,
 # provided we check that it's safe to do so (i.e. that CI passes).
-requires = ["poetry-core>=1.1.0,<=1.7.0", "setuptools_rust>=1.3,<=1.7.0"]
+requires = ["poetry-core>=1.1.0,<=1.7.0", "setuptools_rust>=1.3,<=1.8.0"]
 build-backend = "poetry.core.masonry.api"
```
```diff
@@ -84,7 +84,7 @@ from synapse.replication.http.federation import (
 from synapse.storage.databases.main.lock import Lock
 from synapse.storage.databases.main.roommember import extract_heroes_from_room_summary
 from synapse.storage.roommember import MemberSummary
-from synapse.types import JsonDict, StateMap, get_domain_from_id
+from synapse.types import JsonDict, StateMap, get_domain_from_id, UserID
 from synapse.util import unwrapFirstError
 from synapse.util.async_helpers import Linearizer, concurrently_execute, gather_results
 from synapse.util.caches.response_cache import ResponseCache

@@ -999,6 +999,12 @@ class FederationServer(FederationBase):
     async def on_claim_client_keys(
         self, query: List[Tuple[str, str, str, int]], always_include_fallback_keys: bool
     ) -> Dict[str, Any]:
+        if any(
+            not self.hs.is_mine(UserID.from_string(user_id))
+            for user_id, _, _, _ in query
+        ):
+            raise SynapseError(400, "User is not hosted on this homeserver")
+
         log_kv({"message": "Claiming one time keys.", "user, device pairs": query})
         results = await self._e2e_keys_handler.claim_local_one_time_keys(
             query, always_include_fallback_keys=always_include_fallback_keys
```
```diff
@@ -328,6 +328,9 @@ class DeviceWorkerHandler:
         return result

     async def on_federation_query_user_devices(self, user_id: str) -> JsonDict:
+        if not self.hs.is_mine(UserID.from_string(user_id)):
+            raise SynapseError(400, "User is not hosted on this homeserver")
+
         stream_id, devices = await self.store.get_e2e_device_keys_for_federation_query(
             user_id
         )
```
```diff
@@ -542,6 +542,12 @@ class E2eKeysHandler:
         device_keys_query: Dict[str, Optional[List[str]]] = query_body.get(
             "device_keys", {}
         )
+        if any(
+            not self.is_mine(UserID.from_string(user_id))
+            for user_id in device_keys_query
+        ):
+            raise SynapseError(400, "User is not hosted on this homeserver")
+
         res = await self.query_local_devices(
             device_keys_query,
             include_displaynames=(
```
```diff
@@ -659,6 +665,20 @@ class E2eKeysHandler:
         timeout: Optional[int],
         always_include_fallback_keys: bool,
     ) -> JsonDict:
+        """
+        Args:
+            query: A chain of maps from (user_id, device_id, algorithm) to the requested
+                number of keys to claim.
+            user: The user who is claiming these keys.
+            timeout: How long to wait for any federation key claim requests before
+                giving up.
+            always_include_fallback_keys: always include a fallback key for local users'
+                devices, even if we managed to claim a one-time-key.
+
+        Returns: a heterogeneous dict with two keys:
+            one_time_keys: chain of maps user ID -> device ID -> key ID -> key.
+            failures: map from remote destination to a JsonDict describing the error.
+        """
         local_query: List[Tuple[str, str, str, int]] = []
         remote_queries: Dict[str, Dict[str, Dict[str, Dict[str, int]]]] = {}
```
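For orientation, the "chain of maps" shape that the new docstring describes can be written out as a plain dict; the identifiers below are made up for illustration:

```python
# Hypothetical claim query: user ID -> device ID -> algorithm -> number of
# one-time keys requested (the shape described in the docstring above).
query = {
    "@alice:example.org": {
        "DEVICEID": {"signed_curve25519": 1},
    },
}
```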
```diff
@@ -739,6 +759,16 @@ class E2eKeysHandler:
     async def upload_keys_for_user(
         self, user_id: str, device_id: str, keys: JsonDict
     ) -> JsonDict:
+        """
+        Args:
+            user_id: user whose keys are being uploaded.
+            device_id: device whose keys are being uploaded.
+            keys: the body of a /keys/upload request.
+
+        Returns a dictionary with one field:
+            "one_time_keys": A mapping from algorithm to number of keys for that
+                algorithm, including those previously persisted.
+        """
         # This can only be called from the main process.
         assert isinstance(self.device_handler, DeviceHandler)
```
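A minimal illustration of the count map this docstring names (the value is made up):

```python
# Hypothetical return value of upload_keys_for_user: algorithm -> number of
# keys now stored for the device, including keys persisted by earlier uploads.
response = {"one_time_keys": {"signed_curve25519": 50}}
```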
```diff
@@ -433,7 +433,7 @@ class ReplicationEndpoint(metaclass=abc.ABCMeta):

         if self.WAIT_FOR_STREAMS:
             response[_STREAM_POSITION_KEY] = {
-                stream.NAME: stream.current_token(self._instance_name)
+                stream.NAME: stream.minimal_local_current_token()
                 for stream in self._streams
             }
```
```diff
@@ -157,6 +157,12 @@ class EventsStream(_StreamFromIdGen):
         current_token: Token,
         target_row_count: int,
     ) -> StreamUpdateResult:
+        # The events stream cannot be "reset", so its safe to return early if
+        # the from token is larger than the current token (the DB query will
+        # trivially return 0 rows anyway).
+        if from_token >= current_token:
+            return [], current_token, False
+
         # the events stream merges together three separate sources:
         #  * new events
         #  * current_state changes
```
```diff
@@ -420,6 +420,16 @@ class LoggingTransaction:
         self._do_execute(self.txn.execute, sql, parameters)

     def executemany(self, sql: str, *args: Any) -> None:
+        """Repeatedly execute the same piece of SQL with different parameters.
+
+        See https://peps.python.org/pep-0249/#executemany. Note in particular that
+
+        > Use of this method for an operation which produces one or more result sets
+        > constitutes undefined behavior
+
+        so you can't use this for e.g. a SELECT, an UPDATE ... RETURNING, or a
+        DELETE FROM... RETURNING.
+        """
         # TODO: we should add a type for *args here. Looking at Cursor.executemany
         # and DBAPI2 it ought to be Sequence[_Parameter], but we pass in
         # Iterable[Iterable[Any]] in execute_batch and execute_values above, which mypy
```
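The PEP 249 contract quoted in this new docstring is easy to demonstrate with the standard library's `sqlite3` module; this is an illustrative sketch against a made-up table, not Synapse's `LoggingTransaction` wrapper:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE otk (user_id TEXT, key_id TEXT)")

# One statement, many parameter tuples: fine for INSERT/UPDATE/DELETE.
conn.executemany(
    "INSERT INTO otk (user_id, key_id) VALUES (?, ?)",
    [("@a:hs", "k1"), ("@a:hs", "k2"), ("@b:hs", "k1")],
)

# Statements that produce result sets (SELECT, ... RETURNING) are undefined
# behaviour under executemany, so results are fetched with execute() instead.
rows = conn.execute("SELECT user_id, key_id FROM otk ORDER BY key_id").fetchall()
print(rows)
```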
```diff
@@ -24,6 +24,7 @@ from typing import (
     Mapping,
     Optional,
     Sequence,
+    Set,
     Tuple,
     Union,
     cast,

@@ -1110,7 +1111,7 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
         ...

     async def claim_e2e_one_time_keys(
-        self, query_list: Iterable[Tuple[str, str, str, int]]
+        self, query_list: Collection[Tuple[str, str, str, int]]
     ) -> Tuple[
         Dict[str, Dict[str, Dict[str, JsonDict]]], List[Tuple[str, str, str, int]]
     ]:
```
@@ -1120,131 +1121,63 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
|
||||
query_list: An iterable of tuples of (user ID, device ID, algorithm).
|
||||
|
||||
Returns:
|
||||
A tuple pf:
|
||||
A tuple (results, missing) of:
|
||||
A map of user ID -> a map device ID -> a map of key ID -> JSON.
|
||||
|
||||
A copy of the input which has not been fulfilled.
|
||||
A copy of the input which has not been fulfilled. The returned counts
|
||||
may be less than the input counts. In this case, the returned counts
|
||||
are the number of claims that were not fulfilled.
|
||||
"""
|
||||
|
||||
@trace
|
||||
def _claim_e2e_one_time_key_simple(
|
||||
txn: LoggingTransaction,
|
||||
user_id: str,
|
||||
device_id: str,
|
||||
algorithm: str,
|
||||
count: int,
|
||||
) -> List[Tuple[str, str]]:
|
||||
"""Claim OTK for device for DBs that don't support RETURNING.
|
||||
|
||||
Returns:
|
||||
A tuple of key name (algorithm + key ID) and key JSON, if an
|
||||
OTK was found.
|
||||
"""
|
||||
|
||||
sql = """
|
||||
SELECT key_id, key_json FROM e2e_one_time_keys_json
|
||||
WHERE user_id = ? AND device_id = ? AND algorithm = ?
|
||||
LIMIT ?
|
||||
"""
|
||||
|
||||
txn.execute(sql, (user_id, device_id, algorithm, count))
|
||||
otk_rows = list(txn)
|
||||
if not otk_rows:
|
||||
return []
|
||||
|
||||
self.db_pool.simple_delete_many_txn(
|
||||
txn,
|
||||
table="e2e_one_time_keys_json",
|
||||
column="key_id",
|
||||
values=[otk_row[0] for otk_row in otk_rows],
|
||||
keyvalues={
|
||||
"user_id": user_id,
|
||||
"device_id": device_id,
|
||||
"algorithm": algorithm,
|
||||
},
|
||||
)
|
||||
self._invalidate_cache_and_stream(
|
||||
txn, self.count_e2e_one_time_keys, (user_id, device_id)
|
||||
)
|
||||
|
||||
return [
|
||||
(f"{algorithm}:{key_id}", key_json) for key_id, key_json in otk_rows
|
||||
]
|
||||
|
||||
@trace
|
||||
def _claim_e2e_one_time_key_returning(
|
||||
txn: LoggingTransaction,
|
||||
user_id: str,
|
||||
device_id: str,
|
||||
algorithm: str,
|
||||
count: int,
|
||||
) -> List[Tuple[str, str]]:
|
||||
"""Claim OTK for device for DBs that support RETURNING.
|
||||
|
||||
Returns:
|
||||
A tuple of key name (algorithm + key ID) and key JSON, if an
|
||||
OTK was found.
|
||||
"""
|
||||
|
||||
# We can use RETURNING to do the fetch and DELETE in once step.
|
||||
sql = """
|
||||
DELETE FROM e2e_one_time_keys_json
|
||||
WHERE user_id = ? AND device_id = ? AND algorithm = ?
|
||||
AND key_id IN (
|
||||
SELECT key_id FROM e2e_one_time_keys_json
|
||||
WHERE user_id = ? AND device_id = ? AND algorithm = ?
|
||||
LIMIT ?
|
||||
)
|
||||
RETURNING key_id, key_json
|
||||
"""
|
||||
|
||||
txn.execute(
|
||||
sql,
|
||||
(user_id, device_id, algorithm, user_id, device_id, algorithm, count),
|
||||
)
|
||||
otk_rows = list(txn)
|
||||
if not otk_rows:
|
||||
return []
|
||||
|
||||
self._invalidate_cache_and_stream(
|
||||
txn, self.count_e2e_one_time_keys, (user_id, device_id)
|
||||
)
|
||||
|
||||
return [
|
||||
(f"{algorithm}:{key_id}", key_json) for key_id, key_json in otk_rows
|
||||
]

        results: Dict[str, Dict[str, Dict[str, JsonDict]]] = {}
        missing: List[Tuple[str, str, str, int]] = []
        for user_id, device_id, algorithm, count in query_list:
            if self.database_engine.supports_returning:
                # If we support RETURNING clause we can use a single query that
                # allows us to use autocommit mode.
                _claim_e2e_one_time_key = _claim_e2e_one_time_key_returning
                db_autocommit = True
            else:
                _claim_e2e_one_time_key = _claim_e2e_one_time_key_simple
                db_autocommit = False

            claim_rows = await self.db_pool.runInteraction(
                "claim_e2e_one_time_keys",
                _claim_e2e_one_time_key,
                user_id,
                device_id,
                algorithm,
                count,
                db_autocommit=db_autocommit,
            )
            if claim_rows:
                device_results = results.setdefault(user_id, {}).setdefault(
                    device_id, {}
                )
                for claim_row in claim_rows:
                    device_results[claim_row[0]] = json_decoder.decode(claim_row[1])

                # Did we get enough OTKs?
                count -= len(claim_rows)
                if count:
                    missing.append((user_id, device_id, algorithm, count))
        if isinstance(self.database_engine, PostgresEngine):
            # If we can use execute_values we can use a single batch query
            # in autocommit mode.
            unfulfilled_claim_counts: Dict[Tuple[str, str, str], int] = {}
            for user_id, device_id, algorithm, count in query_list:
                unfulfilled_claim_counts[user_id, device_id, algorithm] = count

            bulk_claims = await self.db_pool.runInteraction(
                "claim_e2e_one_time_keys",
                self._claim_e2e_one_time_keys_bulk,
                query_list,
                db_autocommit=True,
            )

            for user_id, device_id, algorithm, key_id, key_json in bulk_claims:
                device_results = results.setdefault(user_id, {}).setdefault(
                    device_id, {}
                )
                device_results[f"{algorithm}:{key_id}"] = json_decoder.decode(key_json)
                unfulfilled_claim_counts[(user_id, device_id, algorithm)] -= 1

            missing = [
                (user, device, alg, count)
                for (user, device, alg), count in unfulfilled_claim_counts.items()
                if count > 0
            ]
        else:
            for user_id, device_id, algorithm, count in query_list:
                claim_rows = await self.db_pool.runInteraction(
                    "claim_e2e_one_time_keys",
                    self._claim_e2e_one_time_key_simple,
                    user_id,
                    device_id,
                    algorithm,
                    count,
                    db_autocommit=False,
                )
                if claim_rows:
                    device_results = results.setdefault(user_id, {}).setdefault(
                        device_id, {}
                    )
                    for claim_row in claim_rows:
                        device_results[claim_row[0]] = json_decoder.decode(claim_row[1])
                    # Did we get enough OTKs?
                    count -= len(claim_rows)
                    if count:
                        missing.append((user_id, device_id, algorithm, count))

        return results, missing
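
A note on the shape of this change: the old per-(user, device, algorithm) loop is kept essentially verbatim as the non-Postgres fallback, while the Postgres branch collapses N round trips into one bulk statement. That bulk call can run with `db_autocommit=True` because it issues a single query, so there is no multi-statement transaction that might need retrying; the same reasoning appears in the docstring of `_claim_e2e_fallback_keys_bulk_txn` below ("Safe to autocommit: this is a single query").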

@@ -1260,6 +1193,65 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker
        Returns:
            A map of user ID -> a map of device ID -> a map of key ID -> JSON.
        """
        if isinstance(self.database_engine, PostgresEngine):
            return await self.db_pool.runInteraction(
                "_claim_e2e_fallback_keys_bulk",
                self._claim_e2e_fallback_keys_bulk_txn,
                query_list,
                db_autocommit=True,
            )
            # Use an UPDATE FROM... RETURNING combined with a VALUES block to do
            # everything in one query. Note: this is also supported in SQLite 3.33.0,
            # (see https://www.sqlite.org/lang_update.html#update_from), but we do not
            # have an equivalent of psycopg2's execute_values to do this in one query.
        else:
            return await self._claim_e2e_fallback_keys_simple(query_list)

    def _claim_e2e_fallback_keys_bulk_txn(
        self,
        txn: LoggingTransaction,
        query_list: Iterable[Tuple[str, str, str, bool]],
    ) -> Dict[str, Dict[str, Dict[str, JsonDict]]]:
        """Efficient implementation of claim_e2e_fallback_keys for Postgres.

        Safe to autocommit: this is a single query.
        """
        results: Dict[str, Dict[str, Dict[str, JsonDict]]] = {}

        sql = """
            WITH claims(user_id, device_id, algorithm, mark_as_used) AS (
                VALUES ?
            )
            UPDATE e2e_fallback_keys_json k
            SET used = used OR mark_as_used
            FROM claims
            WHERE (k.user_id, k.device_id, k.algorithm) = (claims.user_id, claims.device_id, claims.algorithm)
            RETURNING k.user_id, k.device_id, k.algorithm, k.key_id, k.key_json;
        """
        claimed_keys = cast(
            List[Tuple[str, str, str, str, str]],
            txn.execute_values(sql, query_list),
        )

        seen_user_device: Set[Tuple[str, str]] = set()
        for user_id, device_id, algorithm, key_id, key_json in claimed_keys:
            device_results = results.setdefault(user_id, {}).setdefault(device_id, {})
            device_results[f"{algorithm}:{key_id}"] = json_decoder.decode(key_json)

            if (user_id, device_id) in seen_user_device:
                continue
            seen_user_device.add((user_id, device_id))
            self._invalidate_cache_and_stream(
                txn, self.get_e2e_unused_fallback_key_types, (user_id, device_id)
            )

        return results
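
For readers unfamiliar with the bare `VALUES ?` above: `txn.execute_values` delegates to psycopg2's `execute_values`, which expands a single placeholder into one row per input tuple (Synapse's Postgres engine rewrites `?` placeholders into psycopg2's `%s` style). A rough standalone sketch of the same trick with raw psycopg2, with connection handling and schema assumed, might look like this:

```python
from psycopg2.extras import execute_values

# Same statement as in the diff, using psycopg2's native %s placeholder.
SQL = """
    WITH claims(user_id, device_id, algorithm, mark_as_used) AS (VALUES %s)
    UPDATE e2e_fallback_keys_json AS k
    SET used = k.used OR claims.mark_as_used
    FROM claims
    WHERE (k.user_id, k.device_id, k.algorithm)
        = (claims.user_id, claims.device_id, claims.algorithm)
    RETURNING k.user_id, k.device_id, k.algorithm, k.key_id, k.key_json
"""

def claim_fallback_keys(conn, claims):
    """claims: list of (user_id, device_id, algorithm, mark_as_used) tuples."""
    with conn.cursor() as cur:
        # fetch=True makes execute_values collect the RETURNING rows.
        return execute_values(cur, SQL, claims, fetch=True)
```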

    async def _claim_e2e_fallback_keys_simple(
        self,
        query_list: Iterable[Tuple[str, str, str, bool]],
    ) -> Dict[str, Dict[str, Dict[str, JsonDict]]]:
        """Naive, inefficient implementation of claim_e2e_fallback_keys for SQLite."""
        results: Dict[str, Dict[str, Dict[str, JsonDict]]] = {}
        for user_id, device_id, algorithm, mark_as_used in query_list:
            row = await self.db_pool.simple_select_one(
@@ -1302,6 +1294,99 @@ class EndToEndKeyWorkerStore(EndToEndKeyBackgroundStore, CacheInvalidationWorker

        return results

    @trace
    def _claim_e2e_one_time_key_simple(
        self,
        txn: LoggingTransaction,
        user_id: str,
        device_id: str,
        algorithm: str,
        count: int,
    ) -> List[Tuple[str, str]]:
        """Claim OTKs for a device, for DBs that don't support RETURNING.

        Returns:
            A list of (key name, key JSON) tuples, where the key name is the
            algorithm plus the key ID, one entry per OTK claimed.
        """

        sql = """
            SELECT key_id, key_json FROM e2e_one_time_keys_json
            WHERE user_id = ? AND device_id = ? AND algorithm = ?
            LIMIT ?
        """

        txn.execute(sql, (user_id, device_id, algorithm, count))
        otk_rows = list(txn)
        if not otk_rows:
            return []

        self.db_pool.simple_delete_many_txn(
            txn,
            table="e2e_one_time_keys_json",
            column="key_id",
            values=[otk_row[0] for otk_row in otk_rows],
            keyvalues={
                "user_id": user_id,
                "device_id": device_id,
                "algorithm": algorithm,
            },
        )
        self._invalidate_cache_and_stream(
            txn, self.count_e2e_one_time_keys, (user_id, device_id)
        )

        return [(f"{algorithm}:{key_id}", key_json) for key_id, key_json in otk_rows]

    @trace
    def _claim_e2e_one_time_keys_bulk(
        self,
        txn: LoggingTransaction,
        query_list: Iterable[Tuple[str, str, str, int]],
    ) -> List[Tuple[str, str, str, str, str]]:
        """Bulk claim OTKs, for DBs that support DELETE FROM... RETURNING.

        Args:
            query_list: Collection of tuples (user_id, device_id, algorithm, count)
                as passed to claim_e2e_one_time_keys.

        Returns:
            A list of tuples (user_id, device_id, algorithm, key_id, key_json)
            for each OTK claimed.
        """
        sql = """
            WITH claims(user_id, device_id, algorithm, claim_count) AS (
                VALUES ?
            ), ranked_keys AS (
                SELECT
                    user_id, device_id, algorithm, key_id, claim_count,
                    ROW_NUMBER() OVER (PARTITION BY (user_id, device_id, algorithm)) AS r
                FROM e2e_one_time_keys_json
                    JOIN claims USING (user_id, device_id, algorithm)
            )
            DELETE FROM e2e_one_time_keys_json k
            WHERE (user_id, device_id, algorithm, key_id) IN (
                SELECT user_id, device_id, algorithm, key_id
                FROM ranked_keys
                WHERE r <= claim_count
            )
            RETURNING user_id, device_id, algorithm, key_id, key_json;
        """
        otk_rows = cast(
            List[Tuple[str, str, str, str, str]], txn.execute_values(sql, query_list)
        )

        seen_user_device: Set[Tuple[str, str]] = set()
        for user_id, device_id, _, _, _ in otk_rows:
            if (user_id, device_id) in seen_user_device:
                continue
            seen_user_device.add((user_id, device_id))
            self._invalidate_cache_and_stream(
                txn, self.count_e2e_one_time_keys, (user_id, device_id)
            )

        return otk_rows
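
The `ranked_keys` CTE is the heart of the bulk claim: `ROW_NUMBER()` numbers each candidate key within its (user, device, algorithm) partition (with no ORDER BY, so which keys win is unspecified; any unused OTK is as good as another), and the DELETE keeps only rows whose rank fits under the requested `claim_count`. A pure-Python rendering of that filtering, over hypothetical in-memory rows, may make the semantics easier to see:

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def rank_and_claim(
    rows: List[Tuple[str, str, str, str]],          # (user_id, device_id, algorithm, key_id)
    claim_counts: Dict[Tuple[str, str, str], int],  # the claims VALUES block
) -> List[Tuple[str, str, str, str]]:
    taken: Dict[Tuple[str, str, str], int] = defaultdict(int)  # running ROW_NUMBER per partition
    claimed = []
    for user_id, device_id, algorithm, key_id in rows:
        partition = (user_id, device_id, algorithm)
        if partition not in claim_counts:
            continue  # the JOIN claims USING (...) drops unrequested keys
        taken[partition] += 1
        if taken[partition] <= claim_counts[partition]:  # WHERE r <= claim_count
            claimed.append((user_id, device_id, algorithm, key_id))
    return claimed
```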


class EndToEndKeyStore(EndToEndKeyWorkerStore, SQLBaseStore):
    def __init__(

@@ -601,7 +601,7 @@ class PusherBackgroundUpdatesStore(SQLBaseStore):
                (last_pusher_id, batch_size),
            )

            rows = txn.fetchall()
            rows = cast(List[Tuple[int, Optional[str], Optional[str]]], txn.fetchall())
            if len(rows) == 0:
                return 0

@@ -617,7 +617,7 @@ class PusherBackgroundUpdatesStore(SQLBaseStore):
                txn=txn,
                table="pushers",
                key_names=("id",),
                key_values=[row[0] for row in rows],
                key_values=[(row[0],) for row in rows],
                value_names=("device_id", "access_token"),
                # If there was already a device_id on the pusher, we only want to clear
                # the access_token column, so we keep the existing device_id. Otherwise,
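
On the second hunk: `simple_update_many_txn` pairs `key_names` with each entry of `key_values` positionally, so with `key_names=("id",)` every entry is expected to be a 1-tuple of key column values; the old list of bare ids had the wrong shape. A toy illustration (the rows are invented for the example):

```python
# Hypothetical rows as returned by the pushers query: (id, device_id, access_token).
rows = [(1, "dev1", "tok1"), (2, None, "tok2")]

key_names = ("id",)
key_values = [(row[0],) for row in rows]  # [(1,), (2,)] - one tuple per key

# Each key tuple must line up element-for-element with key_names:
assert all(len(kv) == len(key_names) for kv in key_values)
```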

@@ -174,6 +174,164 @@ class E2eKeysHandlerTestCase(unittest.HomeserverTestCase):
            },
        )

    def test_claim_one_time_key_bulk(self) -> None:
        """Like test_claim_one_time_key but claims multiple keys in one handler call."""
        # Apologies to the reader. This test is a little too verbose. It is particularly
        # tricky to make assertions neatly with all these nested dictionaries in play.

        # Three users with two devices each. Each device uses two algorithms.
        # Two keys are uploaded for each algorithm.
        alice = f"@alice:{self.hs.hostname}"
        brian = f"@brian:{self.hs.hostname}"
        chris = f"@chris:{self.hs.hostname}"
        one_time_keys = {
            alice: {
                "alice_dev_1": {
                    "alg1:k1": {"dummy_id": 1},
                    "alg1:k2": {"dummy_id": 2},
                    "alg2:k3": {"dummy_id": 3},
                    "alg2:k4": {"dummy_id": 4},
                },
                "alice_dev_2": {
                    "alg1:k5": {"dummy_id": 5},
                    "alg1:k6": {"dummy_id": 6},
                    "alg2:k7": {"dummy_id": 7},
                    "alg2:k8": {"dummy_id": 8},
                },
            },
            brian: {
                "brian_dev_1": {
                    "alg1:k9": {"dummy_id": 9},
                    "alg1:k10": {"dummy_id": 10},
                    "alg2:k11": {"dummy_id": 11},
                    "alg2:k12": {"dummy_id": 12},
                },
                "brian_dev_2": {
                    "alg1:k13": {"dummy_id": 13},
                    "alg1:k14": {"dummy_id": 14},
                    "alg2:k15": {"dummy_id": 15},
                    "alg2:k16": {"dummy_id": 16},
                },
            },
            chris: {
                "chris_dev_1": {
                    "alg1:k17": {"dummy_id": 17},
                    "alg1:k18": {"dummy_id": 18},
                    "alg2:k19": {"dummy_id": 19},
                    "alg2:k20": {"dummy_id": 20},
                },
                "chris_dev_2": {
                    "alg1:k21": {"dummy_id": 21},
                    "alg1:k22": {"dummy_id": 22},
                    "alg2:k23": {"dummy_id": 23},
                    "alg2:k24": {"dummy_id": 24},
                },
            },
        }
        for user_id, devices in one_time_keys.items():
            for device_id, keys_dict in devices.items():
                counts = self.get_success(
                    self.handler.upload_keys_for_user(
                        user_id,
                        device_id,
                        {"one_time_keys": keys_dict},
                    )
                )
                # The upload should report 2 keys per algorithm.
                expected_counts = {
                    "one_time_key_counts": {
                        # See count_e2e_one_time_keys for why this is hardcoded.
                        "signed_curve25519": 0,
                        "alg1": 2,
                        "alg2": 2,
                    },
                }
                self.assertEqual(counts, expected_counts)

        # Claim a variety of keys.
        # Raw format, easier to make test assertions about.
        claims_to_make = {
            (alice, "alice_dev_1", "alg1"): 1,
            (alice, "alice_dev_1", "alg2"): 2,
            (alice, "alice_dev_2", "alg2"): 1,
            (brian, "brian_dev_1", "alg1"): 2,
            (brian, "brian_dev_2", "alg2"): 9001,
            (chris, "chris_dev_2", "alg2"): 1,
        }
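
Note the deliberately oversized claim of 9001 alg2 keys for brian_dev_2: only two such keys were uploaded, so this entry exercises the partial-fulfilment path, where every available key is handed out and the shortfall is what the storage layer reports back as unfulfilled.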
        # Convert to the format the handler wants.
        query: Dict[str, Dict[str, Dict[str, int]]] = {}
        for (user_id, device_id, algorithm), count in claims_to_make.items():
            query.setdefault(user_id, {}).setdefault(device_id, {})[algorithm] = count
        claim_res = self.get_success(
            self.handler.claim_one_time_keys(
                query,
                self.requester,
                timeout=None,
                always_include_fallback_keys=False,
            )
        )

        # No failures, please!
        self.assertEqual(claim_res["failures"], {})

        # Check that we get exactly the (user, device, algorithm)s we asked for.
        got_otks = claim_res["one_time_keys"]
        claimed_user_device_algorithms = {
            (user_id, device_id, alg_key_id.split(":")[0])
            for user_id, devices in got_otks.items()
            for device_id, key_dict in devices.items()
            for alg_key_id in key_dict
        }
        self.assertEqual(claimed_user_device_algorithms, set(claims_to_make))

        # Now check the keys we got are what we expected.
        def assertExactlyOneOtk(
            user_id: str, device_id: str, *alg_key_pairs: str
        ) -> None:
            key_dict = got_otks[user_id][device_id]
            found = 0
            for alg_key in alg_key_pairs:
                if alg_key in key_dict:
                    expected_key_json = one_time_keys[user_id][device_id][alg_key]
                    self.assertEqual(key_dict[alg_key], expected_key_json)
                    found += 1
            self.assertEqual(found, 1)

        def assertAllOtks(user_id: str, device_id: str, *alg_key_pairs: str) -> None:
            key_dict = got_otks[user_id][device_id]
            for alg_key in alg_key_pairs:
                expected_key_json = one_time_keys[user_id][device_id][alg_key]
                self.assertEqual(key_dict[alg_key], expected_key_json)

        # Expect a single arbitrary key to be returned.
        assertExactlyOneOtk(alice, "alice_dev_1", "alg1:k1", "alg1:k2")
        assertExactlyOneOtk(alice, "alice_dev_2", "alg2:k7", "alg2:k8")
        assertExactlyOneOtk(chris, "chris_dev_2", "alg2:k23", "alg2:k24")

        assertAllOtks(alice, "alice_dev_1", "alg2:k3", "alg2:k4")
        assertAllOtks(brian, "brian_dev_1", "alg1:k9", "alg1:k10")
        assertAllOtks(brian, "brian_dev_2", "alg2:k15", "alg2:k16")

        # Now check the unused key counts.
        for user_id, devices in one_time_keys.items():
            for device_id in devices:
                counts_by_alg = self.get_success(
                    self.store.count_e2e_one_time_keys(user_id, device_id)
                )
                # Somewhat fiddly to compute the expected count dict.
                expected_counts_by_alg = {
                    "signed_curve25519": 0,
                }
                for alg in ["alg1", "alg2"]:
                    claim_count = claims_to_make.get((user_id, device_id, alg), 0)
                    remaining_count = max(0, 2 - claim_count)
                    if remaining_count > 0:
                        expected_counts_by_alg[alg] = remaining_count

                self.assertEqual(
                    counts_by_alg, expected_counts_by_alg, f"{user_id}:{device_id}"
                )

    def test_fallback_key(self) -> None:
        local_user = "@boris:" + self.hs.hostname
        device_id = "xyz"
@@ -322,6 +480,83 @@ class E2eKeysHandlerTestCase(unittest.HomeserverTestCase):
            {"failures": {}, "one_time_keys": {local_user: {device_id: fallback_key3}}},
        )

    def test_fallback_key_bulk(self) -> None:
        """Like test_fallback_key, but claims multiple keys in one handler call."""
        alice = f"@alice:{self.hs.hostname}"
        brian = f"@brian:{self.hs.hostname}"
        chris = f"@chris:{self.hs.hostname}"

        # Have three users upload fallback keys for two devices.
        fallback_keys = {
            alice: {
                "alice_dev_1": {"alg1:k1": "fallback_key1"},
                "alice_dev_2": {"alg2:k2": "fallback_key2"},
            },
            brian: {
                "brian_dev_1": {"alg1:k3": "fallback_key3"},
                "brian_dev_2": {"alg2:k4": "fallback_key4"},
            },
            chris: {
                "chris_dev_1": {"alg1:k5": "fallback_key5"},
                "chris_dev_2": {"alg2:k6": "fallback_key6"},
            },
        }

        for user_id, devices in fallback_keys.items():
            for device_id, key_dict in devices.items():
                self.get_success(
                    self.handler.upload_keys_for_user(
                        user_id,
                        device_id,
                        {"fallback_keys": key_dict},
                    )
                )

        # Each device should have an unused fallback key.
        for user_id, devices in fallback_keys.items():
            for device_id in devices:
                fallback_res = self.get_success(
                    self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
                )
                expected_algorithm_name = f"alg{device_id[-1]}"
                self.assertEqual(fallback_res, [expected_algorithm_name])

        # Claim the fallback key for one device per user.
        claim_res = self.get_success(
            self.handler.claim_one_time_keys(
                {
                    alice: {"alice_dev_1": {"alg1": 1}},
                    brian: {"brian_dev_2": {"alg2": 1}},
                    chris: {"chris_dev_2": {"alg2": 1}},
                },
                self.requester,
                timeout=None,
                always_include_fallback_keys=False,
            )
        )
        expected_claims = {
            alice: {"alice_dev_1": {"alg1:k1": "fallback_key1"}},
            brian: {"brian_dev_2": {"alg2:k4": "fallback_key4"}},
            chris: {"chris_dev_2": {"alg2:k6": "fallback_key6"}},
        }
        self.assertEqual(
            claim_res,
            {"failures": {}, "one_time_keys": expected_claims},
        )

        for user_id, devices in fallback_keys.items():
            for device_id in devices:
                fallback_res = self.get_success(
                    self.store.get_e2e_unused_fallback_key_types(user_id, device_id)
                )
                # Claimed fallback keys should no longer show up as unused.
                # Unclaimed fallback keys should still be unused.
                if device_id in expected_claims[user_id]:
                    self.assertEqual(fallback_res, [])
                else:
                    expected_algorithm_name = f"alg{device_id[-1]}"
                    self.assertEqual(fallback_res, [expected_algorithm_name])

    def test_fallback_key_always_returned(self) -> None:
        local_user = "@boris:" + self.hs.hostname
        device_id = "xyz"