
Compare commits


4 Commits

Author SHA1 Message Date
Andrew Morgan
7cf88d743e Install gcc, lower poetry verbosity 2023-08-04 16:46:05 +01:00
Andrew Morgan
2f83b4f206 Enable debug logging for poetry 2023-08-04 15:43:10 +01:00
Andrew Morgan
0f8e3337ca changelog 2023-08-04 13:45:30 +01:00
Andrew Morgan
27efb8a38f Switch devenv from development branch to v0.6.3
Broke in https://github.com/matrix-org/synapse/pull/16019. This would have been caught
if someone ran 'devenv up' during manual testing of that PR, or better yet, if we had
CI for the nix developer environment.

Switch back to the latest release version, which doesn't have the upstream issue:
https://github.com/cachix/devenv/issues/756
2023-08-04 13:41:54 +01:00
43 changed files with 90 additions and 853 deletions


@@ -1,73 +1,3 @@
# Synapse 1.90.0 (2023-08-15)
No significant changes since 1.90.0rc1.
# Synapse 1.90.0rc1 (2023-08-08)
### Features
- Scope transaction IDs to devices (implement [MSC3970](https://github.com/matrix-org/matrix-spec-proposals/pull/3970)). ([\#15629](https://github.com/matrix-org/synapse/issues/15629))
- Remove old rows from the `cache_invalidation_stream_by_instance` table automatically (this table is unused in SQLite). ([\#15868](https://github.com/matrix-org/synapse/issues/15868))
### Bugfixes
- Fix a long-standing bug where purging history and paginating simultaneously could lead to database corruption when using workers. ([\#15791](https://github.com/matrix-org/synapse/issues/15791))
- Fix a long-standing bug where the profile endpoint returned a 404 when the user's display name was empty. ([\#16012](https://github.com/matrix-org/synapse/issues/16012))
- Fix a long-standing bug where the `synapse_port_db` failed to configure sequences for application services and partial stated rooms. ([\#16043](https://github.com/matrix-org/synapse/issues/16043))
- Fix long-standing bug with deletion in dehydrated devices v2. ([\#16046](https://github.com/matrix-org/synapse/issues/16046))
### Updates to the Docker image
- Add `org.opencontainers.image.version` labels to Docker containers [published by Matrix.org](https://hub.docker.com/r/matrixdotorg/synapse). Contributed by Mo Balaa. ([\#15972](https://github.com/matrix-org/synapse/issues/15972), [\#16009](https://github.com/matrix-org/synapse/issues/16009))
### Improved Documentation
- Add an internal documentation page describing the ["streams" used within Synapse](https://matrix-org.github.io/synapse/v1.90/development/synapse_architecture/streams.html). ([\#16015](https://github.com/matrix-org/synapse/issues/16015))
- Clarify comment on the keys/upload over replication endpoint. ([\#16016](https://github.com/matrix-org/synapse/issues/16016))
- Do not expose Admin API in caddy reverse proxy example. Contributed by @NilsIrl. ([\#16027](https://github.com/matrix-org/synapse/issues/16027))
### Deprecations and Removals
- Remove support for legacy application service paths. ([\#15964](https://github.com/matrix-org/synapse/issues/15964))
- Move support for application service query parameter authorization behind a configuration option. ([\#16017](https://github.com/matrix-org/synapse/issues/16017))
### Internal Changes
- Update SQL queries to inline boolean parameters as supported in SQLite 3.27. ([\#15525](https://github.com/matrix-org/synapse/issues/15525))
- Allow for the configuration of the backoff algorithm for federation destinations. ([\#15754](https://github.com/matrix-org/synapse/issues/15754))
- Allow modules to check whether the current worker is configured to run background tasks. ([\#15991](https://github.com/matrix-org/synapse/issues/15991))
- Update support for [MSC3958](https://github.com/matrix-org/matrix-spec-proposals/pull/3958) to match the latest revision of the MSC. ([\#15992](https://github.com/matrix-org/synapse/issues/15992))
- Allow modules to schedule delayed background calls. ([\#15993](https://github.com/matrix-org/synapse/issues/15993))
- Properly overwrite the `redacts` content-property for forwards-compatibility with room versions 1 through 10. ([\#16013](https://github.com/matrix-org/synapse/issues/16013))
- Fix building the nix development environment on macOS systems. ([\#16019](https://github.com/matrix-org/synapse/issues/16019))
- Remove leading and trailing spaces when setting a display name. ([\#16031](https://github.com/matrix-org/synapse/issues/16031))
- Combine duplicated code. ([\#16023](https://github.com/matrix-org/synapse/issues/16023))
- Collect additional metrics from `ResponseCache` for eviction. ([\#16028](https://github.com/matrix-org/synapse/issues/16028))
- Fix endpoint improperly declaring support for MSC3814. ([\#16068](https://github.com/matrix-org/synapse/issues/16068))
- Drop backwards compat hack for event serialization. ([\#16069](https://github.com/matrix-org/synapse/issues/16069))
### Updates to locked dependencies
* Update PyYAML to 6.0.1. ([\#16011](https://github.com/matrix-org/synapse/issues/16011))
* Bump cryptography from 41.0.2 to 41.0.3. ([\#16048](https://github.com/matrix-org/synapse/issues/16048))
* Bump furo from 2023.5.20 to 2023.7.26. ([\#16077](https://github.com/matrix-org/synapse/issues/16077))
* Bump immutabledict from 2.2.4 to 3.0.0. ([\#16034](https://github.com/matrix-org/synapse/issues/16034))
* Update certifi to 2023.7.22 and pygments to 2.15.1. ([\#16044](https://github.com/matrix-org/synapse/issues/16044))
* Bump jsonschema from 4.18.3 to 4.19.0. ([\#16081](https://github.com/matrix-org/synapse/issues/16081))
* Bump phonenumbers from 8.13.14 to 8.13.18. ([\#16076](https://github.com/matrix-org/synapse/issues/16076))
* Bump regex from 1.9.1 to 1.9.3. ([\#16073](https://github.com/matrix-org/synapse/issues/16073))
* Bump serde from 1.0.171 to 1.0.175. ([\#15982](https://github.com/matrix-org/synapse/issues/15982))
* Bump serde from 1.0.175 to 1.0.179. ([\#16033](https://github.com/matrix-org/synapse/issues/16033))
* Bump serde from 1.0.179 to 1.0.183. ([\#16074](https://github.com/matrix-org/synapse/issues/16074))
* Bump serde_json from 1.0.103 to 1.0.104. ([\#16032](https://github.com/matrix-org/synapse/issues/16032))
* Bump service-identity from 21.1.0 to 23.1.0. ([\#16038](https://github.com/matrix-org/synapse/issues/16038))
* Bump types-commonmark from 0.9.2.3 to 0.9.2.4. ([\#16037](https://github.com/matrix-org/synapse/issues/16037))
* Bump types-jsonschema from 4.17.0.8 to 4.17.0.10. ([\#16036](https://github.com/matrix-org/synapse/issues/16036))
* Bump types-netaddr from 0.8.0.8 to 0.8.0.9. ([\#16035](https://github.com/matrix-org/synapse/issues/16035))
* Bump types-opentracing from 2.4.10.5 to 2.4.10.6. ([\#16078](https://github.com/matrix-org/synapse/issues/16078))
* Bump types-setuptools from 68.0.0.0 to 68.0.0.3. ([\#16079](https://github.com/matrix-org/synapse/issues/16079))
# Synapse 1.89.0 (2023-08-01)
No significant changes since 1.89.0rc1.

Cargo.lock generated

@@ -291,9 +291,9 @@ dependencies = [
[[package]]
name = "regex"
version = "1.9.3"
version = "1.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "81bc1d4caf89fac26a70747fe603c130093b53c773888797a6329091246d651a"
checksum = "b2eae68fc220f7cf2532e4494aded17545fce192d59cd996e0fe7887f4ceb575"
dependencies = [
"aho-corasick",
"memchr",
@@ -303,9 +303,9 @@ dependencies = [
[[package]]
name = "regex-automata"
version = "0.3.6"
version = "0.3.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fed1ceff11a1dddaee50c9dc8e4938bd106e9d89ae372f192311e7da498e3b69"
checksum = "83d3daa6976cffb758ec878f108ba0e062a45b2d6ca3a2cca965338855476caf"
dependencies = [
"aho-corasick",
"memchr",
@@ -314,9 +314,9 @@ dependencies = [
[[package]]
name = "regex-syntax"
version = "0.7.4"
version = "0.7.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e5ea92a5b6195c6ef2a0295ea818b312502c6fc94dde986c5553242e18fd4ce2"
checksum = "2ab07dc67230e4a4718e70fd5c20055a4334b121f1f9db8fe63ef39ce9b8c846"
[[package]]
name = "ryu"
@@ -332,22 +332,22 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "serde"
version = "1.0.183"
version = "1.0.179"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32ac8da02677876d532745a130fc9d8e6edfa81a269b107c5b00829b91d8eb3c"
checksum = "0a5bf42b8d227d4abf38a1ddb08602e229108a517cd4e5bb28f9c7eaafdce5c0"
dependencies = [
"serde_derive",
]
[[package]]
name = "serde_derive"
version = "1.0.183"
version = "1.0.179"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "aafe972d60b0b9bee71a91b92fee2d4fb3c9d7e8f6b179aa99f27203d99a4816"
checksum = "741e124f5485c7e60c03b043f79f320bff3527f4bbf12cf3831750dc46a0ec2c"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.28",
"syn 2.0.25",
]
[[package]]
@@ -386,9 +386,9 @@ dependencies = [
[[package]]
name = "syn"
version = "2.0.28"
version = "2.0.25"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "04361975b3f5e348b2189d8dc55bc942f278b2d482a6a0365de5bdd62d351567"
checksum = "15e3fc8c0c74267e2df136e5e5fb656a464158aa57624053375eb9c8c6e25ae2"
dependencies = [
"proc-macro2",
"quote",


@@ -34,14 +34,6 @@ additional-css = [
"docs/website_files/table-of-contents.css",
"docs/website_files/remove-nav-buttons.css",
"docs/website_files/indent-section-headers.css",
"docs/website_files/version-picker.css",
]
additional-js = [
"docs/website_files/table-of-contents.js",
"docs/website_files/version-picker.js",
"docs/website_files/version.js",
]
theme = "docs/website_files/theme"
[preprocessor.schema_versions]
command = "./scripts-dev/schema_versions.py"
additional-js = ["docs/website_files/table-of-contents.js"]
theme = "docs/website_files/theme"

changelog.d/15525.misc Normal file

@@ -0,0 +1 @@
Update SQL queries to inline boolean parameters as supported in SQLite 3.27.


@@ -0,0 +1 @@
Scope transaction IDs to devices (implement [MSC3970](https://github.com/matrix-org/matrix-spec-proposals/pull/3970)).

changelog.d/15754.misc Normal file

@@ -0,0 +1 @@
Allow for the configuration of the backoff algorithm for federation destinations.

changelog.d/15791.bugfix Normal file

@@ -0,0 +1 @@
Fix bug where purging history and paginating simultaneously could lead to database corruption when using workers.


@@ -0,0 +1 @@
Remove support for legacy application service paths.

changelog.d/15972.docker Normal file

@@ -0,0 +1 @@
Add `org.opencontainers.image.version` labels to Docker containers [published by Matrix.org](https://hub.docker.com/r/matrixdotorg/synapse). Contributed by Mo Balaa.

changelog.d/15991.misc Normal file

@@ -0,0 +1 @@
Allow modules to check whether the current worker is configured to run background tasks.

changelog.d/15992.misc Normal file

@@ -0,0 +1 @@
Update support for [MSC3958](https://github.com/matrix-org/matrix-spec-proposals/pull/3958) to match the latest revision of the MSC.

changelog.d/16009.docker Normal file

@@ -0,0 +1 @@
Add `org.opencontainers.image.version` labels to Docker containers [published by Matrix.org](https://hub.docker.com/r/matrixdotorg/synapse). Contributed by Mo Balaa.

changelog.d/16011.misc Normal file

@@ -0,0 +1 @@
Update PyYAML to 6.0.1.

changelog.d/16012.bugfix Normal file

@@ -0,0 +1 @@
Fix a 404 being returned by the profile endpoint when the display name is empty but the avatar URL is not.

changelog.d/16013.misc Normal file

@@ -0,0 +1 @@
Properly overwrite the `redacts` content-property for forwards-compatibility with room versions 1 through 10.

changelog.d/16016.doc Normal file

@@ -0,0 +1,2 @@
Clarify comment on the keys/upload over replication endpoint.


@@ -0,0 +1 @@
Move support for application service query parameter authorization behind a configuration option.

changelog.d/16019.misc Normal file

@@ -0,0 +1 @@
Fix building the nix development environment on macOS systems.

changelog.d/16023.misc Normal file

@@ -0,0 +1 @@
Combine duplicated code.

changelog.d/16027.doc Normal file

@@ -0,0 +1 @@
Do not expose Admin API in caddy reverse proxy example. Contributed by @NilsIrl.

changelog.d/16028.misc Normal file

@@ -0,0 +1 @@
Collect additional metrics from `ResponseCache` for eviction.

changelog.d/16031.bugfix Normal file

@@ -0,0 +1 @@
Remove leading and trailing spaces when setting a display name.

changelog.d/16043.bugfix Normal file

@@ -0,0 +1 @@
Fix a long-standing bug where the `synapse_port_db` failed to configure sequences for application services and partial stated rooms.

changelog.d/16044.misc Normal file

@@ -0,0 +1 @@
Update certifi to 2023.7.22 and pygments to 2.15.1.

changelog.d/16063.misc Normal file

@@ -0,0 +1 @@
Fix building the nix development environment on macOS systems.

debian/changelog vendored

@@ -1,15 +1,3 @@
matrix-synapse-py3 (1.90.0) stable; urgency=medium
* New Synapse release 1.90.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Aug 2023 11:17:34 +0100
matrix-synapse-py3 (1.90.0~rc1) stable; urgency=medium
* New Synapse release 1.90.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 08 Aug 2023 15:29:34 +0100
matrix-synapse-py3 (1.89.0) stable; urgency=medium
* New Synapse release 1.89.0.


@@ -97,7 +97,6 @@
- [Cancellation](development/synapse_architecture/cancellation.md)
- [Log Contexts](log_contexts.md)
- [Replication](replication.md)
- [Streams](development/synapse_architecture/streams.md)
- [TCP Replication](tcp_replication.md)
- [Faster remote joins](development/synapse_architecture/faster_joins.md)
- [Internal Documentation](development/internal_documentation/README.md)


@@ -1,157 +0,0 @@
## Streams
Synapse has a concept of "streams", which are roughly described in [`id_generators.py`](
https://github.com/matrix-org/synapse/blob/develop/synapse/storage/util/id_generators.py
).
Generally speaking, streams are a series of notifications that something in Synapse's database has changed that the application might need to respond to.
For example:
- The events stream reports new events (PDUs) that Synapse creates, or that Synapse accepts from another homeserver.
- The account data stream reports changes to users' [account data](https://spec.matrix.org/v1.7/client-server-api/#client-config).
- The to-device stream reports when a device has a new [to-device message](https://spec.matrix.org/v1.7/client-server-api/#send-to-device-messaging).
See [`synapse.replication.tcp.streams`](
https://github.com/matrix-org/synapse/blob/develop/synapse/replication/tcp/streams/__init__.py
) for the full list of streams.
It is very helpful to understand the streams mechanism when working on any part of Synapse that needs to respond to changes—especially if those changes are made by different workers.
To that end, let's describe streams formally, paraphrasing from the docstring of [`AbstractStreamIdGenerator`](
https://github.com/matrix-org/synapse/blob/a719b703d9bd0dade2565ddcad0e2f3a7a9d4c37/synapse/storage/util/id_generators.py#L96
).
### Definition
A stream is an append-only log `T1, T2, ..., Tn, ...` of facts[^1] which grows over time.
Only "writers" can add facts to a stream, and there may be multiple writers.
Each fact has an ID, called its "stream ID".
Readers should only process facts in ascending stream ID order.
Roughly speaking, each stream is backed by a database table.
It should have a `stream_id` (or similar) bigint column holding stream IDs, plus additional columns as necessary to describe the fact.
Typically, a fact is expressed with a single row in its backing table.[^2]
Within a stream, no two facts may have the same stream_id.
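As a concrete (and purely hypothetical) illustration of such a backing table, here is a minimal sketch using `sqlite3`; the table and column names are invented for this example and do not correspond to any real Synapse schema.

```python
import sqlite3

# Hypothetical backing table for a made-up "foo" stream: a stream_id
# column plus whatever columns are needed to describe the fact itself.
db = sqlite3.connect(":memory:")
db.execute(
    """
    CREATE TABLE foo_stream (
        stream_id     INTEGER NOT NULL,  -- the fact's stream ID
        instance_name TEXT    NOT NULL,  -- which writer wrote the fact
        payload       TEXT    NOT NULL   -- fact-specific data
    )
    """
)

# Typically one fact is one row; no two facts in the same stream may
# share a stream_id.
db.execute(
    "INSERT INTO foo_stream (stream_id, instance_name, payload) VALUES (?, ?, ?)",
    (1, "worker1", "some fact"),
)
```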
> _Aside_. Some additional notes on streams' backing tables.
>
> 1. Rich would like to [ditch the backing tables](https://github.com/matrix-org/synapse/issues/13456).
> 2. The backing tables may have other uses.
> For example, the events table backs the events stream, and is read when processing new events.
> But old rows are read from the table all the time, whenever Synapse needs to look up some facts about an event.
> 3. Rich suspects that sometimes the stream is backed by multiple tables, so the stream proper is the union of those tables.
Stream writers can "reserve" a stream ID, and then later mark it as having been completed.
Stream writers need to track the completion of each stream fact.
In the happy case, completion means a fact has been written to the stream table.
But unhappy cases (e.g. transaction rollback due to an error) also count as completion.
Once completed, the rows written with that stream ID are fixed, and no new rows
will be inserted with that ID.
### Current stream ID
For any given stream reader (including writers themselves), we may define a per-writer current stream ID:
> The current stream ID _for a writer W_ is the largest stream ID such that
> all transactions added by W with equal or smaller ID have completed.
Similarly, there is a "linear" notion of current stream ID:
> The "linear" current stream ID is the largest stream ID such that
> all facts (added by any writer) with equal or smaller ID have completed.
Because different stream readers A and B learn about new facts at different times, A and B may disagree about current stream IDs.
Put differently: we should think of stream readers as being independent of each other, proceeding through a stream of facts at different rates.
**NB.** For both senses of "current", note that if a writer opens a transaction that never completes, the current stream ID will never advance beyond that writer's last written stream ID.
For single-writer streams, the per-writer current ID and the linear current ID are the same.
Both senses of current ID are monotonic, but they may "skip" or jump over IDs because facts complete out of order.
_Example_.
Consider a single-writer stream which is initially at ID 1.
| Action | Current stream ID | Notes |
|------------|-------------------|-------------------------------------------------|
| | 1 | |
| Reserve 2 | 1 | |
| Reserve 3 | 1 | |
| Complete 3 | 1 | current ID unchanged, waiting for 2 to complete |
| Complete 2 | 3 | current ID jumps from 1 -> 3 |
| Reserve 4 | 3 | |
| Reserve 5 | 3 | |
| Reserve 6 | 3 | |
| Complete 5 | 3 | |
| Complete 4 | 5 | current ID jumps 3->5, even though 6 is pending |
| Complete 6 | 6 | |
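To make the table above concrete, here is a small, self-contained Python model of the per-writer current ID. `SingleWriterIdTracker` and its methods are invented for this sketch and are not Synapse APIs; the real logic lives in `synapse/storage/util/id_generators.py`.

```python
class SingleWriterIdTracker:
    """Toy model of a single writer's stream ID bookkeeping."""

    def __init__(self, start: int = 1) -> None:
        self._last_reserved = start    # highest stream ID handed out so far
        self._unfinished: set = set()  # reserved but not yet completed

    def reserve(self) -> int:
        self._last_reserved += 1
        self._unfinished.add(self._last_reserved)
        return self._last_reserved

    def complete(self, stream_id: int) -> None:
        self._unfinished.discard(stream_id)

    @property
    def current_id(self) -> int:
        # The largest ID such that everything at or below it has completed.
        if self._unfinished:
            return min(self._unfinished) - 1
        return self._last_reserved


tracker = SingleWriterIdTracker()    # current ID starts at 1
two, three = tracker.reserve(), tracker.reserve()
tracker.complete(three)
assert tracker.current_id == 1       # still waiting for 2 to complete
tracker.complete(two)
assert tracker.current_id == 3       # current ID jumps from 1 straight to 3
```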
### Multi-writer streams
There are two ways to view a multi-writer stream.
1. Treat it as a collection of distinct single-writer streams, one
for each writer.
2. Treat it as a single stream.
The single stream (option 2) is conceptually simpler, and easier to represent (a single stream id).
However, it requires each reader to know about the entire set of writers, to ensure that readers don't erroneously advance their current stream position too early and miss a fact from an unknown writer.
In contrast, multiple parallel streams (option 1) are more complex, requiring more state to represent (map from writer to stream id).
The payoff for doing so is that readers can "peek" ahead to facts that completed on one writer no matter the state of the others, reducing latency.
Note that a multi-writer stream can be viewed in both ways.
For example, the events stream is treated as multiple single-writer streams (option 1) by the sync handler, so that events are sent to clients as soon as possible.
But the background process that works through events treats them as a single linear stream.
Another useful example is the cache invalidation stream.
The facts this stream holds are instructions to "you should now invalidate these cache entries".
We only ever treat this as multiple single-writer streams, as there is no important ordering between cache invalidations.
(Invalidations are self-contained facts, and they commute/are idempotent.)
### Writing to streams
Writers need to track:
- their current position (i.e. their own per-writer stream ID); and
- the facts currently awaiting completion.
At startup,
- the current position of that writer can be found by querying the database (which suggests that facts need to be written to the database atomically, in a transaction); and
- there are no facts awaiting completion.
To reserve a stream ID, call [`nextval`](https://www.postgresql.org/docs/current/functions-sequence.html) on the appropriate postgres sequence.
To write a fact to the stream: insert the appropriate rows into the appropriate backing table.
To complete a fact, first remove it from the map of facts currently awaiting completion.
Then, if no earlier fact is awaiting completion, the writer can advance its current position in that stream.
Upon doing so it should emit an `RDATA` message[^3], one for every fact between the old and the new stream ID.
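A sketch of that completion step, reusing the toy `SingleWriterIdTracker` from the earlier example (invented names, not Synapse's actual implementation; in the real system the reserved IDs would come from a postgres sequence via `nextval`):

```python
def complete_fact(tracker: "SingleWriterIdTracker", stream_id: int, emit_rdata) -> None:
    """Mark a fact complete and emit RDATA for any newly-final stream IDs."""
    old_current = tracker.current_id
    tracker.complete(stream_id)
    # If no earlier fact is still awaiting completion, the writer's current
    # position advances; emit one RDATA message for every stream ID it
    # advanced over.
    for advanced_id in range(old_current + 1, tracker.current_id + 1):
        emit_rdata(advanced_id)
```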
### Subscribing to streams
Readers need to track the current position of every writer.
At startup, they can find this by contacting each writer with a `REPLICATE` message,
requesting that all writers reply describing their current position in their streams.
Writers reply with a `POSITION` message.
To learn about new facts, readers should listen for `RDATA` messages and process them to respond to the new fact.
The `RDATA` itself is not a self-contained representation of the fact;
readers will have to query the stream tables for the full details.
Readers must also advance their record of the writer's current position for that stream.
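Reader-side bookkeeping can be sketched in a few lines (hypothetical names; the real handling lives in Synapse's replication command handlers):

```python
from typing import Callable, Dict

# Each reader tracks the current position of every writer it knows about.
writer_positions: Dict[str, int] = {}

def on_position(writer: str, stream_id: int) -> None:
    # A POSITION reply to our REPLICATE request: the writer's current position.
    writer_positions[writer] = max(writer_positions.get(writer, 0), stream_id)

def on_rdata(writer: str, stream_id: int, fetch_and_process: Callable[[int], None]) -> None:
    # RDATA is not self-contained: fetch the fact's full row(s) from the
    # stream's backing table before reacting to it.
    fetch_and_process(stream_id)
    # Then advance our record of this writer's position.
    writer_positions[writer] = max(writer_positions.get(writer, 0), stream_id)
```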
### Summary
In a nutshell: we have an append-only log with a "buffer/scratchpad" at the end where we have to wait for the sequence to be linear and contiguous.
---
[^1]: we use the word _fact_ here for two reasons.
Firstly, the word "event" is already heavily overloaded (PDUs, EDUs, account data, ...) and we don't need to make that worse.
Secondly, "fact" emphasises that the things we append to a stream cannot change after the fact.
[^2]: A fact might be expressed with 0 rows, e.g. if we opened a transaction to persist an event, but failed and rolled the transaction back before marking the fact as completed.
In principle a fact might be expressed with 2 or more rows; if so, each of those rows should share the fact's stream ID.
[^3]: This communication used to happen directly with the writers [over TCP](../../tcp_replication.md);
nowadays it's done via Redis's Pubsub.


@@ -24,11 +24,6 @@ Finally, we also stylise the chapter titles in the left sidebar by indenting the
slightly so that they are more visually distinguishable from the section headers
(the bold titles). This is done through the `indent-section-headers.css` file.
In addition to these modifications, we have added a version picker to the documentation.
Users can switch between the documentation for different versions of Synapse.
This functionality was implemented through the `version-picker.js` and
`version-picker.css` files.
More information can be found in mdbook's official documentation for
[injecting page JS/CSS](https://rust-lang.github.io/mdBook/format/config.html)
and


@@ -131,18 +131,6 @@
<i class="fa fa-search"></i>
</button>
{{/if}}
<div class="version-picker">
<div class="dropdown">
<div class="select">
<span></span>
<i class="fa fa-chevron-down"></i>
</div>
<input type="hidden" name="version">
<ul class="dropdown-menu">
<!-- Versions will be added dynamically in version-picker.js -->
</ul>
</div>
</div>
</div>
<h1 class="menu-title">{{ book_title }}</h1>
@@ -321,4 +309,4 @@
{{/if}}
</body>
</html>
</html>


@@ -1,78 +0,0 @@
.version-picker {
display: flex;
align-items: center;
}
.version-picker .dropdown {
width: 130px;
max-height: 29px;
margin-left: 10px;
display: inline-block;
border-radius: 4px;
border: 1px solid var(--theme-popup-border);
position: relative;
font-size: 13px;
color: var(--fg);
height: 100%;
text-align: left;
}
.version-picker .dropdown .select {
cursor: pointer;
display: block;
padding: 5px 2px 5px 15px;
}
.version-picker .dropdown .select > i {
font-size: 10px;
color: var(--fg);
cursor: pointer;
float: right;
line-height: 20px !important;
}
.version-picker .dropdown:hover {
border: 1px solid var(--theme-popup-border);
}
.version-picker .dropdown:active {
background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active:hover,
.version-picker .dropdown.active {
border: 1px solid var(--theme-popup-border);
border-radius: 2px 2px 0 0;
background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active .select > i {
transform: rotate(-180deg);
}
.version-picker .dropdown .dropdown-menu {
position: absolute;
background-color: var(--theme-popup-bg);
width: 100%;
left: -1px;
right: 1px;
margin-top: 1px;
border: 1px solid var(--theme-popup-border);
border-radius: 0 0 4px 4px;
overflow: hidden;
display: none;
max-height: 300px;
overflow-y: auto;
z-index: 9;
}
.version-picker .dropdown .dropdown-menu li {
font-size: 12px;
padding: 6px 20px;
cursor: pointer;
}
.version-picker .dropdown .dropdown-menu {
padding: 0;
list-style: none;
}
.version-picker .dropdown .dropdown-menu li:hover {
background-color: var(--theme-hover);
}
.version-picker .dropdown .dropdown-menu li.active::before {
display: inline-block;
content: "✓";
margin-inline-start: -14px;
width: 14px;
}


@@ -1,127 +0,0 @@
const dropdown = document.querySelector('.version-picker .dropdown');
const dropdownMenu = dropdown.querySelector('.dropdown-menu');
fetchVersions(dropdown, dropdownMenu).then(() => {
initializeVersionDropdown(dropdown, dropdownMenu);
});
/**
* Initialize the dropdown functionality for version selection.
*
* @param {Element} dropdown - The dropdown element.
* @param {Element} dropdownMenu - The dropdown menu element.
*/
function initializeVersionDropdown(dropdown, dropdownMenu) {
// Toggle the dropdown menu on click
dropdown.addEventListener('click', function () {
this.setAttribute('tabindex', 1);
this.classList.toggle('active');
dropdownMenu.style.display = (dropdownMenu.style.display === 'block') ? 'none' : 'block';
});
// Remove the 'active' class and hide the dropdown menu on focusout
dropdown.addEventListener('focusout', function () {
this.classList.remove('active');
dropdownMenu.style.display = 'none';
});
// Handle item selection within the dropdown menu
const dropdownMenuItems = dropdownMenu.querySelectorAll('li');
dropdownMenuItems.forEach(function (item) {
item.addEventListener('click', function () {
dropdownMenuItems.forEach(function (item) {
item.classList.remove('active');
});
this.classList.add('active');
dropdown.querySelector('span').textContent = this.textContent;
dropdown.querySelector('input').value = this.getAttribute('id');
window.location.href = changeVersion(window.location.href, this.textContent);
});
});
};
/**
* This function fetches the available versions from a GitHub repository
* and inserts them into the version picker.
*
* @param {Element} dropdown - The dropdown element.
* @param {Element} dropdownMenu - The dropdown menu element.
* @returns {Promise<Array<string>>} A promise that resolves with an array of available versions.
*/
function fetchVersions(dropdown, dropdownMenu) {
return new Promise((resolve, reject) => {
window.addEventListener("load", () => {
fetch("https://api.github.com/repos/matrix-org/synapse/git/trees/gh-pages", {
cache: "force-cache",
}).then(res =>
res.json()
).then(resObject => {
const excluded = ['dev-docs', 'v1.91.0', 'v1.80.0', 'v1.69.0'];
const tree = resObject.tree.filter(item => item.type === "tree" && !excluded.includes(item.path));
const versions = tree.map(item => item.path).sort(sortVersions);
// Create a list of <li> items for versions
versions.forEach((version) => {
const li = document.createElement("li");
li.textContent = version;
li.id = version;
if (window.SYNAPSE_VERSION === version) {
li.classList.add('active');
dropdown.querySelector('span').textContent = version;
dropdown.querySelector('input').value = version;
}
dropdownMenu.appendChild(li);
});
resolve(versions);
}).catch(ex => {
console.error("Failed to fetch version data", ex);
reject(ex);
})
});
});
}
/**
* Custom sorting function to sort an array of version strings.
*
* @param {string} a - The first version string to compare.
* @param {string} b - The second version string to compare.
* @returns {number} - A negative number if a should come before b, a positive number if b should come before a, or 0 if they are equal.
*/
function sortVersions(a, b) {
// Put 'develop' and 'latest' at the top
if (a === 'develop' || a === 'latest') return -1;
if (b === 'develop' || b === 'latest') return 1;
const versionA = (a.match(/v\d+(\.\d+)+/) || [])[0];
const versionB = (b.match(/v\d+(\.\d+)+/) || [])[0];
return versionB.localeCompare(versionA);
}
/**
* Change the version in a URL path.
*
* @param {string} url - The original URL to be modified.
* @param {string} newVersion - The new version to replace the existing version in the URL.
* @returns {string} The updated URL with the new version.
*/
function changeVersion(url, newVersion) {
const parsedURL = new URL(url);
const pathSegments = parsedURL.pathname.split('/');
// Modify the version
pathSegments[2] = newVersion;
// Reconstruct the URL
parsedURL.pathname = pathSegments.join('/');
return parsedURL.href;
}


@@ -1 +0,0 @@
window.SYNAPSE_VERSION = 'v1.90';

flake.lock generated

@@ -8,16 +8,16 @@
"pre-commit-hooks": "pre-commit-hooks"
},
"locked": {
"lastModified": 1690534632,
"narHash": "sha256-kOXS9x5y17VKliC7wZxyszAYrWdRl1JzggbQl0gyo94=",
"lastModified": 1688058187,
"narHash": "sha256-ipDcc7qrucpJ0+0eYNlwnE+ISTcq4m03qW+CWUshRXI=",
"owner": "cachix",
"repo": "devenv",
"rev": "6568e7e485a46bbf32051e4d6347fa1fed8b2f25",
"rev": "c8778e3dc30eb9043e218aaa3861d42d4992de77",
"type": "github"
},
"original": {
"owner": "cachix",
"ref": "main",
"ref": "v0.6.3",
"repo": "devenv",
"type": "github"
}


@@ -45,7 +45,7 @@
# Output a development shell for x86_64/aarch64 Linux/Darwin (MacOS).
systems.url = "github:nix-systems/default";
# A development environment manager built on Nix. See https://devenv.sh.
devenv.url = "github:cachix/devenv/main";
devenv.url = "github:cachix/devenv/v0.6.3";
# Rust toolchain.
rust-overlay.url = "github:oxalica/rust-overlay";
};
@@ -90,6 +90,9 @@
# The rust-analyzer language server implementation.
rust-analyzer
# For building any Python bindings to C or Rust code.
gcc
# Native dependencies for running Synapse.
icu
libffi
@@ -122,7 +125,7 @@
languages.python.poetry.activate.enable = true;
# Install all extra Python dependencies; this is needed to run the unit
# tests and utilise all Synapse features.
languages.python.poetry.install.arguments = ["--extras all"];
languages.python.poetry.install.arguments = ["-v" "--extras all"];
# Install the 'matrix-synapse' package from the local checkout.
languages.python.poetry.install.installRootPackage = true;

poetry.lock generated

@@ -558,13 +558,13 @@ dev = ["Sphinx", "coverage", "flake8", "lxml", "lxml-stubs", "memory-profiler",
[[package]]
name = "furo"
version = "2023.7.26"
version = "2023.5.20"
description = "A clean customisable Sphinx documentation theme."
optional = false
python-versions = ">=3.7"
files = [
{file = "furo-2023.7.26-py3-none-any.whl", hash = "sha256:1c7936929ec57c5ddecc7c85f07fa8b2ce536b5c89137764cca508be90e11efd"},
{file = "furo-2023.7.26.tar.gz", hash = "sha256:257f63bab97aa85213a1fa24303837a3c3f30be92901ec732fea74290800f59e"},
{file = "furo-2023.5.20-py3-none-any.whl", hash = "sha256:594a8436ddfe0c071f3a9e9a209c314a219d8341f3f1af33fdf7c69544fab9e6"},
{file = "furo-2023.5.20.tar.gz", hash = "sha256:40e09fa17c6f4b22419d122e933089226dcdb59747b5b6c79363089827dea16f"},
]
[package.dependencies]
@@ -973,13 +973,13 @@ i18n = ["Babel (>=2.7)"]
[[package]]
name = "jsonschema"
version = "4.19.0"
version = "4.18.3"
description = "An implementation of JSON Schema validation for Python"
optional = false
python-versions = ">=3.8"
files = [
{file = "jsonschema-4.19.0-py3-none-any.whl", hash = "sha256:043dc26a3845ff09d20e4420d6012a9c91c9aa8999fa184e7efcfeccb41e32cb"},
{file = "jsonschema-4.19.0.tar.gz", hash = "sha256:6e1e7569ac13be8139b2dd2c21a55d350066ee3f80df06c608b398cdc6f30e8f"},
{file = "jsonschema-4.18.3-py3-none-any.whl", hash = "sha256:aab78b34c2de001c6b692232f08c21a97b436fe18e0b817bf0511046924fceef"},
{file = "jsonschema-4.18.3.tar.gz", hash = "sha256:64b7104d72efe856bea49ca4af37a14a9eba31b40bb7238179f3803130fd34d9"},
]
[package.dependencies]
@@ -1610,13 +1610,13 @@ files = [
[[package]]
name = "phonenumbers"
version = "8.13.18"
version = "8.13.14"
description = "Python version of Google's common library for parsing, formatting, storing and validating international phone numbers."
optional = false
python-versions = "*"
files = [
{file = "phonenumbers-8.13.18-py2.py3-none-any.whl", hash = "sha256:3d802739a22592e4127139349937753dee9b6a20bdd5d56847cd885bdc766b1f"},
{file = "phonenumbers-8.13.18.tar.gz", hash = "sha256:b360c756252805d44b447b5bca6d250cf6bd6c69b6f0f4258f3bfe5ab81bef69"},
{file = "phonenumbers-8.13.14-py2.py3-none-any.whl", hash = "sha256:a4b20b6ba7dd402728f5cc8e86e1f29b1a873af45f5381dbee7e3083af497ff6"},
{file = "phonenumbers-8.13.14.tar.gz", hash = "sha256:5fa952b4abf9fccdaf1f130d96114a520c48890d4091b50a064e22c0fdc12dec"},
]
[[package]]
@@ -2980,13 +2980,13 @@ files = [
[[package]]
name = "types-opentracing"
version = "2.4.10.6"
version = "2.4.10.5"
description = "Typing stubs for opentracing"
optional = false
python-versions = "*"
files = [
{file = "types-opentracing-2.4.10.6.tar.gz", hash = "sha256:87a1bdfce9de5e555e30497663583b9b9c3bb494d029ef9806aa1f137c19e744"},
{file = "types_opentracing-2.4.10.6-py3-none-any.whl", hash = "sha256:25914c834db033a4a38fc322df0b5e5e14503b0ac97f78304ae180d721555e97"},
{file = "types-opentracing-2.4.10.5.tar.gz", hash = "sha256:852d13ab1324832835d50c00cfd58b9267f0e79ec3189e5664c2a90c26880fd4"},
{file = "types_opentracing-2.4.10.5-py3-none-any.whl", hash = "sha256:8f12ab4dce3e298a8e6655da9a6d52171e7a275357eae4cec22a1663d94023a7"},
]
[[package]]
@@ -3052,13 +3052,13 @@ types-urllib3 = "*"
[[package]]
name = "types-setuptools"
version = "68.0.0.3"
version = "68.0.0.0"
description = "Typing stubs for setuptools"
optional = false
python-versions = "*"
files = [
{file = "types-setuptools-68.0.0.3.tar.gz", hash = "sha256:d57ae6076100b5704b3cc869fdefc671e1baf4c2cd6643f84265dfc0b955bf05"},
{file = "types_setuptools-68.0.0.3-py3-none-any.whl", hash = "sha256:fec09e5c18264c5c09351c00be01a34456fb7a88e457abe97401325f84ad9d36"},
{file = "types-setuptools-68.0.0.0.tar.gz", hash = "sha256:fc958b4123b155ffc069a66d3af5fe6c1f9d0600c35c0c8444b2ab4147112641"},
{file = "types_setuptools-68.0.0.0-py3-none-any.whl", hash = "sha256:cc00e09ba8f535362cbe1ea8b8407d15d14b59c57f4190cceaf61a9e57616446"},
]
[[package]]


@@ -89,7 +89,7 @@ manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.90.0"
version = "1.89.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"


@@ -186,6 +186,9 @@ class EventContext(UnpersistedEventContextBase):
),
"app_service_id": self.app_service.id if self.app_service else None,
"partial_state": self.partial_state,
# add dummy delta_ids and prev_group for backwards compatibility
"delta_ids": None,
"prev_group": None,
}
@staticmethod
@@ -200,6 +203,13 @@ class EventContext(UnpersistedEventContextBase):
Returns:
The event context.
"""
# workaround for backwards/forwards compatibility: if the input doesn't have a value
# for "state_group_deltas" just assign an empty dict
state_group_deltas = input.get("state_group_deltas", None)
if state_group_deltas:
state_group_deltas = _decode_state_group_delta(state_group_deltas)
else:
state_group_deltas = {}
context = EventContext(
# We use the state_group and prev_state_id stuff to pull the
@@ -207,7 +217,7 @@ class EventContext(UnpersistedEventContextBase):
storage=storage,
state_group=input["state_group"],
state_group_before_event=input["state_group_before_event"],
state_group_deltas=_decode_state_group_delta(input["state_group_deltas"]),
state_group_deltas=state_group_deltas,
state_delta_due_to_event=_decode_state_dict(
input["state_delta_due_to_event"]
),


@@ -722,22 +722,6 @@ class DeviceHandler(DeviceWorkerHandler):
return {"success": True}
async def delete_dehydrated_device(self, user_id: str, device_id: str) -> None:
"""
Delete a stored dehydrated device.
Args:
user_id: the user_id to delete the device from
device_id: id of the dehydrated device to delete
"""
success = await self.store.remove_dehydrated_device(user_id, device_id)
if not success:
raise errors.NotFoundError()
await self.delete_devices(user_id, [device_id])
await self.store.delete_e2e_keys_by_device(user_id=user_id, device_id=device_id)
@wrap_as_background_process("_handle_new_device_update_async")
async def _handle_new_device_update_async(self) -> None:
"""Called when we have a new local device list update that we need to


@@ -34,7 +34,6 @@ import jinja2
from typing_extensions import ParamSpec
from twisted.internet import defer
from twisted.internet.interfaces import IDelayedCall
from twisted.web.resource import Resource
from synapse.api import errors
@@ -1243,46 +1242,6 @@ class ModuleApi:
"""
return self._hs.config.worker.run_background_tasks
def delayed_background_call(
self,
msec: float,
f: Callable,
*args: object,
desc: Optional[str] = None,
**kwargs: object,
) -> IDelayedCall:
"""Wraps a function as a background process and calls it in a given number of milliseconds.
The scheduled call is not persistent: if the current Synapse instance is
restarted before the call is made, the call will not be made.
Added in Synapse v1.90.0.
Args:
msec: How long to wait before calling, in milliseconds.
f: The function to call once. f can be either synchronous or
asynchronous, and must follow Synapse's logcontext rules.
More info about logcontexts is available at
https://matrix-org.github.io/synapse/latest/log_contexts.html
*args: Positional arguments to pass to function.
desc: The background task's description. Default to the function's name.
**kwargs: Keyword arguments to pass to function.
Returns:
IDelayedCall handle from twisted, which allows to cancel the delayed call if desired.
"""
if desc is None:
desc = f.__name__
return self._clock.call_later(
# convert ms to seconds as needed by call_later.
msec * 0.001,
run_as_background_process,
desc,
lambda: maybe_awaitable(f(*args, **kwargs)),
)
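# (Illustrative usage, not part of the original diff.) A module holding a
# reference to this ModuleApi could schedule a one-off background task to
# run in ten seconds like so:
#
#     module_api.delayed_background_call(10_000, prune_old_entries)
#
# The returned twisted IDelayedCall can be cancelled with its .cancel()
# method if the call is no longer needed.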
async def sleep(self, seconds: float) -> None:
"""Sleeps for the given number of seconds.


@@ -232,7 +232,7 @@ class DehydratedDeviceDataModel(RequestBodyModel):
class DehydratedDeviceServlet(RestServlet):
"""Retrieve or store a dehydrated device.
Implements MSC2697.
Implements either MSC2697 or MSC3814.
GET /org.matrix.msc2697.v2/dehydrated_device
@@ -266,12 +266,7 @@ class DehydratedDeviceServlet(RestServlet):
"""
PATTERNS = client_patterns(
"/org.matrix.msc2697.v2/dehydrated_device$",
releases=(),
)
def __init__(self, hs: "HomeServer"):
def __init__(self, hs: "HomeServer", msc2697: bool = True):
super().__init__()
self.hs = hs
self.auth = hs.get_auth()
@@ -279,6 +274,13 @@ class DehydratedDeviceServlet(RestServlet):
assert isinstance(handler, DeviceHandler)
self.device_handler = handler
self.PATTERNS = client_patterns(
"/org.matrix.msc2697.v2/dehydrated_device$"
if msc2697
else "/org.matrix.msc3814.v1/dehydrated_device$",
releases=(),
)
async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
requester = await self.auth.get_user_by_req(request)
dehydrated_device = await self.device_handler.get_dehydrated_device(
@@ -511,8 +513,10 @@ class DehydratedDeviceV2Servlet(RestServlet):
if dehydrated_device is not None:
(device_id, device_data) = dehydrated_device
await self.device_handler.delete_dehydrated_device(
requester.user.to_string(), device_id
result = await self.device_handler.rehydrate_device(
requester.user.to_string(),
self.auth.get_access_token_from_request(request),
device_id,
)
result = {"device_id": device_id}
@@ -534,14 +538,6 @@ class DehydratedDeviceV2Servlet(RestServlet):
requester = await self.auth.get_user_by_req(request)
user_id = requester.user.to_string()
old_dehydrated_device = await self.device_handler.get_dehydrated_device(user_id)
# if an old device exists, delete it before creating a new one
if old_dehydrated_device:
await self.device_handler.delete_dehydrated_device(
user_id, old_dehydrated_device[0]
)
device_info = submission.dict()
if "device_keys" not in device_info.keys():
raise SynapseError(
@@ -577,7 +573,7 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
if hs.config.worker.worker_app is None:
DeviceRestServlet(hs).register(http_server)
if hs.config.experimental.msc2697_enabled:
DehydratedDeviceServlet(hs).register(http_server)
DehydratedDeviceServlet(hs, msc2697=True).register(http_server)
ClaimDehydratedDeviceServlet(hs).register(http_server)
if hs.config.experimental.msc3814_enabled:
DehydratedDeviceV2Servlet(hs).register(http_server)


@@ -18,8 +18,6 @@ import logging
from typing import TYPE_CHECKING, Any, Collection, Iterable, List, Optional, Tuple
from synapse.api.constants import EventTypes
from synapse.config._base import Config
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.replication.tcp.streams import BackfillStream, CachesStream
from synapse.replication.tcp.streams.events import (
EventsStream,
@@ -54,21 +52,6 @@ PURGE_HISTORY_CACHE_NAME = "ph_cache_fake"
# As above, but for invalidating room caches on room deletion
DELETE_ROOM_CACHE_NAME = "dr_cache_fake"
# How long between cache invalidation table cleanups, once we have caught up
# with the backlog.
REGULAR_CLEANUP_INTERVAL_MS = Config.parse_duration("1h")
# How long between cache invalidation table cleanups, before we have caught
# up with the backlog.
CATCH_UP_CLEANUP_INTERVAL_MS = Config.parse_duration("1m")
# Maximum number of cache invalidation rows to delete at once.
CLEAN_UP_MAX_BATCH_SIZE = 20_000
# Keep cache invalidations for 7 days
# (This is likely to be quite excessive.)
RETENTION_PERIOD_OF_CACHE_INVALIDATIONS_MS = Config.parse_duration("7d")
class CacheInvalidationWorkerStore(SQLBaseStore):
def __init__(
@@ -115,18 +98,6 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
else:
self._cache_id_gen = None
# Occasionally clean up the cache invalidations stream table by deleting
# old rows.
# This is only applicable when Postgres is in use; this table is unused
# and not populated at all when SQLite is the active database engine.
if hs.config.worker.run_background_tasks and isinstance(
self.database_engine, PostgresEngine
):
self.hs.get_clock().call_later(
CATCH_UP_CLEANUP_INTERVAL_MS / 1000,
self._clean_up_cache_invalidation_wrapper,
)
async def get_all_updated_caches(
self, instance_name: str, last_id: int, current_id: int, limit: int
) -> Tuple[List[Tuple[int, tuple]], int, bool]:
@@ -583,104 +554,3 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
return self._cache_id_gen.get_current_token_for_writer(instance_name)
else:
return 0
@wrap_as_background_process("clean_up_old_cache_invalidations")
async def _clean_up_cache_invalidation_wrapper(self) -> None:
"""
Clean up cache invalidation stream table entries occasionally.
If we are behind (i.e. there are entries old enough to
be deleted but too many of them to be deleted in one go),
then we run slightly more frequently.
"""
delete_up_to: int = (
self.hs.get_clock().time_msec() - RETENTION_PERIOD_OF_CACHE_INVALIDATIONS_MS
)
in_backlog = await self._clean_up_batch_of_old_cache_invalidations(delete_up_to)
# Vary how long we wait before calling again depending on whether we
# are still sifting through backlog or we have caught up.
if in_backlog:
next_interval = CATCH_UP_CLEANUP_INTERVAL_MS
else:
next_interval = REGULAR_CLEANUP_INTERVAL_MS
self.hs.get_clock().call_later(
next_interval / 1000, self._clean_up_cache_invalidation_wrapper
)
async def _clean_up_batch_of_old_cache_invalidations(
self, delete_up_to_millisec: int
) -> bool:
"""
Remove old rows from the `cache_invalidation_stream_by_instance` table automatically (this table is unused in SQLite).
Up to `CLEAN_UP_MAX_BATCH_SIZE` rows will be deleted at once.
Returns true if and only if we were limited by batch size (i.e. we are in backlog:
there are more things to clean up).
"""
def _clean_up_batch_of_old_cache_invalidations_txn(
txn: LoggingTransaction,
) -> bool:
# First get the earliest stream ID
txn.execute(
"""
SELECT stream_id FROM cache_invalidation_stream_by_instance
ORDER BY stream_id ASC
LIMIT 1
"""
)
row = txn.fetchone()
if row is None:
return False
earliest_stream_id: int = row[0]
# Then find the last stream ID of the range we will delete
txn.execute(
"""
SELECT stream_id FROM cache_invalidation_stream_by_instance
WHERE stream_id <= ? AND invalidation_ts <= ?
ORDER BY stream_id DESC
LIMIT 1
""",
(earliest_stream_id + CLEAN_UP_MAX_BATCH_SIZE, delete_up_to_millisec),
)
row = txn.fetchone()
if row is None:
return False
cutoff_stream_id: int = row[0]
# Determine whether we are caught up or still catching up
txn.execute(
"""
SELECT invalidation_ts FROM cache_invalidation_stream_by_instance
WHERE stream_id > ?
ORDER BY stream_id ASC
LIMIT 1
""",
(cutoff_stream_id,),
)
row = txn.fetchone()
if row is None:
in_backlog = False
else:
# We are in backlog if the next row could have been deleted
# if we didn't have such a small batch size
in_backlog = row[0] <= delete_up_to_millisec
txn.execute(
"""
DELETE FROM cache_invalidation_stream_by_instance
WHERE ? <= stream_id AND stream_id <= ?
""",
(earliest_stream_id, cutoff_stream_id),
)
return in_backlog
return await self.db_pool.runInteraction(
"clean_up_old_cache_invalidations",
_clean_up_batch_of_old_cache_invalidations_txn,
)


@@ -379,141 +379,4 @@ class DehydratedDeviceTestCase(unittest.HomeserverTestCase):
access_token=token,
shorthand=False,
)
self.assertEqual(channel.code, 401)
@unittest.override_config(
{"experimental_features": {"msc2697_enabled": False, "msc3814_enabled": True}}
)
def test_msc3814_dehydrated_device_delete_works(self) -> None:
user = self.register_user("mikey", "pass")
token = self.login(user, "pass", device_id="device1")
content: JsonDict = {
"device_data": {
"algorithm": "m.dehydration.v1.olm",
},
"device_id": "device2",
"initial_device_display_name": "foo bar",
"device_keys": {
"user_id": "@mikey:test",
"device_id": "device2",
"valid_until_ts": "80",
"algorithms": [
"m.olm.curve25519-aes-sha2",
],
"keys": {
"<algorithm>:<device_id>": "<key_base64>",
},
"signatures": {
"<user_id>": {"<algorithm>:<device_id>": "<signature_base64>"}
},
},
}
channel = self.make_request(
"PUT",
"_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device",
content=content,
access_token=token,
shorthand=False,
)
self.assertEqual(channel.code, 200)
device_id = channel.json_body.get("device_id")
assert device_id is not None
self.assertIsInstance(device_id, str)
self.assertEqual("device2", device_id)
# ensure that keys were uploaded and available
channel = self.make_request(
"POST",
"/_matrix/client/r0/keys/query",
{
"device_keys": {
user: ["device2"],
},
},
token,
)
self.assertEqual(
channel.json_body["device_keys"][user]["device2"]["keys"],
{
"<algorithm>:<device_id>": "<key_base64>",
},
)
# delete the dehydrated device
channel = self.make_request(
"DELETE",
"_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device",
access_token=token,
shorthand=False,
)
self.assertEqual(channel.code, 200)
# ensure that keys are no longer available for deleted device
channel = self.make_request(
"POST",
"/_matrix/client/r0/keys/query",
{
"device_keys": {
user: ["device2"],
},
},
token,
)
self.assertEqual(channel.json_body["device_keys"], {"@mikey:test": {}})
# check that an old device is deleted when user PUTs a new device
# First, create a device
content["device_id"] = "device3"
content["device_keys"]["device_id"] = "device3"
channel = self.make_request(
"PUT",
"_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device",
content=content,
access_token=token,
shorthand=False,
)
self.assertEqual(channel.code, 200)
device_id = channel.json_body.get("device_id")
assert device_id is not None
self.assertIsInstance(device_id, str)
self.assertEqual("device3", device_id)
# create a second device without deleting first device
content["device_id"] = "device4"
content["device_keys"]["device_id"] = "device4"
channel = self.make_request(
"PUT",
"_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device",
content=content,
access_token=token,
shorthand=False,
)
self.assertEqual(channel.code, 200)
device_id = channel.json_body.get("device_id")
assert device_id is not None
self.assertIsInstance(device_id, str)
self.assertEqual("device4", device_id)
# check that the second device that was created is what is returned when we GET
channel = self.make_request(
"GET",
"_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device",
access_token=token,
shorthand=False,
)
self.assertEqual(channel.code, 200)
returned_device_id = channel.json_body["device_id"]
self.assertEqual(returned_device_id, "device4")
# and that if we query the keys for the first device they are not there
channel = self.make_request(
"POST",
"/_matrix/client/r0/keys/query",
{
"device_keys": {
user: ["device3"],
},
},
token,
)
self.assertEqual(channel.json_body["device_keys"], {"@mikey:test": {}})
self.assertEqual(channel.code, 404)