Compare commits

19 commits: erikj/effi...release-v1

| SHA1 |
|---|
| 08cb145977 |
| 190c990a76 |
| 76e392b0fa |
| d4ea465496 |
| 8ebfd577e2 |
| dbee081d14 |
| 99b7b801c3 |
| 641ff9ef7e |
| 05f8dada8b |
| 654902a758 |
| 4a711bf379 |
| fc566cdf0a |
| 3b6208b835 |
| 3b8348b06e |
| 5c7364fea5 |
| f08d05dd2c |
| e1fa42249c |
| fc1e534e41 |
| 835174180b |
CHANGES.md (66 lines changed)
@@ -1,3 +1,69 @@
# Synapse 1.89.0 (2023-08-01)

No significant changes since 1.89.0rc1.


# Synapse 1.89.0rc1 (2023-07-25)

### Features

- Add Unix Socket support for HTTP Replication Listeners. [Document and provide usage instructions](https://matrix-org.github.io/synapse/v1.89/usage/configuration/config_documentation.html#listeners) for utilizing Unix sockets in Synapse. Contributed by Jason Little. ([\#15708](https://github.com/matrix-org/synapse/issues/15708), [\#15924](https://github.com/matrix-org/synapse/issues/15924))
- Allow `+` in Matrix IDs, per [MSC4009](https://github.com/matrix-org/matrix-spec-proposals/pull/4009). ([\#15911](https://github.com/matrix-org/synapse/issues/15911))
- Support room version 11 from [MSC3820](https://github.com/matrix-org/matrix-spec-proposals/pull/3820). ([\#15912](https://github.com/matrix-org/synapse/issues/15912))
- Allow configuring the set of workers to proxy outbound federation traffic through via `outbound_federation_restricted_to`. ([\#15913](https://github.com/matrix-org/synapse/issues/15913), [\#15969](https://github.com/matrix-org/synapse/issues/15969))
- Implement [MSC3814](https://github.com/matrix-org/matrix-spec-proposals/pull/3814), dehydrated devices v2/shrivelled sessions and move [MSC2697](https://github.com/matrix-org/matrix-spec-proposals/pull/2697) behind a config flag. Contributed by Nico from Famedly, H-Shay and poljar. ([\#15929](https://github.com/matrix-org/synapse/issues/15929))

### Bugfixes

- Fix a long-standing bug where remote invites weren't correctly pushed. ([\#15820](https://github.com/matrix-org/synapse/issues/15820))
- Fix background schema updates failing over a large upgrade gap. ([\#15887](https://github.com/matrix-org/synapse/issues/15887))
- Fix a bug introduced in 1.86.0 where Synapse would fail to start with an empty `experimental_features` configuration setting. ([\#15925](https://github.com/matrix-org/synapse/issues/15925))
- Fix deploy annotations in the provided Grafana dashboard config, so that they show for any homeserver and not just matrix.org. Contributed by @wrjlewis. ([\#15957](https://github.com/matrix-org/synapse/issues/15957))
- Ensure a long state resolution does not starve the CPU by occasionally yielding to the reactor. ([\#15960](https://github.com/matrix-org/synapse/issues/15960))
- Properly handle redactions of creation events. ([\#15973](https://github.com/matrix-org/synapse/issues/15973))
- Fix a bug where resyncing stale device lists could block responding to federation transactions, and thus delay receiving new data from the remote server. ([\#15975](https://github.com/matrix-org/synapse/issues/15975))

### Improved Documentation

- Better clarify how to run a worker instance (pass both configs). ([\#15921](https://github.com/matrix-org/synapse/issues/15921))
- Improve [the documentation](https://matrix-org.github.io/synapse/v1.89/admin_api/user_admin_api.html#login-as-a-user) for the login as a user admin API. ([\#15938](https://github.com/matrix-org/synapse/issues/15938))
- Fix broken Arch Linux package link. Contributed by @SnipeXandrej. ([\#15981](https://github.com/matrix-org/synapse/issues/15981))

### Deprecations and Removals

- Remove support for calling the `/register` endpoint with an unspecced `user` property for application services. ([\#15928](https://github.com/matrix-org/synapse/issues/15928))

### Internal Changes

- Mark `get_user_in_directory` private since it is only used in tests. Also remove the cache from it. ([\#15884](https://github.com/matrix-org/synapse/issues/15884))
- Document which Python version runs on a given Linux distribution so we can more easily clean up later. ([\#15909](https://github.com/matrix-org/synapse/issues/15909))
- Add details to warning in log when we fail to fetch an alias. ([\#15922](https://github.com/matrix-org/synapse/issues/15922))
- Remove unneeded `__init__`. ([\#15926](https://github.com/matrix-org/synapse/issues/15926))
- Fix bug with read/write lock implementation. This is currently unused so has no observable effects. ([\#15933](https://github.com/matrix-org/synapse/issues/15933), [\#15958](https://github.com/matrix-org/synapse/issues/15958))
- Unbreak the nix development environment by pinning the Rust version to 1.70.0. ([\#15940](https://github.com/matrix-org/synapse/issues/15940))
- Update presence metrics to differentiate remote vs local users. ([\#15952](https://github.com/matrix-org/synapse/issues/15952))
- Stop reading from column `user_id` of table `profiles`. ([\#15955](https://github.com/matrix-org/synapse/issues/15955))
- Build packages for Debian Trixie. ([\#15961](https://github.com/matrix-org/synapse/issues/15961))
- Reduce the amount of state we pull out. ([\#15968](https://github.com/matrix-org/synapse/issues/15968))
- Speed up updating state in large rooms. ([\#15971](https://github.com/matrix-org/synapse/issues/15971))

### Updates to locked dependencies

* Bump anyhow from 1.0.71 to 1.0.72. ([\#15949](https://github.com/matrix-org/synapse/issues/15949))
* Bump click from 8.1.3 to 8.1.6. ([\#15984](https://github.com/matrix-org/synapse/issues/15984))
* Bump cryptography from 41.0.1 to 41.0.2. ([\#15943](https://github.com/matrix-org/synapse/issues/15943))
* Bump jsonschema from 4.17.3 to 4.18.3. ([\#15948](https://github.com/matrix-org/synapse/issues/15948))
* Bump pillow from 9.4.0 to 10.0.0. ([\#15986](https://github.com/matrix-org/synapse/issues/15986))
* Bump prometheus-client from 0.17.0 to 0.17.1. ([\#15945](https://github.com/matrix-org/synapse/issues/15945))
* Bump pydantic from 1.10.10 to 1.10.11. ([\#15946](https://github.com/matrix-org/synapse/issues/15946))
* Bump pygithub from 1.58.2 to 1.59.0. ([\#15834](https://github.com/matrix-org/synapse/issues/15834))
* Bump pyo3-log from 0.8.2 to 0.8.3. ([\#15951](https://github.com/matrix-org/synapse/issues/15951))
* Bump sentry-sdk from 1.26.0 to 1.28.1. ([\#15985](https://github.com/matrix-org/synapse/issues/15985))
* Bump serde_json from 1.0.100 to 1.0.103. ([\#15950](https://github.com/matrix-org/synapse/issues/15950))
* Bump types-pillow from 9.5.0.4 to 10.0.0.1. ([\#15932](https://github.com/matrix-org/synapse/issues/15932))
* Bump types-requests from 2.31.0.1 to 2.31.0.2. ([\#15983](https://github.com/matrix-org/synapse/issues/15983))
* Bump typing-extensions from 4.5.0 to 4.7.1. ([\#15947](https://github.com/matrix-org/synapse/issues/15947))

# Synapse 1.88.0 (2023-07-18)

This release
book.toml (12 lines changed)
@@ -34,6 +34,14 @@ additional-css = [
     "docs/website_files/table-of-contents.css",
     "docs/website_files/remove-nav-buttons.css",
     "docs/website_files/indent-section-headers.css",
+    "docs/website_files/version-picker.css",
 ]
-additional-js = ["docs/website_files/table-of-contents.js"]
-theme = "docs/website_files/theme"
+additional-js = [
+    "docs/website_files/table-of-contents.js",
+    "docs/website_files/version-picker.js",
+    "docs/website_files/version.js",
+]
+theme = "docs/website_files/theme"
+
+[preprocessor.schema_versions]
+command = "./scripts-dev/schema_versions.py"
Deleted changelog.d newsfragments (one hunk per removed file):

@@ -1 +0,0 @@
-Add Unix Socket support for HTTP Replication Listeners. Document and provide usage instructions for utilizing Unix sockets in Synapse. Contributed by Jason Little.
@@ -1 +0,0 @@
-Fix long-standing bug where remote invites weren't correctly pushed.
@@ -1 +0,0 @@
-Mark `get_user_in_directory` private since it is only used in tests. Also remove the cache from it.
@@ -1 +0,0 @@
-Fix background schema updates failing over a large upgrade gap.
@@ -1 +0,0 @@
-Document which Python version runs on a given Linux distribution so we can more easily clean up later.
@@ -1 +0,0 @@
-Allow `+` in Matrix IDs, per [MSC4009](https://github.com/matrix-org/matrix-spec-proposals/pull/4009).
@@ -1 +0,0 @@
-Support room version 11 from [MSC3820](https://github.com/matrix-org/matrix-spec-proposals/pull/3820).
@@ -1 +0,0 @@
-Allow configuring the set of workers to proxy outbound federation traffic through via `outbound_federation_restricted_to`.
@@ -1 +0,0 @@
-Better clarify how to run a worker instance (pass both configs).
@@ -1 +0,0 @@
-Add details to warning in log when we fail to fetch an alias.
@@ -1 +0,0 @@
-Add Unix Socket support for HTTP Replication Listeners. Document and provide usage instructions for utilizing Unix sockets in Synapse. Contributed by Jason Little.
@@ -1 +0,0 @@
-Fix a bug introduced in 1.86.0 where Synapse starting with an empty `experimental_features` configuration setting.
@@ -1 +0,0 @@
-Remove unneeded `__init__`.
@@ -1 +0,0 @@
-Remove support for calling the `/register` endpoint with an unspecced `user` property for application services.
@@ -1 +0,0 @@
-Fix bug with read/write lock implementation. This is currently unused so has no observable effects.
@@ -1 +0,0 @@
-Improve the documentation for the login as a user admin API.
@@ -1 +0,0 @@
-Unbreak the nix development environment by pinning the Rust version to 1.70.0.
@@ -1 +0,0 @@
-Update presence metrics to differentiate remote vs local users.
@@ -1 +0,0 @@
-Fix bug with read/write lock implementation. This is currently unused so has no observable effects.
@@ -1 +0,0 @@
-Ensure a long state res does not starve CPU by occasionally yielding to the reactor.
@@ -1 +0,0 @@
-Reduce the amount of state we pull out.
@@ -1 +0,0 @@
-Allow configuring the set of workers to proxy outbound federation traffic through via `outbound_federation_restricted_to`.
@@ -63,7 +63,7 @@
         "uid": "${DS_PROMETHEUS}"
       },
       "enable": true,
-      "expr": "changes(process_start_time_seconds{instance=\"matrix.org\",job=~\"synapse\"}[$bucket_size]) * on (instance, job) group_left(version) synapse_build_info{instance=\"matrix.org\",job=\"synapse\"}",
+      "expr": "changes(process_start_time_seconds{instance=\"$instance\",job=~\"synapse\"}[$bucket_size]) * on (instance, job) group_left(version) synapse_build_info{instance=\"$instance\",job=\"synapse\"}",
       "iconColor": "purple",
       "name": "deploys",
       "titleFormat": "Deployed {{version}}"
debian/changelog (vendored, 12 lines changed)
@@ -1,3 +1,15 @@
+matrix-synapse-py3 (1.89.0) stable; urgency=medium
+
+  * New Synapse release 1.89.0.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 01 Aug 2023 11:07:15 +0100
+
+matrix-synapse-py3 (1.89.0~rc1) stable; urgency=medium
+
+  * New Synapse release 1.89.0rc1.
+
+ -- Synapse Packaging team <packages@matrix.org>  Tue, 25 Jul 2023 14:31:07 +0200
+
 matrix-synapse-py3 (1.88.0) stable; urgency=medium

   * New Synapse release 1.88.0.
@@ -135,8 +135,8 @@ Unofficial packages are built for SLES 15 in the openSUSE:Backports:SLE-15 repository

 #### ArchLinux

-The quickest way to get up and running with ArchLinux is probably with the community package
-<https://archlinux.org/packages/community/x86_64/matrix-synapse/>, which should pull in most of
+The quickest way to get up and running with ArchLinux is probably with the package provided by ArchLinux
+<https://archlinux.org/packages/extra/x86_64/matrix-synapse/>, which should pull in most of
 the necessary dependencies.

 pip may be outdated (6.0.7-1 and needs to be upgraded to 6.0.8-1):
@@ -24,6 +24,11 @@ Finally, we also stylise the chapter titles in the left sidebar by indenting them
 slightly so that they are more visually distinguishable from the section headers
 (the bold titles). This is done through the `indent-section-headers.css` file.

+In addition to these modifications, we have added a version picker to the documentation.
+Users can switch between the documentation for different versions of Synapse.
+This functionality was implemented through the `version-picker.js` and
+`version-picker.css` files.
+
 More information can be found in mdbook's official documentation for
 [injecting page JS/CSS](https://rust-lang.github.io/mdBook/format/config.html)
 and
@@ -131,6 +131,18 @@
             <i class="fa fa-search"></i>
         </button>
         {{/if}}
+        <div class="version-picker">
+            <div class="dropdown">
+                <div class="select">
+                    <span></span>
+                    <i class="fa fa-chevron-down"></i>
+                </div>
+                <input type="hidden" name="version">
+                <ul class="dropdown-menu">
+                    <!-- Versions will be added dynamically in version-picker.js -->
+                </ul>
+            </div>
+        </div>
     </div>

     <h1 class="menu-title">{{ book_title }}</h1>
@@ -309,4 +321,4 @@
 {{/if}}

 </body>
-</html>
+</html>
docs/website_files/version-picker.css (new file, 78 lines)
@@ -0,0 +1,78 @@
.version-picker {
    display: flex;
    align-items: center;
}

.version-picker .dropdown {
    width: 130px;
    max-height: 29px;
    margin-left: 10px;
    display: inline-block;
    border-radius: 4px;
    border: 1px solid var(--theme-popup-border);
    position: relative;
    font-size: 13px;
    color: var(--fg);
    height: 100%;
    text-align: left;
}
.version-picker .dropdown .select {
    cursor: pointer;
    display: block;
    padding: 5px 2px 5px 15px;
}
.version-picker .dropdown .select > i {
    font-size: 10px;
    color: var(--fg);
    cursor: pointer;
    float: right;
    line-height: 20px !important;
}
.version-picker .dropdown:hover {
    border: 1px solid var(--theme-popup-border);
}
.version-picker .dropdown:active {
    background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active:hover,
.version-picker .dropdown.active {
    border: 1px solid var(--theme-popup-border);
    border-radius: 2px 2px 0 0;
    background-color: var(--theme-popup-bg);
}
.version-picker .dropdown.active .select > i {
    transform: rotate(-180deg);
}
.version-picker .dropdown .dropdown-menu {
    position: absolute;
    background-color: var(--theme-popup-bg);
    width: 100%;
    left: -1px;
    right: 1px;
    margin-top: 1px;
    border: 1px solid var(--theme-popup-border);
    border-radius: 0 0 4px 4px;
    overflow: hidden;
    display: none;
    max-height: 300px;
    overflow-y: auto;
    z-index: 9;
}
.version-picker .dropdown .dropdown-menu li {
    font-size: 12px;
    padding: 6px 20px;
    cursor: pointer;
}
.version-picker .dropdown .dropdown-menu {
    padding: 0;
    list-style: none;
}
.version-picker .dropdown .dropdown-menu li:hover {
    background-color: var(--theme-hover);
}
.version-picker .dropdown .dropdown-menu li.active::before {
    display: inline-block;
    content: "✓";
    margin-inline-start: -14px;
    width: 14px;
}
docs/website_files/version-picker.js (new file, 127 lines)
@@ -0,0 +1,127 @@
const dropdown = document.querySelector('.version-picker .dropdown');
const dropdownMenu = dropdown.querySelector('.dropdown-menu');

fetchVersions(dropdown, dropdownMenu).then(() => {
    initializeVersionDropdown(dropdown, dropdownMenu);
});

/**
 * Initialize the dropdown functionality for version selection.
 *
 * @param {Element} dropdown - The dropdown element.
 * @param {Element} dropdownMenu - The dropdown menu element.
 */
function initializeVersionDropdown(dropdown, dropdownMenu) {
    // Toggle the dropdown menu on click
    dropdown.addEventListener('click', function () {
        this.setAttribute('tabindex', 1);
        this.classList.toggle('active');
        dropdownMenu.style.display = (dropdownMenu.style.display === 'block') ? 'none' : 'block';
    });

    // Remove the 'active' class and hide the dropdown menu on focusout
    dropdown.addEventListener('focusout', function () {
        this.classList.remove('active');
        dropdownMenu.style.display = 'none';
    });

    // Handle item selection within the dropdown menu
    const dropdownMenuItems = dropdownMenu.querySelectorAll('li');
    dropdownMenuItems.forEach(function (item) {
        item.addEventListener('click', function () {
            dropdownMenuItems.forEach(function (item) {
                item.classList.remove('active');
            });
            this.classList.add('active');
            dropdown.querySelector('span').textContent = this.textContent;
            dropdown.querySelector('input').value = this.getAttribute('id');

            window.location.href = changeVersion(window.location.href, this.textContent);
        });
    });
};

/**
 * This function fetches the available versions from a GitHub repository
 * and inserts them into the version picker.
 *
 * @param {Element} dropdown - The dropdown element.
 * @param {Element} dropdownMenu - The dropdown menu element.
 * @returns {Promise<Array<string>>} A promise that resolves with an array of available versions.
 */
function fetchVersions(dropdown, dropdownMenu) {
    return new Promise((resolve, reject) => {
        window.addEventListener("load", () => {

            fetch("https://api.github.com/repos/matrix-org/synapse/git/trees/gh-pages", {
                cache: "force-cache",
            }).then(res =>
                res.json()
            ).then(resObject => {
                const excluded = ['dev-docs', 'v1.91.0', 'v1.80.0', 'v1.69.0'];
                const tree = resObject.tree.filter(item => item.type === "tree" && !excluded.includes(item.path));
                const versions = tree.map(item => item.path).sort(sortVersions);

                // Create a list of <li> items for versions
                versions.forEach((version) => {
                    const li = document.createElement("li");
                    li.textContent = version;
                    li.id = version;

                    if (window.SYNAPSE_VERSION === version) {
                        li.classList.add('active');
                        dropdown.querySelector('span').textContent = version;
                        dropdown.querySelector('input').value = version;
                    }

                    dropdownMenu.appendChild(li);
                });

                resolve(versions);

            }).catch(ex => {
                console.error("Failed to fetch version data", ex);
                reject(ex);
            })
        });
    });
}

/**
 * Custom sorting function to sort an array of version strings.
 *
 * @param {string} a - The first version string to compare.
 * @param {string} b - The second version string to compare.
 * @returns {number} - A negative number if a should come before b, a positive number if b should come before a, or 0 if they are equal.
 */
function sortVersions(a, b) {
    // Put 'develop' and 'latest' at the top
    if (a === 'develop' || a === 'latest') return -1;
    if (b === 'develop' || b === 'latest') return 1;

    const versionA = (a.match(/v\d+(\.\d+)+/) || [])[0];
    const versionB = (b.match(/v\d+(\.\d+)+/) || [])[0];

    return versionB.localeCompare(versionA);
}

/**
 * Change the version in a URL path.
 *
 * @param {string} url - The original URL to be modified.
 * @param {string} newVersion - The new version to replace the existing version in the URL.
 * @returns {string} The updated URL with the new version.
 */
function changeVersion(url, newVersion) {
    const parsedURL = new URL(url);
    const pathSegments = parsedURL.pathname.split('/');

    // Modify the version
    pathSegments[2] = newVersion;

    // Reconstruct the URL
    parsedURL.pathname = pathSegments.join('/');

    return parsedURL.href;
}
docs/website_files/version.js (new file, 1 line)
@@ -0,0 +1 @@
window.SYNAPSE_VERSION = 'v1.89';
poetry.lock (generated, 163 lines changed)
@@ -397,13 +397,13 @@ files = [

 [[package]]
 name = "click"
-version = "8.1.3"
+version = "8.1.6"
 description = "Composable command line interface toolkit"
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "click-8.1.3-py3-none-any.whl", hash = "sha256:bb4d8133cb15a609f44e8213d9b391b0809795062913b383c62be0ee95b1db48"},
-    {file = "click-8.1.3.tar.gz", hash = "sha256:7682dc8afb30297001674575ea00d1814d808d6a36af415a82bd481d37ba7b8e"},
+    {file = "click-8.1.6-py3-none-any.whl", hash = "sha256:fa244bb30b3b5ee2cae3da8f55c9e5e0c0e86093306301fb418eb9dc40fbded5"},
+    {file = "click-8.1.6.tar.gz", hash = "sha256:48ee849951919527a045bfe3bf7baa8a959c423134e1a5b98c05c20ba75a1cbd"},
 ]

 [package.dependencies]
@@ -1621,92 +1621,71 @@ files = [

 [[package]]
 name = "pillow"
-version = "9.4.0"
+version = "10.0.0"
 description = "Python Imaging Library (Fork)"
 optional = false
-python-versions = ">=3.7"
+python-versions = ">=3.8"
 files = [
-    {file = "Pillow-9.4.0-1-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1b4b4e9dda4f4e4c4e6896f93e84a8f0bcca3b059de9ddf67dac3c334b1195e1"},
-    {file = "Pillow-9.4.0-1-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:fb5c1ad6bad98c57482236a21bf985ab0ef42bd51f7ad4e4538e89a997624e12"},
-    {file = "Pillow-9.4.0-1-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:f0caf4a5dcf610d96c3bd32932bfac8aee61c96e60481c2a0ea58da435e25acd"},
-    {file = "Pillow-9.4.0-1-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:3f4cc516e0b264c8d4ccd6b6cbc69a07c6d582d8337df79be1e15a5056b258c9"},
-    {file = "Pillow-9.4.0-1-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:b8c2f6eb0df979ee99433d8b3f6d193d9590f735cf12274c108bd954e30ca858"},
-    {file = "Pillow-9.4.0-1-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b70756ec9417c34e097f987b4d8c510975216ad26ba6e57ccb53bc758f490dab"},
-    {file = "Pillow-9.4.0-1-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:43521ce2c4b865d385e78579a082b6ad1166ebed2b1a2293c3be1d68dd7ca3b9"},
-    {file = "Pillow-9.4.0-2-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:9d9a62576b68cd90f7075876f4e8444487db5eeea0e4df3ba298ee38a8d067b0"},
-    {file = "Pillow-9.4.0-2-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:87708d78a14d56a990fbf4f9cb350b7d89ee8988705e58e39bdf4d82c149210f"},
-    {file = "Pillow-9.4.0-2-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:8a2b5874d17e72dfb80d917213abd55d7e1ed2479f38f001f264f7ce7bae757c"},
-    {file = "Pillow-9.4.0-2-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:83125753a60cfc8c412de5896d10a0a405e0bd88d0470ad82e0869ddf0cb3848"},
-    {file = "Pillow-9.4.0-2-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:9e5f94742033898bfe84c93c831a6f552bb629448d4072dd312306bab3bd96f1"},
-    {file = "Pillow-9.4.0-2-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:013016af6b3a12a2f40b704677f8b51f72cb007dac785a9933d5c86a72a7fe33"},
-    {file = "Pillow-9.4.0-2-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:99d92d148dd03fd19d16175b6d355cc1b01faf80dae93c6c3eb4163709edc0a9"},
-    {file = "Pillow-9.4.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:2968c58feca624bb6c8502f9564dd187d0e1389964898f5e9e1fbc8533169157"},
-    {file = "Pillow-9.4.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:c5c1362c14aee73f50143d74389b2c158707b4abce2cb055b7ad37ce60738d47"},
-    {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:bd752c5ff1b4a870b7661234694f24b1d2b9076b8bf337321a814c612665f343"},
-    {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:9a3049a10261d7f2b6514d35bbb7a4dfc3ece4c4de14ef5876c4b7a23a0e566d"},
-    {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:16a8df99701f9095bea8a6c4b3197da105df6f74e6176c5b410bc2df2fd29a57"},
-    {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:94cdff45173b1919350601f82d61365e792895e3c3a3443cf99819e6fbf717a5"},
-    {file = "Pillow-9.4.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:ed3e4b4e1e6de75fdc16d3259098de7c6571b1a6cc863b1a49e7d3d53e036070"},
-    {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d5b2f8a31bd43e0f18172d8ac82347c8f37ef3e0b414431157718aa234991b28"},
-    {file = "Pillow-9.4.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:09b89ddc95c248ee788328528e6a2996e09eaccddeeb82a5356e92645733be35"},
-    {file = "Pillow-9.4.0-cp310-cp310-win32.whl", hash = "sha256:f09598b416ba39a8f489c124447b007fe865f786a89dbfa48bb5cf395693132a"},
-    {file = "Pillow-9.4.0-cp310-cp310-win_amd64.whl", hash = "sha256:f6e78171be3fb7941f9910ea15b4b14ec27725865a73c15277bc39f5ca4f8391"},
-    {file = "Pillow-9.4.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:3fa1284762aacca6dc97474ee9c16f83990b8eeb6697f2ba17140d54b453e133"},
-    {file = "Pillow-9.4.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:eaef5d2de3c7e9b21f1e762f289d17b726c2239a42b11e25446abf82b26ac132"},
-    {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a4dfdae195335abb4e89cc9762b2edc524f3c6e80d647a9a81bf81e17e3fb6f0"},
-    {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:6abfb51a82e919e3933eb137e17c4ae9c0475a25508ea88993bb59faf82f3b35"},
-    {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:451f10ef963918e65b8869e17d67db5e2f4ab40e716ee6ce7129b0cde2876eab"},
-    {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:6663977496d616b618b6cfa43ec86e479ee62b942e1da76a2c3daa1c75933ef4"},
-    {file = "Pillow-9.4.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:60e7da3a3ad1812c128750fc1bc14a7ceeb8d29f77e0a2356a8fb2aa8925287d"},
-    {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:19005a8e58b7c1796bc0167862b1f54a64d3b44ee5d48152b06bb861458bc0f8"},
-    {file = "Pillow-9.4.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:f715c32e774a60a337b2bb8ad9839b4abf75b267a0f18806f6f4f5f1688c4b5a"},
-    {file = "Pillow-9.4.0-cp311-cp311-win32.whl", hash = "sha256:b222090c455d6d1a64e6b7bb5f4035c4dff479e22455c9eaa1bdd4c75b52c80c"},
-    {file = "Pillow-9.4.0-cp311-cp311-win_amd64.whl", hash = "sha256:ba6612b6548220ff5e9df85261bddc811a057b0b465a1226b39bfb8550616aee"},
-    {file = "Pillow-9.4.0-cp37-cp37m-macosx_10_10_x86_64.whl", hash = "sha256:5f532a2ad4d174eb73494e7397988e22bf427f91acc8e6ebf5bb10597b49c493"},
-    {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5dd5a9c3091a0f414a963d427f920368e2b6a4c2f7527fdd82cde8ef0bc7a327"},
-    {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:ef21af928e807f10bf4141cad4746eee692a0dd3ff56cfb25fce076ec3cc8abe"},
-    {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:847b114580c5cc9ebaf216dd8c8dbc6b00a3b7ab0131e173d7120e6deade1f57"},
-    {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_aarch64.whl", hash = "sha256:653d7fb2df65efefbcbf81ef5fe5e5be931f1ee4332c2893ca638c9b11a409c4"},
-    {file = "Pillow-9.4.0-cp37-cp37m-manylinux_2_28_x86_64.whl", hash = "sha256:46f39cab8bbf4a384ba7cb0bc8bae7b7062b6a11cfac1ca4bc144dea90d4a9f5"},
-    {file = "Pillow-9.4.0-cp37-cp37m-win32.whl", hash = "sha256:7ac7594397698f77bce84382929747130765f66406dc2cd8b4ab4da68ade4c6e"},
-    {file = "Pillow-9.4.0-cp37-cp37m-win_amd64.whl", hash = "sha256:46c259e87199041583658457372a183636ae8cd56dbf3f0755e0f376a7f9d0e6"},
-    {file = "Pillow-9.4.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:0e51f608da093e5d9038c592b5b575cadc12fd748af1479b5e858045fff955a9"},
-    {file = "Pillow-9.4.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:765cb54c0b8724a7c12c55146ae4647e0274a839fb6de7bcba841e04298e1011"},
-    {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:519e14e2c49fcf7616d6d2cfc5c70adae95682ae20f0395e9280db85e8d6c4df"},
-    {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d197df5489004db87d90b918033edbeee0bd6df3848a204bca3ff0a903bef837"},
-    {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0845adc64fe9886db00f5ab68c4a8cd933ab749a87747555cec1c95acea64b0b"},
-    {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:e1339790c083c5a4de48f688b4841f18df839eb3c9584a770cbd818b33e26d5d"},
-    {file = "Pillow-9.4.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:a96e6e23f2b79433390273eaf8cc94fec9c6370842e577ab10dabdcc7ea0a66b"},
-    {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:7cfc287da09f9d2a7ec146ee4d72d6ea1342e770d975e49a8621bf54eaa8f30f"},
-    {file = "Pillow-9.4.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:d7081c084ceb58278dd3cf81f836bc818978c0ccc770cbbb202125ddabec6628"},
-    {file = "Pillow-9.4.0-cp38-cp38-win32.whl", hash = "sha256:df41112ccce5d47770a0c13651479fbcd8793f34232a2dd9faeccb75eb5d0d0d"},
-    {file = "Pillow-9.4.0-cp38-cp38-win_amd64.whl", hash = "sha256:7a21222644ab69ddd9967cfe6f2bb420b460dae4289c9d40ff9a4896e7c35c9a"},
-    {file = "Pillow-9.4.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:0f3269304c1a7ce82f1759c12ce731ef9b6e95b6df829dccd9fe42912cc48569"},
-    {file = "Pillow-9.4.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:cb362e3b0976dc994857391b776ddaa8c13c28a16f80ac6522c23d5257156bed"},
-    {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a2e0f87144fcbbe54297cae708c5e7f9da21a4646523456b00cc956bd4c65815"},
-    {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:28676836c7796805914b76b1837a40f76827ee0d5398f72f7dcc634bae7c6264"},
-    {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0884ba7b515163a1a05440a138adeb722b8a6ae2c2b33aea93ea3118dd3a899e"},
-    {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:53dcb50fbdc3fb2c55431a9b30caeb2f7027fcd2aeb501459464f0214200a503"},
-    {file = "Pillow-9.4.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:e8c5cf126889a4de385c02a2c3d3aba4b00f70234bfddae82a5eaa3ee6d5e3e6"},
-    {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:6c6b1389ed66cdd174d040105123a5a1bc91d0aa7059c7261d20e583b6d8cbd2"},
-    {file = "Pillow-9.4.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:0dd4c681b82214b36273c18ca7ee87065a50e013112eea7d78c7a1b89a739153"},
-    {file = "Pillow-9.4.0-cp39-cp39-win32.whl", hash = "sha256:6d9dfb9959a3b0039ee06c1a1a90dc23bac3b430842dcb97908ddde05870601c"},
-    {file = "Pillow-9.4.0-cp39-cp39-win_amd64.whl", hash = "sha256:54614444887e0d3043557d9dbc697dbb16cfb5a35d672b7a0fcc1ed0cf1c600b"},
-    {file = "Pillow-9.4.0-pp38-pypy38_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b9b752ab91e78234941e44abdecc07f1f0d8f51fb62941d32995b8161f68cfe5"},
-    {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:d3b56206244dc8711f7e8b7d6cad4663917cd5b2d950799425076681e8766286"},
-    {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:aabdab8ec1e7ca7f1434d042bf8b1e92056245fb179790dc97ed040361f16bfd"},
-    {file = "Pillow-9.4.0-pp38-pypy38_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:db74f5562c09953b2c5f8ec4b7dfd3f5421f31811e97d1dbc0a7c93d6e3a24df"},
-    {file = "Pillow-9.4.0-pp38-pypy38_pp73-win_amd64.whl", hash = "sha256:e9d7747847c53a16a729b6ee5e737cf170f7a16611c143d95aa60a109a59c336"},
-    {file = "Pillow-9.4.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:b52ff4f4e002f828ea6483faf4c4e8deea8d743cf801b74910243c58acc6eda3"},
-    {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:575d8912dca808edd9acd6f7795199332696d3469665ef26163cd090fa1f8bfa"},
-    {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:c3c4ed2ff6760e98d262e0cc9c9a7f7b8a9f61aa4d47c58835cdaf7b0b8811bb"},
-    {file = "Pillow-9.4.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:e621b0246192d3b9cb1dc62c78cfa4c6f6d2ddc0ec207d43c0dedecb914f152a"},
-    {file = "Pillow-9.4.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:8f127e7b028900421cad64f51f75c051b628db17fb00e099eb148761eed598c9"},
-    {file = "Pillow-9.4.0.tar.gz", hash = "sha256:a1c2d7780448eb93fbcc3789bf3916aa5720d942e37945f4056680317f1cd23e"},
+    {file = "Pillow-10.0.0-cp310-cp310-macosx_10_10_x86_64.whl", hash = "sha256:1f62406a884ae75fb2f818694469519fb685cc7eaff05d3451a9ebe55c646891"},
+    {file = "Pillow-10.0.0-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:d5db32e2a6ccbb3d34d87c87b432959e0db29755727afb37290e10f6e8e62614"},
+    {file = "Pillow-10.0.0-cp310-cp310-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:edf4392b77bdc81f36e92d3a07a5cd072f90253197f4a52a55a8cec48a12483b"},
+    {file = "Pillow-10.0.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:520f2a520dc040512699f20fa1c363eed506e94248d71f85412b625026f6142c"},
+    {file = "Pillow-10.0.0-cp310-cp310-manylinux_2_28_aarch64.whl", hash = "sha256:8c11160913e3dd06c8ffdb5f233a4f254cb449f4dfc0f8f4549eda9e542c93d1"},
+    {file = "Pillow-10.0.0-cp310-cp310-manylinux_2_28_x86_64.whl", hash = "sha256:a74ba0c356aaa3bb8e3eb79606a87669e7ec6444be352870623025d75a14a2bf"},
+    {file = "Pillow-10.0.0-cp310-cp310-musllinux_1_1_aarch64.whl", hash = "sha256:d5d0dae4cfd56969d23d94dc8e89fb6a217be461c69090768227beb8ed28c0a3"},
+    {file = "Pillow-10.0.0-cp310-cp310-musllinux_1_1_x86_64.whl", hash = "sha256:22c10cc517668d44b211717fd9775799ccec4124b9a7f7b3635fc5386e584992"},
+    {file = "Pillow-10.0.0-cp310-cp310-win_amd64.whl", hash = "sha256:dffe31a7f47b603318c609f378ebcd57f1554a3a6a8effbc59c3c69f804296de"},
+    {file = "Pillow-10.0.0-cp311-cp311-macosx_10_10_x86_64.whl", hash = "sha256:9fb218c8a12e51d7ead2a7c9e101a04982237d4855716af2e9499306728fb485"},
+    {file = "Pillow-10.0.0-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:d35e3c8d9b1268cbf5d3670285feb3528f6680420eafe35cccc686b73c1e330f"},
+    {file = "Pillow-10.0.0-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:3ed64f9ca2f0a95411e88a4efbd7a29e5ce2cea36072c53dd9d26d9c76f753b3"},
+    {file = "Pillow-10.0.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:0b6eb5502f45a60a3f411c63187db83a3d3107887ad0d036c13ce836f8a36f1d"},
+    {file = "Pillow-10.0.0-cp311-cp311-manylinux_2_28_aarch64.whl", hash = "sha256:c1fbe7621c167ecaa38ad29643d77a9ce7311583761abf7836e1510c580bf3dd"},
+    {file = "Pillow-10.0.0-cp311-cp311-manylinux_2_28_x86_64.whl", hash = "sha256:cd25d2a9d2b36fcb318882481367956d2cf91329f6892fe5d385c346c0649629"},
+    {file = "Pillow-10.0.0-cp311-cp311-musllinux_1_1_aarch64.whl", hash = "sha256:3b08d4cc24f471b2c8ca24ec060abf4bebc6b144cb89cba638c720546b1cf538"},
+    {file = "Pillow-10.0.0-cp311-cp311-musllinux_1_1_x86_64.whl", hash = "sha256:d737a602fbd82afd892ca746392401b634e278cb65d55c4b7a8f48e9ef8d008d"},
+    {file = "Pillow-10.0.0-cp311-cp311-win_amd64.whl", hash = "sha256:3a82c40d706d9aa9734289740ce26460a11aeec2d9c79b7af87bb35f0073c12f"},
+    {file = "Pillow-10.0.0-cp311-cp311-win_arm64.whl", hash = "sha256:bc2ec7c7b5d66b8ec9ce9f720dbb5fa4bace0f545acd34870eff4a369b44bf37"},
+    {file = "Pillow-10.0.0-cp312-cp312-macosx_10_10_x86_64.whl", hash = "sha256:d80cf684b541685fccdd84c485b31ce73fc5c9b5d7523bf1394ce134a60c6883"},
+    {file = "Pillow-10.0.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:76de421f9c326da8f43d690110f0e79fe3ad1e54be811545d7d91898b4c8493e"},
+    {file = "Pillow-10.0.0-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:81ff539a12457809666fef6624684c008e00ff6bf455b4b89fd00a140eecd640"},
+    {file = "Pillow-10.0.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ce543ed15570eedbb85df19b0a1a7314a9c8141a36ce089c0a894adbfccb4568"},
+    {file = "Pillow-10.0.0-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:685ac03cc4ed5ebc15ad5c23bc555d68a87777586d970c2c3e216619a5476223"},
+    {file = "Pillow-10.0.0-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:d72e2ecc68a942e8cf9739619b7f408cc7b272b279b56b2c83c6123fcfa5cdff"},
+    {file = "Pillow-10.0.0-cp312-cp312-musllinux_1_1_aarch64.whl", hash = "sha256:d50b6aec14bc737742ca96e85d6d0a5f9bfbded018264b3b70ff9d8c33485551"},
+    {file = "Pillow-10.0.0-cp312-cp312-musllinux_1_1_x86_64.whl", hash = "sha256:00e65f5e822decd501e374b0650146063fbb30a7264b4d2744bdd7b913e0cab5"},
+    {file = "Pillow-10.0.0-cp312-cp312-win_amd64.whl", hash = "sha256:f31f9fdbfecb042d046f9d91270a0ba28368a723302786c0009ee9b9f1f60199"},
+    {file = "Pillow-10.0.0-cp312-cp312-win_arm64.whl", hash = "sha256:1ce91b6ec08d866b14413d3f0bbdea7e24dfdc8e59f562bb77bc3fe60b6144ca"},
+    {file = "Pillow-10.0.0-cp38-cp38-macosx_10_10_x86_64.whl", hash = "sha256:349930d6e9c685c089284b013478d6f76e3a534e36ddfa912cde493f235372f3"},
+    {file = "Pillow-10.0.0-cp38-cp38-macosx_11_0_arm64.whl", hash = "sha256:3a684105f7c32488f7153905a4e3015a3b6c7182e106fe3c37fbb5ef3e6994c3"},
+    {file = "Pillow-10.0.0-cp38-cp38-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:b4f69b3700201b80bb82c3a97d5e9254084f6dd5fb5b16fc1a7b974260f89f43"},
+    {file = "Pillow-10.0.0-cp38-cp38-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:3f07ea8d2f827d7d2a49ecf1639ec02d75ffd1b88dcc5b3a61bbb37a8759ad8d"},
+    {file = "Pillow-10.0.0-cp38-cp38-manylinux_2_28_aarch64.whl", hash = "sha256:040586f7d37b34547153fa383f7f9aed68b738992380ac911447bb78f2abe530"},
+    {file = "Pillow-10.0.0-cp38-cp38-manylinux_2_28_x86_64.whl", hash = "sha256:f88a0b92277de8e3ca715a0d79d68dc82807457dae3ab8699c758f07c20b3c51"},
+    {file = "Pillow-10.0.0-cp38-cp38-musllinux_1_1_aarch64.whl", hash = "sha256:c7cf14a27b0d6adfaebb3ae4153f1e516df54e47e42dcc073d7b3d76111a8d86"},
+    {file = "Pillow-10.0.0-cp38-cp38-musllinux_1_1_x86_64.whl", hash = "sha256:3400aae60685b06bb96f99a21e1ada7bc7a413d5f49bce739828ecd9391bb8f7"},
+    {file = "Pillow-10.0.0-cp38-cp38-win_amd64.whl", hash = "sha256:dbc02381779d412145331789b40cc7b11fdf449e5d94f6bc0b080db0a56ea3f0"},
+    {file = "Pillow-10.0.0-cp39-cp39-macosx_10_10_x86_64.whl", hash = "sha256:9211e7ad69d7c9401cfc0e23d49b69ca65ddd898976d660a2fa5904e3d7a9baa"},
+    {file = "Pillow-10.0.0-cp39-cp39-macosx_11_0_arm64.whl", hash = "sha256:faaf07ea35355b01a35cb442dd950d8f1bb5b040a7787791a535de13db15ed90"},
+    {file = "Pillow-10.0.0-cp39-cp39-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:c9f72a021fbb792ce98306ffb0c348b3c9cb967dce0f12a49aa4c3d3fdefa967"},
+    {file = "Pillow-10.0.0-cp39-cp39-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9f7c16705f44e0504a3a2a14197c1f0b32a95731d251777dcb060aa83022cb2d"},
+    {file = "Pillow-10.0.0-cp39-cp39-manylinux_2_28_aarch64.whl", hash = "sha256:76edb0a1fa2b4745fb0c99fb9fb98f8b180a1bbceb8be49b087e0b21867e77d3"},
+    {file = "Pillow-10.0.0-cp39-cp39-manylinux_2_28_x86_64.whl", hash = "sha256:368ab3dfb5f49e312231b6f27b8820c823652b7cd29cfbd34090565a015e99ba"},
+    {file = "Pillow-10.0.0-cp39-cp39-musllinux_1_1_aarch64.whl", hash = "sha256:608bfdee0d57cf297d32bcbb3c728dc1da0907519d1784962c5f0c68bb93e5a3"},
+    {file = "Pillow-10.0.0-cp39-cp39-musllinux_1_1_x86_64.whl", hash = "sha256:5c6e3df6bdd396749bafd45314871b3d0af81ff935b2d188385e970052091017"},
+    {file = "Pillow-10.0.0-cp39-cp39-win_amd64.whl", hash = "sha256:7be600823e4c8631b74e4a0d38384c73f680e6105a7d3c6824fcf226c178c7e6"},
+    {file = "Pillow-10.0.0-pp310-pypy310_pp73-macosx_10_10_x86_64.whl", hash = "sha256:92be919bbc9f7d09f7ae343c38f5bb21c973d2576c1d45600fce4b74bafa7ac0"},
+    {file = "Pillow-10.0.0-pp310-pypy310_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:8f8182b523b2289f7c415f589118228d30ac8c355baa2f3194ced084dac2dbba"},
+    {file = "Pillow-10.0.0-pp310-pypy310_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:38250a349b6b390ee6047a62c086d3817ac69022c127f8a5dc058c31ccef17f3"},
+    {file = "Pillow-10.0.0-pp310-pypy310_pp73-win_amd64.whl", hash = "sha256:88af2003543cc40c80f6fca01411892ec52b11021b3dc22ec3bc9d5afd1c5334"},
+    {file = "Pillow-10.0.0-pp39-pypy39_pp73-macosx_10_10_x86_64.whl", hash = "sha256:c189af0545965fa8d3b9613cfdb0cd37f9d71349e0f7750e1fd704648d475ed2"},
+    {file = "Pillow-10.0.0-pp39-pypy39_pp73-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:ce7b031a6fc11365970e6a5686d7ba8c63e4c1cf1ea143811acbb524295eabed"},
+    {file = "Pillow-10.0.0-pp39-pypy39_pp73-manylinux_2_28_x86_64.whl", hash = "sha256:db24668940f82321e746773a4bc617bfac06ec831e5c88b643f91f122a785684"},
+    {file = "Pillow-10.0.0-pp39-pypy39_pp73-win_amd64.whl", hash = "sha256:efe8c0681042536e0d06c11f48cebe759707c9e9abf880ee213541c5b46c5bf3"},
+    {file = "Pillow-10.0.0.tar.gz", hash = "sha256:9c82b5b3e043c7af0d95792d0d20ccf68f61a1fec6b3530e718b688422727396"},
 ]

 [package.extras]
-docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinx-issues (>=3.0.1)", "sphinx-removed-in", "sphinxext-opengraph"]
+docs = ["furo", "olefile", "sphinx (>=2.4)", "sphinx-copybutton", "sphinx-inline-tabs", "sphinx-removed-in", "sphinxext-opengraph"]
 tests = ["check-manifest", "coverage", "defusedxml", "markdown2", "olefile", "packaging", "pyroma", "pytest", "pytest-cov", "pytest-timeout"]

 [[package]]
@@ -1902,13 +1881,13 @@ email = ["email-validator (>=1.0.3)"]

 [[package]]
 name = "pygithub"
-version = "1.58.2"
+version = "1.59.0"
 description = "Use the full Github API v3"
 optional = false
 python-versions = ">=3.7"
 files = [
-    {file = "PyGithub-1.58.2-py3-none-any.whl", hash = "sha256:f435884af617c6debaa76cbc355372d1027445a56fbc39972a3b9ed4968badc8"},
-    {file = "PyGithub-1.58.2.tar.gz", hash = "sha256:1e6b1b7afe31f75151fb81f7ab6b984a7188a852bdb123dbb9ae90023c3ce60f"},
+    {file = "PyGithub-1.59.0-py3-none-any.whl", hash = "sha256:126bdbae72087d8d038b113aab6b059b4553cb59348e3024bb1a1cae406ace9e"},
+    {file = "PyGithub-1.59.0.tar.gz", hash = "sha256:6e05ff49bac3caa7d1d6177a10c6e55a3e20c85b92424cc198571fd0cf786690"},
 ]

 [package.dependencies]
@@ -2406,13 +2385,13 @@ doc = ["Sphinx", "sphinx-rtd-theme"]

 [[package]]
 name = "sentry-sdk"
-version = "1.26.0"
+version = "1.28.1"
 description = "Python client for Sentry (https://sentry.io)"
 optional = true
 python-versions = "*"
 files = [
-    {file = "sentry-sdk-1.26.0.tar.gz", hash = "sha256:760e4fb6d01c994110507133e08ecd4bdf4d75ee4be77f296a3579796cf73134"},
-    {file = "sentry_sdk-1.26.0-py2.py3-none-any.whl", hash = "sha256:0c9f858337ec3781cf4851972ef42bba8c9828aea116b0dbed8f38c5f9a1896c"},
+    {file = "sentry-sdk-1.28.1.tar.gz", hash = "sha256:dcd88c68aa64dae715311b5ede6502fd684f70d00a7cd4858118f0ba3153a3ae"},
+    {file = "sentry_sdk-1.28.1-py2.py3-none-any.whl", hash = "sha256:6bdb25bd9092478d3a817cb0d01fa99e296aea34d404eac3ca0037faa5c2aa0a"},
 ]

 [package.dependencies]
@@ -3059,13 +3038,13 @@ files = [

 [[package]]
 name = "types-requests"
-version = "2.31.0.1"
+version = "2.31.0.2"
 description = "Typing stubs for requests"
 optional = false
 python-versions = "*"
 files = [
-    {file = "types-requests-2.31.0.1.tar.gz", hash = "sha256:3de667cffa123ce698591de0ad7db034a5317457a596eb0b4944e5a9d9e8d1ac"},
-    {file = "types_requests-2.31.0.1-py3-none-any.whl", hash = "sha256:afb06ef8f25ba83d59a1d424bd7a5a939082f94b94e90ab5e6116bd2559deaa3"},
+    {file = "types-requests-2.31.0.2.tar.gz", hash = "sha256:6aa3f7faf0ea52d728bb18c0a0d1522d9bfd8c72d26ff6f61bfc3d06a411cf40"},
+    {file = "types_requests-2.31.0.2-py3-none-any.whl", hash = "sha256:56d181c85b5925cbc59f4489a57e72a8b2166f18273fd8ba7b6fe0c0b986f12a"},
 ]

 [package.dependencies]
@@ -89,7 +89,7 @@ manifest-path = "rust/Cargo.toml"

 [tool.poetry]
 name = "matrix-synapse"
-version = "1.88.0"
+version = "1.89.0"
 description = "Homeserver for the Matrix decentralised comms protocol"
 authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
 license = "Apache-2.0"
@@ -34,6 +34,7 @@ DISTS = (
     "ubuntu:jammy",  # 22.04 LTS (EOL 2027-04) (our EOL forced by Python 3.10 is 2026-10-04)
     "ubuntu:kinetic",  # 22.10 (EOL 2023-07-20) (our EOL forced by Python 3.10 is 2026-10-04)
     "ubuntu:lunar",  # 23.04 (EOL 2024-01) (our EOL forced by Python 3.11 is 2027-10-24)
+    "debian:trixie",  # (EOL not specified yet)
 )

 DESC = """\
@@ -247,6 +247,27 @@ class ExperimentalConfig(Config):
         # MSC3026 (busy presence state)
         self.msc3026_enabled: bool = experimental.get("msc3026_enabled", False)

+        # MSC2697 (device dehydration)
+        # Enabled by default since this option was added after adding the feature.
+        # It is not recommended that MSC2697 and MSC3814 both be enabled at
+        # once.
+        self.msc2697_enabled: bool = experimental.get("msc2697_enabled", True)
+
+        # MSC3814 (dehydrated devices with SSSS)
+        # This is an alternative method to achieve the same goals as MSC2697.
+        # It is not recommended that MSC2697 and MSC3814 both be enabled at
+        # once.
+        self.msc3814_enabled: bool = experimental.get("msc3814_enabled", False)
+
+        if self.msc2697_enabled and self.msc3814_enabled:
+            raise ConfigError(
+                "MSC2697 and MSC3814 should not both be enabled.",
+                (
+                    "experimental_features",
+                    "msc3814_enabled",
+                ),
+            )
+
         # MSC3244 (room version capabilities)
         self.msc3244_enabled: bool = experimental.get("msc3244_enabled", True)
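For orientation, a minimal Python sketch of the rule the added block enforces. The plain dict below stands in for the `experimental_features` section of a homeserver config; this illustrates the defaults and the mutual-exclusion check above, and is not Synapse's actual config parsing code:

```python
# Illustration only: defaults and the mutual-exclusion rule from the diff above.
experimental_features = {
    "msc2697_enabled": False,  # defaults to True when omitted
    "msc3814_enabled": True,   # defaults to False when omitted
}

msc2697 = experimental_features.get("msc2697_enabled", True)
msc3814 = experimental_features.get("msc3814_enabled", False)

# Mirrors the ConfigError raised above: both flags may not be on at once.
assert not (msc2697 and msc3814), "MSC2697 and MSC3814 should not both be enabled."
```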
@@ -136,11 +136,13 @@ def prune_event_dict(room_version: RoomVersion, event_dict: JsonDict) -> JsonDict:
         ]

     elif event_type == EventTypes.Create:
-        # MSC2176 rules state that create events cannot be redacted.
         if room_version.updated_redaction_rules:
-            return event_dict
-
-        add_fields("creator")
+            # MSC2176 rules state that create events cannot have their `content` redacted.
+            new_content = event_dict["content"]
+        elif not room_version.implicit_room_creator:
+            # Some room versions give meaning to `creator`
+            add_fields("creator")
+
     elif event_type == EventTypes.JoinRules:
         add_fields("join_rule")
         if room_version.restricted_join_rule:
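A hedged illustration of what the new branch means in practice. The helper below is ours, not Synapse's API: under the updated MSC2176-style redaction rules a create event keeps its whole `content`, while older room versions preserve only the explicit `creator` field (when the version names one):

```python
# Illustration only; names are ours, not Synapse's.
def redacted_create_content(content: dict, updated_redaction_rules: bool,
                            implicit_room_creator: bool) -> dict:
    if updated_redaction_rules:
        # Room version 11 / MSC2176: the whole content survives redaction.
        return dict(content)
    if not implicit_room_creator:
        # Older versions: only `creator` carries protocol meaning.
        return {"creator": content["creator"]}
    return {}

assert redacted_create_content(
    {"creator": "@alice:example.org", "topic": "x"}, True, True
) == {"creator": "@alice:example.org", "topic": "x"}
```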
@@ -653,6 +653,7 @@
     async def store_dehydrated_device(
         self,
         user_id: str,
+        device_id: Optional[str],
         device_data: JsonDict,
         initial_device_display_name: Optional[str] = None,
     ) -> str:
@@ -661,6 +662,7 @@ class DeviceHandler(DeviceWorkerHandler):

         Args:
             user_id: the user that we are storing the device for
+            device_id: device id supplied by client
             device_data: the dehydrated device information
             initial_device_display_name: The display name to use for the device
         Returns:
@@ -668,7 +670,7 @@
         """
         device_id = await self.check_device_registered(
             user_id,
-            None,
+            device_id,
             initial_device_display_name,
         )
         old_device_id = await self.store.store_dehydrated_device(
@@ -1124,7 +1126,14 @@ class DeviceListUpdater(DeviceListWorkerUpdater):
         )

         if resync:
-            await self.multi_user_device_resync([user_id])
+            # We mark as stale up front in case we get restarted.
+            await self.store.mark_remote_users_device_caches_as_stale([user_id])
+            run_as_background_process(
+                "_maybe_retry_device_resync",
+                self.multi_user_device_resync,
+                [user_id],
+                False,
+            )
         else:
             # Simply update the single device, since we know that is the only
             # change (because of the single prev_id matching the current cache)
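The resync change above follows a durability pattern worth spelling out: persist the "stale" marker before scheduling the asynchronous work, so a restart between the two steps cannot lose the resync. A sketch of the idea, with `asyncio.create_task` standing in for Synapse's `run_as_background_process`:

```python
import asyncio

# Sketch, not Synapse code: durable-intent-first ordering for retryable work.
async def resync_user_devices(store, resync_fn, user_id: str) -> None:
    # 1. Durably record that this user's device cache is stale. If the
    #    process dies after this line, a later retry can find the flag
    #    and redo the resync.
    await store.mark_remote_users_device_caches_as_stale([user_id])
    # 2. Only then fire off the actual resync without awaiting it, so the
    #    caller (e.g. an inbound federation transaction) is not blocked.
    asyncio.create_task(resync_fn([user_id]))
```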
@@ -13,10 +13,11 @@
 # limitations under the License.

 import logging
-from typing import TYPE_CHECKING, Any, Dict
+from http import HTTPStatus
+from typing import TYPE_CHECKING, Any, Dict, Optional

 from synapse.api.constants import EduTypes, EventContentFields, ToDeviceEventTypes
-from synapse.api.errors import SynapseError
+from synapse.api.errors import Codes, SynapseError
 from synapse.api.ratelimiting import Ratelimiter
 from synapse.logging.context import run_in_background
 from synapse.logging.opentracing import (
@@ -48,6 +49,9 @@ class DeviceMessageHandler:
         self.store = hs.get_datastores().main
         self.notifier = hs.get_notifier()
         self.is_mine = hs.is_mine
+        if hs.config.experimental.msc3814_enabled:
+            self.event_sources = hs.get_event_sources()
+            self.device_handler = hs.get_device_handler()

         # We only need to poke the federation sender explicitly if its on the
         # same instance. Other federation sender instances will get notified by
@@ -303,3 +307,103 @@ class DeviceMessageHandler:
         # Enqueue a new federation transaction to send the new
         # device messages to each remote destination.
         self.federation_sender.send_device_messages(destination)
+
+    async def get_events_for_dehydrated_device(
+        self,
+        requester: Requester,
+        device_id: str,
+        since_token: Optional[str],
+        limit: int,
+    ) -> JsonDict:
+        """Fetches up to `limit` events sent to `device_id` starting from `since_token`
+        and returns the new since token. If there are no more messages, returns an empty
+        array.
+
+        Args:
+            requester: the user requesting the messages
+            device_id: ID of the dehydrated device
+            since_token: stream id to start from when fetching messages
+            limit: the number of messages to fetch
+        Returns:
+            A dict containing the to-device messages, as well as a token that the client
+            can provide in the next call to fetch the next batch of messages
+        """
+
+        user_id = requester.user.to_string()
+
+        # only allow fetching messages for the dehydrated device id currently associated
+        # with the user
+        dehydrated_device = await self.device_handler.get_dehydrated_device(user_id)
+        if dehydrated_device is None:
+            raise SynapseError(
+                HTTPStatus.FORBIDDEN,
+                "No dehydrated device exists",
+                Codes.FORBIDDEN,
+            )
+
+        dehydrated_device_id, _ = dehydrated_device
+        if device_id != dehydrated_device_id:
+            raise SynapseError(
+                HTTPStatus.FORBIDDEN,
+                "You may only fetch messages for your dehydrated device",
+                Codes.FORBIDDEN,
+            )
+
+        since_stream_id = 0
+        if since_token:
+            if not since_token.startswith("d"):
+                raise SynapseError(
+                    HTTPStatus.BAD_REQUEST,
+                    "from parameter %r has an invalid format" % (since_token,),
+                    errcode=Codes.INVALID_PARAM,
+                )
+
+            try:
+                since_stream_id = int(since_token[1:])
+            except Exception:
+                raise SynapseError(
+                    HTTPStatus.BAD_REQUEST,
+                    "from parameter %r has an invalid format" % (since_token,),
+                    errcode=Codes.INVALID_PARAM,
+                )
+
+        # if we have a since token, delete any to-device messages before that token
+        # (since we now know that the device has received them)
+        deleted = await self.store.delete_messages_for_device(
+            user_id, device_id, since_stream_id
+        )
+        logger.debug(
+            "Deleted %d to-device messages up to %d for user_id %s device_id %s",
+            deleted,
+            since_stream_id,
+            user_id,
+            device_id,
+        )
+
+        to_token = self.event_sources.get_current_token().to_device_key
+
+        messages, stream_id = await self.store.get_messages_for_device(
+            user_id, device_id, since_stream_id, to_token, limit
+        )
+
+        for message in messages:
+            # Remove the message id before sending to client
+            message_id = message.pop("message_id", None)
+            if message_id:
+                set_tag(SynapseTags.TO_DEVICE_EDU_ID, message_id)
+
+        logger.debug(
+            "Returning %d to-device messages between %d and %d (current token: %d) for "
+            "dehydrated device %s, user_id %s",
+            len(messages),
+            since_stream_id,
+            stream_id,
+            to_token,
+            device_id,
+            user_id,
+        )
+
+        return {
+            "events": messages,
+            "next_batch": f"d{stream_id}",
+        }
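The pagination token used above is deliberately simple: the letter `d` followed by a to-device stream id. A self-contained sketch of the round trip (helper names are ours, not Synapse's):

```python
from typing import Optional

def parse_since_token(since_token: Optional[str]) -> int:
    """Parse a 'd<stream_id>' token; 0 means 'from the beginning'."""
    if not since_token:
        return 0
    if not since_token.startswith("d"):
        raise ValueError(f"invalid since token: {since_token!r}")
    return int(since_token[1:])  # raises ValueError on garbage, like the handler

def next_batch_token(stream_id: int) -> str:
    """Format the token returned as `next_batch` above."""
    return f"d{stream_id}"

assert parse_since_token(None) == 0
assert parse_since_token(next_batch_token(42)) == 42
```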
@@ -1565,12 +1565,11 @@
         if state_entry.state_group in self._external_cache_joined_hosts_updates:
             return

-        state = await state_entry.get_state(
-            self._storage_controllers.state, StateFilter.all()
-        )
         with opentracing.start_active_span("get_joined_hosts"):
-            joined_hosts = await self.store.get_joined_hosts(
-                event.room_id, state, state_entry
+            joined_hosts = (
+                await self._storage_controllers.state.get_joined_hosts(
+                    event.room_id, state_entry
+                )
             )

         # Note that the expiry times must be larger than the expiry time in
@@ -14,19 +14,22 @@
 # limitations under the License.

 import logging
+from http import HTTPStatus
 from typing import TYPE_CHECKING, List, Optional, Tuple

 from pydantic import Extra, StrictStr

 from synapse.api import errors
-from synapse.api.errors import NotFoundError, UnrecognizedRequestError
+from synapse.api.errors import NotFoundError, SynapseError, UnrecognizedRequestError
 from synapse.handlers.device import DeviceHandler
 from synapse.http.server import HttpServer
 from synapse.http.servlet import (
     RestServlet,
     parse_and_validate_json_object_from_request,
+    parse_integer,
 )
 from synapse.http.site import SynapseRequest
+from synapse.replication.http.devices import ReplicationUploadKeysForUserRestServlet
 from synapse.rest.client._base import client_patterns, interactive_auth_handler
 from synapse.rest.client.models import AuthenticationData
 from synapse.rest.models import RequestBodyModel
@@ -229,6 +232,8 @@ class DehydratedDeviceDataModel(RequestBodyModel):
 class DehydratedDeviceServlet(RestServlet):
     """Retrieve or store a dehydrated device.

+    Implements either MSC2697 or MSC3814.
+
     GET /org.matrix.msc2697.v2/dehydrated_device

     HTTP/1.1 200 OK

@@ -261,9 +266,7 @@ class DehydratedDeviceServlet(RestServlet):

     """

-    PATTERNS = client_patterns("/org.matrix.msc2697.v2/dehydrated_device$", releases=())
-
-    def __init__(self, hs: "HomeServer"):
+    def __init__(self, hs: "HomeServer", msc2697: bool = True):
         super().__init__()
         self.hs = hs
         self.auth = hs.get_auth()

@@ -271,6 +274,13 @@ class DehydratedDeviceServlet(RestServlet):
         assert isinstance(handler, DeviceHandler)
         self.device_handler = handler

+        self.PATTERNS = client_patterns(
+            "/org.matrix.msc2697.v2/dehydrated_device$"
+            if msc2697
+            else "/org.matrix.msc3814.v1/dehydrated_device$",
+            releases=(),
+        )
+
     async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         requester = await self.auth.get_user_by_req(request)
         dehydrated_device = await self.device_handler.get_dehydrated_device(

@@ -293,6 +303,7 @@ class DehydratedDeviceServlet(RestServlet):

         device_id = await self.device_handler.store_dehydrated_device(
             requester.user.to_string(),
+            None,
             submission.device_data.dict(),
             submission.initial_device_display_name,
         )
@@ -347,6 +358,210 @@ class ClaimDehydratedDeviceServlet(RestServlet):
         return 200, result


+class DehydratedDeviceEventsServlet(RestServlet):
+    PATTERNS = client_patterns(
+        "/org.matrix.msc3814.v1/dehydrated_device/(?P<device_id>[^/]*)/events$",
+        releases=(),
+    )
+
+    def __init__(self, hs: "HomeServer"):
+        super().__init__()
+        self.message_handler = hs.get_device_message_handler()
+        self.auth = hs.get_auth()
+        self.store = hs.get_datastores().main
+
+    class PostBody(RequestBodyModel):
+        next_batch: Optional[StrictStr]
+
+    async def on_POST(
+        self, request: SynapseRequest, device_id: str
+    ) -> Tuple[int, JsonDict]:
+        requester = await self.auth.get_user_by_req(request)
+
+        next_batch = parse_and_validate_json_object_from_request(
+            request, self.PostBody
+        ).next_batch
+        limit = parse_integer(request, "limit", 100)
+
+        msgs = await self.message_handler.get_events_for_dehydrated_device(
+            requester=requester,
+            device_id=device_id,
+            since_token=next_batch,
+            limit=limit,
+        )
+
+        return 200, msgs
+
+
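Taken together with the handler above, a client drains a dehydrated device's inbox by POSTing to this endpoint in a loop, echoing back each `next_batch` token. A hedged sketch of that flow over plain HTTP (the homeserver URL and token are placeholders, and `requests` stands in for whatever HTTP client you use):

```python
import requests


def drain_dehydrated_device(hs_url: str, access_token: str, device_id: str) -> list:
    """Fetch all pending to-device messages for a dehydrated device."""
    events: list = []
    next_batch = None
    while True:
        body = {} if next_batch is None else {"next_batch": next_batch}
        resp = requests.post(
            f"{hs_url}/_matrix/client/unstable/org.matrix.msc3814.v1"
            f"/dehydrated_device/{device_id}/events",
            json=body,
            headers={"Authorization": f"Bearer {access_token}"},
            timeout=10,
        )
        resp.raise_for_status()
        data = resp.json()
        events.extend(data["events"])
        if not data["events"]:
            # an empty page means we have caught up; sending the token back
            # also lets the server delete the already-delivered messages
            return events
        next_batch = data["next_batch"]
```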
+class DehydratedDeviceV2Servlet(RestServlet):
+    """Upload, retrieve, or delete a dehydrated device.
+
+    GET /org.matrix.msc3814.v1/dehydrated_device
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+
+    {
+        "device_id": "dehydrated_device_id",
+        "device_data": {
+            "algorithm": "org.matrix.msc2697.v1.dehydration.v1.olm",
+            "account": "dehydrated_device"
+        }
+    }
+
+    PUT /org.matrix.msc3814.v1/dehydrated_device
+    Content-Type: application/json
+
+    {
+        "device_id": "dehydrated_device_id",
+        "device_data": {
+            "algorithm": "org.matrix.msc2697.v1.dehydration.v1.olm",
+            "account": "dehydrated_device"
+        },
+        "device_keys": {
+            "user_id": "<user_id>",
+            "device_id": "<device_id>",
+            "valid_until_ts": <millisecond_timestamp>,
+            "algorithms": [
+                "m.olm.curve25519-aes-sha2",
+            ],
+            "keys": {
+                "<algorithm>:<device_id>": "<key_base64>",
+            },
+            "signatures": {
+                "<user_id>": {
+                    "<algorithm>:<device_id>": "<signature_base64>"
+                }
+            }
+        },
+        "fallback_keys": {
+            "<algorithm>:<device_id>": "<key_base64>",
+            "signed_<algorithm>:<device_id>": {
+                "fallback": true,
+                "key": "<key_base64>",
+                "signatures": {
+                    "<user_id>": {
+                        "<algorithm>:<device_id>": "<key_base64>"
+                    }
+                }
+            }
+        },
+        "one_time_keys": {
+            "<algorithm>:<key_id>": "<key_base64>"
+        }
+    }
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+
+    {
+        "device_id": "dehydrated_device_id"
+    }
+
+    DELETE /org.matrix.msc3814.v1/dehydrated_device
+
+    HTTP/1.1 200 OK
+    Content-Type: application/json
+
+    {
+        "device_id": "dehydrated_device_id",
+    }
+    """
+
+    PATTERNS = [
+        *client_patterns("/org.matrix.msc3814.v1/dehydrated_device$", releases=()),
+    ]
+
+    def __init__(self, hs: "HomeServer"):
+        super().__init__()
+        self.hs = hs
+        self.auth = hs.get_auth()
+        handler = hs.get_device_handler()
+        assert isinstance(handler, DeviceHandler)
+        self.e2e_keys_handler = hs.get_e2e_keys_handler()
+        self.device_handler = handler
+
+        if hs.config.worker.worker_app is None:
+            # if main process
+            self.key_uploader = self.e2e_keys_handler.upload_keys_for_user
+        else:
+            # then a worker
+            self.key_uploader = ReplicationUploadKeysForUserRestServlet.make_client(hs)
+
+    async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        requester = await self.auth.get_user_by_req(request)
+
+        dehydrated_device = await self.device_handler.get_dehydrated_device(
+            requester.user.to_string()
+        )
+
+        if dehydrated_device is not None:
+            (device_id, device_data) = dehydrated_device
+            result = {"device_id": device_id, "device_data": device_data}
+            return 200, result
+        else:
+            raise errors.NotFoundError("No dehydrated device available")
+
+    async def on_DELETE(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        requester = await self.auth.get_user_by_req(request)
+
+        dehydrated_device = await self.device_handler.get_dehydrated_device(
+            requester.user.to_string()
+        )
+
+        if dehydrated_device is not None:
+            (device_id, device_data) = dehydrated_device
+
+            result = await self.device_handler.rehydrate_device(
+                requester.user.to_string(),
+                self.auth.get_access_token_from_request(request),
+                device_id,
+            )
+
+            result = {"device_id": device_id}
+
+            return 200, result
+        else:
+            raise errors.NotFoundError("No dehydrated device available")
+
+    class PutBody(RequestBodyModel):
+        device_data: DehydratedDeviceDataModel
+        device_id: StrictStr
+        initial_device_display_name: Optional[StrictStr]
+
+        class Config:
+            extra = Extra.allow
+
+    async def on_PUT(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
+        submission = parse_and_validate_json_object_from_request(request, self.PutBody)
+        requester = await self.auth.get_user_by_req(request)
+        user_id = requester.user.to_string()
+
+        device_info = submission.dict()
+        if "device_keys" not in device_info.keys():
+            raise SynapseError(
+                HTTPStatus.BAD_REQUEST,
+                "Device key(s) not found, these must be provided.",
+            )
+
+        # TODO: These two operations, creating a device and storing the
+        # device's keys, should be atomic.
+        device_id = await self.device_handler.store_dehydrated_device(
+            requester.user.to_string(),
+            submission.device_id,
+            submission.device_data.dict(),
+            submission.initial_device_display_name,
+        )
+
+        # TODO: Do we need to do something with the result here?
+        await self.key_uploader(
+            user_id=user_id, device_id=submission.device_id, keys=submission.dict()
+        )
+
+        return 200, {"device_id": device_id}
+
+
 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     if (
         hs.config.worker.worker_app is None
@@ -354,7 +569,12 @@ def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     ):
         DeleteDevicesRestServlet(hs).register(http_server)
         DevicesRestServlet(hs).register(http_server)

     if hs.config.worker.worker_app is None:
         DeviceRestServlet(hs).register(http_server)
-        DehydratedDeviceServlet(hs).register(http_server)
-        ClaimDehydratedDeviceServlet(hs).register(http_server)
+        if hs.config.experimental.msc2697_enabled:
+            DehydratedDeviceServlet(hs, msc2697=True).register(http_server)
+            ClaimDehydratedDeviceServlet(hs).register(http_server)
+        if hs.config.experimental.msc3814_enabled:
+            DehydratedDeviceV2Servlet(hs).register(http_server)
+            DehydratedDeviceEventsServlet(hs).register(http_server)
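Both families of endpoints are now opt-in. The unit tests later in this change flip the flags with `@unittest.override_config`; a sketch of the same keys, which live under `experimental_features` in the homeserver configuration:

```python
# Mirrors the override used by the tests below; the same keys appear under
# `experimental_features` in homeserver.yaml.
experimental_features = {
    "msc2697_enabled": False,  # legacy dehydrated devices (MSC2697)
    "msc3814_enabled": True,   # dehydrated devices v2 (MSC3814)
}
```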
@@ -268,8 +268,7 @@ class StateHandler:
             The hosts in the room at the given events
         """
         entry = await self.resolve_state_groups_for_events(room_id, event_ids)
-        state = await entry.get_state(self._state_storage_controller, StateFilter.all())
-        return await self.store.get_joined_hosts(room_id, state, entry)
+        return await self._state_storage_controller.get_joined_hosts(room_id, entry)

     @trace
     @tag_args
@@ -12,6 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
+from itertools import chain
 from typing import (
     TYPE_CHECKING,
     AbstractSet,

@@ -19,14 +20,16 @@ from typing import (
     Callable,
     Collection,
     Dict,
+    FrozenSet,
     Iterable,
     List,
     Mapping,
     Optional,
     Tuple,
+    Union,
 )

-from synapse.api.constants import EventTypes
+from synapse.api.constants import EventTypes, Membership
 from synapse.events import EventBase
 from synapse.logging.opentracing import tag_args, trace
 from synapse.storage.roommember import ProfileInfo

@@ -34,14 +37,20 @@ from synapse.storage.util.partial_state_events_tracker import (
     PartialCurrentStateTracker,
     PartialStateEventsTracker,
 )
-from synapse.types import MutableStateMap, StateMap
+from synapse.types import MutableStateMap, StateMap, get_domain_from_id
 from synapse.types.state import StateFilter
+from synapse.util.async_helpers import Linearizer
+from synapse.util.caches import intern_string
+from synapse.util.caches.descriptors import cached
 from synapse.util.cancellation import cancellable
+from synapse.util.metrics import Measure

 if TYPE_CHECKING:
     from synapse.server import HomeServer
+    from synapse.state import _StateCacheEntry
     from synapse.storage.databases import Databases


 logger = logging.getLogger(__name__)

@@ -52,10 +61,15 @@ class StateStorageController:

     def __init__(self, hs: "HomeServer", stores: "Databases"):
         self._is_mine_id = hs.is_mine_id
+        self._clock = hs.get_clock()
         self.stores = stores
         self._partial_state_events_tracker = PartialStateEventsTracker(stores.main)
         self._partial_state_room_tracker = PartialCurrentStateTracker(stores.main)

+        # Used by `_get_joined_hosts` to ensure only one thing mutates the cache
+        # at a time. Keyed by room_id.
+        self._joined_host_linearizer = Linearizer("_JoinedHostsCache")
+
     def notify_event_un_partial_stated(self, event_id: str) -> None:
         self._partial_state_events_tracker.notify_un_partial_stated(event_id)
@@ -627,3 +641,122 @@ class StateStorageController:
         await self._partial_state_room_tracker.await_full_state(room_id)

         return await self.stores.main.get_users_in_room_with_profiles(room_id)
+
+    async def get_joined_hosts(
+        self, room_id: str, state_entry: "_StateCacheEntry"
+    ) -> FrozenSet[str]:
+        state_group: Union[object, int] = state_entry.state_group
+        if not state_group:
+            # If state_group is None it means it has yet to be assigned a
+            # state group, i.e. we need to make sure that calls with a state_group
+            # of None don't hit previous cached calls with a None state_group.
+            # To do this we set the state_group to a new object as object() != object()
+            state_group = object()
+
+        assert state_group is not None
+        with Measure(self._clock, "get_joined_hosts"):
+            return await self._get_joined_hosts(
+                room_id, state_group, state_entry=state_entry
+            )
+
+    @cached(num_args=2, max_entries=10000, iterable=True)
+    async def _get_joined_hosts(
+        self,
+        room_id: str,
+        state_group: Union[object, int],
+        state_entry: "_StateCacheEntry",
+    ) -> FrozenSet[str]:
+        # We don't use `state_group`, it's there so that we can cache based on
+        # it. However, it's important that it's never None, since two
+        # current_states with a state_group of None are likely to be different.
+        #
+        # The `state_group` must match the `state_entry.state_group` (if not None).
+        assert state_group is not None
+        assert state_entry.state_group is None or state_entry.state_group == state_group
+
+        # We use a secondary cache of previous work to allow us to build up the
+        # joined hosts for the given state group based on previous state groups.
+        #
+        # We cache one object per room containing the results of the last state
+        # group we got joined hosts for. The idea is that generally
+        # `get_joined_hosts` is called with the "current" state group for the
+        # room, and so consecutive calls will be for consecutive state groups
+        # which point to the previous state group.
+        cache = await self.stores.main._get_joined_hosts_cache(room_id)
+
+        # If the state group in the cache matches, we already have the data we need.
+        if state_entry.state_group == cache.state_group:
+            return frozenset(cache.hosts_to_joined_users)
+
+        # Since we'll mutate the cache we need to lock.
+        async with self._joined_host_linearizer.queue(room_id):
+            if state_entry.state_group == cache.state_group:
+                # Same state group, so nothing to do. We've already checked for
+                # this above, but the cache may have changed while waiting on
+                # the lock.
+                pass
+            elif state_entry.prev_group == cache.state_group:
+                # The cached work is for the previous state group, so we work out
+                # the delta.
+                assert state_entry.delta_ids is not None
+                for (typ, state_key), event_id in state_entry.delta_ids.items():
+                    if typ != EventTypes.Member:
+                        continue
+
+                    host = intern_string(get_domain_from_id(state_key))
+                    user_id = state_key
+                    known_joins = cache.hosts_to_joined_users.setdefault(host, set())
+
+                    event = await self.stores.main.get_event(event_id)
+                    if event.membership == Membership.JOIN:
+                        known_joins.add(user_id)
+                    else:
+                        known_joins.discard(user_id)
+
+                        if not known_joins:
+                            cache.hosts_to_joined_users.pop(host, None)
+            else:
+                # The cache doesn't match the state group or prev state group,
+                # so we calculate the result from first principles.
+                #
+                # We need to fetch all hosts joined to the room according to `state` by
+                # inspecting all join memberships in `state`. However, if the `state` is
+                # relatively recent then many of its events are likely to be held in
+                # the current state of the room, which is easily available and likely
+                # cached.
+                #
+                # We therefore compute the set of `state` events not in the
+                # current state and only fetch those.
+                current_memberships = (
+                    await self.stores.main._get_approximate_current_memberships_in_room(
+                        room_id
+                    )
+                )
+                unknown_state_events = {}
+                joined_users_in_current_state = []
+
+                state = await state_entry.get_state(
+                    self, StateFilter.from_types([(EventTypes.Member, None)])
+                )
+
+                for (type, state_key), event_id in state.items():
+                    if event_id not in current_memberships:
+                        unknown_state_events[type, state_key] = event_id
+                    elif current_memberships[event_id] == Membership.JOIN:
+                        joined_users_in_current_state.append(state_key)
+
+                joined_user_ids = await self.stores.main.get_joined_user_ids_from_state(
+                    room_id, unknown_state_events
+                )
+
+                cache.hosts_to_joined_users = {}
+                for user_id in chain(joined_user_ids, joined_users_in_current_state):
+                    host = intern_string(get_domain_from_id(user_id))
+                    cache.hosts_to_joined_users.setdefault(host, set()).add(user_id)
+
+            if state_entry.state_group:
+                cache.state_group = state_entry.state_group
+            else:
+                cache.state_group = object()
+
+        return frozenset(cache.hosts_to_joined_users)
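The `prev_group` branch above is the interesting one: rather than recomputing every joined host, it replays only the membership events that changed since the cached state group. A self-contained toy version of that delta step (illustrative only, with plain strings standing in for Synapse's constants and types):

```python
from typing import Dict, Set, Tuple


def apply_membership_delta(
    hosts_to_joined_users: Dict[str, Set[str]],
    delta_ids: Dict[Tuple[str, str], str],  # (event type, user_id) -> membership
) -> None:
    """Update a hosts -> joined-users map in place from a state delta."""
    for (typ, user_id), membership in delta_ids.items():
        if typ != "m.room.member":
            continue
        host = user_id.split(":", 1)[1]  # server part of "@local:server"
        joined = hosts_to_joined_users.setdefault(host, set())
        if membership == "join":
            joined.add(user_id)
        else:
            joined.discard(user_id)
            if not joined:
                # no users from this server remain joined
                hosts_to_joined_users.pop(host, None)


cache = {"server": {"@alice:server"}}
apply_membership_delta(cache, {("m.room.member", "@alice:server"): "leave"})
assert cache == {}
```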
@@ -196,7 +196,7 @@ class DataStore(
         txn: LoggingTransaction,
     ) -> Tuple[List[JsonDict], int]:
         filters = []
-        args = [self.hs.config.server.server_name]
+        args: list = []

         # Set ordering
         order_by_column = UserSortOrder(order_by).value

@@ -263,7 +263,7 @@ class DataStore(

         sql_base = f"""
                 FROM users as u
-                LEFT JOIN profiles AS p ON u.name = '@' || p.user_id || ':' || ?
+                LEFT JOIN profiles AS p ON u.name = p.full_user_id
                 LEFT JOIN erased_users AS eu ON u.name = eu.user_id
                 {where_clause}
                 """
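The join rewrite above reflects the new `full_user_id` column on `profiles`: previously the full Matrix ID had to be rebuilt from the localpart plus the server name bound as a query parameter, which is why `args` no longer needs to be seeded with `server_name`. Schematically (illustrative only):

```python
def old_join_key(localpart: str, server_name: str) -> str:
    # what '@' || p.user_id || ':' || ? computed at query time
    return "@" + localpart + ":" + server_name


# new scheme: the profiles row already stores the full ID, so the join is a
# straight equality against p.full_user_id and no parameter is required
assert old_join_key("alice", "example.com") == "@alice:example.com"
```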
@@ -13,7 +13,6 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 import logging
-from itertools import chain
 from typing import (
     TYPE_CHECKING,
     AbstractSet,

@@ -57,15 +56,12 @@ from synapse.types import (
     StrCollection,
     get_domain_from_id,
 )
-from synapse.util.async_helpers import Linearizer
-from synapse.util.caches import intern_string
 from synapse.util.caches.descriptors import _CacheContext, cached, cachedList
 from synapse.util.iterutils import batch_iter
 from synapse.util.metrics import Measure

 if TYPE_CHECKING:
     from synapse.server import HomeServer
-    from synapse.state import _StateCacheEntry

 logger = logging.getLogger(__name__)

@@ -91,10 +87,6 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
     ):
         super().__init__(database, db_conn, hs)

-        # Used by `_get_joined_hosts` to ensure only one thing mutates the cache
-        # at a time. Keyed by room_id.
-        self._joined_host_linearizer = Linearizer("_JoinedHostsCache")
-
         self._server_notices_mxid = hs.config.servernotices.server_notices_mxid

         if (
@@ -1057,120 +1049,6 @@ class RoomMemberWorkerStore(EventsWorkerStore, CacheInvalidationWorkerStore):
             "get_current_hosts_in_room_ordered", get_current_hosts_in_room_ordered_txn
         )

-    async def get_joined_hosts(
-        self, room_id: str, state: StateMap[str], state_entry: "_StateCacheEntry"
-    ) -> FrozenSet[str]:
-        state_group: Union[object, int] = state_entry.state_group
-        if not state_group:
-            # If state_group is None it means it has yet to be assigned a
-            # state group, i.e. we need to make sure that calls with a state_group
-            # of None don't hit previous cached calls with a None state_group.
-            # To do this we set the state_group to a new object as object() != object()
-            state_group = object()
-
-        assert state_group is not None
-        with Measure(self._clock, "get_joined_hosts"):
-            return await self._get_joined_hosts(
-                room_id, state_group, state, state_entry=state_entry
-            )
-
-    @cached(num_args=2, max_entries=10000, iterable=True)
-    async def _get_joined_hosts(
-        self,
-        room_id: str,
-        state_group: Union[object, int],
-        state: StateMap[str],
-        state_entry: "_StateCacheEntry",
-    ) -> FrozenSet[str]:
-        # We don't use `state_group`, it's there so that we can cache based on
-        # it. However, its important that its never None, since two
-        # current_state's with a state_group of None are likely to be different.
-        #
-        # The `state_group` must match the `state_entry.state_group` (if not None).
-        assert state_group is not None
-        assert state_entry.state_group is None or state_entry.state_group == state_group
-
-        # We use a secondary cache of previous work to allow us to build up the
-        # joined hosts for the given state group based on previous state groups.
-        #
-        # We cache one object per room containing the results of the last state
-        # group we got joined hosts for. The idea is that generally
-        # `get_joined_hosts` is called with the "current" state group for the
-        # room, and so consecutive calls will be for consecutive state groups
-        # which point to the previous state group.
-        cache = await self._get_joined_hosts_cache(room_id)
-
-        # If the state group in the cache matches, we already have the data we need.
-        if state_entry.state_group == cache.state_group:
-            return frozenset(cache.hosts_to_joined_users)
-
-        # Since we'll mutate the cache we need to lock.
-        async with self._joined_host_linearizer.queue(room_id):
-            if state_entry.state_group == cache.state_group:
-                # Same state group, so nothing to do. We've already checked for
-                # this above, but the cache may have changed while waiting on
-                # the lock.
-                pass
-            elif state_entry.prev_group == cache.state_group:
-                # The cached work is for the previous state group, so we work out
-                # the delta.
-                assert state_entry.delta_ids is not None
-                for (typ, state_key), event_id in state_entry.delta_ids.items():
-                    if typ != EventTypes.Member:
-                        continue
-
-                    host = intern_string(get_domain_from_id(state_key))
-                    user_id = state_key
-                    known_joins = cache.hosts_to_joined_users.setdefault(host, set())
-
-                    event = await self.get_event(event_id)
-                    if event.membership == Membership.JOIN:
-                        known_joins.add(user_id)
-                    else:
-                        known_joins.discard(user_id)
-
-                        if not known_joins:
-                            cache.hosts_to_joined_users.pop(host, None)
-            else:
-                # The cache doesn't match the state group or prev state group,
-                # so we calculate the result from first principles.
-                #
-                # We need to fetch all hosts joined to the room according to `state` by
-                # inspecting all join memberships in `state`. However, if the `state` is
-                # relatively recent then many of its events are likely to be held in
-                # the current state of the room, which is easily available and likely
-                # cached.
-                #
-                # We therefore compute the set of `state` events not in the
-                # current state and only fetch those.
-                current_memberships = (
-                    await self._get_approximate_current_memberships_in_room(room_id)
-                )
-                unknown_state_events = {}
-                joined_users_in_current_state = []
-
-                for (type, state_key), event_id in state.items():
-                    if event_id not in current_memberships:
-                        unknown_state_events[type, state_key] = event_id
-                    elif current_memberships[event_id] == Membership.JOIN:
-                        joined_users_in_current_state.append(state_key)
-
-                joined_user_ids = await self.get_joined_user_ids_from_state(
-                    room_id, unknown_state_events
-                )
-
-                cache.hosts_to_joined_users = {}
-                for user_id in chain(joined_user_ids, joined_users_in_current_state):
-                    host = intern_string(get_domain_from_id(user_id))
-                    cache.hosts_to_joined_users.setdefault(host, set()).add(user_id)
-
-            if state_entry.state_group:
-                cache.state_group = state_entry.state_group
-            else:
-                cache.state_group = object()
-
-        return frozenset(cache.hosts_to_joined_users)
-
     async def _get_approximate_current_memberships_in_room(
         self, room_id: str
     ) -> Mapping[str, Optional[str]]:
@@ -697,7 +697,7 @@ class StatsStore(StateDeltasStore):
         txn: LoggingTransaction,
     ) -> Tuple[List[JsonDict], int]:
         filters = []
-        args = [self.hs.config.server.server_name]
+        args: list = []

         if search_term:
             filters.append("(lmr.user_id LIKE ? OR displayname LIKE ?)")

@@ -733,7 +733,7 @@ class StatsStore(StateDeltasStore):

         sql_base = """
                 FROM local_media_repository as lmr
-                LEFT JOIN profiles AS p ON lmr.user_id = '@' || p.user_id || ':' || ?
+                LEFT JOIN profiles AS p ON lmr.user_id = p.full_user_id
                 {}
                 GROUP BY lmr.user_id, displayname
                 """.format(
@@ -409,23 +409,22 @@ class UserDirectoryBackgroundUpdateStore(StateDeltasStore):
                 txn, users_to_work_on
            )

-            # Next fetch their profiles. Note that the `user_id` here is the
-            # *localpart*, and that not all users have profiles.
+            # Next fetch their profiles. Note that not all users have profiles.
             profile_rows = self.db_pool.simple_select_many_txn(
                 txn,
                 table="profiles",
-                column="user_id",
-                iterable=[get_localpart_from_id(u) for u in users_to_insert],
+                column="full_user_id",
+                iterable=list(users_to_insert),
                 retcols=(
-                    "user_id",
+                    "full_user_id",
                     "displayname",
                     "avatar_url",
                 ),
                 keyvalues={},
             )
             profiles = {
-                f"@{row['user_id']}:{self.server_name}": _UserDirProfile(
-                    f"@{row['user_id']}:{self.server_name}",
+                row["full_user_id"]: _UserDirProfile(
+                    row["full_user_id"],
                     row["displayname"],
                     row["avatar_url"],
                 )
@@ -12,7 +12,7 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.

-SCHEMA_VERSION = 78  # remember to update the list below when updating
+SCHEMA_VERSION = 79  # remember to update the list below when updating
 """Represents the expectations made by the codebase about the database schema

 This should be incremented whenever the codebase changes its requirements on the

@@ -106,6 +106,10 @@ Changes in SCHEMA_VERSION = 77

 Changes in SCHEMA_VERSION = 78
     - Validate check (full_user_id IS NOT NULL) on tables profiles and user_filters
+
+Changes in SCHEMA_VERSION = 79
+    - Add tables to handle in DB read-write locks.
+    - Add some mitigations for a painful race between foreground and background updates, cf #15677.
 """
@@ -44,7 +44,7 @@

 -- A table to track whether a lock is currently acquired, and if so whether its
 -- in read or write mode.
-CREATE TABLE worker_read_write_locks_mode (
+CREATE TABLE IF NOT EXISTS worker_read_write_locks_mode (
     lock_name TEXT NOT NULL,
     lock_key TEXT NOT NULL,
     -- Whether this lock is in read (false) or write (true) mode

@@ -55,14 +55,14 @@ CREATE TABLE worker_read_write_locks_mode (
 );

 -- Ensure that we can only have one row per lock
-CREATE UNIQUE INDEX worker_read_write_locks_mode_key ON worker_read_write_locks_mode (lock_name, lock_key);
+CREATE UNIQUE INDEX IF NOT EXISTS worker_read_write_locks_mode_key ON worker_read_write_locks_mode (lock_name, lock_key);
 -- We need this (redundant) constraint so that we can have a foreign key
 -- constraint against this table.
-CREATE UNIQUE INDEX worker_read_write_locks_mode_type ON worker_read_write_locks_mode (lock_name, lock_key, write_lock);
+CREATE UNIQUE INDEX IF NOT EXISTS worker_read_write_locks_mode_type ON worker_read_write_locks_mode (lock_name, lock_key, write_lock);


 -- A table to track who has currently acquired a given lock.
-CREATE TABLE worker_read_write_locks (
+CREATE TABLE IF NOT EXISTS worker_read_write_locks (
     lock_name TEXT NOT NULL,
     lock_key TEXT NOT NULL,
     -- We write the instance name to ease manual debugging, we don't ever read

@@ -84,9 +84,9 @@ CREATE TABLE worker_read_write_locks (
     FOREIGN KEY (lock_name, lock_key, write_lock) REFERENCES worker_read_write_locks_mode (lock_name, lock_key, write_lock)
 );

-CREATE UNIQUE INDEX worker_read_write_locks_key ON worker_read_write_locks (lock_name, lock_key, token);
+CREATE UNIQUE INDEX IF NOT EXISTS worker_read_write_locks_key ON worker_read_write_locks (lock_name, lock_key, token);
 -- Ensures that only one instance can acquire a lock in write mode at a time.
-CREATE UNIQUE INDEX worker_read_write_locks_write ON worker_read_write_locks (lock_name, lock_key) WHERE write_lock;
+CREATE UNIQUE INDEX IF NOT EXISTS worker_read_write_locks_write ON worker_read_write_locks (lock_name, lock_key) WHERE write_lock;


 -- Add a foreign key constraint to ensure that if a lock is in

@@ -97,5 +97,6 @@ CREATE UNIQUE INDEX worker_read_write_locks_write ON worker_read_write_locks (lo
 -- We only add to PostgreSQL as SQLite does not support adding constraints
 -- after table creation, and so doesn't support "circular" foreign key
 -- constraints.
+ALTER TABLE worker_read_write_locks_mode DROP CONSTRAINT IF EXISTS worker_read_write_locks_mode_foreign;
 ALTER TABLE worker_read_write_locks_mode ADD CONSTRAINT worker_read_write_locks_mode_foreign
     FOREIGN KEY (lock_name, lock_key, token) REFERENCES worker_read_write_locks(lock_name, lock_key, token) DEFERRABLE INITIALLY DEFERRED;
@@ -22,7 +22,7 @@

 -- A table to track whether a lock is currently acquired, and if so whether its
 -- in read or write mode.
-CREATE TABLE worker_read_write_locks_mode (
+CREATE TABLE IF NOT EXISTS worker_read_write_locks_mode (
     lock_name TEXT NOT NULL,
     lock_key TEXT NOT NULL,
     -- Whether this lock is in read (false) or write (true) mode

@@ -38,14 +38,14 @@ CREATE TABLE worker_read_write_locks_mode (
 );

 -- Ensure that we can only have one row per lock
-CREATE UNIQUE INDEX worker_read_write_locks_mode_key ON worker_read_write_locks_mode (lock_name, lock_key);
+CREATE UNIQUE INDEX IF NOT EXISTS worker_read_write_locks_mode_key ON worker_read_write_locks_mode (lock_name, lock_key);
 -- We need this (redundant) constraint so that we can have a foreign key
 -- constraint against this table.
-CREATE UNIQUE INDEX worker_read_write_locks_mode_type ON worker_read_write_locks_mode (lock_name, lock_key, write_lock);
+CREATE UNIQUE INDEX IF NOT EXISTS worker_read_write_locks_mode_type ON worker_read_write_locks_mode (lock_name, lock_key, write_lock);


 -- A table to track who has currently acquired a given lock.
-CREATE TABLE worker_read_write_locks (
+CREATE TABLE IF NOT EXISTS worker_read_write_locks (
     lock_name TEXT NOT NULL,
     lock_key TEXT NOT NULL,
     -- We write the instance name to ease manual debugging, we don't ever read

@@ -67,6 +67,6 @@ CREATE TABLE worker_read_write_locks (
     FOREIGN KEY (lock_name, lock_key, write_lock) REFERENCES worker_read_write_locks_mode (lock_name, lock_key, write_lock)
 );

-CREATE UNIQUE INDEX worker_read_write_locks_key ON worker_read_write_locks (lock_name, lock_key, token);
+CREATE UNIQUE INDEX IF NOT EXISTS worker_read_write_locks_key ON worker_read_write_locks (lock_name, lock_key, token);
 -- Ensures that only one instance can acquire a lock in write mode at a time.
-CREATE UNIQUE INDEX worker_read_write_locks_write ON worker_read_write_locks (lock_name, lock_key) WHERE write_lock;
+CREATE UNIQUE INDEX IF NOT EXISTS worker_read_write_locks_write ON worker_read_write_locks (lock_name, lock_key) WHERE write_lock;
@@ -37,17 +37,17 @@ def run_create(
     # after the background update has finished
     if res:
         drop_cse_sql = """
-        ALTER TABLE current_state_events DROP CONSTRAINT event_stream_ordering_fkey
+        ALTER TABLE current_state_events DROP CONSTRAINT IF EXISTS event_stream_ordering_fkey
         """
         cur.execute(drop_cse_sql)

         drop_lcm_sql = """
-        ALTER TABLE local_current_membership DROP CONSTRAINT event_stream_ordering_fkey
+        ALTER TABLE local_current_membership DROP CONSTRAINT IF EXISTS event_stream_ordering_fkey
         """
         cur.execute(drop_lcm_sql)

         drop_rm_sql = """
-        ALTER TABLE room_memberships DROP CONSTRAINT event_stream_ordering_fkey
+        ALTER TABLE room_memberships DROP CONSTRAINT IF EXISTS event_stream_ordering_fkey
         """
         cur.execute(drop_rm_sql)
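The common thread in these schema changes is idempotency: `IF NOT EXISTS` on the DDL and `IF EXISTS` on the constraint drops let a delta run more than once without erroring, which is the mitigation for the foreground/background race mentioned in the schema notes. A standalone sketch (not Synapse code) of the property, using SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
ddl = """
CREATE TABLE IF NOT EXISTS worker_read_write_locks_mode (
    lock_name TEXT NOT NULL,
    lock_key TEXT NOT NULL,
    write_lock BOOLEAN NOT NULL
);
CREATE UNIQUE INDEX IF NOT EXISTS worker_read_write_locks_mode_key
    ON worker_read_write_locks_mode (lock_name, lock_key);
"""
conn.executescript(ddl)
conn.executescript(ddl)  # second application is a no-op rather than an error
```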
@@ -225,9 +225,14 @@ class PruneEventTestCase(stdlib_unittest.TestCase):
             },
         )

-        # After MSC2176, create events get nothing redacted.
+        # After MSC2176, create events should preserve the `content` field.
         self.run_test(
-            {"type": "m.room.create", "content": {"not_a_real_key": True}},
+            {
+                "type": "m.room.create",
+                "content": {"not_a_real_key": True},
+                "origin": "some_homeserver",
+                "nonsense_field": "some_random_garbage",
+            },
             {
                 "type": "m.room.create",
                 "content": {"not_a_real_key": True},
@@ -17,15 +17,18 @@
 from typing import Optional
 from unittest import mock

+from twisted.internet.defer import ensureDeferred
 from twisted.test.proto_helpers import MemoryReactor

 from synapse.api.constants import RoomEncryptionAlgorithms
 from synapse.api.errors import NotFoundError, SynapseError
 from synapse.appservice import ApplicationService
 from synapse.handlers.device import MAX_DEVICE_DISPLAY_NAME_LEN, DeviceHandler
+from synapse.rest import admin
+from synapse.rest.client import devices, login, register
 from synapse.server import HomeServer
 from synapse.storage.databases.main.appservice import _make_exclusive_regex
-from synapse.types import JsonDict
+from synapse.types import JsonDict, create_requester
 from synapse.util import Clock

 from tests import unittest

@@ -399,11 +402,19 @@ class DeviceTestCase(unittest.HomeserverTestCase):


 class DehydrationTestCase(unittest.HomeserverTestCase):
+    servlets = [
+        admin.register_servlets_for_client_rest_resource,
+        login.register_servlets,
+        register.register_servlets,
+        devices.register_servlets,
+    ]
+
     def make_homeserver(self, reactor: MemoryReactor, clock: Clock) -> HomeServer:
         hs = self.setup_test_homeserver("server")
         handler = hs.get_device_handler()
         assert isinstance(handler, DeviceHandler)
         self.handler = handler
+        self.message_handler = hs.get_device_message_handler()
         self.registration = hs.get_registration_handler()
         self.auth = hs.get_auth()
         self.store = hs.get_datastores().main

@@ -418,6 +429,7 @@ class DehydrationTestCase(unittest.HomeserverTestCase):
         stored_dehydrated_device_id = self.get_success(
             self.handler.store_dehydrated_device(
                 user_id=user_id,
+                device_id=None,
                 device_data={"device_data": {"foo": "bar"}},
                 initial_device_display_name="dehydrated device",
             )
@@ -481,3 +493,88 @@ class DehydrationTestCase(unittest.HomeserverTestCase):
         ret = self.get_success(self.handler.get_dehydrated_device(user_id=user_id))

         self.assertIsNone(ret)
+
+    @unittest.override_config(
+        {"experimental_features": {"msc2697_enabled": False, "msc3814_enabled": True}}
+    )
+    def test_dehydrate_v2_and_fetch_events(self) -> None:
+        user_id = "@boris:server"
+
+        self.get_success(self.store.register_user(user_id, "foobar"))
+
+        # First check if we can store and fetch a dehydrated device
+        stored_dehydrated_device_id = self.get_success(
+            self.handler.store_dehydrated_device(
+                user_id=user_id,
+                device_id=None,
+                device_data={"device_data": {"foo": "bar"}},
+                initial_device_display_name="dehydrated device",
+            )
+        )
+
+        device_info = self.get_success(
+            self.handler.get_dehydrated_device(user_id=user_id)
+        )
+        assert device_info is not None
+        retrieved_device_id, device_data = device_info
+        self.assertEqual(retrieved_device_id, stored_dehydrated_device_id)
+        self.assertEqual(device_data, {"device_data": {"foo": "bar"}})
+
+        # Create a new login for the user
+        device_id, access_token, _expiration_time, _refresh_token = self.get_success(
+            self.registration.register_device(
+                user_id=user_id,
+                device_id=None,
+                initial_display_name="new device",
+            )
+        )
+
+        requester = create_requester(user_id, device_id=device_id)
+
+        # Fetching messages for a non-existent device should return an error
+        self.get_failure(
+            self.message_handler.get_events_for_dehydrated_device(
+                requester=requester,
+                device_id="not the right device ID",
+                since_token=None,
+                limit=10,
+            ),
+            SynapseError,
+        )
+
+        # Send a message to the dehydrated device
+        ensureDeferred(
+            self.message_handler.send_device_message(
+                requester=requester,
+                message_type="test.message",
+                messages={user_id: {stored_dehydrated_device_id: {"body": "foo"}}},
+            )
+        )
+        self.pump()
+
+        # Fetch the message of the dehydrated device
+        res = self.get_success(
+            self.message_handler.get_events_for_dehydrated_device(
+                requester=requester,
+                device_id=stored_dehydrated_device_id,
+                since_token=None,
+                limit=10,
+            )
+        )
+
+        self.assertTrue(len(res["next_batch"]) > 1)
+        self.assertEqual(len(res["events"]), 1)
+        self.assertEqual(res["events"][0]["content"]["body"], "foo")
+
+        # Fetch the messages of the dehydrated device again, which should return
+        # nothing and delete the old messages
+        res = self.get_success(
+            self.message_handler.get_events_for_dehydrated_device(
+                requester=requester,
+                device_id=stored_dehydrated_device_id,
+                since_token=res["next_batch"],
+                limit=10,
+            )
+        )
+        self.assertTrue(len(res["next_batch"]) > 1)
+        self.assertEqual(len(res["events"]), 0)
@@ -1418,7 +1418,7 @@ class DeactivateAccountTestCase(unittest.HomeserverTestCase):
         # To test deactivation for users without a profile, we delete the profile information for our user.
         self.get_success(
             self.store.db_pool.simple_delete_one(
-                table="profiles", keyvalues={"user_id": "user"}
+                table="profiles", keyvalues={"full_user_id": "@user:test"}
             )
         )
@@ -13,12 +13,14 @@
 # limitations under the License.
 from http import HTTPStatus

+from twisted.internet.defer import ensureDeferred
 from twisted.test.proto_helpers import MemoryReactor

 from synapse.api.errors import NotFoundError
 from synapse.rest import admin, devices, room, sync
-from synapse.rest.client import account, login, register
+from synapse.rest.client import account, keys, login, register
 from synapse.server import HomeServer
+from synapse.types import JsonDict, create_requester
 from synapse.util import Clock

 from tests import unittest

@@ -208,8 +210,13 @@ class DehydratedDeviceTestCase(unittest.HomeserverTestCase):
         login.register_servlets,
         register.register_servlets,
         devices.register_servlets,
+        keys.register_servlets,
     ]

+    def prepare(self, reactor: MemoryReactor, clock: Clock, hs: HomeServer) -> None:
+        self.registration = hs.get_registration_handler()
+        self.message_handler = hs.get_device_message_handler()
+
     def test_PUT(self) -> None:
         """Sanity-check that we can PUT a dehydrated device.
@@ -226,7 +233,21 @@ class DehydratedDeviceTestCase(unittest.HomeserverTestCase):
                 "device_data": {
                     "algorithm": "org.matrix.msc2697.v1.dehydration.v1.olm",
                     "account": "dehydrated_device",
-                }
+                },
+                "device_keys": {
+                    "user_id": "@alice:test",
+                    "device_id": "device1",
+                    "valid_until_ts": "80",
+                    "algorithms": [
+                        "m.olm.curve25519-aes-sha2",
+                    ],
+                    "keys": {
+                        "<algorithm>:<device_id>": "<key_base64>",
+                    },
+                    "signatures": {
+                        "<user_id>": {"<algorithm>:<device_id>": "<signature_base64>"}
+                    },
+                },
             },
             access_token=token,
             shorthand=False,
@@ -234,3 +255,128 @@ class DehydratedDeviceTestCase(unittest.HomeserverTestCase):
         self.assertEqual(channel.code, HTTPStatus.OK, channel.json_body)
         device_id = channel.json_body.get("device_id")
         self.assertIsInstance(device_id, str)
+
+    @unittest.override_config(
+        {"experimental_features": {"msc2697_enabled": False, "msc3814_enabled": True}}
+    )
+    def test_dehydrate_msc3814(self) -> None:
+        user = self.register_user("mikey", "pass")
+        token = self.login(user, "pass", device_id="device1")
+        content: JsonDict = {
+            "device_data": {
+                "algorithm": "m.dehydration.v1.olm",
+            },
+            "device_id": "device1",
+            "initial_device_display_name": "foo bar",
+            "device_keys": {
+                "user_id": "@mikey:test",
+                "device_id": "device1",
+                "valid_until_ts": "80",
+                "algorithms": [
+                    "m.olm.curve25519-aes-sha2",
+                ],
+                "keys": {
+                    "<algorithm>:<device_id>": "<key_base64>",
+                },
+                "signatures": {
+                    "<user_id>": {"<algorithm>:<device_id>": "<signature_base64>"}
+                },
+            },
+        }
+        channel = self.make_request(
+            "PUT",
+            "_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device",
+            content=content,
+            access_token=token,
+            shorthand=False,
+        )
+        self.assertEqual(channel.code, 200)
+        device_id = channel.json_body.get("device_id")
+        assert device_id is not None
+        self.assertIsInstance(device_id, str)
+        self.assertEqual("device1", device_id)
+
+        # test that we can now GET the dehydrated device info
+        channel = self.make_request(
+            "GET",
+            "_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device",
+            access_token=token,
+            shorthand=False,
+        )
+        self.assertEqual(channel.code, 200)
+        returned_device_id = channel.json_body.get("device_id")
+        self.assertEqual(returned_device_id, device_id)
+        device_data = channel.json_body.get("device_data")
+        expected_device_data = {
+            "algorithm": "m.dehydration.v1.olm",
+        }
+        self.assertEqual(device_data, expected_device_data)
+
+        # create another device for the user
+        (
+            new_device_id,
+            _,
+            _,
+            _,
+        ) = self.get_success(
+            self.registration.register_device(
+                user_id=user,
+                device_id=None,
+                initial_display_name="new device",
+            )
+        )
+        requester = create_requester(user, device_id=new_device_id)
+
+        # Send a message to the dehydrated device
+        ensureDeferred(
+            self.message_handler.send_device_message(
+                requester=requester,
+                message_type="test.message",
+                messages={user: {device_id: {"body": "test_message"}}},
+            )
+        )
+        self.pump()
+
+        # make sure we can fetch the message with our dehydrated device id
+        channel = self.make_request(
+            "POST",
+            f"_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device/{device_id}/events",
+            content={},
+            access_token=token,
+            shorthand=False,
+        )
+        self.assertEqual(channel.code, 200)
+        expected_content = {"body": "test_message"}
+        self.assertEqual(channel.json_body["events"][0]["content"], expected_content)
+        next_batch_token = channel.json_body.get("next_batch")
+
+        # fetch messages again and make sure that the message was deleted and we are
+        # returned an empty array
+        content = {"next_batch": next_batch_token}
+        channel = self.make_request(
+            "POST",
+            f"_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device/{device_id}/events",
+            content=content,
+            access_token=token,
+            shorthand=False,
+        )
+        self.assertEqual(channel.code, 200)
+        self.assertEqual(channel.json_body["events"], [])
+
+        # make sure we can delete the dehydrated device
+        channel = self.make_request(
+            "DELETE",
+            "_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device",
+            access_token=token,
+            shorthand=False,
+        )
+        self.assertEqual(channel.code, 200)
+
+        # ...and after deleting it is no longer available
+        channel = self.make_request(
+            "GET",
+            "_matrix/client/unstable/org.matrix.msc3814.v1/dehydrated_device",
+            access_token=token,
+            shorthand=False,
+        )
+        self.assertEqual(channel.code, 404)