Compare commits

2 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | 0471547e96 |  |
|  | e891986eb0 |  |
@@ -139,7 +139,7 @@ jobs:
       - name: Semantic checks (ruff)
         # --quiet suppresses the update check.
-        run: poetry run ruff check --quiet .
+        run: poetry run ruff --quiet .
 
   lint-mypy:
     runs-on: ubuntu-latest
@@ -305,7 +305,7 @@ jobs:
       - lint-readme
     runs-on: ubuntu-latest
     steps:
-      - uses: matrix-org/done-action@v3
+      - uses: matrix-org/done-action@v2
        with:
          needs: ${{ toJSON(needs) }}
 
@@ -737,7 +737,7 @@ jobs:
       - linting-done
     runs-on: ubuntu-latest
     steps:
-      - uses: matrix-org/done-action@v3
+      - uses: matrix-org/done-action@v2
        with:
          needs: ${{ toJSON(needs) }}
 
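Note on the hunks above: ruff 0.5 removed the old implicit invocation, so the lint step must call the `check` subcommand explicitly; one side of this compare has that spelling, while the other still uses the legacy form (and pins `matrix-org/done-action` at v2 rather than v3, matching the "Bump matrix-org/done-action from 2 to 3" entry in the changelog below). A minimal reproduction of the two invocations, assuming a Poetry-managed checkout:

```bash
# ruff >= 0.5: an explicit subcommand is required for linting
poetry run ruff check --quiet .

# ruff < 0.5: the bare form still worked (deprecated before its removal)
poetry run ruff --quiet .
```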
+1
-182
@@ -1,184 +1,3 @@
-# Synapse 1.112.0 (2024-07-30)
-
-This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).
-
-Note that this security fix is also available as **Synapse 1.111.1**, which does not include the rest of the changes in Synapse 1.112.0.
-
-This issue means that, if multiple HTTP requests are pipelined in the same TCP connection, Synapse can send responses to the wrong HTTP request.
-If a reverse proxy was configured to use HTTP pipelining, this could result in responses being sent to the wrong user, severely harming confidentiality.
-
-With that said, despite being a high severity issue, **we consider it unlikely that Synapse installations will be affected**.
-The use of HTTP pipelining in this fashion would cause worse performance for clients (request-response latencies would be increased as users' responses would be artificially blocked behind other users' slow requests). Further, Nginx and Haproxy, two common reverse proxies, do not appear to support configuring their upstreams to use HTTP pipelining and thus would not be affected. For both of these reasons, we consider it unlikely that a Synapse deployment would be set up in such a configuration.
-
-Despite that, we cannot rule out that some installations may exist with this unusual setup and so we are releasing this security update today.
-
-**pip users:** Note that by default, upgrading Synapse using pip will not automatically upgrade Twisted. **Please manually install the new version of Twisted** using `pip install Twisted==24.7.0rc1`. Note also that even the `--upgrade-strategy=eager` flag to `pip install -U matrix-synapse` will not upgrade Twisted to a patched version because it is only a release candidate at this time.
-
-### Internal Changes
-
-- Upgrade locked dependency on Twisted to 24.7.0rc1. ([\#17502](https://github.com/element-hq/synapse/issues/17502))
-
-
-# Synapse 1.112.0rc1 (2024-07-23)
-
-Please note that this release candidate does not include the security dependency update
-included in version 1.111.1 as this version was released before 1.111.1.
-The same security fix can be found in the full release of 1.112.0.
-
-### Features
-
-- Add to-device extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17416](https://github.com/element-hq/synapse/issues/17416))
-- Populate `name`/`avatar` fields in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17418](https://github.com/element-hq/synapse/issues/17418))
-- Populate `heroes` and room summary fields (`joined_count`, `invited_count`) in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17419](https://github.com/element-hq/synapse/issues/17419))
-- Populate `is_dm` room field in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17429](https://github.com/element-hq/synapse/issues/17429))
-- Add room subscriptions to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17432](https://github.com/element-hq/synapse/issues/17432))
-- Prepare for authenticated media freeze. ([\#17433](https://github.com/element-hq/synapse/issues/17433))
-- Add E2EE extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17454](https://github.com/element-hq/synapse/issues/17454))
-
-### Bugfixes
-
-- Add configurable option to always include offline users in presence sync results. Contributed by @Michael-Hollister. ([\#17231](https://github.com/element-hq/synapse/issues/17231))
-- Fix bug in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint when using room type filters and the user has one or more remote invites. ([\#17434](https://github.com/element-hq/synapse/issues/17434))
-- Order `heroes` by `stream_ordering` as the Matrix specification states (applies to `/sync`). ([\#17435](https://github.com/element-hq/synapse/issues/17435))
-- Fix rare bug where `/sync` would break for a user when using workers with multiple stream writers. ([\#17438](https://github.com/element-hq/synapse/issues/17438))
-
-### Improved Documentation
-
-- Update the readme image to have a white background, so that it is readable in dark mode. ([\#17387](https://github.com/element-hq/synapse/issues/17387))
-- Add Red Hat Enterprise Linux and Rocky Linux 8 and 9 installation instructions. ([\#17423](https://github.com/element-hq/synapse/issues/17423))
-- Improve documentation for the [`default_power_level_content_override`](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#default_power_level_content_override) config option. ([\#17451](https://github.com/element-hq/synapse/issues/17451))
-
-### Internal Changes
-
-- Make sure we always use the right logic for enabling the media repo. ([\#17424](https://github.com/element-hq/synapse/issues/17424))
-- Fix argument documentation for method `RateLimiter.record_action`. ([\#17426](https://github.com/element-hq/synapse/issues/17426))
-- Reduce volume of 'Waiting for current token' logs, which were introduced in v1.109.0. ([\#17428](https://github.com/element-hq/synapse/issues/17428))
-- Limit concurrent remote downloads to 6 per IP address, and decrement remote downloads without a content-length from the ratelimiter after the download is complete. ([\#17439](https://github.com/element-hq/synapse/issues/17439))
-- Remove unnecessary call to resume producing in fake channel. ([\#17449](https://github.com/element-hq/synapse/issues/17449))
-- Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to bump room when it is created. ([\#17453](https://github.com/element-hq/synapse/issues/17453))
-- Speed up generating sliding sync responses. ([\#17458](https://github.com/element-hq/synapse/issues/17458))
-- Add cache to `get_rooms_for_local_user_where_membership_is` to speed up sliding sync. ([\#17460](https://github.com/element-hq/synapse/issues/17460))
-- Speed up fetching room keys from backup. ([\#17461](https://github.com/element-hq/synapse/issues/17461))
-- Speed up sorting of the room list in sliding sync. ([\#17468](https://github.com/element-hq/synapse/issues/17468))
-- Implement handling of `$ME` as a state key in sliding sync. ([\#17469](https://github.com/element-hq/synapse/issues/17469))
-
-
-
-### Updates to locked dependencies
-
-* Bump bytes from 1.6.0 to 1.6.1. ([\#17441](https://github.com/element-hq/synapse/issues/17441))
-* Bump hiredis from 2.3.2 to 3.0.0. ([\#17464](https://github.com/element-hq/synapse/issues/17464))
-* Bump jsonschema from 4.22.0 to 4.23.0. ([\#17444](https://github.com/element-hq/synapse/issues/17444))
-* Bump matrix-org/done-action from 2 to 3. ([\#17440](https://github.com/element-hq/synapse/issues/17440))
-* Bump mypy from 1.9.0 to 1.10.1. ([\#17445](https://github.com/element-hq/synapse/issues/17445))
-* Bump pyopenssl from 24.1.0 to 24.2.1. ([\#17465](https://github.com/element-hq/synapse/issues/17465))
-* Bump ruff from 0.5.0 to 0.5.4. ([\#17466](https://github.com/element-hq/synapse/issues/17466))
-* Bump sentry-sdk from 2.6.0 to 2.8.0. ([\#17456](https://github.com/element-hq/synapse/issues/17456))
-* Bump sentry-sdk from 2.8.0 to 2.10.0. ([\#17467](https://github.com/element-hq/synapse/issues/17467))
-* Bump setuptools from 67.6.0 to 70.0.0. ([\#17448](https://github.com/element-hq/synapse/issues/17448))
-* Bump twine from 5.1.0 to 5.1.1. ([\#17443](https://github.com/element-hq/synapse/issues/17443))
-* Bump types-jsonschema from 4.22.0.20240610 to 4.23.0.20240712. ([\#17446](https://github.com/element-hq/synapse/issues/17446))
-* Bump ulid from 1.1.2 to 1.1.3. ([\#17442](https://github.com/element-hq/synapse/issues/17442))
-* Bump zipp from 3.15.0 to 3.19.1. ([\#17427](https://github.com/element-hq/synapse/issues/17427))
-
-
-# Synapse 1.111.1 (2024-07-30)
-
-This security release is to update our locked dependency on Twisted to 24.7.0rc1, which includes a security fix for [CVE-2024-41671 / GHSA-c8m8-j448-xjx7: Disordered HTTP pipeline response in twisted.web, again](https://github.com/twisted/twisted/security/advisories/GHSA-c8m8-j448-xjx7).
-
-This issue means that, if multiple HTTP requests are pipelined in the same TCP connection, Synapse can send responses to the wrong HTTP request.
-If a reverse proxy was configured to use HTTP pipelining, this could result in responses being sent to the wrong user, severely harming confidentiality.
-
-With that said, despite being a high severity issue, **we consider it unlikely that Synapse installations will be affected**.
-The use of HTTP pipelining in this fashion would cause worse performance for clients (request-response latencies would be increased as users' responses would be artificially blocked behind other users' slow requests). Further, Nginx and Haproxy, two common reverse proxies, do not appear to support configuring their upstreams to use HTTP pipelining and thus would not be affected. For both of these reasons, we consider it unlikely that a Synapse deployment would be set up in such a configuration.
-
-Despite that, we cannot rule out that some installations may exist with this unusual setup and so we are releasing this security update today.
-
-**pip users:** Note that by default, upgrading Synapse using pip will not automatically upgrade Twisted. **Please manually install the new version of Twisted** using `pip install Twisted==24.7.0rc1`. Note also that even the `--upgrade-strategy=eager` flag to `pip install -U matrix-synapse` will not upgrade Twisted to a patched version because it is only a release candidate at this time.
-
-
-### Internal Changes
-
-- Upgrade locked dependency on Twisted to 24.7.0rc1. ([\#17502](https://github.com/element-hq/synapse/issues/17502))
-
-
-# Synapse 1.111.0 (2024-07-16)
-
-No significant changes since 1.111.0rc2.
-
-
-
-
-# Synapse 1.111.0rc2 (2024-07-10)
-
-### Bugfixes
-
-- Fix bug where using `synapse.app.media_repository` worker configuration would break the new media endpoints. ([\#17420](https://github.com/element-hq/synapse/issues/17420))
-
-### Improved Documentation
-
-- Document the new federation media worker endpoints in the [upgrade notes](https://element-hq.github.io/synapse/v1.111/upgrade.html) and [worker docs](https://element-hq.github.io/synapse/v1.111/workers.html). ([\#17421](https://github.com/element-hq/synapse/issues/17421))
-
-### Internal Changes
-
-- Route authenticated federation media requests to media repository workers in Complement tests. ([\#17422](https://github.com/element-hq/synapse/issues/17422))
-
-
-
-
-# Synapse 1.111.0rc1 (2024-07-09)
-
-### Features
-
-- Add `rooms` data to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17320](https://github.com/element-hq/synapse/issues/17320))
-- Add `room_types`/`not_room_types` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17337](https://github.com/element-hq/synapse/issues/17337))
-- Return "required state" in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17342](https://github.com/element-hq/synapse/issues/17342))
-- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/3916-authentication-for-media.md) by adding [`_matrix/client/v1/media/download`](https://spec.matrix.org/v1.11/client-server-api/#get_matrixclientv1mediadownloadservernamemediaid) endpoint. ([\#17365](https://github.com/element-hq/synapse/issues/17365))
-- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md)
-  by adding [`_matrix/client/v1/media/thumbnail`](https://spec.matrix.org/v1.11/client-server-api/#get_matrixclientv1mediathumbnailservernamemediaid), [`_matrix/federation/v1/media/thumbnail`](https://spec.matrix.org/v1.11/server-server-api/#get_matrixfederationv1mediathumbnailmediaid) endpoints and stabilizing the
-  remaining [`_matrix/client/v1/media`](https://spec.matrix.org/v1.11/client-server-api/#get_matrixclientv1mediaconfig) endpoints. ([\#17388](https://github.com/element-hq/synapse/issues/17388))
-- Add `rooms.bump_stamp` for easier client-side sorting in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17395](https://github.com/element-hq/synapse/issues/17395))
-- Forget all of a user's rooms upon deactivation, preventing local room purges from being blocked on deactivated users. ([\#17400](https://github.com/element-hq/synapse/issues/17400))
-- Declare support for [Matrix 1.11](https://matrix.org/blog/2024/06/20/matrix-v1.11-release/). ([\#17403](https://github.com/element-hq/synapse/issues/17403))
-- [MSC3861](https://github.com/matrix-org/matrix-spec-proposals/pull/3861): allow overriding the introspection endpoint. ([\#17406](https://github.com/element-hq/synapse/issues/17406))
-
-### Bugfixes
-
-- Fix rare race which caused no new to-device messages to be received from remote server. ([\#17362](https://github.com/element-hq/synapse/issues/17362))
-- Fix bug in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint when using an old database. ([\#17398](https://github.com/element-hq/synapse/issues/17398))
-
-### Improved Documentation
-
-- Clarify that `url_preview_url_blacklist` is a usability feature. ([\#17356](https://github.com/element-hq/synapse/issues/17356))
-- Fix broken links in README. ([\#17379](https://github.com/element-hq/synapse/issues/17379))
-- Clarify that changelog content *and file extension* need to match in order for entries to merge. ([\#17399](https://github.com/element-hq/synapse/issues/17399))
-
-### Internal Changes
-
-- Make the release script create a release branch for Complement as well. ([\#17318](https://github.com/element-hq/synapse/issues/17318))
-- Fix uploading packages to PyPi. ([\#17363](https://github.com/element-hq/synapse/issues/17363))
-- Add CI check for the README. ([\#17367](https://github.com/element-hq/synapse/issues/17367))
-- Fix linting errors from new `ruff` version. ([\#17381](https://github.com/element-hq/synapse/issues/17381), [\#17411](https://github.com/element-hq/synapse/issues/17411))
-- Fix building debian packages on non-clean checkouts. ([\#17390](https://github.com/element-hq/synapse/issues/17390))
-- Finish up work to allow per-user feature flags. ([\#17392](https://github.com/element-hq/synapse/issues/17392), [\#17410](https://github.com/element-hq/synapse/issues/17410))
-- Allow enabling sliding sync per-user. ([\#17393](https://github.com/element-hq/synapse/issues/17393))
-
-
-
-### Updates to locked dependencies
-
-* Bump certifi from 2023.7.22 to 2024.7.4. ([\#17404](https://github.com/element-hq/synapse/issues/17404))
-* Bump cryptography from 42.0.7 to 42.0.8. ([\#17382](https://github.com/element-hq/synapse/issues/17382))
-* Bump ijson from 3.2.3 to 3.3.0. ([\#17413](https://github.com/element-hq/synapse/issues/17413))
-* Bump log from 0.4.21 to 0.4.22. ([\#17384](https://github.com/element-hq/synapse/issues/17384))
-* Bump mypy-zope from 1.0.4 to 1.0.5. ([\#17414](https://github.com/element-hq/synapse/issues/17414))
-* Bump pillow from 10.3.0 to 10.4.0. ([\#17412](https://github.com/element-hq/synapse/issues/17412))
-* Bump pydantic from 2.7.1 to 2.8.2. ([\#17415](https://github.com/element-hq/synapse/issues/17415))
-* Bump ruff from 0.3.7 to 0.5.0. ([\#17381](https://github.com/element-hq/synapse/issues/17381))
-* Bump serde from 1.0.203 to 1.0.204. ([\#17409](https://github.com/element-hq/synapse/issues/17409))
-* Bump serde_json from 1.0.117 to 1.0.120. ([\#17385](https://github.com/element-hq/synapse/issues/17385), [\#17408](https://github.com/element-hq/synapse/issues/17408))
-* Bump types-setuptools from 69.5.0.20240423 to 70.1.0.20240627. ([\#17380](https://github.com/element-hq/synapse/issues/17380))
-
 # Synapse 1.110.0 (2024-07-03)
 
 No significant changes since 1.110.0rc3.
@@ -229,7 +48,7 @@ No significant changes since 1.110.0rc3.
   This is useful for scripts that bootstrap user accounts with initial passwords. ([\#17304](https://github.com/element-hq/synapse/issues/17304))
 - Add support for via query parameter from [MSC4156](https://github.com/matrix-org/matrix-spec-proposals/pull/4156). ([\#17322](https://github.com/element-hq/synapse/issues/17322))
 - Add `is_invite` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint. ([\#17335](https://github.com/element-hq/synapse/issues/17335))
-- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/main/proposals/3916-authentication-for-media.md) by adding a federation /download endpoint. ([\#17350](https://github.com/element-hq/synapse/issues/17350))
+- Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md) by adding a federation /download endpoint. ([\#17350](https://github.com/element-hq/synapse/issues/17350))
 
 ### Bugfixes
 
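The pip caveat in the 1.112.0 / 1.111.1 security notes above is worth spelling out: because the patched Twisted is only a release candidate, pip's resolver will not pick it up on its own. A sketch of the full upgrade (virtualenv activation omitted):

```bash
# Upgrading Synapse alone leaves Twisted on the vulnerable release;
# even --upgrade-strategy=eager will not select a pre-release.
pip install -U matrix-synapse

# The patched Twisted therefore has to be pinned explicitly:
pip install Twisted==24.7.0rc1
```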
Generated
+10
-11
@@ -67,9 +67,9 @@ checksum = "79296716171880943b8470b5f8d03aa55eb2e645a4874bdbb28adb49162e012c"
 
 [[package]]
 name = "bytes"
-version = "1.6.1"
+version = "1.6.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "a12916984aab3fa6e39d655a33e09c0071eb36d6ab3aea5c2d78551f1df6d952"
+checksum = "514de17de45fdb8dc022b1a7975556c53c86f9f0aa5f534b98977b171857c2c9"
 
 [[package]]
 name = "cfg-if"
@@ -485,18 +485,18 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
 
 [[package]]
 name = "serde"
-version = "1.0.204"
+version = "1.0.203"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "bc76f558e0cbb2a839d37354c575f1dc3fdc6546b5be373ba43d95f231bf7c12"
+checksum = "7253ab4de971e72fb7be983802300c30b5a7f0c2e56fab8abfc6a214307c0094"
 dependencies = [
  "serde_derive",
 ]
 
 [[package]]
 name = "serde_derive"
-version = "1.0.204"
+version = "1.0.203"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "e0cd7e117be63d3c3678776753929474f3b04a43a080c744d6b0ae2a8c28e222"
+checksum = "500cbc0ebeb6f46627f50f3f5811ccf6bf00643be300b4c3eabc0ef55dc5b5ba"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -505,12 +505,11 @@ dependencies = [
 
 [[package]]
 name = "serde_json"
-version = "1.0.121"
+version = "1.0.119"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "4ab380d7d9f22ef3f21ad3e6c1ebe8e4fc7a2000ccba2e4d71fc96f15b2cb609"
+checksum = "e8eddb61f0697cc3989c5d64b452f5488e2b8a60fd7d5076a3045076ffef8cb0"
 dependencies = [
  "itoa",
- "memchr",
  "ryu",
  "serde",
 ]
@@ -598,9 +597,9 @@ checksum = "42ff0bf0c66b8238c6f3b578df37d0b7848e55df8577b3f74f92a69acceeb825"
 
 [[package]]
 name = "ulid"
-version = "1.1.3"
+version = "1.1.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "04f903f293d11f31c0c29e4148f6dc0d033a7f80cebc0282bea147611667d289"
+checksum = "34778c17965aa2a08913b57e1f34db9b4a63f5de31768b55bf20d2795f921259"
 dependencies = [
  "getrandom",
  "rand",
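The Cargo.lock hunks above are mechanical pin changes: each `[[package]]` stanza records a crate's name, version, registry source and checksum, and the compare simply shows those fields at different pins (plus `memchr` appearing as a `serde_json` dependency on one side only). For context, a single-crate hunk like the `bytes` one is what a targeted lockfile update emits, e.g. (version shown for illustration):

```bash
# Re-pin one crate in Cargo.lock without touching the rest of the tree
cargo update -p bytes --precise 1.6.1
```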
+4
-4
@@ -1,4 +1,4 @@
-.. image:: ./docs/element_logo_white_bg.svg
+.. image:: https://github.com/element-hq/product/assets/87339233/7abf477a-5277-47f3-be44-ea44917d8ed7
    :height: 60px
 
 **Element Synapse - Matrix homeserver implementation**
@@ -179,10 +179,10 @@ desired ``localpart`` in the 'User name' box.
 -----------------------
 
 Enterprise quality support for Synapse including SLAs is available as part of an
-`Element Server Suite (ESS) <https://element.io/pricing>`_ subscription.
+`Element Server Suite (ESS) <https://element.io/pricing>` subscription.
 
-If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`_
-and access the `knowledge base <https://ems-docs.element.io>`_.
+If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`
+and access the `knowledge base <https://ems-docs.element.io>`.
 
 🤝 Community support
 --------------------
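Two things change in the README hunks above: the logo is sourced either from an in-tree SVG (`./docs/element_logo_white_bg.svg`, added so the image stays readable in dark mode, per [\#17387]) or from a GitHub-hosted asset, and the ESS support links differ by a trailing underscore. In reStructuredText that underscore is load-bearing: `` `support request <https://ems.element.io/support>`_ `` renders as a hyperlink, whereas the same construct without the `_` is not a link reference at all — presumably the "Fix broken links in README" item ([\#17379]) from the changelog.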
@@ -0,0 +1 @@
+Add `rooms` data to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -0,0 +1 @@
+Add `room_types`/`not_room_types` filtering to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -0,0 +1 @@
+Clarify `url_preview_url_blacklist` is a usability feature.
@@ -0,0 +1 @@
+Fix rare race which causes no new to-device messages to be received from remote server.
@@ -0,0 +1 @@
+Fix uploading packages to PyPi.
@@ -0,0 +1 @@
+Support [MSC3916](https://github.com/matrix-org/matrix-spec-proposals/blob/rav/authentication-for-media/proposals/3916-authentication-for-media.md) by adding _matrix/client/v1/media/download endpoint.
@@ -0,0 +1 @@
+Add CI check for the README.
@@ -0,0 +1 @@
+Fix building debian packages on non-clean checkouts.
@@ -0,0 +1 @@
+Pin CI to complement release branch for releases.
@@ -1 +0,0 @@
-Track which rooms have been sent to clients in the experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Update experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint to handle invite/knock rooms when filtering.
@@ -1 +0,0 @@
-Change sliding sync to use their own token format in preparation for storing per-connection state.
@@ -1 +0,0 @@
-Update the [`allowed_local_3pids`](https://element-hq.github.io/synapse/v1.112/usage/configuration/config_documentation.html#allowed_local_3pids) config option's msisdn address to a working example.
@@ -1 +0,0 @@
-Add Account Data extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Ensure we don't send down negative `bump_stamp` in experimental sliding sync endpoint.
@@ -1 +0,0 @@
-Do not send down empty room entries down experimental sliding sync endpoint.
@@ -1 +0,0 @@
-Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`.
@@ -1 +0,0 @@
-Refactor Sliding Sync tests to better utilize the `SlidingSyncBase`.
@@ -1 +0,0 @@
-Add receipts extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Fix a bug introduced in v1.110.0 which caused `/keys/query` to return incomplete results, leading to high network activity and CPU usage on Matrix clients.
@@ -1 +0,0 @@
-Add some opentracing tags and logging to the experimental sliding sync implementation.
@@ -1 +0,0 @@
-Split and move Sliding Sync tests so we have some more sane test file sizes.
@@ -1 +0,0 @@
-Add typing notification extension support to experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint.
@@ -1 +0,0 @@
-Update the `limited` field description in the Sliding Sync response to accurately describe what it actually represents.
Vendored
-36
@@ -1,39 +1,3 @@
-matrix-synapse-py3 (1.112.0) stable; urgency=medium
-
-  * New Synapse release 1.112.0.
-
- -- Synapse Packaging team <packages@matrix.org>  Tue, 30 Jul 2024 17:15:48 +0100
-
-matrix-synapse-py3 (1.112.0~rc1) stable; urgency=medium
-
-  * New Synapse release 1.112.0rc1.
-
- -- Synapse Packaging team <packages@matrix.org>  Tue, 23 Jul 2024 08:58:55 -0600
-
-matrix-synapse-py3 (1.111.1) stable; urgency=medium
-
-  * New Synapse release 1.111.1.
-
- -- Synapse Packaging team <packages@matrix.org>  Tue, 30 Jul 2024 16:13:52 +0100
-
-matrix-synapse-py3 (1.111.0) stable; urgency=medium
-
-  * New Synapse release 1.111.0.
-
- -- Synapse Packaging team <packages@matrix.org>  Tue, 16 Jul 2024 12:42:46 +0200
-
-matrix-synapse-py3 (1.111.0~rc2) stable; urgency=medium
-
-  * New synapse release 1.111.0rc2.
-
- -- Synapse Packaging team <packages@matrix.org>  Wed, 10 Jul 2024 08:46:54 +0000
-
-matrix-synapse-py3 (1.111.0~rc1) stable; urgency=medium
-
-  * New synapse release 1.111.0rc1.
-
- -- Synapse Packaging team <packages@matrix.org>  Tue, 09 Jul 2024 09:49:25 +0000
-
 matrix-synapse-py3 (1.110.0) stable; urgency=medium
 
   * New Synapse release 1.110.0.
Vendored
+1
-1
@@ -5,7 +5,7 @@ _Description: Name of the server:
  servers via federation. This is normally the public hostname of the
  server running synapse, but can be different if you set up delegation.
  Please refer to the delegation documentation in this case:
- https://element-hq.github.io/synapse/latest/delegate.html.
+ https://github.com/element-hq/synapse/blob/master/docs/delegate.md.
 
 Template: matrix-synapse/report-stats
 Type: boolean
+2
-2
@@ -27,7 +27,7 @@ ARG PYTHON_VERSION=3.11
 ###
 # We hardcode the use of Debian bookworm here because this could change upstream
 # and other Dockerfiles used for testing are expecting bookworm.
-FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS requirements
+FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm as requirements
 
 # RUN --mount is specific to buildkit and is documented at
 # https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
@@ -87,7 +87,7 @@ RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
 ###
 ### Stage 1: builder
 ###
-FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS builder
+FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm as builder
 
 # install the OS build deps
 RUN \
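The only change in these Dockerfile hunks is keyword casing on multi-stage builds. Recent BuildKit versions warn when `as` does not match the casing of `FROM` (the `FromAsCasing` build check), hence the uppercase spelling on one side:

```dockerfile
# Matching casing keeps BuildKit's FromAsCasing check quiet
# (concrete tag shown for illustration; the real file uses ${PYTHON_VERSION})
FROM docker.io/library/python:3.11-slim-bookworm AS requirements
```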
@@ -24,7 +24,7 @@ ARG distro=""
 # https://launchpad.net/~jyrki-pulliainen/+archive/ubuntu/dh-virtualenv, but
 # it's not obviously easier to use that than to build our own.)
 
-FROM docker.io/library/${distro} AS builder
+FROM docker.io/library/${distro} as builder
 
 RUN apt-get update -qq -o Acquire::Languages=none
 RUN env DEBIAN_FRONTEND=noninteractive apt-get install \
+50
-45
@@ -1,62 +1,67 @@
 # syntax=docker/dockerfile:1
 
 ARG SYNAPSE_VERSION=latest
-ARG SYNAPSE_IMAGE=docker.io/matrixdotorg/synapse:$SYNAPSE_VERSION
-
-ARG MAS_VERSION=latest
-ARG MAS_IMAGE=ghcr.io/matrix-org/matrix-authentication-service:$MAS_VERSION
-
-ARG REDIS_VERSION=7.4.0
-ARG REDIS_IMAGE=docker.io/library/redis:$REDIS_VERSION-bookworm
-
-ARG NGINX_VERSION=1.26.1
-ARG NGINX_IMAGE=docker.io/library/nginx:$NGINX_VERSION-bookworm
-
-FROM $NGINX_IMAGE AS nginx
-FROM $REDIS_IMAGE AS redis
-FROM $MAS_IMAGE AS mas
+ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
+
+# first of all, we create a base image with an nginx which we can copy into the
+# target image. For repeated rebuilds, this is much faster than apt installing
+# each time.
+FROM docker.io/library/debian:bookworm-slim AS deps_base
+RUN \
+   --mount=type=cache,target=/var/cache/apt,sharing=locked \
+   --mount=type=cache,target=/var/lib/apt,sharing=locked \
+  apt-get update -qq && \
+  DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
+    redis-server nginx-light
+
+# Similarly, a base to copy the redis server from.
+#
+# The redis docker image has fewer dynamic libraries than the debian package,
+# which makes it much easier to copy (but we need to make sure we use an image
+# based on the same debian version as the synapse image, to make sure we get
+# the expected version of libc.
+FROM docker.io/library/redis:7-bookworm AS redis_base
 
 # now build the final image, based on the the regular Synapse docker image
-FROM $SYNAPSE_IMAGE
+FROM $FROM
 
 # Install supervisord with pip instead of apt, to avoid installing a second
 # copy of python.
 RUN --mount=type=cache,target=/root/.cache/pip \
     pip install supervisor~=4.2
 RUN mkdir -p /etc/supervisor/conf.d
 
-# Copy over redis, nginx and matrix-authentication-service
-COPY --from=redis /usr/local/bin/redis-server /usr/local/bin
+# Copy over redis and nginx
+COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin
 
-COPY --from=nginx /usr/sbin/nginx /usr/sbin
-COPY --from=nginx /usr/share/nginx /usr/share/nginx
-COPY --from=nginx /usr/lib/nginx /usr/lib/nginx
-COPY --from=nginx /etc/nginx /etc/nginx
-RUN mkdir /var/log/nginx /var/lib/nginx
-RUN chown www-data /var/lib/nginx
+COPY --from=deps_base /usr/sbin/nginx /usr/sbin
+COPY --from=deps_base /usr/share/nginx /usr/share/nginx
+COPY --from=deps_base /usr/lib/nginx /usr/lib/nginx
+COPY --from=deps_base /etc/nginx /etc/nginx
+RUN rm /etc/nginx/sites-enabled/default
+RUN mkdir /var/log/nginx /var/lib/nginx
+RUN chown www-data /var/lib/nginx
 
 # have nginx log to stderr/out
 RUN ln -sf /dev/stdout /var/log/nginx/access.log
 RUN ln -sf /dev/stderr /var/log/nginx/error.log
 
-COPY --from=mas /usr/local/bin/mas-cli /usr/local/bin
-COPY --from=mas /usr/local/share/mas-cli /usr/local/share
-
 # Copy Synapse worker, nginx and supervisord configuration template files
 COPY ./docker/conf-workers/* /conf/
 
 # Copy a script to prefix log lines with the supervisor program name
 COPY ./docker/prefix-log /usr/local/bin/
 
 # Expose nginx listener port
 EXPOSE 8080/tcp
 
 # A script to read environment variables and create the necessary
 # files to run the desired worker configuration. Will start supervisord.
 COPY ./docker/configure_workers_and_start.py /configure_workers_and_start.py
 ENTRYPOINT ["/configure_workers_and_start.py"]
 
 # Replace the healthcheck with one which checks *all* the workers. The script
 # is generated by configure_workers_and_start.py.
 HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
     CMD /bin/sh /healthcheck.sh
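For orientation: the workers Dockerfile above layers nginx, redis and supervisord (and, on one side, matrix-authentication-service) on top of the regular Synapse image; the two sides differ only in whether those binaries are copied from pinned upstream images or from a locally-built `deps_base` stage. Per docker/README-testing.md, it is built on top of the base image:

```bash
# Build the base Synapse image, then the workers image on top of it
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
docker build -t matrixdotorg/synapse-workers -f docker/Dockerfile-workers .
```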
@@ -6,17 +6,11 @@
 # Instructions for building this image from those it depends on is detailed in this guide:
 # https://github.com/element-hq/synapse/blob/develop/docker/README-testing.md#testing-with-postgresql-and-single-or-multi-process-synapse
 
+ARG SYNAPSE_VERSION=latest
 # This is an intermediate image, to be built locally (not pulled from a registry).
-ARG SYNAPSE_WORKERS_IMAGE=synapse-workers
-
-ARG POSTGRES_VERSION=13
-ARG POSTGRES_IMAGE=docker.io/library/postgres:$POSTGRES_VERSION-bookworm
-
-# Save the Postgres image for later
-FROM $POSTGRES_IMAGE AS postgres
-
-FROM $SYNAPSE_WORKERS_IMAGE
+ARG FROM=matrixdotorg/synapse-workers:$SYNAPSE_VERSION
+
+FROM $FROM
 # First of all, we copy postgres server from the official postgres image,
 # since for repeated rebuilds, this is much faster than apt installing
 # postgres each time.
@@ -26,8 +20,8 @@ FROM $SYNAPSE_WORKERS_IMAGE
 # the same debian version as Synapse's docker image (so the versions of the
 # shared libraries match).
 RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
-COPY --from=postgres /usr/lib/postgresql /usr/lib/postgresql
-COPY --from=postgres /usr/share/postgresql /usr/share/postgresql
+COPY --from=docker.io/library/postgres:13-bookworm /usr/lib/postgresql /usr/lib/postgresql
+COPY --from=docker.io/library/postgres:13-bookworm /usr/share/postgresql /usr/share/postgresql
 RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
 ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
 ENV PGDATA=/var/lib/postgresql/data
@@ -35,10 +29,9 @@ ENV PGDATA=/var/lib/postgresql/data
 # We also initialize the database at build time, rather than runtime, so that it's faster to spin up the image.
 RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password
 
-# Configure a password and create a database for Synapse and MAS
+# Configure a password and create a database for Synapse
 RUN echo "ALTER USER postgres PASSWORD 'somesecret'" | gosu postgres postgres --single
 RUN echo "CREATE DATABASE synapse" | gosu postgres postgres --single
-RUN echo "CREATE DATABASE mas" | gosu postgres postgres --single
 
 # Extend the shared homeserver config to disable rate-limiting,
 # set Complement's static shared secret, enable registration, amongst other
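Both sides of the Complement Dockerfile above parameterise the base image; only the ARG name differs (`SYNAPSE_WORKERS_IMAGE` versus `FROM`). An illustrative build against the `FROM`-style side (paths per the guide linked in the file header):

```bash
docker build -t complement-synapse \
  --build-arg SYNAPSE_VERSION=latest \
  -f docker/complement/Dockerfile docker/complement
```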
@@ -20,15 +20,4 @@ app_service_config_files:
 {%- endfor %}
 {%- endif %}
 
-{% if enable_mas %}
-experimental_features:
-  msc3861:
-    enabled: true
-    issuer: "http://localhost:8008/"
-    client_id: "0000000000000000000SYNAPSE"
-    client_auth_method: client_secret_basic
-    client_secret: choozia3ThiefahZaofeiveish1kahr0
-    admin_token: eeShoo4ceebae4Lo4Che1hoofoophaiz
-{% endif %}
-
 {{ shared_worker_config }}
@@ -35,12 +35,3 @@ autorestart=true
 # Redis can be disabled if the image is being used without workers
 autostart={{ enable_redis }}
 
-[program:mas]
-comamnd=/usr/local/bin/prefix-log /usr/local/bin/mas-cli --config /conf/mas.yaml
-stdout_logfile=/dev/stdout
-stdout_logfile_maxbytes=0
-stderr_logfile=/dev/stderr
-stderr_logfile_maxbytes=0
-autorestart=unexpected
-
-autostart={{ enable_mas }}
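Incidentally, the removed `[program:mas]` block carries a `comamnd=` typo. supervisord requires a `command=` key in every program section and errors out at startup when one is missing, so as written this block could never have launched mas-cli; the presumably intended form would be:

```ini
[program:mas]
command=/usr/local/bin/prefix-log /usr/local/bin/mas-cli --config /conf/mas.yaml
autostart={{ enable_mas }}
autorestart=unexpected
```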
@@ -126,7 +126,6 @@ WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
             "^/_synapse/admin/v1/media/.*$",
             "^/_synapse/admin/v1/quarantine_media/.*$",
             "^/_matrix/client/v1/media/.*$",
-            "^/_matrix/federation/v1/media/.*$",
         ],
         # The first configured media worker will run the media background jobs
         "shared_extra_conf": {
@@ -959,7 +958,6 @@ def generate_worker_files(
         shared_worker_config=yaml.dump(shared_config),
         appservice_registrations=appservice_registrations,
         enable_redis=workers_in_use,
-        enable_mas=False,
         workers_in_use=workers_in_use,
         using_unix_sockets=using_unix_sockets,
     )
@@ -982,7 +980,6 @@ def generate_worker_files(
         "/etc/supervisor/supervisord.conf",
         main_config_path=config_path,
         enable_redis=workers_in_use,
-        enable_mas=False,
         using_unix_sockets=using_unix_sockets,
     )
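The `enable_mas=False` arguments deleted above are just keyword arguments threaded through to the Jinja templates shown earlier (the shared homeserver config and the supervisord config). A minimal sketch of that mechanism, with an illustrative template name rather than the script's real file layout:

```python
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("/conf"))
rendered = env.get_template("shared.yaml.j2").render(
    shared_worker_config="",  # yaml.dump(shared_config) in the real script
    enable_redis=True,        # driven by workers_in_use
    enable_mas=False,         # the flag present on only one side of this diff
)
print(rendered)
```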
@@ -2,9 +2,13 @@
 
 This API allows a server administrator to enable or disable some experimental features on a per-user
 basis. The currently supported features are:
+- [MSC3026](https://github.com/matrix-org/matrix-spec-proposals/pull/3026): busy
+  presence state enabled
 - [MSC3881](https://github.com/matrix-org/matrix-spec-proposals/pull/3881): enable remotely toggling push notifications
   for another client
-- [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575): enable experimental sliding sync support
+- [MSC3967](https://github.com/matrix-org/matrix-spec-proposals/pull/3967): do not require
+  UIA when first uploading cross-signing keys.
 
 To use it, you will need to authenticate by providing an `access_token`
 for a server admin: see [Admin API](../usage/administration/admin_api/).
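For reference, the per-user experimental-features API documented in this file is driven with an authenticated `PUT`; e.g. toggling MSC3881 for a single user (homeserver URL and token are placeholders):

```bash
curl -X PUT \
  -H "Authorization: Bearer <admin_access_token>" \
  -d '{"features": {"msc3881": true}}' \
  "http://localhost:8008/_synapse/admin/v1/experimental_features/@user:example.com"
```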
@@ -449,9 +449,9 @@ For example, a fix in PR #1234 would have its changelog entry in
 > The security levels of Florbs are now validated when received
 > via the `/federation/florb` endpoint. Contributed by Jane Matrix.
 
-If there are multiple pull requests involved in a single bugfix/feature/etc, then the
-content for each `changelog.d` file and file extension should be the same. Towncrier
-will merge the matching files together into a single changelog entry when we come to
+If there are multiple pull requests involved in a single bugfix/feature/etc,
+then the content for each `changelog.d` file should be the same. Towncrier will
+merge the matching files together into a single changelog entry when we come to
 release.
 
 ### How do I know what to call the changelog file before I create the PR?
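Concretely, both wordings above describe the same towncrier behaviour: a change spanning several PRs ships one `changelog.d` file per PR, and the files are folded into a single entry only when they match (the longer wording also calls out the file extension). Taking the per-user feature-flag work from the changelog above as an example (filenames inferred from the PR numbers):

```bash
echo "Finish up work to allow per-user feature flags." > changelog.d/17392.misc
echo "Finish up work to allow per-user feature flags." > changelog.d/17410.misc
# towncrier merges these into one "Internal Changes" entry at release time
```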
@@ -1,94 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-[... 92 further lines of SVG markup for the removed Element logo (element_logo_white_bg.svg, 7.5 KiB) ...]
@@ -67,7 +67,7 @@ in Synapse can be deactivated.

**NOTE**: This has an impact on security and is for testing purposes only!

To deactivate the certificate validation, the following setting must be added to
-your [homeserver.yaml](../usage/configuration/homeserver_sample_config.md).
+your [homserver.yaml](../usage/configuration/homeserver_sample_config.md).

```yaml
use_insecure_ssl_client_just_for_testing_do_not_use: true
```
@@ -309,62 +309,7 @@ sudo dnf install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
    libwebp-devel libxml2-devel libxslt-devel libpq-devel \
    python3-virtualenv libffi-devel openssl-devel python3-devel \
    libicu-devel
-sudo dnf group install "Development Tools"
+sudo dnf groupinstall "Development Tools"
```

-##### Red Hat Enterprise Linux / Rocky Linux
-
-*Note: The term "RHEL" below refers to both Red Hat Enterprise Linux and Rocky Linux. The distributions are 1:1 binary compatible.*
-
-It's recommended to use the latest Python versions.
-
-RHEL 8 in particular ships with Python 3.6 by default which is EOL and therefore no longer supported by Synapse. RHEL 9 ship with Python 3.9 which is still supported by the Python core team as of this writing. However, newer Python versions provide significant performance improvements and they're available in official distributions' repositories. Therefore it's recommended to use them.
-
-Python 3.11 and 3.12 are available for both RHEL 8 and 9.
-
-These commands should be run as root user.
-
-RHEL 8
-```bash
-# Enable PowerTools repository
-dnf config-manager --set-enabled powertools
-```
-RHEL 9
-```bash
-# Enable CodeReady Linux Builder repository
-crb enable
-```
-
-Install new version of Python. You only need one of these:
-```bash
-# Python 3.11
-dnf install python3.11 python3.11-devel
-```
-```bash
-# Python 3.12
-dnf install python3.12 python3.12-devel
-```
-Finally, install common prerequisites
-```bash
-dnf install libicu libicu-devel libpq5 libpq5-devel lz4 pkgconf
-dnf group install "Development Tools"
-```
-###### Using venv module instead of virtualenv command
-
-It's recommended to use Python venv module directly rather than the virtualenv command.
-* On RHEL 9, virtualenv is only available on [EPEL](https://docs.fedoraproject.org/en-US/epel/).
-* On RHEL 8, virtualenv is based on Python 3.6. It does not support creating 3.11/3.12 virtual environments.
-
-Here's an example of creating Python 3.12 virtual environment and installing Synapse from PyPI.
-
-```bash
-mkdir -p ~/synapse
-# To use Python 3.11, simply use the command "python3.11" instead.
-python3.12 -m venv ~/synapse/env
-source ~/synapse/env/bin/activate
-pip install --upgrade pip
-pip install --upgrade setuptools
-pip install matrix-synapse
-```

##### macOS
+2
-3
@@ -119,14 +119,13 @@ stacking them up. You can monitor the currently running background updates with

# Upgrading to v1.111.0

-## New worker endpoints for authenticated client and federation media
+## New worker endpoints for authenticated client media

[Media repository workers](./workers.md#synapseappmedia_repository) handling
-Media APIs can now handle the following endpoint patterns:
+Media APIs can now handle the following endpoint pattern:

```
^/_matrix/client/v1/media/.*$
-^/_matrix/federation/v1/media/.*$
```

Please update your reverse proxy configuration.
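Both endpoint patterns named in this hunk are plain regular expressions matched against the request path. A minimal sketch of the routing check a reverse proxy or request dispatcher performs; the `is_media_worker_path` helper and the example paths are illustrative only, not Synapse code:

```python
import re

# Endpoint patterns handled by media repository workers (from the hunk above).
MEDIA_WORKER_PATTERNS = [
    re.compile(r"^/_matrix/client/v1/media/.*$"),
    re.compile(r"^/_matrix/federation/v1/media/.*$"),
]

def is_media_worker_path(path: str) -> bool:
    """Return True if the request path should be routed to a media worker."""
    return any(pattern.match(path) for pattern in MEDIA_WORKER_PATTERNS)

# Illustrative checks:
assert is_media_worker_path("/_matrix/client/v1/media/download/example.org/abc123")
assert not is_media_worker_path("/_matrix/client/v3/sync")
```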
@@ -246,7 +246,6 @@ Example configuration:
```yaml
presence:
  enabled: false
-  include_offline_users_on_sync: false
```

`enabled` can also be set to a special value of "untracked" which ignores updates
@@ -255,10 +254,6 @@ received via clients and federation, while still accepting updates from the

*The "untracked" option was added in Synapse 1.96.0.*

-When clients perform an initial or `full_state` sync, presence results for offline users are
-not included by default. Setting `include_offline_users_on_sync` to `true` will always include
-offline users in the results. Defaults to false.
-
---
### `require_auth_for_profile_requests`

@@ -1868,18 +1863,6 @@ federation_rr_transactions_per_room_per_second: 40
## Media Store
Config options related to Synapse's media store.

----
-### `enable_authenticated_media`
-
-When set to true, all subsequent media uploads will be marked as authenticated, and will not be available over legacy
-unauthenticated media endpoints (`/_matrix/media/(r0|v3|v1)/download` and `/_matrix/media/(r0|v3|v1)/thumbnail`) - requests for authenticated media over these endpoints will result in a 404. All media, including authenticated media, will be available over the authenticated media endpoints `_matrix/client/v1/media/download` and `_matrix/client/v1/media/thumbnail`. Media uploaded prior to setting this option to true will still be available over the legacy endpoints. Note if the setting is switched to false
-after enabling, media marked as authenticated will be available over legacy endpoints. Defaults to false, but
-this will change to true in a future Synapse release.
-
-Example configuration:
-```yaml
-enable_authenticated_media: true
-```
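The removed prose is easiest to follow as a request flow: once media is marked authenticated, the legacy `/_matrix/media/...` paths return 404 for it, while the `/_matrix/client/v1/media/...` paths serve all media given an access token. A rough sketch under those assumptions; the homeserver URL, media ID, and token below are invented for illustration:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

# Hypothetical values for illustration only.
HS = "https://matrix.example.org"
MEDIA_ID = "abc123"
ACCESS_TOKEN = "syt_example_token"

# Legacy, unauthenticated endpoint: 404s for media marked authenticated.
legacy = f"{HS}/_matrix/media/v3/download/matrix.example.org/{MEDIA_ID}"
try:
    urlopen(legacy)
except HTTPError as e:
    print("legacy endpoint:", e.code)  # expect 404 when enable_authenticated_media is true

# Authenticated endpoint: serves all media, but requires an access token.
authed = f"{HS}/_matrix/client/v1/media/download/matrix.example.org/{MEDIA_ID}"
req = Request(authed, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"})
print("authenticated endpoint:", urlopen(req).status)
```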
---
### `enable_media_repo`

@@ -2386,7 +2369,7 @@ enable_registration_without_verification: true
---
### `registrations_require_3pid`

-If this is set, users must provide all of the specified types of [3PID](https://spec.matrix.org/latest/appendices/#3pid-types) when registering an account.
+If this is set, users must provide all of the specified types of 3PID when registering an account.

Note that [`enable_registration`](#enable_registration) must also be set to allow account registration.

@@ -2411,9 +2394,6 @@ disable_msisdn_registration: true

Mandate that users are only allowed to associate certain formats of
3PIDs with accounts on this server, as specified by the `medium` and `pattern` sub-options.
-`pattern` is a [Perl-like regular expression](https://docs.python.org/3/library/re.html#module-re).
-
-More information about 3PIDs, allowed `medium` types and their `address` syntax can be found [in the Matrix spec](https://spec.matrix.org/latest/appendices/#3pid-types).

Example configuration:
```yaml
@@ -2423,7 +2403,7 @@ allowed_local_3pids:
  - medium: email
    pattern: '^[^@]+@vector\.im$'
  - medium: msisdn
-    pattern: '^44\d{10}$'
+    pattern: '\+44'
```
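Since `pattern` is a Python-style regular expression (per the prose removed above), the two msisdn patterns differ materially: `'^44\d{10}$'` anchors "44" followed by exactly ten digits, while `'\+44'` only requires a literal `+44`. A quick, self-contained illustration; the phone numbers are made up, and whether Synapse applies `match` or `search` internally is not shown in this diff:

```python
import re

# The two msisdn patterns from the example above.
strict = re.compile(r"^44\d{10}$")  # "44" then exactly ten digits, nothing else
loose = re.compile(r"\+44")         # just requires a literal "+44"

print(bool(strict.match("441234567890")))   # True: matches the full anchored form
print(bool(strict.match("+441234567890")))  # False: leading "+" not allowed
print(bool(loose.search("+441234567890")))  # True: contains "+44"
```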
---
### `enable_3pid_lookup`
@@ -4154,38 +4134,6 @@ default_power_level_content_override:
  trusted_private_chat: null
  public_chat: null
```

-The default power levels for each preset are:
-```yaml
-"m.room.name": 50
-"m.room.power_levels": 100
-"m.room.history_visibility": 100
-"m.room.canonical_alias": 50
-"m.room.avatar": 50
-"m.room.tombstone": 100
-"m.room.server_acl": 100
-"m.room.encryption": 100
-```
-
-So a complete example where the default power-levels for a preset are maintained
-but the power level for a new key is set is:
-```yaml
-default_power_level_content_override:
-  private_chat:
-    events:
-      "com.example.foo": 0
-      "m.room.name": 50
-      "m.room.power_levels": 100
-      "m.room.history_visibility": 100
-      "m.room.canonical_alias": 50
-      "m.room.avatar": 50
-      "m.room.tombstone": 100
-      "m.room.server_acl": 100
-      "m.room.encryption": 100
-  trusted_private_chat: null
-  public_chat: null
-```

---
### `forget_rooms_on_leave`

@@ -740,7 +740,6 @@ Handles the media repository. It can handle all endpoints starting with:

    /_matrix/media/
    /_matrix/client/v1/media/
-   /_matrix/federation/v1/media/

... and the following regular expressions matching media-specific administration APIs:

Generated +511 -506 (file diff suppressed because it is too large)
+4
-5
@@ -43,7 +43,6 @@ target-version = ['py38', 'py39', 'py310', 'py311']
[tool.ruff]
line-length = 88

-[tool.ruff.lint]
# See https://beta.ruff.rs/docs/rules/#error-e
# for error codes. The ones we ignore are:
# E501: Line too long (black enforces this for us)
@@ -97,7 +96,7 @@ module-name = "synapse.synapse_rust"

[tool.poetry]
name = "matrix-synapse"
-version = "1.112.0"
+version = "1.110.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "AGPL-3.0-or-later"
@@ -201,8 +200,8 @@ netaddr = ">=0.7.18"
# add a lower bound to the Jinja2 dependency.
Jinja2 = ">=3.0"
bleach = ">=1.4.3"
-# We use `assert_never`, which were added in `typing-extensions` 4.1.
-typing-extensions = ">=4.1"
+# We use `Self`, which were added in `typing-extensions` 4.0.
+typing-extensions = ">=4.0"
# We enforce that we have a `cryptography` version that bundles an `openssl`
# with the latest security patches.
cryptography = ">=3.4.7"
@@ -322,7 +321,7 @@ all = [
# This helps prevents merge conflicts when running a batch of dependabot updates.
isort = ">=5.10.1"
black = ">=22.7.0"
-ruff = "0.5.5"
+ruff = "0.3.7"
# Type checking only works with the pydantic.v1 compat module from pydantic v2
pydantic = "^2"

+10
-13
@@ -167,11 +167,11 @@ if [ -z "$skip_docker_build" ]; then
    -f "docker/editable.Dockerfile" .

  $CONTAINER_RUNTIME build -t synapse-workers-editable \
-    --build-arg SYNAPSE_IMAGE=synapse-editable \
+    --build-arg FROM=synapse-editable \
    -f "docker/Dockerfile-workers" .

  $CONTAINER_RUNTIME build -t complement-synapse-editable \
-    --build-arg SUNAPSE_WORKERS_IMAGE=synapse-workers-editable \
+    --build-arg FROM=synapse-workers-editable \
    -f "docker/complement/Dockerfile" "docker/complement"

  # Prepare the Rust module
@@ -180,24 +180,21 @@ if [ -z "$skip_docker_build" ]; then
else

  # Build the base Synapse image from the local checkout
-  echo_if_github "::group::Build Docker image: synapse"
-  $CONTAINER_RUNTIME build -t synapse \
+  echo_if_github "::group::Build Docker image: matrixdotorg/synapse"
+  $CONTAINER_RUNTIME build -t matrixdotorg/synapse \
    --build-arg TEST_ONLY_SKIP_DEP_HASH_VERIFICATION \
    --build-arg TEST_ONLY_IGNORE_POETRY_LOCKFILE \
    -f "docker/Dockerfile" .
  echo_if_github "::endgroup::"

  # Build the workers docker image (from the base Synapse image we just built).
-  echo_if_github "::group::Build Docker image: synapse-workers"
-  $CONTAINER_RUNTIME build -t synapse-workers \
-    --build-arg SYNAPSE_IMAGE=synapse \
-    -f "docker/Dockerfile-workers" .
+  echo_if_github "::group::Build Docker image: matrixdotorg/synapse-workers"
+  $CONTAINER_RUNTIME build -t matrixdotorg/synapse-workers -f "docker/Dockerfile-workers" .
  echo_if_github "::endgroup::"

  # Build the unified Complement image (from the worker Synapse image we just built).
-  echo_if_github "::group::Build Docker image: complement-synapse"
+  echo_if_github "::group::Build Docker image: complement/Dockerfile"
  $CONTAINER_RUNTIME build -t complement-synapse \
-    --build-arg SYNAPSE_WORKERS_IMAGE=synapse-workers \
    -f "docker/complement/Dockerfile" "docker/complement"
  echo_if_github "::endgroup::"

+1
-1
@@ -112,7 +112,7 @@ python3 -m black "${files[@]}"

# Catch any common programming mistakes in Python code.
# --quiet suppresses the update check.
-ruff check --quiet --fix "${files[@]}"
+ruff --quiet --fix "${files[@]}"

# Catch any common programming mistakes in Rust code.
#
+16
-12
@@ -70,6 +70,7 @@ def cli() -> None:
    pip install -e .[dev]

    - A checkout of the sytest repository at ../sytest

    - A checkout of the complement repository at ../complement

    Then to use:
@@ -115,7 +116,7 @@ def _prepare() -> None:
    sytest_repo = get_repo_and_check_clean_checkout("../sytest", "sytest")
    complement_repo = get_repo_and_check_clean_checkout("../complement", "complement")

-    click.secho("Updating Synapse and Sytest git repos...")
+    click.secho("Updating Synapse, Sytest and Complement git repos...")
    synapse_repo.remote().fetch()
    sytest_repo.remote().fetch()
    complement_repo.remote().fetch()
@@ -202,24 +203,28 @@ def _prepare() -> None:
    # release type.
    if current_version.is_prerelease:
        default = release_branch_name
+        complement_default = release_branch_name
    elif release_type == "minor":
        default = "develop"
+        complement_default = "main"
    else:
        default = "master"
+        complement_default = "main"

-    branch_name = click.prompt(
+    sy_branch_name = click.prompt(
        "Which branch should the release be based on?", default=default
    )

-    for repo_name, repo in {
-        "synapse": synapse_repo,
-        "sytest": sytest_repo,
-        "complement": complement_repo,
-    }.items():
-        # Special case for Complement: `develop` maps to `main`
-        if repo_name == "complement" and branch_name == "develop":
-            branch_name = "main"
-
+    complement_branch = click.prompt(
+        "Which Complement branch should the release be based on?",
+        default=complement_default,
+    )
+
+    for repo_name, (repo, branch_name) in {
+        "synapse": (synapse_repo, sy_branch_name),
+        "sytest": (sytest_repo, sy_branch_name),
+        "complement": (complement_repo, complement_branch),
+    }.items():
        base_branch = find_ref(repo, branch_name)
        if not base_branch:
            print(f"Could not find base branch {branch_name} for {repo_name}!")
@@ -241,8 +246,7 @@ def _prepare() -> None:
    # not on subsequent RCs or full releases).
    if click.confirm("Push new SyTest branch?", default=True):
        sytest_repo.git.push("-u", sytest_repo.remote().name, release_branch_name)
-
-    # Same for Complement
+    # The same special case rules apply to Complement.
    if click.confirm("Push new Complement branch?", default=True):
        complement_repo.git.push(
            "-u", complement_repo.remote().name, release_branch_name
@@ -44,7 +44,7 @@ logger = logging.getLogger("generate_workers_map")
|
|||||||
|
|
||||||
|
|
||||||
class MockHomeserver(HomeServer):
|
class MockHomeserver(HomeServer):
|
||||||
DATASTORE_CLASS = DataStore
|
DATASTORE_CLASS = DataStore # type: ignore
|
||||||
|
|
||||||
def __init__(self, config: HomeServerConfig, worker_app: Optional[str]) -> None:
|
def __init__(self, config: HomeServerConfig, worker_app: Optional[str]) -> None:
|
||||||
super().__init__(config.server.server_name, config=config)
|
super().__init__(config.server.server_name, config=config)
|
||||||
|
|||||||
@@ -119,19 +119,18 @@ BOOLEAN_COLUMNS = {
    "e2e_room_keys": ["is_verified"],
    "event_edges": ["is_state"],
    "events": ["processed", "outlier", "contains_url"],
-    "local_media_repository": ["safe_from_quarantine", "authenticated"],
-    "per_user_experimental_features": ["enabled"],
+    "local_media_repository": ["safe_from_quarantine"],
    "presence_list": ["accepted"],
    "presence_stream": ["currently_active"],
    "public_room_list_stream": ["visibility"],
    "pushers": ["enabled"],
    "redactions": ["have_censored"],
-    "remote_media_cache": ["authenticated"],
    "room_stats_state": ["is_federatable"],
    "rooms": ["is_public", "has_auth_chain_index"],
    "users": ["shadow_banned", "approved", "locked", "suspended"],
    "un_partial_stated_event_stream": ["rejection_status_changed"],
    "users_who_share_rooms": ["share_private"],
+    "per_user_experimental_features": ["enabled"],
}

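For context on why the port script carries this table at all: SQLite has no native boolean type and stores these columns as 0/1 integers, so they must be coerced when rows are copied into Postgres. A self-contained sketch of that kind of coercion; the helper and row data are illustrative, not the script's actual code:

```python
# Illustrative only: coerce SQLite 0/1 integers to Python bools for the
# columns listed in BOOLEAN_COLUMNS before inserting into Postgres.
BOOLEAN_COLUMNS = {"users": ["shadow_banned", "approved", "locked", "suspended"]}

def coerce_booleans(table: str, row: dict) -> dict:
    for column in BOOLEAN_COLUMNS.get(table, []):
        if row.get(column) is not None:
            row[column] = bool(row[column])
    return row

print(coerce_booleans("users", {"name": "@alice:example.org", "locked": 1}))
# {'name': '@alice:example.org', 'locked': True}
```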
@@ -41,7 +41,7 @@ logger = logging.getLogger("update_database")
|
|||||||
|
|
||||||
|
|
||||||
class MockHomeserver(HomeServer):
|
class MockHomeserver(HomeServer):
|
||||||
DATASTORE_CLASS = DataStore
|
DATASTORE_CLASS = DataStore # type: ignore [assignment]
|
||||||
|
|
||||||
def __init__(self, config: HomeServerConfig):
|
def __init__(self, config: HomeServerConfig):
|
||||||
super().__init__(
|
super().__init__(
|
||||||
|
|||||||
@@ -18,7 +18,7 @@
# [This file includes modifications made by New Vector Limited]
#
#
-from typing import TYPE_CHECKING, Optional, Tuple
+from typing import Optional, Tuple

from typing_extensions import Protocol
@@ -28,9 +28,6 @@ from synapse.appservice import ApplicationService
from synapse.http.site import SynapseRequest
from synapse.types import Requester

-if TYPE_CHECKING:
-    from synapse.rest.admin.experimental_features import ExperimentalFeature
-
# guests always get this device id.
GUEST_DEVICE_ID = "guest_device"
@@ -90,19 +87,6 @@ class Auth(Protocol):
            AuthError if access is denied for the user in the access token
        """

-    async def get_user_by_req_experimental_feature(
-        self,
-        request: SynapseRequest,
-        feature: "ExperimentalFeature",
-        allow_guest: bool = False,
-        allow_expired: bool = False,
-        allow_locked: bool = False,
-    ) -> Requester:
-        """Like `get_user_by_req`, except also checks if the user has access to
-        the experimental feature. If they don't returns a 404 unrecognized
-        request.
-        """
-
    async def validate_appservice_can_control_user_id(
        self, app_service: ApplicationService, user_id: str
    ) -> None:
@@ -28,7 +28,6 @@ from synapse.api.errors import (
    Codes,
    InvalidClientTokenError,
    MissingClientTokenError,
-    UnrecognizedRequestError,
)
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import active_span, force_tracing, start_active_span
@@ -39,10 +38,8 @@ from . import GUEST_DEVICE_ID
from .base import BaseAuth

if TYPE_CHECKING:
-    from synapse.rest.admin.experimental_features import ExperimentalFeature
    from synapse.server import HomeServer


logger = logging.getLogger(__name__)

@@ -109,32 +106,6 @@ class InternalAuth(BaseAuth):
                parent_span.set_tag("appservice_id", requester.app_service.id)
            return requester

-    async def get_user_by_req_experimental_feature(
-        self,
-        request: SynapseRequest,
-        feature: "ExperimentalFeature",
-        allow_guest: bool = False,
-        allow_expired: bool = False,
-        allow_locked: bool = False,
-    ) -> Requester:
-        try:
-            requester = await self.get_user_by_req(
-                request,
-                allow_guest=allow_guest,
-                allow_expired=allow_expired,
-                allow_locked=allow_locked,
-            )
-            if await self.store.is_feature_enabled(requester.user.to_string(), feature):
-                return requester
-
-            raise UnrecognizedRequestError(code=404)
-        except (AuthError, InvalidClientTokenError):
-            if feature.is_globally_enabled(self.hs.config):
-                # If its globally enabled then return the auth error
-                raise
-
-            raise UnrecognizedRequestError(code=404)
-
    @cancellable
    async def _wrapped_get_user_by_req(
        self,
@@ -40,7 +40,6 @@ from synapse.api.errors import (
    OAuthInsufficientScopeError,
    StoreError,
    SynapseError,
-    UnrecognizedRequestError,
)
from synapse.http.site import SynapseRequest
from synapse.logging.context import make_deferred_yieldable
@@ -49,7 +48,6 @@ from synapse.util import json_decoder
from synapse.util.caches.cached_call import RetryOnExceptionCachedCall

if TYPE_CHECKING:
-    from synapse.rest.admin.experimental_features import ExperimentalFeature
    from synapse.server import HomeServer

logger = logging.getLogger(__name__)
@@ -145,18 +143,6 @@ class MSC3861DelegatedAuth(BaseAuth):
        # metadata.validate_introspection_endpoint()
        return metadata

-    async def _introspection_endpoint(self) -> str:
-        """
-        Returns the introspection endpoint of the issuer
-
-        It uses the config option if set, otherwise it will use OIDC discovery to get it
-        """
-        if self._config.introspection_endpoint is not None:
-            return self._config.introspection_endpoint
-
-        metadata = await self._load_metadata()
-        return metadata.get("introspection_endpoint")
-
    async def _introspect_token(self, token: str) -> IntrospectionToken:
        """
        Send a token to the introspection endpoint and returns the introspection response
@@ -173,7 +159,8 @@ class MSC3861DelegatedAuth(BaseAuth):
        Returns:
            The introspection response
        """
-        introspection_endpoint = await self._introspection_endpoint()
+        metadata = await self._issuer_metadata.get()
+        introspection_endpoint = metadata.get("introspection_endpoint")
        raw_headers: Dict[str, str] = {
            "Content-Type": "application/x-www-form-urlencoded",
            "User-Agent": str(self._http_client.user_agent, "utf-8"),
@@ -258,32 +245,6 @@ class MSC3861DelegatedAuth(BaseAuth):

        return requester

-    async def get_user_by_req_experimental_feature(
-        self,
-        request: SynapseRequest,
-        feature: "ExperimentalFeature",
-        allow_guest: bool = False,
-        allow_expired: bool = False,
-        allow_locked: bool = False,
-    ) -> Requester:
-        try:
-            requester = await self.get_user_by_req(
-                request,
-                allow_guest=allow_guest,
-                allow_expired=allow_expired,
-                allow_locked=allow_locked,
-            )
-            if await self.store.is_feature_enabled(requester.user.to_string(), feature):
-                return requester
-
-            raise UnrecognizedRequestError(code=404)
-        except (AuthError, InvalidClientTokenError):
-            if feature.is_globally_enabled(self.hs.config):
-                # If its globally enabled then return the auth error
-                raise
-
-            raise UnrecognizedRequestError(code=404)
-
    async def get_user_by_access_token(
        self,
        token: str,
@@ -50,7 +50,7 @@ class Membership:
    KNOCK: Final = "knock"
    LEAVE: Final = "leave"
    BAN: Final = "ban"
-    LIST: Final = frozenset((INVITE, JOIN, KNOCK, LEAVE, BAN))
+    LIST: Final = {INVITE, JOIN, KNOCK, LEAVE, BAN}


class PresenceState:
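A note on the left-hand version: `frozenset` is immutable and hashable, so the membership list cannot be mutated at runtime and can itself be used as a dict key or set member, which a plain `set` cannot. A quick illustration:

```python
memberships = frozenset(("invite", "join", "knock", "leave", "ban"))

print("join" in memberships)    # True: membership tests work exactly like a set
print({memberships: "states"})  # hashable, so usable as a dict key
# memberships.add("other")      # would raise AttributeError: frozenset is immutable
```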
@@ -128,13 +128,9 @@ class EventTypes:
    SpaceParent: Final = "m.space.parent"

    Reaction: Final = "m.reaction"
-    Sticker: Final = "m.sticker"
-    LiveLocationShareStart: Final = "m.beacon_info"

    CallInvite: Final = "m.call.invite"

-    PollStart: Final = "m.poll.start"
-

class ToDeviceEventTypes:
    RoomKeyRequest: Final = "m.room_key_request"
@@ -225,11 +221,6 @@ class EventContentFields:
    # This is deprecated in MSC2175.
    ROOM_CREATOR: Final = "creator"

-    # The version of the room for `m.room.create` events.
-    ROOM_VERSION: Final = "room_version"
-
-    ROOM_NAME: Final = "name"
-
    # Used in m.room.guest_access events.
    GUEST_ACCESS: Final = "guest_access"
@@ -242,9 +233,6 @@ class EventContentFields:
    # an unspecced field added to to-device messages to identify them uniquely-ish
    TO_DEVICE_MSGID: Final = "org.matrix.msgid"

-    # `m.room.encryption`` algorithm field
-    ENCRYPTION_ALGORITHM: Final = "algorithm"
-

class EventUnsignedContentFields:
    """Fields found inside the 'unsigned' data on events"""
@@ -236,8 +236,9 @@ class Ratelimiter:
            requester: The requester that is doing the action, if any.
            key: An arbitrary key used to classify an action. Defaults to the
                requester's user ID.
-            n_actions: The number of times the user performed the action. May be negative
-                to "refund" the rate limit.
+            n_actions: The number of times the user wants to do this action. If the user
+                cannot do all of the actions, the user's action count is not incremented
+                at all.
            _time_now_s: The current time. Optional, defaults to the current time according
                to self.clock. Only used by tests.
        """
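Both sides of this docstring describe behaviour worth making concrete: counting is all-or-nothing, and, per the left-hand side, a negative `n_actions` refunds previously counted actions. A self-contained toy counter, not Synapse's `Ratelimiter`, illustrating those two semantics:

```python
class TinyLimiter:
    """Toy counter illustrating the n_actions semantics described above."""

    def __init__(self, burst: int) -> None:
        self.burst = burst
        self.count = 0

    def can_do_action(self, n_actions: int = 1) -> bool:
        if n_actions < 0:  # a negative value "refunds" previously counted actions
            self.count = max(0, self.count + n_actions)
            return True
        if self.count + n_actions > self.burst:
            return False  # all-or-nothing: the count is not incremented at all
        self.count += n_actions
        return True

limiter = TinyLimiter(burst=3)
assert limiter.can_do_action(3)      # uses the whole burst
assert not limiter.can_do_action(1)  # over budget: rejected, count unchanged
limiter.can_do_action(-1)            # refund one action
assert limiter.can_do_action(1)      # now allowed again
```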
@@ -110,7 +110,7 @@ class AdminCmdStore(


class AdminCmdServer(HomeServer):
-    DATASTORE_CLASS = AdminCmdStore
+    DATASTORE_CLASS = AdminCmdStore  # type: ignore


async def export_data_command(hs: HomeServer, args: argparse.Namespace) -> None:
@@ -74,9 +74,6 @@ from synapse.storage.databases.main.event_push_actions import (
    EventPushActionsWorkerStore,
)
from synapse.storage.databases.main.events_worker import EventsWorkerStore
-from synapse.storage.databases.main.experimental_features import (
-    ExperimentalFeaturesStore,
-)
from synapse.storage.databases.main.filtering import FilteringWorkerStore
from synapse.storage.databases.main.keys import KeyStore
from synapse.storage.databases.main.lock import LockStore
@@ -158,7 +155,6 @@ class GenericWorkerStore(
    LockStore,
    SessionStore,
    TaskSchedulerWorkerStore,
-    ExperimentalFeaturesStore,
):
    # Properties that multiple storage classes define. Tell mypy what the
    # expected type is.
@@ -167,7 +163,7 @@ class GenericWorkerStore(


class GenericWorkerServer(HomeServer):
-    DATASTORE_CLASS = GenericWorkerStore
+    DATASTORE_CLASS = GenericWorkerStore  # type: ignore

    def _listen_http(self, listener_config: ListenerConfig) -> None:
        assert listener_config.http_options is not None
@@ -81,7 +81,7 @@ def gz_wrap(r: Resource) -> Resource:


class SynapseHomeServer(HomeServer):
-    DATASTORE_CLASS = DataStore
+    DATASTORE_CLASS = DataStore  # type: ignore

    def _listener_http(
        self,
@@ -217,7 +217,7 @@ class SynapseHomeServer(HomeServer):
        )

        if name in ["media", "federation", "client"]:
-            if self.config.media.can_load_media_repo:
+            if self.config.server.enable_media_repo:
                media_repo = self.get_media_repository_resource()
                resources.update(
                    {
@@ -140,12 +140,6 @@ class MSC3861:
        ("experimental", "msc3861", "client_auth_method"),
    )

-    introspection_endpoint: Optional[str] = attr.ib(
-        default=None,
-        validator=attr.validators.optional(attr.validators.instance_of(str)),
-    )
-    """The URL of the introspection endpoint used to validate access tokens."""
-
    account_management_url: Optional[str] = attr.ib(
        default=None,
        validator=attr.validators.optional(attr.validators.instance_of(str)),
@@ -443,6 +437,10 @@ class ExperimentalConfig(Config):
            "msc3823_account_suspension", False
        )

+        self.msc3916_authenticated_media_enabled = experimental.get(
+            "msc3916_authenticated_media_enabled", False
+        )
+
        # MSC4151: Report room API (Client-Server API)
        self.msc4151_enabled: bool = experimental.get("msc4151_enabled", False)

@@ -126,7 +126,7 @@ class ContentRepositoryConfig(Config):
        # Only enable the media repo if either the media repo is enabled or the
        # current worker app is the media repo.
        if (
-            config.get("enable_media_repo", True) is False
+            self.root.server.enable_media_repo is False
            and config.get("worker_app") != "synapse.app.media_repository"
        ):
            self.can_load_media_repo = False
@@ -272,10 +272,6 @@ class ContentRepositoryConfig(Config):
            remote_media_lifetime
        )

-        self.enable_authenticated_media = config.get(
-            "enable_authenticated_media", False
-        )
-
    def generate_config_section(self, data_dir_path: str, **kwargs: Any) -> str:
        assert data_dir_path is not None
        media_store = os.path.join(data_dir_path, "media_store")
@@ -384,11 +384,6 @@ class ServerConfig(Config):
        # Whether to internally track presence, requires that presence is enabled,
        self.track_presence = self.presence_enabled and presence_enabled != "untracked"

-        # Determines if presence results for offline users are included on initial/full sync
-        self.presence_include_offline_users_on_sync = presence_config.get(
-            "include_offline_users_on_sync", False
-        )
-
        # Custom presence router module
        # This is the legacy way of configuring it (the config should now be put in the modules section)
        self.presence_router_module_class = None
@@ -400,6 +395,12 @@ class ServerConfig(Config):
            self.presence_router_config,
        ) = load_module(presence_router_config, ("presence", "presence_router"))

+        # whether to enable the media repository endpoints. This should be set
+        # to false if the media repository is running as a separate endpoint;
+        # doing so ensures that we will not run cache cleanup jobs on the
+        # master, potentially causing inconsistency.
+        self.enable_media_repo = config.get("enable_media_repo", True)
+
        # Whether to require authentication to retrieve profile data (avatars,
        # display names) of other users through the client API.
        self.require_auth_for_profile_requests = config.get(
@@ -554,22 +554,3 @@ def relation_from_event(event: EventBase) -> Optional[_EventRelation]:
        aggregation_key = None

    return _EventRelation(parent_id, rel_type, aggregation_key)
-
-
-@attr.s(slots=True, frozen=True, auto_attribs=True)
-class StrippedStateEvent:
-    """
-    A stripped down state event. Usually used for remote invite/knocks so the user can
-    make an informed decision on whether they want to join.
-
-    Attributes:
-        type: Event `type`
-        state_key: Event `state_key`
-        sender: Event `sender`
-        content: Event `content`
-    """
-
-    type: str
-    state_key: str
-    sender: str
-    content: Dict[str, Any]
+1
-28
@@ -49,7 +49,7 @@ from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.types import JsonDict, Requester

-from . import EventBase, StrippedStateEvent, make_event_from_dict
+from . import EventBase, make_event_from_dict

if TYPE_CHECKING:
    from synapse.handlers.relations import BundledAggregations
@@ -854,30 +854,3 @@ def strip_event(event: EventBase) -> JsonDict:
        "content": event.content,
        "sender": event.sender,
    }
-
-
-def parse_stripped_state_event(raw_stripped_event: Any) -> Optional[StrippedStateEvent]:
-    """
-    Given a raw value from an event's `unsigned` field, attempt to parse it into a
-    `StrippedStateEvent`.
-    """
-    if isinstance(raw_stripped_event, dict):
-        # All of these fields are required
-        type = raw_stripped_event.get("type")
-        state_key = raw_stripped_event.get("state_key")
-        sender = raw_stripped_event.get("sender")
-        content = raw_stripped_event.get("content")
-        if (
-            isinstance(type, str)
-            and isinstance(state_key, str)
-            and isinstance(sender, str)
-            and isinstance(content, dict)
-        ):
-            return StrippedStateEvent(
-                type=type,
-                state_key=state_key,
-                sender=sender,
-                content=content,
-            )
-
-    return None
@@ -338,11 +338,12 @@ class PerDestinationQueue:
            # not caught up yet
            return

+        pending_pdus = []
        while True:
            self._new_data_to_send = False

            async with _TransactionQueueManager(self) as (
-                pending_pdus,  # noqa: F811
+                pending_pdus,
                pending_edus,
            ):
                if not pending_pdus and not pending_edus:
@@ -33,7 +33,6 @@ from synapse.federation.transport.server.federation import (
    FEDERATION_SERVLET_CLASSES,
    FederationAccountStatusServlet,
    FederationMediaDownloadServlet,
-    FederationMediaThumbnailServlet,
    FederationUnstableClientKeysClaimServlet,
)
from synapse.http.server import HttpServer, JsonResource
@@ -317,11 +316,8 @@ def register_servlets(
        ):
            continue

-        if (
-            servletclass == FederationMediaDownloadServlet
-            or servletclass == FederationMediaThumbnailServlet
-        ):
-            if not hs.config.media.can_load_media_repo:
+        if servletclass == FederationMediaDownloadServlet:
+            if not hs.config.server.enable_media_repo:
                continue

        servletclass(
@@ -363,8 +363,6 @@ class BaseFederationServlet:
                if (
                    func.__self__.__class__.__name__  # type: ignore
                    == "FederationMediaDownloadServlet"
-                    or func.__self__.__class__.__name__  # type: ignore
-                    == "FederationMediaThumbnailServlet"
                ):
                    response = await func(
                        origin, content, request, *args, **kwargs
@@ -377,8 +375,6 @@ class BaseFederationServlet:
                if (
                    func.__self__.__class__.__name__  # type: ignore
                    == "FederationMediaDownloadServlet"
-                    or func.__self__.__class__.__name__  # type: ignore
-                    == "FederationMediaThumbnailServlet"
                ):
                    response = await func(
                        origin, content, request, *args, **kwargs
@@ -46,13 +46,11 @@ from synapse.http.servlet import (
    parse_boolean_from_args,
    parse_integer,
    parse_integer_from_args,
-    parse_string,
    parse_string_from_args,
    parse_strings_from_args,
)
from synapse.http.site import SynapseRequest
from synapse.media._base import DEFAULT_MAX_TIMEOUT_MS, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS
-from synapse.media.thumbnailer import ThumbnailProvider
from synapse.types import JsonDict
from synapse.util import SYNAPSE_VERSION
from synapse.util.ratelimitutils import FederationRateLimiter
@@ -828,59 +826,6 @@ class FederationMediaDownloadServlet(BaseFederationServerServlet):
        )


-class FederationMediaThumbnailServlet(BaseFederationServerServlet):
-    """
-    Implementation of new federation media `/thumbnail` endpoint outlined in MSC3916. Returns
-    a multipart/mixed response consisting of a JSON object and the requested media
-    item. This endpoint only returns local media.
-    """
-
-    PATH = "/media/thumbnail/(?P<media_id>[^/]*)"
-    RATELIMIT = True
-
-    def __init__(
-        self,
-        hs: "HomeServer",
-        ratelimiter: FederationRateLimiter,
-        authenticator: Authenticator,
-        server_name: str,
-    ):
-        super().__init__(hs, authenticator, ratelimiter, server_name)
-        self.media_repo = self.hs.get_media_repository()
-        self.dynamic_thumbnails = hs.config.media.dynamic_thumbnails
-        self.thumbnail_provider = ThumbnailProvider(
-            hs, self.media_repo, self.media_repo.media_storage
-        )
-
-    async def on_GET(
-        self,
-        origin: Optional[str],
-        content: Literal[None],
-        request: SynapseRequest,
-        media_id: str,
-    ) -> None:
-        width = parse_integer(request, "width", required=True)
-        height = parse_integer(request, "height", required=True)
-        method = parse_string(request, "method", "scale")
-        # TODO Parse the Accept header to get an prioritised list of thumbnail types.
-        m_type = "image/png"
-        max_timeout_ms = parse_integer(
-            request, "timeout_ms", default=DEFAULT_MAX_TIMEOUT_MS
-        )
-        max_timeout_ms = min(max_timeout_ms, MAXIMUM_ALLOWED_MAX_TIMEOUT_MS)
-
-        if self.dynamic_thumbnails:
-            await self.thumbnail_provider.select_or_generate_local_thumbnail(
-                request, media_id, width, height, method, m_type, max_timeout_ms, True
-            )
-        else:
-            await self.thumbnail_provider.respond_local_thumbnail(
-                request, media_id, width, height, method, m_type, max_timeout_ms, True
-            )
-        self.media_repo.mark_recently_accessed(None, media_id)
-
-
FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
    FederationSendServlet,
    FederationEventServlet,
@@ -913,5 +858,4 @@ FEDERATION_SERVLET_CLASSES: Tuple[Type[BaseFederationServlet], ...] = (
    FederationMakeKnockServlet,
    FederationAccountStatusServlet,
    FederationMediaDownloadServlet,
-    FederationMediaThumbnailServlet,
)
@@ -283,10 +283,6 @@ class DeactivateAccountHandler:
                    ratelimit=False,
                    require_consent=False,
                )
-
-                # Mark the room forgotten too, because they won't be able to do this
-                # for us. This may lead to the room being purged eventually.
-                await self._room_member_handler.forget(user, room_id)
            except Exception:
                logger.exception(
                    "Failed to part user %r from room %r: ignoring and continuing",
@@ -39,7 +39,6 @@ from synapse.metrics.background_process_metrics import (
)
from synapse.storage.databases.main.client_ips import DeviceLastConnectionInfo
from synapse.types import (
-    DeviceListUpdates,
    JsonDict,
    JsonMapping,
    ScheduledTask,
@@ -215,7 +214,7 @@ class DeviceWorkerHandler:
    @cancellable
    async def get_user_ids_changed(
        self, user_id: str, from_token: StreamToken
-    ) -> DeviceListUpdates:
+    ) -> JsonDict:
        """Get list of users that have had the devices updated, or have newly
        joined a room, that `user_id` may be interested in.
        """
@@ -342,19 +341,11 @@ class DeviceWorkerHandler:
        possibly_joined = set()
        possibly_left = set()

-        device_list_updates = DeviceListUpdates(
-            changed=possibly_joined,
-            left=possibly_left,
-        )
-
-        log_kv(
-            {
-                "changed": device_list_updates.changed,
-                "left": device_list_updates.left,
-            }
-        )
-
-        return device_list_updates
+        result = {"changed": list(possibly_joined), "left": list(possibly_left)}
+
+        log_kv(result)
+
+        return result

    async def on_federation_query_user_devices(self, user_id: str) -> JsonDict:
        if not self.hs.is_mine(UserID.from_string(user_id)):
@@ -291,20 +291,13 @@ class E2eKeysHandler:

        # Only try and fetch keys for destinations that are not marked as
        # down.
-        unfiltered_destinations = remote_queries_not_in_cache.keys()
-        filtered_destinations = set(
-            await filter_destinations_by_retry_limiter(
-                unfiltered_destinations,
-                self.clock,
-                self.store,
-                # Let's give an arbitrary grace period for those hosts that are
-                # only recently down
-                retry_due_within_ms=60 * 1000,
-            )
-        )
-        failures.update(
-            (dest, _NOT_READY_FOR_RETRY_FAILURE)
-            for dest in (unfiltered_destinations - filtered_destinations)
-        )
+        filtered_destinations = await filter_destinations_by_retry_limiter(
+            remote_queries_not_in_cache.keys(),
+            self.clock,
+            self.store,
+            # Let's give an arbitrary grace period for those hosts that are
+            # only recently down
+            retry_due_within_ms=60 * 1000,
+        )

        await concurrently_execute(
@@ -1648,9 +1641,6 @@ def _check_device_signature(
        raise SynapseError(400, "Invalid signature", Codes.INVALID_SIGNATURE)


-_NOT_READY_FOR_RETRY_FAILURE = {"status": 503, "message": "Not ready for retry"}
-
-
def _exception_to_failure(e: Exception) -> JsonDict:
    if isinstance(e, SynapseError):
        return {"status": e.code, "errcode": e.errcode, "message": str(e)}
@@ -1659,7 +1649,7 @@ def _exception_to_failure(e: Exception) -> JsonDict:
        return {"status": e.code, "message": str(e)}

    if isinstance(e, NotRetryingDestination):
-        return _NOT_READY_FOR_RETRY_FAILURE
+        return {"status": 503, "message": "Not ready for retry"}

    # include ConnectionRefused and other errors
    #
@@ -34,7 +34,7 @@ from synapse.api.errors import (
 from synapse.logging.opentracing import log_kv, trace
 from synapse.storage.databases.main.e2e_room_keys import RoomKey
 from synapse.types import JsonDict
-from synapse.util.async_helpers import ReadWriteLock
+from synapse.util.async_helpers import Linearizer

 if TYPE_CHECKING:
     from synapse.server import HomeServer
@@ -58,7 +58,7 @@ class E2eRoomKeysHandler:
         # clients belonging to a user will receive and try to upload a new session at
         # roughly the same time. Also used to lock out uploads when the key is being
         # changed.
-        self._upload_lock = ReadWriteLock()
+        self._upload_linearizer = Linearizer("upload_room_keys_lock")

     @trace
     async def get_room_keys(
@@ -89,7 +89,7 @@ class E2eRoomKeysHandler:

         # we deliberately take the lock to get keys so that changing the version
         # works atomically
-        async with self._upload_lock.read(user_id):
+        async with self._upload_linearizer.queue(user_id):
             # make sure the backup version exists
             try:
                 await self.store.get_e2e_room_keys_version_info(user_id, version)
@@ -132,7 +132,7 @@ class E2eRoomKeysHandler:
         """

         # lock for consistency with uploading
-        async with self._upload_lock.write(user_id):
+        async with self._upload_linearizer.queue(user_id):
             # make sure the backup version exists
             try:
                 version_info = await self.store.get_e2e_room_keys_version_info(
@@ -193,7 +193,7 @@ class E2eRoomKeysHandler:
         # TODO: Validate the JSON to make sure it has the right keys.

         # XXX: perhaps we should use a finer grained lock here?
-        async with self._upload_lock.write(user_id):
+        async with self._upload_linearizer.queue(user_id):
             # Check that the version we're trying to upload is the current version
             try:
                 version_info = await self.store.get_e2e_room_keys_version_info(user_id)
@@ -355,7 +355,7 @@ class E2eRoomKeysHandler:
         # TODO: Validate the JSON to make sure it has the right keys.

         # lock everyone out until we've switched version
-        async with self._upload_lock.write(user_id):
+        async with self._upload_linearizer.queue(user_id):
             new_version = await self.store.create_e2e_room_keys_version(
                 user_id, version_info
             )
@@ -382,7 +382,7 @@ class E2eRoomKeysHandler:
         }
         """

-        async with self._upload_lock.read(user_id):
+        async with self._upload_linearizer.queue(user_id):
             try:
                 res = await self.store.get_e2e_room_keys_version_info(user_id, version)
             except StoreError as e:
@@ -407,7 +407,7 @@ class E2eRoomKeysHandler:
             NotFoundError: if this backup version doesn't exist
         """

-        async with self._upload_lock.write(user_id):
+        async with self._upload_linearizer.queue(user_id):
             try:
                 await self.store.delete_e2e_room_keys_version(user_id, version)
             except StoreError as e:
@@ -437,7 +437,7 @@ class E2eRoomKeysHandler:
                 raise SynapseError(
                     400, "Version in body does not match", Codes.INVALID_PARAM
                 )
-        async with self._upload_lock.write(user_id):
+        async with self._upload_linearizer.queue(user_id):
             try:
                 old_info = await self.store.get_e2e_room_keys_version_info(
                     user_id, version
@@ -286,14 +286,8 @@ class ReceiptEventSource(EventSource[MultiWriterStreamToken, JsonMapping]):
         room_ids: Iterable[str],
         is_guest: bool,
         explicit_room_id: Optional[str] = None,
-        to_key: Optional[MultiWriterStreamToken] = None,
     ) -> Tuple[List[JsonMapping], MultiWriterStreamToken]:
-        """
-        Find read receipts for given rooms (> `from_token` and <= `to_token`)
-        """
-
-        if to_key is None:
-            to_key = self.get_current_key()
+        to_key = self.get_current_key()

         if from_key == to_key:
             return [], to_key
@@ -1188,8 +1188,6 @@ class RoomCreationHandler:
             )
             events_to_send.append((power_event, power_context))
         else:
-            # Please update the docs for `default_power_level_content_override` when
-            # updating the `events` dict below
             power_level_content: JsonDict = {
                 "users": {creator_id: 100},
                 "users_default": 0,
+278 -2208 (file diff suppressed because it is too large)
@@ -293,9 +293,7 @@ class StatsHandler:
                     "history_visibility"
                 )
             elif delta.event_type == EventTypes.RoomEncryption:
-                room_state["encryption"] = event_content.get(
-                    EventContentFields.ENCRYPTION_ALGORITHM
-                )
+                room_state["encryption"] = event_content.get("algorithm")
             elif delta.event_type == EventTypes.Name:
                 room_state["name"] = event_content.get("name")
             elif delta.event_type == EventTypes.Topic:
+11 -17
@@ -1352,7 +1352,7 @@ class SyncHandler:
             await_full_state = True
             lazy_load_members = False

-        state_at_timeline_end = await self._state_storage_controller.get_state_ids_at(
+        state_at_timeline_end = await self._state_storage_controller.get_state_at(
             room_id,
             stream_position=end_token,
             state_filter=state_filter,
@@ -1480,13 +1480,11 @@
             else:
                 # We can get here if the user has ignored the senders of all
                 # the recent events.
-                state_at_timeline_start = (
-                    await self._state_storage_controller.get_state_ids_at(
-                        room_id,
-                        stream_position=end_token,
-                        state_filter=state_filter,
-                        await_full_state=await_full_state,
-                    )
+                state_at_timeline_start = await self._state_storage_controller.get_state_at(
+                    room_id,
+                    stream_position=end_token,
+                    state_filter=state_filter,
+                    await_full_state=await_full_state,
                 )

             if batch.limited:
@@ -1504,14 +1502,14 @@
             # about them).
             state_filter = StateFilter.all()

-        state_at_previous_sync = await self._state_storage_controller.get_state_ids_at(
+        state_at_previous_sync = await self._state_storage_controller.get_state_at(
             room_id,
             stream_position=since_token,
             state_filter=state_filter,
             await_full_state=await_full_state,
         )

-        state_at_timeline_end = await self._state_storage_controller.get_state_ids_at(
+        state_at_timeline_end = await self._state_storage_controller.get_state_at(
             room_id,
             stream_position=end_token,
             state_filter=state_filter,
@@ -2270,11 +2268,7 @@
                 user=user,
                 from_key=presence_key,
                 is_guest=sync_config.is_guest,
-                include_offline=(
-                    True
-                    if self.hs_config.server.presence_include_offline_users_on_sync
-                    else include_offline
-                ),
+                include_offline=include_offline,
             )
             assert presence_key
             sync_result_builder.now_token = now_token.copy_and_replace(
@@ -2514,7 +2508,7 @@
                 continue

             if room_id in sync_result_builder.joined_room_ids or has_join:
-                old_state_ids = await self._state_storage_controller.get_state_ids_at(
+                old_state_ids = await self._state_storage_controller.get_state_at(
                     room_id,
                     since_token,
                     state_filter=StateFilter.from_types([(EventTypes.Member, user_id)]),
@@ -2545,7 +2539,7 @@
             else:
                 if not old_state_ids:
                     old_state_ids = (
-                        await self._state_storage_controller.get_state_ids_at(
+                        await self._state_storage_controller.get_state_at(
                             room_id,
                             since_token,
                             state_filter=StateFilter.from_types(
@@ -565,12 +565,7 @@ class TypingNotificationEventSource(EventSource[int, JsonMapping]):
         room_ids: Iterable[str],
         is_guest: bool,
         explicit_room_id: Optional[str] = None,
-        to_key: Optional[int] = None,
     ) -> Tuple[List[JsonMapping], int]:
-        """
-        Find typing notifications for given rooms (> `from_token` and <= `to_token`)
-        """
-
         with Measure(self.clock, "typing.get_new_events"):
             from_key = int(from_key)
             handler = self.get_typing_handler()
@@ -579,9 +574,7 @@ class TypingNotificationEventSource(EventSource[int, JsonMapping]):
             for room_id in room_ids:
                 if room_id not in handler._room_serials:
                     continue
-                if handler._room_serials[room_id] <= from_key or (
-                    to_key is not None and handler._room_serials[room_id] > to_key
-                ):
+                if handler._room_serials[room_id] <= from_key:
                     continue

                 events.append(self._make_event_for(room_id))
@@ -90,7 +90,7 @@ from synapse.logging.context import make_deferred_yieldable, run_in_background
 from synapse.logging.opentracing import set_tag, start_active_span, tags
 from synapse.types import JsonDict
 from synapse.util import json_decoder
-from synapse.util.async_helpers import AwakenableSleeper, Linearizer, timeout_deferred
+from synapse.util.async_helpers import AwakenableSleeper, timeout_deferred
 from synapse.util.metrics import Measure
 from synapse.util.stringutils import parse_and_validate_server_name

@@ -475,8 +475,6 @@ class MatrixFederationHttpClient:
             use_proxy=True,
         )

-        self.remote_download_linearizer = Linearizer("remote_download_linearizer", 6)
-
     def wake_destination(self, destination: str) -> None:
         """Called when the remote server may have come back online."""

@@ -1488,44 +1486,35 @@ class MatrixFederationHttpClient:
         )

         headers = dict(response.headers.getAllRawHeaders())
-        expected_size = response.length

+        expected_size = response.length
+        # if we don't get an expected length then use the max length
         if expected_size == UNKNOWN_LENGTH:
             expected_size = max_size
-        else:
-            if int(expected_size) > max_size:
-                msg = "Requested file is too large > %r bytes" % (max_size,)
-                logger.warning(
-                    "{%s} [%s] %s",
-                    request.txn_id,
-                    request.destination,
-                    msg,
-                )
-                raise SynapseError(HTTPStatus.BAD_GATEWAY, msg, Codes.TOO_LARGE)
+            logger.debug(
+                f"File size unknown, assuming file is max allowable size: {max_size}"
+            )

-            read_body, _ = await download_ratelimiter.can_do_action(
-                requester=None,
-                key=ip_address,
-                n_actions=expected_size,
-            )
-            if not read_body:
-                msg = "Requested file size exceeds ratelimits"
-                logger.warning(
-                    "{%s} [%s] %s",
-                    request.txn_id,
-                    request.destination,
-                    msg,
-                )
-                raise SynapseError(
-                    HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED
-                )
+        read_body, _ = await download_ratelimiter.can_do_action(
+            requester=None,
+            key=ip_address,
+            n_actions=expected_size,
+        )
+        if not read_body:
+            msg = "Requested file size exceeds ratelimits"
+            logger.warning(
+                "{%s} [%s] %s",
+                request.txn_id,
+                request.destination,
+                msg,
+            )
+            raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)

         try:
-            async with self.remote_download_linearizer.queue(ip_address):
-                # add a byte of headroom to max size as function errs at >=
-                d = read_body_with_max_size(response, output_stream, expected_size + 1)
-                d.addTimeout(self.default_timeout_seconds, self.reactor)
-                length = await make_deferred_yieldable(d)
+            # add a byte of headroom to max size as function errs at >=
+            d = read_body_with_max_size(response, output_stream, expected_size + 1)
+            d.addTimeout(self.default_timeout_seconds, self.reactor)
+            length = await make_deferred_yieldable(d)
         except BodyExceededMaxSize:
             msg = "Requested file is too large > %r bytes" % (expected_size,)
             logger.warning(
@@ -1571,13 +1560,6 @@ class MatrixFederationHttpClient:
             request.method,
             request.uri.decode("ascii"),
         )
-
-        # if we didn't know the length upfront, decrement the actual size from ratelimiter
-        if response.length == UNKNOWN_LENGTH:
-            download_ratelimiter.record_action(
-                requester=None, key=ip_address, n_actions=length
-            )
-
         return length, headers

     async def federation_get_file(
@@ -1648,37 +1630,29 @@ class MatrixFederationHttpClient:
         )

         headers = dict(response.headers.getAllRawHeaders())
-        expected_size = response.length

+        expected_size = response.length
+        # if we don't get an expected length then use the max length
         if expected_size == UNKNOWN_LENGTH:
             expected_size = max_size
-        else:
-            if int(expected_size) > max_size:
-                msg = "Requested file is too large > %r bytes" % (max_size,)
-                logger.warning(
-                    "{%s} [%s] %s",
-                    request.txn_id,
-                    request.destination,
-                    msg,
-                )
-                raise SynapseError(HTTPStatus.BAD_GATEWAY, msg, Codes.TOO_LARGE)
+            logger.debug(
+                f"File size unknown, assuming file is max allowable size: {max_size}"
+            )

-            read_body, _ = await download_ratelimiter.can_do_action(
-                requester=None,
-                key=ip_address,
-                n_actions=expected_size,
-            )
-            if not read_body:
-                msg = "Requested file size exceeds ratelimits"
-                logger.warning(
-                    "{%s} [%s] %s",
-                    request.txn_id,
-                    request.destination,
-                    msg,
-                )
-                raise SynapseError(
-                    HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED
-                )
+        read_body, _ = await download_ratelimiter.can_do_action(
+            requester=None,
+            key=ip_address,
+            n_actions=expected_size,
+        )
+        if not read_body:
+            msg = "Requested file size exceeds ratelimits"
+            logger.warning(
+                "{%s} [%s] %s",
+                request.txn_id,
+                request.destination,
+                msg,
+            )
+            raise SynapseError(HTTPStatus.TOO_MANY_REQUESTS, msg, Codes.LIMIT_EXCEEDED)

         # this should be a multipart/mixed response with the boundary string in the header
         try:
@@ -1698,12 +1672,11 @@ class MatrixFederationHttpClient:
             raise SynapseError(HTTPStatus.BAD_GATEWAY, msg)

         try:
-            async with self.remote_download_linearizer.queue(ip_address):
-                # add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
-                deferred = read_multipart_response(
-                    response, output_stream, boundary, expected_size + 1
-                )
-                deferred.addTimeout(self.default_timeout_seconds, self.reactor)
+            # add a byte of headroom to max size as `_MultipartParserProtocol.dataReceived` errs at >=
+            deferred = read_multipart_response(
+                response, output_stream, boundary, expected_size + 1
+            )
+            deferred.addTimeout(self.default_timeout_seconds, self.reactor)
         except BodyExceededMaxSize:
             msg = "Requested file is too large > %r bytes" % (expected_size,)
             logger.warning(
@@ -1770,13 +1743,6 @@ class MatrixFederationHttpClient:
             request.method,
             request.uri.decode("ascii"),
         )
-
-        # if we didn't know the length upfront, decrement the actual size from ratelimiter
-        if response.length == UNKNOWN_LENGTH:
-            download_ratelimiter.record_action(
-                requester=None, key=ip_address, n_actions=length
-            )
-
         return length, headers, multipart_response.json

+2 -10
@@ -62,15 +62,6 @@ HOP_BY_HOP_HEADERS = {
     "Upgrade",
 }

-if hasattr(Headers, "_canonicalNameCaps"):
-    # Twisted < 24.7.0rc1
-    _canonicalHeaderName = Headers()._canonicalNameCaps  # type: ignore[attr-defined]
-else:
-    # Twisted >= 24.7.0rc1
-    # But note that `_encodeName` still exists on prior versions,
-    # it just encodes differently
-    _canonicalHeaderName = Headers()._encodeName
-

 def parse_connection_header_value(
     connection_header_value: Optional[bytes],
@@ -94,10 +85,11 @@ def parse_connection_header_value(
         The set of header names that should not be copied over from the remote response.
         The keys are capitalized in canonical capitalization.
     """
+    headers = Headers()
    extra_headers_to_remove: Set[str] = set()
    if connection_header_value:
        extra_headers_to_remove = {
-            _canonicalHeaderName(connection_option.strip()).decode("ascii")
+            headers._canonicalNameCaps(connection_option.strip()).decode("ascii")
            for connection_option in connection_header_value.split(b",")
        }

@@ -74,7 +74,6 @@ from synapse.api.errors import (
 from synapse.config.homeserver import HomeServerConfig
 from synapse.logging.context import defer_to_thread, preserve_fn, run_in_background
 from synapse.logging.opentracing import active_span, start_active_span, trace_servlet
-from synapse.types import ISynapseReactor
 from synapse.util import json_encoder
 from synapse.util.caches import intern_dict
 from synapse.util.cancellation import is_function_cancellable
@@ -869,8 +868,7 @@ async def _async_write_json_to_request_in_thread(

     with start_active_span("encode_json_response"):
         span = active_span()
-        reactor: ISynapseReactor = request.reactor  # type: ignore
-        json_str = await defer_to_thread(reactor, encode, span)
+        json_str = await defer_to_thread(request.reactor, encode, span)

     _write_bytes_to_request(request, json_str)

@@ -683,7 +683,7 @@ class SynapseSite(ProxySite):
         self.access_logger = logging.getLogger(logger_name)
         self.server_version_string = server_version_string.encode("ascii")

-    def log(self, request: SynapseRequest) -> None:  # type: ignore[override]
+    def log(self, request: SynapseRequest) -> None:
         pass

@@ -430,7 +430,6 @@ class MediaRepository:
         media_id: str,
         name: Optional[str],
         max_timeout_ms: int,
-        allow_authenticated: bool = True,
         federation: bool = False,
     ) -> None:
         """Responds to requests for local media, if exists, or returns 404.
@@ -443,7 +442,6 @@ class MediaRepository:
                 the filename in the Content-Disposition header of the response.
             max_timeout_ms: the maximum number of milliseconds to wait for the
                 media to be uploaded.
-            allow_authenticated: whether media marked as authenticated may be served to this request
             federation: whether the local media being fetched is for a federation request

         Returns:
@@ -453,10 +451,6 @@ class MediaRepository:
         if not media_info:
             return

-        if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
-            if media_info.authenticated:
-                raise NotFoundError()
-
         self.mark_recently_accessed(None, media_id)

         media_type = media_info.media_type
@@ -487,7 +481,6 @@
         max_timeout_ms: int,
         ip_address: str,
         use_federation_endpoint: bool,
-        allow_authenticated: bool = True,
     ) -> None:
         """Respond to requests for remote media.

@@ -502,8 +495,6 @@
             ip_address: the IP address of the requester
             use_federation_endpoint: whether to request the remote media over the new
                 federation `/download` endpoint
-            allow_authenticated: whether media marked as authenticated may be served to this
-                request

         Returns:
             Resolves once a response has successfully been written to request
@@ -535,7 +526,6 @@
                 self.download_ratelimiter,
                 ip_address,
                 use_federation_endpoint,
-                allow_authenticated,
             )

             # We deliberately stream the file outside the lock
@@ -552,13 +542,7 @@
             respond_404(request)

     async def get_remote_media_info(
-        self,
-        server_name: str,
-        media_id: str,
-        max_timeout_ms: int,
-        ip_address: str,
-        use_federation: bool,
-        allow_authenticated: bool,
+        self, server_name: str, media_id: str, max_timeout_ms: int, ip_address: str
     ) -> RemoteMedia:
         """Gets the media info associated with the remote file, downloading
         if necessary.
@@ -569,10 +553,6 @@
             max_timeout_ms: the maximum number of milliseconds to wait for the
                 media to be uploaded.
             ip_address: IP address of the requester
-            use_federation: if a download is necessary, whether to request the remote file
-                over the federation `/download` endpoint
-            allow_authenticated: whether media marked as authenticated may be served to this
-                request

         Returns:
             The media info of the file
@@ -593,8 +573,7 @@
             max_timeout_ms,
             self.download_ratelimiter,
             ip_address,
-            use_federation,
-            allow_authenticated,
+            False,
         )

         # Ensure we actually use the responder so that it releases resources
@@ -612,7 +591,6 @@
         download_ratelimiter: Ratelimiter,
         ip_address: str,
         use_federation_endpoint: bool,
-        allow_authenticated: bool,
     ) -> Tuple[Optional[Responder], RemoteMedia]:
         """Looks for media in local cache, if not there then attempt to
         download from remote server.
@@ -634,11 +612,6 @@
         """
         media_info = await self.store.get_cached_remote_media(server_name, media_id)

-        if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
-            # if it isn't cached then don't fetch it or if it's authenticated then don't serve it
-            if not media_info or media_info.authenticated:
-                raise NotFoundError()
-
         # file_id is the ID we use to track the file locally. If we've already
         # seen the file then reuse the existing ID, otherwise generate a new
         # one.
@@ -812,11 +785,6 @@

         logger.info("Stored remote media in file %r", fname)

-        if self.hs.config.media.enable_authenticated_media:
-            authenticated = True
-        else:
-            authenticated = False
-
         return RemoteMedia(
             media_origin=server_name,
             media_id=media_id,
@@ -827,7 +795,6 @@
             filesystem_id=file_id,
             last_access_ts=time_now_ms,
             quarantined_by=None,
-            authenticated=authenticated,
         )

     async def _federation_download_remote_file(
@@ -941,11 +908,6 @@

         logger.debug("Stored remote media in file %r", fname)

-        if self.hs.config.media.enable_authenticated_media:
-            authenticated = True
-        else:
-            authenticated = False
-
         return RemoteMedia(
             media_origin=server_name,
             media_id=media_id,
@@ -956,7 +918,6 @@
             filesystem_id=file_id,
             last_access_ts=time_now_ms,
             quarantined_by=None,
-            authenticated=authenticated,
         )

     def _get_thumbnail_requirements(
@@ -1062,12 +1023,7 @@
             t_len = os.path.getsize(output_path)

             await self.store.store_local_thumbnail(
-                media_id,
-                t_width,
-                t_height,
-                t_type,
-                t_method,
-                t_len,
+                media_id, t_width, t_height, t_type, t_method, t_len
             )

             return output_path
+23 -101
@@ -26,7 +26,7 @@ from typing import TYPE_CHECKING, List, Optional, Tuple, Type

 from PIL import Image

-from synapse.api.errors import Codes, NotFoundError, SynapseError, cs_error
+from synapse.api.errors import Codes, SynapseError, cs_error
 from synapse.config.repository import THUMBNAIL_SUPPORTED_MEDIA_FORMAT_MAP
 from synapse.http.server import respond_with_json
 from synapse.http.site import SynapseRequest
@@ -36,11 +36,9 @@ from synapse.media._base import (
     ThumbnailInfo,
     respond_404,
     respond_with_file,
-    respond_with_multipart_responder,
     respond_with_responder,
 )
-from synapse.media.media_storage import FileResponder, MediaStorage
-from synapse.storage.databases.main.media_repository import LocalMedia
+from synapse.media.media_storage import MediaStorage

 if TYPE_CHECKING:
     from synapse.media.media_repository import MediaRepository
@@ -273,8 +271,6 @@ class ThumbnailProvider:
         method: str,
         m_type: str,
         max_timeout_ms: int,
-        for_federation: bool,
-        allow_authenticated: bool = True,
     ) -> None:
         media_info = await self.media_repo.get_local_media_info(
             request, media_id, max_timeout_ms
@@ -282,12 +278,6 @@
         if not media_info:
             return

-        # if the media the thumbnail is generated from is authenticated, don't serve the
-        # thumbnail over an unauthenticated endpoint
-        if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
-            if media_info.authenticated:
-                raise NotFoundError()
-
         thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
         await self._select_and_respond_with_thumbnail(
             request,
@@ -300,8 +290,6 @@
             media_id,
             url_cache=bool(media_info.url_cache),
             server_name=None,
-            for_federation=for_federation,
-            media_info=media_info,
         )

     async def select_or_generate_local_thumbnail(
@@ -313,21 +301,14 @@
         desired_method: str,
         desired_type: str,
         max_timeout_ms: int,
-        for_federation: bool,
-        allow_authenticated: bool = True,
     ) -> None:
         media_info = await self.media_repo.get_local_media_info(
             request, media_id, max_timeout_ms
         )
+
         if not media_info:
             return

-        # if the media the thumbnail is generated from is authenticated, don't serve the
-        # thumbnail over an unauthenticated endpoint
-        if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
-            if media_info.authenticated:
-                raise NotFoundError()
-
         thumbnail_infos = await self.store.get_local_media_thumbnails(media_id)
         for info in thumbnail_infos:
             t_w = info.width == desired_width
@@ -345,16 +326,10 @@

                 responder = await self.media_storage.fetch_media(file_info)
                 if responder:
-                    if for_federation:
-                        await respond_with_multipart_responder(
-                            self.hs.get_clock(), request, responder, media_info
-                        )
-                        return
-                    else:
-                        await respond_with_responder(
-                            request, responder, info.type, info.length
-                        )
-                        return
+                    await respond_with_responder(
+                        request, responder, info.type, info.length
+                    )
+                    return

         logger.debug("We don't have a thumbnail of that size. Generating")

@@ -369,15 +344,7 @@
         )

         if file_path:
-            if for_federation:
-                await respond_with_multipart_responder(
-                    self.hs.get_clock(),
-                    request,
-                    FileResponder(open(file_path, "rb")),
-                    media_info,
-                )
-            else:
-                await respond_with_file(request, desired_type, file_path)
+            await respond_with_file(request, desired_type, file_path)
         else:
             logger.warning("Failed to generate thumbnail")
             raise SynapseError(400, "Failed to generate thumbnail.")
@@ -393,28 +360,14 @@
         desired_type: str,
         max_timeout_ms: int,
         ip_address: str,
-        use_federation: bool,
-        allow_authenticated: bool = True,
     ) -> None:
         media_info = await self.media_repo.get_remote_media_info(
-            server_name,
-            media_id,
-            max_timeout_ms,
-            ip_address,
-            use_federation,
-            allow_authenticated,
+            server_name, media_id, max_timeout_ms, ip_address
         )
         if not media_info:
             respond_404(request)
             return

-        # if the media the thumbnail is generated from is authenticated, don't serve the
-        # thumbnail over an unauthenticated endpoint
-        if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
-            if media_info.authenticated:
-                respond_404(request)
-                return
-
         thumbnail_infos = await self.store.get_remote_media_thumbnails(
             server_name, media_id
         )
@@ -471,29 +424,16 @@
         m_type: str,
         max_timeout_ms: int,
         ip_address: str,
-        use_federation: bool,
-        allow_authenticated: bool = True,
     ) -> None:
         # TODO: Don't download the whole remote file
         # We should proxy the thumbnail from the remote server instead of
         # downloading the remote file and generating our own thumbnails.
         media_info = await self.media_repo.get_remote_media_info(
-            server_name,
-            media_id,
-            max_timeout_ms,
-            ip_address,
-            use_federation,
-            allow_authenticated,
+            server_name, media_id, max_timeout_ms, ip_address
         )
         if not media_info:
             return

-        # if the media the thumbnail is generated from is authenticated, don't serve the
-        # thumbnail over an unauthenticated endpoint
-        if self.hs.config.media.enable_authenticated_media and not allow_authenticated:
-            if media_info.authenticated:
-                raise NotFoundError()
-
         thumbnail_infos = await self.store.get_remote_media_thumbnails(
             server_name, media_id
         )
@@ -508,7 +448,6 @@
             media_info.filesystem_id,
             url_cache=False,
             server_name=server_name,
-            for_federation=False,
         )

     async def _select_and_respond_with_thumbnail(
@@ -522,8 +461,6 @@
         media_id: str,
         file_id: str,
         url_cache: bool,
-        for_federation: bool,
-        media_info: Optional[LocalMedia] = None,
         server_name: Optional[str] = None,
     ) -> None:
         """
@@ -539,8 +476,6 @@
             file_id: The ID of the media that a thumbnail is being requested for.
             url_cache: True if this is from a URL cache.
             server_name: The server name, if this is a remote thumbnail.
-            for_federation: whether the request is from the federation /thumbnail request
-            media_info: metadata about the media being requested.
         """
         logger.debug(
             "_select_and_respond_with_thumbnail: media_id=%s desired=%sx%s (%s) thumbnail_infos=%s",
@@ -576,20 +511,13 @@

         responder = await self.media_storage.fetch_media(file_info)
         if responder:
-            if for_federation:
-                assert media_info is not None
-                await respond_with_multipart_responder(
-                    self.hs.get_clock(), request, responder, media_info
-                )
-                return
-            else:
-                await respond_with_responder(
-                    request,
-                    responder,
-                    file_info.thumbnail.type,
-                    file_info.thumbnail.length,
-                )
-                return
+            await respond_with_responder(
+                request,
+                responder,
+                file_info.thumbnail.type,
+                file_info.thumbnail.length,
+            )
+            return

         # If we can't find the thumbnail we regenerate it. This can happen
         # if e.g. we've deleted the thumbnails but still have the original
@@ -630,18 +558,12 @@
                 )

             responder = await self.media_storage.fetch_media(file_info)
-            if for_federation:
-                assert media_info is not None
-                await respond_with_multipart_responder(
-                    self.hs.get_clock(), request, responder, media_info
-                )
-            else:
-                await respond_with_responder(
-                    request,
-                    responder,
-                    file_info.thumbnail.type,
-                    file_info.thumbnail.length,
-                )
+            await respond_with_responder(
+                request,
+                responder,
+                file_info.thumbnail.type,
+                file_info.thumbnail.length,
+            )
         else:
             # This might be because:
             # 1. We can't create thumbnails for the given media (corrupted or
+5 -8
@@ -773,7 +773,6 @@
         stream_token = await self.event_sources.bound_future_token(stream_token)

         start = self.clock.time_msec()
-        logged = False
         while True:
             current_token = self.event_sources.get_current_token()
             if stream_token.is_before_or_eq(current_token):
@@ -784,13 +783,11 @@
             if now - start > 10_000:
                 return False

-            if not logged:
-                logger.info(
-                    "Waiting for current token to reach %s; currently at %s",
-                    stream_token,
-                    current_token,
-                )
-                logged = True
+            logger.info(
+                "Waiting for current token to reach %s; currently at %s",
+                stream_token,
+                current_token,
+            )

             # TODO: be better
             await self.clock.sleep(0.5)
@@ -145,7 +145,7 @@ class ClientRestResource(JsonResource):
         password_policy.register_servlets(hs, client_resource)
         knock.register_servlets(hs, client_resource)
         appservice_ping.register_servlets(hs, client_resource)
-        if hs.config.media.can_load_media_repo:
+        if hs.config.server.enable_media_repo:
             from synapse.rest.client import media

             media.register_servlets(hs, client_resource)
@@ -31,9 +31,7 @@ from synapse.rest.admin import admin_patterns, assert_requester_is_admin
 from synapse.types import JsonDict, UserID

 if TYPE_CHECKING:
-    from typing_extensions import assert_never
-
-    from synapse.server import HomeServer, HomeServerConfig
+    from synapse.server import HomeServer


 class ExperimentalFeature(str, Enum):
@@ -41,16 +39,8 @@ class ExperimentalFeature(str, Enum):
     Currently supported per-user features
     """

+    MSC3026 = "msc3026"
     MSC3881 = "msc3881"
-    MSC3575 = "msc3575"
-
-    def is_globally_enabled(self, config: "HomeServerConfig") -> bool:
-        if self is ExperimentalFeature.MSC3881:
-            return config.experimental.msc3881_enabled
-        if self is ExperimentalFeature.MSC3575:
-            return config.experimental.msc3575_enabled
-
-        assert_never(self)


 class ExperimentalFeaturesRestServlet(RestServlet):
@@ -256,15 +256,9 @@ class KeyChangesServlet(RestServlet):

         user_id = requester.user.to_string()

-        device_list_updates = await self.device_handler.get_user_ids_changed(
-            user_id, from_token
-        )
+        results = await self.device_handler.get_user_ids_changed(user_id, from_token)

-        response: JsonDict = {}
-        response["changed"] = list(device_list_updates.changed)
-        response["left"] = list(device_list_updates.left)
-
-        return 200, response
+        return 200, results


 class OneTimeKeyServlet(RestServlet):
@@ -47,7 +47,7 @@ from synapse.util.stringutils import parse_and_validate_server_name
 logger = logging.getLogger(__name__)


-class PreviewURLServlet(RestServlet):
+class UnstablePreviewURLServlet(RestServlet):
     """
     Same as `GET /_matrix/media/r0/preview_url`, this endpoint provides a generic preview API
     for URLs which outputs Open Graph (https://ogp.me/) responses (with some Matrix
@@ -65,7 +65,9 @@ class PreviewURLServlet(RestServlet):
     * Matrix cannot be used to distribute the metadata between homeservers.
     """

-    PATTERNS = [re.compile(r"^/_matrix/client/v1/media/preview_url$")]
+    PATTERNS = [
+        re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/preview_url$")
+    ]

     def __init__(
         self,
@@ -93,8 +95,10 @@ class PreviewURLServlet(RestServlet):
         respond_with_json_bytes(request, 200, og, send_cors=True)


-class MediaConfigResource(RestServlet):
-    PATTERNS = [re.compile(r"^/_matrix/client/v1/media/config$")]
+class UnstableMediaConfigResource(RestServlet):
+    PATTERNS = [
+        re.compile(r"^/_matrix/client/unstable/org.matrix.msc3916/media/config$")
+    ]

     def __init__(self, hs: "HomeServer"):
         super().__init__()
@@ -108,10 +112,10 @@ class MediaConfigResource(RestServlet):
         respond_with_json(request, 200, self.limits_dict, send_cors=True)


-class ThumbnailResource(RestServlet):
+class UnstableThumbnailResource(RestServlet):
     PATTERNS = [
         re.compile(
-            "/_matrix/client/v1/media/thumbnail/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
+            "/_matrix/client/unstable/org.matrix.msc3916/media/thumbnail/(?P<server_name>[^/]*)/(?P<media_id>[^/]*)$"
         )
     ]

@@ -155,25 +159,11 @@ class ThumbnailResource(RestServlet):
         if self._is_mine_server_name(server_name):
             if self.dynamic_thumbnails:
                 await self.thumbnailer.select_or_generate_local_thumbnail(
-                    request,
-                    media_id,
-                    width,
-                    height,
-                    method,
-                    m_type,
-                    max_timeout_ms,
-                    False,
+                    request, media_id, width, height, method, m_type, max_timeout_ms
                 )
             else:
                 await self.thumbnailer.respond_local_thumbnail(
-                    request,
-                    media_id,
-                    width,
-                    height,
-                    method,
-                    m_type,
-                    max_timeout_ms,
-                    False,
+                    request, media_id, width, height, method, m_type, max_timeout_ms
                 )
             self.media_repo.mark_recently_accessed(None, media_id)
         else:
@@ -201,7 +191,6 @@ class ThumbnailResource(RestServlet):
                 m_type,
                 max_timeout_ms,
                 ip_address,
-                True,
             )
             self.media_repo.mark_recently_accessed(server_name, media_id)

@@ -271,9 +260,11 @@ class DownloadResource(RestServlet):
 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     media_repo = hs.get_media_repository()
     if hs.config.media.url_preview_enabled:
-        PreviewURLServlet(hs, media_repo, media_repo.media_storage).register(
+        UnstablePreviewURLServlet(hs, media_repo, media_repo.media_storage).register(
             http_server
         )
-    MediaConfigResource(hs).register(http_server)
-    ThumbnailResource(hs, media_repo, media_repo.media_storage).register(http_server)
+    UnstableMediaConfigResource(hs).register(http_server)
+    UnstableThumbnailResource(hs, media_repo, media_repo.media_storage).register(
+        http_server
+    )
     DownloadResource(hs, media_repo).register(http_server)
@@ -32,7 +32,6 @@ from synapse.http.servlet import (
 )
 from synapse.http.site import SynapseRequest
 from synapse.push import PusherConfigException
-from synapse.rest.admin.experimental_features import ExperimentalFeature
 from synapse.rest.client._base import client_patterns
 from synapse.rest.synapse.client.unsubscribe import UnsubscribeResource
 from synapse.types import JsonDict
@@ -50,22 +49,20 @@ class PushersRestServlet(RestServlet):
         super().__init__()
         self.hs = hs
         self.auth = hs.get_auth()
-        self._store = hs.get_datastores().main
+        self._msc3881_enabled = self.hs.config.experimental.msc3881_enabled

     async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         requester = await self.auth.get_user_by_req(request)
-        user_id = requester.user.to_string()
+        user = requester.user

-        msc3881_enabled = await self._store.is_feature_enabled(
-            user_id, ExperimentalFeature.MSC3881
+        pushers = await self.hs.get_datastores().main.get_pushers_by_user_id(
+            user.to_string()
         )

-        pushers = await self.hs.get_datastores().main.get_pushers_by_user_id(user_id)
-
         pusher_dicts = [p.as_dict() for p in pushers]

         for pusher in pusher_dicts:
-            if msc3881_enabled:
+            if self._msc3881_enabled:
                 pusher["org.matrix.msc3881.enabled"] = pusher["enabled"]
                 pusher["org.matrix.msc3881.device_id"] = pusher["device_id"]
                 del pusher["enabled"]
@@ -83,15 +80,11 @@ class PushersSetRestServlet(RestServlet):
         self.auth = hs.get_auth()
         self.notifier = hs.get_notifier()
         self.pusher_pool = self.hs.get_pusherpool()
-        self._store = hs.get_datastores().main
+        self._msc3881_enabled = self.hs.config.experimental.msc3881_enabled

     async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
         requester = await self.auth.get_user_by_req(request)
-        user_id = requester.user.to_string()
+        user = requester.user

-        msc3881_enabled = await self._store.is_feature_enabled(
-            user_id, ExperimentalFeature.MSC3881
-        )
-
         content = parse_json_object_from_request(request)

@@ -102,7 +95,7 @@ class PushersSetRestServlet(RestServlet):
             and content["kind"] is None
         ):
             await self.pusher_pool.remove_pusher(
-                content["app_id"], content["pushkey"], user_id=user_id
+                content["app_id"], content["pushkey"], user_id=user.to_string()
             )
             return 200, {}

@@ -127,19 +120,19 @@ class PushersSetRestServlet(RestServlet):
         append = content["append"]

         enabled = True
-        if msc3881_enabled and "org.matrix.msc3881.enabled" in content:
+        if self._msc3881_enabled and "org.matrix.msc3881.enabled" in content:
             enabled = content["org.matrix.msc3881.enabled"]

         if not append:
             await self.pusher_pool.remove_pushers_by_app_id_and_pushkey_not_user(
                 app_id=content["app_id"],
                 pushkey=content["pushkey"],
-                not_user_id=user_id,
+                not_user_id=user.to_string(),
             )

         try:
             await self.pusher_pool.add_or_update_pusher(
-                user_id=user_id,
+                user_id=user.to_string(),
                 kind=content["kind"],
                 app_id=content["app_id"],
                 app_display_name=content["app_display_name"],
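The removed lines gate MSC3881 per user, asking the datastore whether the feature is enabled for the requester, while the added lines fall back to a single server-wide config flag. A minimal sketch of the per-user pattern (illustrative only; the enum value and storage layout are assumptions, not Synapse's schema):

```python
# Sketch: per-user experimental feature gating, in contrast to one global flag.
from enum import Enum
from typing import Dict, Set


class ExperimentalFeature(str, Enum):
    MSC3881 = "msc3881"  # remotely toggling pushers


class FeatureStore:
    """Tracks which users have which experimental features enabled."""

    def __init__(self) -> None:
        self._enabled: Dict[str, Set[ExperimentalFeature]] = {}

    def set_feature(
        self, user_id: str, feature: ExperimentalFeature, enabled: bool
    ) -> None:
        features = self._enabled.setdefault(user_id, set())
        if enabled:
            features.add(feature)
        else:
            features.discard(feature)

    def is_feature_enabled(self, user_id: str, feature: ExperimentalFeature) -> bool:
        return feature in self._enabled.get(user_id, set())


store = FeatureStore()
store.set_feature("@alice:example.org", ExperimentalFeature.MSC3881, True)
assert store.is_feature_enabled("@alice:example.org", ExperimentalFeature.MSC3881)
assert not store.is_feature_enabled("@bob:example.org", ExperimentalFeature.MSC3881)
```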
+24 -138
@@ -52,9 +52,8 @@ from synapse.http.servlet import (
     parse_string,
 )
 from synapse.http.site import SynapseRequest
-from synapse.logging.opentracing import log_kv, set_tag, trace_with_opname
-from synapse.rest.admin.experimental_features import ExperimentalFeature
-from synapse.types import JsonDict, Requester, SlidingSyncStreamToken, StreamToken
+from synapse.logging.opentracing import trace_with_opname
+from synapse.types import JsonDict, Requester, StreamToken
 from synapse.types.rest.client import SlidingSyncBody
 from synapse.util import json_decoder
 from synapse.util.caches.lrucache import LruCache
@@ -674,9 +673,7 @@ class SlidingSyncE2eeRestServlet(RestServlet):
         )

     async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        requester = await self.auth.get_user_by_req_experimental_feature(
-            request, allow_guest=True, feature=ExperimentalFeature.MSC3575
-        )
+        requester = await self.auth.get_user_by_req(request, allow_guest=True)
         user = requester.user
         device_id = requester.device_id

@@ -876,11 +873,9 @@ class SlidingSyncRestServlet(RestServlet):
         self.event_serializer = hs.get_event_client_serializer()

     async def on_POST(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        requester = await self.auth.get_user_by_req_experimental_feature(
-            request, allow_guest=True, feature=ExperimentalFeature.MSC3575
-        )
+        requester = await self.auth.get_user_by_req(request, allow_guest=True)

         user = requester.user
+        device_id = requester.device_id

         timeout = parse_integer(request, "timeout", default=0)
         # Position in the stream
@@ -888,41 +883,22 @@ class SlidingSyncRestServlet(RestServlet):

         from_token = None
         if from_token_string is not None:
-            from_token = await SlidingSyncStreamToken.from_string(
-                self.store, from_token_string
-            )
+            from_token = await StreamToken.from_string(self.store, from_token_string)

         # TODO: We currently don't know whether we're going to use sticky params or
         # maybe some filters like sync v2 where they are built up once and referenced
         # by filter ID. For now, we will just prototype with always passing everything
         # in.
         body = parse_and_validate_json_object_from_request(request, SlidingSyncBody)
+        logger.info("Sliding sync request: %r", body)

-        # Tag and log useful data to differentiate requests.
-        set_tag("sliding_sync.conn_id", body.conn_id or "")
-        log_kv(
-            {
-                "sliding_sync.lists": {
-                    list_name: {
-                        "ranges": list_config.ranges,
-                        "timeline_limit": list_config.timeline_limit,
-                    }
-                    for list_name, list_config in (body.lists or {}).items()
-                },
-                "sliding_sync.room_subscriptions": list(
-                    (body.room_subscriptions or {}).keys()
-                ),
-            }
-        )
-
         sync_config = SlidingSyncConfig(
             user=user,
-            requester=requester,
+            device_id=device_id,
             # FIXME: Currently, we're just manually copying the fields from the
-            # `SlidingSyncBody` into the config. How can we guarantee into the future
+            # `SlidingSyncBody` into the config. How can we gurantee into the future
             # that we don't forget any? I would like something more structured like
             # `copy_attributes(from=body, to=config)`
-            conn_id=body.conn_id,
             lists=body.lists,
             room_subscriptions=body.room_subscriptions,
             extensions=body.extensions,
@@ -945,6 +921,7 @@ class SlidingSyncRestServlet(RestServlet):

         return 200, response_content

+    # TODO: Is there a better way to encode things?
     async def encode_response(
         self,
         requester: Requester,
@@ -959,9 +936,7 @@ class SlidingSyncRestServlet(RestServlet):
         response["rooms"] = await self.encode_rooms(
             requester, sliding_sync_result.rooms
         )
-        response["extensions"] = await self.encode_extensions(
-            requester, sliding_sync_result.extensions
-        )
+        response["extensions"] = {}  # TODO: sliding_sync_result.extensions

         return response

@@ -1001,7 +976,6 @@ class SlidingSyncRestServlet(RestServlet):
         serialized_rooms: Dict[str, JsonDict] = {}
         for room_id, room_result in rooms.items():
             serialized_rooms[room_id] = {
-                "bump_stamp": room_result.bump_stamp,
                 "joined_count": room_result.joined_count,
                 "invited_count": room_result.invited_count,
                 "notification_count": room_result.notification_count,
@@ -1014,32 +988,16 @@ class SlidingSyncRestServlet(RestServlet):
             if room_result.avatar:
                 serialized_rooms[room_id]["avatar"] = room_result.avatar

-            if room_result.heroes is not None and len(room_result.heroes) > 0:
-                serialized_heroes = []
-                for hero in room_result.heroes:
-                    serialized_hero = {
-                        "user_id": hero.user_id,
-                    }
-                    if hero.display_name is not None:
-                        # Not a typo, just how "displayname" is spelled in the spec
-                        serialized_hero["displayname"] = hero.display_name
-
-                    if hero.avatar_url is not None:
-                        serialized_hero["avatar_url"] = hero.avatar_url
-
-                    serialized_heroes.append(serialized_hero)
-                serialized_rooms[room_id]["heroes"] = serialized_heroes
+            if room_result.heroes:
+                serialized_rooms[room_id]["heroes"] = room_result.heroes

             # We should only include the `initial` key if it's `True` to save bandwidth.
             # The absense of this flag means `False`.
             if room_result.initial:
                 serialized_rooms[room_id]["initial"] = room_result.initial

-            # This will be omitted for invite/knock rooms with `stripped_state`
-            if (
-                room_result.required_state is not None
-                and len(room_result.required_state) > 0
-            ):
+            # This will omitted for invite/knock rooms with `stripped_state`
+            if room_result.required_state is not None:
                 serialized_required_state = (
                     await self.event_serializer.serialize_events(
                         room_result.required_state,
@@ -1049,11 +1007,8 @@ class SlidingSyncRestServlet(RestServlet):
                 )
                 serialized_rooms[room_id]["required_state"] = serialized_required_state

-            # This will be omitted for invite/knock rooms with `stripped_state`
-            if (
-                room_result.timeline_events is not None
-                and len(room_result.timeline_events) > 0
-            ):
+            # This will omitted for invite/knock rooms with `stripped_state`
+            if room_result.timeline_events is not None:
                 serialized_timeline = await self.event_serializer.serialize_events(
                     room_result.timeline_events,
                     time_now,
@@ -1062,17 +1017,17 @@ class SlidingSyncRestServlet(RestServlet):
                 )
                 serialized_rooms[room_id]["timeline"] = serialized_timeline

-            # This will be omitted for invite/knock rooms with `stripped_state`
+            # This will omitted for invite/knock rooms with `stripped_state`
             if room_result.limited is not None:
                 serialized_rooms[room_id]["limited"] = room_result.limited

-            # This will be omitted for invite/knock rooms with `stripped_state`
+            # This will omitted for invite/knock rooms with `stripped_state`
             if room_result.prev_batch is not None:
                 serialized_rooms[room_id]["prev_batch"] = (
                     await room_result.prev_batch.to_string(self.store)
                 )

-            # This will be omitted for invite/knock rooms with `stripped_state`
+            # This will omitted for invite/knock rooms with `stripped_state`
             if room_result.num_live is not None:
                 serialized_rooms[room_id]["num_live"] = room_result.num_live

@@ -1081,10 +1036,7 @@ class SlidingSyncRestServlet(RestServlet):
             serialized_rooms[room_id]["is_dm"] = room_result.is_dm

             # Stripped state only applies to invite/knock rooms
-            if (
-                room_result.stripped_state is not None
-                and len(room_result.stripped_state) > 0
-            ):
+            if room_result.stripped_state is not None:
                 # TODO: `knocked_state` but that isn't specced yet.
                 #
                 # TODO: Instead of adding `knocked_state`, it would be good to rename
@@ -1095,76 +1047,10 @@ class SlidingSyncRestServlet(RestServlet):

         return serialized_rooms

-    async def encode_extensions(
-        self, requester: Requester, extensions: SlidingSyncResult.Extensions
-    ) -> JsonDict:
-        serialized_extensions: JsonDict = {}
-
-        if extensions.to_device is not None:
-            serialized_extensions["to_device"] = {
-                "next_batch": extensions.to_device.next_batch,
-                "events": extensions.to_device.events,
-            }
-
-        if extensions.e2ee is not None:
-            serialized_extensions["e2ee"] = {
-                # We always include this because
-                # https://github.com/vector-im/element-android/issues/3725. The spec
-                # isn't terribly clear on when this can be omitted and how a client
-                # would tell the difference between "no keys present" and "nothing
-                # changed" in terms of whole field absent / individual key type entry
-                # absent Corresponding synapse issue:
-                # https://github.com/matrix-org/synapse/issues/10456
-                "device_one_time_keys_count": extensions.e2ee.device_one_time_keys_count,
-                # https://github.com/matrix-org/matrix-doc/blob/54255851f642f84a4f1aaf7bc063eebe3d76752b/proposals/2732-olm-fallback-keys.md
-                # states that this field should always be included, as long as the
-                # server supports the feature.
-                "device_unused_fallback_key_types": extensions.e2ee.device_unused_fallback_key_types,
-            }
-
-            if extensions.e2ee.device_list_updates is not None:
-                serialized_extensions["e2ee"]["device_lists"] = {}
-
-                serialized_extensions["e2ee"]["device_lists"]["changed"] = list(
-                    extensions.e2ee.device_list_updates.changed
-                )
-                serialized_extensions["e2ee"]["device_lists"]["left"] = list(
-                    extensions.e2ee.device_list_updates.left
-                )
-
-        if extensions.account_data is not None:
-            serialized_extensions["account_data"] = {
-                # Same as the the top-level `account_data.events` field in Sync v2.
-                "global": [
-                    {"type": account_data_type, "content": content}
-                    for account_data_type, content in extensions.account_data.global_account_data_map.items()
-                ],
-                # Same as the joined room's account_data field in Sync v2, e.g the path
-                # `rooms.join["!foo:bar"].account_data.events`.
-                "rooms": {
-                    room_id: [
-                        {"type": account_data_type, "content": content}
-                        for account_data_type, content in event_map.items()
-                    ]
-                    for room_id, event_map in extensions.account_data.account_data_by_room_map.items()
-                },
-            }
-
-        if extensions.receipts is not None:
-            serialized_extensions["receipts"] = {
-                "rooms": extensions.receipts.room_id_to_receipt_map,
-            }
-
-        if extensions.typing is not None:
-            serialized_extensions["typing"] = {
-                "rooms": extensions.typing.room_id_to_typing_map,
-            }
-
-        return serialized_extensions
-

 def register_servlets(hs: "HomeServer", http_server: HttpServer) -> None:
     SyncRestServlet(hs).register(http_server)

-    SlidingSyncRestServlet(hs).register(http_server)
-    SlidingSyncE2eeRestServlet(hs).register(http_server)
+    if hs.config.experimental.msc3575_enabled:
+        SlidingSyncRestServlet(hs).register(http_server)
+        SlidingSyncE2eeRestServlet(hs).register(http_server)
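The removed `SlidingSyncStreamToken` carries a per-connection position in front of an ordinary stream token, which is what lets the server track sticky request state across requests; the added line falls back to a plain `StreamToken`. A simplified sketch of that token shape (the `<connection position>/<stream token>` wire format matches how Synapse serializes it, but the classes here are illustrative stand-ins):

```python
# Sketch: a sliding-sync position is a connection counter plus a stream token.
from dataclasses import dataclass


@dataclass(frozen=True)
class SlidingSyncToken:
    connection_position: int  # per-connection counter for sticky request state
    stream_token: str  # the ordinary sync-v2-style stream position

    def to_string(self) -> str:
        return f"{self.connection_position}/{self.stream_token}"

    @classmethod
    def from_string(cls, s: str) -> "SlidingSyncToken":
        # Split on the first "/" only; the stream token may contain more.
        position, _, stream = s.partition("/")
        return cls(connection_position=int(position), stream_token=stream)


token = SlidingSyncToken.from_string("5/s72594_4483_1934")
assert token.connection_position == 5
assert token.to_string() == "5/s72594_4483_1934"
```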
@@ -25,11 +25,11 @@ import logging
 import re
 from typing import TYPE_CHECKING, Tuple

+from twisted.web.server import Request
+
 from synapse.api.constants import RoomCreationPreset
 from synapse.http.server import HttpServer
 from synapse.http.servlet import RestServlet
-from synapse.http.site import SynapseRequest
-from synapse.rest.admin.experimental_features import ExperimentalFeature
 from synapse.types import JsonDict

 if TYPE_CHECKING:
@@ -45,8 +45,6 @@ class VersionsRestServlet(RestServlet):
     def __init__(self, hs: "HomeServer"):
         super().__init__()
         self.config = hs.config
-        self.auth = hs.get_auth()
-        self.store = hs.get_datastores().main

         # Calculate these once since they shouldn't change after start-up.
         self.e2ee_forced_public = (
@@ -62,22 +60,7 @@ class VersionsRestServlet(RestServlet):
             in self.config.room.encryption_enabled_by_default_for_room_presets
         )

-    async def on_GET(self, request: SynapseRequest) -> Tuple[int, JsonDict]:
-        msc3881_enabled = self.config.experimental.msc3881_enabled
-
-        if self.auth.has_access_token(request):
-            requester = await self.auth.get_user_by_req(
-                request,
-                allow_guest=True,
-                allow_locked=True,
-                allow_expired=True,
-            )
-            user_id = requester.user.to_string()
-
-            msc3881_enabled = await self.store.is_feature_enabled(
-                user_id, ExperimentalFeature.MSC3881
-            )
-
+    def on_GET(self, request: Request) -> Tuple[int, JsonDict]:
         return (
             200,
             {
@@ -107,7 +90,6 @@ class VersionsRestServlet(RestServlet):
                     "v1.8",
                     "v1.9",
                     "v1.10",
-                    "v1.11",
                 ],
                 # as per MSC1497:
                 "unstable_features": {
@@ -142,7 +124,7 @@ class VersionsRestServlet(RestServlet):
                     # TODO: this is no longer needed once unstable MSC3882 does not need to be supported:
                    "org.matrix.msc3882": self.config.auth.login_via_existing_enabled,
                     # Adds support for remotely enabling/disabling pushers, as per MSC3881
-                    "org.matrix.msc3881": msc3881_enabled,
+                    "org.matrix.msc3881": self.config.experimental.msc3881_enabled,
                     # Adds support for filtering /messages by event relation.
                     "org.matrix.msc3874": self.config.experimental.msc3874_enabled,
                     # Adds support for simple HTTP rendezvous as per MSC3886
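With the removed code, `/versions` becomes authentication-aware: when an access token is present, `org.matrix.msc3881` reflects a per-user experimental-feature override rather than the global config value. A client would therefore query the endpoint with its token attached, roughly like this (sketch; the homeserver URL and token are placeholders):

```python
# Sketch: /versions can answer differently per user, so compare the
# unauthenticated and authenticated responses for the MSC3881 flag.
import requests

HOMESERVER = "https://matrix.example.org"

# Unauthenticated: reflects the server-wide default for org.matrix.msc3881.
anon = requests.get(f"{HOMESERVER}/_matrix/client/versions").json()

# Authenticated: may reflect a per-user experimental-feature override.
authed = requests.get(
    f"{HOMESERVER}/_matrix/client/versions",
    headers={"Authorization": "Bearer <access_token>"},
).json()

print(anon.get("unstable_features", {}).get("org.matrix.msc3881"))
print(authed.get("unstable_features", {}).get("org.matrix.msc3881"))
```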
@@ -84,7 +84,7 @@ class DownloadResource(RestServlet):

         if self._is_mine_server_name(server_name):
             await self.media_repo.get_local_media(
-                request, media_id, file_name, max_timeout_ms, allow_authenticated=False
+                request, media_id, file_name, max_timeout_ms
             )
         else:
             allow_remote = parse_boolean(request, "allow_remote", default=True)
@@ -106,5 +106,4 @@ class DownloadResource(RestServlet):
                 max_timeout_ms,
                 ip_address,
                 False,
-                allow_authenticated=False,
             )
@@ -88,27 +88,11 @@ class ThumbnailResource(RestServlet):
         if self._is_mine_server_name(server_name):
             if self.dynamic_thumbnails:
                 await self.thumbnail_provider.select_or_generate_local_thumbnail(
-                    request,
-                    media_id,
-                    width,
-                    height,
-                    method,
-                    m_type,
-                    max_timeout_ms,
-                    False,
-                    allow_authenticated=False,
+                    request, media_id, width, height, method, m_type, max_timeout_ms
                 )
             else:
                 await self.thumbnail_provider.respond_local_thumbnail(
-                    request,
-                    media_id,
-                    width,
-                    height,
-                    method,
-                    m_type,
-                    max_timeout_ms,
-                    False,
-                    allow_authenticated=False,
+                    request, media_id, width, height, method, m_type, max_timeout_ms
                 )
             self.media_repo.mark_recently_accessed(None, media_id)
         else:
@@ -136,7 +120,5 @@ class ThumbnailResource(RestServlet):
                 m_type,
                 max_timeout_ms,
                 ip_address,
-                use_federation=False,
-                allow_authenticated=False,
             )
             self.media_repo.mark_recently_accessed(server_name, media_id)
+5 -10
@@ -28,7 +28,7 @@
 import abc
 import functools
 import logging
-from typing import TYPE_CHECKING, Callable, Dict, List, Optional, Type, TypeVar, cast
+from typing import TYPE_CHECKING, Callable, Dict, List, Optional, TypeVar, cast

 from typing_extensions import TypeAlias

@@ -161,7 +161,6 @@ if TYPE_CHECKING:
     from synapse.handlers.jwt import JwtHandler
     from synapse.handlers.oidc import OidcHandler
     from synapse.handlers.saml import SamlHandler
-    from synapse.storage._base import SQLBaseStore


 # The annotation for `cache_in_self` used to be
@@ -256,13 +255,10 @@ class HomeServer(metaclass=abc.ABCMeta):
         "stats",
     ]

-    @property
-    @abc.abstractmethod
-    def DATASTORE_CLASS(self) -> Type["SQLBaseStore"]:
-        # This is overridden in derived application classes
-        # (such as synapse.app.homeserver.SynapseHomeServer) and gives the class to be
-        # instantiated during setup() for future return by get_datastores()
-        pass
+    # This is overridden in derived application classes
+    # (such as synapse.app.homeserver.SynapseHomeServer) and gives the class to be
+    # instantiated during setup() for future return by get_datastores()
+    DATASTORE_CLASS = abc.abstractproperty()

     def __init__(
         self,
@@ -559,7 +555,6 @@ class HomeServer(metaclass=abc.ABCMeta):
     def get_sync_handler(self) -> SyncHandler:
         return SyncHandler(self)

-    @cache_in_self
     def get_sliding_sync_handler(self) -> SlidingSyncHandler:
         return SlidingSyncHandler(self)
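The `DATASTORE_CLASS` change swaps `abc.abstractproperty()`, which has been deprecated since Python 3.3, for the stacked `@property` / `@abc.abstractmethod` spelling, which also lets the attribute carry a return annotation. A minimal self-contained comparison of the two spellings:

```python
# The legacy and modern spellings of an abstract property.
import abc


class Legacy(metaclass=abc.ABCMeta):
    # Deprecated spelling (since Python 3.3); cannot carry a type annotation.
    DATASTORE_CLASS = abc.abstractproperty()


class Modern(metaclass=abc.ABCMeta):
    @property
    @abc.abstractmethod
    def DATASTORE_CLASS(self) -> type:
        """Overridden by concrete subclasses to name the datastore class."""


class Concrete(Modern):
    @property
    def DATASTORE_CLASS(self) -> type:
        return dict  # stand-in for a real datastore class


print(Concrete().DATASTORE_CLASS)  # instantiating Modern() would raise TypeError
```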
@@ -119,16 +119,14 @@ class SQLBaseStore(metaclass=ABCMeta):
         self._attempt_to_invalidate_cache(
             "get_user_in_room_with_profile", (room_id, user_id)
         )
-        self._attempt_to_invalidate_cache("get_rooms_for_user", (user_id,))
         self._attempt_to_invalidate_cache(
-            "_get_rooms_for_local_user_where_membership_is_inner", (user_id,)
+            "get_rooms_for_user_with_stream_ordering", (user_id,)
         )
+        self._attempt_to_invalidate_cache("get_rooms_for_user", (user_id,))

         # Purge other caches based on room state.
         self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
         self._attempt_to_invalidate_cache("get_partial_current_state_ids", (room_id,))
-        self._attempt_to_invalidate_cache("get_room_type", (room_id,))
-        self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))

     def _invalidate_state_caches_all(self, room_id: str) -> None:
         """Invalidates caches that are based on the current state, but does
@@ -150,13 +148,11 @@ class SQLBaseStore(metaclass=ABCMeta):
         self._attempt_to_invalidate_cache("get_local_users_in_room", (room_id,))
         self._attempt_to_invalidate_cache("does_pair_of_users_share_a_room", None)
         self._attempt_to_invalidate_cache("get_user_in_room_with_profile", None)
-        self._attempt_to_invalidate_cache("get_rooms_for_user", None)
         self._attempt_to_invalidate_cache(
-            "_get_rooms_for_local_user_where_membership_is_inner", None
+            "get_rooms_for_user_with_stream_ordering", None
         )
+        self._attempt_to_invalidate_cache("get_rooms_for_user", None)
         self._attempt_to_invalidate_cache("get_room_summary", (room_id,))
-        self._attempt_to_invalidate_cache("get_room_type", (room_id,))
-        self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))

     def _attempt_to_invalidate_cache(
         self, cache_name: str, key: Optional[Collection[Any]]
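Both sides of these hunks funnel invalidation through `_attempt_to_invalidate_cache(name, key)`, where `key=None` means the whole cache is dropped, and the "attempt" reflects that a given worker may not host the named cache at all. A stand-in sketch of that pattern (illustrative only, not Synapse's implementation):

```python
# Sketch: look a cache up by name and invalidate one entry or everything.
from typing import Any, Collection, Dict, Optional


class Store:
    def __init__(self) -> None:
        # cache name -> {key tuple: cached value}
        self._caches: Dict[str, Dict[Any, Any]] = {
            "get_rooms_for_user": {("@alice:example.org",): ["!room:example.org"]},
        }

    def _attempt_to_invalidate_cache(
        self, cache_name: str, key: Optional[Collection[Any]]
    ) -> bool:
        cache = self._caches.get(cache_name)
        if cache is None:
            return False  # this worker doesn't have that cache; nothing to do
        if key is None:
            cache.clear()  # invalidate every entry
        else:
            cache.pop(tuple(key), None)  # invalidate a single entry
        return True


store = Store()
store._attempt_to_invalidate_cache("get_rooms_for_user", ("@alice:example.org",))
store._attempt_to_invalidate_cache("no_such_cache", None)  # safely a no-op
```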
@@ -409,7 +409,7 @@ class StateStorageController:

         return state_ids

-    async def get_state_ids_at(
+    async def get_state_at(
         self,
         room_id: str,
         stream_position: StreamToken,
@@ -460,30 +460,6 @@ class StateStorageController:
         )
         return state

-    @trace
-    @tag_args
-    async def get_state_at(
-        self,
-        room_id: str,
-        stream_position: StreamToken,
-        state_filter: Optional[StateFilter] = None,
-        await_full_state: bool = True,
-    ) -> StateMap[EventBase]:
-        """Same as `get_state_ids_at` but also fetches the events"""
-        state_map_ids = await self.get_state_ids_at(
-            room_id, stream_position, state_filter, await_full_state
-        )
-
-        event_map = await self.stores.main.get_events(list(state_map_ids.values()))
-
-        state_map = {}
-        for key, event_id in state_map_ids.items():
-            event = event_map.get(event_id)
-            if event:
-                state_map[key] = event
-
-        return state_map
-
     @trace
     @tag_args
     async def get_state_for_groups(
@@ -268,23 +268,17 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
             self._curr_state_delta_stream_cache.entity_has_changed(data.room_id, token)  # type: ignore[attr-defined]

             if data.type == EventTypes.Member:
-                self._attempt_to_invalidate_cache(
-                    "get_rooms_for_user", (data.state_key,)
+                self.get_rooms_for_user_with_stream_ordering.invalidate(  # type: ignore[attr-defined]
+                    (data.state_key,)
                 )
-            elif data.type == EventTypes.RoomEncryption:
-                self._attempt_to_invalidate_cache(
-                    "get_room_encryption", (data.room_id,)
-                )
-            elif data.type == EventTypes.Create:
-                self._attempt_to_invalidate_cache("get_room_type", (data.room_id,))
+                self.get_rooms_for_user.invalidate((data.state_key,))  # type: ignore[attr-defined]
         elif row.type == EventsStreamAllStateRow.TypeId:
             assert isinstance(data, EventsStreamAllStateRow)
             # Similar to the above, but the entire caches are invalidated. This is
             # unfortunate for the membership caches, but should recover quickly.
             self._curr_state_delta_stream_cache.entity_has_changed(data.room_id, token)  # type: ignore[attr-defined]
-            self._attempt_to_invalidate_cache("get_rooms_for_user", None)
-            self._attempt_to_invalidate_cache("get_room_type", (data.room_id,))
-            self._attempt_to_invalidate_cache("get_room_encryption", (data.room_id,))
+            self.get_rooms_for_user_with_stream_ordering.invalidate_all()  # type: ignore[attr-defined]
+            self.get_rooms_for_user.invalidate_all()  # type: ignore[attr-defined]
         else:
             raise Exception("Unknown events stream row type %s" % (row.type,))

@@ -340,10 +334,10 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
             self._attempt_to_invalidate_cache(
                 "get_invited_rooms_for_local_user", (state_key,)
             )
-            self._attempt_to_invalidate_cache("get_rooms_for_user", (state_key,))
             self._attempt_to_invalidate_cache(
-                "_get_rooms_for_local_user_where_membership_is_inner", (state_key,)
+                "get_rooms_for_user_with_stream_ordering", (state_key,)
             )
+            self._attempt_to_invalidate_cache("get_rooms_for_user", (state_key,))

             self._attempt_to_invalidate_cache(
                 "did_forget",
@@ -355,10 +349,6 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
             self._attempt_to_invalidate_cache(
                 "get_forgotten_rooms_for_user", (state_key,)
             )
-        elif etype == EventTypes.Create:
-            self._attempt_to_invalidate_cache("get_room_type", (room_id,))
-        elif etype == EventTypes.RoomEncryption:
-            self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))

         if relates_to:
             self._attempt_to_invalidate_cache(
@@ -409,18 +399,16 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         self._attempt_to_invalidate_cache("get_thread_id", None)
         self._attempt_to_invalidate_cache("get_thread_id_for_receipts", None)
         self._attempt_to_invalidate_cache("get_invited_rooms_for_local_user", None)
-        self._attempt_to_invalidate_cache("get_rooms_for_user", None)
         self._attempt_to_invalidate_cache(
-            "_get_rooms_for_local_user_where_membership_is_inner", None
+            "get_rooms_for_user_with_stream_ordering", None
         )
+        self._attempt_to_invalidate_cache("get_rooms_for_user", None)
         self._attempt_to_invalidate_cache("did_forget", None)
         self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
         self._attempt_to_invalidate_cache("get_references_for_event", None)
         self._attempt_to_invalidate_cache("get_thread_summary", None)
         self._attempt_to_invalidate_cache("get_thread_participated", None)
         self._attempt_to_invalidate_cache("get_threads", (room_id,))
-        self._attempt_to_invalidate_cache("get_room_type", (room_id,))
-        self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))

         self._attempt_to_invalidate_cache("_get_state_group_for_event", None)

@@ -473,8 +461,6 @@ class CacheInvalidationWorkerStore(SQLBaseStore):
         self._attempt_to_invalidate_cache("get_forgotten_rooms_for_user", None)
         self._attempt_to_invalidate_cache("_get_membership_from_event_id", None)
         self._attempt_to_invalidate_cache("get_room_version_id", (room_id,))
-        self._attempt_to_invalidate_cache("get_room_type", (room_id,))
-        self._attempt_to_invalidate_cache("get_room_encryption", (room_id,))

         # And delete state caches.
Some files were not shown because too many files have changed in this diff.