Compare commits

3 commits: mv/complem ... erikj/rust

| Author | SHA1 | Date |
|---|---|---|
|  | 1780a70748 |  |
|  | f3b7940e14 |  |
|  | 53e83c76b2 |  |
.github/workflows/tests.yml (8 changes, vendored)
@@ -468,6 +468,10 @@ jobs:
  tests-done:
    if: ${{ always() }}
    needs:
      - check-sampleconfig
      - lint
      - lint-crlf
      - lint-newsfile
      - trial
      - trial-olddeps
      - sytest
@@ -482,7 +486,5 @@ jobs:
          needs: ${{ toJSON(needs) }}

          # The newsfile lint may be skipped on non PR builds
          # Cargo test is skipped if there is no changes on Rust code
          skippable: |
          skippable:
            lint-newsfile
            cargo-test

CHANGES.md (108 changes)
@@ -1,111 +1,3 @@

Synapse 1.67.0 (2022-09-13)
===========================

This release removes using the deprecated direct TCP replication configuration
for workers. Server admins should use Redis instead. See the [upgrade
notes](https://matrix-org.github.io/synapse/v1.67/upgrade.html#upgrading-to-v1670).

The minimum version of `poetry` supported for managing source checkouts is now
1.2.0.

**Notice:** from the next major release (1.68.0) installing Synapse from a source
checkout will require a recent Rust compiler. Those using packages or
`pip install matrix-synapse` will not be affected. See the [upgrade
notes](https://matrix-org.github.io/synapse/v1.67/upgrade.html#upgrading-to-v1670).

**Notice:** from the next major release (1.68.0), running Synapse with a SQLite
database will require SQLite version 3.27.0 or higher. (The [current minimum
version is SQLite 3.22.0](https://github.com/matrix-org/synapse/blob/release-v1.67/synapse/storage/engines/sqlite.py#L69-L78).)
See [#12983](https://github.com/matrix-org/synapse/issues/12983) and the [upgrade notes](https://matrix-org.github.io/synapse/v1.67/upgrade.html#upgrading-to-v1670) for more details.


No significant changes since 1.67.0rc1.


Synapse 1.67.0rc1 (2022-09-06)
==============================

Features
--------

- Support setting the registration shared secret in a file, via a new `registration_shared_secret_path` configuration option. ([\#13614](https://github.com/matrix-org/synapse/issues/13614))
- Change the default startup behaviour so that any missing "additional" configuration files (signing key, etc) are generated automatically. ([\#13615](https://github.com/matrix-org/synapse/issues/13615))
- Improve performance of sending messages in rooms with thousands of local users. ([\#13634](https://github.com/matrix-org/synapse/issues/13634))


Bugfixes
--------

- Fix a bug introduced in Synapse 1.13 where the [List Rooms admin API](https://matrix-org.github.io/synapse/develop/admin_api/rooms.html#list-room-api) would return integers instead of booleans for the `federatable` and `public` fields when using a Sqlite database. ([\#13509](https://github.com/matrix-org/synapse/issues/13509))
- Fix bug that user cannot `/forget` rooms after the last member has left the room. ([\#13546](https://github.com/matrix-org/synapse/issues/13546))
- Faster Room Joins: fix `/make_knock` blocking indefinitely when the room in question is a partial-stated room. ([\#13583](https://github.com/matrix-org/synapse/issues/13583))
- Fix loading the current stream position behind the actual position. ([\#13585](https://github.com/matrix-org/synapse/issues/13585))
- Fix a longstanding bug in `register_new_matrix_user` which meant it was always necessary to explicitly give a server URL. ([\#13616](https://github.com/matrix-org/synapse/issues/13616))
- Fix the running of [MSC1763](https://github.com/matrix-org/matrix-spec-proposals/pull/1763) retention purge_jobs in deployments with background jobs running on a worker by forcing them back onto the main worker. Contributed by Brad @ Beeper. ([\#13632](https://github.com/matrix-org/synapse/issues/13632))
- Fix a long-standing bug that downloaded media for URL previews was not deleted while database background updates were running. ([\#13657](https://github.com/matrix-org/synapse/issues/13657))
- Fix [MSC3030](https://github.com/matrix-org/matrix-spec-proposals/pull/3030) `/timestamp_to_event` endpoint to return the correct next event when the events have the same timestamp. ([\#13658](https://github.com/matrix-org/synapse/issues/13658))
- Fix bug where we wedge media plugins if clients disconnect early. Introduced in v1.22.0. ([\#13660](https://github.com/matrix-org/synapse/issues/13660))
- Fix a long-standing bug which meant that keys for unwhitelisted servers were not returned by `/_matrix/key/v2/query`. ([\#13683](https://github.com/matrix-org/synapse/issues/13683))
- Fix a bug introduced in Synapse v1.20.0 that would cause the unstable unread counts from [MSC2654](https://github.com/matrix-org/matrix-spec-proposals/pull/2654) to be calculated even if the feature is disabled. ([\#13694](https://github.com/matrix-org/synapse/issues/13694))


Updates to the Docker image
---------------------------

- Update docker image to use a stable version of poetry. ([\#13688](https://github.com/matrix-org/synapse/issues/13688))


Improved Documentation
----------------------

- Improve the description of the ["chain cover index"](https://matrix-org.github.io/synapse/latest/auth_chain_difference_algorithm.html) used internally by Synapse. ([\#13602](https://github.com/matrix-org/synapse/issues/13602))
- Document how ["monthly active users"](https://matrix-org.github.io/synapse/latest/usage/administration/monthly_active_users.html) is calculated and used. ([\#13617](https://github.com/matrix-org/synapse/issues/13617))
- Improve documentation around user registration. ([\#13640](https://github.com/matrix-org/synapse/issues/13640))
- Remove documentation of legacy `frontend_proxy` worker app. ([\#13645](https://github.com/matrix-org/synapse/issues/13645))
- Clarify documentation that HTTP replication traffic can be protected with a shared secret. ([\#13656](https://github.com/matrix-org/synapse/issues/13656))
- Remove unintentional colons from [config manual](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html) headers. ([\#13665](https://github.com/matrix-org/synapse/issues/13665))
- Update docs to make enabling metrics more clear. ([\#13678](https://github.com/matrix-org/synapse/issues/13678))
- Clarify `(room_id, event_id)` global uniqueness and how we should scope our database schemas. ([\#13701](https://github.com/matrix-org/synapse/issues/13701))


Deprecations and Removals
-------------------------

- Drop support for calling `/_matrix/client/v3/rooms/{roomId}/invite` without an `id_access_token`, which was not permitted by the spec. Contributed by @Vetchu. ([\#13241](https://github.com/matrix-org/synapse/issues/13241))
- Remove redundant `_get_joined_users_from_context` cache. Contributed by Nick @ Beeper (@fizzadar). ([\#13569](https://github.com/matrix-org/synapse/issues/13569))
- Remove the ability to use direct TCP replication with workers. Direct TCP replication was deprecated in Synapse v1.18.0. Workers now require using Redis. ([\#13647](https://github.com/matrix-org/synapse/issues/13647))
- Remove support for unstable [private read receipts](https://github.com/matrix-org/matrix-spec-proposals/pull/2285). ([\#13653](https://github.com/matrix-org/synapse/issues/13653), [\#13692](https://github.com/matrix-org/synapse/issues/13692))


Internal Changes
----------------

- Extend the release script to wait for GitHub Actions to finish and to be usable as a guide for the whole process. ([\#13483](https://github.com/matrix-org/synapse/issues/13483))
- Add experimental configuration option to allow disabling legacy Prometheus metric names. ([\#13540](https://github.com/matrix-org/synapse/issues/13540))
- Cache user IDs instead of profiles to reduce cache memory usage. Contributed by Nick @ Beeper (@fizzadar). ([\#13573](https://github.com/matrix-org/synapse/issues/13573), [\#13600](https://github.com/matrix-org/synapse/issues/13600))
- Optimize how Synapse calculates domains to fetch from during backfill. ([\#13575](https://github.com/matrix-org/synapse/issues/13575))
- Comment about a better future where we can get the state diff between two events. ([\#13586](https://github.com/matrix-org/synapse/issues/13586))
- Instrument `_check_sigs_and_hash_and_fetch` to trace time spent in child concurrent calls for understandable traces in Jaeger. ([\#13588](https://github.com/matrix-org/synapse/issues/13588))
- Improve performance of `@cachedList`. ([\#13591](https://github.com/matrix-org/synapse/issues/13591))
- Minor speed up of fetching large numbers of push rules. ([\#13592](https://github.com/matrix-org/synapse/issues/13592))
- Optimise push action fetching queries. Contributed by Nick @ Beeper (@fizzadar). ([\#13597](https://github.com/matrix-org/synapse/issues/13597))
- Rename `event_map` to `unpersisted_events` when computing the auth differences. ([\#13603](https://github.com/matrix-org/synapse/issues/13603))
- Refactor `get_users_in_room(room_id)` mis-use with dedicated `get_current_hosts_in_room(room_id)` function. ([\#13605](https://github.com/matrix-org/synapse/issues/13605))
- Use dedicated `get_local_users_in_room(room_id)` function to find local users when calculating `join_authorised_via_users_server` of a `/make_join` request. ([\#13606](https://github.com/matrix-org/synapse/issues/13606))
- Refactor `get_users_in_room(room_id)` mis-use to lookup single local user with dedicated `check_local_user_in_room(...)` function. ([\#13608](https://github.com/matrix-org/synapse/issues/13608))
- Drop unused column `application_services_state.last_txn`. ([\#13627](https://github.com/matrix-org/synapse/issues/13627))
- Improve readability of Complement CI logs by printing failure results last. ([\#13639](https://github.com/matrix-org/synapse/issues/13639))
- Generalise the `@cancellable` annotation so it can be used on functions other than just servlet methods. ([\#13662](https://github.com/matrix-org/synapse/issues/13662))
- Introduce a `CommonUsageMetrics` class to share some usage metrics between the Prometheus exporter and the phone home stats. ([\#13671](https://github.com/matrix-org/synapse/issues/13671))
- Add some logging to help track down #13444. ([\#13679](https://github.com/matrix-org/synapse/issues/13679))
- Update poetry lock file for v1.2.0. ([\#13689](https://github.com/matrix-org/synapse/issues/13689))
- Add cache to `is_partial_state_room`. ([\#13693](https://github.com/matrix-org/synapse/issues/13693))
- Update the Grafana dashboard that is included with Synapse in the `contrib` directory. ([\#13697](https://github.com/matrix-org/synapse/issues/13697))
- Only run trial CI on all python versions on non-PRs. ([\#13698](https://github.com/matrix-org/synapse/issues/13698))
- Fix typechecking with latest types-jsonschema. ([\#13712](https://github.com/matrix-org/synapse/issues/13712))
- Reduce number of CI checks we run for PRs. ([\#13713](https://github.com/matrix-org/synapse/issues/13713))


Synapse 1.66.0 (2022-08-31)
===========================

Cargo.toml
@@ -3,3 +3,7 @@

[workspace]
members = ["rust"]

[profile.release]
debug = true

changelog.d/13241.removal (new file)
@@ -0,0 +1 @@
Drop support for calling `/_matrix/client/v3/rooms/{roomId}/invite` without an `id_access_token`, which was not permitted by the spec. Contributed by @Vetchu.

@@ -1 +0,0 @@
Note that `libpq` is required on ARM-based Macs.

changelog.d/13483.misc (new file)
@@ -0,0 +1 @@
Extend the release script to wait for GitHub Actions to finish and to be usable as a guide for the whole process.

changelog.d/13509.bugfix (new file)
@@ -0,0 +1 @@
Fix a bug introduced in Synapse 1.13 where the [List Rooms admin API](https://matrix-org.github.io/synapse/develop/admin_api/rooms.html#list-room-api) would return integers instead of booleans for the `federatable` and `public` fields when using a Sqlite database.

changelog.d/13540.misc (new file)
@@ -0,0 +1 @@
Add experimental configuration option to allow disabling legacy Prometheus metric names.

changelog.d/13546.bugfix (new file)
@@ -0,0 +1 @@
Fix bug that user cannot `/forget` rooms after the last member has left the room.

changelog.d/13569.removal (new file)
@@ -0,0 +1 @@
Remove redundant `_get_joined_users_from_context` cache. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13573.misc (new file)
@@ -0,0 +1 @@
Cache user IDs instead of profiles to reduce cache memory usage. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13575.misc (new file)
@@ -0,0 +1 @@
Optimize how Synapse calculates domains to fetch from during backfill.

changelog.d/13583.bugfix (new file)
@@ -0,0 +1 @@
Faster Room Joins: fix `/make_knock` blocking indefinitely when the room in question is a partial-stated room.

changelog.d/13585.bugfix (new file)
@@ -0,0 +1 @@
Fix loading the current stream position behind the actual position.

changelog.d/13586.misc (new file)
@@ -0,0 +1 @@
Comment about a better future where we can get the state diff between two events.

changelog.d/13588.misc (new file)
@@ -0,0 +1 @@
Instrument `_check_sigs_and_hash_and_fetch` to trace time spent in child concurrent calls for understandable traces in Jaeger.

changelog.d/13591.misc (new file)
@@ -0,0 +1 @@
Improve performance of `@cachedList`.

changelog.d/13592.misc (new file)
@@ -0,0 +1 @@
Minor speed up of fetching large numbers of push rules.

changelog.d/13597.misc (new file)
@@ -0,0 +1 @@
Optimise push action fetching queries. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13600.misc (new file)
@@ -0,0 +1 @@
Cache user IDs instead of profiles to reduce cache memory usage. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13602.doc (new file)
@@ -0,0 +1 @@
Improve the description of the ["chain cover index"](https://matrix-org.github.io/synapse/latest/auth_chain_difference_algorithm.html) used internally by Synapse.

changelog.d/13603.misc (new file)
@@ -0,0 +1 @@
Rename `event_map` to `unpersisted_events` when computing the auth differences.

changelog.d/13605.misc (new file)
@@ -0,0 +1 @@
Refactor `get_users_in_room(room_id)` mis-use with dedicated `get_current_hosts_in_room(room_id)` function.

changelog.d/13606.misc (new file)
@@ -0,0 +1 @@
Use dedicated `get_local_users_in_room(room_id)` function to find local users when calculating `join_authorised_via_users_server` of a `/make_join` request.

changelog.d/13608.misc (new file)
@@ -0,0 +1 @@
Refactor `get_users_in_room(room_id)` mis-use to lookup single local user with dedicated `check_local_user_in_room(...)` function.

changelog.d/13614.feature (new file)
@@ -0,0 +1 @@
Support setting the registration shared secret in a file, via a new `registration_shared_secret_path` configuration option.

changelog.d/13615.feature (new file)
@@ -0,0 +1 @@
Change the default startup behaviour so that any missing "additional" configuration files (signing key, etc) are generated automatically.

changelog.d/13616.bugfix (new file)
@@ -0,0 +1 @@
Fix a longstanding bug in `register_new_matrix_user` which meant it was always necessary to explicitly give a server URL.

changelog.d/13617.doc (new file)
@@ -0,0 +1 @@
Document how ["monthly active users"](https://matrix-org.github.io/synapse/latest/usage/administration/monthly_active_users.html) is calculated and used.

changelog.d/13627.misc (new file)
@@ -0,0 +1 @@
Drop unused column `application_services_state.last_txn`.

changelog.d/13632.bugfix (new file)
@@ -0,0 +1 @@
Fix the running of MSC1763 retention purge_jobs in deployments with background jobs running on a worker by forcing them back onto the main worker. Contributed by Brad @ Beeper.

changelog.d/13634.feature (new file)
@@ -0,0 +1 @@
Improve performance of sending messages in rooms with thousands of local users.

changelog.d/13639.misc (new file)
@@ -0,0 +1 @@
Improve readability of Complement CI logs by printing failure results last.

changelog.d/13640.doc (new file)
@@ -0,0 +1 @@
Improve documentation around user registration.

changelog.d/13645.doc (new file)
@@ -0,0 +1 @@
Remove documentation of legacy `frontend_proxy` worker app.

changelog.d/13647.removal (new file)
@@ -0,0 +1 @@
Remove the ability to use direct TCP replication with workers. Direct TCP replication was deprecated in Synapse v1.18.0. Workers now require using Redis.

changelog.d/13653.removal (new file)
@@ -0,0 +1 @@
Remove support for unstable [private read receipts](https://github.com/matrix-org/matrix-spec-proposals/pull/2285).

changelog.d/13656.doc (new file)
@@ -0,0 +1 @@
Clarify documentation that HTTP replication traffic can be protected with a shared secret.

changelog.d/13657.bugfix (new file)
@@ -0,0 +1 @@
Fix a long-standing bug that downloaded media for URL previews was not deleted while database background updates were running.

changelog.d/13658.bugfix (new file)
@@ -0,0 +1 @@
Fix MSC3030 `/timestamp_to_event` endpoint to return the correct next event when the events have the same timestamp.

changelog.d/13660.bugfix (new file)
@@ -0,0 +1 @@
Fix bug where we wedge media plugins if clients disconnect early. Introduced in v1.22.0.

changelog.d/13662.misc (new file)
@@ -0,0 +1 @@
Generalise the `@cancellable` annotation so it can be used on functions other than just servlet methods.

changelog.d/13665.doc (new file)
@@ -0,0 +1 @@
Remove unintentional colons from [config manual](https://matrix-org.github.io/synapse/latest/usage/configuration/config_documentation.html) headers.

changelog.d/13671.misc (new file)
@@ -0,0 +1 @@
Introduce a `CommonUsageMetrics` class to share some usage metrics between the Prometheus exporter and the phone home stats.

changelog.d/13678.doc (new file)
@@ -0,0 +1 @@
Update docs to make enabling metrics more clear.

changelog.d/13679.misc (new file)
@@ -0,0 +1 @@
Add some logging to help track down #13444.

changelog.d/13683.bugfix (new file)
@@ -0,0 +1 @@
Fix a long-standing bug which meant that keys for unwhitelisted servers were not returned by `/_matrix/key/v2/query`.

changelog.d/13688.docker (new file)
@@ -0,0 +1 @@
Update docker image to use a stable version of poetry.

changelog.d/13689.misc (new file)
@@ -0,0 +1 @@
Update poetry lock file for v1.2.0.

changelog.d/13692.removal (new file)
@@ -0,0 +1 @@
Remove support for unstable [private read receipts](https://github.com/matrix-org/matrix-spec-proposals/pull/2285).

changelog.d/13693.misc (new file)
@@ -0,0 +1 @@
Add cache to `is_partial_state_room`.

changelog.d/13694.bugfix (new file)
@@ -0,0 +1 @@
Fix a bug introduced in Synapse v1.20.0 that would cause the unstable unread counts from [MSC2654](https://github.com/matrix-org/matrix-spec-proposals/pull/2654) to be calculated even if the feature is disabled.

changelog.d/13697.misc (new file)
@@ -0,0 +1 @@
Update the Grafana dashboard that is included with Synapse in the `contrib` directory.

changelog.d/13698.misc (new file)
@@ -0,0 +1 @@
Only run trial CI on all python versions on non-PRs.

changelog.d/13701.doc (new file)
@@ -0,0 +1 @@
Clarify `(room_id, event_id)` global uniqueness and how we should scope our database schemas.

@@ -1 +0,0 @@
Add & populate `event_stream_ordering` column on receipts table for future optimisation of push action processing. Contributed by Nick @ Beeper (@fizzadar).

changelog.d/13712.misc (new file)
@@ -0,0 +1 @@
Fix typechecking with latest types-jsonschema.

changelog.d/13713.misc (new file)
@@ -0,0 +1 @@
Reduce number of CI checks we run for PRs.

@@ -1 +0,0 @@
Fix a typo in the documentation for the login ratelimiting configuration.

@@ -1 +0,0 @@
Strip number suffix from instance name to consolidate services that traces are spread over.

changelog.d/13733.misc (new file)
@@ -0,0 +1 @@
Convert `LruCache` linked lists into Rust.

@@ -1 +0,0 @@
Remove old queries to join room memberships to current state events. Contributed by Nick @ Beeper (@fizzadar).

@@ -1 +0,0 @@
Fix a long standing bug where device lists would remain cached when remote users left and rejoined the last room shared with the local homeserver.

@@ -1 +0,0 @@
Add a check for editable installs if the Rust library needs rebuilding.

@@ -1 +0,0 @@
Tag traces with the instance name to be able to easily jump into the right logs and filter traces by instance.

@@ -1 +0,0 @@
Concurrently fetch room push actions when calculating badge counts. Contributed by Nick @ Beeper (@fizzadar).

@@ -1 +0,0 @@
Fix a long-standing bug where the `cache_invalidation_stream_seq` sequence would begin at 1 instead of 2.

@@ -1 +0,0 @@
Add a stub Rust crate.

@@ -1 +0,0 @@
Update the script which makes full schema dumps.

@@ -1 +0,0 @@
Add a stub Rust crate.

@@ -1 +0,0 @@
Simplify the dependency DAG in the tests workflow.

@@ -1 +0,0 @@
Fix a long-standing spec compliance bug where Synapse would accept a trailing slash on the end of `/get_missing_events` federation requests.

@@ -1 +0,0 @@
complement tests: put postgres data folder on an host path on /tmp that we bindmount, outside of the container storage that can be quite slow.

debian/changelog (14 changes, vendored)
@@ -1,18 +1,8 @@
matrix-synapse-py3 (1.67.0) stable; urgency=medium
matrix-synapse-py3 (1.66.0ubuntu1) UNRELEASED; urgency=medium

  * New Synapse release 1.67.0.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 13 Sep 2022 09:19:56 +0100

matrix-synapse-py3 (1.67.0~rc1) stable; urgency=medium

  [ Erik Johnston ]
  * Use stable poetry 1.2.0 version, rather than a prerelease.

  [ Synapse Packaging team ]
  * New Synapse release 1.67.0rc1.

 -- Synapse Packaging team <packages@matrix.org>  Tue, 06 Sep 2022 09:01:06 +0100
 -- Erik Johnston <erik@matrix.org>  Thu, 01 Sep 2022 13:48:31 +0100

matrix-synapse-py3 (1.66.0) stable; urgency=medium

@@ -17,16 +17,25 @@ ARG SYNAPSE_VERSION=latest
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).

FROM postgres:13-bullseye AS postgres_base
# initialise the database cluster in /var/lib/postgresql
RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password

# Configure a password and create a database for Synapse
RUN echo "ALTER USER postgres PASSWORD 'somesecret'" | gosu postgres postgres --single
RUN echo "CREATE DATABASE synapse" | gosu postgres postgres --single

# now build the final image, based on the Synapse image.

FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION
# copy the postgres installation over from the image we built above
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres:13-bullseye /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres:13-bullseye /usr/share/postgresql /usr/share/postgresql
COPY --from=postgres_base /var/lib/postgresql /var/lib/postgresql
COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql
RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data/main
ENV PGDATA=/var/lib/postgresql/data

# Extend the shared homeserver config to disable rate-limiting,
# set Complement's static shared secret, enable registration, amongst other

@@ -25,16 +25,8 @@ case "$SYNAPSE_COMPLEMENT_DATABASE" in
        # Set postgres authentication details which will be placed in the homeserver config file
        export POSTGRES_PASSWORD=somesecret
        export POSTGRES_USER=postgres

        export POSTGRES_HOST=localhost

        if [ ! -f "$PGDATA/PG_VERSION" ]; then
            gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password

            echo "ALTER USER postgres PASSWORD 'somesecret'" | gosu postgres postgres --single
            echo "CREATE DATABASE synapse" | gosu postgres postgres --single
        fi

        # configure supervisord to start postgres
        export START_POSTGRES=true
        ;;

@@ -303,10 +303,9 @@ You may need to install the latest Xcode developer tools:
xcode-select --install
```

On ARM-based Macs you may need to install libjpeg and libpq.
You can use Homebrew (https://brew.sh):
On ARM-based Macs you may need to explicitly install libjpeg which is a pillow dependency. You can use Homebrew (https://brew.sh):
```sh
brew install jpeg libpq
brew install jpeg
```

On macOS Catalina (10.15) you may need to explicitly install OpenSSL

@@ -111,30 +111,6 @@ and remove the TCP `replication` listener from config of the master and
The minimum supported version of poetry is now 1.2. This should only affect
those installing from a source checkout.

## Rust requirement in the next release

From the next major release (v1.68.0) installing Synapse from a source checkout
will require a recent Rust compiler. Those using packages or
`pip install matrix-synapse` will not be affected.

The simplest way of installing Rust is via [rustup.rs](https://rustup.rs/)

## SQLite version requirement in the next release

From the next major release (v1.68.0) Synapse will require SQLite 3.27.0 or
higher. Synapse v1.67.0 will be the last major release supporting SQLite
versions 3.22 to 3.26.

Those using docker images or Debian packages from Matrix.org will not be
affected. If you have installed from source, you should check the version of
SQLite used by Python with:

```shell
python -c "import sqlite3; print(sqlite3.sqlite_version)"
```

If this is too old, refer to your distribution for advice on upgrading.

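The shell one-liner above is the authoritative check. For completeness, a hedged Rust equivalent (not part of the upgrade notes; it assumes the rusqlite crate, which reports the version of the SQLite library it links) looks like this:

```rust
fn main() {
    // SQLITE_VERSION_NUMBER encodes X.Y.Z as X*1_000_000 + Y*1_000 + Z,
    // so the 3.27.0 minimum mentioned above is 3_027_000.
    println!("SQLite {}", rusqlite::version());
    assert!(
        rusqlite::version_number() >= 3_027_000,
        "SQLite is older than 3.27.0"
    );
}
```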
# Upgrading to v1.66.0

## Delegation of email validation no longer supported

@@ -1393,7 +1393,7 @@ This option specifies several limits for login:
  client is attempting to log into. Defaults to `per_second: 0.17`,
  `burst_count: 3`.

* `failed_attempts` ratelimits login requests based on the account the
* `failted_attempts` ratelimits login requests based on the account the
  client is attempting to log into, based on the amount of failed login
  attempts for this account. Defaults to `per_second: 0.17`, `burst_count: 3`.

@@ -57,7 +57,7 @@ manifest-path = "rust/Cargo.toml"

[tool.poetry]
name = "matrix-synapse"
version = "1.67.0"
version = "1.66.0"
description = "Homeserver for the Matrix decentralised comms protocol"
authors = ["Matrix.org Team and Contributors <packages@matrix.org>"]
license = "Apache-2.0"

@@ -18,8 +18,7 @@ crate-type = ["cdylib"]
name = "synapse.synapse_rust"

[dependencies]
intrusive-collections = "0.9.4"
lazy_static = "1.4.0"
log = "0.4.17"
pyo3 = { version = "0.16.5", features = ["extension-module", "macros", "abi3", "abi3-py37"] }

[build-dependencies]
blake2 = "0.10.4"
hex = "0.4.3"

@@ -1,45 +0,0 @@
//! This build script calculates the hash of all files in the `src/`
//! directory and adds it as an environment variable during build time.
//!
//! This is used so that the python code can detect when the built native module
//! does not match the source in-tree, helping to detect the case where the
//! source has been updated but the library hasn't been rebuilt.

use std::path::PathBuf;

use blake2::{Blake2b512, Digest};

fn main() -> Result<(), std::io::Error> {
    let mut dirs = vec![PathBuf::from("src")];

    let mut paths = Vec::new();
    while let Some(path) = dirs.pop() {
        let mut entries = std::fs::read_dir(path)?
            .map(|res| res.map(|e| e.path()))
            .collect::<Result<Vec<_>, std::io::Error>>()?;

        entries.sort();

        for entry in entries {
            if entry.is_dir() {
                dirs.push(entry)
            } else {
                paths.push(entry.to_str().expect("valid rust paths").to_string());
            }
        }
    }

    paths.sort();

    let mut hasher = Blake2b512::new();

    for path in paths {
        let bytes = std::fs::read(path)?;
        hasher.update(bytes);
    }

    let hex_digest = hex::encode(hasher.finalize());
    println!("cargo:rustc-env=SYNAPSE_RUST_DIGEST={hex_digest}");

    Ok(())
}

@@ -1,12 +1,6 @@
use pyo3::prelude::*;

/// Returns the hash of all the rust source files at the time it was compiled.
///
/// Used by python to detect if the rust library is outdated.
#[pyfunction]
fn get_rust_file_digest() -> &'static str {
    env!("SYNAPSE_RUST_DIGEST")
}
mod lru_cache;

/// Formats the sum of two numbers as string.
#[pyfunction]
@@ -17,8 +11,9 @@ fn sum_as_string(a: usize, b: usize) -> PyResult<String> {

/// The entry point for defining the Python module.
#[pymodule]
fn synapse_rust(_py: Python<'_>, m: &PyModule) -> PyResult<()> {
fn synapse_rust(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    m.add_function(wrap_pyfunction!(sum_as_string, m)?)?;
    m.add_function(wrap_pyfunction!(get_rust_file_digest, m)?)?;

    lru_cache::register_module(py, m)?;
    Ok(())
}

rust/src/lru_cache.rs (new file, 236 lines)
@@ -0,0 +1,236 @@
use std::sync::{Arc, Mutex};

use intrusive_collections::{intrusive_adapter, LinkedListAtomicLink};
use intrusive_collections::{LinkedList, LinkedListLink};
use lazy_static::lazy_static;
use log::error;
use pyo3::prelude::*;
use pyo3::types::PySet;

/// Called when registering modules with python.
pub fn register_module(py: Python<'_>, m: &PyModule) -> PyResult<()> {
    let child_module = PyModule::new(py, "push")?;
    child_module.add_class::<LruCacheNode>()?;
    child_module.add_class::<PerCacheLinkedList>()?;
    child_module.add_function(wrap_pyfunction!(get_global_list, m)?)?;

    m.add_submodule(child_module)?;

    // We need to manually add the module to sys.modules to make `from
    // synapse.synapse_rust import push` work.
    py.import("sys")?
        .getattr("modules")?
        .set_item("synapse.synapse_rust.lru_cache", child_module)?;

    Ok(())
}

#[pyclass]
#[derive(Clone)]
struct PerCacheLinkedList(Arc<Mutex<LinkedList<LruCacheNodeAdapterPerCache>>>);

#[pymethods]
impl PerCacheLinkedList {
    #[new]
    fn new() -> PerCacheLinkedList {
        PerCacheLinkedList(Default::default())
    }

    fn get_back(&self) -> Option<LruCacheNode> {
        let list = self.0.lock().expect("poisoned");
        list.back().clone_pointer().map(|n| LruCacheNode(n))
    }
}

struct LruCacheNodeInner {
    per_cache_link: LinkedListAtomicLink,
    global_list_link: LinkedListAtomicLink,
    per_cache_list: Arc<Mutex<LinkedList<LruCacheNodeAdapterPerCache>>>,
    cache: Mutex<Option<PyObject>>,
    key: PyObject,
    value: Arc<Mutex<PyObject>>,
    callbacks: Py<PySet>,
    memory: usize,
}

#[pyclass]
struct LruCacheNode(Arc<LruCacheNodeInner>);

#[pymethods]
impl LruCacheNode {
    #[new]
    fn py_new(
        cache: PyObject,
        cache_list: PerCacheLinkedList,
        key: PyObject,
        value: PyObject,
        callbacks: Py<PySet>,
        memory: usize,
    ) -> Self {
        let node = Arc::new(LruCacheNodeInner {
            per_cache_link: Default::default(),
            global_list_link: Default::default(),
            per_cache_list: cache_list.0,
            cache: Mutex::new(Some(cache)),
            key,
            value: Arc::new(Mutex::new(value)),
            callbacks,
            memory,
        });

        GLOBAL_LIST
            .lock()
            .expect("poisoned")
            .push_front(node.clone());

        node.per_cache_list
            .lock()
            .expect("poisoned")
            .push_front(node.clone());

        LruCacheNode(node)
    }

    fn add_callbacks(&self, py: Python<'_>, new_callbacks: &PyAny) -> PyResult<()> {
        if new_callbacks.len()? == 0 {
            return Ok(());
        }

        let current_callbacks = self.0.callbacks.as_ref(py);

        for cb in new_callbacks.iter()? {
            current_callbacks.add(cb?)?;
        }

        Ok(())
    }

    fn run_and_clear_callbacks(&self, py: Python<'_>) {
        let callbacks = self.0.callbacks.as_ref(py);

        if callbacks.is_empty() {
            return;
        }

        for callback in callbacks {
            if let Err(err) = callback.call0() {
                error!("LruCacheNode callback errored: {err}");
            }
        }

        callbacks.clear();
    }

    fn drop_from_cache(&self) -> PyResult<()> {
        let cache = self.0.cache.lock().expect("poisoned").take();

        if let Some(cache) = cache {
            Python::with_gil(|py| cache.call_method1(py, "pop", (&self.0.key, None::<()>)))?;
        }

        self.drop_from_lists();

        Ok(())
    }

    fn drop_from_lists(&self) {
        if self.0.global_list_link.is_linked() {
            let mut global_list = GLOBAL_LIST.lock().expect("poisoned");

            let mut cursor_mut = unsafe {
                // Getting the cursor is unsafe as we need to ensure the list link
                // belongs to the given list.
                global_list.cursor_mut_from_ptr(Arc::into_raw(self.0.clone()))
            };

            cursor_mut.remove();
        }

        if self.0.per_cache_link.is_linked() {
            let mut per_cache_list = self.0.per_cache_list.lock().expect("poisoned");

            let mut cursor_mut = unsafe {
                // Getting the cursor is unsafe as we need to ensure the list link
                // belongs to the given list.
                per_cache_list.cursor_mut_from_ptr(Arc::into_raw(self.0.clone()))
            };

            cursor_mut.remove();
        }
    }

    fn move_to_front(&self) {
        if self.0.global_list_link.is_linked() {
            let mut global_list = GLOBAL_LIST.lock().expect("poisoned");

            let mut cursor_mut = unsafe {
                // Getting the cursor is unsafe as we need to ensure the list link
                // belongs to the given list.
                global_list.cursor_mut_from_ptr(Arc::into_raw(self.0.clone()))
            };
            cursor_mut.remove();

            global_list.push_front(self.0.clone());
        }

        if self.0.per_cache_link.is_linked() {
            let mut per_cache_list = self.0.per_cache_list.lock().expect("poisoned");

            let mut cursor_mut = unsafe {
                // Getting the cursor is unsafe as we need to ensure the list link
                // belongs to the given list.
                per_cache_list.cursor_mut_from_ptr(Arc::into_raw(self.0.clone()))
            };

            cursor_mut.remove();

            per_cache_list.push_front(self.0.clone());
        }
    }

    #[getter]
    fn key(&self) -> &PyObject {
        &self.0.key
    }

    #[getter]
    fn value(&self) -> PyObject {
        self.0.value.lock().expect("poisoned").clone()
    }

    #[setter]
    fn set_value(&self, value: PyObject) {
        *self.0.value.lock().expect("poisoned") = value
    }

    #[getter]
    fn memory(&self) -> usize {
        self.0.memory
    }
}

#[pyfunction]
fn get_global_list() -> Vec<LruCacheNode> {
    let list = GLOBAL_LIST.lock().expect("poisoned");

    let mut vec = Vec::new();

    let mut cursor = list.front();

    while let Some(n) = cursor.clone_pointer() {
        vec.push(LruCacheNode(n));

        cursor.move_next();
    }

    vec
}

intrusive_adapter!(LruCacheNodeAdapterPerCache = Arc<LruCacheNodeInner>: LruCacheNodeInner { per_cache_link: LinkedListLink });
intrusive_adapter!(LruCacheNodeAdapterGlobal = Arc<LruCacheNodeInner>: LruCacheNodeInner { global_list_link: LinkedListLink });

lazy_static! {
    static ref GLOBAL_LIST_ADAPTER: LruCacheNodeAdapterGlobal = LruCacheNodeAdapterGlobal::new();
    static ref GLOBAL_LIST: Arc<Mutex<LinkedList<LruCacheNodeAdapterGlobal>>> =
        Arc::new(Mutex::new(LinkedList::new(GLOBAL_LIST_ADAPTER.clone())));
}

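The key idea in the module above is that each `LruCacheNodeInner` carries two embedded links, so the same allocation can sit on both a per-cache list and the global list without extra allocations. A minimal, self-contained sketch of that intrusive-list pattern (not Synapse code; it assumes the intrusive-collections crate, and uses plain `LinkedListLink` where the real module uses `LinkedListAtomicLink` for thread safety):

```rust
use std::sync::Arc;

use intrusive_collections::{intrusive_adapter, LinkedList, LinkedListLink};

// One allocation, two embedded links: the node can be on two lists at once.
struct Node {
    global_link: LinkedListLink,
    per_cache_link: LinkedListLink,
    value: u64,
}

intrusive_adapter!(GlobalAdapter = Arc<Node>: Node { global_link: LinkedListLink });
intrusive_adapter!(PerCacheAdapter = Arc<Node>: Node { per_cache_link: LinkedListLink });

fn main() {
    let node = Arc::new(Node {
        global_link: LinkedListLink::new(),
        per_cache_link: LinkedListLink::new(),
        value: 42,
    });

    let mut global = LinkedList::new(GlobalAdapter::new());
    let mut per_cache = LinkedList::new(PerCacheAdapter::new());

    // Each list tracks membership through its own embedded link, so
    // inserting allocates nothing and removal is O(1) from either list.
    global.push_front(node.clone());
    per_cache.push_front(node.clone());

    assert_eq!(global.front().get().unwrap().value, 42);
    assert_eq!(per_cache.front().get().unwrap().value, 42);
}
```

This is exactly why the module defines two `intrusive_adapter!`s over the same node type: each adapter names the link field its list should use.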
@@ -122,14 +122,7 @@ if [ -n "$skip_complement_run" ]; then
  exit
fi

PG_DATA_FOLDER=/tmp/postgres-data

rm -rf $PG_DATA_FOLDER
mkdir -p $PG_DATA_FOLDER
chmod 777 $PG_DATA_FOLDER

export COMPLEMENT_BASE_IMAGE=complement-synapse
export COMPLEMENT_HOST_MOUNTS=$PG_DATA_FOLDER:/var/lib/postgresql/data

extra_test_args=()

@@ -185,5 +178,3 @@ echo "Images built; running complement"
cd "$COMPLEMENT_DIR"

go test -v -tags $test_tags -count=1 "${extra_test_args[@]}" "$@" ./tests/...

rm -rf $PG_DATA_FOLDER

@@ -9,10 +9,8 @@
export PGHOST="localhost"
POSTGRES_DB_NAME="synapse_full_schema.$$"

SQLITE_SCHEMA_FILE="schema.sql.sqlite"
SQLITE_ROWS_FILE="rows.sql.sqlite"
POSTGRES_SCHEMA_FILE="full.sql.postgres"
POSTGRES_ROWS_FILE="rows.sql.postgres"
SQLITE_FULL_SCHEMA_OUTPUT_FILE="full.sql.sqlite"
POSTGRES_FULL_SCHEMA_OUTPUT_FILE="full.sql.postgres"

REQUIRED_DEPS=("matrix-synapse" "psycopg2")

@@ -24,7 +22,7 @@ usage() {
  echo " Username to connect to local postgres instance. The password will be requested"
  echo " during script execution."
  echo "-c"
  echo " CI mode. Prints every command that the script runs."
  echo " CI mode. Enables coverage tracking and prints every command that the script runs."
  echo "-o <path>"
  echo " Directory to output full schema files to."
  echo "-h"
@@ -39,6 +37,11 @@ while getopts "p:co:h" opt; do
    c)
      # Print all commands that are being executed
      set -x

      # Modify required dependencies for coverage
      REQUIRED_DEPS+=("coverage" "coverage-enable-subprocess")

      COVERAGE=1
      ;;
    o)
      command -v realpath > /dev/null || (echo "The -o flag requires the 'realpath' binary to be installed" && exit 1)
@@ -99,7 +102,6 @@ SQLITE_DB=$TMPDIR/homeserver.db
POSTGRES_CONFIG=$TMPDIR/postgres.conf

# Ensure these files are delete on script exit
# TODO: the trap should also drop the temp postgres DB
trap 'rm -rf $TMPDIR' EXIT

cat > "$SQLITE_CONFIG" <<EOF
@@ -145,34 +147,48 @@ python -m synapse.app.homeserver --generate-keys -c "$SQLITE_CONFIG"

# Make sure the SQLite3 database is using the latest schema and has no pending background update.
echo "Running db background jobs..."
synapse/_scripts/update_synapse_database.py --database-config "$SQLITE_CONFIG" --run-background-updates
synapse/_scripts/update_synapse_database.py --database-config --run-background-updates "$SQLITE_CONFIG"

# Create the PostgreSQL database.
echo "Creating postgres database..."
createdb --lc-collate=C --lc-ctype=C --template=template0 "$POSTGRES_DB_NAME"

echo "Running db background jobs..."
synapse/_scripts/update_synapse_database.py --database-config "$POSTGRES_CONFIG" --run-background-updates

echo "Copying data from SQLite3 to Postgres with synapse_port_db..."
if [ -z "$COVERAGE" ]; then
  # No coverage needed
  synapse/_scripts/synapse_port_db.py --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
else
  # Coverage desired
  coverage run synapse/_scripts/synapse_port_db.py --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
fi

# Delete schema_version, applied_schema_deltas and applied_module_schemas tables
# Also delete any shadow tables from fts4
# This needs to be done after synapse_port_db is run
echo "Dropping unwanted db tables..."
SQL="
DROP TABLE schema_version;
DROP TABLE applied_schema_deltas;
DROP TABLE applied_module_schemas;
DROP TABLE event_search_content;
DROP TABLE event_search_segments;
DROP TABLE event_search_segdir;
DROP TABLE event_search_docsize;
DROP TABLE event_search_stat;
DROP TABLE user_directory_search_content;
DROP TABLE user_directory_search_segments;
DROP TABLE user_directory_search_segdir;
DROP TABLE user_directory_search_docsize;
DROP TABLE user_directory_search_stat;
"
sqlite3 "$SQLITE_DB" <<< "$SQL"
psql "$POSTGRES_DB_NAME" -w <<< "$SQL"

echo "Dumping SQLite3 schema to '$OUTPUT_DIR/$SQLITE_SCHEMA_FILE' and '$OUTPUT_DIR/$SQLITE_ROWS_FILE'..."
sqlite3 "$SQLITE_DB" ".schema --indent" > "$OUTPUT_DIR/$SQLITE_SCHEMA_FILE"
sqlite3 "$SQLITE_DB" ".dump --data-only --nosys" > "$OUTPUT_DIR/$SQLITE_ROWS_FILE"
echo "Dumping SQLite3 schema to '$OUTPUT_DIR/$SQLITE_FULL_SCHEMA_OUTPUT_FILE'..."
sqlite3 "$SQLITE_DB" ".dump" > "$OUTPUT_DIR/$SQLITE_FULL_SCHEMA_OUTPUT_FILE"

echo "Dumping Postgres schema to '$OUTPUT_DIR/$POSTGRES_SCHEMA_FILE' and '$OUTPUT_DIR/$POSTGRES_ROWS_FILE'..."
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner "$POSTGRES_DB_NAME" | sed -e '/^$/d' -e '/^--/d' -e 's/public\.//g' -e '/^SET /d' -e '/^SELECT /d' > "$OUTPUT_DIR/$POSTGRES_SCHEMA_FILE"
pg_dump --format=plain --data-only --inserts --no-tablespaces --no-acl --no-owner "$POSTGRES_DB_NAME" | sed -e '/^$/d' -e '/^--/d' -e 's/public\.//g' -e '/^SET /d' -e '/^SELECT /d' > "$OUTPUT_DIR/$POSTGRES_ROWS_FILE"
echo "Dumping Postgres schema to '$OUTPUT_DIR/$POSTGRES_FULL_SCHEMA_OUTPUT_FILE'..."
pg_dump --format=plain --no-tablespaces --no-acl --no-owner $POSTGRES_DB_NAME | sed -e '/^--/d' -e 's/public\.//g' -e '/^SET /d' -e '/^SELECT /d' > "$OUTPUT_DIR/$POSTGRES_FULL_SCHEMA_OUTPUT_FILE"

echo "Cleaning up temporary Postgres database..."
dropdb $POSTGRES_DB_NAME

@@ -1,2 +1 @@
def sum_as_string(a: int, b: int) -> str: ...
def get_rust_file_digest() -> str: ...

@@ -20,8 +20,6 @@ import json
import os
import sys

from synapse.util.rust import check_rust_lib_up_to_date

# Check that we're not running on an unsupported Python version.
if sys.version_info < (3, 7):
    print("Synapse requires Python 3.7 or above.")
@@ -80,6 +78,3 @@ if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
    from synapse.util.patch_inline_callbacks import do_patch

    do_patch()


check_rust_lib_up_to_date()

@@ -67,7 +67,6 @@ from synapse.storage.databases.main.media_repository import (
)
from synapse.storage.databases.main.presence import PresenceBackgroundUpdateStore
from synapse.storage.databases.main.pusher import PusherWorkerStore
from synapse.storage.databases.main.receipts import ReceiptsBackgroundUpdateStore
from synapse.storage.databases.main.registration import (
    RegistrationBackgroundUpdateStore,
    find_max_generated_user_id_localpart,
@@ -204,7 +203,6 @@ class Store(
    PushRuleStore,
    PusherWorkerStore,
    PresenceBackgroundUpdateStore,
    ReceiptsBackgroundUpdateStore,
):
    def execute(self, f: Callable[..., R], *args: Any, **kwargs: Any) -> Awaitable[R]:
        return self.db_pool.runInteraction(f.__name__, f, *args, **kwargs)

@@ -32,7 +32,6 @@ from synapse.appservice import ApplicationService
from synapse.http import get_request_user_agent
from synapse.http.site import SynapseRequest
from synapse.logging.opentracing import (
    SynapseTags,
    active_span,
    force_tracing,
    start_active_span,
@@ -162,12 +161,6 @@ class Auth:
                parent_span.set_tag(
                    "authenticated_entity", requester.authenticated_entity
                )
                # We tag the Synapse instance name so that it's an easy jumping
                # off point into the logs. Can also be used to filter for an
                # instance that is under load.
                parent_span.set_tag(
                    SynapseTags.INSTANCE_NAME, self.hs.get_instance_name()
                )
                parent_span.set_tag("user_id", requester.user.to_string())
                if requester.device_id is not None:
                    parent_span.set_tag("device_id", requester.device_id)

@@ -549,7 +549,8 @@ class FederationClientKeysClaimServlet(BaseFederationServerServlet):


class FederationGetMissingEventsServlet(BaseFederationServerServlet):
    PATH = "/get_missing_events/(?P<room_id>[^/]*)"
    # TODO(paul): Why does this path alone end with "/?" optional?
    PATH = "/get_missing_events/(?P<room_id>[^/]*)/?"

    async def on_POST(
        self,

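The whole difference between the two `PATH` variants above is the trailing `/?`, which makes a trailing slash on the request path optional. A small Rust sketch of the same behaviour (illustrative only, assuming the regex crate; the paths and room ID are made up):

```rust
use regex::Regex;

fn main() {
    // The spec-compliant pattern vs. the lenient one from the diff: the
    // trailing `/?` also accepts requests ending in a slash.
    let strict = Regex::new(r"^/get_missing_events/(?P<room_id>[^/]*)$").unwrap();
    let lenient = Regex::new(r"^/get_missing_events/(?P<room_id>[^/]*)/?$").unwrap();

    let path = "/get_missing_events/!abc:example.org/";
    assert!(!strict.is_match(path));
    assert!(lenient.is_match(path));
}
```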
@@ -45,6 +45,7 @@ from synapse.types import (
    JsonDict,
    StreamKeyType,
    StreamToken,
    UserID,
    get_domain_from_id,
    get_verify_key_from_cross_signing_key,
)
@@ -323,6 +324,8 @@ class DeviceHandler(DeviceWorkerHandler):
            self.device_list_updater.incoming_device_list_update,
        )

        hs.get_distributor().observe("user_left_room", self.user_left_room)

        # Whether `_handle_new_device_update_async` is currently processing.
        self._handle_new_device_update_is_processing = False

@@ -566,6 +569,14 @@ class DeviceHandler(DeviceWorkerHandler):
            StreamKeyType.DEVICE_LIST, position, users=[from_user_id]
        )

    async def user_left_room(self, user: UserID, room_id: str) -> None:
        user_id = user.to_string()
        room_ids = await self.store.get_rooms_for_user(user_id)
        if not room_ids:
            # We no longer share rooms with this user, so we'll no longer
            # receive device updates. Mark this in DB.
            await self.store.mark_remote_user_device_list_as_unsubscribed(user_id)

    async def store_dehydrated_device(
        self,
        user_id: str,

@@ -175,32 +175,6 @@ class E2eKeysHandler:
                user_ids_not_in_cache,
                remote_results,
            ) = await self.store.get_user_devices_from_cache(query_list)

            # Check that the homeserver still shares a room with all cached users.
            # Note that this check may be slightly racy when a remote user leaves a
            # room after we have fetched their cached device list. In the worst case
            # we will do extra federation queries for devices that we had cached.
            cached_users = set(remote_results.keys())
            valid_cached_users = (
                await self.store.get_users_server_still_shares_room_with(
                    remote_results.keys()
                )
            )
            invalid_cached_users = cached_users - valid_cached_users
            if invalid_cached_users:
                # Fix up results. If we get here, there is either a bug in device
                # list tracking, or we hit the race mentioned above.
                user_ids_not_in_cache.update(invalid_cached_users)
                for invalid_user_id in invalid_cached_users:
                    remote_results.pop(invalid_user_id)
                # This log message may be removed if it turns out it's almost
                # entirely triggered by races.
                logger.error(
                    "Devices for %s were cached, but the server no longer shares "
                    "any rooms with them. The cached device lists are stale.",
                    invalid_cached_users,
                )

            for user_id, devices in remote_results.items():
                user_devices = results.setdefault(user_id, {})
                for device_id, device in devices.items():

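The fix-up in that block is plain set arithmetic: anything cached but no longer shared is treated as uncached and re-queried over federation. A tiny Rust sketch of the same operation (illustrative only; the user IDs are made up):

```rust
use std::collections::HashSet;

fn main() {
    let cached: HashSet<&str> = HashSet::from(["@a:hs", "@b:hs", "@c:hs"]);
    let still_shared: HashSet<&str> = HashSet::from(["@a:hs", "@c:hs"]);

    // cached_users - valid_cached_users, as in the handler above: these
    // entries are dropped from the cached results and re-fetched remotely.
    let invalid: HashSet<&str> = cached.difference(&still_shared).copied().collect();

    assert_eq!(invalid, HashSet::from(["@b:hs"]));
}
```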
@@ -203,9 +203,6 @@ if TYPE_CHECKING:

# Helper class

# Matches the number suffix in an instance name like "matrix.org client_reader-8"
STRIP_INSTANCE_NUMBER_SUFFIX_REGEX = re.compile(r"[_-]?\d+$")


class _DummyTagNames:
    """wrapper of opentracings tags. We need to have them if we
@@ -298,8 +295,6 @@ class SynapseTags:
    # Whether the sync response has new data to be returned to the client.
    SYNC_RESULT = "sync.new_data"

    INSTANCE_NAME = "instance_name"

    # incoming HTTP request ID (as written in the logs)
    REQUEST_ID = "request_id"

@@ -446,17 +441,9 @@ def init_tracer(hs: "HomeServer") -> None:

    from jaeger_client.metrics.prometheus import PrometheusMetricsFactory

    # Instance names are opaque strings but by stripping off the number suffix,
    # we can get something that looks like a "worker type", e.g.
    # "client_reader-1" -> "client_reader" so we don't spread the traces across
    # so many services.
    instance_name_by_type = re.sub(
        STRIP_INSTANCE_NUMBER_SUFFIX_REGEX, "", hs.get_instance_name()
    )

    config = JaegerConfig(
        config=hs.config.tracing.jaeger_config,
        service_name=f"{hs.config.server.server_name} {instance_name_by_type}",
        service_name=f"{hs.config.server.server_name} {hs.get_instance_name()}",
        scope_manager=LogContextScopeManager(),
        metrics_factory=PrometheusMetricsFactory(),
    )
@@ -1045,11 +1032,11 @@
        # with JsonResource).
        scope.span.set_operation_name(request.request_metrics.name)

        # set the tags *after* the servlet completes, in case it decided to
        # prioritise the span (tags will get dropped on unprioritised spans)
        request_tags[
            SynapseTags.REQUEST_TAG
        ] = request.request_metrics.start_context.tag

        # set the tags *after* the servlet completes, in case it decided to
        # prioritise the span (tags will get dropped on unprioritised spans)
        for k, v in request_tags.items():
            scope.span.set_tag(k, v)

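The suffix-stripping trick above collapses per-worker instance names into a "worker type" so traces aren't spread across dozens of Jaeger services. The same pattern in a standalone Rust sketch (illustrative only, assuming the regex crate; the worker names are examples):

```rust
use regex::Regex;

fn main() {
    // Same shape as STRIP_INSTANCE_NUMBER_SUFFIX_REGEX: an optional
    // `_` or `-` followed by trailing digits.
    let re = Regex::new(r"[_-]?\d+$").unwrap();
    for name in ["client_reader-1", "federation_sender_12", "master"] {
        println!("{name} -> {}", re.replace(name, ""));
    }
    // client_reader-1      -> client_reader
    // federation_sender_12 -> federation_sender
    // master               -> master
}
```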
@@ -17,7 +17,6 @@ from synapse.events import EventBase
from synapse.push.presentable_names import calculate_room_name, name_from_member_event
from synapse.storage.controllers import StorageControllers
from synapse.storage.databases.main import DataStore
from synapse.util.async_helpers import concurrently_execute


async def get_badge_count(store: DataStore, user_id: str, group_by_room: bool) -> int:
@@ -26,19 +25,13 @@ async def get_badge_count(store: DataStore, user_id: str, group_by_room: bool) -

    badge = len(invites)

    room_notifs = []

    async def get_room_unread_count(room_id: str) -> None:
        room_notifs.append(
            await store.get_unread_event_push_actions_by_room_for_user(
    for room_id in joins:
        notifs = await (
            store.get_unread_event_push_actions_by_room_for_user(
                room_id,
                user_id,
            )
        )

    await concurrently_execute(get_room_unread_count, joins, 10)

    for notifs in room_notifs:
        if notifs.notify_count == 0:
            continue

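The change above replaces a sequential per-room loop with `concurrently_execute(..., 10)`, i.e. bounded-concurrency fan-out. A hedged Rust sketch of the same idea (not Synapse code; it assumes the tokio and futures crates, and `unread_count` is a hypothetical stand-in for `get_unread_event_push_actions_by_room_for_user`):

```rust
use futures::stream::{self, StreamExt};

// Hypothetical stand-in for the per-room unread-count lookup.
async fn unread_count(room_id: &str) -> u64 {
    room_id.len() as u64 // placeholder work
}

#[tokio::main]
async fn main() {
    let joins = vec!["!a:example.org", "!b:example.org", "!c:example.org"];

    // At most 10 lookups in flight at once, mirroring
    // `concurrently_execute(get_room_unread_count, joins, 10)`.
    let counts: Vec<u64> = stream::iter(joins)
        .map(unread_count)
        .buffer_unordered(10)
        .collect()
        .await;

    // As in the grouped-by-room path: one badge unit per room with notifications.
    let badge = counts.into_iter().filter(|&c| c > 0).count();
    println!("badge = {badge}");
}
```

`buffer_unordered(10)` plays the role of the concurrency limit: it keeps the federation of slow per-room queries from all running at once while still overlapping their latency.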
@@ -598,9 +598,9 @@ class EventsPersistenceStorageController:
|
||||
# room
|
||||
state_delta_for_room: Dict[str, DeltaState] = {}
|
||||
|
||||
# Set of remote users which were in rooms the server has left or who may
|
||||
# have left rooms the server is in. We should check if we still share any
|
||||
# rooms and if not we mark their device lists as stale.
|
||||
# Set of remote users which were in rooms the server has left. We
|
||||
# should check if we still share any rooms and if not we mark their
|
||||
# device lists as stale.
|
||||
potentially_left_users: Set[str] = set()
|
||||
|
||||
if not backfilled:
|
||||
@@ -725,20 +725,6 @@ class EventsPersistenceStorageController:
                    current_state = {}
                    delta.no_longer_in_room = True

                # Add all remote users that might have left rooms.
                potentially_left_users.update(
                    user_id
                    for event_type, user_id in delta.to_delete
                    if event_type == EventTypes.Member
                    and not self.is_mine_id(user_id)
                )
                potentially_left_users.update(
                    user_id
                    for event_type, user_id in delta.to_insert.keys()
                    if event_type == EventTypes.Member
                    and not self.is_mine_id(user_id)
                )

                state_delta_for_room[room_id] = delta

        await self.persist_events_store._persist_events_and_state_updates(

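A minimal, self-contained sketch of the "potentially left users" filter above, assuming state deltas keyed by `(event_type, state_key)` pairs as the generator expressions in the diff suggest; all names and data here are illustrative:

```python
MEMBER = "m.room.member"


def is_mine_id(user_id: str) -> bool:
    # Stand-in for the homeserver's "is this one of my users?" check.
    return user_id.endswith(":example.com")


to_delete = [(MEMBER, "@alice:remote.org"), ("m.room.topic", "")]
to_insert = {
    (MEMBER, "@bob:example.com"): "$evt1",
    (MEMBER, "@carol:remote.org"): "$evt2",
}

potentially_left_users: set = set()
# Remote members removed from current state may have left a shared room...
potentially_left_users.update(
    user_id
    for event_type, user_id in to_delete
    if event_type == MEMBER and not is_mine_id(user_id)
)
# ...and remote members whose membership event changed need re-checking too.
potentially_left_users.update(
    user_id
    for event_type, user_id in to_insert.keys()
    if event_type == MEMBER and not is_mine_id(user_id)
)
print(potentially_left_users)  # {'@alice:remote.org', '@carol:remote.org'} (order may vary)
```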
@@ -675,7 +675,6 @@ class ReceiptsWorkerStore(SQLBaseStore):
            values={
                "stream_id": stream_id,
                "event_id": event_id,
                "event_stream_ordering": stream_ordering,
                "data": json_encoder.encode(data),
            },
            # receipts_linearized has a unique constraint on
@@ -831,76 +830,5 @@ class ReceiptsWorkerStore(SQLBaseStore):
        )


class ReceiptsBackgroundUpdateStore(SQLBaseStore):
    POPULATE_RECEIPT_EVENT_STREAM_ORDERING = "populate_event_stream_ordering"

    def __init__(
        self,
        database: DatabasePool,
        db_conn: LoggingDatabaseConnection,
        hs: "HomeServer",
    ):
        super().__init__(database, db_conn, hs)

        self.db_pool.updates.register_background_update_handler(
            self.POPULATE_RECEIPT_EVENT_STREAM_ORDERING,
            self._populate_receipt_event_stream_ordering,
        )

    async def _populate_receipt_event_stream_ordering(
        self, progress: JsonDict, batch_size: int
    ) -> int:
        def _populate_receipt_event_stream_ordering_txn(
            txn: LoggingTransaction,
        ) -> bool:

            if "max_stream_id" in progress:
                max_stream_id = progress["max_stream_id"]
            else:
                txn.execute("SELECT max(stream_id) FROM receipts_linearized")
                res = txn.fetchone()
                if res is None or res[0] is None:
                    return True
                else:
                    max_stream_id = res[0]

            start = progress.get("stream_id", 0)
            stop = start + batch_size

            sql = """
                UPDATE receipts_linearized
                SET event_stream_ordering = (
                    SELECT stream_ordering
                    FROM events
                    WHERE event_id = receipts_linearized.event_id
                )
                WHERE stream_id >= ? AND stream_id < ?
            """
            txn.execute(sql, (start, stop))

            self.db_pool.updates._background_update_progress_txn(
                txn,
                self.POPULATE_RECEIPT_EVENT_STREAM_ORDERING,
                {
                    "stream_id": stop,
                    "max_stream_id": max_stream_id,
                },
            )

            return stop > max_stream_id
        finished = await self.db_pool.runInteraction(
            "_populate_receipt_event_stream_ordering_txn",
            _populate_receipt_event_stream_ordering_txn,
        )

        if finished:
            await self.db_pool.updates._end_background_update(
                self.POPULATE_RECEIPT_EVENT_STREAM_ORDERING
            )

        return batch_size


class ReceiptsStore(ReceiptsWorkerStore, ReceiptsBackgroundUpdateStore):
class ReceiptsStore(ReceiptsWorkerStore):
    pass

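The background update removed here follows a common batched-backfill shape: carry `stream_id`/`max_stream_id` in a progress dict, update one `batch_size` window per run, and finish once the window passes `max_stream_id`. A self-contained sketch of that loop against an in-memory SQLite database, with the schema trimmed to the columns the query touches and toy data standing in for real receipts:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE events (event_id TEXT PRIMARY KEY, stream_ordering BIGINT);
    CREATE TABLE receipts_linearized (
        stream_id BIGINT, event_id TEXT, event_stream_ordering BIGINT
    );
    INSERT INTO events VALUES ('$a', 101), ('$b', 102), ('$c', 103);
    INSERT INTO receipts_linearized (stream_id, event_id)
        VALUES (1, '$a'), (2, '$b'), (3, '$c');
    """
)

progress: dict = {}
batch_size = 2
while True:
    max_stream_id = progress.get("max_stream_id")
    if max_stream_id is None:
        row = conn.execute("SELECT max(stream_id) FROM receipts_linearized").fetchone()
        if row is None or row[0] is None:
            break  # nothing to backfill
        max_stream_id = row[0]

    start = progress.get("stream_id", 0)
    stop = start + batch_size
    # Backfill one window of receipts from the events table.
    conn.execute(
        """
        UPDATE receipts_linearized
        SET event_stream_ordering = (
            SELECT stream_ordering FROM events
            WHERE event_id = receipts_linearized.event_id
        )
        WHERE stream_id >= ? AND stream_id < ?
        """,
        (start, stop),
    )
    progress = {"stream_id": stop, "max_stream_id": max_stream_id}
    if stop > max_stream_id:
        break  # finished; Synapse would end the background update here

print(conn.execute("SELECT * FROM receipts_linearized").fetchall())
# [(1, '$a', 101), (2, '$b', 102), (3, '$c', 103)]
```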
@@ -32,7 +32,10 @@ import attr

from synapse.api.constants import EventTypes, Membership
from synapse.metrics import LaterGauge
from synapse.metrics.background_process_metrics import wrap_as_background_process
from synapse.metrics.background_process_metrics import (
    run_as_background_process,
    wrap_as_background_process,
)
from synapse.storage._base import SQLBaseStore, db_to_json, make_in_list_sql_clause
from synapse.storage.database import (
    DatabasePool,
@@ -88,6 +91,16 @@ class RoomMemberWorkerStore(EventsWorkerStore):
        # at a time. Keyed by room_id.
        self._joined_host_linearizer = Linearizer("_JoinedHostsCache")

        # Is the current_state_events.membership up to date? Or is the
        # background update still running?
        self._current_state_events_membership_up_to_date = False

        txn = db_conn.cursor(
            txn_name="_check_safe_current_state_events_membership_updated"
        )
        self._check_safe_current_state_events_membership_updated_txn(txn)
        txn.close()

        if (
            self.hs.config.worker.run_background_tasks
            and self.hs.config.metrics.metrics_flags.known_servers
@@ -144,6 +157,34 @@ class RoomMemberWorkerStore(EventsWorkerStore):
            self._known_servers_count = max([count, 1])
        return self._known_servers_count

    def _check_safe_current_state_events_membership_updated_txn(
        self, txn: LoggingTransaction
    ) -> None:
        """Checks if it is safe to assume the new current_state_events
        membership column is up to date
        """

        pending_update = self.db_pool.simple_select_one_txn(
            txn,
            table="background_updates",
            keyvalues={"update_name": _CURRENT_STATE_MEMBERSHIP_UPDATE_NAME},
            retcols=["update_name"],
            allow_none=True,
        )

        self._current_state_events_membership_up_to_date = not pending_update

        # If the update is still running, reschedule to run.
        if pending_update:
            self._clock.call_later(
                15.0,
                run_as_background_process,
                "_check_safe_current_state_events_membership_updated",
                self.db_pool.runInteraction,
                "_check_safe_current_state_events_membership_updated",
                self._check_safe_current_state_events_membership_updated_txn,
            )

    @cached(max_entries=100000, iterable=True)
    async def get_users_in_room(self, room_id: str) -> List[str]:
        """
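The constructor above polls until the membership background update finishes, re-scheduling itself every 15 seconds via `clock.call_later` wrapped in `run_as_background_process`. A rough stand-alone sketch of that re-check loop using asyncio's own `call_later`; the pending-updates set and all names here are illustrative:

```python
import asyncio

# Illustrative stand-in for the background_updates table: while the update
# name is still present, the membership column cannot be trusted.
pending_updates = {"current_state_events_membership"}
membership_up_to_date = False


def check_membership_column(loop: asyncio.AbstractEventLoop) -> None:
    global membership_up_to_date
    pending = "current_state_events_membership" in pending_updates
    membership_up_to_date = not pending
    if pending:
        # Still running: re-check in 15 seconds, as the diff does with
        # clock.call_later + run_as_background_process.
        loop.call_later(15.0, check_membership_column, loop)


async def main() -> None:
    check_membership_column(asyncio.get_running_loop())
    print(membership_up_to_date)  # False while the update is pending


asyncio.run(main())
```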
@@ -171,14 +212,31 @@ class RoomMemberWorkerStore(EventsWorkerStore):
        `get_current_hosts_in_room()` and so we can re-use the cache but it's
        not horrible to have here either.
        """
        sql = """
            SELECT c.state_key FROM current_state_events as c
            /* Get the depth of the event from the events table */
            INNER JOIN events AS e USING (event_id)
            WHERE c.type = 'm.room.member' AND c.room_id = ? AND membership = ?
            /* Sorted by lowest depth first */
            ORDER BY e.depth ASC;
        """
        # If we can assume current_state_events.membership is up to date
        # then we can avoid a join, which is a Very Good Thing given how
        # frequently this function gets called.
        if self._current_state_events_membership_up_to_date:
            sql = """
                SELECT c.state_key FROM current_state_events as c
                /* Get the depth of the event from the events table */
                INNER JOIN events AS e USING (event_id)
                WHERE c.type = 'm.room.member' AND c.room_id = ? AND membership = ?
                /* Sorted by lowest depth first */
                ORDER BY e.depth ASC;
            """
        else:
            sql = """
                SELECT c.state_key FROM room_memberships as m
                /* Get the depth of the event from the events table */
                INNER JOIN events AS e USING (event_id)
                INNER JOIN current_state_events as c
                ON m.event_id = c.event_id
                AND m.room_id = c.room_id
                AND m.user_id = c.state_key
                WHERE c.type = 'm.room.member' AND c.room_id = ? AND m.membership = ?
                /* Sorted by lowest depth first */
                ORDER BY e.depth ASC;
            """

        txn.execute(sql, (room_id, Membership.JOIN))
        return [r[0] for r in txn]
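To make the join-avoidance concrete: once `current_state_events.membership` is populated, the one-table query returns the same rows as the three-table fallback. A sketch against in-memory SQLite with toy data, schemas trimmed to the columns these queries touch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript(
    """
    CREATE TABLE events (event_id TEXT PRIMARY KEY, depth INT);
    CREATE TABLE current_state_events (
        event_id TEXT, room_id TEXT, type TEXT, state_key TEXT, membership TEXT
    );
    CREATE TABLE room_memberships (
        event_id TEXT, room_id TEXT, user_id TEXT, membership TEXT
    );
    INSERT INTO events VALUES ('$1', 1), ('$2', 2);
    INSERT INTO current_state_events VALUES
        ('$1', '!r', 'm.room.member', '@a:hs', 'join'),
        ('$2', '!r', 'm.room.member', '@b:hs', 'join');
    INSERT INTO room_memberships VALUES
        ('$1', '!r', '@a:hs', 'join'),
        ('$2', '!r', '@b:hs', 'join');
    """
)

fast = """
    SELECT c.state_key FROM current_state_events AS c
    INNER JOIN events AS e USING (event_id)
    WHERE c.type = 'm.room.member' AND c.room_id = ? AND membership = ?
    ORDER BY e.depth ASC
"""
slow = """
    SELECT c.state_key FROM room_memberships AS m
    INNER JOIN events AS e USING (event_id)
    INNER JOIN current_state_events AS c
        ON m.event_id = c.event_id
        AND m.room_id = c.room_id
        AND m.user_id = c.state_key
    WHERE c.type = 'm.room.member' AND c.room_id = ? AND m.membership = ?
    ORDER BY e.depth ASC
"""
args = ("!r", "join")
# Same answer either way; the fast path just skips the room_memberships join.
assert conn.execute(fast, args).fetchall() == conn.execute(slow, args).fetchall()
print(conn.execute(fast, args).fetchall())  # [('@a:hs',), ('@b:hs',)]
```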
@@ -295,14 +353,28 @@ class RoomMemberWorkerStore(EventsWorkerStore):
        # We do this all in one transaction to keep the cache small.
        # FIXME: get rid of this when we have room_stats

        # Note, rejected events will have a null membership field, so
        # we manually filter them out.
        sql = """
            SELECT count(*), membership FROM current_state_events
            WHERE type = 'm.room.member' AND room_id = ?
            AND membership IS NOT NULL
            GROUP BY membership
        """
        # If we can assume current_state_events.membership is up to date
        # then we can avoid a join, which is a Very Good Thing given how
        # frequently this function gets called.
        if self._current_state_events_membership_up_to_date:
            # Note, rejected events will have a null membership field, so
            # we manually filter them out.
            sql = """
                SELECT count(*), membership FROM current_state_events
                WHERE type = 'm.room.member' AND room_id = ?
                AND membership IS NOT NULL
                GROUP BY membership
            """
        else:
            sql = """
                SELECT count(*), m.membership FROM room_memberships as m
                INNER JOIN current_state_events as c
                ON m.event_id = c.event_id
                AND m.room_id = c.room_id
                AND m.user_id = c.state_key
                WHERE c.type = 'm.room.member' AND c.room_id = ?
                GROUP BY m.membership
            """

        txn.execute(sql, (room_id,))
        res: Dict[str, MemberSummary] = {}
@@ -311,18 +383,30 @@ class RoomMemberWorkerStore(EventsWorkerStore):

        # we order by membership and then fairly arbitrarily by event_id so
        # heroes are consistent
        # Note, rejected events will have a null membership field, so
        # we manually filter them out.
        sql = """
            SELECT state_key, membership, event_id
            FROM current_state_events
            WHERE type = 'm.room.member' AND room_id = ?
            AND membership IS NOT NULL
            ORDER BY
                CASE membership WHEN ? THEN 1 WHEN ? THEN 2 ELSE 3 END ASC,
                event_id ASC
            LIMIT ?
        """
        if self._current_state_events_membership_up_to_date:
            # Note, rejected events will have a null membership field, so
            # we manually filter them out.
            sql = """
                SELECT state_key, membership, event_id
                FROM current_state_events
                WHERE type = 'm.room.member' AND room_id = ?
                AND membership IS NOT NULL
                ORDER BY
                    CASE membership WHEN ? THEN 1 WHEN ? THEN 2 ELSE 3 END ASC,
                    event_id ASC
                LIMIT ?
            """
        else:
            sql = """
                SELECT c.state_key, m.membership, c.event_id
                FROM room_memberships as m
                INNER JOIN current_state_events as c USING (room_id, event_id)
                WHERE c.type = 'm.room.member' AND c.room_id = ?
                ORDER BY
                    CASE m.membership WHEN ? THEN 1 WHEN ? THEN 2 ELSE 3 END ASC,
                    c.event_id ASC
                LIMIT ?
            """

        # 6 is 5 (number of heroes) plus 1, in case one of them is the calling user.
        txn.execute(sql, (room_id, Membership.JOIN, Membership.INVITE, 6))
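The `CASE membership WHEN ? THEN 1 WHEN ? THEN 2 ELSE 3 END` ordering is what makes hero selection deterministic: joins sort ahead of invites, which sort ahead of everything else, with `event_id` as the tie-breaker. A small SQLite demonstration with illustrative rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE members (state_key TEXT, membership TEXT, event_id TEXT)"
)
conn.executemany(
    "INSERT INTO members VALUES (?, ?, ?)",
    [
        ("@leaver:hs", "leave", "$1"),
        ("@joiner:hs", "join", "$2"),
        ("@invitee:hs", "invite", "$3"),
    ],
)
rows = conn.execute(
    """
    SELECT state_key, membership FROM members
    ORDER BY
        CASE membership WHEN ? THEN 1 WHEN ? THEN 2 ELSE 3 END ASC,
        event_id ASC
    LIMIT ?
    """,
    ("join", "invite", 6),
).fetchall()
# joins first, then invites, then everything else:
print(rows)  # [('@joiner:hs', 'join'), ('@invitee:hs', 'invite'), ('@leaver:hs', 'leave')]
```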
@@ -565,15 +649,27 @@ class RoomMemberWorkerStore(EventsWorkerStore):
        # We use `current_state_events` here and not `local_current_membership`
        # as a) this gets called with remote users and b) this only gets called
        # for rooms the server is participating in.
        sql = """
            SELECT room_id, e.instance_name, e.stream_ordering
            FROM current_state_events AS c
            INNER JOIN events AS e USING (room_id, event_id)
            WHERE
                c.type = 'm.room.member'
                AND c.state_key = ?
                AND c.membership = ?
        """
        if self._current_state_events_membership_up_to_date:
            sql = """
                SELECT room_id, e.instance_name, e.stream_ordering
                FROM current_state_events AS c
                INNER JOIN events AS e USING (room_id, event_id)
                WHERE
                    c.type = 'm.room.member'
                    AND c.state_key = ?
                    AND c.membership = ?
            """
        else:
            sql = """
                SELECT room_id, e.instance_name, e.stream_ordering
                FROM current_state_events AS c
                INNER JOIN room_memberships AS m USING (room_id, event_id)
                INNER JOIN events AS e USING (room_id, event_id)
                WHERE
                    c.type = 'm.room.member'
                    AND c.state_key = ?
                    AND m.membership = ?
            """

        txn.execute(sql, (user_id, Membership.JOIN))
        return frozenset(
@@ -611,15 +707,27 @@ class RoomMemberWorkerStore(EventsWorkerStore):
            user_ids,
        )

        sql = f"""
            SELECT c.state_key, room_id, e.instance_name, e.stream_ordering
            FROM current_state_events AS c
            INNER JOIN events AS e USING (room_id, event_id)
            WHERE
                c.type = 'm.room.member'
                AND c.membership = ?
                AND {clause}
        """
        if self._current_state_events_membership_up_to_date:
            sql = f"""
                SELECT c.state_key, room_id, e.instance_name, e.stream_ordering
                FROM current_state_events AS c
                INNER JOIN events AS e USING (room_id, event_id)
                WHERE
                    c.type = 'm.room.member'
                    AND c.membership = ?
                    AND {clause}
            """
        else:
            sql = f"""
                SELECT c.state_key, room_id, e.instance_name, e.stream_ordering
                FROM current_state_events AS c
                INNER JOIN room_memberships AS m USING (room_id, event_id)
                INNER JOIN events AS e USING (room_id, event_id)
                WHERE
                    c.type = 'm.room.member'
                    AND m.membership = ?
                    AND {clause}
            """

        txn.execute(sql, [Membership.JOIN] + args)

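The `{clause}` placeholder and the `[Membership.JOIN] + args` parameter list come from `make_in_list_sql_clause` (imported from `synapse.storage._base` earlier in this diff). Below is a simplified sketch of what such a helper produces; the real helper's signature may well differ:

```python
from typing import Any, Iterable, List, Tuple


def make_in_list_sql_clause_sketch(
    column: str, values: Iterable[Any]
) -> Tuple[str, List[Any]]:
    # Expand a column into "column IN (?, ?, ...)" plus the matching args.
    values = list(values)
    placeholders = ", ".join("?" for _ in values)
    return f"{column} IN ({placeholders})", values


clause, args = make_in_list_sql_clause_sketch("c.state_key", ["@a:hs", "@b:hs"])
sql = f"""
    SELECT c.state_key, room_id FROM current_state_events AS c
    WHERE c.type = 'm.room.member' AND c.membership = ? AND {clause}
"""
print(sql.strip())
print(["join"] + args)  # parameters passed alongside the SQL, as in the diff
```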
@@ -76,7 +76,6 @@ Changes in SCHEMA_VERSION = 72:
    - event_edges.(room_id, is_state) are no longer written to.
    - Tables related to groups are dropped.
    - Unused column application_services_state.last_txn is dropped.
    - Cache invalidation stream id sequence now begins at 2 to match code expectation.
"""

@@ -1,19 +0,0 @@
/* Copyright 2022 Beeper
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

ALTER TABLE receipts_linearized ADD COLUMN event_stream_ordering BIGINT;

INSERT INTO background_updates (update_name, progress_json) VALUES
  ('populate_event_stream_ordering', '{}');
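Note how the deleted migration ties into the Python side: the `update_name` it inserts, `populate_event_stream_ordering`, must match `POPULATE_RECEIPT_EVENT_STREAM_ORDERING` in `ReceiptsBackgroundUpdateStore` above, or the updater has no handler to dispatch to. A toy model of that name-to-handler dispatch, with the registry standing in for Synapse's background-update machinery:

```python
import asyncio
from typing import Awaitable, Callable, Dict

# Toy registry standing in for Synapse's background-update machinery.
handlers: Dict[str, Callable[[dict, int], Awaitable[int]]] = {}


async def populate_event_stream_ordering(progress: dict, batch_size: int) -> int:
    # The real handler backfills receipts_linearized.event_stream_ordering.
    return batch_size


handlers["populate_event_stream_ordering"] = populate_event_stream_ordering

# One row per pending update, as inserted by the migration:
# (update_name, progress_json).
pending = [("populate_event_stream_ordering", "{}")]
for update_name, _progress_json in pending:
    asyncio.run(handlers[update_name]({}, 100))
    print(f"ran handler for {update_name}")
```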
Some files were not shown because too many files have changed in this diff.