
Compare commits

...

102 Commits

Author SHA1 Message Date
H. Shay
404a6e3d79 add support for persisting batched events over replication 2022-09-19 11:49:33 -07:00
H. Shay
99aa2136c2 add functions to persist events as a batch, encapsulate some logic in a helper function 2022-09-19 11:48:59 -07:00
H. Shay
d950d99ab5 add some comments and send events as a batch to be persisted 2022-09-19 11:48:35 -07:00
H. Shay
abc497bae6 Merge branch 'shay/batch_events' of https://github.com/matrix-org/synapse into shay/batch_events 2022-09-19 10:43:03 -07:00
H. Shay
2bb3af5310 fix tests to reflect new reality 2022-09-19 10:11:15 -07:00
H. Shay
fb5ac9208a ditch auth events and pass state map instead 2022-09-19 10:11:14 -07:00
H. Shay
5e72a85f33 split out creating events for batches and add helper methods for duplicated code 2022-09-19 10:11:14 -07:00
H. Shay
b80daa3b2b reduce duplicated code 2022-09-19 10:11:14 -07:00
H. Shay
86d135ac63 fix test to align with new behaviour 2022-09-19 10:11:14 -07:00
H. Shay
f0b65d057b add function to calculate event context without pulling from db 2022-09-19 10:11:14 -07:00
H. Shay
466d2c5dfb create events and contexts separately 2022-09-19 10:11:14 -07:00
H. Shay
0a91b06ede add function to create events without computing event context 2022-09-19 10:11:14 -07:00
H. Shay
ab297fa0a1 fix newsfragment 2022-09-19 10:11:14 -07:00
H. Shay
b419e55d1e newsfragment 2022-09-19 10:11:14 -07:00
H. Shay
8f8a264fb2 fix tests to accommodate new structure of DAG after room creation 2022-09-19 10:11:14 -07:00
H. Shay
7020aeac52 batch some events to send 2022-09-19 10:11:14 -07:00
H. Shay
d06fcba002 split out creating and sending event 2022-09-19 10:11:14 -07:00
Eric Eastwood
44be42338e Add support to purge rows from MSC2716 and other tables when purging a room (#13825)
`event_failed_pull_attempts` added in https://github.com/matrix-org/synapse/pull/13589

MSC2716 related tables added in:

 - https://github.com/matrix-org/synapse/pull/10245/files#diff-3d42dfb44d02f7de3aada105e0bdc1cc9dd7f953cbf0f36c5d0f50827bf0320aR1
    - Renamed in https://github.com/matrix-org/synapse/pull/10838/files#diff-2730bfbe9e688b55e46f9371aefe67dac2bd2b2b7d9d6b92774eea1fcfae156dR1
 - https://github.com/matrix-org/synapse/pull/10498/files#diff-c52bbfbb5921a3f6f023b24343668479d966fac164f13b7c39d2197ce3afa7a5R1
2022-09-16 10:56:56 -05:00
Mathieu Velten
d5292b8017 Fix Docker build when Rust .so has been built locally first (#13811)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
2022-09-16 15:38:54 +00:00
David Robertson
642c4b253d Compare ported to unported PG schemas in portdb test job (#13808) 2022-09-16 16:25:54 +01:00
David Robertson
5e84461653 Minor speedups to CI linting (#13827) 2022-09-16 16:18:32 +01:00
Sean Quah
d64e85197a Remove error spam when users query the keys of departed remote users (#13826)
The error message introduced in #13749 has turned out to be very spammy.
Remove it for now.
2022-09-16 16:16:05 +01:00
Mathieu Velten
384dca53d6 complement: init postgres DB directly inside the target image (#13819)
Doing so in the base postgres image doesn't work with buildah because
changes to a declared VOLUME in the Dockerfile are supposed to be
discarded; cf. https://docs.docker.com/engine/reference/builder/#volume

Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
2022-09-16 17:12:45 +02:00
Quentin Gliech
74f60cec92 Add an admin API endpoint to find a user based on its external ID in an auth provider. (#13810) 2022-09-16 12:29:03 +00:00
reivilibre
f7a77ad717 Update request log format documentation to mention the format used when the authenticated user is controlling another user. (#13794) 2022-09-16 11:48:41 +00:00
Sean Quah
b73cbb8215 Avoid putting rejected events in room state (#13723)
Signed-off-by: Sean Quah <seanq@matrix.org>
2022-09-16 12:45:04 +01:00
Eric Eastwood
6986bcbf39 Document common fix of Poetry problems by removing egg-info (#13785)
`matrix_synapse.egg-info/`

Mentioned at https://matrix.to/#/!vcyiEtMVHIhWXcJAfl:sw1v.org/$aKy_IjrKwb70aTVZWeW_6zt0k7OIZ1YkyZpkP9uiRaM?via=matrix.org&via=element.io&via=beeper.com and many other places.
2022-09-15 16:28:03 -05:00
Eric Eastwood
5093cbf88d Be able to correlate timeouts in reverse-proxy layer in front of Synapse (pull request ID from header) (#13801)
Fix https://github.com/matrix-org/synapse/issues/13685

New config:

```diff
  listeners:
    - port: 8008
      tls: false
      type: http
      x_forwarded: true
+     request_id_header: "cf-ray"
      bind_addresses: ['::1', '127.0.0.1', '0.0.0.0']
```
2022-09-15 15:32:25 -05:00
Eric Eastwood
140af0cdb6 Record any exception when processing a pulled event (#13814)
Part of https://github.com/matrix-org/synapse/issues/13700 and https://github.com/matrix-org/synapse/issues/13356

Follow-up to https://github.com/matrix-org/synapse/pull/13589
2022-09-15 14:40:49 -05:00
Patrick Cloke
b2b0c85279 Support providing an index predicate for upserts. (#13822)
This is useful to upsert against a table which has a unique
partial index while avoiding conflicts.
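As an illustration only (a minimal sketch, not Synapse's actual storage code; the table, column and index names are made up, and it assumes SQLite >= 3.24 for upsert support), an upsert whose conflict target names a partial unique index's predicate looks like this:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE receipts (room_id TEXT, user_id TEXT, thread_id TEXT, event_id TEXT)"
)
# A unique *partial* index: uniqueness is only enforced for unthreaded receipts.
conn.execute(
    "CREATE UNIQUE INDEX receipts_unthreaded ON receipts (room_id, user_id) "
    "WHERE thread_id IS NULL"
)
upsert = (
    "INSERT INTO receipts (room_id, user_id, thread_id, event_id) VALUES (?, ?, NULL, ?) "
    # The index predicate in the conflict target lets the upsert match the partial index.
    "ON CONFLICT (room_id, user_id) WHERE thread_id IS NULL "
    "DO UPDATE SET event_id = excluded.event_id"
)
conn.execute(upsert, ("!room:example.org", "@alice:example.org", "$event1"))
conn.execute(upsert, ("!room:example.org", "@alice:example.org", "$event2"))  # updates, no conflict error
print(conn.execute("SELECT event_id FROM receipts").fetchall())  # [('$event2',)]
```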
2022-09-15 18:28:48 +00:00
David Robertson
742f9f9d78 A third batch of Pydantic validation for rest/client/account.py (#13736) 2022-09-15 18:36:02 +01:00
Andrew Morgan
918c74bfb5 Add a MXCUri class to make working with mxc uri's easier. (#13162) 2022-09-15 12:57:16 +00:00
Eric Eastwood
957e3d74fc Keep track when we try and fail to process a pulled event (#13589)
We can follow-up this PR with:

 1. Only try to backfill from an event if we haven't tried recently -> https://github.com/matrix-org/synapse/issues/13622
 1. When we decide to backfill that event again, process it in the background so it doesn't block and make `/messages` slow when we know it will probably fail again -> https://github.com/matrix-org/synapse/issues/13623
 1. Generally track failures everywhere we try and fail to pull an event over federation -> https://github.com/matrix-org/synapse/issues/13700

Fix https://github.com/matrix-org/synapse/issues/13621

Part of https://github.com/matrix-org/synapse/issues/13356

Mentioned in [internal doc](https://docs.google.com/document/d/1lvUoVfYUiy6UaHB6Rb4HicjaJAU40-APue9Q4vzuW3c/edit#bookmark=id.qv7cj51sv9i5)
2022-09-14 13:57:50 -05:00
Patrick Cloke
666ae87729 Update event push action and receipt tables to support threads. (#13753)
Adds a `thread_id` column to the `event_push_actions`, `event_push_actions_staging`,
and `event_push_summary` tables. This will allow notifications to be segmented by thread
in a future pull request. The `thread_id` column stores the root event ID or the special
value `"main"`.

The `thread_id` column for `event_push_actions` and `event_push_summary` is
backfilled with `"main"` for all existing rows. New entries into `event_push_actions`
and `event_push_actions_staging` will get the proper thread ID.

`receipts_linearized` and `receipts_graph` also gain a `thread_id` column, which is similar,
except `NULL` is a special value meaning the receipt is "unthreaded".

See MSC3771 and MSC3773 for where this data will be useful.
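A minimal sketch of the semantics described above, using simplified stand-in tables rather than the real Synapse schema or migration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Push-action tables: thread_id is NOT NULL; pre-existing rows are backfilled with "main".
conn.execute("CREATE TABLE event_push_actions (event_id TEXT, thread_id TEXT NOT NULL)")
conn.execute("INSERT INTO event_push_actions VALUES ('$old_event', 'main')")       # backfilled row
conn.execute("INSERT INTO event_push_actions VALUES ('$reply', '$thread_root')")   # new row: thread root ID
# Receipt tables: thread_id may be NULL, meaning the receipt is unthreaded.
conn.execute("CREATE TABLE receipts_linearized (event_id TEXT, thread_id TEXT)")
conn.execute("INSERT INTO receipts_linearized VALUES ('$some_event', NULL)")
```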
2022-09-14 17:11:16 +00:00
Patrick Cloke
f2d12ccabe Use partial indices on SQLIte. (#13802)
Partial indices have been supported since SQLite 3.8, but Synapse
now requires >= 3.27, so we can enable support for them.

This requires rebuilding previous indices which were partial on
PostgreSQL, but not on SQLite.
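For illustration only (the table and index names below are made up and are not the indices rebuilt by this change), a partial index on SQLite looks like this:

```python
import sqlite3

# Partial indexes need SQLite >= 3.8; Synapse's new floor of >= 3.27 is well past that.
print("SQLite version:", sqlite3.sqlite_version)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event_push_actions (event_id TEXT, highlight INTEGER)")
# Index only the rows that are actually queried (highlighted actions), keeping
# the index small on SQLite just as it already was on PostgreSQL.
conn.execute(
    "CREATE INDEX event_push_actions_highlights ON event_push_actions (event_id) "
    "WHERE highlight = 1"
)
```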
2022-09-14 12:01:42 -04:00
reivilibre
6302753012 Deduplicate is_server_notices_room. (#13780) 2022-09-14 15:53:18 +00:00
reivilibre
cf65433de2 Fix a memory leak when running the unit tests. (#13798) 2022-09-14 15:29:05 +00:00
Quentin Gliech
eaed4e6113 Remove unused method in synapse.api.auth.Auth. (#13795)
Clean-up from b19060a29b (#13094)
and 73af10f419 (#13093) which removed
all callers.
2022-09-14 10:33:54 -04:00
David Robertson
51a77e990b Remove incorrect migration file from state logical DB (#13788)
* Remove incorrect migration file from `state` logical DB

The table `ex_outlier_stream` is part of the `main` logical DB; it
should not have been created in the `state` logical DB. We remove this
migration now as a tidy-up.

Note: we cannot `DROP TABLE IF EXISTS ex_outlier_stream` in a new
migration, because some (most) instances of Synapse host both of these
logical DBs on the same DB cluster.

* Changelog
2022-09-14 14:16:12 +01:00
Erik Johnston
69beef22c2 Merge remote-tracking branch 'origin/develop' into shay/batch_events 2022-09-14 11:13:37 +01:00
Sean Quah
c73774467e Fix bug in device list caching when remote users leave rooms (#13749)
When a remote user leaves the last room shared with the homeserver, we
have to mark their device list as unsubscribed, otherwise we would hold
on to a stale device list in our cache. Crucially, the device list would
remain cached even after the remote user rejoined the room, which could
lead to E2EE failures until the next change to the remote user's device
list.

Fixes #13651.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-09-14 10:42:57 +01:00
reivilibre
21687ec189 Fix a long-standing spec compliance bug where Synapse would accept a trailing slash on the end of /get_missing_events federation requests. (#13789)
* Don't accept a trailing slash on the end of /get_missing_events

* Newsfile

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>

Signed-off-by: Olivier Wilkinson (reivilibre) <oliverw@matrix.org>
2022-09-14 09:28:12 +01:00
Mathieu Velten
12dacecabd Make sequence cache_invalidation_stream_seq begin at 2 (#13766)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
Co-authored-by: Sean Quah <8349537+squahtx@users.noreply.github.com>
2022-09-13 16:14:28 +02:00
Erik Johnston
9772e362aa Merge branch 'master' into develop 2022-09-13 12:11:53 +01:00
David Robertson
b60d47ab2c Updates to the schema dump script (#13770) 2022-09-13 10:53:11 +01:00
David Robertson
540afb0bfc Simplify CI tests DAG (#13784)
* Simplify CI tests DAG

* Changelog
2022-09-13 10:17:23 +01:00
Richard van der Hoff
41df25bbbd installation.md: require libpq on M1 macs (#13480) 2022-09-13 09:01:21 +00:00
Nick Mills-Barrett
cdbb641232 Add receipts event stream ordering (#13703) 2022-09-13 08:16:37 +01:00
Mathieu Velten
fa2f3d8d0c Fix GHA skippable syntax (#13778)
Signed-off-by: Mathieu Velten <mathieuv@matrix.org>
2022-09-12 17:31:23 +00:00
Brendan Abolivier
7571337445 Fix typo in ratelimiting documentation (#13727) 2022-09-12 14:11:18 +01:00
Erik Johnston
dd7484b562 Fix CI on non-PR builds (#13769)
Mark cargo-test as skippable since it only runs on Rust code changes.
2022-09-12 13:26:33 +01:00
Nick Mills-Barrett
da41a7cd61 Remove check current state membership up to date (#13745)
* Remove checks for membership column in current_state_events
* Add schema script to force through the
  `current_state_events_membership` background job

Contributed by Nick @ Beeper (@fizzadar).
2022-09-12 12:58:33 +01:00
Erik Johnston
ebfeac7c5d Check if Rust lib needs rebuilding. (#13759)
This protects against the common mistake of failing to remember to rebuild Rust code after making changes.
2022-09-12 10:03:42 +00:00
Nick Mills-Barrett
4c4889cac0 Concurrently collect room unread counts for push badges (#13765)
Most of the time this function is heavily cached, but when that isn't
the case, fetching the counts room by room slows down push delivery for
users with many (thousands of) rooms.
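A minimal sketch of the idea (generic asyncio rather than the Twisted machinery Synapse actually uses; all names are hypothetical):

```python
import asyncio

async def unread_count_for_room(room_id: str) -> int:
    # Stand-in for the per-room unread-count lookup (a DB query in practice).
    await asyncio.sleep(0)
    return 1

async def badge_count(room_ids: list[str]) -> int:
    # Fetch every room's count concurrently instead of room by room.
    counts = await asyncio.gather(*(unread_count_for_room(r) for r in room_ids))
    return sum(counts)

print(asyncio.run(badge_count(["!a:example.org", "!b:example.org"])))  # -> 2
```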

Signed off by Nick @ Beeper.
2022-09-09 19:00:21 +01:00
Eric Eastwood
a911ffb42c Tag trace with instance name (#13761)
We tag traces with the Synapse instance name so that it's an easy jumping-off point into the logs. This can also be used to filter for an instance that is under load.

As suggested by @clokep and @reivilibre in,

 - https://github.com/matrix-org/synapse/pull/13729#discussion_r964719258
 - https://github.com/matrix-org/synapse/pull/13729#discussion_r964733578
2022-09-09 11:31:37 -05:00
Eric Eastwood
f694bb71b7 Strip number suffix from instance name to consolidate services that traces are spread over (#13729)
The problem with many services is that it makes it hard to find which service has the trace you want, see https://github.com/jaegertracing/jaeger-ui/issues/985

Previously, we split traces out into services based on their instance name like `matrix.org client_reader-1`, etc but there are many worker instances of the same `client_reader` so there is a lot to click through.

With this PR, all of the traces are just collected under the worker type like `client_reader`, `event_persister` 😇

Note: a Synapse worker instance name is an opaque string; the numeric-suffix convention is just our own thing for the `matrix.org` deployment, but it seems pretty sensible to group things this way.
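A sketch of the suffix-stripping described above (illustrative only, not the exact code in this PR):

```python
import re

def consolidate_instance_name(instance_name: str) -> str:
    # "client_reader-1" -> "client_reader"; names without a numeric suffix
    # pass through unchanged.
    return re.sub(r"-\d+$", "", instance_name)

assert consolidate_instance_name("client_reader-1") == "client_reader"
assert consolidate_instance_name("master") == "master"
```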
2022-09-09 11:30:06 -05:00
Patrick Cloke
3d9f82efcb Use an upsert for receipts_graph. (#13752)
Instead of a delete, then insert.

This was previously done for `receipts_linearized` in
2dc430d36e (#7607).
2022-09-09 07:08:41 -04:00
Erik Johnston
c85c5ace52 Add rust to CI (#13763) 2022-09-09 11:29:04 +01:00
David Robertson
f2d2481e56 Require SQLite >= 3.27.0 (#13760) 2022-09-09 11:14:10 +01:00
Sean Quah
69fa29700e Re-type hint some collections in /sync code as read-only (#13754)
Signed-off-by: Sean Quah <seanq@matrix.org>
2022-09-08 20:13:39 +01:00
reivilibre
5261d2e2e8 Remove unused Prometheus recording rules from synapse-v2.rules and add comments describing where the rest are used. (#13756) 2022-09-08 17:50:15 +00:00
Dirk Klimpel
f799eac7ea Add timestamp to user's consent (#13741)
Co-authored-by: reivilibre <olivier@librepush.net>
2022-09-08 15:41:48 +00:00
Sean Quah
906cead9ca Update docstrings to explain the impact of partial state (#13750)
Update the docstrings for `get_users_in_room` and
`get_current_hosts_in_room` to explain the impact of partial state.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-09-08 15:55:29 +01:00
Sean Quah
89e8b98b65 Avoid raising errors due to malformed IDs in get_current_hosts_in_room (#13748)
Handle malformed user IDs with no colons in `get_current_hosts_in_room`.
It's not currently possible for a malformed user ID to join a room, so
this error would never be hit.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-09-08 15:55:03 +01:00
Sean Quah
8ef0c8ff14 Fix error in is_mine_id when encountering a malformed ID (#13746)
Previously, `is_mine_id` would raise an exception when passed an ID with
no colons. Return `False` instead.
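A minimal sketch of the new behaviour (not Synapse's actual implementation; the signature is simplified):

```python
def is_mine_id(string_id: str, server_name: str) -> bool:
    # IDs look like "@user:domain", "!room:domain", etc. A malformed ID with
    # no colon simply isn't ours, rather than being an error.
    if ":" not in string_id:
        return False
    return string_id.split(":", 1)[1] == server_name

assert not is_mine_id("malformed", "example.org")
assert is_mine_id("@alice:example.org", "example.org")
```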

Fixes #13040.

Signed-off-by: Sean Quah <seanq@matrix.org>
2022-09-08 15:54:36 +01:00
reivilibre
cf11919ddd Fix cache metrics not being updated when not using the legacy exposition module. (#13717) 2022-09-08 15:30:48 +01:00
reivilibre
526f84bc2e Fix Prometheus recording rules to not use legacy metric names. (#13718) 2022-09-08 15:01:42 +01:00
Erik Johnston
1cc729c177 Fix latest deps (#13743) 2022-09-08 13:58:31 +01:00
reivilibre
b7e4bfd005 Fix a bug where Synapse fails to start if a signing key file contains an empty line. (#13738) 2022-09-08 11:18:03 +01:00
Eric Eastwood
d4d3249ded Instrument get_metadata_for_events for tracing (#13730)
When backfilling, `_get_state_ids_after_missing_prev_event` calls [`get_metadata_for_events`](26bc26586b/synapse/handlers/federation_event.py (L1133)). For `#matrix:matrix.org`, it's called with 77k `state_events` which means 77 calls to the database and takes 28 seconds.
2022-09-07 11:41:52 -05:00
Erik Johnston
8d7fcf9b76 Fix latest deps CI (#13734) 2022-09-07 14:07:06 +00:00
Erik Johnston
dc0e896b68 Add some rust caching to CI (#13735) 2022-09-07 13:56:59 +00:00
David Robertson
c46fecd1f2 Correct out-of-date doc for event_cache_size (#13726) 2022-09-07 14:46:11 +01:00
David Robertson
77f3986451 Define SQLite compat policy (#13728) 2022-09-07 12:07:42 +00:00
David Robertson
b58386e37e A second batch of Pydantic models for rest/client/account.py (#13687) 2022-09-07 12:16:10 +01:00
reivilibre
d3d9ca156e Cancel the processing of key query requests when they time out. (#13680) 2022-09-07 12:03:32 +01:00
reivilibre
c2fe48a6ff Rename the EventFormatVersions enum values so that they line up with room version numbers. (#13706) 2022-09-07 11:08:20 +01:00
Connor Davis
bb5b47b62a Add Admin API to Fetch Messages Within a Particular Window (#13672)
This adds two new admin APIs that allow us to fetch messages from a room within a particular time.
2022-09-07 10:54:44 +01:00
reivilibre
26bc26586b Remove the unspecced room_id field in the /hierarchy response. (#13506)
This is a re-do of 57d334a13d (#13365),
which was backed out in 12abd72497 (#13501).

The `room_id` field represented the parent space for each room
and was made redundant by changes in the API shape where the
`children_state` is now nested underneath each `room`.

The room ID of each child is in the `state_key` field and is still
available.
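For reference, a sketch of the resulting response shape (illustrative values; fields follow MSC2946, with unrelated fields omitted):

```python
hierarchy_response = {
    "rooms": [
        {
            "room_id": "!space:example.org",  # the room itself
            "children_state": [
                {
                    "type": "m.space.child",
                    "state_key": "!child:example.org",  # the child's room ID lives here
                    "content": {"via": ["example.org"]},
                    # ...no redundant per-child "room_id" (parent) field any more
                },
            ],
        },
    ],
}
```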
2022-09-06 15:28:44 -04:00
Erik Johnston
c9b7e97355 Add a stub Rust crate (#12595) 2022-09-06 19:01:37 +01:00
Erik Johnston
3d20115115 Fix trial-olddeps (#13725) 2022-09-06 14:21:55 +00:00
David Robertson
a4ecb8e353 Actually fix typechecking with latest types-jsonschema (#13724) 2022-09-06 14:29:16 +01:00
Erik Johnston
b5effc7201 Update trial old deps CI to use poetry 1.2.0 (#13707) 2022-09-06 11:43:04 +00:00
reivilibre
b455c2a5ec Update Grafana dashboard to not use legacy metric names. (#13714) 2022-09-06 12:21:21 +01:00
H. Shay
6630daba47 fix tests to reflect new reality 2022-09-01 14:47:10 -07:00
H. Shay
070f1bb720 ditch auth events and pass state map instead 2022-09-01 14:46:57 -07:00
H. Shay
059746dec4 split out creating events for batches and add helper methods for duplicated code 2022-09-01 14:46:25 -07:00
H. Shay
2d06a39dd0 reduce duplicated code 2022-08-31 15:30:05 -07:00
Shay
172b651832 Merge branch 'develop' into shay/batch_events 2022-08-24 14:18:31 -07:00
H. Shay
b38f3e9217 fix test to align with new behaviour 2022-08-24 14:08:47 -07:00
H. Shay
28eca036bb add function to calculate event context without pulling from db 2022-08-24 14:08:31 -07:00
H. Shay
49686ede02 create events and contexts separately 2022-08-24 14:08:04 -07:00
H. Shay
af2a5f3893 add function to create events without computing event context 2022-08-24 14:07:36 -07:00
H. Shay
066045f03e Merge branch 'shay/batch_events' of https://github.com/matrix-org/synapse into shay/batch_events 2022-08-11 21:29:58 -07:00
H. Shay
47bd7cff4e fix newsfragment 2022-08-11 21:29:35 -07:00
Shay
0da05c5a2c Merge branch 'develop' into shay/batch_events 2022-08-11 21:21:30 -07:00
H. Shay
740a48de76 newsfragment 2022-08-11 21:20:10 -07:00
H. Shay
a7ed07944f Merge branch 'shay/batch_events' of https://github.com/matrix-org/synapse into shay/batch_events 2022-08-11 21:15:57 -07:00
H. Shay
b8ed35841d fix tests to accommodate new structure of DAG after room creation 2022-08-11 21:15:16 -07:00
H. Shay
e215109b74 batch some events to send 2022-08-11 21:13:59 -07:00
Shay
9adffc2c95 Merge branch 'develop' into shay/batch_events 2022-08-09 09:04:46 -07:00
H. Shay
a08f32f8ed split out creating and sending event 2022-08-04 12:16:26 -07:00
210 changed files with 4417 additions and 1205 deletions

View File

@@ -1,31 +0,0 @@
#!/usr/bin/env python
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import psycopg2
# a very simple replacment for `psql`, to make up for the lack of the postgres client
# libraries in the synapse docker image.
# We use "postgres" as a database because it's bound to exist and the "synapse" one
# doesn't exist yet.
db_conn = psycopg2.connect(
user="postgres", host="localhost", password="postgres", dbname="postgres"
)
db_conn.autocommit = True
cur = db_conn.cursor()
for c in sys.argv[1:]:
cur.execute(c)

View File

@@ -5,18 +5,8 @@
# - creates a venv with these old versions using poetry; and finally
# - invokes `trial` to run the tests with old deps.
# Prevent tzdata from asking for user input
export DEBIAN_FRONTEND=noninteractive
set -ex
apt-get update
apt-get install -y \
python3 python3-dev python3-pip python3-venv pipx \
libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev
export LANG="C.UTF-8"
# Prevent virtualenv from auto-updating pip to an incompatible version
export VIRTUALENV_NO_DOWNLOAD=1
@@ -33,12 +23,6 @@ export VIRTUALENV_NO_DOWNLOAD=1
# a `cryptography` compiled against OpenSSL 1.1.
# - Omit systemd: we're not logging to journal here.
# TODO: also replace caret bounds, see https://python-poetry.org/docs/dependency-specification/#version-constraints
# We don't use these yet, but IIRC they are the default bound used when you `poetry add`.
# The sed expression 's/\^/==/g' ought to do the trick. But it would also change
# `python = "^3.7"` to `python = "==3.7", which would mean we fail because olddeps
# runs on 3.8 (#12343).
sed -i \
-e "s/[~>]=/==/g" \
-e '/^python = "^/!s/\^/==/g' \
@@ -55,7 +39,7 @@ sed -i \
# toml file. This means we don't have to ensure compatibility between old deps and
# dev tools.
pip install --user toml
pip install toml wheel
REMOVE_DEV_DEPENDENCIES="
import toml
@@ -69,8 +53,8 @@ with open('pyproject.toml', 'w') as f:
"
python3 -c "$REMOVE_DEV_DEPENDENCIES"
pipx install poetry==1.1.14
~/.local/bin/poetry lock
pip install poetry==1.2.0
poetry lock
echo "::group::Patched pyproject.toml"
cat pyproject.toml
@@ -78,6 +62,3 @@ echo "::endgroup::"
echo "::group::Lockfile after patch"
cat poetry.lock
echo "::endgroup::"
~/.local/bin/poetry install -E "all test"
~/.local/bin/poetry run trial --jobs=2 tests

View File

@@ -32,7 +32,7 @@ else
fi
# Create the PostgreSQL database.
poetry run .ci/scripts/postgres_exec.py "CREATE DATABASE synapse"
psql -c "CREATE DATABASE synapse"
# Port the SQLite databse to postgres so we can check command works against postgres
echo "+++ Port SQLite3 databse to postgres"

View File

@@ -2,27 +2,27 @@
#
# Test script for 'synapse_port_db'.
# - configures synapse and a postgres server.
# - runs the port script on a prepopulated test sqlite db
# - also runs it against an new sqlite db
# - runs the port script on a prepopulated test sqlite db. Checks that the
# return code is zero.
# - reruns the port script on the same sqlite db, targetting the same postgres db.
# Checks that the return code is zero.
# - runs the port script against a new sqlite db. Checks the return code is zero.
#
# Expects Synapse to have been already installed with `poetry install --extras postgres`.
# Expects `poetry` to be available on the `PATH`.
set -xe
set -xe -o pipefail
cd "$(dirname "$0")/../.."
echo "--- Generate the signing key"
# Generate the server's signing key.
poetry run synapse_homeserver --generate-keys -c .ci/sqlite-config.yaml
echo "--- Prepare test database"
# Make sure the SQLite3 database is using the latest schema and has no pending background update.
# Make sure the SQLite3 database is using the latest schema and has no pending background updates.
poetry run update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
# Create the PostgreSQL database.
poetry run .ci/scripts/postgres_exec.py "CREATE DATABASE synapse"
psql -c "CREATE DATABASE synapse"
echo "+++ Run synapse_port_db against test database"
# TODO: this invocation of synapse_port_db (and others below) used to be prepended with `coverage run`,
@@ -45,9 +45,23 @@ rm .ci/test_db.db
poetry run update_synapse_database --database-config .ci/sqlite-config.yaml --run-background-updates
# re-create the PostgreSQL database.
poetry run .ci/scripts/postgres_exec.py \
"DROP DATABASE synapse" \
"CREATE DATABASE synapse"
psql \
-c "DROP DATABASE synapse" \
-c "CREATE DATABASE synapse"
echo "+++ Run synapse_port_db against empty database"
poetry run synapse_port_db --sqlite-database .ci/test_db.db --postgres-config .ci/postgres-config.yaml
echo "--- Create a brand new postgres database from schema"
cp .ci/postgres-config.yaml .ci/postgres-config-unported.yaml
sed -i -e 's/database: synapse/database: synapse_unported/' .ci/postgres-config-unported.yaml
psql -c "CREATE DATABASE synapse_unported"
poetry run update_synapse_database --database-config .ci/postgres-config-unported.yaml --run-background-updates
echo "+++ Comparing ported schema with unported schema"
# Ignore the tables that portdb creates. (Should it tidy them up when the porting is completed?)
psql synapse -c "DROP TABLE port_from_sqlite3;"
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner synapse_unported > unported.sql
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner synapse > ported.sql
# By default, `diff` returns zero if there are no changes and nonzero otherwise
diff -u unported.sql ported.sql | tee schema_diff

View File

@@ -4,8 +4,13 @@
# things to include
!docker
!synapse
!rust
!README.rst
!pyproject.toml
!poetry.lock
!build_rust.py
rust/target
synapse/*.so
**/__pycache__

View File

@@ -5,7 +5,7 @@
#
# As an overview this workflow:
# - checks out develop,
# - installs from source, pulling in the dependencies like a fresh `pip install` would, and
# - installs from source, pulling in the dependencies like a fresh `pip install` would, and
# - runs mypy and test suites in that checkout.
#
# Based on the twisted trunk CI job.
@@ -26,12 +26,19 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
- uses: Swatinem/rust-cache@v2
# The dev dependencies aren't exposed in the wheel metadata (at least with current
# poetry-core versions), so we install with poetry.
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: "3.x"
poetry-version: "1.2.0b1"
poetry-version: "1.2.0"
extras: "all"
# Dump installed versions for debugging.
- run: poetry run pip list > before.txt
@@ -53,6 +60,14 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
- uses: Swatinem/rust-cache@v2
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.postgres-version }}
if: ${{ matrix.postgres-version }}
@@ -69,6 +84,12 @@ jobs:
if: ${{ matrix.postgres-version }}
timeout-minutes: 2
run: until pg_isready -h localhost; do sleep 1; done
# We nuke the local copy, as we've installed synapse into the virtualenv
# (rather than use an editable install, which we no longer support). If we
# don't do this then python can't find the native lib.
- run: rm -rf synapse/
- run: python -m twisted.trial --jobs=2 tests
env:
SYNAPSE_POSTGRES: ${{ matrix.database == 'postgres' || '' }}
@@ -113,6 +134,14 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
- uses: Swatinem/rust-cache@v2
- name: Ensure sytest runs `pip install`
# Delete the lockfile so sytest will `pip install` rather than `poetry install`
run: rm /src/poetry.lock
@@ -187,4 +216,3 @@ jobs:
with:
update_existing: true
filename: .ci/latest_deps_build_failed_issue_template.md

View File

@@ -15,7 +15,7 @@ on:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: write
@@ -89,9 +89,67 @@ jobs:
name: debs
path: debs/*
build-wheels:
name: Build wheels on ${{ matrix.os }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-20.04, macos-10.15]
is_pr:
- ${{ startsWith(github.ref, 'refs/pull/') }}
exclude:
# Don't build macos wheels on PR CI.
- is_pr: true
os: "macos-10.15"
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v3
- name: Install cibuildwheel
run: python -m pip install cibuildwheel==2.9.0 poetry==1.2.0
# Only build a single wheel in CI.
- name: Set env vars.
run: |
echo "CIBW_BUILD="cp37-manylinux_x86_64"" >> $GITHUB_ENV
if: startsWith(github.ref, 'refs/pull/')
- name: Build wheels
run: python -m cibuildwheel --output-dir wheelhouse
env:
# Skip testing for platforms which various libraries don't have wheels
# for, and so need extra build deps.
CIBW_TEST_SKIP: pp39-* *i686* *musl* pp37-macosx*
- uses: actions/upload-artifact@v3
with:
name: Wheel
path: ./wheelhouse/*.whl
build-sdist:
name: "Build pypi distribution files"
uses: "matrix-org/backend-meta/.github/workflows/packaging.yml@v1"
name: Build sdist
runs-on: ubuntu-latest
if: ${{ !startsWith(github.ref, 'refs/pull/') }}
steps:
- uses: actions/checkout@v3
- uses: actions/setup-python@v4
with:
python-version: '3.10'
- run: pip install build
- name: Build sdist
run: python -m build --sdist
- uses: actions/upload-artifact@v2
with:
name: Sdist
path: dist/*.tar.gz
# if it's a tag, create a release and attach the artifacts to it
attach-assets:
@@ -99,6 +157,7 @@ jobs:
if: ${{ !failure() && !cancelled() && startsWith(github.ref, 'refs/tags/') }}
needs:
- build-debs
- build-wheels
- build-sdist
runs-on: ubuntu-latest
steps:

View File

@@ -10,14 +10,33 @@ concurrency:
cancel-in-progress: true
jobs:
# Job to detect what has changed so we don't run e.g. Rust checks on PRs that
# don't modify Rust code.
changes:
runs-on: ubuntu-latest
outputs:
rust: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.rust }}
steps:
- uses: dorny/paths-filter@v2
id: filter
# We only check on PRs
if: startsWith(github.ref, 'refs/pull/')
with:
filters: |
rust:
- 'rust/**'
- 'Cargo.toml'
check-sampleconfig:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions/setup-python@v2
- run: pip install .
- run: scripts-dev/generate_sample_config.sh --check
- run: scripts-dev/config-lint.sh
- uses: matrix-org/setup-python-poetry@v1
with:
extras: "all"
- run: poetry run scripts-dev/generate_sample_config.sh --check
- run: poetry run scripts-dev/config-lint.sh
check-schema-delta:
runs-on: ubuntu-latest
@@ -59,16 +78,59 @@ jobs:
- uses: actions/checkout@v2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: matrix-org/setup-python-poetry@v1
with:
extras: "all"
- run: poetry run scripts-dev/check_pydantic_models.py
lint-clippy:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: 1.61.0
override: true
components: clippy
- uses: Swatinem/rust-cache@v2
- run: cargo clippy
lint-rustfmt:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: 1.61.0
override: true
components: rustfmt
- uses: Swatinem/rust-cache@v2
- run: cargo fmt --check
# Dummy step to gate other tests on without repeating the whole list
linting-done:
if: ${{ !cancelled() }} # Run this even if prior jobs were skipped
needs: [lint, lint-crlf, lint-newsfile, lint-pydantic, check-sampleconfig, check-schema-delta]
needs:
- lint
- lint-crlf
- lint-newsfile
- lint-pydantic
- check-sampleconfig
- check-schema-delta
- lint-clippy
- lint-rustfmt
runs-on: ubuntu-latest
steps:
- run: "true"
@@ -135,16 +197,54 @@ jobs:
# Note: sqlite only; no postgres
if: ${{ !cancelled() && !failure() }} # Allow previous steps to be skipped, but not fail
needs: linting-done
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v2
- name: Test with old deps
uses: docker://ubuntu:focal # For old python and sqlite
# Note: focal seems to be using 3.8, but the oldest is 3.7?
# See https://github.com/matrix-org/synapse/issues/12343
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
workdir: /github/workspace
entrypoint: .ci/scripts/test_old_deps.sh
toolchain: 1.61.0
override: true
- uses: Swatinem/rust-cache@v2
# There aren't wheels for some of the older deps, so we need to install
# their build dependencies
- run: |
sudo apt-get -qq install build-essential libffi-dev python-dev \
libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev
- uses: actions/setup-python@v4
with:
python-version: '3.7'
# Calculating the old-deps actually takes a bunch of time, so we cache the
# pyproject.toml / poetry.lock. We need to cache pyproject.toml as
# otherwise the `poetry install` step will error due to the poetry.lock
# file being outdated.
#
# This caches the output of `Prepare old deps`, which should generate the
# same `pyproject.toml` and `poetry.lock` for a given `pyproject.toml` input.
- uses: actions/cache@v3
id: cache-poetry-old-deps
name: Cache poetry.lock
with:
path: |
poetry.lock
pyproject.toml
key: poetry-old-deps2-${{ hashFiles('pyproject.toml') }}
- name: Prepare old deps
if: steps.cache-poetry-old-deps.outputs.cache-hit != 'true'
run: .ci/scripts/prepare_old_deps.sh
# We only now install poetry so that `setup-python-poetry` caches the
# right poetry.lock's dependencies.
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: '3.7'
extras: "all test"
- run: poetry run trial -j2 tests
- name: Dump logs
# Logs are most useful when the command fails, always include them.
if: ${{ always() }}
@@ -216,6 +316,14 @@ jobs:
- uses: actions/checkout@v2
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: 1.61.0
override: true
- uses: Swatinem/rust-cache@v2
- name: Run SyTest
run: /bootstrap.sh synapse
working-directory: /src
@@ -254,18 +362,22 @@ jobs:
steps:
- uses: actions/checkout@v2
- run: sudo apt-get -qq install xmlsec1
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
with:
extras: "postgres"
- run: .ci/scripts/test_export_data_command.sh
env:
PGHOST: localhost
PGUSER: postgres
PGPASSWORD: postgres
PGDATABASE: postgres
portdb:
if: ${{ !failure() && !cancelled() }} # Allow previous steps to be skipped, but not fail
needs: linting-done
runs-on: ubuntu-latest
env:
TOP: ${{ github.workspace }}
strategy:
matrix:
include:
@@ -291,12 +403,27 @@ jobs:
steps:
- uses: actions/checkout@v2
- run: sudo apt-get -qq install xmlsec1
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.python-version }}
extras: "postgres"
- run: .ci/scripts/test_synapse_port_db.sh
id: run_tester_script
env:
PGHOST: localhost
PGUSER: postgres
PGPASSWORD: postgres
PGDATABASE: postgres
- name: "Upload schema differences"
uses: actions/upload-artifact@v3
if: ${{ failure() && !cancelled() && steps.run_tester_script.outcome == 'failure' }}
with:
name: Schema dumps
path: |
unported.sql
ported.sql
schema_diff
complement:
if: "${{ !failure() && !cancelled() }}"
@@ -322,6 +449,13 @@ jobs:
with:
path: synapse
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: 1.61.0
override: true
- uses: Swatinem/rust-cache@v2
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
@@ -331,20 +465,36 @@ jobs:
shell: bash
name: Run Complement Tests
cargo-test:
if: ${{ needs.changes.outputs.rust == 'true' }}
runs-on: ubuntu-latest
needs:
- linting-done
- changes
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: 1.61.0
override: true
- uses: Swatinem/rust-cache@v2
- run: cargo test
# a job which marks all the other jobs as complete, thus allowing PRs to be merged.
tests-done:
if: ${{ always() }}
needs:
- check-sampleconfig
- lint
- lint-crlf
- lint-newsfile
- trial
- trial-olddeps
- sytest
- export-data
- portdb
- complement
- cargo-test
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@v2
@@ -352,5 +502,7 @@ jobs:
needs: ${{ toJSON(needs) }}
# The newsfile lint may be skipped on non PR builds
skippable:
# Cargo test is skipped if there is no changes on Rust code
skippable: |
lint-newsfile
cargo-test

View File

@@ -16,6 +16,14 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: "3.x"
@@ -34,6 +42,14 @@ jobs:
steps:
- uses: actions/checkout@v2
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: "3.x"
@@ -66,6 +82,14 @@ jobs:
steps:
- uses: actions/checkout@v2
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
toolchain: stable
override: true
- uses: Swatinem/rust-cache@v2
- name: Patch dependencies
# Note: The poetry commands want to create a virtualenv in /src/.venv/,
# but the sytest-synapse container expects it to be in /venv/.

7
.gitignore vendored
View File

@@ -60,3 +60,10 @@ book/
# complement
/complement-*
/master.tar.gz
# rust
/target/
/synapse/*.so
# Poetry will create a setup.py, which we don't want to include.
/setup.py

5
Cargo.toml Normal file
View File

@@ -0,0 +1,5 @@
# We make the whole Synapse folder a workspace so that we can run `cargo`
# commands from the root (rather than having to cd into rust/).
[workspace]
members = ["rust"]

20
build_rust.py Normal file
View File

@@ -0,0 +1,20 @@
# A build script for poetry that adds the rust extension.
import os
from typing import Any, Dict
from setuptools_rust import Binding, RustExtension
def build(setup_kwargs: Dict[str, Any]) -> None:
original_project_dir = os.path.dirname(os.path.realpath(__file__))
cargo_toml_path = os.path.join(original_project_dir, "rust", "Cargo.toml")
extension = RustExtension(
target="synapse.synapse_rust",
path=cargo_toml_path,
binding=Binding.PyO3,
py_limited_api=True,
)
setup_kwargs.setdefault("rust_extensions", []).append(extension)
setup_kwargs["zip_safe"] = False

1
changelog.d/12595.misc Normal file
View File

@@ -0,0 +1 @@
Add a stub Rust crate.

1
changelog.d/13162.misc Normal file
View File

@@ -0,0 +1 @@
Bump the minimum dependency of `matrix_common` to 1.3.0 to make use of the `MXCUri` class. Use `MXCUri` to simplify media retention test code.

1
changelog.d/13480.doc Normal file
View File

@@ -0,0 +1 @@
Note that `libpq` is required on ARM-based Macs.

1
changelog.d/13487.misc Normal file
View File

@@ -0,0 +1 @@
Refactor `_send_events_for_new_room` to separate creating and sending events.

1
changelog.d/13506.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a bug introduced in Synapse v1.41.0 where the `/hierarchy` API returned non-standard information (a `room_id` field under each entry in `children_state`).

View File

@@ -0,0 +1 @@
Keep track when we fail to process a pulled event over federation so we can intelligently back-off in the future.

View File

@@ -0,0 +1 @@
Add admin APIs to fetch messages within a particular window of time.

View File

@@ -0,0 +1 @@
Cancel the processing of key query requests when they time out.

View File

@@ -0,0 +1 @@
Improve validation of request bodies for the following client-server API endpoints: [`/account/3pid/msisdn/requestToken`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3account3pidmsisdnrequesttoken) and [`/org.matrix.msc3720/account_status`](https://github.com/matrix-org/matrix-spec-proposals/blob/babolivier/user_status/proposals/3720-account-status.md#post-_matrixclientv1account_status).

1
changelog.d/13703.misc Normal file
View File

@@ -0,0 +1 @@
Add & populate `event_stream_ordering` column on receipts table for future optimisation of push action processing. Contributed by Nick @ Beeper (@fizzadar).

1
changelog.d/13706.misc Normal file
View File

@@ -0,0 +1 @@
Rename the `EventFormatVersions` enum values so that they line up with room version numbers.

1
changelog.d/13707.misc Normal file
View File

@@ -0,0 +1 @@
Update trial old deps CI to use poetry 1.2.0.

1
changelog.d/13714.misc Normal file
View File

@@ -0,0 +1 @@
Add experimental configuration option to allow disabling legacy Prometheus metric names.

1
changelog.d/13717.misc Normal file
View File

@@ -0,0 +1 @@
Add experimental configuration option to allow disabling legacy Prometheus metric names.

1
changelog.d/13718.misc Normal file
View File

@@ -0,0 +1 @@
Add experimental configuration option to allow disabling legacy Prometheus metric names.

1
changelog.d/13723.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long-standing bug where previously rejected events could end up in room state because they pass auth checks given the current state of the room.

1
changelog.d/13724.misc Normal file
View File

@@ -0,0 +1 @@
Fix typechecking with latest types-jsonschema.

1
changelog.d/13725.misc Normal file
View File

@@ -0,0 +1 @@
Update trial old deps CI to use poetry 1.2.0.

1
changelog.d/13726.doc Normal file
View File

@@ -0,0 +1 @@
Fix a mistake in the config manual: the `event_cache_size` _is_ scaled by `caches.global_factor`. The documentation had been incorrect since Synapse 1.22.

1
changelog.d/13727.doc Normal file
View File

@@ -0,0 +1 @@
Fix a typo in the documentation for the login ratelimiting configuration.

1
changelog.d/13728.doc Normal file
View File

@@ -0,0 +1 @@
Define Synapse's compatibility policy for SQLite versions.

1
changelog.d/13729.misc Normal file
View File

@@ -0,0 +1 @@
Strip number suffix from instance name to consolidate services that traces are spread over.

1
changelog.d/13730.misc Normal file
View File

@@ -0,0 +1 @@
Instrument `get_metadata_for_events` for understandable traces in Jaeger.

1
changelog.d/13734.misc Normal file
View File

@@ -0,0 +1 @@
Add a stub Rust crate.

1
changelog.d/13735.misc Normal file
View File

@@ -0,0 +1 @@
Add a stub Rust crate.

View File

@@ -0,0 +1 @@
Improve validation of request bodies for the following client-server API endpoints: [`/account/3pid/add`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3account3pidadd), [`/account/3pid/bind`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3account3pidbind), [`/account/3pid/delete`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3account3piddelete) and [`/account/3pid/unbind`](https://spec.matrix.org/v1.3/client-server-api/#post_matrixclientv3account3pidunbind).

1
changelog.d/13738.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a bug where Synapse fails to start if a signing key file contains an empty line.

View File

@@ -0,0 +1 @@
Document the timestamp when a user accepts the consent, if [consent tracking](https://matrix-org.github.io/synapse/latest/consent_tracking.html) is used.

1
changelog.d/13743.misc Normal file
View File

@@ -0,0 +1 @@
Add a stub Rust crate.

1
changelog.d/13745.misc Normal file
View File

@@ -0,0 +1 @@
Remove old queries to join room memberships to current state events. Contributed by Nick @ Beeper (@fizzadar).

1
changelog.d/13746.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long standing bug where Synapse would fail to handle malformed user IDs or room aliases gracefully in certain cases.

1
changelog.d/13748.misc Normal file
View File

@@ -0,0 +1 @@
Avoid raising an error due to malformed user IDs in `get_current_hosts_in_room`. Malformed user IDs cannot currently join a room, so this error would not be hit.

1
changelog.d/13749.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long standing bug where device lists would remain cached when remote users left and rejoined the last room shared with the local homeserver.

1
changelog.d/13750.misc Normal file
View File

@@ -0,0 +1 @@
Update the docstrings for `get_users_in_room` and `get_current_hosts_in_room` to explain the impact of partial state.

1
changelog.d/13752.misc Normal file
View File

@@ -0,0 +1 @@
Use an additional database query when persisting receipts.

1
changelog.d/13753.misc Normal file
View File

@@ -0,0 +1 @@
Preparatory work for storing thread IDs for notifications and receipts.

1
changelog.d/13754.misc Normal file
View File

@@ -0,0 +1 @@
Re-type hint some collections as read-only.

1
changelog.d/13756.misc Normal file
View File

@@ -0,0 +1 @@
Remove unused Prometheus recording rules from `synapse-v2.rules` and add comments describing where the rest are used.

1
changelog.d/13759.misc Normal file
View File

@@ -0,0 +1 @@
Add a check for editable installs if the Rust library needs rebuilding.

View File

@@ -0,0 +1 @@
Synapse will now refuse to start if configured to use SQLite < 3.27.

1
changelog.d/13761.misc Normal file
View File

@@ -0,0 +1 @@
Tag traces with the instance name to be able to easily jump into the right logs and filter traces by instance.

1
changelog.d/13763.misc Normal file
View File

@@ -0,0 +1 @@
Add a stub Rust crate.

1
changelog.d/13765.misc Normal file
View File

@@ -0,0 +1 @@
Concurrently fetch room push actions when calculating badge counts. Contributed by Nick @ Beeper (@fizzadar).

1
changelog.d/13766.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long-standing bug where the `cache_invalidation_stream_seq` sequence would begin at 1 instead of 2.

1
changelog.d/13769.misc Normal file
View File

@@ -0,0 +1 @@
Add a stub Rust crate.

1
changelog.d/13770.misc Normal file
View File

@@ -0,0 +1 @@
Update the script which makes full schema dumps.

1
changelog.d/13778.misc Normal file
View File

@@ -0,0 +1 @@
Add a stub Rust crate.

1
changelog.d/13780.misc Normal file
View File

@@ -0,0 +1 @@
Deduplicate `is_server_notices_room`.

1
changelog.d/13784.misc Normal file
View File

@@ -0,0 +1 @@
Simplify the dependency DAG in the tests workflow.

1
changelog.d/13785.doc Normal file
View File

@@ -0,0 +1 @@
Add docs for common fix of deleting the `matrix_synapse.egg-info/` directory for fixing Python dependency problems.

1
changelog.d/13788.misc Normal file
View File

@@ -0,0 +1 @@
Remove an old, incorrect migration file.

1
changelog.d/13789.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long-standing spec compliance bug where Synapse would accept a trailing slash on the end of `/get_missing_events` federation requests.

1
changelog.d/13794.doc Normal file
View File

@@ -0,0 +1 @@
Update request log format documentation to mention the format used when the authenticated user is controlling another user.

1
changelog.d/13795.misc Normal file
View File

@@ -0,0 +1 @@
Remove unused method in `synapse.api.auth.Auth`.

1
changelog.d/13798.misc Normal file
View File

@@ -0,0 +1 @@
Fix a memory leak when running the unit tests.

View File

@@ -0,0 +1 @@
Add `listeners[x].request_id_header` config to specify which request header to extract and use as the request ID in order to correlate requests from a reverse-proxy.

1
changelog.d/13802.misc Normal file
View File

@@ -0,0 +1 @@
Use partial indices on SQLite.

1
changelog.d/13808.misc Normal file
View File

@@ -0,0 +1 @@
Check that portdb generates the same postgres schema as that in the source tree.

View File

@@ -0,0 +1 @@
Add an admin API endpoint to find a user based on its external ID in an auth provider.

1
changelog.d/13811.misc Normal file
View File

@@ -0,0 +1 @@
Fix Docker build when Rust .so has been built locally first.

View File

@@ -0,0 +1 @@
Keep track when we fail to process a pulled event over federation so we can intelligently back-off in the future.

1
changelog.d/13819.misc Normal file
View File

@@ -0,0 +1 @@
complement: init postgres DB directly inside the target image instead of the base postgres image to fix building using Buildah.

1
changelog.d/13822.misc Normal file
View File

@@ -0,0 +1 @@
Support providing an index predicate clause when doing upserts.

1
changelog.d/13825.bugfix Normal file
View File

@@ -0,0 +1 @@
Delete associated data from `event_failed_pull_attempts`, `insertion_events`, and `insertion_event_extremities` when purging the room.

1
changelog.d/13826.bugfix Normal file
View File

@@ -0,0 +1 @@
Fix a long standing bug where device lists would remain cached when remote users left and rejoined the last room shared with the local homeserver.

1
changelog.d/13827.misc Normal file
View File

@@ -0,0 +1 @@
Minor speedups to linting in CI.

View File

@@ -335,7 +335,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_storage_events_persisted_events{instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_storage_events_persisted_events_total{instance=\"$instance\"}[$bucket_size]))",
"hide": false,
"instant": false,
"legendFormat": "Events",
@@ -1423,7 +1423,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_background_process_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])+rate(synapse_background_process_ru_stime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_background_process_ru_utime_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])+rate(synapse_background_process_ru_stime_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"hide": false,
"instant": false,
@@ -1804,7 +1804,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_storage_events_persisted_events{instance=\"$instance\"}[$bucket_size])) without (job,index)",
"expr": "sum(rate(synapse_storage_events_persisted_events_total{instance=\"$instance\"}[$bucket_size])) without (job,index)",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -2437,7 +2437,7 @@
"uid": "$datasource"
},
"exemplar": false,
"expr": "sum(rate(synapse_state_res_db_for_biggest_room_seconds{instance=\"$instance\"}[1m]))",
"expr": "sum(rate(synapse_state_res_db_for_biggest_room_seconds_total{instance=\"$instance\"}[1m]))",
"format": "time_series",
"hide": false,
"instant": false,
@@ -2451,7 +2451,7 @@
"uid": "$datasource"
},
"exemplar": false,
"expr": "sum(rate(synapse_state_res_cpu_for_biggest_room_seconds{instance=\"$instance\"}[1m]))",
"expr": "sum(rate(synapse_state_res_cpu_for_biggest_room_seconds_total{instance=\"$instance\"}[1m]))",
"format": "time_series",
"hide": false,
"instant": false,
@@ -3425,7 +3425,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_background_process_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])+rate(synapse_background_process_ru_stime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_background_process_ru_utime_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])+rate(synapse_background_process_ru_stime_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -3518,7 +3518,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_background_process_db_txn_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) + rate(synapse_background_process_db_sched_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_background_process_db_txn_duration_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) + rate(synapse_background_process_db_sched_duration_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"hide": false,
"intervalFactor": 1,
@@ -3726,7 +3726,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_federation_client_sent_transactions{instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_federation_client_sent_transactions_total{instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "successful txn rate",
@@ -3736,7 +3736,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_util_metrics_block_count{block_name=\"_send_new_transaction\",instance=\"$instance\"}[$bucket_size]) - ignoring (block_name) rate(synapse_federation_client_sent_transactions{instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_util_metrics_block_count_total{block_name=\"_send_new_transaction\",instance=\"$instance\"}[$bucket_size]) - ignoring (block_name) rate(synapse_federation_client_sent_transactions_total{instance=\"$instance\"}[$bucket_size]))",
"legendFormat": "failed txn rate",
"refId": "B"
}
@@ -3826,7 +3826,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_federation_server_received_pdus{instance=~\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_federation_server_received_pdus_total{instance=~\"$instance\"}[$bucket_size]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "pdus",
@@ -3836,7 +3836,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_federation_server_received_edus{instance=~\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_federation_server_received_edus_total{instance=~\"$instance\"}[$bucket_size]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "edus",
@@ -3928,7 +3928,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_federation_client_sent_pdu_destinations:total{instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_federation_client_sent_pdu_destinations:total_total{instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -3939,7 +3939,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_federation_client_sent_edus{instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_federation_client_sent_edus_total{instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "edus",
@@ -5042,7 +5042,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_http_httppusher_http_pushes_processed{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) and on (instance, job, index) (synapse_http_httppusher_http_pushes_failed + synapse_http_httppusher_http_pushes_processed) > 0",
"expr": "rate(synapse_http_httppusher_http_pushes_processed_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) and on (instance, job, index) (synapse_http_httppusher_http_pushes_failed_total + synapse_http_httppusher_http_pushes_processed_total) > 0",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -5054,7 +5054,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_http_httppusher_http_pushes_failed{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) and on (instance, job, index) (synapse_http_httppusher_http_pushes_failed + synapse_http_httppusher_http_pushes_processed) > 0",
"expr": "rate(synapse_http_httppusher_http_pushes_failed_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) and on (instance, job, index) (synapse_http_httppusher_http_pushes_failed_total + synapse_http_httppusher_http_pushes_processed_total) > 0",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "failed {{job}}",
@@ -5268,12 +5268,12 @@
"uid": "${DS_PROMETHEUS}"
},
"exemplar": true,
"expr": "sum(rate(synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "{{index}}",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_state_size_counter_total",
"refId": "A",
"step": 2
}
@@ -5369,12 +5369,12 @@
"uid": "$datasource"
},
"exemplar": true,
"expr": "sum(rate(synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "{{index}}",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total",
"refId": "A",
"step": 2
}
@@ -5475,12 +5475,12 @@
"uid": "${DS_PROMETHEUS}"
},
"exemplar": true,
"expr": "sum(rate(synapse_util_caches_cache:hits{job=\"$job\",index=~\"$index\",name=\"push_rules_delta_state_cache_metric\",instance=\"$instance\"}[$bucket_size]))/sum(rate(synapse_util_caches_cache:total{job=\"$job\",index=~\"$index\", name=\"push_rules_delta_state_cache_metric\",instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_util_caches_cache_hits{job=\"$job\",index=~\"$index\",name=\"push_rules_delta_state_cache_metric\",instance=\"$instance\"}[$bucket_size]))/sum(rate(synapse_util_caches_cache{job=\"$job\",index=~\"$index\", name=\"push_rules_delta_state_cache_metric\",instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "Hit Rate",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total",
"refId": "A",
"step": 2
},
@@ -5490,7 +5490,7 @@
"uid": "${DS_PROMETHEUS}"
},
"exemplar": true,
"expr": "sum(rate(synapse_util_caches_cache:total{job=\"$job\",index=~\"$index\", name=\"push_rules_delta_state_cache_metric\",instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_util_caches_cache{job=\"$job\",index=~\"$index\", name=\"push_rules_delta_state_cache_metric\",instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -5598,12 +5598,12 @@
"uid": "${DS_PROMETHEUS}"
},
"exemplar": true,
"expr": "sum(rate(synapse_util_caches_cache:hits{job=\"$job\",index=~\"$index\",name=\"room_push_rule_cache\",instance=\"$instance\"}[$bucket_size]))/sum(rate(synapse_util_caches_cache:total{job=\"$job\",index=~\"$index\", name=\"room_push_rule_cache\",instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_util_caches_cache_hits{job=\"$job\",index=~\"$index\",name=\"room_push_rule_cache\",instance=\"$instance\"}[$bucket_size]))/sum(rate(synapse_util_caches_cache{job=\"$job\",index=~\"$index\", name=\"room_push_rule_cache\",instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "Hit Rate",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total",
"refId": "A",
"step": 2
},
@@ -5613,7 +5613,7 @@
"uid": "${DS_PROMETHEUS}"
},
"exemplar": true,
"expr": "sum(rate(synapse_util_caches_cache:total{job=\"$job\",index=~\"$index\", name=\"room_push_rule_cache\",instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_util_caches_cache{job=\"$job\",index=~\"$index\", name=\"room_push_rule_cache\",instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -5719,12 +5719,12 @@
"uid": "${DS_PROMETHEUS}"
},
"exemplar": true,
"expr": "sum(rate(synapse_util_caches_cache:hits{job=\"$job\",index=~\"$index\",name=\"_get_rules_for_room\",instance=\"$instance\"}[$bucket_size]))/sum(rate(synapse_util_caches_cache:total{job=\"$job\",index=~\"$index\", name=\"_get_rules_for_room\",instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_util_caches_cache_hits{job=\"$job\",index=~\"$index\",name=\"_get_rules_for_room\",instance=\"$instance\"}[$bucket_size]))/sum(rate(synapse_util_caches_cache{job=\"$job\",index=~\"$index\", name=\"_get_rules_for_room\",instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
"legendFormat": "Hit Rate",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter",
"metric": "synapse_push_bulk_push_rule_evaluator_push_rules_invalidation_counter_total",
"refId": "A",
"step": 2
},
@@ -5734,7 +5734,7 @@
"uid": "${DS_PROMETHEUS}"
},
"exemplar": true,
"expr": "sum(rate(synapse_util_caches_cache:total{job=\"$job\",index=~\"$index\", name=\"_get_rules_for_room\",instance=\"$instance\"}[$bucket_size]))",
"expr": "sum(rate(synapse_util_caches_cache{job=\"$job\",index=~\"$index\", name=\"_get_rules_for_room\",instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -6087,7 +6087,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "topk(10, rate(synapse_storage_transaction_time_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"expr": "topk(10, rate(synapse_storage_transaction_time_count_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -6187,7 +6187,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_storage_transaction_time_sum{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_storage_transaction_time_sum_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"instant": false,
"interval": "",
@@ -6287,7 +6287,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_storage_transaction_time_sum{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])/rate(synapse_storage_transaction_time_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_storage_transaction_time_sum_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])/rate(synapse_storage_transaction_time_count_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"instant": false,
"interval": "",
@@ -6538,7 +6538,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_util_metrics_block_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\",block_name!=\"wrapped_request_handler\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds[$bucket_size])",
"expr": "rate(synapse_util_metrics_block_ru_utime_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\",block_name!=\"wrapped_request_handler\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds_total[$bucket_size])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -6636,7 +6636,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "(rate(synapse_util_metrics_block_ru_utime_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds[$bucket_size])) / rate(synapse_util_metrics_block_count[$bucket_size])",
"expr": "(rate(synapse_util_metrics_block_ru_utime_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) + rate(synapse_util_metrics_block_ru_stime_seconds_total[$bucket_size])) / rate(synapse_util_metrics_block_count_total[$bucket_size])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -6737,7 +6737,7 @@
"uid": "${DS_PROMETHEUS}"
},
"exemplar": true,
"expr": "rate(synapse_util_metrics_block_db_txn_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_util_metrics_block_db_txn_duration_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -6839,7 +6839,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_util_metrics_block_db_txn_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) / rate(synapse_util_metrics_block_db_txn_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_util_metrics_block_db_txn_duration_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) / rate(synapse_util_metrics_block_db_txn_count_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -6936,7 +6936,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_util_metrics_block_db_txn_duration_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) / rate(synapse_util_metrics_block_db_txn_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_util_metrics_block_db_txn_duration_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) / rate(synapse_util_metrics_block_db_txn_count_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -7033,7 +7033,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_util_metrics_block_time_seconds{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) / rate(synapse_util_metrics_block_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_util_metrics_block_time_seconds_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]) / rate(synapse_util_metrics_block_count_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -7122,7 +7122,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_util_metrics_block_count{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_util_metrics_block_count_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"interval": "",
"legendFormat": "{{job}}-{{index}} {{block_name}}",
"refId": "A"
@@ -7246,7 +7246,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_util_caches_cache:hits{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])/rate(synapse_util_caches_cache:total{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"expr": "rate(synapse_util_caches_cache_hits{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])/rate(synapse_util_caches_cache{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{name}} {{job}}-{{index}}",
@@ -7347,7 +7347,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "synapse_util_caches_cache:size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
"expr": "synapse_util_caches_cache_size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
"format": "time_series",
"hide": false,
"interval": "",
@@ -7447,7 +7447,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_util_caches_cache:total{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"expr": "rate(synapse_util_caches_cache{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -7547,7 +7547,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "topk(10, rate(synapse_util_caches_cache:total{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]) - rate(synapse_util_caches_cache:hits{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]))",
"expr": "topk(10, rate(synapse_util_caches_cache{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]) - rate(synapse_util_caches_cache_hits{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]))",
"format": "time_series",
"interval": "",
"intervalFactor": 2,
@@ -7643,7 +7643,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_util_caches_cache:evicted_size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_util_caches_cache_evicted_size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"interval": "",
"intervalFactor": 1,
@@ -7763,7 +7763,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "synapse_util_caches_response_cache:size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
"expr": "synapse_util_caches_response_cache_size{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}",
"interval": "",
"legendFormat": "{{name}} {{job}}-{{index}}",
"refId": "A"
@@ -7853,7 +7853,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_util_caches_response_cache:hits{instance=\"$instance\", job=~\"$job\", index=~\"$index\"}[$bucket_size])/rate(synapse_util_caches_response_cache:total{instance=\"$instance\", job=~\"$job\", index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_util_caches_response_cache_hits{instance=\"$instance\", job=~\"$job\", index=~\"$index\"}[$bucket_size])/rate(synapse_util_caches_response_cache{instance=\"$instance\", job=~\"$job\", index=~\"$index\"}[$bucket_size])",
"interval": "",
"legendFormat": "{{name}} {{job}}-{{index}}",
"refId": "A"
@@ -9556,7 +9556,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "synapse_forward_extremities_bucket{instance=\"$instance\"} and on (index, instance, job) (synapse_storage_events_persisted_events > 0)",
"expr": "synapse_forward_extremities_bucket{instance=\"$instance\"} and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0)",
"format": "heatmap",
"intervalFactor": 1,
"legendFormat": "{{le}}",
@@ -9716,7 +9716,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0)",
"expr": "rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0)",
"format": "heatmap",
"intervalFactor": 1,
"legendFormat": "{{le}}",
@@ -9793,7 +9793,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "histogram_quantile(0.5, rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
"expr": "histogram_quantile(0.5, rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "50%",
@@ -9803,7 +9803,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "histogram_quantile(0.75, rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
"expr": "histogram_quantile(0.75, rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "75%",
@@ -9813,7 +9813,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "histogram_quantile(0.90, rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
"expr": "histogram_quantile(0.90, rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "90%",
@@ -9823,7 +9823,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "histogram_quantile(0.99, rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
"expr": "histogram_quantile(0.99, rate(synapse_storage_events_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "99%",
@@ -9905,7 +9905,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0)",
"expr": "rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0)",
"format": "heatmap",
"intervalFactor": 1,
"legendFormat": "{{le}}",
@@ -9982,7 +9982,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "histogram_quantile(0.5, rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
"expr": "histogram_quantile(0.5, rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "50%",
@@ -9992,7 +9992,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "histogram_quantile(0.75, rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
"expr": "histogram_quantile(0.75, rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "75%",
@@ -10002,7 +10002,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "histogram_quantile(0.90, rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
"expr": "histogram_quantile(0.90, rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "90%",
@@ -10012,7 +10012,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "histogram_quantile(0.99, rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events > 0))",
"expr": "histogram_quantile(0.99, rate(synapse_storage_events_stale_forward_extremities_persisted_bucket{instance=\"$instance\"}[$bucket_size]) and on (index, instance, job) (synapse_storage_events_persisted_events_total > 0))",
"format": "time_series",
"intervalFactor": 1,
"legendFormat": "99%",
@@ -10297,7 +10297,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_storage_events_state_resolutions_during_persistence{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"expr": "sum(rate(synapse_storage_events_state_resolutions_during_persistence_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"interval": "",
"legendFormat": "State res ",
"refId": "A"
@@ -10306,7 +10306,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_storage_events_potential_times_prune_extremities{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"expr": "sum(rate(synapse_storage_events_potential_times_prune_extremities_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"interval": "",
"legendFormat": "Potential to prune",
"refId": "B"
@@ -10315,7 +10315,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "sum(rate(synapse_storage_events_times_pruned_extremities{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"expr": "sum(rate(synapse_storage_events_times_pruned_extremities_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size]))",
"interval": "",
"legendFormat": "Pruned",
"refId": "C"
@@ -11069,7 +11069,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_handler_presence_notified_presence{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"expr": "rate(synapse_handler_presence_notified_presence_total{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"interval": "",
"legendFormat": "Notified",
"refId": "A"
@@ -11078,7 +11078,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_handler_presence_federation_presence_out{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"expr": "rate(synapse_handler_presence_federation_presence_out_total{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"interval": "",
"legendFormat": "Remote ping",
"refId": "B"
@@ -11087,7 +11087,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_handler_presence_presence_updates{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"expr": "rate(synapse_handler_presence_presence_updates_total{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"interval": "",
"legendFormat": "Total updates",
"refId": "C"
@@ -11096,7 +11096,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_handler_presence_federation_presence{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"expr": "rate(synapse_handler_presence_federation_presence_total{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"interval": "",
"legendFormat": "Remote updates",
"refId": "D"
@@ -11105,7 +11105,7 @@
"datasource": {
"uid": "$datasource"
},
"expr": "rate(synapse_handler_presence_bump_active_time{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"expr": "rate(synapse_handler_presence_bump_active_time_total{job=\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"interval": "",
"legendFormat": "Bump active time",
"refId": "E"
@@ -11789,7 +11789,7 @@
"name": "instance",
"options": [],
"query": {
"query": "label_values(synapse_util_metrics_block_ru_utime_seconds, instance)",
"query": "label_values(synapse_util_metrics_block_ru_utime_seconds_total, instance)",
"refId": "Prometheus-instance-Variable-Query"
},
"refresh": 2,
@@ -11818,7 +11818,7 @@
"name": "job",
"options": [],
"query": {
"query": "label_values(synapse_util_metrics_block_ru_utime_seconds, job)",
"query": "label_values(synapse_util_metrics_block_ru_utime_seconds_total, job)",
"refId": "Prometheus-job-Variable-Query"
},
"refresh": 2,
@@ -11848,7 +11848,7 @@
"name": "index",
"options": [],
"query": {
"query": "label_values(synapse_util_metrics_block_ru_utime_seconds, index)",
"query": "label_values(synapse_util_metrics_block_ru_utime_seconds_total, index)",
"refId": "Prometheus-index-Variable-Query"
},
"refresh": 2,
@@ -11896,6 +11896,6 @@
"timezone": "",
"title": "Synapse",
"uid": "000000012",
"version": 132,
"version": 133,
"weekStart": ""
}
}
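With those renames applied, a quick way to sanity-check that the dashboard's new expressions resolve against a live server is to query one of the `_total` series directly over the Prometheus HTTP API. This is only a sketch: it assumes a Prometheus instance scraping Synapse is reachable on `localhost:9090`.

```sh
# Hypothetical smoke test for one of the renamed counters; adjust the host
# and port to wherever your Prometheus server listens.
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(synapse_federation_client_sent_edus_total[5m]))'
```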


@@ -1,21 +0,0 @@
synapse_federation_transaction_queue_pendingEdus:total = sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)
synapse_federation_transaction_queue_pendingPdus:total = sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)
synapse_http_server_request_count:method{servlet=""} = sum(synapse_http_server_request_count) by (method)
synapse_http_server_request_count:servlet{method=""} = sum(synapse_http_server_request_count) by (servlet)
synapse_http_server_request_count:total{servlet=""} = sum(synapse_http_server_request_count:by_method) by (servlet)
synapse_cache:hit_ratio_5m = rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])
synapse_cache:hit_ratio_30s = rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])
synapse_federation_client_sent{type="EDU"} = synapse_federation_client_sent_edus + 0
synapse_federation_client_sent{type="PDU"} = synapse_federation_client_sent_pdu_destinations:count + 0
synapse_federation_client_sent{type="Query"} = sum(synapse_federation_client_sent_queries) by (job)
synapse_federation_server_received{type="EDU"} = synapse_federation_server_received_edus + 0
synapse_federation_server_received{type="PDU"} = synapse_federation_server_received_pdus + 0
synapse_federation_server_received{type="Query"} = sum(synapse_federation_server_received_queries) by (job)
synapse_federation_transaction_queue_pending{type="EDU"} = synapse_federation_transaction_queue_pending_edus + 0
synapse_federation_transaction_queue_pending{type="PDU"} = synapse_federation_transaction_queue_pending_pdus + 0


@@ -1,55 +1,35 @@
groups:
- name: synapse
rules:
- record: "synapse_federation_transaction_queue_pendingEdus:total"
expr: "sum(synapse_federation_transaction_queue_pendingEdus or absent(synapse_federation_transaction_queue_pendingEdus)*0)"
- record: "synapse_federation_transaction_queue_pendingPdus:total"
expr: "sum(synapse_federation_transaction_queue_pendingPdus or absent(synapse_federation_transaction_queue_pendingPdus)*0)"
- record: 'synapse_http_server_request_count:method'
labels:
servlet: ""
expr: "sum(synapse_http_server_request_count) by (method)"
- record: 'synapse_http_server_request_count:servlet'
labels:
method: ""
expr: 'sum(synapse_http_server_request_count) by (servlet)'
- record: 'synapse_http_server_request_count:total'
labels:
servlet: ""
expr: 'sum(synapse_http_server_request_count:by_method) by (servlet)'
- record: 'synapse_cache:hit_ratio_5m'
expr: 'rate(synapse_util_caches_cache:hits[5m]) / rate(synapse_util_caches_cache:total[5m])'
- record: 'synapse_cache:hit_ratio_30s'
expr: 'rate(synapse_util_caches_cache:hits[30s]) / rate(synapse_util_caches_cache:total[30s])'
# These 3 rules are used in the included Prometheus console
- record: 'synapse_federation_client_sent'
labels:
type: "EDU"
expr: 'synapse_federation_client_sent_edus + 0'
expr: 'synapse_federation_client_sent_edus_total + 0'
- record: 'synapse_federation_client_sent'
labels:
type: "PDU"
expr: 'synapse_federation_client_sent_pdu_destinations:count + 0'
expr: 'synapse_federation_client_sent_pdu_destinations_count_total + 0'
- record: 'synapse_federation_client_sent'
labels:
type: "Query"
expr: 'sum(synapse_federation_client_sent_queries) by (job)'
# These 3 rules are used in the included Prometheus console
- record: 'synapse_federation_server_received'
labels:
type: "EDU"
expr: 'synapse_federation_server_received_edus + 0'
expr: 'synapse_federation_server_received_edus_total + 0'
- record: 'synapse_federation_server_received'
labels:
type: "PDU"
expr: 'synapse_federation_server_received_pdus + 0'
expr: 'synapse_federation_server_received_pdus_total + 0'
- record: 'synapse_federation_server_received'
labels:
type: "Query"
expr: 'sum(synapse_federation_server_received_queries) by (job)'
# These 2 rules are used in the included Prometheus console
- record: 'synapse_federation_transaction_queue_pending'
labels:
type: "EDU"
@@ -59,20 +39,25 @@ groups:
type: "PDU"
expr: 'synapse_federation_transaction_queue_pending_pdus + 0'
# These 3 rules are used in the included Grafana dashboard
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_type="remote"})
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep_total{origin_type="remote"})
labels:
type: remote
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity="*client*",origin_type="local"})
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep_total{origin_entity="*client*",origin_type="local"})
labels:
type: local
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep{origin_entity!="*client*",origin_type="local"})
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep_total{origin_entity!="*client*",origin_type="local"})
labels:
type: bridges
- record: synapse_storage_events_persisted_by_event_type
expr: sum without(origin_entity, origin_type) (synapse_storage_events_persisted_events_sep)
- record: synapse_storage_events_persisted_by_origin
expr: sum without(type) (synapse_storage_events_persisted_events_sep)
# This rule is used in the included Grafana dashboard
- record: synapse_storage_events_persisted_by_event_type
expr: sum without(origin_entity, origin_type) (synapse_storage_events_persisted_events_sep_total)
# This rule is used in the included Grafana dashboard
- record: synapse_storage_events_persisted_by_origin
expr: sum without(type) (synapse_storage_events_persisted_events_sep_total)
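Before deploying the updated recording rules, they can be validated offline with `promtool`, which ships with Prometheus. A minimal sketch, assuming the rules file lives at `contrib/prometheus/synapse-v2.rules` (adjust the path to wherever your copy is installed):

```sh
# Validate the syntax of the updated recording rules without touching a
# running Prometheus server.
promtool check rules contrib/prometheus/synapse-v2.rules
```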


@@ -61,7 +61,7 @@ dh_virtualenv \
--extras="all,systemd,test" \
--requirements="exported_requirements.txt"
PACKAGE_BUILD_DIR="debian/matrix-synapse-py3"
PACKAGE_BUILD_DIR="$(pwd)/debian/matrix-synapse-py3"
VIRTUALENV_DIR="${PACKAGE_BUILD_DIR}${DH_VIRTUALENV_INSTALL_ROOT}/matrix-synapse"
TARGET_PYTHON="${VIRTUALENV_DIR}/bin/python"
@@ -78,9 +78,14 @@ case "$DEB_BUILD_OPTIONS" in
cp -r tests "$tmpdir"
# To avoid pulling in the unbuilt Synapse in the local directory
pushd /
PYTHONPATH="$tmpdir" \
"${TARGET_PYTHON}" -m twisted.trial --reporter=text -j2 tests
popd
;;
esac

debian/changelog

@@ -22,11 +22,15 @@ matrix-synapse-py3 (1.66.0) stable; urgency=medium
matrix-synapse-py3 (1.66.0~rc2+nmu1) UNRELEASED; urgency=medium
[ Jörg Behrmann ]
* Update debhelper to compatibility level 12.
* Drop the preinst script stopping synapse.
* Allocate a group for the system user.
* Change dpkg-statoverride to --force-statoverride-add.
[ Erik Johnston ]
* Disable `dh_auto_configure` as it broke during Rust build.
-- Jörg Behrmann <behrmann@physik.fu-berlin.de> Tue, 23 Aug 2022 17:17:00 +0100
matrix-synapse-py3 (1.66.0~rc2) stable; urgency=medium

debian/rules

@@ -12,6 +12,8 @@ override_dh_installsystemd:
# we don't really want to strip the symbols from our object files.
override_dh_strip:
override_dh_auto_configure:
# many libraries pulled from PyPI have allocatable sections after
# non-allocatable ones on which dwz errors out. For those without the issue the
# gains are only marginal


@@ -31,7 +31,9 @@ ARG PYTHON_VERSION=3.9
###
### Stage 0: generate requirements.txt
###
FROM docker.io/python:${PYTHON_VERSION}-slim as requirements
# We hardcode the use of Debian bullseye here because this could change upstream
# and other Dockerfiles used for testing are expecting bullseye.
FROM docker.io/python:${PYTHON_VERSION}-slim-bullseye as requirements
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
@@ -76,7 +78,7 @@ RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
###
### Stage 1: builder
###
FROM docker.io/python:${PYTHON_VERSION}-slim as builder
FROM docker.io/python:${PYTHON_VERSION}-slim-bullseye as builder
# install the OS build deps
RUN \
@@ -92,11 +94,20 @@ RUN \
libxml++2.6-dev \
libxslt1-dev \
openssl \
rustc \
zlib1g-dev \
git \
curl \
&& rm -rf /var/lib/apt/lists/*
# Install rust and ensure its in the PATH
ENV RUSTUP_HOME=/rust
ENV CARGO_HOME=/cargo
ENV PATH=/cargo/bin:/rust/bin:$PATH
RUN mkdir /rust /cargo
RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable
# To speed up rebuilds, install all of the dependencies before we copy over
# the whole synapse project, so that this layer in the Docker cache can be
# used while you develop on the source
@@ -108,8 +119,9 @@ RUN --mount=type=cache,target=/root/.cache/pip \
# Copy over the rest of the synapse source code.
COPY synapse /synapse/synapse/
COPY rust /synapse/rust/
# ... and what we need to `pip install`.
COPY pyproject.toml README.rst /synapse/
COPY pyproject.toml README.rst build_rust.py /synapse/
# Repeat of earlier build argument declaration, as this is a new build stage.
ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
@@ -127,7 +139,7 @@ RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
### Stage 2: runtime
###
FROM docker.io/python:${PYTHON_VERSION}-slim
FROM docker.io/python:${PYTHON_VERSION}-slim-bullseye
LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
LABEL org.opencontainers.image.documentation='https://github.com/matrix-org/synapse/blob/master/docker/README.md'
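Since the Dockerfile relies on BuildKit features such as `RUN --mount` cache mounts, the image needs to be built with BuildKit enabled. A minimal sketch, assuming it is run from the root of a Synapse checkout (the tag name is arbitrary):

```sh
# Build the runtime image; BuildKit is required for the cache-mount RUN steps.
DOCKER_BUILDKIT=1 docker build -t matrixdotorg/synapse:local -f docker/Dockerfile .
```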


@@ -72,6 +72,7 @@ RUN apt-get update -qq -o Acquire::Languages=none \
&& env DEBIAN_FRONTEND=noninteractive apt-get install \
-yqq --no-install-recommends -o Dpkg::Options::=--force-unsafe-io \
build-essential \
curl \
debhelper \
devscripts \
libsystemd-dev \
@@ -85,6 +86,15 @@ RUN apt-get update -qq -o Acquire::Languages=none \
libpq-dev \
xmlsec1
# Install rust and ensure it's in the PATH
ENV RUSTUP_HOME=/rust
ENV CARGO_HOME=/cargo
ENV PATH=/cargo/bin:/rust/bin:$PATH
RUN mkdir /rust /cargo
RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable
COPY --from=builder /dh-virtualenv_1.2.2-1_all.deb /
# install dhvirtualenv. Update the apt cache again first, in case we got a


@@ -17,7 +17,17 @@ ARG SYNAPSE_VERSION=latest
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
FROM postgres:13-bullseye AS postgres_base
# now build the final image, based on the Synapse image.
FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION
# copy the postgres installation over from the image we built above
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres:13-bullseye /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres:13-bullseye /usr/share/postgresql /usr/share/postgresql
RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data
# initialise the database cluster in /var/lib/postgresql
RUN gosu postgres initdb --locale=C --encoding=UTF-8 --auth-host password
@@ -25,18 +35,6 @@ FROM postgres:13-bullseye AS postgres_base
RUN echo "ALTER USER postgres PASSWORD 'somesecret'" | gosu postgres postgres --single
RUN echo "CREATE DATABASE synapse" | gosu postgres postgres --single
# now build the final image, based on the Synapse image.
FROM matrixdotorg/synapse-workers:$SYNAPSE_VERSION
# copy the postgres installation over from the image we built above
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres_base /var/lib/postgresql /var/lib/postgresql
COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql
RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data
# Extend the shared homeserver config to disable rate-limiting,
# set Complement's static shared secret, enable registration, amongst other
# tweaks to get Synapse ready for testing.


@@ -393,6 +393,151 @@ A response body like the following is returned:
}
```
# Room Messages API
The Room Messages admin API allows server admins to get all messages
sent to a room in a given timeframe. There are various parameters available
that allow for filtering and ordering the returned list. This API supports pagination.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api).
This endpoint mirrors the [Matrix Spec defined Messages API](https://spec.matrix.org/v1.1/client-server-api/#get_matrixclientv3roomsroomidmessages).
The API is:
```
GET /_synapse/admin/v1/rooms/<room_id>/messages
```
**Parameters**
The following path parameters are required:
* `room_id` - The ID of the room you wish to fetch messages from.
The following query parameters are available:
* `from` (required) - The token to start returning events from. This token can be obtained from a prev_batch
or next_batch token returned by the /sync endpoint, or from an end token returned by a previous request to this endpoint.
* `to` - The token to stop returning events at.
* `limit` - The maximum number of events to return. Defaults to `10`.
* `filter` - A JSON RoomEventFilter to filter returned events with.
* `dir` - The direction to return events from. Either `f` for forwards or `b` for backwards. Setting
this value to `b` will reverse the above sort order. Defaults to `f`.
**Response**
The following fields are possible in the JSON response body:
* `chunk` - A list of room events. The order depends on the dir parameter.
Note that an empty chunk does not necessarily imply that no more events are available. Clients should continue to paginate until no end property is returned.
* `end` - A token corresponding to the end of chunk. This token can be passed back to this endpoint to request further events.
If no further events are available, this property is omitted from the response.
* `start` - A token corresponding to the start of chunk.
* `state` - A list of state events relevant to showing the chunk.
**Example**
For more details on each chunk, read [the Matrix specification](https://spec.matrix.org/v1.1/client-server-api/#get_matrixclientv3roomsroomidmessages).
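A request might look like the following sketch, which pages backwards through the example room; `<admin_access_token>` and `<from_token>` are placeholders and the homeserver is assumed to listen on `localhost:8008`. It could then return a response like the one below.

```sh
# Hypothetical request: fetch up to 10 events, walking backwards from <from_token>.
# Note that the room ID must be URL-encoded ('!' -> %21, ':' -> %3A).
curl -s -H "Authorization: Bearer <admin_access_token>" \
  "http://localhost:8008/_synapse/admin/v1/rooms/%21636q39766251%3Aexample.com/messages?from=<from_token>&limit=10&dir=b"
```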
```json
{
"chunk": [
{
"content": {
"body": "This is an example text message",
"format": "org.matrix.custom.html",
"formatted_body": "<b>This is an example text message</b>",
"msgtype": "m.text"
},
"event_id": "$143273582443PhrSn:example.org",
"origin_server_ts": 1432735824653,
"room_id": "!636q39766251:example.com",
"sender": "@example:example.org",
"type": "m.room.message",
"unsigned": {
"age": 1234
}
},
{
"content": {
"name": "The room name"
},
"event_id": "$143273582443PhrSn:example.org",
"origin_server_ts": 1432735824653,
"room_id": "!636q39766251:example.com",
"sender": "@example:example.org",
"state_key": "",
"type": "m.room.name",
"unsigned": {
"age": 1234
}
},
{
"content": {
"body": "Gangnam Style",
"info": {
"duration": 2140786,
"h": 320,
"mimetype": "video/mp4",
"size": 1563685,
"thumbnail_info": {
"h": 300,
"mimetype": "image/jpeg",
"size": 46144,
"w": 300
},
"thumbnail_url": "mxc://example.org/FHyPlCeYUSFFxlgbQYZmoEoe",
"w": 480
},
"msgtype": "m.video",
"url": "mxc://example.org/a526eYUSFFxlgbQYZmo442"
},
"event_id": "$143273582443PhrSn:example.org",
"origin_server_ts": 1432735824653,
"room_id": "!636q39766251:example.com",
"sender": "@example:example.org",
"type": "m.room.message",
"unsigned": {
"age": 1234
}
}
],
"end": "t47409-4357353_219380_26003_2265",
"start": "t47429-4392820_219380_26003_2265"
}
```
# Room Timestamp to Event API
The Room Timestamp to Event API endpoint fetches the `event_id` of the closest event to the given
timestamp (`ts` query parameter) in the given direction (`dir` query parameter).
Useful for cases like jump to date so you can start paginating messages from
a given date in the archive.
The API is:
```
GET /_synapse/admin/v1/rooms/<room_id>/timestamp_to_event
```
**Parameters**
The following path parameters are required:
* `room_id` - The ID of the room you wish to check.
The following query parameters are available:
* `ts` - a timestamp in milliseconds where we will find the closest event in
the given direction.
* `dir` - can be `f` or `b` to indicate forwards and backwards in time from the
given timestamp. Defaults to `f`.
**Response**
* `event_id` - The ID of the event closest to the given timestamp.
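As a sketch, assuming a homeserver on `localhost:8008` and placeholder values for the room ID and admin token, a lookup for the closest event at or before `2022-09-02T00:00:00Z` could look like this:

```sh
# 1662076800000 ms is 2022-09-02T00:00:00Z; dir=b searches backwards from it.
curl -s -H "Authorization: Bearer <admin_access_token>" \
  "http://localhost:8008/_synapse/admin/v1/rooms/<room_id>/timestamp_to_event?ts=1662076800000&dir=b"
```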
# Block Room API
The Block Room admin API allows server admins to block and unblock rooms,
and query to see if a given room is blocked.


@@ -42,6 +42,7 @@ It returns a JSON body like the following:
"appservice_id": null,
"consent_server_notice_sent": null,
"consent_version": null,
"consent_ts": null,
"external_ids": [
{
"auth_provider": "<provider1>",
@@ -364,6 +365,7 @@ The following actions are **NOT** performed. The list may be incomplete.
- Remove the user's creation (registration) timestamp
- [Remove rate limit overrides](#override-ratelimiting-for-users)
- Remove from monthly active users
- Remove user's consent information (consent version and timestamp)
## Reset password
@@ -1153,3 +1155,41 @@ GET /_synapse/admin/v1/username_available?username=$localpart
The request and response format is the same as the
[/_matrix/client/r0/register/available](https://matrix.org/docs/spec/client_server/r0.6.0#get-matrix-client-r0-register-available) API.
### Find a user based on their ID in an auth provider
The API is:
```
GET /_synapse/admin/v1/auth_providers/$provider/users/$external_id
```
When a user matches the given ID for the given provider, an HTTP code `200` is returned with a response body like the following:
```json
{
"user_id": "@hello:example.org"
}
```
**Parameters**
The following parameters should be set in the URL:
- `provider` - The ID of the authentication provider, as advertised by the [`GET /_matrix/client/v3/login`](https://spec.matrix.org/latest/client-server-api/#post_matrixclientv3login) API in the `m.login.sso` authentication method.
- `external_id` - The user ID from the authentication provider. Usually corresponds to the `sub` claim for OIDC providers, or to the `uid` attestation for SAML2 providers.
The `external_id` may have characters that are not URL-safe (typically `/`, `:` or `@`), so it is advised to URL-encode those parameters.
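For example, looking up an OIDC user whose `sub` contains URL-unsafe characters might look like the following sketch; the provider ID `oidc`, the external ID and the token are placeholders, and the homeserver is assumed to listen on `localhost:8008`.

```sh
# '@' is percent-encoded as %40 in the external ID.
curl -s -H "Authorization: Bearer <admin_access_token>" \
  "http://localhost:8008/_synapse/admin/v1/auth_providers/oidc/users/user%40example.com"
```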
**Errors**
Returns a `404` HTTP status code if no user was found, with a response body like this:
```json
{
"errcode":"M_NOT_FOUND",
"error":"User not found"
}
```
_Added in Synapse 1.68.0._


@@ -1,9 +1,9 @@
Deprecation Policy for Platform Dependencies
============================================
Synapse has a number of platform dependencies, including Python and PostgreSQL.
This document outlines the policy towards which versions we support, and when we
drop support for versions in the future.
Synapse has a number of platform dependencies, including Python, Rust,
PostgreSQL and SQLite. This document outlines the policy towards which versions
we support, and when we drop support for versions in the future.
Policy
@@ -17,6 +17,14 @@ Details on the upstream support life cycles for Python and PostgreSQL are
documented at [https://endoflife.date/python](https://endoflife.date/python) and
[https://endoflife.date/postgresql](https://endoflife.date/postgresql).
A Rust compiler is required to build Synapse from source. For any given release
the minimum required version may be bumped up to a recent Rust version, and so
people building from source should ensure they can fetch recent versions of Rust
(e.g. by using [rustup](https://rustup.rs/)).
The oldest supported version of SQLite is the version
[provided](https://packages.debian.org/buster/libsqlite3-0) by
[Debian oldstable](https://wiki.debian.org/DebianOldStable).
Context
-------
@@ -31,3 +39,15 @@ long process.
By following the upstream support life cycles Synapse can ensure that its
dependencies continue to get security patches, while not requiring system admins
to constantly update their platform dependencies to the latest versions.
For Rust, the situation is a bit different given that a) the Rust foundation
does not generally support older Rust versions, and b) the library ecosystem
generally bumps its minimum supported Rust version frequently. In general, the
Synapse team will try to avoid updating the dependency on Rust to the absolute
latest version, but introducing a formal policy is hard given the constraints of
the ecosystem.
On a similar note, SQLite does not generally have a concept of "supported
release"; bugfixes are published for the latest minor release only. We chose to
track Debian's oldstable as this is relatively conservative, predictably updated
and is consistent with the `.deb` packages released by Matrix.org.


@@ -28,6 +28,9 @@ The source code of Synapse is hosted on GitHub. You will also need [a recent ver
For some tests, you will need [a recent version of Docker](https://docs.docker.com/get-docker/).
A recent version of the Rust compiler is needed to build the native modules. The
easiest way of installing the latest version is to use [rustup](https://rustup.rs/).
# 3. Get the source.
@@ -114,6 +117,11 @@ Some documentation also exists in [Synapse's GitHub
Wiki](https://github.com/matrix-org/synapse/wiki), although this is primarily
contributed to by community authors.
When changes are made to any Rust code then you must call either `poetry install`
or `maturin develop` (if installed) to rebuild the Rust code. Using [`maturin`](https://github.com/PyO3/maturin)
is quicker than `poetry install`, so is recommended when making frequent
changes to the Rust code.
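A minimal sketch of that loop, assuming the commands are run from the root of the checkout and inside the project's virtualenv (for example via `poetry shell`):

```sh
# One-off: install a Rust toolchain via rustup if you don't already have one.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
# After editing files under rust/, rebuild the native module in-place.
pip install maturin   # only needed the first time
maturin develop
```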
# 8. Test, test, test!
<a name="test-test-test"></a>
@@ -195,7 +203,7 @@ The database file can then be inspected with:
sqlite3 _trial_temp/test.db
```
Note that the database file is cleared at the beginning of each test run. Thus it
Note that the database file is cleared at the beginning of each test run. Thus it
will always only contain the data generated by the *last run test*. Though generally
when debugging, one is only running a single test anyway.


@@ -126,6 +126,23 @@ context of poetry's venv, without having to run `poetry shell` beforehand.
poetry install --extras all --remove-untracked
```
## ...delete everything and start over from scratch?
```shell
# Stop the current virtualenv if active
$ deactivate
# Remove all of the files from the current environment.
# Don't worry, even though it says "all", this will only
# remove the Poetry virtualenvs for the current project.
$ poetry env remove --all
# Reactivate Poetry shell to create the virtualenv again
$ poetry shell
# Install everything again
$ poetry install --extras all
```
## ...run a command in the `poetry` virtualenv?
Use `poetry run cmd args` when you need the python virtualenv context.
@@ -256,6 +273,16 @@ from PyPI. (This is what makes poetry seem slow when doing the first
`poetry install`.) Try `poetry cache list` and `poetry cache clear --all
<name of cache>` to see if that fixes things.
## Remove outdated egg-info
Delete the `matrix_synapse.egg-info/` directory from the root of your Synapse
install.
This stores some cached information about dependencies and often conflicts with
letting Poetry do the right thing.
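A minimal cleanup sketch combining this with the cache tips above, assumed to be run from the root of the Synapse checkout:

```sh
rm -rf matrix_synapse.egg-info/
poetry cache list                  # see which caches exist
poetry cache clear --all pypi      # then clear the offending one
```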
## Try `--verbose` or `--dry-run` arguments.
Sometimes useful to see what poetry's internal logic is.


@@ -45,6 +45,10 @@ listens to traffic on localhost. (Do not change `bind_addresses` to `127.0.0.1`
when using a containerized Synapse, as that will prevent it from responding
to proxied traffic.)
Optionally, you can also set
[`request_id_header`](../usage/configuration/config_documentation.md#listeners)
so that the server extracts and re-uses the same request ID format that the
reverse proxy is using.
## Reverse-proxy configuration examples


@@ -196,6 +196,10 @@ System requirements:
- Python 3.7 or later, up to Python 3.10.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
If building on an uncommon architecture for which pre-built wheels are
unavailable, you will need to have a recent Rust compiler installed. The easiest
way of installing the latest version is to use [rustup](https://rustup.rs/).
To install the Synapse homeserver run:
```sh
@@ -299,9 +303,10 @@ You may need to install the latest Xcode developer tools:
xcode-select --install
```
On ARM-based Macs you may need to explicitly install libjpeg which is a pillow dependency. You can use Homebrew (https://brew.sh):
On ARM-based Macs you may need to install libjpeg and libpq.
You can use Homebrew (https://brew.sh):
```sh
brew install jpeg
brew install jpeg libpq
```
On macOS Catalina (10.15) you may need to explicitly install OpenSSL


@@ -12,14 +12,14 @@ See the following for how to decode the dense data available from the default lo
| Part | Explanation |
| ----- | ------------ |
| AAAA | Timestamp request was logged (not recieved) |
| AAAA | Timestamp request was logged (not received) |
| BBBB | Logger name (`synapse.access.(http\|https).<tag>`, where 'tag' is defined in the `listeners` config section, normally the port) |
| CCCC | Line number in code |
| DDDD | Log Level |
| EEEE | Request Identifier (This identifier is shared by related log lines)|
| FFFF | Source IP (Or X-Forwarded-For if enabled) |
| GGGG | Server Port |
| HHHH | Federated Server or Local User making request (blank if unauthenticated or not supplied) |
| HHHH | Federated Server or Local User making request (blank if unauthenticated or not supplied).<br/>If this is of the form `@aaa:example.com|@bbb:example.com`, then that means that `@aaa:example.com` is authenticated but they are controlling `@bbb:example.com`, e.g. if `aaa` is controlling `bbb` [via the admin API](https://matrix-org.github.io/synapse/latest/admin_api/user_admin_api.html#login-as-a-user). |
| IIII | Total Time to process the request |
| JJJJ | Time to send response over network once generated (this may be negative if the socket is closed before the response is generated)|
| KKKK | Userland CPU time |


@@ -434,7 +434,16 @@ Sub-options for each listener include:
* `tls`: set to true to enable TLS for this listener. Will use the TLS key/cert specified in tls_private_key_path / tls_certificate_path.
* `x_forwarded`: Only valid for an 'http' listener. Set to true to use the X-Forwarded-For header as the client IP. Useful when Synapse is
behind a reverse-proxy.
behind a [reverse-proxy](../../reverse_proxy.md).
* `request_id_header`: The header extracted from each incoming request that is
used as the basis for the request ID. The request ID is used in
[logs](../administration/request_log.md#request-log-format) and tracing to
correlate and match up requests. When unset, Synapse will automatically
generate sequential request IDs. This option is useful when Synapse is behind
a [reverse-proxy](../../reverse_proxy.md).
_Added in Synapse 1.68.0._
* `resources`: Only valid for an 'http' listener. A list of resources to host
on this port. Sub-options for each resource are:
@@ -1069,8 +1078,10 @@ Options related to caching.
---
### `event_cache_size`
The number of events to cache in memory. Not affected by
`caches.global_factor` and is not part of the `caches` section. Defaults to 10K.
The number of events to cache in memory. Defaults to 10K. Like other caches,
this is affected by `caches.global_factor` (see below).
Note that this option is not part of the `caches` section.
Example configuration:
```yaml
@@ -1391,7 +1402,7 @@ This option specifies several limits for login:
client is attempting to log into. Defaults to `per_second: 0.17`,
`burst_count: 3`.
* `failted_attempts` ratelimits login requests based on the account the
* `failed_attempts` ratelimits login requests based on the account the
client is attempting to log into, based on the amount of failed login
attempts for this account. Defaults to `per_second: 0.17`, `burst_count: 3`.


@@ -16,7 +16,8 @@ files =
docker/,
scripts-dev/,
synapse/,
tests/
tests/,
build_rust.py
# Note: Better exclusion syntax coming in mypy > 0.910
# https://github.com/python/mypy/pull/11329
@@ -181,3 +182,6 @@ ignore_missing_imports = True
[mypy-incremental.*]
ignore_missing_imports = True
[mypy-setuptools_rust.*]
ignore_missing_imports = True

poetry.lock

@@ -524,11 +524,11 @@ python-versions = ">=3.7"
[[package]]
name = "matrix-common"
version = "1.2.1"
version = "1.3.0"
description = "Common utilities for Synapse, Sydent and Sygnal"
category = "main"
optional = false
python-versions = ">=3.6"
python-versions = ">=3.7"
[package.dependencies]
attrs = "*"
@@ -1035,6 +1035,18 @@ python-versions = ">=3.6"
cryptography = ">=2.0"
jeepney = ">=0.6"
[[package]]
name = "semantic-version"
version = "2.10.0"
description = "A library implementing the 'SemVer' scheme."
category = "main"
optional = false
python-versions = ">=2.7"
[package.extras]
dev = ["Django (>=1.11)", "check-manifest", "colorama (<=0.4.1)", "coverage", "flake8", "nose2", "readme-renderer (<25.0)", "tox", "wheel", "zest.releaser[recommended]"]
doc = ["Sphinx", "sphinx-rtd-theme"]
[[package]]
name = "sentry-sdk"
version = "1.5.11"
@@ -1099,6 +1111,19 @@ docs = ["furo", "jaraco.packaging (>=9)", "jaraco.tidelift (>=1.4)", "pygments-g
testing = ["build[virtualenv]", "filelock (>=3.4.0)", "flake8 (<5)", "flake8-2020", "ini2toml[lite] (>=0.9)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "mock", "pip (>=19.1)", "pip-run (>=8.8)", "pytest (>=6)", "pytest-black (>=0.3.7)", "pytest-checkdocs (>=2.4)", "pytest-cov", "pytest-enabler (>=1.3)", "pytest-flake8", "pytest-mypy (>=0.9.1)", "pytest-perf", "pytest-xdist", "tomli-w (>=1.0.0)", "virtualenv (>=13.0.0)", "wheel"]
testing-integration = ["build[virtualenv]", "filelock (>=3.4.0)", "jaraco.envs (>=2.2)", "jaraco.path (>=3.2.0)", "pytest", "pytest-enabler", "pytest-xdist", "tomli", "virtualenv (>=13.0.0)", "wheel"]
[[package]]
name = "setuptools-rust"
version = "1.5.1"
description = "Setuptools Rust extension plugin"
category = "main"
optional = false
python-versions = ">=3.7"
[package.dependencies]
semantic-version = ">=2.8.2,<3"
setuptools = ">=62.4"
typing-extensions = ">=3.7.4.3"
[[package]]
name = "signedjson"
version = "1.1.4"
@@ -1600,7 +1625,7 @@ url_preview = ["lxml"]
[metadata]
lock-version = "1.1"
python-versions = "^3.7.1"
content-hash = "7de518bf27967b3547eab8574342cfb67f87d6b47b4145c13de11112141dbf2d"
content-hash = "1b14fc274d9e2a495a7f864150f3ffcf4d9f585e09a67e53301ae4ef3c2f3e48"
[metadata.files]
attrs = [
@@ -2088,8 +2113,8 @@ markupsafe = [
{file = "MarkupSafe-2.1.0.tar.gz", hash = "sha256:80beaf63ddfbc64a0452b841d8036ca0611e049650e20afcb882f5d3c266d65f"},
]
matrix-common = [
{file = "matrix_common-1.2.1-py3-none-any.whl", hash = "sha256:946709c405944a0d4b1d73207b77eb064b6dbfc5d70a69471320b06d8ce98b20"},
{file = "matrix_common-1.2.1.tar.gz", hash = "sha256:a99dcf02a6bd95b24a5a61b354888a2ac92bf2b4b839c727b8dd9da2cdfa3853"},
{file = "matrix_common-1.3.0-py3-none-any.whl", hash = "sha256:524e2785b9b03be4d15f3a8a6b857c5b6af68791ffb1b9918f0ad299abc4db20"},
{file = "matrix_common-1.3.0.tar.gz", hash = "sha256:62e121cccd9f243417b57ec37a76dc44aeb198a7a5c67afd6b8275992ff2abd1"},
]
matrix-synapse-ldap3 = [
{file = "matrix-synapse-ldap3-0.2.2.tar.gz", hash = "sha256:b388d95693486eef69adaefd0fd9e84463d52fe17b0214a00efcaa669b73cb74"},
@@ -2472,6 +2497,10 @@ secretstorage = [
{file = "SecretStorage-3.3.1-py3-none-any.whl", hash = "sha256:422d82c36172d88d6a0ed5afdec956514b189ddbfb72fefab0c8a1cee4eaf71f"},
{file = "SecretStorage-3.3.1.tar.gz", hash = "sha256:fd666c51a6bf200643495a04abb261f83229dcb6fd8472ec393df7ffc8b6f195"},
]
semantic-version = [
{file = "semantic_version-2.10.0-py2.py3-none-any.whl", hash = "sha256:de78a3b8e0feda74cabc54aab2da702113e33ac9d9eb9d2389bcf1f58b7d9177"},
{file = "semantic_version-2.10.0.tar.gz", hash = "sha256:bdabb6d336998cbb378d4b9db3a4b56a1e3235701dc05ea2690d9a997ed5041c"},
]
sentry-sdk = [
{file = "sentry-sdk-1.5.11.tar.gz", hash = "sha256:6c01d9d0b65935fd275adc120194737d1df317dce811e642cbf0394d0d37a007"},
{file = "sentry_sdk-1.5.11-py2.py3-none-any.whl", hash = "sha256:c17179183cac614e900cbd048dab03f49a48e2820182ec686c25e7ce46f8548f"},
@@ -2484,6 +2513,10 @@ setuptools = [
{file = "setuptools-65.3.0-py3-none-any.whl", hash = "sha256:2e24e0bec025f035a2e72cdd1961119f557d78ad331bb00ff82efb2ab8da8e82"},
{file = "setuptools-65.3.0.tar.gz", hash = "sha256:7732871f4f7fa58fb6bdcaeadb0161b2bd046c85905dbaa066bdcbcc81953b57"},
]
setuptools-rust = [
{file = "setuptools-rust-1.5.1.tar.gz", hash = "sha256:0e05e456645d59429cb1021370aede73c0760e9360bbfdaaefb5bced530eb9d7"},
{file = "setuptools_rust-1.5.1-py3-none-any.whl", hash = "sha256:306b236ff3aa5229180e58292610d0c2c51bb488191122d2fc559ae4caeb7d5e"},
]
signedjson = [
{file = "signedjson-1.1.4-py3-none-any.whl", hash = "sha256:45569ec54241c65d2403fe3faf7169be5322547706a231e884ca2b427f23d228"},
{file = "signedjson-1.1.4.tar.gz", hash = "sha256:cd91c56af53f169ef032c62e9c4a3292dc158866933318d0592e3462db3d6492"},


@@ -52,6 +52,9 @@ include_trailing_comma = true
combine_as_imports = true
skip_gitignore = true
[tool.maturin]
manifest-path = "rust/Cargo.toml"
[tool.poetry]
name = "matrix-synapse"
version = "1.67.0"
@@ -82,7 +85,16 @@ include = [
{ path = "sytest-blacklist", format = "sdist" },
{ path = "tests", format = "sdist" },
{ path = "UPGRADE.rst", format = "sdist" },
{ path = "Cargo.toml", format = "sdist" },
{ path = "rust/Cargo.toml", format = "sdist" },
{ path = "rust/Cargo.lock", format = "sdist" },
{ path = "rust/src/**", format = "sdist" },
]
exclude = [
{ path = "synapse/*.so", format = "sdist"}
]
build = "build_rust.py"
[tool.poetry.scripts]
synapse_homeserver = "synapse.app.homeserver:main"
@@ -126,7 +138,7 @@ pyOpenSSL = ">=16.0.0"
PyYAML = ">=3.11"
pyasn1 = ">=0.1.9"
pyasn1-modules = ">=0.0.7"
bcrypt = ">=3.1.0"
bcrypt = ">=3.1.7"
Pillow = ">=5.4.0"
sortedcontainers = ">=1.4.4"
pymacaroons = ">=0.13.0"
@@ -152,7 +164,7 @@ typing-extensions = ">=3.10.0.1"
cryptography = ">=3.4.7"
# ijson 3.1.4 fixes a bug with "." in property names
ijson = ">=3.1.4"
matrix-common = "^1.2.1"
matrix-common = "^1.3.0"
# We need packaging.requirements.Requirement, added in 16.1.
packaging = ">=16.1"
# At the time of writing, we only use functions from the version `importlib.metadata`
@@ -161,6 +173,15 @@ importlib_metadata = { version = ">=1.4", python = "<3.8" }
# This is the most recent version of Pydantic with available on common distros.
pydantic = ">=1.7.4"
# This is for building the rust components during "poetry install", which
# currently ignores the `build-system.requires` directive (c.f.
# https://github.com/python-poetry/poetry/issues/6154). Both `pip install` and
# `poetry build` do the right thing without this explicit dependency.
#
# This isn't really a dev-dependency, as `poetry install --no-dev` will fail,
# but the alternative is to add it to the main list of deps where it isn't
# needed.
setuptools_rust = ">=1.3"
# Optional Dependencies
@@ -285,5 +306,21 @@ twine = "*"
towncrier = ">=18.6.0rc1"
[build-system]
requires = ["poetry-core>=1.0.0"]
requires = ["poetry-core>=1.0.0", "setuptools_rust>=1.3"]
build-backend = "poetry.core.masonry.api"
[tool.cibuildwheel]
# Skip unsupported platforms (by us or by Rust).
skip = "cp36* *-musllinux_i686"
# We need a rust compiler
before-all = "curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain stable -y"
environment= { PATH = "$PATH:$HOME/.cargo/bin" }
# For some reason if we don't manually clean the build directory we
# can end up polluting the next build with a .so that is for the wrong
# Python version.
before-build = "rm -rf {project}/build"
build-frontend = "build"
test-command = "python -c 'from synapse.synapse_rust import sum_as_string; print(sum_as_string(1, 2))'"
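Outside of CI, a rough local equivalent of what `cibuildwheel` does is to build a wheel and run the same import smoke test. This is only a sketch and assumes the `build` package and a Rust toolchain are installed:

```sh
rm -rf build/                                   # mirrors the before-build cleanup
python -m build --wheel
pip install dist/matrix_synapse-*.whl
python -c 'from synapse.synapse_rust import sum_as_string; print(sum_as_string(1, 2))'
```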

rust/Cargo.toml

@@ -0,0 +1,25 @@
[package]
# We name the package `synapse` so that things like logging have the right
# logging target.
name = "synapse"
# dummy version. See pyproject.toml for Synapse's version number.
version = "0.1.0"
edition = "2021"
rust-version = "1.61.0"
[lib]
name = "synapse"
crate-type = ["cdylib"]
[package.metadata.maturin]
# This is where we tell maturin where to place the built library.
name = "synapse.synapse_rust"
[dependencies]
pyo3 = { version = "0.16.5", features = ["extension-module", "macros", "abi3", "abi3-py37"] }
[build-dependencies]
blake2 = "0.10.4"
hex = "0.4.3"
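To check that the new crate compiles on its own, without going through the Python packaging machinery, something like the following can be run from the repository root (assumes a Rust toolchain at least as new as the `rust-version` declared above):

```sh
cargo check --manifest-path rust/Cargo.toml
```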

Some files were not shown because too many files have changed in this diff.