
Compare commits


159 Commits

Author SHA1 Message Date
Erik Johnston
4fd68e2736 FIXUP 2024-08-28 15:05:56 +01:00
Erik Johnston
c5a90575bf FIXUP 2024-08-28 15:04:48 +01:00
Erik Johnston
a217155570 Merge branch 'madlittlemods/sliding-sync-pre-populate-room-meta-data' into erikj/ss_hacks 2024-08-28 14:58:31 +01:00
Erik Johnston
90d0e035dd Fix port script tests by handling empty DBs correctly 2024-08-28 14:39:25 +01:00
Erik Johnston
ab414f2ab8 Use event_auth table to get previous membership 2024-08-28 14:23:32 +01:00
Erik Johnston
6f9932d146 Handle old rows with null event_stream_ordering column 2024-08-28 14:23:06 +01:00
Erik Johnston
bb905cd02c Only run the sliding sync background updates on the main database 2024-08-28 11:44:56 +01:00
Erik Johnston
7c9c62051c Remove all rooms pulled out from the queue 2024-08-28 11:31:19 +01:00
Eric Eastwood
da463fb102 Add unique index right away for sliding_sync_joined_rooms_to_recalculate
This makes it so we can always `upsert` to avoid duplicates. Otherwise I'm not sure how to avoid inserting duplicates in certain situations (see FIXME in the diff), which would cause problems down the line for the unique index being added later.
2024-08-28 00:50:33 -05:00
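(Aside: the reason the unique index has to exist up front is that the upsert's conflict handling depends on it; with the index in place, re-queuing the same room becomes a no-op rather than a duplicate row. A minimal standalone sketch of the pattern, using plain SQLite rather than Synapse's actual schema or database helpers:)

```
import sqlite3

# Standalone illustration of the pattern described above (not Synapse code):
# the UNIQUE index is what lets the INSERT resolve conflicts instead of
# silently accumulating duplicate rows.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sliding_sync_joined_rooms_to_recalculate (room_id TEXT NOT NULL)"
)
conn.execute(
    "CREATE UNIQUE INDEX rooms_to_recalculate_room_id_idx "
    "ON sliding_sync_joined_rooms_to_recalculate (room_id)"
)

def mark_for_recalculation(room_id: str) -> None:
    # ON CONFLICT requires a matching unique index/constraint, which is why it
    # has to be added right away rather than in a later background update.
    conn.execute(
        "INSERT INTO sliding_sync_joined_rooms_to_recalculate (room_id) VALUES (?) "
        "ON CONFLICT (room_id) DO NOTHING",
        (room_id,),
    )

mark_for_recalculation("!room:example.com")
mark_for_recalculation("!room:example.com")  # second call is a no-op, not a duplicate
```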
Eric Eastwood
8468401a97 Adapt to using sliding_sync_joined_rooms_to_recalculate table 2024-08-28 00:42:14 -05:00
Eric Eastwood
53b7309f6c Add sliding_sync_joined_rooms_to_recalculate table 2024-08-27 20:48:56 -05:00
Eric Eastwood
94e1a54687 get_events(...) will omit events from unknown room versions
Thanks @erikjohnston
2024-08-27 20:01:55 -05:00
Eric Eastwood
a507f152c9 Use stream_id of some point before we fetch the current state
This is simpler, and some rooms are so old that they don't have `current_state_delta_stream` entries yet. It's easier to just take the general max `stream_id` of the whole table than the max `stream_id` for the specific room anyway.

Thanks @erikjohnston
2024-08-27 19:45:50 -05:00
Eric Eastwood
9d08bc2157 Remove debug logs 2024-08-27 19:35:05 -05:00
Eric Eastwood
56a4c0ba6e Round out tests 2024-08-27 19:34:16 -05:00
Eric Eastwood
85a60c3132 More tests 2024-08-27 19:27:24 -05:00
Eric Eastwood
e5e7269998 Add more tests 2024-08-27 18:49:53 -05:00
Eric Eastwood
c8e17f7479 Add test when no rooms 2024-08-27 18:21:25 -05:00
Eric Eastwood
4dc9e268e6 Add test for catch-up background update 2024-08-27 18:08:17 -05:00
Eric Eastwood
9a7d8c2be4 Start catch-up if nothing written yet 2024-08-27 17:28:07 -05:00
Eric Eastwood
c51a309da5 Maybe: always start background update 2024-08-27 17:11:51 -05:00
Eric Eastwood
9764f626ea Fix query in Postgres 2024-08-27 12:09:00 -05:00
Eric Eastwood
7a0c281028 Add placeholder tests 2024-08-26 19:43:52 -05:00
Eric Eastwood
7fe5d31e20 Note down caveat about forgotten 2024-08-26 18:52:10 -05:00
Eric Eastwood
53473a0eb4 Adapt sliding_sync_joined_rooms background update to use event_stream_ordering for progress
This way we can re-use it for the catch-up background process
2024-08-26 18:35:49 -05:00
Eric Eastwood
eb3c84cf45 Kick-off background update for out-of-date snapshots 2024-08-26 17:31:26 -05:00
Eric Eastwood
6a44686dc3 Why it matters 2024-08-26 16:32:59 -05:00
Eric Eastwood
a94c1dd62c Add more context for why 2024-08-26 16:27:20 -05:00
Eric Eastwood
8bddbe23bd Clear out-of-date rows 2024-08-26 16:19:47 -05:00
Eric Eastwood
addb91485f Split test cases 2024-08-26 16:11:56 -05:00
Eric Eastwood
9795556052 Update comment 2024-08-26 14:42:55 -05:00
Erik Johnston
c44db28958 Merge branch 'erikj/ss_sort_cache' into erikj/ss_hacks 2024-08-23 15:49:31 +01:00
Erik Johnston
c6204d3fa1 Speed up sort rooms via cache 2024-08-23 15:49:15 +01:00
Erik Johnston
1ead5e0c0e Merge branch 'erikj/ss_store_lists' into erikj/ss_hacks 2024-08-23 11:34:33 +01:00
Erik Johnston
8d1d8f9b3b Add TODO 2024-08-23 11:34:17 +01:00
Erik Johnston
651e520292 Store list data 2024-08-23 11:33:31 +01:00
Erik Johnston
f457dbee35 Add room_to_lists field. 2024-08-23 11:24:41 +01:00
Erik Johnston
77d3fa8b9e Fixup 2024-08-23 10:26:06 +01:00
Erik Johnston
4a9d81f6ad Fixup 2024-08-23 10:12:16 +01:00
Erik Johnston
96f476d9b4 Merge remote-tracking branch 'origin/madlittlemods/sliding-sync-pre-populate-room-meta-data' into erikj/ss_hacks 2024-08-23 10:00:55 +01:00
Erik Johnston
e2501a0bd7 Merge remote-tracking branch 'origin/erikj/ss_store_state' into erikj/ss_hacks 2024-08-23 09:54:21 +01:00
Eric Eastwood
a57d47b778 Use simple_upsert_txn for sliding_sync_membership_snapshots in background update 2024-08-22 23:59:06 -05:00
Eric Eastwood
b6a7d2bf6c Use simple_upsert_txn for sliding_sync_joined_rooms in background update 2024-08-22 23:30:00 -05:00
Eric Eastwood
f8926d07df Fix partial-stated room re-syncing state but nothing has changed
Fixes failing test in CI: `tests.handlers.test_federation.PartialJoinTestCase.test_failed_partial_join_is_clean`
```
2024-08-22 18:57:22-0500 [-] synapse.metrics.background_process_metrics - 253 - ERROR - sync_partial_state_room-0 - Background process 'sync_partial_state_room' threw an exception
	Traceback (most recent call last):
	  File "synapse/synapse/metrics/background_process_metrics.py", line 251, in run
	    return await func(*args, **kwargs)
	           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
	  File "synapse/synapse/handlers/federation.py", line 1842, in _sync_partial_state_room_wrapper
	    await self._sync_partial_state_room(
	  File "synapse/synapse/handlers/federation.py", line 1933, in _sync_partial_state_room
	    await self.state_handler.update_current_state(room_id)
	  File "synapse/synapse/state/__init__.py", line 554, in update_current_state
	    await self._storage_controllers.persistence.update_current_state(room_id)
	  File "synapse/synapse/storage/controllers/persist_events.py", line 491, in update_current_state
	    await self._event_persist_queue.add_to_queue(
	  File "synapse/synapse/storage/controllers/persist_events.py", line 245, in add_to_queue
	    res = await make_deferred_yieldable(end_item.deferred.observe())
	          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	  File "synapse/synapse/storage/controllers/persist_events.py", line 288, in handle_queue_loop
	    ret = await self._per_item_callback(room_id, item.task)
	          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	  File "synapse/synapse/storage/controllers/persist_events.py", line 370, in _process_event_persist_queue_task
	    await self._update_current_state(room_id, task)
	  File "synapse/synapse/storage/controllers/persist_events.py", line 507, in _update_current_state
	    await self.persist_events_store._calculate_sliding_sync_table_changes(
	  File "synapse/synapse/storage/databases/main/events.py", line 624, in _calculate_sliding_sync_table_changes
	    assert most_recent_event_pos_results, (
	           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
	AssertionError: We should not be seeing `None` here because we are still in the room (!room:example.com) and it should at-least have a join membership event that's keeping us here.
```
2024-08-22 19:06:04 -05:00
Eric Eastwood
21cc97ba9d Use simple_upsert_many_txn for sliding_sync_membership_snapshots
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726820006
2024-08-22 18:52:05 -05:00
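(For reference, the rough shape of a batched upsert with that helper. This is a sketch only, assuming Synapse's `DatabasePool.simple_upsert_many_txn` signature; the column names are illustrative, not the table's real schema:)

```
# Sketch of the call shape (column names illustrative, not the real schema).
def _upsert_membership_snapshots_txn(txn, db_pool, snapshots):
    db_pool.simple_upsert_many_txn(
        txn,
        table="sliding_sync_membership_snapshots",
        key_names=("room_id", "user_id"),
        key_values=[(s["room_id"], s["user_id"]) for s in snapshots],
        value_names=("membership", "membership_event_stream_ordering"),
        value_values=[
            (s["membership"], s["membership_event_stream_ordering"])
            for s in snapshots
        ],
    )
```

A single `*_many` call batches all the rows into one helper invocation instead of issuing one upsert statement per row inside the transaction.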
Eric Eastwood
fdb8b5931f Correct comment 2024-08-22 18:43:51 -05:00
Eric Eastwood
4b866c4fca Simplify what we need to think about to grab the best effort value 2024-08-22 18:36:43 -05:00
Eric Eastwood
088a4c7cf0 Use simple_upsert_txn to update sliding_sync_joined_rooms
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726817206
2024-08-22 18:26:37 -05:00
Eric Eastwood
0726a6d58b Derive best effort stream_ordering outside of the transaction
See https://github.com/element-hq/synapse/pull/17512#discussion_r1727995882
2024-08-22 18:13:00 -05:00
Eric Eastwood
6edc4c78ce Allow for no bump_stamp (fix portdb CI job)
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725998219
2024-08-22 17:09:43 -05:00
Eric Eastwood
44432e2118 Move tests to dedicated file
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726849798
2024-08-22 16:56:09 -05:00
Eric Eastwood
bcba8cccfe No need for transaction
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726844107
2024-08-22 16:52:31 -05:00
Eric Eastwood
693c06b2f1 Move away from backfill language 2024-08-22 16:48:02 -05:00
Eric Eastwood
4d87fa61c6 "backfill" -> "bg_update"
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726837698
2024-08-22 16:44:02 -05:00
Eric Eastwood
d61aada8ba Simplify _update_sliding_sync_tables_with_new_persisted_events_txn()
See
https://github.com/element-hq/synapse/pull/17512#discussion_r1719997640
https://github.com/element-hq/synapse/pull/17512#discussion_r1726828894
https://github.com/element-hq/synapse/pull/17512#discussion_r1726836440
2024-08-22 16:35:59 -05:00
Eric Eastwood
6723824c4a Prefer simple_delete_many_txn
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726819380
2024-08-22 15:47:44 -05:00
Eric Eastwood
980ee9aad6 Prefer simple_update_txn
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726808112
2024-08-22 15:40:08 -05:00
Erik Johnston
03eac5ae60 Newsfile 2024-08-22 18:04:23 +01:00
Erik Johnston
ed7591cbef Remove mark_token_seen 2024-08-22 18:04:23 +01:00
Erik Johnston
3838b18d3b Store state 2024-08-22 18:04:23 +01:00
Erik Johnston
b3d8e2d2bd Add simple_insert_returning_txn 2024-08-22 18:04:23 +01:00
Erik Johnston
d1ee253bef Allow making columns AUTOINCREMENT 2024-08-22 18:04:23 +01:00
Erik Johnston
87d53368d7 Newsfile 2024-08-22 18:03:57 +01:00
Erik Johnston
e34d634778 Make PerConnectionState immutable 2024-08-22 18:03:57 +01:00
Erik Johnston
7087c7c3d5 Make RoomSyncConfig immutable 2024-08-22 18:03:57 +01:00
Eric Eastwood
fc73b6ffc9 Rename insert_key/insert_value
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726805894
2024-08-22 11:44:57 -05:00
Eric Eastwood
9b8d2017af Check events_and_context for state events
See https://github.com/element-hq/synapse/pull/17512#discussion_r1726798803
2024-08-22 11:34:40 -05:00
Erik Johnston
5b77f4a67a Update mypy plugin to handle enums and typevars 2024-08-22 17:04:21 +01:00
Erik Johnston
e2ade85250 Move sliding sync types 2024-08-22 17:04:21 +01:00
Eric Eastwood
339500d067 Fix sub-query selecting multiple rows 2024-08-21 22:56:25 -05:00
Eric Eastwood
31300f4ce5 More docstring 2024-08-21 22:15:19 -05:00
Eric Eastwood
b45b1896aa Fill out docstring 2024-08-21 22:09:57 -05:00
Eric Eastwood
ee2ef0b4d9 Add forgotten column 2024-08-21 21:54:22 -05:00
Eric Eastwood
0a938b137a Add missing boolean column to portdb script 2024-08-21 19:59:35 -05:00
Eric Eastwood
97248362d0 Log which room is strange 2024-08-21 19:26:14 -05:00
Eric Eastwood
02711552cf Better handle none case 2024-08-21 19:11:08 -05:00
Eric Eastwood
8ddf5c7235 Add tombstone to tests 2024-08-21 19:05:59 -05:00
Eric Eastwood
513ec8e906 Update tests 2024-08-21 18:51:04 -05:00
Eric Eastwood
cda2311520 Add tombstone_successor_room_id column 2024-08-21 18:21:44 -05:00
Eric Eastwood
c612572d12 Move away from stream_id
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725806543
2024-08-21 17:14:34 -05:00
Eric Eastwood
5b1db39bb7 Add sender column so we can tell leaves from kicks 2024-08-21 16:42:12 -05:00
Eric Eastwood
e7a3328228 Pre-populate membership and membership_event_stream_ordering
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725311745
2024-08-21 16:20:00 -05:00
Eric Eastwood
f6d7ffd9c5 Move _calculate_sliding_sync_table_changes(...) after we assign stream_ordering to events
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725728637
2024-08-21 16:10:14 -05:00
Eric Eastwood
772c501bb6 Use available stream_id
See https://github.com/element-hq/synapse/pull/17512#discussion_r1725310035
2024-08-21 15:04:51 -05:00
Eric Eastwood
cda92af4a6 No need to update event_stream_ordering/bump_stamp ON CONFLICT 2024-08-21 14:46:50 -05:00
Eric Eastwood
d3f90e4bd8 Get full events for _sliding_sync_joined_rooms_backfill 2024-08-21 14:39:54 -05:00
Eric Eastwood
a5e06c6a8d Move back to the main store 2024-08-21 11:14:15 -05:00
Eric Eastwood
0233e20aa3 Use full event version after solving the circular import issues 2024-08-21 10:44:49 -05:00
Eric Eastwood
357132db1d Go back to simpler fetching senders 2024-08-20 21:54:03 -05:00
Eric Eastwood
726a8e9698 Attempt getting real events in background update (needs work) 2024-08-20 21:48:44 -05:00
Eric Eastwood
cc200ee9f5 Merge branch 'develop' into madlittlemods/sliding-sync-pre-populate-room-meta-data 2024-08-20 17:25:48 -05:00
Eric Eastwood
3eb77c3a2a Add sanity checks and fix wrong variable usage 2024-08-20 17:24:43 -05:00
Eric Eastwood
45c89ec625 Move pre-processing completely outside transaction 2024-08-20 15:41:53 -05:00
Eric Eastwood
ac5b05c86b Use TypedDict 2024-08-20 13:29:56 -05:00
Eric Eastwood
2964c567d3 Use dicts 2024-08-20 13:21:19 -05:00
Eric Eastwood
95d39db772 Closer types 2024-08-20 11:55:24 -05:00
Eric Eastwood
6cc6bdbedf Start of moving logic outside of the transaction (pre-process) 2024-08-20 11:10:34 -05:00
Eric Eastwood
574a04a40f Test state reset on membership 2024-08-19 23:30:25 -05:00
Eric Eastwood
8ee2e114dd Add test to handle state reset in the meta data 2024-08-19 23:22:24 -05:00
Eric Eastwood
98fb56e5fe Prefer _update_sliding_sync_tables_with_new_persisted_events_txn(...) to do the right thing
See https://github.com/element-hq/synapse/pull/17512#discussion_r1719992152
2024-08-19 22:30:50 -05:00
Eric Eastwood
df0c57d383 Merge branch 'develop' into madlittlemods/sliding-sync-pre-populate-room-meta-data 2024-08-19 16:34:41 -05:00
Eric Eastwood
d2f5247e77 Update comment 2024-08-16 00:15:03 -05:00
Eric Eastwood
c89d859c7c Fill in docstrings 2024-08-15 23:52:01 -05:00
Eric Eastwood
2ec93e3f0d Move function next to other helpers 2024-08-15 23:39:39 -05:00
Eric Eastwood
fa63c02648 Fix lints 2024-08-15 23:30:16 -05:00
Eric Eastwood
419be7c6b2 Finish off background update tests 2024-08-15 23:29:29 -05:00
Eric Eastwood
ef5f0fca3a Add more tests 2024-08-15 23:18:50 -05:00
Eric Eastwood
fb5af8f5fa Add background update test for sliding_sync_membership_snapshots 2024-08-15 22:13:32 -05:00
Eric Eastwood
8461faf384 Add historical case to background update 2024-08-15 21:56:12 -05:00
Eric Eastwood
6c2fc1d20f Move background updates to StateBackgroundUpdateStore
So we can access `_get_state_groups_from_groups_txn(...)`
2024-08-15 20:51:43 -05:00
Eric Eastwood
cbeff57402 Use helper 2024-08-15 20:31:57 -05:00
Eric Eastwood
4b42e44ef9 Work on background update for sliding_sync_membership_snapshots 2024-08-15 00:21:35 -05:00
Eric Eastwood
d113e743ae Fix lints 2024-08-14 19:30:52 -05:00
Eric Eastwood
23e0d34a2d Add more tests 2024-08-14 19:30:22 -05:00
Eric Eastwood
1c931cb3e7 Add background update for sliding_sync_joined_rooms 2024-08-14 19:19:15 -05:00
Eric Eastwood
9f551f0e97 Fix lints 2024-08-14 11:32:33 -05:00
Eric Eastwood
c8508f113a Clean up tables when a room is purged/deleted 2024-08-14 11:27:57 -05:00
Eric Eastwood
f49003c35c No invites needed 2024-08-13 18:55:59 -05:00
Eric Eastwood
8b0e1692f9 More realistic remote room forgotten test 2024-08-13 18:51:11 -05:00
Eric Eastwood
5df94f47b5 Fix running into StopIteration
More context about how/why `StopIteration` was being silently ignored, which made this problem harder to debug. See
https://github.com/element-hq/synapse/pull/17512#discussion_r1715954505
2024-08-13 17:15:09 -05:00
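(A standalone illustration of this failure mode, not the actual Synapse code path: a bare `next(...)` that finds no match raises `StopIteration`, and if that escapes into surrounding iteration machinery it is treated as "end of iteration", so results silently go missing instead of an error being raised:)

```
def first_join(memberships):
    # Bug: if there is no join, this raises StopIteration rather than failing loudly.
    return next(m for m in memberships if m == "join")

rows = [["join", "leave"], ["leave"], ["join"]]

# map() lets the StopIteration escape from its __next__, so list() just stops
# early: the result is silently truncated to ['join'].
print(list(map(first_join, rows)))

def first_join_fixed(memberships):
    # Passing a default to next() keeps StopIteration out of the control flow,
    # so the "no join" case can be handled (or raised) explicitly.
    membership = next((m for m in memberships if m == "join"), None)
    if membership is None:
        raise ValueError("no join membership found")
    return membership
```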
Eric Eastwood
3566abd9bc Fix boolean schema for Postgres 2024-08-13 15:14:48 -05:00
Eric Eastwood
96a4614f92 Update fixme comment 2024-08-13 14:50:11 -05:00
Eric Eastwood
dc447a673f Clarify when/why we upsert 2024-08-13 14:47:17 -05:00
Eric Eastwood
32ae162278 Fix rejecting invite when no_longer_in_room (and other non-join transitions) 2024-08-13 14:35:24 -05:00
Eric Eastwood
a90f3d4ae2 Merge branch 'develop' into madlittlemods/sliding-sync-pre-populate-room-meta-data 2024-08-13 12:28:36 -05:00
Eric Eastwood
eb3a185cfc Fix federating backfill test 2024-08-13 12:24:53 -05:00
Eric Eastwood
517946d940 Fix lints 2024-08-12 20:31:25 -05:00
Eric Eastwood
f600eacd0d Adjust test description 2024-08-12 20:30:48 -05:00
Eric Eastwood
3423eb72d5 Add test to make sure snapshot evolves with membership 2024-08-12 20:29:58 -05:00
Eric Eastwood
5589ae48ca Add test for remote invite rejected/retracted 2024-08-12 20:14:14 -05:00
Eric Eastwood
83a5858083 Add tests for remote invites 2024-08-12 19:57:28 -05:00
Eric Eastwood
3e1f24ea11 User ID is not unique because a user is joined to many rooms 2024-08-12 19:46:23 -05:00
Eric Eastwood
ab074f5335 Fix events from rooms we're not joined to affecting the joined room stream ordering 2024-08-12 19:40:53 -05:00
Eric Eastwood
53232e6df5 Fill in for remote invites (out of band, outlier membership) 2024-08-12 18:14:02 -05:00
Eric Eastwood
f069659343 Fix lints 2024-08-12 15:49:40 -05:00
Eric Eastwood
552f8f496d Update descriptions 2024-08-12 15:43:06 -05:00
Eric Eastwood
0af3b4822c Refactor to sliding_sync_membership_snapshots 2024-08-12 15:10:44 -05:00
Eric Eastwood
ed47a7eff5 Fix bumping when events are persisted out of order 2024-08-12 11:27:17 -05:00
Eric Eastwood
3367422fd3 Need to fix upsert 2024-08-08 18:23:50 -05:00
Eric Eastwood
ca909013c8 Fill in stream_ordering/bump_stamp for any event being persisted 2024-08-08 17:49:15 -05:00
Eric Eastwood
cc2d2b6b9f Fill in stream_ordering/bump_stamp when we add current state to the joined rooms table 2024-08-08 15:41:55 -05:00
Eric Eastwood
bc3796d333 Fix some lints 2024-08-07 20:49:46 -05:00
Eric Eastwood
5cf3ad3d7f Handle server left room 2024-08-07 20:47:13 -05:00
Eric Eastwood
bf78692ba0 Handle to_delete 2024-08-07 20:09:53 -05:00
Eric Eastwood
a1aaa47dad Add more tests 2024-08-07 19:58:51 -05:00
Eric Eastwood
c590474757 Test non-joins 2024-08-07 19:24:58 -05:00
Eric Eastwood
5b1053f23e Better test assertions 2024-08-07 19:07:43 -05:00
Eric Eastwood
68a3daf605 Fix comparison and insert 2024-08-07 18:10:24 -05:00
Eric Eastwood
61cea4e9b7 Closer to right 2024-08-07 18:07:53 -05:00
Eric Eastwood
87d95615d4 Change to updating the latest membership in the room 2024-08-07 16:37:17 -05:00
Eric Eastwood
cb335805d4 Server left room test 2024-08-07 10:46:34 -05:00
Eric Eastwood
2f3bd27284 Test is running 2024-08-06 16:50:14 -05:00
Eric Eastwood
f96d0c36a3 Special treatment for boolean columns
See 1dfa59b238/docs/development/database_schema.md (boolean-columns)
2024-08-06 15:30:51 -05:00
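(Per the linked `database_schema.md` section, the special treatment is needed because SQLite stores booleans as `0`/`1` integers, so any new boolean column has to be registered with the SQLite-to-Postgres port script to be converted correctly — which is also what the later "Add missing boolean column to portdb script" commit does. A hedged sketch of what that registration looks like in `synapse/_scripts/synapse_port_db.py`; the column name here is illustrative:)

```
# synapse/_scripts/synapse_port_db.py (sketch): columns listed here are cast
# from SQLite's integer 0/1 values to real booleans when porting to Postgres.
BOOLEAN_COLUMNS = {
    # ... existing tables ...
    "sliding_sync_membership_snapshots": ["forgotten"],  # illustrative column name
}
```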
Eric Eastwood
1a251d5211 Fill in sliding_sync_non_join_memberships when current state changes 2024-08-06 15:18:34 -05:00
Eric Eastwood
2b5f07d714 Start of updating sliding_sync_joined_rooms 2024-08-05 22:34:21 -05:00
Eric Eastwood
ad1c887b4c Merge branch 'develop' into madlittlemods/sliding-sync-pre-populate-room-meta-data 2024-08-05 18:19:17 -05:00
Eric Eastwood
8392d6ac3b Use foreign keys 2024-07-31 19:00:55 -05:00
Eric Eastwood
e7e9cb289d Add changelog 2024-07-31 18:43:50 -05:00
Eric Eastwood
d26ac746d4 Start thinking about schemas 2024-07-31 18:40:23 -05:00
1025 changed files with 36061 additions and 100973 deletions


@@ -1,10 +0,0 @@
#!/bin/sh
set -xeu
# On 32-bit Linux platforms, we need libatomic1 to use rustup
if command -v yum &> /dev/null; then
yum install -y libatomic
fi
# Install a Rust toolchain
curl https://sh.rustup.rs -sSf | sh -s -- --default-toolchain stable -y --profile minimal

147
.ci/scripts/auditwheel_wrapper.py Executable file

@@ -0,0 +1,147 @@
#!/usr/bin/env python
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
# Copyright (C) 2023 New Vector, Ltd
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# See the GNU Affero General Public License for more details:
# <https://www.gnu.org/licenses/agpl-3.0.html>.
#
# Originally licensed under the Apache License, Version 2.0:
# <http://www.apache.org/licenses/LICENSE-2.0>.
#
# [This file includes modifications made by New Vector Limited]
#
#
# Wraps `auditwheel repair` to first check if we're repairing a potentially abi3
# compatible wheel, if so rename the wheel before repairing it.
import argparse
import os
import subprocess
from typing import Optional
from zipfile import ZipFile

from packaging.tags import Tag
from packaging.utils import parse_wheel_filename
from packaging.version import Version


def check_is_abi3_compatible(wheel_file: str) -> None:
    """Check the contents of the built wheel for any `.so` files that are *not*
    abi3 compatible.
    """
    with ZipFile(wheel_file, "r") as wheel:
        for file in wheel.namelist():
            if not file.endswith(".so"):
                continue

            if not file.endswith(".abi3.so"):
                raise Exception(f"Found non-abi3 lib: {file}")


def cpython(wheel_file: str, name: str, version: Version, tag: Tag) -> str:
    """Replaces the cpython wheel file with a ABI3 compatible wheel"""
    if tag.abi == "abi3":
        # Nothing to do.
        return wheel_file

    check_is_abi3_compatible(wheel_file)

    # HACK: it seems that some older versions of pip will consider a wheel marked
    # as macosx_11_0 as incompatible with Big Sur. I haven't done the full archaeology
    # here; there are some clues in
    #     https://github.com/pantsbuild/pants/pull/12857
    #     https://github.com/pypa/pip/issues/9138
    #     https://github.com/pypa/packaging/pull/319
    # Empirically this seems to work, note that macOS 11 and 10.16 are the same,
    # both versions are valid for backwards compatibility.
    platform = tag.platform.replace("macosx_11_0", "macosx_10_16")
    abi3_tag = Tag(tag.interpreter, "abi3", platform)

    dirname = os.path.dirname(wheel_file)
    new_wheel_file = os.path.join(
        dirname,
        f"{name}-{version}-{abi3_tag}.whl",
    )

    os.rename(wheel_file, new_wheel_file)

    print("Renamed wheel to", new_wheel_file)

    return new_wheel_file


def main(wheel_file: str, dest_dir: str, archs: Optional[str]) -> None:
    """Entry point"""

    # Parse the wheel file name into its parts. Note that `parse_wheel_filename`
    # normalizes the package name (i.e. it converts matrix_synapse ->
    # matrix-synapse), which is not what we want.
    _, version, build, tags = parse_wheel_filename(os.path.basename(wheel_file))
    name = os.path.basename(wheel_file).split("-")[0]

    if len(tags) != 1:
        # We expect only a wheel file with only a single tag
        raise Exception(f"Unexpectedly found multiple tags: {tags}")

    tag = next(iter(tags))

    if build:
        # We don't use build tags in Synapse
        raise Exception(f"Unexpected build tag: {build}")

    # If the wheel is for cpython then convert it into an abi3 wheel.
    if tag.interpreter.startswith("cp"):
        wheel_file = cpython(wheel_file, name, version, tag)

    # Finally, repair the wheel.
    if archs is not None:
        # If we are given archs then we are on macos and need to use
        # `delocate-listdeps`.
        subprocess.run(["delocate-listdeps", wheel_file], check=True)
        subprocess.run(
            ["delocate-wheel", "--require-archs", archs, "-w", dest_dir, wheel_file],
            check=True,
        )
    else:
        subprocess.run(["auditwheel", "repair", "-w", dest_dir, wheel_file], check=True)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Tag wheel as abi3 and repair it.")

    parser.add_argument(
        "--wheel-dir",
        "-w",
        metavar="WHEEL_DIR",
        help="Directory to store delocated wheels",
        required=True,
    )

    parser.add_argument(
        "--require-archs",
        metavar="archs",
        default=None,
    )

    parser.add_argument(
        "wheel_file",
        metavar="WHEEL_FILE",
    )

    args = parser.parse_args()

    wheel_file = args.wheel_file
    wheel_dir = args.wheel_dir
    archs = args.require_archs

    main(wheel_file, wheel_dir, archs)
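(As the argparse block above shows, the wrapper is invoked as `auditwheel_wrapper.py --wheel-dir <dest> [--require-archs <archs>] <wheel_file>`: cpython wheels are checked for non-abi3 `.so` files, renamed to an `abi3` tag, and then handed to `auditwheel repair`, or to `delocate-wheel` on macOS when `--require-archs` is given.)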


@@ -35,58 +35,49 @@ IS_PR = os.environ["GITHUB_REF"].startswith("refs/pull/")
# First calculate the various trial jobs.
#
# For PRs, we only run each type of test with the oldest and newest Python
# version that's supported. The oldest version ensures we don't accidentally
# introduce syntax or code that's too new, and the newest ensures we don't use
# code that's been dropped in the latest supported Python version.
# For PRs, we only run each type of test with the oldest Python version supported (which
# is Python 3.8 right now)
trial_sqlite_tests = [
{
"python-version": "3.10",
"python-version": "3.8",
"database": "sqlite",
"extras": "all",
},
{
"python-version": "3.14",
"database": "sqlite",
"extras": "all",
},
}
]
if not IS_PR:
# Otherwise, check all supported Python versions.
#
# Avoiding running all of these versions on every PR saves on CI time.
trial_sqlite_tests.extend(
{
"python-version": version,
"database": "sqlite",
"extras": "all",
}
for version in ("3.11", "3.12", "3.13")
for version in ("3.9", "3.10", "3.11", "3.12")
)
# Only test postgres against the earliest and latest Python versions that we
# support in order to save on CI time.
trial_postgres_tests = [
{
"python-version": "3.10",
"python-version": "3.8",
"database": "postgres",
"postgres-version": "14",
"postgres-version": "11",
"extras": "all",
},
{
"python-version": "3.14",
"database": "postgres",
"postgres-version": "17",
"extras": "all",
},
}
]
# Ensure that Synapse passes unit tests even with no extra dependencies installed.
if not IS_PR:
trial_postgres_tests.append(
{
"python-version": "3.12",
"database": "postgres",
"postgres-version": "16",
"extras": "all",
}
)
trial_no_extra_tests = [
{
"python-version": "3.10",
"python-version": "3.8",
"database": "sqlite",
"extras": "",
}
@@ -108,24 +99,24 @@ set_output("trial_test_matrix", test_matrix)
# First calculate the various sytest jobs.
#
# For each type of test we only run on bookworm on PRs
# For each type of test we only run on focal on PRs
sytest_tests = [
{
"sytest-tag": "bookworm",
"sytest-tag": "focal",
},
{
"sytest-tag": "bookworm",
"sytest-tag": "focal",
"postgres": "postgres",
},
{
"sytest-tag": "bookworm",
"sytest-tag": "focal",
"postgres": "multi-postgres",
"workers": "workers",
},
{
"sytest-tag": "bookworm",
"sytest-tag": "focal",
"postgres": "multi-postgres",
"workers": "workers",
"reactor": "asyncio",
@@ -136,11 +127,11 @@ if not IS_PR:
sytest_tests.extend(
[
{
"sytest-tag": "bookworm",
"sytest-tag": "focal",
"reactor": "asyncio",
},
{
"sytest-tag": "bookworm",
"sytest-tag": "focal",
"postgres": "postgres",
"reactor": "asyncio",
},


@@ -11,12 +11,12 @@ with open("poetry.lock", "rb") as f:
try:
lock_version = lockfile["metadata"]["lock-version"]
assert lock_version == "2.1"
assert lock_version == "2.0"
except Exception:
print(
"""\
Lockfile is not version 2.1. You probably need to upgrade poetry on your local box
and re-run `poetry lock`. See the Poetry cheat sheet at
Lockfile is not version 2.0. You probably need to upgrade poetry on your local box
and re-run `poetry lock --no-update`. See the Poetry cheat sheet at
https://element-hq.github.io/synapse/develop/development/dependencies.html
"""
)

36
.ci/scripts/prepare_old_deps.sh Executable file

@@ -0,0 +1,36 @@
#!/usr/bin/env bash
# this script is run by GitHub Actions in a plain `focal` container; it
# - installs the minimal system requirements, and poetry;
# - patches the project definition file to refer to old versions only;
# - creates a venv with these old versions using poetry; and finally
# - invokes `trial` to run the tests with old deps.
set -ex
# Prevent virtualenv from auto-updating pip to an incompatible version
export VIRTUALENV_NO_DOWNLOAD=1
# TODO: in the future, we could use an implementation of
# https://github.com/python-poetry/poetry/issues/3527
# https://github.com/pypa/pip/issues/8085
# to select the lowest possible versions, rather than resorting to this sed script.
# Patch the project definitions in-place:
# - Replace all lower and tilde bounds with exact bounds
# - Replace all caret bounds---but not the one that defines the supported Python version!
# - Delete all lines referring to psycopg2 --- so no testing of postgres support.
# - Use pyopenssl 17.0, which is the oldest version that works with
# a `cryptography` compiled against OpenSSL 1.1.
# - Omit systemd: we're not logging to journal here.
sed -i \
-e "s/[~>]=/==/g" \
-e '/^python = "^/!s/\^/==/g' \
-e "/psycopg2/d" \
-e 's/pyOpenSSL = "==16.0.0"/pyOpenSSL = "==17.0.0"/' \
-e '/systemd/d' \
pyproject.toml
echo "::group::Patched pyproject.toml"
cat pyproject.toml
echo "::endgroup::"


@@ -61,7 +61,7 @@ poetry run update_synapse_database --database-config .ci/postgres-config-unporte
echo "+++ Comparing ported schema with unported schema"
# Ignore the tables that portdb creates. (Should it tidy them up when the porting is completed?)
psql synapse -c "DROP TABLE port_from_sqlite3;"
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner --restrict-key=TESTING synapse_unported > unported.sql
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner --restrict-key=TESTING synapse > ported.sql
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner synapse_unported > unported.sql
pg_dump --format=plain --schema-only --no-tablespaces --no-acl --no-owner synapse > ported.sql
# By default, `diff` returns zero if there are no changes and nonzero otherwise
diff -u unported.sql ported.sql | tee schema_diff
diff -u unported.sql ported.sql | tee schema_diff


@@ -1,29 +0,0 @@
#!/usr/bin/env bash
set -euo pipefail
# 1) Resolve project ID.
PROJECT_ID=$(gh project view "$PROJECT_NUMBER" --owner "$PROJECT_OWNER" --format json | jq -r '.id')
# 2) Find existing item (project card) for this issue.
ITEM_ID=$(
gh project item-list "$PROJECT_NUMBER" --owner "$PROJECT_OWNER" --format json \
| jq -r --arg url "$ISSUE_URL" '.items[] | select(.content.url==$url) | .id' | head -n1
)
# 3) If one doesn't exist, add this issue to the project.
if [ -z "${ITEM_ID:-}" ]; then
ITEM_ID=$(gh project item-add "$PROJECT_NUMBER" --owner "$PROJECT_OWNER" --url "$ISSUE_URL" --format json | jq -r '.id')
fi
# 4) Get Status field id + the option id for TARGET_STATUS.
FIELDS_JSON=$(gh project field-list "$PROJECT_NUMBER" --owner "$PROJECT_OWNER" --format json)
STATUS_FIELD=$(echo "$FIELDS_JSON" | jq -r '.fields[] | select(.name=="Status")')
STATUS_FIELD_ID=$(echo "$STATUS_FIELD" | jq -r '.id')
OPTION_ID=$(echo "$STATUS_FIELD" | jq -r --arg name "$TARGET_STATUS" '.options[] | select(.name==$name) | .id')
if [ -z "${OPTION_ID:-}" ]; then
echo "No Status option named \"$TARGET_STATUS\" found"; exit 1
fi
# 5) Set Status (moves item to the matching column in the board view).
gh project item-edit --id "$ITEM_ID" --project-id "$PROJECT_ID" --field-id "$STATUS_FIELD_ID" --single-select-option-id "$OPTION_ID"


@@ -26,8 +26,3 @@ c4268e3da64f1abb5b31deaeb5769adb6510c0a7
# Update black to 23.1.0 (https://github.com/matrix-org/synapse/pull/15103)
9bb2eac71962970d02842bca441f4bcdbbf93a11
# Use type hinting generics in standard collections (https://github.com/element-hq/synapse/pull/19046)
fc244bb592aa481faf28214a2e2ce3bb4e95d990
# Write union types as X | Y where possible (https://github.com/element-hq/synapse/pull/19111)
fcac7e0282b074d4bd3414d1c9c181e9701875d9

1
.gitattributes vendored

@@ -1 +0,0 @@
.github/workflows/* merge=ours


@@ -9,4 +9,5 @@
- End with either a period (.) or an exclamation mark (!).
- Start with a capital letter.
- Feel free to credit yourself, by adding a sentence "Contributed by @github_username." or "Contributed by [Your Name]." to the end of the entry.
* [ ] [Code style](https://element-hq.github.io/synapse/latest/code_style.html) is correct (run the [linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))
* [ ] [Code style](https://element-hq.github.io/synapse/latest/code_style.html) is correct
(run the [linters](https://element-hq.github.io/synapse/latest/development/contributing_guide.html#run-the-linters))


@@ -1,92 +1,23 @@
version: 2
# As dependabot is currently only run on a weekly basis, we raise the
# open-pull-requests-limit to 10 (from the default of 5) to better ensure we
# don't continuously grow a backlog of updates.
updates:
- # "pip" is the correct setting for poetry, per https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file#package-ecosystem
package-ecosystem: "pip"
directory: "/"
open-pull-requests-limit: 10
schedule:
interval: "weekly"
# Group patch updates to packages together into a single PR, as they rarely
# if ever contain breaking changes that need to be reviewed separately.
#
# Fewer PRs means a streamlined review process.
#
# Python packages follow semantic versioning, and tend to only introduce
# breaking changes in major version bumps. Thus, we'll group minor and patch
# versions together.
groups:
minor-and-patches:
applies-to: version-updates
patterns:
- "*"
update-types:
- "minor"
- "patch"
# Prevent pulling packages that were recently updated to help mitigate
# supply chain attacks. 14 days was taken from the recommendation at
# https://blog.yossarian.net/2025/11/21/We-should-all-be-using-dependency-cooldowns
# where the author noted that 9/10 attacks would have been mitigated by a
# two week cooldown.
#
# The cooldown only applies to general updates; security updates will still
# be pulled in as soon as possible.
cooldown:
default-days: 14
- package-ecosystem: "docker"
directory: "/docker"
open-pull-requests-limit: 10
schedule:
interval: "weekly"
# For container versions, breaking changes are also typically only introduced in major
# package bumps.
groups:
minor-and-patches:
applies-to: version-updates
patterns:
- "*"
update-types:
- "minor"
- "patch"
cooldown:
default-days: 14
- package-ecosystem: "github-actions"
directory: "/"
open-pull-requests-limit: 10
schedule:
interval: "weekly"
# Similarly for GitHub Actions, breaking changes are typically only introduced in major
# package bumps.
groups:
minor-and-patches:
applies-to: version-updates
patterns:
- "*"
update-types:
- "minor"
- "patch"
cooldown:
default-days: 14
- package-ecosystem: "cargo"
directory: "/"
open-pull-requests-limit: 10
versioning-strategy: "lockfile-only"
schedule:
interval: "weekly"
# The Rust ecosystem is special in that breaking changes are often introduced
# in minor version bumps, as packages typically stay pre-1.0 for a long time.
# Thus we specifically keep minor version bumps separate in their own PRs.
groups:
patches:
applies-to: version-updates
patterns:
- "*"
update-types:
- "patch"
cooldown:
default-days: 14

99
.github/workflows/docker.yml vendored Normal file

@@ -0,0 +1,99 @@
# GitHub actions workflow which builds and publishes the docker images.
name: Build docker images
on:
push:
tags: ["v*"]
branches: [ master, main, develop ]
workflow_dispatch:
permissions:
contents: read
packages: write
id-token: write # needed for signing the images with GitHub OIDC Token
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Set up QEMU
id: qemu
uses: docker/setup-qemu-action@v3
with:
platforms: arm64
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
- name: Inspect builder
run: docker buildx inspect
- name: Install Cosign
uses: sigstore/cosign-installer@v3.6.0
- name: Checkout repository
uses: actions/checkout@v4
- name: Extract version from pyproject.toml
# Note: explicitly requesting bash will mean bash is invoked with `-eo pipefail`, see
# https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsshell
shell: bash
run: |
echo "SYNAPSE_VERSION=$(grep "^version" pyproject.toml | sed -E 's/version\s*=\s*["]([^"]*)["]/\1/')" >> $GITHUB_ENV
- name: Log in to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Log in to GHCR
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Calculate docker image tag
id: set-tag
uses: docker/metadata-action@master
with:
images: |
docker.io/matrixdotorg/synapse
ghcr.io/element-hq/synapse
flavor: |
latest=false
tags: |
type=raw,value=develop,enable=${{ github.ref == 'refs/heads/develop' }}
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/master' }}
type=raw,value=latest,enable=${{ github.ref == 'refs/heads/main' }}
type=pep440,pattern={{raw}}
- name: Build and push all platforms
id: build-and-push
uses: docker/build-push-action@v6
with:
push: true
labels: |
gitsha1=${{ github.sha }}
org.opencontainers.image.version=${{ env.SYNAPSE_VERSION }}
tags: "${{ steps.set-tag.outputs.tags }}"
file: "docker/Dockerfile"
platforms: linux/amd64,linux/arm64
# arm64 builds OOM without the git fetch setting. c.f.
# https://github.com/rust-lang/cargo/issues/10583
build-args: |
CARGO_NET_GIT_FETCH_WITH_CLI=true
- name: Sign the images with GitHub OIDC Token
env:
DIGEST: ${{ steps.build-and-push.outputs.digest }}
TAGS: ${{ steps.set-tag.outputs.tags }}
run: |
images=""
for tag in ${TAGS}; do
images+="${tag}@${DIGEST} "
done
cosign sign --yes ${images}

34
.github/workflows/docs-pr-netlify.yaml vendored Normal file

@@ -0,0 +1,34 @@
name: Deploy documentation PR preview
on:
workflow_run:
workflows: [ "Prepare documentation PR preview" ]
types:
- completed
jobs:
netlify:
if: github.event.workflow_run.conclusion == 'success' && github.event.workflow_run.event == 'pull_request'
runs-on: ubuntu-latest
steps:
# There's a 'download artifact' action, but it hasn't been updated for the workflow_run action
# (https://github.com/actions/download-artifact/issues/60) so instead we get this mess:
- name: 📥 Download artifact
uses: dawidd6/action-download-artifact@bf251b5aa9c2f7eeb574a96ee720e24f801b7c11 # v6
with:
workflow: docs-pr.yaml
run_id: ${{ github.event.workflow_run.id }}
name: book
path: book
- name: 📤 Deploy to Netlify
uses: matrix-org/netlify-pr-preview@v3
with:
path: book
owner: ${{ github.event.workflow_run.head_repository.owner.login }}
branch: ${{ github.event.workflow_run.head_branch }}
revision: ${{ github.event.workflow_run.head_sha }}
token: ${{ secrets.NETLIFY_AUTH_TOKEN }}
site_id: ${{ secrets.NETLIFY_SITE_ID }}
desc: Documentation preview
deployment_env: PR Documentation Preview

71
.github/workflows/docs-pr.yaml vendored Normal file

@@ -0,0 +1,71 @@
name: Prepare documentation PR preview
on:
pull_request:
paths:
- docs/**
- book.toml
- .github/workflows/docs-pr.yaml
- scripts-dev/schema_versions.py
jobs:
pages:
name: GitHub Pages
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
- name: Setup mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0
with:
mdbook-version: '0.4.17'
- name: Setup python
uses: actions/setup-python@v5
with:
python-version: "3.x"
- run: "pip install 'packaging>=20.0' 'GitPython>=3.1.20'"
- name: Build the documentation
# mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
# However, we're using docs/README.md for other purposes and need to pick a new page
# as the default. Let's opt for the welcome page instead.
run: |
mdbook build
cp book/welcome_and_overview.html book/index.html
- name: Upload Artifact
uses: actions/upload-artifact@v4
with:
name: book
path: book
# We'll only use this in a workflow_run, then we're done with it
retention-days: 1
link-check:
name: Check links in documentation
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Setup mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0
with:
mdbook-version: '0.4.17'
- name: Setup htmltest
run: |
wget https://github.com/wjdp/htmltest/releases/download/v0.17.0/htmltest_0.17.0_linux_amd64.tar.gz
echo '775c597ee74899d6002cd2d93076f897f4ba68686bceabe2e5d72e84c57bc0fb htmltest_0.17.0_linux_amd64.tar.gz' | sha256sum -c
tar zxf htmltest_0.17.0_linux_amd64.tar.gz
- name: Test links with htmltest
# Build the book with `./` as the site URL (to make checks on 404.html possible)
# Then run htmltest (without checking external links since that involves the network and is slow).
run: |
MDBOOK_OUTPUT__HTML__SITE_URL="./" mdbook build
./htmltest book --skip-external

87
.github/workflows/docs.yaml vendored Normal file

@@ -0,0 +1,87 @@
name: Deploy the documentation
on:
push:
branches:
# For bleeding-edge documentation
- develop
# For documentation specific to a release
- 'release-v*'
# stable docs
- master
workflow_dispatch:
jobs:
pre:
name: Calculate variables for GitHub Pages deployment
runs-on: ubuntu-latest
steps:
# Figure out the target directory.
#
# The target directory depends on the name of the branch
#
- name: Get the target directory name
id: vars
run: |
# first strip the 'refs/heads/' prefix with some shell foo
branch="${GITHUB_REF#refs/heads/}"
case $branch in
release-*)
# strip 'release-' from the name for release branches.
branch="${branch#release-}"
;;
master)
# deploy to "latest" for the master branch.
branch="latest"
;;
esac
# finally, set the 'branch-version' var.
echo "branch-version=$branch" >> "$GITHUB_OUTPUT"
outputs:
branch-version: ${{ steps.vars.outputs.branch-version }}
################################################################################
pages-docs:
name: GitHub Pages
runs-on: ubuntu-latest
needs:
- pre
steps:
- uses: actions/checkout@v4
with:
# Fetch all history so that the schema_versions script works.
fetch-depth: 0
- name: Setup mdbook
uses: peaceiris/actions-mdbook@ee69d230fe19748b7abf22df32acaa93833fad08 # v2.0.0
with:
mdbook-version: '0.4.17'
- name: Set version of docs
run: echo 'window.SYNAPSE_VERSION = "${{ needs.pre.outputs.branch-version }}";' > ./docs/website_files/version.js
- name: Setup python
uses: actions/setup-python@v5
with:
python-version: "3.x"
- run: "pip install 'packaging>=20.0' 'GitPython>=3.1.20'"
- name: Build the documentation
# mdbook will only create an index.html if we're including docs/README.md in SUMMARY.md.
# However, we're using docs/README.md for other purposes and need to pick a new page
# as the default. Let's opt for the welcome page instead.
run: |
mdbook build
cp book/welcome_and_overview.html book/index.html
# Deploy to the target directory.
- name: Deploy to gh pages
uses: peaceiris/actions-gh-pages@4f9cc6602d3f66b9c108549d475ec49e8ef4d45e # v4.0.0
with:
github_token: ${{ secrets.GITHUB_TOKEN }}
publish_dir: ./book
destination_dir: ./${{ needs.pre.outputs.branch-version }}

52
.github/workflows/fix_lint.yaml vendored Normal file

@@ -0,0 +1,52 @@
# A helper workflow to automatically fixup any linting errors on a PR. Must be
# triggered manually.
name: Attempt to automatically fix linting errors
on:
workflow_dispatch:
jobs:
fixup:
name: Fix up
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@master
with:
# We use nightly so that `fmt` correctly groups together imports, and
# clippy correctly fixes up the benchmarks.
toolchain: nightly-2022-12-01
components: rustfmt
- uses: Swatinem/rust-cache@v2
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
with:
install-project: "false"
- name: Import order (isort)
continue-on-error: true
run: poetry run isort .
- name: Code style (black)
continue-on-error: true
run: poetry run black .
- name: Semantic checks (ruff)
continue-on-error: true
run: poetry run ruff --fix .
- run: cargo clippy --all-features --fix -- -D warnings
continue-on-error: true
- run: cargo fmt
continue-on-error: true
- uses: stefanzweifel/git-auto-commit-action@v5
with:
commit_message: "Attempt to fix linting"

234
.github/workflows/latest_deps.yml vendored Normal file

@@ -0,0 +1,234 @@
# People who are freshly `pip install`ing from PyPI will pull in the latest versions of
# dependencies which match the broad requirements. Since most CI runs are against
# the locked poetry environment, run specifically against the latest dependencies to
# know if there's an upcoming breaking change.
#
# As an overview this workflow:
# - checks out develop,
# - installs from source, pulling in the dependencies like a fresh `pip install` would, and
# - runs mypy and test suites in that checkout.
#
# Based on the twisted trunk CI job.
name: Latest dependencies
on:
schedule:
- cron: 0 7 * * *
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
check_repo:
# Prevent this workflow from running on any fork of Synapse other than element-hq/synapse, as it is
# only useful to the Synapse core team.
# All other workflow steps depend on this one, thus if 'should_run_workflow' is not 'true', the rest
# of the workflow will be skipped as well.
runs-on: ubuntu-latest
outputs:
should_run_workflow: ${{ steps.check_condition.outputs.should_run_workflow }}
steps:
- id: check_condition
run: echo "should_run_workflow=${{ github.repository == 'element-hq/synapse' }}" >> "$GITHUB_OUTPUT"
mypy:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
# The dev dependencies aren't exposed in the wheel metadata (at least with current
# poetry-core versions), so we install with poetry.
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: "3.x"
poetry-version: "1.3.2"
extras: "all"
# Dump installed versions for debugging.
- run: poetry run pip list > before.txt
# Upgrade all runtime dependencies only. This is intended to mimic a fresh
# `pip install matrix-synapse[all]` as closely as possible.
- run: poetry update --no-dev
- run: poetry run pip list > after.txt && (diff -u before.txt after.txt || true)
- name: Remove unhelpful options from mypy config
run: sed -e '/warn_unused_ignores = True/d' -e '/warn_redundant_casts = True/d' -i mypy.ini
- run: poetry run mypy
trial:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
strategy:
matrix:
include:
- database: "sqlite"
- database: "postgres"
postgres-version: "14"
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.postgres-version }}
if: ${{ matrix.postgres-version }}
run: |
docker run -d -p 5432:5432 \
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
postgres:${{ matrix.postgres-version }}
- uses: actions/setup-python@v5
with:
python-version: "3.x"
- run: pip install .[all,test]
- name: Await PostgreSQL
if: ${{ matrix.postgres-version }}
timeout-minutes: 2
run: until pg_isready -h localhost; do sleep 1; done
# We nuke the local copy, as we've installed synapse into the virtualenv
# (rather than use an editable install, which we no longer support). If we
# don't do this then python can't find the native lib.
- run: rm -rf synapse/
- run: python -m twisted.trial --jobs=2 tests
env:
SYNAPSE_POSTGRES: ${{ matrix.database == 'postgres' || '' }}
SYNAPSE_POSTGRES_HOST: localhost
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
- name: Dump logs
# Logs are most useful when the command fails, always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
sytest:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
container:
image: matrixdotorg/sytest-synapse:testing
volumes:
- ${{ github.workspace }}:/src
strategy:
fail-fast: false
matrix:
include:
- sytest-tag: focal
- sytest-tag: focal
postgres: postgres
workers: workers
redis: redis
env:
POSTGRES: ${{ matrix.postgres && 1}}
WORKERS: ${{ matrix.workers && 1 }}
REDIS: ${{ matrix.redis && 1 }}
BLACKLIST: ${{ matrix.workers && 'synapse-blacklist-with-workers' }}
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- name: Ensure sytest runs `pip install`
# Delete the lockfile so sytest will `pip install` rather than `poetry install`
run: rm /src/poetry.lock
working-directory: /src
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Run SyTest
run: /bootstrap.sh synapse
working-directory: /src
- name: Summarise results.tap
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v4
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
path: |
/logs/results.tap
/logs/**/*.log*
complement:
needs: check_repo
if: "!failure() && !cancelled() && needs.check_repo.outputs.should_run_workflow == 'true'"
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- arrangement: monolith
database: SQLite
- arrangement: monolith
database: Postgres
- arrangement: workers
database: Postgres
steps:
- name: Run actions/checkout@v4 for synapse
uses: actions/checkout@v4
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@v5
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
- run: |
set -o pipefail
TEST_ONLY_IGNORE_POETRY_LOCKFILE=1 POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | synapse/.ci/scripts/gotestfmt
shell: bash
name: Run Complement Tests
# Open an issue if the build fails, so we know about it.
# Only do this if we're not experimenting with this action in a PR.
open-issue:
if: "failure() && github.event_name != 'push' && github.event_name != 'pull_request' && needs.check_repo.outputs.should_run_workflow == 'true'"
needs:
# TODO: should mypy be included here? It feels more brittle than the others.
- mypy
- trial
- sytest
- complement
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
update_existing: true
filename: .ci/latest_deps_build_failed_issue_template.md

24
.github/workflows/poetry_lockfile.yaml vendored Normal file

@@ -0,0 +1,24 @@
on:
push:
branches: ["develop", "release-*"]
paths:
- poetry.lock
pull_request:
paths:
- poetry.lock
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
check-sdists:
name: "Check locked dependencies have sdists"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.x'
- run: pip install tomli
- run: ./scripts-dev/check_locked_deps_have_sdists.py


@@ -0,0 +1,74 @@
# This task does not run complement tests, see tests.yaml instead.
# This task does not build docker images for synapse for use on docker hub, see docker.yaml instead
name: Store complement-synapse image in ghcr.io
on:
push:
branches: [ "master" ]
schedule:
- cron: '0 5 * * *'
workflow_dispatch:
inputs:
branch:
required: true
default: 'develop'
type: choice
options:
- develop
- master
# Only run this action once per pull request/branch; restart if a new commit arrives.
# C.f. https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#concurrency
# and https://docs.github.com/en/actions/reference/context-and-expression-syntax-for-github-actions#github-context
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
build:
name: Build and push complement image
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
steps:
- name: Checkout specific branch (debug build)
uses: actions/checkout@v4
if: github.event_name == 'workflow_dispatch'
with:
ref: ${{ inputs.branch }}
- name: Checkout clean copy of develop (scheduled build)
uses: actions/checkout@v4
if: github.event_name == 'schedule'
with:
ref: develop
- name: Checkout clean copy of master (on-push)
uses: actions/checkout@v4
if: github.event_name == 'push'
with:
ref: master
- name: Login to registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Work out labels for complement image
id: meta
uses: docker/metadata-action@v5
with:
images: ghcr.io/${{ github.repository }}/complement-synapse
tags: |
type=schedule,pattern=nightly,enable=${{ github.event_name == 'schedule'}}
type=raw,value=develop,enable=${{ github.event_name == 'schedule' || inputs.branch == 'develop' }}
type=raw,value=latest,enable=${{ github.event_name == 'push' || inputs.branch == 'master' }}
type=sha,format=long
- name: Run scripts-dev/complement.sh to generate complement-synapse:latest image.
run: scripts-dev/complement.sh --build-only
- name: Tag and push generated image
run: |
for TAG in ${{ join(fromJson(steps.meta.outputs.json).tags, ' ') }}; do
echo "tag and push $TAG"
docker tag complement-synapse $TAG
docker push $TAG
done

212
.github/workflows/release-artifacts.yml vendored Normal file

@@ -0,0 +1,212 @@
# GitHub actions workflow which builds the release artifacts.
name: Build release artifacts
on:
# we build on PRs and develop to (hopefully) get early warning
# of things breaking (but only build one set of debs). PRs skip
# building wheels on macOS & ARM.
pull_request:
push:
branches: ["develop", "release-*"]
# we do the full build on tags.
tags: ["v*"]
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions:
contents: write
jobs:
get-distros:
name: "Calculate list of debian distros"
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.x'
- id: set-distros
run: |
# if we're running from a tag, get the full list of distros; otherwise just use debian:sid
# NOTE: inside the actual Dockerfile-dhvirtualenv, the image name is expanded into its full image path
dists='["debian:sid"]'
if [[ $GITHUB_REF == refs/tags/* ]]; then
dists=$(scripts-dev/build_debian_packages.py --show-dists-json)
fi
echo "distros=$dists" >> "$GITHUB_OUTPUT"
# map the step outputs to job outputs
outputs:
distros: ${{ steps.set-distros.outputs.distros }}
# now build the packages with a matrix build.
build-debs:
needs: get-distros
name: "Build .deb packages"
runs-on: ubuntu-latest
strategy:
matrix:
distro: ${{ fromJson(needs.get-distros.outputs.distros) }}
steps:
- name: Checkout
uses: actions/checkout@v4
with:
path: src
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
with:
install: true
- name: Set up docker layer caching
uses: actions/cache@v4
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-
- name: Set up python
uses: actions/setup-python@v5
with:
python-version: '3.x'
- name: Build the packages
# see https://github.com/docker/build-push-action/issues/252
# for the cache magic here
run: |
./src/scripts-dev/build_debian_packages.py \
--docker-build-arg=--cache-from=type=local,src=/tmp/.buildx-cache \
--docker-build-arg=--cache-to=type=local,mode=max,dest=/tmp/.buildx-cache-new \
--docker-build-arg=--progress=plain \
--docker-build-arg=--load \
"${{ matrix.distro }}"
rm -rf /tmp/.buildx-cache
mv /tmp/.buildx-cache-new /tmp/.buildx-cache
- name: Upload debs as artifacts
uses: actions/upload-artifact@v3 # Don't upgrade to v4; broken: https://github.com/actions/upload-artifact#breaking-changes
with:
name: debs
path: debs/*
build-wheels:
name: Build wheels on ${{ matrix.os }} for ${{ matrix.arch }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ubuntu-20.04, macos-12]
arch: [x86_64, aarch64]
# is_pr is a flag used to exclude certain jobs from the matrix on PRs.
# It is not read by the rest of the workflow.
is_pr:
- ${{ startsWith(github.ref, 'refs/pull/') }}
exclude:
# Don't build macos wheels on PR CI.
- is_pr: true
os: "macos-12"
# Don't build aarch64 wheels on mac.
- os: "macos-12"
arch: aarch64
# Don't build aarch64 wheels on PR CI.
- is_pr: true
arch: aarch64
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
# setup-python@v4 doesn't impose a default python version. Need to use 3.x
# here, because `python` on osx points to Python 2.7.
python-version: "3.x"
- name: Install cibuildwheel
run: python -m pip install cibuildwheel==2.19.1
- name: Set up QEMU to emulate aarch64
if: matrix.arch == 'aarch64'
uses: docker/setup-qemu-action@v3
with:
platforms: arm64
- name: Build aarch64 wheels
if: matrix.arch == 'aarch64'
run: echo 'CIBW_ARCHS_LINUX=aarch64' >> $GITHUB_ENV
- name: Only build a single wheel on PR
if: startsWith(github.ref, 'refs/pull/')
run: echo "CIBW_BUILD="cp38-manylinux_${{ matrix.arch }}"" >> $GITHUB_ENV
- name: Build wheels
run: python -m cibuildwheel --output-dir wheelhouse
env:
# Skip testing for platforms which various libraries don't have wheels
# for, and so need extra build deps.
CIBW_TEST_SKIP: pp3*-* *i686* *musl*
# Fix Rust OOM errors on emulated aarch64: https://github.com/rust-lang/cargo/issues/10583
CARGO_NET_GIT_FETCH_WITH_CLI: true
CIBW_ENVIRONMENT_PASS_LINUX: CARGO_NET_GIT_FETCH_WITH_CLI
- uses: actions/upload-artifact@v3 # Don't upgrade to v4; broken: https://github.com/actions/upload-artifact#breaking-changes
with:
name: Wheel
path: ./wheelhouse/*.whl
build-sdist:
name: Build sdist
runs-on: ubuntu-latest
if: ${{ !startsWith(github.ref, 'refs/pull/') }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: '3.10'
- run: pip install build
- name: Build sdist
run: python -m build --sdist
- uses: actions/upload-artifact@v3 # Don't upgrade to v4; broken: https://github.com/actions/upload-artifact#breaking-changes
with:
name: Sdist
path: dist/*.tar.gz
# if it's a tag, create a release and attach the artifacts to it
attach-assets:
name: "Attach assets to release"
if: ${{ !failure() && !cancelled() && startsWith(github.ref, 'refs/tags/') }}
needs:
- build-debs
- build-wheels
- build-sdist
runs-on: ubuntu-latest
steps:
- name: Download all workflow run artifacts
uses: actions/download-artifact@v3 # Don't upgrade to v4, it should match upload-artifact
- name: Build a tarball for the debs
run: tar -cvJf debs.tar.xz debs
- name: Attach to release
uses: softprops/action-gh-release@a929a66f232c1b11af63782948aa2210f981808a # PR#109
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
files: |
Sdist/*
Wheel/*
debs.tar.xz
# if it's not already published, keep the release as a draft.
draft: true
# mark it as a prerelease if the tag contains 'rc'.
prerelease: ${{ contains(github.ref, 'rc') }}

755
.github/workflows/tests.yml vendored Normal file

@@ -0,0 +1,755 @@
name: Tests
on:
push:
branches: ["develop", "release-*"]
pull_request:
merge_group:
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
# Job to detect what has changed so we don't run e.g. Rust checks on PRs that
# don't modify Rust code.
changes:
runs-on: ubuntu-latest
outputs:
rust: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.rust }}
trial: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.trial }}
integration: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.integration }}
linting: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.linting }}
linting_readme: ${{ !startsWith(github.ref, 'refs/pull/') || steps.filter.outputs.linting_readme }}
steps:
- uses: dorny/paths-filter@v3
id: filter
# We only check on PRs
if: startsWith(github.ref, 'refs/pull/')
with:
filters: |
rust:
- 'rust/**'
- 'Cargo.toml'
- 'Cargo.lock'
- '.rustfmt.toml'
- '.github/workflows/tests.yml'
trial:
- 'synapse/**'
- 'tests/**'
- 'rust/**'
- '.ci/scripts/calculate_jobs.py'
- 'Cargo.toml'
- 'Cargo.lock'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/tests.yml'
integration:
- 'synapse/**'
- 'rust/**'
- 'docker/**'
- 'Cargo.toml'
- 'Cargo.lock'
- 'pyproject.toml'
- 'poetry.lock'
- 'docker/**'
- '.ci/**'
- 'scripts-dev/complement.sh'
- '.github/workflows/tests.yml'
linting:
- 'synapse/**'
- 'docker/**'
- 'tests/**'
- 'scripts-dev/**'
- 'contrib/**'
- 'synmark/**'
- 'stubs/**'
- '.ci/**'
- 'mypy.ini'
- 'pyproject.toml'
- 'poetry.lock'
- '.github/workflows/tests.yml'
linting_readme:
- 'README.rst'
check-sampleconfig:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: "3.x"
poetry-version: "1.3.2"
extras: "all"
- run: poetry run scripts-dev/generate_sample_config.sh --check
- run: poetry run scripts-dev/config-lint.sh
check-schema-delta:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.x"
- run: "pip install 'click==8.1.1' 'GitPython>=3.1.20'"
- run: scripts-dev/check_schema_delta.py --force-colors
check-lockfile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.x"
- run: .ci/scripts/check_lockfile.py
lint:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
with:
install-project: "false"
- name: Import order (isort)
run: poetry run isort --check --diff .
- name: Code style (black)
run: poetry run black --check --diff .
- name: Semantic checks (ruff)
# --quiet suppresses the update check.
run: poetry run ruff check --quiet .
lint-mypy:
runs-on: ubuntu-latest
name: Typechecking
needs: changes
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
- name: Setup Poetry
uses: matrix-org/setup-python-poetry@v1
with:
# We want to make use of type hints in optional dependencies too.
extras: all
# We have seen odd mypy failures that were resolved when we started
# installing the project again:
# https://github.com/matrix-org/synapse/pull/15376#issuecomment-1498983775
# To make CI green, err towards caution and install the project.
install-project: "true"
# Cribbed from
# https://github.com/AustinScola/mypy-cache-github-action/blob/85ea4f2972abed39b33bd02c36e341b28ca59213/src/restore.ts#L10-L17
- name: Restore/persist mypy's cache
uses: actions/cache@v4
with:
path: |
.mypy_cache
key: mypy-cache-${{ github.context.sha }}
restore-keys: mypy-cache-
- name: Run mypy
run: poetry run mypy
lint-crlf:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Check line endings
run: scripts-dev/check_line_terminators.sh
lint-newsfile:
if: ${{ (github.base_ref == 'develop' || contains(github.base_ref, 'release-')) && github.actor != 'dependabot[bot]' }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
- uses: actions/setup-python@v5
with:
python-version: "3.x"
- run: "pip install 'towncrier>=18.6.0rc1'"
- run: scripts-dev/check-newsfragment.sh
env:
PULL_REQUEST_NUMBER: ${{ github.event.number }}
lint-pydantic:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.linting == 'true' }}
steps:
- uses: actions/checkout@v4
with:
ref: ${{ github.event.pull_request.head.sha }}
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
poetry-version: "1.3.2"
extras: "all"
- run: poetry run scripts-dev/check_pydantic_models.py
lint-clippy:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
with:
components: clippy
- uses: Swatinem/rust-cache@v2
- run: cargo clippy -- -D warnings
# We also lint against a nightly rustc so that we can lint the benchmark
# suite, which requires a nightly compiler.
lint-clippy-nightly:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@master
with:
toolchain: nightly-2022-12-01
components: clippy
- uses: Swatinem/rust-cache@v2
- run: cargo clippy --all-features -- -D warnings
lint-rustfmt:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.rust == 'true' }}
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@master
with:
# We use nightly so that it correctly groups together imports
toolchain: nightly-2022-12-01
components: rustfmt
- uses: Swatinem/rust-cache@v2
- run: cargo fmt --check
# This is to detect issues with the rst file, which can otherwise cause problems
# when uploading packages to PyPI.
lint-readme:
runs-on: ubuntu-latest
needs: changes
if: ${{ needs.changes.outputs.linting_readme == 'true' }}
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.x"
- run: "pip install rstcheck"
- run: "rstcheck --report-level=WARNING README.rst"
# Dummy step to gate other tests on without repeating the whole list
linting-done:
if: ${{ !cancelled() }} # Run this even if prior jobs were skipped
needs:
- lint
- lint-mypy
- lint-crlf
- lint-newsfile
- lint-pydantic
- check-sampleconfig
- check-schema-delta
- check-lockfile
- lint-clippy
- lint-clippy-nightly
- lint-rustfmt
- lint-readme
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@v3
with:
needs: ${{ toJSON(needs) }}
# Various bits are skipped if there were no applicable changes.
skippable: |
check-sampleconfig
check-schema-delta
lint
lint-mypy
lint-newsfile
lint-pydantic
lint-clippy
lint-clippy-nightly
lint-rustfmt
lint-readme
calculate-test-jobs:
if: ${{ !cancelled() && !failure() }} # Allow previous steps to be skipped, but not fail
needs: linting-done
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.x"
- id: get-matrix
run: .ci/scripts/calculate_jobs.py
outputs:
trial_test_matrix: ${{ steps.get-matrix.outputs.trial_test_matrix }}
sytest_test_matrix: ${{ steps.get-matrix.outputs.sytest_test_matrix }}
trial:
if: ${{ !cancelled() && !failure() && needs.changes.outputs.trial == 'true' }} # Allow previous steps to be skipped, but not fail
needs:
- calculate-test-jobs
- changes
runs-on: ubuntu-latest
strategy:
matrix:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.trial_test_matrix) }}
steps:
- uses: actions/checkout@v4
- run: sudo apt-get -qq install xmlsec1
- name: Set up PostgreSQL ${{ matrix.job.postgres-version }}
if: ${{ matrix.job.postgres-version }}
# 1. Mount postgres data files onto a tmpfs in-memory filesystem to reduce overhead of docker's overlayfs layer.
# 2. Expose the unix socket for postgres. This removes latency of using docker-proxy for connections.
run: |
docker run -d -p 5432:5432 \
--tmpfs /var/lib/postgres:rw,size=6144m \
--mount 'type=bind,src=/var/run/postgresql,dst=/var/run/postgresql' \
-e POSTGRES_PASSWORD=postgres \
-e POSTGRES_INITDB_ARGS="--lc-collate C --lc-ctype C --encoding UTF8" \
postgres:${{ matrix.job.postgres-version }}
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.job.python-version }}
poetry-version: "1.3.2"
extras: ${{ matrix.job.extras }}
- name: Await PostgreSQL
if: ${{ matrix.job.postgres-version }}
timeout-minutes: 2
run: until pg_isready -h localhost; do sleep 1; done
- run: poetry run trial --jobs=6 tests
env:
SYNAPSE_POSTGRES: ${{ matrix.job.database == 'postgres' || '' }}
SYNAPSE_POSTGRES_HOST: /var/run/postgresql
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
- name: Dump logs
# Logs are most useful when the command fails, so always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
trial-olddeps:
# Note: sqlite only; no postgres
if: ${{ !cancelled() && !failure() && needs.changes.outputs.trial == 'true' }} # Allow previous steps to be skipped, but not fail
needs:
- linting-done
- changes
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
# There aren't wheels for some of the older deps, so we need to install
# their build dependencies
- run: |
sudo apt-get -qq update
sudo apt-get -qq install build-essential libffi-dev python-dev \
libxml2-dev libxslt-dev xmlsec1 zlib1g-dev libjpeg-dev libwebp-dev
- uses: actions/setup-python@v5
with:
python-version: '3.8'
- name: Prepare old deps
if: steps.cache-poetry-old-deps.outputs.cache-hit != 'true'
run: .ci/scripts/prepare_old_deps.sh
# Note: we install using `pip` here, not poetry. `poetry install` ignores the
# build-system section (https://github.com/python-poetry/poetry/issues/6154), but
# we explicitly want to test that you can `pip install` using the oldest version
# of poetry-core and setuptools-rust.
- run: pip install .[all,test]
# We nuke the local copy, as we've installed synapse into the virtualenv
# (rather than use an editable install, which we no longer support). If we
# don't do this then python can't find the native lib.
- run: rm -rf synapse/
# Sanity check we can import/run Synapse
- run: python -m synapse.app.homeserver --help
- run: python -m twisted.trial -j6 tests
- name: Dump logs
# Logs are most useful when the command fails, so always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
trial-pypy:
# Very slow; only run if the branch name includes 'pypy'
# Note: sqlite only; no postgres. Completely untested since poetry move.
if: ${{ contains(github.ref, 'pypy') && !failure() && !cancelled() && needs.changes.outputs.trial == 'true' }}
needs:
- linting-done
- changes
runs-on: ubuntu-latest
strategy:
matrix:
python-version: ["pypy-3.8"]
extras: ["all"]
steps:
- uses: actions/checkout@v4
# Install libs necessary for PyPy to build binary wheels for dependencies
- run: sudo apt-get -qq install xmlsec1 libxml2-dev libxslt-dev
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.python-version }}
poetry-version: "1.3.2"
extras: ${{ matrix.extras }}
- run: poetry run trial --jobs=2 tests
- name: Dump logs
# Logs are most useful when the command fails, so always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
sytest:
if: ${{ !failure() && !cancelled() && needs.changes.outputs.integration == 'true' }}
needs:
- calculate-test-jobs
- changes
runs-on: ubuntu-latest
container:
image: matrixdotorg/sytest-synapse:${{ matrix.job.sytest-tag }}
volumes:
- ${{ github.workspace }}:/src
env:
# If this is a pull request to a release branch, use that branch as the default branch for sytest; otherwise use develop
# This works because the release script always creates a branch on the sytest repo with the same name as the release branch
SYTEST_DEFAULT_BRANCH: ${{ startsWith(github.base_ref, 'release-') && github.base_ref || 'develop' }}
SYTEST_BRANCH: ${{ github.head_ref }}
POSTGRES: ${{ matrix.job.postgres && 1}}
MULTI_POSTGRES: ${{ (matrix.job.postgres == 'multi-postgres') || '' }}
ASYNCIO_REACTOR: ${{ (matrix.job.reactor == 'asyncio') || '' }}
WORKERS: ${{ matrix.job.workers && 1 }}
BLACKLIST: ${{ matrix.job.workers && 'synapse-blacklist-with-workers' }}
TOP: ${{ github.workspace }}
strategy:
fail-fast: false
matrix:
job: ${{ fromJson(needs.calculate-test-jobs.outputs.sytest_test_matrix) }}
steps:
- uses: actions/checkout@v4
- name: Prepare test blacklist
run: cat sytest-blacklist .ci/worker-blacklist > synapse-blacklist-with-workers
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
- name: Run SyTest
run: /bootstrap.sh synapse
working-directory: /src
- name: Summarise results.tap
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v4
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.job.*, ', ') }})
path: |
/logs/results.tap
/logs/**/*.log*
export-data:
if: ${{ !failure() && !cancelled() && needs.changes.outputs.integration == 'true'}} # Allow previous steps to be skipped, but not fail
needs: [linting-done, portdb, changes]
runs-on: ubuntu-latest
env:
TOP: ${{ github.workspace }}
services:
postgres:
image: postgres
ports:
- 5432:5432
env:
POSTGRES_PASSWORD: "postgres"
POSTGRES_INITDB_ARGS: "--lc-collate C --lc-ctype C --encoding UTF8"
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v4
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
with:
poetry-version: "1.3.2"
extras: "postgres"
- run: .ci/scripts/test_export_data_command.sh
env:
PGHOST: localhost
PGUSER: postgres
PGPASSWORD: postgres
PGDATABASE: postgres
portdb:
if: ${{ !failure() && !cancelled() && needs.changes.outputs.integration == 'true'}} # Allow previous steps to be skipped, but not fail
needs:
- linting-done
- changes
runs-on: ubuntu-latest
strategy:
matrix:
include:
- python-version: "3.8"
postgres-version: "11"
- python-version: "3.11"
postgres-version: "15"
services:
postgres:
image: postgres:${{ matrix.postgres-version }}
ports:
- 5432:5432
env:
POSTGRES_PASSWORD: "postgres"
POSTGRES_INITDB_ARGS: "--lc-collate C --lc-ctype C --encoding UTF8"
options: >-
--health-cmd pg_isready
--health-interval 10s
--health-timeout 5s
--health-retries 5
steps:
- uses: actions/checkout@v4
- name: Add PostgreSQL apt repository
# We need a version of pg_dump that can handle the version of
# PostgreSQL being tested against. The Ubuntu package repository lags
# behind new releases, so we have to use the PostgreSQL apt repository.
# Steps taken from https://www.postgresql.org/download/linux/ubuntu/
run: |
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -
sudo apt-get update
- run: sudo apt-get -qq install xmlsec1 postgresql-client
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: ${{ matrix.python-version }}
poetry-version: "1.3.2"
extras: "postgres"
- run: .ci/scripts/test_synapse_port_db.sh
id: run_tester_script
env:
PGHOST: localhost
PGUSER: postgres
PGPASSWORD: postgres
PGDATABASE: postgres
- name: "Upload schema differences"
uses: actions/upload-artifact@v4
if: ${{ failure() && !cancelled() && steps.run_tester_script.outcome == 'failure' }}
with:
name: Schema dumps
path: |
unported.sql
ported.sql
schema_diff
complement:
if: "${{ !failure() && !cancelled() && needs.changes.outputs.integration == 'true' }}"
needs:
- linting-done
- changes
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- arrangement: monolith
database: SQLite
- arrangement: monolith
database: Postgres
- arrangement: workers
database: Postgres
steps:
- name: Run actions/checkout@v4 for synapse
uses: actions/checkout@v4
with:
path: synapse
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@v5
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
# use p=1 concurrency as GHA boxes are underpowered and don't like running tons of synapses at once.
- run: |
set -o pipefail
COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -p 1 -json 2>&1 | synapse/.ci/scripts/gotestfmt
shell: bash
env:
POSTGRES: ${{ (matrix.database == 'Postgres') && 1 || '' }}
WORKERS: ${{ (matrix.arrangement == 'workers') && 1 || '' }}
name: Run Complement Tests
cargo-test:
if: ${{ needs.changes.outputs.rust == 'true' }}
runs-on: ubuntu-latest
needs:
- linting-done
- changes
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@1.66.0
- uses: Swatinem/rust-cache@v2
- run: cargo test
# We want to ensure that the cargo benchmarks still compile, which requires a
# nightly compiler.
cargo-bench:
if: ${{ needs.changes.outputs.rust == 'true' }}
runs-on: ubuntu-latest
needs:
- linting-done
- changes
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@master
with:
toolchain: nightly-2022-12-01
- uses: Swatinem/rust-cache@v2
- run: cargo bench --no-run
# a job which marks all the other jobs as complete, thus allowing PRs to be merged.
tests-done:
if: ${{ always() }}
needs:
- trial
- trial-olddeps
- sytest
- export-data
- portdb
- complement
- cargo-test
- cargo-bench
- linting-done
runs-on: ubuntu-latest
steps:
- uses: matrix-org/done-action@v3
with:
needs: ${{ toJSON(needs) }}
# Various bits are skipped if there were no applicable changes.
# The newsfile lint may be skipped on non-PR builds.
skippable: |
trial
trial-olddeps
sytest
portdb
export-data
complement
lint-newsfile
cargo-test
cargo-bench

14
.github/workflows/triage-incoming.yml vendored Normal file

@@ -0,0 +1,14 @@
name: Move new issues into the issue triage board
on:
issues:
types: [ opened ]
jobs:
triage:
uses: matrix-org/backend-meta/.github/workflows/triage-incoming.yml@v2
with:
project_id: 'PVT_kwDOAIB0Bs4AFDdZ'
content_id: ${{ github.event.issue.node_id }}
secrets:
github_access_token: ${{ secrets.ELEMENT_BOT_TOKEN }}

44
.github/workflows/triage_labelled.yml vendored Normal file

@@ -0,0 +1,44 @@
name: Move labelled issues to correct projects
on:
issues:
types: [ labeled ]
jobs:
move_needs_info:
name: Move X-Needs-Info on the triage board
runs-on: ubuntu-latest
if: >
contains(github.event.issue.labels.*.name, 'X-Needs-Info')
steps:
- uses: actions/add-to-project@main
id: add_project
with:
project-url: "https://github.com/orgs/matrix-org/projects/67"
github-token: ${{ secrets.ELEMENT_BOT_TOKEN }}
- name: Set status
env:
GITHUB_TOKEN: ${{ secrets.ELEMENT_BOT_TOKEN }}
run: |
gh api graphql -f query='
mutation(
$project: ID!
$item: ID!
$fieldid: ID!
$columnid: String!
) {
updateProjectV2ItemFieldValue(
input: {
projectId: $project
itemId: $item
fieldId: $fieldid
value: {
singleSelectOptionId: $columnid
}
}
) {
projectV2Item {
id
}
}
}' -f project="PVT_kwDOAIB0Bs4AFDdZ" -f item=${{ steps.add_project.outputs.itemId }} -f fieldid="PVTSSF_lADOAIB0Bs4AFDdZzgC6ZA4" -f columnid=ba22e43c --silent

215
.github/workflows/twisted_trunk.yml vendored Normal file

@@ -0,0 +1,215 @@
name: Twisted Trunk
on:
schedule:
- cron: 0 8 * * *
workflow_dispatch:
# NB: inputs are only present when this workflow is dispatched manually.
# (The default below is the default field value in the form to trigger
# a manual dispatch). Otherwise the inputs will evaluate to null.
inputs:
twisted_ref:
description: Commit, branch or tag to checkout from upstream Twisted.
required: false
default: 'trunk'
type: string
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
jobs:
check_repo:
# Prevent this workflow from running on any fork of Synapse other than element-hq/synapse, as it is
# only useful to the Synapse core team.
# All other workflow steps depend on this one, thus if 'should_run_workflow' is not 'true', the rest
# of the workflow will be skipped as well.
if: github.repository == 'element-hq/synapse'
runs-on: ubuntu-latest
outputs:
should_run_workflow: ${{ steps.check_condition.outputs.should_run_workflow }}
steps:
- id: check_condition
run: echo "should_run_workflow=${{ github.repository == 'element-hq/synapse' }}" >> "$GITHUB_OUTPUT"
mypy:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: "3.x"
extras: "all"
- run: |
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#${{ inputs.twisted_ref || 'trunk' }}
poetry install --no-interaction --extras "all test"
- name: Remove unhelpful options from mypy config
run: sed -e '/warn_unused_ignores = True/d' -e '/warn_redundant_casts = True/d' -i mypy.ini
- run: poetry run mypy
trial:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- run: sudo apt-get -qq install xmlsec1
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- uses: matrix-org/setup-python-poetry@v1
with:
python-version: "3.x"
extras: "all test"
- run: |
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry install --no-interaction --extras "all test"
- run: poetry run trial --jobs 2 tests
- name: Dump logs
# Logs are most useful when the command fails, so always include them.
if: ${{ always() }}
# Note: Dumps to workflow logs instead of using actions/upload-artifact
# This keeps logs colocated with failing jobs
# It also ignores find's exit code; this is a best effort affair
run: >-
find _trial_temp -name '*.log'
-exec echo "::group::{}" \;
-exec cat {} \;
-exec echo "::endgroup::" \;
|| true
sytest:
needs: check_repo
if: needs.check_repo.outputs.should_run_workflow == 'true'
runs-on: ubuntu-latest
container:
# We're using ubuntu:focal because it uses Python 3.8 which is our minimum supported Python version.
# This job is a canary to warn us about unreleased twisted changes that would cause problems for us if
# they were to be released immediately. For simplicity's sake (and to save CI runners) we use the oldest
# version, assuming that any incompatibilities on newer versions would also be present on the oldest.
image: matrixdotorg/sytest-synapse:focal
volumes:
- ${{ github.workspace }}:/src
steps:
- uses: actions/checkout@v4
- name: Install Rust
uses: dtolnay/rust-toolchain@stable
- uses: Swatinem/rust-cache@v2
- name: Patch dependencies
# Note: The poetry commands want to create a virtualenv in /src/.venv/,
# but the sytest-synapse container expects it to be in /venv/.
# We symlink it before running poetry so that poetry actually
# ends up installing to `/venv`.
run: |
ln -s -T /venv /src/.venv
poetry remove twisted
poetry add --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry install --no-interaction --extras "all test"
working-directory: /src
- name: Run SyTest
run: /bootstrap.sh synapse
working-directory: /src
env:
# Use offline mode to avoid reinstalling the pinned version of
# twisted.
OFFLINE: 1
- name: Summarise results.tap
if: ${{ always() }}
run: /sytest/scripts/tap_to_gha.pl /logs/results.tap
- name: Upload SyTest logs
uses: actions/upload-artifact@v4
if: ${{ always() }}
with:
name: Sytest Logs - ${{ job.status }} - (${{ join(matrix.*, ', ') }})
path: |
/logs/results.tap
/logs/**/*.log*
complement:
needs: check_repo
if: "!failure() && !cancelled() && needs.check_repo.outputs.should_run_workflow == 'true'"
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
include:
- arrangement: monolith
database: SQLite
- arrangement: monolith
database: Postgres
- arrangement: workers
database: Postgres
steps:
- name: Run actions/checkout@v4 for synapse
uses: actions/checkout@v4
with:
path: synapse
- name: Prepare Complement's Prerequisites
run: synapse/.ci/scripts/setup_complement_prerequisites.sh
- uses: actions/setup-go@v5
with:
cache-dependency-path: complement/go.sum
go-version-file: complement/go.mod
# This step is specific to the 'Twisted trunk' test run:
- name: Patch dependencies
run: |
set -x
DEBIAN_FRONTEND=noninteractive sudo apt-get install -yqq python3 pipx
pipx install poetry==1.3.2
poetry remove -n twisted
poetry add -n --extras tls git+https://github.com/twisted/twisted.git#trunk
poetry lock --no-update
working-directory: synapse
- run: |
set -o pipefail
TEST_ONLY_SKIP_DEP_HASH_VERIFICATION=1 POSTGRES=${{ (matrix.database == 'Postgres') && 1 || '' }} WORKERS=${{ (matrix.arrangement == 'workers') && 1 || '' }} COMPLEMENT_DIR=`pwd`/complement synapse/scripts-dev/complement.sh -json 2>&1 | synapse/.ci/scripts/gotestfmt
shell: bash
name: Run Complement Tests
# open an issue if the build fails, so we know about it.
open-issue:
if: failure() && needs.check_repo.outputs.should_run_workflow == 'true'
needs:
- mypy
- trial
- sytest
- complement
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: JasonEtco/create-an-issue@1b14a70e4d8dc185e5cc76d3bec9eab20257b2c5 # v2.9.2
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
with:
update_existing: true
filename: .ci/twisted_trunk_build_failed_issue_template.md

1
.gitignore vendored

@@ -47,7 +47,6 @@ __pycache__/
/.idea/
/.ropeproject/
/.vscode/
/.zed/
# build products
!/.coveragerc


@@ -1,6 +1 @@
# Unstable options are only available on a nightly toolchain and must be opted into
unstable_features = true
# `group_imports` is an unstable option that requires nightly Rust toolchain. Tracked by
# https://github.com/rust-lang/rustfmt/issues/5083
group_imports = "StdExternalCrate"

4287
CHANGES.md

File diff suppressed because it is too large

1517
Cargo.lock generated

File diff suppressed because it is too large


@@ -1,6 +0,0 @@
Licensees holding a valid commercial license with Element may use this
software in accordance with the terms contained in a written agreement
between you and Element.
To purchase a commercial license please contact our sales team at
licensing@element.io


@@ -7,48 +7,170 @@
Synapse is an open source `Matrix <https://matrix.org>`__ homeserver
implementation, written and maintained by `Element <https://element.io>`_.
`Matrix <https://github.com/matrix-org>`__ is the open standard for secure and
interoperable real-time communications. You can directly run and manage the
source code in this repository, available under an AGPL license (or
alternatively under a commercial license from Element).
`Matrix <https://github.com/matrix-org>`__ is the open standard for
secure and interoperable real time communications. You can directly run
and manage the source code in this repository, available under an AGPL
license. There is no support provided by Element unless you have a
subscription.
There is no support provided by Element unless you have a subscription from
Element.
Subscription alternative
========================
🚀 Getting started
==================
Alternatively, for those that need an enterprise-ready solution, Element
Server Suite (ESS) is `available as a subscription <https://element.io/pricing>`_.
ESS builds on Synapse to offer a complete Matrix-based backend including the full
`Admin Console product <https://element.io/enterprise-functionality/admin-console>`_,
giving admins the power to easily manage an organization-wide
deployment. It includes advanced identity management, auditing,
moderation and data retention options as well as Long Term Support and
SLAs. ESS can be used to support any Matrix-based frontend client.
This component is developed and maintained by `Element <https://element.io>`_.
It gets shipped as part of the **Element Server Suite (ESS)** which provides the
official means of deployment.
.. contents::
ESS is a Matrix distribution from Element with a focus on quality and ease of use.
It ships a full Matrix stack tailored to the respective use case.
🛠️ Installing and configuration
===============================
There are three editions of ESS:
The Synapse documentation describes `how to install Synapse <https://element-hq.github.io/synapse/latest/setup/installation.html>`_. We recommend using
`Docker images <https://element-hq.github.io/synapse/latest/setup/installation.html#docker-images-and-ansible-playbooks>`_ or `Debian packages from Matrix.org
<https://element-hq.github.io/synapse/latest/setup/installation.html#matrixorg-packages>`_.
- `ESS Community <https://github.com/element-hq/ess-helm>`_ - the free Matrix
distribution from Element tailored to small-/mid-scale, non-commercial
community use cases
- `ESS Pro <https://element.io/server-suite>`_ - the commercial Matrix
distribution from Element for professional use
- `ESS TI-M <https://element.io/server-suite/ti-messenger>`_ - a special version
of ESS Pro focused on the requirements of TI-Messenger Pro and ePA as
specified by the German National Digital Health Agency Gematik
.. _federation:
Synapse has a variety of `config options
<https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html>`_
which can be used to customise its behaviour after installation.
There are additional details on how to `configure Synapse for federation here
<https://element-hq.github.io/synapse/latest/federate.html>`_.
.. _reverse-proxy:
Using a reverse proxy with Synapse
----------------------------------
It is recommended to put a reverse proxy such as
`nginx <https://nginx.org/en/docs/http/ngx_http_proxy_module.html>`_,
`Apache <https://httpd.apache.org/docs/current/mod/mod_proxy_http.html>`_,
`Caddy <https://caddyserver.com/docs/quick-starts/reverse-proxy>`_,
`HAProxy <https://www.haproxy.org/>`_ or
`relayd <https://man.openbsd.org/relayd.8>`_ in front of Synapse. One advantage of
doing so is that it means that you can expose the default https port (443) to
Matrix clients without needing to run Synapse with root privileges.
For information on configuring one, see `the reverse proxy docs
<https://element-hq.github.io/synapse/latest/reverse_proxy.html>`_.
Upgrading an existing Synapse
-----------------------------
The instructions for upgrading Synapse are in `the upgrade notes`_.
Please check these instructions as upgrading may require extra steps for some
versions of Synapse.
.. _the upgrade notes: https://element-hq.github.io/synapse/develop/upgrade.html
🛠️ Standalone installation and configuration
============================================
Platform dependencies
---------------------
The Synapse documentation describes `options for installing Synapse standalone
<https://element-hq.github.io/synapse/latest/setup/installation.html>`_. See
below for more useful documentation links.
Synapse uses a number of platform dependencies such as Python and PostgreSQL,
and aims to follow supported upstream versions. See the
`deprecation policy <https://element-hq.github.io/synapse/latest/deprecation_policy.html>`_
for more details.
- `Synapse configuration options <https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html>`_
- `Synapse configuration for federation <https://element-hq.github.io/synapse/latest/federate.html>`_
- `Using a reverse proxy with Synapse <https://element-hq.github.io/synapse/latest/reverse_proxy.html>`_
- `Upgrading Synapse <https://element-hq.github.io/synapse/develop/upgrade.html>`_
Security note
-------------
Matrix serves raw, user-supplied data in some APIs -- specifically the `content
repository endpoints`_.
.. _content repository endpoints: https://matrix.org/docs/spec/client_server/latest.html#get-matrix-media-r0-download-servername-mediaid
Whilst we make a reasonable effort to mitigate against XSS attacks (for
instance, by using `CSP`_), a Matrix homeserver should not be hosted on a
domain hosting other web applications. This especially applies to sharing
the domain with Matrix web clients and other sensitive applications like
webmail. See
https://developer.github.com/changes/2014-04-25-user-content-security for more
information.
.. _CSP: https://github.com/matrix-org/synapse/pull/1021
Ideally, the homeserver should not simply be on a different subdomain, but on
a completely different `registered domain`_ (also known as top-level site or
eTLD+1). This is because `some attacks`_ are still possible as long as the two
applications share the same registered domain.
.. _registered domain: https://tools.ietf.org/html/draft-ietf-httpbis-rfc6265bis-03#section-2.3
.. _some attacks: https://en.wikipedia.org/wiki/Session_fixation#Attacks_using_cross-subdomain_cookie
To illustrate this with an example, if your Element Web or other sensitive web
application is hosted on ``A.example1.com``, you should ideally host Synapse on
``example2.com``. Some amount of protection is offered by hosting on
``B.example1.com`` instead, so this is also acceptable in some scenarios.
However, you should *not* host your Synapse on ``A.example1.com``.
Note that all of the above refers exclusively to the domain used in Synapse's
``public_baseurl`` setting. In particular, it has no bearing on the domain
mentioned in MXIDs hosted on that server.
Following this advice ensures that even if an XSS is found in Synapse, the
impact to other applications will be minimal.
🧪 Testing a new installation
=============================
The easiest way to try out your new Synapse installation is by connecting to it
from a web client.
Unless you are running a test instance of Synapse on your local machine, in
general, you will need to enable TLS support before you can successfully
connect from a client: see
`TLS certificates <https://element-hq.github.io/synapse/latest/setup/installation.html#tls-certificates>`_.
An easy way to get started is to log in or register via Element at
https://app.element.io/#/login or https://app.element.io/#/register respectively.
You will need to change the server you are logging into from ``matrix.org``
and instead specify a Homeserver URL of ``https://<server_name>:8448``
(or just ``https://<server_name>`` if you are using a reverse proxy).
If you prefer to use another client, refer to our
`client breakdown <https://matrix.org/ecosystem/clients/>`_.
If all goes well you should at least be able to log in, create a room, and
start sending messages.
.. _`client-user-reg`:
Registering a new user from a client
------------------------------------
By default, registration of new users via Matrix clients is disabled. To enable
it:
1. In the
`registration config section <https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html#registration>`_
set ``enable_registration: true`` in ``homeserver.yaml``.
2. Then **either**:
a. set up a `CAPTCHA <https://element-hq.github.io/synapse/latest/CAPTCHA_SETUP.html>`_, or
b. set ``enable_registration_without_verification: true`` in ``homeserver.yaml``.
We **strongly** recommend using a CAPTCHA, particularly if your homeserver is exposed to
the public internet. Without it, anyone can freely register accounts on your homeserver.
This can be exploited by attackers to create spambots targeting the rest of the Matrix
federation.
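As a rough illustration only (the key values below are placeholders, not
settings taken from any real deployment), a minimal ``homeserver.yaml``
fragment for CAPTCHA-gated registration might look like::

    enable_registration: true
    enable_registration_captcha: true
    # Keys obtained when registering your site for Google reCAPTCHA
    recaptcha_public_key: "YOUR_SITE_KEY"
    recaptcha_private_key: "YOUR_SECRET_KEY"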
Your new user name will be formed partly from the ``server_name``, and partly
from a localpart you specify when you create the account. Your name will take
the form of::
@localpart:my.domain.name
(pronounced "at localpart on my dot domain dot name").
As when logging in, you will need to specify a "Custom server". Specify your
desired ``localpart`` in the 'User name' box.
🎯 Troubleshooting and support
==============================
@@ -60,7 +182,7 @@ Enterprise quality support for Synapse including SLAs is available as part of an
`Element Server Suite (ESS) <https://element.io/pricing>`_ subscription.
If you are an existing ESS subscriber then you can raise a `support request <https://ems.element.io/support>`_
and access the `Element product documentation <https://docs.element.io>`_.
and access the `knowledge base <https://ems-docs.element.io>`_.
🤝 Community support
--------------------
@@ -79,6 +201,35 @@ issues for support requests, only for bug reports and feature requests.
.. |docs| replace:: ``docs``
.. _docs: docs
🪪 Identity Servers
===================
Identity servers have the job of mapping email addresses and other 3rd Party
IDs (3PIDs) to Matrix user IDs, as well as verifying the ownership of 3PIDs
before creating that mapping.
**They are not where accounts or credentials are stored - these live on home
servers. Identity Servers are just for mapping 3rd party IDs to matrix IDs.**
This process is very security-sensitive, as there is obvious risk of spam if it
is too easy to sign up for Matrix accounts or harvest 3PID data. In the longer
term, we hope to create a decentralised system to manage it (`matrix-doc #712
<https://github.com/matrix-org/matrix-doc/issues/712>`_), but in the meantime,
the role of managing trusted identity in the Matrix ecosystem is farmed out to
a cluster of known trusted ecosystem partners, who run 'Matrix Identity
Servers' such as `Sydent <https://github.com/matrix-org/sydent>`_, whose role
is purely to authenticate and track 3PID logins and publish end-user public
keys.
You can host your own copy of Sydent, but this will prevent you reaching other
users in the Matrix ecosystem via their email address, and prevent them finding
you. We therefore recommend that you use one of the centralised identity servers
at ``https://matrix.org`` or ``https://vector.im`` for now.
To reiterate: the Identity server will only be used if you choose to associate
an email address with your account, or send an invite to another user via their
email address.
🛠️ Development
==============
@@ -86,9 +237,9 @@ issues for support requests, only for bug reports and feature requests.
We welcome contributions to Synapse from the community!
The best place to get started is our
`guide for contributors <https://element-hq.github.io/synapse/latest/development/contributing_guide.html>`_.
This is part of our broader `documentation <https://element-hq.github.io/synapse/latest>`_, which includes
information for Synapse developers as well as Synapse administrators.
This is part of our larger `documentation <https://element-hq.github.io/synapse/latest>`_, which includes
information for Synapse developers as well as Synapse administrators.
Developers might be particularly interested in:
* `Synapse's database schema <https://element-hq.github.io/synapse/latest/development/database_schema.html>`_,
@@ -98,33 +249,6 @@ Developers might be particularly interested in:
Alongside all that, join our developer community on Matrix:
`#synapse-dev:matrix.org <https://matrix.to/#/#synapse-dev:matrix.org>`_, featuring real humans!
Copyright and Licensing
=======================
| Copyright 2014-2017 OpenMarket Ltd
| Copyright 2017 Vector Creations Ltd
| Copyright 2017-2025 New Vector Ltd
| Copyright 2025 Element Creations Ltd
This software is dual-licensed by Element Creations Ltd (Element). It can be
used either:
(1) for free under the terms of the GNU Affero General Public License (as
published by the Free Software Foundation, either version 3 of the License,
or (at your option) any later version); OR
(2) under the terms of a paid-for Element Commercial License agreement between
you and Element (the terms of which may vary depending on what you and
Element have agreed to).
Unless required by applicable law or agreed to in writing, software distributed
under the Licenses is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the Licenses for the
specific language governing permissions and limitations under the Licenses.
Please contact `licensing@element.io <mailto:licensing@element.io>`_ to purchase
an Element commercial license for this software.
.. |support| image:: https://img.shields.io/badge/matrix-community%20support-success
:alt: (get community support in #synapse:matrix.org)


@@ -1,14 +1,12 @@
# A build script for poetry that adds the rust extension.
import itertools
import os
from typing import Any
from typing import Any, Dict
from packaging.specifiers import SpecifierSet
from setuptools_rust import Binding, RustExtension
def build(setup_kwargs: dict[str, Any]) -> None:
def build(setup_kwargs: Dict[str, Any]) -> None:
original_project_dir = os.path.dirname(os.path.realpath(__file__))
cargo_toml_path = os.path.join(original_project_dir, "rust", "Cargo.toml")
@@ -16,27 +14,10 @@ def build(setup_kwargs: dict[str, Any]) -> None:
target="synapse.synapse_rust",
path=cargo_toml_path,
binding=Binding.PyO3,
# This flag is a no-op in the latest versions. Instead, we need to
# specify this in the `bdist_wheel` config below.
py_limited_api=True,
# We always build in release mode, as we can't distinguish
# between using `poetry` in development vs production.
# We force always building in release mode, as we can't tell the
# difference between using `poetry` in development vs production.
debug=False,
)
setup_kwargs.setdefault("rust_extensions", []).append(extension)
setup_kwargs["zip_safe"] = False
# We look up the minimum supported Python version from
# `python_requires` (e.g. ">=3.10.0,<4.0.0") and find the first Python
# version that matches. We then convert that into the `py_limited_api` form,
# e.g. cp310 for Python 3.10.
py_limited_api: str
python_bounds = SpecifierSet(setup_kwargs["python_requires"])
for minor_version in itertools.count(start=10):
if f"3.{minor_version}.0" in python_bounds:
py_limited_api = f"cp3{minor_version}"
break
setup_kwargs.setdefault("options", {}).setdefault("bdist_wheel", {})[
"py_limited_api"
] = py_limited_api

1
changelog.d/17512.misc Normal file

@@ -0,0 +1 @@
Pre-populate room data used in experimental [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575) Sliding Sync `/sync` endpoint for quick filtering/sorting.

1
changelog.d/17595.misc Normal file

@@ -0,0 +1 @@
Refactor sliding sync class into multiple files.

1
changelog.d/17599.misc Normal file

@@ -0,0 +1 @@
Store sliding sync per-connection state in the database.

1
changelog.d/17600.misc Normal file

@@ -0,0 +1 @@
Make the sliding sync `PerConnectionState` class immutable.


@@ -1 +0,0 @@
Group together dependabot update PRs to reduce the review load.


@@ -1 +0,0 @@
Fix `HomeServer.shutdown()` failing if the homeserver hasn't been setup yet.


@@ -1 +0,0 @@
Fix sliding sync performance slowdown for long-lived connections.


@@ -1 +0,0 @@
Respond with useful error codes when `Content-Length` header(s) are invalid.


@@ -1 +0,0 @@
Fix a bug where Mastodon posts (and possibly other embeds) have the wrong description for URL previews.


@@ -1 +0,0 @@
Fix `HomeServer.shutdown()` failing if the homeserver failed to `start`.


@@ -1 +0,0 @@
Switch the build backend from `poetry-core` to `maturin`.


@@ -1 +0,0 @@
Raise the limit for concurrently-open non-security @dependabot PRs from 5 to 10.


@@ -1 +0,0 @@
Remove the "Updates to locked dependencies" section from the changelog due to lack of use and the maintenance burden.


@@ -1 +0,0 @@
Require 14 days to pass before pulling in general dependency updates to help mitigate upstream supply chain attacks.


@@ -1 +0,0 @@
Add `memberships` endpoint to the admin API. This is useful for forensics and T&S purposes.


@@ -1 +0,0 @@
Drop the broken netlify documentation workflow until a new one is implemented.


@@ -1 +0,0 @@
Fix bug where `Duration` was logged incorrectly.


@@ -1 +0,0 @@
Add an admin API for retrieving a paginated list of quarantined media.


@@ -1 +0,0 @@
Document the importance of `public_baseurl` when configuring OpenID Connect authentication.


@@ -1 +0,0 @@
Fix bug introduced in 1.143.0 that broke support for versions of `zope-interface` older than 6.2.


@@ -1 +0,0 @@
Server admins can bypass the quarantine media check when downloading media by setting the `admin_unsafely_bypass_quarantine` query parameter to `true` on Client-Server API media download requests.


@@ -1 +0,0 @@
Don't include debug logs in `Clock` unless explicitly enabled.


@@ -1 +0,0 @@
Implemented pagination for the [MSC2666](https://github.com/matrix-org/matrix-spec-proposals/pull/2666) mutual rooms endpoint. Contributed by @tulir @ Beeper.


@@ -1 +0,0 @@
Admin API: add worker support to `GET /_synapse/admin/v2/users/<user_id>`.


@@ -1 +0,0 @@
Use `uv` to test olddeps to ensure all transitive dependencies use minimum versions.


@@ -1 +0,0 @@
Log the original bind exception when encountering `Failed to listen on 0.0.0.0, continuing because listening on [::]`.


@@ -1 +0,0 @@
Improve proxy support for the `federation_client.py` dev script. Contributed by Denis Kasak (@dkasak).


@@ -1 +0,0 @@
Unpin the version of Rust we use to build Synapse wheels (was 1.82.0) now that MacOS support has been dropped.


@@ -21,8 +21,7 @@
#
#
"""Starts a synapse client console."""
""" Starts a synapse client console. """
import argparse
import binascii
import cmd
@@ -33,6 +32,7 @@ import sys
import time
import urllib
from http import TwistedHttpClient
from typing import Optional
import urlparse
from signedjson.key import NACL_ED25519, decode_verify_key_bytes
@@ -244,7 +244,7 @@ class SynapseCmd(cmd.Cmd):
if "flows" not in json_res:
print("Failed to find any login flows.")
return False
defer.returnValue(False)
flow = json_res["flows"][0] # assume first is the one we want.
if "type" not in flow or "m.login.password" != flow["type"] or "stages" in flow:
@@ -253,8 +253,8 @@ class SynapseCmd(cmd.Cmd):
"Unable to login via the command line client. Please visit "
"%s to login." % fallback_url
)
return False
return True
defer.returnValue(False)
defer.returnValue(True)
def do_emailrequest(self, line):
"""Requests the association of a third party identifier
@@ -725,7 +725,7 @@ class SynapseCmd(cmd.Cmd):
method,
path,
data=None,
query_params: dict | None = None,
query_params: Optional[dict] = None,
alt_text=None,
):
"""Runs an HTTP request and pretty prints the output.


@@ -22,6 +22,7 @@
import json
import urllib
from pprint import pformat
from typing import Optional
from twisted.internet import defer, reactor
from twisted.web.client import Agent, readBody
@@ -77,7 +78,7 @@ class TwistedHttpClient(HttpClient):
url, data, headers_dict={"Content-Type": ["application/json"]}
)
body = yield readBody(response)
return response.code, body
defer.returnValue((response.code, body))
@defer.inlineCallbacks
def get_json(self, url, args=None):
@@ -87,9 +88,9 @@ class TwistedHttpClient(HttpClient):
url = "%s?%s" % (url, qs)
response = yield self._create_get_request(url)
body = yield readBody(response)
return json.loads(body)
defer.returnValue(json.loads(body))
def _create_put_request(self, url, json_data, headers_dict: dict | None = None):
def _create_put_request(self, url, json_data, headers_dict: Optional[dict] = None):
"""Wrapper of _create_request to issue a PUT request"""
headers_dict = headers_dict or {}
@@ -100,7 +101,7 @@ class TwistedHttpClient(HttpClient):
"PUT", url, producer=_JsonProducer(json_data), headers_dict=headers_dict
)
def _create_get_request(self, url, headers_dict: dict | None = None):
def _create_get_request(self, url, headers_dict: Optional[dict] = None):
"""Wrapper of _create_request to issue a GET request"""
return self._create_request("GET", url, headers_dict=headers_dict or {})
@@ -112,7 +113,7 @@ class TwistedHttpClient(HttpClient):
data=None,
qparams=None,
jsonreq=True,
headers: dict | None = None,
headers: Optional[dict] = None,
):
headers = headers or {}
@@ -133,11 +134,11 @@ class TwistedHttpClient(HttpClient):
response = yield self._create_request(method, url)
body = yield readBody(response)
return json.loads(body)
defer.returnValue(json.loads(body))
@defer.inlineCallbacks
def _create_request(
self, method, url, producer=None, headers_dict: dict | None = None
self, method, url, producer=None, headers_dict: Optional[dict] = None
):
"""Creates and sends a request to the given url"""
headers_dict = headers_dict or {}
@@ -172,7 +173,7 @@ class TwistedHttpClient(HttpClient):
if self.verbose:
print("Status %s %s" % (response.code, response.phrase))
print(pformat(list(response.headers.getAllRawHeaders())))
return response
defer.returnValue(response)
def sleep(self, seconds):
d = defer.Deferred()


@@ -30,6 +30,3 @@ docker-compose up -d
### More information
For more information on required environment variables and mounts, see the main docker documentation at [/docker/README.md](../../docker/README.md)
**For a more comprehensive Docker Compose example showcasing a full Matrix 2.0 stack, please see
https://github.com/element-hq/element-docker-demo**


@@ -51,7 +51,7 @@ services:
- traefik.http.routers.https-synapse.tls.certResolver=le-ssl
db:
image: docker.io/postgres:15-alpine
image: docker.io/postgres:12-alpine
# Change that password, of course!
environment:
- POSTGRES_USER=synapse


@@ -8,9 +8,6 @@ All examples and snippets assume that your Synapse service is called `synapse` i
An example Docker Compose file can be found [here](docker-compose.yaml).
**For a more comprehensive Docker Compose example, showcasing a full Matrix 2.0 stack (originally based on this
docker-compose.yaml), please see https://github.com/element-hq/element-docker-demo**
## Worker Service Examples in Docker Compose
In order to start the Synapse container as a worker, you must specify an `entrypoint` that loads both the `homeserver.yaml` and the configuration for the worker (`synapse-generic-worker-1.yaml` in the example below). You must also include the worker type in the environment variable `SYNAPSE_WORKER`, or alternatively pass `-m synapse.app.generic_worker` as part of the `entrypoint` after `"/start.py", "run"`.
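As a rough sketch (the service name and file paths here are illustrative assumptions following the names used above, not taken from the example file itself), a generic worker service in the Compose file could look something like:

```yaml
services:
  synapse-generic-worker-1:
    image: matrixdotorg/synapse:latest
    restart: unless-stopped
    # Load the main homeserver config plus the worker-specific config
    entrypoint: ["/start.py", "run", "--config-path=/data/homeserver.yaml", "--config-path=/data/workers/synapse-generic-worker-1.yaml"]
    environment:
      SYNAPSE_WORKER: synapse.app.generic_worker
    volumes:
      - ./synapse-data:/data
    depends_on:
      - synapse
```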


@@ -220,24 +220,29 @@
"yBucketBound": "auto"
},
{
"datasource": {
"uid": "${DS_PROMETHEUS}",
"type": "prometheus"
},
"aliasColors": {},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": {
"uid": "${DS_PROMETHEUS}"
},
"description": "",
"fieldConfig": {
"defaults": {
"links": []
},
"overrides": []
},
"fill": 0,
"fillGradient": 0,
"gridPos": {
"h": 9,
"w": 12,
"x": 12,
"y": 1
},
"hiddenSeries": false,
"id": 152,
"legend": {
"avg": false,
@@ -250,81 +255,71 @@
"values": false
},
"lines": true,
"linewidth": 0,
"links": [],
"nullPointMode": "connected",
"options": {
"alertThreshold": true
},
"paceLength": 10,
"pluginVersion": "10.4.3",
"percentage": false,
"pluginVersion": "9.2.2",
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [
{
"alias": "Avg",
"fill": 0,
"linewidth": 3,
"$$hashKey": "object:48"
"linewidth": 3
},
{
"alias": "99%",
"color": "#C4162A",
"fillBelowTo": "90%",
"$$hashKey": "object:49"
"fillBelowTo": "90%"
},
{
"alias": "90%",
"color": "#FF7383",
"fillBelowTo": "75%",
"$$hashKey": "object:50"
"fillBelowTo": "75%"
},
{
"alias": "75%",
"color": "#FFEE52",
"fillBelowTo": "50%",
"$$hashKey": "object:51"
"fillBelowTo": "50%"
},
{
"alias": "50%",
"color": "#73BF69",
"fillBelowTo": "25%",
"$$hashKey": "object:52"
"fillBelowTo": "25%"
},
{
"alias": "25%",
"color": "#1F60C4",
"fillBelowTo": "5%",
"$$hashKey": "object:53"
"fillBelowTo": "5%"
},
{
"alias": "5%",
"lines": false,
"$$hashKey": "object:54"
"lines": false
},
{
"alias": "Average",
"color": "rgb(255, 255, 255)",
"lines": true,
"linewidth": 3,
"$$hashKey": "object:55"
"linewidth": 3
},
{
"alias": "Local events being persisted",
"color": "#96d98D",
"points": true,
"yaxis": 2,
"zindex": -3,
"$$hashKey": "object:56"
},
{
"$$hashKey": "object:329",
"alias": "Events",
"color": "#B877D9",
"alias": "All events being persisted",
"hideTooltip": true,
"points": true,
"yaxis": 2,
"zindex": -3
}
],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"datasource": {
@@ -389,20 +384,7 @@
},
"expr": "sum(rate(synapse_http_server_response_time_seconds_sum{servlet='RoomSendEventRestServlet',index=~\"$index\",instance=\"$instance\",code=~\"2..\"}[$bucket_size])) / sum(rate(synapse_http_server_response_time_seconds_count{servlet='RoomSendEventRestServlet',index=~\"$index\",instance=\"$instance\",code=~\"2..\"}[$bucket_size]))",
"legendFormat": "Average",
"refId": "H",
"editorMode": "code",
"range": true
},
{
"datasource": {
"uid": "${DS_PROMETHEUS}"
},
"expr": "sum(rate(synapse_http_server_response_time_seconds_count{servlet='RoomSendEventRestServlet',index=~\"$index\",instance=\"$instance\",code=~\"2..\"}[$bucket_size]))",
"hide": false,
"instant": false,
"legendFormat": "Local events being persisted",
"refId": "E",
"editorMode": "code"
"refId": "H"
},
{
"datasource": {
@@ -411,9 +393,8 @@
"expr": "sum(rate(synapse_storage_events_persisted_events_total{instance=\"$instance\"}[$bucket_size]))",
"hide": false,
"instant": false,
"legendFormat": "All events being persisted",
"refId": "I",
"editorMode": "code"
"legendFormat": "Events",
"refId": "E"
}
],
"thresholds": [
@@ -447,9 +428,7 @@
"xaxis": {
"mode": "time",
"show": true,
"values": [],
"name": null,
"buckets": null
"values": []
},
"yaxes": [
{
@@ -471,20 +450,7 @@
],
"yaxis": {
"align": false
},
"bars": false,
"dashes": false,
"description": "",
"fill": 0,
"fillGradient": 0,
"hiddenSeries": false,
"linewidth": 0,
"percentage": false,
"points": false,
"stack": false,
"steppedLine": false,
"timeFrom": null,
"timeShift": null
}
},
{
"aliasColors": {},
@@ -2166,10 +2132,10 @@
"datasource": {
"uid": "${DS_PROMETHEUS}"
},
"expr": "rate(synapse_storage_events_persisted_events_sep_total{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"expr": "rate(synapse_storage_events_persisted_by_source_type{instance=\"$instance\",job=~\"$job\",index=~\"$index\"}[$bucket_size])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{origin_type}}",
"legendFormat": "{{type}}",
"refId": "D"
}
],
@@ -2254,7 +2220,7 @@
"datasource": {
"uid": "${DS_PROMETHEUS}"
},
"expr": "sum by(type) (rate(synapse_storage_events_persisted_events_sep_total{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size]))",
"expr": "rate(synapse_storage_events_persisted_by_event_type{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"format": "time_series",
"instant": false,
"intervalFactor": 2,
@@ -2294,6 +2260,99 @@
"align": false
}
},
{
"aliasColors": {
"irc-freenode (local)": "#EAB839"
},
"bars": false,
"dashLength": 10,
"dashes": false,
"datasource": {
"uid": "${DS_PROMETHEUS}"
},
"decimals": 1,
"fill": 1,
"fillGradient": 0,
"gridPos": {
"h": 7,
"w": 12,
"x": 0,
"y": 44
},
"hiddenSeries": false,
"id": 44,
"legend": {
"alignAsTable": true,
"avg": false,
"current": false,
"hideEmpty": true,
"hideZero": true,
"max": false,
"min": false,
"show": true,
"total": false,
"values": false
},
"lines": true,
"linewidth": 1,
"links": [],
"nullPointMode": "null",
"options": {
"alertThreshold": true
},
"percentage": false,
"pluginVersion": "9.2.2",
"pointradius": 5,
"points": false,
"renderer": "flot",
"seriesOverrides": [],
"spaceLength": 10,
"stack": false,
"steppedLine": false,
"targets": [
{
"datasource": {
"uid": "${DS_PROMETHEUS}"
},
"expr": "rate(synapse_storage_events_persisted_by_origin{job=~\"$job\",index=~\"$index\",instance=\"$instance\"}[$bucket_size])",
"format": "time_series",
"intervalFactor": 2,
"legendFormat": "{{origin_entity}} ({{origin_type}})",
"refId": "A",
"step": 20
}
],
"thresholds": [],
"timeRegions": [],
"title": "Events/s by Origin",
"tooltip": {
"shared": false,
"sort": 2,
"value_type": "individual"
},
"type": "graph",
"xaxis": {
"mode": "time",
"show": true,
"values": []
},
"yaxes": [
{
"format": "hertz",
"logBase": 1,
"min": "0",
"show": true
},
{
"format": "short",
"logBase": 1,
"show": true
}
],
"yaxis": {
"align": false
}
},
{
"aliasColors": {},
"bars": false,
@@ -4303,7 +4362,7 @@
"exemplar": false,
"expr": "(time() - max without (job, index, host) (avg_over_time(synapse_federation_last_received_pdu_time[10m]))) / 60",
"instant": false,
"legendFormat": "{{origin_server_name}} ",
"legendFormat": "{{server_name}} ",
"range": true,
"refId": "A"
}
@@ -4425,7 +4484,7 @@
"exemplar": false,
"expr": "(time() - max without (job, index, host) (avg_over_time(synapse_federation_last_sent_pdu_time[10m]))) / 60",
"instant": false,
"legendFormat": "{{destination_server_name}}",
"legendFormat": "{{server_name}}",
"range": true,
"refId": "A"
}

View File

@@ -20,10 +20,11 @@
#
import argparse
import cgi
import datetime
import html
import json
import urllib.request
from typing import List
import pydot
@@ -32,7 +33,7 @@ def make_name(pdu_id: str, origin: str) -> str:
return f"{pdu_id}@{origin}"
def make_graph(pdus: list[dict], filename_prefix: str) -> None:
def make_graph(pdus: List[dict], filename_prefix: str) -> None:
"""
Generate a dot and SVG file for a graph of events in the room based on the
topological ordering by querying a homeserver.
@@ -44,10 +45,6 @@ def make_graph(pdus: list[dict], filename_prefix: str) -> None:
colors = {"red", "green", "blue", "yellow", "purple"}
for pdu in pdus:
# TODO: The "origin" field has since been removed from events generated
# by Synapse. We should consider removing it here as well but since this
# is part of `contrib/`, it is left for the community to revise and ensure things
# still work correctly.
origins.add(pdu.get("origin"))
color_map = {color: color for color in colors if color in origins}
@@ -88,7 +85,7 @@ def make_graph(pdus: list[dict], filename_prefix: str) -> None:
"name": name,
"type": pdu.get("pdu_type"),
"state_key": pdu.get("state_key"),
"content": html.escape(json.dumps(pdu.get("content")), quote=True),
"content": cgi.escape(json.dumps(pdu.get("content")), quote=True),
"time": t,
"depth": pdu.get("depth"),
}
@@ -126,7 +123,7 @@ def make_graph(pdus: list[dict], filename_prefix: str) -> None:
graph.write_svg("%s.svg" % filename_prefix, prog="dot")
def get_pdus(host: str, room: str) -> list[dict]:
def get_pdus(host: str, room: str) -> List[dict]:
transaction = json.loads(
urllib.request.urlopen(
f"http://{host}/_matrix/federation/v1/context/{room}/"

View File

@@ -44,3 +44,31 @@ groups:
###
### End of 'Prometheus Console Only' rules block
###
###
### Grafana Only
### The following rules are only needed if you use the Grafana dashboard
### in contrib/grafana/synapse.json
###
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep_total{origin_type="remote"})
labels:
type: remote
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep_total{origin_entity="*client*",origin_type="local"})
labels:
type: local
- record: synapse_storage_events_persisted_by_source_type
expr: sum without(type, origin_type, origin_entity) (synapse_storage_events_persisted_events_sep_total{origin_entity!="*client*",origin_type="local"})
labels:
type: bridges
- record: synapse_storage_events_persisted_by_event_type
expr: sum without(origin_entity, origin_type) (synapse_storage_events_persisted_events_sep_total)
- record: synapse_storage_events_persisted_by_origin
expr: sum without(type) (synapse_storage_events_persisted_events_sep_total)
###
### End of 'Grafana Only' rules block
###

View File

@@ -35,7 +35,7 @@ TEMP_VENV="$(mktemp -d)"
python3 -m venv "$TEMP_VENV"
source "$TEMP_VENV/bin/activate"
pip install -U pip
pip install poetry==2.1.1 poetry-plugin-export==1.9.0
pip install poetry==1.3.2
poetry export \
--extras all \
--extras test \

debian/changelog
View File

@@ -1,558 +1,3 @@
matrix-synapse-py3 (1.144.0) stable; urgency=medium
* New Synapse release 1.144.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 09 Dec 2025 08:30:40 -0700
matrix-synapse-py3 (1.144.0~rc1) stable; urgency=medium
* New Synapse release 1.144.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 02 Dec 2025 09:11:19 -0700
matrix-synapse-py3 (1.143.0) stable; urgency=medium
* New Synapse release 1.143.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 25 Nov 2025 08:44:56 -0700
matrix-synapse-py3 (1.143.0~rc2) stable; urgency=medium
* New Synapse release 1.143.0rc2.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Nov 2025 17:36:08 -0700
matrix-synapse-py3 (1.143.0~rc1) stable; urgency=medium
* New Synapse release 1.143.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Nov 2025 13:08:39 -0700
matrix-synapse-py3 (1.142.1) stable; urgency=medium
* New Synapse release 1.142.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Nov 2025 12:25:23 -0700
matrix-synapse-py3 (1.142.0) stable; urgency=medium
* New Synapse release 1.142.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 Nov 2025 09:45:51 +0000
matrix-synapse-py3 (1.142.0~rc4) stable; urgency=medium
* New Synapse release 1.142.0rc4.
-- Synapse Packaging team <packages@matrix.org> Fri, 07 Nov 2025 10:54:42 +0000
matrix-synapse-py3 (1.142.0~rc3) stable; urgency=medium
* New Synapse release 1.142.0rc3.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Nov 2025 17:39:11 +0000
matrix-synapse-py3 (1.142.0~rc2) stable; urgency=medium
* New Synapse release 1.142.0rc2.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Nov 2025 16:21:30 +0000
matrix-synapse-py3 (1.142.0~rc1) stable; urgency=medium
* New Synapse release 1.142.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Nov 2025 13:20:15 +0000
matrix-synapse-py3 (1.141.0) stable; urgency=medium
* New Synapse release 1.141.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 29 Oct 2025 11:01:43 +0000
matrix-synapse-py3 (1.141.0~rc2) stable; urgency=medium
* New Synapse release 1.141.0rc2.
-- Synapse Packaging team <packages@matrix.org> Tue, 28 Oct 2025 10:20:26 +0000
matrix-synapse-py3 (1.141.0~rc1) stable; urgency=medium
* New Synapse release 1.141.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 21 Oct 2025 11:01:44 +0100
matrix-synapse-py3 (1.140.0) stable; urgency=medium
* New Synapse release 1.140.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 14 Oct 2025 15:22:36 +0100
matrix-synapse-py3 (1.140.0~rc1) stable; urgency=medium
* New Synapse release 1.140.0rc1.
-- Synapse Packaging team <packages@matrix.org> Fri, 10 Oct 2025 10:56:51 +0100
matrix-synapse-py3 (1.139.2) stable; urgency=medium
* New Synapse release 1.139.2.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 16:29:47 +0100
matrix-synapse-py3 (1.139.1) stable; urgency=medium
* New Synapse release 1.139.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 11:46:51 +0100
matrix-synapse-py3 (1.138.4) stable; urgency=medium
* New Synapse release 1.138.4.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 16:28:38 +0100
matrix-synapse-py3 (1.138.3) stable; urgency=medium
* New Synapse release 1.138.3.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Oct 2025 12:54:18 +0100
matrix-synapse-py3 (1.139.0) stable; urgency=medium
* New Synapse release 1.139.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 30 Sep 2025 11:58:55 +0100
matrix-synapse-py3 (1.139.0~rc3) stable; urgency=medium
* New Synapse release 1.139.0rc3.
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Sep 2025 12:13:23 +0100
matrix-synapse-py3 (1.138.2) stable; urgency=medium
* The licensing specifier has been updated to add an optional
`LicenseRef-Element-Commercial` license. The code was already licensed in
this manner - the debian metadata was just not updated to reflect it.
-- Synapse Packaging team <packages@matrix.org> Thu, 25 Sep 2025 12:17:17 +0100
matrix-synapse-py3 (1.138.1) stable; urgency=medium
* New Synapse release 1.138.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 24 Sep 2025 11:32:38 +0100
matrix-synapse-py3 (1.139.0~rc2) stable; urgency=medium
* New Synapse release 1.139.0rc2.
-- Synapse Packaging team <packages@matrix.org> Tue, 23 Sep 2025 15:31:42 +0100
matrix-synapse-py3 (1.139.0~rc1) stable; urgency=medium
* New Synapse release 1.139.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 23 Sep 2025 13:24:50 +0100
matrix-synapse-py3 (1.138.0~rc1) stable; urgency=medium
* New synapse release 1.138.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 02 Sep 2025 12:16:14 +0000
matrix-synapse-py3 (1.137.0) stable; urgency=medium
* New Synapse release 1.137.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 26 Aug 2025 10:23:41 +0100
matrix-synapse-py3 (1.137.0~rc1) stable; urgency=medium
* New Synapse release 1.137.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 19 Aug 2025 10:55:22 +0100
matrix-synapse-py3 (1.136.0) stable; urgency=medium
* New Synapse release 1.136.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 12 Aug 2025 13:18:03 +0100
matrix-synapse-py3 (1.136.0~rc2) stable; urgency=medium
* New Synapse release 1.136.0rc2.
-- Synapse Packaging team <packages@matrix.org> Mon, 11 Aug 2025 12:18:52 -0600
matrix-synapse-py3 (1.136.0~rc1) stable; urgency=medium
* New Synapse release 1.136.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 05 Aug 2025 08:13:30 -0600
matrix-synapse-py3 (1.135.2) stable; urgency=medium
* New Synapse release 1.135.2.
-- Synapse Packaging team <packages@matrix.org> Mon, 11 Aug 2025 11:52:01 -0600
matrix-synapse-py3 (1.135.1) stable; urgency=medium
* New Synapse release 1.135.1.
-- Synapse Packaging team <packages@matrix.org> Mon, 11 Aug 2025 11:13:15 -0600
matrix-synapse-py3 (1.135.0) stable; urgency=medium
* New Synapse release 1.135.0.
-- Synapse Packaging team <packages@matrix.org> Fri, 01 Aug 2025 13:12:28 +0100
matrix-synapse-py3 (1.135.0~rc2) stable; urgency=medium
* New Synapse release 1.135.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 30 Jul 2025 12:19:14 +0100
matrix-synapse-py3 (1.135.0~rc1) stable; urgency=medium
* New Synapse release 1.135.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Jul 2025 12:08:37 +0100
matrix-synapse-py3 (1.134.0) stable; urgency=medium
* New Synapse release 1.134.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Jul 2025 14:22:50 +0100
matrix-synapse-py3 (1.134.0~rc1) stable; urgency=medium
* New Synapse release 1.134.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 09 Jul 2025 11:27:13 +0100
matrix-synapse-py3 (1.133.0) stable; urgency=medium
* New synapse release 1.133.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 01 Jul 2025 13:13:24 +0000
matrix-synapse-py3 (1.133.0~rc1) stable; urgency=medium
* New Synapse release 1.133.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 24 Jun 2025 11:57:47 +0100
matrix-synapse-py3 (1.132.0) stable; urgency=medium
* New Synapse release 1.132.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 17 Jun 2025 13:16:20 +0100
matrix-synapse-py3 (1.132.0~rc1) stable; urgency=medium
* New Synapse release 1.132.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 10 Jun 2025 11:15:18 +0100
matrix-synapse-py3 (1.131.0) stable; urgency=medium
* New Synapse release 1.131.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Jun 2025 14:36:55 +0100
matrix-synapse-py3 (1.131.0~rc1) stable; urgency=medium
* New synapse release 1.131.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 28 May 2025 10:25:44 +0000
matrix-synapse-py3 (1.130.0) stable; urgency=medium
* New Synapse release 1.130.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 20 May 2025 08:34:13 -0600
matrix-synapse-py3 (1.130.0~rc1) stable; urgency=medium
* New Synapse release 1.130.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 13 May 2025 10:44:04 +0100
matrix-synapse-py3 (1.129.0) stable; urgency=medium
* New Synapse release 1.129.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 06 May 2025 12:22:11 +0100
matrix-synapse-py3 (1.129.0~rc2) stable; urgency=medium
* New synapse release 1.129.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 30 Apr 2025 13:13:16 +0000
matrix-synapse-py3 (1.129.0~rc1) stable; urgency=medium
* New Synapse release 1.129.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Apr 2025 10:47:43 -0600
matrix-synapse-py3 (1.128.0) stable; urgency=medium
* New Synapse release 1.128.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 08 Apr 2025 14:09:54 +0100
matrix-synapse-py3 (1.128.0~rc1) stable; urgency=medium
* Update Poetry to 2.1.1.
* New synapse release 1.128.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 01 Apr 2025 14:35:33 +0000
matrix-synapse-py3 (1.127.1) stable; urgency=medium
* New Synapse release 1.127.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 26 Mar 2025 21:07:31 +0000
matrix-synapse-py3 (1.127.0) stable; urgency=medium
* New Synapse release 1.127.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 25 Mar 2025 12:04:15 +0000
matrix-synapse-py3 (1.127.0~rc1) stable; urgency=medium
* New Synapse release 1.127.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Mar 2025 13:30:05 +0000
matrix-synapse-py3 (1.126.0) stable; urgency=medium
* New Synapse release 1.126.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 Mar 2025 13:11:29 +0000
matrix-synapse-py3 (1.126.0~rc3) stable; urgency=medium
* New Synapse release 1.126.0rc3.
-- Synapse Packaging team <packages@matrix.org> Fri, 07 Mar 2025 15:45:05 +0000
matrix-synapse-py3 (1.126.0~rc2) stable; urgency=medium
* New Synapse release 1.126.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 05 Mar 2025 14:29:12 +0000
matrix-synapse-py3 (1.126.0~rc1) stable; urgency=medium
* New Synapse release 1.126.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Mar 2025 13:11:51 +0000
matrix-synapse-py3 (1.125.0) stable; urgency=medium
* New Synapse release 1.125.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 25 Feb 2025 08:10:07 -0700
matrix-synapse-py3 (1.125.0~rc1) stable; urgency=medium
* New synapse release 1.125.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 18 Feb 2025 13:32:49 +0000
matrix-synapse-py3 (1.124.0) stable; urgency=medium
* New Synapse release 1.124.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 11 Feb 2025 11:55:22 +0100
matrix-synapse-py3 (1.124.0~rc3) stable; urgency=medium
* New Synapse release 1.124.0rc3.
-- Synapse Packaging team <packages@matrix.org> Fri, 07 Feb 2025 13:42:55 +0000
matrix-synapse-py3 (1.124.0~rc2) stable; urgency=medium
* New Synapse release 1.124.0rc2.
-- Synapse Packaging team <packages@matrix.org> Wed, 05 Feb 2025 16:35:53 +0000
matrix-synapse-py3 (1.124.0~rc1) stable; urgency=medium
* New Synapse release 1.124.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 04 Feb 2025 11:53:05 +0000
matrix-synapse-py3 (1.123.0) stable; urgency=medium
* New Synapse release 1.123.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 28 Jan 2025 08:37:34 -0700
matrix-synapse-py3 (1.123.0~rc1) stable; urgency=medium
* New Synapse release 1.123.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 21 Jan 2025 14:39:57 +0100
matrix-synapse-py3 (1.122.0) stable; urgency=medium
* New Synapse release 1.122.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 14 Jan 2025 14:14:14 +0000
matrix-synapse-py3 (1.122.0~rc1) stable; urgency=medium
* New Synapse release 1.122.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 07 Jan 2025 14:06:19 +0000
matrix-synapse-py3 (1.121.1) stable; urgency=medium
* New Synapse release 1.121.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 11 Dec 2024 18:24:48 +0000
matrix-synapse-py3 (1.121.0) stable; urgency=medium
* New Synapse release 1.121.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 11 Dec 2024 13:12:30 +0100
matrix-synapse-py3 (1.121.0~rc1) stable; urgency=medium
* New Synapse release 1.121.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 04 Dec 2024 14:47:23 +0000
matrix-synapse-py3 (1.120.2) stable; urgency=medium
* New synapse release 1.120.2.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Dec 2024 15:43:37 +0000
matrix-synapse-py3 (1.120.1) stable; urgency=medium
* New synapse release 1.120.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 03 Dec 2024 09:07:57 +0000
matrix-synapse-py3 (1.120.0) stable; urgency=medium
* New synapse release 1.120.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 26 Nov 2024 13:10:23 +0000
matrix-synapse-py3 (1.120.0~rc1) stable; urgency=medium
* New Synapse release 1.120.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 20 Nov 2024 15:02:21 +0000
matrix-synapse-py3 (1.119.0) stable; urgency=medium
* New Synapse release 1.119.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 13 Nov 2024 13:57:51 +0000
matrix-synapse-py3 (1.119.0~rc2) stable; urgency=medium
* New Synapse release 1.119.0rc2.
-- Synapse Packaging team <packages@matrix.org> Mon, 11 Nov 2024 14:33:02 +0000
matrix-synapse-py3 (1.119.0~rc1) stable; urgency=medium
* New Synapse release 1.119.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 06 Nov 2024 08:59:43 -0700
matrix-synapse-py3 (1.118.0) stable; urgency=medium
* New Synapse release 1.118.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 29 Oct 2024 15:29:53 +0100
matrix-synapse-py3 (1.118.0~rc1) stable; urgency=medium
* New Synapse release 1.118.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 22 Oct 2024 11:48:14 +0100
matrix-synapse-py3 (1.117.0) stable; urgency=medium
* New Synapse release 1.117.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 15 Oct 2024 10:46:30 +0100
matrix-synapse-py3 (1.117.0~rc1) stable; urgency=medium
* New Synapse release 1.117.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 08 Oct 2024 14:37:11 +0100
matrix-synapse-py3 (1.116.0) stable; urgency=medium
* New Synapse release 1.116.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 01 Oct 2024 11:14:07 +0100
matrix-synapse-py3 (1.116.0~rc2) stable; urgency=medium
* New synapse release 1.116.0rc2.
-- Synapse Packaging team <packages@matrix.org> Thu, 26 Sep 2024 13:28:43 +0000
matrix-synapse-py3 (1.116.0~rc1) stable; urgency=medium
* New synapse release 1.116.0rc1.
-- Synapse Packaging team <packages@matrix.org> Wed, 25 Sep 2024 09:34:07 +0000
matrix-synapse-py3 (1.115.0) stable; urgency=medium
* New Synapse release 1.115.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 17 Sep 2024 14:32:10 +0100
matrix-synapse-py3 (1.115.0~rc2) stable; urgency=medium
* New Synapse release 1.115.0rc2.
-- Synapse Packaging team <packages@matrix.org> Thu, 12 Sep 2024 11:10:15 +0100
matrix-synapse-py3 (1.115.0~rc1) stable; urgency=medium
* New Synapse release 1.115.0rc1.
-- Synapse Packaging team <packages@matrix.org> Tue, 10 Sep 2024 08:39:09 -0600
matrix-synapse-py3 (1.114.0) stable; urgency=medium
* New Synapse release 1.114.0.
-- Synapse Packaging team <packages@matrix.org> Mon, 02 Sep 2024 15:14:53 +0100
matrix-synapse-py3 (1.114.0~rc3) stable; urgency=medium
* New Synapse release 1.114.0rc3.
-- Synapse Packaging team <packages@matrix.org> Fri, 30 Aug 2024 16:38:05 +0100
matrix-synapse-py3 (1.114.0~rc2) stable; urgency=medium
* New Synapse release 1.114.0rc2.
-- Synapse Packaging team <packages@matrix.org> Fri, 30 Aug 2024 15:35:13 +0100
matrix-synapse-py3 (1.114.0~rc1) stable; urgency=medium
* New synapse release 1.114.0rc1.

debian/copyright
View File

@@ -8,7 +8,7 @@ License: Apache-2.0
Files: *
Copyright: 2023 New Vector Ltd
License: AGPL-3.0-or-later or LicenseRef-Element-Commercial
License: AGPL-3.0-or-later
Files: synapse/config/saml2.py
Copyright: 2015, Ericsson

View File

@@ -1,13 +1,10 @@
.\" generated with Ronn-NG/v0.10.1
.\" http://github.com/apjanke/ronn-ng/tree/0.10.1
.TH "HASH_PASSWORD" "1" "August 2024" ""
.\" generated with Ronn-NG/v0.8.0
.\" http://github.com/apjanke/ronn-ng/tree/0.8.0
.TH "HASH_PASSWORD" "1" "July 2021" "" ""
.SH "NAME"
\fBhash_password\fR \- Calculate the hash of a new password, so that passwords can be reset
.SH "SYNOPSIS"
.TS
allbox;
\fBhash_password\fR [\fB\-p\fR \fB\-\-password\fR [password]] [\fB\-c\fR \fB\-\-config\fR \fIfile\fR]
.TE
\fBhash_password\fR [\fB\-p\fR|\fB\-\-password\fR [password]] [\fB\-c\fR|\fB\-\-config\fR \fIfile\fR]
.SH "DESCRIPTION"
\fBhash_password\fR calculates the hash of a supplied password using bcrypt\.
.P
@@ -23,7 +20,7 @@ bcrypt_rounds: 17 password_config: pepper: "random hashing pepper"
.SH "OPTIONS"
.TP
\fB\-p\fR, \fB\-\-password\fR
Read the password from the command line if [password] is supplied, or from \fBSTDIN\fR\. If not, prompt the user and read the password from the tty prompt\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
Read the password form the command line if [password] is supplied\. If not, prompt the user and read the password form the \fBSTDIN\fR\. It is not recommended to type the password on the command line directly\. Use the STDIN instead\.
.TP
\fB\-c\fR, \fB\-\-config\fR
Read the supplied YAML \fIfile\fR containing the options \fBbcrypt_rounds\fR and the \fBpassword_config\fR section containing the \fBpepper\fR value\.
@@ -36,17 +33,7 @@ $2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8\.X8fWFpum7SxZ9MFe
.fi
.IP "" 0
.P
Hash from the stdin:
.IP "" 4
.nf
$ cat password_file | hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX\.rcuAbM8ErLoUhybG
.fi
.IP "" 0
.P
Hash from the prompt:
Hash from the STDIN:
.IP "" 4
.nf
$ hash_password
@@ -66,6 +53,6 @@ $2b$12$CwI\.wBNr\.w3kmiUlV3T5s\.GT2wH7uebDCovDrCOh18dFedlANK99O
.fi
.IP "" 0
.SH "COPYRIGHT"
This man page was written by Rahul De «rahulde@swecha\.net» for Debian GNU/Linux distribution\.
This man page was written by Rahul De <\fI\%mailto:rahulde@swecha\.net\fR> for Debian GNU/Linux distribution\.
.SH "SEE ALSO"
synctl(1), synapse_port_db(1), register_new_matrix_user(1), synapse_review_recent_signups(1)

View File

@@ -1,182 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<meta http-equiv='content-type' content='text/html;charset=utf-8'>
<meta name='generator' content='Ronn-NG/v0.10.1 (http://github.com/apjanke/ronn-ng/tree/0.10.1)'>
<title>hash_password(1) - Calculate the hash of a new password, so that passwords can be reset</title>
<style type='text/css' media='all'>
/* style: man */
body#manpage {margin:0}
.mp {max-width:100ex;padding:0 9ex 1ex 4ex}
.mp p,.mp pre,.mp ul,.mp ol,.mp dl {margin:0 0 20px 0}
.mp h2 {margin:10px 0 0 0}
.mp > p,.mp > pre,.mp > ul,.mp > ol,.mp > dl {margin-left:8ex}
.mp h3 {margin:0 0 0 4ex}
.mp dt {margin:0;clear:left}
.mp dt.flush {float:left;width:8ex}
.mp dd {margin:0 0 0 9ex}
.mp h1,.mp h2,.mp h3,.mp h4 {clear:left}
.mp pre {margin-bottom:20px}
.mp pre+h2,.mp pre+h3 {margin-top:22px}
.mp h2+pre,.mp h3+pre {margin-top:5px}
.mp img {display:block;margin:auto}
.mp h1.man-title {display:none}
.mp,.mp code,.mp pre,.mp tt,.mp kbd,.mp samp,.mp h3,.mp h4 {font-family:monospace;font-size:14px;line-height:1.42857142857143}
.mp h2 {font-size:16px;line-height:1.25}
.mp h1 {font-size:20px;line-height:2}
.mp {text-align:justify;background:#fff}
.mp,.mp code,.mp pre,.mp pre code,.mp tt,.mp kbd,.mp samp {color:#131211}
.mp h1,.mp h2,.mp h3,.mp h4 {color:#030201}
.mp u {text-decoration:underline}
.mp code,.mp strong,.mp b {font-weight:bold;color:#131211}
.mp em,.mp var {font-style:italic;color:#232221;text-decoration:none}
.mp a,.mp a:link,.mp a:hover,.mp a code,.mp a pre,.mp a tt,.mp a kbd,.mp a samp {color:#0000ff}
.mp b.man-ref {font-weight:normal;color:#434241}
.mp pre {padding:0 4ex}
.mp pre code {font-weight:normal;color:#434241}
.mp h2+pre,h3+pre {padding-left:0}
ol.man-decor,ol.man-decor li {margin:3px 0 10px 0;padding:0;float:left;width:33%;list-style-type:none;text-transform:uppercase;color:#999;letter-spacing:1px}
ol.man-decor {width:100%}
ol.man-decor li.tl {text-align:left}
ol.man-decor li.tc {text-align:center;letter-spacing:4px}
ol.man-decor li.tr {text-align:right;float:right}
</style>
</head>
<!--
The following styles are deprecated and will be removed at some point:
div#man, div#man ol.man, div#man ol.head, div#man ol.man.
The .man-page, .man-decor, .man-head, .man-foot, .man-title, and
.man-navigation should be used instead.
-->
<body id='manpage'>
<div class='mp' id='man'>
<div class='man-navigation' style='display:none'>
<a href="#NAME">NAME</a>
<a href="#SYNOPSIS">SYNOPSIS</a>
<a href="#DESCRIPTION">DESCRIPTION</a>
<a href="#FILES">FILES</a>
<a href="#OPTIONS">OPTIONS</a>
<a href="#EXAMPLES">EXAMPLES</a>
<a href="#COPYRIGHT">COPYRIGHT</a>
<a href="#SEE-ALSO">SEE ALSO</a>
</div>
<ol class='man-decor man-head man head'>
<li class='tl'>hash_password(1)</li>
<li class='tc'></li>
<li class='tr'>hash_password(1)</li>
</ol>
<h2 id="NAME">NAME</h2>
<p class="man-name">
<code>hash_password</code> - <span class="man-whatis">Calculate the hash of a new password, so that passwords can be reset</span>
</p>
<h2 id="SYNOPSIS">SYNOPSIS</h2>
<table>
<tbody>
<tr>
<td>
<code>hash_password</code> [<code>-p</code>
</td>
<td>
<code>--password</code> [password]] [<code>-c</code>
</td>
<td>
<code>--config</code> <var>file</var>]</td>
</tr>
</tbody>
</table>
<h2 id="DESCRIPTION">DESCRIPTION</h2>
<p><strong>hash_password</strong> calculates the hash of a supplied password using bcrypt.</p>
<p><code>hash_password</code> takes a password as an parameter either on the command line
or the <code>STDIN</code> if not supplied.</p>
<p>It accepts an YAML file which can be used to specify parameters like the
number of rounds for bcrypt and password_config section having the pepper
value used for the hashing. By default <code>bcrypt_rounds</code> is set to <strong>12</strong>.</p>
<p>The hashed password is written on the <code>STDOUT</code>.</p>
<h2 id="FILES">FILES</h2>
<p>A sample YAML file accepted by <code>hash_password</code> is described below:</p>
<p>bcrypt_rounds: 17
password_config:
pepper: "random hashing pepper"</p>
<h2 id="OPTIONS">OPTIONS</h2>
<dl>
<dt>
<code>-p</code>, <code>--password</code>
</dt>
<dd>Read the password form the command line if [password] is supplied, or from <code>STDIN</code>.
If not, prompt the user and read the password from the tty prompt.
It is not recommended to type the password on the command line
directly. Use the STDIN instead.</dd>
<dt>
<code>-c</code>, <code>--config</code>
</dt>
<dd>Read the supplied YAML <var>file</var> containing the options <code>bcrypt_rounds</code>
and the <code>password_config</code> section containing the <code>pepper</code> value.</dd>
</dl>
<h2 id="EXAMPLES">EXAMPLES</h2>
<p>Hash from the command line:</p>
<pre><code>$ hash_password -p "p@ssw0rd"
$2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe
</code></pre>
<p>Hash from the stdin:</p>
<pre><code>$ cat password_file | hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
</code></pre>
<p>Hash from the prompt:</p>
<pre><code>$ hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
</code></pre>
<p>Using a config file:</p>
<pre><code>$ hash_password -c config.yml
Password:
Confirm password:
$2b$12$CwI.wBNr.w3kmiUlV3T5s.GT2wH7uebDCovDrCOh18dFedlANK99O
</code></pre>
<h2 id="COPYRIGHT">COPYRIGHT</h2>
<p>This man page was written by Rahul De «rahulde@swecha.net»
for Debian GNU/Linux distribution.</p>
<h2 id="SEE-ALSO">SEE ALSO</h2>
<p><span class="man-ref">synctl<span class="s">(1)</span></span>, <span class="man-ref">synapse_port_db<span class="s">(1)</span></span>, <span class="man-ref">register_new_matrix_user<span class="s">(1)</span></span>, <span class="man-ref">synapse_review_recent_signups<span class="s">(1)</span></span></p>
<ol class='man-decor man-foot man foot'>
<li class='tl'></li>
<li class='tc'>August 2024</li>
<li class='tr'>hash_password(1)</li>
</ol>
</div>
</body>
</html>

View File

@@ -29,8 +29,8 @@ A sample YAML file accepted by `hash_password` is described below:
## OPTIONS
* `-p`, `--password`:
Read the password from the command line if [password] is supplied, or from `STDIN`.
If not, prompt the user and read the password from the tty prompt.
Read the password form the command line if [password] is supplied.
If not, prompt the user and read the password form the `STDIN`.
It is not recommended to type the password on the command line
directly. Use the STDIN instead.
@@ -45,14 +45,7 @@ Hash from the command line:
$ hash_password -p "p@ssw0rd"
$2b$12$VJNqWQYfsWTEwcELfoSi4Oa8eA17movHqqi8.X8fWFpum7SxZ9MFe
Hash from the stdin:
$ cat password_file | hash_password
Password:
Confirm password:
$2b$12$AszlvfmJl2esnyhmn8m/kuR2tdXgROWtWxnX.rcuAbM8ErLoUhybG
Hash from the prompt:
Hash from the STDIN:
$ hash_password
Password:

View File

@@ -138,13 +138,6 @@ for port in 8080 8081 8082; do
per_user:
per_second: 1000
burst_count: 1000
rc_presence:
per_user:
per_second: 1000
burst_count: 1000
rc_delayed_event_mgmt:
per_second: 1000
burst_count: 1000
RC
)
echo "${ratelimiting}" >> "$port.config"

View File

@@ -20,16 +20,45 @@
# `poetry export | pip install -r /dev/stdin`, but beware: we have experienced bugs in
# `poetry export` in the past.
ARG DEBIAN_VERSION=trixie
ARG PYTHON_VERSION=3.13
ARG POETRY_VERSION=2.1.1
ARG PYTHON_VERSION=3.11
###
### Stage 0: generate requirements.txt
###
### This stage is platform-agnostic, so we can use the build platform in case of cross-compilation.
###
FROM --platform=$BUILDPLATFORM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS requirements
# We hardcode the use of Debian bookworm here because this could change upstream
# and other Dockerfiles used for testing are expecting bookworm.
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS requirements
# RUN --mount is specific to buildkit and is documented at
# https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#build-mounts-run---mount.
# Here we use it to set up a cache for apt (and below for pip), to improve
# rebuild speeds on slow connections.
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && apt-get install -yqq \
build-essential curl git libffi-dev libssl-dev pkg-config \
&& rm -rf /var/lib/apt/lists/*
# Install rust and ensure its in the PATH.
# (Rust may be needed to compile `cryptography`---which is one of poetry's
# dependencies---on platforms that don't have a `cryptography` wheel.
ENV RUSTUP_HOME=/rust
ENV CARGO_HOME=/cargo
ENV PATH=/cargo/bin:/rust/bin:$PATH
RUN mkdir /rust /cargo
RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable --profile minimal
# arm64 builds consume a lot of memory if `CARGO_NET_GIT_FETCH_WITH_CLI` is not
# set to true, so we expose it as a build-arg.
ARG CARGO_NET_GIT_FETCH_WITH_CLI=false
ENV CARGO_NET_GIT_FETCH_WITH_CLI=$CARGO_NET_GIT_FETCH_WITH_CLI
# We install poetry in its own build stage to avoid its dependencies conflicting with
# synapse's dependencies.
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --user "poetry==1.3.2"
WORKDIR /synapse
@@ -46,30 +75,41 @@ ARG TEST_ONLY_SKIP_DEP_HASH_VERIFICATION
# Instead, we'll just install what a regular `pip install` would from PyPI.
ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
# This silences a warning as uv isn't able to do hardlinks between its cache
# (mounted as --mount=type=cache) and the target directory.
ENV UV_LINK_MODE=copy
# Export the dependencies, but only if we're actually going to use the Poetry lockfile.
# Otherwise, just create an empty requirements file so that the Dockerfile can
# proceed.
ARG POETRY_VERSION
RUN --mount=type=cache,target=/root/.cache/uv \
if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
uvx --with poetry-plugin-export==1.9.0 \
poetry@${POETRY_VERSION} export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}; \
RUN if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
/root/.local/bin/poetry export --extras all -o /synapse/requirements.txt ${TEST_ONLY_SKIP_DEP_HASH_VERIFICATION:+--without-hashes}; \
else \
touch /synapse/requirements.txt; \
touch /synapse/requirements.txt; \
fi
###
### Stage 1: builder
###
FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS builder
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm AS builder
# install the OS build deps
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && apt-get install -yqq \
build-essential \
libffi-dev \
libjpeg-dev \
libpq-dev \
libssl-dev \
libwebp-dev \
libxml++2.6-dev \
libxslt1-dev \
openssl \
zlib1g-dev \
git \
curl \
libicu-dev \
pkg-config \
&& rm -rf /var/lib/apt/lists/*
# This silences a warning as uv isn't able to do hardlinks between its cache
# (mounted as --mount=type=cache) and the target directory.
ENV UV_LINK_MODE=copy
# Install rust and ensure its in the PATH
ENV RUSTUP_HOME=/rust
@@ -79,6 +119,7 @@ RUN mkdir /rust /cargo
RUN curl -sSf https://sh.rustup.rs | sh -s -- -y --no-modify-path --default-toolchain stable --profile minimal
# arm64 builds consume a lot of memory if `CARGO_NET_GIT_FETCH_WITH_CLI` is not
# set to true, so we expose it as a build-arg.
ARG CARGO_NET_GIT_FETCH_WITH_CLI=false
@@ -90,8 +131,8 @@ ENV CARGO_NET_GIT_FETCH_WITH_CLI=$CARGO_NET_GIT_FETCH_WITH_CLI
#
# This is aiming at installing the `[tool.poetry.depdendencies]` from pyproject.toml.
COPY --from=requirements /synapse/requirements.txt /synapse/
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --prefix="/install" --no-deps -r /synapse/requirements.txt
RUN --mount=type=cache,target=/root/.cache/pip \
pip install --prefix="/install" --no-deps --no-warn-script-location -r /synapse/requirements.txt
# Copy over the rest of the synapse source code.
COPY synapse /synapse/synapse/
@@ -105,86 +146,42 @@ ARG TEST_ONLY_IGNORE_POETRY_LOCKFILE
# Install the synapse package itself.
# If we have populated requirements.txt, we don't install any dependencies
# as we should already have those from the previous `pip install` step.
RUN \
--mount=type=cache,target=/root/.cache/uv \
--mount=type=cache,target=/synapse/target,sharing=locked \
RUN --mount=type=cache,target=/synapse/target,sharing=locked \
--mount=type=cache,target=${CARGO_HOME}/registry,sharing=locked \
if [ -z "$TEST_ONLY_IGNORE_POETRY_LOCKFILE" ]; then \
uv pip install --prefix="/install" --no-deps /synapse[all]; \
pip install --prefix="/install" --no-deps --no-warn-script-location /synapse[all]; \
else \
uv pip install --prefix="/install" /synapse[all]; \
pip install --prefix="/install" --no-warn-script-location /synapse[all]; \
fi
###
### Stage 2: runtime dependencies download for ARM64 and AMD64
### Stage 2: runtime
###
FROM --platform=$BUILDPLATFORM docker.io/library/debian:${DEBIAN_VERSION} AS runtime-deps
# Tell apt to keep downloaded package files, as we're using cache mounts.
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm
# Add both target architectures
RUN dpkg --add-architecture arm64
RUN dpkg --add-architecture amd64
LABEL org.opencontainers.image.url='https://matrix.org/docs/projects/server/synapse'
LABEL org.opencontainers.image.documentation='https://github.com/element-hq/synapse/blob/master/docker/README.md'
LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later'
# Fetch the runtime dependencies debs for both architectures
# We do that by building a recursive list of packages we need to download with `apt-cache depends`
# and then downloading them with `apt-get download`.
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
apt-cache depends --recurse --no-recommends --no-suggests --no-conflicts --no-breaks --no-replaces --no-enhances --no-pre-depends \
curl \
gosu \
libjpeg62-turbo \
libpq5 \
libwebp7 \
xmlsec1 \
libjemalloc2 \
| grep '^\w' > /tmp/pkg-list && \
for arch in arm64 amd64; do \
mkdir -p /tmp/debs-${arch} && \
chown _apt:root /tmp/debs-${arch} && \
cd /tmp/debs-${arch} && \
apt-get -o APT::Architecture="${arch}" download $(cat /tmp/pkg-list); \
done
apt-get update -qq && apt-get install -yqq \
curl \
gosu \
libjpeg62-turbo \
libpq5 \
libwebp7 \
xmlsec1 \
libjemalloc2 \
libicu72 \
libssl-dev \
openssl \
&& rm -rf /var/lib/apt/lists/*
# Extract the debs for each architecture
RUN \
for arch in arm64 amd64; do \
mkdir -p /install-${arch}/var/lib/dpkg/status.d/ && \
for deb in /tmp/debs-${arch}/*.deb; do \
package_name=$(dpkg-deb -I ${deb} | awk '/^ Package: .*$/ {print $2}'); \
echo "Extracting: ${package_name}"; \
dpkg --ctrl-tarfile $deb | tar -Ox ./control > /install-${arch}/var/lib/dpkg/status.d/${package_name}; \
dpkg --extract $deb /install-${arch}; \
done; \
done
###
### Stage 3: runtime
###
FROM docker.io/library/python:${PYTHON_VERSION}-slim-${DEBIAN_VERSION}
ARG TARGETARCH
LABEL org.opencontainers.image.url='https://github.com/element-hq/synapse'
LABEL org.opencontainers.image.documentation='https://element-hq.github.io/synapse/latest/'
LABEL org.opencontainers.image.source='https://github.com/element-hq/synapse.git'
LABEL org.opencontainers.image.licenses='AGPL-3.0-or-later OR LicenseRef-Element-Commercial'
COPY --from=runtime-deps /install-${TARGETARCH}/etc /etc
COPY --from=runtime-deps /install-${TARGETARCH}/usr /usr
COPY --from=runtime-deps /install-${TARGETARCH}/var /var
# Copy the installed python packages from the builder stage.
#
# uv will generate a `.lock` file when installing packages, which we don't want
# to copy to the final image.
COPY --from=builder --exclude=.lock /install /usr/local
COPY --from=builder /install /usr/local
COPY ./docker/start.py /start.py
COPY ./docker/conf /conf

View File

@@ -1,67 +1,51 @@
# syntax=docker/dockerfile:1-labs
# syntax=docker/dockerfile:1
ARG SYNAPSE_VERSION=latest
ARG FROM=matrixdotorg/synapse:$SYNAPSE_VERSION
ARG DEBIAN_VERSION=trixie
ARG PYTHON_VERSION=3.13
ARG REDIS_VERSION=7.2
# first of all, we create a base image with dependencies which we can copy into the
# first of all, we create a base image with an nginx which we can copy into the
# target image. For repeated rebuilds, this is much faster than apt installing
# each time.
FROM ghcr.io/astral-sh/uv:python${PYTHON_VERSION}-${DEBIAN_VERSION} AS deps_base
ARG DEBIAN_VERSION
ARG REDIS_VERSION
# Tell apt to keep downloaded package files, as we're using cache mounts.
RUN rm -f /etc/apt/apt.conf.d/docker-clean; echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
# The upstream redis-server deb has fewer dynamic libraries than Debian's package which makes it easier to copy later on
RUN \
curl -fsSL https://packages.redis.io/gpg | gpg --dearmor -o /usr/share/keyrings/redis-archive-keyring.gpg && \
chmod 644 /usr/share/keyrings/redis-archive-keyring.gpg && \
echo "deb [signed-by=/usr/share/keyrings/redis-archive-keyring.gpg] https://packages.redis.io/deb ${DEBIAN_VERSION} main" | tee /etc/apt/sources.list.d/redis.list
FROM docker.io/library/debian:bookworm-slim AS deps_base
RUN \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update -qq && \
DEBIAN_FRONTEND=noninteractive apt-get install -yqq --no-install-recommends \
nginx-light \
redis-server="6:${REDIS_VERSION}.*" redis-tools="6:${REDIS_VERSION}.*" \
# libicu is required by postgres, see `docker/complement/Dockerfile`
libicu76
redis-server nginx-light
RUN \
# remove default page
rm /etc/nginx/sites-enabled/default && \
# have nginx log to stderr/out
ln -sf /dev/stdout /var/log/nginx/access.log && \
ln -sf /dev/stderr /var/log/nginx/error.log
# --link-mode=copy silences a warning as uv isn't able to do hardlinks between its cache
# (mounted as --mount=type=cache) and the target directory.
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --link-mode=copy --prefix="/uv/usr/local" supervisor~=4.2
RUN mkdir -p /uv/etc/supervisor/conf.d
# Similarly, a base to copy the redis server from.
#
# The redis docker image has fewer dynamic libraries than the debian package,
# which makes it much easier to copy (but we need to make sure we use an image
# based on the same debian version as the synapse image, to make sure we get
# the expected version of libc.
FROM docker.io/library/redis:7-bookworm AS redis_base
# now build the final image, based on the the regular Synapse docker image
FROM $FROM
# Copy over dependencies
COPY --from=deps_base --parents /usr/lib/*-linux-gnu/libicu* /
COPY --from=deps_base /usr/bin/redis-server /usr/local/bin
COPY --from=deps_base /uv /
# Install supervisord with pip instead of apt, to avoid installing a second
# copy of python.
RUN --mount=type=cache,target=/root/.cache/pip \
pip install supervisor~=4.2
RUN mkdir -p /etc/supervisor/conf.d
# Copy over redis and nginx
COPY --from=redis_base /usr/local/bin/redis-server /usr/local/bin
COPY --from=deps_base /usr/sbin/nginx /usr/sbin
COPY --from=deps_base /usr/share/nginx /usr/share/nginx
COPY --from=deps_base /usr/lib/nginx /usr/lib/nginx
COPY --from=deps_base /etc/nginx /etc/nginx
COPY --from=deps_base /var/log/nginx /var/log/nginx
# chown to allow non-root user to write to http-*-temp-path dirs
COPY --from=deps_base --chown=www-data:root /var/lib/nginx /var/lib/nginx
RUN rm /etc/nginx/sites-enabled/default
RUN mkdir /var/log/nginx /var/lib/nginx
RUN chown www-data /var/lib/nginx
# have nginx log to stderr/out
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log
# Copy Synapse worker, nginx and supervisord configuration template files
COPY ./docker/conf-workers/* /conf/
@@ -80,4 +64,4 @@ FROM $FROM
# Replace the healthcheck with one which checks *all* the workers. The script
# is generated by configure_workers_and_start.py.
HEALTHCHECK --start-period=5s --interval=15s --timeout=5s \
CMD ["/healthcheck.sh"]
CMD /bin/sh /healthcheck.sh

View File

@@ -114,9 +114,6 @@ The following environment variables are supported in `run` mode:
is set via `docker run --user`, defaults to `991`, `991`. Note that this user
must have permission to read the config files, and write to the data directories.
* `TZ`: the [timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones) the container will run with. Defaults to `UTC`.
* `SYNAPSE_HTTP_PROXY`: Passed through to the Synapse process as the `http_proxy` environment variable.
* `SYNAPSE_HTTPS_PROXY`: Passed through to the Synapse process as the `https_proxy` environment variable.
* `SYNAPSE_NO_PROXY`: Passed through to the Synapse process as `no_proxy` environment variable.
For more complex setups (e.g. for workers) you can also pass your args directly to synapse using `run` mode. For example like this:
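As a hedged illustration of that `run`-mode invocation (the concrete example sits outside this excerpt), an equivalent docker-compose `command` override might look like the sketch below; the worker config path and volume name are assumptions.

```yaml
# Hedged sketch: with the image's default /start.py entrypoint, this command
# starts Synapse in `run` mode as a generic worker. Paths are illustrative.
services:
  synapse-worker:
    image: matrixdotorg/synapse:latest
    command: >
      run -m synapse.app.generic_worker
      --config-path=/data/homeserver.yaml
      --config-path=/data/worker1.yaml
    volumes:
      - synapse-data:/data

volumes:
  synapse-data:
```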

View File

@@ -9,24 +9,21 @@
ARG SYNAPSE_VERSION=latest
# This is an intermediate image, to be built locally (not pulled from a registry).
ARG FROM=matrixdotorg/synapse-workers:$SYNAPSE_VERSION
ARG DEBIAN_VERSION=trixie
FROM docker.io/library/postgres:14-${DEBIAN_VERSION} AS postgres_base
FROM $FROM
# First of all, we copy postgres server from the official postgres image,
# since for repeated rebuilds, this is much faster than apt installing
# postgres each time.
# This trick only works because we use a postgres image based on the same
# debian version as Synapse's docker image (so the versions of the shared
# libraries match). Any missing libraries need to be added to either the
# Synapse image or docker/Dockerfile-workers.
# This trick only works because (a) the Synapse image happens to have all the
# shared libraries that postgres wants, (b) we use a postgres image based on
# the same debian version as Synapse's docker image (so the versions of the
# shared libraries match).
RUN adduser --system --uid 999 postgres --home /var/lib/postgresql
COPY --from=postgres_base /usr/lib/postgresql /usr/lib/postgresql
COPY --from=postgres_base /usr/share/postgresql /usr/share/postgresql
COPY --from=postgres_base --chown=postgres /var/run/postgresql /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/14/bin"
COPY --from=docker.io/library/postgres:13-bookworm /usr/lib/postgresql /usr/lib/postgresql
COPY --from=docker.io/library/postgres:13-bookworm /usr/share/postgresql /usr/share/postgresql
RUN mkdir /var/run/postgresql && chown postgres /var/run/postgresql
ENV PATH="${PATH}:/usr/lib/postgresql/13/bin"
ENV PGDATA=/var/lib/postgresql/data
# We also initialize the database at build time, rather than runtime, so that it's faster to spin up the image.
@@ -58,4 +55,4 @@ ENTRYPOINT ["/start_for_complement.sh"]
# Update the healthcheck to have a shorter check interval
HEALTHCHECK --start-period=5s --interval=1s --timeout=1s \
CMD ["/healthcheck.sh"]
CMD /bin/sh /healthcheck.sh

View File

@@ -5,12 +5,12 @@
set -e
echo "Complement Synapse launcher"
echo " Args: $*"
echo " Args: $@"
echo " Env: SYNAPSE_COMPLEMENT_DATABASE=$SYNAPSE_COMPLEMENT_DATABASE SYNAPSE_COMPLEMENT_USE_WORKERS=$SYNAPSE_COMPLEMENT_USE_WORKERS SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR=$SYNAPSE_COMPLEMENT_USE_ASYNCIO_REACTOR"
function log {
d=$(printf '%(%Y-%m-%d %H:%M:%S)T,%.3s\n' ${EPOCHREALTIME/./ })
echo "$d $*"
d=$(date +"%Y-%m-%d %H:%M:%S,%3N")
echo "$d $@"
}
# Set the server name of the homeserver
@@ -54,6 +54,7 @@ if [[ -n "$SYNAPSE_COMPLEMENT_USE_WORKERS" ]]; then
export SYNAPSE_WORKER_TYPES="\
event_persister:2, \
background_worker, \
frontend_proxy, \
event_creator, \
user_dir, \
media_repository, \
@@ -64,7 +65,6 @@ if [[ -n "$SYNAPSE_COMPLEMENT_USE_WORKERS" ]]; then
client_reader, \
appservice, \
pusher, \
device_lists:2, \
stream_writers=account_data+presence+receipts+to_device+typing"
fi
@@ -103,11 +103,12 @@ fi
# Note that both the key and certificate are in PEM format (not DER).
# First generate a configuration file to set up a Subject Alternative Name.
echo "\
cat > /conf/server.tls.conf <<EOF
.include /etc/ssl/openssl.cnf
[SAN]
subjectAltName=DNS:${SERVER_NAME}" > /conf/server.tls.conf
subjectAltName=DNS:${SERVER_NAME}
EOF
# Generate an RSA key
openssl genrsa -out /conf/server.tls.key 2048
@@ -122,12 +123,12 @@ openssl x509 -req -in /conf/server.tls.csr \
-out /conf/server.tls.crt -extfile /conf/server.tls.conf -extensions SAN
# Assert that we have a Subject Alternative Name in the certificate.
# (the test will exit with 1 here if there isn't a SAN in the certificate.)
[[ $(openssl x509 -in /conf/server.tls.crt -noout -text) == *DNS:* ]]
# (grep will exit with 1 here if there isn't a SAN in the certificate.)
openssl x509 -in /conf/server.tls.crt -noout -text | grep DNS:
export SYNAPSE_TLS_CERT=/conf/server.tls.crt
export SYNAPSE_TLS_KEY=/conf/server.tls.key
# Run the script that writes the necessary config files and starts supervisord, which in turn
# starts everything else
exec /configure_workers_and_start.py "$@"
exec /configure_workers_and_start.py

View File

@@ -7,7 +7,6 @@
#}
## Server ##
public_baseurl: http://127.0.0.1:8008/
report_stats: False
trusted_key_servers: []
enable_registration: true
@@ -85,22 +84,6 @@ rc_invites:
per_user:
per_second: 1000
burst_count: 1000
per_issuer:
per_second: 1000
burst_count: 1000
rc_presence:
per_user:
per_second: 9999
burst_count: 9999
rc_delayed_event_mgmt:
per_second: 9999
burst_count: 9999
rc_room_creation:
per_second: 9999
burst_count: 9999
federation_rr_transactions_per_room_per_second: 9999
@@ -121,20 +104,6 @@ experimental_features:
msc3967_enabled: true
# Expose a room summary for public rooms
msc3266_enabled: true
# Send to-device messages to application services
msc2409_to_device_messages_enabled: true
# Allow application services to masquerade devices
msc3202_device_masquerading: true
# Sending device list changes, one-time key counts and fallback key usage to application services
msc3202_transaction_extensions: true
# Proxy OTK claim requests to exclusive ASes
msc3983_appservice_otk_claims: true
# Proxy key queries to exclusive ASes
msc3984_appservice_key_query: true
# Invite filtering
msc4155_enabled: true
# Thread Subscriptions
msc4306_enabled: true
server_notices:
system_mxid_localpart: _server
@@ -142,18 +111,10 @@ server_notices:
system_mxid_avatar_url: ""
room_name: "Server Alert"
# Enable delayed events (msc4140)
max_event_delay_duration: 24h
# Disable sync cache so that initial `/sync` requests are up-to-date.
caches:
sync_response_cache_duration: 0
# Complement assumes that it can publish to the room list by default.
room_list_publication_rules:
- action: allow
{% include "shared-orig.yaml.j2" %}

View File

@@ -38,13 +38,10 @@ server {
{% if using_unix_sockets %}
proxy_pass http://unix:/run/main_public.sock;
{% else %}
# note: do not add a path (even a single /) after the port in `proxy_pass`,
# otherwise nginx will canonicalise the URI and cause signature verification
# errors.
proxy_pass http://localhost:8080;
{% endif %}
proxy_set_header X-Forwarded-For $remote_addr;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host:$server_port;
proxy_set_header Host $host;
}
}

View File

@@ -1,6 +1,5 @@
{% if use_forking_launcher %}
[program:synapse_fork]
environment=http_proxy="%(ENV_SYNAPSE_HTTP_PROXY)s",https_proxy="%(ENV_SYNAPSE_HTTPS_PROXY)s",no_proxy="%(ENV_SYNAPSE_NO_PROXY)s"
command=/usr/local/bin/python -m synapse.app.complement_fork_starter
{{ main_config_path }}
synapse.app.homeserver
@@ -21,7 +20,6 @@ exitcodes=0
{% else %}
[program:synapse_main]
environment=http_proxy="%(ENV_SYNAPSE_HTTP_PROXY)s",https_proxy="%(ENV_SYNAPSE_HTTPS_PROXY)s",no_proxy="%(ENV_SYNAPSE_NO_PROXY)s"
command=/usr/local/bin/prefix-log /usr/local/bin/python -m synapse.app.homeserver
--config-path="{{ main_config_path }}"
--config-path=/conf/workers/shared.yaml
@@ -38,7 +36,6 @@ exitcodes=0
{% for worker in workers %}
[program:synapse_{{ worker.name }}]
environment=http_proxy="%(ENV_SYNAPSE_HTTP_PROXY)s",https_proxy="%(ENV_SYNAPSE_HTTPS_PROXY)s",no_proxy="%(ENV_SYNAPSE_NO_PROXY)s"
command=/usr/local/bin/prefix-log /usr/local/bin/python -m {{ worker.app }}
--config-path="{{ main_config_path }}"
--config-path=/conf/workers/shared.yaml

View File

@@ -77,13 +77,6 @@ loggers:
#}
synapse.visibility.filtered_event_debug:
level: DEBUG
{#
If Synapse is under test, we don't care about seeing the "Applying schema" log
lines at the INFO level every time we run the tests (it's 100 lines of bulk)
#}
synapse.storage.prepare_database:
level: WARN
{% endif %}
root:

View File

@@ -1,4 +1,4 @@
#!/usr/local/bin/python
#!/usr/bin/env python
#
# This file is licensed under the Affero General Public License (AGPL) version 3.
#
@@ -65,9 +65,13 @@ from itertools import chain
from pathlib import Path
from typing import (
Any,
Dict,
List,
Mapping,
MutableMapping,
NoReturn,
Optional,
Set,
SupportsIndex,
)
@@ -92,7 +96,7 @@ WORKER_PLACEHOLDER_NAME = "placeholder_name"
# Watching /_matrix/media and related needs a "media" listener
# Stream Writers require "client" and "replication" listeners because they
# have to attach by instance_map to the master process and have client endpoints.
WORKERS_CONFIG: dict[str, dict[str, Any]] = {
WORKERS_CONFIG: Dict[str, Dict[str, Any]] = {
"pusher": {
"app": "synapse.app.generic_worker",
"listener_resources": [],
@@ -174,9 +178,6 @@ WORKERS_CONFIG: dict[str, dict[str, Any]] = {
"^/_matrix/client/(api/v1|r0|v3|unstable)/login$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/account/3pid$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/account/whoami$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/account/deactivate$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/devices(/|$)",
"^/_matrix/client/(r0|v3)/delete_devices$",
"^/_matrix/client/versions$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/voip/turnServer$",
"^/_matrix/client/(r0|v3|unstable)/register$",
@@ -193,10 +194,6 @@ WORKERS_CONFIG: dict[str, dict[str, Any]] = {
"^/_matrix/client/(api/v1|r0|v3|unstable)/directory/room/.*$",
"^/_matrix/client/(r0|v3|unstable)/capabilities$",
"^/_matrix/client/(r0|v3|unstable)/notifications$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload",
"^/_matrix/client/(api/v1|r0|v3|unstable)/keys/device_signing/upload$",
"^/_matrix/client/(api/v1|r0|v3|unstable)/keys/signatures/upload$",
"^/_matrix/client/unstable/org.matrix.msc4140/delayed_events(/.*/restart)?$",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
@@ -205,7 +202,6 @@ WORKERS_CONFIG: dict[str, dict[str, Any]] = {
"app": "synapse.app.generic_worker",
"listener_resources": ["federation"],
"endpoint_patterns": [
"^/_matrix/federation/v1/version$",
"^/_matrix/federation/(v1|v2)/event/",
"^/_matrix/federation/(v1|v2)/state/",
"^/_matrix/federation/(v1|v2)/state_ids/",
@@ -268,6 +264,13 @@ WORKERS_CONFIG: dict[str, dict[str, Any]] = {
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"frontend_proxy": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
"endpoint_patterns": ["^/_matrix/client/(api/v1|r0|v3|unstable)/keys/upload"],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"account_data": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
@@ -302,13 +305,6 @@ WORKERS_CONFIG: dict[str, dict[str, Any]] = {
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"device_lists": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
"endpoint_patterns": [],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"typing": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
@@ -325,15 +321,6 @@ WORKERS_CONFIG: dict[str, dict[str, Any]] = {
"shared_extra_conf": {},
"worker_extra_conf": "",
},
"thread_subscriptions": {
"app": "synapse.app.generic_worker",
"listener_resources": ["client", "replication"],
"endpoint_patterns": [
"^/_matrix/client/unstable/io.element.msc4306/.*",
],
"shared_extra_conf": {},
"worker_extra_conf": "",
},
}
# Templates for sections that may be inserted multiple times in config files
@@ -364,11 +351,6 @@ def error(txt: str) -> NoReturn:
def flush_buffers() -> None:
"""
Python's `print()` buffers output by default, typically waiting until ~8KB
accumulates. This method can be used to flush the buffers so we can see the output
of any print statements so far.
"""
sys.stdout.flush()
sys.stderr.flush()
@@ -394,18 +376,16 @@ def convert(src: str, dst: str, **template_vars: object) -> None:
#
# We use append mode in case the files have already been written to by something else
# (for instance, as part of the instructions in a dockerfile).
exists = os.path.isfile(dst)
with open(dst, "a") as outfile:
# In case the existing file doesn't end with a newline
if exists:
outfile.write("\n")
outfile.write("\n")
outfile.write(rendered)
def add_worker_roles_to_shared_config(
shared_config: dict,
worker_types_set: set[str],
worker_types_set: Set[str],
worker_name: str,
worker_port: int,
) -> None:
@@ -424,18 +404,16 @@ def add_worker_roles_to_shared_config(
# streams
instance_map = shared_config.setdefault("instance_map", {})
# This is a list of the stream_writers.
stream_writers = {
# This is a list of the stream_writers that there can be only one of. Events can be
# sharded, and therefore don't belong here.
singular_stream_writers = [
"account_data",
"events",
"device_lists",
"presence",
"receipts",
"to_device",
"typing",
"push_rules",
"thread_subscriptions",
}
]
# Worker-type specific sharding config. Now a single worker can fulfill multiple
# roles, check each.
@@ -445,11 +423,28 @@ def add_worker_roles_to_shared_config(
if "federation_sender" in worker_types_set:
shared_config.setdefault("federation_sender_instances", []).append(worker_name)
if "event_persister" in worker_types_set:
# Event persisters write to the events stream, so we need to update
# the list of event stream writers
shared_config.setdefault("stream_writers", {}).setdefault("events", []).append(
worker_name
)
# Map of stream writer instance names to host/ports combos
if os.environ.get("SYNAPSE_USE_UNIX_SOCKET", False):
instance_map[worker_name] = {
"path": f"/run/worker.{worker_port}",
}
else:
instance_map[worker_name] = {
"host": "localhost",
"port": worker_port,
}
# Update the list of stream writers. It's convenient that the name of the worker
# type is the same as the stream to write. Iterate over the whole list in case there
# is more than one.
for worker in worker_types_set:
if worker in stream_writers:
if worker in singular_stream_writers:
shared_config.setdefault("stream_writers", {}).setdefault(
worker, []
).append(worker_name)
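
For illustration, a minimal sketch of the shared-config entries this logic produces for a hypothetical stream-writer worker (the worker name, stream and port below are assumptions, and TCP is assumed rather than unix sockets):

```python
# Hypothetical illustration of the shared-config entries built above.
# Assumes a worker named "typing1" handling the "typing" stream on port 18015,
# with SYNAPSE_USE_UNIX_SOCKET unset (so host/port is used).
shared_config: dict = {}
worker_name, worker_port = "typing1", 18015
worker_types_set = {"typing"}
singular_stream_writers = ["typing", "to_device", "receipts", "presence"]

instance_map = shared_config.setdefault("instance_map", {})
instance_map[worker_name] = {"host": "localhost", "port": worker_port}

for worker in worker_types_set:
    if worker in singular_stream_writers:
        shared_config.setdefault("stream_writers", {}).setdefault(worker, []).append(
            worker_name
        )

print(shared_config)
# {'instance_map': {'typing1': {'host': 'localhost', 'port': 18015}},
#  'stream_writers': {'typing': ['typing1']}}
```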
@@ -468,9 +463,9 @@ def add_worker_roles_to_shared_config(
def merge_worker_template_configs(
existing_dict: dict[str, Any] | None,
to_be_merged_dict: dict[str, Any],
) -> dict[str, Any]:
existing_dict: Optional[Dict[str, Any]],
to_be_merged_dict: Dict[str, Any],
) -> Dict[str, Any]:
"""When given an existing dict of worker template configuration consisting with both
dicts and lists, merge new template data from WORKERS_CONFIG(or create) and
return new dict.
@@ -481,7 +476,7 @@ def merge_worker_template_configs(
existing_dict.
Returns: The newly merged together dict values.
"""
new_dict: dict[str, Any] = {}
new_dict: Dict[str, Any] = {}
if not existing_dict:
# It doesn't exist yet, just use the new dict(but take a copy not a reference)
new_dict = to_be_merged_dict.copy()
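
A minimal sketch of the merge behaviour described in this docstring; the exact semantics are not fully visible in this hunk, so list concatenation and shallow dict updates are assumptions:

```python
# Minimal sketch of merging two worker templates. The real
# merge_worker_template_configs semantics aren't fully shown in this hunk;
# this assumes list values are concatenated and dict values shallow-updated.
from typing import Any, Dict, Optional

def merge_templates(
    existing: Optional[Dict[str, Any]], incoming: Dict[str, Any]
) -> Dict[str, Any]:
    if not existing:
        return incoming.copy()
    merged: Dict[str, Any] = existing.copy()
    for key, value in incoming.items():
        if isinstance(value, list):
            merged[key] = existing.get(key, []) + value
        elif isinstance(value, dict):
            merged[key] = {**existing.get(key, {}), **value}
        else:
            merged[key] = value
    return merged

print(merge_templates(
    {"listener_resources": ["client"], "shared_extra_conf": {}},
    {"listener_resources": ["replication"], "shared_extra_conf": {}},
))
# {'listener_resources': ['client', 'replication'], 'shared_extra_conf': {}}
```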
@@ -506,8 +501,8 @@ def merge_worker_template_configs(
def insert_worker_name_for_worker_config(
existing_dict: dict[str, Any], worker_name: str
) -> dict[str, Any]:
existing_dict: Dict[str, Any], worker_name: str
) -> Dict[str, Any]:
"""Insert a given worker name into the worker's configuration dict.
Args:
@@ -523,7 +518,7 @@ def insert_worker_name_for_worker_config(
return dict_to_edit
def apply_requested_multiplier_for_worker(worker_types: list[str]) -> list[str]:
def apply_requested_multiplier_for_worker(worker_types: List[str]) -> List[str]:
"""
Apply multiplier(if found) by returning a new expanded list with some basic error
checking.
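
As a hedged illustration of the multiplier expansion, assuming a `worker_type:count` syntax (the exact parsing rules are not shown in this hunk):

```python
# Hypothetical sketch of the multiplier expansion, assuming a "worker_type:count"
# syntax (e.g. "event_persister:2"); the real parsing rules are not shown here.
from typing import List

def expand_multipliers(worker_types: List[str]) -> List[str]:
    expanded: List[str] = []
    for entry in worker_types:
        if ":" in entry:
            name, _, count = entry.partition(":")
            if not count.strip().isdigit():
                raise ValueError(f"Invalid multiplier in {entry!r}")
            expanded.extend([name.strip()] * int(count))
        else:
            expanded.append(entry)
    return expanded

print(expand_multipliers(["event_persister:2", "federation_sender"]))
# ['event_persister', 'event_persister', 'federation_sender']
```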
@@ -584,7 +579,7 @@ def is_sharding_allowed_for_worker_type(worker_type: str) -> bool:
def split_and_strip_string(
given_string: str, split_char: str, max_split: SupportsIndex = -1
) -> list[str]:
) -> List[str]:
"""
Helper to split a string on split_char and strip whitespace from each end of each
element.
@@ -609,12 +604,12 @@ def generate_base_homeserver_config() -> None:
# start.py already does this for us, so just call that.
# note that this script is copied in in the official, monolith dockerfile
os.environ["SYNAPSE_HTTP_PORT"] = str(MAIN_PROCESS_HTTP_LISTENER_PORT)
subprocess.run([sys.executable, "/start.py", "migrate_config"], check=True)
subprocess.run(["/usr/local/bin/python", "/start.py", "migrate_config"], check=True)
def parse_worker_types(
requested_worker_types: list[str],
) -> dict[str, set[str]]:
requested_worker_types: List[str],
) -> Dict[str, Set[str]]:
"""Read the desired list of requested workers and prepare the data for use in
generating worker config files while also checking for potential gotchas.
@@ -630,14 +625,14 @@ def parse_worker_types(
# A counter of worker_base_name -> int. Used for determining the name for a given
# worker when generating its config file, as each worker's name is just
# worker_base_name followed by instance number
worker_base_name_counter: dict[str, int] = defaultdict(int)
worker_base_name_counter: Dict[str, int] = defaultdict(int)
# Similar to above, but more finely grained. This is used to ensure we don't have
# more than a single worker in cases where multiples would be bad (e.g. presence).
worker_type_shard_counter: dict[str, int] = defaultdict(int)
worker_type_shard_counter: Dict[str, int] = defaultdict(int)
# The final result of all this processing
dict_to_return: dict[str, set[str]] = {}
dict_to_return: Dict[str, Set[str]] = {}
# Handle any multipliers requested for given workers.
multiple_processed_worker_types = apply_requested_multiplier_for_worker(
@@ -681,7 +676,7 @@ def parse_worker_types(
# Split the worker_type_string on "+", remove whitespace from ends then make
# the list a set so it's deduplicated.
worker_types_set: set[str] = set(
worker_types_set: Set[str] = set(
split_and_strip_string(worker_type_string, "+")
)
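
A tiny illustration of the split-and-deduplicate step described above, assuming `split_and_strip_string` behaves as its docstring says:

```python
# Illustration only: split a "+"-joined worker type string, strip whitespace,
# and deduplicate via a set.
worker_type_string = "client_reader + client_reader + federation_reader"
worker_types_set = {part.strip() for part in worker_type_string.split("+")}
print(worker_types_set)  # -> {'client_reader', 'federation_reader'} (order may vary)
```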
@@ -740,7 +735,7 @@ def generate_worker_files(
environ: Mapping[str, str],
config_path: str,
data_dir: str,
requested_worker_types: dict[str, set[str]],
requested_worker_types: Dict[str, Set[str]],
) -> None:
"""Read the desired workers(if any) that is passed in and generate shared
homeserver, nginx and supervisord configs.
@@ -761,7 +756,7 @@ def generate_worker_files(
# First read the original config file and extract the listeners block. Then we'll
# add another listener for replication. Later we'll write out the result to the
# shared config file.
listeners: list[Any]
listeners: List[Any]
if using_unix_sockets:
listeners = [
{
@@ -789,12 +784,12 @@ def generate_worker_files(
# base shared worker jinja2 template. This config file will be passed to all
# workers, included Synapse's main process. It is intended mainly for disabling
# functionality when certain workers are spun up, and adding a replication listener.
shared_config: dict[str, Any] = {"listeners": listeners}
shared_config: Dict[str, Any] = {"listeners": listeners}
# List of dicts that describe workers.
# We pass this to the Supervisor template later to generate the appropriate
# program blocks.
worker_descriptors: list[dict[str, Any]] = []
worker_descriptors: List[Dict[str, Any]] = []
# Upstreams for load-balancing purposes. This dict takes the form of the worker
# type to the ports of each worker. For example:
@@ -802,14 +797,14 @@ def generate_worker_files(
# worker_type: {1234, 1235, ...}}
# }
# and will be used to construct 'upstream' nginx directives.
nginx_upstreams: dict[str, set[int]] = {}
nginx_upstreams: Dict[str, Set[int]] = {}
# A map of: {"endpoint": "upstream"}, where "upstream" is a str representing what
# will be placed after the proxy_pass directive. The main benefit to representing
# this data as a dict over a str is that we can easily deduplicate endpoints
# across multiple instances of the same worker. The final rendering will be combined
# with nginx_upstreams and placed in /etc/nginx/conf.d.
nginx_locations: dict[str, str] = {}
nginx_locations: Dict[str, str] = {}
# Create the worker configuration directory if it doesn't already exist
os.makedirs("/conf/workers", exist_ok=True)
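
For illustration, a hypothetical rendering of these two dicts into nginx directives; the actual jinja2 templates are not shown in this hunk, so the output format is an assumption:

```python
# Hypothetical rendering of nginx_upstreams and nginx_locations into nginx
# directives. The real templates are not shown in this hunk; the exact output
# format here is an assumption for illustration only.
from typing import Dict, Set

nginx_upstreams: Dict[str, Set[int]] = {"client_reader": {8081, 8082}}
nginx_locations: Dict[str, str] = {
    "~* ^/_matrix/client/(r0|v3|unstable)/sync$": "http://client_reader",
}

for upstream_name, ports in nginx_upstreams.items():
    servers = "\n".join(f"    server localhost:{port};" for port in sorted(ports))
    print(f"upstream {upstream_name} {{\n{servers}\n}}")

for endpoint, upstream in nginx_locations.items():
    print(f"location {endpoint} {{\n    proxy_pass {upstream};\n}}")
```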
@@ -843,7 +838,7 @@ def generate_worker_files(
# yaml config file
for worker_name, worker_types_set in requested_worker_types.items():
# The collected and processed data will live here.
worker_config: dict[str, Any] = {}
worker_config: Dict[str, Any] = {}
# Merge all worker config templates for this worker into a single config
for worker_type in worker_types_set:
@@ -873,13 +868,6 @@ def generate_worker_files(
else:
healthcheck_urls.append("http://localhost:%d/health" % (worker_port,))
# Special case for event_persister: those are just workers that write to
# the `events` stream. For other workers, the worker name is the same as the
# name of the stream they write to, but for some reason that is not the
# case for event_persister.
if "event_persister" in worker_types_set:
worker_types_set.add("events")
# Update the shared config with sharding-related options if necessary
add_worker_roles_to_shared_config(
shared_config, worker_types_set, worker_name, worker_port
@@ -1010,7 +998,6 @@ def generate_worker_files(
"/healthcheck.sh",
healthcheck_urls=healthcheck_urls,
)
os.chmod("/healthcheck.sh", 0o755)
# Ensure the logging directory exists
log_dir = data_dir + "/logs"
@@ -1026,7 +1013,7 @@ def generate_worker_log_config(
Returns: the path to the generated file
"""
# Check whether we should write worker logs to disk, in addition to the console
extra_log_template_args: dict[str, str | None] = {}
extra_log_template_args: Dict[str, Optional[str]] = {}
if environ.get("SYNAPSE_WORKERS_WRITE_LOGS_TO_DISK"):
extra_log_template_args["LOG_FILE_PATH"] = f"{data_dir}/logs/{worker_name}.log"
@@ -1050,7 +1037,7 @@ def generate_worker_log_config(
return log_config_filepath
def main(args: list[str], environ: MutableMapping[str, str]) -> None:
def main(args: List[str], environ: MutableMapping[str, str]) -> None:
parser = ArgumentParser()
parser.add_argument(
"--generate-only",
@@ -1084,7 +1071,7 @@ def main(args: list[str], environ: MutableMapping[str, str]) -> None:
if not worker_types_env:
# No workers, just the main process
worker_types = []
requested_worker_types: dict[str, Any] = {}
requested_worker_types: Dict[str, Any] = {}
else:
# Split type names by comma, ignoring whitespace.
worker_types = split_and_strip_string(worker_types_env, ",")
@@ -1112,13 +1099,6 @@ def main(args: list[str], environ: MutableMapping[str, str]) -> None:
else:
log("Could not find %s, will not use" % (jemallocpath,))
# Empty strings are falsy in Python so this default is fine. We just can't have these
# be undefined because supervisord will complain about our
# `%(ENV_SYNAPSE_HTTP_PROXY)s` usage.
environ.setdefault("SYNAPSE_HTTP_PROXY", "")
environ.setdefault("SYNAPSE_HTTPS_PROXY", "")
environ.setdefault("SYNAPSE_NO_PROXY", "")
# Start supervisord, which will start Synapse, all of the configured worker
# processes, redis, nginx etc. according to the config we created above.
log("Starting supervisord")


@@ -3,14 +3,14 @@
#
# Used by `complement.sh`. Not suitable for production use.
ARG PYTHON_VERSION=3.10
ARG PYTHON_VERSION=3.9
###
### Stage 0: generate requirements.txt
###
# We hardcode the use of Debian trixie here because this could change upstream
# and other Dockerfiles used for testing are expecting trixie.
FROM docker.io/library/python:${PYTHON_VERSION}-slim-trixie
# We hardcode the use of Debian bookworm here because this could change upstream
# and other Dockerfiles used for testing are expecting bookworm.
FROM docker.io/library/python:${PYTHON_VERSION}-slim-bookworm
# Install Rust and other dependencies (stolen from normal Dockerfile)
# install the OS build deps


@@ -10,9 +10,6 @@
# '-W interactive' is a `mawk` extension which disables buffering on stdout and sets line-buffered reads on
# stdin. The effect is that the output is flushed after each line, rather than being batched, which helps reduce
confusion due to the interleaving of the different processes.
prefixer() {
mawk -W interactive '{printf("%s | %s\n", ENVIRON["SUPERVISOR_PROCESS_NAME"], $0); fflush() }'
}
exec 1> >(prefixer)
exec 2> >(prefixer >&2)
exec 1> >(awk -W interactive '{print "'"${SUPERVISOR_PROCESS_NAME}"' | "$0 }' >&1)
exec 2> >(awk -W interactive '{print "'"${SUPERVISOR_PROCESS_NAME}"' | "$0 }' >&2)
exec "$@"


@@ -6,7 +6,7 @@ import os
import platform
import subprocess
import sys
from typing import Any, Mapping, MutableMapping, NoReturn
from typing import Any, Dict, List, Mapping, MutableMapping, NoReturn, Optional
import jinja2
@@ -22,11 +22,6 @@ def error(txt: str) -> NoReturn:
def flush_buffers() -> None:
"""
Python's `print()` buffers output by default, typically waiting until ~8KB
accumulates. This method can be used to flush the buffers so we can see the output
of any print statements so far.
"""
sys.stdout.flush()
sys.stderr.flush()
@@ -50,7 +45,7 @@ def generate_config_from_template(
config_dir: str,
config_path: str,
os_environ: Mapping[str, str],
ownership: str | None,
ownership: Optional[str],
) -> None:
"""Generate a homeserver.yaml from environment variables
@@ -69,7 +64,7 @@ def generate_config_from_template(
)
# populate some params from data files (if they exist, else create new ones)
environ: dict[str, Any] = dict(os_environ)
environ: Dict[str, Any] = dict(os_environ)
secrets = {
"registration": "SYNAPSE_REGISTRATION_SHARED_SECRET",
"macaroon": "SYNAPSE_MACAROON_SECRET_KEY",
@@ -147,7 +142,7 @@ def generate_config_from_template(
subprocess.run(args, check=True)
def run_generate_config(environ: Mapping[str, str], ownership: str | None) -> None:
def run_generate_config(environ: Mapping[str, str], ownership: Optional[str]) -> None:
"""Run synapse with a --generate-config param to generate a template config file
Args:
@@ -200,7 +195,7 @@ def run_generate_config(environ: Mapping[str, str], ownership: str | None) -> No
subprocess.run(args, check=True)
def main(args: list[str], environ: MutableMapping[str, str]) -> None:
def main(args: List[str], environ: MutableMapping[str, str]) -> None:
mode = args[1] if len(args) > 1 else "run"
# if we were given an explicit user to switch to, do so


@@ -63,18 +63,6 @@ mdbook serve
The URL at which the docs can be viewed will be logged.
## Synapse configuration documentation
The [Configuration
Manual](https://element-hq.github.io/synapse/latest/usage/configuration/config_documentation.html)
page is generated from a YAML file,
[schema/synapse-config.schema.yaml](../schema/synapse-config.schema.yaml). To
add new options or modify existing ones, first edit that file, then run
[scripts-dev/gen_config_documentation.py](../scripts-dev/gen_config_documentation.py)
to generate an updated Configuration Manual markdown file.
Build the book as described above to preview it in a web browser.
## Configuration and theming
The look and behaviour of the website is configured by the [book.toml](../book.toml) file


@@ -5,7 +5,6 @@
# Setup
- [Installation](setup/installation.md)
- [Security](setup/security.md)
- [Using Postgres](postgres.md)
- [Configuring a Reverse Proxy](reverse_proxy.md)
- [Configuring a Forward/Outbound Proxy](setup/forward_proxy.md)
@@ -50,18 +49,14 @@
- [Background update controller callbacks](modules/background_update_controller_callbacks.md)
- [Account data callbacks](modules/account_data_callbacks.md)
- [Add extra fields to client events unsigned section callbacks](modules/add_extra_fields_to_client_events_unsigned.md)
- [Media repository callbacks](modules/media_repository_callbacks.md)
- [Ratelimit callbacks](modules/ratelimit_callbacks.md)
- [Porting a legacy module to the new interface](modules/porting_legacy_module.md)
- [Workers](workers.md)
- [Using `synctl` with Workers](synctl_workers.md)
- [Systemd](systemd-with-workers/README.md)
- [Administration](usage/administration/README.md)
- [Backups](usage/administration/backups.md)
- [Admin API](usage/administration/admin_api/README.md)
- [Account Validity](admin_api/account_validity.md)
- [Background Updates](usage/administration/admin_api/background_updates.md)
- [Fetch Event](admin_api/fetch_event.md)
- [Event Reports](admin_api/event_reports.md)
- [Experimental Features](admin_api/experimental_features.md)
- [Media](admin_api/media_admin_api.md)
@@ -70,13 +65,11 @@
- [Registration Tokens](usage/administration/admin_api/registration_tokens.md)
- [Manipulate Room Membership](admin_api/room_membership.md)
- [Rooms](admin_api/rooms.md)
- [Scheduled tasks](admin_api/scheduled_tasks.md)
- [Server Notices](admin_api/server_notices.md)
- [Statistics](admin_api/statistics.md)
- [Users](admin_api/user_admin_api.md)
- [Server Version](admin_api/version_api.md)
- [Federation](usage/administration/admin_api/federation.md)
- [Client-Server API Extensions](admin_api/client_server_api_extensions.md)
- [Manhole](manhole.md)
- [Monitoring](metrics-howto.md)
- [Reporting Homeserver Usage Statistics](usage/administration/monitoring/reporting_homeserver_usage_statistics.md)
@@ -117,8 +110,6 @@
- [The Auth Chain Difference Algorithm](auth_chain_difference_algorithm.md)
- [Media Repository](media_repository.md)
- [Room and User Statistics](room_and_user_statistics.md)
- [Releasing]()
- [Release Notes Review Checklist](development/internal_documentation/release_notes_review_checklist.md)
- [Scripts]()
# Other


@@ -1,67 +0,0 @@
# Client-Server API Extensions
Server administrators can set special account data to change how the Client-Server API behaves for
their clients. Setting the account data, or having it already set, as a non-admin has no effect.
All configuration options can be set through the `io.element.synapse.admin_client_config` global
account data on the admin's user account.
Example:
```
PUT /_matrix/client/v3/user/{adminUserId}/account_data/io.element.synapse.admin_client_config
{
"return_soft_failed_events": true
}
```
## See soft failed events
Learn more about soft failure from [the spec](https://spec.matrix.org/v1.14/server-server-api/#soft-failure).
To receive soft failed events in APIs like `/sync` and `/messages`, set `return_soft_failed_events`
to `true` in the admin client config. When `false`, the normal behaviour of these endpoints is to
exclude soft failed events.
**Note**: If the policy server flagged the event as spam and that caused soft failure, that will be indicated
in the event's `unsigned` content like so:
```json
{
"type": "m.room.message",
"other": "event_fields_go_here",
"unsigned": {
"io.element.synapse.soft_failed": true,
"io.element.synapse.policy_server_spammy": true
}
}
```
Default: `false`
## See events marked spammy by policy servers
Learn more about policy servers from [MSC4284](https://github.com/matrix-org/matrix-spec-proposals/pull/4284).
Similar to `return_soft_failed_events`, clients logged in with admin accounts can see events which were
flagged by the policy server as spammy (and thus soft failed) by setting `return_policy_server_spammy_events`
to `true`.
`return_policy_server_spammy_events` may be `true` while `return_soft_failed_events` is `false` to only see
policy server-flagged events. When `return_soft_failed_events` is `true` however, `return_policy_server_spammy_events`
is always `true`.
Events which were flagged by the policy server are marked as `io.element.synapse.policy_server_spammy` in the
event's `unsigned` content, like so:
```json
{
"type": "m.room.message",
"other": "event_fields_go_here",
"unsigned": {
"io.element.synapse.soft_failed": true,
"io.element.synapse.policy_server_spammy": true
}
}
```
Default: `true` if `return_soft_failed_events` is `true`, otherwise `false`
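
For completeness, a minimal Python sketch of setting this account data via the endpoint shown at the top of this page (the homeserver URL, admin user ID and access token are placeholders):

```python
# Hypothetical example of setting the admin client config described above.
# The homeserver URL, admin user ID and access token are placeholders.
import requests

homeserver = "https://matrix.example.com"
admin_user_id = "@admin:example.com"
access_token = "ADMIN_ACCESS_TOKEN"

resp = requests.put(
    f"{homeserver}/_matrix/client/v3/user/{admin_user_id}"
    "/account_data/io.element.synapse.admin_client_config",
    headers={"Authorization": f"Bearer {access_token}"},
    json={"return_soft_failed_events": True},
)
resp.raise_for_status()
```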


@@ -60,11 +60,10 @@ paginate through.
anything other than the return value of `next_token` from a previous call. Defaults to `0`.
* `dir`: string - Direction of event report order. Whether to fetch the most recent
first (`b`) or the oldest first (`f`). Defaults to `b`.
* `user_id`: optional string - Filter by the user ID of the reporter. This is the user who reported the event
and wrote the reason.
* `room_id`: optional string - Filter by room id.
* `event_sender_user_id`: optional string - Filter by the sender of the reported event. This is the user who
the report was made against.
* `user_id`: string - Is optional and filters to only return users with user IDs that
contain this value. This is the user who reported the event and wrote the reason.
* `room_id`: string - Is optional and filters to only return rooms with room IDs that
contain this value.
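
For illustration, a hedged Python sketch of querying with these filters; the `/_synapse/admin/v1/event_reports` path is assumed, since the request line is not shown in this excerpt, and the homeserver URL and token are placeholders:

```python
# Hypothetical example of querying event reports with the filters described above.
# The endpoint path is assumed; homeserver URL and token are placeholders.
import requests

resp = requests.get(
    "https://matrix.example.com/_synapse/admin/v1/event_reports",
    headers={"Authorization": "Bearer ADMIN_ACCESS_TOKEN"},
    params={
        "from": 0,
        "limit": 10,
        "dir": "b",  # most recent first
        "user_id": "@reporter:example.com",
        "room_id": "!someroom:example.com",
    },
)
resp.raise_for_status()
print(resp.json())
```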
**Response**
@@ -117,6 +116,7 @@ It returns a JSON body like the following:
"hashes": {
"sha256": "xK1//xnmvHJIOvbgXlkI8eEqdvoMmihVDJ9J4SNlsAw"
},
"origin": "matrix.org",
"origin_server_ts": 1592291711430,
"prev_events": [
"$YK4arsKKcc0LRoe700pS8DSjOvUT4NDv0HfInlMFw2M"


@@ -5,7 +5,6 @@ basis. The currently supported features are:
- [MSC3881](https://github.com/matrix-org/matrix-spec-proposals/pull/3881): enable remotely toggling push notifications
for another client
- [MSC3575](https://github.com/matrix-org/matrix-spec-proposals/pull/3575): enable experimental sliding sync support
- [MSC4222](https://github.com/matrix-org/matrix-spec-proposals/pull/4222): adding `state_after` to sync v2
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).


@@ -1,53 +0,0 @@
# Fetch Event API
The fetch event API allows admins to fetch an event regardless of their membership in the room it
originated in.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).
Request:
```http
GET /_synapse/admin/v1/fetch_event/<event_id>
```
The API returns a JSON body like the following:
Response:
```json
{
"event": {
"auth_events": [
"$WhLChbYg6atHuFRP7cUd95naUtc8L0f7fqeizlsUVvc",
"$9Wj8dt02lrNEWweeq-KjRABUYKba0K9DL2liRvsAdtQ",
"$qJxBFxBt8_ODd9b3pgOL_jXP98S_igc1_kizuPSZFi4"
],
"content": {
"body": "Hey now",
"msgtype": "m.text"
},
"depth": 6,
"event_id": "$hJ_kcXbVMcI82JDrbqfUJIHu61tJD86uIFJ_8hNHi7s",
"hashes": {
"sha256": "LiNw8DtrRVf55EgAH8R42Wz7WCJUqGsPt2We6qZO5Rg"
},
"origin_server_ts": 799,
"prev_events": [
"$cnSUrNMnC3Ywh9_W7EquFxYQjC_sT3BAAVzcUVxZq1g"
],
"room_id": "!aIhKToCqgPTBloWMpf:test",
"sender": "@user:test",
"signatures": {
"test": {
"ed25519:a_lPym": "7mqSDwK1k7rnw34Dd8Fahu0rhPW7jPmcWPRtRDoEN9Yuv+BCM2+Rfdpv2MjxNKy3AYDEBwUwYEuaKMBaEMiKAQ"
}
},
"type": "m.room.message",
"unsigned": {
"age_ts": 799
}
}
}
```


@@ -39,67 +39,6 @@ the use of the
[List media uploaded by a user](user_admin_api.md#list-media-uploaded-by-a-user)
Admin API.
## Query a piece of media by ID
This API returns information about a piece of local or cached remote media given the origin server name and media id. If
information is requested for remote media which is not cached the endpoint will return 404.
Request:
```http
GET /_synapse/admin/v1/media/<origin>/<media_id>
```
The API returns a JSON body with media info like the following:
Response:
```json
{
"media_info": {
"media_origin": "remote.com",
"user_id": null,
"media_id": "sdginwegWEG",
"media_type": "img/png",
"media_length": 67,
"upload_name": "test.png",
"created_ts": 300,
"filesystem_id": "wgeweg",
"url_cache": null,
"last_access_ts": 400,
"quarantined_by": null,
"authenticated": false,
"safe_from_quarantine": null,
"sha256": "ebf4f635a17d10d6eb46ba680b70142419aa3220f228001a036d311a22ee9d2a"
}
}
```
## Listing all quarantined media
This API returns a list of all quarantined media on the server. It is paginated, and can be scoped to either local or
remote media. Note that the pagination values are also scoped to the request parameters - changing the parameters while
keeping the same pagination values may produce unexpected results.
Request:
```http
GET /_synapse/admin/v1/media/quarantined?from=0&limit=100&kind=local
```
`from` and `limit` are optional parameters, and default to `0` and `100` respectively. They are the row index and number
of rows to return - they are not timestamps.
`kind` *MUST* either be `local` or `remote`.
The API returns a JSON body containing MXC URIs for the quarantined media, like the following:
```json
{
"media": [
"mxc://localhost/xwvutsrqponmlkjihgfedcba",
"mxc://localhost/abcdefghijklmnopqrstuvwx"
]
}
```
# Quarantine media
Quarantining media means that it is marked as inaccessible to users. It applies
@@ -107,28 +46,6 @@ to any local media, and any locally-cached copies of remote media.
The media file itself (and any thumbnails) is not deleted from the server.
Since Synapse 1.128.0, hashes of uploaded media are tracked. If this media
is quarantined, Synapse will:
- Quarantine any media with a matching hash that has already been uploaded.
- Quarantine any future media.
- Quarantine any existing cached remote media.
- Quarantine any future remote media.
## Downloading quarantined media
Normally, when media is quarantined, it will return a 404 error when downloaded.
Admins can bypass this by adding `?admin_unsafely_bypass_quarantine=true`
to the [normal download URL](https://spec.matrix.org/v1.16/client-server-api/#get_matrixclientv1mediadownloadservernamemediaid).
Bypassing the quarantine check is not recommended. Media is typically quarantined
to prevent harmful content from being served to users, which includes admins. Only
set the bypass parameter if you intentionally want to access potentially harmful
content.
Non-admin users cannot bypass quarantine checks, even when specifying the above
query parameter.
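
As a hedged illustration, downloading quarantined media with the bypass parameter might look like this in Python (the download path follows the linked spec; the homeserver URL, media origin/ID and token are placeholders):

```python
# Hypothetical example of an admin downloading quarantined media with the
# bypass parameter described above. The download path follows the linked spec;
# the homeserver URL, media coordinates and token are placeholders.
import requests

resp = requests.get(
    "https://matrix.example.com/_matrix/client/v1/media/download"
    "/example.com/abcdefghijklmnopqrstuvwx",
    headers={"Authorization": "Bearer ADMIN_ACCESS_TOKEN"},
    params={"admin_unsafely_bypass_quarantine": "true"},
)
resp.raise_for_status()
```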
## Quarantining media by ID
This API quarantines a single piece of local or remote media.


@@ -385,13 +385,6 @@ The API is:
GET /_synapse/admin/v1/rooms/<room_id>/state
```
**Parameters**
The following query parameter is available:
* `type` - The type of room state event to filter by, eg "m.room.create". If provided, only state events
of this type will be returned (regardless of their `state_key` value).
A response body like the following is returned:
```json
@@ -794,7 +787,6 @@ A response body like the following is returned:
"results": [
{
"delete_id": "delete_id1",
"room_id": "!roomid:example.com",
"status": "failed",
"error": "error message",
"shutdown_room": {
@@ -805,8 +797,7 @@ A response body like the following is returned:
}
}, {
"delete_id": "delete_id2",
"room_id": "!roomid:example.com",
"status": "active",
"status": "purging",
"shutdown_room": {
"kicked_users": [
"@foobar:example.com"
@@ -843,9 +834,7 @@ A response body like the following is returned:
```json
{
"status": "active",
"delete_id": "bHkCNQpHqOaFhPtK",
"room_id": "!roomid:example.com",
"status": "purging",
"shutdown_room": {
"kicked_users": [
"@foobar:example.com"
@@ -873,11 +862,10 @@ The following fields are returned in the JSON response body:
- `results` - An array of objects, each containing information about one task.
This field is omitted from the result when you query by `delete_id`.
Task objects contain the following fields:
- `delete_id` - The ID for this purge
- `room_id` - The ID of the room being deleted
- `delete_id` - The ID for this purge if you query by `room_id`.
- `status` - The status will be one of:
- `scheduled` - The deletion is waiting to be started
- `active` - The process is purging the room and event data from database.
- `shutting_down` - The process is removing users from the room.
- `purging` - The process is purging the room and event data from database.
- `complete` - The process has completed successfully.
- `failed` - The process is aborted, an error has occurred.
- `error` - A string that shows an error message if `status` is `failed`.
@@ -1115,76 +1103,3 @@ Example response:
]
}
```
# Admin Space Hierarchy Endpoint
This API allows an admin to fetch the space/room hierarchy for a given space,
returning details about that room and any children the room may have, paginating
over the space tree in a depth-first manner to locate child rooms. This is
functionally similar to the [CS Hierarchy](https://spec.matrix.org/v1.16/client-server-api/#get_matrixclientv1roomsroomidhierarchy) endpoint but does not check for
room membership when returning room summaries.
The endpoint does not query other servers over federation about remote rooms
that the server has not joined. This is a deliberate trade-off: while this
means it will leave some holes in the hierarchy that we could otherwise
sometimes fill in, it significantly improves the endpoint's response time and
the admin endpoint is designed for managing rooms local to the homeserver
anyway.
**Parameters**
The following query parameters are available:
* `from` - An optional pagination token, provided when there are more rooms to
return than the limit.
* `limit` - Maximum number of rooms to return. Must be a non-negative integer,
defaults to `50`.
* `max_depth` - The maximum depth in the tree to explore, must be a non-negative
integer. 0 would correspond to just the root room, 1 would include just the
root room's children, etc. If not provided will recurse into the space tree without limit.
Request:
```http
GET /_synapse/admin/v1/rooms/<room_id>/hierarchy
```
Response:
```json
{
"rooms":
[
{ "children_state": [
{
"content": {
"via": ["local_test_server"]
},
"origin_server_ts": 1500,
"sender": "@user:test",
"state_key": "!QrMkkqBSwYRIFNFCso:test",
"type": "m.space.child"
}
],
"name": "space room",
"guest_can_join": false,
"join_rule": "public",
"num_joined_members": 1,
"room_id": "!sPOpNyMHbZAoAOsOFL:test",
"room_type": "m.space",
"world_readable": false
},
{
"children_state": [],
"guest_can_join": true,
"join_rule": "invite",
"name": "nefarious",
"num_joined_members": 1,
"room_id": "!QrMkkqBSwYRIFNFCso:test",
"topic": "being bad",
"world_readable": false}
],
"next_batch": "KUYmRbeSpAoaAIgOKGgyaCEn"
}
```


@@ -1,54 +0,0 @@
# Show scheduled tasks
This API returns information about scheduled tasks.
To use it, you will need to authenticate by providing an `access_token`
for a server admin: see [Admin API](../usage/administration/admin_api/).
The api is:
```
GET /_synapse/admin/v1/scheduled_tasks
```
It returns a JSON body like the following:
```json
{
"scheduled_tasks": [
{
"id": "GSA124oegf1",
"action": "shutdown_room",
"status": "complete",
"timestamp_ms": 23423523,
"resource_id": "!roomid",
"result": "some result",
"error": null
}
]
}
```
**Query parameters:**
* `action_name`: string - Is optional. Returns only the scheduled tasks with the given action name.
* `resource_id`: string - Is optional. Returns only the scheduled tasks with the given resource id.
* `status`: string - Is optional. Returns only the scheduled tasks matching the given status, one of
- "scheduled" - Task is scheduled but not active
- "active" - Task is active and probably running, and if not will be run on next scheduler loop run
- "complete" - Task has completed successfully
- "failed" - Task is over and either returned a failed status, or had an exception
* `max_timestamp`: int - Is optional. Returns only the scheduled tasks with a timestamp earlier than the specified one.
**Response**
The following fields are returned in the JSON response body along with a `200` HTTP status code:
* `id`: string - ID of scheduled task.
* `action`: string - The name of the scheduled task's action.
* `status`: string - The status of the scheduled task.
* `timestamp_ms`: integer - The timestamp (in milliseconds since the unix epoch) of the given task - If the status is "scheduled" then this represents when it should be launched.
Otherwise it represents the last time this task got a change of state.
* `resource_id`: Optional string - The resource id of the scheduled task, if it possesses one
* `result`: Optional Json - Any result of the scheduled task, if given
* `error`: Optional string - If the task has the status "failed", the error associated with this failure


@@ -40,7 +40,6 @@ It returns a JSON body like the following:
"erased": false,
"shadow_banned": 0,
"creation_ts": 1560432506,
"last_seen_ts": 1732919539393,
"appservice_id": null,
"consent_server_notice_sent": null,
"consent_version": null,
@@ -56,8 +55,7 @@ It returns a JSON body like the following:
}
],
"user_type": null,
"locked": false,
"suspended": false
"locked": false
}
```
@@ -163,8 +161,7 @@ Body parameters:
- `locked` - **bool**, optional. If unspecified, locked state will be left unchanged.
- `user_type` - **string** or null, optional. If not provided, the user type will
not be changed. If `null` is given, the user type will be cleared.
Other allowed options are: `bot` and `support` and any extra values defined in the homeserver
[configuration](../usage/configuration/config_documentation.md#user_types).
Other allowed options are: `bot` and `support`.
## List Accounts
### List Accounts (V2)
@@ -415,32 +412,6 @@ The following actions are **NOT** performed. The list may be incomplete.
- Remove from monthly active users
- Remove user's consent information (consent version and timestamp)
## Suspend/Unsuspend Account
This API allows an admin to suspend/unsuspend an account. While an account is suspended, the user is
prohibited from sending invites, joining or knocking on rooms, sending messages, changing profile data, and redacting messages other than their own.
The api is:
```
PUT /_synapse/admin/v1/suspend/<user_id>
```
with a body of:
```json
{
"suspend": true
}
```
To unsuspend a user, use the same endpoint with a body of:
```json
{
"suspend": false
}
```
## Reset password
**Note:** This API is disabled when MSC3861 is enabled. [See #15582](https://github.com/matrix-org/synapse/pull/15582)
@@ -507,56 +478,7 @@ with a body of:
## List room memberships of a user
Gets a list of room memberships for a specific `user_id`. This
endpoint differs from
[`GET /_synapse/admin/v1/users/<user_id>/joined_rooms`](#list-joined-rooms-of-a-user)
in that it returns rooms with memberships other than "join".
The API is:
```
GET /_synapse/admin/v1/users/<user_id>/memberships
```
A response body like the following is returned:
```json
{
"memberships": {
"!DuGcnbhHGaSZQoNQR:matrix.org": "join",
"!ZtSaPCawyWtxfWiIy:matrix.org": "leave",
}
}
```
which is a list of room membership states for the given user. This endpoint can
be used with both local and remote users, with the caveat that the homeserver will
only be aware of the memberships for rooms that one of its local users has joined.
Remote user memberships may also be out of date if all local users have since left
a room. The homeserver will thus no longer receive membership updates about it.
The list includes rooms that the user has since left; other membership states (knock,
invite, etc.) are also possible.
Note that rooms will only disappear from this list if they are
[purged](./rooms.md#delete-room-api) from the homeserver.
**Parameters**
The following parameters should be set in the URL:
- `user_id` - fully qualified: for example, `@user:server.com`.
**Response**
The following fields are returned in the JSON response body:
- `memberships` - A map of `room_id` (string) to `membership` state (string).
## List joined rooms of a user
Gets a list of all `room_id` that a specific `user_id` is joined to and is a member of (participating in).
Gets a list of all `room_id` that a specific `user_id` is a member of.
The API is:
@@ -593,73 +515,6 @@ The following fields are returned in the JSON response body:
- `joined_rooms` - An array of `room_id`.
- `total` - Number of rooms.
## Get the number of invites sent by the user
Fetches the number of invites sent by the provided user ID across all rooms
after the given timestamp.
```
GET /_synapse/admin/v1/users/$user_id/sent_invite_count
```
**Parameters**
The following parameters should be set in the URL:
* `user_id`: fully qualified: for example, `@user:server.com`
The following should be set as query parameters in the URL:
* `from_ts`: int, required. A timestamp in ms from the unix epoch. Only
invites sent at or after the provided timestamp will be returned.
This works by comparing the provided timestamp to the `received_ts`
column in the `events` table.
Note: https://currentmillis.com/ is a useful tool for converting dates
into timestamps and vice versa.
A response body like the following is returned:
```json
{
"invite_count": 30
}
```
_Added in Synapse 1.122.0_
## Get the cumulative number of rooms a user has joined after a given timestamp
Fetches the number of rooms that the user joined after the given timestamp, even
if they have subsequently left/been banned from those rooms.
```
GET /_synapse/admin/v1/users/$user_id/cumulative_joined_room_count
```
**Parameters**
The following parameters should be set in the URL:
* `user_id`: fully qualified: for example, `@user:server.com`
The following should be set as query parameters in the URL:
* `from_ts`: int, required. A timestamp in ms from the unix epoch. Only
rooms joined at or after the provided timestamp will be counted.
This works by comparing the provided timestamp to the `received_ts`
column in the `events` table.
Note: https://currentmillis.com/ is a useful tool for converting dates
into timestamps and vice versa.
A response body like the following is returned:
```json
{
"cumulative_joined_room_count": 30
}
```
_Added in Synapse 1.122.0_
## Account Data
Gets information about account data for a specific `user_id`.
@@ -1004,8 +859,7 @@ A response body like the following is returned:
"last_seen_ip": "1.2.3.4",
"last_seen_user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
"last_seen_ts": 1474491775024,
"user_id": "<user_id>",
"dehydrated": false
"user_id": "<user_id>"
},
{
"device_id": "AUIECTSRND",
@@ -1013,8 +867,7 @@ A response body like the following is returned:
"last_seen_ip": "1.2.3.5",
"last_seen_user_agent": "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0",
"last_seen_ts": 1474491775025,
"user_id": "<user_id>",
"dehydrated": false
"user_id": "<user_id>"
}
],
"total": 2
@@ -1044,7 +897,6 @@ The following fields are returned in the JSON response body:
- `last_seen_ts` - The timestamp (in milliseconds since the unix epoch) when this
device was last seen. (May be a few minutes out of date, for efficiency reasons).
- `user_id` - Owner of device.
- `dehydrated` - Whether the device is a dehydrated device.
- `total` - Total number of user's devices.
@@ -1276,7 +1128,7 @@ See also the
## Controlling whether a user is shadow-banned
Shadow-banning is a useful tool for moderating malicious or egregiously abusive users.
A shadow-banned user receives successful responses to their client-server API requests,
but the events are not propagated into rooms. This can be an effective tool as it
(hopefully) takes longer for the user to realise they are being moderated before
@@ -1509,94 +1361,3 @@ Returns a `404` HTTP status code if no user was found, with a response body like
```
_Added in Synapse 1.72.0._
## Redact all the events of a user
This endpoint allows an admin to redact the events of a given user. There are no restrictions on
redactions for a local user. By default, we puppet the user who sent the message to redact it themselves.
Redactions for non-local users are issued using the admin user, and will fail in rooms where the
admin user is not admin/does not have the specified power level to issue redactions. An option
is provided to override the default and allow the admin to issue the redactions in all cases.
The API is
```
POST /_synapse/admin/v1/user/$user_id/redact
{
"rooms": ["!roomid1", "!roomid2"]
}
```
If an empty list is provided for `rooms`, all events in all the rooms the user is a member of will be redacted,
otherwise all the events in the rooms provided in the request will be redacted.
The API starts the redaction process running, and returns immediately with a JSON body with
a redact id which can be used to query the status of the redaction process:
```json
{
"redact_id": "<opaque id>"
}
```
**Parameters**
The following parameters should be set in the URL:
- `user_id` - The fully qualified MXID of the user: for example, `@user:server.com`.
The following JSON body parameter must be provided:
- `rooms` - A list of rooms to redact the user's events in. If an empty list is provided all events in all rooms
the user is a member of will be redacted
The following JSON body parameters are optional:
- `reason` - Reason the redaction is being requested, ie "spam", "abuse", etc. This will be included in each redaction event, and be visible to users.
- `limit` - a limit on the number of the user's events to search for ones that can be redacted (events are redacted newest to oldest) in each room, defaults to 1000 if not provided.
- `use_admin` - If set to `true`, the admin user is used to issue the redactions, rather than puppeting the user. Useful
when the admin is also the moderator of the rooms that require redactions. Note that the redactions will fail in rooms
where the admin does not have the sufficient power level to issue the redactions.
_Added in Synapse 1.116.0._
## Check the status of a redaction process
It is possible to query the status of the background task for redacting a user's events.
The status can be queried up to 24 hours after completion of the task,
or until Synapse is restarted (whichever happens first).
The API is:
```
GET /_synapse/admin/v1/user/redact_status/$redact_id
```
A response body like the following is returned:
```
{
"status": "active",
"failed_redactions": [],
}
```
**Parameters**
The following parameters should be set in the URL:
* `redact_id` - string - The ID for this redaction process, provided when the redaction was requested.
**Response**
The following fields are returned in the JSON response body:
- `status` - string - one of scheduled/active/completed/failed, indicating the status of the redaction job
- `failed_redactions` - dictionary - the keys of the dict are event ids the process was unable to redact, if any, and the values are
the corresponding error that caused the redaction to fail
_Added in Synapse 1.116.0._

File diff suppressed because it is too large

File diff suppressed because it is too large

Some files were not shown because too many files have changed in this diff