
Compare commits


337 Commits

Author SHA1 Message Date
Brendan Abolivier
7b8d654a61 Move the warning at the top of the release changes 2020-02-12 12:20:37 +00:00
Brendan Abolivier
fdb816713a 1.10.0 2020-02-12 12:19:19 +00:00
Richard van der Hoff
3dd2b5f5e3 bump the version of Alpine Linux used in the docker images (#6897) 2020-02-12 12:02:53 +00:00
Richard van der Hoff
856b2a9555 1.10.0rc5 2020-02-11 11:06:28 +00:00
Richard van der Hoff
78d170262c changelog wording 2020-02-11 10:53:25 +00:00
Richard van der Hoff
aa7e4291ee Update CHANGES.md 2020-02-11 10:51:13 +00:00
Richard van der Hoff
9e45d573d4 changelog formatting 2020-02-11 10:50:55 +00:00
Richard van der Hoff
3edc65dd24 1.10.0rc4 2020-02-11 10:43:16 +00:00
Matthew Hodgson
01209382fb filter out m.room.aliases from /sync state blocks (#6884)
We forgot to filter out aliases from /sync state blocks as well as the timeline.
2020-02-10 18:07:35 +00:00
Richard van der Hoff
3de57e7062 1.10.0rc3 2020-02-10 09:56:42 +00:00
Matthew Hodgson
8e64c5a24c filter out m.room.aliases from the CS API until a better solution is specced (#6878)
We're in the middle of properly mitigating spam caused by malicious aliases being added to a room. However, until this work fully lands, we temporarily filter out all m.room.aliases events from /sync and /messages on the CS API, to remove abusive aliases. This is considered acceptable as m.room.aliases events were never a reliable record of the given alias->id mapping and were purely informational, and in their current state do more harm than good.
2020-02-10 09:36:23 +00:00
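
> As a rough sketch of what this mitigation amounts to (illustrative only — the helper name and event shape below are assumptions, not Synapse's actual code, which applies the filter inside its /sync and /messages handlers):

```python
def strip_alias_events(events):
    # Temporarily drop every m.room.aliases event from CS-API
    # responses (/sync and /messages), per the commit above.
    return [ev for ev in events if ev.get("type") != "m.room.aliases"]
```
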
Richard van der Hoff
fe73f0d533 Update setuptools for python 3.5 tests (#6880)
Workaround for jaraco/zipp#40
2020-02-10 00:41:20 +00:00
Erik Johnston
f663118155 Update changelog 2020-02-06 10:52:25 +00:00
Erik Johnston
b5176166b7 Update changelog 2020-02-06 10:51:02 +00:00
Erik Johnston
4a50b674f2 Update changelog 2020-02-06 10:45:29 +00:00
Erik Johnston
6a7e90ad78 1.10.0rc2 2020-02-06 10:40:08 +00:00
Erik Johnston
a58860e480 Check sender_key matches on inbound encrypted events. (#6850)
If they don't then the device lists are probably out of sync.
2020-02-05 14:02:39 +00:00
Hubert Chathi
60d0672426 Merge pull request #6844 from matrix-org/uhoreg/cross_signing_fix_device_fed
add device signatures to device key query results
2020-02-05 10:54:49 +00:00
Erik Johnston
6475382d80 Fix detecting unknown devices from remote encrypted events. (#6848)
We were looking at the wrong event type (`m.room.encryption` vs
`m.room.encrypted`).

Also fixup the duplicate `EvenTypes` entries.

Introduced in #6776.
2020-02-04 17:25:54 +00:00
Erik Johnston
68ef7ebbef Update changelog 2020-01-31 15:45:08 +00:00
Erik Johnston
0f8ffa38b5 Fix link in upgrade.rst 2020-01-31 15:38:16 +00:00
Erik Johnston
ac0d45b78b 1.10.0rc1 2020-01-31 15:35:37 +00:00
Erik Johnston
83b0ea047b Fix deleting of stale marker for device lists (#6819)
We were in fact only deleting the stale marker when we got an incremental
update, rather than when we did a full resync.
2020-01-31 14:04:15 +00:00
Richard van der Hoff
7f93eb1903 pass room_version into compute_event_signature (#6807) 2020-01-31 13:47:43 +00:00
Richard van der Hoff
a5afdd15e5 Merge pull request #6806 from matrix-org/rav/redact_changes/3
Pass room_version into add_hashes_and_signatures
2020-01-31 10:57:03 +00:00
Richard van der Hoff
160522e32c Merge pull request #6820 from matrix-org/rav/get_room_version_id
Make `get_room_version` return a RoomVersion object
2020-01-31 10:56:42 +00:00
Richard van der Hoff
f6fa2c0b31 newsfile 2020-01-31 10:30:29 +00:00
Richard van der Hoff
08f41a6f05 Add get_room_version method
So that we can start factoring out some of this boilerplatey boilerplate.
2020-01-31 10:28:15 +00:00
Richard van der Hoff
d7bf793cc1 s/get_room_version/get_room_version_id/
... to make way for a forthcoming get_room_version which returns a RoomVersion
object.
2020-01-31 10:06:21 +00:00
Erik Johnston
7d846e8704 Fix bug with getting missing auth event during join 500'ed (#6810) 2020-01-31 09:49:13 +00:00
Richard van der Hoff
540c5e168b changelog 2020-01-30 22:15:50 +00:00
Richard van der Hoff
2a81393a4b Pass room_version into add_hashes_and_signatures 2020-01-30 22:15:50 +00:00
Richard van der Hoff
54f3f369bd Pass room_version into create_local_event_from_event_dict 2020-01-30 22:15:50 +00:00
Richard van der Hoff
ef6bdafb29 Store the room version in EventBuilder 2020-01-30 22:15:50 +00:00
Richard van der Hoff
46a446828d pass room version into FederationHandler.on_invite_request (#6805) 2020-01-30 22:13:02 +00:00
Erik Johnston
e0992fcc5b Log when we delete room in bg update (#6816) 2020-01-30 17:55:34 +00:00
Richard van der Hoff
184303b865 MSC2260: Block direct sends of m.room.aliases events (#6794)
as per MSC2260
2020-01-30 17:20:55 +00:00
Erik Johnston
57ad702af0 Background update to clean out rooms from current state (#6802) 2020-01-30 17:17:44 +00:00
Erik Johnston
b660327056 Resync remote device list when detected as stale. (#6786) 2020-01-30 17:06:38 +00:00
Erik Johnston
c3d4ad8afd Fix sending server up commands from workers (#6811)
Co-authored-by: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2020-01-30 16:42:11 +00:00
Erik Johnston
a5bab2d058 When server leaves room check for stale device lists. (#6801)
When a server leaves a room it may stop sharing a room with remote
users, and thus not get any updates to their device lists. So we need to
check for this case and delete those device lists from the cache.

We don't need to do this if we stop sharing a room because the remote
user leaves the room, because we track that case via looking at
membership changes.
2020-01-30 16:10:30 +00:00
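
> A minimal sketch of the check described above, assuming hypothetical store methods (`get_remote_users_in_room`, `get_rooms_shared_with`, `mark_remote_user_device_cache_as_stale` — names are illustrative, not Synapse's exact API):

```python
async def on_server_left_room(store, room_id):
    # Once our server has left the room, remote users we no longer
    # share *any* room with stop sending us device-list updates, so
    # their cached device lists can't be trusted any more.
    for user_id in await store.get_remote_users_in_room(room_id):
        if not await store.get_rooms_shared_with(user_id):
            await store.mark_remote_user_device_cache_as_stale(user_id)
```
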
Erik Johnston
c80a9fe13d When a client asks for remote keys check if should resync. (#6797)
If we detect that the remote users' keys may have changed then we should
attempt to resync against the remote server rather than using the
(potentially) stale local cache.
2020-01-30 15:06:58 +00:00
Richard van der Hoff
5a246611e3 Type definitions for use in refactoring for redaction changes (#6803)
* Bump signedjson to 1.1

... so that we can use the type definitions

* Fix breakage caused by upgrade to signedjson 1.1

Thanks, @illicitonion...
2020-01-30 11:25:59 +00:00
Erik Johnston
a855b7c3a8 Remove unused DeviceRow class (#6800) 2020-01-29 12:06:31 +00:00
Richard van der Hoff
281551f720 Merge pull request #6790 from matrix-org/rav/msc2260.1
MSC2260: change the default power level for m.room.aliases events
2020-01-29 11:53:11 +00:00
Richard van der Hoff
750d4d7599 changelog 2020-01-29 11:52:52 +00:00
Richard van der Hoff
dcd85b976d Make /directory/room/<alias> handle restrictive power levels
Fixes a bug where the alias would be added, but `PUT /directory/room/<alias>`
would return a 403.
2020-01-29 11:52:52 +00:00
Richard van der Hoff
b36095ae5c Set the PL for aliases events to 0. 2020-01-29 11:52:52 +00:00
Richard van der Hoff
ee42a5513e Factor out a copy_power_levels_contents method
I'm going to need another copy (hah!) of this.
2020-01-29 11:52:52 +00:00
Erik Johnston
6b9e1014cf Fix race in federation sender that delayed device updates. (#6799)
We were sending device updates down both the federation stream and
device streams. This meant there was a race if the federation sender
worker processed the federation stream first, as when the sender checked
if there were new device updates the slaved ID generator hadn't been
updated with the new stream IDs and so returned nothing.

This situation is correctly handled by events/receipts/etc by not
sending updates down the federation stream and instead having the
federation sender worker listen on the other streams and poke the
transaction queues as appropriate.
2020-01-29 11:23:01 +00:00
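
> The fix pattern, sketched as a toy (the class and the `get_device_updates_up_to` store method are stand-ins, not Synapse's real replication API): the federation stream no longer carries the device updates themselves; the sender is merely woken once the device stream has replicated, and then reads up to the position it has actually seen, so it can no longer race ahead of the slaved ID generator.

```python
import asyncio


class ToyFederationSender:
    def __init__(self, store):
        self.store = store
        self.device_stream_pos = 0  # slaved ID generator position
        self.woken = asyncio.Event()

    def on_device_stream_advance(self, new_pos):
        # Called after the replicated device stream has caught up, so
        # a wake-up can never arrive before the rows it refers to.
        self.device_stream_pos = new_pos
        self.woken.set()

    async def run(self):
        while True:
            await self.woken.wait()
            self.woken.clear()
            # Hypothetical stand-in for reading pending updates from
            # the device stream, up to the replicated position.
            for update in self.store.get_device_updates_up_to(self.device_stream_pos):
                print("would send", update)
```
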
Erik Johnston
611215a49c Delete current state when server leaves a room (#6792)
Otherwise it's just stale data, which may get deleted later anyway so
can't be relied on. It's also a bit of a shotgun if we're trying to get
the current state of a room we're not in.
2020-01-29 11:01:32 +00:00
Erik Johnston
2cad8baa70 Fix bug when querying remote user keys that require a resync. (#6796)
We ended up only returning a single device, rather than all of them.
2020-01-29 09:56:41 +00:00
Erik Johnston
fcfb591b31 Fix outbound federation request metrics (#6795) 2020-01-28 18:59:48 +00:00
Richard van der Hoff
cc109b79dd Merge pull request #6787 from matrix-org/rav/msc2260
Implement updated auth rules from MSC2260
2020-01-28 15:11:22 +00:00
Richard van der Hoff
a1f307f7d1 fix bad variable ref 2020-01-28 14:55:22 +00:00
Erik Johnston
e17a110661 Detect unknown remote devices and mark cache as stale (#6776)
We just mark the fact that the cache may be stale in the database for
now.
2020-01-28 14:43:21 +00:00
Richard van der Hoff
fbe0a82c0d update changelog 2020-01-28 14:20:10 +00:00
Richard van der Hoff
99e205fc21 changelog 2020-01-28 14:20:10 +00:00
Richard van der Hoff
49d3bca37b Implement updated auth rules from MSC2260 2020-01-28 14:20:10 +00:00
Richard van der Hoff
a8ce7aeb43 Pass room version object into event_auth.check and check_redaction (#6788)
These are easier to work with than the strings and we normally have one around.

This fixes `FederationHandler._persist_auth_tree` which was passing a
RoomVersion object into event_auth.check instead of a string.
2020-01-28 14:18:29 +00:00
Erik Johnston
02b44db922 Warn if postgres database has non-C locale. (#6734)
Using a non-C locale can cause issues when upgrading the OS.
2020-01-28 13:44:21 +00:00
Erik Johnston
33f904835a Merge branch 'master' into develop 2020-01-28 13:39:39 +00:00
Erik Johnston
77d9357226 1.9.1 2020-01-28 13:09:36 +00:00
Erik Johnston
bdbeeb94ec Fix setting mau_limit_reserved_threepids config (#6793)
Calling the invalidation function during initialisation of the data
stores introduces a circular dependency, causing Synapse to fail to
start.
2020-01-28 13:05:24 +00:00
Erik Johnston
8df862e45d Add rooms.room_version column (#6729)
This is so that we don't have to rely on pulling it out of the `current_state_events` table.
2020-01-27 14:30:57 +00:00
Erik Johnston
d5275fc55f Propagate cache invalidates from workers to other workers. (#6748)
Previously, if a worker invalidated a cache the invalidation would be streamed to master, which then didn't forward it on to other workers.
2020-01-27 13:47:50 +00:00
Brendan Abolivier
f74d178b17 Merge pull request #6775 from matrix-org/jaywink/worker-docs-tweaks
Clarifications to the workers documentation
2020-01-27 12:30:54 +00:00
Jason Robinson
cf9d56e5cf Formatting of changelog
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
2020-01-27 14:09:59 +02:00
Jason Robinson
1fe5001369 Fix federation_reader listeners doc as per PR review
Signed-off-by: Jason Robinson <jasonr@matrix.org>
2020-01-27 10:21:25 +02:00
Andrew Morgan
9f7aaf90b5 Validate client_secret parameter (#6767) 2020-01-24 14:28:40 +00:00
Jason Robinson
aa6ad288f1 Clarifications to the workers documentation
* Add note that user_dir requires disabling user dir
  updates from the main synapse process.
* Add note that federation_reader should have
  the federation listener resource.

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2020-01-24 11:08:50 +02:00
Erik Johnston
fa4d609e20 Make 'event.redacts' never raise. (#6771)
There are quite a few places that we assume that a redaction event has a
corresponding `redacts` key, which is not always the case. So let's
cheekily make it so that event.redacts just returns None instead.
2020-01-23 15:19:03 +00:00
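
> A sketch of the change (class and attribute names here are illustrative, not Synapse's actual event classes):

```python
class Event:
    def __init__(self, event_dict):
        self._event_dict = event_dict

    @property
    def redacts(self):
        # A redaction event is not guaranteed to carry a "redacts"
        # key, so return None instead of raising KeyError.
        return self._event_dict.get("redacts")


assert Event({"type": "m.room.redaction"}).redacts is None
```
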
Brendan Abolivier
51fc3f693e Merge branch 'master' into develop 2020-01-23 13:45:23 +00:00
Brendan Abolivier
9bae740527 Fixup changelog 2020-01-23 13:13:19 +00:00
Brendan Abolivier
1755326d8a Fixup changelog 2020-01-23 13:11:07 +00:00
Brendan Abolivier
1dc5a791cf Fixup changelog 2020-01-23 12:59:29 +00:00
Brendan Abolivier
ba64c3b615 Merge branch 'release-v1.9.0' of github.com:matrix-org/synapse into release-v1.9.0 2020-01-23 12:58:03 +00:00
Brendan Abolivier
f3eac2b3e9 1.9.0 2020-01-23 12:57:55 +00:00
Richard van der Hoff
6b7462a13f a bit of debugging for media storage providers (#6757)
* a bit of debugging for media storage providers

* changelog
2020-01-23 12:11:44 +00:00
Richard van der Hoff
5bd3cb7260 Minor fixes to user admin api (#6761)
* don't insist on a password (this is valid if you have an SSO login)
* fix reference to undefined `requester`
2020-01-23 12:03:58 +00:00
Andrew Morgan
04345338e1 Merge branch 'release-v1.9.0' into develop 2020-01-23 11:38:29 +00:00
Andrew Morgan
d31f5f4d89 Update admin room docs with correct endpoints (#6770) 2020-01-23 11:37:26 +00:00
Andrew Morgan
ce84dd9e20 Remove unnecessary abstractions in admin handler (#6751) 2020-01-22 15:09:57 +00:00
Brendan Abolivier
33f7e5ce2a Fixup warning about workers changes 2020-01-22 14:49:21 +00:00
Brendan Abolivier
91085ef49e Add deprecation headers 2020-01-22 14:30:22 +00:00
Brendan Abolivier
ffa637050d Fixup changelog 2020-01-22 14:19:23 +00:00
Brendan Abolivier
0d0f32bc53 1.9.0rc1 2020-01-22 14:03:46 +00:00
Andrew Morgan
90a28fb475 Admin API to list, filter and sort rooms (#6720) 2020-01-22 13:36:43 +00:00
Brendan Abolivier
ae6cf586b0 Merge pull request #6764 from matrix-org/babolivier/fix-thumbnail
Fix typo in _select_thumbnail
2020-01-22 13:21:00 +00:00
Brendan Abolivier
6ae0c8db33 Lint + changelog 2020-01-22 12:38:18 +00:00
Brendan Abolivier
d9a8728b11 Remove unused import 2020-01-22 12:30:49 +00:00
Brendan Abolivier
67aa18e8dc Add tests for thumbnailing 2020-01-22 12:28:07 +00:00
Brendan Abolivier
ed83c3a018 Fix typo in _select_thumbnail 2020-01-22 12:27:42 +00:00
Andrew Morgan
aa9b00fb2f Fix and add test to deprecated quarantine media admin api (#6756) 2020-01-22 11:05:50 +00:00
Neil Johnson
5e52d8563b Allow monthly active user limiting support for worker mode, fixes #4639. (#6742) 2020-01-22 11:05:14 +00:00
Erik Johnston
5d7a6ad223 Allow streaming cache invalidate all to workers. (#6749) 2020-01-22 10:37:00 +00:00
Erik Johnston
2093f83ea0 Remove unused CI docker compose files (#6754)
These now exist in the pipelines repo.
2020-01-22 10:36:48 +00:00
Ivan Vilata-i-Balaguer
837f62266b Avoid attribute error when password_config present but empty (#6753)
The old statement returned `None` for such a `password_config` (like the one
created on first run), thus retrieval of the `pepper` key failed with
`AttributeError`.

Fixes #5315

Signed-off-by: Ivan Vilata i Balaguer <ivan@selidor.net>
2020-01-22 07:32:52 +00:00
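
> The failure mode and the fix pattern, sketched (assuming PyYAML; the key names match the commit above):

```python
import yaml

# An empty section in the YAML config parses to None, not {}:
config = yaml.safe_load("password_config:\n")
assert config["password_config"] is None

# Guard before key access, rather than calling .get() on None:
password_config = config.get("password_config") or {}
pepper = password_config.get("pepper", "")
```
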
Brendan Abolivier
07124d028d Port synapse_port_db to async/await (#6718)
* Raise an exception if there are pending background updates

So we return with a non-0 code

* Changelog

* Port synapse_port_db to async/await

* Port update_database to async/await

* Add version string to mocked homeservers

* Remove unused imports

* Convert overlooked bits to async/await

* Fixup logging contexts

* Fix imports

* Add a way to print an error without raising an exception

* Incorporate review
2020-01-21 19:04:58 +00:00
Erik Johnston
0e68760078 Add a DeltaState to track changes to be made to current state (#6716) 2020-01-20 18:07:20 +00:00
Erik Johnston
b0a66ab83c Fixup synapse.rest to pass mypy (#6732) 2020-01-20 17:38:21 +00:00
Erik Johnston
74b74462f1 Fix /events/:event_id deprecated API. (#6731) 2020-01-20 17:38:09 +00:00
Erik Johnston
0f6e525be3 Fixup synapse.api to pass mypy (#6733) 2020-01-20 17:34:13 +00:00
Erik Johnston
ceecedc68b Fix changing password via user admin API. (#6730) 2020-01-20 17:23:59 +00:00
Andrew Morgan
e9e066055f Fix empty account_validity config block (#6747) 2020-01-20 16:21:59 +00:00
Andrew Morgan
351fdfede6 Update changelog.d/6747.bugfix
Co-Authored-By: Erik Johnston <erik@matrix.org>
2020-01-20 15:58:44 +00:00
Erik Johnston
2f23eb27b3 Revert "Newsfile"
This reverts commit 11c23af465.
2020-01-20 15:12:58 +00:00
Erik Johnston
11c23af465 Newsfile 2020-01-20 15:11:38 +00:00
Andrew Morgan
026f4bdf3c Add changelog 2020-01-20 14:12:21 +00:00
Andrew Morgan
198d52da3a Fix empty account_validity config block 2020-01-20 14:05:29 +00:00
Brendan Abolivier
a17f64361c Add more logging around message retention policies support (#6717)
So we can debug issues like #6683 more easily
2020-01-17 20:51:44 +00:00
Erik Johnston
5909751936 Fix up changelog 2020-01-17 15:13:27 +00:00
Richard van der Hoff
0b885d62ef bump version to v1.9.0.dev2 2020-01-17 14:58:58 +00:00
Satsuki Yanagi
722b4f302d Fix syntax error in run_upgrade for schema 57 (#6728)
Fix #6727
Related #6655

Co-authored-by: Erik Johnston <erikj@jki.re>
2020-01-17 14:30:35 +00:00
Brendan Abolivier
3b72bb780a Merge pull request #6714 from matrix-org/babolivier/retention_select_event
Fix instantiation of message retention purge jobs
2020-01-17 14:23:51 +00:00
Richard van der Hoff
1dee1e900b bump version to v1.9.0.dev1 2020-01-17 10:44:12 +00:00
Richard van der Hoff
59dc87c618 Merge pull request #6724 from matrix-org/rav/log_saml_attributes
Log saml assertions rather than the whole response
2020-01-17 10:33:24 +00:00
Richard van der Hoff
2b6a77fcde Delegate remote_user_id mapping to the saml mapping provider (#6723)
Turns out that figuring out a remote user id for the SAML user isn't quite as obvious as it seems. Factor it out to the SamlMappingProvider so that it's easy to control.
2020-01-17 10:32:47 +00:00
Erik Johnston
a8a50f5b57 Wake up transaction queue when remote server comes back online (#6706)
This will be used to retry outbound transactions to a remote server if
we think it might have come back up.
2020-01-17 10:27:19 +00:00
Richard van der Hoff
5ce0b17e38 Clarify the account_validity and email sections of the sample configuration. (#6685)
Generally try to make this more comprehensible, and make it match the
conventions.

I've removed the documentation for all the settings which allow you to change
the names of the template files, because I can't really see why they are
useful.
2020-01-17 10:04:15 +00:00
Richard van der Hoff
95c5b9bfb3 changelog 2020-01-16 22:29:06 +00:00
Richard van der Hoff
acc7820574 Log saml assertions rather than the whole response
... since the whole response is huge.

We even need to break up the assertions, since kibana otherwise truncates them.
2020-01-16 22:26:34 +00:00
Richard van der Hoff
14d8f342d5 move batch_iter to a separate module 2020-01-16 22:25:32 +00:00
Brendan Abolivier
4fb3cb208a Clarify changelog 2020-01-16 20:27:07 +00:00
Brendan Abolivier
dac148341b Fixup diff 2020-01-16 20:25:09 +00:00
Brendan Abolivier
842c2cfbf1 Remove get_room_event_after_stream_ordering entirely 2020-01-16 20:24:17 +00:00
Erik Johnston
d386f2f339 Add StateMap type alias (#6715) 2020-01-16 13:31:22 +00:00
Brendan Abolivier
e601f35d3b Lint 2020-01-16 09:55:11 +00:00
Andrew Morgan
7b14c4a018 Add tips for the changelog to the pull request template (#6663) 2020-01-16 09:46:36 +00:00
Neil Johnson
38e0e59f42 Add org.matrix.e2e_cross_signing to unstable_features in /versions as per MSC1756 (#6712) 2020-01-16 09:46:14 +00:00
Erik Johnston
48c3a96886 Port synapse.replication.tcp to async/await (#6666)
* Port synapse.replication.tcp to async/await

* Newsfile

* Correctly document type of on_<FOO> functions as async

* Don't be overenthusiastic with the asyncing....
2020-01-16 09:16:12 +00:00
Brendan Abolivier
48e57a6452 Rename changelog 2020-01-15 19:40:52 +00:00
Brendan Abolivier
914e73cdd9 Changelog 2020-01-15 19:36:19 +00:00
Brendan Abolivier
066b9f52b8 Correctly order when selecting before stream ordering 2020-01-15 19:32:47 +00:00
Brendan Abolivier
8363588237 Fix typo 2020-01-15 19:13:22 +00:00
Brendan Abolivier
855af069a4 Fix instantiation of message retention purge jobs
When figuring out which topological token to start a purge job at, we
need to do the following:

1. Figure out a timestamp before which events will be purged
2. Select the first stream ordering after that timestamp
3. Select info about the first event after that stream ordering
4. Build a topological token from that info

In some situations (e.g. quiet rooms with a short max_lifetime), there
might not be an event after the stream ordering at step 3, therefore we
abort the purge with the error `No event found`. To mitigate that, this
patch fetches the first event _before_ the stream ordering, instead of
after.
2020-01-15 18:56:18 +00:00
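
> A sketch of the corrected step 3 (the method names and token format are assumptions based on the surrounding commits, e.g. the `get_room_event_after_stream_ordering` removed above — not a verbatim excerpt):

```python
import time


async def purge_start_token(store, room_id, max_lifetime_ms):
    # 1. Events older than this timestamp should be purged.
    before_ts = int(time.time() * 1000) - max_lifetime_ms
    # 2. Select the first stream ordering after that timestamp.
    stream_ordering = await store.find_first_stream_ordering_after_ts(before_ts)
    # 3. Fetch the last event *before* that stream ordering; in a
    #    quiet room there may be no event after it, which previously
    #    aborted the purge with "No event found".
    event_id, topo, stream = await store.get_room_event_before_stream_ordering(
        room_id, stream_ordering
    )
    # 4. Build a topological token for the purge job to start from.
    return "t%d-%d" % (topo, stream)
```
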
Erik Johnston
19a1aac48c Fix purge_room admin API (#6711) 2020-01-15 18:13:47 +00:00
Andrew Morgan
edc244eec4 Remove duplicate session check in web fallback servlet (#6702) 2020-01-15 18:05:18 +00:00
Richard van der Hoff
608bf7d741 Merge pull request #6688 from matrix-org/rav/module_api_extensions
Cleanups and additions to the module API
2020-01-15 16:43:13 +00:00
Richard van der Hoff
107f256cd8 Merge branch 'develop' into rav/module_api_extensions 2020-01-15 16:00:24 +00:00
Richard van der Hoff
8f5d7302ac Implement RedirectException (#6687)
Allow REST endpoint implementations to raise a RedirectException, which will
redirect the user's browser to a given location.
2020-01-15 15:58:55 +00:00
Erik Johnston
28c98e51ff Add local_current_membership table (#6655)
Currently we rely on `current_state_events` to figure out what rooms a
user was in and their last membership event in there. However, if the
server leaves the room then the table may be cleaned up and that
information is lost. So let's add a table that separately holds that
information.
2020-01-15 14:59:33 +00:00
Erik Johnston
b5ce7f5874 Process EDUs in parallel with PDUs. (#6697)
This means that things like to-device messages don't get blocked behind
processing PDUs, which can potentially take *ages*.
2020-01-14 14:08:35 +00:00
Erik Johnston
e8b68a4e4b Fixup synapse.replication to pass mypy checks (#6667) 2020-01-14 14:08:06 +00:00
Andrew Morgan
1177d3f3a3 Quarantine media by ID or user ID (#6681) 2020-01-13 18:10:43 +00:00
Richard van der Hoff
47f4f493f0 Document more supported endpoints for workers (#6698) 2020-01-13 15:32:02 +00:00
Richard van der Hoff
326c893d24 Kill off RegistrationError (#6691)
This is pretty pointless. Let's just use SynapseError.
2020-01-13 12:48:22 +00:00
Richard van der Hoff
2d07c73777 Don't assign numeric IDs for empty usernames (#6690)
Fix a bug where we would assign a numeric userid if somebody tried registering
with an empty username
2020-01-13 12:47:30 +00:00
Richard van der Hoff
3cfac9593c Merge pull request #6689 from matrix-org/rav/saml_mapping_provider_updates
Updates to the SAML mapping provider API
2020-01-13 12:44:55 +00:00
Richard van der Hoff
8039685051 Allow additional_resources to implement Resource directly (#6686)
AdditionalResource really doesn't add any value, and it gets in the way for
resources which want to support child resources or the like. So, if the
resource object already implements the IResource interface, don't bother
wrapping it.
2020-01-13 12:42:44 +00:00
Richard van der Hoff
feee819973 Fix exceptions on requests for non-ascii urls (#6682)
Fixes #6402
2020-01-13 12:41:51 +00:00
Richard van der Hoff
da4e52544e comment for run_in_background 2020-01-12 21:53:47 +00:00
Richard van der Hoff
d56e95ea8b changelog 2020-01-12 21:42:15 +00:00
Richard van der Hoff
dc69a1cf43 Pass client redirect URL into SAML mapping providers 2020-01-12 21:40:49 +00:00
Richard van der Hoff
47e63cc67a Pass the module_api into the SamlMappingProvider
... for consistency with other modules, and because we'll need it sooner or
later and it will be a pain to introduce later.
2020-01-12 21:40:49 +00:00
Richard van der Hoff
96ed33739a changelog 2020-01-12 21:36:10 +00:00
Richard van der Hoff
01243b98e1 Handle config not being set for synapse plugin modules
Some modules don't need any config, so having to define a `config` property
just to keep the loader happy is a bit annoying.
2020-01-12 21:34:36 +00:00
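
> A sketch of a loader along those lines (the function shape is illustrative, not Synapse's actual `synapse.util.module_loader` code):

```python
import importlib


def load_module(module_cfg):
    # "config" may be missing entirely for modules that need none.
    module_path, class_name = module_cfg["module"].rsplit(".", 1)
    module_class = getattr(importlib.import_module(module_path), class_name)
    raw_config = module_cfg.get("config")  # None if absent: that's fine now
    parsed = module_class.parse_config(raw_config) if raw_config is not None else None
    return module_class, parsed
```
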
Richard van der Hoff
473d3801b6 Cleanups and additions to the module API
Add some useful things, such as error types and logcontext handling, to the
API.

Make `hs` a private member to dissuade people from using it (hopefully
they aren't already).

Add a couple of new methods (`record_user_external_id` and
`generate_short_term_login_token`).
2020-01-12 21:31:44 +00:00
Richard van der Hoff
1d16f5ea0e Merge pull request #6675 from matrix-org/rav/die_sqlite37_die_die_die
Refuse to start if sqlite is older than 3.11.0
2020-01-10 12:17:22 +00:00
Richard van der Hoff
937dea42e7 update install notes for CentOS 2020-01-09 18:11:04 +00:00
Richard van der Hoff
c3843fd075 changelog 2020-01-09 18:11:04 +00:00
Richard van der Hoff
bf46821180 Refuse to start if sqlite is older than 3.11.0 2020-01-09 18:11:04 +00:00
Richard van der Hoff
e48ba84e0b Check postgres version in check_database
this saves doing it on each connection, and will allow us to pass extra options
in.
2020-01-09 18:05:59 +00:00
Richard van der Hoff
e97d1cf001 Modify check_database to take a connection rather than a cursor
We might not need the cursor at all.
2020-01-09 18:05:50 +00:00
Erik Johnston
645b1f0ea1 Merge branch 'master' of github.com:matrix-org/synapse into develop 2020-01-09 17:14:02 +00:00
Erik Johnston
c2ba994dbb Add note about log_file no longer be accepted (#6674) 2020-01-09 17:13:36 +00:00
Manuel Stahl
d2906fe666 Allow admin users to create or modify users without a shared secret (#6495)
Signed-off-by: Manuel Stahl <manuel.stahl@awesome-technologies.de>
2020-01-09 13:31:00 +00:00
Erik Johnston
d773290cb1 Merge branch 'master' into develop 2020-01-09 13:25:48 +00:00
Erik Johnston
9dfcf47e9b Fixup changelog 2020-01-09 11:40:27 +00:00
Erik Johnston
24b2c940fb 1.8.0 2020-01-09 11:39:29 +00:00
Erik Johnston
7c232bd98b Merge pull request #6664 from matrix-org/erikj/media_admin_apis
Fix media repo admin APIs when using a media worker.
2020-01-08 15:50:06 +00:00
Erik Johnston
d74054afda Shuffle the code 2020-01-08 14:57:45 +00:00
Erik Johnston
bca3455b38 Comments 2020-01-08 14:27:35 +00:00
Erik Johnston
187dc6ad02 Do not rely on streaming events, as media repo doesn't 2020-01-08 14:24:28 +00:00
Brendan Abolivier
e16521faab Merge pull request #6665 from matrix-org/babolivier/retention_doc_typo
Fix typo in message retention policies doc
2020-01-08 13:57:02 +00:00
Erik Johnston
4e2a072a05 Newsfile 2020-01-08 13:28:19 +00:00
Brendan Abolivier
32ad2a3349 Changelog 2020-01-08 13:28:12 +00:00
Brendan Abolivier
3889fcd9d7 Fix typo in message retention policies doc 2020-01-08 13:27:29 +00:00
Richard van der Hoff
b064a41291 Merge remote-tracking branch 'origin/release-v1.8.0' into develop 2020-01-08 13:27:17 +00:00
Manuel Stahl
7caaa29daa Fix GET request on /_synapse/admin/v2/users endpoint (#6563)
Fixes #6552
2020-01-08 13:26:40 +00:00
Erik Johnston
1adf27c82a Import RoomStore in media worker to fix admin APIs 2020-01-08 13:26:20 +00:00
Erik Johnston
3cf7d6d5b6 Move media admin store functions to worker store 2020-01-08 13:26:20 +00:00
Richard van der Hoff
573fee759c Back out ill-advised notary server hackery (#6657)
This was ill-advised. We can't modify verify_keys here, because the response
object has already been signed by the requested key.

Furthermore, it's somewhat unnecessary because existing versions of Synapse
(which get upset that the notary key isn't present in verify_keys) will fall
back to a direct fetch via `/key/v2/server`.

Also: more tests for fetching keys via perspectives: it would be nice if we actually tested when our fetcher can't talk to our notary impl.
2020-01-08 13:24:10 +00:00
Brendan Abolivier
cff1cb8685 Merge pull request #6624 from matrix-org/babolivier/retention_doc
Add complete documentation of the message retention policies support
2020-01-08 11:24:47 +00:00
Fabian Meyer
dd57715de2 contrib/docker-compose: fixing mount that overrides containers' /etc (#6656)
The mount in the form of ./matrix-config:/etc overwrites the contents of the container's /etc folder. Since all valid CA certificates are stored in /etc, synapse.push.httppusher, for example, cannot validate the certificate from matrix.org.
2020-01-08 07:25:05 +00:00
Matthew Hodgson
91718b3f23 typo 2020-01-07 15:46:04 +00:00
Erik Johnston
be29ed7ad8 Correctly proxy remote group HTTP errors. (#6654)
e.g. if the remote returns a 404 then that shouldn't be treated as an error
but should be proxied through.
2020-01-07 15:36:41 +00:00
Brendan Abolivier
2b6b7f482a Merge pull request #6621 from matrix-org/babolivier/purge_job_config_typo
Fix a typo in the purge jobs configuration example
2020-01-07 16:17:40 +01:00
Brendan Abolivier
3675fb9bc6 Fix reference 2020-01-07 15:15:16 +00:00
Brendan Abolivier
7ba98a2874 Incorporate review 2020-01-07 15:14:33 +00:00
Brendan Abolivier
4be582d7c8 Merge branch 'develop' into babolivier/retention_doc 2020-01-07 15:07:19 +00:00
Brendan Abolivier
01fbd95736 Apply suggestions from code review
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-01-07 15:59:38 +01:00
Brendan Abolivier
03edfc5850 Update changelog.d/6624.doc
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2020-01-07 15:59:05 +01:00
Brendan Abolivier
391fb47791 Reword 2020-01-07 14:54:32 +00:00
Brendan Abolivier
3a86477162 Change the example from 5min to 12h
Having a purge job run every 5min is probably not something we want to advise admins to do as a sort-of default.
2020-01-07 14:53:07 +00:00
Erik Johnston
235d977e1f Fixup changelog 2020-01-07 14:45:54 +00:00
Erik Johnston
2f1c175936 Fixup changelog 2020-01-07 14:39:35 +00:00
Erik Johnston
06958d5bb1 Fixup changelog 2020-01-07 14:38:40 +00:00
Erik Johnston
7f0e706ebf 1.8.0rc1 2020-01-07 14:31:13 +00:00
Brendan Abolivier
0ab5853ec9 Merge pull request #6652 from matrix-org/babolivier/depth_missing_events
Fix conditions failing if min_depth = 0
2020-01-07 15:22:07 +01:00
Richard van der Hoff
85db7f73be Add a background update to clear tombstoned rooms from the directory (#6648)
* Add a background update to clear tombstoned rooms from the directory

* use the ABC metaclass
2020-01-07 14:18:43 +00:00
Richard van der Hoff
9824a39d80 Async/await for background updates (#6647)
so that bg update routines can be async
2020-01-07 14:12:42 +00:00
Richard van der Hoff
d20c346544 port BackgroundUpdateTestCase to HomeserverTestCase (#6653) 2020-01-07 14:09:07 +00:00
Richard van der Hoff
1ff5491117 Merge pull request #6645 from matrix-org/rav/fix_synchrotron_error
Fix exceptions in the synchrotron worker log when events are rejected.
2020-01-07 14:02:14 +00:00
Brendan Abolivier
1f2a5923d4 Changelog 2020-01-07 13:12:17 +00:00
Brendan Abolivier
cd428a93e2 Fix conditions failing if min_depth = 0
This could result in Synapse not fetching prev_events for new events in the room if it has missed some events.
2020-01-07 12:08:58 +00:00
Richard van der Hoff
1807db5e73 Merge pull request #6629 from matrix-org/rav/kill_event_reference_hashes
Remove a bunch of unused code from event creation
2020-01-06 17:46:31 +00:00
Richard van der Hoff
055e6fbaa2 changelog 2020-01-06 17:17:40 +00:00
Richard van der Hoff
26c5d3d398 Fix exceptions in log when rejected event is replicated 2020-01-06 17:16:28 +00:00
Richard van der Hoff
c74de81bfc async/await for SyncReplicationHandler.process_and_notify 2020-01-06 17:14:28 +00:00
Richard van der Hoff
bc42da4ab8 Clarify documentation on get_event* methods
Make it clearer how they behave in the face of rejected and/or missing events.
2020-01-06 17:12:06 +00:00
Richard van der Hoff
ba897a7590 Fix some test failures when frozen_dicts are enabled (#6642)
Fixes #4026
2020-01-06 15:22:46 +00:00
Erik Johnston
9f6c1befbb Add experimental 'databases' config (#6580) 2020-01-06 14:44:01 +00:00
Richard van der Hoff
ab4b4ee6a7 Fix an error which was thrown by the PresenceHandler _on_shutdown handler. (#6640) 2020-01-06 14:34:02 +00:00
Richard van der Hoff
550b2946d8 changelog 2020-01-06 13:45:33 +00:00
Richard van der Hoff
a7d2e5b37f Remove unused get_latest_event_ids_and_hashes_in_room 2020-01-06 13:45:33 +00:00
Richard van der Hoff
dc41fbf0dd Remove unused get_prev_events_and_hashes_for_room 2020-01-06 13:45:33 +00:00
Richard van der Hoff
38e0829a4c Remove unused hashes and depths from _update_membership params 2020-01-06 13:45:33 +00:00
Richard van der Hoff
3bef62488e Remove unused hashes and depths from create_event params 2020-01-06 13:45:33 +00:00
Richard van der Hoff
66ca914dc0 Remove unused hashes and depths from create_new_client_event params 2020-01-06 13:45:33 +00:00
Richard van der Hoff
15720092ac replace get_prev_events_and_hashes_for_room with get_prev_events_for_room in create_new_client_event 2020-01-06 13:45:33 +00:00
Richard van der Hoff
5a04781643 rename get_prev_events_for_room to get_prev_events_and_hashes_for_room
... to make way for a new method which just returns the event ids
2020-01-06 13:45:33 +00:00
Richard van der Hoff
4b36b482e0 Fix exception when fetching notary server's old keys (#6625)
Lift the restriction that *all* the keys used for signing v2 key responses be
present in verify_keys.

Fixes #6596.
2020-01-06 12:33:56 +00:00
Richard van der Hoff
18674eebb1 Workaround for error when fetching notary's own key (#6620)
* Kill off redundant SynapseRequestFactory

We already get the Site via the Channel, so there's no need for a dedicated
RequestFactory: we can just use the right constructor.

* Workaround for error when fetching notary's own key

As a notary server, when we return our own keys, include all of our signing
keys in verify_keys.

This is a workaround for #6596.
2020-01-06 12:28:58 +00:00
Andrew Morgan
01c3c6c929 Fix power levels being incorrectly set in old and new rooms after a room upgrade (#6633)
Modify a copy of an upgraded room's PL before sending to the new room
2020-01-06 09:53:07 +00:00
Richard van der Hoff
08815566bc Automate generation of the sample and debian log configs (#6627) 2020-01-03 17:14:00 +00:00
Richard van der Hoff
e484101306 Raise an error if someone tries to use the log_file config option (#6626)
This has caused some confusion for people who didn't notice it going away.
2020-01-03 17:11:29 +00:00
Richard van der Hoff
98247c4a0e Remove unused, undocumented "content repo" resource (#6628)
This looks like it got half-killed back in #888.

Fixes #6567.
2020-01-03 17:10:52 +00:00
Richard van der Hoff
b6b57ecb4e Kill off redundant SynapseRequestFactory (#6619)
We already get the Site via the Channel, so there's no need for a dedicated
RequestFactory: we can just use the right constructor.
2020-01-03 14:19:48 +00:00
Richard van der Hoff
6964ea095b Reduce the reconnect time when replication fails. (#6617) 2020-01-03 14:19:09 +00:00
Brendan Abolivier
b7dec300b7 Fix vacuum instructions for sqlite 2020-01-03 13:51:59 +01:00
Brendan Abolivier
51b8a21f0c Rename changelog 2020-01-03 13:49:12 +01:00
Brendan Abolivier
9279a2c4e4 Add a complete documentation of the message retention policies support 2020-01-03 13:45:03 +01:00
Brendan Abolivier
9c59bc59c8 Changelog 2020-01-03 13:00:32 +01:00
Brendan Abolivier
dd2954f78d Update sample config 2020-01-03 12:58:12 +01:00
Brendan Abolivier
4efe1d4d3f Fix a typo in the purge jobs configuration example 2020-01-03 12:57:24 +01:00
ewaf1
0495097a7f Added the section 'Configuration' in /docs/turn-howto.md (#6614)
put the second part of the "source installation" section into a new section, because it also applies to Debian packages
2020-01-02 10:41:30 +00:00
Aaron Raimist
32779b59fa Reword sections of federate.md that explained delegation at time of Synapse 1.0 transition (#6601)
* Remove sections of federate.md explaining delegation at time of Synapse 1.0 transition

Signed-off-by: Aaron Raimist <aaron@raim.ist>

* Add changelog

Signed-off-by: Aaron Raimist <aaron@raim.ist>
2020-01-02 10:28:20 +00:00
Richard van der Hoff
dc96943d51 Merge branch 'master' into develop 2019-12-31 11:01:06 +00:00
Richard van der Hoff
77661ce81a 1.7.3 2019-12-31 10:45:12 +00:00
Richard van der Hoff
92eac974b9 Hacks to work around #6605 (#6608)
When we have an event which refers to non-existent auth_events, ignore said events rather than exploding in a ball of fire.

Fixes #6605.
2019-12-31 10:41:44 +00:00
Richard van der Hoff
f03c877b32 sample log config
TODO: automate generation of this
2019-12-24 17:39:58 +00:00
dopple
b2db382841 Update reverse proxy file name (#6590)
s/reverse_proxy.rst/reverse_proxy.md/
2019-12-22 22:17:09 +00:00
dopple
7c6b853558 Update reverse proxy file name (#6590)
s/reverse_proxy.rst/reverse_proxy.md/
2019-12-22 22:16:56 +00:00
Richard van der Hoff
56ad230efa Merge branch 'master' into develop 2019-12-20 11:11:34 +00:00
Richard van der Hoff
138708608b Merge tag 'v1.7.2'
Synapse 1.7.2 (2019-12-20)
==========================

This release fixes some regressions introduced in Synapse 1.7.0 and 1.7.1.

Bugfixes
--------

- Fix a regression introduced in Synapse 1.7.1 which caused errors when attempting to backfill rooms over federation. ([\#6576](https://github.com/matrix-org/synapse/issues/6576))
- Fix a bug introduced in Synapse 1.7.0 which caused an error on startup when upgrading from versions before 1.3.0. ([\#6578](https://github.com/matrix-org/synapse/issues/6578))
2019-12-20 11:09:59 +00:00
Erik Johnston
75d8f26ac8 Split state groups into a separate data store (#6296) 2019-12-20 10:48:24 +00:00
Erik Johnston
fa780e9721 Change EventContext to use the Storage class (#6564) 2019-12-20 10:32:02 +00:00
Erik Johnston
0b5dbadd96 Explode on duplicate delta file names. (#6565) 2019-12-19 15:07:37 +00:00
Erik Johnston
3d46124ad0 Port some admin handlers to async/await (#6559) 2019-12-19 15:07:28 +00:00
Richard van der Hoff
bca30cefee Improve diagnostics on database upgrade failure (#6570)
`Failed to upgrade database` is not helpful, and it's unlikely that UPGRADE.rst
has anything useful.
2019-12-19 14:53:15 +00:00
Richard van der Hoff
0b794cbd7b Fix sdnotify with acme enabled (#6571)
If acme was enabled, the sdnotify startup hook would never be run because we
would try to add it to a hook which had already fired.

There's no need to delay it: we can sdnotify as soon as we've started the
listeners.
2019-12-19 14:52:52 +00:00
Richard van der Hoff
b95b762560 Add an export_signing_key script (#6546)
I want to do some key rotation, and it is silly that we don't have a way to do
this.
2019-12-19 11:11:14 +00:00
Richard van der Hoff
d6752ce5da Clean up startup for the pusher (#6558)
* Remove redundant python2 support code

`str.decode()` doesn't exist on python3, so presumably this code was doing
nothing

* Filter out pushers with corrupt data

When we get a row with unparsable json, drop the row, rather than returning a
row with null `data`, which will then cause an explosion later on.

* Improve logging when we can't start a pusher

Log the ID to help us understand the problem

* Make email pusher setup more robust

We know we'll have a `data` member, since that comes from the database. What we
*don't* know is if that is a dict, and if that has a `brand` member, and if
that member is a string.
2019-12-18 14:26:58 +00:00
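
> A sketch of the "filter out pushers with corrupt data" step (function name illustrative): drop rows whose JSON `data` column won't parse, instead of returning a pusher with `data=None` that explodes later, and log the ID so the problem row can be found.

```python
import json
import logging

logger = logging.getLogger(__name__)


def decode_pusher_rows(rows):
    pushers = []
    for row in rows:
        try:
            row["data"] = json.loads(row["data"])
        except ValueError:
            # json.JSONDecodeError subclasses ValueError.
            logger.warning("Invalid JSON in data for pusher %s", row.get("id"))
            continue  # skip the corrupt row rather than crashing later
        pushers.append(row)
    return pushers
```
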
Andrew Morgan
7963ca83cb Add delta file to fix missing default table data (#6555) 2019-12-18 11:13:33 +00:00
Erik Johnston
2284eb3a53 Add database config class (#6513)
This encapsulates config for a given database and is the way to get new
connections.
2019-12-18 10:45:12 +00:00
Richard van der Hoff
91ccfe9f37 Merge branch 'master' into develop 2019-12-18 10:16:39 +00:00
Richard van der Hoff
4154637834 Merge tag 'v1.7.1'
Synapse 1.7.1 (2019-12-18)
==========================

This release includes several security fixes as well as a fix to a bug exposed by the security fixes. Administrators are encouraged to upgrade as soon as possible.

Security updates
----------------

- Fix a bug which could cause room events to be incorrectly authorized using events from a different room. ([\#6501](https://github.com/matrix-org/synapse/issues/6501), [\#6503](https://github.com/matrix-org/synapse/issues/6503), [\#6521](https://github.com/matrix-org/synapse/issues/6521), [\#6524](https://github.com/matrix-org/synapse/issues/6524), [\#6530](https://github.com/matrix-org/synapse/issues/6530), [\#6531](https://github.com/matrix-org/synapse/issues/6531))
- Fix a bug causing responses to the `/context` client endpoint to not use the pruned version of the event. ([\#6553](https://github.com/matrix-org/synapse/issues/6553))
- Fix a cause of state resets in room versions 2 onwards. ([\#6556](https://github.com/matrix-org/synapse/issues/6556), [\#6560](https://github.com/matrix-org/synapse/issues/6560))

Bugfixes
--------

- Fix a bug which could cause the federation server to incorrectly return errors when handling certain obscure event graphs. ([\#6526](https://github.com/matrix-org/synapse/issues/6526), [\#6527](https://github.com/matrix-org/synapse/issues/6527))
2019-12-18 10:14:49 +00:00
Richard van der Hoff
6e8f8e14f2 Merge release-v1.7.1 into develop 2019-12-18 09:51:51 +00:00
Erik Johnston
02553901ce Remove unused get_pagination_rows methods. (#6557)
Remove unused get_pagination_rows methods
2019-12-17 11:44:32 +00:00
Erik Johnston
3fbe5b7ec3 Add auth events as per spec. (#6556)
Previously we tried to be clever and filter out some unnecessary event
IDs to keep the auth chain small, but that had some annoying
interactions with state res v2 so we stop doing that for now.
2019-12-16 16:59:32 +00:00
Will Hunt
bfb95654c9 Add option to allow profile queries without sharing a room (#6523) 2019-12-16 16:11:55 +00:00
Richard van der Hoff
6920d88892 Exclude rejected state events when calculating state at backwards extrems (#6527)
This fixes a weird bug where, if you were determined enough, you could end up with a rejected event forming part of the state at a backwards-extremity. Authing that backwards extremity would then lead to us trying to pull the rejected event from the db (with allow_rejected=False), which would fail with a 404.
2019-12-16 13:14:37 +00:00
Richard van der Hoff
bc7de87650 Persist auth/state events at backwards extremities when we fetch them (#6526)
The main point here is to make sure that the state returned by _get_state_in_room has been authed before we try to use it as state in the room.
2019-12-16 12:26:28 +00:00
Werner Sembach
9d173b312c Automatically delete empty groups/communities (#6453)
Signed-off-by: Werner Sembach <werner.sembach@fau.de>
2019-12-16 12:12:40 +00:00
Andrew Morgan
0b90fc6ed2 Document Shutdown Room admin API (#6541) 2019-12-13 15:28:48 +00:00
Richard van der Hoff
1da15f05f5 sanity-checking for events used in state res (#6531)
When we perform state resolution, check that all of the events involved are in
the right room.
2019-12-13 12:55:32 +00:00
Richard van der Hoff
971a0702b5 Sanity-check room ids in event auth (#6530)
When we do an event auth operation, check that all of the events involved are
in the right room.
2019-12-13 11:44:41 +00:00
Erik Johnston
5cadbd9ebb Merge pull request #6537 from matrix-org/erikj/bump_mypy_version
Bump mypy version
2019-12-13 11:32:53 +00:00
Erik Johnston
f46e05d053 Merge branch 'master' into develop 2019-12-13 10:55:47 +00:00
Erik Johnston
bee1982d17 Merge tag 'v1.7.0'
Synapse 1.7.0 (2019-12-13)
==========================

This release changes the default settings so that only local authenticated users can query the server's room directory. See the [upgrade notes](UPGRADE.rst#upgrading-to-v170) for details.

Support for SQLite versions before 3.11 is now deprecated. A future release will refuse to start if used with an SQLite version before 3.11.

Administrators are reminded that SQLite should not be used for production instances. Instructions for migrating to Postgres are available [here](docs/postgres.md). A future release of synapse will, by default, disable federation for servers using SQLite.

No significant changes since 1.7.0rc2.

Synapse 1.7.0rc2 (2019-12-11)
=============================

Bugfixes
--------

- Fix incorrect error message for invalid requests when setting user's avatar URL. ([\#6497](https://github.com/matrix-org/synapse/issues/6497))
- Fix support for SQLite 3.7. ([\#6499](https://github.com/matrix-org/synapse/issues/6499))
- Fix regression where sending email push would not work when using a pusher worker. ([\#6507](https://github.com/matrix-org/synapse/issues/6507), [\#6509](https://github.com/matrix-org/synapse/issues/6509))

Synapse 1.7.0rc1 (2019-12-09)
=============================

Features
--------

- Implement per-room message retention policies. ([\#5815](https://github.com/matrix-org/synapse/issues/5815), [\#6436](https://github.com/matrix-org/synapse/issues/6436))
- Add etag and count fields to key backup endpoints to help clients guess if there are new keys. ([\#5858](https://github.com/matrix-org/synapse/issues/5858))
- Add `/admin/v2/users` endpoint with pagination. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Require User-Interactive Authentication for `/account/3pid/add`, meaning the user's password will be required to add a third-party ID to their account. ([\#6119](https://github.com/matrix-org/synapse/issues/6119))
- Implement the `/_matrix/federation/unstable/net.atleastfornow/state/<context>` API as drafted in MSC2314. ([\#6176](https://github.com/matrix-org/synapse/issues/6176))
- Configure privacy-preserving settings by default for the room directory. ([\#6355](https://github.com/matrix-org/synapse/issues/6355))
- Add ephemeral messages support by partially implementing [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). ([\#6409](https://github.com/matrix-org/synapse/issues/6409))
- Add support for [MSC 2367](https://github.com/matrix-org/matrix-doc/pull/2367), which allows specifying a reason on all membership events. ([\#6434](https://github.com/matrix-org/synapse/issues/6434))

Bugfixes
--------

- Transfer non-standard power levels on room upgrade. ([\#6237](https://github.com/matrix-org/synapse/issues/6237))
- Fix error from the Pillow library when uploading RGBA images. ([\#6241](https://github.com/matrix-org/synapse/issues/6241))
- Correctly apply the event filter to the `state`, `events_before` and `events_after` fields in the response to `/context` requests. ([\#6329](https://github.com/matrix-org/synapse/issues/6329))
- Fix caching devices for remote users when using workers, so that we don't attempt to refetch (and potentially fail) each time a user requests devices. ([\#6332](https://github.com/matrix-org/synapse/issues/6332))
- Prevent account data syncs getting lost across TCP replication. ([\#6333](https://github.com/matrix-org/synapse/issues/6333))
- Fix bug: TypeError in `register_user()` while using LDAP auth module. ([\#6406](https://github.com/matrix-org/synapse/issues/6406))
- Fix an intermittent exception when handling read-receipts. ([\#6408](https://github.com/matrix-org/synapse/issues/6408))
- Fix broken guest registration when there are existing blocks of numeric user IDs. ([\#6420](https://github.com/matrix-org/synapse/issues/6420))
- Fix startup error when http proxy is defined. ([\#6421](https://github.com/matrix-org/synapse/issues/6421))
- Fix error when using synapse_port_db on a vanilla synapse db. ([\#6449](https://github.com/matrix-org/synapse/issues/6449))
- Fix uploading multiple cross signing signatures for the same user. ([\#6451](https://github.com/matrix-org/synapse/issues/6451))
- Fix bug which lead to exceptions being thrown in a loop when a cross-signed device is deleted. ([\#6462](https://github.com/matrix-org/synapse/issues/6462))
- Fix `synapse_port_db` not exiting with a 0 code if something went wrong during the port process. ([\#6470](https://github.com/matrix-org/synapse/issues/6470))
- Improve sanity-checking when receiving events over federation. ([\#6472](https://github.com/matrix-org/synapse/issues/6472))
- Fix inaccurate per-block Prometheus metrics. ([\#6491](https://github.com/matrix-org/synapse/issues/6491))
- Fix small performance regression for sending invites. ([\#6493](https://github.com/matrix-org/synapse/issues/6493))
- Back out cross-signing code added in Synapse 1.5.0, which caused a performance regression. ([\#6494](https://github.com/matrix-org/synapse/issues/6494))

Improved Documentation
----------------------

- Update documentation and variables in user contributed systemd reference file. ([\#6369](https://github.com/matrix-org/synapse/issues/6369), [\#6490](https://github.com/matrix-org/synapse/issues/6490))
- Fix link in the user directory documentation. ([\#6388](https://github.com/matrix-org/synapse/issues/6388))
- Add build instructions to the docker readme. ([\#6390](https://github.com/matrix-org/synapse/issues/6390))
- Switch Ubuntu package install recommendation to use python3 packages in INSTALL.md. ([\#6443](https://github.com/matrix-org/synapse/issues/6443))
- Write some docs for the quarantine_media api. ([\#6458](https://github.com/matrix-org/synapse/issues/6458))
- Convert CONTRIBUTING.rst to markdown (among other small fixes). ([\#6461](https://github.com/matrix-org/synapse/issues/6461))

Deprecations and Removals
-------------------------

- Remove admin/v1/users_paginate endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Remove fallback for federation with old servers which lack the /federation/v1/state_ids API. ([\#6488](https://github.com/matrix-org/synapse/issues/6488))

Internal Changes
----------------

- Add benchmarks for structured logging and improve output performance. ([\#6266](https://github.com/matrix-org/synapse/issues/6266))
- Improve the performance of outputting structured logging. ([\#6322](https://github.com/matrix-org/synapse/issues/6322))
- Refactor some code in the event authentication path for clarity. ([\#6343](https://github.com/matrix-org/synapse/issues/6343), [\#6468](https://github.com/matrix-org/synapse/issues/6468), [\#6480](https://github.com/matrix-org/synapse/issues/6480))
- Clean up some unnecessary quotation marks around the codebase. ([\#6362](https://github.com/matrix-org/synapse/issues/6362))
- Complain on startup instead of 500'ing during runtime when `public_baseurl` isn't set when necessary. ([\#6379](https://github.com/matrix-org/synapse/issues/6379))
- Add a test scenario to make sure room history purges don't break `/messages` in the future. ([\#6392](https://github.com/matrix-org/synapse/issues/6392))
- Clarifications for the email configuration settings. ([\#6423](https://github.com/matrix-org/synapse/issues/6423))
- Add more tests to the blacklist when running in worker mode. ([\#6429](https://github.com/matrix-org/synapse/issues/6429))
- Refactor data store layer to support multiple databases in the future. ([\#6454](https://github.com/matrix-org/synapse/issues/6454), [\#6464](https://github.com/matrix-org/synapse/issues/6464), [\#6469](https://github.com/matrix-org/synapse/issues/6469), [\#6487](https://github.com/matrix-org/synapse/issues/6487))
- Port synapse.rest.client.v1 to async/await. ([\#6482](https://github.com/matrix-org/synapse/issues/6482))
- Port synapse.rest.client.v2_alpha to async/await. ([\#6483](https://github.com/matrix-org/synapse/issues/6483))
- Port SyncHandler to async/await. ([\#6484](https://github.com/matrix-org/synapse/issues/6484))
2019-12-13 10:55:33 +00:00
Richard van der Hoff
4ce05ec171 Adjust the sytest blacklist for worker mode (#6538)
Remove tests that got blacklisted while torturing was enabled, and add one that
fails.
2019-12-13 10:15:20 +00:00
Erik Johnston
caa52836e4 Merge pull request #6496 from matrix-org/erikj/initial_sync_asnyc
Port synapse.handlers.initial_sync to async/await
2019-12-13 10:01:51 +00:00
Erik Johnston
7e67cfc87d Merge pull request #6534 from matrix-org/erikj/extend_mypy
Include more folders in mypy
2019-12-12 17:21:42 +00:00
Hubert Chathi
cb2db17994 look up cross-signing keys from the DB in bulk (#6486) 2019-12-12 12:03:28 -05:00
Andrew Morgan
5bfd8855d6 Fix redacted events being returned in search results ordered by "recent" (#6522) 2019-12-12 15:53:49 +00:00
Erik Johnston
5056d6d90a Newsfile 2019-12-12 15:22:46 +00:00
Erik Johnston
495005360c Bump version of mypy 2019-12-12 15:21:12 +00:00
Erik Johnston
c965253e4b Newsfile 2019-12-12 14:54:03 +00:00
Erik Johnston
324d4f61b8 Include more folders in mypy 2019-12-12 14:52:11 +00:00
Richard van der Hoff
25f1244329 Check the room_id of events when fetching room state/auth (#6524)
When we request the state/auth_events to populate a backwards extremity (on
backfill or in the case of missing events in a transaction push), we should
check that the returned events are in the right room rather than blindly using
them in the room state or auth chain.

Given that _get_events_from_store_or_dest takes a room_id, it seems clear that
it should be sanity-checking the room_id of the requested events, so let's do
it there.
2019-12-12 12:57:45 +00:00
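
> The sanity check, sketched (helper name illustrative):

```python
def events_in_room(events, room_id):
    # Discard any returned event that claims to belong to a different
    # room before using it as room state or auth_events.
    return {ev.event_id: ev for ev in events if ev.room_id == room_id}
```
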
Erik Johnston
b8e4b39b69 Merge pull request #6511 from matrix-org/erikj/remove_db_config_from_apps
Move database config from apps into HomeServer object
2019-12-12 10:37:56 +00:00
Erik Johnston
cfcfb57e58 Add new config param to docstring and add types 2019-12-11 17:27:46 +00:00
Erik Johnston
6828b47c45 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/initial_sync_asnyc 2019-12-11 17:01:41 +00:00
Richard van der Hoff
2045356517 Add include_event_in_state to _get_state_for_room (#6521)
Make it return the state *after* the requested event, rather than the one
before it. This is a bit easier and requires fewer calls to
get_events_from_store_or_dest.
2019-12-11 16:37:51 +00:00
Richard van der Hoff
894d2addac Merge pull request #6517 from matrix-org/rav/event_auth/13
Port some of FederationHandler to async/await
2019-12-11 16:36:06 +00:00
Mark Nowiasz
58fdcbdfe7 Update workers.md to make media_repository work (again) (#6519) 2019-12-11 16:23:38 +00:00
Erik Johnston
31905a518e Merge pull request #6504 from matrix-org/erikj/account_validity_async_await
Port handlers.account_validity to async/await.
2019-12-11 14:49:26 +00:00
Richard van der Hoff
5324bc20a6 changelog 2019-12-11 14:39:26 +00:00
Richard van der Hoff
6637d90d77 convert to async: FederationHandler._process_received_pdu
also fix user_joined_room to consistently return deferreds
2019-12-11 14:39:26 +00:00
Richard van der Hoff
4db394a4b3 convert to async: FederationHandler._get_state_for_room
... and _get_events_from_store_or_dest
2019-12-11 14:39:26 +00:00
Richard van der Hoff
e77237b935 convert to async: FederationHandler.on_receive_pdu
and associated functions:
 * on_receive_pdu
 * handle_queued_pdus
 * get_missing_events_for_pdu
2019-12-11 14:39:26 +00:00
Richard van der Hoff
7712e751b8 Convert federation backfill to async
PaginationHandler.get_messages is only called by RoomMessageListRestServlet,
which is async.

Chase the code path down from there:
 - FederationHandler.maybe_backfill (and nested try_backfill)
 - FederationHandler.backfill
2019-12-11 14:39:25 +00:00
Richard van der Hoff
7c429f92d6 Clean up some logging (#6515)
This just makes some of the logging easier to follow when things start going
wrong.
2019-12-11 14:32:25 +00:00
Erik Johnston
adb3a873fd Merge tag 'v1.7.0rc2' into develop
Synapse 1.7.0rc2 (2019-12-11)
=============================

Bugfixes
--------

- Fix incorrect error message for invalid requests when setting user's avatar URL. ([\#6497](https://github.com/matrix-org/synapse/issues/6497))
- Fix support for SQLite 3.7. ([\#6499](https://github.com/matrix-org/synapse/issues/6499))
- Fix regression where sending email push would not work when using a pusher worker. ([\#6507](https://github.com/matrix-org/synapse/issues/6507), [\#6509](https://github.com/matrix-org/synapse/issues/6509))
2019-12-11 14:14:30 +00:00
Andrew Morgan
fc316a4894 Prevent redacted events from appearing in message search (#6377) 2019-12-11 13:39:47 +00:00
Andrew Morgan
6676ee9c4a Add dev script to generate full SQL schema files (#6394) 2019-12-11 13:16:01 +00:00
Andrew Morgan
ea0f0ad414 Prevent message search in upgraded rooms we're not in (#6385) 2019-12-11 13:07:25 +00:00
Brendan Abolivier
54ae52ba96 Merge pull request #6349 from matrix-org/babolivier/msc1802
Implement v2 APIs for send_join and send_leave
2019-12-11 11:41:47 +00:00
Erik Johnston
239d86a134 Merge pull request #6512 from matrix-org/erikj/silence_mypy
Silence mypy errors for files outside those specified
2019-12-11 10:39:57 +00:00
Richard van der Hoff
f8bc2ae883 Move get_state methods into FederationHandler (#6503)
This is a non-functional refactor as a precursor to some other work.
2019-12-10 17:42:46 +00:00
Andrew Morgan
4947de5a14 Allow SAML username provider plugins (#6411) 2019-12-10 17:30:16 +00:00
Erik Johnston
b2dcddc413 Merge pull request #6510 from matrix-org/erikj/phone_home_stats_db
Phone home stats DB reporting should not assume a single DB.
2019-12-10 16:31:24 +00:00
Richard van der Hoff
40eda84933 Fix race which caused deleted devices to reappear (#6514)
Stop the `update_client_ips` background job from recreating deleted devices.
2019-12-10 16:22:29 +00:00
Richard van der Hoff
c3dda2874d Refactor get_events_from_store_or_dest to return a dict (#6501)
There was a bunch of unnecessary conversion back and forth between dict and
list going on here. We can simplify a bunch of the code.
2019-12-10 16:22:00 +00:00
Richard van der Hoff
424fd58237 Remove redundant code from event authorisation implementation. (#6502) 2019-12-10 15:09:45 +00:00
Erik Johnston
3f97b4c16b Newsfile 2019-12-10 14:40:15 +00:00
Erik Johnston
257ef2c727 Port handlers.account_validity to async/await. 2019-12-10 14:40:15 +00:00
Erik Johnston
d630c82349 Newsfile 2019-12-10 14:34:17 +00:00
Erik Johnston
67c991b78f Fix upgrade db script 2019-12-10 14:34:17 +00:00
Erik Johnston
bc5cb8bfe8 Remove database config parsing from apps. 2019-12-10 14:34:17 +00:00
Erik Johnston
35f3c366ef Merge pull request #6505 from matrix-org/erikj/make_deferred_yiedable
Fix `make_deferred_yieldable` to work with coroutines
2019-12-10 14:20:26 +00:00
Erik Johnston
e726e18737 Merge pull request #6499 from matrix-org/erikj/fix_sqlite_7
Fix support for SQLite 3.7.
2019-12-10 13:43:52 +00:00
Erik Johnston
4643bb2a37 Newsfile 2019-12-10 13:36:00 +00:00
Erik Johnston
28b758fa0f Silence mypy errors for files outside those specified 2019-12-10 13:34:56 +00:00
Erik Johnston
accd343f91 Newsfile 2019-12-10 13:22:42 +00:00
Erik Johnston
663238aeb4 Phone home stats DB reporting should not assume a single DB. 2019-12-10 13:21:04 +00:00
Erik Johnston
ffeafade48 Update comment 2019-12-10 13:17:39 +00:00
Erik Johnston
e3f528c544 Merge pull request #6506 from matrix-org/erikj/remove_snapshot_cache
Remove SnapshotCache in favour of ResponseCache
2019-12-10 13:04:50 +00:00
Erik Johnston
b1e7012dee Newsfile 2019-12-10 11:29:44 +00:00
Erik Johnston
f5bb1531b7 Newsfile 2019-12-10 11:23:52 +00:00
Erik Johnston
9a2223d4c8 Fix make_deferred_yieldable to work with coroutines 2019-12-10 11:22:12 +00:00
Erik Johnston
353396e3a7 Port handlers.account_data to async/await. 2019-12-10 11:12:56 +00:00
Erik Johnston
aeaeb72ee4 Newsfile 2019-12-09 13:48:14 +00:00
Erik Johnston
a1f8ea9051 Port synapse.handlers.initial_sync to async/await 2019-12-09 13:46:45 +00:00
Erik Johnston
f166a8d1f5 Remove SnapshotCache in favour of ResponseCache 2019-12-09 13:42:49 +00:00
Brendan Abolivier
e126d83f74 Merge branch 'develop' into babolivier/msc1802 2019-12-05 21:00:43 +00:00
Andrew Morgan
e2cce15af1 Remove #6369 changelog 2019-12-05 15:44:02 +00:00
Brendan Abolivier
c8444b4152 Merge branch 'develop' into babolivier/msc1802 2019-11-14 10:12:11 +00:00
Brendan Abolivier
94cdd6fffe Lint 2019-11-11 16:56:55 +00:00
Brendan Abolivier
21056ad12a Changelog 2019-11-11 16:53:29 +00:00
Brendan Abolivier
edc4c7d4c5 Lint 2019-11-11 16:51:54 +00:00
Brendan Abolivier
5e18dc7955 Fix prefix for v2/send_leave 2019-11-11 16:46:09 +00:00
Brendan Abolivier
74897de01f Add server-side support to the v2 API 2019-11-11 16:40:45 +00:00
Brendan Abolivier
1e202a90f1 Implement v2 API for send_leave 2019-11-11 16:26:53 +00:00
Brendan Abolivier
92527d7b21 Add missing yield 2019-11-11 16:20:53 +00:00
Brendan Abolivier
4c131b2c78 Implement v2 API for send_join 2019-11-11 15:47:47 +00:00
284 changed files with 10633 additions and 4900 deletions

View File

@@ -1,22 +0,0 @@
version: '3.1'
services:
postgres:
image: postgres:9.5
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.5
depends_on:
- postgres
env_file: .env
environment:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /src
volumes:
- ..:/src

View File

@@ -1,22 +0,0 @@
version: '3.1'
services:
postgres:
image: postgres:11
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.7
depends_on:
- postgres
env_file: .env
environment:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /src
volumes:
- ..:/src

View File

@@ -1,22 +0,0 @@
version: '3.1'
services:
postgres:
image: postgres:9.5
environment:
POSTGRES_PASSWORD: postgres
command: -c fsync=off
testenv:
image: python:3.7
depends_on:
- postgres
env_file: .env
environment:
SYNAPSE_POSTGRES_HOST: postgres
SYNAPSE_POSTGRES_USER: postgres
SYNAPSE_POSTGRES_PASSWORD: postgres
working_dir: /src
volumes:
- ..:/src

View File

@@ -0,0 +1,18 @@
#!/bin/bash
# this script is run by buildkite in a plain `xenial` container; it installs the
# minimal requirements for tox and hands over to the py35-old tox environment.
set -ex
apt-get update
apt-get install -y python3.5 python3.5-dev python3-pip libxml2-dev libxslt-dev zlib1g-dev
# workaround for https://github.com/jaraco/zipp/issues/40
python3.5 -m pip install 'setuptools>=34.4.0'
python3.5 -m pip install tox
export LANG="C.UTF-8"
exec tox -e py35-old,combine

View File

@@ -34,33 +34,8 @@ Device list doesn't change if remote server is down
Remote servers cannot set power levels in rooms without existing powerlevels
Remote servers should reject attempts by non-creators to set the power levels
# new failures as of https://github.com/matrix-org/sytest/pull/753
GET /rooms/:room_id/messages returns a message
GET /rooms/:room_id/messages lazy loads members correctly
Read receipts are sent as events
Only original members of the room can see messages from erased users
Device deletion propagates over federation
If user leaves room, remote user changes device and rejoins we see update in /sync and /keys/changes
Changing user-signing key notifies local users
Newly updated tags appear in an incremental v2 /sync
# https://buildkite.com/matrix-dot-org/synapse/builds/6134#6f67bf47-e234-474d-80e8-c6e1868b15c5
Server correctly handles incoming m.device_list_update
Local device key changes get to remote servers with correct prev_id
AS-ghosted users can use rooms via AS
Ghost user must register before joining room
Test that a message is pushed
Invites are pushed
Rooms with aliases are correctly named in pushed
Rooms with names are correctly named in pushed
Rooms with canonical alias are correctly named in pushed
Rooms with many users are correctly pushed
Don't get pushed for rooms you've muted
Rejected events are not pushed
Test that rejected pushers are removed.
Events come down the correct room
# https://buildkite.com/matrix-dot-org/sytest/builds/326#cca62404-a88a-4fcb-ad41-175fd3377603
Presence changes to UNAVAILABLE are reported to remote room members
If remote user leaves room, changes device and rejoins we see update in sync
uploading self-signing key notifies over federation
Inbound federation can receive redacted events
Outbound federation can request missing events
# this fails reliably with a torture level of 100 due to https://github.com/matrix-org/synapse/issues/6536
Outbound federation requests missing prev_events and then asks for /state_ids and resolves the state

View File

@@ -3,6 +3,10 @@
<!-- Please read CONTRIBUTING.md before submitting your pull request -->
* [ ] Pull request is based on the develop branch
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#changelog)
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#changelog). The entry should:
- Be a short description of your change which makes sense to users. "Fixed a bug that prevented receiving messages from other servers." instead of "Moved X method from `EventStore` to `EventWorkerStore`.".
- Use markdown where necessary, mostly for `code blocks`.
- End with either a period (.) or an exclamation mark (!).
- Start with a capital letter.
* [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#sign-off)
* [ ] Code style is correct (run the [linters](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#code-style))

View File

@@ -46,3 +46,6 @@ Joseph Weston <joseph at weston.cloud>
Benjamin Saunders <ben.e.saunders at gmail dot com>
* Documentation improvements
Werner Sembach <werner.sembach at fau dot de>
* Automatically remove a group/community when it is empty

View File

@@ -1,3 +1,289 @@
Synapse 1.10.0 (2020-02-12)
===========================
**WARNING to client developers**: As of this release Synapse validates `client_secret` parameters in the Client-Server API as per the spec. See [\#6766](https://github.com/matrix-org/synapse/issues/6766) for details.
Updates to the Docker image
---------------------------
- Update the docker images to Alpine Linux 3.11. ([\#6897](https://github.com/matrix-org/synapse/issues/6897))
Synapse 1.10.0rc5 (2020-02-11)
==============================
Bugfixes
--------
- Fix the filtering introduced in 1.10.0rc3 to also apply to the state blocks returned by `/sync`. ([\#6884](https://github.com/matrix-org/synapse/issues/6884))
Synapse 1.10.0rc4 (2020-02-11)
==============================
This release candidate was built incorrectly and is superseded by 1.10.0rc5.
Synapse 1.10.0rc3 (2020-02-10)
==============================
Features
--------
- Filter out `m.room.aliases` from the CS API to mitigate abuse while a better solution is specced. ([\#6878](https://github.com/matrix-org/synapse/issues/6878))
Internal Changes
----------------
- Fix continuous integration failures with old versions of `pip`, which were introduced by a release of the `zipp` library. ([\#6880](https://github.com/matrix-org/synapse/issues/6880))
Synapse 1.10.0rc2 (2020-02-06)
==============================
Bugfixes
--------
- Fix an issue with cross-signing where device signatures were not sent to remote servers. ([\#6844](https://github.com/matrix-org/synapse/issues/6844))
- Fix the unknown remote device detection which was introduced in 1.10.0rc1. ([\#6848](https://github.com/matrix-org/synapse/issues/6848))
Internal Changes
----------------
- Detect unexpected sender keys on remote encrypted events and resync device lists. ([\#6850](https://github.com/matrix-org/synapse/issues/6850))
Synapse 1.10.0rc1 (2020-01-31)
==============================
Features
--------
- Add experimental support for updated authorization rules for aliases events, from [MSC2260](https://github.com/matrix-org/matrix-doc/pull/2260). ([\#6787](https://github.com/matrix-org/synapse/issues/6787), [\#6790](https://github.com/matrix-org/synapse/issues/6790), [\#6794](https://github.com/matrix-org/synapse/issues/6794))
Bugfixes
--------
- Warn if postgres database has a non-C locale, as that can cause issues when upgrading locales (e.g. due to upgrading OS). ([\#6734](https://github.com/matrix-org/synapse/issues/6734))
- Minor fixes to `PUT /_synapse/admin/v2/users` admin api. ([\#6761](https://github.com/matrix-org/synapse/issues/6761))
- Validate `client_secret` parameter using the regex provided by the Client-Server API, temporarily allowing `:` characters for older clients. The `:` character will be removed in a future release. ([\#6767](https://github.com/matrix-org/synapse/issues/6767))
- Fix persisting redaction events that have been redacted (or otherwise don't have a redacts key). ([\#6771](https://github.com/matrix-org/synapse/issues/6771))
- Fix outbound federation request metrics. ([\#6795](https://github.com/matrix-org/synapse/issues/6795))
- Fix bug where querying a remote user's device keys that weren't cached resulted in only returning a single device. ([\#6796](https://github.com/matrix-org/synapse/issues/6796))
- Fix race in federation sender worker that delayed sending of device updates. ([\#6799](https://github.com/matrix-org/synapse/issues/6799), [\#6800](https://github.com/matrix-org/synapse/issues/6800))
- Fix bug where Synapse didn't invalidate cache of remote users' devices when Synapse left a room. ([\#6801](https://github.com/matrix-org/synapse/issues/6801))
- Fix waking up other workers when remote server is detected to have come back online. ([\#6811](https://github.com/matrix-org/synapse/issues/6811))
Improved Documentation
----------------------
- Clarify documentation related to `user_dir` and `federation_reader` workers. ([\#6775](https://github.com/matrix-org/synapse/issues/6775))
Internal Changes
----------------
- Record room versions in the `rooms` table. ([\#6729](https://github.com/matrix-org/synapse/issues/6729), [\#6788](https://github.com/matrix-org/synapse/issues/6788), [\#6810](https://github.com/matrix-org/synapse/issues/6810))
- Propagate cache invalidates from workers to other workers. ([\#6748](https://github.com/matrix-org/synapse/issues/6748))
- Remove some unnecessary admin handler abstraction methods. ([\#6751](https://github.com/matrix-org/synapse/issues/6751))
- Add some debugging for media storage providers. ([\#6757](https://github.com/matrix-org/synapse/issues/6757))
- Detect unknown remote devices and mark cache as stale. ([\#6776](https://github.com/matrix-org/synapse/issues/6776), [\#6819](https://github.com/matrix-org/synapse/issues/6819))
- Attempt to resync remote users' devices when detected as stale. ([\#6786](https://github.com/matrix-org/synapse/issues/6786))
- Delete current state from the database when server leaves a room. ([\#6792](https://github.com/matrix-org/synapse/issues/6792))
- When a client asks for a remote user's device keys check if the local cache for that user has been marked as potentially stale. ([\#6797](https://github.com/matrix-org/synapse/issues/6797))
- Add background update to clean out left rooms from current state. ([\#6802](https://github.com/matrix-org/synapse/issues/6802), [\#6816](https://github.com/matrix-org/synapse/issues/6816))
- Refactoring work in preparation for changing the event redaction algorithm. ([\#6803](https://github.com/matrix-org/synapse/issues/6803), [\#6805](https://github.com/matrix-org/synapse/issues/6805), [\#6806](https://github.com/matrix-org/synapse/issues/6806), [\#6807](https://github.com/matrix-org/synapse/issues/6807), [\#6820](https://github.com/matrix-org/synapse/issues/6820))
Synapse 1.9.1 (2020-01-28)
==========================
Bugfixes
--------
- Fix bug where setting `mau_limit_reserved_threepids` config would cause Synapse to refuse to start. ([\#6793](https://github.com/matrix-org/synapse/issues/6793))
Synapse 1.9.0 (2020-01-23)
==========================
**WARNING**: As of this release, Synapse no longer supports versions of SQLite before 3.11, and will refuse to start when configured to use an older version. Administrators are recommended to migrate their database to Postgres (see instructions [here](docs/postgres.md)).
If your Synapse deployment uses workers, note that the reverse-proxy configurations for the `synapse.app.media_repository`, `synapse.app.federation_reader` and `synapse.app.event_creator` workers have changed, with the addition of a few paths (see the updated configurations [here](docs/workers.md#available-worker-applications)). Existing configurations will continue to work.
Improved Documentation
----------------------
- Fix endpoint documentation for the List Rooms admin API. ([\#6770](https://github.com/matrix-org/synapse/issues/6770))
Synapse 1.9.0rc1 (2020-01-22)
=============================
Features
--------
- Allow admin to create or modify a user. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5742](https://github.com/matrix-org/synapse/issues/5742))
- Add new quarantine media admin APIs to quarantine by media ID or by user who uploaded the media. ([\#6681](https://github.com/matrix-org/synapse/issues/6681), [\#6756](https://github.com/matrix-org/synapse/issues/6756))
- Add `org.matrix.e2e_cross_signing` to `unstable_features` in `/versions` as per [MSC1756](https://github.com/matrix-org/matrix-doc/pull/1756). ([\#6712](https://github.com/matrix-org/synapse/issues/6712))
- Add a new admin API to list and filter rooms on the server. ([\#6720](https://github.com/matrix-org/synapse/issues/6720))
Bugfixes
--------
- Correctly proxy HTTP errors due to API calls to remote group servers. ([\#6654](https://github.com/matrix-org/synapse/issues/6654))
- Fix media repo admin APIs when using a media worker. ([\#6664](https://github.com/matrix-org/synapse/issues/6664))
- Fix "CRITICAL" errors being logged when a request is received for a uri containing non-ascii characters. ([\#6682](https://github.com/matrix-org/synapse/issues/6682))
- Fix a bug where we would assign a numeric user ID if somebody tried registering with an empty username. ([\#6690](https://github.com/matrix-org/synapse/issues/6690))
- Fix `purge_room` admin API. ([\#6711](https://github.com/matrix-org/synapse/issues/6711))
- Fix a bug causing Synapse to not always purge quiet rooms with a low `max_lifetime` in their message retention policies when running the automated purge jobs. ([\#6714](https://github.com/matrix-org/synapse/issues/6714))
- Fix the `synapse_port_db` script not correctly running background updates. Thanks @tadzik for reporting. ([\#6718](https://github.com/matrix-org/synapse/issues/6718))
- Fix changing password via user admin API. ([\#6730](https://github.com/matrix-org/synapse/issues/6730))
- Fix `/events/:event_id` deprecated API. ([\#6731](https://github.com/matrix-org/synapse/issues/6731))
- Fix monthly active user limiting support for worker mode, fixes [#4639](https://github.com/matrix-org/synapse/issues/4639). ([\#6742](https://github.com/matrix-org/synapse/issues/6742))
- Fix bug when setting `account_validity` to an empty block in the config. Thanks to @Sorunome for reporting. ([\#6747](https://github.com/matrix-org/synapse/issues/6747))
- Fix `AttributeError: 'NoneType' object has no attribute 'get'` in `hash_password` when configuration has an empty `password_config`. Contributed by @ivilata. ([\#6753](https://github.com/matrix-org/synapse/issues/6753))
- Fix the `docker-compose.yaml` overriding the entire `/etc` folder of the container. Contributed by Fabian Meyer. ([\#6656](https://github.com/matrix-org/synapse/issues/6656))
Improved Documentation
----------------------
- Fix a typo in the configuration example for purge jobs in the sample configuration file. ([\#6621](https://github.com/matrix-org/synapse/issues/6621))
- Add complete documentation of the message retention policies support. ([\#6624](https://github.com/matrix-org/synapse/issues/6624), [\#6665](https://github.com/matrix-org/synapse/issues/6665))
- Add some helpful tips about changelog entries to the GitHub pull request template. ([\#6663](https://github.com/matrix-org/synapse/issues/6663))
- Clarify the `account_validity` and `email` sections of the sample configuration. ([\#6685](https://github.com/matrix-org/synapse/issues/6685))
- Add more endpoints to the documentation for Synapse workers. ([\#6698](https://github.com/matrix-org/synapse/issues/6698))
Deprecations and Removals
-------------------------
- Synapse no longer supports versions of SQLite before 3.11, and will refuse to start when configured to use an older version. Administrators are recommended to migrate their database to Postgres (see instructions [here](docs/postgres.md)). ([\#6675](https://github.com/matrix-org/synapse/issues/6675))
Internal Changes
----------------
- Add `local_current_membership` table for tracking local user membership state in rooms. ([\#6655](https://github.com/matrix-org/synapse/issues/6655), [\#6728](https://github.com/matrix-org/synapse/issues/6728))
- Port `synapse.replication.tcp` to async/await. ([\#6666](https://github.com/matrix-org/synapse/issues/6666))
- Fixup `synapse.replication` to pass mypy checks. ([\#6667](https://github.com/matrix-org/synapse/issues/6667))
- Allow `additional_resources` to implement `IResource` directly. ([\#6686](https://github.com/matrix-org/synapse/issues/6686))
- Allow REST endpoint implementations to raise a `RedirectException`, which will redirect the user's browser to a given location. ([\#6687](https://github.com/matrix-org/synapse/issues/6687))
- Updates and extensions to the module API. ([\#6688](https://github.com/matrix-org/synapse/issues/6688))
- Updates to the SAML mapping provider API. ([\#6689](https://github.com/matrix-org/synapse/issues/6689), [\#6723](https://github.com/matrix-org/synapse/issues/6723))
- Remove redundant `RegistrationError` class. ([\#6691](https://github.com/matrix-org/synapse/issues/6691))
- Don't block processing of incoming EDUs behind processing PDUs in the same transaction. ([\#6697](https://github.com/matrix-org/synapse/issues/6697))
- Remove duplicate check for the `session` query parameter on the `/auth/xxx/fallback/web` Client-Server endpoint. ([\#6702](https://github.com/matrix-org/synapse/issues/6702))
- Attempt to retry sending a transaction when we detect a remote server has come back online, rather than waiting for a transaction to be triggered by new data. ([\#6706](https://github.com/matrix-org/synapse/issues/6706))
- Add `StateMap` type alias to simplify types. ([\#6715](https://github.com/matrix-org/synapse/issues/6715))
- Add a `DeltaState` to track changes to be made to current state during event persistence. ([\#6716](https://github.com/matrix-org/synapse/issues/6716))
- Add more logging around message retention policies support. ([\#6717](https://github.com/matrix-org/synapse/issues/6717))
- When processing a SAML response, log the assertions for easier configuration. ([\#6724](https://github.com/matrix-org/synapse/issues/6724))
- Fixup `synapse.rest` to pass mypy. ([\#6732](https://github.com/matrix-org/synapse/issues/6732), [\#6764](https://github.com/matrix-org/synapse/issues/6764))
- Fixup `synapse.api` to pass mypy. ([\#6733](https://github.com/matrix-org/synapse/issues/6733))
- Allow streaming cache 'invalidate all' to workers. ([\#6749](https://github.com/matrix-org/synapse/issues/6749))
- Remove unused CI docker compose files. ([\#6754](https://github.com/matrix-org/synapse/issues/6754))
Synapse 1.8.0 (2020-01-09)
==========================
**WARNING**: As of this release Synapse will refuse to start if the `log_file` config option is specified. Support for the option was removed in v1.3.0.
Bugfixes
--------
- Fix `GET` request on `/_synapse/admin/v2/users` endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#6563](https://github.com/matrix-org/synapse/issues/6563))
- Fix incorrect signing of responses from the key server implementation. ([\#6657](https://github.com/matrix-org/synapse/issues/6657))
Synapse 1.8.0rc1 (2020-01-07)
=============================
Features
--------
- Add v2 APIs for the `send_join` and `send_leave` federation endpoints (as described in [MSC1802](https://github.com/matrix-org/matrix-doc/pull/1802)). ([\#6349](https://github.com/matrix-org/synapse/issues/6349))
- Add a develop script to generate full SQL schemas. ([\#6394](https://github.com/matrix-org/synapse/issues/6394))
- Add custom SAML username mapping functionality through an external provider plugin. ([\#6411](https://github.com/matrix-org/synapse/issues/6411))
- Automatically delete empty groups/communities. ([\#6453](https://github.com/matrix-org/synapse/issues/6453))
- Add option `limit_profile_requests_to_users_who_share_rooms` to prevent requirement of a local user sharing a room with another user to query their profile information. ([\#6523](https://github.com/matrix-org/synapse/issues/6523))
- Add an `export_signing_key` script to extract the public part of signing keys when rotating them. ([\#6546](https://github.com/matrix-org/synapse/issues/6546))
- Add experimental config option to specify multiple databases. ([\#6580](https://github.com/matrix-org/synapse/issues/6580))
- Raise an error if someone tries to use the `log_file` config option. ([\#6626](https://github.com/matrix-org/synapse/issues/6626))
Bugfixes
--------
- Prevent redacted events from being returned during message search. ([\#6377](https://github.com/matrix-org/synapse/issues/6377), [\#6522](https://github.com/matrix-org/synapse/issues/6522))
- Prevent error on trying to search a upgraded room when the server is not in the predecessor room. ([\#6385](https://github.com/matrix-org/synapse/issues/6385))
- Improve performance of looking up cross-signing keys. ([\#6486](https://github.com/matrix-org/synapse/issues/6486))
- Fix race which occasionally caused deleted devices to reappear. ([\#6514](https://github.com/matrix-org/synapse/issues/6514))
- Fix missing row in `device_max_stream_id` that could cause "unable to decrypt" errors after server restart. ([\#6555](https://github.com/matrix-org/synapse/issues/6555))
- Fix a bug which meant that we did not send systemd notifications on startup if acme was enabled. ([\#6571](https://github.com/matrix-org/synapse/issues/6571))
- Fix exception when fetching the `matrix.org:ed25519:auto` key. ([\#6625](https://github.com/matrix-org/synapse/issues/6625))
- Fix bug where a moderator upgraded a room and became an admin in the new room. ([\#6633](https://github.com/matrix-org/synapse/issues/6633))
- Fix an error which was thrown by the `PresenceHandler` `_on_shutdown` handler. ([\#6640](https://github.com/matrix-org/synapse/issues/6640))
- Fix exceptions in the synchrotron worker log when events are rejected. ([\#6645](https://github.com/matrix-org/synapse/issues/6645))
- Ensure that upgraded rooms are removed from the directory. ([\#6648](https://github.com/matrix-org/synapse/issues/6648))
- Fix a bug causing Synapse not to fetch missing events when it believes it has every event in the room. ([\#6652](https://github.com/matrix-org/synapse/issues/6652))
Improved Documentation
----------------------
- Document the Room Shutdown Admin API. ([\#6541](https://github.com/matrix-org/synapse/issues/6541))
- Reword sections of [docs/federate.md](docs/federate.md) that explained delegation at time of Synapse 1.0 transition. ([\#6601](https://github.com/matrix-org/synapse/issues/6601))
- Added the section 'Configuration' in [docs/turn-howto.md](docs/turn-howto.md). ([\#6614](https://github.com/matrix-org/synapse/issues/6614))
Deprecations and Removals
-------------------------
- Remove redundant code from event authorisation implementation. ([\#6502](https://github.com/matrix-org/synapse/issues/6502))
- Remove unused, undocumented `/_matrix/content` API. ([\#6628](https://github.com/matrix-org/synapse/issues/6628))
Internal Changes
----------------
- Add *experimental* support for multiple physical databases and split out state storage to separate data store. ([\#6245](https://github.com/matrix-org/synapse/issues/6245), [\#6510](https://github.com/matrix-org/synapse/issues/6510), [\#6511](https://github.com/matrix-org/synapse/issues/6511), [\#6513](https://github.com/matrix-org/synapse/issues/6513), [\#6564](https://github.com/matrix-org/synapse/issues/6564), [\#6565](https://github.com/matrix-org/synapse/issues/6565))
- Port sections of code base to async/await. ([\#6496](https://github.com/matrix-org/synapse/issues/6496), [\#6504](https://github.com/matrix-org/synapse/issues/6504), [\#6505](https://github.com/matrix-org/synapse/issues/6505), [\#6517](https://github.com/matrix-org/synapse/issues/6517), [\#6559](https://github.com/matrix-org/synapse/issues/6559), [\#6647](https://github.com/matrix-org/synapse/issues/6647), [\#6653](https://github.com/matrix-org/synapse/issues/6653))
- Remove `SnapshotCache` in favour of `ResponseCache`. ([\#6506](https://github.com/matrix-org/synapse/issues/6506))
- Silence mypy errors for files outside those specified. ([\#6512](https://github.com/matrix-org/synapse/issues/6512))
- Clean up some logging when handling incoming events over federation. ([\#6515](https://github.com/matrix-org/synapse/issues/6515))
- Test more folders against mypy. ([\#6534](https://github.com/matrix-org/synapse/issues/6534))
- Update `mypy` to new version. ([\#6537](https://github.com/matrix-org/synapse/issues/6537))
- Adjust the sytest blacklist for worker mode. ([\#6538](https://github.com/matrix-org/synapse/issues/6538))
- Remove unused `get_pagination_rows` methods from `EventSource` classes. ([\#6557](https://github.com/matrix-org/synapse/issues/6557))
- Clean up logs from the push notifier at startup. ([\#6558](https://github.com/matrix-org/synapse/issues/6558))
- Improve diagnostics on database upgrade failure. ([\#6570](https://github.com/matrix-org/synapse/issues/6570))
- Reduce the reconnect time when worker replication fails, to make it easier to catch up. ([\#6617](https://github.com/matrix-org/synapse/issues/6617))
- Simplify http handling by removing redundant `SynapseRequestFactory`. ([\#6619](https://github.com/matrix-org/synapse/issues/6619))
- Add a workaround for synapse raising exceptions when fetching the notary's own key from the notary. ([\#6620](https://github.com/matrix-org/synapse/issues/6620))
- Automate generation of the sample log config. ([\#6627](https://github.com/matrix-org/synapse/issues/6627))
- Simplify event creation code by removing redundant queries on the `event_reference_hashes` table. ([\#6629](https://github.com/matrix-org/synapse/issues/6629))
- Fix errors when `frozen_dicts` are enabled. ([\#6642](https://github.com/matrix-org/synapse/issues/6642))
Synapse 1.7.3 (2019-12-31)
==========================
This release fixes a long-standing bug in the state resolution algorithm.
Bugfixes
--------
- Fix exceptions caused by state resolution choking on malformed events. ([\#6608](https://github.com/matrix-org/synapse/issues/6608))
Synapse 1.7.2 (2019-12-20)
==========================

View File

@@ -101,8 +101,8 @@ in the format of `PRnumber.type`. The type can be one of the following:
The content of the file is your changelog entry, which should be a short
description of your change in the same style as the rest of our [changelog](
https://github.com/matrix-org/synapse/blob/master/CHANGES.md). The file can
contain Markdown formatting, and should end with a full stop ('.') for
consistency.
contain Markdown formatting, and should end with a full stop (.) or an
exclamation mark (!) for consistency.
Adding credits to the changelog is encouraged; we value your
contributions and would like to have you shouted out in the release notes!

View File

@@ -133,6 +133,11 @@ sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
sudo yum groupinstall "Development Tools"
```
Note that Synapse does not support versions of SQLite before 3.11, and CentOS 7
uses SQLite 3.7. You may be able to work around this by installing a more
recent SQLite version, but it is recommended that you instead use a Postgres
database: see [docs/postgres.md](docs/postgres.md).
#### macOS
Installing prerequisites on macOS:

View File

@@ -393,4 +393,4 @@ something like the following in their logs::
2019-09-11 19:32:04,271 - synapse.federation.transport.server - 288 - WARNING - GET-11752 - authenticate_request failed: 401: Invalid signature for server <server> with key ed25519:a_EqML: Unable to verify signature for <server>
This is normally caused by a misconfiguration in your reverse-proxy. See
`<docs/reverse_proxy.rst>`_ and double-check that your settings are correct.
`<docs/reverse_proxy.md>`_ and double-check that your settings are correct.

View File

@@ -75,6 +75,24 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.10.0
====================
Synapse will now log a warning on start up if used with a PostgreSQL database
that has a non-recommended locale set.
See `docs/postgres.md <docs/postgres.md>`_ for details.
Upgrading to v1.8.0
===================
Specifying a ``log_file`` config option will now cause Synapse to refuse to
start, and should be replaced with the ``log_config`` option. Support for
the ``log_file`` option was removed in v1.3.0 and has since had no effect.
Upgrading to v1.7.0
===================

View File

@@ -18,7 +18,7 @@ services:
- SYNAPSE_CONFIG_PATH=/etc/homeserver.yaml
volumes:
# You may either store all the files in a local folder
- ./matrix-config:/etc
- ./matrix-config/homeserver.yaml:/etc/homeserver.yaml
- ./files:/data
# .. or you may split this between different storage points
# - ./files:/data

View File

@@ -85,6 +85,9 @@ PYTHONPATH="$tmpdir" \
' > "${PACKAGE_BUILD_DIR}/etc/matrix-synapse/homeserver.yaml"
# build the log config file
"${TARGET_PYTHON}" -B "${VIRTUALENV_DIR}/bin/generate_log_config" \
--output-file="${PACKAGE_BUILD_DIR}/etc/matrix-synapse/log.yaml"
# add a dependency on the right version of python to substvars.
PYPKG=`basename $SNAKE`

debian/changelog vendored
View File

@@ -1,3 +1,37 @@
matrix-synapse-py3 (1.10.0) stable; urgency=medium
* New synapse release 1.10.0.
-- Synapse Packaging team <packages@matrix.org> Wed, 12 Feb 2020 12:18:54 +0000
matrix-synapse-py3 (1.9.1) stable; urgency=medium
* New synapse release 1.9.1.
-- Synapse Packaging team <packages@matrix.org> Tue, 28 Jan 2020 13:09:23 +0000
matrix-synapse-py3 (1.9.0) stable; urgency=medium
* New synapse release 1.9.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 23 Jan 2020 12:56:31 +0000
matrix-synapse-py3 (1.8.0) stable; urgency=medium
[ Richard van der Hoff ]
* Automate generation of the default log configuration file.
[ Synapse Packaging team ]
* New synapse release 1.8.0.
-- Synapse Packaging team <packages@matrix.org> Thu, 09 Jan 2020 11:39:27 +0000
matrix-synapse-py3 (1.7.3) stable; urgency=medium
* New synapse release 1.7.3.
-- Synapse Packaging team <packages@matrix.org> Tue, 31 Dec 2019 10:45:04 +0000
matrix-synapse-py3 (1.7.2) stable; urgency=medium
* New synapse release 1.7.2.

debian/install vendored
View File

@@ -1,2 +1 @@
debian/log.yaml etc/matrix-synapse
debian/manage_debconf.pl /opt/venvs/matrix-synapse/lib/

debian/log.yaml vendored
View File

@@ -1,36 +0,0 @@
version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s- %(message)s'
filters:
context:
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
file:
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: /var/log/matrix-synapse/homeserver.log
maxBytes: 104857600
backupCount: 10
filters: [context]
encoding: utf8
console:
class: logging.StreamHandler
formatter: precise
level: WARN
loggers:
synapse:
level: INFO
synapse.storage.SQL:
level: INFO
root:
level: INFO
handlers: [file, console]

View File

@@ -16,7 +16,7 @@ ARG PYTHON_VERSION=3.7
###
### Stage 0: builder
###
FROM docker.io/python:${PYTHON_VERSION}-alpine3.10 as builder
FROM docker.io/python:${PYTHON_VERSION}-alpine3.11 as builder
# install the OS build deps

View File

@@ -22,19 +22,81 @@ It returns a JSON body like the following:
}
```
# Quarantine media in a room
This API 'quarantines' all the media in a room.
The API is:
```
POST /_synapse/admin/v1/quarantine_media/<room_id>
{}
```
# Quarantine media
Quarantining media means that it is marked as inaccessible to users. It applies
to any local media, and any locally-cached copies of remote media.
The media file itself (and any thumbnails) is not deleted from the server.
## Quarantining media by ID
This API quarantines a single piece of local or remote media.
Request:
```
POST /_synapse/admin/v1/media/quarantine/<server_name>/<media_id>
{}
```
Where `server_name` is in the form of `example.org`, and `media_id` is in the
form of `abcdefg12345...`.
Response:
```
{}
```
## Quarantining media in a room
This API quarantines all local and remote media in a room.
Request:
```
POST /_synapse/admin/v1/room/<room_id>/media/quarantine
{}
```
Where `room_id` is in the form of `!roomid12345:example.org`.
Response:
```
{
"num_quarantined": 10 # The number of media items successfully quarantined
}
```
Note that there is a legacy endpoint, `POST
/_synapse/admin/v1/quarantine_media/<room_id>`, which behaves in the same way.
However, it is deprecated and may be removed in a future release.
## Quarantining all media of a user
This API quarantines all *local* media that a *local* user has uploaded. That is to say, if
you would like to quarantine media uploaded by a user on a remote homeserver, you should
instead use one of the other APIs.
Request:
```
POST /_synapse/admin/v1/user/<user_id>/media/quarantine
{}
```
Where `user_id` is in the form of `@bob:example.org`.
Response:
```
{
"num_quarantined": 10 # The number of media items successfully quarantined
}
```
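For illustration only, here is a rough Python sketch of driving the quarantine-by-ID endpoint described above; the homeserver URL, admin access token, and media ID are placeholder assumptions, and `requests` is assumed as the HTTP client.
```python
import requests

HOMESERVER = "https://matrix.example.org"  # assumption: your server's base URL
ADMIN_TOKEN = "syt_..."                    # assumption: an admin user's access token

def quarantine_media(server_name: str, media_id: str) -> dict:
    """POST to /_synapse/admin/v1/media/quarantine/<server_name>/<media_id>."""
    url = f"{HOMESERVER}/_synapse/admin/v1/media/quarantine/{server_name}/{media_id}"
    resp = requests.post(
        url,
        json={},  # the endpoint takes an empty JSON body
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()  # expected to be {} on success

print(quarantine_media("example.org", "abcdefg12345"))
```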

docs/admin_api/rooms.md Normal file
View File

@@ -0,0 +1,173 @@
# List Room API
The List Room admin API allows server admins to get a list of rooms on their
server. There are various parameters available that allow for filtering and
sorting the returned list. This API supports pagination.
## Parameters
The following query parameters are available:
* `from` - Offset in the returned list. Defaults to `0`.
* `limit` - Maximum number of rooms to return. Defaults to `100`.
* `order_by` - The method in which to sort the returned list of rooms. Valid values are:
- `alphabetical` - Rooms are ordered alphabetically by room name. This is the default.
- `size` - Rooms are ordered by the number of members. Largest to smallest.
* `dir` - Direction of room order. Either `f` for forwards or `b` for backwards. Setting
this value to `b` will reverse the above sort order. Defaults to `f`.
* `search_term` - Filter rooms by their room name. Search term can be contained in any
part of the room name. Defaults to no filtering.
The following fields are possible in the JSON response body:
* `rooms` - An array of objects, each containing information about a room.
- Room objects contain the following fields:
- `room_id` - The ID of the room.
- `name` - The name of the room.
- `canonical_alias` - The canonical (main) alias address of the room.
- `joined_members` - How many users are currently in the room.
* `offset` - The current pagination offset in rooms. This parameter should be
used instead of `next_batch` for room offset as `next_batch` is
not intended to be parsed.
* `total_rooms` - The total number of rooms this query can return. Using this
and `offset`, you have enough information to know the current
progression through the list.
* `next_batch` - If this field is present, we know that there are potentially
more rooms on the server that did not all fit into this response.
We can use `next_batch` to get the "next page" of results. To do
so, simply repeat your request, setting the `from` parameter to
the value of `next_batch`.
* `prev_batch` - If this field is present, it is possible to paginate backwards.
Use `prev_batch` for the `from` value in the next request to
get the "previous page" of results.
## Usage
A standard request with no filtering:
```
GET /_synapse/admin/v1/rooms
{}
```
Response:
```
{
"rooms": [
{
"room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
"name": "Matrix HQ",
"canonical_alias": "#matrix:matrix.org",
"joined_members": 8326
},
... (8 hidden items) ...
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314
}
],
"offset": 0,
"total_rooms": 10
}
```
Filtering by room name:
```
GET /_synapse/admin/v1/rooms?search_term=TWIM
{}
```
Response:
```
{
"rooms": [
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314
}
],
"offset": 0,
"total_rooms": 1
}
```
Paginating through a list of rooms:
```
GET /_synapse/admin/v1/rooms?order_by=size
{}
```
Response:
```
{
"rooms": [
{
"room_id": "!OGEhHVWSdvArJzumhm:matrix.org",
"name": "Matrix HQ",
"canonical_alias": "#matrix:matrix.org",
"joined_members": 8326
},
... (98 hidden items) ...
{
"room_id": "!xYvNcQPhnkrdUmYczI:matrix.org",
"name": "This Week In Matrix (TWIM)",
"canonical_alias": "#twim:matrix.org",
"joined_members": 314
}
],
"offset": 0,
"total_rooms": 150
"next_token": 100
}
```
The presence of the `next_batch` parameter tells us that there are more rooms
than returned in this request, and we need to make another request to get them.
To get the next batch of room results, we repeat our request, setting the `from`
parameter to the value of `next_batch`.
```
GET /_synapse/admin/v1/rooms?order_by=size&from=100
{}
```
Response:
```
{
"rooms": [
{
"room_id": "!mscvqgqpHYjBGDxNym:matrix.org",
"name": "Music Theory",
"canonical_alias": "#musictheory:matrix.org",
"joined_members": 127
},
... (48 hidden items) ...
{
"room_id": "!twcBhHVdZlQWuuxBhN:termina.org.uk",
"name": "weechat-matrix",
"canonical_alias": "#weechat-matrix:termina.org.uk",
"joined_members": 137
}
],
"offset": 100,
"prev_batch": 0,
"total_rooms": 150
}
```
Once the `next_batch` parameter is no longer present, we know we've reached the
end of the list.
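As a rough sketch (not part of the API itself), this pagination loop can be driven from a script by following `next_batch` until it disappears; the homeserver URL and admin access token below are placeholder assumptions, and `requests` is assumed as the HTTP client.
```python
import requests

HOMESERVER = "https://matrix.example.org"  # assumption
ADMIN_TOKEN = "syt_..."                    # assumption

def list_all_rooms(order_by: str = "size"):
    """Page through /_synapse/admin/v1/rooms until next_batch disappears."""
    headers = {"Authorization": f"Bearer {ADMIN_TOKEN}"}
    from_ = 0
    while True:
        resp = requests.get(
            f"{HOMESERVER}/_synapse/admin/v1/rooms",
            params={"order_by": order_by, "from": from_},
            headers=headers,
        )
        resp.raise_for_status()
        body = resp.json()
        yield from body["rooms"]
        if "next_batch" not in body:
            break  # no more pages
        from_ = body["next_batch"]

for room in list_all_rooms():
    print(room["room_id"], room["joined_members"])
```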

View File

@@ -0,0 +1,72 @@
# Shutdown room API
Shuts down a room, preventing new joins and automatically moving local users and
room aliases to a new room. The new room will be created with the user specified by the
`new_room_user_id` parameter as room administrator and will contain a message
explaining what happened. Users invited to the new room will have power level
-10 by default, and thus be unable to speak. The old room's power levels will be changed to
disallow any further invites or joins.
The local server will only have the power to move local users and room aliases to
the new room. Users on other servers will be unaffected.
## API
You will need to authenticate with an access token for an admin user.
### URL
`POST /_synapse/admin/v1/shutdown_room/{room_id}`
### URL Parameters
* `room_id` - The ID of the room (e.g `!someroom:example.com`)
### JSON Body Parameters
* `new_room_user_id` - Required. A string representing the user ID of the user that will admin
the new room that all users in the old room will be moved to.
* `room_name` - Optional. A string representing the name of the room that new users will be
invited to.
* `message` - Optional. A string containing the first message that will be sent as
`new_room_user_id` in the new room. Ideally this will clearly convey why the
original room was shut down.
If not specified, the default value of `room_name` is "Content Violation
Notification". The default value of `message` is "Sharing illegal content on
this server is not permitted and rooms in violation will be blocked."
### Response Parameters
* `kicked_users` - An integer number representing the number of users that
were kicked.
* `failed_to_kick_users` - An integer number representing the number of users
that were not kicked.
* `local_aliases` - An array of strings representing the local aliases that were migrated from
the old room to the new.
* `new_room_id` - A string representing the room ID of the new room.
## Example
Request:
```
POST /_synapse/admin/v1/shutdown_room/!somebadroom%3Aexample.com
{
"new_room_user_id": "@someuser:example.com",
"room_name": "Content Violation Notification",
"message": "Bad Room has been shutdown due to content violations on this server. Please review our Terms of Service."
}
```
Response:
```
{
"kicked_users": 5,
"failed_to_kick_users": 0,
"local_aliases": ["#badroom:example.com", "#evilsaloon:example.com],
"new_room_id": "!newroomid:example.com",
},
```
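For illustration, here is a hedged Python sketch of invoking this endpoint, assuming a `requests`-based client and placeholder homeserver URL and admin token; note the room ID must be URL-encoded for the path (the example above encodes `:` as `%3A`).
```python
import requests
from urllib.parse import quote

HOMESERVER = "https://matrix.example.org"  # assumption
ADMIN_TOKEN = "syt_..."                    # assumption

def shutdown_room(room_id: str, new_room_user_id: str) -> dict:
    # URL-encode the room ID so it is safe to use as a path segment.
    url = f"{HOMESERVER}/_synapse/admin/v1/shutdown_room/{quote(room_id, safe='')}"
    resp = requests.post(
        url,
        json={"new_room_user_id": new_room_user_id},
        headers={"Authorization": f"Bearer {ADMIN_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()

result = shutdown_room("!somebadroom:example.com", "@someuser:example.com")
print(result["new_room_id"], result["kicked_users"])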

View File

@@ -1,3 +1,33 @@
Create or modify Account
========================
This API allows an administrator to create or modify a user account with a
specific ``user_id``.
This API is::
PUT /_synapse/admin/v2/users/<user_id>
with a body of:
.. code:: json
{
"password": "user_password",
"displayname": "User",
"avatar_url": "<avatar_url>",
"admin": false,
"deactivated": false
}
including an ``access_token`` of a server admin.
The parameter ``displayname`` is optional and defaults to ``user_id``.
The parameter ``avatar_url`` is optional.
The parameter ``admin`` is optional and defaults to 'false'.
The parameter ``deactivated`` is optional and defaults to 'false'.
If the user already exists then optional parameters default to the current value.
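As a loose illustration (not from the upstream docs), a Python sketch of this call, assuming a ``requests``-based client with a placeholder homeserver URL and admin access token:

.. code:: python

    import requests
    from urllib.parse import quote

    HOMESERVER = "https://matrix.example.org"  # assumption: base URL
    ADMIN_TOKEN = "syt_..."                    # assumption: admin access token

    def upsert_user(user_id, password, displayname):
        # PUT /_synapse/admin/v2/users/<user_id>, URL-encoding the user ID.
        url = HOMESERVER + "/_synapse/admin/v2/users/" + quote(user_id, safe="")
        resp = requests.put(
            url,
            json={"password": password, "displayname": displayname,
                  "admin": False, "deactivated": False},
            headers={"Authorization": "Bearer " + ADMIN_TOKEN},
        )
        resp.raise_for_status()
        return resp.json()

    upsert_user("@alice:example.org", "s3cret", "Alice")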
List Accounts
=============
@@ -50,7 +80,8 @@ This API returns information about a specific user account.
The API is::
GET /_synapse/admin/v1/whois/<user_id>
GET /_synapse/admin/v1/whois/<user_id> (deprecated)
GET /_synapse/admin/v2/users/<user_id>
including an ``access_token`` of a server admin.

View File

@@ -137,6 +137,7 @@ Some guidelines follow:
correctly handles the top-level option being set to `None` (as it
will be if no sub-options are enabled).
- Lines should be wrapped at 80 characters.
- Use two-space indents.
Example:
@@ -155,13 +156,13 @@ Example:
# Settings for the frobber
#
frobber:
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing speed. Defaults to 1.
#
#speed: 10
# frobbing distance. Defaults to 1000.
#
#distance: 100
# frobbing distance. Defaults to 1000.
#
#distance: 100
Note that the sample configuration is generated from the synapse code
and is maintained by a script, `scripts-dev/generate_sample_config`.

View File

@@ -66,10 +66,6 @@ therefore cannot gain access to the necessary certificate. With .well-known,
federation servers will check for a valid TLS certificate for the delegated
hostname (in our example: ``synapse.example.com``).
.well-known support first appeared in Synapse v0.99.0. To federate with older
servers you may need to additionally configure SRV delegation. Alternatively,
encourage the server admin in question to upgrade :).
### DNS SRV delegation
To use this delegation method, you need to have write access to your
@@ -111,29 +107,15 @@ giving it a `server_name` of `example.com`, and once [ACME](acme.md) support is
it would automatically generate a valid TLS certificate for you via Let's Encrypt
and no SRV record or .well-known URI would be needed.
This is the common case, although you can add an SRV record or
`.well-known/matrix/server` URI for completeness if you wish.
**However**, if your server does not listen on port 8448, or if your `server_name`
does not point to the host that your homeserver runs on, you will need to let
other servers know how to find it. The way to do this is via .well-known or an
SRV record.
#### I have created a .well-known URI. Do I still need an SRV record?
#### I have created a .well-known URI. Do I also need an SRV record?
As of Synapse 0.99, Synapse will first check for the existence of a .well-known
URI and follow any delegation it suggests. It will only then check for the
existence of an SRV record.
That means that the SRV record will often be redundant. However, you should
remember that there may still be older versions of Synapse in the federation
which do not understand .well-known URIs, so if you removed your SRV record
you would no longer be able to federate with them.
It is therefore best to leave the SRV record in place for now. Synapse 0.34 and
earlier will follow the SRV record (and not care about the invalid
certificate). Synapse 0.99 and later will follow the .well-known URI, with the
correct certificate chain.
No. You can use either `.well-known` delegation or an SRV record for delegation. You
do not need to use both to delegate to the same location.
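As a rough sketch of the lookup a federating server performs, the snippet below fetches and parses a `.well-known/matrix/server` document; the `example.com` server name is a placeholder.
```python
import json
from urllib.request import urlopen

server_name = "example.com"  # placeholder server_name
with urlopen(f"https://{server_name}/.well-known/matrix/server") as resp:
    data = json.load(resp)

# A well-known response delegating federation traffic looks like:
#   {"m.server": "synapse.example.com:443"}
print(data.get("m.server"))
```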
#### Can I manage my own certificates rather than having Synapse renew certificates itself?

View File

@@ -0,0 +1,191 @@
# Message retention policies
Synapse admins can enable support for message retention policies on
their homeserver. Message retention policies exist at a room level,
follow the semantics described in
[MSC1763](https://github.com/matrix-org/matrix-doc/blob/matthew/msc1763/proposals/1763-configurable-retention-periods.md),
and allow server and room admins to configure how long messages should
be kept in a homeserver's database before being purged from it.
**Please note that, as this feature isn't part of the Matrix
specification yet, this implementation is to be considered as
experimental.**
A message retention policy is mainly defined by its `max_lifetime`
parameter, which defines how long a message can be kept around after
it was sent to the room. If a room doesn't have a message retention
policy, and there's no default one for a given server, then no message
sent in that room is ever purged on that server.
MSC1763 also specifies semantics for a `min_lifetime` parameter which
defines the amount of time after which an event _can_ get purged (after
it was sent to the room), but Synapse doesn't currently support it
beyond registering it.
Both `max_lifetime` and `min_lifetime` are optional parameters.
Note that message retention policies don't apply to state events.
Once an event reaches its expiry date (defined as the time it was sent
plus the value for `max_lifetime` in the room), two things happen:
* Synapse stops serving the event to clients via any endpoint.
* The message gets picked up by the next purge job (see the "Purge jobs"
section) and is removed from Synapse's database.
Since purge jobs don't run continuously, this means that an event might
stay in a server's database for longer than the value for `max_lifetime`
in the room would allow, though hidden from clients.
Similarly, if a server (with support for message retention policies
enabled) receives from another server an event that should have been
purged according to its room's policy, then the receiving server will
process and store that event until it's picked up by the next purge job,
though it will always hide it from clients.
## Server configuration
Support for this feature can be enabled and configured in the
`retention` section of the Synapse configuration file (see the
[sample file](https://github.com/matrix-org/synapse/blob/v1.7.3/docs/sample_config.yaml#L332-L393)).
To enable support for message retention policies, set the setting
`enabled` in this section to `true`.
### Default policy
A default message retention policy is a policy defined in Synapse's
configuration that is used by Synapse for every room that doesn't have a
message retention policy configured in its state. This allows server
admins to ensure that messages are never kept indefinitely in a server's
database.
A default policy can be defined as such, in the `retention` section of
the configuration file:
```yaml
default_policy:
min_lifetime: 1d
max_lifetime: 1y
```
Here, `min_lifetime` and `max_lifetime` have the same meaning and level
of support as previously described. They can be expressed either as a
duration (using the units `s` (seconds), `m` (minutes), `h` (hours),
`d` (days), `w` (weeks) and `y` (years)) or as a number of milliseconds.
### Purge jobs
Purge jobs are the jobs that Synapse runs in the background to purge
expired events from the database. They are only run if support for
message retention policies is enabled in the server's configuration. If
no configuration for purge jobs is configured by the server admin,
Synapse will use a default configuration, which is described in the
[sample configuration file](https://github.com/matrix-org/synapse/blob/master/docs/sample_config.yaml#L332-L393).
Some server admins might want a finer control on when events are removed
depending on an event's room's policy. This can be done by setting the
`purge_jobs` sub-section in the `retention` section of the configuration
file. An example of such configuration could be:
```yaml
purge_jobs:
- longest_max_lifetime: 3d
interval: 12h
- shortest_max_lifetime: 3d
longest_max_lifetime: 1w
interval: 1d
- shortest_max_lifetime: 1w
interval: 2d
```
In this example, we define three jobs:
* one that runs twice a day (every 12 hours) and purges events in rooms
whose policy's `max_lifetime` is lower than or equal to 3 days.
* one that runs once a day and purges events in rooms whose policy's
`max_lifetime` is between 3 days and a week.
* one that runs once every 2 days and purges events in rooms whose
policy's `max_lifetime` is greater than a week.
Note that this example is tailored to show different configurations and
features slightly more jobs than is probably necessary (in practice, a
server admin would probably consider it better to replace the last two
jobs with one that runs once a day and handles rooms whose
policy's `max_lifetime` is greater than 3 days).
Keep in mind, when configuring these jobs, that a purge job can become
quite heavy on the server if it targets many rooms; therefore, prefer
having jobs with a low interval that target a limited set of rooms. Also
make sure to include a job with no minimum and one with no maximum to
make sure your configuration handles every policy.
As previously mentioned in this documentation, while a purge job that
runs e.g. every day means that an expired event might stay in the
database for up to a day after its expiry, Synapse hides expired events
from clients as soon as they expire, so the event is not visible to
local users between its expiry date and the moment it gets purged from
the server's database.
### Lifetime limits
**Note: this feature is mainly useful within a closed federation or on
servers that don't federate, because there currently is no way to
enforce these limits in an open federation.**
Server admins can restrict the values their local users are allowed to
use for both `min_lifetime` and `max_lifetime`. These limits can be
defined as such in the `retention` section of the configuration file:
```yaml
allowed_lifetime_min: 1d
allowed_lifetime_max: 1y
```
Here, `allowed_lifetime_min` is the lowest value a local user can set
for both `min_lifetime` and `max_lifetime`, and `allowed_lifetime_max`
is the highest value. Both parameters are optional (e.g. setting
`allowed_lifetime_min` but not `allowed_lifetime_max` only enforces a
minimum and no maximum).
Like other settings in this section, these parameters can be expressed
either as a duration or as a number of milliseconds.
## Room configuration
To configure a room's message retention policy, a room's admin or
moderator needs to send a state event in that room with the type
`m.room.retention` and the following content:
```json
{
"max_lifetime": ...
}
```
In this event's content, the `max_lifetime` parameter has the same
meaning as previously described, and needs to be expressed in
milliseconds. The event's content can also include a `min_lifetime`
parameter, which has the same meaning and limited support as previously
described.
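For illustration, here is a minimal Python sketch of sending this state event (with an empty state key) through the Client-Server API; the homeserver URL and access token are placeholder assumptions, and `requests` is assumed as the HTTP client.
```python
import requests
from urllib.parse import quote

HOMESERVER = "https://matrix.example.org"  # assumption
ACCESS_TOKEN = "syt_..."                   # assumption: a room admin/moderator's token

def set_retention_policy(room_id: str, max_lifetime_ms: int) -> dict:
    # PUT an m.room.retention state event via the CS API.
    url = (
        f"{HOMESERVER}/_matrix/client/r0/rooms/{quote(room_id, safe='')}"
        "/state/m.room.retention"
    )
    resp = requests.put(
        url,
        json={"max_lifetime": max_lifetime_ms},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    )
    resp.raise_for_status()
    return resp.json()  # contains the event_id of the new state event

# One week, expressed in milliseconds as the event content requires:
set_retention_policy("!someroom:example.com", 7 * 24 * 60 * 60 * 1000)
```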
Note that, out of all the servers in the room, only the ones with support for
message retention policies will actually remove expired events. This
support is currently not enabled by default in Synapse.
## Note on reclaiming disk space
While purge jobs actually delete data from the database, the disk space
used by the database might not decrease immediately on the database's
host. However, even though the database engine won't free up the disk
space, it will start writing new data into where the purged data was.
If you want to reclaim the freed disk space anyway and return it to the
operating system, the server admin needs to run `VACUUM FULL;` (or
`VACUUM;` for SQLite databases) on Synapse's database (see the related
[PostgreSQL documentation](https://www.postgresql.org/docs/current/sql-vacuum.html)).

View File

@@ -32,7 +32,7 @@ Assuming your PostgreSQL database user is called `postgres`, first authenticate
su - postgres
# Or, if your system uses sudo to get administrative rights
sudo -u postgres bash
Then, create a user ``synapse_user`` with:
createuser --pwprompt synapse_user
@@ -63,6 +63,24 @@ You may need to enable password authentication so `synapse_user` can
connect to the database. See
<https://www.postgresql.org/docs/11/auth-pg-hba-conf.html>.
### Fixing incorrect `COLLATE` or `CTYPE`
Synapse will refuse to set up a new database if it has the wrong values of
`COLLATE` and `CTYPE` set, and will log warnings on existing databases. Using
different locales can cause issues if the locale library is updated from
underneath the database, or if a different version of the locale is used on any
replicas.
The safest way to fix the issue is to take a dump and recreate the database with
the correct `COLLATE` and `CTYPE` parameters (as per
[docs/postgres.md](docs/postgres.md)). It is also possible to change the
parameters on a live database and run a `REINDEX` on the entire database,
however extreme care must be taken to avoid database corruption.
Note that the above may fail with an error about duplicate rows if corruption
has already occurred, and such duplicate rows will need to be manually removed.
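As a quick way to check (not fix) the current values, the sketch below queries `pg_database` via `psycopg2`; the database name and credentials are placeholder assumptions.
```python
import psycopg2

conn = psycopg2.connect(dbname="synapse", user="synapse_user", password="...")
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT datcollate, datctype FROM pg_database"
        " WHERE datname = current_database()"
    )
    collate, ctype = cur.fetchone()
    # Synapse expects both of these to be 'C'.
    print(f"COLLATE={collate} CTYPE={ctype}")
conn.close()
```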
## Tuning Postgres
The default settings should be fine for most deployments. For larger

View File

@@ -0,0 +1,77 @@
# SAML Mapping Providers
A SAML mapping provider is a Python class (loaded via a Python module) that
works out how to map attributes of a SAML response object to Matrix-specific
user attributes. Details such as user ID localpart, displayname, and even avatar
URLs are all things that can be mapped from talking to an SSO service.
As an example, an SSO service may return the email address
"john.smith@example.com" for a user, whereas Synapse will need to figure out how
to turn that into a displayname when creating a Matrix user for this individual.
It may choose `John Smith`, or `Smith, John [Example.com]` or any number of
variations. As each Synapse configuration may want something different, this is
where SAML mapping providers come into play.
## Enabling Providers
External mapping providers are provided to Synapse in the form of an external
Python module. Retrieve this module from [PyPI](https://pypi.org) or elsewhere,
then tell Synapse where to look for the handler class by editing the
`saml2_config.user_mapping_provider.module` config option.
`saml2_config.user_mapping_provider.config` allows you to provide custom
configuration options to the module. Check with the module's documentation for
what options it provides (if any). The options listed by default are for the
user mapping provider built into Synapse. If using a custom module, you should
comment these options out and use those specified by the module instead.
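For instance, a homeserver config pointing at a hypothetical custom module (the module path and option name below are invented placeholders) might contain:
```yaml
saml2_config:
  user_mapping_provider:
    module: "my_module.ExampleSamlMappingProvider"
    config:
      source_attribute: "uid"
```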
## Building a Custom Mapping Provider
A custom mapping provider must specify the following methods (a minimal
sketch implementing them follows this list):

* `__init__(self, parsed_config)`
   - Arguments:
     - `parsed_config` - A configuration object that is the return value of the
       `parse_config` method. You should set any configuration options needed by
       the module here.
* `saml_response_to_user_attributes(self, saml_response, failures)`
   - Arguments:
     - `saml_response` - A `saml2.response.AuthnResponse` object to extract user
       information from.
     - `failures` - An `int` that represents the number of times the returned
       mxid localpart mapping has failed. This should be used
       to create a deduplicated mxid localpart which should be
       returned instead. For example, if this method returns
       `john.doe` as the value of `mxid_localpart` in the returned
       dict, and that is already taken on the homeserver, this
       method will be called again with the same parameters but
       with `failures=1`. The method should then return a different
       `mxid_localpart` value, such as `john.doe1`.
   - This method must return a dictionary, which will then be used by Synapse
     to build a new user. The following keys are allowed:
     * `mxid_localpart` - Required. The mxid localpart of the new user.
     * `displayname` - The displayname of the new user. If not provided, will default to
       the value of `mxid_localpart`.
* `parse_config(config)`
   - This method should have the `@staticmethod` decorator.
   - Arguments:
     - `config` - A `dict` representing the parsed content of the
       `saml2_config.user_mapping_provider.config` homeserver config option.
       Runs on homeserver startup. Providers should extract any option values
       they need here.
   - Whatever is returned will be passed back to the user mapping provider module's
     `__init__` method during construction.
* `get_saml_attributes(config)`
   - This method should have the `@staticmethod` decorator.
   - Arguments:
     - `config` - An object resulting from a call to `parse_config`.
   - Returns a tuple of two sets. The first set contains the SAML auth
     response attributes that are required for the module to function, while
     the second set contains those attributes which can be used if available
     but are not necessary.
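Putting these methods together, a minimal provider might look like the sketch below. This is purely illustrative and is not Synapse's built-in provider: the class name, the `source_attribute` option, and the mapping logic are all invented for the example, and error handling is elided.
```python
import re


class ExampleSamlMappingProvider:
    """A minimal, illustrative SAML mapping provider (names are invented)."""

    def __init__(self, parsed_config):
        # parsed_config is whatever parse_config() returned below.
        self._source_attribute = parsed_config["source_attribute"]

    @staticmethod
    def parse_config(config):
        # Runs at homeserver startup; extract the options we care about.
        return {"source_attribute": config.get("source_attribute", "uid")}

    @staticmethod
    def get_saml_attributes(config):
        # The source attribute is required; displayName is merely nice to have.
        return {config["source_attribute"]}, {"displayName"}

    def saml_response_to_user_attributes(self, saml_response, failures):
        # Derive a localpart from the configured SAML attribute,
        # dot-replacing characters not permitted in an mxid localpart.
        value = saml_response.ava[self._source_attribute][0]
        localpart = re.sub(r"[^a-z0-9._=/-]", ".", value.lower())
        if failures:
            # A previous return value collided with an existing mxid:
            # deduplicate by appending the failure count.
            localpart += str(failures)
        attributes = {"mxid_localpart": localpart}
        display_names = saml_response.ava.get("displayName", [])
        if display_names:
            attributes["displayname"] = display_names[0]
        return attributes
```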
## Synapse's Default Provider
Synapse ships with a built-in SAML mapping provider, which is used if a custom
provider isn't specified in the config. It is located at
[`synapse.handlers.saml_handler.DefaultSamlMappingProvider`](../synapse/handlers/saml_handler.py).


@@ -54,6 +54,13 @@ pid_file: DATADIR/homeserver.pid
#
#require_auth_for_profile_requests: true
# Uncomment to require a user to share a room with another user in order
# to retrieve their profile information. Only checked on Client-Server
# requests. Profile requests from other servers should be checked by the
# requesting server. Defaults to 'false'.
#
#limit_profile_requests_to_users_who_share_rooms: true
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
@@ -380,17 +387,17 @@ retention:
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
# of outdated messages on a very frequent basis (e.g. every 5min), but not want
# that purge to be performed by a job that's iterating over every room it knows,
# which would be quite heavy on the server.
# of outdated messages on a more frequent basis than for the rest of the rooms
# (e.g. every 12h), but not want that purge to be performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 5m
# interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 24h
# interval: 1d
## TLS ##
@@ -685,10 +692,6 @@ media_store_path: "DATADIR/media_store"
# config:
# directory: /mnt/some/other/directory
# Directory where in-progress uploads are stored.
#
uploads_path: "DATADIR/uploads"
# The largest allowed upload size in bytes
#
#max_upload_size: 10M
@@ -871,23 +874,6 @@ uploads_path: "DATADIR/uploads"
# Optional account validity configuration. This allows for accounts to be denied
# any request after a given period.
#
# ``enabled`` defines whether the account validity feature is enabled. Defaults
# to False.
#
# ``period`` allows setting the period after which an account is valid
# after its registration. When renewing the account, its validity period
# will be extended by this amount of time. This parameter is required when using
# the account validity feature.
#
# ``renew_at`` is the amount of time before an account's expiry date at which
# Synapse will send an email to the account's email address with a renewal link.
# This needs the ``email`` and ``public_baseurl`` configuration sections to be
# filled.
#
# ``renew_email_subject`` is the subject of the email sent out with the renewal
# link. ``%(app)s`` can be used as a placeholder for the ``app_name`` parameter
# from the ``email`` section.
#
# Once this feature is enabled, Synapse will look for registered users without an
# expiration date at startup and will add one to every account it found using the
# current settings at that time.
@@ -898,21 +884,55 @@ uploads_path: "DATADIR/uploads"
# date will be randomly selected within a range [now + period - d ; now + period],
# where d is equal to 10% of the validity period.
#
#account_validity:
# enabled: true
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %(app)s account"
# # Directory in which Synapse will try to find the HTML files to serve to the
# # user when trying to renew an account. Optional, defaults to
# # synapse/res/templates.
# template_dir: "res/templates"
# # HTML to be displayed to the user after they successfully renewed their
# # account. Optional.
# account_renewed_html_path: "account_renewed.html"
# # HTML to be displayed when the user tries to renew an account with an invalid
# # renewal token. Optional.
# invalid_token_html_path: "invalid_token.html"
account_validity:
# The account validity feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# The period after which an account is valid after its registration. When
# renewing the account, its validity period will be extended by this amount
# of time. This parameter is required when using the account validity
# feature.
#
#period: 6w
# The amount of time before an account's expiry date at which Synapse will
# send an email to the account's email address with a renewal link. By
# default, no such emails are sent.
#
# If you enable this setting, you will also need to fill out the 'email' and
# 'public_baseurl' configuration sections.
#
#renew_at: 1w
# The subject of the email sent out with the renewal link. '%(app)s' can be
# used as a placeholder for the 'app_name' parameter from the 'email'
# section.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
# If this is not set, a default value is used.
#
#renew_email_subject: "Renew your %(app)s account"
# Directory in which Synapse will try to find templates for the HTML files to
# serve to the user when trying to renew an account. If not set, default
# templates from within the Synapse package will be used.
#
#template_dir: "res/templates"
# File within 'template_dir' giving the HTML to be displayed to the user after
# they successfully renewed their account. If not set, default text is used.
#
#account_renewed_html_path: "account_renewed.html"
# File within 'template_dir' giving the HTML to be displayed when the user
# tries to renew an account with an invalid renewal token. If not set,
# default text is used.
#
#invalid_token_html_path: "invalid_token.html"
# Time that a user's session remains valid for, after they log in.
#
@@ -1115,14 +1135,19 @@ metrics_flags:
signing_key_path: "CONFDIR/SERVERNAME.signing.key"
# The keys that the server used to sign messages with but won't use
# to sign new messages. E.g. it has lost its private key
# to sign new messages.
#
#old_signing_keys:
# "ed25519:auto":
# # Base64 encoded public key
# key: "The public part of your old signing key."
# # Millisecond POSIX timestamp when the key expired.
# expired_ts: 123456789123
old_signing_keys:
# For each key, `key` should be the base64-encoded public key, and
# `expired_ts` should be the time (in milliseconds since the unix epoch) that
# it was last used.
#
# It is possible to build an entry from an old signing.key file using the
# `export_signing_key` script which is provided with synapse.
#
# For example:
#
#"ed25519:id": { key: "base64string", expired_ts: 123456789123 }
# How long a key response published by this server is valid for.
# Used to set the valid_until_ts in /key/v2 APIs.
@@ -1250,33 +1275,58 @@ saml2_config:
#
#config_path: "CONFDIR/sp_conf.py"
# the lifetime of a SAML session. This defines how long a user has to
# The lifetime of a SAML session. This defines how long a user has to
# complete the authentication process, if allow_unsolicited is unset.
# The default is 5 minutes.
#
#saml_session_lifetime: 5m
# The SAML attribute (after mapping via the attribute maps) to use to derive
# the Matrix ID from. 'uid' by default.
# An external module can be provided here as a custom solution to
# mapping attributes returned from a saml provider onto a matrix user.
#
#mxid_source_attribute: displayName
user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
#
#module: mapping_provider.SamlMappingProvider
# The mapping system to use for mapping the saml attribute onto a matrix ID.
# Options include:
# * 'hexencode' (which maps unpermitted characters to '=xx')
# * 'dotreplace' (which replaces unpermitted characters with '.').
# The default is 'hexencode'.
#
#mxid_mapping: dotreplace
# Custom configuration values for the module. Below options are
# intended for the built-in provider, they should be changed if
# using a custom module. This section will be passed as a Python
# dictionary to the module's `parse_config` method.
#
config:
# The SAML attribute (after mapping via the attribute maps) to use
# to derive the Matrix ID from. 'uid' by default.
#
# Note: This used to be configured by the
# saml2_config.mxid_source_attribute option. If that is still
# defined, its value will be used instead.
#
#mxid_source_attribute: displayName
# In previous versions of synapse, the mapping from SAML attribute to MXID was
# always calculated dynamically rather than stored in a table. For backwards-
# compatibility, we will look for user_ids matching such a pattern before
# creating a new account.
# The mapping system to use for mapping the saml attribute onto a
# matrix ID.
#
# Options include:
# * 'hexencode' (which maps unpermitted characters to '=xx')
# * 'dotreplace' (which replaces unpermitted characters with
# '.').
# The default is 'hexencode'.
#
# Note: This used to be configured by the
# saml2_config.mxid_mapping option. If that is still defined, its
# value will be used instead.
#
#mxid_mapping: dotreplace
# In previous versions of synapse, the mapping from SAML attribute to
# MXID was always calculated dynamically rather than stored in a
# table. For backwards-compatibility, we will look for user_ids
# matching such a pattern before creating a new account.
#
# This setting controls the SAML attribute which will be used for this
# backwards-compatibility lookup. Typically it should be 'uid', but if the
# attribute maps are changed, it may be necessary to change it.
# backwards-compatibility lookup. Typically it should be 'uid', but if
# the attribute maps are changed, it may be necessary to change it.
#
# The default is 'uid'.
#
@@ -1320,107 +1370,110 @@ password_config:
#pepper: "EVEN_MORE_SECRET"
# Configuration for sending emails from Synapse.
#
email:
# The hostname of the outgoing SMTP server to use. Defaults to 'localhost'.
#
#smtp_host: mail.server
# Enable sending emails for password resets, notification events or
# account expiry notices
#
# If your SMTP server requires authentication, the optional smtp_user &
# smtp_pass variables should be used
#
#email:
# enable_notifs: false
# smtp_host: "localhost"
# smtp_port: 25 # SSL: 465, STARTTLS: 587
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
#
# # notif_from defines the "From" address to use when sending emails.
# # It must be set if email sending is enabled.
# #
# # The placeholder '%(app)s' will be replaced by the application name,
# # which is normally 'app_name' (below), but may be overridden by the
# # Matrix client application.
# #
# # Note that the placeholder must be written '%(app)s', including the
# # trailing 's'.
# #
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
#
# # app_name defines the default value for '%(app)s' in notif_from. It
# # defaults to 'Matrix'.
# #
# #app_name: my_branded_matrix_server
#
# # Enable email notifications by default
# #
# notif_for_new_users: true
#
# # Defining a custom URL for Riot is only needed if email notifications
# # should contain links to a self-hosted installation of Riot; when set
# # the "app_name" setting is ignored
# #
# riot_base_url: "http://localhost/riot"
#
# # Configure the time that a validation email or text message code
# # will expire after sending
# #
# # This is currently used for password resets
# #
# #validation_token_lifetime: 1h
#
# # Template directory. All template files should be stored within this
# # directory. If not set, default templates from within the Synapse
# # package will be used
# #
# # For the list of default templates, please see
# # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
# #
# #template_dir: res/templates
#
# # Templates for email notifications
# #
# notif_template_html: notif_mail.html
# notif_template_text: notif_mail.txt
#
# # Templates for account expiry notices
# #
# expiry_template_html: notice_expiry.html
# expiry_template_text: notice_expiry.txt
#
# # Templates for password reset emails sent by the homeserver
# #
# #password_reset_template_html: password_reset.html
# #password_reset_template_text: password_reset.txt
#
# # Templates for registration emails sent by the homeserver
# #
# #registration_template_html: registration.html
# #registration_template_text: registration.txt
#
# # Templates for validation emails sent by the homeserver when adding an email to
# # your user account
# #
# #add_threepid_template_html: add_threepid.html
# #add_threepid_template_text: add_threepid.txt
#
# # Templates for password reset success and failure pages that a user
# # will see after attempting to reset their password
# #
# #password_reset_template_success_html: password_reset_success.html
# #password_reset_template_failure_html: password_reset_failure.html
#
# # Templates for registration success and failure pages that a user
# # will see after attempting to register using an email or phone
# #
# #registration_template_success_html: registration_success.html
# #registration_template_failure_html: registration_failure.html
#
# # Templates for success and failure pages that a user will see after attempting
# # to add an email or phone to their account
# #
# #add_threepid_success_html: add_threepid_success.html
# #add_threepid_failure_html: add_threepid_failure.html
# The port on the mail server for outgoing SMTP. Defaults to 25.
#
#smtp_port: 587
# Username/password for authentication to the SMTP server. By default, no
# authentication is attempted.
#
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# Uncomment the following to require TLS transport security for SMTP.
# By default, Synapse will connect over plain text, and will then switch to
# TLS via STARTTLS *if the SMTP server supports it*. If this option is set,
# Synapse will refuse to connect unless the server supports STARTTLS.
#
#require_transport_security: true
# Enable sending emails for messages that the user has missed
#
#enable_notifs: false
# notif_from defines the "From" address to use when sending emails.
# It must be set if email sending is enabled.
#
# The placeholder '%(app)s' will be replaced by the application name,
# which is normally 'app_name' (below), but may be overridden by the
# Matrix client application.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
#notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name defines the default value for '%(app)s' in notif_from. It
# defaults to 'Matrix'.
#
#app_name: my_branded_matrix_server
# Uncomment the following to disable automatic subscription to email
# notifications for new users. Enabled by default.
#
#notif_for_new_users: false
# Custom URL for client links within the email notifications. By default
# links will be based on "https://matrix.to".
#
# (This setting used to be called riot_base_url; the old name is still
# supported for backwards-compatibility but is now deprecated.)
#
#client_base_url: "http://localhost/riot"
# Configure the time that a validation email will expire after sending.
# Defaults to 1h.
#
#validation_token_lifetime: 15m
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * The contents of email notifications of missed events: 'notif_mail.html' and
# 'notif_mail.txt'.
#
# * The contents of account expiry notice emails: 'notice_expiry.html' and
# 'notice_expiry.txt'.
#
# * The contents of password reset emails sent by the homeserver:
# 'password_reset.html' and 'password_reset.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in the password reset email: 'password_reset_success.html' and
# 'password_reset_failure.html'
#
# * The contents of address verification emails sent during registration:
# 'registration.html' and 'registration.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent during registration:
# 'registration_success.html' and 'registration_failure.html'
#
# * The contents of address verification emails sent when an address is added
# to a Matrix account: 'add_threepid.html' and 'add_threepid.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent when an address is added
# to a Matrix account: 'add_threepid_success.html' and
# 'add_threepid_failure.html'
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
#password_providers:


@@ -0,0 +1,43 @@
# Log configuration for Synapse.
#
# This is a YAML file containing a standard Python logging configuration
# dictionary. See [1] for details on the valid settings.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
version: 1

formatters:
    precise:
        format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'

filters:
    context:
        (): synapse.logging.context.LoggingContextFilter
        request: ""

handlers:
    file:
        class: logging.handlers.RotatingFileHandler
        formatter: precise
        filename: /var/log/matrix-synapse/homeserver.log
        maxBytes: 104857600
        backupCount: 10
        filters: [context]
        encoding: utf8
    console:
        class: logging.StreamHandler
        formatter: precise
        filters: [context]

loggers:
    synapse.storage.SQL:
        # beware: increasing this to DEBUG will make synapse log sensitive
        # information such as access tokens.
        level: INFO

root:
    level: INFO
    handlers: [file, console]

disable_existing_loggers: false


@@ -209,7 +209,7 @@ Where `<token>` may be either:
* a numeric stream_id to stream updates since (exclusive)
* `NOW` to stream all subsequent updates.
The `<stream_name>` is the name of a replication stream to subscribe
to (see [here](../synapse/replication/tcp/streams/_base.py) for a list
of streams). It can also be `ALL` to subscribe to all known streams,
in which case the `<token>` must be set to `NOW`.
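For example, a worker wishing to receive all subsequent updates to the
`events` stream could send (an illustrative command following the format
described above):

    REPLICATE events NOW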
@@ -234,6 +234,10 @@ in which case the `<token>` must be set to `NOW`.
Used exclusively in tests
### REMOTE_SERVER_UP (S, C)
Inform other processes that a remote server may have come back online.
See `synapse/replication/tcp/commands.py` for a detailed description and
the format of each command.
@@ -250,6 +254,11 @@ and the key to invalidate. For example:
> RDATA caches 550953771 ["get_user_by_id", ["@bob:example.com"], 1550574873251]
Alternatively, an entire cache can be invalidated by sending down a `null`
instead of the key. For example:
> RDATA caches 550953772 ["get_user_by_id", null, 1550574873252]
However, there are times when a number of caches need to be invalidated
at the same time with the same key. To reduce traffic we batch those
invalidations into a single poke by defining a special cache name that


@@ -39,6 +39,8 @@ The TURN daemon `coturn` is available from a variety of sources such as native p
make
make install
### Configuration
1. Create or edit the config file in `/etc/turnserver.conf`. The relevant
lines, with example values, are:


@@ -168,8 +168,11 @@ endpoints matching the following regular expressions:
^/_matrix/federation/v1/make_join/
^/_matrix/federation/v1/make_leave/
^/_matrix/federation/v1/send_join/
^/_matrix/federation/v2/send_join/
^/_matrix/federation/v1/send_leave/
^/_matrix/federation/v2/send_leave/
^/_matrix/federation/v1/invite/
^/_matrix/federation/v2/invite/
^/_matrix/federation/v1/query_auth/
^/_matrix/federation/v1/event_auth/
^/_matrix/federation/v1/exchange_third_party_invite/
@@ -182,6 +185,19 @@ reverse-proxy configuration.
The `^/_matrix/federation/v1/send/` endpoint must only be handled by a single
instance.
Note that `federation` must be added to the listener resources in the worker config:
```yaml
worker_app: synapse.app.federation_reader
...
worker_listeners:
 - type: http
   port: <port>
   resources:
     - names:
       - federation
```
### `synapse.app.federation_sender`
Handles sending federation traffic to other servers. Doesn't handle any
@@ -196,16 +212,30 @@ Handles the media repository. It can handle all endpoints starting with:
/_matrix/media/
And the following regular expressions matching media-specific administration APIs:
... and the following regular expressions matching media-specific administration APIs:
^/_synapse/admin/v1/purge_media_cache$
^/_synapse/admin/v1/room/.*/media$
^/_synapse/admin/v1/room/.*/media.*$
^/_synapse/admin/v1/user/.*/media.*$
^/_synapse/admin/v1/media/.*$
^/_synapse/admin/v1/quarantine_media/.*$
You should also set `enable_media_repo: False` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.
In the `media_repository` worker configuration file, configure the http listener to
expose the `media` resource. For example:
```yaml
worker_listeners:
 - type: http
   port: 8085
   resources:
     - names:
       - media
```
Note this worker cannot be load-balanced: only one instance should be active.
### `synapse.app.client_reader`
@@ -248,6 +278,10 @@ the following regular expressions:
^/_matrix/client/(api/v1|r0|unstable)/user_directory/search$
When using this worker you must also set `update_user_directory: False` in the
shared configuration file to stop the main synapse running background
jobs related to updating the user directory.
### `synapse.app.frontend_proxy`
Proxies some frequently-requested client endpoints to add caching and remove
@@ -276,6 +310,7 @@ file. For example:
Handles some event creation. It can handle REST endpoints matching:
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/send
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/state/
^/_matrix/client/(api/v1|r0|unstable)/rooms/.*/(join|invite|leave|ban|unban|kick)$
^/_matrix/client/(api/v1|r0|unstable)/join/
^/_matrix/client/(api/v1|r0|unstable)/profile/


@@ -1,12 +1,15 @@
[mypy]
namespace_packages = True
plugins = mypy_zope:plugin
follow_imports = normal
follow_imports = silent
check_untyped_defs = True
show_error_codes = True
show_traceback = True
mypy_path = stubs
[mypy-pymacaroons.*]
ignore_missing_imports = True
[mypy-zope]
ignore_missing_imports = True
@@ -63,3 +66,12 @@ ignore_missing_imports = True
[mypy-sentry_sdk]
ignore_missing_imports = True
[mypy-PIL.*]
ignore_missing_imports = True
[mypy-lxml]
ignore_missing_imports = True
[mypy-jwt.*]
ignore_missing_imports = True


@@ -7,12 +7,22 @@ set -e
cd `dirname $0`/..
SAMPLE_CONFIG="docs/sample_config.yaml"
SAMPLE_LOG_CONFIG="docs/sample_log_config.yaml"
check() {
    diff -u "$SAMPLE_LOG_CONFIG" <(./scripts/generate_log_config) >/dev/null || return 1
}

if [ "$1" == "--check" ]; then
    diff -u "$SAMPLE_CONFIG" <(./scripts/generate_config --header-file docs/.sample_config_header.yaml) >/dev/null || {
        echo -e "\e[1m\e[31m$SAMPLE_CONFIG is not up-to-date. Regenerate it with \`scripts-dev/generate_sample_config\`.\e[0m" >&2
        exit 1
    }
    diff -u "$SAMPLE_LOG_CONFIG" <(./scripts/generate_log_config) >/dev/null || {
        echo -e "\e[1m\e[31m$SAMPLE_LOG_CONFIG is not up-to-date. Regenerate it with \`scripts-dev/generate_sample_config\`.\e[0m" >&2
        exit 1
    }
else
    ./scripts/generate_config --header-file docs/.sample_config_header.yaml -o "$SAMPLE_CONFIG"
    ./scripts/generate_log_config -o "$SAMPLE_LOG_CONFIG"
fi

scripts-dev/make_full_schema.sh (new executable file)

@@ -0,0 +1,184 @@
#!/bin/bash
#
# This script generates SQL files for creating a brand new Synapse DB with the latest
# schema, on both SQLite3 and Postgres.
#
# It does so by having Synapse generate an up-to-date SQLite DB, then running
# synapse_port_db to convert it to Postgres. It then dumps the contents of both.
POSTGRES_HOST="localhost"
POSTGRES_DB_NAME="synapse_full_schema.$$"
SQLITE_FULL_SCHEMA_OUTPUT_FILE="full.sql.sqlite"
POSTGRES_FULL_SCHEMA_OUTPUT_FILE="full.sql.postgres"
REQUIRED_DEPS=("matrix-synapse" "psycopg2")
usage() {
  echo
  echo "Usage: $0 -p <postgres_username> -o <path> [-c] [-n] [-h]"
  echo
  echo "-p <postgres_username>"
  echo "  Username to connect to local postgres instance. The password will be requested"
  echo "  during script execution."
  echo "-c"
  echo "  CI mode. Enables coverage tracking and prints every command that the script runs."
  echo "-o <path>"
  echo "  Directory to output full schema files to."
  echo "-h"
  echo "  Display this help text."
}

while getopts "p:co:h" opt; do
  case $opt in
    p)
      POSTGRES_USERNAME=$OPTARG
      ;;
    c)
      # Print all commands that are being executed
      set -x
      # Modify required dependencies for coverage
      REQUIRED_DEPS+=("coverage" "coverage-enable-subprocess")
      COVERAGE=1
      ;;
    o)
      command -v realpath > /dev/null || { echo "The -o flag requires the 'realpath' binary to be installed"; exit 1; }
      OUTPUT_DIR="$(realpath "$OPTARG")"
      ;;
    h)
      usage
      exit
      ;;
    \?)
      echo "ERROR: Invalid option: -$OPTARG" >&2
      usage
      exit
      ;;
  esac
done

# Check that required dependencies are installed
unsatisfied_requirements=()
for dep in "${REQUIRED_DEPS[@]}"; do
  pip show "$dep" --quiet || unsatisfied_requirements+=("$dep")
done
if [ ${#unsatisfied_requirements[@]} -ne 0 ]; then
  echo "Please install the following python packages: ${unsatisfied_requirements[*]}"
  exit 1
fi

if [ -z "$POSTGRES_USERNAME" ]; then
  echo "No postgres username supplied"
  usage
  exit 1
fi

if [ -z "$OUTPUT_DIR" ]; then
  echo "No output directory supplied"
  usage
  exit 1
fi
# Create the output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"
read -rsp "Postgres password for '$POSTGRES_USERNAME': " POSTGRES_PASSWORD
echo ""
# Exit immediately if a command fails
set -e
# cd to root of the synapse directory
cd "$(dirname "$0")/.."
# Create temporary SQLite and Postgres homeserver db configs and key file
TMPDIR=$(mktemp -d)
KEY_FILE=$TMPDIR/test.signing.key # default Synapse signing key path
SQLITE_CONFIG=$TMPDIR/sqlite.conf
SQLITE_DB=$TMPDIR/homeserver.db
POSTGRES_CONFIG=$TMPDIR/postgres.conf
# Ensure these files are deleted on script exit
trap 'rm -rf $TMPDIR' EXIT
cat > "$SQLITE_CONFIG" <<EOF
server_name: "test"
signing_key_path: "$KEY_FILE"
macaroon_secret_key: "abcde"
report_stats: false
database:
name: "sqlite3"
args:
database: "$SQLITE_DB"
# Suppress the key server warning.
trusted_key_servers: []
EOF
cat > "$POSTGRES_CONFIG" <<EOF
server_name: "test"
signing_key_path: "$KEY_FILE"
macaroon_secret_key: "abcde"
report_stats: false
database:
name: "psycopg2"
args:
user: "$POSTGRES_USERNAME"
host: "$POSTGRES_HOST"
password: "$POSTGRES_PASSWORD"
database: "$POSTGRES_DB_NAME"
# Suppress the key server warning.
trusted_key_servers: []
EOF
# Generate the server's signing key.
echo "Generating SQLite3 db schema..."
python -m synapse.app.homeserver --generate-keys -c "$SQLITE_CONFIG"
# Make sure the SQLite3 database is using the latest schema and has no pending background update.
echo "Running db background jobs..."
scripts-dev/update_database --database-config "$SQLITE_CONFIG"
# Create the PostgreSQL database.
echo "Creating postgres database..."
createdb $POSTGRES_DB_NAME
echo "Copying data from SQLite3 to Postgres with synapse_port_db..."
if [ -z "$COVERAGE" ]; then
# No coverage needed
scripts/synapse_port_db --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
else
# Coverage desired
coverage run scripts/synapse_port_db --sqlite-database "$SQLITE_DB" --postgres-config "$POSTGRES_CONFIG"
fi
# Delete schema_version, applied_schema_deltas and applied_module_schemas tables
# This needs to be done after synapse_port_db is run
echo "Dropping unwanted db tables..."
SQL="
DROP TABLE schema_version;
DROP TABLE applied_schema_deltas;
DROP TABLE applied_module_schemas;
"
sqlite3 "$SQLITE_DB" <<< "$SQL"
psql $POSTGRES_DB_NAME -U "$POSTGRES_USERNAME" -w <<< "$SQL"
echo "Dumping SQLite3 schema to '$OUTPUT_DIR/$SQLITE_FULL_SCHEMA_OUTPUT_FILE'..."
sqlite3 "$SQLITE_DB" ".dump" > "$OUTPUT_DIR/$SQLITE_FULL_SCHEMA_OUTPUT_FILE"
echo "Dumping Postgres schema to '$OUTPUT_DIR/$POSTGRES_FULL_SCHEMA_OUTPUT_FILE'..."
pg_dump --format=plain --no-tablespaces --no-acl --no-owner $POSTGRES_DB_NAME | sed -e '/^--/d' -e 's/public\.//g' -e '/^SET /d' -e '/^SELECT /d' > "$OUTPUT_DIR/$POSTGRES_FULL_SCHEMA_OUTPUT_FILE"
echo "Cleaning up temporary Postgres database..."
dropdb $POSTGRES_DB_NAME
echo "Done! Files dumped to: $OUTPUT_DIR"


@@ -22,12 +22,12 @@ import yaml
from twisted.internet import defer, reactor
import synapse
from synapse.config.homeserver import HomeServerConfig
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("update_database")
@@ -35,20 +35,12 @@ logger = logging.getLogger("update_database")
class MockHomeserver(HomeServer):
DATASTORE_CLASS = DataStore
def __init__(self, config, database_engine, db_conn, **kwargs):
def __init__(self, config, **kwargs):
super(MockHomeserver, self).__init__(
config.server_name,
reactor=reactor,
config=config,
database_engine=database_engine,
**kwargs
config.server_name, reactor=reactor, config=config, **kwargs
)
self.database_engine = database_engine
self.db_conn = db_conn
def get_db_conn(self):
return self.db_conn
self.version_string = "Synapse/"+get_version_string(synapse)
if __name__ == "__main__":
@@ -85,37 +77,25 @@ if __name__ == "__main__":
config = HomeServerConfig()
config.parse_config_dict(hs_config, "", "")
# Create the database engine and a connection to it.
database_engine = create_engine(config.database_config)
db_conn = database_engine.module.connect(
**{
k: v
for k, v in config.database_config.get("args", {}).items()
if not k.startswith("cp_")
}
)
# Update the database to the latest schema.
prepare_database(db_conn, database_engine, config=config)
db_conn.commit()
# Instantiate and initialise the homeserver object.
hs = MockHomeserver(
config, database_engine, db_conn, db_config=config.database_config,
)
# setup instantiates the store within the homeserver object.
hs = MockHomeserver(config)
# Setup instantiates the store within the homeserver object and updates the
# DB.
hs.setup()
store = hs.get_datastore()
@defer.inlineCallbacks
def run_background_updates():
yield store.db.updates.run_background_updates(sleep=False)
async def run_background_updates():
await store.db.updates.run_background_updates(sleep=False)
# Stop the reactor to exit the script once every background update is run.
reactor.stop()
# Apply all background updates on the database.
reactor.callWhenRunning(
lambda: run_as_background_process("background_updates", run_background_updates)
)
def run():
# Apply all background updates on the database.
defer.ensureDeferred(
run_as_background_process("background_updates", run_background_updates)
)
reactor.callWhenRunning(run)
reactor.run()

scripts/export_signing_key (new executable file)

@@ -0,0 +1,94 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import sys
import time
from typing import Optional
import nacl.signing
from signedjson.key import encode_verify_key_base64, get_verify_key, read_signing_keys
def exit(status: int = 0, message: Optional[str] = None):
    if message:
        print(message, file=sys.stderr)
    sys.exit(status)


def format_plain(public_key: nacl.signing.VerifyKey):
    print(
        "%s:%s %s"
        % (public_key.alg, public_key.version, encode_verify_key_base64(public_key),)
    )


def format_for_config(public_key: nacl.signing.VerifyKey, expiry_ts: int):
    print(
        '  "%s:%s": { key: "%s", expired_ts: %i }'
        % (
            public_key.alg,
            public_key.version,
            encode_verify_key_base64(public_key),
            expiry_ts,
        )
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()

    parser.add_argument(
        "key_file", nargs="+", type=argparse.FileType("r"), help="The key file to read",
    )

    parser.add_argument(
        "-x",
        action="store_true",
        dest="for_config",
        help="format the output for inclusion in the old_signing_keys config setting",
    )

    parser.add_argument(
        "--expiry-ts",
        type=int,
        default=int(time.time() * 1000) + 6*3600000,
        help=(
            "The expiry time to use for -x, in milliseconds since 1970. The default "
            "is (now+6h)."
        ),
    )

    args = parser.parse_args()

    formatter = (
        (lambda k: format_for_config(k, args.expiry_ts))
        if args.for_config
        else format_plain
    )

    keys = []
    for file in args.key_file:
        try:
            res = read_signing_keys(file)
        except Exception as e:
            exit(
                status=1,
                message="Error reading key from file %s: %s %s"
                % (file.name, type(e), e),
            )
            res = []
        for key in res:
            formatter(get_verify_key(key))

scripts/generate_log_config (new executable file)

@@ -0,0 +1,43 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import sys
from synapse.config.logger import DEFAULT_LOG_CONFIG
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-o",
"--output-file",
type=argparse.FileType("w"),
default=sys.stdout,
help="File to write the configuration to. Default: stdout",
)
parser.add_argument(
"-f",
"--log-file",
type=str,
default="/var/log/matrix-synapse/homeserver.log",
help="name of the log file",
)
args = parser.parse_args()
args.output_file.write(DEFAULT_LOG_CONFIG.substitute(log_file=args.log_file))


@@ -52,7 +52,7 @@ if __name__ == "__main__":
if "config" in args and args.config:
config = yaml.safe_load(args.config)
bcrypt_rounds = config.get("bcrypt_rounds", bcrypt_rounds)
password_config = config.get("password_config", {})
password_config = config.get("password_config", None) or {}
password_pepper = password_config.get("pepper", password_pepper)
password = args.password


@@ -27,12 +27,16 @@ from six import string_types
import yaml
from twisted.enterprise import adbapi
from twisted.internet import defer, reactor
import synapse
from synapse.config.database import DatabaseConnectionConfig
from synapse.config.homeserver import HomeServerConfig
from synapse.logging.context import PreserveLoggingContext
from synapse.storage._base import LoggingTransaction
from synapse.logging.context import (
LoggingContext,
make_deferred_yieldable,
run_in_background,
)
from synapse.storage.data_stores.main.client_ips import ClientIpBackgroundUpdateStore
from synapse.storage.data_stores.main.deviceinbox import (
DeviceInboxBackgroundUpdateStore,
@@ -50,15 +54,17 @@ from synapse.storage.data_stores.main.registration import (
from synapse.storage.data_stores.main.room import RoomBackgroundUpdateStore
from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.data_stores.main.search import SearchBackgroundUpdateStore
from synapse.storage.data_stores.main.state import StateBackgroundUpdateStore
from synapse.storage.data_stores.main.state import MainStateBackgroundUpdateStore
from synapse.storage.data_stores.main.stats import StatsStore
from synapse.storage.data_stores.main.user_directory import (
UserDirectoryBackgroundUpdateStore,
)
from synapse.storage.database import Database
from synapse.storage.data_stores.state.bg_updates import StateBackgroundUpdateStore
from synapse.storage.database import Database, make_conn
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util import Clock
from synapse.util.versionstring import get_version_string
logger = logging.getLogger("synapse_port_db")
@@ -123,6 +129,13 @@ APPEND_ONLY_TABLES = [
]
# Error returned by the run function. Used at the top-level part of the script to
# handle errors and return codes.
end_error = None
# The exec_info for the error, if any. If error is defined but not exec_info the script
# will show only the error message without the stacktrace, if exec_info is defined but
# not the error then the script will show nothing outside of what's printed in the run
# function. If both are defined, the script will print both the error and the stacktrace.
end_error_exec_info = None
@@ -137,6 +150,7 @@ class Store(
RoomMemberBackgroundUpdateStore,
SearchBackgroundUpdateStore,
StateBackgroundUpdateStore,
MainStateBackgroundUpdateStore,
UserDirectoryBackgroundUpdateStore,
StatsStore,
):
@@ -163,35 +177,34 @@ class Store(
logger.exception("Failed to insert: %s", table)
raise
def set_room_is_public(self, room_id, is_public):
raise Exception(
"Attempt to set room_is_public during port_db: database not empty?"
)
class MockHomeserver:
def __init__(self, config, database_engine, db_conn, db_pool):
self.database_engine = database_engine
self.db_conn = db_conn
self.db_pool = db_pool
def __init__(self, config):
self.clock = Clock(reactor)
self.config = config
self.hostname = config.server_name
def get_db_conn(self):
return self.db_conn
def get_db_pool(self):
return self.db_pool
self.version_string = "Synapse/"+get_version_string(synapse)
def get_clock(self):
return self.clock
def get_reactor(self):
return reactor
class Porter(object):
def __init__(self, **kwargs):
self.__dict__.update(kwargs)
@defer.inlineCallbacks
def setup_table(self, table):
async def setup_table(self, table):
if table in APPEND_ONLY_TABLES:
# It's safe to just carry on inserting.
row = yield self.postgres_store.db.simple_select_one(
row = await self.postgres_store.db.simple_select_one(
table="port_from_sqlite3",
keyvalues={"table_name": table},
retcols=("forward_rowid", "backward_rowid"),
@@ -205,10 +218,10 @@ class Porter(object):
forward_chunk,
already_ported,
total_to_port,
) = yield self._setup_sent_transactions()
) = await self._setup_sent_transactions()
backward_chunk = 0
else:
yield self.postgres_store.db.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": table,
@@ -225,7 +238,7 @@ class Porter(object):
backward_chunk = row["backward_rowid"]
if total_to_port is None:
already_ported, total_to_port = yield self._get_total_count_to_port(
already_ported, total_to_port = await self._get_total_count_to_port(
table, forward_chunk, backward_chunk
)
else:
@@ -236,9 +249,9 @@ class Porter(object):
)
txn.execute("TRUNCATE %s CASCADE" % (table,))
yield self.postgres_store.execute(delete_all)
await self.postgres_store.execute(delete_all)
yield self.postgres_store.db.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={"table_name": table, "forward_rowid": 1, "backward_rowid": 0},
)
@@ -246,16 +259,13 @@ class Porter(object):
forward_chunk = 1
backward_chunk = 0
already_ported, total_to_port = yield self._get_total_count_to_port(
already_ported, total_to_port = await self._get_total_count_to_port(
table, forward_chunk, backward_chunk
)
defer.returnValue(
(table, already_ported, total_to_port, forward_chunk, backward_chunk)
)
return table, already_ported, total_to_port, forward_chunk, backward_chunk
@defer.inlineCallbacks
def handle_table(
async def handle_table(
self, table, postgres_size, table_size, forward_chunk, backward_chunk
):
logger.info(
@@ -273,7 +283,7 @@ class Porter(object):
self.progress.add_table(table, postgres_size, table_size)
if table == "event_search":
yield self.handle_search_table(
await self.handle_search_table(
postgres_size, table_size, forward_chunk, backward_chunk
)
return
@@ -292,7 +302,7 @@ class Porter(object):
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null)`, as that is
# what synapse expects to be there.
yield self.postgres_store.db.simple_insert(
await self.postgres_store.db.simple_insert(
table=table, values={"stream_id": None}
)
self.progress.update(table, table_size) # Mark table as done
@@ -333,7 +343,7 @@ class Porter(object):
return headers, forward_rows, backward_rows
headers, frows, brows = yield self.sqlite_store.db.runInteraction(
headers, frows, brows = await self.sqlite_store.db.runInteraction(
"select", r
)
@@ -359,7 +369,7 @@ class Porter(object):
},
)
yield self.postgres_store.execute(insert)
await self.postgres_store.execute(insert)
postgres_size += len(rows)
@@ -367,8 +377,7 @@ class Porter(object):
else:
return
@defer.inlineCallbacks
def handle_search_table(
async def handle_search_table(
self, postgres_size, table_size, forward_chunk, backward_chunk
):
select = (
@@ -388,7 +397,7 @@ class Porter(object):
return headers, rows
headers, rows = yield self.sqlite_store.db.runInteraction("select", r)
headers, rows = await self.sqlite_store.db.runInteraction("select", r)
if rows:
forward_chunk = rows[-1][0] + 1
@@ -436,7 +445,7 @@ class Porter(object):
},
)
yield self.postgres_store.execute(insert)
await self.postgres_store.execute(insert)
postgres_size += len(rows)
@@ -445,54 +454,39 @@ class Porter(object):
else:
return
def setup_db(self, db_config, database_engine):
db_conn = database_engine.module.connect(
**{
k: v
for k, v in db_config.get("args", {}).items()
if not k.startswith("cp_")
}
)
prepare_database(db_conn, database_engine, config=None)
db_conn.commit()
return db_conn
@defer.inlineCallbacks
def build_db_store(self, config):
def build_db_store(
self, db_config: DatabaseConnectionConfig, allow_outdated_version: bool = False,
):
"""Builds and returns a database store using the provided configuration.
Args:
config: The database configuration, i.e. a dict following the structure of
the "database" section of Synapse's configuration file.
db_config: The database configuration
allow_outdated_version: True to suppress errors about the database server
version being too old to run a complete synapse
Returns:
The built Store object.
"""
engine = create_engine(config)
self.progress.set_state("Preparing %s" % db_config.config["name"])
self.progress.set_state("Preparing %s" % config["name"])
conn = self.setup_db(config, engine)
engine = create_engine(db_config.config)
db_pool = adbapi.ConnectionPool(config["name"], **config["args"])
hs = MockHomeserver(self.hs_config)
hs = MockHomeserver(self.hs_config, engine, conn, db_pool)
store = Store(Database(hs), conn, hs)
yield store.db.runInteraction(
"%s_engine.check_database" % config["name"], engine.check_database,
)
with make_conn(db_config, engine) as db_conn:
engine.check_database(
db_conn, allow_outdated_version=allow_outdated_version
)
prepare_database(db_conn, engine, config=self.hs_config)
store = Store(Database(hs, db_config, engine), db_conn, hs)
db_conn.commit()
return store
@defer.inlineCallbacks
def run_background_updates_on_postgres(self):
async def run_background_updates_on_postgres(self):
# Manually apply all background updates on the PostgreSQL database.
postgres_ready = (
yield self.postgres_store.db.updates.has_completed_background_updates()
await self.postgres_store.db.updates.has_completed_background_updates()
)
if not postgres_ready:
@@ -501,33 +495,44 @@ class Porter(object):
self.progress.set_state("Running background updates on PostgreSQL")
while not postgres_ready:
yield self.postgres_store.db.updates.do_next_background_update(100)
postgres_ready = yield (
await self.postgres_store.db.updates.do_next_background_update(100)
postgres_ready = await (
self.postgres_store.db.updates.has_completed_background_updates()
)
@defer.inlineCallbacks
def run(self):
async def run(self):
"""Ports the SQLite database to a PostgreSQL database.
When a fatal error is met, its message is assigned to the global "end_error"
variable. When this error comes with a stacktrace, its exec_info is assigned to
the global "end_error_exec_info" variable.
"""
global end_error
try:
self.sqlite_store = yield self.build_db_store(self.sqlite_config)
# we allow people to port away from outdated versions of sqlite.
self.sqlite_store = self.build_db_store(
DatabaseConnectionConfig("master-sqlite", self.sqlite_config),
allow_outdated_version=True,
)
# Check if all background updates are done, abort if not.
updates_complete = (
yield self.sqlite_store.db.updates.has_completed_background_updates()
await self.sqlite_store.db.updates.has_completed_background_updates()
)
if not updates_complete:
sys.stderr.write(
end_error = (
"Pending background updates exist in the SQLite3 database."
" Please start Synapse again and wait until every update has finished"
" before running this script.\n"
)
defer.returnValue(None)
return
self.postgres_store = yield self.build_db_store(
self.hs_config.database_config
self.postgres_store = self.build_db_store(
self.hs_config.get_single_database()
)
yield self.run_background_updates_on_postgres()
await self.run_background_updates_on_postgres()
self.progress.set_state("Creating port tables")
@@ -555,22 +560,22 @@ class Porter(object):
)
try:
yield self.postgres_store.db.runInteraction("alter_table", alter_table)
await self.postgres_store.db.runInteraction("alter_table", alter_table)
except Exception:
# On Error Resume Next
pass
yield self.postgres_store.db.runInteraction(
await self.postgres_store.db.runInteraction(
"create_port_table", create_port_table
)
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store.db.simple_select_onecol(
sqlite_tables = await self.sqlite_store.db.simple_select_onecol(
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
)
postgres_tables = yield self.postgres_store.db.simple_select_onecol(
postgres_tables = await self.postgres_store.db.simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
@@ -581,28 +586,34 @@ class Porter(object):
# Step 3. Figure out what still needs copying
self.progress.set_state("Checking on port progress")
setup_res = yield defer.gatherResults(
[
self.setup_table(table)
for table in tables
if table not in ["schema_version", "applied_schema_deltas"]
and not table.startswith("sqlite_")
],
consumeErrors=True,
setup_res = await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(self.setup_table, table)
for table in tables
if table not in ["schema_version", "applied_schema_deltas"]
and not table.startswith("sqlite_")
],
consumeErrors=True,
)
)
# Step 4. Do the copying.
self.progress.set_state("Copying to postgres")
yield defer.gatherResults(
[self.handle_table(*res) for res in setup_res], consumeErrors=True
await make_deferred_yieldable(
defer.gatherResults(
[run_in_background(self.handle_table, *res) for res in setup_res],
consumeErrors=True,
)
)
# Step 5. Do final post-processing
yield self._setup_state_group_id_seq()
await self._setup_state_group_id_seq()
self.progress.done()
except Exception:
except Exception as e:
global end_error_exec_info
end_error = e
end_error_exec_info = sys.exc_info()
logger.exception("")
finally:
@@ -642,8 +653,7 @@ class Porter(object):
return outrows
@defer.inlineCallbacks
def _setup_sent_transactions(self):
async def _setup_sent_transactions(self):
# Only save things from the last day
yesterday = int(time.time() * 1000) - 86400000
@@ -664,7 +674,7 @@ class Porter(object):
return headers, [r for r in rows if r[ts_ind] < yesterday]
headers, rows = yield self.sqlite_store.db.runInteraction("select", r)
headers, rows = await self.sqlite_store.db.runInteraction("select", r)
rows = self._convert_rows("sent_transactions", headers, rows)
@@ -677,7 +687,7 @@ class Porter(object):
txn, "sent_transactions", headers[1:], rows
)
yield self.postgres_store.execute(insert)
await self.postgres_store.execute(insert)
else:
max_inserted_rowid = 0
@@ -694,10 +704,10 @@ class Porter(object):
else:
return 1
next_chunk = yield self.sqlite_store.execute(get_start_id)
next_chunk = await self.sqlite_store.execute(get_start_id)
next_chunk = max(max_inserted_rowid + 1, next_chunk)
yield self.postgres_store.db.simple_insert(
await self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": "sent_transactions",
@@ -713,46 +723,49 @@ class Porter(object):
(size,) = txn.fetchone()
return int(size)
remaining_count = yield self.sqlite_store.execute(get_sent_table_size)
remaining_count = await self.sqlite_store.execute(get_sent_table_size)
total_count = remaining_count + inserted_rows
defer.returnValue((next_chunk, inserted_rows, total_count))
return next_chunk, inserted_rows, total_count
@defer.inlineCallbacks
def _get_remaining_count_to_port(self, table, forward_chunk, backward_chunk):
frows = yield self.sqlite_store.execute_sql(
async def _get_remaining_count_to_port(self, table, forward_chunk, backward_chunk):
frows = await self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid >= ?" % (table,), forward_chunk
)
brows = yield self.sqlite_store.execute_sql(
brows = await self.sqlite_store.execute_sql(
"SELECT count(*) FROM %s WHERE rowid <= ?" % (table,), backward_chunk
)
defer.returnValue(frows[0][0] + brows[0][0])
return frows[0][0] + brows[0][0]
@defer.inlineCallbacks
def _get_already_ported_count(self, table):
rows = yield self.postgres_store.execute_sql(
async def _get_already_ported_count(self, table):
rows = await self.postgres_store.execute_sql(
"SELECT count(*) FROM %s" % (table,)
)
defer.returnValue(rows[0][0])
return rows[0][0]
@defer.inlineCallbacks
def _get_total_count_to_port(self, table, forward_chunk, backward_chunk):
remaining, done = yield defer.gatherResults(
[
self._get_remaining_count_to_port(table, forward_chunk, backward_chunk),
self._get_already_ported_count(table),
],
consumeErrors=True,
async def _get_total_count_to_port(self, table, forward_chunk, backward_chunk):
remaining, done = await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self._get_remaining_count_to_port,
table,
forward_chunk,
backward_chunk,
),
run_in_background(self._get_already_ported_count, table),
],
)
)
remaining = int(remaining) if remaining else 0
done = int(done) if done else 0
defer.returnValue((done, remaining + done))
return done, remaining + done
def _setup_state_group_id_seq(self):
def r(txn):
@@ -1018,7 +1031,12 @@ if __name__ == "__main__":
hs_config=config,
)
reactor.callWhenRunning(porter.run)
@defer.inlineCallbacks
def run():
with LoggingContext("synapse_port_db_run"):
yield defer.ensureDeferred(porter.run())
reactor.callWhenRunning(run)
reactor.run()
@@ -1027,7 +1045,11 @@ if __name__ == "__main__":
else:
start()
if end_error_exec_info:
exc_type, exc_value, exc_traceback = end_error_exec_info
traceback.print_exception(exc_type, exc_value, exc_traceback)
if end_error:
if end_error_exec_info:
exc_type, exc_value, exc_traceback = end_error_exec_info
traceback.print_exception(exc_type, exc_value, exc_traceback)
sys.stderr.write(end_error)
sys.exit(5)


@@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.7.2"
__version__ = "1.10.0"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@@ -14,7 +14,6 @@
# limitations under the License.
import logging
from typing import Dict, Tuple
from six import itervalues
@@ -34,8 +33,9 @@ from synapse.api.errors import (
MissingClientTokenError,
ResourceLimitError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS
from synapse.config.server import is_threepid_reserved
from synapse.types import UserID
from synapse.types import StateMap, UserID
from synapse.util.caches import CACHE_SIZE_FACTOR, register_cache
from synapse.util.caches.lrucache import LruCache
from synapse.util.metrics import Measure
@@ -78,15 +78,17 @@ class Auth(object):
self._account_validity = hs.config.account_validity
@defer.inlineCallbacks
def check_from_context(self, room_version, event, context, do_sig_check=True):
prev_state_ids = yield context.get_prev_state_ids(self.store)
def check_from_context(self, room_version: str, event, context, do_sig_check=True):
prev_state_ids = yield context.get_prev_state_ids()
auth_events_ids = yield self.compute_auth_events(
event, prev_state_ids, for_verification=True
)
auth_events = yield self.store.get_events(auth_events_ids)
auth_events = {(e.type, e.state_key): e for e in itervalues(auth_events)}
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
event_auth.check(
room_version, event, auth_events=auth_events, do_sig_check=do_sig_check
room_version_obj, event, auth_events=auth_events, do_sig_check=do_sig_check
)
@defer.inlineCallbacks
@@ -509,10 +511,7 @@ class Auth(object):
return self.store.is_server_admin(user)
def compute_auth_events(
self,
event,
current_state_ids: Dict[Tuple[str, str], str],
for_verification: bool = False,
self, event, current_state_ids: StateMap[str], for_verification: bool = False,
):
"""Given an event and current state return the list of event IDs used
to auth an event.


@@ -77,12 +77,11 @@ class EventTypes(object):
Aliases = "m.room.aliases"
Redaction = "m.room.redaction"
ThirdPartyInvite = "m.room.third_party_invite"
Encryption = "m.room.encryption"
RelatedGroups = "m.room.related_groups"
RoomHistoryVisibility = "m.room.history_visibility"
CanonicalAlias = "m.room.canonical_alias"
Encryption = "m.room.encryption"
Encrypted = "m.room.encrypted"
RoomAvatar = "m.room.avatar"
RoomEncryption = "m.room.encryption"
GuestAccess = "m.room.guest_access"

View File

@@ -17,13 +17,15 @@
"""Contains exceptions and error codes."""
import logging
from typing import Dict
from typing import Dict, List
from six import iteritems
from six.moves import http_client
from canonicaljson import json
from twisted.web import http
logger = logging.getLogger(__name__)
@@ -80,6 +82,29 @@ class CodeMessageException(RuntimeError):
self.msg = msg
class RedirectException(CodeMessageException):
"""A pseudo-error indicating that we want to redirect the client to a different
location
Attributes:
cookies: a list of set-cookies values to add to the response. For example:
b"sessionId=a3fWa; Expires=Wed, 21 Oct 2015 07:28:00 GMT"
"""
def __init__(self, location: bytes, http_code: int = http.FOUND):
"""
Args:
location: the URI to redirect to
http_code: the HTTP response code
"""
msg = "Redirect to %s" % (location.decode("utf-8"),)
super().__init__(code=http_code, msg=msg)
self.location = location
self.cookies = [] # type: List[bytes]
class SynapseError(CodeMessageException):
"""A base exception type for matrix errors which have an errcode and error
message (as well as an HTTP status code).
@@ -158,12 +183,6 @@ class UserDeactivatedError(SynapseError):
)
class RegistrationError(SynapseError):
"""An error raised when a registration event fails."""
pass
class FederationDeniedError(SynapseError):
"""An error raised when the server tries to federate with a server which
is not on its federation whitelist.
@@ -383,11 +402,9 @@ class UnsupportedRoomVersionError(SynapseError):
"""The client's request to create a room used a room version that the server does
not support."""
def __init__(self):
def __init__(self, msg="Homeserver does not support this room version"):
super(UnsupportedRoomVersionError, self).__init__(
code=400,
msg="Homeserver does not support this room version",
errcode=Codes.UNSUPPORTED_ROOM_VERSION,
code=400, msg=msg, errcode=Codes.UNSUPPORTED_ROOM_VERSION,
)
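A hedged usage sketch for the new RedirectException: a request handler can raise it to short-circuit processing with a redirect, optionally attaching cookies. The handler shown is hypothetical.

from synapse.api.errors import RedirectException

def handle_sso_request(request):
    e = RedirectException(b"https://idp.example.com/login", http_code=302)
    e.cookies.append(b"sessionId=a3fWa; Max-Age=60; Path=/")
    raise e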

View File

@@ -15,6 +15,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List
from six import text_type
import jsonschema
@@ -293,7 +295,7 @@ class Filter(object):
room_id = None
ev_type = "m.presence"
contains_url = False
labels = []
labels = [] # type: List[str]
else:
sender = event.get("sender", None)
if not sender:

View File

@@ -12,7 +12,8 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import collections
from collections import OrderedDict
from typing import Any, Optional, Tuple
from synapse.api.errors import LimitExceededError
@@ -23,7 +24,9 @@ class Ratelimiter(object):
"""
def __init__(self):
self.message_counts = collections.OrderedDict()
self.message_counts = (
OrderedDict()
) # type: OrderedDict[Any, Tuple[float, int, Optional[float]]]
def can_do_action(self, key, time_now_s, rate_hz, burst_count, update=True):
"""Can the entity (e.g. user or IP address) perform the action?

View File

@@ -57,6 +57,9 @@ class RoomVersion(object):
state_res = attr.ib() # int; one of the StateResolutionVersions
enforce_key_validity = attr.ib() # bool
# bool: before MSC2260, anyone was allowed to send an aliases event
special_case_aliases_auth = attr.ib(type=bool, default=False)
class RoomVersions(object):
V1 = RoomVersion(
@@ -65,6 +68,7 @@ class RoomVersions(object):
EventFormatVersions.V1,
StateResolutionVersions.V1,
enforce_key_validity=False,
special_case_aliases_auth=True,
)
V2 = RoomVersion(
"2",
@@ -72,6 +76,7 @@ class RoomVersions(object):
EventFormatVersions.V1,
StateResolutionVersions.V2,
enforce_key_validity=False,
special_case_aliases_auth=True,
)
V3 = RoomVersion(
"3",
@@ -79,6 +84,7 @@ class RoomVersions(object):
EventFormatVersions.V2,
StateResolutionVersions.V2,
enforce_key_validity=False,
special_case_aliases_auth=True,
)
V4 = RoomVersion(
"4",
@@ -86,6 +92,7 @@ class RoomVersions(object):
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=False,
special_case_aliases_auth=True,
)
V5 = RoomVersion(
"5",
@@ -93,6 +100,14 @@ class RoomVersions(object):
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
special_case_aliases_auth=True,
)
MSC2260_DEV = RoomVersion(
"org.matrix.msc2260",
RoomDisposition.UNSTABLE,
EventFormatVersions.V3,
StateResolutionVersions.V2,
enforce_key_validity=True,
)
@@ -104,5 +119,6 @@ KNOWN_ROOM_VERSIONS = {
RoomVersions.V3,
RoomVersions.V4,
RoomVersions.V5,
RoomVersions.MSC2260_DEV,
)
} # type: Dict[str, RoomVersion]
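The effect of the new flag can be checked directly: every stable version above opts in explicitly, while the MSC2260 test version relies on the attr.ib default of False.

from synapse.api.room_versions import KNOWN_ROOM_VERSIONS

assert KNOWN_ROOM_VERSIONS["5"].special_case_aliases_auth
assert not KNOWN_ROOM_VERSIONS["org.matrix.msc2260"].special_case_aliases_auth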

View File

@@ -29,7 +29,6 @@ FEDERATION_V2_PREFIX = FEDERATION_PREFIX + "/v2"
FEDERATION_UNSTABLE_PREFIX = FEDERATION_PREFIX + "/unstable"
STATIC_PREFIX = "/_matrix/static"
WEB_CLIENT_PREFIX = "/_matrix/client"
CONTENT_REPO_PREFIX = "/_matrix/content"
SERVER_KEY_V2_PREFIX = "/_matrix/key/v2"
MEDIA_PREFIX = "/_matrix/media/r0"
LEGACY_MEDIA_PREFIX = "/_matrix/media/v1"

View File

@@ -237,6 +237,12 @@ def start(hs, listeners=None):
"""
Start a Synapse server or worker.
Should be called once the reactor is running and (if we're using ACME) the
TLS certificates are in place.
Will start the main HTTP listeners and do some other startup tasks, and then
notify systemd.
Args:
hs (synapse.server.HomeServer)
listeners (list[dict]): Listener configuration ('listeners' in homeserver.yaml)
@@ -311,9 +317,7 @@ def setup_sdnotify(hs):
# Tell systemd our state, if we're using it. This will silently fail if
# we're not using systemd.
hs.get_reactor().addSystemEventTrigger(
"after", "startup", sdnotify, b"READY=1\nMAINPID=%i" % (os.getpid(),)
)
sdnotify(b"READY=1\nMAINPID=%i" % (os.getpid(),))
hs.get_reactor().addSystemEventTrigger(
"before", "shutdown", sdnotify, b"STOPPING=1"

View File

@@ -45,7 +45,6 @@ from synapse.replication.slave.storage.registration import SlavedRegistrationSto
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.logcontext import LoggingContext
from synapse.util.versionstring import get_version_string
@@ -85,8 +84,7 @@ class AdminCmdServer(HomeServer):
class AdminCmdReplicationHandler(ReplicationClientHandler):
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
async def on_rdata(self, stream_name, token, rows):
pass
def get_streams_to_replicate(self):
@@ -105,8 +103,10 @@ def export_data_command(hs, args):
user_id = args.user_id
directory = args.output_directory
res = yield hs.get_handlers().admin_handler.export_user_data(
user_id, FileExfiltrationWriter(user_id, directory=directory)
res = yield defer.ensureDeferred(
hs.get_handlers().admin_handler.export_user_data(
user_id, FileExfiltrationWriter(user_id, directory=directory)
)
)
print(res)
@@ -229,14 +229,10 @@ def start(config_options):
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = AdminCmdServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -34,7 +34,6 @@ from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -116,9 +115,8 @@ class ASReplicationHandler(ReplicationClientHandler):
super(ASReplicationHandler, self).__init__(hs.get_datastore())
self.appservice_handler = hs.get_application_service_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
async def on_rdata(self, stream_name, token, rows):
await super(ASReplicationHandler, self).on_rdata(stream_name, token, rows)
if stream_name == "events":
max_stream_id = self.store.get_room_max_stream_ordering()
@@ -143,8 +141,6 @@ def start(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
if config.notify_appservices:
sys.stderr.write(
"\nThe appservices must be disabled in the main synapse process"
@@ -159,10 +155,8 @@ def start(config_options):
ps = AppserviceServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ps, config, use_worker_options=True)
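The same mechanical inlineCallbacks-to-async conversion of on_rdata recurs in each worker below. Roughly, assuming a base class whose on_rdata returns a Deferred (Base stands in for ReplicationClientHandler):

from twisted.internet import defer

class Base:
    def on_rdata(self, stream_name, token, rows):
        return defer.succeed(None)

class BeforeHandler(Base):
    @defer.inlineCallbacks
    def on_rdata(self, stream_name, token, rows):
        yield super().on_rdata(stream_name, token, rows)

class AfterHandler(Base):
    async def on_rdata(self, stream_name, token, rows):
        await super().on_rdata(stream_name, token, rows)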

View File

@@ -62,7 +62,9 @@ from synapse.rest.client.v2_alpha.keys import KeyChangesServlet, KeyQueryServlet
from synapse.rest.client.v2_alpha.register import RegisterRestServlet
from synapse.rest.client.versions import VersionsRestServlet
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.data_stores.main.monthly_active_users import (
MonthlyActiveUsersWorkerStore,
)
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -86,6 +88,7 @@ class ClientReaderSlavedStore(
SlavedTransactionStore,
SlavedProfileStore,
SlavedClientIpStore,
MonthlyActiveUsersWorkerStore,
BaseSlavedStore,
):
pass
@@ -181,14 +184,10 @@ def start(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = ClientReaderServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -56,8 +56,10 @@ from synapse.rest.client.v1.room import (
RoomStateEventRestServlet,
)
from synapse.server import HomeServer
from synapse.storage.data_stores.main.monthly_active_users import (
MonthlyActiveUsersWorkerStore,
)
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -82,6 +84,7 @@ class EventCreatorSlavedStore(
SlavedEventStore,
SlavedRegistrationStore,
RoomStore,
MonthlyActiveUsersWorkerStore,
BaseSlavedStore,
):
pass
@@ -180,14 +183,10 @@ def start(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = EventCreatorServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -46,7 +46,9 @@ from synapse.replication.slave.storage.transactions import SlavedTransactionStor
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.storage.data_stores.main.monthly_active_users import (
MonthlyActiveUsersWorkerStore,
)
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -67,6 +69,7 @@ class FederationReaderSlavedStore(
RoomStore,
DirectoryStore,
SlavedTransactionStore,
MonthlyActiveUsersWorkerStore,
BaseSlavedStore,
):
pass
@@ -162,14 +165,10 @@ def start(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = FederationReaderServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -38,10 +38,13 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams._base import ReceiptsStream
from synapse.replication.tcp.streams._base import (
DeviceListsStream,
ReceiptsStream,
ToDeviceStream,
)
from synapse.server import HomeServer
from synapse.storage.database import Database
from synapse.storage.engines import create_engine
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
from synapse.util.httpresourcetree import create_resource_tree
@@ -146,9 +149,8 @@ class FederationSenderReplicationHandler(ReplicationClientHandler):
super(FederationSenderReplicationHandler, self).__init__(hs.get_datastore())
self.send_handler = FederationSenderHandler(hs, self)
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(FederationSenderReplicationHandler, self).on_rdata(
async def on_rdata(self, stream_name, token, rows):
await super(FederationSenderReplicationHandler, self).on_rdata(
stream_name, token, rows
)
self.send_handler.process_replication_rows(stream_name, token, rows)
@@ -160,6 +162,13 @@ class FederationSenderReplicationHandler(ReplicationClientHandler):
args.update(self.send_handler.stream_positions())
return args
def on_remote_server_up(self, server: str):
"""Called when get a new REMOTE_SERVER_UP command."""
# Let's wake up the transaction queue for the server in case we have
# pending stuff to send to it.
self.send_handler.wake_destination(server)
def start(config_options):
try:
@@ -174,8 +183,6 @@ def start(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
if config.send_federation:
sys.stderr.write(
"\nThe send_federation must be disabled in the main synapse process"
@@ -190,10 +197,8 @@ def start(config_options):
ss = FederationSenderServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)
@@ -211,7 +216,7 @@ class FederationSenderHandler(object):
to the federation sender.
"""
def __init__(self, hs, replication_client):
def __init__(self, hs: FederationSenderServer, replication_client):
self.store = hs.get_datastore()
self._is_mine_id = hs.is_mine_id
self.federation_sender = hs.get_federation_sender()
@@ -232,6 +237,9 @@ class FederationSenderHandler(object):
self.store.get_room_max_stream_ordering()
)
def wake_destination(self, server: str):
self.federation_sender.wake_destination(server)
def stream_positions(self):
return {"federation": self.federation_position}
@@ -252,6 +260,20 @@ class FederationSenderHandler(object):
"process_receipts_for_federation", self._on_new_receipts, rows
)
# ... as well as device updates and messages
elif stream_name == DeviceListsStream.NAME:
hosts = set(row.destination for row in rows)
for host in hosts:
self.federation_sender.send_device_messages(host)
elif stream_name == ToDeviceStream.NAME:
# The to_device stream includes stuff to be pushed to both local
# clients and remote servers, so we ignore entities that start with
# '@' (since they'll be local users rather than destinations).
hosts = set(row.entity for row in rows if not row.entity.startswith("@"))
for host in hosts:
self.federation_sender.send_device_messages(host)
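The '@'-prefix routing rule above can be checked in isolation; the row shape here is simplified to just an entity field.

from collections import namedtuple

Row = namedtuple("Row", ["entity"])
rows = [Row("@alice:example.com"), Row("remote.example.org")]

hosts = set(row.entity for row in rows if not row.entity.startswith("@"))
assert hosts == {"remote.example.org"}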
@defer.inlineCallbacks
def _on_new_receipts(self, rows):
"""

View File

@@ -39,7 +39,6 @@ from synapse.replication.slave.storage.registration import SlavedRegistrationSto
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.client.v2_alpha._base import client_patterns
from synapse.server import HomeServer
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -234,14 +233,10 @@ def start(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = FrontendProxyServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -31,7 +31,7 @@ from prometheus_client import Gauge
from twisted.application import service
from twisted.internet import defer, reactor
from twisted.python.failure import Failure
from twisted.web.resource import EncodingResourceWrapper, NoResource
from twisted.web.resource import EncodingResourceWrapper, IResource, NoResource
from twisted.web.server import GzipEncoderFactory
from twisted.web.static import File
@@ -39,7 +39,6 @@ import synapse
import synapse.config.logger
from synapse import events
from synapse.api.urls import (
CONTENT_REPO_PREFIX,
FEDERATION_PREFIX,
LEGACY_MEDIA_PREFIX,
MEDIA_PREFIX,
@@ -65,11 +64,10 @@ from synapse.replication.tcp.resource import ReplicationStreamProtocolFactory
from synapse.rest import ClientRestResource
from synapse.rest.admin import AdminRestResource
from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.well_known import WellKnownResource
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.engines import IncorrectDatabaseSetup
from synapse.storage.prepare_database import UpgradeDatabaseException
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree
@@ -111,7 +109,16 @@ class SynapseHomeServer(HomeServer):
for path, resmodule in additional_resources.items():
handler_cls, config = load_module(resmodule)
handler = handler_cls(config, module_api)
resources[path] = AdditionalResource(self, handler.handle_request)
if IResource.providedBy(handler):
resource = handler
elif hasattr(handler, "handle_request"):
resource = AdditionalResource(self, handler.handle_request)
else:
raise ConfigError(
"additional_resource %s does not implement a known interface"
% (resmodule["module"],)
)
resources[path] = resource
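A hedged sketch of a module that benefits from this change: because twisted.web.resource.Resource already provides IResource, such a handler can now be served directly from additional_resources. Names here are illustrative.

from twisted.web.resource import Resource

class MyModuleResource(Resource):
    isLeaf = True

    def __init__(self, config, module_api):
        Resource.__init__(self)
        self._config = config

    def render_GET(self, request):
        return b"hello from an additional resource"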
# try to find something useful to redirect '/' to
if WEB_CLIENT_PREFIX in resources:
@@ -223,13 +230,7 @@ class SynapseHomeServer(HomeServer):
if self.get_config().enable_media_repo:
media_repo = self.get_media_repository_resource()
resources.update(
{
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
}
{MEDIA_PREFIX: media_repo, LEGACY_MEDIA_PREFIX: media_repo}
)
elif name == "media":
raise ConfigError(
@@ -318,7 +319,7 @@ def setup(config_options):
"Synapse Homeserver", config_options
)
except ConfigError as e:
sys.stderr.write("\n" + str(e) + "\n")
sys.stderr.write("\nERROR: %s\n" % (e,))
sys.exit(1)
if not config:
@@ -328,15 +329,10 @@ def setup(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
config.database_config["args"]["cp_openfun"] = database_engine.on_new_connection
hs = SynapseHomeServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
synapse.config.logger.setup_logging(hs, config, use_worker_options=False)
@@ -347,13 +343,8 @@ def setup(config_options):
hs.setup()
except IncorrectDatabaseSetup as e:
quit_with_error(str(e))
except UpgradeDatabaseException:
sys.stderr.write(
"\nFailed to upgrade database.\n"
"Have you checked for version specific instructions in"
" UPGRADES.rst?\n"
)
sys.exit(1)
except UpgradeDatabaseException as e:
quit_with_error("Failed to upgrade database: %s" % (e,))
hs.setup_master()
@@ -519,8 +510,10 @@ def phone_stats_home(hs, stats, stats_process=_stats_process):
# Database version
#
stats["database_engine"] = hs.database_engine.module.__name__
stats["database_server_version"] = hs.database_engine.server_version
# This only reports info about the *main* database.
stats["database_engine"] = hs.get_datastore().db.engine.module.__name__
stats["database_server_version"] = hs.get_datastore().db.engine.server_version
logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
try:
yield hs.get_proxied_http_client().put_json(

View File

@@ -21,7 +21,7 @@ from twisted.web.resource import NoResource
import synapse
from synapse import events
from synapse.api.urls import CONTENT_REPO_PREFIX, LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
from synapse.api.urls import LEGACY_MEDIA_PREFIX, MEDIA_PREFIX
from synapse.app import _base
from synapse.config._base import ConfigError
from synapse.config.homeserver import HomeServerConfig
@@ -34,13 +34,12 @@ from synapse.replication.slave.storage._base import BaseSlavedStore
from synapse.replication.slave.storage.appservice import SlavedApplicationServiceStore
from synapse.replication.slave.storage.client_ips import SlavedClientIpStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.slave.storage.transactions import SlavedTransactionStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.rest.admin import register_servlets_for_media_repo
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.server import HomeServer
from synapse.storage.data_stores.main.media_repository import MediaRepositoryStore
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -49,6 +48,7 @@ logger = logging.getLogger("synapse.app.media_repository")
class MediaRepositorySlavedStore(
RoomStore,
SlavedApplicationServiceStore,
SlavedRegistrationStore,
SlavedClientIpStore,
@@ -83,9 +83,6 @@ class MediaRepositoryServer(HomeServer):
{
MEDIA_PREFIX: media_repo,
LEGACY_MEDIA_PREFIX: media_repo,
CONTENT_REPO_PREFIX: ContentRepoResource(
self, self.config.uploads_path
),
"/_synapse/admin": admin_resource,
}
)
@@ -157,14 +154,10 @@ def start(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = MediaRepositoryServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -37,7 +37,6 @@ from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.versionstring import get_version_string
@@ -142,9 +141,8 @@ class PusherReplicationHandler(ReplicationClientHandler):
self.pusher_pool = hs.get_pusherpool()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
async def on_rdata(self, stream_name, token, rows):
await super(PusherReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.poke_pushers, stream_name, token, rows)
@defer.inlineCallbacks
@@ -203,14 +201,10 @@ def start(config_options):
# Force the pushers to start since they will be disabled in the main config
config.start_pushers = True
database_engine = create_engine(config.database_config)
ps = PusherServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ps, config, use_worker_options=True)

View File

@@ -48,14 +48,16 @@ from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.registration import SlavedRegistrationStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams.events import EventsStreamEventRow
from synapse.replication.tcp.streams.events import EventsStreamEventRow, EventsStreamRow
from synapse.rest.client.v1 import events
from synapse.rest.client.v1.initial_sync import InitialSyncRestServlet
from synapse.rest.client.v1.room import RoomInitialSyncRestServlet
from synapse.rest.client.v2_alpha import sync
from synapse.server import HomeServer
from synapse.storage.data_stores.main.monthly_active_users import (
MonthlyActiveUsersWorkerStore,
)
from synapse.storage.data_stores.main.presence import UserPresenceState
from synapse.storage.engines import create_engine
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
from synapse.util.stringutils import random_string
@@ -78,6 +80,7 @@ class SynchrotronSlavedStore(
SlavedEventStore,
SlavedClientIpStore,
RoomStore,
MonthlyActiveUsersWorkerStore,
BaseSlavedStore,
):
pass
@@ -359,9 +362,8 @@ class SyncReplicationHandler(ReplicationClientHandler):
self.presence_handler = hs.get_presence_handler()
self.notifier = hs.get_notifier()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
async def on_rdata(self, stream_name, token, rows):
await super(SyncReplicationHandler, self).on_rdata(stream_name, token, rows)
run_in_background(self.process_and_notify, stream_name, token, rows)
def get_streams_to_replicate(self):
@@ -372,8 +374,7 @@ class SyncReplicationHandler(ReplicationClientHandler):
def get_currently_syncing_users(self):
return self.presence_handler.get_currently_syncing_users()
@defer.inlineCallbacks
def process_and_notify(self, stream_name, token, rows):
async def process_and_notify(self, stream_name, token, rows):
try:
if stream_name == "events":
# We shouldn't get multiple rows per token for events stream, so
@@ -381,7 +382,14 @@ class SyncReplicationHandler(ReplicationClientHandler):
for row in rows:
if row.type != EventsStreamEventRow.TypeId:
continue
event = yield self.store.get_event(row.data.event_id)
assert isinstance(row, EventsStreamRow)
event = await self.store.get_event(
row.data.event_id, allow_rejected=True
)
if event.rejected_reason:
continue
extra_users = ()
if event.type == EventTypes.Member:
extra_users = (event.state_key,)
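A hedged sketch of the new guard above: rejected events are now fetched (so the stream does not error out on them) but are never notified on.

async def maybe_notify(store, row):
    event = await store.get_event(row.data.event_id, allow_rejected=True)
    if event.rejected_reason:
        return None  # fetched, but skipped
    return event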
@@ -413,11 +421,11 @@ class SyncReplicationHandler(ReplicationClientHandler):
elif stream_name == "device_lists":
all_room_ids = set()
for row in rows:
room_ids = yield self.store.get_rooms_for_user(row.user_id)
room_ids = await self.store.get_rooms_for_user(row.user_id)
all_room_ids.update(room_ids)
self.notifier.on_new_event("device_list_key", token, rooms=all_room_ids)
elif stream_name == "presence":
yield self.presence_handler.process_replication_rows(token, rows)
await self.presence_handler.process_replication_rows(token, rows)
elif stream_name == "receipts":
self.notifier.on_new_event(
"groups_key", token, users=[row.user_id for row in rows]
@@ -437,14 +445,10 @@ def start(config_options):
synapse.events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
ss = SynchrotronServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
application_service_handler=SynchrotronApplicationService(),
)

View File

@@ -44,7 +44,6 @@ from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.storage.database import Database
from synapse.storage.engines import create_engine
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
@@ -173,9 +172,8 @@ class UserDirectoryReplicationHandler(ReplicationClientHandler):
super(UserDirectoryReplicationHandler, self).__init__(hs.get_datastore())
self.user_directory = hs.get_user_directory_handler()
@defer.inlineCallbacks
def on_rdata(self, stream_name, token, rows):
yield super(UserDirectoryReplicationHandler, self).on_rdata(
async def on_rdata(self, stream_name, token, rows):
await super(UserDirectoryReplicationHandler, self).on_rdata(
stream_name, token, rows
)
if stream_name == EventsStream.NAME:
@@ -200,8 +198,6 @@ def start(config_options):
events.USE_FROZEN_DICTS = config.use_frozen_dicts
database_engine = create_engine(config.database_config)
if config.update_user_directory:
sys.stderr.write(
"\nThe update_user_directory must be disabled in the main synapse process"
@@ -216,10 +212,8 @@ def start(config_options):
ss = UserDirectoryServer(
config.server_name,
db_config=config.database_config,
config=config,
version_string="Synapse/" + get_version_string(synapse),
database_engine=database_engine,
)
setup_logging(ss, config, use_worker_options=True)

View File

@@ -12,12 +12,45 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
import os
from textwrap import indent
import yaml
from ._base import Config
from synapse.config._base import Config, ConfigError
logger = logging.getLogger(__name__)
class DatabaseConnectionConfig:
"""Contains the connection config for a particular database.
Args:
name: A label for the database, used for logging.
db_config: The config for a particular database, as per `database`
section of main config. Has three fields: `name` for database
module name, `args` for the args to give to the database
connector, and optional `data_stores` that is a list of stores to
provision on this database (defaulting to all).
"""
def __init__(self, name: str, db_config: dict):
if db_config["name"] not in ("sqlite3", "psycopg2"):
raise ConfigError("Unsupported database type %r" % (db_config["name"],))
if db_config["name"] == "sqlite3":
db_config.setdefault("args", {}).update(
{"cp_min": 1, "cp_max": 1, "check_same_thread": False}
)
data_stores = db_config.get("data_stores")
if data_stores is None:
data_stores = ["main", "state"]
self.name = name
self.config = db_config
self.data_stores = data_stores
class DatabaseConfig(Config):
@@ -26,22 +59,43 @@ class DatabaseConfig(Config):
def read_config(self, config, **kwargs):
self.event_cache_size = self.parse_size(config.get("event_cache_size", "10K"))
self.database_config = config.get("database")
# We *experimentally* support specifying multiple databases via the
# `databases` key. This is a map from a label to database config in the
# same format as the `database` config option, plus an extra
# `data_stores` key to specify which data store goes where. For example:
#
# databases:
# master:
# name: psycopg2
# data_stores: ["main"]
# args: {}
# state:
# name: psycopg2
# data_stores: ["state"]
# args: {}
if self.database_config is None:
self.database_config = {"name": "sqlite3", "args": {}}
multi_database_config = config.get("databases")
database_config = config.get("database")
if multi_database_config and database_config:
raise ConfigError("Can't specify both 'database' and 'datbases' in config")
if multi_database_config:
if config.get("database_path"):
raise ConfigError("Can't specify 'database_path' with 'databases'")
self.databases = [
DatabaseConnectionConfig(name, db_conf)
for name, db_conf in multi_database_config.items()
]
name = self.database_config.get("name", None)
if name == "psycopg2":
pass
elif name == "sqlite3":
self.database_config.setdefault("args", {}).update(
{"cp_min": 1, "cp_max": 1, "check_same_thread": False}
)
else:
raise RuntimeError("Unsupported database type '%s'" % (name,))
if database_config is None:
database_config = {"name": "sqlite3", "args": {}}
self.set_databasepath(config.get("database_path"))
self.databases = [DatabaseConnectionConfig("master", database_config)]
self.set_databasepath(config.get("database_path"))
def generate_config_section(self, data_dir_path, database_conf, **kwargs):
if not database_conf:
@@ -76,11 +130,24 @@ class DatabaseConfig(Config):
self.set_databasepath(args.database_path)
def set_databasepath(self, database_path):
if database_path is None:
return
if database_path != ":memory:":
database_path = self.abspath(database_path)
if self.database_config.get("name", None) == "sqlite3":
if database_path is not None:
self.database_config["args"]["database"] = database_path
# We only support setting a database path if we have a single sqlite3
# database.
if len(self.databases) != 1:
raise ConfigError("Cannot specify 'database_path' with multiple databases")
database = self.get_single_database()
if database.config["name"] != "sqlite3":
# We don't raise here as we haven't done so before for this case.
logger.warning("Ignoring 'database_path' for non-sqlite3 database")
return
database.config["args"]["database"] = database_path
@staticmethod
def add_arguments(parser):
@@ -91,3 +158,11 @@ class DatabaseConfig(Config):
metavar="SQLITE_DATABASE_PATH",
help="The path to a sqlite database to use.",
)
def get_single_database(self) -> DatabaseConnectionConfig:
"""Returns the database if there is only one, useful for e.g. tests
"""
if len(self.databases) != 1:
raise Exception("More than one database exists")
return self.databases[0]
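A short sketch of the new class in use, mirroring the experimental 'databases' layout documented in the hunk above:

from synapse.config.database import DatabaseConnectionConfig

main_db = DatabaseConnectionConfig(
    "master", {"name": "psycopg2", "args": {}, "data_stores": ["main"]}
)
state_db = DatabaseConnectionConfig(
    "state", {"name": "psycopg2", "args": {}, "data_stores": ["state"]}
)
assert main_db.data_stores == ["main"]
assert state_db.data_stores == ["state"]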

View File

@@ -21,6 +21,7 @@ from __future__ import print_function
import email.utils
import os
from enum import Enum
from typing import Optional
import pkg_resources
@@ -36,10 +37,12 @@ class EmailConfig(Config):
self.email_enable_notifs = False
email_config = config.get("email", {})
email_config = config.get("email")
if email_config is None:
email_config = {}
self.email_smtp_host = email_config.get("smtp_host", None)
self.email_smtp_port = email_config.get("smtp_port", None)
self.email_smtp_host = email_config.get("smtp_host", "localhost")
self.email_smtp_port = email_config.get("smtp_port", 25)
self.email_smtp_user = email_config.get("smtp_user", None)
self.email_smtp_pass = email_config.get("smtp_pass", None)
self.require_transport_security = email_config.get(
@@ -73,9 +76,9 @@ class EmailConfig(Config):
self.email_template_dir = os.path.abspath(template_dir)
self.email_enable_notifs = email_config.get("enable_notifs", False)
account_validity_renewal_enabled = config.get("account_validity", {}).get(
"renew_at"
)
account_validity_config = config.get("account_validity") or {}
account_validity_renewal_enabled = account_validity_config.get("renew_at")
self.threepid_behaviour_email = (
# Have Synapse handle the email sending if account_threepid_delegates.email
@@ -101,7 +104,7 @@ class EmailConfig(Config):
# both in RegistrationConfig and here. We should factor this bit out
self.account_threepid_delegate_email = self.trusted_third_party_id_servers[
0
]
] # type: Optional[str]
self.using_identity_server_from_trusted_list = True
else:
raise ConfigError(
@@ -277,7 +280,9 @@ class EmailConfig(Config):
self.email_notif_for_new_users = email_config.get(
"notif_for_new_users", True
)
self.email_riot_base_url = email_config.get("riot_base_url", None)
self.email_riot_base_url = email_config.get(
"client_base_url", email_config.get("riot_base_url", None)
)
if account_validity_renewal_enabled:
self.email_expiry_template_html = email_config.get(
@@ -293,107 +298,111 @@ class EmailConfig(Config):
raise ConfigError("Unable to find email template file %s" % (p,))
def generate_config_section(self, config_dir_path, server_name, **kwargs):
return """
# Enable sending emails for password resets, notification events or
# account expiry notices
return """\
# Configuration for sending emails from Synapse.
#
# If your SMTP server requires authentication, the optional smtp_user &
# smtp_pass variables should be used
#
#email:
# enable_notifs: false
# smtp_host: "localhost"
# smtp_port: 25 # SSL: 465, STARTTLS: 587
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
#
# # notif_from defines the "From" address to use when sending emails.
# # It must be set if email sending is enabled.
# #
# # The placeholder '%(app)s' will be replaced by the application name,
# # which is normally 'app_name' (below), but may be overridden by the
# # Matrix client application.
# #
# # Note that the placeholder must be written '%(app)s', including the
# # trailing 's'.
# #
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
#
# # app_name defines the default value for '%(app)s' in notif_from. It
# # defaults to 'Matrix'.
# #
# #app_name: my_branded_matrix_server
#
# # Enable email notifications by default
# #
# notif_for_new_users: true
#
# # Defining a custom URL for Riot is only needed if email notifications
# # should contain links to a self-hosted installation of Riot; when set
# # the "app_name" setting is ignored
# #
# riot_base_url: "http://localhost/riot"
#
# # Configure the time that a validation email or text message code
# # will expire after sending
# #
# # This is currently used for password resets
# #
# #validation_token_lifetime: 1h
#
# # Template directory. All template files should be stored within this
# # directory. If not set, default templates from within the Synapse
# # package will be used
# #
# # For the list of default templates, please see
# # https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
# #
# #template_dir: res/templates
#
# # Templates for email notifications
# #
# notif_template_html: notif_mail.html
# notif_template_text: notif_mail.txt
#
# # Templates for account expiry notices
# #
# expiry_template_html: notice_expiry.html
# expiry_template_text: notice_expiry.txt
#
# # Templates for password reset emails sent by the homeserver
# #
# #password_reset_template_html: password_reset.html
# #password_reset_template_text: password_reset.txt
#
# # Templates for registration emails sent by the homeserver
# #
# #registration_template_html: registration.html
# #registration_template_text: registration.txt
#
# # Templates for validation emails sent by the homeserver when adding an email to
# # your user account
# #
# #add_threepid_template_html: add_threepid.html
# #add_threepid_template_text: add_threepid.txt
#
# # Templates for password reset success and failure pages that a user
# # will see after attempting to reset their password
# #
# #password_reset_template_success_html: password_reset_success.html
# #password_reset_template_failure_html: password_reset_failure.html
#
# # Templates for registration success and failure pages that a user
# # will see after attempting to register using an email or phone
# #
# #registration_template_success_html: registration_success.html
# #registration_template_failure_html: registration_failure.html
#
# # Templates for success and failure pages that a user will see after attempting
# # to add an email or phone to their account
# #
# #add_threepid_success_html: add_threepid_success.html
# #add_threepid_failure_html: add_threepid_failure.html
email:
# The hostname of the outgoing SMTP server to use. Defaults to 'localhost'.
#
#smtp_host: mail.server
# The port on the mail server for outgoing SMTP. Defaults to 25.
#
#smtp_port: 587
# Username/password for authentication to the SMTP server. By default, no
# authentication is attempted.
#
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# Uncomment the following to require TLS transport security for SMTP.
# By default, Synapse will connect over plain text, and will then switch to
# TLS via STARTTLS *if the SMTP server supports it*. If this option is set,
# Synapse will refuse to connect unless the server supports STARTTLS.
#
#require_transport_security: true
# Enable sending emails for messages that the user has missed
#
#enable_notifs: false
# notif_from defines the "From" address to use when sending emails.
# It must be set if email sending is enabled.
#
# The placeholder '%(app)s' will be replaced by the application name,
# which is normally 'app_name' (below), but may be overridden by the
# Matrix client application.
#
# Note that the placeholder must be written '%(app)s', including the
# trailing 's'.
#
#notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name defines the default value for '%(app)s' in notif_from. It
# defaults to 'Matrix'.
#
#app_name: my_branded_matrix_server
# Uncomment the following to disable automatic subscription to email
# notifications for new users. Enabled by default.
#
#notif_for_new_users: false
# Custom URL for client links within the email notifications. By default
# links will be based on "https://matrix.to".
#
# (This setting used to be called riot_base_url; the old name is still
# supported for backwards-compatibility but is now deprecated.)
#
#client_base_url: "http://localhost/riot"
# Configure the time that a validation email will expire after sending.
# Defaults to 1h.
#
#validation_token_lifetime: 15m
# Directory in which Synapse will try to find the template files below.
# If not set, default templates from within the Synapse package will be used.
#
# DO NOT UNCOMMENT THIS SETTING unless you want to customise the templates.
# If you *do* uncomment it, you will need to make sure that all the templates
# below are in the directory.
#
# Synapse will look for the following templates in this directory:
#
# * The contents of email notifications of missed events: 'notif_mail.html' and
# 'notif_mail.txt'.
#
# * The contents of account expiry notice emails: 'notice_expiry.html' and
# 'notice_expiry.txt'.
#
# * The contents of password reset emails sent by the homeserver:
# 'password_reset.html' and 'password_reset.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in the password reset email: 'password_reset_success.html' and
# 'password_reset_failure.html'
#
# * The contents of address verification emails sent during registration:
# 'registration.html' and 'registration.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent during registration:
# 'registration_success.html' and 'registration_failure.html'
#
# * The contents of address verification emails sent when an address is added
# to a Matrix account: 'add_threepid.html' and 'add_threepid.txt'
#
# * HTML pages for success and failure that a user will see when they follow
# the link in an address verification email sent when an address is added
# to a Matrix account: 'add_threepid_success.html' and
# 'add_threepid_failure.html'
#
# You can see the default templates at:
# https://github.com/matrix-org/synapse/tree/master/synapse/res/templates
#
#template_dir: "res/templates"
"""

View File

@@ -108,7 +108,7 @@ class KeyConfig(Config):
self.signing_key = self.read_signing_keys(signing_key_path, "signing_key")
self.old_signing_keys = self.read_old_signing_keys(
config.get("old_signing_keys", {})
config.get("old_signing_keys")
)
self.key_refresh_interval = self.parse_duration(
config.get("key_refresh_interval", "1d")
@@ -199,14 +199,19 @@ class KeyConfig(Config):
signing_key_path: "%(base_key_name)s.signing.key"
# The keys that the server used to sign messages with but won't use
# to sign new messages. E.g. it has lost its private key
# to sign new messages.
#
#old_signing_keys:
# "ed25519:auto":
# # Base64 encoded public key
# key: "The public part of your old signing key."
# # Millisecond POSIX timestamp when the key expired.
# expired_ts: 123456789123
old_signing_keys:
# For each key, `key` should be the base64-encoded public key, and
# `expired_ts` should be the time (in milliseconds since the unix epoch) that
# it was last used.
#
# It is possible to build an entry from an old signing.key file using the
# `export_signing_key` script which is provided with synapse.
#
# For example:
#
#"ed25519:id": { key: "base64string", expired_ts: 123456789123 }
# How long a key response published by this server is valid for.
# Used to set the valid_until_ts in /key/v2 APIs.
@@ -290,6 +295,8 @@ class KeyConfig(Config):
raise ConfigError("Error reading %s: %s" % (name, str(e)))
def read_old_signing_keys(self, old_signing_keys):
if old_signing_keys is None:
return {}
keys = {}
for key_id, key_data in old_signing_keys.items():
if is_signing_algorithm_supported(key_id):

View File

@@ -12,7 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import logging.config
import os
@@ -37,10 +37,17 @@ from synapse.logging._structured import (
from synapse.logging.context import LoggingContextFilter
from synapse.util.versionstring import get_version_string
from ._base import Config
from ._base import Config, ConfigError
DEFAULT_LOG_CONFIG = Template(
"""
"""\
# Log configuration for Synapse.
#
# This is a YAML file containing a standard Python logging configuration
# dictionary. See [1] for details on the valid settings.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
version: 1
formatters:
@@ -81,11 +88,18 @@ disable_existing_loggers: false
"""
)
LOG_FILE_ERROR = """\
Support for the log_file configuration option and --log-file command-line option was
removed in Synapse 1.3.0. You should instead set up a separate log configuration file.
"""
class LoggingConfig(Config):
section = "logging"
def read_config(self, config, **kwargs):
if config.get("log_file"):
raise ConfigError(LOG_FILE_ERROR)
self.log_config = self.abspath(config.get("log_config"))
self.no_redirect_stdio = config.get("no_redirect_stdio", False)
@@ -106,6 +120,8 @@ class LoggingConfig(Config):
def read_arguments(self, args):
if args.no_redirect_stdio is not None:
self.no_redirect_stdio = args.no_redirect_stdio
if args.log_file is not None:
raise ConfigError(LOG_FILE_ERROR)
@staticmethod
def add_arguments(parser):
@@ -118,6 +134,10 @@ class LoggingConfig(Config):
help="Do not redirect stdout/stderr to the log",
)
logging_group.add_argument(
"-f", "--log-file", dest="log_file", help=argparse.SUPPRESS,
)
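The argparse.SUPPRESS trick above keeps the removed -f/--log-file flag parseable, so old command lines fail with a clear error rather than an argparse usage message, while hiding the flag from --help. A standalone illustration:

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-f", "--log-file", dest="log_file", help=argparse.SUPPRESS)

args = parser.parse_args(["-f", "old.log"])
if args.log_file is not None:
    raise SystemExit("Support for --log-file was removed; use a log config file")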
def generate_files(self, config, config_dir_path):
log_config = config.get("log_config")
if log_config and not os.path.exists(log_config):

View File

@@ -35,7 +35,7 @@ class PushConfig(Config):
# Now check for the one in the 'email' section and honour it,
# with a warning.
push_config = config.get("email", {})
push_config = config.get("email") or {}
redact_content = push_config.get("redact_content")
if redact_content is not None:
print(

View File

@@ -83,10 +83,9 @@ class RatelimitConfig(Config):
)
rc_admin_redaction = config.get("rc_admin_redaction")
self.rc_admin_redaction = None
if rc_admin_redaction:
self.rc_admin_redaction = RateLimitConfig(rc_admin_redaction)
else:
self.rc_admin_redaction = None
def generate_config_section(self, **kwargs):
return """\

View File

@@ -27,6 +27,9 @@ class AccountValidityConfig(Config):
section = "accountvalidity"
def __init__(self, config, synapse_config):
if config is None:
return
super(AccountValidityConfig, self).__init__()
self.enabled = config.get("enabled", False)
self.renew_by_email_enabled = "renew_at" in config
@@ -91,7 +94,7 @@ class RegistrationConfig(Config):
)
self.account_validity = AccountValidityConfig(
config.get("account_validity", {}), config
config.get("account_validity") or {}, config
)
self.registrations_require_3pid = config.get("registrations_require_3pid", [])
@@ -159,23 +162,6 @@ class RegistrationConfig(Config):
# Optional account validity configuration. This allows for accounts to be denied
# any request after a given period.
#
# ``enabled`` defines whether the account validity feature is enabled. Defaults
# to False.
#
# ``period`` allows setting the period after which an account is valid
# after its registration. When renewing the account, its validity period
# will be extended by this amount of time. This parameter is required when using
# the account validity feature.
#
# ``renew_at`` is the amount of time before an account's expiry date at which
# Synapse will send an email to the account's email address with a renewal link.
# This needs the ``email`` and ``public_baseurl`` configuration sections to be
# filled.
#
# ``renew_email_subject`` is the subject of the email sent out with the renewal
# link. ``%%(app)s`` can be used as a placeholder for the ``app_name`` parameter
# from the ``email`` section.
#
# Once this feature is enabled, Synapse will look for registered users without an
# expiration date at startup and will add one to every account it found using the
# current settings at that time.
@@ -186,21 +172,55 @@ class RegistrationConfig(Config):
# date will be randomly selected within a range [now + period - d ; now + period],
# where d is equal to 10%% of the validity period.
#
#account_validity:
# enabled: true
# period: 6w
# renew_at: 1w
# renew_email_subject: "Renew your %%(app)s account"
# # Directory in which Synapse will try to find the HTML files to serve to the
# # user when trying to renew an account. Optional, defaults to
# # synapse/res/templates.
# template_dir: "res/templates"
# # HTML to be displayed to the user after they successfully renewed their
# # account. Optional.
# account_renewed_html_path: "account_renewed.html"
# # HTML to be displayed when the user tries to renew an account with an invalid
# # renewal token. Optional.
# invalid_token_html_path: "invalid_token.html"
account_validity:
# The account validity feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# The period after which an account is valid after its registration. When
# renewing the account, its validity period will be extended by this amount
# of time. This parameter is required when using the account validity
# feature.
#
#period: 6w
# The amount of time before an account's expiry date at which Synapse will
# send an email to the account's email address with a renewal link. By
# default, no such emails are sent.
#
# If you enable this setting, you will also need to fill out the 'email' and
# 'public_baseurl' configuration sections.
#
#renew_at: 1w
# The subject of the email sent out with the renewal link. '%%(app)s' can be
# used as a placeholder for the 'app_name' parameter from the 'email'
# section.
#
# Note that the placeholder must be written '%%(app)s', including the
# trailing 's'.
#
# If this is not set, a default value is used.
#
#renew_email_subject: "Renew your %%(app)s account"
# Directory in which Synapse will try to find templates for the HTML files to
# serve to the user when trying to renew an account. If not set, default
# templates from within the Synapse package will be used.
#
#template_dir: "res/templates"
# File within 'template_dir' giving the HTML to be displayed to the user after
# they successfully renewed their account. If not set, default text is used.
#
#account_renewed_html_path: "account_renewed.html"
# File within 'template_dir' giving the HTML to be displayed when the user
# tries to renew an account with an invalid renewal token. If not set,
# default text is used.
#
#invalid_token_html_path: "invalid_token.html"
# Time that a user's session remains valid for, after they log in.
#

View File

@@ -156,7 +156,6 @@ class ContentRepositoryConfig(Config):
(provider_class, parsed_config, wrapper_config)
)
self.uploads_path = self.ensure_directory(config.get("uploads_path", "uploads"))
self.dynamic_thumbnails = config.get("dynamic_thumbnails", False)
self.thumbnail_requirements = parse_thumbnail_requirements(
config.get("thumbnail_sizes", DEFAULT_THUMBNAIL_SIZES)
@@ -231,10 +230,6 @@ class ContentRepositoryConfig(Config):
# config:
# directory: /mnt/some/other/directory
# Directory where in-progress uploads are stored.
#
uploads_path: "%(uploads_path)s"
# The largest allowed upload size in bytes
#
#max_upload_size: 10M

View File

@@ -14,17 +14,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import re
import logging
from synapse.python_dependencies import DependencyException, check_requirements
from synapse.types import (
map_username_to_mxid_localpart,
mxid_localpart_allowed_characters,
)
from synapse.util.module_loader import load_python_module
from synapse.util.module_loader import load_module, load_python_module
from ._base import Config, ConfigError
logger = logging.getLogger(__name__)
DEFAULT_USER_MAPPING_PROVIDER = (
"synapse.handlers.saml_handler.DefaultSamlMappingProvider"
)
def _dict_merge(merge_dict, into_dict):
"""Do a deep merge of two dicts
@@ -75,15 +77,70 @@ class SAML2Config(Config):
self.saml2_enabled = True
self.saml2_mxid_source_attribute = saml2_config.get(
"mxid_source_attribute", "uid"
)
self.saml2_grandfathered_mxid_source_attribute = saml2_config.get(
"grandfathered_mxid_source_attribute", "uid"
)
saml2_config_dict = self._default_saml_config_dict()
# user_mapping_provider may be None if the key is present but has no value
ump_dict = saml2_config.get("user_mapping_provider") or {}
# Use the default user mapping provider if not set
ump_dict.setdefault("module", DEFAULT_USER_MAPPING_PROVIDER)
# Ensure a config is present
ump_dict["config"] = ump_dict.get("config") or {}
if ump_dict["module"] == DEFAULT_USER_MAPPING_PROVIDER:
# Load deprecated options for use by the default module
old_mxid_source_attribute = saml2_config.get("mxid_source_attribute")
if old_mxid_source_attribute:
logger.warning(
"The config option saml2_config.mxid_source_attribute is deprecated. "
"Please use saml2_config.user_mapping_provider.config"
".mxid_source_attribute instead."
)
ump_dict["config"]["mxid_source_attribute"] = old_mxid_source_attribute
old_mxid_mapping = saml2_config.get("mxid_mapping")
if old_mxid_mapping:
logger.warning(
"The config option saml2_config.mxid_mapping is deprecated. Please "
"use saml2_config.user_mapping_provider.config.mxid_mapping instead."
)
ump_dict["config"]["mxid_mapping"] = old_mxid_mapping
# Retrieve an instance of the module's class
# Pass the config dictionary to the module for processing
(
self.saml2_user_mapping_provider_class,
self.saml2_user_mapping_provider_config,
) = load_module(ump_dict)
# Ensure loaded user mapping module has defined all necessary methods
# Note parse_config() is already checked during the call to load_module
required_methods = [
"get_saml_attributes",
"saml_response_to_user_attributes",
"get_remote_user_id",
]
missing_methods = [
method
for method in required_methods
if not hasattr(self.saml2_user_mapping_provider_class, method)
]
if missing_methods:
raise ConfigError(
"Class specified by saml2_config."
"user_mapping_provider.module is missing required "
"methods: %s" % (", ".join(missing_methods),)
)
# Get the desired saml auth response attributes from the module
saml2_config_dict = self._default_saml_config_dict(
*self.saml2_user_mapping_provider_class.get_saml_attributes(
self.saml2_user_mapping_provider_config
)
)
_dict_merge(
merge_dict=saml2_config.get("sp_config", {}), into_dict=saml2_config_dict
)
@@ -103,22 +160,27 @@ class SAML2Config(Config):
saml2_config.get("saml_session_lifetime", "5m")
)
mapping = saml2_config.get("mxid_mapping", "hexencode")
try:
self.saml2_mxid_mapper = MXID_MAPPER_MAP[mapping]
except KeyError:
raise ConfigError("%s is not a known mxid_mapping" % (mapping,))
def _default_saml_config_dict(self):
def _default_saml_config_dict(
self, required_attributes: set, optional_attributes: set
):
"""Generate a configuration dictionary with required and optional attributes that
will be needed to process new user registration
Args:
required_attributes: SAML auth response attributes that are
necessary to function
optional_attributes: SAML auth response attributes that can be used to add
additional information to Synapse user accounts, but are not required
Returns:
dict: A SAML configuration dictionary
"""
import saml2
public_baseurl = self.public_baseurl
if public_baseurl is None:
raise ConfigError("saml2_config requires a public_baseurl to be set")
required_attributes = {"uid", self.saml2_mxid_source_attribute}
optional_attributes = {"displayName"}
if self.saml2_grandfathered_mxid_source_attribute:
optional_attributes.add(self.saml2_grandfathered_mxid_source_attribute)
optional_attributes -= required_attributes
@@ -207,33 +269,58 @@ class SAML2Config(Config):
#
#config_path: "%(config_dir_path)s/sp_conf.py"
# the lifetime of a SAML session. This defines how long a user has to
# The lifetime of a SAML session. This defines how long a user has to
# complete the authentication process, if allow_unsolicited is unset.
# The default is 5 minutes.
#
#saml_session_lifetime: 5m
# The SAML attribute (after mapping via the attribute maps) to use to derive
# the Matrix ID from. 'uid' by default.
# An external module can be provided here as a custom solution to
# mapping attributes returned from a saml provider onto a matrix user.
#
#mxid_source_attribute: displayName
user_mapping_provider:
# The custom module's class. Uncomment to use a custom module.
#
#module: mapping_provider.SamlMappingProvider
# The mapping system to use for mapping the saml attribute onto a matrix ID.
# Options include:
# * 'hexencode' (which maps unpermitted characters to '=xx')
# * 'dotreplace' (which replaces unpermitted characters with '.').
# The default is 'hexencode'.
#
#mxid_mapping: dotreplace
# Custom configuration values for the module. Below options are
# intended for the built-in provider, they should be changed if
# using a custom module. This section will be passed as a Python
# dictionary to the module's `parse_config` method.
#
config:
# The SAML attribute (after mapping via the attribute maps) to use
# to derive the Matrix ID from. 'uid' by default.
#
# Note: This used to be configured by the
# saml2_config.mxid_source_attribute option. If that is still
# defined, its value will be used instead.
#
#mxid_source_attribute: displayName
# In previous versions of synapse, the mapping from SAML attribute to MXID was
# always calculated dynamically rather than stored in a table. For backwards-
# compatibility, we will look for user_ids matching such a pattern before
# creating a new account.
# The mapping system to use for mapping the saml attribute onto a
# matrix ID.
#
# Options include:
# * 'hexencode' (which maps unpermitted characters to '=xx')
# * 'dotreplace' (which replaces unpermitted characters with
# '.').
# The default is 'hexencode'.
#
# Note: This used to be configured by the
# saml2_config.mxid_mapping option. If that is still defined, its
# value will be used instead.
#
#mxid_mapping: dotreplace
# In previous versions of synapse, the mapping from SAML attribute to
# MXID was always calculated dynamically rather than stored in a
# table. For backwards-compatibility, we will look for user_ids
# matching such a pattern before creating a new account.
#
# This setting controls the SAML attribute which will be used for this
# backwards-compatibility lookup. Typically it should be 'uid', but if the
# attribute maps are changed, it may be necessary to change it.
# backwards-compatibility lookup. Typically it should be 'uid', but if
# the attribute maps are changed, it may be necessary to change it.
#
# The default is 'uid'.
#
@@ -241,23 +328,3 @@ class SAML2Config(Config):
""" % {
"config_dir_path": config_dir_path
}
DOT_REPLACE_PATTERN = re.compile(
("[^%s]" % (re.escape("".join(mxid_localpart_allowed_characters)),))
)
def dot_replace_for_mxid(username: str) -> str:
username = username.lower()
username = DOT_REPLACE_PATTERN.sub(".", username)
# regular mxids aren't allowed to start with an underscore either
username = re.sub("^_", "", username)
return username
MXID_MAPPER_MAP = {
"hexencode": map_username_to_mxid_localpart,
"dotreplace": dot_replace_for_mxid,
}
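To illustrate the two mapping styles registered above (the exact escaping is defined by `map_username_to_mxid_localpart` in `synapse.types`, so the hexencode output is hedged):

from synapse.types import map_username_to_mxid_localpart

map_username_to_mxid_localpart("Bob Smith")  # hexencode: e.g. "bob=20smith"
dot_replace_for_mxid("Bob Smith")            # dotreplace: "bob.smith"
dot_replace_for_mxid("_hidden")              # "hidden" (leading underscore stripped)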


@@ -102,6 +102,12 @@ class ServerConfig(Config):
"require_auth_for_profile_requests", False
)
# Whether to require sharing a room with a user to retrieve their
# profile data
self.limit_profile_requests_to_users_who_share_rooms = config.get(
"limit_profile_requests_to_users_who_share_rooms", False,
)
if "restrict_public_rooms_to_local_users" in config and (
"allow_public_rooms_without_auth" in config
or "allow_public_rooms_over_federation" in config
@@ -200,7 +206,7 @@ class ServerConfig(Config):
self.admin_contact = config.get("admin_contact", None)
# FIXME: federation_domain_whitelist needs sytests
self.federation_domain_whitelist = None
self.federation_domain_whitelist = None # type: Optional[dict]
federation_domain_whitelist = config.get("federation_domain_whitelist", None)
if federation_domain_whitelist is not None:
@@ -288,6 +294,14 @@ class ServerConfig(Config):
self.retention_default_min_lifetime = None
self.retention_default_max_lifetime = None
if self.retention_enabled:
logger.info(
"Message retention policies support enabled with the following default"
" policy: min_lifetime = %s ; max_lifetime = %s",
self.retention_default_min_lifetime,
self.retention_default_max_lifetime,
)
self.retention_allowed_lifetime_min = retention_config.get(
"allowed_lifetime_min"
)
@@ -621,6 +635,13 @@ class ServerConfig(Config):
#
#require_auth_for_profile_requests: true
# Uncomment to require a user to share a room with another user in order
# to retrieve their profile information. Only checked on Client-Server
# requests. Profile requests from other servers should be checked by the
# requesting server. Defaults to 'false'.
#
#limit_profile_requests_to_users_who_share_rooms: true
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
@@ -935,17 +956,17 @@ class ServerConfig(Config):
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
# of outdated messages on a very frequent basis (e.g. every 5min), but not want
# that purge to be performed by a job that's iterating over every room it knows,
# which would be quite heavy on the server.
# of outdated messages on a more frequent basis than for the rest of the rooms
# (e.g. every 12h), without that purge being performed by a job that's
# iterating over every room it knows, which could be heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 5m
# interval: 12h
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 24h
# interval: 1d
"""
% locals()
)


@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
#
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -14,16 +15,20 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import collections.abc
import hashlib
import logging
from typing import Dict
from canonicaljson import encode_canonical_json
from signedjson.sign import sign_json
from signedjson.types import SigningKey
from unpaddedbase64 import decode_base64, encode_base64
from synapse.api.errors import Codes, SynapseError
from synapse.api.room_versions import RoomVersion
from synapse.events.utils import prune_event, prune_event_dict
from synapse.types import JsonDict
logger = logging.getLogger(__name__)
@@ -40,8 +45,11 @@ def check_event_content_hash(event, hash_algorithm=hashlib.sha256):
# some malformed events lack a 'hashes'. Protect against it being missing
# or a weird type by basically treating it the same as an unhashed event.
hashes = event.get("hashes")
if not isinstance(hashes, dict):
raise SynapseError(400, "Malformed 'hashes'", Codes.UNAUTHORIZED)
# nb it might be a frozendict or a dict
if not isinstance(hashes, collections.abc.Mapping):
raise SynapseError(
400, "Malformed 'hashes': %s" % (type(hashes),), Codes.UNAUTHORIZED
)
if name not in hashes:
raise SynapseError(
@@ -109,18 +117,28 @@ def compute_event_reference_hash(event, hash_algorithm=hashlib.sha256):
return hashed.name, hashed.digest()
def compute_event_signature(event_dict, signature_name, signing_key):
def compute_event_signature(
room_version: RoomVersion,
event_dict: JsonDict,
signature_name: str,
signing_key: SigningKey,
) -> Dict[str, Dict[str, str]]:
"""Compute the signature of the event for the given name and key.
Args:
event_dict (dict): The event as a dict
signature_name (str): The name of the entity signing the event
room_version: the version of the room that this event is in.
(the room version determines the redaction algorithm and hence the
json to be signed)
event_dict: The event as a dict
signature_name: The name of the entity signing the event
(typically the server's hostname).
signing_key (syutil.crypto.SigningKey): The key to sign with
signing_key: The key to sign with
Returns:
dict[str, dict[str, str]]: Returns a dictionary in the same format of
an event's signatures field.
a dictionary in the same format as an event's signatures field.
"""
redact_json = prune_event_dict(event_dict)
redact_json.pop("age_ts", None)
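The rest of the function is elided by the hunk; end to end, the flow is "prune the event, then sign the pruned JSON". A self-contained sketch using signedjson (the key version and event content here are illustrative):

from signedjson.key import generate_signing_key
from signedjson.sign import sign_json

signing_key = generate_signing_key("key1")  # ed25519 key, version "key1"
pruned = {"type": "m.room.message", "room_id": "!room:example.org", "content": {}}
signed = sign_json(dict(pruned), "example.org", signing_key)
# signed["signatures"] == {"example.org": {"ed25519:key1": "<unpadded base64>"}}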
@@ -134,23 +152,26 @@ def compute_event_signature(event_dict, signature_name, signing_key):
def add_hashes_and_signatures(
event_dict, signature_name, signing_key, hash_algorithm=hashlib.sha256
room_version: RoomVersion,
event_dict: JsonDict,
signature_name: str,
signing_key: SigningKey,
):
"""Add content hash and sign the event
Args:
event_dict (dict): The event to add hashes to and sign
signature_name (str): The name of the entity signing the event
room_version: the version of the room this event is in
event_dict: The event to add hashes to and sign
signature_name: The name of the entity signing the event
(typically the server's hostname).
signing_key (syutil.crypto.SigningKey): The key to sign with
hash_algorithm: A hasher from `hashlib`, e.g. hashlib.sha256, to use
to hash the event
signing_key: The key to sign with
"""
name, digest = compute_content_hash(event_dict, hash_algorithm=hash_algorithm)
name, digest = compute_content_hash(event_dict, hash_algorithm=hashlib.sha256)
event_dict.setdefault("hashes", {})[name] = encode_base64(digest)
event_dict["signatures"] = compute_event_signature(
event_dict, signature_name=signature_name, signing_key=signing_key
room_version, event_dict, signature_name=signature_name, signing_key=signing_key
)


@@ -511,17 +511,18 @@ class BaseV2KeyFetcher(object):
server_name = response_json["server_name"]
verified = False
for key_id in response_json["signatures"].get(server_name, {}):
# each of the keys used for the signature must be present in the response
# json.
key = verify_keys.get(key_id)
if not key:
raise KeyLookupError(
"Key response is signed by key id %s:%s but that key is not "
"present in the response" % (server_name, key_id)
)
# the key may not be present in verify_keys if:
# * we got the key from the notary server, and:
# * the key belongs to the notary server, and:
# * the notary server is using a different key to sign notary
# responses.
continue
verify_signed_json(response_json, server_name, key.verify_key)
verified = True
break
if not verified:
raise KeyLookupError(


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2014 - 2016 OpenMarket Ltd
# Copyright 2020 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -23,17 +24,27 @@ from unpaddedbase64 import decode_base64
from synapse.api.constants import EventTypes, JoinRules, Membership
from synapse.api.errors import AuthError, EventSizeError, SynapseError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions
from synapse.api.room_versions import (
KNOWN_ROOM_VERSIONS,
EventFormatVersions,
RoomVersion,
)
from synapse.types import UserID, get_domain_from_id
logger = logging.getLogger(__name__)
def check(room_version, event, auth_events, do_sig_check=True, do_size_check=True):
def check(
room_version_obj: RoomVersion,
event,
auth_events,
do_sig_check=True,
do_size_check=True,
):
""" Checks if this event is correctly authed.
Args:
room_version (str): the version of the room
room_version_obj: the version of the room
event: the event being checked.
auth_events (dict: event-key -> event): the existing room state.
@@ -43,6 +54,8 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
Returns:
None if the auth checks pass; raises AuthError otherwise.
"""
assert isinstance(auth_events, dict)
if do_size_check:
_check_size_limits(event)
@@ -87,13 +100,12 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
if not event.signatures.get(event_id_domain):
raise AuthError(403, "Event not signed by sending server")
if auth_events is None:
# Oh, we don't know what the state of the room was, so we
# are trusting that this is allowed (at least for now)
logger.warning("Trusting event: %s", event.event_id)
return
# Implementation of https://matrix.org/docs/spec/rooms/v1#authorization-rules
#
# 1. If type is m.room.create:
if event.type == EventTypes.Create:
# 1b. If the domain of the room_id does not match the domain of the sender,
# reject.
sender_domain = get_domain_from_id(event.sender)
room_id_domain = get_domain_from_id(event.room_id)
if room_id_domain != sender_domain:
@@ -101,39 +113,49 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
403, "Creation event's room_id domain does not match sender's"
)
room_version = event.content.get("room_version", "1")
if room_version not in KNOWN_ROOM_VERSIONS:
# 1c. If content.room_version is present and is not a recognised version, reject
room_version_prop = event.content.get("room_version", "1")
if room_version_prop not in KNOWN_ROOM_VERSIONS:
raise AuthError(
403, "room appears to have unsupported version %s" % (room_version,)
403,
"room appears to have unsupported version %s" % (room_version_prop,),
)
# FIXME
logger.debug("Allowing! %s", event)
return
# 3. If event does not have a m.room.create in its auth_events, reject.
creation_event = auth_events.get((EventTypes.Create, ""), None)
if not creation_event:
raise AuthError(403, "No create event in auth events")
# additional check for m.federate
creating_domain = get_domain_from_id(event.room_id)
originating_domain = get_domain_from_id(event.sender)
if creating_domain != originating_domain:
if not _can_federate(event, auth_events):
raise AuthError(403, "This room has been marked as unfederatable.")
# FIXME: Temp hack
# 4. If type is m.room.aliases
if event.type == EventTypes.Aliases:
# 4a. If event has no state_key, reject
if not event.is_state():
raise AuthError(403, "Alias event must be a state event")
if not event.state_key:
raise AuthError(403, "Alias event must have non-empty state_key")
# 4b. If sender's domain doesn't matches [sic] state_key, reject
sender_domain = get_domain_from_id(event.sender)
if event.state_key != sender_domain:
raise AuthError(
403, "Alias event's state_key does not match sender's domain"
)
logger.debug("Allowing! %s", event)
return
# 4c. Otherwise, allow.
# This is removed by https://github.com/matrix-org/matrix-doc/pull/2260
if room_version_obj.special_case_aliases_auth:
logger.debug("Allowing! %s", event)
return
if logger.isEnabledFor(logging.DEBUG):
logger.debug("Auth events: %s", [a.event_id for a in auth_events.values()])
@@ -164,7 +186,7 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
_check_power_levels(event, auth_events)
if event.type == EventTypes.Redaction:
check_redaction(room_version, event, auth_events)
check_redaction(room_version_obj, event, auth_events)
logger.debug("Allowing! %s", event)
@@ -390,7 +412,7 @@ def _can_send_event(event, auth_events):
return True
def check_redaction(room_version, event, auth_events):
def check_redaction(room_version_obj: RoomVersion, event, auth_events):
"""Check whether the event sender is allowed to redact the target event.
Returns:
@@ -410,11 +432,7 @@ def check_redaction(room_version, event, auth_events):
if user_level >= redact_level:
return False
v = KNOWN_ROOM_VERSIONS.get(room_version)
if not v:
raise RuntimeError("Unrecognized room version %r" % (room_version,))
if v.event_format == EventFormatVersions.V1:
if room_version_obj.event_format == EventFormatVersions.V1:
redacter_domain = get_domain_from_id(event.event_id)
redactee_domain = get_domain_from_id(event.redacts)
if redacter_domain == redactee_domain:
@@ -638,7 +656,7 @@ def get_public_keys(invite_event):
return public_keys
def auth_types_for_event(event) -> Set[Tuple[str]]:
def auth_types_for_event(event) -> Set[Tuple[str, str]]:
"""Given an event, return a list of (EventType, StateKey) that may be
needed to auth the event. The returned list may be a superset of what
would actually be required depending on the full state of the room.
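As a hedged illustration of the tightened annotation: for an m.room.member event the result is a set of (event type, state_key) pairs along these lines (the user IDs are examples):

{
    ("m.room.create", ""),
    ("m.room.power_levels", ""),
    ("m.room.join_rules", ""),
    ("m.room.member", "@alice:example.org"),  # the sender
    ("m.room.member", "@bob:example.org"),    # the event's state_key
}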


@@ -23,6 +23,7 @@ from unpaddedbase64 import encode_base64
from synapse.api.errors import UnsupportedRoomVersionError
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, EventFormatVersions
from synapse.types import JsonDict
from synapse.util.caches import intern_dict
from synapse.util.frozenutils import freeze
@@ -116,16 +117,32 @@ class _EventInternalMetadata(object):
return getattr(self, "redacted", False)
def _event_dict_property(key):
_SENTINEL = object()
def _event_dict_property(key, default=_SENTINEL):
"""Creates a new property for the given key that delegates access to
`self._event_dict`.
The default is used if the key is missing from the `_event_dict`, if given,
otherwise an AttributeError will be raised.
Note: If a default is given then `hasattr` will always return true.
"""
# We want to be able to use hasattr with the event dict properties.
# However, on Python 3, hasattr expects AttributeError to be raised. Hence,
# we need to transform the KeyError into an AttributeError
def getter(self):
def getter_raises(self):
try:
return self._event_dict[key]
except KeyError:
raise AttributeError(key)
def getter_default(self):
return self._event_dict.get(key, default)
def setter(self, v):
try:
self._event_dict[key] = v
@@ -138,7 +155,11 @@ def _event_dict_property(key):
except KeyError:
raise AttributeError(key)
return property(getter, setter, delete)
if default is _SENTINEL:
# No default given, so use the getter that raises
return property(getter_raises, setter, delete)
else:
return property(getter_default, setter, delete)
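A small demonstration of the hasattr behaviour described in the note above (the `_Demo` class is hypothetical; any class with an `_event_dict` attribute would do):

class _Demo:
    redacts = _event_dict_property("redacts", None)  # has a default
    state_key = _event_dict_property("state_key")    # no default

    def __init__(self, event_dict):
        self._event_dict = event_dict

d = _Demo({})
d.redacts                # None: the default is returned for the missing key
hasattr(d, "redacts")    # True, even though "redacts" is absent
hasattr(d, "state_key")  # False: the KeyError surfaces as AttributeError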
class EventBase(object):
@@ -165,7 +186,7 @@ class EventBase(object):
origin = _event_dict_property("origin")
origin_server_ts = _event_dict_property("origin_server_ts")
prev_events = _event_dict_property("prev_events")
redacts = _event_dict_property("redacts")
redacts = _event_dict_property("redacts", None)
room_id = _event_dict_property("room_id")
sender = _event_dict_property("sender")
user_id = _event_dict_property("sender")
@@ -177,7 +198,7 @@ class EventBase(object):
def is_state(self):
return hasattr(self, "state_key") and self.state_key is not None
def get_dict(self):
def get_dict(self) -> JsonDict:
d = dict(self._event_dict)
d.update({"signatures": self.signatures, "unsigned": dict(self.unsigned)})
@@ -189,7 +210,7 @@ class EventBase(object):
def get_internal_metadata_dict(self):
return self.internal_metadata.get_dict()
def get_pdu_json(self, time_now=None):
def get_pdu_json(self, time_now=None) -> JsonDict:
pdu_json = self.get_dict()
if time_now is not None and "age_ts" in pdu_json["unsigned"]:


@@ -12,8 +12,10 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Optional
import attr
from nacl.signing import SigningKey
from twisted.internet import defer
@@ -23,13 +25,18 @@ from synapse.api.room_versions import (
KNOWN_EVENT_FORMAT_VERSIONS,
KNOWN_ROOM_VERSIONS,
EventFormatVersions,
RoomVersion,
)
from synapse.crypto.event_signing import add_hashes_and_signatures
from synapse.types import EventID
from synapse.events import (
EventBase,
_EventInternalMetadata,
event_type_from_format_version,
)
from synapse.types import EventID, JsonDict
from synapse.util import Clock
from synapse.util.stringutils import random_string
from . import _EventInternalMetadata, event_type_from_format_version
@attr.s(slots=True, cmp=False, frozen=True)
class EventBuilder(object):
@@ -40,7 +47,7 @@ class EventBuilder(object):
content/unsigned/internal_metadata fields are still mutable)
Attributes:
format_version (int): Event format version
room_version: Version of the target room
room_id (str)
type (str)
sender (str)
@@ -63,7 +70,7 @@ class EventBuilder(object):
_hostname = attr.ib()
_signing_key = attr.ib()
format_version = attr.ib()
room_version = attr.ib(type=RoomVersion)
room_id = attr.ib()
type = attr.ib()
@@ -108,7 +115,8 @@ class EventBuilder(object):
)
auth_ids = yield self._auth.compute_auth_events(self, state_ids)
if self.format_version == EventFormatVersions.V1:
format_version = self.room_version.event_format
if format_version == EventFormatVersions.V1:
auth_events = yield self._store.add_event_hashes(auth_ids)
prev_events = yield self._store.add_event_hashes(prev_event_ids)
else:
@@ -148,7 +156,7 @@ class EventBuilder(object):
clock=self._clock,
hostname=self._hostname,
signing_key=self._signing_key,
format_version=self.format_version,
room_version=self.room_version,
event_dict=event_dict,
internal_metadata_dict=self.internal_metadata.get_dict(),
)
@@ -201,7 +209,7 @@ class EventBuilderFactory(object):
clock=self.clock,
hostname=self.hostname,
signing_key=self.signing_key,
format_version=room_version.event_format,
room_version=room_version,
type=key_values["type"],
state_key=key_values.get("state_key"),
room_id=key_values["room_id"],
@@ -214,29 +222,19 @@ class EventBuilderFactory(object):
def create_local_event_from_event_dict(
clock,
hostname,
signing_key,
format_version,
event_dict,
internal_metadata_dict=None,
):
clock: Clock,
hostname: str,
signing_key: SigningKey,
room_version: RoomVersion,
event_dict: JsonDict,
internal_metadata_dict: Optional[JsonDict] = None,
) -> EventBase:
"""Takes a fully formed event dict, ensuring that fields like `origin`
and `origin_server_ts` have correct values for a locally produced event,
then signs and hashes it.
Args:
clock (Clock)
hostname (str)
signing_key
format_version (int)
event_dict (dict)
internal_metadata_dict (dict|None)
Returns:
FrozenEvent
"""
format_version = room_version.event_format
if format_version not in KNOWN_EVENT_FORMAT_VERSIONS:
raise Exception("No event format defined for version %r" % (format_version,))
@@ -257,7 +255,7 @@ def create_local_event_from_event_dict(
event_dict.setdefault("signatures", {})
add_hashes_and_signatures(event_dict, hostname, signing_key)
add_hashes_and_signatures(room_version, event_dict, hostname, signing_key)
return event_type_from_format_version(format_version)(
event_dict, internal_metadata_dict=internal_metadata_dict
)


@@ -12,7 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict, Optional, Tuple, Union
from typing import Optional, Union
from six import iteritems
@@ -23,6 +23,7 @@ from twisted.internet import defer
from synapse.appservice import ApplicationService
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.types import StateMap
@attr.s(slots=True)
@@ -106,13 +107,11 @@ class EventContext:
_state_group = attr.ib(default=None, type=Optional[int])
state_group_before_event = attr.ib(default=None, type=Optional[int])
prev_group = attr.ib(default=None, type=Optional[int])
delta_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
delta_ids = attr.ib(default=None, type=Optional[StateMap[str]])
app_service = attr.ib(default=None, type=Optional[ApplicationService])
_current_state_ids = attr.ib(
default=None, type=Optional[Dict[Tuple[str, str], str]]
)
_prev_state_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
_current_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
_prev_state_ids = attr.ib(default=None, type=Optional[StateMap[str]])
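For reference, `StateMap` is (roughly) a generic alias in synapse.types over (event type, state_key) keys, along these lines:

from typing import Dict, Tuple, TypeVar

T = TypeVar("T")
StateMap = Dict[Tuple[str, str], T]  # (event_type, state_key) -> value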
@staticmethod
def with_state(
@@ -149,7 +148,7 @@ class EventContext:
# the prev_state_ids, so if we're a state event we include the event
# id that we replaced in the state.
if event.is_state():
prev_state_ids = yield self.get_prev_state_ids(store)
prev_state_ids = yield self.get_prev_state_ids()
prev_state_id = prev_state_ids.get((event.type, event.state_key))
else:
prev_state_id = None
@@ -167,12 +166,13 @@ class EventContext:
}
@staticmethod
def deserialize(store, input):
def deserialize(storage, input):
"""Converts a dict that was produced by `serialize` back into a
EventContext.
Args:
store (DataStore): Used to convert AS ID to AS object
storage (Storage): Used to convert AS ID to AS object and fetch
state.
input (dict): A dict produced by `serialize`
Returns:
@@ -181,6 +181,7 @@ class EventContext:
context = _AsyncEventContextImpl(
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
storage=storage,
prev_state_id=input["prev_state_id"],
event_type=input["event_type"],
event_state_key=input["event_state_key"],
@@ -193,7 +194,7 @@ class EventContext:
app_service_id = input["app_service_id"]
if app_service_id:
context.app_service = store.get_app_service_by_id(app_service_id)
context.app_service = storage.main.get_app_service_by_id(app_service_id)
return context
@@ -216,7 +217,7 @@ class EventContext:
return self._state_group
@defer.inlineCallbacks
def get_current_state_ids(self, store):
def get_current_state_ids(self):
"""
Gets the room state map, including this event - ie, the state in ``state_group``
@@ -234,11 +235,11 @@ class EventContext:
if self.rejected:
raise RuntimeError("Attempt to access state_ids of rejected event")
yield self._ensure_fetched(store)
yield self._ensure_fetched()
return self._current_state_ids
@defer.inlineCallbacks
def get_prev_state_ids(self, store):
def get_prev_state_ids(self):
"""
Gets the room state map, excluding this event.
@@ -250,7 +251,7 @@ class EventContext:
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
yield self._ensure_fetched(store)
yield self._ensure_fetched()
return self._prev_state_ids
def get_cached_current_state_ids(self):
@@ -270,7 +271,7 @@ class EventContext:
return self._current_state_ids
def _ensure_fetched(self, store):
def _ensure_fetched(self):
return defer.succeed(None)
@@ -282,6 +283,8 @@ class _AsyncEventContextImpl(EventContext):
Attributes:
_storage (Storage)
_fetching_state_deferred (Deferred|None): Resolves when *_state_ids have
been calculated. None if we haven't started calculating yet
@@ -295,28 +298,30 @@ class _AsyncEventContextImpl(EventContext):
that was replaced.
"""
# This needs to have a default as we're inheriting
_storage = attr.ib(default=None)
_prev_state_id = attr.ib(default=None)
_event_type = attr.ib(default=None)
_event_state_key = attr.ib(default=None)
_fetching_state_deferred = attr.ib(default=None)
def _ensure_fetched(self, store):
def _ensure_fetched(self):
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store
)
self._fetching_state_deferred = run_in_background(self._fill_out_state)
return make_deferred_yieldable(self._fetching_state_deferred)
@defer.inlineCallbacks
def _fill_out_state(self, store):
def _fill_out_state(self):
"""Called to populate the _current_state_ids and _prev_state_ids
attributes by loading from the database.
"""
if self.state_group is None:
return
self._current_state_ids = yield store.get_state_ids_for_group(self.state_group)
self._current_state_ids = yield self._storage.state.get_state_ids_for_group(
self.state_group
)
if self._prev_state_id and self._event_state_key is not None:
self._prev_state_ids = dict(self._current_state_ids)


@@ -53,7 +53,7 @@ class ThirdPartyEventRules(object):
if self.third_party_rules is None:
return True
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_ids = yield context.get_prev_state_ids()
# Retrieve the state events from the database.
state_events = {}


@@ -12,8 +12,9 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import collections.abc
import re
from typing import Mapping, Union
from six import string_types
@@ -422,3 +423,37 @@ class EventClientSerializer(object):
return yieldable_gather_results(
self.serialize_event, events, time_now=time_now, **kwargs
)
def copy_power_levels_contents(
old_power_levels: Mapping[str, Union[int, Mapping[str, int]]]
):
"""Copy the content of a power_levels event, unfreezing frozendicts along the way
Raises:
TypeError if the input does not look like a valid power levels event content
"""
if not isinstance(old_power_levels, collections.abc.Mapping):
raise TypeError("Not a valid power-levels content: %r" % (old_power_levels,))
power_levels = {}
for k, v in old_power_levels.items():
if isinstance(v, int):
power_levels[k] = v
continue
if isinstance(v, collections.abc.Mapping):
power_levels[k] = h = {}
for k1, v1 in v.items():
# we should only have one level of nesting
if not isinstance(v1, int):
raise TypeError(
"Invalid power_levels value for %s.%s: %r" % (k, k1, v1)
)
h[k1] = v1
continue
raise TypeError("Invalid power_levels value for %s: %r" % (k, v))
return power_levels
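A brief usage sketch, assuming `freeze` from `synapse.util.frozenutils` to build a frozen input:

from synapse.util.frozenutils import freeze

frozen = freeze({"ban": 50, "users": {"@admin:example.org": 100}})
thawed = copy_power_levels_contents(frozen)
thawed["users"]["@mod:example.org"] = 50  # plain dicts: now safe to mutate
# copy_power_levels_contents({"ban": "fifty"})  # would raise TypeError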


@@ -17,6 +17,7 @@
import copy
import itertools
import logging
from typing import Dict, Iterable
from prometheus_client import Counter
@@ -29,6 +30,7 @@ from synapse.api.errors import (
FederationDeniedError,
HttpResponseException,
SynapseError,
UnsupportedRoomVersionError,
)
from synapse.api.room_versions import (
KNOWN_ROOM_VERSIONS,
@@ -196,7 +198,7 @@ class FederationClient(FederationBase):
logger.debug("backfill transaction_data=%r", transaction_data)
room_version = yield self.store.get_room_version(room_id)
room_version = yield self.store.get_room_version_id(room_id)
format_ver = room_version_to_event_format(room_version)
pdus = [
@@ -334,7 +336,7 @@ class FederationClient(FederationBase):
def get_event_auth(self, destination, room_id, event_id):
res = yield self.transport_layer.get_event_auth(destination, room_id, event_id)
room_version = yield self.store.get_room_version(room_id)
room_version = yield self.store.get_room_version_id(room_id)
format_ver = room_version_to_event_format(room_version)
auth_chain = [
@@ -385,6 +387,8 @@ class FederationClient(FederationBase):
return res
except InvalidResponseError as e:
logger.warning("Failed to %s via %s: %s", description, destination, e)
except UnsupportedRoomVersionError:
raise
except HttpResponseException as e:
if not 500 <= e.code < 600:
raise e.to_synapse_error()
@@ -404,7 +408,13 @@ class FederationClient(FederationBase):
raise SynapseError(502, "Failed to %s via any server" % (description,))
def make_membership_event(
self, destinations, room_id, user_id, membership, content, params
self,
destinations: Iterable[str],
room_id: str,
user_id: str,
membership: str,
content: dict,
params: Dict[str, str],
):
"""
Creates an m.room.member event, with context, without participating in the room.
@@ -417,21 +427,23 @@ class FederationClient(FederationBase):
Note that this does not append any events to any graphs.
Args:
destinations (Iterable[str]): Candidate homeservers which are probably
destinations: Candidate homeservers which are probably
participating in the room.
room_id (str): The room in which the event will happen.
user_id (str): The user whose membership is being evented.
membership (str): The "membership" property of the event. Must be
one of "join" or "leave".
content (dict): Any additional data to put into the content field
of the event.
params (dict[str, str|Iterable[str]]): Query parameters to include in the
request.
room_id: The room in which the event will happen.
user_id: The user whose membership is being evented.
membership: The "membership" property of the event. Must be one of
"join" or "leave".
content: Any additional data to put into the content field of the
event.
params: Query parameters to include in the request.
Return:
Deferred[tuple[str, FrozenEvent, int]]: resolves to a tuple of
`(origin, event, event_format)` where origin is the remote
homeserver which generated the event, and event_format is one of
`synapse.api.room_versions.EventFormatVersions`.
Deferred[Tuple[str, FrozenEvent, RoomVersion]]: resolves to a tuple of
`(origin, event, room_version)` where origin is the remote
homeserver which generated the event, and room_version is the
version of the room.
Fails with an `UnsupportedRoomVersionError` if the remote responds with
a room version we don't understand.
Fails with a ``SynapseError`` if the chosen remote server
returns a 300/400 code.
@@ -453,8 +465,10 @@ class FederationClient(FederationBase):
# Note: If not supplied, the room version may be either v1 or v2,
# however either way the event format version will be v1.
room_version = ret.get("room_version", RoomVersions.V1.identifier)
event_format = room_version_to_event_format(room_version)
room_version_id = ret.get("room_version", RoomVersions.V1.identifier)
room_version = KNOWN_ROOM_VERSIONS.get(room_version_id)
if not room_version:
raise UnsupportedRoomVersionError()
pdu_dict = ret.get("event", None)
if not isinstance(pdu_dict, dict):
@@ -474,11 +488,11 @@ class FederationClient(FederationBase):
self._clock,
self.hostname,
self.signing_key,
format_version=event_format,
room_version=room_version,
event_dict=pdu_dict,
)
return (destination, ev, event_format)
return (destination, ev, room_version)
return self._try_destination_list(
"make_" + membership, destinations, send_request
@@ -526,13 +540,7 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
def send_request(destination):
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_join(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
content = yield self._do_send_join(destination, pdu)
logger.debug("Got content: %s", content)
@@ -599,9 +607,47 @@ class FederationClient(FederationBase):
return self._try_destination_list("send_join", destinations, send_request)
@defer.inlineCallbacks
def _do_send_join(self, destination, pdu):
time_now = self._clock.time_msec()
try:
content = yield self.transport_layer.send_join_v2(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
return content
except HttpResponseException as e:
if e.code in [400, 404]:
err = e.to_synapse_error()
# If we receive an error response that isn't a generic error, or an
# unrecognised endpoint error, we assume that the remote understands
# the v2 invite API and this is a legitimate error.
if err.errcode not in [Codes.UNKNOWN, Codes.UNRECOGNIZED]:
raise err
else:
raise e.to_synapse_error()
logger.debug("Couldn't send_join with the v2 API, falling back to the v1 API")
resp = yield self.transport_layer.send_join_v1(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
# We expect the v1 API to respond with [200, content], so we only return the
# content.
return resp[1]
@defer.inlineCallbacks
def send_invite(self, destination, room_id, event_id, pdu):
room_version = yield self.store.get_room_version(room_id)
room_version = yield self.store.get_room_version_id(room_id)
content = yield self._do_send_invite(destination, pdu, room_version)
@@ -609,7 +655,7 @@ class FederationClient(FederationBase):
logger.debug("Got response to send_invite: %s", pdu_dict)
room_version = yield self.store.get_room_version(room_id)
room_version = yield self.store.get_room_version_id(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(pdu_dict, format_ver)
@@ -708,18 +754,50 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
def send_request(destination):
time_now = self._clock.time_msec()
_, content = yield self.transport_layer.send_leave(
content = yield self._do_send_leave(destination, pdu)
logger.debug("Got content: %s", content)
return None
return self._try_destination_list("send_leave", destinations, send_request)
@defer.inlineCallbacks
def _do_send_leave(self, destination, pdu):
time_now = self._clock.time_msec()
try:
content = yield self.transport_layer.send_leave_v2(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
return content
except HttpResponseException as e:
if e.code in [400, 404]:
err = e.to_synapse_error()
# If we receive an error response that isn't a generic error, or an
# unrecognised endpoint error, we assume that the remote understands
# the v2 send_leave API and this is a legitimate error.
if err.errcode not in [Codes.UNKNOWN, Codes.UNRECOGNIZED]:
raise err
else:
raise e.to_synapse_error()
logger.debug("Couldn't send_leave with the v2 API, falling back to the v1 API")
resp = yield self.transport_layer.send_leave_v1(
destination=destination,
room_id=pdu.room_id,
event_id=pdu.event_id,
content=pdu.get_pdu_json(time_now),
)
# We expect the v1 API to respond with [200, content], so we only return the
# content.
return resp[1]
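The fallback above turns on one question: did the remote reject the event, or does it simply not implement the v2 endpoint? A condensed sketch of that test (the helper name is our own; `Codes` is synapse.api.errors.Codes):

def _is_unknown_endpoint(e: HttpResponseException) -> bool:
    # Treat a 400/404 carrying a generic or unrecognised errcode as "the
    # remote does not implement the v2 endpoint"; anything else is a real
    # error that should be surfaced rather than retried on v1.
    if e.code not in (400, 404):
        return False
    err = e.to_synapse_error()
    return err.errcode in (Codes.UNKNOWN, Codes.UNRECOGNIZED)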
def get_public_rooms(
self,
@@ -779,7 +857,7 @@ class FederationClient(FederationBase):
timeout=timeout,
)
room_version = yield self.store.get_room_version(room_id)
room_version = yield self.store.get_room_version_id(room_id)
format_ver = room_version_to_event_format(room_version)
events = [


@@ -15,6 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Dict
import six
from six import iteritems
@@ -22,6 +23,7 @@ from six import iteritems
from canonicaljson import json
from prometheus_client import Counter
from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
@@ -41,7 +43,11 @@ from synapse.federation.federation_base import FederationBase, event_from_pdu_js
from synapse.federation.persistence import TransactionActions
from synapse.federation.units import Edu, Transaction
from synapse.http.endpoint import parse_server_name
from synapse.logging.context import nested_logging_context
from synapse.logging.context import (
make_deferred_yieldable,
nested_logging_context,
run_in_background,
)
from synapse.logging.opentracing import log_kv, start_active_span_from_edu, trace
from synapse.logging.utils import log_function
from synapse.replication.http.federation import (
@@ -49,7 +55,7 @@ from synapse.replication.http.federation import (
ReplicationGetQueryRestServlet,
)
from synapse.types import get_domain_from_id
from synapse.util import glob_to_regex
from synapse.util import glob_to_regex, unwrapFirstError
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.caches.response_cache import ResponseCache
@@ -160,6 +166,43 @@ class FederationServer(FederationBase):
)
return 400, response
# We process PDUs and EDUs in parallel. This is important as we don't
# want to block things like to-device messages from reaching clients
# behind the potentially expensive handling of PDUs.
pdu_results, _ = await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
self._handle_pdus_in_txn, origin, transaction, request_time
),
run_in_background(self._handle_edus_in_txn, origin, transaction),
],
consumeErrors=True,
).addErrback(unwrapFirstError)
)
response = {"pdus": pdu_results}
logger.debug("Returning: %s", str(response))
await self.transaction_actions.set_response(origin, transaction, 200, response)
return 200, response
async def _handle_pdus_in_txn(
self, origin: str, transaction: Transaction, request_time: int
) -> Dict[str, dict]:
"""Process the PDUs in a received transaction.
Args:
origin: the server making the request
transaction: incoming transaction
request_time: timestamp that the HTTP request arrived at
Returns:
A map from event ID of a processed PDU to any errors we should
report back to the sending server.
"""
received_pdus_counter.inc(len(transaction.pdus))
origin_host, _ = parse_server_name(origin)
@@ -191,7 +234,7 @@ class FederationServer(FederationBase):
continue
try:
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
except NotFoundError:
logger.info("Ignoring PDU for unknown room_id: %s", room_id)
continue
@@ -250,20 +293,23 @@ class FederationServer(FederationBase):
process_pdus_for_room, pdus_by_room.keys(), TRANSACTION_CONCURRENCY_LIMIT
)
if hasattr(transaction, "edus"):
for edu in (Edu(**x) for x in transaction.edus):
await self.received_edu(origin, edu.edu_type, edu.content)
response = {"pdus": pdu_results}
logger.debug("Returning: %s", str(response))
await self.transaction_actions.set_response(origin, transaction, 200, response)
return 200, response
async def received_edu(self, origin, edu_type, content):
received_edus_counter.inc()
await self.registry.on_edu(edu_type, origin, content)
return pdu_results
async def _handle_edus_in_txn(self, origin: str, transaction: Transaction):
"""Process the EDUs in a received transaction.
"""
async def _process_edu(edu_dict):
received_edus_counter.inc()
edu = Edu(**edu_dict)
await self.registry.on_edu(edu.edu_type, origin, edu.content)
await concurrently_execute(
_process_edu,
getattr(transaction, "edus", []),
TRANSACTION_CONCURRENCY_LIMIT,
)
async def on_context_state_request(self, origin, room_id, event_id):
origin_host, _ = parse_server_name(origin)
@@ -288,7 +334,7 @@ class FederationServer(FederationBase):
)
)
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
resp["room_version"] = room_version
return 200, resp
@@ -339,7 +385,7 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
if room_version not in supported_versions:
logger.warning(
"Room version %s not in %s", room_version, supported_versions
@@ -364,14 +410,14 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, pdu.room_id)
pdu = await self._check_sigs_and_hash(room_version, pdu)
ret_pdu = await self.handler.on_invite_request(origin, pdu)
ret_pdu = await self.handler.on_invite_request(origin, pdu, room_version)
time_now = self._clock.time_msec()
return {"event": ret_pdu.get_pdu_json(time_now)}
async def on_send_join_request(self, origin, content, room_id):
logger.debug("on_send_join_request: content: %s", content)
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(content, format_ver)
@@ -384,22 +430,17 @@ class FederationServer(FederationBase):
res_pdus = await self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
return (
200,
{
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
"auth_chain": [
p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]
],
},
)
return {
"state": [p.get_pdu_json(time_now) for p in res_pdus["state"]],
"auth_chain": [p.get_pdu_json(time_now) for p in res_pdus["auth_chain"]],
}
async def on_make_leave_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
pdu = await self.handler.on_make_leave_request(origin, room_id, user_id)
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@@ -407,7 +448,7 @@ class FederationServer(FederationBase):
async def on_send_leave_request(self, origin, content, room_id):
logger.debug("on_send_leave_request: content: %s", content)
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(content, format_ver)
@@ -419,7 +460,7 @@ class FederationServer(FederationBase):
pdu = await self._check_sigs_and_hash(room_version, pdu)
await self.handler.on_send_leave_request(origin, pdu)
return 200, {}
return {}
async def on_event_auth(self, origin, room_id, event_id):
with (await self._server_linearizer.queue((origin, room_id))):
@@ -454,7 +495,7 @@ class FederationServer(FederationBase):
origin_host, _ = parse_server_name(origin)
await self.check_server_matches_acl(origin_host, room_id)
room_version = await self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
format_ver = room_version_to_event_format(room_version)
auth_chain = [
@@ -623,7 +664,7 @@ class FederationServer(FederationBase):
logger.info("Accepting join PDU %s from %s", pdu.event_id, origin)
# We've already checked that we know the room version by this point
room_version = await self.store.get_room_version(pdu.room_id)
room_version = await self.store.get_room_version_id(pdu.room_id)
# Check signature.
try:


@@ -69,8 +69,6 @@ class FederationRemoteSendQueue(object):
self.edus = SortedDict() # stream position -> Edu
self.device_messages = SortedDict() # stream position -> destination
self.pos = 1
self.pos_time = SortedDict()
@@ -92,7 +90,6 @@ class FederationRemoteSendQueue(object):
"keyed_edu",
"keyed_edu_changed",
"edus",
"device_messages",
"pos_time",
"presence_destinations",
]:
@@ -171,12 +168,6 @@ class FederationRemoteSendQueue(object):
for key in keys[:i]:
del self.edus[key]
# Delete things out of device map
keys = self.device_messages.keys()
i = self.device_messages.bisect_left(position_to_delete)
for key in keys[:i]:
del self.device_messages[key]
def notify_new_events(self, current_id):
"""As per FederationSender"""
# We don't need to replicate this as it gets sent down a different
@@ -249,9 +240,8 @@ class FederationRemoteSendQueue(object):
def send_device_messages(self, destination):
"""As per FederationSender"""
pos = self._next_pos()
self.device_messages[pos] = destination
self.notifier.on_new_replication_data()
# We don't need to replicate this as it gets sent down a different
# stream.
def get_current_token(self):
return self.pos - 1
@@ -259,7 +249,9 @@ class FederationRemoteSendQueue(object):
def federation_ack(self, token):
self._clear_queue_before_pos(token)
def get_replication_rows(self, from_token, to_token, limit, federation_ack=None):
async def get_replication_rows(
self, from_token, to_token, limit, federation_ack=None
):
"""Get rows to be sent over federation between the two tokens
Args:
@@ -337,14 +329,6 @@ class FederationRemoteSendQueue(object):
for (pos, edu) in edus:
rows.append((pos, EduRow(edu)))
# Fetch changed device messages
i = self.device_messages.bisect_right(from_token)
j = self.device_messages.bisect_right(to_token) + 1
device_messages = {v: k for k, v in self.device_messages.items()[i:j]}
for (destination, pos) in iteritems(device_messages):
rows.append((pos, DeviceRow(destination=destination)))
# Sort rows based on pos
rows.sort()
@@ -470,28 +454,9 @@ class EduRow(BaseFederationRow, namedtuple("EduRow", ("edu",))): # Edu
buff.edus.setdefault(self.edu.destination, []).append(self.edu)
class DeviceRow(BaseFederationRow, namedtuple("DeviceRow", ("destination",))): # str
"""Streams the fact that either a) there is pending to device messages for
users on the remote, or b) a local users device has changed and needs to
be sent to the remote.
"""
TypeId = "d"
@staticmethod
def from_data(data):
return DeviceRow(destination=data["destination"])
def to_data(self):
return {"destination": self.destination}
def add_to_buffer(self, buff):
buff.device_destinations.add(self.destination)
TypeToRow = {
Row.TypeId: Row
for Row in (PresenceRow, PresenceDestinationsRow, KeyedEduRow, EduRow, DeviceRow)
for Row in (PresenceRow, PresenceDestinationsRow, KeyedEduRow, EduRow,)
}
@@ -502,7 +467,6 @@ ParsedFederationStreamData = namedtuple(
"presence_destinations", # list of tuples of UserPresenceState and destinations
"keyed_edus", # dict of destination -> { key -> Edu }
"edus", # dict of destination -> [Edu]
"device_destinations", # set of destinations
),
)
@@ -521,11 +485,7 @@ def process_rows_for_federation(transaction_queue, rows):
# them into the appropriate collection and then send them off.
buff = ParsedFederationStreamData(
presence=[],
presence_destinations=[],
keyed_edus={},
edus={},
device_destinations=set(),
presence=[], presence_destinations=[], keyed_edus={}, edus={},
)
# Parse the rows in the stream and add to the buffer
@@ -553,6 +513,3 @@ def process_rows_for_federation(transaction_queue, rows):
for destination, edu_list in iteritems(buff.edus):
for edu in edu_list:
transaction_queue.send_edu(edu, None)
for destination in buff.device_destinations:
transaction_queue.send_device_messages(destination)


@@ -21,6 +21,7 @@ from prometheus_client import Counter
from twisted.internet import defer
import synapse
import synapse.metrics
from synapse.federation.sender.per_destination_queue import PerDestinationQueue
from synapse.federation.sender.transaction_manager import TransactionManager
@@ -54,7 +55,7 @@ sent_pdus_destination_dist_total = Counter(
class FederationSender(object):
def __init__(self, hs):
def __init__(self, hs: "synapse.server.HomeServer"):
self.hs = hs
self.server_name = hs.hostname
@@ -482,7 +483,20 @@ class FederationSender(object):
def send_device_messages(self, destination):
if destination == self.server_name:
logger.info("Not sending device update to ourselves")
logger.warning("Not sending device update to ourselves")
return
self._get_per_destination_queue(destination).attempt_new_transaction()
def wake_destination(self, destination: str):
"""Called when we want to retry sending transactions to a remote.
This is mainly useful if the remote server has been down and we think it
might have come back.
"""
if destination == self.server_name:
logger.warning("Not waking up ourselves")
return
self._get_per_destination_queue(destination).attempt_new_transaction()


@@ -31,6 +31,7 @@ from synapse.handlers.presence import format_user_presence_state
from synapse.metrics import sent_transactions_counter
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.presence import UserPresenceState
from synapse.types import StateMap
from synapse.util.retryutils import NotRetryingDestination, get_retry_limiter
# This is defined in the Matrix spec and enforced by the receiver.
@@ -77,7 +78,7 @@ class PerDestinationQueue(object):
# Pending EDUs by their "key". Keyed EDUs are EDUs that get clobbered
# based on their key (e.g. typing events by room_id)
# Map of (edu_type, key) -> Edu
self._pending_edus_keyed = {} # type: dict[tuple[str, str], Edu]
self._pending_edus_keyed = {} # type: StateMap[Edu]
# Map of user_id -> UserPresenceState of pending presence to be sent to this
# destination


@@ -15,6 +15,7 @@
# limitations under the License.
import logging
from typing import Any, Dict
from six.moves import urllib
@@ -243,7 +244,7 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_join(self, destination, room_id, event_id, content):
def send_join_v1(self, destination, room_id, event_id, content):
path = _create_v1_path("/send_join/%s/%s", room_id, event_id)
response = yield self.client.put_json(
@@ -254,7 +255,18 @@ class TransportLayerClient(object):
@defer.inlineCallbacks
@log_function
def send_leave(self, destination, room_id, event_id, content):
def send_join_v2(self, destination, room_id, event_id, content):
path = _create_v2_path("/send_join/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination, path=path, data=content
)
return response
@defer.inlineCallbacks
@log_function
def send_leave_v1(self, destination, room_id, event_id, content):
path = _create_v1_path("/send_leave/%s/%s", room_id, event_id)
response = yield self.client.put_json(
@@ -270,6 +282,24 @@ class TransportLayerClient(object):
return response
@defer.inlineCallbacks
@log_function
def send_leave_v2(self, destination, room_id, event_id, content):
path = _create_v2_path("/send_leave/%s/%s", room_id, event_id)
response = yield self.client.put_json(
destination=destination,
path=path,
data=content,
# we want to do our best to send this through. The problem is
# that if it fails, we won't retry it later, so if the remote
# server was just having a momentary blip, the room will be out of
# sync.
ignore_backoff=True,
)
return response
@defer.inlineCallbacks
@log_function
def send_invite_v1(self, destination, room_id, event_id, content):
@@ -323,7 +353,9 @@ class TransportLayerClient(object):
else:
path = _create_v1_path("/publicRooms")
args = {"include_all_networks": "true" if include_all_networks else "false"}
args = {
"include_all_networks": "true" if include_all_networks else "false"
} # type: Dict[str, Any]
if third_party_instance_id:
args["third_party_instance_id"] = (third_party_instance_id,)
if limit:


@@ -18,6 +18,7 @@
import functools
import logging
import re
from typing import Optional, Tuple, Type
from twisted.internet.defer import maybeDeferred
@@ -44,6 +45,7 @@ from synapse.logging.opentracing import (
tags,
whitelisted_homeserver,
)
from synapse.server import HomeServer
from synapse.types import ThirdPartyInstanceID, get_domain_from_id
from synapse.util.ratelimitutils import FederationRateLimiter
from synapse.util.versionstring import get_version_string
@@ -101,12 +103,17 @@ class NoAuthenticationError(AuthenticationError):
class Authenticator(object):
def __init__(self, hs):
def __init__(self, hs: HomeServer):
self._clock = hs.get_clock()
self.keyring = hs.get_keyring()
self.server_name = hs.hostname
self.store = hs.get_datastore()
self.federation_domain_whitelist = hs.config.federation_domain_whitelist
self.notifier = hs.get_notifier()
self.replication_client = None
if hs.config.worker.worker_app:
self.replication_client = hs.get_tcp_replication()
# A method just so we can pass 'self' as the authenticator to the Servlets
async def authenticate_request(self, request, content):
@@ -166,6 +173,17 @@ class Authenticator(object):
try:
logger.info("Marking origin %r as up", origin)
await self.store.set_destination_retry_timings(origin, None, 0, 0)
# Inform the relevant places that the remote server is back up.
self.notifier.notify_remote_server_up(origin)
if self.replication_client:
# If we're on a worker we try and inform master about this. The
# replication client doesn't hook into the notifier to avoid
# infinite loops where we send a `REMOTE_SERVER_UP` command to
# master, which then echoes it back to us which in turn pokes
# the notifier.
self.replication_client.send_remote_server_up(origin)
except Exception:
logger.exception("Error resetting retry timings on %s", origin)
@@ -250,6 +268,8 @@ class BaseFederationServlet(object):
returned.
"""
PATH = "" # Overridden in subclasses, the regex to match against the path.
REQUIRE_AUTH = True
PREFIX = FEDERATION_V1_PREFIX # Allows specifying the API version
@@ -330,9 +350,6 @@ class BaseFederationServlet(object):
return response
# Extra logic that functools.wraps() doesn't finish
new_func.__self__ = func.__self__
return new_func
def register(self, server):
@@ -506,9 +523,19 @@ class FederationMakeLeaveServlet(BaseFederationServlet):
return 200, content
class FederationSendLeaveServlet(BaseFederationServlet):
class FederationV1SendLeaveServlet(BaseFederationServlet):
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
async def on_PUT(self, origin, content, query, room_id, event_id):
content = await self.handler.on_send_leave_request(origin, content, room_id)
return 200, (200, content)
class FederationV2SendLeaveServlet(BaseFederationServlet):
PATH = "/send_leave/(?P<room_id>[^/]*)/(?P<event_id>[^/]*)"
PREFIX = FEDERATION_V2_PREFIX
async def on_PUT(self, origin, content, query, room_id, event_id):
content = await self.handler.on_send_leave_request(origin, content, room_id)
return 200, content
@@ -521,9 +548,21 @@ class FederationEventAuthServlet(BaseFederationServlet):
return await self.handler.on_event_auth(origin, context, event_id)
class FederationSendJoinServlet(BaseFederationServlet):
class FederationV1SendJoinServlet(BaseFederationServlet):
PATH = "/send_join/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
async def on_PUT(self, origin, content, query, context, event_id):
# TODO(paul): assert that context/event_id parsed from path actually
# match those given in content
content = await self.handler.on_send_join_request(origin, content, context)
return 200, (200, content)
class FederationV2SendJoinServlet(BaseFederationServlet):
PATH = "/send_join/(?P<context>[^/]*)/(?P<event_id>[^/]*)"
PREFIX = FEDERATION_V2_PREFIX
async def on_PUT(self, origin, content, query, context, event_id):
# TODO(paul): assert that context/event_id parsed from path actually
# match those given in content
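The only behavioural difference between the V1 and V2 servlets above is the response shape: the V1 endpoints wrap the result in a legacy [status, body] array (return 200, (200, content)), which the V2 endpoints drop. With illustrative values:

    content = {"state": [], "auth_chain": []}  # illustrative payload only

    v1_body = (200, content)  # serialises as the legacy [200, {...}] array
    v2_body = content         # v2 returns the body directly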
@@ -785,7 +824,7 @@ class PublicRoomList(BaseFederationServlet):
if not self.allow_access:
raise FederationDeniedError(origin)
limit = int(content.get("limit", 100))
limit = int(content.get("limit", 100)) # type: Optional[int]
since_token = content.get("since", None)
search_filter = content.get("filter", None)
@@ -932,7 +971,7 @@ class FederationGroupsAddRoomsConfigServlet(BaseFederationServlet):
if get_domain_from_id(requester_user_id) != origin:
raise SynapseError(403, "requester_user_id doesn't match origin")
result = await self.groups_handler.update_room_in_group(
result = await self.handler.update_room_in_group(
group_id, requester_user_id, room_id, config_key, content
)
@@ -1367,8 +1406,10 @@ FEDERATION_SERVLET_CLASSES = (
FederationMakeJoinServlet,
FederationMakeLeaveServlet,
FederationEventServlet,
FederationSendJoinServlet,
FederationSendLeaveServlet,
FederationV1SendJoinServlet,
FederationV2SendJoinServlet,
FederationV1SendLeaveServlet,
FederationV2SendLeaveServlet,
FederationV1InviteServlet,
FederationV2InviteServlet,
FederationQueryAuthServlet,
@@ -1381,11 +1422,13 @@ FEDERATION_SERVLET_CLASSES = (
On3pidBindServlet,
FederationVersionServlet,
RoomComplexityServlet,
)
) # type: Tuple[Type[BaseFederationServlet], ...]
OPENID_SERVLET_CLASSES = (OpenIdUserInfo,)
OPENID_SERVLET_CLASSES = (
OpenIdUserInfo,
) # type: Tuple[Type[BaseFederationServlet], ...]
ROOM_LIST_CLASSES = (PublicRoomList,)
ROOM_LIST_CLASSES = (PublicRoomList,) # type: Tuple[Type[PublicRoomList], ...]
GROUP_SERVER_SERVLET_CLASSES = (
FederationGroupsProfileServlet,
@@ -1406,17 +1449,19 @@ GROUP_SERVER_SERVLET_CLASSES = (
FederationGroupsAddRoomsServlet,
FederationGroupsAddRoomsConfigServlet,
FederationGroupsSettingJoinPolicyServlet,
)
) # type: Tuple[Type[BaseFederationServlet], ...]
GROUP_LOCAL_SERVLET_CLASSES = (
FederationGroupsLocalInviteServlet,
FederationGroupsRemoveLocalUserServlet,
FederationGroupsBulkPublicisedServlet,
)
) # type: Tuple[Type[BaseFederationServlet], ...]
GROUP_ATTESTATION_SERVLET_CLASSES = (FederationGroupsRenewAttestaionServlet,)
GROUP_ATTESTATION_SERVLET_CLASSES = (
FederationGroupsRenewAttestaionServlet,
) # type: Tuple[Type[BaseFederationServlet], ...]
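The # type: comments added to these module-level tuples are PEP 484 comment-style annotations, presumably used instead of PEP 526 variable annotations for compatibility with older Python versions (an assumption; variable annotations need Python 3.6). A minimal sketch of the two spellings:

    from typing import Tuple, Type

    class Servlet:
        pass

    # Comment form: valid on Python 2.7/3.5 and understood by mypy.
    SERVLETS = (Servlet,)  # type: Tuple[Type[Servlet], ...]

    # Equivalent annotation form: Python 3.6+ only.
    SERVLETS_ANNOTATED: Tuple[Type[Servlet], ...] = (Servlet,)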
DEFAULT_SERVLET_GROUPS = (
"federation",

View File

@@ -773,6 +773,11 @@ class GroupsServerHandler(object):
if not self.hs.is_mine_id(user_id):
yield self.store.maybe_delete_remote_profile_cache(user_id)
# Delete group if the last user has left
users = yield self.store.get_users_in_group(group_id, include_private=True)
if not users:
yield self.store.delete_group(group_id)
return {}
@defer.inlineCallbacks

View File

@@ -134,7 +134,7 @@ class BaseHandler(object):
guest_access = event.content.get("guest_access", "forbidden")
if guest_access != "can_join":
if context:
current_state_ids = yield context.get_current_state_ids(self.store)
current_state_ids = yield context.get_current_state_ids()
current_state = yield self.store.get_events(
list(current_state_ids.values())
)

View File

@@ -13,8 +13,6 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from twisted.internet import defer
class AccountDataEventSource(object):
def __init__(self, hs):
@@ -23,15 +21,14 @@ class AccountDataEventSource(object):
def get_current_key(self, direction="f"):
return self.store.get_max_account_data_stream_id()
@defer.inlineCallbacks
def get_new_events(self, user, from_key, **kwargs):
async def get_new_events(self, user, from_key, **kwargs):
user_id = user.to_string()
last_stream_id = from_key
current_stream_id = yield self.store.get_max_account_data_stream_id()
current_stream_id = self.store.get_max_account_data_stream_id()
results = []
tags = yield self.store.get_updated_tags(user_id, last_stream_id)
tags = await self.store.get_updated_tags(user_id, last_stream_id)
for room_id, room_tags in tags.items():
results.append(
@@ -41,7 +38,7 @@ class AccountDataEventSource(object):
(
account_data,
room_account_data,
) = yield self.store.get_updated_account_data_for_user(user_id, last_stream_id)
) = await self.store.get_updated_account_data_for_user(user_id, last_stream_id)
for account_data_type, content in account_data.items():
results.append({"type": account_data_type, "content": content})
@@ -53,7 +50,3 @@ class AccountDataEventSource(object):
)
return results, current_stream_id
@defer.inlineCallbacks
def get_pagination_rows(self, user, config, key):
return [], config.to_id
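This hunk is a mechanical conversion from Twisted's @defer.inlineCallbacks/yield style to native async/await; note that the yield on get_max_account_data_stream_id disappears entirely because it returns a plain value rather than a Deferred. A distilled sketch of the pattern (the store argument is hypothetical):

    from twisted.internet import defer

    # Before: a generator-based coroutine; every Deferred is yielded.
    @defer.inlineCallbacks
    def get_new_events_old(store, user_id, from_key):
        tags = yield store.get_updated_tags(user_id, from_key)  # Deferred
        return tags

    # After: a native coroutine. Deferreds are awaitable, and synchronous
    # calls no longer get (and must not get) a yield/await.
    async def get_new_events_new(store, user_id, from_key):
        current_key = store.get_max_account_data_stream_id()  # plain value
        tags = await store.get_updated_tags(user_id, from_key)
        return tags, current_key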

View File

@@ -18,8 +18,7 @@ import email.utils
import logging
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from twisted.internet import defer
from typing import List
from synapse.api.errors import StoreError
from synapse.logging.context import make_deferred_yieldable
@@ -78,42 +77,39 @@ class AccountValidityHandler(object):
# run as a background process to make sure that the database transactions
# have a logcontext to report to
return run_as_background_process(
"send_renewals", self.send_renewal_emails
"send_renewals", self._send_renewal_emails
)
self.clock.looping_call(send_emails, 30 * 60 * 1000)
@defer.inlineCallbacks
def send_renewal_emails(self):
async def _send_renewal_emails(self):
"""Gets the list of users whose account is expiring in the amount of time
configured in the ``renew_at`` parameter from the ``account_validity``
configuration, and sends renewal emails to all of these users as long as they
have an email 3PID attached to their account.
"""
expiring_users = yield self.store.get_users_expiring_soon()
expiring_users = await self.store.get_users_expiring_soon()
if expiring_users:
for user in expiring_users:
yield self._send_renewal_email(
await self._send_renewal_email(
user_id=user["user_id"], expiration_ts=user["expiration_ts_ms"]
)
@defer.inlineCallbacks
def send_renewal_email_to_user(self, user_id):
expiration_ts = yield self.store.get_expiration_ts_for_user(user_id)
yield self._send_renewal_email(user_id, expiration_ts)
async def send_renewal_email_to_user(self, user_id: str):
expiration_ts = await self.store.get_expiration_ts_for_user(user_id)
await self._send_renewal_email(user_id, expiration_ts)
@defer.inlineCallbacks
def _send_renewal_email(self, user_id, expiration_ts):
async def _send_renewal_email(self, user_id: str, expiration_ts: int):
"""Sends out a renewal email to every email address attached to the given user
with a unique link allowing them to renew their account.
Args:
user_id (str): ID of the user to send email(s) to.
expiration_ts (int): Timestamp in milliseconds for the expiration date of
user_id: ID of the user to send email(s) to.
expiration_ts: Timestamp in milliseconds for the expiration date of
this user's account (used in the email templates).
"""
addresses = yield self._get_email_addresses_for_user(user_id)
addresses = await self._get_email_addresses_for_user(user_id)
# Stop right here if the user doesn't have at least one email address.
# In this case, they will have to ask their server admin to renew their
@@ -125,7 +121,7 @@ class AccountValidityHandler(object):
return
try:
user_display_name = yield self.store.get_profile_displayname(
user_display_name = await self.store.get_profile_displayname(
UserID.from_string(user_id).localpart
)
if user_display_name is None:
@@ -133,7 +129,7 @@ class AccountValidityHandler(object):
except StoreError:
user_display_name = user_id
renewal_token = yield self._get_renewal_token(user_id)
renewal_token = await self._get_renewal_token(user_id)
url = "%s_matrix/client/unstable/account_validity/renew?token=%s" % (
self.hs.config.public_baseurl,
renewal_token,
@@ -165,7 +161,7 @@ class AccountValidityHandler(object):
logger.info("Sending renewal email to %s", address)
yield make_deferred_yieldable(
await make_deferred_yieldable(
self.sendmail(
self.hs.config.email_smtp_host,
self._raw_from,
@@ -180,19 +176,18 @@ class AccountValidityHandler(object):
)
)
yield self.store.set_renewal_mail_status(user_id=user_id, email_sent=True)
await self.store.set_renewal_mail_status(user_id=user_id, email_sent=True)
@defer.inlineCallbacks
def _get_email_addresses_for_user(self, user_id):
async def _get_email_addresses_for_user(self, user_id: str) -> List[str]:
"""Retrieve the list of email addresses attached to a user's account.
Args:
user_id (str): ID of the user to lookup email addresses for.
user_id: ID of the user to look up email addresses for.
Returns:
defer.Deferred[list[str]]: Email addresses for this account.
Email addresses for this account.
"""
threepids = yield self.store.user_get_threepids(user_id)
threepids = await self.store.user_get_threepids(user_id)
addresses = []
for threepid in threepids:
@@ -201,16 +196,15 @@ class AccountValidityHandler(object):
return addresses
@defer.inlineCallbacks
def _get_renewal_token(self, user_id):
async def _get_renewal_token(self, user_id: str) -> str:
"""Generates a 32-byte long random string that will be inserted into the
user's renewal email's unique link, then saves it into the database.
Args:
user_id (str): ID of the user to generate a string for.
user_id: ID of the user to generate a string for.
Returns:
defer.Deferred[str]: The generated string.
The generated string.
Raises:
StoreError(500): Couldn't generate a unique string after 5 attempts.
@@ -219,52 +213,52 @@ class AccountValidityHandler(object):
while attempts < 5:
try:
renewal_token = stringutils.random_string(32)
yield self.store.set_renewal_token_for_user(user_id, renewal_token)
await self.store.set_renewal_token_for_user(user_id, renewal_token)
return renewal_token
except StoreError:
attempts += 1
raise StoreError(500, "Couldn't generate a unique string as refresh string.")
@defer.inlineCallbacks
def renew_account(self, renewal_token):
async def renew_account(self, renewal_token: str) -> bool:
"""Renews the account attached to a given renewal token by pushing back the
expiration date by the current validity period in the server's configuration.
Args:
renewal_token (str): Token sent with the renewal request.
renewal_token: Token sent with the renewal request.
Returns:
bool: Whether the provided token is valid.
Whether the provided token is valid.
"""
try:
user_id = yield self.store.get_user_from_renewal_token(renewal_token)
user_id = await self.store.get_user_from_renewal_token(renewal_token)
except StoreError:
defer.returnValue(False)
return False
logger.debug("Renewing an account for user %s", user_id)
yield self.renew_account_for_user(user_id)
await self.renew_account_for_user(user_id)
defer.returnValue(True)
return True
@defer.inlineCallbacks
def renew_account_for_user(self, user_id, expiration_ts=None, email_sent=False):
async def renew_account_for_user(
self, user_id: str, expiration_ts: int = None, email_sent: bool = False
) -> int:
"""Renews the account attached to a given user by pushing back the
expiration date by the current validity period in the server's
configuration.
Args:
renewal_token (str): Token sent with the renewal request.
expiration_ts (int): New expiration date. Defaults to now + validity period.
email_sent (bool): Whether an email has been sent for this validity period.
user_id: ID of the user to renew the account of.
expiration_ts: New expiration date. Defaults to now + validity period.
email_sent: Whether an email has been sent for this validity period.
Defaults to False.
Returns:
defer.Deferred[int]: New expiration date for this account, as a timestamp
in milliseconds since epoch.
New expiration date for this account, as a timestamp in
milliseconds since epoch.
"""
if expiration_ts is None:
expiration_ts = self.clock.time_msec() + self._account_validity.period
yield self.store.set_account_validity_for_user(
await self.store.set_account_validity_for_user(
user_id=user_id, expiration_ts=expiration_ts, email_sent=email_sent
)
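For context, a hedged sketch of how a caller might consume renew_account, mapping the returned bool onto an HTTP response. This is hypothetical glue code for illustration; the actual renewal servlet is not part of this diff.

    async def on_renew_request(handler, args):
        # handler: an AccountValidityHandler; args: parsed query parameters.
        token_valid = await handler.renew_account(args["token"])
        if token_valid:
            return 200, {"status": "renewed"}
        return 404, {"error": "unknown or expired renewal token"}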

View File

@@ -14,11 +14,11 @@
# limitations under the License.
import logging
from twisted.internet import defer
from typing import List
from synapse.api.constants import Membership
from synapse.types import RoomStreamToken
from synapse.events import FrozenEvent
from synapse.types import RoomStreamToken, StateMap
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
@@ -33,11 +33,10 @@ class AdminHandler(BaseHandler):
self.storage = hs.get_storage()
self.state_store = self.storage.state
@defer.inlineCallbacks
def get_whois(self, user):
async def get_whois(self, user):
connections = []
sessions = yield self.store.get_user_ip_and_agents(user)
sessions = await self.store.get_user_ip_and_agents(user)
for session in sessions:
connections.append(
{
@@ -54,73 +53,16 @@ class AdminHandler(BaseHandler):
return ret
@defer.inlineCallbacks
def get_users(self):
"""Function to retrieve a list of users in users table.
Args:
Returns:
defer.Deferred: resolves to list[dict[str, Any]]
"""
ret = yield self.store.get_users()
async def get_user(self, user):
"""Function to get user details"""
ret = await self.store.get_user_by_id(user.to_string())
if ret:
profile = await self.store.get_profileinfo(user.localpart)
ret["displayname"] = profile.display_name
ret["avatar_url"] = profile.avatar_url
return ret
@defer.inlineCallbacks
def get_users_paginate(self, start, limit, name, guests, deactivated):
"""Function to retrieve a paginated list of users from
users list. This will return a json list of users.
Args:
start (int): start number to begin the query from
limit (int): number of rows to retrieve
name (string): filter for user names
guests (bool): whether to include guest users
deactivated (bool): whether to include deactivated users
Returns:
defer.Deferred: resolves to json list[dict[str, Any]]
"""
ret = yield self.store.get_users_paginate(
start, limit, name, guests, deactivated
)
return ret
@defer.inlineCallbacks
def search_users(self, term):
"""Function to search users list for one or more users with
the matched term.
Args:
term (str): search term
Returns:
defer.Deferred: resolves to list[dict[str, Any]]
"""
ret = yield self.store.search_users(term)
return ret
def get_user_server_admin(self, user):
"""
Get the admin bit on a user.
Args:
user_id (UserID): the (necessarily local) user to manipulate
"""
return self.store.is_server_admin(user)
def set_user_server_admin(self, user, admin):
"""
Set the admin bit on a user.
Args:
user_id (UserID): the (necessarily local) user to manipulate
admin (bool): whether or not the user should be an admin of this server
"""
return self.store.set_server_admin(user, admin)
@defer.inlineCallbacks
def export_user_data(self, user_id, writer):
async def export_user_data(self, user_id, writer):
"""Write all data we have on the user to the given writer.
Args:
@@ -132,7 +74,7 @@ class AdminHandler(BaseHandler):
The returned value is that returned by `writer.finished()`.
"""
# Get all rooms the user is in or has been in
rooms = yield self.store.get_rooms_for_user_where_membership_is(
rooms = await self.store.get_rooms_for_local_user_where_membership_is(
user_id,
membership_list=(
Membership.JOIN,
@@ -145,7 +87,7 @@ class AdminHandler(BaseHandler):
# We only try and fetch events for rooms the user has been in. If
# they've been e.g. invited to a room without joining then we handle
# those separately.
rooms_user_has_been_in = yield self.store.get_rooms_user_has_been_in(user_id)
rooms_user_has_been_in = await self.store.get_rooms_user_has_been_in(user_id)
for index, room in enumerate(rooms):
room_id = room.room_id
@@ -154,7 +96,7 @@ class AdminHandler(BaseHandler):
"[%s] Handling room %s, %d/%d", user_id, room_id, index + 1, len(rooms)
)
forgotten = yield self.store.did_forget(user_id, room_id)
forgotten = await self.store.did_forget(user_id, room_id)
if forgotten:
logger.info("[%s] User forgot room %d, ignoring", user_id, room_id)
continue
@@ -166,7 +108,7 @@ class AdminHandler(BaseHandler):
if room.membership == Membership.INVITE:
event_id = room.event_id
invite = yield self.store.get_event(event_id, allow_none=True)
invite = await self.store.get_event(event_id, allow_none=True)
if invite:
invited_state = invite.unsigned["invite_room_state"]
writer.write_invite(room_id, invite, invited_state)
@@ -177,7 +119,7 @@ class AdminHandler(BaseHandler):
# were joined. We estimate that point by looking at the
# stream_ordering of the last membership if it wasn't a join.
if room.membership == Membership.JOIN:
stream_ordering = yield self.store.get_room_max_stream_ordering()
stream_ordering = self.store.get_room_max_stream_ordering()
else:
stream_ordering = room.stream_ordering
@@ -203,7 +145,7 @@ class AdminHandler(BaseHandler):
# events that we have and then filtering, this isn't the most
# efficient method perhaps but it does guarantee we get everything.
while True:
events, _ = yield self.store.paginate_room_events(
events, _ = await self.store.paginate_room_events(
room_id, from_key, to_key, limit=100, direction="f"
)
if not events:
@@ -211,7 +153,7 @@ class AdminHandler(BaseHandler):
from_key = events[-1].internal_metadata.after
events = yield filter_events_for_client(self.storage, user_id, events)
events = await filter_events_for_client(self.storage, user_id, events)
writer.write_events(room_id, events)
@@ -247,7 +189,7 @@ class AdminHandler(BaseHandler):
for event_id in extremities:
if not event_to_unseen_prevs[event_id]:
continue
state = yield self.state_store.get_state_for_event(event_id)
state = await self.state_store.get_state_for_event(event_id)
writer.write_state(room_id, event_id, state)
return writer.finished()
@@ -257,35 +199,26 @@ class ExfiltrationWriter(object):
"""Interface used to specify how to write exported data.
"""
def write_events(self, room_id, events):
def write_events(self, room_id: str, events: List[FrozenEvent]):
"""Write a batch of events for a room.
Args:
room_id (str)
events (list[FrozenEvent])
"""
pass
def write_state(self, room_id, event_id, state):
def write_state(self, room_id: str, event_id: str, state: StateMap[FrozenEvent]):
"""Write the state at the given event in the room.
This only gets called for backward extremities rather than for each
event.
Args:
room_id (str)
event_id (str)
state (dict[tuple[str, str], FrozenEvent])
"""
pass
def write_invite(self, room_id, event, state):
def write_invite(self, room_id: str, event: FrozenEvent, state: StateMap[dict]):
"""Write an invite for the room, with associated invite state.
Args:
room_id (str)
event (FrozenEvent)
state (dict[tuple[str, str], dict]): A subset of the state at the
room_id
event
state: A subset of the state at the
invite, with a subset of the event keys (type, state_key,
content and sender)
"""

View File

@@ -15,8 +15,6 @@
# limitations under the License.
import logging
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.types import UserID, create_requester
@@ -46,8 +44,7 @@ class DeactivateAccountHandler(BaseHandler):
self._account_validity_enabled = hs.config.account_validity.enabled
@defer.inlineCallbacks
def deactivate_account(self, user_id, erase_data, id_server=None):
async def deactivate_account(self, user_id, erase_data, id_server=None):
"""Deactivate a user's account
Args:
@@ -74,11 +71,11 @@ class DeactivateAccountHandler(BaseHandler):
identity_server_supports_unbinding = True
# Retrieve the 3PIDs this user has bound to an identity server
threepids = yield self.store.user_get_bound_threepids(user_id)
threepids = await self.store.user_get_bound_threepids(user_id)
for threepid in threepids:
try:
result = yield self._identity_handler.try_unbind_threepid(
result = await self._identity_handler.try_unbind_threepid(
user_id,
{
"medium": threepid["medium"],
@@ -91,33 +88,33 @@ class DeactivateAccountHandler(BaseHandler):
# Do we want this to be a fatal error or should we carry on?
logger.exception("Failed to remove threepid from ID server")
raise SynapseError(400, "Failed to remove threepid from ID server")
yield self.store.user_delete_threepid(
await self.store.user_delete_threepid(
user_id, threepid["medium"], threepid["address"]
)
# Remove all 3PIDs this user has bound to the homeserver
yield self.store.user_delete_threepids(user_id)
await self.store.user_delete_threepids(user_id)
# delete any devices belonging to the user, which will also
# delete corresponding access tokens.
yield self._device_handler.delete_all_devices_for_user(user_id)
await self._device_handler.delete_all_devices_for_user(user_id)
# then delete any remaining access tokens which weren't associated with
# a device.
yield self._auth_handler.delete_access_tokens_for_user(user_id)
await self._auth_handler.delete_access_tokens_for_user(user_id)
yield self.store.user_set_password_hash(user_id, None)
await self.store.user_set_password_hash(user_id, None)
# Add the user to a table of users pending deactivation (ie.
# removal from all the rooms they're a member of)
yield self.store.add_user_pending_deactivation(user_id)
await self.store.add_user_pending_deactivation(user_id)
# delete from user directory
yield self.user_directory_handler.handle_user_deactivated(user_id)
await self.user_directory_handler.handle_user_deactivated(user_id)
# Mark the user as erased, if they asked for that
if erase_data:
logger.info("Marking %s as erased", user_id)
yield self.store.mark_user_erased(user_id)
await self.store.mark_user_erased(user_id)
# Now start the process that goes through that list and
# parts users from rooms (if it isn't already running)
@@ -125,30 +122,29 @@ class DeactivateAccountHandler(BaseHandler):
# Reject all pending invites for the user, so that the user doesn't show up in the
# "invited" section of rooms' members list.
yield self._reject_pending_invites_for_user(user_id)
await self._reject_pending_invites_for_user(user_id)
# Remove all information on the user from the account_validity table.
if self._account_validity_enabled:
yield self.store.delete_account_validity_for_user(user_id)
await self.store.delete_account_validity_for_user(user_id)
# Mark the user as deactivated.
yield self.store.set_user_deactivated_status(user_id, True)
await self.store.set_user_deactivated_status(user_id, True)
return identity_server_supports_unbinding
@defer.inlineCallbacks
def _reject_pending_invites_for_user(self, user_id):
async def _reject_pending_invites_for_user(self, user_id):
"""Reject pending invites addressed to a given user ID.
Args:
user_id (str): The user ID to reject pending invites for.
"""
user = UserID.from_string(user_id)
pending_invites = yield self.store.get_invited_rooms_for_user(user_id)
pending_invites = await self.store.get_invited_rooms_for_local_user(user_id)
for room in pending_invites:
try:
yield self._room_member_handler.update_membership(
await self._room_member_handler.update_membership(
create_requester(user),
user,
room.room_id,
@@ -180,8 +176,7 @@ class DeactivateAccountHandler(BaseHandler):
if not self._user_parter_running:
run_as_background_process("user_parter_loop", self._user_parter_loop)
@defer.inlineCallbacks
def _user_parter_loop(self):
async def _user_parter_loop(self):
"""Loop that parts deactivated users from rooms
Returns:
@@ -191,19 +186,18 @@ class DeactivateAccountHandler(BaseHandler):
logger.info("Starting user parter")
try:
while True:
user_id = yield self.store.get_user_pending_deactivation()
user_id = await self.store.get_user_pending_deactivation()
if user_id is None:
break
logger.info("User parter parting %r", user_id)
yield self._part_user(user_id)
yield self.store.del_user_pending_deactivation(user_id)
await self._part_user(user_id)
await self.store.del_user_pending_deactivation(user_id)
logger.info("User parter finished parting %r", user_id)
logger.info("User parter finished: stopping")
finally:
self._user_parter_running = False
@defer.inlineCallbacks
def _part_user(self, user_id):
async def _part_user(self, user_id):
"""Causes the given user_id to leave all the rooms they're joined to
Returns:
@@ -211,11 +205,11 @@ class DeactivateAccountHandler(BaseHandler):
"""
user = UserID.from_string(user_id)
rooms_for_user = yield self.store.get_rooms_for_user(user_id)
rooms_for_user = await self.store.get_rooms_for_user(user_id)
for room_id in rooms_for_user:
logger.info("User parter parting %r from %r", user_id, room_id)
try:
yield self._room_member_handler.update_membership(
await self._room_member_handler.update_membership(
create_requester(user),
user,
room_id,

View File

@@ -598,7 +598,13 @@ class DeviceListUpdater(object):
# happens if we've missed updates.
resync = yield self._need_to_do_resync(user_id, pending_updates)
logger.debug("Need to re-sync devices for %r? %r", user_id, resync)
if logger.isEnabledFor(logging.INFO):
logger.info(
"Received device list update for %s, requiring resync: %s. Devices: %s",
user_id,
resync,
", ".join(u[0] for u in pending_updates),
)
if resync:
yield self.user_device_resync(user_id)
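The isEnabledFor guard above means the potentially long ", ".join(...) over pending updates is only built when INFO logging will actually be emitted. The general pattern:

    import logging

    logger = logging.getLogger(__name__)

    def log_expensive(items):
        # Only pay for building the string if it will be written out.
        if logger.isEnabledFor(logging.DEBUG):
            logger.debug("items: %s", ", ".join(sorted(items)))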

View File

@@ -14,12 +14,14 @@
# limitations under the License.
import logging
from typing import Any, Dict
from canonicaljson import json
from twisted.internet import defer
from synapse.api.errors import SynapseError
from synapse.logging.context import run_in_background
from synapse.logging.opentracing import (
get_active_span_text_map,
log_kv,
@@ -47,6 +49,8 @@ class DeviceMessageHandler(object):
"m.direct_to_device", self.on_direct_to_device_edu
)
self._device_list_updater = hs.get_device_handler().device_list_updater
@defer.inlineCallbacks
def on_direct_to_device_edu(self, origin, content):
local_messages = {}
@@ -65,6 +69,9 @@ class DeviceMessageHandler(object):
logger.warning("Request for keys for non-local user %s", user_id)
raise SynapseError(400, "Not a user here")
if not by_device:
continue
messages_by_device = {
device_id: {
"content": message_content,
@@ -73,8 +80,11 @@ class DeviceMessageHandler(object):
}
for device_id, message_content in by_device.items()
}
if messages_by_device:
local_messages[user_id] = messages_by_device
local_messages[user_id] = messages_by_device
yield self._check_for_unknown_devices(
message_type, sender_user_id, by_device
)
stream_id = yield self.store.add_messages_from_remote_to_device_inbox(
origin, message_id, local_messages
@@ -84,6 +94,55 @@ class DeviceMessageHandler(object):
"to_device_key", stream_id, users=local_messages.keys()
)
@defer.inlineCallbacks
def _check_for_unknown_devices(
self,
message_type: str,
sender_user_id: str,
by_device: Dict[str, Dict[str, Any]],
):
"""Checks inbound device messages for unkown remote devices, and if
found marks the remote cache for the user as stale.
"""
if message_type != "m.room_key_request":
return
# Get the sending device IDs
requesting_device_ids = set()
for message_content in by_device.values():
device_id = message_content.get("requesting_device_id")
requesting_device_ids.add(device_id)
# Check if we are tracking the devices of the remote user.
room_ids = yield self.store.get_rooms_for_user(sender_user_id)
if not room_ids:
logger.info(
"Received device message from remote device we don't"
" share a room with: %s %s",
sender_user_id,
requesting_device_ids,
)
return
# If we are tracking check that we know about the sending
# devices.
cached_devices = yield self.store.get_cached_devices_for_user(sender_user_id)
unknown_devices = requesting_device_ids - set(cached_devices)
if unknown_devices:
logger.info(
"Received device message from remote device not in our cache: %s %s",
sender_user_id,
unknown_devices,
)
yield self.store.mark_remote_user_device_cache_as_stale(sender_user_id)
# Immediately attempt a resync in the background
run_in_background(
self._device_list_updater.user_device_resync, sender_user_id
)
@defer.inlineCallbacks
def send_device_message(self, sender_user_id, message_type, messages):
set_tag("number_of_messages", len(messages))

View File

@@ -151,7 +151,12 @@ class DirectoryHandler(BaseHandler):
yield self._create_association(room_alias, room_id, servers, creator=user_id)
if send_event:
yield self.send_room_alias_update_event(requester, room_id)
try:
yield self.send_room_alias_update_event(requester, room_id)
except AuthError as e:
# sending the aliases event may fail due to the user not having
# permission in the room; this is permitted.
logger.info("Skipping updating aliases event due to auth error %s", e)
@defer.inlineCallbacks
def delete_association(self, requester, room_alias, send_event=True):

View File

@@ -208,8 +208,9 @@ class E2eKeysHandler(object):
)
user_devices = user_devices["devices"]
user_results = results.setdefault(user_id, {})
for device in user_devices:
results[user_id] = {device["device_id"]: device["keys"]}
user_results[device["device_id"]] = device["keys"]
user_ids_updated.append(user_id)
except Exception as e:
failures[destination] = _exception_to_failure(e)
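The change above fixes a classic accumulation bug: assigning a fresh dict on every iteration kept only the last device per user. Distilled:

    devices = [
        {"device_id": "A", "keys": {"k": 1}},
        {"device_id": "B", "keys": {"k": 2}},
    ]

    # Buggy: each iteration replaces the whole mapping; only "B" survives.
    results = {}
    for device in devices:
        results["@user:example.org"] = {device["device_id"]: device["keys"]}
    assert set(results["@user:example.org"]) == {"B"}

    # Fixed: accumulate into one shared dict, keeping every device.
    results = {}
    user_results = results.setdefault("@user:example.org", {})
    for device in devices:
        user_results[device["device_id"]] = device["keys"]
    assert set(results["@user:example.org"]) == {"A", "B"}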
@@ -264,6 +265,7 @@ class E2eKeysHandler(object):
return ret
@defer.inlineCallbacks
def get_cross_signing_keys_from_cache(self, query, from_user_id):
"""Get cross-signing keys for users from the database
@@ -283,14 +285,32 @@ class E2eKeysHandler(object):
self_signing_keys = {}
user_signing_keys = {}
# Currently a stub, implementation coming in https://github.com/matrix-org/synapse/pull/6486
return defer.succeed(
{
"master_keys": master_keys,
"self_signing_keys": self_signing_keys,
"user_signing_keys": user_signing_keys,
}
)
user_ids = list(query)
keys = yield self.store.get_e2e_cross_signing_keys_bulk(user_ids, from_user_id)
for user_id, user_info in keys.items():
if user_info is None:
continue
if "master" in user_info:
master_keys[user_id] = user_info["master"]
if "self_signing" in user_info:
self_signing_keys[user_id] = user_info["self_signing"]
if (
from_user_id in keys
and keys[from_user_id] is not None
and "user_signing" in keys[from_user_id]
):
# users can see other users' master and self-signing keys, but can
# only see their own user-signing keys
user_signing_keys[from_user_id] = keys[from_user_id]["user_signing"]
return {
"master_keys": master_keys,
"self_signing_keys": self_signing_keys,
"user_signing_keys": user_signing_keys,
}
@trace
@defer.inlineCallbacks
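The visibility rule encoded above (master and self-signing keys are visible to everyone, user-signing keys only to their owner) in a distilled form, with hypothetical data:

    keys = {
        "@alice:example.org": {
            "master": "mA", "self_signing": "sA", "user_signing": "uA",
        },
        "@bob:example.org": {
            "master": "mB", "self_signing": "sB", "user_signing": "uB",
        },
    }
    from_user_id = "@alice:example.org"

    master_keys = {u: info["master"] for u, info in keys.items()}
    self_signing_keys = {u: info["self_signing"] for u, info in keys.items()}
    # Only the requester's own user-signing key is disclosed.
    user_signing_keys = {from_user_id: keys[from_user_id]["user_signing"]}

    assert "@bob:example.org" not in user_signing_keys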

View File

@@ -19,7 +19,7 @@
import itertools
import logging
from typing import Dict, Iterable, Optional, Sequence, Tuple
from typing import Dict, Iterable, List, Optional, Sequence, Tuple
import six
from six import iteritems, itervalues
@@ -44,7 +44,7 @@ from synapse.api.errors import (
StoreError,
SynapseError,
)
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersion, RoomVersions
from synapse.crypto.event_signing import compute_event_signature
from synapse.event_auth import auth_types_for_event
from synapse.events import EventBase
@@ -57,13 +57,15 @@ from synapse.logging.context import (
run_in_background,
)
from synapse.logging.utils import log_function
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.replication.http.federation import (
ReplicationCleanRoomRestServlet,
ReplicationFederationSendEventsRestServlet,
)
from synapse.replication.http.membership import ReplicationUserJoinedLeftRoomRestServlet
from synapse.state import StateResolutionStore, resolve_events_with_store
from synapse.types import UserID, get_domain_from_id
from synapse.storage.data_stores.main.events_worker import EventRedactBehaviour
from synapse.types import StateMap, UserID, get_domain_from_id
from synapse.util.async_helpers import Linearizer, concurrently_execute
from synapse.util.distributor import user_joined_room
from synapse.util.retryutils import NotRetryingDestination
@@ -88,7 +90,7 @@ class _NewEventInfo:
event = attr.ib(type=EventBase)
state = attr.ib(type=Optional[Sequence[EventBase]], default=None)
auth_events = attr.ib(type=Optional[Dict[Tuple[str, str], EventBase]], default=None)
auth_events = attr.ib(type=Optional[StateMap[EventBase]], default=None)
def shortstr(iterable, maxitems=5):
@@ -155,6 +157,13 @@ class FederationHandler(BaseHandler):
hs
)
if hs.config.worker_app:
self._user_device_resync = ReplicationUserDevicesResyncRestServlet.make_client(
hs
)
else:
self._device_list_updater = hs.get_device_handler().device_list_updater
# When joining a room we need to queue any events for that room up
self.room_queues = {}
self._room_pdu_linearizer = Linearizer("fed_room_pdu")
@@ -163,8 +172,7 @@ class FederationHandler(BaseHandler):
self._ephemeral_messages_enabled = hs.config.enable_ephemeral_messages
@defer.inlineCallbacks
def on_receive_pdu(self, origin, pdu, sent_to_us_directly=False):
async def on_receive_pdu(self, origin, pdu, sent_to_us_directly=False) -> None:
""" Process a PDU received via a federation /send/ transaction, or
via backfill of missing prev_events
@@ -174,17 +182,15 @@ class FederationHandler(BaseHandler):
pdu (FrozenEvent): received PDU
sent_to_us_directly (bool): True if this event was pushed to us; False if
we pulled it as the result of a missing prev_event.
Returns (Deferred): completes with None
"""
room_id = pdu.room_id
event_id = pdu.event_id
logger.info("[%s %s] handling received PDU: %s", room_id, event_id, pdu)
logger.info("handling received PDU: %s", pdu)
# We reprocess pdus when we have seen them only as outliers
existing = yield self.store.get_event(
existing = await self.store.get_event(
event_id, allow_none=True, allow_rejected=True
)
@@ -228,7 +234,7 @@ class FederationHandler(BaseHandler):
#
# Note that if we were never in the room then we would have already
# dropped the event, since we wouldn't know the room version.
is_in_room = yield self.auth.check_host_in_room(room_id, self.server_name)
is_in_room = await self.auth.check_host_in_room(room_id, self.server_name)
if not is_in_room:
logger.info(
"[%s %s] Ignoring PDU from %s as we're not in the room",
@@ -243,20 +249,20 @@ class FederationHandler(BaseHandler):
# Get missing pdus if necessary.
if not pdu.internal_metadata.is_outlier():
# We only backfill backwards to the min depth.
min_depth = yield self.get_min_depth_for_context(pdu.room_id)
min_depth = await self.get_min_depth_for_context(pdu.room_id)
logger.debug("[%s %s] min_depth: %d", room_id, event_id, min_depth)
prevs = set(pdu.prev_event_ids())
seen = yield self.store.have_seen_events(prevs)
seen = await self.store.have_seen_events(prevs)
if min_depth and pdu.depth < min_depth:
if min_depth is not None and pdu.depth < min_depth:
# This is so that we don't notify the user about this
# message, to work around the fact that some events will
# reference really really old events we really don't want to
# send to the clients.
pdu.internal_metadata.outlier = True
elif min_depth and pdu.depth > min_depth:
elif min_depth is not None and pdu.depth > min_depth:
missing_prevs = prevs - seen
if sent_to_us_directly and missing_prevs:
# If we're missing stuff, ensure we only fetch stuff one
@@ -268,7 +274,7 @@ class FederationHandler(BaseHandler):
len(missing_prevs),
shortstr(missing_prevs),
)
with (yield self._room_pdu_linearizer.queue(pdu.room_id)):
with (await self._room_pdu_linearizer.queue(pdu.room_id)):
logger.info(
"[%s %s] Acquired room lock to fetch %d missing prev_events",
room_id,
@@ -276,13 +282,19 @@ class FederationHandler(BaseHandler):
len(missing_prevs),
)
yield self._get_missing_events_for_pdu(
origin, pdu, prevs, min_depth
)
try:
await self._get_missing_events_for_pdu(
origin, pdu, prevs, min_depth
)
except Exception as e:
raise Exception(
"Error fetching missing prev_events for %s: %s"
% (event_id, e)
)
# Update the set of things we've seen after trying to
# fetch the missing stuff
seen = yield self.store.have_seen_events(prevs)
seen = await self.store.have_seen_events(prevs)
if not prevs - seen:
logger.info(
@@ -290,14 +302,6 @@ class FederationHandler(BaseHandler):
room_id,
event_id,
)
elif missing_prevs:
logger.info(
"[%s %s] Not recursively fetching %d missing prev_events: %s",
room_id,
event_id,
len(missing_prevs),
shortstr(missing_prevs),
)
if prevs - seen:
# We've still not been able to get all of the prev_events for this event.
@@ -342,17 +346,21 @@ class FederationHandler(BaseHandler):
affected=pdu.event_id,
)
logger.info(
"Event %s is missing prev_events: calculating state for a "
"backwards extremity",
event_id,
)
# Calculate the state after each of the previous events, and
# resolve them to find the correct state at the current event.
event_map = {event_id: pdu}
try:
# Get the state of the events we know about
ours = yield self.state_store.get_state_groups_ids(room_id, seen)
ours = await self.state_store.get_state_groups_ids(room_id, seen)
# state_maps is a list of mappings from (type, state_key) to event_id
state_maps = list(
ours.values()
) # type: list[dict[tuple[str, str], str]]
state_maps = list(ours.values()) # type: list[StateMap[str]]
# we don't need this any more, let's delete it.
del ours
@@ -361,17 +369,14 @@ class FederationHandler(BaseHandler):
# know about
for p in prevs - seen:
logger.info(
"[%s %s] Requesting state at missing prev_event %s",
room_id,
event_id,
p,
"Requesting state at missing prev_event %s", event_id,
)
with nested_logging_context(p):
# note that if any of the missing prevs share missing state or
# auth events, the requests to fetch those events are deduped
# by the get_pdu_cache in federation_client.
(remote_state, _,) = yield self._get_state_for_room(
(remote_state, _,) = await self._get_state_for_room(
origin, room_id, p, include_event_in_state=True
)
@@ -383,8 +388,8 @@ class FederationHandler(BaseHandler):
for x in remote_state:
event_map[x.event_id] = x
room_version = yield self.store.get_room_version(room_id)
state_map = yield resolve_events_with_store(
room_version = await self.store.get_room_version_id(room_id)
state_map = await resolve_events_with_store(
room_id,
room_version,
state_maps,
@@ -397,10 +402,10 @@ class FederationHandler(BaseHandler):
# First though we need to fetch all the events that are in
# state_map, so we can build up the state below.
evs = yield self.store.get_events(
evs = await self.store.get_events(
list(state_map.values()),
get_prev_content=False,
check_redacted=False,
redact_behaviour=EventRedactBehaviour.AS_IS,
)
event_map.update(evs)
@@ -420,10 +425,9 @@ class FederationHandler(BaseHandler):
affected=event_id,
)
yield self._process_received_pdu(origin, pdu, state=state)
await self._process_received_pdu(origin, pdu, state=state)
@defer.inlineCallbacks
def _get_missing_events_for_pdu(self, origin, pdu, prevs, min_depth):
async def _get_missing_events_for_pdu(self, origin, pdu, prevs, min_depth):
"""
Args:
origin (str): Origin of the pdu. Will be called to get the missing events
@@ -435,12 +439,12 @@ class FederationHandler(BaseHandler):
room_id = pdu.room_id
event_id = pdu.event_id
seen = yield self.store.have_seen_events(prevs)
seen = await self.store.have_seen_events(prevs)
if not prevs - seen:
return
latest = yield self.store.get_latest_event_ids_in_room(room_id)
latest = await self.store.get_latest_event_ids_in_room(room_id)
# We add the prev events that we have seen to the latest
# list to ensure the remote server doesn't give them to us
@@ -504,7 +508,7 @@ class FederationHandler(BaseHandler):
# All that said: Let's try increasing the timeout to 60s and see what happens.
try:
missing_events = yield self.federation_client.get_missing_events(
missing_events = await self.federation_client.get_missing_events(
origin,
room_id,
earliest_events_ids=list(latest),
@@ -543,7 +547,7 @@ class FederationHandler(BaseHandler):
)
with nested_logging_context(ev.event_id):
try:
yield self.on_receive_pdu(origin, ev, sent_to_us_directly=False)
await self.on_receive_pdu(origin, ev, sent_to_us_directly=False)
except FederationError as e:
if e.code == 403:
logger.warning(
@@ -555,29 +559,30 @@ class FederationHandler(BaseHandler):
else:
raise
@defer.inlineCallbacks
@log_function
def _get_state_for_room(
self, destination, room_id, event_id, include_event_in_state
):
async def _get_state_for_room(
self,
destination: str,
room_id: str,
event_id: str,
include_event_in_state: bool = False,
) -> Tuple[List[EventBase], List[EventBase]]:
"""Requests all of the room state at a given event from a remote homeserver.
Args:
destination (str): The remote homeserver to query for the state.
room_id (str): The id of the room we're interested in.
event_id (str): The id of the event we want the state at.
destination: The remote homeserver to query for the state.
room_id: The id of the room we're interested in.
event_id: The id of the event we want the state at.
include_event_in_state: if true, the event itself will be included in the
returned state event list.
Returns:
Deferred[Tuple[List[EventBase], List[EventBase]]]:
A list of events in the state, and a list of events in the auth chain
for the given event.
A list of events in the state, possibly including the event itself, and
a list of events in the auth chain for the given event.
"""
(
state_event_ids,
auth_event_ids,
) = yield self.federation_client.get_room_state_ids(
) = await self.federation_client.get_room_state_ids(
destination, room_id, event_id=event_id
)
@@ -586,15 +591,15 @@ class FederationHandler(BaseHandler):
if include_event_in_state:
desired_events.add(event_id)
event_map = yield self._get_events_from_store_or_dest(
event_map = await self._get_events_from_store_or_dest(
destination, room_id, desired_events
)
failed_to_fetch = desired_events - event_map.keys()
if failed_to_fetch:
logger.warning(
"Failed to fetch missing state/auth events for %s: %s",
room_id,
"Failed to fetch missing state/auth events for %s %s",
event_id,
failed_to_fetch,
)
@@ -614,15 +619,11 @@ class FederationHandler(BaseHandler):
return remote_state, auth_chain
@defer.inlineCallbacks
def _get_events_from_store_or_dest(self, destination, room_id, event_ids):
async def _get_events_from_store_or_dest(
self, destination: str, room_id: str, event_ids: Iterable[str]
) -> Dict[str, EventBase]:
"""Fetch events from a remote destination, checking if we already have them.
Args:
destination (str)
room_id (str)
event_ids (Iterable[str])
Persists any events we don't already have as outliers.
If we fail to fetch any of the events, a warning will be logged, and the event
@@ -630,10 +631,9 @@ class FederationHandler(BaseHandler):
be in the given room.
Returns:
Deferred[dict[str, EventBase]]: A deferred resolving to a map
from event_id to event
map from event_id to event
"""
fetched_events = yield self.store.get_events(event_ids, allow_rejected=True)
fetched_events = await self.store.get_events(event_ids, allow_rejected=True)
missing_events = set(event_ids) - fetched_events.keys()
@@ -644,14 +644,14 @@ class FederationHandler(BaseHandler):
room_id,
)
yield self._get_events_and_persist(
await self._get_events_and_persist(
destination=destination, room_id=room_id, events=missing_events
)
# we need to make sure we re-load from the database to get the rejected
# state correct.
fetched_events.update(
(yield self.store.get_events(missing_events, allow_rejected=True))
(await self.store.get_events(missing_events, allow_rejected=True))
)
# check for events which were in the wrong room.
@@ -677,12 +677,14 @@ class FederationHandler(BaseHandler):
bad_room_id,
room_id,
)
del fetched_events[bad_event_id]
return fetched_events
@defer.inlineCallbacks
def _process_received_pdu(self, origin, event, state):
async def _process_received_pdu(
self, origin: str, event: EventBase, state: Optional[Iterable[EventBase]],
):
""" Called when we have a new pdu. We need to do auth checks and put it
through the StateHandler.
@@ -701,16 +703,28 @@ class FederationHandler(BaseHandler):
logger.debug("[%s %s] Processing event: %s", room_id, event_id, event)
try:
context = yield self._handle_new_event(origin, event, state=state)
context = await self._handle_new_event(origin, event, state=state)
except AuthError as e:
raise FederationError("ERROR", e.code, e.msg, affected=event.event_id)
room = yield self.store.get_room(room_id)
room = await self.store.get_room(room_id)
if not room:
try:
yield self.store.store_room(
room_id=room_id, room_creator_user_id="", is_public=False
prev_state_ids = await context.get_prev_state_ids()
create_event = await self.store.get_event(
prev_state_ids[(EventTypes.Create, "")]
)
room_version_id = create_event.content.get(
"room_version", RoomVersions.V1.identifier
)
await self.store.store_room(
room_id=room_id,
room_creator_user_id="",
is_public=False,
room_version=KNOWN_ROOM_VERSIONS[room_version_id],
)
except StoreError:
logger.exception("Failed to store room.")
@@ -722,11 +736,11 @@ class FederationHandler(BaseHandler):
# changing their profile info.
newly_joined = True
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_ids = await context.get_prev_state_ids()
prev_state_id = prev_state_ids.get((event.type, event.state_key))
if prev_state_id:
prev_state = yield self.store.get_event(
prev_state = await self.store.get_event(
prev_state_id, allow_none=True
)
if prev_state and prev_state.membership == Membership.JOIN:
@@ -734,11 +748,82 @@ class FederationHandler(BaseHandler):
if newly_joined:
user = UserID.from_string(event.state_key)
yield self.user_joined_room(user, room_id)
await self.user_joined_room(user, room_id)
# For encrypted messages we check that we know about the sending device;
# if we don't then we mark the device cache for that user as stale.
if event.type == EventTypes.Encrypted:
device_id = event.content.get("device_id")
sender_key = event.content.get("sender_key")
cached_devices = await self.store.get_cached_devices_for_user(event.sender)
resync = False # Whether we should resync device lists.
device = None
if device_id is not None:
device = cached_devices.get(device_id)
if device is None:
logger.info(
"Received event from remote device not in our cache: %s %s",
event.sender,
device_id,
)
resync = True
# We also check if the `sender_key` matches what we expect.
if sender_key is not None:
# Figure out what sender key we're expecting. If we know the
# device and recognize the algorithm then we can work out the
# exact key to expect. Otherwise check it matches any key we
# have for that device.
if device:
keys = device.get("keys", {}).get("keys", {})
if event.content.get("algorithm") == "m.megolm.v1.aes-sha2":
# For this algorithm we expect a curve25519 key.
key_name = "curve25519:%s" % (device_id,)
current_keys = [keys.get(key_name)]
else:
# We don't understand the algorithm, so we just
# check that it matches a key for the device.
current_keys = keys.values()
elif device_id:
# We don't have any keys for the device ID.
current_keys = []
else:
# The event didn't include a device ID, so we just look for
# keys across all devices.
current_keys = (
key
for device in cached_devices.values()
for key in device.get("keys", {}).get("keys", {}).values()
)
# We now check that the sender key matches (one of) the expected
# keys.
if sender_key not in current_keys:
logger.info(
"Received event from remote device with unexpected sender key: %s %s: %s",
event.sender,
device_id or "<no device_id>",
sender_key,
)
resync = True
if resync:
await self.store.mark_remote_user_device_cache_as_stale(event.sender)
# Immediately attempt a resync in the background
if self.config.worker_app:
return run_in_background(self._user_device_resync, event.sender)
else:
return run_in_background(
self._device_list_updater.user_device_resync, event.sender
)
@log_function
@defer.inlineCallbacks
def backfill(self, dest, room_id, limit, extremities):
async def backfill(self, dest, room_id, limit, extremities):
""" Trigger a backfill request to `dest` for the given `room_id`
This will attempt to get more events from the remote. If the other side
@@ -755,7 +840,7 @@ class FederationHandler(BaseHandler):
if dest == self.server_name:
raise SynapseError(400, "Can't backfill from self.")
events = yield self.federation_client.backfill(
events = await self.federation_client.backfill(
dest, room_id, limit=limit, extremities=extremities
)
@@ -770,7 +855,7 @@ class FederationHandler(BaseHandler):
# self._sanity_check_event(ev)
# Don't bother processing events we already have.
seen_events = yield self.store.have_events_in_timeline(
seen_events = await self.store.have_events_in_timeline(
set(e.event_id for e in events)
)
@@ -796,7 +881,7 @@ class FederationHandler(BaseHandler):
state_events = {}
events_to_state = {}
for e_id in edges:
state, auth = yield self._get_state_for_room(
state, auth = await self._get_state_for_room(
destination=dest,
room_id=room_id,
event_id=e_id,
@@ -843,7 +928,7 @@ class FederationHandler(BaseHandler):
)
)
yield self._handle_new_events(dest, ev_infos, backfilled=True)
await self._handle_new_events(dest, ev_infos, backfilled=True)
# Step 2: Persist the rest of the events in the chunk one by one
events.sort(key=lambda e: e.depth)
@@ -859,16 +944,15 @@ class FederationHandler(BaseHandler):
# We store these one at a time since each event depends on the
# previous to work out the state.
# TODO: We can probably do something more clever here.
yield self._handle_new_event(dest, event, backfilled=True)
await self._handle_new_event(dest, event, backfilled=True)
return events
@defer.inlineCallbacks
def maybe_backfill(self, room_id, current_depth):
async def maybe_backfill(self, room_id, current_depth):
"""Checks the database to see if we should backfill before paginating,
and if so, does so.
"""
extremities = yield self.store.get_oldest_events_with_depth_in_room(room_id)
extremities = await self.store.get_oldest_events_with_depth_in_room(room_id)
if not extremities:
logger.debug("Not backfilling as no extremeties found.")
@@ -900,15 +984,17 @@ class FederationHandler(BaseHandler):
# state *before* the event, ignoring the special casing certain event
# types have.
forward_events = yield self.store.get_successor_events(list(extremities))
forward_events = await self.store.get_successor_events(list(extremities))
extremities_events = yield self.store.get_events(
forward_events, check_redacted=False, get_prev_content=False
extremities_events = await self.store.get_events(
forward_events,
redact_behaviour=EventRedactBehaviour.AS_IS,
get_prev_content=False,
)
# We set `check_history_visibility_only` as we might otherwise get false
# positives from users having been erased.
filtered_extremities = yield filter_events_for_server(
filtered_extremities = await filter_events_for_server(
self.storage,
self.server_name,
list(extremities_events.values()),
@@ -938,7 +1024,7 @@ class FederationHandler(BaseHandler):
# First we try hosts that are already in the room
# TODO: HEURISTIC ALERT.
curr_state = yield self.state_handler.get_current_state(room_id)
curr_state = await self.state_handler.get_current_state(room_id)
def get_domains_from_state(state):
"""Get joined domains from state
@@ -977,12 +1063,11 @@ class FederationHandler(BaseHandler):
domain for domain, depth in curr_domains if domain != self.server_name
]
@defer.inlineCallbacks
def try_backfill(domains):
async def try_backfill(domains):
# TODO: Should we try multiple of these at a time?
for dom in domains:
try:
yield self.backfill(
await self.backfill(
dom, room_id, limit=100, extremities=extremities
)
# If this succeeded then we probably already have the
@@ -1013,7 +1098,7 @@ class FederationHandler(BaseHandler):
return False
success = yield try_backfill(likely_domains)
success = await try_backfill(likely_domains)
if success:
return True
@@ -1027,7 +1112,7 @@ class FederationHandler(BaseHandler):
logger.debug("calling resolve_state_groups in _maybe_backfill")
resolve = preserve_fn(self.state_handler.resolve_state_groups_for_events)
states = yield make_deferred_yieldable(
states = await make_deferred_yieldable(
defer.gatherResults(
[resolve(room_id, [e]) for e in event_ids], consumeErrors=True
)
@@ -1037,7 +1122,7 @@ class FederationHandler(BaseHandler):
# event_ids.
states = dict(zip(event_ids, [s.state for s in states]))
state_map = yield self.store.get_events(
state_map = await self.store.get_events(
[e_id for ids in itervalues(states) for e_id in itervalues(ids)],
get_prev_content=False,
)
@@ -1053,7 +1138,7 @@ class FederationHandler(BaseHandler):
for e_id, _ in sorted_extremeties_tuple:
likely_domains = get_domains_from_state(states[e_id])
success = yield try_backfill(
success = await try_backfill(
[dom for dom, _ in likely_domains if dom not in tried_domains]
)
if success:
@@ -1063,8 +1148,7 @@ class FederationHandler(BaseHandler):
return False
@defer.inlineCallbacks
def _get_events_and_persist(
async def _get_events_and_persist(
self, destination: str, room_id: str, events: Iterable[str]
):
"""Fetch the given events from a server, and persist them as outliers.
@@ -1072,7 +1156,7 @@ class FederationHandler(BaseHandler):
Logs a warning if we can't find the given event.
"""
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version_id(room_id)
event_infos = []
@@ -1108,9 +1192,9 @@ class FederationHandler(BaseHandler):
e,
)
yield concurrently_execute(get_event, events, 5)
await concurrently_execute(get_event, events, 5)
yield self._handle_new_events(
await self._handle_new_events(
destination, event_infos,
)
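Several hunks in this file swap get_room_version for get_room_version_id: the former now yields a rich object (hence room_version_obj.event_format and room_version.identifier above) while the latter yields the bare identifier string found in create events. A simplified sketch of the two shapes (not the real storage API):

    import attr

    @attr.s(frozen=True)
    class RoomVersion:
        identifier = attr.ib(type=str)   # e.g. "5", as stored in m.room.create
        event_format = attr.ib(type=int)

    ROOM_V5 = RoomVersion(identifier="5", event_format=2)

    def get_room_version(room_id):
        # Rich object: callers can read .event_format, .identifier, etc.
        return ROOM_V5

    def get_room_version_id(room_id):
        # Just the string, for callers that only need the id.
        return get_room_version(room_id).identifier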
@@ -1194,7 +1278,7 @@ class FederationHandler(BaseHandler):
"""
logger.debug("Joining %s to %s", joinee, room_id)
origin, event, event_format_version = yield self._make_and_verify_event(
origin, event, room_version_obj = yield self._make_and_verify_event(
target_hosts,
room_id,
joinee,
@@ -1222,6 +1306,8 @@ class FederationHandler(BaseHandler):
target_hosts.insert(0, origin)
except ValueError:
pass
event_format_version = room_version_obj.event_format
ret = yield self.federation_client.send_join(
target_hosts, event, event_format_version
)
@@ -1242,18 +1328,23 @@ class FederationHandler(BaseHandler):
try:
yield self.store.store_room(
room_id=room_id, room_creator_user_id="", is_public=False
room_id=room_id,
room_creator_user_id="",
is_public=False,
room_version=room_version_obj,
)
except Exception:
# FIXME
pass
yield self._persist_auth_tree(origin, auth_chain, state, event)
yield self._persist_auth_tree(
origin, auth_chain, state, event, room_version_obj
)
# Check whether this room is the result of an upgrade of a room we already know
# about. If so, migrate over user information
predecessor = yield self.store.get_room_predecessor(room_id)
if not predecessor:
if not predecessor or not isinstance(predecessor.get("room_id"), str):
return
old_room_id = predecessor["room_id"]
logger.debug(
@@ -1281,8 +1372,7 @@ class FederationHandler(BaseHandler):
return True
@defer.inlineCallbacks
def _handle_queued_pdus(self, room_queue):
async def _handle_queued_pdus(self, room_queue):
"""Process PDUs which got queued up while we were busy send_joining.
Args:
@@ -1298,7 +1388,7 @@ class FederationHandler(BaseHandler):
p.room_id,
)
with nested_logging_context(p.event_id):
yield self.on_receive_pdu(origin, p, sent_to_us_directly=True)
await self.on_receive_pdu(origin, p, sent_to_us_directly=True)
except Exception as e:
logger.warning(
"Error handling queued PDU %s from %s: %s", p.event_id, origin, e
@@ -1329,7 +1419,7 @@ class FederationHandler(BaseHandler):
event_content = {"membership": Membership.JOIN}
room_version = yield self.store.get_room_version(room_id)
room_version = yield self.store.get_room_version_id(room_id)
builder = self.event_builder_factory.new(
room_version,
@@ -1428,7 +1518,7 @@ class FederationHandler(BaseHandler):
user = UserID.from_string(event.state_key)
yield self.user_joined_room(user, event.room_id)
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_ids = yield context.get_prev_state_ids()
state_ids = list(prev_state_ids.values())
auth_chain = yield self.store.get_auth_chain(state_ids)
@@ -1438,13 +1528,13 @@ class FederationHandler(BaseHandler):
return {"state": list(state.values()), "auth_chain": auth_chain}
@defer.inlineCallbacks
def on_invite_request(self, origin, pdu):
def on_invite_request(
self, origin: str, event: EventBase, room_version: RoomVersion
):
""" We've got an invite event. Process and persist it. Sign it.
Respond with the now signed event.
"""
event = pdu
if event.state_key is None:
raise SynapseError(400, "The invite event did not have a state key")
@@ -1484,7 +1574,10 @@ class FederationHandler(BaseHandler):
event.signatures.update(
compute_event_signature(
event.get_pdu_json(), self.hs.hostname, self.hs.config.signing_key[0]
room_version,
event.get_pdu_json(),
self.hs.hostname,
self.hs.config.signing_key[0],
)
)
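
compute_event_signature now takes the room version because an event is signed over its redacted form, and the redaction algorithm differs between room versions. A loose sketch of the flow, using signedjson's sign_json plus a heavily simplified stand-in for the redaction step:

    from signedjson.sign import sign_json

    def prune_event_dict(room_version, event_dict):
        # Stand-in for the version-aware redaction: the real rules depend on
        # both the event type and room_version.
        protected = {"type", "room_id", "sender", "state_key", "hashes",
                     "origin_server_ts", "depth", "prev_events", "auth_events"}
        return {k: v for k, v in event_dict.items() if k in protected}

    def compute_signature(room_version, event_json, server_name, signing_key):
        redacted = prune_event_dict(room_version, event_json)
        # sign_json adds a "signatures" block keyed by server name.
        return sign_json(redacted, server_name, signing_key)["signatures"]
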
@@ -1495,8 +1588,8 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
def do_remotely_reject_invite(self, target_hosts, room_id, user_id, content):
origin, event, event_format_version = yield self._make_and_verify_event(
target_hosts, room_id, user_id, "leave", content=content,
origin, event, room_version = yield self._make_and_verify_event(
target_hosts, room_id, user_id, "leave", content=content
)
# Mark as outlier as we don't have any state for this event; we're not
# even in the room.
@@ -1522,7 +1615,11 @@ class FederationHandler(BaseHandler):
def _make_and_verify_event(
self, target_hosts, room_id, user_id, membership, content={}, params=None
):
origin, event, format_ver = yield self.federation_client.make_membership_event(
(
origin,
event,
room_version,
) = yield self.federation_client.make_membership_event(
target_hosts, room_id, user_id, membership, content, params=params
)
@@ -1534,7 +1631,7 @@ class FederationHandler(BaseHandler):
assert event.user_id == user_id
assert event.state_key == user_id
assert event.room_id == room_id
return origin, event, format_ver
return origin, event, room_version
@defer.inlineCallbacks
@log_function
@@ -1559,7 +1656,7 @@ class FederationHandler(BaseHandler):
)
raise SynapseError(403, "User not from origin", Codes.FORBIDDEN)
room_version = yield self.store.get_room_version(room_id)
room_version = yield self.store.get_room_version_id(room_id)
builder = self.event_builder_factory.new(
room_version,
{
@@ -1819,7 +1916,14 @@ class FederationHandler(BaseHandler):
)
@defer.inlineCallbacks
def _persist_auth_tree(self, origin, auth_events, state, event):
def _persist_auth_tree(
self,
origin: str,
auth_events: List[EventBase],
state: List[EventBase],
event: EventBase,
room_version: RoomVersion,
):
"""Checks the auth chain is valid (and passes auth checks) for the
state and event. Then persists the auth chain and state atomically.
Persists the event separately. Notifies about the persisted events
@@ -1828,10 +1932,12 @@ class FederationHandler(BaseHandler):
Will attempt to fetch missing auth events.
Args:
origin (str): Where the events came from
auth_events (list)
state (list)
event (Event)
origin: Where the events came from
auth_events
state
event
room_version: The room version we expect this room to have; we raise
if it doesn't match the version in the create event.
Returns:
Deferred
@@ -1857,10 +1963,13 @@ class FederationHandler(BaseHandler):
# invalid, and it would fail auth checks anyway.
raise SynapseError(400, "No create event in state")
room_version = create_event.content.get(
room_version_id = create_event.content.get(
"room_version", RoomVersions.V1.identifier
)
if room_version.identifier != room_version_id:
raise SynapseError(400, "Room version mismatch")
missing_auth_events = set()
for e in itertools.chain(auth_events, state, [event]):
for e_id in e.auth_event_ids():
@@ -1869,7 +1978,11 @@ class FederationHandler(BaseHandler):
for e_id in missing_auth_events:
m_ev = yield self.federation_client.get_pdu(
[origin], e_id, room_version=room_version, outlier=True, timeout=10000
[origin],
e_id,
room_version=room_version.identifier,
outlier=True,
timeout=10000,
)
if m_ev and m_ev.event_id == e_id:
event_map[e_id] = m_ev
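
The surrounding loop first collects every auth-event id referenced by the incoming events that we do not already hold, then fetches each one from the origin as an outlier. The collection step, restated under the assumption that events expose auth_event_ids():

    def find_missing_auth_ids(events, known_ids):
        missing = set()
        for event in events:
            for e_id in event.auth_event_ids():
                if e_id not in known_ids:
                    missing.add(e_id)
        return missing
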
@@ -1919,7 +2032,7 @@ class FederationHandler(BaseHandler):
origin: str,
event: EventBase,
state: Optional[Iterable[EventBase]],
auth_events: Optional[Dict[Tuple[str, str], EventBase]],
auth_events: Optional[StateMap[EventBase]],
backfilled: bool,
):
"""
@@ -1937,7 +2050,7 @@ class FederationHandler(BaseHandler):
context = yield self.state_handler.compute_event_context(event, old_state=state)
if not auth_events:
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_ids = yield context.get_prev_state_ids()
auth_events_ids = yield self.auth.compute_auth_events(
event, prev_state_ids, for_verification=True
)
@@ -1995,7 +2108,8 @@ class FederationHandler(BaseHandler):
do_soft_fail_check = False
if do_soft_fail_check:
room_version = yield self.store.get_room_version(event.room_id)
room_version = yield self.store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
# Calculate the "current state".
if state is not None:
@@ -2045,7 +2159,9 @@ class FederationHandler(BaseHandler):
}
try:
event_auth.check(room_version, event, auth_events=current_auth_events)
event_auth.check(
room_version_obj, event, auth_events=current_auth_events
)
except AuthError as e:
logger.warning("Soft-failing %r because %s", event, e)
event.internal_metadata.soft_failed = True
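
This is the soft-failure path: the event is checked against the room's current state, and on failure it is still persisted (it may matter for state resolution) but flagged so it is never served to clients. A minimal sketch of the decision:

    class AuthError(Exception):
        pass

    def maybe_soft_fail(check, event, current_auth_events):
        # `check` stands in for event_auth.check(room_version_obj, ...).
        try:
            check(event, current_auth_events)
        except AuthError:
            # Keep the event, but hide it from clients.
            event.internal_metadata.soft_failed = True
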
@@ -2128,7 +2244,8 @@ class FederationHandler(BaseHandler):
Returns:
defer.Deferred[EventContext]: updated context object
"""
room_version = yield self.store.get_room_version(event.room_id)
room_version = yield self.store.get_room_version_id(event.room_id)
room_version_obj = KNOWN_ROOM_VERSIONS[room_version]
try:
context = yield self._update_auth_events_and_context_for_auth(
@@ -2146,7 +2263,7 @@ class FederationHandler(BaseHandler):
)
try:
event_auth.check(room_version, event, auth_events=auth_events)
event_auth.check(room_version_obj, event, auth_events=auth_events)
except AuthError as e:
logger.warning("Failed auth resolution for %r because %s", event, e)
context.rejected = RejectedReason.AUTH_ERROR
@@ -2299,7 +2416,7 @@ class FederationHandler(BaseHandler):
remote_auth_events.update({(d.type, d.state_key): d for d in different_events})
remote_state = remote_auth_events.values()
room_version = yield self.store.get_room_version(event.room_id)
room_version = yield self.store.get_room_version_id(event.room_id)
new_state = yield self.state_handler.resolve_events(
room_version, (local_state, remote_state), event
)
@@ -2346,12 +2463,12 @@ class FederationHandler(BaseHandler):
k: a.event_id for k, a in iteritems(auth_events) if k != event_key
}
current_state_ids = yield context.get_current_state_ids(self.store)
current_state_ids = yield context.get_current_state_ids()
current_state_ids = dict(current_state_ids)
current_state_ids.update(state_updates)
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_ids = yield context.get_prev_state_ids()
prev_state_ids = dict(prev_state_ids)
prev_state_ids.update({k: a.event_id for k, a in iteritems(auth_events)})
@@ -2523,7 +2640,7 @@ class FederationHandler(BaseHandler):
}
if (yield self.auth.check_host_in_room(room_id, self.hs.hostname)):
room_version = yield self.store.get_room_version(room_id)
room_version = yield self.store.get_room_version_id(room_id)
builder = self.event_builder_factory.new(room_version, event_dict)
EventValidator().validate_builder(builder)
@@ -2586,7 +2703,7 @@ class FederationHandler(BaseHandler):
Returns:
Deferred: resolves (to None)
"""
room_version = yield self.store.get_room_version(room_id)
room_version = yield self.store.get_room_version_id(room_id)
# NB: event_dict has a particular specced format we might need to fudge
# if we change event formats too much.
@@ -2635,7 +2752,7 @@ class FederationHandler(BaseHandler):
event.content["third_party_invite"]["signed"]["token"],
)
original_invite = None
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_ids = yield context.get_prev_state_ids()
original_invite_id = prev_state_ids.get(key)
if original_invite_id:
original_invite = yield self.store.get_event(
@@ -2683,7 +2800,7 @@ class FederationHandler(BaseHandler):
signed = event.content["third_party_invite"]["signed"]
token = signed["token"]
prev_state_ids = yield context.get_prev_state_ids(self.store)
prev_state_ids = yield context.get_prev_state_ids()
invite_event_id = prev_state_ids.get((EventTypes.ThirdPartyInvite, token))
invite_event = None
@@ -2857,7 +2974,7 @@ class FederationHandler(BaseHandler):
room_id=room_id, user_id=user.to_string(), change="joined"
)
else:
return user_joined_room(self.distributor, user, room_id)
return defer.succeed(user_joined_room(self.distributor, user, room_id))
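
Wrapping the synchronous branch in defer.succeed makes both arms of the conditional return a Deferred, so callers can treat the result uniformly. The shape of the change, with illustrative function names:

    from twisted.internet import defer

    def notify_user_joined(use_remote, remote_notify, local_notify):
        if use_remote:
            return remote_notify()            # already returns a Deferred
        return defer.succeed(local_notify())  # wrap the synchronous result
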
@defer.inlineCallbacks
def get_room_complexity(self, remote_room_hosts, room_id):


@@ -130,6 +130,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.get_group_summary(
get_domain_from_id(group_id), group_id, requester_user_id
)
except HttpResponseException as e:
raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
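
Every transport call in this handler now gets the same two-step error translation: an HTTP error response from the remote group server is rethrown as the equivalent SynapseError, and a transport-level failure becomes a generic 502. The pattern written once (async for brevity; the handler itself still uses inlineCallbacks):

    from synapse.api.errors import (
        HttpResponseException,
        RequestSendFailed,
        SynapseError,
    )

    async def call_group_server(call, *args, **kwargs):
        try:
            return await call(*args, **kwargs)
        except HttpResponseException as e:
            # Preserve the remote error code and message.
            raise e.to_synapse_error()
        except RequestSendFailed:
            raise SynapseError(502, "Failed to contact group server")
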
@@ -190,6 +192,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.create_group(
get_domain_from_id(group_id), group_id, user_id, content
)
except HttpResponseException as e:
raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -231,6 +235,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.get_users_in_group(
get_domain_from_id(group_id), group_id, requester_user_id
)
except HttpResponseException as e:
raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -271,6 +277,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.join_group(
get_domain_from_id(group_id), group_id, user_id, content
)
except HttpResponseException as e:
raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -315,6 +323,8 @@ class GroupsLocalHandler(object):
res = yield self.transport_client.accept_group_invite(
get_domain_from_id(group_id), group_id, user_id, content
)
except HttpResponseException as e:
raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -361,6 +371,8 @@ class GroupsLocalHandler(object):
requester_user_id,
content,
)
except HttpResponseException as e:
raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -424,6 +436,8 @@ class GroupsLocalHandler(object):
user_id,
content,
)
except HttpResponseException as e:
raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")
@@ -460,6 +474,8 @@ class GroupsLocalHandler(object):
bulk_result = yield self.transport_client.bulk_get_publicised_groups(
get_domain_from_id(user_id), [user_id]
)
except HttpResponseException as e:
raise e.to_synapse_error()
except RequestSendFailed:
raise SynapseError(502, "Failed to contact group server")


@@ -38,7 +38,7 @@ from synapse.api.errors import (
from synapse.config.emailconfig import ThreepidBehaviour
from synapse.http.client import SimpleHttpClient
from synapse.util.hash import sha256_and_url_safe_base64
from synapse.util.stringutils import random_string
from synapse.util.stringutils import assert_valid_client_secret, random_string
from ._base import BaseHandler
@@ -84,6 +84,8 @@ class IdentityHandler(BaseHandler):
raise SynapseError(
400, "Missing param client_secret in creds", errcode=Codes.MISSING_PARAM
)
assert_valid_client_secret(client_secret)
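
assert_valid_client_secret rejects malformed values before they reach the identity server. If I read the spec correctly, a client_secret is 1-255 characters drawn from [0-9a-zA-Z.=_-]; a sketch of such a validator (ValueError standing in for the SynapseError the real helper raises):

    import re

    CLIENT_SECRET_RE = re.compile(r"^[0-9a-zA-Z.=_\-]+$")

    def assert_valid_client_secret(client_secret):
        if len(client_secret) > 255 or not CLIENT_SECRET_RE.match(client_secret):
            raise ValueError("Invalid client_secret parameter")
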
session_id = creds.get("sid")
if not session_id:
raise SynapseError(


@@ -26,7 +26,7 @@ from synapse.streams.config import PaginationConfig
from synapse.types import StreamToken, UserID
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import concurrently_execute
from synapse.util.caches.snapshot_cache import SnapshotCache
from synapse.util.caches.response_cache import ResponseCache
from synapse.visibility import filter_events_for_client
from ._base import BaseHandler
@@ -41,7 +41,7 @@ class InitialSyncHandler(BaseHandler):
self.state = hs.get_state_handler()
self.clock = hs.get_clock()
self.validator = EventValidator()
self.snapshot_cache = SnapshotCache()
self.snapshot_cache = ResponseCache(hs, "initial_sync_cache")
self._event_serializer = hs.get_event_client_serializer()
self.storage = hs.get_storage()
self.state_store = self.storage.state
@@ -79,21 +79,17 @@ class InitialSyncHandler(BaseHandler):
as_client_event,
include_archived,
)
now_ms = self.clock.time_msec()
result = self.snapshot_cache.get(now_ms, key)
if result is not None:
return result
return self.snapshot_cache.set(
now_ms,
return self.snapshot_cache.wrap(
key,
self._snapshot_all_rooms(
user_id, pagin_config, as_client_event, include_archived
),
self._snapshot_all_rooms,
user_id,
pagin_config,
as_client_event,
include_archived,
)
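
The switch from SnapshotCache to ResponseCache deduplicates concurrent work: wrap() hands back the in-flight (or recently completed) computation for a key instead of starting another. A hedged asyncio sketch of that contract, not Synapse's implementation:

    import asyncio

    class ResponseCache:
        def __init__(self):
            self._pending = {}

        async def wrap(self, key, callback, *args, **kwargs):
            # The first caller for `key` starts the computation; later
            # callers share the same task and its result (or exception).
            if key not in self._pending:
                self._pending[key] = asyncio.ensure_future(callback(*args, **kwargs))
            task = self._pending[key]
            try:
                return await task
            finally:
                # The real cache keeps entries for a short timeout; here the
                # entry is dropped once the first waiter finishes.
                self._pending.pop(key, None)
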
@defer.inlineCallbacks
def _snapshot_all_rooms(
async def _snapshot_all_rooms(
self,
user_id=None,
pagin_config=None,
@@ -105,7 +101,7 @@ class InitialSyncHandler(BaseHandler):
if include_archived:
memberships.append(Membership.LEAVE)
room_list = yield self.store.get_rooms_for_user_where_membership_is(
room_list = await self.store.get_rooms_for_local_user_where_membership_is(
user_id=user_id, membership_list=memberships
)
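
_snapshot_all_rooms is one of many methods this diff migrates from @defer.inlineCallbacks generators to native coroutines. The two styles interoperate during the migration, since a coroutine can await a Deferred and can itself be wrapped back into one. A minimal sketch (store.lookup is a placeholder):

    from twisted.internet import defer

    @defer.inlineCallbacks
    def old_style(store):
        result = yield store.lookup()  # yields a Deferred
        return result

    async def new_style(store):
        result = await store.lookup()  # awaits the same Deferred
        return result

    def deferred_call_site(store):
        # Callers that still expect a Deferred wrap the coroutine:
        return defer.ensureDeferred(new_style(store))
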
@@ -113,33 +109,32 @@ class InitialSyncHandler(BaseHandler):
rooms_ret = []
now_token = yield self.hs.get_event_sources().get_current_token()
now_token = await self.hs.get_event_sources().get_current_token()
presence_stream = self.hs.get_event_sources().sources["presence"]
pagination_config = PaginationConfig(from_token=now_token)
presence, _ = yield presence_stream.get_pagination_rows(
presence, _ = await presence_stream.get_pagination_rows(
user, pagination_config.get_source_config("presence"), None
)
receipt_stream = self.hs.get_event_sources().sources["receipt"]
receipt, _ = yield receipt_stream.get_pagination_rows(
receipt, _ = await receipt_stream.get_pagination_rows(
user, pagination_config.get_source_config("receipt"), None
)
tags_by_room = yield self.store.get_tags_for_user(user_id)
tags_by_room = await self.store.get_tags_for_user(user_id)
account_data, account_data_by_room = yield self.store.get_account_data_for_user(
account_data, account_data_by_room = await self.store.get_account_data_for_user(
user_id
)
public_room_ids = yield self.store.get_public_room_ids()
public_room_ids = await self.store.get_public_room_ids()
limit = pagin_config.limit
if limit is None:
limit = 10
@defer.inlineCallbacks
def handle_room(event):
async def handle_room(event):
d = {
"room_id": event.room_id,
"membership": event.membership,
@@ -152,8 +147,8 @@ class InitialSyncHandler(BaseHandler):
time_now = self.clock.time_msec()
d["inviter"] = event.sender
invite_event = yield self.store.get_event(event.event_id)
d["invite"] = yield self._event_serializer.serialize_event(
invite_event = await self.store.get_event(event.event_id)
d["invite"] = await self._event_serializer.serialize_event(
invite_event, time_now, as_client_event
)
@@ -177,7 +172,7 @@ class InitialSyncHandler(BaseHandler):
lambda states: states[event.event_id]
)
(messages, token), current_state = yield make_deferred_yieldable(
(messages, token), current_state = await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(
@@ -191,7 +186,7 @@ class InitialSyncHandler(BaseHandler):
)
).addErrback(unwrapFirstError)
messages = yield filter_events_for_client(
messages = await filter_events_for_client(
self.storage, user_id, messages
)
@@ -201,7 +196,7 @@ class InitialSyncHandler(BaseHandler):
d["messages"] = {
"chunk": (
yield self._event_serializer.serialize_events(
await self._event_serializer.serialize_events(
messages, time_now=time_now, as_client_event=as_client_event
)
),
@@ -209,7 +204,7 @@ class InitialSyncHandler(BaseHandler):
"end": end_token.to_string(),
}
d["state"] = yield self._event_serializer.serialize_events(
d["state"] = await self._event_serializer.serialize_events(
current_state.values(),
time_now=time_now,
as_client_event=as_client_event,
@@ -232,7 +227,7 @@ class InitialSyncHandler(BaseHandler):
except Exception:
logger.exception("Failed to get snapshot")
yield concurrently_execute(handle_room, room_list, 10)
await concurrently_execute(handle_room, room_list, 10)
account_data_events = []
for account_data_type, content in account_data.items():
@@ -256,8 +251,7 @@ class InitialSyncHandler(BaseHandler):
return ret
@defer.inlineCallbacks
def room_initial_sync(self, requester, room_id, pagin_config=None):
async def room_initial_sync(self, requester, room_id, pagin_config=None):
"""Capture the a snapshot of a room. If user is currently a member of
the room this will be what is currently in the room. If the user left
the room this will be what was in the room when they left.
@@ -274,32 +268,32 @@ class InitialSyncHandler(BaseHandler):
A JSON serialisable dict with the snapshot of the room.
"""
blocked = yield self.store.is_room_blocked(room_id)
blocked = await self.store.is_room_blocked(room_id)
if blocked:
raise SynapseError(403, "This room has been blocked on this server")
user_id = requester.user.to_string()
membership, member_event_id = yield self._check_in_room_or_world_readable(
membership, member_event_id = await self._check_in_room_or_world_readable(
room_id, user_id
)
is_peeking = member_event_id is None
if membership == Membership.JOIN:
result = yield self._room_initial_sync_joined(
result = await self._room_initial_sync_joined(
user_id, room_id, pagin_config, membership, is_peeking
)
elif membership == Membership.LEAVE:
result = yield self._room_initial_sync_parted(
result = await self._room_initial_sync_parted(
user_id, room_id, pagin_config, membership, member_event_id, is_peeking
)
account_data_events = []
tags = yield self.store.get_tags_for_room(user_id, room_id)
tags = await self.store.get_tags_for_room(user_id, room_id)
if tags:
account_data_events.append({"type": "m.tag", "content": {"tags": tags}})
account_data = yield self.store.get_account_data_for_room(user_id, room_id)
account_data = await self.store.get_account_data_for_room(user_id, room_id)
for account_data_type, content in account_data.items():
account_data_events.append({"type": account_data_type, "content": content})
@@ -307,11 +301,10 @@ class InitialSyncHandler(BaseHandler):
return result
@defer.inlineCallbacks
def _room_initial_sync_parted(
async def _room_initial_sync_parted(
self, user_id, room_id, pagin_config, membership, member_event_id, is_peeking
):
room_state = yield self.state_store.get_state_for_events([member_event_id])
room_state = await self.state_store.get_state_for_events([member_event_id])
room_state = room_state[member_event_id]
@@ -319,13 +312,13 @@ class InitialSyncHandler(BaseHandler):
if limit is None:
limit = 10
stream_token = yield self.store.get_stream_token_for_event(member_event_id)
stream_token = await self.store.get_stream_token_for_event(member_event_id)
messages, token = yield self.store.get_recent_events_for_room(
messages, token = await self.store.get_recent_events_for_room(
room_id, limit=limit, end_token=stream_token
)
messages = yield filter_events_for_client(
messages = await filter_events_for_client(
self.storage, user_id, messages, is_peeking=is_peeking
)
@@ -339,13 +332,13 @@ class InitialSyncHandler(BaseHandler):
"room_id": room_id,
"messages": {
"chunk": (
yield self._event_serializer.serialize_events(messages, time_now)
await self._event_serializer.serialize_events(messages, time_now)
),
"start": start_token.to_string(),
"end": end_token.to_string(),
},
"state": (
yield self._event_serializer.serialize_events(
await self._event_serializer.serialize_events(
room_state.values(), time_now
)
),
@@ -353,19 +346,18 @@ class InitialSyncHandler(BaseHandler):
"receipts": [],
}
@defer.inlineCallbacks
def _room_initial_sync_joined(
async def _room_initial_sync_joined(
self, user_id, room_id, pagin_config, membership, is_peeking
):
current_state = yield self.state.get_current_state(room_id=room_id)
current_state = await self.state.get_current_state(room_id=room_id)
# TODO: These concurrently
time_now = self.clock.time_msec()
state = yield self._event_serializer.serialize_events(
state = await self._event_serializer.serialize_events(
current_state.values(), time_now
)
now_token = yield self.hs.get_event_sources().get_current_token()
now_token = await self.hs.get_event_sources().get_current_token()
limit = pagin_config.limit if pagin_config else None
if limit is None:
@@ -380,28 +372,26 @@ class InitialSyncHandler(BaseHandler):
presence_handler = self.hs.get_presence_handler()
@defer.inlineCallbacks
def get_presence():
async def get_presence():
# If presence is disabled, return an empty list
if not self.hs.config.use_presence:
return []
states = yield presence_handler.get_states(
states = await presence_handler.get_states(
[m.user_id for m in room_members], as_event=True
)
return states
@defer.inlineCallbacks
def get_receipts():
receipts = yield self.store.get_linearized_receipts_for_room(
async def get_receipts():
receipts = await self.store.get_linearized_receipts_for_room(
room_id, to_key=now_token.receipt_key
)
if not receipts:
receipts = []
return receipts
presence, receipts, (messages, token) = yield make_deferred_yieldable(
presence, receipts, (messages, token) = await make_deferred_yieldable(
defer.gatherResults(
[
run_in_background(get_presence),
@@ -417,7 +407,7 @@ class InitialSyncHandler(BaseHandler):
).addErrback(unwrapFirstError)
)
messages = yield filter_events_for_client(
messages = await filter_events_for_client(
self.storage, user_id, messages, is_peeking=is_peeking
)
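
Presence, receipts and the recent-message fetch are independent, so they run concurrently; Synapse does this with Twisted's gatherResults plus run_in_background, and the asyncio equivalent of the shape is:

    import asyncio

    async def fetch_room_snapshot(get_presence, get_receipts, get_messages):
        presence, receipts, (messages, token) = await asyncio.gather(
            get_presence(), get_receipts(), get_messages()
        )
        return presence, receipts, messages, token
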
@@ -430,7 +420,7 @@ class InitialSyncHandler(BaseHandler):
"room_id": room_id,
"messages": {
"chunk": (
yield self._event_serializer.serialize_events(messages, time_now)
await self._event_serializer.serialize_events(messages, time_now)
),
"start": start_token.to_string(),
"end": end_token.to_string(),
@@ -444,18 +434,17 @@ class InitialSyncHandler(BaseHandler):
return ret
@defer.inlineCallbacks
def _check_in_room_or_world_readable(self, room_id, user_id):
async def _check_in_room_or_world_readable(self, room_id, user_id):
try:
# check_user_was_in_room will return the most recent membership
# event for the user if:
# * The user is a non-guest user, and was ever in the room
# * The user is a guest user, and has joined the room
# else it will throw.
member_event = yield self.auth.check_user_was_in_room(room_id, user_id)
member_event = await self.auth.check_user_was_in_room(room_id, user_id)
return member_event.membership, member_event.event_id
except AuthError:
visibility = yield self.state_handler.get_current_state(
visibility = await self.state_handler.get_current_state(
room_id, EventTypes.RoomHistoryVisibility, ""
)
if (
