Compare commits

448 Commits

Author SHA1 Message Date
Richard van der Hoff
f03c877b32 sample log config
TODO: automate generation of this
2019-12-24 17:39:58 +00:00
dopple
b2db382841 Update reverse proxy file name (#6590)
s/reverse_proxy.rst/reverse_proxy.md/
2019-12-22 22:17:09 +00:00
Richard van der Hoff
138708608b Merge tag 'v1.7.2'
Synapse 1.7.2 (2019-12-20)
==========================

This release fixes some regressions introduced in Synapse 1.7.0 and 1.7.1.

Bugfixes
--------

- Fix a regression introduced in Synapse 1.7.1 which caused errors when attempting to backfill rooms over federation. ([\#6576](https://github.com/matrix-org/synapse/issues/6576))
- Fix a bug introduced in Synapse 1.7.0 which caused an error on startup when upgrading from versions before 1.3.0. ([\#6578](https://github.com/matrix-org/synapse/issues/6578))
2019-12-20 11:09:59 +00:00
Richard van der Hoff
29794c6bc8 1.7.2 2019-12-20 10:58:07 +00:00
Richard van der Hoff
4caab0e95e Backport fixes to sqlite upgrade from develop (#6578)
Only run prepare_database on connection for in-memory databases.

Fixes #6569.
2019-12-20 10:46:46 +00:00
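A minimal sketch of the guard this backport describes, assuming hypothetical helper names (`on_new_connection`, `prepare_database`) rather than Synapse's actual ones:

```python
import sqlite3


def prepare_database(conn: sqlite3.Connection) -> None:
    # Stand-in for the real schema setup/upgrade logic.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")


def on_new_connection(conn: sqlite3.Connection, db_path: str) -> None:
    # An in-memory SQLite database starts empty on every new connection,
    # so the schema must be rebuilt per connection. On-disk databases
    # are prepared once at startup, not here.
    if db_path == ":memory:":
        prepare_database(conn)
```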
Richard van der Hoff
03d3792f3c Fix exceptions when attempting to backfill (#6576)
Fixes #6575
2019-12-20 09:55:45 +00:00
Richard van der Hoff
4154637834 Merge tag 'v1.7.1'
Synapse 1.7.1 (2019-12-18)
==========================

This release includes several security fixes as well as a fix to a bug exposed by the security fixes. Administrators are encouraged to upgrade as soon as possible.

Security updates
----------------

- Fix a bug which could cause room events to be incorrectly authorized using events from a different room. ([\#6501](https://github.com/matrix-org/synapse/issues/6501), [\#6503](https://github.com/matrix-org/synapse/issues/6503), [\#6521](https://github.com/matrix-org/synapse/issues/6521), [\#6524](https://github.com/matrix-org/synapse/issues/6524), [\#6530](https://github.com/matrix-org/synapse/issues/6530), [\#6531](https://github.com/matrix-org/synapse/issues/6531))
- Fix a bug causing responses to the `/context` client endpoint to not use the pruned version of the event. ([\#6553](https://github.com/matrix-org/synapse/issues/6553))
- Fix a cause of state resets in room versions 2 onwards. ([\#6556](https://github.com/matrix-org/synapse/issues/6556), [\#6560](https://github.com/matrix-org/synapse/issues/6560))

Bugfixes
--------

- Fix a bug which could cause the federation server to incorrectly return errors when handling certain obscure event graphs. ([\#6526](https://github.com/matrix-org/synapse/issues/6526), [\#6527](https://github.com/matrix-org/synapse/issues/6527))
2019-12-18 10:14:49 +00:00
Richard van der Hoff
e156c86a7f too many parens 2019-12-18 09:40:03 +00:00
Richard van der Hoff
d656e91fc2 1.7.1 2019-12-18 09:38:08 +00:00
Erik Johnston
5029422530 Fix bug where we added duplicate event IDs as auth_events (#6560) 2019-12-17 15:06:08 +00:00
Erik Johnston
5ca2cfadc3 Add auth events as per spec. (#6556)
Previously we tried to be clever and filter out some unnecessary event
IDs to keep the auth chain small, but that had some annoying
interactions with state res v2 so we stop doing that for now.
2019-12-16 17:05:01 +00:00
Brendan Abolivier
6316530366 Merge pull request #6553 from matrix-org/babolivier/fix-context-filter
Use the filtered version of an event when responding to /context requests for that event
2019-12-16 16:24:10 +00:00
Brendan Abolivier
7f8d8fd2e2 Merge branch 'babolivier/fix-context-filter' of github.com:matrix-org/synapse into babolivier/fix-context-filter 2019-12-16 16:01:32 +00:00
Brendan Abolivier
a820069549 Incorporate review 2019-12-16 16:00:18 +00:00
Brendan Abolivier
284e690aa0 Update changelog.d/6553.bugfix
Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
2019-12-16 14:56:05 +00:00
Brendan Abolivier
a29420f9f4 Lint 2019-12-16 14:55:50 +00:00
Brendan Abolivier
596dd9914d Add test case 2019-12-16 14:53:21 +00:00
Richard van der Hoff
bbb75ff6ee Exclude rejected state events when calculating state at backwards extrems (#6527)
This fixes a weird bug where, if you were determined enough, you could end up with a rejected event forming part of the state at a backwards-extremity. Authing that backwards extrem would then lead to us trying to pull the rejected event from the db (with allow_rejected=False), which would fail with a 404.
2019-12-16 14:07:29 +00:00
Richard van der Hoff
ff773ff724 Persist auth/state events at backwards extremities when we fetch them (#6526)
The main point here is to make sure that the state returned by _get_state_in_room has been authed before we try to use it as state in the room.
2019-12-16 14:07:11 +00:00
Richard van der Hoff
83895316d4 sanity-checking for events used in state res (#6531)
When we perform state resolution, check that all of the events involved are in
the right room.
2019-12-16 13:53:53 +00:00
Richard van der Hoff
6577f2d887 Sanity-check room ids in event auth (#6530)
When we do an event auth operation, check that all of the events involved are
in the right room.
2019-12-16 13:47:34 +00:00
Richard van der Hoff
35bbe4ca79 Check the room_id of events when fetching room state/auth (#6524)
When we request the state/auth_events to populate a backwards extremity (on
backfill or in the case of missing events in a transaction push), we should
check that the returned events are in the right room rather than blindly using
them in the room state or auth chain.

Given that _get_events_from_store_or_dest takes a room_id, it seems clear that
it should be sanity-checking the room_id of the requested events, so let's do
it there.
2019-12-16 13:47:07 +00:00
Richard van der Hoff
20d5ba16e6 Add include_event_in_state to _get_state_for_room (#6521)
Make it return the state *after* the requested event, rather than the one
before it. This is a bit easier and requires fewer calls to
get_events_from_store_or_dest.
2019-12-16 13:26:12 +00:00
Richard van der Hoff
be294d6fde Move get_state methods into FederationHandler (#6503)
This is a non-functional refactor as a precursor to some other work.
2019-12-16 13:20:21 +00:00
Richard van der Hoff
4c7b1bb6cc Refactor get_events_from_store_or_dest to return a dict (#6501)
There was a bunch of unnecessary conversion back and forth between dict and
list going on here. We can simplify a bunch of the code.
2019-12-16 13:19:32 +00:00
Brendan Abolivier
8b9f5c21c3 Changelog 2019-12-16 12:19:35 +00:00
Brendan Abolivier
ac87ddb242 Update the documentation of the filtering function 2019-12-16 12:15:37 +00:00
Brendan Abolivier
487f1bb49d Use the filtered version of an event when responding to /context requests for that event
Sometimes the filtering function can return a pruned version of an event (on top of either the event itself or an empty list), if it thinks the user should be able to see that there's an event there but not the content of that event. Therefore, the previous logic of 'if filtered is empty then we can use the event we retrieved from the database' is flawed, and we should use the event returned by the filtering function.
2019-12-16 12:14:12 +00:00
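A hedged sketch of the corrected logic; `filter_evts` stands in for the real filtering function, which may return the event unchanged, a pruned copy, or nothing:

```python
def event_for_context_response(stored_event, filter_evts):
    filtered = filter_evts([stored_event])
    if not filtered:
        # The user may not see the event at all.
        return None
    # Always use what the filter returned: it may be a pruned version.
    # The old logic returned `stored_event` here, which leaked content
    # the filter had deliberately removed.
    return filtered[0]
```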
Erik Johnston
bee1982d17 Merge tag 'v1.7.0'
Synapse 1.7.0 (2019-12-13)
==========================

This release changes the default settings so that only local authenticated users can query the server's room directory. See the [upgrade notes](UPGRADE.rst#upgrading-to-v170) for details.

Support for SQLite versions before 3.11 is now deprecated. A future release will refuse to start if used with an SQLite version before 3.11.

Administrators are reminded that SQLite should not be used for production instances. Instructions for migrating to Postgres are available [here](docs/postgres.md). A future release of synapse will, by default, disable federation for servers using SQLite.

No significant changes since 1.7.0rc2.

Synapse 1.7.0rc2 (2019-12-11)
=============================

Bugfixes
--------

- Fix incorrect error message for invalid requests when setting user's avatar URL. ([\#6497](https://github.com/matrix-org/synapse/issues/6497))
- Fix support for SQLite 3.7. ([\#6499](https://github.com/matrix-org/synapse/issues/6499))
- Fix regression where sending email push would not work when using a pusher worker. ([\#6507](https://github.com/matrix-org/synapse/issues/6507), [\#6509](https://github.com/matrix-org/synapse/issues/6509))

Synapse 1.7.0rc1 (2019-12-09)
=============================

Features
--------

- Implement per-room message retention policies. ([\#5815](https://github.com/matrix-org/synapse/issues/5815), [\#6436](https://github.com/matrix-org/synapse/issues/6436))
- Add etag and count fields to key backup endpoints to help clients guess if there are new keys. ([\#5858](https://github.com/matrix-org/synapse/issues/5858))
- Add `/admin/v2/users` endpoint with pagination. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Require User-Interactive Authentication for `/account/3pid/add`, meaning the user's password will be required to add a third-party ID to their account. ([\#6119](https://github.com/matrix-org/synapse/issues/6119))
- Implement the `/_matrix/federation/unstable/net.atleastfornow/state/<context>` API as drafted in MSC2314. ([\#6176](https://github.com/matrix-org/synapse/issues/6176))
- Configure privacy-preserving settings by default for the room directory. ([\#6355](https://github.com/matrix-org/synapse/issues/6355))
- Add ephemeral messages support by partially implementing [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). ([\#6409](https://github.com/matrix-org/synapse/issues/6409))
- Add support for [MSC 2367](https://github.com/matrix-org/matrix-doc/pull/2367), which allows specifying a reason on all membership events. ([\#6434](https://github.com/matrix-org/synapse/issues/6434))

Bugfixes
--------

- Transfer non-standard power levels on room upgrade. ([\#6237](https://github.com/matrix-org/synapse/issues/6237))
- Fix error from the Pillow library when uploading RGBA images. ([\#6241](https://github.com/matrix-org/synapse/issues/6241))
- Correctly apply the event filter to the `state`, `events_before` and `events_after` fields in the response to `/context` requests. ([\#6329](https://github.com/matrix-org/synapse/issues/6329))
- Fix caching devices for remote users when using workers, so that we don't attempt to refetch (and potentially fail) each time a user requests devices. ([\#6332](https://github.com/matrix-org/synapse/issues/6332))
- Prevent account data syncs getting lost across TCP replication. ([\#6333](https://github.com/matrix-org/synapse/issues/6333))
- Fix bug: TypeError in `register_user()` while using LDAP auth module. ([\#6406](https://github.com/matrix-org/synapse/issues/6406))
- Fix an intermittent exception when handling read-receipts. ([\#6408](https://github.com/matrix-org/synapse/issues/6408))
- Fix broken guest registration when there are existing blocks of numeric user IDs. ([\#6420](https://github.com/matrix-org/synapse/issues/6420))
- Fix startup error when http proxy is defined. ([\#6421](https://github.com/matrix-org/synapse/issues/6421))
- Fix error when using synapse_port_db on a vanilla synapse db. ([\#6449](https://github.com/matrix-org/synapse/issues/6449))
- Fix uploading multiple cross signing signatures for the same user. ([\#6451](https://github.com/matrix-org/synapse/issues/6451))
- Fix bug which led to exceptions being thrown in a loop when a cross-signed device is deleted. ([\#6462](https://github.com/matrix-org/synapse/issues/6462))
- Fix `synapse_port_db` not exiting with a 0 code if something went wrong during the port process. ([\#6470](https://github.com/matrix-org/synapse/issues/6470))
- Improve sanity-checking when receiving events over federation. ([\#6472](https://github.com/matrix-org/synapse/issues/6472))
- Fix inaccurate per-block Prometheus metrics. ([\#6491](https://github.com/matrix-org/synapse/issues/6491))
- Fix small performance regression for sending invites. ([\#6493](https://github.com/matrix-org/synapse/issues/6493))
- Back out cross-signing code added in Synapse 1.5.0, which caused a performance regression. ([\#6494](https://github.com/matrix-org/synapse/issues/6494))

Improved Documentation
----------------------

- Update documentation and variables in user contributed systemd reference file. ([\#6369](https://github.com/matrix-org/synapse/issues/6369), [\#6490](https://github.com/matrix-org/synapse/issues/6490))
- Fix link in the user directory documentation. ([\#6388](https://github.com/matrix-org/synapse/issues/6388))
- Add build instructions to the docker readme. ([\#6390](https://github.com/matrix-org/synapse/issues/6390))
- Switch Ubuntu package install recommendation to use python3 packages in INSTALL.md. ([\#6443](https://github.com/matrix-org/synapse/issues/6443))
- Write some docs for the quarantine_media api. ([\#6458](https://github.com/matrix-org/synapse/issues/6458))
- Convert CONTRIBUTING.rst to markdown (among other small fixes). ([\#6461](https://github.com/matrix-org/synapse/issues/6461))

Deprecations and Removals
-------------------------

- Remove admin/v1/users_paginate endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Remove fallback for federation with old servers which lack the /federation/v1/state_ids API. ([\#6488](https://github.com/matrix-org/synapse/issues/6488))

Internal Changes
----------------

- Add benchmarks for structured logging and improve output performance. ([\#6266](https://github.com/matrix-org/synapse/issues/6266))
- Improve the performance of outputting structured logging. ([\#6322](https://github.com/matrix-org/synapse/issues/6322))
- Refactor some code in the event authentication path for clarity. ([\#6343](https://github.com/matrix-org/synapse/issues/6343), [\#6468](https://github.com/matrix-org/synapse/issues/6468), [\#6480](https://github.com/matrix-org/synapse/issues/6480))
- Clean up some unnecessary quotation marks around the codebase. ([\#6362](https://github.com/matrix-org/synapse/issues/6362))
- Complain on startup instead of 500'ing during runtime when `public_baseurl` isn't set when necessary. ([\#6379](https://github.com/matrix-org/synapse/issues/6379))
- Add a test scenario to make sure room history purges don't break `/messages` in the future. ([\#6392](https://github.com/matrix-org/synapse/issues/6392))
- Clarifications for the email configuration settings. ([\#6423](https://github.com/matrix-org/synapse/issues/6423))
- Add more tests to the blacklist when running in worker mode. ([\#6429](https://github.com/matrix-org/synapse/issues/6429))
- Refactor data store layer to support multiple databases in the future. ([\#6454](https://github.com/matrix-org/synapse/issues/6454), [\#6464](https://github.com/matrix-org/synapse/issues/6464), [\#6469](https://github.com/matrix-org/synapse/issues/6469), [\#6487](https://github.com/matrix-org/synapse/issues/6487))
- Port synapse.rest.client.v1 to async/await. ([\#6482](https://github.com/matrix-org/synapse/issues/6482))
- Port synapse.rest.client.v2_alpha to async/await. ([\#6483](https://github.com/matrix-org/synapse/issues/6483))
- Port SyncHandler to async/await. ([\#6484](https://github.com/matrix-org/synapse/issues/6484))
2019-12-13 10:55:33 +00:00
Erik Johnston
ba57a45644 More rewording of changelog. 2019-12-13 10:52:29 +00:00
Erik Johnston
bac1578013 Reword changelog 2019-12-13 10:46:31 +00:00
Erik Johnston
d046f367a4 Add deprecation notes 2019-12-13 10:36:35 +00:00
Erik Johnston
f5aeea9e89 1.7.0 2019-12-13 10:19:53 +00:00
Mark Nowiasz
58fdcbdfe7 Update workers.md to make media_repository work (again) (#6519) 2019-12-11 16:23:38 +00:00
Erik Johnston
d156912c4c 1.7.0rc2 2019-12-11 13:56:50 +00:00
Erik Johnston
72acca6a32 Back out change preventing setting null avatar URLs 2019-12-11 11:46:55 +00:00
Erik Johnston
d21577bdcb Merge branch 'erikj/fix_sqlite_7' of github.com:matrix-org/synapse into release-v1.7.0 2019-12-11 11:34:50 +00:00
Brendan Abolivier
ae49d29ef1 Fixup changelogs 2019-12-10 13:55:03 +00:00
Erik Johnston
a964f18887 Merge pull request #6509 from matrix-org/babolivier/fix-room-store-config
Give the server config to the RoomWorkerStore
2019-12-10 13:43:07 +00:00
Brendan Abolivier
451ec9b8b9 Changelog 2019-12-10 13:06:41 +00:00
Brendan Abolivier
3bd049bbb7 Give the server config to the RoomWorkerStore 2019-12-10 13:05:35 +00:00
Erik Johnston
332f3b36e5 Merge pull request #6507 from matrix-org/babolivier/pusher-room-store
Make the PusherSlaveStore inherit from the slave RoomStore
2019-12-10 12:57:37 +00:00
Erik Johnston
52346990c8 Drop unused index 2019-12-10 12:46:14 +00:00
Erik Johnston
31da85e467 Convert _censor_redactions to async since it awaits on coroutines 2019-12-10 12:46:00 +00:00
Brendan Abolivier
ec5fdd1333 Changelog 2019-12-10 12:34:33 +00:00
Brendan Abolivier
2ac78438d8 Make the PusherSlaveStore inherit from the slave RoomStore
So that it has access to the get_retention_policy_for_room function, which is required by filter_events_for_client.
2019-12-10 12:31:03 +00:00
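A simplified sketch of the inheritance change (class names trimmed down from the real slaved stores):

```python
class SlavedRoomStore:
    def get_retention_policy_for_room(self, room_id):
        """Needed by filter_events_for_client."""
        raise NotImplementedError


class SlavedEventStore:
    pass


class PusherSlaveStore(SlavedEventStore, SlavedRoomStore):
    # Mixing in the slaved RoomStore gives the pusher worker access to
    # get_retention_policy_for_room without round-tripping to the master.
    pass
```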
Erik Johnston
cc5f6eb608 Only start censor background job after indices are created 2019-12-10 11:39:31 +00:00
Neil Johnson
d95736a2bd Fix erroneous reference for new room directory defaults. 2019-12-10 10:05:33 +00:00
Erik Johnston
52fe9788bc Newsfile 2019-12-09 15:19:32 +00:00
Erik Johnston
4cade96616 Fix support for SQLite 3.7.
Partial indices support was added in 3.8.0, so we need to use the
background updates that handles this correctly.
2019-12-09 15:09:16 +00:00
Neil Johnson
0f3614f0f6 Merge tag 'v1.7.0rc1' into develop
Synapse 1.7.0rc1 (2019-12-09)
=============================

Features
--------

- Implement per-room message retention policies. ([\#5815](https://github.com/matrix-org/synapse/issues/5815), [\#6436](https://github.com/matrix-org/synapse/issues/6436))
- Add etag and count fields to key backup endpoints to help clients guess if there are new keys. ([\#5858](https://github.com/matrix-org/synapse/issues/5858))
- Add `/admin/v2/users` endpoint with pagination. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Require User-Interactive Authentication for `/account/3pid/add`, meaning the user's password will be required to add a third-party ID to their account. ([\#6119](https://github.com/matrix-org/synapse/issues/6119))
- Implement the `/_matrix/federation/unstable/net.atleastfornow/state/<context>` API as drafted in MSC2314. ([\#6176](https://github.com/matrix-org/synapse/issues/6176))
- Configure privacy-preserving settings by default for the room directory. ([\#6354](https://github.com/matrix-org/synapse/issues/6354))
- Add ephemeral messages support by partially implementing [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). ([\#6409](https://github.com/matrix-org/synapse/issues/6409))
- Add support for [MSC 2367](https://github.com/matrix-org/matrix-doc/pull/2367), which allows specifying a reason on all membership events. ([\#6434](https://github.com/matrix-org/synapse/issues/6434))

Bugfixes
--------

- Transfer non-standard power levels on room upgrade. ([\#6237](https://github.com/matrix-org/synapse/issues/6237))
- Fix error from the Pillow library when uploading RGBA images. ([\#6241](https://github.com/matrix-org/synapse/issues/6241))
- Correctly apply the event filter to the `state`, `events_before` and `events_after` fields in the response to `/context` requests. ([\#6329](https://github.com/matrix-org/synapse/issues/6329))
- Fix caching devices for remote users when using workers, so that we don't attempt to refetch (and potentially fail) each time a user requests devices. ([\#6332](https://github.com/matrix-org/synapse/issues/6332))
- Prevent account data syncs getting lost across TCP replication. ([\#6333](https://github.com/matrix-org/synapse/issues/6333))
- Fix bug: TypeError in `register_user()` while using LDAP auth module. ([\#6406](https://github.com/matrix-org/synapse/issues/6406))
- Fix an intermittent exception when handling read-receipts. ([\#6408](https://github.com/matrix-org/synapse/issues/6408))
- Fix broken guest registration when there are existing blocks of numeric user IDs. ([\#6420](https://github.com/matrix-org/synapse/issues/6420))
- Fix startup error when http proxy is defined. ([\#6421](https://github.com/matrix-org/synapse/issues/6421))
- Fix error when using synapse_port_db on a vanilla synapse db. ([\#6449](https://github.com/matrix-org/synapse/issues/6449))
- Fix uploading multiple cross signing signatures for the same user. ([\#6451](https://github.com/matrix-org/synapse/issues/6451))
- Fix bug which led to exceptions being thrown in a loop when a cross-signed device is deleted. ([\#6462](https://github.com/matrix-org/synapse/issues/6462))
- Fix `synapse_port_db` not exiting with a 0 code if something went wrong during the port process. ([\#6470](https://github.com/matrix-org/synapse/issues/6470))
- Improve sanity-checking when receiving events over federation. ([\#6472](https://github.com/matrix-org/synapse/issues/6472))
- Fix inaccurate per-block Prometheus metrics. ([\#6491](https://github.com/matrix-org/synapse/issues/6491))
- Fix small performance regression for sending invites. ([\#6493](https://github.com/matrix-org/synapse/issues/6493))
- Back out cross-signing code added in Synapse 1.5.0, which caused a performance regression. ([\#6494](https://github.com/matrix-org/synapse/issues/6494))

Improved Documentation
----------------------

- Update documentation and variables in user contributed systemd reference file. ([\#6369](https://github.com/matrix-org/synapse/issues/6369), [\#6490](https://github.com/matrix-org/synapse/issues/6490))
- Fix link in the user directory documentation. ([\#6388](https://github.com/matrix-org/synapse/issues/6388))
- Add build instructions to the docker readme. ([\#6390](https://github.com/matrix-org/synapse/issues/6390))
- Switch Ubuntu package install recommendation to use python3 packages in INSTALL.md. ([\#6443](https://github.com/matrix-org/synapse/issues/6443))
- Write some docs for the quarantine_media api. ([\#6458](https://github.com/matrix-org/synapse/issues/6458))
- Convert CONTRIBUTING.rst to markdown (among other small fixes). ([\#6461](https://github.com/matrix-org/synapse/issues/6461))

Deprecations and Removals
-------------------------

- Remove admin/v1/users_paginate endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Remove fallback for federation with old servers which lack the /federation/v1/state_ids API. ([\#6488](https://github.com/matrix-org/synapse/issues/6488))

Internal Changes
----------------

- Add benchmarks for structured logging and improve output performance. ([\#6266](https://github.com/matrix-org/synapse/issues/6266))
- Improve the performance of outputting structured logging. ([\#6322](https://github.com/matrix-org/synapse/issues/6322))
- Refactor some code in the event authentication path for clarity. ([\#6343](https://github.com/matrix-org/synapse/issues/6343), [\#6468](https://github.com/matrix-org/synapse/issues/6468), [\#6480](https://github.com/matrix-org/synapse/issues/6480))
- Clean up some unnecessary quotation marks around the codebase. ([\#6362](https://github.com/matrix-org/synapse/issues/6362))
- Complain on startup instead of 500'ing during runtime when `public_baseurl` isn't set when necessary. ([\#6379](https://github.com/matrix-org/synapse/issues/6379))
- Add a test scenario to make sure room history purges don't break `/messages` in the future. ([\#6392](https://github.com/matrix-org/synapse/issues/6392))
- Clarifications for the email configuration settings. ([\#6423](https://github.com/matrix-org/synapse/issues/6423))
- Add more tests to the blacklist when running in worker mode. ([\#6429](https://github.com/matrix-org/synapse/issues/6429))
- Refactor data store layer to support multiple databases in the future. ([\#6454](https://github.com/matrix-org/synapse/issues/6454), [\#6464](https://github.com/matrix-org/synapse/issues/6464), [\#6469](https://github.com/matrix-org/synapse/issues/6469), [\#6487](https://github.com/matrix-org/synapse/issues/6487))
- Port synapse.rest.client.v1 to async/await. ([\#6482](https://github.com/matrix-org/synapse/issues/6482))
- Port synapse.rest.client.v2_alpha to async/await. ([\#6483](https://github.com/matrix-org/synapse/issues/6483))
- Port SyncHandler to async/await. ([\#6484](https://github.com/matrix-org/synapse/issues/6484))
2019-12-09 15:04:08 +00:00
Neil Johnson
21aa0a458f Update CHANGES.md 2019-12-09 14:57:09 +00:00
Andrew Morgan
5e8abe9013 Better errors regarding changing avatar_url (#6497) 2019-12-09 14:54:33 +00:00
Neil Johnson
24da1ffcb6 1.7.0rc1 2019-12-09 14:46:20 +00:00
Clifford Garwood II
96d35f1028 Systemd documentation (#6490) 2019-12-09 14:42:29 +00:00
Neil Johnson
adfdd82b21 Back out perf regression from get_cross_signing_keys_from_cache. (#6494)
Back out cross-signing code added in Synapse 1.5.0, which caused a performance regression.
2019-12-09 13:59:27 +00:00
Erik Johnston
30e9adf32f Merge pull request #6487 from matrix-org/erikj/pass_in_db
Pass in Database object to data stores.
2019-12-09 13:53:21 +00:00
Erik Johnston
e1544b0af8 Merge pull request #6493 from matrix-org/erikj/invite_state_config
Pull out room_invite_state_types config option once.
2019-12-09 12:23:21 +00:00
Erik Johnston
65b37f6729 Fix comment
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-12-09 11:56:43 +00:00
Richard van der Hoff
18660a34d8 Fix inaccurate per-block metrics (#6491)
`Measure` incorrectly assumed that it was the only thing being done by the parent `LoggingContext`. For instance, during a "renew group attestations" operation, hundreds of `outbound_request` calls could take place in parallel, all using the same `LoggingContext`. This would mean that any resources used during *any* of those calls would be reported against *all* of them, producing wildly inaccurate results.

Instead, we now give each `Measure` block its own `LoggingContext` (using the parent `LoggingContext` mechanism to ensure that the log lines look correct and that the metrics are ultimately propagated to the top level for reporting against requests/background tasks).
2019-12-09 11:55:30 +00:00
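A toy illustration of the approach, assuming a much-simplified `LoggingContext` (this is not Synapse's implementation):

```python
import time
from contextlib import contextmanager


class LoggingContext:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.cpu = name, parent, 0.0

    def report(self, elapsed):
        self.cpu += elapsed
        if self.parent is not None:
            # Propagate usage up so per-request totals stay correct.
            self.parent.report(elapsed)


@contextmanager
def measure(parent, block_name):
    # Each measured block gets its *own* child context, so concurrent
    # blocks sharing a parent no longer report each other's resources.
    ctx = LoggingContext(block_name, parent=parent)
    start = time.perf_counter()
    try:
        yield ctx
    finally:
        ctx.report(time.perf_counter() - start)
```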
Erik Johnston
4a161a29ac Newsfile 2019-12-09 11:54:43 +00:00
Erik Johnston
8ad8bcbed0 Pull out room_invite_state_types config option once.
Pulling things out of config is currently surprisingly expensive.
2019-12-09 11:52:19 +00:00
Richard van der Hoff
e519489fc4 Remove fallback for missing /federation/v1/state_ids API (#6488)
This API was added way back in 0.17.0; the code here is annoying to maintain
and entirely redundant.
2019-12-09 11:37:26 +00:00
Erik Johnston
a9b393340f Merge pull request #6484 from matrix-org/erikj/port_sync_handler
Port SyncHandler to async/await
2019-12-09 11:32:44 +00:00
Erik Johnston
71ee22c0ba Fix port db script 2019-12-06 16:41:48 +00:00
Erik Johnston
5e35f69ac3 Newsfile 2019-12-06 16:13:41 +00:00
Erik Johnston
852f80d8a6 Fixup tests 2019-12-06 16:02:50 +00:00
Erik Johnston
75f87450d8 Move start up DB checks to main data store. 2019-12-06 16:02:21 +00:00
Erik Johnston
d537be1ebd Pass Database into the data store 2019-12-06 15:49:44 +00:00
Erik Johnston
d64bb32a73 Move are_all_users_on_domain checks to main data store. 2019-12-06 13:43:40 +00:00
Erik Johnston
9a4fb457cf Change DataStores to accept 'database' param. 2019-12-06 13:30:06 +00:00
Erik Johnston
f3ea2f5a08 Merge pull request #6469 from matrix-org/erikj/make_database_class
Create a Database class and move methods out of SQLBaseStore
2019-12-06 11:56:59 +00:00
Erik Johnston
2ace775d88 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/make_database_class 2019-12-06 11:33:34 +00:00
Erik Johnston
e216ec381a Remove unused var 2019-12-06 11:16:37 +00:00
Erik Johnston
b3a4e35ca8 Fixup functions to consistently return deferreds 2019-12-06 10:40:05 +00:00
Manuel Stahl
649b6bc088 Replace /admin/v1/users_paginate endpoint with /admin/v2/users (#5925) 2019-12-05 18:12:23 +00:00
Erik Johnston
b2ee65ea8c Newsfile 2019-12-05 17:59:37 +00:00
Erik Johnston
8437e2383e Port SyncHandler to async/await 2019-12-05 17:58:25 +00:00
Erik Johnston
d085a8a0a5 Merge pull request #6483 from matrix-org/erikj/port_rest_v2
Port rest client v2_alpha to async/await
2019-12-05 17:17:57 +00:00
Erik Johnston
edb8b6af9a Newsfile 2019-12-05 16:56:23 +00:00
Erik Johnston
9c41ba4c5f Port rest.client.v2 2019-12-05 16:56:23 +00:00
Erik Johnston
af5d0ebc72 Merge pull request #6482 from matrix-org/erikj/port_rest_v1
Port rest/v1 to async/await
2019-12-05 16:40:06 +00:00
Erik Johnston
410bfd035a Newsfile 2019-12-05 15:57:28 +00:00
Erik Johnston
4ca3ef10b9 Fixup tests 2019-12-05 15:57:28 +00:00
Erik Johnston
1a0997bbd5 Port rest/v1 to async/await 2019-12-05 15:57:28 +00:00
Andrew Morgan
09623446d4 Merge branch 'master' of github.com:matrix-org/synapse into develop 2019-12-05 15:46:39 +00:00
Andrew Morgan
ff119879d6 Revert "Modify systemd unit file reference to align with installation instruction (#6369)"
This reverts commit dc8747895e.
2019-12-05 15:46:19 +00:00
Andrew Morgan
e2cce15af1 Remove #6369 changelog 2019-12-05 15:44:02 +00:00
Clifford Garwood II
dc8747895e Modify systemd unit file reference to align with installation instruction (#6369)
Signed-off-by: Clifford Garwood II cliff@cigii.com
2019-12-05 15:43:32 +00:00
Richard van der Hoff
63d6ad1064 Stronger typing in the federation handler (#6480)
replace the event_info dict with an attrs thing
2019-12-05 15:02:35 +00:00
Richard van der Hoff
e1f4c83f41 Sanity-check the rooms of auth events before pulling them in. (#6472) 2019-12-05 14:14:45 +00:00
Clifford Garwood II
ba7af15d4e Modify systemd unit file reference to align with installation instruction (#6369)
Signed-off-by: Clifford Garwood II cliff@cigii.com
2019-12-05 13:13:47 +00:00
Erik Johnston
8b77fc6506 Fix DB scripts 2019-12-05 11:20:49 +00:00
Erik Johnston
6dcd6c40a0 Newsfile 2019-12-05 11:11:30 +00:00
Erik Johnston
4a33a6dd19 Move background update handling out of store 2019-12-05 11:11:26 +00:00
Erik Johnston
8863624f78 Comments 2019-12-05 10:46:37 +00:00
Erik Johnston
756d4942f5 Move DB pool and helper functions into dedicated Database class 2019-12-05 10:46:37 +00:00
Erik Johnston
ddbbfc9512 Merge pull request #6464 from matrix-org/erikj/make_public_sql_base
Clean up SQLBaseStore private function usage
2019-12-05 10:43:49 +00:00
Brendan Abolivier
fe799f353d Merge pull request #6470 from matrix-org/babolivier/port_db_ci_failure
Make synapse_port_db exit with a non-0 code if something failed
2019-12-04 18:20:36 +00:00
Brendan Abolivier
f8421a1404 Fix background updates for synapse_port_db 2019-12-04 17:57:35 +00:00
Brendan Abolivier
02c1f36ccd Changelog 2019-12-04 17:49:28 +00:00
Brendan Abolivier
6cd11109db Make synapse_port_db exit with a non-0 code if something failed 2019-12-04 17:48:23 +00:00
Richard van der Hoff
e203874caa get rid of (most of) have_events from _update_auth_events_and_context_for_auth (#6468)
have_events was a map from event_id to rejection reason (or None) for events
which are in our local database. It was used as filter on the list of
event_ids being passed into get_events_as_list. However, since
get_events_as_list will ignore any event_ids that are unknown or rejected, we
can equivalently just leave it to get_events_as_list to do the filtering.

That means that we don't have to keep `have_events` up-to-date, and can use
`have_seen_events` instead of `get_seen_events_with_rejection` in the one place
we do need it.
2019-12-04 17:27:32 +00:00
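The simplification, sketched with illustrative names (the `get_events_as_list` signature here is an assumption):

```python
async def fetch_auth_events(store, auth_event_ids):
    # Before: the caller pre-filtered auth_event_ids against a
    # `have_events` map of known/rejected events. Since the fetcher
    # already ignores unknown and rejected IDs, that filter is redundant.
    events = await store.get_events_as_list(auth_event_ids, allow_rejected=False)
    return {e.event_id: e for e in events}
```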
Erik Johnston
685fae1ba5 Newsfile 2019-12-04 16:25:34 +00:00
Erik Johnston
ee86abb2d6 Remove underscore from SQLBaseStore functions 2019-12-04 16:23:43 +00:00
Erik Johnston
c2f525a525 Don't call SQLBaseStore methods from outside stores 2019-12-04 16:23:43 +00:00
Erik Johnston
3eb15c01d9 Merge pull request #6454 from matrix-org/erikj/clean_base_Store
Move things out of SQLBaseStore
2019-12-04 16:23:19 +00:00
Erik Johnston
9186c105a0 Revert "Move get_user_count_txn out of base store"
This reverts commit 00f0d67566.

It's going to get removed soon, so let's not make merge conflicts.
2019-12-04 15:46:19 +00:00
Erik Johnston
a7f20500ff _CURRENT_STATE_CACHE_NAME is public 2019-12-04 15:45:42 +00:00
Erik Johnston
b9449012db Merge pull request #6441 from syamgk/fix-parameter-mismatch
Fix issue #6406 parameter mismatch
2019-12-04 15:42:33 +00:00
Brendan Abolivier
c530f9af4d Merge pull request #6329 from matrix-org/babolivier/context_filters
Filter state, events_before and events_after in /context requests
2019-12-04 15:24:16 +00:00
Erik Johnston
00f0d67566 Move get_user_count_txn out of base store 2019-12-04 15:21:14 +00:00
Erik Johnston
a785a2febe Newsfile 2019-12-04 15:21:14 +00:00
Erik Johnston
1056d6885a Move cache invalidation to main data store 2019-12-04 15:21:14 +00:00
Erik Johnston
6b2867096b Move event fetch vars to EventWorkStore 2019-12-04 15:21:14 +00:00
Erik Johnston
ddd48b6851 Move account validity bg updates to registration store 2019-12-04 15:21:14 +00:00
Brendan Abolivier
65c6aee621 Un-remove room purge test 2019-12-04 14:37:04 +00:00
Brendan Abolivier
aeda1b3b94 Merge branch 'babolivier/context_filters' of github.com:matrix-org/synapse into babolivier/context_filters 2019-12-04 14:29:03 +00:00
Brendan Abolivier
9dc84b7989 Merge branch 'develop' into babolivier/context_filters 2019-12-04 14:23:44 +00:00
Brendan Abolivier
08a436ecb2 Incorporate review 2019-12-04 14:18:46 +00:00
Andrew Morgan
c1ae453932 Markdownification and other fixes to CONTRIBUTING (#6461) 2019-12-04 12:21:48 +00:00
Andrew Morgan
85901939c1 Fix error when using synapse_port_db on a vanilla synapse db (#6449) 2019-12-04 12:17:47 +00:00
Andrew Morgan
768b84409b Update changelog.d/6449.bugfix
Co-Authored-By: Erik Johnston <erik@matrix.org>
2019-12-04 10:45:56 +00:00
Erik Johnston
2aa8943809 Merge pull request #6451 from matrix-org/uhoreg/cross_signing_signatures_index
make cross signing signature index non-unique
2019-12-04 09:57:09 +00:00
Neil Johnson
cb0aeb147e privacy by default for room dir (#6355)
Ensure that the default settings for the room directory hide it from public view.
2019-12-04 09:46:16 +00:00
Richard van der Hoff
0120875462 Fix exception when a cross-signed device is deleted (#6462)
(hopefully)

... and deobfuscate the relevant bit of code.
2019-12-04 07:38:35 +00:00
Syam G Krishnan
b62c9db8d7 Add changelog file
Signed-off-by: Syam G Krishnan <syamgk@gmail.com>
2019-12-04 13:03:54 +05:30
Syam G Krishnan
ce1c975ebc Issue #6406 Fix parameter mismatch
Signed-off-by: Syam G Krishnan <syamgk01@gmail.com>
2019-12-04 13:03:46 +05:30
Hubert Chathi
418813b205 apply changes from review 2019-12-03 15:27:00 -05:00
Brendan Abolivier
54dd5dc12b Add ephemeral messages support (MSC2228) (#6409)
Implement part of [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). The parts that differ are:

* the feature is hidden behind a configuration flag (`enable_ephemeral_messages`)
* self-destruction doesn't happen for state events
* only implement support for the `m.self_destruct_after` field (not the `m.self_destruct` one)
* doesn't send synthetic redactions to clients, because for this specific case we consider clients able to destroy an event themselves; instead we just censor it (by pruning its JSON) in the database
2019-12-03 19:19:45 +00:00
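A hedged sketch of the behaviour described above; the helper names and event shape are illustrative, not Synapse's actual API:

```python
import time


def maybe_schedule_self_destruct(event: dict, config: dict, schedule_censor) -> None:
    if not config.get("enable_ephemeral_messages"):
        return  # feature is behind a configuration flag
    if event.get("state_key") is not None:
        return  # self-destruction doesn't happen for state events
    expiry_ts = event.get("content", {}).get("m.self_destruct_after")
    if isinstance(expiry_ts, int):
        # Censor (prune the stored JSON) once expiry is reached; no
        # synthetic redaction is sent to clients.
        delay = max(0.0, expiry_ts / 1000 - time.time())
        schedule_censor(event["event_id"], delay)
```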
Richard van der Hoff
620f98b65b write some docs for the quarantine_media api (#6458) 2019-12-03 18:20:39 +00:00
Amber Brown
fdec84aa42 Add benchmarks for structured logging performance (#6266) 2019-12-03 20:21:25 +11:00
Hubert Chathi
e57567f994 add changelog 2019-12-02 17:10:57 -05:00
Hubert Chathi
2252680a98 make cross signing signature index non-unique 2019-12-02 17:01:59 -05:00
Andrew Morgan
8ee62e4b98 Add changelog 2019-12-02 18:43:25 +00:00
Andrew Morgan
57f09e01f5 Fix error when using synapse_port_db on a vanilla synapse db 2019-12-02 18:23:41 +00:00
Andrew Morgan
72078e4be5 Transfer power level state events on room upgrade (#6237) 2019-12-02 15:11:32 +00:00
Andrew Morgan
0ad75fd98e Use python3 packages for Ubuntu (#6443) 2019-12-02 15:09:57 +00:00
Filip Štědronský
81731c6e75 Fix: Pillow error when uploading RGBA image (#3325) (#6241)
Signed-Off-By: Filip Štědronský <g@regnarg.cz>
2019-12-02 12:12:55 +00:00
Andrew Morgan
23ea572125 Add User-Interactive Auth to /account/3pid/add (#6119) 2019-11-29 13:51:14 +00:00
Erik Johnston
1c3a61529f Merge pull request #6434 from matrix-org/erikj/msc2367_membership_reasons
Implement MSC 2367 - Membership Reasons
2019-11-29 13:30:36 +00:00
Brendan Abolivier
6d8576c4ce Merge pull request #6436 from matrix-org/babolivier/fix-state-retrieval
Discard retention policies when retrieving state
2019-11-29 11:25:11 +00:00
Brendan Abolivier
78ec11c085 Lint 2019-11-28 20:35:22 +00:00
Brendan Abolivier
5ee2beeddb Changelog 2019-11-28 19:32:49 +00:00
Brendan Abolivier
708cef88cf Discard retention policies when retrieving state
Purge jobs don't delete the latest event in a room, in order to keep the forward extremity and not break the room. On the other hand, get_state_events, when given an at_token argument, calls filter_events_for_client to know whether the user can see the event that matches that (sync) token. That function uses the retention policies of the events it's given to filter out those that are too old from a client's view.

Some clients, such as Riot, when loading a room, request the list of members for the latest sync token it knows about, and get confused to the point of refusing to send any message if the server tells it that it can't get that information. This can happen very easily with the message retention feature turned on and a room with low activity so that the last event sent becomes too old according to the room's retention policy.

An easy and clean fix for that issue is to discard the room's retention policies when retrieving state.
2019-11-28 19:26:26 +00:00
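A sketch of the fix, under the assumption that visibility filtering takes a switch for retention (the keyword name is a guess):

```python
async def get_state_events_at(filter_events_for_client, user_id, events):
    # When serving state (e.g. /members at a sync token), don't let a
    # room's retention policy hide its only remaining event.
    return await filter_events_for_client(
        user_id,
        events,
        apply_retention_policies=False,  # assumed flag name
    )
```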
Richard van der Hoff
7baeea9f37 blacklist more tests 2019-11-28 14:55:19 +00:00
Erik Johnston
19ba7c142e Newsfile 2019-11-28 13:59:32 +00:00
Andrew Morgan
96562131a4 Merge branch 'master' into develop 2019-11-28 12:08:02 +00:00
Erik Johnston
8c9a713f8d Add tests 2019-11-28 11:32:06 +00:00
Erik Johnston
2173785f0d Propagate reason in remotely rejected invites 2019-11-28 11:31:56 +00:00
Andrew Morgan
e7777f3668 1.6.1 2019-11-28 11:29:50 +00:00
Andrew Morgan
2030193e55 Remove local threepids on account deactivation (#6426) 2019-11-28 11:07:45 +00:00
Erik Johnston
69d8fb83c6 MSC2367 Allow reason field on all member events 2019-11-28 11:02:04 +00:00
Andrew Morgan
a9c44d4008 Remove local threepids on account deactivation (#6426) 2019-11-28 10:40:42 +00:00
Richard van der Hoff
c48ea98007 Clarifications for the email configuration settings. (#6423)
Cf #6422
2019-11-28 09:29:18 +00:00
Amber Brown
0f87b912ab Implementation of MSC2314 (#6176) 2019-11-28 08:54:07 +11:00
Hubert Chathi
0d27aba900 add etag and count to key backup endpoints (#5858) 2019-11-27 16:14:44 -05:00
Richard van der Hoff
6f4a63df00 Add more tests to the worker blacklist (#6429) 2019-11-27 18:18:33 +00:00
Brendan Abolivier
d31f69afa0 Merge pull request #6358 from matrix-org/babolivier/message_retention
Implement message retention policies (MSC1763)
2019-11-27 15:04:38 +00:00
Richard van der Hoff
9b9ee75666 Fix startup error when http proxy is defined. (#6421)
Guess I only tested this on python 2 :/

Fixes #6419.
2019-11-27 10:29:10 +00:00
Richard van der Hoff
70c0da4d82 clean up buildkite output 2019-11-26 19:00:24 +00:00
Andrew Morgan
ce578031f4 Remove assertion and provide a clear warning on startup for missing public_baseurl (#6379) 2019-11-26 18:42:27 +00:00
Richard van der Hoff
651d930f16 Merge pull request #6343 from matrix-org/rav/event_auth/4
Refactor _update_auth_events_and_context_for_auth
2019-11-26 18:15:03 +00:00
Richard van der Hoff
ef1a85e773 Fix startup error when http proxy is defined. (#6421)
Guess I only tested this on python 2 :/

Fixes #6419.
2019-11-26 18:10:50 +00:00
Brendan Abolivier
9e937c28ee Merge branch 'develop' into babolivier/message_retention 2019-11-26 17:53:57 +00:00
Brendan Abolivier
f0ef970824 Don't restrict the tests to v1 rooms 2019-11-26 17:49:12 +00:00
Erik Johnston
f085894cd1 Merge pull request #6420 from matrix-org/erikj/fix_find_next_generated_user_id_localpart
Fix guest registration
2019-11-26 17:04:15 +00:00
Erik Johnston
f8f14ba466 Don't construct a set 2019-11-26 16:06:41 +00:00
Erik Johnston
ba110a2030 Newsfile 2019-11-26 15:54:48 +00:00
Erik Johnston
8bb7b15894 Fix find_next_generated_user_id_localpart 2019-11-26 15:54:48 +00:00
Andrew Morgan
9fb350af65 Merge branch 'master' into develop 2019-11-26 14:15:30 +00:00
Andrew Morgan
a8175d0f96 Prevent account_data content from being sent over TCP replication (#6333) 2019-11-26 13:58:39 +00:00
Andrew Morgan
b98971e8a4 1.6.0 2019-11-26 13:28:40 +00:00
Erik Johnston
65d54c5e8c Fix phone home stats (#6418)
Fix phone home stats
2019-11-26 13:28:40 +00:00
Erik Johnston
f9f1c8acbb Merge pull request #6332 from matrix-org/erikj/query_devices_fix
Fix caching devices for remote servers in worker.
2019-11-26 12:56:05 +00:00
Richard van der Hoff
4d394d6415 remove confusing fixme 2019-11-26 12:32:37 +00:00
Erik Johnston
35f9165e96 Fixup docs 2019-11-26 12:04:48 +00:00
Brendan Abolivier
4c1b799e1b Merge branch 'develop' into babolivier/context_filters 2019-11-26 10:53:48 +00:00
Richard van der Hoff
c01d543584 Make sure that we close cursors before returning from a query (#6408)
There are lots of words in the comment as to why this is a good idea.

Fixes #6403.
2019-11-25 21:03:17 +00:00
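The pattern, as a self-contained sketch: close the cursor on every path before the query helper returns, rather than leaving it to garbage collection:

```python
import sqlite3
from contextlib import closing


def run_query(conn: sqlite3.Connection, sql: str, args=()):
    # `closing` guarantees cur.close() runs even if execute/fetch raises.
    with closing(conn.cursor()) as cur:
        cur.execute(sql, args)
        return cur.fetchall()
```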
Richard van der Hoff
07929bd62f Merge tag 'v1.6.0rc2' into develop
Synapse 1.6.0rc2 (2019-11-25)
=============================

Bugfixes
--------

- Fix a bug which could cause the background database update handler for event labels to get stuck in a loop raising exceptions. ([\#6407](https://github.com/matrix-org/synapse/issues/6407))
2019-11-25 17:51:39 +00:00
Amber Brown
9eebd46048 Improve the performance of structured logging (#6322) 2019-11-26 03:45:50 +11:00
Richard van der Hoff
f9c9e1f076 1.6.0rc2 2019-11-25 13:28:12 +00:00
Richard van der Hoff
b7367c339d Fix exceptions from background database update for event labels. (#6407)
Add some exception handling here so that events whose json cannot be parsed are
ignored rather than getting us stuck in a loop.

Fixes #6404.
2019-11-25 13:26:59 +00:00
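A sketch of the shape of the fix (table layout and helper are illustrative): parse failures are logged and counted as done, so the updater keeps making progress instead of retrying the same row forever:

```python
import json
import logging

logger = logging.getLogger(__name__)


def process_batch(rows):
    handled = 0
    for event_id, event_json in rows:
        try:
            event = json.loads(event_json)
        except Exception:
            logger.warning("Unable to parse event %s, skipping", event_id)
            handled += 1  # count it anyway so the update advances
            continue
        handle_event_labels(event_id, event)
        handled += 1
    return handled


def handle_event_labels(event_id, event):
    pass  # stand-in for the real per-event work
```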
Brendan Abolivier
78cfc05fc4 Merge pull request #6392 from matrix-org/babolivier/fix-1623
Test if a purge can make /messages return 500 responses
2019-11-25 12:59:36 +00:00
Andrew Morgan
265c0bd2fe Add working build command for docker image (#6390)
* Add working build command for docker image

* Add changelog
2019-11-23 06:54:05 +11:00
Aaron Raimist
24cc31ee96 Fix link to user_dir_populate.sql in the user directory docs (#6388) 2019-11-21 17:38:14 +00:00
Andrew Morgan
3916e1b97a Clean up newline quote marks around the codebase (#6362) 2019-11-21 12:00:14 +00:00
Matthew Hodgson
9cc168e42e update macOS installation instructions 2019-11-20 18:44:45 +00:00
Brendan Abolivier
b2f8c21a9b Format changelog 2019-11-20 16:10:27 +00:00
Brendan Abolivier
49243c55a4 Update changelog since this isn't going to be featured in 1.6.0 2019-11-20 16:09:11 +00:00
Brendan Abolivier
e2a20326e8 Lint 2019-11-20 15:08:47 +00:00
Brendan Abolivier
486be06f48 Changelog 2019-11-20 15:08:03 +00:00
Andrew Morgan
41e4566682 1.6.0rc1 2019-11-20 14:12:42 +00:00
Andrew Morgan
234f55f3c4 Docker: Change permissions for data dir before attempting to write to it (#6389) 2019-11-20 13:32:31 +00:00
Brendan Abolivier
6356f2088f Test if a purge can make /messages return 500 responses 2019-11-20 12:09:06 +00:00
Manuel Stahl
4f5ca455bf Move admin endpoints into separate files (#6308) 2019-11-20 11:49:11 +00:00
Brendan Abolivier
83446a18fb Merge pull request #6335 from matrix-org/erikj/rc_login_cleanups
Only do `rc_login` ratelimiting on successful login.
2019-11-20 09:52:38 +00:00
Brendan Abolivier
271c322d08 Lint 2019-11-20 09:29:48 +00:00
Brendan Abolivier
cdd3cb870d Fix worker mode 2019-11-19 14:40:21 +00:00
Brendan Abolivier
a6fc6754f8 Fix 3PID invite exchange 2019-11-19 14:07:39 +00:00
Brendan Abolivier
97b863fe32 Lint again 2019-11-19 13:33:58 +00:00
Brendan Abolivier
bf9a11c54d Lint again 2019-11-19 13:30:04 +00:00
Brendan Abolivier
7c24d0f443 Lint 2019-11-19 13:22:37 +00:00
Erik Johnston
c7376cdfe3 Apply suggestions from code review
Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
2019-11-18 17:10:16 +00:00
Richard van der Hoff
870c00e278 Merge remote-tracking branch 'origin/develop' into rav/event_auth/4 2019-11-18 12:05:36 +00:00
Andrew Morgan
657d614f6a Replace UPDATE with UPSERT on device_max_stream_id table (#6363) 2019-11-15 14:02:34 +00:00
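The general shape of the change, with an illustrative single-row schema (not the real table definition): a bare UPDATE is a no-op when the row is missing, while an UPSERT inserts it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE device_max_stream_id (lock TEXT PRIMARY KEY, stream_id INTEGER)"
)


def set_max_stream_id(stream_id: int) -> None:
    # Works whether or not the row exists yet (SQLite >= 3.24 / Postgres).
    conn.execute(
        """
        INSERT INTO device_max_stream_id (lock, stream_id) VALUES ('X', ?)
        ON CONFLICT (lock) DO UPDATE SET stream_id = excluded.stream_id
        """,
        (stream_id,),
    )


set_max_stream_id(42)
```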
James
53b6559a89 Add optional python dependencies to snap packaging (#6317)
Signed-off-by: James Hebden <james@ec0.io>
2019-11-14 18:42:46 +00:00
Andrew Morgan
745a48625d Fix guest -> real account upgrade with account validity enabled (#6359) 2019-11-14 12:02:05 +00:00
Andrew Morgan
6e1b40dc26 Replace instance variations of homeserver with correct case/spacing (#6357) 2019-11-14 11:02:58 +00:00
Andrew Morgan
a65a5ea125 Merge branch 'develop' of github.com:matrix-org/synapse into anoa/fix_account_data_sync
* 'develop' of github.com:matrix-org/synapse:
  Blacklist PurgeRoomTestCase (#6361)
  Set room version default to 5
2019-11-14 10:26:56 +00:00
Andrew Morgan
473acedcdd Merge branch 'develop' of github.com:matrix-org/synapse into anoa/homeserver_copy
* 'develop' of github.com:matrix-org/synapse:
  Blacklist PurgeRoomTestCase (#6361)
  Set room version default to 5
2019-11-14 10:26:27 +00:00
Brendan Abolivier
a42567e4a8 Merge pull request #6220 from matrix-org/neilj/set_room_version_default_to_5
Set room version default to 5
2019-11-14 10:21:00 +00:00
Andrew Morgan
c350bc2f92 Blacklist PurgeRoomTestCase (#6361) 2019-11-13 19:09:20 +00:00
Andrew Morgan
e1648dc576 sample config 2019-11-12 13:15:59 +00:00
Andrew Morgan
85f172ef96 Add changelog 2019-11-12 13:13:19 +00:00
Andrew Morgan
73d091be48 A couple more instances 2019-11-12 13:12:25 +00:00
Andrew Morgan
bc29a19731 Replace instance variations of homeserver with correct case/spacing 2019-11-12 13:08:12 +00:00
Richard van der Hoff
20d687516f newsfile 2019-11-08 16:17:02 +00:00
Andrew Morgan
cd96b4586f lint 2019-11-08 15:45:45 +00:00
Andrew Morgan
318dd21b47 Add changelog 2019-11-08 15:45:08 +00:00
Andrew Morgan
c4bdf2d785 Remove content from being sent for account data rdata stream 2019-11-08 15:44:02 +00:00
Richard van der Hoff
f41027f746 Use get_events_as_list rather than lots of calls to get_event
It's more efficient and clearer.
2019-11-08 12:21:28 +00:00
Richard van der Hoff
f8407975e7 Update some docstrings and comments 2019-11-08 12:18:20 +00:00
Richard van der Hoff
772d414975 Simplify _update_auth_events_and_context_for_auth
move event_key calculation into _update_context_for_auth_events, since it's
only used there.
2019-11-08 11:40:11 +00:00
Brendan Abolivier
963ffb60b9 Merge pull request #6340 from matrix-org/babolivier/pagination_query
Fix the SQL SELECT query in _paginate_room_events_txn
2019-11-08 11:12:24 +00:00
Brendan Abolivier
b16fa43386 Incorporate review 2019-11-08 10:34:09 +00:00
Erik Johnston
f713c01e2b Merge pull request #6295 from matrix-org/erikj/split_purge_history
Split purge API into events vs state and add PurgeEventsStorage
2019-11-08 10:19:15 +00:00
Erik Johnston
e4ec82ce0f Move type annotation into docstring 2019-11-08 09:50:48 +00:00
Brendan Abolivier
46e5db9eb2 Merge pull request #6310 from matrix-org/babolivier/msc2326_bg_update
MSC2326: Add background update to take previous events into account
2019-11-07 22:54:56 +00:00
Richard van der Hoff
c5abb67e43 Python 3.8 for tox (#6341)
... and update INSTALL.md to include py3.8.

We'll also have to update the buildkite pipeline to run it
2019-11-07 17:14:13 +00:00
Brendan Abolivier
dad8d68c99 Update synapse/storage/data_stores/main/events_bg_updates.py
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-11-07 17:01:53 +00:00
Brendan Abolivier
6d360f099f Update synapse/storage/data_stores/main/events_bg_updates.py
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-11-07 17:01:43 +00:00
Brendan Abolivier
c9b27d0044 Copy results 2019-11-07 16:47:45 +00:00
Brendan Abolivier
cd31201267 Revert "Back to using cursor_to_dict"
This reverts commit 1186612d6c.
2019-11-07 16:47:15 +00:00
Brendan Abolivier
1186612d6c Back to using cursor_to_dict 2019-11-07 16:46:41 +00:00
Brendan Abolivier
ec2cb9f298 Initialise value before looping 2019-11-07 16:18:40 +00:00
Brendan Abolivier
bb78276bdc Incorporate review 2019-11-07 15:25:27 +00:00
Brendan Abolivier
b9cba07962 Lint 2019-11-07 14:57:15 +00:00
Brendan Abolivier
70804392ae Only join on event_labels if we're filtering on labels 2019-11-07 14:55:10 +00:00
Brendan Abolivier
15a1a02e70 Handle lack of filter 2019-11-07 12:04:37 +00:00
Brendan Abolivier
4f519d556e Changelog 2019-11-07 11:51:54 +00:00
Brendan Abolivier
3f9b61ff95 Fix the SQL SELECT query in _paginate_room_events_txn
Doing a SELECT DISTINCT when paginating is quite expensive, because it requires the engine to sort the entire events table. However, we only need it when filtering on 2+ labels, so this PR changes the query to use DISTINCT only in that case.
2019-11-07 11:51:11 +00:00
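A simplified sketch of the decision (SQL fragments are cut down from the real pagination query): joining event_labels can only produce duplicate rows when two or more labels match, so DISTINCT is only worth paying for then:

```python
def build_pagination_select(labels):
    needs_distinct = labels is not None and len(labels) >= 2
    select = "SELECT DISTINCT" if needs_distinct else "SELECT"
    sql = f"{select} event_id, topological_ordering, stream_ordering FROM events"
    if labels:
        sql += " INNER JOIN event_labels USING (event_id)"
    return sql
```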
Andrew Morgan
e914cf12f6 Merge pull request #6235 from matrix-org/anoa/room_upgrade_groups 2019-11-07 11:12:22 +00:00
Richard van der Hoff
b03cddaeb9 tweak changelog 2019-11-07 09:46:25 +00:00
V02460
affcc2cc36 Fix LruCache callback deduplication (#6213) 2019-11-07 09:43:51 +00:00
Brendan Abolivier
f03c9d3444 Don't apply retention policy based filtering on state events
As per MSC1763, 'Retention is only considered for non-state events.', so don't filter out state events based on the room's retention policy.
2019-11-06 18:40:04 +00:00
Brendan Abolivier
eda14737cf Also filter state events 2019-11-06 18:14:03 +00:00
Andrew Morgan
a6ebef1bfd Make numeric user_id checker start at @0, and don't ratelimit on checking (#6338) 2019-11-06 17:21:20 +00:00
Erik Johnston
5c3363233c Fix deleting state groups during room purge.
And fix the tests to actually test that things got deleted.
2019-11-06 17:02:08 +00:00
Erik Johnston
71f3bd734f Use correct type annotation 2019-11-06 17:00:18 +00:00
Andrew Morgan
55bc8d531e raise exception after multiple failures 2019-11-06 16:52:54 +00:00
Andrew Morgan
1fe3cc2c9c Address review comments 2019-11-06 14:54:24 +00:00
Richard van der Hoff
915903eada Merge branch 'master' into develop 2019-11-06 13:51:11 +00:00
Richard van der Hoff
08b2868ffe Merge branch 'release-v1.5.1' 2019-11-06 13:50:55 +00:00
Richard van der Hoff
4257feb20f build debs for eoan and bullseye 2019-11-06 13:35:56 +00:00
Andrew Morgan
d2f6a67cb4 Add changelog 2019-11-06 12:03:12 +00:00
Andrew Morgan
4059d61e26 Don't forget to ratelimit calls outside of RegistrationHandler 2019-11-06 12:01:54 +00:00
Andrew Morgan
b33c4f7a82 Numeric ID checker now checks @0, don't ratelimit on checking 2019-11-06 11:55:00 +00:00
Erik Johnston
4fc53bf1fb Newsfile 2019-11-06 11:08:58 +00:00
Erik Johnston
f697b4b4a2 Add failed auth ratelimiting to UIA 2019-11-06 11:08:58 +00:00
Erik Johnston
541f1b92d9 Only do rc_login ratelimiting on successful login.
We were doing this in a number of places which meant that some login
code paths incremented the counter multiple times.

It was also applying ratelimiting to UIA endpoints, which was probably
not intentional.

In particular, some custom auth modules were calling
`check_user_exists`, which incremented the counters, meaning that people
would fail to login sometimes.
2019-11-06 11:08:58 +00:00
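The reordering, sketched with illustrative names: the counter is bumped exactly once, after authentication succeeds, instead of inside every helper a login (or UIA) path might call:

```python
async def login(auth, ratelimiter, user_id: str, password: str) -> bool:
    # Previously helpers such as check_user_exists() each incremented
    # the rc_login counters, so one attempt could count several times
    # and custom auth modules could lock users out.
    if not await auth.check_password(user_id, password):
        return False
    ratelimiter.ratelimit(user_id)  # count successful logins only
    return True
```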
Brendan Abolivier
24a214bd1b Fix field name 2019-11-06 11:04:19 +00:00
Brendan Abolivier
70d93cafdb Update insert 2019-11-06 10:59:03 +00:00
Richard van der Hoff
807ec3bd99 Fix bug which caused rejected events to be stored with the wrong room state (#6320)
Fixes a bug where rejected events were persisted with the wrong state group.

Also fixes an occasional internal-server-error when receiving events over
federation which are rejected and (possibly because they are
backwards-extremities) have no prev_group.

Fixes #6289.
2019-11-06 10:01:39 +00:00
Richard van der Hoff
0e3ab8afdc Add some checks that we aren't using state from rejected events (#6330)
* Raise an exception if accessing state for rejected events

Add some sanity checks on accessing state_group etc for
rejected events.

* Skip calculating push actions for rejected events

It didn't actually cause any bugs, because rejected events get filtered out at
various later points, but there's not point in trying to calculate the push
actions for a rejected event.
2019-11-05 22:13:37 +00:00
Erik Johnston
01ba7b38a7 Merge pull request #6336 from matrix-org/erikj/fix_phone_home_stats
Fix phone home stats
2019-11-05 18:29:57 +00:00
Erik Johnston
b437eb48b6 Newsfile 2019-11-05 17:45:29 +00:00
Erik Johnston
052513958d Fix phone home stats 2019-11-05 17:44:09 +00:00
Richard van der Hoff
5570d1c93f Merge pull request #6334 from matrix-org/rav/url_preview_limit_title_2
Fix exception when OpenGraph tag values are ints
2019-11-05 17:28:11 +00:00
Richard van der Hoff
02f99906f2 Merge pull request #6331 from matrix-org/rav/url_preview_limit_title
Strip overlong OpenGraph data from url preview
2019-11-05 17:08:59 +00:00
Richard van der Hoff
55a7da247a Merge branch 'develop' into rav/url_preview_limit_title 2019-11-05 17:08:07 +00:00
Erik Johnston
248111bae8 Newsfile 2019-11-05 15:54:23 +00:00
Erik Johnston
c16e192e2f Fix caching devices for remote servers in worker.
When the `/keys/query` API is hit on a client_reader worker, Synapse may
decide that it needs to resync some remote devices. Usually this happens
on the master, and then gets cached. However, that fails on workers and so
it falls back to fetching devices from remotes directly, which may in
turn fail if the remote is down.
2019-11-05 15:49:43 +00:00
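One way to picture the intended flow (the replication call here is an assumption, not Synapse's actual API):

```python
async def get_remote_devices(hs, user_id: str):
    if hs.config.worker_app is not None:
        # On a client_reader worker: ask the master to resync over
        # replication, so the result lands in the shared cache instead
        # of each worker hitting the (possibly down) remote directly.
        return await resync_on_master_via_replication(hs, user_id)
    return await resync_and_cache(hs, user_id)


async def resync_on_master_via_replication(hs, user_id):
    ...  # hypothetical replication endpoint


async def resync_and_cache(hs, user_id):
    ...  # stand-in for the real federation resync + cache update
```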
Brendan Abolivier
cb2cbe4d26 Only filter if a filter was provided 2019-11-05 15:28:36 +00:00
Brendan Abolivier
f141af4c79 Update copyright 2019-11-05 14:52:38 +00:00
Brendan Abolivier
a6863da249 Lint 2019-11-05 14:50:19 +00:00
Brendan Abolivier
8822b33111 Update copyrights 2019-11-05 14:46:35 +00:00
Brendan Abolivier
f5d8fdf0a7 Update changelog 2019-11-05 14:44:25 +00:00
Brendan Abolivier
d8d808db64 Changelog 2019-11-05 14:42:05 +00:00
Brendan Abolivier
037360e6cf Add tests for /search 2019-11-05 14:33:18 +00:00
Brendan Abolivier
c9e4748cb7 Merge labels tests for /context and /messages 2019-11-05 13:47:47 +00:00
Richard van der Hoff
4086002827 Improve documentation for EventContext fields (#6319) 2019-11-05 13:23:25 +00:00
Brendan Abolivier
a7c818c79b Add test case 2019-11-05 13:21:26 +00:00
Brendan Abolivier
1dffa78701 Filter events_before and events_after in /context requests
While the current version of the spec doesn't say much about how this endpoint uses filters (see https://github.com/matrix-org/matrix-doc/issues/2338), the existing implementation is that some fields of an EventFilter apply (the ones used when running the SQL query) and others don't (the ones used by the filter itself), because we don't call event_filter.filter(...). This seems counter-intuitive and probably not what we want, so this commit fixes it.
2019-11-05 12:21:59 +00:00
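The essence of the fix, hedged (the plumbing is illustrative; `event_filter.filter(...)` is named in the commit message):

```python
def filter_context_response(results: dict, event_filter) -> dict:
    # Apply the request's filter to the returned lists themselves, not
    # only to the SQL query that produced them.
    for key in ("events_before", "events_after", "state"):
        results[key] = event_filter.filter(results[key])
    return results
```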
Erik Johnston
ffe595381d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_purge_history 2019-11-05 10:27:41 +00:00
Andrew Morgan
506a63de67 Merge branch 'develop' of github.com:matrix-org/synapse into anoa/room_upgrade_groups 2019-11-04 18:22:41 +00:00
Andrew Morgan
c2203bea57 Re-add docstring, with caveats detailed 2019-11-04 18:17:11 +00:00
Brendan Abolivier
e252ffadbc Merge branch 'develop' into babolivier/msc2326_bg_update 2019-11-04 18:09:50 +00:00
Andrew Morgan
0287d033ee Transfer upgraded rooms on groups 2019-11-04 18:08:50 +00:00
Amber Brown
4e1c7b79fa Remove the psutil dependency (#6318)
* remove psutil and replace with resource
2019-11-05 05:05:48 +11:00
Brendan Abolivier
09957ce0e4 Implement per-room message retention policies 2019-11-04 17:09:22 +00:00
Erik Johnston
7134ca7daa Change to not require a state_groups.room_id index.
This does mean that we won't clean up orphaned state groups (i.e. state
groups that were persisted but the associated event wasn't).
2019-11-04 13:36:57 +00:00
Erik Johnston
6a0092d371 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_purge_history 2019-11-04 13:29:35 +00:00
Richard van der Hoff
cc6243b4c0 document the REPLICATE command a bit better (#6305)
since I found myself wondering how it works
2019-11-04 12:40:18 +00:00
Brendan Abolivier
3b29a73f9f Print out the actual number of affected rows 2019-11-04 09:56:11 +00:00
Brendan Abolivier
824bba2f78 Correctly order results 2019-11-04 09:56:11 +00:00
Brendan Abolivier
49008e674f TODO 2019-11-04 09:56:11 +00:00
Brendan Abolivier
1586f2c7e7 Fix exit condition 2019-11-04 09:56:11 +00:00
Brendan Abolivier
1c1268245d Lint 2019-11-04 09:56:11 +00:00
Brendan Abolivier
416c7baee6 Changelog 2019-11-04 09:56:10 +00:00
Brendan Abolivier
911b03ca31 Don't try to process events we already have a label for 2019-11-04 09:56:10 +00:00
Brendan Abolivier
07cb38e965 Use a sensible default value for labels 2019-11-04 09:56:10 +00:00
Brendan Abolivier
a46574281d Use the right format for rows 2019-11-04 09:56:10 +00:00
Brendan Abolivier
c9a1b80a74 MSC2326: Add background update to take previous events into account 2019-11-04 09:56:04 +00:00
Brendan Abolivier
f496d25877 Merge pull request #6301 from matrix-org/babolivier/msc2326
Implement MSC2326 (label based filtering)
2019-11-01 17:04:45 +00:00
Brendan Abolivier
988d8d6507 Incorporate review 2019-11-01 16:22:44 +00:00
Richard van der Hoff
c6516adbe0 Factor out an _AsyncEventContextImpl (#6298)
The intention here is to make it clearer which fields we can expect to be
populated when: notably, that the _event_type etc aren't used for the
synchronous impl of EventContext.
2019-11-01 16:19:09 +00:00
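A rough sketch of the shape this split implies, with assumed field names (these are not Synapse's actual fields): the synchronous context carries its state eagerly, while the async implementation resolves it only when asked.

```
import attr

@attr.s
class SyncEventContext:
    state_group = attr.ib()  # populated up front, always safe to read

    async def get_state_group(self):
        return self.state_group

@attr.s
class AsyncEventContextImpl:
    _resolve_state = attr.ib()  # coroutine function, called only on demand

    async def get_state_group(self):
        return await self._resolve_state()
```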
Brendan Abolivier
5598445655 Update synapse/storage/data_stores/main/schema/delta/56/event_labels.sql
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-11-01 16:18:34 +00:00
Hubert Chathi
fa7e52caf1 Merge pull request #6313 from matrix-org/uhoreg/cross_signing_fix_sqlite_schema
fix hidden field in devices table for older sqlite
2019-11-01 10:52:46 -04:00
Jason Robinson
67a65918ad Add contributor docs for using the provided linters script (#6164)
* Add lint dependencies black, flake8 and isort

These are required when running the `lint.sh` dev scripts.

Signed-off-by: Jason Robinson <jasonr@matrix.org>

* Add contributor docs for using the provided linters script

Also add it to the pull request template to avoid build failures due
to people not knowing that the linters need running.

Signed-off-by: Jason Robinson <jasonr@matrix.org>

* Fix mention of linter errors correction

Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>

* Add mention for installing linter dependencies

Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>

* Remove linters from python dependencies as per PR review

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2019-11-02 01:45:09 +11:00
Richard van der Hoff
1cb84c6486 Support for routing outbound HTTP requests via a proxy (#6239)
The `http_proxy` and `HTTPS_PROXY` env vars can be set to a `host[:port]` value which should point to a proxy.

The address of the proxy should be excluded from IP blacklists such as the `url_preview_ip_range_blacklist`.

The proxy will then be used for
 * push
 * url previews
 * phone-home stats
 * recaptcha validation
 * CAS auth validation

It will *not* be used for:
 * Application Services
 * Identity servers
 * Outbound federation
 * In worker configurations, connections from workers to masters

Fixes #4198.
2019-11-01 14:07:44 +00:00
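The stdlib illustrates the same environment-variable convention (this is not Synapse's implementation, just a quick way to check what a process will pick up; the proxy address is an example value):

```
import os
from urllib.request import getproxies

os.environ["http_proxy"] = "http://proxy.example.com:3128"
os.environ["HTTPS_PROXY"] = "http://proxy.example.com:3128"

# getproxies() reads the environment, honouring both cases of the variable names.
print(getproxies())  # e.g. {'http': 'http://proxy.example.com:3128', ...}
```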
Andrew Morgan
fe1f2b4520 Remove last usages of deprecated logging.warn method (#6314) 2019-11-01 12:03:44 +00:00
Brendan Abolivier
a2c63c619a Add more data to the event_labels table and fix the indexes 2019-11-01 11:47:28 +00:00
Erik Johnston
669b6cbda3 Fix up comment 2019-11-01 11:32:20 +00:00
Neil Pilgrim
befd58f47b Document lint.sh & allow application to specified files only (#6312) 2019-11-01 10:52:20 +00:00
Brendan Abolivier
e3689ac6f7 Add unstable feature flag 2019-11-01 10:41:23 +00:00
Brendan Abolivier
57cdb046e4 Lint 2019-11-01 10:39:14 +00:00
Brendan Abolivier
c6dbca2422 Incorporate review 2019-11-01 10:30:51 +00:00
Andrew Morgan
ace947e8da Depublish a room from the public rooms list when it is upgraded (#6232) 2019-11-01 10:28:09 +00:00
Hubert Chathi
53d7680e32 Merge pull request #5727 from matrix-org/uhoreg/e2e_cross-signing2-part3
Cross-signing [4/4] -- federation edition
2019-10-31 23:59:35 -04:00
Hubert Chathi
1f156398b9 add changelog 2019-10-31 23:02:20 -04:00
Hubert Chathi
c61db13183 fix hidden field in devices table for older sqlite 2019-10-31 22:52:55 -04:00
Hubert Chathi
c3fc176c60 Update synapse/storage/data_stores/main/devices.py
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-10-31 22:49:48 -04:00
Hubert Chathi
6f4bc6d01d Merge branch 'develop' into cross-signing_federation 2019-10-31 22:38:21 -04:00
Hubert Chathi
3b4216f961 Merge pull request #6254 from matrix-org/uhoreg/cross_signing_fix_workers_notify
make notification of signatures work with workers
2019-10-31 22:35:03 -04:00
Will Hunt
42e707c663 rstrip slashes from url on appservice (#6306) 2019-10-31 17:32:25 +00:00
Hubert Chathi
9c94b48bf1 Merge branch 'develop' into uhoreg/cross_signing_fix_workers_notify 2019-10-31 12:32:07 -04:00
Hubert Chathi
f7e4a582ef clean up code a bit 2019-10-31 12:01:00 -04:00
Erik Johnston
fb1a6914cf Update log line to lie a little less 2019-10-31 15:45:48 +00:00
Amber Brown
020add5099 Update black to 19.10b0 (#6304)
* update version of black and also fix the mypy config being overridden
2019-11-01 02:43:24 +11:00
Erik Johnston
61be1a2926 Add state_groups.room_id index 2019-10-31 15:39:26 +00:00
Erik Johnston
f91f2a1f92 Docstrings 2019-10-31 15:26:00 +00:00
Erik Johnston
8f5bbdb987 Fix purge room API 2019-10-31 15:22:08 +00:00
Erik Johnston
cd581338cf Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_purge_history 2019-10-31 15:19:26 +00:00
Erik Johnston
dfe0cd71b6 Merge pull request #6294 from matrix-org/erikj/add_state_storage
Add StateGroupStorage interface
2019-10-31 16:17:53 +01:00
Travis Ralston
3a74c03ffb Expose some homeserver functionality to spam checkers (#6259)
* Offer the homeserver instance to the spam checker

* Newsfile

* Linting

* Expose a Spam Checker API instead of passing the homeserver object

* Alter changelog

* s/hs/api
2019-10-31 09:16:14 -06:00
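A hedged sketch of the pattern in that last bullet: rather than handing modules the whole homeserver object, expose a small wrapper whose surface can be kept stable. The method name and delegation target below are assumptions, not Synapse's actual module API.

```
class SpamCheckerApi:
    """Narrow facade handed to spam-checker modules instead of the homeserver."""

    def __init__(self, hs):
        self._hs = hs  # kept private: modules only see the methods below

    def get_state_events_in_room(self, room_id, types):
        # Delegate to an internal store method; the exact call is an assumption.
        return self._hs.get_datastore().get_state_events(room_id, types)
```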
Erik Johnston
69489f8eb1 Merge pull request #6307 from matrix-org/erikj/fix_purge_room
Fix /purge_room admin API
2019-10-31 16:08:34 +01:00
Erik Johnston
64f2b8c3d8 Apply suggestions from code review
Fix docstring

Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-10-31 15:44:31 +01:00
Erik Johnston
b2ff8c305f Newsfile 2019-10-31 11:32:53 +00:00
Erik Johnston
97c60ccaa3 Add unit test for /purge_room API 2019-10-31 11:30:25 +00:00
Erik Johnston
c6bcd38841 Fix /purge_room API.
It fails trying to clean the `topic` table which was recently removed.
2019-10-31 11:17:23 +00:00
Richard van der Hoff
eb9a0d9e48 Merge remote-tracking branch 'origin/master' into develop 2019-10-31 11:17:05 +00:00
Andrew Morgan
54fef094b3 Remove usage of deprecated logger.warn method from codebase (#6271)
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
2019-10-31 10:23:24 +00:00
Hubert Chathi
998f7fe7d4 make user signatures a separate stream 2019-10-30 17:22:52 -04:00
Hubert Chathi
670972c0e1 Merge branch 'develop' into uhoreg/cross_signing_fix_workers_notify 2019-10-30 16:46:31 -04:00
Hubert Chathi
bb6cec27a5 rename get_devices_by_remote to get_device_updates_by_remote 2019-10-30 14:57:34 -04:00
Richard van der Hoff
0467f33584 fix delete_existing for _persist_events (#6300)
this is part of _retry_on_integrity_error, so should only be on _persist_events_and_state_updates
2019-10-30 18:05:00 +00:00
Brendan Abolivier
dcc069a2e2 Lint 2019-10-30 18:01:56 +00:00
Brendan Abolivier
62588eae4a Changelog 2019-10-30 17:54:40 +00:00
Brendan Abolivier
d8c9109aee Add integration tests for /messages 2019-10-30 17:48:22 +00:00
Brendan Abolivier
fe51d6cacf Add more integration testing 2019-10-30 17:28:41 +00:00
Brendan Abolivier
395683add1 Add integration tests for sync 2019-10-30 16:47:37 +00:00
Brendan Abolivier
e7943f660a Add unit tests 2019-10-30 16:15:04 +00:00
Brendan Abolivier
233b14ebe1 Add index on label 2019-10-30 15:58:05 +00:00
Brendan Abolivier
acd16ad86a Implement filtering 2019-10-30 15:56:33 +00:00
Erik Johnston
ecfba89a78 Newsfile 2019-10-30 15:23:37 +00:00
Erik Johnston
7c8c97e635 Split purge API into events vs state 2019-10-30 15:23:37 +00:00
Erik Johnston
d3f694d628 Newsfile 2019-10-30 14:53:09 +00:00
Erik Johnston
69f0054ce6 Port to use state storage 2019-10-30 14:46:54 +00:00
Erik Johnston
5db03535d5 Add StateGroupStorage interface 2019-10-30 14:46:49 +00:00
Brendan Abolivier
fa0dcbc8fa Store labels for new events 2019-10-30 14:27:15 +00:00
Hubert Chathi
7d7eac61be Merge branch 'develop' into cross-signing_federation 2019-10-30 10:17:10 -04:00
Hubert Chathi
bc32f102cd black 2019-10-30 10:07:36 -04:00
Hubert Chathi
d78b1e339d apply changes as a result of PR review 2019-10-30 10:01:53 -04:00
Erik Johnston
b7fe62b766 Merge pull request #6240 from matrix-org/erikj/split_out_persistence_store
Move persist_events out from main data store.
2019-10-30 14:58:44 +01:00
Erik Johnston
ec6de1cc7d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_out_persistence_store 2019-10-30 13:37:04 +00:00
Erik Johnston
a8d16f6c00 Review comments 2019-10-30 13:36:12 +00:00
Erik Johnston
e5c3a99091 Merge pull request #6291 from matrix-org/erikj/fix_cache_descriptor
Make ObservableDeferred.observe() always return deferred.
2019-10-30 14:06:34 +01:00
Yash Jipkate
9677613e9c Modify doc to update Google ReCaptcha terms (#6257) 2019-10-30 12:30:20 +00:00
Erik Johnston
6e677403b7 Clarify docstring 2019-10-30 11:52:04 +00:00
Andrew Morgan
7e17959984 Update email section of INSTALL.md about account_threepid_delegates (#6272) 2019-10-30 11:37:56 +00:00
Erik Johnston
1de28183cb Newsfile 2019-10-30 11:37:56 +00:00
Erik Johnston
326b3dace7 Make ObservableDeferred.observe() always return deferred.
This makes it easier to use in an async/await world.

Also fixes a bug where cache descriptors would occasionally return a raw
value rather than a deferred.
2019-10-30 11:35:46 +00:00
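A tiny sketch of the invariant being restored here (illustrative, not the real ObservableDeferred code): normalising raw values to Deferreds means callers can uniformly await the result instead of special-casing both shapes.

```
from twisted.internet import defer

def always_deferred(value):
    """Return `value` as a Deferred, wrapping raw values in defer.succeed()."""
    if isinstance(value, defer.Deferred):
        return value
    return defer.succeed(value)

async def caller():
    # Works whether the cache handed us a raw value or a pending Deferred.
    return await always_deferred(42)
```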
Andrew Morgan
a2276d4d3c Fix log line that was printing undefined value (#6278) 2019-10-30 11:28:48 +00:00
Andrew Morgan
2cab02f9d1 Update CI to run isort on scripts and scripts-dev (#6270) 2019-10-30 11:17:14 +00:00
Andrew Morgan
7955abeaac Fix small typo in comment (#6269) 2019-10-30 11:16:19 +00:00
Andrew Morgan
46c12918ad Fix typo in domain name in account_threepid_delegates config option (#6273) 2019-10-30 11:07:42 +00:00
Andrew Morgan
9178ac1b6a Remove redundant arguments to CI's flake8 (#6277) 2019-10-30 11:07:18 +00:00
Andrew Morgan
b39ca49db1 Handle FileNotFound error in checking git repository version (#6284) 2019-10-30 11:00:15 +00:00
Erik Johnston
770d1ef673 Merge pull request #6280 from matrix-org/erikj/receipts_async_await
Port receipt and read markers to async/await
2019-10-30 11:44:18 +01:00
Erik Johnston
ba4cc5541c Merge pull request #6274 from matrix-org/erikj/replication_async
Port replication http server endpoints to async/await
2019-10-30 11:44:08 +01:00
Erik Johnston
72bc6294ed Merge pull request #6275 from matrix-org/erikj/port_rest_events
Port room rest handlers to async/await
2019-10-30 11:44:02 +01:00
Erik Johnston
b4465564cc Merge pull request #6279 from matrix-org/erikj/federation_server_async_await
Port federation_server to async/await
2019-10-30 11:43:51 +01:00
Anton Lazarev
213d7eb227 Clarify environment variable usage when running in Docker (#6181) 2019-10-30 07:30:04 +00:00
Brendan Abolivier
47f767269c Add database table for keeping track of labels on events 2019-10-29 16:56:22 +00:00
Erik Johnston
a287f1e804 Don't return coroutines 2019-10-29 16:36:46 +00:00
Erik Johnston
38474707b9 Merge branch 'erikj/federation_server_async_await' of github.com:matrix-org/synapse into erikj/receipts_async_await 2019-10-29 15:53:17 +00:00
Erik Johnston
74c1e16106 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/federation_server_async_await 2019-10-29 15:52:39 +00:00
Erik Johnston
307e313ef4 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/port_rest_events 2019-10-29 15:51:12 +00:00
Erik Johnston
d6e40e7cbd Merge branch 'develop' of github.com:matrix-org/synapse into erikj/replication_async 2019-10-29 15:42:58 +00:00
Brendan Abolivier
d79151921a Fix CI for synapse_port_db (#6276)
* Don't use a virtualenv

* Generate the server's signing key to allow it to start

* Add signing key paths to CI configuration files

* Use a Python script to create the postgresql database

* Improve logging
2019-10-29 15:39:44 +00:00
Erik Johnston
7dd7a385f9 Newsfile 2019-10-29 15:09:48 +00:00
Erik Johnston
2c35ffead2 Port receipt and read markers to async/await 2019-10-29 15:08:22 +00:00
Erik Johnston
09a135b039 Make concurrently_execute work with async/await 2019-10-29 15:02:23 +00:00
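A self-contained sketch of what an async/await-friendly concurrently_execute can look like. The real helper lives in Synapse's utility code; this asyncio version is only illustrative: run a coroutine over every item with a bounded number of calls in flight.

```
import asyncio

async def concurrently_execute(func, args, limit):
    """Run `func(arg)` for every arg, with at most `limit` calls in flight."""
    it = iter(list(args))

    async def worker():
        for arg in it:  # workers pull from the shared iterator as they finish
            await func(arg)

    await asyncio.gather(*(worker() for _ in range(limit)))

async def main():
    async def show(n):
        await asyncio.sleep(0)
        print("processed", n)
    await concurrently_execute(show, range(5), limit=2)

asyncio.run(main())
```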
Richard van der Hoff
65cb307e19 Merge branch 'master' into develop 2019-10-29 14:40:57 +00:00
Erik Johnston
fec7d88645 Newsfile 2019-10-29 14:27:18 +00:00
Erik Johnston
3f33879be4 Port federation_server to async/await 2019-10-29 14:13:08 +00:00
Brendan Abolivier
cc80968f62 Merge branch 'babolivier/changelog-name' into develop 2019-10-29 14:05:49 +00:00
Erik Johnston
387324688e Newsfile 2019-10-29 13:10:45 +00:00
Erik Johnston
9be41bc121 Port room rest handlers to async/await 2019-10-29 13:09:29 +00:00
Erik Johnston
1a7ed37149 Newsfile 2019-10-29 13:01:50 +00:00
Erik Johnston
e577a4b2ad Port replication http server endpoints to async/await 2019-10-29 13:00:51 +00:00
Erik Johnston
561133c3c5 Merge pull request #6263 from matrix-org/erikj/caches_return_deferreds
Quick fix to ensure cache descriptors always return deferreds
2019-10-29 12:53:21 +01:00
Erik Johnston
e6c7e239ef Update docstring 2019-10-29 11:48:30 +00:00
Erik Johnston
e419c44ba4 Merge branch 'release-v1.5.0' of github.com:matrix-org/synapse into develop 2019-10-29 11:41:27 +00:00
Brendan Abolivier
14504ad573 Add CI for synapse_port_db (#6140)
This adds:

* a test sqlite database
* a configuration file for the sqlite database
* a configuration file for a postgresql database (using the credentials in `.buildkite/docker-compose.pyXX.pgXX.yaml`)

as well as a new script named `.buildkite/scripts/test_synapse_port_db.sh` that:

1. installs Synapse
2. updates the test sqlite database to the latest schema and runs background updates on it
3. creates an empty postgresql database
4. run the `synapse_port_db` script to migrate the test sqlite database to the empty postgresql database (with coverage)

Step `2` is done via a new script located at `scripts-dev/update_database`.

The test sqlite database is extracted from a SyTest run, so that it can be treated as an actual homeserver's database with actual data in it.
2019-10-28 17:45:32 +00:00
Tobia De Koninck
29207b4488 Fix broken URL in docker/README.md (#6264)
Signed-off-by: Tobia De Koninck <LEDfan@users.noreply.github.com>
2019-10-28 15:39:57 +00:00
Erik Johnston
a8aced58df Newsfile 2019-10-28 13:36:52 +00:00
Erik Johnston
d0d8a22c13 Quick fix to ensure cache descriptors always return deferreds 2019-10-28 13:33:04 +00:00
Richard van der Hoff
bcfc647e4d Merge tag 'v1.5.0rc2' into develop
Synapse 1.5.0rc2 (2019-10-28)
=============================

Bugfixes
--------

- Update list of boolean columns in `synapse_port_db`. ([\#6247](https://github.com/matrix-org/synapse/issues/6247))
- Fix /keys/query API on workers. ([\#6256](https://github.com/matrix-org/synapse/issues/6256))
- Improve signature checking on some federation APIs. ([\#6262](https://github.com/matrix-org/synapse/issues/6262))

Internal Changes
----------------

- Move schema delta files to the correct data store. ([\#6248](https://github.com/matrix-org/synapse/issues/6248))
- Small performance improvement by removing repeated config lookups in room stats calculation. ([\#6255](https://github.com/matrix-org/synapse/issues/6255))
2019-10-28 12:59:13 +00:00
Richard van der Hoff
9aee28927b Convert EventContext to attrs (#6218)
* make EventContext use an attr
2019-10-28 14:29:55 +02:00
Hubert Chathi
da78f61778 Merge pull request #6253 from matrix-org/uhoreg/e2e_backup_delete_keys
delete keys when deleting backup versions
2019-10-25 11:28:11 -04:00
Hubert Chathi
4697c0de0b remove unneeded imports 2019-10-25 10:47:02 -04:00
Hubert Chathi
4cf3a30a20 switch to using HomeserverTestCase 2019-10-25 10:42:07 -04:00
Erik Johnston
64c2cfda8a Merge branch 'release-v1.5.0' of github.com:matrix-org/synapse into develop 2019-10-25 11:34:49 +01:00
Erik Johnston
a71b8c87ec Merge branch 'release-v1.5.0' of github.com:matrix-org/synapse into develop 2019-10-25 11:32:24 +01:00
Erik Johnston
44ab048cfe Merge pull request #6251 from matrix-org/michaelkaye/debug_guard_logging
Reduce debug logging overhead
2019-10-25 10:05:44 +01:00
Erik Johnston
2020f11916 Merge pull request #6250 from matrix-org/michaelkaye/make_user_stats_less_verbose
Make user stats less verbose
2019-10-25 10:04:51 +01:00
Hubert Chathi
0417ca1a64 add changelog 2019-10-24 22:49:55 -04:00
Hubert Chathi
c40d7244f8 Merge branch 'develop' into cross-signing_federation 2019-10-24 22:31:25 -04:00
Hubert Chathi
8ac766c44a make notification of signatures work with workers 2019-10-24 22:14:58 -04:00
Hubert Chathi
ff05c9b760 don't error if federation query doesn't have cross-signing keys 2019-10-24 21:46:11 -04:00
Hubert Chathi
29a0bc5637 remove some unnecessary lines 2019-10-24 21:43:02 -04:00
Hubert Chathi
608947eedf add changelog 2019-10-24 21:33:35 -04:00
Hubert Chathi
848cd388d9 delete keys when deleting backups 2019-10-24 21:21:51 -04:00
Michael Kaye
e4d98188da Address codestyle concerns 2019-10-24 18:43:13 +01:00
Michael Kaye
47c02f82e3 Add missing '.' 2019-10-24 18:39:15 +01:00
Michael Kaye
0d7e9523e5 Reduce impact of debug logging 2019-10-24 18:37:55 +01:00
Michael Kaye
8f4a808d9d Delay printf until logging is required.
Using % formatting eagerly causes the string to be generated even if debug
logging is off.
2019-10-24 18:31:53 +01:00
Michael Kaye
9eebc1e73b use %r to __repr__ objects
This avoids calculating __repr__ unless we are going to log.
2019-10-24 18:18:56 +01:00
Michael Kaye
f85b9842f0 Don't encode object as UTF-8 string if not needed.
I believe that string-formatting ~10-15 sized events takes a noticeable
proportion of CPU time.
2019-10-24 18:08:45 +01:00
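Taken together, these three commits are the standard lazy-logging idiom. A small self-contained illustration (the event dict is made up):

```
import logging

logging.basicConfig(level=logging.INFO)  # DEBUG is disabled here
logger = logging.getLogger(__name__)

event = {"type": "m.room.message", "content": {"body": "hello"}}

# Eager: "%" builds the string (calling str(event)) even though the record
# is dropped at INFO level.
logger.debug("handling event: %s" % event)

# Lazy: the logging module formats only if the record is actually emitted,
# and %r defers the __repr__ call in the same way.
logger.debug("handling event: %r", event)

# For genuinely expensive conversions (e.g. encoding to UTF-8), guard explicitly:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("encoded event: %s", str(event).encode("utf-8"))
```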
Michael Kaye
c3cd977fff Add changelog.d 2019-10-24 17:58:50 +01:00
Michael Kaye
39266a9c9f Make user/room stats log line less verbose. 2019-10-24 17:55:53 +01:00
Erik Johnston
9fb96889a4 Newsfile 2019-10-23 16:15:03 +01:00
Erik Johnston
3ca4c7c516 Use new EventPersistenceStore 2019-10-23 16:15:03 +01:00
Erik Johnston
73cf63784b Add DataStores and Storage classes. 2019-10-23 16:15:03 +01:00
Hubert Chathi
dc2cd6f79d move get_e2e_cross_signing_key to EndToEndKeyWorkerStore so it works with workers 2019-10-23 09:13:47 -04:00
Erik Johnston
22a9847670 Move persist_events out from main data store.
This is in preparation for splitting out of state_groups_state from the
main store into it own one, as persisting events depends on calculating
state.
2019-10-23 13:29:44 +01:00
Hubert Chathi
480eac30eb black 2019-10-22 22:37:16 -04:00
Hubert Chathi
404e8c8532 vendor-prefix the EDU name until MSC1756 is merged into the spec 2019-10-22 22:33:23 -04:00
Hubert Chathi
3e3f9b684e fix unit test 2019-10-22 22:26:30 -04:00
Hubert Chathi
0563839535 add news file 2019-10-22 21:51:01 -04:00
Hubert Chathi
1fabf82d50 update to work with newer code, and fix formatting 2019-10-22 21:44:58 -04:00
Hubert Chathi
41ad35b523 add missing param 2019-10-22 19:06:29 -04:00
Hubert Chathi
cfdb84422d make black happy 2019-10-22 19:06:06 -04:00
Hubert Chathi
a1aaf3eea6 don't crash if the user doesn't have cross-signing keys 2019-10-22 19:04:37 -04:00
Hubert Chathi
8d3542a64e implement federation parts of cross-signing 2019-10-22 19:04:35 -04:00
Neil Johnson
82c8799ec7 Set room version default to 5 2019-10-19 09:06:15 +01:00
348 changed files with 14482 additions and 7602 deletions

View File

@@ -1,6 +1,6 @@
#!/usr/bin/env bash
set -ex
set -e
if [[ "$BUILDKITE_BRANCH" =~ ^(develop|master|dinsic|shhs|release-.*)$ ]]; then
echo "Not merging forward, as this is a release branch"
@@ -18,6 +18,8 @@ else
GITBASE=$BUILDKITE_PULL_REQUEST_BASE_BRANCH
fi
echo "--- merge_base_branch $GITBASE"
# Show what we are before
git --no-pager show -s

View File

@@ -0,0 +1,21 @@
# Configuration file used for testing the 'synapse_port_db' script.
# Tells the script to connect to the postgresql database that will be available in the
# CI's Docker setup at the point where this file is considered.
server_name: "localhost:8800"
signing_key_path: "/src/.buildkite/test.signing.key"
report_stats: false
database:
name: "psycopg2"
args:
user: postgres
host: postgres
password: postgres
database: synapse
# Suppress the key server warning.
trusted_key_servers:
- server_name: "matrix.org"
suppress_key_server_warning: true

View File

@@ -0,0 +1,36 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from synapse.storage.engines import create_engine
logger = logging.getLogger("create_postgres_db")
if __name__ == "__main__":
# Create a PostgresEngine.
db_engine = create_engine({"name": "psycopg2", "args": {}})
# Connect to postgres to create the base database.
# We use "postgres" as a database because it's bound to exist and the "synapse" one
# doesn't exist yet.
db_conn = db_engine.module.connect(
user="postgres", host="postgres", password="postgres", dbname="postgres"
)
db_conn.autocommit = True
cur = db_conn.cursor()
cur.execute("CREATE DATABASE synapse;")
cur.close()
db_conn.close()

View File

@@ -0,0 +1,36 @@
#!/bin/bash
#
# Test script for 'synapse_port_db', which creates a virtualenv, installs Synapse along
# with additional dependencies needed for the test (such as coverage or the PostgreSQL
# driver), updates the schema of the test SQLite database and runs background updates on it,
# creates an empty test database in PostgreSQL, then runs the 'synapse_port_db' script to
# test porting the SQLite database to the PostgreSQL database (with coverage).
set -xe
cd `dirname $0`/../..
echo "--- Install dependencies"
# Install dependencies for this test.
pip install psycopg2 coverage coverage-enable-subprocess
# Install Synapse itself. This won't update any libraries.
pip install -e .
echo "--- Generate the signing key"
# Generate the server's signing key.
python -m synapse.app.homeserver --generate-keys -c .buildkite/sqlite-config.yaml
echo "--- Prepare the databases"
# Make sure the SQLite3 database is using the latest schema and has no pending background update.
scripts-dev/update_database --database-config .buildkite/sqlite-config.yaml
# Create the PostgreSQL database.
./.buildkite/scripts/create_postgres_db.py
echo "+++ Run synapse_port_db"
# Run the script
coverage run scripts/synapse_port_db --sqlite-database .buildkite/test_db.db --postgres-config .buildkite/postgres-config.yaml

View File

@@ -0,0 +1,18 @@
# Configuration file used for testing the 'synapse_port_db' script.
# Tells the 'update_database' script to connect to the test SQLite database to upgrade its
# schema and run background updates on it.
server_name: "localhost:8800"
signing_key_path: "/src/.buildkite/test.signing.key"
report_stats: false
database:
name: "sqlite3"
args:
database: ".buildkite/test_db.db"
# Suppress the key server warning.
trusted_key_servers:
- server_name: "matrix.org"
suppress_key_server_warning: true

BIN
.buildkite/test_db.db Normal file

Binary file not shown.

View File

@@ -28,3 +28,39 @@ User sees updates to presence from other users in the incremental sync.
Gapped incremental syncs include all state changes
Old members are included in gappy incr LL sync if they start speaking
# new failures as of https://github.com/matrix-org/sytest/pull/732
Device list doesn't change if remote server is down
Remote servers cannot set power levels in rooms without existing powerlevels
Remote servers should reject attempts by non-creators to set the power levels
# new failures as of https://github.com/matrix-org/sytest/pull/753
GET /rooms/:room_id/messages returns a message
GET /rooms/:room_id/messages lazy loads members correctly
Read receipts are sent as events
Only original members of the room can see messages from erased users
Device deletion propagates over federation
If user leaves room, remote user changes device and rejoins we see update in /sync and /keys/changes
Changing user-signing key notifies local users
Newly updated tags appear in an incremental v2 /sync
Server correctly handles incoming m.device_list_update
Local device key changes get to remote servers with correct prev_id
AS-ghosted users can use rooms via AS
Ghost user must register before joining room
Test that a message is pushed
Invites are pushed
Rooms with aliases are correctly named in pushed
Rooms with names are correctly named in pushed
Rooms with canonical alias are correctly named in pushed
Rooms with many users are correctly pushed
Don't get pushed for rooms you've muted
Rejected events are not pushed
Test that rejected pushers are removed.
Events come down the correct room
# https://buildkite.com/matrix-dot-org/sytest/builds/326#cca62404-a88a-4fcb-ad41-175fd3377603
Presence changes to UNAVAILABLE are reported to remote room members
If remote user leaves room, changes device and rejoins we see update in sync
uploading self-signing key notifies over federation
Inbound federation can receive redacted events
Outbound federation can request missing events

View File

@@ -1,7 +1,8 @@
### Pull Request Checklist
<!-- Please read CONTRIBUTING.rst before submitting your pull request -->
<!-- Please read CONTRIBUTING.md before submitting your pull request -->
* [ ] Pull request is based on the develop branch
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#changelog)
* [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#sign-off)
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#changelog)
* [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#sign-off)
* [ ] Code style is correct (run the [linters](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.md#code-style))

View File

@@ -1,3 +1,234 @@
Synapse 1.7.2 (2019-12-20)
==========================
This release fixes some regressions introduced in Synapse 1.7.0 and 1.7.1.
Bugfixes
--------
- Fix a regression introduced in Synapse 1.7.1 which caused errors when attempting to backfill rooms over federation. ([\#6576](https://github.com/matrix-org/synapse/issues/6576))
- Fix a bug introduced in Synapse 1.7.0 which caused an error on startup when upgrading from versions before 1.3.0. ([\#6578](https://github.com/matrix-org/synapse/issues/6578))
Synapse 1.7.1 (2019-12-18)
==========================
This release includes several security fixes as well as a fix to a bug exposed by the security fixes. Administrators are encouraged to upgrade as soon as possible.
Security updates
----------------
- Fix a bug which could cause room events to be incorrectly authorized using events from a different room. ([\#6501](https://github.com/matrix-org/synapse/issues/6501), [\#6503](https://github.com/matrix-org/synapse/issues/6503), [\#6521](https://github.com/matrix-org/synapse/issues/6521), [\#6524](https://github.com/matrix-org/synapse/issues/6524), [\#6530](https://github.com/matrix-org/synapse/issues/6530), [\#6531](https://github.com/matrix-org/synapse/issues/6531))
- Fix a bug causing responses to the `/context` client endpoint to not use the pruned version of the event. ([\#6553](https://github.com/matrix-org/synapse/issues/6553))
- Fix a cause of state resets in room versions 2 onwards. ([\#6556](https://github.com/matrix-org/synapse/issues/6556), [\#6560](https://github.com/matrix-org/synapse/issues/6560))
Bugfixes
--------
- Fix a bug which could cause the federation server to incorrectly return errors when handling certain obscure event graphs. ([\#6526](https://github.com/matrix-org/synapse/issues/6526), [\#6527](https://github.com/matrix-org/synapse/issues/6527))
Synapse 1.7.0 (2019-12-13)
==========================
This release changes the default settings so that only local authenticated users can query the server's room directory. See the [upgrade notes](UPGRADE.rst#upgrading-to-v170) for details.
Support for SQLite versions before 3.11 is now deprecated. A future release will refuse to start if used with an SQLite version before 3.11.
Administrators are reminded that SQLite should not be used for production instances. Instructions for migrating to Postgres are available [here](docs/postgres.md). A future release of synapse will, by default, disable federation for servers using SQLite.
No significant changes since 1.7.0rc2.
Synapse 1.7.0rc2 (2019-12-11)
=============================
Bugfixes
--------
- Fix incorrect error message for invalid requests when setting user's avatar URL. ([\#6497](https://github.com/matrix-org/synapse/issues/6497))
- Fix support for SQLite 3.7. ([\#6499](https://github.com/matrix-org/synapse/issues/6499))
- Fix regression where sending email push would not work when using a pusher worker. ([\#6507](https://github.com/matrix-org/synapse/issues/6507), [\#6509](https://github.com/matrix-org/synapse/issues/6509))
Synapse 1.7.0rc1 (2019-12-09)
=============================
Features
--------
- Implement per-room message retention policies. ([\#5815](https://github.com/matrix-org/synapse/issues/5815), [\#6436](https://github.com/matrix-org/synapse/issues/6436))
- Add etag and count fields to key backup endpoints to help clients guess if there are new keys. ([\#5858](https://github.com/matrix-org/synapse/issues/5858))
- Add `/admin/v2/users` endpoint with pagination. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Require User-Interactive Authentication for `/account/3pid/add`, meaning the user's password will be required to add a third-party ID to their account. ([\#6119](https://github.com/matrix-org/synapse/issues/6119))
- Implement the `/_matrix/federation/unstable/net.atleastfornow/state/<context>` API as drafted in MSC2314. ([\#6176](https://github.com/matrix-org/synapse/issues/6176))
- Configure privacy-preserving settings by default for the room directory. ([\#6355](https://github.com/matrix-org/synapse/issues/6355))
- Add ephemeral messages support by partially implementing [MSC2228](https://github.com/matrix-org/matrix-doc/pull/2228). ([\#6409](https://github.com/matrix-org/synapse/issues/6409))
- Add support for [MSC 2367](https://github.com/matrix-org/matrix-doc/pull/2367), which allows specifying a reason on all membership events. ([\#6434](https://github.com/matrix-org/synapse/issues/6434))
Bugfixes
--------
- Transfer non-standard power levels on room upgrade. ([\#6237](https://github.com/matrix-org/synapse/issues/6237))
- Fix error from the Pillow library when uploading RGBA images. ([\#6241](https://github.com/matrix-org/synapse/issues/6241))
- Correctly apply the event filter to the `state`, `events_before` and `events_after` fields in the response to `/context` requests. ([\#6329](https://github.com/matrix-org/synapse/issues/6329))
- Fix caching devices for remote users when using workers, so that we don't attempt to refetch (and potentially fail) each time a user requests devices. ([\#6332](https://github.com/matrix-org/synapse/issues/6332))
- Prevent account data syncs getting lost across TCP replication. ([\#6333](https://github.com/matrix-org/synapse/issues/6333))
- Fix bug: TypeError in `register_user()` while using LDAP auth module. ([\#6406](https://github.com/matrix-org/synapse/issues/6406))
- Fix an intermittent exception when handling read-receipts. ([\#6408](https://github.com/matrix-org/synapse/issues/6408))
- Fix broken guest registration when there are existing blocks of numeric user IDs. ([\#6420](https://github.com/matrix-org/synapse/issues/6420))
- Fix startup error when http proxy is defined. ([\#6421](https://github.com/matrix-org/synapse/issues/6421))
- Fix error when using synapse_port_db on a vanilla synapse db. ([\#6449](https://github.com/matrix-org/synapse/issues/6449))
- Fix uploading multiple cross signing signatures for the same user. ([\#6451](https://github.com/matrix-org/synapse/issues/6451))
- Fix bug which lead to exceptions being thrown in a loop when a cross-signed device is deleted. ([\#6462](https://github.com/matrix-org/synapse/issues/6462))
- Fix `synapse_port_db` not exiting with a 0 code if something went wrong during the port process. ([\#6470](https://github.com/matrix-org/synapse/issues/6470))
- Improve sanity-checking when receiving events over federation. ([\#6472](https://github.com/matrix-org/synapse/issues/6472))
- Fix inaccurate per-block Prometheus metrics. ([\#6491](https://github.com/matrix-org/synapse/issues/6491))
- Fix small performance regression for sending invites. ([\#6493](https://github.com/matrix-org/synapse/issues/6493))
- Back out cross-signing code added in Synapse 1.5.0, which caused a performance regression. ([\#6494](https://github.com/matrix-org/synapse/issues/6494))
Improved Documentation
----------------------
- Update documentation and variables in user contributed systemd reference file. ([\#6369](https://github.com/matrix-org/synapse/issues/6369), [\#6490](https://github.com/matrix-org/synapse/issues/6490))
- Fix link in the user directory documentation. ([\#6388](https://github.com/matrix-org/synapse/issues/6388))
- Add build instructions to the docker readme. ([\#6390](https://github.com/matrix-org/synapse/issues/6390))
- Switch Ubuntu package install recommendation to use python3 packages in INSTALL.md. ([\#6443](https://github.com/matrix-org/synapse/issues/6443))
- Write some docs for the quarantine_media api. ([\#6458](https://github.com/matrix-org/synapse/issues/6458))
- Convert CONTRIBUTING.rst to markdown (among other small fixes). ([\#6461](https://github.com/matrix-org/synapse/issues/6461))
Deprecations and Removals
-------------------------
- Remove admin/v1/users_paginate endpoint. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#5925](https://github.com/matrix-org/synapse/issues/5925))
- Remove fallback for federation with old servers which lack the /federation/v1/state_ids API. ([\#6488](https://github.com/matrix-org/synapse/issues/6488))
Internal Changes
----------------
- Add benchmarks for structured logging and improve output performance. ([\#6266](https://github.com/matrix-org/synapse/issues/6266))
- Improve the performance of outputting structured logging. ([\#6322](https://github.com/matrix-org/synapse/issues/6322))
- Refactor some code in the event authentication path for clarity. ([\#6343](https://github.com/matrix-org/synapse/issues/6343), [\#6468](https://github.com/matrix-org/synapse/issues/6468), [\#6480](https://github.com/matrix-org/synapse/issues/6480))
- Clean up some unnecessary quotation marks around the codebase. ([\#6362](https://github.com/matrix-org/synapse/issues/6362))
- Complain on startup instead of 500'ing during runtime when `public_baseurl` isn't set when necessary. ([\#6379](https://github.com/matrix-org/synapse/issues/6379))
- Add a test scenario to make sure room history purges don't break `/messages` in the future. ([\#6392](https://github.com/matrix-org/synapse/issues/6392))
- Clarifications for the email configuration settings. ([\#6423](https://github.com/matrix-org/synapse/issues/6423))
- Add more tests to the blacklist when running in worker mode. ([\#6429](https://github.com/matrix-org/synapse/issues/6429))
- Refactor data store layer to support multiple databases in the future. ([\#6454](https://github.com/matrix-org/synapse/issues/6454), [\#6464](https://github.com/matrix-org/synapse/issues/6464), [\#6469](https://github.com/matrix-org/synapse/issues/6469), [\#6487](https://github.com/matrix-org/synapse/issues/6487))
- Port synapse.rest.client.v1 to async/await. ([\#6482](https://github.com/matrix-org/synapse/issues/6482))
- Port synapse.rest.client.v2_alpha to async/await. ([\#6483](https://github.com/matrix-org/synapse/issues/6483))
- Port SyncHandler to async/await. ([\#6484](https://github.com/matrix-org/synapse/issues/6484))
Synapse 1.6.1 (2019-11-28)
==========================
Security updates
----------------
This release includes a security fix ([\#6426](https://github.com/matrix-org/synapse/issues/6426), below). Administrators are encouraged to upgrade as soon as possible.
Bugfixes
--------
- Clean up local threepids from user on account deactivation. ([\#6426](https://github.com/matrix-org/synapse/issues/6426))
- Fix startup error when http proxy is defined. ([\#6421](https://github.com/matrix-org/synapse/issues/6421))
Synapse 1.6.0 (2019-11-26)
==========================
Bugfixes
--------
- Fix phone home stats reporting. ([\#6418](https://github.com/matrix-org/synapse/issues/6418))
Synapse 1.6.0rc2 (2019-11-25)
=============================
Bugfixes
--------
- Fix a bug which could cause the background database update handler for event labels to get stuck in a loop raising exceptions. ([\#6407](https://github.com/matrix-org/synapse/issues/6407))
Synapse 1.6.0rc1 (2019-11-20)
=============================
Features
--------
- Add federation support for cross-signing. ([\#5727](https://github.com/matrix-org/synapse/issues/5727))
- Increase default room version from 4 to 5, thereby enforcing server key validity period checks. ([\#6220](https://github.com/matrix-org/synapse/issues/6220))
- Add support for outbound http proxying via http_proxy/HTTPS_PROXY env vars. ([\#6238](https://github.com/matrix-org/synapse/issues/6238))
- Implement label-based filtering on `/sync` and `/messages` ([MSC2326](https://github.com/matrix-org/matrix-doc/pull/2326)). ([\#6301](https://github.com/matrix-org/synapse/issues/6301), [\#6310](https://github.com/matrix-org/synapse/issues/6310), [\#6340](https://github.com/matrix-org/synapse/issues/6340))
Bugfixes
--------
- Fix LruCache callback deduplication for Python 3.8. Contributed by @V02460. ([\#6213](https://github.com/matrix-org/synapse/issues/6213))
- Remove a room from a server's public rooms list on room upgrade. ([\#6232](https://github.com/matrix-org/synapse/issues/6232), [\#6235](https://github.com/matrix-org/synapse/issues/6235))
- Delete keys from key backup when deleting backup versions. ([\#6253](https://github.com/matrix-org/synapse/issues/6253))
- Make notification of cross-signing signatures work with workers. ([\#6254](https://github.com/matrix-org/synapse/issues/6254))
- Fix exception when remote servers attempt to join a room that they're not allowed to join. ([\#6278](https://github.com/matrix-org/synapse/issues/6278))
- Prevent errors from appearing on Synapse startup if `git` is not installed. ([\#6284](https://github.com/matrix-org/synapse/issues/6284))
- Appservice requests will no longer contain a double slash prefix when the appservice url provided ends in a slash. ([\#6306](https://github.com/matrix-org/synapse/issues/6306))
- Fix `/purge_room` admin API. ([\#6307](https://github.com/matrix-org/synapse/issues/6307))
- Fix the `hidden` field in the `devices` table for SQLite versions prior to 3.23.0. ([\#6313](https://github.com/matrix-org/synapse/issues/6313))
- Fix bug which caused rejected events to be persisted with the wrong room state. ([\#6320](https://github.com/matrix-org/synapse/issues/6320))
- Fix bug where `rc_login` ratelimiting would prematurely kick in. ([\#6335](https://github.com/matrix-org/synapse/issues/6335))
- Prevent the server taking a long time to start up when guest registration is enabled. ([\#6338](https://github.com/matrix-org/synapse/issues/6338))
- Fix bug where upgrading a guest account to a full user would fail when account validity is enabled. ([\#6359](https://github.com/matrix-org/synapse/issues/6359))
- Fix `to_device` stream ID getting reset every time Synapse restarts, which had the potential to cause unable to decrypt errors. ([\#6363](https://github.com/matrix-org/synapse/issues/6363))
- Fix permission denied error when trying to generate a config file with the docker image. ([\#6389](https://github.com/matrix-org/synapse/issues/6389))
Improved Documentation
----------------------
- Contributor documentation now mentions script to run linters. ([\#6164](https://github.com/matrix-org/synapse/issues/6164))
- Modify CAPTCHA_SETUP.md to update the terms `private key` and `public key` to `secret key` and `site key` respectively. Contributed by Yash Jipkate. ([\#6257](https://github.com/matrix-org/synapse/issues/6257))
- Update `INSTALL.md` Email section to talk about `account_threepid_delegates`. ([\#6272](https://github.com/matrix-org/synapse/issues/6272))
- Fix a small typo in `account_threepid_delegates` configuration option. ([\#6273](https://github.com/matrix-org/synapse/issues/6273))
Internal Changes
----------------
- Add a CI job to test the `synapse_port_db` script. ([\#6140](https://github.com/matrix-org/synapse/issues/6140), [\#6276](https://github.com/matrix-org/synapse/issues/6276))
- Convert EventContext to an attrs. ([\#6218](https://github.com/matrix-org/synapse/issues/6218))
- Move `persist_events` out from main data store. ([\#6240](https://github.com/matrix-org/synapse/issues/6240), [\#6300](https://github.com/matrix-org/synapse/issues/6300))
- Reduce verbosity of user/room stats. ([\#6250](https://github.com/matrix-org/synapse/issues/6250))
- Reduce impact of debug logging. ([\#6251](https://github.com/matrix-org/synapse/issues/6251))
- Expose some homeserver functionality to spam checkers. ([\#6259](https://github.com/matrix-org/synapse/issues/6259))
- Change cache descriptors to always return deferreds. ([\#6263](https://github.com/matrix-org/synapse/issues/6263), [\#6291](https://github.com/matrix-org/synapse/issues/6291))
- Fix incorrect comment regarding the functionality of an `if` statement. ([\#6269](https://github.com/matrix-org/synapse/issues/6269))
- Update CI to run `isort` over the `scripts` and `scripts-dev` directories. ([\#6270](https://github.com/matrix-org/synapse/issues/6270))
- Replace every instance of `logger.warn` method with `logger.warning` as the former is deprecated. ([\#6271](https://github.com/matrix-org/synapse/issues/6271), [\#6314](https://github.com/matrix-org/synapse/issues/6314))
- Port replication http server endpoints to async/await. ([\#6274](https://github.com/matrix-org/synapse/issues/6274))
- Port room rest handlers to async/await. ([\#6275](https://github.com/matrix-org/synapse/issues/6275))
- Remove redundant CLI parameters on CI's `flake8` step. ([\#6277](https://github.com/matrix-org/synapse/issues/6277))
- Port `federation_server.py` to async/await. ([\#6279](https://github.com/matrix-org/synapse/issues/6279))
- Port receipt and read markers to async/await. ([\#6280](https://github.com/matrix-org/synapse/issues/6280))
- Split out state storage into separate data store. ([\#6294](https://github.com/matrix-org/synapse/issues/6294), [\#6295](https://github.com/matrix-org/synapse/issues/6295))
- Refactor EventContext for clarity. ([\#6298](https://github.com/matrix-org/synapse/issues/6298))
- Update the version of black used to 19.10b0. ([\#6304](https://github.com/matrix-org/synapse/issues/6304))
- Add some documentation about worker replication. ([\#6305](https://github.com/matrix-org/synapse/issues/6305))
- Move admin endpoints into separate files. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#6308](https://github.com/matrix-org/synapse/issues/6308))
- Document the use of `lint.sh` for code style enforcement & extend it to run on specified paths only. ([\#6312](https://github.com/matrix-org/synapse/issues/6312))
- Add optional python dependencies and dependent binary libraries to snapcraft packaging. ([\#6317](https://github.com/matrix-org/synapse/issues/6317))
- Remove the dependency on psutil and replace functionality with the stdlib `resource` module. ([\#6318](https://github.com/matrix-org/synapse/issues/6318), [\#6336](https://github.com/matrix-org/synapse/issues/6336))
- Improve documentation for EventContext fields. ([\#6319](https://github.com/matrix-org/synapse/issues/6319))
- Add some checks that we aren't using state from rejected events. ([\#6330](https://github.com/matrix-org/synapse/issues/6330))
- Add continuous integration for python 3.8. ([\#6341](https://github.com/matrix-org/synapse/issues/6341))
- Correct spacing/case of various instances of the word "homeserver". ([\#6357](https://github.com/matrix-org/synapse/issues/6357))
- Temporarily blacklist the failing unit test PurgeRoomTestCase.test_purge_room. ([\#6361](https://github.com/matrix-org/synapse/issues/6361))
Synapse 1.5.1 (2019-11-06)
==========================

View File

@@ -1,5 +1,4 @@
Contributing code to Matrix
===========================
# Contributing code to Matrix
Everyone is welcome to contribute code to Matrix
(https://github.com/matrix-org), provided that they are willing to license
@@ -7,18 +6,17 @@ their contributions under the same license as the project itself. We follow a
simple 'inbound=outbound' model for contributions: the act of submitting an
'inbound' contribution means that the contributor agrees to license the code
under the same terms as the project's overall 'outbound' license - in our
case, this is almost always Apache Software License v2 (see LICENSE).
case, this is almost always Apache Software License v2 (see [LICENSE](LICENSE)).
How to contribute
~~~~~~~~~~~~~~~~~
## How to contribute
The preferred and easiest way to contribute changes to Matrix is to fork the
relevant project on github, and then create a pull request to ask us to pull
your changes into our repo
(https://help.github.com/articles/using-pull-requests/)
relevant project on github, and then [create a pull request](
https://help.github.com/articles/using-pull-requests/) to ask us to pull
your changes into our repo.
**The single biggest thing you need to know is: please base your changes on
the develop branch - /not/ master.**
the develop branch - *not* master.**
We use the master branch to track the most recent release, so that folks who
blindly clone the repo and automatically check out master get something that
@@ -30,9 +28,8 @@ use github's pull request workflow to review the contribution, and either ask
you to make any refinements needed or merge it and make them ourselves. The
changes will then land on master when we next do a release.
We use `Buildkite <https://buildkite.com/matrix-dot-org/synapse>`_ for
continuous integration. Buildkite builds need to be authorised by a
maintainer. If your change breaks the build, this will be shown in GitHub, so
We use [Buildkite](https://buildkite.com/matrix-dot-org/synapse) for continuous
integration. If your change breaks the build, this will be shown in GitHub, so
please keep an eye on the pull request for feedback.
To run unit tests in a local development environment, you can use:
@@ -47,41 +44,63 @@ To run unit tests in a local development environment, you can use:
set up PostgreSQL yourself.
Docker images are available for running the integration tests (SyTest) locally,
see the `documentation in the SyTest repo
<https://github.com/matrix-org/sytest/blob/develop/docker/README.md>`_ for more
see the [documentation in the SyTest repo](
https://github.com/matrix-org/sytest/blob/develop/docker/README.md) for more
information.
Code style
~~~~~~~~~~
## Code style
All Matrix projects have a well-defined code-style - and sometimes we've even
got as far as documenting it... For instance, synapse's code style doc lives
at https://github.com/matrix-org/synapse/tree/master/docs/code_style.md.
[here](docs/code_style.md).
To facilitate meeting these criteria you can run `scripts-dev/lint.sh`
locally. Since this runs the tools listed in the above document, you'll need
python 3.6 and to install each tool:
```
# Install the dependencies
pip install -U black flake8 isort
# Run the linter script
./scripts-dev/lint.sh
```
**Note that the script does not just test/check, but also reformats code, so you
may wish to ensure any new code is committed first**. By default this script
checks all files and can take some time; if you alter only certain files, you
might wish to specify paths as arguments to reduce the run-time:
```
./scripts-dev/lint.sh path/to/file1.py path/to/file2.py path/to/folder
```
Before pushing new changes, ensure they don't produce linting errors. Commit any
files that were corrected.
Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
Changelog
~~~~~~~~~
## Changelog
All changes, even minor ones, need a corresponding changelog / newsfragment
entry. These are managed by Towncrier
(https://github.com/hawkowl/towncrier).
entry. These are managed by [Towncrier](https://github.com/hawkowl/towncrier).
To create a changelog entry, make a new file in the ``changelog.d`` file named
in the format of ``PRnumber.type``. The type can be one of the following:
To create a changelog entry, make a new file in the `changelog.d` directory named
in the format of `PRnumber.type`. The type can be one of the following:
* ``feature``.
* ``bugfix``.
* ``docker`` (for updates to the Docker image).
* ``doc`` (for updates to the documentation).
* ``removal`` (also used for deprecations).
* ``misc`` (for internal-only changes).
* `feature`
* `bugfix`
* `docker` (for updates to the Docker image)
* `doc` (for updates to the documentation)
* `removal` (also used for deprecations)
* `misc` (for internal-only changes)
The content of the file is your changelog entry, which should be a short
description of your change in the same style as the rest of our `changelog
<https://github.com/matrix-org/synapse/blob/master/CHANGES.md>`_. The file can
description of your change in the same style as the rest of our [changelog](
https://github.com/matrix-org/synapse/blob/master/CHANGES.md). The file can
contain Markdown formatting, and should end with a full stop ('.') for
consistency.
@@ -89,20 +108,21 @@ Adding credits to the changelog is encouraged, we value your
contributions and would like to have you shouted out in the release notes!
For example, a fix in PR #1234 would have its changelog entry in
``changelog.d/1234.bugfix``, and contain content like "The security levels of
Florbs are now validated when recieved over federation. Contributed by Jane
`changelog.d/1234.bugfix`, and contain content like "The security levels of
Florbs are now validated when received over federation. Contributed by Jane
Matrix.".
Debian changelog
----------------
## Debian changelog
Changes which affect the debian packaging files (in ``debian``) are an
Changes which affect the debian packaging files (in `debian`) are an
exception.
In this case, you will need to add an entry to the debian changelog for the
next release. For this, run the following command::
next release. For this, run the following command:
dch
```
dch
```
This will make up a new version number (if there isn't already an unreleased
version in flight), and open an editor where you can add a new changelog entry.
@@ -114,74 +134,77 @@ directory, you will need both a regular newsfragment *and* an entry in the
debian changelog. (Though typically such changes should be submitted as two
separate pull requests.)
Sign off
~~~~~~~~
## Sign off
In order to have a concrete record that your contribution is intentional
and you agree to license it under the same terms as the project's license, we've adopted the
same lightweight approach that the Linux Kernel
`submitting patches process <https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin>`_, Docker
(https://github.com/docker/docker/blob/master/CONTRIBUTING.md), and many other
[submitting patches process](
https://www.kernel.org/doc/html/latest/process/submitting-patches.html#sign-your-work-the-developer-s-certificate-of-origin),
[Docker](https://github.com/docker/docker/blob/master/CONTRIBUTING.md), and many other
projects use: the DCO (Developer Certificate of Origin:
http://developercertificate.org/). This is a simple declaration that you wrote
the contribution or otherwise have the right to contribute it to Matrix::
the contribution or otherwise have the right to contribute it to Matrix:
Developer Certificate of Origin
Version 1.1
```
Developer Certificate of Origin
Version 1.1
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
660 York Street, Suite 102,
San Francisco, CA 94110 USA
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.
Developer's Certificate of Origin 1.1
Developer's Certificate of Origin 1.1
By making a contribution to this project, I certify that:
By making a contribution to this project, I certify that:
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(a) The contribution was created in whole or in part by me and I
have the right to submit it under the open source license
indicated in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(b) The contribution is based upon previous work that, to the best
of my knowledge, is covered under an appropriate open source
license and I have the right under that license to submit that
work with modifications, whether created in whole or in part
by me, under the same open source license (unless I am
permitted to submit under a different license), as indicated
in the file; or
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(c) The contribution was provided directly to me by some other
person who certified (a), (b) or (c) and I have not modified
it.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
(d) I understand and agree that this project and the contribution
are public and that a record of the contribution (including all
personal information I submit with it, including my sign-off) is
maintained indefinitely and may be redistributed consistent with
this project or the open source license(s) involved.
```
If you agree to this for your contribution, then all that's needed is to
include the line in your commit or pull request comment::
include the line in your commit or pull request comment:
Signed-off-by: Your Name <your@email.example.org>
```
Signed-off-by: Your Name <your@email.example.org>
```
We accept contributions under a legally identifiable name, such as
your name on government documentation or common-law names (names
claimed by legitimate usage or repute). Unfortunately, we cannot
accept anonymous contributions at this time.
Git allows you to add this signoff automatically when using the `-s`
flag to `git commit`, which uses the name and email set in your
`user.name` and `user.email` git configs.
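For example, a signed-off contribution might look like this (a sketch; the name, email address and commit message are placeholders):

```
git config user.name "Your Name"
git config user.email "your@email.example.org"
git commit -s -m "Fix the frobnicator"
```

The `-s` flag appends the `Signed-off-by:` line shown above to your commit message automatically.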
## Conclusion
That's it! Matrix is a very open and collaborative project as you might expect
given our obsession with open communication. If we're going to successfully
matrix together all the fragmented communication technologies out there we are
reliant on contributions and collaboration from the community to do so. So
please get involved - and we hope you have as much fun hacking on Matrix as we
do!


@@ -36,7 +36,7 @@ that your email address is probably `user@example.com` rather than
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.5, 3.6, 3.7 or 3.8.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
Synapse is written in Python but some of the libraries it uses are written in
@@ -109,8 +109,8 @@ Installing prerequisites on Ubuntu or Debian:
```
sudo apt-get install build-essential python3-dev libffi-dev \
python3-pip python3-setuptools sqlite3 \
libssl-dev python3-virtualenv libjpeg-dev libxslt1-dev
```
#### ArchLinux
@@ -133,9 +133,9 @@ sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
sudo yum groupinstall "Development Tools"
```
#### macOS
Installing prerequisites on macOS:
```
xcode-select --install
@@ -144,6 +144,14 @@ sudo pip install virtualenv
brew install pkg-config libffi
```
On macOS Catalina (10.15) you may need to explicitly install OpenSSL
via brew and inform `pip` about it so that `psycopg2` builds:
```
brew install openssl@1.1
export LDFLAGS=-L/usr/local/Cellar/openssl\@1.1/1.1.1d/lib/
```
#### OpenSUSE
Installing prerequisites on openSUSE:
@@ -413,16 +421,18 @@ For a more detailed guide to configuring your server for federation, see
## Email
It is desirable for Synapse to have the capability to send email. This allows
Synapse to send password reset emails, send verifications when an email address
is added to a user's account, and send email notifications to users when they
receive new messages.
To configure an SMTP server for Synapse, modify the configuration section
headed `email`, and be sure to have at least the `smtp_host`, `smtp_port`
and `notif_from` fields filled out. You may also need to set `smtp_user`,
`smtp_pass`, and `require_transport_security`.
If email is not configured, password reset, registration and notifications via
email will be disabled.
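A minimal sketch of such a configuration (all values are placeholders; see the sample configuration file for the full set of options):

```yaml
email:
  smtp_host: mail.example.com
  smtp_port: 587
  smtp_user: "exampleusername"
  smtp_pass: "examplepassword"
  require_transport_security: true
  notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
```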
## Registering a user


@@ -393,4 +393,4 @@ something like the following in their logs::
2019-09-11 19:32:04,271 - synapse.federation.transport.server - 288 - WARNING - GET-11752 - authenticate_request failed: 401: Invalid signature for server <server> with key ed25519:a_EqML: Unable to verify signature for <server>
This is normally caused by a misconfiguration in your reverse-proxy. See
`<docs/reverse_proxy.md>`_ and double-check that your settings are correct.


@@ -75,6 +75,23 @@ for example:
wget https://packages.matrix.org/debian/pool/main/m/matrix-synapse-py3/matrix-synapse-py3_1.3.0+stretch1_amd64.deb
dpkg -i matrix-synapse-py3_1.3.0+stretch1_amd64.deb
Upgrading to v1.7.0
===================
In an attempt to configure Synapse in a privacy preserving way, the default
behaviours of ``allow_public_rooms_without_auth`` and
``allow_public_rooms_over_federation`` have been inverted. This means that by
default, only authenticated users querying the Client/Server API will be able
to query the room directory, and relatedly that the server will not share
room directory information with other servers over federation.
If your installation does not explicitly set these settings one way or the other
and you want either setting to be ``true`` then it will be necessary to update
your homeserver configuration file accordingly.
For more details on the surrounding context see our `explainer
<https://matrix.org/blog/2019/11/09/avoiding-unwelcome-visitors-on-private-matrix-servers>`_.
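For example, to retain the previous behaviour, set both options explicitly in
your homeserver configuration file::

    allow_public_rooms_without_auth: true
    allow_public_rooms_over_federation: true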
Upgrading to v1.5.0
===================


@@ -78,7 +78,7 @@ class InputOutput(object):
m = re.match("^join (\S+)$", line)
if m:
# The `sender` wants to join a room.
(room_name,) = m.groups()
self.print_line("%s joining %s" % (self.user, room_name))
self.server.join_room(room_name, self.user, self.user)
# self.print_line("OK.")
@@ -105,7 +105,7 @@ class InputOutput(object):
m = re.match("^backfill (\S+)$", line)
if m:
# we want to backfill a room
(room_name,) = m.groups()
self.print_line("backfill %s" % room_name)
self.server.backfill(room_name)
return

contrib/systemd/README.md Normal file

@@ -0,0 +1,17 @@
# Setup Synapse with Systemd
This is a setup for managing synapse with a user contributed systemd unit
file. It provides a `matrix-synapse` systemd unit file that should be tailored
to your installation, in accordance with the
[installation instructions](../../INSTALL.md).
## Setup
1. Under the service section, ensure the `User` variable matches the user
you installed synapse under and wish to run it as.
2. Under the service section, ensure the `WorkingDirectory` variable matches
where you have installed synapse.
3. Under the service section, ensure the `ExecStart` variable matches the
appropriate locations of your installation.
4. Copy the `matrix-synapse.service` to `/etc/systemd/system/`
5. Start Synapse: `sudo systemctl start matrix-synapse`
6. Verify Synapse is running: `sudo systemctl status matrix-synapse`
7. *optional* Enable Synapse to start at system boot: `sudo systemctl enable matrix-synapse`
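A typical session might look like this (a sketch; run from the directory
containing the unit file, and note that `systemctl daemon-reload` is generally
needed before systemd will see a newly copied unit):

```
sudo cp matrix-synapse.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start matrix-synapse
sudo systemctl status matrix-synapse
sudo systemctl enable matrix-synapse
```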


@@ -4,8 +4,11 @@
# systemctl enable matrix-synapse
# systemctl start matrix-synapse
#
# This assumes that Synapse has been installed by a user named
# synapse.
#
# This assumes that Synapse has been installed in a virtualenv in
# the user's home directory: `/home/synapse/synapse/env`.
#
# **NOTE:** This is an example service file that may change in the future. If you
# wish to use this please copy rather than symlink it.
@@ -22,8 +25,8 @@ Restart=on-abort
User=synapse
Group=nogroup
WorkingDirectory=/home/synapse/synapse
ExecStart=/home/synapse/synapse/env/bin/python -m synapse.app.homeserver --config-path=/home/synapse/synapse/homeserver.yaml
SyslogIdentifier=matrix-synapse
# adjust the cache factor if necessary

debian/changelog vendored

@@ -1,3 +1,33 @@
matrix-synapse-py3 (1.7.2) stable; urgency=medium
* New synapse release 1.7.2.
-- Synapse Packaging team <packages@matrix.org> Fri, 20 Dec 2019 10:56:50 +0000
matrix-synapse-py3 (1.7.1) stable; urgency=medium
* New synapse release 1.7.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 18 Dec 2019 09:37:59 +0000
matrix-synapse-py3 (1.7.0) stable; urgency=medium
* New synapse release 1.7.0.
-- Synapse Packaging team <packages@matrix.org> Fri, 13 Dec 2019 10:19:38 +0000
matrix-synapse-py3 (1.6.1) stable; urgency=medium
* New synapse release 1.6.1.
-- Synapse Packaging team <packages@matrix.org> Thu, 28 Nov 2019 11:10:40 +0000
matrix-synapse-py3 (1.6.0) stable; urgency=medium
* New synapse release 1.6.0.
-- Synapse Packaging team <packages@matrix.org> Tue, 26 Nov 2019 12:15:40 +0000
matrix-synapse-py3 (1.5.1) stable; urgency=medium
* New synapse release 1.5.1.


@@ -101,7 +101,7 @@ is suitable for local testing, but for any practical use, you will either need
to use a reverse proxy, or configure Synapse to expose an HTTPS port.
For documentation on using a reverse proxy, see
https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
For more information on enabling TLS support in synapse itself, see
https://github.com/matrix-org/synapse/blob/master/INSTALL.md#tls-certificates. Of
@@ -130,3 +130,15 @@ docker run -it --rm \
This will generate the same configuration file as the legacy mode used, but
will store it in `/data/homeserver.yaml` instead of a temporary location. You
can then use it as shown above at [Running synapse](#running-synapse).
## Building the image
If you need to build the image from a Synapse checkout, use the following `docker
build` command from the repo's root:
```
docker build -t matrixdotorg/synapse -f docker/Dockerfile .
```
You can choose to build a different docker image by changing the value of the `-f` flag to
point to another Dockerfile.


@@ -169,11 +169,11 @@ def run_generate_config(environ, ownership):
# log("running %s" % (args, ))
if ownership is not None:
args = ["su-exec", ownership] + args
os.execv("/sbin/su-exec", args)
# make sure that synapse has perms to write to the data dir.
subprocess.check_output(["chown", ownership, data_dir])
args = ["su-exec", ownership] + args
os.execv("/sbin/su-exec", args)
else:
os.execv("/usr/local/bin/python", args)
@@ -217,8 +217,9 @@ def main(args, environ):
# backwards-compatibility generate-a-config-on-the-fly mode
if "SYNAPSE_CONFIG_PATH" in environ:
error(
"SYNAPSE_SERVER_NAME and SYNAPSE_CONFIG_PATH are mutually exclusive "
"except in `generate` or `migrate_config` mode."
"SYNAPSE_SERVER_NAME can only be combined with SYNAPSE_CONFIG_PATH "
"in `generate` or `migrate_config` mode. To start synapse using a "
"config file, unset the SYNAPSE_SERVER_NAME environment variable."
)
config_path = "/compiled/homeserver.yaml"


@@ -4,7 +4,7 @@ The captcha mechanism used is Google's ReCaptcha. This requires API keys from Go
## Getting keys
Requires a site/secret key pair from:
<https://developers.google.com/recaptcha/>
@@ -15,8 +15,8 @@ Must be a reCAPTCHA v2 key using the "I'm not a robot" Checkbox option
The keys are a config option on the home server config. If they are not
visible, you can generate them via `--generate-config`. Set the following value:
recaptcha_public_key: YOUR_SITE_KEY
recaptcha_private_key: YOUR_SECRET_KEY
In addition, you MUST enable captchas via:


@@ -21,3 +21,20 @@ It returns a JSON body like the following:
]
}
```
# Quarantine media in a room
This API 'quarantines' all the media in a room.
The API is:
```
POST /_synapse/admin/v1/quarantine_media/<room_id>
{}
```
Quarantining media means that it is marked as inaccessible to users. It applies
to any local media, and any locally-cached copies of remote media.
The media file itself (and any thumbnails) is not deleted from the server.
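For example, with `curl` (a sketch; this assumes the homeserver is reachable on `localhost:8008` and that `<admin_access_token>` belongs to a server admin):

```
curl -X POST \
    --header "Authorization: Bearer <admin_access_token>" \
    --data '{}' \
    "http://localhost:8008/_synapse/admin/v1/quarantine_media/<room_id>"
```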


@@ -1,3 +1,48 @@
List Accounts
=============
This API returns all local user accounts.
The API is::
GET /_synapse/admin/v2/users?from=0&limit=10&guests=false
including an ``access_token`` of a server admin.
The parameters ``from`` and ``limit`` are required only for pagination.
By default, a ``limit`` of 100 is used.
The parameter ``user_id`` can be used to select only users with user ids that
contain this value.
The parameter ``guests=false`` can be used to exclude guest users;
the default is to include guest users.
The parameter ``deactivated=true`` can be used to include deactivated users;
the default is to exclude deactivated users.
If the endpoint does not return a ``next_token`` then there are no more users left.
It returns a JSON body like the following:
.. code:: json
{
"users": [
{
"name": "<user_id1>",
"password_hash": "<password_hash1>",
"is_guest": 0,
"admin": 0,
"user_type": null,
"deactivated": 0
}, {
"name": "<user_id2>",
"password_hash": "<password_hash2>",
"is_guest": 0,
"admin": 1,
"user_type": null,
"deactivated": 0
}
],
"next_token": "100"
}
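For example, with ``curl`` (a sketch; this assumes the homeserver is reachable
on ``localhost:8008`` and that the token belongs to a server admin)::

    curl --header "Authorization: Bearer <admin_access_token>" \
        "http://localhost:8008/_synapse/admin/v2/users?from=0&limit=10&guests=false"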
Query Account
=============


@@ -54,15 +54,16 @@ pid_file: DATADIR/homeserver.pid
#
#require_auth_for_profile_requests: true
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
#
#allow_public_rooms_without_auth: true
# If set to 'true', allows any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'false'.
#
#allow_public_rooms_over_federation: true
# The default room version for newly created rooms.
#
@@ -72,7 +73,7 @@ pid_file: DATADIR/homeserver.pid
# For example, for room version 1, default_room_version should be set
# to "1".
#
#default_room_version: "4"
#default_room_version: "5"
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
#
@@ -287,7 +288,7 @@ listeners:
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained homeserver Settings
#
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
@@ -328,6 +329,69 @@ listeners:
#
#user_ips_max_age: 14d
# Message retention policy at the server level.
#
# Room admins and mods can define a retention period for their rooms using the
# 'm.room.retention' state event, and server admins can cap this period by setting
# the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
#
# If this feature is enabled, Synapse will regularly look for and purge events
# which are older than the room's maximum retention period. Synapse will also
# filter events received over federation so that events that should have been
# purged are ignored and not stored again.
#
retention:
# The message retention policies feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# Default retention policy. If set, Synapse will apply it to rooms that lack the
# 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
# matter much because Synapse doesn't take it into account yet.
#
#default_policy:
# min_lifetime: 1d
# max_lifetime: 1y
# Retention policy limits. If set, a user won't be able to send a
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
# that's not within this range. This is especially useful in closed federations,
# in which server admins can make sure every federating server applies the same
# rules.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
# Server admins can define the settings of the background jobs purging the
# events which lifetime has expired under the 'purge_jobs' section.
#
# If no configuration is provided, a single job will be set up to delete expired
# events in every room daily.
#
# Each job's configuration defines which range of message lifetimes the job
# takes care of. For example, if 'shortest_max_lifetime' is '2d' and
# 'longest_max_lifetime' is '3d', the job will handle purging expired events in
# rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
# lower than or equal to 3 days. Both the minimum and the maximum value of a
# range are optional, e.g. a job with no 'shortest_max_lifetime' and a
# 'longest_max_lifetime' of '3d' will handle every room with a retention policy
# which 'max_lifetime' is lower than or equal to three days.
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
# of outdated messages on a very frequent basis (e.g. every 5min), but where we
# don't want that purge to be performed by a job that iterates over every room
# it knows, which would be quite heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 5m
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 24h
## TLS ##
@@ -743,11 +807,11 @@ uploads_path: "DATADIR/uploads"
## Captcha ##
# See docs/CAPTCHA_SETUP for full details of configuring this.
# This homeserver's ReCAPTCHA public key.
#
#recaptcha_public_key: "YOUR_PUBLIC_KEY"
# This homeserver's ReCAPTCHA private key.
#
#recaptcha_private_key: "YOUR_PRIVATE_KEY"
@@ -955,7 +1019,7 @@ uploads_path: "DATADIR/uploads"
# If a delegate is specified, the config option public_baseurl must also be filled out.
#
account_threepid_delegates:
#email: https://example.com # Delegate email sending to example.com
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
# Users who register on this homeserver will automatically be joined
@@ -1270,8 +1334,23 @@ password_config:
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
#
# # notif_from defines the "From" address to use when sending emails.
# # It must be set if email sending is enabled.
# #
# # The placeholder '%(app)s' will be replaced by the application name,
# # which is normally 'app_name' (below), but may be overridden by the
# # Matrix client application.
# #
# # Note that the placeholder must be written '%(app)s', including the
# # trailing 's'.
# #
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
#
# # app_name defines the default value for '%(app)s' in notif_from. It
# # defaults to 'Matrix'.
# #
# #app_name: my_branded_matrix_server
#
# # Enable email notifications by default
# #


@@ -0,0 +1,43 @@
# Example log config file for synapse.
#
# This is a YAML file containing a standard Python logging configuration
# dictionary. See [1] for details on the valid settings.
#
# [1]: https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
version: 1
formatters:
precise:
format: '%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(request)s - %(message)s'
filters:
context:
(): synapse.logging.context.LoggingContextFilter
request: ""
handlers:
file:
class: logging.handlers.RotatingFileHandler
formatter: precise
filename: /home/rav/work/synapse/homeserver.log
maxBytes: 104857600
backupCount: 10
filters: [context]
encoding: utf8
console:
class: logging.StreamHandler
formatter: precise
filters: [context]
loggers:
synapse.storage.SQL:
# beware: increasing this to DEBUG will make synapse log sensitive
# information such as access tokens.
level: INFO
root:
level: INFO
handlers: [file, console]
disable_existing_loggers: false


@@ -199,7 +199,20 @@ client (C):
#### REPLICATE (C)
Asks the server to replicate a given stream. The syntax is:
```
REPLICATE <stream_name> <token>
```
Where `<token>` may be either:
* a numeric stream_id to stream updates since (exclusive)
* `NOW` to stream all subsequent updates.
The `<stream_name>` is the name of a replication stream to subscribe
to (see [here](../synapse/replication/tcp/streams/_base.py) for a list
of streams). It can also be `ALL` to subscribe to all known streams,
in which case the `<token>` must be set to `NOW`.
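For example (a sketch; `events` is used here as an illustrative stream name from the list linked above):

```
REPLICATE events NOW
```

This subscribes to the `events` stream and streams every subsequent update from the current position onwards.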
#### USER_SYNC (C)


@@ -7,7 +7,6 @@ who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to execute the SQL [here](../synapse/storage/data_stores/main/schema/delta/53/user_dir_populate.sql)
and then restart synapse. This should then start a background task to
flush the current tables and regenerate the directory.


@@ -196,7 +196,7 @@ Handles the media repository. It can handle all endpoints starting with:
/_matrix/media/
... and the following regular expressions matching media-specific administration APIs:
^/_synapse/admin/v1/purge_media_cache$
^/_synapse/admin/v1/room/.*/media$
@@ -206,6 +206,18 @@ You should also set `enable_media_repo: False` in the shared configuration
file to stop the main synapse running background jobs related to managing the
media repository.
In the `media_repository` worker configuration file, configure the http listener to
expose the `media` resource. For example:
```yaml
worker_listeners:
- type: http
port: 8085
resources:
- names:
- media
```
Note this worker cannot be load-balanced: only one instance should be active.
### `synapse.app.client_reader`


@@ -1,8 +1,11 @@
[mypy]
namespace_packages = True
plugins = mypy_zope:plugin
follow_imports = normal
check_untyped_defs = True
show_error_codes = True
show_traceback = True
mypy_path = stubs
[mypy-zope]
ignore_missing_imports = True


@@ -20,11 +20,13 @@ from concurrent.futures import ThreadPoolExecutor
DISTS = (
"debian:stretch",
"debian:buster",
"debian:bullseye",
"debian:sid",
"ubuntu:xenial",
"ubuntu:bionic",
"ubuntu:cosmic",
"ubuntu:disco",
"ubuntu:eoan",
)
DESC = '''\


@@ -27,7 +27,7 @@ class Store(object):
"_store_pdu_reference_hash_txn"
]
_store_prev_pdu_hash_txn = SignatureStore.__dict__["_store_prev_pdu_hash_txn"]
simple_insert_txn = SQLBaseStore.__dict__["simple_insert_txn"]
store = Store()


@@ -7,7 +7,15 @@
set -e
if [ $# -ge 1 ]
then
files=$*
else
files="synapse tests scripts-dev scripts"
fi
echo "Linting these locations: $files"
isort -y -rc $files
flake8 $files
python3 -m black $files
./scripts-dev/config-lint.sh

scripts-dev/update_database Executable file

@@ -0,0 +1,121 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import sys
import yaml
from twisted.internet import defer, reactor
from synapse.config.homeserver import HomeServerConfig
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
logger = logging.getLogger("update_database")
class MockHomeserver(HomeServer):
DATASTORE_CLASS = DataStore
def __init__(self, config, database_engine, db_conn, **kwargs):
super(MockHomeserver, self).__init__(
config.server_name,
reactor=reactor,
config=config,
database_engine=database_engine,
**kwargs
)
self.database_engine = database_engine
self.db_conn = db_conn
def get_db_conn(self):
return self.db_conn
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description=(
"Updates a synapse database to the latest schema and runs background updates"
" on it."
)
)
parser.add_argument("-v", action="store_true")
parser.add_argument(
"--database-config",
type=argparse.FileType("r"),
required=True,
help="A database config file for either a SQLite3 database or a PostgreSQL one.",
)
args = parser.parse_args()
logging_config = {
"level": logging.DEBUG if args.v else logging.INFO,
"format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s",
}
logging.basicConfig(**logging_config)
# Load, process and sanity-check the config.
hs_config = yaml.safe_load(args.database_config)
if "database" not in hs_config:
sys.stderr.write("The configuration file must have a 'database' section.\n")
sys.exit(4)
config = HomeServerConfig()
config.parse_config_dict(hs_config, "", "")
# Create the database engine and a connection to it.
database_engine = create_engine(config.database_config)
db_conn = database_engine.module.connect(
**{
k: v
for k, v in config.database_config.get("args", {}).items()
if not k.startswith("cp_")
}
)
# Update the database to the latest schema.
prepare_database(db_conn, database_engine, config=config)
db_conn.commit()
# Instantiate and initialise the homeserver object.
hs = MockHomeserver(
config, database_engine, db_conn, db_config=config.database_config,
)
# setup instantiates the store within the homeserver object.
hs.setup()
store = hs.get_datastore()
@defer.inlineCallbacks
def run_background_updates():
yield store.db.updates.run_background_updates(sleep=False)
# Stop the reactor to exit the script once every background update is run.
reactor.stop()
# Apply all background updates on the database.
reactor.callWhenRunning(
lambda: run_as_background_process("background_updates", run_background_updates)
)
reactor.run()
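Usage is then, for example (a sketch; `homeserver.yaml` stands in for any configuration file containing a `database` section):

```
./scripts-dev/update_database --database-config homeserver.yaml
```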


@@ -72,7 +72,7 @@ def move_media(origin_server, file_id, src_paths, dest_paths):
# check that the original exists
original_file = src_paths.remote_media_filepath(origin_server, file_id)
if not os.path.exists(original_file):
logger.warning(
"Original for %s/%s (%s) does not exist",
origin_server,
file_id,


@@ -47,6 +47,7 @@ from synapse.storage.data_stores.main.media_repository import (
from synapse.storage.data_stores.main.registration import (
RegistrationBackgroundUpdateStore,
)
from synapse.storage.data_stores.main.room import RoomBackgroundUpdateStore
from synapse.storage.data_stores.main.roommember import RoomMemberBackgroundUpdateStore
from synapse.storage.data_stores.main.search import SearchBackgroundUpdateStore
from synapse.storage.data_stores.main.state import StateBackgroundUpdateStore
@@ -54,6 +55,7 @@ from synapse.storage.data_stores.main.stats import StatsStore
from synapse.storage.data_stores.main.user_directory import (
UserDirectoryBackgroundUpdateStore,
)
from synapse.storage.database import Database
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
from synapse.util import Clock
@@ -131,54 +133,22 @@ class Store(
EventsBackgroundUpdatesStore,
MediaRepositoryBackgroundUpdateStore,
RegistrationBackgroundUpdateStore,
RoomBackgroundUpdateStore,
RoomMemberBackgroundUpdateStore,
SearchBackgroundUpdateStore,
StateBackgroundUpdateStore,
UserDirectoryBackgroundUpdateStore,
StatsStore,
):
def execute(self, f, *args, **kwargs):
return self.db.runInteraction(f.__name__, f, *args, **kwargs)
def execute_sql(self, sql, *args):
def r(txn):
txn.execute(sql, args)
return txn.fetchall()
return self.runInteraction("execute_sql", r)
return self.db.runInteraction("execute_sql", r)
def insert_many_txn(self, txn, table, headers, rows):
sql = "INSERT INTO %s (%s) VALUES (%s)" % (
@@ -221,7 +191,7 @@ class Porter(object):
def setup_table(self, table):
if table in APPEND_ONLY_TABLES:
# It's safe to just carry on inserting.
row = yield self.postgres_store.db.simple_select_one(
table="port_from_sqlite3",
keyvalues={"table_name": table},
retcols=("forward_rowid", "backward_rowid"),
@@ -231,12 +201,14 @@ class Porter(object):
total_to_port = None
if row is None:
if table == "sent_transactions":
(
forward_chunk,
already_ported,
total_to_port,
) = yield self._setup_sent_transactions()
backward_chunk = 0
else:
yield self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": table,
@@ -266,7 +238,7 @@ class Porter(object):
yield self.postgres_store.execute(delete_all)
yield self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={"table_name": table, "forward_rowid": 1, "backward_rowid": 0},
)
@@ -320,7 +292,7 @@ class Porter(object):
if table == "user_directory_stream_pos":
# We need to make sure there is a single row, `(X, null)`, as that is
# what synapse expects to be there.
yield self.postgres_store.db.simple_insert(
table=table, values={"stream_id": None}
)
self.progress.update(table, table_size) # Mark table as done
@@ -361,7 +333,9 @@ class Porter(object):
return headers, forward_rows, backward_rows
headers, frows, brows = yield self.sqlite_store.db.runInteraction(
"select", r
)
if frows or brows:
if frows:
@@ -375,7 +349,7 @@ class Porter(object):
def insert(txn):
self.postgres_store.insert_many_txn(txn, table, headers[1:], rows)
self.postgres_store.db.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": table},
@@ -414,7 +388,7 @@ class Porter(object):
return headers, rows
headers, rows = yield self.sqlite_store.db.runInteraction("select", r)
if rows:
forward_chunk = rows[-1][0] + 1
@@ -431,8 +405,8 @@ class Porter(object):
rows_dict = []
for row in rows:
d = dict(zip(headers, row))
if "\0" in d['value']:
logger.warn('dropping search row %s', d)
if "\0" in d["value"]:
logger.warning("dropping search row %s", d)
else:
rows_dict.append(d)
@@ -452,7 +426,7 @@ class Porter(object):
],
)
self.postgres_store.db.simple_update_one_txn(
txn,
table="port_from_sqlite3",
keyvalues={"table_name": "event_search"},
@@ -502,17 +476,14 @@ class Porter(object):
self.progress.set_state("Preparing %s" % config["name"])
conn = self.setup_db(config, engine)
db_pool = adbapi.ConnectionPool(config["name"], **config["args"])
hs = MockHomeserver(self.hs_config, engine, conn, db_pool)
store = Store(Database(hs), conn, hs)
yield store.db.runInteraction(
"%s_engine.check_database" % config["name"], engine.check_database,
)
return store
@@ -520,7 +491,9 @@ class Porter(object):
@defer.inlineCallbacks
def run_background_updates_on_postgres(self):
# Manually apply all background updates on the PostgreSQL database.
postgres_ready = (
yield self.postgres_store.db.updates.has_completed_background_updates()
)
if not postgres_ready:
# Only say that we're running background updates when there are background
@@ -528,9 +501,9 @@ class Porter(object):
self.progress.set_state("Running background updates on PostgreSQL")
while not postgres_ready:
yield self.postgres_store.db.updates.do_next_background_update(100)
postgres_ready = yield (
self.postgres_store.db.updates.has_completed_background_updates()
)
@defer.inlineCallbacks
@@ -539,7 +512,9 @@ class Porter(object):
self.sqlite_store = yield self.build_db_store(self.sqlite_config)
# Check if all background updates are done, abort if not.
updates_complete = (
yield self.sqlite_store.db.updates.has_completed_background_updates()
)
if not updates_complete:
sys.stderr.write(
"Pending background updates exist in the SQLite3 database."
@@ -580,22 +555,22 @@ class Porter(object):
)
try:
yield self.postgres_store.runInteraction("alter_table", alter_table)
yield self.postgres_store.db.runInteraction("alter_table", alter_table)
except Exception:
# On Error Resume Next
pass
yield self.postgres_store.db.runInteraction(
"create_port_table", create_port_table
)
# Step 2. Get tables.
self.progress.set_state("Fetching tables")
sqlite_tables = yield self.sqlite_store.db.simple_select_onecol(
table="sqlite_master", keyvalues={"type": "table"}, retcol="name"
)
postgres_tables = yield self.postgres_store.db.simple_select_onecol(
table="information_schema.tables",
keyvalues={},
retcol="distinct table_name",
@@ -647,7 +622,7 @@ class Porter(object):
if isinstance(col, bytes):
return bytearray(col)
elif isinstance(col, string_types) and "\0" in col:
logger.warning(
"DROPPING ROW: NUL value in table %s col %s: %r",
table,
headers[j],
@@ -685,11 +660,11 @@ class Porter(object):
rows = txn.fetchall()
headers = [column[0] for column in txn.description]
ts_ind = headers.index("ts")
return headers, [r for r in rows if r[ts_ind] < yesterday]
headers, rows = yield self.sqlite_store.db.runInteraction("select", r)
rows = self._convert_rows("sent_transactions", headers, rows)
@@ -722,7 +697,7 @@ class Porter(object):
next_chunk = yield self.sqlite_store.execute(get_start_id)
next_chunk = max(max_inserted_rowid + 1, next_chunk)
yield self.postgres_store.db.simple_insert(
table="port_from_sqlite3",
values={
"table_name": "sent_transactions",
@@ -735,7 +710,7 @@ class Porter(object):
txn.execute(
"SELECT count(*) FROM sent_transactions" " WHERE ts >= ?", (yesterday,)
)
(size,) = txn.fetchone()
return int(size)
remaining_count = yield self.sqlite_store.execute(get_sent_table_size)
@@ -782,10 +757,13 @@ class Porter(object):
def _setup_state_group_id_seq(self):
def r(txn):
txn.execute("SELECT MAX(id) FROM state_groups")
curr_id = txn.fetchone()[0]
if not curr_id:
return
next_id = curr_id + 1
txn.execute("ALTER SEQUENCE state_group_id_seq RESTART WITH %s", (next_id,))
return self.postgres_store.db.runInteraction("setup_state_group_id_seq", r)
##############################################
@@ -866,7 +844,7 @@ class CursesProgress(Progress):
duration = int(now) - int(self.start_time)
minutes, seconds = divmod(duration, 60)
duration_str = "%02dm %02ds" % (minutes, seconds)
if self.finished:
status = "Time spent: %s (Done!)" % (duration_str,)
@@ -876,7 +854,7 @@ class CursesProgress(Progress):
left = float(self.total_remaining) / self.total_processed
est_remaining = (int(now) - self.start_time) * left
est_remaining_str = "%02dm %02ds remaining" % divmod(est_remaining, 60)
else:
est_remaining_str = "Unknown"
status = "Time spent: %s (est. remaining: %s)" % (
@@ -962,7 +940,7 @@ if __name__ == "__main__":
description="A script to port an existing synapse SQLite database to"
" a new PostgreSQL database."
)
parser.add_argument("-v", action='store_true')
parser.add_argument("-v", action="store_true")
parser.add_argument(
"--sqlite-database",
required=True,
@@ -971,12 +949,12 @@ if __name__ == "__main__":
)
parser.add_argument(
"--postgres-config",
type=argparse.FileType("r"),
required=True,
help="The database config file for the PostgreSQL database",
)
parser.add_argument(
"--curses", action='store_true', help="display a curses based progress UI"
"--curses", action="store_true", help="display a curses based progress UI"
)
parser.add_argument(
@@ -1052,3 +1030,4 @@ if __name__ == "__main__":
if end_error_exec_info:
exc_type, exc_value, exc_traceback = end_error_exec_info
traceback.print_exception(exc_type, exc_value, exc_traceback)
sys.exit(5)


@@ -20,3 +20,23 @@ parts:
source: .
plugin: python
python-version: python3
python-packages:
- '.[all]'
build-packages:
- libffi-dev
- libturbojpeg0-dev
- libssl-dev
- libxslt1-dev
- libpq-dev
- zlib1g-dev
stage-packages:
- libasn1-8-heimdal
- libgssapi3-heimdal
- libhcrypto4-heimdal
- libheimbase1-heimdal
- libheimntlm0-heimdal
- libhx509-5-heimdal
- libkrb5-26-heimdal
- libldap-2.4-2
- libpq5
- libsasl2-2


@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
""" This is a reference implementation of a Matrix home server.
""" This is a reference implementation of a Matrix homeserver.
"""
import os
@@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.5.1"
__version__ = "1.7.2"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when


@@ -144,8 +144,8 @@ def main():
logging.captureWarnings(True)
parser = argparse.ArgumentParser(
description="Used to register new users with a given home server when"
" registration has been disabled. The home server must be"
description="Used to register new users with a given homeserver when"
" registration has been disabled. The homeserver must be"
" configured with the 'registration_shared_secret' option"
" set."
)
@@ -202,7 +202,7 @@ def main():
"server_url",
default="https://localhost:8448",
nargs="?",
help="URL to use to talk to the home server. Defaults to "
help="URL to use to talk to the homeserver. Defaults to "
" 'https://localhost:8448'.",
)


@@ -14,6 +14,7 @@
# limitations under the License.
import logging
from typing import Dict, Tuple
from six import itervalues
@@ -25,13 +26,7 @@ from twisted.internet import defer
import synapse.logging.opentracing as opentracing
import synapse.types
from synapse import event_auth
from synapse.api.constants import EventTypes, LimitBlockingTypes, Membership, UserTypes
from synapse.api.errors import (
AuthError,
Codes,
@@ -497,7 +492,7 @@ class Auth(object):
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token.")
logger.warning("Unrecognised appservice access token.")
raise InvalidClientTokenError()
request.authenticated_entity = service.sender
return defer.succeed(service)
@@ -513,71 +508,43 @@ class Auth(object):
"""
return self.store.is_server_admin(user)
def compute_auth_events(
self,
event,
current_state_ids: Dict[Tuple[str, str], str],
for_verification: bool = False,
):
"""Given an event and current state return the list of event IDs used
to auth an event.
If `for_verification` is False then only return auth events that
should be added to the event's `auth_events`.
Returns:
defer.Deferred(list[str]): List of event IDs.
"""
if event.type == EventTypes.Create:
return defer.succeed([])
# Currently we ignore the `for_verification` flag even though there are
# some situations where we can drop particular auth events when adding
# to the event's `auth_events` (e.g. joins pointing to previous joins
# when room is publically joinable). Dropping event IDs has the
# advantage that the auth chain for the room grows slower, but we use
# the auth chain in state resolution v2 to order events, which means
# care must be taken if dropping events to ensure that it doesn't
# introduce undesirable "state reset" behaviour.
#
# All of which sounds a bit tricky so we don't bother for now.
auth_ids = []
for etype, state_key in event_auth.auth_types_for_event(event):
auth_ev_id = current_state_ids.get((etype, state_key))
if auth_ev_id:
auth_ids.append(auth_ev_id)
return defer.succeed(auth_ids)
@defer.inlineCallbacks
def check_can_change_room_list(self, room_id, user):


@@ -1,7 +1,8 @@
# -*- coding: utf-8 -*-
# Copyright 2014-2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018-2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -94,6 +95,8 @@ class EventTypes(object):
ServerACL = "m.room.server_acl"
Pinned = "m.room.pinned_events"
Retention = "m.room.retention"
class RejectedReason(object):
AUTH_ERROR = "auth_error"
@@ -138,3 +141,14 @@ class LimitBlockingTypes(object):
MONTHLY_ACTIVE_USER = "monthly_active_user"
HS_DISABLED = "hs_disabled"
class EventContentFields(object):
"""Fields found in events' content, regardless of type."""
# Labels for the event, cf https://github.com/matrix-org/matrix-doc/pull/2326
LABELS = "org.matrix.labels"
# Timestamp to delete the event after
# cf https://github.com/matrix-org/matrix-doc/pull/2228
SELF_DESTRUCT_AFTER = "org.matrix.self_destruct_after"


@@ -457,7 +457,7 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
class FederationError(RuntimeError):
""" This class is used to inform remote home servers about erroneous
""" This class is used to inform remote homeservers about erroneous
PDUs they sent us.
FATAL: The remote server could not interpret the source event.


@@ -1,5 +1,8 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2017 Vector Creations Ltd
# Copyright 2018-2019 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -20,6 +23,7 @@ from jsonschema import FormatChecker
from twisted.internet import defer
from synapse.api.constants import EventContentFields
from synapse.api.errors import SynapseError
from synapse.storage.presence import UserPresenceState
from synapse.types import RoomID, UserID
@@ -66,6 +70,10 @@ ROOM_EVENT_FILTER_SCHEMA = {
"contains_url": {"type": "boolean"},
"lazy_load_members": {"type": "boolean"},
"include_redundant_members": {"type": "boolean"},
# Include or exclude events with the provided labels.
# cf https://github.com/matrix-org/matrix-doc/pull/2326
"org.matrix.labels": {"type": "array", "items": {"type": "string"}},
"org.matrix.not_labels": {"type": "array", "items": {"type": "string"}},
},
}
@@ -259,6 +267,9 @@ class Filter(object):
self.contains_url = self.filter_json.get("contains_url", None)
self.labels = self.filter_json.get("org.matrix.labels", None)
self.not_labels = self.filter_json.get("org.matrix.not_labels", [])
def filters_all_types(self):
return "*" in self.not_types
@@ -282,6 +293,7 @@ class Filter(object):
room_id = None
ev_type = "m.presence"
contains_url = False
labels = []
else:
sender = event.get("sender", None)
if not sender:
@@ -300,10 +312,11 @@ class Filter(object):
content = event.get("content", {})
# check if there is a string url field in the content for filtering purposes
contains_url = isinstance(content.get("url"), text_type)
labels = content.get(EventContentFields.LABELS, [])
return self.check_fields(room_id, sender, ev_type, labels, contains_url)
def check_fields(self, room_id, sender, event_type, labels, contains_url):
"""Checks whether the filter matches the given event fields.
Returns:
@@ -313,6 +326,7 @@ class Filter(object):
"rooms": lambda v: room_id == v,
"senders": lambda v: sender == v,
"types": lambda v: _matches_wildcard(event_type, v),
"labels": lambda v: v in labels,
}
for name, match_func in literal_keys.items():


@@ -44,6 +44,8 @@ def check_bind_error(e, address, bind_addresses):
bind_addresses (list): Addresses on which the service listens.
"""
if address == "0.0.0.0" and "::" in bind_addresses:
logger.warn("Failed to listen on 0.0.0.0, continuing because listening on [::]")
logger.warning(
"Failed to listen on 0.0.0.0, continuing because listening on [::]"
)
else:
raise e


@@ -269,7 +269,7 @@ def start(hs, listeners=None):
# It is now safe to start your Synapse.
hs.start_listening(listeners)
hs.get_datastore().db.start_profiling()
setup_sentry(hs)
setup_sdnotify(hs)


@@ -94,7 +94,7 @@ class AppserviceServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -103,7 +103,7 @@ class AppserviceServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -153,7 +153,7 @@ class ClientReaderServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -162,7 +162,7 @@ class ClientReaderServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -147,7 +147,7 @@ class EventCreatorServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -156,7 +156,7 @@ class EventCreatorServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -132,7 +132,7 @@ class FederationReaderServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -141,7 +141,7 @@ class FederationReaderServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -40,6 +40,7 @@ from synapse.replication.slave.storage.transactions import SlavedTransactionStor
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.replication.tcp.streams._base import ReceiptsStream
from synapse.server import HomeServer
from synapse.storage.database import Database
from synapse.storage.engines import create_engine
from synapse.types import ReadReceipt
from synapse.util.async_helpers import Linearizer
@@ -59,8 +60,8 @@ class FederationSenderSlaveStore(
SlavedDeviceStore,
SlavedPresenceStore,
):
def __init__(self, database: Database, db_conn, hs):
super(FederationSenderSlaveStore, self).__init__(database, db_conn, hs)
# We pull out the current federation stream position now so that we
# always have a known value for the federation position in memory so
@@ -69,7 +70,7 @@ class FederationSenderSlaveStore(
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = "SELECT stream_id FROM federation_stream_position" " WHERE type = ?"
sql = "SELECT stream_id FROM federation_stream_position WHERE type = ?"
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()
@@ -123,7 +124,7 @@ class FederationSenderServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -132,7 +133,7 @@ class FederationSenderServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -204,7 +204,7 @@ class FrontendProxyServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -213,7 +213,7 @@ class FrontendProxyServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -19,12 +19,13 @@ from __future__ import print_function
import gc
import logging
import math
import os
import resource
import sys
from six import iteritems
import psutil
from prometheus_client import Gauge
from twisted.application import service
@@ -67,9 +68,9 @@ from synapse.rest.key.v2 import KeyApiV2Resource
from synapse.rest.media.v0.content_repository import ContentRepoResource
from synapse.rest.well_known import WellKnownResource
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import IncorrectDatabaseSetup, create_engine
from synapse.storage.prepare_database import UpgradeDatabaseException
from synapse.util.caches import CACHE_SIZE_FACTOR
from synapse.util.httpresourcetree import create_resource_tree
from synapse.util.manhole import manhole
@@ -282,7 +283,7 @@ class SynapseHomeServer(HomeServer):
reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -291,23 +292,7 @@ class SynapseHomeServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
def run_startup_checks(self, db_conn, database_engine):
all_users_native = are_all_users_on_domain(
db_conn.cursor(), database_engine, self.hostname
)
if not all_users_native:
quit_with_error(
"Found users in database not native to %s!\n"
"You cannot changed a synapse server_name after it's been configured"
% (self.hostname,)
)
try:
database_engine.check_database(db_conn.cursor())
except IncorrectDatabaseSetup as e:
quit_with_error(str(e))
logger.warning("Unrecognized listener type: %s", listener["type"])
# Gauges to expose monthly active user control metrics
@@ -356,16 +341,12 @@ def setup(config_options):
synapse.config.logger.setup_logging(hs, config, use_worker_options=False)
logger.info("Preparing database: %s...", config.database_config["name"])
logger.info("Setting up server")
try:
hs.setup()
except UpgradeDatabaseException:
sys.stderr.write(
"\nFailed to upgrade database.\n"
@@ -374,9 +355,6 @@ def setup(config_options):
)
sys.exit(1)
logger.info("Database prepared in %s.", config.database_config["name"])
hs.setup()
hs.setup_master()
@defer.inlineCallbacks
@@ -435,7 +413,7 @@ def setup(config_options):
_base.start(hs, config.listeners)
hs.get_pusherpool().start()
hs.get_datastore().db.updates.start_doing_background_updates()
except Exception:
# Print the exception and bail out.
print("Error during startup:", file=sys.stderr)
@@ -471,6 +449,87 @@ class SynapseService(service.Service):
return self._port.stopListening()
# Contains the list of processes we will be monitoring
# currently either 0 or 1
_stats_process = []
@defer.inlineCallbacks
def phone_stats_home(hs, stats, stats_process=_stats_process):
logger.info("Gathering stats for reporting")
now = int(hs.get_clock().time())
uptime = int(now - hs.start_time)
if uptime < 0:
uptime = 0
stats["homeserver"] = hs.config.server_name
stats["server_context"] = hs.config.server_context
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
version = sys.version_info
stats["python_version"] = "{}.{}.{}".format(
version.major, version.minor, version.micro
)
stats["total_users"] = yield hs.get_datastore().count_all_users()
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
for name, count in iteritems(daily_user_type_results):
stats["daily_user_type_" + name] = count
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
stats["monthly_active_users"] = yield hs.get_datastore().count_monthly_users()
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in iteritems(r30_results):
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
stats["cache_factor"] = CACHE_SIZE_FACTOR
stats["event_cache_size"] = hs.config.event_cache_size
#
# Performance statistics
#
old = stats_process[0]
new = (now, resource.getrusage(resource.RUSAGE_SELF))
stats_process[0] = new
# Get RSS in bytes
stats["memory_rss"] = new[1].ru_maxrss
# Get CPU time in % of a single core, not % of all cores
used_cpu_time = (new[1].ru_utime + new[1].ru_stime) - (
old[1].ru_utime + old[1].ru_stime
)
if used_cpu_time == 0 or new[0] == old[0]:
stats["cpu_average"] = 0
else:
stats["cpu_average"] = math.floor(used_cpu_time / (new[0] - old[0]) * 100)
#
# Database version
#
stats["database_engine"] = hs.database_engine.module.__name__
stats["database_server_version"] = hs.database_engine.server_version
logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
try:
yield hs.get_proxied_http_client().put_json(
hs.config.report_stats_endpoint, stats
)
except Exception as e:
logger.warning("Error reporting stats: %s", e)
def run(hs):
PROFILE_SYNAPSE = False
if PROFILE_SYNAPSE:
@@ -497,91 +556,19 @@ def run(hs):
reactor.run = profile(reactor.run)
clock = hs.get_clock()
start_time = clock.time()
stats = {}
# Contains the list of processes we will be monitoring
# currently either 0 or 1
stats_process = []
def performance_stats_init():
_stats_process.clear()
_stats_process.append(
(int(hs.get_clock().time()), resource.getrusage(resource.RUSAGE_SELF))
)
def start_phone_stats_home():
return run_as_background_process("phone_stats_home", phone_stats_home)
@defer.inlineCallbacks
def phone_stats_home():
logger.info("Gathering stats for reporting")
now = int(hs.get_clock().time())
uptime = int(now - start_time)
if uptime < 0:
uptime = 0
stats["homeserver"] = hs.config.server_name
stats["server_context"] = hs.config.server_context
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
version = sys.version_info
stats["python_version"] = "{}.{}.{}".format(
version.major, version.minor, version.micro
return run_as_background_process(
"phone_stats_home", phone_stats_home, hs, stats
)
stats["total_users"] = yield hs.get_datastore().count_all_users()
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
for name, count in iteritems(daily_user_type_results):
stats["daily_user_type_" + name] = count
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
stats["monthly_active_users"] = yield hs.get_datastore().count_monthly_users()
stats[
"daily_active_rooms"
] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in iteritems(r30_results):
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
stats["cache_factor"] = CACHE_SIZE_FACTOR
stats["event_cache_size"] = hs.config.event_cache_size
if len(stats_process) > 0:
stats["memory_rss"] = 0
stats["cpu_average"] = 0
for process in stats_process:
stats["memory_rss"] += process.memory_info().rss
stats["cpu_average"] += int(process.cpu_percent(interval=None))
stats["database_engine"] = hs.get_datastore().database_engine_name
stats["database_server_version"] = hs.get_datastore().get_server_version()
logger.info(
"Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats)
)
try:
yield hs.get_simple_http_client().put_json(
hs.config.report_stats_endpoint, stats
)
except Exception as e:
logger.warn("Error reporting stats: %s", e)
def performance_stats_init():
try:
process = psutil.Process()
# Ensure we can fetch both, and make the initial request for cpu_percent
# so the next request will use this as the initial point.
process.memory_info().rss
process.cpu_percent(interval=None)
logger.info("report_stats can use psutil")
stats_process.append(process)
except (AttributeError):
logger.warning("Unable to read memory/cpu stats. Disabling reporting.")
def generate_user_daily_visit_stats():
return run_as_background_process(


@@ -120,7 +120,7 @@ class MediaRepositoryServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -129,7 +129,7 @@ class MediaRepositoryServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -33,6 +33,7 @@ from synapse.replication.slave.storage.account_data import SlavedAccountDataStor
from synapse.replication.slave.storage.events import SlavedEventStore
from synapse.replication.slave.storage.pushers import SlavedPusherStore
from synapse.replication.slave.storage.receipts import SlavedReceiptsStore
from synapse.replication.slave.storage.room import RoomStore
from synapse.replication.tcp.client import ReplicationClientHandler
from synapse.server import HomeServer
from synapse.storage import DataStore
@@ -45,7 +46,11 @@ logger = logging.getLogger("synapse.app.pusher")
class PusherSlaveStore(
SlavedEventStore, SlavedPusherStore, SlavedReceiptsStore, SlavedAccountDataStore
SlavedEventStore,
SlavedPusherStore,
SlavedReceiptsStore,
SlavedAccountDataStore,
RoomStore,
):
update_pusher_last_stream_ordering_and_success = __func__(
DataStore.update_pusher_last_stream_ordering_and_success
@@ -114,7 +119,7 @@ class PusherServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -123,7 +128,7 @@ class PusherServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -151,7 +151,7 @@ class SynchrotronPresence(object):
def set_state(self, user, state, ignore_status_msg=False):
# TODO How's this supposed to work?
pass
return defer.succeed(None)
get_states = __func__(PresenceHandler.get_states)
get_state = __func__(PresenceHandler.get_state)
@@ -326,7 +326,7 @@ class SynchrotronServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -335,7 +335,7 @@ class SynchrotronServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -43,6 +43,7 @@ from synapse.replication.tcp.streams.events import (
from synapse.rest.client.v2_alpha import user_directory
from synapse.server import HomeServer
from synapse.storage.data_stores.main.user_directory import UserDirectoryStore
from synapse.storage.database import Database
from synapse.storage.engines import create_engine
from synapse.util.caches.stream_change_cache import StreamChangeCache
from synapse.util.httpresourcetree import create_resource_tree
@@ -60,11 +61,11 @@ class UserDirectorySlaveStore(
UserDirectoryStore,
BaseSlavedStore,
):
def __init__(self, db_conn, hs):
super(UserDirectorySlaveStore, self).__init__(db_conn, hs)
def __init__(self, database: Database, db_conn, hs):
super(UserDirectorySlaveStore, self).__init__(database, db_conn, hs)
events_max = self._stream_id_gen.get_current_token()
curr_state_delta_prefill, min_curr_state_delta_id = self._get_cache_dict(
curr_state_delta_prefill, min_curr_state_delta_id = self.db.get_cache_dict(
db_conn,
"current_state_delta_stream",
entity_column="room_id",
@@ -150,7 +151,7 @@ class UserDirectoryServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -159,7 +160,7 @@ class UserDirectoryServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)


@@ -94,7 +94,9 @@ class ApplicationService(object):
ip_range_whitelist=None,
):
self.token = token
self.url = url
self.url = (
url.rstrip("/") if isinstance(url, str) else None
) # url must not end with a slash
self.hs_token = hs_token
self.sender = sender
self.server_name = hostname
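The slash-stripping above matters because endpoint paths are appended directly to the application service's base URL; a trailing slash would produce double-slash URLs. A standalone sketch of the same normalisation (the URL is a made-up example):

def normalise_as_url(url):
    # Strip a trailing slash so that paths can be appended safely.
    return url.rstrip("/") if isinstance(url, str) else None

assert normalise_as_url("https://as.example.com/") == "https://as.example.com"
assert normalise_as_url(None) is None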


@@ -185,7 +185,7 @@ class ApplicationServiceApi(SimpleHttpClient):
if not _is_valid_3pe_metadata(info):
logger.warning(
"query_3pe_protocol to %s did not return a" " valid result", uri
"query_3pe_protocol to %s did not return a valid result", uri
)
return None


@@ -134,7 +134,7 @@ def _load_appservice(hostname, as_info, config_filename):
for regex_obj in as_info["namespaces"][ns]:
if not isinstance(regex_obj, dict):
raise ValueError(
"Expected namespace entry in %s to be an object," " but got %s",
"Expected namespace entry in %s to be an object, but got %s",
ns,
regex_obj,
)


@@ -35,11 +35,11 @@ class CaptchaConfig(Config):
## Captcha ##
# See docs/CAPTCHA_SETUP for full details of configuring this.
# This Home Server's ReCAPTCHA public key.
# This homeserver's ReCAPTCHA public key.
#
#recaptcha_public_key: "YOUR_PUBLIC_KEY"
# This Home Server's ReCAPTCHA private key.
# This homeserver's ReCAPTCHA private key.
#
#recaptcha_private_key: "YOUR_PRIVATE_KEY"


@@ -146,6 +146,8 @@ class EmailConfig(Config):
if k not in email_config:
missing.append("email." + k)
# public_baseurl is required to build password reset and validation links that
# will be emailed to users
if config.get("public_baseurl") is None:
missing.append("public_baseurl")
@@ -305,8 +307,23 @@ class EmailConfig(Config):
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# app_name: Matrix
#
# # notif_from defines the "From" address to use when sending emails.
# # It must be set if email sending is enabled.
# #
# # The placeholder '%(app)s' will be replaced by the application name,
# # which is normally 'app_name' (below), but may be overridden by the
# # Matrix client application.
# #
# # Note that the placeholder must be written '%(app)s', including the
# # trailing 's'.
# #
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
#
# # app_name defines the default value for '%(app)s' in notif_from. It
# # defaults to 'Matrix'.
# #
# #app_name: my_branded_matrix_server
#
# # Enable email notifications by default
# #
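The '%(app)s' placeholder described above is plain Python %-style named interpolation, so it can be sanity-checked in isolation; a minimal sketch:

notif_from = "Your Friendly %(app)s homeserver <noreply@example.com>"
assert (
    notif_from % {"app": "Matrix"}
    == "Your Friendly Matrix homeserver <noreply@example.com>"
)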


@@ -125,7 +125,7 @@ class KeyConfig(Config):
# if neither trusted_key_servers nor perspectives are given, use the default.
if "perspectives" not in config and "trusted_key_servers" not in config:
logger.warn(TRUSTED_KEY_SERVER_NOT_CONFIGURED_WARN)
logger.warning(TRUSTED_KEY_SERVER_NOT_CONFIGURED_WARN)
key_servers = [{"server_name": "matrix.org"}]
else:
key_servers = config.get("trusted_key_servers", [])
@@ -156,7 +156,7 @@ class KeyConfig(Config):
if not self.macaroon_secret_key:
# Unfortunately, there are people out there that don't have this
# set. Lets just be "nice" and derive one from their secret key.
logger.warn("Config is missing macaroon_secret_key")
logger.warning("Config is missing macaroon_secret_key")
seed = bytes(self.signing_key[0])
self.macaroon_secret_key = hashlib.sha256(seed).digest()


@@ -182,7 +182,7 @@ def _reload_stdlib_logging(*args, log_config=None):
logger = logging.getLogger("")
if not log_config:
logger.warn("Reloaded a blank config?")
logger.warning("Reloaded a blank config?")
logging.config.dictConfig(log_config)
@@ -234,8 +234,8 @@ def setup_logging(
# make sure that the first thing we log is a thing we can grep backwards
# for
logging.warn("***** STARTING SERVER *****")
logging.warn("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.warning("***** STARTING SERVER *****")
logging.warning("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.info("Server hostname: %s", config.server_name)
return logger


@@ -106,6 +106,13 @@ class RegistrationConfig(Config):
account_threepid_delegates = config.get("account_threepid_delegates") or {}
self.account_threepid_delegate_email = account_threepid_delegates.get("email")
self.account_threepid_delegate_msisdn = account_threepid_delegates.get("msisdn")
if self.account_threepid_delegate_msisdn and not self.public_baseurl:
raise ConfigError(
"The configuration option `public_baseurl` is required if "
"`account_threepid_delegate.msisdn` is set, such that "
"clients know where to submit validation tokens to. Please "
"configure `public_baseurl`."
)
self.default_identity_server = config.get("default_identity_server")
self.allow_guest_access = config.get("allow_guest_access", False)
@@ -300,7 +307,7 @@ class RegistrationConfig(Config):
# If a delegate is specified, the config option public_baseurl must also be filled out.
#
account_threepid_delegates:
#email: https://example.com # Delegate email sending to example.org
#email: https://example.com # Delegate email sending to example.com
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
# Users who register on this homeserver will automatically be joined


@@ -170,7 +170,7 @@ class _RoomDirectoryRule(object):
self.action = action
else:
raise ConfigError(
"%s rules can only have action of 'allow'" " or 'deny'" % (option_name,)
"%s rules can only have action of 'allow' or 'deny'" % (option_name,)
)
self._alias_matches_all = alias == "*"


@@ -19,7 +19,7 @@ import logging
import os.path
import re
from textwrap import indent
from typing import List
from typing import Dict, List, Optional
import attr
import yaml
@@ -41,7 +41,7 @@ logger = logging.Logger(__name__)
# in the list.
DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
DEFAULT_ROOM_VERSION = "4"
DEFAULT_ROOM_VERSION = "5"
ROOM_COMPLEXITY_TOO_GREAT = (
"Your homeserver is unable to join rooms this large or complex. "
@@ -118,15 +118,16 @@ class ServerConfig(Config):
self.allow_public_rooms_without_auth = False
self.allow_public_rooms_over_federation = False
else:
# If set to 'False', requires authentication to access the server's public
# rooms directory through the client API. Defaults to 'True'.
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
self.allow_public_rooms_without_auth = config.get(
"allow_public_rooms_without_auth", True
"allow_public_rooms_without_auth", False
)
# If set to 'False', forbids any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'True'.
# If set to 'true', allows any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'false'.
self.allow_public_rooms_over_federation = config.get(
"allow_public_rooms_over_federation", True
"allow_public_rooms_over_federation", False
)
default_room_version = config.get("default_room_version", DEFAULT_ROOM_VERSION)
@@ -223,7 +224,7 @@ class ServerConfig(Config):
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in " "federation_ip_range_blacklist: %s" % e
"Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
)
if self.public_baseurl is not None:
@@ -246,6 +247,124 @@ class ServerConfig(Config):
# events with profile information that differ from the target's global profile.
self.allow_per_room_profiles = config.get("allow_per_room_profiles", True)
retention_config = config.get("retention")
if retention_config is None:
retention_config = {}
self.retention_enabled = retention_config.get("enabled", False)
retention_default_policy = retention_config.get("default_policy")
if retention_default_policy is not None:
self.retention_default_min_lifetime = retention_default_policy.get(
"min_lifetime"
)
if self.retention_default_min_lifetime is not None:
self.retention_default_min_lifetime = self.parse_duration(
self.retention_default_min_lifetime
)
self.retention_default_max_lifetime = retention_default_policy.get(
"max_lifetime"
)
if self.retention_default_max_lifetime is not None:
self.retention_default_max_lifetime = self.parse_duration(
self.retention_default_max_lifetime
)
if (
self.retention_default_min_lifetime is not None
and self.retention_default_max_lifetime is not None
and (
self.retention_default_min_lifetime
> self.retention_default_max_lifetime
)
):
raise ConfigError(
"The default retention policy's 'min_lifetime' can not be greater"
" than its 'max_lifetime'"
)
else:
self.retention_default_min_lifetime = None
self.retention_default_max_lifetime = None
self.retention_allowed_lifetime_min = retention_config.get(
"allowed_lifetime_min"
)
if self.retention_allowed_lifetime_min is not None:
self.retention_allowed_lifetime_min = self.parse_duration(
self.retention_allowed_lifetime_min
)
self.retention_allowed_lifetime_max = retention_config.get(
"allowed_lifetime_max"
)
if self.retention_allowed_lifetime_max is not None:
self.retention_allowed_lifetime_max = self.parse_duration(
self.retention_allowed_lifetime_max
)
if (
self.retention_allowed_lifetime_min is not None
and self.retention_allowed_lifetime_max is not None
and self.retention_allowed_lifetime_min
> self.retention_allowed_lifetime_max
):
raise ConfigError(
"Invalid retention policy limits: 'allowed_lifetime_min' can not be"
" greater than 'allowed_lifetime_max'"
)
self.retention_purge_jobs = [] # type: List[Dict[str, Optional[int]]]
for purge_job_config in retention_config.get("purge_jobs", []):
interval_config = purge_job_config.get("interval")
if interval_config is None:
raise ConfigError(
"A retention policy's purge jobs configuration must have the"
" 'interval' key set."
)
interval = self.parse_duration(interval_config)
shortest_max_lifetime = purge_job_config.get("shortest_max_lifetime")
if shortest_max_lifetime is not None:
shortest_max_lifetime = self.parse_duration(shortest_max_lifetime)
longest_max_lifetime = purge_job_config.get("longest_max_lifetime")
if longest_max_lifetime is not None:
longest_max_lifetime = self.parse_duration(longest_max_lifetime)
if (
shortest_max_lifetime is not None
and longest_max_lifetime is not None
and shortest_max_lifetime > longest_max_lifetime
):
raise ConfigError(
"A retention policy's purge jobs configuration's"
" 'shortest_max_lifetime' value can not be greater than its"
" 'longest_max_lifetime' value."
)
self.retention_purge_jobs.append(
{
"interval": interval,
"shortest_max_lifetime": shortest_max_lifetime,
"longest_max_lifetime": longest_max_lifetime,
}
)
if not self.retention_purge_jobs:
self.retention_purge_jobs = [
{
"interval": self.parse_duration("1d"),
"shortest_max_lifetime": None,
"longest_max_lifetime": None,
}
]
self.listeners = [] # type: List[dict]
for listener in config.get("listeners", []):
if not isinstance(listener.get("port", None), int):
@@ -372,6 +491,8 @@ class ServerConfig(Config):
"cleanup_extremities_with_dummy_events", True
)
self.enable_ephemeral_messages = config.get("enable_ephemeral_messages", False)
def has_tls_listener(self) -> bool:
return any(l["tls"] for l in self.listeners)
@@ -500,15 +621,16 @@ class ServerConfig(Config):
#
#require_auth_for_profile_requests: true
# If set to 'false', requires authentication to access the server's public rooms
# directory through the client API. Defaults to 'true'.
# If set to 'true', removes the need for authentication to access the server's
# public rooms directory through the client API, meaning that anyone can
# query the room directory. Defaults to 'false'.
#
#allow_public_rooms_without_auth: false
#allow_public_rooms_without_auth: true
# If set to 'false', forbids any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'true'.
# If set to 'true', allows any other homeserver to fetch the server's public
# rooms directory via federation. Defaults to 'false'.
#
#allow_public_rooms_over_federation: false
#allow_public_rooms_over_federation: true
# The default room version for newly created rooms.
#
@@ -721,7 +843,7 @@ class ServerConfig(Config):
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained Homeserver Settings
# Resource-constrained homeserver Settings
#
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
@@ -761,6 +883,69 @@ class ServerConfig(Config):
# Defaults to `28d`. Set to `null` to disable clearing out of old rows.
#
#user_ips_max_age: 14d
# Message retention policy at the server level.
#
# Room admins and mods can define a retention period for their rooms using the
# 'm.room.retention' state event, and server admins can cap this period by setting
# the 'allowed_lifetime_min' and 'allowed_lifetime_max' config options.
#
# If this feature is enabled, Synapse will regularly look for and purge events
# which are older than the room's maximum retention period. Synapse will also
# filter events received over federation so that events that should have been
# purged are ignored and not stored again.
#
retention:
# The message retention policies feature is disabled by default. Uncomment the
# following line to enable it.
#
#enabled: true
# Default retention policy. If set, Synapse will apply it to rooms that lack the
# 'm.room.retention' state event. Currently, the value of 'min_lifetime' doesn't
# matter much because Synapse doesn't take it into account yet.
#
#default_policy:
# min_lifetime: 1d
# max_lifetime: 1y
# Retention policy limits. If set, a user won't be able to send a
# 'm.room.retention' event which features a 'min_lifetime' or a 'max_lifetime'
# that's not within this range. This is especially useful in closed federations,
# in which server admins can make sure every federating server applies the same
# rules.
#
#allowed_lifetime_min: 1d
#allowed_lifetime_max: 1y
# Server admins can define the settings of the background jobs purging the
# events whose lifetime has expired under the 'purge_jobs' section.
#
# If no configuration is provided, a single job will be set up to delete expired
# events in every room daily.
#
# Each job's configuration defines which range of message lifetimes the job
# takes care of. For example, if 'shortest_max_lifetime' is '2d' and
# 'longest_max_lifetime' is '3d', the job will handle purging expired events in
# rooms whose state defines a 'max_lifetime' that's both higher than 2 days, and
# lower than or equal to 3 days. Both the minimum and the maximum value of a
# range are optional, e.g. a job with no 'shortest_max_lifetime' and a
# 'longest_max_lifetime' of '3d' will handle every room with a retention policy
# whose 'max_lifetime' is lower than or equal to three days.
#
# The rationale for this per-job configuration is that some rooms might have a
# retention policy with a low 'max_lifetime', where history needs to be purged
# of outdated messages on a very frequent basis (e.g. every 5min), but we don't want
# that purge to be performed by a job that's iterating over every room it knows,
# which would be quite heavy on the server.
#
#purge_jobs:
# - shortest_max_lifetime: 1d
# longest_max_lifetime: 3d
# interval: 5m
# - shortest_max_lifetime: 3d
# longest_max_lifetime: 1y
# interval: 24h
"""
% locals()
)
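For context, the policies capped by this configuration arrive as 'm.room.retention' state events. A hypothetical event body (lifetimes in milliseconds, which is how Synapse parses them):

retention_event = {
    "type": "m.room.retention",
    "state_key": "",
    "content": {
        "min_lifetime": 24 * 60 * 60 * 1000,        # 1 day
        "max_lifetime": 365 * 24 * 60 * 60 * 1000,  # 1 year
    },
}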
@@ -781,20 +966,20 @@ class ServerConfig(Config):
"--daemonize",
action="store_true",
default=None,
help="Daemonize the home server",
help="Daemonize the homeserver",
)
server_group.add_argument(
"--print-pidfile",
action="store_true",
default=None,
help="Print the path to the pidfile just" " before daemonizing",
help="Print the path to the pidfile just before daemonizing",
)
server_group.add_argument(
"--manhole",
metavar="PORT",
dest="manhole",
type=int,
help="Turn on the twisted telnet manhole" " service on the given port.",
help="Turn on the twisted telnet manhole service on the given port.",
)


@@ -125,9 +125,11 @@ def compute_event_signature(event_dict, signature_name, signing_key):
redact_json = prune_event_dict(event_dict)
redact_json.pop("age_ts", None)
redact_json.pop("unsigned", None)
logger.debug("Signing event: %s", encode_canonical_json(redact_json))
if logger.isEnabledFor(logging.DEBUG):
logger.debug("Signing event: %s", encode_canonical_json(redact_json))
redact_json = sign_json(redact_json, signature_name, signing_key)
logger.debug("Signed event: %s", encode_canonical_json(redact_json))
if logger.isEnabledFor(logging.DEBUG):
logger.debug("Signed event: %s", encode_canonical_json(redact_json))
return redact_json["signatures"]


@@ -14,6 +14,7 @@
# limitations under the License.
import logging
from typing import Set, Tuple
from canonicaljson import encode_canonical_json
from signedjson.key import decode_verify_key_bytes
@@ -48,6 +49,18 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
if not hasattr(event, "room_id"):
raise AuthError(500, "Event has no room_id: %s" % event)
room_id = event.room_id
# I'm not really expecting to get auth events in the wrong room, but let's
# sanity-check it
for auth_event in auth_events.values():
if auth_event.room_id != room_id:
raise Exception(
"During auth for event %s in room %s, found event %s in the state "
"which is in room %s"
% (event.event_id, room_id, auth_event.event_id, auth_event.room_id)
)
if do_sig_check:
sender_domain = get_domain_from_id(event.sender)
@@ -77,7 +90,7 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
if auth_events is None:
# Oh, we don't know what the state of the room was, so we
# are trusting that this is allowed (at least for now)
logger.warn("Trusting event: %s", event.event_id)
logger.warning("Trusting event: %s", event.event_id)
return
if event.type == EventTypes.Create:
@@ -625,7 +638,7 @@ def get_public_keys(invite_event):
return public_keys
def auth_types_for_event(event):
def auth_types_for_event(event) -> Set[Tuple[str]]:
"""Given an event, return a list of (EventType, StateKey) that may be
needed to auth the event. The returned list may be a superset of what
would actually be required depending on the full state of the room.
@@ -634,20 +647,20 @@ def auth_types_for_event(event):
actually auth the event.
"""
if event.type == EventTypes.Create:
return []
return set()
auth_types = [
auth_types = {
(EventTypes.PowerLevels, ""),
(EventTypes.Member, event.sender),
(EventTypes.Create, ""),
]
}
if event.type == EventTypes.Member:
membership = event.content["membership"]
if membership in [Membership.JOIN, Membership.INVITE]:
auth_types.append((EventTypes.JoinRules, ""))
auth_types.add((EventTypes.JoinRules, ""))
auth_types.append((EventTypes.Member, event.state_key))
auth_types.add((EventTypes.Member, event.state_key))
if membership == Membership.INVITE:
if "third_party_invite" in event.content:
@@ -655,6 +668,6 @@ def auth_types_for_event(event):
EventTypes.ThirdPartyInvite,
event.content["third_party_invite"]["signed"]["token"],
)
auth_types.append(key)
auth_types.add(key)
return auth_types
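Switching from a list to a set above deduplicates auth (type, state_key) pairs; for example, a self-join would previously list the sender's membership twice. A standalone sketch of the join branch, using plain strings in place of the EventTypes constants:

def auth_types_for_join(sender, state_key):
    auth_types = {
        ("m.room.power_levels", ""),
        ("m.room.member", sender),
        ("m.room.create", ""),
    }
    auth_types.add(("m.room.join_rules", ""))
    auth_types.add(("m.room.member", state_key))   # duplicate for a self-join
    return auth_types

# The set collapses the duplicate membership entry to four distinct pairs.
assert len(auth_types_for_join("@alice:example.com", "@alice:example.com")) == 4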


@@ -12,104 +12,125 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict, Optional, Tuple, Union
from six import iteritems
import attr
from frozendict import frozendict
from twisted.internet import defer
from synapse.appservice import ApplicationService
from synapse.logging.context import make_deferred_yieldable, run_in_background
class EventContext(object):
@attr.s(slots=True)
class EventContext:
"""
Holds information relevant to persisting an event
Attributes:
state_group (int|None): state group id, if the state has been stored
as a state group. This is usually only None if e.g. the event is
an outlier.
rejected (bool|str): A rejection reason if the event was rejected, else
False
rejected: A rejection reason if the event was rejected, else False
push_actions (list[(str, list[object])]): list of (user_id, actions)
tuples
_state_group: The ID of the state group for this event. Note that state events
are persisted with a state group which includes the new event, so this is
effectively the state *after* the event in question.
prev_group (int): Previously persisted state group. ``None`` for an
outlier.
delta_ids (dict[(str, str), str]): Delta from ``prev_group``.
(type, state_key) -> event_id. ``None`` for an outlier.
For a *rejected* state event, where the state of the rejected event is
ignored, this state_group should never make it into the
event_to_state_groups table. Indeed, inspecting this value for a rejected
state event is almost certainly incorrect.
prev_state_events (?): XXX: is this ever set to anything other than
the empty list?
For an outlier, where we don't have the state at the event, this will be
None.
Note that this is a private attribute: it should be accessed via
the ``state_group`` property.
state_group_before_event: The ID of the state group representing the state
of the room before this event.
If this is a non-state event, this will be the same as ``state_group``. If
it's a state event, it will be the same as ``prev_group``.
If ``state_group`` is None (ie, the event is an outlier),
``state_group_before_event`` will always also be ``None``.
prev_group: If it is known, ``state_group``'s prev_group. Note that this being
None does not necessarily mean that ``state_group`` does not have
a prev_group!
If the event is a state event, this is normally the same as ``prev_group``.
If ``state_group`` is None (ie, the event is an outlier), ``prev_group``
will always also be ``None``.
Note that this *not* (necessarily) the state group associated with
``_prev_state_ids``.
delta_ids: If ``prev_group`` is not None, the state delta between ``prev_group``
and ``state_group``.
app_service: If this event is being sent by a (local) application service, that
app service.
_current_state_ids: The room state map, including this event - ie, the state
in ``state_group``.
_current_state_ids (dict[(str, str), str]|None):
The current state map including the current event. None if outlier
or we haven't fetched the state from DB yet.
(type, state_key) -> event_id
_prev_state_ids (dict[(str, str), str]|None):
The current state map excluding the current event. None if outlier
or we haven't fetched the state from DB yet.
FIXME: what is this for an outlier? it seems ill-defined. It seems like
it could be either {}, or the state we were given by the remote
server, depending on $THINGS
Note that this is a private attribute: it should be accessed via
``get_current_state_ids``. _AsyncEventContext impl calculates this
on-demand: it will be None until that happens.
_prev_state_ids: The room state map, excluding this event - ie, the state
in ``state_group_before_event``. For a non-state
event, this will be the same as _current_state_events.
Note that it is a completely different thing to prev_group!
(type, state_key) -> event_id
_fetching_state_deferred (Deferred|None): Resolves when *_state_ids have
been calculated. None if we haven't started calculating yet
FIXME: again, what is this for an outlier?
_event_type (str): The type of the event the context is associated with.
Only set when state has not been fetched yet.
_event_state_key (str|None): The state_key of the event the context is
associated with. Only set when state has not been fetched yet.
_prev_state_id (str|None): If the event associated with the context is
a state event, then `_prev_state_id` is the event_id of the state
that was replaced.
Only set when state has not been fetched yet.
As with _current_state_ids, this is a private attribute. It should be
accessed via get_prev_state_ids.
"""
__slots__ = [
"state_group",
"rejected",
"prev_group",
"delta_ids",
"prev_state_events",
"app_service",
"_current_state_ids",
"_prev_state_ids",
"_prev_state_id",
"_event_type",
"_event_state_key",
"_fetching_state_deferred",
]
rejected = attr.ib(default=False, type=Union[bool, str])
_state_group = attr.ib(default=None, type=Optional[int])
state_group_before_event = attr.ib(default=None, type=Optional[int])
prev_group = attr.ib(default=None, type=Optional[int])
delta_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
app_service = attr.ib(default=None, type=Optional[ApplicationService])
def __init__(self):
self.prev_state_events = []
self.rejected = False
self.app_service = None
_current_state_ids = attr.ib(
default=None, type=Optional[Dict[Tuple[str, str], str]]
)
_prev_state_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
@staticmethod
def with_state(
state_group, current_state_ids, prev_state_ids, prev_group=None, delta_ids=None
state_group,
state_group_before_event,
current_state_ids,
prev_state_ids,
prev_group=None,
delta_ids=None,
):
context = EventContext()
# The current state including the current event
context._current_state_ids = current_state_ids
# The current state excluding the current event
context._prev_state_ids = prev_state_ids
context.state_group = state_group
context._prev_state_id = None
context._event_type = None
context._event_state_key = None
context._fetching_state_deferred = defer.succeed(None)
# A previously persisted state group and a delta between that
# and this state.
context.prev_group = prev_group
context.delta_ids = delta_ids
return context
return EventContext(
current_state_ids=current_state_ids,
prev_state_ids=prev_state_ids,
state_group=state_group,
state_group_before_event=state_group_before_event,
prev_group=prev_group,
delta_ids=delta_ids,
)
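The refactor above swaps a hand-written __slots__/__init__ class for attrs. A minimal sketch of the same pattern (Context is a toy stand-in, not the real EventContext):

from typing import Optional

import attr

@attr.s(slots=True)
class Context:
    rejected = attr.ib(default=False)
    _state_group = attr.ib(default=None, type=Optional[int])

    @property
    def state_group(self):
        if self.rejected:
            raise RuntimeError("Attempt to access state_group of rejected event")
        return self._state_group

# attrs strips the leading underscore from private fields in __init__.
ctx = Context(state_group=42)
assert ctx.state_group == 42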
@defer.inlineCallbacks
def serialize(self, event, store):
@@ -137,11 +158,11 @@ class EventContext(object):
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
"state_group": self.state_group,
"state_group": self._state_group,
"state_group_before_event": self.state_group_before_event,
"rejected": self.rejected,
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None,
}
@@ -157,24 +178,18 @@ class EventContext(object):
Returns:
EventContext
"""
context = EventContext()
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
context._prev_state_id = input["prev_state_id"]
context._event_type = input["event_type"]
context._event_state_key = input["event_state_key"]
context._current_state_ids = None
context._prev_state_ids = None
context._fetching_state_deferred = None
context.state_group = input["state_group"]
context.prev_group = input["prev_group"]
context.delta_ids = _decode_state_dict(input["delta_ids"])
context.rejected = input["rejected"]
context.prev_state_events = input["prev_state_events"]
context = _AsyncEventContextImpl(
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
prev_state_id=input["prev_state_id"],
event_type=input["event_type"],
event_state_key=input["event_state_key"],
state_group=input["state_group"],
state_group_before_event=input["state_group_before_event"],
prev_group=input["prev_group"],
delta_ids=_decode_state_dict(input["delta_ids"]),
rejected=input["rejected"],
)
app_service_id = input["app_service_id"]
if app_service_id:
@@ -182,29 +197,52 @@ class EventContext(object):
return context
@property
def state_group(self) -> Optional[int]:
"""The ID of the state group for this event.
Note that state events are persisted with a state group which includes the new
event, so this is effectively the state *after* the event in question.
For an outlier, where we don't have the state at the event, this will be None.
It is an error to access this for a rejected event, since rejected state should
not make it into the room state. Accessing this property will raise an exception
if ``rejected`` is set.
"""
if self.rejected:
raise RuntimeError("Attempt to access state_group of rejected event")
return self._state_group
@defer.inlineCallbacks
def get_current_state_ids(self, store):
"""Gets the current state IDs
"""
Gets the room state map, including this event - ie, the state in ``state_group``
It is an error to access this for a rejected event, since rejected state should
not make it into the room state. This method will raise an exception if
``rejected`` is set.
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
is None, which happens when the associated event is an outlier.
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
if self.rejected:
raise RuntimeError("Attempt to access state_ids of rejected event")
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store
)
yield make_deferred_yieldable(self._fetching_state_deferred)
yield self._ensure_fetched(store)
return self._current_state_ids
@defer.inlineCallbacks
def get_prev_state_ids(self, store):
"""Gets the prev state IDs
"""
Gets the room state map, excluding this event.
For a non-state event, this will be the same as get_current_state_ids().
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
@@ -212,27 +250,64 @@ class EventContext(object):
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store
)
yield make_deferred_yieldable(self._fetching_state_deferred)
yield self._ensure_fetched(store)
return self._prev_state_ids
def get_cached_current_state_ids(self):
"""Gets the current state IDs if we have them already cached.
It is an error to access this for a rejected event, since rejected state should
not make it into the room state. This method will raise an exception if
``rejected`` is set.
Returns:
dict[(str, str), str]|None: Returns None if we haven't cached the
state or if state_group is None, which happens when the associated
event is an outlier.
"""
if self.rejected:
raise RuntimeError("Attempt to access state_ids of rejected event")
return self._current_state_ids
def _ensure_fetched(self, store):
return defer.succeed(None)
@attr.s(slots=True)
class _AsyncEventContextImpl(EventContext):
"""
An implementation of EventContext which fetches _current_state_ids and
_prev_state_ids from the database on demand.
Attributes:
_fetching_state_deferred (Deferred|None): Resolves when *_state_ids have
been calculated. None if we haven't started calculating yet
_event_type (str): The type of the event the context is associated with.
_event_state_key (str): The state_key of the event the context is
associated with.
_prev_state_id (str|None): If the event associated with the context is
a state event, then `_prev_state_id` is the event_id of the state
that was replaced.
"""
_prev_state_id = attr.ib(default=None)
_event_type = attr.ib(default=None)
_event_state_key = attr.ib(default=None)
_fetching_state_deferred = attr.ib(default=None)
def _ensure_fetched(self, store):
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store
)
return make_deferred_yieldable(self._fetching_state_deferred)
@defer.inlineCallbacks
def _fill_out_state(self, store):
"""Called to populate the _current_state_ids and _prev_state_ids
@@ -250,27 +325,6 @@ class EventContext(object):
else:
self._prev_state_ids = self._current_state_ids
@defer.inlineCallbacks
def update_state(
self, state_group, prev_state_ids, current_state_ids, prev_group, delta_ids
):
"""Replace the state in the context
"""
# We need to make sure we wait for any ongoing fetching of state
# to complete so that the updated state doesn't get clobbered
if self._fetching_state_deferred:
yield make_deferred_yieldable(self._fetching_state_deferred)
self.state_group = state_group
self._prev_state_ids = prev_state_ids
self.prev_group = prev_group
self._current_state_ids = current_state_ids
self.delta_ids = delta_ids
# We need to ensure that that we've marked as having fetched the state
self._fetching_state_deferred = defer.succeed(None)
def _encode_state_dict(state_dict):
"""Since dicts of (type, state_key) -> event_id cannot be serialized in


@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,6 +14,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
from synapse.spam_checker_api import SpamCheckerApi
class SpamChecker(object):
def __init__(self, hs):
@@ -26,7 +31,14 @@ class SpamChecker(object):
pass
if module is not None:
self.spam_checker = module(config=config)
# Older spam checkers don't accept the `api` argument, so we
# try and detect support.
spam_args = inspect.getfullargspec(module)
if "api" in spam_args.args:
api = SpamCheckerApi(hs)
self.spam_checker = module(config=config, api=api)
else:
self.spam_checker = module(config=config)
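The inspect-based detection above exists because spam checker modules are third-party code with two possible constructor shapes. A hypothetical module showing both styles (class names are illustrative only):

class LegacySpamChecker:
    # No 'api' argument: constructed as module(config=config).
    def __init__(self, config):
        self.config = config

class ModernSpamChecker:
    # 'api' argument present: also receives a SpamCheckerApi instance.
    def __init__(self, config, api):
        self.config = config
        self.api = api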
def check_event_for_spam(self, event):
"""Checks if a given event is considered "spammy" by this server.


@@ -13,7 +13,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
from six import string_types
from six import integer_types, string_types
from synapse.api.constants import MAX_ALIAS_LENGTH, EventTypes, Membership
from synapse.api.errors import Codes, SynapseError
@@ -22,11 +22,12 @@ from synapse.types import EventID, RoomID, UserID
class EventValidator(object):
def validate_new(self, event):
def validate_new(self, event, config):
"""Validates the event has roughly the right format
Args:
event (FrozenEvent)
event (FrozenEvent): The event to validate.
config (Config): The homeserver's configuration.
"""
self.validate_builder(event)
@@ -67,6 +68,99 @@ class EventValidator(object):
Codes.INVALID_PARAM,
)
if event.type == EventTypes.Retention:
self._validate_retention(event, config)
def _validate_retention(self, event, config):
"""Checks that an event that defines the retention policy for a room respects the
boundaries imposed by the server's administrator.
Args:
event (FrozenEvent): The event to validate.
config (Config): The homeserver's configuration.
"""
min_lifetime = event.content.get("min_lifetime")
max_lifetime = event.content.get("max_lifetime")
if min_lifetime is not None:
if not isinstance(min_lifetime, integer_types):
raise SynapseError(
code=400,
msg="'min_lifetime' must be an integer",
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_min is not None
and min_lifetime < config.retention_allowed_lifetime_min
):
raise SynapseError(
code=400,
msg=(
"'min_lifetime' can't be lower than the minimum allowed"
" value enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_max is not None
and min_lifetime > config.retention_allowed_lifetime_max
):
raise SynapseError(
code=400,
msg=(
"'min_lifetime' can't be greater than the maximum allowed"
" value enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if max_lifetime is not None:
if not isinstance(max_lifetime, integer_types):
raise SynapseError(
code=400,
msg="'max_lifetime' must be an integer",
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_min is not None
and max_lifetime < config.retention_allowed_lifetime_min
):
raise SynapseError(
code=400,
msg=(
"'max_lifetime' can't be lower than the minimum allowed value"
" enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if (
config.retention_allowed_lifetime_max is not None
and max_lifetime > config.retention_allowed_lifetime_max
):
raise SynapseError(
code=400,
msg=(
"'max_lifetime' can't be greater than the maximum allowed"
" value enforced by the server's administrator"
),
errcode=Codes.BAD_JSON,
)
if (
min_lifetime is not None
and max_lifetime is not None
and min_lifetime > max_lifetime
):
raise SynapseError(
code=400,
msg="'min_lifetime' can't be greater than 'max_lifetime",
errcode=Codes.BAD_JSON,
)
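A worked example of the bounds checks above, assuming a server that allows lifetimes between one day and one year (values in milliseconds):

DAY_MS = 24 * 60 * 60 * 1000
allowed_lifetime_min, allowed_lifetime_max = DAY_MS, 365 * DAY_MS

content = {"min_lifetime": DAY_MS, "max_lifetime": 400 * DAY_MS}
# 400 days exceeds allowed_lifetime_max, so this retention event would be
# rejected with a 400 / BAD_JSON SynapseError.
assert content["max_lifetime"] > allowed_lifetime_max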
def validate_builder(self, event):
"""Validates that the builder/event has roughly the right format. Only
checks values that we expect a proto event to have, rather than all the


@@ -102,7 +102,7 @@ class FederationBase(object):
pass
if not res:
logger.warn(
logger.warning(
"Failed to find copy of %s with valid signature", pdu.event_id
)
@@ -173,7 +173,7 @@ class FederationBase(object):
return redacted_event
if self.spam_checker.check_event_for_spam(pdu):
logger.warn(
logger.warning(
"Event contains spam, redacting %s: %s",
pdu.event_id,
pdu.get_pdu_json(),
@@ -185,7 +185,7 @@ class FederationBase(object):
def errback(failure, pdu):
failure.trap(SynapseError)
with PreserveLoggingContext(ctx):
logger.warn(
logger.warning(
"Signature check failed for %s: %s",
pdu.event_id,
failure.getErrorMessage(),


@@ -18,8 +18,6 @@ import copy
import itertools
import logging
from six.moves import range
from prometheus_client import Counter
from twisted.internet import defer
@@ -39,7 +37,7 @@ from synapse.api.room_versions import (
)
from synapse.events import builder, room_version_to_event_format
from synapse.federation.federation_base import FederationBase, event_from_pdu_json
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.utils import log_function
from synapse.util import unwrapFirstError
from synapse.util.caches.expiringcache import ExpiringCache
@@ -177,7 +175,7 @@ class FederationClient(FederationBase):
given destination server.
Args:
dest (str): The remote home server to ask.
dest (str): The remote homeserver to ask.
room_id (str): The room_id to backfill.
limit (int): The maximum number of PDUs to return.
extremities (list): List of PDU id and origins of the first pdus
@@ -196,7 +194,7 @@ class FederationClient(FederationBase):
dest, room_id, extremities, limit
)
logger.debug("backfill transaction_data=%s", repr(transaction_data))
logger.debug("backfill transaction_data=%r", transaction_data)
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
@@ -227,7 +225,7 @@ class FederationClient(FederationBase):
one succeeds.
Args:
destinations (list): Which home servers to query
destinations (list): Which homeservers to query
event_id (str): event to fetch
room_version (str): version of the room
outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if
@@ -310,162 +308,26 @@ class FederationClient(FederationBase):
return signed_pdu
@defer.inlineCallbacks
@log_function
def get_state_for_room(self, destination, room_id, event_id):
"""Requests all of the room state at a given event from a remote home server.
Args:
destination (str): The remote homeserver to query for the state.
room_id (str): The id of the room we're interested in.
event_id (str): The id of the event we want the state at.
def get_room_state_ids(self, destination: str, room_id: str, event_id: str):
"""Calls the /state_ids endpoint to fetch the state at a particular point
in the room, and the auth events for the given event
Returns:
Deferred[Tuple[List[EventBase], List[EventBase]]]:
A list of events in the state, and a list of events in the auth chain
for the given event.
Tuple[List[str], List[str]]: a tuple of (state event_ids, auth event_ids)
"""
try:
# First we try and ask for just the IDs, as thats far quicker if
# we have most of the state and auth_chain already.
# However, this may 404 if the other side has an old synapse.
result = yield self.transport_layer.get_room_state_ids(
destination, room_id, event_id=event_id
)
state_event_ids = result["pdu_ids"]
auth_event_ids = result.get("auth_chain_ids", [])
fetched_events, failed_to_fetch = yield self.get_events_from_store_or_dest(
destination, room_id, set(state_event_ids + auth_event_ids)
)
if failed_to_fetch:
logger.warning(
"Failed to fetch missing state/auth events for %s: %s",
room_id,
failed_to_fetch,
)
event_map = {ev.event_id: ev for ev in fetched_events}
pdus = [event_map[e_id] for e_id in state_event_ids if e_id in event_map]
auth_chain = [
event_map[e_id] for e_id in auth_event_ids if e_id in event_map
]
auth_chain.sort(key=lambda e: e.depth)
return pdus, auth_chain
except HttpResponseException as e:
if e.code == 400 or e.code == 404:
logger.info("Failed to use get_room_state_ids API, falling back")
else:
raise e
result = yield self.transport_layer.get_room_state(
result = yield self.transport_layer.get_room_state_ids(
destination, room_id, event_id=event_id
)
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
state_event_ids = result["pdu_ids"]
auth_event_ids = result.get("auth_chain_ids", [])
pdus = [
event_from_pdu_json(p, format_ver, outlier=True) for p in result["pdus"]
]
if not isinstance(state_event_ids, list) or not isinstance(
auth_event_ids, list
):
raise Exception("invalid response from /state_ids")
auth_chain = [
event_from_pdu_json(p, format_ver, outlier=True)
for p in result.get("auth_chain", [])
]
seen_events = yield self.store.get_events(
[ev.event_id for ev in itertools.chain(pdus, auth_chain)]
)
signed_pdus = yield self._check_sigs_and_hash_and_fetch(
destination,
[p for p in pdus if p.event_id not in seen_events],
outlier=True,
room_version=room_version,
)
signed_pdus.extend(
seen_events[p.event_id] for p in pdus if p.event_id in seen_events
)
signed_auth = yield self._check_sigs_and_hash_and_fetch(
destination,
[p for p in auth_chain if p.event_id not in seen_events],
outlier=True,
room_version=room_version,
)
signed_auth.extend(
seen_events[p.event_id] for p in auth_chain if p.event_id in seen_events
)
signed_auth.sort(key=lambda e: e.depth)
return signed_pdus, signed_auth
@defer.inlineCallbacks
def get_events_from_store_or_dest(self, destination, room_id, event_ids):
"""Fetch events from a remote destination, checking if we already have them.
Args:
destination (str)
room_id (str)
event_ids (list)
Returns:
Deferred: A deferred resolving to a 2-tuple where the first is a list of
events and the second is a list of event ids that we failed to fetch.
"""
seen_events = yield self.store.get_events(event_ids, allow_rejected=True)
signed_events = list(seen_events.values())
failed_to_fetch = set()
missing_events = set(event_ids)
for k in seen_events:
missing_events.discard(k)
if not missing_events:
return signed_events, failed_to_fetch
logger.debug(
"Fetching unknown state/auth events %s for room %s",
missing_events,
event_ids,
)
room_version = yield self.store.get_room_version(room_id)
batch_size = 20
missing_events = list(missing_events)
for i in range(0, len(missing_events), batch_size):
batch = set(missing_events[i : i + batch_size])
deferreds = [
run_in_background(
self.get_pdu,
destinations=[destination],
event_id=e_id,
room_version=room_version,
)
for e_id in batch
]
res = yield make_deferred_yieldable(
defer.DeferredList(deferreds, consumeErrors=True)
)
for success, result in res:
if success and result:
signed_events.append(result)
batch.discard(result.event_id)
# We removed all events we successfully fetched from `batch`
failed_to_fetch.update(batch)
return signed_events, failed_to_fetch
return state_event_ids, auth_event_ids
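A sketch of how a caller might use this slimmed-down API: fetch the IDs first, then resolve only what is missing locally. This is illustrative, not the actual call site; client and store stand in for the federation client and datastore:

from twisted.internet import defer

@defer.inlineCallbacks
def state_at_event(client, store, destination, room_id, event_id):
    state_ids, auth_ids = yield client.get_room_state_ids(
        destination, room_id, event_id
    )
    # Events we already have locally need not be re-fetched over federation.
    have = yield store.get_events(state_ids + auth_ids, allow_rejected=True)
    missing = set(state_ids + auth_ids) - set(have)
    return have, missing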
@defer.inlineCallbacks
@log_function
@@ -522,12 +384,12 @@ class FederationClient(FederationBase):
res = yield callback(destination)
return res
except InvalidResponseError as e:
logger.warn("Failed to %s via %s: %s", description, destination, e)
logger.warning("Failed to %s via %s: %s", description, destination, e)
except HttpResponseException as e:
if not 500 <= e.code < 600:
raise e.to_synapse_error()
else:
logger.warn(
logger.warning(
"Failed to %s via %s: %i %s",
description,
destination,
@@ -535,7 +397,9 @@ class FederationClient(FederationBase):
e.args[0],
)
except Exception:
logger.warn("Failed to %s via %s", description, destination, exc_info=1)
logger.warning(
"Failed to %s via %s", description, destination, exc_info=1
)
raise SynapseError(502, "Failed to %s via any server" % (description,))
@@ -553,7 +417,7 @@ class FederationClient(FederationBase):
Note that this does not append any events to any graphs.
Args:
destinations (str): Candidate homeservers which are probably
destinations (Iterable[str]): Candidate homeservers which are probably
participating in the room.
room_id (str): The room in which the event will happen.
user_id (str): The user whose membership is being evented.


@@ -1,6 +1,7 @@
# -*- coding: utf-8 -*-
# Copyright 2015, 2016 OpenMarket Ltd
# Copyright 2018 New Vector Ltd
# Copyright 2019 Matrix.org Federation C.I.C
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -21,7 +22,6 @@ from six import iteritems
from canonicaljson import json
from prometheus_client import Counter
from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
@@ -74,6 +74,7 @@ class FederationServer(FederationBase):
self.auth = hs.get_auth()
self.handler = hs.get_handlers().federation_handler
self.state = hs.get_state_handler()
self._server_linearizer = Linearizer("fed_server")
self._transaction_linearizer = Linearizer("fed_txn_handler")
@@ -86,14 +87,12 @@ class FederationServer(FederationBase):
# come in waves.
self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)
@defer.inlineCallbacks
@log_function
def on_backfill_request(self, origin, room_id, versions, limit):
with (yield self._server_linearizer.queue((origin, room_id))):
async def on_backfill_request(self, origin, room_id, versions, limit):
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
pdus = yield self.handler.on_backfill_request(
pdus = await self.handler.on_backfill_request(
origin, room_id, versions, limit
)
@@ -101,9 +100,7 @@ class FederationServer(FederationBase):
return 200, res
@defer.inlineCallbacks
@log_function
def on_incoming_transaction(self, origin, transaction_data):
async def on_incoming_transaction(self, origin, transaction_data):
# keep this as early as possible to make the calculated origin ts as
# accurate as possible.
request_time = self._clock.time_msec()
@@ -118,18 +115,17 @@ class FederationServer(FederationBase):
# use a linearizer to ensure that we don't process the same transaction
# multiple times in parallel.
with (
yield self._transaction_linearizer.queue(
await self._transaction_linearizer.queue(
(origin, transaction.transaction_id)
)
):
result = yield self._handle_incoming_transaction(
result = await self._handle_incoming_transaction(
origin, transaction, request_time
)
return result
@defer.inlineCallbacks
def _handle_incoming_transaction(self, origin, transaction, request_time):
async def _handle_incoming_transaction(self, origin, transaction, request_time):
""" Process an incoming transaction and return the HTTP response
Args:
@@ -140,7 +136,7 @@ class FederationServer(FederationBase):
Returns:
Deferred[(int, object)]: http response code and body
"""
response = yield self.transaction_actions.have_responded(origin, transaction)
response = await self.transaction_actions.have_responded(origin, transaction)
if response:
logger.debug(
@@ -151,7 +147,7 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id)
# Reject if PDU count > 50 and EDU count > 100
# Reject if PDU count > 50 or EDU count > 100
if len(transaction.pdus) > 50 or (
hasattr(transaction, "edus") and len(transaction.edus) > 100
):
@@ -159,7 +155,7 @@ class FederationServer(FederationBase):
logger.info("Transaction PDU or EDU count too large. Returning 400")
response = {}
yield self.transaction_actions.set_response(
await self.transaction_actions.set_response(
origin, transaction, 400, response
)
return 400, response
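For reference, the size check introduced above rejects oversized transactions outright before any of their events are processed. A toy sketch of the rule (the MAX_PDUS_PER_TRANSACTION name is assumed for illustration; MAX_EDUS_PER_TRANSACTION appears in a later hunk):

MAX_PDUS_PER_TRANSACTION = 50
MAX_EDUS_PER_TRANSACTION = 100

def transaction_too_large(pdus, edus):
    # More than 50 PDUs or more than 100 EDUs gets a 400 straight away.
    return len(pdus) > MAX_PDUS_PER_TRANSACTION or len(edus) > MAX_EDUS_PER_TRANSACTION

assert transaction_too_large(pdus=[None] * 51, edus=[])
assert not transaction_too_large(pdus=[None] * 50, edus=[None] * 100)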
@@ -195,7 +191,7 @@ class FederationServer(FederationBase):
continue
try:
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
except NotFoundError:
logger.info("Ignoring PDU for unknown room_id: %s", room_id)
continue
@@ -221,13 +217,12 @@ class FederationServer(FederationBase):
# require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu.
@defer.inlineCallbacks
def process_pdus_for_room(room_id):
async def process_pdus_for_room(room_id):
logger.debug("Processing PDUs for %s", room_id)
try:
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warn("Ignoring PDUs for room %s from banned server", room_id)
logger.warning("Ignoring PDUs for room %s from banned server", room_id)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
@@ -237,10 +232,10 @@ class FederationServer(FederationBase):
event_id = pdu.event_id
with nested_logging_context(event_id):
try:
yield self._handle_received_pdu(origin, pdu)
await self._handle_received_pdu(origin, pdu)
pdu_results[event_id] = {}
except FederationError as e:
logger.warn("Error handling PDU %s: %s", event_id, e)
logger.warning("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
f = failure.Failure()
@@ -251,36 +246,30 @@ class FederationServer(FederationBase):
exc_info=(f.type, f.value, f.getTracebackObject()),
)
yield concurrently_execute(
await concurrently_execute(
process_pdus_for_room, pdus_by_room.keys(), TRANSACTION_CONCURRENCY_LIMIT
)
if hasattr(transaction, "edus"):
for edu in (Edu(**x) for x in transaction.edus):
yield self.received_edu(origin, edu.edu_type, edu.content)
await self.received_edu(origin, edu.edu_type, edu.content)
response = {"pdus": pdu_results}
logger.debug("Returning: %s", str(response))
yield self.transaction_actions.set_response(origin, transaction, 200, response)
await self.transaction_actions.set_response(origin, transaction, 200, response)
return 200, response
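The have_responded / set_response pair above gives transactions replay protection. A hedged sketch of the flow, reusing the store method names that appear in the TransactionActions hunk further down:

async def handle_transaction(store, origin, transaction, process):
    # If we already handled this (origin, transaction_id) pair, replay the
    # stored response rather than re-processing the PDUs.
    cached = await store.get_received_txn_response(transaction.transaction_id, origin)
    if cached:
        return cached
    code, body = await process(transaction)
    # Record the outcome so a retransmitted transaction gets the same answer.
    await store.set_received_txn_response(transaction.transaction_id, origin, code, body)
    return code, body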
@defer.inlineCallbacks
def received_edu(self, origin, edu_type, content):
async def received_edu(self, origin, edu_type, content):
received_edus_counter.inc()
yield self.registry.on_edu(edu_type, origin, content)
@defer.inlineCallbacks
@log_function
def on_context_state_request(self, origin, room_id, event_id):
if not event_id:
raise NotImplementedError("Specify an event")
await self.registry.on_edu(edu_type, origin, content)
async def on_context_state_request(self, origin, room_id, event_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
in_room = await self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@@ -289,75 +278,79 @@ class FederationServer(FederationBase):
# in the cache so we could return it without waiting for the linearizer
# - but that's non-trivial to get right, and anyway somewhat defeats
# the point of the linearizer.
with (yield self._server_linearizer.queue((origin, room_id))):
resp = yield self._state_resp_cache.wrap(
(room_id, event_id),
self._on_context_state_request_compute,
room_id,
event_id,
with (await self._server_linearizer.queue((origin, room_id))):
resp = dict(
await self._state_resp_cache.wrap(
(room_id, event_id),
self._on_context_state_request_compute,
room_id,
event_id,
)
)
room_version = await self.store.get_room_version(room_id)
resp["room_version"] = room_version
return 200, resp
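Note the dict() around the cached result: the ResponseCache hands the same object to every concurrent caller, so room_version has to be added to a copy rather than to the cached value itself. A tiny illustration:

cached = {"pdus": [], "auth_chain": []}  # the value the ResponseCache holds
resp = dict(cached)                      # shallow copy for this request only
resp["room_version"] = "5"               # mutate the copy, not the cache
assert "room_version" not in cached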
@defer.inlineCallbacks
def on_state_ids_request(self, origin, room_id, event_id):
async def on_state_ids_request(self, origin, room_id, event_id):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
in_room = await self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
state_ids = yield self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = yield self.store.get_auth_chain_ids(state_ids)
state_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = await self.store.get_auth_chain_ids(state_ids)
return 200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
@defer.inlineCallbacks
def _on_context_state_request_compute(self, room_id, event_id):
pdus = yield self.handler.get_state_for_pdu(room_id, event_id)
auth_chain = yield self.store.get_auth_chain([pdu.event_id for pdu in pdus])
async def _on_context_state_request_compute(self, room_id, event_id):
if event_id:
pdus = await self.handler.get_state_for_pdu(room_id, event_id)
else:
pdus = (await self.state.get_current_state(room_id)).values()
auth_chain = await self.store.get_auth_chain([pdu.event_id for pdu in pdus])
return {
"pdus": [pdu.get_pdu_json() for pdu in pdus],
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
}
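The compute function now covers both cases: with an event_id it returns the state at that event, and without one it falls back to the room's current state (which is what lets the /state servlet further down make event_id optional). A minimal sketch of the branch:

async def state_pdus(handler, state, room_id, event_id=None):
    # Historical state at a specific event, or current state when no
    # event_id was supplied.
    if event_id:
        return await handler.get_state_for_pdu(room_id, event_id)
    return list((await state.get_current_state(room_id)).values())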
@defer.inlineCallbacks
@log_function
def on_pdu_request(self, origin, event_id):
pdu = yield self.handler.get_persisted_pdu(origin, event_id)
async def on_pdu_request(self, origin, event_id):
pdu = await self.handler.get_persisted_pdu(origin, event_id)
if pdu:
return 200, self._transaction_from_pdus([pdu]).get_dict()
else:
return 404, ""
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
async def on_query_request(self, query_type, args):
received_queries_counter.labels(query_type).inc()
resp = yield self.registry.on_query(query_type, args)
resp = await self.registry.on_query(query_type, args)
return 200, resp
@defer.inlineCallbacks
def on_make_join_request(self, origin, room_id, user_id, supported_versions):
async def on_make_join_request(self, origin, room_id, user_id, supported_versions):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
if room_version not in supported_versions:
logger.warn("Room version %s not in %s", room_version, supported_versions)
logger.warning(
"Room version %s not in %s", room_version, supported_versions
)
raise IncompatibleRoomVersionError(room_version=room_version)
pdu = yield self.handler.on_make_join_request(origin, room_id, user_id)
pdu = await self.handler.on_make_join_request(origin, room_id, user_id)
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@defer.inlineCallbacks
def on_invite_request(self, origin, content, room_version):
async def on_invite_request(self, origin, content, room_version):
if room_version not in KNOWN_ROOM_VERSIONS:
raise SynapseError(
400,
@@ -369,28 +362,27 @@ class FederationServer(FederationBase):
pdu = event_from_pdu_json(content, format_ver)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
pdu = yield self._check_sigs_and_hash(room_version, pdu)
ret_pdu = yield self.handler.on_invite_request(origin, pdu)
await self.check_server_matches_acl(origin_host, pdu.room_id)
pdu = await self._check_sigs_and_hash(room_version, pdu)
ret_pdu = await self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec()
return {"event": ret_pdu.get_pdu_json(time_now)}
@defer.inlineCallbacks
def on_send_join_request(self, origin, content, room_id):
async def on_send_join_request(self, origin, content, room_id):
logger.debug("on_send_join_request: content: %s", content)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(content, format_ver)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
pdu = yield self._check_sigs_and_hash(room_version, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
res_pdus = yield self.handler.on_send_join_request(origin, pdu)
res_pdus = await self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
return (
200,
@@ -402,48 +394,44 @@ class FederationServer(FederationBase):
},
)
@defer.inlineCallbacks
def on_make_leave_request(self, origin, room_id, user_id):
async def on_make_leave_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_leave_request(origin, room_id, user_id)
await self.check_server_matches_acl(origin_host, room_id)
pdu = await self.handler.on_make_leave_request(origin, room_id, user_id)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@defer.inlineCallbacks
def on_send_leave_request(self, origin, content, room_id):
async def on_send_leave_request(self, origin, content, room_id):
logger.debug("on_send_leave_request: content: %s", content)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(content, format_ver)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
pdu = yield self._check_sigs_and_hash(room_version, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
yield self.handler.on_send_leave_request(origin, pdu)
await self.handler.on_send_leave_request(origin, pdu)
return 200, {}
@defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id):
with (yield self._server_linearizer.queue((origin, room_id))):
async def on_event_auth(self, origin, room_id, event_id):
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
auth_pdus = await self.handler.on_event_auth(event_id)
res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
return 200, res
@defer.inlineCallbacks
def on_query_auth_request(self, origin, content, room_id, event_id):
async def on_query_auth_request(self, origin, content, room_id, event_id):
"""
Content is a dict with keys::
auth_chain (list): A list of events that give the auth chain.
@@ -462,22 +450,22 @@ class FederationServer(FederationBase):
Returns:
Deferred: Results in `dict` with the same format as `content`
"""
with (yield self._server_linearizer.queue((origin, room_id))):
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
auth_chain = [
event_from_pdu_json(e, format_ver) for e in content["auth_chain"]
]
signed_auth = yield self._check_sigs_and_hash_and_fetch(
signed_auth = await self._check_sigs_and_hash_and_fetch(
origin, auth_chain, outlier=True, room_version=room_version
)
ret = yield self.handler.on_query_auth(
ret = await self.handler.on_query_auth(
origin,
event_id,
room_id,
@@ -503,16 +491,14 @@ class FederationServer(FederationBase):
return self.on_query_request("user_devices", user_id)
@trace
@defer.inlineCallbacks
@log_function
def on_claim_client_keys(self, origin, content):
async def on_claim_client_keys(self, origin, content):
query = []
for user_id, device_keys in content.get("one_time_keys", {}).items():
for device_id, algorithm in device_keys.items():
query.append((user_id, device_id, algorithm))
log_kv({"message": "Claiming one time keys.", "user, device pairs": query})
results = yield self.store.claim_e2e_one_time_keys(query)
results = await self.store.claim_e2e_one_time_keys(query)
json_result = {}
for user_id, device_keys in results.items():
@@ -536,14 +522,12 @@ class FederationServer(FederationBase):
return {"one_time_keys": json_result}
@defer.inlineCallbacks
@log_function
def on_get_missing_events(
async def on_get_missing_events(
self, origin, room_id, earliest_events, latest_events, limit
):
with (yield self._server_linearizer.queue((origin, room_id))):
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
logger.info(
"on_get_missing_events: earliest_events: %r, latest_events: %r,"
@@ -553,7 +537,7 @@ class FederationServer(FederationBase):
limit,
)
missing_events = yield self.handler.on_get_missing_events(
missing_events = await self.handler.on_get_missing_events(
origin, room_id, earliest_events, latest_events, limit
)
@@ -586,8 +570,7 @@ class FederationServer(FederationBase):
destination=None,
)
@defer.inlineCallbacks
def _handle_received_pdu(self, origin, pdu):
async def _handle_received_pdu(self, origin, pdu):
""" Process a PDU received in a federation /send/ transaction.
If the event is invalid, then this method throws a FederationError.
@@ -640,37 +623,34 @@ class FederationServer(FederationBase):
logger.info("Accepting join PDU %s from %s", pdu.event_id, origin)
# We've already checked that we know the room version by this point
room_version = yield self.store.get_room_version(pdu.room_id)
room_version = await self.store.get_room_version(pdu.room_id)
# Check signature.
try:
pdu = yield self._check_sigs_and_hash(room_version, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
except SynapseError as e:
raise FederationError("ERROR", e.code, e.msg, affected=pdu.event_id)
yield self.handler.on_receive_pdu(origin, pdu, sent_to_us_directly=True)
await self.handler.on_receive_pdu(origin, pdu, sent_to_us_directly=True)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name
@defer.inlineCallbacks
def exchange_third_party_invite(
async def exchange_third_party_invite(
self, sender_user_id, target_user_id, room_id, signed
):
ret = yield self.handler.exchange_third_party_invite(
ret = await self.handler.exchange_third_party_invite(
sender_user_id, target_user_id, room_id, signed
)
return ret
@defer.inlineCallbacks
def on_exchange_third_party_invite_request(self, room_id, event_dict):
ret = yield self.handler.on_exchange_third_party_invite_request(
async def on_exchange_third_party_invite_request(self, room_id, event_dict):
ret = await self.handler.on_exchange_third_party_invite_request(
room_id, event_dict
)
return ret
@defer.inlineCallbacks
def check_server_matches_acl(self, server_name, room_id):
async def check_server_matches_acl(self, server_name, room_id):
"""Check if the given server is allowed by the server ACLs in the room
Args:
@@ -680,13 +660,13 @@ class FederationServer(FederationBase):
Raises:
AuthError if the server does not match the ACL
"""
state_ids = yield self.store.get_current_state_ids(room_id)
state_ids = await self.store.get_current_state_ids(room_id)
acl_event_id = state_ids.get((EventTypes.ServerACL, ""))
if not acl_event_id:
return
acl_event = yield self.store.get_event(acl_event_id)
acl_event = await self.store.get_event(acl_event_id)
if server_matches_acl_event(server_name, acl_event):
return
@@ -709,7 +689,7 @@ def server_matches_acl_event(server_name, acl_event):
# server name is a literal IP
allow_ip_literals = acl_event.content.get("allow_ip_literals", True)
if not isinstance(allow_ip_literals, bool):
logger.warn("Ignorning non-bool allow_ip_literals flag")
logger.warning("Ignorning non-bool allow_ip_literals flag")
allow_ip_literals = True
if not allow_ip_literals:
# check for ipv6 literals. These start with '['.
@@ -723,7 +703,7 @@ def server_matches_acl_event(server_name, acl_event):
# next, check the deny list
deny = acl_event.content.get("deny", [])
if not isinstance(deny, (list, tuple)):
logger.warn("Ignorning non-list deny ACL %s", deny)
logger.warning("Ignorning non-list deny ACL %s", deny)
deny = []
for e in deny:
if _acl_entry_matches(server_name, e):
@@ -733,7 +713,7 @@ def server_matches_acl_event(server_name, acl_event):
# then the allow list.
allow = acl_event.content.get("allow", [])
if not isinstance(allow, (list, tuple)):
logger.warn("Ignorning non-list allow ACL %s", allow)
logger.warning("Ignorning non-list allow ACL %s", allow)
allow = []
for e in allow:
if _acl_entry_matches(server_name, e):
@@ -747,7 +727,7 @@ def server_matches_acl_event(server_name, acl_event):
def _acl_entry_matches(server_name, acl_entry):
if not isinstance(acl_entry, six.string_types):
logger.warn(
logger.warning(
"Ignoring non-str ACL entry '%s' (is %s)", acl_entry, type(acl_entry)
)
return False
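server_matches_acl_event checks the deny list first and then the allow list, and entries may contain wildcards. A hedged sketch of entry matching, using fnmatch purely for illustration (not necessarily the matcher Synapse uses):

from fnmatch import fnmatch

def acl_entry_matches(server_name, acl_entry):
    # Mirror the type check above: non-string entries are ignored.
    if not isinstance(acl_entry, str):
        return False
    # Entries such as "*.evil.example" match any subdomain.
    return fnmatch(server_name, acl_entry)

assert acl_entry_matches("bad.evil.example", "*.evil.example")
assert not acl_entry_matches("good.example", "*.evil.example")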
@@ -799,15 +779,14 @@ class FederationHandlerRegistry(object):
self.query_handlers[query_type] = handler
@defer.inlineCallbacks
def on_edu(self, edu_type, origin, content):
async def on_edu(self, edu_type, origin, content):
handler = self.edu_handlers.get(edu_type)
if not handler:
logger.warn("No handler registered for EDU type %s", edu_type)
logger.warning("No handler registered for EDU type %s", edu_type)
with start_active_span_from_edu(content, "handle_edu"):
try:
yield handler(origin, content)
await handler(origin, content)
except SynapseError as e:
logger.info("Failed to handle edu %r: %r", edu_type, e)
except Exception:
@@ -816,7 +795,7 @@ class FederationHandlerRegistry(object):
def on_query(self, query_type, args):
handler = self.query_handlers.get(query_type)
if not handler:
logger.warn("No handler registered for query type %s", query_type)
logger.warning("No handler registered for query type %s", query_type)
raise NotFoundError("No handler for Query type '%s'" % (query_type,))
return handler(args)
@@ -840,7 +819,7 @@ class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
super(ReplicationFederationHandlerRegistry, self).__init__()
def on_edu(self, edu_type, origin, content):
async def on_edu(self, edu_type, origin, content):
"""Overrides FederationHandlerRegistry
"""
if not self.config.use_presence and edu_type == "m.presence":
@@ -848,17 +827,17 @@ class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
handler = self.edu_handlers.get(edu_type)
if handler:
return super(ReplicationFederationHandlerRegistry, self).on_edu(
return await super(ReplicationFederationHandlerRegistry, self).on_edu(
edu_type, origin, content
)
return self._send_edu(edu_type=edu_type, origin=origin, content=content)
return await self._send_edu(edu_type=edu_type, origin=origin, content=content)
def on_query(self, query_type, args):
async def on_query(self, query_type, args):
"""Overrides FederationHandlerRegistry
"""
handler = self.query_handlers.get(query_type)
if handler:
return handler(args)
return await handler(args)
return self._get_query_client(query_type=query_type, args=args)
return await self._get_query_client(query_type=query_type, args=args)

View File

@@ -44,7 +44,7 @@ class TransactionActions(object):
response code and response body.
"""
if not transaction.transaction_id:
raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
raise RuntimeError("Cannot persist a transaction with no transaction_id")
return self.store.get_received_txn_response(transaction.transaction_id, origin)
@@ -56,7 +56,7 @@ class TransactionActions(object):
Deferred
"""
if not transaction.transaction_id:
raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
raise RuntimeError("Cannot persist a transaction with no transaction_id")
return self.store.set_received_txn_response(
transaction.transaction_id, origin, code, response

View File

@@ -36,6 +36,8 @@ from six import iteritems
from sortedcontainers import SortedDict
from twisted.internet import defer
from synapse.metrics import LaterGauge
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
@@ -212,7 +214,7 @@ class FederationRemoteSendQueue(object):
receipt (synapse.types.ReadReceipt):
"""
# nothing to do here: the replication listener will handle it.
pass
return defer.succeed(None)
def send_presence(self, states):
"""As per FederationSender

View File

@@ -49,7 +49,7 @@ sent_pdus_destination_dist_count = Counter(
sent_pdus_destination_dist_total = Counter(
"synapse_federation_client_sent_pdu_destinations:total",
"" "Total number of PDUs queued for sending across all destinations",
"Total number of PDUs queued for sending across all destinations",
)

View File

@@ -192,15 +192,16 @@ class PerDestinationQueue(object):
# We have to keep 2 free slots for presence and rr_edus
limit = MAX_EDUS_PER_TRANSACTION - 2
device_update_edus, dev_list_id = (
yield self._get_device_update_edus(limit)
device_update_edus, dev_list_id = yield self._get_device_update_edus(
limit
)
limit -= len(device_update_edus)
to_device_edus, device_stream_id = (
yield self._get_to_device_message_edus(limit)
)
(
to_device_edus,
device_stream_id,
) = yield self._get_to_device_message_edus(limit)
pending_edus = device_update_edus + to_device_edus
@@ -359,20 +360,20 @@ class PerDestinationQueue(object):
last_device_list = self._last_device_list_stream_id
# Retrieve list of new device updates to send to the destination
now_stream_id, results = yield self._store.get_devices_by_remote(
now_stream_id, results = yield self._store.get_device_updates_by_remote(
self._destination, last_device_list, limit=limit
)
edus = [
Edu(
origin=self._server_name,
destination=self._destination,
edu_type="m.device_list_update",
edu_type=edu_type,
content=content,
)
for content in results
for (edu_type, content) in results
]
assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
assert len(edus) <= limit, "get_device_updates_by_remote returned too many EDUs"
return (edus, now_stream_id)
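With get_device_updates_by_remote the store returns (edu_type, content) pairs instead of bare contents, letting one stream carry both device-list and signing-key updates. In miniature (values illustrative):

results = [
    ("m.device_list_update", {"user_id": "@alice:example.com", "device_id": "DEV1"}),
    ("org.matrix.signing_key_update", {"user_id": "@alice:example.com"}),
]
edus = [
    {"edu_type": edu_type, "content": content}
    for (edu_type, content) in results
]
assert [e["edu_type"] for e in edus] == [
    "m.device_list_update",
    "org.matrix.signing_key_update",
]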

View File

@@ -84,7 +84,7 @@ class TransactionManager(object):
txn_id = str(self._next_txn_id)
logger.debug(
"TX [%s] {%s} Attempting new transaction" " (pdus: %d, edus: %d)",
"TX [%s] {%s} Attempting new transaction (pdus: %d, edus: %d)",
destination,
txn_id,
len(pdus),
@@ -103,7 +103,7 @@ class TransactionManager(object):
self._next_txn_id += 1
logger.info(
"TX [%s] {%s} Sending transaction [%s]," " (PDUs: %d, EDUs: %d)",
"TX [%s] {%s} Sending transaction [%s], (PDUs: %d, EDUs: %d)",
destination,
txn_id,
transaction.transaction_id,
@@ -146,7 +146,7 @@ class TransactionManager(object):
if code == 200:
for e_id, r in response.get("pdus", {}).items():
if "error" in r:
logger.warn(
logger.warning(
"TX [%s] {%s} Remote returned error for %s: %s",
destination,
txn_id,
@@ -155,7 +155,7 @@ class TransactionManager(object):
)
else:
for p in pdus:
logger.warn(
logger.warning(
"TX [%s] {%s} Failed to send event %s",
destination,
txn_id,

View File

@@ -14,9 +14,9 @@
# limitations under the License.
"""The transport layer is responsible for both sending transactions to remote
home servers and receiving a variety of requests from other home servers.
homeservers and receiving a variety of requests from other homeservers.
By default this is done over HTTPS (and all home servers are required to
By default this is done over HTTPS (and all homeservers are required to
support HTTPS), however individual pairings of servers may decide to
communicate over a different (albeit still reliable) protocol.
"""

View File

@@ -38,37 +38,13 @@ class TransportLayerClient(object):
self.server_name = hs.hostname
self.client = hs.get_http_client()
@log_function
def get_room_state(self, destination, room_id, event_id):
""" Requests all state for a given room from the given server at the
given event.
Args:
destination (str): The host name of the remote home server we want
to get the state from.
context (str): The name of the context we want the state of
event_id (str): The event we want the context at.
Returns:
Deferred: Results in a dict received from the remote homeserver.
"""
logger.debug("get_room_state dest=%s, room=%s", destination, room_id)
path = _create_v1_path("/state/%s", room_id)
return self.client.get_json(
destination,
path=path,
args={"event_id": event_id},
try_trailing_slash_on_400=True,
)
@log_function
def get_room_state_ids(self, destination, room_id, event_id):
""" Requests all state for a given room from the given server at the
given event. Returns the state's event_id's
Args:
destination (str): The host name of the remote home server we want
destination (str): The host name of the remote homeserver we want
to get the state from.
context (str): The name of the context we want the state of
event_id (str): The event we want the context at.
@@ -91,7 +67,7 @@ class TransportLayerClient(object):
""" Requests the pdu with give id and origin from the given server.
Args:
destination (str): The host name of the remote home server we want
destination (str): The host name of the remote homeserver we want
to get the state from.
event_id (str): The id of the event being requested.
timeout (int): How long to try (in ms) the destination for before
@@ -122,10 +98,10 @@ class TransportLayerClient(object):
Deferred: Results in a dict received from the remote homeserver.
"""
logger.debug(
"backfill dest=%s, room_id=%s, event_tuples=%s, limit=%s",
"backfill dest=%s, room_id=%s, event_tuples=%r, limit=%s",
destination,
room_id,
repr(event_tuples),
event_tuples,
str(limit),
)

View File

@@ -202,7 +202,7 @@ def _parse_auth_header(header_bytes):
sig = strip_quotes(param_dict["sig"])
return origin, key, sig
except Exception as e:
logger.warn(
logger.warning(
"Error parsing auth header '%s': %s",
header_bytes.decode("ascii", "replace"),
e,
@@ -287,10 +287,12 @@ class BaseFederationServlet(object):
except NoAuthenticationError:
origin = None
if self.REQUIRE_AUTH:
logger.warn("authenticate_request failed: missing authentication")
logger.warning(
"authenticate_request failed: missing authentication"
)
raise
except Exception as e:
logger.warn("authenticate_request failed: %s", e)
logger.warning("authenticate_request failed: %s", e)
raise
request_tags = {
@@ -419,7 +421,7 @@ class FederationEventServlet(BaseFederationServlet):
return await self.handler.on_pdu_request(origin, event_id)
class FederationStateServlet(BaseFederationServlet):
class FederationStateV1Servlet(BaseFederationServlet):
PATH = "/state/(?P<context>[^/]*)/?"
# This is when someone asks for all data for a given context.
@@ -427,7 +429,7 @@ class FederationStateServlet(BaseFederationServlet):
return await self.handler.on_context_state_request(
origin,
context,
parse_string_from_args(query, "event_id", None, required=True),
parse_string_from_args(query, "event_id", None, required=False),
)
@@ -712,7 +714,7 @@ class PublicRoomList(BaseFederationServlet):
This API returns information in the same format as /publicRooms on the
client API, but will only ever include local public rooms and hence is
intended for consumption by other home servers.
intended for consumption by other homeservers.
GET /publicRooms HTTP/1.1
@@ -1358,7 +1360,7 @@ class RoomComplexityServlet(BaseFederationServlet):
FEDERATION_SERVLET_CLASSES = (
FederationSendServlet,
FederationEventServlet,
FederationStateServlet,
FederationStateV1Servlet,
FederationStateIdsServlet,
FederationBackfillServlet,
FederationQueryServlet,

View File

@@ -181,7 +181,7 @@ class GroupAttestionRenewer(object):
elif not self.is_mine_id(user_id):
destination = get_domain_from_id(user_id)
else:
logger.warn(
logger.warning(
"Incorrectly trying to do attestations for user: %r in %r",
user_id,
group_id,

View File

@@ -488,7 +488,7 @@ class GroupsServerHandler(object):
profile = yield self.profile_handler.get_profile_from_cache(user_id)
user_profile.update(profile)
except Exception as e:
logger.warn("Error getting profile for %s: %s", user_id, e)
logger.warning("Error getting profile for %s: %s", user_id, e)
user_profiles.append(user_profile)
return {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}

View File

@@ -38,9 +38,10 @@ class AccountDataEventSource(object):
{"type": "m.tag", "content": {"tags": room_tags}, "room_id": room_id}
)
account_data, room_account_data = (
yield self.store.get_updated_account_data_for_user(user_id, last_stream_id)
)
(
account_data,
room_account_data,
) = yield self.store.get_updated_account_data_for_user(user_id, last_stream_id)
for account_data_type, content in account_data.items():
results.append({"type": account_data_type, "content": content})

View File

@@ -30,6 +30,9 @@ class AdminHandler(BaseHandler):
def __init__(self, hs):
super(AdminHandler, self).__init__(hs)
self.storage = hs.get_storage()
self.state_store = self.storage.state
@defer.inlineCallbacks
def get_whois(self, user):
connections = []
@@ -53,7 +56,7 @@ class AdminHandler(BaseHandler):
@defer.inlineCallbacks
def get_users(self):
"""Function to reterive a list of users in users table.
"""Function to retrieve a list of users in users table.
Args:
Returns:
@@ -64,19 +67,22 @@ class AdminHandler(BaseHandler):
return ret
@defer.inlineCallbacks
def get_users_paginate(self, order, start, limit):
"""Function to reterive a paginated list of users from
users list. This will return a json object, which contains
list of users and the total number of users in users table.
def get_users_paginate(self, start, limit, name, guests, deactivated):
"""Function to retrieve a paginated list of users from
users list. This will return a json list of users.
Args:
order (str): column name to order the select by this column
start (int): start number to begin the query from
limit (int): number of rows to reterive
limit (int): number of rows to retrieve
name (string): filter for user names
guests (bool): whether to in include guest users
deactivated (bool): whether to include deactivated users
Returns:
defer.Deferred: resolves to json object {list[dict[str, Any]], count}
defer.Deferred: resolves to json list[dict[str, Any]]
"""
ret = yield self.store.get_users_paginate(order, start, limit)
ret = yield self.store.get_users_paginate(
start, limit, name, guests, deactivated
)
return ret
@@ -205,7 +211,7 @@ class AdminHandler(BaseHandler):
from_key = events[-1].internal_metadata.after
events = yield filter_events_for_client(self.store, user_id, events)
events = yield filter_events_for_client(self.storage, user_id, events)
writer.write_events(room_id, events)
@@ -241,7 +247,7 @@ class AdminHandler(BaseHandler):
for event_id in extremities:
if not event_to_unseen_prevs[event_id]:
continue
state = yield self.store.get_state_for_event(event_id)
state = yield self.state_store.get_state_for_event(event_id)
writer.write_state(room_id, event_id, state)
return writer.finished()

View File

@@ -73,7 +73,10 @@ class ApplicationServicesHandler(object):
try:
limit = 100
while True:
upper_bound, events = yield self.store.get_new_events_for_appservice(
(
upper_bound,
events,
) = yield self.store.get_new_events_for_appservice(
self.current_max, limit
)

View File

@@ -102,8 +102,9 @@ class AuthHandler(BaseHandler):
login_types.append(t)
self._supported_login_types = login_types
self._account_ratelimiter = Ratelimiter()
self._failed_attempts_ratelimiter = Ratelimiter()
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.
self._failed_uia_attempts_ratelimiter = Ratelimiter()
self._clock = self.hs.get_clock()
@@ -133,12 +134,38 @@ class AuthHandler(BaseHandler):
AuthError if the client has completed a login flow, and it gives
a different user to `requester`
LimitExceededError if the ratelimiter's failed request count for this
user is too high to proceed
"""
user_id = requester.user.to_string()
# Check if we should be ratelimited due to too many previous failed attempts
self._failed_uia_attempts_ratelimiter.ratelimit(
user_id,
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=False,
)
# build a list of supported flows
flows = [[login_type] for login_type in self._supported_login_types]
result, params, _ = yield self.check_auth(flows, request_body, clientip)
try:
result, params, _ = yield self.check_auth(flows, request_body, clientip)
except LoginError:
# Update the ratelimiter to say we failed (`can_do_action` doesn't raise).
self._failed_uia_attempts_ratelimiter.can_do_action(
user_id,
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=True,
)
raise
# find the completed login type
for login_type in self._supported_login_types:
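The new UIA flow checks the failed-attempts limiter with update=False before doing any work, and only counts the attempt (update=True) once the auth check has actually failed. A toy illustration of that check-then-record shape, standing in for Synapse's Ratelimiter:

class ToyRatelimiter:
    def __init__(self, burst_count):
        self.burst_count = burst_count
        self.failures = {}

    def check(self, key):
        # update=False analogue: refuse if over budget, but don't count
        # this attempt yet.
        if self.failures.get(key, 0) >= self.burst_count:
            raise RuntimeError("Too many failed attempts")

    def record_failure(self, key):
        # update=True analogue: only called once the attempt has failed.
        self.failures[key] = self.failures.get(key, 0) + 1

limiter = ToyRatelimiter(burst_count=3)
limiter.check("@alice:example.com")           # passes while under budget
limiter.record_failure("@alice:example.com")  # a failed UIA attempt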
@@ -223,7 +250,7 @@ class AuthHandler(BaseHandler):
# could continue registration from your phone having clicked the
# email auth link on there). It's probably too open to abuse
# because it lets unauthenticated clients store arbitrary objects
# on a home server.
# on a homeserver.
# Revisit: Assuming the REST APIs do sensible validation, the data
# isn't arbitrary.
session["clientdict"] = clientdict
@@ -501,11 +528,8 @@ class AuthHandler(BaseHandler):
multiple matches
Raises:
LimitExceededError if the ratelimiter's login requests count for this
user is too high to proceed.
UserDeactivatedError if a user is found but is deactivated.
"""
self.ratelimit_login_per_account(user_id)
res = yield self._find_user_id_and_pwd_hash(user_id)
if res is not None:
return res[0]
@@ -525,7 +549,7 @@ class AuthHandler(BaseHandler):
result = None
if not user_infos:
logger.warn("Attempted to login as %s but they do not exist", user_id)
logger.warning("Attempted to login as %s but they do not exist", user_id)
elif len(user_infos) == 1:
# a single match (possibly not exact)
result = user_infos.popitem()
@@ -534,7 +558,7 @@ class AuthHandler(BaseHandler):
result = (user_id, user_infos[user_id])
else:
# multiple matches, none of them exact
logger.warn(
logger.warning(
"Attempted to login as %s but it matches more than one user "
"inexactly: %r",
user_id,
@@ -572,8 +596,6 @@ class AuthHandler(BaseHandler):
StoreError if there was a problem accessing the database
SynapseError if there was a problem with the request
LoginError if there was an authentication problem.
LimitExceededError if the ratelimiter's login requests count for this
user is too high to proceed.
"""
if username.startswith("@"):
@@ -581,8 +603,6 @@ class AuthHandler(BaseHandler):
else:
qualified_user_id = UserID(username, self.hs.hostname).to_string()
self.ratelimit_login_per_account(qualified_user_id)
login_type = login_submission.get("type")
known_login_type = False
@@ -650,15 +670,6 @@ class AuthHandler(BaseHandler):
if not known_login_type:
raise SynapseError(400, "Unknown login type %s" % login_type)
# unknown username or invalid password.
self._failed_attempts_ratelimiter.ratelimit(
qualified_user_id.lower(),
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=True,
)
# We raise a 403 here, but note that if we're doing user-interactive
# login, it turns all LoginErrors into a 401 anyway.
raise LoginError(403, "Invalid password", errcode=Codes.FORBIDDEN)
@@ -710,10 +721,6 @@ class AuthHandler(BaseHandler):
Returns:
Deferred[unicode] the canonical_user_id, or Deferred[None] if
unknown user/bad password
Raises:
LimitExceededError if the ratelimiter's login requests count for this
user is too high to proceed.
"""
lookupres = yield self._find_user_id_and_pwd_hash(user_id)
if not lookupres:
@@ -728,7 +735,7 @@ class AuthHandler(BaseHandler):
result = yield self.validate_hash(password, password_hash)
if not result:
logger.warn("Failed password login for user %s", user_id)
logger.warning("Failed password login for user %s", user_id)
return None
return user_id
@@ -742,7 +749,7 @@ class AuthHandler(BaseHandler):
auth_api.validate_macaroon(macaroon, "login", user_id)
except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
self.ratelimit_login_per_account(user_id)
yield self.auth.check_auth_blocking(user_id)
return user_id
@@ -810,7 +817,7 @@ class AuthHandler(BaseHandler):
@defer.inlineCallbacks
def add_threepid(self, user_id, medium, address, validated_at):
# 'Canonicalise' email addresses down to lower case.
# We've now moving towards the Home Server being the entity that
# We're now moving towards the homeserver being the entity that
# is responsible for validating threepids used for resetting passwords
# on accounts, so in future Synapse will gain knowledge of specific
# types (mediums) of threepid. For now, we still use the existing
@@ -912,35 +919,6 @@ class AuthHandler(BaseHandler):
else:
return defer.succeed(False)
def ratelimit_login_per_account(self, user_id):
"""Checks whether the process must be stopped because of ratelimiting.
Checks against two ratelimiters: the generic one for login attempts per
account and the one specific to failed attempts.
Args:
user_id (unicode): complete @user:id
Raises:
LimitExceededError if one of the ratelimiters' login requests count
for this user is too high to proceed.
"""
self._failed_attempts_ratelimiter.ratelimit(
user_id.lower(),
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=False,
)
self._account_ratelimiter.ratelimit(
user_id.lower(),
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_account.per_second,
burst_count=self.hs.config.rc_login_account.burst_count,
update=True,
)
@attr.s
class MacaroonGenerator(object):

View File

@@ -95,6 +95,9 @@ class DeactivateAccountHandler(BaseHandler):
user_id, threepid["medium"], threepid["address"]
)
# Remove all 3PIDs this user has bound to the homeserver
yield self.store.user_delete_threepids(user_id)
# delete any devices belonging to the user, which will also
# delete corresponding access tokens.
yield self._device_handler.delete_all_devices_for_user(user_id)

View File

@@ -46,6 +46,7 @@ class DeviceWorkerHandler(BaseHandler):
self.hs = hs
self.state = hs.get_state_handler()
self.state_store = hs.get_storage().state
self._auth_handler = hs.get_auth_handler()
@trace
@@ -178,7 +179,7 @@ class DeviceWorkerHandler(BaseHandler):
continue
# mapping from event_id -> state_dict
prev_state_ids = yield self.store.get_state_ids_for_events(event_ids)
prev_state_ids = yield self.state_store.get_state_ids_for_events(event_ids)
# Check if we've joined the room? If so we just blindly add all the users to
# the "possibly changed" users.
@@ -458,7 +459,18 @@ class DeviceHandler(DeviceWorkerHandler):
@defer.inlineCallbacks
def on_federation_query_user_devices(self, user_id):
stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
return {"user_id": user_id, "stream_id": stream_id, "devices": devices}
master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master")
self_signing_key = yield self.store.get_e2e_cross_signing_key(
user_id, "self_signing"
)
return {
"user_id": user_id,
"stream_id": stream_id,
"devices": devices,
"master_key": master_key,
"self_signing_key": self_signing_key,
}
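After this change a federation device query carries the user's cross-signing keys alongside the device list. The resulting response shape, with illustrative values:

response = {
    "user_id": "@alice:example.com",
    "stream_id": 12345,
    "devices": [{"device_id": "DEV1", "keys": {}}],
    "master_key": {"user_id": "@alice:example.com", "usage": ["master"], "keys": {}},
    "self_signing_key": {"user_id": "@alice:example.com", "usage": ["self_signing"], "keys": {}},
}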
@defer.inlineCallbacks
def user_left_room(self, user, room_id):
@@ -656,7 +668,7 @@ class DeviceListUpdater(object):
except (NotRetryingDestination, RequestSendFailed, HttpResponseException):
# TODO: Remember that we are now out of sync and try again
# later
logger.warn("Failed to handle device list update for %s", user_id)
logger.warning("Failed to handle device list update for %s", user_id)
# We abort on exceptions rather than accepting the update
# as otherwise synapse will 'forget' that its device list
# is out of date. If we bail then we will retry the resync
@@ -694,7 +706,7 @@ class DeviceListUpdater(object):
# up on storing the total list of devices and only handle the
# delta instead.
if len(devices) > 1000:
logger.warn(
logger.warning(
"Ignoring device list snapshot for %s as it has >1K devs (%d)",
user_id,
len(devices),

View File

@@ -52,7 +52,7 @@ class DeviceMessageHandler(object):
local_messages = {}
sender_user_id = content["sender"]
if origin != get_domain_from_id(sender_user_id):
logger.warn(
logger.warning(
"Dropping device message from %r with spoofed sender %r",
origin,
sender_user_id,

View File

@@ -119,7 +119,7 @@ class DirectoryHandler(BaseHandler):
if not service.is_interested_in_alias(room_alias.to_string()):
raise SynapseError(
400,
"This application service has not reserved" " this kind of alias.",
"This application service has not reserved this kind of alias.",
errcode=Codes.EXCLUSIVE,
)
else:
@@ -250,7 +250,7 @@ class DirectoryHandler(BaseHandler):
ignore_backoff=True,
)
except CodeMessageException as e:
logging.warn("Error retrieving alias")
logging.warning("Error retrieving alias")
if e.code == 404:
result = None
else:
@@ -283,7 +283,7 @@ class DirectoryHandler(BaseHandler):
def on_directory_query(self, args):
room_alias = RoomAlias.from_string(args["room_alias"])
if not self.hs.is_mine(room_alias):
raise SynapseError(400, "Room Alias is not hosted on this Home Server")
raise SynapseError(400, "Room Alias is not hosted on this homeserver")
result = yield self.get_association_from_room_alias(room_alias)

View File

@@ -30,12 +30,15 @@ from twisted.internet import defer
from synapse.api.errors import CodeMessageException, Codes, NotFoundError, SynapseError
from synapse.logging.context import make_deferred_yieldable, run_in_background
from synapse.logging.opentracing import log_kv, set_tag, tag_args, trace
from synapse.replication.http.devices import ReplicationUserDevicesResyncRestServlet
from synapse.types import (
UserID,
get_domain_from_id,
get_verify_key_from_cross_signing_key,
)
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
logger = logging.getLogger(__name__)
@@ -49,10 +52,25 @@ class E2eKeysHandler(object):
self.is_mine = hs.is_mine
self.clock = hs.get_clock()
self._edu_updater = SigningKeyEduUpdater(hs, self)
self._is_master = hs.config.worker_app is None
if not self._is_master:
self._user_device_resync_client = ReplicationUserDevicesResyncRestServlet.make_client(
hs
)
federation_registry = hs.get_federation_registry()
# FIXME: switch to m.signing_key_update when MSC1756 is merged into the spec
federation_registry.register_edu_handler(
"org.matrix.signing_key_update",
self._edu_updater.incoming_signing_key_update,
)
# doesn't really work as part of the generic query API, because the
# query request requires an object POST, but we abuse the
# "query handler" interface.
hs.get_federation_registry().register_query_handler(
federation_registry.register_query_handler(
"client_keys", self.on_federation_query_client_keys
)
@@ -119,9 +137,10 @@ class E2eKeysHandler(object):
else:
query_list.append((user_id, None))
user_ids_not_in_cache, remote_results = (
yield self.store.get_user_devices_from_cache(query_list)
)
(
user_ids_not_in_cache,
remote_results,
) = yield self.store.get_user_devices_from_cache(query_list)
for user_id, devices in iteritems(remote_results):
user_devices = results.setdefault(user_id, {})
for device_id, device in iteritems(devices):
@@ -179,9 +198,15 @@ class E2eKeysHandler(object):
# probably be tracking their device lists. However, we haven't
# done an initial sync on the device list so we do it now.
try:
user_devices = yield self.device_handler.device_list_updater.user_device_resync(
user_id
)
if self._is_master:
user_devices = yield self.device_handler.device_list_updater.user_device_resync(
user_id
)
else:
user_devices = yield self._user_device_resync_client(
user_id=user_id
)
user_devices = user_devices["devices"]
for device in user_devices:
results[user_id] = {device["device_id"]: device["keys"]}
@@ -207,13 +232,15 @@ class E2eKeysHandler(object):
if user_id in destination_query:
results[user_id] = keys
for user_id, key in remote_result["master_keys"].items():
if user_id in destination_query:
cross_signing_keys["master_keys"][user_id] = key
if "master_keys" in remote_result:
for user_id, key in remote_result["master_keys"].items():
if user_id in destination_query:
cross_signing_keys["master_keys"][user_id] = key
for user_id, key in remote_result["self_signing_keys"].items():
if user_id in destination_query:
cross_signing_keys["self_signing_keys"][user_id] = key
if "self_signing_keys" in remote_result:
for user_id, key in remote_result["self_signing_keys"].items():
if user_id in destination_query:
cross_signing_keys["self_signing_keys"][user_id] = key
except Exception as e:
failure = _exception_to_failure(e)
@@ -237,7 +264,6 @@ class E2eKeysHandler(object):
return ret
@defer.inlineCallbacks
def get_cross_signing_keys_from_cache(self, query, from_user_id):
"""Get cross-signing keys for users from the database
@@ -251,41 +277,20 @@ class E2eKeysHandler(object):
Returns:
defer.Deferred[dict[str, dict[str, dict]]]: map from
(master|self_signing|user_signing) -> user_id -> key
(master_keys|self_signing_keys|user_signing_keys) -> user_id -> key
"""
master_keys = {}
self_signing_keys = {}
user_signing_keys = {}
for user_id in query:
# XXX: consider changing the store functions to allow querying
# multiple users simultaneously.
key = yield self.store.get_e2e_cross_signing_key(
user_id, "master", from_user_id
)
if key:
master_keys[user_id] = key
key = yield self.store.get_e2e_cross_signing_key(
user_id, "self_signing", from_user_id
)
if key:
self_signing_keys[user_id] = key
# users can see other users' master and self-signing keys, but can
# only see their own user-signing keys
if from_user_id == user_id:
key = yield self.store.get_e2e_cross_signing_key(
user_id, "user_signing", from_user_id
)
if key:
user_signing_keys[user_id] = key
return {
"master_keys": master_keys,
"self_signing_keys": self_signing_keys,
"user_signing_keys": user_signing_keys,
}
# Currently a stub, implementation coming in https://github.com/matrix-org/synapse/pull/6486
return defer.succeed(
{
"master_keys": master_keys,
"self_signing_keys": self_signing_keys,
"user_signing_keys": user_signing_keys,
}
)
@trace
@defer.inlineCallbacks
@@ -343,7 +348,16 @@ class E2eKeysHandler(object):
"""
device_keys_query = query_body.get("device_keys", {})
res = yield self.query_local_devices(device_keys_query)
return {"device_keys": res}
ret = {"device_keys": res}
# add in the cross-signing keys
cross_signing_keys = yield self.get_cross_signing_keys_from_cache(
device_keys_query, None
)
ret.update(cross_signing_keys)
return ret
@trace
@defer.inlineCallbacks
@@ -688,17 +702,21 @@ class E2eKeysHandler(object):
try:
# get our self-signing key to verify the signatures
_, self_signing_key_id, self_signing_verify_key = yield self._get_e2e_cross_signing_verify_key(
user_id, "self_signing"
)
(
_,
self_signing_key_id,
self_signing_verify_key,
) = yield self._get_e2e_cross_signing_verify_key(user_id, "self_signing")
# get our master key, since we may have received a signature of it.
# We need to fetch it here so that we know what its key ID is, so
# that we can check if a signature that was sent is a signature of
# the master key or of a device
master_key, _, master_verify_key = yield self._get_e2e_cross_signing_verify_key(
user_id, "master"
)
(
master_key,
_,
master_verify_key,
) = yield self._get_e2e_cross_signing_verify_key(user_id, "master")
# fetch our stored devices. This is used to 1. verify
# signatures on the master key, and 2. to compare with what
@@ -838,9 +856,11 @@ class E2eKeysHandler(object):
try:
# get our user-signing key to verify the signatures
user_signing_key, user_signing_key_id, user_signing_verify_key = yield self._get_e2e_cross_signing_verify_key(
user_id, "user_signing"
)
(
user_signing_key,
user_signing_key_id,
user_signing_verify_key,
) = yield self._get_e2e_cross_signing_verify_key(user_id, "user_signing")
except SynapseError as e:
failure = _exception_to_failure(e)
for user, devicemap in signatures.items():
@@ -859,7 +879,11 @@ class E2eKeysHandler(object):
try:
# get the target user's master key, to make sure it matches
# what was sent
master_key, master_key_id, _ = yield self._get_e2e_cross_signing_verify_key(
(
master_key,
master_key_id,
_,
) = yield self._get_e2e_cross_signing_verify_key(
target_user, "master", user_id
)
@@ -1047,3 +1071,100 @@ class SignatureListItem:
target_user_id = attr.ib()
target_device_id = attr.ib()
signature = attr.ib()
class SigningKeyEduUpdater(object):
"""Handles incoming signing key updates from federation and updates the DB"""
def __init__(self, hs, e2e_keys_handler):
self.store = hs.get_datastore()
self.federation = hs.get_federation_client()
self.clock = hs.get_clock()
self.e2e_keys_handler = e2e_keys_handler
self._remote_edu_linearizer = Linearizer(name="remote_signing_key")
# user_id -> list of updates waiting to be handled.
self._pending_updates = {}
# Recently seen stream ids. We don't bother keeping these in the DB,
# but they're useful to have around to reduce the number of spurious
# resyncs.
self._seen_updates = ExpiringCache(
cache_name="signing_key_update_edu",
clock=self.clock,
max_len=10000,
expiry_ms=30 * 60 * 1000,
iterable=True,
)
@defer.inlineCallbacks
def incoming_signing_key_update(self, origin, edu_content):
"""Called on incoming signing key update from federation. Responsible for
parsing the EDU and adding to pending updates list.
Args:
origin (string): the server that sent the EDU
edu_content (dict): the contents of the EDU
"""
user_id = edu_content.pop("user_id")
master_key = edu_content.pop("master_key", None)
self_signing_key = edu_content.pop("self_signing_key", None)
if get_domain_from_id(user_id) != origin:
logger.warning("Got signing key update edu for %r from %r", user_id, origin)
return
room_ids = yield self.store.get_rooms_for_user(user_id)
if not room_ids:
# We don't share any rooms with this user. Ignore update, as we
# probably won't get any further updates.
return
self._pending_updates.setdefault(user_id, []).append(
(master_key, self_signing_key)
)
yield self._handle_signing_key_updates(user_id)
@defer.inlineCallbacks
def _handle_signing_key_updates(self, user_id):
"""Actually handle pending updates.
Args:
user_id (string): the user whose updates we are processing
"""
device_handler = self.e2e_keys_handler.device_handler
with (yield self._remote_edu_linearizer.queue(user_id)):
pending_updates = self._pending_updates.pop(user_id, [])
if not pending_updates:
# This can happen since we batch updates
return
device_ids = []
logger.info("pending updates: %r", pending_updates)
for master_key, self_signing_key in pending_updates:
if master_key:
yield self.store.set_e2e_cross_signing_key(
user_id, "master", master_key
)
_, verify_key = get_verify_key_from_cross_signing_key(master_key)
# verify_key is a VerifyKey from signedjson, which uses
# .version to denote the portion of the key ID after the
# algorithm and colon, which is the device ID
device_ids.append(verify_key.version)
if self_signing_key:
yield self.store.set_e2e_cross_signing_key(
user_id, "self_signing", self_signing_key
)
_, verify_key = get_verify_key_from_cross_signing_key(
self_signing_key
)
device_ids.append(verify_key.version)
yield device_handler.notify_device_update(user_id, device_ids)
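The updater batches incoming EDUs per user and drains the whole queue under a per-user lock, so concurrent updates for one user are coalesced into a single pass. A minimal asyncio sketch of that shape (not Synapse's Linearizer, just the same idea):

import asyncio

pending = {}  # user_id -> queued updates
locks = {}    # user_id -> asyncio.Lock

async def handle_update(user_id, update, process):
    pending.setdefault(user_id, []).append(update)
    lock = locks.setdefault(user_id, asyncio.Lock())
    async with lock:
        batch = pending.pop(user_id, [])
        if not batch:
            # Another task holding the lock already drained our update.
            return
        await process(user_id, batch)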

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017, 2018 New Vector Ltd
# Copyright 2019 Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -103,14 +104,35 @@ class E2eRoomKeysHandler(object):
rooms
session_id(string): session ID to delete keys for, for None to delete keys
for all sessions
Raises:
NotFoundError: if the backup version does not exist
Returns:
A deferred of the deletion transaction
A dict containing the count and etag for the backup version
"""
# lock for consistency with uploading
with (yield self._upload_linearizer.queue(user_id)):
# make sure the backup version exists
try:
version_info = yield self.store.get_e2e_room_keys_version_info(
user_id, version
)
except StoreError as e:
if e.code == 404:
raise NotFoundError("Unknown backup version")
else:
raise
yield self.store.delete_e2e_room_keys(user_id, version, room_id, session_id)
version_etag = version_info["etag"] + 1
yield self.store.update_e2e_room_keys_version(
user_id, version, None, version_etag
)
count = yield self.store.count_e2e_room_keys(user_id, version)
return {"etag": str(version_etag), "count": count}
@trace
@defer.inlineCallbacks
def upload_room_keys(self, user_id, version, room_keys):
@@ -138,6 +160,9 @@ class E2eRoomKeysHandler(object):
}
}
Returns:
A dict containing the count and etag for the backup version
Raises:
NotFoundError: if there are no versions defined
RoomKeysVersionError: if the uploaded version is not the current version
@@ -171,59 +196,62 @@ class E2eRoomKeysHandler(object):
else:
raise
# go through the room_keys.
# XXX: this should/could be done concurrently, given we're in a lock.
# Fetch any existing room keys for the sessions that have been
# submitted. Then compare them with the submitted keys. If the
# key is new, insert it; if the key should be updated, then update
# it; otherwise, drop it.
existing_keys = yield self.store.get_e2e_room_keys_multi(
user_id, version, room_keys["rooms"]
)
to_insert = [] # batch the inserts together
changed = False # if anything has changed, we need to update the etag
for room_id, room in iteritems(room_keys["rooms"]):
for session_id, session in iteritems(room["sessions"]):
yield self._upload_room_key(
user_id, version, room_id, session_id, session
for session_id, room_key in iteritems(room["sessions"]):
log_kv(
{
"message": "Trying to upload room key",
"room_id": room_id,
"session_id": session_id,
"user_id": user_id,
}
)
current_room_key = existing_keys.get(room_id, {}).get(session_id)
if current_room_key:
if self._should_replace_room_key(current_room_key, room_key):
log_kv({"message": "Replacing room key."})
# updates are done one at a time in the DB, so send
# updates right away rather than batching them up,
# like we do with the inserts
yield self.store.update_e2e_room_key(
user_id, version, room_id, session_id, room_key
)
changed = True
else:
log_kv({"message": "Not replacing room_key."})
else:
log_kv(
{
"message": "Room key not found.",
"room_id": room_id,
"user_id": user_id,
}
)
log_kv({"message": "Replacing room key."})
to_insert.append((room_id, session_id, room_key))
changed = True
@defer.inlineCallbacks
def _upload_room_key(self, user_id, version, room_id, session_id, room_key):
"""Upload a given room_key for a given room and session into a given
version of the backup. Merges the key with any which might already exist.
if len(to_insert):
yield self.store.add_e2e_room_keys(user_id, version, to_insert)
Args:
user_id(str): the user whose backup we're setting
version(str): the version ID of the backup we're updating
room_id(str): the ID of the room whose keys we're setting
session_id(str): the session whose room_key we're setting
room_key(dict): the room_key being set
"""
log_kv(
{
"message": "Trying to upload room key",
"room_id": room_id,
"session_id": session_id,
"user_id": user_id,
}
)
# get the room_key for this particular row
current_room_key = None
try:
current_room_key = yield self.store.get_e2e_room_key(
user_id, version, room_id, session_id
)
except StoreError as e:
if e.code == 404:
log_kv(
{
"message": "Room key not found.",
"room_id": room_id,
"user_id": user_id,
}
version_etag = version_info["etag"]
if changed:
version_etag = version_etag + 1
yield self.store.update_e2e_room_keys_version(
user_id, version, None, version_etag
)
else:
raise
if self._should_replace_room_key(current_room_key, room_key):
log_kv({"message": "Replacing room key."})
yield self.store.set_e2e_room_key(
user_id, version, room_id, session_id, room_key
)
else:
log_kv({"message": "Not replacing room_key."})
count = yield self.store.count_e2e_room_keys(user_id, version)
return {"etag": str(version_etag), "count": count}
@staticmethod
def _should_replace_room_key(current_room_key, room_key):
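The diff elides the body of this helper. For orientation, a hedged reconstruction of the key-backup replacement heuristic it implements (the ordering of the checks is assumed, not verbatim from the source): an incoming key replaces the stored one only if it is strictly "better".

# Hedged reconstruction of the elided body, not the verbatim source.
if current_room_key is None:
    return True  # nothing stored yet
if room_key["is_verified"] and not current_room_key["is_verified"]:
    return True  # a verified key beats an unverified one
if room_key["first_message_index"] < current_room_key["first_message_index"]:
    return True  # covers more of the megolm session
if room_key["forwarded_count"] < current_room_key["forwarded_count"]:
    return True  # fewer forwarding hops
return False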
@@ -314,6 +342,8 @@ class E2eRoomKeysHandler(object):
raise NotFoundError("Unknown backup version")
else:
raise
res["count"] = yield self.store.count_e2e_room_keys(user_id, res["version"])
return res
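With this change the version info handed back to clients carries a live key count next to the stored fields. A hedged illustration of what the resulting dict might contain (values are invented, and the etag field is assumed to be populated elsewhere in the elided code):

# Hedged illustration only; values are made up.
res = {
    "version": "1",
    "algorithm": "m.megolm_backup.v1.curve25519-aes-sha2",
    "auth_data": {"public_key": "base64+public+key"},
    "etag": "5",    # assumed to come from the stored version row
    "count": 42,
}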
@trace

View File

@@ -16,8 +16,6 @@
import logging
import random
from twisted.internet import defer
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import AuthError, SynapseError
from synapse.events import EventBase
@@ -50,9 +48,8 @@ class EventStreamHandler(BaseHandler):
self._server_notices_sender = hs.get_server_notices_sender()
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
@log_function
def get_stream(
async def get_stream(
self,
auth_user_id,
pagin_config,
@@ -69,17 +66,17 @@ class EventStreamHandler(BaseHandler):
"""
if room_id:
blocked = yield self.store.is_room_blocked(room_id)
blocked = await self.store.is_room_blocked(room_id)
if blocked:
raise SynapseError(403, "This room has been blocked on this server")
# send any outstanding server notices to the user.
yield self._server_notices_sender.on_user_syncing(auth_user_id)
await self._server_notices_sender.on_user_syncing(auth_user_id)
auth_user = UserID.from_string(auth_user_id)
presence_handler = self.hs.get_presence_handler()
context = yield presence_handler.user_syncing(
context = await presence_handler.user_syncing(
auth_user_id, affect_presence=affect_presence
)
with context:
@@ -91,7 +88,7 @@ class EventStreamHandler(BaseHandler):
# thundering herds on restart.
timeout = random.randint(int(timeout * 0.9), int(timeout * 1.1))
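The jitter above spreads long-polling clients reconnecting after a restart across a window of plus or minus ten percent. A worked example with a hypothetical 30 second timeout:

import random
timeout = 30000  # hypothetical 30s client timeout, in ms
timeout = random.randint(int(timeout * 0.9), int(timeout * 1.1))
# now uniformly distributed in [27000, 33000], so a restarted server is
# not hit by every long-polling client timing out at the same instant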
events, tokens = yield self.notifier.get_events_for(
events, tokens = await self.notifier.get_events_for(
auth_user,
pagin_config,
timeout,
@@ -112,14 +109,14 @@ class EventStreamHandler(BaseHandler):
# Send down presence.
if event.state_key == auth_user_id:
# Send down presence for everyone in the room.
users = yield self.state.get_current_users_in_room(
users = await self.state.get_current_users_in_room(
event.room_id
)
states = yield presence_handler.get_states(users, as_event=True)
states = await presence_handler.get_states(users, as_event=True)
to_add.extend(states)
else:
ev = yield presence_handler.get_state(
ev = await presence_handler.get_state(
UserID.from_string(event.state_key), as_event=True
)
to_add.append(ev)
@@ -128,7 +125,7 @@ class EventStreamHandler(BaseHandler):
time_now = self.clock.time_msec()
chunks = yield self._event_serializer.serialize_events(
chunks = await self._event_serializer.serialize_events(
events,
time_now,
as_client_event=as_client_event,
@@ -147,8 +144,11 @@ class EventStreamHandler(BaseHandler):
class EventHandler(BaseHandler):
@defer.inlineCallbacks
def get_event(self, user, room_id, event_id):
def __init__(self, hs):
super(EventHandler, self).__init__(hs)
self.storage = hs.get_storage()
async def get_event(self, user, room_id, event_id):
"""Retrieve a single specified event.
Args:
@@ -163,16 +163,16 @@ class EventHandler(BaseHandler):
AuthError if the user does not have the rights to inspect this
event.
"""
event = yield self.store.get_event(event_id, check_room_id=room_id)
event = await self.store.get_event(event_id, check_room_id=room_id)
if not event:
return None
users = yield self.store.get_users_in_room(event.room_id)
users = await self.store.get_users_in_room(event.room_id)
is_peeking = user.to_string() not in users
filtered = yield filter_events_for_client(
self.store, user.to_string(), [event], is_peeking=is_peeking
filtered = await filter_events_for_client(
self.storage, user.to_string(), [event], is_peeking=is_peeking
)
if not filtered:

File diff suppressed because it is too large

View File

@@ -392,7 +392,7 @@ class GroupsLocalHandler(object):
try:
user_profile = yield self.profile_handler.get_profile(user_id)
except Exception as e:
logger.warn("No profile for user %s: %s", user_id, e)
logger.warning("No profile for user %s: %s", user_id, e)
user_profile = {}
return {"state": "invite", "user_profile": user_profile}

View File

@@ -272,7 +272,7 @@ class IdentityHandler(BaseHandler):
changed = False
if e.code in (400, 404, 501):
# The remote server probably doesn't support unbinding (yet)
logger.warn("Received %d response while unbinding threepid", e.code)
logger.warning("Received %d response while unbinding threepid", e.code)
else:
logger.error("Failed to unbind threepid on identity server: %s", e)
raise SynapseError(500, "Failed to contact identity server")
@@ -403,7 +403,7 @@ class IdentityHandler(BaseHandler):
if self.hs.config.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warn(
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
@@ -457,7 +457,7 @@ class IdentityHandler(BaseHandler):
if self.hs.config.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warn(
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "

View File

@@ -43,6 +43,8 @@ class InitialSyncHandler(BaseHandler):
self.validator = EventValidator()
self.snapshot_cache = SnapshotCache()
self._event_serializer = hs.get_event_client_serializer()
self.storage = hs.get_storage()
self.state_store = self.storage.state
def snapshot_all_rooms(
self,
@@ -126,8 +128,8 @@ class InitialSyncHandler(BaseHandler):
tags_by_room = yield self.store.get_tags_for_user(user_id)
account_data, account_data_by_room = (
yield self.store.get_account_data_for_user(user_id)
account_data, account_data_by_room = yield self.store.get_account_data_for_user(
user_id
)
public_room_ids = yield self.store.get_public_room_ids()
@@ -169,7 +171,7 @@ class InitialSyncHandler(BaseHandler):
elif event.membership == Membership.LEAVE:
room_end_token = "s%d" % (event.stream_ordering,)
deferred_room_state = run_in_background(
self.store.get_state_for_events, [event.event_id]
self.state_store.get_state_for_events, [event.event_id]
)
deferred_room_state.addCallback(
lambda states: states[event.event_id]
@@ -189,7 +191,9 @@ class InitialSyncHandler(BaseHandler):
)
).addErrback(unwrapFirstError)
messages = yield filter_events_for_client(self.store, user_id, messages)
messages = yield filter_events_for_client(
self.storage, user_id, messages
)
start_token = now_token.copy_and_replace("room_key", token)
end_token = now_token.copy_and_replace("room_key", room_end_token)
@@ -307,7 +311,7 @@ class InitialSyncHandler(BaseHandler):
def _room_initial_sync_parted(
self, user_id, room_id, pagin_config, membership, member_event_id, is_peeking
):
room_state = yield self.store.get_state_for_events([member_event_id])
room_state = yield self.state_store.get_state_for_events([member_event_id])
room_state = room_state[member_event_id]
@@ -322,7 +326,7 @@ class InitialSyncHandler(BaseHandler):
)
messages = yield filter_events_for_client(
self.store, user_id, messages, is_peeking=is_peeking
self.storage, user_id, messages, is_peeking=is_peeking
)
start_token = StreamToken.START.copy_and_replace("room_key", token)
@@ -414,7 +418,7 @@ class InitialSyncHandler(BaseHandler):
)
messages = yield filter_events_for_client(
self.store, user_id, messages, is_peeking=is_peeking
self.storage, user_id, messages, is_peeking=is_peeking
)
start_token = now_token.copy_and_replace("room_key", token)

View File

@@ -15,6 +15,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from typing import Optional
from six import iteritems, itervalues, string_types
@@ -22,9 +23,16 @@ from canonicaljson import encode_canonical_json, json
from twisted.internet import defer
from twisted.internet.defer import succeed
from twisted.internet.interfaces import IDelayedCall
from synapse import event_auth
from synapse.api.constants import EventTypes, Membership, RelationTypes, UserTypes
from synapse.api.constants import (
EventContentFields,
EventTypes,
Membership,
RelationTypes,
UserTypes,
)
from synapse.api.errors import (
AuthError,
Codes,
@@ -59,7 +67,20 @@ class MessageHandler(object):
self.clock = hs.get_clock()
self.state = hs.get_state_handler()
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state_store = self.storage.state
self._event_serializer = hs.get_event_client_serializer()
self._ephemeral_events_enabled = hs.config.enable_ephemeral_messages
self._is_worker_app = bool(hs.config.worker_app)
# The scheduled call to self._expire_event. None if no call is currently
# scheduled.
self._scheduled_expiry = None # type: Optional[IDelayedCall]
if not hs.config.worker_app:
run_as_background_process(
"_schedule_next_expiry", self._schedule_next_expiry
)
@defer.inlineCallbacks
def get_room_data(
@@ -74,15 +95,16 @@ class MessageHandler(object):
Raises:
SynapseError if something went wrong.
"""
membership, membership_event_id = yield self.auth.check_in_room_or_world_readable(
room_id, user_id
)
(
membership,
membership_event_id,
) = yield self.auth.check_in_room_or_world_readable(room_id, user_id)
if membership == Membership.JOIN:
data = yield self.state.get_current_state(room_id, event_type, state_key)
elif membership == Membership.LEAVE:
key = (event_type, state_key)
room_state = yield self.store.get_state_for_events(
room_state = yield self.state_store.get_state_for_events(
[membership_event_id], StateFilter.from_types([key])
)
data = room_state[membership_event_id].get(key)
@@ -135,12 +157,12 @@ class MessageHandler(object):
raise NotFoundError("Can't find event for token %s" % (at_token,))
visible_events = yield filter_events_for_client(
self.store, user_id, last_events
self.storage, user_id, last_events, apply_retention_policies=False
)
event = last_events[0]
if visible_events:
room_state = yield self.store.get_state_for_events(
room_state = yield self.state_store.get_state_for_events(
[event.event_id], state_filter=state_filter
)
room_state = room_state[event.event_id]
@@ -151,9 +173,10 @@ class MessageHandler(object):
% (user_id, room_id, at_token),
)
else:
membership, membership_event_id = (
yield self.auth.check_in_room_or_world_readable(room_id, user_id)
)
(
membership,
membership_event_id,
) = yield self.auth.check_in_room_or_world_readable(room_id, user_id)
if membership == Membership.JOIN:
state_ids = yield self.store.get_filtered_current_state_ids(
@@ -161,7 +184,7 @@ class MessageHandler(object):
)
room_state = yield self.store.get_events(state_ids.values())
elif membership == Membership.LEAVE:
room_state = yield self.store.get_state_for_events(
room_state = yield self.state_store.get_state_for_events(
[membership_event_id], state_filter=state_filter
)
room_state = room_state[membership_event_id]
@@ -221,6 +244,100 @@ class MessageHandler(object):
for user_id, profile in iteritems(users_with_profile)
}
def maybe_schedule_expiry(self, event):
"""Schedule the expiry of an event if there's not already one scheduled,
or if the one running is for an event that will expire after the provided
timestamp.
This function needs to invalidate the event cache, which is only possible on
the master process, and therefore needs to be run there.
Args:
event (EventBase): The event to schedule the expiry of.
"""
assert not self._is_worker_app
expiry_ts = event.content.get(EventContentFields.SELF_DESTRUCT_AFTER)
if not isinstance(expiry_ts, int) or event.is_state():
return
# _schedule_expiry_for_event won't actually schedule anything if there's already
# a task scheduled for a timestamp that's sooner than the provided one.
self._schedule_expiry_for_event(event.event_id, expiry_ts)
@defer.inlineCallbacks
def _schedule_next_expiry(self):
"""Retrieve the ID and the expiry timestamp of the next event to be expired,
and schedule an expiry task for it.
If there's no event left to expire, set _scheduled_expiry to None so that a
future call to maybe_schedule_expiry can schedule a new expiry task.
"""
# Try to get the expiry timestamp of the next event to expire.
res = yield self.store.get_next_event_to_expire()
if res:
event_id, expiry_ts = res
self._schedule_expiry_for_event(event_id, expiry_ts)
def _schedule_expiry_for_event(self, event_id, expiry_ts):
"""Schedule an expiry task for the provided event if there's not already one
scheduled at a timestamp that's sooner than the provided one.
Args:
event_id (str): The ID of the event to expire.
expiry_ts (int): The timestamp at which to expire the event.
"""
if self._scheduled_expiry:
# If the provided timestamp refers to a time before the scheduled time of the
# next expiry task, cancel that task and reschedule it for this timestamp.
next_scheduled_expiry_ts = self._scheduled_expiry.getTime() * 1000
if expiry_ts < next_scheduled_expiry_ts:
self._scheduled_expiry.cancel()
else:
return
# Figure out how many seconds we need to wait before expiring the event.
now_ms = self.clock.time_msec()
delay = (expiry_ts - now_ms) / 1000
# callLater doesn't support negative delays, so clamp the delay to 0 in
# that case.
if delay < 0:
delay = 0
logger.info("Scheduling expiry for event %s in %.3fs", event_id, delay)
self._scheduled_expiry = self.clock.call_later(
delay,
run_as_background_process,
"_expire_event",
self._expire_event,
event_id,
)
@defer.inlineCallbacks
def _expire_event(self, event_id):
"""Retrieve and expire an event that needs to be expired from the database.
If the event doesn't exist in the database, log it and delete the expiry date
from the database (so that we don't try to expire it again).
"""
assert self._ephemeral_events_enabled
self._scheduled_expiry = None
logger.info("Expiring event %s", event_id)
try:
# Expire the event if we know about it. This function also deletes the expiry
# date from the database in the same database transaction.
yield self.store.expire_event(event_id)
except Exception as e:
logger.error("Could not expire event %s: %r", event_id, e)
# Schedule the expiry of the next event to expire.
yield self._schedule_next_expiry()
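Taken together, the three methods above keep at most one outstanding timer and replace it only when an earlier deadline shows up. A standalone sketch of that pattern in plain Twisted (hypothetical class, simplified error handling):

from twisted.internet import reactor

class ExpiryScheduler:
    """Single pending callLater, replaced only by an earlier deadline,
    mirroring _schedule_expiry_for_event above (hypothetical sketch)."""

    def __init__(self):
        self._pending = None  # an IDelayedCall, or None

    def schedule(self, deadline_ts_ms, callback, *args):
        if self._pending is not None and self._pending.active():
            if deadline_ts_ms < self._pending.getTime() * 1000:
                self._pending.cancel()  # sooner deadline: replace the timer
            else:
                return                  # existing timer fires first: keep it
        delay = max(0.0, deadline_ts_ms / 1000 - reactor.seconds())
        self._pending = reactor.callLater(delay, callback, *args)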
# The duration (in ms) after which rooms should be removed
# `_rooms_to_exclude_from_dummy_event_insertion` (with the effect that we will try
@@ -234,6 +351,7 @@ class EventCreationHandler(object):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state = hs.get_state_handler()
self.clock = hs.get_clock()
self.validator = EventValidator()
@@ -245,6 +363,8 @@ class EventCreationHandler(object):
self.config = hs.config
self.require_membership_for_aliases = hs.config.require_membership_for_aliases
self.room_invite_state_types = self.hs.config.room_invite_state_types
self.send_event_to_master = ReplicationSendEventRestServlet.make_client(hs)
# This is only used to get at ratelimit function, and maybe_kick_guest_users
@@ -290,6 +410,10 @@ class EventCreationHandler(object):
5 * 60 * 1000,
)
self._message_handler = hs.get_message_handler()
self._ephemeral_events_enabled = hs.config.enable_ephemeral_messages
@defer.inlineCallbacks
def create_event(
self,
@@ -412,7 +536,7 @@ class EventCreationHandler(object):
403, "You must be in the room to create an alias for it"
)
self.validator.validate_new(event)
self.validator.validate_new(event, self.config)
return (event, context)
@@ -629,7 +753,7 @@ class EventCreationHandler(object):
if requester:
context.app_service = requester.app_service
self.validator.validate_new(event)
self.validator.validate_new(event, self.config)
# If this event is an annotation then we check that the sender
# can't annotate the same way twice (e.g. stops users from liking an
@@ -687,7 +811,7 @@ class EventCreationHandler(object):
try:
yield self.auth.check_from_context(room_version, event, context)
except AuthError as err:
logger.warn("Denying new event %r because %s", event, err)
logger.warning("Denying new event %r because %s", event, err)
raise err
# Ensure that we can round trip before trying to persist in db
@@ -794,7 +918,7 @@ class EventCreationHandler(object):
state_to_include_ids = [
e_id
for k, e_id in iteritems(current_state_ids)
if k[0] in self.hs.config.room_invite_state_types
if k[0] in self.room_invite_state_types
or k == (EventTypes.Member, event.sender)
]
@@ -868,10 +992,14 @@ class EventCreationHandler(object):
if prev_state_ids:
raise AuthError(403, "Changing the room create event is forbidden")
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event_stream_id, max_stream_id = yield self.storage.persistence.persist_event(
event, context=context
)
if self._ephemeral_events_enabled:
# If there's an expiry timestamp on the event, schedule its expiry.
self._message_handler.maybe_schedule_expiry(event)
yield self.pusher_pool.on_new_notifications(event_stream_id, max_stream_id)
def _notify():

View File

@@ -15,12 +15,15 @@
# limitations under the License.
import logging
from six import iteritems
from twisted.internet import defer
from twisted.python.failure import Failure
from synapse.api.constants import EventTypes, Membership
from synapse.api.errors import SynapseError
from synapse.logging.context import run_in_background
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.storage.state import StateFilter
from synapse.types import RoomStreamToken
from synapse.util.async_helpers import ReadWriteLock
@@ -69,6 +72,8 @@ class PaginationHandler(object):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state_store = self.storage.state
self.clock = hs.get_clock()
self._server_name = hs.hostname
@@ -78,6 +83,109 @@ class PaginationHandler(object):
self._purges_by_id = {}
self._event_serializer = hs.get_event_client_serializer()
self._retention_default_max_lifetime = hs.config.retention_default_max_lifetime
if hs.config.retention_enabled:
# Run the purge jobs described in the configuration file.
for job in hs.config.retention_purge_jobs:
self.clock.looping_call(
run_as_background_process,
job["interval"],
"purge_history_for_rooms_in_range",
self.purge_history_for_rooms_in_range,
job["shortest_max_lifetime"],
job["longest_max_lifetime"],
)
@defer.inlineCallbacks
def purge_history_for_rooms_in_range(self, min_ms, max_ms):
"""Purge outdated events from rooms within the given retention range.
If a default retention policy is defined in the server's configuration and its
'max_lifetime' is within this range, this function also targets rooms which
don't have a retention policy.
Args:
min_ms (int|None): Duration in milliseconds that defines the lower limit of
the range to handle (exclusive). If None, the range has no lower limit.
max_ms (int|None): Duration in milliseconds that defines the upper limit of
the range to handle (inclusive). If None, the range has no upper limit.
"""
# We want the storage layer to include rooms with no retention policy in its
# return value only if a default retention policy is defined in the server's
# configuration and that policy's 'max_lifetime' is within the range we're
# handling, i.e. lower than or equal to max_ms and higher than min_ms.
if self._retention_default_max_lifetime is not None:
include_null = True
if min_ms is not None and min_ms >= self._retention_default_max_lifetime:
# The default max_lifetime is lower than (or equal to) min_ms.
include_null = False
if max_ms is not None and max_ms < self._retention_default_max_lifetime:
# The default max_lifetime is higher than max_ms.
include_null = False
else:
include_null = False
rooms = yield self.store.get_rooms_for_retention_period_in_range(
min_ms, max_ms, include_null
)
for room_id, retention_policy in iteritems(rooms):
if room_id in self._purges_in_progress_by_room:
logger.warning(
"[purge] not purging room %s as there's an ongoing purge running"
" for this room",
room_id,
)
continue
max_lifetime = retention_policy["max_lifetime"]
if max_lifetime is None:
# If max_lifetime is None, it means that include_null equals True,
# therefore we can safely assume that there is a default policy defined
# in the server's configuration.
max_lifetime = self._retention_default_max_lifetime
# Figure out what token we should start purging at.
ts = self.clock.time_msec() - max_lifetime
stream_ordering = yield self.store.find_first_stream_ordering_after_ts(ts)
r = yield self.store.get_room_event_after_stream_ordering(
room_id, stream_ordering,
)
if not r:
logger.warning(
"[purge] purging events not possible: No event found "
"(ts %i => stream_ordering %i)",
ts,
stream_ordering,
)
continue
(stream, topo, _event_id) = r
token = "t%d-%d" % (topo, stream)
purge_id = random_string(16)
self._purges_by_id[purge_id] = PurgeStatus()
logger.info(
"Starting purging events in room %s (purge_id %s)" % (room_id, purge_id)
)
# We want to purge everything, including local events, and to run the purge in
# the background so that it's not blocking any other operation apart from
# other purges in the same room.
run_as_background_process(
"_purge_history", self._purge_history, purge_id, room_id, token, True,
)
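The include_null computation above is easy to get backwards, so a worked check helps. This sketch assumes a default policy of one week, expressed in milliseconds:

# Hedged worked example of the include_null decision above.
DEFAULT_MAX_LIFETIME = 7 * 24 * 3600 * 1000  # one week, in ms

def include_null(min_ms, max_ms):
    # Rooms without a retention policy follow the server default, so they
    # belong to this purge job only if the default max_lifetime falls
    # inside (min_ms, max_ms].
    if min_ms is not None and min_ms >= DEFAULT_MAX_LIFETIME:
        return False  # default lifetime is at or below the lower bound
    if max_ms is not None and max_ms < DEFAULT_MAX_LIFETIME:
        return False  # default lifetime is above the upper bound
    return True

assert include_null(None, None)                  # catch-all job
assert not include_null(None, 24 * 3600 * 1000)  # job for lifetimes <= 1 day
assert include_null(24 * 3600 * 1000, None)      # job for lifetimes > 1 day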
def start_purge_history(self, room_id, token, delete_local_events=False):
"""Start off a history purge on a room.
@@ -125,7 +233,9 @@ class PaginationHandler(object):
self._purges_in_progress_by_room.add(room_id)
try:
with (yield self.pagination_lock.write(room_id)):
yield self.store.purge_history(room_id, token, delete_local_events)
yield self.storage.purge_events.purge_history(
room_id, token, delete_local_events
)
logger.info("[purge] complete")
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
except Exception:
@@ -168,7 +278,7 @@ class PaginationHandler(object):
if joined:
raise SynapseError(400, "Users are still joined to this room")
await self.store.purge_room(room_id)
await self.storage.purge_events.purge_room(room_id)
@defer.inlineCallbacks
def get_messages(
@@ -210,9 +320,10 @@ class PaginationHandler(object):
source_config = pagin_config.get_source_config("room")
with (yield self.pagination_lock.read(room_id)):
membership, member_event_id = yield self.auth.check_in_room_or_world_readable(
room_id, user_id
)
(
membership,
member_event_id,
) = yield self.auth.check_in_room_or_world_readable(room_id, user_id)
if source_config.direction == "b":
# if we're going backwards, we might need to backfill. This
@@ -255,7 +366,7 @@ class PaginationHandler(object):
events = event_filter.filter(events)
events = yield filter_events_for_client(
self.store, user_id, events, is_peeking=(member_event_id is None)
self.storage, user_id, events, is_peeking=(member_event_id is None)
)
if not events:
@@ -274,7 +385,7 @@ class PaginationHandler(object):
(EventTypes.Member, event.sender) for event in events
)
state_ids = yield self.store.get_state_ids_for_event(
state_ids = yield self.state_store.get_state_ids_for_event(
events[0].event_id, state_filter=state_filter
)
@@ -295,10 +406,8 @@ class PaginationHandler(object):
}
if state:
chunk["state"] = (
yield self._event_serializer.serialize_events(
state, time_now, as_client_event=as_client_event
)
chunk["state"] = yield self._event_serializer.serialize_events(
state, time_now, as_client_event=as_client_event
)
return chunk
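For orientation, the stream tokens built by hand in this diff come in two shapes; a hedged illustration (the token semantics are assumed from the field names):

# Hedged illustration of the two token shapes built by hand in this diff.
stream_token = "s%d" % (1234,)     # stream position only ("s1234")
topo_token = "t%d-%d" % (5, 1234)  # topological depth + stream position ("t5-1234")
# Backwards pagination wants the topological form, which orders events by
# their depth in the room DAG before their arrival order.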

View File

@@ -152,7 +152,7 @@ class BaseProfileHandler(BaseHandler):
by_admin (bool): Whether this change was made by an administrator.
"""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this Home Server")
raise SynapseError(400, "User is not hosted on this homeserver")
if not by_admin and target_user != requester.user:
raise AuthError(400, "Cannot set another user's displayname")
@@ -207,7 +207,7 @@ class BaseProfileHandler(BaseHandler):
"""target_user is the user whose avatar_url is to be changed;
auth_user is the user attempting to make this change."""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this Home Server")
raise SynapseError(400, "User is not hosted on this homeserver")
if not by_admin and target_user != requester.user:
raise AuthError(400, "Cannot set another user's avatar_url")
@@ -231,7 +231,7 @@ class BaseProfileHandler(BaseHandler):
def on_profile_query(self, args):
user = UserID.from_string(args["user_id"])
if not self.hs.is_mine(user):
raise SynapseError(400, "User is not hosted on this Home Server")
raise SynapseError(400, "User is not hosted on this homeserver")
just_field = args.get("field", None)
@@ -275,7 +275,7 @@ class BaseProfileHandler(BaseHandler):
ratelimit=False, # Try to hide that these events aren't atomic.
)
except Exception as e:
logger.warn(
logger.warning(
"Failed to update join event for room %s - %s", room_id, str(e)
)

Some files were not shown because too many files have changed in this diff