
Compare commits


236 Commits

Author SHA1 Message Date
Erik Johnston
b1e498d6e1 Handle prefill 2019-11-22 18:39:26 +00:00
Erik Johnston
9cca5ee743 Track cache hit ratios 2019-11-22 18:19:55 +00:00
Aaron Raimist
24cc31ee96 Fix link to user_dir_populate.sql in the user directory docs (#6388) 2019-11-21 17:38:14 +00:00
Andrew Morgan
3916e1b97a Clean up newline quote marks around the codebase (#6362) 2019-11-21 12:00:14 +00:00
Matthew Hodgson
9cc168e42e update macOS installation instructions 2019-11-20 18:44:45 +00:00
Andrew Morgan
41e4566682 1.6.0rc1 2019-11-20 14:12:42 +00:00
Andrew Morgan
234f55f3c4 Docker: Change permissions for data dir before attempting to write to it (#6389) 2019-11-20 13:32:31 +00:00
Manuel Stahl
4f5ca455bf Move admin endpoints into separate files (#6308) 2019-11-20 11:49:11 +00:00
Brendan Abolivier
83446a18fb Merge pull request #6335 from matrix-org/erikj/rc_login_cleanups
Only do `rc_login` ratelimiting on successful login.
2019-11-20 09:52:38 +00:00
Brendan Abolivier
271c322d08 Lint 2019-11-20 09:29:48 +00:00
Erik Johnston
c7376cdfe3 Apply suggestions from code review
Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
2019-11-18 17:10:16 +00:00
Andrew Morgan
657d614f6a Replace UPDATE with UPSERT on device_max_stream_id table (#6363) 2019-11-15 14:02:34 +00:00
James
53b6559a89 Add optional python dependencies to snap packaging (#6317)
Signed-off-by: James Hebden <james@ec0.io>
2019-11-14 18:42:46 +00:00
Andrew Morgan
745a48625d Fix guest -> real account upgrade with account validity enabled (#6359) 2019-11-14 12:02:05 +00:00
Andrew Morgan
6e1b40dc26 Replace instance variations of homeserver with correct case/spacing (#6357) 2019-11-14 11:02:58 +00:00
Andrew Morgan
473acedcdd Merge branch 'develop' of github.com:matrix-org/synapse into anoa/homeserver_copy
* 'develop' of github.com:matrix-org/synapse:
  Blacklist PurgeRoomTestCase (#6361)
  Set room version default to 5
2019-11-14 10:26:27 +00:00
Brendan Abolivier
a42567e4a8 Merge pull request #6220 from matrix-org/neilj/set_room_version_default_to_5
Set room version default to 5
2019-11-14 10:21:00 +00:00
Andrew Morgan
c350bc2f92 Blacklist PurgeRoomTestCase (#6361) 2019-11-13 19:09:20 +00:00
Andrew Morgan
e1648dc576 sample config 2019-11-12 13:15:59 +00:00
Andrew Morgan
85f172ef96 Add changelog 2019-11-12 13:13:19 +00:00
Andrew Morgan
73d091be48 A couple more instances 2019-11-12 13:12:25 +00:00
Andrew Morgan
bc29a19731 Replace instance variations of homeserver with correct case/spacing 2019-11-12 13:08:12 +00:00
Brendan Abolivier
963ffb60b9 Merge pull request #6340 from matrix-org/babolivier/pagination_query
Fix the SQL SELECT query in _paginate_room_events_txn
2019-11-08 11:12:24 +00:00
Brendan Abolivier
b16fa43386 Incorporate review 2019-11-08 10:34:09 +00:00
Erik Johnston
f713c01e2b Merge pull request #6295 from matrix-org/erikj/split_purge_history
Split purge API into events vs state and add PurgeEventsStorage
2019-11-08 10:19:15 +00:00
Erik Johnston
e4ec82ce0f Move type annotation into docstring 2019-11-08 09:50:48 +00:00
Brendan Abolivier
46e5db9eb2 Merge pull request #6310 from matrix-org/babolivier/msc2326_bg_update
MSC2326: Add background update to take previous events into account
2019-11-07 22:54:56 +00:00
Richard van der Hoff
c5abb67e43 Python 3.8 for tox (#6341)
... and update INSTALL.md to include py3.8.

We'll also have to update the buildkite pipeline to run it
2019-11-07 17:14:13 +00:00
Brendan Abolivier
dad8d68c99 Update synapse/storage/data_stores/main/events_bg_updates.py
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-11-07 17:01:53 +00:00
Brendan Abolivier
6d360f099f Update synapse/storage/data_stores/main/events_bg_updates.py
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-11-07 17:01:43 +00:00
Brendan Abolivier
c9b27d0044 Copy results 2019-11-07 16:47:45 +00:00
Brendan Abolivier
cd31201267 Revert "Back to using cursor_to_dict"
This reverts commit 1186612d6c.
2019-11-07 16:47:15 +00:00
Brendan Abolivier
1186612d6c Back to using cursor_to_dict 2019-11-07 16:46:41 +00:00
Brendan Abolivier
ec2cb9f298 Initialise value before looping 2019-11-07 16:18:40 +00:00
Brendan Abolivier
bb78276bdc Incorporate review 2019-11-07 15:25:27 +00:00
Brendan Abolivier
b9cba07962 Lint 2019-11-07 14:57:15 +00:00
Brendan Abolivier
70804392ae Only join on event_labels if we're filtering on labels 2019-11-07 14:55:10 +00:00
Brendan Abolivier
15a1a02e70 Handle lack of filter 2019-11-07 12:04:37 +00:00
Brendan Abolivier
4f519d556e Changelog 2019-11-07 11:51:54 +00:00
Brendan Abolivier
3f9b61ff95 Fix the SQL SELECT query in _paginate_room_events_txn
Doing a SELECT DISTINCT when paginating is quite expensive, because it requires the engine to sort the entire events table. However, we only need it if we're filtering on two or more labels, so this PR changes the query so that DISTINCT is only used in that case.
2019-11-07 11:51:11 +00:00
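A minimal sketch of the idea (the helper and column list are illustrative, not the actual Synapse query builder): the join against `event_labels` only produces duplicate rows when matching two or more labels, so that is the only case that pays for `DISTINCT`.

```python
def select_clause(filter_labels):
    # Joining on event_labels yields one row per matched label, so
    # duplicates are only possible when filtering on two or more labels.
    prefix = "SELECT DISTINCT" if len(filter_labels) > 1 else "SELECT"
    return prefix + " event_id, topological_ordering, stream_ordering FROM events"
```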
Andrew Morgan
e914cf12f6 Merge pull request #6235 from matrix-org/anoa/room_upgrade_groups 2019-11-07 11:12:22 +00:00
Richard van der Hoff
b03cddaeb9 tweak changelog 2019-11-07 09:46:25 +00:00
V02460
affcc2cc36 Fix LruCache callback deduplication (#6213) 2019-11-07 09:43:51 +00:00
Andrew Morgan
a6ebef1bfd Make numeric user_id checker start at @0, and don't ratelimit on checking (#6338) 2019-11-06 17:21:20 +00:00
Erik Johnston
5c3363233c Fix deleting state groups during room purge.
And fix the tests to actually test that things got deleted.
2019-11-06 17:02:08 +00:00
Erik Johnston
71f3bd734f Use correct type annotation 2019-11-06 17:00:18 +00:00
Andrew Morgan
55bc8d531e raise exception after multiple failures 2019-11-06 16:52:54 +00:00
Andrew Morgan
1fe3cc2c9c Address review comments 2019-11-06 14:54:24 +00:00
Richard van der Hoff
915903eada Merge branch 'master' into develop 2019-11-06 13:51:11 +00:00
Richard van der Hoff
08b2868ffe Merge branch 'release-v1.5.1' 2019-11-06 13:50:55 +00:00
Richard van der Hoff
4257feb20f build debs for eoan and bullseye 2019-11-06 13:35:56 +00:00
Andrew Morgan
d2f6a67cb4 Add changelog 2019-11-06 12:03:12 +00:00
Andrew Morgan
4059d61e26 Don't forget to ratelimit calls outside of RegistrationHandler 2019-11-06 12:01:54 +00:00
Andrew Morgan
b33c4f7a82 Numeric ID checker now checks @0, don't ratelimit on checking 2019-11-06 11:55:00 +00:00
Erik Johnston
4fc53bf1fb Newsfile 2019-11-06 11:08:58 +00:00
Erik Johnston
f697b4b4a2 Add failed auth ratelimiting to UIA 2019-11-06 11:08:58 +00:00
Erik Johnston
541f1b92d9 Only do rc_login ratelimiting on successful login.
We were doing this in a number of places which meant that some login
code paths incremented the counter multiple times.

It was also applying ratelimiting to UIA endpoints, which was probably
not intentional.

In particular, some custom auth modules were calling
`check_user_exists`, which incremented the counters, meaning that people
would fail to login sometimes.
2019-11-06 11:08:58 +00:00
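The shape of the fix, as a hedged sketch (the `can_do_action`-style API and names are illustrative, not the real `Ratelimiter`): check the limit up front without consuming it, and only record the attempt once login actually succeeds.

```python
class LimitExceededError(Exception):
    pass


def login(ratelimiter, user_id, check_credentials):
    # Enforce the limit without consuming it, so lookups such as
    # check_user_exists no longer eat into the login budget.
    if not ratelimiter.can_do_action(user_id, update=False):
        raise LimitExceededError(user_id)

    user = check_credentials()  # raises on bad credentials

    # Count only successful logins against rc_login.
    ratelimiter.can_do_action(user_id, update=True)
    return user
```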
Brendan Abolivier
24a214bd1b Fix field name 2019-11-06 11:04:19 +00:00
Brendan Abolivier
70d93cafdb Update insert 2019-11-06 10:59:03 +00:00
Richard van der Hoff
feafd98aca 1.5.1 2019-11-06 10:02:23 +00:00
Richard van der Hoff
807ec3bd99 Fix bug which caused rejected events to be stored with the wrong room state (#6320)
Fixes a bug where rejected events were persisted with the wrong state group.

Also fixes an occasional internal-server-error when receiving events over
federation which are rejected and (possibly because they are
backwards-extremities) have no prev_group.

Fixes #6289.
2019-11-06 10:01:39 +00:00
Richard van der Hoff
0e3ab8afdc Add some checks that we aren't using state from rejected events (#6330)
* Raise an exception if accessing state for rejected events

Add some sanity checks on accessing state_group etc for
rejected events.

* Skip calculating push actions for rejected events

It didn't actually cause any bugs, because rejected events get filtered out at
various later points, but there's no point in trying to calculate the push
actions for a rejected event.
2019-11-05 22:13:37 +00:00
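A condensed sketch of the guard these checks describe (the real `EventContext` carries many more fields):

```python
class EventContext:
    def __init__(self, rejected=False, state_group=None):
        self.rejected = rejected
        self._state_group = state_group

    @property
    def state_group(self):
        # State for a rejected event is not meaningful, so fail loudly
        # instead of handing callers a bogus state group.
        if self.rejected:
            raise RuntimeError("Attempt to access state_group of rejected event")
        return self._state_group
```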
Erik Johnston
01ba7b38a7 Merge pull request #6336 from matrix-org/erikj/fix_phone_home_stats
Fix phone home stats
2019-11-05 18:29:57 +00:00
Erik Johnston
b437eb48b6 Newsfile 2019-11-05 17:45:29 +00:00
Erik Johnston
052513958d Fix phone home stats 2019-11-05 17:44:09 +00:00
Richard van der Hoff
5570d1c93f Merge pull request #6334 from matrix-org/rav/url_preview_limit_title_2
Fix exception when OpenGraph tag values are ints
2019-11-05 17:28:11 +00:00
Richard van der Hoff
81d49cbb07 Fix exception when OpenGraph tag values are ints 2019-11-05 17:22:58 +00:00
Richard van der Hoff
02f99906f2 Merge pull request #6331 from matrix-org/rav/url_preview_limit_title
Strip overlong OpenGraph data from url preview
2019-11-05 17:08:59 +00:00
Richard van der Hoff
55a7da247a Merge branch 'develop' into rav/url_preview_limit_title 2019-11-05 17:08:07 +00:00
Richard van der Hoff
e78167c94b Apply suggestions from code review
Co-Authored-By: Brendan Abolivier <babolivier@matrix.org>
Co-Authored-By: Erik Johnston <erik@matrix.org>
2019-11-05 16:46:39 +00:00
Richard van der Hoff
e9bfe719ba Strip overlong OpenGraph data from url preview
... to stop people causing DoSes with malicious web pages
2019-11-05 15:51:18 +00:00
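A hedged sketch of such a guard (the cap is illustrative, not the limit Synapse uses); leaving non-string values untouched also sidesteps the integer-valued-tag crash fixed just above.

```python
MAX_OG_VALUE_LENGTH = 1000  # illustrative cap, not Synapse's actual limit


def strip_overlong_og(og):
    """Truncate string OpenGraph values so a malicious page can't bloat
    the preview; non-string values (e.g. ints) pass through unchanged."""
    return {
        key: value[:MAX_OG_VALUE_LENGTH] if isinstance(value, str) else value
        for key, value in og.items()
    }
```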
Brendan Abolivier
f5d8fdf0a7 Update changelog 2019-11-05 14:44:25 +00:00
Richard van der Hoff
4086002827 Improve documentation for EventContext fields (#6319) 2019-11-05 13:23:25 +00:00
Erik Johnston
ffe595381d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_purge_history 2019-11-05 10:27:41 +00:00
Andrew Morgan
506a63de67 Merge branch 'develop' of github.com:matrix-org/synapse into anoa/room_upgrade_groups 2019-11-04 18:22:41 +00:00
Andrew Morgan
c2203bea57 Re-add docstring, with caveats detailed 2019-11-04 18:17:11 +00:00
Brendan Abolivier
e252ffadbc Merge branch 'develop' into babolivier/msc2326_bg_update 2019-11-04 18:09:50 +00:00
Andrew Morgan
0287d033ee Transfer upgraded rooms on groups 2019-11-04 18:08:50 +00:00
Amber Brown
4e1c7b79fa Remove the psutil dependency (#6318)
* remove psutil and replace with resource
2019-11-05 05:05:48 +11:00
Erik Johnston
7134ca7daa Change to not require a state_groups.room_id index.
This does mean that we won't clean up orphaned state groups (i.e. state
groups that were persisted but the associated event wasn't).
2019-11-04 13:36:57 +00:00
Erik Johnston
6a0092d371 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_purge_history 2019-11-04 13:29:35 +00:00
Richard van der Hoff
cc6243b4c0 document the REPLICATE command a bit better (#6305)
since I found myself wondering how it works
2019-11-04 12:40:18 +00:00
Brendan Abolivier
3b29a73f9f Print out the actual number of affected rows 2019-11-04 09:56:11 +00:00
Brendan Abolivier
824bba2f78 Correctly order results 2019-11-04 09:56:11 +00:00
Brendan Abolivier
49008e674f TODO 2019-11-04 09:56:11 +00:00
Brendan Abolivier
1586f2c7e7 Fix exit condition 2019-11-04 09:56:11 +00:00
Brendan Abolivier
1c1268245d Lint 2019-11-04 09:56:11 +00:00
Brendan Abolivier
416c7baee6 Changelog 2019-11-04 09:56:10 +00:00
Brendan Abolivier
911b03ca31 Don't try to process events we already have a label for 2019-11-04 09:56:10 +00:00
Brendan Abolivier
07cb38e965 Use a sensible default value for labels 2019-11-04 09:56:10 +00:00
Brendan Abolivier
a46574281d Use the right format for rows 2019-11-04 09:56:10 +00:00
Brendan Abolivier
c9a1b80a74 MSC2326: Add background update to take previous events into account 2019-11-04 09:56:04 +00:00
Brendan Abolivier
f496d25877 Merge pull request #6301 from matrix-org/babolivier/msc2326
Implement MSC2326 (label based filtering)
2019-11-01 17:04:45 +00:00
Brendan Abolivier
988d8d6507 Incorporate review 2019-11-01 16:22:44 +00:00
Richard van der Hoff
c6516adbe0 Factor out an _AsyncEventContextImpl (#6298)
The intention here is to make it clearer which fields we can expect to be
populated when: notably, that the _event_type etc aren't used for the
synchronous impl of EventContext.
2019-11-01 16:19:09 +00:00
Brendan Abolivier
5598445655 Update synapse/storage/data_stores/main/schema/delta/56/event_labels.sql
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-11-01 16:18:34 +00:00
Hubert Chathi
fa7e52caf1 Merge pull request #6313 from matrix-org/uhoreg/cross_signing_fix_sqlite_schema
fix hidden field in devices table for older sqlite
2019-11-01 10:52:46 -04:00
Jason Robinson
67a65918ad Add contributor docs for using the provided linters script (#6164)
* Add lint dependencies black, flake8 and isort

These are required when running the `lint.sh` dev scripts.

Signed-off-by: Jason Robinson <jasonr@matrix.org>

* Add contributor docs for using the provided linters script

Add also to the pull request template to avoid build failures due
to people not knowing that linters need running.

Signed-off-by: Jason Robinson <jasonr@matrix.org>

* Fix mention of linter errors correction

Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>

* Add mention for installing linter dependencies

Co-Authored-By: Andrew Morgan <1342360+anoadragon453@users.noreply.github.com>

* Remove linters from python dependencies as per PR review

Signed-off-by: Jason Robinson <jasonr@matrix.org>
2019-11-02 01:45:09 +11:00
Richard van der Hoff
1cb84c6486 Support for routing outbound HTTP requests via a proxy (#6239)
The `http_proxy` and `HTTPS_PROXY` env vars can be set to a `host[:port]` value which should point to a proxy.

The address of the proxy should be excluded from IP blacklists such as the `url_preview_ip_range_blacklist`.

The proxy will then be used for
 * push
 * url previews
 * phone-home stats
 * recaptcha validation
 * CAS auth validation

It will *not* be used for:
 * Application Services
 * Identity servers
 * Outbound federation
 * In worker configurations, connections from workers to masters

Fixes #4198.
2019-11-01 14:07:44 +00:00
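For illustration, the convention described here is the standard one (values below are examples; note the plain `host[:port]` form, with no scheme). In practice these would be set in the service's environment, e.g. its systemd unit, before Synapse starts.

```python
import os

# Example only: route Synapse's outbound HTTP(S) via a proxy on port 3128.
os.environ["http_proxy"] = "proxy.example.com:3128"
os.environ["HTTPS_PROXY"] = "proxy.example.com:3128"
```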
Andrew Morgan
fe1f2b4520 Remove last usages of deprecated logging.warn method (#6314) 2019-11-01 12:03:44 +00:00
Brendan Abolivier
a2c63c619a Add more data to the event_labels table and fix the indexes 2019-11-01 11:47:28 +00:00
Erik Johnston
669b6cbda3 Fix up comment 2019-11-01 11:32:20 +00:00
Neil Pilgrim
befd58f47b Document lint.sh & allow application to specified files only (#6312) 2019-11-01 10:52:20 +00:00
Brendan Abolivier
e3689ac6f7 Add unstable feature flag 2019-11-01 10:41:23 +00:00
Brendan Abolivier
57cdb046e4 Lint 2019-11-01 10:39:14 +00:00
Brendan Abolivier
c6dbca2422 Incorporate review 2019-11-01 10:30:51 +00:00
Andrew Morgan
ace947e8da Depublish a room from the public rooms list when it is upgraded (#6232) 2019-11-01 10:28:09 +00:00
Hubert Chathi
53d7680e32 Merge pull request #5727 from matrix-org/uhoreg/e2e_cross-signing2-part3
Cross-signing [4/4] -- federation edition
2019-10-31 23:59:35 -04:00
Hubert Chathi
1f156398b9 add changelog 2019-10-31 23:02:20 -04:00
Hubert Chathi
c61db13183 fix hidden field in devices table for older sqlite 2019-10-31 22:52:55 -04:00
Hubert Chathi
c3fc176c60 Update synapse/storage/data_stores/main/devices.py
Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-10-31 22:49:48 -04:00
Hubert Chathi
6f4bc6d01d Merge branch 'develop' into cross-signing_federation 2019-10-31 22:38:21 -04:00
Hubert Chathi
3b4216f961 Merge pull request #6254 from matrix-org/uhoreg/cross_signing_fix_workers_notify
make notification of signatures work with workers
2019-10-31 22:35:03 -04:00
Will Hunt
42e707c663 rstrip slashes from url on appservice (#6306) 2019-10-31 17:32:25 +00:00
Hubert Chathi
9c94b48bf1 Merge branch 'develop' into uhoreg/cross_signing_fix_workers_notify 2019-10-31 12:32:07 -04:00
Hubert Chathi
f7e4a582ef clean up code a bit 2019-10-31 12:01:00 -04:00
Erik Johnston
fb1a6914cf Update log line to lie a little less 2019-10-31 15:45:48 +00:00
Amber Brown
020add5099 Update black to 19.10b0 (#6304)
* update version of black and also fix the mypy config being overridden
2019-11-01 02:43:24 +11:00
Erik Johnston
61be1a2926 Add state_groups.room_id index 2019-10-31 15:39:26 +00:00
Erik Johnston
f91f2a1f92 Docstrings 2019-10-31 15:26:00 +00:00
Erik Johnston
8f5bbdb987 Fix purge room API 2019-10-31 15:22:08 +00:00
Erik Johnston
cd581338cf Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_purge_history 2019-10-31 15:19:26 +00:00
Erik Johnston
dfe0cd71b6 Merge pull request #6294 from matrix-org/erikj/add_state_storage
Add StateGroupStorage interface
2019-10-31 16:17:53 +01:00
Travis Ralston
3a74c03ffb Expose some homeserver functionality to spam checkers (#6259)
* Offer the homeserver instance to the spam checker

* Newsfile

* Linting

* Expose a Spam Checker API instead of passing the homeserver object

* Alter changelog

* s/hs/api
2019-10-31 09:16:14 -06:00
Erik Johnston
69489f8eb1 Merge pull request #6307 from matrix-org/erikj/fix_purge_room
Fix /purge_room admin API
2019-10-31 16:08:34 +01:00
Erik Johnston
64f2b8c3d8 Apply suggestions from code review
Fix docstring

Co-Authored-By: Richard van der Hoff <1389908+richvdh@users.noreply.github.com>
2019-10-31 15:44:31 +01:00
Erik Johnston
b2ff8c305f Newsfile 2019-10-31 11:32:53 +00:00
Erik Johnston
97c60ccaa3 Add unit test for /purge_room API 2019-10-31 11:30:25 +00:00
Erik Johnston
c6bcd38841 Fix /purge_room API.
It fails trying to clean the `topic` table which was recently removed.
2019-10-31 11:17:23 +00:00
Richard van der Hoff
eb9a0d9e48 Merge remote-tracking branch 'origin/master' into develop 2019-10-31 11:17:05 +00:00
Andrew Morgan
54fef094b3 Remove usage of deprecated logger.warn method from codebase (#6271)
Replace every instance of `logger.warn` with `logger.warning` as the former is deprecated.
2019-10-31 10:23:24 +00:00
Hubert Chathi
998f7fe7d4 make user signatures a separate stream 2019-10-30 17:22:52 -04:00
Hubert Chathi
670972c0e1 Merge branch 'develop' into uhoreg/cross_signing_fix_workers_notify 2019-10-30 16:46:31 -04:00
Hubert Chathi
bb6cec27a5 rename get_devices_by_remote to get_device_updates_by_remote 2019-10-30 14:57:34 -04:00
Richard van der Hoff
0467f33584 fix delete_existing for _persist_events (#6300)
this is part of _retry_on_integrity_error, so should only be on _persist_events_and_state_updates
2019-10-30 18:05:00 +00:00
Brendan Abolivier
dcc069a2e2 Lint 2019-10-30 18:01:56 +00:00
Brendan Abolivier
62588eae4a Changelog 2019-10-30 17:54:40 +00:00
Brendan Abolivier
d8c9109aee Add integration tests for /messages 2019-10-30 17:48:22 +00:00
Brendan Abolivier
fe51d6cacf Add more integration testing 2019-10-30 17:28:41 +00:00
Brendan Abolivier
395683add1 Add integration tests for sync 2019-10-30 16:47:37 +00:00
Brendan Abolivier
e7943f660a Add unit tests 2019-10-30 16:15:04 +00:00
Brendan Abolivier
233b14ebe1 Add index on label 2019-10-30 15:58:05 +00:00
Brendan Abolivier
acd16ad86a Implement filtering 2019-10-30 15:56:33 +00:00
Erik Johnston
ecfba89a78 Newsfile 2019-10-30 15:23:37 +00:00
Erik Johnston
7c8c97e635 Split purge API into events vs state 2019-10-30 15:23:37 +00:00
Erik Johnston
d3f694d628 Newsfile 2019-10-30 14:53:09 +00:00
Erik Johnston
69f0054ce6 Port to use state storage 2019-10-30 14:46:54 +00:00
Erik Johnston
5db03535d5 Add StateGroupStorage interface 2019-10-30 14:46:49 +00:00
Brendan Abolivier
fa0dcbc8fa Store labels for new events 2019-10-30 14:27:15 +00:00
Hubert Chathi
7d7eac61be Merge branch 'develop' into cross-signing_federation 2019-10-30 10:17:10 -04:00
Hubert Chathi
bc32f102cd black 2019-10-30 10:07:36 -04:00
Hubert Chathi
d78b1e339d apply changes as a result of PR review 2019-10-30 10:01:53 -04:00
Erik Johnston
b7fe62b766 Merge pull request #6240 from matrix-org/erikj/split_out_persistence_store
Move persist_events out from main data store.
2019-10-30 14:58:44 +01:00
Erik Johnston
ec6de1cc7d Merge branch 'develop' of github.com:matrix-org/synapse into erikj/split_out_persistence_store 2019-10-30 13:37:04 +00:00
Erik Johnston
a8d16f6c00 Review comments 2019-10-30 13:36:12 +00:00
Erik Johnston
e5c3a99091 Merge pull request #6291 from matrix-org/erikj/fix_cache_descriptor
Make ObservableDeferred.observe() always return deferred.
2019-10-30 14:06:34 +01:00
Yash Jipkate
9677613e9c Modify doc to update Google ReCaptcha terms (#6257) 2019-10-30 12:30:20 +00:00
Erik Johnston
6e677403b7 Clarify docstring 2019-10-30 11:52:04 +00:00
Andrew Morgan
7e17959984 Update email section of INSTALL.md about account_threepid_delegates (#6272) 2019-10-30 11:37:56 +00:00
Erik Johnston
1de28183cb Newsfile 2019-10-30 11:37:56 +00:00
Erik Johnston
326b3dace7 Make ObservableDeferred.observe() always return deferred.
This makes it easier to use in an async/await world.

Also fixes a bug where cache descriptors would occasionally return a raw
value rather than a deferred.
2019-10-30 11:35:46 +00:00
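A condensed sketch of the change (heavily simplified from the real class): `observe()` wraps an already-available result in a Deferred instead of sometimes handing back the raw value.

```python
from twisted.internet import defer


class ObservableDeferred:
    def __init__(self):
        self._result = None  # becomes (success, value) once resolved
        self._observers = []

    def observe(self):
        if self._result is None:
            d = defer.Deferred()
            self._observers.append(d)
            return d
        success, value = self._result
        # Always return a Deferred so callers can await/addCallback
        # uniformly, fixing the raw-value case mentioned above.
        return defer.succeed(value) if success else defer.fail(value)
```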
Andrew Morgan
a2276d4d3c Fix log line that was printing undefined value (#6278) 2019-10-30 11:28:48 +00:00
Andrew Morgan
2cab02f9d1 Update CI to run isort on scripts and scripts-dev (#6270) 2019-10-30 11:17:14 +00:00
Andrew Morgan
7955abeaac Fix small typo in comment (#6269) 2019-10-30 11:16:19 +00:00
Andrew Morgan
46c12918ad Fix typo in domain name in account_threepid_delegates config option (#6273) 2019-10-30 11:07:42 +00:00
Andrew Morgan
9178ac1b6a Remove redundant arguments to CI's flake8 (#6277) 2019-10-30 11:07:18 +00:00
Andrew Morgan
b39ca49db1 Handle FileNotFound error in checking git repository version (#6284) 2019-10-30 11:00:15 +00:00
Erik Johnston
770d1ef673 Merge pull request #6280 from matrix-org/erikj/receipts_async_await
Port receipt and read markers to async/await
2019-10-30 11:44:18 +01:00
Erik Johnston
ba4cc5541c Merge pull request #6274 from matrix-org/erikj/replication_async
Port replication http server endpoints to async/await
2019-10-30 11:44:08 +01:00
Erik Johnston
72bc6294ed Merge pull request #6275 from matrix-org/erikj/port_rest_events
Port room rest handlers to async/await
2019-10-30 11:44:02 +01:00
Erik Johnston
b4465564cc Merge pull request #6279 from matrix-org/erikj/federation_server_async_await
Port federation_server to async/await
2019-10-30 11:43:51 +01:00
Anton Lazarev
213d7eb227 Clarify environment variable usage when running in Docker (#6181) 2019-10-30 07:30:04 +00:00
Brendan Abolivier
47f767269c Add database table for keeping track of labels on events 2019-10-29 16:56:22 +00:00
Erik Johnston
a287f1e804 Don't return coroutines 2019-10-29 16:36:46 +00:00
Erik Johnston
38474707b9 Merge branch 'erikj/federation_server_async_await' of github.com:matrix-org/synapse into erikj/receipts_async_await 2019-10-29 15:53:17 +00:00
Erik Johnston
74c1e16106 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/federation_server_async_await 2019-10-29 15:52:39 +00:00
Erik Johnston
307e313ef4 Merge branch 'develop' of github.com:matrix-org/synapse into erikj/port_rest_events 2019-10-29 15:51:12 +00:00
Erik Johnston
d6e40e7cbd Merge branch 'develop' of github.com:matrix-org/synapse into erikj/replication_async 2019-10-29 15:42:58 +00:00
Brendan Abolivier
d79151921a Fix CI for synapse_port_db (#6276)
* Don't use a virtualenv

* Generate the server's signing key to allow it to start

* Add signing key paths to CI configuration files

* Use a Python script to create the postgresql database

* Improve logging
2019-10-29 15:39:44 +00:00
Erik Johnston
7dd7a385f9 Newsfile 2019-10-29 15:09:48 +00:00
Erik Johnston
2c35ffead2 Port receipt and read markers to async/await 2019-10-29 15:08:22 +00:00
Erik Johnston
09a135b039 Make concurrently_execute work with async/await 2019-10-29 15:02:23 +00:00
Richard van der Hoff
65cb307e19 Merge branch 'master' into develop 2019-10-29 14:40:57 +00:00
Erik Johnston
fec7d88645 Newsfile 2019-10-29 14:27:18 +00:00
Erik Johnston
3f33879be4 Port federation_server to async/await 2019-10-29 14:13:08 +00:00
Brendan Abolivier
cc80968f62 Merge branch 'babolivier/changelog-name' into develop 2019-10-29 14:05:49 +00:00
Erik Johnston
387324688e Newsfile 2019-10-29 13:10:45 +00:00
Erik Johnston
9be41bc121 Port room rest handlers to async/await 2019-10-29 13:09:29 +00:00
Erik Johnston
1a7ed37149 Newsfile 2019-10-29 13:01:50 +00:00
Erik Johnston
e577a4b2ad Port replication http server endpoints to async/await 2019-10-29 13:00:51 +00:00
Erik Johnston
561133c3c5 Merge pull request #6263 from matrix-org/erikj/caches_return_deferreds
Quick fix to ensure cache descriptors always return deferreds
2019-10-29 12:53:21 +01:00
Erik Johnston
e6c7e239ef Update docstring 2019-10-29 11:48:30 +00:00
Erik Johnston
e419c44ba4 Merge branch 'release-v1.5.0' of github.com:matrix-org/synapse into develop 2019-10-29 11:41:27 +00:00
Brendan Abolivier
14504ad573 Add CI for synapse_port_db (#6140)
This adds:

* a test sqlite database
* a configuration file for the sqlite database
* a configuration file for a postgresql database (using the credentials in `.buildkite/docker-compose.pyXX.pgXX.yaml`)

as well as a new script named `.buildkite/scripts/test_synapse_port_db.sh` that:

1. installs Synapse
2. updates the test sqlite database to the latest schema and runs background updates on it
3. creates an empty postgresql database
4. runs the `synapse_port_db` script to migrate the test sqlite database to the empty postgresql database (with coverage)

Step `2` is done via a new script located at `scripts-dev/update_database`.

The test sqlite database is extracted from a SyTest run, so that it can be considered as an actual homeserver's database with actual data in it.
2019-10-28 17:45:32 +00:00
Tobia De Koninck
29207b4488 Fix broken URL in docker/README.md (#6264)
Signed-off-by: Tobia De Koninck <LEDfan@users.noreply.github.com>
2019-10-28 15:39:57 +00:00
Erik Johnston
a8aced58df Newsfile 2019-10-28 13:36:52 +00:00
Erik Johnston
d0d8a22c13 Quick fix to ensure cache descriptors always return deferreds 2019-10-28 13:33:04 +00:00
Richard van der Hoff
bcfc647e4d Merge tag 'v1.5.0rc2' into develop
Synapse 1.5.0rc2 (2019-10-28)
=============================

Bugfixes
--------

- Update list of boolean columns in `synapse_port_db`. ([\#6247](https://github.com/matrix-org/synapse/issues/6247))
- Fix /keys/query API on workers. ([\#6256](https://github.com/matrix-org/synapse/issues/6256))
- Improve signature checking on some federation APIs. ([\#6262](https://github.com/matrix-org/synapse/issues/6262))

Internal Changes
----------------

- Move schema delta files to the correct data store. ([\#6248](https://github.com/matrix-org/synapse/issues/6248))
- Small performance improvement by removing repeated config lookups in room stats calculation. ([\#6255](https://github.com/matrix-org/synapse/issues/6255))
2019-10-28 12:59:13 +00:00
Richard van der Hoff
9aee28927b Convert EventContext to attrs (#6218)
* make EventContext use an attr
2019-10-28 14:29:55 +02:00
Hubert Chathi
da78f61778 Merge pull request #6253 from matrix-org/uhoreg/e2e_backup_delete_keys
delete keys when deleting backup versions
2019-10-25 11:28:11 -04:00
Hubert Chathi
4697c0de0b remove unneeded imports 2019-10-25 10:47:02 -04:00
Hubert Chathi
4cf3a30a20 switch to using HomeserverTestCase 2019-10-25 10:42:07 -04:00
Erik Johnston
64c2cfda8a Merge branch 'release-v1.5.0' of github.com:matrix-org/synapse into develop 2019-10-25 11:34:49 +01:00
Erik Johnston
a71b8c87ec Merge branch 'release-v1.5.0' of github.com:matrix-org/synapse into develop 2019-10-25 11:32:24 +01:00
Erik Johnston
44ab048cfe Merge pull request #6251 from matrix-org/michaelkaye/debug_guard_logging
Reduce debug logging overhead
2019-10-25 10:05:44 +01:00
Erik Johnston
2020f11916 Merge pull request #6250 from matrix-org/michaelkaye/make_user_stats_less_verbose
Make user stats less verbose
2019-10-25 10:04:51 +01:00
Hubert Chathi
0417ca1a64 add changelog 2019-10-24 22:49:55 -04:00
Hubert Chathi
c40d7244f8 Merge branch 'develop' into cross-signing_federation 2019-10-24 22:31:25 -04:00
Hubert Chathi
8ac766c44a make notification of signatures work with workers 2019-10-24 22:14:58 -04:00
Hubert Chathi
ff05c9b760 don't error if federation query doesn't have cross-signing keys 2019-10-24 21:46:11 -04:00
Hubert Chathi
29a0bc5637 remove some unnecessary lines 2019-10-24 21:43:02 -04:00
Hubert Chathi
608947eedf add changelog 2019-10-24 21:33:35 -04:00
Hubert Chathi
848cd388d9 delete keys when deleting backups 2019-10-24 21:21:51 -04:00
Michael Kaye
e4d98188da Address codestyle concerns 2019-10-24 18:43:13 +01:00
Michael Kaye
47c02f82e3 Add missing '.' 2019-10-24 18:39:15 +01:00
Michael Kaye
0d7e9523e5 Reduce impact of debug logging 2019-10-24 18:37:55 +01:00
Michael Kaye
8f4a808d9d Delay printf until logging is required.
Using % will cause the string to be generated even if debugging
is off.
2019-10-24 18:31:53 +01:00
Michael Kaye
9eebc1e73b use %r to __repr__ objects
This avoids calculating __repr__ unless we are going to log.
2019-10-24 18:18:56 +01:00
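These two commits describe the same pattern; a quick self-contained illustration (the event dict is made up):

```python
import logging

logger = logging.getLogger(__name__)
event = {"type": "m.room.message", "content": {"body": "hi"}}

# Eager: the string (and repr of `event`) is built even when DEBUG is off.
logger.debug("handling event: %s" % (event,))

# Lazy: formatting is deferred until the record is actually emitted, and
# %r makes the logging machinery call __repr__ only when needed.
logger.debug("handling event: %r", event)
```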
Michael Kaye
f85b9842f0 Don't encode object as UTF-8 string if not needed.
I believe that string formatting ~10-15 sized events will
take a proportion of CPU time.
2019-10-24 18:08:45 +01:00
Michael Kaye
c3cd977fff Add changelog.d 2019-10-24 17:58:50 +01:00
Michael Kaye
39266a9c9f Make user/room stats log line less verbose. 2019-10-24 17:55:53 +01:00
Erik Johnston
9fb96889a4 Newsfile 2019-10-23 16:15:03 +01:00
Erik Johnston
3ca4c7c516 Use new EventPersistenceStore 2019-10-23 16:15:03 +01:00
Erik Johnston
73cf63784b Add DataStores and Storage classes. 2019-10-23 16:15:03 +01:00
Hubert Chathi
dc2cd6f79d move get_e2e_cross_signing_key to EndToEndKeyWorkerStore so it works with workers 2019-10-23 09:13:47 -04:00
Erik Johnston
22a9847670 Move persist_events out from main data store.
This is in preparation for splitting out of state_groups_state from the
main store into it own one, as persisting events depends on calculating
state.
2019-10-23 13:29:44 +01:00
Hubert Chathi
480eac30eb black 2019-10-22 22:37:16 -04:00
Hubert Chathi
404e8c8532 vendor-prefix the EDU name until MSC1756 is merged into the spec 2019-10-22 22:33:23 -04:00
Hubert Chathi
3e3f9b684e fix unit test 2019-10-22 22:26:30 -04:00
Hubert Chathi
0563839535 add news file 2019-10-22 21:51:01 -04:00
Hubert Chathi
1fabf82d50 update to work with newer code, and fix formatting 2019-10-22 21:44:58 -04:00
Hubert Chathi
41ad35b523 add missing param 2019-10-22 19:06:29 -04:00
Hubert Chathi
cfdb84422d make black happy 2019-10-22 19:06:06 -04:00
Hubert Chathi
a1aaf3eea6 don't crash if the user doesn't have cross-signing keys 2019-10-22 19:04:37 -04:00
Hubert Chathi
8d3542a64e implement federation parts of cross-signing 2019-10-22 19:04:35 -04:00
Neil Johnson
82c8799ec7 Set room version default to 5 2019-10-19 09:06:15 +01:00
227 changed files with 6485 additions and 2868 deletions

View File

@@ -0,0 +1,21 @@
# Configuration file used for testing the 'synapse_port_db' script.
# Tells the script to connect to the postgresql database that will be available in the
# CI's Docker setup at the point where this file is considered.
server_name: "test"
signing_key_path: "/src/.buildkite/test.signing.key"
report_stats: false
database:
  name: "psycopg2"
  args:
    user: postgres
    host: postgres
    password: postgres
    database: synapse

# Suppress the key server warning.
trusted_key_servers:
  - server_name: "matrix.org"
    suppress_key_server_warning: true

View File

@@ -0,0 +1,36 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging

from synapse.storage.engines import create_engine

logger = logging.getLogger("create_postgres_db")

if __name__ == "__main__":
    # Create a PostgresEngine.
    db_engine = create_engine({"name": "psycopg2", "args": {}})

    # Connect to postgres to create the base database.
    # We use "postgres" as a database because it's bound to exist and the "synapse" one
    # doesn't exist yet.
    db_conn = db_engine.module.connect(
        user="postgres", host="postgres", password="postgres", dbname="postgres"
    )
    db_conn.autocommit = True
    cur = db_conn.cursor()
    cur.execute("CREATE DATABASE synapse;")
    cur.close()
    db_conn.close()

View File

@@ -0,0 +1,36 @@
#!/bin/bash
#
# Test script for 'synapse_port_db', which creates a virtualenv, installs Synapse along
# with additional dependencies needed for the test (such as coverage or the PostgreSQL
# driver), updates the schema of the test SQLite database and runs background updates on
# it, creates an empty test database in PostgreSQL, then runs the 'synapse_port_db'
# script to test porting the SQLite database to the PostgreSQL database (with coverage).
set -xe
cd `dirname $0`/../..
echo "--- Install dependencies"
# Install dependencies for this test.
pip install psycopg2 coverage coverage-enable-subprocess
# Install Synapse itself. This won't update any libraries.
pip install -e .
echo "--- Generate the signing key"
# Generate the server's signing key.
python -m synapse.app.homeserver --generate-keys -c .buildkite/sqlite-config.yaml
echo "--- Prepare the databases"
# Make sure the SQLite3 database is using the latest schema and has no pending background update.
scripts-dev/update_database --database-config .buildkite/sqlite-config.yaml
# Create the PostgreSQL database.
./.buildkite/scripts/create_postgres_db.py
echo "+++ Run synapse_port_db"
# Run the script
coverage run scripts/synapse_port_db --sqlite-database .buildkite/test_db.db --postgres-config .buildkite/postgres-config.yaml

View File

@@ -0,0 +1,18 @@
# Configuration file used for testing the 'synapse_port_db' script.
# Tells the 'update_database' script to connect to the test SQLite database to upgrade its
# schema and run background updates on it.
server_name: "test"
signing_key_path: "/src/.buildkite/test.signing.key"
report_stats: false
database:
  name: "sqlite3"
  args:
    database: ".buildkite/test_db.db"

# Suppress the key server warning.
trusted_key_servers:
  - server_name: "matrix.org"
    suppress_key_server_warning: true

.buildkite/test_db.db Normal file

Binary file not shown.

View File

@@ -5,3 +5,4 @@
* [ ] Pull request is based on the develop branch
* [ ] Pull request includes a [changelog file](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#changelog)
* [ ] Pull request includes a [sign off](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#sign-off)
* [ ] Code style is correct (run the [linters](https://github.com/matrix-org/synapse/blob/master/CONTRIBUTING.rst#code-style))

View File

@@ -1,3 +1,86 @@
Synapse 1.6.0rc1 (2019-11-20)
=============================
Features
--------
- Add federation support for cross-signing. ([\#5727](https://github.com/matrix-org/synapse/issues/5727))
- Increase default room version from 4 to 5, thereby enforcing server key validity period checks. ([\#6220](https://github.com/matrix-org/synapse/issues/6220))
- Add support for outbound http proxying via http_proxy/HTTPS_PROXY env vars. ([\#6238](https://github.com/matrix-org/synapse/issues/6238))
- Implement label-based filtering on `/sync` and `/messages` ([MSC2326](https://github.com/matrix-org/matrix-doc/pull/2326)). ([\#6301](https://github.com/matrix-org/synapse/issues/6301), [\#6310](https://github.com/matrix-org/synapse/issues/6310), [\#6340](https://github.com/matrix-org/synapse/issues/6340))
Bugfixes
--------
- Fix LruCache callback deduplication for Python 3.8. Contributed by @V02460. ([\#6213](https://github.com/matrix-org/synapse/issues/6213))
- Remove a room from a server's public rooms list on room upgrade. ([\#6232](https://github.com/matrix-org/synapse/issues/6232), [\#6235](https://github.com/matrix-org/synapse/issues/6235))
- Delete keys from key backup when deleting backup versions. ([\#6253](https://github.com/matrix-org/synapse/issues/6253))
- Make notification of cross-signing signatures work with workers. ([\#6254](https://github.com/matrix-org/synapse/issues/6254))
- Fix exception when remote servers attempt to join a room that they're not allowed to join. ([\#6278](https://github.com/matrix-org/synapse/issues/6278))
- Prevent errors from appearing on Synapse startup if `git` is not installed. ([\#6284](https://github.com/matrix-org/synapse/issues/6284))
- Appservice requests will no longer contain a double slash prefix when the appservice url provided ends in a slash. ([\#6306](https://github.com/matrix-org/synapse/issues/6306))
- Fix `/purge_room` admin API. ([\#6307](https://github.com/matrix-org/synapse/issues/6307))
- Fix the `hidden` field in the `devices` table for SQLite versions prior to 3.23.0. ([\#6313](https://github.com/matrix-org/synapse/issues/6313))
- Fix bug which caused rejected events to be persisted with the wrong room state. ([\#6320](https://github.com/matrix-org/synapse/issues/6320))
- Fix bug where `rc_login` ratelimiting would prematurely kick in. ([\#6335](https://github.com/matrix-org/synapse/issues/6335))
- Prevent the server taking a long time to start up when guest registration is enabled. ([\#6338](https://github.com/matrix-org/synapse/issues/6338))
- Fix bug where upgrading a guest account to a full user would fail when account validity is enabled. ([\#6359](https://github.com/matrix-org/synapse/issues/6359))
- Fix `to_device` stream ID getting reset every time Synapse restarts, which had the potential to cause unable to decrypt errors. ([\#6363](https://github.com/matrix-org/synapse/issues/6363))
- Fix permission denied error when trying to generate a config file with the docker image. ([\#6389](https://github.com/matrix-org/synapse/issues/6389))
Improved Documentation
----------------------
- Contributor documentation now mentions the script to run linters. ([\#6164](https://github.com/matrix-org/synapse/issues/6164))
- Modify CAPTCHA_SETUP.md to update the terms `private key` and `public key` to `secret key` and `site key` respectively. Contributed by Yash Jipkate. ([\#6257](https://github.com/matrix-org/synapse/issues/6257))
- Update `INSTALL.md` Email section to talk about `account_threepid_delegates`. ([\#6272](https://github.com/matrix-org/synapse/issues/6272))
- Fix a small typo in `account_threepid_delegates` configuration option. ([\#6273](https://github.com/matrix-org/synapse/issues/6273))
Internal Changes
----------------
- Add a CI job to test the `synapse_port_db` script. ([\#6140](https://github.com/matrix-org/synapse/issues/6140), [\#6276](https://github.com/matrix-org/synapse/issues/6276))
- Convert EventContext to an attrs. ([\#6218](https://github.com/matrix-org/synapse/issues/6218))
- Move `persist_events` out from main data store. ([\#6240](https://github.com/matrix-org/synapse/issues/6240), [\#6300](https://github.com/matrix-org/synapse/issues/6300))
- Reduce verbosity of user/room stats. ([\#6250](https://github.com/matrix-org/synapse/issues/6250))
- Reduce impact of debug logging. ([\#6251](https://github.com/matrix-org/synapse/issues/6251))
- Expose some homeserver functionality to spam checkers. ([\#6259](https://github.com/matrix-org/synapse/issues/6259))
- Change cache descriptors to always return deferreds. ([\#6263](https://github.com/matrix-org/synapse/issues/6263), [\#6291](https://github.com/matrix-org/synapse/issues/6291))
- Fix incorrect comment regarding the functionality of an `if` statement. ([\#6269](https://github.com/matrix-org/synapse/issues/6269))
- Update CI to run `isort` over the `scripts` and `scripts-dev` directories. ([\#6270](https://github.com/matrix-org/synapse/issues/6270))
- Replace every instance of `logger.warn` method with `logger.warning` as the former is deprecated. ([\#6271](https://github.com/matrix-org/synapse/issues/6271), [\#6314](https://github.com/matrix-org/synapse/issues/6314))
- Port replication http server endpoints to async/await. ([\#6274](https://github.com/matrix-org/synapse/issues/6274))
- Port room rest handlers to async/await. ([\#6275](https://github.com/matrix-org/synapse/issues/6275))
- Remove redundant CLI parameters on CI's `flake8` step. ([\#6277](https://github.com/matrix-org/synapse/issues/6277))
- Port `federation_server.py` to async/await. ([\#6279](https://github.com/matrix-org/synapse/issues/6279))
- Port receipt and read markers to async/await. ([\#6280](https://github.com/matrix-org/synapse/issues/6280))
- Split out state storage into separate data store. ([\#6294](https://github.com/matrix-org/synapse/issues/6294), [\#6295](https://github.com/matrix-org/synapse/issues/6295))
- Refactor EventContext for clarity. ([\#6298](https://github.com/matrix-org/synapse/issues/6298))
- Update the version of black used to 19.10b0. ([\#6304](https://github.com/matrix-org/synapse/issues/6304))
- Add some documentation about worker replication. ([\#6305](https://github.com/matrix-org/synapse/issues/6305))
- Move admin endpoints into separate files. Contributed by Awesome Technologies Innovationslabor GmbH. ([\#6308](https://github.com/matrix-org/synapse/issues/6308))
- Document the use of `lint.sh` for code style enforcement & extend it to run on specified paths only. ([\#6312](https://github.com/matrix-org/synapse/issues/6312))
- Add optional python dependencies and dependent binary libraries to snapcraft packaging. ([\#6317](https://github.com/matrix-org/synapse/issues/6317))
- Remove the dependency on psutil and replace functionality with the stdlib `resource` module. ([\#6318](https://github.com/matrix-org/synapse/issues/6318), [\#6336](https://github.com/matrix-org/synapse/issues/6336))
- Improve documentation for EventContext fields. ([\#6319](https://github.com/matrix-org/synapse/issues/6319))
- Add some checks that we aren't using state from rejected events. ([\#6330](https://github.com/matrix-org/synapse/issues/6330))
- Add continuous integration for python 3.8. ([\#6341](https://github.com/matrix-org/synapse/issues/6341))
- Correct spacing/case of various instances of the word "homeserver". ([\#6357](https://github.com/matrix-org/synapse/issues/6357))
- Temporarily blacklist the failing unit test PurgeRoomTestCase.test_purge_room. ([\#6361](https://github.com/matrix-org/synapse/issues/6361))
Synapse 1.5.1 (2019-11-06)
==========================
Features
--------
- Limit the length of data returned by url previews, to prevent DoS attacks. ([\#6331](https://github.com/matrix-org/synapse/issues/6331), [\#6334](https://github.com/matrix-org/synapse/issues/6334))
Synapse 1.5.0 (2019-10-29)
==========================

View File

@@ -58,10 +58,29 @@ All Matrix projects have a well-defined code-style - and sometimes we've even
got as far as documenting it... For instance, synapse's code style doc lives
at https://github.com/matrix-org/synapse/tree/master/docs/code_style.md.
To facilitate meeting these criteria you can run ``scripts-dev/lint.sh``
locally. Since this runs the tools listed in the above document, you'll need
python 3.6 and to install each tool. **Note that the script does not just
test/check, but also reformats code, so you may wish to ensure any new code is
committed first**. By default this script checks all files and can take some
time; if you alter only certain files, you might wish to specify paths as
arguments to reduce the run-time.
Please ensure your changes match the cosmetic style of the existing project,
and **never** mix cosmetic and functional changes in the same commit, as it
makes it horribly hard to review otherwise.
Before committing, ensure the changes you've made don't produce
linting errors. You can do this by running the linters as follows; be sure to
commit any files that the linters corrected.
::

    # Install the dependencies
    pip install -U black flake8 isort

    # Run the linter script
    ./scripts-dev/lint.sh
Changelog
~~~~~~~~~

View File

@@ -36,7 +36,7 @@ that your email address is probably `user@example.com` rather than
System requirements:
- POSIX-compliant system (tested on Linux & OS X)
- Python 3.5, 3.6, or 3.7
- Python 3.5, 3.6, 3.7 or 3.8.
- At least 1GB of free RAM if you want to join large public rooms like #matrix:matrix.org
Synapse is written in Python but some of the libraries it uses are written in
@@ -133,9 +133,9 @@ sudo yum install libtiff-devel libjpeg-devel libzip-devel freetype-devel \
sudo yum groupinstall "Development Tools"
```
#### Mac OS X
#### macOS
Installing prerequisites on Mac OS X:
Installing prerequisites on macOS:
```
xcode-select --install
@@ -144,6 +144,14 @@ sudo pip install virtualenv
brew install pkg-config libffi
```
On macOS Catalina (10.15) you may need to explicitly install OpenSSL
via brew and inform `pip` about it so that `psycopg2` builds:
```
brew install openssl@1.1
export LDFLAGS=-L/usr/local/Cellar/openssl\@1.1/1.1.1d/lib/
```
#### OpenSUSE
Installing prerequisites on openSUSE:
@@ -413,16 +421,18 @@ For a more detailed guide to configuring your server for federation, see
## Email
It is desirable for Synapse to have the capability to send email. For example,
this is required to support the 'password reset' feature.
It is desirable for Synapse to have the capability to send email. This allows
Synapse to send password reset emails, send verifications when an email address
is added to a user's account, and send email notifications to users when they
receive new messages.
To configure an SMTP server for Synapse, modify the configuration section
headed ``email``, and be sure to have at least the ``smtp_host``, ``smtp_port``
and ``notif_from`` fields filled out. You may also need to set ``smtp_user``,
``smtp_pass``, and ``require_transport_security``.
headed `email`, and be sure to have at least the `smtp_host`, `smtp_port`
and `notif_from` fields filled out. You may also need to set `smtp_user`,
`smtp_pass`, and `require_transport_security`.
If Synapse is not configured with an SMTP server, password reset via email will
be disabled by default.
If email is not configured, password reset, registration and notifications via
email will be disabled.
## Registering a user

changelog.d/6362.misc Normal file
View File

@@ -0,0 +1 @@
Clean up some unnecessary quotation marks around the codebase.

changelog.d/6388.doc Normal file
View File

@@ -0,0 +1 @@
Fix link in the user directory documentation.

View File

@@ -78,7 +78,7 @@ class InputOutput(object):
m = re.match("^join (\S+)$", line)
if m:
# The `sender` wants to join a room.
room_name, = m.groups()
(room_name,) = m.groups()
self.print_line("%s joining %s" % (self.user, room_name))
self.server.join_room(room_name, self.user, self.user)
# self.print_line("OK.")
@@ -105,7 +105,7 @@ class InputOutput(object):
m = re.match("^backfill (\S+)$", line)
if m:
# we want to backfill a room
room_name, = m.groups()
(room_name,) = m.groups()
self.print_line("backfill %s" % room_name)
self.server.backfill(room_name)
return

debian/changelog vendored
View File

@@ -1,3 +1,9 @@
matrix-synapse-py3 (1.5.1) stable; urgency=medium
* New synapse release 1.5.1.
-- Synapse Packaging team <packages@matrix.org> Wed, 06 Nov 2019 10:02:14 +0000
matrix-synapse-py3 (1.5.0) stable; urgency=medium
* New synapse release 1.5.0.

View File

@@ -101,7 +101,7 @@ is suitable for local testing, but for any practical use, you will either need
to use a reverse proxy, or configure Synapse to expose an HTTPS port.
For documentation on using a reverse proxy, see
https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.rst.
https://github.com/matrix-org/synapse/blob/master/docs/reverse_proxy.md.
For more information on enabling TLS support in synapse itself, see
https://github.com/matrix-org/synapse/blob/master/INSTALL.md#tls-certificates. Of

View File

@@ -169,11 +169,11 @@ def run_generate_config(environ, ownership):
# log("running %s" % (args, ))
if ownership is not None:
args = ["su-exec", ownership] + args
os.execv("/sbin/su-exec", args)
# make sure that synapse has perms to write to the data dir.
subprocess.check_output(["chown", ownership, data_dir])
args = ["su-exec", ownership] + args
os.execv("/sbin/su-exec", args)
else:
os.execv("/usr/local/bin/python", args)
@@ -217,8 +217,9 @@ def main(args, environ):
# backwards-compatibility generate-a-config-on-the-fly mode
if "SYNAPSE_CONFIG_PATH" in environ:
error(
"SYNAPSE_SERVER_NAME and SYNAPSE_CONFIG_PATH are mutually exclusive "
"except in `generate` or `migrate_config` mode."
"SYNAPSE_SERVER_NAME can only be combined with SYNAPSE_CONFIG_PATH "
"in `generate` or `migrate_config` mode. To start synapse using a "
"config file, unset the SYNAPSE_SERVER_NAME environment variable."
)
config_path = "/compiled/homeserver.yaml"

View File

@@ -4,7 +4,7 @@ The captcha mechanism used is Google's ReCaptcha. This requires API keys from Go
## Getting keys
Requires a public/private key pair from:
Requires a site/secret key pair from:
<https://developers.google.com/recaptcha/>
@@ -15,8 +15,8 @@ Must be a reCAPTCHA v2 key using the "I'm not a robot" Checkbox option
The keys are a config option on the home server config. If they are not
visible, you can generate them via `--generate-config`. Set the following value:
recaptcha_public_key: YOUR_PUBLIC_KEY
recaptcha_private_key: YOUR_PRIVATE_KEY
recaptcha_public_key: YOUR_SITE_KEY
recaptcha_private_key: YOUR_SECRET_KEY
In addition, you MUST enable captchas via:

View File

@@ -72,7 +72,7 @@ pid_file: DATADIR/homeserver.pid
# For example, for room version 1, default_room_version should be set
# to "1".
#
#default_room_version: "4"
#default_room_version: "5"
# The GC threshold parameters to pass to `gc.set_threshold`, if defined
#
@@ -287,7 +287,7 @@ listeners:
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained Homeserver Settings
# Resource-constrained homeserver Settings
#
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
@@ -743,11 +743,11 @@ uploads_path: "DATADIR/uploads"
## Captcha ##
# See docs/CAPTCHA_SETUP for full details of configuring this.
# This Home Server's ReCAPTCHA public key.
# This homeserver's ReCAPTCHA public key.
#
#recaptcha_public_key: "YOUR_PUBLIC_KEY"
# This Home Server's ReCAPTCHA private key.
# This homeserver's ReCAPTCHA private key.
#
#recaptcha_private_key: "YOUR_PRIVATE_KEY"
@@ -955,7 +955,7 @@ uploads_path: "DATADIR/uploads"
# If a delegate is specified, the config option public_baseurl must also be filled out.
#
account_threepid_delegates:
#email: https://example.com # Delegate email sending to example.org
#email: https://example.com # Delegate email sending to example.com
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
# Users who register on this homeserver will automatically be joined
@@ -1270,7 +1270,7 @@ password_config:
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name: Matrix
#
# # Enable email notifications by default

View File

@@ -199,7 +199,20 @@ client (C):
#### REPLICATE (C)
Asks the server to replicate a given stream
Asks the server to replicate a given stream. The syntax is:
```
REPLICATE <stream_name> <token>
```
Where `<token>` may be either:
* a numeric stream_id to stream updates since (exclusive)
* `NOW` to stream all subsequent updates.
The `<stream_name>` is the name of a replication stream to subscribe
to (see [here](../synapse/replication/tcp/streams/_base.py) for a list
of streams). It can also be `ALL` to subscribe to all known streams,
in which case the `<token>` must be set to `NOW`.
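For example (illustrative commands following the syntax above):
```
REPLICATE events NOW
REPLICATE events 53
```
The first subscribes to all future updates on the `events` stream; the second also replays updates with stream ids greater than 53.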
#### USER_SYNC (C)

View File

@@ -7,7 +7,6 @@ who are present in a publicly viewable room present on the server.
The directory info is stored in various tables, which can (typically after
DB corruption) get stale or out of sync. If this happens, for now the
solution to fix it is to execute the SQL here
https://github.com/matrix-org/synapse/blob/master/synapse/storage/schema/delta/53/user_dir_populate.sql
solution to fix it is to execute the SQL [here](../synapse/storage/data_stores/main/schema/delta/53/user_dir_populate.sql)
and then restart synapse. This should then start a background task to
flush the current tables and regenerate the directory.

View File

@@ -1,8 +1,11 @@
[mypy]
namespace_packages=True
plugins=mypy_zope:plugin
follow_imports=skip
mypy_path=stubs
namespace_packages = True
plugins = mypy_zope:plugin
follow_imports = normal
check_untyped_defs = True
show_error_codes = True
show_traceback = True
mypy_path = stubs
[mypy-zope]
ignore_missing_imports = True

View File

@@ -20,11 +20,13 @@ from concurrent.futures import ThreadPoolExecutor
DISTS = (
"debian:stretch",
"debian:buster",
"debian:bullseye",
"debian:sid",
"ubuntu:xenial",
"ubuntu:bionic",
"ubuntu:cosmic",
"ubuntu:disco",
"ubuntu:eoan",
)
DESC = '''\

View File

@@ -7,7 +7,15 @@
set -e
isort -y -rc synapse tests scripts-dev scripts
flake8 synapse tests
python3 -m black synapse tests scripts-dev scripts
if [ $# -ge 1 ]
then
files=$*
else
files="synapse tests scripts-dev scripts"
fi
echo "Linting these locations: $files"
isort -y -rc $files
flake8 $files
python3 -m black $files
./scripts-dev/config-lint.sh

scripts-dev/update_database Executable file
View File

@@ -0,0 +1,124 @@
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import logging
import sys
import yaml
from twisted.internet import defer, reactor
from synapse.config.homeserver import HomeServerConfig
from synapse.metrics.background_process_metrics import run_as_background_process
from synapse.server import HomeServer
from synapse.storage import DataStore
from synapse.storage.engines import create_engine
from synapse.storage.prepare_database import prepare_database
logger = logging.getLogger("update_database")
class MockHomeserver(HomeServer):
DATASTORE_CLASS = DataStore
def __init__(self, config, database_engine, db_conn, **kwargs):
super(MockHomeserver, self).__init__(
config.server_name,
reactor=reactor,
config=config,
database_engine=database_engine,
**kwargs
)
self.database_engine = database_engine
self.db_conn = db_conn
def get_db_conn(self):
return self.db_conn
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description=(
"Updates a synapse database to the latest schema and runs background updates"
" on it."
)
)
parser.add_argument("-v", action='store_true')
parser.add_argument(
"--database-config",
type=argparse.FileType('r'),
required=True,
help="A database config file for either a SQLite3 database or a PostgreSQL one.",
)
args = parser.parse_args()
logging_config = {
"level": logging.DEBUG if args.v else logging.INFO,
"format": "%(asctime)s - %(name)s - %(lineno)d - %(levelname)s - %(message)s",
}
logging.basicConfig(**logging_config)
# Load, process and sanity-check the config.
hs_config = yaml.safe_load(args.database_config)
if "database" not in hs_config:
sys.stderr.write("The configuration file must have a 'database' section.\n")
sys.exit(4)
config = HomeServerConfig()
config.parse_config_dict(hs_config, "", "")
# Create the database engine and a connection to it.
database_engine = create_engine(config.database_config)
db_conn = database_engine.module.connect(
**{
k: v
for k, v in config.database_config.get("args", {}).items()
if not k.startswith("cp_")
}
)
# Update the database to the latest schema.
prepare_database(db_conn, database_engine, config=config)
db_conn.commit()
# Instantiate and initialise the homeserver object.
hs = MockHomeserver(
config,
database_engine,
db_conn,
db_config=config.database_config,
)
# setup instantiates the store within the homeserver object.
hs.setup()
store = hs.get_datastore()
@defer.inlineCallbacks
def run_background_updates():
yield store.run_background_updates(sleep=False)
# Stop the reactor to exit the script once every background update is run.
reactor.stop()
# Apply all background updates on the database.
reactor.callWhenRunning(lambda: run_as_background_process(
"background_updates", run_background_updates
))
reactor.run()
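
A note on usage (not part of the diff): the script only sanity-checks that the config it is given has a top-level 'database' section; a real homeserver config needs more keys, as defined by HomeServerConfig. A hypothetical invocation and a minimal sketch of the shape it checks for:

# Hypothetical invocation (path and filename are illustrative):
#   python scripts-dev/update_database --database-config homeserver.yaml -v
import yaml

example = yaml.safe_load(
    """
server_name: example.com
database:
  name: sqlite3
  args:
    database: /path/to/homeserver.db
"""
)

# Mirrors the script's own sanity check above.
assert "database" in example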

View File

@@ -72,7 +72,7 @@ def move_media(origin_server, file_id, src_paths, dest_paths):
# check that the original exists
original_file = src_paths.remote_media_filepath(origin_server, file_id)
if not os.path.exists(original_file):
logger.warn(
logger.warning(
"Original for %s/%s (%s) does not exist",
origin_server,
file_id,

View File

@@ -157,7 +157,7 @@ class Store(
)
except self.database_engine.module.DatabaseError as e:
if self.database_engine.is_deadlock(e):
logger.warn("[TXN DEADLOCK] {%s} %d/%d", desc, i, N)
logger.warning("[TXN DEADLOCK] {%s} %d/%d", desc, i, N)
if i < N:
i += 1
conn.rollback()
@@ -432,7 +432,7 @@ class Porter(object):
for row in rows:
d = dict(zip(headers, row))
if "\0" in d['value']:
logger.warn('dropping search row %s', d)
logger.warning('dropping search row %s', d)
else:
rows_dict.append(d)
@@ -647,7 +647,7 @@ class Porter(object):
if isinstance(col, bytes):
return bytearray(col)
elif isinstance(col, string_types) and "\0" in col:
logger.warn(
logger.warning(
"DROPPING ROW: NUL value in table %s col %s: %r",
table,
headers[j],

View File

@@ -20,3 +20,23 @@ parts:
source: .
plugin: python
python-version: python3
python-packages:
- '.[all]'
build-packages:
- libffi-dev
- libturbojpeg0-dev
- libssl-dev
- libxslt1-dev
- libpq-dev
- zlib1g-dev
stage-packages:
- libasn1-8-heimdal
- libgssapi3-heimdal
- libhcrypto4-heimdal
- libheimbase1-heimdal
- libheimntlm0-heimdal
- libhx509-5-heimdal
- libkrb5-26-heimdal
- libldap-2.4-2
- libpq5
- libsasl2-2

View File

@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
""" This is a reference implementation of a Matrix home server.
""" This is a reference implementation of a Matrix homeserver.
"""
import os
@@ -36,7 +36,7 @@ try:
except ImportError:
pass
__version__ = "1.5.0"
__version__ = "1.6.0rc1"
if bool(os.environ.get("SYNAPSE_TEST_PATCH_LOG_CONTEXTS", False)):
# We import here so that we don't have to install a bunch of deps when

View File

@@ -144,8 +144,8 @@ def main():
logging.captureWarnings(True)
parser = argparse.ArgumentParser(
description="Used to register new users with a given home server when"
" registration has been disabled. The home server must be"
description="Used to register new users with a given homeserver when"
" registration has been disabled. The homeserver must be"
" configured with the 'registration_shared_secret' option"
" set."
)
@@ -202,7 +202,7 @@ def main():
"server_url",
default="https://localhost:8448",
nargs="?",
help="URL to use to talk to the home server. Defaults to "
help="URL to use to talk to the homeserver. Defaults to "
" 'https://localhost:8448'.",
)

View File

@@ -497,7 +497,7 @@ class Auth(object):
token = self.get_access_token_from_request(request)
service = self.store.get_app_service_by_token(token)
if not service:
logger.warn("Unrecognised appservice access token.")
logger.warning("Unrecognised appservice access token.")
raise InvalidClientTokenError()
request.authenticated_entity = service.sender
return defer.succeed(service)

View File

@@ -138,3 +138,10 @@ class LimitBlockingTypes(object):
MONTHLY_ACTIVE_USER = "monthly_active_user"
HS_DISABLED = "hs_disabled"
class EventContentFields(object):
"""Fields found in events' content, regardless of type."""
# Labels for the event, cf https://github.com/matrix-org/matrix-doc/pull/2326
LABELS = "org.matrix.labels"

View File

@@ -457,7 +457,7 @@ def cs_error(msg, code=Codes.UNKNOWN, **kwargs):
class FederationError(RuntimeError):
""" This class is used to inform remote home servers about erroneous
""" This class is used to inform remote homeservers about erroneous
PDUs they sent us.
FATAL: The remote server could not interpret the source event.

View File

@@ -20,6 +20,7 @@ from jsonschema import FormatChecker
from twisted.internet import defer
from synapse.api.constants import EventContentFields
from synapse.api.errors import SynapseError
from synapse.storage.presence import UserPresenceState
from synapse.types import RoomID, UserID
@@ -66,6 +67,10 @@ ROOM_EVENT_FILTER_SCHEMA = {
"contains_url": {"type": "boolean"},
"lazy_load_members": {"type": "boolean"},
"include_redundant_members": {"type": "boolean"},
# Include or exclude events with the provided labels.
# cf https://github.com/matrix-org/matrix-doc/pull/2326
"org.matrix.labels": {"type": "array", "items": {"type": "string"}},
"org.matrix.not_labels": {"type": "array", "items": {"type": "string"}},
},
}
@@ -259,6 +264,9 @@ class Filter(object):
self.contains_url = self.filter_json.get("contains_url", None)
self.labels = self.filter_json.get("org.matrix.labels", None)
self.not_labels = self.filter_json.get("org.matrix.not_labels", [])
def filters_all_types(self):
return "*" in self.not_types
@@ -282,6 +290,7 @@ class Filter(object):
room_id = None
ev_type = "m.presence"
contains_url = False
labels = []
else:
sender = event.get("sender", None)
if not sender:
@@ -300,10 +309,11 @@ class Filter(object):
content = event.get("content", {})
# check if there is a string url field in the content for filtering purposes
contains_url = isinstance(content.get("url"), text_type)
labels = content.get(EventContentFields.LABELS, [])
return self.check_fields(room_id, sender, ev_type, contains_url)
return self.check_fields(room_id, sender, ev_type, labels, contains_url)
def check_fields(self, room_id, sender, event_type, contains_url):
def check_fields(self, room_id, sender, event_type, labels, contains_url):
"""Checks whether the filter matches the given event fields.
Returns:
@@ -313,6 +323,7 @@ class Filter(object):
"rooms": lambda v: room_id == v,
"senders": lambda v: sender == v,
"types": lambda v: _matches_wildcard(event_type, v),
"labels": lambda v: v in labels,
}
for name, match_func in literal_keys.items():
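
For clarity (an illustrative sketch with plain dicts, not Synapse's Filter API): the new "labels" literal key matches an event when any of the filter's requested labels appears in the event content's org.matrix.labels list, and "org.matrix.not_labels" excludes in the same way.

filter_labels = ["#fun"]
not_labels = ["#work"]

event_content = {"body": "hi", "org.matrix.labels": ["#fun"]}
labels = event_content.get("org.matrix.labels", [])

def match(v):
    return v in labels

allowed = any(match(v) for v in filter_labels)
disallowed = any(match(v) for v in not_labels)
print(allowed and not disallowed)  # True: labelled #fun and not #work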

View File

@@ -44,6 +44,8 @@ def check_bind_error(e, address, bind_addresses):
bind_addresses (list): Addresses on which the service listens.
"""
if address == "0.0.0.0" and "::" in bind_addresses:
logger.warn("Failed to listen on 0.0.0.0, continuing because listening on [::]")
logger.warning(
"Failed to listen on 0.0.0.0, continuing because listening on [::]"
)
else:
raise e

View File

@@ -94,7 +94,7 @@ class AppserviceServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -103,7 +103,7 @@ class AppserviceServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -153,7 +153,7 @@ class ClientReaderServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -162,7 +162,7 @@ class ClientReaderServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -147,7 +147,7 @@ class EventCreatorServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -156,7 +156,7 @@ class EventCreatorServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -132,7 +132,7 @@ class FederationReaderServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -141,7 +141,7 @@ class FederationReaderServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -69,7 +69,7 @@ class FederationSenderSlaveStore(
self.federation_out_pos_startup = self._get_federation_out_pos(db_conn)
def _get_federation_out_pos(self, db_conn):
sql = "SELECT stream_id FROM federation_stream_position" " WHERE type = ?"
sql = "SELECT stream_id FROM federation_stream_position WHERE type = ?"
sql = self.database_engine.convert_param_style(sql)
txn = db_conn.cursor()
@@ -123,7 +123,7 @@ class FederationSenderServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -132,7 +132,7 @@ class FederationSenderServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -204,7 +204,7 @@ class FrontendProxyServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -213,7 +213,7 @@ class FrontendProxyServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -19,12 +19,13 @@ from __future__ import print_function
import gc
import logging
import math
import os
import resource
import sys
from six import iteritems
import psutil
from prometheus_client import Gauge
from twisted.application import service
@@ -282,7 +283,7 @@ class SynapseHomeServer(HomeServer):
reactor.addSystemEventTrigger("before", "shutdown", s.stopListening)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -291,7 +292,7 @@ class SynapseHomeServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
def run_startup_checks(self, db_conn, database_engine):
all_users_native = are_all_users_on_domain(
@@ -471,6 +472,87 @@ class SynapseService(service.Service):
return self._port.stopListening()
# Contains the list of processes we will be monitoring
# currently either 0 or 1
_stats_process = []
@defer.inlineCallbacks
def phone_stats_home(hs, stats, stats_process=_stats_process):
logger.info("Gathering stats for reporting")
now = int(hs.get_clock().time())
uptime = int(now - hs.start_time)
if uptime < 0:
uptime = 0
stats["homeserver"] = hs.config.server_name
stats["server_context"] = hs.config.server_context
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
version = sys.version_info
stats["python_version"] = "{}.{}.{}".format(
version.major, version.minor, version.micro
)
stats["total_users"] = yield hs.get_datastore().count_all_users()
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
for name, count in iteritems(daily_user_type_results):
stats["daily_user_type_" + name] = count
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
stats["monthly_active_users"] = yield hs.get_datastore().count_monthly_users()
stats["daily_active_rooms"] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in iteritems(r30_results):
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
stats["cache_factor"] = CACHE_SIZE_FACTOR
stats["event_cache_size"] = hs.config.event_cache_size
#
# Performance statistics
#
old = stats_process[0]
new = (now, resource.getrusage(resource.RUSAGE_SELF))
stats_process[0] = new
# Get RSS in bytes
stats["memory_rss"] = new[1].ru_maxrss
# Get CPU time in % of a single core, not % of all cores
used_cpu_time = (new[1].ru_utime + new[1].ru_stime) - (
old[1].ru_utime + old[1].ru_stime
)
if used_cpu_time == 0 or new[0] == old[0]:
stats["cpu_average"] = 0
else:
stats["cpu_average"] = math.floor(used_cpu_time / (new[0] - old[0]) * 100)
#
# Database version
#
stats["database_engine"] = hs.get_datastore().database_engine_name
stats["database_server_version"] = hs.get_datastore().get_server_version()
logger.info("Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats))
try:
yield hs.get_proxied_http_client().put_json(
hs.config.report_stats_endpoint, stats
)
except Exception as e:
logger.warning("Error reporting stats: %s", e)
def run(hs):
PROFILE_SYNAPSE = False
if PROFILE_SYNAPSE:
@@ -497,91 +579,19 @@ def run(hs):
reactor.run = profile(reactor.run)
clock = hs.get_clock()
start_time = clock.time()
stats = {}
# Contains the list of processes we will be monitoring
# currently either 0 or 1
stats_process = []
def performance_stats_init():
_stats_process.clear()
_stats_process.append(
(int(hs.get_clock().time()), resource.getrusage(resource.RUSAGE_SELF))
)
def start_phone_stats_home():
return run_as_background_process("phone_stats_home", phone_stats_home)
@defer.inlineCallbacks
def phone_stats_home():
logger.info("Gathering stats for reporting")
now = int(hs.get_clock().time())
uptime = int(now - start_time)
if uptime < 0:
uptime = 0
stats["homeserver"] = hs.config.server_name
stats["server_context"] = hs.config.server_context
stats["timestamp"] = now
stats["uptime_seconds"] = uptime
version = sys.version_info
stats["python_version"] = "{}.{}.{}".format(
version.major, version.minor, version.micro
return run_as_background_process(
"phone_stats_home", phone_stats_home, hs, stats
)
stats["total_users"] = yield hs.get_datastore().count_all_users()
total_nonbridged_users = yield hs.get_datastore().count_nonbridged_users()
stats["total_nonbridged_users"] = total_nonbridged_users
daily_user_type_results = yield hs.get_datastore().count_daily_user_type()
for name, count in iteritems(daily_user_type_results):
stats["daily_user_type_" + name] = count
room_count = yield hs.get_datastore().get_room_count()
stats["total_room_count"] = room_count
stats["daily_active_users"] = yield hs.get_datastore().count_daily_users()
stats["monthly_active_users"] = yield hs.get_datastore().count_monthly_users()
stats[
"daily_active_rooms"
] = yield hs.get_datastore().count_daily_active_rooms()
stats["daily_messages"] = yield hs.get_datastore().count_daily_messages()
r30_results = yield hs.get_datastore().count_r30_users()
for name, count in iteritems(r30_results):
stats["r30_users_" + name] = count
daily_sent_messages = yield hs.get_datastore().count_daily_sent_messages()
stats["daily_sent_messages"] = daily_sent_messages
stats["cache_factor"] = CACHE_SIZE_FACTOR
stats["event_cache_size"] = hs.config.event_cache_size
if len(stats_process) > 0:
stats["memory_rss"] = 0
stats["cpu_average"] = 0
for process in stats_process:
stats["memory_rss"] += process.memory_info().rss
stats["cpu_average"] += int(process.cpu_percent(interval=None))
stats["database_engine"] = hs.get_datastore().database_engine_name
stats["database_server_version"] = hs.get_datastore().get_server_version()
logger.info(
"Reporting stats to %s: %s" % (hs.config.report_stats_endpoint, stats)
)
try:
yield hs.get_simple_http_client().put_json(
hs.config.report_stats_endpoint, stats
)
except Exception as e:
logger.warn("Error reporting stats: %s", e)
def performance_stats_init():
try:
process = psutil.Process()
# Ensure we can fetch both, and make the initial request for cpu_percent
# so the next request will use this as the initial point.
process.memory_info().rss
process.cpu_percent(interval=None)
logger.info("report_stats can use psutil")
stats_process.append(process)
except (AttributeError):
logger.warning("Unable to read memory/cpu stats. Disabling reporting.")
def generate_user_daily_visit_stats():
return run_as_background_process(
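
As an aside, the rusage-based CPU figure computed in phone_stats_home above can be reproduced standalone (same arithmetic; the snapshots are (timestamp, rusage) tuples, and the resource module is Unix-only):

import math
import resource
import time

def cpu_average(old, new):
    # CPU time between two snapshots, as a percentage of a single core.
    used = (new[1].ru_utime + new[1].ru_stime) - (old[1].ru_utime + old[1].ru_stime)
    if used == 0 or new[0] == old[0]:
        return 0
    return math.floor(used / (new[0] - old[0]) * 100)

old = (int(time.time()), resource.getrusage(resource.RUSAGE_SELF))
sum(i * i for i in range(10 ** 6))  # burn a little CPU
new = (int(time.time()), resource.getrusage(resource.RUSAGE_SELF))
print(cpu_average(old, new))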

View File

@@ -120,7 +120,7 @@ class MediaRepositoryServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -129,7 +129,7 @@ class MediaRepositoryServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -114,7 +114,7 @@ class PusherServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -123,7 +123,7 @@ class PusherServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -326,7 +326,7 @@ class SynchrotronServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -335,7 +335,7 @@ class SynchrotronServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -150,7 +150,7 @@ class UserDirectoryServer(HomeServer):
)
elif listener["type"] == "metrics":
if not self.get_config().enable_metrics:
logger.warn(
logger.warning(
(
"Metrics listener configured, but "
"enable_metrics is not True!"
@@ -159,7 +159,7 @@ class UserDirectoryServer(HomeServer):
else:
_base.listen_metrics(listener["bind_addresses"], listener["port"])
else:
logger.warn("Unrecognized listener type: %s", listener["type"])
logger.warning("Unrecognized listener type: %s", listener["type"])
self.get_tcp_replication().start_replication(self)

View File

@@ -94,7 +94,9 @@ class ApplicationService(object):
ip_range_whitelist=None,
):
self.token = token
self.url = url
self.url = (
url.rstrip("/") if isinstance(url, str) else None
) # url must not end with a slash
self.hs_token = hs_token
self.sender = sender
self.server_name = hostname
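
The URL normalisation above in isolation (a sketch; None is kept for appservices without a push URL):

def normalise_as_url(url):
    # The URL must not end with a slash, or joined request paths would
    # contain "//".
    return url.rstrip("/") if isinstance(url, str) else None

assert normalise_as_url("https://as.example.com/") == "https://as.example.com"
assert normalise_as_url(None) is None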

View File

@@ -185,7 +185,7 @@ class ApplicationServiceApi(SimpleHttpClient):
if not _is_valid_3pe_metadata(info):
logger.warning(
"query_3pe_protocol to %s did not return a" " valid result", uri
"query_3pe_protocol to %s did not return a valid result", uri
)
return None

View File

@@ -134,7 +134,7 @@ def _load_appservice(hostname, as_info, config_filename):
for regex_obj in as_info["namespaces"][ns]:
if not isinstance(regex_obj, dict):
raise ValueError(
"Expected namespace entry in %s to be an object," " but got %s",
"Expected namespace entry in %s to be an object, but got %s",
ns,
regex_obj,
)

View File

@@ -35,11 +35,11 @@ class CaptchaConfig(Config):
## Captcha ##
# See docs/CAPTCHA_SETUP for full details of configuring this.
# This Home Server's ReCAPTCHA public key.
# This homeserver's ReCAPTCHA public key.
#
#recaptcha_public_key: "YOUR_PUBLIC_KEY"
# This Home Server's ReCAPTCHA private key.
# This homeserver's ReCAPTCHA private key.
#
#recaptcha_private_key: "YOUR_PRIVATE_KEY"

View File

@@ -305,7 +305,7 @@ class EmailConfig(Config):
# smtp_user: "exampleusername"
# smtp_pass: "examplepassword"
# require_transport_security: false
# notif_from: "Your Friendly %(app)s Home Server <noreply@example.com>"
# notif_from: "Your Friendly %(app)s homeserver <noreply@example.com>"
# app_name: Matrix
#
# # Enable email notifications by default

View File

@@ -125,7 +125,7 @@ class KeyConfig(Config):
# if neither trusted_key_servers nor perspectives are given, use the default.
if "perspectives" not in config and "trusted_key_servers" not in config:
logger.warn(TRUSTED_KEY_SERVER_NOT_CONFIGURED_WARN)
logger.warning(TRUSTED_KEY_SERVER_NOT_CONFIGURED_WARN)
key_servers = [{"server_name": "matrix.org"}]
else:
key_servers = config.get("trusted_key_servers", [])
@@ -156,7 +156,7 @@ class KeyConfig(Config):
if not self.macaroon_secret_key:
# Unfortunately, there are people out there that don't have this
# set. Lets just be "nice" and derive one from their secret key.
logger.warn("Config is missing macaroon_secret_key")
logger.warning("Config is missing macaroon_secret_key")
seed = bytes(self.signing_key[0])
self.macaroon_secret_key = hashlib.sha256(seed).digest()

View File

@@ -182,7 +182,7 @@ def _reload_stdlib_logging(*args, log_config=None):
logger = logging.getLogger("")
if not log_config:
logger.warn("Reloaded a blank config?")
logger.warning("Reloaded a blank config?")
logging.config.dictConfig(log_config)
@@ -234,8 +234,8 @@ def setup_logging(
# make sure that the first thing we log is a thing we can grep backwards
# for
logging.warn("***** STARTING SERVER *****")
logging.warn("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.warning("***** STARTING SERVER *****")
logging.warning("Server %s version %s", sys.argv[0], get_version_string(synapse))
logging.info("Server hostname: %s", config.server_name)
return logger

View File

@@ -300,7 +300,7 @@ class RegistrationConfig(Config):
# If a delegate is specified, the config option public_baseurl must also be filled out.
#
account_threepid_delegates:
#email: https://example.com # Delegate email sending to example.org
#email: https://example.com # Delegate email sending to example.com
#msisdn: http://localhost:8090 # Delegate SMS sending to this local process
# Users who register on this homeserver will automatically be joined

View File

@@ -170,7 +170,7 @@ class _RoomDirectoryRule(object):
self.action = action
else:
raise ConfigError(
"%s rules can only have action of 'allow'" " or 'deny'" % (option_name,)
"%s rules can only have action of 'allow' or 'deny'" % (option_name,)
)
self._alias_matches_all = alias == "*"

View File

@@ -41,7 +41,7 @@ logger = logging.Logger(__name__)
# in the list.
DEFAULT_BIND_ADDRESSES = ["::", "0.0.0.0"]
DEFAULT_ROOM_VERSION = "4"
DEFAULT_ROOM_VERSION = "5"
ROOM_COMPLEXITY_TOO_GREAT = (
"Your homeserver is unable to join rooms this large or complex. "
@@ -223,7 +223,7 @@ class ServerConfig(Config):
self.federation_ip_range_blacklist.update(["0.0.0.0", "::"])
except Exception as e:
raise ConfigError(
"Invalid range(s) provided in " "federation_ip_range_blacklist: %s" % e
"Invalid range(s) provided in federation_ip_range_blacklist: %s" % e
)
if self.public_baseurl is not None:
@@ -721,7 +721,7 @@ class ServerConfig(Config):
# Used by phonehome stats to group together related servers.
#server_context: context
# Resource-constrained Homeserver Settings
# Resource-constrained homeserver Settings
#
# If limit_remote_rooms.enabled is True, the room complexity will be
# checked before a user joins a new remote room. If it is above
@@ -781,20 +781,20 @@ class ServerConfig(Config):
"--daemonize",
action="store_true",
default=None,
help="Daemonize the home server",
help="Daemonize the homeserver",
)
server_group.add_argument(
"--print-pidfile",
action="store_true",
default=None,
help="Print the path to the pidfile just" " before daemonizing",
help="Print the path to the pidfile just before daemonizing",
)
server_group.add_argument(
"--manhole",
metavar="PORT",
dest="manhole",
type=int,
help="Turn on the twisted telnet manhole" " service on the given port.",
help="Turn on the twisted telnet manhole service on the given port.",
)

View File

@@ -125,9 +125,11 @@ def compute_event_signature(event_dict, signature_name, signing_key):
redact_json = prune_event_dict(event_dict)
redact_json.pop("age_ts", None)
redact_json.pop("unsigned", None)
logger.debug("Signing event: %s", encode_canonical_json(redact_json))
if logger.isEnabledFor(logging.DEBUG):
logger.debug("Signing event: %s", encode_canonical_json(redact_json))
redact_json = sign_json(redact_json, signature_name, signing_key)
logger.debug("Signed event: %s", encode_canonical_json(redact_json))
if logger.isEnabledFor(logging.DEBUG):
logger.debug("Signed event: %s", encode_canonical_json(redact_json))
return redact_json["signatures"]
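
The isEnabledFor guard above exists to skip the canonical-JSON encoding entirely unless DEBUG logging is on; a generic sketch of the idiom (json.dumps standing in for encode_canonical_json):

import json
import logging

logger = logging.getLogger(__name__)

def debug_dump(event_dict):
    # Without the guard, the dump would be computed on every call and then
    # discarded whenever the effective log level is INFO or higher.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("Signing event: %s", json.dumps(event_dict, sort_keys=True))

debug_dump({"type": "m.room.message"})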

View File

@@ -77,7 +77,7 @@ def check(room_version, event, auth_events, do_sig_check=True, do_size_check=Tru
if auth_events is None:
# Oh, we don't know what the state of the room was, so we
# are trusting that this is allowed (at least for now)
logger.warn("Trusting event: %s", event.event_id)
logger.warning("Trusting event: %s", event.event_id)
return
if event.type == EventTypes.Create:

View File

@@ -12,104 +12,125 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import Dict, Optional, Tuple, Union
from six import iteritems
import attr
from frozendict import frozendict
from twisted.internet import defer
from synapse.appservice import ApplicationService
from synapse.logging.context import make_deferred_yieldable, run_in_background
class EventContext(object):
@attr.s(slots=True)
class EventContext:
"""
Holds information relevant to persisting an event
Attributes:
state_group (int|None): state group id, if the state has been stored
as a state group. This is usually only None if e.g. the event is
an outlier.
rejected (bool|str): A rejection reason if the event was rejected, else
False
rejected: A rejection reason if the event was rejected, else False
push_actions (list[(str, list[object])]): list of (user_id, actions)
tuples
_state_group: The ID of the state group for this event. Note that state events
are persisted with a state group which includes the new event, so this is
effectively the state *after* the event in question.
prev_group (int): Previously persisted state group. ``None`` for an
outlier.
delta_ids (dict[(str, str), str]): Delta from ``prev_group``.
(type, state_key) -> event_id. ``None`` for an outlier.
For a *rejected* state event, where the state of the rejected event is
ignored, this state_group should never make it into the
event_to_state_groups table. Indeed, inspecting this value for a rejected
state event is almost certainly incorrect.
prev_state_events (?): XXX: is this ever set to anything other than
the empty list?
For an outlier, where we don't have the state at the event, this will be
None.
Note that this is a private attribute: it should be accessed via
the ``state_group`` property.
state_group_before_event: The ID of the state group representing the state
of the room before this event.
If this is a non-state event, this will be the same as ``state_group``. If
it's a state event, it will be the same as ``prev_group``.
If ``state_group`` is None (ie, the event is an outlier),
``state_group_before_event`` will always also be ``None``.
prev_group: If it is known, ``state_group``'s prev_group. Note that this being
None does not necessarily mean that ``state_group`` does not have
a prev_group!
If the event is a state event, this is normally the same as ``prev_group``.
If ``state_group`` is None (ie, the event is an outlier), ``prev_group``
will always also be ``None``.
Note that this *not* (necessarily) the state group associated with
``_prev_state_ids``.
delta_ids: If ``prev_group`` is not None, the state delta between ``prev_group``
and ``state_group``.
app_service: If this event is being sent by a (local) application service, that
app service.
_current_state_ids: The room state map, including this event - ie, the state
in ``state_group``.
_current_state_ids (dict[(str, str), str]|None):
The current state map including the current event. None if outlier
or we haven't fetched the state from DB yet.
(type, state_key) -> event_id
_prev_state_ids (dict[(str, str), str]|None):
The current state map excluding the current event. None if outlier
or we haven't fetched the state from DB yet.
FIXME: what is this for an outlier? it seems ill-defined. It seems like
it could be either {}, or the state we were given by the remote
server, depending on $THINGS
Note that this is a private attribute: it should be accessed via
``get_current_state_ids``. _AsyncEventContext impl calculates this
on-demand: it will be None until that happens.
_prev_state_ids: The room state map, excluding this event - ie, the state
in ``state_group_before_event``. For a non-state
event, this will be the same as _current_state_ids.
Note that it is a completely different thing to prev_group!
(type, state_key) -> event_id
_fetching_state_deferred (Deferred|None): Resolves when *_state_ids have
been calculated. None if we haven't started calculating yet
FIXME: again, what is this for an outlier?
_event_type (str): The type of the event the context is associated with.
Only set when state has not been fetched yet.
_event_state_key (str|None): The state_key of the event the context is
associated with. Only set when state has not been fetched yet.
_prev_state_id (str|None): If the event associated with the context is
a state event, then `_prev_state_id` is the event_id of the state
that was replaced.
Only set when state has not been fetched yet.
As with _current_state_ids, this is a private attribute. It should be
accessed via get_prev_state_ids.
"""
__slots__ = [
"state_group",
"rejected",
"prev_group",
"delta_ids",
"prev_state_events",
"app_service",
"_current_state_ids",
"_prev_state_ids",
"_prev_state_id",
"_event_type",
"_event_state_key",
"_fetching_state_deferred",
]
rejected = attr.ib(default=False, type=Union[bool, str])
_state_group = attr.ib(default=None, type=Optional[int])
state_group_before_event = attr.ib(default=None, type=Optional[int])
prev_group = attr.ib(default=None, type=Optional[int])
delta_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
app_service = attr.ib(default=None, type=Optional[ApplicationService])
def __init__(self):
self.prev_state_events = []
self.rejected = False
self.app_service = None
_current_state_ids = attr.ib(
default=None, type=Optional[Dict[Tuple[str, str], str]]
)
_prev_state_ids = attr.ib(default=None, type=Optional[Dict[Tuple[str, str], str]])
@staticmethod
def with_state(
state_group, current_state_ids, prev_state_ids, prev_group=None, delta_ids=None
state_group,
state_group_before_event,
current_state_ids,
prev_state_ids,
prev_group=None,
delta_ids=None,
):
context = EventContext()
# The current state including the current event
context._current_state_ids = current_state_ids
# The current state excluding the current event
context._prev_state_ids = prev_state_ids
context.state_group = state_group
context._prev_state_id = None
context._event_type = None
context._event_state_key = None
context._fetching_state_deferred = defer.succeed(None)
# A previously persisted state group and a delta between that
# and this state.
context.prev_group = prev_group
context.delta_ids = delta_ids
return context
return EventContext(
current_state_ids=current_state_ids,
prev_state_ids=prev_state_ids,
state_group=state_group,
state_group_before_event=state_group_before_event,
prev_group=prev_group,
delta_ids=delta_ids,
)
@defer.inlineCallbacks
def serialize(self, event, store):
@@ -137,11 +158,11 @@ class EventContext(object):
"prev_state_id": prev_state_id,
"event_type": event.type,
"event_state_key": event.state_key if event.is_state() else None,
"state_group": self.state_group,
"state_group": self._state_group,
"state_group_before_event": self.state_group_before_event,
"rejected": self.rejected,
"prev_group": self.prev_group,
"delta_ids": _encode_state_dict(self.delta_ids),
"prev_state_events": self.prev_state_events,
"app_service_id": self.app_service.id if self.app_service else None,
}
@@ -157,24 +178,18 @@ class EventContext(object):
Returns:
EventContext
"""
context = EventContext()
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
context._prev_state_id = input["prev_state_id"]
context._event_type = input["event_type"]
context._event_state_key = input["event_state_key"]
context._current_state_ids = None
context._prev_state_ids = None
context._fetching_state_deferred = None
context.state_group = input["state_group"]
context.prev_group = input["prev_group"]
context.delta_ids = _decode_state_dict(input["delta_ids"])
context.rejected = input["rejected"]
context.prev_state_events = input["prev_state_events"]
context = _AsyncEventContextImpl(
# We use the state_group and prev_state_id stuff to pull the
# current_state_ids out of the DB and construct prev_state_ids.
prev_state_id=input["prev_state_id"],
event_type=input["event_type"],
event_state_key=input["event_state_key"],
state_group=input["state_group"],
state_group_before_event=input["state_group_before_event"],
prev_group=input["prev_group"],
delta_ids=_decode_state_dict(input["delta_ids"]),
rejected=input["rejected"],
)
app_service_id = input["app_service_id"]
if app_service_id:
@@ -182,29 +197,52 @@ class EventContext(object):
return context
@property
def state_group(self) -> Optional[int]:
"""The ID of the state group for this event.
Note that state events are persisted with a state group which includes the new
event, so this is effectively the state *after* the event in question.
For an outlier, where we don't have the state at the event, this will be None.
It is an error to access this for a rejected event, since rejected state should
not make it into the room state. Accessing this property will raise an exception
if ``rejected`` is set.
"""
if self.rejected:
raise RuntimeError("Attempt to access state_group of rejected event")
return self._state_group
@defer.inlineCallbacks
def get_current_state_ids(self, store):
"""Gets the current state IDs
"""
Gets the room state map, including this event - ie, the state in ``state_group``
It is an error to access this for a rejected event, since rejected state should
not make it into the room state. This method will raise an exception if
``rejected`` is set.
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
is None, which happens when the associated event is an outlier.
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
if self.rejected:
raise RuntimeError("Attempt to access state_ids of rejected event")
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store
)
yield make_deferred_yieldable(self._fetching_state_deferred)
yield self._ensure_fetched(store)
return self._current_state_ids
@defer.inlineCallbacks
def get_prev_state_ids(self, store):
"""Gets the prev state IDs
"""
Gets the room state map, excluding this event.
For a non-state event, this will be the same as get_current_state_ids().
Returns:
Deferred[dict[(str, str), str]|None]: Returns None if state_group
@@ -212,27 +250,64 @@ class EventContext(object):
Maps a (type, state_key) to the event ID of the state event matching
this tuple.
"""
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store
)
yield make_deferred_yieldable(self._fetching_state_deferred)
yield self._ensure_fetched(store)
return self._prev_state_ids
def get_cached_current_state_ids(self):
"""Gets the current state IDs if we have them already cached.
It is an error to access this for a rejected event, since rejected state should
not make it into the room state. This method will raise an exception if
``rejected`` is set.
Returns:
dict[(str, str), str]|None: Returns None if we haven't cached the
state or if state_group is None, which happens when the associated
event is an outlier.
"""
if self.rejected:
raise RuntimeError("Attempt to access state_ids of rejected event")
return self._current_state_ids
def _ensure_fetched(self, store):
return defer.succeed(None)
@attr.s(slots=True)
class _AsyncEventContextImpl(EventContext):
"""
An implementation of EventContext which fetches _current_state_ids and
_prev_state_ids from the database on demand.
Attributes:
_fetching_state_deferred (Deferred|None): Resolves when *_state_ids have
been calculated. None if we haven't started calculating yet
_event_type (str): The type of the event the context is associated with.
_event_state_key (str): The state_key of the event the context is
associated with.
_prev_state_id (str|None): If the event associated with the context is
a state event, then `_prev_state_id` is the event_id of the state
that was replaced.
"""
_prev_state_id = attr.ib(default=None)
_event_type = attr.ib(default=None)
_event_state_key = attr.ib(default=None)
_fetching_state_deferred = attr.ib(default=None)
def _ensure_fetched(self, store):
if not self._fetching_state_deferred:
self._fetching_state_deferred = run_in_background(
self._fill_out_state, store
)
return make_deferred_yieldable(self._fetching_state_deferred)
@defer.inlineCallbacks
def _fill_out_state(self, store):
"""Called to populate the _current_state_ids and _prev_state_ids
@@ -250,27 +325,6 @@ class EventContext(object):
else:
self._prev_state_ids = self._current_state_ids
@defer.inlineCallbacks
def update_state(
self, state_group, prev_state_ids, current_state_ids, prev_group, delta_ids
):
"""Replace the state in the context
"""
# We need to make sure we wait for any ongoing fetching of state
# to complete so that the updated state doesn't get clobbered
if self._fetching_state_deferred:
yield make_deferred_yieldable(self._fetching_state_deferred)
self.state_group = state_group
self._prev_state_ids = prev_state_ids
self.prev_group = prev_group
self._current_state_ids = current_state_ids
self.delta_ids = delta_ids
# We need to ensure that we've marked the state as having been fetched
self._fetching_state_deferred = defer.succeed(None)
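
The shape of the refactoring above is a common attrs idiom: a slotted class holding a private attribute that is only exposed through a guarded property. A minimal sketch (behaviour simplified from the diff):

from typing import Optional, Union

import attr

@attr.s(slots=True)
class Context:
    rejected = attr.ib(default=False, type=Union[bool, str])
    _state_group = attr.ib(default=None, type=Optional[int])

    @property
    def state_group(self) -> Optional[int]:
        # Rejected state must never be read back as room state.
        if self.rejected:
            raise RuntimeError("Attempt to access state_group of rejected event")
        return self._state_group

ctx = Context(state_group=42)
assert ctx.state_group == 42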
def _encode_state_dict(state_dict):
"""Since dicts of (type, state_key) -> event_id cannot be serialized in

View File

@@ -1,5 +1,6 @@
# -*- coding: utf-8 -*-
# Copyright 2017 New Vector Ltd
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
@@ -13,6 +14,10 @@
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
from synapse.spam_checker_api import SpamCheckerApi
class SpamChecker(object):
def __init__(self, hs):
@@ -26,7 +31,14 @@ class SpamChecker(object):
pass
if module is not None:
self.spam_checker = module(config=config)
# Older spam checkers don't accept the `api` argument, so we
# try and detect support.
spam_args = inspect.getfullargspec(module)
if "api" in spam_args.args:
api = SpamCheckerApi(hs)
self.spam_checker = module(config=config, api=api)
else:
self.spam_checker = module(config=config)
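
The signature sniffing above can be demonstrated standalone (both checker classes here are hypothetical; real modules come from the homeserver config):

import inspect

class LegacyChecker:
    def __init__(self, config):
        self.config = config

class ModernChecker:
    def __init__(self, config, api):
        self.config = config
        self.api = api

def instantiate(module, config, api):
    # Only pass `api` if the checker's constructor declares it.
    if "api" in inspect.getfullargspec(module).args:
        return module(config=config, api=api)
    return module(config=config)

assert instantiate(LegacyChecker, {}, object()).config == {}
assert instantiate(ModernChecker, {}, object()).api is not None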
def check_event_for_spam(self, event):
"""Checks if a given event is considered "spammy" by this server.

View File

@@ -102,7 +102,7 @@ class FederationBase(object):
pass
if not res:
logger.warn(
logger.warning(
"Failed to find copy of %s with valid signature", pdu.event_id
)
@@ -173,7 +173,7 @@ class FederationBase(object):
return redacted_event
if self.spam_checker.check_event_for_spam(pdu):
logger.warn(
logger.warning(
"Event contains spam, redacting %s: %s",
pdu.event_id,
pdu.get_pdu_json(),
@@ -185,7 +185,7 @@ class FederationBase(object):
def errback(failure, pdu):
failure.trap(SynapseError)
with PreserveLoggingContext(ctx):
logger.warn(
logger.warning(
"Signature check failed for %s: %s",
pdu.event_id,
failure.getErrorMessage(),

View File

@@ -177,7 +177,7 @@ class FederationClient(FederationBase):
given destination server.
Args:
dest (str): The remote home server to ask.
dest (str): The remote homeserver to ask.
room_id (str): The room_id to backfill.
limit (int): The maximum number of PDUs to return.
extremities (list): List of PDU id and origins of the first pdus
@@ -196,7 +196,7 @@ class FederationClient(FederationBase):
dest, room_id, extremities, limit
)
logger.debug("backfill transaction_data=%s", repr(transaction_data))
logger.debug("backfill transaction_data=%r", transaction_data)
room_version = yield self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
@@ -227,7 +227,7 @@ class FederationClient(FederationBase):
one succeeds.
Args:
destinations (list): Which home servers to query
destinations (list): Which homeservers to query
event_id (str): event to fetch
room_version (str): version of the room
outlier (bool): Indicates whether the PDU is an `outlier`, i.e. if
@@ -312,7 +312,7 @@ class FederationClient(FederationBase):
@defer.inlineCallbacks
@log_function
def get_state_for_room(self, destination, room_id, event_id):
"""Requests all of the room state at a given event from a remote home server.
"""Requests all of the room state at a given event from a remote homeserver.
Args:
destination (str): The remote homeserver to query for the state.
@@ -522,12 +522,12 @@ class FederationClient(FederationBase):
res = yield callback(destination)
return res
except InvalidResponseError as e:
logger.warn("Failed to %s via %s: %s", description, destination, e)
logger.warning("Failed to %s via %s: %s", description, destination, e)
except HttpResponseException as e:
if not 500 <= e.code < 600:
raise e.to_synapse_error()
else:
logger.warn(
logger.warning(
"Failed to %s via %s: %i %s",
description,
destination,
@@ -535,7 +535,9 @@ class FederationClient(FederationBase):
e.args[0],
)
except Exception:
logger.warn("Failed to %s via %s", description, destination, exc_info=1)
logger.warning(
"Failed to %s via %s", description, destination, exc_info=1
)
raise SynapseError(502, "Failed to %s via any server" % (description,))
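
The surrounding loop is a try-each-destination fallback; a minimal sketch of the pattern (written with asyncio for brevity, whereas the original uses inlineCallbacks, and without the original's special-casing of non-5xx HTTP errors):

import asyncio
import logging

logger = logging.getLogger(__name__)

async def first_success(destinations, description, callback):
    # Try each candidate homeserver in turn, logging failures as warnings,
    # and give up with one error once every destination has failed.
    for destination in destinations:
        try:
            return await callback(destination)
        except Exception:
            logger.warning("Failed to %s via %s", description, destination, exc_info=1)
    raise RuntimeError("Failed to %s via any server" % (description,))

async def demo():
    async def callback(dest):
        if dest != "good.example.com":
            raise ValueError("unreachable")
        return "ok"

    return await first_success(
        ["bad.example.com", "good.example.com"], "fetch thing", callback
    )

print(asyncio.run(demo()))  # "ok" after one logged failure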
@@ -553,7 +555,7 @@ class FederationClient(FederationBase):
Note that this does not append any events to any graphs.
Args:
destinations (str): Candidate homeservers which are probably
destinations (Iterable[str]): Candidate homeservers which are probably
participating in the room.
room_id (str): The room in which the event will happen.
user_id (str): The user whose membership is being evented.

View File

@@ -21,7 +21,6 @@ from six import iteritems
from canonicaljson import json
from prometheus_client import Counter
from twisted.internet import defer
from twisted.internet.abstract import isIPAddress
from twisted.python import failure
@@ -86,14 +85,12 @@ class FederationServer(FederationBase):
# come in waves.
self._state_resp_cache = ResponseCache(hs, "state_resp", timeout_ms=30000)
@defer.inlineCallbacks
@log_function
def on_backfill_request(self, origin, room_id, versions, limit):
with (yield self._server_linearizer.queue((origin, room_id))):
async def on_backfill_request(self, origin, room_id, versions, limit):
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
pdus = yield self.handler.on_backfill_request(
pdus = await self.handler.on_backfill_request(
origin, room_id, versions, limit
)
@@ -101,9 +98,7 @@ class FederationServer(FederationBase):
return 200, res
@defer.inlineCallbacks
@log_function
def on_incoming_transaction(self, origin, transaction_data):
async def on_incoming_transaction(self, origin, transaction_data):
# keep this as early as possible to make the calculated origin ts as
# accurate as possible.
request_time = self._clock.time_msec()
@@ -118,18 +113,17 @@ class FederationServer(FederationBase):
# use a linearizer to ensure that we don't process the same transaction
# multiple times in parallel.
with (
yield self._transaction_linearizer.queue(
await self._transaction_linearizer.queue(
(origin, transaction.transaction_id)
)
):
result = yield self._handle_incoming_transaction(
result = await self._handle_incoming_transaction(
origin, transaction, request_time
)
return result
@defer.inlineCallbacks
def _handle_incoming_transaction(self, origin, transaction, request_time):
async def _handle_incoming_transaction(self, origin, transaction, request_time):
""" Process an incoming transaction and return the HTTP response
Args:
@@ -140,7 +134,7 @@ class FederationServer(FederationBase):
Returns:
Deferred[(int, object)]: http response code and body
"""
response = yield self.transaction_actions.have_responded(origin, transaction)
response = await self.transaction_actions.have_responded(origin, transaction)
if response:
logger.debug(
@@ -151,7 +145,7 @@ class FederationServer(FederationBase):
logger.debug("[%s] Transaction is new", transaction.transaction_id)
# Reject if PDU count > 50 and EDU count > 100
# Reject if PDU count > 50 or EDU count > 100
if len(transaction.pdus) > 50 or (
hasattr(transaction, "edus") and len(transaction.edus) > 100
):
@@ -159,7 +153,7 @@ class FederationServer(FederationBase):
logger.info("Transaction PDU or EDU count too large. Returning 400")
response = {}
yield self.transaction_actions.set_response(
await self.transaction_actions.set_response(
origin, transaction, 400, response
)
return 400, response
@@ -195,7 +189,7 @@ class FederationServer(FederationBase):
continue
try:
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
except NotFoundError:
logger.info("Ignoring PDU for unknown room_id: %s", room_id)
continue
@@ -221,13 +215,12 @@ class FederationServer(FederationBase):
# require callouts to other servers to fetch missing events), but
# impose a limit to avoid going too crazy with ram/cpu.
@defer.inlineCallbacks
def process_pdus_for_room(room_id):
async def process_pdus_for_room(room_id):
logger.debug("Processing PDUs for %s", room_id)
try:
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
except AuthError as e:
logger.warn("Ignoring PDUs for room %s from banned server", room_id)
logger.warning("Ignoring PDUs for room %s from banned server", room_id)
for pdu in pdus_by_room[room_id]:
event_id = pdu.event_id
pdu_results[event_id] = e.error_dict()
@@ -237,10 +230,10 @@ class FederationServer(FederationBase):
event_id = pdu.event_id
with nested_logging_context(event_id):
try:
yield self._handle_received_pdu(origin, pdu)
await self._handle_received_pdu(origin, pdu)
pdu_results[event_id] = {}
except FederationError as e:
logger.warn("Error handling PDU %s: %s", event_id, e)
logger.warning("Error handling PDU %s: %s", event_id, e)
pdu_results[event_id] = {"error": str(e)}
except Exception as e:
f = failure.Failure()
@@ -251,36 +244,33 @@ class FederationServer(FederationBase):
exc_info=(f.type, f.value, f.getTracebackObject()),
)
yield concurrently_execute(
await concurrently_execute(
process_pdus_for_room, pdus_by_room.keys(), TRANSACTION_CONCURRENCY_LIMIT
)
if hasattr(transaction, "edus"):
for edu in (Edu(**x) for x in transaction.edus):
yield self.received_edu(origin, edu.edu_type, edu.content)
await self.received_edu(origin, edu.edu_type, edu.content)
response = {"pdus": pdu_results}
logger.debug("Returning: %s", str(response))
yield self.transaction_actions.set_response(origin, transaction, 200, response)
await self.transaction_actions.set_response(origin, transaction, 200, response)
return 200, response
@defer.inlineCallbacks
def received_edu(self, origin, edu_type, content):
async def received_edu(self, origin, edu_type, content):
received_edus_counter.inc()
yield self.registry.on_edu(edu_type, origin, content)
await self.registry.on_edu(edu_type, origin, content)
@defer.inlineCallbacks
@log_function
def on_context_state_request(self, origin, room_id, event_id):
async def on_context_state_request(self, origin, room_id, event_id):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
in_room = await self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
@@ -289,8 +279,8 @@ class FederationServer(FederationBase):
# in the cache so we could return it without waiting for the linearizer
# - but that's non-trivial to get right, and anyway somewhat defeats
# the point of the linearizer.
with (yield self._server_linearizer.queue((origin, room_id))):
resp = yield self._state_resp_cache.wrap(
with (await self._server_linearizer.queue((origin, room_id))):
resp = await self._state_resp_cache.wrap(
(room_id, event_id),
self._on_context_state_request_compute,
room_id,
@@ -299,65 +289,60 @@ class FederationServer(FederationBase):
return 200, resp
@defer.inlineCallbacks
def on_state_ids_request(self, origin, room_id, event_id):
async def on_state_ids_request(self, origin, room_id, event_id):
if not event_id:
raise NotImplementedError("Specify an event")
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
in_room = yield self.auth.check_host_in_room(room_id, origin)
in_room = await self.auth.check_host_in_room(room_id, origin)
if not in_room:
raise AuthError(403, "Host not in room.")
state_ids = yield self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = yield self.store.get_auth_chain_ids(state_ids)
state_ids = await self.handler.get_state_ids_for_pdu(room_id, event_id)
auth_chain_ids = await self.store.get_auth_chain_ids(state_ids)
return 200, {"pdu_ids": state_ids, "auth_chain_ids": auth_chain_ids}
@defer.inlineCallbacks
def _on_context_state_request_compute(self, room_id, event_id):
pdus = yield self.handler.get_state_for_pdu(room_id, event_id)
auth_chain = yield self.store.get_auth_chain([pdu.event_id for pdu in pdus])
async def _on_context_state_request_compute(self, room_id, event_id):
pdus = await self.handler.get_state_for_pdu(room_id, event_id)
auth_chain = await self.store.get_auth_chain([pdu.event_id for pdu in pdus])
return {
"pdus": [pdu.get_pdu_json() for pdu in pdus],
"auth_chain": [pdu.get_pdu_json() for pdu in auth_chain],
}
@defer.inlineCallbacks
@log_function
def on_pdu_request(self, origin, event_id):
pdu = yield self.handler.get_persisted_pdu(origin, event_id)
async def on_pdu_request(self, origin, event_id):
pdu = await self.handler.get_persisted_pdu(origin, event_id)
if pdu:
return 200, self._transaction_from_pdus([pdu]).get_dict()
else:
return 404, ""
@defer.inlineCallbacks
def on_query_request(self, query_type, args):
async def on_query_request(self, query_type, args):
received_queries_counter.labels(query_type).inc()
resp = yield self.registry.on_query(query_type, args)
resp = await self.registry.on_query(query_type, args)
return 200, resp
@defer.inlineCallbacks
def on_make_join_request(self, origin, room_id, user_id, supported_versions):
async def on_make_join_request(self, origin, room_id, user_id, supported_versions):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
if room_version not in supported_versions:
logger.warn("Room version %s not in %s", room_version, supported_versions)
logger.warning(
"Room version %s not in %s", room_version, supported_versions
)
raise IncompatibleRoomVersionError(room_version=room_version)
pdu = yield self.handler.on_make_join_request(origin, room_id, user_id)
pdu = await self.handler.on_make_join_request(origin, room_id, user_id)
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@defer.inlineCallbacks
def on_invite_request(self, origin, content, room_version):
async def on_invite_request(self, origin, content, room_version):
if room_version not in KNOWN_ROOM_VERSIONS:
raise SynapseError(
400,
@@ -369,28 +354,27 @@ class FederationServer(FederationBase):
pdu = event_from_pdu_json(content, format_ver)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
pdu = yield self._check_sigs_and_hash(room_version, pdu)
ret_pdu = yield self.handler.on_invite_request(origin, pdu)
await self.check_server_matches_acl(origin_host, pdu.room_id)
pdu = await self._check_sigs_and_hash(room_version, pdu)
ret_pdu = await self.handler.on_invite_request(origin, pdu)
time_now = self._clock.time_msec()
return {"event": ret_pdu.get_pdu_json(time_now)}
@defer.inlineCallbacks
def on_send_join_request(self, origin, content, room_id):
async def on_send_join_request(self, origin, content, room_id):
logger.debug("on_send_join_request: content: %s", content)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(content, format_ver)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_join_request: pdu sigs: %s", pdu.signatures)
pdu = yield self._check_sigs_and_hash(room_version, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
res_pdus = yield self.handler.on_send_join_request(origin, pdu)
res_pdus = await self.handler.on_send_join_request(origin, pdu)
time_now = self._clock.time_msec()
return (
200,
@@ -402,48 +386,44 @@ class FederationServer(FederationBase):
},
)
@defer.inlineCallbacks
def on_make_leave_request(self, origin, room_id, user_id):
async def on_make_leave_request(self, origin, room_id, user_id):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
pdu = yield self.handler.on_make_leave_request(origin, room_id, user_id)
await self.check_server_matches_acl(origin_host, room_id)
pdu = await self.handler.on_make_leave_request(origin, room_id, user_id)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
time_now = self._clock.time_msec()
return {"event": pdu.get_pdu_json(time_now), "room_version": room_version}
@defer.inlineCallbacks
def on_send_leave_request(self, origin, content, room_id):
async def on_send_leave_request(self, origin, content, room_id):
logger.debug("on_send_leave_request: content: %s", content)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
pdu = event_from_pdu_json(content, format_ver)
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, pdu.room_id)
await self.check_server_matches_acl(origin_host, pdu.room_id)
logger.debug("on_send_leave_request: pdu sigs: %s", pdu.signatures)
pdu = yield self._check_sigs_and_hash(room_version, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
yield self.handler.on_send_leave_request(origin, pdu)
await self.handler.on_send_leave_request(origin, pdu)
return 200, {}
@defer.inlineCallbacks
def on_event_auth(self, origin, room_id, event_id):
with (yield self._server_linearizer.queue((origin, room_id))):
async def on_event_auth(self, origin, room_id, event_id):
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
time_now = self._clock.time_msec()
auth_pdus = yield self.handler.on_event_auth(event_id)
auth_pdus = await self.handler.on_event_auth(event_id)
res = {"auth_chain": [a.get_pdu_json(time_now) for a in auth_pdus]}
return 200, res
@defer.inlineCallbacks
def on_query_auth_request(self, origin, content, room_id, event_id):
async def on_query_auth_request(self, origin, content, room_id, event_id):
"""
Content is a dict with keys::
auth_chain (list): A list of events that give the auth chain.
@@ -462,22 +442,22 @@ class FederationServer(FederationBase):
Returns:
Deferred: Results in `dict` with the same format as `content`
"""
with (yield self._server_linearizer.queue((origin, room_id))):
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
room_version = yield self.store.get_room_version(room_id)
room_version = await self.store.get_room_version(room_id)
format_ver = room_version_to_event_format(room_version)
auth_chain = [
event_from_pdu_json(e, format_ver) for e in content["auth_chain"]
]
signed_auth = yield self._check_sigs_and_hash_and_fetch(
signed_auth = await self._check_sigs_and_hash_and_fetch(
origin, auth_chain, outlier=True, room_version=room_version
)
ret = yield self.handler.on_query_auth(
ret = await self.handler.on_query_auth(
origin,
event_id,
room_id,
@@ -503,16 +483,14 @@ class FederationServer(FederationBase):
return self.on_query_request("user_devices", user_id)
@trace
@defer.inlineCallbacks
@log_function
def on_claim_client_keys(self, origin, content):
async def on_claim_client_keys(self, origin, content):
query = []
for user_id, device_keys in content.get("one_time_keys", {}).items():
for device_id, algorithm in device_keys.items():
query.append((user_id, device_id, algorithm))
log_kv({"message": "Claiming one time keys.", "user, device pairs": query})
results = yield self.store.claim_e2e_one_time_keys(query)
results = await self.store.claim_e2e_one_time_keys(query)
json_result = {}
for user_id, device_keys in results.items():
@@ -536,14 +514,12 @@ class FederationServer(FederationBase):
return {"one_time_keys": json_result}
@defer.inlineCallbacks
@log_function
def on_get_missing_events(
async def on_get_missing_events(
self, origin, room_id, earliest_events, latest_events, limit
):
with (yield self._server_linearizer.queue((origin, room_id))):
with (await self._server_linearizer.queue((origin, room_id))):
origin_host, _ = parse_server_name(origin)
yield self.check_server_matches_acl(origin_host, room_id)
await self.check_server_matches_acl(origin_host, room_id)
logger.info(
"on_get_missing_events: earliest_events: %r, latest_events: %r,"
@@ -553,7 +529,7 @@ class FederationServer(FederationBase):
limit,
)
missing_events = yield self.handler.on_get_missing_events(
missing_events = await self.handler.on_get_missing_events(
origin, room_id, earliest_events, latest_events, limit
)
@@ -586,8 +562,7 @@ class FederationServer(FederationBase):
destination=None,
)
@defer.inlineCallbacks
def _handle_received_pdu(self, origin, pdu):
async def _handle_received_pdu(self, origin, pdu):
""" Process a PDU received in a federation /send/ transaction.
If the event is invalid, then this method throws a FederationError.
@@ -640,37 +615,34 @@ class FederationServer(FederationBase):
logger.info("Accepting join PDU %s from %s", pdu.event_id, origin)
# We've already checked that we know the room version by this point
room_version = yield self.store.get_room_version(pdu.room_id)
room_version = await self.store.get_room_version(pdu.room_id)
# Check signature.
try:
pdu = yield self._check_sigs_and_hash(room_version, pdu)
pdu = await self._check_sigs_and_hash(room_version, pdu)
except SynapseError as e:
raise FederationError("ERROR", e.code, e.msg, affected=pdu.event_id)
yield self.handler.on_receive_pdu(origin, pdu, sent_to_us_directly=True)
await self.handler.on_receive_pdu(origin, pdu, sent_to_us_directly=True)
def __str__(self):
return "<ReplicationLayer(%s)>" % self.server_name
@defer.inlineCallbacks
def exchange_third_party_invite(
async def exchange_third_party_invite(
self, sender_user_id, target_user_id, room_id, signed
):
ret = yield self.handler.exchange_third_party_invite(
ret = await self.handler.exchange_third_party_invite(
sender_user_id, target_user_id, room_id, signed
)
return ret
@defer.inlineCallbacks
def on_exchange_third_party_invite_request(self, room_id, event_dict):
ret = yield self.handler.on_exchange_third_party_invite_request(
async def on_exchange_third_party_invite_request(self, room_id, event_dict):
ret = await self.handler.on_exchange_third_party_invite_request(
room_id, event_dict
)
return ret
@defer.inlineCallbacks
def check_server_matches_acl(self, server_name, room_id):
async def check_server_matches_acl(self, server_name, room_id):
"""Check if the given server is allowed by the server ACLs in the room
Args:
@@ -680,13 +652,13 @@ class FederationServer(FederationBase):
Raises:
AuthError if the server does not match the ACL
"""
state_ids = yield self.store.get_current_state_ids(room_id)
state_ids = await self.store.get_current_state_ids(room_id)
acl_event_id = state_ids.get((EventTypes.ServerACL, ""))
if not acl_event_id:
return
acl_event = yield self.store.get_event(acl_event_id)
acl_event = await self.store.get_event(acl_event_id)
if server_matches_acl_event(server_name, acl_event):
return
@@ -709,7 +681,7 @@ def server_matches_acl_event(server_name, acl_event):
# server name is a literal IP
allow_ip_literals = acl_event.content.get("allow_ip_literals", True)
if not isinstance(allow_ip_literals, bool):
logger.warn("Ignorning non-bool allow_ip_literals flag")
logger.warning("Ignorning non-bool allow_ip_literals flag")
allow_ip_literals = True
if not allow_ip_literals:
# check for ipv6 literals. These start with '['.
@@ -723,7 +695,7 @@ def server_matches_acl_event(server_name, acl_event):
# next, check the deny list
deny = acl_event.content.get("deny", [])
if not isinstance(deny, (list, tuple)):
logger.warn("Ignorning non-list deny ACL %s", deny)
logger.warning("Ignorning non-list deny ACL %s", deny)
deny = []
for e in deny:
if _acl_entry_matches(server_name, e):
@@ -733,7 +705,7 @@ def server_matches_acl_event(server_name, acl_event):
# then the allow list.
allow = acl_event.content.get("allow", [])
if not isinstance(allow, (list, tuple)):
logger.warn("Ignorning non-list allow ACL %s", allow)
logger.warning("Ignorning non-list allow ACL %s", allow)
allow = []
for e in allow:
if _acl_entry_matches(server_name, e):
@@ -747,7 +719,7 @@ def server_matches_acl_event(server_name, acl_event):
def _acl_entry_matches(server_name, acl_entry):
if not isinstance(acl_entry, six.string_types):
logger.warn(
logger.warning(
"Ignoring non-str ACL entry '%s' (is %s)", acl_entry, type(acl_entry)
)
return False
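
_acl_entry_matches (body elided in this hunk) treats string entries as host patterns; as a rough sketch, shell-style globbing captures the intended behaviour, though this is an approximation rather than Synapse's exact matcher:

import fnmatch

def acl_entry_matches(server_name, acl_entry):
    # Sketch: treat the ACL entry as a shell-style glob, so
    # "*.example.org" matches any subdomain of example.org.
    if not isinstance(acl_entry, str):
        return False
    return fnmatch.fnmatch(server_name, acl_entry)

assert acl_entry_matches("matrix.example.org", "*.example.org")
assert not acl_entry_matches("evil.org", "*.example.org")
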
@@ -799,15 +771,14 @@ class FederationHandlerRegistry(object):
self.query_handlers[query_type] = handler
@defer.inlineCallbacks
def on_edu(self, edu_type, origin, content):
async def on_edu(self, edu_type, origin, content):
handler = self.edu_handlers.get(edu_type)
if not handler:
logger.warn("No handler registered for EDU type %s", edu_type)
logger.warning("No handler registered for EDU type %s", edu_type)
with start_active_span_from_edu(content, "handle_edu"):
try:
yield handler(origin, content)
await handler(origin, content)
except SynapseError as e:
logger.info("Failed to handle edu %r: %r", edu_type, e)
except Exception:
@@ -816,7 +787,7 @@ class FederationHandlerRegistry(object):
def on_query(self, query_type, args):
handler = self.query_handlers.get(query_type)
if not handler:
logger.warn("No handler registered for query type %s", query_type)
logger.warning("No handler registered for query type %s", query_type)
raise NotFoundError("No handler for Query type '%s'" % (query_type,))
return handler(args)
@@ -840,7 +811,7 @@ class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
super(ReplicationFederationHandlerRegistry, self).__init__()
def on_edu(self, edu_type, origin, content):
async def on_edu(self, edu_type, origin, content):
"""Overrides FederationHandlerRegistry
"""
if not self.config.use_presence and edu_type == "m.presence":
@@ -848,17 +819,17 @@ class ReplicationFederationHandlerRegistry(FederationHandlerRegistry):
handler = self.edu_handlers.get(edu_type)
if handler:
return super(ReplicationFederationHandlerRegistry, self).on_edu(
return await super(ReplicationFederationHandlerRegistry, self).on_edu(
edu_type, origin, content
)
return self._send_edu(edu_type=edu_type, origin=origin, content=content)
return await self._send_edu(edu_type=edu_type, origin=origin, content=content)
def on_query(self, query_type, args):
async def on_query(self, query_type, args):
"""Overrides FederationHandlerRegistry
"""
handler = self.query_handlers.get(query_type)
if handler:
return handler(args)
return await handler(args)
return self._get_query_client(query_type=query_type, args=args)
return await self._get_query_client(query_type=query_type, args=args)
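
The added awaits in these overrides matter: once on_query is an async def, returning handler(args) without awaiting would hand the caller an un-awaited coroutine object instead of its result. A toy illustration (asyncio used for brevity):

import asyncio

async def handler(args):
    return args.upper()

async def broken(args):
    # Missing await: callers receive a coroutine object, not "OK".
    return handler(args)

async def fixed(args):
    return await handler(args)

async def main():
    print(await fixed("ok"))       # OK
    coro = await broken("ok")
    print(type(coro).__name__)     # coroutine
    coro.close()                   # avoid the "never awaited" warning

asyncio.run(main())
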

View File

@@ -44,7 +44,7 @@ class TransactionActions(object):
response code and response body.
"""
if not transaction.transaction_id:
raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
raise RuntimeError("Cannot persist a transaction with no transaction_id")
return self.store.get_received_txn_response(transaction.transaction_id, origin)
@@ -56,7 +56,7 @@ class TransactionActions(object):
Deferred
"""
if not transaction.transaction_id:
raise RuntimeError("Cannot persist a transaction with no " "transaction_id")
raise RuntimeError("Cannot persist a transaction with no transaction_id")
return self.store.set_received_txn_response(
transaction.transaction_id, origin, code, response

View File

@@ -36,6 +36,8 @@ from six import iteritems
from sortedcontainers import SortedDict
from twisted.internet import defer
from synapse.metrics import LaterGauge
from synapse.storage.presence import UserPresenceState
from synapse.util.metrics import Measure
@@ -212,7 +214,7 @@ class FederationRemoteSendQueue(object):
receipt (synapse.types.ReadReceipt):
"""
# nothing to do here: the replication listener will handle it.
pass
return defer.succeed(None)
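
Replacing the bare pass with return defer.succeed(None) keeps the method's Deferred-returning contract intact, so callers that chain callbacks on the result keep working even when there is nothing to do; a toy sketch:

from twisted.internet import defer

def send_read_receipt_noop(receipt):
    # Nothing to do locally, but callers expect a Deferred they can
    # attach callbacks to, so hand back an already-fired one.
    return defer.succeed(None)

send_read_receipt_noop(object()).addCallback(lambda _: print("done"))
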
def send_presence(self, states):
"""As per FederationSender

View File

@@ -49,7 +49,7 @@ sent_pdus_destination_dist_count = Counter(
sent_pdus_destination_dist_total = Counter(
"synapse_federation_client_sent_pdu_destinations:total",
"" "Total number of PDUs queued for sending across all destinations",
"Total number of PDUs queued for sending across all destinations",
)
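
Several hunks in this changeset remove the same artifact: implicit concatenation of adjacent string literals, left behind by earlier mechanical reformatting as a stray "" or an odd mid-word split. The literals are byte-for-byte equivalent, just harder to read:

# Adjacent string literals concatenate at compile time, so these
# pairs are identical to the single-literal forms below them:
a = "" "Total number of PDUs queued for sending across all destinations"
b = "Cannot persist a transaction with no " "transaction_id"
assert a == "Total number of PDUs queued for sending across all destinations"
assert b == "Cannot persist a transaction with no transaction_id"
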

View File

@@ -192,15 +192,16 @@ class PerDestinationQueue(object):
# We have to keep 2 free slots for presence and rr_edus
limit = MAX_EDUS_PER_TRANSACTION - 2
device_update_edus, dev_list_id = (
yield self._get_device_update_edus(limit)
device_update_edus, dev_list_id = yield self._get_device_update_edus(
limit
)
limit -= len(device_update_edus)
to_device_edus, device_stream_id = (
yield self._get_to_device_message_edus(limit)
)
(
to_device_edus,
device_stream_id,
) = yield self._get_to_device_message_edus(limit)
pending_edus = device_update_edus + to_device_edus
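
The budgeting above reserves two EDU slots per transaction for presence and read receipts, spends part of the remainder on device updates, and gives what is left to to-device messages. Compressed into a sketch (the MAX_EDUS_PER_TRANSACTION value is an assumption here):

MAX_EDUS_PER_TRANSACTION = 100  # assumed for the sketch

def budget_edus(device_updates, to_device):
    # Keep two slots free for presence and read-receipt EDUs.
    limit = MAX_EDUS_PER_TRANSACTION - 2
    picked = device_updates[:limit]
    limit -= len(picked)
    picked += to_device[:limit]
    return picked

print(len(budget_edus(["d"] * 90, ["t"] * 20)))  # 98
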
@@ -359,20 +360,20 @@ class PerDestinationQueue(object):
last_device_list = self._last_device_list_stream_id
# Retrieve list of new device updates to send to the destination
now_stream_id, results = yield self._store.get_devices_by_remote(
now_stream_id, results = yield self._store.get_device_updates_by_remote(
self._destination, last_device_list, limit=limit
)
edus = [
Edu(
origin=self._server_name,
destination=self._destination,
edu_type="m.device_list_update",
edu_type=edu_type,
content=content,
)
for content in results
for (edu_type, content) in results
]
assert len(edus) <= limit, "get_devices_by_remote returned too many EDUs"
assert len(edus) <= limit, "get_device_updates_by_remote returned too many EDUs"
return (edus, now_stream_id)

View File

@@ -84,7 +84,7 @@ class TransactionManager(object):
txn_id = str(self._next_txn_id)
logger.debug(
"TX [%s] {%s} Attempting new transaction" " (pdus: %d, edus: %d)",
"TX [%s] {%s} Attempting new transaction (pdus: %d, edus: %d)",
destination,
txn_id,
len(pdus),
@@ -103,7 +103,7 @@ class TransactionManager(object):
self._next_txn_id += 1
logger.info(
"TX [%s] {%s} Sending transaction [%s]," " (PDUs: %d, EDUs: %d)",
"TX [%s] {%s} Sending transaction [%s], (PDUs: %d, EDUs: %d)",
destination,
txn_id,
transaction.transaction_id,
@@ -146,7 +146,7 @@ class TransactionManager(object):
if code == 200:
for e_id, r in response.get("pdus", {}).items():
if "error" in r:
logger.warn(
logger.warning(
"TX [%s] {%s} Remote returned error for %s: %s",
destination,
txn_id,
@@ -155,7 +155,7 @@ class TransactionManager(object):
)
else:
for p in pdus:
logger.warn(
logger.warning(
"TX [%s] {%s} Failed to send event %s",
destination,
txn_id,

View File

@@ -14,9 +14,9 @@
# limitations under the License.
"""The transport layer is responsible for both sending transactions to remote
home servers and receiving a variety of requests from other home servers.
homeservers and receiving a variety of requests from other homeservers.
By default this is done over HTTPS (and all home servers are required to
By default this is done over HTTPS (and all homeservers are required to
support HTTPS), however individual pairings of servers may decide to
communicate over a different (albeit still reliable) protocol.
"""

View File

@@ -44,7 +44,7 @@ class TransportLayerClient(object):
given event.
Args:
destination (str): The host name of the remote home server we want
destination (str): The host name of the remote homeserver we want
to get the state from.
context (str): The name of the context we want the state of
event_id (str): The event we want the context at.
@@ -68,7 +68,7 @@ class TransportLayerClient(object):
given event. Returns the state's event_id's
Args:
destination (str): The host name of the remote home server we want
destination (str): The host name of the remote homeserver we want
to get the state from.
context (str): The name of the context we want the state of
event_id (str): The event we want the context at.
@@ -91,7 +91,7 @@ class TransportLayerClient(object):
""" Requests the pdu with give id and origin from the given server.
Args:
destination (str): The host name of the remote home server we want
destination (str): The host name of the remote homeserver we want
to get the state from.
event_id (str): The id of the event being requested.
timeout (int): How long to try (in ms) the destination for before
@@ -122,10 +122,10 @@ class TransportLayerClient(object):
Deferred: Results in a dict received from the remote homeserver.
"""
logger.debug(
"backfill dest=%s, room_id=%s, event_tuples=%s, limit=%s",
"backfill dest=%s, room_id=%s, event_tuples=%r, limit=%s",
destination,
room_id,
repr(event_tuples),
event_tuples,
str(limit),
)
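
The logging tweak above is about laziness, not output: %s plus an explicit repr() computes the repr eagerly at the call site, while %r defers it to the logging framework, which skips the work entirely when DEBUG records are filtered out. For example:

import logging

logger = logging.getLogger(__name__)
event_tuples = [("$event1", 1), ("$event2", 2)]

# Eager: repr() runs even when DEBUG is disabled.
logger.debug("backfill event_tuples=%s", repr(event_tuples))

# Lazy: repr() only runs if the record is actually emitted.
logger.debug("backfill event_tuples=%r", event_tuples)
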

View File

@@ -202,7 +202,7 @@ def _parse_auth_header(header_bytes):
sig = strip_quotes(param_dict["sig"])
return origin, key, sig
except Exception as e:
logger.warn(
logger.warning(
"Error parsing auth header '%s': %s",
header_bytes.decode("ascii", "replace"),
e,
@@ -287,10 +287,12 @@ class BaseFederationServlet(object):
except NoAuthenticationError:
origin = None
if self.REQUIRE_AUTH:
logger.warn("authenticate_request failed: missing authentication")
logger.warning(
"authenticate_request failed: missing authentication"
)
raise
except Exception as e:
logger.warn("authenticate_request failed: %s", e)
logger.warning("authenticate_request failed: %s", e)
raise
request_tags = {
@@ -712,7 +714,7 @@ class PublicRoomList(BaseFederationServlet):
This API returns information in the same format as /publicRooms on the
client API, but will only ever include local public rooms and hence is
intended for consumption by other home servers.
intended for consumption by other homeservers.
GET /publicRooms HTTP/1.1

View File

@@ -181,7 +181,7 @@ class GroupAttestionRenewer(object):
elif not self.is_mine_id(user_id):
destination = get_domain_from_id(user_id)
else:
logger.warn(
logger.warning(
"Incorrectly trying to do attestations for user: %r in %r",
user_id,
group_id,

View File

@@ -488,7 +488,7 @@ class GroupsServerHandler(object):
profile = yield self.profile_handler.get_profile_from_cache(user_id)
user_profile.update(profile)
except Exception as e:
logger.warn("Error getting profile for %s: %s", user_id, e)
logger.warning("Error getting profile for %s: %s", user_id, e)
user_profiles.append(user_profile)
return {"chunk": user_profiles, "total_user_count_estimate": len(invited_users)}

View File

@@ -38,9 +38,10 @@ class AccountDataEventSource(object):
{"type": "m.tag", "content": {"tags": room_tags}, "room_id": room_id}
)
account_data, room_account_data = (
yield self.store.get_updated_account_data_for_user(user_id, last_stream_id)
)
(
account_data,
room_account_data,
) = yield self.store.get_updated_account_data_for_user(user_id, last_stream_id)
for account_data_type, content in account_data.items():
results.append({"type": account_data_type, "content": content})

View File

@@ -30,6 +30,9 @@ class AdminHandler(BaseHandler):
def __init__(self, hs):
super(AdminHandler, self).__init__(hs)
self.storage = hs.get_storage()
self.state_store = self.storage.state
@defer.inlineCallbacks
def get_whois(self, user):
connections = []
@@ -205,7 +208,7 @@ class AdminHandler(BaseHandler):
from_key = events[-1].internal_metadata.after
events = yield filter_events_for_client(self.store, user_id, events)
events = yield filter_events_for_client(self.storage, user_id, events)
writer.write_events(room_id, events)
@@ -241,7 +244,7 @@ class AdminHandler(BaseHandler):
for event_id in extremities:
if not event_to_unseen_prevs[event_id]:
continue
state = yield self.store.get_state_for_event(event_id)
state = yield self.state_store.get_state_for_event(event_id)
writer.write_state(room_id, event_id, state)
return writer.finished()
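
These handler changes follow the storage layering introduced elsewhere in this changeset: hs.get_datastore() (self.store) keeps the row-level queries, while hs.get_storage() exposes higher-level components such as .state. A toy model of the wiring (the real classes are elided from this diff):

class StateStorage:
    def get_state_for_event(self, event_id):
        return {("m.room.create", ""): "$create"}

class Storage:
    # Facade grouping the higher-level storage layers.
    def __init__(self):
        self.state = StateStorage()

class Handler:
    def __init__(self, storage):
        self.storage = storage
        self.state_store = self.storage.state  # same wiring as above

handler = Handler(Storage())
print(handler.state_store.get_state_for_event("$event"))
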

View File

@@ -73,7 +73,10 @@ class ApplicationServicesHandler(object):
try:
limit = 100
while True:
upper_bound, events = yield self.store.get_new_events_for_appservice(
(
upper_bound,
events,
) = yield self.store.get_new_events_for_appservice(
self.current_max, limit
)

View File

@@ -102,8 +102,9 @@ class AuthHandler(BaseHandler):
login_types.append(t)
self._supported_login_types = login_types
self._account_ratelimiter = Ratelimiter()
self._failed_attempts_ratelimiter = Ratelimiter()
# Ratelimiter for failed auth during UIA. Uses same ratelimit config
# as per `rc_login.failed_attempts`.
self._failed_uia_attempts_ratelimiter = Ratelimiter()
self._clock = self.hs.get_clock()
@@ -133,12 +134,38 @@ class AuthHandler(BaseHandler):
AuthError if the client has completed a login flow, and it gives
a different user to `requester`
LimitExceededError if the ratelimiter's failed request count for this
user is too high to proceed
"""
user_id = requester.user.to_string()
# Check if we should be ratelimited due to too many previous failed attempts
self._failed_uia_attempts_ratelimiter.ratelimit(
user_id,
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=False,
)
# build a list of supported flows
flows = [[login_type] for login_type in self._supported_login_types]
result, params, _ = yield self.check_auth(flows, request_body, clientip)
try:
result, params, _ = yield self.check_auth(flows, request_body, clientip)
except LoginError:
# Update the ratelimiter to say we failed (`can_do_action` doesn't raise).
self._failed_uia_attempts_ratelimiter.can_do_action(
user_id,
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=True,
)
raise
# find the completed login type
for login_type in self._supported_login_types:
@@ -223,7 +250,7 @@ class AuthHandler(BaseHandler):
# could continue registration from your phone having clicked the
# email auth link on there). It's probably too open to abuse
# because it lets unauthenticated clients store arbitrary objects
# on a home server.
# on a homeserver.
# Revisit: Assuming the REST APIs do sensible validation, the data
# isn't arbitrary.
session["clientdict"] = clientdict
@@ -501,11 +528,8 @@ class AuthHandler(BaseHandler):
multiple matches
Raises:
LimitExceededError if the ratelimiter's login requests count for this
user is too high to proceed.
UserDeactivatedError if a user is found but is deactivated.
"""
self.ratelimit_login_per_account(user_id)
res = yield self._find_user_id_and_pwd_hash(user_id)
if res is not None:
return res[0]
@@ -525,7 +549,7 @@ class AuthHandler(BaseHandler):
result = None
if not user_infos:
logger.warn("Attempted to login as %s but they do not exist", user_id)
logger.warning("Attempted to login as %s but they do not exist", user_id)
elif len(user_infos) == 1:
# a single match (possibly not exact)
result = user_infos.popitem()
@@ -534,7 +558,7 @@ class AuthHandler(BaseHandler):
result = (user_id, user_infos[user_id])
else:
# multiple matches, none of them exact
logger.warn(
logger.warning(
"Attempted to login as %s but it matches more than one user "
"inexactly: %r",
user_id,
@@ -572,8 +596,6 @@ class AuthHandler(BaseHandler):
StoreError if there was a problem accessing the database
SynapseError if there was a problem with the request
LoginError if there was an authentication problem.
LimitExceededError if the ratelimiter's login requests count for this
user is too high to proceed.
"""
if username.startswith("@"):
@@ -581,8 +603,6 @@ class AuthHandler(BaseHandler):
else:
qualified_user_id = UserID(username, self.hs.hostname).to_string()
self.ratelimit_login_per_account(qualified_user_id)
login_type = login_submission.get("type")
known_login_type = False
@@ -650,15 +670,6 @@ class AuthHandler(BaseHandler):
if not known_login_type:
raise SynapseError(400, "Unknown login type %s" % login_type)
# unknown username or invalid password.
self._failed_attempts_ratelimiter.ratelimit(
qualified_user_id.lower(),
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=True,
)
# We raise a 403 here, but note that if we're doing user-interactive
# login, it turns all LoginErrors into a 401 anyway.
raise LoginError(403, "Invalid password", errcode=Codes.FORBIDDEN)
@@ -710,10 +721,6 @@ class AuthHandler(BaseHandler):
Returns:
Deferred[unicode] the canonical_user_id, or Deferred[None] if
unknown user/bad password
Raises:
LimitExceededError if the ratelimiter's login requests count for this
user is too high to proceed.
"""
lookupres = yield self._find_user_id_and_pwd_hash(user_id)
if not lookupres:
@@ -728,7 +735,7 @@ class AuthHandler(BaseHandler):
result = yield self.validate_hash(password, password_hash)
if not result:
logger.warn("Failed password login for user %s", user_id)
logger.warning("Failed password login for user %s", user_id)
return None
return user_id
@@ -742,7 +749,7 @@ class AuthHandler(BaseHandler):
auth_api.validate_macaroon(macaroon, "login", user_id)
except Exception:
raise AuthError(403, "Invalid token", errcode=Codes.FORBIDDEN)
self.ratelimit_login_per_account(user_id)
yield self.auth.check_auth_blocking(user_id)
return user_id
@@ -810,7 +817,7 @@ class AuthHandler(BaseHandler):
@defer.inlineCallbacks
def add_threepid(self, user_id, medium, address, validated_at):
# 'Canonicalise' email addresses down to lower case.
# We're now moving towards the Home Server being the entity that
# We're now moving towards the homeserver being the entity that
# is responsible for validating threepids used for resetting passwords
# on accounts, so in future Synapse will gain knowledge of specific
# types (mediums) of threepid. For now, we still use the existing
@@ -912,35 +919,6 @@ class AuthHandler(BaseHandler):
else:
return defer.succeed(False)
def ratelimit_login_per_account(self, user_id):
"""Checks whether the process must be stopped because of ratelimiting.
Checks against two ratelimiters: the generic one for login attempts per
account and the one specific to failed attempts.
Args:
user_id (unicode): complete @user:id
Raises:
LimitExceededError if one of the ratelimiters' login requests count
for this user is too high to proceed.
"""
self._failed_attempts_ratelimiter.ratelimit(
user_id.lower(),
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_failed_attempts.per_second,
burst_count=self.hs.config.rc_login_failed_attempts.burst_count,
update=False,
)
self._account_ratelimiter.ratelimit(
user_id.lower(),
time_now_s=self._clock.time(),
rate_hz=self.hs.config.rc_login_account.per_second,
burst_count=self.hs.config.rc_login_account.burst_count,
update=True,
)
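
The UIA path above replaces the removed per-account limiter with a two-phase use of the failed-attempts limiter: ratelimit(..., update=False) raises if the user is already over the limit without counting the request, and can_do_action(..., update=True) records a failure without raising. A simplified counter-based sketch (no time decay; not Synapse's Ratelimiter):

class ToyRatelimiter:
    def __init__(self, burst_count):
        self.burst_count = burst_count
        self.counts = {}

    def can_do_action(self, key, update=True):
        count = self.counts.get(key, 0)
        allowed = count < self.burst_count
        if update:
            self.counts[key] = count + 1
        return allowed

    def ratelimit(self, key, update=True):
        if not self.can_do_action(key, update=update):
            raise RuntimeError("rate limited")

limiter = ToyRatelimiter(burst_count=3)
user = "@alice:example.org"
limiter.ratelimit(user, update=False)     # pre-check; nothing recorded
limiter.can_do_action(user, update=True)  # record one failed attempt
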
@attr.s
class MacaroonGenerator(object):

View File

@@ -46,6 +46,7 @@ class DeviceWorkerHandler(BaseHandler):
self.hs = hs
self.state = hs.get_state_handler()
self.state_store = hs.get_storage().state
self._auth_handler = hs.get_auth_handler()
@trace
@@ -178,7 +179,7 @@ class DeviceWorkerHandler(BaseHandler):
continue
# mapping from event_id -> state_dict
prev_state_ids = yield self.store.get_state_ids_for_events(event_ids)
prev_state_ids = yield self.state_store.get_state_ids_for_events(event_ids)
# Have we joined the room? If so, we just blindly add all the users to
# the "possibly changed" users.
@@ -458,7 +459,18 @@ class DeviceHandler(DeviceWorkerHandler):
@defer.inlineCallbacks
def on_federation_query_user_devices(self, user_id):
stream_id, devices = yield self.store.get_devices_with_keys_by_user(user_id)
return {"user_id": user_id, "stream_id": stream_id, "devices": devices}
master_key = yield self.store.get_e2e_cross_signing_key(user_id, "master")
self_signing_key = yield self.store.get_e2e_cross_signing_key(
user_id, "self_signing"
)
return {
"user_id": user_id,
"stream_id": stream_id,
"devices": devices,
"master_key": master_key,
"self_signing_key": self_signing_key,
}
@defer.inlineCallbacks
def user_left_room(self, user, room_id):
@@ -656,7 +668,7 @@ class DeviceListUpdater(object):
except (NotRetryingDestination, RequestSendFailed, HttpResponseException):
# TODO: Remember that we are now out of sync and try again
# later
logger.warn("Failed to handle device list update for %s", user_id)
logger.warning("Failed to handle device list update for %s", user_id)
# We abort on exceptions rather than accepting the update
# as otherwise synapse will 'forget' that its device list
# is out of date. If we bail then we will retry the resync
@@ -694,7 +706,7 @@ class DeviceListUpdater(object):
# up on storing the total list of devices and only handle the
# delta instead.
if len(devices) > 1000:
logger.warn(
logger.warning(
"Ignoring device list snapshot for %s as it has >1K devs (%d)",
user_id,
len(devices),

View File

@@ -52,7 +52,7 @@ class DeviceMessageHandler(object):
local_messages = {}
sender_user_id = content["sender"]
if origin != get_domain_from_id(sender_user_id):
logger.warn(
logger.warning(
"Dropping device message from %r with spoofed sender %r",
origin,
sender_user_id,

View File

@@ -119,7 +119,7 @@ class DirectoryHandler(BaseHandler):
if not service.is_interested_in_alias(room_alias.to_string()):
raise SynapseError(
400,
"This application service has not reserved" " this kind of alias.",
"This application service has not reserved this kind of alias.",
errcode=Codes.EXCLUSIVE,
)
else:
@@ -250,7 +250,7 @@ class DirectoryHandler(BaseHandler):
ignore_backoff=True,
)
except CodeMessageException as e:
logging.warn("Error retrieving alias")
logging.warning("Error retrieving alias")
if e.code == 404:
result = None
else:
@@ -283,7 +283,7 @@ class DirectoryHandler(BaseHandler):
def on_directory_query(self, args):
room_alias = RoomAlias.from_string(args["room_alias"])
if not self.hs.is_mine(room_alias):
raise SynapseError(400, "Room Alias is not hosted on this Home Server")
raise SynapseError(400, "Room Alias is not hosted on this homeserver")
result = yield self.get_association_from_room_alias(room_alias)

View File

@@ -36,6 +36,8 @@ from synapse.types import (
get_verify_key_from_cross_signing_key,
)
from synapse.util import unwrapFirstError
from synapse.util.async_helpers import Linearizer
from synapse.util.caches.expiringcache import ExpiringCache
from synapse.util.retryutils import NotRetryingDestination
logger = logging.getLogger(__name__)
@@ -49,10 +51,19 @@ class E2eKeysHandler(object):
self.is_mine = hs.is_mine
self.clock = hs.get_clock()
self._edu_updater = SigningKeyEduUpdater(hs, self)
federation_registry = hs.get_federation_registry()
# FIXME: switch to m.signing_key_update when MSC1756 is merged into the spec
federation_registry.register_edu_handler(
"org.matrix.signing_key_update",
self._edu_updater.incoming_signing_key_update,
)
# doesn't really work as part of the generic query API, because the
# query request requires an object POST, but we abuse the
# "query handler" interface.
hs.get_federation_registry().register_query_handler(
federation_registry.register_query_handler(
"client_keys", self.on_federation_query_client_keys
)
@@ -119,9 +130,10 @@ class E2eKeysHandler(object):
else:
query_list.append((user_id, None))
user_ids_not_in_cache, remote_results = (
yield self.store.get_user_devices_from_cache(query_list)
)
(
user_ids_not_in_cache,
remote_results,
) = yield self.store.get_user_devices_from_cache(query_list)
for user_id, devices in iteritems(remote_results):
user_devices = results.setdefault(user_id, {})
for device_id, device in iteritems(devices):
@@ -207,13 +219,15 @@ class E2eKeysHandler(object):
if user_id in destination_query:
results[user_id] = keys
for user_id, key in remote_result["master_keys"].items():
if user_id in destination_query:
cross_signing_keys["master_keys"][user_id] = key
if "master_keys" in remote_result:
for user_id, key in remote_result["master_keys"].items():
if user_id in destination_query:
cross_signing_keys["master_keys"][user_id] = key
for user_id, key in remote_result["self_signing_keys"].items():
if user_id in destination_query:
cross_signing_keys["self_signing_keys"][user_id] = key
if "self_signing_keys" in remote_result:
for user_id, key in remote_result["self_signing_keys"].items():
if user_id in destination_query:
cross_signing_keys["self_signing_keys"][user_id] = key
except Exception as e:
failure = _exception_to_failure(e)
@@ -251,7 +265,7 @@ class E2eKeysHandler(object):
Returns:
defer.Deferred[dict[str, dict[str, dict]]]: map from
(master|self_signing|user_signing) -> user_id -> key
(master_keys|self_signing_keys|user_signing_keys) -> user_id -> key
"""
master_keys = {}
self_signing_keys = {}
@@ -343,7 +357,16 @@ class E2eKeysHandler(object):
"""
device_keys_query = query_body.get("device_keys", {})
res = yield self.query_local_devices(device_keys_query)
return {"device_keys": res}
ret = {"device_keys": res}
# add in the cross-signing keys
cross_signing_keys = yield self.get_cross_signing_keys_from_cache(
device_keys_query, None
)
ret.update(cross_signing_keys)
return ret
@trace
@defer.inlineCallbacks
@@ -688,17 +711,21 @@ class E2eKeysHandler(object):
try:
# get our self-signing key to verify the signatures
_, self_signing_key_id, self_signing_verify_key = yield self._get_e2e_cross_signing_verify_key(
user_id, "self_signing"
)
(
_,
self_signing_key_id,
self_signing_verify_key,
) = yield self._get_e2e_cross_signing_verify_key(user_id, "self_signing")
# get our master key, since we may have received a signature of it.
# We need to fetch it here so that we know what its key ID is, so
# that we can check if a signature that was sent is a signature of
# the master key or of a device
master_key, _, master_verify_key = yield self._get_e2e_cross_signing_verify_key(
user_id, "master"
)
(
master_key,
_,
master_verify_key,
) = yield self._get_e2e_cross_signing_verify_key(user_id, "master")
# fetch our stored devices. This is used to 1. verify
# signatures on the master key, and 2. to compare with what
@@ -838,9 +865,11 @@ class E2eKeysHandler(object):
try:
# get our user-signing key to verify the signatures
user_signing_key, user_signing_key_id, user_signing_verify_key = yield self._get_e2e_cross_signing_verify_key(
user_id, "user_signing"
)
(
user_signing_key,
user_signing_key_id,
user_signing_verify_key,
) = yield self._get_e2e_cross_signing_verify_key(user_id, "user_signing")
except SynapseError as e:
failure = _exception_to_failure(e)
for user, devicemap in signatures.items():
@@ -859,7 +888,11 @@ class E2eKeysHandler(object):
try:
# get the target user's master key, to make sure it matches
# what was sent
master_key, master_key_id, _ = yield self._get_e2e_cross_signing_verify_key(
(
master_key,
master_key_id,
_,
) = yield self._get_e2e_cross_signing_verify_key(
target_user, "master", user_id
)
@@ -1047,3 +1080,100 @@ class SignatureListItem:
target_user_id = attr.ib()
target_device_id = attr.ib()
signature = attr.ib()
class SigningKeyEduUpdater(object):
"""Handles incoming signing key updates from federation and updates the DB"""
def __init__(self, hs, e2e_keys_handler):
self.store = hs.get_datastore()
self.federation = hs.get_federation_client()
self.clock = hs.get_clock()
self.e2e_keys_handler = e2e_keys_handler
self._remote_edu_linearizer = Linearizer(name="remote_signing_key")
# user_id -> list of updates waiting to be handled.
self._pending_updates = {}
# Recently seen stream ids. We don't bother keeping these in the DB,
# but they're useful to have around to reduce the number of spurious
# resyncs.
self._seen_updates = ExpiringCache(
cache_name="signing_key_update_edu",
clock=self.clock,
max_len=10000,
expiry_ms=30 * 60 * 1000,
iterable=True,
)
@defer.inlineCallbacks
def incoming_signing_key_update(self, origin, edu_content):
"""Called on incoming signing key update from federation. Responsible for
parsing the EDU and adding to pending updates list.
Args:
origin (string): the server that sent the EDU
edu_content (dict): the contents of the EDU
"""
user_id = edu_content.pop("user_id")
master_key = edu_content.pop("master_key", None)
self_signing_key = edu_content.pop("self_signing_key", None)
if get_domain_from_id(user_id) != origin:
logger.warning("Got signing key update edu for %r from %r", user_id, origin)
return
room_ids = yield self.store.get_rooms_for_user(user_id)
if not room_ids:
# We don't share any rooms with this user. Ignore update, as we
# probably won't get any further updates.
return
self._pending_updates.setdefault(user_id, []).append(
(master_key, self_signing_key)
)
yield self._handle_signing_key_updates(user_id)
@defer.inlineCallbacks
def _handle_signing_key_updates(self, user_id):
"""Actually handle pending updates.
Args:
user_id (string): the user whose updates we are processing
"""
device_handler = self.e2e_keys_handler.device_handler
with (yield self._remote_edu_linearizer.queue(user_id)):
pending_updates = self._pending_updates.pop(user_id, [])
if not pending_updates:
# This can happen since we batch updates
return
device_ids = []
logger.info("pending updates: %r", pending_updates)
for master_key, self_signing_key in pending_updates:
if master_key:
yield self.store.set_e2e_cross_signing_key(
user_id, "master", master_key
)
_, verify_key = get_verify_key_from_cross_signing_key(master_key)
# verify_key is a VerifyKey from signedjson, which uses
# .version to denote the portion of the key ID after the
# algorithm and colon, which is the device ID
device_ids.append(verify_key.version)
if self_signing_key:
yield self.store.set_e2e_cross_signing_key(
user_id, "self_signing", self_signing_key
)
_, verify_key = get_verify_key_from_cross_signing_key(
self_signing_key
)
device_ids.append(verify_key.version)
yield device_handler.notify_device_update(user_id, device_ids)
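
The updater batches work with a common Synapse idiom: append the incoming update to a per-user pending list, then drain the whole list while holding a per-user Linearizer, so concurrent EDUs for one user are handled serially and deduplicated. A toy sketch with an asyncio lock standing in for the Linearizer:

import asyncio
from collections import defaultdict

pending = defaultdict(list)
locks = defaultdict(asyncio.Lock)

async def incoming_update(user_id, update):
    pending[user_id].append(update)
    async with locks[user_id]:
        batch = pending.pop(user_id, [])
        if not batch:
            return  # a concurrent task already drained our update
        print("processing", user_id, batch)

async def main():
    await asyncio.gather(
        incoming_update("@alice:example.org", "master_key"),
        incoming_update("@alice:example.org", "self_signing_key"),
    )

asyncio.run(main())
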

View File

@@ -147,6 +147,10 @@ class EventStreamHandler(BaseHandler):
class EventHandler(BaseHandler):
def __init__(self, hs):
super(EventHandler, self).__init__(hs)
self.storage = hs.get_storage()
@defer.inlineCallbacks
def get_event(self, user, room_id, event_id):
"""Retrieve a single specified event.
@@ -172,7 +176,7 @@ class EventHandler(BaseHandler):
is_peeking = user.to_string() not in users
filtered = yield filter_events_for_client(
self.store, user.to_string(), [event], is_peeking=is_peeking
self.storage, user.to_string(), [event], is_peeking=is_peeking
)
if not filtered:

View File

@@ -45,6 +45,7 @@ from synapse.api.errors import (
from synapse.api.room_versions import KNOWN_ROOM_VERSIONS, RoomVersions
from synapse.crypto.event_signing import compute_event_signature
from synapse.event_auth import auth_types_for_event
from synapse.events.snapshot import EventContext
from synapse.events.validator import EventValidator
from synapse.logging.context import (
make_deferred_yieldable,
@@ -96,9 +97,9 @@ class FederationHandler(BaseHandler):
"""Handles events that originated from federation.
Responsible for:
a) handling received Pdus before handing them on as Events to the rest
of the home server (including auth and state conflict resolution)
of the homeserver (including auth and state conflict resolution)
b) converting events that were produced by local clients that may need
to be sent to remote home servers.
to be sent to remote homeservers.
c) doing the necessary dances to invite remote users and join remote
rooms.
"""
@@ -109,6 +110,8 @@ class FederationHandler(BaseHandler):
self.hs = hs
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state_store = self.storage.state
self.federation_client = hs.get_federation_client()
self.state_handler = hs.get_state_handler()
self.server_name = hs.hostname
@@ -180,7 +183,7 @@ class FederationHandler(BaseHandler):
try:
self._sanity_check_event(pdu)
except SynapseError as err:
logger.warn(
logger.warning(
"[%s %s] Received event failed sanity checks", room_id, event_id
)
raise FederationError("ERROR", err.code, err.msg, affected=pdu.event_id)
@@ -301,7 +304,7 @@ class FederationHandler(BaseHandler):
# following.
if sent_to_us_directly:
logger.warn(
logger.warning(
"[%s %s] Rejecting: failed to fetch %d prev events: %s",
room_id,
event_id,
@@ -324,7 +327,7 @@ class FederationHandler(BaseHandler):
event_map = {event_id: pdu}
try:
# Get the state of the events we know about
ours = yield self.store.get_state_groups_ids(room_id, seen)
ours = yield self.state_store.get_state_groups_ids(room_id, seen)
# state_maps is a list of mappings from (type, state_key) to event_id
state_maps = list(
@@ -350,10 +353,11 @@ class FederationHandler(BaseHandler):
# note that if any of the missing prevs share missing state or
# auth events, the requests to fetch those events are deduped
# by the get_pdu_cache in federation_client.
remote_state, got_auth_chain = (
yield self.federation_client.get_state_for_room(
origin, room_id, p
)
(
remote_state,
got_auth_chain,
) = yield self.federation_client.get_state_for_room(
origin, room_id, p
)
# we want the state *after* p; get_state_for_room returns the
@@ -405,7 +409,7 @@ class FederationHandler(BaseHandler):
state = [event_map[e] for e in six.itervalues(state_map)]
auth_chain = list(auth_chains)
except Exception:
logger.warn(
logger.warning(
"[%s %s] Error attempting to resolve state at missing "
"prev_events",
room_id,
@@ -518,7 +522,9 @@ class FederationHandler(BaseHandler):
# We failed to get the missing events, but since we need to handle
# the case of `get_missing_events` not returning the necessary
# events anyway, it is safe to simply log the error and continue.
logger.warn("[%s %s]: Failed to get prev_events: %s", room_id, event_id, e)
logger.warning(
"[%s %s]: Failed to get prev_events: %s", room_id, event_id, e
)
return
logger.info(
@@ -545,7 +551,7 @@ class FederationHandler(BaseHandler):
yield self.on_receive_pdu(origin, ev, sent_to_us_directly=False)
except FederationError as e:
if e.code == 403:
logger.warn(
logger.warning(
"[%s %s] Received prev_event %s failed history check.",
room_id,
event_id,
@@ -888,7 +894,7 @@ class FederationHandler(BaseHandler):
# We set `check_history_visibility_only` as we might otherwise get false
# positives from users having been erased.
filtered_extremities = yield filter_events_for_server(
self.store,
self.storage,
self.server_name,
list(extremities_events.values()),
redact=False,
@@ -1059,7 +1065,7 @@ class FederationHandler(BaseHandler):
SynapseError if the event does not pass muster
"""
if len(ev.prev_event_ids()) > 20:
logger.warn(
logger.warning(
"Rejecting event %s which has %i prev_events",
ev.event_id,
len(ev.prev_event_ids()),
@@ -1067,7 +1073,7 @@ class FederationHandler(BaseHandler):
raise SynapseError(http_client.BAD_REQUEST, "Too many prev_events")
if len(ev.auth_event_ids()) > 10:
logger.warn(
logger.warning(
"Rejecting event %s which has %i auth_events",
ev.event_id,
len(ev.auth_event_ids()),
@@ -1101,7 +1107,7 @@ class FederationHandler(BaseHandler):
@defer.inlineCallbacks
def do_invite_join(self, target_hosts, room_id, joinee, content):
""" Attempts to join the `joinee` to the room `room_id` via the
server `target_host`.
servers contained in `target_hosts`.
This first triggers a /make_join/ request that returns a partial
event that we can fill out and sign. This is then sent to the
@@ -1110,6 +1116,15 @@ class FederationHandler(BaseHandler):
We suspend processing of any received events from this room until we
have finished processing the join.
Args:
target_hosts (Iterable[str]): List of servers to attempt to join the room with.
room_id (str): The ID of the room to join.
joinee (str): The User ID of the joining user.
content (dict): The event content to use for the join event.
"""
logger.debug("Joining %s to %s", joinee, room_id)
@@ -1169,6 +1184,22 @@ class FederationHandler(BaseHandler):
yield self._persist_auth_tree(origin, auth_chain, state, event)
# Check whether this room is the result of an upgrade of a room we already know
# about. If so, migrate over user information
predecessor = yield self.store.get_room_predecessor(room_id)
if not predecessor:
return
old_room_id = predecessor["room_id"]
logger.debug(
"Found predecessor for %s during remote join: %s", room_id, old_room_id
)
# We retrieve the room member handler here so as not to cause a cyclic dependency
member_handler = self.hs.get_room_member_handler()
yield member_handler.transfer_room_state_on_room_upgrade(
old_room_id, room_id
)
logger.debug("Finished joining %s to %s", joinee, room_id)
finally:
room_queue = self.room_queues[room_id]
@@ -1203,7 +1234,7 @@ class FederationHandler(BaseHandler):
with nested_logging_context(p.event_id):
yield self.on_receive_pdu(origin, p, sent_to_us_directly=True)
except Exception as e:
logger.warn(
logger.warning(
"Error handling queued PDU %s from %s: %s", p.event_id, origin, e
)
@@ -1250,7 +1281,7 @@ class FederationHandler(BaseHandler):
builder=builder
)
except AuthError as e:
logger.warn("Failed to create join %r because %s", event, e)
logger.warning("Failed to create join to %s because %s", room_id, e)
raise e
event_allowed = yield self.third_party_event_rules.check_event_allowed(
@@ -1494,7 +1525,7 @@ class FederationHandler(BaseHandler):
room_version, event, context, do_sig_check=False
)
except AuthError as e:
logger.warn("Failed to create new leave %r because %s", event, e)
logger.warning("Failed to create new leave %r because %s", event, e)
raise e
return event
@@ -1549,7 +1580,7 @@ class FederationHandler(BaseHandler):
event_id, allow_none=False, check_room_id=room_id
)
state_groups = yield self.store.get_state_groups(room_id, [event_id])
state_groups = yield self.state_store.get_state_groups(room_id, [event_id])
if state_groups:
_, state = list(iteritems(state_groups)).pop()
@@ -1578,7 +1609,7 @@ class FederationHandler(BaseHandler):
event_id, allow_none=False, check_room_id=room_id
)
state_groups = yield self.store.get_state_groups_ids(room_id, [event_id])
state_groups = yield self.state_store.get_state_groups_ids(room_id, [event_id])
if state_groups:
_, state = list(state_groups.items()).pop()
@@ -1606,7 +1637,7 @@ class FederationHandler(BaseHandler):
events = yield self.store.get_backfill_events(room_id, pdu_list, limit)
events = yield filter_events_for_server(self.store, origin, events)
events = yield filter_events_for_server(self.storage, origin, events)
return events
@@ -1636,7 +1667,7 @@ class FederationHandler(BaseHandler):
if not in_room:
raise AuthError(403, "Host not in room.")
events = yield filter_events_for_server(self.store, origin, [event])
events = yield filter_events_for_server(self.storage, origin, [event])
event = events[0]
return event
else:
@@ -1657,7 +1688,11 @@ class FederationHandler(BaseHandler):
# hack around with a try/finally instead.
success = False
try:
if not event.internal_metadata.is_outlier() and not backfilled:
if (
not event.internal_metadata.is_outlier()
and not backfilled
and not context.rejected
):
yield self.action_generator.handle_push_actions_for_event(
event, context
)
@@ -1788,7 +1823,7 @@ class FederationHandler(BaseHandler):
# cause SynapseErrors in auth.check. We don't want to give up
# the attempt to federate altogether in such cases.
logger.warn("Rejecting %s because %s", e.event_id, err.msg)
logger.warning("Rejecting %s because %s", e.event_id, err.msg)
if e == event:
raise
@@ -1841,12 +1876,7 @@ class FederationHandler(BaseHandler):
if c and c.type == EventTypes.Create:
auth_events[(c.type, c.state_key)] = c
try:
yield self.do_auth(origin, event, context, auth_events=auth_events)
except AuthError as e:
logger.warn("[%s %s] Rejecting: %s", event.room_id, event.event_id, e.msg)
context.rejected = RejectedReason.AUTH_ERROR
context = yield self.do_auth(origin, event, context, auth_events=auth_events)
if not context.rejected:
yield self._check_for_soft_fail(event, state, backfilled)
@@ -1902,7 +1932,7 @@ class FederationHandler(BaseHandler):
# given state at the event. This should correctly handle cases
# like bans, especially with state res v2.
state_sets = yield self.store.get_state_groups(
state_sets = yield self.state_store.get_state_groups(
event.room_id, extrem_ids
)
state_sets = list(state_sets.values())
@@ -1938,7 +1968,7 @@ class FederationHandler(BaseHandler):
try:
event_auth.check(room_version, event, auth_events=current_auth_events)
except AuthError as e:
logger.warn("Soft-failing %r because %s", event, e)
logger.warning("Soft-failing %r because %s", event, e)
event.internal_metadata.soft_failed = True
@defer.inlineCallbacks
@@ -1993,7 +2023,7 @@ class FederationHandler(BaseHandler):
)
missing_events = yield filter_events_for_server(
self.store, origin, missing_events
self.storage, origin, missing_events
)
return missing_events
@@ -2015,12 +2045,12 @@ class FederationHandler(BaseHandler):
Also NB that this function adds entries to it.
Returns:
defer.Deferred[None]
defer.Deferred[EventContext]: updated context object
"""
room_version = yield self.store.get_room_version(event.room_id)
try:
yield self._update_auth_events_and_context_for_auth(
context = yield self._update_auth_events_and_context_for_auth(
origin, event, context, auth_events
)
except Exception:
@@ -2037,8 +2067,10 @@ class FederationHandler(BaseHandler):
try:
event_auth.check(room_version, event, auth_events=auth_events)
except AuthError as e:
logger.warn("Failed auth resolution for %r because %s", event, e)
raise e
logger.warning("Failed auth resolution for %r because %s", event, e)
context.rejected = RejectedReason.AUTH_ERROR
return context
@defer.inlineCallbacks
def _update_auth_events_and_context_for_auth(
@@ -2062,7 +2094,7 @@ class FederationHandler(BaseHandler):
auth_events (dict[(str, str)->synapse.events.EventBase]):
Returns:
defer.Deferred[None]
defer.Deferred[EventContext]: updated context
"""
event_auth_events = set(event.auth_event_ids())
@@ -2101,7 +2133,7 @@ class FederationHandler(BaseHandler):
# The other side isn't around or doesn't implement the
# endpoint, so lets just bail out.
logger.info("Failed to get event auth from remote: %s", e)
return
return context
seen_remotes = yield self.store.have_seen_events(
[e.event_id for e in remote_auth_chain]
@@ -2142,7 +2174,7 @@ class FederationHandler(BaseHandler):
if event.internal_metadata.is_outlier():
logger.info("Skipping auth_event fetch for outlier")
return
return context
# FIXME: Assumes we have and stored all the state for all the
# prev_events
@@ -2151,7 +2183,7 @@ class FederationHandler(BaseHandler):
)
if not different_auth:
return
return context
logger.info(
"auth_events refers to events which are not in our calculated auth "
@@ -2198,10 +2230,12 @@ class FederationHandler(BaseHandler):
auth_events.update(new_state)
yield self._update_context_for_auth_events(
context = yield self._update_context_for_auth_events(
event, context, auth_events, event_key
)
return context
@defer.inlineCallbacks
def _update_context_for_auth_events(self, event, context, auth_events, event_key):
"""Update the state_ids in an event context after auth event resolution,
@@ -2210,14 +2244,16 @@ class FederationHandler(BaseHandler):
Args:
event (Event): The event we're handling the context for
context (synapse.events.snapshot.EventContext): event context
to be updated
context (synapse.events.snapshot.EventContext): initial event context
auth_events (dict[(str, str)->str]): Events to update in the event
context.
event_key ((str, str)): (type, state_key) for the current event.
this will not be included in the current_state in the context.
Returns:
Deferred[EventContext]: new event context
"""
state_updates = {
k: a.event_id for k, a in iteritems(auth_events) if k != event_key
@@ -2234,7 +2270,7 @@ class FederationHandler(BaseHandler):
# create a new state group as a delta from the existing one.
prev_group = context.state_group
state_group = yield self.store.store_state_group(
state_group = yield self.state_store.store_state_group(
event.event_id,
event.room_id,
prev_group=prev_group,
@@ -2242,8 +2278,9 @@ class FederationHandler(BaseHandler):
current_state_ids=current_state_ids,
)
yield context.update_state(
return EventContext.with_state(
state_group=state_group,
state_group_before_event=context.state_group_before_event,
current_state_ids=current_state_ids,
prev_state_ids=prev_state_ids,
prev_group=prev_group,
@@ -2431,10 +2468,12 @@ class FederationHandler(BaseHandler):
try:
yield self.auth.check_from_context(room_version, event, context)
except AuthError as e:
logger.warn("Denying new third party invite %r because %s", event, e)
logger.warning("Denying new third party invite %r because %s", event, e)
raise e
yield self._check_signature(event, context)
# We retrieve the room member handler here so as not to cause a cyclic dependency
member_handler = self.hs.get_room_member_handler()
yield member_handler.send_membership_event(None, event, context)
else:
@@ -2487,7 +2526,7 @@ class FederationHandler(BaseHandler):
try:
yield self.auth.check_from_context(room_version, event, context)
except AuthError as e:
logger.warn("Denying third party invite %r because %s", event, e)
logger.warning("Denying third party invite %r because %s", event, e)
raise e
yield self._check_signature(event, context)
@@ -2495,6 +2534,7 @@ class FederationHandler(BaseHandler):
# though the sender isn't a local user.
event.internal_metadata.send_on_behalf_of = get_domain_from_id(event.sender)
# We retrieve the room member handler here so as not to cause a cyclic dependency
member_handler = self.hs.get_room_member_handler()
yield member_handler.send_membership_event(None, event, context)
@@ -2664,7 +2704,7 @@ class FederationHandler(BaseHandler):
backfilled=backfilled,
)
else:
max_stream_id = yield self.store.persist_events(
max_stream_id = yield self.storage.persistence.persist_events(
event_and_contexts, backfilled=backfilled
)

View File

@@ -392,7 +392,7 @@ class GroupsLocalHandler(object):
try:
user_profile = yield self.profile_handler.get_profile(user_id)
except Exception as e:
logger.warn("No profile for user %s: %s", user_id, e)
logger.warning("No profile for user %s: %s", user_id, e)
user_profile = {}
return {"state": "invite", "user_profile": user_profile}

View File

@@ -272,7 +272,7 @@ class IdentityHandler(BaseHandler):
changed = False
if e.code in (400, 404, 501):
# The remote server probably doesn't support unbinding (yet)
logger.warn("Received %d response while unbinding threepid", e.code)
logger.warning("Received %d response while unbinding threepid", e.code)
else:
logger.error("Failed to unbind threepid on identity server: %s", e)
raise SynapseError(500, "Failed to contact identity server")
@@ -403,7 +403,7 @@ class IdentityHandler(BaseHandler):
if self.hs.config.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warn(
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "
@@ -457,7 +457,7 @@ class IdentityHandler(BaseHandler):
if self.hs.config.using_identity_server_from_trusted_list:
# Warn that a deprecated config option is in use
logger.warn(
logger.warning(
'The config option "trust_identity_server_for_password_resets" '
'has been replaced by "account_threepid_delegate". '
"Please consult the sample config at docs/sample_config.yaml for "

View File

@@ -43,6 +43,8 @@ class InitialSyncHandler(BaseHandler):
self.validator = EventValidator()
self.snapshot_cache = SnapshotCache()
self._event_serializer = hs.get_event_client_serializer()
self.storage = hs.get_storage()
self.state_store = self.storage.state
def snapshot_all_rooms(
self,
@@ -126,8 +128,8 @@ class InitialSyncHandler(BaseHandler):
tags_by_room = yield self.store.get_tags_for_user(user_id)
account_data, account_data_by_room = (
yield self.store.get_account_data_for_user(user_id)
account_data, account_data_by_room = yield self.store.get_account_data_for_user(
user_id
)
public_room_ids = yield self.store.get_public_room_ids()
@@ -169,7 +171,7 @@ class InitialSyncHandler(BaseHandler):
elif event.membership == Membership.LEAVE:
room_end_token = "s%d" % (event.stream_ordering,)
deferred_room_state = run_in_background(
self.store.get_state_for_events, [event.event_id]
self.state_store.get_state_for_events, [event.event_id]
)
deferred_room_state.addCallback(
lambda states: states[event.event_id]
@@ -189,7 +191,9 @@ class InitialSyncHandler(BaseHandler):
)
).addErrback(unwrapFirstError)
messages = yield filter_events_for_client(self.store, user_id, messages)
messages = yield filter_events_for_client(
self.storage, user_id, messages
)
start_token = now_token.copy_and_replace("room_key", token)
end_token = now_token.copy_and_replace("room_key", room_end_token)
@@ -307,7 +311,7 @@ class InitialSyncHandler(BaseHandler):
def _room_initial_sync_parted(
self, user_id, room_id, pagin_config, membership, member_event_id, is_peeking
):
room_state = yield self.store.get_state_for_events([member_event_id])
room_state = yield self.state_store.get_state_for_events([member_event_id])
room_state = room_state[member_event_id]
@@ -322,7 +326,7 @@ class InitialSyncHandler(BaseHandler):
)
messages = yield filter_events_for_client(
self.store, user_id, messages, is_peeking=is_peeking
self.storage, user_id, messages, is_peeking=is_peeking
)
start_token = StreamToken.START.copy_and_replace("room_key", token)
@@ -414,7 +418,7 @@ class InitialSyncHandler(BaseHandler):
)
messages = yield filter_events_for_client(
self.store, user_id, messages, is_peeking=is_peeking
self.storage, user_id, messages, is_peeking=is_peeking
)
start_token = now_token.copy_and_replace("room_key", token)

View File

@@ -59,6 +59,8 @@ class MessageHandler(object):
self.clock = hs.get_clock()
self.state = hs.get_state_handler()
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state_store = self.storage.state
self._event_serializer = hs.get_event_client_serializer()
@defer.inlineCallbacks
@@ -74,15 +76,16 @@ class MessageHandler(object):
Raises:
SynapseError if something went wrong.
"""
membership, membership_event_id = yield self.auth.check_in_room_or_world_readable(
room_id, user_id
)
(
membership,
membership_event_id,
) = yield self.auth.check_in_room_or_world_readable(room_id, user_id)
if membership == Membership.JOIN:
data = yield self.state.get_current_state(room_id, event_type, state_key)
elif membership == Membership.LEAVE:
key = (event_type, state_key)
room_state = yield self.store.get_state_for_events(
room_state = yield self.state_store.get_state_for_events(
[membership_event_id], StateFilter.from_types([key])
)
data = room_state[membership_event_id].get(key)
@@ -135,12 +138,12 @@ class MessageHandler(object):
raise NotFoundError("Can't find event for token %s" % (at_token,))
visible_events = yield filter_events_for_client(
self.store, user_id, last_events
self.storage, user_id, last_events
)
event = last_events[0]
if visible_events:
room_state = yield self.store.get_state_for_events(
room_state = yield self.state_store.get_state_for_events(
[event.event_id], state_filter=state_filter
)
room_state = room_state[event.event_id]
@@ -151,9 +154,10 @@ class MessageHandler(object):
% (user_id, room_id, at_token),
)
else:
membership, membership_event_id = (
yield self.auth.check_in_room_or_world_readable(room_id, user_id)
)
(
membership,
membership_event_id,
) = yield self.auth.check_in_room_or_world_readable(room_id, user_id)
if membership == Membership.JOIN:
state_ids = yield self.store.get_filtered_current_state_ids(
@@ -161,7 +165,7 @@ class MessageHandler(object):
)
room_state = yield self.store.get_events(state_ids.values())
elif membership == Membership.LEAVE:
room_state = yield self.store.get_state_for_events(
room_state = yield self.state_store.get_state_for_events(
[membership_event_id], state_filter=state_filter
)
room_state = room_state[membership_event_id]
@@ -234,6 +238,7 @@ class EventCreationHandler(object):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state = hs.get_state_handler()
self.clock = hs.get_clock()
self.validator = EventValidator()
@@ -687,7 +692,7 @@ class EventCreationHandler(object):
try:
yield self.auth.check_from_context(room_version, event, context)
except AuthError as err:
logger.warn("Denying new event %r because %s", event, err)
logger.warning("Denying new event %r because %s", event, err)
raise err
# Ensure that we can round trip before trying to persist in db
@@ -868,7 +873,7 @@ class EventCreationHandler(object):
if prev_state_ids:
raise AuthError(403, "Changing the room create event is forbidden")
(event_stream_id, max_stream_id) = yield self.store.persist_event(
event_stream_id, max_stream_id = yield self.storage.persistence.persist_event(
event, context=context
)

View File

@@ -69,6 +69,8 @@ class PaginationHandler(object):
self.hs = hs
self.auth = hs.get_auth()
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state_store = self.storage.state
self.clock = hs.get_clock()
self._server_name = hs.hostname
@@ -125,7 +127,9 @@ class PaginationHandler(object):
self._purges_in_progress_by_room.add(room_id)
try:
with (yield self.pagination_lock.write(room_id)):
yield self.store.purge_history(room_id, token, delete_local_events)
yield self.storage.purge_events.purge_history(
room_id, token, delete_local_events
)
logger.info("[purge] complete")
self._purges_by_id[purge_id].status = PurgeStatus.STATUS_COMPLETE
except Exception:
@@ -168,7 +172,7 @@ class PaginationHandler(object):
if joined:
raise SynapseError(400, "Users are still joined to this room")
await self.store.purge_room(room_id)
await self.storage.purge_events.purge_room(room_id)
@defer.inlineCallbacks
def get_messages(
@@ -210,9 +214,10 @@ class PaginationHandler(object):
source_config = pagin_config.get_source_config("room")
with (yield self.pagination_lock.read(room_id)):
membership, member_event_id = yield self.auth.check_in_room_or_world_readable(
room_id, user_id
)
(
membership,
member_event_id,
) = yield self.auth.check_in_room_or_world_readable(room_id, user_id)
if source_config.direction == "b":
# if we're going backwards, we might need to backfill. This
@@ -255,7 +260,7 @@ class PaginationHandler(object):
events = event_filter.filter(events)
events = yield filter_events_for_client(
self.store, user_id, events, is_peeking=(member_event_id is None)
self.storage, user_id, events, is_peeking=(member_event_id is None)
)
if not events:
@@ -274,7 +279,7 @@ class PaginationHandler(object):
(EventTypes.Member, event.sender) for event in events
)
state_ids = yield self.store.get_state_ids_for_event(
state_ids = yield self.state_store.get_state_ids_for_event(
events[0].event_id, state_filter=state_filter
)
@@ -295,10 +300,8 @@ class PaginationHandler(object):
}
if state:
chunk["state"] = (
yield self._event_serializer.serialize_events(
state, time_now, as_client_event=as_client_event
)
chunk["state"] = yield self._event_serializer.serialize_events(
state, time_now, as_client_event=as_client_event
)
return chunk

View File

@@ -152,7 +152,7 @@ class BaseProfileHandler(BaseHandler):
by_admin (bool): Whether this change was made by an administrator.
"""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this Home Server")
raise SynapseError(400, "User is not hosted on this homeserver")
if not by_admin and target_user != requester.user:
raise AuthError(400, "Cannot set another user's displayname")
@@ -207,7 +207,7 @@ class BaseProfileHandler(BaseHandler):
"""target_user is the user whose avatar_url is to be changed;
auth_user is the user attempting to make this change."""
if not self.hs.is_mine(target_user):
raise SynapseError(400, "User is not hosted on this Home Server")
raise SynapseError(400, "User is not hosted on this homeserver")
if not by_admin and target_user != requester.user:
raise AuthError(400, "Cannot set another user's avatar_url")
@@ -231,7 +231,7 @@ class BaseProfileHandler(BaseHandler):
def on_profile_query(self, args):
user = UserID.from_string(args["user_id"])
if not self.hs.is_mine(user):
raise SynapseError(400, "User is not hosted on this Home Server")
raise SynapseError(400, "User is not hosted on this homeserver")
just_field = args.get("field", None)
@@ -275,7 +275,7 @@ class BaseProfileHandler(BaseHandler):
ratelimit=False, # Try to hide that these events aren't atomic.
)
except Exception as e:
logger.warn(
logger.warning(
"Failed to update join event for room %s - %s", room_id, str(e)
)

View File

@@ -15,8 +15,6 @@
import logging
from twisted.internet import defer
from synapse.util.async_helpers import Linearizer
from ._base import BaseHandler
@@ -32,8 +30,7 @@ class ReadMarkerHandler(BaseHandler):
self.read_marker_linearizer = Linearizer(name="read_marker")
self.notifier = hs.get_notifier()
@defer.inlineCallbacks
def received_client_read_marker(self, room_id, user_id, event_id):
async def received_client_read_marker(self, room_id, user_id, event_id):
"""Updates the read marker for a given user in a given room if the event ID given
is ahead in the stream relative to the current read marker.
@@ -41,8 +38,8 @@ class ReadMarkerHandler(BaseHandler):
the read marker has changed.
"""
with (yield self.read_marker_linearizer.queue((room_id, user_id))):
existing_read_marker = yield self.store.get_account_data_for_room_and_type(
with await self.read_marker_linearizer.queue((room_id, user_id)):
existing_read_marker = await self.store.get_account_data_for_room_and_type(
user_id, room_id, "m.fully_read"
)
@@ -50,13 +47,13 @@ class ReadMarkerHandler(BaseHandler):
if existing_read_marker:
# Only update if the new marker is ahead in the stream
should_update = yield self.store.is_event_after(
should_update = await self.store.is_event_after(
event_id, existing_read_marker["event_id"]
)
if should_update:
content = {"event_id": event_id}
max_id = yield self.store.add_account_data_to_room(
max_id = await self.store.add_account_data_to_room(
user_id, room_id, "m.fully_read", content
)
self.notifier.on_new_event("account_data_key", max_id, users=[user_id])
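The read-marker hunks above, like the receipts hunks that follow, apply the same mechanical conversion from Twisted's @defer.inlineCallbacks to native coroutines: yield becomes await, and `with (yield lock.queue(...)):` becomes `with await lock.queue(...):`. A minimal, self-contained sketch of the pattern (example names, not from the diff):

from twisted.internet import defer, task

@defer.inlineCallbacks
def old_style():
    # inlineCallbacks style: a generator that yields Deferreds
    value = yield defer.succeed(41)
    return value + 1

async def new_style():
    # native coroutine style: Deferreds are directly awaitable
    value = await defer.succeed(41)
    return value + 1

def main(reactor):
    d = old_style()
    d.addCallback(print)  # prints 42
    return defer.ensureDeferred(new_style()).addCallback(print)

# task.react(main)  # run both under a Twisted reactor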

View File

@@ -18,6 +18,7 @@ from twisted.internet import defer
from synapse.handlers._base import BaseHandler
from synapse.types import ReadReceipt, get_domain_from_id
from synapse.util.async_helpers import maybe_awaitable
logger = logging.getLogger(__name__)
@@ -36,8 +37,7 @@ class ReceiptsHandler(BaseHandler):
self.clock = self.hs.get_clock()
self.state = hs.get_state_handler()
@defer.inlineCallbacks
def _received_remote_receipt(self, origin, content):
async def _received_remote_receipt(self, origin, content):
"""Called when we receive an EDU of type m.receipt from a remote HS.
"""
receipts = []
@@ -62,17 +62,16 @@ class ReceiptsHandler(BaseHandler):
)
)
yield self._handle_new_receipts(receipts)
await self._handle_new_receipts(receipts)
@defer.inlineCallbacks
def _handle_new_receipts(self, receipts):
async def _handle_new_receipts(self, receipts):
"""Takes a list of receipts, stores them and informs the notifier.
"""
min_batch_id = None
max_batch_id = None
for receipt in receipts:
res = yield self.store.insert_receipt(
res = await self.store.insert_receipt(
receipt.room_id,
receipt.receipt_type,
receipt.user_id,
@@ -99,14 +98,15 @@ class ReceiptsHandler(BaseHandler):
self.notifier.on_new_event("receipt_key", max_batch_id, rooms=affected_room_ids)
# Note that the min here shouldn't be relied upon to be accurate.
yield self.hs.get_pusherpool().on_new_receipts(
min_batch_id, max_batch_id, affected_room_ids
await maybe_awaitable(
self.hs.get_pusherpool().on_new_receipts(
min_batch_id, max_batch_id, affected_room_ids
)
)
return True
@defer.inlineCallbacks
def received_client_receipt(self, room_id, receipt_type, user_id, event_id):
async def received_client_receipt(self, room_id, receipt_type, user_id, event_id):
"""Called when a client tells us a local user has read up to the given
event_id in the room.
"""
@@ -118,24 +118,11 @@ class ReceiptsHandler(BaseHandler):
data={"ts": int(self.clock.time_msec())},
)
is_new = yield self._handle_new_receipts([receipt])
is_new = await self._handle_new_receipts([receipt])
if not is_new:
return
yield self.federation.send_read_receipt(receipt)
@defer.inlineCallbacks
def get_receipts_for_room(self, room_id, to_key):
"""Gets all receipts for a room, upto the given key.
"""
result = yield self.store.get_linearized_receipts_for_room(
room_id, to_key=to_key
)
if not result:
return []
return result
await self.federation.send_read_receipt(receipt)
class ReceiptEventSource(object):
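One subtlety in the receipts hunk above: the pusherpool call is wrapped in maybe_awaitable, because mid-migration a callee may return a Deferred, a coroutine, or a plain value. A hedged sketch of the caller-side pattern (function and argument names assumed for illustration):

from synapse.util.async_helpers import maybe_awaitable

async def notify_pushers(pusherpool, min_id, max_id, room_ids):
    # on_new_receipts may return a Deferred or a coroutine depending on how
    # far the pusher code has been converted; maybe_awaitable normalises
    # either so this caller can simply await the result.
    await maybe_awaitable(pusherpool.on_new_receipts(min_id, max_id, room_ids))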

View File

@@ -24,7 +24,6 @@ from synapse.api.errors import (
AuthError,
Codes,
ConsentNotGivenError,
LimitExceededError,
RegistrationError,
SynapseError,
)
@@ -168,6 +167,7 @@ class RegistrationHandler(BaseHandler):
Raises:
RegistrationError if there was a problem registering.
"""
yield self.check_registration_ratelimit(address)
yield self.auth.check_auth_blocking(threepid=threepid)
password_hash = None
@@ -217,8 +217,13 @@ class RegistrationHandler(BaseHandler):
else:
# autogen a sequential user ID
fail_count = 0
user = None
while not user:
# Fail after being unable to find a suitable ID a few times
if fail_count > 10:
raise SynapseError(500, "Unable to find a suitable guest user ID")
localpart = yield self._generate_user_id()
user = UserID(localpart, self.hs.hostname)
user_id = user.to_string()
@@ -233,10 +238,14 @@ class RegistrationHandler(BaseHandler):
create_profile_with_displayname=default_display_name,
address=address,
)
# Successfully registered
break
except SynapseError:
# if user id is taken, just generate another
user = None
user_id = None
fail_count += 1
if not self.hs.config.user_consent_at_registration:
yield self._auto_join_rooms(user_id)
@@ -396,8 +405,8 @@ class RegistrationHandler(BaseHandler):
room_id = room_identifier
elif RoomAlias.is_valid(room_identifier):
room_alias = RoomAlias.from_string(room_identifier)
room_id, remote_room_hosts = (
yield room_member_handler.lookup_room_alias(room_alias)
room_id, remote_room_hosts = yield room_member_handler.lookup_room_alias(
room_alias
)
room_id = room_id.to_string()
else:
@@ -414,6 +423,29 @@ class RegistrationHandler(BaseHandler):
ratelimit=False,
)
def check_registration_ratelimit(self, address):
"""A simple helper method to check whether the registration rate limit has been hit
for a given IP address
Args:
address (str|None): the IP address used to perform the registration. If this is
None, no ratelimiting will be performed.
Raises:
LimitExceededError: If the rate limit has been exceeded.
"""
if not address:
return
time_now = self.clock.time()
self.ratelimiter.ratelimit(
address,
time_now_s=time_now,
rate_hz=self.hs.config.rc_registration.per_second,
burst_count=self.hs.config.rc_registration.burst_count,
)
def register_with_store(
self,
user_id,
@@ -446,22 +478,6 @@ class RegistrationHandler(BaseHandler):
Returns:
Deferred
"""
# Don't rate limit for app services
if appservice_id is None and address is not None:
time_now = self.clock.time()
allowed, time_allowed = self.ratelimiter.can_do_action(
address,
time_now_s=time_now,
rate_hz=self.hs.config.rc_registration.per_second,
burst_count=self.hs.config.rc_registration.burst_count,
)
if not allowed:
raise LimitExceededError(
retry_after_ms=int(1000 * (time_allowed - time_now))
)
if self.hs.config.worker_app:
return self._register_client(
user_id=user_id,
@@ -614,7 +630,7 @@ class RegistrationHandler(BaseHandler):
# And we add an email pusher for them by default, but only
# if email notifications are enabled (so people don't start
# getting mail spam where they weren't before if email
# notifs are set up on a home server)
# notifs are set up on a homeserver)
if (
self.hs.config.email_enable_notifs
and self.hs.config.email_notif_for_new_users
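The net effect of the registration hunks is that IP ratelimiting moves out of register_with_store and up to the start of registration, where check_registration_ratelimit raises LimitExceededError itself. A hedged sketch of the resulting caller shape (simplified; the helper name and flow are illustrative, not from the diff):

from twisted.internet import defer

@defer.inlineCallbacks
def register_user(handler, user_id, address=None):
    # Raises LimitExceededError if this address is over the rc_registration
    # limits; a None address (e.g. an appservice registration) skips the check.
    handler.check_registration_ratelimit(address)
    yield handler.register_with_store(user_id=user_id, address=address)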

View File

@@ -129,6 +129,7 @@ class RoomCreationHandler(BaseHandler):
old_room_id,
new_version, # args for _upgrade_room
)
return ret
@defer.inlineCallbacks
@@ -147,21 +148,22 @@ class RoomCreationHandler(BaseHandler):
# we create and auth the tombstone event before properly creating the new
# room, to check our user has perms in the old room.
tombstone_event, tombstone_context = (
yield self.event_creation_handler.create_event(
requester,
{
"type": EventTypes.Tombstone,
"state_key": "",
"room_id": old_room_id,
"sender": user_id,
"content": {
"body": "This room has been replaced",
"replacement_room": new_room_id,
},
(
tombstone_event,
tombstone_context,
) = yield self.event_creation_handler.create_event(
requester,
{
"type": EventTypes.Tombstone,
"state_key": "",
"room_id": old_room_id,
"sender": user_id,
"content": {
"body": "This room has been replaced",
"replacement_room": new_room_id,
},
token_id=requester.access_token_id,
)
},
token_id=requester.access_token_id,
)
old_room_version = yield self.store.get_room_version(old_room_id)
yield self.auth.check_from_context(
@@ -188,7 +190,12 @@ class RoomCreationHandler(BaseHandler):
requester, old_room_id, new_room_id, old_room_state
)
# and finally, shut down the PLs in the old room, and update them in the new
# Copy over user push rules, tags and migrate room directory state
yield self.room_member_handler.transfer_room_state_on_room_upgrade(
old_room_id, new_room_id
)
# finally, shut down the PLs in the old room, and update them in the new
# room.
yield self._update_upgraded_room_pls(
requester, old_room_id, new_room_id, old_room_state
@@ -822,6 +829,8 @@ class RoomContextHandler(object):
def __init__(self, hs):
self.hs = hs
self.store = hs.get_datastore()
self.storage = hs.get_storage()
self.state_store = self.storage.state
@defer.inlineCallbacks
def get_event_context(self, user, room_id, event_id, limit, event_filter):
@@ -848,7 +857,7 @@ class RoomContextHandler(object):
def filter_evts(events):
return filter_events_for_client(
self.store, user.to_string(), events, is_peeking=is_peeking
self.storage, user.to_string(), events, is_peeking=is_peeking
)
event = yield self.store.get_event(
@@ -890,7 +899,7 @@ class RoomContextHandler(object):
# first? Shouldn't we be consistent with /sync?
# https://github.com/matrix-org/matrix-doc/issues/687
state = yield self.store.get_state_for_events(
state = yield self.state_store.get_state_for_events(
[last_event_id], state_filter=state_filter
)
results["state"] = list(state[last_event_id].values())
@@ -922,7 +931,7 @@ class RoomEventSource(object):
from_token = RoomStreamToken.parse(from_key)
if from_token.topological:
logger.warn("Stream has topological part!!!! %r", from_key)
logger.warning("Stream has topological part!!!! %r", from_key)
from_key = "s%s" % (from_token.stream,)
app_service = self.store.get_app_service_by_user_id(user.to_string())

View File

@@ -203,10 +203,6 @@ class RoomMemberHandler(object):
prev_member_event = yield self.store.get_event(prev_member_event_id)
newly_joined = prev_member_event.membership != Membership.JOIN
if newly_joined:
# Copy over user state if we're joining an upgraded room
yield self.copy_user_state_if_room_upgrade(
room_id, requester.user.to_string()
)
yield self._user_joined_room(target, room_id)
elif event.membership == Membership.LEAVE:
if prev_member_event_id:
@@ -455,11 +451,6 @@ class RoomMemberHandler(object):
requester, remote_room_hosts, room_id, target, content
)
# Copy over user state if this is a join on a remote upgraded room
yield self.copy_user_state_if_room_upgrade(
room_id, requester.user.to_string()
)
return remote_join_response
elif effective_membership_state == Membership.LEAVE:
@@ -498,36 +489,81 @@ class RoomMemberHandler(object):
return res
@defer.inlineCallbacks
def copy_user_state_if_room_upgrade(self, new_room_id, user_id):
"""Copy user-specific information when they join a new room if that new room is the
result of a room upgrade
def transfer_room_state_on_room_upgrade(self, old_room_id, room_id):
"""Upon our server becoming aware of an upgraded room, either by upgrading a room
ourselves or joining one, we can transfer over information from the previous room.
Copies user state (tags/push rules) for every local user that was in the old room,
and migrates the room directory state.
Args:
new_room_id (str): The ID of the room the user is joining
user_id (str): The ID of the user
old_room_id (str): The ID of the old room
room_id (str): The ID of the new room
Returns:
Deferred
"""
# Find all local users that were in the old room and copy over each user's state
users = yield self.store.get_users_in_room(old_room_id)
yield self.copy_user_state_on_room_upgrade(old_room_id, room_id, users)
# Add new room to the room directory if the old room was there
# Remove old room from the room directory
old_room = yield self.store.get_room(old_room_id)
if old_room and old_room["is_public"]:
yield self.store.set_room_is_public(old_room_id, False)
yield self.store.set_room_is_public(room_id, True)
# Check if any groups we own contain the predecessor room
local_group_ids = yield self.store.get_local_groups_for_room(old_room_id)
for group_id in local_group_ids:
# Add the new room to those groups
yield self.store.add_room_to_group(group_id, room_id, old_room["is_public"])
# Remove the old room from those groups
yield self.store.remove_room_from_group(group_id, old_room_id)
@defer.inlineCallbacks
def copy_user_state_on_room_upgrade(self, old_room_id, new_room_id, user_ids):
"""Copy user-specific information when they join a new room when that new room is the
result of a room upgrade
Args:
old_room_id (str): The ID of upgraded room
new_room_id (str): The ID of the new room
user_ids (Iterable[str]): User IDs to copy state for
Returns:
Deferred
"""
# Check if the new room is an upgraded room
predecessor = yield self.store.get_room_predecessor(new_room_id)
if not predecessor:
return
logger.debug(
"Found predecessor for %s: %s. Copying over room tags and push " "rules",
"Copying over room tags and push rules from %s to %s for users %s",
old_room_id,
new_room_id,
predecessor,
user_ids,
)
# It is an upgraded room. Copy over old tags
yield self.copy_room_tags_and_direct_to_room(
predecessor["room_id"], new_room_id, user_id
)
# Copy over push rules
yield self.store.copy_push_rules_from_room_to_room_for_user(
predecessor["room_id"], new_room_id, user_id
)
for user_id in user_ids:
try:
# It is an upgraded room. Copy over old tags
yield self.copy_room_tags_and_direct_to_room(
old_room_id, new_room_id, user_id
)
# Copy over push rules
yield self.store.copy_push_rules_from_room_to_room_for_user(
old_room_id, new_room_id, user_id
)
except Exception:
logger.exception(
"Error copying tags and/or push rules from rooms %s to %s for user %s. "
"Skipping...",
old_room_id,
new_room_id,
user_id,
)
continue
@defer.inlineCallbacks
def send_membership_event(self, requester, event, context, ratelimit=True):
@@ -759,22 +795,25 @@ class RoomMemberHandler(object):
if room_avatar_event:
room_avatar_url = room_avatar_event.content.get("url", "")
token, public_keys, fallback_public_key, display_name = (
yield self.identity_handler.ask_id_server_for_third_party_invite(
requester=requester,
id_server=id_server,
medium=medium,
address=address,
room_id=room_id,
inviter_user_id=user.to_string(),
room_alias=canonical_room_alias,
room_avatar_url=room_avatar_url,
room_join_rules=room_join_rules,
room_name=room_name,
inviter_display_name=inviter_display_name,
inviter_avatar_url=inviter_avatar_url,
id_access_token=id_access_token,
)
(
token,
public_keys,
fallback_public_key,
display_name,
) = yield self.identity_handler.ask_id_server_for_third_party_invite(
requester=requester,
id_server=id_server,
medium=medium,
address=address,
room_id=room_id,
inviter_user_id=user.to_string(),
room_alias=canonical_room_alias,
room_avatar_url=room_avatar_url,
room_join_rules=room_join_rules,
room_name=room_name,
inviter_display_name=inviter_display_name,
inviter_avatar_url=inviter_avatar_url,
id_access_token=id_access_token,
)
yield self.event_creation_handler.create_and_send_nonmember_event(

View File

@@ -35,6 +35,8 @@ class SearchHandler(BaseHandler):
def __init__(self, hs):
super(SearchHandler, self).__init__(hs)
self._event_serializer = hs.get_event_client_serializer()
self.storage = hs.get_storage()
self.state_store = self.storage.state
@defer.inlineCallbacks
def get_old_rooms_from_upgraded_room(self, room_id):
@@ -221,7 +223,7 @@ class SearchHandler(BaseHandler):
filtered_events = search_filter.filter([r["event"] for r in results])
events = yield filter_events_for_client(
self.store, user.to_string(), filtered_events
self.storage, user.to_string(), filtered_events
)
events.sort(key=lambda e: -rank_map[e.event_id])
@@ -271,7 +273,7 @@ class SearchHandler(BaseHandler):
filtered_events = search_filter.filter([r["event"] for r in results])
events = yield filter_events_for_client(
self.store, user.to_string(), filtered_events
self.storage, user.to_string(), filtered_events
)
room_events.extend(events)
@@ -340,11 +342,11 @@ class SearchHandler(BaseHandler):
)
res["events_before"] = yield filter_events_for_client(
self.store, user.to_string(), res["events_before"]
self.storage, user.to_string(), res["events_before"]
)
res["events_after"] = yield filter_events_for_client(
self.store, user.to_string(), res["events_after"]
self.storage, user.to_string(), res["events_after"]
)
res["start"] = now_token.copy_and_replace(
@@ -372,7 +374,7 @@ class SearchHandler(BaseHandler):
[(EventTypes.Member, sender) for sender in senders]
)
state = yield self.store.get_state_for_event(
state = yield self.state_store.get_state_for_event(
last_event_id, state_filter
)
@@ -394,15 +396,11 @@ class SearchHandler(BaseHandler):
time_now = self.clock.time_msec()
for context in contexts.values():
context["events_before"] = (
yield self._event_serializer.serialize_events(
context["events_before"], time_now
)
context["events_before"] = yield self._event_serializer.serialize_events(
context["events_before"], time_now
)
context["events_after"] = (
yield self._event_serializer.serialize_events(
context["events_after"], time_now
)
context["events_after"] = yield self._event_serializer.serialize_events(
context["events_after"], time_now
)
state_results = {}

View File

@@ -108,7 +108,10 @@ class StatsHandler(StateDeltasHandler):
user_deltas = {}
# Then count deltas for total_events and total_event_bytes.
room_count, user_count = yield self.store.get_changes_room_total_events_and_bytes(
(
room_count,
user_count,
) = yield self.store.get_changes_room_total_events_and_bytes(
self.pos, max_pos
)

View File

@@ -230,6 +230,8 @@ class SyncHandler(object):
self.response_cache = ResponseCache(hs, "sync")
self.state = hs.get_state_handler()
self.auth = hs.get_auth()
self.storage = hs.get_storage()
self.state_store = self.storage.state
# ExpiringCache((User, Device)) -> LruCache(state_key => event_id)
self.lazy_loaded_members_cache = ExpiringCache(
@@ -417,7 +419,7 @@ class SyncHandler(object):
current_state_ids = frozenset(itervalues(current_state_ids))
recents = yield filter_events_for_client(
self.store,
self.storage,
sync_config.user.to_string(),
recents,
always_include_ids=current_state_ids,
@@ -470,7 +472,7 @@ class SyncHandler(object):
current_state_ids = frozenset(itervalues(current_state_ids))
loaded_recents = yield filter_events_for_client(
self.store,
self.storage,
sync_config.user.to_string(),
loaded_recents,
always_include_ids=current_state_ids,
@@ -509,7 +511,7 @@ class SyncHandler(object):
Returns:
A Deferred map from ((type, state_key)->Event)
"""
state_ids = yield self.store.get_state_ids_for_event(
state_ids = yield self.state_store.get_state_ids_for_event(
event.event_id, state_filter=state_filter
)
if event.is_state():
@@ -580,7 +582,7 @@ class SyncHandler(object):
return None
last_event = last_events[-1]
state_ids = yield self.store.get_state_ids_for_event(
state_ids = yield self.state_store.get_state_ids_for_event(
last_event.event_id,
state_filter=StateFilter.from_types(
[(EventTypes.Name, ""), (EventTypes.CanonicalAlias, "")]
@@ -757,11 +759,11 @@ class SyncHandler(object):
if full_state:
if batch:
current_state_ids = yield self.store.get_state_ids_for_event(
current_state_ids = yield self.state_store.get_state_ids_for_event(
batch.events[-1].event_id, state_filter=state_filter
)
state_ids = yield self.store.get_state_ids_for_event(
state_ids = yield self.state_store.get_state_ids_for_event(
batch.events[0].event_id, state_filter=state_filter
)
@@ -781,7 +783,7 @@ class SyncHandler(object):
)
elif batch.limited:
if batch:
state_at_timeline_start = yield self.store.get_state_ids_for_event(
state_at_timeline_start = yield self.state_store.get_state_ids_for_event(
batch.events[0].event_id, state_filter=state_filter
)
else:
@@ -810,7 +812,7 @@ class SyncHandler(object):
)
if batch:
current_state_ids = yield self.store.get_state_ids_for_event(
current_state_ids = yield self.state_store.get_state_ids_for_event(
batch.events[-1].event_id, state_filter=state_filter
)
else:
@@ -841,7 +843,7 @@ class SyncHandler(object):
# So we fish out all the member events corresponding to the
# timeline here, and then dedupe any redundant ones below.
state_ids = yield self.store.get_state_ids_for_event(
state_ids = yield self.state_store.get_state_ids_for_event(
batch.events[0].event_id,
# we only want members!
state_filter=StateFilter.from_types(
@@ -1204,10 +1206,11 @@ class SyncHandler(object):
since_token = sync_result_builder.since_token
if since_token and not sync_result_builder.full_state:
account_data, account_data_by_room = (
yield self.store.get_updated_account_data_for_user(
user_id, since_token.account_data_key
)
(
account_data,
account_data_by_room,
) = yield self.store.get_updated_account_data_for_user(
user_id, since_token.account_data_key
)
push_rules_changed = yield self.store.have_push_rules_changed_for_user(
@@ -1219,9 +1222,10 @@ class SyncHandler(object):
sync_config.user
)
else:
account_data, account_data_by_room = (
yield self.store.get_account_data_for_user(sync_config.user.to_string())
)
(
account_data,
account_data_by_room,
) = yield self.store.get_account_data_for_user(sync_config.user.to_string())
account_data["m.push_rules"] = yield self.push_rules_for_user(
sync_config.user

View File

@@ -120,7 +120,7 @@ class TypingHandler(object):
auth_user_id = auth_user.to_string()
if not self.is_mine_id(target_user_id):
raise SynapseError(400, "User is not hosted on this Home Server")
raise SynapseError(400, "User is not hosted on this homeserver")
if target_user_id != auth_user_id:
raise AuthError(400, "Cannot set another user's typing state")
@@ -150,7 +150,7 @@ class TypingHandler(object):
auth_user_id = auth_user.to_string()
if not self.is_mine_id(target_user_id):
raise SynapseError(400, "User is not hosted on this Home Server")
raise SynapseError(400, "User is not hosted on this homeserver")
if target_user_id != auth_user_id:
raise AuthError(400, "Cannot set another user's typing state")

View File

@@ -81,7 +81,7 @@ class RecaptchaAuthChecker(UserInteractiveAuthChecker):
def __init__(self, hs):
super().__init__(hs)
self._enabled = bool(hs.config.recaptcha_private_key)
self._http_client = hs.get_simple_http_client()
self._http_client = hs.get_proxied_http_client()
self._url = hs.config.recaptcha_siteverify_api
self._secret = hs.config.recaptcha_private_key

View File

@@ -45,6 +45,7 @@ from synapse.http import (
cancelled_to_request_timed_out_error,
redact_uri,
)
from synapse.http.proxyagent import ProxyAgent
from synapse.logging.context import make_deferred_yieldable
from synapse.logging.opentracing import set_tag, start_active_span, tags
from synapse.util.async_helpers import timeout_deferred
@@ -183,7 +184,15 @@ class SimpleHttpClient(object):
using HTTP in Matrix
"""
def __init__(self, hs, treq_args={}, ip_whitelist=None, ip_blacklist=None):
def __init__(
self,
hs,
treq_args={},
ip_whitelist=None,
ip_blacklist=None,
http_proxy=None,
https_proxy=None,
):
"""
Args:
hs (synapse.server.HomeServer)
@@ -192,6 +201,8 @@ class SimpleHttpClient(object):
we may not request.
ip_whitelist (netaddr.IPSet): The whitelisted IP addresses, that we can
request if it were otherwise caught in a blacklist.
http_proxy (bytes): proxy server to use for http connections. host[:port]
https_proxy (bytes): proxy server to use for https connections. host[:port]
"""
self.hs = hs
@@ -236,11 +247,13 @@ class SimpleHttpClient(object):
# The default context factory in Twisted 14.0.0 (which we require) is
# BrowserLikePolicyForHTTPS which will do regular cert validation
# 'like a browser'
self.agent = Agent(
self.agent = ProxyAgent(
self.reactor,
connectTimeout=15,
contextFactory=self.hs.get_http_client_context_factory(),
pool=pool,
http_proxy=http_proxy,
https_proxy=https_proxy,
)
if self._ip_blacklist:
@@ -535,7 +548,7 @@ class SimpleHttpClient(object):
b"Content-Length" in resp_headers
and int(resp_headers[b"Content-Length"][0]) > max_size
):
logger.warn("Requested URL is too large > %r bytes" % (self.max_size,))
logger.warning("Requested URL is too large > %r bytes" % (self.max_size,))
raise SynapseError(
502,
"Requested file is too large > %r bytes" % (self.max_size,),
@@ -543,7 +556,7 @@ class SimpleHttpClient(object):
)
if response.code > 299:
logger.warn("Got %d when downloading %s" % (response.code, url))
logger.warning("Got %d when downloading %s" % (response.code, url))
raise SynapseError(502, "Got error %d" % (response.code,), Codes.UNKNOWN)
# TODO: if our Content-Type is HTML or something, just read the first
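With this change, SimpleHttpClient routes everything through ProxyAgent, which only proxies when a proxy host is supplied. A hedged construction sketch (reading the proxies from environment variables is an assumption for illustration; the host[:port] bytes format follows the docstring above):

import os

# host[:port] as bytes, per the docstring; absent variables mean no proxying.
# (os.environb, the bytes view of the environment, is POSIX-only.)
http_proxy = os.environb.get(b"http_proxy")
https_proxy = os.environb.get(b"https_proxy")

client = SimpleHttpClient(
    hs,  # an existing synapse.server.HomeServer, assumed in scope
    http_proxy=http_proxy,
    https_proxy=https_proxy,
)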

View File

@@ -0,0 +1,195 @@
# -*- coding: utf-8 -*-
# Copyright 2019 The Matrix.org Foundation C.I.C.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import logging
from zope.interface import implementer
from twisted.internet import defer, protocol
from twisted.internet.error import ConnectError
from twisted.internet.interfaces import IStreamClientEndpoint
from twisted.internet.protocol import connectionDone
from twisted.web import http
logger = logging.getLogger(__name__)
class ProxyConnectError(ConnectError):
pass
@implementer(IStreamClientEndpoint)
class HTTPConnectProxyEndpoint(object):
"""An Endpoint implementation which will send a CONNECT request to an http proxy
Wraps an existing HostnameEndpoint for the proxy.
When we get the connect() request from the connection pool (via the TLS wrapper),
we'll first connect to the proxy endpoint with a ProtocolFactory which will make the
CONNECT request. Once that completes, we invoke the protocolFactory which was passed
in.
Args:
reactor: the Twisted reactor to use for the connection
proxy_endpoint (IStreamClientEndpoint): the endpoint to use to connect to the
proxy
host (bytes): hostname that we want to CONNECT to
port (int): port that we want to connect to
"""
def __init__(self, reactor, proxy_endpoint, host, port):
self._reactor = reactor
self._proxy_endpoint = proxy_endpoint
self._host = host
self._port = port
def __repr__(self):
return "<HTTPConnectProxyEndpoint %s>" % (self._proxy_endpoint,)
def connect(self, protocolFactory):
f = HTTPProxiedClientFactory(self._host, self._port, protocolFactory)
d = self._proxy_endpoint.connect(f)
# once the tcp socket connects successfully, we need to wait for the
# CONNECT to complete.
d.addCallback(lambda conn: f.on_connection)
return d
class HTTPProxiedClientFactory(protocol.ClientFactory):
"""ClientFactory wrapper that triggers an HTTP proxy CONNECT on connect.
Once the CONNECT completes, invokes the original ClientFactory to build the
HTTP Protocol object and run the rest of the connection.
Args:
dst_host (bytes): hostname that we want to CONNECT to
dst_port (int): port that we want to connect to
wrapped_factory (protocol.ClientFactory): The original Factory
"""
def __init__(self, dst_host, dst_port, wrapped_factory):
self.dst_host = dst_host
self.dst_port = dst_port
self.wrapped_factory = wrapped_factory
self.on_connection = defer.Deferred()
def startedConnecting(self, connector):
return self.wrapped_factory.startedConnecting(connector)
def buildProtocol(self, addr):
wrapped_protocol = self.wrapped_factory.buildProtocol(addr)
return HTTPConnectProtocol(
self.dst_host, self.dst_port, wrapped_protocol, self.on_connection
)
def clientConnectionFailed(self, connector, reason):
logger.debug("Connection to proxy failed: %s", reason)
if not self.on_connection.called:
self.on_connection.errback(reason)
return self.wrapped_factory.clientConnectionFailed(connector, reason)
def clientConnectionLost(self, connector, reason):
logger.debug("Connection to proxy lost: %s", reason)
if not self.on_connection.called:
self.on_connection.errback(reason)
return self.wrapped_factory.clientConnectionLost(connector, reason)
class HTTPConnectProtocol(protocol.Protocol):
"""Protocol that wraps an existing Protocol to do a CONNECT handshake at connect
Args:
host (bytes): The original HTTP(s) hostname or IPv4 or IPv6 address literal
to put in the CONNECT request
port (int): The original HTTP(s) port to put in the CONNECT request
wrapped_protocol (interfaces.IProtocol): the original protocol (probably
HTTPChannel or TLSMemoryBIOProtocol, but could be anything really)
connected_deferred (Deferred): a Deferred which will be callbacked with
wrapped_protocol when the CONNECT completes
"""
def __init__(self, host, port, wrapped_protocol, connected_deferred):
self.host = host
self.port = port
self.wrapped_protocol = wrapped_protocol
self.connected_deferred = connected_deferred
self.http_setup_client = HTTPConnectSetupClient(self.host, self.port)
self.http_setup_client.on_connected.addCallback(self.proxyConnected)
def connectionMade(self):
self.http_setup_client.makeConnection(self.transport)
def connectionLost(self, reason=connectionDone):
if self.wrapped_protocol.connected:
self.wrapped_protocol.connectionLost(reason)
self.http_setup_client.connectionLost(reason)
if not self.connected_deferred.called:
self.connected_deferred.errback(reason)
def proxyConnected(self, _):
self.wrapped_protocol.makeConnection(self.transport)
self.connected_deferred.callback(self.wrapped_protocol)
# Get any pending data from the http buf and forward it to the original protocol
buf = self.http_setup_client.clearLineBuffer()
if buf:
self.wrapped_protocol.dataReceived(buf)
def dataReceived(self, data):
# if we've set up the HTTP protocol, we can send the data there
if self.wrapped_protocol.connected:
return self.wrapped_protocol.dataReceived(data)
# otherwise, we must still be setting up the connection: send the data to the
# setup client
return self.http_setup_client.dataReceived(data)
class HTTPConnectSetupClient(http.HTTPClient):
"""HTTPClient protocol to send a CONNECT message for proxies and read the response.
Args:
host (bytes): The hostname to send in the CONNECT message
port (int): The port to send in the CONNECT message
"""
def __init__(self, host, port):
self.host = host
self.port = port
self.on_connected = defer.Deferred()
def connectionMade(self):
logger.debug("Connected to proxy, sending CONNECT")
self.sendCommand(b"CONNECT", b"%s:%d" % (self.host, self.port))
self.endHeaders()
def handleStatus(self, version, status, message):
logger.debug("Got Status: %s %s %s", status, message, version)
if status != b"200":
raise ProxyConnectError("Unexpected status on CONNECT: %s" % status)
def handleEndHeaders(self):
logger.debug("End Headers")
self.on_connected.callback(None)
def handleResponse(self, body):
pass
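Putting the new module together: HTTPConnectProxyEndpoint wraps an ordinary endpoint that points at the proxy, and the layers above it (TLS, the HTTP/1.1 client) stay unchanged. A minimal usage sketch (proxy and target hosts are examples):

from twisted.internet import reactor
from twisted.internet.endpoints import HostnameEndpoint

# Endpoint for the TCP connection to the proxy itself.
proxy_endpoint = HostnameEndpoint(reactor, b"proxy.example.com", 8080)

# connect() on the wrapper first issues "CONNECT matrix.org:443" to the
# proxy, then hands the tunnelled transport to whatever factory is passed in.
endpoint = HTTPConnectProxyEndpoint(reactor, proxy_endpoint, b"matrix.org", 443)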

View File

@@ -148,7 +148,7 @@ class SrvResolver(object):
# Try something in the cache, else re-raise
cache_entry = self._cache.get(service_name, None)
if cache_entry:
logger.warn(
logger.warning(
"Failed to resolve %r, falling back to cache. %r", service_name, e
)
return list(cache_entry)

Some files were not shown because too many files have changed in this diff.